1 /* GIMPLE store merging and byte swapping passes.
2 Copyright (C) 2009-2021 Free Software Foundation, Inc.
3 Contributed by ARM Ltd.
5 This file is part of GCC.
7 GCC is free software; you can redistribute it and/or modify it
8 under the terms of the GNU General Public License as published by
9 the Free Software Foundation; either version 3, or (at your option)
10 any later version.
12 GCC is distributed in the hope that it will be useful, but
13 WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
15 General Public License for more details.
17 You should have received a copy of the GNU General Public License
18 along with GCC; see the file COPYING3. If not see
19 <http://www.gnu.org/licenses/>. */
21 /* The purpose of the store merging pass is to combine multiple memory stores
22 of constant values, values loaded from memory, bitwise operations on those,
23 or bit-field values, to consecutive locations, into fewer wider stores.
25 For example, if we have a sequence performing four byte stores to
26 consecutive memory locations:
27 [p ] := imm1;
28 [p + 1B] := imm2;
29 [p + 2B] := imm3;
30 [p + 3B] := imm4;
31 we can transform this into a single 4-byte store if the target supports it:
32 [p] := imm1:imm2:imm3:imm4 concatenated according to endianness.
34 Or:
35 [p ] := [q ];
36 [p + 1B] := [q + 1B];
37 [p + 2B] := [q + 2B];
38 [p + 3B] := [q + 3B];
39 if there is no overlap, this can be transformed into a single 4-byte
40 load followed by a single 4-byte store.
42 Or:
43 [p ] := [q ] ^ imm1;
44 [p + 1B] := [q + 1B] ^ imm2;
45 [p + 2B] := [q + 2B] ^ imm3;
46 [p + 3B] := [q + 3B] ^ imm4;
47 if there is no overlap, this can be transformed into a single 4-byte
48 load, xored with imm1:imm2:imm3:imm4, and stored using a single 4-byte store.
50 Or:
51 [p:1 ] := imm;
52 [p:31] := val & 0x7FFFFFFF;
53 we can transform this into a single 4-byte store if the target supports it:
54 [p] := imm:(val & 0x7FFFFFFF) concatenated according to endianness.
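
   As a concrete illustration (a sketch of user-level C, not code from this
   file; the function name set_magic and the constants are made up for the
   example), the four byte stores in

     void
     set_magic (unsigned char *p)
     {
       p[0] = 0x01;
       p[1] = 0x02;
       p[2] = 0x03;
       p[3] = 0x04;
     }

   can be emitted as a single 4-byte store of the concatenated constant,
   subject to the target's alignment and cost constraints.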
56 The algorithm is applied to each basic block in three phases:
58 1) Scan through the basic block and record assignments to destinations
59 that can be expressed as a store to memory of a certain size at a certain
60 bit offset from base expressions we can handle. For bit-fields we also
61 record the surrounding bit region, i.e. bits that could be stored in
62 a read-modify-write operation when storing the bit-field. Record store
63 chains to different bases in a hash_map (m_stores) and make sure to
64 terminate such chains when appropriate (for example when the stored
65 values get used subsequently).
66 These stores can be a result of structure element initializers, array stores
67 etc. A store_immediate_info object is recorded for every such store.
68 Record as many such assignments to a single base as possible until a
69 statement that interferes with the store sequence is encountered.
70 Each store has up to 2 operands, which can be either a constant, a memory
71 load or an SSA name, from which the value to be stored can be computed.
72 At most one of the operands can be a constant. The operands are recorded
73 in store_operand_info struct.
75 2) Analyze the chains of stores recorded in phase 1) (i.e. the vector of
76 store_immediate_info objects) and coalesce contiguous stores into
77 merged_store_group objects. For bit-field stores, we don't need to
78 require the stores to be contiguous, just their surrounding bit regions
79 have to be contiguous. If the expression being stored is different
80 between adjacent stores, such as one store storing a constant and the
81 following one storing a value loaded from memory, or if the loaded memory
82 objects are not adjacent, a new merged_store_group is created as well.
84 For example, given the stores:
85 [p ] := 0;
86 [p + 1B] := 1;
87 [p + 3B] := 0;
88 [p + 4B] := 1;
89 [p + 5B] := 0;
90 [p + 6B] := 0;
91 This phase would produce two merged_store_group objects, one recording the
92 two bytes stored in the memory region [p : p + 1] and another
93 recording the four bytes stored in the memory region [p + 3 : p + 6].
95 3) The merged_store_group objects produced in phase 2) are processed
96 to generate the sequence of wider stores that set the contiguous memory
97 regions to the sequence of bytes that corresponds to them. This may emit
98 multiple stores per store group to handle contiguous stores that are not
99 of a size that is a power of 2. For example it can try to emit a 40-bit
100 store as a 32-bit store followed by an 8-bit store.
101 We try to emit as wide stores as we can while respecting STRICT_ALIGNMENT
102 or TARGET_SLOW_UNALIGNED_ACCESS settings.
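
   As an illustration tying back to the example above (not part of the
   original comment): the two merged_store_group objects from phase 2)
   could be emitted as a single 2-byte store covering [p : p + 1] and a
   single 4-byte store covering [p + 3 : p + 6], provided the target allows
   the required (possibly unaligned) accesses.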
104 Note on endianness and example:
105 Consider 2 contiguous 16-bit stores followed by 2 contiguous 8-bit stores:
106 [p ] := 0x1234;
107 [p + 2B] := 0x5678;
108 [p + 4B] := 0xab;
109 [p + 5B] := 0xcd;
111 The memory layout for little-endian (LE) and big-endian (BE) must be:
112 p |LE|BE|
113 ---------
114 0 |34|12|
115 1 |12|34|
116 2 |78|56|
117 3 |56|78|
118 4 |ab|ab|
119 5 |cd|cd|
121 To merge these into a single 48-bit merged value 'val' in phase 2)
122 on little-endian we insert stores to higher (consecutive) bitpositions
123 into the most significant bits of the merged value.
124 The final merged value would be: 0xcdab56781234
126 For big-endian we insert stores to higher bitpositions into the least
127 significant bits of the merged value.
128 The final merged value would be: 0x12345678abcd
130 Then, in phase 3), we want to emit this 48-bit value as a 32-bit store
131 followed by a 16-bit store. Again, we must consider endianness when
132 breaking down the 48-bit value 'val' computed above.
133 For little endian we emit:
134 [p] (32-bit) := 0x56781234; // val & 0x0000ffffffff;
135 [p + 4B] (16-bit) := 0xcdab; // (val & 0xffff00000000) >> 32;
137 Whereas for big-endian we emit:
138 [p] (32-bit) := 0x12345678; // (val & 0xffffffff0000) >> 16;
139 [p + 4B] (16-bit) := 0xabcd; // val & 0x00000000ffff; */
141 #include "config.h"
142 #include "system.h"
143 #include "coretypes.h"
144 #include "backend.h"
145 #include "tree.h"
146 #include "gimple.h"
147 #include "builtins.h"
148 #include "fold-const.h"
149 #include "tree-pass.h"
150 #include "ssa.h"
151 #include "gimple-pretty-print.h"
152 #include "alias.h"
153 #include "fold-const.h"
154 #include "print-tree.h"
155 #include "tree-hash-traits.h"
156 #include "gimple-iterator.h"
157 #include "gimplify.h"
158 #include "gimple-fold.h"
159 #include "stor-layout.h"
160 #include "timevar.h"
161 #include "cfganal.h"
162 #include "cfgcleanup.h"
163 #include "tree-cfg.h"
164 #include "except.h"
165 #include "tree-eh.h"
166 #include "target.h"
167 #include "gimplify-me.h"
168 #include "rtl.h"
169 #include "expr.h" /* For get_bit_range. */
170 #include "optabs-tree.h"
171 #include "dbgcnt.h"
172 #include "selftest.h"
174 /* The maximum size (in bits) of the stores this pass should generate. */
175 #define MAX_STORE_BITSIZE (BITS_PER_WORD)
176 #define MAX_STORE_BYTES (MAX_STORE_BITSIZE / BITS_PER_UNIT)
178 /* Limit to bound the number of aliasing checks for loads with the same
179 vuse as the corresponding store. */
180 #define MAX_STORE_ALIAS_CHECKS 64
182 namespace {
184 struct bswap_stat
186 /* Number of hand-written 16-bit nop / bswaps found. */
187 int found_16bit;
189 /* Number of hand-written 32-bit nop / bswaps found. */
190 int found_32bit;
192 /* Number of hand-written 64-bit nop / bswaps found. */
193 int found_64bit;
194 } nop_stats, bswap_stats;
196 /* A symbolic number structure is used to detect byte permutation and selection
197 patterns of a source. To achieve that, its field N contains an artificial
198 number consisting of BITS_PER_MARKER sized markers tracking where each
199 byte comes from in the source:
201 0 - target byte has the value 0
202 FF - target byte has an unknown value (eg. due to sign extension)
203 1..size - marker value is the byte index in the source (0 for lsb).
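
   As a worked illustration (not part of the original comment): for a 4-byte
   value loaded or copied unchanged the symbolic number is 0x04030201, with
   the least significant marker describing the least significant source byte
   and so on upwards; a fully byte-swapped value is described by 0x01020304
   instead.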
205 To detect permutations on memory sources (arrays and structures), a symbolic
206 number is also associated:
207 - a base address BASE_ADDR and an OFFSET giving the address of the source;
208 - a range which gives the difference between the highest and lowest accessed
209 memory location to make such a symbolic number;
210 - the address SRC of the source element of lowest address as a convenience
211 to easily get BASE_ADDR + offset + lowest bytepos;
212 - number of expressions N_OPS bitwise ored together to represent
213 approximate cost of the computation.
215 Note 1: the range is different from size as size reflects the size of the
216 type of the current expression. For instance, for an array char a[],
217 (short) a[0] | (short) a[3] would have a size of 2 but a range of 4 while
218 (short) a[0] | ((short) a[0] << 1) would still have a size of 2 but this
219 time a range of 1.
221 Note 2: for non-memory sources, range holds the same value as size.
223 Note 3: SRC points to the SSA_NAME in case of non-memory source. */
225 struct symbolic_number {
226 uint64_t n;
227 tree type;
228 tree base_addr;
229 tree offset;
230 poly_int64_pod bytepos;
231 tree src;
232 tree alias_set;
233 tree vuse;
234 unsigned HOST_WIDE_INT range;
235 int n_ops;
238 #define BITS_PER_MARKER 8
239 #define MARKER_MASK ((1 << BITS_PER_MARKER) - 1)
240 #define MARKER_BYTE_UNKNOWN MARKER_MASK
241 #define HEAD_MARKER(n, size) \
242 ((n) & ((uint64_t) MARKER_MASK << (((size) - 1) * BITS_PER_MARKER)))
244 /* The number which the find_bswap_or_nop_1 result should match in
245 order to have a nop. The number is masked according to the size of
246 the symbolic number before using it. */
247 #define CMPNOP (sizeof (int64_t) < 8 ? 0 : \
248 (uint64_t)0x08070605 << 32 | 0x04030201)
250 /* The number which the find_bswap_or_nop_1 result should match in
251 order to have a byte swap. The number is masked according to the
252 size of the symbolic number before using it. */
253 #define CMPXCHG (sizeof (int64_t) < 8 ? 0 : \
254 (uint64_t)0x01020304 << 32 | 0x05060708)
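/* For instance (an illustration, not part of the original source): after the
   masking/shifting done in find_bswap_or_nop_finalize for a 2-byte access,
   CMPNOP reduces to 0x0201 (identity) and CMPXCHG to 0x0102 (byte swap).  */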
256 /* Perform a SHIFT or ROTATE operation by COUNT bits on symbolic
257 number N. Return false if the requested operation is not permitted
258 on a symbolic number. */
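/* For instance (illustrative): left-shifting the 4-byte identity number
   0x04030201 by 8 bits yields 0x03020100; the zero low marker matches the
   zero low byte produced by "x << 8".  */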
260 inline bool
261 do_shift_rotate (enum tree_code code,
262 struct symbolic_number *n,
263 int count)
265 int i, size = TYPE_PRECISION (n->type) / BITS_PER_UNIT;
266 unsigned head_marker;
268 if (count < 0
269 || count >= TYPE_PRECISION (n->type)
270 || count % BITS_PER_UNIT != 0)
271 return false;
272 count = (count / BITS_PER_UNIT) * BITS_PER_MARKER;
274 /* Zero out the extra bits of N in order to avoid them being shifted
275 into the significant bits. */
276 if (size < 64 / BITS_PER_MARKER)
277 n->n &= ((uint64_t) 1 << (size * BITS_PER_MARKER)) - 1;
279 switch (code)
281 case LSHIFT_EXPR:
282 n->n <<= count;
283 break;
284 case RSHIFT_EXPR:
285 head_marker = HEAD_MARKER (n->n, size);
286 n->n >>= count;
287 /* Arithmetic shift of signed type: result is dependent on the value. */
288 if (!TYPE_UNSIGNED (n->type) && head_marker)
289 for (i = 0; i < count / BITS_PER_MARKER; i++)
290 n->n |= (uint64_t) MARKER_BYTE_UNKNOWN
291 << ((size - 1 - i) * BITS_PER_MARKER);
292 break;
293 case LROTATE_EXPR:
294 n->n = (n->n << count) | (n->n >> ((size * BITS_PER_MARKER) - count));
295 break;
296 case RROTATE_EXPR:
297 n->n = (n->n >> count) | (n->n << ((size * BITS_PER_MARKER) - count));
298 break;
299 default:
300 return false;
302 /* Zero unused bits for size. */
303 if (size < 64 / BITS_PER_MARKER)
304 n->n &= ((uint64_t) 1 << (size * BITS_PER_MARKER)) - 1;
305 return true;
308 /* Perform sanity checking for the symbolic number N and the gimple
309 statement STMT. */
311 inline bool
312 verify_symbolic_number_p (struct symbolic_number *n, gimple *stmt)
314 tree lhs_type;
316 lhs_type = TREE_TYPE (gimple_get_lhs (stmt));
318 if (TREE_CODE (lhs_type) != INTEGER_TYPE
319 && TREE_CODE (lhs_type) != ENUMERAL_TYPE)
320 return false;
322 if (TYPE_PRECISION (lhs_type) != TYPE_PRECISION (n->type))
323 return false;
325 return true;
328 /* Initialize the symbolic number N for the bswap pass from the base element
329 SRC manipulated by the bitwise OR expression. */
331 bool
332 init_symbolic_number (struct symbolic_number *n, tree src)
334 int size;
336 if (!INTEGRAL_TYPE_P (TREE_TYPE (src)) && !POINTER_TYPE_P (TREE_TYPE (src)))
337 return false;
339 n->base_addr = n->offset = n->alias_set = n->vuse = NULL_TREE;
340 n->src = src;
342 /* Set up the symbolic number N by setting each byte to a value between 1 and
343 the byte size of rhs1. The highest order byte is set to n->size and the
344 lowest order byte to 1. */
345 n->type = TREE_TYPE (src);
346 size = TYPE_PRECISION (n->type);
347 if (size % BITS_PER_UNIT != 0)
348 return false;
349 size /= BITS_PER_UNIT;
350 if (size > 64 / BITS_PER_MARKER)
351 return false;
352 n->range = size;
353 n->n = CMPNOP;
354 n->n_ops = 1;
356 if (size < 64 / BITS_PER_MARKER)
357 n->n &= ((uint64_t) 1 << (size * BITS_PER_MARKER)) - 1;
359 return true;
362 /* Check if STMT might be a byte swap or a nop from a memory source and return
363 the answer. If so, REF is that memory source and the base of the memory area
364 accessed and the offset of the access from that base are recorded in N. */
366 bool
367 find_bswap_or_nop_load (gimple *stmt, tree ref, struct symbolic_number *n)
369 /* Leaf node is an array or component ref. Memorize its base and
370 offset from base to compare to other such leaf node. */
371 poly_int64 bitsize, bitpos, bytepos;
372 machine_mode mode;
373 int unsignedp, reversep, volatilep;
374 tree offset, base_addr;
376 /* Not prepared to handle PDP endian. */
377 if (BYTES_BIG_ENDIAN != WORDS_BIG_ENDIAN)
378 return false;
380 if (!gimple_assign_load_p (stmt) || gimple_has_volatile_ops (stmt))
381 return false;
383 base_addr = get_inner_reference (ref, &bitsize, &bitpos, &offset, &mode,
384 &unsignedp, &reversep, &volatilep);
386 if (TREE_CODE (base_addr) == TARGET_MEM_REF)
387 /* Do not rewrite TARGET_MEM_REF. */
388 return false;
389 else if (TREE_CODE (base_addr) == MEM_REF)
391 poly_offset_int bit_offset = 0;
392 tree off = TREE_OPERAND (base_addr, 1);
394 if (!integer_zerop (off))
396 poly_offset_int boff = mem_ref_offset (base_addr);
397 boff <<= LOG2_BITS_PER_UNIT;
398 bit_offset += boff;
401 base_addr = TREE_OPERAND (base_addr, 0);
403 /* Avoid returning a negative bitpos as this may wreak havoc later. */
404 if (maybe_lt (bit_offset, 0))
406 tree byte_offset = wide_int_to_tree
407 (sizetype, bits_to_bytes_round_down (bit_offset));
408 bit_offset = num_trailing_bits (bit_offset);
409 if (offset)
410 offset = size_binop (PLUS_EXPR, offset, byte_offset);
411 else
412 offset = byte_offset;
415 bitpos += bit_offset.force_shwi ();
417 else
418 base_addr = build_fold_addr_expr (base_addr);
420 if (!multiple_p (bitpos, BITS_PER_UNIT, &bytepos))
421 return false;
422 if (!multiple_p (bitsize, BITS_PER_UNIT))
423 return false;
424 if (reversep)
425 return false;
427 if (!init_symbolic_number (n, ref))
428 return false;
429 n->base_addr = base_addr;
430 n->offset = offset;
431 n->bytepos = bytepos;
432 n->alias_set = reference_alias_ptr_type (ref);
433 n->vuse = gimple_vuse (stmt);
434 return true;
437 /* Compute the symbolic number N representing the result of a bitwise OR on 2
438 symbolic numbers N1 and N2 whose source statements are respectively
439 SOURCE_STMT1 and SOURCE_STMT2. */
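/* Illustrative little-endian example (not from the original source): merging
   the numbers for a zero-extended 2-byte load at offset 0 (markers 0x0201)
   and the same-width load at offset 2 shifted left by 16 bits (markers
   0x02010000) renumbers the latter's markers to 0x04030000 and ORs the two,
   giving 0x04030201 with a 4-byte range, i.e. the identity pattern of a
   single 32-bit load.  */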
441 gimple *
442 perform_symbolic_merge (gimple *source_stmt1, struct symbolic_number *n1,
443 gimple *source_stmt2, struct symbolic_number *n2,
444 struct symbolic_number *n)
446 int i, size;
447 uint64_t mask;
448 gimple *source_stmt;
449 struct symbolic_number *n_start;
451 tree rhs1 = gimple_assign_rhs1 (source_stmt1);
452 if (TREE_CODE (rhs1) == BIT_FIELD_REF
453 && TREE_CODE (TREE_OPERAND (rhs1, 0)) == SSA_NAME)
454 rhs1 = TREE_OPERAND (rhs1, 0);
455 tree rhs2 = gimple_assign_rhs1 (source_stmt2);
456 if (TREE_CODE (rhs2) == BIT_FIELD_REF
457 && TREE_CODE (TREE_OPERAND (rhs2, 0)) == SSA_NAME)
458 rhs2 = TREE_OPERAND (rhs2, 0);
460 /* Sources are different, cancel bswap if they are not memory locations with
461 the same base (array, structure, ...). */
462 if (rhs1 != rhs2)
464 uint64_t inc;
465 HOST_WIDE_INT start1, start2, start_sub, end_sub, end1, end2, end;
466 struct symbolic_number *toinc_n_ptr, *n_end;
467 basic_block bb1, bb2;
469 if (!n1->base_addr || !n2->base_addr
470 || !operand_equal_p (n1->base_addr, n2->base_addr, 0))
471 return NULL;
473 if (!n1->offset != !n2->offset
474 || (n1->offset && !operand_equal_p (n1->offset, n2->offset, 0)))
475 return NULL;
477 start1 = 0;
478 if (!(n2->bytepos - n1->bytepos).is_constant (&start2))
479 return NULL;
481 if (start1 < start2)
483 n_start = n1;
484 start_sub = start2 - start1;
486 else
488 n_start = n2;
489 start_sub = start1 - start2;
492 bb1 = gimple_bb (source_stmt1);
493 bb2 = gimple_bb (source_stmt2);
494 if (dominated_by_p (CDI_DOMINATORS, bb1, bb2))
495 source_stmt = source_stmt1;
496 else
497 source_stmt = source_stmt2;
499 /* Find the highest address at which a load is performed and
500 compute related info. */
501 end1 = start1 + (n1->range - 1);
502 end2 = start2 + (n2->range - 1);
503 if (end1 < end2)
505 end = end2;
506 end_sub = end2 - end1;
508 else
510 end = end1;
511 end_sub = end1 - end2;
513 n_end = (end2 > end1) ? n2 : n1;
515 /* Find symbolic number whose lsb is the most significant. */
516 if (BYTES_BIG_ENDIAN)
517 toinc_n_ptr = (n_end == n1) ? n2 : n1;
518 else
519 toinc_n_ptr = (n_start == n1) ? n2 : n1;
521 n->range = end - MIN (start1, start2) + 1;
523 /* Check that the range of memory covered can be represented by
524 a symbolic number. */
525 if (n->range > 64 / BITS_PER_MARKER)
526 return NULL;
528 /* Reinterpret byte marks in symbolic number holding the value of
529 bigger weight according to target endianness. */
530 inc = BYTES_BIG_ENDIAN ? end_sub : start_sub;
531 size = TYPE_PRECISION (n1->type) / BITS_PER_UNIT;
532 for (i = 0; i < size; i++, inc <<= BITS_PER_MARKER)
534 unsigned marker
535 = (toinc_n_ptr->n >> (i * BITS_PER_MARKER)) & MARKER_MASK;
536 if (marker && marker != MARKER_BYTE_UNKNOWN)
537 toinc_n_ptr->n += inc;
540 else
542 n->range = n1->range;
543 n_start = n1;
544 source_stmt = source_stmt1;
547 if (!n1->alias_set
548 || alias_ptr_types_compatible_p (n1->alias_set, n2->alias_set))
549 n->alias_set = n1->alias_set;
550 else
551 n->alias_set = ptr_type_node;
552 n->vuse = n_start->vuse;
553 n->base_addr = n_start->base_addr;
554 n->offset = n_start->offset;
555 n->src = n_start->src;
556 n->bytepos = n_start->bytepos;
557 n->type = n_start->type;
558 size = TYPE_PRECISION (n->type) / BITS_PER_UNIT;
560 for (i = 0, mask = MARKER_MASK; i < size; i++, mask <<= BITS_PER_MARKER)
562 uint64_t masked1, masked2;
564 masked1 = n1->n & mask;
565 masked2 = n2->n & mask;
566 if (masked1 && masked2 && masked1 != masked2)
567 return NULL;
569 n->n = n1->n | n2->n;
570 n->n_ops = n1->n_ops + n2->n_ops;
572 return source_stmt;
575 /* find_bswap_or_nop_1 invokes itself recursively with N and tries to perform
576 the operation given by the rhs of STMT on the result. If the operation
577 could successfully be executed, the function returns a gimple stmt whose
578 rhs's first tree is the expression of the source operand, and NULL
579 otherwise. */
581 gimple *
582 find_bswap_or_nop_1 (gimple *stmt, struct symbolic_number *n, int limit)
584 enum tree_code code;
585 tree rhs1, rhs2 = NULL;
586 gimple *rhs1_stmt, *rhs2_stmt, *source_stmt1;
587 enum gimple_rhs_class rhs_class;
589 if (!limit || !is_gimple_assign (stmt))
590 return NULL;
592 rhs1 = gimple_assign_rhs1 (stmt);
594 if (find_bswap_or_nop_load (stmt, rhs1, n))
595 return stmt;
597 /* Handle BIT_FIELD_REF. */
598 if (TREE_CODE (rhs1) == BIT_FIELD_REF
599 && TREE_CODE (TREE_OPERAND (rhs1, 0)) == SSA_NAME)
601 if (!tree_fits_uhwi_p (TREE_OPERAND (rhs1, 1))
602 || !tree_fits_uhwi_p (TREE_OPERAND (rhs1, 2)))
603 return NULL;
605 unsigned HOST_WIDE_INT bitsize = tree_to_uhwi (TREE_OPERAND (rhs1, 1));
606 unsigned HOST_WIDE_INT bitpos = tree_to_uhwi (TREE_OPERAND (rhs1, 2));
607 if (bitpos % BITS_PER_UNIT == 0
608 && bitsize % BITS_PER_UNIT == 0
609 && init_symbolic_number (n, TREE_OPERAND (rhs1, 0)))
611 /* Handle big-endian bit numbering in BIT_FIELD_REF. */
612 if (BYTES_BIG_ENDIAN)
613 bitpos = TYPE_PRECISION (n->type) - bitpos - bitsize;
615 /* Shift. */
616 if (!do_shift_rotate (RSHIFT_EXPR, n, bitpos))
617 return NULL;
619 /* Mask. */
620 uint64_t mask = 0;
621 uint64_t tmp = (1 << BITS_PER_UNIT) - 1;
622 for (unsigned i = 0; i < bitsize / BITS_PER_UNIT;
623 i++, tmp <<= BITS_PER_UNIT)
624 mask |= (uint64_t) MARKER_MASK << (i * BITS_PER_MARKER);
625 n->n &= mask;
627 /* Convert. */
628 n->type = TREE_TYPE (rhs1);
629 if (!n->base_addr)
630 n->range = TYPE_PRECISION (n->type) / BITS_PER_UNIT;
632 return verify_symbolic_number_p (n, stmt) ? stmt : NULL;
635 return NULL;
638 if (TREE_CODE (rhs1) != SSA_NAME)
639 return NULL;
641 code = gimple_assign_rhs_code (stmt);
642 rhs_class = gimple_assign_rhs_class (stmt);
643 rhs1_stmt = SSA_NAME_DEF_STMT (rhs1);
645 if (rhs_class == GIMPLE_BINARY_RHS)
646 rhs2 = gimple_assign_rhs2 (stmt);
648 /* Handle unary rhs and binary rhs with integer constants as second
649 operand. */
651 if (rhs_class == GIMPLE_UNARY_RHS
652 || (rhs_class == GIMPLE_BINARY_RHS
653 && TREE_CODE (rhs2) == INTEGER_CST))
655 if (code != BIT_AND_EXPR
656 && code != LSHIFT_EXPR
657 && code != RSHIFT_EXPR
658 && code != LROTATE_EXPR
659 && code != RROTATE_EXPR
660 && !CONVERT_EXPR_CODE_P (code))
661 return NULL;
663 source_stmt1 = find_bswap_or_nop_1 (rhs1_stmt, n, limit - 1);
665 /* If find_bswap_or_nop_1 returned NULL, STMT is a leaf node and
666 we have to initialize the symbolic number. */
667 if (!source_stmt1)
669 if (gimple_assign_load_p (stmt)
670 || !init_symbolic_number (n, rhs1))
671 return NULL;
672 source_stmt1 = stmt;
675 switch (code)
677 case BIT_AND_EXPR:
679 int i, size = TYPE_PRECISION (n->type) / BITS_PER_UNIT;
680 uint64_t val = int_cst_value (rhs2), mask = 0;
681 uint64_t tmp = (1 << BITS_PER_UNIT) - 1;
683 /* Only constants masking full bytes are allowed. */
684 for (i = 0; i < size; i++, tmp <<= BITS_PER_UNIT)
685 if ((val & tmp) != 0 && (val & tmp) != tmp)
686 return NULL;
687 else if (val & tmp)
688 mask |= (uint64_t) MARKER_MASK << (i * BITS_PER_MARKER);
690 n->n &= mask;
692 break;
693 case LSHIFT_EXPR:
694 case RSHIFT_EXPR:
695 case LROTATE_EXPR:
696 case RROTATE_EXPR:
697 if (!do_shift_rotate (code, n, (int) TREE_INT_CST_LOW (rhs2)))
698 return NULL;
699 break;
700 CASE_CONVERT:
702 int i, type_size, old_type_size;
703 tree type;
705 type = TREE_TYPE (gimple_assign_lhs (stmt));
706 type_size = TYPE_PRECISION (type);
707 if (type_size % BITS_PER_UNIT != 0)
708 return NULL;
709 type_size /= BITS_PER_UNIT;
710 if (type_size > 64 / BITS_PER_MARKER)
711 return NULL;
713 /* Sign extension: result is dependent on the value. */
714 old_type_size = TYPE_PRECISION (n->type) / BITS_PER_UNIT;
715 if (!TYPE_UNSIGNED (n->type) && type_size > old_type_size
716 && HEAD_MARKER (n->n, old_type_size))
717 for (i = 0; i < type_size - old_type_size; i++)
718 n->n |= (uint64_t) MARKER_BYTE_UNKNOWN
719 << ((type_size - 1 - i) * BITS_PER_MARKER);
721 if (type_size < 64 / BITS_PER_MARKER)
723 /* If STMT casts to a smaller type mask out the bits not
724 belonging to the target type. */
725 n->n &= ((uint64_t) 1 << (type_size * BITS_PER_MARKER)) - 1;
727 n->type = type;
728 if (!n->base_addr)
729 n->range = type_size;
731 break;
732 default:
733 return NULL;
735 return verify_symbolic_number_p (n, stmt) ? source_stmt1 : NULL;
738 /* Handle binary rhs. */
740 if (rhs_class == GIMPLE_BINARY_RHS)
742 struct symbolic_number n1, n2;
743 gimple *source_stmt, *source_stmt2;
745 if (code != BIT_IOR_EXPR)
746 return NULL;
748 if (TREE_CODE (rhs2) != SSA_NAME)
749 return NULL;
751 rhs2_stmt = SSA_NAME_DEF_STMT (rhs2);
753 switch (code)
755 case BIT_IOR_EXPR:
756 source_stmt1 = find_bswap_or_nop_1 (rhs1_stmt, &n1, limit - 1);
758 if (!source_stmt1)
759 return NULL;
761 source_stmt2 = find_bswap_or_nop_1 (rhs2_stmt, &n2, limit - 1);
763 if (!source_stmt2)
764 return NULL;
766 if (TYPE_PRECISION (n1.type) != TYPE_PRECISION (n2.type))
767 return NULL;
769 if (n1.vuse != n2.vuse)
770 return NULL;
772 source_stmt
773 = perform_symbolic_merge (source_stmt1, &n1, source_stmt2, &n2, n);
775 if (!source_stmt)
776 return NULL;
778 if (!verify_symbolic_number_p (n, stmt))
779 return NULL;
781 break;
782 default:
783 return NULL;
785 return source_stmt;
787 return NULL;
790 /* Helper for find_bswap_or_nop and try_coalesce_bswap to compute
791 *CMPXCHG, *CMPNOP and adjust *N. */
793 void
794 find_bswap_or_nop_finalize (struct symbolic_number *n, uint64_t *cmpxchg,
795 uint64_t *cmpnop)
797 unsigned rsize;
798 uint64_t tmpn, mask;
800 /* The number which the find_bswap_or_nop_1 result should match in order
801 to have a full byte swap. The number is shifted to the right
802 according to the size of the symbolic number before using it. */
803 *cmpxchg = CMPXCHG;
804 *cmpnop = CMPNOP;
806 /* Find real size of result (highest non-zero byte). */
807 if (n->base_addr)
808 for (tmpn = n->n, rsize = 0; tmpn; tmpn >>= BITS_PER_MARKER, rsize++);
809 else
810 rsize = n->range;
812 /* Zero out the bits corresponding to untouched bytes in original gimple
813 expression. */
814 if (n->range < (int) sizeof (int64_t))
816 mask = ((uint64_t) 1 << (n->range * BITS_PER_MARKER)) - 1;
817 *cmpxchg >>= (64 / BITS_PER_MARKER - n->range) * BITS_PER_MARKER;
818 *cmpnop &= mask;
821 /* Zero out the bits corresponding to unused bytes in the result of the
822 gimple expression. */
823 if (rsize < n->range)
825 if (BYTES_BIG_ENDIAN)
827 mask = ((uint64_t) 1 << (rsize * BITS_PER_MARKER)) - 1;
828 *cmpxchg &= mask;
829 *cmpnop >>= (n->range - rsize) * BITS_PER_MARKER;
831 else
833 mask = ((uint64_t) 1 << (rsize * BITS_PER_MARKER)) - 1;
834 *cmpxchg >>= (n->range - rsize) * BITS_PER_MARKER;
835 *cmpnop &= mask;
837 n->range = rsize;
840 n->range *= BITS_PER_UNIT;
843 /* Check if STMT completes a bswap implementation or a read in a given
844 endianness consisting of ORs, SHIFTs and ANDs and sets *BSWAP
845 accordingly. It also sets N to represent the kind of operations
846 performed: size of the resulting expression and whether it works on
847 a memory source, and if so alias-set and vuse. Finally, the
848 function returns a stmt whose rhs's first tree is the source
849 expression. */
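/* A typical pattern this detects (an illustrative sketch of user-level C,
   not code from this file; the name my_bswap32 and the use of <stdint.h>'s
   uint32_t are assumptions for the example):

     uint32_t
     my_bswap32 (uint32_t x)
     {
       return (x << 24)
	      | ((x & 0xff00) << 8)
	      | ((x >> 8) & 0xff00)
	      | (x >> 24);
     }

   When the target provides a bswap optab, such a computation can be
   collapsed into a single __builtin_bswap32 call, or into a plain load when
   it merely reads memory in the target endianness.  */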
851 gimple *
852 find_bswap_or_nop (gimple *stmt, struct symbolic_number *n, bool *bswap)
854 tree type_size = TYPE_SIZE_UNIT (TREE_TYPE (gimple_get_lhs (stmt)));
855 if (!tree_fits_uhwi_p (type_size))
856 return NULL;
858 /* The last parameter determines the depth search limit. It usually
859 correlates directly to the number n of bytes to be touched. We
860 increase that number by 2 * (log2(n) + 1) here in order to also
861 cover signed -> unsigned conversions of the src operand as can be seen
862 in libgcc, and for initial shift/and operation of the src operand. */
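 /* For instance (illustrative): for a 4-byte result, limit = 4 + 2 * (1 + 2) = 10.  */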
863 int limit = tree_to_uhwi (type_size);
864 limit += 2 * (1 + (int) ceil_log2 ((unsigned HOST_WIDE_INT) limit));
865 gimple *ins_stmt = find_bswap_or_nop_1 (stmt, n, limit);
867 if (!ins_stmt)
869 if (gimple_assign_rhs_code (stmt) != CONSTRUCTOR
870 || BYTES_BIG_ENDIAN != WORDS_BIG_ENDIAN)
871 return NULL;
872 unsigned HOST_WIDE_INT sz = tree_to_uhwi (type_size) * BITS_PER_UNIT;
873 if (sz != 16 && sz != 32 && sz != 64)
874 return NULL;
875 tree rhs = gimple_assign_rhs1 (stmt);
876 if (CONSTRUCTOR_NELTS (rhs) == 0)
877 return NULL;
878 tree eltype = TREE_TYPE (TREE_TYPE (rhs));
879 unsigned HOST_WIDE_INT eltsz
880 = int_size_in_bytes (eltype) * BITS_PER_UNIT;
881 if (TYPE_PRECISION (eltype) != eltsz)
882 return NULL;
883 constructor_elt *elt;
884 unsigned int i;
885 tree type = build_nonstandard_integer_type (sz, 1);
886 FOR_EACH_VEC_SAFE_ELT (CONSTRUCTOR_ELTS (rhs), i, elt)
888 if (TREE_CODE (elt->value) != SSA_NAME
889 || !INTEGRAL_TYPE_P (TREE_TYPE (elt->value)))
890 return NULL;
891 struct symbolic_number n1;
892 gimple *source_stmt
893 = find_bswap_or_nop_1 (SSA_NAME_DEF_STMT (elt->value), &n1,
894 limit - 1);
896 if (!source_stmt)
897 return NULL;
899 n1.type = type;
900 if (!n1.base_addr)
901 n1.range = sz / BITS_PER_UNIT;
903 if (i == 0)
905 ins_stmt = source_stmt;
906 *n = n1;
908 else
910 if (n->vuse != n1.vuse)
911 return NULL;
913 struct symbolic_number n0 = *n;
915 if (!BYTES_BIG_ENDIAN)
917 if (!do_shift_rotate (LSHIFT_EXPR, &n1, i * eltsz))
918 return NULL;
920 else if (!do_shift_rotate (LSHIFT_EXPR, &n0, eltsz))
921 return NULL;
922 ins_stmt
923 = perform_symbolic_merge (ins_stmt, &n0, source_stmt, &n1, n);
925 if (!ins_stmt)
926 return NULL;
931 uint64_t cmpxchg, cmpnop;
932 find_bswap_or_nop_finalize (n, &cmpxchg, &cmpnop);
934 /* A complete byte swap should make the symbolic number start with
935 the largest digit in the highest order byte. An unchanged symbolic
936 number indicates a read with the same endianness as the target architecture. */
937 if (n->n == cmpnop)
938 *bswap = false;
939 else if (n->n == cmpxchg)
940 *bswap = true;
941 else
942 return NULL;
944 /* Useless bit manipulation performed by code. */
945 if (!n->base_addr && n->n == cmpnop && n->n_ops == 1)
946 return NULL;
948 return ins_stmt;
951 const pass_data pass_data_optimize_bswap =
953 GIMPLE_PASS, /* type */
954 "bswap", /* name */
955 OPTGROUP_NONE, /* optinfo_flags */
956 TV_NONE, /* tv_id */
957 PROP_ssa, /* properties_required */
958 0, /* properties_provided */
959 0, /* properties_destroyed */
960 0, /* todo_flags_start */
961 0, /* todo_flags_finish */
964 class pass_optimize_bswap : public gimple_opt_pass
966 public:
967 pass_optimize_bswap (gcc::context *ctxt)
968 : gimple_opt_pass (pass_data_optimize_bswap, ctxt)
971 /* opt_pass methods: */
972 virtual bool gate (function *)
974 return flag_expensive_optimizations && optimize && BITS_PER_UNIT == 8;
977 virtual unsigned int execute (function *);
979 }; // class pass_optimize_bswap
981 /* Helper function for bswap_replace. Build VIEW_CONVERT_EXPR from
982 VAL to TYPE. If VAL has different type size, emit a NOP_EXPR cast
983 first. */
985 static tree
986 bswap_view_convert (gimple_stmt_iterator *gsi, tree type, tree val)
988 gcc_assert (INTEGRAL_TYPE_P (TREE_TYPE (val))
989 || POINTER_TYPE_P (TREE_TYPE (val)));
990 if (TYPE_SIZE (type) != TYPE_SIZE (TREE_TYPE (val)))
992 HOST_WIDE_INT prec = TREE_INT_CST_LOW (TYPE_SIZE (type));
993 if (POINTER_TYPE_P (TREE_TYPE (val)))
995 gimple *g
996 = gimple_build_assign (make_ssa_name (pointer_sized_int_node),
997 NOP_EXPR, val);
998 gsi_insert_before (gsi, g, GSI_SAME_STMT);
999 val = gimple_assign_lhs (g);
1001 tree itype = build_nonstandard_integer_type (prec, 1);
1002 gimple *g = gimple_build_assign (make_ssa_name (itype), NOP_EXPR, val);
1003 gsi_insert_before (gsi, g, GSI_SAME_STMT);
1004 val = gimple_assign_lhs (g);
1006 return build1 (VIEW_CONVERT_EXPR, type, val);
1009 /* Perform the bswap optimization: replace the expression computed in the rhs
1010 of gsi_stmt (GSI) (or if NULL add instead of replace) by an equivalent
1011 bswap, load or load + bswap expression.
1012 Which of these alternatives replaces the rhs is given by N->base_addr (non
1013 null if a load is needed) and BSWAP. The type, VUSE and alias set of the
1014 load to perform are also given in N while the builtin bswap call is given
1015 in FNDECL. Finally, if a load is involved, INS_STMT refers to one of the
1016 load statements involved to construct the rhs in gsi_stmt (GSI) and
1017 N->range gives the size of the rhs expression for maintaining some
1018 statistics.
1020 Note that if the replacement involves a load and if gsi_stmt (GSI) is
1021 non-NULL, that stmt is moved just after INS_STMT to do the load with the
1022 same VUSE, which can lead to gsi_stmt (GSI) changing basic block. */
1024 tree
1025 bswap_replace (gimple_stmt_iterator gsi, gimple *ins_stmt, tree fndecl,
1026 tree bswap_type, tree load_type, struct symbolic_number *n,
1027 bool bswap)
1029 tree src, tmp, tgt = NULL_TREE;
1030 gimple *bswap_stmt;
1031 tree_code conv_code = NOP_EXPR;
1033 gimple *cur_stmt = gsi_stmt (gsi);
1034 src = n->src;
1035 if (cur_stmt)
1037 tgt = gimple_assign_lhs (cur_stmt);
1038 if (gimple_assign_rhs_code (cur_stmt) == CONSTRUCTOR
1039 && tgt
1040 && VECTOR_TYPE_P (TREE_TYPE (tgt)))
1041 conv_code = VIEW_CONVERT_EXPR;
1044 /* Need to load the value from memory first. */
1045 if (n->base_addr)
1047 gimple_stmt_iterator gsi_ins = gsi;
1048 if (ins_stmt)
1049 gsi_ins = gsi_for_stmt (ins_stmt);
1050 tree addr_expr, addr_tmp, val_expr, val_tmp;
1051 tree load_offset_ptr, aligned_load_type;
1052 gimple *load_stmt;
1053 unsigned align = get_object_alignment (src);
1054 poly_int64 load_offset = 0;
1056 if (cur_stmt)
1058 basic_block ins_bb = gimple_bb (ins_stmt);
1059 basic_block cur_bb = gimple_bb (cur_stmt);
1060 if (!dominated_by_p (CDI_DOMINATORS, cur_bb, ins_bb))
1061 return NULL_TREE;
1063 /* Move cur_stmt just before one of the loads of the original
1064 to ensure it has the same VUSE. See PR61517 for what could
1065 go wrong. */
1066 if (gimple_bb (cur_stmt) != gimple_bb (ins_stmt))
1067 reset_flow_sensitive_info (gimple_assign_lhs (cur_stmt));
1068 gsi_move_before (&gsi, &gsi_ins);
1069 gsi = gsi_for_stmt (cur_stmt);
1071 else
1072 gsi = gsi_ins;
1074 /* Compute address to load from and cast according to the size
1075 of the load. */
1076 addr_expr = build_fold_addr_expr (src);
1077 if (is_gimple_mem_ref_addr (addr_expr))
1078 addr_tmp = unshare_expr (addr_expr);
1079 else
1081 addr_tmp = unshare_expr (n->base_addr);
1082 if (!is_gimple_mem_ref_addr (addr_tmp))
1083 addr_tmp = force_gimple_operand_gsi_1 (&gsi, addr_tmp,
1084 is_gimple_mem_ref_addr,
1085 NULL_TREE, true,
1086 GSI_SAME_STMT);
1087 load_offset = n->bytepos;
1088 if (n->offset)
1090 tree off
1091 = force_gimple_operand_gsi (&gsi, unshare_expr (n->offset),
1092 true, NULL_TREE, true,
1093 GSI_SAME_STMT);
1094 gimple *stmt
1095 = gimple_build_assign (make_ssa_name (TREE_TYPE (addr_tmp)),
1096 POINTER_PLUS_EXPR, addr_tmp, off);
1097 gsi_insert_before (&gsi, stmt, GSI_SAME_STMT);
1098 addr_tmp = gimple_assign_lhs (stmt);
1102 /* Perform the load. */
1103 aligned_load_type = load_type;
1104 if (align < TYPE_ALIGN (load_type))
1105 aligned_load_type = build_aligned_type (load_type, align);
1106 load_offset_ptr = build_int_cst (n->alias_set, load_offset);
1107 val_expr = fold_build2 (MEM_REF, aligned_load_type, addr_tmp,
1108 load_offset_ptr);
1110 if (!bswap)
1112 if (n->range == 16)
1113 nop_stats.found_16bit++;
1114 else if (n->range == 32)
1115 nop_stats.found_32bit++;
1116 else
1118 gcc_assert (n->range == 64);
1119 nop_stats.found_64bit++;
1122 /* Convert the result of load if necessary. */
1123 if (tgt && !useless_type_conversion_p (TREE_TYPE (tgt), load_type))
1125 val_tmp = make_temp_ssa_name (aligned_load_type, NULL,
1126 "load_dst");
1127 load_stmt = gimple_build_assign (val_tmp, val_expr);
1128 gimple_set_vuse (load_stmt, n->vuse);
1129 gsi_insert_before (&gsi, load_stmt, GSI_SAME_STMT);
1130 if (conv_code == VIEW_CONVERT_EXPR)
1131 val_tmp = bswap_view_convert (&gsi, TREE_TYPE (tgt), val_tmp);
1132 gimple_assign_set_rhs_with_ops (&gsi, conv_code, val_tmp);
1133 update_stmt (cur_stmt);
1135 else if (cur_stmt)
1137 gimple_assign_set_rhs_with_ops (&gsi, MEM_REF, val_expr);
1138 gimple_set_vuse (cur_stmt, n->vuse);
1139 update_stmt (cur_stmt);
1141 else
1143 tgt = make_ssa_name (load_type);
1144 cur_stmt = gimple_build_assign (tgt, MEM_REF, val_expr);
1145 gimple_set_vuse (cur_stmt, n->vuse);
1146 gsi_insert_before (&gsi, cur_stmt, GSI_SAME_STMT);
1149 if (dump_file)
1151 fprintf (dump_file,
1152 "%d bit load in target endianness found at: ",
1153 (int) n->range);
1154 print_gimple_stmt (dump_file, cur_stmt, 0);
1156 return tgt;
1158 else
1160 val_tmp = make_temp_ssa_name (aligned_load_type, NULL, "load_dst");
1161 load_stmt = gimple_build_assign (val_tmp, val_expr);
1162 gimple_set_vuse (load_stmt, n->vuse);
1163 gsi_insert_before (&gsi, load_stmt, GSI_SAME_STMT);
1165 src = val_tmp;
1167 else if (!bswap)
1169 gimple *g = NULL;
1170 if (tgt && !useless_type_conversion_p (TREE_TYPE (tgt), TREE_TYPE (src)))
1172 if (!is_gimple_val (src))
1173 return NULL_TREE;
1174 if (conv_code == VIEW_CONVERT_EXPR)
1175 src = bswap_view_convert (&gsi, TREE_TYPE (tgt), src);
1176 g = gimple_build_assign (tgt, conv_code, src);
1178 else if (cur_stmt)
1179 g = gimple_build_assign (tgt, src);
1180 else
1181 tgt = src;
1182 if (n->range == 16)
1183 nop_stats.found_16bit++;
1184 else if (n->range == 32)
1185 nop_stats.found_32bit++;
1186 else
1188 gcc_assert (n->range == 64);
1189 nop_stats.found_64bit++;
1191 if (dump_file)
1193 fprintf (dump_file,
1194 "%d bit reshuffle in target endianness found at: ",
1195 (int) n->range);
1196 if (cur_stmt)
1197 print_gimple_stmt (dump_file, cur_stmt, 0);
1198 else
1200 print_generic_expr (dump_file, tgt, TDF_NONE);
1201 fprintf (dump_file, "\n");
1204 if (cur_stmt)
1205 gsi_replace (&gsi, g, true);
1206 return tgt;
1208 else if (TREE_CODE (src) == BIT_FIELD_REF)
1209 src = TREE_OPERAND (src, 0);
1211 if (n->range == 16)
1212 bswap_stats.found_16bit++;
1213 else if (n->range == 32)
1214 bswap_stats.found_32bit++;
1215 else
1217 gcc_assert (n->range == 64);
1218 bswap_stats.found_64bit++;
1221 tmp = src;
1223 /* Convert the src expression if necessary. */
1224 if (!useless_type_conversion_p (TREE_TYPE (tmp), bswap_type))
1226 gimple *convert_stmt;
1228 tmp = make_temp_ssa_name (bswap_type, NULL, "bswapsrc");
1229 convert_stmt = gimple_build_assign (tmp, NOP_EXPR, src);
1230 gsi_insert_before (&gsi, convert_stmt, GSI_SAME_STMT);
1233 /* Canonical form for 16 bit bswap is a rotate expression. Only 16-bit values
1234 are handled this way, as rotation of 2N-bit values by N bits is generally not
1235 equivalent to a bswap. Consider for instance 0x01020304 r>> 16 which
1236 gives 0x03040102 while a bswap for that value is 0x04030201. */
1237 if (bswap && n->range == 16)
1239 tree count = build_int_cst (NULL, BITS_PER_UNIT);
1240 src = fold_build2 (LROTATE_EXPR, bswap_type, tmp, count);
1241 bswap_stmt = gimple_build_assign (NULL, src);
1243 else
1244 bswap_stmt = gimple_build_call (fndecl, 1, tmp);
1246 if (tgt == NULL_TREE)
1247 tgt = make_ssa_name (bswap_type);
1248 tmp = tgt;
1250 /* Convert the result if necessary. */
1251 if (!useless_type_conversion_p (TREE_TYPE (tgt), bswap_type))
1253 gimple *convert_stmt;
1255 tmp = make_temp_ssa_name (bswap_type, NULL, "bswapdst");
1256 tree atmp = tmp;
1257 if (conv_code == VIEW_CONVERT_EXPR)
1258 atmp = bswap_view_convert (&gsi, TREE_TYPE (tgt), tmp);
1259 convert_stmt = gimple_build_assign (tgt, conv_code, atmp);
1260 gsi_insert_after (&gsi, convert_stmt, GSI_SAME_STMT);
1263 gimple_set_lhs (bswap_stmt, tmp);
1265 if (dump_file)
1267 fprintf (dump_file, "%d bit bswap implementation found at: ",
1268 (int) n->range);
1269 if (cur_stmt)
1270 print_gimple_stmt (dump_file, cur_stmt, 0);
1271 else
1273 print_generic_expr (dump_file, tgt, TDF_NONE);
1274 fprintf (dump_file, "\n");
1278 if (cur_stmt)
1280 gsi_insert_after (&gsi, bswap_stmt, GSI_SAME_STMT);
1281 gsi_remove (&gsi, true);
1283 else
1284 gsi_insert_before (&gsi, bswap_stmt, GSI_SAME_STMT);
1285 return tgt;
1288 /* Try to optimize an assignment CUR_STMT with CONSTRUCTOR on the rhs
1289 using bswap optimizations. CDI_DOMINATORS need to be
1290 computed on entry. Return true if it has been optimized and
1291 TODO_update_ssa is needed. */
1293 static bool
1294 maybe_optimize_vector_constructor (gimple *cur_stmt)
1296 tree fndecl = NULL_TREE, bswap_type = NULL_TREE, load_type;
1297 struct symbolic_number n;
1298 bool bswap;
1300 gcc_assert (is_gimple_assign (cur_stmt)
1301 && gimple_assign_rhs_code (cur_stmt) == CONSTRUCTOR);
1303 tree rhs = gimple_assign_rhs1 (cur_stmt);
1304 if (!VECTOR_TYPE_P (TREE_TYPE (rhs))
1305 || !INTEGRAL_TYPE_P (TREE_TYPE (TREE_TYPE (rhs)))
1306 || gimple_assign_lhs (cur_stmt) == NULL_TREE)
1307 return false;
1309 HOST_WIDE_INT sz = int_size_in_bytes (TREE_TYPE (rhs)) * BITS_PER_UNIT;
1310 switch (sz)
1312 case 16:
1313 load_type = bswap_type = uint16_type_node;
1314 break;
1315 case 32:
1316 if (builtin_decl_explicit_p (BUILT_IN_BSWAP32)
1317 && optab_handler (bswap_optab, SImode) != CODE_FOR_nothing)
1319 load_type = uint32_type_node;
1320 fndecl = builtin_decl_explicit (BUILT_IN_BSWAP32);
1321 bswap_type = TREE_VALUE (TYPE_ARG_TYPES (TREE_TYPE (fndecl)));
1323 else
1324 return false;
1325 break;
1326 case 64:
1327 if (builtin_decl_explicit_p (BUILT_IN_BSWAP64)
1328 && (optab_handler (bswap_optab, DImode) != CODE_FOR_nothing
1329 || (word_mode == SImode
1330 && builtin_decl_explicit_p (BUILT_IN_BSWAP32)
1331 && optab_handler (bswap_optab, SImode) != CODE_FOR_nothing)))
1333 load_type = uint64_type_node;
1334 fndecl = builtin_decl_explicit (BUILT_IN_BSWAP64);
1335 bswap_type = TREE_VALUE (TYPE_ARG_TYPES (TREE_TYPE (fndecl)));
1337 else
1338 return false;
1339 break;
1340 default:
1341 return false;
1344 gimple *ins_stmt = find_bswap_or_nop (cur_stmt, &n, &bswap);
1345 if (!ins_stmt || n.range != (unsigned HOST_WIDE_INT) sz)
1346 return false;
1348 if (bswap && !fndecl && n.range != 16)
1349 return false;
1351 memset (&nop_stats, 0, sizeof (nop_stats));
1352 memset (&bswap_stats, 0, sizeof (bswap_stats));
1353 return bswap_replace (gsi_for_stmt (cur_stmt), ins_stmt, fndecl,
1354 bswap_type, load_type, &n, bswap) != NULL_TREE;
1357 /* Find manual byte swap implementations as well as loads in a given
1358 endianness. Byte swaps are turned into a bswap builtin invocation
1359 while endian loads are converted to a bswap builtin invocation or a
1360 simple load according to the target endianness. */
1362 unsigned int
1363 pass_optimize_bswap::execute (function *fun)
1365 basic_block bb;
1366 bool bswap32_p, bswap64_p;
1367 bool changed = false;
1368 tree bswap32_type = NULL_TREE, bswap64_type = NULL_TREE;
1370 bswap32_p = (builtin_decl_explicit_p (BUILT_IN_BSWAP32)
1371 && optab_handler (bswap_optab, SImode) != CODE_FOR_nothing);
1372 bswap64_p = (builtin_decl_explicit_p (BUILT_IN_BSWAP64)
1373 && (optab_handler (bswap_optab, DImode) != CODE_FOR_nothing
1374 || (bswap32_p && word_mode == SImode)));
1376 /* Determine the argument type of the builtins. The code later on
1377 assumes that the return and argument type are the same. */
1378 if (bswap32_p)
1380 tree fndecl = builtin_decl_explicit (BUILT_IN_BSWAP32);
1381 bswap32_type = TREE_VALUE (TYPE_ARG_TYPES (TREE_TYPE (fndecl)));
1384 if (bswap64_p)
1386 tree fndecl = builtin_decl_explicit (BUILT_IN_BSWAP64);
1387 bswap64_type = TREE_VALUE (TYPE_ARG_TYPES (TREE_TYPE (fndecl)));
1390 memset (&nop_stats, 0, sizeof (nop_stats));
1391 memset (&bswap_stats, 0, sizeof (bswap_stats));
1392 calculate_dominance_info (CDI_DOMINATORS);
1394 FOR_EACH_BB_FN (bb, fun)
1396 gimple_stmt_iterator gsi;
1398 /* We do a reverse scan for bswap patterns to make sure we get the
1399 widest match. As bswap pattern matching doesn't handle previously
1400 inserted smaller bswap replacements as sub-patterns, the wider
1401 variant wouldn't be detected. */
1402 for (gsi = gsi_last_bb (bb); !gsi_end_p (gsi);)
1404 gimple *ins_stmt, *cur_stmt = gsi_stmt (gsi);
1405 tree fndecl = NULL_TREE, bswap_type = NULL_TREE, load_type;
1406 enum tree_code code;
1407 struct symbolic_number n;
1408 bool bswap;
1410 /* This gsi_prev (&gsi) is not part of the for loop because cur_stmt
1411 might be moved to a different basic block by bswap_replace and gsi
1412 must not point to it if that's the case. Moving the gsi_prev
1413 there makes sure that gsi points to the statement previous to
1414 cur_stmt while still making sure that all statements are
1415 considered in this basic block. */
1416 gsi_prev (&gsi);
1418 if (!is_gimple_assign (cur_stmt))
1419 continue;
1421 code = gimple_assign_rhs_code (cur_stmt);
1422 switch (code)
1424 case LROTATE_EXPR:
1425 case RROTATE_EXPR:
1426 if (!tree_fits_uhwi_p (gimple_assign_rhs2 (cur_stmt))
1427 || tree_to_uhwi (gimple_assign_rhs2 (cur_stmt))
1428 % BITS_PER_UNIT)
1429 continue;
1430 /* Fall through. */
1431 case BIT_IOR_EXPR:
1432 break;
1433 case CONSTRUCTOR:
1435 tree rhs = gimple_assign_rhs1 (cur_stmt);
1436 if (VECTOR_TYPE_P (TREE_TYPE (rhs))
1437 && INTEGRAL_TYPE_P (TREE_TYPE (TREE_TYPE (rhs))))
1438 break;
1440 continue;
1441 default:
1442 continue;
1445 ins_stmt = find_bswap_or_nop (cur_stmt, &n, &bswap);
1447 if (!ins_stmt)
1448 continue;
1450 switch (n.range)
1452 case 16:
1453 /* Already in canonical form, nothing to do. */
1454 if (code == LROTATE_EXPR || code == RROTATE_EXPR)
1455 continue;
1456 load_type = bswap_type = uint16_type_node;
1457 break;
1458 case 32:
1459 load_type = uint32_type_node;
1460 if (bswap32_p)
1462 fndecl = builtin_decl_explicit (BUILT_IN_BSWAP32);
1463 bswap_type = bswap32_type;
1465 break;
1466 case 64:
1467 load_type = uint64_type_node;
1468 if (bswap64_p)
1470 fndecl = builtin_decl_explicit (BUILT_IN_BSWAP64);
1471 bswap_type = bswap64_type;
1473 break;
1474 default:
1475 continue;
1478 if (bswap && !fndecl && n.range != 16)
1479 continue;
1481 if (bswap_replace (gsi_for_stmt (cur_stmt), ins_stmt, fndecl,
1482 bswap_type, load_type, &n, bswap))
1483 changed = true;
1487 statistics_counter_event (fun, "16-bit nop implementations found",
1488 nop_stats.found_16bit);
1489 statistics_counter_event (fun, "32-bit nop implementations found",
1490 nop_stats.found_32bit);
1491 statistics_counter_event (fun, "64-bit nop implementations found",
1492 nop_stats.found_64bit);
1493 statistics_counter_event (fun, "16-bit bswap implementations found",
1494 bswap_stats.found_16bit);
1495 statistics_counter_event (fun, "32-bit bswap implementations found",
1496 bswap_stats.found_32bit);
1497 statistics_counter_event (fun, "64-bit bswap implementations found",
1498 bswap_stats.found_64bit);
1500 return (changed ? TODO_update_ssa : 0);
1503 } // anon namespace
1505 gimple_opt_pass *
1506 make_pass_optimize_bswap (gcc::context *ctxt)
1508 return new pass_optimize_bswap (ctxt);
1511 namespace {
1513 /* Struct recording one operand for the store, which is either a constant,
1514 then VAL represents the constant and all the other fields are zero, or
1515 a memory load, then VAL represents the reference, BASE_ADDR is non-NULL
1516 and the other fields also reflect the memory load, or an SSA name, then
1517 VAL represents the SSA name and all the other fields are zero. */
1519 class store_operand_info
1521 public:
1522 tree val;
1523 tree base_addr;
1524 poly_uint64 bitsize;
1525 poly_uint64 bitpos;
1526 poly_uint64 bitregion_start;
1527 poly_uint64 bitregion_end;
1528 gimple *stmt;
1529 bool bit_not_p;
1530 store_operand_info ();
1533 store_operand_info::store_operand_info ()
1534 : val (NULL_TREE), base_addr (NULL_TREE), bitsize (0), bitpos (0),
1535 bitregion_start (0), bitregion_end (0), stmt (NULL), bit_not_p (false)
1539 /* Struct recording the information about a single store of an immediate
1540 to memory. These are created in the first phase and coalesced into
1541 merged_store_group objects in the second phase. */
1543 class store_immediate_info
1545 public:
1546 unsigned HOST_WIDE_INT bitsize;
1547 unsigned HOST_WIDE_INT bitpos;
1548 unsigned HOST_WIDE_INT bitregion_start;
1549 /* This is one past the last bit of the bit region. */
1550 unsigned HOST_WIDE_INT bitregion_end;
1551 gimple *stmt;
1552 unsigned int order;
1553 /* INTEGER_CST for constant store, STRING_CST for string store,
1554 MEM_REF for memory copy, BIT_*_EXPR for logical bitwise operation,
1555 BIT_INSERT_EXPR for bit insertion.
1556 LROTATE_EXPR if it can be only bswap optimized and
1557 ops are not really meaningful.
1558 NOP_EXPR if bswap optimization detected identity, ops
1559 are not meaningful. */
1560 enum tree_code rhs_code;
1561 /* Two fields for bswap optimization purposes. */
1562 struct symbolic_number n;
1563 gimple *ins_stmt;
1564 /* True if BIT_{AND,IOR,XOR}_EXPR result is inverted before storing. */
1565 bool bit_not_p;
1566 /* True if ops have been swapped and thus ops[1] represents
1567 rhs1 of BIT_{AND,IOR,XOR}_EXPR and ops[0] represents rhs2. */
1568 bool ops_swapped_p;
1569 /* The index number of the landing pad, or 0 if there is none. */
1570 int lp_nr;
1571 /* Operands. For BIT_*_EXPR rhs_code both operands are used, otherwise
1572 just the first one. */
1573 store_operand_info ops[2];
1574 store_immediate_info (unsigned HOST_WIDE_INT, unsigned HOST_WIDE_INT,
1575 unsigned HOST_WIDE_INT, unsigned HOST_WIDE_INT,
1576 gimple *, unsigned int, enum tree_code,
1577 struct symbolic_number &, gimple *, bool, int,
1578 const store_operand_info &,
1579 const store_operand_info &);
1582 store_immediate_info::store_immediate_info (unsigned HOST_WIDE_INT bs,
1583 unsigned HOST_WIDE_INT bp,
1584 unsigned HOST_WIDE_INT brs,
1585 unsigned HOST_WIDE_INT bre,
1586 gimple *st,
1587 unsigned int ord,
1588 enum tree_code rhscode,
1589 struct symbolic_number &nr,
1590 gimple *ins_stmtp,
1591 bool bitnotp,
1592 int nr2,
1593 const store_operand_info &op0r,
1594 const store_operand_info &op1r)
1595 : bitsize (bs), bitpos (bp), bitregion_start (brs), bitregion_end (bre),
1596 stmt (st), order (ord), rhs_code (rhscode), n (nr),
1597 ins_stmt (ins_stmtp), bit_not_p (bitnotp), ops_swapped_p (false),
1598 lp_nr (nr2), ops { op0r, op1r }
1602 /* Struct representing a group of stores to contiguous memory locations.
1603 These are produced by the second phase (coalescing) and consumed in the
1604 third phase that outputs the widened stores. */
1606 class merged_store_group
1608 public:
1609 unsigned HOST_WIDE_INT start;
1610 unsigned HOST_WIDE_INT width;
1611 unsigned HOST_WIDE_INT bitregion_start;
1612 unsigned HOST_WIDE_INT bitregion_end;
1613 /* The size of the allocated memory for val and mask. */
1614 unsigned HOST_WIDE_INT buf_size;
1615 unsigned HOST_WIDE_INT align_base;
1616 poly_uint64 load_align_base[2];
1618 unsigned int align;
1619 unsigned int load_align[2];
1620 unsigned int first_order;
1621 unsigned int last_order;
1622 bool bit_insertion;
1623 bool string_concatenation;
1624 bool only_constants;
1625 bool consecutive;
1626 unsigned int first_nonmergeable_order;
1627 int lp_nr;
1629 auto_vec<store_immediate_info *> stores;
1630 /* We record the first and last original statements in the sequence because
1631 we'll need their vuse/vdef and replacement position. It's easier to keep
1632 track of them separately as 'stores' is reordered by apply_stores. */
1633 gimple *last_stmt;
1634 gimple *first_stmt;
1635 unsigned char *val;
1636 unsigned char *mask;
1638 merged_store_group (store_immediate_info *);
1639 ~merged_store_group ();
1640 bool can_be_merged_into (store_immediate_info *);
1641 void merge_into (store_immediate_info *);
1642 void merge_overlapping (store_immediate_info *);
1643 bool apply_stores ();
1644 private:
1645 void do_merge (store_immediate_info *);
1648 /* Debug helper. Dump LEN elements of byte array PTR to FD in hex. */
1650 static void
1651 dump_char_array (FILE *fd, unsigned char *ptr, unsigned int len)
1653 if (!fd)
1654 return;
1656 for (unsigned int i = 0; i < len; i++)
1657 fprintf (fd, "%02x ", ptr[i]);
1658 fprintf (fd, "\n");
1661 /* Clear out LEN bits starting from bit START in the byte array
1662 PTR. This clears the bits to the *right* from START.
1663 START must be within [0, BITS_PER_UNIT) and counts starting from
1664 the least significant bit. */
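/* For instance (illustrative): clear_bit_region_be (p, 5, 3) clears bits 5, 4
   and 3 of p[0], with bit numbers counted from the least significant bit.  */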
1666 static void
1667 clear_bit_region_be (unsigned char *ptr, unsigned int start,
1668 unsigned int len)
1670 if (len == 0)
1671 return;
1672 /* Clear len bits to the right of start. */
1673 else if (len <= start + 1)
1675 unsigned char mask = (~(~0U << len));
1676 mask = mask << (start + 1U - len);
1677 ptr[0] &= ~mask;
1679 else if (start != BITS_PER_UNIT - 1)
1681 clear_bit_region_be (ptr, start, (start % BITS_PER_UNIT) + 1);
1682 clear_bit_region_be (ptr + 1, BITS_PER_UNIT - 1,
1683 len - (start % BITS_PER_UNIT) - 1);
1685 else if (start == BITS_PER_UNIT - 1
1686 && len > BITS_PER_UNIT)
1688 unsigned int nbytes = len / BITS_PER_UNIT;
1689 memset (ptr, 0, nbytes);
1690 if (len % BITS_PER_UNIT != 0)
1691 clear_bit_region_be (ptr + nbytes, BITS_PER_UNIT - 1,
1692 len % BITS_PER_UNIT);
1694 else
1695 gcc_unreachable ();
1698 /* In the byte array PTR clear the bit region starting at bit
1699 START that is LEN bits wide.
1700 For regions spanning multiple bytes do this recursively until we reach
1701 zero LEN or a region contained within a single byte. */
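/* For instance (illustrative): clear_bit_region (p, 3, 2) clears bits 3 and 4
   of p[0], counting from the least significant bit.  */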
1703 static void
1704 clear_bit_region (unsigned char *ptr, unsigned int start,
1705 unsigned int len)
1707 /* Degenerate base case. */
1708 if (len == 0)
1709 return;
1710 else if (start >= BITS_PER_UNIT)
1711 clear_bit_region (ptr + 1, start - BITS_PER_UNIT, len);
1712 /* Second base case. */
1713 else if ((start + len) <= BITS_PER_UNIT)
1715 unsigned char mask = (~0U) << (unsigned char) (BITS_PER_UNIT - len);
1716 mask >>= BITS_PER_UNIT - (start + len);
1718 ptr[0] &= ~mask;
1720 return;
1722 /* Clear most significant bits in a byte and proceed with the next byte. */
1723 else if (start != 0)
1725 clear_bit_region (ptr, start, BITS_PER_UNIT - start);
1726 clear_bit_region (ptr + 1, 0, len - (BITS_PER_UNIT - start));
1728 /* Whole bytes need to be cleared. */
1729 else if (start == 0 && len > BITS_PER_UNIT)
1731 unsigned int nbytes = len / BITS_PER_UNIT;
1732 /* We could recurse on each byte but we clear whole bytes, so a simple
1733 memset will do. */
1734 memset (ptr, '\0', nbytes);
1735 /* Clear the remaining sub-byte region if there is one. */
1736 if (len % BITS_PER_UNIT != 0)
1737 clear_bit_region (ptr + nbytes, 0, len % BITS_PER_UNIT);
1739 else
1740 gcc_unreachable ();
1743 /* Write BITLEN bits of EXPR to the byte array PTR at
1744 bit position BITPOS. PTR should contain TOTAL_BYTES elements.
1745 Return true if the operation succeeded. */
1747 static bool
1748 encode_tree_to_bitpos (tree expr, unsigned char *ptr, int bitlen, int bitpos,
1749 unsigned int total_bytes)
1751 unsigned int first_byte = bitpos / BITS_PER_UNIT;
1752 bool sub_byte_op_p = ((bitlen % BITS_PER_UNIT)
1753 || (bitpos % BITS_PER_UNIT)
1754 || !int_mode_for_size (bitlen, 0).exists ());
1755 bool empty_ctor_p
1756 = (TREE_CODE (expr) == CONSTRUCTOR
1757 && CONSTRUCTOR_NELTS (expr) == 0
1758 && TYPE_SIZE_UNIT (TREE_TYPE (expr))
1759 && tree_fits_uhwi_p (TYPE_SIZE_UNIT (TREE_TYPE (expr))));
1761 if (!sub_byte_op_p)
1763 if (first_byte >= total_bytes)
1764 return false;
1765 total_bytes -= first_byte;
1766 if (empty_ctor_p)
1768 unsigned HOST_WIDE_INT rhs_bytes
1769 = tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (expr)));
1770 if (rhs_bytes > total_bytes)
1771 return false;
1772 memset (ptr + first_byte, '\0', rhs_bytes);
1773 return true;
1775 return native_encode_expr (expr, ptr + first_byte, total_bytes) != 0;
1778 /* LITTLE-ENDIAN
1779 We are writing a non byte-sized quantity or at a position that is not
1780 at a byte boundary.
1781 |--------|--------|--------| ptr + first_byte
1783 xxx xxxxxxxx xxx< bp>
1784 |______EXPR____|
1786 First native_encode_expr EXPR into a temporary buffer and shift each
1787 byte in the buffer by 'bp' (carrying the bits over as necessary).
1788 |00000000|00xxxxxx|xxxxxxxx| << bp = |000xxxxx|xxxxxxxx|xxx00000|
1789 <------bitlen---->< bp>
1790 Then we clear the destination bits:
1791 |---00000|00000000|000-----| ptr + first_byte
1792 <-------bitlen--->< bp>
1794 Finally we ORR the bytes of the shifted EXPR into the cleared region:
1795 |---xxxxx||xxxxxxxx||xxx-----| ptr + first_byte.
1797 BIG-ENDIAN
1798 We are writing a non byte-sized quantity or at a position that is not
1799 at a byte boundary.
1800 ptr + first_byte |--------|--------|--------|
1802 <bp >xxx xxxxxxxx xxx
1803 |_____EXPR_____|
1805 First native_encode_expr EXPR into a temporary buffer and shift each
1806 byte in the buffer to the right (carrying the bits over as necessary).
1807 We shift by as much as needed to align the most significant bit of EXPR
1808 with bitpos:
1809 |00xxxxxx|xxxxxxxx| >> 3 = |00000xxx|xxxxxxxx|xxxxx000|
1810 <---bitlen----> <bp ><-----bitlen----->
1811 Then we clear the destination bits:
1812 ptr + first_byte |-----000||00000000||00000---|
1813 <bp ><-------bitlen----->
1815 Finally we ORR the bytes of the shifted EXPR into the cleared region:
1816 ptr + first_byte |---xxxxx||xxxxxxxx||xxx-----|.
1817 The awkwardness comes from the fact that bitpos is counted from the
1818 most significant bit of a byte. */
1820 /* We must be dealing with fixed-size data at this point, since the
1821 total size is also fixed. */
1822 unsigned int byte_size;
1823 if (empty_ctor_p)
1825 unsigned HOST_WIDE_INT rhs_bytes
1826 = tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (expr)));
1827 if (rhs_bytes > total_bytes)
1828 return false;
1829 byte_size = rhs_bytes;
1831 else
1833 fixed_size_mode mode
1834 = as_a <fixed_size_mode> (TYPE_MODE (TREE_TYPE (expr)));
1835 byte_size
1836 = mode == BLKmode
1837 ? tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (expr)))
1838 : GET_MODE_SIZE (mode);
1840 /* Allocate an extra byte so that we have space to shift into. */
1841 byte_size++;
1842 unsigned char *tmpbuf = XALLOCAVEC (unsigned char, byte_size);
1843 memset (tmpbuf, '\0', byte_size);
1844 /* The store detection code should only have allowed constants that are
1845 accepted by native_encode_expr or empty ctors. */
1846 if (!empty_ctor_p
1847 && native_encode_expr (expr, tmpbuf, byte_size - 1) == 0)
1848 gcc_unreachable ();
1850 /* The native_encode_expr machinery uses TYPE_MODE to determine how many
1851 bytes to write. This means it can write more than
1852 ROUND_UP (bitlen, BITS_PER_UNIT) / BITS_PER_UNIT bytes (for example
1853 write 8 bytes for a bitlen of 40). Skip the bytes that are not within
1854 bitlen and zero out the bits that are not relevant as well (that may
1855 contain a sign bit due to sign-extension). */
1856 unsigned int padding
1857 = byte_size - ROUND_UP (bitlen, BITS_PER_UNIT) / BITS_PER_UNIT - 1;
1858 /* On big-endian the padding is at the 'front' so just skip the initial
1859 bytes. */
1860 if (BYTES_BIG_ENDIAN)
1861 tmpbuf += padding;
1863 byte_size -= padding;
1865 if (bitlen % BITS_PER_UNIT != 0)
1867 if (BYTES_BIG_ENDIAN)
1868 clear_bit_region_be (tmpbuf, BITS_PER_UNIT - 1,
1869 BITS_PER_UNIT - (bitlen % BITS_PER_UNIT));
1870 else
1871 clear_bit_region (tmpbuf, bitlen,
1872 byte_size * BITS_PER_UNIT - bitlen);
1874 /* Left shifting relies on the last byte being clear if bitlen is
1875 a multiple of BITS_PER_UNIT, which might not be the case if
1876 there are padding bytes. */
1877 else if (!BYTES_BIG_ENDIAN)
1878 tmpbuf[byte_size - 1] = '\0';
1880 /* Clear the bit region in PTR where the bits from TMPBUF will be
1881 inserted into. */
1882 if (BYTES_BIG_ENDIAN)
1883 clear_bit_region_be (ptr + first_byte,
1884 BITS_PER_UNIT - 1 - (bitpos % BITS_PER_UNIT), bitlen);
1885 else
1886 clear_bit_region (ptr + first_byte, bitpos % BITS_PER_UNIT, bitlen);
1888 int shift_amnt;
1889 int bitlen_mod = bitlen % BITS_PER_UNIT;
1890 int bitpos_mod = bitpos % BITS_PER_UNIT;
1892 bool skip_byte = false;
1893 if (BYTES_BIG_ENDIAN)
1895 /* BITPOS and BITLEN are exactly aligned and no shifting
1896 is necessary. */
1897 if (bitpos_mod + bitlen_mod == BITS_PER_UNIT
1898 || (bitpos_mod == 0 && bitlen_mod == 0))
1899 shift_amnt = 0;
1900 /* |. . . . . . . .|
1901 <bp > <blen >.
1902 We always shift right for BYTES_BIG_ENDIAN so shift the beginning
1903 of the value until it aligns with 'bp' in the next byte over. */
1904 else if (bitpos_mod + bitlen_mod < BITS_PER_UNIT)
1906 shift_amnt = bitlen_mod + bitpos_mod;
1907 skip_byte = bitlen_mod != 0;
1909 /* |. . . . . . . .|
1910 <----bp--->
1911 <---blen---->.
1912 Shift the value right within the same byte so it aligns with 'bp'. */
1913 else
1914 shift_amnt = bitlen_mod + bitpos_mod - BITS_PER_UNIT;
1916 else
1917 shift_amnt = bitpos % BITS_PER_UNIT;
1919 /* Create the shifted version of EXPR. */
1920 if (!BYTES_BIG_ENDIAN)
1922 shift_bytes_in_array_left (tmpbuf, byte_size, shift_amnt);
1923 if (shift_amnt == 0)
1924 byte_size--;
1926 else
1928 gcc_assert (BYTES_BIG_ENDIAN);
1929 shift_bytes_in_array_right (tmpbuf, byte_size, shift_amnt);
1930 /* If shifting right forced us to move into the next byte skip the now
1931 empty byte. */
1932 if (skip_byte)
1934 tmpbuf++;
1935 byte_size--;
1939 /* Insert the bits from TMPBUF. */
1940 for (unsigned int i = 0; i < byte_size; i++)
1941 ptr[first_byte + i] |= tmpbuf[i];
1943 return true;
1946 /* Sorting function for store_immediate_info objects.
1947 Sorts them by bitposition. */
1949 static int
1950 sort_by_bitpos (const void *x, const void *y)
1952 store_immediate_info *const *tmp = (store_immediate_info * const *) x;
1953 store_immediate_info *const *tmp2 = (store_immediate_info * const *) y;
1955 if ((*tmp)->bitpos < (*tmp2)->bitpos)
1956 return -1;
1957 else if ((*tmp)->bitpos > (*tmp2)->bitpos)
1958 return 1;
1959 else
1960 /* If the bit positions are the same, fall back to the order, which
1961 is guaranteed to be different. */
1962 return (*tmp)->order - (*tmp2)->order;
1965 /* Sorting function for store_immediate_info objects.
1966 Sorts them by the order field. */
1968 static int
1969 sort_by_order (const void *x, const void *y)
1971 store_immediate_info *const *tmp = (store_immediate_info * const *) x;
1972 store_immediate_info *const *tmp2 = (store_immediate_info * const *) y;
1974 if ((*tmp)->order < (*tmp2)->order)
1975 return -1;
1976 else if ((*tmp)->order > (*tmp2)->order)
1977 return 1;
1979 gcc_unreachable ();
1982 /* Initialize a merged_store_group object from a store_immediate_info
1983 object. */
1985 merged_store_group::merged_store_group (store_immediate_info *info)
1987 start = info->bitpos;
1988 width = info->bitsize;
1989 bitregion_start = info->bitregion_start;
1990 bitregion_end = info->bitregion_end;
1991 /* VAL has memory allocated for it in apply_stores once the group
1992 width has been finalized. */
1993 val = NULL;
1994 mask = NULL;
1995 bit_insertion = info->rhs_code == BIT_INSERT_EXPR;
1996 string_concatenation = info->rhs_code == STRING_CST;
1997 only_constants = info->rhs_code == INTEGER_CST;
1998 consecutive = true;
1999 first_nonmergeable_order = ~0U;
2000 lp_nr = info->lp_nr;
2001 unsigned HOST_WIDE_INT align_bitpos = 0;
2002 get_object_alignment_1 (gimple_assign_lhs (info->stmt),
2003 &align, &align_bitpos);
2004 align_base = start - align_bitpos;
2005 for (int i = 0; i < 2; ++i)
2007 store_operand_info &op = info->ops[i];
2008 if (op.base_addr == NULL_TREE)
2010 load_align[i] = 0;
2011 load_align_base[i] = 0;
2013 else
2015 get_object_alignment_1 (op.val, &load_align[i], &align_bitpos);
2016 load_align_base[i] = op.bitpos - align_bitpos;
2019 stores.create (1);
2020 stores.safe_push (info);
2021 last_stmt = info->stmt;
2022 last_order = info->order;
2023 first_stmt = last_stmt;
2024 first_order = last_order;
2025 buf_size = 0;
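/* Illustration only: if a store starts at bit 70 of the chain and
   get_object_alignment_1 reports 32-bit alignment with an align_bitpos of 6,
   then align_base is bit 64, i.e. the nearest position in chain coordinates
   known to be 32-bit aligned; split_group later measures candidate store
   positions against this base. */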
2028 merged_store_group::~merged_store_group ()
2030 if (val)
2031 XDELETEVEC (val);
2034 /* Return true if the store described by INFO can be merged into the group. */
2036 bool
2037 merged_store_group::can_be_merged_into (store_immediate_info *info)
2039 /* Do not merge bswap patterns. */
2040 if (info->rhs_code == LROTATE_EXPR)
2041 return false;
2043 if (info->lp_nr != lp_nr)
2044 return false;
2046 /* The canonical case. */
2047 if (info->rhs_code == stores[0]->rhs_code)
2048 return true;
2050 /* BIT_INSERT_EXPR is compatible with INTEGER_CST if no STRING_CST. */
2051 if (info->rhs_code == BIT_INSERT_EXPR && stores[0]->rhs_code == INTEGER_CST)
2052 return !string_concatenation;
2054 if (stores[0]->rhs_code == BIT_INSERT_EXPR && info->rhs_code == INTEGER_CST)
2055 return !string_concatenation;
2057 /* We can turn MEM_REF into BIT_INSERT_EXPR for bit-field stores, but do it
2058 only for small regions since this can generate a lot of instructions. */
2059 if (info->rhs_code == MEM_REF
2060 && (stores[0]->rhs_code == INTEGER_CST
2061 || stores[0]->rhs_code == BIT_INSERT_EXPR)
2062 && info->bitregion_start == stores[0]->bitregion_start
2063 && info->bitregion_end == stores[0]->bitregion_end
2064 && info->bitregion_end - info->bitregion_start <= MAX_FIXED_MODE_SIZE)
2065 return !string_concatenation;
2067 if (stores[0]->rhs_code == MEM_REF
2068 && (info->rhs_code == INTEGER_CST
2069 || info->rhs_code == BIT_INSERT_EXPR)
2070 && info->bitregion_start == stores[0]->bitregion_start
2071 && info->bitregion_end == stores[0]->bitregion_end
2072 && info->bitregion_end - info->bitregion_start <= MAX_FIXED_MODE_SIZE)
2073 return !string_concatenation;
2075 /* STRING_CST is compatible with INTEGER_CST if no BIT_INSERT_EXPR. */
2076 if (info->rhs_code == STRING_CST
2077 && stores[0]->rhs_code == INTEGER_CST
2078 && stores[0]->bitsize == CHAR_BIT)
2079 return !bit_insertion;
2081 if (stores[0]->rhs_code == STRING_CST
2082 && info->rhs_code == INTEGER_CST
2083 && info->bitsize == CHAR_BIT)
2084 return !bit_insertion;
2086 return false;
2089 /* Helper method for merge_into and merge_overlapping to do
2090 the common part. */
2092 void
2093 merged_store_group::do_merge (store_immediate_info *info)
2095 bitregion_start = MIN (bitregion_start, info->bitregion_start);
2096 bitregion_end = MAX (bitregion_end, info->bitregion_end);
2098 unsigned int this_align;
2099 unsigned HOST_WIDE_INT align_bitpos = 0;
2100 get_object_alignment_1 (gimple_assign_lhs (info->stmt),
2101 &this_align, &align_bitpos);
2102 if (this_align > align)
2104 align = this_align;
2105 align_base = info->bitpos - align_bitpos;
2107 for (int i = 0; i < 2; ++i)
2109 store_operand_info &op = info->ops[i];
2110 if (!op.base_addr)
2111 continue;
2113 get_object_alignment_1 (op.val, &this_align, &align_bitpos);
2114 if (this_align > load_align[i])
2116 load_align[i] = this_align;
2117 load_align_base[i] = op.bitpos - align_bitpos;
2121 gimple *stmt = info->stmt;
2122 stores.safe_push (info);
2123 if (info->order > last_order)
2125 last_order = info->order;
2126 last_stmt = stmt;
2128 else if (info->order < first_order)
2130 first_order = info->order;
2131 first_stmt = stmt;
2134 if (info->bitpos != start + width)
2135 consecutive = false;
2137 /* We need to use extraction if there is any bit-field. */
2138 if (info->rhs_code == BIT_INSERT_EXPR)
2140 bit_insertion = true;
2141 gcc_assert (!string_concatenation);
2144 /* We want to use concatenation if there is any string. */
2145 if (info->rhs_code == STRING_CST)
2147 string_concatenation = true;
2148 gcc_assert (!bit_insertion);
2151 /* But we cannot use it if we don't have consecutive stores. */
2152 if (!consecutive)
2153 string_concatenation = false;
2155 if (info->rhs_code != INTEGER_CST)
2156 only_constants = false;
2159 /* Merge a store recorded by INFO into this merged store.
2160 The store is not overlapping with the existing recorded
2161 stores. */
2163 void
2164 merged_store_group::merge_into (store_immediate_info *info)
2166 do_merge (info);
2168 /* Make sure we're inserting in the position we think we're inserting. */
2169 gcc_assert (info->bitpos >= start + width
2170 && info->bitregion_start <= bitregion_end);
2172 width = info->bitpos + info->bitsize - start;
2175 /* Merge a store described by INFO into this merged store.
2176 INFO overlaps in some way with the current store (i.e. it's not contiguous,
2177 which is handled by merged_store_group::merge_into). */
2179 void
2180 merged_store_group::merge_overlapping (store_immediate_info *info)
2182 do_merge (info);
2184 /* If the store extends the size of the group, extend the width. */
2185 if (info->bitpos + info->bitsize > start + width)
2186 width = info->bitpos + info->bitsize - start;
2189 /* Go through all the recorded stores in this group in program order and
2190 apply their values to the VAL byte array to create the final merged
2191 value. Return true if the operation succeeded. */
2193 bool
2194 merged_store_group::apply_stores ()
2196 store_immediate_info *info;
2197 unsigned int i;
2199 /* Make sure we have more than one store in the group, otherwise we cannot
2200 merge anything. */
2201 if (bitregion_start % BITS_PER_UNIT != 0
2202 || bitregion_end % BITS_PER_UNIT != 0
2203 || stores.length () == 1)
2204 return false;
2206 buf_size = (bitregion_end - bitregion_start) / BITS_PER_UNIT;
2208 /* Really do string concatenation for large strings only. */
2209 if (buf_size <= MOVE_MAX)
2210 string_concatenation = false;
2212 /* Create a power-of-2-sized buffer for native_encode_expr. */
2213 if (!string_concatenation)
2214 buf_size = 1 << ceil_log2 (buf_size);
2216 val = XNEWVEC (unsigned char, 2 * buf_size);
2217 mask = val + buf_size;
2218 memset (val, 0, buf_size);
2219 memset (mask, ~0U, buf_size);
2221 stores.qsort (sort_by_order);
2223 FOR_EACH_VEC_ELT (stores, i, info)
2225 unsigned int pos_in_buffer = info->bitpos - bitregion_start;
2226 tree cst;
2227 if (info->ops[0].val && info->ops[0].base_addr == NULL_TREE)
2228 cst = info->ops[0].val;
2229 else if (info->ops[1].val && info->ops[1].base_addr == NULL_TREE)
2230 cst = info->ops[1].val;
2231 else
2232 cst = NULL_TREE;
2233 bool ret = true;
2234 if (cst && info->rhs_code != BIT_INSERT_EXPR)
2235 ret = encode_tree_to_bitpos (cst, val, info->bitsize, pos_in_buffer,
2236 buf_size);
2237 unsigned char *m = mask + (pos_in_buffer / BITS_PER_UNIT);
2238 if (BYTES_BIG_ENDIAN)
2239 clear_bit_region_be (m, (BITS_PER_UNIT - 1
2240 - (pos_in_buffer % BITS_PER_UNIT)),
2241 info->bitsize);
2242 else
2243 clear_bit_region (m, pos_in_buffer % BITS_PER_UNIT, info->bitsize);
2244 if (cst && dump_file && (dump_flags & TDF_DETAILS))
2246 if (ret)
2248 fputs ("After writing ", dump_file);
2249 print_generic_expr (dump_file, cst, TDF_NONE);
2250 fprintf (dump_file, " of size " HOST_WIDE_INT_PRINT_DEC
2251 " at position %d\n", info->bitsize, pos_in_buffer);
2252 fputs (" the merged value contains ", dump_file);
2253 dump_char_array (dump_file, val, buf_size);
2254 fputs (" the merged mask contains ", dump_file);
2255 dump_char_array (dump_file, mask, buf_size);
2256 if (bit_insertion)
2257 fputs (" bit insertion is required\n", dump_file);
2258 if (string_concatenation)
2259 fputs (" string concatenation is required\n", dump_file);
2261 else
2262 fprintf (dump_file, "Failed to merge stores\n");
2264 if (!ret)
2265 return false;
2267 stores.qsort (sort_by_bitpos);
2268 return true;
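/* Illustration only: after apply_stores for a group holding just
     MEM[(char *)p_1] = 3;
     MEM[(char *)p_1 + 1B] = 4;
   over a 2-byte bit region, val is { 0x03, 0x04 } and mask is { 0x00, 0x00 }.
   Any mask byte still equal to 0xff marks bits that no recorded store wrote
   and that therefore must be preserved when the merged store is emitted
   (split_group treats such bytes as padding). */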
2271 /* Structure describing the store chain. */
2273 class imm_store_chain_info
2275 public:
2276 /* Doubly-linked list that imposes an order on chain processing.
2277 PNXP (prev's next pointer) points to the head of a list, or to
2278 the next field in the previous chain in the list.
2279 See pass_store_merging::m_stores_head for more rationale. */
2280 imm_store_chain_info *next, **pnxp;
2281 tree base_addr;
2282 auto_vec<store_immediate_info *> m_store_info;
2283 auto_vec<merged_store_group *> m_merged_store_groups;
2285 imm_store_chain_info (imm_store_chain_info *&inspt, tree b_a)
2286 : next (inspt), pnxp (&inspt), base_addr (b_a)
2288 inspt = this;
2289 if (next)
2291 gcc_checking_assert (pnxp == next->pnxp);
2292 next->pnxp = &next;
2295 ~imm_store_chain_info ()
2297 *pnxp = next;
2298 if (next)
2300 gcc_checking_assert (&next == next->pnxp);
2301 next->pnxp = pnxp;
2304 bool terminate_and_process_chain ();
2305 bool try_coalesce_bswap (merged_store_group *, unsigned int, unsigned int,
2306 unsigned int);
2307 bool coalesce_immediate_stores ();
2308 bool output_merged_store (merged_store_group *);
2309 bool output_merged_stores ();
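/* Illustration only: the next/pnxp bookkeeping in the constructor and
   destructor above is the usual intrusive doubly-linked list in which the
   back pointer addresses the previous node's next field (or the list head).
   A minimal standalone sketch of the same idea, with hypothetical names:

     struct node { struct node *next, **pnxp; };

     static void
     node_push_head (struct node **head, struct node *n)
     {
       n->next = *head;
       n->pnxp = head;
       if (n->next)
         n->next->pnxp = &n->next;
       *head = n;
     }

     static void
     node_unlink (struct node *n)
     {
       *n->pnxp = n->next;
       if (n->next)
         n->next->pnxp = n->pnxp;
     }

   Unlinking needs neither the list head nor a search, which is why
   pass_store_merging::terminate_and_process_chain can simply delete a chain
   and let the destructor splice it out. */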
2312 const pass_data pass_data_tree_store_merging = {
2313 GIMPLE_PASS, /* type */
2314 "store-merging", /* name */
2315 OPTGROUP_NONE, /* optinfo_flags */
2316 TV_GIMPLE_STORE_MERGING, /* tv_id */
2317 PROP_ssa, /* properties_required */
2318 0, /* properties_provided */
2319 0, /* properties_destroyed */
2320 0, /* todo_flags_start */
2321 TODO_update_ssa, /* todo_flags_finish */
2324 class pass_store_merging : public gimple_opt_pass
2326 public:
2327 pass_store_merging (gcc::context *ctxt)
2328 : gimple_opt_pass (pass_data_tree_store_merging, ctxt), m_stores_head (),
2329 m_n_chains (0), m_n_stores (0)
2333 /* Pass not supported for PDP-endian, nor for insane hosts or
2334 target character sizes where native_{encode,interpret}_expr
2335 doesn't work properly. */
2336 virtual bool
2337 gate (function *)
2339 return flag_store_merging
2340 && BYTES_BIG_ENDIAN == WORDS_BIG_ENDIAN
2341 && CHAR_BIT == 8
2342 && BITS_PER_UNIT == 8;
2345 virtual unsigned int execute (function *);
2347 private:
2348 hash_map<tree_operand_hash, class imm_store_chain_info *> m_stores;
2350 /* Form a doubly-linked stack of the elements of m_stores, so that
2351 we can iterate over them in a predictable way. Using this order
2352 avoids extraneous differences in the compiler output just because
2353 of tree pointer variations (e.g. different chains end up in
2354 different positions of m_stores, so they are handled in different
2355 orders, so they allocate or release SSA names in different
2356 orders, and when they get reused, subsequent passes end up
2357 getting different SSA names, which may ultimately change
2358 decisions when going out of SSA). */
2359 imm_store_chain_info *m_stores_head;
2361 /* The number of store chains currently tracked. */
2362 unsigned m_n_chains;
2363 /* The number of stores currently tracked. */
2364 unsigned m_n_stores;
2366 bool process_store (gimple *);
2367 bool terminate_and_process_chain (imm_store_chain_info *);
2368 bool terminate_all_aliasing_chains (imm_store_chain_info **, gimple *);
2369 bool terminate_and_process_all_chains ();
2370 }; // class pass_store_merging
2372 /* Terminate and process all recorded chains. Return true if any changes
2373 were made. */
2375 bool
2376 pass_store_merging::terminate_and_process_all_chains ()
2378 bool ret = false;
2379 while (m_stores_head)
2380 ret |= terminate_and_process_chain (m_stores_head);
2381 gcc_assert (m_stores.is_empty ());
2382 return ret;
2385 /* Terminate all chains that are affected by the statement STMT.
2386 CHAIN_INFO is the chain we should ignore from the checks if
2387 non-NULL. Return true if any changes were made. */
2389 bool
2390 pass_store_merging::terminate_all_aliasing_chains (imm_store_chain_info
2391 **chain_info,
2392 gimple *stmt)
2394 bool ret = false;
2396 /* If the statement doesn't touch memory it can't alias. */
2397 if (!gimple_vuse (stmt))
2398 return false;
2400 tree store_lhs = gimple_store_p (stmt) ? gimple_get_lhs (stmt) : NULL_TREE;
2401 ao_ref store_lhs_ref;
2402 ao_ref_init (&store_lhs_ref, store_lhs);
2403 for (imm_store_chain_info *next = m_stores_head, *cur = next; cur; cur = next)
2405 next = cur->next;
2407 /* We already checked all the stores in chain_info and terminated the
2408 chain if necessary. Skip it here. */
2409 if (chain_info && *chain_info == cur)
2410 continue;
2412 store_immediate_info *info;
2413 unsigned int i;
2414 FOR_EACH_VEC_ELT (cur->m_store_info, i, info)
2416 tree lhs = gimple_assign_lhs (info->stmt);
2417 ao_ref lhs_ref;
2418 ao_ref_init (&lhs_ref, lhs);
2419 if (ref_maybe_used_by_stmt_p (stmt, &lhs_ref)
2420 || stmt_may_clobber_ref_p_1 (stmt, &lhs_ref)
2421 || (store_lhs && refs_may_alias_p_1 (&store_lhs_ref,
2422 &lhs_ref, false)))
2424 if (dump_file && (dump_flags & TDF_DETAILS))
2426 fprintf (dump_file, "stmt causes chain termination:\n");
2427 print_gimple_stmt (dump_file, stmt, 0);
2429 ret |= terminate_and_process_chain (cur);
2430 break;
2435 return ret;
2438 /* Helper function. Terminate the recorded chain storing to base object
2439 BASE. Return true if the merging and output was successful. The m_stores
2440 entry is removed after the processing in any case. */
2442 bool
2443 pass_store_merging::terminate_and_process_chain (imm_store_chain_info *chain_info)
2445 m_n_stores -= chain_info->m_store_info.length ();
2446 m_n_chains--;
2447 bool ret = chain_info->terminate_and_process_chain ();
2448 m_stores.remove (chain_info->base_addr);
2449 delete chain_info;
2450 return ret;
2453 /* Return true if stmts in between FIRST (inclusive) and LAST (exclusive)
2454 may clobber REF. FIRST and LAST must have non-NULL vdef. We want to
2455 be able to sink a load of REF across stores between FIRST and LAST, up
2456 to right before LAST. */
2458 bool
2459 stmts_may_clobber_ref_p (gimple *first, gimple *last, tree ref)
2461 ao_ref r;
2462 ao_ref_init (&r, ref);
2463 unsigned int count = 0;
2464 tree vop = gimple_vdef (last);
2465 gimple *stmt;
2467 /* Return true conservatively if the basic blocks are different. */
2468 if (gimple_bb (first) != gimple_bb (last))
2469 return true;
2473 stmt = SSA_NAME_DEF_STMT (vop);
2474 if (stmt_may_clobber_ref_p_1 (stmt, &r))
2475 return true;
2476 if (gimple_store_p (stmt)
2477 && refs_anti_dependent_p (ref, gimple_get_lhs (stmt)))
2478 return true;
2479 /* Avoid quadratic compile time by bounding the number of checks
2480 we perform. */
2481 if (++count > MAX_STORE_ALIAS_CHECKS)
2482 return true;
2483 vop = gimple_vuse (stmt);
2485 while (stmt != first);
2487 return false;
2490 /* Return true if INFO->ops[IDX] is mergeable with the
2491 corresponding loads already in MERGED_STORE group.
2492 BASE_ADDR is the base address of the whole store group. */
2494 bool
2495 compatible_load_p (merged_store_group *merged_store,
2496 store_immediate_info *info,
2497 tree base_addr, int idx)
2499 store_immediate_info *infof = merged_store->stores[0];
2500 if (!info->ops[idx].base_addr
2501 || maybe_ne (info->ops[idx].bitpos - infof->ops[idx].bitpos,
2502 info->bitpos - infof->bitpos)
2503 || !operand_equal_p (info->ops[idx].base_addr,
2504 infof->ops[idx].base_addr, 0))
2505 return false;
2507 store_immediate_info *infol = merged_store->stores.last ();
2508 tree load_vuse = gimple_vuse (info->ops[idx].stmt);
2509 /* In this case all vuses should be the same, e.g.
2510 _1 = s.a; _2 = s.b; _3 = _1 | 1; t.a = _3; _4 = _2 | 2; t.b = _4;
2512 _1 = s.a; _2 = s.b; t.a = _1; t.b = _2;
2513 and we can emit the coalesced load next to any of those loads. */
2514 if (gimple_vuse (infof->ops[idx].stmt) == load_vuse
2515 && gimple_vuse (infol->ops[idx].stmt) == load_vuse)
2516 return true;
2518 /* Otherwise, at least for now require that the load has the same
2519 vuse as the store. See following examples. */
2520 if (gimple_vuse (info->stmt) != load_vuse)
2521 return false;
2523 if (gimple_vuse (infof->stmt) != gimple_vuse (infof->ops[idx].stmt)
2524 || (infof != infol
2525 && gimple_vuse (infol->stmt) != gimple_vuse (infol->ops[idx].stmt)))
2526 return false;
2528 /* If the load is from the same location as the store, already
2529 the construction of the immediate chain info guarantees no intervening
2530 stores, so no further checks are needed. Example:
2531 _1 = s.a; _2 = _1 & -7; s.a = _2; _3 = s.b; _4 = _3 & -7; s.b = _4; */
2532 if (known_eq (info->ops[idx].bitpos, info->bitpos)
2533 && operand_equal_p (info->ops[idx].base_addr, base_addr, 0))
2534 return true;
2536 /* Otherwise, we need to punt if any of the loads can be clobbered by any
2537 of the stores in the group, or any other stores in between those.
2538 Previous calls to compatible_load_p ensured that for all the
2539 merged_store->stores IDX loads, no stmts starting with
2540 merged_store->first_stmt and ending right before merged_store->last_stmt
2541 clobbers those loads. */
2542 gimple *first = merged_store->first_stmt;
2543 gimple *last = merged_store->last_stmt;
2544 /* The stores are sorted by increasing store bitpos, so if info->stmt store
2545 comes before the so far first load, we'll be changing
2546 merged_store->first_stmt. In that case we need to give up if
2547 any of the earlier processed loads could be clobbered by the stmts in
2548 the new range. */
2549 if (info->order < merged_store->first_order)
2551 for (store_immediate_info *infoc : merged_store->stores)
2552 if (stmts_may_clobber_ref_p (info->stmt, first, infoc->ops[idx].val))
2553 return false;
2554 first = info->stmt;
2556 /* Similarly, we could change merged_store->last_stmt, so ensure
2557 in that case no stmts in the new range clobber any of the earlier
2558 processed loads. */
2559 else if (info->order > merged_store->last_order)
2561 for (store_immediate_info *infoc : merged_store->stores)
2562 if (stmts_may_clobber_ref_p (last, info->stmt, infoc->ops[idx].val))
2563 return false;
2564 last = info->stmt;
2566 /* And finally, we'd be adding a new load to the set, ensure it isn't
2567 clobbered in the new range. */
2568 if (stmts_may_clobber_ref_p (first, last, info->ops[idx].val))
2569 return false;
2571 /* Otherwise, we are looking for:
2572 _1 = s.a; _2 = _1 ^ 15; t.a = _2; _3 = s.b; _4 = _3 ^ 15; t.b = _4;
2574 _1 = s.a; t.a = _1; _2 = s.b; t.b = _2; */
2575 return true;
2578 /* Add all refs loaded to compute VAL to REFS vector. */
2580 void
2581 gather_bswap_load_refs (vec<tree> *refs, tree val)
2583 if (TREE_CODE (val) != SSA_NAME)
2584 return;
2586 gimple *stmt = SSA_NAME_DEF_STMT (val);
2587 if (!is_gimple_assign (stmt))
2588 return;
2590 if (gimple_assign_load_p (stmt))
2592 refs->safe_push (gimple_assign_rhs1 (stmt));
2593 return;
2596 switch (gimple_assign_rhs_class (stmt))
2598 case GIMPLE_BINARY_RHS:
2599 gather_bswap_load_refs (refs, gimple_assign_rhs2 (stmt));
2600 /* FALLTHRU */
2601 case GIMPLE_UNARY_RHS:
2602 gather_bswap_load_refs (refs, gimple_assign_rhs1 (stmt));
2603 break;
2604 default:
2605 gcc_unreachable ();
2609 /* Check if there are any stores in M_STORE_INFO after index I
2610 (where M_STORE_INFO must be sorted by sort_by_bitpos) that overlap
2611 a potential group ending at END and have their order
2612 smaller than LAST_ORDER. ALL_INTEGER_CST_P is true if
2613 all the stores already merged and the one under consideration
2614 have rhs_code of INTEGER_CST. Return true if there are no such stores.
2615 Consider:
2616 MEM[(long long int *)p_28] = 0;
2617 MEM[(long long int *)p_28 + 8B] = 0;
2618 MEM[(long long int *)p_28 + 16B] = 0;
2619 MEM[(long long int *)p_28 + 24B] = 0;
2620 _129 = (int) _130;
2621 MEM[(int *)p_28 + 8B] = _129;
2622 MEM[(int *)p_28].a = -1;
2623 We already have
2624 MEM[(long long int *)p_28] = 0;
2625 MEM[(int *)p_28].a = -1;
2626 stmts in the current group and need to consider if it is safe to
2627 add MEM[(long long int *)p_28 + 8B] = 0; store into the same group.
2628 There is an overlap between that store and the MEM[(int *)p_28 + 8B] = _129;
2629 store though, so if we add the MEM[(long long int *)p_28 + 8B] = 0;
2630 into the group and merging of those 3 stores is successful, merged
2631 stmts will be emitted at the latest store from that group, i.e.
2632 LAST_ORDER, which is the MEM[(int *)p_28].a = -1; store.
2633 The MEM[(int *)p_28 + 8B] = _129; store that originally follows
2634 the MEM[(long long int *)p_28 + 8B] = 0; would now be before it,
2635 so we need to refuse merging MEM[(long long int *)p_28 + 8B] = 0;
2636 into the group. That way it will be its own store group and will
2637 not be touched. If ALL_INTEGER_CST_P and there are overlapping
2638 INTEGER_CST stores, those are mergeable using merge_overlapping,
2639 so don't return false for those.
2641 Similarly, check stores from FIRST_EARLIER (inclusive) to END_EARLIER
2642 (exclusive), whether they don't overlap the bitrange START to END
2643 and have order in between FIRST_ORDER and LAST_ORDER. This is to
2644 prevent merging in cases like:
2645 MEM <char[12]> [&b + 8B] = {};
2646 MEM[(short *) &b] = 5;
2647 _5 = *x_4(D);
2648 MEM <long long unsigned int> [&b + 2B] = _5;
2649 MEM[(char *)&b + 16B] = 88;
2650 MEM[(int *)&b + 20B] = 1;
2651 The = {} store comes in sort_by_bitpos before the = 88 store, and can't
2652 be merged with it, because the = _5 store overlaps these and is in between
2653 them in sort_by_order ordering. If it was merged, the merged store would
2654 go after the = _5 store and thus change behavior. */
2656 static bool
2657 check_no_overlap (const vec<store_immediate_info *> &m_store_info,
2658 unsigned int i,
2659 bool all_integer_cst_p, unsigned int first_order,
2660 unsigned int last_order, unsigned HOST_WIDE_INT start,
2661 unsigned HOST_WIDE_INT end, unsigned int first_earlier,
2662 unsigned end_earlier)
2664 unsigned int len = m_store_info.length ();
2665 for (unsigned int j = first_earlier; j < end_earlier; j++)
2667 store_immediate_info *info = m_store_info[j];
2668 if (info->order > first_order
2669 && info->order < last_order
2670 && info->bitpos + info->bitsize > start)
2671 return false;
2673 for (++i; i < len; ++i)
2675 store_immediate_info *info = m_store_info[i];
2676 if (info->bitpos >= end)
2677 break;
2678 if (info->order < last_order
2679 && (!all_integer_cst_p || info->rhs_code != INTEGER_CST))
2680 return false;
2682 return true;
2685 /* Return true if m_store_info[first] and at least one following store
2686 form a group which stores a try_size-bit value that is byte swapped
2687 from a memory load or some other value, or is the identity of some value.
2688 This uses the bswap pass APIs. */
2690 bool
2691 imm_store_chain_info::try_coalesce_bswap (merged_store_group *merged_store,
2692 unsigned int first,
2693 unsigned int try_size,
2694 unsigned int first_earlier)
2696 unsigned int len = m_store_info.length (), last = first;
2697 unsigned HOST_WIDE_INT width = m_store_info[first]->bitsize;
2698 if (width >= try_size)
2699 return false;
2700 for (unsigned int i = first + 1; i < len; ++i)
2702 if (m_store_info[i]->bitpos != m_store_info[first]->bitpos + width
2703 || m_store_info[i]->lp_nr != merged_store->lp_nr
2704 || m_store_info[i]->ins_stmt == NULL)
2705 return false;
2706 width += m_store_info[i]->bitsize;
2707 if (width >= try_size)
2709 last = i;
2710 break;
2713 if (width != try_size)
2714 return false;
2716 bool allow_unaligned
2717 = !STRICT_ALIGNMENT && param_store_merging_allow_unaligned;
2718 /* Punt if the combined store would not be aligned and we need alignment. */
2719 if (!allow_unaligned)
2721 unsigned int align = merged_store->align;
2722 unsigned HOST_WIDE_INT align_base = merged_store->align_base;
2723 for (unsigned int i = first + 1; i <= last; ++i)
2725 unsigned int this_align;
2726 unsigned HOST_WIDE_INT align_bitpos = 0;
2727 get_object_alignment_1 (gimple_assign_lhs (m_store_info[i]->stmt),
2728 &this_align, &align_bitpos);
2729 if (this_align > align)
2731 align = this_align;
2732 align_base = m_store_info[i]->bitpos - align_bitpos;
2735 unsigned HOST_WIDE_INT align_bitpos
2736 = (m_store_info[first]->bitpos - align_base) & (align - 1);
2737 if (align_bitpos)
2738 align = least_bit_hwi (align_bitpos);
2739 if (align < try_size)
2740 return false;
2743 tree type;
2744 switch (try_size)
2746 case 16: type = uint16_type_node; break;
2747 case 32: type = uint32_type_node; break;
2748 case 64: type = uint64_type_node; break;
2749 default: gcc_unreachable ();
2751 struct symbolic_number n;
2752 gimple *ins_stmt = NULL;
2753 int vuse_store = -1;
2754 unsigned int first_order = merged_store->first_order;
2755 unsigned int last_order = merged_store->last_order;
2756 gimple *first_stmt = merged_store->first_stmt;
2757 gimple *last_stmt = merged_store->last_stmt;
2758 unsigned HOST_WIDE_INT end = merged_store->start + merged_store->width;
2759 store_immediate_info *infof = m_store_info[first];
2761 for (unsigned int i = first; i <= last; ++i)
2763 store_immediate_info *info = m_store_info[i];
2764 struct symbolic_number this_n = info->n;
2765 this_n.type = type;
2766 if (!this_n.base_addr)
2767 this_n.range = try_size / BITS_PER_UNIT;
2768 else
2769 /* Update vuse in case it has been changed by output_merged_stores. */
2770 this_n.vuse = gimple_vuse (info->ins_stmt);
2771 unsigned int bitpos = info->bitpos - infof->bitpos;
2772 if (!do_shift_rotate (LSHIFT_EXPR, &this_n,
2773 BYTES_BIG_ENDIAN
2774 ? try_size - info->bitsize - bitpos
2775 : bitpos))
2776 return false;
2777 if (this_n.base_addr && vuse_store)
2779 unsigned int j;
2780 for (j = first; j <= last; ++j)
2781 if (this_n.vuse == gimple_vuse (m_store_info[j]->stmt))
2782 break;
2783 if (j > last)
2785 if (vuse_store == 1)
2786 return false;
2787 vuse_store = 0;
2790 if (i == first)
2792 n = this_n;
2793 ins_stmt = info->ins_stmt;
2795 else
2797 if (n.base_addr && n.vuse != this_n.vuse)
2799 if (vuse_store == 0)
2800 return false;
2801 vuse_store = 1;
2803 if (info->order > last_order)
2805 last_order = info->order;
2806 last_stmt = info->stmt;
2808 else if (info->order < first_order)
2810 first_order = info->order;
2811 first_stmt = info->stmt;
2813 end = MAX (end, info->bitpos + info->bitsize);
2815 ins_stmt = perform_symbolic_merge (ins_stmt, &n, info->ins_stmt,
2816 &this_n, &n);
2817 if (ins_stmt == NULL)
2818 return false;
2822 uint64_t cmpxchg, cmpnop;
2823 find_bswap_or_nop_finalize (&n, &cmpxchg, &cmpnop);
2825 /* A complete byte swap should make the symbolic number start with
2826 the largest digit in the highest order byte. An unchanged symbolic
2827 number indicates a read with the same endianness as the target architecture. */
2828 if (n.n != cmpnop && n.n != cmpxchg)
2829 return false;
2831 if (n.base_addr == NULL_TREE && !is_gimple_val (n.src))
2832 return false;
2834 if (!check_no_overlap (m_store_info, last, false, first_order, last_order,
2835 merged_store->start, end, first_earlier, first))
2836 return false;
2838 /* Don't handle memory copy this way if normal non-bswap processing
2839 would handle it too. */
2840 if (n.n == cmpnop && (unsigned) n.n_ops == last - first + 1)
2842 unsigned int i;
2843 for (i = first; i <= last; ++i)
2844 if (m_store_info[i]->rhs_code != MEM_REF)
2845 break;
2846 if (i == last + 1)
2847 return false;
2850 if (n.n == cmpxchg)
2851 switch (try_size)
2853 case 16:
2854 /* Will emit LROTATE_EXPR. */
2855 break;
2856 case 32:
2857 if (builtin_decl_explicit_p (BUILT_IN_BSWAP32)
2858 && optab_handler (bswap_optab, SImode) != CODE_FOR_nothing)
2859 break;
2860 return false;
2861 case 64:
2862 if (builtin_decl_explicit_p (BUILT_IN_BSWAP64)
2863 && optab_handler (bswap_optab, DImode) != CODE_FOR_nothing)
2864 break;
2865 return false;
2866 default:
2867 gcc_unreachable ();
2870 if (!allow_unaligned && n.base_addr)
2872 unsigned int align = get_object_alignment (n.src);
2873 if (align < try_size)
2874 return false;
2877 /* If each load has the vuse of the corresponding store, we need to verify
2878 that the loads can be sunk right before the last store. */
2879 if (vuse_store == 1)
2881 auto_vec<tree, 64> refs;
2882 for (unsigned int i = first; i <= last; ++i)
2883 gather_bswap_load_refs (&refs,
2884 gimple_assign_rhs1 (m_store_info[i]->stmt));
2886 for (tree ref : refs)
2887 if (stmts_may_clobber_ref_p (first_stmt, last_stmt, ref))
2888 return false;
2889 n.vuse = NULL_TREE;
2892 infof->n = n;
2893 infof->ins_stmt = ins_stmt;
2894 for (unsigned int i = first; i <= last; ++i)
2896 m_store_info[i]->rhs_code = n.n == cmpxchg ? LROTATE_EXPR : NOP_EXPR;
2897 m_store_info[i]->ops[0].base_addr = NULL_TREE;
2898 m_store_info[i]->ops[1].base_addr = NULL_TREE;
2899 if (i != first)
2900 merged_store->merge_into (m_store_info[i]);
2903 return true;
2906 /* Go through the candidate stores recorded in m_store_info and merge them
2907 into merged_store_group objects recorded into m_merged_store_groups
2908 representing the widened stores. Return true if coalescing was successful
2909 and the number of widened stores is smaller than the original number
2910 of stores. */
2912 bool
2913 imm_store_chain_info::coalesce_immediate_stores ()
2915 /* Anything less can't be processed. */
2916 if (m_store_info.length () < 2)
2917 return false;
2919 if (dump_file && (dump_flags & TDF_DETAILS))
2920 fprintf (dump_file, "Attempting to coalesce %u stores in chain\n",
2921 m_store_info.length ());
2923 store_immediate_info *info;
2924 unsigned int i, ignore = 0;
2925 unsigned int first_earlier = 0;
2926 unsigned int end_earlier = 0;
2928 /* Order the stores by the bitposition they write to. */
2929 m_store_info.qsort (sort_by_bitpos);
2931 info = m_store_info[0];
2932 merged_store_group *merged_store = new merged_store_group (info);
2933 if (dump_file && (dump_flags & TDF_DETAILS))
2934 fputs ("New store group\n", dump_file);
2936 FOR_EACH_VEC_ELT (m_store_info, i, info)
2938 unsigned HOST_WIDE_INT new_bitregion_start, new_bitregion_end;
2940 if (i <= ignore)
2941 goto done;
2943 while (first_earlier < end_earlier
2944 && (m_store_info[first_earlier]->bitpos
2945 + m_store_info[first_earlier]->bitsize
2946 <= merged_store->start))
2947 first_earlier++;
2949 /* First try to handle group of stores like:
2950 p[0] = data >> 24;
2951 p[1] = data >> 16;
2952 p[2] = data >> 8;
2953 p[3] = data;
2954 using the bswap framework. */
2955 if (info->bitpos == merged_store->start + merged_store->width
2956 && merged_store->stores.length () == 1
2957 && merged_store->stores[0]->ins_stmt != NULL
2958 && info->lp_nr == merged_store->lp_nr
2959 && info->ins_stmt != NULL)
2961 unsigned int try_size;
2962 for (try_size = 64; try_size >= 16; try_size >>= 1)
2963 if (try_coalesce_bswap (merged_store, i - 1, try_size,
2964 first_earlier))
2965 break;
2967 if (try_size >= 16)
2969 ignore = i + merged_store->stores.length () - 1;
2970 m_merged_store_groups.safe_push (merged_store);
2971 if (ignore < m_store_info.length ())
2973 merged_store = new merged_store_group (m_store_info[ignore]);
2974 end_earlier = ignore;
2976 else
2977 merged_store = NULL;
2978 goto done;
2982 new_bitregion_start
2983 = MIN (merged_store->bitregion_start, info->bitregion_start);
2984 new_bitregion_end
2985 = MAX (merged_store->bitregion_end, info->bitregion_end);
2987 if (info->order >= merged_store->first_nonmergeable_order
2988 || (((new_bitregion_end - new_bitregion_start + 1) / BITS_PER_UNIT)
2989 > (unsigned) param_store_merging_max_size))
2992 /* |---store 1---|
2993 |---store 2---|
2994 Overlapping stores. */
2995 else if (IN_RANGE (info->bitpos, merged_store->start,
2996 merged_store->start + merged_store->width - 1)
2997 /* |---store 1---||---store 2---|
2998 Handle also the consecutive INTEGER_CST stores case here,
2999 as we have here the code to deal with overlaps. */
3000 || (info->bitregion_start <= merged_store->bitregion_end
3001 && info->rhs_code == INTEGER_CST
3002 && merged_store->only_constants
3003 && merged_store->can_be_merged_into (info)))
3005 /* Only allow overlapping stores of constants. */
3006 if (info->rhs_code == INTEGER_CST
3007 && merged_store->only_constants
3008 && info->lp_nr == merged_store->lp_nr)
3010 unsigned int first_order
3011 = MIN (merged_store->first_order, info->order);
3012 unsigned int last_order
3013 = MAX (merged_store->last_order, info->order);
3014 unsigned HOST_WIDE_INT end
3015 = MAX (merged_store->start + merged_store->width,
3016 info->bitpos + info->bitsize);
3017 if (check_no_overlap (m_store_info, i, true, first_order,
3018 last_order, merged_store->start, end,
3019 first_earlier, end_earlier))
3021 /* check_no_overlap call above made sure there are no
3022 overlapping stores with non-INTEGER_CST rhs_code
3023 in between the first and last of the stores we've
3024 just merged. If there are any INTEGER_CST rhs_code
3025 stores in between, we need to merge_overlapping them
3026 even if in the sort_by_bitpos order there are other
3027 overlapping stores in between. Keep those stores as is.
3028 Example:
3029 MEM[(int *)p_28] = 0;
3030 MEM[(char *)p_28 + 3B] = 1;
3031 MEM[(char *)p_28 + 1B] = 2;
3032 MEM[(char *)p_28 + 2B] = MEM[(char *)p_28 + 6B];
3033 We can't merge the zero store with the store of two and
3034 not merge anything else, because the store of one is
3035 in the original order in between those two, but in
3036 sort_by_bitpos order it comes after the last store that
3037 we can't merge with them. We can merge the first 3 stores
3038 and keep the last store as is though. */
3039 unsigned int len = m_store_info.length ();
3040 unsigned int try_order = last_order;
3041 unsigned int first_nonmergeable_order;
3042 unsigned int k;
3043 bool last_iter = false;
3044 int attempts = 0;
3047 unsigned int max_order = 0;
3048 unsigned int min_order = first_order;
3049 unsigned first_nonmergeable_int_order = ~0U;
3050 unsigned HOST_WIDE_INT this_end = end;
3051 k = i;
3052 first_nonmergeable_order = ~0U;
3053 for (unsigned int j = i + 1; j < len; ++j)
3055 store_immediate_info *info2 = m_store_info[j];
3056 if (info2->bitpos >= this_end)
3057 break;
3058 if (info2->order < try_order)
3060 if (info2->rhs_code != INTEGER_CST
3061 || info2->lp_nr != merged_store->lp_nr)
3063 /* Normally check_no_overlap makes sure this
3064 doesn't happen, but if end grows below,
3065 then we need to process more stores than
3066 check_no_overlap verified. Example:
3067 MEM[(int *)p_5] = 0;
3068 MEM[(short *)p_5 + 3B] = 1;
3069 MEM[(char *)p_5 + 4B] = _9;
3070 MEM[(char *)p_5 + 2B] = 2; */
3071 k = 0;
3072 break;
3074 k = j;
3075 min_order = MIN (min_order, info2->order);
3076 this_end = MAX (this_end,
3077 info2->bitpos + info2->bitsize);
3079 else if (info2->rhs_code == INTEGER_CST
3080 && info2->lp_nr == merged_store->lp_nr
3081 && !last_iter)
3083 max_order = MAX (max_order, info2->order + 1);
3084 first_nonmergeable_int_order
3085 = MIN (first_nonmergeable_int_order,
3086 info2->order);
3088 else
3089 first_nonmergeable_order
3090 = MIN (first_nonmergeable_order, info2->order);
3092 if (k > i
3093 && !check_no_overlap (m_store_info, len - 1, true,
3094 min_order, try_order,
3095 merged_store->start, this_end,
3096 first_earlier, end_earlier))
3097 k = 0;
3098 if (k == 0)
3100 if (last_order == try_order)
3101 break;
3102 /* If this failed, but only because we grew
3103 try_order, retry with the last working one,
3104 so that we merge at least something. */
3105 try_order = last_order;
3106 last_iter = true;
3107 continue;
3109 last_order = try_order;
3110 /* Retry with a larger try_order to see if we could
3111 merge some further INTEGER_CST stores. */
3112 if (max_order
3113 && (first_nonmergeable_int_order
3114 < first_nonmergeable_order))
3116 try_order = MIN (max_order,
3117 first_nonmergeable_order);
3118 try_order
3119 = MIN (try_order,
3120 merged_store->first_nonmergeable_order);
3121 if (try_order > last_order && ++attempts < 16)
3122 continue;
3124 first_nonmergeable_order
3125 = MIN (first_nonmergeable_order,
3126 first_nonmergeable_int_order);
3127 end = this_end;
3128 break;
3130 while (1);
3132 if (k != 0)
3134 merged_store->merge_overlapping (info);
3136 merged_store->first_nonmergeable_order
3137 = MIN (merged_store->first_nonmergeable_order,
3138 first_nonmergeable_order);
3140 for (unsigned int j = i + 1; j <= k; j++)
3142 store_immediate_info *info2 = m_store_info[j];
3143 gcc_assert (info2->bitpos < end);
3144 if (info2->order < last_order)
3146 gcc_assert (info2->rhs_code == INTEGER_CST);
3147 if (info != info2)
3148 merged_store->merge_overlapping (info2);
3150 /* Other stores are kept and not merged in any
3151 way. */
3153 ignore = k;
3154 goto done;
3159 /* |---store 1---||---store 2---|
3160 This store is consecutive to the previous one.
3161 Merge it into the current store group. There can be gaps in between
3162 the stores, but there can't be gaps in between bitregions. */
3163 else if (info->bitregion_start <= merged_store->bitregion_end
3164 && merged_store->can_be_merged_into (info))
3166 store_immediate_info *infof = merged_store->stores[0];
3168 /* All the rhs_code ops that take 2 operands are commutative;
3169 swap the operands if that could make the operands compatible. */
3170 if (infof->ops[0].base_addr
3171 && infof->ops[1].base_addr
3172 && info->ops[0].base_addr
3173 && info->ops[1].base_addr
3174 && known_eq (info->ops[1].bitpos - infof->ops[0].bitpos,
3175 info->bitpos - infof->bitpos)
3176 && operand_equal_p (info->ops[1].base_addr,
3177 infof->ops[0].base_addr, 0))
3179 std::swap (info->ops[0], info->ops[1]);
3180 info->ops_swapped_p = true;
3182 if (check_no_overlap (m_store_info, i, false,
3183 MIN (merged_store->first_order, info->order),
3184 MAX (merged_store->last_order, info->order),
3185 merged_store->start,
3186 MAX (merged_store->start + merged_store->width,
3187 info->bitpos + info->bitsize),
3188 first_earlier, end_earlier))
3190 /* Turn MEM_REF into BIT_INSERT_EXPR for bit-field stores. */
3191 if (info->rhs_code == MEM_REF && infof->rhs_code != MEM_REF)
3193 info->rhs_code = BIT_INSERT_EXPR;
3194 info->ops[0].val = gimple_assign_rhs1 (info->stmt);
3195 info->ops[0].base_addr = NULL_TREE;
3197 else if (infof->rhs_code == MEM_REF && info->rhs_code != MEM_REF)
3199 for (store_immediate_info *infoj : merged_store->stores)
3201 infoj->rhs_code = BIT_INSERT_EXPR;
3202 infoj->ops[0].val = gimple_assign_rhs1 (infoj->stmt);
3203 infoj->ops[0].base_addr = NULL_TREE;
3205 merged_store->bit_insertion = true;
3207 if ((infof->ops[0].base_addr
3208 ? compatible_load_p (merged_store, info, base_addr, 0)
3209 : !info->ops[0].base_addr)
3210 && (infof->ops[1].base_addr
3211 ? compatible_load_p (merged_store, info, base_addr, 1)
3212 : !info->ops[1].base_addr))
3214 merged_store->merge_into (info);
3215 goto done;
3220 /* |---store 1---| <gap> |---store 2---|.
3221 Gap between stores or the rhs not compatible. Start a new group. */
3223 /* Try to apply all the stores recorded for the group to determine
3224 the bitpattern they write and discard it if that fails.
3225 This will also reject single-store groups. */
3226 if (merged_store->apply_stores ())
3227 m_merged_store_groups.safe_push (merged_store);
3228 else
3229 delete merged_store;
3231 merged_store = new merged_store_group (info);
3232 end_earlier = i;
3233 if (dump_file && (dump_flags & TDF_DETAILS))
3234 fputs ("New store group\n", dump_file);
3236 done:
3237 if (dump_file && (dump_flags & TDF_DETAILS))
3239 fprintf (dump_file, "Store %u:\nbitsize:" HOST_WIDE_INT_PRINT_DEC
3240 " bitpos:" HOST_WIDE_INT_PRINT_DEC " val:",
3241 i, info->bitsize, info->bitpos);
3242 print_generic_expr (dump_file, gimple_assign_rhs1 (info->stmt));
3243 fputc ('\n', dump_file);
3247 /* Record or discard the last store group. */
3248 if (merged_store)
3250 if (merged_store->apply_stores ())
3251 m_merged_store_groups.safe_push (merged_store);
3252 else
3253 delete merged_store;
3256 gcc_assert (m_merged_store_groups.length () <= m_store_info.length ());
3258 bool success
3259 = !m_merged_store_groups.is_empty ()
3260 && m_merged_store_groups.length () < m_store_info.length ();
3262 if (success && dump_file)
3263 fprintf (dump_file, "Coalescing successful!\nMerged into %u stores\n",
3264 m_merged_store_groups.length ());
3266 return success;
3269 /* Return the type to use for the merged stores or loads described by STMTS.
3270 This is needed to get the alias sets right. If IS_LOAD, look for rhs,
3271 otherwise lhs. Additionally set *CLIQUEP and *BASEP to MR_DEPENDENCE_*
3272 of the MEM_REFs if any. */
3274 static tree
3275 get_alias_type_for_stmts (vec<gimple *> &stmts, bool is_load,
3276 unsigned short *cliquep, unsigned short *basep)
3278 gimple *stmt;
3279 unsigned int i;
3280 tree type = NULL_TREE;
3281 tree ret = NULL_TREE;
3282 *cliquep = 0;
3283 *basep = 0;
3285 FOR_EACH_VEC_ELT (stmts, i, stmt)
3287 tree ref = is_load ? gimple_assign_rhs1 (stmt)
3288 : gimple_assign_lhs (stmt);
3289 tree type1 = reference_alias_ptr_type (ref);
3290 tree base = get_base_address (ref);
3292 if (i == 0)
3294 if (TREE_CODE (base) == MEM_REF)
3296 *cliquep = MR_DEPENDENCE_CLIQUE (base);
3297 *basep = MR_DEPENDENCE_BASE (base);
3299 ret = type = type1;
3300 continue;
3302 if (!alias_ptr_types_compatible_p (type, type1))
3303 ret = ptr_type_node;
3304 if (TREE_CODE (base) != MEM_REF
3305 || *cliquep != MR_DEPENDENCE_CLIQUE (base)
3306 || *basep != MR_DEPENDENCE_BASE (base))
3308 *cliquep = 0;
3309 *basep = 0;
3312 return ret;
3315 /* Return the location_t information we can find among the statements
3316 in STMTS. */
3318 static location_t
3319 get_location_for_stmts (vec<gimple *> &stmts)
3321 for (gimple *stmt : stmts)
3322 if (gimple_has_location (stmt))
3323 return gimple_location (stmt);
3325 return UNKNOWN_LOCATION;
3328 /* Used to describe a store resulting from splitting a wide store into smaller
3329 regularly-sized stores in split_group. */
3331 class split_store
3333 public:
3334 unsigned HOST_WIDE_INT bytepos;
3335 unsigned HOST_WIDE_INT size;
3336 unsigned HOST_WIDE_INT align;
3337 auto_vec<store_immediate_info *> orig_stores;
3338 /* True if there is a single orig stmt covering the whole split store. */
3339 bool orig;
3340 split_store (unsigned HOST_WIDE_INT, unsigned HOST_WIDE_INT,
3341 unsigned HOST_WIDE_INT);
3344 /* Simple constructor. */
3346 split_store::split_store (unsigned HOST_WIDE_INT bp,
3347 unsigned HOST_WIDE_INT sz,
3348 unsigned HOST_WIDE_INT al)
3349 : bytepos (bp), size (sz), align (al), orig (false)
3351 orig_stores.create (0);
3354 /* Record all stores in GROUP that write to the region starting at BITPOS and
3355 is of size BITSIZE. Record infos for such statements in STORES if
3356 non-NULL. The stores in GROUP must be sorted by bitposition. Return INFO
3357 if there is exactly one original store in the range (in that case ignore
3358 clobber stmts, unless there are only clobber stmts). */
3360 static store_immediate_info *
3361 find_constituent_stores (class merged_store_group *group,
3362 vec<store_immediate_info *> *stores,
3363 unsigned int *first,
3364 unsigned HOST_WIDE_INT bitpos,
3365 unsigned HOST_WIDE_INT bitsize)
3367 store_immediate_info *info, *ret = NULL;
3368 unsigned int i;
3369 bool second = false;
3370 bool update_first = true;
3371 unsigned HOST_WIDE_INT end = bitpos + bitsize;
3372 for (i = *first; group->stores.iterate (i, &info); ++i)
3374 unsigned HOST_WIDE_INT stmt_start = info->bitpos;
3375 unsigned HOST_WIDE_INT stmt_end = stmt_start + info->bitsize;
3376 if (stmt_end <= bitpos)
3378 /* BITPOS passed to this function never decreases from within the
3379 same split_group call, so optimize and don't scan info records
3380 which are known to end before or at BITPOS next time.
3381 Only do it if all stores before this one also pass this. */
3382 if (update_first)
3383 *first = i + 1;
3384 continue;
3386 else
3387 update_first = false;
3389 /* The stores in GROUP are ordered by bitposition so if we're past
3390 the region for this group return early. */
3391 if (stmt_start >= end)
3392 return ret;
3394 if (gimple_clobber_p (info->stmt))
3396 if (stores)
3397 stores->safe_push (info);
3398 if (ret == NULL)
3399 ret = info;
3400 continue;
3402 if (stores)
3404 stores->safe_push (info);
3405 if (ret && !gimple_clobber_p (ret->stmt))
3407 ret = NULL;
3408 second = true;
3411 else if (ret && !gimple_clobber_p (ret->stmt))
3412 return NULL;
3413 if (!second)
3414 ret = info;
3416 return ret;
3419 /* Return how many SSA_NAMEs used to compute value to store in the INFO
3420 store have multiple uses. If any SSA_NAME has multiple uses, also
3421 count statements needed to compute it. */
3423 static unsigned
3424 count_multiple_uses (store_immediate_info *info)
3426 gimple *stmt = info->stmt;
3427 unsigned ret = 0;
3428 switch (info->rhs_code)
3430 case INTEGER_CST:
3431 case STRING_CST:
3432 return 0;
3433 case BIT_AND_EXPR:
3434 case BIT_IOR_EXPR:
3435 case BIT_XOR_EXPR:
3436 if (info->bit_not_p)
3438 if (!has_single_use (gimple_assign_rhs1 (stmt)))
3439 ret = 1; /* Fall through below to return
3440 the BIT_NOT_EXPR stmt and then
3441 BIT_{AND,IOR,XOR}_EXPR and anything it
3442 uses. */
3443 else
3444 /* stmt is after this the BIT_NOT_EXPR. */
3445 stmt = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (stmt));
3447 if (!has_single_use (gimple_assign_rhs1 (stmt)))
3449 ret += 1 + info->ops[0].bit_not_p;
3450 if (info->ops[1].base_addr)
3451 ret += 1 + info->ops[1].bit_not_p;
3452 return ret + 1;
3454 stmt = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (stmt));
3455 /* stmt is now the BIT_*_EXPR. */
3456 if (!has_single_use (gimple_assign_rhs1 (stmt)))
3457 ret += 1 + info->ops[info->ops_swapped_p].bit_not_p;
3458 else if (info->ops[info->ops_swapped_p].bit_not_p)
3460 gimple *stmt2 = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (stmt));
3461 if (!has_single_use (gimple_assign_rhs1 (stmt2)))
3462 ++ret;
3464 if (info->ops[1].base_addr == NULL_TREE)
3466 gcc_checking_assert (!info->ops_swapped_p);
3467 return ret;
3469 if (!has_single_use (gimple_assign_rhs2 (stmt)))
3470 ret += 1 + info->ops[1 - info->ops_swapped_p].bit_not_p;
3471 else if (info->ops[1 - info->ops_swapped_p].bit_not_p)
3473 gimple *stmt2 = SSA_NAME_DEF_STMT (gimple_assign_rhs2 (stmt));
3474 if (!has_single_use (gimple_assign_rhs1 (stmt2)))
3475 ++ret;
3477 return ret;
3478 case MEM_REF:
3479 if (!has_single_use (gimple_assign_rhs1 (stmt)))
3480 return 1 + info->ops[0].bit_not_p;
3481 else if (info->ops[0].bit_not_p)
3483 stmt = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (stmt));
3484 if (!has_single_use (gimple_assign_rhs1 (stmt)))
3485 return 1;
3487 return 0;
3488 case BIT_INSERT_EXPR:
3489 return has_single_use (gimple_assign_rhs1 (stmt)) ? 0 : 1;
3490 default:
3491 gcc_unreachable ();
3495 /* Split a merged store described by GROUP by populating the SPLIT_STORES
3496 vector (if non-NULL) with split_store structs describing the byte offset
3497 (from the base), the bit size and alignment of each store as well as the
3498 original statements involved in each such split group.
3499 This is to separate the splitting strategy from the statement
3500 building/emission/linking done in output_merged_store.
3501 Return number of new stores.
3502 If ALLOW_UNALIGNED_STORE is false, then all stores must be aligned.
3503 If ALLOW_UNALIGNED_LOAD is false, then all loads must be aligned.
3504 BZERO_FIRST may be true only when the first store covers the whole group
3505 and clears it; if BZERO_FIRST is true, keep that first store in the set
3506 unmodified and emit further stores for the overrides only.
3507 If SPLIT_STORES is NULL, it is just a dry run to count number of
3508 new stores. */
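/* Illustration only (one plausible outcome, not a specification): with
   ALLOW_UNALIGNED_STORE false, a group covering 10 contiguous fully-written
   bytes whose start is known to be 4-byte aligned, on a target whose
   MAX_STORE_BITSIZE is at least 32, is split greedily into a 4-byte store,
   another 4-byte store and a trailing 2-byte store, each split_store
   recording its byte offset, size, alignment and original statements. */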
3510 static unsigned int
3511 split_group (merged_store_group *group, bool allow_unaligned_store,
3512 bool allow_unaligned_load, bool bzero_first,
3513 vec<split_store *> *split_stores,
3514 unsigned *total_orig,
3515 unsigned *total_new)
3517 unsigned HOST_WIDE_INT pos = group->bitregion_start;
3518 unsigned HOST_WIDE_INT size = group->bitregion_end - pos;
3519 unsigned HOST_WIDE_INT bytepos = pos / BITS_PER_UNIT;
3520 unsigned HOST_WIDE_INT group_align = group->align;
3521 unsigned HOST_WIDE_INT align_base = group->align_base;
3522 unsigned HOST_WIDE_INT group_load_align = group_align;
3523 bool any_orig = false;
3525 gcc_assert ((size % BITS_PER_UNIT == 0) && (pos % BITS_PER_UNIT == 0));
3527 /* For bswap framework using sets of stores, all the checking has been done
3528 earlier in try_coalesce_bswap and the result always needs to be emitted
3529 as a single store. Likewise for string concatenation. */
3530 if (group->stores[0]->rhs_code == LROTATE_EXPR
3531 || group->stores[0]->rhs_code == NOP_EXPR
3532 || group->string_concatenation)
3534 gcc_assert (!bzero_first);
3535 if (total_orig)
3537 /* Avoid the old/new stmt count heuristics. It should be
3538 always beneficial. */
3539 total_new[0] = 1;
3540 total_orig[0] = 2;
3543 if (split_stores)
3545 unsigned HOST_WIDE_INT align_bitpos
3546 = (group->start - align_base) & (group_align - 1);
3547 unsigned HOST_WIDE_INT align = group_align;
3548 if (align_bitpos)
3549 align = least_bit_hwi (align_bitpos);
3550 bytepos = group->start / BITS_PER_UNIT;
3551 split_store *store
3552 = new split_store (bytepos, group->width, align);
3553 unsigned int first = 0;
3554 find_constituent_stores (group, &store->orig_stores,
3555 &first, group->start, group->width);
3556 split_stores->safe_push (store);
3559 return 1;
3562 unsigned int ret = 0, first = 0;
3563 unsigned HOST_WIDE_INT try_pos = bytepos;
3565 if (total_orig)
3567 unsigned int i;
3568 store_immediate_info *info = group->stores[0];
3570 total_new[0] = 0;
3571 total_orig[0] = 1; /* The orig store. */
3572 info = group->stores[0];
3573 if (info->ops[0].base_addr)
3574 total_orig[0]++;
3575 if (info->ops[1].base_addr)
3576 total_orig[0]++;
3577 switch (info->rhs_code)
3579 case BIT_AND_EXPR:
3580 case BIT_IOR_EXPR:
3581 case BIT_XOR_EXPR:
3582 total_orig[0]++; /* The orig BIT_*_EXPR stmt. */
3583 break;
3584 default:
3585 break;
3587 total_orig[0] *= group->stores.length ();
3589 FOR_EACH_VEC_ELT (group->stores, i, info)
3591 total_new[0] += count_multiple_uses (info);
3592 total_orig[0] += (info->bit_not_p
3593 + info->ops[0].bit_not_p
3594 + info->ops[1].bit_not_p);
3598 if (!allow_unaligned_load)
3599 for (int i = 0; i < 2; ++i)
3600 if (group->load_align[i])
3601 group_load_align = MIN (group_load_align, group->load_align[i]);
3603 if (bzero_first)
3605 store_immediate_info *gstore;
3606 FOR_EACH_VEC_ELT (group->stores, first, gstore)
3607 if (!gimple_clobber_p (gstore->stmt))
3608 break;
3609 ++first;
3610 ret = 1;
3611 if (split_stores)
3613 split_store *store
3614 = new split_store (bytepos, gstore->bitsize, align_base);
3615 store->orig_stores.safe_push (gstore);
3616 store->orig = true;
3617 any_orig = true;
3618 split_stores->safe_push (store);
3622 while (size > 0)
3624 if ((allow_unaligned_store || group_align <= BITS_PER_UNIT)
3625 && (group->mask[try_pos - bytepos] == (unsigned char) ~0U
3626 || (bzero_first && group->val[try_pos - bytepos] == 0)))
3628 /* Skip padding bytes. */
3629 ++try_pos;
3630 size -= BITS_PER_UNIT;
3631 continue;
3634 unsigned HOST_WIDE_INT try_bitpos = try_pos * BITS_PER_UNIT;
3635 unsigned int try_size = MAX_STORE_BITSIZE, nonmasked;
3636 unsigned HOST_WIDE_INT align_bitpos
3637 = (try_bitpos - align_base) & (group_align - 1);
3638 unsigned HOST_WIDE_INT align = group_align;
3639 bool found_orig = false;
3640 if (align_bitpos)
3641 align = least_bit_hwi (align_bitpos);
3642 if (!allow_unaligned_store)
3643 try_size = MIN (try_size, align);
3644 if (!allow_unaligned_load)
3646 /* If we can't do or don't want to do unaligned stores
3647 as well as loads, we need to take the loads into account
3648 as well. */
3649 unsigned HOST_WIDE_INT load_align = group_load_align;
3650 align_bitpos = (try_bitpos - align_base) & (load_align - 1);
3651 if (align_bitpos)
3652 load_align = least_bit_hwi (align_bitpos);
3653 for (int i = 0; i < 2; ++i)
3654 if (group->load_align[i])
3656 align_bitpos
3657 = known_alignment (try_bitpos
3658 - group->stores[0]->bitpos
3659 + group->stores[0]->ops[i].bitpos
3660 - group->load_align_base[i]);
3661 if (align_bitpos & (group_load_align - 1))
3663 unsigned HOST_WIDE_INT a = least_bit_hwi (align_bitpos);
3664 load_align = MIN (load_align, a);
3667 try_size = MIN (try_size, load_align);
3669 store_immediate_info *info
3670 = find_constituent_stores (group, NULL, &first, try_bitpos, try_size);
3671 if (info && !gimple_clobber_p (info->stmt))
3673 /* If there is just one original statement for the range, see if
3674 we can just reuse the original store which could be even larger
3675 than try_size. */
3676 unsigned HOST_WIDE_INT stmt_end
3677 = ROUND_UP (info->bitpos + info->bitsize, BITS_PER_UNIT);
3678 info = find_constituent_stores (group, NULL, &first, try_bitpos,
3679 stmt_end - try_bitpos);
3680 if (info && info->bitpos >= try_bitpos)
3682 store_immediate_info *info2 = NULL;
3683 unsigned int first_copy = first;
3684 if (info->bitpos > try_bitpos
3685 && stmt_end - try_bitpos <= try_size)
3687 info2 = find_constituent_stores (group, NULL, &first_copy,
3688 try_bitpos,
3689 info->bitpos - try_bitpos);
3690 gcc_assert (info2 == NULL || gimple_clobber_p (info2->stmt));
3692 if (info2 == NULL && stmt_end - try_bitpos < try_size)
3694 info2 = find_constituent_stores (group, NULL, &first_copy,
3695 stmt_end,
3696 (try_bitpos + try_size)
3697 - stmt_end);
3698 gcc_assert (info2 == NULL || gimple_clobber_p (info2->stmt));
3700 if (info2 == NULL)
3702 try_size = stmt_end - try_bitpos;
3703 found_orig = true;
3704 goto found;
3709 /* Approximate store bitsize for the case when there are no padding
3710 bits. */
3711 while (try_size > size)
3712 try_size /= 2;
3713 /* Now look for whole padding bytes at the end of that bitsize. */
3714 for (nonmasked = try_size / BITS_PER_UNIT; nonmasked > 0; --nonmasked)
3715 if (group->mask[try_pos - bytepos + nonmasked - 1]
3716 != (unsigned char) ~0U
3717 && (!bzero_first
3718 || group->val[try_pos - bytepos + nonmasked - 1] != 0))
3719 break;
3720 if (nonmasked == 0 || (info && gimple_clobber_p (info->stmt)))
3722 /* If entire try_size range is padding, skip it. */
3723 try_pos += try_size / BITS_PER_UNIT;
3724 size -= try_size;
3725 continue;
3727 /* Otherwise try to decrease try_size if the second half, the last 3
3728 quarters, etc. are padding.  */
3729 nonmasked *= BITS_PER_UNIT;
3730 while (nonmasked <= try_size / 2)
3731 try_size /= 2;
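/* Worked example (illustrative assumption): with MAX_STORE_BITSIZE == 64,
   size == 32 and only the lowest byte of the current window actually written
   (the other three mask bytes are 0xff), try_size is first clamped to 32,
   the scan above finds nonmasked == 1 byte == 8 bits, and the halving loop
   shrinks try_size 32 -> 16 -> 8 so that only the written byte is emitted.  */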
3732 if (!allow_unaligned_store && group_align > BITS_PER_UNIT)
3734 /* Now look for whole padding bytes at the start of that bitsize. */
3735 unsigned int try_bytesize = try_size / BITS_PER_UNIT, masked;
3736 for (masked = 0; masked < try_bytesize; ++masked)
3737 if (group->mask[try_pos - bytepos + masked] != (unsigned char) ~0U
3738 && (!bzero_first
3739 || group->val[try_pos - bytepos + masked] != 0))
3740 break;
3741 masked *= BITS_PER_UNIT;
3742 gcc_assert (masked < try_size);
3743 if (masked >= try_size / 2)
3745 while (masked >= try_size / 2)
3747 try_size /= 2;
3748 try_pos += try_size / BITS_PER_UNIT;
3749 size -= try_size;
3750 masked -= try_size;
3752 /* Need to recompute the alignment, so just retry at the new
3753 position. */
3754 continue;
3758 found:
3759 ++ret;
3761 if (split_stores)
3763 split_store *store
3764 = new split_store (try_pos, try_size, align);
3765 info = find_constituent_stores (group, &store->orig_stores,
3766 &first, try_bitpos, try_size);
3767 if (info
3768 && !gimple_clobber_p (info->stmt)
3769 && info->bitpos >= try_bitpos
3770 && info->bitpos + info->bitsize <= try_bitpos + try_size
3771 && (store->orig_stores.length () == 1
3772 || found_orig
3773 || (info->bitpos == try_bitpos
3774 && (info->bitpos + info->bitsize
3775 == try_bitpos + try_size))))
3777 store->orig = true;
3778 any_orig = true;
3780 split_stores->safe_push (store);
3783 try_pos += try_size / BITS_PER_UNIT;
3784 size -= try_size;
3787 if (total_orig)
3789 unsigned int i;
3790 split_store *store;
3791 /* If we are reusing some original stores and any of the
3792 original SSA_NAMEs had multiple uses, we need to subtract
3793 those now before we add the new ones. */
3794 if (total_new[0] && any_orig)
3796 FOR_EACH_VEC_ELT (*split_stores, i, store)
3797 if (store->orig)
3798 total_new[0] -= count_multiple_uses (store->orig_stores[0]);
3800 total_new[0] += ret; /* The new store. */
3801 store_immediate_info *info = group->stores[0];
3802 if (info->ops[0].base_addr)
3803 total_new[0] += ret;
3804 if (info->ops[1].base_addr)
3805 total_new[0] += ret;
3806 switch (info->rhs_code)
3808 case BIT_AND_EXPR:
3809 case BIT_IOR_EXPR:
3810 case BIT_XOR_EXPR:
3811 total_new[0] += ret; /* The new BIT_*_EXPR stmt. */
3812 break;
3813 default:
3814 break;
3816 FOR_EACH_VEC_ELT (*split_stores, i, store)
3818 unsigned int j;
3819 bool bit_not_p[3] = { false, false, false };
3820 /* If all orig_stores have a certain bit_not_p flag set, then
3821 we'd use a BIT_NOT_EXPR stmt and need to account for it.
3822 If only some orig_stores have that bit_not_p flag set, then
3823 we'd use a BIT_XOR_EXPR with a mask and need to account for
3824 it.  */
3825 FOR_EACH_VEC_ELT (store->orig_stores, j, info)
3827 if (info->ops[0].bit_not_p)
3828 bit_not_p[0] = true;
3829 if (info->ops[1].bit_not_p)
3830 bit_not_p[1] = true;
3831 if (info->bit_not_p)
3832 bit_not_p[2] = true;
3834 total_new[0] += bit_not_p[0] + bit_not_p[1] + bit_not_p[2];
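/* Illustrative example (not part of the original code): if a split store
   covers two original stores and both have info->bit_not_p set, one extra
   BIT_NOT_EXPR (or BIT_XOR_EXPR) statement will be emitted for it, so
   total_new is bumped by one; if only one of the two had bit_not_p set,
   a BIT_XOR_EXPR with a partial mask is still needed and is counted the
   same way.  */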
3839 return ret;
3842 /* Return the operation through which the operand IDX (if < 2) or
3843 result (IDX == 2) should be inverted. If NOP_EXPR, no inversion
3844 is done, if BIT_NOT_EXPR, all bits are inverted, if BIT_XOR_EXPR,
3845 the bits should be xored with mask. */
3847 static enum tree_code
3848 invert_op (split_store *split_store, int idx, tree int_type, tree &mask)
3850 unsigned int i;
3851 store_immediate_info *info;
3852 unsigned int cnt = 0;
3853 bool any_paddings = false;
3854 FOR_EACH_VEC_ELT (split_store->orig_stores, i, info)
3856 bool bit_not_p = idx < 2 ? info->ops[idx].bit_not_p : info->bit_not_p;
3857 if (bit_not_p)
3859 ++cnt;
3860 tree lhs = gimple_assign_lhs (info->stmt);
3861 if (INTEGRAL_TYPE_P (TREE_TYPE (lhs))
3862 && TYPE_PRECISION (TREE_TYPE (lhs)) < info->bitsize)
3863 any_paddings = true;
3866 mask = NULL_TREE;
3867 if (cnt == 0)
3868 return NOP_EXPR;
3869 if (cnt == split_store->orig_stores.length () && !any_paddings)
3870 return BIT_NOT_EXPR;
3872 unsigned HOST_WIDE_INT try_bitpos = split_store->bytepos * BITS_PER_UNIT;
3873 unsigned buf_size = split_store->size / BITS_PER_UNIT;
3874 unsigned char *buf
3875 = XALLOCAVEC (unsigned char, buf_size);
3876 memset (buf, ~0U, buf_size);
3877 FOR_EACH_VEC_ELT (split_store->orig_stores, i, info)
3879 bool bit_not_p = idx < 2 ? info->ops[idx].bit_not_p : info->bit_not_p;
3880 if (!bit_not_p)
3881 continue;
3882 /* Clear regions with bit_not_p and invert afterwards, rather than
3883 clear regions with !bit_not_p, so that gaps in between stores aren't
3884 set in the mask. */
3885 unsigned HOST_WIDE_INT bitsize = info->bitsize;
3886 unsigned HOST_WIDE_INT prec = bitsize;
3887 unsigned int pos_in_buffer = 0;
3888 if (any_paddings)
3890 tree lhs = gimple_assign_lhs (info->stmt);
3891 if (INTEGRAL_TYPE_P (TREE_TYPE (lhs))
3892 && TYPE_PRECISION (TREE_TYPE (lhs)) < bitsize)
3893 prec = TYPE_PRECISION (TREE_TYPE (lhs));
3895 if (info->bitpos < try_bitpos)
3897 gcc_assert (info->bitpos + bitsize > try_bitpos);
3898 if (!BYTES_BIG_ENDIAN)
3900 if (prec <= try_bitpos - info->bitpos)
3901 continue;
3902 prec -= try_bitpos - info->bitpos;
3904 bitsize -= try_bitpos - info->bitpos;
3905 if (BYTES_BIG_ENDIAN && prec > bitsize)
3906 prec = bitsize;
3908 else
3909 pos_in_buffer = info->bitpos - try_bitpos;
3910 if (prec < bitsize)
3912 /* If this is a bool inversion, invert just the least significant
3913 prec bits rather than all bits of it. */
3914 if (BYTES_BIG_ENDIAN)
3916 pos_in_buffer += bitsize - prec;
3917 if (pos_in_buffer >= split_store->size)
3918 continue;
3920 bitsize = prec;
3922 if (pos_in_buffer + bitsize > split_store->size)
3923 bitsize = split_store->size - pos_in_buffer;
3924 unsigned char *p = buf + (pos_in_buffer / BITS_PER_UNIT);
3925 if (BYTES_BIG_ENDIAN)
3926 clear_bit_region_be (p, (BITS_PER_UNIT - 1
3927 - (pos_in_buffer % BITS_PER_UNIT)), bitsize);
3928 else
3929 clear_bit_region (p, pos_in_buffer % BITS_PER_UNIT, bitsize);
3931 for (unsigned int i = 0; i < buf_size; ++i)
3932 buf[i] = ~buf[i];
3933 mask = native_interpret_expr (int_type, buf, buf_size);
3934 return BIT_XOR_EXPR;
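/* Illustrative example (assumption, not in the original code): for a
   little-endian 16-bit split store built from two adjacent byte stores
     [p    ] := a;
     [p + 1] := ~b;
   only the second store has bit_not_p set, so BIT_NOT_EXPR cannot be
   returned; the buffer above yields the mask 0xff00 and BIT_XOR_EXPR is
   returned, i.e. the combined 16-bit value ((b << 8) | a) is xored with
   0xff00.  */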
3937 /* Given a merged store group GROUP output the widened version of it.
3938 The store chain is against the base object BASE.
3939 Try store sizes of at most MAX_STORE_BITSIZE bits wide and don't output
3940 unaligned stores for STRICT_ALIGNMENT targets or if it's too expensive.
3941 Make sure that the number of statements output is less than the number of
3942 original statements. If a better sequence is possible emit it and
3943 return true. */
3945 bool
3946 imm_store_chain_info::output_merged_store (merged_store_group *group)
3948 const unsigned HOST_WIDE_INT start_byte_pos
3949 = group->bitregion_start / BITS_PER_UNIT;
3950 unsigned int orig_num_stmts = group->stores.length ();
3951 if (orig_num_stmts < 2)
3952 return false;
3954 bool allow_unaligned_store
3955 = !STRICT_ALIGNMENT && param_store_merging_allow_unaligned;
3956 bool allow_unaligned_load = allow_unaligned_store;
3957 bool bzero_first = false;
3958 store_immediate_info *store;
3959 unsigned int num_clobber_stmts = 0;
3960 if (group->stores[0]->rhs_code == INTEGER_CST)
3962 unsigned int i;
3963 FOR_EACH_VEC_ELT (group->stores, i, store)
3964 if (gimple_clobber_p (store->stmt))
3965 num_clobber_stmts++;
3966 else if (TREE_CODE (gimple_assign_rhs1 (store->stmt)) == CONSTRUCTOR
3967 && CONSTRUCTOR_NELTS (gimple_assign_rhs1 (store->stmt)) == 0
3968 && group->start == store->bitpos
3969 && group->width == store->bitsize
3970 && (group->start % BITS_PER_UNIT) == 0
3971 && (group->width % BITS_PER_UNIT) == 0)
3973 bzero_first = true;
3974 break;
3976 else
3977 break;
3978 FOR_EACH_VEC_ELT_FROM (group->stores, i, store, i)
3979 if (gimple_clobber_p (store->stmt))
3980 num_clobber_stmts++;
3981 if (num_clobber_stmts == orig_num_stmts)
3982 return false;
3983 orig_num_stmts -= num_clobber_stmts;
3985 if (allow_unaligned_store || bzero_first)
3987 /* If unaligned stores are allowed, see how many stores we'd emit
3988 for unaligned and how many stores we'd emit for aligned stores.
3989 Only use unaligned stores if it allows fewer stores than aligned.
3990 Similarly, if there is a whole region clear first, prefer expanding
3991 it together compared to expanding clear first followed by merged
3992 further stores. */
3993 unsigned cnt[4] = { ~0U, ~0U, ~0U, ~0U };
3994 int pass_min = 0;
3995 for (int pass = 0; pass < 4; ++pass)
3997 if (!allow_unaligned_store && (pass & 1) != 0)
3998 continue;
3999 if (!bzero_first && (pass & 2) != 0)
4000 continue;
4001 cnt[pass] = split_group (group, (pass & 1) != 0,
4002 allow_unaligned_load, (pass & 2) != 0,
4003 NULL, NULL, NULL);
4004 if (cnt[pass] < cnt[pass_min])
4005 pass_min = pass;
4007 if ((pass_min & 1) == 0)
4008 allow_unaligned_store = false;
4009 if ((pass_min & 2) == 0)
4010 bzero_first = false;
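/* Sketch of the pass encoding above (illustrative assumption): bit 0 of PASS
   selects unaligned stores, bit 1 selects expanding the leading bzero together
   with the rest of the group.  E.g. if split_group were to return
     cnt[0] = 4, cnt[1] = 2, cnt[2] = 3, cnt[3] = 2
   then pass_min becomes 1, unaligned stores stay enabled and bzero_first is
   reset to false for the real split below.  */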
4013 auto_vec<class split_store *, 32> split_stores;
4014 split_store *split_store;
4015 unsigned total_orig, total_new, i;
4016 split_group (group, allow_unaligned_store, allow_unaligned_load, bzero_first,
4017 &split_stores, &total_orig, &total_new);
4019 /* Determine if there is a clobber covering the whole group at the start,
4020 followed by proposed split stores that cover the whole group. In that
4021 case, prefer the transformation even if
4022 split_stores.length () == orig_num_stmts. */
4023 bool clobber_first = false;
4024 if (num_clobber_stmts
4025 && gimple_clobber_p (group->stores[0]->stmt)
4026 && group->start == group->stores[0]->bitpos
4027 && group->width == group->stores[0]->bitsize
4028 && (group->start % BITS_PER_UNIT) == 0
4029 && (group->width % BITS_PER_UNIT) == 0)
4031 clobber_first = true;
4032 unsigned HOST_WIDE_INT pos = group->start / BITS_PER_UNIT;
4033 FOR_EACH_VEC_ELT (split_stores, i, split_store)
4034 if (split_store->bytepos != pos)
4036 clobber_first = false;
4037 break;
4039 else
4040 pos += split_store->size / BITS_PER_UNIT;
4041 if (pos != (group->start + group->width) / BITS_PER_UNIT)
4042 clobber_first = false;
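/* Illustrative case (assumption, not from the original code): if the recorded
   chain for a variable s starts with
     s = {CLOBBER};
     s.a = 1;
     s.b = 2;
   and the proposed split stores cover the whole of s back to back, then
   clobber_first is true and the transformation is accepted below even when
   split_stores.length () equals orig_num_stmts, since the clobber itself was
   not counted as a real store.  */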
4045 if (split_stores.length () >= orig_num_stmts + clobber_first)
4048 /* We didn't manage to reduce the number of statements. Bail out. */
4049 if (dump_file && (dump_flags & TDF_DETAILS))
4050 fprintf (dump_file, "Exceeded original number of stmts (%u)."
4051 " Not profitable to emit new sequence.\n",
4052 orig_num_stmts);
4053 FOR_EACH_VEC_ELT (split_stores, i, split_store)
4054 delete split_store;
4055 return false;
4057 if (total_orig <= total_new)
4059 /* If number of estimated new statements is above estimated original
4060 statements, bail out too. */
4061 if (dump_file && (dump_flags & TDF_DETAILS))
4062 fprintf (dump_file, "Estimated number of original stmts (%u)"
4063 " not larger than estimated number of new"
4064 " stmts (%u).\n",
4065 total_orig, total_new);
4066 FOR_EACH_VEC_ELT (split_stores, i, split_store)
4067 delete split_store;
4068 return false;
4070 if (group->stores[0]->rhs_code == INTEGER_CST)
4072 bool all_orig = true;
4073 FOR_EACH_VEC_ELT (split_stores, i, split_store)
4074 if (!split_store->orig)
4076 all_orig = false;
4077 break;
4079 if (all_orig)
4081 unsigned int cnt = split_stores.length ();
4082 store_immediate_info *store;
4083 FOR_EACH_VEC_ELT (group->stores, i, store)
4084 if (gimple_clobber_p (store->stmt))
4085 ++cnt;
4086 /* Punt if we wouldn't make any real changes, i.e. keep all
4087 orig stmts + all clobbers. */
4088 if (cnt == group->stores.length ())
4090 if (dump_file && (dump_flags & TDF_DETAILS))
4091 fprintf (dump_file, "Exceeded original number of stmts (%u)."
4092 " Not profitable to emit new sequence.\n",
4093 orig_num_stmts);
4094 FOR_EACH_VEC_ELT (split_stores, i, split_store)
4095 delete split_store;
4096 return false;
4101 gimple_stmt_iterator last_gsi = gsi_for_stmt (group->last_stmt);
4102 gimple_seq seq = NULL;
4103 tree last_vdef, new_vuse;
4104 last_vdef = gimple_vdef (group->last_stmt);
4105 new_vuse = gimple_vuse (group->last_stmt);
4106 tree bswap_res = NULL_TREE;
4108 /* Clobbers are not removed. */
4109 if (gimple_clobber_p (group->last_stmt))
4111 new_vuse = make_ssa_name (gimple_vop (cfun), group->last_stmt);
4112 gimple_set_vdef (group->last_stmt, new_vuse);
4115 if (group->stores[0]->rhs_code == LROTATE_EXPR
4116 || group->stores[0]->rhs_code == NOP_EXPR)
4118 tree fndecl = NULL_TREE, bswap_type = NULL_TREE, load_type;
4119 gimple *ins_stmt = group->stores[0]->ins_stmt;
4120 struct symbolic_number *n = &group->stores[0]->n;
4121 bool bswap = group->stores[0]->rhs_code == LROTATE_EXPR;
4123 switch (n->range)
4125 case 16:
4126 load_type = bswap_type = uint16_type_node;
4127 break;
4128 case 32:
4129 load_type = uint32_type_node;
4130 if (bswap)
4132 fndecl = builtin_decl_explicit (BUILT_IN_BSWAP32);
4133 bswap_type = TREE_VALUE (TYPE_ARG_TYPES (TREE_TYPE (fndecl)));
4135 break;
4136 case 64:
4137 load_type = uint64_type_node;
4138 if (bswap)
4140 fndecl = builtin_decl_explicit (BUILT_IN_BSWAP64);
4141 bswap_type = TREE_VALUE (TYPE_ARG_TYPES (TREE_TYPE (fndecl)));
4143 break;
4144 default:
4145 gcc_unreachable ();
4148 /* If each load has the vuse of the corresponding store,
4149 we've already checked the aliasing in try_coalesce_bswap and
4150 we want to sink the needed load into seq, so we need to use
4151 new_vuse on the load.  */
4152 if (n->base_addr)
4154 if (n->vuse == NULL)
4156 n->vuse = new_vuse;
4157 ins_stmt = NULL;
4159 else
4160 /* Update vuse in case it has been changed by output_merged_stores.  */
4161 n->vuse = gimple_vuse (ins_stmt);
4163 bswap_res = bswap_replace (gsi_start (seq), ins_stmt, fndecl,
4164 bswap_type, load_type, n, bswap);
4165 gcc_assert (bswap_res);
4168 gimple *stmt = NULL;
4169 auto_vec<gimple *, 32> orig_stmts;
4170 gimple_seq this_seq;
4171 tree addr = force_gimple_operand_1 (unshare_expr (base_addr), &this_seq,
4172 is_gimple_mem_ref_addr, NULL_TREE);
4173 gimple_seq_add_seq_without_update (&seq, this_seq);
4175 tree load_addr[2] = { NULL_TREE, NULL_TREE };
4176 gimple_seq load_seq[2] = { NULL, NULL };
4177 gimple_stmt_iterator load_gsi[2] = { gsi_none (), gsi_none () };
4178 for (int j = 0; j < 2; ++j)
4180 store_operand_info &op = group->stores[0]->ops[j];
4181 if (op.base_addr == NULL_TREE)
4182 continue;
4184 store_immediate_info *infol = group->stores.last ();
4185 if (gimple_vuse (op.stmt) == gimple_vuse (infol->ops[j].stmt))
4187 /* We can't pick the location randomly; while we've verified
4188 all the loads have the same vuse, they can still be in different
4189 basic blocks and we need to pick the one from the last bb:
4190 int x = q[0];
4191 if (x == N) return;
4192 int y = q[1];
4193 p[0] = x;
4194 p[1] = y;
4195 otherwise if we put the wider load at the q[0] load, we might
4196 segfault if q[1] is not mapped. */
4197 basic_block bb = gimple_bb (op.stmt);
4198 gimple *ostmt = op.stmt;
4199 store_immediate_info *info;
4200 FOR_EACH_VEC_ELT (group->stores, i, info)
4202 gimple *tstmt = info->ops[j].stmt;
4203 basic_block tbb = gimple_bb (tstmt);
4204 if (dominated_by_p (CDI_DOMINATORS, tbb, bb))
4206 ostmt = tstmt;
4207 bb = tbb;
4210 load_gsi[j] = gsi_for_stmt (ostmt);
4211 load_addr[j]
4212 = force_gimple_operand_1 (unshare_expr (op.base_addr),
4213 &load_seq[j], is_gimple_mem_ref_addr,
4214 NULL_TREE);
4216 else if (operand_equal_p (base_addr, op.base_addr, 0))
4217 load_addr[j] = addr;
4218 else
4220 load_addr[j]
4221 = force_gimple_operand_1 (unshare_expr (op.base_addr),
4222 &this_seq, is_gimple_mem_ref_addr,
4223 NULL_TREE);
4224 gimple_seq_add_seq_without_update (&seq, this_seq);
4228 FOR_EACH_VEC_ELT (split_stores, i, split_store)
4230 const unsigned HOST_WIDE_INT try_size = split_store->size;
4231 const unsigned HOST_WIDE_INT try_pos = split_store->bytepos;
4232 const unsigned HOST_WIDE_INT try_bitpos = try_pos * BITS_PER_UNIT;
4233 const unsigned HOST_WIDE_INT try_align = split_store->align;
4234 const unsigned HOST_WIDE_INT try_offset = try_pos - start_byte_pos;
4235 tree dest, src;
4236 location_t loc;
4238 if (split_store->orig)
4240 /* If there is just a single non-clobber constituent store
4241 which covers the whole area, just reuse the lhs and rhs. */
4242 gimple *orig_stmt = NULL;
4243 store_immediate_info *store;
4244 unsigned int j;
4245 FOR_EACH_VEC_ELT (split_store->orig_stores, j, store)
4246 if (!gimple_clobber_p (store->stmt))
4248 orig_stmt = store->stmt;
4249 break;
4251 dest = gimple_assign_lhs (orig_stmt);
4252 src = gimple_assign_rhs1 (orig_stmt);
4253 loc = gimple_location (orig_stmt);
4255 else
4257 store_immediate_info *info;
4258 unsigned short clique, base;
4259 unsigned int k;
4260 FOR_EACH_VEC_ELT (split_store->orig_stores, k, info)
4261 orig_stmts.safe_push (info->stmt);
4262 tree offset_type
4263 = get_alias_type_for_stmts (orig_stmts, false, &clique, &base);
4264 tree dest_type;
4265 loc = get_location_for_stmts (orig_stmts);
4266 orig_stmts.truncate (0);
4268 if (group->string_concatenation)
4269 dest_type
4270 = build_array_type_nelts (char_type_node,
4271 try_size / BITS_PER_UNIT);
4272 else
4274 dest_type = build_nonstandard_integer_type (try_size, UNSIGNED);
4275 dest_type = build_aligned_type (dest_type, try_align);
4277 dest = fold_build2 (MEM_REF, dest_type, addr,
4278 build_int_cst (offset_type, try_pos));
4279 if (TREE_CODE (dest) == MEM_REF)
4281 MR_DEPENDENCE_CLIQUE (dest) = clique;
4282 MR_DEPENDENCE_BASE (dest) = base;
4285 tree mask;
4286 if (bswap_res || group->string_concatenation)
4287 mask = integer_zero_node;
4288 else
4289 mask = native_interpret_expr (dest_type,
4290 group->mask + try_offset,
4291 group->buf_size);
4293 tree ops[2];
4294 for (int j = 0;
4295 j < 1 + (split_store->orig_stores[0]->ops[1].val != NULL_TREE);
4296 ++j)
4298 store_operand_info &op = split_store->orig_stores[0]->ops[j];
4299 if (bswap_res)
4300 ops[j] = bswap_res;
4301 else if (group->string_concatenation)
4303 ops[j] = build_string (try_size / BITS_PER_UNIT,
4304 (const char *) group->val + try_offset);
4305 TREE_TYPE (ops[j]) = dest_type;
4307 else if (op.base_addr)
4309 FOR_EACH_VEC_ELT (split_store->orig_stores, k, info)
4310 orig_stmts.safe_push (info->ops[j].stmt);
4312 offset_type = get_alias_type_for_stmts (orig_stmts, true,
4313 &clique, &base);
4314 location_t load_loc = get_location_for_stmts (orig_stmts);
4315 orig_stmts.truncate (0);
4317 unsigned HOST_WIDE_INT load_align = group->load_align[j];
4318 unsigned HOST_WIDE_INT align_bitpos
4319 = known_alignment (try_bitpos
4320 - split_store->orig_stores[0]->bitpos
4321 + op.bitpos);
4322 if (align_bitpos & (load_align - 1))
4323 load_align = least_bit_hwi (align_bitpos);
4325 tree load_int_type
4326 = build_nonstandard_integer_type (try_size, UNSIGNED);
4327 load_int_type
4328 = build_aligned_type (load_int_type, load_align);
4330 poly_uint64 load_pos
4331 = exact_div (try_bitpos
4332 - split_store->orig_stores[0]->bitpos
4333 + op.bitpos,
4334 BITS_PER_UNIT);
4335 ops[j] = fold_build2 (MEM_REF, load_int_type, load_addr[j],
4336 build_int_cst (offset_type, load_pos));
4337 if (TREE_CODE (ops[j]) == MEM_REF)
4339 MR_DEPENDENCE_CLIQUE (ops[j]) = clique;
4340 MR_DEPENDENCE_BASE (ops[j]) = base;
4342 if (!integer_zerop (mask))
4344 /* The load might load some bits (that will be masked
4345 off later on) uninitialized, avoid -W*uninitialized
4346 warnings in that case. */
4347 suppress_warning (ops[j], OPT_Wuninitialized);
4350 stmt = gimple_build_assign (make_ssa_name (dest_type), ops[j]);
4351 gimple_set_location (stmt, load_loc);
4352 if (gsi_bb (load_gsi[j]))
4354 gimple_set_vuse (stmt, gimple_vuse (op.stmt));
4355 gimple_seq_add_stmt_without_update (&load_seq[j], stmt);
4357 else
4359 gimple_set_vuse (stmt, new_vuse);
4360 gimple_seq_add_stmt_without_update (&seq, stmt);
4362 ops[j] = gimple_assign_lhs (stmt);
4363 tree xor_mask;
4364 enum tree_code inv_op
4365 = invert_op (split_store, j, dest_type, xor_mask);
4366 if (inv_op != NOP_EXPR)
4368 stmt = gimple_build_assign (make_ssa_name (dest_type),
4369 inv_op, ops[j], xor_mask);
4370 gimple_set_location (stmt, load_loc);
4371 ops[j] = gimple_assign_lhs (stmt);
4373 if (gsi_bb (load_gsi[j]))
4374 gimple_seq_add_stmt_without_update (&load_seq[j],
4375 stmt);
4376 else
4377 gimple_seq_add_stmt_without_update (&seq, stmt);
4380 else
4381 ops[j] = native_interpret_expr (dest_type,
4382 group->val + try_offset,
4383 group->buf_size);
4386 switch (split_store->orig_stores[0]->rhs_code)
4388 case BIT_AND_EXPR:
4389 case BIT_IOR_EXPR:
4390 case BIT_XOR_EXPR:
4391 FOR_EACH_VEC_ELT (split_store->orig_stores, k, info)
4393 tree rhs1 = gimple_assign_rhs1 (info->stmt);
4394 orig_stmts.safe_push (SSA_NAME_DEF_STMT (rhs1));
4396 location_t bit_loc;
4397 bit_loc = get_location_for_stmts (orig_stmts);
4398 orig_stmts.truncate (0);
4400 stmt
4401 = gimple_build_assign (make_ssa_name (dest_type),
4402 split_store->orig_stores[0]->rhs_code,
4403 ops[0], ops[1]);
4404 gimple_set_location (stmt, bit_loc);
4405 /* If there is just one load and there is a separate
4406 load_seq[0], emit the bitwise op right after it. */
4407 if (load_addr[1] == NULL_TREE && gsi_bb (load_gsi[0]))
4408 gimple_seq_add_stmt_without_update (&load_seq[0], stmt);
4409 /* Otherwise, if at least one load is in seq, we need to
4410 emit the bitwise op right before the store. If there
4411 are two loads and they are emitted somewhere else, it would
4412 be better to emit the bitwise op as early as possible;
4413 we don't track where that would be possible right now
4414 though. */
4415 else
4416 gimple_seq_add_stmt_without_update (&seq, stmt);
4417 src = gimple_assign_lhs (stmt);
4418 tree xor_mask;
4419 enum tree_code inv_op;
4420 inv_op = invert_op (split_store, 2, dest_type, xor_mask);
4421 if (inv_op != NOP_EXPR)
4423 stmt = gimple_build_assign (make_ssa_name (dest_type),
4424 inv_op, src, xor_mask);
4425 gimple_set_location (stmt, bit_loc);
4426 if (load_addr[1] == NULL_TREE && gsi_bb (load_gsi[0]))
4427 gimple_seq_add_stmt_without_update (&load_seq[0], stmt);
4428 else
4429 gimple_seq_add_stmt_without_update (&seq, stmt);
4430 src = gimple_assign_lhs (stmt);
4432 break;
4433 case LROTATE_EXPR:
4434 case NOP_EXPR:
4435 src = ops[0];
4436 if (!is_gimple_val (src))
4438 stmt = gimple_build_assign (make_ssa_name (TREE_TYPE (src)),
4439 src);
4440 gimple_seq_add_stmt_without_update (&seq, stmt);
4441 src = gimple_assign_lhs (stmt);
4443 if (!useless_type_conversion_p (dest_type, TREE_TYPE (src)))
4445 stmt = gimple_build_assign (make_ssa_name (dest_type),
4446 NOP_EXPR, src);
4447 gimple_seq_add_stmt_without_update (&seq, stmt);
4448 src = gimple_assign_lhs (stmt);
4450 inv_op = invert_op (split_store, 2, dest_type, xor_mask);
4451 if (inv_op != NOP_EXPR)
4453 stmt = gimple_build_assign (make_ssa_name (dest_type),
4454 inv_op, src, xor_mask);
4455 gimple_set_location (stmt, loc);
4456 gimple_seq_add_stmt_without_update (&seq, stmt);
4457 src = gimple_assign_lhs (stmt);
4459 break;
4460 default:
4461 src = ops[0];
4462 break;
4465 /* If bit insertion is required, we use the source as an accumulator
4466 into which the successive bit-field values are manually inserted.
4467 FIXME: perhaps use BIT_INSERT_EXPR instead in some cases? */
4468 if (group->bit_insertion)
4469 FOR_EACH_VEC_ELT (split_store->orig_stores, k, info)
4470 if (info->rhs_code == BIT_INSERT_EXPR
4471 && info->bitpos < try_bitpos + try_size
4472 && info->bitpos + info->bitsize > try_bitpos)
4474 /* Mask, truncate, convert to final type, shift and ior into
4475 the accumulator. Note that every step can be a no-op. */
4476 const HOST_WIDE_INT start_gap = info->bitpos - try_bitpos;
4477 const HOST_WIDE_INT end_gap
4478 = (try_bitpos + try_size) - (info->bitpos + info->bitsize);
4479 tree tem = info->ops[0].val;
4480 if (!INTEGRAL_TYPE_P (TREE_TYPE (tem)))
4482 const unsigned HOST_WIDE_INT size
4483 = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (tem)));
4484 tree integer_type
4485 = build_nonstandard_integer_type (size, UNSIGNED);
4486 tem = gimple_build (&seq, loc, VIEW_CONVERT_EXPR,
4487 integer_type, tem);
4489 if (TYPE_PRECISION (TREE_TYPE (tem)) <= info->bitsize)
4491 tree bitfield_type
4492 = build_nonstandard_integer_type (info->bitsize,
4493 UNSIGNED);
4494 tem = gimple_convert (&seq, loc, bitfield_type, tem);
4496 else if ((BYTES_BIG_ENDIAN ? start_gap : end_gap) > 0)
4498 const unsigned HOST_WIDE_INT imask
4499 = (HOST_WIDE_INT_1U << info->bitsize) - 1;
4500 tem = gimple_build (&seq, loc,
4501 BIT_AND_EXPR, TREE_TYPE (tem), tem,
4502 build_int_cst (TREE_TYPE (tem),
4503 imask));
4505 const HOST_WIDE_INT shift
4506 = (BYTES_BIG_ENDIAN ? end_gap : start_gap);
4507 if (shift < 0)
4508 tem = gimple_build (&seq, loc,
4509 RSHIFT_EXPR, TREE_TYPE (tem), tem,
4510 build_int_cst (NULL_TREE, -shift));
4511 tem = gimple_convert (&seq, loc, dest_type, tem);
4512 if (shift > 0)
4513 tem = gimple_build (&seq, loc,
4514 LSHIFT_EXPR, dest_type, tem,
4515 build_int_cst (NULL_TREE, shift));
4516 src = gimple_build (&seq, loc,
4517 BIT_IOR_EXPR, dest_type, tem, src);
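/* Worked example (illustrative, little-endian): inserting a 3-bit field
   whose start_gap is 4 into an 8-bit accumulator SRC roughly expands to
     tem = val & 0x7;              truncate to the field width
     tem = (unsigned char) tem;    convert to dest_type
     tem = tem << 4;               shift by start_gap into position
     src = src | tem;              ior into the accumulator
   where each step may be a no-op; on big-endian targets the shift amount
   is end_gap instead.  */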
4520 if (!integer_zerop (mask))
4522 tree tem = make_ssa_name (dest_type);
4523 tree load_src = unshare_expr (dest);
4524 /* The load might load some or all bits uninitialized,
4525 avoid -W*uninitialized warnings in that case.
4526 As an optimization, if all the bits are provably
4527 uninitialized (no stores at all yet, or the previous
4528 store was a CLOBBER), it would be nice to optimize away
4529 the load and replace it e.g. with 0.  */
4530 suppress_warning (load_src, OPT_Wuninitialized);
4531 stmt = gimple_build_assign (tem, load_src);
4532 gimple_set_location (stmt, loc);
4533 gimple_set_vuse (stmt, new_vuse);
4534 gimple_seq_add_stmt_without_update (&seq, stmt);
4536 /* FIXME: If there is a single chunk of zero bits in mask,
4537 perhaps use BIT_INSERT_EXPR instead? */
4538 stmt = gimple_build_assign (make_ssa_name (dest_type),
4539 BIT_AND_EXPR, tem, mask);
4540 gimple_set_location (stmt, loc);
4541 gimple_seq_add_stmt_without_update (&seq, stmt);
4542 tem = gimple_assign_lhs (stmt);
4544 if (TREE_CODE (src) == INTEGER_CST)
4545 src = wide_int_to_tree (dest_type,
4546 wi::bit_and_not (wi::to_wide (src),
4547 wi::to_wide (mask)));
4548 else
4550 tree nmask
4551 = wide_int_to_tree (dest_type,
4552 wi::bit_not (wi::to_wide (mask)));
4553 stmt = gimple_build_assign (make_ssa_name (dest_type),
4554 BIT_AND_EXPR, src, nmask);
4555 gimple_set_location (stmt, loc);
4556 gimple_seq_add_stmt_without_update (&seq, stmt);
4557 src = gimple_assign_lhs (stmt);
4559 stmt = gimple_build_assign (make_ssa_name (dest_type),
4560 BIT_IOR_EXPR, tem, src);
4561 gimple_set_location (stmt, loc);
4562 gimple_seq_add_stmt_without_update (&seq, stmt);
4563 src = gimple_assign_lhs (stmt);
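/* Sketch of the read-modify-write emitted above (illustrative): when MASK
   is not all-zero the generated sequence is roughly
     tem_1 = [dest];           load the bytes we must preserve
     tem_2 = tem_1 & mask;     keep only bits no original store wrote
     src_2 = src_1 & ~mask;    drop those bits from the merged value
     src_3 = tem_2 | src_2;    combine both halves
   and src_3 is what the single wide store below writes out.  */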
4567 stmt = gimple_build_assign (dest, src);
4568 gimple_set_location (stmt, loc);
4569 gimple_set_vuse (stmt, new_vuse);
4570 gimple_seq_add_stmt_without_update (&seq, stmt);
4572 if (group->lp_nr && stmt_could_throw_p (cfun, stmt))
4573 add_stmt_to_eh_lp (stmt, group->lp_nr);
4575 tree new_vdef;
4576 if (i < split_stores.length () - 1)
4577 new_vdef = make_ssa_name (gimple_vop (cfun), stmt);
4578 else
4579 new_vdef = last_vdef;
4581 gimple_set_vdef (stmt, new_vdef);
4582 SSA_NAME_DEF_STMT (new_vdef) = stmt;
4583 new_vuse = new_vdef;
4586 FOR_EACH_VEC_ELT (split_stores, i, split_store)
4587 delete split_store;
4589 gcc_assert (seq);
4590 if (dump_file)
4592 fprintf (dump_file,
4593 "New sequence of %u stores to replace old one of %u stores\n",
4594 split_stores.length (), orig_num_stmts);
4595 if (dump_flags & TDF_DETAILS)
4596 print_gimple_seq (dump_file, seq, 0, TDF_VOPS | TDF_MEMSYMS);
4599 if (gimple_clobber_p (group->last_stmt))
4600 update_stmt (group->last_stmt);
4602 if (group->lp_nr > 0)
4604 /* We're going to insert a sequence of (potentially) throwing stores
4605 into an active EH region. This means that we're going to create
4606 new basic blocks with EH edges pointing to the post landing pad
4607 and, therefore, to have to update its PHI nodes, if any. For the
4608 virtual PHI node, we're going to use the VDEFs created above, but
4609 for the other nodes, we need to record the original reaching defs. */
4610 eh_landing_pad lp = get_eh_landing_pad_from_number (group->lp_nr);
4611 basic_block lp_bb = label_to_block (cfun, lp->post_landing_pad);
4612 basic_block last_bb = gimple_bb (group->last_stmt);
4613 edge last_edge = find_edge (last_bb, lp_bb);
4614 auto_vec<tree, 16> last_defs;
4615 gphi_iterator gpi;
4616 for (gpi = gsi_start_phis (lp_bb); !gsi_end_p (gpi); gsi_next (&gpi))
4618 gphi *phi = gpi.phi ();
4619 tree last_def;
4620 if (virtual_operand_p (gimple_phi_result (phi)))
4621 last_def = NULL_TREE;
4622 else
4623 last_def = gimple_phi_arg_def (phi, last_edge->dest_idx);
4624 last_defs.safe_push (last_def);
4627 /* Do the insertion. Then, if new basic blocks have been created in the
4628 process, rewind the chain of VDEFs created above to walk the new basic
4629 blocks and update the corresponding arguments of the PHI nodes. */
4630 update_modified_stmts (seq);
4631 if (gimple_find_sub_bbs (seq, &last_gsi))
4632 while (last_vdef != gimple_vuse (group->last_stmt))
4634 gimple *stmt = SSA_NAME_DEF_STMT (last_vdef);
4635 if (stmt_could_throw_p (cfun, stmt))
4637 edge new_edge = find_edge (gimple_bb (stmt), lp_bb);
4638 unsigned int i;
4639 for (gpi = gsi_start_phis (lp_bb), i = 0;
4640 !gsi_end_p (gpi);
4641 gsi_next (&gpi), i++)
4643 gphi *phi = gpi.phi ();
4644 tree new_def;
4645 if (virtual_operand_p (gimple_phi_result (phi)))
4646 new_def = last_vdef;
4647 else
4648 new_def = last_defs[i];
4649 add_phi_arg (phi, new_def, new_edge, UNKNOWN_LOCATION);
4652 last_vdef = gimple_vuse (stmt);
4655 else
4656 gsi_insert_seq_after (&last_gsi, seq, GSI_SAME_STMT);
4658 for (int j = 0; j < 2; ++j)
4659 if (load_seq[j])
4660 gsi_insert_seq_after (&load_gsi[j], load_seq[j], GSI_SAME_STMT);
4662 return true;
4665 /* Process the merged_store_group objects created in the coalescing phase.
4666 The stores are all against the base object BASE.
4667 Try to output the widened stores and delete the original statements if
4668 successful. Return true iff any changes were made. */
4670 bool
4671 imm_store_chain_info::output_merged_stores ()
4673 unsigned int i;
4674 merged_store_group *merged_store;
4675 bool ret = false;
4676 FOR_EACH_VEC_ELT (m_merged_store_groups, i, merged_store)
4678 if (dbg_cnt (store_merging)
4679 && output_merged_store (merged_store))
4681 unsigned int j;
4682 store_immediate_info *store;
4683 FOR_EACH_VEC_ELT (merged_store->stores, j, store)
4685 gimple *stmt = store->stmt;
4686 gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
4687 /* Don't remove clobbers, they are still useful even if
4688 everything is overwritten afterwards. */
4689 if (gimple_clobber_p (stmt))
4690 continue;
4691 gsi_remove (&gsi, true);
4692 if (store->lp_nr)
4693 remove_stmt_from_eh_lp (stmt);
4694 if (stmt != merged_store->last_stmt)
4696 unlink_stmt_vdef (stmt);
4697 release_defs (stmt);
4700 ret = true;
4703 if (ret && dump_file)
4704 fprintf (dump_file, "Merging successful!\n");
4706 return ret;
4709 /* Coalesce the store_immediate_info objects recorded against the base object
4710 BASE in the first phase and output them.
4711 Delete the allocated structures.
4712 Return true if any changes were made. */
4714 bool
4715 imm_store_chain_info::terminate_and_process_chain ()
4717 if (dump_file && (dump_flags & TDF_DETAILS))
4718 fprintf (dump_file, "Terminating chain with %u stores\n",
4719 m_store_info.length ());
4720 /* Process store chain. */
4721 bool ret = false;
4722 if (m_store_info.length () > 1)
4724 ret = coalesce_immediate_stores ();
4725 if (ret)
4726 ret = output_merged_stores ();
4729 /* Delete all the entries we allocated ourselves. */
4730 store_immediate_info *info;
4731 unsigned int i;
4732 FOR_EACH_VEC_ELT (m_store_info, i, info)
4733 delete info;
4735 merged_store_group *merged_info;
4736 FOR_EACH_VEC_ELT (m_merged_store_groups, i, merged_info)
4737 delete merged_info;
4739 return ret;
4742 /* Return true iff LHS is a destination potentially interesting for
4743 store merging. In practice these are the codes that get_inner_reference
4744 can process. */
4746 static bool
4747 lhs_valid_for_store_merging_p (tree lhs)
4749 if (DECL_P (lhs))
4750 return true;
4752 switch (TREE_CODE (lhs))
4754 case ARRAY_REF:
4755 case ARRAY_RANGE_REF:
4756 case BIT_FIELD_REF:
4757 case COMPONENT_REF:
4758 case MEM_REF:
4759 case VIEW_CONVERT_EXPR:
4760 return true;
4761 default:
4762 return false;
4765 gcc_unreachable ();
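/* Illustrative examples (assumption, not part of the original code):
     s.f = ...;    COMPONENT_REF       accepted
     a[i] = ...;   ARRAY_REF           accepted
     *p = ...;     MEM_REF             accepted
     x = ...;      plain DECL          accepted
   whereas e.g. a store through a REALPART_EXPR hits the default case and is
   rejected.  */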
4768 /* Return true if the tree RHS is a constant we want to consider
4769 during store merging. In practice accept all codes that
4770 native_encode_expr accepts. */
4772 static bool
4773 rhs_valid_for_store_merging_p (tree rhs)
4775 unsigned HOST_WIDE_INT size;
4776 if (TREE_CODE (rhs) == CONSTRUCTOR
4777 && CONSTRUCTOR_NELTS (rhs) == 0
4778 && TYPE_SIZE_UNIT (TREE_TYPE (rhs))
4779 && tree_fits_uhwi_p (TYPE_SIZE_UNIT (TREE_TYPE (rhs))))
4780 return true;
4781 return (GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (rhs))).is_constant (&size)
4782 && native_encode_expr (rhs, NULL, size) != 0);
4785 /* Adjust *PBITPOS, *PBITREGION_START and *PBITREGION_END by BYTE_OFF bytes
4786 and return true on success or false on failure. */
4788 static bool
4789 adjust_bit_pos (poly_offset_int byte_off,
4790 poly_int64 *pbitpos,
4791 poly_uint64 *pbitregion_start,
4792 poly_uint64 *pbitregion_end)
4794 poly_offset_int bit_off = byte_off << LOG2_BITS_PER_UNIT;
4795 bit_off += *pbitpos;
4797 if (known_ge (bit_off, 0) && bit_off.to_shwi (pbitpos))
4799 if (maybe_ne (*pbitregion_end, 0U))
4801 bit_off = byte_off << LOG2_BITS_PER_UNIT;
4802 bit_off += *pbitregion_start;
4803 if (bit_off.to_uhwi (pbitregion_start))
4805 bit_off = byte_off << LOG2_BITS_PER_UNIT;
4806 bit_off += *pbitregion_end;
4807 if (!bit_off.to_uhwi (pbitregion_end))
4808 *pbitregion_end = 0;
4810 else
4811 *pbitregion_end = 0;
4813 return true;
4815 else
4816 return false;
4819 /* If MEM is a memory reference usable for store merging (either as
4820 store destination or for loads), return the non-NULL base_addr
4821 and set *PBITSIZE, *PBITPOS, *PBITREGION_START and *PBITREGION_END.
4822 Otherwise return NULL, *PBITPOS should be still valid even for that
4823 case. */
4825 static tree
4826 mem_valid_for_store_merging (tree mem, poly_uint64 *pbitsize,
4827 poly_uint64 *pbitpos,
4828 poly_uint64 *pbitregion_start,
4829 poly_uint64 *pbitregion_end)
4831 poly_int64 bitsize, bitpos;
4832 poly_uint64 bitregion_start = 0, bitregion_end = 0;
4833 machine_mode mode;
4834 int unsignedp = 0, reversep = 0, volatilep = 0;
4835 tree offset;
4836 tree base_addr = get_inner_reference (mem, &bitsize, &bitpos, &offset, &mode,
4837 &unsignedp, &reversep, &volatilep);
4838 *pbitsize = bitsize;
4839 if (known_eq (bitsize, 0))
4840 return NULL_TREE;
4842 if (TREE_CODE (mem) == COMPONENT_REF
4843 && DECL_BIT_FIELD_TYPE (TREE_OPERAND (mem, 1)))
4845 get_bit_range (&bitregion_start, &bitregion_end, mem, &bitpos, &offset);
4846 if (maybe_ne (bitregion_end, 0U))
4847 bitregion_end += 1;
4850 if (reversep)
4851 return NULL_TREE;
4853 /* We do not want to rewrite TARGET_MEM_REFs. */
4854 if (TREE_CODE (base_addr) == TARGET_MEM_REF)
4855 return NULL_TREE;
4856 /* In some cases get_inner_reference may return a
4857 MEM_REF [ptr + byteoffset]. For the purposes of this pass
4858 canonicalize the base_addr to MEM_REF [ptr] and take
4859 byteoffset into account in the bitpos. This occurs in
4860 PR 23684 and this way we can catch more chains. */
4861 else if (TREE_CODE (base_addr) == MEM_REF)
4863 if (!adjust_bit_pos (mem_ref_offset (base_addr), &bitpos,
4864 &bitregion_start, &bitregion_end))
4865 return NULL_TREE;
4866 base_addr = TREE_OPERAND (base_addr, 0);
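/* Illustrative example (not from the original code): if get_inner_reference
   returns MEM_REF [ptr + 4] with bitpos 8, the code above canonicalizes the
   base to MEM_REF [ptr] and folds the byte offset into the position, giving
   bitpos 8 + 4 * BITS_PER_UNIT == 40 on 8-bit-byte targets, so stores
   through ptr at different constant offsets end up in the same chain.  */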
4868 /* get_inner_reference returns the base object, get at its
4869 address now. */
4870 else
4872 if (maybe_lt (bitpos, 0))
4873 return NULL_TREE;
4874 base_addr = build_fold_addr_expr (base_addr);
4877 if (offset)
4879 /* If the access is variable offset then a base decl has to be
4880 address-taken to be able to emit pointer-based stores to it.
4881 ??? We might be able to get away with re-using the original
4882 base up to the first variable part and then wrapping that inside
4883 a BIT_FIELD_REF. */
4884 tree base = get_base_address (base_addr);
4885 if (!base || (DECL_P (base) && !TREE_ADDRESSABLE (base)))
4886 return NULL_TREE;
4888 /* Similarly to above for the base, remove constant from the offset. */
4889 if (TREE_CODE (offset) == PLUS_EXPR
4890 && TREE_CODE (TREE_OPERAND (offset, 1)) == INTEGER_CST
4891 && adjust_bit_pos (wi::to_poly_offset (TREE_OPERAND (offset, 1)),
4892 &bitpos, &bitregion_start, &bitregion_end))
4893 offset = TREE_OPERAND (offset, 0);
4895 base_addr = build2 (POINTER_PLUS_EXPR, TREE_TYPE (base_addr),
4896 base_addr, offset);
4899 if (known_eq (bitregion_end, 0U))
4901 bitregion_start = round_down_to_byte_boundary (bitpos);
4902 bitregion_end = round_up_to_byte_boundary (bitpos + bitsize);
4905 *pbitsize = bitsize;
4906 *pbitpos = bitpos;
4907 *pbitregion_start = bitregion_start;
4908 *pbitregion_end = bitregion_end;
4909 return base_addr;
4912 /* Return true if STMT is a load that can be used for store merging.
4913 In that case fill in *OP. BITSIZE, BITPOS, BITREGION_START and
4914 BITREGION_END are properties of the corresponding store. */
4916 static bool
4917 handled_load (gimple *stmt, store_operand_info *op,
4918 poly_uint64 bitsize, poly_uint64 bitpos,
4919 poly_uint64 bitregion_start, poly_uint64 bitregion_end)
4921 if (!is_gimple_assign (stmt))
4922 return false;
4923 if (gimple_assign_rhs_code (stmt) == BIT_NOT_EXPR)
4925 tree rhs1 = gimple_assign_rhs1 (stmt);
4926 if (TREE_CODE (rhs1) == SSA_NAME
4927 && handled_load (SSA_NAME_DEF_STMT (rhs1), op, bitsize, bitpos,
4928 bitregion_start, bitregion_end))
4930 /* Don't allow _1 = load; _2 = ~_1; _3 = ~_2; which should have
4931 been optimized earlier, but if allowed here, would confuse the
4932 multiple uses counting. */
4933 if (op->bit_not_p)
4934 return false;
4935 op->bit_not_p = !op->bit_not_p;
4936 return true;
4938 return false;
4940 if (gimple_vuse (stmt)
4941 && gimple_assign_load_p (stmt)
4942 && !stmt_can_throw_internal (cfun, stmt)
4943 && !gimple_has_volatile_ops (stmt))
4945 tree mem = gimple_assign_rhs1 (stmt);
4946 op->base_addr
4947 = mem_valid_for_store_merging (mem, &op->bitsize, &op->bitpos,
4948 &op->bitregion_start,
4949 &op->bitregion_end);
4950 if (op->base_addr != NULL_TREE
4951 && known_eq (op->bitsize, bitsize)
4952 && multiple_p (op->bitpos - bitpos, BITS_PER_UNIT)
4953 && known_ge (op->bitpos - op->bitregion_start,
4954 bitpos - bitregion_start)
4955 && known_ge (op->bitregion_end - op->bitpos,
4956 bitregion_end - bitpos))
4958 op->stmt = stmt;
4959 op->val = mem;
4960 op->bit_not_p = false;
4961 return true;
4964 return false;
4967 /* Return the index number of the landing pad for STMT, if any. */
4969 static int
4970 lp_nr_for_store (gimple *stmt)
4972 if (!cfun->can_throw_non_call_exceptions || !cfun->eh)
4973 return 0;
4975 if (!stmt_could_throw_p (cfun, stmt))
4976 return 0;
4978 return lookup_stmt_eh_lp (stmt);
4981 /* Record the store STMT for store merging optimization if it can be
4982 optimized. Return true if any changes were made. */
4984 bool
4985 pass_store_merging::process_store (gimple *stmt)
4987 tree lhs = gimple_assign_lhs (stmt);
4988 tree rhs = gimple_assign_rhs1 (stmt);
4989 poly_uint64 bitsize, bitpos = 0;
4990 poly_uint64 bitregion_start = 0, bitregion_end = 0;
4991 tree base_addr
4992 = mem_valid_for_store_merging (lhs, &bitsize, &bitpos,
4993 &bitregion_start, &bitregion_end);
4994 if (known_eq (bitsize, 0U))
4995 return false;
4997 bool invalid = (base_addr == NULL_TREE
4998 || (maybe_gt (bitsize,
4999 (unsigned int) MAX_BITSIZE_MODE_ANY_INT)
5000 && TREE_CODE (rhs) != INTEGER_CST
5001 && (TREE_CODE (rhs) != CONSTRUCTOR
5002 || CONSTRUCTOR_NELTS (rhs) != 0)));
5003 enum tree_code rhs_code = ERROR_MARK;
5004 bool bit_not_p = false;
5005 struct symbolic_number n;
5006 gimple *ins_stmt = NULL;
5007 store_operand_info ops[2];
5008 if (invalid)
5010 else if (TREE_CODE (rhs) == STRING_CST)
5012 rhs_code = STRING_CST;
5013 ops[0].val = rhs;
5015 else if (rhs_valid_for_store_merging_p (rhs))
5017 rhs_code = INTEGER_CST;
5018 ops[0].val = rhs;
5020 else if (TREE_CODE (rhs) == SSA_NAME)
5022 gimple *def_stmt = SSA_NAME_DEF_STMT (rhs), *def_stmt1, *def_stmt2;
5023 if (!is_gimple_assign (def_stmt))
5024 invalid = true;
5025 else if (handled_load (def_stmt, &ops[0], bitsize, bitpos,
5026 bitregion_start, bitregion_end))
5027 rhs_code = MEM_REF;
5028 else if (gimple_assign_rhs_code (def_stmt) == BIT_NOT_EXPR)
5030 tree rhs1 = gimple_assign_rhs1 (def_stmt);
5031 if (TREE_CODE (rhs1) == SSA_NAME
5032 && is_gimple_assign (SSA_NAME_DEF_STMT (rhs1)))
5034 bit_not_p = true;
5035 def_stmt = SSA_NAME_DEF_STMT (rhs1);
5039 if (rhs_code == ERROR_MARK && !invalid)
5040 switch ((rhs_code = gimple_assign_rhs_code (def_stmt)))
5042 case BIT_AND_EXPR:
5043 case BIT_IOR_EXPR:
5044 case BIT_XOR_EXPR:
5045 tree rhs1, rhs2;
5046 rhs1 = gimple_assign_rhs1 (def_stmt);
5047 rhs2 = gimple_assign_rhs2 (def_stmt);
5048 invalid = true;
5049 if (TREE_CODE (rhs1) != SSA_NAME)
5050 break;
5051 def_stmt1 = SSA_NAME_DEF_STMT (rhs1);
5052 if (!is_gimple_assign (def_stmt1)
5053 || !handled_load (def_stmt1, &ops[0], bitsize, bitpos,
5054 bitregion_start, bitregion_end))
5055 break;
5056 if (rhs_valid_for_store_merging_p (rhs2))
5057 ops[1].val = rhs2;
5058 else if (TREE_CODE (rhs2) != SSA_NAME)
5059 break;
5060 else
5062 def_stmt2 = SSA_NAME_DEF_STMT (rhs2);
5063 if (!is_gimple_assign (def_stmt2))
5064 break;
5065 else if (!handled_load (def_stmt2, &ops[1], bitsize, bitpos,
5066 bitregion_start, bitregion_end))
5067 break;
5069 invalid = false;
5070 break;
5071 default:
5072 invalid = true;
5073 break;
5076 unsigned HOST_WIDE_INT const_bitsize;
5077 if (bitsize.is_constant (&const_bitsize)
5078 && (const_bitsize % BITS_PER_UNIT) == 0
5079 && const_bitsize <= 64
5080 && multiple_p (bitpos, BITS_PER_UNIT))
5082 ins_stmt = find_bswap_or_nop_1 (def_stmt, &n, 12);
5083 if (ins_stmt)
5085 uint64_t nn = n.n;
5086 for (unsigned HOST_WIDE_INT i = 0;
5087 i < const_bitsize;
5088 i += BITS_PER_UNIT, nn >>= BITS_PER_MARKER)
5089 if ((nn & MARKER_MASK) == 0
5090 || (nn & MARKER_MASK) == MARKER_BYTE_UNKNOWN)
5092 ins_stmt = NULL;
5093 break;
5095 if (ins_stmt)
5097 if (invalid)
5099 rhs_code = LROTATE_EXPR;
5100 ops[0].base_addr = NULL_TREE;
5101 ops[1].base_addr = NULL_TREE;
5103 invalid = false;
5108 if (invalid
5109 && bitsize.is_constant (&const_bitsize)
5110 && ((const_bitsize % BITS_PER_UNIT) != 0
5111 || !multiple_p (bitpos, BITS_PER_UNIT))
5112 && const_bitsize <= MAX_FIXED_MODE_SIZE)
5114 /* Bypass a conversion to the bit-field type. */
5115 if (!bit_not_p
5116 && is_gimple_assign (def_stmt)
5117 && CONVERT_EXPR_CODE_P (rhs_code))
5119 tree rhs1 = gimple_assign_rhs1 (def_stmt);
5120 if (TREE_CODE (rhs1) == SSA_NAME
5121 && INTEGRAL_TYPE_P (TREE_TYPE (rhs1)))
5122 rhs = rhs1;
5124 rhs_code = BIT_INSERT_EXPR;
5125 bit_not_p = false;
5126 ops[0].val = rhs;
5127 ops[0].base_addr = NULL_TREE;
5128 ops[1].base_addr = NULL_TREE;
5129 invalid = false;
5132 else
5133 invalid = true;
5135 unsigned HOST_WIDE_INT const_bitsize, const_bitpos;
5136 unsigned HOST_WIDE_INT const_bitregion_start, const_bitregion_end;
5137 if (invalid
5138 || !bitsize.is_constant (&const_bitsize)
5139 || !bitpos.is_constant (&const_bitpos)
5140 || !bitregion_start.is_constant (&const_bitregion_start)
5141 || !bitregion_end.is_constant (&const_bitregion_end))
5142 return terminate_all_aliasing_chains (NULL, stmt);
5144 if (!ins_stmt)
5145 memset (&n, 0, sizeof (n));
5147 class imm_store_chain_info **chain_info = NULL;
5148 bool ret = false;
5149 if (base_addr)
5150 chain_info = m_stores.get (base_addr);
5152 store_immediate_info *info;
5153 if (chain_info)
5155 unsigned int ord = (*chain_info)->m_store_info.length ();
5156 info = new store_immediate_info (const_bitsize, const_bitpos,
5157 const_bitregion_start,
5158 const_bitregion_end,
5159 stmt, ord, rhs_code, n, ins_stmt,
5160 bit_not_p, lp_nr_for_store (stmt),
5161 ops[0], ops[1]);
5162 if (dump_file && (dump_flags & TDF_DETAILS))
5164 fprintf (dump_file, "Recording immediate store from stmt:\n");
5165 print_gimple_stmt (dump_file, stmt, 0);
5167 (*chain_info)->m_store_info.safe_push (info);
5168 m_n_stores++;
5169 ret |= terminate_all_aliasing_chains (chain_info, stmt);
5170 /* If we reach the limit of stores to merge in a chain terminate and
5171 process the chain now. */
5172 if ((*chain_info)->m_store_info.length ()
5173 == (unsigned int) param_max_stores_to_merge)
5175 if (dump_file && (dump_flags & TDF_DETAILS))
5176 fprintf (dump_file,
5177 "Reached maximum number of statements to merge:\n");
5178 ret |= terminate_and_process_chain (*chain_info);
5181 else
5183 /* Store aliases any existing chain? */
5184 ret |= terminate_all_aliasing_chains (NULL, stmt);
5186 /* Start a new chain. */
5187 class imm_store_chain_info *new_chain
5188 = new imm_store_chain_info (m_stores_head, base_addr);
5189 info = new store_immediate_info (const_bitsize, const_bitpos,
5190 const_bitregion_start,
5191 const_bitregion_end,
5192 stmt, 0, rhs_code, n, ins_stmt,
5193 bit_not_p, lp_nr_for_store (stmt),
5194 ops[0], ops[1]);
5195 new_chain->m_store_info.safe_push (info);
5196 m_n_stores++;
5197 m_stores.put (base_addr, new_chain);
5198 m_n_chains++;
5199 if (dump_file && (dump_flags & TDF_DETAILS))
5201 fprintf (dump_file, "Starting active chain number %u with statement:\n",
5202 m_n_chains);
5203 print_gimple_stmt (dump_file, stmt, 0);
5204 fprintf (dump_file, "The base object is:\n");
5205 print_generic_expr (dump_file, base_addr);
5206 fprintf (dump_file, "\n");
5210 /* Prune oldest chains so that after adding the chain or store above
5211 we're again within the limits set by the params. */
5212 if (m_n_chains > (unsigned)param_max_store_chains_to_track
5213 || m_n_stores > (unsigned)param_max_stores_to_track)
5215 if (dump_file && (dump_flags & TDF_DETAILS))
5216 fprintf (dump_file, "Too many chains (%u > %d) or stores (%u > %d), "
5217 "terminating oldest chain(s).\n", m_n_chains,
5218 param_max_store_chains_to_track, m_n_stores,
5219 param_max_stores_to_track);
5220 imm_store_chain_info **e = &m_stores_head;
5221 unsigned idx = 0;
5222 unsigned n_stores = 0;
5223 while (*e)
5225 if (idx >= (unsigned)param_max_store_chains_to_track
5226 || (n_stores + (*e)->m_store_info.length ()
5227 > (unsigned)param_max_stores_to_track))
5228 ret |= terminate_and_process_chain (*e);
5229 else
5231 n_stores += (*e)->m_store_info.length ();
5232 e = &(*e)->next;
5233 ++idx;
5238 return ret;
5241 /* Return true if STMT is a store valid for store merging. */
5243 static bool
5244 store_valid_for_store_merging_p (gimple *stmt)
5246 return gimple_assign_single_p (stmt)
5247 && gimple_vdef (stmt)
5248 && lhs_valid_for_store_merging_p (gimple_assign_lhs (stmt))
5249 && (!gimple_has_volatile_ops (stmt) || gimple_clobber_p (stmt));
5252 enum basic_block_status { BB_INVALID, BB_VALID, BB_EXTENDED_VALID };
5254 /* Return the status of basic block BB wrt store merging. */
5256 static enum basic_block_status
5257 get_status_for_store_merging (basic_block bb)
5259 unsigned int num_statements = 0;
5260 unsigned int num_constructors = 0;
5261 gimple_stmt_iterator gsi;
5262 edge e;
5264 for (gsi = gsi_after_labels (bb); !gsi_end_p (gsi); gsi_next (&gsi))
5266 gimple *stmt = gsi_stmt (gsi);
5268 if (is_gimple_debug (stmt))
5269 continue;
5271 if (store_valid_for_store_merging_p (stmt) && ++num_statements >= 2)
5272 break;
5274 if (is_gimple_assign (stmt)
5275 && gimple_assign_rhs_code (stmt) == CONSTRUCTOR)
5277 tree rhs = gimple_assign_rhs1 (stmt);
5278 if (VECTOR_TYPE_P (TREE_TYPE (rhs))
5279 && INTEGRAL_TYPE_P (TREE_TYPE (TREE_TYPE (rhs)))
5280 && gimple_assign_lhs (stmt) != NULL_TREE)
5282 HOST_WIDE_INT sz
5283 = int_size_in_bytes (TREE_TYPE (rhs)) * BITS_PER_UNIT;
5284 if (sz == 16 || sz == 32 || sz == 64)
5286 num_constructors = 1;
5287 break;
5293 if (num_statements == 0 && num_constructors == 0)
5294 return BB_INVALID;
5296 if (cfun->can_throw_non_call_exceptions && cfun->eh
5297 && store_valid_for_store_merging_p (gimple_seq_last_stmt (bb_seq (bb)))
5298 && (e = find_fallthru_edge (bb->succs))
5299 && e->dest == bb->next_bb)
5300 return BB_EXTENDED_VALID;
5302 return (num_statements >= 2 || num_constructors) ? BB_VALID : BB_INVALID;
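/* Illustrative example (assumption, not from the original code): with
   -fnon-call-exceptions, given
     bb1:  p->a = 1;  p->b = 2;    last stmt is a mergeable store
     bb2:  p->c = 3;               fallthru successor of bb1
   bb1 is BB_EXTENDED_VALID, so the chains are kept open across the edge and
   the pass may merge all three stores even though potentially-throwing
   accesses split them into two basic blocks.  */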
5305 /* Entry point for the pass. Go over each basic block recording chains of
5306 immediate stores. Upon encountering a terminating statement (as defined
5307 by stmt_terminates_chain_p) process the recorded stores and emit the widened
5308 variants. */
5310 unsigned int
5311 pass_store_merging::execute (function *fun)
5313 basic_block bb;
5314 hash_set<gimple *> orig_stmts;
5315 bool changed = false, open_chains = false;
5317 /* If the function can throw and catch non-call exceptions, we'll be trying
5318 to merge stores across different basic blocks so we need to first unsplit
5319 the EH edges in order to streamline the CFG of the function. */
5320 if (cfun->can_throw_non_call_exceptions && cfun->eh)
5321 unsplit_eh_edges ();
5323 calculate_dominance_info (CDI_DOMINATORS);
5325 FOR_EACH_BB_FN (bb, fun)
5327 const basic_block_status bb_status = get_status_for_store_merging (bb);
5328 gimple_stmt_iterator gsi;
5330 if (open_chains && (bb_status == BB_INVALID || !single_pred_p (bb)))
5332 changed |= terminate_and_process_all_chains ();
5333 open_chains = false;
5336 if (bb_status == BB_INVALID)
5337 continue;
5339 if (dump_file && (dump_flags & TDF_DETAILS))
5340 fprintf (dump_file, "Processing basic block <%d>:\n", bb->index);
5342 for (gsi = gsi_after_labels (bb); !gsi_end_p (gsi); )
5344 gimple *stmt = gsi_stmt (gsi);
5345 gsi_next (&gsi);
5347 if (is_gimple_debug (stmt))
5348 continue;
5350 if (gimple_has_volatile_ops (stmt) && !gimple_clobber_p (stmt))
5352 /* Terminate all chains. */
5353 if (dump_file && (dump_flags & TDF_DETAILS))
5354 fprintf (dump_file, "Volatile access terminates "
5355 "all chains\n");
5356 changed |= terminate_and_process_all_chains ();
5357 open_chains = false;
5358 continue;
5361 if (is_gimple_assign (stmt)
5362 && gimple_assign_rhs_code (stmt) == CONSTRUCTOR
5363 && maybe_optimize_vector_constructor (stmt))
5364 continue;
5366 if (store_valid_for_store_merging_p (stmt))
5367 changed |= process_store (stmt);
5368 else
5369 changed |= terminate_all_aliasing_chains (NULL, stmt);
5372 if (bb_status == BB_EXTENDED_VALID)
5373 open_chains = true;
5374 else
5376 changed |= terminate_and_process_all_chains ();
5377 open_chains = false;
5381 if (open_chains)
5382 changed |= terminate_and_process_all_chains ();
5384 /* If the function can throw and catch non-call exceptions and something
5385 changed during the pass, then the CFG has (very likely) changed too. */
5386 if (cfun->can_throw_non_call_exceptions && cfun->eh && changed)
5388 free_dominance_info (CDI_DOMINATORS);
5389 return TODO_cleanup_cfg;
5392 return 0;
5395 } // anon namespace
5397 /* Construct and return a store merging pass object. */
5399 gimple_opt_pass *
5400 make_pass_store_merging (gcc::context *ctxt)
5402 return new pass_store_merging (ctxt);
5405 #if CHECKING_P
5407 namespace selftest {
5409 /* Selftests for store merging helpers. */
5411 /* Assert that all elements of the byte arrays X and Y, both of length N
5412 are equal. */
5414 static void
5415 verify_array_eq (unsigned char *x, unsigned char *y, unsigned int n)
5417 for (unsigned int i = 0; i < n; i++)
5419 if (x[i] != y[i])
5421 fprintf (stderr, "Arrays do not match. X:\n");
5422 dump_char_array (stderr, x, n);
5423 fprintf (stderr, "Y:\n");
5424 dump_char_array (stderr, y, n);
5426 ASSERT_EQ (x[i], y[i]);
5430 /* Test shift_bytes_in_array_left and that it carries bits across between
5431 bytes correctly. */
5433 static void
5434 verify_shift_bytes_in_array_left (void)
5436 /* byte 1 | byte 0
5437 00011111 | 11100000. */
5438 unsigned char orig[2] = { 0xe0, 0x1f };
5439 unsigned char in[2];
5440 memcpy (in, orig, sizeof orig);
5442 unsigned char expected[2] = { 0x80, 0x7f };
5443 shift_bytes_in_array_left (in, sizeof (in), 2);
5444 verify_array_eq (in, expected, sizeof (in));
5446 memcpy (in, orig, sizeof orig);
5447 memcpy (expected, orig, sizeof orig);
5448 /* Check that shifting by zero doesn't change anything. */
5449 shift_bytes_in_array_left (in, sizeof (in), 0);
5450 verify_array_eq (in, expected, sizeof (in));
5454 /* Test shift_bytes_in_array_right and that it carries bits across between
5455 bytes correctly. */
5457 static void
5458 verify_shift_bytes_in_array_right (void)
5460 /* byte 1 | byte 0
5461 11100000 | 00011111.  */
5462 unsigned char orig[2] = { 0x1f, 0xe0};
5463 unsigned char in[2];
5464 memcpy (in, orig, sizeof orig);
5465 unsigned char expected[2] = { 0x07, 0xf8};
5466 shift_bytes_in_array_right (in, sizeof (in), 2);
5467 verify_array_eq (in, expected, sizeof (in));
5469 memcpy (in, orig, sizeof orig);
5470 memcpy (expected, orig, sizeof orig);
5471 /* Check that shifting by zero doesn't change anything. */
5472 shift_bytes_in_array_right (in, sizeof (in), 0);
5473 verify_array_eq (in, expected, sizeof (in));
5476 /* Test clear_bit_region that it clears exactly the bits asked and
5477 nothing more. */
5479 static void
5480 verify_clear_bit_region (void)
5482 /* Start with all bits set and test clearing various patterns in them. */
5483 unsigned char orig[3] = { 0xff, 0xff, 0xff};
5484 unsigned char in[3];
5485 unsigned char expected[3];
5486 memcpy (in, orig, sizeof in);
5488 /* Check zeroing out all the bits. */
5489 clear_bit_region (in, 0, 3 * BITS_PER_UNIT);
5490 expected[0] = expected[1] = expected[2] = 0;
5491 verify_array_eq (in, expected, sizeof in);
5493 memcpy (in, orig, sizeof in);
5494 /* Leave the first and last bits intact. */
5495 clear_bit_region (in, 1, 3 * BITS_PER_UNIT - 2);
5496 expected[0] = 0x1;
5497 expected[1] = 0;
5498 expected[2] = 0x80;
5499 verify_array_eq (in, expected, sizeof in);
5502 /* Test clear_bit_region_be that it clears exactly the bits asked and
5503 nothing more. */
5505 static void
5506 verify_clear_bit_region_be (void)
5508 /* Start with all bits set and test clearing various patterns in them. */
5509 unsigned char orig[3] = { 0xff, 0xff, 0xff};
5510 unsigned char in[3];
5511 unsigned char expected[3];
5512 memcpy (in, orig, sizeof in);
5514 /* Check zeroing out all the bits. */
5515 clear_bit_region_be (in, BITS_PER_UNIT - 1, 3 * BITS_PER_UNIT);
5516 expected[0] = expected[1] = expected[2] = 0;
5517 verify_array_eq (in, expected, sizeof in);
5519 memcpy (in, orig, sizeof in);
5520 /* Leave the first and last bits intact. */
5521 clear_bit_region_be (in, BITS_PER_UNIT - 2, 3 * BITS_PER_UNIT - 2);
5522 expected[0] = 0x80;
5523 expected[1] = 0;
5524 expected[2] = 0x1;
5525 verify_array_eq (in, expected, sizeof in);
5529 /* Run all of the selftests within this file. */
5531 void
5532 store_merging_c_tests (void)
5534 verify_shift_bytes_in_array_left ();
5535 verify_shift_bytes_in_array_right ();
5536 verify_clear_bit_region ();
5537 verify_clear_bit_region_be ();
5540 } // namespace selftest
5541 #endif /* CHECKING_P. */