PR tree-optimization/87826
[official-gcc.git] / gcc / gimple-ssa-store-merging.c
1 /* GIMPLE store merging and byte swapping passes.
2 Copyright (C) 2009-2018 Free Software Foundation, Inc.
3 Contributed by ARM Ltd.
5 This file is part of GCC.
7 GCC is free software; you can redistribute it and/or modify it
8 under the terms of the GNU General Public License as published by
9 the Free Software Foundation; either version 3, or (at your option)
10 any later version.
12 GCC is distributed in the hope that it will be useful, but
13 WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
15 General Public License for more details.
17 You should have received a copy of the GNU General Public License
18 along with GCC; see the file COPYING3. If not see
19 <http://www.gnu.org/licenses/>. */
21 /* The purpose of the store merging pass is to combine multiple memory stores
22 of constant values, values loaded from memory, bitwise operations on those,
23 or bit-field values, to consecutive locations, into fewer wider stores.
25 For example, if we have a sequence performing four byte stores to
26 consecutive memory locations:
27 [p ] := imm1;
28 [p + 1B] := imm2;
29 [p + 2B] := imm3;
30 [p + 3B] := imm4;
31 we can transform this into a single 4-byte store if the target supports it:
32 [p] := imm1:imm2:imm3:imm4 concatenated according to endianness.
34 Or:
35 [p ] := [q ];
36 [p + 1B] := [q + 1B];
37 [p + 2B] := [q + 2B];
38 [p + 3B] := [q + 3B];
39 if there is no overlap, this can be transformed into a single 4-byte
40 load followed by a single 4-byte store.
42 Or:
43 [p ] := [q ] ^ imm1;
44 [p + 1B] := [q + 1B] ^ imm2;
45 [p + 2B] := [q + 2B] ^ imm3;
46 [p + 3B] := [q + 3B] ^ imm4;
47 if there is no overlap, this can be transformed into a single 4-byte
48 load, xored with imm1:imm2:imm3:imm4 and stored using a single 4-byte store.
50 Or:
51 [p:1 ] := imm;
52 [p:31] := val & 0x7FFFFFFF;
53 we can transform this into a single 4-byte store if the target supports it:
54 [p] := imm:(val & 0x7FFFFFFF) concatenated according to endianness.
56 The algorithm is applied to each basic block in three phases:
58 1) Scan through the basic block and record assignments to destinations
59 that can be expressed as a store to memory of a certain size at a certain
60 bit offset from base expressions we can handle. For bit-fields we also
61 record the surrounding bit region, i.e. bits that could be stored in
62 a read-modify-write operation when storing the bit-field. Record store
63 chains to different bases in a hash_map (m_stores) and make sure to
64 terminate such chains when appropriate (for example when the stored
65 values get used subsequently).
66 These stores can be a result of structure element initializers, array stores
67 etc. A store_immediate_info object is recorded for every such store.
68 Record as many such assignments to a single base as possible until a
69 statement that interferes with the store sequence is encountered.
70 Each store has up to 2 operands, which can be either a constant, a memory
71 load or an SSA name, from which the value to be stored can be computed.
72 At most one of the operands can be a constant. The operands are recorded
73 in store_operand_info struct.
75 2) Analyze the chains of stores recorded in phase 1) (i.e. the vector of
76 store_immediate_info objects) and coalesce contiguous stores into
77 merged_store_group objects. For bit-field stores, we don't need to
78 require the stores to be contiguous, just their surrounding bit regions
79 have to be contiguous. If the expression being stored is different
80 between adjacent stores, such as one store storing a constant and the
81 following one storing a value loaded from memory, or if the loaded memory
82 objects are not adjacent, a new merged_store_group is created as well.
84 For example, given the stores:
85 [p ] := 0;
86 [p + 1B] := 1;
87 [p + 3B] := 0;
88 [p + 4B] := 1;
89 [p + 5B] := 0;
90 [p + 6B] := 0;
91 This phase would produce two merged_store_group objects, one recording the
92 two bytes stored in the memory region [p : p + 1] and another
93 recording the four bytes stored in the memory region [p + 3 : p + 6].
95 3) The merged_store_group objects produced in phase 2) are processed
96 to generate the sequence of wider stores that set the contiguous memory
97 regions to the sequence of bytes that correspond to it. This may emit
98 multiple stores per store group to handle contiguous stores that are not
99 of a size that is a power of 2. For example it can try to emit a 40-bit
100 store as a 32-bit store followed by an 8-bit store.
101 We try to emit as wide stores as we can while respecting STRICT_ALIGNMENT
102 or TARGET_SLOW_UNALIGNED_ACCESS settings.
104 Note on endianness and example:
105 Consider 2 contiguous 16-bit stores followed by 2 contiguous 8-bit stores:
106 [p ] := 0x1234;
107 [p + 2B] := 0x5678;
108 [p + 4B] := 0xab;
109 [p + 5B] := 0xcd;
111 The memory layout for little-endian (LE) and big-endian (BE) must be:
112 p |LE|BE|
113 ---------
114 0 |34|12|
115 1 |12|34|
116 2 |78|56|
117 3 |56|78|
118 4 |ab|ab|
119 5 |cd|cd|
121 To merge these into a single 48-bit merged value 'val' in phase 2)
122 on little-endian we insert stores to higher (consecutive) bitpositions
123 into the most significant bits of the merged value.
124 The final merged value would be: 0xcdab56781234
126 For big-endian we insert stores to higher bitpositions into the least
127 significant bits of the merged value.
128 The final merged value would be: 0x12345678abcd
130 Then, in phase 3), we want to emit this 48-bit value as a 32-bit store
131 followed by a 16-bit store. Again, we must consider endianness when
132 breaking down the 48-bit value 'val' computed above.
133 For little endian we emit:
134 [p] (32-bit) := 0x56781234; // val & 0x0000ffffffff;
135 [p + 4B] (16-bit) := 0xcdab; // (val & 0xffff00000000) >> 32;
137 Whereas for big-endian we emit:
138 [p] (32-bit) := 0x12345678; // (val & 0xffffffff0000) >> 16;
139 [p + 4B] (16-bit) := 0xabcd; // val & 0x00000000ffff; */
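/* As an illustration (example source, not taken from this file), on a
   little-endian target the pass is expected to turn

     void
     set (unsigned char *p)
     {
       p[0] = 1;
       p[1] = 2;
       p[2] = 3;
       p[3] = 4;
     }

   into a single 32-bit store of the constant 0x04030201 to *p, provided
   the target allows a (possibly unaligned) 4-byte access at that
   address.  */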
141 #include "config.h"
142 #include "system.h"
143 #include "coretypes.h"
144 #include "backend.h"
145 #include "tree.h"
146 #include "gimple.h"
147 #include "builtins.h"
148 #include "fold-const.h"
149 #include "tree-pass.h"
150 #include "ssa.h"
151 #include "gimple-pretty-print.h"
152 #include "alias.h"
153 #include "fold-const.h"
154 #include "params.h"
155 #include "print-tree.h"
156 #include "tree-hash-traits.h"
157 #include "gimple-iterator.h"
158 #include "gimplify.h"
159 #include "gimple-fold.h"
160 #include "stor-layout.h"
161 #include "timevar.h"
162 #include "tree-cfg.h"
163 #include "tree-eh.h"
164 #include "target.h"
165 #include "gimplify-me.h"
166 #include "rtl.h"
167 #include "expr.h" /* For get_bit_range. */
168 #include "optabs-tree.h"
169 #include "selftest.h"
171 /* The maximum size (in bits) of the stores this pass should generate. */
172 #define MAX_STORE_BITSIZE (BITS_PER_WORD)
173 #define MAX_STORE_BYTES (MAX_STORE_BITSIZE / BITS_PER_UNIT)
175 /* Limit to bound the number of aliasing checks for loads with the same
176 vuse as the corresponding store. */
177 #define MAX_STORE_ALIAS_CHECKS 64
179 namespace {
181 struct bswap_stat
183 /* Number of hand-written 16-bit nop / bswaps found. */
184 int found_16bit;
186 /* Number of hand-written 32-bit nop / bswaps found. */
187 int found_32bit;
189 /* Number of hand-written 64-bit nop / bswaps found. */
190 int found_64bit;
191 } nop_stats, bswap_stats;
193 /* A symbolic number structure is used to detect byte permutation and selection
194 patterns of a source. To achieve that, its field N contains an artificial
195 number consisting of BITS_PER_MARKER sized markers tracking where each
196 byte comes from in the source:
198 0 - target byte has the value 0
199 FF - target byte has an unknown value (e.g. due to sign extension)
200 1..size - marker value is the source byte index plus one (index 0 is the lsb).
202 To detect permutations on memory sources (arrays and structures), a symbolic
203 number is also associated:
204 - a base address BASE_ADDR and an OFFSET giving the address of the source;
205 - a range which gives the difference between the highest and lowest accessed
206 memory location to make such a symbolic number;
207 - the address SRC of the source element of lowest address as a convenience
208 to easily get BASE_ADDR + offset + lowest bytepos;
209 - the number of expressions N_OPS bitwise ORed together, representing the
210 approximate cost of the computation.
212 Note 1: the range is different from size as size reflects the size of the
213 type of the current expression. For instance, for an array char a[],
214 (short) a[0] | (short) a[3] would have a size of 2 but a range of 4 while
215 (short) a[0] | ((short) a[0] << 1) would still have a size of 2 but this
216 time a range of 1.
218 Note 2: for non-memory sources, range holds the same value as size.
220 Note 3: SRC points to the SSA_NAME in case of non-memory source. */
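/* A worked example (for illustration only): after init_symbolic_number on
   a 32-bit source, N is 0x04030201, i.e. target byte 0 (the lsb) carries
   marker 1 (source byte index 0), target byte 1 carries marker 2, and so
   on.  If the value is then byte swapped by hand with shifts and ORs, the
   recomputed markers end up as 0x01020304, which is what CMPXCHG below
   encodes for that size.  */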
222 struct symbolic_number {
223 uint64_t n;
224 tree type;
225 tree base_addr;
226 tree offset;
227 poly_int64_pod bytepos;
228 tree src;
229 tree alias_set;
230 tree vuse;
231 unsigned HOST_WIDE_INT range;
232 int n_ops;
235 #define BITS_PER_MARKER 8
236 #define MARKER_MASK ((1 << BITS_PER_MARKER) - 1)
237 #define MARKER_BYTE_UNKNOWN MARKER_MASK
238 #define HEAD_MARKER(n, size) \
239 ((n) & ((uint64_t) MARKER_MASK << (((size) - 1) * BITS_PER_MARKER)))
241 /* The number which the find_bswap_or_nop_1 result should match in
242 order to have a nop. The number is masked according to the size of
243 the symbolic number before using it. */
244 #define CMPNOP (sizeof (int64_t) < 8 ? 0 : \
245 (uint64_t)0x08070605 << 32 | 0x04030201)
247 /* The number which the find_bswap_or_nop_1 result should match in
248 order to have a byte swap. The number is masked according to the
249 size of the symbolic number before using it. */
250 #define CMPXCHG (sizeof (int64_t) < 8 ? 0 : \
251 (uint64_t)0x01020304 << 32 | 0x05060708)
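/* For example (illustrative only), for a 4-byte symbolic number the
   comparison values used by find_bswap_or_nop_finalize reduce to
   CMPNOP & 0xffffffff == 0x04030201 (identity) and
   CMPXCHG >> 32 == 0x01020304 (full byte swap).  */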
253 /* Perform a SHIFT or ROTATE operation by COUNT bits on symbolic
254 number N. Return false if the requested operation is not permitted
255 on a symbolic number. */
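/* For instance (an illustrative case, not from the original comment): for a
   4-byte symbolic number 0x04030201, an LSHIFT_EXPR by 8 bits moves every
   marker up by one byte position and yields 0x03020100, i.e. the lsb of the
   result is now known to be zero.  */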
257 inline bool
258 do_shift_rotate (enum tree_code code,
259 struct symbolic_number *n,
260 int count)
262 int i, size = TYPE_PRECISION (n->type) / BITS_PER_UNIT;
263 unsigned head_marker;
265 if (count < 0
266 || count >= TYPE_PRECISION (n->type)
267 || count % BITS_PER_UNIT != 0)
268 return false;
269 count = (count / BITS_PER_UNIT) * BITS_PER_MARKER;
271 /* Zero out the extra bits of N in order to avoid them being shifted
272 into the significant bits. */
273 if (size < 64 / BITS_PER_MARKER)
274 n->n &= ((uint64_t) 1 << (size * BITS_PER_MARKER)) - 1;
276 switch (code)
278 case LSHIFT_EXPR:
279 n->n <<= count;
280 break;
281 case RSHIFT_EXPR:
282 head_marker = HEAD_MARKER (n->n, size);
283 n->n >>= count;
284 /* Arithmetic shift of signed type: result is dependent on the value. */
285 if (!TYPE_UNSIGNED (n->type) && head_marker)
286 for (i = 0; i < count / BITS_PER_MARKER; i++)
287 n->n |= (uint64_t) MARKER_BYTE_UNKNOWN
288 << ((size - 1 - i) * BITS_PER_MARKER);
289 break;
290 case LROTATE_EXPR:
291 n->n = (n->n << count) | (n->n >> ((size * BITS_PER_MARKER) - count));
292 break;
293 case RROTATE_EXPR:
294 n->n = (n->n >> count) | (n->n << ((size * BITS_PER_MARKER) - count));
295 break;
296 default:
297 return false;
299 /* Zero unused bits for size. */
300 if (size < 64 / BITS_PER_MARKER)
301 n->n &= ((uint64_t) 1 << (size * BITS_PER_MARKER)) - 1;
302 return true;
305 /* Perform sanity checking for the symbolic number N and the gimple
306 statement STMT. */
308 inline bool
309 verify_symbolic_number_p (struct symbolic_number *n, gimple *stmt)
311 tree lhs_type;
313 lhs_type = gimple_expr_type (stmt);
315 if (TREE_CODE (lhs_type) != INTEGER_TYPE)
316 return false;
318 if (TYPE_PRECISION (lhs_type) != TYPE_PRECISION (n->type))
319 return false;
321 return true;
324 /* Initialize the symbolic number N for the bswap pass from the base element
325 SRC manipulated by the bitwise OR expression. */
327 bool
328 init_symbolic_number (struct symbolic_number *n, tree src)
330 int size;
332 if (! INTEGRAL_TYPE_P (TREE_TYPE (src)))
333 return false;
335 n->base_addr = n->offset = n->alias_set = n->vuse = NULL_TREE;
336 n->src = src;
338 /* Set up the symbolic number N by setting each byte to a value between 1 and
339 the byte size of rhs1. The highest order byte is set to n->size and the
340 lowest order byte to 1. */
341 n->type = TREE_TYPE (src);
342 size = TYPE_PRECISION (n->type);
343 if (size % BITS_PER_UNIT != 0)
344 return false;
345 size /= BITS_PER_UNIT;
346 if (size > 64 / BITS_PER_MARKER)
347 return false;
348 n->range = size;
349 n->n = CMPNOP;
350 n->n_ops = 1;
352 if (size < 64 / BITS_PER_MARKER)
353 n->n &= ((uint64_t) 1 << (size * BITS_PER_MARKER)) - 1;
355 return true;
358 /* Check if STMT might be a byte swap or a nop from a memory source and return
359 the answer. If so, REF is that memory source and the base of the memory area
360 accessed and the offset of the access from that base are recorded in N. */
362 bool
363 find_bswap_or_nop_load (gimple *stmt, tree ref, struct symbolic_number *n)
365 /* Leaf node is an array or component ref. Memorize its base and
366 offset from base to compare to other such leaf nodes. */
367 poly_int64 bitsize, bitpos, bytepos;
368 machine_mode mode;
369 int unsignedp, reversep, volatilep;
370 tree offset, base_addr;
372 /* Not prepared to handle PDP endian. */
373 if (BYTES_BIG_ENDIAN != WORDS_BIG_ENDIAN)
374 return false;
376 if (!gimple_assign_load_p (stmt) || gimple_has_volatile_ops (stmt))
377 return false;
379 base_addr = get_inner_reference (ref, &bitsize, &bitpos, &offset, &mode,
380 &unsignedp, &reversep, &volatilep);
382 if (TREE_CODE (base_addr) == TARGET_MEM_REF)
383 /* Do not rewrite TARGET_MEM_REF. */
384 return false;
385 else if (TREE_CODE (base_addr) == MEM_REF)
387 poly_offset_int bit_offset = 0;
388 tree off = TREE_OPERAND (base_addr, 1);
390 if (!integer_zerop (off))
392 poly_offset_int boff = mem_ref_offset (base_addr);
393 boff <<= LOG2_BITS_PER_UNIT;
394 bit_offset += boff;
397 base_addr = TREE_OPERAND (base_addr, 0);
399 /* Avoid returning a negative bitpos as this may wreak havoc later. */
400 if (maybe_lt (bit_offset, 0))
402 tree byte_offset = wide_int_to_tree
403 (sizetype, bits_to_bytes_round_down (bit_offset));
404 bit_offset = num_trailing_bits (bit_offset);
405 if (offset)
406 offset = size_binop (PLUS_EXPR, offset, byte_offset);
407 else
408 offset = byte_offset;
411 bitpos += bit_offset.force_shwi ();
413 else
414 base_addr = build_fold_addr_expr (base_addr);
416 if (!multiple_p (bitpos, BITS_PER_UNIT, &bytepos))
417 return false;
418 if (!multiple_p (bitsize, BITS_PER_UNIT))
419 return false;
420 if (reversep)
421 return false;
423 if (!init_symbolic_number (n, ref))
424 return false;
425 n->base_addr = base_addr;
426 n->offset = offset;
427 n->bytepos = bytepos;
428 n->alias_set = reference_alias_ptr_type (ref);
429 n->vuse = gimple_vuse (stmt);
430 return true;
433 /* Compute the symbolic number N representing the result of a bitwise OR on 2
434 symbolic number N1 and N2 whose source statements are respectively
435 SOURCE_STMT1 and SOURCE_STMT2. */
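/* An illustrative little-endian example with unsigned char a[]: for
   "(uint16_t) a[0] | ((uint16_t) a[1] << 8)" the incoming numbers are
   N1 = 0x01 (a[0], bytepos 0) and N2 = 0x0100 (a[1] already shifted up,
   bytepos 1).  The merge re-weights N2's marker by the one byte distance
   between the two loads, giving 0x0200, and ORs the results into
   N = 0x0201 with range 2, i.e. a plain 16-bit load of a.  */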
437 gimple *
438 perform_symbolic_merge (gimple *source_stmt1, struct symbolic_number *n1,
439 gimple *source_stmt2, struct symbolic_number *n2,
440 struct symbolic_number *n)
442 int i, size;
443 uint64_t mask;
444 gimple *source_stmt;
445 struct symbolic_number *n_start;
447 tree rhs1 = gimple_assign_rhs1 (source_stmt1);
448 if (TREE_CODE (rhs1) == BIT_FIELD_REF
449 && TREE_CODE (TREE_OPERAND (rhs1, 0)) == SSA_NAME)
450 rhs1 = TREE_OPERAND (rhs1, 0);
451 tree rhs2 = gimple_assign_rhs1 (source_stmt2);
452 if (TREE_CODE (rhs2) == BIT_FIELD_REF
453 && TREE_CODE (TREE_OPERAND (rhs2, 0)) == SSA_NAME)
454 rhs2 = TREE_OPERAND (rhs2, 0);
456 /* Sources are different, cancel bswap if they are not memory locations with
457 the same base (array, structure, ...). */
458 if (rhs1 != rhs2)
460 uint64_t inc;
461 HOST_WIDE_INT start1, start2, start_sub, end_sub, end1, end2, end;
462 struct symbolic_number *toinc_n_ptr, *n_end;
463 basic_block bb1, bb2;
465 if (!n1->base_addr || !n2->base_addr
466 || !operand_equal_p (n1->base_addr, n2->base_addr, 0))
467 return NULL;
469 if (!n1->offset != !n2->offset
470 || (n1->offset && !operand_equal_p (n1->offset, n2->offset, 0)))
471 return NULL;
473 start1 = 0;
474 if (!(n2->bytepos - n1->bytepos).is_constant (&start2))
475 return NULL;
477 if (start1 < start2)
479 n_start = n1;
480 start_sub = start2 - start1;
482 else
484 n_start = n2;
485 start_sub = start1 - start2;
488 bb1 = gimple_bb (source_stmt1);
489 bb2 = gimple_bb (source_stmt2);
490 if (dominated_by_p (CDI_DOMINATORS, bb1, bb2))
491 source_stmt = source_stmt1;
492 else
493 source_stmt = source_stmt2;
495 /* Find the highest address at which a load is performed and
496 compute related info. */
497 end1 = start1 + (n1->range - 1);
498 end2 = start2 + (n2->range - 1);
499 if (end1 < end2)
501 end = end2;
502 end_sub = end2 - end1;
504 else
506 end = end1;
507 end_sub = end1 - end2;
509 n_end = (end2 > end1) ? n2 : n1;
511 /* Find symbolic number whose lsb is the most significant. */
512 if (BYTES_BIG_ENDIAN)
513 toinc_n_ptr = (n_end == n1) ? n2 : n1;
514 else
515 toinc_n_ptr = (n_start == n1) ? n2 : n1;
517 n->range = end - MIN (start1, start2) + 1;
519 /* Check that the range of memory covered can be represented by
520 a symbolic number. */
521 if (n->range > 64 / BITS_PER_MARKER)
522 return NULL;
524 /* Reinterpret byte marks in symbolic number holding the value of
525 bigger weight according to target endianness. */
526 inc = BYTES_BIG_ENDIAN ? end_sub : start_sub;
527 size = TYPE_PRECISION (n1->type) / BITS_PER_UNIT;
528 for (i = 0; i < size; i++, inc <<= BITS_PER_MARKER)
530 unsigned marker
531 = (toinc_n_ptr->n >> (i * BITS_PER_MARKER)) & MARKER_MASK;
532 if (marker && marker != MARKER_BYTE_UNKNOWN)
533 toinc_n_ptr->n += inc;
536 else
538 n->range = n1->range;
539 n_start = n1;
540 source_stmt = source_stmt1;
543 if (!n1->alias_set
544 || alias_ptr_types_compatible_p (n1->alias_set, n2->alias_set))
545 n->alias_set = n1->alias_set;
546 else
547 n->alias_set = ptr_type_node;
548 n->vuse = n_start->vuse;
549 n->base_addr = n_start->base_addr;
550 n->offset = n_start->offset;
551 n->src = n_start->src;
552 n->bytepos = n_start->bytepos;
553 n->type = n_start->type;
554 size = TYPE_PRECISION (n->type) / BITS_PER_UNIT;
556 for (i = 0, mask = MARKER_MASK; i < size; i++, mask <<= BITS_PER_MARKER)
558 uint64_t masked1, masked2;
560 masked1 = n1->n & mask;
561 masked2 = n2->n & mask;
562 if (masked1 && masked2 && masked1 != masked2)
563 return NULL;
565 n->n = n1->n | n2->n;
566 n->n_ops = n1->n_ops + n2->n_ops;
568 return source_stmt;
571 /* find_bswap_or_nop_1 invokes itself recursively with N and tries to perform
572 the operation given by the rhs of STMT on the result. If the operation
573 could successfully be executed, the function returns a gimple stmt whose
574 rhs's first tree is the expression of the source operand, and NULL
575 otherwise. */
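/* For illustration: starting from "y = (x >> 8) & 0xff" with a 32-bit
   unsigned x, the recursion initializes N to 0x04030201 for x, the
   RSHIFT_EXPR turns it into 0x00040302 and the BIT_AND_EXPR mask keeps
   only the low marker, leaving 0x02: byte 0 of y is byte 1 (counting from
   the lsb) of x and the remaining bytes of y are zero.  */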
577 gimple *
578 find_bswap_or_nop_1 (gimple *stmt, struct symbolic_number *n, int limit)
580 enum tree_code code;
581 tree rhs1, rhs2 = NULL;
582 gimple *rhs1_stmt, *rhs2_stmt, *source_stmt1;
583 enum gimple_rhs_class rhs_class;
585 if (!limit || !is_gimple_assign (stmt))
586 return NULL;
588 rhs1 = gimple_assign_rhs1 (stmt);
590 if (find_bswap_or_nop_load (stmt, rhs1, n))
591 return stmt;
593 /* Handle BIT_FIELD_REF. */
594 if (TREE_CODE (rhs1) == BIT_FIELD_REF
595 && TREE_CODE (TREE_OPERAND (rhs1, 0)) == SSA_NAME)
597 unsigned HOST_WIDE_INT bitsize = tree_to_uhwi (TREE_OPERAND (rhs1, 1));
598 unsigned HOST_WIDE_INT bitpos = tree_to_uhwi (TREE_OPERAND (rhs1, 2));
599 if (bitpos % BITS_PER_UNIT == 0
600 && bitsize % BITS_PER_UNIT == 0
601 && init_symbolic_number (n, TREE_OPERAND (rhs1, 0)))
603 /* Handle big-endian bit numbering in BIT_FIELD_REF. */
604 if (BYTES_BIG_ENDIAN)
605 bitpos = TYPE_PRECISION (n->type) - bitpos - bitsize;
607 /* Shift. */
608 if (!do_shift_rotate (RSHIFT_EXPR, n, bitpos))
609 return NULL;
611 /* Mask. */
612 uint64_t mask = 0;
613 uint64_t tmp = (1 << BITS_PER_UNIT) - 1;
614 for (unsigned i = 0; i < bitsize / BITS_PER_UNIT;
615 i++, tmp <<= BITS_PER_UNIT)
616 mask |= (uint64_t) MARKER_MASK << (i * BITS_PER_MARKER);
617 n->n &= mask;
619 /* Convert. */
620 n->type = TREE_TYPE (rhs1);
621 if (!n->base_addr)
622 n->range = TYPE_PRECISION (n->type) / BITS_PER_UNIT;
624 return verify_symbolic_number_p (n, stmt) ? stmt : NULL;
627 return NULL;
630 if (TREE_CODE (rhs1) != SSA_NAME)
631 return NULL;
633 code = gimple_assign_rhs_code (stmt);
634 rhs_class = gimple_assign_rhs_class (stmt);
635 rhs1_stmt = SSA_NAME_DEF_STMT (rhs1);
637 if (rhs_class == GIMPLE_BINARY_RHS)
638 rhs2 = gimple_assign_rhs2 (stmt);
640 /* Handle unary rhs and binary rhs with integer constants as second
641 operand. */
643 if (rhs_class == GIMPLE_UNARY_RHS
644 || (rhs_class == GIMPLE_BINARY_RHS
645 && TREE_CODE (rhs2) == INTEGER_CST))
647 if (code != BIT_AND_EXPR
648 && code != LSHIFT_EXPR
649 && code != RSHIFT_EXPR
650 && code != LROTATE_EXPR
651 && code != RROTATE_EXPR
652 && !CONVERT_EXPR_CODE_P (code))
653 return NULL;
655 source_stmt1 = find_bswap_or_nop_1 (rhs1_stmt, n, limit - 1);
657 /* If find_bswap_or_nop_1 returned NULL, STMT is a leaf node and
658 we have to initialize the symbolic number. */
659 if (!source_stmt1)
661 if (gimple_assign_load_p (stmt)
662 || !init_symbolic_number (n, rhs1))
663 return NULL;
664 source_stmt1 = stmt;
667 switch (code)
669 case BIT_AND_EXPR:
671 int i, size = TYPE_PRECISION (n->type) / BITS_PER_UNIT;
672 uint64_t val = int_cst_value (rhs2), mask = 0;
673 uint64_t tmp = (1 << BITS_PER_UNIT) - 1;
675 /* Only constants masking full bytes are allowed. */
676 for (i = 0; i < size; i++, tmp <<= BITS_PER_UNIT)
677 if ((val & tmp) != 0 && (val & tmp) != tmp)
678 return NULL;
679 else if (val & tmp)
680 mask |= (uint64_t) MARKER_MASK << (i * BITS_PER_MARKER);
682 n->n &= mask;
684 break;
685 case LSHIFT_EXPR:
686 case RSHIFT_EXPR:
687 case LROTATE_EXPR:
688 case RROTATE_EXPR:
689 if (!do_shift_rotate (code, n, (int) TREE_INT_CST_LOW (rhs2)))
690 return NULL;
691 break;
692 CASE_CONVERT:
694 int i, type_size, old_type_size;
695 tree type;
697 type = gimple_expr_type (stmt);
698 type_size = TYPE_PRECISION (type);
699 if (type_size % BITS_PER_UNIT != 0)
700 return NULL;
701 type_size /= BITS_PER_UNIT;
702 if (type_size > 64 / BITS_PER_MARKER)
703 return NULL;
705 /* Sign extension: result is dependent on the value. */
706 old_type_size = TYPE_PRECISION (n->type) / BITS_PER_UNIT;
707 if (!TYPE_UNSIGNED (n->type) && type_size > old_type_size
708 && HEAD_MARKER (n->n, old_type_size))
709 for (i = 0; i < type_size - old_type_size; i++)
710 n->n |= (uint64_t) MARKER_BYTE_UNKNOWN
711 << ((type_size - 1 - i) * BITS_PER_MARKER);
713 if (type_size < 64 / BITS_PER_MARKER)
715 /* If STMT casts to a smaller type mask out the bits not
716 belonging to the target type. */
717 n->n &= ((uint64_t) 1 << (type_size * BITS_PER_MARKER)) - 1;
719 n->type = type;
720 if (!n->base_addr)
721 n->range = type_size;
723 break;
724 default:
725 return NULL;
727 return verify_symbolic_number_p (n, stmt) ? source_stmt1 : NULL;
730 /* Handle binary rhs. */
732 if (rhs_class == GIMPLE_BINARY_RHS)
734 struct symbolic_number n1, n2;
735 gimple *source_stmt, *source_stmt2;
737 if (code != BIT_IOR_EXPR)
738 return NULL;
740 if (TREE_CODE (rhs2) != SSA_NAME)
741 return NULL;
743 rhs2_stmt = SSA_NAME_DEF_STMT (rhs2);
745 switch (code)
747 case BIT_IOR_EXPR:
748 source_stmt1 = find_bswap_or_nop_1 (rhs1_stmt, &n1, limit - 1);
750 if (!source_stmt1)
751 return NULL;
753 source_stmt2 = find_bswap_or_nop_1 (rhs2_stmt, &n2, limit - 1);
755 if (!source_stmt2)
756 return NULL;
758 if (TYPE_PRECISION (n1.type) != TYPE_PRECISION (n2.type))
759 return NULL;
761 if (n1.vuse != n2.vuse)
762 return NULL;
764 source_stmt
765 = perform_symbolic_merge (source_stmt1, &n1, source_stmt2, &n2, n);
767 if (!source_stmt)
768 return NULL;
770 if (!verify_symbolic_number_p (n, stmt))
771 return NULL;
773 break;
774 default:
775 return NULL;
777 return source_stmt;
779 return NULL;
782 /* Helper for find_bswap_or_nop and try_coalesce_bswap to compute
783 *CMPXCHG, *CMPNOP and adjust *N. */
785 void
786 find_bswap_or_nop_finalize (struct symbolic_number *n, uint64_t *cmpxchg,
787 uint64_t *cmpnop)
789 unsigned rsize;
790 uint64_t tmpn, mask;
792 /* The number which the find_bswap_or_nop_1 result should match in order
793 to have a full byte swap. The number is shifted to the right
794 according to the size of the symbolic number before using it. */
795 *cmpxchg = CMPXCHG;
796 *cmpnop = CMPNOP;
798 /* Find real size of result (highest non-zero byte). */
799 if (n->base_addr)
800 for (tmpn = n->n, rsize = 0; tmpn; tmpn >>= BITS_PER_MARKER, rsize++);
801 else
802 rsize = n->range;
804 /* Zero out the bits corresponding to untouched bytes in original gimple
805 expression. */
806 if (n->range < (int) sizeof (int64_t))
808 mask = ((uint64_t) 1 << (n->range * BITS_PER_MARKER)) - 1;
809 *cmpxchg >>= (64 / BITS_PER_MARKER - n->range) * BITS_PER_MARKER;
810 *cmpnop &= mask;
813 /* Zero out the bits corresponding to unused bytes in the result of the
814 gimple expression. */
815 if (rsize < n->range)
817 if (BYTES_BIG_ENDIAN)
819 mask = ((uint64_t) 1 << (rsize * BITS_PER_MARKER)) - 1;
820 *cmpxchg &= mask;
821 *cmpnop >>= (n->range - rsize) * BITS_PER_MARKER;
823 else
825 mask = ((uint64_t) 1 << (rsize * BITS_PER_MARKER)) - 1;
826 *cmpxchg >>= (n->range - rsize) * BITS_PER_MARKER;
827 *cmpnop &= mask;
829 n->range = rsize;
832 n->range *= BITS_PER_UNIT;
835 /* Check if STMT completes a bswap implementation or a read in a given
836 endianness consisting of ORs, SHIFTs and ANDs, and set *BSWAP
837 accordingly. It also sets N to represent the kind of operations
838 performed: the size of the resulting expression and whether it works on
839 a memory source, and if so its alias-set and vuse. Finally, the
840 function returns a stmt whose rhs's first tree is the source
841 expression. */
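/* A typical pattern this function is expected to recognize (illustrative
   source, assuming a 32-bit unsigned int x):

     y = ((x & 0x000000ff) << 24)
         | ((x & 0x0000ff00) << 8)
         | ((x & 0x00ff0000) >> 8)
         | ((x & 0xff000000) >> 24);

   The symbolic number computed for y matches CMPXCHG, *BSWAP is set to
   true and the whole computation can then be replaced by a
   __builtin_bswap32 call when the target provides one.  */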
843 gimple *
844 find_bswap_or_nop (gimple *stmt, struct symbolic_number *n, bool *bswap)
846 /* The last parameter determines the search depth limit. It usually
847 correlates directly to the number n of bytes to be touched. We
848 increase that number by log2(n) + 1 here in order to also
849 cover signed -> unsigned conversions of the src operand as can be seen
850 in libgcc, and for an initial shift/and operation of the src operand. */
851 int limit = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (gimple_expr_type (stmt)));
852 limit += 1 + (int) ceil_log2 ((unsigned HOST_WIDE_INT) limit);
853 gimple *ins_stmt = find_bswap_or_nop_1 (stmt, n, limit);
855 if (!ins_stmt)
856 return NULL;
858 uint64_t cmpxchg, cmpnop;
859 find_bswap_or_nop_finalize (n, &cmpxchg, &cmpnop);
861 /* A complete byte swap should make the symbolic number start with
862 the largest digit in the highest order byte. An unchanged symbolic
863 number indicates a read with the same endianness as the target architecture. */
864 if (n->n == cmpnop)
865 *bswap = false;
866 else if (n->n == cmpxchg)
867 *bswap = true;
868 else
869 return NULL;
871 /* Useless bit manipulation performed by code. */
872 if (!n->base_addr && n->n == cmpnop && n->n_ops == 1)
873 return NULL;
875 return ins_stmt;
878 const pass_data pass_data_optimize_bswap =
880 GIMPLE_PASS, /* type */
881 "bswap", /* name */
882 OPTGROUP_NONE, /* optinfo_flags */
883 TV_NONE, /* tv_id */
884 PROP_ssa, /* properties_required */
885 0, /* properties_provided */
886 0, /* properties_destroyed */
887 0, /* todo_flags_start */
888 0, /* todo_flags_finish */
891 class pass_optimize_bswap : public gimple_opt_pass
893 public:
894 pass_optimize_bswap (gcc::context *ctxt)
895 : gimple_opt_pass (pass_data_optimize_bswap, ctxt)
898 /* opt_pass methods: */
899 virtual bool gate (function *)
901 return flag_expensive_optimizations && optimize && BITS_PER_UNIT == 8;
904 virtual unsigned int execute (function *);
906 }; // class pass_optimize_bswap
908 /* Perform the bswap optimization: replace the expression computed in the rhs
909 of gsi_stmt (GSI) (or if NULL add instead of replace) by an equivalent
910 bswap, load or load + bswap expression.
911 Which of these alternatives replaces the rhs is given by N->base_addr (non
912 null if a load is needed) and BSWAP. The type, VUSE and alias-set of the
913 load to perform are also given in N while the builtin bswap to invoke is
914 given in FNDECL. Finally, if a load is involved, INS_STMT refers to one of
915 the load statements involved to construct the rhs in gsi_stmt (GSI) and
916 N->range gives the size of the rhs expression for maintaining some
917 statistics.
919 Note that if the replacement involves a load and if gsi_stmt (GSI) is
920 non-NULL, that stmt is moved just after INS_STMT to do the load with the
921 same VUSE, which can lead to gsi_stmt (GSI) changing basic block. */
923 tree
924 bswap_replace (gimple_stmt_iterator gsi, gimple *ins_stmt, tree fndecl,
925 tree bswap_type, tree load_type, struct symbolic_number *n,
926 bool bswap)
928 tree src, tmp, tgt = NULL_TREE;
929 gimple *bswap_stmt;
931 gimple *cur_stmt = gsi_stmt (gsi);
932 src = n->src;
933 if (cur_stmt)
934 tgt = gimple_assign_lhs (cur_stmt);
936 /* Need to load the value from memory first. */
937 if (n->base_addr)
939 gimple_stmt_iterator gsi_ins = gsi;
940 if (ins_stmt)
941 gsi_ins = gsi_for_stmt (ins_stmt);
942 tree addr_expr, addr_tmp, val_expr, val_tmp;
943 tree load_offset_ptr, aligned_load_type;
944 gimple *load_stmt;
945 unsigned align = get_object_alignment (src);
946 poly_int64 load_offset = 0;
948 if (cur_stmt)
950 basic_block ins_bb = gimple_bb (ins_stmt);
951 basic_block cur_bb = gimple_bb (cur_stmt);
952 if (!dominated_by_p (CDI_DOMINATORS, cur_bb, ins_bb))
953 return NULL_TREE;
955 /* Move cur_stmt just before one of the loads of the original
956 to ensure it has the same VUSE. See PR61517 for what could
957 go wrong. */
958 if (gimple_bb (cur_stmt) != gimple_bb (ins_stmt))
959 reset_flow_sensitive_info (gimple_assign_lhs (cur_stmt));
960 gsi_move_before (&gsi, &gsi_ins);
961 gsi = gsi_for_stmt (cur_stmt);
963 else
964 gsi = gsi_ins;
966 /* Compute address to load from and cast according to the size
967 of the load. */
968 addr_expr = build_fold_addr_expr (src);
969 if (is_gimple_mem_ref_addr (addr_expr))
970 addr_tmp = unshare_expr (addr_expr);
971 else
973 addr_tmp = unshare_expr (n->base_addr);
974 if (!is_gimple_mem_ref_addr (addr_tmp))
975 addr_tmp = force_gimple_operand_gsi_1 (&gsi, addr_tmp,
976 is_gimple_mem_ref_addr,
977 NULL_TREE, true,
978 GSI_SAME_STMT);
979 load_offset = n->bytepos;
980 if (n->offset)
982 tree off
983 = force_gimple_operand_gsi (&gsi, unshare_expr (n->offset),
984 true, NULL_TREE, true,
985 GSI_SAME_STMT);
986 gimple *stmt
987 = gimple_build_assign (make_ssa_name (TREE_TYPE (addr_tmp)),
988 POINTER_PLUS_EXPR, addr_tmp, off);
989 gsi_insert_before (&gsi, stmt, GSI_SAME_STMT);
990 addr_tmp = gimple_assign_lhs (stmt);
994 /* Perform the load. */
995 aligned_load_type = load_type;
996 if (align < TYPE_ALIGN (load_type))
997 aligned_load_type = build_aligned_type (load_type, align);
998 load_offset_ptr = build_int_cst (n->alias_set, load_offset);
999 val_expr = fold_build2 (MEM_REF, aligned_load_type, addr_tmp,
1000 load_offset_ptr);
1002 if (!bswap)
1004 if (n->range == 16)
1005 nop_stats.found_16bit++;
1006 else if (n->range == 32)
1007 nop_stats.found_32bit++;
1008 else
1010 gcc_assert (n->range == 64);
1011 nop_stats.found_64bit++;
1014 /* Convert the result of load if necessary. */
1015 if (tgt && !useless_type_conversion_p (TREE_TYPE (tgt), load_type))
1017 val_tmp = make_temp_ssa_name (aligned_load_type, NULL,
1018 "load_dst");
1019 load_stmt = gimple_build_assign (val_tmp, val_expr);
1020 gimple_set_vuse (load_stmt, n->vuse);
1021 gsi_insert_before (&gsi, load_stmt, GSI_SAME_STMT);
1022 gimple_assign_set_rhs_with_ops (&gsi, NOP_EXPR, val_tmp);
1023 update_stmt (cur_stmt);
1025 else if (cur_stmt)
1027 gimple_assign_set_rhs_with_ops (&gsi, MEM_REF, val_expr);
1028 gimple_set_vuse (cur_stmt, n->vuse);
1029 update_stmt (cur_stmt);
1031 else
1033 tgt = make_ssa_name (load_type);
1034 cur_stmt = gimple_build_assign (tgt, MEM_REF, val_expr);
1035 gimple_set_vuse (cur_stmt, n->vuse);
1036 gsi_insert_before (&gsi, cur_stmt, GSI_SAME_STMT);
1039 if (dump_file)
1041 fprintf (dump_file,
1042 "%d bit load in target endianness found at: ",
1043 (int) n->range);
1044 print_gimple_stmt (dump_file, cur_stmt, 0);
1046 return tgt;
1048 else
1050 val_tmp = make_temp_ssa_name (aligned_load_type, NULL, "load_dst");
1051 load_stmt = gimple_build_assign (val_tmp, val_expr);
1052 gimple_set_vuse (load_stmt, n->vuse);
1053 gsi_insert_before (&gsi, load_stmt, GSI_SAME_STMT);
1055 src = val_tmp;
1057 else if (!bswap)
1059 gimple *g = NULL;
1060 if (tgt && !useless_type_conversion_p (TREE_TYPE (tgt), TREE_TYPE (src)))
1062 if (!is_gimple_val (src))
1063 return NULL_TREE;
1064 g = gimple_build_assign (tgt, NOP_EXPR, src);
1066 else if (cur_stmt)
1067 g = gimple_build_assign (tgt, src);
1068 else
1069 tgt = src;
1070 if (n->range == 16)
1071 nop_stats.found_16bit++;
1072 else if (n->range == 32)
1073 nop_stats.found_32bit++;
1074 else
1076 gcc_assert (n->range == 64);
1077 nop_stats.found_64bit++;
1079 if (dump_file)
1081 fprintf (dump_file,
1082 "%d bit reshuffle in target endianness found at: ",
1083 (int) n->range);
1084 if (cur_stmt)
1085 print_gimple_stmt (dump_file, cur_stmt, 0);
1086 else
1088 print_generic_expr (dump_file, tgt, TDF_NONE);
1089 fprintf (dump_file, "\n");
1092 if (cur_stmt)
1093 gsi_replace (&gsi, g, true);
1094 return tgt;
1096 else if (TREE_CODE (src) == BIT_FIELD_REF)
1097 src = TREE_OPERAND (src, 0);
1099 if (n->range == 16)
1100 bswap_stats.found_16bit++;
1101 else if (n->range == 32)
1102 bswap_stats.found_32bit++;
1103 else
1105 gcc_assert (n->range == 64);
1106 bswap_stats.found_64bit++;
1109 tmp = src;
1111 /* Convert the src expression if necessary. */
1112 if (!useless_type_conversion_p (TREE_TYPE (tmp), bswap_type))
1114 gimple *convert_stmt;
1116 tmp = make_temp_ssa_name (bswap_type, NULL, "bswapsrc");
1117 convert_stmt = gimple_build_assign (tmp, NOP_EXPR, src);
1118 gsi_insert_before (&gsi, convert_stmt, GSI_SAME_STMT);
1121 /* The canonical form for a 16-bit bswap is a rotate expression. Only 16-bit
1122 values are handled this way, as rotation of 2N-bit values by N bits is
1123 generally not equivalent to a bswap. Consider for instance 0x01020304 r>> 16,
1124 which gives 0x03040102 while a bswap for that value is 0x04030201. */
1125 if (bswap && n->range == 16)
1127 tree count = build_int_cst (NULL, BITS_PER_UNIT);
1128 src = fold_build2 (LROTATE_EXPR, bswap_type, tmp, count);
1129 bswap_stmt = gimple_build_assign (NULL, src);
1131 else
1132 bswap_stmt = gimple_build_call (fndecl, 1, tmp);
1134 if (tgt == NULL_TREE)
1135 tgt = make_ssa_name (bswap_type);
1136 tmp = tgt;
1138 /* Convert the result if necessary. */
1139 if (!useless_type_conversion_p (TREE_TYPE (tgt), bswap_type))
1141 gimple *convert_stmt;
1143 tmp = make_temp_ssa_name (bswap_type, NULL, "bswapdst");
1144 convert_stmt = gimple_build_assign (tgt, NOP_EXPR, tmp);
1145 gsi_insert_after (&gsi, convert_stmt, GSI_SAME_STMT);
1148 gimple_set_lhs (bswap_stmt, tmp);
1150 if (dump_file)
1152 fprintf (dump_file, "%d bit bswap implementation found at: ",
1153 (int) n->range);
1154 if (cur_stmt)
1155 print_gimple_stmt (dump_file, cur_stmt, 0);
1156 else
1158 print_generic_expr (dump_file, tgt, TDF_NONE);
1159 fprintf (dump_file, "\n");
1163 if (cur_stmt)
1165 gsi_insert_after (&gsi, bswap_stmt, GSI_SAME_STMT);
1166 gsi_remove (&gsi, true);
1168 else
1169 gsi_insert_before (&gsi, bswap_stmt, GSI_SAME_STMT);
1170 return tgt;
1173 /* Find manual byte swap implementations as well as loads in a given
1174 endianness. Byte swaps are turned into a bswap builtin invocation
1175 while endian loads are converted to a bswap builtin invocation or a
1176 simple load, according to the target endianness. */
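/* For example (illustrative source): a manual big-endian read such as

     v = (p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3];

   with unsigned char *p is expected to become a plain 32-bit load on a
   big-endian target and a 32-bit load followed by __builtin_bswap32 on a
   little-endian one.  */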
1178 unsigned int
1179 pass_optimize_bswap::execute (function *fun)
1181 basic_block bb;
1182 bool bswap32_p, bswap64_p;
1183 bool changed = false;
1184 tree bswap32_type = NULL_TREE, bswap64_type = NULL_TREE;
1186 bswap32_p = (builtin_decl_explicit_p (BUILT_IN_BSWAP32)
1187 && optab_handler (bswap_optab, SImode) != CODE_FOR_nothing);
1188 bswap64_p = (builtin_decl_explicit_p (BUILT_IN_BSWAP64)
1189 && (optab_handler (bswap_optab, DImode) != CODE_FOR_nothing
1190 || (bswap32_p && word_mode == SImode)));
1192 /* Determine the argument type of the builtins. The code later on
1193 assumes that the return and argument type are the same. */
1194 if (bswap32_p)
1196 tree fndecl = builtin_decl_explicit (BUILT_IN_BSWAP32);
1197 bswap32_type = TREE_VALUE (TYPE_ARG_TYPES (TREE_TYPE (fndecl)));
1200 if (bswap64_p)
1202 tree fndecl = builtin_decl_explicit (BUILT_IN_BSWAP64);
1203 bswap64_type = TREE_VALUE (TYPE_ARG_TYPES (TREE_TYPE (fndecl)));
1206 memset (&nop_stats, 0, sizeof (nop_stats));
1207 memset (&bswap_stats, 0, sizeof (bswap_stats));
1208 calculate_dominance_info (CDI_DOMINATORS);
1210 FOR_EACH_BB_FN (bb, fun)
1212 gimple_stmt_iterator gsi;
1214 /* We do a reverse scan for bswap patterns to make sure we get the
1215 widest match. As bswap pattern matching doesn't handle previously
1216 inserted smaller bswap replacements as sub-patterns, the wider
1217 variant wouldn't be detected. */
1218 for (gsi = gsi_last_bb (bb); !gsi_end_p (gsi);)
1220 gimple *ins_stmt, *cur_stmt = gsi_stmt (gsi);
1221 tree fndecl = NULL_TREE, bswap_type = NULL_TREE, load_type;
1222 enum tree_code code;
1223 struct symbolic_number n;
1224 bool bswap;
1226 /* This gsi_prev (&gsi) is not part of the for loop because cur_stmt
1227 might be moved to a different basic block by bswap_replace and gsi
1228 must not point to it if that's the case. Doing the gsi_prev
1229 here makes sure that gsi points to the statement previous to
1230 cur_stmt while still making sure that all statements are
1231 considered in this basic block. */
1232 gsi_prev (&gsi);
1234 if (!is_gimple_assign (cur_stmt))
1235 continue;
1237 code = gimple_assign_rhs_code (cur_stmt);
1238 switch (code)
1240 case LROTATE_EXPR:
1241 case RROTATE_EXPR:
1242 if (!tree_fits_uhwi_p (gimple_assign_rhs2 (cur_stmt))
1243 || tree_to_uhwi (gimple_assign_rhs2 (cur_stmt))
1244 % BITS_PER_UNIT)
1245 continue;
1246 /* Fall through. */
1247 case BIT_IOR_EXPR:
1248 break;
1249 default:
1250 continue;
1253 ins_stmt = find_bswap_or_nop (cur_stmt, &n, &bswap);
1255 if (!ins_stmt)
1256 continue;
1258 switch (n.range)
1260 case 16:
1261 /* Already in canonical form, nothing to do. */
1262 if (code == LROTATE_EXPR || code == RROTATE_EXPR)
1263 continue;
1264 load_type = bswap_type = uint16_type_node;
1265 break;
1266 case 32:
1267 load_type = uint32_type_node;
1268 if (bswap32_p)
1270 fndecl = builtin_decl_explicit (BUILT_IN_BSWAP32);
1271 bswap_type = bswap32_type;
1273 break;
1274 case 64:
1275 load_type = uint64_type_node;
1276 if (bswap64_p)
1278 fndecl = builtin_decl_explicit (BUILT_IN_BSWAP64);
1279 bswap_type = bswap64_type;
1281 break;
1282 default:
1283 continue;
1286 if (bswap && !fndecl && n.range != 16)
1287 continue;
1289 if (bswap_replace (gsi_for_stmt (cur_stmt), ins_stmt, fndecl,
1290 bswap_type, load_type, &n, bswap))
1291 changed = true;
1295 statistics_counter_event (fun, "16-bit nop implementations found",
1296 nop_stats.found_16bit);
1297 statistics_counter_event (fun, "32-bit nop implementations found",
1298 nop_stats.found_32bit);
1299 statistics_counter_event (fun, "64-bit nop implementations found",
1300 nop_stats.found_64bit);
1301 statistics_counter_event (fun, "16-bit bswap implementations found",
1302 bswap_stats.found_16bit);
1303 statistics_counter_event (fun, "32-bit bswap implementations found",
1304 bswap_stats.found_32bit);
1305 statistics_counter_event (fun, "64-bit bswap implementations found",
1306 bswap_stats.found_64bit);
1308 return (changed ? TODO_update_ssa : 0);
1311 } // anon namespace
1313 gimple_opt_pass *
1314 make_pass_optimize_bswap (gcc::context *ctxt)
1316 return new pass_optimize_bswap (ctxt);
1319 namespace {
1321 /* Struct recording one operand for the store, which is either a constant,
1322 in which case VAL represents the constant and all the other fields are zero,
1323 or a memory load, in which case VAL represents the reference, BASE_ADDR is
1324 non-NULL and the other fields also reflect the memory load, or an SSA name,
1325 in which case VAL represents the SSA name and all the other fields are zero. */
1327 struct store_operand_info
1329 tree val;
1330 tree base_addr;
1331 poly_uint64 bitsize;
1332 poly_uint64 bitpos;
1333 poly_uint64 bitregion_start;
1334 poly_uint64 bitregion_end;
1335 gimple *stmt;
1336 bool bit_not_p;
1337 store_operand_info ();
1340 store_operand_info::store_operand_info ()
1341 : val (NULL_TREE), base_addr (NULL_TREE), bitsize (0), bitpos (0),
1342 bitregion_start (0), bitregion_end (0), stmt (NULL), bit_not_p (false)
1346 /* Struct recording the information about a single store of an immediate
1347 to memory. These are created in the first phase and coalesced into
1348 merged_store_group objects in the second phase. */
1350 struct store_immediate_info
1352 unsigned HOST_WIDE_INT bitsize;
1353 unsigned HOST_WIDE_INT bitpos;
1354 unsigned HOST_WIDE_INT bitregion_start;
1355 /* This is one past the last bit of the bit region. */
1356 unsigned HOST_WIDE_INT bitregion_end;
1357 gimple *stmt;
1358 unsigned int order;
1359 /* INTEGER_CST for constant stores, MEM_REF for memory copy,
1360 BIT_*_EXPR for logical bitwise operation, BIT_INSERT_EXPR
1361 for bit insertion.
1362 LROTATE_EXPR if it can only be bswap optimized and
1363 ops are not really meaningful.
1364 NOP_EXPR if bswap optimization detected an identity; ops
1365 are not meaningful. */
1366 enum tree_code rhs_code;
1367 /* Two fields for bswap optimization purposes. */
1368 struct symbolic_number n;
1369 gimple *ins_stmt;
1370 /* True if BIT_{AND,IOR,XOR}_EXPR result is inverted before storing. */
1371 bool bit_not_p;
1372 /* True if ops have been swapped and thus ops[1] represents
1373 rhs1 of BIT_{AND,IOR,XOR}_EXPR and ops[0] represents rhs2. */
1374 bool ops_swapped_p;
1375 /* Operands. For BIT_*_EXPR rhs_code both operands are used, otherwise
1376 just the first one. */
1377 store_operand_info ops[2];
1378 store_immediate_info (unsigned HOST_WIDE_INT, unsigned HOST_WIDE_INT,
1379 unsigned HOST_WIDE_INT, unsigned HOST_WIDE_INT,
1380 gimple *, unsigned int, enum tree_code,
1381 struct symbolic_number &, gimple *, bool,
1382 const store_operand_info &,
1383 const store_operand_info &);
1386 store_immediate_info::store_immediate_info (unsigned HOST_WIDE_INT bs,
1387 unsigned HOST_WIDE_INT bp,
1388 unsigned HOST_WIDE_INT brs,
1389 unsigned HOST_WIDE_INT bre,
1390 gimple *st,
1391 unsigned int ord,
1392 enum tree_code rhscode,
1393 struct symbolic_number &nr,
1394 gimple *ins_stmtp,
1395 bool bitnotp,
1396 const store_operand_info &op0r,
1397 const store_operand_info &op1r)
1398 : bitsize (bs), bitpos (bp), bitregion_start (brs), bitregion_end (bre),
1399 stmt (st), order (ord), rhs_code (rhscode), n (nr),
1400 ins_stmt (ins_stmtp), bit_not_p (bitnotp), ops_swapped_p (false)
1401 #if __cplusplus >= 201103L
1402 , ops { op0r, op1r }
1405 #else
1407 ops[0] = op0r;
1408 ops[1] = op1r;
1410 #endif
1412 /* Struct representing a group of stores to contiguous memory locations.
1413 These are produced by the second phase (coalescing) and consumed in the
1414 third phase that outputs the widened stores. */
1416 struct merged_store_group
1418 unsigned HOST_WIDE_INT start;
1419 unsigned HOST_WIDE_INT width;
1420 unsigned HOST_WIDE_INT bitregion_start;
1421 unsigned HOST_WIDE_INT bitregion_end;
1422 /* The size of the allocated memory for val and mask. */
1423 unsigned HOST_WIDE_INT buf_size;
1424 unsigned HOST_WIDE_INT align_base;
1425 poly_uint64 load_align_base[2];
1427 unsigned int align;
1428 unsigned int load_align[2];
1429 unsigned int first_order;
1430 unsigned int last_order;
1431 bool bit_insertion;
1433 auto_vec<store_immediate_info *> stores;
1434 /* We record the first and last original statements in the sequence because
1435 we'll need their vuse/vdef and replacement position. It's easier to keep
1436 track of them separately as 'stores' is reordered by apply_stores. */
1437 gimple *last_stmt;
1438 gimple *first_stmt;
1439 unsigned char *val;
1440 unsigned char *mask;
1442 merged_store_group (store_immediate_info *);
1443 ~merged_store_group ();
1444 bool can_be_merged_into (store_immediate_info *);
1445 void merge_into (store_immediate_info *);
1446 void merge_overlapping (store_immediate_info *);
1447 bool apply_stores ();
1448 private:
1449 void do_merge (store_immediate_info *);
1452 /* Debug helper. Dump LEN elements of byte array PTR to FD in hex. */
1454 static void
1455 dump_char_array (FILE *fd, unsigned char *ptr, unsigned int len)
1457 if (!fd)
1458 return;
1460 for (unsigned int i = 0; i < len; i++)
1461 fprintf (fd, "%02x ", ptr[i]);
1462 fprintf (fd, "\n");
1465 /* Shift left the bytes in PTR of SZ elements by AMNT bits, carrying over the
1466 bits between adjacent elements. AMNT should be within
1467 [0, BITS_PER_UNIT).
1468 Example, AMNT = 2:
1469 00011111|11100000 << 2 = 01111111|10000000
1470 PTR[1] | PTR[0] PTR[1] | PTR[0]. */
1472 static void
1473 shift_bytes_in_array (unsigned char *ptr, unsigned int sz, unsigned int amnt)
1475 if (amnt == 0)
1476 return;
1478 unsigned char carry_over = 0U;
1479 unsigned char carry_mask = (~0U) << (unsigned char) (BITS_PER_UNIT - amnt);
1480 unsigned char clear_mask = (~0U) << amnt;
1482 for (unsigned int i = 0; i < sz; i++)
1484 unsigned prev_carry_over = carry_over;
1485 carry_over = (ptr[i] & carry_mask) >> (BITS_PER_UNIT - amnt);
1487 ptr[i] <<= amnt;
1488 if (i != 0)
1490 ptr[i] &= clear_mask;
1491 ptr[i] |= prev_carry_over;
1496 /* Like shift_bytes_in_array but for big-endian.
1497 Shift right the bytes in PTR of SZ elements by AMNT bits, carrying over the
1498 bits between adjacent elements. AMNT should be within
1499 [0, BITS_PER_UNIT).
1500 Example, AMNT = 2:
1501 00011111|11100000 >> 2 = 00000111|11111000
1502 PTR[0] | PTR[1] PTR[0] | PTR[1]. */
1504 static void
1505 shift_bytes_in_array_right (unsigned char *ptr, unsigned int sz,
1506 unsigned int amnt)
1508 if (amnt == 0)
1509 return;
1511 unsigned char carry_over = 0U;
1512 unsigned char carry_mask = ~(~0U << amnt);
1514 for (unsigned int i = 0; i < sz; i++)
1516 unsigned prev_carry_over = carry_over;
1517 carry_over = ptr[i] & carry_mask;
1519 carry_over <<= (unsigned char) BITS_PER_UNIT - amnt;
1520 ptr[i] >>= amnt;
1521 ptr[i] |= prev_carry_over;
1525 /* Clear out LEN bits starting from bit START in the byte array
1526 PTR. This clears the bits to the *right* from START.
1527 START must be within [0, BITS_PER_UNIT) and counts starting from
1528 the least significant bit. */
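/* Example (illustrative): with START = 5 and LEN = 3 the mask built below
   is 0b00111000, so bits 5, 4 and 3 of PTR[0] are cleared and the rest of
   the byte is preserved.  */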
1530 static void
1531 clear_bit_region_be (unsigned char *ptr, unsigned int start,
1532 unsigned int len)
1534 if (len == 0)
1535 return;
1536 /* Clear len bits to the right of start. */
1537 else if (len <= start + 1)
1539 unsigned char mask = (~(~0U << len));
1540 mask = mask << (start + 1U - len);
1541 ptr[0] &= ~mask;
1543 else if (start != BITS_PER_UNIT - 1)
1545 clear_bit_region_be (ptr, start, (start % BITS_PER_UNIT) + 1);
1546 clear_bit_region_be (ptr + 1, BITS_PER_UNIT - 1,
1547 len - (start % BITS_PER_UNIT) - 1);
1549 else if (start == BITS_PER_UNIT - 1
1550 && len > BITS_PER_UNIT)
1552 unsigned int nbytes = len / BITS_PER_UNIT;
1553 memset (ptr, 0, nbytes);
1554 if (len % BITS_PER_UNIT != 0)
1555 clear_bit_region_be (ptr + nbytes, BITS_PER_UNIT - 1,
1556 len % BITS_PER_UNIT);
1558 else
1559 gcc_unreachable ();
1562 /* In the byte array PTR clear the bit region starting at bit
1563 START that is LEN bits wide.
1564 For regions spanning multiple bytes do this recursively until we reach
1565 zero LEN or a region contained within a single byte. */
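/* Example (illustrative): clearing START = 4, LEN = 12 first clears the
   high four bits of PTR[0] and then recurses with START = 0, LEN = 8,
   which clears all of PTR[1].  */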
1567 static void
1568 clear_bit_region (unsigned char *ptr, unsigned int start,
1569 unsigned int len)
1571 /* Degenerate base case. */
1572 if (len == 0)
1573 return;
1574 else if (start >= BITS_PER_UNIT)
1575 clear_bit_region (ptr + 1, start - BITS_PER_UNIT, len);
1576 /* Second base case. */
1577 else if ((start + len) <= BITS_PER_UNIT)
1579 unsigned char mask = (~0U) << (unsigned char) (BITS_PER_UNIT - len);
1580 mask >>= BITS_PER_UNIT - (start + len);
1582 ptr[0] &= ~mask;
1584 return;
1586 /* Clear most significant bits in a byte and proceed with the next byte. */
1587 else if (start != 0)
1589 clear_bit_region (ptr, start, BITS_PER_UNIT - start);
1590 clear_bit_region (ptr + 1, 0, len - (BITS_PER_UNIT - start));
1592 /* Whole bytes need to be cleared. */
1593 else if (start == 0 && len > BITS_PER_UNIT)
1595 unsigned int nbytes = len / BITS_PER_UNIT;
1596 /* We could recurse on each byte but we clear whole bytes, so a simple
1597 memset will do. */
1598 memset (ptr, '\0', nbytes);
1599 /* Clear the remaining sub-byte region if there is one. */
1600 if (len % BITS_PER_UNIT != 0)
1601 clear_bit_region (ptr + nbytes, 0, len % BITS_PER_UNIT);
1603 else
1604 gcc_unreachable ();
1607 /* Write BITLEN bits of EXPR to the byte array PTR at
1608 bit position BITPOS. PTR should contain TOTAL_BYTES elements.
1609 Return true if the operation succeeded. */
1611 static bool
1612 encode_tree_to_bitpos (tree expr, unsigned char *ptr, int bitlen, int bitpos,
1613 unsigned int total_bytes)
1615 unsigned int first_byte = bitpos / BITS_PER_UNIT;
1616 tree tmp_int = expr;
1617 bool sub_byte_op_p = ((bitlen % BITS_PER_UNIT)
1618 || (bitpos % BITS_PER_UNIT)
1619 || !int_mode_for_size (bitlen, 0).exists ());
1621 if (!sub_byte_op_p)
1622 return native_encode_expr (tmp_int, ptr + first_byte, total_bytes) != 0;
1624 /* LITTLE-ENDIAN
1625 We are writing a non byte-sized quantity or at a position that is not
1626 at a byte boundary.
1627 |--------|--------|--------| ptr + first_byte
1629 xxx xxxxxxxx xxx< bp>
1630 |______EXPR____|
1632 First native_encode_expr EXPR into a temporary buffer and shift each
1633 byte in the buffer by 'bp' (carrying the bits over as necessary).
1634 |00000000|00xxxxxx|xxxxxxxx| << bp = |000xxxxx|xxxxxxxx|xxx00000|
1635 <------bitlen---->< bp>
1636 Then we clear the destination bits:
1637 |---00000|00000000|000-----| ptr + first_byte
1638 <-------bitlen--->< bp>
1640 Finally we ORR the bytes of the shifted EXPR into the cleared region:
1641 |---xxxxx||xxxxxxxx||xxx-----| ptr + first_byte.
1643 BIG-ENDIAN
1644 We are writing a non byte-sized quantity or at a position that is not
1645 at a byte boundary.
1646 ptr + first_byte |--------|--------|--------|
1648 <bp >xxx xxxxxxxx xxx
1649 |_____EXPR_____|
1651 First native_encode_expr EXPR into a temporary buffer and shift each
1652 byte in the buffer to the right by (carrying the bits over as necessary).
1653 We shift by as much as needed to align the most significant bit of EXPR
1654 with bitpos:
1655 |00xxxxxx|xxxxxxxx| >> 3 = |00000xxx|xxxxxxxx|xxxxx000|
1656 <---bitlen----> <bp ><-----bitlen----->
1657 Then we clear the destination bits:
1658 ptr + first_byte |-----000||00000000||00000---|
1659 <bp ><-------bitlen----->
1661 Finally we ORR the bytes of the shifted EXPR into the cleared region:
1662 ptr + first_byte |---xxxxx||xxxxxxxx||xxx-----|.
1663 The awkwardness comes from the fact that bitpos is counted from the
1664 most significant bit of a byte. */
1666 /* We must be dealing with fixed-size data at this point, since the
1667 total size is also fixed. */
1668 fixed_size_mode mode = as_a <fixed_size_mode> (TYPE_MODE (TREE_TYPE (expr)));
1669 /* Allocate an extra byte so that we have space to shift into. */
1670 unsigned int byte_size = GET_MODE_SIZE (mode) + 1;
1671 unsigned char *tmpbuf = XALLOCAVEC (unsigned char, byte_size);
1672 memset (tmpbuf, '\0', byte_size);
1673 /* The store detection code should only have allowed constants that are
1674 accepted by native_encode_expr. */
1675 if (native_encode_expr (expr, tmpbuf, byte_size - 1) == 0)
1676 gcc_unreachable ();
1678 /* The native_encode_expr machinery uses TYPE_MODE to determine how many
1679 bytes to write. This means it can write more than
1680 ROUND_UP (bitlen, BITS_PER_UNIT) / BITS_PER_UNIT bytes (for example
1681 write 8 bytes for a bitlen of 40). Skip the bytes that are not within
1682 bitlen and zero out the bits that are not relevant as well (that may
1683 contain a sign bit due to sign-extension). */
1684 unsigned int padding
1685 = byte_size - ROUND_UP (bitlen, BITS_PER_UNIT) / BITS_PER_UNIT - 1;
1686 /* On big-endian the padding is at the 'front' so just skip the initial
1687 bytes. */
1688 if (BYTES_BIG_ENDIAN)
1689 tmpbuf += padding;
1691 byte_size -= padding;
1693 if (bitlen % BITS_PER_UNIT != 0)
1695 if (BYTES_BIG_ENDIAN)
1696 clear_bit_region_be (tmpbuf, BITS_PER_UNIT - 1,
1697 BITS_PER_UNIT - (bitlen % BITS_PER_UNIT));
1698 else
1699 clear_bit_region (tmpbuf, bitlen,
1700 byte_size * BITS_PER_UNIT - bitlen);
1702 /* Left shifting relies on the last byte being clear if bitlen is
1703 a multiple of BITS_PER_UNIT, which might not be clear if
1704 there are padding bytes. */
1705 else if (!BYTES_BIG_ENDIAN)
1706 tmpbuf[byte_size - 1] = '\0';
1708 /* Clear the bit region in PTR where the bits from TMPBUF will be
1709 inserted into. */
1710 if (BYTES_BIG_ENDIAN)
1711 clear_bit_region_be (ptr + first_byte,
1712 BITS_PER_UNIT - 1 - (bitpos % BITS_PER_UNIT), bitlen);
1713 else
1714 clear_bit_region (ptr + first_byte, bitpos % BITS_PER_UNIT, bitlen);
1716 int shift_amnt;
1717 int bitlen_mod = bitlen % BITS_PER_UNIT;
1718 int bitpos_mod = bitpos % BITS_PER_UNIT;
1720 bool skip_byte = false;
1721 if (BYTES_BIG_ENDIAN)
1723 /* BITPOS and BITLEN are exactly aligned and no shifting
1724 is necessary. */
1725 if (bitpos_mod + bitlen_mod == BITS_PER_UNIT
1726 || (bitpos_mod == 0 && bitlen_mod == 0))
1727 shift_amnt = 0;
1728 /* |. . . . . . . .|
1729 <bp > <blen >.
1730 We always shift right for BYTES_BIG_ENDIAN so shift the beginning
1731 of the value until it aligns with 'bp' in the next byte over. */
1732 else if (bitpos_mod + bitlen_mod < BITS_PER_UNIT)
1734 shift_amnt = bitlen_mod + bitpos_mod;
1735 skip_byte = bitlen_mod != 0;
1737 /* |. . . . . . . .|
1738 <----bp--->
1739 <---blen---->.
1740 Shift the value right within the same byte so it aligns with 'bp'. */
1741 else
1742 shift_amnt = bitlen_mod + bitpos_mod - BITS_PER_UNIT;
1744 else
1745 shift_amnt = bitpos % BITS_PER_UNIT;
1747 /* Create the shifted version of EXPR. */
1748 if (!BYTES_BIG_ENDIAN)
1750 shift_bytes_in_array (tmpbuf, byte_size, shift_amnt);
1751 if (shift_amnt == 0)
1752 byte_size--;
1754 else
1756 gcc_assert (BYTES_BIG_ENDIAN);
1757 shift_bytes_in_array_right (tmpbuf, byte_size, shift_amnt);
1758 /* If shifting right forced us to move into the next byte skip the now
1759 empty byte. */
1760 if (skip_byte)
1762 tmpbuf++;
1763 byte_size--;
1767 /* Insert the bits from TMPBUF. */
1768 for (unsigned int i = 0; i < byte_size; i++)
1769 ptr[first_byte + i] |= tmpbuf[i];
1771 return true;
1774 /* Sorting function for store_immediate_info objects.
1775 Sorts them by bitposition. */
1777 static int
1778 sort_by_bitpos (const void *x, const void *y)
1780 store_immediate_info *const *tmp = (store_immediate_info * const *) x;
1781 store_immediate_info *const *tmp2 = (store_immediate_info * const *) y;
1783 if ((*tmp)->bitpos < (*tmp2)->bitpos)
1784 return -1;
1785 else if ((*tmp)->bitpos > (*tmp2)->bitpos)
1786 return 1;
1787 else
1788 /* If they are the same let's use the order which is guaranteed to
1789 be different. */
1790 return (*tmp)->order - (*tmp2)->order;
1793 /* Sorting function for store_immediate_info objects.
1794 Sorts them by the order field. */
1796 static int
1797 sort_by_order (const void *x, const void *y)
1799 store_immediate_info *const *tmp = (store_immediate_info * const *) x;
1800 store_immediate_info *const *tmp2 = (store_immediate_info * const *) y;
1802 if ((*tmp)->order < (*tmp2)->order)
1803 return -1;
1804 else if ((*tmp)->order > (*tmp2)->order)
1805 return 1;
1807 gcc_unreachable ();
1810 /* Initialize a merged_store_group object from a store_immediate_info
1811 object. */
1813 merged_store_group::merged_store_group (store_immediate_info *info)
1815 start = info->bitpos;
1816 width = info->bitsize;
1817 bitregion_start = info->bitregion_start;
1818 bitregion_end = info->bitregion_end;
1819 /* VAL has memory allocated for it in apply_stores once the group
1820 width has been finalized. */
1821 val = NULL;
1822 mask = NULL;
1823 bit_insertion = false;
1824 unsigned HOST_WIDE_INT align_bitpos = 0;
1825 get_object_alignment_1 (gimple_assign_lhs (info->stmt),
1826 &align, &align_bitpos);
1827 align_base = start - align_bitpos;
1828 for (int i = 0; i < 2; ++i)
1830 store_operand_info &op = info->ops[i];
1831 if (op.base_addr == NULL_TREE)
1833 load_align[i] = 0;
1834 load_align_base[i] = 0;
1836 else
1838 get_object_alignment_1 (op.val, &load_align[i], &align_bitpos);
1839 load_align_base[i] = op.bitpos - align_bitpos;
1842 stores.create (1);
1843 stores.safe_push (info);
1844 last_stmt = info->stmt;
1845 last_order = info->order;
1846 first_stmt = last_stmt;
1847 first_order = last_order;
1848 buf_size = 0;
1851 merged_store_group::~merged_store_group ()
1853 if (val)
1854 XDELETEVEC (val);
1857 /* Return true if the store described by INFO can be merged into the group. */
1859 bool
1860 merged_store_group::can_be_merged_into (store_immediate_info *info)
1862 /* Do not merge bswap patterns. */
1863 if (info->rhs_code == LROTATE_EXPR)
1864 return false;
1866 /* The canonical case. */
1867 if (info->rhs_code == stores[0]->rhs_code)
1868 return true;
1870 /* BIT_INSERT_EXPR is compatible with INTEGER_CST. */
1871 if (info->rhs_code == BIT_INSERT_EXPR && stores[0]->rhs_code == INTEGER_CST)
1872 return true;
1874 if (stores[0]->rhs_code == BIT_INSERT_EXPR && info->rhs_code == INTEGER_CST)
1875 return true;
1877 /* We can turn MEM_REF into BIT_INSERT_EXPR for bit-field stores. */
1878 if (info->rhs_code == MEM_REF
1879 && (stores[0]->rhs_code == INTEGER_CST
1880 || stores[0]->rhs_code == BIT_INSERT_EXPR)
1881 && info->bitregion_start == stores[0]->bitregion_start
1882 && info->bitregion_end == stores[0]->bitregion_end)
1883 return true;
1885 if (stores[0]->rhs_code == MEM_REF
1886 && (info->rhs_code == INTEGER_CST
1887 || info->rhs_code == BIT_INSERT_EXPR)
1888 && info->bitregion_start == stores[0]->bitregion_start
1889 && info->bitregion_end == stores[0]->bitregion_end)
1890 return true;
1892 return false;
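
/* An illustrative summary of the checks above (not exhaustive):
     same rhs_code on both sides              -> mergeable
     BIT_INSERT_EXPR with INTEGER_CST         -> mergeable
     MEM_REF with INTEGER_CST/BIT_INSERT_EXPR -> mergeable if both cover
                                                 the same bit region
     LROTATE_EXPR (a recognized bswap)        -> never merged here.  */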
1895 /* Helper method for merge_into and merge_overlapping to do
1896 the common part. */
1898 void
1899 merged_store_group::do_merge (store_immediate_info *info)
1901 bitregion_start = MIN (bitregion_start, info->bitregion_start);
1902 bitregion_end = MAX (bitregion_end, info->bitregion_end);
1904 unsigned int this_align;
1905 unsigned HOST_WIDE_INT align_bitpos = 0;
1906 get_object_alignment_1 (gimple_assign_lhs (info->stmt),
1907 &this_align, &align_bitpos);
1908 if (this_align > align)
1910 align = this_align;
1911 align_base = info->bitpos - align_bitpos;
1913 for (int i = 0; i < 2; ++i)
1915 store_operand_info &op = info->ops[i];
1916 if (!op.base_addr)
1917 continue;
1919 get_object_alignment_1 (op.val, &this_align, &align_bitpos);
1920 if (this_align > load_align[i])
1922 load_align[i] = this_align;
1923 load_align_base[i] = op.bitpos - align_bitpos;
1927 gimple *stmt = info->stmt;
1928 stores.safe_push (info);
1929 if (info->order > last_order)
1931 last_order = info->order;
1932 last_stmt = stmt;
1934 else if (info->order < first_order)
1936 first_order = info->order;
1937 first_stmt = stmt;
1941 /* Merge a store recorded by INFO into this merged store.
1942 The store is not overlapping with the existing recorded
1943 stores. */
1945 void
1946 merged_store_group::merge_into (store_immediate_info *info)
1948 /* Make sure we're inserting in the position we think we're inserting. */
1949 gcc_assert (info->bitpos >= start + width
1950 && info->bitregion_start <= bitregion_end);
1952 width = info->bitpos + info->bitsize - start;
1953 do_merge (info);
1956 /* Merge a store described by INFO into this merged store.
1957 INFO overlaps in some way with the current store (i.e. it's not contiguous,
1958 the case handled by merged_store_group::merge_into).
1960 void
1961 merged_store_group::merge_overlapping (store_immediate_info *info)
1963 /* If the store extends the size of the group, extend the width. */
1964 if (info->bitpos + info->bitsize > start + width)
1965 width = info->bitpos + info->bitsize - start;
1967 do_merge (info);
1970 /* Go through all the recorded stores in this group in program order and
1971 apply their values to the VAL byte array to create the final merged
1972 value. Return true if the operation succeeded. */
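
/* Illustrative example (assumed, little-endian): for a group whose bit
   region covers bytes 0..2 and which records the stores
     MEM[(char *)p_1] = 1;
     MEM[(short *)p_1 + 1B] = 0x0302;
   the buffer is rounded up to 4 bytes and apply_stores produces
     val  = { 0x01, 0x02, 0x03, 0x00 }
     mask = { 0x00, 0x00, 0x00, 0xff }
   i.e. MASK keeps ~0 only in bytes no store has written.  */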
1974 bool
1975 merged_store_group::apply_stores ()
1977 /* Make sure we have more than one store in the group, otherwise we cannot
1978 merge anything. */
1979 if (bitregion_start % BITS_PER_UNIT != 0
1980 || bitregion_end % BITS_PER_UNIT != 0
1981 || stores.length () == 1)
1982 return false;
1984 stores.qsort (sort_by_order);
1985 store_immediate_info *info;
1986 unsigned int i;
1987 /* Create a power-of-2-sized buffer for native_encode_expr. */
1988 buf_size = 1 << ceil_log2 ((bitregion_end - bitregion_start) / BITS_PER_UNIT);
1989 val = XNEWVEC (unsigned char, 2 * buf_size);
1990 mask = val + buf_size;
1991 memset (val, 0, buf_size);
1992 memset (mask, ~0U, buf_size);
1994 FOR_EACH_VEC_ELT (stores, i, info)
1996 unsigned int pos_in_buffer = info->bitpos - bitregion_start;
1997 tree cst;
1998 if (info->ops[0].val && info->ops[0].base_addr == NULL_TREE)
1999 cst = info->ops[0].val;
2000 else if (info->ops[1].val && info->ops[1].base_addr == NULL_TREE)
2001 cst = info->ops[1].val;
2002 else
2003 cst = NULL_TREE;
2004 bool ret = true;
2005 if (cst)
2007 if (info->rhs_code == BIT_INSERT_EXPR)
2008 bit_insertion = true;
2009 else
2010 ret = encode_tree_to_bitpos (cst, val, info->bitsize,
2011 pos_in_buffer, buf_size);
2013 unsigned char *m = mask + (pos_in_buffer / BITS_PER_UNIT);
2014 if (BYTES_BIG_ENDIAN)
2015 clear_bit_region_be (m, (BITS_PER_UNIT - 1
2016 - (pos_in_buffer % BITS_PER_UNIT)),
2017 info->bitsize);
2018 else
2019 clear_bit_region (m, pos_in_buffer % BITS_PER_UNIT, info->bitsize);
2020 if (cst && dump_file && (dump_flags & TDF_DETAILS))
2022 if (ret)
2024 fputs ("After writing ", dump_file);
2025 print_generic_expr (dump_file, cst, TDF_NONE);
2026 fprintf (dump_file, " of size " HOST_WIDE_INT_PRINT_DEC
2027 " at position %d\n", info->bitsize, pos_in_buffer);
2028 fputs (" the merged value contains ", dump_file);
2029 dump_char_array (dump_file, val, buf_size);
2030 fputs (" the merged mask contains ", dump_file);
2031 dump_char_array (dump_file, mask, buf_size);
2032 if (bit_insertion)
2033 fputs (" bit insertion is required\n", dump_file);
2035 else
2036 fprintf (dump_file, "Failed to merge stores\n");
2038 if (!ret)
2039 return false;
2041 stores.qsort (sort_by_bitpos);
2042 return true;
2045 /* Structure describing the store chain. */
2047 struct imm_store_chain_info
2049 /* Doubly-linked list that imposes an order on chain processing.
2050 PNXP (prev's next pointer) points to the head of a list, or to
2051 the next field in the previous chain in the list.
2052 See pass_store_merging::m_stores_head for more rationale. */
2053 imm_store_chain_info *next, **pnxp;
2054 tree base_addr;
2055 auto_vec<store_immediate_info *> m_store_info;
2056 auto_vec<merged_store_group *> m_merged_store_groups;
2058 imm_store_chain_info (imm_store_chain_info *&inspt, tree b_a)
2059 : next (inspt), pnxp (&inspt), base_addr (b_a)
2061 inspt = this;
2062 if (next)
2064 gcc_checking_assert (pnxp == next->pnxp);
2065 next->pnxp = &next;
2068 ~imm_store_chain_info ()
2070 *pnxp = next;
2071 if (next)
2073 gcc_checking_assert (&next == next->pnxp);
2074 next->pnxp = pnxp;
2077 bool terminate_and_process_chain ();
2078 bool try_coalesce_bswap (merged_store_group *, unsigned int, unsigned int);
2079 bool coalesce_immediate_stores ();
2080 bool output_merged_store (merged_store_group *);
2081 bool output_merged_stores ();
2084 const pass_data pass_data_tree_store_merging = {
2085 GIMPLE_PASS, /* type */
2086 "store-merging", /* name */
2087 OPTGROUP_NONE, /* optinfo_flags */
2088 TV_GIMPLE_STORE_MERGING, /* tv_id */
2089 PROP_ssa, /* properties_required */
2090 0, /* properties_provided */
2091 0, /* properties_destroyed */
2092 0, /* todo_flags_start */
2093 TODO_update_ssa, /* todo_flags_finish */
2096 class pass_store_merging : public gimple_opt_pass
2098 public:
2099 pass_store_merging (gcc::context *ctxt)
2100 : gimple_opt_pass (pass_data_tree_store_merging, ctxt), m_stores_head ()
2104 /* Pass not supported for PDP-endian, nor for insane hosts or
2105 target character sizes where native_{encode,interpret}_expr
2106 doesn't work properly. */
2107 virtual bool
2108 gate (function *)
2110 return flag_store_merging
2111 && BYTES_BIG_ENDIAN == WORDS_BIG_ENDIAN
2112 && CHAR_BIT == 8
2113 && BITS_PER_UNIT == 8;
2116 virtual unsigned int execute (function *);
2118 private:
2119 hash_map<tree_operand_hash, struct imm_store_chain_info *> m_stores;
2121 /* Form a doubly-linked stack of the elements of m_stores, so that
2122 we can iterate over them in a predictable way. Using this order
2123 avoids extraneous differences in the compiler output just because
2124 of tree pointer variations (e.g. different chains end up in
2125 different positions of m_stores, so they are handled in different
2126 orders, so they allocate or release SSA names in different
2127 orders, and when they get reused, subsequent passes end up
2128 getting different SSA names, which may ultimately change
2129 decisions when going out of SSA). */
2130 imm_store_chain_info *m_stores_head;
2132 void process_store (gimple *);
2133 bool terminate_and_process_all_chains ();
2134 bool terminate_all_aliasing_chains (imm_store_chain_info **, gimple *);
2135 bool terminate_and_release_chain (imm_store_chain_info *);
2136 }; // class pass_store_merging
2138 /* Terminate and process all recorded chains. Return true if any changes
2139 were made. */
2141 bool
2142 pass_store_merging::terminate_and_process_all_chains ()
2144 bool ret = false;
2145 while (m_stores_head)
2146 ret |= terminate_and_release_chain (m_stores_head);
2147 gcc_assert (m_stores.elements () == 0);
2148 gcc_assert (m_stores_head == NULL);
2150 return ret;
2153 /* Terminate all chains that are affected by the statement STMT.
2154 CHAIN_INFO is the chain we should ignore from the checks if
2155 non-NULL. */
2157 bool
2158 pass_store_merging::terminate_all_aliasing_chains (imm_store_chain_info
2159 **chain_info,
2160 gimple *stmt)
2162 bool ret = false;
2164 /* If the statement doesn't touch memory it can't alias. */
2165 if (!gimple_vuse (stmt))
2166 return false;
2168 tree store_lhs = gimple_store_p (stmt) ? gimple_get_lhs (stmt) : NULL_TREE;
2169 for (imm_store_chain_info *next = m_stores_head, *cur = next; cur; cur = next)
2171 next = cur->next;
2173 /* We already checked all the stores in chain_info and terminated the
2174 chain if necessary. Skip it here. */
2175 if (chain_info && *chain_info == cur)
2176 continue;
2178 store_immediate_info *info;
2179 unsigned int i;
2180 FOR_EACH_VEC_ELT (cur->m_store_info, i, info)
2182 tree lhs = gimple_assign_lhs (info->stmt);
2183 if (ref_maybe_used_by_stmt_p (stmt, lhs)
2184 || stmt_may_clobber_ref_p (stmt, lhs)
2185 || (store_lhs && refs_output_dependent_p (store_lhs, lhs)))
2187 if (dump_file && (dump_flags & TDF_DETAILS))
2189 fprintf (dump_file, "stmt causes chain termination:\n");
2190 print_gimple_stmt (dump_file, stmt, 0);
2192 terminate_and_release_chain (cur);
2193 ret = true;
2194 break;
2199 return ret;
2202 /* Helper function. Terminate the chain recorded in CHAIN_INFO.
2203 Return true if the merging and output was successful. The m_stores
2204 entry is removed after the processing in any case. */
2206 bool
2207 pass_store_merging::terminate_and_release_chain (imm_store_chain_info *chain_info)
2209 bool ret = chain_info->terminate_and_process_chain ();
2210 m_stores.remove (chain_info->base_addr);
2211 delete chain_info;
2212 return ret;
2215 /* Return true if stmts in between FIRST (inclusive) and LAST (exclusive)
2216 may clobber REF. FIRST and LAST must be in the same basic block and
2217 have non-NULL vdef. We want to be able to sink load of REF across
2218 stores between FIRST and LAST, up to right before LAST. */
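
/* Illustrative use (assumed): when merging the copy pairs
     _1 = q[0]; p[0] = _1;
     _2 = q[1]; p[1] = _2;
   the original loads are effectively sunk to the location of the merged
   store, so compatible_load_p uses this function to verify that no store
   between the first and last statement of the group may clobber q[0]
   or q[1].  */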
2220 bool
2221 stmts_may_clobber_ref_p (gimple *first, gimple *last, tree ref)
2223 ao_ref r;
2224 ao_ref_init (&r, ref);
2225 unsigned int count = 0;
2226 tree vop = gimple_vdef (last);
2227 gimple *stmt;
2229 gcc_checking_assert (gimple_bb (first) == gimple_bb (last));
2232 stmt = SSA_NAME_DEF_STMT (vop);
2233 if (stmt_may_clobber_ref_p_1 (stmt, &r))
2234 return true;
2235 if (gimple_store_p (stmt)
2236 && refs_anti_dependent_p (ref, gimple_get_lhs (stmt)))
2237 return true;
2238 /* Avoid quadratic compile time by bounding the number of checks
2239 we perform. */
2240 if (++count > MAX_STORE_ALIAS_CHECKS)
2241 return true;
2242 vop = gimple_vuse (stmt);
2244 while (stmt != first);
2245 return false;
2248 /* Return true if INFO->ops[IDX] is mergeable with the
2249 corresponding loads already in MERGED_STORE group.
2250 BASE_ADDR is the base address of the whole store group. */
2252 bool
2253 compatible_load_p (merged_store_group *merged_store,
2254 store_immediate_info *info,
2255 tree base_addr, int idx)
2257 store_immediate_info *infof = merged_store->stores[0];
2258 if (!info->ops[idx].base_addr
2259 || maybe_ne (info->ops[idx].bitpos - infof->ops[idx].bitpos,
2260 info->bitpos - infof->bitpos)
2261 || !operand_equal_p (info->ops[idx].base_addr,
2262 infof->ops[idx].base_addr, 0))
2263 return false;
2265 store_immediate_info *infol = merged_store->stores.last ();
2266 tree load_vuse = gimple_vuse (info->ops[idx].stmt);
2267 /* In this case all vuses should be the same, e.g.
2268 _1 = s.a; _2 = s.b; _3 = _1 | 1; t.a = _3; _4 = _2 | 2; t.b = _4;
2270 _1 = s.a; _2 = s.b; t.a = _1; t.b = _2;
2271 and we can emit the coalesced load next to any of those loads. */
2272 if (gimple_vuse (infof->ops[idx].stmt) == load_vuse
2273 && gimple_vuse (infol->ops[idx].stmt) == load_vuse)
2274 return true;
2276 /* Otherwise, at least for now require that the load has the same
2277 vuse as the store. See following examples. */
2278 if (gimple_vuse (info->stmt) != load_vuse)
2279 return false;
2281 if (gimple_vuse (infof->stmt) != gimple_vuse (infof->ops[idx].stmt)
2282 || (infof != infol
2283 && gimple_vuse (infol->stmt) != gimple_vuse (infol->ops[idx].stmt)))
2284 return false;
2286 /* If the load is from the same location as the store, already
2287 the construction of the immediate chain info guarantees no intervening
2288 stores, so no further checks are needed. Example:
2289 _1 = s.a; _2 = _1 & -7; s.a = _2; _3 = s.b; _4 = _3 & -7; s.b = _4; */
2290 if (known_eq (info->ops[idx].bitpos, info->bitpos)
2291 && operand_equal_p (info->ops[idx].base_addr, base_addr, 0))
2292 return true;
2294 /* Otherwise, we need to punt if any of the loads can be clobbered by any
2295 of the stores in the group, or any other stores in between those.
2296 Previous calls to compatible_load_p ensured that for all the
2297 merged_store->stores IDX loads, no stmts starting with
2298 merged_store->first_stmt and ending right before merged_store->last_stmt
2299 clobbers those loads. */
2300 gimple *first = merged_store->first_stmt;
2301 gimple *last = merged_store->last_stmt;
2302 unsigned int i;
2303 store_immediate_info *infoc;
2304 /* The stores are sorted by increasing store bitpos, so if info->stmt store
2305 comes before the so far first load, we'll be changing
2306 merged_store->first_stmt. In that case we need to give up if
2307 any of the earlier processed loads could be clobbered by the stmts
2308 in the new range.
2309 if (info->order < merged_store->first_order)
2311 FOR_EACH_VEC_ELT (merged_store->stores, i, infoc)
2312 if (stmts_may_clobber_ref_p (info->stmt, first, infoc->ops[idx].val))
2313 return false;
2314 first = info->stmt;
2316 /* Similarly, we could change merged_store->last_stmt, so ensure
2317 in that case no stmts in the new range clobber any of the earlier
2318 processed loads. */
2319 else if (info->order > merged_store->last_order)
2321 FOR_EACH_VEC_ELT (merged_store->stores, i, infoc)
2322 if (stmts_may_clobber_ref_p (last, info->stmt, infoc->ops[idx].val))
2323 return false;
2324 last = info->stmt;
2326 /* And finally, we'd be adding a new load to the set, ensure it isn't
2327 clobbered in the new range. */
2328 if (stmts_may_clobber_ref_p (first, last, info->ops[idx].val))
2329 return false;
2331 /* Otherwise, we are looking for:
2332 _1 = s.a; _2 = _1 ^ 15; t.a = _2; _3 = s.b; _4 = _3 ^ 15; t.b = _4;
2334 _1 = s.a; t.a = _1; _2 = s.b; t.b = _2; */
2335 return true;
2338 /* Add all refs loaded to compute VAL to REFS vector. */
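
/* For example (illustrative), if VAL is _3 defined by
     _1 = s.a;
     _2 = s.b;
     _3 = _1 | _2;
   the recursion below pushes the references s.a and s.b into REFS.  */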
2340 void
2341 gather_bswap_load_refs (vec<tree> *refs, tree val)
2343 if (TREE_CODE (val) != SSA_NAME)
2344 return;
2346 gimple *stmt = SSA_NAME_DEF_STMT (val);
2347 if (!is_gimple_assign (stmt))
2348 return;
2350 if (gimple_assign_load_p (stmt))
2352 refs->safe_push (gimple_assign_rhs1 (stmt));
2353 return;
2356 switch (gimple_assign_rhs_class (stmt))
2358 case GIMPLE_BINARY_RHS:
2359 gather_bswap_load_refs (refs, gimple_assign_rhs2 (stmt));
2360 /* FALLTHRU */
2361 case GIMPLE_UNARY_RHS:
2362 gather_bswap_load_refs (refs, gimple_assign_rhs1 (stmt));
2363 break;
2364 default:
2365 gcc_unreachable ();
2369 /* Check if there are any stores in M_STORE_INFO after index I
2370 (where M_STORE_INFO must be sorted by sort_by_bitpos) that overlap
2371 a potential group ending at END and have their order
2372 smaller than LAST_ORDER. RHS_CODE is the kind of store in the
2373 group. Return true if there are no such stores.
2374 Consider:
2375 MEM[(long long int *)p_28] = 0;
2376 MEM[(long long int *)p_28 + 8B] = 0;
2377 MEM[(long long int *)p_28 + 16B] = 0;
2378 MEM[(long long int *)p_28 + 24B] = 0;
2379 _129 = (int) _130;
2380 MEM[(int *)p_28 + 8B] = _129;
2381 MEM[(int *)p_28].a = -1;
2382 We already have
2383 MEM[(long long int *)p_28] = 0;
2384 MEM[(int *)p_28].a = -1;
2385 stmts in the current group and need to consider if it is safe to
2386 add MEM[(long long int *)p_28 + 8B] = 0; store into the same group.
2387 There is an overlap between that store and the MEM[(int *)p_28 + 8B] = _129;
2388 store though, so if we add the MEM[(long long int *)p_28 + 8B] = 0;
2389 into the group and merging of those 3 stores is successful, merged
2390 stmts will be emitted at the latest store from that group, i.e.
2391 LAST_ORDER, which is the MEM[(int *)p_28].a = -1; store.
2392 The MEM[(int *)p_28 + 8B] = _129; store that originally follows
2393 the MEM[(long long int *)p_28 + 8B] = 0; would now be before it,
2394 so we need to refuse merging MEM[(long long int *)p_28 + 8B] = 0;
2395 into the group. That way it will be its own store group and will
2396 not be touched. If RHS_CODE is INTEGER_CST and there are overlapping
2397 INTEGER_CST stores, those are mergeable using merge_overlapping,
2398 so don't return false for those. */
2400 static bool
2401 check_no_overlap (vec<store_immediate_info *> m_store_info, unsigned int i,
2402 enum tree_code rhs_code, unsigned int last_order,
2403 unsigned HOST_WIDE_INT end)
2405 unsigned int len = m_store_info.length ();
2406 for (++i; i < len; ++i)
2408 store_immediate_info *info = m_store_info[i];
2409 if (info->bitpos >= end)
2410 break;
2411 if (info->order < last_order
2412 && (rhs_code != INTEGER_CST || info->rhs_code != INTEGER_CST))
2413 return false;
2415 return true;
2418 /* Return true if m_store_info[first] and at least one following store
2419 form a group which stores a try_size-bit value that is byte swapped
2420 from a memory load or some value, or is the identity of some value.
2421 This uses the bswap pass APIs. */
2423 bool
2424 imm_store_chain_info::try_coalesce_bswap (merged_store_group *merged_store,
2425 unsigned int first,
2426 unsigned int try_size)
2428 unsigned int len = m_store_info.length (), last = first;
2429 unsigned HOST_WIDE_INT width = m_store_info[first]->bitsize;
2430 if (width >= try_size)
2431 return false;
2432 for (unsigned int i = first + 1; i < len; ++i)
2434 if (m_store_info[i]->bitpos != m_store_info[first]->bitpos + width
2435 || m_store_info[i]->ins_stmt == NULL)
2436 return false;
2437 width += m_store_info[i]->bitsize;
2438 if (width >= try_size)
2440 last = i;
2441 break;
2444 if (width != try_size)
2445 return false;
2447 bool allow_unaligned
2448 = !STRICT_ALIGNMENT && PARAM_VALUE (PARAM_STORE_MERGING_ALLOW_UNALIGNED);
2449 /* Punt if the combined store would not be aligned and we need alignment. */
2450 if (!allow_unaligned)
2452 unsigned int align = merged_store->align;
2453 unsigned HOST_WIDE_INT align_base = merged_store->align_base;
2454 for (unsigned int i = first + 1; i <= last; ++i)
2456 unsigned int this_align;
2457 unsigned HOST_WIDE_INT align_bitpos = 0;
2458 get_object_alignment_1 (gimple_assign_lhs (m_store_info[i]->stmt),
2459 &this_align, &align_bitpos);
2460 if (this_align > align)
2462 align = this_align;
2463 align_base = m_store_info[i]->bitpos - align_bitpos;
2466 unsigned HOST_WIDE_INT align_bitpos
2467 = (m_store_info[first]->bitpos - align_base) & (align - 1);
2468 if (align_bitpos)
2469 align = least_bit_hwi (align_bitpos);
2470 if (align < try_size)
2471 return false;
2474 tree type;
2475 switch (try_size)
2477 case 16: type = uint16_type_node; break;
2478 case 32: type = uint32_type_node; break;
2479 case 64: type = uint64_type_node; break;
2480 default: gcc_unreachable ();
2482 struct symbolic_number n;
2483 gimple *ins_stmt = NULL;
2484 int vuse_store = -1;
2485 unsigned int first_order = merged_store->first_order;
2486 unsigned int last_order = merged_store->last_order;
2487 gimple *first_stmt = merged_store->first_stmt;
2488 gimple *last_stmt = merged_store->last_stmt;
2489 unsigned HOST_WIDE_INT end = merged_store->start + merged_store->width;
2490 store_immediate_info *infof = m_store_info[first];
2492 for (unsigned int i = first; i <= last; ++i)
2494 store_immediate_info *info = m_store_info[i];
2495 struct symbolic_number this_n = info->n;
2496 this_n.type = type;
2497 if (!this_n.base_addr)
2498 this_n.range = try_size / BITS_PER_UNIT;
2499 else
2500 /* Update vuse in case it has changed by output_merged_stores. */
2501 this_n.vuse = gimple_vuse (info->ins_stmt);
2502 unsigned int bitpos = info->bitpos - infof->bitpos;
2503 if (!do_shift_rotate (LSHIFT_EXPR, &this_n,
2504 BYTES_BIG_ENDIAN
2505 ? try_size - info->bitsize - bitpos
2506 : bitpos))
2507 return false;
2508 if (this_n.base_addr && vuse_store)
2510 unsigned int j;
2511 for (j = first; j <= last; ++j)
2512 if (this_n.vuse == gimple_vuse (m_store_info[j]->stmt))
2513 break;
2514 if (j > last)
2516 if (vuse_store == 1)
2517 return false;
2518 vuse_store = 0;
2521 if (i == first)
2523 n = this_n;
2524 ins_stmt = info->ins_stmt;
2526 else
2528 if (n.base_addr && n.vuse != this_n.vuse)
2530 if (vuse_store == 0)
2531 return false;
2532 vuse_store = 1;
2534 if (info->order > last_order)
2536 last_order = info->order;
2537 last_stmt = info->stmt;
2539 else if (info->order < first_order)
2541 first_order = info->order;
2542 first_stmt = info->stmt;
2544 end = MAX (end, info->bitpos + info->bitsize);
2546 ins_stmt = perform_symbolic_merge (ins_stmt, &n, info->ins_stmt,
2547 &this_n, &n);
2548 if (ins_stmt == NULL)
2549 return false;
2553 uint64_t cmpxchg, cmpnop;
2554 find_bswap_or_nop_finalize (&n, &cmpxchg, &cmpnop);
2556 /* A complete byte swap should make the symbolic number start with
2557 the largest digit in the highest order byte. An unchanged symbolic
2558 number indicates a read with the same endianness as the target architecture. */
2559 if (n.n != cmpnop && n.n != cmpxchg)
2560 return false;
2562 if (n.base_addr == NULL_TREE && !is_gimple_val (n.src))
2563 return false;
2565 if (!check_no_overlap (m_store_info, last, LROTATE_EXPR, last_order, end))
2566 return false;
2568 /* Don't handle memory copy this way if normal non-bswap processing
2569 would handle it too. */
2570 if (n.n == cmpnop && (unsigned) n.n_ops == last - first + 1)
2572 unsigned int i;
2573 for (i = first; i <= last; ++i)
2574 if (m_store_info[i]->rhs_code != MEM_REF)
2575 break;
2576 if (i == last + 1)
2577 return false;
2580 if (n.n == cmpxchg)
2581 switch (try_size)
2583 case 16:
2584 /* Will emit LROTATE_EXPR. */
2585 break;
2586 case 32:
2587 if (builtin_decl_explicit_p (BUILT_IN_BSWAP32)
2588 && optab_handler (bswap_optab, SImode) != CODE_FOR_nothing)
2589 break;
2590 return false;
2591 case 64:
2592 if (builtin_decl_explicit_p (BUILT_IN_BSWAP64)
2593 && optab_handler (bswap_optab, DImode) != CODE_FOR_nothing)
2594 break;
2595 return false;
2596 default:
2597 gcc_unreachable ();
2600 if (!allow_unaligned && n.base_addr)
2602 unsigned int align = get_object_alignment (n.src);
2603 if (align < try_size)
2604 return false;
2607 /* If each load has the vuse of the corresponding store, we need to verify
2608 that the loads can be sunk right before the last store. */
2609 if (vuse_store == 1)
2611 auto_vec<tree, 64> refs;
2612 for (unsigned int i = first; i <= last; ++i)
2613 gather_bswap_load_refs (&refs,
2614 gimple_assign_rhs1 (m_store_info[i]->stmt));
2616 unsigned int i;
2617 tree ref;
2618 FOR_EACH_VEC_ELT (refs, i, ref)
2619 if (stmts_may_clobber_ref_p (first_stmt, last_stmt, ref))
2620 return false;
2621 n.vuse = NULL_TREE;
2624 infof->n = n;
2625 infof->ins_stmt = ins_stmt;
2626 for (unsigned int i = first; i <= last; ++i)
2628 m_store_info[i]->rhs_code = n.n == cmpxchg ? LROTATE_EXPR : NOP_EXPR;
2629 m_store_info[i]->ops[0].base_addr = NULL_TREE;
2630 m_store_info[i]->ops[1].base_addr = NULL_TREE;
2631 if (i != first)
2632 merged_store->merge_into (m_store_info[i]);
2635 return true;
2638 /* Go through the candidate stores recorded in m_store_info and merge them
2639 into merged_store_group objects recorded into m_merged_store_groups
2640 representing the widened stores. Return true if coalescing was successful
2641 and the number of widened stores is fewer than the original number
2642 of stores. */
2644 bool
2645 imm_store_chain_info::coalesce_immediate_stores ()
2647 /* Anything less can't be processed. */
2648 if (m_store_info.length () < 2)
2649 return false;
2651 if (dump_file && (dump_flags & TDF_DETAILS))
2652 fprintf (dump_file, "Attempting to coalesce %u stores in chain\n",
2653 m_store_info.length ());
2655 store_immediate_info *info;
2656 unsigned int i, ignore = 0;
2658 /* Order the stores by the bitposition they write to. */
2659 m_store_info.qsort (sort_by_bitpos);
2661 info = m_store_info[0];
2662 merged_store_group *merged_store = new merged_store_group (info);
2663 if (dump_file && (dump_flags & TDF_DETAILS))
2664 fputs ("New store group\n", dump_file);
2666 FOR_EACH_VEC_ELT (m_store_info, i, info)
2668 if (i <= ignore)
2669 goto done;
2671 /* First try to handle group of stores like:
2672 p[0] = data >> 24;
2673 p[1] = data >> 16;
2674 p[2] = data >> 8;
2675 p[3] = data;
2676 using the bswap framework. */
2677 if (info->bitpos == merged_store->start + merged_store->width
2678 && merged_store->stores.length () == 1
2679 && merged_store->stores[0]->ins_stmt != NULL
2680 && info->ins_stmt != NULL)
2682 unsigned int try_size;
2683 for (try_size = 64; try_size >= 16; try_size >>= 1)
2684 if (try_coalesce_bswap (merged_store, i - 1, try_size))
2685 break;
2687 if (try_size >= 16)
2689 ignore = i + merged_store->stores.length () - 1;
2690 m_merged_store_groups.safe_push (merged_store);
2691 if (ignore < m_store_info.length ())
2692 merged_store = new merged_store_group (m_store_info[ignore]);
2693 else
2694 merged_store = NULL;
2695 goto done;
2699 /* |---store 1---|
2700 |---store 2---|
2701 Overlapping stores. */
2702 if (IN_RANGE (info->bitpos, merged_store->start,
2703 merged_store->start + merged_store->width - 1))
2705 /* Only allow overlapping stores of constants. */
2706 if (info->rhs_code == INTEGER_CST)
2708 bool only_constants = true;
2709 store_immediate_info *infoj;
2710 unsigned int j;
2711 FOR_EACH_VEC_ELT (merged_store->stores, j, infoj)
2712 if (infoj->rhs_code != INTEGER_CST)
2714 only_constants = false;
2715 break;
2717 unsigned int last_order
2718 = MAX (merged_store->last_order, info->order);
2719 unsigned HOST_WIDE_INT end
2720 = MAX (merged_store->start + merged_store->width,
2721 info->bitpos + info->bitsize);
2722 if (only_constants
2723 && check_no_overlap (m_store_info, i, INTEGER_CST,
2724 last_order, end))
2726 /* check_no_overlap call above made sure there are no
2727 overlapping stores with non-INTEGER_CST rhs_code
2728 in between the first and last of the stores we've
2729 just merged. If there are any INTEGER_CST rhs_code
2730 stores in between, we need to merge_overlapping them
2731 even if in the sort_by_bitpos order there are other
2732 overlapping stores in between. Keep those stores as is.
2733 Example:
2734 MEM[(int *)p_28] = 0;
2735 MEM[(char *)p_28 + 3B] = 1;
2736 MEM[(char *)p_28 + 1B] = 2;
2737 MEM[(char *)p_28 + 2B] = MEM[(char *)p_28 + 6B];
2738 We can't merge the zero store with the store of two and
2739 not merge anything else, because the store of one is
2740 in the original order in between those two, but in
2741 sort_by_bitpos order it comes after the last store that
2742 we can't merge with them. We can merge the first 3 stores
2743 and keep the last store as is though. */
2744 unsigned int len = m_store_info.length (), k = i;
2745 for (unsigned int j = i + 1; j < len; ++j)
2747 store_immediate_info *info2 = m_store_info[j];
2748 if (info2->bitpos >= end)
2749 break;
2750 if (info2->order < last_order)
2752 if (info2->rhs_code != INTEGER_CST)
2754 /* Normally check_no_overlap makes sure this
2755 doesn't happen, but if end grows below, then
2756 we need to process more stores than
2757 check_no_overlap verified. Example:
2758 MEM[(int *)p_5] = 0;
2759 MEM[(short *)p_5 + 3B] = 1;
2760 MEM[(char *)p_5 + 4B] = _9;
2761 MEM[(char *)p_5 + 2B] = 2; */
2762 k = 0;
2763 break;
2765 k = j;
2766 end = MAX (end, info2->bitpos + info2->bitsize);
2770 if (k != 0)
2772 merged_store->merge_overlapping (info);
2774 for (unsigned int j = i + 1; j <= k; j++)
2776 store_immediate_info *info2 = m_store_info[j];
2777 gcc_assert (info2->bitpos < end);
2778 if (info2->order < last_order)
2780 gcc_assert (info2->rhs_code == INTEGER_CST);
2781 merged_store->merge_overlapping (info2);
2783 /* Other stores are kept and not merged in any
2784 way. */
2786 ignore = k;
2787 goto done;
2792 /* |---store 1---||---store 2---|
2793 This store is consecutive to the previous one.
2794 Merge it into the current store group. There can be gaps in between
2795 the stores, but there can't be gaps in between bitregions. */
2796 else if (info->bitregion_start <= merged_store->bitregion_end
2797 && merged_store->can_be_merged_into (info))
2799 store_immediate_info *infof = merged_store->stores[0];
2801 /* All the rhs_code ops that take 2 operands are commutative;
2802 swap the operands if it could make them compatible. */
2803 if (infof->ops[0].base_addr
2804 && infof->ops[1].base_addr
2805 && info->ops[0].base_addr
2806 && info->ops[1].base_addr
2807 && known_eq (info->ops[1].bitpos - infof->ops[0].bitpos,
2808 info->bitpos - infof->bitpos)
2809 && operand_equal_p (info->ops[1].base_addr,
2810 infof->ops[0].base_addr, 0))
2812 std::swap (info->ops[0], info->ops[1]);
2813 info->ops_swapped_p = true;
2815 if (check_no_overlap (m_store_info, i, info->rhs_code,
2816 MAX (merged_store->last_order, info->order),
2817 MAX (merged_store->start + merged_store->width,
2818 info->bitpos + info->bitsize)))
2820 /* Turn MEM_REF into BIT_INSERT_EXPR for bit-field stores. */
2821 if (info->rhs_code == MEM_REF && infof->rhs_code != MEM_REF)
2823 info->rhs_code = BIT_INSERT_EXPR;
2824 info->ops[0].val = gimple_assign_rhs1 (info->stmt);
2825 info->ops[0].base_addr = NULL_TREE;
2827 else if (infof->rhs_code == MEM_REF && info->rhs_code != MEM_REF)
2829 store_immediate_info *infoj;
2830 unsigned int j;
2831 FOR_EACH_VEC_ELT (merged_store->stores, j, infoj)
2833 infoj->rhs_code = BIT_INSERT_EXPR;
2834 infoj->ops[0].val = gimple_assign_rhs1 (infoj->stmt);
2835 infoj->ops[0].base_addr = NULL_TREE;
2838 if ((infof->ops[0].base_addr
2839 ? compatible_load_p (merged_store, info, base_addr, 0)
2840 : !info->ops[0].base_addr)
2841 && (infof->ops[1].base_addr
2842 ? compatible_load_p (merged_store, info, base_addr, 1)
2843 : !info->ops[1].base_addr))
2845 merged_store->merge_into (info);
2846 goto done;
2851 /* |---store 1---| <gap> |---store 2---|.
2852 Gap between stores or the rhs not compatible. Start a new group. */
2854 /* Try to apply all the stores recorded for the group to determine
2855 the bitpattern they write and discard it if that fails.
2856 This will also reject single-store groups. */
2857 if (merged_store->apply_stores ())
2858 m_merged_store_groups.safe_push (merged_store);
2859 else
2860 delete merged_store;
2862 merged_store = new merged_store_group (info);
2863 if (dump_file && (dump_flags & TDF_DETAILS))
2864 fputs ("New store group\n", dump_file);
2866 done:
2867 if (dump_file && (dump_flags & TDF_DETAILS))
2869 fprintf (dump_file, "Store %u:\nbitsize:" HOST_WIDE_INT_PRINT_DEC
2870 " bitpos:" HOST_WIDE_INT_PRINT_DEC " val:",
2871 i, info->bitsize, info->bitpos);
2872 print_generic_expr (dump_file, gimple_assign_rhs1 (info->stmt));
2873 fputc ('\n', dump_file);
2877 /* Record or discard the last store group. */
2878 if (merged_store)
2880 if (merged_store->apply_stores ())
2881 m_merged_store_groups.safe_push (merged_store);
2882 else
2883 delete merged_store;
2886 gcc_assert (m_merged_store_groups.length () <= m_store_info.length ());
2888 bool success
2889 = !m_merged_store_groups.is_empty ()
2890 && m_merged_store_groups.length () < m_store_info.length ();
2892 if (success && dump_file)
2893 fprintf (dump_file, "Coalescing successful!\nMerged into %u stores\n",
2894 m_merged_store_groups.length ());
2896 return success;
2899 /* Return the type to use for the merged stores or loads described by STMTS.
2900 This is needed to get the alias sets right. If IS_LOAD, look for rhs,
2901 otherwise lhs. Additionally set *CLIQUEP and *BASEP to MR_DEPENDENCE_*
2902 of the MEM_REFs if any. */
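
/* For example (illustrative), if the constituent stores write through
   both an 'int *' and a 'char *' view of the destination, the alias
   pointer types are not compatible and ptr_type_node is returned, which
   gives the merged access alias set 0, i.e. it conservatively aliases
   everything.  */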
2904 static tree
2905 get_alias_type_for_stmts (vec<gimple *> &stmts, bool is_load,
2906 unsigned short *cliquep, unsigned short *basep)
2908 gimple *stmt;
2909 unsigned int i;
2910 tree type = NULL_TREE;
2911 tree ret = NULL_TREE;
2912 *cliquep = 0;
2913 *basep = 0;
2915 FOR_EACH_VEC_ELT (stmts, i, stmt)
2917 tree ref = is_load ? gimple_assign_rhs1 (stmt)
2918 : gimple_assign_lhs (stmt);
2919 tree type1 = reference_alias_ptr_type (ref);
2920 tree base = get_base_address (ref);
2922 if (i == 0)
2924 if (TREE_CODE (base) == MEM_REF)
2926 *cliquep = MR_DEPENDENCE_CLIQUE (base);
2927 *basep = MR_DEPENDENCE_BASE (base);
2929 ret = type = type1;
2930 continue;
2932 if (!alias_ptr_types_compatible_p (type, type1))
2933 ret = ptr_type_node;
2934 if (TREE_CODE (base) != MEM_REF
2935 || *cliquep != MR_DEPENDENCE_CLIQUE (base)
2936 || *basep != MR_DEPENDENCE_BASE (base))
2938 *cliquep = 0;
2939 *basep = 0;
2942 return ret;
2945 /* Return the location_t information we can find among the statements
2946 in STMTS. */
2948 static location_t
2949 get_location_for_stmts (vec<gimple *> &stmts)
2951 gimple *stmt;
2952 unsigned int i;
2954 FOR_EACH_VEC_ELT (stmts, i, stmt)
2955 if (gimple_has_location (stmt))
2956 return gimple_location (stmt);
2958 return UNKNOWN_LOCATION;
2961 /* Used to describe a store resulting from splitting a wide store into smaller
2962 regularly-sized stores in split_group. */
2964 struct split_store
2966 unsigned HOST_WIDE_INT bytepos;
2967 unsigned HOST_WIDE_INT size;
2968 unsigned HOST_WIDE_INT align;
2969 auto_vec<store_immediate_info *> orig_stores;
2970 /* True if there is a single orig stmt covering the whole split store. */
2971 bool orig;
2972 split_store (unsigned HOST_WIDE_INT, unsigned HOST_WIDE_INT,
2973 unsigned HOST_WIDE_INT);
2976 /* Simple constructor. */
2978 split_store::split_store (unsigned HOST_WIDE_INT bp,
2979 unsigned HOST_WIDE_INT sz,
2980 unsigned HOST_WIDE_INT al)
2981 : bytepos (bp), size (sz), align (al), orig (false)
2983 orig_stores.create (0);
2986 /* Record all stores in GROUP that write to the region starting at BITPOS
2987 and of size BITSIZE. Record infos for such statements in STORES if
2988 non-NULL. The stores in GROUP must be sorted by bitposition. Return
2989 the store's info if there is exactly one original store in the range. */
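
/* For example (illustrative), with GROUP stores covering the bit ranges
   [0, 8), [8, 16) and [16, 32), a query with BITPOS 0 and BITSIZE 16
   pushes the first two infos into STORES and returns NULL (more than one
   store), whereas a query with BITPOS 16 and BITSIZE 16 returns the
   third info.  */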
2991 static store_immediate_info *
2992 find_constituent_stores (struct merged_store_group *group,
2993 vec<store_immediate_info *> *stores,
2994 unsigned int *first,
2995 unsigned HOST_WIDE_INT bitpos,
2996 unsigned HOST_WIDE_INT bitsize)
2998 store_immediate_info *info, *ret = NULL;
2999 unsigned int i;
3000 bool second = false;
3001 bool update_first = true;
3002 unsigned HOST_WIDE_INT end = bitpos + bitsize;
3003 for (i = *first; group->stores.iterate (i, &info); ++i)
3005 unsigned HOST_WIDE_INT stmt_start = info->bitpos;
3006 unsigned HOST_WIDE_INT stmt_end = stmt_start + info->bitsize;
3007 if (stmt_end <= bitpos)
3009 /* BITPOS passed to this function never decreases within the
3010 same split_group call, so optimize and don't scan info records
3011 which are known to end before or at BITPOS next time.
3012 Only do it if all stores before this one also pass this. */
3013 if (update_first)
3014 *first = i + 1;
3015 continue;
3017 else
3018 update_first = false;
3020 /* The stores in GROUP are ordered by bitposition so if we're past
3021 the region for this group return early. */
3022 if (stmt_start >= end)
3023 return ret;
3025 if (stores)
3027 stores->safe_push (info);
3028 if (ret)
3030 ret = NULL;
3031 second = true;
3034 else if (ret)
3035 return NULL;
3036 if (!second)
3037 ret = info;
3039 return ret;
3042 /* Return how many SSA_NAMEs used to compute the value to store in the INFO
3043 store have multiple uses. If any SSA_NAME has multiple uses, also
3044 count statements needed to compute it. */
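
/* For example (illustrative), for the INFO store
     _1 = s.a;
     _2 = _1 ^ 12;
     t.a = _2;
   where _1 also has uses outside this sequence, the function returns 1;
   split_group adds that to the estimated number of statements the new
   merged sequence will need.  */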
3046 static unsigned
3047 count_multiple_uses (store_immediate_info *info)
3049 gimple *stmt = info->stmt;
3050 unsigned ret = 0;
3051 switch (info->rhs_code)
3053 case INTEGER_CST:
3054 return 0;
3055 case BIT_AND_EXPR:
3056 case BIT_IOR_EXPR:
3057 case BIT_XOR_EXPR:
3058 if (info->bit_not_p)
3060 if (!has_single_use (gimple_assign_rhs1 (stmt)))
3061 ret = 1; /* Fall through below to return
3062 the BIT_NOT_EXPR stmt and then
3063 BIT_{AND,IOR,XOR}_EXPR and anything it
3064 uses. */
3065 else
3066 /* After this, stmt is the BIT_NOT_EXPR. */
3067 stmt = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (stmt));
3069 if (!has_single_use (gimple_assign_rhs1 (stmt)))
3071 ret += 1 + info->ops[0].bit_not_p;
3072 if (info->ops[1].base_addr)
3073 ret += 1 + info->ops[1].bit_not_p;
3074 return ret + 1;
3076 stmt = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (stmt));
3077 /* stmt is now the BIT_*_EXPR. */
3078 if (!has_single_use (gimple_assign_rhs1 (stmt)))
3079 ret += 1 + info->ops[info->ops_swapped_p].bit_not_p;
3080 else if (info->ops[info->ops_swapped_p].bit_not_p)
3082 gimple *stmt2 = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (stmt));
3083 if (!has_single_use (gimple_assign_rhs1 (stmt2)))
3084 ++ret;
3086 if (info->ops[1].base_addr == NULL_TREE)
3088 gcc_checking_assert (!info->ops_swapped_p);
3089 return ret;
3091 if (!has_single_use (gimple_assign_rhs2 (stmt)))
3092 ret += 1 + info->ops[1 - info->ops_swapped_p].bit_not_p;
3093 else if (info->ops[1 - info->ops_swapped_p].bit_not_p)
3095 gimple *stmt2 = SSA_NAME_DEF_STMT (gimple_assign_rhs2 (stmt));
3096 if (!has_single_use (gimple_assign_rhs1 (stmt2)))
3097 ++ret;
3099 return ret;
3100 case MEM_REF:
3101 if (!has_single_use (gimple_assign_rhs1 (stmt)))
3102 return 1 + info->ops[0].bit_not_p;
3103 else if (info->ops[0].bit_not_p)
3105 stmt = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (stmt));
3106 if (!has_single_use (gimple_assign_rhs1 (stmt)))
3107 return 1;
3109 return 0;
3110 case BIT_INSERT_EXPR:
3111 return has_single_use (gimple_assign_rhs1 (stmt)) ? 0 : 1;
3112 default:
3113 gcc_unreachable ();
3117 /* Split a merged store described by GROUP by populating the SPLIT_STORES
3118 vector (if non-NULL) with split_store structs describing the byte offset
3119 (from the base), the bit size and alignment of each store as well as the
3120 original statements involved in each such split group.
3121 This is to separate the splitting strategy from the statement
3122 building/emission/linking done in output_merged_store.
3123 Return number of new stores.
3124 If ALLOW_UNALIGNED_STORE is false, then all stores must be aligned.
3125 If ALLOW_UNALIGNED_LOAD is false, then all loads must be aligned.
3126 If SPLIT_STORES is NULL, it is just a dry run to count number of
3127 new stores. */
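
/* Illustrative example (assumed): for a group whose bit region covers
   6 bytes starting at a 4-byte-aligned position, with no padding bytes
   and no loads, a run with ALLOW_UNALIGNED_STORE false and
   MAX_STORE_BITSIZE >= 32 produces two split stores: a 4-byte store at
   byte offset 0 and a 2-byte store at byte offset 4.  */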
3129 static unsigned int
3130 split_group (merged_store_group *group, bool allow_unaligned_store,
3131 bool allow_unaligned_load,
3132 vec<struct split_store *> *split_stores,
3133 unsigned *total_orig,
3134 unsigned *total_new)
3136 unsigned HOST_WIDE_INT pos = group->bitregion_start;
3137 unsigned HOST_WIDE_INT size = group->bitregion_end - pos;
3138 unsigned HOST_WIDE_INT bytepos = pos / BITS_PER_UNIT;
3139 unsigned HOST_WIDE_INT group_align = group->align;
3140 unsigned HOST_WIDE_INT align_base = group->align_base;
3141 unsigned HOST_WIDE_INT group_load_align = group_align;
3142 bool any_orig = false;
3144 gcc_assert ((size % BITS_PER_UNIT == 0) && (pos % BITS_PER_UNIT == 0));
3146 if (group->stores[0]->rhs_code == LROTATE_EXPR
3147 || group->stores[0]->rhs_code == NOP_EXPR)
3149 /* For the bswap framework using sets of stores, all the checking
3150 has been done earlier in try_coalesce_bswap and the group needs to be
3151 emitted as a single store. */
3152 if (total_orig)
3154 /* Avoid the old/new stmt count heuristics. It should
3155 always be beneficial. */
3156 total_new[0] = 1;
3157 total_orig[0] = 2;
3160 if (split_stores)
3162 unsigned HOST_WIDE_INT align_bitpos
3163 = (group->start - align_base) & (group_align - 1);
3164 unsigned HOST_WIDE_INT align = group_align;
3165 if (align_bitpos)
3166 align = least_bit_hwi (align_bitpos);
3167 bytepos = group->start / BITS_PER_UNIT;
3168 struct split_store *store
3169 = new split_store (bytepos, group->width, align);
3170 unsigned int first = 0;
3171 find_constituent_stores (group, &store->orig_stores,
3172 &first, group->start, group->width);
3173 split_stores->safe_push (store);
3176 return 1;
3179 unsigned int ret = 0, first = 0;
3180 unsigned HOST_WIDE_INT try_pos = bytepos;
3182 if (total_orig)
3184 unsigned int i;
3185 store_immediate_info *info = group->stores[0];
3187 total_new[0] = 0;
3188 total_orig[0] = 1; /* The orig store. */
3189 info = group->stores[0];
3190 if (info->ops[0].base_addr)
3191 total_orig[0]++;
3192 if (info->ops[1].base_addr)
3193 total_orig[0]++;
3194 switch (info->rhs_code)
3196 case BIT_AND_EXPR:
3197 case BIT_IOR_EXPR:
3198 case BIT_XOR_EXPR:
3199 total_orig[0]++; /* The orig BIT_*_EXPR stmt. */
3200 break;
3201 default:
3202 break;
3204 total_orig[0] *= group->stores.length ();
3206 FOR_EACH_VEC_ELT (group->stores, i, info)
3208 total_new[0] += count_multiple_uses (info);
3209 total_orig[0] += (info->bit_not_p
3210 + info->ops[0].bit_not_p
3211 + info->ops[1].bit_not_p);
3215 if (!allow_unaligned_load)
3216 for (int i = 0; i < 2; ++i)
3217 if (group->load_align[i])
3218 group_load_align = MIN (group_load_align, group->load_align[i]);
3220 while (size > 0)
3222 if ((allow_unaligned_store || group_align <= BITS_PER_UNIT)
3223 && group->mask[try_pos - bytepos] == (unsigned char) ~0U)
3225 /* Skip padding bytes. */
3226 ++try_pos;
3227 size -= BITS_PER_UNIT;
3228 continue;
3231 unsigned HOST_WIDE_INT try_bitpos = try_pos * BITS_PER_UNIT;
3232 unsigned int try_size = MAX_STORE_BITSIZE, nonmasked;
3233 unsigned HOST_WIDE_INT align_bitpos
3234 = (try_bitpos - align_base) & (group_align - 1);
3235 unsigned HOST_WIDE_INT align = group_align;
3236 if (align_bitpos)
3237 align = least_bit_hwi (align_bitpos);
3238 if (!allow_unaligned_store)
3239 try_size = MIN (try_size, align);
3240 if (!allow_unaligned_load)
3242 /* If we can't do or don't want to do unaligned stores
3243 as well as loads, we need to take the loads into account
3244 as well. */
3245 unsigned HOST_WIDE_INT load_align = group_load_align;
3246 align_bitpos = (try_bitpos - align_base) & (load_align - 1);
3247 if (align_bitpos)
3248 load_align = least_bit_hwi (align_bitpos);
3249 for (int i = 0; i < 2; ++i)
3250 if (group->load_align[i])
3252 align_bitpos
3253 = known_alignment (try_bitpos
3254 - group->stores[0]->bitpos
3255 + group->stores[0]->ops[i].bitpos
3256 - group->load_align_base[i]);
3257 if (align_bitpos & (group_load_align - 1))
3259 unsigned HOST_WIDE_INT a = least_bit_hwi (align_bitpos);
3260 load_align = MIN (load_align, a);
3263 try_size = MIN (try_size, load_align);
3265 store_immediate_info *info
3266 = find_constituent_stores (group, NULL, &first, try_bitpos, try_size);
3267 if (info)
3269 /* If there is just one original statement for the range, see if
3270 we can just reuse the original store which could be even larger
3271 than try_size. */
3272 unsigned HOST_WIDE_INT stmt_end
3273 = ROUND_UP (info->bitpos + info->bitsize, BITS_PER_UNIT);
3274 info = find_constituent_stores (group, NULL, &first, try_bitpos,
3275 stmt_end - try_bitpos);
3276 if (info && info->bitpos >= try_bitpos)
3278 try_size = stmt_end - try_bitpos;
3279 goto found;
3283 /* Approximate store bitsize for the case when there are no padding
3284 bits. */
3285 while (try_size > size)
3286 try_size /= 2;
3287 /* Now look for whole padding bytes at the end of that bitsize. */
3288 for (nonmasked = try_size / BITS_PER_UNIT; nonmasked > 0; --nonmasked)
3289 if (group->mask[try_pos - bytepos + nonmasked - 1]
3290 != (unsigned char) ~0U)
3291 break;
3292 if (nonmasked == 0)
3294 /* If entire try_size range is padding, skip it. */
3295 try_pos += try_size / BITS_PER_UNIT;
3296 size -= try_size;
3297 continue;
3299 /* Otherwise try to decrease try_size if second half, last 3 quarters
3300 etc. are padding. */
3301 nonmasked *= BITS_PER_UNIT;
3302 while (nonmasked <= try_size / 2)
3303 try_size /= 2;
3304 if (!allow_unaligned_store && group_align > BITS_PER_UNIT)
3306 /* Now look for whole padding bytes at the start of that bitsize. */
3307 unsigned int try_bytesize = try_size / BITS_PER_UNIT, masked;
3308 for (masked = 0; masked < try_bytesize; ++masked)
3309 if (group->mask[try_pos - bytepos + masked] != (unsigned char) ~0U)
3310 break;
3311 masked *= BITS_PER_UNIT;
3312 gcc_assert (masked < try_size);
3313 if (masked >= try_size / 2)
3315 while (masked >= try_size / 2)
3317 try_size /= 2;
3318 try_pos += try_size / BITS_PER_UNIT;
3319 size -= try_size;
3320 masked -= try_size;
3322 /* Need to recompute the alignment, so just retry at the new
3323 position. */
3324 continue;
3328 found:
3329 ++ret;
3331 if (split_stores)
3333 struct split_store *store
3334 = new split_store (try_pos, try_size, align);
3335 info = find_constituent_stores (group, &store->orig_stores,
3336 &first, try_bitpos, try_size);
3337 if (info
3338 && info->bitpos >= try_bitpos
3339 && info->bitpos + info->bitsize <= try_bitpos + try_size)
3341 store->orig = true;
3342 any_orig = true;
3344 split_stores->safe_push (store);
3347 try_pos += try_size / BITS_PER_UNIT;
3348 size -= try_size;
3351 if (total_orig)
3353 unsigned int i;
3354 struct split_store *store;
3355 /* If we are reusing some original stores and any of the
3356 original SSA_NAMEs had multiple uses, we need to subtract
3357 those now before we add the new ones. */
3358 if (total_new[0] && any_orig)
3360 FOR_EACH_VEC_ELT (*split_stores, i, store)
3361 if (store->orig)
3362 total_new[0] -= count_multiple_uses (store->orig_stores[0]);
3364 total_new[0] += ret; /* The new store. */
3365 store_immediate_info *info = group->stores[0];
3366 if (info->ops[0].base_addr)
3367 total_new[0] += ret;
3368 if (info->ops[1].base_addr)
3369 total_new[0] += ret;
3370 switch (info->rhs_code)
3372 case BIT_AND_EXPR:
3373 case BIT_IOR_EXPR:
3374 case BIT_XOR_EXPR:
3375 total_new[0] += ret; /* The new BIT_*_EXPR stmt. */
3376 break;
3377 default:
3378 break;
3380 FOR_EACH_VEC_ELT (*split_stores, i, store)
3382 unsigned int j;
3383 bool bit_not_p[3] = { false, false, false };
3384 /* If all orig_stores have certain bit_not_p set, then
3385 we'd use a BIT_NOT_EXPR stmt and need to account for it.
3386 If some orig_stores have certain bit_not_p set, then
3387 we'd use a BIT_XOR_EXPR with a mask and need to account for
3388 it. */
3389 FOR_EACH_VEC_ELT (store->orig_stores, j, info)
3391 if (info->ops[0].bit_not_p)
3392 bit_not_p[0] = true;
3393 if (info->ops[1].bit_not_p)
3394 bit_not_p[1] = true;
3395 if (info->bit_not_p)
3396 bit_not_p[2] = true;
3398 total_new[0] += bit_not_p[0] + bit_not_p[1] + bit_not_p[2];
3403 return ret;
3406 /* Return the operation through which the operand IDX (if < 2) or
3407 result (IDX == 2) should be inverted. If NOP_EXPR, no inversion
3408 is done, if BIT_NOT_EXPR, all bits are inverted, if BIT_XOR_EXPR,
3409 the bits should be xored with mask. */
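
/* Illustrative example (assumed, little-endian, no padding bits): if a
   split store covers two original byte stores and only the first one has
   bit_not_p set, the function returns BIT_XOR_EXPR with MASK 0x00ff, so
   only the bits that were stored inverted get xored.  */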
3411 static enum tree_code
3412 invert_op (split_store *split_store, int idx, tree int_type, tree &mask)
3414 unsigned int i;
3415 store_immediate_info *info;
3416 unsigned int cnt = 0;
3417 bool any_paddings = false;
3418 FOR_EACH_VEC_ELT (split_store->orig_stores, i, info)
3420 bool bit_not_p = idx < 2 ? info->ops[idx].bit_not_p : info->bit_not_p;
3421 if (bit_not_p)
3423 ++cnt;
3424 tree lhs = gimple_assign_lhs (info->stmt);
3425 if (INTEGRAL_TYPE_P (TREE_TYPE (lhs))
3426 && TYPE_PRECISION (TREE_TYPE (lhs)) < info->bitsize)
3427 any_paddings = true;
3430 mask = NULL_TREE;
3431 if (cnt == 0)
3432 return NOP_EXPR;
3433 if (cnt == split_store->orig_stores.length () && !any_paddings)
3434 return BIT_NOT_EXPR;
3436 unsigned HOST_WIDE_INT try_bitpos = split_store->bytepos * BITS_PER_UNIT;
3437 unsigned buf_size = split_store->size / BITS_PER_UNIT;
3438 unsigned char *buf
3439 = XALLOCAVEC (unsigned char, buf_size);
3440 memset (buf, ~0U, buf_size);
3441 FOR_EACH_VEC_ELT (split_store->orig_stores, i, info)
3443 bool bit_not_p = idx < 2 ? info->ops[idx].bit_not_p : info->bit_not_p;
3444 if (!bit_not_p)
3445 continue;
3446 /* Clear regions with bit_not_p and invert afterwards, rather than
3447 clear regions with !bit_not_p, so that gaps in between stores aren't
3448 set in the mask. */
3449 unsigned HOST_WIDE_INT bitsize = info->bitsize;
3450 unsigned HOST_WIDE_INT prec = bitsize;
3451 unsigned int pos_in_buffer = 0;
3452 if (any_paddings)
3454 tree lhs = gimple_assign_lhs (info->stmt);
3455 if (INTEGRAL_TYPE_P (TREE_TYPE (lhs))
3456 && TYPE_PRECISION (TREE_TYPE (lhs)) < bitsize)
3457 prec = TYPE_PRECISION (TREE_TYPE (lhs));
3459 if (info->bitpos < try_bitpos)
3461 gcc_assert (info->bitpos + bitsize > try_bitpos);
3462 if (!BYTES_BIG_ENDIAN)
3464 if (prec <= try_bitpos - info->bitpos)
3465 continue;
3466 prec -= try_bitpos - info->bitpos;
3468 bitsize -= try_bitpos - info->bitpos;
3469 if (BYTES_BIG_ENDIAN && prec > bitsize)
3470 prec = bitsize;
3472 else
3473 pos_in_buffer = info->bitpos - try_bitpos;
3474 if (prec < bitsize)
3476 /* If this is a bool inversion, invert just the least significant
3477 prec bits rather than all bits of it. */
3478 if (BYTES_BIG_ENDIAN)
3480 pos_in_buffer += bitsize - prec;
3481 if (pos_in_buffer >= split_store->size)
3482 continue;
3484 bitsize = prec;
3486 if (pos_in_buffer + bitsize > split_store->size)
3487 bitsize = split_store->size - pos_in_buffer;
3488 unsigned char *p = buf + (pos_in_buffer / BITS_PER_UNIT);
3489 if (BYTES_BIG_ENDIAN)
3490 clear_bit_region_be (p, (BITS_PER_UNIT - 1
3491 - (pos_in_buffer % BITS_PER_UNIT)), bitsize);
3492 else
3493 clear_bit_region (p, pos_in_buffer % BITS_PER_UNIT, bitsize);
3495 for (unsigned int i = 0; i < buf_size; ++i)
3496 buf[i] = ~buf[i];
3497 mask = native_interpret_expr (int_type, buf, buf_size);
3498 return BIT_XOR_EXPR;
3501 /* Given a merged store group GROUP output the widened version of it.
3502 The store chain is against the base object BASE.
3503 Try store sizes of at most MAX_STORE_BITSIZE bits wide and don't output
3504 unaligned stores for STRICT_ALIGNMENT targets or if it's too expensive.
3505 Make sure that the number of statements output is less than the number of
3506 original statements. If a better sequence is possible emit it and
3507 return true. */
3509 bool
3510 imm_store_chain_info::output_merged_store (merged_store_group *group)
3512 split_store *split_store;
3513 unsigned int i;
3514 unsigned HOST_WIDE_INT start_byte_pos
3515 = group->bitregion_start / BITS_PER_UNIT;
3517 unsigned int orig_num_stmts = group->stores.length ();
3518 if (orig_num_stmts < 2)
3519 return false;
3521 auto_vec<struct split_store *, 32> split_stores;
3522 bool allow_unaligned_store
3523 = !STRICT_ALIGNMENT && PARAM_VALUE (PARAM_STORE_MERGING_ALLOW_UNALIGNED);
3524 bool allow_unaligned_load = allow_unaligned_store;
3525 if (allow_unaligned_store)
3527 /* If unaligned stores are allowed, see how many stores we'd emit
3528 for unaligned and how many stores we'd emit for aligned stores.
3529 Only use unaligned stores if that results in fewer stores than aligned. */
3530 unsigned aligned_cnt
3531 = split_group (group, false, allow_unaligned_load, NULL, NULL, NULL);
3532 unsigned unaligned_cnt
3533 = split_group (group, true, allow_unaligned_load, NULL, NULL, NULL);
3534 if (aligned_cnt <= unaligned_cnt)
3535 allow_unaligned_store = false;
3537 unsigned total_orig, total_new;
3538 split_group (group, allow_unaligned_store, allow_unaligned_load,
3539 &split_stores, &total_orig, &total_new);
3541 if (split_stores.length () >= orig_num_stmts)
3543 /* We didn't manage to reduce the number of statements. Bail out. */
3544 if (dump_file && (dump_flags & TDF_DETAILS))
3545 fprintf (dump_file, "Exceeded original number of stmts (%u)."
3546 " Not profitable to emit new sequence.\n",
3547 orig_num_stmts);
3548 FOR_EACH_VEC_ELT (split_stores, i, split_store)
3549 delete split_store;
3550 return false;
3552 if (total_orig <= total_new)
3554 /* If the estimated number of new statements is not below the estimated
3555 number of original statements, bail out too. */
3556 if (dump_file && (dump_flags & TDF_DETAILS))
3557 fprintf (dump_file, "Estimated number of original stmts (%u)"
3558 " not larger than estimated number of new"
3559 " stmts (%u).\n",
3560 total_orig, total_new);
3561 FOR_EACH_VEC_ELT (split_stores, i, split_store)
3562 delete split_store;
3563 return false;
3566 gimple_stmt_iterator last_gsi = gsi_for_stmt (group->last_stmt);
3567 gimple_seq seq = NULL;
3568 tree last_vdef, new_vuse;
3569 last_vdef = gimple_vdef (group->last_stmt);
3570 new_vuse = gimple_vuse (group->last_stmt);
3571 tree bswap_res = NULL_TREE;
3573 if (group->stores[0]->rhs_code == LROTATE_EXPR
3574 || group->stores[0]->rhs_code == NOP_EXPR)
3576 tree fndecl = NULL_TREE, bswap_type = NULL_TREE, load_type;
3577 gimple *ins_stmt = group->stores[0]->ins_stmt;
3578 struct symbolic_number *n = &group->stores[0]->n;
3579 bool bswap = group->stores[0]->rhs_code == LROTATE_EXPR;
3581 switch (n->range)
3583 case 16:
3584 load_type = bswap_type = uint16_type_node;
3585 break;
3586 case 32:
3587 load_type = uint32_type_node;
3588 if (bswap)
3590 fndecl = builtin_decl_explicit (BUILT_IN_BSWAP32);
3591 bswap_type = TREE_VALUE (TYPE_ARG_TYPES (TREE_TYPE (fndecl)));
3593 break;
3594 case 64:
3595 load_type = uint64_type_node;
3596 if (bswap)
3598 fndecl = builtin_decl_explicit (BUILT_IN_BSWAP64);
3599 bswap_type = TREE_VALUE (TYPE_ARG_TYPES (TREE_TYPE (fndecl)));
3601 break;
3602 default:
3603 gcc_unreachable ();
3606 /* If each load has the vuse of the corresponding store,
3607 we've checked the aliasing already in try_coalesce_bswap and
3608 we want to sink the load into seq. So we need to use new_vuse
3609 on the load. */
3610 if (n->base_addr)
3612 if (n->vuse == NULL)
3614 n->vuse = new_vuse;
3615 ins_stmt = NULL;
3617 else
3618 /* Update vuse in case it has changed by output_merged_stores. */
3619 n->vuse = gimple_vuse (ins_stmt);
3621 bswap_res = bswap_replace (gsi_start (seq), ins_stmt, fndecl,
3622 bswap_type, load_type, n, bswap);
3623 gcc_assert (bswap_res);
3626 gimple *stmt = NULL;
3627 auto_vec<gimple *, 32> orig_stmts;
3628 gimple_seq this_seq;
3629 tree addr = force_gimple_operand_1 (unshare_expr (base_addr), &this_seq,
3630 is_gimple_mem_ref_addr, NULL_TREE);
3631 gimple_seq_add_seq_without_update (&seq, this_seq);
3633 tree load_addr[2] = { NULL_TREE, NULL_TREE };
3634 gimple_seq load_seq[2] = { NULL, NULL };
3635 gimple_stmt_iterator load_gsi[2] = { gsi_none (), gsi_none () };
3636 for (int j = 0; j < 2; ++j)
3638 store_operand_info &op = group->stores[0]->ops[j];
3639 if (op.base_addr == NULL_TREE)
3640 continue;
3642 store_immediate_info *infol = group->stores.last ();
3643 if (gimple_vuse (op.stmt) == gimple_vuse (infol->ops[j].stmt))
3645 /* We can't pick the location randomly; while we've verified
3646 all the loads have the same vuse, they can still be in different
3647 basic blocks and we need to pick the one from the last bb:
3648 int x = q[0];
3649 if (x == N) return;
3650 int y = q[1];
3651 p[0] = x;
3652 p[1] = y;
3653 otherwise if we put the wider load at the q[0] load, we might
3654 segfault if q[1] is not mapped. */
3655 basic_block bb = gimple_bb (op.stmt);
3656 gimple *ostmt = op.stmt;
3657 store_immediate_info *info;
3658 FOR_EACH_VEC_ELT (group->stores, i, info)
3660 gimple *tstmt = info->ops[j].stmt;
3661 basic_block tbb = gimple_bb (tstmt);
3662 if (dominated_by_p (CDI_DOMINATORS, tbb, bb))
3664 ostmt = tstmt;
3665 bb = tbb;
3668 load_gsi[j] = gsi_for_stmt (ostmt);
3669 load_addr[j]
3670 = force_gimple_operand_1 (unshare_expr (op.base_addr),
3671 &load_seq[j], is_gimple_mem_ref_addr,
3672 NULL_TREE);
3674 else if (operand_equal_p (base_addr, op.base_addr, 0))
3675 load_addr[j] = addr;
3676 else
3678 load_addr[j]
3679 = force_gimple_operand_1 (unshare_expr (op.base_addr),
3680 &this_seq, is_gimple_mem_ref_addr,
3681 NULL_TREE);
3682 gimple_seq_add_seq_without_update (&seq, this_seq);
3686 FOR_EACH_VEC_ELT (split_stores, i, split_store)
3688 unsigned HOST_WIDE_INT try_size = split_store->size;
3689 unsigned HOST_WIDE_INT try_pos = split_store->bytepos;
3690 unsigned HOST_WIDE_INT try_bitpos = try_pos * BITS_PER_UNIT;
3691 unsigned HOST_WIDE_INT align = split_store->align;
3692 tree dest, src;
3693 location_t loc;
3694 if (split_store->orig)
3696 /* If there is just a single constituent store which covers
3697 the whole area, simply reuse its lhs and rhs. */
3698 gimple *orig_stmt = split_store->orig_stores[0]->stmt;
3699 dest = gimple_assign_lhs (orig_stmt);
3700 src = gimple_assign_rhs1 (orig_stmt);
3701 loc = gimple_location (orig_stmt);
3703 else
3705 store_immediate_info *info;
3706 unsigned short clique, base;
3707 unsigned int k;
3708 FOR_EACH_VEC_ELT (split_store->orig_stores, k, info)
3709 orig_stmts.safe_push (info->stmt);
3710 tree offset_type
3711 = get_alias_type_for_stmts (orig_stmts, false, &clique, &base);
3712 loc = get_location_for_stmts (orig_stmts);
3713 orig_stmts.truncate (0);
3715 tree int_type = build_nonstandard_integer_type (try_size, UNSIGNED);
3716 int_type = build_aligned_type (int_type, align);
3717 dest = fold_build2 (MEM_REF, int_type, addr,
3718 build_int_cst (offset_type, try_pos));
3719 if (TREE_CODE (dest) == MEM_REF)
3721 MR_DEPENDENCE_CLIQUE (dest) = clique;
3722 MR_DEPENDENCE_BASE (dest) = base;
3725 tree mask;
3726 if (bswap_res)
3727 mask = integer_zero_node;
3728 else
3729 mask = native_interpret_expr (int_type,
3730 group->mask + try_pos
3731 - start_byte_pos,
3732 group->buf_size);
3734 tree ops[2];
3735 for (int j = 0;
3736 j < 1 + (split_store->orig_stores[0]->ops[1].val != NULL_TREE);
3737 ++j)
3739 store_operand_info &op = split_store->orig_stores[0]->ops[j];
3740 if (bswap_res)
3741 ops[j] = bswap_res;
3742 else if (op.base_addr)
3744 FOR_EACH_VEC_ELT (split_store->orig_stores, k, info)
3745 orig_stmts.safe_push (info->ops[j].stmt);
3747 offset_type = get_alias_type_for_stmts (orig_stmts, true,
3748 &clique, &base);
3749 location_t load_loc = get_location_for_stmts (orig_stmts);
3750 orig_stmts.truncate (0);
3752 unsigned HOST_WIDE_INT load_align = group->load_align[j];
3753 unsigned HOST_WIDE_INT align_bitpos
3754 = known_alignment (try_bitpos
3755 - split_store->orig_stores[0]->bitpos
3756 + op.bitpos);
3757 if (align_bitpos & (load_align - 1))
3758 load_align = least_bit_hwi (align_bitpos);
3760 tree load_int_type
3761 = build_nonstandard_integer_type (try_size, UNSIGNED);
3762 load_int_type
3763 = build_aligned_type (load_int_type, load_align);
3765 poly_uint64 load_pos
3766 = exact_div (try_bitpos
3767 - split_store->orig_stores[0]->bitpos
3768 + op.bitpos,
3769 BITS_PER_UNIT);
3770 ops[j] = fold_build2 (MEM_REF, load_int_type, load_addr[j],
3771 build_int_cst (offset_type, load_pos));
3772 if (TREE_CODE (ops[j]) == MEM_REF)
3774 MR_DEPENDENCE_CLIQUE (ops[j]) = clique;
3775 MR_DEPENDENCE_BASE (ops[j]) = base;
3777 if (!integer_zerop (mask))
3778 /* The load might load some bits (that will be masked off
3779 later on) uninitialized; avoid -W*uninitialized
3780 warnings in that case. */
3781 TREE_NO_WARNING (ops[j]) = 1;
3783 stmt = gimple_build_assign (make_ssa_name (int_type),
3784 ops[j]);
3785 gimple_set_location (stmt, load_loc);
3786 if (gsi_bb (load_gsi[j]))
3788 gimple_set_vuse (stmt, gimple_vuse (op.stmt));
3789 gimple_seq_add_stmt_without_update (&load_seq[j], stmt);
3791 else
3793 gimple_set_vuse (stmt, new_vuse);
3794 gimple_seq_add_stmt_without_update (&seq, stmt);
3796 ops[j] = gimple_assign_lhs (stmt);
3797 tree xor_mask;
3798 enum tree_code inv_op
3799 = invert_op (split_store, j, int_type, xor_mask);
3800 if (inv_op != NOP_EXPR)
3802 stmt = gimple_build_assign (make_ssa_name (int_type),
3803 inv_op, ops[j], xor_mask);
3804 gimple_set_location (stmt, load_loc);
3805 ops[j] = gimple_assign_lhs (stmt);
3807 if (gsi_bb (load_gsi[j]))
3808 gimple_seq_add_stmt_without_update (&load_seq[j],
3809 stmt);
3810 else
3811 gimple_seq_add_stmt_without_update (&seq, stmt);
3814 else
3815 ops[j] = native_interpret_expr (int_type,
3816 group->val + try_pos
3817 - start_byte_pos,
3818 group->buf_size);
3821 switch (split_store->orig_stores[0]->rhs_code)
3823 case BIT_AND_EXPR:
3824 case BIT_IOR_EXPR:
3825 case BIT_XOR_EXPR:
3826 FOR_EACH_VEC_ELT (split_store->orig_stores, k, info)
3828 tree rhs1 = gimple_assign_rhs1 (info->stmt);
3829 orig_stmts.safe_push (SSA_NAME_DEF_STMT (rhs1));
3831 location_t bit_loc;
3832 bit_loc = get_location_for_stmts (orig_stmts);
3833 orig_stmts.truncate (0);
3835 stmt
3836 = gimple_build_assign (make_ssa_name (int_type),
3837 split_store->orig_stores[0]->rhs_code,
3838 ops[0], ops[1]);
3839 gimple_set_location (stmt, bit_loc);
3840 /* If there is just one load and there is a separate
3841 load_seq[0], emit the bitwise op right after it. */
3842 if (load_addr[1] == NULL_TREE && gsi_bb (load_gsi[0]))
3843 gimple_seq_add_stmt_without_update (&load_seq[0], stmt);
3844 /* Otherwise, if at least one load is in seq, we need to
3845 emit the bitwise op right before the store. If there
3846 are two loads and they are emitted somewhere else, it would
3847 be better to emit the bitwise op as early as possible;
3848 we don't track where that would be possible right now
3849 though. */
3850 else
3851 gimple_seq_add_stmt_without_update (&seq, stmt);
3852 src = gimple_assign_lhs (stmt);
3853 tree xor_mask;
3854 enum tree_code inv_op;
3855 inv_op = invert_op (split_store, 2, int_type, xor_mask);
3856 if (inv_op != NOP_EXPR)
3858 stmt = gimple_build_assign (make_ssa_name (int_type),
3859 inv_op, src, xor_mask);
3860 gimple_set_location (stmt, bit_loc);
3861 if (load_addr[1] == NULL_TREE && gsi_bb (load_gsi[0]))
3862 gimple_seq_add_stmt_without_update (&load_seq[0], stmt);
3863 else
3864 gimple_seq_add_stmt_without_update (&seq, stmt);
3865 src = gimple_assign_lhs (stmt);
3867 break;
3868 case LROTATE_EXPR:
3869 case NOP_EXPR:
3870 src = ops[0];
3871 if (!is_gimple_val (src))
3873 stmt = gimple_build_assign (make_ssa_name (TREE_TYPE (src)),
3874 src);
3875 gimple_seq_add_stmt_without_update (&seq, stmt);
3876 src = gimple_assign_lhs (stmt);
3878 if (!useless_type_conversion_p (int_type, TREE_TYPE (src)))
3880 stmt = gimple_build_assign (make_ssa_name (int_type),
3881 NOP_EXPR, src);
3882 gimple_seq_add_stmt_without_update (&seq, stmt);
3883 src = gimple_assign_lhs (stmt);
3885 inv_op = invert_op (split_store, 2, int_type, xor_mask);
3886 if (inv_op != NOP_EXPR)
3888 stmt = gimple_build_assign (make_ssa_name (int_type),
3889 inv_op, src, xor_mask);
3890 gimple_set_location (stmt, loc);
3891 gimple_seq_add_stmt_without_update (&seq, stmt);
3892 src = gimple_assign_lhs (stmt);
3894 break;
3895 default:
3896 src = ops[0];
3897 break;
3900 /* If bit insertion is required, we use the source as an accumulator
3901 into which the successive bit-field values are manually inserted.
3902 FIXME: perhaps use BIT_INSERT_EXPR instead in some cases? */
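/* Worked example with hypothetical numbers on a little-endian target:
   with try_bitpos == 0, try_size == 16 and a recorded bit-field store of
   bitsize 5 at bitpos 3, start_gap == 3 and end_gap == 8 below, so the
   value is masked with (1 << 5) - 1 == 0x1f if needed, converted to the
   16-bit int_type, shifted left by start_gap == 3 and IORed into src.  */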
3903 if (group->bit_insertion)
3904 FOR_EACH_VEC_ELT (split_store->orig_stores, k, info)
3905 if (info->rhs_code == BIT_INSERT_EXPR
3906 && info->bitpos < try_bitpos + try_size
3907 && info->bitpos + info->bitsize > try_bitpos)
3909 /* Mask, truncate, convert to final type, shift and ior into
3910 the accumulator. Note that every step can be a no-op. */
3911 const HOST_WIDE_INT start_gap = info->bitpos - try_bitpos;
3912 const HOST_WIDE_INT end_gap
3913 = (try_bitpos + try_size) - (info->bitpos + info->bitsize);
3914 tree tem = info->ops[0].val;
3915 if (TYPE_PRECISION (TREE_TYPE (tem)) <= info->bitsize)
3917 tree bitfield_type
3918 = build_nonstandard_integer_type (info->bitsize,
3919 UNSIGNED);
3920 tem = gimple_convert (&seq, loc, bitfield_type, tem);
3922 else if ((BYTES_BIG_ENDIAN ? start_gap : end_gap) > 0)
3924 const unsigned HOST_WIDE_INT imask
3925 = (HOST_WIDE_INT_1U << info->bitsize) - 1;
3926 tem = gimple_build (&seq, loc,
3927 BIT_AND_EXPR, TREE_TYPE (tem), tem,
3928 build_int_cst (TREE_TYPE (tem),
3929 imask));
3931 const HOST_WIDE_INT shift
3932 = (BYTES_BIG_ENDIAN ? end_gap : start_gap);
3933 if (shift < 0)
3934 tem = gimple_build (&seq, loc,
3935 RSHIFT_EXPR, TREE_TYPE (tem), tem,
3936 build_int_cst (NULL_TREE, -shift));
3937 tem = gimple_convert (&seq, loc, int_type, tem);
3938 if (shift > 0)
3939 tem = gimple_build (&seq, loc,
3940 LSHIFT_EXPR, int_type, tem,
3941 build_int_cst (NULL_TREE, shift));
3942 src = gimple_build (&seq, loc,
3943 BIT_IOR_EXPR, int_type, tem, src);
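/* Illustration of the masked read-modify-write emitted below, using a
   hypothetical 8-bit int_type: if mask is 0xe1, bits 0 and 5..7 are not
   written by any of the recorded stores, so the generated code loads the
   destination, keeps (old & 0xe1), clears those bits in src with
   (src & ~0xe1) and IORs the two halves together before the final store.  */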
3946 if (!integer_zerop (mask))
3948 tree tem = make_ssa_name (int_type);
3949 tree load_src = unshare_expr (dest);
3950 /* The load might load some or all bits uninitialized;
3951 avoid -W*uninitialized warnings in that case.
3952 As an optimization, if all the bits were provably
3953 uninitialized (no stores at all yet, or the previous
3954 store was a CLOBBER), we could optimize away the load
3955 and replace it e.g. with 0. */
3956 TREE_NO_WARNING (load_src) = 1;
3957 stmt = gimple_build_assign (tem, load_src);
3958 gimple_set_location (stmt, loc);
3959 gimple_set_vuse (stmt, new_vuse);
3960 gimple_seq_add_stmt_without_update (&seq, stmt);
3962 /* FIXME: If there is a single chunk of zero bits in mask,
3963 perhaps use BIT_INSERT_EXPR instead? */
3964 stmt = gimple_build_assign (make_ssa_name (int_type),
3965 BIT_AND_EXPR, tem, mask);
3966 gimple_set_location (stmt, loc);
3967 gimple_seq_add_stmt_without_update (&seq, stmt);
3968 tem = gimple_assign_lhs (stmt);
3970 if (TREE_CODE (src) == INTEGER_CST)
3971 src = wide_int_to_tree (int_type,
3972 wi::bit_and_not (wi::to_wide (src),
3973 wi::to_wide (mask)));
3974 else
3976 tree nmask
3977 = wide_int_to_tree (int_type,
3978 wi::bit_not (wi::to_wide (mask)));
3979 stmt = gimple_build_assign (make_ssa_name (int_type),
3980 BIT_AND_EXPR, src, nmask);
3981 gimple_set_location (stmt, loc);
3982 gimple_seq_add_stmt_without_update (&seq, stmt);
3983 src = gimple_assign_lhs (stmt);
3985 stmt = gimple_build_assign (make_ssa_name (int_type),
3986 BIT_IOR_EXPR, tem, src);
3987 gimple_set_location (stmt, loc);
3988 gimple_seq_add_stmt_without_update (&seq, stmt);
3989 src = gimple_assign_lhs (stmt);
3993 stmt = gimple_build_assign (dest, src);
3994 gimple_set_location (stmt, loc);
3995 gimple_set_vuse (stmt, new_vuse);
3996 gimple_seq_add_stmt_without_update (&seq, stmt);
3998 tree new_vdef;
3999 if (i < split_stores.length () - 1)
4000 new_vdef = make_ssa_name (gimple_vop (cfun), stmt);
4001 else
4002 new_vdef = last_vdef;
4004 gimple_set_vdef (stmt, new_vdef);
4005 SSA_NAME_DEF_STMT (new_vdef) = stmt;
4006 new_vuse = new_vdef;
4009 FOR_EACH_VEC_ELT (split_stores, i, split_store)
4010 delete split_store;
4012 gcc_assert (seq);
4013 if (dump_file)
4015 fprintf (dump_file,
4016 "New sequence of %u stores to replace old one of %u stores\n",
4017 split_stores.length (), orig_num_stmts);
4018 if (dump_flags & TDF_DETAILS)
4019 print_gimple_seq (dump_file, seq, 0, TDF_VOPS | TDF_MEMSYMS);
4021 gsi_insert_seq_after (&last_gsi, seq, GSI_SAME_STMT);
4022 for (int j = 0; j < 2; ++j)
4023 if (load_seq[j])
4024 gsi_insert_seq_after (&load_gsi[j], load_seq[j], GSI_SAME_STMT);
4026 return true;
4029 /* Process the merged_store_group objects created in the coalescing phase.
4030 The stores are all against this chain's base object.
4031 Try to output the widened stores and delete the original statements if
4032 successful. Return true iff any changes were made. */
4034 bool
4035 imm_store_chain_info::output_merged_stores ()
4037 unsigned int i;
4038 merged_store_group *merged_store;
4039 bool ret = false;
4040 FOR_EACH_VEC_ELT (m_merged_store_groups, i, merged_store)
4042 if (output_merged_store (merged_store))
4044 unsigned int j;
4045 store_immediate_info *store;
4046 FOR_EACH_VEC_ELT (merged_store->stores, j, store)
4048 gimple *stmt = store->stmt;
4049 gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
4050 gsi_remove (&gsi, true);
4051 if (stmt != merged_store->last_stmt)
4053 unlink_stmt_vdef (stmt);
4054 release_defs (stmt);
4057 ret = true;
4060 if (ret && dump_file)
4061 fprintf (dump_file, "Merging successful!\n");
4063 return ret;
4066 /* Coalesce the store_immediate_info objects recorded against this chain's
4067 base object in the first phase and output them.
4068 Delete the allocated structures.
4069 Return true if any changes were made. */
4071 bool
4072 imm_store_chain_info::terminate_and_process_chain ()
4074 /* Process store chain. */
4075 bool ret = false;
4076 if (m_store_info.length () > 1)
4078 ret = coalesce_immediate_stores ();
4079 if (ret)
4080 ret = output_merged_stores ();
4083 /* Delete all the entries we allocated ourselves. */
4084 store_immediate_info *info;
4085 unsigned int i;
4086 FOR_EACH_VEC_ELT (m_store_info, i, info)
4087 delete info;
4089 merged_store_group *merged_info;
4090 FOR_EACH_VEC_ELT (m_merged_store_groups, i, merged_info)
4091 delete merged_info;
4093 return ret;
4096 /* Return true iff LHS is a destination potentially interesting for
4097 store merging. In practice these are the codes that get_inner_reference
4098 can process. */
4100 static bool
4101 lhs_valid_for_store_merging_p (tree lhs)
4103 tree_code code = TREE_CODE (lhs);
4105 if (code == ARRAY_REF || code == ARRAY_RANGE_REF || code == MEM_REF
4106 || code == COMPONENT_REF || code == BIT_FIELD_REF)
4107 return true;
4109 return false;
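/* For illustration (hypothetical operands): destinations such as a[i]
   (ARRAY_REF), *p or MEM[(int *)p + 4B] (MEM_REF), s.f or p->f
   (COMPONENT_REF) and BIT_FIELD_REF <x, 8, 0> pass the check above,
   while a store directly to a bare decl does not.  */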
4112 /* Return true if the tree RHS is a constant we want to consider
4113 during store merging. In practice accept all codes that
4114 native_encode_expr accepts. */
4116 static bool
4117 rhs_valid_for_store_merging_p (tree rhs)
4119 unsigned HOST_WIDE_INT size;
4120 return (GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (rhs))).is_constant (&size)
4121 && native_encode_expr (rhs, NULL, size) != 0);
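/* For illustration: constants such as an INTEGER_CST (0x1234), a REAL_CST
   (1.0) or a fixed-size VECTOR_CST can be byte-serialized by
   native_encode_expr and are accepted, while anything whose mode size is
   not a compile-time constant is rejected.  */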
4124 /* If MEM is a memory reference usable for store merging (either as
4125 store destination or for loads), return the non-NULL base_addr
4126 and set *PBITSIZE, *PBITPOS, *PBITREGION_START and *PBITREGION_END.
4127 Otherwise return NULL; *PBITSIZE is still valid even in that
4128 case. */
4130 static tree
4131 mem_valid_for_store_merging (tree mem, poly_uint64 *pbitsize,
4132 poly_uint64 *pbitpos,
4133 poly_uint64 *pbitregion_start,
4134 poly_uint64 *pbitregion_end)
4136 poly_int64 bitsize, bitpos;
4137 poly_uint64 bitregion_start = 0, bitregion_end = 0;
4138 machine_mode mode;
4139 int unsignedp = 0, reversep = 0, volatilep = 0;
4140 tree offset;
4141 tree base_addr = get_inner_reference (mem, &bitsize, &bitpos, &offset, &mode,
4142 &unsignedp, &reversep, &volatilep);
4143 *pbitsize = bitsize;
4144 if (known_eq (bitsize, 0))
4145 return NULL_TREE;
4147 if (TREE_CODE (mem) == COMPONENT_REF
4148 && DECL_BIT_FIELD_TYPE (TREE_OPERAND (mem, 1)))
4150 get_bit_range (&bitregion_start, &bitregion_end, mem, &bitpos, &offset);
4151 if (maybe_ne (bitregion_end, 0U))
4152 bitregion_end += 1;
4155 if (reversep)
4156 return NULL_TREE;
4158 /* We do not want to rewrite TARGET_MEM_REFs. */
4159 if (TREE_CODE (base_addr) == TARGET_MEM_REF)
4160 return NULL_TREE;
4161 /* In some cases get_inner_reference may return a
4162 MEM_REF [ptr + byteoffset]. For the purposes of this pass
4163 canonicalize the base_addr to MEM_REF [ptr] and take
4164 byteoffset into account in the bitpos. This occurs in
4165 PR 23684 and this way we can catch more chains. */
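/* Worked example with hypothetical operands, assuming BITS_PER_UNIT == 8:
   for MEM[p_1 + 4B] with get_inner_reference returning a bitpos of 8,
   the code below folds the byte offset into the bit position, yielding
   base_addr p_1 and bitpos 4 * 8 + 8 == 40.  */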
4166 else if (TREE_CODE (base_addr) == MEM_REF)
4168 poly_offset_int byte_off = mem_ref_offset (base_addr);
4169 poly_offset_int bit_off = byte_off << LOG2_BITS_PER_UNIT;
4170 bit_off += bitpos;
4171 if (known_ge (bit_off, 0) && bit_off.to_shwi (&bitpos))
4173 if (maybe_ne (bitregion_end, 0U))
4175 bit_off = byte_off << LOG2_BITS_PER_UNIT;
4176 bit_off += bitregion_start;
4177 if (bit_off.to_uhwi (&bitregion_start))
4179 bit_off = byte_off << LOG2_BITS_PER_UNIT;
4180 bit_off += bitregion_end;
4181 if (!bit_off.to_uhwi (&bitregion_end))
4182 bitregion_end = 0;
4184 else
4185 bitregion_end = 0;
4188 else
4189 return NULL_TREE;
4190 base_addr = TREE_OPERAND (base_addr, 0);
4192 /* get_inner_reference returns the base object, get at its
4193 address now. */
4194 else
4196 if (maybe_lt (bitpos, 0))
4197 return NULL_TREE;
4198 base_addr = build_fold_addr_expr (base_addr);
4201 if (known_eq (bitregion_end, 0U))
4203 bitregion_start = round_down_to_byte_boundary (bitpos);
4204 bitregion_end = bitpos;
4205 bitregion_end = round_up_to_byte_boundary (bitregion_end + bitsize);
4208 if (offset != NULL_TREE)
4210 /* If the access has a variable offset then a base decl has to be
4211 address-taken to be able to emit pointer-based stores to it.
4212 ??? We might be able to get away with re-using the original
4213 base up to the first variable part and then wrapping that inside
4214 a BIT_FIELD_REF. */
4215 tree base = get_base_address (base_addr);
4216 if (! base
4217 || (DECL_P (base) && ! TREE_ADDRESSABLE (base)))
4218 return NULL_TREE;
4220 base_addr = build2 (POINTER_PLUS_EXPR, TREE_TYPE (base_addr),
4221 base_addr, offset);
4224 *pbitsize = bitsize;
4225 *pbitpos = bitpos;
4226 *pbitregion_start = bitregion_start;
4227 *pbitregion_end = bitregion_end;
4228 return base_addr;
4231 /* Return true if STMT is a load that can be used for store merging.
4232 In that case fill in *OP. BITSIZE, BITPOS, BITREGION_START and
4233 BITREGION_END are properties of the corresponding store. */
4235 static bool
4236 handled_load (gimple *stmt, store_operand_info *op,
4237 poly_uint64 bitsize, poly_uint64 bitpos,
4238 poly_uint64 bitregion_start, poly_uint64 bitregion_end)
4240 if (!is_gimple_assign (stmt))
4241 return false;
4242 if (gimple_assign_rhs_code (stmt) == BIT_NOT_EXPR)
4244 tree rhs1 = gimple_assign_rhs1 (stmt);
4245 if (TREE_CODE (rhs1) == SSA_NAME
4246 && handled_load (SSA_NAME_DEF_STMT (rhs1), op, bitsize, bitpos,
4247 bitregion_start, bitregion_end))
4249 /* Don't allow _1 = load; _2 = ~_1; _3 = ~_2; which should have
4250 been optimized earlier, but if allowed here, would confuse the
4251 multiple uses counting. */
4252 if (op->bit_not_p)
4253 return false;
4254 op->bit_not_p = !op->bit_not_p;
4255 return true;
4257 return false;
4259 if (gimple_vuse (stmt)
4260 && gimple_assign_load_p (stmt)
4261 && !stmt_can_throw_internal (cfun, stmt)
4262 && !gimple_has_volatile_ops (stmt))
4264 tree mem = gimple_assign_rhs1 (stmt);
4265 op->base_addr
4266 = mem_valid_for_store_merging (mem, &op->bitsize, &op->bitpos,
4267 &op->bitregion_start,
4268 &op->bitregion_end);
4269 if (op->base_addr != NULL_TREE
4270 && known_eq (op->bitsize, bitsize)
4271 && multiple_p (op->bitpos - bitpos, BITS_PER_UNIT)
4272 && known_ge (op->bitpos - op->bitregion_start,
4273 bitpos - bitregion_start)
4274 && known_ge (op->bitregion_end - op->bitpos,
4275 bitregion_end - bitpos))
4277 op->stmt = stmt;
4278 op->val = mem;
4279 op->bit_not_p = false;
4280 return true;
4283 return false;
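/* For illustration (hypothetical GIMPLE): for a store fed by

     _1 = MEM[(char *)q_5(D) + 1B];
     _2 = ~_1;
     MEM[(char *)p_6(D) + 1B] = _2;

   the BIT_NOT_EXPR case above records the load in *OP and sets
   op->bit_not_p, provided the load's bit range is compatible with the
   store's bit region as checked by the known_ge tests.  */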
4286 /* Record the store STMT for store merging optimization if it can be
4287 optimized. */
4289 void
4290 pass_store_merging::process_store (gimple *stmt)
4292 tree lhs = gimple_assign_lhs (stmt);
4293 tree rhs = gimple_assign_rhs1 (stmt);
4294 poly_uint64 bitsize, bitpos;
4295 poly_uint64 bitregion_start, bitregion_end;
4296 tree base_addr
4297 = mem_valid_for_store_merging (lhs, &bitsize, &bitpos,
4298 &bitregion_start, &bitregion_end);
4299 if (known_eq (bitsize, 0U))
4300 return;
4302 bool invalid = (base_addr == NULL_TREE
4303 || (maybe_gt (bitsize,
4304 (unsigned int) MAX_BITSIZE_MODE_ANY_INT)
4305 && (TREE_CODE (rhs) != INTEGER_CST)));
4306 enum tree_code rhs_code = ERROR_MARK;
4307 bool bit_not_p = false;
4308 struct symbolic_number n;
4309 gimple *ins_stmt = NULL;
4310 store_operand_info ops[2];
4311 if (invalid)
4313 else if (rhs_valid_for_store_merging_p (rhs))
4315 rhs_code = INTEGER_CST;
4316 ops[0].val = rhs;
4318 else if (TREE_CODE (rhs) != SSA_NAME)
4319 invalid = true;
4320 else
4322 gimple *def_stmt = SSA_NAME_DEF_STMT (rhs), *def_stmt1, *def_stmt2;
4323 if (!is_gimple_assign (def_stmt))
4324 invalid = true;
4325 else if (handled_load (def_stmt, &ops[0], bitsize, bitpos,
4326 bitregion_start, bitregion_end))
4327 rhs_code = MEM_REF;
4328 else if (gimple_assign_rhs_code (def_stmt) == BIT_NOT_EXPR)
4330 tree rhs1 = gimple_assign_rhs1 (def_stmt);
4331 if (TREE_CODE (rhs1) == SSA_NAME
4332 && is_gimple_assign (SSA_NAME_DEF_STMT (rhs1)))
4334 bit_not_p = true;
4335 def_stmt = SSA_NAME_DEF_STMT (rhs1);
4339 if (rhs_code == ERROR_MARK && !invalid)
4340 switch ((rhs_code = gimple_assign_rhs_code (def_stmt)))
4342 case BIT_AND_EXPR:
4343 case BIT_IOR_EXPR:
4344 case BIT_XOR_EXPR:
4345 tree rhs1, rhs2;
4346 rhs1 = gimple_assign_rhs1 (def_stmt);
4347 rhs2 = gimple_assign_rhs2 (def_stmt);
4348 invalid = true;
4349 if (TREE_CODE (rhs1) != SSA_NAME)
4350 break;
4351 def_stmt1 = SSA_NAME_DEF_STMT (rhs1);
4352 if (!is_gimple_assign (def_stmt1)
4353 || !handled_load (def_stmt1, &ops[0], bitsize, bitpos,
4354 bitregion_start, bitregion_end))
4355 break;
4356 if (rhs_valid_for_store_merging_p (rhs2))
4357 ops[1].val = rhs2;
4358 else if (TREE_CODE (rhs2) != SSA_NAME)
4359 break;
4360 else
4362 def_stmt2 = SSA_NAME_DEF_STMT (rhs2);
4363 if (!is_gimple_assign (def_stmt2))
4364 break;
4365 else if (!handled_load (def_stmt2, &ops[1], bitsize, bitpos,
4366 bitregion_start, bitregion_end))
4367 break;
4369 invalid = false;
4370 break;
4371 default:
4372 invalid = true;
4373 break;
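/* For illustration (hypothetical GIMPLE), the two-operand case accepted
   above:

     _1 = MEM[(char *)q_4(D)];
     _2 = _1 ^ 3;
     MEM[(char *)p_5(D)] = _2;

   is recorded with rhs_code BIT_XOR_EXPR, ops[0] describing the load from
   q and ops[1].val holding the constant 3.  */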
4376 unsigned HOST_WIDE_INT const_bitsize;
4377 if (bitsize.is_constant (&const_bitsize)
4378 && (const_bitsize % BITS_PER_UNIT) == 0
4379 && const_bitsize <= 64
4380 && multiple_p (bitpos, BITS_PER_UNIT))
4382 ins_stmt = find_bswap_or_nop_1 (def_stmt, &n, 12);
4383 if (ins_stmt)
4385 uint64_t nn = n.n;
4386 for (unsigned HOST_WIDE_INT i = 0;
4387 i < const_bitsize;
4388 i += BITS_PER_UNIT, nn >>= BITS_PER_MARKER)
4389 if ((nn & MARKER_MASK) == 0
4390 || (nn & MARKER_MASK) == MARKER_BYTE_UNKNOWN)
4392 ins_stmt = NULL;
4393 break;
4395 if (ins_stmt)
4397 if (invalid)
4399 rhs_code = LROTATE_EXPR;
4400 ops[0].base_addr = NULL_TREE;
4401 ops[1].base_addr = NULL_TREE;
4403 invalid = false;
4408 if (invalid
4409 && bitsize.is_constant (&const_bitsize)
4410 && ((const_bitsize % BITS_PER_UNIT) != 0
4411 || !multiple_p (bitpos, BITS_PER_UNIT))
4412 && const_bitsize <= 64)
4414 /* Bypass a conversion to the bit-field type. */
4415 if (!bit_not_p
4416 && is_gimple_assign (def_stmt)
4417 && CONVERT_EXPR_CODE_P (rhs_code))
4419 tree rhs1 = gimple_assign_rhs1 (def_stmt);
4420 if (TREE_CODE (rhs1) == SSA_NAME
4421 && INTEGRAL_TYPE_P (TREE_TYPE (rhs1)))
4422 rhs = rhs1;
4424 rhs_code = BIT_INSERT_EXPR;
4425 bit_not_p = false;
4426 ops[0].val = rhs;
4427 ops[0].base_addr = NULL_TREE;
4428 ops[1].base_addr = NULL_TREE;
4429 invalid = false;
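/* For illustration (hypothetical C): given

     struct S { unsigned a : 3, b : 5; } *p;
     p->b = x;

   the store is 5 bits wide at a non-byte-aligned position, so it is
   recorded here with rhs_code BIT_INSERT_EXPR and can later be merged
   with neighbouring bit-field stores by the accumulator code in
   output_merged_store.  */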
4433 unsigned HOST_WIDE_INT const_bitsize, const_bitpos;
4434 unsigned HOST_WIDE_INT const_bitregion_start, const_bitregion_end;
4435 if (invalid
4436 || !bitsize.is_constant (&const_bitsize)
4437 || !bitpos.is_constant (&const_bitpos)
4438 || !bitregion_start.is_constant (&const_bitregion_start)
4439 || !bitregion_end.is_constant (&const_bitregion_end))
4441 terminate_all_aliasing_chains (NULL, stmt);
4442 return;
4445 if (!ins_stmt)
4446 memset (&n, 0, sizeof (n));
4448 struct imm_store_chain_info **chain_info = NULL;
4449 if (base_addr)
4450 chain_info = m_stores.get (base_addr);
4452 store_immediate_info *info;
4453 if (chain_info)
4455 unsigned int ord = (*chain_info)->m_store_info.length ();
4456 info = new store_immediate_info (const_bitsize, const_bitpos,
4457 const_bitregion_start,
4458 const_bitregion_end,
4459 stmt, ord, rhs_code, n, ins_stmt,
4460 bit_not_p, ops[0], ops[1]);
4461 if (dump_file && (dump_flags & TDF_DETAILS))
4463 fprintf (dump_file, "Recording immediate store from stmt:\n");
4464 print_gimple_stmt (dump_file, stmt, 0);
4466 (*chain_info)->m_store_info.safe_push (info);
4467 terminate_all_aliasing_chains (chain_info, stmt);
4468 /* If we reach the limit of stores to merge in a chain, terminate and
4469 process the chain now. */
4470 if ((*chain_info)->m_store_info.length ()
4471 == (unsigned int) PARAM_VALUE (PARAM_MAX_STORES_TO_MERGE))
4473 if (dump_file && (dump_flags & TDF_DETAILS))
4474 fprintf (dump_file,
4475 "Reached maximum number of statements to merge:\n");
4476 terminate_and_release_chain (*chain_info);
4478 return;
4481 /* Does the store alias any existing chain? */
4482 terminate_all_aliasing_chains (NULL, stmt);
4483 /* Start a new chain. */
4484 struct imm_store_chain_info *new_chain
4485 = new imm_store_chain_info (m_stores_head, base_addr);
4486 info = new store_immediate_info (const_bitsize, const_bitpos,
4487 const_bitregion_start,
4488 const_bitregion_end,
4489 stmt, 0, rhs_code, n, ins_stmt,
4490 bit_not_p, ops[0], ops[1]);
4491 new_chain->m_store_info.safe_push (info);
4492 m_stores.put (base_addr, new_chain);
4493 if (dump_file && (dump_flags & TDF_DETAILS))
4495 fprintf (dump_file, "Starting new chain with statement:\n");
4496 print_gimple_stmt (dump_file, stmt, 0);
4497 fprintf (dump_file, "The base object is:\n");
4498 print_generic_expr (dump_file, base_addr);
4499 fprintf (dump_file, "\n");
4503 /* Entry point for the pass. Go over each basic block recording chains of
4504 immediate stores. Upon encountering a statement that terminates a chain
4505 (e.g. a volatile access or an aliasing statement), process the recorded
4506 stores and emit the widened variants. */
4508 unsigned int
4509 pass_store_merging::execute (function *fun)
4511 basic_block bb;
4512 hash_set<gimple *> orig_stmts;
4514 calculate_dominance_info (CDI_DOMINATORS);
4516 FOR_EACH_BB_FN (bb, fun)
4518 gimple_stmt_iterator gsi;
4519 unsigned HOST_WIDE_INT num_statements = 0;
4520 /* Quickly check whether the block contains at least two non-debug
4521 statements; blocks with fewer cannot contain a mergeable store
4522 chain, so skip them. */
4523 for (gsi = gsi_after_labels (bb); !gsi_end_p (gsi); gsi_next (&gsi))
4525 if (is_gimple_debug (gsi_stmt (gsi)))
4526 continue;
4528 if (++num_statements >= 2)
4529 break;
4532 if (num_statements < 2)
4533 continue;
4535 if (dump_file && (dump_flags & TDF_DETAILS))
4536 fprintf (dump_file, "Processing basic block <%d>:\n", bb->index);
4538 for (gsi = gsi_after_labels (bb); !gsi_end_p (gsi); gsi_next (&gsi))
4540 gimple *stmt = gsi_stmt (gsi);
4542 if (is_gimple_debug (stmt))
4543 continue;
4545 if (gimple_has_volatile_ops (stmt))
4547 /* Terminate all chains. */
4548 if (dump_file && (dump_flags & TDF_DETAILS))
4549 fprintf (dump_file, "Volatile access terminates "
4550 "all chains\n");
4551 terminate_and_process_all_chains ();
4552 continue;
4555 if (gimple_assign_single_p (stmt) && gimple_vdef (stmt)
4556 && !stmt_can_throw_internal (cfun, stmt)
4557 && lhs_valid_for_store_merging_p (gimple_assign_lhs (stmt)))
4558 process_store (stmt);
4559 else
4560 terminate_all_aliasing_chains (NULL, stmt);
4562 terminate_and_process_all_chains ();
4564 return 0;
4567 } // anon namespace
4569 /* Construct and return a store merging pass object. */
4571 gimple_opt_pass *
4572 make_pass_store_merging (gcc::context *ctxt)
4574 return new pass_store_merging (ctxt);
4577 #if CHECKING_P
4579 namespace selftest {
4581 /* Selftests for store merging helpers. */
4583 /* Assert that all elements of the byte arrays X and Y, both of length N,
4584 are equal. */
4586 static void
4587 verify_array_eq (unsigned char *x, unsigned char *y, unsigned int n)
4589 for (unsigned int i = 0; i < n; i++)
4591 if (x[i] != y[i])
4593 fprintf (stderr, "Arrays do not match. X:\n");
4594 dump_char_array (stderr, x, n);
4595 fprintf (stderr, "Y:\n");
4596 dump_char_array (stderr, y, n);
4598 ASSERT_EQ (x[i], y[i]);
4602 /* Test shift_bytes_in_array and check that it carries bits across
4603 byte boundaries correctly. */
4605 static void
4606 verify_shift_bytes_in_array (void)
4608 /* byte 1 | byte 0
4609 00011111 | 11100000. */
4610 unsigned char orig[2] = { 0xe0, 0x1f };
4611 unsigned char in[2];
4612 memcpy (in, orig, sizeof orig);
4614 unsigned char expected[2] = { 0x80, 0x7f };
4615 shift_bytes_in_array (in, sizeof (in), 2);
4616 verify_array_eq (in, expected, sizeof (in));
4618 memcpy (in, orig, sizeof orig);
4619 memcpy (expected, orig, sizeof orig);
4620 /* Check that shifting by zero doesn't change anything. */
4621 shift_bytes_in_array (in, sizeof (in), 0);
4622 verify_array_eq (in, expected, sizeof (in));
4626 /* Test shift_bytes_in_array_right and check that it carries bits across
4627 byte boundaries correctly. */
4629 static void
4630 verify_shift_bytes_in_array_right (void)
4632 /* The 16-bit value 0x1fe0 (00011111 11100000) stored most significant
4633 byte first, as the big-endian shift helper operates on. */
4634 unsigned char orig[2] = { 0x1f, 0xe0};
4635 unsigned char in[2];
4636 memcpy (in, orig, sizeof orig);
4637 unsigned char expected[2] = { 0x07, 0xf8};
4638 shift_bytes_in_array_right (in, sizeof (in), 2);
4639 verify_array_eq (in, expected, sizeof (in));
4641 memcpy (in, orig, sizeof orig);
4642 memcpy (expected, orig, sizeof orig);
4643 /* Check that shifting by zero doesn't change anything. */
4644 shift_bytes_in_array_right (in, sizeof (in), 0);
4645 verify_array_eq (in, expected, sizeof (in));
4648 /* Test that clear_bit_region clears exactly the bits asked for and
4649 nothing more. */
4651 static void
4652 verify_clear_bit_region (void)
4654 /* Start with all bits set and test clearing various patterns in them. */
4655 unsigned char orig[3] = { 0xff, 0xff, 0xff};
4656 unsigned char in[3];
4657 unsigned char expected[3];
4658 memcpy (in, orig, sizeof in);
4660 /* Check zeroing out all the bits. */
4661 clear_bit_region (in, 0, 3 * BITS_PER_UNIT);
4662 expected[0] = expected[1] = expected[2] = 0;
4663 verify_array_eq (in, expected, sizeof in);
4665 memcpy (in, orig, sizeof in);
4666 /* Leave the first and last bits intact. */
4667 clear_bit_region (in, 1, 3 * BITS_PER_UNIT - 2);
4668 expected[0] = 0x1;
4669 expected[1] = 0;
4670 expected[2] = 0x80;
4671 verify_array_eq (in, expected, sizeof in);
4674 /* Test that clear_bit_region_be clears exactly the bits asked for and
4675 nothing more. */
4677 static void
4678 verify_clear_bit_region_be (void)
4680 /* Start with all bits set and test clearing various patterns in them. */
4681 unsigned char orig[3] = { 0xff, 0xff, 0xff};
4682 unsigned char in[3];
4683 unsigned char expected[3];
4684 memcpy (in, orig, sizeof in);
4686 /* Check zeroing out all the bits. */
4687 clear_bit_region_be (in, BITS_PER_UNIT - 1, 3 * BITS_PER_UNIT);
4688 expected[0] = expected[1] = expected[2] = 0;
4689 verify_array_eq (in, expected, sizeof in);
4691 memcpy (in, orig, sizeof in);
4692 /* Leave the first and last bits intact. */
4693 clear_bit_region_be (in, BITS_PER_UNIT - 2, 3 * BITS_PER_UNIT - 2);
4694 expected[0] = 0x80;
4695 expected[1] = 0;
4696 expected[2] = 0x1;
4697 verify_array_eq (in, expected, sizeof in);
4701 /* Run all of the selftests within this file. */
4703 void
4704 store_merging_c_tests (void)
4706 verify_shift_bytes_in_array ();
4707 verify_shift_bytes_in_array_right ();
4708 verify_clear_bit_region ();
4709 verify_clear_bit_region_be ();
4712 } // namespace selftest
4713 #endif /* CHECKING_P. */