gcc/tree-predcom.c
1 /* Predictive commoning.
2 Copyright (C) 2005, 2007 Free Software Foundation, Inc.
4 This file is part of GCC.
6 GCC is free software; you can redistribute it and/or modify it
7 under the terms of the GNU General Public License as published by the
8 Free Software Foundation; either version 3, or (at your option) any
9 later version.
11 GCC is distributed in the hope that it will be useful, but WITHOUT
12 ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
13 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
14 for more details.
16 You should have received a copy of the GNU General Public License
17 along with GCC; see the file COPYING3. If not see
18 <http://www.gnu.org/licenses/>. */
20 /* This file implements the predictive commoning optimization. Predictive
21 commoning can be viewed as CSE around a loop, and with some improvements,
22     as generalized strength reduction -- i.e., reusing values computed in
23 earlier iterations of a loop in the later ones. So far, the pass only
24 handles the most useful case, that is, reusing values of memory references.
25 If you think this is all just a special case of PRE, you are sort of right;
26 however, concentrating on loops is simpler, and makes it possible to
27 incorporate data dependence analysis to detect the opportunities, perform
28 loop unrolling to avoid copies together with renaming immediately,
29 and if needed, we could also take register pressure into account.
31 Let us demonstrate what is done on an example:
33 for (i = 0; i < 100; i++)
35 a[i+2] = a[i] + a[i+1];
36 b[10] = b[10] + i;
37 c[i] = c[99 - i];
38 d[i] = d[i + 1];
41    1) We find data references in the loop, and split them into mutually
42 independent groups (i.e., we find components of a data dependence
43 graph). We ignore read-read dependences whose distance is not constant.
44 (TODO -- we could also ignore antidependences). In this example, we
45 find the following groups:
47 a[i]{read}, a[i+1]{read}, a[i+2]{write}
48 b[10]{read}, b[10]{write}
49 c[99 - i]{read}, c[i]{write}
50 d[i + 1]{read}, d[i]{write}
52    2) Inside each of the groups, we verify several conditions:
53 a) all the references must differ in indices only, and the indices
54 must all have the same step
55       b) the references must dominate the loop latch (and thus, they must be
56 	 ordered by the dominance relation).
57 c) the distance of the indices must be a small multiple of the step
58 We are then able to compute the difference of the references (# of
59 iterations before they point to the same place as the first of them).
60 Also, in case there are writes in the loop, we split the groups into
61 chains whose head is the write whose values are used by the reads in
62 the same chain. The chains are then processed independently,
63 making the further transformations simpler. Also, the shorter chains
64       need the same number of registers, but may require a lower unrolling
65 factor in order to get rid of the copies on the loop latch.
67 In our example, we get the following chains (the chain for c is invalid).
69 a[i]{read,+0}, a[i+1]{read,-1}, a[i+2]{write,-2}
70 b[10]{read,+0}, b[10]{write,+0}
71 d[i + 1]{read,+0}, d[i]{write,+1}
73 3) For each read, we determine the read or write whose value it reuses,
74 together with the distance of this reuse. I.e. we take the last
75 reference before it with distance 0, or the last of the references
76 with the smallest positive distance to the read. Then, we remove
77 the references that are not used in any of these chains, discard the
78 empty groups, and propagate all the links so that they point to the
79 single root reference of the chain (adjusting their distance
80 appropriately). Some extra care needs to be taken for references with
81 step 0. In our example (the numbers indicate the distance of the
82 reuse),
84 a[i] --> (*) 2, a[i+1] --> (*) 1, a[i+2] (*)
85 b[10] --> (*) 1, b[10] (*)
87 4) The chains are combined together if possible. If the corresponding
88 elements of two chains are always combined together with the same
89 operator, we remember just the result of this combination, instead
90 of remembering the values separately. We may need to perform
91 reassociation to enable combining, for example
93 e[i] + f[i+1] + e[i+1] + f[i]
95 can be reassociated as
97 (e[i] + f[i]) + (e[i+1] + f[i+1])
99 and we can combine the chains for e and f into one chain.
101    5) For each root reference (end of the chain) R, let N be the maximum distance
102       of a reference reusing its value.  Variables R0 up to RN are created,
103 together with phi nodes that transfer values from R1 .. RN to
104 R0 .. R(N-1).
105       Initial values are loaded into R0..R(N-1) (if not all of the references
106       are necessarily accessed and they may trap, we may fail here;
107 TODO sometimes, the loads could be guarded by a check for the number
108 of iterations). Values loaded/stored in roots are also copied to
109 RN. Other reads are replaced with the appropriate variable Ri.
110 Everything is put to SSA form.
112 As a small improvement, if R0 is dead after the root (i.e., all uses of
113 the value with the maximum distance dominate the root), we can avoid
114 creating RN and use R0 instead of it.
116 In our example, we get (only the parts concerning a and b are shown):
117 for (i = 0; i < 100; i++)
119 f = phi (a[0], s);
120 s = phi (a[1], f);
121 x = phi (b[10], x);
123 f = f + s;
124 a[i+2] = f;
125 x = x + i;
126 b[10] = x;
129    6) Factor F for unrolling is determined as the least common multiple of
130       (N + 1) for each root reference (N for references for which we avoided
131       creating RN).  If F and the loop are small enough, the loop is unrolled F
132 times. The stores to RN (R0) in the copies of the loop body are
133 periodically replaced with R0, R1, ... (R1, R2, ...), so that they can
134 be coalesced and the copies can be eliminated.
136 TODO -- copy propagation and other optimizations may change the live
137 ranges of the temporary registers and prevent them from being coalesced;
138 this may increase the register pressure.
140 In our case, F = 2 and the (main loop of the) result is
142 for (i = 0; i < ...; i += 2)
144 f = phi (a[0], f);
145 s = phi (a[1], s);
146 x = phi (b[10], x);
148 f = f + s;
149 a[i+2] = f;
150 x = x + i;
151 b[10] = x;
153 s = s + f;
154 a[i+3] = s;
155 x = x + i;
156 b[10] = x;
159 TODO -- stores killing other stores can be taken into account, e.g.,
160 for (i = 0; i < n; i++)
162 a[i] = 1;
163 a[i+2] = 2;
166 can be replaced with
168 t0 = a[0];
169 t1 = a[1];
170 for (i = 0; i < n; i++)
172 a[i] = 1;
173 t2 = 2;
174 t0 = t1;
175 t1 = t2;
177 a[n] = t0;
178 a[n+1] = t1;
180 The interesting part is that this would generalize store motion; still, since
181    store motion is performed elsewhere, it does not seem that important.
183 Predictive commoning can be generalized for arbitrary computations (not
184 just memory loads), and also nontrivial transfer functions (e.g., replacing
185 i * i with ii_last + 2 * i + 1), to generalize strength reduction. */
187 #include "config.h"
188 #include "system.h"
189 #include "coretypes.h"
190 #include "tm.h"
191 #include "tree.h"
192 #include "tm_p.h"
193 #include "cfgloop.h"
194 #include "tree-flow.h"
195 #include "ggc.h"
196 #include "tree-data-ref.h"
197 #include "tree-scalar-evolution.h"
198 #include "tree-chrec.h"
199 #include "params.h"
200 #include "diagnostic.h"
201 #include "tree-pass.h"
202 #include "tree-affine.h"
203 #include "tree-inline.h"
205 /* The maximum number of iterations between the considered memory
206 references. */
208 #define MAX_DISTANCE (target_avail_regs < 16 ? 4 : 8)
210 /* Data references. */
212 typedef struct dref
214 /* The reference itself. */
215 struct data_reference *ref;
217   /* The statement in which the reference appears.  */
218 tree stmt;
220 /* Distance of the reference from the root of the chain (in number of
221 iterations of the loop). */
222 unsigned distance;
224 /* Number of iterations offset from the first reference in the component. */
225 double_int offset;
227 /* Number of the reference in a component, in dominance ordering. */
228 unsigned pos;
230 /* True if the memory reference is always accessed when the loop is
231 entered. */
232 unsigned always_accessed : 1;
233 } *dref;
235 DEF_VEC_P (dref);
236 DEF_VEC_ALLOC_P (dref, heap);
238 /* Type of the chain of the references. */
240 enum chain_type
242 /* The addresses of the references in the chain are constant. */
243 CT_INVARIANT,
245 /* There are only loads in the chain. */
246 CT_LOAD,
248   /* The root of the chain is a store, the rest are loads.  */
249 CT_STORE_LOAD,
251 /* A combination of two chains. */
252 CT_COMBINATION
255 /* Chains of data references. */
257 typedef struct chain
259 /* Type of the chain. */
260 enum chain_type type;
262 /* For combination chains, the operator and the two chains that are
263 combined, and the type of the result. */
264 enum tree_code operator;
265 tree rslt_type;
266 struct chain *ch1, *ch2;
268 /* The references in the chain. */
269 VEC(dref,heap) *refs;
271 /* The maximum distance of the reference in the chain from the root. */
272 unsigned length;
274 /* The variables used to copy the value throughout iterations. */
275 VEC(tree,heap) *vars;
277 /* Initializers for the variables. */
278 VEC(tree,heap) *inits;
280 /* True if there is a use of a variable with the maximal distance
281 that comes after the root in the loop. */
282 unsigned has_max_use_after : 1;
284 /* True if all the memory references in the chain are always accessed. */
285 unsigned all_always_accessed : 1;
287 /* True if this chain was combined together with some other chain. */
288 unsigned combined : 1;
289 } *chain_p;
291 DEF_VEC_P (chain_p);
292 DEF_VEC_ALLOC_P (chain_p, heap);
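/* As an illustration (taken from the example in the comment at the top of
   this file, not from anything verified here): the references a[i], a[i+1]
   and a[i+2] end up in a single CT_STORE_LOAD chain rooted at the write
   a[i+2]; its length is 2 (the largest distance at which a read reuses the
   stored value), and vars later holds the temporaries -- f and s in the
   overview -- that carry the value across iterations.  */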
294 /* Describes the knowledge about the step of the memory references in
295 the component. */
297 enum ref_step_type
299 /* The step is zero. */
300 RS_INVARIANT,
302 /* The step is nonzero. */
303 RS_NONZERO,
305 /* The step may or may not be nonzero. */
306 RS_ANY
309 /* Components of the data dependence graph. */
311 struct component
313 /* The references in the component. */
314 VEC(dref,heap) *refs;
316 /* What we know about the step of the references in the component. */
317 enum ref_step_type comp_step;
319 /* Next component in the list. */
320 struct component *next;
323 /* Bitmap of ssa names defined by looparound phi nodes covered by chains. */
325 static bitmap looparound_phis;
327 /* Cache used by tree_to_aff_combination_expand. */
329 static struct pointer_map_t *name_expansions;
331 /* Dumps data reference REF to FILE. */
333 extern void dump_dref (FILE *, dref);
334 void
335 dump_dref (FILE *file, dref ref)
337 if (ref->ref)
339 fprintf (file, " ");
340 print_generic_expr (file, DR_REF (ref->ref), TDF_SLIM);
341 fprintf (file, " (id %u%s)\n", ref->pos,
342 DR_IS_READ (ref->ref) ? "" : ", write");
344 fprintf (file, " offset ");
345 dump_double_int (file, ref->offset, false);
346 fprintf (file, "\n");
348 fprintf (file, " distance %u\n", ref->distance);
350 else
352 if (TREE_CODE (ref->stmt) == PHI_NODE)
353 fprintf (file, " looparound ref\n");
354 else
355 fprintf (file, " combination ref\n");
356 fprintf (file, " in statement ");
357 print_generic_expr (file, ref->stmt, TDF_SLIM);
358 fprintf (file, "\n");
359 fprintf (file, " distance %u\n", ref->distance);
364 /* Dumps CHAIN to FILE. */
366 extern void dump_chain (FILE *, chain_p);
367 void
368 dump_chain (FILE *file, chain_p chain)
370 dref a;
371 const char *chain_type;
372 unsigned i;
373 tree var;
375 switch (chain->type)
377 case CT_INVARIANT:
378 chain_type = "Load motion";
379 break;
381 case CT_LOAD:
382 chain_type = "Loads-only";
383 break;
385 case CT_STORE_LOAD:
386 chain_type = "Store-loads";
387 break;
389 case CT_COMBINATION:
390 chain_type = "Combination";
391 break;
393 default:
394 gcc_unreachable ();
397 fprintf (file, "%s chain %p%s\n", chain_type, (void *) chain,
398 chain->combined ? " (combined)" : "");
399 if (chain->type != CT_INVARIANT)
400 fprintf (file, " max distance %u%s\n", chain->length,
401 chain->has_max_use_after ? "" : ", may reuse first");
403 if (chain->type == CT_COMBINATION)
405 fprintf (file, " equal to %p %s %p in type ",
406 (void *) chain->ch1, op_symbol_code (chain->operator),
407 (void *) chain->ch2);
408 print_generic_expr (file, chain->rslt_type, TDF_SLIM);
409 fprintf (file, "\n");
412 if (chain->vars)
414 fprintf (file, " vars");
415 for (i = 0; VEC_iterate (tree, chain->vars, i, var); i++)
417 fprintf (file, " ");
418 print_generic_expr (file, var, TDF_SLIM);
420 fprintf (file, "\n");
423 if (chain->inits)
425 fprintf (file, " inits");
426 for (i = 0; VEC_iterate (tree, chain->inits, i, var); i++)
428 fprintf (file, " ");
429 print_generic_expr (file, var, TDF_SLIM);
431 fprintf (file, "\n");
434 fprintf (file, " references:\n");
435 for (i = 0; VEC_iterate (dref, chain->refs, i, a); i++)
436 dump_dref (file, a);
438 fprintf (file, "\n");
441 /* Dumps CHAINS to FILE. */
443 extern void dump_chains (FILE *, VEC (chain_p, heap) *);
444 void
445 dump_chains (FILE *file, VEC (chain_p, heap) *chains)
447 chain_p chain;
448 unsigned i;
450 for (i = 0; VEC_iterate (chain_p, chains, i, chain); i++)
451 dump_chain (file, chain);
454 /* Dumps COMP to FILE. */
456 extern void dump_component (FILE *, struct component *);
457 void
458 dump_component (FILE *file, struct component *comp)
460 dref a;
461 unsigned i;
463 fprintf (file, "Component%s:\n",
464 comp->comp_step == RS_INVARIANT ? " (invariant)" : "");
465 for (i = 0; VEC_iterate (dref, comp->refs, i, a); i++)
466 dump_dref (file, a);
467 fprintf (file, "\n");
470 /* Dumps COMPS to FILE. */
472 extern void dump_components (FILE *, struct component *);
473 void
474 dump_components (FILE *file, struct component *comps)
476 struct component *comp;
478 for (comp = comps; comp; comp = comp->next)
479 dump_component (file, comp);
482 /* Frees a chain CHAIN. */
484 static void
485 release_chain (chain_p chain)
487 dref ref;
488 unsigned i;
490 if (chain == NULL)
491 return;
493 for (i = 0; VEC_iterate (dref, chain->refs, i, ref); i++)
494 free (ref);
496 VEC_free (dref, heap, chain->refs);
497 VEC_free (tree, heap, chain->vars);
498 VEC_free (tree, heap, chain->inits);
500 free (chain);
503 /* Frees CHAINS. */
505 static void
506 release_chains (VEC (chain_p, heap) *chains)
508 unsigned i;
509 chain_p chain;
511 for (i = 0; VEC_iterate (chain_p, chains, i, chain); i++)
512 release_chain (chain);
513 VEC_free (chain_p, heap, chains);
516 /* Frees a component COMP. */
518 static void
519 release_component (struct component *comp)
521 VEC_free (dref, heap, comp->refs);
522 free (comp);
525 /* Frees list of components COMPS. */
527 static void
528 release_components (struct component *comps)
530 struct component *act, *next;
532 for (act = comps; act; act = next)
534 next = act->next;
535 release_component (act);
539 /* Finds a root of tree given by FATHERS containing A, and performs path
540 shortening. */
542 static unsigned
543 component_of (unsigned fathers[], unsigned a)
545 unsigned root, n;
547 for (root = a; root != fathers[root]; root = fathers[root])
548 continue;
550 for (; a != root; a = n)
552 n = fathers[a];
553 fathers[a] = root;
556 return root;
559 /* Join operation for DFU. FATHERS gives the tree, SIZES are sizes of the
560 components, A and B are components to merge. */
562 static void
563 merge_comps (unsigned fathers[], unsigned sizes[], unsigned a, unsigned b)
565 unsigned ca = component_of (fathers, a);
566 unsigned cb = component_of (fathers, b);
568 if (ca == cb)
569 return;
571 if (sizes[ca] < sizes[cb])
573 sizes[cb] += sizes[ca];
574 fathers[ca] = cb;
576 else
578 sizes[ca] += sizes[cb];
579 fathers[cb] = ca;
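/* A hedged illustration of the two disjoint-set ("DFU") helpers above,
   assuming three references 0, 1 and 2 that each start in their own
   component:

     merge_comps (fathers, sizes, 0, 1);   afterwards component_of (fathers, 1) == 0
     merge_comps (fathers, sizes, 0, 2);   afterwards component_of (fathers, 2) == 0

   merge_comps unites the smaller component into the larger one, and
   component_of shortens the paths it walks, so later lookups are cheap.  */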
583 /* Returns true if A is a reference that is suitable for predictive commoning
584 in the innermost loop that contains it. REF_STEP is set according to the
585 step of the reference A. */
587 static bool
588 suitable_reference_p (struct data_reference *a, enum ref_step_type *ref_step)
590 tree ref = DR_REF (a), step = DR_STEP (a);
592 if (!step
593 || !is_gimple_reg_type (TREE_TYPE (ref))
594 || tree_could_throw_p (ref))
595 return false;
597 if (integer_zerop (step))
598 *ref_step = RS_INVARIANT;
599 else if (integer_nonzerop (step))
600 *ref_step = RS_NONZERO;
601 else
602 *ref_step = RS_ANY;
604 return true;
607 /* Stores DR_OFFSET (DR) + DR_INIT (DR) to OFFSET. */
609 static void
610 aff_combination_dr_offset (struct data_reference *dr, aff_tree *offset)
612 aff_tree delta;
614 tree_to_aff_combination_expand (DR_OFFSET (dr), sizetype, offset,
615 &name_expansions);
616 aff_combination_const (&delta, sizetype, tree_to_double_int (DR_INIT (dr)));
617 aff_combination_add (offset, &delta);
620 /* Determines the number of iterations of the innermost enclosing loop before B
621 refers to exactly the same location as A and stores it to OFF. If A and
622 B do not have the same step, they never meet, or anything else fails,
623 returns false, otherwise returns true. Both A and B are assumed to
624 satisfy suitable_reference_p. */
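/* For instance (an illustration based on the example at the top of this
   file, not something computed here): for A = a[i] and B = a[i+2], which
   share the same base address and step, the call would set *OFF to -2 --
   B already accessed A's current location two iterations earlier, which
   matches the "{write,-2}" annotation in the overview comment.  */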
626 static bool
627 determine_offset (struct data_reference *a, struct data_reference *b,
628 double_int *off)
630 aff_tree diff, baseb, step;
631 tree typea, typeb;
633 /* Check that both the references access the location in the same type. */
634 typea = TREE_TYPE (DR_REF (a));
635 typeb = TREE_TYPE (DR_REF (b));
636 if (!useless_type_conversion_p (typeb, typea))
637 return false;
639 /* Check whether the base address and the step of both references is the
640 same. */
641 if (!operand_equal_p (DR_STEP (a), DR_STEP (b), 0)
642 || !operand_equal_p (DR_BASE_ADDRESS (a), DR_BASE_ADDRESS (b), 0))
643 return false;
645 if (integer_zerop (DR_STEP (a)))
647 /* If the references have loop invariant address, check that they access
648 exactly the same location. */
649 *off = double_int_zero;
650 return (operand_equal_p (DR_OFFSET (a), DR_OFFSET (b), 0)
651 && operand_equal_p (DR_INIT (a), DR_INIT (b), 0));
654 /* Compare the offsets of the addresses, and check whether the difference
655 is a multiple of step. */
656 aff_combination_dr_offset (a, &diff);
657 aff_combination_dr_offset (b, &baseb);
658 aff_combination_scale (&baseb, double_int_minus_one);
659 aff_combination_add (&diff, &baseb);
661 tree_to_aff_combination_expand (DR_STEP (a), sizetype,
662 &step, &name_expansions);
663 return aff_combination_constant_multiple_p (&diff, &step, off);
666 /* Returns the last basic block in LOOP that is known to be executed
667    whenever the loop is entered.  */
669 static basic_block
670 last_always_executed_block (struct loop *loop)
672 unsigned i;
673 VEC (edge, heap) *exits = get_loop_exit_edges (loop);
674 edge ex;
675 basic_block last = loop->latch;
677 for (i = 0; VEC_iterate (edge, exits, i, ex); i++)
678 last = nearest_common_dominator (CDI_DOMINATORS, last, ex->src);
679 VEC_free (edge, heap, exits);
681 return last;
684 /* Splits the dependence graph on DATAREFS, described by DEPENDS, into components.  */
686 static struct component *
687 split_data_refs_to_components (struct loop *loop,
688 VEC (data_reference_p, heap) *datarefs,
689 VEC (ddr_p, heap) *depends)
691 unsigned i, n = VEC_length (data_reference_p, datarefs);
692 unsigned ca, ia, ib, bad;
693 unsigned *comp_father = XNEWVEC (unsigned, n + 1);
694 unsigned *comp_size = XNEWVEC (unsigned, n + 1);
695 struct component **comps;
696 struct data_reference *dr, *dra, *drb;
697 struct data_dependence_relation *ddr;
698 struct component *comp_list = NULL, *comp;
699 dref dataref;
700 basic_block last_always_executed = last_always_executed_block (loop);
702 for (i = 0; VEC_iterate (data_reference_p, datarefs, i, dr); i++)
704 if (!DR_REF (dr))
706 /* A fake reference for call or asm_expr that may clobber memory;
707 just fail. */
708 goto end;
710 dr->aux = (void *) (size_t) i;
711 comp_father[i] = i;
712 comp_size[i] = 1;
715 /* A component reserved for the "bad" data references. */
716 comp_father[n] = n;
717 comp_size[n] = 1;
719 for (i = 0; VEC_iterate (data_reference_p, datarefs, i, dr); i++)
721 enum ref_step_type dummy;
723 if (!suitable_reference_p (dr, &dummy))
725 ia = (unsigned) (size_t) dr->aux;
726 merge_comps (comp_father, comp_size, n, ia);
730 for (i = 0; VEC_iterate (ddr_p, depends, i, ddr); i++)
732 double_int dummy_off;
734 if (DDR_ARE_DEPENDENT (ddr) == chrec_known)
735 continue;
737 dra = DDR_A (ddr);
738 drb = DDR_B (ddr);
739 ia = component_of (comp_father, (unsigned) (size_t) dra->aux);
740 ib = component_of (comp_father, (unsigned) (size_t) drb->aux);
741 if (ia == ib)
742 continue;
744 bad = component_of (comp_father, n);
746 /* If both A and B are reads, we may ignore unsuitable dependences. */
747 if (DR_IS_READ (dra) && DR_IS_READ (drb)
748 && (ia == bad || ib == bad
749 || !determine_offset (dra, drb, &dummy_off)))
750 continue;
752 merge_comps (comp_father, comp_size, ia, ib);
755 comps = XCNEWVEC (struct component *, n);
756 bad = component_of (comp_father, n);
757 for (i = 0; VEC_iterate (data_reference_p, datarefs, i, dr); i++)
759 ia = (unsigned) (size_t) dr->aux;
760 ca = component_of (comp_father, ia);
761 if (ca == bad)
762 continue;
764 comp = comps[ca];
765 if (!comp)
767 comp = XCNEW (struct component);
768 comp->refs = VEC_alloc (dref, heap, comp_size[ca]);
769 comps[ca] = comp;
772 dataref = XCNEW (struct dref);
773 dataref->ref = dr;
774 dataref->stmt = DR_STMT (dr);
775 dataref->offset = double_int_zero;
776 dataref->distance = 0;
778 dataref->always_accessed
779 = dominated_by_p (CDI_DOMINATORS, last_always_executed,
780 bb_for_stmt (dataref->stmt));
781 dataref->pos = VEC_length (dref, comp->refs);
782 VEC_quick_push (dref, comp->refs, dataref);
785 for (i = 0; i < n; i++)
787 comp = comps[i];
788 if (comp)
790 comp->next = comp_list;
791 comp_list = comp;
794 free (comps);
796 end:
797 free (comp_father);
798 free (comp_size);
799 return comp_list;
802 /* Returns true if the component COMP satisfies the conditions
803 described in 2) at the beginning of this file. LOOP is the current
804 loop. */
806 static bool
807 suitable_component_p (struct loop *loop, struct component *comp)
809 unsigned i;
810 dref a, first;
811 basic_block ba, bp = loop->header;
812 bool ok, has_write = false;
814 for (i = 0; VEC_iterate (dref, comp->refs, i, a); i++)
816 ba = bb_for_stmt (a->stmt);
818 if (!just_once_each_iteration_p (loop, ba))
819 return false;
821 gcc_assert (dominated_by_p (CDI_DOMINATORS, ba, bp));
822 bp = ba;
824 if (!DR_IS_READ (a->ref))
825 has_write = true;
828 first = VEC_index (dref, comp->refs, 0);
829 ok = suitable_reference_p (first->ref, &comp->comp_step);
830 gcc_assert (ok);
831 first->offset = double_int_zero;
833 for (i = 1; VEC_iterate (dref, comp->refs, i, a); i++)
835 if (!determine_offset (first->ref, a->ref, &a->offset))
836 return false;
838 #ifdef ENABLE_CHECKING
840 enum ref_step_type a_step;
841 ok = suitable_reference_p (a->ref, &a_step);
842 gcc_assert (ok && a_step == comp->comp_step);
844 #endif
847 /* If there is a write inside the component, we must know whether the
848 step is nonzero or not -- we would not otherwise be able to recognize
849 whether the value accessed by reads comes from the OFFSET-th iteration
850 or the previous one. */
851 if (has_write && comp->comp_step == RS_ANY)
852 return false;
854 return true;
857 /* Check the conditions on references inside each of components COMPS,
858 and remove the unsuitable components from the list. The new list
859 of components is returned. The conditions are described in 2) at
860 the beginning of this file. LOOP is the current loop. */
862 static struct component *
863 filter_suitable_components (struct loop *loop, struct component *comps)
865 struct component **comp, *act;
867 for (comp = &comps; *comp; )
869 act = *comp;
870 if (suitable_component_p (loop, act))
871 comp = &act->next;
872 else
874 *comp = act->next;
875 release_component (act);
879 return comps;
882 /* Compares two drefs A and B by their offset and position. Callback for
883 qsort. */
885 static int
886 order_drefs (const void *a, const void *b)
888 const dref *da = a;
889 const dref *db = b;
890 int offcmp = double_int_scmp ((*da)->offset, (*db)->offset);
892 if (offcmp != 0)
893 return offcmp;
895 return (*da)->pos - (*db)->pos;
898 /* Returns root of the CHAIN. */
900 static inline dref
901 get_chain_root (chain_p chain)
903 return VEC_index (dref, chain->refs, 0);
906 /* Adds REF to the chain CHAIN. */
908 static void
909 add_ref_to_chain (chain_p chain, dref ref)
911 dref root = get_chain_root (chain);
912 double_int dist;
914 gcc_assert (double_int_scmp (root->offset, ref->offset) <= 0);
915 dist = double_int_add (ref->offset, double_int_neg (root->offset));
916 if (double_int_ucmp (uhwi_to_double_int (MAX_DISTANCE), dist) <= 0)
917 return;
918 gcc_assert (double_int_fits_in_uhwi_p (dist));
920 VEC_safe_push (dref, heap, chain->refs, ref);
922 ref->distance = double_int_to_uhwi (dist);
924 if (ref->distance >= chain->length)
926 chain->length = ref->distance;
927 chain->has_max_use_after = false;
930 if (ref->distance == chain->length
931 && ref->pos > root->pos)
932 chain->has_max_use_after = true;
934 chain->all_always_accessed &= ref->always_accessed;
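/* A hedged illustration of the distance computation above, again using the
   overview example: with the chain rooted at a[i+2] (offset -2), adding
   a[i+1] (offset -1) yields distance 1 and adding a[i] (offset 0) yields
   distance 2, so the chain ends up with length 2.  */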
937 /* Returns the chain for invariant component COMP. */
939 static chain_p
940 make_invariant_chain (struct component *comp)
942 chain_p chain = XCNEW (struct chain);
943 unsigned i;
944 dref ref;
946 chain->type = CT_INVARIANT;
948 chain->all_always_accessed = true;
950 for (i = 0; VEC_iterate (dref, comp->refs, i, ref); i++)
952 VEC_safe_push (dref, heap, chain->refs, ref);
953 chain->all_always_accessed &= ref->always_accessed;
956 return chain;
959 /* Make a new chain rooted at REF. */
961 static chain_p
962 make_rooted_chain (dref ref)
964 chain_p chain = XCNEW (struct chain);
966 chain->type = DR_IS_READ (ref->ref) ? CT_LOAD : CT_STORE_LOAD;
968 VEC_safe_push (dref, heap, chain->refs, ref);
969 chain->all_always_accessed = ref->always_accessed;
971 ref->distance = 0;
973 return chain;
976 /* Returns true if CHAIN is not trivial. */
978 static bool
979 nontrivial_chain_p (chain_p chain)
981 return chain != NULL && VEC_length (dref, chain->refs) > 1;
984 /* Returns the ssa name that contains the value of REF, or NULL_TREE if there
985 is no such name. */
987 static tree
988 name_for_ref (dref ref)
990 tree name;
992 if (TREE_CODE (ref->stmt) == GIMPLE_MODIFY_STMT)
994 if (!ref->ref || DR_IS_READ (ref->ref))
995 name = GIMPLE_STMT_OPERAND (ref->stmt, 0);
996 else
997 name = GIMPLE_STMT_OPERAND (ref->stmt, 1);
999 else
1000 name = PHI_RESULT (ref->stmt);
1002 return (TREE_CODE (name) == SSA_NAME ? name : NULL_TREE);
1005 /* Returns true if REF is a valid initializer for ROOT with given DISTANCE (in
1006 iterations of the innermost enclosing loop). */
1008 static bool
1009 valid_initializer_p (struct data_reference *ref,
1010 unsigned distance, struct data_reference *root)
1012 aff_tree diff, base, step;
1013 double_int off;
1015 if (!DR_BASE_ADDRESS (ref))
1016 return false;
1018 /* Both REF and ROOT must be accessing the same object. */
1019 if (!operand_equal_p (DR_BASE_ADDRESS (ref), DR_BASE_ADDRESS (root), 0))
1020 return false;
1022 /* The initializer is defined outside of loop, hence its address must be
1023 invariant inside the loop. */
1024 gcc_assert (integer_zerop (DR_STEP (ref)));
1026 /* If the address of the reference is invariant, initializer must access
1027 exactly the same location. */
1028 if (integer_zerop (DR_STEP (root)))
1029 return (operand_equal_p (DR_OFFSET (ref), DR_OFFSET (root), 0)
1030 && operand_equal_p (DR_INIT (ref), DR_INIT (root), 0));
1032 /* Verify that this index of REF is equal to the root's index at
1033 -DISTANCE-th iteration. */
1034 aff_combination_dr_offset (root, &diff);
1035 aff_combination_dr_offset (ref, &base);
1036 aff_combination_scale (&base, double_int_minus_one);
1037 aff_combination_add (&diff, &base);
1039 tree_to_aff_combination_expand (DR_STEP (root), sizetype, &step,
1040 &name_expansions);
1041 if (!aff_combination_constant_multiple_p (&diff, &step, &off))
1042 return false;
1044 if (!double_int_equal_p (off, uhwi_to_double_int (distance)))
1045 return false;
1047 return true;
1050 /* Finds looparound phi node of LOOP that copies the value of REF, and if its
1051 initial value is correct (equal to initial value of REF shifted by one
1052 iteration), returns the phi node. Otherwise, NULL_TREE is returned. ROOT
1053 is the root of the current chain. */
1055 static tree
1056 find_looparound_phi (struct loop *loop, dref ref, dref root)
1058 tree name, phi, init, init_stmt, init_ref;
1059 edge latch = loop_latch_edge (loop);
1060 struct data_reference init_dr;
1062 if (TREE_CODE (ref->stmt) == GIMPLE_MODIFY_STMT)
1064 if (DR_IS_READ (ref->ref))
1065 name = GIMPLE_STMT_OPERAND (ref->stmt, 0);
1066 else
1067 name = GIMPLE_STMT_OPERAND (ref->stmt, 1);
1069 else
1070 name = PHI_RESULT (ref->stmt);
1071 if (!name)
1072 return NULL_TREE;
1074 for (phi = phi_nodes (loop->header); phi; phi = PHI_CHAIN (phi))
1075 if (PHI_ARG_DEF_FROM_EDGE (phi, latch) == name)
1076 break;
1078 if (!phi)
1079 return NULL_TREE;
1081 init = PHI_ARG_DEF_FROM_EDGE (phi, loop_preheader_edge (loop));
1082 if (TREE_CODE (init) != SSA_NAME)
1083 return NULL_TREE;
1084 init_stmt = SSA_NAME_DEF_STMT (init);
1085 if (TREE_CODE (init_stmt) != GIMPLE_MODIFY_STMT)
1086 return NULL_TREE;
1087 gcc_assert (GIMPLE_STMT_OPERAND (init_stmt, 0) == init);
1089 init_ref = GIMPLE_STMT_OPERAND (init_stmt, 1);
1090 if (!REFERENCE_CLASS_P (init_ref)
1091 && !DECL_P (init_ref))
1092 return NULL_TREE;
1094 /* Analyze the behavior of INIT_REF with respect to LOOP (innermost
1095 loop enclosing PHI). */
1096 memset (&init_dr, 0, sizeof (struct data_reference));
1097 DR_REF (&init_dr) = init_ref;
1098 DR_STMT (&init_dr) = phi;
1099 dr_analyze_innermost (&init_dr);
1101 if (!valid_initializer_p (&init_dr, ref->distance + 1, root->ref))
1102 return NULL_TREE;
1104 return phi;
1107 /* Adds a reference for the looparound copy of REF in PHI to CHAIN. */
1109 static void
1110 insert_looparound_copy (chain_p chain, dref ref, tree phi)
1112 dref nw = XCNEW (struct dref), aref;
1113 unsigned i;
1115 nw->stmt = phi;
1116 nw->distance = ref->distance + 1;
1117 nw->always_accessed = 1;
1119 for (i = 0; VEC_iterate (dref, chain->refs, i, aref); i++)
1120 if (aref->distance >= nw->distance)
1121 break;
1122 VEC_safe_insert (dref, heap, chain->refs, i, nw);
1124 if (nw->distance > chain->length)
1126 chain->length = nw->distance;
1127 chain->has_max_use_after = false;
1131 /* For references in CHAIN that are copied around the LOOP (created previously
1132    by PRE, or by the user), add the results of such copies to the chain.  This
1133    enables us to remove the copies by unrolling, and may need fewer registers
1134 (also, it may allow us to combine chains together). */
1136 static void
1137 add_looparound_copies (struct loop *loop, chain_p chain)
1139 unsigned i;
1140 dref ref, root = get_chain_root (chain);
1141 tree phi;
1143 for (i = 0; VEC_iterate (dref, chain->refs, i, ref); i++)
1145 phi = find_looparound_phi (loop, ref, root);
1146 if (!phi)
1147 continue;
1149 bitmap_set_bit (looparound_phis, SSA_NAME_VERSION (PHI_RESULT (phi)));
1150 insert_looparound_copy (chain, ref, phi);
1154 /* Find roots of the values and determine distances in the component COMP.
1155 The references are redistributed into CHAINS. LOOP is the current
1156 loop. */
1158 static void
1159 determine_roots_comp (struct loop *loop,
1160 struct component *comp,
1161 VEC (chain_p, heap) **chains)
1163 unsigned i;
1164 dref a;
1165 chain_p chain = NULL;
1167 /* Invariants are handled specially. */
1168 if (comp->comp_step == RS_INVARIANT)
1170 chain = make_invariant_chain (comp);
1171 VEC_safe_push (chain_p, heap, *chains, chain);
1172 return;
1175 qsort (VEC_address (dref, comp->refs), VEC_length (dref, comp->refs),
1176 sizeof (dref), order_drefs);
1178 for (i = 0; VEC_iterate (dref, comp->refs, i, a); i++)
1180 if (!chain || !DR_IS_READ (a->ref))
1182 if (nontrivial_chain_p (chain))
1183 VEC_safe_push (chain_p, heap, *chains, chain);
1184 else
1185 release_chain (chain);
1186 chain = make_rooted_chain (a);
1187 continue;
1190 add_ref_to_chain (chain, a);
1193 if (nontrivial_chain_p (chain))
1195 add_looparound_copies (loop, chain);
1196 VEC_safe_push (chain_p, heap, *chains, chain);
1198 else
1199 release_chain (chain);
1202 /* Find roots of the values and determine distances in components COMPS, and
1203    separate the references into CHAINS.  LOOP is the current loop.  */
1205 static void
1206 determine_roots (struct loop *loop,
1207 struct component *comps, VEC (chain_p, heap) **chains)
1209 struct component *comp;
1211 for (comp = comps; comp; comp = comp->next)
1212 determine_roots_comp (loop, comp, chains);
1215 /* Replace the reference in statement STMT with temporary variable
1216 NEW. If SET is true, NEW is instead initialized to the value of
1217 the reference in the statement. IN_LHS is true if the reference
1218 is in the lhs of STMT, false if it is in rhs. */
1220 static void
1221 replace_ref_with (tree stmt, tree new, bool set, bool in_lhs)
1223 tree val, new_stmt;
1224 block_stmt_iterator bsi;
1226 if (TREE_CODE (stmt) == PHI_NODE)
1228 gcc_assert (!in_lhs && !set);
1230 val = PHI_RESULT (stmt);
1231 bsi = bsi_after_labels (bb_for_stmt (stmt));
1232 remove_phi_node (stmt, NULL_TREE, false);
1234 /* Turn the phi node into GIMPLE_MODIFY_STMT. */
1235 new_stmt = build_gimple_modify_stmt (val, new);
1236 SSA_NAME_DEF_STMT (val) = new_stmt;
1237 bsi_insert_before (&bsi, new_stmt, BSI_NEW_STMT);
1238 return;
1241 /* Since the reference is of gimple_reg type, it should only
1242 appear as lhs or rhs of modify statement. */
1243 gcc_assert (TREE_CODE (stmt) == GIMPLE_MODIFY_STMT);
1245 /* If we do not need to initialize NEW, just replace the use of OLD. */
1246 if (!set)
1248 gcc_assert (!in_lhs);
1249 GIMPLE_STMT_OPERAND (stmt, 1) = new;
1250 update_stmt (stmt);
1251 return;
1254 bsi = bsi_for_stmt (stmt);
1255 if (in_lhs)
1257 val = GIMPLE_STMT_OPERAND (stmt, 1);
1259 /* OLD = VAL
1261 is transformed to
1263 OLD = VAL
1264 NEW = VAL
1266 (since the reference is of gimple_reg type, VAL is either gimple
1267 invariant or ssa name). */
1269 else
1271 val = GIMPLE_STMT_OPERAND (stmt, 0);
1273 /* VAL = OLD
1275 is transformed to
1277 VAL = OLD
1278 NEW = VAL */
1281 new_stmt = build_gimple_modify_stmt (new, unshare_expr (val));
1282 bsi_insert_after (&bsi, new_stmt, BSI_NEW_STMT);
1283 SSA_NAME_DEF_STMT (new) = new_stmt;
1286 /* Returns the reference to the address of REF in the ITER-th iteration of
1287 LOOP, or NULL if we fail to determine it (ITER may be negative). We
1288 try to preserve the original shape of the reference (not rewrite it
1289 as an indirect ref to the address), to make tree_could_trap_p in
1290 prepare_initializers_chain return false more often. */
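/* For example (a sketch assuming the loop from the overview comment, where
   i starts at 0 and is incremented by 1): calling this function on the
   reference a[i] with ITER == 0 rebuilds it as a[0], and with ITER == 1 as
   a[1] -- the shapes used for the initial values loaded before the loop.  */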
1292 static tree
1293 ref_at_iteration (struct loop *loop, tree ref, int iter)
1295 tree idx, *idx_p, type, val, op0 = NULL_TREE, ret;
1296 affine_iv iv;
1297 bool ok;
1299 if (handled_component_p (ref))
1301 op0 = ref_at_iteration (loop, TREE_OPERAND (ref, 0), iter);
1302 if (!op0)
1303 return NULL_TREE;
1305 else if (!INDIRECT_REF_P (ref))
1306 return unshare_expr (ref);
1308 if (TREE_CODE (ref) == INDIRECT_REF)
1310 ret = build1 (INDIRECT_REF, TREE_TYPE (ref), NULL_TREE);
1311 idx = TREE_OPERAND (ref, 0);
1312 idx_p = &TREE_OPERAND (ret, 0);
1314 else if (TREE_CODE (ref) == COMPONENT_REF)
1316 /* Check that the offset is loop invariant. */
1317 if (TREE_OPERAND (ref, 2)
1318 && !expr_invariant_in_loop_p (loop, TREE_OPERAND (ref, 2)))
1319 return NULL_TREE;
1321 return build3 (COMPONENT_REF, TREE_TYPE (ref), op0,
1322 unshare_expr (TREE_OPERAND (ref, 1)),
1323 unshare_expr (TREE_OPERAND (ref, 2)));
1325 else if (TREE_CODE (ref) == ARRAY_REF)
1327 /* Check that the lower bound and the step are loop invariant. */
1328 if (TREE_OPERAND (ref, 2)
1329 && !expr_invariant_in_loop_p (loop, TREE_OPERAND (ref, 2)))
1330 return NULL_TREE;
1331 if (TREE_OPERAND (ref, 3)
1332 && !expr_invariant_in_loop_p (loop, TREE_OPERAND (ref, 3)))
1333 return NULL_TREE;
1335 ret = build4 (ARRAY_REF, TREE_TYPE (ref), op0, NULL_TREE,
1336 unshare_expr (TREE_OPERAND (ref, 2)),
1337 unshare_expr (TREE_OPERAND (ref, 3)));
1338 idx = TREE_OPERAND (ref, 1);
1339 idx_p = &TREE_OPERAND (ret, 1);
1341 else
1342 return NULL_TREE;
1344 ok = simple_iv (loop, first_stmt (loop->header), idx, &iv, true);
1345 if (!ok)
1346 return NULL_TREE;
1347 iv.base = expand_simple_operations (iv.base);
1348 if (integer_zerop (iv.step))
1349 *idx_p = unshare_expr (iv.base);
1350 else
1352 type = TREE_TYPE (iv.base);
1353 if (POINTER_TYPE_P (type))
1355 val = fold_build2 (MULT_EXPR, sizetype, iv.step,
1356 size_int (iter));
1357 val = fold_build2 (POINTER_PLUS_EXPR, type, iv.base, val);
1359 else
1361 val = fold_build2 (MULT_EXPR, type, iv.step,
1362 build_int_cst_type (type, iter));
1363 val = fold_build2 (PLUS_EXPR, type, iv.base, val);
1365 *idx_p = unshare_expr (val);
1368 return ret;
1371 /* Get the initialization expression for the INDEX-th temporary variable
1372 of CHAIN. */
1374 static tree
1375 get_init_expr (chain_p chain, unsigned index)
1377 if (chain->type == CT_COMBINATION)
1379 tree e1 = get_init_expr (chain->ch1, index);
1380 tree e2 = get_init_expr (chain->ch2, index);
1382 return fold_build2 (chain->operator, chain->rslt_type, e1, e2);
1384 else
1385 return VEC_index (tree, chain->inits, index);
1388 /* Marks all virtual operands of statement STMT for renaming. */
1390 void
1391 mark_virtual_ops_for_renaming (tree stmt)
1393 ssa_op_iter iter;
1394 tree var;
1396 if (TREE_CODE (stmt) == PHI_NODE)
1398 var = PHI_RESULT (stmt);
1399 if (is_gimple_reg (var))
1400 return;
1402 if (TREE_CODE (var) == SSA_NAME)
1403 var = SSA_NAME_VAR (var);
1404 mark_sym_for_renaming (var);
1405 return;
1408 update_stmt (stmt);
1410 FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_ALL_VIRTUALS)
1412 if (TREE_CODE (var) == SSA_NAME)
1413 var = SSA_NAME_VAR (var);
1414 mark_sym_for_renaming (var);
1418 /* Calls mark_virtual_ops_for_renaming for all members of LIST. */
1420 static void
1421 mark_virtual_ops_for_renaming_list (tree list)
1423 tree_stmt_iterator tsi;
1425 for (tsi = tsi_start (list); !tsi_end_p (tsi); tsi_next (&tsi))
1426 mark_virtual_ops_for_renaming (tsi_stmt (tsi));
1429 /* Returns a new temporary variable used for the I-th variable carrying
1430 value of REF. The variable's uid is marked in TMP_VARS. */
1432 static tree
1433 predcom_tmp_var (tree ref, unsigned i, bitmap tmp_vars)
1435 tree type = TREE_TYPE (ref);
1436 tree var = create_tmp_var (type, get_lsm_tmp_name (ref, i));
1438 /* We never access the components of the temporary variable in predictive
1439 commoning. */
1440 if (TREE_CODE (type) == COMPLEX_TYPE
1441 || TREE_CODE (type) == VECTOR_TYPE)
1442 DECL_GIMPLE_REG_P (var) = 1;
1444 add_referenced_var (var);
1445 bitmap_set_bit (tmp_vars, DECL_UID (var));
1446 return var;
1449 /* Creates the variables for CHAIN, as well as phi nodes for them and
1450 initialization on entry to LOOP. Uids of the newly created
1451 temporary variables are marked in TMP_VARS. */
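/* A hedged illustration using the overview example: for the chain rooted at
   the write a[i+2] (length 2, with no use of the maximum-distance value
   after the root), this creates the temporaries f and s and the phi nodes

     f = phi (a[0], s);
     s = phi (a[1], f);

   shown at the top of this file.  */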
1453 static void
1454 initialize_root_vars (struct loop *loop, chain_p chain, bitmap tmp_vars)
1456 unsigned i;
1457 unsigned n = chain->length;
1458 dref root = get_chain_root (chain);
1459 bool reuse_first = !chain->has_max_use_after;
1460 tree ref, init, var, next, stmts;
1461 tree phi;
1462 edge entry = loop_preheader_edge (loop), latch = loop_latch_edge (loop);
1464   /* If N == 0, then all the references are within a single iteration.  And
1465      since this is a nonempty chain, reuse_first cannot be true.  */
1466 gcc_assert (n > 0 || !reuse_first);
1468 chain->vars = VEC_alloc (tree, heap, n + 1);
1470 if (chain->type == CT_COMBINATION)
1471 ref = GIMPLE_STMT_OPERAND (root->stmt, 0);
1472 else
1473 ref = DR_REF (root->ref);
1475 for (i = 0; i < n + (reuse_first ? 0 : 1); i++)
1477 var = predcom_tmp_var (ref, i, tmp_vars);
1478 VEC_quick_push (tree, chain->vars, var);
1480 if (reuse_first)
1481 VEC_quick_push (tree, chain->vars, VEC_index (tree, chain->vars, 0));
1483 for (i = 0; VEC_iterate (tree, chain->vars, i, var); i++)
1484 VEC_replace (tree, chain->vars, i, make_ssa_name (var, NULL_TREE));
1486 for (i = 0; i < n; i++)
1488 var = VEC_index (tree, chain->vars, i);
1489 next = VEC_index (tree, chain->vars, i + 1);
1490 init = get_init_expr (chain, i);
1492 init = force_gimple_operand (init, &stmts, true, NULL_TREE);
1493 if (stmts)
1495 mark_virtual_ops_for_renaming_list (stmts);
1496 bsi_insert_on_edge_immediate (entry, stmts);
1499 phi = create_phi_node (var, loop->header);
1500 SSA_NAME_DEF_STMT (var) = phi;
1501 add_phi_arg (phi, init, entry);
1502 add_phi_arg (phi, next, latch);
1506 /* Create the variables and initialization statement for root of chain
1507 CHAIN. Uids of the newly created temporary variables are marked
1508 in TMP_VARS. */
1510 static void
1511 initialize_root (struct loop *loop, chain_p chain, bitmap tmp_vars)
1513 dref root = get_chain_root (chain);
1514 bool in_lhs = (chain->type == CT_STORE_LOAD
1515 || chain->type == CT_COMBINATION);
1517 initialize_root_vars (loop, chain, tmp_vars);
1518 replace_ref_with (root->stmt,
1519 VEC_index (tree, chain->vars, chain->length),
1520 true, in_lhs);
1523 /* Initializes a variable for load motion for ROOT and prepares phi nodes and
1524 initialization on entry to LOOP if necessary. The ssa name for the variable
1525    is stored in VARS.  If WRITTEN is true, a phi node to copy its value
1526    around the loop is also created.  The uid of the newly created temporary variable
1527 is marked in TMP_VARS. INITS is the list containing the (single)
1528 initializer. */
1530 static void
1531 initialize_root_vars_lm (struct loop *loop, dref root, bool written,
1532 VEC(tree, heap) **vars, VEC(tree, heap) *inits,
1533 bitmap tmp_vars)
1535 unsigned i;
1536 tree ref = DR_REF (root->ref), init, var, next, stmts;
1537 tree phi;
1538 edge entry = loop_preheader_edge (loop), latch = loop_latch_edge (loop);
1540 /* Find the initializer for the variable, and check that it cannot
1541 trap. */
1542 init = VEC_index (tree, inits, 0);
1544 *vars = VEC_alloc (tree, heap, written ? 2 : 1);
1545 var = predcom_tmp_var (ref, 0, tmp_vars);
1546 VEC_quick_push (tree, *vars, var);
1547 if (written)
1548 VEC_quick_push (tree, *vars, VEC_index (tree, *vars, 0));
1550 for (i = 0; VEC_iterate (tree, *vars, i, var); i++)
1551 VEC_replace (tree, *vars, i, make_ssa_name (var, NULL_TREE));
1553 var = VEC_index (tree, *vars, 0);
1555 init = force_gimple_operand (init, &stmts, written, NULL_TREE);
1556 if (stmts)
1558 mark_virtual_ops_for_renaming_list (stmts);
1559 bsi_insert_on_edge_immediate (entry, stmts);
1562 if (written)
1564 next = VEC_index (tree, *vars, 1);
1565 phi = create_phi_node (var, loop->header);
1566 SSA_NAME_DEF_STMT (var) = phi;
1567 add_phi_arg (phi, init, entry);
1568 add_phi_arg (phi, next, latch);
1570 else
1572 init = build_gimple_modify_stmt (var, init);
1573 SSA_NAME_DEF_STMT (var) = init;
1574 mark_virtual_ops_for_renaming (init);
1575 bsi_insert_on_edge_immediate (entry, init);
1580 /* Execute load motion for references in chain CHAIN. Uids of the newly
1581 created temporary variables are marked in TMP_VARS. */
1583 static void
1584 execute_load_motion (struct loop *loop, chain_p chain, bitmap tmp_vars)
1586 VEC (tree, heap) *vars;
1587 dref a;
1588 unsigned n_writes = 0, ridx, i;
1589 tree var;
1591 gcc_assert (chain->type == CT_INVARIANT);
1592 gcc_assert (!chain->combined);
1593 for (i = 0; VEC_iterate (dref, chain->refs, i, a); i++)
1594 if (!DR_IS_READ (a->ref))
1595 n_writes++;
1597 /* If there are no reads in the loop, there is nothing to do. */
1598 if (n_writes == VEC_length (dref, chain->refs))
1599 return;
1601 initialize_root_vars_lm (loop, get_chain_root (chain), n_writes > 0,
1602 &vars, chain->inits, tmp_vars);
1604 ridx = 0;
1605 for (i = 0; VEC_iterate (dref, chain->refs, i, a); i++)
1607 bool is_read = DR_IS_READ (a->ref);
1608 mark_virtual_ops_for_renaming (a->stmt);
1610 if (!DR_IS_READ (a->ref))
1612 n_writes--;
1613 if (n_writes)
1615 var = VEC_index (tree, vars, 0);
1616 var = make_ssa_name (SSA_NAME_VAR (var), NULL_TREE);
1617 VEC_replace (tree, vars, 0, var);
1619 else
1620 ridx = 1;
1623 replace_ref_with (a->stmt, VEC_index (tree, vars, ridx),
1624 !is_read, !is_read);
1627 VEC_free (tree, heap, vars);
1630 /* Returns the single statement in which NAME is used, excepting
1631    the looparound phi nodes contained in one of the chains.  If there is no
1632    such statement, or if there are several, NULL_TREE is returned.  */
1634 static tree
1635 single_nonlooparound_use (tree name)
1637 use_operand_p use;
1638 imm_use_iterator it;
1639 tree stmt, ret = NULL_TREE;
1641 FOR_EACH_IMM_USE_FAST (use, it, name)
1643 stmt = USE_STMT (use);
1645 if (TREE_CODE (stmt) == PHI_NODE)
1647 /* Ignore uses in looparound phi nodes. Uses in other phi nodes
1648 could not be processed anyway, so just fail for them. */
1649 if (bitmap_bit_p (looparound_phis,
1650 SSA_NAME_VERSION (PHI_RESULT (stmt))))
1651 continue;
1653 return NULL_TREE;
1655 else if (ret != NULL_TREE)
1656 return NULL_TREE;
1657 else
1658 ret = stmt;
1661 return ret;
1664 /* Remove statement STMT, as well as the chain of assignments in which it is
1665 used. */
1667 static void
1668 remove_stmt (tree stmt)
1670 tree next, name;
1672 if (TREE_CODE (stmt) == PHI_NODE)
1674 name = PHI_RESULT (stmt);
1675 next = single_nonlooparound_use (name);
1676 remove_phi_node (stmt, NULL_TREE, true);
1678 if (!next
1679 || TREE_CODE (next) != GIMPLE_MODIFY_STMT
1680 || GIMPLE_STMT_OPERAND (next, 1) != name)
1681 return;
1683 stmt = next;
1686 while (1)
1688 block_stmt_iterator bsi;
1690 bsi = bsi_for_stmt (stmt);
1692 name = GIMPLE_STMT_OPERAND (stmt, 0);
1693 gcc_assert (TREE_CODE (name) == SSA_NAME);
1695 next = single_nonlooparound_use (name);
1697 mark_virtual_ops_for_renaming (stmt);
1698 bsi_remove (&bsi, true);
1700 if (!next
1701 || TREE_CODE (next) != GIMPLE_MODIFY_STMT
1702 || GIMPLE_STMT_OPERAND (next, 1) != name)
1703 return;
1705 stmt = next;
1709 /* Perform the predictive commoning optimization for a chain CHAIN.
1710 Uids of the newly created temporary variables are marked in TMP_VARS.*/
1712 static void
1713 execute_pred_commoning_chain (struct loop *loop, chain_p chain,
1714 bitmap tmp_vars)
1716 unsigned i;
1717 dref a, root;
1718 tree var;
1720 if (chain->combined)
1722 /* For combined chains, just remove the statements that are used to
1723 compute the values of the expression (except for the root one). */
1724 for (i = 1; VEC_iterate (dref, chain->refs, i, a); i++)
1725 remove_stmt (a->stmt);
1727 else
1729 /* For non-combined chains, set up the variables that hold its value,
1730 and replace the uses of the original references by these
1731 variables. */
1732 root = get_chain_root (chain);
1733 mark_virtual_ops_for_renaming (root->stmt);
1735 initialize_root (loop, chain, tmp_vars);
1736 for (i = 1; VEC_iterate (dref, chain->refs, i, a); i++)
1738 mark_virtual_ops_for_renaming (a->stmt);
1739 var = VEC_index (tree, chain->vars, chain->length - a->distance);
1740 replace_ref_with (a->stmt, var, false, false);
1745 /* Determines the unroll factor necessary to remove as many temporary variable
1746 copies as possible. CHAINS is the list of chains that will be
1747 optimized. */
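/* A hedged illustration of the computation below: the loop accumulates the
   least common multiple of the per-chain factors, so chains that need 2 and
   3 temporary variables give an unroll factor of lcm (2, 3) = 6, provided
   that this does not exceed PARAM_MAX_UNROLL_TIMES.  */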
1749 static unsigned
1750 determine_unroll_factor (VEC (chain_p, heap) *chains)
1752 chain_p chain;
1753 unsigned factor = 1, af, nfactor, i;
1754 unsigned max = PARAM_VALUE (PARAM_MAX_UNROLL_TIMES);
1756 for (i = 0; VEC_iterate (chain_p, chains, i, chain); i++)
1758 if (chain->type == CT_INVARIANT || chain->combined)
1759 continue;
1761 /* The best unroll factor for this chain is equal to the number of
1762 temporary variables that we create for it. */
1763 af = chain->length;
1764 if (chain->has_max_use_after)
1765 af++;
1767 nfactor = factor * af / gcd (factor, af);
1768 if (nfactor <= max)
1769 factor = nfactor;
1772 return factor;
1775 /* Perform the predictive commoning optimization for CHAINS.
1776 Uids of the newly created temporary variables are marked in TMP_VARS. */
1778 static void
1779 execute_pred_commoning (struct loop *loop, VEC (chain_p, heap) *chains,
1780 bitmap tmp_vars)
1782 chain_p chain;
1783 unsigned i;
1785 for (i = 0; VEC_iterate (chain_p, chains, i, chain); i++)
1787 if (chain->type == CT_INVARIANT)
1788 execute_load_motion (loop, chain, tmp_vars);
1789 else
1790 execute_pred_commoning_chain (loop, chain, tmp_vars);
1793 update_ssa (TODO_update_ssa_only_virtuals);
1796 /* For each reference in CHAINS, if its defining statement is a phi node,
1797    replace the recorded statement with the ssa name defined by that phi.  */
1799 static void
1800 replace_phis_by_defined_names (VEC (chain_p, heap) *chains)
1802 chain_p chain;
1803 dref a;
1804 unsigned i, j;
1806 for (i = 0; VEC_iterate (chain_p, chains, i, chain); i++)
1807 for (j = 0; VEC_iterate (dref, chain->refs, j, a); j++)
1809 gcc_assert (TREE_CODE (a->stmt) != SSA_NAME);
1810 if (TREE_CODE (a->stmt) == PHI_NODE)
1811 a->stmt = PHI_RESULT (a->stmt);
1815 /* For each reference in CHAINS, if the recorded statement was replaced by
1816    an ssa name, set it back to the phi node that defines that name.  */
1818 static void
1819 replace_names_by_phis (VEC (chain_p, heap) *chains)
1821 chain_p chain;
1822 dref a;
1823 unsigned i, j;
1825 for (i = 0; VEC_iterate (chain_p, chains, i, chain); i++)
1826 for (j = 0; VEC_iterate (dref, chain->refs, j, a); j++)
1827 if (TREE_CODE (a->stmt) == SSA_NAME)
1829 a->stmt = SSA_NAME_DEF_STMT (a->stmt);
1830 gcc_assert (TREE_CODE (a->stmt) == PHI_NODE);
1834 /* Wrapper over execute_pred_commoning, to pass it as a callback
1835 to tree_transform_and_unroll_loop. */
1837 struct epcc_data
1839 VEC (chain_p, heap) *chains;
1840 bitmap tmp_vars;
1843 static void
1844 execute_pred_commoning_cbck (struct loop *loop, void *data)
1846 struct epcc_data *dta = data;
1848 /* Restore phi nodes that were replaced by ssa names before
1849 tree_transform_and_unroll_loop (see detailed description in
1850 tree_predictive_commoning_loop). */
1851 replace_names_by_phis (dta->chains);
1852 execute_pred_commoning (loop, dta->chains, dta->tmp_vars);
1855 /* Returns true if we can and should unroll LOOP FACTOR times. Number
1856 of iterations of the loop is returned in NITER. */
1858 static bool
1859 should_unroll_loop_p (struct loop *loop, unsigned factor,
1860 struct tree_niter_desc *niter)
1862 edge exit;
1864 if (factor == 1)
1865 return false;
1867 /* Check whether unrolling is possible. We only want to unroll loops
1868      for which we are able to determine the number of iterations.  We also
1869 want to split the extra iterations of the loop from its end,
1870 therefore we require that the loop has precisely one
1871 exit. */
1873 exit = single_dom_exit (loop);
1874 if (!exit)
1875 return false;
1877 if (!number_of_iterations_exit (loop, exit, niter, false))
1878 return false;
1880 /* And of course, we must be able to duplicate the loop. */
1881 if (!can_duplicate_loop_p (loop))
1882 return false;
1884 /* The final loop should be small enough. */
1885 if (tree_num_loop_insns (loop, &eni_size_weights) * factor
1886 > (unsigned) PARAM_VALUE (PARAM_MAX_UNROLLED_INSNS))
1887 return false;
1889 return true;
1892 /* Base NAME and all the names in the chain of phi nodes that use it
1893 on variable VAR. The phi nodes are recognized by being in the copies of
1894 the header of the LOOP. */
1896 static void
1897 base_names_in_chain_on (struct loop *loop, tree name, tree var)
1899 tree stmt, phi;
1900 imm_use_iterator iter;
1901 edge e;
1903 SSA_NAME_VAR (name) = var;
1905 while (1)
1907 phi = NULL;
1908 FOR_EACH_IMM_USE_STMT (stmt, iter, name)
1910 if (TREE_CODE (stmt) == PHI_NODE
1911 && flow_bb_inside_loop_p (loop, bb_for_stmt (stmt)))
1913 phi = stmt;
1914 BREAK_FROM_IMM_USE_STMT (iter);
1917 if (!phi)
1918 return;
1920 if (bb_for_stmt (phi) == loop->header)
1921 e = loop_latch_edge (loop);
1922 else
1923 e = single_pred_edge (bb_for_stmt (stmt));
1925 name = PHI_RESULT (phi);
1926 SSA_NAME_VAR (name) = var;
1930 /* Given an unrolled LOOP after predictive commoning, remove the
1931 register copies arising from phi nodes by changing the base
1932 variables of SSA names. TMP_VARS is the set of the temporary variables
1933    for which we want to perform this.  */
1935 static void
1936 eliminate_temp_copies (struct loop *loop, bitmap tmp_vars)
1938 edge e;
1939 tree phi, name, use, var, stmt;
1941 e = loop_latch_edge (loop);
1942 for (phi = phi_nodes (loop->header); phi; phi = PHI_CHAIN (phi))
1944 name = PHI_RESULT (phi);
1945 var = SSA_NAME_VAR (name);
1946 if (!bitmap_bit_p (tmp_vars, DECL_UID (var)))
1947 continue;
1948 use = PHI_ARG_DEF_FROM_EDGE (phi, e);
1949 gcc_assert (TREE_CODE (use) == SSA_NAME);
1951 /* Base all the ssa names in the ud and du chain of NAME on VAR. */
1952 stmt = SSA_NAME_DEF_STMT (use);
1953 while (TREE_CODE (stmt) == PHI_NODE
1954 /* In case we could not unroll the loop enough to eliminate
1955 all copies, we may reach the loop header before the defining
1956 statement (in that case, some register copies will be present
1957 in loop latch in the final code, corresponding to the newly
1958 created looparound phi nodes). */
1959 && bb_for_stmt (stmt) != loop->header)
1961 gcc_assert (single_pred_p (bb_for_stmt (stmt)));
1962 use = PHI_ARG_DEF (stmt, 0);
1963 stmt = SSA_NAME_DEF_STMT (use);
1966 base_names_in_chain_on (loop, use, var);
1970 /* Returns true if CHAIN is suitable to be combined. */
1972 static bool
1973 chain_can_be_combined_p (chain_p chain)
1975 return (!chain->combined
1976 && (chain->type == CT_LOAD || chain->type == CT_COMBINATION));
1979 /* Returns the modify statement that uses NAME. Skips over assignment
1980    statements; NAME is replaced with the actual name used in the returned
1981 statement. */
1983 static tree
1984 find_use_stmt (tree *name)
1986 tree stmt, rhs, lhs;
1988 /* Skip over assignments. */
1989 while (1)
1991 stmt = single_nonlooparound_use (*name);
1992 if (!stmt)
1993 return NULL_TREE;
1995 if (TREE_CODE (stmt) != GIMPLE_MODIFY_STMT)
1996 return NULL_TREE;
1998 lhs = GIMPLE_STMT_OPERAND (stmt, 0);
1999 if (TREE_CODE (lhs) != SSA_NAME)
2000 return NULL_TREE;
2002 rhs = GIMPLE_STMT_OPERAND (stmt, 1);
2003 if (rhs != *name)
2004 break;
2006 *name = lhs;
2009 if (!EXPR_P (rhs)
2010 || REFERENCE_CLASS_P (rhs)
2011 || TREE_CODE_LENGTH (TREE_CODE (rhs)) != 2)
2012 return NULL_TREE;
2014 return stmt;
2017 /* Returns true if we may perform reassociation for operation CODE in TYPE. */
2019 static bool
2020 may_reassociate_p (tree type, enum tree_code code)
2022 if (FLOAT_TYPE_P (type)
2023 && !flag_unsafe_math_optimizations)
2024 return false;
2026 return (commutative_tree_code (code)
2027 && associative_tree_code (code));
2030 /* If the operation used in STMT is associative and commutative, goes through the
2031    tree of the same operations and returns its root.  The distance to the root
2032 is stored in DISTANCE. */
2034 static tree
2035 find_associative_operation_root (tree stmt, unsigned *distance)
2037 tree rhs = GIMPLE_STMT_OPERAND (stmt, 1), lhs, next;
2038 enum tree_code code = TREE_CODE (rhs);
2039 unsigned dist = 0;
2041 if (!may_reassociate_p (TREE_TYPE (rhs), code))
2042 return NULL_TREE;
2044 while (1)
2046 lhs = GIMPLE_STMT_OPERAND (stmt, 0);
2047 gcc_assert (TREE_CODE (lhs) == SSA_NAME);
2049 next = find_use_stmt (&lhs);
2050 if (!next)
2051 break;
2053 rhs = GIMPLE_STMT_OPERAND (next, 1);
2054 if (TREE_CODE (rhs) != code)
2055 break;
2057 stmt = next;
2058 dist++;
2061 if (distance)
2062 *distance = dist;
2063 return stmt;
2066 /* Returns the common statement in which NAME1 and NAME2 have a use.  If there
2067 is no such statement, returns NULL_TREE. In case the operation used on
2068 NAME1 and NAME2 is associative and commutative, returns the root of the
2069 tree formed by this operation instead of the statement that uses NAME1 or
2070 NAME2. */
2072 static tree
2073 find_common_use_stmt (tree *name1, tree *name2)
2075 tree stmt1, stmt2;
2077 stmt1 = find_use_stmt (name1);
2078 if (!stmt1)
2079 return NULL_TREE;
2081 stmt2 = find_use_stmt (name2);
2082 if (!stmt2)
2083 return NULL_TREE;
2085 if (stmt1 == stmt2)
2086 return stmt1;
2088 stmt1 = find_associative_operation_root (stmt1, NULL);
2089 if (!stmt1)
2090 return NULL_TREE;
2091 stmt2 = find_associative_operation_root (stmt2, NULL);
2092 if (!stmt2)
2093 return NULL_TREE;
2095 return (stmt1 == stmt2 ? stmt1 : NULL_TREE);
2098 /* Checks whether R1 and R2 are combined using CODE, with the result of type
2099 RSLT_TYPE, in the order R1 CODE R2 if SWAP is false and R2 CODE R1 if it
2100 is true. If CODE is ERROR_MARK, these values are set from the combination instead. */
2102 static bool
2103 combinable_refs_p (dref r1, dref r2,
2104 enum tree_code *code, bool *swap, tree *rslt_type)
2106 enum tree_code acode;
2107 bool aswap;
2108 tree atype;
2109 tree name1, name2, stmt, rhs;
2111 name1 = name_for_ref (r1);
2112 name2 = name_for_ref (r2);
2113 gcc_assert (name1 != NULL_TREE && name2 != NULL_TREE);
2115 stmt = find_common_use_stmt (&name1, &name2);
2117 if (!stmt)
2118 return false;
2120 rhs = GIMPLE_STMT_OPERAND (stmt, 1);
2121 acode = TREE_CODE (rhs);
2122 aswap = (!commutative_tree_code (acode)
2123 && TREE_OPERAND (rhs, 0) != name1);
2124 atype = TREE_TYPE (rhs);
2126 if (*code == ERROR_MARK)
2128 *code = acode;
2129 *swap = aswap;
2130 *rslt_type = atype;
2131 return true;
2134 return (*code == acode
2135 && *swap == aswap
2136 && *rslt_type == atype);
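/* Illustration (made-up SSA names): if the value loaded for R1 is x_1, the
   value loaded for R2 is y_2, and their only use is

     z_3 = y_2 - x_1;

   then the refs are combinable with *CODE = MINUS_EXPR, *RSLT_TYPE the type
   of z_3, and *SWAP = true, because the non-commutative operation uses the
   operands in the order R2 CODE R1.  */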
2139 /* Remove OP from the operation on rhs of STMT, and replace STMT with
2140 an assignment of the remaining operand. */
2142 static void
2143 remove_name_from_operation (tree stmt, tree op)
2145 tree *rhs;
2147 gcc_assert (TREE_CODE (stmt) == GIMPLE_MODIFY_STMT);
2149 rhs = &GIMPLE_STMT_OPERAND (stmt, 1);
2150 if (TREE_OPERAND (*rhs, 0) == op)
2151 *rhs = TREE_OPERAND (*rhs, 1);
2152 else if (TREE_OPERAND (*rhs, 1) == op)
2153 *rhs = TREE_OPERAND (*rhs, 0);
2154 else
2155 gcc_unreachable ();
2156 update_stmt (stmt);
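/* Illustration (made-up SSA names): for the statement x_1 = a_2 + b_3,
   removing a_2 turns it into the plain assignment x_1 = b_3.  */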
2159 /* Reassociates the expression in which NAME1 and NAME2 are used so that they
2160 are combined in a single statement, and returns that statement. */
2162 static tree
2163 reassociate_to_the_same_stmt (tree name1, tree name2)
2165 tree stmt1, stmt2, root1, root2, r1, r2, s1, s2;
2166 tree new_stmt, tmp_stmt, new_name, tmp_name, var;
2167 unsigned dist1, dist2;
2168 enum tree_code code;
2169 tree type = TREE_TYPE (name1);
2170 block_stmt_iterator bsi;
2172 stmt1 = find_use_stmt (&name1);
2173 stmt2 = find_use_stmt (&name2);
2174 root1 = find_associative_operation_root (stmt1, &dist1);
2175 root2 = find_associative_operation_root (stmt2, &dist2);
2176 code = TREE_CODE (GIMPLE_STMT_OPERAND (stmt1, 1));
2178 gcc_assert (root1 && root2 && root1 == root2
2179 && code == TREE_CODE (GIMPLE_STMT_OPERAND (stmt2, 1)));
2181 /* Find the root of the nearest expression in which both NAME1 and NAME2
2182 are used. */
2183 r1 = name1;
2184 s1 = stmt1;
2185 r2 = name2;
2186 s2 = stmt2;
2188 while (dist1 > dist2)
2190 s1 = find_use_stmt (&r1);
2191 r1 = GIMPLE_STMT_OPERAND (s1, 0);
2192 dist1--;
2194 while (dist2 > dist1)
2196 s2 = find_use_stmt (&r2);
2197 r2 = GIMPLE_STMT_OPERAND (s2, 0);
2198 dist2--;
2201 while (s1 != s2)
2203 s1 = find_use_stmt (&r1);
2204 r1 = GIMPLE_STMT_OPERAND (s1, 0);
2205 s2 = find_use_stmt (&r2);
2206 r2 = GIMPLE_STMT_OPERAND (s2, 0);
2209 /* Remove NAME1 and NAME2 from the statements in which they are currently
2210 used. */
2211 remove_name_from_operation (stmt1, name1);
2212 remove_name_from_operation (stmt2, name2);
2214 /* Insert the new statement combining NAME1 and NAME2 before S1, and
2215 combine it with the rhs of S1. */
2216 var = create_tmp_var (type, "predreastmp");
2217 add_referenced_var (var);
2218 new_name = make_ssa_name (var, NULL_TREE);
2219 new_stmt = build_gimple_modify_stmt (new_name,
2220 fold_build2 (code, type, name1, name2));
2221 SSA_NAME_DEF_STMT (new_name) = new_stmt;
2223 var = create_tmp_var (type, "predreastmp");
2224 add_referenced_var (var);
2225 tmp_name = make_ssa_name (var, NULL_TREE);
2226 tmp_stmt = build_gimple_modify_stmt (tmp_name,
2227 GIMPLE_STMT_OPERAND (s1, 1));
2228 SSA_NAME_DEF_STMT (tmp_name) = tmp_stmt;
2230 GIMPLE_STMT_OPERAND (s1, 1) = fold_build2 (code, type, new_name, tmp_name);
2231 update_stmt (s1);
2233 bsi = bsi_for_stmt (s1);
2234 bsi_insert_before (&bsi, new_stmt, BSI_SAME_STMT);
2235 bsi_insert_before (&bsi, tmp_stmt, BSI_SAME_STMT);
2237 return new_stmt;
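/* Illustration (made-up SSA names and temporaries): suppose NAME1 and NAME2
   occur in separate statements of one associative expression:

     x_1 = NAME1 + a_2;
     y_3 = NAME2 + b_4;
     z_5 = x_1 + y_3;

   The function removes NAME1 and NAME2 from the first two statements and
   rebuilds the expression before z_5's statement, roughly as

     x_1 = a_2;
     y_3 = b_4;
     predreastmp.6 = NAME1 + NAME2;
     predreastmp.7 = x_1 + y_3;
     z_5 = predreastmp.6 + predreastmp.7;

   and returns the statement defining predreastmp.6, in which NAME1 and
   NAME2 are now used together.  */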
2240 /* Returns the statement that combines references R1 and R2. If R1 and R2
2241 are not used in the same statement but are combined by an associative and
2242 commutative operation within the same expression, the expression is first
2243 reassociated so that they are used in the same statement. */
2245 static tree
2246 stmt_combining_refs (dref r1, dref r2)
2248 tree stmt1, stmt2;
2249 tree name1 = name_for_ref (r1);
2250 tree name2 = name_for_ref (r2);
2252 stmt1 = find_use_stmt (&name1);
2253 stmt2 = find_use_stmt (&name2);
2254 if (stmt1 == stmt2)
2255 return stmt1;
2257 return reassociate_to_the_same_stmt (name1, name2);
2260 /* Tries to combine chains CH1 and CH2. If this succeeds, the description
2261 of the new chain is returned; otherwise NULL is returned. */
2263 static chain_p
2264 combine_chains (chain_p ch1, chain_p ch2)
2266 dref r1, r2, nw;
2267 enum tree_code op = ERROR_MARK;
2268 bool swap = false;
2269 chain_p new_chain;
2270 unsigned i;
2271 tree root_stmt;
2272 tree rslt_type = NULL_TREE;
2274 if (ch1 == ch2)
2275 return NULL;
2276 if (ch1->length != ch2->length)
2277 return NULL;
2279 if (VEC_length (dref, ch1->refs) != VEC_length (dref, ch2->refs))
2280 return NULL;
2282 for (i = 0; (VEC_iterate (dref, ch1->refs, i, r1)
2283 && VEC_iterate (dref, ch2->refs, i, r2)); i++)
2285 if (r1->distance != r2->distance)
2286 return NULL;
2288 if (!combinable_refs_p (r1, r2, &op, &swap, &rslt_type))
2289 return NULL;
2292 if (swap)
2294 chain_p tmp = ch1;
2295 ch1 = ch2;
2296 ch2 = tmp;
2299 new_chain = XCNEW (struct chain);
2300 new_chain->type = CT_COMBINATION;
2301 new_chain->operator = op;
2302 new_chain->ch1 = ch1;
2303 new_chain->ch2 = ch2;
2304 new_chain->rslt_type = rslt_type;
2305 new_chain->length = ch1->length;
2307 for (i = 0; (VEC_iterate (dref, ch1->refs, i, r1)
2308 && VEC_iterate (dref, ch2->refs, i, r2)); i++)
2310 nw = XCNEW (struct dref);
2311 nw->stmt = stmt_combining_refs (r1, r2);
2312 nw->distance = r1->distance;
2314 VEC_safe_push (dref, heap, new_chain->refs, nw);
2317 new_chain->has_max_use_after = false;
2318 root_stmt = get_chain_root (new_chain)->stmt;
2319 for (i = 1; VEC_iterate (dref, new_chain->refs, i, nw); i++)
2321 if (nw->distance == new_chain->length
2322 && !stmt_dominates_stmt_p (nw->stmt, root_stmt))
2324 new_chain->has_max_use_after = true;
2325 break;
2329 ch1->combined = true;
2330 ch2->combined = true;
2331 return new_chain;
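/* Illustration (not from the original sources): in a loop computing

     c[i] = (a[i] + b[i]) + (a[i+1] + b[i+1]);

   the load chain for a[i], a[i+1] and the load chain for b[i], b[i+1] have
   equal length and matching distances, and each pair of references is
   combined by a PLUS_EXPR, so the two chains can be merged into a single
   CT_COMBINATION chain whose elements stand for a[i] + b[i] and
   a[i+1] + b[i+1]; only these combined values then need to be carried
   between iterations.  */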
2334 /* Try to combine the CHAINS. */
2336 static void
2337 try_combine_chains (VEC (chain_p, heap) **chains)
2339 unsigned i, j;
2340 chain_p ch1, ch2, cch;
2341 VEC (chain_p, heap) *worklist = NULL;
2343 for (i = 0; VEC_iterate (chain_p, *chains, i, ch1); i++)
2344 if (chain_can_be_combined_p (ch1))
2345 VEC_safe_push (chain_p, heap, worklist, ch1);
2347 while (!VEC_empty (chain_p, worklist))
2349 ch1 = VEC_pop (chain_p, worklist);
2350 if (!chain_can_be_combined_p (ch1))
2351 continue;
2353 for (j = 0; VEC_iterate (chain_p, *chains, j, ch2); j++)
2355 if (!chain_can_be_combined_p (ch2))
2356 continue;
2358 cch = combine_chains (ch1, ch2);
2359 if (cch)
2361 VEC_safe_push (chain_p, heap, worklist, cch);
2362 VEC_safe_push (chain_p, heap, *chains, cch);
2363 break;
2369 /* Sets alias information for REF based on data reference DR,
2370 if necessary. */
2372 static void
2373 set_alias_info (tree ref, struct data_reference *dr)
2375 tree var;
2376 tree tag = DR_SYMBOL_TAG (dr);
2378 gcc_assert (tag != NULL_TREE);
2380 ref = get_base_address (ref);
2381 if (!ref || !INDIRECT_REF_P (ref))
2382 return;
2384 var = SSA_NAME_VAR (TREE_OPERAND (ref, 0));
2385 if (var_ann (var)->symbol_mem_tag)
2386 return;
2388 if (!MTAG_P (tag))
2389 new_type_alias (var, tag, ref);
2390 else
2391 var_ann (var)->symbol_mem_tag = tag;
2394 /* Prepares initializers for CHAIN in LOOP. Returns false if this is
2395 impossible because one of the initializers may trap, true otherwise. */
2397 static bool
2398 prepare_initializers_chain (struct loop *loop, chain_p chain)
2400 unsigned i, n = (chain->type == CT_INVARIANT) ? 1 : chain->length;
2401 struct data_reference *dr = get_chain_root (chain)->ref;
2402 tree init, stmts;
2403 dref laref;
2404 edge entry = loop_preheader_edge (loop);
2406 /* Find the initializers for the variables, and check that they cannot
2407 trap. */
2408 chain->inits = VEC_alloc (tree, heap, n);
2409 for (i = 0; i < n; i++)
2410 VEC_quick_push (tree, chain->inits, NULL_TREE);
2412 /* If we have replaced some looparound phi nodes, use their initializers
2413 instead of creating our own. */
2414 for (i = 0; VEC_iterate (dref, chain->refs, i, laref); i++)
2416 if (TREE_CODE (laref->stmt) != PHI_NODE)
2417 continue;
2419 gcc_assert (laref->distance > 0);
2420 VEC_replace (tree, chain->inits, n - laref->distance,
2421 PHI_ARG_DEF_FROM_EDGE (laref->stmt, entry));
2424 for (i = 0; i < n; i++)
2426 if (VEC_index (tree, chain->inits, i) != NULL_TREE)
2427 continue;
2429 init = ref_at_iteration (loop, DR_REF (dr), (int) i - n);
2430 if (!init)
2431 return false;
2433 if (!chain->all_always_accessed && tree_could_trap_p (init))
2434 return false;
2436 init = force_gimple_operand (init, &stmts, false, NULL_TREE);
2437 if (stmts)
2439 mark_virtual_ops_for_renaming_list (stmts);
2440 bsi_insert_on_edge_immediate (entry, stmts);
2442 set_alias_info (init, dr);
2444 VEC_replace (tree, chain->inits, i, init);
2447 return true;
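/* Illustration (not from the original sources): for a chain of length 2
   over the references a[i], a[i+1], a[i+2] in a loop running from i = 0,
   the two initializers are a[0] and a[1]; the corresponding loads are
   emitted on the loop preheader edge, unless a looparound phi node already
   provides the value.  */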
2450 /* Prepare initializers for CHAINS in LOOP, and free chains that cannot
2451 be used because the initializers might trap. */
2453 static void
2454 prepare_initializers (struct loop *loop, VEC (chain_p, heap) *chains)
2456 chain_p chain;
2457 unsigned i;
2459 for (i = 0; i < VEC_length (chain_p, chains); )
2461 chain = VEC_index (chain_p, chains, i);
2462 if (prepare_initializers_chain (loop, chain))
2463 i++;
2464 else
2466 release_chain (chain);
2467 VEC_unordered_remove (chain_p, chains, i);
2472 /* Performs predictive commoning for LOOP. Returns true if LOOP was
2473 unrolled. */
2475 static bool
2476 tree_predictive_commoning_loop (struct loop *loop)
2478 VEC (data_reference_p, heap) *datarefs;
2479 VEC (ddr_p, heap) *dependences;
2480 struct component *components;
2481 VEC (chain_p, heap) *chains = NULL;
2482 unsigned unroll_factor;
2483 struct tree_niter_desc desc;
2484 bool unroll = false;
2485 edge exit;
2486 bitmap tmp_vars;
2488 if (dump_file && (dump_flags & TDF_DETAILS))
2489 fprintf (dump_file, "Processing loop %d\n", loop->num);
2491 /* Find the data references and split them into components according to their
2492 dependence relations. */
2493 datarefs = VEC_alloc (data_reference_p, heap, 10);
2494 dependences = VEC_alloc (ddr_p, heap, 10);
2495 compute_data_dependences_for_loop (loop, true, &datarefs, &dependences);
2496 if (dump_file && (dump_flags & TDF_DETAILS))
2497 dump_data_dependence_relations (dump_file, dependences);
2499 components = split_data_refs_to_components (loop, datarefs, dependences);
2500 free_dependence_relations (dependences);
2501 if (!components)
2503 free_data_refs (datarefs);
2504 return false;
2507 if (dump_file && (dump_flags & TDF_DETAILS))
2509 fprintf (dump_file, "Initial state:\n\n");
2510 dump_components (dump_file, components);
2513 /* Find the suitable components and split them into chains. */
2514 components = filter_suitable_components (loop, components);
2516 tmp_vars = BITMAP_ALLOC (NULL);
2517 looparound_phis = BITMAP_ALLOC (NULL);
2518 determine_roots (loop, components, &chains);
2519 release_components (components);
2521 if (!chains)
2523 if (dump_file && (dump_flags & TDF_DETAILS))
2524 fprintf (dump_file,
2525 "Predictive commoning failed: no suitable chains\n");
2526 goto end;
2528 prepare_initializers (loop, chains);
2530 /* Try to combine the chains that are always used together. */
2531 try_combine_chains (&chains);
2533 if (dump_file && (dump_flags & TDF_DETAILS))
2535 fprintf (dump_file, "Before commoning:\n\n");
2536 dump_chains (dump_file, chains);
2539 /* Determine the unroll factor, and if the loop should be unrolled, ensure
2540 that its number of iterations is divisible by the factor. */
2541 unroll_factor = determine_unroll_factor (chains);
2542 scev_reset ();
2543 unroll = should_unroll_loop_p (loop, unroll_factor, &desc);
2544 exit = single_dom_exit (loop);
2546 /* Execute the predictive commoning transformations, and possibly unroll the
2547 loop. */
2548 if (unroll)
2550 struct epcc_data dta;
2552 if (dump_file && (dump_flags & TDF_DETAILS))
2553 fprintf (dump_file, "Unrolling %u times.\n", unroll_factor);
2555 dta.chains = chains;
2556 dta.tmp_vars = tmp_vars;
2558 update_ssa (TODO_update_ssa_only_virtuals);
2560 /* CFG manipulations performed in tree_transform_and_unroll_loop before
2561 execute_pred_commoning_cbck is called may cause phi nodes to be
2562 reallocated, which is a problem since CHAINS may point to these
2563 statements. To fix this, we store the ssa names defined by the
2564 phi nodes here instead of the phi nodes themselves, and restore
2565 the phi nodes in execute_pred_commoning_cbck. A bit hacky. */
2566 replace_phis_by_defined_names (chains);
2568 tree_transform_and_unroll_loop (loop, unroll_factor, exit, &desc,
2569 execute_pred_commoning_cbck, &dta);
2570 eliminate_temp_copies (loop, tmp_vars);
2572 else
2574 if (dump_file && (dump_flags & TDF_DETAILS))
2575 fprintf (dump_file,
2576 "Executing predictive commoning without unrolling.\n");
2577 execute_pred_commoning (loop, chains, tmp_vars);
2580 end: ;
2581 release_chains (chains);
2582 free_data_refs (datarefs);
2583 BITMAP_FREE (tmp_vars);
2584 BITMAP_FREE (looparound_phis);
2586 free_affine_expand_cache (&name_expansions);
2588 return unroll;
2591 /* Runs predictive commoning. */
2593 unsigned
2594 tree_predictive_commoning (void)
2596 bool unrolled = false;
2597 struct loop *loop;
2598 loop_iterator li;
2599 unsigned ret = 0;
2601 initialize_original_copy_tables ();
2602 FOR_EACH_LOOP (li, loop, LI_ONLY_INNERMOST)
2604 unrolled |= tree_predictive_commoning_loop (loop);
2607 if (unrolled)
2609 scev_reset ();
2610 ret = TODO_cleanup_cfg;
2612 free_original_copy_tables ();
2614 return ret;