1 /* Conditional constant propagation pass for the GNU compiler.
2 Copyright (C) 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008
3 Free Software Foundation, Inc.
4 Adapted from original RTL SSA-CCP by Daniel Berlin <dberlin@dberlin.org>
5 Adapted to GIMPLE trees by Diego Novillo <dnovillo@redhat.com>
7 This file is part of GCC.
9 GCC is free software; you can redistribute it and/or modify it
10 under the terms of the GNU General Public License as published by the
11 Free Software Foundation; either version 3, or (at your option) any
12 later version.
14 GCC is distributed in the hope that it will be useful, but WITHOUT
15 ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
16 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
17 for more details.
19 You should have received a copy of the GNU General Public License
20 along with GCC; see the file COPYING3. If not see
21 <http://www.gnu.org/licenses/>. */
23 /* Conditional constant propagation (CCP) is based on the SSA
24 propagation engine (tree-ssa-propagate.c). Constant assignments of
25 the form VAR = CST are propagated from the assignments into uses of
26 VAR, which in turn may generate new constants. The simulation uses
27 a four level lattice to keep track of constant values associated
28 with SSA names. Given an SSA name V_i, it may take one of the
29 following values:
31 UNINITIALIZED -> the initial state of the value. This value
32 is replaced with a correct initial value
33 the first time the value is used, so the
34 rest of the pass does not need to care about
35 it. Using this value simplifies initialization
36 of the pass, and prevents us from needlessly
37 scanning statements that are never reached.
39 UNDEFINED -> V_i is a local variable whose definition
40 has not been processed yet. Therefore we
41 don't yet know if its value is a constant
42 or not.
44 CONSTANT -> V_i has been found to hold a constant
45 value C.
47 VARYING -> V_i cannot take a constant value, or if it
48 does, it is not possible to determine it
49 at compile time.
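   As a small, purely illustrative example (the SSA names below are
   made up), consider:

	x_1 = 4;
	y_2 = x_1 + 6;
	if (y_2 > 20)
	  z_3 = 1;

   Simulating the first two statements moves x_1 and y_2 from
   UNDEFINED to CONSTANT (4 and 10 respectively).  The predicate
   'y_2 > 20' then folds to false, so the edge leading to 'z_3 = 1'
   is marked not executable and that statement is never simulated.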
51 The core of SSA-CCP is in ccp_visit_stmt and ccp_visit_phi_node:
53 1- In ccp_visit_stmt, we are interested in assignments whose RHS
54 evaluates into a constant and conditional jumps whose predicate
55 evaluates into a boolean true or false. When an assignment of
56 the form V_i = CONST is found, V_i's lattice value is set to
57 CONSTANT and CONST is associated with it. This causes the
 58 propagation engine to add all the SSA edges coming out of the
59 assignment into the worklists, so that statements that use V_i
60 can be visited.
62 If the statement is a conditional with a constant predicate, we
63 mark the outgoing edges as executable or not executable
64 depending on the predicate's value. This is then used when
65 visiting PHI nodes to know when a PHI argument can be ignored.
68 2- In ccp_visit_phi_node, if all the PHI arguments evaluate to the
69 same constant C, then the LHS of the PHI is set to C. This
70 evaluation is known as the "meet operation". Since one of the
71 goals of this evaluation is to optimistically return constant
72 values as often as possible, it uses two main short cuts:
74 - If an argument is flowing in through a non-executable edge, it
75 is ignored. This is useful in cases like this:
77 if (PRED)
78 a_9 = 3;
79 else
80 a_10 = 100;
81 a_11 = PHI (a_9, a_10)
83 If PRED is known to always evaluate to false, then we can
84 assume that a_11 will always take its value from a_10, meaning
 85 that instead of considering it VARYING (a_9 and a_10 have
86 different values), we can consider it CONSTANT 100.
88 - If an argument has an UNDEFINED value, then it does not affect
89 the outcome of the meet operation. If a variable V_i has an
90 UNDEFINED value, it means that either its defining statement
91 hasn't been visited yet or V_i has no defining statement, in
92 which case the original symbol 'V' is being used
93 uninitialized. Since 'V' is a local variable, the compiler
94 may assume any initial value for it.
97 After propagation, every variable V_i that ends up with a lattice
98 value of CONSTANT will have the associated constant value in the
99 array CONST_VAL[i].VALUE. That is fed into substitute_and_fold for
100 final substitution and folding.
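   As a sketch of that final step (illustrative only), a fragment such
   as

	a_2 = 3;
	b_3 = a_2 + 1;
	return b_3;

   is rewritten by substitute_and_fold into

	a_2 = 3;
	b_3 = 4;
	return 4;

   with the now-dead assignments left for later passes (e.g. DCE) to
   clean up.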
103 Constant propagation in stores and loads (STORE-CCP)
104 ----------------------------------------------------
106 While CCP has all the logic to propagate constants in GIMPLE
107 registers, it is missing the ability to associate constants with
108 stores and loads (i.e., pointer dereferences, structures and
109 global/aliased variables). We don't keep loads and stores in
110 SSA, but we do build a factored use-def web for them (in the
111 virtual operands).
113 For instance, consider the following code fragment:
115 struct A a;
116 const int B = 42;
118 void foo (int i)
120 if (i > 10)
121 a.a = 42;
122 else
124 a.b = 21;
125 a.a = a.b + 21;
128 if (a.a != B)
129 never_executed ();
132 We should be able to deduce that the predicate 'a.a != B' is always
133 false. To achieve this, we associate constant values to the SSA
134 names in the VDEF operands for each store. Additionally,
135 since we also glob partial loads/stores with the base symbol, we
136 also keep track of the memory reference where the constant value
137 was stored (in the MEM_REF field of PROP_VALUE_T). For instance,
139 # a_5 = VDEF <a_4>
140 a.a = 2;
142 # VUSE <a_5>
143 x_3 = a.b;
145 In the example above, CCP will associate value '2' with 'a_5', but
146 it would be wrong to replace the load from 'a.b' with '2', because
147 '2' had been stored into a.a.
149 Note that the initial value of virtual operands is VARYING, not
150 UNDEFINED. Consider, for instance global variables:
152 int A;
154 foo (int i)
156 if (i_3 > 10)
157 A_4 = 3;
158 # A_5 = PHI (A_4, A_2);
160 # VUSE <A_5>
161 A.0_6 = A;
163 return A.0_6;
166 The value of A_2 cannot be assumed to be UNDEFINED, as it may have
167 been defined outside of foo. If we were to assume it UNDEFINED, we
168 would erroneously optimize the above into 'return 3;'.
170 Though STORE-CCP is not too expensive, it does have to do more work
171 than regular CCP, so it is only enabled at -O2. Both regular CCP
172 and STORE-CCP use the exact same algorithm. The only distinction
173 is that when doing STORE-CCP, the boolean variable DO_STORE_CCP is
174 set to true. This affects the evaluation of statements and PHI
175 nodes.
177 References:
179 Constant propagation with conditional branches,
180 Wegman and Zadeck, ACM TOPLAS 13(2):181-210.
182 Building an Optimizing Compiler,
183 Robert Morgan, Butterworth-Heinemann, 1998, Section 8.9.
185 Advanced Compiler Design and Implementation,
186 Steven Muchnick, Morgan Kaufmann, 1997, Section 12.6 */
188 #include "config.h"
189 #include "system.h"
190 #include "coretypes.h"
191 #include "tm.h"
192 #include "tree.h"
193 #include "flags.h"
194 #include "rtl.h"
195 #include "tm_p.h"
196 #include "ggc.h"
197 #include "basic-block.h"
198 #include "output.h"
199 #include "expr.h"
200 #include "function.h"
201 #include "diagnostic.h"
202 #include "timevar.h"
203 #include "tree-dump.h"
204 #include "tree-flow.h"
205 #include "tree-pass.h"
206 #include "tree-ssa-propagate.h"
207 #include "value-prof.h"
208 #include "langhooks.h"
209 #include "target.h"
210 #include "toplev.h"
213 /* Possible lattice values. */
214 typedef enum
216 UNINITIALIZED,
217 UNDEFINED,
218 CONSTANT,
219 VARYING
220 } ccp_lattice_t;
222 /* Array of propagated constant values. After propagation,
223 CONST_VAL[I].VALUE holds the constant value for SSA_NAME(I). If
224 the constant is held in an SSA name representing a memory store
225 (i.e., a VDEF), CONST_VAL[I].MEM_REF will contain the actual
226 memory reference used to store (i.e., the LHS of the assignment
227 doing the store). */
228 static prop_value_t *const_val;
230 /* Dump constant propagation value VAL to file OUTF prefixed by PREFIX. */
232 static void
233 dump_lattice_value (FILE *outf, const char *prefix, prop_value_t val)
235 switch (val.lattice_val)
237 case UNINITIALIZED:
238 fprintf (outf, "%sUNINITIALIZED", prefix);
239 break;
240 case UNDEFINED:
241 fprintf (outf, "%sUNDEFINED", prefix);
242 break;
243 case VARYING:
244 fprintf (outf, "%sVARYING", prefix);
245 break;
246 case CONSTANT:
247 fprintf (outf, "%sCONSTANT ", prefix);
248 print_generic_expr (outf, val.value, dump_flags);
249 break;
250 default:
251 gcc_unreachable ();
256 /* Print lattice value VAL to stderr. */
258 void debug_lattice_value (prop_value_t val);
260 void
261 debug_lattice_value (prop_value_t val)
263 dump_lattice_value (stderr, "", val);
264 fprintf (stderr, "\n");
269 /* If SYM is a constant variable with known value, return the value.
270 NULL_TREE is returned otherwise. */
272 tree
273 get_symbol_constant_value (tree sym)
275 if (TREE_STATIC (sym)
276 && TREE_READONLY (sym)
277 && !MTAG_P (sym))
279 tree val = DECL_INITIAL (sym);
280 if (val)
282 STRIP_USELESS_TYPE_CONVERSION (val);
283 if (is_gimple_min_invariant (val))
284 return val;
286 /* Variables declared 'const' without an initializer
287 have zero as the initializer if they may not be
288 overridden at link or run time. */
289 if (!val
290 && targetm.binds_local_p (sym)
291 && (INTEGRAL_TYPE_P (TREE_TYPE (sym))
292 || SCALAR_FLOAT_TYPE_P (TREE_TYPE (sym))))
293 return fold_convert (TREE_TYPE (sym), integer_zero_node);
296 return NULL_TREE;
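/* For illustration only (these declarations are made up): given

     static const int answer = 42;
     static const int zeroed;

   get_symbol_constant_value returns the INTEGER_CST 42 for 'answer',
   and 0 for 'zeroed', because a 'const' variable without an
   initializer that binds locally cannot acquire any other value at
   link or run time.  For a symbol that is not readonly, or whose
   definition may be overridden elsewhere, it returns NULL_TREE.  */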
299 /* Compute a default value for variable VAR and store it in the
300 CONST_VAL array. The following rules are used to get default
301 values:
303 1- Global and static variables that are declared constant are
304 considered CONSTANT.
306 2- Any other value is considered UNDEFINED. This is useful when
307 considering PHI nodes. PHI arguments that are undefined do not
308 change the constant value of the PHI node, which allows for more
309 constants to be propagated.
311 3- Variables defined by statements other than assignments and PHI
312 nodes are considered VARYING.
314 4- Initial values of variables that are not GIMPLE registers are
315 considered VARYING. */
317 static prop_value_t
318 get_default_value (tree var)
320 tree sym = SSA_NAME_VAR (var);
321 prop_value_t val = { UNINITIALIZED, NULL_TREE };
322 tree cst_val;
324 if (!is_gimple_reg (var))
326 /* Short circuit for regular CCP. We are not interested in any
327 non-register when DO_STORE_CCP is false. */
328 val.lattice_val = VARYING;
330 else if ((cst_val = get_symbol_constant_value (sym)) != NULL_TREE)
332 /* Globals and static variables declared 'const' take their
333 initial value. */
334 val.lattice_val = CONSTANT;
335 val.value = cst_val;
337 else
339 gimple stmt = SSA_NAME_DEF_STMT (var);
341 if (gimple_nop_p (stmt))
343 /* Variables defined by an empty statement are those used
344 before being initialized. If VAR is a local variable, we
345 can assume initially that it is UNDEFINED, otherwise we must
346 consider it VARYING. */
347 if (is_gimple_reg (sym) && TREE_CODE (sym) != PARM_DECL)
348 val.lattice_val = UNDEFINED;
349 else
350 val.lattice_val = VARYING;
352 else if (is_gimple_assign (stmt)
353 /* Value-returning GIMPLE_CALL statements assign to
354 a variable, and are treated similarly to GIMPLE_ASSIGN. */
355 || (is_gimple_call (stmt)
356 && gimple_call_lhs (stmt) != NULL_TREE)
357 || gimple_code (stmt) == GIMPLE_PHI)
359 /* Any other variable defined by an assignment or a PHI node
360 is considered UNDEFINED. */
361 val.lattice_val = UNDEFINED;
363 else
365 /* Otherwise, VAR will never take on a constant value. */
366 val.lattice_val = VARYING;
370 return val;
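/* Purely illustrative example: in

     int foo (int i)
     {
       int j;
       return i + j;
     }

   the SSA name for 'j' (a local read before being written) gets the
   default value UNDEFINED, the default definition of the parameter
   'i' gets VARYING, and any virtual operand gets VARYING as well.  */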
374 /* Get the constant value associated with variable VAR. */
376 static inline prop_value_t *
377 get_value (tree var)
379 prop_value_t *val;
381 if (const_val == NULL)
382 return NULL;
384 val = &const_val[SSA_NAME_VERSION (var)];
385 if (val->lattice_val == UNINITIALIZED)
386 *val = get_default_value (var);
388 return val;
391 /* Sets the value associated with VAR to VARYING. */
393 static inline void
394 set_value_varying (tree var)
396 prop_value_t *val = &const_val[SSA_NAME_VERSION (var)];
398 val->lattice_val = VARYING;
399 val->value = NULL_TREE;
402 /* For float types, modify the value of VAL to make ccp work correctly
403 for non-standard values (-0, NaN):
405 If HONOR_SIGNED_ZEROS is false, and VAL = -0, we canonicalize it to 0.
406 If HONOR_NANS is false, and VAL is NaN, we canonicalize it to UNDEFINED.
407 This is to fix the following problem (see PR 29921): Suppose we have
409 x = 0.0 * y
411 and we set value of y to NaN. This causes value of x to be set to NaN.
412 When we later determine that y is in fact VARYING, fold uses the fact
413 that HONOR_NANS is false, and we try to change the value of x to 0,
414 causing an ICE. With HONOR_NANS being false, the real appearance of
415 NaN would cause undefined behavior, though, so claiming that y (and x)
416 are UNDEFINED initially is correct. */
418 static void
419 canonicalize_float_value (prop_value_t *val)
421 enum machine_mode mode;
422 tree type;
423 REAL_VALUE_TYPE d;
425 if (val->lattice_val != CONSTANT
426 || TREE_CODE (val->value) != REAL_CST)
427 return;
429 d = TREE_REAL_CST (val->value);
430 type = TREE_TYPE (val->value);
431 mode = TYPE_MODE (type);
433 if (!HONOR_SIGNED_ZEROS (mode)
434 && REAL_VALUE_MINUS_ZERO (d))
436 val->value = build_real (type, dconst0);
437 return;
440 if (!HONOR_NANS (mode)
441 && REAL_VALUE_ISNAN (d))
443 val->lattice_val = UNDEFINED;
444 val->value = NULL;
445 return;
449 /* Set the value for variable VAR to NEW_VAL. Return true if the new
450 value is different from VAR's previous value. */
452 static bool
453 set_lattice_value (tree var, prop_value_t new_val)
455 prop_value_t *old_val = get_value (var);
457 canonicalize_float_value (&new_val);
459 /* Lattice transitions must always be monotonically increasing in
460 value. If *OLD_VAL and NEW_VAL are the same, return false to
461 inform the caller that this was a non-transition. */
463 gcc_assert (old_val->lattice_val < new_val.lattice_val
464 || (old_val->lattice_val == new_val.lattice_val
465 && ((!old_val->value && !new_val.value)
466 || operand_equal_p (old_val->value, new_val.value, 0))));
468 if (old_val->lattice_val != new_val.lattice_val)
470 if (dump_file && (dump_flags & TDF_DETAILS))
472 dump_lattice_value (dump_file, "Lattice value changed to ", new_val);
473 fprintf (dump_file, ". Adding SSA edges to worklist.\n");
476 *old_val = new_val;
478 gcc_assert (new_val.lattice_val != UNDEFINED);
479 return true;
482 return false;
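/* For example (illustrative values): UNDEFINED -> CONSTANT 4 and
   CONSTANT 4 -> VARYING are valid transitions, while CONSTANT 4 ->
   CONSTANT 5 or CONSTANT 4 -> UNDEFINED would move down (or sideways)
   in the lattice and trip the assertion above.  */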
486 /* Return the likely CCP lattice value for STMT.
488 If STMT has no operands, then return CONSTANT.
 490 Else if undefinedness of operands of STMT causes its value to be
491 undefined, then return UNDEFINED.
493 Else if any operands of STMT are constants, then return CONSTANT.
495 Else return VARYING. */
497 static ccp_lattice_t
498 likely_value (gimple stmt)
500 bool has_constant_operand, has_undefined_operand, all_undefined_operands;
501 tree use;
502 ssa_op_iter iter;
504 enum gimple_code code = gimple_code (stmt);
506 /* This function appears to be called only for assignments, calls,
507 conditionals, and switches, due to the logic in visit_stmt. */
508 gcc_assert (code == GIMPLE_ASSIGN
509 || code == GIMPLE_CALL
510 || code == GIMPLE_COND
511 || code == GIMPLE_SWITCH);
513 /* If the statement has volatile operands, it won't fold to a
514 constant value. */
515 if (gimple_has_volatile_ops (stmt))
516 return VARYING;
518 /* If we are not doing store-ccp, statements with loads
519 and/or stores will never fold into a constant. */
520 if (!ZERO_SSA_OPERANDS (stmt, SSA_OP_ALL_VIRTUALS))
521 return VARYING;
523 /* Note that only a GIMPLE_SINGLE_RHS assignment can satisfy
524 is_gimple_min_invariant, so we do not consider calls or
525 other forms of assignment. */
526 if (gimple_assign_single_p (stmt)
527 && is_gimple_min_invariant (gimple_assign_rhs1 (stmt)))
528 return CONSTANT;
530 if (code == GIMPLE_COND
531 && is_gimple_min_invariant (gimple_cond_lhs (stmt))
532 && is_gimple_min_invariant (gimple_cond_rhs (stmt)))
533 return CONSTANT;
535 if (code == GIMPLE_SWITCH
536 && is_gimple_min_invariant (gimple_switch_index (stmt)))
537 return CONSTANT;
539 /* Arrive here for more complex cases. */
541 has_constant_operand = false;
542 has_undefined_operand = false;
543 all_undefined_operands = true;
544 FOR_EACH_SSA_TREE_OPERAND (use, stmt, iter, SSA_OP_USE | SSA_OP_VUSE)
546 prop_value_t *val = get_value (use);
548 if (val->lattice_val == UNDEFINED)
549 has_undefined_operand = true;
550 else
551 all_undefined_operands = false;
553 if (val->lattice_val == CONSTANT)
554 has_constant_operand = true;
557 /* If the operation combines operands like COMPLEX_EXPR make sure to
558 not mark the result UNDEFINED if only one part of the result is
559 undefined. */
560 if (has_undefined_operand && all_undefined_operands)
561 return UNDEFINED;
562 else if (code == GIMPLE_ASSIGN && has_undefined_operand)
564 switch (gimple_assign_rhs_code (stmt))
566 /* Unary operators are handled with all_undefined_operands. */
567 case PLUS_EXPR:
568 case MINUS_EXPR:
569 case POINTER_PLUS_EXPR:
570 /* Not MIN_EXPR, MAX_EXPR. One VARYING operand may be selected.
571 Not bitwise operators, one VARYING operand may specify the
572 result completely. Not logical operators for the same reason.
573 Not COMPLEX_EXPR as one VARYING operand makes the result partly
574 not UNDEFINED. Not *DIV_EXPR, comparisons and shifts because
575 the undefined operand may be promoted. */
576 return UNDEFINED;
578 default:
 582 /* If there was an UNDEFINED operand but the result may not be UNDEFINED,
 583 fall back to VARYING even if there were CONSTANT operands. */
584 if (has_undefined_operand)
585 return VARYING;
587 if (has_constant_operand
588 /* We do not consider virtual operands here -- load from read-only
589 memory may have only VARYING virtual operands, but still be
590 constant. */
591 || ZERO_SSA_OPERANDS (stmt, SSA_OP_USE))
592 return CONSTANT;
594 return VARYING;
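/* A few illustrative classifications (SSA names made up): for
   'x_3 = y_1 + z_2' with y_1 CONSTANT 2 and z_2 VARYING the result is
   CONSTANT, so evaluate_stmt will still try to fold the statement;
   with both operands UNDEFINED the result is UNDEFINED; a statement
   with volatile operands is VARYING regardless of its operands.  */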
597 /* Returns true if STMT cannot be constant. */
599 static bool
600 surely_varying_stmt_p (gimple stmt)
602 /* If the statement has operands that we cannot handle, it cannot be
603 constant. */
604 if (gimple_has_volatile_ops (stmt))
605 return true;
607 if (!ZERO_SSA_OPERANDS (stmt, SSA_OP_ALL_VIRTUALS))
608 return true;
 610 /* If it is a call that does not return a value, or that calls a
 611 known (non-indirect) function which is not a builtin, it is varying. */
612 if (is_gimple_call (stmt))
614 tree fndecl;
615 if (!gimple_call_lhs (stmt)
616 || ((fndecl = gimple_call_fndecl (stmt)) != NULL_TREE
617 && !DECL_BUILT_IN (fndecl)))
618 return true;
621 /* Anything other than assignments and conditional jumps are not
622 interesting for CCP. */
623 if (gimple_code (stmt) != GIMPLE_ASSIGN
624 && gimple_code (stmt) != GIMPLE_COND
625 && gimple_code (stmt) != GIMPLE_SWITCH
626 && gimple_code (stmt) != GIMPLE_CALL)
627 return true;
629 return false;
632 /* Initialize local data structures for CCP. */
634 static void
635 ccp_initialize (void)
637 basic_block bb;
639 const_val = XCNEWVEC (prop_value_t, num_ssa_names);
641 /* Initialize simulation flags for PHI nodes and statements. */
642 FOR_EACH_BB (bb)
644 gimple_stmt_iterator i;
646 for (i = gsi_start_bb (bb); !gsi_end_p (i); gsi_next (&i))
648 gimple stmt = gsi_stmt (i);
649 bool is_varying = surely_varying_stmt_p (stmt);
651 if (is_varying)
653 tree def;
654 ssa_op_iter iter;
656 /* If the statement will not produce a constant, mark
657 all its outputs VARYING. */
658 FOR_EACH_SSA_TREE_OPERAND (def, stmt, iter, SSA_OP_ALL_DEFS)
660 if (is_varying)
661 set_value_varying (def);
664 prop_set_simulate_again (stmt, !is_varying);
668 /* Now process PHI nodes. We never clear the simulate_again flag on
669 phi nodes, since we do not know which edges are executable yet,
670 except for phi nodes for virtual operands when we do not do store ccp. */
671 FOR_EACH_BB (bb)
673 gimple_stmt_iterator i;
675 for (i = gsi_start_phis (bb); !gsi_end_p (i); gsi_next (&i))
677 gimple phi = gsi_stmt (i);
679 if (!is_gimple_reg (gimple_phi_result (phi)))
680 prop_set_simulate_again (phi, false);
681 else
682 prop_set_simulate_again (phi, true);
688 /* Do final substitution of propagated values, cleanup the flowgraph and
689 free allocated storage.
691 Return TRUE when something was optimized. */
693 static bool
694 ccp_finalize (void)
696 /* Perform substitutions based on the known constant values. */
697 bool something_changed = substitute_and_fold (const_val, false);
699 free (const_val);
700 const_val = NULL;
 701 return something_changed;
705 /* Compute the meet operator between *VAL1 and *VAL2. Store the result
706 in VAL1.
708 any M UNDEFINED = any
709 any M VARYING = VARYING
710 Ci M Cj = Ci if (i == j)
711 Ci M Cj = VARYING if (i != j)
714 static void
715 ccp_lattice_meet (prop_value_t *val1, prop_value_t *val2)
717 if (val1->lattice_val == UNDEFINED)
719 /* UNDEFINED M any = any */
720 *val1 = *val2;
722 else if (val2->lattice_val == UNDEFINED)
724 /* any M UNDEFINED = any
725 Nothing to do. VAL1 already contains the value we want. */
728 else if (val1->lattice_val == VARYING
729 || val2->lattice_val == VARYING)
731 /* any M VARYING = VARYING. */
732 val1->lattice_val = VARYING;
733 val1->value = NULL_TREE;
735 else if (val1->lattice_val == CONSTANT
736 && val2->lattice_val == CONSTANT
737 && simple_cst_equal (val1->value, val2->value) == 1)
739 /* Ci M Cj = Ci if (i == j)
740 Ci M Cj = VARYING if (i != j)
742 If these two values come from memory stores, make sure that
743 they come from the same memory reference. */
744 val1->lattice_val = CONSTANT;
745 val1->value = val1->value;
747 else
749 /* Any other combination is VARYING. */
750 val1->lattice_val = VARYING;
751 val1->value = NULL_TREE;
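/* Concretely (illustrative constants): CONSTANT 3 M UNDEFINED yields
   CONSTANT 3, CONSTANT 3 M CONSTANT 3 yields CONSTANT 3, and
   CONSTANT 3 M CONSTANT 4, like anything M VARYING, yields VARYING.  */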
756 /* Loop through the PHI_NODE's parameters for BLOCK and compare their
757 lattice values to determine PHI_NODE's lattice value. The value of a
758 PHI node is determined calling ccp_lattice_meet with all the arguments
759 of the PHI node that are incoming via executable edges. */
761 static enum ssa_prop_result
762 ccp_visit_phi_node (gimple phi)
764 unsigned i;
765 prop_value_t *old_val, new_val;
767 if (dump_file && (dump_flags & TDF_DETAILS))
769 fprintf (dump_file, "\nVisiting PHI node: ");
770 print_gimple_stmt (dump_file, phi, 0, dump_flags);
773 old_val = get_value (gimple_phi_result (phi));
774 switch (old_val->lattice_val)
776 case VARYING:
777 return SSA_PROP_VARYING;
779 case CONSTANT:
780 new_val = *old_val;
781 break;
783 case UNDEFINED:
784 new_val.lattice_val = UNDEFINED;
785 new_val.value = NULL_TREE;
786 break;
788 default:
789 gcc_unreachable ();
792 for (i = 0; i < gimple_phi_num_args (phi); i++)
794 /* Compute the meet operator over all the PHI arguments flowing
795 through executable edges. */
796 edge e = gimple_phi_arg_edge (phi, i);
798 if (dump_file && (dump_flags & TDF_DETAILS))
800 fprintf (dump_file,
801 "\n Argument #%d (%d -> %d %sexecutable)\n",
802 i, e->src->index, e->dest->index,
803 (e->flags & EDGE_EXECUTABLE) ? "" : "not ");
 806 /* If the incoming edge is executable, compute the meet operator for
807 the existing value of the PHI node and the current PHI argument. */
808 if (e->flags & EDGE_EXECUTABLE)
810 tree arg = gimple_phi_arg (phi, i)->def;
811 prop_value_t arg_val;
813 if (is_gimple_min_invariant (arg))
815 arg_val.lattice_val = CONSTANT;
816 arg_val.value = arg;
818 else
819 arg_val = *(get_value (arg));
821 ccp_lattice_meet (&new_val, &arg_val);
823 if (dump_file && (dump_flags & TDF_DETAILS))
825 fprintf (dump_file, "\t");
826 print_generic_expr (dump_file, arg, dump_flags);
827 dump_lattice_value (dump_file, "\tValue: ", arg_val);
828 fprintf (dump_file, "\n");
831 if (new_val.lattice_val == VARYING)
832 break;
836 if (dump_file && (dump_flags & TDF_DETAILS))
838 dump_lattice_value (dump_file, "\n PHI node value: ", new_val);
839 fprintf (dump_file, "\n\n");
842 /* Make the transition to the new value. */
843 if (set_lattice_value (gimple_phi_result (phi), new_val))
845 if (new_val.lattice_val == VARYING)
846 return SSA_PROP_VARYING;
847 else
848 return SSA_PROP_INTERESTING;
850 else
851 return SSA_PROP_NOT_INTERESTING;
855 /* CCP specific front-end to the non-destructive constant folding
856 routines.
858 Attempt to simplify the RHS of STMT knowing that one or more
859 operands are constants.
861 If simplification is possible, return the simplified RHS,
862 otherwise return the original RHS or NULL_TREE. */
864 static tree
865 ccp_fold (gimple stmt)
867 switch (gimple_code (stmt))
869 case GIMPLE_ASSIGN:
871 enum tree_code subcode = gimple_assign_rhs_code (stmt);
873 switch (get_gimple_rhs_class (subcode))
875 case GIMPLE_SINGLE_RHS:
877 tree rhs = gimple_assign_rhs1 (stmt);
878 enum tree_code_class kind = TREE_CODE_CLASS (subcode);
880 if (TREE_CODE (rhs) == SSA_NAME)
882 /* If the RHS is an SSA_NAME, return its known constant value,
883 if any. */
884 return get_value (rhs)->value;
886 /* Handle propagating invariant addresses into address operations.
887 The folding we do here matches that in tree-ssa-forwprop.c. */
888 else if (TREE_CODE (rhs) == ADDR_EXPR)
890 tree *base;
891 base = &TREE_OPERAND (rhs, 0);
892 while (handled_component_p (*base))
893 base = &TREE_OPERAND (*base, 0);
894 if (TREE_CODE (*base) == INDIRECT_REF
895 && TREE_CODE (TREE_OPERAND (*base, 0)) == SSA_NAME)
897 prop_value_t *val = get_value (TREE_OPERAND (*base, 0));
898 if (val->lattice_val == CONSTANT
899 && TREE_CODE (val->value) == ADDR_EXPR
900 && useless_type_conversion_p
901 (TREE_TYPE (TREE_OPERAND (*base, 0)),
902 TREE_TYPE (val->value))
903 && useless_type_conversion_p
904 (TREE_TYPE (*base),
905 TREE_TYPE (TREE_OPERAND (val->value, 0))))
907 /* We need to return a new tree, not modify the IL
908 or share parts of it. So play some tricks to
909 avoid manually building it. */
910 tree ret, save = *base;
911 *base = TREE_OPERAND (val->value, 0);
912 ret = unshare_expr (rhs);
913 recompute_tree_invariant_for_addr_expr (ret);
914 *base = save;
915 return ret;
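		  /* As an illustration (hypothetical SSA name): if p_1 has
		     the CONSTANT value &a and the RHS is &(*p_1).f, the
		     code above returns a freshly built tree for &a.f
		     without modifying the IL.  */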
920 if (kind == tcc_reference)
922 if (TREE_CODE (rhs) == VIEW_CONVERT_EXPR
923 && TREE_CODE (TREE_OPERAND (rhs, 0)) == SSA_NAME)
925 prop_value_t *val = get_value (TREE_OPERAND (rhs, 0));
926 if (val->lattice_val == CONSTANT)
927 return fold_unary (VIEW_CONVERT_EXPR,
928 TREE_TYPE (rhs), val->value);
930 return fold_const_aggregate_ref (rhs);
932 else if (kind == tcc_declaration)
933 return get_symbol_constant_value (rhs);
934 return rhs;
937 case GIMPLE_UNARY_RHS:
939 /* Handle unary operators that can appear in GIMPLE form.
940 Note that we know the single operand must be a constant,
941 so this should almost always return a simplified RHS. */
942 tree lhs = gimple_assign_lhs (stmt);
943 tree op0 = gimple_assign_rhs1 (stmt);
944 tree res;
946 /* Simplify the operand down to a constant. */
947 if (TREE_CODE (op0) == SSA_NAME)
949 prop_value_t *val = get_value (op0);
950 if (val->lattice_val == CONSTANT)
951 op0 = get_value (op0)->value;
954 /* Conversions are useless for CCP purposes if they are
955 value-preserving. Thus the restrictions that
956 useless_type_conversion_p places for pointer type conversions
957 do not apply here. Substitution later will only substitute to
958 allowed places. */
959 if (CONVERT_EXPR_CODE_P (subcode)
960 && POINTER_TYPE_P (TREE_TYPE (lhs))
961 && POINTER_TYPE_P (TREE_TYPE (op0))
962 /* Do not allow differences in volatile qualification
963 as this might get us confused as to whether a
964 propagation destination statement is volatile
965 or not. See PR36988. */
966 && (TYPE_VOLATILE (TREE_TYPE (TREE_TYPE (lhs)))
967 == TYPE_VOLATILE (TREE_TYPE (TREE_TYPE (op0)))))
969 tree tem;
970 /* Still try to generate a constant of correct type. */
971 if (!useless_type_conversion_p (TREE_TYPE (lhs),
972 TREE_TYPE (op0))
973 && ((tem = maybe_fold_offset_to_address
974 (op0, integer_zero_node, TREE_TYPE (lhs)))
975 != NULL_TREE))
976 return tem;
977 return op0;
980 res = fold_unary (subcode, gimple_expr_type (stmt), op0);
982 /* If the operation was a conversion do _not_ mark a
983 resulting constant with TREE_OVERFLOW if the original
984 constant was not. These conversions have implementation
985 defined behavior and retaining the TREE_OVERFLOW flag
986 here would confuse later passes such as VRP. */
987 if (res
988 && TREE_CODE (res) == INTEGER_CST
989 && TREE_CODE (op0) == INTEGER_CST
990 && CONVERT_EXPR_CODE_P (subcode))
991 TREE_OVERFLOW (res) = TREE_OVERFLOW (op0);
993 return res;
996 case GIMPLE_BINARY_RHS:
998 /* Handle binary operators that can appear in GIMPLE form. */
999 tree op0 = gimple_assign_rhs1 (stmt);
1000 tree op1 = gimple_assign_rhs2 (stmt);
1002 /* Simplify the operands down to constants when appropriate. */
1003 if (TREE_CODE (op0) == SSA_NAME)
1005 prop_value_t *val = get_value (op0);
1006 if (val->lattice_val == CONSTANT)
1007 op0 = val->value;
1010 if (TREE_CODE (op1) == SSA_NAME)
1012 prop_value_t *val = get_value (op1);
1013 if (val->lattice_val == CONSTANT)
1014 op1 = val->value;
1017 /* Fold &foo + CST into an invariant reference if possible. */
1018 if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR
1019 && TREE_CODE (op0) == ADDR_EXPR
1020 && TREE_CODE (op1) == INTEGER_CST)
1022 tree lhs = gimple_assign_lhs (stmt);
1023 tree tem = maybe_fold_offset_to_address (op0, op1,
1024 TREE_TYPE (lhs));
1025 if (tem != NULL_TREE)
1026 return tem;
1029 return fold_binary (subcode, gimple_expr_type (stmt), op0, op1);
1032 default:
1033 gcc_unreachable ();
1036 break;
1038 case GIMPLE_CALL:
1040 tree fn = gimple_call_fn (stmt);
1041 prop_value_t *val;
1043 if (TREE_CODE (fn) == SSA_NAME)
1045 val = get_value (fn);
1046 if (val->lattice_val == CONSTANT)
1047 fn = val->value;
1049 if (TREE_CODE (fn) == ADDR_EXPR
1050 && TREE_CODE (TREE_OPERAND (fn, 0)) == FUNCTION_DECL
1051 && DECL_BUILT_IN (TREE_OPERAND (fn, 0)))
1053 tree *args = XALLOCAVEC (tree, gimple_call_num_args (stmt));
1054 tree call, retval;
1055 unsigned i;
1056 for (i = 0; i < gimple_call_num_args (stmt); ++i)
1058 args[i] = gimple_call_arg (stmt, i);
1059 if (TREE_CODE (args[i]) == SSA_NAME)
1061 val = get_value (args[i]);
1062 if (val->lattice_val == CONSTANT)
1063 args[i] = val->value;
1066 call = build_call_array (gimple_call_return_type (stmt),
1067 fn, gimple_call_num_args (stmt), args);
1068 retval = fold_call_expr (call, false);
1069 if (retval)
1070 /* fold_call_expr wraps the result inside a NOP_EXPR. */
1071 STRIP_NOPS (retval);
1072 return retval;
1074 return NULL_TREE;
1077 case GIMPLE_COND:
1079 /* Handle comparison operators that can appear in GIMPLE form. */
1080 tree op0 = gimple_cond_lhs (stmt);
1081 tree op1 = gimple_cond_rhs (stmt);
1082 enum tree_code code = gimple_cond_code (stmt);
1084 /* Simplify the operands down to constants when appropriate. */
1085 if (TREE_CODE (op0) == SSA_NAME)
1087 prop_value_t *val = get_value (op0);
1088 if (val->lattice_val == CONSTANT)
1089 op0 = val->value;
1092 if (TREE_CODE (op1) == SSA_NAME)
1094 prop_value_t *val = get_value (op1);
1095 if (val->lattice_val == CONSTANT)
1096 op1 = val->value;
1099 return fold_binary (code, boolean_type_node, op0, op1);
1102 case GIMPLE_SWITCH:
1104 tree rhs = gimple_switch_index (stmt);
1106 if (TREE_CODE (rhs) == SSA_NAME)
1108 /* If the RHS is an SSA_NAME, return its known constant value,
1109 if any. */
1110 return get_value (rhs)->value;
1113 return rhs;
1116 default:
1117 gcc_unreachable ();
1122 /* Return the tree representing the element referenced by T if T is an
1123 ARRAY_REF or COMPONENT_REF into constant aggregates. Return
1124 NULL_TREE otherwise. */
1126 tree
1127 fold_const_aggregate_ref (tree t)
1129 prop_value_t *value;
1130 tree base, ctor, idx, field;
1131 unsigned HOST_WIDE_INT cnt;
1132 tree cfield, cval;
1134 switch (TREE_CODE (t))
1136 case ARRAY_REF:
1137 /* Get a CONSTRUCTOR. If BASE is a VAR_DECL, get its
1138 DECL_INITIAL. If BASE is a nested reference into another
1139 ARRAY_REF or COMPONENT_REF, make a recursive call to resolve
1140 the inner reference. */
1141 base = TREE_OPERAND (t, 0);
1142 switch (TREE_CODE (base))
1144 case VAR_DECL:
1145 if (!TREE_READONLY (base)
1146 || TREE_CODE (TREE_TYPE (base)) != ARRAY_TYPE
1147 || !targetm.binds_local_p (base))
1148 return NULL_TREE;
1150 ctor = DECL_INITIAL (base);
1151 break;
1153 case ARRAY_REF:
1154 case COMPONENT_REF:
1155 ctor = fold_const_aggregate_ref (base);
1156 break;
1158 case STRING_CST:
1159 case CONSTRUCTOR:
1160 ctor = base;
1161 break;
1163 default:
1164 return NULL_TREE;
1167 if (ctor == NULL_TREE
1168 || (TREE_CODE (ctor) != CONSTRUCTOR
1169 && TREE_CODE (ctor) != STRING_CST)
1170 || !TREE_STATIC (ctor))
1171 return NULL_TREE;
1173 /* Get the index. If we have an SSA_NAME, try to resolve it
1174 with the current lattice value for the SSA_NAME. */
1175 idx = TREE_OPERAND (t, 1);
1176 switch (TREE_CODE (idx))
1178 case SSA_NAME:
1179 if ((value = get_value (idx))
1180 && value->lattice_val == CONSTANT
1181 && TREE_CODE (value->value) == INTEGER_CST)
1182 idx = value->value;
1183 else
1184 return NULL_TREE;
1185 break;
1187 case INTEGER_CST:
1188 break;
1190 default:
1191 return NULL_TREE;
1194 /* Fold read from constant string. */
1195 if (TREE_CODE (ctor) == STRING_CST)
1197 if ((TYPE_MODE (TREE_TYPE (t))
1198 == TYPE_MODE (TREE_TYPE (TREE_TYPE (ctor))))
1199 && (GET_MODE_CLASS (TYPE_MODE (TREE_TYPE (TREE_TYPE (ctor))))
1200 == MODE_INT)
1201 && GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (TREE_TYPE (ctor)))) == 1
1202 && compare_tree_int (idx, TREE_STRING_LENGTH (ctor)) < 0)
1203 return build_int_cst_type (TREE_TYPE (t),
1204 (TREE_STRING_POINTER (ctor)
1205 [TREE_INT_CST_LOW (idx)]));
1206 return NULL_TREE;
1209 /* Whoo-hoo! I'll fold ya baby. Yeah! */
1210 FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (ctor), cnt, cfield, cval)
1211 if (tree_int_cst_equal (cfield, idx))
1213 STRIP_USELESS_TYPE_CONVERSION (cval);
1214 return cval;
1216 break;
1218 case COMPONENT_REF:
1219 /* Get a CONSTRUCTOR. If BASE is a VAR_DECL, get its
1220 DECL_INITIAL. If BASE is a nested reference into another
1221 ARRAY_REF or COMPONENT_REF, make a recursive call to resolve
1222 the inner reference. */
1223 base = TREE_OPERAND (t, 0);
1224 switch (TREE_CODE (base))
1226 case VAR_DECL:
1227 if (!TREE_READONLY (base)
1228 || TREE_CODE (TREE_TYPE (base)) != RECORD_TYPE
1229 || !targetm.binds_local_p (base))
1230 return NULL_TREE;
1232 ctor = DECL_INITIAL (base);
1233 break;
1235 case ARRAY_REF:
1236 case COMPONENT_REF:
1237 ctor = fold_const_aggregate_ref (base);
1238 break;
1240 default:
1241 return NULL_TREE;
1244 if (ctor == NULL_TREE
1245 || TREE_CODE (ctor) != CONSTRUCTOR
1246 || !TREE_STATIC (ctor))
1247 return NULL_TREE;
1249 field = TREE_OPERAND (t, 1);
1251 FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (ctor), cnt, cfield, cval)
1252 if (cfield == field
1253 /* FIXME: Handle bit-fields. */
1254 && ! DECL_BIT_FIELD (cfield))
1256 STRIP_USELESS_TYPE_CONVERSION (cval);
1257 return cval;
1259 break;
1261 case REALPART_EXPR:
1262 case IMAGPART_EXPR:
1264 tree c = fold_const_aggregate_ref (TREE_OPERAND (t, 0));
1265 if (c && TREE_CODE (c) == COMPLEX_CST)
1266 return fold_build1 (TREE_CODE (t), TREE_TYPE (t), c);
1267 break;
1270 case INDIRECT_REF:
1272 tree base = TREE_OPERAND (t, 0);
1273 if (TREE_CODE (base) == SSA_NAME
1274 && (value = get_value (base))
1275 && value->lattice_val == CONSTANT
1276 && TREE_CODE (value->value) == ADDR_EXPR)
1277 return fold_const_aggregate_ref (TREE_OPERAND (value->value, 0));
1278 break;
1281 default:
1282 break;
1285 return NULL_TREE;
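/* Example (illustrative): with

     static const int tab[3] = { 10, 20, 30 };

   fold_const_aggregate_ref on the ARRAY_REF 'tab[1]' returns the
   INTEGER_CST 20; the index may also be an SSA name whose lattice
   value is a known INTEGER_CST.  A read such as "abc"[1] from a
   string constant folds to the character constant 'b'.  */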
1288 /* Evaluate statement STMT.
1289 Valid only for assignments, calls, conditionals, and switches. */
1291 static prop_value_t
1292 evaluate_stmt (gimple stmt)
1294 prop_value_t val;
1295 tree simplified = NULL_TREE;
1296 ccp_lattice_t likelyvalue = likely_value (stmt);
1297 bool is_constant;
1299 fold_defer_overflow_warnings ();
1301 /* If the statement is likely to have a CONSTANT result, then try
1302 to fold the statement to determine the constant value. */
1303 /* FIXME. This is the only place that we call ccp_fold.
1304 Since likely_value never returns CONSTANT for calls, we will
1305 not attempt to fold them, including builtins that may profit. */
1306 if (likelyvalue == CONSTANT)
1307 simplified = ccp_fold (stmt);
1308 /* If the statement is likely to have a VARYING result, then do not
1309 bother folding the statement. */
1310 else if (likelyvalue == VARYING)
1312 enum gimple_code code = gimple_code (stmt);
1313 if (code == GIMPLE_ASSIGN)
1315 enum tree_code subcode = gimple_assign_rhs_code (stmt);
1317 /* Other cases cannot satisfy is_gimple_min_invariant
1318 without folding. */
1319 if (get_gimple_rhs_class (subcode) == GIMPLE_SINGLE_RHS)
1320 simplified = gimple_assign_rhs1 (stmt);
1322 else if (code == GIMPLE_SWITCH)
1323 simplified = gimple_switch_index (stmt);
1324 else
1325 /* These cannot satisfy is_gimple_min_invariant without folding. */
1326 gcc_assert (code == GIMPLE_CALL || code == GIMPLE_COND);
1329 is_constant = simplified && is_gimple_min_invariant (simplified);
1331 fold_undefer_overflow_warnings (is_constant, stmt, 0);
1333 if (dump_file && (dump_flags & TDF_DETAILS))
1335 fprintf (dump_file, "which is likely ");
1336 switch (likelyvalue)
1338 case CONSTANT:
1339 fprintf (dump_file, "CONSTANT");
1340 break;
1341 case UNDEFINED:
1342 fprintf (dump_file, "UNDEFINED");
1343 break;
1344 case VARYING:
1345 fprintf (dump_file, "VARYING");
1346 break;
1347 default:;
1349 fprintf (dump_file, "\n");
1352 if (is_constant)
1354 /* The statement produced a constant value. */
1355 val.lattice_val = CONSTANT;
1356 val.value = simplified;
1358 else
1360 /* The statement produced a nonconstant value. If the statement
1361 had UNDEFINED operands, then the result of the statement
1362 should be UNDEFINED. Otherwise, the statement is VARYING. */
1363 if (likelyvalue == UNDEFINED)
1364 val.lattice_val = likelyvalue;
1365 else
1366 val.lattice_val = VARYING;
1368 val.value = NULL_TREE;
1371 return val;
1374 /* Visit the assignment statement STMT. Set the value of its LHS to the
1375 value computed by the RHS and store LHS in *OUTPUT_P. If STMT
1376 creates virtual definitions, set the value of each new name to that
1377 of the RHS (if we can derive a constant out of the RHS).
1378 Value-returning call statements also perform an assignment, and
1379 are handled here. */
1381 static enum ssa_prop_result
1382 visit_assignment (gimple stmt, tree *output_p)
1384 prop_value_t val;
1385 enum ssa_prop_result retval;
1387 tree lhs = gimple_get_lhs (stmt);
1389 gcc_assert (gimple_code (stmt) != GIMPLE_CALL
1390 || gimple_call_lhs (stmt) != NULL_TREE);
1392 if (gimple_assign_copy_p (stmt))
1394 tree rhs = gimple_assign_rhs1 (stmt);
1396 if (TREE_CODE (rhs) == SSA_NAME)
1398 /* For a simple copy operation, we copy the lattice values. */
1399 prop_value_t *nval = get_value (rhs);
1400 val = *nval;
1402 else
1403 val = evaluate_stmt (stmt);
1405 else
1406 /* Evaluate the statement, which could be
1407 either a GIMPLE_ASSIGN or a GIMPLE_CALL. */
1408 val = evaluate_stmt (stmt);
1410 retval = SSA_PROP_NOT_INTERESTING;
1412 /* Set the lattice value of the statement's output. */
1413 if (TREE_CODE (lhs) == SSA_NAME)
1415 /* If STMT is an assignment to an SSA_NAME, we only have one
1416 value to set. */
1417 if (set_lattice_value (lhs, val))
1419 *output_p = lhs;
1420 if (val.lattice_val == VARYING)
1421 retval = SSA_PROP_VARYING;
1422 else
1423 retval = SSA_PROP_INTERESTING;
1427 return retval;
1431 /* Visit the conditional statement STMT. Return SSA_PROP_INTERESTING
1432 if it can determine which edge will be taken. Otherwise, return
1433 SSA_PROP_VARYING. */
1435 static enum ssa_prop_result
1436 visit_cond_stmt (gimple stmt, edge *taken_edge_p)
1438 prop_value_t val;
1439 basic_block block;
1441 block = gimple_bb (stmt);
1442 val = evaluate_stmt (stmt);
1444 /* Find which edge out of the conditional block will be taken and add it
1445 to the worklist. If no single edge can be determined statically,
1446 return SSA_PROP_VARYING to feed all the outgoing edges to the
1447 propagation engine. */
1448 *taken_edge_p = val.value ? find_taken_edge (block, val.value) : 0;
1449 if (*taken_edge_p)
1450 return SSA_PROP_INTERESTING;
1451 else
1452 return SSA_PROP_VARYING;
1456 /* Evaluate statement STMT. If the statement produces an output value and
1457 its evaluation changes the lattice value of its output, return
1458 SSA_PROP_INTERESTING and set *OUTPUT_P to the SSA_NAME holding the
1459 output value.
1461 If STMT is a conditional branch and we can determine its truth
1462 value, set *TAKEN_EDGE_P accordingly. If STMT produces a varying
1463 value, return SSA_PROP_VARYING. */
1465 static enum ssa_prop_result
1466 ccp_visit_stmt (gimple stmt, edge *taken_edge_p, tree *output_p)
1468 tree def;
1469 ssa_op_iter iter;
1471 if (dump_file && (dump_flags & TDF_DETAILS))
1473 fprintf (dump_file, "\nVisiting statement:\n");
1474 print_gimple_stmt (dump_file, stmt, 0, dump_flags);
1477 switch (gimple_code (stmt))
1479 case GIMPLE_ASSIGN:
1480 /* If the statement is an assignment that produces a single
1481 output value, evaluate its RHS to see if the lattice value of
1482 its output has changed. */
1483 return visit_assignment (stmt, output_p);
1485 case GIMPLE_CALL:
1486 /* A value-returning call also performs an assignment. */
1487 if (gimple_call_lhs (stmt) != NULL_TREE)
1488 return visit_assignment (stmt, output_p);
1489 break;
1491 case GIMPLE_COND:
1492 case GIMPLE_SWITCH:
1493 /* If STMT is a conditional branch, see if we can determine
1494 which branch will be taken. */
1495 /* FIXME. It appears that we should be able to optimize
1496 computed GOTOs here as well. */
1497 return visit_cond_stmt (stmt, taken_edge_p);
1499 default:
1500 break;
1503 /* Any other kind of statement is not interesting for constant
1504 propagation and, therefore, not worth simulating. */
1505 if (dump_file && (dump_flags & TDF_DETAILS))
1506 fprintf (dump_file, "No interesting values produced. Marked VARYING.\n");
1508 /* Definitions made by statements other than assignments to
1509 SSA_NAMEs represent unknown modifications to their outputs.
1510 Mark them VARYING. */
1511 FOR_EACH_SSA_TREE_OPERAND (def, stmt, iter, SSA_OP_ALL_DEFS)
1513 prop_value_t v = { VARYING, NULL_TREE };
1514 set_lattice_value (def, v);
1517 return SSA_PROP_VARYING;
1521 /* Main entry point for SSA Conditional Constant Propagation. */
1523 static unsigned int
1524 do_ssa_ccp (void)
1526 ccp_initialize ();
1527 ssa_propagate (ccp_visit_stmt, ccp_visit_phi_node);
1528 if (ccp_finalize ())
1529 return (TODO_cleanup_cfg | TODO_update_ssa | TODO_remove_unused_locals);
1530 else
1531 return 0;
1535 static bool
1536 gate_ccp (void)
1538 return flag_tree_ccp != 0;
1542 struct gimple_opt_pass pass_ccp =
1545 GIMPLE_PASS,
1546 "ccp", /* name */
1547 gate_ccp, /* gate */
1548 do_ssa_ccp, /* execute */
1549 NULL, /* sub */
1550 NULL, /* next */
1551 0, /* static_pass_number */
1552 TV_TREE_CCP, /* tv_id */
1553 PROP_cfg | PROP_ssa, /* properties_required */
1554 0, /* properties_provided */
1555 0, /* properties_destroyed */
1556 0, /* todo_flags_start */
1557 TODO_dump_func | TODO_verify_ssa
1558 | TODO_verify_stmts | TODO_ggc_collect/* todo_flags_finish */
1563 /* A subroutine of fold_stmt_r. Attempts to fold *(A+O) to A[X].
1564 BASE is an array type. OFFSET is a byte displacement. ORIG_TYPE
1565 is the desired result type. */
1567 static tree
1568 maybe_fold_offset_to_array_ref (tree base, tree offset, tree orig_type,
1569 bool allow_negative_idx)
1571 tree min_idx, idx, idx_type, elt_offset = integer_zero_node;
1572 tree array_type, elt_type, elt_size;
1573 tree domain_type;
1575 /* If BASE is an ARRAY_REF, we can pick up another offset (this time
 1576 measured in units of the size of the element type) from that ARRAY_REF.
1577 We can't do anything if either is variable.
1579 The case we handle here is *(&A[N]+O). */
1580 if (TREE_CODE (base) == ARRAY_REF)
1582 tree low_bound = array_ref_low_bound (base);
1584 elt_offset = TREE_OPERAND (base, 1);
1585 if (TREE_CODE (low_bound) != INTEGER_CST
1586 || TREE_CODE (elt_offset) != INTEGER_CST)
1587 return NULL_TREE;
1589 elt_offset = int_const_binop (MINUS_EXPR, elt_offset, low_bound, 0);
1590 base = TREE_OPERAND (base, 0);
1593 /* Ignore stupid user tricks of indexing non-array variables. */
1594 array_type = TREE_TYPE (base);
1595 if (TREE_CODE (array_type) != ARRAY_TYPE)
1596 return NULL_TREE;
1597 elt_type = TREE_TYPE (array_type);
1598 if (!useless_type_conversion_p (orig_type, elt_type))
1599 return NULL_TREE;
1601 /* Use signed size type for intermediate computation on the index. */
1602 idx_type = signed_type_for (size_type_node);
1604 /* If OFFSET and ELT_OFFSET are zero, we don't care about the size of the
1605 element type (so we can use the alignment if it's not constant).
1606 Otherwise, compute the offset as an index by using a division. If the
1607 division isn't exact, then don't do anything. */
1608 elt_size = TYPE_SIZE_UNIT (elt_type);
1609 if (!elt_size)
1610 return NULL;
1611 if (integer_zerop (offset))
1613 if (TREE_CODE (elt_size) != INTEGER_CST)
1614 elt_size = size_int (TYPE_ALIGN (elt_type));
1616 idx = build_int_cst (idx_type, 0);
1618 else
1620 unsigned HOST_WIDE_INT lquo, lrem;
1621 HOST_WIDE_INT hquo, hrem;
1622 double_int soffset;
1624 /* The final array offset should be signed, so we need
1625 to sign-extend the (possibly pointer) offset here
1626 and use signed division. */
1627 soffset = double_int_sext (tree_to_double_int (offset),
1628 TYPE_PRECISION (TREE_TYPE (offset)));
1629 if (TREE_CODE (elt_size) != INTEGER_CST
1630 || div_and_round_double (TRUNC_DIV_EXPR, 0,
1631 soffset.low, soffset.high,
1632 TREE_INT_CST_LOW (elt_size),
1633 TREE_INT_CST_HIGH (elt_size),
1634 &lquo, &hquo, &lrem, &hrem)
1635 || lrem || hrem)
1636 return NULL_TREE;
1638 idx = build_int_cst_wide (idx_type, lquo, hquo);
1641 /* Assume the low bound is zero. If there is a domain type, get the
1642 low bound, if any, convert the index into that type, and add the
1643 low bound. */
1644 min_idx = build_int_cst (idx_type, 0);
1645 domain_type = TYPE_DOMAIN (array_type);
1646 if (domain_type)
1648 idx_type = domain_type;
1649 if (TYPE_MIN_VALUE (idx_type))
1650 min_idx = TYPE_MIN_VALUE (idx_type);
1651 else
1652 min_idx = fold_convert (idx_type, min_idx);
1654 if (TREE_CODE (min_idx) != INTEGER_CST)
1655 return NULL_TREE;
1657 elt_offset = fold_convert (idx_type, elt_offset);
1660 if (!integer_zerop (min_idx))
1661 idx = int_const_binop (PLUS_EXPR, idx, min_idx, 0);
1662 if (!integer_zerop (elt_offset))
1663 idx = int_const_binop (PLUS_EXPR, idx, elt_offset, 0);
1665 /* Make sure to possibly truncate late after offsetting. */
1666 idx = fold_convert (idx_type, idx);
1668 /* We don't want to construct access past array bounds. For example
1669 char *(c[4]);
1670 c[3][2];
1671 should not be simplified into (*c)[14] or tree-vrp will
1672 give false warnings. The same is true for
1673 struct A { long x; char d[0]; } *a;
1674 (char *)a - 4;
1675 which should be not folded to &a->d[-8]. */
1676 if (domain_type
1677 && TYPE_MAX_VALUE (domain_type)
1678 && TREE_CODE (TYPE_MAX_VALUE (domain_type)) == INTEGER_CST)
1680 tree up_bound = TYPE_MAX_VALUE (domain_type);
1682 if (tree_int_cst_lt (up_bound, idx)
1683 /* Accesses after the end of arrays of size 0 (gcc
1684 extension) and 1 are likely intentional ("struct
1685 hack"). */
1686 && compare_tree_int (up_bound, 1) > 0)
1687 return NULL_TREE;
1689 if (domain_type
1690 && TYPE_MIN_VALUE (domain_type))
1692 if (!allow_negative_idx
1693 && TREE_CODE (TYPE_MIN_VALUE (domain_type)) == INTEGER_CST
1694 && tree_int_cst_lt (idx, TYPE_MIN_VALUE (domain_type)))
1695 return NULL_TREE;
1697 else if (!allow_negative_idx
1698 && compare_tree_int (idx, 0) < 0)
1699 return NULL_TREE;
1701 return build4 (ARRAY_REF, elt_type, base, idx, NULL_TREE, NULL_TREE);
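/* For instance (illustrative, assuming a 4-byte 'int'): with

     int a[10];

   a base of 'a' and a byte offset of 8 divide exactly by the element
   size, so the access folds to the ARRAY_REF a[2].  An offset that is
   not a multiple of the element size, or one that indexes past the
   end of the array, is rejected and NULL_TREE is returned.  */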
1705 /* Attempt to fold *(S+O) to S.X.
1706 BASE is a record type. OFFSET is a byte displacement. ORIG_TYPE
1707 is the desired result type. */
1709 static tree
1710 maybe_fold_offset_to_component_ref (tree record_type, tree base, tree offset,
1711 tree orig_type, bool base_is_ptr)
1713 tree f, t, field_type, tail_array_field, field_offset;
1714 tree ret;
1715 tree new_base;
1717 if (TREE_CODE (record_type) != RECORD_TYPE
1718 && TREE_CODE (record_type) != UNION_TYPE
1719 && TREE_CODE (record_type) != QUAL_UNION_TYPE)
1720 return NULL_TREE;
1722 /* Short-circuit silly cases. */
1723 if (useless_type_conversion_p (record_type, orig_type))
1724 return NULL_TREE;
1726 tail_array_field = NULL_TREE;
1727 for (f = TYPE_FIELDS (record_type); f ; f = TREE_CHAIN (f))
1729 int cmp;
1731 if (TREE_CODE (f) != FIELD_DECL)
1732 continue;
1733 if (DECL_BIT_FIELD (f))
1734 continue;
1736 if (!DECL_FIELD_OFFSET (f))
1737 continue;
1738 field_offset = byte_position (f);
1739 if (TREE_CODE (field_offset) != INTEGER_CST)
1740 continue;
1742 /* ??? Java creates "interesting" fields for representing base classes.
1743 They have no name, and have no context. With no context, we get into
1744 trouble with nonoverlapping_component_refs_p. Skip them. */
1745 if (!DECL_FIELD_CONTEXT (f))
1746 continue;
1748 /* The previous array field isn't at the end. */
1749 tail_array_field = NULL_TREE;
1751 /* Check to see if this offset overlaps with the field. */
1752 cmp = tree_int_cst_compare (field_offset, offset);
1753 if (cmp > 0)
1754 continue;
1756 field_type = TREE_TYPE (f);
1758 /* Here we exactly match the offset being checked. If the types match,
1759 then we can return that field. */
1760 if (cmp == 0
1761 && useless_type_conversion_p (orig_type, field_type))
1763 if (base_is_ptr)
1764 base = build1 (INDIRECT_REF, record_type, base);
1765 t = build3 (COMPONENT_REF, field_type, base, f, NULL_TREE);
1766 return t;
1769 /* Don't care about offsets into the middle of scalars. */
1770 if (!AGGREGATE_TYPE_P (field_type))
1771 continue;
1773 /* Check for array at the end of the struct. This is often
1774 used as for flexible array members. We should be able to
1775 turn this into an array access anyway. */
1776 if (TREE_CODE (field_type) == ARRAY_TYPE)
1777 tail_array_field = f;
1779 /* Check the end of the field against the offset. */
1780 if (!DECL_SIZE_UNIT (f)
1781 || TREE_CODE (DECL_SIZE_UNIT (f)) != INTEGER_CST)
1782 continue;
1783 t = int_const_binop (MINUS_EXPR, offset, field_offset, 1);
1784 if (!tree_int_cst_lt (t, DECL_SIZE_UNIT (f)))
1785 continue;
1787 /* If we matched, then set offset to the displacement into
1788 this field. */
1789 if (base_is_ptr)
1790 new_base = build1 (INDIRECT_REF, record_type, base);
1791 else
1792 new_base = base;
1793 new_base = build3 (COMPONENT_REF, field_type, new_base, f, NULL_TREE);
1795 /* Recurse to possibly find the match. */
1796 ret = maybe_fold_offset_to_array_ref (new_base, t, orig_type,
1797 f == TYPE_FIELDS (record_type));
1798 if (ret)
1799 return ret;
1800 ret = maybe_fold_offset_to_component_ref (field_type, new_base, t,
1801 orig_type, false);
1802 if (ret)
1803 return ret;
1806 if (!tail_array_field)
1807 return NULL_TREE;
1809 f = tail_array_field;
1810 field_type = TREE_TYPE (f);
1811 offset = int_const_binop (MINUS_EXPR, offset, byte_position (f), 1);
1813 /* If we get here, we've got an aggregate field, and a possibly
1814 nonzero offset into them. Recurse and hope for a valid match. */
1815 if (base_is_ptr)
1816 base = build1 (INDIRECT_REF, record_type, base);
1817 base = build3 (COMPONENT_REF, field_type, base, f, NULL_TREE);
1819 t = maybe_fold_offset_to_array_ref (base, offset, orig_type,
1820 f == TYPE_FIELDS (record_type));
1821 if (t)
1822 return t;
1823 return maybe_fold_offset_to_component_ref (field_type, base, offset,
1824 orig_type, false);
1827 /* Attempt to express (ORIG_TYPE)BASE+OFFSET as BASE->field_of_orig_type
1828 or BASE[index] or by combination of those.
1830 Before attempting the conversion strip off existing ADDR_EXPRs and
1831 handled component refs. */
1833 tree
1834 maybe_fold_offset_to_reference (tree base, tree offset, tree orig_type)
1836 tree ret;
1837 tree type;
1838 bool base_is_ptr = true;
1840 STRIP_NOPS (base);
1841 if (TREE_CODE (base) == ADDR_EXPR)
1843 base_is_ptr = false;
1845 base = TREE_OPERAND (base, 0);
 1847 /* Handle the case where an existing COMPONENT_REF picks e.g. the wrong
 1848 field of a union, so it needs to be removed and a new COMPONENT_REF
 1849 constructed. The wrong COMPONENT_REFs are often constructed by folding
 1850 the (type *)&object within the expression (type *)&object+offset. */
1851 if (handled_component_p (base))
1853 HOST_WIDE_INT sub_offset, size, maxsize;
1854 tree newbase;
1855 newbase = get_ref_base_and_extent (base, &sub_offset,
1856 &size, &maxsize);
1857 gcc_assert (newbase);
1858 if (size == maxsize
1859 && size != -1
1860 && !(sub_offset & (BITS_PER_UNIT - 1)))
1862 base = newbase;
1863 if (sub_offset)
1864 offset = int_const_binop (PLUS_EXPR, offset,
1865 build_int_cst (TREE_TYPE (offset),
1866 sub_offset / BITS_PER_UNIT), 1);
1869 if (useless_type_conversion_p (orig_type, TREE_TYPE (base))
1870 && integer_zerop (offset))
1871 return base;
1872 type = TREE_TYPE (base);
1874 else
1876 base_is_ptr = true;
1877 if (!POINTER_TYPE_P (TREE_TYPE (base)))
1878 return NULL_TREE;
1879 type = TREE_TYPE (TREE_TYPE (base));
1881 ret = maybe_fold_offset_to_component_ref (type, base, offset,
1882 orig_type, base_is_ptr);
1883 if (!ret)
1885 if (base_is_ptr)
1886 base = build1 (INDIRECT_REF, type, base);
1887 ret = maybe_fold_offset_to_array_ref (base, offset, orig_type, true);
1889 return ret;
1892 /* Attempt to express (ORIG_TYPE)&BASE+OFFSET as &BASE->field_of_orig_type
1893 or &BASE[index] or by combination of those.
1895 Before attempting the conversion strip off existing component refs. */
1897 tree
1898 maybe_fold_offset_to_address (tree addr, tree offset, tree orig_type)
1900 tree t;
1902 gcc_assert (POINTER_TYPE_P (TREE_TYPE (addr))
1903 && POINTER_TYPE_P (orig_type));
1905 t = maybe_fold_offset_to_reference (addr, offset, TREE_TYPE (orig_type));
1906 if (t != NULL_TREE)
1908 tree orig = addr;
1909 tree ptr_type;
1911 /* For __builtin_object_size to function correctly we need to
1912 make sure not to fold address arithmetic so that we change
1913 reference from one array to another. This would happen for
1914 example for
1916 struct X { char s1[10]; char s2[10] } s;
1917 char *foo (void) { return &s.s2[-4]; }
1919 where we need to avoid generating &s.s1[6]. As the C and
1920 C++ frontends create different initial trees
1921 (char *) &s.s1 + -4 vs. &s.s1[-4] we have to do some
1922 sophisticated comparisons here. Note that checking for the
1923 condition after the fact is easier than trying to avoid doing
1924 the folding. */
1925 STRIP_NOPS (orig);
1926 if (TREE_CODE (orig) == ADDR_EXPR)
1927 orig = TREE_OPERAND (orig, 0);
1928 if ((TREE_CODE (orig) == ARRAY_REF
1929 || (TREE_CODE (orig) == COMPONENT_REF
1930 && TREE_CODE (TREE_TYPE (TREE_OPERAND (orig, 1))) == ARRAY_TYPE))
1931 && (TREE_CODE (t) == ARRAY_REF
1932 || (TREE_CODE (t) == COMPONENT_REF
1933 && TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 1))) == ARRAY_TYPE))
1934 && !operand_equal_p (TREE_CODE (orig) == ARRAY_REF
1935 ? TREE_OPERAND (orig, 0) : orig,
1936 TREE_CODE (t) == ARRAY_REF
1937 ? TREE_OPERAND (t, 0) : t, 0))
1938 return NULL_TREE;
1940 ptr_type = build_pointer_type (TREE_TYPE (t));
1941 if (!useless_type_conversion_p (orig_type, ptr_type))
1942 return NULL_TREE;
1943 return build_fold_addr_expr_with_type (t, ptr_type);
1946 return NULL_TREE;
1949 /* A subroutine of fold_stmt_r. Attempt to simplify *(BASE+OFFSET).
1950 Return the simplified expression, or NULL if nothing could be done. */
1952 static tree
1953 maybe_fold_stmt_indirect (tree expr, tree base, tree offset)
1955 tree t;
1956 bool volatile_p = TREE_THIS_VOLATILE (expr);
1958 /* We may well have constructed a double-nested PLUS_EXPR via multiple
1959 substitutions. Fold that down to one. Remove NON_LVALUE_EXPRs that
1960 are sometimes added. */
1961 base = fold (base);
1962 STRIP_TYPE_NOPS (base);
1963 TREE_OPERAND (expr, 0) = base;
1965 /* One possibility is that the address reduces to a string constant. */
1966 t = fold_read_from_constant_string (expr);
1967 if (t)
1968 return t;
1970 /* Add in any offset from a POINTER_PLUS_EXPR. */
1971 if (TREE_CODE (base) == POINTER_PLUS_EXPR)
1973 tree offset2;
1975 offset2 = TREE_OPERAND (base, 1);
1976 if (TREE_CODE (offset2) != INTEGER_CST)
1977 return NULL_TREE;
1978 base = TREE_OPERAND (base, 0);
1980 offset = fold_convert (sizetype,
1981 int_const_binop (PLUS_EXPR, offset, offset2, 1));
1984 if (TREE_CODE (base) == ADDR_EXPR)
1986 tree base_addr = base;
1988 /* Strip the ADDR_EXPR. */
1989 base = TREE_OPERAND (base, 0);
1991 /* Fold away CONST_DECL to its value, if the type is scalar. */
1992 if (TREE_CODE (base) == CONST_DECL
1993 && is_gimple_min_invariant (DECL_INITIAL (base)))
1994 return DECL_INITIAL (base);
1996 /* Try folding *(&B+O) to B.X. */
1997 t = maybe_fold_offset_to_reference (base_addr, offset,
1998 TREE_TYPE (expr));
1999 if (t)
2001 /* Preserve the volatility of the original expression.
2002 We can end up with a plain decl here, which is shared,
2003 and we shouldn't mess with its flags. */
2004 if (!SSA_VAR_P (t))
2005 TREE_THIS_VOLATILE (t) = volatile_p;
2006 return t;
2009 else
2011 /* We can get here for out-of-range string constant accesses,
2012 such as "_"[3]. Bail out of the entire substitution search
2013 and arrange for the entire statement to be replaced by a
2014 call to __builtin_trap. In all likelihood this will all be
2015 constant-folded away, but in the meantime we can't leave behind
2016 something that get_expr_operands can't understand. */
2018 t = base;
2019 STRIP_NOPS (t);
2020 if (TREE_CODE (t) == ADDR_EXPR
2021 && TREE_CODE (TREE_OPERAND (t, 0)) == STRING_CST)
2023 /* FIXME: Except that this causes problems elsewhere with dead
2024 code not being deleted, and we die in the rtl expanders
2025 because we failed to remove some ssa_name. In the meantime,
2026 just return zero. */
2027 /* FIXME2: This condition should be signaled by
2028 fold_read_from_constant_string directly, rather than
2029 re-checking for it here. */
2030 return integer_zero_node;
2033 /* Try folding *(B+O) to B->X. Still an improvement. */
2034 if (POINTER_TYPE_P (TREE_TYPE (base)))
2036 t = maybe_fold_offset_to_reference (base, offset,
2037 TREE_TYPE (expr));
2038 if (t)
2039 return t;
2043 /* Otherwise we had an offset that we could not simplify. */
2044 return NULL_TREE;
2048 /* A quaint feature of our address arithmetic is that there
2049 can be hidden type changes here: the type of the result need
2050 not be the same as the type of the input pointer.
2052 What we're after here is an expression of the form
2053 (T *)(&array + const)
2054 where array is OP0, const is OP1, RES_TYPE is T and
2055 the cast doesn't actually exist, but is implicit in the
2056 type of the POINTER_PLUS_EXPR. We'd like to turn this into
2057 &array[x]
2058 which may be able to propagate further. */
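/* For instance, with a hypothetical declaration int a[10] and assuming a
   4-byte int, the case OP0 == &a, OP1 == 8 (a byte offset) and
   RES_TYPE == int * can be turned into &a[2].  */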
2060 tree
2061 maybe_fold_stmt_addition (tree res_type, tree op0, tree op1)
2063 tree ptd_type;
2064 tree t;
2066 /* It had better be a constant. */
2067 if (TREE_CODE (op1) != INTEGER_CST)
2068 return NULL_TREE;
2069 /* The first operand should be an ADDR_EXPR. */
2070 if (TREE_CODE (op0) != ADDR_EXPR)
2071 return NULL_TREE;
2072 op0 = TREE_OPERAND (op0, 0);
2074 /* If the first operand is an ARRAY_REF, expand it so that we can fold
2075 the offset into it. */
2076 while (TREE_CODE (op0) == ARRAY_REF)
2078 tree array_obj = TREE_OPERAND (op0, 0);
2079 tree array_idx = TREE_OPERAND (op0, 1);
2080 tree elt_type = TREE_TYPE (op0);
2081 tree elt_size = TYPE_SIZE_UNIT (elt_type);
2082 tree min_idx;
2084 if (TREE_CODE (array_idx) != INTEGER_CST)
2085 break;
2086 if (TREE_CODE (elt_size) != INTEGER_CST)
2087 break;
2089 /* Un-bias the index by the min index of the array type. */
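/* E.g., for an array type whose domain is [1..10] (a non-zero lower
   bound, as some front ends create), an ARRAY_REF index of 5 is
   un-biased to 4 here before being scaled by the element size below.  */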
2090 min_idx = TYPE_DOMAIN (TREE_TYPE (array_obj));
2091 if (min_idx)
2093 min_idx = TYPE_MIN_VALUE (min_idx);
2094 if (min_idx)
2096 if (TREE_CODE (min_idx) != INTEGER_CST)
2097 break;
2099 array_idx = fold_convert (TREE_TYPE (min_idx), array_idx);
2100 if (!integer_zerop (min_idx))
2101 array_idx = int_const_binop (MINUS_EXPR, array_idx,
2102 min_idx, 0);
2106 /* Convert the index to a byte offset. */
2107 array_idx = fold_convert (sizetype, array_idx);
2108 array_idx = int_const_binop (MULT_EXPR, array_idx, elt_size, 0);
2110 /* Update the operands for the next round, or for folding. */
2111 op1 = int_const_binop (PLUS_EXPR,
2112 array_idx, op1, 0);
2113 op0 = array_obj;
2116 ptd_type = TREE_TYPE (res_type);
2117 /* If we want a pointer to void, reconstruct the reference from the
2118 array element type. A pointer to that can be trivially converted
2119 to void *. This happens as we fold (void *)(ptr p+ off). */
2120 if (VOID_TYPE_P (ptd_type)
2121 && TREE_CODE (TREE_TYPE (op0)) == ARRAY_TYPE)
2122 ptd_type = TREE_TYPE (TREE_TYPE (op0));
2124 /* At this point we can try some of the same things as for indirects. */
2125 t = maybe_fold_offset_to_array_ref (op0, op1, ptd_type, true);
2126 if (!t)
2127 t = maybe_fold_offset_to_component_ref (TREE_TYPE (op0), op0, op1,
2128 ptd_type, false);
2129 if (t)
2130 t = build1 (ADDR_EXPR, res_type, t);
2132 return t;
2135 /* For passing state through walk_tree into fold_stmt_r and its
2136 children. */
2138 struct fold_stmt_r_data
2140 gimple stmt;
2141 bool *changed_p;
2142 bool *inside_addr_expr_p;
2145 /* Subroutine of fold_stmt called via walk_tree. We perform several
2146 simplifications of EXPR_P, mostly having to do with pointer arithmetic. */
2148 static tree
2149 fold_stmt_r (tree *expr_p, int *walk_subtrees, void *data)
2151 struct walk_stmt_info *wi = (struct walk_stmt_info *) data;
2152 struct fold_stmt_r_data *fold_stmt_r_data;
2153 bool *inside_addr_expr_p;
2154 bool *changed_p;
2155 tree expr = *expr_p, t;
2156 bool volatile_p = TREE_THIS_VOLATILE (expr);
2158 fold_stmt_r_data = (struct fold_stmt_r_data *) wi->info;
2159 inside_addr_expr_p = fold_stmt_r_data->inside_addr_expr_p;
2160 changed_p = fold_stmt_r_data->changed_p;
2162 /* ??? It'd be nice if walk_tree had a pre-order option. */
2163 switch (TREE_CODE (expr))
2165 case INDIRECT_REF:
2166 t = walk_tree (&TREE_OPERAND (expr, 0), fold_stmt_r, data, NULL);
2167 if (t)
2168 return t;
2169 *walk_subtrees = 0;
2171 t = maybe_fold_stmt_indirect (expr, TREE_OPERAND (expr, 0),
2172 integer_zero_node);
2173 if (!t
2174 && TREE_CODE (TREE_OPERAND (expr, 0)) == ADDR_EXPR)
2175 /* If we had a good reason for propagating the address here,
2176 make sure we end up with valid gimple. See PR34989. */
2177 t = TREE_OPERAND (TREE_OPERAND (expr, 0), 0);
2178 break;
2180 case NOP_EXPR:
2181 t = walk_tree (&TREE_OPERAND (expr, 0), fold_stmt_r, data, NULL);
2182 if (t)
2183 return t;
2184 *walk_subtrees = 0;
2186 if (POINTER_TYPE_P (TREE_TYPE (expr))
2187 && POINTER_TYPE_P (TREE_TYPE (TREE_TYPE (expr)))
2188 && POINTER_TYPE_P (TREE_TYPE (TREE_OPERAND (expr, 0)))
2189 && (t = maybe_fold_offset_to_address (TREE_OPERAND (expr, 0),
2190 integer_zero_node,
2191 TREE_TYPE (TREE_TYPE (expr)))))
2192 return t;
2193 break;
2195 /* ??? Could handle more ARRAY_REFs here, as a variant of INDIRECT_REF.
2196 We'd only want to bother decomposing an existing ARRAY_REF if
2197 the base array is found to have another offset contained within.
2198 Otherwise we'd be wasting time. */
2199 case ARRAY_REF:
2200 /* If we are not processing expressions found within an
2201 ADDR_EXPR, then we can fold constant array references. */
2202 if (!*inside_addr_expr_p)
2203 t = fold_read_from_constant_string (expr);
2204 else
2205 t = NULL;
2206 break;
2208 case ADDR_EXPR:
2209 *inside_addr_expr_p = true;
2210 t = walk_tree (&TREE_OPERAND (expr, 0), fold_stmt_r, data, NULL);
2211 *inside_addr_expr_p = false;
2212 if (t)
2213 return t;
2214 *walk_subtrees = 0;
2216 /* Make sure the value is properly considered constant, and so gets
2217 propagated as expected. */
2218 if (*changed_p)
2219 recompute_tree_invariant_for_addr_expr (expr);
2220 return NULL_TREE;
2222 case COMPONENT_REF:
2223 t = walk_tree (&TREE_OPERAND (expr, 0), fold_stmt_r, data, NULL);
2224 if (t)
2225 return t;
2226 *walk_subtrees = 0;
2228 /* Make sure the FIELD_DECL is actually a field in the type on the lhs.
2229 We've already checked that the records are compatible, so we should
2230 come up with a set of compatible fields. */
2232 tree expr_record = TREE_TYPE (TREE_OPERAND (expr, 0));
2233 tree expr_field = TREE_OPERAND (expr, 1);
2235 if (DECL_FIELD_CONTEXT (expr_field) != TYPE_MAIN_VARIANT (expr_record))
2237 expr_field = find_compatible_field (expr_record, expr_field);
2238 TREE_OPERAND (expr, 1) = expr_field;
2241 break;
2243 case TARGET_MEM_REF:
2244 t = maybe_fold_tmr (expr);
2245 break;
2247 case POINTER_PLUS_EXPR:
2248 t = walk_tree (&TREE_OPERAND (expr, 0), fold_stmt_r, data, NULL);
2249 if (t)
2250 return t;
2251 t = walk_tree (&TREE_OPERAND (expr, 1), fold_stmt_r, data, NULL);
2252 if (t)
2253 return t;
2254 *walk_subtrees = 0;
2256 t = maybe_fold_stmt_addition (TREE_TYPE (expr),
2257 TREE_OPERAND (expr, 0),
2258 TREE_OPERAND (expr, 1));
2259 break;
2261 case COND_EXPR:
2262 if (COMPARISON_CLASS_P (TREE_OPERAND (expr, 0)))
2264 tree op0 = TREE_OPERAND (expr, 0);
2265 tree tem;
2266 bool set;
2268 fold_defer_overflow_warnings ();
2269 tem = fold_binary (TREE_CODE (op0), TREE_TYPE (op0),
2270 TREE_OPERAND (op0, 0),
2271 TREE_OPERAND (op0, 1));
2272 /* This is actually a conditional expression, not a GIMPLE
2273 conditional statement; however, the valid_gimple_rhs_p
2274 test still applies. */
2275 set = tem && is_gimple_condexpr (tem) && valid_gimple_rhs_p (tem);
2276 fold_undefer_overflow_warnings (set, fold_stmt_r_data->stmt, 0);
2277 if (set)
2279 COND_EXPR_COND (expr) = tem;
2280 t = expr;
2281 break;
2284 return NULL_TREE;
2286 default:
2287 return NULL_TREE;
2290 if (t)
2292 /* Preserve the volatility of the original expression.
2293 We can end up with a plain decl here, which is shared,
2294 and we shouldn't mess with its flags. */
2295 if (!SSA_VAR_P (t))
2296 TREE_THIS_VOLATILE (t) = volatile_p;
2297 *expr_p = t;
2298 *changed_p = true;
2301 return NULL_TREE;
2304 /* Return in *LENGTH the string length, the maximum string length,
2305 or the maximum value of ARG, depending on TYPE.
2306 If ARG is an SSA name variable, follow its use-def chains. Return
2307 false if we are unable to determine the length or value, or if, for
2308 TYPE == 0, a value has already been recorded in *LENGTH and it
2309 differs from the one we determine. VISITED is a bitmap of visited
2310 variables. TYPE is 0 if the string length should be returned, 1 for
2311 the maximum string length, and 2 for the maximum value ARG can have. */
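/* For instance, for ARG == "hello" and TYPE == 0 this determines a length
   of 5; for TYPE == 2, a non-SSA ARG must be a non-negative INTEGER_CST,
   which then serves as the maximum value.  */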
2313 static bool
2314 get_maxval_strlen (tree arg, tree *length, bitmap visited, int type)
2316 tree var, val;
2317 gimple def_stmt;
2319 if (TREE_CODE (arg) != SSA_NAME)
2321 if (TREE_CODE (arg) == COND_EXPR)
2322 return get_maxval_strlen (COND_EXPR_THEN (arg), length, visited, type)
2323 && get_maxval_strlen (COND_EXPR_ELSE (arg), length, visited, type);
2324 /* We can end up with &(*iftmp_1)[0] here as well, so handle it. */
2325 else if (TREE_CODE (arg) == ADDR_EXPR
2326 && TREE_CODE (TREE_OPERAND (arg, 0)) == ARRAY_REF
2327 && integer_zerop (TREE_OPERAND (TREE_OPERAND (arg, 0), 1)))
2329 tree aop0 = TREE_OPERAND (TREE_OPERAND (arg, 0), 0);
2330 if (TREE_CODE (aop0) == INDIRECT_REF
2331 && TREE_CODE (TREE_OPERAND (aop0, 0)) == SSA_NAME)
2332 return get_maxval_strlen (TREE_OPERAND (aop0, 0),
2333 length, visited, type);
2336 if (type == 2)
2338 val = arg;
2339 if (TREE_CODE (val) != INTEGER_CST
2340 || tree_int_cst_sgn (val) < 0)
2341 return false;
2343 else
2344 val = c_strlen (arg, 1);
2345 if (!val)
2346 return false;
2348 if (*length)
2350 if (type > 0)
2352 if (TREE_CODE (*length) != INTEGER_CST
2353 || TREE_CODE (val) != INTEGER_CST)
2354 return false;
2356 if (tree_int_cst_lt (*length, val))
2357 *length = val;
2358 return true;
2360 else if (simple_cst_equal (val, *length) != 1)
2361 return false;
2364 *length = val;
2365 return true;
2368 /* If we were already here, break the infinite cycle. */
2369 if (bitmap_bit_p (visited, SSA_NAME_VERSION (arg)))
2370 return true;
2371 bitmap_set_bit (visited, SSA_NAME_VERSION (arg));
2373 var = arg;
2374 def_stmt = SSA_NAME_DEF_STMT (var);
2376 switch (gimple_code (def_stmt))
2378 case GIMPLE_ASSIGN:
2379 /* The RHS of the statement defining VAR must either have a
2380 constant length or come from another SSA_NAME with a constant
2381 length. */
2382 if (gimple_assign_single_p (def_stmt)
2383 || gimple_assign_unary_nop_p (def_stmt))
2385 tree rhs = gimple_assign_rhs1 (def_stmt);
2386 return get_maxval_strlen (rhs, length, visited, type);
2388 return false;
2390 case GIMPLE_PHI:
2392 /* All the arguments of the PHI node must have the same constant
2393 length. */
2394 unsigned i;
2396 for (i = 0; i < gimple_phi_num_args (def_stmt); i++)
2398 tree arg = gimple_phi_arg (def_stmt, i)->def;
2400 /* If this PHI has itself as an argument, we cannot
2401 determine the string length of this argument. However,
2402 if we can find a constant string length for the other
2403 PHI args then we can still be sure that this is a
2404 constant string length. So be optimistic and just
2405 continue with the next argument. */
2406 if (arg == gimple_phi_result (def_stmt))
2407 continue;
2409 if (!get_maxval_strlen (arg, length, visited, type))
2410 return false;
2413 return true;
2415 default:
2416 return false;
2421 /* Fold the builtin call in statement STMT. Return a simplified tree.
2422 We may return a non-constant expression, including another call
2423 to a different function and with different arguments, e.g.,
2424 substituting memcpy for strcpy when the string length is known.
2425 Note that some builtins expand into inline code that may not
2426 be valid in GIMPLE. Callers must take care. */
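/* For instance, a call strcpy (dst, "abc") -- with dst a hypothetical
   character buffer -- may be rewritten as memcpy (dst, "abc", 4) once
   the source length is known.  */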
2428 static tree
2429 ccp_fold_builtin (gimple stmt)
2431 tree result, val[3];
2432 tree callee, a;
2433 int arg_mask, i, type;
2434 bitmap visited;
2435 bool ignore;
2436 int nargs;
2438 gcc_assert (is_gimple_call (stmt));
2440 ignore = (gimple_call_lhs (stmt) == NULL);
2442 /* First try the generic builtin folder. If that succeeds, return the
2443 result directly. */
2444 result = fold_call_stmt (stmt, ignore);
2445 if (result)
2447 if (ignore)
2448 STRIP_NOPS (result);
2449 return result;
2452 /* Ignore MD builtins. */
2453 callee = gimple_call_fndecl (stmt);
2454 if (DECL_BUILT_IN_CLASS (callee) == BUILT_IN_MD)
2455 return NULL_TREE;
2457 /* If the builtin could not be folded, and it has no argument list,
2458 we're done. */
2459 nargs = gimple_call_num_args (stmt);
2460 if (nargs == 0)
2461 return NULL_TREE;
2463 /* Limit the work only for builtins we know how to simplify. */
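/* In the switch below, ARG_MASK is a bit mask selecting the call
   arguments that get_maxval_strlen is run on (bit I set means argument I),
   and TYPE says what is computed for them: 0 for the exact string length,
   1 for a maximum string length, and 2 for a maximum value.  */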
2464 switch (DECL_FUNCTION_CODE (callee))
2466 case BUILT_IN_STRLEN:
2467 case BUILT_IN_FPUTS:
2468 case BUILT_IN_FPUTS_UNLOCKED:
2469 arg_mask = 1;
2470 type = 0;
2471 break;
2472 case BUILT_IN_STRCPY:
2473 case BUILT_IN_STRNCPY:
2474 arg_mask = 2;
2475 type = 0;
2476 break;
2477 case BUILT_IN_MEMCPY_CHK:
2478 case BUILT_IN_MEMPCPY_CHK:
2479 case BUILT_IN_MEMMOVE_CHK:
2480 case BUILT_IN_MEMSET_CHK:
2481 case BUILT_IN_STRNCPY_CHK:
2482 arg_mask = 4;
2483 type = 2;
2484 break;
2485 case BUILT_IN_STRCPY_CHK:
2486 case BUILT_IN_STPCPY_CHK:
2487 arg_mask = 2;
2488 type = 1;
2489 break;
2490 case BUILT_IN_SNPRINTF_CHK:
2491 case BUILT_IN_VSNPRINTF_CHK:
2492 arg_mask = 2;
2493 type = 2;
2494 break;
2495 default:
2496 return NULL_TREE;
2499 /* Try to use the dataflow information gathered by the CCP process. */
2500 visited = BITMAP_ALLOC (NULL);
2502 memset (val, 0, sizeof (val));
2503 for (i = 0; i < nargs; i++)
2505 if ((arg_mask >> i) & 1)
2507 a = gimple_call_arg (stmt, i);
2508 bitmap_clear (visited);
2509 if (!get_maxval_strlen (a, &val[i], visited, type))
2510 val[i] = NULL_TREE;
2514 BITMAP_FREE (visited);
2516 result = NULL_TREE;
2517 switch (DECL_FUNCTION_CODE (callee))
2519 case BUILT_IN_STRLEN:
2520 if (val[0])
2522 tree new_val =
2523 fold_convert (TREE_TYPE (gimple_call_lhs (stmt)), val[0]);
2525 /* If the result is not a valid gimple value, or not a cast
2526 of a valid gimple value, then we cannot use the result. */
2527 if (is_gimple_val (new_val)
2528 || (is_gimple_cast (new_val)
2529 && is_gimple_val (TREE_OPERAND (new_val, 0))))
2530 return new_val;
2532 break;
2534 case BUILT_IN_STRCPY:
2535 if (val[1] && is_gimple_val (val[1]) && nargs == 2)
2536 result = fold_builtin_strcpy (callee,
2537 gimple_call_arg (stmt, 0),
2538 gimple_call_arg (stmt, 1),
2539 val[1]);
2540 break;
2542 case BUILT_IN_STRNCPY:
2543 if (val[1] && is_gimple_val (val[1]) && nargs == 3)
2544 result = fold_builtin_strncpy (callee,
2545 gimple_call_arg (stmt, 0),
2546 gimple_call_arg (stmt, 1),
2547 gimple_call_arg (stmt, 2),
2548 val[1]);
2549 break;
2551 case BUILT_IN_FPUTS:
2552 result = fold_builtin_fputs (gimple_call_arg (stmt, 0),
2553 gimple_call_arg (stmt, 1),
2554 ignore, false, val[0]);
2555 break;
2557 case BUILT_IN_FPUTS_UNLOCKED:
2558 result = fold_builtin_fputs (gimple_call_arg (stmt, 0),
2559 gimple_call_arg (stmt, 1),
2560 ignore, true, val[0]);
2561 break;
2563 case BUILT_IN_MEMCPY_CHK:
2564 case BUILT_IN_MEMPCPY_CHK:
2565 case BUILT_IN_MEMMOVE_CHK:
2566 case BUILT_IN_MEMSET_CHK:
2567 if (val[2] && is_gimple_val (val[2]))
2568 result = fold_builtin_memory_chk (callee,
2569 gimple_call_arg (stmt, 0),
2570 gimple_call_arg (stmt, 1),
2571 gimple_call_arg (stmt, 2),
2572 gimple_call_arg (stmt, 3),
2573 val[2], ignore,
2574 DECL_FUNCTION_CODE (callee));
2575 break;
2577 case BUILT_IN_STRCPY_CHK:
2578 case BUILT_IN_STPCPY_CHK:
2579 if (val[1] && is_gimple_val (val[1]))
2580 result = fold_builtin_stxcpy_chk (callee,
2581 gimple_call_arg (stmt, 0),
2582 gimple_call_arg (stmt, 1),
2583 gimple_call_arg (stmt, 2),
2584 val[1], ignore,
2585 DECL_FUNCTION_CODE (callee));
2586 break;
2588 case BUILT_IN_STRNCPY_CHK:
2589 if (val[2] && is_gimple_val (val[2]))
2590 result = fold_builtin_strncpy_chk (gimple_call_arg (stmt, 0),
2591 gimple_call_arg (stmt, 1),
2592 gimple_call_arg (stmt, 2),
2593 gimple_call_arg (stmt, 3),
2594 val[2]);
2595 break;
2597 case BUILT_IN_SNPRINTF_CHK:
2598 case BUILT_IN_VSNPRINTF_CHK:
2599 if (val[1] && is_gimple_val (val[1]))
2600 result = gimple_fold_builtin_snprintf_chk (stmt, val[1],
2601 DECL_FUNCTION_CODE (callee));
2602 break;
2604 default:
2605 gcc_unreachable ();
2608 if (result && ignore)
2609 result = fold_ignored_result (result);
2610 return result;
2613 /* Attempt to fold an assignment statement pointed-to by SI. Returns a
2614 replacement rhs for the statement or NULL_TREE if no simplification
2615 could be made. It is assumed that the operands have been previously
2616 folded. */
2618 static tree
2619 fold_gimple_assign (gimple_stmt_iterator *si)
2621 gimple stmt = gsi_stmt (*si);
2622 enum tree_code subcode = gimple_assign_rhs_code (stmt);
2624 tree result = NULL;
2626 switch (get_gimple_rhs_class (subcode))
2628 case GIMPLE_SINGLE_RHS:
2630 tree rhs = gimple_assign_rhs1 (stmt);
2632 /* Try to fold a conditional expression. */
2633 if (TREE_CODE (rhs) == COND_EXPR)
2635 tree temp = fold (COND_EXPR_COND (rhs));
2636 if (temp != COND_EXPR_COND (rhs))
2637 result = fold_build3 (COND_EXPR, TREE_TYPE (rhs), temp,
2638 COND_EXPR_THEN (rhs), COND_EXPR_ELSE (rhs));
2641 /* If we couldn't fold the RHS, hand over to the generic
2642 fold routines. */
2643 if (result == NULL_TREE)
2644 result = fold (rhs);
2646 /* Strip away useless type conversions: both the NON_LVALUE_EXPRs
2647 that may have been added by fold, and "useless" type
2648 conversions that might now be apparent due to propagation. */
2649 STRIP_USELESS_TYPE_CONVERSION (result);
2651 if (result != rhs && valid_gimple_rhs_p (result))
2652 return result;
2653 else
2654 /* It is possible that fold_stmt_r simplified the RHS.
2655 Make sure that the subcode of this statement still
2656 reflects the principal operator of the rhs operand. */
2657 return rhs;
2659 break;
2661 case GIMPLE_UNARY_RHS:
2663 tree rhs = gimple_assign_rhs1 (stmt);
2665 result = fold_unary (subcode, gimple_expr_type (stmt), rhs);
2666 if (result)
2668 /* If the operation was a conversion, do _not_ mark a
2669 resulting constant with TREE_OVERFLOW if the original
2670 constant was not. These conversions have
2671 implementation-defined behavior and retaining the TREE_OVERFLOW
2672 flag here would confuse later passes such as VRP. */
2673 if (CONVERT_EXPR_CODE_P (subcode)
2674 && TREE_CODE (result) == INTEGER_CST
2675 && TREE_CODE (rhs) == INTEGER_CST)
2676 TREE_OVERFLOW (result) = TREE_OVERFLOW (rhs);
2678 STRIP_USELESS_TYPE_CONVERSION (result);
2679 if (valid_gimple_rhs_p (result))
2680 return result;
2682 else if (CONVERT_EXPR_CODE_P (subcode)
2683 && POINTER_TYPE_P (gimple_expr_type (stmt))
2684 && POINTER_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (stmt))))
2686 tree type = gimple_expr_type (stmt);
2687 tree t = maybe_fold_offset_to_address (gimple_assign_rhs1 (stmt),
2688 integer_zero_node, type);
2689 if (t)
2690 return t;
2693 break;
2695 case GIMPLE_BINARY_RHS:
2696 /* Try to fold pointer addition. */
2697 if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR)
2698 result = maybe_fold_stmt_addition (
2699 TREE_TYPE (gimple_assign_lhs (stmt)),
2700 gimple_assign_rhs1 (stmt),
2701 gimple_assign_rhs2 (stmt));
2703 if (!result)
2704 result = fold_binary (subcode,
2705 TREE_TYPE (gimple_assign_lhs (stmt)),
2706 gimple_assign_rhs1 (stmt),
2707 gimple_assign_rhs2 (stmt));
2709 if (result)
2711 STRIP_USELESS_TYPE_CONVERSION (result);
2712 if (valid_gimple_rhs_p (result))
2713 return result;
2715 break;
2717 case GIMPLE_INVALID_RHS:
2718 gcc_unreachable ();
2721 return NULL_TREE;
2724 /* Attempt to fold a conditional statement. Return true if any changes were
2725 made. We only attempt to fold the condition expression, and do not perform
2726 any transformation that would require alteration of the cfg. It is
2727 assumed that the operands have been previously folded. */
2729 static bool
2730 fold_gimple_cond (gimple stmt)
2732 tree result = fold_binary (gimple_cond_code (stmt),
2733 boolean_type_node,
2734 gimple_cond_lhs (stmt),
2735 gimple_cond_rhs (stmt));
2737 if (result)
2739 STRIP_USELESS_TYPE_CONVERSION (result);
2740 if (is_gimple_condexpr (result) && valid_gimple_rhs_p (result))
2742 gimple_cond_set_condition_from_tree (stmt, result);
2743 return true;
2747 return false;
2751 /* Attempt to fold a call statement referenced by the statement iterator GSI.
2752 The statement may be replaced by another statement, e.g., if the call
2753 simplifies to a constant value. Return true if any changes were made.
2754 It is assumed that the operands have been previously folded. */
2756 static bool
2757 fold_gimple_call (gimple_stmt_iterator *gsi)
2759 gimple stmt = gsi_stmt (*gsi);
2761 tree callee = gimple_call_fndecl (stmt);
2763 /* Check for builtins that CCP can handle using information not
2764 available in the generic fold routines. */
2765 if (callee && DECL_BUILT_IN (callee))
2767 tree result = ccp_fold_builtin (stmt);
2769 if (result)
2770 return update_call_from_tree (gsi, result);
2772 else
2774 /* Check for a resolvable OBJ_TYPE_REF. The only cases we can resolve
2775 here are those where we've propagated the address of a decl into the
2776 object slot. */
2777 /* ??? Should perhaps do this in fold proper. However, doing it
2778 there requires that we create a new CALL_EXPR, and that requires
2779 copying EH region info to the new node. Easier to just do it
2780 here, where we can simply smash the call operand. */
2781 /* ??? Is there a good reason not to do this in fold_stmt_inplace? */
2782 callee = gimple_call_fn (stmt);
2783 if (TREE_CODE (callee) == OBJ_TYPE_REF
2784 && lang_hooks.fold_obj_type_ref
2785 && TREE_CODE (OBJ_TYPE_REF_OBJECT (callee)) == ADDR_EXPR
2786 && DECL_P (TREE_OPERAND
2787 (OBJ_TYPE_REF_OBJECT (callee), 0)))
2789 tree t;
2791 /* ??? Caution: Broken ADDR_EXPR semantics mean that
2792 looking at the type of the operand of the addr_expr
2793 can yield an array type. See silly exception in
2794 check_pointer_types_r. */
2795 t = TREE_TYPE (TREE_TYPE (OBJ_TYPE_REF_OBJECT (callee)));
2796 t = lang_hooks.fold_obj_type_ref (callee, t);
2797 if (t)
2799 gimple_call_set_fn (stmt, t);
2800 return true;
2805 return false;
2808 /* Fold the statement pointed to by GSI. In some cases, this function may
2809 replace the whole statement with a new one. Returns true iff folding
2810 makes any changes. */
2812 bool
2813 fold_stmt (gimple_stmt_iterator *gsi)
2815 tree res;
2816 struct fold_stmt_r_data fold_stmt_r_data;
2817 struct walk_stmt_info wi;
2819 bool changed = false;
2820 bool inside_addr_expr = false;
2822 gimple stmt = gsi_stmt (*gsi);
2824 fold_stmt_r_data.stmt = stmt;
2825 fold_stmt_r_data.changed_p = &changed;
2826 fold_stmt_r_data.inside_addr_expr_p = &inside_addr_expr;
2828 memset (&wi, 0, sizeof (wi));
2829 wi.info = &fold_stmt_r_data;
2831 /* Fold the individual operands.
2832 For example, fold instances of *&VAR into VAR, etc. */
2833 res = walk_gimple_op (stmt, fold_stmt_r, &wi);
2834 gcc_assert (!res);
2836 /* Fold the main computation performed by the statement. */
2837 switch (gimple_code (stmt))
2839 case GIMPLE_ASSIGN:
2841 tree new_rhs = fold_gimple_assign (gsi);
2842 if (new_rhs != NULL_TREE)
2844 gimple_assign_set_rhs_from_tree (gsi, new_rhs);
2845 changed = true;
2847 stmt = gsi_stmt (*gsi);
2848 break;
2850 case GIMPLE_COND:
2851 changed |= fold_gimple_cond (stmt);
2852 break;
2853 case GIMPLE_CALL:
2854 /* The entire statement may be replaced in this case. */
2855 changed |= fold_gimple_call (gsi);
2856 break;
2858 default:
2859 return changed;
2860 break;
2863 return changed;
2866 /* Perform the minimal folding on statement STMT. Only operations like
2867 *&x created by constant propagation are handled. The statement cannot
2868 be replaced with a new one. Return true if the statement was
2869 changed, false otherwise. */
2871 bool
2872 fold_stmt_inplace (gimple stmt)
2874 tree res;
2875 struct fold_stmt_r_data fold_stmt_r_data;
2876 struct walk_stmt_info wi;
2877 gimple_stmt_iterator si;
2879 bool changed = false;
2880 bool inside_addr_expr = false;
2882 fold_stmt_r_data.stmt = stmt;
2883 fold_stmt_r_data.changed_p = &changed;
2884 fold_stmt_r_data.inside_addr_expr_p = &inside_addr_expr;
2886 memset (&wi, 0, sizeof (wi));
2887 wi.info = &fold_stmt_r_data;
2889 /* Fold the individual operands.
2890 For example, fold instances of *&VAR into VAR, etc.
2892 It appears that, at one time, maybe_fold_stmt_indirect
2893 would cause the walk to return non-null in order to
2894 signal that the entire statement should be replaced with
2895 a call to __builtin_trap. This functionality is currently
2896 disabled, as noted in a FIXME, and cannot be supported here. */
2897 res = walk_gimple_op (stmt, fold_stmt_r, &wi);
2898 gcc_assert (!res);
2900 /* Fold the main computation performed by the statement. */
2901 switch (gimple_code (stmt))
2903 case GIMPLE_ASSIGN:
2905 unsigned old_num_ops;
2906 tree new_rhs;
2907 old_num_ops = gimple_num_ops (stmt);
2908 si = gsi_for_stmt (stmt);
2909 new_rhs = fold_gimple_assign (&si);
2910 if (new_rhs != NULL_TREE
2911 && get_gimple_rhs_num_ops (TREE_CODE (new_rhs)) < old_num_ops)
2913 gimple_assign_set_rhs_from_tree (&si, new_rhs);
2914 changed = true;
2916 gcc_assert (gsi_stmt (si) == stmt);
2917 break;
2919 case GIMPLE_COND:
2920 changed |= fold_gimple_cond (stmt);
2921 break;
2923 default:
2924 break;
2927 return changed;
2930 /* Try to optimize out __builtin_stack_restore. Optimize it out
2931 if there is another __builtin_stack_restore in the same basic
2932 block and no calls or ASM_EXPRs are in between, or if this block's
2933 only outgoing edge is to EXIT_BLOCK and there are no calls or
2934 ASM_EXPRs after this __builtin_stack_restore. */
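/* Roughly, in a block like

     t_1 = __builtin_stack_save ();
     ...                               <- no calls or ASM_EXPRs
     __builtin_stack_restore (t_1);    <- the call being examined
     ...                               <- no calls or ASM_EXPRs
     __builtin_stack_restore (t_2);

   the first restore is redundant: it is folded away and the matching
   __builtin_stack_save call is turned into an assignment of constant
   zero (the names t_1 and t_2 are only illustrative).  */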
2936 static tree
2937 optimize_stack_restore (gimple_stmt_iterator i)
2939 tree callee, rhs;
2940 gimple stmt, stack_save;
2941 gimple_stmt_iterator stack_save_gsi;
2943 basic_block bb = gsi_bb (i);
2944 gimple call = gsi_stmt (i);
2946 if (gimple_code (call) != GIMPLE_CALL
2947 || gimple_call_num_args (call) != 1
2948 || TREE_CODE (gimple_call_arg (call, 0)) != SSA_NAME
2949 || !POINTER_TYPE_P (TREE_TYPE (gimple_call_arg (call, 0))))
2950 return NULL_TREE;
2952 for (gsi_next (&i); !gsi_end_p (i); gsi_next (&i))
2954 stmt = gsi_stmt (i);
2955 if (gimple_code (stmt) == GIMPLE_ASM)
2956 return NULL_TREE;
2957 if (gimple_code (stmt) != GIMPLE_CALL)
2958 continue;
2960 callee = gimple_call_fndecl (stmt);
2961 if (!callee || DECL_BUILT_IN_CLASS (callee) != BUILT_IN_NORMAL)
2962 return NULL_TREE;
2964 if (DECL_FUNCTION_CODE (callee) == BUILT_IN_STACK_RESTORE)
2965 break;
2968 if (gsi_end_p (i)
2969 && (! single_succ_p (bb)
2970 || single_succ_edge (bb)->dest != EXIT_BLOCK_PTR))
2971 return NULL_TREE;
2973 stack_save = SSA_NAME_DEF_STMT (gimple_call_arg (call, 0));
2974 if (gimple_code (stack_save) != GIMPLE_CALL
2975 || gimple_call_lhs (stack_save) != gimple_call_arg (call, 0)
2976 || stmt_could_throw_p (stack_save)
2977 || !has_single_use (gimple_call_arg (call, 0)))
2978 return NULL_TREE;
2980 callee = gimple_call_fndecl (stack_save);
2981 if (!callee
2982 || DECL_BUILT_IN_CLASS (callee) != BUILT_IN_NORMAL
2983 || DECL_FUNCTION_CODE (callee) != BUILT_IN_STACK_SAVE
2984 || gimple_call_num_args (stack_save) != 0)
2985 return NULL_TREE;
2987 stack_save_gsi = gsi_for_stmt (stack_save);
2988 push_stmt_changes (gsi_stmt_ptr (&stack_save_gsi));
2989 rhs = build_int_cst (TREE_TYPE (gimple_call_arg (call, 0)), 0);
2990 if (!update_call_from_tree (&stack_save_gsi, rhs))
2992 discard_stmt_changes (gsi_stmt_ptr (&stack_save_gsi));
2993 return NULL_TREE;
2995 pop_stmt_changes (gsi_stmt_ptr (&stack_save_gsi));
2997 /* No effect, so the statement will be deleted. */
2998 return integer_zero_node;
3001 /* If the va_list type is a simple pointer and nothing special is needed,
3002 optimize __builtin_va_start (&ap, 0) into ap = __builtin_next_arg (0),
3003 optimize __builtin_va_end (&ap) away as a no-op, and turn
3004 __builtin_va_copy into a simple pointer assignment. */
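/* For instance, on a target whose va_list is a plain char * pointer,
   __builtin_va_copy (&d, s) becomes the assignment d = s, and
   __builtin_va_end (&ap) disappears entirely (d, s and ap being
   hypothetical va_list variables).  */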
3006 static tree
3007 optimize_stdarg_builtin (gimple call)
3009 tree callee, lhs, rhs, cfun_va_list;
3010 bool va_list_simple_ptr;
3012 if (gimple_code (call) != GIMPLE_CALL)
3013 return NULL_TREE;
3015 callee = gimple_call_fndecl (call);
3017 cfun_va_list = targetm.fn_abi_va_list (callee);
3018 va_list_simple_ptr = POINTER_TYPE_P (cfun_va_list)
3019 && (TREE_TYPE (cfun_va_list) == void_type_node
3020 || TREE_TYPE (cfun_va_list) == char_type_node);
3022 switch (DECL_FUNCTION_CODE (callee))
3024 case BUILT_IN_VA_START:
3025 if (!va_list_simple_ptr
3026 || targetm.expand_builtin_va_start != NULL
3027 || built_in_decls[BUILT_IN_NEXT_ARG] == NULL)
3028 return NULL_TREE;
3030 if (gimple_call_num_args (call) != 2)
3031 return NULL_TREE;
3033 lhs = gimple_call_arg (call, 0);
3034 if (!POINTER_TYPE_P (TREE_TYPE (lhs))
3035 || TYPE_MAIN_VARIANT (TREE_TYPE (TREE_TYPE (lhs)))
3036 != TYPE_MAIN_VARIANT (cfun_va_list))
3037 return NULL_TREE;
3039 lhs = build_fold_indirect_ref (lhs);
3040 rhs = build_call_expr (built_in_decls[BUILT_IN_NEXT_ARG],
3041 1, integer_zero_node);
3042 rhs = fold_convert (TREE_TYPE (lhs), rhs);
3043 return build2 (MODIFY_EXPR, TREE_TYPE (lhs), lhs, rhs);
3045 case BUILT_IN_VA_COPY:
3046 if (!va_list_simple_ptr)
3047 return NULL_TREE;
3049 if (gimple_call_num_args (call) != 2)
3050 return NULL_TREE;
3052 lhs = gimple_call_arg (call, 0);
3053 if (!POINTER_TYPE_P (TREE_TYPE (lhs))
3054 || TYPE_MAIN_VARIANT (TREE_TYPE (TREE_TYPE (lhs)))
3055 != TYPE_MAIN_VARIANT (cfun_va_list))
3056 return NULL_TREE;
3058 lhs = build_fold_indirect_ref (lhs);
3059 rhs = gimple_call_arg (call, 1);
3060 if (TYPE_MAIN_VARIANT (TREE_TYPE (rhs))
3061 != TYPE_MAIN_VARIANT (cfun_va_list))
3062 return NULL_TREE;
3064 rhs = fold_convert (TREE_TYPE (lhs), rhs);
3065 return build2 (MODIFY_EXPR, TREE_TYPE (lhs), lhs, rhs);
3067 case BUILT_IN_VA_END:
3068 /* No effect, so the statement will be deleted. */
3069 return integer_zero_node;
3071 default:
3072 gcc_unreachable ();
3076 /* Convert EXPR into a GIMPLE value suitable for substitution on the
3077 RHS of an assignment. Insert the necessary statements before
3078 iterator *SI_P. The statement at *SI_P, which must be a GIMPLE_CALL,
3079 is replaced. If the call is expected to produce a result, then it
3080 is replaced by an assignment of the new RHS to the result variable.
3081 If the result is to be ignored, then the call is replaced by a
3082 GIMPLE_NOP. */
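/* This is used below, in execute_fold_all_builtins, when
   update_call_from_tree cannot install the folded result of a builtin
   call directly because it is not a valid GIMPLE rhs.  */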
3084 static void
3085 gimplify_and_update_call_from_tree (gimple_stmt_iterator *si_p, tree expr)
3087 tree lhs;
3088 tree tmp = NULL_TREE; /* Silence warning. */
3089 gimple stmt, new_stmt;
3090 gimple_stmt_iterator i;
3091 gimple_seq stmts = gimple_seq_alloc();
3092 struct gimplify_ctx gctx;
3094 stmt = gsi_stmt (*si_p);
3096 gcc_assert (is_gimple_call (stmt));
3098 lhs = gimple_call_lhs (stmt);
3100 push_gimplify_context (&gctx);
3102 if (lhs == NULL_TREE)
3103 gimplify_and_add (expr, &stmts);
3104 else
3105 tmp = get_initialized_tmp_var (expr, &stmts, NULL);
3107 pop_gimplify_context (NULL);
3109 if (gimple_has_location (stmt))
3110 annotate_all_with_location (stmts, gimple_location (stmt));
3112 /* The replacement can expose previously unreferenced variables. */
3113 for (i = gsi_start (stmts); !gsi_end_p (i); gsi_next (&i))
3115 new_stmt = gsi_stmt (i);
3116 find_new_referenced_vars (new_stmt);
3117 gsi_insert_before (si_p, new_stmt, GSI_NEW_STMT);
3118 mark_symbols_for_renaming (new_stmt);
3119 gsi_next (si_p);
3122 if (lhs == NULL_TREE)
3123 new_stmt = gimple_build_nop ();
3124 else
3126 new_stmt = gimple_build_assign (lhs, tmp);
3127 copy_virtual_operands (new_stmt, stmt);
3128 move_ssa_defining_stmt_for_defs (new_stmt, stmt);
3131 gimple_set_location (new_stmt, gimple_location (stmt));
3132 gsi_replace (si_p, new_stmt, false);
3135 /* A simple pass that attempts to fold all builtin functions. This pass
3136 is run after we've propagated as many constants as we can. */
3138 static unsigned int
3139 execute_fold_all_builtins (void)
3141 bool cfg_changed = false;
3142 basic_block bb;
3143 unsigned int todoflags = 0;
3145 FOR_EACH_BB (bb)
3147 gimple_stmt_iterator i;
3148 for (i = gsi_start_bb (bb); !gsi_end_p (i); )
3150 gimple stmt, old_stmt;
3151 tree callee, result;
3152 enum built_in_function fcode;
3154 stmt = gsi_stmt (i);
3156 if (gimple_code (stmt) != GIMPLE_CALL)
3158 gsi_next (&i);
3159 continue;
3161 callee = gimple_call_fndecl (stmt);
3162 if (!callee || DECL_BUILT_IN_CLASS (callee) != BUILT_IN_NORMAL)
3164 gsi_next (&i);
3165 continue;
3167 fcode = DECL_FUNCTION_CODE (callee);
3169 result = ccp_fold_builtin (stmt);
3171 if (result)
3172 gimple_remove_stmt_histograms (cfun, stmt);
3174 if (!result)
3175 switch (DECL_FUNCTION_CODE (callee))
3177 case BUILT_IN_CONSTANT_P:
3178 /* Resolve __builtin_constant_p. If it hasn't been
3179 folded to integer_one_node by now, it's fairly
3180 certain that the value simply isn't constant. */
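/* E.g., in code guarded by if (__builtin_constant_p (n)), with n a
   hypothetical non-constant variable, the call folds to 0 here and the
   guarded code becomes dead.  */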
3181 result = integer_zero_node;
3182 break;
3184 case BUILT_IN_STACK_RESTORE:
3185 result = optimize_stack_restore (i);
3186 if (result)
3187 break;
3188 gsi_next (&i);
3189 continue;
3191 case BUILT_IN_VA_START:
3192 case BUILT_IN_VA_END:
3193 case BUILT_IN_VA_COPY:
3194 /* These shouldn't be folded before pass_stdarg. */
3195 result = optimize_stdarg_builtin (stmt);
3196 if (result)
3197 break;
3198 /* FALLTHRU */
3200 default:
3201 gsi_next (&i);
3202 continue;
3205 if (dump_file && (dump_flags & TDF_DETAILS))
3207 fprintf (dump_file, "Simplified\n ");
3208 print_gimple_stmt (dump_file, stmt, 0, dump_flags);
3211 old_stmt = stmt;
3212 push_stmt_changes (gsi_stmt_ptr (&i));
3214 if (!update_call_from_tree (&i, result))
3216 gimplify_and_update_call_from_tree (&i, result);
3217 todoflags |= TODO_rebuild_alias;
3220 stmt = gsi_stmt (i);
3221 pop_stmt_changes (gsi_stmt_ptr (&i));
3223 if (maybe_clean_or_replace_eh_stmt (old_stmt, stmt)
3224 && gimple_purge_dead_eh_edges (bb))
3225 cfg_changed = true;
3227 if (dump_file && (dump_flags & TDF_DETAILS))
3229 fprintf (dump_file, "to\n ");
3230 print_gimple_stmt (dump_file, stmt, 0, dump_flags);
3231 fprintf (dump_file, "\n");
3234 /* Retry the same statement if it changed into another
3235 builtin; there might be new opportunities now. */
3236 if (gimple_code (stmt) != GIMPLE_CALL)
3238 gsi_next (&i);
3239 continue;
3241 callee = gimple_call_fndecl (stmt);
3242 if (!callee
3243 || DECL_BUILT_IN_CLASS (callee) != BUILT_IN_NORMAL
3244 || DECL_FUNCTION_CODE (callee) == fcode)
3245 gsi_next (&i);
3249 /* Delete unreachable blocks. */
3250 if (cfg_changed)
3251 todoflags |= TODO_cleanup_cfg;
3253 return todoflags;
3257 struct gimple_opt_pass pass_fold_builtins =
3260 GIMPLE_PASS,
3261 "fab", /* name */
3262 NULL, /* gate */
3263 execute_fold_all_builtins, /* execute */
3264 NULL, /* sub */
3265 NULL, /* next */
3266 0, /* static_pass_number */
3267 0, /* tv_id */
3268 PROP_cfg | PROP_ssa, /* properties_required */
3269 0, /* properties_provided */
3270 0, /* properties_destroyed */
3271 0, /* todo_flags_start */
3272 TODO_dump_func
3273 | TODO_verify_ssa
3274 | TODO_update_ssa /* todo_flags_finish */