gcc/tree-ssa-ccp.c
/* Conditional constant propagation pass for the GNU compiler.
   Copyright (C) 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008
   Free Software Foundation, Inc.
   Adapted from original RTL SSA-CCP by Daniel Berlin <dberlin@dberlin.org>
   Adapted to GIMPLE trees by Diego Novillo <dnovillo@redhat.com>

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the
Free Software Foundation; either version 3, or (at your option) any
later version.

GCC is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */
/* Conditional constant propagation (CCP) is based on the SSA
   propagation engine (tree-ssa-propagate.c).  Constant assignments of
   the form VAR = CST are propagated from the assignments into uses of
   VAR, which in turn may generate new constants.  The simulation uses
   a four level lattice to keep track of constant values associated
   with SSA names.  Given an SSA name V_i, it may take one of the
   following values:

        UNINITIALIZED ->  the initial state of the value.  This value
                          is replaced with a correct initial value
                          the first time the value is used, so the
                          rest of the pass does not need to care about
                          it.  Using this value simplifies initialization
                          of the pass, and prevents us from needlessly
                          scanning statements that are never reached.

        UNDEFINED     ->  V_i is a local variable whose definition
                          has not been processed yet.  Therefore we
                          don't yet know if its value is a constant
                          or not.

        CONSTANT      ->  V_i has been found to hold a constant
                          value C.

        VARYING       ->  V_i cannot take a constant value, or if it
                          does, it is not possible to determine it
                          at compile time.
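   As a purely illustrative example (not taken from any real dump),
   consider the GIMPLE sequence

        x_1 = 4;
        y_2 = x_1 + 6;

   x_1 starts out UNDEFINED, becomes CONSTANT 4 when its defining
   statement is simulated, and the SSA edge to the second statement is
   added to the worklist; when that statement is visited in turn, y_2
   becomes CONSTANT 10.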
   The core of SSA-CCP is in ccp_visit_stmt and ccp_visit_phi_node:

   1- In ccp_visit_stmt, we are interested in assignments whose RHS
      evaluates into a constant and conditional jumps whose predicate
      evaluates into a boolean true or false.  When an assignment of
      the form V_i = CONST is found, V_i's lattice value is set to
      CONSTANT and CONST is associated with it.  This causes the
      propagation engine to add all the SSA edges coming out of the
      assignment into the worklists, so that statements that use V_i
      can be visited.

      If the statement is a conditional with a constant predicate, we
      mark the outgoing edges as executable or not executable
      depending on the predicate's value.  This is then used when
      visiting PHI nodes to know when a PHI argument can be ignored.
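      For instance (an illustrative fragment, not a real dump):

        if (i_2 > 10)
          goto <bb 3>;
        else
          goto <bb 4>;

      If i_2 is known to be CONSTANT 5, the predicate folds to false,
      so only the edge to <bb 4> is marked executable; PHI arguments
      flowing in from <bb 3> will later be ignored by the meet
      operation.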
   2- In ccp_visit_phi_node, if all the PHI arguments evaluate to the
      same constant C, then the LHS of the PHI is set to C.  This
      evaluation is known as the "meet operation".  Since one of the
      goals of this evaluation is to optimistically return constant
      values as often as possible, it uses two main short cuts:

      - If an argument is flowing in through a non-executable edge, it
        is ignored.  This is useful in cases like this:

                if (PRED)
                  a_9 = 3;
                else
                  a_10 = 100;
                a_11 = PHI (a_9, a_10)

        If PRED is known to always evaluate to false, then we can
        assume that a_11 will always take its value from a_10, meaning
        that instead of considering it VARYING (a_9 and a_10 have
        different values), we can consider it CONSTANT 100.
      - If an argument has an UNDEFINED value, then it does not affect
        the outcome of the meet operation.  If a variable V_i has an
        UNDEFINED value, it means that either its defining statement
        hasn't been visited yet or V_i has no defining statement, in
        which case the original symbol 'V' is being used
        uninitialized.  Since 'V' is a local variable, the compiler
        may assume any initial value for it.
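        As a made-up illustration of this short cut:

                if (PRED)
                  b_5 = 7;
                b_6 = PHI (b_5, b_4)

        If b_4 is the default definition created for a use of 'b'
        before any assignment, its value is UNDEFINED and is ignored
        by the meet operation, so b_6 is optimistically considered
        CONSTANT 7.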
   After propagation, every variable V_i that ends up with a lattice
   value of CONSTANT will have the associated constant value in the
   array CONST_VAL[i].VALUE.  That is fed into substitute_and_fold for
   final substitution and folding.
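   For example (again purely illustrative), if CONST_VAL records
   x_1 = CONSTANT 4 and y_2 = CONSTANT 10, substitute_and_fold replaces
   the remaining uses of x_1 and y_2 with those constants and re-folds
   the affected statements; conditional branches whose predicates have
   become constant are then removed by the CFG cleanup this pass
   requests.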
   Constant propagation in stores and loads (STORE-CCP)
   ----------------------------------------------------

   While CCP has all the logic to propagate constants in GIMPLE
   registers, it is missing the ability to associate constants with
   stores and loads (i.e., pointer dereferences, structures and
   global/aliased variables).  We don't keep loads and stores in
   SSA, but we do build a factored use-def web for them (in the
   virtual operands).

   For instance, consider the following code fragment:

          struct A a;
          const int B = 42;

          void foo (int i)
          {
            if (i > 10)
              a.a = 42;
            else
              {
                a.b = 21;
                a.a = a.b + 21;
              }

            if (a.a != B)
              never_executed ();
          }

   We should be able to deduce that the predicate 'a.a != B' is always
   false.  To achieve this, we associate constant values to the SSA
   names in the VDEF operands for each store.  Additionally,
   since we also glob partial loads/stores with the base symbol, we
   also keep track of the memory reference where the constant value
   was stored (in the MEM_REF field of PROP_VALUE_T).  For instance,

          # a_5 = VDEF <a_4>
          a.a = 2;

          # VUSE <a_5>
          x_3 = a.b;

   In the example above, CCP will associate value '2' with 'a_5', but
   it would be wrong to replace the load from 'a.b' with '2', because
   '2' had been stored into a.a.
   Note that the initial value of virtual operands is VARYING, not
   UNDEFINED.  Consider, for instance global variables:

        int A;

        foo (int i)
        {
          if (i_3 > 10)
            A_4 = 3;
          # A_5 = PHI (A_4, A_2);

          # VUSE <A_5>
          A.0_6 = A;

          return A.0_6;
        }

   The value of A_2 cannot be assumed to be UNDEFINED, as it may have
   been defined outside of foo.  If we were to assume it UNDEFINED, we
   would erroneously optimize the above into 'return 3;'.
   Though STORE-CCP is not too expensive, it does have to do more work
   than regular CCP, so it is only enabled at -O2.  Both regular CCP
   and STORE-CCP use the exact same algorithm.  The only distinction
   is that when doing STORE-CCP, the boolean variable DO_STORE_CCP is
   set to true.  This affects the evaluation of statements and PHI
   nodes.

   References:

     Constant propagation with conditional branches,
     Wegman and Zadeck, ACM TOPLAS 13(2):181-210.

     Building an Optimizing Compiler,
     Robert Morgan, Butterworth-Heinemann, 1998, Section 8.9.

     Advanced Compiler Design and Implementation,
     Steven Muchnick, Morgan Kaufmann, 1997, Section 12.6  */
188 #include "config.h"
189 #include "system.h"
190 #include "coretypes.h"
191 #include "tm.h"
192 #include "tree.h"
193 #include "flags.h"
194 #include "rtl.h"
195 #include "tm_p.h"
196 #include "ggc.h"
197 #include "basic-block.h"
198 #include "output.h"
199 #include "expr.h"
200 #include "function.h"
201 #include "diagnostic.h"
202 #include "timevar.h"
203 #include "tree-dump.h"
204 #include "tree-flow.h"
205 #include "tree-pass.h"
206 #include "tree-ssa-propagate.h"
207 #include "value-prof.h"
208 #include "langhooks.h"
209 #include "target.h"
210 #include "toplev.h"
213 /* Possible lattice values. */
214 typedef enum
216 UNINITIALIZED,
217 UNDEFINED,
218 CONSTANT,
219 VARYING
220 } ccp_lattice_t;
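/* Note that the order of these values matters: set_lattice_value
   relies on lattice transitions being monotonically non-decreasing in
   this order (UNINITIALIZED < UNDEFINED < CONSTANT < VARYING).  */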
222 /* Array of propagated constant values. After propagation,
223 CONST_VAL[I].VALUE holds the constant value for SSA_NAME(I). If
224 the constant is held in an SSA name representing a memory store
225 (i.e., a VDEF), CONST_VAL[I].MEM_REF will contain the actual
226 memory reference used to store (i.e., the LHS of the assignment
227 doing the store). */
228 static prop_value_t *const_val;
230 /* Dump constant propagation value VAL to file OUTF prefixed by PREFIX. */
232 static void
233 dump_lattice_value (FILE *outf, const char *prefix, prop_value_t val)
235 switch (val.lattice_val)
237 case UNINITIALIZED:
238 fprintf (outf, "%sUNINITIALIZED", prefix);
239 break;
240 case UNDEFINED:
241 fprintf (outf, "%sUNDEFINED", prefix);
242 break;
243 case VARYING:
244 fprintf (outf, "%sVARYING", prefix);
245 break;
246 case CONSTANT:
247 fprintf (outf, "%sCONSTANT ", prefix);
248 print_generic_expr (outf, val.value, dump_flags);
249 break;
250 default:
251 gcc_unreachable ();
256 /* Print lattice value VAL to stderr. */
258 void debug_lattice_value (prop_value_t val);
260 void
261 debug_lattice_value (prop_value_t val)
263 dump_lattice_value (stderr, "", val);
264 fprintf (stderr, "\n");
269 /* If SYM is a constant variable with known value, return the value.
270 NULL_TREE is returned otherwise. */
272 tree
273 get_symbol_constant_value (tree sym)
275 if (TREE_STATIC (sym)
276 && TREE_READONLY (sym)
277 && !MTAG_P (sym))
279 tree val = DECL_INITIAL (sym);
280 if (val)
282 STRIP_USELESS_TYPE_CONVERSION (val);
283 if (is_gimple_min_invariant (val))
284 return val;
286 /* Variables declared 'const' without an initializer
287 have zero as the initializer if they may not be
288 overridden at link or run time. */
289 if (!val
290 && targetm.binds_local_p (sym)
291 && (INTEGRAL_TYPE_P (TREE_TYPE (sym))
292 || SCALAR_FLOAT_TYPE_P (TREE_TYPE (sym))))
293 return fold_convert (TREE_TYPE (sym), integer_zero_node);
296 return NULL_TREE;
299 /* Compute a default value for variable VAR and store it in the
300 CONST_VAL array. The following rules are used to get default
301 values:
303 1- Global and static variables that are declared constant are
304 considered CONSTANT.
306 2- Any other value is considered UNDEFINED. This is useful when
307 considering PHI nodes. PHI arguments that are undefined do not
308 change the constant value of the PHI node, which allows for more
309 constants to be propagated.
311 3- Variables defined by statements other than assignments and PHI
312 nodes are considered VARYING.
314 4- Initial values of variables that are not GIMPLE registers are
315 considered VARYING. */
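/* For example (an illustrative summary of the rules above): an SSA
   name for a 'static const' scalar starts out CONSTANT; the default
   definition of an uninitialized local register starts out UNDEFINED;
   the default definition of a PARM_DECL, or any name that is not a
   GIMPLE register, starts out VARYING.  */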
317 static prop_value_t
318 get_default_value (tree var)
320 tree sym = SSA_NAME_VAR (var);
321 prop_value_t val = { UNINITIALIZED, NULL_TREE };
322 tree cst_val;
324 if (!is_gimple_reg (var))
326 /* Short circuit for regular CCP. We are not interested in any
327 non-register when DO_STORE_CCP is false. */
328 val.lattice_val = VARYING;
330 else if ((cst_val = get_symbol_constant_value (sym)) != NULL_TREE)
332 /* Globals and static variables declared 'const' take their
333 initial value. */
334 val.lattice_val = CONSTANT;
335 val.value = cst_val;
337 else
339 gimple stmt = SSA_NAME_DEF_STMT (var);
341 if (gimple_nop_p (stmt))
343 /* Variables defined by an empty statement are those used
344 before being initialized. If VAR is a local variable, we
345 can assume initially that it is UNDEFINED, otherwise we must
346 consider it VARYING. */
347 if (is_gimple_reg (sym) && TREE_CODE (sym) != PARM_DECL)
348 val.lattice_val = UNDEFINED;
349 else
350 val.lattice_val = VARYING;
352 else if (is_gimple_assign (stmt)
353 /* Value-returning GIMPLE_CALL statements assign to
354 a variable, and are treated similarly to GIMPLE_ASSIGN. */
355 || (is_gimple_call (stmt)
356 && gimple_call_lhs (stmt) != NULL_TREE)
357 || gimple_code (stmt) == GIMPLE_PHI)
359 /* Any other variable defined by an assignment or a PHI node
360 is considered UNDEFINED. */
361 val.lattice_val = UNDEFINED;
363 else
365 /* Otherwise, VAR will never take on a constant value. */
366 val.lattice_val = VARYING;
370 return val;
374 /* Get the constant value associated with variable VAR. */
376 static inline prop_value_t *
377 get_value (tree var)
379 prop_value_t *val;
381 if (const_val == NULL)
382 return NULL;
384 val = &const_val[SSA_NAME_VERSION (var)];
385 if (val->lattice_val == UNINITIALIZED)
386 *val = get_default_value (var);
388 return val;
391 /* Sets the value associated with VAR to VARYING. */
393 static inline void
394 set_value_varying (tree var)
396 prop_value_t *val = &const_val[SSA_NAME_VERSION (var)];
398 val->lattice_val = VARYING;
399 val->value = NULL_TREE;
402 /* For float types, modify the value of VAL to make ccp work correctly
403 for non-standard values (-0, NaN):
405 If HONOR_SIGNED_ZEROS is false, and VAL = -0, we canonicalize it to 0.
406 If HONOR_NANS is false, and VAL is NaN, we canonicalize it to UNDEFINED.
407 This is to fix the following problem (see PR 29921): Suppose we have
409 x = 0.0 * y
411 and we set value of y to NaN. This causes value of x to be set to NaN.
412 When we later determine that y is in fact VARYING, fold uses the fact
413 that HONOR_NANS is false, and we try to change the value of x to 0,
414 causing an ICE. With HONOR_NANS being false, the real appearance of
415 NaN would cause undefined behavior, though, so claiming that y (and x)
416 are UNDEFINED initially is correct. */
418 static void
419 canonicalize_float_value (prop_value_t *val)
421 enum machine_mode mode;
422 tree type;
423 REAL_VALUE_TYPE d;
425 if (val->lattice_val != CONSTANT
426 || TREE_CODE (val->value) != REAL_CST)
427 return;
429 d = TREE_REAL_CST (val->value);
430 type = TREE_TYPE (val->value);
431 mode = TYPE_MODE (type);
433 if (!HONOR_SIGNED_ZEROS (mode)
434 && REAL_VALUE_MINUS_ZERO (d))
436 val->value = build_real (type, dconst0);
437 return;
440 if (!HONOR_NANS (mode)
441 && REAL_VALUE_ISNAN (d))
443 val->lattice_val = UNDEFINED;
444 val->value = NULL;
445 return;
449 /* Set the value for variable VAR to NEW_VAL. Return true if the new
450 value is different from VAR's previous value. */
452 static bool
453 set_lattice_value (tree var, prop_value_t new_val)
455 prop_value_t *old_val = get_value (var);
457 canonicalize_float_value (&new_val);
459 /* Lattice transitions must always be monotonically increasing in
460 value. If *OLD_VAL and NEW_VAL are the same, return false to
461 inform the caller that this was a non-transition. */
463 gcc_assert (old_val->lattice_val < new_val.lattice_val
464 || (old_val->lattice_val == new_val.lattice_val
465 && ((!old_val->value && !new_val.value)
466 || operand_equal_p (old_val->value, new_val.value, 0))));
468 if (old_val->lattice_val != new_val.lattice_val)
470 if (dump_file && (dump_flags & TDF_DETAILS))
472 dump_lattice_value (dump_file, "Lattice value changed to ", new_val);
473 fprintf (dump_file, ". Adding SSA edges to worklist.\n");
476 *old_val = new_val;
478 gcc_assert (new_val.lattice_val != UNDEFINED);
479 return true;
482 return false;
486 /* Return the likely CCP lattice value for STMT.
488 If STMT has no operands, then return CONSTANT.
490 Else if undefinedness of operands of STMT causes its value to be
491 undefined, then return UNDEFINED.
493 Else if any operands of STMT are constants, then return CONSTANT.
495 Else return VARYING. */
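/* For example (illustrative): for z_4 = x_2 + y_3 where x_2 is
   CONSTANT and y_3 is VARYING, this returns CONSTANT, which merely
   tells evaluate_stmt that folding is worth attempting; the fold
   itself may still fail, in which case the result ends up VARYING.  */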
497 static ccp_lattice_t
498 likely_value (gimple stmt)
500 bool has_constant_operand, has_undefined_operand, all_undefined_operands;
501 tree use;
502 ssa_op_iter iter;
504 enum gimple_code code = gimple_code (stmt);
506 /* This function appears to be called only for assignments, calls,
507 conditionals, and switches, due to the logic in visit_stmt. */
508 gcc_assert (code == GIMPLE_ASSIGN
509 || code == GIMPLE_CALL
510 || code == GIMPLE_COND
511 || code == GIMPLE_SWITCH);
513 /* If the statement has volatile operands, it won't fold to a
514 constant value. */
515 if (gimple_has_volatile_ops (stmt))
516 return VARYING;
518 /* If we are not doing store-ccp, statements with loads
519 and/or stores will never fold into a constant. */
520 if (!ZERO_SSA_OPERANDS (stmt, SSA_OP_ALL_VIRTUALS))
521 return VARYING;
523 /* Note that only a GIMPLE_SINGLE_RHS assignment can satisfy
524 is_gimple_min_invariant, so we do not consider calls or
525 other forms of assignment. */
526 if (gimple_assign_single_p (stmt)
527 && is_gimple_min_invariant (gimple_assign_rhs1 (stmt)))
528 return CONSTANT;
530 if (code == GIMPLE_COND
531 && is_gimple_min_invariant (gimple_cond_lhs (stmt))
532 && is_gimple_min_invariant (gimple_cond_rhs (stmt)))
533 return CONSTANT;
535 if (code == GIMPLE_SWITCH
536 && is_gimple_min_invariant (gimple_switch_index (stmt)))
537 return CONSTANT;
539 /* Arrive here for more complex cases. */
541 has_constant_operand = false;
542 has_undefined_operand = false;
543 all_undefined_operands = true;
544 FOR_EACH_SSA_TREE_OPERAND (use, stmt, iter, SSA_OP_USE | SSA_OP_VUSE)
546 prop_value_t *val = get_value (use);
548 if (val->lattice_val == UNDEFINED)
549 has_undefined_operand = true;
550 else
551 all_undefined_operands = false;
553 if (val->lattice_val == CONSTANT)
554 has_constant_operand = true;
557 /* If the operation combines operands like COMPLEX_EXPR make sure to
558 not mark the result UNDEFINED if only one part of the result is
559 undefined. */
560 if (has_undefined_operand && all_undefined_operands)
561 return UNDEFINED;
562 else if (code == GIMPLE_ASSIGN && has_undefined_operand)
564 switch (gimple_assign_rhs_code (stmt))
566 /* Unary operators are handled with all_undefined_operands. */
567 case PLUS_EXPR:
568 case MINUS_EXPR:
569 case POINTER_PLUS_EXPR:
570 /* Not MIN_EXPR, MAX_EXPR. One VARYING operand may be selected.
571 Not bitwise operators, one VARYING operand may specify the
572 result completely. Not logical operators for the same reason.
573 Not COMPLEX_EXPR as one VARYING operand makes the result partly
574 not UNDEFINED. Not *DIV_EXPR, comparisons and shifts because
575 the undefined operand may be promoted. */
576 return UNDEFINED;
578 default:
582 /* If there was an UNDEFINED operand but the result may be not UNDEFINED
583 fall back to VARYING even if there were CONSTANT operands. */
584 if (has_undefined_operand)
585 return VARYING;
587 if (has_constant_operand
588 /* We do not consider virtual operands here -- load from read-only
589 memory may have only VARYING virtual operands, but still be
590 constant. */
591 || ZERO_SSA_OPERANDS (stmt, SSA_OP_USE))
592 return CONSTANT;
594 return VARYING;
597 /* Returns true if STMT cannot be constant. */
599 static bool
600 surely_varying_stmt_p (gimple stmt)
602 /* If the statement has operands that we cannot handle, it cannot be
603 constant. */
604 if (gimple_has_volatile_ops (stmt))
605 return true;
607 if (!ZERO_SSA_OPERANDS (stmt, SSA_OP_ALL_VIRTUALS))
608 return true;
610 /* If it is a call that does not return a value, or if it calls a
611 known function that is not a built-in, then it is varying. */
612 if (is_gimple_call (stmt))
614 tree fndecl;
615 if (!gimple_call_lhs (stmt)
616 || ((fndecl = gimple_call_fndecl (stmt)) != NULL_TREE
617 && !DECL_BUILT_IN (fndecl)))
618 return true;
621 /* Anything other than assignments and conditional jumps is not
622 interesting for CCP. */
623 if (gimple_code (stmt) != GIMPLE_ASSIGN
624 && gimple_code (stmt) != GIMPLE_COND
625 && gimple_code (stmt) != GIMPLE_SWITCH
626 && gimple_code (stmt) != GIMPLE_CALL)
627 return true;
629 return false;
632 /* Initialize local data structures for CCP. */
634 static void
635 ccp_initialize (void)
637 basic_block bb;
639 const_val = XCNEWVEC (prop_value_t, num_ssa_names);
641 /* Initialize simulation flags for PHI nodes and statements. */
642 FOR_EACH_BB (bb)
644 gimple_stmt_iterator i;
646 for (i = gsi_start_bb (bb); !gsi_end_p (i); gsi_next (&i))
648 gimple stmt = gsi_stmt (i);
649 bool is_varying = surely_varying_stmt_p (stmt);
651 if (is_varying)
653 tree def;
654 ssa_op_iter iter;
656 /* If the statement will not produce a constant, mark
657 all its outputs VARYING. */
658 FOR_EACH_SSA_TREE_OPERAND (def, stmt, iter, SSA_OP_ALL_DEFS)
660 if (is_varying)
661 set_value_varying (def);
664 prop_set_simulate_again (stmt, !is_varying);
668 /* Now process PHI nodes. We never clear the simulate_again flag on
669 phi nodes, since we do not know which edges are executable yet,
670 except for phi nodes for virtual operands when we do not do store ccp. */
671 FOR_EACH_BB (bb)
673 gimple_stmt_iterator i;
675 for (i = gsi_start_phis (bb); !gsi_end_p (i); gsi_next (&i))
677 gimple phi = gsi_stmt (i);
679 if (!is_gimple_reg (gimple_phi_result (phi)))
680 prop_set_simulate_again (phi, false);
681 else
682 prop_set_simulate_again (phi, true);
688 /* Do final substitution of propagated values, cleanup the flowgraph and
689 free allocated storage.
691 Return TRUE when something was optimized. */
693 static bool
694 ccp_finalize (void)
696 /* Perform substitutions based on the known constant values. */
697 bool something_changed = substitute_and_fold (const_val, false);
699 free (const_val);
700 const_val = NULL;
701 return something_changed;
705 /* Compute the meet operator between *VAL1 and *VAL2. Store the result
706 in VAL1.
708 any M UNDEFINED = any
709 any M VARYING = VARYING
710 Ci M Cj = Ci if (i == j)
711 Ci M Cj = VARYING if (i != j)
714 static void
715 ccp_lattice_meet (prop_value_t *val1, prop_value_t *val2)
717 if (val1->lattice_val == UNDEFINED)
719 /* UNDEFINED M any = any */
720 *val1 = *val2;
722 else if (val2->lattice_val == UNDEFINED)
724 /* any M UNDEFINED = any
725 Nothing to do. VAL1 already contains the value we want. */
728 else if (val1->lattice_val == VARYING
729 || val2->lattice_val == VARYING)
731 /* any M VARYING = VARYING. */
732 val1->lattice_val = VARYING;
733 val1->value = NULL_TREE;
735 else if (val1->lattice_val == CONSTANT
736 && val2->lattice_val == CONSTANT
737 && simple_cst_equal (val1->value, val2->value) == 1)
739 /* Ci M Cj = Ci if (i == j)
740 Ci M Cj = VARYING if (i != j)
742 If these two values come from memory stores, make sure that
743 they come from the same memory reference. */
744 val1->lattice_val = CONSTANT;
745 val1->value = val1->value;
747 else
749 /* Any other combination is VARYING. */
750 val1->lattice_val = VARYING;
751 val1->value = NULL_TREE;
756 /* Loop through the PHI_NODE's parameters for BLOCK and compare their
757 lattice values to determine PHI_NODE's lattice value. The value of a
758 PHI node is determined by calling ccp_lattice_meet with all the arguments
759 of the PHI node that are incoming via executable edges. */
761 static enum ssa_prop_result
762 ccp_visit_phi_node (gimple phi)
764 unsigned i;
765 prop_value_t *old_val, new_val;
767 if (dump_file && (dump_flags & TDF_DETAILS))
769 fprintf (dump_file, "\nVisiting PHI node: ");
770 print_gimple_stmt (dump_file, phi, 0, dump_flags);
773 old_val = get_value (gimple_phi_result (phi));
774 switch (old_val->lattice_val)
776 case VARYING:
777 return SSA_PROP_VARYING;
779 case CONSTANT:
780 new_val = *old_val;
781 break;
783 case UNDEFINED:
784 new_val.lattice_val = UNDEFINED;
785 new_val.value = NULL_TREE;
786 break;
788 default:
789 gcc_unreachable ();
792 for (i = 0; i < gimple_phi_num_args (phi); i++)
794 /* Compute the meet operator over all the PHI arguments flowing
795 through executable edges. */
796 edge e = gimple_phi_arg_edge (phi, i);
798 if (dump_file && (dump_flags & TDF_DETAILS))
800 fprintf (dump_file,
801 "\n Argument #%d (%d -> %d %sexecutable)\n",
802 i, e->src->index, e->dest->index,
803 (e->flags & EDGE_EXECUTABLE) ? "" : "not ");
806 /* If the incoming edge is executable, compute the meet operator for
807 the existing value of the PHI node and the current PHI argument. */
808 if (e->flags & EDGE_EXECUTABLE)
810 tree arg = gimple_phi_arg (phi, i)->def;
811 prop_value_t arg_val;
813 if (is_gimple_min_invariant (arg))
815 arg_val.lattice_val = CONSTANT;
816 arg_val.value = arg;
818 else
819 arg_val = *(get_value (arg));
821 ccp_lattice_meet (&new_val, &arg_val);
823 if (dump_file && (dump_flags & TDF_DETAILS))
825 fprintf (dump_file, "\t");
826 print_generic_expr (dump_file, arg, dump_flags);
827 dump_lattice_value (dump_file, "\tValue: ", arg_val);
828 fprintf (dump_file, "\n");
831 if (new_val.lattice_val == VARYING)
832 break;
836 if (dump_file && (dump_flags & TDF_DETAILS))
838 dump_lattice_value (dump_file, "\n PHI node value: ", new_val);
839 fprintf (dump_file, "\n\n");
842 /* Make the transition to the new value. */
843 if (set_lattice_value (gimple_phi_result (phi), new_val))
845 if (new_val.lattice_val == VARYING)
846 return SSA_PROP_VARYING;
847 else
848 return SSA_PROP_INTERESTING;
850 else
851 return SSA_PROP_NOT_INTERESTING;
854 /* Return true if we may propagate the address expression ADDR into the
855 dereference DEREF and cancel them. */
857 bool
858 may_propagate_address_into_dereference (tree addr, tree deref)
860 gcc_assert (INDIRECT_REF_P (deref)
861 && TREE_CODE (addr) == ADDR_EXPR);
863 /* Don't propagate if ADDR's operand has incomplete type. */
864 if (!COMPLETE_TYPE_P (TREE_TYPE (TREE_OPERAND (addr, 0))))
865 return false;
867 /* If the address is invariant then we do not need to preserve restrict
868 qualifications. But we do need to preserve volatile qualifiers until
869 we can annotate the folded dereference itself properly. */
870 if (is_gimple_min_invariant (addr)
871 && (!TREE_THIS_VOLATILE (deref)
872 || TYPE_VOLATILE (TREE_TYPE (addr))))
873 return useless_type_conversion_p (TREE_TYPE (deref),
874 TREE_TYPE (TREE_OPERAND (addr, 0)));
876 /* Else both the address substitution and the folding must result in
877 a valid useless type conversion sequence. */
878 return (useless_type_conversion_p (TREE_TYPE (TREE_OPERAND (deref, 0)),
879 TREE_TYPE (addr))
880 && useless_type_conversion_p (TREE_TYPE (deref),
881 TREE_TYPE (TREE_OPERAND (addr, 0))));
884 /* CCP specific front-end to the non-destructive constant folding
885 routines.
887 Attempt to simplify the RHS of STMT knowing that one or more
888 operands are constants.
890 If simplification is possible, return the simplified RHS,
891 otherwise return the original RHS or NULL_TREE. */
893 static tree
894 ccp_fold (gimple stmt)
896 switch (gimple_code (stmt))
898 case GIMPLE_ASSIGN:
900 enum tree_code subcode = gimple_assign_rhs_code (stmt);
902 switch (get_gimple_rhs_class (subcode))
904 case GIMPLE_SINGLE_RHS:
906 tree rhs = gimple_assign_rhs1 (stmt);
907 enum tree_code_class kind = TREE_CODE_CLASS (subcode);
909 if (TREE_CODE (rhs) == SSA_NAME)
911 /* If the RHS is an SSA_NAME, return its known constant value,
912 if any. */
913 return get_value (rhs)->value;
915 /* Handle propagating invariant addresses into address operations.
916 The folding we do here matches that in tree-ssa-forwprop.c. */
917 else if (TREE_CODE (rhs) == ADDR_EXPR)
919 tree *base;
920 base = &TREE_OPERAND (rhs, 0);
921 while (handled_component_p (*base))
922 base = &TREE_OPERAND (*base, 0);
923 if (TREE_CODE (*base) == INDIRECT_REF
924 && TREE_CODE (TREE_OPERAND (*base, 0)) == SSA_NAME)
926 prop_value_t *val = get_value (TREE_OPERAND (*base, 0));
927 if (val->lattice_val == CONSTANT
928 && TREE_CODE (val->value) == ADDR_EXPR
929 && may_propagate_address_into_dereference
930 (val->value, *base))
932 /* We need to return a new tree, not modify the IL
933 or share parts of it. So play some tricks to
934 avoid manually building it. */
935 tree ret, save = *base;
936 *base = TREE_OPERAND (val->value, 0);
937 ret = unshare_expr (rhs);
938 recompute_tree_invariant_for_addr_expr (ret);
939 *base = save;
940 return ret;
945 if (kind == tcc_reference)
947 if (TREE_CODE (rhs) == VIEW_CONVERT_EXPR
948 && TREE_CODE (TREE_OPERAND (rhs, 0)) == SSA_NAME)
950 prop_value_t *val = get_value (TREE_OPERAND (rhs, 0));
951 if (val->lattice_val == CONSTANT)
952 return fold_unary (VIEW_CONVERT_EXPR,
953 TREE_TYPE (rhs), val->value);
955 return fold_const_aggregate_ref (rhs);
957 else if (kind == tcc_declaration)
958 return get_symbol_constant_value (rhs);
959 return rhs;
962 case GIMPLE_UNARY_RHS:
964 /* Handle unary operators that can appear in GIMPLE form.
965 Note that we know the single operand must be a constant,
966 so this should almost always return a simplified RHS. */
967 tree lhs = gimple_assign_lhs (stmt);
968 tree op0 = gimple_assign_rhs1 (stmt);
970 /* Simplify the operand down to a constant. */
971 if (TREE_CODE (op0) == SSA_NAME)
973 prop_value_t *val = get_value (op0);
974 if (val->lattice_val == CONSTANT)
975 op0 = get_value (op0)->value;
978 /* Conversions are useless for CCP purposes if they are
979 value-preserving. Thus the restrictions that
980 useless_type_conversion_p places for pointer type conversions
981 do not apply here. Substitution later will only substitute to
982 allowed places. */
983 if (CONVERT_EXPR_CODE_P (subcode)
984 && POINTER_TYPE_P (TREE_TYPE (lhs))
985 && POINTER_TYPE_P (TREE_TYPE (op0))
986 /* Do not allow differences in volatile qualification
987 as this might get us confused as to whether a
988 propagation destination statement is volatile
989 or not. See PR36988. */
990 && (TYPE_VOLATILE (TREE_TYPE (TREE_TYPE (lhs)))
991 == TYPE_VOLATILE (TREE_TYPE (TREE_TYPE (op0)))))
993 tree tem;
994 /* Still try to generate a constant of correct type. */
995 if (!useless_type_conversion_p (TREE_TYPE (lhs),
996 TREE_TYPE (op0))
997 && ((tem = maybe_fold_offset_to_address
998 (op0, integer_zero_node, TREE_TYPE (lhs)))
999 != NULL_TREE))
1000 return tem;
1001 return op0;
1004 return fold_unary_ignore_overflow (subcode,
1005 gimple_expr_type (stmt), op0);
1008 case GIMPLE_BINARY_RHS:
1010 /* Handle binary operators that can appear in GIMPLE form. */
1011 tree op0 = gimple_assign_rhs1 (stmt);
1012 tree op1 = gimple_assign_rhs2 (stmt);
1014 /* Simplify the operands down to constants when appropriate. */
1015 if (TREE_CODE (op0) == SSA_NAME)
1017 prop_value_t *val = get_value (op0);
1018 if (val->lattice_val == CONSTANT)
1019 op0 = val->value;
1022 if (TREE_CODE (op1) == SSA_NAME)
1024 prop_value_t *val = get_value (op1);
1025 if (val->lattice_val == CONSTANT)
1026 op1 = val->value;
1029 /* Fold &foo + CST into an invariant reference if possible. */
1030 if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR
1031 && TREE_CODE (op0) == ADDR_EXPR
1032 && TREE_CODE (op1) == INTEGER_CST)
1034 tree lhs = gimple_assign_lhs (stmt);
1035 tree tem = maybe_fold_offset_to_address (op0, op1,
1036 TREE_TYPE (lhs));
1037 if (tem != NULL_TREE)
1038 return tem;
1041 return fold_binary (subcode, gimple_expr_type (stmt), op0, op1);
1044 default:
1045 gcc_unreachable ();
1048 break;
1050 case GIMPLE_CALL:
1052 tree fn = gimple_call_fn (stmt);
1053 prop_value_t *val;
1055 if (TREE_CODE (fn) == SSA_NAME)
1057 val = get_value (fn);
1058 if (val->lattice_val == CONSTANT)
1059 fn = val->value;
1061 if (TREE_CODE (fn) == ADDR_EXPR
1062 && TREE_CODE (TREE_OPERAND (fn, 0)) == FUNCTION_DECL
1063 && DECL_BUILT_IN (TREE_OPERAND (fn, 0)))
1065 tree *args = XALLOCAVEC (tree, gimple_call_num_args (stmt));
1066 tree call, retval;
1067 unsigned i;
1068 for (i = 0; i < gimple_call_num_args (stmt); ++i)
1070 args[i] = gimple_call_arg (stmt, i);
1071 if (TREE_CODE (args[i]) == SSA_NAME)
1073 val = get_value (args[i]);
1074 if (val->lattice_val == CONSTANT)
1075 args[i] = val->value;
1078 call = build_call_array (gimple_call_return_type (stmt),
1079 fn, gimple_call_num_args (stmt), args);
1080 retval = fold_call_expr (call, false);
1081 if (retval)
1082 /* fold_call_expr wraps the result inside a NOP_EXPR. */
1083 STRIP_NOPS (retval);
1084 return retval;
1086 return NULL_TREE;
1089 case GIMPLE_COND:
1091 /* Handle comparison operators that can appear in GIMPLE form. */
1092 tree op0 = gimple_cond_lhs (stmt);
1093 tree op1 = gimple_cond_rhs (stmt);
1094 enum tree_code code = gimple_cond_code (stmt);
1096 /* Simplify the operands down to constants when appropriate. */
1097 if (TREE_CODE (op0) == SSA_NAME)
1099 prop_value_t *val = get_value (op0);
1100 if (val->lattice_val == CONSTANT)
1101 op0 = val->value;
1104 if (TREE_CODE (op1) == SSA_NAME)
1106 prop_value_t *val = get_value (op1);
1107 if (val->lattice_val == CONSTANT)
1108 op1 = val->value;
1111 return fold_binary (code, boolean_type_node, op0, op1);
1114 case GIMPLE_SWITCH:
1116 tree rhs = gimple_switch_index (stmt);
1118 if (TREE_CODE (rhs) == SSA_NAME)
1120 /* If the RHS is an SSA_NAME, return its known constant value,
1121 if any. */
1122 return get_value (rhs)->value;
1125 return rhs;
1128 default:
1129 gcc_unreachable ();
1134 /* Return the tree representing the element referenced by T if T is an
1135 ARRAY_REF or COMPONENT_REF into constant aggregates. Return
1136 NULL_TREE otherwise. */
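/* For example (illustrative): given 'static const int a[2] = { 1, 2 };',
   a load from a[1] folds to 2, provided the index is an INTEGER_CST or
   an SSA name whose lattice value is a constant integer.  */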
1138 tree
1139 fold_const_aggregate_ref (tree t)
1141 prop_value_t *value;
1142 tree base, ctor, idx, field;
1143 unsigned HOST_WIDE_INT cnt;
1144 tree cfield, cval;
1146 switch (TREE_CODE (t))
1148 case ARRAY_REF:
1149 /* Get a CONSTRUCTOR. If BASE is a VAR_DECL, get its
1150 DECL_INITIAL. If BASE is a nested reference into another
1151 ARRAY_REF or COMPONENT_REF, make a recursive call to resolve
1152 the inner reference. */
1153 base = TREE_OPERAND (t, 0);
1154 switch (TREE_CODE (base))
1156 case VAR_DECL:
1157 if (!TREE_READONLY (base)
1158 || TREE_CODE (TREE_TYPE (base)) != ARRAY_TYPE
1159 || !targetm.binds_local_p (base))
1160 return NULL_TREE;
1162 ctor = DECL_INITIAL (base);
1163 break;
1165 case ARRAY_REF:
1166 case COMPONENT_REF:
1167 ctor = fold_const_aggregate_ref (base);
1168 break;
1170 case STRING_CST:
1171 case CONSTRUCTOR:
1172 ctor = base;
1173 break;
1175 default:
1176 return NULL_TREE;
1179 if (ctor == NULL_TREE
1180 || (TREE_CODE (ctor) != CONSTRUCTOR
1181 && TREE_CODE (ctor) != STRING_CST)
1182 || !TREE_STATIC (ctor))
1183 return NULL_TREE;
1185 /* Get the index. If we have an SSA_NAME, try to resolve it
1186 with the current lattice value for the SSA_NAME. */
1187 idx = TREE_OPERAND (t, 1);
1188 switch (TREE_CODE (idx))
1190 case SSA_NAME:
1191 if ((value = get_value (idx))
1192 && value->lattice_val == CONSTANT
1193 && TREE_CODE (value->value) == INTEGER_CST)
1194 idx = value->value;
1195 else
1196 return NULL_TREE;
1197 break;
1199 case INTEGER_CST:
1200 break;
1202 default:
1203 return NULL_TREE;
1206 /* Fold read from constant string. */
1207 if (TREE_CODE (ctor) == STRING_CST)
1209 if ((TYPE_MODE (TREE_TYPE (t))
1210 == TYPE_MODE (TREE_TYPE (TREE_TYPE (ctor))))
1211 && (GET_MODE_CLASS (TYPE_MODE (TREE_TYPE (TREE_TYPE (ctor))))
1212 == MODE_INT)
1213 && GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (TREE_TYPE (ctor)))) == 1
1214 && compare_tree_int (idx, TREE_STRING_LENGTH (ctor)) < 0)
1215 return build_int_cst_type (TREE_TYPE (t),
1216 (TREE_STRING_POINTER (ctor)
1217 [TREE_INT_CST_LOW (idx)]));
1218 return NULL_TREE;
1221 /* Whoo-hoo! I'll fold ya baby. Yeah! */
1222 FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (ctor), cnt, cfield, cval)
1223 if (tree_int_cst_equal (cfield, idx))
1225 STRIP_USELESS_TYPE_CONVERSION (cval);
1226 return cval;
1228 break;
1230 case COMPONENT_REF:
1231 /* Get a CONSTRUCTOR. If BASE is a VAR_DECL, get its
1232 DECL_INITIAL. If BASE is a nested reference into another
1233 ARRAY_REF or COMPONENT_REF, make a recursive call to resolve
1234 the inner reference. */
1235 base = TREE_OPERAND (t, 0);
1236 switch (TREE_CODE (base))
1238 case VAR_DECL:
1239 if (!TREE_READONLY (base)
1240 || TREE_CODE (TREE_TYPE (base)) != RECORD_TYPE
1241 || !targetm.binds_local_p (base))
1242 return NULL_TREE;
1244 ctor = DECL_INITIAL (base);
1245 break;
1247 case ARRAY_REF:
1248 case COMPONENT_REF:
1249 ctor = fold_const_aggregate_ref (base);
1250 break;
1252 default:
1253 return NULL_TREE;
1256 if (ctor == NULL_TREE
1257 || TREE_CODE (ctor) != CONSTRUCTOR
1258 || !TREE_STATIC (ctor))
1259 return NULL_TREE;
1261 field = TREE_OPERAND (t, 1);
1263 FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (ctor), cnt, cfield, cval)
1264 if (cfield == field
1265 /* FIXME: Handle bit-fields. */
1266 && ! DECL_BIT_FIELD (cfield))
1268 STRIP_USELESS_TYPE_CONVERSION (cval);
1269 return cval;
1271 break;
1273 case REALPART_EXPR:
1274 case IMAGPART_EXPR:
1276 tree c = fold_const_aggregate_ref (TREE_OPERAND (t, 0));
1277 if (c && TREE_CODE (c) == COMPLEX_CST)
1278 return fold_build1 (TREE_CODE (t), TREE_TYPE (t), c);
1279 break;
1282 case INDIRECT_REF:
1284 tree base = TREE_OPERAND (t, 0);
1285 if (TREE_CODE (base) == SSA_NAME
1286 && (value = get_value (base))
1287 && value->lattice_val == CONSTANT
1288 && TREE_CODE (value->value) == ADDR_EXPR)
1289 return fold_const_aggregate_ref (TREE_OPERAND (value->value, 0));
1290 break;
1293 default:
1294 break;
1297 return NULL_TREE;
1300 /* Evaluate statement STMT.
1301 Valid only for assignments, calls, conditionals, and switches. */
1303 static prop_value_t
1304 evaluate_stmt (gimple stmt)
1306 prop_value_t val;
1307 tree simplified = NULL_TREE;
1308 ccp_lattice_t likelyvalue = likely_value (stmt);
1309 bool is_constant;
1311 fold_defer_overflow_warnings ();
1313 /* If the statement is likely to have a CONSTANT result, then try
1314 to fold the statement to determine the constant value. */
1315 /* FIXME. This is the only place that we call ccp_fold.
1316 Since likely_value never returns CONSTANT for calls, we will
1317 not attempt to fold them, including builtins that may profit. */
1318 if (likelyvalue == CONSTANT)
1319 simplified = ccp_fold (stmt);
1320 /* If the statement is likely to have a VARYING result, then do not
1321 bother folding the statement. */
1322 else if (likelyvalue == VARYING)
1324 enum gimple_code code = gimple_code (stmt);
1325 if (code == GIMPLE_ASSIGN)
1327 enum tree_code subcode = gimple_assign_rhs_code (stmt);
1329 /* Other cases cannot satisfy is_gimple_min_invariant
1330 without folding. */
1331 if (get_gimple_rhs_class (subcode) == GIMPLE_SINGLE_RHS)
1332 simplified = gimple_assign_rhs1 (stmt);
1334 else if (code == GIMPLE_SWITCH)
1335 simplified = gimple_switch_index (stmt);
1336 else
1337 /* These cannot satisfy is_gimple_min_invariant without folding. */
1338 gcc_assert (code == GIMPLE_CALL || code == GIMPLE_COND);
1341 is_constant = simplified && is_gimple_min_invariant (simplified);
1343 fold_undefer_overflow_warnings (is_constant, stmt, 0);
1345 if (dump_file && (dump_flags & TDF_DETAILS))
1347 fprintf (dump_file, "which is likely ");
1348 switch (likelyvalue)
1350 case CONSTANT:
1351 fprintf (dump_file, "CONSTANT");
1352 break;
1353 case UNDEFINED:
1354 fprintf (dump_file, "UNDEFINED");
1355 break;
1356 case VARYING:
1357 fprintf (dump_file, "VARYING");
1358 break;
1359 default:;
1361 fprintf (dump_file, "\n");
1364 if (is_constant)
1366 /* The statement produced a constant value. */
1367 val.lattice_val = CONSTANT;
1368 val.value = simplified;
1370 else
1372 /* The statement produced a nonconstant value. If the statement
1373 had UNDEFINED operands, then the result of the statement
1374 should be UNDEFINED. Otherwise, the statement is VARYING. */
1375 if (likelyvalue == UNDEFINED)
1376 val.lattice_val = likelyvalue;
1377 else
1378 val.lattice_val = VARYING;
1380 val.value = NULL_TREE;
1383 return val;
1386 /* Visit the assignment statement STMT. Set the value of its LHS to the
1387 value computed by the RHS and store LHS in *OUTPUT_P. If STMT
1388 creates virtual definitions, set the value of each new name to that
1389 of the RHS (if we can derive a constant out of the RHS).
1390 Value-returning call statements also perform an assignment, and
1391 are handled here. */
1393 static enum ssa_prop_result
1394 visit_assignment (gimple stmt, tree *output_p)
1396 prop_value_t val;
1397 enum ssa_prop_result retval;
1399 tree lhs = gimple_get_lhs (stmt);
1401 gcc_assert (gimple_code (stmt) != GIMPLE_CALL
1402 || gimple_call_lhs (stmt) != NULL_TREE);
1404 if (gimple_assign_copy_p (stmt))
1406 tree rhs = gimple_assign_rhs1 (stmt);
1408 if (TREE_CODE (rhs) == SSA_NAME)
1410 /* For a simple copy operation, we copy the lattice values. */
1411 prop_value_t *nval = get_value (rhs);
1412 val = *nval;
1414 else
1415 val = evaluate_stmt (stmt);
1417 else
1418 /* Evaluate the statement, which could be
1419 either a GIMPLE_ASSIGN or a GIMPLE_CALL. */
1420 val = evaluate_stmt (stmt);
1422 retval = SSA_PROP_NOT_INTERESTING;
1424 /* Set the lattice value of the statement's output. */
1425 if (TREE_CODE (lhs) == SSA_NAME)
1427 /* If STMT is an assignment to an SSA_NAME, we only have one
1428 value to set. */
1429 if (set_lattice_value (lhs, val))
1431 *output_p = lhs;
1432 if (val.lattice_val == VARYING)
1433 retval = SSA_PROP_VARYING;
1434 else
1435 retval = SSA_PROP_INTERESTING;
1439 return retval;
1443 /* Visit the conditional statement STMT. Return SSA_PROP_INTERESTING
1444 if it can determine which edge will be taken. Otherwise, return
1445 SSA_PROP_VARYING. */
1447 static enum ssa_prop_result
1448 visit_cond_stmt (gimple stmt, edge *taken_edge_p)
1450 prop_value_t val;
1451 basic_block block;
1453 block = gimple_bb (stmt);
1454 val = evaluate_stmt (stmt);
1456 /* Find which edge out of the conditional block will be taken and add it
1457 to the worklist. If no single edge can be determined statically,
1458 return SSA_PROP_VARYING to feed all the outgoing edges to the
1459 propagation engine. */
1460 *taken_edge_p = val.value ? find_taken_edge (block, val.value) : 0;
1461 if (*taken_edge_p)
1462 return SSA_PROP_INTERESTING;
1463 else
1464 return SSA_PROP_VARYING;
1468 /* Evaluate statement STMT. If the statement produces an output value and
1469 its evaluation changes the lattice value of its output, return
1470 SSA_PROP_INTERESTING and set *OUTPUT_P to the SSA_NAME holding the
1471 output value.
1473 If STMT is a conditional branch and we can determine its truth
1474 value, set *TAKEN_EDGE_P accordingly. If STMT produces a varying
1475 value, return SSA_PROP_VARYING. */
1477 static enum ssa_prop_result
1478 ccp_visit_stmt (gimple stmt, edge *taken_edge_p, tree *output_p)
1480 tree def;
1481 ssa_op_iter iter;
1483 if (dump_file && (dump_flags & TDF_DETAILS))
1485 fprintf (dump_file, "\nVisiting statement:\n");
1486 print_gimple_stmt (dump_file, stmt, 0, dump_flags);
1489 switch (gimple_code (stmt))
1491 case GIMPLE_ASSIGN:
1492 /* If the statement is an assignment that produces a single
1493 output value, evaluate its RHS to see if the lattice value of
1494 its output has changed. */
1495 return visit_assignment (stmt, output_p);
1497 case GIMPLE_CALL:
1498 /* A value-returning call also performs an assignment. */
1499 if (gimple_call_lhs (stmt) != NULL_TREE)
1500 return visit_assignment (stmt, output_p);
1501 break;
1503 case GIMPLE_COND:
1504 case GIMPLE_SWITCH:
1505 /* If STMT is a conditional branch, see if we can determine
1506 which branch will be taken. */
1507 /* FIXME. It appears that we should be able to optimize
1508 computed GOTOs here as well. */
1509 return visit_cond_stmt (stmt, taken_edge_p);
1511 default:
1512 break;
1515 /* Any other kind of statement is not interesting for constant
1516 propagation and, therefore, not worth simulating. */
1517 if (dump_file && (dump_flags & TDF_DETAILS))
1518 fprintf (dump_file, "No interesting values produced. Marked VARYING.\n");
1520 /* Definitions made by statements other than assignments to
1521 SSA_NAMEs represent unknown modifications to their outputs.
1522 Mark them VARYING. */
1523 FOR_EACH_SSA_TREE_OPERAND (def, stmt, iter, SSA_OP_ALL_DEFS)
1525 prop_value_t v = { VARYING, NULL_TREE };
1526 set_lattice_value (def, v);
1529 return SSA_PROP_VARYING;
1533 /* Main entry point for SSA Conditional Constant Propagation. */
1535 static unsigned int
1536 do_ssa_ccp (void)
1538 ccp_initialize ();
1539 ssa_propagate (ccp_visit_stmt, ccp_visit_phi_node);
1540 if (ccp_finalize ())
1541 return (TODO_cleanup_cfg | TODO_update_ssa | TODO_remove_unused_locals);
1542 else
1543 return 0;
1547 static bool
1548 gate_ccp (void)
1550 return flag_tree_ccp != 0;
1554 struct gimple_opt_pass pass_ccp =
1557 GIMPLE_PASS,
1558 "ccp", /* name */
1559 gate_ccp, /* gate */
1560 do_ssa_ccp, /* execute */
1561 NULL, /* sub */
1562 NULL, /* next */
1563 0, /* static_pass_number */
1564 TV_TREE_CCP, /* tv_id */
1565 PROP_cfg | PROP_ssa, /* properties_required */
1566 0, /* properties_provided */
1567 0, /* properties_destroyed */
1568 0, /* todo_flags_start */
1569 TODO_dump_func | TODO_verify_ssa
1570 | TODO_verify_stmts | TODO_ggc_collect/* todo_flags_finish */
1575 /* A subroutine of fold_stmt_r. Attempts to fold *(A+O) to A[X].
1576 BASE is an array type. OFFSET is a byte displacement. ORIG_TYPE
1577 is the desired result type. */
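/* Illustrative example (hypothetical types): with BASE of type int[10],
   4-byte ints, OFFSET 8 and ORIG_TYPE int, the offset divides evenly
   by the element size, giving index 2, so the result is BASE[2].  If
   the division were not exact, NULL_TREE would be returned.  */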
1579 static tree
1580 maybe_fold_offset_to_array_ref (tree base, tree offset, tree orig_type,
1581 bool allow_negative_idx)
1583 tree min_idx, idx, idx_type, elt_offset = integer_zero_node;
1584 tree array_type, elt_type, elt_size;
1585 tree domain_type;
1587 /* If BASE is an ARRAY_REF, we can pick up another offset (this time
1588 measured in units of the size of the element type) from that ARRAY_REF.
1589 We can't do anything if either is variable.
1591 The case we handle here is *(&A[N]+O). */
1592 if (TREE_CODE (base) == ARRAY_REF)
1594 tree low_bound = array_ref_low_bound (base);
1596 elt_offset = TREE_OPERAND (base, 1);
1597 if (TREE_CODE (low_bound) != INTEGER_CST
1598 || TREE_CODE (elt_offset) != INTEGER_CST)
1599 return NULL_TREE;
1601 elt_offset = int_const_binop (MINUS_EXPR, elt_offset, low_bound, 0);
1602 base = TREE_OPERAND (base, 0);
1605 /* Ignore stupid user tricks of indexing non-array variables. */
1606 array_type = TREE_TYPE (base);
1607 if (TREE_CODE (array_type) != ARRAY_TYPE)
1608 return NULL_TREE;
1609 elt_type = TREE_TYPE (array_type);
1610 if (!useless_type_conversion_p (orig_type, elt_type))
1611 return NULL_TREE;
1613 /* Use signed size type for intermediate computation on the index. */
1614 idx_type = signed_type_for (size_type_node);
1616 /* If OFFSET and ELT_OFFSET are zero, we don't care about the size of the
1617 element type (so we can use the alignment if it's not constant).
1618 Otherwise, compute the offset as an index by using a division. If the
1619 division isn't exact, then don't do anything. */
1620 elt_size = TYPE_SIZE_UNIT (elt_type);
1621 if (!elt_size)
1622 return NULL;
1623 if (integer_zerop (offset))
1625 if (TREE_CODE (elt_size) != INTEGER_CST)
1626 elt_size = size_int (TYPE_ALIGN (elt_type));
1628 idx = build_int_cst (idx_type, 0);
1630 else
1632 unsigned HOST_WIDE_INT lquo, lrem;
1633 HOST_WIDE_INT hquo, hrem;
1634 double_int soffset;
1636 /* The final array offset should be signed, so we need
1637 to sign-extend the (possibly pointer) offset here
1638 and use signed division. */
1639 soffset = double_int_sext (tree_to_double_int (offset),
1640 TYPE_PRECISION (TREE_TYPE (offset)));
1641 if (TREE_CODE (elt_size) != INTEGER_CST
1642 || div_and_round_double (TRUNC_DIV_EXPR, 0,
1643 soffset.low, soffset.high,
1644 TREE_INT_CST_LOW (elt_size),
1645 TREE_INT_CST_HIGH (elt_size),
1646 &lquo, &hquo, &lrem, &hrem)
1647 || lrem || hrem)
1648 return NULL_TREE;
1650 idx = build_int_cst_wide (idx_type, lquo, hquo);
1653 /* Assume the low bound is zero. If there is a domain type, get the
1654 low bound, if any, convert the index into that type, and add the
1655 low bound. */
1656 min_idx = build_int_cst (idx_type, 0);
1657 domain_type = TYPE_DOMAIN (array_type);
1658 if (domain_type)
1660 idx_type = domain_type;
1661 if (TYPE_MIN_VALUE (idx_type))
1662 min_idx = TYPE_MIN_VALUE (idx_type);
1663 else
1664 min_idx = fold_convert (idx_type, min_idx);
1666 if (TREE_CODE (min_idx) != INTEGER_CST)
1667 return NULL_TREE;
1669 elt_offset = fold_convert (idx_type, elt_offset);
1672 if (!integer_zerop (min_idx))
1673 idx = int_const_binop (PLUS_EXPR, idx, min_idx, 0);
1674 if (!integer_zerop (elt_offset))
1675 idx = int_const_binop (PLUS_EXPR, idx, elt_offset, 0);
1677 /* Make sure to possibly truncate late after offsetting. */
1678 idx = fold_convert (idx_type, idx);
1680 /* We don't want to construct access past array bounds. For example
1681 char *(c[4]);
1682 c[3][2];
1683 should not be simplified into (*c)[14] or tree-vrp will
1684 give false warnings. The same is true for
1685 struct A { long x; char d[0]; } *a;
1686 (char *)a - 4;
1687 which should be not folded to &a->d[-8]. */
1688 if (domain_type
1689 && TYPE_MAX_VALUE (domain_type)
1690 && TREE_CODE (TYPE_MAX_VALUE (domain_type)) == INTEGER_CST)
1692 tree up_bound = TYPE_MAX_VALUE (domain_type);
1694 if (tree_int_cst_lt (up_bound, idx)
1695 /* Accesses after the end of arrays of size 0 (gcc
1696 extension) and 1 are likely intentional ("struct
1697 hack"). */
1698 && compare_tree_int (up_bound, 1) > 0)
1699 return NULL_TREE;
1701 if (domain_type
1702 && TYPE_MIN_VALUE (domain_type))
1704 if (!allow_negative_idx
1705 && TREE_CODE (TYPE_MIN_VALUE (domain_type)) == INTEGER_CST
1706 && tree_int_cst_lt (idx, TYPE_MIN_VALUE (domain_type)))
1707 return NULL_TREE;
1709 else if (!allow_negative_idx
1710 && compare_tree_int (idx, 0) < 0)
1711 return NULL_TREE;
1713 return build4 (ARRAY_REF, elt_type, base, idx, NULL_TREE, NULL_TREE);
1717 /* Attempt to fold *(S+O) to S.X.
1718 BASE is a record type. OFFSET is a byte displacement. ORIG_TYPE
1719 is the desired result type. */
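/* For example (illustrative, assuming 4-byte ints and no padding):
   given 'struct S { int a; int b; } s;', folding *(int *)((char *)&s + 4)
   finds the field 'b' at byte offset 4 and produces s.b.  */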
1721 static tree
1722 maybe_fold_offset_to_component_ref (tree record_type, tree base, tree offset,
1723 tree orig_type, bool base_is_ptr)
1725 tree f, t, field_type, tail_array_field, field_offset;
1726 tree ret;
1727 tree new_base;
1729 if (TREE_CODE (record_type) != RECORD_TYPE
1730 && TREE_CODE (record_type) != UNION_TYPE
1731 && TREE_CODE (record_type) != QUAL_UNION_TYPE)
1732 return NULL_TREE;
1734 /* Short-circuit silly cases. */
1735 if (useless_type_conversion_p (record_type, orig_type))
1736 return NULL_TREE;
1738 tail_array_field = NULL_TREE;
1739 for (f = TYPE_FIELDS (record_type); f ; f = TREE_CHAIN (f))
1741 int cmp;
1743 if (TREE_CODE (f) != FIELD_DECL)
1744 continue;
1745 if (DECL_BIT_FIELD (f))
1746 continue;
1748 if (!DECL_FIELD_OFFSET (f))
1749 continue;
1750 field_offset = byte_position (f);
1751 if (TREE_CODE (field_offset) != INTEGER_CST)
1752 continue;
1754 /* ??? Java creates "interesting" fields for representing base classes.
1755 They have no name, and have no context. With no context, we get into
1756 trouble with nonoverlapping_component_refs_p. Skip them. */
1757 if (!DECL_FIELD_CONTEXT (f))
1758 continue;
1760 /* The previous array field isn't at the end. */
1761 tail_array_field = NULL_TREE;
1763 /* Check to see if this offset overlaps with the field. */
1764 cmp = tree_int_cst_compare (field_offset, offset);
1765 if (cmp > 0)
1766 continue;
1768 field_type = TREE_TYPE (f);
1770 /* Here we exactly match the offset being checked. If the types match,
1771 then we can return that field. */
1772 if (cmp == 0
1773 && useless_type_conversion_p (orig_type, field_type))
1775 if (base_is_ptr)
1776 base = build1 (INDIRECT_REF, record_type, base);
1777 t = build3 (COMPONENT_REF, field_type, base, f, NULL_TREE);
1778 return t;
1781 /* Don't care about offsets into the middle of scalars. */
1782 if (!AGGREGATE_TYPE_P (field_type))
1783 continue;
1785 /* Check for array at the end of the struct. This is often
1786 used for flexible array members. We should be able to
1787 turn this into an array access anyway. */
1788 if (TREE_CODE (field_type) == ARRAY_TYPE)
1789 tail_array_field = f;
1791 /* Check the end of the field against the offset. */
1792 if (!DECL_SIZE_UNIT (f)
1793 || TREE_CODE (DECL_SIZE_UNIT (f)) != INTEGER_CST)
1794 continue;
1795 t = int_const_binop (MINUS_EXPR, offset, field_offset, 1);
1796 if (!tree_int_cst_lt (t, DECL_SIZE_UNIT (f)))
1797 continue;
1799 /* If we matched, then set offset to the displacement into
1800 this field. */
1801 if (base_is_ptr)
1802 new_base = build1 (INDIRECT_REF, record_type, base);
1803 else
1804 new_base = base;
1805 new_base = build3 (COMPONENT_REF, field_type, new_base, f, NULL_TREE);
1807 /* Recurse to possibly find the match. */
1808 ret = maybe_fold_offset_to_array_ref (new_base, t, orig_type,
1809 f == TYPE_FIELDS (record_type));
1810 if (ret)
1811 return ret;
1812 ret = maybe_fold_offset_to_component_ref (field_type, new_base, t,
1813 orig_type, false);
1814 if (ret)
1815 return ret;
1818 if (!tail_array_field)
1819 return NULL_TREE;
1821 f = tail_array_field;
1822 field_type = TREE_TYPE (f);
1823 offset = int_const_binop (MINUS_EXPR, offset, byte_position (f), 1);
1825 /* If we get here, we've got an aggregate field, and a possibly
1826 nonzero offset into them. Recurse and hope for a valid match. */
1827 if (base_is_ptr)
1828 base = build1 (INDIRECT_REF, record_type, base);
1829 base = build3 (COMPONENT_REF, field_type, base, f, NULL_TREE);
1831 t = maybe_fold_offset_to_array_ref (base, offset, orig_type,
1832 f == TYPE_FIELDS (record_type));
1833 if (t)
1834 return t;
1835 return maybe_fold_offset_to_component_ref (field_type, base, offset,
1836 orig_type, false);
1839 /* Attempt to express (ORIG_TYPE)BASE+OFFSET as BASE->field_of_orig_type
1840 or BASE[index] or by combination of those.
1842 Before attempting the conversion strip off existing ADDR_EXPRs and
1843 handled component refs. */
1845 tree
1846 maybe_fold_offset_to_reference (tree base, tree offset, tree orig_type)
1848 tree ret;
1849 tree type;
1850 bool base_is_ptr = true;
1852 STRIP_NOPS (base);
1853 if (TREE_CODE (base) == ADDR_EXPR)
1855 base_is_ptr = false;
1857 base = TREE_OPERAND (base, 0);
1859 /* Handle the case where an existing COMPONENT_REF picks e.g. the wrong
1860 field of a union, so it needs to be removed and a new COMPONENT_REF
1861 constructed. Such wrong COMPONENT_REFs are often created by folding
1862 the (type *)&object within the expression (type *)&object+offset. */
1863 if (handled_component_p (base))
1865 HOST_WIDE_INT sub_offset, size, maxsize;
1866 tree newbase;
1867 newbase = get_ref_base_and_extent (base, &sub_offset,
1868 &size, &maxsize);
1869 gcc_assert (newbase);
1870 if (size == maxsize
1871 && size != -1
1872 && !(sub_offset & (BITS_PER_UNIT - 1)))
1874 base = newbase;
1875 if (sub_offset)
1876 offset = int_const_binop (PLUS_EXPR, offset,
1877 build_int_cst (TREE_TYPE (offset),
1878 sub_offset / BITS_PER_UNIT), 1);
1881 if (useless_type_conversion_p (orig_type, TREE_TYPE (base))
1882 && integer_zerop (offset))
1883 return base;
1884 type = TREE_TYPE (base);
1886 else
1888 base_is_ptr = true;
1889 if (!POINTER_TYPE_P (TREE_TYPE (base)))
1890 return NULL_TREE;
1891 type = TREE_TYPE (TREE_TYPE (base));
1893 ret = maybe_fold_offset_to_component_ref (type, base, offset,
1894 orig_type, base_is_ptr);
1895 if (!ret)
1897 if (base_is_ptr)
1898 base = build1 (INDIRECT_REF, type, base);
1899 ret = maybe_fold_offset_to_array_ref (base, offset, orig_type, true);
1901 return ret;
1904 /* Attempt to express (ORIG_TYPE)&BASE+OFFSET as &BASE->field_of_orig_type
1905 or &BASE[index] or by combination of those.
1907 Before attempting the conversion strip off existing component refs. */
1909 tree
1910 maybe_fold_offset_to_address (tree addr, tree offset, tree orig_type)
1912 tree t;
1914 gcc_assert (POINTER_TYPE_P (TREE_TYPE (addr))
1915 && POINTER_TYPE_P (orig_type));
1917 t = maybe_fold_offset_to_reference (addr, offset, TREE_TYPE (orig_type));
1918 if (t != NULL_TREE)
1920 tree orig = addr;
1921 tree ptr_type;
1923 /* For __builtin_object_size to function correctly we need to
1924 make sure not to fold address arithmetic so that we change
1925 reference from one array to another. This would happen for
1926 example for
1928 struct X { char s1[10]; char s2[10] } s;
1929 char *foo (void) { return &s.s2[-4]; }
1931 where we need to avoid generating &s.s1[6]. As the C and
1932 C++ frontends create different initial trees
1933 (char *) &s.s1 + -4 vs. &s.s1[-4] we have to do some
1934 sophisticated comparisons here. Note that checking for the
1935 condition after the fact is easier than trying to avoid doing
1936 the folding. */
1937 STRIP_NOPS (orig);
1938 if (TREE_CODE (orig) == ADDR_EXPR)
1939 orig = TREE_OPERAND (orig, 0);
1940 if ((TREE_CODE (orig) == ARRAY_REF
1941 || (TREE_CODE (orig) == COMPONENT_REF
1942 && TREE_CODE (TREE_TYPE (TREE_OPERAND (orig, 1))) == ARRAY_TYPE))
1943 && (TREE_CODE (t) == ARRAY_REF
1944 || (TREE_CODE (t) == COMPONENT_REF
1945 && TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 1))) == ARRAY_TYPE))
1946 && !operand_equal_p (TREE_CODE (orig) == ARRAY_REF
1947 ? TREE_OPERAND (orig, 0) : orig,
1948 TREE_CODE (t) == ARRAY_REF
1949 ? TREE_OPERAND (t, 0) : t, 0))
1950 return NULL_TREE;
1952 ptr_type = build_pointer_type (TREE_TYPE (t));
1953 if (!useless_type_conversion_p (orig_type, ptr_type))
1954 return NULL_TREE;
1955 return build_fold_addr_expr_with_type (t, ptr_type);
1958 return NULL_TREE;
1961 /* A subroutine of fold_stmt_r. Attempt to simplify *(BASE+OFFSET).
1962 Return the simplified expression, or NULL if nothing could be done. */
1964 static tree
1965 maybe_fold_stmt_indirect (tree expr, tree base, tree offset)
1967 tree t;
1968 bool volatile_p = TREE_THIS_VOLATILE (expr);
1970 /* We may well have constructed a double-nested PLUS_EXPR via multiple
1971 substitutions. Fold that down to one. Remove NON_LVALUE_EXPRs that
1972 are sometimes added. */
1973 base = fold (base);
1974 STRIP_TYPE_NOPS (base);
1975 TREE_OPERAND (expr, 0) = base;
1977 /* One possibility is that the address reduces to a string constant. */
1978 t = fold_read_from_constant_string (expr);
1979 if (t)
1980 return t;
1982 /* Add in any offset from a POINTER_PLUS_EXPR. */
1983 if (TREE_CODE (base) == POINTER_PLUS_EXPR)
1985 tree offset2;
1987 offset2 = TREE_OPERAND (base, 1);
1988 if (TREE_CODE (offset2) != INTEGER_CST)
1989 return NULL_TREE;
1990 base = TREE_OPERAND (base, 0);
1992 offset = fold_convert (sizetype,
1993 int_const_binop (PLUS_EXPR, offset, offset2, 1));
1996 if (TREE_CODE (base) == ADDR_EXPR)
1998 tree base_addr = base;
2000 /* Strip the ADDR_EXPR. */
2001 base = TREE_OPERAND (base, 0);
2003 /* Fold away CONST_DECL to its value, if the type is scalar. */
2004 if (TREE_CODE (base) == CONST_DECL
2005 && is_gimple_min_invariant (DECL_INITIAL (base)))
2006 return DECL_INITIAL (base);
2008 /* Try folding *(&B+O) to B.X. */
2009 t = maybe_fold_offset_to_reference (base_addr, offset,
2010 TREE_TYPE (expr));
2011 if (t)
2013 /* Preserve the volatility of the original expression.
2014    We can end up with a plain decl here, which is shared,
2015    and we shouldn't mess with its flags. */
2016 if (!SSA_VAR_P (t))
2017 TREE_THIS_VOLATILE (t) = volatile_p;
2018 return t;
2021 else
2023 /* We can get here for out-of-range string constant accesses,
2024 such as "_"[3]. Bail out of the entire substitution search
2025 and arrange for the entire statement to be replaced by a
2026 call to __builtin_trap. In all likelihood this will all be
2027 constant-folded away, but in the meantime we can't leave with
2028 something that get_expr_operands can't understand. */
2030 t = base;
2031 STRIP_NOPS (t);
2032 if (TREE_CODE (t) == ADDR_EXPR
2033 && TREE_CODE (TREE_OPERAND (t, 0)) == STRING_CST)
2035 /* FIXME: Except that this causes problems elsewhere with dead
2036 code not being deleted, and we die in the rtl expanders
2037 because we failed to remove some ssa_name. In the meantime,
2038 just return zero. */
2039 /* FIXME2: This condition should be signaled by
2040 fold_read_from_constant_string directly, rather than
2041 re-checking for it here. */
2042 return integer_zero_node;
2045 /* Try folding *(B+O) to B->X. Still an improvement. */
2046 if (POINTER_TYPE_P (TREE_TYPE (base)))
2048 t = maybe_fold_offset_to_reference (base, offset,
2049 TREE_TYPE (expr));
2050 if (t)
2051 return t;
2055 /* Otherwise we had an offset that we could not simplify. */
2056 return NULL_TREE;
2060 /* A quaint feature of our address arithmetic is that there
2061    can be hidden type changes here: the type of the result need
2062    not be the same as the type of the input pointer.
2064 What we're after here is an expression of the form
2065 (T *)(&array + const)
2066 where array is OP0, const is OP1, RES_TYPE is T and
2067 the cast doesn't actually exist, but is implicit in the
2068 type of the POINTER_PLUS_EXPR. We'd like to turn this into
2069 &array[x]
2070 which may be able to propagate further. */
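/* A source-level sketch of the intended rewrite (assuming 4-byte int):

     int a[10];

     (int *) (&a p+ 8)   =>  &a[2]  */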
2072 tree
2073 maybe_fold_stmt_addition (tree res_type, tree op0, tree op1)
2075 tree ptd_type;
2076 tree t;
2078 /* It had better be a constant. */
2079 if (TREE_CODE (op1) != INTEGER_CST)
2080 return NULL_TREE;
2081 /* The first operand should be an ADDR_EXPR. */
2082 if (TREE_CODE (op0) != ADDR_EXPR)
2083 return NULL_TREE;
2084 op0 = TREE_OPERAND (op0, 0);
2086 /* If the first operand is an ARRAY_REF, expand it so that we can fold
2087 the offset into it. */
2088 while (TREE_CODE (op0) == ARRAY_REF)
2090 tree array_obj = TREE_OPERAND (op0, 0);
2091 tree array_idx = TREE_OPERAND (op0, 1);
2092 tree elt_type = TREE_TYPE (op0);
2093 tree elt_size = TYPE_SIZE_UNIT (elt_type);
2094 tree min_idx;
2096 if (TREE_CODE (array_idx) != INTEGER_CST)
2097 break;
2098 if (TREE_CODE (elt_size) != INTEGER_CST)
2099 break;
2101 /* Un-bias the index by the min index of the array type. */
2102 min_idx = TYPE_DOMAIN (TREE_TYPE (array_obj));
2103 if (min_idx)
2105 min_idx = TYPE_MIN_VALUE (min_idx);
2106 if (min_idx)
2108 if (TREE_CODE (min_idx) != INTEGER_CST)
2109 break;
2111 array_idx = fold_convert (TREE_TYPE (min_idx), array_idx);
2112 if (!integer_zerop (min_idx))
2113 array_idx = int_const_binop (MINUS_EXPR, array_idx,
2114 min_idx, 0);
2118 /* Convert the index to a byte offset. */
2119 array_idx = fold_convert (sizetype, array_idx);
2120 array_idx = int_const_binop (MULT_EXPR, array_idx, elt_size, 0);
2122 /* Update the operands for the next round, or for folding. */
2123 op1 = int_const_binop (PLUS_EXPR,
2124 array_idx, op1, 0);
2125 op0 = array_obj;
2128 ptd_type = TREE_TYPE (res_type);
2129 /* If we want a pointer to void, reconstruct the reference from the
2130 array element type. A pointer to that can be trivially converted
2131 to void *. This happens as we fold (void *)(ptr p+ off). */
2132 if (VOID_TYPE_P (ptd_type)
2133 && TREE_CODE (TREE_TYPE (op0)) == ARRAY_TYPE)
2134 ptd_type = TREE_TYPE (TREE_TYPE (op0));
2136 /* At which point we can try some of the same things as for indirects. */
2137 t = maybe_fold_offset_to_array_ref (op0, op1, ptd_type, true);
2138 if (!t)
2139 t = maybe_fold_offset_to_component_ref (TREE_TYPE (op0), op0, op1,
2140 ptd_type, false);
2141 if (t)
2142 t = build1 (ADDR_EXPR, res_type, t);
2144 return t;
2147 /* For passing state through walk_tree into fold_stmt_r and its
2148 children. */
2150 struct fold_stmt_r_data
2152 gimple stmt;
2153 bool *changed_p;
2154 bool *inside_addr_expr_p;
2157 /* Subroutine of fold_stmt called via walk_tree. We perform several
2158 simplifications of EXPR_P, mostly having to do with pointer arithmetic. */
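/* Typical simplifications performed below (illustrative):

     *&var        =>  var
     "abc"[1]     =>  'b'       (only when not on a LHS and not inside an ADDR_EXPR)
     *(&a p+ 8)   =>  a[2]      (for int a[10], 4-byte int)  */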
2160 static tree
2161 fold_stmt_r (tree *expr_p, int *walk_subtrees, void *data)
2163 struct walk_stmt_info *wi = (struct walk_stmt_info *) data;
2164 struct fold_stmt_r_data *fold_stmt_r_data;
2165 bool *inside_addr_expr_p;
2166 bool *changed_p;
2167 tree expr = *expr_p, t;
2168 bool volatile_p = TREE_THIS_VOLATILE (expr);
2170 fold_stmt_r_data = (struct fold_stmt_r_data *) wi->info;
2171 inside_addr_expr_p = fold_stmt_r_data->inside_addr_expr_p;
2172 changed_p = fold_stmt_r_data->changed_p;
2174 /* ??? It'd be nice if walk_tree had a pre-order option. */
2175 switch (TREE_CODE (expr))
2177 case INDIRECT_REF:
2178 t = walk_tree (&TREE_OPERAND (expr, 0), fold_stmt_r, data, NULL);
2179 if (t)
2180 return t;
2181 *walk_subtrees = 0;
2183 t = maybe_fold_stmt_indirect (expr, TREE_OPERAND (expr, 0),
2184 integer_zero_node);
2185 /* Avoid folding *"abc" = 5 into 'a' = 5. */
2186 if (wi->is_lhs && t && TREE_CODE (t) == INTEGER_CST)
2187 t = NULL_TREE;
2188 if (!t
2189 && TREE_CODE (TREE_OPERAND (expr, 0)) == ADDR_EXPR)
2190 /* If we had a good reason for propagating the address here,
2191 make sure we end up with valid gimple. See PR34989. */
2192 t = TREE_OPERAND (TREE_OPERAND (expr, 0), 0);
2193 break;
2195 case NOP_EXPR:
2196 t = walk_tree (&TREE_OPERAND (expr, 0), fold_stmt_r, data, NULL);
2197 if (t)
2198 return t;
2199 *walk_subtrees = 0;
2201 if (POINTER_TYPE_P (TREE_TYPE (expr))
2202 && POINTER_TYPE_P (TREE_TYPE (TREE_TYPE (expr)))
2203 && POINTER_TYPE_P (TREE_TYPE (TREE_OPERAND (expr, 0)))
2204 && (t = maybe_fold_offset_to_address (TREE_OPERAND (expr, 0),
2205 integer_zero_node,
2206 TREE_TYPE (TREE_TYPE (expr)))))
2207 return t;
2208 break;
2210 /* ??? Could handle more ARRAY_REFs here, as a variant of INDIRECT_REF.
2211 We'd only want to bother decomposing an existing ARRAY_REF if
2212 the base array is found to have another offset contained within.
2213 Otherwise we'd be wasting time. */
2214 case ARRAY_REF:
2215 /* If we are not processing expressions found within an
2216 ADDR_EXPR, then we can fold constant array references.
2217 Don't fold on LHS either, to avoid folding "abc"[0] = 5
2218 into 'a' = 5. */
2219 if (!*inside_addr_expr_p && !wi->is_lhs)
2220 t = fold_read_from_constant_string (expr);
2221 else
2222 t = NULL;
2223 break;
2225 case ADDR_EXPR:
2226 *inside_addr_expr_p = true;
2227 t = walk_tree (&TREE_OPERAND (expr, 0), fold_stmt_r, data, NULL);
2228 *inside_addr_expr_p = false;
2229 if (t)
2230 return t;
2231 *walk_subtrees = 0;
2233 /* Make sure the value is properly considered constant, and so gets
2234 propagated as expected. */
2235 if (*changed_p)
2236 recompute_tree_invariant_for_addr_expr (expr);
2237 return NULL_TREE;
2239 case COMPONENT_REF:
2240 t = walk_tree (&TREE_OPERAND (expr, 0), fold_stmt_r, data, NULL);
2241 if (t)
2242 return t;
2243 *walk_subtrees = 0;
2245 /* Make sure the FIELD_DECL is actually a field in the type on the lhs.
2246 We've already checked that the records are compatible, so we should
2247 come up with a set of compatible fields. */
2249 tree expr_record = TREE_TYPE (TREE_OPERAND (expr, 0));
2250 tree expr_field = TREE_OPERAND (expr, 1);
2252 if (DECL_FIELD_CONTEXT (expr_field) != TYPE_MAIN_VARIANT (expr_record))
2254 expr_field = find_compatible_field (expr_record, expr_field);
2255 TREE_OPERAND (expr, 1) = expr_field;
2258 break;
2260 case TARGET_MEM_REF:
2261 t = maybe_fold_tmr (expr);
2262 break;
2264 case POINTER_PLUS_EXPR:
2265 t = walk_tree (&TREE_OPERAND (expr, 0), fold_stmt_r, data, NULL);
2266 if (t)
2267 return t;
2268 t = walk_tree (&TREE_OPERAND (expr, 1), fold_stmt_r, data, NULL);
2269 if (t)
2270 return t;
2271 *walk_subtrees = 0;
2273 t = maybe_fold_stmt_addition (TREE_TYPE (expr),
2274 TREE_OPERAND (expr, 0),
2275 TREE_OPERAND (expr, 1));
2276 break;
2278 case COND_EXPR:
2279 if (COMPARISON_CLASS_P (TREE_OPERAND (expr, 0)))
2281 tree op0 = TREE_OPERAND (expr, 0);
2282 tree tem;
2283 bool set;
2285 fold_defer_overflow_warnings ();
2286 tem = fold_binary (TREE_CODE (op0), TREE_TYPE (op0),
2287 TREE_OPERAND (op0, 0),
2288 TREE_OPERAND (op0, 1));
2289 /* This is actually a conditional expression, not a GIMPLE
2290 conditional statement, however, the valid_gimple_rhs_p
2291 test still applies. */
2292 set = tem && is_gimple_condexpr (tem) && valid_gimple_rhs_p (tem);
2293 fold_undefer_overflow_warnings (set, fold_stmt_r_data->stmt, 0);
2294 if (set)
2296 COND_EXPR_COND (expr) = tem;
2297 t = expr;
2298 break;
2301 return NULL_TREE;
2303 default:
2304 return NULL_TREE;
2307 if (t)
2309 /* Preserve the volatility of the original expression.
2310    We can end up with a plain decl here, which is shared,
2311    and we shouldn't mess with its flags. */
2312 if (!SSA_VAR_P (t))
2313 TREE_THIS_VOLATILE (t) = volatile_p;
2314 *expr_p = t;
2315 *changed_p = true;
2318 return NULL_TREE;
2321 /* Return the string length, maximum string length, or maximum value
2322    of ARG in LENGTH.
2323    If ARG is an SSA name variable, follow its use-def chains. Return
2324    false if we are unable to determine the length or value, or if
2325    LENGTH is not NULL and, for TYPE == 0, its value is not equal to
2326    the length we determine. VISITED is a bitmap of visited variables.
2327    TYPE is 0 if the string length should be returned, 1 for the maximum
2328    string length, and 2 for the maximum value ARG can have. */
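/* For example (illustrative):

     ARG = &"hello"[0]                 TYPE == 0  ->  *LENGTH = 5
     ARG = PHI <&"ab"[0], &"xyz"[0]>   TYPE == 1  ->  *LENGTH = 3  (the maximum)
     ARG = PHI <&"ab"[0], &"xyz"[0]>   TYPE == 0  ->  false (lengths differ)  */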
2330 static bool
2331 get_maxval_strlen (tree arg, tree *length, bitmap visited, int type)
2333 tree var, val;
2334 gimple def_stmt;
2336 if (TREE_CODE (arg) != SSA_NAME)
2338 if (TREE_CODE (arg) == COND_EXPR)
2339 return get_maxval_strlen (COND_EXPR_THEN (arg), length, visited, type)
2340 && get_maxval_strlen (COND_EXPR_ELSE (arg), length, visited, type);
2341 /* We can end up with &(*iftmp_1)[0] here as well, so handle it. */
2342 else if (TREE_CODE (arg) == ADDR_EXPR
2343 && TREE_CODE (TREE_OPERAND (arg, 0)) == ARRAY_REF
2344 && integer_zerop (TREE_OPERAND (TREE_OPERAND (arg, 0), 1)))
2346 tree aop0 = TREE_OPERAND (TREE_OPERAND (arg, 0), 0);
2347 if (TREE_CODE (aop0) == INDIRECT_REF
2348 && TREE_CODE (TREE_OPERAND (aop0, 0)) == SSA_NAME)
2349 return get_maxval_strlen (TREE_OPERAND (aop0, 0),
2350 length, visited, type);
2353 if (type == 2)
2355 val = arg;
2356 if (TREE_CODE (val) != INTEGER_CST
2357 || tree_int_cst_sgn (val) < 0)
2358 return false;
2360 else
2361 val = c_strlen (arg, 1);
2362 if (!val)
2363 return false;
2365 if (*length)
2367 if (type > 0)
2369 if (TREE_CODE (*length) != INTEGER_CST
2370 || TREE_CODE (val) != INTEGER_CST)
2371 return false;
2373 if (tree_int_cst_lt (*length, val))
2374 *length = val;
2375 return true;
2377 else if (simple_cst_equal (val, *length) != 1)
2378 return false;
2381 *length = val;
2382 return true;
2385 /* If we were already here, break the infinite cycle. */
2386 if (bitmap_bit_p (visited, SSA_NAME_VERSION (arg)))
2387 return true;
2388 bitmap_set_bit (visited, SSA_NAME_VERSION (arg));
2390 var = arg;
2391 def_stmt = SSA_NAME_DEF_STMT (var);
2393 switch (gimple_code (def_stmt))
2395 case GIMPLE_ASSIGN:
2396 /* The RHS of the statement defining VAR must either have a
2397 constant length or come from another SSA_NAME with a constant
2398 length. */
2399 if (gimple_assign_single_p (def_stmt)
2400 || gimple_assign_unary_nop_p (def_stmt))
2402 tree rhs = gimple_assign_rhs1 (def_stmt);
2403 return get_maxval_strlen (rhs, length, visited, type);
2405 return false;
2407 case GIMPLE_PHI:
2409 /* All the arguments of the PHI node must have the same constant
2410 length. */
2411 unsigned i;
2413 for (i = 0; i < gimple_phi_num_args (def_stmt); i++)
2415 tree arg = gimple_phi_arg (def_stmt, i)->def;
2417 /* If this PHI has itself as an argument, we cannot
2418 determine the string length of this argument. However,
2419 if we can find a constant string length for the other
2420 PHI args then we can still be sure that this is a
2421 constant string length. So be optimistic and just
2422 continue with the next argument. */
2423 if (arg == gimple_phi_result (def_stmt))
2424 continue;
2426 if (!get_maxval_strlen (arg, length, visited, type))
2427 return false;
2430 return true;
2432 default:
2433 return false;
2438 /* Fold builtin call in statement STMT. Returns a simplified tree.
2439 We may return a non-constant expression, including another call
2440 to a different function and with different arguments, e.g.,
2441 substituting memcpy for strcpy when the string length is known.
2442 Note that some builtins expand into inline code that may not
2443 be valid in GIMPLE. Callers must take care. */
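/* For example (a source-level sketch):

     strcpy (dst, src)  with strlen (src) known to be 3
       =>  memcpy (dst, src, 4)

     lhs = strlen (s)   with the length of s known to be 5
       =>  lhs = 5  */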
2445 static tree
2446 ccp_fold_builtin (gimple stmt)
2448 tree result, val[3];
2449 tree callee, a;
2450 int arg_idx, type;
2451 bitmap visited;
2452 bool ignore;
2453 int nargs;
2455 gcc_assert (is_gimple_call (stmt));
2457 ignore = (gimple_call_lhs (stmt) == NULL);
2459 /* First try the generic builtin folder. If that succeeds, return the
2460 result directly. */
2461 result = fold_call_stmt (stmt, ignore);
2462 if (result)
2464 if (ignore)
2465 STRIP_NOPS (result);
2466 return result;
2469 /* Ignore MD builtins. */
2470 callee = gimple_call_fndecl (stmt);
2471 if (DECL_BUILT_IN_CLASS (callee) == BUILT_IN_MD)
2472 return NULL_TREE;
2474 /* If the builtin could not be folded, and it has no argument list,
2475 we're done. */
2476 nargs = gimple_call_num_args (stmt);
2477 if (nargs == 0)
2478 return NULL_TREE;
2480 /* Limit the work only for builtins we know how to simplify. */
2481 switch (DECL_FUNCTION_CODE (callee))
2483 case BUILT_IN_STRLEN:
2484 case BUILT_IN_FPUTS:
2485 case BUILT_IN_FPUTS_UNLOCKED:
2486 arg_idx = 0;
2487 type = 0;
2488 break;
2489 case BUILT_IN_STRCPY:
2490 case BUILT_IN_STRNCPY:
2491 arg_idx = 1;
2492 type = 0;
2493 break;
2494 case BUILT_IN_MEMCPY_CHK:
2495 case BUILT_IN_MEMPCPY_CHK:
2496 case BUILT_IN_MEMMOVE_CHK:
2497 case BUILT_IN_MEMSET_CHK:
2498 case BUILT_IN_STRNCPY_CHK:
2499 arg_idx = 2;
2500 type = 2;
2501 break;
2502 case BUILT_IN_STRCPY_CHK:
2503 case BUILT_IN_STPCPY_CHK:
2504 arg_idx = 1;
2505 type = 1;
2506 break;
2507 case BUILT_IN_SNPRINTF_CHK:
2508 case BUILT_IN_VSNPRINTF_CHK:
2509 arg_idx = 1;
2510 type = 2;
2511 break;
2512 default:
2513 return NULL_TREE;
2516 if (arg_idx >= nargs)
2517 return NULL_TREE;
2519 /* Try to use the dataflow information gathered by the CCP process. */
2520 visited = BITMAP_ALLOC (NULL);
2521 bitmap_clear (visited);
2523 memset (val, 0, sizeof (val));
2524 a = gimple_call_arg (stmt, arg_idx);
2525 if (!get_maxval_strlen (a, &val[arg_idx], visited, type))
2526 val[arg_idx] = NULL_TREE;
2528 BITMAP_FREE (visited);
2530 result = NULL_TREE;
2531 switch (DECL_FUNCTION_CODE (callee))
2533 case BUILT_IN_STRLEN:
2534 if (val[0] && nargs == 1)
2536 tree new_val =
2537 fold_convert (TREE_TYPE (gimple_call_lhs (stmt)), val[0]);
2539 /* If the result is not a valid gimple value, or not a cast
2540    of a valid gimple value, then we cannot use the result. */
2541 if (is_gimple_val (new_val)
2542 || (is_gimple_cast (new_val)
2543 && is_gimple_val (TREE_OPERAND (new_val, 0))))
2544 return new_val;
2546 break;
2548 case BUILT_IN_STRCPY:
2549 if (val[1] && is_gimple_val (val[1]) && nargs == 2)
2550 result = fold_builtin_strcpy (callee,
2551 gimple_call_arg (stmt, 0),
2552 gimple_call_arg (stmt, 1),
2553 val[1]);
2554 break;
2556 case BUILT_IN_STRNCPY:
2557 if (val[1] && is_gimple_val (val[1]) && nargs == 3)
2558 result = fold_builtin_strncpy (callee,
2559 gimple_call_arg (stmt, 0),
2560 gimple_call_arg (stmt, 1),
2561 gimple_call_arg (stmt, 2),
2562 val[1]);
2563 break;
2565 case BUILT_IN_FPUTS:
2566 if (nargs == 2)
2567 result = fold_builtin_fputs (gimple_call_arg (stmt, 0),
2568 gimple_call_arg (stmt, 1),
2569 ignore, false, val[0]);
2570 break;
2572 case BUILT_IN_FPUTS_UNLOCKED:
2573 if (nargs == 2)
2574 result = fold_builtin_fputs (gimple_call_arg (stmt, 0),
2575 gimple_call_arg (stmt, 1),
2576 ignore, true, val[0]);
2577 break;
2579 case BUILT_IN_MEMCPY_CHK:
2580 case BUILT_IN_MEMPCPY_CHK:
2581 case BUILT_IN_MEMMOVE_CHK:
2582 case BUILT_IN_MEMSET_CHK:
2583 if (val[2] && is_gimple_val (val[2]) && nargs == 4)
2584 result = fold_builtin_memory_chk (callee,
2585 gimple_call_arg (stmt, 0),
2586 gimple_call_arg (stmt, 1),
2587 gimple_call_arg (stmt, 2),
2588 gimple_call_arg (stmt, 3),
2589 val[2], ignore,
2590 DECL_FUNCTION_CODE (callee));
2591 break;
2593 case BUILT_IN_STRCPY_CHK:
2594 case BUILT_IN_STPCPY_CHK:
2595 if (val[1] && is_gimple_val (val[1]) && nargs == 3)
2596 result = fold_builtin_stxcpy_chk (callee,
2597 gimple_call_arg (stmt, 0),
2598 gimple_call_arg (stmt, 1),
2599 gimple_call_arg (stmt, 2),
2600 val[1], ignore,
2601 DECL_FUNCTION_CODE (callee));
2602 break;
2604 case BUILT_IN_STRNCPY_CHK:
2605 if (val[2] && is_gimple_val (val[2]) && nargs == 4)
2606 result = fold_builtin_strncpy_chk (gimple_call_arg (stmt, 0),
2607 gimple_call_arg (stmt, 1),
2608 gimple_call_arg (stmt, 2),
2609 gimple_call_arg (stmt, 3),
2610 val[2]);
2611 break;
2613 case BUILT_IN_SNPRINTF_CHK:
2614 case BUILT_IN_VSNPRINTF_CHK:
2615 if (val[1] && is_gimple_val (val[1]))
2616 result = gimple_fold_builtin_snprintf_chk (stmt, val[1],
2617 DECL_FUNCTION_CODE (callee));
2618 break;
2620 default:
2621 gcc_unreachable ();
2624 if (result && ignore)
2625 result = fold_ignored_result (result);
2626 return result;
2629 /* Attempt to fold an assignment statement pointed-to by SI. Returns a
2630 replacement rhs for the statement or NULL_TREE if no simplification
2631 could be made. It is assumed that the operands have been previously
2632 folded. */
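/* Illustrative foldings of an assignment RHS (source-level sketch,
   assuming int a[10] and 4-byte int):

     x_1 = 1 + 2;      =>  x_1 = 3;
     p_2 = &a p+ 8;    =>  p_2 = &a[2];  */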
2634 static tree
2635 fold_gimple_assign (gimple_stmt_iterator *si)
2637 gimple stmt = gsi_stmt (*si);
2638 enum tree_code subcode = gimple_assign_rhs_code (stmt);
2640 tree result = NULL;
2642 switch (get_gimple_rhs_class (subcode))
2644 case GIMPLE_SINGLE_RHS:
2646 tree rhs = gimple_assign_rhs1 (stmt);
2648 /* Try to fold a conditional expression. */
2649 if (TREE_CODE (rhs) == COND_EXPR)
2651 tree temp = fold (COND_EXPR_COND (rhs));
2652 if (temp != COND_EXPR_COND (rhs))
2653 result = fold_build3 (COND_EXPR, TREE_TYPE (rhs), temp,
2654 COND_EXPR_THEN (rhs), COND_EXPR_ELSE (rhs));
2657 /* If we couldn't fold the RHS, hand over to the generic
2658 fold routines. */
2659 if (result == NULL_TREE)
2660 result = fold (rhs);
2662 /* Strip away useless type conversions. Both the NON_LVALUE_EXPR
2663 that may have been added by fold, and "useless" type
2664 conversions that might now be apparent due to propagation. */
2665 STRIP_USELESS_TYPE_CONVERSION (result);
2667 if (result != rhs && valid_gimple_rhs_p (result))
2668 return result;
2669 else
2670 /* It is possible that fold_stmt_r simplified the RHS.
2671 Make sure that the subcode of this statement still
2672 reflects the principal operator of the rhs operand. */
2673 return rhs;
2675 break;
2677 case GIMPLE_UNARY_RHS:
2679 tree rhs = gimple_assign_rhs1 (stmt);
2681 result = fold_unary (subcode, gimple_expr_type (stmt), rhs);
2682 if (result)
2684 /* If the operation was a conversion, do _not_ mark a
2685    resulting constant with TREE_OVERFLOW if the original
2686    constant was not. These conversions have implementation-
2687    defined behavior and retaining the TREE_OVERFLOW flag
2688    here would confuse later passes such as VRP. */
2689 if (CONVERT_EXPR_CODE_P (subcode)
2690 && TREE_CODE (result) == INTEGER_CST
2691 && TREE_CODE (rhs) == INTEGER_CST)
2692 TREE_OVERFLOW (result) = TREE_OVERFLOW (rhs);
2694 STRIP_USELESS_TYPE_CONVERSION (result);
2695 if (valid_gimple_rhs_p (result))
2696 return result;
2698 else if (CONVERT_EXPR_CODE_P (subcode)
2699 && POINTER_TYPE_P (gimple_expr_type (stmt))
2700 && POINTER_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (stmt))))
2702 tree type = gimple_expr_type (stmt);
2703 tree t = maybe_fold_offset_to_address (gimple_assign_rhs1 (stmt),
2704 integer_zero_node, type);
2705 if (t)
2706 return t;
2709 break;
2711 case GIMPLE_BINARY_RHS:
2712 /* Try to fold pointer addition. */
2713 if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR)
2715 tree type = TREE_TYPE (gimple_assign_rhs1 (stmt));
2716 if (TREE_CODE (TREE_TYPE (type)) == ARRAY_TYPE)
2718 type = build_pointer_type (TREE_TYPE (TREE_TYPE (type)));
2719 if (!useless_type_conversion_p
2720 (TREE_TYPE (gimple_assign_lhs (stmt)), type))
2721 type = TREE_TYPE (gimple_assign_rhs1 (stmt));
2723 result = maybe_fold_stmt_addition (type,
2724 gimple_assign_rhs1 (stmt),
2725 gimple_assign_rhs2 (stmt));
2728 if (!result)
2729 result = fold_binary (subcode,
2730 TREE_TYPE (gimple_assign_lhs (stmt)),
2731 gimple_assign_rhs1 (stmt),
2732 gimple_assign_rhs2 (stmt));
2734 if (result)
2736 STRIP_USELESS_TYPE_CONVERSION (result);
2737 if (valid_gimple_rhs_p (result))
2738 return result;
2740 /* Fold might have produced non-GIMPLE, so if we trust it blindly
2741 we lose canonicalization opportunities. Do not go again
2742 through fold here though, or the same non-GIMPLE will be
2743 produced. */
2744 if (commutative_tree_code (subcode)
2745 && tree_swap_operands_p (gimple_assign_rhs1 (stmt),
2746 gimple_assign_rhs2 (stmt), false))
2747 return build2 (subcode, TREE_TYPE (gimple_assign_lhs (stmt)),
2748 gimple_assign_rhs2 (stmt),
2749 gimple_assign_rhs1 (stmt));
2751 break;
2753 case GIMPLE_INVALID_RHS:
2754 gcc_unreachable ();
2757 return NULL_TREE;
2760 /* Attempt to fold a conditional statement. Return true if any changes were
2761 made. We only attempt to fold the condition expression, and do not perform
2762 any transformation that would require alteration of the cfg. It is
2763 assumed that the operands have been previously folded. */
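/* For example (illustrative):

     if (2 > 1)   =>  if (1)  */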
2765 static bool
2766 fold_gimple_cond (gimple stmt)
2768 tree result = fold_binary (gimple_cond_code (stmt),
2769 boolean_type_node,
2770 gimple_cond_lhs (stmt),
2771 gimple_cond_rhs (stmt));
2773 if (result)
2775 STRIP_USELESS_TYPE_CONVERSION (result);
2776 if (is_gimple_condexpr (result) && valid_gimple_rhs_p (result))
2778 gimple_cond_set_condition_from_tree (stmt, result);
2779 return true;
2783 return false;
2787 /* Attempt to fold a call statement referenced by the statement iterator GSI.
2788 The statement may be replaced by another statement, e.g., if the call
2789 simplifies to a constant value. Return true if any changes were made.
2790 It is assumed that the operands have been previously folded. */
2792 static bool
2793 fold_gimple_call (gimple_stmt_iterator *gsi)
2795 gimple stmt = gsi_stmt (*gsi);
2797 tree callee = gimple_call_fndecl (stmt);
2799 /* Check for builtins that CCP can handle using information not
2800 available in the generic fold routines. */
2801 if (callee && DECL_BUILT_IN (callee))
2803 tree result = ccp_fold_builtin (stmt);
2805 if (result)
2806 return update_call_from_tree (gsi, result);
2808 else
2810 /* Check for a resolvable OBJ_TYPE_REF. The only cases we can resolve
2811    here are those where we've propagated the address of a decl into
2812    the object slot. */
2813 /* ??? Should perhaps do this in fold proper. However, doing it
2814 there requires that we create a new CALL_EXPR, and that requires
2815 copying EH region info to the new node. Easier to just do it
2816 here where we can just smash the call operand. */
2817 /* ??? Is there a good reason not to do this in fold_stmt_inplace? */
2818 callee = gimple_call_fn (stmt);
2819 if (TREE_CODE (callee) == OBJ_TYPE_REF
2820 && lang_hooks.fold_obj_type_ref
2821 && TREE_CODE (OBJ_TYPE_REF_OBJECT (callee)) == ADDR_EXPR
2822 && DECL_P (TREE_OPERAND
2823 (OBJ_TYPE_REF_OBJECT (callee), 0)))
2825 tree t;
2827 /* ??? Caution: Broken ADDR_EXPR semantics means that
2828 looking at the type of the operand of the addr_expr
2829 can yield an array type. See silly exception in
2830 check_pointer_types_r. */
2831 t = TREE_TYPE (TREE_TYPE (OBJ_TYPE_REF_OBJECT (callee)));
2832 t = lang_hooks.fold_obj_type_ref (callee, t);
2833 if (t)
2835 gimple_call_set_fn (stmt, t);
2836 return true;
2841 return false;
2844 /* Fold the statement pointed to by GSI. In some cases, this function may
2845 replace the whole statement with a new one. Returns true iff folding
2846 makes any changes. */
2848 bool
2849 fold_stmt (gimple_stmt_iterator *gsi)
2851 tree res;
2852 struct fold_stmt_r_data fold_stmt_r_data;
2853 struct walk_stmt_info wi;
2855 bool changed = false;
2856 bool inside_addr_expr = false;
2858 gimple stmt = gsi_stmt (*gsi);
2860 fold_stmt_r_data.stmt = stmt;
2861 fold_stmt_r_data.changed_p = &changed;
2862 fold_stmt_r_data.inside_addr_expr_p = &inside_addr_expr;
2864 memset (&wi, 0, sizeof (wi));
2865 wi.info = &fold_stmt_r_data;
2867 /* Fold the individual operands.
2868 For example, fold instances of *&VAR into VAR, etc. */
2869 res = walk_gimple_op (stmt, fold_stmt_r, &wi);
2870 gcc_assert (!res);
2872 /* Fold the main computation performed by the statement. */
2873 switch (gimple_code (stmt))
2875 case GIMPLE_ASSIGN:
2877 tree new_rhs = fold_gimple_assign (gsi);
2878 if (new_rhs != NULL_TREE)
2880 gimple_assign_set_rhs_from_tree (gsi, new_rhs);
2881 changed = true;
2883 stmt = gsi_stmt (*gsi);
2884 break;
2886 case GIMPLE_COND:
2887 changed |= fold_gimple_cond (stmt);
2888 break;
2889 case GIMPLE_CALL:
2890 /* The entire statement may be replaced in this case. */
2891 changed |= fold_gimple_call (gsi);
2892 break;
2894 default:
2895 return changed;
2896 break;
2899 return changed;
2902 /* Perform the minimal folding on statement STMT. Only operations like
2903 *&x created by constant propagation are handled. The statement cannot
2904 be replaced with a new one. Return true if the statement was
2905 changed, false otherwise. */
2907 bool
2908 fold_stmt_inplace (gimple stmt)
2910 tree res;
2911 struct fold_stmt_r_data fold_stmt_r_data;
2912 struct walk_stmt_info wi;
2913 gimple_stmt_iterator si;
2915 bool changed = false;
2916 bool inside_addr_expr = false;
2918 fold_stmt_r_data.stmt = stmt;
2919 fold_stmt_r_data.changed_p = &changed;
2920 fold_stmt_r_data.inside_addr_expr_p = &inside_addr_expr;
2922 memset (&wi, 0, sizeof (wi));
2923 wi.info = &fold_stmt_r_data;
2925 /* Fold the individual operands.
2926 For example, fold instances of *&VAR into VAR, etc.
2928 It appears that, at one time, maybe_fold_stmt_indirect
2929 would cause the walk to return non-null in order to
2930 signal that the entire statement should be replaced with
2931 a call to __builtin_trap. This functionality is currently
2932 disabled, as noted in a FIXME, and cannot be supported here. */
2933 res = walk_gimple_op (stmt, fold_stmt_r, &wi);
2934 gcc_assert (!res);
2936 /* Fold the main computation performed by the statement. */
2937 switch (gimple_code (stmt))
2939 case GIMPLE_ASSIGN:
2941 unsigned old_num_ops;
2942 tree new_rhs;
2943 old_num_ops = gimple_num_ops (stmt);
2944 si = gsi_for_stmt (stmt);
2945 new_rhs = fold_gimple_assign (&si);
2946 if (new_rhs != NULL_TREE
2947 && get_gimple_rhs_num_ops (TREE_CODE (new_rhs)) < old_num_ops)
2949 gimple_assign_set_rhs_from_tree (&si, new_rhs);
2950 changed = true;
2952 gcc_assert (gsi_stmt (si) == stmt);
2953 break;
2955 case GIMPLE_COND:
2956 changed |= fold_gimple_cond (stmt);
2957 break;
2959 default:
2960 break;
2963 return changed;
2966 /* Try to optimize out __builtin_stack_restore. Optimize it out
2967 if there is another __builtin_stack_restore in the same basic
2968 block and no calls or ASM_EXPRs are in between, or if this block's
2969 only outgoing edge is to EXIT_BLOCK and there are no calls or
2970 ASM_EXPRs after this __builtin_stack_restore. */
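/* A rough sketch of the pattern this looks for:

     p_1 = __builtin_stack_save ();
     ...
     __builtin_stack_restore (p_1);      <- the call being optimized
     ...                                 <- no calls or ASM_EXPRs here
     __builtin_stack_restore (q_2);      <- or fall through to EXIT_BLOCK

   in which case the first restore is redundant and, together with its
   matching save, can be eliminated.  */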
2972 static tree
2973 optimize_stack_restore (gimple_stmt_iterator i)
2975 tree callee, rhs;
2976 gimple stmt, stack_save;
2977 gimple_stmt_iterator stack_save_gsi;
2979 basic_block bb = gsi_bb (i);
2980 gimple call = gsi_stmt (i);
2982 if (gimple_code (call) != GIMPLE_CALL
2983 || gimple_call_num_args (call) != 1
2984 || TREE_CODE (gimple_call_arg (call, 0)) != SSA_NAME
2985 || !POINTER_TYPE_P (TREE_TYPE (gimple_call_arg (call, 0))))
2986 return NULL_TREE;
2988 for (gsi_next (&i); !gsi_end_p (i); gsi_next (&i))
2990 stmt = gsi_stmt (i);
2991 if (gimple_code (stmt) == GIMPLE_ASM)
2992 return NULL_TREE;
2993 if (gimple_code (stmt) != GIMPLE_CALL)
2994 continue;
2996 callee = gimple_call_fndecl (stmt);
2997 if (!callee || DECL_BUILT_IN_CLASS (callee) != BUILT_IN_NORMAL)
2998 return NULL_TREE;
3000 if (DECL_FUNCTION_CODE (callee) == BUILT_IN_STACK_RESTORE)
3001 break;
3004 if (gsi_end_p (i)
3005 && (! single_succ_p (bb)
3006 || single_succ_edge (bb)->dest != EXIT_BLOCK_PTR))
3007 return NULL_TREE;
3009 stack_save = SSA_NAME_DEF_STMT (gimple_call_arg (call, 0));
3010 if (gimple_code (stack_save) != GIMPLE_CALL
3011 || gimple_call_lhs (stack_save) != gimple_call_arg (call, 0)
3012 || stmt_could_throw_p (stack_save)
3013 || !has_single_use (gimple_call_arg (call, 0)))
3014 return NULL_TREE;
3016 callee = gimple_call_fndecl (stack_save);
3017 if (!callee
3018 || DECL_BUILT_IN_CLASS (callee) != BUILT_IN_NORMAL
3019 || DECL_FUNCTION_CODE (callee) != BUILT_IN_STACK_SAVE
3020 || gimple_call_num_args (stack_save) != 0)
3021 return NULL_TREE;
3023 stack_save_gsi = gsi_for_stmt (stack_save);
3024 push_stmt_changes (gsi_stmt_ptr (&stack_save_gsi));
3025 rhs = build_int_cst (TREE_TYPE (gimple_call_arg (call, 0)), 0);
3026 if (!update_call_from_tree (&stack_save_gsi, rhs))
3028 discard_stmt_changes (gsi_stmt_ptr (&stack_save_gsi));
3029 return NULL_TREE;
3031 pop_stmt_changes (gsi_stmt_ptr (&stack_save_gsi));
3033 /* No effect, so the statement will be deleted. */
3034 return integer_zero_node;
3037 /* If the va_list type is a simple pointer and nothing special is needed,
3038    optimize __builtin_va_start (&ap, 0) into ap = __builtin_next_arg (0),
3039    optimize __builtin_va_end (&ap) away as a NOP, and turn __builtin_va_copy
3040    into a simple pointer assignment. */
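/* Source-level sketch of the rewrites (for such a simple-pointer va_list):

     __builtin_va_start (&ap, 0);   =>  ap = __builtin_next_arg (0);
     __builtin_va_copy (&d, s);     =>  d = s;
     __builtin_va_end (&ap);        =>  (removed)  */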
3042 static tree
3043 optimize_stdarg_builtin (gimple call)
3045 tree callee, lhs, rhs, cfun_va_list;
3046 bool va_list_simple_ptr;
3048 if (gimple_code (call) != GIMPLE_CALL)
3049 return NULL_TREE;
3051 callee = gimple_call_fndecl (call);
3053 cfun_va_list = targetm.fn_abi_va_list (callee);
3054 va_list_simple_ptr = POINTER_TYPE_P (cfun_va_list)
3055 && (TREE_TYPE (cfun_va_list) == void_type_node
3056 || TREE_TYPE (cfun_va_list) == char_type_node);
3058 switch (DECL_FUNCTION_CODE (callee))
3060 case BUILT_IN_VA_START:
3061 if (!va_list_simple_ptr
3062 || targetm.expand_builtin_va_start != NULL
3063 || built_in_decls[BUILT_IN_NEXT_ARG] == NULL)
3064 return NULL_TREE;
3066 if (gimple_call_num_args (call) != 2)
3067 return NULL_TREE;
3069 lhs = gimple_call_arg (call, 0);
3070 if (!POINTER_TYPE_P (TREE_TYPE (lhs))
3071 || TYPE_MAIN_VARIANT (TREE_TYPE (TREE_TYPE (lhs)))
3072 != TYPE_MAIN_VARIANT (cfun_va_list))
3073 return NULL_TREE;
3075 lhs = build_fold_indirect_ref (lhs);
3076 rhs = build_call_expr (built_in_decls[BUILT_IN_NEXT_ARG],
3077 1, integer_zero_node);
3078 rhs = fold_convert (TREE_TYPE (lhs), rhs);
3079 return build2 (MODIFY_EXPR, TREE_TYPE (lhs), lhs, rhs);
3081 case BUILT_IN_VA_COPY:
3082 if (!va_list_simple_ptr)
3083 return NULL_TREE;
3085 if (gimple_call_num_args (call) != 2)
3086 return NULL_TREE;
3088 lhs = gimple_call_arg (call, 0);
3089 if (!POINTER_TYPE_P (TREE_TYPE (lhs))
3090 || TYPE_MAIN_VARIANT (TREE_TYPE (TREE_TYPE (lhs)))
3091 != TYPE_MAIN_VARIANT (cfun_va_list))
3092 return NULL_TREE;
3094 lhs = build_fold_indirect_ref (lhs);
3095 rhs = gimple_call_arg (call, 1);
3096 if (TYPE_MAIN_VARIANT (TREE_TYPE (rhs))
3097 != TYPE_MAIN_VARIANT (cfun_va_list))
3098 return NULL_TREE;
3100 rhs = fold_convert (TREE_TYPE (lhs), rhs);
3101 return build2 (MODIFY_EXPR, TREE_TYPE (lhs), lhs, rhs);
3103 case BUILT_IN_VA_END:
3104 /* No effect, so the statement will be deleted. */
3105 return integer_zero_node;
3107 default:
3108 gcc_unreachable ();
3112 /* Convert EXPR into a GIMPLE value suitable for substitution on the
3113 RHS of an assignment. Insert the necessary statements before
3114 iterator *SI_P. The statement at *SI_P, which must be a GIMPLE_CALL,
3115 is replaced. If the call is expected to produce a result, then it
3116 is replaced by an assignment of the new RHS to the result variable.
3117 If the result is to be ignored, then the call is replaced by a
3118 GIMPLE_NOP. */
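/* Rough sketch: if a call  lhs_1 = foo (x_2)  was folded to an
   expression EXPR that is not a valid GIMPLE RHS, EXPR is gimplified
   into a sequence of new statements

     ...
     tmp_3 = ...;        <- inserted before the call

   and the call statement itself is replaced by  lhs_1 = tmp_3
   (or by a GIMPLE_NOP when there is no LHS).  */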
3120 static void
3121 gimplify_and_update_call_from_tree (gimple_stmt_iterator *si_p, tree expr)
3123 tree lhs;
3124 tree tmp = NULL_TREE; /* Silence warning. */
3125 gimple stmt, new_stmt;
3126 gimple_stmt_iterator i;
3127 gimple_seq stmts = gimple_seq_alloc();
3128 struct gimplify_ctx gctx;
3130 stmt = gsi_stmt (*si_p);
3132 gcc_assert (is_gimple_call (stmt));
3134 lhs = gimple_call_lhs (stmt);
3136 push_gimplify_context (&gctx);
3138 if (lhs == NULL_TREE)
3139 gimplify_and_add (expr, &stmts);
3140 else
3141 tmp = get_initialized_tmp_var (expr, &stmts, NULL);
3143 pop_gimplify_context (NULL);
3145 if (gimple_has_location (stmt))
3146 annotate_all_with_location (stmts, gimple_location (stmt));
3148 /* The replacement can expose previously unreferenced variables. */
3149 for (i = gsi_start (stmts); !gsi_end_p (i); gsi_next (&i))
3151 new_stmt = gsi_stmt (i);
3152 find_new_referenced_vars (new_stmt);
3153 gsi_insert_before (si_p, new_stmt, GSI_NEW_STMT);
3154 mark_symbols_for_renaming (new_stmt);
3155 gsi_next (si_p);
3158 if (lhs == NULL_TREE)
3159 new_stmt = gimple_build_nop ();
3160 else
3162 new_stmt = gimple_build_assign (lhs, tmp);
3163 copy_virtual_operands (new_stmt, stmt);
3164 move_ssa_defining_stmt_for_defs (new_stmt, stmt);
3167 gimple_set_location (new_stmt, gimple_location (stmt));
3168 gsi_replace (si_p, new_stmt, false);
3171 /* A simple pass that attempts to fold all builtin functions. This pass
3172 is run after we've propagated as many constants as we can. */
3174 static unsigned int
3175 execute_fold_all_builtins (void)
3177 bool cfg_changed = false;
3178 basic_block bb;
3179 unsigned int todoflags = 0;
3181 FOR_EACH_BB (bb)
3183 gimple_stmt_iterator i;
3184 for (i = gsi_start_bb (bb); !gsi_end_p (i); )
3186 gimple stmt, old_stmt;
3187 tree callee, result;
3188 enum built_in_function fcode;
3190 stmt = gsi_stmt (i);
3192 if (gimple_code (stmt) != GIMPLE_CALL)
3194 gsi_next (&i);
3195 continue;
3197 callee = gimple_call_fndecl (stmt);
3198 if (!callee || DECL_BUILT_IN_CLASS (callee) != BUILT_IN_NORMAL)
3200 gsi_next (&i);
3201 continue;
3203 fcode = DECL_FUNCTION_CODE (callee);
3205 result = ccp_fold_builtin (stmt);
3207 if (result)
3208 gimple_remove_stmt_histograms (cfun, stmt);
3210 if (!result)
3211 switch (DECL_FUNCTION_CODE (callee))
3213 case BUILT_IN_CONSTANT_P:
3214 /* Resolve __builtin_constant_p. If it hasn't been
3215 folded to integer_one_node by now, it's fairly
3216 certain that the value simply isn't constant. */
3217 result = integer_zero_node;
3218 break;
3220 case BUILT_IN_STACK_RESTORE:
3221 result = optimize_stack_restore (i);
3222 if (result)
3223 break;
3224 gsi_next (&i);
3225 continue;
3227 case BUILT_IN_VA_START:
3228 case BUILT_IN_VA_END:
3229 case BUILT_IN_VA_COPY:
3230 /* These shouldn't be folded before pass_stdarg. */
3231 result = optimize_stdarg_builtin (stmt);
3232 if (result)
3233 break;
3234 /* FALLTHRU */
3236 default:
3237 gsi_next (&i);
3238 continue;
3241 if (dump_file && (dump_flags & TDF_DETAILS))
3243 fprintf (dump_file, "Simplified\n ");
3244 print_gimple_stmt (dump_file, stmt, 0, dump_flags);
3247 old_stmt = stmt;
3248 push_stmt_changes (gsi_stmt_ptr (&i));
3250 if (!update_call_from_tree (&i, result))
3252 gimplify_and_update_call_from_tree (&i, result);
3253 todoflags |= TODO_rebuild_alias;
3256 stmt = gsi_stmt (i);
3257 pop_stmt_changes (gsi_stmt_ptr (&i));
3259 if (maybe_clean_or_replace_eh_stmt (old_stmt, stmt)
3260 && gimple_purge_dead_eh_edges (bb))
3261 cfg_changed = true;
3263 if (dump_file && (dump_flags & TDF_DETAILS))
3265 fprintf (dump_file, "to\n ");
3266 print_gimple_stmt (dump_file, stmt, 0, dump_flags);
3267 fprintf (dump_file, "\n");
3270 /* Retry the same statement if it changed into another
3271    builtin; there might be new opportunities now. */
3272 if (gimple_code (stmt) != GIMPLE_CALL)
3274 gsi_next (&i);
3275 continue;
3277 callee = gimple_call_fndecl (stmt);
3278 if (!callee
3279 || DECL_BUILT_IN_CLASS (callee) != BUILT_IN_NORMAL
3280 || DECL_FUNCTION_CODE (callee) == fcode)
3281 gsi_next (&i);
3285 /* Delete unreachable blocks. */
3286 if (cfg_changed)
3287 todoflags |= TODO_cleanup_cfg;
3289 return todoflags;
3293 struct gimple_opt_pass pass_fold_builtins =
3296 GIMPLE_PASS,
3297 "fab", /* name */
3298 NULL, /* gate */
3299 execute_fold_all_builtins, /* execute */
3300 NULL, /* sub */
3301 NULL, /* next */
3302 0, /* static_pass_number */
3303 0, /* tv_id */
3304 PROP_cfg | PROP_ssa, /* properties_required */
3305 0, /* properties_provided */
3306 0, /* properties_destroyed */
3307 0, /* todo_flags_start */
3308 TODO_dump_func
3309 | TODO_verify_ssa
3310 | TODO_update_ssa /* todo_flags_finish */