1 @c Copyright (C) 2004-2015 Free Software Foundation, Inc.
2 @c This is part of the GCC manual.
3 @c For copying conditions, see the file gcc.texi.
5 @c ---------------------------------------------------------------------
7 @c ---------------------------------------------------------------------
10 @chapter Analysis and Optimization of GIMPLE tuples
12 @cindex Optimization infrastructure for GIMPLE
14 GCC uses three main intermediate languages to represent the program
15 during compilation: GENERIC, GIMPLE and RTL@. GENERIC is a
16 language-independent representation generated by each front end. It
serves as an interface between the parser and the optimizer.
18 GENERIC is a common representation that is able to represent programs
19 written in all the languages supported by GCC@.
GIMPLE and RTL are used to optimize the program.  GIMPLE is used for
target- and language-independent optimizations (e.g., inlining,
constant propagation, tail call elimination, redundancy elimination,
etc.).  Much like GENERIC, GIMPLE is a language-independent, tree-based
representation.  However, it differs from GENERIC in that the GIMPLE
grammar is more restrictive: expressions contain no more than 3
operands (except function calls), there are no control flow structures,
and expressions with side effects are only allowed on the right-hand
side of assignments.  See the chapter describing GENERIC and GIMPLE
for more details.
32 This chapter describes the data structures and functions used in the
33 GIMPLE optimizers (also known as ``tree optimizers'' or ``middle
34 end''). In particular, it focuses on all the macros, data structures,
functions and programming constructs needed to implement optimization
passes for GCC@.
@menu
* Annotations::         Attributes for variables.
* SSA Operands::        SSA names referenced by GIMPLE statements.
* SSA::                 Static Single Assignment representation.
* Alias analysis::      Representing aliased loads and stores.
* Memory model::        Memory model used by the middle-end.
@end menu
@section Annotations

The optimizers need to associate attributes with variables during the
optimization process.  For instance, we need to know whether a
variable has aliases.  All these attributes are stored in data
structures called annotations, which are then linked to the field
@code{ann} in @code{struct tree_common}.
@section SSA Operands
@cindex virtual operands
64 Almost every GIMPLE statement will contain a reference to a variable
65 or memory location. Since statements come in different shapes and
66 sizes, their operands are going to be located at various spots inside
the statement's tree.  To facilitate access to the statement's
operands, they are organized into lists associated with each
statement's annotation.  Each element in an operand list is a pointer
to a @code{VAR_DECL}, @code{PARM_DECL} or @code{SSA_NAME} tree node.
This provides a very convenient way of examining and replacing
operands.
74 Data flow analysis and optimization is done on all tree nodes
75 representing variables. Any node for which @code{SSA_VAR_P} returns
76 nonzero is considered when scanning statement operands. However, not
77 all @code{SSA_VAR_P} variables are processed in the same way. For the
78 purposes of optimization, we need to distinguish between references to
79 local scalar variables and references to globals, statics, structures,
arrays, aliased variables, etc.  The reason is simple: the compiler
can gather complete data flow information for a local scalar.  On the
other hand, a global variable may be modified by a function call, and it
may not be possible to keep track of all the elements of an array or
the fields of a structure.
86 The operand scanner gathers two kinds of operands: @dfn{real} and
87 @dfn{virtual}. An operand for which @code{is_gimple_reg} returns true
88 is considered real, otherwise it is a virtual operand. We also
89 distinguish between uses and definitions. An operand is used if its
90 value is loaded by the statement (e.g., the operand at the RHS of an
91 assignment). If the statement assigns a new value to the operand, the
operand is considered a definition (e.g., the operand at the LHS of
an assignment).
95 Virtual and real operands also have very different data flow
96 properties. Real operands are unambiguous references to the
97 full object that they represent. For instance, given
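@smallexample
a = b;
@end smallexample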
106 Since @code{a} and @code{b} are non-aliased locals, the statement
107 @code{a = b} will have one real definition and one real use because
108 variable @code{a} is completely modified with the contents of
variable @code{b}.  Real definitions are also known as @dfn{killing
110 definitions}. Similarly, the use of @code{b} reads all its bits.
112 In contrast, virtual operands are used with variables that can have
113 a partial or ambiguous reference. This includes structures, arrays,
114 globals, and aliased variables. In these cases, we have two types of
115 definitions. For globals, structures, and arrays, we can determine from
116 a statement whether a variable of these types has a killing definition.
117 If the variable does, then the statement is marked as having a
118 @dfn{must definition} of that variable. However, if a statement is only
119 defining a part of the variable (i.e.@: a field in a structure), or if we
120 know that a statement might define the variable but we cannot say for sure,
then we mark that statement as having a @dfn{may definition}.  For
instance, given a fragment along the lines of the following:
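@smallexample
if (@dots{})
  p = &a;
else
  p = &b;
*p = 5;
return *p;
@end smallexample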
137 The assignment @code{*p = 5} may be a definition of @code{a} or
138 @code{b}. If we cannot determine statically where @code{p} is
139 pointing to at the time of the store operation, we create virtual
140 definitions to mark that statement as a potential definition site for
141 @code{a} and @code{b}. Memory loads are similarly marked with virtual
142 use operands. Virtual operands are shown in tree dumps right before
143 the statement that contains them. To request a tree dump with virtual
144 operands, use the @option{-vops} option to @option{-fdump-tree}:
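For the fragment above, the dump might look roughly like this (a sketch;
the exact dump syntax varies between GCC releases):

@smallexample
# a = VDEF <a>
# b = VDEF <b>
*p = 5;

# VUSE <a>
# VUSE <b>
return *p;
@end smallexample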
164 Notice that @code{VDEF} operands have two copies of the referenced
165 variable. This indicates that this is not a killing definition of
166 that variable. In this case we refer to it as a @dfn{may definition}
167 or @dfn{aliased store}. The presence of the second copy of the
168 variable in the @code{VDEF} operand will become important when the
169 function is converted into SSA form. This will be used to link all
170 the non-killing definitions to prevent optimizations from making
171 incorrect assumptions about them.
173 Operands are updated as soon as the statement is finished via a call
174 to @code{update_stmt}. If statement elements are changed via
175 @code{SET_USE} or @code{SET_DEF}, then no further action is required
176 (i.e., those macros take care of updating the statement). If changes
177 are made by manipulating the statement's tree directly, then a call
178 must be made to @code{update_stmt} when complete. Calling one of the
179 @code{bsi_insert} routines or @code{bsi_replace} performs an implicit
180 call to @code{update_stmt}.
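In other words, the two idioms look like this (a minimal sketch, where
@code{use_p} and @code{new_name} stand for an operand pointer and its
replacement):

@smallexample
/* Using the operand macros: no explicit update is needed.  */
SET_USE (use_p, new_name);

/* Modifying the statement's tree directly: an explicit re-scan of the
   operands is required afterwards.  */
@dots{} modify STMT in place @dots{}
update_stmt (stmt);
@end smallexample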
182 @subsection Operand Iterators And Access Routines
183 @cindex Operand Iterators
184 @cindex Operand Access Routines
186 Operands are collected by @file{tree-ssa-operands.c}. They are stored
187 inside each statement's annotation and can be accessed through either the
188 operand iterators or an access routine.
The following access routines are available for examining operands:

@enumerate
193 @item @code{SINGLE_SSA_@{USE,DEF,TREE@}_OPERAND}: These accessors will return
194 NULL unless there is exactly one operand matching the specified flags. If
195 there is exactly one operand, the operand is returned as either a @code{tree},
196 @code{def_operand_p}, or @code{use_operand_p}.
@smallexample
tree t = SINGLE_SSA_TREE_OPERAND (stmt, flags);
use_operand_p u = SINGLE_SSA_USE_OPERAND (stmt, SSA_OP_VIRTUAL_USES);
def_operand_p d = SINGLE_SSA_DEF_OPERAND (stmt, SSA_OP_ALL_DEFS);
@end smallexample
204 @item @code{ZERO_SSA_OPERANDS}: This macro returns true if there are no
205 operands matching the specified flags.
@smallexample
  if (ZERO_SSA_OPERANDS (stmt, SSA_OP_ALL_VIRTUALS))
    return;
@end smallexample
@item @code{NUM_SSA_OPERANDS}: This macro returns the number of operands
matching @var{flags}.  This actually executes a loop to perform the count, so
only use this if it is really needed.

@smallexample
  int count = NUM_SSA_OPERANDS (stmt, flags);
@end smallexample
@end enumerate
222 If you wish to iterate over some or all operands, use the
223 @code{FOR_EACH_SSA_@{USE,DEF,TREE@}_OPERAND} iterator. For example, to print
224 all the operands for a statement:
@smallexample
void
print_ops (tree stmt)
@{
  ssa_op_iter iter;
  tree var;

  FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_ALL_OPERANDS)
    print_generic_expr (stderr, var, TDF_SLIM);
@}
@end smallexample
How to choose the appropriate iterator:

@enumerate
@item Determine whether you need to see the operand pointers, or just the
trees, and choose the appropriate macro:
@smallexample
use_operand_p   FOR_EACH_SSA_USE_OPERAND
def_operand_p   FOR_EACH_SSA_DEF_OPERAND
tree            FOR_EACH_SSA_TREE_OPERAND
@end smallexample
253 @item You need to declare a variable of the type you are interested
in, and an @code{ssa_op_iter} structure which serves as the loop controlling
variable.
257 @item Determine which operands you wish to use, and specify the flags of
258 those you are interested in. They are documented in
259 @file{tree-ssa-operands.h}:
@smallexample
#define SSA_OP_USE              0x01    /* @r{Real USE operands.} */
#define SSA_OP_DEF              0x02    /* @r{Real DEF operands.} */
#define SSA_OP_VUSE             0x04    /* @r{VUSE operands.} */
#define SSA_OP_VDEF             0x08    /* @r{VDEF operands.} */

/* @r{These are commonly grouped operand flags.} */
#define SSA_OP_VIRTUAL_USES     (SSA_OP_VUSE)
#define SSA_OP_VIRTUAL_DEFS     (SSA_OP_VDEF)
#define SSA_OP_ALL_VIRTUALS     (SSA_OP_VIRTUAL_USES | SSA_OP_VIRTUAL_DEFS)
#define SSA_OP_ALL_USES         (SSA_OP_VIRTUAL_USES | SSA_OP_USE)
#define SSA_OP_ALL_DEFS         (SSA_OP_VIRTUAL_DEFS | SSA_OP_DEF)
#define SSA_OP_ALL_OPERANDS     (SSA_OP_ALL_USES | SSA_OP_ALL_DEFS)
@end smallexample
@end enumerate
277 So if you want to look at the use pointers for all the @code{USE} and
278 @code{VUSE} operands, you would do something like:
@smallexample
  FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, (SSA_OP_USE | SSA_OP_VUSE))
    @{
      process_use_ptr (use_p);
    @}
@end smallexample
290 The @code{TREE} macro is basically the same as the @code{USE} and
291 @code{DEF} macros, only with the use or def dereferenced via
292 @code{USE_FROM_PTR (use_p)} and @code{DEF_FROM_PTR (def_p)}. Since we
aren't using operand pointers, use and def flags can be mixed.  For
example:
@smallexample
  FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_VUSE)
    @{
      print_generic_expr (stderr, var, TDF_SLIM);
    @}
@end smallexample
305 @code{VDEF}s are broken into two flags, one for the
@code{DEF} portion (@code{SSA_OP_VDEF}) and one for the @code{USE} portion
307 (@code{SSA_OP_VUSE}).
309 There are many examples in the code, in addition to the documentation
310 in @file{tree-ssa-operands.h} and @file{ssa-iterators.h}.
There are also a couple of variants on the statement iterators regarding PHI
nodes.

@code{FOR_EACH_PHI_ARG} works exactly like
@code{FOR_EACH_SSA_USE_OPERAND}, except it works over @code{PHI} arguments
instead of statement operands.
@smallexample
/* Look at every virtual PHI use.  */
FOR_EACH_PHI_ARG (use_p, phi_stmt, iter, SSA_OP_VIRTUAL_USES)
  @dots{}

/* Look at every real PHI use.  */
FOR_EACH_PHI_ARG (use_p, phi_stmt, iter, SSA_OP_USE)
  @dots{}

/* Look at every PHI use.  */
FOR_EACH_PHI_ARG (use_p, phi_stmt, iter, SSA_OP_ALL_USES)
  @dots{}
@end smallexample
335 @code{FOR_EACH_PHI_OR_STMT_@{USE,DEF@}} works exactly like
336 @code{FOR_EACH_SSA_@{USE,DEF@}_OPERAND}, except it will function on
337 either a statement or a @code{PHI} node. These should be used when it is
338 appropriate but they are not quite as efficient as the individual
339 @code{FOR_EACH_PHI} and @code{FOR_EACH_SSA} routines.
@smallexample
FOR_EACH_PHI_OR_STMT_USE (use_operand_p, stmt, iter, flags)
  @dots{}

FOR_EACH_PHI_OR_STMT_DEF (def_operand_p, phi, iter, flags)
  @dots{}
@end smallexample
353 @subsection Immediate Uses
354 @cindex Immediate Uses
356 Immediate use information is now always available. Using the immediate use
357 iterators, you may examine every use of any @code{SSA_NAME}. For instance,
358 to change each use of @code{ssa_var} to @code{ssa_var2} and call fold_stmt on
359 each stmt after that is done:
@smallexample
  use_operand_p imm_use_p;
  imm_use_iterator iterator;
  gimple stmt;

  FOR_EACH_IMM_USE_STMT (stmt, iterator, ssa_var)
    @{
      FOR_EACH_IMM_USE_ON_STMT (imm_use_p, iterator)
        SET_USE (imm_use_p, ssa_var_2);
      fold_stmt (stmt);
    @}
@end smallexample
There are two iterators which can be used.  @code{FOR_EACH_IMM_USE_FAST} is
376 used when the immediate uses are not changed, i.e., you are looking at the
377 uses, but not setting them.
379 If they do get changed, then care must be taken that things are not changed
380 under the iterators, so use the @code{FOR_EACH_IMM_USE_STMT} and
381 @code{FOR_EACH_IMM_USE_ON_STMT} iterators. They attempt to preserve the
382 sanity of the use list by moving all the uses for a statement into
383 a controlled position, and then iterating over those uses. Then the
384 optimization can manipulate the stmt when all the uses have been
385 processed. This is a little slower than the FAST version since it adds a
386 placeholder element and must sort through the list a bit for each statement.
This placeholder element must also be removed if the loop is
terminated early.  The macro @code{BREAK_FROM_IMM_USE_STMT} is provided
to do this:

@smallexample
  FOR_EACH_IMM_USE_STMT (stmt, iterator, ssa_var)
    @{
      if (stmt == last_stmt)
        BREAK_FROM_IMM_USE_STMT (iterator);

      FOR_EACH_IMM_USE_ON_STMT (imm_use_p, iterator)
        SET_USE (imm_use_p, ssa_var_2);
      fold_stmt (stmt);
    @}
@end smallexample
403 There are checks in @code{verify_ssa} which verify that the immediate use list
404 is up to date, as well as checking that an optimization didn't break from the
loop without using this macro.  It is safe to simply @code{break} from a
@code{FOR_EACH_IMM_USE_FAST} traversal.
Some useful functions and macros:

@itemize @bullet
@item @code{has_zero_uses (ssa_var)}: Returns true if there are no uses of
@code{ssa_var}.
@item @code{has_single_use (ssa_var)}: Returns true if there is only a
single use of @code{ssa_var}.
@item @code{single_imm_use (ssa_var, use_operand_p *ptr, tree *stmt)}:
Returns true if there is only a single use of @code{ssa_var}, and also returns
the use pointer and statement it occurs in, in the second and third parameters.
@item @code{num_imm_uses (ssa_var)}: Returns the number of immediate uses of
@code{ssa_var}.  It is better not to use this if possible since it simply
utilizes a loop to count the uses.
@item @code{PHI_ARG_INDEX_FROM_USE (use_p)}: Given a use within a @code{PHI}
node, return the index number for the use.  An assert is triggered if the use
isn't located in a @code{PHI} node.
@item @code{USE_STMT (use_p)}: Return the statement a use occurs in.
@end itemize
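For instance, a pass that wants to rewrite the only use of a name could do
something like the following (a minimal sketch; @code{new_val} stands for
whatever replacement value the pass has computed):

@smallexample
  use_operand_p use_p;
  tree use_stmt;

  /* If ssa_var has exactly one use, redirect that use to new_val.  */
  if (single_imm_use (ssa_var, &use_p, &use_stmt))
    SET_USE (use_p, new_val);
@end smallexample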
426 Note that uses are not put into an immediate use list until their statement is
427 actually inserted into the instruction stream via a @code{bsi_*} routine.
429 It is also still possible to utilize lazy updating of statements, but this
430 should be used only when absolutely required. Both alias analysis and the
431 dominator optimizations currently do this.
433 When lazy updating is being used, the immediate use information is out of date
434 and cannot be used reliably. Lazy updating is achieved by simply marking
435 statements modified via calls to @code{mark_stmt_modified} instead of
436 @code{update_stmt}. When lazy updating is no longer required, all the
437 modified statements must have @code{update_stmt} called in order to bring them
438 up to date. This must be done before the optimization is finished, or
439 @code{verify_ssa} will trigger an abort.
441 This is done with a simple loop over the instruction stream:
@smallexample
  block_stmt_iterator bsi;
  basic_block bb;

  FOR_EACH_BB (bb)
    for (bsi = bsi_start (bb); !bsi_end_p (bsi); bsi_next (&bsi))
      update_stmt_if_modified (bsi_stmt (bsi));
@end smallexample
453 @section Static Single Assignment
455 @cindex static single assignment
457 Most of the tree optimizers rely on the data flow information provided
458 by the Static Single Assignment (SSA) form. We implement the SSA form
459 as described in @cite{R. Cytron, J. Ferrante, B. Rosen, M. Wegman, and
460 K. Zadeck. Efficiently Computing Static Single Assignment Form and the
461 Control Dependence Graph. ACM Transactions on Programming Languages
462 and Systems, 13(4):451-490, October 1991}.
464 The SSA form is based on the premise that program variables are
465 assigned in exactly one location in the program. Multiple assignments
466 to the same variable create new versions of that variable. Naturally,
467 actual programs are seldom in SSA form initially because variables
468 tend to be assigned multiple times. The compiler modifies the program
469 representation so that every time a variable is assigned in the code,
470 a new version of the variable is created. Different versions of the
471 same variable are distinguished by subscripting the variable name with
472 its version number. Variables used in the right-hand side of
473 expressions are renamed so that their version number matches that of
474 the most recent assignment.
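For instance, a straight-line fragment such as

@smallexample
a = 3;
b = a + 1;
a = b + 5;
return a;
@end smallexample

@noindent
is rewritten into SSA form as

@smallexample
a_1 = 3;
b_2 = a_1 + 1;
a_3 = b_2 + 5;
return a_3;
@end smallexample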
476 We represent variable versions using @code{SSA_NAME} nodes. The
renaming process in @file{tree-into-ssa.c} wraps every real and
478 virtual operand with an @code{SSA_NAME} node which contains
479 the version number and the statement that created the
480 @code{SSA_NAME}. Only definitions and virtual definitions may
481 create new @code{SSA_NAME} nodes.
484 Sometimes, flow of control makes it impossible to determine the
485 most recent version of a variable. In these cases, the compiler
inserts an artificial definition for that variable, called a
@dfn{PHI function} or @dfn{PHI node}.  This new definition merges
488 all the incoming versions of the variable to create a new name
489 for it. For instance,
@smallexample
if (@dots{})
  a_1 = 5;
else if (@dots{})
  a_2 = 2;
else
  a_3 = 13;

# a_4 = PHI <a_1, a_2, a_3>
return a_4;
@end smallexample
503 Since it is not possible to determine which of the three branches
504 will be taken at runtime, we don't know which of @code{a_1},
505 @code{a_2} or @code{a_3} to use at the return statement. So, the
506 SSA renamer creates a new version @code{a_4} which is assigned
507 the result of ``merging'' @code{a_1}, @code{a_2} and @code{a_3}.
Hence, PHI nodes mean ``one of these operands.  I don't know
which one''.
511 The following functions can be used to examine PHI nodes
513 @defun gimple_phi_result (@var{phi})
Returns the @code{SSA_NAME} created by PHI node @var{phi} (i.e.,
@var{phi}'s LHS).
@end defun
518 @defun gimple_phi_num_args (@var{phi})
519 Returns the number of arguments in @var{phi}. This number is exactly
the number of incoming edges to the basic block holding @var{phi}@.
@end defun
523 @defun gimple_phi_arg (@var{phi}, @var{i})
Returns the @var{i}th argument of @var{phi}@.
@end defun
527 @defun gimple_phi_arg_edge (@var{phi}, @var{i})
Returns the incoming edge for the @var{i}th argument of @var{phi}.
@end defun
531 @defun gimple_phi_arg_def (@var{phi}, @var{i})
Returns the @code{SSA_NAME} for the @var{i}th argument of @var{phi}.
@end defun
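As a short sketch of these accessors (assuming @var{phi} is a PHI node
obtained elsewhere), the following loop prints every argument of the node
together with the index of the basic block it comes from:

@smallexample
  unsigned i;

  for (i = 0; i < gimple_phi_num_args (phi); i++)
    @{
      tree arg = gimple_phi_arg_def (phi, i);
      edge e = gimple_phi_arg_edge (phi, i);

      print_generic_expr (stderr, arg, TDF_SLIM);
      fprintf (stderr, " comes from basic block %d\n", e->src->index);
    @}
@end smallexample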
536 @subsection Preserving the SSA form
538 @cindex preserving SSA form
539 Some optimization passes make changes to the function that
540 invalidate the SSA property. This can happen when a pass has
541 added new symbols or changed the program so that variables that
542 were previously aliased aren't anymore. Whenever something like this
543 happens, the affected symbols must be renamed into SSA form again.
544 Transformations that emit new code or replicate existing statements
545 will also need to update the SSA form@.
547 Since GCC implements two different SSA forms for register and virtual
548 variables, keeping the SSA form up to date depends on whether you are
549 updating register or virtual names. In both cases, the general idea
550 behind incremental SSA updates is similar: when new SSA names are
created, they typically are meant to replace other existing names in
the program.
554 For instance, given the following code:
570 Suppose that we insert new names @code{x_10} and @code{x_11} (lines
571 @code{4} and @code{8})@.
589 We want to replace all the uses of @code{x_1} with the new definitions
590 of @code{x_10} and @code{x_11}. Note that the only uses that should
591 be replaced are those at lines @code{5}, @code{9} and @code{11}.
592 Also, the use of @code{x_7} at line @code{9} should @emph{not} be
593 replaced (this is why we cannot just mark symbol @code{x} for
596 Additionally, we may need to insert a PHI node at line @code{11}
597 because that is a merge point for @code{x_10} and @code{x_11}. So the
598 use of @code{x_1} at line @code{11} will be replaced with the new PHI
599 node. The insertion of PHI nodes is optional. They are not strictly
600 necessary to preserve the SSA form, and depending on what the caller
601 inserted, they may not even be useful for the optimizers@.
603 Updating the SSA form is a two step process. First, the pass has to
604 identify which names need to be updated and/or which symbols need to
605 be renamed into SSA form for the first time. When new names are
606 introduced to replace existing names in the program, the mapping
between the old and the new names is registered by calling
608 @code{register_new_name_mapping} (note that if your pass creates new
609 code by duplicating basic blocks, the call to @code{tree_duplicate_bb}
610 will set up the necessary mappings automatically).
612 After the replacement mappings have been registered and new symbols
613 marked for renaming, a call to @code{update_ssa} makes the registered
changes.  This can be done with an explicit call or by setting
@code{TODO} flags in the @code{tree_opt_pass} structure for your pass.
There are several @code{TODO} flags that control the behavior of
@code{update_ssa}:

@itemize @bullet
620 @item @code{TODO_update_ssa}. Update the SSA form inserting PHI nodes
621 for newly exposed symbols and virtual names marked for updating.
622 When updating real names, only insert PHI nodes for a real name
623 @code{O_j} in blocks reached by all the new and old definitions for
624 @code{O_j}. If the iterated dominance frontier for @code{O_j}
625 is not pruned, we may end up inserting PHI nodes in blocks that
626 have one or more edges with no incoming definition for
627 @code{O_j}. This would lead to uninitialized warnings for
628 @code{O_j}'s symbol@.
630 @item @code{TODO_update_ssa_no_phi}. Update the SSA form without
631 inserting any new PHI nodes at all. This is used by passes that
632 have either inserted all the PHI nodes themselves or passes that
need only to patch use-def and def-def chains for virtuals.
637 @item @code{TODO_update_ssa_full_phi}. Insert PHI nodes everywhere
638 they are needed. No pruning of the IDF is done. This is used
639 by passes that need the PHI nodes for @code{O_j} even if it
640 means that some arguments will come from the default definition
641 of @code{O_j}'s symbol (e.g., @code{pass_linear_transform})@.
643 WARNING: If you need to use this flag, chances are that your
644 pass may be doing something wrong. Inserting PHI nodes for an
645 old name where not all edges carry a new replacement may lead to
646 silent codegen errors or spurious uninitialized warnings@.
648 @item @code{TODO_update_ssa_only_virtuals}. Passes that update the
649 SSA form on their own may want to delegate the updating of
650 virtual names to the generic updater. Since FUD chains are
651 easier to maintain, this simplifies the work they need to do.
652 NOTE: If this flag is used, any OLD->NEW mappings for real names
653 are explicitly destroyed and only the symbols marked for
renaming are processed@.
@end itemize
657 @subsection Preserving the virtual SSA form
658 @cindex preserving virtual SSA form
660 The virtual SSA form is harder to preserve than the non-virtual SSA form
661 mainly because the set of virtual operands for a statement may change at
662 what some would consider unexpected times. In general, statement
663 modifications should be bracketed between calls to
664 @code{push_stmt_changes} and @code{pop_stmt_changes}. For example,
@smallexample
static void
munge_stmt (tree stmt)
@{
  push_stmt_changes (&stmt);
  @dots{} rewrite STMT @dots{}
  pop_stmt_changes (&stmt);
@}
@end smallexample
675 The call to @code{push_stmt_changes} saves the current state of the
676 statement operands and the call to @code{pop_stmt_changes} compares
677 the saved state with the current one and does the appropriate symbol
678 marking for the SSA renamer.
680 It is possible to modify several statements at a time, provided that
681 @code{push_stmt_changes} and @code{pop_stmt_changes} are called in
682 LIFO order, as when processing a stack of statements.
684 Additionally, if the pass discovers that it did not need to make
685 changes to the statement after calling @code{push_stmt_changes}, it
686 can simply discard the topmost change buffer by calling
687 @code{discard_stmt_changes}. This will avoid the expensive operand
688 re-scan operation and the buffer comparison that determines if symbols
689 need to be marked for renaming.
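For example, a pass that inspects a statement and decides not to touch it
can discard the change buffer (a minimal sketch; @code{interesting_p} is a
hypothetical predicate standing for whatever test the pass performs):

@smallexample
  push_stmt_changes (&stmt);

  if (!interesting_p (stmt))   /* hypothetical predicate */
    @{
      /* Nothing was modified; throw away the change buffer cheaply.  */
      discard_stmt_changes (&stmt);
      return;
    @}

  @dots{} rewrite STMT @dots{}
  pop_stmt_changes (&stmt);
@end smallexample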
691 @subsection Examining @code{SSA_NAME} nodes
692 @cindex examining SSA_NAMEs
694 The following macros can be used to examine @code{SSA_NAME} nodes
696 @defmac SSA_NAME_DEF_STMT (@var{var})
697 Returns the statement @var{s} that creates the @code{SSA_NAME}
698 @var{var}. If @var{s} is an empty statement (i.e., @code{IS_EMPTY_STMT
699 (@var{s})} returns @code{true}), it means that the first reference to
this variable is a USE or a VUSE@.
@end defmac
703 @defmac SSA_NAME_VERSION (@var{var})
Returns the version number of the @code{SSA_NAME} object @var{var}.
@end defmac
708 @subsection Walking the dominator tree
710 @deftypefn {Tree SSA function} void walk_dominator_tree (@var{walk_data}, @var{bb})
712 This function walks the dominator tree for the current CFG calling a
set of callback functions defined in @code{struct dom_walk_data} in
@file{domwalk.h}.  The callback functions you need to define give you
hooks to execute custom code at various points during traversal:

@enumerate
718 @item Once to initialize any local data needed while processing
719 @var{bb} and its children. This local data is pushed into an
720 internal stack which is automatically pushed and popped as the
721 walker traverses the dominator tree.
723 @item Once before traversing all the statements in the @var{bb}.
725 @item Once for every statement inside @var{bb}.
727 @item Once after traversing all the statements and before recursing
728 into @var{bb}'s dominator children.
730 @item It then recurses into all the dominator children of @var{bb}.
732 @item After recursing into all the dominator children of @var{bb} it
733 can, optionally, traverse every statement in @var{bb} again
734 (i.e., repeating steps 2 and 3).
736 @item Once after walking the statements in @var{bb} and @var{bb}'s
dominator children.  At this stage, the block local data stack
is popped.
@end enumerate
@end deftypefn
743 @section Alias analysis
745 @cindex flow-sensitive alias analysis
746 @cindex flow-insensitive alias analysis
Alias analysis in GIMPLE SSA form consists of two pieces.  First,
the virtual SSA web ties conflicting memory accesses and provides
an SSA use-def chain and SSA immediate-use chains for walking
possibly dependent memory accesses.  Second, an alias-oracle can
be queried to disambiguate explicit and implicit memory references.

@itemize @bullet
755 @item Memory SSA form.
All statements that may use memory have exactly one accompanying use of
a virtual SSA name that represents the state of memory at the
given point in the IL.

All statements that may define memory have exactly one accompanying
definition of a virtual SSA name using the previous state of memory
763 and defining the new state of memory after the given point in the IL.
@smallexample
int i;
int foo (void)
@{
  # .MEM_3 = VDEF <.MEM_2(D)>
  i = 1;

  # VUSE <.MEM_3>
  return i;
@}
@end smallexample
776 The virtual SSA names in this case are @code{.MEM_2(D)} and
777 @code{.MEM_3}. The store to the global variable @code{i}
778 defines @code{.MEM_3} invalidating @code{.MEM_2(D)}. The
779 load from @code{i} uses that new state @code{.MEM_3}.
The virtual SSA web serves as constraints on the SSA optimizers,
preventing illegitimate code motion and optimization.  It
also provides a way to walk related memory statements.
785 @item Points-to and escape analysis.
787 Points-to analysis builds a set of constraints from the GIMPLE
788 SSA IL representing all pointer operations and facts we do
789 or do not know about pointers. Solving this set of constraints
790 yields a conservatively correct solution for each pointer
791 variable in the program (though we are only interested in
792 SSA name pointers) as to what it may possibly point to.
794 This points-to solution for a given SSA name pointer is stored
795 in the @code{pt_solution} sub-structure of the
796 @code{SSA_NAME_PTR_INFO} record. The following accessor
797 functions are available:
@itemize @bullet
@item @code{pt_solution_includes}
@item @code{pt_solutions_intersect}
@end itemize
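For example, to ask whether the pointer @code{ptr} may point to a particular
declaration (a sketch; @code{ptr} and @code{decl} are placeholders for an
@code{SSA_NAME} pointer and a @code{VAR_DECL}, and the points-to solution
lives in the @code{pt} field of the record):

@smallexample
  struct ptr_info_def *pi = SSA_NAME_PTR_INFO (ptr);

  if (pi && pt_solution_includes (&pi->pt, decl))
    @{
      /* ptr may point to decl.  */
      @dots{}
    @}
@end smallexample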
Points-to analysis also computes the solution for two special
sets of pointers, @code{ESCAPED} and @code{CALLUSED}.  Those
806 represent all memory that has escaped the scope of analysis
807 or that is used by pure or nested const calls.
809 @item Type-based alias analysis
Type-based alias analysis is front-end dependent, though generic
support is provided by the middle-end in @file{alias.c}.  TBAA
813 code is used by both tree optimizers and RTL optimizers.
815 Every language that wishes to perform language-specific alias analysis
816 should define a function that computes, given a @code{tree}
817 node, an alias set for the node. Nodes in different alias sets are not
818 allowed to alias. For an example, see the C front-end function
819 @code{c_get_alias_set}.
821 @item Tree alias-oracle
823 The tree alias-oracle provides means to disambiguate two memory
824 references and memory references against statements. The following
825 queries are available:
@itemize @bullet
@item @code{refs_may_alias_p}
@item @code{ref_maybe_used_by_stmt_p}
@item @code{stmt_may_clobber_ref_p}
@end itemize
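For example, a pass can ask whether a statement may clobber, or whether two
references may overlap (a sketch; @code{ref}, @code{ref2} and @code{stmt}
are placeholders):

@smallexample
  if (stmt_may_clobber_ref_p (stmt, ref))
    @{
      /* The statement may store into the memory REF designates.  */
      @dots{}
    @}

  if (refs_may_alias_p (ref, ref2))
    @{
      /* The two references may access overlapping memory.  */
      @dots{}
    @}
@end smallexample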
In addition to these, two kinds of statement walkers are available for
walking statements related to a reference @var{ref}.
@code{walk_non_aliased_vuses} walks over dominating memory-defining
statements and invokes its callback with the non-aliased VUSE for each
statement that does not clobber @var{ref}.  The walk stops at the first
clobbering statement, or earlier if the callback requests it.
@code{walk_aliased_vdefs} walks over dominating memory-defining
statements and invokes its callback on each statement clobbering
@var{ref}, providing its aliasing VDEF.  The walk stops if the callback
requests it.
@end itemize
847 @section Memory model
The memory model used by the middle-end follows that of the C/C++
languages.  The middle-end has the notion of an effective type
of a memory region, which is used for type-based alias analysis.
854 The following is a refinement of ISO C99 6.5/6, clarifying the block copy case
855 to follow common sense and extending the concept of a dynamic effective
856 type to objects with a declared type as required for C++.
859 The effective type of an object for an access to its stored value is
860 the declared type of the object or the effective type determined by
861 a previous store to it. If a value is stored into an object through
862 an lvalue having a type that is not a character type, then the
863 type of the lvalue becomes the effective type of the object for that
864 access and for subsequent accesses that do not modify the stored value.
865 If a value is copied into an object using @code{memcpy} or @code{memmove},
866 or is copied as an array of character type, then the effective type
867 of the modified object for that access and for subsequent accesses that
868 do not modify the value is undetermined. For all other accesses to an
869 object, the effective type of the object is simply the type of the
870 lvalue used for the access.
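For instance, consider a function that stores into dynamically allocated
memory through a @code{float} lvalue and then reads it back through an
@code{int} lvalue:

@smallexample
#include <stdlib.h>

int
f (void)
@{
  void *buf = malloc (16);
  float *pf = buf;
  int *pi = buf;

  *pf = 1.0f;   /* The effective type of the object becomes float.  */
  return *pi;   /* An int access to an object whose effective type is
                   float; TBAA may assume it does not alias the store.  */
@}
@end smallexample

Because the store through @code{*pf} sets the effective type of the
allocated object to @code{float}, the subsequent load through @code{*pi}
accesses the object with an incompatible effective type, and type-based
alias analysis is allowed to assume the two accesses do not conflict.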