@c Copyright (C) 2004-2013 Free Software Foundation, Inc.
@c This is part of the GCC manual.
@c For copying conditions, see the file gcc.texi.

@c ---------------------------------------------------------------------
@c Analysis and Optimization of GIMPLE tuples
@c ---------------------------------------------------------------------

@node Tree SSA
@chapter Analysis and Optimization of GIMPLE tuples
@cindex Optimization infrastructure for GIMPLE

GCC uses three main intermediate languages to represent the program
during compilation: GENERIC, GIMPLE and RTL@.  GENERIC is a
language-independent representation generated by each front end.  It
serves as an interface between the parser and the optimizers.
GENERIC is a common representation that is able to represent programs
written in all the languages supported by GCC@.

GIMPLE and RTL are used to optimize the program.  GIMPLE is used for
target- and language-independent optimizations (e.g., inlining,
constant propagation, tail call elimination, redundancy elimination,
etc).  Much like GENERIC, GIMPLE is a language-independent, tree-based
representation.  However, it differs from GENERIC in that the GIMPLE
grammar is more restrictive: expressions contain no more than 3
operands (except function calls), it has no control flow structures,
and expressions with side effects are only allowed on the right hand
side of assignments.  See the chapter describing GENERIC and GIMPLE
for more details.

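For instance, an expression with more than three operands is broken up
into a sequence of simpler assignments through compiler-generated
temporaries; schematically (the temporary name is illustrative):

@smallexample
/* GENERIC form.  */
a = b + c + d;

/* Equivalent GIMPLE form.  */
T1 = b + c;
a = T1 + d;
@end smallexample
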
This chapter describes the data structures and functions used in the
GIMPLE optimizers (also known as ``tree optimizers'' or ``middle
end'').  In particular, it focuses on all the macros, data structures,
functions and programming constructs needed to implement optimization
passes for GIMPLE@.

@menu
* Annotations::         Attributes for variables.
* SSA Operands::        SSA names referenced by GIMPLE statements.
* SSA::                 Static Single Assignment representation.
* Alias analysis::      Representing aliased loads and stores.
* Memory model::        Memory model used by the middle-end.
@end menu

@node Annotations
@section Annotations
@cindex annotations

The optimizers need to associate attributes with variables during the
optimization process.  For instance, we need to know whether a
variable has aliases.  All these attributes are stored in data
structures called annotations which are then linked to the field
@code{ann} in @code{struct tree_common}.

Presently, we define annotations for variables (@code{var_ann_t}).
Annotations are defined and documented in @file{tree-flow.h}.

@node SSA Operands
@section SSA Operands
@cindex operands
@cindex virtual operands

Almost every GIMPLE statement will contain a reference to a variable
or memory location.  Since statements come in different shapes and
sizes, their operands are going to be located at various spots inside
the statement's tree.  To facilitate access to the statement's
operands, they are organized into lists associated with each
statement's annotation.  Each element in an operand list is a pointer
to a @code{VAR_DECL}, @code{PARM_DECL} or @code{SSA_NAME} tree node.
This provides a very convenient way of examining and replacing
operands.

Data flow analysis and optimization is done on all tree nodes
representing variables.  Any node for which @code{SSA_VAR_P} returns
nonzero is considered when scanning statement operands.  However, not
all @code{SSA_VAR_P} variables are processed in the same way.  For the
purposes of optimization, we need to distinguish between references to
local scalar variables and references to globals, statics, structures,
arrays, aliased variables, etc.  The reason is simple: the compiler
can gather complete data flow information for a local scalar.  On the
other hand, a global variable may be modified by a function call, and
it may not be possible to keep track of all the elements of an array
or the fields of a structure, etc.

The operand scanner gathers two kinds of operands: @dfn{real} and
@dfn{virtual}.  An operand for which @code{is_gimple_reg} returns true
is considered real, otherwise it is a virtual operand.  We also
distinguish between uses and definitions.  An operand is used if its
value is loaded by the statement (e.g., the operand at the RHS of an
assignment).  If the statement assigns a new value to the operand, the
operand is considered a definition (e.g., the operand at the LHS of
an assignment).

Virtual and real operands also have very different data flow
properties.  Real operands are unambiguous references to the
full object that they represent.  For instance, given

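@smallexample
a = b
@end smallexample
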
Since @code{a} and @code{b} are non-aliased locals, the statement
@code{a = b} will have one real definition and one real use because
variable @code{a} is completely modified with the contents of
variable @code{b}.  Real definitions are also known as @dfn{killing
definitions}.  Similarly, the use of @code{b} reads all its bits.

In contrast, virtual operands are used with variables that can have
a partial or ambiguous reference.  This includes structures, arrays,
globals, and aliased variables.  In these cases, we have two types of
definitions.  For globals, structures, and arrays, we can determine from
a statement whether a variable of these types has a killing definition.
If the variable does, then the statement is marked as having a
@dfn{must definition} of that variable.  However, if a statement is only
defining a part of the variable (i.e.@: a field in a structure), or if we
know that a statement might define the variable but we cannot say for sure,
then we mark that statement as having a @dfn{may definition}.  For
instance, given a fragment such as the following, where @code{p} may end
up pointing to either @code{a} or @code{b}:

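@smallexample
int a, b, *p;

if (@dots{})
  p = &a;
else
  p = &b;
*p = 5;
return *p;
@end smallexample
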
The assignment @code{*p = 5} may be a definition of @code{a} or
@code{b}.  If we cannot determine statically where @code{p} is
pointing to at the time of the store operation, we create virtual
definitions to mark that statement as a potential definition site for
@code{a} and @code{b}.  Memory loads are similarly marked with virtual
use operands.  Virtual operands are shown in tree dumps right before
the statement that contains them.  To request a tree dump with virtual
operands, use the @option{-vops} option to @option{-fdump-tree}.

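For the fragment above, the relevant portion of such a dump looks
roughly like this:

@smallexample
if (@dots{})
  p = &a;
else
  p = &b;
# a = VDEF <a>
# b = VDEF <b>
*p = 5;
# VUSE <a>
# VUSE <b>
return *p;
@end smallexample
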
Notice that @code{VDEF} operands have two copies of the referenced
variable.  This indicates that this is not a killing definition of
that variable.  In this case we refer to it as a @dfn{may definition}
or @dfn{aliased store}.  The presence of the second copy of the
variable in the @code{VDEF} operand will become important when the
function is converted into SSA form.  This will be used to link all
the non-killing definitions to prevent optimizations from making
incorrect assumptions about them.

Operands are updated as soon as the statement is finished via a call
to @code{update_stmt}.  If statement elements are changed via
@code{SET_USE} or @code{SET_DEF}, then no further action is required
(i.e., those macros take care of updating the statement).  If changes
are made by manipulating the statement's tree directly, then a call
must be made to @code{update_stmt} when complete.  Calling one of the
@code{bsi_insert} routines or @code{bsi_replace} performs an implicit
call to @code{update_stmt}.

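For instance, assuming @code{use_p} refers to one of the statement's
use operands and @code{new_name} is the replacement value, the first
style requires nothing beyond the macro itself:

@smallexample
  SET_USE (use_p, new_name);   /* Operand information is updated for us;
                                  no call to update_stmt is needed.  */
@end smallexample
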
@subsection Operand Iterators And Access Routines
@cindex Operand Iterators
@cindex Operand Access Routines

Operands are collected by @file{tree-ssa-operands.c}.  They are stored
inside each statement's annotation and can be accessed through either the
operand iterators or an access routine.

The following access routines are available for examining operands:

@itemize @bullet
@item @code{SINGLE_SSA_@{USE,DEF,TREE@}_OPERAND}: These accessors will return
NULL unless there is exactly one operand matching the specified flags.  If
there is exactly one operand, the operand is returned as either a @code{tree},
@code{def_operand_p}, or @code{use_operand_p}.

@smallexample
tree t = SINGLE_SSA_TREE_OPERAND (stmt, flags);
use_operand_p u = SINGLE_SSA_USE_OPERAND (stmt, SSA_OP_VIRTUAL_USES);
def_operand_p d = SINGLE_SSA_DEF_OPERAND (stmt, SSA_OP_ALL_DEFS);
@end smallexample

@item @code{ZERO_SSA_OPERANDS}: This macro returns true if there are no
operands matching the specified flags.

@smallexample
if (ZERO_SSA_OPERANDS (stmt, SSA_OP_ALL_VIRTUALS))
  return;
@end smallexample

@item @code{NUM_SSA_OPERANDS}: This macro returns the number of operands
matching @samp{flags}.  This actually executes a loop to perform the count, so
only use this if it is really needed.

@smallexample
int count = NUM_SSA_OPERANDS (stmt, flags);
@end smallexample
@end itemize

If you wish to iterate over some or all operands, use the
@code{FOR_EACH_SSA_@{USE,DEF,TREE@}_OPERAND} iterator.  For example, to print
all the operands for a statement:

@smallexample
void
print_ops (tree stmt)
@{
  tree var;
  ssa_op_iter iter;

  FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_ALL_OPERANDS)
    print_generic_expr (stderr, var, TDF_SLIM);
@}
@end smallexample

How to choose the appropriate iterator:

@enumerate
@item Determine whether you need to see the operand pointers, or just the
trees, and choose the appropriate macro:

@smallexample
Need               Macro
----               -----
use_operand_p      FOR_EACH_SSA_USE_OPERAND
def_operand_p      FOR_EACH_SSA_DEF_OPERAND
tree               FOR_EACH_SSA_TREE_OPERAND
@end smallexample

@item You need to declare a variable of the type you are interested
in, and an @code{ssa_op_iter} structure which serves as the loop
controlling variable.

@item Determine which operands you wish to use, and specify the flags of
those you are interested in.  They are documented in
@file{tree-ssa-operands.h}:

@smallexample
#define SSA_OP_USE              0x01    /* @r{Real USE operands.}  */
#define SSA_OP_DEF              0x02    /* @r{Real DEF operands.}  */
#define SSA_OP_VUSE             0x04    /* @r{VUSE operands.}  */
#define SSA_OP_VMAYUSE          0x08    /* @r{USE portion of VDEFS.}  */
#define SSA_OP_VDEF             0x10    /* @r{DEF portion of VDEFS.}  */

/* @r{These are commonly grouped operand flags.}  */
#define SSA_OP_VIRTUAL_USES     (SSA_OP_VUSE | SSA_OP_VMAYUSE)
#define SSA_OP_VIRTUAL_DEFS     (SSA_OP_VDEF)
#define SSA_OP_ALL_USES         (SSA_OP_VIRTUAL_USES | SSA_OP_USE)
#define SSA_OP_ALL_DEFS         (SSA_OP_VIRTUAL_DEFS | SSA_OP_DEF)
#define SSA_OP_ALL_OPERANDS     (SSA_OP_ALL_USES | SSA_OP_ALL_DEFS)
@end smallexample
@end enumerate

So if you want to look at the use pointers for all the @code{USE} and
@code{VUSE} operands, you would do something like:

@smallexample
  FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, (SSA_OP_USE | SSA_OP_VUSE))
    process_use_ptr (use_p);
@end smallexample

The @code{TREE} macro is basically the same as the @code{USE} and
@code{DEF} macros, only with the use or def dereferenced via
@code{USE_FROM_PTR (use_p)} and @code{DEF_FROM_PTR (def_p)}.  Since we
aren't using operand pointers, use and def flags can be mixed.

@smallexample
  FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_VUSE)
    print_generic_expr (stderr, var, TDF_SLIM);
@end smallexample

@code{VDEF}s are broken into two flags, one for the
@code{DEF} portion (@code{SSA_OP_VDEF}) and one for the @code{USE}
portion (@code{SSA_OP_VMAYUSE}).  If all you want to look at are the
@code{VDEF}s together, there is a fourth iterator macro for this,
which returns both a @code{def_operand_p} and a @code{use_operand_p}
for each @code{VDEF} in the statement.  Note that you don't need any
flags for this one.

@smallexample
  FOR_EACH_SSA_MAYDEF_OPERAND (def_p, use_p, stmt, iter)
    my_code;
@end smallexample

There are many examples in the code, as well as the documentation in
@file{tree-ssa-operands.h}.

There are also a couple of variants on the statement iterators for
@code{PHI} nodes.

@code{FOR_EACH_PHI_ARG} works exactly like
@code{FOR_EACH_SSA_USE_OPERAND}, except it works over @code{PHI} arguments
instead of statement operands.

@smallexample
/* Look at every virtual PHI use.  */
FOR_EACH_PHI_ARG (use_p, phi_stmt, iter, SSA_OP_VIRTUAL_USES)
  my_code;

/* Look at every real PHI use.  */
FOR_EACH_PHI_ARG (use_p, phi_stmt, iter, SSA_OP_USE)
  my_code;

/* Look at every PHI use.  */
FOR_EACH_PHI_ARG (use_p, phi_stmt, iter, SSA_OP_ALL_USES)
  my_code;
@end smallexample

@code{FOR_EACH_PHI_OR_STMT_@{USE,DEF@}} works exactly like
@code{FOR_EACH_SSA_@{USE,DEF@}_OPERAND}, except it will function on
either a statement or a @code{PHI} node.  These should be used when it is
appropriate but they are not quite as efficient as the individual
@code{FOR_EACH_PHI} and @code{FOR_EACH_SSA} routines.

@smallexample
FOR_EACH_PHI_OR_STMT_USE (use_operand_p, stmt, iter, flags)
  my_code;

FOR_EACH_PHI_OR_STMT_DEF (def_operand_p, phi, iter, flags)
  my_code;
@end smallexample

@subsection Immediate Uses
@cindex Immediate Uses

Immediate use information is now always available.  Using the immediate use
iterators, you may examine every use of any @code{SSA_NAME}.  For instance,
to change each use of @code{ssa_var} to @code{ssa_var_2} and call
@code{fold_stmt} on each statement after that is done:

@smallexample
  use_operand_p imm_use_p;
  imm_use_iterator iterator;
  tree ssa_var, stmt;

  FOR_EACH_IMM_USE_STMT (stmt, iterator, ssa_var)
    @{
      FOR_EACH_IMM_USE_ON_STMT (imm_use_p, iterator)
        SET_USE (imm_use_p, ssa_var_2);
      fold_stmt (stmt);
    @}
@end smallexample

There are two iterators which can be used.  @code{FOR_EACH_IMM_USE_FAST} is
used when the immediate uses are not changed, i.e., you are looking at the
uses, but not setting them.

If they do get changed, then care must be taken that things are not changed
under the iterators, so use the @code{FOR_EACH_IMM_USE_STMT} and
@code{FOR_EACH_IMM_USE_ON_STMT} iterators.  They attempt to preserve the
sanity of the use list by moving all the uses for a statement into
a controlled position, and then iterating over those uses.  Then the
optimization can manipulate the stmt when all the uses have been
processed.  This is a little slower than the FAST version since it adds a
placeholder element and must sort through the list a bit for each statement.
This placeholder element must also be removed if the loop is
terminated early.  The macro @code{BREAK_FROM_IMM_USE_STMT} is provided
to do this:

@smallexample
  FOR_EACH_IMM_USE_STMT (stmt, iterator, ssa_var)
    @{
      if (stmt == last_stmt)
        BREAK_FROM_IMM_USE_STMT (iterator);

      FOR_EACH_IMM_USE_ON_STMT (imm_use_p, iterator)
        SET_USE (imm_use_p, ssa_var_2);
      fold_stmt (stmt);
    @}
@end smallexample

There are checks in @code{verify_ssa} which verify that the immediate use list
is up to date, as well as checking that an optimization didn't break from the
loop without using this macro.  It is safe to simply @code{break} from a
@code{FOR_EACH_IMM_USE_FAST} traversal.

Some useful functions and macros:

@itemize @bullet
@item @code{has_zero_uses (ssa_var)}: Returns true if there are no uses of
@code{ssa_var}.

@item @code{has_single_use (ssa_var)}: Returns true if there is only a
single use of @code{ssa_var}.

@item @code{single_imm_use (ssa_var, use_operand_p *ptr, tree *stmt)}:
Returns true if there is only a single use of @code{ssa_var}, and also returns
the use pointer and statement it occurs in, in the second and third parameters.

@item @code{num_imm_uses (ssa_var)}: Returns the number of immediate uses of
@code{ssa_var}.  It is better not to use this if possible since it simply
utilizes a loop to count the uses.

@item @code{PHI_ARG_INDEX_FROM_USE (use_p)}: Given a use within a @code{PHI}
node, return the index number for the use.  An assert is triggered if the use
isn't located in a @code{PHI} node.

@item @code{USE_STMT (use_p)}: Return the statement a use occurs in.
@end itemize

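For example, a pass that only wants to act when @code{ssa_var} has
exactly one use might do something along these lines (a sketch;
@code{my_code} stands for whatever work the pass performs):

@smallexample
  use_operand_p use_p;
  tree use_stmt;

  if (single_imm_use (ssa_var, &use_p, &use_stmt))
    @{
      /* use_stmt is the only statement that uses ssa_var.  */
      my_code;
    @}
@end smallexample
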
Note that uses are not put into an immediate use list until their statement is
actually inserted into the instruction stream via a @code{bsi_*} routine.

It is also still possible to utilize lazy updating of statements, but this
should be used only when absolutely required.  Both alias analysis and the
dominator optimizations currently do this.

When lazy updating is being used, the immediate use information is out of date
and cannot be used reliably.  Lazy updating is achieved by simply marking
statements modified via calls to @code{mark_stmt_modified} instead of
@code{update_stmt}.  When lazy updating is no longer required, all the
modified statements must have @code{update_stmt} called in order to bring them
up to date.  This must be done before the optimization is finished, or
@code{verify_ssa} will trigger an abort.

This is done with a simple loop over the instruction stream:

@smallexample
  block_stmt_iterator bsi;
  basic_block bb;

  FOR_EACH_BB (bb)
    for (bsi = bsi_start (bb); !bsi_end_p (bsi); bsi_next (&bsi))
      update_stmt_if_modified (bsi_stmt (bsi));
@end smallexample

@node SSA
@section Static Single Assignment
@cindex SSA
@cindex static single assignment

Most of the tree optimizers rely on the data flow information provided
by the Static Single Assignment (SSA) form.  We implement the SSA form
as described in @cite{R. Cytron, J. Ferrante, B. Rosen, M. Wegman, and
K. Zadeck.  Efficiently Computing Static Single Assignment Form and the
Control Dependence Graph.  ACM Transactions on Programming Languages
and Systems, 13(4):451-490, October 1991}.

The SSA form is based on the premise that program variables are
assigned in exactly one location in the program.  Multiple assignments
to the same variable create new versions of that variable.  Naturally,
actual programs are seldom in SSA form initially because variables
tend to be assigned multiple times.  The compiler modifies the program
representation so that every time a variable is assigned in the code,
a new version of the variable is created.  Different versions of the
same variable are distinguished by subscripting the variable name with
its version number.  Variables used in the right-hand side of
expressions are renamed so that their version number matches that of
the most recent assignment.

We represent variable versions using @code{SSA_NAME} nodes.  The
renaming process in @file{tree-ssa.c} wraps every real and
virtual operand with an @code{SSA_NAME} node which contains
the version number and the statement that created the
@code{SSA_NAME}.  Only definitions and virtual definitions may
create new @code{SSA_NAME} nodes.

Sometimes, flow of control makes it impossible to determine the
most recent version of a variable.  In these cases, the compiler
inserts an artificial definition for that variable called
@dfn{PHI function} or @dfn{PHI node}.  This new definition merges
all the incoming versions of the variable to create a new name
for it.  For instance,

@smallexample
if (@dots{})
  a_1 = 5;
else if (@dots{})
  a_2 = 2;
else
  a_3 = 13;

# a_4 = PHI <a_1, a_2, a_3>
return a_4;
@end smallexample

Since it is not possible to determine which of the three branches
will be taken at runtime, we don't know which of @code{a_1},
@code{a_2} or @code{a_3} to use at the return statement.  So, the
SSA renamer creates a new version @code{a_4} which is assigned
the result of ``merging'' @code{a_1}, @code{a_2} and @code{a_3}.
Hence, PHI nodes mean ``one of these operands.  I don't know
which''.

The following functions can be used to examine PHI nodes:

@defun gimple_phi_result (@var{phi})
Returns the @code{SSA_NAME} created by PHI node @var{phi} (i.e.,
@var{phi}'s LHS)@.
@end defun

@defun gimple_phi_num_args (@var{phi})
Returns the number of arguments in @var{phi}.  This number is exactly
the number of incoming edges to the basic block holding @var{phi}@.
@end defun

@defun gimple_phi_arg (@var{phi}, @var{i})
Returns the @var{i}th argument of @var{phi}@.
@end defun

@defun gimple_phi_arg_edge (@var{phi}, @var{i})
Returns the incoming edge for the @var{i}th argument of @var{phi}.
@end defun

@defun gimple_phi_arg_def (@var{phi}, @var{i})
Returns the @code{SSA_NAME} for the @var{i}th argument of @var{phi}.
@end defun

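For example, the arguments of a PHI node and their incoming edges can
be walked together like this (a sketch; @code{phi} and @code{my_code}
are placeholders):

@smallexample
  unsigned i;

  for (i = 0; i < gimple_phi_num_args (phi); i++)
    @{
      tree arg = gimple_phi_arg_def (phi, i);
      edge e = gimple_phi_arg_edge (phi, i);

      /* ARG is the name flowing into PHI along edge E.  */
      my_code;
    @}
@end smallexample
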
@subsection Preserving the SSA form
@cindex preserving SSA form

Some optimization passes make changes to the function that
invalidate the SSA property.  This can happen when a pass has
added new symbols or changed the program so that variables that
were previously aliased aren't anymore.  Whenever something like this
happens, the affected symbols must be renamed into SSA form again.
Transformations that emit new code or replicate existing statements
will also need to update the SSA form@.

Since GCC implements two different SSA forms for register and virtual
variables, keeping the SSA form up to date depends on whether you are
updating register or virtual names.  In both cases, the general idea
behind incremental SSA updates is similar: when new SSA names are
created, they typically are meant to replace other existing names in
the program@.

For instance, given the following code:

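@smallexample
 1  L0:
 2  x_1 = PHI (0, x_5)
 3  if (x_1 < 10)
 4    if (x_1 > 7)
 5      y_2 = 0
 6    else
 7      y_3 = x_1 + x_7
 8    endif
 9    x_5 = x_1 + 1
10    goto L0;
11  endif
@end smallexample
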
Suppose that we insert new names @code{x_10} and @code{x_11} (lines
@code{4} and @code{8})@.

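@smallexample
 1  L0:
 2  x_1 = PHI (0, x_5)
 3  if (x_1 < 10)
 4    x_10 = @dots{}
 5    if (x_1 > 7)
 6      y_2 = 0
 7    else
 8      x_11 = @dots{}
 9      y_3 = x_1 + x_7
10    endif
11    x_5 = x_1 + 1
12    goto L0;
13  endif
@end smallexample
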
We want to replace all the uses of @code{x_1} with the new definitions
of @code{x_10} and @code{x_11}.  Note that the only uses that should
be replaced are those at lines @code{5}, @code{9} and @code{11}.
Also, the use of @code{x_7} at line @code{9} should @emph{not} be
replaced (this is why we cannot just mark symbol @code{x} for
renaming).

Additionally, we may need to insert a PHI node at line @code{11}
because that is a merge point for @code{x_10} and @code{x_11}.  So the
use of @code{x_1} at line @code{11} will be replaced with the new PHI
node.  The insertion of PHI nodes is optional.  They are not strictly
necessary to preserve the SSA form, and depending on what the caller
inserted, they may not even be useful for the optimizers@.

Updating the SSA form is a two-step process.  First, the pass has to
identify which names need to be updated and/or which symbols need to
be renamed into SSA form for the first time.  When new names are
introduced to replace existing names in the program, the mapping
between the old and the new names is registered by calling
@code{register_new_name_mapping} (note that if your pass creates new
code by duplicating basic blocks, the call to @code{tree_duplicate_bb}
will set up the necessary mappings automatically).

After the replacement mappings have been registered and new symbols
marked for renaming, a call to @code{update_ssa} makes the registered
changes.  This can be done with an explicit call or by creating
@code{TODO} flags in the @code{tree_opt_pass} structure for your pass.
There are several @code{TODO} flags that control the behavior of
@code{update_ssa}:

@itemize @bullet

@item @code{TODO_update_ssa}.  Update the SSA form inserting PHI nodes
for newly exposed symbols and virtual names marked for updating.
When updating real names, only insert PHI nodes for a real name
@code{O_j} in blocks reached by all the new and old definitions for
@code{O_j}.  If the iterated dominance frontier for @code{O_j}
is not pruned, we may end up inserting PHI nodes in blocks that
have one or more edges with no incoming definition for
@code{O_j}.  This would lead to uninitialized warnings for
@code{O_j}'s symbol@.

@item @code{TODO_update_ssa_no_phi}.  Update the SSA form without
inserting any new PHI nodes at all.  This is used by passes that
have either inserted all the PHI nodes themselves or passes that
need only to patch use-def and def-def chains for virtuals@.

@item @code{TODO_update_ssa_full_phi}.  Insert PHI nodes everywhere
they are needed.  No pruning of the IDF is done.  This is used
by passes that need the PHI nodes for @code{O_j} even if it
means that some arguments will come from the default definition
of @code{O_j}'s symbol (e.g., @code{pass_linear_transform})@.

WARNING: If you need to use this flag, chances are that your
pass may be doing something wrong.  Inserting PHI nodes for an
old name where not all edges carry a new replacement may lead to
silent codegen errors or spurious uninitialized warnings@.

@item @code{TODO_update_ssa_only_virtuals}.  Passes that update the
SSA form on their own may want to delegate the updating of
virtual names to the generic updater.  Since FUD chains are
easier to maintain, this simplifies the work they need to do.
NOTE: If this flag is used, any OLD->NEW mappings for real names
are explicitly destroyed and only the symbols marked for
renaming are processed@.
@end itemize

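For example, a pass that wants the generic updater to run
automatically when it finishes can request it in the
@code{todo_flags_finish} field of its pass structure (an abbreviated
sketch; the elided fields are not shown):

@smallexample
struct tree_opt_pass pass_my_pass =
@{
  "my_pass",                            /* name */
  @dots{}
  0,                                    /* todo_flags_start */
  TODO_dump_func | TODO_update_ssa,     /* todo_flags_finish */
  @dots{}
@};
@end smallexample
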
@subsection Preserving the virtual SSA form
@cindex preserving virtual SSA form

The virtual SSA form is harder to preserve than the non-virtual SSA form
mainly because the set of virtual operands for a statement may change at
what some would consider unexpected times.  In general, statement
modifications should be bracketed between calls to
@code{push_stmt_changes} and @code{pop_stmt_changes}.  For example,

@smallexample
void
munge_stmt (tree stmt)
@{
  push_stmt_changes (&stmt);
  @dots{} rewrite STMT @dots{}
  pop_stmt_changes (&stmt);
@}
@end smallexample

The call to @code{push_stmt_changes} saves the current state of the
statement operands and the call to @code{pop_stmt_changes} compares
the saved state with the current one and does the appropriate symbol
marking for the SSA renamer.

It is possible to modify several statements at a time, provided that
@code{push_stmt_changes} and @code{pop_stmt_changes} are called in
LIFO order, as when processing a stack of statements.

Additionally, if the pass discovers that it did not need to make
changes to the statement after calling @code{push_stmt_changes}, it
can simply discard the topmost change buffer by calling
@code{discard_stmt_changes}.  This will avoid the expensive operand
re-scan operation and the buffer comparison that determines if symbols
need to be marked for renaming.

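For instance, a transformation that may end up leaving the statement
untouched can be structured like this (a sketch; @code{try_to_rewrite}
is a hypothetical helper that returns whether it changed anything):

@smallexample
  push_stmt_changes (&stmt);

  if (!try_to_rewrite (stmt))
    @{
      /* Nothing changed; drop the change buffer cheaply.  */
      discard_stmt_changes (&stmt);
      return;
    @}

  pop_stmt_changes (&stmt);
@end smallexample
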
@subsection Examining @code{SSA_NAME} nodes
@cindex examining SSA_NAMEs

The following macros can be used to examine @code{SSA_NAME} nodes:

@defmac SSA_NAME_DEF_STMT (@var{var})
Returns the statement @var{s} that creates the @code{SSA_NAME}
@var{var}.  If @var{s} is an empty statement (i.e., @code{IS_EMPTY_STMT
(@var{s})} returns @code{true}), it means that the first reference to
this variable is a USE or a VUSE@.
@end defmac

@defmac SSA_NAME_VERSION (@var{var})
Returns the version number of the @code{SSA_NAME} object @var{var}.
@end defmac

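For instance, assuming @code{name} is an @code{SSA_NAME}, a pass can
inspect its defining statement and version like this (a sketch):

@smallexample
  tree def = SSA_NAME_DEF_STMT (name);

  if (IS_EMPTY_STMT (def))
    /* The first reference to NAME is a USE or a VUSE.  */
    my_code;
  else
    fprintf (stderr, "defined, version %u\n", SSA_NAME_VERSION (name));
@end smallexample
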
@subsection Walking the dominator tree

@deftypefn {Tree SSA function} void walk_dominator_tree (@var{walk_data}, @var{bb})

This function walks the dominator tree for the current CFG calling a
set of callback functions defined in @var{struct dom_walk_data} in
@file{domwalk.h}.  The callback functions you need to define give you
hooks to execute custom code at various points during traversal:

@enumerate
@item Once to initialize any local data needed while processing
@var{bb} and its children.  This local data is pushed into an
internal stack which is automatically pushed and popped as the
walker traverses the dominator tree.

@item Once before traversing all the statements in @var{bb}.

@item Once for every statement inside @var{bb}.

@item Once after traversing all the statements and before recursing
into @var{bb}'s dominator children.

@item It then recurses into all the dominator children of @var{bb}.

@item After recursing into all the dominator children of @var{bb} it
can, optionally, traverse every statement in @var{bb} again
(i.e., repeating steps 2 and 3).

@item Once after walking the statements in @var{bb} and @var{bb}'s
dominator children.  At this stage, the block local data stack
is popped.
@end enumerate
@end deftypefn

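Schematically, a caller fills in a @code{struct dom_walk_data},
initializes the walker, and runs it over the dominator tree (a sketch;
the individual callback fields are declared in @file{domwalk.h} and
are not shown here):

@smallexample
  struct dom_walk_data walk_data;

  memset (&walk_data, 0, sizeof (walk_data));
  walk_data.dom_direction = CDI_DOMINATORS;
  /* Fill in the callback and block-local-data fields here.  */

  init_walk_dominator_tree (&walk_data);
  walk_dominator_tree (&walk_data, ENTRY_BLOCK_PTR);
  fini_walk_dominator_tree (&walk_data);
@end smallexample
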
@node Alias analysis
@section Alias analysis
@cindex flow-sensitive alias analysis
@cindex flow-insensitive alias analysis

Alias analysis in GIMPLE SSA form consists of two pieces.  First,
the virtual SSA web ties conflicting memory accesses and provides
an SSA use-def chain and SSA immediate-use chains for walking
possibly dependent memory accesses.  Second, an alias-oracle can
be queried to disambiguate explicit and implicit memory references.

@itemize @bullet
@item Memory SSA form.

All statements that may use memory have exactly one accompanying use of
a virtual SSA name that represents the state of memory at the
given point in the IL.

All statements that may define memory have exactly one accompanying
definition of a virtual SSA name using the previous state of memory
and defining the new state of memory after the given point in the IL.

@smallexample
int i;
int foo (void)
@{
  # .MEM_3 = VDEF <.MEM_2(D)>
  i = 1;
  # VUSE <.MEM_3>
  return i;
@}
@end smallexample

The virtual SSA names in this case are @code{.MEM_2(D)} and
@code{.MEM_3}.  The store to the global variable @code{i}
defines @code{.MEM_3}, invalidating @code{.MEM_2(D)}.  The
load from @code{i} uses that new state @code{.MEM_3}.

The virtual SSA web serves as constraints to SSA optimizers
preventing illegitimate code motion and optimization.  It
also provides a way to walk related memory statements.

@item Points-to and escape analysis.

Points-to analysis builds a set of constraints from the GIMPLE
SSA IL representing all pointer operations and facts we do
or do not know about pointers.  Solving this set of constraints
yields a conservatively correct solution for each pointer
variable in the program (though we are only interested in
SSA name pointers) as to what it may possibly point to.

This points-to solution for a given SSA name pointer is stored
in the @code{pt_solution} sub-structure of the
@code{SSA_NAME_PTR_INFO} record.  The following accessor
functions are available:

@itemize @bullet
@item @code{pt_solution_includes}
@item @code{pt_solutions_intersect}
@end itemize

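For example, to ask whether an SSA name pointer @code{ptr} may point
to a particular declaration @code{decl} (a sketch; both are assumed to
be in scope):

@smallexample
  struct ptr_info_def *pi = SSA_NAME_PTR_INFO (ptr);

  if (pi && pt_solution_includes (&pi->pt, decl))
    /* PTR may point to DECL.  */
    my_code;
@end smallexample
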
Points-to analysis also computes the solution for two special
sets of pointers, @code{ESCAPED} and @code{CALLUSED}.  Those
represent all memory that has escaped the scope of analysis
or that is used by pure or nested const calls.

@item Type-based alias analysis.

Type-based alias analysis is frontend dependent, though generic
support is provided by the middle-end in @file{alias.c}.  TBAA
code is used by both tree optimizers and RTL optimizers.

Every language that wishes to perform language-specific alias analysis
should define a function that computes, given a @code{tree}
node, an alias set for the node.  Nodes in different alias sets are not
allowed to alias.  For an example, see the C front-end function
@code{c_get_alias_set}.

@item Tree alias-oracle.

The tree alias-oracle provides means to disambiguate two memory
references and memory references against statements.  The following
queries are available:

@itemize @bullet
@item @code{refs_may_alias_p}
@item @code{ref_maybe_used_by_stmt_p}
@item @code{stmt_may_clobber_ref_p}
@end itemize

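For instance, a pass can check whether a statement possibly writes to
the memory referenced by a tree @code{ref} like this (a sketch):

@smallexample
  if (!stmt_may_clobber_ref_p (stmt, ref))
    /* STMT does not write to the memory referenced by REF.  */
    my_code;
@end smallexample
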
In addition, two kinds of statement walkers are available for walking
statements related to a reference @var{ref}.
@code{walk_non_aliased_vuses} walks over dominating memory-defining
statements and calls back if a statement does not clobber @var{ref},
providing the non-aliased VUSE@.  The walk stops at the first
clobbering statement, or earlier if the callback asks it to.
@code{walk_aliased_vdefs} walks over dominating memory-defining
statements and calls back on each statement clobbering @var{ref},
providing its aliasing VDEF@.  The walk stops when the callback asks
it to.
@end itemize

@node Memory model
@section Memory model
@cindex memory model

The memory model used by the middle-end models that of the C/C++
languages.  The middle-end has the notion of an effective type
of a memory region which is used for type-based alias analysis.

The following is a refinement of ISO C99 6.5/6, clarifying the block copy case
to follow common sense and extending the concept of a dynamic effective
type to objects with a declared type as required for C++.

The effective type of an object for an access to its stored value is
the declared type of the object or the effective type determined by
a previous store to it.  If a value is stored into an object through
an lvalue having a type that is not a character type, then the
type of the lvalue becomes the effective type of the object for that
access and for subsequent accesses that do not modify the stored value.
If a value is copied into an object using @code{memcpy} or @code{memmove},
or is copied as an array of character type, then the effective type
of the modified object for that access and for subsequent accesses that
do not modify the value is undetermined.  For all other accesses to an
object, the effective type of the object is simply the type of the
lvalue used for the access.

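For instance, under these rules the store and the load in the
following (invalid) fragment access the object with incompatible
effective types, so type-based alias analysis may assume they do not
interfere:

@smallexample
int
f (float *fp)
@{
  *fp = 1.0f;           /* The effective type of the object is float.  */
  return *(int *) fp;   /* Undefined: int does not match the effective
                           type, so TBAA may assume no aliasing.  */
@}
@end smallexample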