1 @c Copyright (c) 2004, 2005 Free Software Foundation, Inc.
3 @c This is part of the GCC manual.
4 @c For copying conditions, see the file gcc.texi.
6 @c ---------------------------------------------------------------------
8 @c ---------------------------------------------------------------------
11 @chapter Analysis and Optimization of GIMPLE Trees
13 @cindex Optimization infrastructure for GIMPLE
15 GCC uses three main intermediate languages to represent the program
16 during compilation: GENERIC, GIMPLE and RTL@. GENERIC is a
17 language-independent representation generated by each front end. It
serves as an interface between the parser and the optimizers.
19 GENERIC is a common representation that is able to represent programs
20 written in all the languages supported by GCC@.
22 GIMPLE and RTL are used to optimize the program. GIMPLE is used for
23 target and language independent optimizations (e.g., inlining,
24 constant propagation, tail call elimination, redundancy elimination,
25 etc). Much like GENERIC, GIMPLE is a language independent, tree based
26 representation. However, it differs from GENERIC in that the GIMPLE
27 grammar is more restrictive: expressions contain no more than 3
28 operands (except function calls), it has no control flow structures
29 and expressions with side-effects are only allowed on the right hand
side of assignments. See the chapter describing GENERIC and GIMPLE for more details.
33 This chapter describes the data structures and functions used in the
34 GIMPLE optimizers (also known as ``tree optimizers'' or ``middle
35 end''). In particular, it focuses on all the macros, data structures,
functions and programming constructs needed to implement optimization passes for trees.
40 * GENERIC:: A high-level language-independent representation.
41 * GIMPLE:: A lower-level factored tree representation.
42 * Annotations:: Attributes for statements and variables.
43 * Statement Operands:: Variables referenced by GIMPLE statements.
44 * SSA:: Static Single Assignment representation.
45 * Alias analysis:: Representing aliased loads and stores.
52 The purpose of GENERIC is simply to provide a language-independent way of
53 representing an entire function in trees. To this end, it was necessary to
add a few new tree codes to the back end, but almost everything was already
there. If you can express it with the codes in @code{gcc/tree.def}, it's
GENERIC.
58 Early on, there was a great deal of debate about how to think about
59 statements in a tree IL@. In GENERIC, a statement is defined as any
60 expression whose value, if any, is ignored. A statement will always
61 have @code{TREE_SIDE_EFFECTS} set (or it will be discarded), but a
62 non-statement expression may also have side effects. A
63 @code{CALL_EXPR}, for instance.
65 It would be possible for some local optimizations to work on the
66 GENERIC form of a function; indeed, the adapted tree inliner works
67 fine on GENERIC, but the current compiler performs inlining after
68 lowering to GIMPLE (a restricted form described in the next section).
69 Indeed, currently the frontends perform this lowering before handing
70 off to @code{tree_rest_of_compilation}, but this seems inelegant.
72 If necessary, a front end can use some language-dependent tree codes
73 in its GENERIC representation, so long as it provides a hook for
74 converting them to GIMPLE and doesn't expect them to work with any
75 (hypothetical) optimizers that run before the conversion to GIMPLE@.
76 The intermediate representation used while parsing C and C++ looks
77 very little like GENERIC, but the C and C++ gimplifier hooks are
78 perfectly happy to take it as input and spit out GIMPLE@.
84 GIMPLE is a simplified subset of GENERIC for use in optimization. The
85 particular subset chosen (and the name) was heavily influenced by the
86 SIMPLE IL used by the McCAT compiler project at McGill University,
87 though we have made some different choices. For one thing, SIMPLE
doesn't support @code{goto}; a production compiler can't afford that kind of restriction.
91 GIMPLE retains much of the structure of the parse trees: lexical
92 scopes are represented as containers, rather than markers. However,
93 expressions are broken down into a 3-address form, using temporary
variables to hold intermediate values. Also, control structures are lowered to gotos.
97 In GIMPLE no container node is ever used for its value; if a
98 @code{COND_EXPR} or @code{BIND_EXPR} has a value, it is stored into a
99 temporary within the controlled blocks, and that temporary is used in
100 place of the container.
102 The compiler pass which lowers GENERIC to GIMPLE is referred to as the
103 @samp{gimplifier}. The gimplifier works recursively, replacing complex
104 statements with sequences of simple statements.
106 @c Currently, the only way to
107 @c tell whether or not an expression is in GIMPLE form is by recursively
108 @c examining it; in the future there will probably be a flag to help avoid
109 @c redundant work. FIXME FIXME
114 * GIMPLE Expressions::
117 * Rough GIMPLE Grammar::
121 @subsection Interfaces
122 @cindex gimplification
124 The tree representation of a function is stored in
125 @code{DECL_SAVED_TREE}. It is lowered to GIMPLE by a call to
126 @code{gimplify_function_tree}.
128 If a front end wants to include language-specific tree codes in the tree
129 representation which it provides to the back end, it must provide a
130 definition of @code{LANG_HOOKS_GIMPLIFY_EXPR} which knows how to
131 convert the front end trees to GIMPLE@. Usually such a hook will involve
132 much of the same code for expanding front end trees to RTL@. This function
133 can return fully lowered GIMPLE, or it can return GENERIC trees and let the
134 main gimplifier lower them the rest of the way; this is often simpler.
136 The C and C++ front ends currently convert directly from front end
137 trees to GIMPLE, and hand that off to the back end rather than first
138 converting to GENERIC@. Their gimplifier hooks know about all the
139 @code{_STMT} nodes and how to convert them to GENERIC forms. There
140 was some work done on a genericization pass which would run first, but
141 the existence of @code{STMT_EXPR} meant that in order to convert all
142 of the C statements into GENERIC equivalents would involve walking the
143 entire tree anyway, so it was simpler to lower all the way. This
144 might change in the future if someone writes an optimization pass
145 which would work better with higher-level trees, but currently the
146 optimizers all expect GIMPLE@.
148 A front end which wants to use the tree optimizers (and already has
149 some sort of whole-function tree representation) only needs to provide
150 a definition of @code{LANG_HOOKS_GIMPLIFY_EXPR}, call
151 @code{gimplify_function_tree} to lower to GIMPLE, and then hand off to
152 @code{tree_rest_of_compilation} to compile and output the function.
154 You can tell the compiler to dump a C-like representation of the GIMPLE
155 form with the flag @option{-fdump-tree-gimple}.
158 @subsection Temporaries
161 When gimplification encounters a subexpression which is too complex, it
162 creates a new temporary variable to hold the value of the subexpression,
163 and adds a new statement to initialize it before the current statement.
164 These special temporaries are known as @samp{expression temporaries}, and are
165 allocated using @code{get_formal_tmp_var}. The compiler tries to
166 always evaluate identical expressions into the same temporary, to simplify
167 elimination of redundant calculations.
169 We can only use expression temporaries when we know that it will not be
170 reevaluated before its value is used, and that it will not be otherwise
171 modified@footnote{These restrictions are derived from those in Morgan 4.8.}.
172 Other temporaries can be allocated using
173 @code{get_initialized_tmp_var} or @code{create_tmp_var}.
175 Currently, an expression like @code{a = b + 5} is not reduced any
176 further. We tried converting it to something like
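@smallexample
T1 = b + 5;
a = T1;
@end smallexample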
181 but this bloated the representation for minimal benefit. However, a
182 variable which must live in memory cannot appear in an expression; its
183 value is explicitly loaded into a temporary first. Similarly, storing
184 the value of an expression to a memory variable goes through a
187 @node GIMPLE Expressions
188 @subsection Expressions
189 @cindex GIMPLE Expressions
In general, expressions in GIMPLE consist of an operation and the
appropriate number of simple operands; these operands must be GIMPLE
rvalues (@code{is_gimple_val}), i.e.@: constants or register variables.
More complex operands are factored out into temporaries, so that an
expression like @code{a = b + c + d} becomes @code{T1 = b + c; a = T1 + d;}.
205 The same rule holds for arguments to a @code{CALL_EXPR}.
207 The target of an assignment is usually a variable, but can also be an
208 @code{INDIRECT_REF} or a compound lvalue as described below.
211 * Compound Expressions::
213 * Conditional Expressions::
214 * Logical Operators::
217 @node Compound Expressions
218 @subsubsection Compound Expressions
219 @cindex Compound Expressions
The left-hand side of a C comma expression is simply moved into a separate statement.
224 @node Compound Lvalues
225 @subsubsection Compound Lvalues
226 @cindex Compound Lvalues
228 Currently compound lvalues involving array and structure field references
229 are not broken down; an expression like @code{a.b[2] = 42} is not reduced
230 any further (though complex array subscripts are). This restriction is a
workaround for limitations in later optimizers; if we were to convert this to
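@smallexample
T1 = &a.b;
T1[2] = 42;
@end smallexample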
239 alias analysis would not remember that the reference to @code{T1[2]} came
240 by way of @code{a.b}, so it would think that the assignment could alias
241 another member of @code{a}; this broke @code{struct-alias-1.c}. Future
242 optimizer improvements may make this limitation unnecessary.
244 @node Conditional Expressions
245 @subsubsection Conditional Expressions
246 @cindex Conditional Expressions
248 A C @code{?:} expression is converted into an @code{if} statement with
249 each branch assigning to the same temporary. So,
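@smallexample
a = b ? c : d;
@end smallexample

becomes

@smallexample
if (b)
  T1 = c;
else
  T1 = d;
a = T1;
@end smallexample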
The tree-level if-conversion pass re-introduces @code{?:} expressions where
appropriate; it is used to vectorize loops with conditions by using vector
conditional operations.
266 Note that in GIMPLE, @code{if} statements are also represented using
267 @code{COND_EXPR}, as described below.
269 @node Logical Operators
270 @subsubsection Logical Operators
271 @cindex Logical Operators
273 Except when they appear in the condition operand of a @code{COND_EXPR},
274 logical `and' and `or' operators are simplified as follows:
275 @code{a = b && c} becomes
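@smallexample
T1 = (bool)b;
if (T1 == true) T1 = (bool)c;
a = (bool)T1;
@end smallexample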
284 Note that @code{T1} in this example cannot be an expression temporary,
285 because it has two different assignments.
288 @subsection Statements
291 Most statements will be assignment statements, represented by
292 @code{MODIFY_EXPR}. A @code{CALL_EXPR} whose value is ignored can
293 also be a statement. No other C expressions can appear at statement level;
294 a reference to a volatile object is converted into a @code{MODIFY_EXPR}.
In GIMPLE form, the type of the @code{MODIFY_EXPR} is not meaningful; use the type of the LHS or RHS instead.
298 There are also several varieties of complex statements.
302 * Statement Sequences::
305 * Selection Statements::
308 * GIMPLE Exception Handling::
312 @subsubsection Blocks
315 Block scopes and the variables they declare in GENERIC and GIMPLE are
316 expressed using the @code{BIND_EXPR} code, which in previous versions of
317 GCC was primarily used for the C statement-expression extension.
319 Variables in a block are collected into @code{BIND_EXPR_VARS} in
320 declaration order. Any runtime initialization is moved out of
321 @code{DECL_INITIAL} and into a statement in the controlled block. When
gimplifying from C or C++, this initialization replaces the @code{DECL_STMT}.
325 Variable-length arrays (VLAs) complicate this process, as their size often
326 refers to variables initialized earlier in the block. To handle this, we
327 currently split the block at that point, and move the VLA into a new, inner
328 @code{BIND_EXPR}. This strategy may change in the future.
330 @code{DECL_SAVED_TREE} for a GIMPLE function will always be a
331 @code{BIND_EXPR} which contains declarations for the temporary variables
332 used in the function.
334 A C++ program will usually contain more @code{BIND_EXPR}s than there are
335 syntactic blocks in the source code, since several C++ constructs have
336 implicit scopes associated with them. On the other hand, although the C++
337 front end uses pseudo-scopes to handle cleanups for objects with
338 destructors, these don't translate into the GIMPLE form; multiple
339 declarations at the same level use the same @code{BIND_EXPR}.
341 @node Statement Sequences
342 @subsubsection Statement Sequences
343 @cindex Statement Sequences
345 Multiple statements at the same nesting level are collected into a
346 @code{STATEMENT_LIST}. Statement lists are modified and traversed
347 using the interface in @samp{tree-iterator.h}.
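For instance, a simple walk over a statement list might look like the
following sketch (assuming @code{list} is the @code{STATEMENT_LIST} to
traverse):

@smallexample
tree_stmt_iterator tsi;

for (tsi = tsi_start (list); !tsi_end_p (tsi); tsi_next (&tsi))
  print_generic_stmt (stderr, tsi_stmt (tsi), TDF_SLIM);
@end smallexample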
349 @node Empty Statements
350 @subsubsection Empty Statements
351 @cindex Empty Statements
353 Whenever possible, statements with no effect are discarded. But if they
354 are nested within another construct which cannot be discarded for some
355 reason, they are instead replaced with an empty statement, generated by
356 @code{build_empty_stmt}. Initially, all empty statements were shared,
after the pattern of the Java front end, but this caused a lot of trouble in practice.
360 An empty statement is represented as @code{(void)0}.
366 At one time loops were expressed in GIMPLE using @code{LOOP_EXPR}, but
367 now they are lowered to explicit gotos.
369 @node Selection Statements
370 @subsubsection Selection Statements
371 @cindex Selection Statements
373 A simple selection statement, such as the C @code{if} statement, is
374 expressed in GIMPLE using a void @code{COND_EXPR}. If only one branch is
375 used, the other is filled with an empty statement.
377 Normally, the condition expression is reduced to a simple comparison. If
378 it is a shortcut (@code{&&} or @code{||}) expression, however, we try to
379 break up the @code{if} into multiple @code{if}s so that the implied shortcut
is taken directly, much like the transformation done by @code{do_jump} in the RTL expander.
383 A @code{SWITCH_EXPR} in GIMPLE contains the condition and a
384 @code{TREE_VEC} of @code{CASE_LABEL_EXPR}s describing the case values
385 and corresponding @code{LABEL_DECL}s to jump to. The body of the
386 @code{switch} is moved after the @code{SWITCH_EXPR}.
392 Other jumps are expressed by either @code{GOTO_EXPR} or @code{RETURN_EXPR}.
394 The operand of a @code{GOTO_EXPR} must be either a label or a variable
395 containing the address to jump to.
397 The operand of a @code{RETURN_EXPR} is either @code{NULL_TREE} or a
398 @code{MODIFY_EXPR} which sets the return value. It would be nice to
399 move the @code{MODIFY_EXPR} into a separate statement, but the special
400 return semantics in @code{expand_return} make that difficult. It may
401 still happen in the future, perhaps by moving most of that logic into
402 @code{expand_assignment}.
405 @subsubsection Cleanups
408 Destructors for local C++ objects and similar dynamic cleanups are
409 represented in GIMPLE by a @code{TRY_FINALLY_EXPR}.
410 @code{TRY_FINALLY_EXPR} has two operands, both of which are a sequence
of statements to execute. The first sequence is executed. When it
completes, the second sequence is executed.
414 The first sequence may complete in the following ways:
@item Execute the last statement in the sequence and fall off the end.
421 @item Execute a goto statement (@code{GOTO_EXPR}) to an ordinary
422 label outside the sequence.
424 @item Execute a return statement (@code{RETURN_EXPR}).
@item Throw an exception. This is currently not explicitly represented in GIMPLE.
431 The second sequence is not executed if the first sequence completes by
432 calling @code{setjmp} or @code{exit} or any other function that does
433 not return. The second sequence is also not executed if the first
434 sequence completes via a non-local goto or a computed goto (in general
435 the compiler does not know whether such a goto statement exits the
436 first sequence or not, so we assume that it doesn't).
438 After the second sequence is executed, if it completes normally by
439 falling off the end, execution continues wherever the first sequence
440 would have continued, by falling off the end, or doing a goto, etc.
442 @code{TRY_FINALLY_EXPR} complicates the flow graph, since the cleanup
443 needs to appear on every edge out of the controlled block; this
444 reduces the freedom to move code across these edges. Therefore, the
445 EH lowering pass which runs before most of the optimization passes
446 eliminates these expressions by explicitly adding the cleanup to each
447 edge. Rethrowing the exception is represented using @code{RESX_EXPR}.
450 @node GIMPLE Exception Handling
451 @subsubsection Exception Handling
452 @cindex GIMPLE Exception Handling
454 Other exception handling constructs are represented using
455 @code{TRY_CATCH_EXPR}. @code{TRY_CATCH_EXPR} has two operands. The
456 first operand is a sequence of statements to execute. If executing
457 these statements does not throw an exception, then the second operand
458 is ignored. Otherwise, if an exception is thrown, then the second
459 operand of the @code{TRY_CATCH_EXPR} is checked. The second operand
460 may have the following forms:
464 @item A sequence of statements to execute. When an exception occurs,
465 these statements are executed, and then the exception is rethrown.
467 @item A sequence of @code{CATCH_EXPR} expressions. Each @code{CATCH_EXPR}
468 has a list of applicable exception types and handler code. If the
469 thrown exception matches one of the caught types, the associated
470 handler code is executed. If the handler code falls off the bottom,
471 execution continues after the original @code{TRY_CATCH_EXPR}.
473 @item An @code{EH_FILTER_EXPR} expression. This has a list of
474 permitted exception types, and code to handle a match failure. If the
475 thrown exception does not match one of the allowed types, the
476 associated match failure code is executed. If the thrown exception
does match, it continues unwinding the stack looking for the next handler.
482 Currently throwing an exception is not directly represented in GIMPLE,
483 since it is implemented by calling a function. At some point in the future
484 we will want to add some way to express that the call will throw an
485 exception of a known type.
487 Just before running the optimizers, the compiler lowers the high-level
488 EH constructs above into a set of @samp{goto}s, magic labels, and EH
489 regions. Continuing to unwind at the end of a cleanup is represented
490 with a @code{RESX_EXPR}.
493 @subsection GIMPLE Example
494 @cindex GIMPLE Example
497 struct A @{ A(); ~A(); @};
504 int j = (--i, i ? 0 : 1);
506 for (int x = 42; x > 0; --x)
573 @node Rough GIMPLE Grammar
574 @subsection Rough GIMPLE Grammar
575 @cindex Rough GIMPLE Grammar
578 function : FUNCTION_DECL
579 DECL_SAVED_TREE -> compound-stmt
581 compound-stmt: STATEMENT_LIST
596 BIND_EXPR_VARS -> chain of DECLs
597 BIND_EXPR_BLOCK -> BLOCK
598 BIND_EXPR_BODY -> compound-stmt
605 switch-stmt : SWITCH_EXPR
608 op2 -> TREE_VEC of CASE_LABEL_EXPRs
609 The CASE_LABEL_EXPRs are sorted by CASE_LOW,
612 goto-stmt : GOTO_EXPR
613 op0 -> LABEL_DECL | val
615 return-stmt : RETURN_EXPR
624 resx-stmt : RESX_EXPR
626 label-stmt : LABEL_EXPR
629 try-stmt : TRY_CATCH_EXPR
640 catch-seq : STATEMENT_LIST
641 members -> CATCH_EXPR
643 modify-stmt : MODIFY_EXPR
647 call-stmt : CALL_EXPR
648 op0 -> val | OBJ_TYPE_REF
651 call-arg-list: TREE_LIST
652 members -> lhs | CONST
657 addressable : addr-expr-arg
660 with-size-arg: addressable
663 indirectref : INDIRECT_REF
675 bitfieldref : BIT_FIELD_REF
680 compref : inner-compref
692 inner-compref: min-lval
737 The optimizers need to associate attributes with statements and
738 variables during the optimization process. For instance, we need to
739 know what basic block a statement belongs to or whether a variable
740 has aliases. All these attributes are stored in data structures
741 called annotations which are then linked to the field @code{ann} in
742 @code{struct tree_common}.
744 Presently, we define annotations for statements (@code{stmt_ann_t}),
745 variables (@code{var_ann_t}) and SSA names (@code{ssa_name_ann_t}).
746 Annotations are defined and documented in @file{tree-flow.h}.
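For instance, the basic block that holds a statement is recorded in the
statement's annotation and can be retrieved with the accessor below (a
small sketch; @code{stmt} stands for any GIMPLE statement):

@smallexample
basic_block bb = bb_for_stmt (stmt);
@end smallexample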
749 @node Statement Operands
750 @section Statement Operands
752 @cindex virtual operands
753 @cindex real operands
756 Almost every GIMPLE statement will contain a reference to a variable
757 or memory location. Since statements come in different shapes and
758 sizes, their operands are going to be located at various spots inside
759 the statement's tree. To facilitate access to the statement's
operands, they are organized into lists stored inside each
statement's annotation. Each element in an operand list is a pointer
to a @code{VAR_DECL}, @code{PARM_DECL} or @code{SSA_NAME} tree node.
This provides a very convenient way of examining and replacing operands.
766 Data flow analysis and optimization is done on all tree nodes
767 representing variables. Any node for which @code{SSA_VAR_P} returns
768 nonzero is considered when scanning statement operands. However, not
769 all @code{SSA_VAR_P} variables are processed in the same way. For the
770 purposes of optimization, we need to distinguish between references to
771 local scalar variables and references to globals, statics, structures,
arrays, aliased variables, etc. The reason is simple: the compiler
can gather complete data flow information for a local scalar. On the
other hand, a global variable may be modified by a function call, and it
may not be possible to keep track of all the elements of an array or
the fields of a structure, etc.
778 The operand scanner gathers two kinds of operands: @dfn{real} and
779 @dfn{virtual}. An operand for which @code{is_gimple_reg} returns true
780 is considered real, otherwise it is a virtual operand. We also
781 distinguish between uses and definitions. An operand is used if its
782 value is loaded by the statement (e.g., the operand at the RHS of an
783 assignment). If the statement assigns a new value to the operand, the
operand is considered a definition (e.g., the operand at the LHS of the assignment).
787 Virtual and real operands also have very different data flow
788 properties. Real operands are unambiguous references to the
789 full object that they represent. For instance, given
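@smallexample
a = b
@end smallexample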
Since @code{a} and @code{b} are non-aliased locals, the statement
@code{a = b} will have one real definition and one real use because
variable @code{a} is completely modified with the contents of
variable @code{b}. Real definitions are also known as @dfn{killing
definitions}. Similarly, the use of @code{b} reads all its bits.
804 In contrast, virtual operands are used with variables that can have
805 a partial or ambiguous reference. This includes structures, arrays,
806 globals, and aliased variables. In these cases, we have two types of
807 definitions. For globals, structures, and arrays, we can determine from
808 a statement whether a variable of these types has a killing definition.
809 If the variable does, then the statement is marked as having a
810 @dfn{must definition} of that variable. However, if a statement is only
811 defining a part of the variable (i.e.@: a field in a structure), or if we
812 know that a statement might define the variable but we cannot say for sure,
then we mark that statement as having a @dfn{may definition}. For instance, given
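@smallexample
if (@dots{})
  p = &a;
else
  p = &b;

*p = 5;
@end smallexample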
829 The assignment @code{*p = 5} may be a definition of @code{a} or
830 @code{b}. If we cannot determine statically where @code{p} is
831 pointing to at the time of the store operation, we create virtual
832 definitions to mark that statement as a potential definition site for
833 @code{a} and @code{b}. Memory loads are similarly marked with virtual
834 use operands. Virtual operands are shown in tree dumps right before
835 the statement that contains them. To request a tree dump with virtual
836 operands, use the @option{-vops} option to @option{-fdump-tree}:
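@smallexample
  # a = V_MAY_DEF <a>
  # b = V_MAY_DEF <b>
  *p = 5;
@end smallexample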
856 Notice that @code{V_MAY_DEF} operands have two copies of the referenced
857 variable. This indicates that this is not a killing definition of
858 that variable. In this case we refer to it as a @dfn{may definition}
859 or @dfn{aliased store}. The presence of the second copy of the
860 variable in the @code{V_MAY_DEF} operand will become important when the
861 function is converted into SSA form. This will be used to link all
862 the non-killing definitions to prevent optimizations from making
863 incorrect assumptions about them.
865 Operands are updated as soon as the statement is finished via a call
866 to @code{update_stmt}. If statement elements are changed via
867 @code{SET_USE} or @code{SET_DEF}, then no further action is required
(i.e., those macros take care of updating the statement). If changes
869 are made by manipulating the statement's tree directly, then a call
870 must be made to @code{update_stmt} when complete. Calling one of the
871 @code{bsi_insert} routines or @code{bsi_replace} performs an implicit
872 call to @code{update_stmt}.
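For instance, a pass that rewrites the right-hand side of an assignment
by poking at the tree directly might do something like the following
sketch (@code{stmt} and @code{new_rhs} stand for whatever statement and
replacement operand the pass is working with):

@smallexample
/* STMT is a MODIFY_EXPR; operand 1 is its right-hand side.  */
TREE_OPERAND (stmt, 1) = new_rhs;

/* The operand cache is now stale, so rescan the statement.  */
update_stmt (stmt);
@end smallexample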
874 @subsection Operand Iterators And Access Routines
875 @cindex Operand Iterators
876 @cindex Operand Access Routines
878 Operands are collected by @file{tree-ssa-operands.c}. They are stored
879 inside each statement's annotation and can be accessed through either the
880 operand iterators or an access routine.
882 The following access routines are available for examining operands:
885 @item @code{SINGLE_SSA_@{USE,DEF,TREE@}_OPERAND}: These accessors will return
886 NULL unless there is exactly one operand matching the specified flags. If
887 there is exactly one operand, the operand is returned as either a @code{tree},
888 @code{def_operand_p}, or @code{use_operand_p}.
891 tree t = SINGLE_SSA_TREE_OPERAND (stmt, flags);
use_operand_p u = SINGLE_SSA_USE_OPERAND (stmt, SSA_OP_VIRTUAL_USES);
893 def_operand_p d = SINGLE_SSA_DEF_OPERAND (stmt, SSA_OP_ALL_DEFS);
896 @item @code{ZERO_SSA_OPERANDS}: This macro returns true if there are no
897 operands matching the specified flags.
if (ZERO_SSA_OPERANDS (stmt, SSA_OP_ALL_VIRTUALS))
  return;
@item @code{NUM_SSA_OPERANDS}: This macro returns the number of operands
matching @samp{flags}. This actually executes a loop to perform the count, so
only use this if it is really needed.
int count = NUM_SSA_OPERANDS (stmt, flags);
914 If you wish to iterate over some or all operands, use the
915 @code{FOR_EACH_SSA_@{USE,DEF,TREE@}_OPERAND} iterator. For example, to print
916 all the operands for a statement:
void
print_ops (tree stmt)
@{
  tree var;
  ssa_op_iter iter;

  FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_ALL_OPERANDS)
    print_generic_expr (stderr, var, TDF_SLIM);
@}
931 How to choose the appropriate iterator:
@item Determine whether you need to see the operand pointers, or just the
trees, and choose the appropriate macro:
940 use_operand_p FOR_EACH_SSA_USE_OPERAND
941 def_operand_p FOR_EACH_SSA_DEF_OPERAND
942 tree FOR_EACH_SSA_TREE_OPERAND
@item You need to declare a variable of the type you are interested
in, and an @code{ssa_op_iter} structure which serves as the loop
controlling variable.
949 @item Determine which operands you wish to use, and specify the flags of
950 those you are interested in. They are documented in
951 @file{tree-ssa-operands.h}:
954 #define SSA_OP_USE 0x01 /* @r{Real USE operands.} */
955 #define SSA_OP_DEF 0x02 /* @r{Real DEF operands.} */
956 #define SSA_OP_VUSE 0x04 /* @r{VUSE operands.} */
957 #define SSA_OP_VMAYUSE 0x08 /* @r{USE portion of V_MAY_DEFS.} */
958 #define SSA_OP_VMAYDEF 0x10 /* @r{DEF portion of V_MAY_DEFS.} */
959 #define SSA_OP_VMUSTDEF 0x20 /* @r{V_MUST_DEF definitions.} */
961 /* @r{These are commonly grouped operand flags.} */
962 #define SSA_OP_VIRTUAL_USES (SSA_OP_VUSE | SSA_OP_VMAYUSE)
963 #define SSA_OP_VIRTUAL_DEFS (SSA_OP_VMAYDEF | SSA_OP_VMUSTDEF)
964 #define SSA_OP_ALL_USES (SSA_OP_VIRTUAL_USES | SSA_OP_USE)
965 #define SSA_OP_ALL_DEFS (SSA_OP_VIRTUAL_DEFS | SSA_OP_DEF)
966 #define SSA_OP_ALL_OPERANDS (SSA_OP_ALL_USES | SSA_OP_ALL_DEFS)
970 So if you want to look at the use pointers for all the @code{USE} and
971 @code{VUSE} operands, you would do something like:
use_operand_p use_p;
ssa_op_iter iter;

FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, (SSA_OP_USE | SSA_OP_VUSE))
  process_use_ptr (use_p);
983 The @code{TREE} macro is basically the same as the @code{USE} and
984 @code{DEF} macros, only with the use or def dereferenced via
985 @code{USE_FROM_PTR (use_p)} and @code{DEF_FROM_PTR (def_p)}. Since we
aren't using operand pointers, use and def flags can be mixed.
tree var;
ssa_op_iter iter;

FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_VUSE | SSA_OP_VMUSTDEF)
  print_generic_expr (stderr, var, TDF_SLIM);
998 @code{V_MAY_DEF}s are broken into two flags, one for the
999 @code{DEF} portion (@code{SSA_OP_VMAYDEF}) and one for the USE portion
1000 (@code{SSA_OP_VMAYUSE}). If all you want to look at are the
1001 @code{V_MAY_DEF}s together, there is a fourth iterator macro for this,
1002 which returns both a def_operand_p and a use_operand_p for each
@code{V_MAY_DEF} in the statement. Note that you don't need any flags for this one:
use_operand_p use_p;
def_operand_p def_p;
ssa_op_iter iter;

FOR_EACH_SSA_MAYDEF_OPERAND (def_p, use_p, stmt, iter)
  @{ /* Use DEF_P and USE_P here.  */ @}
1017 @code{V_MUST_DEF}s are broken into two flags, one for the
1018 @code{DEF} portion (@code{SSA_OP_VMUSTDEF}) and one for the kill portion
1019 (@code{SSA_OP_VMUSTKILL}). If all you want to look at are the
1020 @code{V_MUST_DEF}s together, there is a fourth iterator macro for this,
1021 which returns both a def_operand_p and a use_operand_p for each
@code{V_MUST_DEF} in the statement. Note that you don't need any flags for this one either:
use_operand_p kill_p;
def_operand_p def_p;
ssa_op_iter iter;

FOR_EACH_SSA_MUSTDEF_OPERAND (def_p, kill_p, stmt, iter)
  @{ /* Use DEF_P and KILL_P here.  */ @}
There are many examples in the code, as well as the
documentation in @file{tree-ssa-operands.h}.
There are also a couple of variants on the statement iterators regarding PHI
nodes.

@code{FOR_EACH_PHI_ARG} works exactly like
@code{FOR_EACH_SSA_USE_OPERAND}, except it works over @code{PHI} arguments
instead of statement operands.
1048 /* Look at every virtual PHI use. */
1049 FOR_EACH_PHI_ARG (use_p, phi_stmt, iter, SSA_OP_VIRTUAL_USES)
1054 /* Look at every real PHI use. */
FOR_EACH_PHI_ARG (use_p, phi_stmt, iter, SSA_OP_USE)
/* Look at every PHI use. */
1059 FOR_EACH_PHI_ARG (use_p, phi_stmt, iter, SSA_OP_ALL_USES)
1063 @code{FOR_EACH_PHI_OR_STMT_@{USE,DEF@}} works exactly like
1064 @code{FOR_EACH_SSA_@{USE,DEF@}_OPERAND}, except it will function on
1065 either a statement or a @code{PHI} node. These should be used when it is
1066 appropriate but they are not quite as efficient as the individual
1067 @code{FOR_EACH_PHI} and @code{FOR_EACH_SSA} routines.
1070 FOR_EACH_PHI_OR_STMT_USE (use_operand_p, stmt, iter, flags)
1075 FOR_EACH_PHI_OR_STMT_DEF (def_operand_p, phi, iter, flags)
1081 @subsection Immediate Uses
1082 @cindex Immediate Uses
1084 Immediate use information is now always available. Using the immediate use
1085 iterators, you may examine every use of any @code{SSA_NAME}. For instance,
1086 to change each use of @code{ssa_var} to @code{ssa_var2}:
1089 use_operand_p imm_use_p;
1090 imm_use_iterator iterator;
1093 FOR_EACH_IMM_USE_SAFE (imm_use_p, iterator, ssa_var)
SET_USE (imm_use_p, ssa_var2);
There are 2 iterators which can be used. @code{FOR_EACH_IMM_USE_FAST} is used
when the immediate uses are not changed, i.e., you are looking at the uses, but
not setting them.
1101 If they do get changed, then care must be taken that things are not changed
1102 under the iterators, so use the @code{FOR_EACH_IMM_USE_SAFE} iterator. It
1103 attempts to preserve the sanity of the use list by moving an iterator element
1104 through the use list, preventing insertions and deletions in the list from
1105 resulting in invalid pointers. This is a little slower since it adds a
placeholder element and moves it through the list. This element must
also be removed if the loop is terminated early. A macro
(@code{BREAK_FROM_SAFE_IMM_USE}) is provided for this:
FOR_EACH_IMM_USE_SAFE (use_p, iter, var)
  @{
    if (var == last_var)
      BREAK_FROM_SAFE_IMM_USE (iter);
    else
      SET_USE (use_p, var2);
  @}
1120 There are checks in @code{verify_ssa} which verify that the immediate use list
1121 is up to date, as well as checking that an optimization didn't break from the
loop without using this macro. It is safe to simply @code{break} from a
@code{FOR_EACH_IMM_USE_FAST} traversal.
1125 Some useful functions and macros:
@item @code{has_zero_uses (ssa_var)} : Returns true if there are no uses of @code{ssa_var}.
1129 @item @code{has_single_use (ssa_var)} : Returns true if there is only a
1130 single use of @code{ssa_var}.
1131 @item @code{single_imm_use (ssa_var, use_operand_p *ptr, tree *stmt)} :
Returns true if there is only a single use of @code{ssa_var}, and also returns
the use pointer and the statement it occurs in via the second and third parameters.
1134 @item @code{num_imm_uses (ssa_var)} : Returns the number of immediate uses of
1135 @code{ssa_var}. It is better not to use this if possible since it simply
1136 utilizes a loop to count the uses.
1137 @item @code{PHI_ARG_INDEX_FROM_USE (use_p)} : Given a use within a @code{PHI}
1138 node, return the index number for the use. An assert is triggered if the use
1139 isn't located in a @code{PHI} node.
1140 @item @code{USE_STMT (use_p)} : Return the statement a use occurs in.
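For instance, a sketch of replacing the only use of a name, using the
routines above (@code{replacement} is a hypothetical new operand):

@smallexample
use_operand_p use_p;
tree use_stmt;

if (single_imm_use (ssa_var, &use_p, &use_stmt))
  @{
    SET_USE (use_p, replacement);
    update_stmt (use_stmt);
  @}
@end smallexample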
1143 Note that uses are not put into an immediate use list until their statement is
1144 actually inserted into the instruction stream via a @code{bsi_*} routine.
1146 It is also still possible to utilize lazy updating of statements, but this
1147 should be used only when absolutely required. Both alias analysis and the
1148 dominator optimizations currently do this.
1150 When lazy updating is being used, the immediate use information is out of date
1151 and cannot be used reliably. Lazy updating is achieved by simply marking
1152 statements modified via calls to @code{mark_stmt_modified} instead of
1153 @code{update_stmt}. When lazy updating is no longer required, all the
1154 modified statements must have @code{update_stmt} called in order to bring them
1155 up to date. This must be done before the optimization is finished, or
1156 @code{verify_ssa} will trigger an abort.
1158 This is done with a simple loop over the instruction stream:
block_stmt_iterator bsi;
basic_block bb;

FOR_EACH_BB (bb)
  for (bsi = bsi_start (bb); !bsi_end_p (bsi); bsi_next (&bsi))
    update_stmt_if_modified (bsi_stmt (bsi));
1170 @section Static Single Assignment
1172 @cindex static single assignment
1174 Most of the tree optimizers rely on the data flow information provided
1175 by the Static Single Assignment (SSA) form. We implement the SSA form
1176 as described in @cite{R. Cytron, J. Ferrante, B. Rosen, M. Wegman, and
1177 K. Zadeck. Efficiently Computing Static Single Assignment Form and the
1178 Control Dependence Graph. ACM Transactions on Programming Languages
1179 and Systems, 13(4):451-490, October 1991}.
1181 The SSA form is based on the premise that program variables are
1182 assigned in exactly one location in the program. Multiple assignments
1183 to the same variable create new versions of that variable. Naturally,
1184 actual programs are seldom in SSA form initially because variables
1185 tend to be assigned multiple times. The compiler modifies the program
1186 representation so that every time a variable is assigned in the code,
1187 a new version of the variable is created. Different versions of the
1188 same variable are distinguished by subscripting the variable name with
1189 its version number. Variables used in the right-hand side of
1190 expressions are renamed so that their version number matches that of
1191 the most recent assignment.
1193 We represent variable versions using @code{SSA_NAME} nodes. The
1194 renaming process in @file{tree-ssa.c} wraps every real and
1195 virtual operand with an @code{SSA_NAME} node which contains
1196 the version number and the statement that created the
1197 @code{SSA_NAME}. Only definitions and virtual definitions may
1198 create new @code{SSA_NAME} nodes.
1200 Sometimes, flow of control makes it impossible to determine what is the
1201 most recent version of a variable. In these cases, the compiler
1202 inserts an artificial definition for that variable called
1203 @dfn{PHI function} or @dfn{PHI node}. This new definition merges
1204 all the incoming versions of the variable to create a new name
1205 for it. For instance,
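@smallexample
if (@dots{})
  a_1 = @dots{};
else if (@dots{})
  a_2 = @dots{};
else
  a_3 = @dots{};
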
1215 # a_4 = PHI <a_1, a_2, a_3>
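return a_4;
@end smallexample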
1219 Since it is not possible to determine which of the three branches
1220 will be taken at runtime, we don't know which of @code{a_1},
1221 @code{a_2} or @code{a_3} to use at the return statement. So, the
1222 SSA renamer creates a new version @code{a_4} which is assigned
1223 the result of ``merging'' @code{a_1}, @code{a_2} and @code{a_3}.
Hence, PHI nodes mean ``one of these operands. I don't know which''.
1227 The following macros can be used to examine PHI nodes
1229 @defmac PHI_RESULT (@var{phi})
Returns the @code{SSA_NAME} created by PHI node @var{phi} (i.e., @var{phi}'s LHS).
1234 @defmac PHI_NUM_ARGS (@var{phi})
1235 Returns the number of arguments in @var{phi}. This number is exactly
1236 the number of incoming edges to the basic block holding @var{phi}@.
1239 @defmac PHI_ARG_ELT (@var{phi}, @var{i})
1240 Returns a tuple representing the @var{i}th argument of @var{phi}@.
1241 Each element of this tuple contains an @code{SSA_NAME} @var{var} and
1242 the incoming edge through which @var{var} flows.
1245 @defmac PHI_ARG_EDGE (@var{phi}, @var{i})
1246 Returns the incoming edge for the @var{i}th argument of @var{phi}.
1249 @defmac PHI_ARG_DEF (@var{phi}, @var{i})
1250 Returns the @code{SSA_NAME} for the @var{i}th argument of @var{phi}.
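For instance, a small sketch of walking every argument of a PHI node with
these macros (assuming @code{phi} is the PHI node of interest):

@smallexample
int i;

for (i = 0; i < PHI_NUM_ARGS (phi); i++)
  @{
    tree arg = PHI_ARG_DEF (phi, i);
    edge e = PHI_ARG_EDGE (phi, i);

    /* ARG flows into PHI_RESULT (phi) through edge E.  */
    print_generic_expr (stderr, arg, TDF_SLIM);
    fprintf (stderr, " from edge %d->%d\n", e->src->index, e->dest->index);
  @}
@end smallexample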
1254 @subsection Preserving the SSA form
1256 @cindex preserving SSA form
1257 Some optimization passes make changes to the function that
1258 invalidate the SSA property. This can happen when a pass has
1259 added new symbols or changed the program so that variables that
1260 were previously aliased aren't anymore. Whenever something like this
1261 happens, the affected symbols must be renamed into SSA form again.
1262 Transformations that emit new code or replicate existing statements
1263 will also need to update the SSA form@.
1265 Since GCC implements two different SSA forms for register and virtual
1266 variables, keeping the SSA form up to date depends on whether you are
1267 updating register or virtual names. In both cases, the general idea
1268 behind incremental SSA updates is similar: when new SSA names are
created, they typically are meant to replace other existing names in the program.
1272 For instance, given the following code:
1276 2 x_1 = PHI (0, x_5)
1288 Suppose that we insert new names @code{x_10} and @code{x_11} (lines
1289 @code{4} and @code{8})@.
1293 2 x_1 = PHI (0, x_5)
1307 We want to replace all the uses of @code{x_1} with the new definitions
1308 of @code{x_10} and @code{x_11}. Note that the only uses that should
1309 be replaced are those at lines @code{5}, @code{9} and @code{11}.
1310 Also, the use of @code{x_7} at line @code{9} should @emph{not} be
replaced (this is why we cannot just mark symbol @code{x} for renaming).
1314 Additionally, we may need to insert a PHI node at line @code{11}
1315 because that is a merge point for @code{x_10} and @code{x_11}. So the
1316 use of @code{x_1} at line @code{11} will be replaced with the new PHI
1317 node. The insertion of PHI nodes is optional. They are not strictly
1318 necessary to preserve the SSA form, and depending on what the caller
1319 inserted, they may not even be useful for the optimizers@.
1321 Updating the SSA form is a two step process. First, the pass has to
1322 identify which names need to be updated and/or which symbols need to
1323 be renamed into SSA form for the first time. When new names are
introduced to replace existing names in the program, the mapping
between the old and the new names is registered by calling
1326 @code{register_new_name_mapping} (note that if your pass creates new
1327 code by duplicating basic blocks, the call to @code{tree_duplicate_bb}
1328 will set up the necessary mappings automatically). On the other hand,
1329 if your pass exposes a new symbol that should be put in SSA form for
1330 the first time, the new symbol should be registered with
1331 @code{mark_sym_for_renaming}.
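For instance, something along these lines (a rough sketch; @code{new_name},
@code{old_name} and @code{sym} stand for whatever names and symbols the
pass has created):

@smallexample
/* Uses of OLD_NAME reached by the new code should see NEW_NAME.  */
register_new_name_mapping (new_name, old_name);

/* SYM is a symbol that has never been put into SSA form before.  */
mark_sym_for_renaming (sym);
@end smallexample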
1333 After the replacement mappings have been registered and new symbols
1334 marked for renaming, a call to @code{update_ssa} makes the registered
1335 changes. This can be done with an explicit call or by creating
1336 @code{TODO} flags in the @code{tree_opt_pass} structure for your pass.
There are several @code{TODO} flags that control the behavior of @code{update_ssa}:
1341 @item @code{TODO_update_ssa}. Update the SSA form inserting PHI nodes
1342 for newly exposed symbols and virtual names marked for updating.
1343 When updating real names, only insert PHI nodes for a real name
1344 @code{O_j} in blocks reached by all the new and old definitions for
1345 @code{O_j}. If the iterated dominance frontier for @code{O_j}
1346 is not pruned, we may end up inserting PHI nodes in blocks that
1347 have one or more edges with no incoming definition for
1348 @code{O_j}. This would lead to uninitialized warnings for
1349 @code{O_j}'s symbol@.
1351 @item @code{TODO_update_ssa_no_phi}. Update the SSA form without
1352 inserting any new PHI nodes at all. This is used by passes that
1353 have either inserted all the PHI nodes themselves or passes that
need only to patch use-def and def-def chains for virtuals.
1358 @item @code{TODO_update_ssa_full_phi}. Insert PHI nodes everywhere
1359 they are needed. No pruning of the IDF is done. This is used
1360 by passes that need the PHI nodes for @code{O_j} even if it
1361 means that some arguments will come from the default definition
1362 of @code{O_j}'s symbol (e.g., @code{pass_linear_transform})@.
1364 WARNING: If you need to use this flag, chances are that your
1365 pass may be doing something wrong. Inserting PHI nodes for an
1366 old name where not all edges carry a new replacement may lead to
1367 silent codegen errors or spurious uninitialized warnings@.
1369 @item @code{TODO_update_ssa_only_virtuals}. Passes that update the
1370 SSA form on their own may want to delegate the updating of
1371 virtual names to the generic updater. Since FUD chains are
1372 easier to maintain, this simplifies the work they need to do.
1373 NOTE: If this flag is used, any OLD->NEW mappings for real names
1374 are explicitly destroyed and only the symbols marked for
1375 renaming are processed@.
1379 @subsection Examining @code{SSA_NAME} nodes
1380 @cindex examining SSA_NAMEs
1382 The following macros can be used to examine @code{SSA_NAME} nodes
1384 @defmac SSA_NAME_DEF_STMT (@var{var})
1385 Returns the statement @var{s} that creates the @code{SSA_NAME}
1386 @var{var}. If @var{s} is an empty statement (i.e., @code{IS_EMPTY_STMT
1387 (@var{s})} returns @code{true}), it means that the first reference to
1388 this variable is a USE or a VUSE@.
1391 @defmac SSA_NAME_VERSION (@var{var})
1392 Returns the version number of the @code{SSA_NAME} object @var{var}.
1396 @subsection Walking use-def chains
1398 @deftypefn {Tree SSA function} void walk_use_def_chains (@var{var}, @var{fn}, @var{data})
1400 Walks use-def chains starting at the @code{SSA_NAME} node @var{var}.
1401 Calls function @var{fn} at each reaching definition found. Function
@var{fn} takes three arguments: @var{var}, its defining statement
1403 (@var{def_stmt}) and a generic pointer to whatever state information
that @var{fn} may want to maintain (@var{data}). Function @var{fn} is
able to stop the walk by returning @code{true}; otherwise, in order to
continue the walk, @var{fn} should return @code{false}.
Note that if @var{def_stmt} is a @code{PHI} node, the semantics are
slightly different. For each argument @var{arg} of the PHI node, this
function does the following:
1413 @item Walk the use-def chains for @var{arg}.
1414 @item Call @code{FN (@var{arg}, @var{phi}, @var{data})}.
1417 Note how the first argument to @var{fn} is no longer the original
1418 variable @var{var}, but the PHI argument currently being examined.
1419 If @var{fn} wants to get at @var{var}, it should call
1420 @code{PHI_RESULT} (@var{phi}).
1423 @subsection Walking the dominator tree
1425 @deftypefn {Tree SSA function} void walk_dominator_tree (@var{walk_data}, @var{bb})
1427 This function walks the dominator tree for the current CFG calling a
1428 set of callback functions defined in @var{struct dom_walk_data} in
@file{domwalk.h}. The callback functions you need to define give you
1430 hooks to execute custom code at various points during traversal:
1433 @item Once to initialize any local data needed while processing
1434 @var{bb} and its children. This local data is pushed into an
1435 internal stack which is automatically pushed and popped as the
1436 walker traverses the dominator tree.
1438 @item Once before traversing all the statements in the @var{bb}.
1440 @item Once for every statement inside @var{bb}.
1442 @item Once after traversing all the statements and before recursing
1443 into @var{bb}'s dominator children.
1445 @item It then recurses into all the dominator children of @var{bb}.
1447 @item After recursing into all the dominator children of @var{bb} it
1448 can, optionally, traverse every statement in @var{bb} again
1449 (i.e., repeating steps 2 and 3).
1451 @item Once after walking the statements in @var{bb} and @var{bb}'s
dominator children. At this stage, the block local data stack is popped.
1457 @node Alias analysis
1458 @section Alias analysis
1460 @cindex flow-sensitive alias analysis
1461 @cindex flow-insensitive alias analysis
1463 Alias analysis proceeds in 4 main phases:
1466 @item Structural alias analysis.
1468 This phase walks the types for structure variables, and determines which
1469 of the fields can overlap using offset and size of each field. For each
1470 field, a ``subvariable'' called a ``Structure field tag'' (SFT)@ is
1471 created, which represents that field as a separate variable. All
1472 accesses that could possibly overlap with a given field will have
1473 virtual operands for the SFT of that field.
1484 int tmp1, tmp2, tmp3;
1485 SFT.0_2 = V_MUST_DEF <SFT.0_1>
1487 SFT.1_4 = V_MUST_DEF <SFT.1_3>
1495 tmp3_7 = tmp1_5 + tmp2_6;
1500 If you copy the type tag for a variable for some reason, you probably
1501 also want to copy the subvariables for that variable.
1503 @item Points-to and escape analysis.
1505 This phase walks the use-def chains in the SSA web looking for
@item Assignments of the form @code{P_i = &VAR}
@item Assignments of the form @code{P_i = malloc ()}
@item Pointers and @code{ADDR_EXPR}s that escape the current function.
1514 The concept of `escaping' is the same one used in the Java world.
1515 When a pointer or an ADDR_EXPR escapes, it means that it has been
exposed outside of the current function. So, assignment to
global variables, function arguments and returning a pointer are
all escape sites.
1520 This is where we are currently limited. Since not everything is
1521 renamed into SSA, we lose track of escape properties when a
1522 pointer is stashed inside a field in a structure, for instance.
1523 In those cases, we are assuming that the pointer does escape.
1525 We use escape analysis to determine whether a variable is
1526 call-clobbered. Simply put, if an ADDR_EXPR escapes, then the
1527 variable is call-clobbered. If a pointer P_i escapes, then all
1528 the variables pointed-to by P_i (and its memory tag) also escape.
1530 @item Compute flow-sensitive aliases
We have two classes of memory tags. Memory tags associated with
the pointed-to data type of the pointers in the program are
called ``type memory tags'' (TMT)@. The other class are
those associated with SSA_NAMEs, called ``name memory tags'' (NMT)@.
The basic idea is that when adding operands for an INDIRECT_REF
*P_i, we will first check whether P_i has a name tag; if it does,
we use it, because that will have more precise aliasing
information. Otherwise, we use the standard type tag.
1541 In this phase, we go through all the pointers we found in
1542 points-to analysis and create alias sets for the name memory tags
1543 associated with each pointer P_i. If P_i escapes, we mark
1544 call-clobbered the variables it points to and its tag.
1547 @item Compute flow-insensitive aliases
1549 This pass will compare the alias set of every type memory tag and
every addressable variable found in the program. Given a type
memory tag TMT and an addressable variable V, if the alias sets
of TMT and V conflict (as computed by @code{may_alias_p}), then V is
marked as an alias tag and added to the alias set of TMT@.
1556 For instance, consider the following function:
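@smallexample
foo (int i)
@{
  int *p, a, b;

  if (i > 10)
    p = &a;
  else
    p = &b;

  *p = 3;
  a = b + 2;
  return *p;
@}
@end smallexample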
1575 After aliasing analysis has finished, the type memory tag for
pointer @code{p} will have two aliases, namely variables @code{a} and @code{b}.
1578 Every time pointer @code{p} is dereferenced, we want to mark the
1579 operation as a potential reference to @code{a} and @code{b}.
1590 # p_1 = PHI <p_4(1), p_6(2)>;
1592 # a_7 = V_MAY_DEF <a_3>;
1593 # b_8 = V_MAY_DEF <b_5>;
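*p_1 = 3;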
1596 # a_9 = V_MAY_DEF <a_7>
In certain cases, the list of may aliases for a pointer may grow
too large. This may cause an explosion in the number of virtual
operands inserted in the code, resulting in increased memory
consumption and compilation time.
1611 When the number of virtual operands needed to represent aliased
1612 loads and stores grows too large (configurable with @option{--param
1613 max-aliased-vops}), alias sets are grouped to avoid severe
1614 compile-time slow downs and memory consumption. The alias
1615 grouping heuristic proceeds as follows:
@item Sort the list of pointers in decreasing number of contributed virtual operands.
1621 @item Take the first pointer from the list and reverse the role
1622 of the memory tag and its aliases. Usually, whenever an
aliased variable Vi is found to alias with a memory tag
T, we add Vi to the may-aliases set for T@. This means that
after alias analysis, we will have:
1628 may-aliases(T) = @{ V1, V2, V3, ..., Vn @}
Thus, every statement that references T will get
@code{n} virtual operands, one for each of the Vi tags. But, when
alias grouping is enabled, we make T an alias tag and add it
1634 to the alias set of all the Vi variables:
1637 may-aliases(V1) = @{ T @}
1638 may-aliases(V2) = @{ T @}
1640 may-aliases(Vn) = @{ T @}
1643 This has two effects: (a) statements referencing T will only get
a single virtual operand, and (b) all the variables Vi will now
appear to alias each other. So, we lose alias precision to
improve compile time. But, in theory, a program with such a high
level of aliasing should not be very optimizable in the first place.
1650 @item Since variables may be in the alias set of more than one
1651 memory tag, the grouping done in step (2) needs to be extended
1652 to all the memory tags that have a non-empty intersection with
1653 the may-aliases set of tag T@. For instance, if we originally
1654 had these may-aliases sets:
1657 may-aliases(T) = @{ V1, V2, V3 @}
1658 may-aliases(R) = @{ V2, V4 @}
1661 In step (2) we would have reverted the aliases for T as:
1664 may-aliases(V1) = @{ T @}
1665 may-aliases(V2) = @{ T @}
1666 may-aliases(V3) = @{ T @}
1669 But note that now V2 is no longer aliased with R@. We could
1670 add R to may-aliases(V2), but we are in the process of
1671 grouping aliases to reduce virtual operands so what we do is
1672 add V4 to the grouping to obtain:
1675 may-aliases(V1) = @{ T @}
1676 may-aliases(V2) = @{ T @}
1677 may-aliases(V3) = @{ T @}
1678 may-aliases(V4) = @{ T @}
1681 @item If the total number of virtual operands due to aliasing is
still above the threshold set by @option{--param max-aliased-vops}, go back to (2).