/* Integrated Register Allocator (IRA) entry point.
   Copyright (C) 2006-2015 Free Software Foundation, Inc.
   Contributed by Vladimir Makarov <vmakarov@redhat.com>.

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */

/* The integrated register allocator (IRA) is a
   regional register allocator performing graph coloring on a top-down
   traversal of nested regions.  Graph coloring in a region is based
   on the Chaitin-Briggs algorithm.  It is called integrated because
   register coalescing, register live range splitting, and choosing a
   better hard register are done on-the-fly during coloring.  Register
   coalescing and choosing a cheaper hard register are done by hard
   register preferencing during hard register assignment.  The live
   range splitting is a byproduct of the regional register allocation.

   Major IRA notions are:

   o *Region* is a part of CFG where graph coloring based on the
     Chaitin-Briggs algorithm is done.  IRA can work on any set of
     nested CFG regions forming a tree.  Currently the regions are
     the entire function for the root region and natural loops for
     the other regions.  Therefore the data structure representing a
     region is called loop_tree_node.

   o *Allocno class* is a register class used for allocation of a
     given allocno.  It means that only a hard register of the given
     register class can be assigned to the given allocno.  In
     reality, an even smaller subset of (*profitable*) hard registers
     can be assigned.  In rare cases, the subset can be even smaller
     because our modification of the Chaitin-Briggs algorithm
     requires that the sets of hard registers assignable to allocnos
     form a forest, i.e. the sets can be ordered in a way where any
     previous set either does not intersect a given set or is a
     superset of the given set.
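
     As a rough illustration of this forest property (a minimal
     sketch, not IRA's own code; plain bit masks stand in for
     HARD_REG_SET and <stdbool.h> is assumed):

       // Sets form a forest iff every pair is either disjoint or
       // related by inclusion.
       static bool
       sets_form_forest (const unsigned *sets, int n)
       {
         for (int i = 0; i < n; i++)
           for (int j = i + 1; j < n; j++)
             {
               unsigned common = sets[i] & sets[j];
               if (common != 0 && common != sets[i] && common != sets[j])
                 return false;  // Partial overlap breaks the forest.
             }
         return true;
       }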

   o *Pressure class* is a register class belonging to a set of
     register classes containing all of the hard-registers available
     for register allocation.  The set of all pressure classes for a
     target is defined in the corresponding machine-description file
     according to some criteria.  Register pressure is calculated
     only for pressure classes and it affects some IRA decisions such
     as forming allocation regions.

   o *Allocno* represents the live range of a pseudo-register in a
     region.  Besides the obvious attributes like the corresponding
     pseudo-register number, allocno class, conflicting allocnos and
     conflicting hard-registers, there are a few allocno attributes
     which are important for understanding the allocation algorithm:

     - *Live ranges*.  This is a list of ranges of *program points*
       where the allocno lives.  Program points represent places
       where a pseudo can be born or become dead (there are
       approximately two times more program points than the insns)
       and they are represented by integers starting with 0.  The
       live ranges are used to find conflicts between allocnos.
       They also play a very important role for the transformation of
       the IRA internal representation of several regions into a
       one-region representation.  The latter is used during the
       reload pass work because each allocno represents all of the
       corresponding pseudo-registers.
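
       A minimal sketch of the conflict test this enables
       (illustrative only, not IRA's data structures; <stdbool.h> and
       <stddef.h> are assumed):

         // Two allocnos conflict iff some pair of their
         // program-point ranges overlaps.
         struct live_range { int start, finish; struct live_range *next; };

         static bool
         ranges_conflict_p (struct live_range *r1, struct live_range *r2)
         {
           for (; r1 != NULL; r1 = r1->next)
             for (struct live_range *r = r2; r != NULL; r = r->next)
               if (r1->start <= r->finish && r->start <= r1->finish)
                 return true;
           return false;
         }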

     - *Hard-register costs*.  This is a vector of size equal to the
       number of available hard-registers of the allocno class.  The
       cost of a callee-clobbered hard-register for an allocno is
       increased by the cost of save/restore code around the calls
       through the given allocno's life.  If the allocno is a move
       instruction operand and another operand is a hard-register of
       the allocno class, the cost of the hard-register is decreased
       by the move cost.

       When an allocno is assigned, the hard-register with minimal
       full cost is used.  Initially, a hard-register's full cost is
       the corresponding value from the hard-register's cost vector.
       If the allocno is connected by a *copy* (see below) to
       another allocno which has just received a hard-register, the
       cost of the hard-register is decreased.  Before choosing a
       hard-register for an allocno, the allocno's current costs of
       the hard-registers are modified by the conflict hard-register
       costs of all of the conflicting allocnos which are not
       assigned yet.

     - *Conflict hard-register costs*.  This is a vector of the same
       size as the hard-register costs vector.  To permit an
       unassigned allocno to get a better hard-register, IRA uses
       this vector to calculate the final full cost of the
       available hard-registers.  Conflict hard-register costs of an
       unassigned allocno are also changed with a change of the
       hard-register cost of the allocno when a copy involving the
       allocno is processed as described above.  This is done to
       show other unassigned allocnos that a given allocno prefers
       some hard-registers in order to remove the move instruction
       corresponding to the copy.
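
       Putting the two vectors together, the choice reduces to a
       minimum search over their sum (a hedged sketch with invented
       names, assuming <limits.h>; IRA's real code also weighs
       execution frequencies and profitability):

         // Pick the hard register with minimal full cost: own cost
         // plus accumulated conflict cost.
         static int
         cheapest_hard_reg (const int *costs, const int *conflict_costs,
                            int n)
         {
           int best = -1, best_cost = INT_MAX;
           for (int i = 0; i < n; i++)
             if (costs[i] + conflict_costs[i] < best_cost)
               {
                 best_cost = costs[i] + conflict_costs[i];
                 best = i;  // Index into the class's hard registers.
               }
           return best;
         }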

   o *Cap*.  If a pseudo-register does not live in a region but
     lives in a nested region, IRA creates a special allocno called
     a cap in the outer region.  A region cap is also created for a
     subregion cap.

   o *Copy*.  Allocnos can be connected by copies.  Copies are used
     to modify hard-register costs for allocnos during coloring.
     Such modifications reflect a preference to use the same
     hard-register for the allocnos connected by copies.  Usually
     copies are created for move insns (in this case it results in
     register coalescing).  But IRA also creates copies for operands
     of an insn which should be assigned to the same hard-register
     due to constraints in the machine description (it usually
     results in removing a move generated in reload to satisfy
     the constraints) and copies referring to the allocno which is
     the output operand of an instruction and the allocno which is
     an input operand dying in the instruction (creation of such
     copies results in less register shuffling).  IRA *does not*
     create copies between the same-register allocnos from different
     regions because we use another technique for propagating
     hard-register preference on the borders of regions.

   Allocnos (including caps) for the upper region in the region tree
   *accumulate* information important for coloring from allocnos with
   the same pseudo-register from nested regions.  This includes
   hard-register and memory costs, conflicts with hard-registers,
   allocno conflicts, allocno copies and more.  *Thus, attributes for
   allocnos in a region have the same values as if the region had no
   subregions*.  It means that attributes for allocnos in the
   outermost region corresponding to the function have the same values
   as though the allocation used only one region which is the entire
   function.  It also means that we can look at IRA's work as if IRA
   first did the allocation for the whole function, then improved it
   for loops, then for their subloops, and so on.
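
   Illustratively, the accumulation of, say, hard-register costs
   upward through the region tree amounts to summing children into
   the parent (a minimal sketch with made-up names, not IRA code):

     // A parent allocno's cost vector behaves as if the region had
     // no subregions: sum in the vectors of the subregion allocnos
     // for the same pseudo.
     static void
     accumulate_costs (int *parent_costs, const int *child_costs, int n)
     {
       for (int i = 0; i < n; i++)
         parent_costs[i] += child_costs[i];
     }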

   IRA major passes are:

   o Building IRA internal representation which consists of the
     following subpasses:

     * First, IRA builds regions and creates allocnos (file
       ira-build.c) and initializes most of their attributes.

     * Then IRA finds an allocno class for each allocno and
       calculates its initial (non-accumulated) cost of memory and
       each hard-register of its allocno class (file ira-costs.c).

     * IRA creates live ranges of each allocno, calculates register
       pressure for each pressure class in each region, sets up
       conflict hard registers for each allocno and info about calls
       the allocno lives through (file ira-lives.c).

     * IRA removes low register pressure loops from the regions
       mostly to speed IRA up (file ira-build.c).

     * IRA propagates accumulated allocno info from lower region
       allocnos to corresponding upper region allocnos (file
       ira-build.c).

     * IRA creates all caps (file ira-build.c).

     * Having live-ranges of allocnos and their classes, IRA creates
       conflicting allocnos for each allocno.  Conflicting allocnos
       are stored as a bit vector or an array of pointers to the
       conflicting allocnos, whichever is more profitable (file
       ira-conflicts.c).  At this point IRA creates allocno copies.

   o Coloring.  Now IRA has all the necessary info to start the graph
     coloring process.  It is done in each region on a top-down
     traversal of the region tree (file ira-color.c).  There are the
     following subpasses:

     * Finding profitable hard registers of the corresponding allocno
       class for each allocno.  For example, only callee-saved hard
       registers are frequently profitable for allocnos living
       through calls.  If the profitable hard register set of an
       allocno does not form a tree based on subset relation, we use
       some approximation to form the tree.  This approximation is
       used to figure out trivial colorability of allocnos.  The
       approximation is needed only in pretty rare cases.

     * Putting allocnos onto the coloring stack.  IRA uses Briggs
       optimistic coloring which is a major improvement over
       Chaitin's coloring.  Therefore IRA does not spill allocnos at
       this point.  There is some freedom in the order of putting
       allocnos on the stack which can affect the final result of
       the allocation.  IRA uses some heuristics to improve the
       order.  The major one is to form *threads* from colorable
       allocnos and push them on the stack by threads.  A thread is a
       set of non-conflicting colorable allocnos connected by
       copies.  The thread contains allocnos from the colorable
       bucket or colorable allocnos already pushed onto the coloring
       stack.  Pushing thread allocnos one after another onto the
       stack increases chances of removing copies when the allocnos
       get the same hard reg.

       We also use a modification of the Chaitin-Briggs algorithm
       which works for intersected register classes of allocnos.  To
       figure out trivial colorability of allocnos, the
       above-mentioned tree of hard register sets is used.  To get an
       idea of how the algorithm works, consider an i386 example:
       take an allocno to which any general hard register can be
       assigned.  If the allocno conflicts with eight allocnos to
       which only the EAX register can be assigned, the given allocno
       is still trivially colorable because all conflicting allocnos
       might be assigned only to EAX and all other general hard
       registers are still free.
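
       A hedged sketch of the criterion behind this example (invented
       names, <stdbool.h> assumed; IRA's real check works on the tree
       of profitable hard register sets):

         // Conflicts confined to a subset of the allocno's
         // profitable registers can consume at most that subset,
         // however many conflicting allocnos there are.
         static bool
         trivially_colorable_p (int own_set_size, int n_subsets,
                                const int *subset_size,
                                const int *subset_demand)
         {
           int demand = 0;
           for (int i = 0; i < n_subsets; i++)
             demand += (subset_demand[i] < subset_size[i]
                        ? subset_demand[i] : subset_size[i]);
           return demand < own_set_size;  // Some register is left over.
         }

       In the example above, the eight EAX-only conflicts contribute
       min (8, 1) = 1, so a general register always remains.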

       To get an idea of the used trivial colorability criterion, it
       is also useful to read the article "Graph-Coloring Register
       Allocation for Irregular Architectures" by Michael D. Smith
       and Glenn Holloway.  The major difference between the
       article's approach and the approach used in IRA is that
       Smith's approach takes register classes only from the machine
       description while IRA calculates register classes from the
       intermediate code too (e.g. an explicit usage of hard
       registers in RTL code for parameter passing can result in
       creation of additional register classes which contain or
       exclude the hard registers).  That makes the IRA approach
       useful for improving coloring even for architectures with
       regular register files, and in fact some benchmarking shows
       the improvement for regular class architectures is even
       bigger than for irregular ones.  Another difference is that
       Smith's approach chooses the intersection of classes of all
       insn operands in which a given pseudo occurs.  IRA can use
       bigger classes if it is still more profitable than memory
       usage.

     * Popping the allocnos from the stack and assigning them hard
       registers.  If IRA cannot assign a hard register to an
       allocno and the allocno is coalesced, IRA undoes the
       coalescing and puts the uncoalesced allocnos onto the stack in
       the hope that some such allocnos will get a hard register
       separately.  If IRA fails to assign a hard register, or memory
       is more profitable for the allocno, IRA spills the allocno.
       IRA assigns the allocno the hard-register with minimal full
       allocation cost which reflects the cost of usage of the
       hard-register for the allocno and the cost of usage of the
       hard-register for allocnos conflicting with the given allocno.

     * Chaitin-Briggs coloring assigns as many pseudos as possible
       to hard registers.  After coloring we try to improve the
       allocation from a cost point of view.  We improve the
       allocation by spilling some allocnos and assigning the freed
       hard registers to other allocnos if it decreases the overall
       allocation cost.

     * After allocno assigning in the region, IRA modifies the hard
       register and memory costs for the corresponding allocnos in
       the subregions to reflect the cost of possible loads, stores,
       or moves on the border of the region and its subregions.
       When the default regional allocation algorithm is used
       (-fira-algorithm=mixed), IRA just propagates the assignment
       for allocnos if the register pressure in the region for the
       corresponding pressure class is less than the number of
       available hard registers for the given pressure class.

   o Spill/restore code moving.  When IRA performs an allocation
     by traversing regions in top-down order, it does not know what
     happens below in the region tree.  Therefore, sometimes IRA
     misses opportunities to perform a better allocation.  A simple
     optimization tries to improve allocation in a region having
     subregions and contained in another region.  If the
     corresponding allocnos in the subregion are spilled, it spills
     the region allocno if it is profitable.  The optimization
     implements a simple iterative algorithm performing profitable
     transformations while they are still possible.  It is fast in
     practice, so there is no real need for a better time complexity
     algorithm.

   o Code change.  After coloring, two allocnos representing the
     same pseudo-register outside and inside a region respectively
     may be assigned to different locations (hard-registers or
     memory).  In this case IRA creates and uses a new
     pseudo-register inside the region and adds code to move allocno
     values on the region's borders.  This is done during a top-down
     traversal of the regions (file ira-emit.c).  In some
     complicated cases IRA can create a new allocno to move allocno
     values (e.g. when a swap of values stored in two hard-registers
     is needed).  At this stage, the new allocno is marked as
     spilled.  IRA still creates the pseudo-register and the moves
     on the region borders even when both allocnos were assigned to
     the same hard-register.  If the reload pass spills a
     pseudo-register for some reason, the effect will be smaller
     because another allocno will still be in the hard-register.  In
     most cases, this is better than spilling both allocnos.  If
     reload does not change the allocation for the two
     pseudo-registers, the trivial move will be removed by
     post-reload optimizations.  IRA does not generate moves for
     allocnos assigned to the same hard register when the default
     regional allocation algorithm is used and the register pressure
     in the region for the corresponding pressure class is less than
     the number of available hard registers for the given pressure
     class.  IRA also does some optimizations to remove redundant
     stores and to reduce code duplication on the region borders.

   o Flattening internal representation.  After changing code, IRA
     transforms its internal representation for several regions into
     a one-region representation (file ira-build.c).  This process is
     called IR flattening.  Such a process is more complicated than
     IR rebuilding would be, but is much faster.

   o After IR flattening, IRA tries to assign hard registers to all
     spilled allocnos.  This is implemented by a simple and fast
     priority coloring algorithm (see function
     ira_reassign_conflict_allocnos in ira-color.c).  Here new
     allocnos created during the code change pass can be assigned to
     hard registers.

   o At the end IRA calls the reload pass.  The reload pass
     communicates with IRA through several functions in file
     ira-color.c to improve its decisions in

     * sharing stack slots for the spilled pseudos based on IRA info
       about pseudo-register conflicts.

     * reassigning hard-registers to all spilled pseudos at the end
       of each reload iteration.

     * choosing a better hard-register to spill based on IRA info
       about pseudo-register live ranges and the register pressure
       in places where the pseudo-register lives.

   IRA uses a lot of data representing the target processors.  These
   data are initialized in file ira.c.

   If the function has no loops (or the loops are ignored when
   -fira-algorithm=CB is used), we have classic Chaitin-Briggs
   coloring (only instead of a separate pass of coalescing, we use
   hard register preferencing).  In this case, IRA works much faster
   because many things are not done (like IR flattening, the
   spill/restore optimization, and the code change).

   Literature worth reading for a better understanding of the code:

   o Preston Briggs, Keith D. Cooper, Linda Torczon.  Improvements to
     Graph Coloring Register Allocation.

   o David Callahan, Brian Koblenz.  Register allocation via
     hierarchical graph coloring.

   o Keith Cooper, Anshuman Dasgupta, Jason Eckhardt.  Revisiting Graph
     Coloring Register Allocation: A Study of the Chaitin-Briggs and
     Callahan-Koblenz Algorithms.

   o Guei-Yuan Lueh, Thomas Gross, and Ali-Reza Adl-Tabatabai.  Global
     Register Allocation Based on Graph Fusion.

   o Michael D. Smith and Glenn Holloway.  Graph-Coloring Register
     Allocation for Irregular Architectures.

   o Vladimir Makarov.  The Integrated Register Allocator for GCC.

   o Vladimir Makarov.  The top-down register allocator for irregular
     register file architectures.  */

#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
#include "regs.h"
#include "input.h"
#include "alias.h"
#include "symtab.h"
#include "tree.h"
#include "rtl.h"
#include "tm_p.h"
#include "target.h"
#include "flags.h"
#include "obstack.h"
#include "bitmap.h"
#include "hard-reg-set.h"
#include "predict.h"
#include "function.h"
#include "dominance.h"
#include "cfg.h"
#include "cfgrtl.h"
#include "cfgbuild.h"
#include "cfgcleanup.h"
#include "basic-block.h"
#include "df.h"
#include "insn-config.h"
#include "expmed.h"
#include "dojump.h"
#include "explow.h"
#include "calls.h"
#include "emit-rtl.h"
#include "varasm.h"
#include "stmt.h"
#include "expr.h"
#include "recog.h"
#include "params.h"
#include "tree-pass.h"
#include "output.h"
#include "except.h"
#include "reload.h"
#include "diagnostic-core.h"
#include "ira-int.h"
#include "lra.h"
#include "dce.h"
#include "dbgcnt.h"
#include "rtl-iter.h"
#include "shrink-wrap.h"

struct target_ira default_target_ira;
struct target_ira_int default_target_ira_int;
#if SWITCHABLE_TARGET
struct target_ira *this_target_ira = &default_target_ira;
struct target_ira_int *this_target_ira_int = &default_target_ira_int;
#endif

/* A modified value of flag `-fira-verbose' used internally.  */
int internal_flag_ira_verbose;

/* Dump file of the allocator if it is not NULL.  */
FILE *ira_dump_file;

/* The number of elements in the following array.  */
int ira_spilled_reg_stack_slots_num;

/* The following array contains info about spilled pseudo-registers
   stack slots used in current function so far.  */
struct ira_spilled_reg_stack_slot *ira_spilled_reg_stack_slots;

/* Correspondingly overall cost of the allocation, overall cost before
   reload, cost of the allocnos assigned to hard-registers, cost of
   the allocnos assigned to memory, cost of loads, stores and register
   move insns generated for pseudo-register live range splitting (see
   ira-emit.c).  */
int64_t ira_overall_cost, overall_cost_before;
int64_t ira_reg_cost, ira_mem_cost;
int64_t ira_load_cost, ira_store_cost, ira_shuffle_cost;
int ira_move_loops_num, ira_additional_jumps_num;

/* All registers that can be eliminated.  */
HARD_REG_SET eliminable_regset;

/* Value of max_reg_num () before IRA work start.  This value helps
   us to recognize a situation when new pseudos were created during
   IRA work.  */
static int max_regno_before_ira;

/* Temporary hard reg set used for a different calculation.  */
static HARD_REG_SET temp_hard_regset;

#define last_mode_for_init_move_cost \
  (this_target_ira_int->x_last_mode_for_init_move_cost)

/* The function sets up the map IRA_REG_MODE_HARD_REGSET.  */
static void
setup_reg_mode_hard_regset (void)
{
  int i, m, hard_regno;

  for (m = 0; m < NUM_MACHINE_MODES; m++)
    for (hard_regno = 0; hard_regno < FIRST_PSEUDO_REGISTER; hard_regno++)
      {
	CLEAR_HARD_REG_SET (ira_reg_mode_hard_regset[hard_regno][m]);
	for (i = hard_regno_nregs[hard_regno][m] - 1; i >= 0; i--)
	  if (hard_regno + i < FIRST_PSEUDO_REGISTER)
	    SET_HARD_REG_BIT (ira_reg_mode_hard_regset[hard_regno][m],
			      hard_regno + i);
      }
}
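
/* A hedged illustration of what the map above gives us (not code used
   by IRA): on a target where a DImode value starting in hard register
   2 also occupies hard register 3, ira_reg_mode_hard_regset[2][DImode]
   holds both registers, so an overlap query is a single set operation:

     if (hard_reg_set_intersect_p (ira_reg_mode_hard_regset[regno][mode],
				   conflict_regs))
       ... REGNO used in MODE overlaps CONFLICT_REGS ...

   Here regno, mode and conflict_regs are illustrative names only.  */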

#define no_unit_alloc_regs \
  (this_target_ira_int->x_no_unit_alloc_regs)

/* The function sets up the three arrays declared above.  */
static void
setup_class_hard_regs (void)
{
  int cl, i, hard_regno, n;
  HARD_REG_SET processed_hard_reg_set;

  ira_assert (SHRT_MAX >= FIRST_PSEUDO_REGISTER);
  for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
    {
      COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
      AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
      CLEAR_HARD_REG_SET (processed_hard_reg_set);
      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
	{
	  ira_non_ordered_class_hard_regs[cl][i] = -1;
	  ira_class_hard_reg_index[cl][i] = -1;
	}
      for (n = 0, i = 0; i < FIRST_PSEUDO_REGISTER; i++)
	{
#ifdef REG_ALLOC_ORDER
	  hard_regno = reg_alloc_order[i];
#else
	  hard_regno = i;
#endif
	  if (TEST_HARD_REG_BIT (processed_hard_reg_set, hard_regno))
	    continue;
	  SET_HARD_REG_BIT (processed_hard_reg_set, hard_regno);
	  if (! TEST_HARD_REG_BIT (temp_hard_regset, hard_regno))
	    ira_class_hard_reg_index[cl][hard_regno] = -1;
	  else
	    {
	      ira_class_hard_reg_index[cl][hard_regno] = n;
	      ira_class_hard_regs[cl][n++] = hard_regno;
	    }
	}
      ira_class_hard_regs_num[cl] = n;
      for (n = 0, i = 0; i < FIRST_PSEUDO_REGISTER; i++)
	if (TEST_HARD_REG_BIT (temp_hard_regset, i))
	  ira_non_ordered_class_hard_regs[cl][n++] = i;
      ira_assert (ira_class_hard_regs_num[cl] == n);
    }
}

/* Set up global variables defining info about hard registers for the
   allocation.  These depend on USE_HARD_FRAME_P whose TRUE value means
   that we can use the hard frame pointer for the allocation.  */
static void
setup_alloc_regs (bool use_hard_frame_p)
{
#ifdef ADJUST_REG_ALLOC_ORDER
  ADJUST_REG_ALLOC_ORDER;
#endif
  COPY_HARD_REG_SET (no_unit_alloc_regs, fixed_reg_set);
  if (! use_hard_frame_p)
    SET_HARD_REG_BIT (no_unit_alloc_regs, HARD_FRAME_POINTER_REGNUM);
  setup_class_hard_regs ();
}

#define alloc_reg_class_subclasses \
  (this_target_ira_int->x_alloc_reg_class_subclasses)

/* Initialize the table of subclasses of each reg class.  */
static void
setup_reg_subclasses (void)
{
  int i, j;
  HARD_REG_SET temp_hard_regset2;

  for (i = 0; i < N_REG_CLASSES; i++)
    for (j = 0; j < N_REG_CLASSES; j++)
      alloc_reg_class_subclasses[i][j] = LIM_REG_CLASSES;

  for (i = 0; i < N_REG_CLASSES; i++)
    {
      if (i == (int) NO_REGS)
	continue;

      COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[i]);
      AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
      if (hard_reg_set_empty_p (temp_hard_regset))
	continue;
      for (j = 0; j < N_REG_CLASSES; j++)
	if (i != j)
	  {
	    enum reg_class *p;

	    COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[j]);
	    AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
	    if (! hard_reg_set_subset_p (temp_hard_regset,
					 temp_hard_regset2))
	      continue;
	    p = &alloc_reg_class_subclasses[j][0];
	    while (*p != LIM_REG_CLASSES) p++;
	    *p = (enum reg_class) i;
	  }
    }
}

/* Set up IRA_MEMORY_MOVE_COST and IRA_MAX_MEMORY_MOVE_COST.  */
static void
setup_class_subset_and_memory_move_costs (void)
{
  int cl, cl2, mode, cost;
  HARD_REG_SET temp_hard_regset2;

  for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
    ira_memory_move_cost[mode][NO_REGS][0]
      = ira_memory_move_cost[mode][NO_REGS][1] = SHRT_MAX;
  for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
    {
      if (cl != (int) NO_REGS)
	for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
	  {
	    ira_max_memory_move_cost[mode][cl][0]
	      = ira_memory_move_cost[mode][cl][0]
	      = memory_move_cost ((machine_mode) mode,
				  (reg_class_t) cl, false);
	    ira_max_memory_move_cost[mode][cl][1]
	      = ira_memory_move_cost[mode][cl][1]
	      = memory_move_cost ((machine_mode) mode,
				  (reg_class_t) cl, true);
	    /* Costs for NO_REGS are used in cost calculation on the
	       1st pass when the preferred register classes are not
	       known yet.  In this case we take the best scenario.  */
	    if (ira_memory_move_cost[mode][NO_REGS][0]
		> ira_memory_move_cost[mode][cl][0])
	      ira_max_memory_move_cost[mode][NO_REGS][0]
		= ira_memory_move_cost[mode][NO_REGS][0]
		= ira_memory_move_cost[mode][cl][0];
	    if (ira_memory_move_cost[mode][NO_REGS][1]
		> ira_memory_move_cost[mode][cl][1])
	      ira_max_memory_move_cost[mode][NO_REGS][1]
		= ira_memory_move_cost[mode][NO_REGS][1]
		= ira_memory_move_cost[mode][cl][1];
	  }
    }
  for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
    for (cl2 = (int) N_REG_CLASSES - 1; cl2 >= 0; cl2--)
      {
	COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
	AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
	COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl2]);
	AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
	ira_class_subset_p[cl][cl2]
	  = hard_reg_set_subset_p (temp_hard_regset, temp_hard_regset2);
	if (! hard_reg_set_empty_p (temp_hard_regset2)
	    && hard_reg_set_subset_p (reg_class_contents[cl2],
				      reg_class_contents[cl]))
	  for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
	    {
	      cost = ira_memory_move_cost[mode][cl2][0];
	      if (cost > ira_max_memory_move_cost[mode][cl][0])
		ira_max_memory_move_cost[mode][cl][0] = cost;
	      cost = ira_memory_move_cost[mode][cl2][1];
	      if (cost > ira_max_memory_move_cost[mode][cl][1])
		ira_max_memory_move_cost[mode][cl][1] = cost;
	    }
      }
  for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
    for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
      {
	ira_memory_move_cost[mode][cl][0]
	  = ira_max_memory_move_cost[mode][cl][0];
	ira_memory_move_cost[mode][cl][1]
	  = ira_max_memory_move_cost[mode][cl][1];
      }

  setup_reg_subclasses ();
}

/* Define the following macro if allocation through malloc is
   preferable.  */
#define IRA_NO_OBSTACK

#ifndef IRA_NO_OBSTACK
/* Obstack used for storing all dynamic data (except bitmaps) of the
   IRA.  */
static struct obstack ira_obstack;
#endif

/* Obstack used for storing all bitmaps of the IRA.  */
static struct bitmap_obstack ira_bitmap_obstack;

/* Allocate memory of size LEN for IRA data.  */
void *
ira_allocate (size_t len)
{
  void *res;

#ifndef IRA_NO_OBSTACK
  res = obstack_alloc (&ira_obstack, len);
#else
  res = xmalloc (len);
#endif
  return res;
}

/* Free memory ADDR allocated for IRA data.  */
void
ira_free (void *addr ATTRIBUTE_UNUSED)
{
#ifndef IRA_NO_OBSTACK
  /* do nothing */
#else
  free (addr);
#endif
}

/* Allocate and return a bitmap for IRA.  */
bitmap
ira_allocate_bitmap (void)
{
  return BITMAP_ALLOC (&ira_bitmap_obstack);
}

/* Free bitmap B allocated for IRA.  */
void
ira_free_bitmap (bitmap b ATTRIBUTE_UNUSED)
{
  /* do nothing */
}
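
/* A usage sketch for the wrappers above (illustrative, not part of
   IRA): pairing ira_allocate with ira_free keeps allocation and
   release consistent whichever way IRA_NO_OBSTACK is set.

     int *temp_costs = (int *) ira_allocate (max_reg_num ()
					     * sizeof (int));
     ... fill and use temp_costs ...
     ira_free (temp_costs);
*/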

/* Output information about allocation of all allocnos (except for
   caps) into file F.  */
void
ira_print_disposition (FILE *f)
{
  int i, n, max_regno;
  ira_allocno_t a;
  basic_block bb;

  fprintf (f, "Disposition:");
  max_regno = max_reg_num ();
  for (n = 0, i = FIRST_PSEUDO_REGISTER; i < max_regno; i++)
    for (a = ira_regno_allocno_map[i];
	 a != NULL;
	 a = ALLOCNO_NEXT_REGNO_ALLOCNO (a))
      {
	if (n % 4 == 0)
	  fprintf (f, "\n");
	n++;
	fprintf (f, " %4d:r%-4d", ALLOCNO_NUM (a), ALLOCNO_REGNO (a));
	if ((bb = ALLOCNO_LOOP_TREE_NODE (a)->bb) != NULL)
	  fprintf (f, "b%-3d", bb->index);
	else
	  fprintf (f, "l%-3d", ALLOCNO_LOOP_TREE_NODE (a)->loop_num);
	if (ALLOCNO_HARD_REGNO (a) >= 0)
	  fprintf (f, " %3d", ALLOCNO_HARD_REGNO (a));
	else
	  fprintf (f, " mem");
      }
  fprintf (f, "\n");
}

/* Output information about allocation of all allocnos into
   stderr.  */
void
ira_debug_disposition (void)
{
  ira_print_disposition (stderr);
}

/* Set up ira_stack_reg_pressure_class which is the biggest pressure
   register class containing stack registers or NO_REGS if there are
   no stack registers.  To find this class, we iterate through all
   register pressure classes and choose the first register pressure
   class containing all the stack registers and having the biggest
   size.  */
static void
setup_stack_reg_pressure_class (void)
{
  ira_stack_reg_pressure_class = NO_REGS;
#ifdef STACK_REGS
  {
    int i, best, size;
    enum reg_class cl;
    HARD_REG_SET temp_hard_regset2;

    CLEAR_HARD_REG_SET (temp_hard_regset);
    for (i = FIRST_STACK_REG; i <= LAST_STACK_REG; i++)
      SET_HARD_REG_BIT (temp_hard_regset, i);
    best = 0;
    for (i = 0; i < ira_pressure_classes_num; i++)
      {
	cl = ira_pressure_classes[i];
	COPY_HARD_REG_SET (temp_hard_regset2, temp_hard_regset);
	AND_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl]);
	size = hard_reg_set_size (temp_hard_regset2);
	if (best < size)
	  {
	    best = size;
	    ira_stack_reg_pressure_class = cl;
	  }
      }
  }
#endif
}

/* Find pressure classes which are register classes for which we
   calculate register pressure in IRA, register pressure sensitive
   insn scheduling, and register pressure sensitive loop invariant
   motion.

   To make register pressure calculation easy, we always use
   non-intersected register pressure classes.  A move of hard
   registers from one register pressure class is not more expensive
   than load and store of the hard registers.  Most likely an allocno
   class will be a subset of a register pressure class and in many
   cases it will be a register pressure class itself.  That makes
   usage of register pressure classes a good approximation for
   finding high register pressure.  */
static void
setup_pressure_classes (void)
{
  int cost, i, n, curr;
  int cl, cl2;
  enum reg_class pressure_classes[N_REG_CLASSES];
  int m;
  HARD_REG_SET temp_hard_regset2;
  bool insert_p;

  n = 0;
  for (cl = 0; cl < N_REG_CLASSES; cl++)
    {
      if (ira_class_hard_regs_num[cl] == 0)
	continue;
      if (ira_class_hard_regs_num[cl] != 1
	  /* A register class without subclasses may contain a few
	     hard registers and movement between them is costly
	     (e.g. SPARC FPCC registers).  We still should consider it
	     as a candidate for a pressure class.  */
	  && alloc_reg_class_subclasses[cl][0] < cl)
	{
	  /* Check that the moves between any hard registers of the
	     current class are not more expensive for a legal mode
	     than load/store of the hard registers of the current
	     class.  Such a class is a potential candidate to be a
	     register pressure class.  */
	  for (m = 0; m < NUM_MACHINE_MODES; m++)
	    {
	      COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
	      AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
	      AND_COMPL_HARD_REG_SET (temp_hard_regset,
				      ira_prohibited_class_mode_regs[cl][m]);
	      if (hard_reg_set_empty_p (temp_hard_regset))
		continue;
	      ira_init_register_move_cost_if_necessary ((machine_mode) m);
	      cost = ira_register_move_cost[m][cl][cl];
	      if (cost <= ira_max_memory_move_cost[m][cl][1]
		  || cost <= ira_max_memory_move_cost[m][cl][0])
		break;
	    }
	  if (m >= NUM_MACHINE_MODES)
	    continue;
	}
      curr = 0;
      insert_p = true;
      COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
      AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
      /* Remove so far added pressure classes which are subset of the
	 current candidate class.  Prefer GENERAL_REGS as a pressure
	 register class to another class containing the same
	 allocatable hard registers.  We do this because machine
	 dependent cost hooks might give wrong costs for the latter
	 class but always give the right cost for the former class
	 (GENERAL_REGS).  */
      for (i = 0; i < n; i++)
	{
	  cl2 = pressure_classes[i];
	  COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl2]);
	  AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
	  if (hard_reg_set_subset_p (temp_hard_regset, temp_hard_regset2)
	      && (! hard_reg_set_equal_p (temp_hard_regset, temp_hard_regset2)
		  || cl2 == (int) GENERAL_REGS))
	    {
	      pressure_classes[curr++] = (enum reg_class) cl2;
	      insert_p = false;
	      continue;
	    }
	  if (hard_reg_set_subset_p (temp_hard_regset2, temp_hard_regset)
	      && (! hard_reg_set_equal_p (temp_hard_regset2, temp_hard_regset)
		  || cl == (int) GENERAL_REGS))
	    continue;
	  if (hard_reg_set_equal_p (temp_hard_regset2, temp_hard_regset))
	    insert_p = false;
	  pressure_classes[curr++] = (enum reg_class) cl2;
	}
      /* If the current candidate is a subset of a so far added
	 pressure class, don't add it to the list of the pressure
	 classes.  */
      if (insert_p)
	pressure_classes[curr++] = (enum reg_class) cl;
      n = curr;
    }
#ifdef ENABLE_IRA_CHECKING
  {
    HARD_REG_SET ignore_hard_regs;

    /* Check pressure classes correctness: here we check that hard
       registers from all register pressure classes contain all hard
       registers available for the allocation.  */
    CLEAR_HARD_REG_SET (temp_hard_regset);
    CLEAR_HARD_REG_SET (temp_hard_regset2);
    COPY_HARD_REG_SET (ignore_hard_regs, no_unit_alloc_regs);
    for (cl = 0; cl < LIM_REG_CLASSES; cl++)
      {
	/* For some targets (like MIPS with MD_REGS), there are some
	   classes with hard registers available for allocation but
	   not able to hold a value of any mode.  */
	for (m = 0; m < NUM_MACHINE_MODES; m++)
	  if (contains_reg_of_mode[cl][m])
	    break;
	if (m >= NUM_MACHINE_MODES)
	  {
	    IOR_HARD_REG_SET (ignore_hard_regs, reg_class_contents[cl]);
	    continue;
	  }
	for (i = 0; i < n; i++)
	  if ((int) pressure_classes[i] == cl)
	    break;
	IOR_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl]);
	if (i < n)
	  IOR_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
      }
    for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
      /* Some targets (like SPARC with ICC reg) have allocatable regs
	 for which no reg class is defined.  */
      if (REGNO_REG_CLASS (i) == NO_REGS)
	SET_HARD_REG_BIT (ignore_hard_regs, i);
    AND_COMPL_HARD_REG_SET (temp_hard_regset, ignore_hard_regs);
    AND_COMPL_HARD_REG_SET (temp_hard_regset2, ignore_hard_regs);
    ira_assert (hard_reg_set_subset_p (temp_hard_regset2, temp_hard_regset));
  }
#endif
  ira_pressure_classes_num = 0;
  for (i = 0; i < n; i++)
    {
      cl = (int) pressure_classes[i];
      ira_reg_pressure_class_p[cl] = true;
      ira_pressure_classes[ira_pressure_classes_num++] = (enum reg_class) cl;
    }
  setup_stack_reg_pressure_class ();
}

/* Set up IRA_UNIFORM_CLASS_P.  A uniform class is a register class
   whose register move cost between any registers of the class is the
   same as for all its subclasses.  We use the data to speed up the
   2nd pass of calculations of allocno costs.  */
static void
setup_uniform_class_p (void)
{
  int i, cl, cl2, m;

  for (cl = 0; cl < N_REG_CLASSES; cl++)
    {
      ira_uniform_class_p[cl] = false;
      if (ira_class_hard_regs_num[cl] == 0)
	continue;
      /* We cannot use alloc_reg_class_subclasses here because move
	 cost hooks do not take into account that some registers are
	 unavailable for the subtarget.  E.g. for i686, INT_SSE_REGS
	 is an element of alloc_reg_class_subclasses for GENERAL_REGS
	 because SSE regs are unavailable.  */
      for (i = 0; (cl2 = reg_class_subclasses[cl][i]) != LIM_REG_CLASSES; i++)
	{
	  if (ira_class_hard_regs_num[cl2] == 0)
	    continue;
	  for (m = 0; m < NUM_MACHINE_MODES; m++)
	    if (contains_reg_of_mode[cl][m] && contains_reg_of_mode[cl2][m])
	      {
		ira_init_register_move_cost_if_necessary ((machine_mode) m);
		if (ira_register_move_cost[m][cl][cl]
		    != ira_register_move_cost[m][cl2][cl2])
		  break;
	      }
	  if (m < NUM_MACHINE_MODES)
	    break;
	}
      if (cl2 == LIM_REG_CLASSES)
	ira_uniform_class_p[cl] = true;
    }
}

/* Set up IRA_ALLOCNO_CLASSES, IRA_ALLOCNO_CLASSES_NUM,
   IRA_IMPORTANT_CLASSES, and IRA_IMPORTANT_CLASSES_NUM.

   A target may have many subtargets and not all target hard
   registers can be used for allocation, e.g. the x86 port in 32-bit
   mode cannot use hard registers introduced in x86-64 like r8-r15.
   Some classes might have the same allocatable hard registers,
   e.g. INDEX_REGS and GENERAL_REGS in the x86 port in 32-bit mode.
   To decrease the effort of different calculations, we introduce
   allocno classes which contain unique non-empty sets of allocatable
   hard-registers.

   Pseudo class cost calculation in ira-costs.c is very expensive.
   Therefore we are trying to decrease the number of classes involved
   in such calculation.  Register classes used in the cost calculation
   are called important classes.  They are allocno classes and other
   non-empty classes whose allocatable hard register sets are inside
   of an allocno class hard register set.  At first sight, it looks
   like they are just allocno classes.  That is not true.  In the
   example of the x86 port in 32-bit mode, allocno classes will
   contain GENERAL_REGS but not LEGACY_REGS (because the allocatable
   hard registers are the same for both classes).  The important
   classes will contain GENERAL_REGS and LEGACY_REGS.  It is done
   because a machine description insn constraint may refer to
   LEGACY_REGS and the code in ira-costs.c is mostly based on
   investigation of the insn constraints.  */
static void
setup_allocno_and_important_classes (void)
{
  int i, j, n, cl;
  bool set_p;
  HARD_REG_SET temp_hard_regset2;
  static enum reg_class classes[LIM_REG_CLASSES + 1];

  n = 0;
  /* Collect classes which contain unique sets of allocatable hard
     registers.  Prefer GENERAL_REGS to other classes containing the
     same set of hard registers.  */
  for (i = 0; i < LIM_REG_CLASSES; i++)
    {
      COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[i]);
      AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
      for (j = 0; j < n; j++)
	{
	  cl = classes[j];
	  COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl]);
	  AND_COMPL_HARD_REG_SET (temp_hard_regset2,
				  no_unit_alloc_regs);
	  if (hard_reg_set_equal_p (temp_hard_regset,
				    temp_hard_regset2))
	    break;
	}
      if (j >= n)
	classes[n++] = (enum reg_class) i;
      else if (i == GENERAL_REGS)
	/* Prefer general regs.  For the i386 example, it means that
	   we prefer GENERAL_REGS over INDEX_REGS or LEGACY_REGS
	   (all of them consist of the same available hard
	   registers).  */
	classes[j] = (enum reg_class) i;
    }
  classes[n] = LIM_REG_CLASSES;

  /* Set up classes which can be used for allocnos as classes
     containing non-empty unique sets of allocatable hard
     registers.  */
  ira_allocno_classes_num = 0;
  for (i = 0; (cl = classes[i]) != LIM_REG_CLASSES; i++)
    if (ira_class_hard_regs_num[cl] > 0)
      ira_allocno_classes[ira_allocno_classes_num++] = (enum reg_class) cl;
  ira_important_classes_num = 0;
  /* Add non-allocno classes containing a non-empty set of
     allocatable hard regs.  */
  for (cl = 0; cl < N_REG_CLASSES; cl++)
    if (ira_class_hard_regs_num[cl] > 0)
      {
	COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
	AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
	set_p = false;
	for (j = 0; j < ira_allocno_classes_num; j++)
	  {
	    COPY_HARD_REG_SET (temp_hard_regset2,
			       reg_class_contents[ira_allocno_classes[j]]);
	    AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
	    if ((enum reg_class) cl == ira_allocno_classes[j])
	      break;
	    else if (hard_reg_set_subset_p (temp_hard_regset,
					    temp_hard_regset2))
	      set_p = true;
	  }
	if (set_p && j >= ira_allocno_classes_num)
	  ira_important_classes[ira_important_classes_num++]
	    = (enum reg_class) cl;
      }
  /* Now add allocno classes to the important classes.  */
  for (j = 0; j < ira_allocno_classes_num; j++)
    ira_important_classes[ira_important_classes_num++]
      = ira_allocno_classes[j];
  for (cl = 0; cl < N_REG_CLASSES; cl++)
    {
      ira_reg_allocno_class_p[cl] = false;
      ira_reg_pressure_class_p[cl] = false;
    }
  for (j = 0; j < ira_allocno_classes_num; j++)
    ira_reg_allocno_class_p[ira_allocno_classes[j]] = true;
  setup_pressure_classes ();
  setup_uniform_class_p ();
}

/* Set up the translation in CLASS_TRANSLATE of all classes into a
   class given by array CLASSES of length CLASSES_NUM.  The function
   is used to translate any reg class to an allocno class or to a
   pressure class.  This translation is necessary for some
   calculations when we can use only allocno or pressure classes and
   such translation represents an approximate representation of all
   classes.

   The translation in the case when the allocatable hard register set
   of a given class is a subset of the allocatable hard register set
   of a class in CLASSES is pretty simple.  We use the smallest
   classes from CLASSES containing a given class.  If the allocatable
   hard register set of a given class is not a subset of any
   corresponding set of a class from CLASSES, we use the cheapest
   (from a load/store point of view) class from CLASSES whose set
   intersects with the given class set.  */
static void
setup_class_translate_array (enum reg_class *class_translate,
			     int classes_num, enum reg_class *classes)
{
  int cl, mode;
  enum reg_class aclass, best_class, *cl_ptr;
  int i, cost, min_cost, best_cost;

  for (cl = 0; cl < N_REG_CLASSES; cl++)
    class_translate[cl] = NO_REGS;

  for (i = 0; i < classes_num; i++)
    {
      aclass = classes[i];
      for (cl_ptr = &alloc_reg_class_subclasses[aclass][0];
	   (cl = *cl_ptr) != LIM_REG_CLASSES;
	   cl_ptr++)
	if (class_translate[cl] == NO_REGS)
	  class_translate[cl] = aclass;
      class_translate[aclass] = aclass;
    }
  /* For classes which are not fully covered by one of given classes
     (in other words covered by more than one given class), use the
     cheapest class.  */
  for (cl = 0; cl < N_REG_CLASSES; cl++)
    {
      if (cl == NO_REGS || class_translate[cl] != NO_REGS)
	continue;
      best_class = NO_REGS;
      best_cost = INT_MAX;
      for (i = 0; i < classes_num; i++)
	{
	  aclass = classes[i];
	  COPY_HARD_REG_SET (temp_hard_regset,
			     reg_class_contents[aclass]);
	  AND_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
	  AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
	  if (! hard_reg_set_empty_p (temp_hard_regset))
	    {
	      min_cost = INT_MAX;
	      for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
		{
		  cost = (ira_memory_move_cost[mode][aclass][0]
			  + ira_memory_move_cost[mode][aclass][1]);
		  if (min_cost > cost)
		    min_cost = cost;
		}
	      if (best_class == NO_REGS || best_cost > min_cost)
		{
		  best_class = aclass;
		  best_cost = min_cost;
		}
	    }
	}
      class_translate[cl] = best_class;
    }
}

/* Set up array IRA_ALLOCNO_CLASS_TRANSLATE and
   IRA_PRESSURE_CLASS_TRANSLATE.  */
static void
setup_class_translate (void)
{
  setup_class_translate_array (ira_allocno_class_translate,
			       ira_allocno_classes_num, ira_allocno_classes);
  setup_class_translate_array (ira_pressure_class_translate,
			       ira_pressure_classes_num, ira_pressure_classes);
}

/* Order numbers of allocno classes in original target allocno class
   array, -1 for non-allocno classes.  */
static int allocno_class_order[N_REG_CLASSES];

/* The function used to sort the important classes.  */
static int
comp_reg_classes_func (const void *v1p, const void *v2p)
{
  enum reg_class cl1 = *(const enum reg_class *) v1p;
  enum reg_class cl2 = *(const enum reg_class *) v2p;
  enum reg_class tcl1, tcl2;
  int diff;

  tcl1 = ira_allocno_class_translate[cl1];
  tcl2 = ira_allocno_class_translate[cl2];
  if (tcl1 != NO_REGS && tcl2 != NO_REGS
      && (diff = allocno_class_order[tcl1] - allocno_class_order[tcl2]) != 0)
    return diff;
  return (int) cl1 - (int) cl2;
}

/* For correct work of function setup_reg_class_relations we need to
   reorder important classes according to the order of their allocno
   classes.  It places important classes containing the same
   allocatable hard register set adjacent to each other and the
   allocno class with the allocatable hard register set right after
   the other important classes with the same set.

   In the example from the comments of function
   setup_allocno_and_important_classes, it places LEGACY_REGS and
   GENERAL_REGS close to each other and GENERAL_REGS is after
   LEGACY_REGS.  */
static void
reorder_important_classes (void)
{
  int i;

  for (i = 0; i < N_REG_CLASSES; i++)
    allocno_class_order[i] = -1;
  for (i = 0; i < ira_allocno_classes_num; i++)
    allocno_class_order[ira_allocno_classes[i]] = i;
  qsort (ira_important_classes, ira_important_classes_num,
	 sizeof (enum reg_class), comp_reg_classes_func);
  for (i = 0; i < ira_important_classes_num; i++)
    ira_important_class_nums[ira_important_classes[i]] = i;
}

/* Set up IRA_REG_CLASS_SUBUNION, IRA_REG_CLASS_SUPERUNION,
   IRA_REG_CLASS_SUPER_CLASSES, IRA_REG_CLASSES_INTERSECT, and
   IRA_REG_CLASSES_INTERSECT_P.  For the meaning of the relations,
   please see corresponding comments in ira-int.h.  */
static void
setup_reg_class_relations (void)
{
  int i, cl1, cl2, cl3;
  HARD_REG_SET intersection_set, union_set, temp_set2;
  bool important_class_p[N_REG_CLASSES];

  memset (important_class_p, 0, sizeof (important_class_p));
  for (i = 0; i < ira_important_classes_num; i++)
    important_class_p[ira_important_classes[i]] = true;
  for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
    {
      ira_reg_class_super_classes[cl1][0] = LIM_REG_CLASSES;
      for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
	{
	  ira_reg_classes_intersect_p[cl1][cl2] = false;
	  ira_reg_class_intersect[cl1][cl2] = NO_REGS;
	  ira_reg_class_subset[cl1][cl2] = NO_REGS;
	  COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl1]);
	  AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
	  COPY_HARD_REG_SET (temp_set2, reg_class_contents[cl2]);
	  AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
	  if (hard_reg_set_empty_p (temp_hard_regset)
	      && hard_reg_set_empty_p (temp_set2))
	    {
	      /* Both classes have no allocatable hard registers
		 -- take all class hard registers into account and use
		 reg_class_subunion and reg_class_superunion.  */
	      for (i = 0;; i++)
		{
		  cl3 = reg_class_subclasses[cl1][i];
		  if (cl3 == LIM_REG_CLASSES)
		    break;
		  if (reg_class_subset_p (ira_reg_class_intersect[cl1][cl2],
					  (enum reg_class) cl3))
		    ira_reg_class_intersect[cl1][cl2] = (enum reg_class) cl3;
		}
	      ira_reg_class_subunion[cl1][cl2] = reg_class_subunion[cl1][cl2];
	      ira_reg_class_superunion[cl1][cl2] = reg_class_superunion[cl1][cl2];
	      continue;
	    }
	  ira_reg_classes_intersect_p[cl1][cl2]
	    = hard_reg_set_intersect_p (temp_hard_regset, temp_set2);
	  if (important_class_p[cl1] && important_class_p[cl2]
	      && hard_reg_set_subset_p (temp_hard_regset, temp_set2))
	    {
	      /* CL1 and CL2 are important classes and CL1 allocatable
		 hard register set is inside of CL2 allocatable hard
		 registers -- make CL1 a superset of CL2.  */
	      enum reg_class *p;

	      p = &ira_reg_class_super_classes[cl1][0];
	      while (*p != LIM_REG_CLASSES)
		p++;
	      *p++ = (enum reg_class) cl2;
	      *p = LIM_REG_CLASSES;
	    }
	  ira_reg_class_subunion[cl1][cl2] = NO_REGS;
	  ira_reg_class_superunion[cl1][cl2] = NO_REGS;
	  COPY_HARD_REG_SET (intersection_set, reg_class_contents[cl1]);
	  AND_HARD_REG_SET (intersection_set, reg_class_contents[cl2]);
	  AND_COMPL_HARD_REG_SET (intersection_set, no_unit_alloc_regs);
	  COPY_HARD_REG_SET (union_set, reg_class_contents[cl1]);
	  IOR_HARD_REG_SET (union_set, reg_class_contents[cl2]);
	  AND_COMPL_HARD_REG_SET (union_set, no_unit_alloc_regs);
	  for (cl3 = 0; cl3 < N_REG_CLASSES; cl3++)
	    {
	      COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl3]);
	      AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
	      if (hard_reg_set_subset_p (temp_hard_regset, intersection_set))
		{
		  /* CL3 allocatable hard register set is inside of
		     intersection of allocatable hard register sets
		     of CL1 and CL2.  */
		  if (important_class_p[cl3])
		    {
		      COPY_HARD_REG_SET
			(temp_set2,
			 reg_class_contents
			 [(int) ira_reg_class_intersect[cl1][cl2]]);
		      AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
		      if (! hard_reg_set_subset_p (temp_hard_regset, temp_set2)
			  /* If the allocatable hard register sets are
			     the same, prefer GENERAL_REGS or the
			     smallest class for debugging
			     purposes.  */
			  || (hard_reg_set_equal_p (temp_hard_regset, temp_set2)
			      && (cl3 == GENERAL_REGS
				  || ((ira_reg_class_intersect[cl1][cl2]
				       != GENERAL_REGS)
				      && hard_reg_set_subset_p
					 (reg_class_contents[cl3],
					  reg_class_contents
					  [(int)
					   ira_reg_class_intersect[cl1][cl2]])))))
			ira_reg_class_intersect[cl1][cl2] = (enum reg_class) cl3;
		    }
		  COPY_HARD_REG_SET
		    (temp_set2,
		     reg_class_contents[(int) ira_reg_class_subset[cl1][cl2]]);
		  AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
		  if (! hard_reg_set_subset_p (temp_hard_regset, temp_set2)
		      /* Ignore unavailable hard registers and prefer
			 smallest class for debugging purposes.  */
		      || (hard_reg_set_equal_p (temp_hard_regset, temp_set2)
			  && hard_reg_set_subset_p
			     (reg_class_contents[cl3],
			      reg_class_contents
			      [(int) ira_reg_class_subset[cl1][cl2]])))
		    ira_reg_class_subset[cl1][cl2] = (enum reg_class) cl3;
		}
	      if (important_class_p[cl3]
		  && hard_reg_set_subset_p (temp_hard_regset, union_set))
		{
		  /* CL3 allocatable hard register set is inside of
		     union of allocatable hard register sets of CL1
		     and CL2.  */
		  COPY_HARD_REG_SET
		    (temp_set2,
		     reg_class_contents[(int) ira_reg_class_subunion[cl1][cl2]]);
		  AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
		  if (ira_reg_class_subunion[cl1][cl2] == NO_REGS
		      || (hard_reg_set_subset_p (temp_set2, temp_hard_regset)
			  && (! hard_reg_set_equal_p (temp_set2,
						      temp_hard_regset)
			      || cl3 == GENERAL_REGS
			      /* If the allocatable hard register sets are the
				 same, prefer GENERAL_REGS or the smallest
				 class for debugging purposes.  */
			      || (ira_reg_class_subunion[cl1][cl2] != GENERAL_REGS
				  && hard_reg_set_subset_p
				     (reg_class_contents[cl3],
				      reg_class_contents
				      [(int) ira_reg_class_subunion[cl1][cl2]])))))
		    ira_reg_class_subunion[cl1][cl2] = (enum reg_class) cl3;
		}
	      if (hard_reg_set_subset_p (union_set, temp_hard_regset))
		{
		  /* CL3 allocatable hard register set contains union
		     of allocatable hard register sets of CL1 and
		     CL2.  */
		  COPY_HARD_REG_SET
		    (temp_set2,
		     reg_class_contents[(int) ira_reg_class_superunion[cl1][cl2]]);
		  AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
		  if (ira_reg_class_superunion[cl1][cl2] == NO_REGS
		      || (hard_reg_set_subset_p (temp_hard_regset, temp_set2)
			  && (! hard_reg_set_equal_p (temp_set2,
						      temp_hard_regset)
			      || cl3 == GENERAL_REGS
			      /* If the allocatable hard register sets are the
				 same, prefer GENERAL_REGS or the smallest
				 class for debugging purposes.  */
			      || (ira_reg_class_superunion[cl1][cl2] != GENERAL_REGS
				  && hard_reg_set_subset_p
				     (reg_class_contents[cl3],
				      reg_class_contents
				      [(int) ira_reg_class_superunion[cl1][cl2]])))))
		    ira_reg_class_superunion[cl1][cl2] = (enum reg_class) cl3;
		}
	    }
	}
    }
}

/* Output all uniform and important classes into file F.  */
static void
print_unform_and_important_classes (FILE *f)
{
  static const char *const reg_class_names[] = REG_CLASS_NAMES;
  int i, cl;

  fprintf (f, "Uniform classes:\n");
  for (cl = 0; cl < N_REG_CLASSES; cl++)
    if (ira_uniform_class_p[cl])
      fprintf (f, " %s", reg_class_names[cl]);
  fprintf (f, "\nImportant classes:\n");
  for (i = 0; i < ira_important_classes_num; i++)
    fprintf (f, " %s", reg_class_names[ira_important_classes[i]]);
  fprintf (f, "\n");
}

/* Output all possible allocno or pressure classes and their
   translation map into file F.  */
static void
print_translated_classes (FILE *f, bool pressure_p)
{
  int classes_num = (pressure_p
		     ? ira_pressure_classes_num : ira_allocno_classes_num);
  enum reg_class *classes = (pressure_p
			     ? ira_pressure_classes : ira_allocno_classes);
  enum reg_class *class_translate = (pressure_p
				     ? ira_pressure_class_translate
				     : ira_allocno_class_translate);
  static const char *const reg_class_names[] = REG_CLASS_NAMES;
  int i;

  fprintf (f, "%s classes:\n", pressure_p ? "Pressure" : "Allocno");
  for (i = 0; i < classes_num; i++)
    fprintf (f, " %s", reg_class_names[classes[i]]);
  fprintf (f, "\nClass translation:\n");
  for (i = 0; i < N_REG_CLASSES; i++)
    fprintf (f, " %s -> %s\n", reg_class_names[i],
	     reg_class_names[class_translate[i]]);
}

/* Output all possible allocno and translation classes and the
   translation maps into stderr.  */
void
ira_debug_allocno_classes (void)
{
  print_unform_and_important_classes (stderr);
  print_translated_classes (stderr, false);
  print_translated_classes (stderr, true);
}

/* Set up different arrays concerning class subsets, allocno and
   important classes.  */
static void
find_reg_classes (void)
{
  setup_allocno_and_important_classes ();
  setup_class_translate ();
  reorder_important_classes ();
  setup_reg_class_relations ();
}

/* Set up the array IRA_HARD_REGNO_ALLOCNO_CLASS.  */
static void
setup_hard_regno_aclass (void)
{
  int i;

  for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
    {
#if 1
      ira_hard_regno_allocno_class[i]
	= (TEST_HARD_REG_BIT (no_unit_alloc_regs, i)
	   ? NO_REGS
	   : ira_allocno_class_translate[REGNO_REG_CLASS (i)]);
#else
      int j;
      enum reg_class cl;

      ira_hard_regno_allocno_class[i] = NO_REGS;
      for (j = 0; j < ira_allocno_classes_num; j++)
	{
	  cl = ira_allocno_classes[j];
	  if (ira_class_hard_reg_index[cl][i] >= 0)
	    {
	      ira_hard_regno_allocno_class[i] = cl;
	      break;
	    }
	}
#endif
    }
}

/* Form IRA_REG_CLASS_MAX_NREGS and IRA_REG_CLASS_MIN_NREGS maps.  */
static void
setup_reg_class_nregs (void)
{
  int i, cl, cl2, m;

  for (m = 0; m < MAX_MACHINE_MODE; m++)
    {
      for (cl = 0; cl < N_REG_CLASSES; cl++)
	ira_reg_class_max_nregs[cl][m]
	  = ira_reg_class_min_nregs[cl][m]
	  = targetm.class_max_nregs ((reg_class_t) cl, (machine_mode) m);
      for (cl = 0; cl < N_REG_CLASSES; cl++)
	for (i = 0;
	     (cl2 = alloc_reg_class_subclasses[cl][i]) != LIM_REG_CLASSES;
	     i++)
	  if (ira_reg_class_min_nregs[cl2][m]
	      < ira_reg_class_min_nregs[cl][m])
	    ira_reg_class_min_nregs[cl][m] = ira_reg_class_min_nregs[cl2][m];
    }
}
1505 /* Set up IRA_PROHIBITED_CLASS_MODE_REGS and IRA_CLASS_SINGLETON.
1506 This function is called once IRA_CLASS_HARD_REGS has been initialized. */
1507 static void
1508 setup_prohibited_class_mode_regs (void)
1510 int j, k, hard_regno, cl, last_hard_regno, count;
1512 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
1514 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
1515 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
1516 for (j = 0; j < NUM_MACHINE_MODES; j++)
1518 count = 0;
1519 last_hard_regno = -1;
1520 CLEAR_HARD_REG_SET (ira_prohibited_class_mode_regs[cl][j]);
1521 for (k = ira_class_hard_regs_num[cl] - 1; k >= 0; k--)
1523 hard_regno = ira_class_hard_regs[cl][k];
1524 if (! HARD_REGNO_MODE_OK (hard_regno, (machine_mode) j))
1525 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1526 hard_regno);
1527 else if (in_hard_reg_set_p (temp_hard_regset,
1528 (machine_mode) j, hard_regno))
1530 last_hard_regno = hard_regno;
1531 count++;
1534 ira_class_singleton[cl][j] = (count == 1 ? last_hard_regno : -1);
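        /* So ira_class_singleton[CL][M] caches the one allocatable hard
           register of CL that accepts mode M, or -1 when there are none
           or several -- e.g. a target whose sole condition-code register
           is the only register accepting its CC mode (illustrative).  */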
1539 /* Clarify IRA_PROHIBITED_CLASS_MODE_REGS by excluding hard registers
1540 spanning from one register pressure class to another one. It is
1541 called after defining the pressure classes. */
1542 static void
1543 clarify_prohibited_class_mode_regs (void)
1545 int j, k, hard_regno, cl, pclass, nregs;
1547 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
1548 for (j = 0; j < NUM_MACHINE_MODES; j++)
1550 CLEAR_HARD_REG_SET (ira_useful_class_mode_regs[cl][j]);
1551 for (k = ira_class_hard_regs_num[cl] - 1; k >= 0; k--)
1553 hard_regno = ira_class_hard_regs[cl][k];
1554 if (TEST_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j], hard_regno))
1555 continue;
1556 nregs = hard_regno_nregs[hard_regno][j];
1557 if (hard_regno + nregs > FIRST_PSEUDO_REGISTER)
1559 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1560 hard_regno);
1561 continue;
1563 pclass = ira_pressure_class_translate[REGNO_REG_CLASS (hard_regno)];
1564 for (nregs-- ;nregs >= 0; nregs--)
1565 if (((enum reg_class) pclass
1566 != ira_pressure_class_translate[REGNO_REG_CLASS
1567 (hard_regno + nregs)]))
1569 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1570 hard_regno);
1571 break;
1573 if (!TEST_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1574 hard_regno))
1575 add_to_hard_reg_set (&ira_useful_class_mode_regs[cl][j],
1576 (machine_mode) j, hard_regno);
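/* Net effect: ira_useful_class_mode_regs[CL][M] is left holding the hard
   registers that are valid for mode M in class CL and whose M-mode
   words all stay within a single pressure class, so multi-word values
   cannot make the per-class register pressure accounting inconsistent.  */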
1581 /* Allocate and initialize IRA_REGISTER_MOVE_COST, IRA_MAY_MOVE_IN_COST
1582 and IRA_MAY_MOVE_OUT_COST for MODE. */
1583 void
1584 ira_init_register_move_cost (machine_mode mode)
1586 static unsigned short last_move_cost[N_REG_CLASSES][N_REG_CLASSES];
1587 bool all_match = true;
1588 unsigned int cl1, cl2;
1590 ira_assert (ira_register_move_cost[mode] == NULL
1591 && ira_may_move_in_cost[mode] == NULL
1592 && ira_may_move_out_cost[mode] == NULL);
1593 ira_assert (have_regs_of_mode[mode]);
1594 for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
1595 for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
1597 int cost;
1598 if (!contains_reg_of_mode[cl1][mode]
1599 || !contains_reg_of_mode[cl2][mode])
1601 if ((ira_reg_class_max_nregs[cl1][mode]
1602 > ira_class_hard_regs_num[cl1])
1603 || (ira_reg_class_max_nregs[cl2][mode]
1604 > ira_class_hard_regs_num[cl2]))
1605 cost = 65535;
1606 else
1607 cost = (ira_memory_move_cost[mode][cl1][0]
1608 + ira_memory_move_cost[mode][cl2][1]) * 2;
1610 else
1612 cost = register_move_cost (mode, (enum reg_class) cl1,
1613 (enum reg_class) cl2);
1614 ira_assert (cost < 65535);
1616 all_match &= (last_move_cost[cl1][cl2] == cost);
1617 last_move_cost[cl1][cl2] = cost;
1619 if (all_match && last_mode_for_init_move_cost != -1)
1621 ira_register_move_cost[mode]
1622 = ira_register_move_cost[last_mode_for_init_move_cost];
1623 ira_may_move_in_cost[mode]
1624 = ira_may_move_in_cost[last_mode_for_init_move_cost];
1625 ira_may_move_out_cost[mode]
1626 = ira_may_move_out_cost[last_mode_for_init_move_cost];
1627 return;
1629 last_mode_for_init_move_cost = mode;
1630 ira_register_move_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
1631 ira_may_move_in_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
1632 ira_may_move_out_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
1633 for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
1634 for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
1636 int cost;
1637 enum reg_class *p1, *p2;
1639 if (last_move_cost[cl1][cl2] == 65535)
1641 ira_register_move_cost[mode][cl1][cl2] = 65535;
1642 ira_may_move_in_cost[mode][cl1][cl2] = 65535;
1643 ira_may_move_out_cost[mode][cl1][cl2] = 65535;
1645 else
1647 cost = last_move_cost[cl1][cl2];
1649 for (p2 = &reg_class_subclasses[cl2][0];
1650 *p2 != LIM_REG_CLASSES; p2++)
1651 if (ira_class_hard_regs_num[*p2] > 0
1652 && (ira_reg_class_max_nregs[*p2][mode]
1653 <= ira_class_hard_regs_num[*p2]))
1654 cost = MAX (cost, ira_register_move_cost[mode][cl1][*p2]);
1656 for (p1 = &reg_class_subclasses[cl1][0];
1657 *p1 != LIM_REG_CLASSES; p1++)
1658 if (ira_class_hard_regs_num[*p1] > 0
1659 && (ira_reg_class_max_nregs[*p1][mode]
1660 <= ira_class_hard_regs_num[*p1]))
1661 cost = MAX (cost, ira_register_move_cost[mode][*p1][cl2]);
1663 ira_assert (cost <= 65535);
1664 ira_register_move_cost[mode][cl1][cl2] = cost;
1666 if (ira_class_subset_p[cl1][cl2])
1667 ira_may_move_in_cost[mode][cl1][cl2] = 0;
1668 else
1669 ira_may_move_in_cost[mode][cl1][cl2] = cost;
1671 if (ira_class_subset_p[cl2][cl1])
1672 ira_may_move_out_cost[mode][cl1][cl2] = 0;
1673 else
1674 ira_may_move_out_cost[mode][cl1][cl2] = cost;
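/* A note on the memoization above: LAST_MOVE_COST persists across
   calls, so when a mode produces exactly the raw cost matrix of the
   previously initialized mode, no new tables are allocated at all --
   the mode simply shares the earlier mode's three tables (the
   ALL_MATCH early return).  */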
1681 /* This is called once per compiler run. It sets up
1682 various arrays whose values don't depend on the function
1683 being compiled. */
1684 void
1685 ira_init_once (void)
1687 ira_init_costs_once ();
1688 lra_init_once ();
1691 /* Free ira_register_move_cost, ira_may_move_in_cost and
1692 ira_may_move_out_cost for each mode. */
1693 void
1694 target_ira_int::free_register_move_costs (void)
1696 int mode, i;
1698 /* Reset move_cost and friends, making sure we only free shared
1699 table entries once. */
1700 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
1701 if (x_ira_register_move_cost[mode])
1703 for (i = 0;
1704 i < mode && (x_ira_register_move_cost[i]
1705 != x_ira_register_move_cost[mode]);
1706 i++)
1708 if (i == mode)
1710 free (x_ira_register_move_cost[mode]);
1711 free (x_ira_may_move_in_cost[mode]);
1712 free (x_ira_may_move_out_cost[mode]);
1715 memset (x_ira_register_move_cost, 0, sizeof x_ira_register_move_cost);
1716 memset (x_ira_may_move_in_cost, 0, sizeof x_ira_may_move_in_cost);
1717 memset (x_ira_may_move_out_cost, 0, sizeof x_ira_may_move_out_cost);
1718 last_mode_for_init_move_cost = -1;
1721 target_ira_int::~target_ira_int ()
1723 free_ira_costs ();
1724 free_register_move_costs ();
1727 /* This is called every time the register-related information is
1728 changed. */
1729 void
1730 ira_init (void)
1732 this_target_ira_int->free_register_move_costs ();
1733 setup_reg_mode_hard_regset ();
1734 setup_alloc_regs (flag_omit_frame_pointer != 0);
1735 setup_class_subset_and_memory_move_costs ();
1736 setup_reg_class_nregs ();
1737 setup_prohibited_class_mode_regs ();
1738 find_reg_classes ();
1739 clarify_prohibited_class_mode_regs ();
1740 setup_hard_regno_aclass ();
1741 ira_init_costs ();
1745 #define ira_prohibited_mode_move_regs_initialized_p \
1746 (this_target_ira_int->x_ira_prohibited_mode_move_regs_initialized_p)
1748 /* Set up IRA_PROHIBITED_MODE_MOVE_REGS. */
1749 static void
1750 setup_prohibited_mode_move_regs (void)
1752 int i, j;
1753 rtx test_reg1, test_reg2, move_pat;
1754 rtx_insn *move_insn;
1756 if (ira_prohibited_mode_move_regs_initialized_p)
1757 return;
1758 ira_prohibited_mode_move_regs_initialized_p = true;
1759 test_reg1 = gen_rtx_REG (word_mode, LAST_VIRTUAL_REGISTER + 1);
1760 test_reg2 = gen_rtx_REG (word_mode, LAST_VIRTUAL_REGISTER + 2);
1761 move_pat = gen_rtx_SET (test_reg1, test_reg2);
1762 move_insn = gen_rtx_INSN (VOIDmode, 0, 0, 0, move_pat, 0, -1, 0);
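  /* The two scratch registers and the wrapping insn built above are
     probes only: for each (mode, hard regno) pair below they are
     retargeted with set_mode_and_regno and fed to recog_memoized and
     constrain_operands; a pair that no enabled move alternative
     accepts is left prohibited.  */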
1763 for (i = 0; i < NUM_MACHINE_MODES; i++)
1765 SET_HARD_REG_SET (ira_prohibited_mode_move_regs[i]);
1766 for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
1768 if (! HARD_REGNO_MODE_OK (j, (machine_mode) i))
1769 continue;
1770 set_mode_and_regno (test_reg1, (machine_mode) i, j);
1771 set_mode_and_regno (test_reg2, (machine_mode) i, j);
1772 INSN_CODE (move_insn) = -1;
1773 recog_memoized (move_insn);
1774 if (INSN_CODE (move_insn) < 0)
1775 continue;
1776 extract_insn (move_insn);
1777 /* We don't know whether the move will be in code that is optimized
1778 for size or speed, so consider all enabled alternatives. */
1779 if (! constrain_operands (1, get_enabled_alternatives (move_insn)))
1780 continue;
1781 CLEAR_HARD_REG_BIT (ira_prohibited_mode_move_regs[i], j);
1788 /* Setup possible alternatives in ALTS for INSN. */
1789 void
1790 ira_setup_alts (rtx_insn *insn, HARD_REG_SET &alts)
1792 /* Map operand number x alternative number -> start of the constraint
1793 string for that operand and alternative. */
1794 static vec<const char *> insn_constraints;
1795 int nop, nalt;
1796 bool curr_swapped;
1797 const char *p;
1798 int commutative = -1;
1800 extract_insn (insn);
1801 alternative_mask preferred = get_preferred_alternatives (insn);
1802 CLEAR_HARD_REG_SET (alts);
1803 insn_constraints.release ();
1804 insn_constraints.safe_grow_cleared (recog_data.n_operands
1805 * recog_data.n_alternatives + 1);
1806 /* Check that the hard reg set is enough for holding all
1807 alternatives. It is hard to imagine the situation when the
1808 assertion is wrong. */
1809 ira_assert (recog_data.n_alternatives
1810 <= (int) MAX (sizeof (HARD_REG_ELT_TYPE) * CHAR_BIT,
1811 FIRST_PSEUDO_REGISTER));
1812 for (curr_swapped = false;; curr_swapped = true)
1814 /* Calculate some data common for all alternatives to speed up the
1815 function. */
1816 for (nop = 0; nop < recog_data.n_operands; nop++)
1818 for (nalt = 0, p = recog_data.constraints[nop];
1819 nalt < recog_data.n_alternatives;
1820 nalt++)
1822 insn_constraints[nop * recog_data.n_alternatives + nalt] = p;
1823 while (*p && *p != ',')
1824 p++;
1825 if (*p)
1826 p++;
1829 for (nalt = 0; nalt < recog_data.n_alternatives; nalt++)
1831 if (!TEST_BIT (preferred, nalt)
1832 || TEST_HARD_REG_BIT (alts, nalt))
1833 continue;
1835 for (nop = 0; nop < recog_data.n_operands; nop++)
1837 int c, len;
1839 rtx op = recog_data.operand[nop];
1840 p = insn_constraints[nop * recog_data.n_alternatives + nalt];
1841 if (*p == 0 || *p == ',')
1842 continue;
1845 switch (c = *p, len = CONSTRAINT_LEN (c, p), c)
1847 case '#':
1848 case ',':
1849 c = '\0';
1850 case '\0':
1851 len = 0;
1852 break;
1854 case '%':
1855 /* We only support one commutative marker, the
1856 first one; any later '%' markers are
1857 ignored. */
1858 if (commutative < 0)
1859 commutative = nop;
1860 break;
1862 case '0': case '1': case '2': case '3': case '4':
1863 case '5': case '6': case '7': case '8': case '9':
1864 goto op_success;
1865 break;
1867 case 'g':
1868 goto op_success;
1869 break;
1871 default:
1873 enum constraint_num cn = lookup_constraint (p);
1874 switch (get_constraint_type (cn))
1876 case CT_REGISTER:
1877 if (reg_class_for_constraint (cn) != NO_REGS)
1878 goto op_success;
1879 break;
1881 case CT_CONST_INT:
1882 if (CONST_INT_P (op)
1883 && (insn_const_int_ok_for_constraint
1884 (INTVAL (op), cn)))
1885 goto op_success;
1886 break;
1888 case CT_ADDRESS:
1889 case CT_MEMORY:
1890 goto op_success;
1892 case CT_FIXED_FORM:
1893 if (constraint_satisfied_p (op, cn))
1894 goto op_success;
1895 break;
1897 break;
1900 while (p += len, c);
1901 break;
1902 op_success:
1905 if (nop >= recog_data.n_operands)
1906 SET_HARD_REG_BIT (alts, nalt);
1908 if (commutative < 0)
1909 break;
1910 if (curr_swapped)
1911 break;
1912 std::swap (recog_data.operand[commutative],
1913 recog_data.operand[commutative + 1]);
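/* On return, ALTS is a bitmask over alternatives: bit N is set iff
   every operand can satisfy its constraint string in alternative N,
   tried once with the operands as written and once with the (single
   supported) commutative operand pair swapped.  */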
1917 /* Return the number of the output operand (without an early clobber)
1918 that must always match the input operand with number OP_NUM, or a
1919 negative value if there is no such operand. Only the really possible
1920 alternatives (those recorded in ALTS) are taken into consideration. */
1921 int
1922 ira_get_dup_out_num (int op_num, HARD_REG_SET &alts)
1924 int curr_alt, c, original, dup;
1925 bool ignore_p, use_commut_op_p;
1926 const char *str;
1928 if (op_num < 0 || recog_data.n_alternatives == 0)
1929 return -1;
1930 /* We should find duplications only for input operands. */
1931 if (recog_data.operand_type[op_num] != OP_IN)
1932 return -1;
1933 str = recog_data.constraints[op_num];
1934 use_commut_op_p = false;
1935 for (;;)
1937 rtx op = recog_data.operand[op_num];
1939 for (curr_alt = 0, ignore_p = !TEST_HARD_REG_BIT (alts, curr_alt),
1940 original = -1;;)
1942 c = *str;
1943 if (c == '\0')
1944 break;
1945 if (c == '#')
1946 ignore_p = true;
1947 else if (c == ',')
1949 curr_alt++;
1950 ignore_p = !TEST_HARD_REG_BIT (alts, curr_alt);
1952 else if (! ignore_p)
1953 switch (c)
1955 case 'g':
1956 goto fail;
1957 default:
1959 enum constraint_num cn = lookup_constraint (str);
1960 enum reg_class cl = reg_class_for_constraint (cn);
1961 if (cl != NO_REGS
1962 && !targetm.class_likely_spilled_p (cl))
1963 goto fail;
1964 if (constraint_satisfied_p (op, cn))
1965 goto fail;
1966 break;
1969 case '0': case '1': case '2': case '3': case '4':
1970 case '5': case '6': case '7': case '8': case '9':
1971 if (original != -1 && original != c)
1972 goto fail;
1973 original = c;
1974 break;
1976 str += CONSTRAINT_LEN (c, str);
1978 if (original == -1)
1979 goto fail;
1980 dup = -1;
1981 for (ignore_p = false, str = recog_data.constraints[original - '0'];
1982 *str != 0;
1983 str++)
1984 if (ignore_p)
1986 if (*str == ',')
1987 ignore_p = false;
1989 else if (*str == '#')
1990 ignore_p = true;
1991 else if (! ignore_p)
1993 if (*str == '=')
1994 dup = original - '0';
1995 /* It is better to ignore an alternative with an early clobber. */
1996 else if (*str == '&')
1997 goto fail;
1999 if (dup >= 0)
2000 return dup;
2001 fail:
2002 if (use_commut_op_p)
2003 break;
2004 use_commut_op_p = true;
2005 if (recog_data.constraints[op_num][0] == '%')
2006 str = recog_data.constraints[op_num + 1];
2007 else if (op_num > 0 && recog_data.constraints[op_num - 1][0] == '%')
2008 str = recog_data.constraints[op_num - 1];
2009 else
2010 break;
2012 return -1;
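/* Example of what the walk above recognizes, using generic constraint
   syntax (a hypothetical insn, and assuming alternative 0 is marked
   possible in ALTS): if operand 0 has constraint "=r" and operand 2 has
   constraint "0", then ira_get_dup_out_num (2, alts) finds ORIGINAL ==
   '0', sees the '=' (and no '&') on operand 0, and returns 0 -- input
   operand 2 must end up in the same register as output operand 0.  */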
2017 /* Search forward to see if the source register of a copy insn dies
2018 before either it or the destination register is modified, but don't
2019 scan past the end of the basic block. If so, we can replace the
2020 source with the destination and let the source die in the copy
2021 insn.
2023 This will reduce the number of registers live in that range and may
2024 enable coalescing of the destination and the source, thus often saving
2025 one register in addition to a register-register copy. */
2027 static void
2028 decrease_live_ranges_number (void)
2030 basic_block bb;
2031 rtx_insn *insn;
2032 rtx set, src, dest, dest_death, note;
2033 rtx_insn *p, *q;
2034 int sregno, dregno;
2036 if (! flag_expensive_optimizations)
2037 return;
2039 if (ira_dump_file)
2040 fprintf (ira_dump_file, "Starting decreasing number of live ranges...\n");
2042 FOR_EACH_BB_FN (bb, cfun)
2043 FOR_BB_INSNS (bb, insn)
2045 set = single_set (insn);
2046 if (! set)
2047 continue;
2048 src = SET_SRC (set);
2049 dest = SET_DEST (set);
2050 if (! REG_P (src) || ! REG_P (dest)
2051 || find_reg_note (insn, REG_DEAD, src))
2052 continue;
2053 sregno = REGNO (src);
2054 dregno = REGNO (dest);
2056 /* We don't want to mess with hard regs if register classes
2057 are small. */
2058 if (sregno == dregno
2059 || (targetm.small_register_classes_for_mode_p (GET_MODE (src))
2060 && (sregno < FIRST_PSEUDO_REGISTER
2061 || dregno < FIRST_PSEUDO_REGISTER))
2062 /* We don't see all updates to SP if they are in an
2063 auto-inc memory reference, so we must disallow this
2064 optimization on them. */
2065 || sregno == STACK_POINTER_REGNUM
2066 || dregno == STACK_POINTER_REGNUM)
2067 continue;
2069 dest_death = NULL_RTX;
2071 for (p = NEXT_INSN (insn); p; p = NEXT_INSN (p))
2073 if (! INSN_P (p))
2074 continue;
2075 if (BLOCK_FOR_INSN (p) != bb)
2076 break;
2078 if (reg_set_p (src, p) || reg_set_p (dest, p)
2079 /* If SRC is an asm-declared register, it must not be
2080 replaced in any asm. Unfortunately, the REG_EXPR
2081 tree for the asm variable may be absent in the SRC
2082 rtx, so we can't check the actual register
2083 declaration easily (the asm operand will have it,
2084 though). To avoid complicating the test for a rare
2085 case, we just don't perform register replacement
2086 for a hard reg mentioned in an asm. */
2087 || (sregno < FIRST_PSEUDO_REGISTER
2088 && asm_noperands (PATTERN (p)) >= 0
2089 && reg_overlap_mentioned_p (src, PATTERN (p)))
2090 /* Don't change hard registers used by a call. */
2091 || (CALL_P (p) && sregno < FIRST_PSEUDO_REGISTER
2092 && find_reg_fusage (p, USE, src))
2093 /* Don't change a USE of a register. */
2094 || (GET_CODE (PATTERN (p)) == USE
2095 && reg_overlap_mentioned_p (src, XEXP (PATTERN (p), 0))))
2096 break;
2098 /* See if all of SRC dies in P. This test is slightly
2099 more conservative than it needs to be. */
2100 if ((note = find_regno_note (p, REG_DEAD, sregno))
2101 && GET_MODE (XEXP (note, 0)) == GET_MODE (src))
2103 int failed = 0;
2105 /* We can do the optimization. Scan forward from INSN
2106 again, replacing regs as we go. Set FAILED if a
2107 replacement can't be done. In that case, we can't
2108 move the death note for SRC. This should be
2109 rare. */
2111 /* Set to stop at next insn. */
2112 for (q = next_real_insn (insn);
2113 q != next_real_insn (p);
2114 q = next_real_insn (q))
2116 if (reg_overlap_mentioned_p (src, PATTERN (q)))
2118 /* If SRC is a hard register, we might miss
2119 some overlapping registers with
2120 validate_replace_rtx, so we would have to
2121 undo it. We can't if DEST is present in
2122 the insn, so fail in that combination of
2123 cases. */
2124 if (sregno < FIRST_PSEUDO_REGISTER
2125 && reg_mentioned_p (dest, PATTERN (q)))
2126 failed = 1;
2128 /* Attempt to replace all uses. */
2129 else if (!validate_replace_rtx (src, dest, q))
2130 failed = 1;
2132 /* If this succeeded, but some part of the
2133 register is still present, undo the
2134 replacement. */
2135 else if (sregno < FIRST_PSEUDO_REGISTER
2136 && reg_overlap_mentioned_p (src, PATTERN (q)))
2138 validate_replace_rtx (dest, src, q);
2139 failed = 1;
2143 /* If DEST dies here, remove the death note and
2144 save it for later. Make sure ALL of DEST dies
2145 here; again, this is overly conservative. */
2146 if (! dest_death
2147 && (dest_death = find_regno_note (q, REG_DEAD, dregno)))
2149 if (GET_MODE (XEXP (dest_death, 0)) == GET_MODE (dest))
2150 remove_note (q, dest_death);
2151 else
2153 failed = 1;
2154 dest_death = 0;
2159 if (! failed)
2161 /* Move death note of SRC from P to INSN. */
2162 remove_note (p, note);
2163 XEXP (note, 1) = REG_NOTES (insn);
2164 REG_NOTES (insn) = note;
2167 /* DEST is also dead if INSN has a REG_UNUSED note for
2168 DEST. */
2169 if (! dest_death
2170 && (dest_death
2171 = find_regno_note (insn, REG_UNUSED, dregno)))
2173 PUT_REG_NOTE_KIND (dest_death, REG_DEAD);
2174 remove_note (insn, dest_death);
2177 /* Put death note of DEST on P if we saw it die. */
2178 if (dest_death)
2180 XEXP (dest_death, 1) = REG_NOTES (p);
2181 REG_NOTES (p) = dest_death;
2183 break;
2186 /* If SRC is a hard register which is set or killed in
2187 some other way, we can't do this optimization. */
2188 else if (sregno < FIRST_PSEUDO_REGISTER && dead_or_set_p (p, src))
2189 break;
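/* A sketch of the transformation performed above (pseudo-RTL with
   hypothetical pseudos r1 and r2):

     r2 = r1      ; the copy INSN
     ... = r1     ; later uses; r1 carries a REG_DEAD note here

   becomes

     r2 = r1      ; r1 now dies in the copy itself
     ... = r2

   which shrinks r1's live range and lets the allocator coalesce r1
   and r2.  */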
2196 /* Return nonzero if REGNO is a particularly bad choice for reloading X. */
2197 static bool
2198 ira_bad_reload_regno_1 (int regno, rtx x)
2200 int x_regno, n, i;
2201 ira_allocno_t a;
2202 enum reg_class pref;
2204 /* We only deal with pseudo regs. */
2205 if (! x || GET_CODE (x) != REG)
2206 return false;
2208 x_regno = REGNO (x);
2209 if (x_regno < FIRST_PSEUDO_REGISTER)
2210 return false;
2212 /* If the pseudo prefers REGNO explicitly, then do not consider
2213 REGNO a bad spill choice. */
2214 pref = reg_preferred_class (x_regno);
2215 if (reg_class_size[pref] == 1)
2216 return !TEST_HARD_REG_BIT (reg_class_contents[pref], regno);
2218 /* If the pseudo conflicts with REGNO, then we consider REGNO a
2219 poor choice for a reload regno. */
2220 a = ira_regno_allocno_map[x_regno];
2221 n = ALLOCNO_NUM_OBJECTS (a);
2222 for (i = 0; i < n; i++)
2224 ira_object_t obj = ALLOCNO_OBJECT (a, i);
2225 if (TEST_HARD_REG_BIT (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), regno))
2226 return true;
2228 return false;
2231 /* Return nonzero if REGNO is a particularly bad choice for reloading
2232 IN or OUT. */
2233 bool
2234 ira_bad_reload_regno (int regno, rtx in, rtx out)
2236 return (ira_bad_reload_regno_1 (regno, in)
2237 || ira_bad_reload_regno_1 (regno, out));
2240 /* Add register clobbers from asm statements. */
2241 static void
2242 compute_regs_asm_clobbered (void)
2244 basic_block bb;
2246 FOR_EACH_BB_FN (bb, cfun)
2248 rtx_insn *insn;
2249 FOR_BB_INSNS_REVERSE (bb, insn)
2251 df_ref def;
2253 if (NONDEBUG_INSN_P (insn) && extract_asm_operands (PATTERN (insn)))
2254 FOR_EACH_INSN_DEF (def, insn)
2256 unsigned int dregno = DF_REF_REGNO (def);
2257 if (HARD_REGISTER_NUM_P (dregno))
2258 add_to_hard_reg_set (&crtl->asm_clobbers,
2259 GET_MODE (DF_REF_REAL_REG (def)),
2260 dregno);
2267 /* Set up ELIMINABLE_REGSET, IRA_NO_ALLOC_REGS, and
2268 REGS_EVER_LIVE. */
2269 void
2270 ira_setup_eliminable_regset (void)
2272 #ifdef ELIMINABLE_REGS
2273 int i;
2274 static const struct {const int from, to; } eliminables[] = ELIMINABLE_REGS;
2275 #endif
2276 /* FIXME: If EXIT_IGNORE_STACK is set, we will not save and restore
2277 sp for alloca. So we can't eliminate the frame pointer in that
2278 case. At some point, we should improve this by emitting the
2279 sp-adjusting insns for this case. */
2280 frame_pointer_needed
2281 = (! flag_omit_frame_pointer
2282 || (cfun->calls_alloca && EXIT_IGNORE_STACK)
2283 /* We need the frame pointer to catch stack overflow exceptions
2284 if the stack pointer is moving. */
2285 || (flag_stack_check && STACK_CHECK_MOVING_SP)
2286 || crtl->accesses_prior_frames
2287 || (SUPPORTS_STACK_ALIGNMENT && crtl->stack_realign_needed)
2288 /* We need a frame pointer for all Cilk Plus functions that use
2289 Cilk keywords. */
2290 || (flag_cilkplus && cfun->is_cilk_function)
2291 || targetm.frame_pointer_required ());
2293 /* The chance that FRAME_POINTER_NEEDED changes as a result of
2294 inspecting the RTL is very small. So if we decide to use the frame
2295 pointer for RA but the RTL actually prevents this, the pseudos
2296 assigned to the frame pointer will be spilled in LRA. */
2298 if (frame_pointer_needed)
2299 df_set_regs_ever_live (HARD_FRAME_POINTER_REGNUM, true);
2301 COPY_HARD_REG_SET (ira_no_alloc_regs, no_unit_alloc_regs);
2302 CLEAR_HARD_REG_SET (eliminable_regset);
2304 compute_regs_asm_clobbered ();
2306 /* Build the regset of all eliminable registers and show we can't
2307 use those that we already know won't be eliminated. */
2308 #ifdef ELIMINABLE_REGS
2309 for (i = 0; i < (int) ARRAY_SIZE (eliminables); i++)
2311 bool cannot_elim
2312 = (! targetm.can_eliminate (eliminables[i].from, eliminables[i].to)
2313 || (eliminables[i].to == STACK_POINTER_REGNUM && frame_pointer_needed));
2315 if (!TEST_HARD_REG_BIT (crtl->asm_clobbers, eliminables[i].from))
2317 SET_HARD_REG_BIT (eliminable_regset, eliminables[i].from);
2319 if (cannot_elim)
2320 SET_HARD_REG_BIT (ira_no_alloc_regs, eliminables[i].from);
2322 else if (cannot_elim)
2323 error ("%s cannot be used in asm here",
2324 reg_names[eliminables[i].from]);
2325 else
2326 df_set_regs_ever_live (eliminables[i].from, true);
2328 if (!HARD_FRAME_POINTER_IS_FRAME_POINTER)
2330 if (!TEST_HARD_REG_BIT (crtl->asm_clobbers, HARD_FRAME_POINTER_REGNUM))
2332 SET_HARD_REG_BIT (eliminable_regset, HARD_FRAME_POINTER_REGNUM);
2333 if (frame_pointer_needed)
2334 SET_HARD_REG_BIT (ira_no_alloc_regs, HARD_FRAME_POINTER_REGNUM);
2336 else if (frame_pointer_needed)
2337 error ("%s cannot be used in asm here",
2338 reg_names[HARD_FRAME_POINTER_REGNUM]);
2339 else
2340 df_set_regs_ever_live (HARD_FRAME_POINTER_REGNUM, true);
2343 #else
2344 if (!TEST_HARD_REG_BIT (crtl->asm_clobbers, HARD_FRAME_POINTER_REGNUM))
2346 SET_HARD_REG_BIT (eliminable_regset, FRAME_POINTER_REGNUM);
2347 if (frame_pointer_needed)
2348 SET_HARD_REG_BIT (ira_no_alloc_regs, FRAME_POINTER_REGNUM);
2350 else if (frame_pointer_needed)
2351 error ("%s cannot be used in asm here", reg_names[FRAME_POINTER_REGNUM]);
2352 else
2353 df_set_regs_ever_live (FRAME_POINTER_REGNUM, true);
2354 #endif
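/* On a typical ELIMINABLE_REGS target the pairs include an elimination
   of the frame pointer to the stack pointer (illustrative), so a
   function that does not need a frame pointer gets that register back
   for allocation, while frame_pointer_needed above forces it into
   ira_no_alloc_regs.  */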
2359 /* Vector of substitutions of register numbers,
2360 used to map pseudo regs into hardware regs.
2361 This is set up as a result of register allocation.
2362 Element N is the hard reg assigned to pseudo reg N,
2363 or is -1 if no hard reg was assigned.
2364 If N is a hard reg number, element N is N. */
2365 short *reg_renumber;
2367 /* Set up REG_RENUMBER and CALLER_SAVE_NEEDED (used by reload) from
2368 the allocation found by IRA. */
2369 static void
2370 setup_reg_renumber (void)
2372 int regno, hard_regno;
2373 ira_allocno_t a;
2374 ira_allocno_iterator ai;
2376 caller_save_needed = 0;
2377 FOR_EACH_ALLOCNO (a, ai)
2379 if (ira_use_lra_p && ALLOCNO_CAP_MEMBER (a) != NULL)
2380 continue;
2381 /* There are no caps at this point. */
2382 ira_assert (ALLOCNO_CAP_MEMBER (a) == NULL);
2383 if (! ALLOCNO_ASSIGNED_P (a))
2384 /* It can happen if A is not referenced but partially anticipated
2385 somewhere in a region. */
2386 ALLOCNO_ASSIGNED_P (a) = true;
2387 ira_free_allocno_updated_costs (a);
2388 hard_regno = ALLOCNO_HARD_REGNO (a);
2389 regno = ALLOCNO_REGNO (a);
2390 reg_renumber[regno] = (hard_regno < 0 ? -1 : hard_regno);
2391 if (hard_regno >= 0)
2393 int i, nwords;
2394 enum reg_class pclass;
2395 ira_object_t obj;
2397 pclass = ira_pressure_class_translate[REGNO_REG_CLASS (hard_regno)];
2398 nwords = ALLOCNO_NUM_OBJECTS (a);
2399 for (i = 0; i < nwords; i++)
2401 obj = ALLOCNO_OBJECT (a, i);
2402 IOR_COMPL_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
2403 reg_class_contents[pclass]);
2405 if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0
2406 && ira_hard_reg_set_intersection_p (hard_regno, ALLOCNO_MODE (a),
2407 call_used_reg_set))
2409 ira_assert (!optimize || flag_caller_saves
2410 || (ALLOCNO_CALLS_CROSSED_NUM (a)
2411 == ALLOCNO_CHEAP_CALLS_CROSSED_NUM (a))
2412 || regno >= ira_reg_equiv_len
2413 || ira_equiv_no_lvalue_p (regno));
2414 caller_save_needed = 1;
2420 /* Set up allocno assignment flags for further allocation
2421 improvements. */
2422 static void
2423 setup_allocno_assignment_flags (void)
2425 int hard_regno;
2426 ira_allocno_t a;
2427 ira_allocno_iterator ai;
2429 FOR_EACH_ALLOCNO (a, ai)
2431 if (! ALLOCNO_ASSIGNED_P (a))
2432 /* It can happen if A is not referenced but partially anticipated
2433 somewhere in a region. */
2434 ira_free_allocno_updated_costs (a);
2435 hard_regno = ALLOCNO_HARD_REGNO (a);
2436 /* Don't assign hard registers to allocnos which are the destination
2437 of a store removed at the end of a loop. It makes no sense to keep
2438 the same value in different hard registers. It is also
2439 impossible to assign hard registers correctly to such
2440 allocnos because the cost info and info about intersected
2441 calls are incorrect for them. */
2442 ALLOCNO_ASSIGNED_P (a) = (hard_regno >= 0
2443 || ALLOCNO_EMIT_DATA (a)->mem_optimized_dest_p
2444 || (ALLOCNO_MEMORY_COST (a)
2445 - ALLOCNO_CLASS_COST (a)) < 0);
2446 ira_assert
2447 (hard_regno < 0
2448 || ira_hard_reg_in_set_p (hard_regno, ALLOCNO_MODE (a),
2449 reg_class_contents[ALLOCNO_CLASS (a)]));
2453 /* Evaluate overall allocation cost and the costs for using hard
2454 registers and memory for allocnos. */
2455 static void
2456 calculate_allocation_cost (void)
2458 int hard_regno, cost;
2459 ira_allocno_t a;
2460 ira_allocno_iterator ai;
2462 ira_overall_cost = ira_reg_cost = ira_mem_cost = 0;
2463 FOR_EACH_ALLOCNO (a, ai)
2465 hard_regno = ALLOCNO_HARD_REGNO (a);
2466 ira_assert (hard_regno < 0
2467 || (ira_hard_reg_in_set_p
2468 (hard_regno, ALLOCNO_MODE (a),
2469 reg_class_contents[ALLOCNO_CLASS (a)])));
2470 if (hard_regno < 0)
2472 cost = ALLOCNO_MEMORY_COST (a);
2473 ira_mem_cost += cost;
2475 else if (ALLOCNO_HARD_REG_COSTS (a) != NULL)
2477 cost = (ALLOCNO_HARD_REG_COSTS (a)
2478 [ira_class_hard_reg_index
2479 [ALLOCNO_CLASS (a)][hard_regno]]);
2480 ira_reg_cost += cost;
2482 else
2484 cost = ALLOCNO_CLASS_COST (a);
2485 ira_reg_cost += cost;
2487 ira_overall_cost += cost;
2490 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
2492 fprintf (ira_dump_file,
2493 "+++Costs: overall %" PRId64
2494 ", reg %" PRId64
2495 ", mem %" PRId64
2496 ", ld %" PRId64
2497 ", st %" PRId64
2498 ", move %" PRId64,
2499 ira_overall_cost, ira_reg_cost, ira_mem_cost,
2500 ira_load_cost, ira_store_cost, ira_shuffle_cost);
2501 fprintf (ira_dump_file, "\n+++ move loops %d, new jumps %d\n",
2502 ira_move_loops_num, ira_additional_jumps_num);
2507 #ifdef ENABLE_IRA_CHECKING
2508 /* Check the correctness of the allocation. We need this because of
2509 the complicated code that transforms the multi-region internal
2510 representation into a one-region representation. */
2511 static void
2512 check_allocation (void)
2514 ira_allocno_t a;
2515 int hard_regno, nregs, conflict_nregs;
2516 ira_allocno_iterator ai;
2518 FOR_EACH_ALLOCNO (a, ai)
2520 int n = ALLOCNO_NUM_OBJECTS (a);
2521 int i;
2523 if (ALLOCNO_CAP_MEMBER (a) != NULL
2524 || (hard_regno = ALLOCNO_HARD_REGNO (a)) < 0)
2525 continue;
2526 nregs = hard_regno_nregs[hard_regno][ALLOCNO_MODE (a)];
2527 if (nregs == 1)
2528 /* We allocated a single hard register. */
2529 n = 1;
2530 else if (n > 1)
2531 /* We allocated multiple hard registers, and we will test
2532 conflicts in a granularity of single hard regs. */
2533 nregs = 1;
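    /* With one object per word, conflicts are tested at single hard
       register granularity: each object stands for one register of the
       allocated block, with the word order adjusted below for
       REG_WORDS_BIG_ENDIAN targets.  */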
2535 for (i = 0; i < n; i++)
2537 ira_object_t obj = ALLOCNO_OBJECT (a, i);
2538 ira_object_t conflict_obj;
2539 ira_object_conflict_iterator oci;
2540 int this_regno = hard_regno;
2541 if (n > 1)
2543 if (REG_WORDS_BIG_ENDIAN)
2544 this_regno += n - i - 1;
2545 else
2546 this_regno += i;
2548 FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
2550 ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
2551 int conflict_hard_regno = ALLOCNO_HARD_REGNO (conflict_a);
2552 if (conflict_hard_regno < 0)
2553 continue;
2555 conflict_nregs
2556 = (hard_regno_nregs
2557 [conflict_hard_regno][ALLOCNO_MODE (conflict_a)]);
2559 if (ALLOCNO_NUM_OBJECTS (conflict_a) > 1
2560 && conflict_nregs == ALLOCNO_NUM_OBJECTS (conflict_a))
2562 if (REG_WORDS_BIG_ENDIAN)
2563 conflict_hard_regno += (ALLOCNO_NUM_OBJECTS (conflict_a)
2564 - OBJECT_SUBWORD (conflict_obj) - 1);
2565 else
2566 conflict_hard_regno += OBJECT_SUBWORD (conflict_obj);
2567 conflict_nregs = 1;
2570 if ((conflict_hard_regno <= this_regno
2571 && this_regno < conflict_hard_regno + conflict_nregs)
2572 || (this_regno <= conflict_hard_regno
2573 && conflict_hard_regno < this_regno + nregs))
2575 fprintf (stderr, "bad allocation for %d and %d\n",
2576 ALLOCNO_REGNO (a), ALLOCNO_REGNO (conflict_a));
2577 gcc_unreachable ();
2583 #endif
2585 /* Allocate REG_EQUIV_INIT. Set it up from IRA_REG_EQUIV, which should
2586 already be calculated. */
2587 static void
2588 setup_reg_equiv_init (void)
2590 int i;
2591 int max_regno = max_reg_num ();
2593 for (i = 0; i < max_regno; i++)
2594 reg_equiv_init (i) = ira_reg_equiv[i].init_insns;
2597 /* Update equivalence info after moving FROM_REGNO to TO_REGNO. INSNS
2598 are the insns which were generated for this movement. It is assumed
2599 that FROM_REGNO and TO_REGNO always have the same value at the
2600 point of any move containing such registers. This function is used
2601 to update equiv info for register shuffles on the region borders
2602 and for caller save/restore insns. */
2603 void
2604 ira_update_equiv_info_by_shuffle_insn (int to_regno, int from_regno, rtx_insn *insns)
2606 rtx_insn *insn;
2607 rtx x, note;
2609 if (! ira_reg_equiv[from_regno].defined_p
2610 && (! ira_reg_equiv[to_regno].defined_p
2611 || ((x = ira_reg_equiv[to_regno].memory) != NULL_RTX
2612 && ! MEM_READONLY_P (x))))
2613 return;
2614 insn = insns;
2615 if (NEXT_INSN (insn) != NULL_RTX)
2617 if (! ira_reg_equiv[to_regno].defined_p)
2619 ira_assert (ira_reg_equiv[to_regno].init_insns == NULL_RTX);
2620 return;
2622 ira_reg_equiv[to_regno].defined_p = false;
2623 ira_reg_equiv[to_regno].memory
2624 = ira_reg_equiv[to_regno].constant
2625 = ira_reg_equiv[to_regno].invariant
2626 = ira_reg_equiv[to_regno].init_insns = NULL;
2627 if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
2628 fprintf (ira_dump_file,
2629 " Invalidating equiv info for reg %d\n", to_regno);
2630 return;
2632 /* It is possible that FROM_REGNO still has no equivalence because
2633 in shuffles to_regno<-from_regno and from_regno<-to_regno the 2nd
2634 insn was not processed yet. */
2635 if (ira_reg_equiv[from_regno].defined_p)
2637 ira_reg_equiv[to_regno].defined_p = true;
2638 if ((x = ira_reg_equiv[from_regno].memory) != NULL_RTX)
2640 ira_assert (ira_reg_equiv[from_regno].invariant == NULL_RTX
2641 && ira_reg_equiv[from_regno].constant == NULL_RTX);
2642 ira_assert (ira_reg_equiv[to_regno].memory == NULL_RTX
2643 || rtx_equal_p (ira_reg_equiv[to_regno].memory, x));
2644 ira_reg_equiv[to_regno].memory = x;
2645 if (! MEM_READONLY_P (x))
2646 /* We don't add the insn to the init insn list because a memory
2647 equivalence only says which memory is better to use when
2648 the pseudo is spilled. */
2649 return;
2651 else if ((x = ira_reg_equiv[from_regno].constant) != NULL_RTX)
2653 ira_assert (ira_reg_equiv[from_regno].invariant == NULL_RTX);
2654 ira_assert (ira_reg_equiv[to_regno].constant == NULL_RTX
2655 || rtx_equal_p (ira_reg_equiv[to_regno].constant, x));
2656 ira_reg_equiv[to_regno].constant = x;
2658 else
2660 x = ira_reg_equiv[from_regno].invariant;
2661 ira_assert (x != NULL_RTX);
2662 ira_assert (ira_reg_equiv[to_regno].invariant == NULL_RTX
2663 || rtx_equal_p (ira_reg_equiv[to_regno].invariant, x));
2664 ira_reg_equiv[to_regno].invariant = x;
2666 if (find_reg_note (insn, REG_EQUIV, x) == NULL_RTX)
2668 note = set_unique_reg_note (insn, REG_EQUIV, x);
2669 gcc_assert (note != NULL_RTX);
2670 if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
2672 fprintf (ira_dump_file,
2673 " Adding equiv note to insn %u for reg %d ",
2674 INSN_UID (insn), to_regno);
2675 dump_value_slim (ira_dump_file, x, 1);
2676 fprintf (ira_dump_file, "\n");
2680 ira_reg_equiv[to_regno].init_insns
2681 = gen_rtx_INSN_LIST (VOIDmode, insn,
2682 ira_reg_equiv[to_regno].init_insns);
2683 if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
2684 fprintf (ira_dump_file,
2685 " Adding equiv init move insn %u to reg %d\n",
2686 INSN_UID (insn), to_regno);
2689 /* Fix values of array REG_EQUIV_INIT after live range splitting done
2690 by IRA. */
2691 static void
2692 fix_reg_equiv_init (void)
2694 int max_regno = max_reg_num ();
2695 int i, new_regno, max;
2696 rtx set;
2697 rtx_insn_list *x, *next, *prev;
2698 rtx_insn *insn;
2700 if (max_regno_before_ira < max_regno)
2702 max = vec_safe_length (reg_equivs);
2703 grow_reg_equivs ();
2704 for (i = FIRST_PSEUDO_REGISTER; i < max; i++)
2705 for (prev = NULL, x = reg_equiv_init (i);
2706 x != NULL_RTX;
2707 x = next)
2709 next = x->next ();
2710 insn = x->insn ();
2711 set = single_set (insn);
2712 ira_assert (set != NULL_RTX
2713 && (REG_P (SET_DEST (set)) || REG_P (SET_SRC (set))));
2714 if (REG_P (SET_DEST (set))
2715 && ((int) REGNO (SET_DEST (set)) == i
2716 || (int) ORIGINAL_REGNO (SET_DEST (set)) == i))
2717 new_regno = REGNO (SET_DEST (set));
2718 else if (REG_P (SET_SRC (set))
2719 && ((int) REGNO (SET_SRC (set)) == i
2720 || (int) ORIGINAL_REGNO (SET_SRC (set)) == i))
2721 new_regno = REGNO (SET_SRC (set));
2722 else
2723 gcc_unreachable ();
2724 if (new_regno == i)
2725 prev = x;
2726 else
2728 /* Remove the wrong list element. */
2729 if (prev == NULL_RTX)
2730 reg_equiv_init (i) = next;
2731 else
2732 XEXP (prev, 1) = next;
2733 XEXP (x, 1) = reg_equiv_init (new_regno);
2734 reg_equiv_init (new_regno) = x;
2740 #ifdef ENABLE_IRA_CHECKING
2741 /* Print redundant memory-memory copies. */
2742 static void
2743 print_redundant_copies (void)
2745 int hard_regno;
2746 ira_allocno_t a;
2747 ira_copy_t cp, next_cp;
2748 ira_allocno_iterator ai;
2750 FOR_EACH_ALLOCNO (a, ai)
2752 if (ALLOCNO_CAP_MEMBER (a) != NULL)
2753 /* It is a cap. */
2754 continue;
2755 hard_regno = ALLOCNO_HARD_REGNO (a);
2756 if (hard_regno >= 0)
2757 continue;
2758 for (cp = ALLOCNO_COPIES (a); cp != NULL; cp = next_cp)
2759 if (cp->first == a)
2760 next_cp = cp->next_first_allocno_copy;
2761 else
2763 next_cp = cp->next_second_allocno_copy;
2764 if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL
2765 && cp->insn != NULL_RTX
2766 && ALLOCNO_HARD_REGNO (cp->first) == hard_regno)
2767 fprintf (ira_dump_file,
2768 " Redundant move from %d(freq %d):%d\n",
2769 INSN_UID (cp->insn), cp->freq, hard_regno);
2773 #endif
2775 /* Setup preferred and alternative classes for new pseudo-registers
2776 created by IRA starting with START. */
2777 static void
2778 setup_preferred_alternate_classes_for_new_pseudos (int start)
2780 int i, old_regno;
2781 int max_regno = max_reg_num ();
2783 for (i = start; i < max_regno; i++)
2785 old_regno = ORIGINAL_REGNO (regno_reg_rtx[i]);
2786 ira_assert (i != old_regno);
2787 setup_reg_classes (i, reg_preferred_class (old_regno),
2788 reg_alternate_class (old_regno),
2789 reg_allocno_class (old_regno));
2790 if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
2791 fprintf (ira_dump_file,
2792 " New r%d: setting preferred %s, alternative %s\n",
2793 i, reg_class_names[reg_preferred_class (old_regno)],
2794 reg_class_names[reg_alternate_class (old_regno)]);
2799 /* The number of entries allocated in reg_info. */
2800 static int allocated_reg_info_size;
2802 /* Regional allocation can create new pseudo-registers. This function
2803 expands some arrays for pseudo-registers. */
2804 static void
2805 expand_reg_info (void)
2807 int i;
2808 int size = max_reg_num ();
2810 resize_reg_info ();
2811 for (i = allocated_reg_info_size; i < size; i++)
2812 setup_reg_classes (i, GENERAL_REGS, ALL_REGS, GENERAL_REGS);
2813 setup_preferred_alternate_classes_for_new_pseudos (allocated_reg_info_size);
2814 allocated_reg_info_size = size;
2817 /* Return TRUE if the register pressure in the function is too high.
2818 It is used to decide when stack slot sharing is worth doing. */
2819 static bool
2820 too_high_register_pressure_p (void)
2822 int i;
2823 enum reg_class pclass;
2825 for (i = 0; i < ira_pressure_classes_num; i++)
2827 pclass = ira_pressure_classes[i];
2828 if (ira_loop_tree_root->reg_pressure[pclass] > 10000)
2829 return true;
2831 return false;
2836 /* Indicate that hard register number FROM was eliminated and replaced with
2837 an offset from hard register number TO. The status of hard registers live
2838 at the start of a basic block is updated by replacing a use of FROM with
2839 a use of TO. */
2841 void
2842 mark_elimination (int from, int to)
2844 basic_block bb;
2845 bitmap r;
2847 FOR_EACH_BB_FN (bb, cfun)
2849 r = DF_LR_IN (bb);
2850 if (bitmap_bit_p (r, from))
2852 bitmap_clear_bit (r, from);
2853 bitmap_set_bit (r, to);
2855 if (! df_live)
2856 continue;
2857 r = DF_LIVE_IN (bb);
2858 if (bitmap_bit_p (r, from))
2860 bitmap_clear_bit (r, from);
2861 bitmap_set_bit (r, to);
2868 /* The length of the following array. */
2869 int ira_reg_equiv_len;
2871 /* Info about equiv. info for each register. */
2872 struct ira_reg_equiv_s *ira_reg_equiv;
2874 /* Expand ira_reg_equiv if necessary. */
2875 void
2876 ira_expand_reg_equiv (void)
2878 int old = ira_reg_equiv_len;
2880 if (ira_reg_equiv_len > max_reg_num ())
2881 return;
2882 ira_reg_equiv_len = max_reg_num () * 3 / 2 + 1;
2883 ira_reg_equiv
2884 = (struct ira_reg_equiv_s *) xrealloc (ira_reg_equiv,
2885 ira_reg_equiv_len
2886 * sizeof (struct ira_reg_equiv_s));
2887 gcc_assert (old < ira_reg_equiv_len);
2888 memset (ira_reg_equiv + old, 0,
2889 sizeof (struct ira_reg_equiv_s) * (ira_reg_equiv_len - old));
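/* The new length (max_reg_num () * 3 / 2 + 1) grows the vector
   geometrically, so repeated expansion while new pseudos are created is
   cheap on average, and the memset above guarantees that every new
   entry starts out with no equivalence recorded.  */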
2892 static void
2893 init_reg_equiv (void)
2895 ira_reg_equiv_len = 0;
2896 ira_reg_equiv = NULL;
2897 ira_expand_reg_equiv ();
2900 static void
2901 finish_reg_equiv (void)
2903 free (ira_reg_equiv);
2908 struct equivalence
2910 /* Set when a REG_EQUIV note is found or created. Used to
2911 keep track of what memory accesses might be created later,
2912 e.g. by reload. */
2913 rtx replacement;
2914 rtx *src_p;
2916 /* The list of each instruction which initializes this register.
2918 NULL indicates we know nothing about this register's equivalence
2919 properties.
2921 An INSN_LIST with a NULL insn indicates this pseudo is already
2922 known to not have a valid equivalence. */
2923 rtx_insn_list *init_insns;
2925 /* Loop depth is used to recognize equivalences which appear
2926 to be present within the same loop (or in an inner loop). */
2927 short loop_depth;
2928 /* Nonzero if this had a preexisting REG_EQUIV note. */
2929 unsigned char is_arg_equivalence : 1;
2930 /* Set when an attempt should be made to replace a register
2931 with the associated src_p entry. */
2932 unsigned char replace : 1;
2933 /* Set if this register has no known equivalence. */
2934 unsigned char no_equiv : 1;
2937 /* reg_equiv[N] (where N is a pseudo reg number) is the equivalence
2938 structure for that register. */
2939 static struct equivalence *reg_equiv;
2941 /* Used for communication between the following two functions: contains
2942 a MEM that we wish to ensure remains unchanged. */
2943 static rtx equiv_mem;
2945 /* Set nonzero if EQUIV_MEM is modified. */
2946 static int equiv_mem_modified;
2948 /* If EQUIV_MEM is modified by modifying DEST, indicate that it is modified.
2949 Called via note_stores. */
2950 static void
2951 validate_equiv_mem_from_store (rtx dest, const_rtx set ATTRIBUTE_UNUSED,
2952 void *data ATTRIBUTE_UNUSED)
2954 if ((REG_P (dest)
2955 && reg_overlap_mentioned_p (dest, equiv_mem))
2956 || (MEM_P (dest)
2957 && anti_dependence (equiv_mem, dest)))
2958 equiv_mem_modified = 1;
2961 /* Verify that no store between START and the death of REG invalidates
2962 MEMREF. MEMREF is invalidated by modifying a register used in MEMREF,
2963 by storing into an overlapping memory location, or with a non-const
2964 CALL_INSN.
2966 Return 1 if MEMREF remains valid. */
2967 static int
2968 validate_equiv_mem (rtx_insn *start, rtx reg, rtx memref)
2970 rtx_insn *insn;
2971 rtx note;
2973 equiv_mem = memref;
2974 equiv_mem_modified = 0;
2976 /* If the memory reference has side effects or is volatile, it isn't a
2977 valid equivalence. */
2978 if (side_effects_p (memref))
2979 return 0;
2981 for (insn = start; insn && ! equiv_mem_modified; insn = NEXT_INSN (insn))
2983 if (! INSN_P (insn))
2984 continue;
2986 if (find_reg_note (insn, REG_DEAD, reg))
2987 return 1;
2989 /* This used to ignore readonly memory and const/pure calls. The problem
2990 is the equivalent form may reference a pseudo which gets assigned a
2991 call clobbered hard reg. When we later replace REG with its
2992 equivalent form, the value in the call-clobbered reg has been
2993 changed and all hell breaks loose. */
2994 if (CALL_P (insn))
2995 return 0;
2997 note_stores (PATTERN (insn), validate_equiv_mem_from_store, NULL);
2999 /* If a register mentioned in MEMREF is modified via an
3000 auto-increment, we lose the equivalence. Do the same if one
3001 dies; although we could extend the life, it doesn't seem worth
3002 the trouble. */
3004 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
3005 if ((REG_NOTE_KIND (note) == REG_INC
3006 || REG_NOTE_KIND (note) == REG_DEAD)
3007 && REG_P (XEXP (note, 0))
3008 && reg_overlap_mentioned_p (XEXP (note, 0), memref))
3009 return 0;
3012 return 0;
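/* Note how conservative the answers above are: a call insn, any store
   caught by validate_equiv_mem_from_store, or simply running out of
   insns without seeing REG's death all yield 0; only an explicit
   REG_DEAD note for REG while the memory is still untouched yields 1.  */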
3015 /* Return nonzero if X potentially varies; zero if X is known to be invariant. */
3016 static int
3017 equiv_init_varies_p (rtx x)
3019 RTX_CODE code = GET_CODE (x);
3020 int i;
3021 const char *fmt;
3023 switch (code)
3025 case MEM:
3026 return !MEM_READONLY_P (x) || equiv_init_varies_p (XEXP (x, 0));
3028 case CONST:
3029 CASE_CONST_ANY:
3030 case SYMBOL_REF:
3031 case LABEL_REF:
3032 return 0;
3034 case REG:
3035 return reg_equiv[REGNO (x)].replace == 0 && rtx_varies_p (x, 0);
3037 case ASM_OPERANDS:
3038 if (MEM_VOLATILE_P (x))
3039 return 1;
3041 /* Fall through. */
3043 default:
3044 break;
3047 fmt = GET_RTX_FORMAT (code);
3048 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3049 if (fmt[i] == 'e')
3051 if (equiv_init_varies_p (XEXP (x, i)))
3052 return 1;
3054 else if (fmt[i] == 'E')
3056 int j;
3057 for (j = 0; j < XVECLEN (x, i); j++)
3058 if (equiv_init_varies_p (XVECEXP (x, i, j)))
3059 return 1;
3062 return 0;
3065 /* Return nonzero if X (used to initialize register REGNO) is movable.
3066 X is only movable if the registers it uses have equivalent initializations
3067 which appear to be within the same loop (or in an inner loop) and are
3068 movable, or if they are not candidates for local_alloc and don't vary. */
3069 static int
3070 equiv_init_movable_p (rtx x, int regno)
3072 int i, j;
3073 const char *fmt;
3074 enum rtx_code code = GET_CODE (x);
3076 switch (code)
3078 case SET:
3079 return equiv_init_movable_p (SET_SRC (x), regno);
3081 case CC0:
3082 case CLOBBER:
3083 return 0;
3085 case PRE_INC:
3086 case PRE_DEC:
3087 case POST_INC:
3088 case POST_DEC:
3089 case PRE_MODIFY:
3090 case POST_MODIFY:
3091 return 0;
3093 case REG:
3094 return ((reg_equiv[REGNO (x)].loop_depth >= reg_equiv[regno].loop_depth
3095 && reg_equiv[REGNO (x)].replace)
3096 || (REG_BASIC_BLOCK (REGNO (x)) < NUM_FIXED_BLOCKS
3097 && ! rtx_varies_p (x, 0)));
3099 case UNSPEC_VOLATILE:
3100 return 0;
3102 case ASM_OPERANDS:
3103 if (MEM_VOLATILE_P (x))
3104 return 0;
3106 /* Fall through. */
3108 default:
3109 break;
3112 fmt = GET_RTX_FORMAT (code);
3113 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3114 switch (fmt[i])
3116 case 'e':
3117 if (! equiv_init_movable_p (XEXP (x, i), regno))
3118 return 0;
3119 break;
3120 case 'E':
3121 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
3122 if (! equiv_init_movable_p (XVECEXP (x, i, j), regno))
3123 return 0;
3124 break;
3127 return 1;
3130 /* TRUE if X uses any registers for which reg_equiv[REGNO].replace is
3131 true. */
3132 static int
3133 contains_replace_regs (rtx x)
3135 int i, j;
3136 const char *fmt;
3137 enum rtx_code code = GET_CODE (x);
3139 switch (code)
3141 case CONST:
3142 case LABEL_REF:
3143 case SYMBOL_REF:
3144 CASE_CONST_ANY:
3145 case PC:
3146 case CC0:
3147 case HIGH:
3148 return 0;
3150 case REG:
3151 return reg_equiv[REGNO (x)].replace;
3153 default:
3154 break;
3157 fmt = GET_RTX_FORMAT (code);
3158 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3159 switch (fmt[i])
3161 case 'e':
3162 if (contains_replace_regs (XEXP (x, i)))
3163 return 1;
3164 break;
3165 case 'E':
3166 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
3167 if (contains_replace_regs (XVECEXP (x, i, j)))
3168 return 1;
3169 break;
3172 return 0;
3175 /* TRUE if X references a memory location that would be affected by a store
3176 to MEMREF. */
3177 static int
3178 memref_referenced_p (rtx memref, rtx x)
3180 int i, j;
3181 const char *fmt;
3182 enum rtx_code code = GET_CODE (x);
3184 switch (code)
3186 case CONST:
3187 case LABEL_REF:
3188 case SYMBOL_REF:
3189 CASE_CONST_ANY:
3190 case PC:
3191 case CC0:
3192 case HIGH:
3193 case LO_SUM:
3194 return 0;
3196 case REG:
3197 return (reg_equiv[REGNO (x)].replacement
3198 && memref_referenced_p (memref,
3199 reg_equiv[REGNO (x)].replacement));
3201 case MEM:
3202 if (true_dependence (memref, VOIDmode, x))
3203 return 1;
3204 break;
3206 case SET:
3207 /* If we are setting a MEM, it doesn't count (its address does), but any
3208 other SET_DEST that has a MEM in it is referencing the MEM. */
3209 if (MEM_P (SET_DEST (x)))
3211 if (memref_referenced_p (memref, XEXP (SET_DEST (x), 0)))
3212 return 1;
3214 else if (memref_referenced_p (memref, SET_DEST (x)))
3215 return 1;
3217 return memref_referenced_p (memref, SET_SRC (x));
3219 default:
3220 break;
3223 fmt = GET_RTX_FORMAT (code);
3224 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3225 switch (fmt[i])
3227 case 'e':
3228 if (memref_referenced_p (memref, XEXP (x, i)))
3229 return 1;
3230 break;
3231 case 'E':
3232 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
3233 if (memref_referenced_p (memref, XVECEXP (x, i, j)))
3234 return 1;
3235 break;
3238 return 0;
3241 /* TRUE if some insn in the range (START, END] references a memory location
3242 that would be affected by a store to MEMREF. */
3243 static int
3244 memref_used_between_p (rtx memref, rtx_insn *start, rtx_insn *end)
3246 rtx_insn *insn;
3248 for (insn = NEXT_INSN (start); insn != NEXT_INSN (end);
3249 insn = NEXT_INSN (insn))
3251 if (!NONDEBUG_INSN_P (insn))
3252 continue;
3254 if (memref_referenced_p (memref, PATTERN (insn)))
3255 return 1;
3257 /* Nonconst functions may access memory. */
3258 if (CALL_P (insn) && (! RTL_CONST_CALL_P (insn)))
3259 return 1;
3262 return 0;
3265 /* Mark REG as having no known equivalence.
3266 Some instructions might have been processed before and furnished
3267 with REG_EQUIV notes for this register; these notes will have to be
3268 removed.
3269 STORE is the piece of RTL that does the non-constant / conflicting
3270 assignment - a SET, CLOBBER or REG_INC note. It is currently not used,
3271 but needs to be there because this function is called from note_stores. */
3272 static void
3273 no_equiv (rtx reg, const_rtx store ATTRIBUTE_UNUSED,
3274 void *data ATTRIBUTE_UNUSED)
3276 int regno;
3277 rtx_insn_list *list;
3279 if (!REG_P (reg))
3280 return;
3281 regno = REGNO (reg);
3282 reg_equiv[regno].no_equiv = 1;
3283 list = reg_equiv[regno].init_insns;
3284 if (list && list->insn () == NULL)
3285 return;
3286 reg_equiv[regno].init_insns = gen_rtx_INSN_LIST (VOIDmode, NULL_RTX, NULL);
3287 reg_equiv[regno].replacement = NULL_RTX;
3288 /* This doesn't matter for equivalences made for argument registers;
3289 we should keep their initialization insns. */
3290 if (reg_equiv[regno].is_arg_equivalence)
3291 return;
3292 ira_reg_equiv[regno].defined_p = false;
3293 ira_reg_equiv[regno].init_insns = NULL;
3294 for (; list; list = list->next ())
3296 rtx_insn *insn = list->insn ();
3297 remove_note (insn, find_reg_note (insn, REG_EQUIV, NULL_RTX));
3301 /* For each paradoxical SUBREG in INSN, mark the inner register
3302 in PDX_SUBREGS. */
3304 static void
3305 set_paradoxical_subreg (rtx_insn *insn, bool *pdx_subregs)
3307 subrtx_iterator::array_type array;
3308 FOR_EACH_SUBRTX (iter, array, PATTERN (insn), NONCONST)
3310 const_rtx subreg = *iter;
3311 if (GET_CODE (subreg) == SUBREG)
3313 const_rtx reg = SUBREG_REG (subreg);
3314 if (REG_P (reg) && paradoxical_subreg_p (subreg))
3315 pdx_subregs[REGNO (reg)] = true;
3320 /* In a DEBUG_INSN location, adjust REGs that are in the CLEARED_REGS
3321 bitmap to their equivalent replacement. */
3323 static rtx
3324 adjust_cleared_regs (rtx loc, const_rtx old_rtx ATTRIBUTE_UNUSED, void *data)
3326 if (REG_P (loc))
3328 bitmap cleared_regs = (bitmap) data;
3329 if (bitmap_bit_p (cleared_regs, REGNO (loc)))
3330 return simplify_replace_fn_rtx (copy_rtx (*reg_equiv[REGNO (loc)].src_p),
3331 NULL_RTX, adjust_cleared_regs, data);
3333 return NULL_RTX;
3336 /* Nonzero if we recorded an equivalence for a LABEL_REF. */
3337 static int recorded_label_ref;
3339 /* Find registers that are equivalent to a single value throughout the
3340 compilation (either because they can be referenced in memory or are
3341 set once from a single constant). Lower their priority for a
3342 register.
3344 If such a register is only referenced once, try substituting its
3345 value into the using insn. If it succeeds, we can eliminate the
3346 register completely.
3348 Initialize init_insns in ira_reg_equiv array.
3350 Return non-zero if jump label rebuilding should be done. */
3351 static int
3352 update_equiv_regs (void)
3354 rtx_insn *insn;
3355 basic_block bb;
3356 int loop_depth;
3357 bitmap cleared_regs;
3358 bool *pdx_subregs;
3360 /* We need to keep track of whether or not we recorded a LABEL_REF so
3361 that we know if the jump optimizer needs to be rerun. */
3362 recorded_label_ref = 0;
3364 /* Use pdx_subregs to show whether a reg is used in a paradoxical
3365 subreg. */
3366 pdx_subregs = XCNEWVEC (bool, max_regno);
3368 reg_equiv = XCNEWVEC (struct equivalence, max_regno);
3369 grow_reg_equivs ();
3371 init_alias_analysis ();
3373 /* Scan insns and set pdx_subregs[regno] if the reg is used in a
3374 paradoxical subreg. Such a reg must not be set equivalent to a mem,
3375 because LRA will not substitute such an equiv memory: that would risk
3376 accesses beyond the memory allocated for the paradoxical subreg. */
3377 FOR_EACH_BB_FN (bb, cfun)
3378 FOR_BB_INSNS (bb, insn)
3379 if (NONDEBUG_INSN_P (insn))
3380 set_paradoxical_subreg (insn, pdx_subregs);
3382 /* Scan the insns and find which registers have equivalences. Do this
3383 in a separate scan of the insns because (due to -fcse-follow-jumps)
3384 a register can be set below its use. */
3385 FOR_EACH_BB_FN (bb, cfun)
3387 loop_depth = bb_loop_depth (bb);
3389 for (insn = BB_HEAD (bb);
3390 insn != NEXT_INSN (BB_END (bb));
3391 insn = NEXT_INSN (insn))
3393 rtx note;
3394 rtx set;
3395 rtx dest, src;
3396 int regno;
3398 if (! INSN_P (insn))
3399 continue;
3401 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
3402 if (REG_NOTE_KIND (note) == REG_INC)
3403 no_equiv (XEXP (note, 0), note, NULL);
3405 set = single_set (insn);
3407 /* If this insn contains more (or less) than a single SET,
3408 only mark all destinations as having no known equivalence. */
3409 if (set == NULL_RTX)
3411 note_stores (PATTERN (insn), no_equiv, NULL);
3412 continue;
3414 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3416 int i;
3418 for (i = XVECLEN (PATTERN (insn), 0) - 1; i >= 0; i--)
3420 rtx part = XVECEXP (PATTERN (insn), 0, i);
3421 if (part != set)
3422 note_stores (part, no_equiv, NULL);
3426 dest = SET_DEST (set);
3427 src = SET_SRC (set);
3429 /* See if this is setting up the equivalence between an argument
3430 register and its stack slot. */
3431 note = find_reg_note (insn, REG_EQUIV, NULL_RTX);
3432 if (note)
3434 gcc_assert (REG_P (dest));
3435 regno = REGNO (dest);
3437 /* Note that we don't want to clear init_insns in
3438 ira_reg_equiv even if there are multiple sets of this
3439 register. */
3440 reg_equiv[regno].is_arg_equivalence = 1;
3442 /* The insn result can have an equivalent memory location
3443 even though the equivalence is not set up by the insn
3444 itself. We add this insn to the init insns as a flag that
3445 REGNO has an equivalence; we will remove the insn
3446 from the init insn list later. */
3447 if (rtx_equal_p (src, XEXP (note, 0)) || MEM_P (XEXP (note, 0)))
3448 ira_reg_equiv[regno].init_insns
3449 = gen_rtx_INSN_LIST (VOIDmode, insn,
3450 ira_reg_equiv[regno].init_insns);
3452 /* Continue normally in case this is a candidate for
3453 replacements. */
3456 if (!optimize)
3457 continue;
3459 /* We only handle the case of a pseudo register being set
3460 once, or always to the same value. */
3461 /* ??? The mn10200 port breaks if we add equivalences for
3462 values that need an ADDRESS_REGS register and set them equivalent
3463 to a MEM of a pseudo. The actual problem is in the over-conservative
3464 handling of INPADDR_ADDRESS / INPUT_ADDRESS / INPUT triples in
3465 calculate_needs, but we traditionally work around this problem
3466 here by rejecting equivalences when the destination is in a register
3467 that's likely spilled. This is fragile, of course, since the
3468 preferred class of a pseudo depends on all instructions that set
3469 or use it. */
3471 if (!REG_P (dest)
3472 || (regno = REGNO (dest)) < FIRST_PSEUDO_REGISTER
3473 || (reg_equiv[regno].init_insns
3474 && reg_equiv[regno].init_insns->insn () == NULL)
3475 || (targetm.class_likely_spilled_p (reg_preferred_class (regno))
3476 && MEM_P (src) && ! reg_equiv[regno].is_arg_equivalence))
3478 /* This might be setting a SUBREG of a pseudo, a pseudo that is
3479 also set somewhere else to a constant. */
3480 note_stores (set, no_equiv, NULL);
3481 continue;
3484 /* Don't set reg (if pdx_subregs[regno] == true) equivalent to a mem. */
3485 if (MEM_P (src) && pdx_subregs[regno])
3487 note_stores (set, no_equiv, NULL);
3488 continue;
3491 note = find_reg_note (insn, REG_EQUAL, NULL_RTX);
3493 /* cse sometimes generates function invariants, but doesn't put a
3494 REG_EQUAL note on the insn. Since this note would be redundant,
3495 there's no point creating it earlier than here. */
3496 if (! note && ! rtx_varies_p (src, 0))
3497 note = set_unique_reg_note (insn, REG_EQUAL, copy_rtx (src));
3499 /* Don't bother considering a REG_EQUAL note containing an EXPR_LIST
3500 since it represents a function call. */
3501 if (note && GET_CODE (XEXP (note, 0)) == EXPR_LIST)
3502 note = NULL_RTX;
3504 if (DF_REG_DEF_COUNT (regno) != 1)
3506 bool equal_p = true;
3507 rtx_insn_list *list;
3509 /* If we have already processed this pseudo and determined it
3510 cannot have an equivalence, then honor that decision. */
3511 if (reg_equiv[regno].no_equiv)
3512 continue;
3514 if (! note
3515 || rtx_varies_p (XEXP (note, 0), 0)
3516 || (reg_equiv[regno].replacement
3517 && ! rtx_equal_p (XEXP (note, 0),
3518 reg_equiv[regno].replacement)))
3520 no_equiv (dest, set, NULL);
3521 continue;
3524 list = reg_equiv[regno].init_insns;
3525 for (; list; list = list->next ())
3527 rtx note_tmp;
3528 rtx_insn *insn_tmp;
3530 insn_tmp = list->insn ();
3531 note_tmp = find_reg_note (insn_tmp, REG_EQUAL, NULL_RTX);
3532 gcc_assert (note_tmp);
3533 if (! rtx_equal_p (XEXP (note, 0), XEXP (note_tmp, 0)))
3535 equal_p = false;
3536 break;
3540 if (! equal_p)
3542 no_equiv (dest, set, NULL);
3543 continue;
3547 /* Record this insn as initializing this register. */
3548 reg_equiv[regno].init_insns
3549 = gen_rtx_INSN_LIST (VOIDmode, insn, reg_equiv[regno].init_insns);
3551 /* If this register is known to be equal to a constant, record that
3552 it is always equivalent to the constant. */
3553 if (DF_REG_DEF_COUNT (regno) == 1
3554 && note && ! rtx_varies_p (XEXP (note, 0), 0))
3556 rtx note_value = XEXP (note, 0);
3557 remove_note (insn, note);
3558 set_unique_reg_note (insn, REG_EQUIV, note_value);
3561 /* If this insn introduces a "constant" register, decrease the priority
3562 of that register. Record this insn if the register is only used once
3563 more and the equivalence value is the same as our source.
3565 The latter condition is checked for two reasons: First, it is an
3566 indication that it may be more efficient to actually emit the insn
3567 as written (if no registers are available, reload will substitute
3568 the equivalence). Secondly, it avoids problems with any registers
3569 dying in this insn whose death notes would be missed.
3571 If we don't have a REG_EQUIV note, see if this insn is loading
3572 a register used only in one basic block from a MEM. If so, and the
3573 MEM remains unchanged for the life of the register, add a REG_EQUIV
3574 note. */
3575 note = find_reg_note (insn, REG_EQUIV, NULL_RTX);
3577 if (note == NULL_RTX && REG_BASIC_BLOCK (regno) >= NUM_FIXED_BLOCKS
3578 && MEM_P (SET_SRC (set))
3579 && validate_equiv_mem (insn, dest, SET_SRC (set)))
3580 note = set_unique_reg_note (insn, REG_EQUIV, copy_rtx (SET_SRC (set)));
3582 if (note)
3584 int regno = REGNO (dest);
3585 rtx x = XEXP (note, 0);
3587 /* If we haven't done so, record for reload that this is an
3588 equivalencing insn. */
3589 if (!reg_equiv[regno].is_arg_equivalence)
3590 ira_reg_equiv[regno].init_insns
3591 = gen_rtx_INSN_LIST (VOIDmode, insn,
3592 ira_reg_equiv[regno].init_insns);
3594 /* Record whether or not we created a REG_EQUIV note for a LABEL_REF.
3595 We might end up substituting the LABEL_REF for uses of the
3596 pseudo here or later. That kind of transformation may turn an
3597 indirect jump into a direct jump, in which case we must rerun the
3598 jump optimizer to ensure that the JUMP_LABEL fields are valid. */
3599 if (GET_CODE (x) == LABEL_REF
3600 || (GET_CODE (x) == CONST
3601 && GET_CODE (XEXP (x, 0)) == PLUS
3602 && (GET_CODE (XEXP (XEXP (x, 0), 0)) == LABEL_REF)))
3603 recorded_label_ref = 1;
3605 reg_equiv[regno].replacement = x;
3606 reg_equiv[regno].src_p = &SET_SRC (set);
3607 reg_equiv[regno].loop_depth = (short) loop_depth;
3609 /* Don't mess with things live during setjmp. */
3610 if (REG_LIVE_LENGTH (regno) >= 0 && optimize)
3612 /* Note that the statement below does not affect the priority
3613 in local-alloc! */
3614 REG_LIVE_LENGTH (regno) *= 2;
3616 /* If the register is referenced exactly twice, meaning it is
3617 set once and used once, indicate that the reference may be
3618 replaced by the equivalence we computed above. Do this
3619 even if the register is only used in one block so that
3620 dependencies can be handled where the last register is
3621 used in a different block (i.e. HIGH / LO_SUM sequences)
3622 and to reduce the number of registers alive across
3623 calls. */
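/* A hypothetical sketch of such a HIGH / LO_SUM sequence (pseudo
   numbers invented for illustration):

       (set (reg 100) (high:SI (symbol_ref "x")))
       (set (reg 101) (lo_sum:SI (reg 100) (symbol_ref "x")))

   Here reg 100 is set once and used once, so its single use may be
   replaced by the HIGH expression even when that use ends up in a
   different basic block. */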
3625 if (REG_N_REFS (regno) == 2
3626 && (rtx_equal_p (x, src)
3627 || ! equiv_init_varies_p (src))
3628 && NONJUMP_INSN_P (insn)
3629 && equiv_init_movable_p (PATTERN (insn), regno))
3630 reg_equiv[regno].replace = 1;
3636 if (!optimize)
3637 goto out;
3639 /* A second pass, to gather additional equivalences with memory. This needs
3640 to be done after we know which registers we are going to replace. */
3642 for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
3644 rtx set, src, dest;
3645 unsigned regno;
3647 if (! INSN_P (insn))
3648 continue;
3650 set = single_set (insn);
3651 if (! set)
3652 continue;
3654 dest = SET_DEST (set);
3655 src = SET_SRC (set);
3657 /* If this sets a MEM to the contents of a REG that is only used
3658 in a single basic block, see if the register is always equivalent
3659 to that memory location and if moving the store from INSN to the
3660 insn that set REG is safe. If so, put a REG_EQUIV note on the
3661 initializing insn.
3663 Don't add a REG_EQUIV note if the insn already has one. The existing
3664 REG_EQUIV is likely more useful than the one we are adding.
3666 If one of the regs in the address has reg_equiv[REGNO].replace set,
3667 then we can't add this REG_EQUIV note. The reg_equiv[REGNO].replace
3668 optimization may move the set of this register immediately before
3669 insn, which puts it after reg_equiv[REGNO].init_insns, and hence
3670 the mention in the REG_EQUIV note would be to an uninitialized
3671 pseudo. */
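/* A hypothetical sketch of the situation handled here (pseudo numbers
   invented for illustration):

       (set (reg 100) (reg:SI 0 ax))                              ; init insn
       ...
       (set (mem/c:SI (plus (reg fp) (const_int -8))) (reg 100))  ; INSN

   If the conditions below hold, a REG_EQUIV note for the stack slot is
   attached to the init insn, recording that pseudo 100 can always be
   found in that memory location. */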
3673 if (MEM_P (dest) && REG_P (src)
3674 && (regno = REGNO (src)) >= FIRST_PSEUDO_REGISTER
3675 && REG_BASIC_BLOCK (regno) >= NUM_FIXED_BLOCKS
3676 && DF_REG_DEF_COUNT (regno) == 1
3677 && reg_equiv[regno].init_insns != NULL
3678 && reg_equiv[regno].init_insns->insn () != NULL
3679 && ! find_reg_note (XEXP (reg_equiv[regno].init_insns, 0),
3680 REG_EQUIV, NULL_RTX)
3681 && ! contains_replace_regs (XEXP (dest, 0))
3682 && ! pdx_subregs[regno])
3684 rtx_insn *init_insn =
3685 as_a <rtx_insn *> (XEXP (reg_equiv[regno].init_insns, 0));
3686 if (validate_equiv_mem (init_insn, src, dest)
3687 && ! memref_used_between_p (dest, init_insn, insn)
3688 /* Attaching a REG_EQUIV note will fail if INIT_INSN has
3689 multiple sets. */
3690 && set_unique_reg_note (init_insn, REG_EQUIV, copy_rtx (dest)))
3692 /* This insn makes the equivalence, not the one initializing
3693 the register. */
3694 ira_reg_equiv[regno].init_insns
3695 = gen_rtx_INSN_LIST (VOIDmode, insn, NULL_RTX);
3696 df_notes_rescan (init_insn);
3701 cleared_regs = BITMAP_ALLOC (NULL);
3702 /* Now scan all regs killed in an insn to see if any of them are
3703 registers used only that one time. If so, see if we can replace the
3704 reference with the equivalent form. If we can, delete the
3705 initializing reference and this register will go away. If we
3706 can't replace the reference, and the initializing reference is
3707 within the same loop (or in an inner loop), then move the register
3708 initialization just before the use, so that they are in the same
3709 basic block. */
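/* An illustrative sketch (pseudo numbers hypothetical): if pseudo 100
   is set once and dies at its only use,

       (set (reg 100) (mem:SI (reg 101)))               ; equiv_insn
       ...
       (set (reg 102) (plus:SI (reg 100) (reg 103)))    ; insn, 100 dies

   we first try to substitute the equivalent value for reg 100 in the
   use; failing that, the initialization is re-emitted just before the
   use and the original equiv_insn is deleted. */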
3710 FOR_EACH_BB_REVERSE_FN (bb, cfun)
3712 loop_depth = bb_loop_depth (bb);
3713 for (insn = BB_END (bb);
3714 insn != PREV_INSN (BB_HEAD (bb));
3715 insn = PREV_INSN (insn))
3717 rtx link;
3719 if (! INSN_P (insn))
3720 continue;
3722 /* Don't substitute into a non-local goto; this confuses the CFG. */
3723 if (JUMP_P (insn)
3724 && find_reg_note (insn, REG_NON_LOCAL_GOTO, NULL_RTX))
3725 continue;
3727 for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
3729 if (REG_NOTE_KIND (link) == REG_DEAD
3730 /* Make sure this insn still refers to the register. */
3731 && reg_mentioned_p (XEXP (link, 0), PATTERN (insn)))
3733 int regno = REGNO (XEXP (link, 0));
3734 rtx equiv_insn;
3736 if (! reg_equiv[regno].replace
3737 || reg_equiv[regno].loop_depth < (short) loop_depth
3738 /* There is no point in moving insns if live-range
3739 shrinkage or register pressure-sensitive
3740 scheduling was done, because doing so will
3741 most likely not improve allocation but will
3742 worsen the insn schedule. */
3743 || flag_live_range_shrinkage
3744 || (flag_sched_pressure && flag_schedule_insns))
3745 continue;
3747 /* reg_equiv[REGNO].replace gets set only when
3748 REG_N_REFS[REGNO] is 2, i.e. the register is set
3749 once and used once. (If it were only set, but
3750 not used, flow would have deleted the setting
3751 insns.) Hence there can only be one insn in
3752 reg_equiv[REGNO].init_insns. */
3753 gcc_assert (reg_equiv[regno].init_insns
3754 && !XEXP (reg_equiv[regno].init_insns, 1));
3755 equiv_insn = XEXP (reg_equiv[regno].init_insns, 0);
3757 /* We may not move instructions that can throw, since
3758 that changes basic block boundaries and we are not
3759 prepared to adjust the CFG to match. */
3760 if (can_throw_internal (equiv_insn))
3761 continue;
3763 if (asm_noperands (PATTERN (equiv_insn)) < 0
3764 && validate_replace_rtx (regno_reg_rtx[regno],
3765 *(reg_equiv[regno].src_p), insn))
3767 rtx equiv_link;
3768 rtx last_link;
3769 rtx note;
3771 /* Find the last note. */
3772 for (last_link = link; XEXP (last_link, 1);
3773 last_link = XEXP (last_link, 1))
3774 ;
3776 /* Append the REG_DEAD notes from equiv_insn. */
3777 equiv_link = REG_NOTES (equiv_insn);
3778 while (equiv_link)
3780 note = equiv_link;
3781 equiv_link = XEXP (equiv_link, 1);
3782 if (REG_NOTE_KIND (note) == REG_DEAD)
3784 remove_note (equiv_insn, note);
3785 XEXP (last_link, 1) = note;
3786 XEXP (note, 1) = NULL_RTX;
3787 last_link = note;
3791 remove_death (regno, insn);
3792 SET_REG_N_REFS (regno, 0);
3793 REG_FREQ (regno) = 0;
3794 delete_insn (equiv_insn);
3796 reg_equiv[regno].init_insns
3797 = reg_equiv[regno].init_insns->next ();
3799 ira_reg_equiv[regno].init_insns = NULL;
3800 bitmap_set_bit (cleared_regs, regno);
3802 /* Move the initialization of the register to just before
3803 INSN. Update the flow information. */
3804 else if (prev_nondebug_insn (insn) != equiv_insn)
3806 rtx_insn *new_insn;
3808 new_insn = emit_insn_before (PATTERN (equiv_insn), insn);
3809 REG_NOTES (new_insn) = REG_NOTES (equiv_insn);
3810 REG_NOTES (equiv_insn) = 0;
3811 /* Rescan it to process the notes. */
3812 df_insn_rescan (new_insn);
3814 /* Make sure this insn is recognized before
3815 reload begins, otherwise
3816 eliminate_regs_in_insn will die. */
3817 INSN_CODE (new_insn) = INSN_CODE (equiv_insn);
3819 delete_insn (equiv_insn);
3821 XEXP (reg_equiv[regno].init_insns, 0) = new_insn;
3823 REG_BASIC_BLOCK (regno) = bb->index;
3824 REG_N_CALLS_CROSSED (regno) = 0;
3825 REG_FREQ_CALLS_CROSSED (regno) = 0;
3826 REG_N_THROWING_CALLS_CROSSED (regno) = 0;
3827 REG_LIVE_LENGTH (regno) = 2;
3829 if (insn == BB_HEAD (bb))
3830 BB_HEAD (bb) = PREV_INSN (insn);
3832 ira_reg_equiv[regno].init_insns
3833 = gen_rtx_INSN_LIST (VOIDmode, new_insn, NULL_RTX);
3834 bitmap_set_bit (cleared_regs, regno);
3841 if (!bitmap_empty_p (cleared_regs))
3843 FOR_EACH_BB_FN (bb, cfun)
3845 bitmap_and_compl_into (DF_LR_IN (bb), cleared_regs);
3846 bitmap_and_compl_into (DF_LR_OUT (bb), cleared_regs);
3847 if (! df_live)
3848 continue;
3849 bitmap_and_compl_into (DF_LIVE_IN (bb), cleared_regs);
3850 bitmap_and_compl_into (DF_LIVE_OUT (bb), cleared_regs);
3853 /* Last pass - adjust debug insns referencing cleared regs. */
3854 if (MAY_HAVE_DEBUG_INSNS)
3855 for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
3856 if (DEBUG_INSN_P (insn))
3858 rtx old_loc = INSN_VAR_LOCATION_LOC (insn);
3859 INSN_VAR_LOCATION_LOC (insn)
3860 = simplify_replace_fn_rtx (old_loc, NULL_RTX,
3861 adjust_cleared_regs,
3862 (void *) cleared_regs);
3863 if (old_loc != INSN_VAR_LOCATION_LOC (insn))
3864 df_insn_rescan (insn);
3868 BITMAP_FREE (cleared_regs);
3870 out:
3871 /* Clean up. */
3873 end_alias_analysis ();
3874 free (reg_equiv);
3875 free (pdx_subregs);
3876 return recorded_label_ref;
3881 /* Set up fields memory, constant, and invariant from init_insns in
3882 the structures of array ira_reg_equiv. */
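/* For instance (the note contents below are illustrative assumptions):
   an init insn carrying REG_EQUIV (mem:SI (plus (reg fp) (const_int -4)))
   fills in ira_reg_equiv[i].memory; REG_EQUIV (const_int 1) fills in
   ira_reg_equiv[i].constant when the target accepts that constant; and
   REG_EQUIV (plus (reg fp) (const_int 8)) fills in
   ira_reg_equiv[i].invariant. */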
3883 static void
3884 setup_reg_equiv (void)
3886 int i;
3887 rtx_insn_list *elem, *prev_elem, *next_elem;
3888 rtx_insn *insn;
3889 rtx set, x;
3891 for (i = FIRST_PSEUDO_REGISTER; i < ira_reg_equiv_len; i++)
3892 for (prev_elem = NULL, elem = ira_reg_equiv[i].init_insns;
3893 elem;
3894 prev_elem = elem, elem = next_elem)
3896 next_elem = elem->next ();
3897 insn = elem->insn ();
3898 set = single_set (insn);
3900 /* Init insns can set up an equivalence when the reg is a destination or
3901 a source (in the latter case the destination is memory). */
3902 if (set != 0 && (REG_P (SET_DEST (set)) || REG_P (SET_SRC (set))))
3904 if ((x = find_reg_note (insn, REG_EQUIV, NULL_RTX)) != NULL)
3906 x = XEXP (x, 0);
3907 if (REG_P (SET_DEST (set))
3908 && REGNO (SET_DEST (set)) == (unsigned int) i
3909 && ! rtx_equal_p (SET_SRC (set), x) && MEM_P (x))
3911 /* This insn reports the equivalence but does not
3912 actually set it up. Remove it from the
3913 list. */
3914 if (prev_elem == NULL)
3915 ira_reg_equiv[i].init_insns = next_elem;
3916 else
3917 XEXP (prev_elem, 1) = next_elem;
3918 elem = prev_elem;
3921 else if (REG_P (SET_DEST (set))
3922 && REGNO (SET_DEST (set)) == (unsigned int) i)
3923 x = SET_SRC (set);
3924 else
3926 gcc_assert (REG_P (SET_SRC (set))
3927 && REGNO (SET_SRC (set)) == (unsigned int) i);
3928 x = SET_DEST (set);
3930 if (! function_invariant_p (x)
3931 || ! flag_pic
3932 /* A function invariant is often CONSTANT_P but may
3933 include a register. We promise to only pass
3934 CONSTANT_P objects to LEGITIMATE_PIC_OPERAND_P. */
3935 || (CONSTANT_P (x) && LEGITIMATE_PIC_OPERAND_P (x)))
3937 /* It can happen that a REG_EQUIV note contains a MEM
3938 that is not a legitimate memory operand. As later
3939 stages of reload assume that all addresses found in
3940 the lra_regno_equiv_* arrays were originally
3941 legitimate, we ignore such REG_EQUIV notes. */
3942 if (memory_operand (x, VOIDmode))
3944 ira_reg_equiv[i].defined_p = true;
3945 ira_reg_equiv[i].memory = x;
3946 continue;
3948 else if (function_invariant_p (x))
3950 machine_mode mode;
3952 mode = GET_MODE (SET_DEST (set));
3953 if (GET_CODE (x) == PLUS
3954 || x == frame_pointer_rtx || x == arg_pointer_rtx)
3955 /* This is PLUS of frame pointer and a constant,
3956 or fp, or argp. */
3957 ira_reg_equiv[i].invariant = x;
3958 else if (targetm.legitimate_constant_p (mode, x))
3959 ira_reg_equiv[i].constant = x;
3960 else
3962 ira_reg_equiv[i].memory = force_const_mem (mode, x);
3963 if (ira_reg_equiv[i].memory == NULL_RTX)
3965 ira_reg_equiv[i].defined_p = false;
3966 ira_reg_equiv[i].init_insns = NULL;
3967 break;
3970 ira_reg_equiv[i].defined_p = true;
3971 continue;
3975 ira_reg_equiv[i].defined_p = false;
3976 ira_reg_equiv[i].init_insns = NULL;
3977 break;
3983 /* Print chain C to FILE. */
3984 static void
3985 print_insn_chain (FILE *file, struct insn_chain *c)
3987 fprintf (file, "insn=%d, ", INSN_UID (c->insn));
3988 bitmap_print (file, &c->live_throughout, "live_throughout: ", ", ");
3989 bitmap_print (file, &c->dead_or_set, "dead_or_set: ", "\n");
3993 /* Print all reload_insn_chains to FILE. */
3994 static void
3995 print_insn_chains (FILE *file)
3997 struct insn_chain *c;
3998 for (c = reload_insn_chain; c ; c = c->next)
3999 print_insn_chain (file, c);
4002 /* Return true if pseudo REGNO should be added to set live_throughout
4003 or dead_or_set of the insn chains for reload consideration. */
4004 static bool
4005 pseudo_for_reload_consideration_p (int regno)
4007 /* Consider spilled pseudos too, because when IRA is used they still
4008 have a chance to get hard registers during reload. */
4009 return (reg_renumber[regno] >= 0 || ira_conflicts_p);
4012 /* Initialize LIVE_SUBREGS[ALLOCNUM] and LIVE_SUBREGS_USED[ALLOCNUM] for
4013 the SUBREG REG, one bit per byte of the underlying pseudo, using
4014 INIT_VALUE to choose the initial contents. ALLOCNUM need not be the regno of REG. */
4015 static void
4016 init_live_subregs (bool init_value, sbitmap *live_subregs,
4017 bitmap live_subregs_used, int allocnum, rtx reg)
4019 unsigned int regno = REGNO (SUBREG_REG (reg));
4020 int size = GET_MODE_SIZE (GET_MODE (regno_reg_rtx[regno]));
4022 gcc_assert (size > 0);
4024 /* Been there, done that. */
4025 if (bitmap_bit_p (live_subregs_used, allocnum))
4026 return;
4028 /* Create a new one. */
4029 if (live_subregs[allocnum] == NULL)
4030 live_subregs[allocnum] = sbitmap_alloc (size);
4032 /* If the entire reg was live before being blasted into subregs, we
4033 need to initialize all of the subregs to ones; otherwise initialize them to zero. */
4034 if (init_value)
4035 bitmap_ones (live_subregs[allocnum]);
4036 else
4037 bitmap_clear (live_subregs[allocnum]);
4039 bitmap_set_bit (live_subregs_used, allocnum);
4042 /* Walk the insns of the current function and build reload_insn_chain,
4043 and record register life information. */
4044 static void
4045 build_insn_chain (void)
4047 unsigned int i;
4048 struct insn_chain **p = &reload_insn_chain;
4049 basic_block bb;
4050 struct insn_chain *c = NULL;
4051 struct insn_chain *next = NULL;
4052 bitmap live_relevant_regs = BITMAP_ALLOC (NULL);
4053 bitmap elim_regset = BITMAP_ALLOC (NULL);
4054 /* live_subregs is a vector used to keep accurate information about
4055 which hardregs are live in multiword pseudos. live_subregs and
4056 live_subregs_used are indexed by pseudo number. The live_subreg
4057 entry for a particular pseudo is only used if the corresponding
4058 element is nonzero in live_subregs_used. The sbitmap size of
4059 live_subreg[allocno] is the number of bytes that the pseudo can
4060 occupy. */
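/* For example (assuming a 32-bit target where UNITS_PER_WORD is 4), a
   DImode pseudo P occupies 8 bytes: bits 0-3 of live_subregs[P] track
   the low word and bits 4-7 the high word, so a definition of
   (subreg:SI (reg:DI P) 4) touches only bits 4-7. */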
4061 sbitmap *live_subregs = XCNEWVEC (sbitmap, max_regno);
4062 bitmap live_subregs_used = BITMAP_ALLOC (NULL);
4064 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
4065 if (TEST_HARD_REG_BIT (eliminable_regset, i))
4066 bitmap_set_bit (elim_regset, i);
4067 FOR_EACH_BB_REVERSE_FN (bb, cfun)
4069 bitmap_iterator bi;
4070 rtx_insn *insn;
4072 CLEAR_REG_SET (live_relevant_regs);
4073 bitmap_clear (live_subregs_used);
4075 EXECUTE_IF_SET_IN_BITMAP (df_get_live_out (bb), 0, i, bi)
4077 if (i >= FIRST_PSEUDO_REGISTER)
4078 break;
4079 bitmap_set_bit (live_relevant_regs, i);
4082 EXECUTE_IF_SET_IN_BITMAP (df_get_live_out (bb),
4083 FIRST_PSEUDO_REGISTER, i, bi)
4085 if (pseudo_for_reload_consideration_p (i))
4086 bitmap_set_bit (live_relevant_regs, i);
4089 FOR_BB_INSNS_REVERSE (bb, insn)
4091 if (!NOTE_P (insn) && !BARRIER_P (insn))
4093 struct df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
4094 df_ref def, use;
4096 c = new_insn_chain ();
4097 c->next = next;
4098 next = c;
4099 *p = c;
4100 p = &c->prev;
4102 c->insn = insn;
4103 c->block = bb->index;
4105 if (NONDEBUG_INSN_P (insn))
4106 FOR_EACH_INSN_INFO_DEF (def, insn_info)
4108 unsigned int regno = DF_REF_REGNO (def);
4110 /* Ignore may-clobbers because these are generated
4111 from calls. However, every other kind of def is
4112 added to dead_or_set. */
4113 if (!DF_REF_FLAGS_IS_SET (def, DF_REF_MAY_CLOBBER))
4115 if (regno < FIRST_PSEUDO_REGISTER)
4117 if (!fixed_regs[regno])
4118 bitmap_set_bit (&c->dead_or_set, regno);
4120 else if (pseudo_for_reload_consideration_p (regno))
4121 bitmap_set_bit (&c->dead_or_set, regno);
4124 if ((regno < FIRST_PSEUDO_REGISTER
4125 || reg_renumber[regno] >= 0
4126 || ira_conflicts_p)
4127 && (!DF_REF_FLAGS_IS_SET (def, DF_REF_CONDITIONAL)))
4129 rtx reg = DF_REF_REG (def);
4131 /* We can model subregs, but not if they are
4132 wrapped in ZERO_EXTRACTS. */
4133 if (GET_CODE (reg) == SUBREG
4134 && !DF_REF_FLAGS_IS_SET (def, DF_REF_ZERO_EXTRACT))
4136 unsigned int start = SUBREG_BYTE (reg);
4137 unsigned int last = start
4138 + GET_MODE_SIZE (GET_MODE (reg));
4140 init_live_subregs
4141 (bitmap_bit_p (live_relevant_regs, regno),
4142 live_subregs, live_subregs_used, regno, reg);
4144 if (!DF_REF_FLAGS_IS_SET
4145 (def, DF_REF_STRICT_LOW_PART))
4147 /* Expand the range to cover entire words.
4148 Bytes added here are "don't care". */
4149 start
4150 = start / UNITS_PER_WORD * UNITS_PER_WORD;
4151 last = ((last + UNITS_PER_WORD - 1)
4152 / UNITS_PER_WORD * UNITS_PER_WORD);
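/* E.g. with UNITS_PER_WORD == 4, a subreg covering bytes 2..5
   (start == 2, last == 6) is widened to bytes 0..7. */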
4155 /* Ignore the paradoxical bits. */
4156 if (last > SBITMAP_SIZE (live_subregs[regno]))
4157 last = SBITMAP_SIZE (live_subregs[regno]);
4159 while (start < last)
4161 bitmap_clear_bit (live_subregs[regno], start);
4162 start++;
4165 if (bitmap_empty_p (live_subregs[regno]))
4167 bitmap_clear_bit (live_subregs_used, regno);
4168 bitmap_clear_bit (live_relevant_regs, regno);
4170 else
4171 /* Set live_relevant_regs here because
4172 that bit has to be true to get us to
4173 look at the live_subregs fields. */
4174 bitmap_set_bit (live_relevant_regs, regno);
4176 else
4178 /* DF_REF_PARTIAL is generated for
4179 subregs, STRICT_LOW_PART, and
4180 ZERO_EXTRACT. We handle the subreg
4181 case above so here we have to keep from
4182 modeling the def as a killing def. */
4183 if (!DF_REF_FLAGS_IS_SET (def, DF_REF_PARTIAL))
4185 bitmap_clear_bit (live_subregs_used, regno);
4186 bitmap_clear_bit (live_relevant_regs, regno);
4192 bitmap_and_compl_into (live_relevant_regs, elim_regset);
4193 bitmap_copy (&c->live_throughout, live_relevant_regs);
4195 if (NONDEBUG_INSN_P (insn))
4196 FOR_EACH_INSN_INFO_USE (use, insn_info)
4198 unsigned int regno = DF_REF_REGNO (use);
4199 rtx reg = DF_REF_REG (use);
4201 /* DF_REF_READ_WRITE on a use means that this use
4202 is fabricated from a def that is a partial set
4203 to a multiword reg. Here, we only model the
4204 subreg case that is not wrapped in ZERO_EXTRACT
4205 precisely so we do not need to look at the
4206 fabricated use. */
4207 if (DF_REF_FLAGS_IS_SET (use, DF_REF_READ_WRITE)
4208 && !DF_REF_FLAGS_IS_SET (use, DF_REF_ZERO_EXTRACT)
4209 && DF_REF_FLAGS_IS_SET (use, DF_REF_SUBREG))
4210 continue;
4212 /* Add the last use of each var to dead_or_set. */
4213 if (!bitmap_bit_p (live_relevant_regs, regno))
4215 if (regno < FIRST_PSEUDO_REGISTER)
4217 if (!fixed_regs[regno])
4218 bitmap_set_bit (&c->dead_or_set, regno);
4220 else if (pseudo_for_reload_consideration_p (regno))
4221 bitmap_set_bit (&c->dead_or_set, regno);
4224 if (regno < FIRST_PSEUDO_REGISTER
4225 || pseudo_for_reload_consideration_p (regno))
4227 if (GET_CODE (reg) == SUBREG
4228 && !DF_REF_FLAGS_IS_SET (use,
4229 DF_REF_SIGN_EXTRACT
4230 | DF_REF_ZERO_EXTRACT))
4232 unsigned int start = SUBREG_BYTE (reg);
4233 unsigned int last = start
4234 + GET_MODE_SIZE (GET_MODE (reg));
4236 init_live_subregs
4237 (bitmap_bit_p (live_relevant_regs, regno),
4238 live_subregs, live_subregs_used, regno, reg);
4240 /* Ignore the paradoxical bits. */
4241 if (last > SBITMAP_SIZE (live_subregs[regno]))
4242 last = SBITMAP_SIZE (live_subregs[regno]);
4244 while (start < last)
4246 bitmap_set_bit (live_subregs[regno], start);
4247 start++;
4250 else
4251 /* Resetting the live_subregs_used is
4252 effectively saying do not use the subregs
4253 because we are reading the whole
4254 pseudo. */
4255 bitmap_clear_bit (live_subregs_used, regno);
4256 bitmap_set_bit (live_relevant_regs, regno);
4262 /* FIXME!! The following code is a disaster. Reload needs to see the
4263 labels and jump tables that are just hanging out in between
4264 the basic blocks. See pr33676. */
4265 insn = BB_HEAD (bb);
4267 /* Skip over the barriers and cruft. */
4268 while (insn && (BARRIER_P (insn) || NOTE_P (insn)
4269 || BLOCK_FOR_INSN (insn) == bb))
4270 insn = PREV_INSN (insn);
4272 /* While we add anything except barriers and notes, the focus is
4273 to get the labels and jump tables into the
4274 reload_insn_chain. */
4275 while (insn)
4277 if (!NOTE_P (insn) && !BARRIER_P (insn))
4279 if (BLOCK_FOR_INSN (insn))
4280 break;
4282 c = new_insn_chain ();
4283 c->next = next;
4284 next = c;
4285 *p = c;
4286 p = &c->prev;
4288 /* The block makes no sense here, but it is what the old
4289 code did. */
4290 c->block = bb->index;
4291 c->insn = insn;
4292 bitmap_copy (&c->live_throughout, live_relevant_regs);
4294 insn = PREV_INSN (insn);
4298 reload_insn_chain = c;
4299 *p = NULL;
4301 for (i = 0; i < (unsigned int) max_regno; i++)
4302 if (live_subregs[i] != NULL)
4303 sbitmap_free (live_subregs[i]);
4304 free (live_subregs);
4305 BITMAP_FREE (live_subregs_used);
4306 BITMAP_FREE (live_relevant_regs);
4307 BITMAP_FREE (elim_regset);
4309 if (dump_file)
4310 print_insn_chains (dump_file);
4313 /* Examine the rtx found in *LOC, which is read or written to as determined
4314 by TYPE. Return false if we find a reason why an insn containing this
4315 rtx should not be moved (such as accesses to non-constant memory), true
4316 otherwise. */
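/* For example (with hypothetical pseudo numbers),

       (set (reg 100) (plus:SI (reg 101) (const_int 4)))

   is moveable, whereas

       (set (reg 100) (mem:SI (reg 101)))

   is not, because the contents of a non-constant MEM may change between
   the insn's original location and its destination. */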
4317 static bool
4318 rtx_moveable_p (rtx *loc, enum op_type type)
4320 const char *fmt;
4321 rtx x = *loc;
4322 enum rtx_code code = GET_CODE (x);
4323 int i, j;
4326 switch (code)
4328 case CONST:
4329 CASE_CONST_ANY:
4330 case SYMBOL_REF:
4331 case LABEL_REF:
4332 return true;
4334 case PC:
4335 return type == OP_IN;
4337 case CC0:
4338 return false;
4340 case REG:
4341 if (x == frame_pointer_rtx)
4342 return true;
4343 if (HARD_REGISTER_P (x))
4344 return false;
4346 return true;
4348 case MEM:
4349 if (type == OP_IN && MEM_READONLY_P (x))
4350 return rtx_moveable_p (&XEXP (x, 0), OP_IN);
4351 return false;
4353 case SET:
4354 return (rtx_moveable_p (&SET_SRC (x), OP_IN)
4355 && rtx_moveable_p (&SET_DEST (x), OP_OUT));
4357 case STRICT_LOW_PART:
4358 return rtx_moveable_p (&XEXP (x, 0), OP_OUT);
4360 case ZERO_EXTRACT:
4361 case SIGN_EXTRACT:
4362 return (rtx_moveable_p (&XEXP (x, 0), type)
4363 && rtx_moveable_p (&XEXP (x, 1), OP_IN)
4364 && rtx_moveable_p (&XEXP (x, 2), OP_IN));
4366 case CLOBBER:
4367 return rtx_moveable_p (&SET_DEST (x), OP_OUT);
4369 case UNSPEC_VOLATILE:
4370 /* It is a bad idea to consider insns with such rtl
4371 as moveable. The insn scheduler also considers them as a barrier
4372 for a reason. */
4373 return false;
4375 default:
4376 break;
4379 fmt = GET_RTX_FORMAT (code);
4380 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
4382 if (fmt[i] == 'e')
4384 if (!rtx_moveable_p (&XEXP (x, i), type))
4385 return false;
4387 else if (fmt[i] == 'E')
4388 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
4390 if (!rtx_moveable_p (&XVECEXP (x, i, j), type))
4391 return false;
4394 return true;
4397 /* A wrapper around dominated_by_p, which uses the information in UID_LUID
4398 to give dominance relationships between two insns I1 and I2. */
4399 static bool
4400 insn_dominated_by_p (rtx i1, rtx i2, int *uid_luid)
4402 basic_block bb1 = BLOCK_FOR_INSN (i1);
4403 basic_block bb2 = BLOCK_FOR_INSN (i2);
4405 if (bb1 == bb2)
4406 return uid_luid[INSN_UID (i2)] < uid_luid[INSN_UID (i1)];
4407 return dominated_by_p (CDI_DOMINATORS, bb1, bb2);
4410 /* Record the range of register numbers added by find_moveable_pseudos. */
4411 int first_moveable_pseudo, last_moveable_pseudo;
4413 /* These two vectors hold data for every register added by
4414 find_moveable_pseudos, with index 0 holding data for the
4415 first_moveable_pseudo. */
4416 /* The original home register. */
4417 static vec<rtx> pseudo_replaced_reg;
4419 /* Look for instances where we have an instruction that is known to increase
4420 register pressure, and whose result is not used immediately. If it is
4421 possible to move the instruction downwards to just before its first use,
4422 split its lifetime into two ranges. We create a new pseudo to compute the
4423 value, and emit a move instruction just before the first use. If, after
4424 register allocation, the new pseudo remains unallocated, the function
4425 move_unallocated_pseudos then deletes the move instruction and places
4426 the computation just before the first use.
4428 Such a move is safe and profitable if all the input registers remain live
4429 and unchanged between the original computation and its first use. In such
4430 a situation, the computation is known to increase register pressure, and
4431 moving it is known to at least not worsen it.
4433 We restrict moves to only those cases where a register remains unallocated,
4434 in order to avoid interfering too much with the instruction schedule. As
4435 an exception, we may move insns which only modify their input register
4436 (typically induction variables), as this increases the freedom for our
4437 intended transformation, and does not limit the second instruction
4438 scheduler pass. */
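/* An illustrative sketch of the transformation (pseudo numbers are
   hypothetical): given

       (set (reg 100) (plus:SI (reg 90) (reg 91)))   ; increases pressure
       ... insns across which regs 90 and 91 stay live and unchanged ...
       ... first use of (reg 100) ...

   we rewrite the def to a fresh pseudo and emit a copy before the use:

       (set (reg 200) (plus:SI (reg 90) (reg 91)))
       ...
       (set (reg 100) (reg 200))
       ... first use of (reg 100) ...

   If reg 200 is still unallocated after coloring,
   move_unallocated_pseudos deletes the copy and sinks the computation
   to just before the use. */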
4440 static void
4441 find_moveable_pseudos (void)
4443 unsigned i;
4444 int max_regs = max_reg_num ();
4445 int max_uid = get_max_uid ();
4446 basic_block bb;
4447 int *uid_luid = XNEWVEC (int, max_uid);
4448 rtx_insn **closest_uses = XNEWVEC (rtx_insn *, max_regs);
4449 /* A set of registers which are live but not modified throughout a block. */
4450 bitmap_head *bb_transp_live = XNEWVEC (bitmap_head,
4451 last_basic_block_for_fn (cfun));
4452 /* A set of registers which only exist in a given basic block. */
4453 bitmap_head *bb_local = XNEWVEC (bitmap_head,
4454 last_basic_block_for_fn (cfun));
4455 /* A set of registers which are set once, in an instruction that can be
4456 moved freely downwards, but are otherwise transparent to a block. */
4457 bitmap_head *bb_moveable_reg_sets = XNEWVEC (bitmap_head,
4458 last_basic_block_for_fn (cfun));
4459 bitmap_head live, used, set, interesting, unusable_as_input;
4460 bitmap_iterator bi;
4461 bitmap_initialize (&interesting, 0);
4463 first_moveable_pseudo = max_regs;
4464 pseudo_replaced_reg.release ();
4465 pseudo_replaced_reg.safe_grow_cleared (max_regs);
4467 df_analyze ();
4468 calculate_dominance_info (CDI_DOMINATORS);
4470 i = 0;
4471 bitmap_initialize (&live, 0);
4472 bitmap_initialize (&used, 0);
4473 bitmap_initialize (&set, 0);
4474 bitmap_initialize (&unusable_as_input, 0);
4475 FOR_EACH_BB_FN (bb, cfun)
4477 rtx_insn *insn;
4478 bitmap transp = bb_transp_live + bb->index;
4479 bitmap moveable = bb_moveable_reg_sets + bb->index;
4480 bitmap local = bb_local + bb->index;
4482 bitmap_initialize (local, 0);
4483 bitmap_initialize (transp, 0);
4484 bitmap_initialize (moveable, 0);
4485 bitmap_copy (&live, df_get_live_out (bb));
4486 bitmap_and_into (&live, df_get_live_in (bb));
4487 bitmap_copy (transp, &live);
4488 bitmap_clear (moveable);
4489 bitmap_clear (&live);
4490 bitmap_clear (&used);
4491 bitmap_clear (&set);
4492 FOR_BB_INSNS (bb, insn)
4493 if (NONDEBUG_INSN_P (insn))
4495 df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
4496 df_ref def, use;
4498 uid_luid[INSN_UID (insn)] = i++;
4500 def = df_single_def (insn_info);
4501 use = df_single_use (insn_info);
4502 if (use
4503 && def
4504 && DF_REF_REGNO (use) == DF_REF_REGNO (def)
4505 && !bitmap_bit_p (&set, DF_REF_REGNO (use))
4506 && rtx_moveable_p (&PATTERN (insn), OP_IN))
4508 unsigned regno = DF_REF_REGNO (use);
4509 bitmap_set_bit (moveable, regno);
4510 bitmap_set_bit (&set, regno);
4511 bitmap_set_bit (&used, regno);
4512 bitmap_clear_bit (transp, regno);
4513 continue;
4515 FOR_EACH_INSN_INFO_USE (use, insn_info)
4517 unsigned regno = DF_REF_REGNO (use);
4518 bitmap_set_bit (&used, regno);
4519 if (bitmap_clear_bit (moveable, regno))
4520 bitmap_clear_bit (transp, regno);
4523 FOR_EACH_INSN_INFO_DEF (def, insn_info)
4525 unsigned regno = DF_REF_REGNO (def);
4526 bitmap_set_bit (&set, regno);
4527 bitmap_clear_bit (transp, regno);
4528 bitmap_clear_bit (moveable, regno);
4533 bitmap_clear (&live);
4534 bitmap_clear (&used);
4535 bitmap_clear (&set);
4537 FOR_EACH_BB_FN (bb, cfun)
4539 bitmap local = bb_local + bb->index;
4540 rtx_insn *insn;
4542 FOR_BB_INSNS (bb, insn)
4543 if (NONDEBUG_INSN_P (insn))
4545 df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
4546 rtx_insn *def_insn;
4547 rtx closest_use, note;
4548 df_ref def, use;
4549 unsigned regno;
4550 bool all_dominated, all_local;
4551 machine_mode mode;
4553 def = df_single_def (insn_info);
4554 /* There must be exactly one def in this insn. */
4555 if (!def || !single_set (insn))
4556 continue;
4557 /* This must be the only definition of the reg. We also limit
4558 which modes we deal with so that we can assume we can generate
4559 move instructions. */
4560 regno = DF_REF_REGNO (def);
4561 mode = GET_MODE (DF_REF_REG (def));
4562 if (DF_REG_DEF_COUNT (regno) != 1
4563 || !DF_REF_INSN_INFO (def)
4564 || HARD_REGISTER_NUM_P (regno)
4565 || DF_REG_EQ_USE_COUNT (regno) > 0
4566 || (!INTEGRAL_MODE_P (mode) && !FLOAT_MODE_P (mode)))
4567 continue;
4568 def_insn = DF_REF_INSN (def);
4570 for (note = REG_NOTES (def_insn); note; note = XEXP (note, 1))
4571 if (REG_NOTE_KIND (note) == REG_EQUIV && MEM_P (XEXP (note, 0)))
4572 break;
4574 if (note)
4576 if (dump_file)
4577 fprintf (dump_file, "Ignoring reg %d, has equiv memory\n",
4578 regno);
4579 bitmap_set_bit (&unusable_as_input, regno);
4580 continue;
4583 use = DF_REG_USE_CHAIN (regno);
4584 all_dominated = true;
4585 all_local = true;
4586 closest_use = NULL_RTX;
4587 for (; use; use = DF_REF_NEXT_REG (use))
4589 rtx_insn *insn;
4590 if (!DF_REF_INSN_INFO (use))
4592 all_dominated = false;
4593 all_local = false;
4594 break;
4596 insn = DF_REF_INSN (use);
4597 if (DEBUG_INSN_P (insn))
4598 continue;
4599 if (BLOCK_FOR_INSN (insn) != BLOCK_FOR_INSN (def_insn))
4600 all_local = false;
4601 if (!insn_dominated_by_p (insn, def_insn, uid_luid))
4602 all_dominated = false;
4603 if (closest_use != insn && closest_use != const0_rtx)
4605 if (closest_use == NULL_RTX)
4606 closest_use = insn;
4607 else if (insn_dominated_by_p (closest_use, insn, uid_luid))
4608 closest_use = insn;
4609 else if (!insn_dominated_by_p (insn, closest_use, uid_luid))
4610 closest_use = const0_rtx;
4613 if (!all_dominated)
4615 if (dump_file)
4616 fprintf (dump_file, "Reg %d not all uses dominated by set\n",
4617 regno);
4618 continue;
4620 if (all_local)
4621 bitmap_set_bit (local, regno);
4622 if (closest_use == const0_rtx || closest_use == NULL
4623 || next_nonnote_nondebug_insn (def_insn) == closest_use)
4625 if (dump_file)
4626 fprintf (dump_file, "Reg %d uninteresting%s\n", regno,
4627 closest_use == const0_rtx || closest_use == NULL
4628 ? " (no unique first use)" : "");
4629 continue;
4631 if (HAVE_cc0 && reg_referenced_p (cc0_rtx, PATTERN (closest_use)))
4633 if (dump_file)
4634 fprintf (dump_file, "Reg %d: closest user uses cc0\n",
4635 regno);
4636 continue;
4639 bitmap_set_bit (&interesting, regno);
4640 /* If we get here, we know closest_use is a non-NULL insn
4641 (as opposed to const0_rtx). */
4642 closest_uses[regno] = as_a <rtx_insn *> (closest_use);
4644 if (dump_file && (all_local || all_dominated))
4646 fprintf (dump_file, "Reg %u:", regno);
4647 if (all_local)
4648 fprintf (dump_file, " local to bb %d", bb->index);
4649 if (all_dominated)
4650 fprintf (dump_file, " def dominates all uses");
4651 if (closest_use != const0_rtx)
4652 fprintf (dump_file, " has unique first use");
4653 fputs ("\n", dump_file);
4658 EXECUTE_IF_SET_IN_BITMAP (&interesting, 0, i, bi)
4660 df_ref def = DF_REG_DEF_CHAIN (i);
4661 rtx_insn *def_insn = DF_REF_INSN (def);
4662 basic_block def_block = BLOCK_FOR_INSN (def_insn);
4663 bitmap def_bb_local = bb_local + def_block->index;
4664 bitmap def_bb_moveable = bb_moveable_reg_sets + def_block->index;
4665 bitmap def_bb_transp = bb_transp_live + def_block->index;
4666 bool local_to_bb_p = bitmap_bit_p (def_bb_local, i);
4667 rtx_insn *use_insn = closest_uses[i];
4668 df_ref use;
4669 bool all_ok = true;
4670 bool all_transp = true;
4672 if (!REG_P (DF_REF_REG (def)))
4673 continue;
4675 if (!local_to_bb_p)
4677 if (dump_file)
4678 fprintf (dump_file, "Reg %u not local to one basic block\n",
4679 i);
4680 continue;
4682 if (reg_equiv_init (i) != NULL_RTX)
4684 if (dump_file)
4685 fprintf (dump_file, "Ignoring reg %u with equiv init insn\n",
4686 i);
4687 continue;
4689 if (!rtx_moveable_p (&PATTERN (def_insn), OP_IN))
4691 if (dump_file)
4692 fprintf (dump_file, "Found def insn %d for %d to be not moveable\n",
4693 INSN_UID (def_insn), i);
4694 continue;
4696 if (dump_file)
4697 fprintf (dump_file, "Examining insn %d, def for %d\n",
4698 INSN_UID (def_insn), i);
4699 FOR_EACH_INSN_USE (use, def_insn)
4701 unsigned regno = DF_REF_REGNO (use);
4702 if (bitmap_bit_p (&unusable_as_input, regno))
4704 all_ok = false;
4705 if (dump_file)
4706 fprintf (dump_file, " found unusable input reg %u.\n", regno);
4707 break;
4709 if (!bitmap_bit_p (def_bb_transp, regno))
4711 if (bitmap_bit_p (def_bb_moveable, regno)
4712 && !control_flow_insn_p (use_insn)
4713 && (!HAVE_cc0 || !sets_cc0_p (use_insn)))
4715 if (modified_between_p (DF_REF_REG (use), def_insn, use_insn))
4717 rtx_insn *x = NEXT_INSN (def_insn);
4718 while (!modified_in_p (DF_REF_REG (use), x))
4720 gcc_assert (x != use_insn);
4721 x = NEXT_INSN (x);
4723 if (dump_file)
4724 fprintf (dump_file, " input reg %u modified but insn %d moveable\n",
4725 regno, INSN_UID (x));
4726 emit_insn_after (PATTERN (x), use_insn);
4727 set_insn_deleted (x);
4729 else
4731 if (dump_file)
4732 fprintf (dump_file, " input reg %u modified between def and use\n",
4733 regno);
4734 all_transp = false;
4737 else
4738 all_transp = false;
4741 if (!all_ok)
4742 continue;
4743 if (!dbg_cnt (ira_move))
4744 break;
4745 if (dump_file)
4746 fprintf (dump_file, " all ok%s\n", all_transp ? " and transp" : "");
4748 if (all_transp)
4750 rtx def_reg = DF_REF_REG (def);
4751 rtx newreg = ira_create_new_reg (def_reg);
4752 if (validate_change (def_insn, DF_REF_REAL_LOC (def), newreg, 0))
4754 unsigned nregno = REGNO (newreg);
4755 emit_insn_before (gen_move_insn (def_reg, newreg), use_insn);
4756 nregno -= max_regs;
4757 pseudo_replaced_reg[nregno] = def_reg;
4762 FOR_EACH_BB_FN (bb, cfun)
4764 bitmap_clear (bb_local + bb->index);
4765 bitmap_clear (bb_transp_live + bb->index);
4766 bitmap_clear (bb_moveable_reg_sets + bb->index);
4768 bitmap_clear (&interesting);
4769 bitmap_clear (&unusable_as_input);
4770 free (uid_luid);
4771 free (closest_uses);
4772 free (bb_local);
4773 free (bb_transp_live);
4774 free (bb_moveable_reg_sets);
4776 last_moveable_pseudo = max_reg_num ();
4778 fix_reg_equiv_init ();
4779 expand_reg_info ();
4780 regstat_free_n_sets_and_refs ();
4781 regstat_free_ri ();
4782 regstat_init_n_sets_and_refs ();
4783 regstat_compute_ri ();
4784 free_dominance_info (CDI_DOMINATORS);
4787 /* If SET pattern SET is an assignment from a hard register to a pseudo which
4788 is live at CALL_DOM (if non-NULL, otherwise this check is omitted), return
4789 the destination. Otherwise return NULL. */
4791 static rtx
4792 interesting_dest_for_shprep_1 (rtx set, basic_block call_dom)
4794 rtx src = SET_SRC (set);
4795 rtx dest = SET_DEST (set);
4796 if (!REG_P (src) || !HARD_REGISTER_P (src)
4797 || !REG_P (dest) || HARD_REGISTER_P (dest)
4798 || (call_dom && !bitmap_bit_p (df_get_live_in (call_dom), REGNO (dest))))
4799 return NULL;
4800 return dest;
4803 /* If INSN is interesting for parameter range-splitting shrink-wrapping
4804 preparation, i.e. it is a single set from a hard register to a pseudo that
4805 is live at CALL_DOM (if non-NULL, otherwise this check is omitted), or a
4806 parallel containing exactly one such set (besides USEs and CLOBBERs),
4807 return the destination. Otherwise return NULL. */
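/* For instance (a hypothetical sketch), a pattern such as

       (parallel [(set (reg 100) (reg:SI 0 ax))
		  (clobber (reg:CC flags))])

   qualifies: the CLOBBER is skipped and the destination of the single
   SET, pseudo 100, is returned. */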
4809 static rtx
4810 interesting_dest_for_shprep (rtx_insn *insn, basic_block call_dom)
4812 if (!INSN_P (insn))
4813 return NULL;
4814 rtx pat = PATTERN (insn);
4815 if (GET_CODE (pat) == SET)
4816 return interesting_dest_for_shprep_1 (pat, call_dom);
4818 if (GET_CODE (pat) != PARALLEL)
4819 return NULL;
4820 rtx ret = NULL;
4821 for (int i = 0; i < XVECLEN (pat, 0); i++)
4823 rtx sub = XVECEXP (pat, 0, i);
4824 if (GET_CODE (sub) == USE || GET_CODE (sub) == CLOBBER)
4825 continue;
4826 if (GET_CODE (sub) != SET
4827 || side_effects_p (sub))
4828 return NULL;
4829 rtx dest = interesting_dest_for_shprep_1 (sub, call_dom);
4830 if (dest && ret)
4831 return NULL;
4832 if (dest)
4833 ret = dest;
4835 return ret;
4838 /* Split the live ranges of pseudos that are loaded from hard registers in
4839 the first BB, at a BB that dominates all non-sibling calls, if such a BB
4840 can be found and is not in a loop. Return true if the function has made
4841 any changes. */
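/* A hypothetical sketch of the effect: if the first BB copies an
   incoming argument,

       first:      (set (reg 100) (reg:DI 5 di))

   and CALL_DOM is a BB that dominates every non-sibling call, then each
   use of pseudo 100 at or below CALL_DOM is rewritten to a fresh pseudo
   200, and

       (set (reg 200) (reg 100))

   is emitted at the start of CALL_DOM, so that only reg 200, not the
   argument copy itself, needs to live across the calls. */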
4843 static bool
4844 split_live_ranges_for_shrink_wrap (void)
4846 basic_block bb, call_dom = NULL;
4847 basic_block first = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
4848 rtx_insn *insn, *last_interesting_insn = NULL;
4849 bitmap_head need_new, reachable;
4850 vec<basic_block> queue;
4852 if (!SHRINK_WRAPPING_ENABLED)
4853 return false;
4855 bitmap_initialize (&need_new, 0);
4856 bitmap_initialize (&reachable, 0);
4857 queue.create (n_basic_blocks_for_fn (cfun));
4859 FOR_EACH_BB_FN (bb, cfun)
4860 FOR_BB_INSNS (bb, insn)
4861 if (CALL_P (insn) && !SIBLING_CALL_P (insn))
4863 if (bb == first)
4865 bitmap_clear (&need_new);
4866 bitmap_clear (&reachable);
4867 queue.release ();
4868 return false;
4871 bitmap_set_bit (&need_new, bb->index);
4872 bitmap_set_bit (&reachable, bb->index);
4873 queue.quick_push (bb);
4874 break;
4877 if (queue.is_empty ())
4879 bitmap_clear (&need_new);
4880 bitmap_clear (&reachable);
4881 queue.release ();
4882 return false;
4885 while (!queue.is_empty ())
4887 edge e;
4888 edge_iterator ei;
4890 bb = queue.pop ();
4891 FOR_EACH_EDGE (e, ei, bb->succs)
4892 if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
4893 && bitmap_set_bit (&reachable, e->dest->index))
4894 queue.quick_push (e->dest);
4896 queue.release ();
4898 FOR_BB_INSNS (first, insn)
4900 rtx dest = interesting_dest_for_shprep (insn, NULL);
4901 if (!dest)
4902 continue;
4904 if (DF_REG_DEF_COUNT (REGNO (dest)) > 1)
4906 bitmap_clear (&need_new);
4907 bitmap_clear (&reachable);
4908 return false;
4911 for (df_ref use = DF_REG_USE_CHAIN (REGNO(dest));
4912 use;
4913 use = DF_REF_NEXT_REG (use))
4915 int ubbi = DF_REF_BB (use)->index;
4916 if (bitmap_bit_p (&reachable, ubbi))
4917 bitmap_set_bit (&need_new, ubbi);
4919 last_interesting_insn = insn;
4922 bitmap_clear (&reachable);
4923 if (!last_interesting_insn)
4925 bitmap_clear (&need_new);
4926 return false;
4929 call_dom = nearest_common_dominator_for_set (CDI_DOMINATORS, &need_new);
4930 bitmap_clear (&need_new);
4931 if (call_dom == first)
4932 return false;
4934 loop_optimizer_init (AVOID_CFG_MODIFICATIONS);
4935 while (bb_loop_depth (call_dom) > 0)
4936 call_dom = get_immediate_dominator (CDI_DOMINATORS, call_dom);
4937 loop_optimizer_finalize ();
4939 if (call_dom == first)
4940 return false;
4942 calculate_dominance_info (CDI_POST_DOMINATORS);
4943 if (dominated_by_p (CDI_POST_DOMINATORS, first, call_dom))
4945 free_dominance_info (CDI_POST_DOMINATORS);
4946 return false;
4948 free_dominance_info (CDI_POST_DOMINATORS);
4950 if (dump_file)
4951 fprintf (dump_file, "Will split live ranges of parameters at BB %i\n",
4952 call_dom->index);
4954 bool ret = false;
4955 FOR_BB_INSNS (first, insn)
4957 rtx dest = interesting_dest_for_shprep (insn, call_dom);
4958 if (!dest || dest == pic_offset_table_rtx)
4959 continue;
4961 rtx newreg = NULL_RTX;
4962 df_ref use, next;
4963 for (use = DF_REG_USE_CHAIN (REGNO (dest)); use; use = next)
4965 rtx_insn *uin = DF_REF_INSN (use);
4966 next = DF_REF_NEXT_REG (use);
4968 basic_block ubb = BLOCK_FOR_INSN (uin);
4969 if (ubb == call_dom
4970 || dominated_by_p (CDI_DOMINATORS, ubb, call_dom))
4972 if (!newreg)
4973 newreg = ira_create_new_reg (dest);
4974 validate_change (uin, DF_REF_REAL_LOC (use), newreg, true);
4978 if (newreg)
4980 rtx_insn *new_move = gen_move_insn (newreg, dest);
4981 emit_insn_after (new_move, bb_note (call_dom));
4982 if (dump_file)
4984 fprintf (dump_file, "Split live-range of register ");
4985 print_rtl_single (dump_file, dest);
4987 ret = true;
4990 if (insn == last_interesting_insn)
4991 break;
4993 apply_change_group ();
4994 return ret;
4997 /* Perform the second half of the transformation started in
4998 find_moveable_pseudos. We look for instances where the newly introduced
4999 pseudo remains unallocated, and remove it by moving the definition to
5000 just before its use, replacing the move instruction generated by
5001 find_moveable_pseudos. */
5002 static void
5003 move_unallocated_pseudos (void)
5005 int i;
5006 for (i = first_moveable_pseudo; i < last_moveable_pseudo; i++)
5007 if (reg_renumber[i] < 0)
5009 int idx = i - first_moveable_pseudo;
5010 rtx other_reg = pseudo_replaced_reg[idx];
5011 rtx_insn *def_insn = DF_REF_INSN (DF_REG_DEF_CHAIN (i));
5012 /* The use must follow all definitions of OTHER_REG, so we can
5013 insert the new definition immediately after any of them. */
5014 df_ref other_def = DF_REG_DEF_CHAIN (REGNO (other_reg));
5015 rtx_insn *move_insn = DF_REF_INSN (other_def);
5016 rtx_insn *newinsn = emit_insn_after (PATTERN (def_insn), move_insn);
5017 rtx set;
5018 int success;
5020 if (dump_file)
5021 fprintf (dump_file, "moving def of %d (insn %d now) ",
5022 REGNO (other_reg), INSN_UID (def_insn));
5024 delete_insn (move_insn);
5025 while ((other_def = DF_REG_DEF_CHAIN (REGNO (other_reg))))
5026 delete_insn (DF_REF_INSN (other_def));
5027 delete_insn (def_insn);
5029 set = single_set (newinsn);
5030 success = validate_change (newinsn, &SET_DEST (set), other_reg, 0);
5031 gcc_assert (success);
5032 if (dump_file)
5033 fprintf (dump_file, " %d) rather than keep unallocated replacement %d\n",
5034 INSN_UID (newinsn), i);
5035 SET_REG_N_REFS (i, 0);
5039 /* If the backend knows where to allocate pseudos for hard
5040 register initial values, register these allocations now. */
5041 static void
5042 allocate_initial_values (void)
5044 if (targetm.allocate_initial_value)
5046 rtx hreg, preg, x;
5047 int i, regno;
5049 for (i = 0; HARD_REGISTER_NUM_P (i); i++)
5051 if (! initial_value_entry (i, &hreg, &preg))
5052 break;
5054 x = targetm.allocate_initial_value (hreg);
5055 regno = REGNO (preg);
5056 if (x && REG_N_SETS (regno) <= 1)
5058 if (MEM_P (x))
5059 reg_equiv_memory_loc (regno) = x;
5060 else
5062 basic_block bb;
5063 int new_regno;
5065 gcc_assert (REG_P (x));
5066 new_regno = REGNO (x);
5067 reg_renumber[regno] = new_regno;
5068 /* Poke the regno right into regno_reg_rtx so that even
5069 fixed regs are accepted. */
5070 SET_REGNO (preg, new_regno);
5071 /* Update global register liveness information. */
5072 FOR_EACH_BB_FN (bb, cfun)
5074 if (REGNO_REG_SET_P (df_get_live_in (bb), regno))
5075 SET_REGNO_REG_SET (df_get_live_in (bb), new_regno);
5076 if (REGNO_REG_SET_P (df_get_live_out (bb), regno))
5077 SET_REGNO_REG_SET (df_get_live_out (bb), new_regno);
5083 gcc_checking_assert (! initial_value_entry (FIRST_PSEUDO_REGISTER,
5084 &hreg, &preg));
5089 /* True when we use LRA instead of reload pass for the current
5090 function. */
5091 bool ira_use_lra_p;
5093 /* True if we have allocno conflicts. It is false for non-optimized
5094 mode or when the conflict table is too big. */
5095 bool ira_conflicts_p;
5097 /* Saved between IRA and reload. */
5098 static int saved_flag_ira_share_spill_slots;
5100 /* This is the main entry of IRA. */
5101 static void
5102 ira (FILE *f)
5104 bool loops_p;
5105 int ira_max_point_before_emit;
5106 int rebuild_p;
5107 bool saved_flag_caller_saves = flag_caller_saves;
5108 enum ira_region saved_flag_ira_region = flag_ira_region;
5110 /* Perform target specific PIC register initialization. */
5111 targetm.init_pic_reg ();
5113 ira_conflicts_p = optimize > 0;
5115 ira_use_lra_p = targetm.lra_p ();
5116 /* If there are too many pseudos and/or basic blocks (e.g. 10K
5117 pseudos and 10K blocks or 100K pseudos and 1K blocks), we will
5118 use simplified and faster algorithms in LRA. */
5119 lra_simple_p
5120 = (ira_use_lra_p
5121 && max_reg_num () >= (1 << 26) / last_basic_block_for_fn (cfun));
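/* A worked instance of that threshold: (1 << 26) is 67108864, so a
   function with 10000 basic blocks takes the simplified path once it
   has more than about 6710 pseudos, and one with 1000 blocks once it
   has more than about 67108 pseudos. */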
5122 if (lra_simple_p)
5124 /* This permits LRA to skip live range splitting. */
5125 flag_caller_saves = false;
5126 /* There is no point in doing regional allocation when we use
5127 the simplified LRA. */
5128 flag_ira_region = IRA_REGION_ONE;
5129 ira_conflicts_p = false;
5132 #ifndef IRA_NO_OBSTACK
5133 gcc_obstack_init (&ira_obstack);
5134 #endif
5135 bitmap_obstack_initialize (&ira_bitmap_obstack);
5137 /* LRA uses its own infrastructure to handle caller save registers. */
5138 if (flag_caller_saves && !ira_use_lra_p)
5139 init_caller_save ();
5141 if (flag_ira_verbose < 10)
5143 internal_flag_ira_verbose = flag_ira_verbose;
5144 ira_dump_file = f;
5146 else
5148 internal_flag_ira_verbose = flag_ira_verbose - 10;
5149 ira_dump_file = stderr;
5152 setup_prohibited_mode_move_regs ();
5153 decrease_live_ranges_number ();
5154 df_note_add_problem ();
5156 /* DF_LIVE can't be used in the register allocator, too many other
5157 parts of the compiler depend on using the "classic" liveness
5158 interpretation of the DF_LR problem. See PR38711.
5159 Remove the problem, so that we don't spend time updating it in
5160 any of the df_analyze() calls during IRA/LRA. */
5161 if (optimize > 1)
5162 df_remove_problem (df_live);
5163 gcc_checking_assert (df_live == NULL);
5165 #ifdef ENABLE_CHECKING
5166 df->changeable_flags |= DF_VERIFY_SCHEDULED;
5167 #endif
5168 df_analyze ();
5170 init_reg_equiv ();
5171 if (ira_conflicts_p)
5173 calculate_dominance_info (CDI_DOMINATORS);
5175 if (split_live_ranges_for_shrink_wrap ())
5176 df_analyze ();
5178 free_dominance_info (CDI_DOMINATORS);
5181 df_clear_flags (DF_NO_INSN_RESCAN);
5183 regstat_init_n_sets_and_refs ();
5184 regstat_compute_ri ();
5186 /* If we are not optimizing, then this is the only place before
5187 register allocation where dataflow is done. And that is needed
5188 to generate these warnings. */
5189 if (warn_clobbered)
5190 generate_setjmp_warnings ();
5192 /* Determine if the current function is a leaf before running IRA
5193 since this can impact optimizations done by the prologue and
5194 epilogue thus changing register elimination offsets. */
5195 crtl->is_leaf = leaf_function_p ();
5197 if (resize_reg_info () && flag_ira_loop_pressure)
5198 ira_set_pseudo_classes (true, ira_dump_file);
5200 rebuild_p = update_equiv_regs ();
5201 setup_reg_equiv ();
5202 setup_reg_equiv_init ();
5204 if (optimize && rebuild_p)
5206 timevar_push (TV_JUMP);
5207 rebuild_jump_labels (get_insns ());
5208 if (purge_all_dead_edges ())
5209 delete_unreachable_blocks ();
5210 timevar_pop (TV_JUMP);
5213 allocated_reg_info_size = max_reg_num ();
5215 if (delete_trivially_dead_insns (get_insns (), max_reg_num ()))
5216 df_analyze ();
5218 /* It is not worth doing this improvement when we use simple
5219 allocation because of -O0 usage or because the function is too
5220 big. */
5221 if (ira_conflicts_p)
5222 find_moveable_pseudos ();
5224 max_regno_before_ira = max_reg_num ();
5225 ira_setup_eliminable_regset ();
5227 ira_overall_cost = ira_reg_cost = ira_mem_cost = 0;
5228 ira_load_cost = ira_store_cost = ira_shuffle_cost = 0;
5229 ira_move_loops_num = ira_additional_jumps_num = 0;
5231 ira_assert (current_loops == NULL);
5232 if (flag_ira_region == IRA_REGION_ALL || flag_ira_region == IRA_REGION_MIXED)
5233 loop_optimizer_init (AVOID_CFG_MODIFICATIONS | LOOPS_HAVE_RECORDED_EXITS);
5235 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
5236 fprintf (ira_dump_file, "Building IRA IR\n");
5237 loops_p = ira_build ();
5239 ira_assert (ira_conflicts_p || !loops_p);
5241 saved_flag_ira_share_spill_slots = flag_ira_share_spill_slots;
5242 if (too_high_register_pressure_p () || cfun->calls_setjmp)
5243 /* It just wastes the compiler's time to pack spilled pseudos into
5244 stack slots in this case -- prohibit it. We also do this if
5245 there is a setjmp call: for a variable not modified between
5246 setjmp and longjmp the compiler is required to preserve its
5247 value, and sharing slots does not guarantee that. */
5248 flag_ira_share_spill_slots = FALSE;
5250 ira_color ();
5252 ira_max_point_before_emit = ira_max_point;
5254 ira_initiate_emit_data ();
5256 ira_emit (loops_p);
5258 max_regno = max_reg_num ();
5259 if (ira_conflicts_p)
5261 if (! loops_p)
5263 if (! ira_use_lra_p)
5264 ira_initiate_assign ();
5266 else
5268 expand_reg_info ();
5270 if (ira_use_lra_p)
5272 ira_allocno_t a;
5273 ira_allocno_iterator ai;
5275 FOR_EACH_ALLOCNO (a, ai)
5277 int old_regno = ALLOCNO_REGNO (a);
5278 int new_regno = REGNO (ALLOCNO_EMIT_DATA (a)->reg);
5280 ALLOCNO_REGNO (a) = new_regno;
5282 if (old_regno != new_regno)
5283 setup_reg_classes (new_regno, reg_preferred_class (old_regno),
5284 reg_alternate_class (old_regno),
5285 reg_allocno_class (old_regno));
5289 else
5291 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
5292 fprintf (ira_dump_file, "Flattening IR\n");
5293 ira_flattening (max_regno_before_ira, ira_max_point_before_emit);
5295 /* New insns were generated: add notes and recalculate live
5296 info. */
5297 df_analyze ();
5299 /* ??? Rebuild the loop tree, but why? Does the loop tree
5300 change if new insns were generated? Can that be handled
5301 by updating the loop tree incrementally? */
5302 loop_optimizer_finalize ();
5303 free_dominance_info (CDI_DOMINATORS);
5304 loop_optimizer_init (AVOID_CFG_MODIFICATIONS
5305 | LOOPS_HAVE_RECORDED_EXITS);
5307 if (! ira_use_lra_p)
5309 setup_allocno_assignment_flags ();
5310 ira_initiate_assign ();
5311 ira_reassign_conflict_allocnos (max_regno);
5316 ira_finish_emit_data ();
5318 setup_reg_renumber ();
5320 calculate_allocation_cost ();
5322 #ifdef ENABLE_IRA_CHECKING
5323 if (ira_conflicts_p)
5324 check_allocation ();
5325 #endif
5327 if (max_regno != max_regno_before_ira)
5329 regstat_free_n_sets_and_refs ();
5330 regstat_free_ri ();
5331 regstat_init_n_sets_and_refs ();
5332 regstat_compute_ri ();
5335 overall_cost_before = ira_overall_cost;
5336 if (! ira_conflicts_p)
5337 grow_reg_equivs ();
5338 else
5340 fix_reg_equiv_init ();
5342 #ifdef ENABLE_IRA_CHECKING
5343 print_redundant_copies ();
5344 #endif
5345 if (! ira_use_lra_p)
5347 ira_spilled_reg_stack_slots_num = 0;
5348 ira_spilled_reg_stack_slots
5349 = ((struct ira_spilled_reg_stack_slot *)
5350 ira_allocate (max_regno
5351 * sizeof (struct ira_spilled_reg_stack_slot)));
5352 memset (ira_spilled_reg_stack_slots, 0,
5353 max_regno * sizeof (struct ira_spilled_reg_stack_slot));
5356 allocate_initial_values ();
5358 /* See comment for find_moveable_pseudos call. */
5359 if (ira_conflicts_p)
5360 move_unallocated_pseudos ();
5362 /* Restore original values. */
5363 if (lra_simple_p)
5365 flag_caller_saves = saved_flag_caller_saves;
5366 flag_ira_region = saved_flag_ira_region;
5370 static void
5371 do_reload (void)
5373 basic_block bb;
5374 bool need_dce;
5375 unsigned pic_offset_table_regno = INVALID_REGNUM;
5377 if (flag_ira_verbose < 10)
5378 ira_dump_file = dump_file;
5380 /* If pic_offset_table_rtx is a pseudo register, then keep it a
5381 pseudo after reload, to avoid possible wrong usages of the hard
5382 reg assigned to it. */
5383 if (pic_offset_table_rtx
5384 && REGNO (pic_offset_table_rtx) >= FIRST_PSEUDO_REGISTER)
5385 pic_offset_table_regno = REGNO (pic_offset_table_rtx);
5387 timevar_push (TV_RELOAD);
5388 if (ira_use_lra_p)
5390 if (current_loops != NULL)
5392 loop_optimizer_finalize ();
5393 free_dominance_info (CDI_DOMINATORS);
5395 FOR_ALL_BB_FN (bb, cfun)
5396 bb->loop_father = NULL;
5397 current_loops = NULL;
5399 ira_destroy ();
5401 lra (ira_dump_file);
5402 /* ???!!! Move it before lra () when we use ira_reg_equiv in
5403 LRA. */
5404 vec_free (reg_equivs);
5405 reg_equivs = NULL;
5406 need_dce = false;
5408 else
5410 df_set_flags (DF_NO_INSN_RESCAN);
5411 build_insn_chain ();
5413 need_dce = reload (get_insns (), ira_conflicts_p);
5417 timevar_pop (TV_RELOAD);
5419 timevar_push (TV_IRA);
5421 if (ira_conflicts_p && ! ira_use_lra_p)
5423 ira_free (ira_spilled_reg_stack_slots);
5424 ira_finish_assign ();
5427 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL
5428 && overall_cost_before != ira_overall_cost)
5429 fprintf (ira_dump_file, "+++Overall after reload %" PRId64 "\n",
5430 ira_overall_cost);
5432 flag_ira_share_spill_slots = saved_flag_ira_share_spill_slots;
5434 if (! ira_use_lra_p)
5436 ira_destroy ();
5437 if (current_loops != NULL)
5439 loop_optimizer_finalize ();
5440 free_dominance_info (CDI_DOMINATORS);
5442 FOR_ALL_BB_FN (bb, cfun)
5443 bb->loop_father = NULL;
5444 current_loops = NULL;
5446 regstat_free_ri ();
5447 regstat_free_n_sets_and_refs ();
5450 if (optimize)
5451 cleanup_cfg (CLEANUP_EXPENSIVE);
5453 finish_reg_equiv ();
5455 bitmap_obstack_release (&ira_bitmap_obstack);
5456 #ifndef IRA_NO_OBSTACK
5457 obstack_free (&ira_obstack, NULL);
5458 #endif
5460 /* The code after the reload has changed so much that at this point
5461 we might as well just rescan everything. Note that
5462 df_rescan_all_insns is not going to help here because it does not
5463 touch the artificial uses and defs. */
5464 df_finish_pass (true);
5465 df_scan_alloc (NULL);
5466 df_scan_blocks ();
5468 if (optimize > 1)
5470 df_live_add_problem ();
5471 df_live_set_all_dirty ();
5474 if (optimize)
5475 df_analyze ();
5477 if (need_dce && optimize)
5478 run_fast_dce ();
5480 /* Diagnose uses of the hard frame pointer when it is used as a global
5481 register. Often we can get away with letting the user appropriate
5482 the frame pointer, but we should let them know when code generation
5483 makes that impossible. */
5484 if (global_regs[HARD_FRAME_POINTER_REGNUM] && frame_pointer_needed)
5486 tree decl = global_regs_decl[HARD_FRAME_POINTER_REGNUM];
5487 error_at (DECL_SOURCE_LOCATION (current_function_decl),
5488 "frame pointer required, but reserved");
5489 inform (DECL_SOURCE_LOCATION (decl), "for %qD", decl);
5492 if (pic_offset_table_regno != INVALID_REGNUM)
5493 pic_offset_table_rtx = gen_rtx_REG (Pmode, pic_offset_table_regno);
5495 timevar_pop (TV_IRA);
5498 /* Run the integrated register allocator. */
5500 namespace {
5502 const pass_data pass_data_ira =
5504 RTL_PASS, /* type */
5505 "ira", /* name */
5506 OPTGROUP_NONE, /* optinfo_flags */
5507 TV_IRA, /* tv_id */
5508 0, /* properties_required */
5509 0, /* properties_provided */
5510 0, /* properties_destroyed */
5511 0, /* todo_flags_start */
5512 TODO_do_not_ggc_collect, /* todo_flags_finish */
5515 class pass_ira : public rtl_opt_pass
5517 public:
5518 pass_ira (gcc::context *ctxt)
5519 : rtl_opt_pass (pass_data_ira, ctxt)
5522 /* opt_pass methods: */
5523 virtual bool gate (function *)
5525 return !targetm.no_register_allocation;
5527 virtual unsigned int execute (function *)
5529 ira (dump_file);
5530 return 0;
5533 }; // class pass_ira
5535 } // anon namespace
5537 rtl_opt_pass *
5538 make_pass_ira (gcc::context *ctxt)
5540 return new pass_ira (ctxt);
5543 namespace {
5545 const pass_data pass_data_reload =
5547 RTL_PASS, /* type */
5548 "reload", /* name */
5549 OPTGROUP_NONE, /* optinfo_flags */
5550 TV_RELOAD, /* tv_id */
5551 0, /* properties_required */
5552 0, /* properties_provided */
5553 0, /* properties_destroyed */
5554 0, /* todo_flags_start */
5555 0, /* todo_flags_finish */
5558 class pass_reload : public rtl_opt_pass
5560 public:
5561 pass_reload (gcc::context *ctxt)
5562 : rtl_opt_pass (pass_data_reload, ctxt)
5565 /* opt_pass methods: */
5566 virtual bool gate (function *)
5568 return !targetm.no_register_allocation;
5570 virtual unsigned int execute (function *)
5572 do_reload ();
5573 return 0;
5576 }; // class pass_reload
5578 } // anon namespace
5580 rtl_opt_pass *
5581 make_pass_reload (gcc::context *ctxt)
5583 return new pass_reload (ctxt);