/* Integrated Register Allocator (IRA) entry point.
   Copyright (C) 2006-2015 Free Software Foundation, Inc.
   Contributed by Vladimir Makarov <vmakarov@redhat.com>.

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */
/* The integrated register allocator (IRA) is a
   regional register allocator performing graph coloring on a top-down
   traversal of nested regions.  Graph coloring in a region is based
   on the Chaitin-Briggs algorithm.  It is called integrated because
   register coalescing, register live range splitting, and choosing a
   better hard register are done on-the-fly during coloring.  Register
   coalescing and choosing a cheaper hard register are done by hard
   register preferencing during hard register assignment.  The live
   range splitting is a byproduct of the regional register allocation.

   Major IRA notions are:

     o *Region* is a part of the CFG where graph coloring based on the
       Chaitin-Briggs algorithm is done.  IRA can work on any set of
       nested CFG regions forming a tree.  Currently the regions are
       the entire function for the root region and natural loops for
       the other regions.  Therefore the data structure representing a
       region is called loop_tree_node.

     o *Allocno class* is a register class used for allocation of a
       given allocno.  It means that only a hard register of the given
       register class can be assigned to the given allocno.  In
       reality, an even smaller subset of (*profitable*) hard
       registers can be assigned.  In rare cases, the subset can be
       even smaller because our modification of the Chaitin-Briggs
       algorithm requires that the sets of hard registers that can be
       assigned to allocnos form a forest, i.e. the sets can be
       ordered in a way where any previous set either does not
       intersect the given set or is a superset of it.
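
       As an illustration only (plain C with invented names, not
       IRA's own data structures), the forest property can be checked
       over hard register sets represented as bit masks: every pair
       of sets must be disjoint or ordered by inclusion.  A minimal
       sketch:

         // Toy model: each set of assignable hard registers is a
         // bit mask.  The sets form a forest iff every pair is
         // disjoint or one contains the other.
         static int
         forms_forest_p (const unsigned *sets, int n)
         {
           for (int i = 0; i < n; i++)
             for (int j = i + 1; j < n; j++)
               {
                 unsigned common = sets[i] & sets[j];
                 if (common != 0 && common != sets[i] && common != sets[j])
                   return 0;  // Partial overlap: not a forest.
               }
           return 1;
         }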
     o *Pressure class* is a register class belonging to a set of
       register classes containing all of the hard-registers available
       for register allocation.  The set of all pressure classes for a
       target is defined in the corresponding machine-description file
       according to some criteria.  Register pressure is calculated
       only for pressure classes and it affects some IRA decisions
       such as forming allocation regions.

     o *Allocno* represents the live range of a pseudo-register in a
       region.  Besides the obvious attributes like the corresponding
       pseudo-register number, allocno class, conflicting allocnos and
       conflicting hard-registers, there are a few allocno attributes
       which are important for understanding the allocation algorithm:

       - *Live ranges*.  This is a list of ranges of *program points*
         where the allocno lives.  Program points represent places
         where a pseudo can be born or become dead (there are
         approximately two times more program points than insns)
         and they are represented by integers starting with 0.  The
         live ranges are used to find conflicts between allocnos.
         They also play a very important role for the transformation
         of the IRA internal representation of several regions into a
         one region representation.  The latter is used during the
         reload pass work because each allocno represents all of the
         corresponding pseudo-registers.

       - *Hard-register costs*.  This is a vector of size equal to the
         number of available hard-registers of the allocno class.  The
         cost of a callee-clobbered hard-register for an allocno is
         increased by the cost of save/restore code around the calls
         through the given allocno's life.  If the allocno is a move
         instruction operand and another operand is a hard-register of
         the allocno class, the cost of the hard-register is decreased
         by the move cost.

         When an allocno is assigned, the hard-register with minimal
         full cost is used.  Initially, a hard-register's full cost is
         the corresponding value from the hard-register's cost vector.
         If the allocno is connected by a *copy* (see below) to
         another allocno which has just received a hard-register, the
         cost of the hard-register is decreased.  Before choosing a
         hard-register for an allocno, the allocno's current costs of
         the hard-registers are modified by the conflict hard-register
         costs of all of the conflicting allocnos which are not
         assigned yet.

       - *Conflict hard-register costs*.  This is a vector of the same
         size as the hard-register costs vector.  To permit an
         unassigned allocno to get a better hard-register, IRA uses
         this vector to calculate the final full cost of the
         available hard-registers.  Conflict hard-register costs of an
         unassigned allocno are also changed with a change of the
         hard-register cost of the allocno when a copy involving the
         allocno is processed as described above.  This is done to
         show other unassigned allocnos that a given allocno prefers
         some hard-registers in order to remove the move instruction
         corresponding to the copy.
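
         For illustration (the function and parameter names below are
         invented for the sketch, not the allocno fields themselves),
         the chosen hard register minimizes the hard-register cost
         plus the conflict cost:

           // Toy model: pick the hard register with minimal full
           // cost, i.e. its own cost plus the cost contributed by
           // still-unassigned conflicting allocnos.
           static int
           cheapest_hard_reg (const int *hard_reg_cost,
                              const int *conflict_cost, int nregs)
           {
             int best = -1, best_cost = 0;
             for (int h = 0; h < nregs; h++)
               {
                 int full_cost = hard_reg_cost[h] + conflict_cost[h];
                 if (best < 0 || full_cost < best_cost)
                   {
                     best = h;
                     best_cost = full_cost;
                   }
               }
             return best;
           }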
     o *Cap*.  If a pseudo-register does not live in a region but
       lives in a nested region, IRA creates a special allocno called
       a cap in the outer region.  A region cap is also created for a
       subregion cap.

     o *Copy*.  Allocnos can be connected by copies.  Copies are used
       to modify hard-register costs for allocnos during coloring.
       Such modifications reflect a preference to use the same
       hard-register for the allocnos connected by copies.  Usually
       copies are created for move insns (in this case it results in
       register coalescing).  But IRA also creates copies for operands
       of an insn which should be assigned to the same hard-register
       due to constraints in the machine description (it usually
       results in removing a move generated in reload to satisfy
       the constraints) and copies referring to the allocno which is
       the output operand of an instruction and the allocno which is
       an input operand dying in the instruction (creation of such
       copies results in less register shuffling).  IRA *does not*
       create copies between the same register allocnos from different
       regions because we use another technique for propagating
       hard-register preference on the borders of regions.
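
       A sketch of the cost bias a copy produces (the function and
       its parameters are invented for illustration): when one end of
       a copy receives hard register HREG, the other end's cost for
       HREG drops by the execution-frequency-weighted move cost,
       which makes coalescing on HREG likely:

         // Toy model: bias the costs of the allocno on the other
         // end of a copy executed FREQ times, after its partner
         // received hard register HREG.
         static void
         apply_copy_preference (int *hard_reg_cost, int hreg,
                                int freq, int move_cost)
         {
           hard_reg_cost[hreg] -= freq * move_cost;
         }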
   Allocnos (including caps) for the upper region in the region tree
   *accumulate* information important for coloring from allocnos with
   the same pseudo-register from nested regions.  This includes
   hard-register and memory costs, conflicts with hard-registers,
   allocno conflicts, allocno copies and more.  *Thus, attributes for
   allocnos in a region have the same values as if the region had no
   subregions*.  It means that attributes for allocnos in the
   outermost region corresponding to the function have the same values
   as though the allocation used only one region which is the entire
   function.  It also means that we can look at IRA's work as if IRA
   first did the allocation for the whole function, then improved it
   for loops, then their subloops, and so on.
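
   The accumulation can be pictured as a bottom-up walk of the region
   tree (a sketch with an invented node structure, not loop_tree_node
   itself): a parent's cost vector ends up as the sum over its
   subtree:

     // Toy region node: costs accumulate bottom-up, so a parent
     // sees the summed costs of all its subregions.
     struct toy_region
     {
       struct toy_region *child, *next;  // First child, next sibling.
       int cost[8];                      // Per-hard-register costs.
     };

     static void
     accumulate_costs (struct toy_region *r)
     {
       for (struct toy_region *c = r->child; c != NULL; c = c->next)
         {
           accumulate_costs (c);
           for (int h = 0; h < 8; h++)
             r->cost[h] += c->cost[h];
         }
     }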
   The major IRA passes are:

     o Building the IRA internal representation, which consists of the
       following subpasses:

       * First, IRA builds regions and creates allocnos (file
         ira-build.c) and initializes most of their attributes.

       * Then IRA finds an allocno class for each allocno and
         calculates its initial (non-accumulated) cost of memory and
         each hard-register of its allocno class (file ira-cost.c).

       * IRA creates live ranges of each allocno, calculates register
         pressure for each pressure class in each region, sets up
         conflict hard registers for each allocno and info about calls
         the allocno lives through (file ira-lives.c).

       * IRA removes low register pressure loops from the regions
         mostly to speed IRA up (file ira-build.c).

       * IRA propagates accumulated allocno info from lower region
         allocnos to corresponding upper region allocnos (file
         ira-build.c).

       * IRA creates all caps (file ira-build.c).

       * Having live-ranges of allocnos and their classes, IRA creates
         conflicting allocnos for each allocno.  Conflicting allocnos
         are stored as a bit vector or an array of pointers to the
         conflicting allocnos, whichever is more profitable (file
         ira-conflicts.c).  At this point IRA creates allocno copies.

     o Coloring.  Now IRA has all the necessary info to start the
       graph coloring process.  It is done in each region in a
       top-down traversal of the region tree (file ira-color.c).
       There are the following subpasses:

       * Finding profitable hard registers of the corresponding
         allocno class for each allocno.  For example, only
         callee-saved hard registers are frequently profitable for
         allocnos living through calls.  If the profitable hard
         register set of an allocno does not form a tree based on the
         subset relation, we use some approximation to form the tree.
         This approximation is used to figure out trivial
         colorability of allocnos.  The approximation is a pretty
         rare case.

       * Putting allocnos onto the coloring stack.  IRA uses Briggs
         optimistic coloring which is a major improvement over
         Chaitin's coloring.  Therefore IRA does not spill allocnos at
         this point.  There is some freedom in the order of putting
         allocnos on the stack which can affect the final result of
         the allocation.  IRA uses some heuristics to improve the
         order.  The major one is to form *threads* from colorable
         allocnos and push them on the stack by threads.  A thread is
         a set of non-conflicting colorable allocnos connected by
         copies.  The thread contains allocnos from the colorable
         bucket or colorable allocnos already pushed onto the coloring
         stack.  Pushing thread allocnos one after another onto the
         stack increases the chances of removing copies when the
         allocnos get the same hard reg.

         We also use a modification of the Chaitin-Briggs algorithm
         which works for intersected register classes of allocnos.  To
         figure out trivial colorability of allocnos, the
         above-mentioned tree of hard register sets is used.  To get
         an idea how the algorithm works in an i386 example, let us
         consider an allocno to which any general hard register can be
         assigned.  If the allocno conflicts with eight allocnos to
         which only the EAX register can be assigned, the given
         allocno is still trivially colorable because all conflicting
         allocnos might be assigned only to EAX and all other general
         hard registers are still free.
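
         The arithmetic behind the i386 example can be made concrete
         (a simplification of IRA's criterion, for illustration
         only): conflicts confined to a single register occupy at
         most that one register, however many of them there are:

           // Toy check: an allocno with AVAIL profitable hard
           // registers stays trivially colorable when its conflicts
           // can occupy fewer than AVAIL of those registers.
           static int
           trivially_colorable_p (int avail, int conflict_regs)
           {
             return conflict_regs < avail;
           }
           // Example: avail = 8 general registers; 8 conflicting
           // allocnos all confined to EAX occupy 1 register, and
           // 1 < 8, so the allocno is still trivially colorable.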
         To get an idea of the used trivial colorability criterion, it
         is also useful to read the article "Graph-Coloring Register
         Allocation for Irregular Architectures" by Michael D. Smith
         and Glenn Holloway.  The major difference between the
         article's approach and the approach used in IRA is that
         Smith's approach takes register classes only from the machine
         description while IRA calculates register classes from the
         intermediate code too (e.g. an explicit usage of hard
         registers in RTL code for parameter passing can result in
         creation of additional register classes which contain or
         exclude the hard registers).  That makes the IRA approach
         useful for improving coloring even for architectures with
         regular register files, and in fact some benchmarking shows
         the improvement for regular class architectures is even
         bigger than for irregular ones.  Another difference is that
         Smith's approach chooses the intersection of classes of all
         insn operands in which a given pseudo occurs.  IRA can use
         bigger classes if doing so is still more profitable than
         memory usage.

       * Popping the allocnos from the stack and assigning them hard
         registers.  If IRA cannot assign a hard register to an
         allocno and the allocno is coalesced, IRA undoes the
         coalescing and puts the uncoalesced allocnos onto the stack
         in the hope that some such allocnos will get a hard register
         separately.  If IRA fails to assign a hard register or memory
         is more profitable for it, IRA spills the allocno.  IRA
         assigns the allocno the hard-register with minimal full
         allocation cost which reflects the cost of usage of the
         hard-register for the allocno and the cost of usage of the
         hard-register for allocnos conflicting with the given
         allocno.

       * Chaitin-Briggs coloring assigns as many pseudos as possible
         to hard registers.  After coloring we try to improve the
         allocation from the cost point of view.  We improve the
         allocation by spilling some allocnos and assigning the freed
         hard registers to other allocnos if it decreases the overall
         allocation cost.

       * After allocno assignment in the region, IRA modifies the hard
         register and memory costs for the corresponding allocnos in
         the subregions to reflect the cost of possible loads, stores,
         or moves on the border of the region and its subregions.
         When the default regional allocation algorithm is used
         (-fira-algorithm=mixed), IRA just propagates the assignment
         for allocnos if the register pressure in the region for the
         corresponding pressure class is less than the number of
         available hard registers for the given pressure class.
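
         A sketch of that propagation condition (names invented for
         the illustration): the assignment is simply inherited when
         pressure never exceeds supply in any relevant pressure
         class:

           // Toy model of the -fira-algorithm=mixed condition.
           static int
           propagate_assignment_p (const int *region_pressure,
                                   const int *available_regs,
                                   int num_pressure_classes)
           {
             for (int c = 0; c < num_pressure_classes; c++)
               if (region_pressure[c] >= available_regs[c])
                 return 0;  // High pressure: recolor the subregion.
             return 1;
           }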
     o Spill/restore code moving.  When IRA performs an allocation
       by traversing regions in top-down order, it does not know what
       happens below in the region tree.  Therefore, sometimes IRA
       misses opportunities to perform a better allocation.  A simple
       optimization tries to improve allocation in a region having
       subregions and contained in another region.  If the
       corresponding allocnos in the subregion are spilled, it spills
       the region allocno if it is profitable.  The optimization
       implements a simple iterative algorithm performing profitable
       transformations while they are still possible.  It is fast in
       practice, so there is no real need for a better time complexity
       algorithm.

     o Code change.  After coloring, two allocnos representing the
       same pseudo-register outside and inside a region respectively
       may be assigned to different locations (hard-registers or
       memory).  In this case IRA creates and uses a new
       pseudo-register inside the region and adds code to move allocno
       values on the region's borders.  This is done during top-down
       traversal of the regions (file ira-emit.c).  In some
       complicated cases IRA can create a new allocno to move allocno
       values (e.g. when a swap of values stored in two hard-registers
       is needed).  At this stage, the new allocno is marked as
       spilled.  IRA still creates the pseudo-register and the moves
       on the region borders even when both allocnos were assigned to
       the same hard-register.  If the reload pass spills a
       pseudo-register for some reason, the effect will be smaller
       because another allocno will still be in the hard-register.  In
       most cases, this is better than spilling both allocnos.  If
       reload does not change the allocation for the two
       pseudo-registers, the trivial move will be removed by
       post-reload optimizations.  IRA does not generate moves for
       allocnos assigned to the same hard register when the default
       regional allocation algorithm is used and the register pressure
       in the region for the corresponding pressure class is less than
       the number of available hard registers for the given pressure
       class.  IRA also does some optimizations to remove redundant
       stores and to reduce code duplication on the region borders.
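
       The border-move decision can be sketched as follows (locations
       and parameter names invented for the illustration):

         // Toy locations: nonnegative = hard register, -1 = memory.
         // A move is needed when the two allocnos for the same
         // pseudo landed in different places; with the default
         // algorithm it is suppressed for identical locations only
         // under the low-pressure condition described above.
         static int
         need_border_move_p (int loc_outside, int loc_inside,
                             int mixed_algorithm_p, int low_pressure_p)
         {
           if (loc_outside != loc_inside)
             return 1;
           return !(mixed_algorithm_p && low_pressure_p);
         }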
     o Flattening internal representation.  After changing code, IRA
       transforms its internal representation for several regions into
       a one region representation (file ira-build.c).  This process
       is called IR flattening.  Such a process is more complicated
       than IR rebuilding would be, but is much faster.

     o After IR flattening, IRA tries to assign hard registers to all
       spilled allocnos.  This is implemented by a simple and fast
       priority coloring algorithm (see function
       ira_reassign_conflict_allocnos::ira-color.c).  Here new
       allocnos created during the code change pass can be assigned to
       hard registers.

     o At the end IRA calls the reload pass.  The reload pass
       communicates with IRA through several functions in file
       ira-color.c to improve its decisions in

       * sharing stack slots for the spilled pseudos based on IRA info
         about pseudo-register conflicts.

       * reassigning hard-registers to all spilled pseudos at the end
         of each reload iteration.

       * choosing a better hard-register to spill based on IRA info
         about pseudo-register live ranges and the register pressure
         in places where the pseudo-register lives.

   IRA uses a lot of data representing the target processors.  These
   data are initialized in file ira.c.

   If the function has no loops (or the loops are ignored when
   -fira-algorithm=CB is used), we have classic Chaitin-Briggs
   coloring (only instead of a separate pass of coalescing, we use
   hard register preferencing).  In such a case, IRA works much faster
   because many things are not done (like IR flattening, the
   spill/restore optimization, and the code change).

   Literature worth reading for a better understanding of the code:

     o Preston Briggs, Keith D. Cooper, Linda Torczon.  Improvements
       to Graph Coloring Register Allocation.

     o David Callahan, Brian Koblenz.  Register allocation via
       hierarchical graph coloring.

     o Keith Cooper, Anshuman Dasgupta, Jason Eckhardt.  Revisiting
       Graph Coloring Register Allocation: A Study of the
       Chaitin-Briggs and Callahan-Koblenz Algorithms.

     o Guei-Yuan Lueh, Thomas Gross, and Ali-Reza Adl-Tabatabai.
       Global Register Allocation Based on Graph Fusion.

     o Michael D. Smith and Glenn Holloway.  Graph-Coloring Register
       Allocation for Irregular Architectures.

     o Vladimir Makarov.  The Integrated Register Allocator for GCC.

     o Vladimir Makarov.  The top-down register allocator for
       irregular register file architectures.

*/
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
#include "regs.h"
#include "hash-set.h"
#include "machmode.h"
#include "vec.h"
#include "double-int.h"
#include "input.h"
#include "alias.h"
#include "symtab.h"
#include "wide-int.h"
#include "inchash.h"
#include "tree.h"
#include "rtl.h"
#include "tm_p.h"
#include "target.h"
#include "flags.h"
#include "obstack.h"
#include "bitmap.h"
#include "hard-reg-set.h"
#include "predict.h"
#include "input.h"
#include "function.h"
#include "dominance.h"
#include "cfg.h"
#include "cfgrtl.h"
#include "cfgbuild.h"
#include "cfgcleanup.h"
#include "basic-block.h"
#include "df.h"
#include "expr.h"
#include "recog.h"
#include "params.h"
#include "tree-pass.h"
#include "output.h"
#include "except.h"
#include "reload.h"
#include "diagnostic-core.h"
#include "ggc.h"
#include "ira-int.h"
#include "lra.h"
#include "dce.h"
#include "dbgcnt.h"
#include "rtl-iter.h"
#include "shrink-wrap.h"
struct target_ira default_target_ira;
struct target_ira_int default_target_ira_int;
#if SWITCHABLE_TARGET
struct target_ira *this_target_ira = &default_target_ira;
struct target_ira_int *this_target_ira_int = &default_target_ira_int;
#endif
/* A modified value of flag `-fira-verbose' used internally.  */
int internal_flag_ira_verbose;

/* Dump file of the allocator if it is not NULL.  */
FILE *ira_dump_file;

/* The number of elements in the following array.  */
int ira_spilled_reg_stack_slots_num;

/* The following array contains info about spilled pseudo-registers
   stack slots used in current function so far.  */
struct ira_spilled_reg_stack_slot *ira_spilled_reg_stack_slots;

/* Correspondingly overall cost of the allocation, overall cost before
   reload, cost of the allocnos assigned to hard-registers, cost of
   the allocnos assigned to memory, cost of loads, stores and register
   move insns generated for pseudo-register live range splitting (see
   ira-emit.c).  */
int64_t ira_overall_cost, overall_cost_before;
int64_t ira_reg_cost, ira_mem_cost;
int64_t ira_load_cost, ira_store_cost, ira_shuffle_cost;
int ira_move_loops_num, ira_additional_jumps_num;

/* All registers that can be eliminated.  */

HARD_REG_SET eliminable_regset;

/* Value of max_reg_num () before IRA work start.  This value helps
   us to recognize a situation when new pseudos were created during
   IRA work.  */
static int max_regno_before_ira;

/* Temporary hard reg set used for a different calculation.  */
static HARD_REG_SET temp_hard_regset;

#define last_mode_for_init_move_cost \
  (this_target_ira_int->x_last_mode_for_init_move_cost)
/* The function sets up the map IRA_REG_MODE_HARD_REGSET.  */
static void
setup_reg_mode_hard_regset (void)
{
  int i, m, hard_regno;

  for (m = 0; m < NUM_MACHINE_MODES; m++)
    for (hard_regno = 0; hard_regno < FIRST_PSEUDO_REGISTER; hard_regno++)
      {
        CLEAR_HARD_REG_SET (ira_reg_mode_hard_regset[hard_regno][m]);
        for (i = hard_regno_nregs[hard_regno][m] - 1; i >= 0; i--)
          if (hard_regno + i < FIRST_PSEUDO_REGISTER)
            SET_HARD_REG_BIT (ira_reg_mode_hard_regset[hard_regno][m],
                              hard_regno + i);
      }
}
#define no_unit_alloc_regs \
  (this_target_ira_int->x_no_unit_alloc_regs)

/* The function sets up the three arrays declared above.  */
static void
setup_class_hard_regs (void)
{
  int cl, i, hard_regno, n;
  HARD_REG_SET processed_hard_reg_set;

  ira_assert (SHRT_MAX >= FIRST_PSEUDO_REGISTER);
  for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
    {
      COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
      AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
      CLEAR_HARD_REG_SET (processed_hard_reg_set);
      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
        {
          ira_non_ordered_class_hard_regs[cl][i] = -1;
          ira_class_hard_reg_index[cl][i] = -1;
        }
      for (n = 0, i = 0; i < FIRST_PSEUDO_REGISTER; i++)
        {
#ifdef REG_ALLOC_ORDER
          hard_regno = reg_alloc_order[i];
#else
          hard_regno = i;
#endif
          if (TEST_HARD_REG_BIT (processed_hard_reg_set, hard_regno))
            continue;
          SET_HARD_REG_BIT (processed_hard_reg_set, hard_regno);
          if (! TEST_HARD_REG_BIT (temp_hard_regset, hard_regno))
            ira_class_hard_reg_index[cl][hard_regno] = -1;
          else
            {
              ira_class_hard_reg_index[cl][hard_regno] = n;
              ira_class_hard_regs[cl][n++] = hard_regno;
            }
        }
      ira_class_hard_regs_num[cl] = n;
      for (n = 0, i = 0; i < FIRST_PSEUDO_REGISTER; i++)
        if (TEST_HARD_REG_BIT (temp_hard_regset, i))
          ira_non_ordered_class_hard_regs[cl][n++] = i;
      ira_assert (ira_class_hard_regs_num[cl] == n);
    }
}
/* Set up global variables defining info about hard registers for the
   allocation.  These depend on USE_HARD_FRAME_P whose TRUE value means
   that we can use the hard frame pointer for the allocation.  */
static void
setup_alloc_regs (bool use_hard_frame_p)
{
#ifdef ADJUST_REG_ALLOC_ORDER
  ADJUST_REG_ALLOC_ORDER;
#endif
  COPY_HARD_REG_SET (no_unit_alloc_regs, fixed_reg_set);
  if (! use_hard_frame_p)
    SET_HARD_REG_BIT (no_unit_alloc_regs, HARD_FRAME_POINTER_REGNUM);
  setup_class_hard_regs ();
}
#define alloc_reg_class_subclasses \
  (this_target_ira_int->x_alloc_reg_class_subclasses)

/* Initialize the table of subclasses of each reg class.  */
static void
setup_reg_subclasses (void)
{
  int i, j;
  HARD_REG_SET temp_hard_regset2;

  for (i = 0; i < N_REG_CLASSES; i++)
    for (j = 0; j < N_REG_CLASSES; j++)
      alloc_reg_class_subclasses[i][j] = LIM_REG_CLASSES;

  for (i = 0; i < N_REG_CLASSES; i++)
    {
      if (i == (int) NO_REGS)
        continue;

      COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[i]);
      AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
      if (hard_reg_set_empty_p (temp_hard_regset))
        continue;
      for (j = 0; j < N_REG_CLASSES; j++)
        if (i != j)
          {
            enum reg_class *p;

            COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[j]);
            AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
            if (! hard_reg_set_subset_p (temp_hard_regset,
                                         temp_hard_regset2))
              continue;
            p = &alloc_reg_class_subclasses[j][0];
            while (*p != LIM_REG_CLASSES) p++;
            *p = (enum reg_class) i;
          }
    }
}
/* Set up IRA_MEMORY_MOVE_COST and IRA_MAX_MEMORY_MOVE_COST.  */
static void
setup_class_subset_and_memory_move_costs (void)
{
  int cl, cl2, mode, cost;
  HARD_REG_SET temp_hard_regset2;

  for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
    ira_memory_move_cost[mode][NO_REGS][0]
      = ira_memory_move_cost[mode][NO_REGS][1] = SHRT_MAX;
  for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
    {
      if (cl != (int) NO_REGS)
        for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
          {
            ira_max_memory_move_cost[mode][cl][0]
              = ira_memory_move_cost[mode][cl][0]
              = memory_move_cost ((machine_mode) mode,
                                  (reg_class_t) cl, false);
            ira_max_memory_move_cost[mode][cl][1]
              = ira_memory_move_cost[mode][cl][1]
              = memory_move_cost ((machine_mode) mode,
                                  (reg_class_t) cl, true);
            /* Costs for NO_REGS are used in cost calculation on the
               1st pass when the preferred register classes are not
               known yet.  In this case we take the best scenario.  */
            if (ira_memory_move_cost[mode][NO_REGS][0]
                > ira_memory_move_cost[mode][cl][0])
              ira_max_memory_move_cost[mode][NO_REGS][0]
                = ira_memory_move_cost[mode][NO_REGS][0]
                = ira_memory_move_cost[mode][cl][0];
            if (ira_memory_move_cost[mode][NO_REGS][1]
                > ira_memory_move_cost[mode][cl][1])
              ira_max_memory_move_cost[mode][NO_REGS][1]
                = ira_memory_move_cost[mode][NO_REGS][1]
                = ira_memory_move_cost[mode][cl][1];
          }
    }
  for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
    for (cl2 = (int) N_REG_CLASSES - 1; cl2 >= 0; cl2--)
      {
        COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
        AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
        COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl2]);
        AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
        ira_class_subset_p[cl][cl2]
          = hard_reg_set_subset_p (temp_hard_regset, temp_hard_regset2);
        if (! hard_reg_set_empty_p (temp_hard_regset2)
            && hard_reg_set_subset_p (reg_class_contents[cl2],
                                      reg_class_contents[cl]))
          for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
            {
              cost = ira_memory_move_cost[mode][cl2][0];
              if (cost > ira_max_memory_move_cost[mode][cl][0])
                ira_max_memory_move_cost[mode][cl][0] = cost;
              cost = ira_memory_move_cost[mode][cl2][1];
              if (cost > ira_max_memory_move_cost[mode][cl][1])
                ira_max_memory_move_cost[mode][cl][1] = cost;
            }
      }
  for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
    for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
      {
        ira_memory_move_cost[mode][cl][0]
          = ira_max_memory_move_cost[mode][cl][0];
        ira_memory_move_cost[mode][cl][1]
          = ira_max_memory_move_cost[mode][cl][1];
      }
  setup_reg_subclasses ();
}
/* Define the following macro if allocation through malloc is
   preferable.  */
#define IRA_NO_OBSTACK

#ifndef IRA_NO_OBSTACK
/* Obstack used for storing all dynamic data (except bitmaps) of the
   IRA.  */
static struct obstack ira_obstack;
#endif

/* Obstack used for storing all bitmaps of the IRA.  */
static struct bitmap_obstack ira_bitmap_obstack;

/* Allocate memory of size LEN for IRA data.  */
void *
ira_allocate (size_t len)
{
  void *res;

#ifndef IRA_NO_OBSTACK
  res = obstack_alloc (&ira_obstack, len);
#else
  res = xmalloc (len);
#endif
  return res;
}

/* Free memory ADDR allocated for IRA data.  */
void
ira_free (void *addr ATTRIBUTE_UNUSED)
{
#ifndef IRA_NO_OBSTACK
  /* do nothing */
#else
  free (addr);
#endif
}
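
/* A usage sketch (the caller below is hypothetical): with
   IRA_NO_OBSTACK defined, the pair above amounts to xmalloc/free;
   without it, ira_free is a no-op and the memory is released in bulk
   when the obstack is freed.

     int *v = (int *) ira_allocate (100 * sizeof (int));
     // ... use v during the allocator's work ...
     ira_free (v);
*/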
/* Allocate and return a bitmap for IRA.  */
bitmap
ira_allocate_bitmap (void)
{
  return BITMAP_ALLOC (&ira_bitmap_obstack);
}

/* Free bitmap B allocated for IRA.  */
void
ira_free_bitmap (bitmap b ATTRIBUTE_UNUSED)
{
  /* do nothing */
}

/* Output information about allocation of all allocnos (except for
   caps) into file F.  */
void
ira_print_disposition (FILE *f)
{
  int i, n, max_regno;
  ira_allocno_t a;
  basic_block bb;

  fprintf (f, "Disposition:");
  max_regno = max_reg_num ();
  for (n = 0, i = FIRST_PSEUDO_REGISTER; i < max_regno; i++)
    for (a = ira_regno_allocno_map[i];
         a != NULL;
         a = ALLOCNO_NEXT_REGNO_ALLOCNO (a))
      {
        if (n % 4 == 0)
          fprintf (f, "\n");
        n++;
        fprintf (f, " %4d:r%-4d", ALLOCNO_NUM (a), ALLOCNO_REGNO (a));
        if ((bb = ALLOCNO_LOOP_TREE_NODE (a)->bb) != NULL)
          fprintf (f, "b%-3d", bb->index);
        else
          fprintf (f, "l%-3d", ALLOCNO_LOOP_TREE_NODE (a)->loop_num);
        if (ALLOCNO_HARD_REGNO (a) >= 0)
          fprintf (f, " %3d", ALLOCNO_HARD_REGNO (a));
        else
          fprintf (f, " mem");
      }
  fprintf (f, "\n");
}

/* Output information about allocation of all allocnos into
   stderr.  */
void
ira_debug_disposition (void)
{
  ira_print_disposition (stderr);
}
/* Set up ira_stack_reg_pressure_class which is the biggest pressure
   register class containing stack registers or NO_REGS if there are
   no stack registers.  To find this class, we iterate through all
   register pressure classes and choose the first register pressure
   class containing all the stack registers and having the biggest
   size.  */
static void
setup_stack_reg_pressure_class (void)
{
  ira_stack_reg_pressure_class = NO_REGS;
#ifdef STACK_REGS
  {
    int i, best, size;
    enum reg_class cl;
    HARD_REG_SET temp_hard_regset2;

    CLEAR_HARD_REG_SET (temp_hard_regset);
    for (i = FIRST_STACK_REG; i <= LAST_STACK_REG; i++)
      SET_HARD_REG_BIT (temp_hard_regset, i);
    best = 0;
    for (i = 0; i < ira_pressure_classes_num; i++)
      {
        cl = ira_pressure_classes[i];
        COPY_HARD_REG_SET (temp_hard_regset2, temp_hard_regset);
        AND_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl]);
        size = hard_reg_set_size (temp_hard_regset2);
        if (best < size)
          {
            best = size;
            ira_stack_reg_pressure_class = cl;
          }
      }
  }
#endif
}
/* Find pressure classes which are register classes for which we
   calculate register pressure in IRA, register pressure sensitive
   insn scheduling, and register pressure sensitive loop invariant
   motion.

   To make register pressure calculation easy, we always use
   non-intersected register pressure classes.  A move between hard
   registers of one register pressure class is not more expensive
   than a load and store of those hard registers.  Most likely an
   allocno class will be a subset of a register pressure class and in
   many cases exactly a register pressure class.  That makes usage of
   register pressure classes a good approximation to find a high
   register pressure.  */
static void
setup_pressure_classes (void)
{
  int cost, i, n, curr;
  int cl, cl2;
  enum reg_class pressure_classes[N_REG_CLASSES];
  int m;
  HARD_REG_SET temp_hard_regset2;
  bool insert_p;

  n = 0;
  for (cl = 0; cl < N_REG_CLASSES; cl++)
    {
      if (ira_class_hard_regs_num[cl] == 0)
        continue;
      if (ira_class_hard_regs_num[cl] != 1
          /* A register class without subclasses may contain a few
             hard registers and movement between them is costly
             (e.g. SPARC FPCC registers).  We still should consider it
             as a candidate for a pressure class.  */
          && alloc_reg_class_subclasses[cl][0] < cl)
        {
          /* Check that the moves between any hard registers of the
             current class are not more expensive for a legal mode
             than load/store of the hard registers of the current
             class.  Such a class is a potential candidate to be a
             register pressure class.  */
          for (m = 0; m < NUM_MACHINE_MODES; m++)
            {
              COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
              AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
              AND_COMPL_HARD_REG_SET (temp_hard_regset,
                                      ira_prohibited_class_mode_regs[cl][m]);
              if (hard_reg_set_empty_p (temp_hard_regset))
                continue;
              ira_init_register_move_cost_if_necessary ((machine_mode) m);
              cost = ira_register_move_cost[m][cl][cl];
              if (cost <= ira_max_memory_move_cost[m][cl][1]
                  || cost <= ira_max_memory_move_cost[m][cl][0])
                break;
            }
          if (m >= NUM_MACHINE_MODES)
            continue;
        }
      curr = 0;
      insert_p = true;
      COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
      AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
      /* Remove so far added pressure classes which are subsets of the
         current candidate class.  Prefer GENERAL_REGS as a pressure
         register class to another class containing the same
         allocatable hard registers.  We do this because machine
         dependent cost hooks might give wrong costs for the latter
         class but always give the right cost for the former class
         (GENERAL_REGS).  */
      for (i = 0; i < n; i++)
        {
          cl2 = pressure_classes[i];
          COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl2]);
          AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
          if (hard_reg_set_subset_p (temp_hard_regset, temp_hard_regset2)
              && (! hard_reg_set_equal_p (temp_hard_regset, temp_hard_regset2)
                  || cl2 == (int) GENERAL_REGS))
            {
              pressure_classes[curr++] = (enum reg_class) cl2;
              insert_p = false;
              continue;
            }
          if (hard_reg_set_subset_p (temp_hard_regset2, temp_hard_regset)
              && (! hard_reg_set_equal_p (temp_hard_regset2, temp_hard_regset)
                  || cl == (int) GENERAL_REGS))
            continue;
          if (hard_reg_set_equal_p (temp_hard_regset2, temp_hard_regset))
            insert_p = false;
          pressure_classes[curr++] = (enum reg_class) cl2;
        }
      /* If the current candidate is a subset of a so far added
         pressure class, don't add it to the list of the pressure
         classes.  */
      if (insert_p)
        pressure_classes[curr++] = (enum reg_class) cl;
      n = curr;
    }
#ifdef ENABLE_IRA_CHECKING
  {
    HARD_REG_SET ignore_hard_regs;

    /* Check pressure classes correctness: here we check that hard
       registers from all register pressure classes contain all hard
       registers available for the allocation.  */
    CLEAR_HARD_REG_SET (temp_hard_regset);
    CLEAR_HARD_REG_SET (temp_hard_regset2);
    COPY_HARD_REG_SET (ignore_hard_regs, no_unit_alloc_regs);
    for (cl = 0; cl < LIM_REG_CLASSES; cl++)
      {
        /* For some targets (like MIPS with MD_REGS), there are some
           classes with hard registers available for allocation but
           not able to hold value of any mode.  */
        for (m = 0; m < NUM_MACHINE_MODES; m++)
          if (contains_reg_of_mode[cl][m])
            break;
        if (m >= NUM_MACHINE_MODES)
          {
            IOR_HARD_REG_SET (ignore_hard_regs, reg_class_contents[cl]);
            continue;
          }
        for (i = 0; i < n; i++)
          if ((int) pressure_classes[i] == cl)
            break;
        IOR_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl]);
        if (i < n)
          IOR_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
      }
    for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
      /* Some targets (like SPARC with ICC reg) have allocatable regs
         for which no reg class is defined.  */
      if (REGNO_REG_CLASS (i) == NO_REGS)
        SET_HARD_REG_BIT (ignore_hard_regs, i);
    AND_COMPL_HARD_REG_SET (temp_hard_regset, ignore_hard_regs);
    AND_COMPL_HARD_REG_SET (temp_hard_regset2, ignore_hard_regs);
    ira_assert (hard_reg_set_subset_p (temp_hard_regset2, temp_hard_regset));
  }
#endif
  ira_pressure_classes_num = 0;
  for (i = 0; i < n; i++)
    {
      cl = (int) pressure_classes[i];
      ira_reg_pressure_class_p[cl] = true;
      ira_pressure_classes[ira_pressure_classes_num++] = (enum reg_class) cl;
    }
  setup_stack_reg_pressure_class ();
}
/* Set up IRA_UNIFORM_CLASS_P.  A uniform class is a register class
   whose register move cost between any two of its registers is the
   same as for all its subclasses.  We use the data to speed up the
   2nd pass of calculations of allocno costs.  */
static void
setup_uniform_class_p (void)
{
  int i, cl, cl2, m;

  for (cl = 0; cl < N_REG_CLASSES; cl++)
    {
      ira_uniform_class_p[cl] = false;
      if (ira_class_hard_regs_num[cl] == 0)
        continue;
      /* We cannot use alloc_reg_class_subclasses here because move
         cost hooks do not take into account that some registers are
         unavailable for the subtarget.  E.g. for i686, INT_SSE_REGS
         is an element of alloc_reg_class_subclasses for GENERAL_REGS
         because SSE regs are unavailable.  */
      for (i = 0; (cl2 = reg_class_subclasses[cl][i]) != LIM_REG_CLASSES; i++)
        {
          if (ira_class_hard_regs_num[cl2] == 0)
            continue;
          for (m = 0; m < NUM_MACHINE_MODES; m++)
            if (contains_reg_of_mode[cl][m] && contains_reg_of_mode[cl2][m])
              {
                ira_init_register_move_cost_if_necessary ((machine_mode) m);
                if (ira_register_move_cost[m][cl][cl]
                    != ira_register_move_cost[m][cl2][cl2])
                  break;
              }
          if (m < NUM_MACHINE_MODES)
            break;
        }
      if (cl2 == LIM_REG_CLASSES)
        ira_uniform_class_p[cl] = true;
    }
}
/* Set up IRA_ALLOCNO_CLASSES, IRA_ALLOCNO_CLASSES_NUM,
   IRA_IMPORTANT_CLASSES, and IRA_IMPORTANT_CLASSES_NUM.

   A target may have many subtargets, and not all target hard
   registers can be used for allocation (e.g. the x86 port in 32-bit
   mode cannot use the hard registers introduced in x86-64, like
   r8-r15).  Some classes might have the same allocatable hard
   registers, e.g. INDEX_REGS and GENERAL_REGS in the x86 port in
   32-bit mode.  To reduce the calculation effort we introduce
   allocno classes which contain unique non-empty sets of allocatable
   hard-registers.

   Pseudo class cost calculation in ira-costs.c is very expensive.
   Therefore we are trying to decrease the number of classes involved
   in such calculation.  Register classes used in the cost calculation
   are called important classes.  They are allocno classes and other
   non-empty classes whose allocatable hard register sets are inside
   of an allocno class hard register set.  At first sight, it looks
   like they are just allocno classes.  That is not true.  In the
   example of the x86 port in 32-bit mode, the allocno classes will
   contain GENERAL_REGS but not LEGACY_REGS (because the allocatable
   hard registers are the same for both classes).  The important
   classes will contain GENERAL_REGS and LEGACY_REGS.  This is done
   because a machine description insn constraint may refer to
   LEGACY_REGS and the code in ira-costs.c is mostly based on
   investigation of the insn constraints.  */
static void
setup_allocno_and_important_classes (void)
{
  int i, j, n, cl;
  bool set_p;
  HARD_REG_SET temp_hard_regset2;
  static enum reg_class classes[LIM_REG_CLASSES + 1];

  n = 0;
  /* Collect classes which contain unique sets of allocatable hard
     registers.  Prefer GENERAL_REGS to other classes containing the
     same set of hard registers.  */
  for (i = 0; i < LIM_REG_CLASSES; i++)
    {
      COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[i]);
      AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
      for (j = 0; j < n; j++)
        {
          cl = classes[j];
          COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl]);
          AND_COMPL_HARD_REG_SET (temp_hard_regset2,
                                  no_unit_alloc_regs);
          if (hard_reg_set_equal_p (temp_hard_regset,
                                    temp_hard_regset2))
            break;
        }
      if (j >= n)
        classes[n++] = (enum reg_class) i;
      else if (i == GENERAL_REGS)
        /* Prefer general regs.  For the i386 example, it means that
           we prefer GENERAL_REGS over INDEX_REGS or LEGACY_REGS
           (all of them consist of the same available hard
           registers).  */
        classes[j] = (enum reg_class) i;
    }
  classes[n] = LIM_REG_CLASSES;

  /* Set up classes which can be used for allocnos as classes
     containing non-empty unique sets of allocatable hard
     registers.  */
  ira_allocno_classes_num = 0;
  for (i = 0; (cl = classes[i]) != LIM_REG_CLASSES; i++)
    if (ira_class_hard_regs_num[cl] > 0)
      ira_allocno_classes[ira_allocno_classes_num++] = (enum reg_class) cl;
  ira_important_classes_num = 0;
  /* Add non-allocno classes containing a non-empty set of
     allocatable hard regs.  */
  for (cl = 0; cl < N_REG_CLASSES; cl++)
    if (ira_class_hard_regs_num[cl] > 0)
      {
        COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
        AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
        set_p = false;
        for (j = 0; j < ira_allocno_classes_num; j++)
          {
            COPY_HARD_REG_SET (temp_hard_regset2,
                               reg_class_contents[ira_allocno_classes[j]]);
            AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
            if ((enum reg_class) cl == ira_allocno_classes[j])
              break;
            else if (hard_reg_set_subset_p (temp_hard_regset,
                                            temp_hard_regset2))
              set_p = true;
          }
        if (set_p && j >= ira_allocno_classes_num)
          ira_important_classes[ira_important_classes_num++]
            = (enum reg_class) cl;
      }
  /* Now add allocno classes to the important classes.  */
  for (j = 0; j < ira_allocno_classes_num; j++)
    ira_important_classes[ira_important_classes_num++]
      = ira_allocno_classes[j];
  for (cl = 0; cl < N_REG_CLASSES; cl++)
    {
      ira_reg_allocno_class_p[cl] = false;
      ira_reg_pressure_class_p[cl] = false;
    }
  for (j = 0; j < ira_allocno_classes_num; j++)
    ira_reg_allocno_class_p[ira_allocno_classes[j]] = true;
  setup_pressure_classes ();
  setup_uniform_class_p ();
}
/* Set up the translation in CLASS_TRANSLATE of all classes into a
   class given by array CLASSES of length CLASSES_NUM.  The function
   is used to translate any reg class to an allocno class or a
   pressure class.  This translation is necessary for some
   calculations when we can use only allocno or pressure classes and
   such translation represents an approximate representation of all
   classes.

   The translation in the case when the allocatable hard register set
   of a given class is a subset of the allocatable hard register set
   of a class in CLASSES is pretty simple.  We use the smallest class
   from CLASSES containing the given class.  If the allocatable hard
   register set of a given class is not a subset of any corresponding
   set of a class from CLASSES, we use the cheapest (from the
   load/store point of view) class from CLASSES whose set intersects
   with the given class set.  */
static void
setup_class_translate_array (enum reg_class *class_translate,
                             int classes_num, enum reg_class *classes)
{
  int cl, mode;
  enum reg_class aclass, best_class, *cl_ptr;
  int i, cost, min_cost, best_cost;

  for (cl = 0; cl < N_REG_CLASSES; cl++)
    class_translate[cl] = NO_REGS;

  for (i = 0; i < classes_num; i++)
    {
      aclass = classes[i];
      for (cl_ptr = &alloc_reg_class_subclasses[aclass][0];
           (cl = *cl_ptr) != LIM_REG_CLASSES;
           cl_ptr++)
        if (class_translate[cl] == NO_REGS)
          class_translate[cl] = aclass;
      class_translate[aclass] = aclass;
    }
  /* For classes which are not fully covered by one of the given
     classes (in other words covered by more than one given class),
     use the cheapest class.  */
  for (cl = 0; cl < N_REG_CLASSES; cl++)
    {
      if (cl == NO_REGS || class_translate[cl] != NO_REGS)
        continue;
      best_class = NO_REGS;
      best_cost = INT_MAX;
      for (i = 0; i < classes_num; i++)
        {
          aclass = classes[i];
          COPY_HARD_REG_SET (temp_hard_regset,
                             reg_class_contents[aclass]);
          AND_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
          AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
          if (! hard_reg_set_empty_p (temp_hard_regset))
            {
              min_cost = INT_MAX;
              for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
                {
                  cost = (ira_memory_move_cost[mode][aclass][0]
                          + ira_memory_move_cost[mode][aclass][1]);
                  if (min_cost > cost)
                    min_cost = cost;
                }
              if (best_class == NO_REGS || best_cost > min_cost)
                {
                  best_class = aclass;
                  best_cost = min_cost;
                }
            }
        }
      class_translate[cl] = best_class;
    }
}

/* Set up array IRA_ALLOCNO_CLASS_TRANSLATE and
   IRA_PRESSURE_CLASS_TRANSLATE.  */
static void
setup_class_translate (void)
{
  setup_class_translate_array (ira_allocno_class_translate,
                               ira_allocno_classes_num, ira_allocno_classes);
  setup_class_translate_array (ira_pressure_class_translate,
                               ira_pressure_classes_num, ira_pressure_classes);
}
/* Order numbers of allocno classes in the original target allocno
   class array, -1 for non-allocno classes.  */
static int allocno_class_order[N_REG_CLASSES];

/* The function used to sort the important classes.  */
static int
comp_reg_classes_func (const void *v1p, const void *v2p)
{
  enum reg_class cl1 = *(const enum reg_class *) v1p;
  enum reg_class cl2 = *(const enum reg_class *) v2p;
  enum reg_class tcl1, tcl2;
  int diff;

  tcl1 = ira_allocno_class_translate[cl1];
  tcl2 = ira_allocno_class_translate[cl2];
  if (tcl1 != NO_REGS && tcl2 != NO_REGS
      && (diff = allocno_class_order[tcl1] - allocno_class_order[tcl2]) != 0)
    return diff;
  return (int) cl1 - (int) cl2;
}

/* For correct work of function setup_reg_class_relations we need to
   reorder important classes according to the order of their allocno
   classes.  It places important classes containing the same
   allocatable hard register set adjacent to each other, and the
   allocno class with that allocatable hard register set right after
   the other important classes with the same set.

   In the example from the comments of function
   setup_allocno_and_important_classes, it places LEGACY_REGS and
   GENERAL_REGS close to each other and GENERAL_REGS is after
   LEGACY_REGS.  */
static void
reorder_important_classes (void)
{
  int i;

  for (i = 0; i < N_REG_CLASSES; i++)
    allocno_class_order[i] = -1;
  for (i = 0; i < ira_allocno_classes_num; i++)
    allocno_class_order[ira_allocno_classes[i]] = i;
  qsort (ira_important_classes, ira_important_classes_num,
         sizeof (enum reg_class), comp_reg_classes_func);
  for (i = 0; i < ira_important_classes_num; i++)
    ira_important_class_nums[ira_important_classes[i]] = i;
}
/* Set up IRA_REG_CLASS_SUBUNION, IRA_REG_CLASS_SUPERUNION,
   IRA_REG_CLASS_SUPER_CLASSES, IRA_REG_CLASSES_INTERSECT, and
   IRA_REG_CLASSES_INTERSECT_P.  For the meaning of the relations,
   please see the corresponding comments in ira-int.h.  */
static void
setup_reg_class_relations (void)
{
  int i, cl1, cl2, cl3;
  HARD_REG_SET intersection_set, union_set, temp_set2;
  bool important_class_p[N_REG_CLASSES];

  memset (important_class_p, 0, sizeof (important_class_p));
  for (i = 0; i < ira_important_classes_num; i++)
    important_class_p[ira_important_classes[i]] = true;
  for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
    {
      ira_reg_class_super_classes[cl1][0] = LIM_REG_CLASSES;
      for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
        {
          ira_reg_classes_intersect_p[cl1][cl2] = false;
          ira_reg_class_intersect[cl1][cl2] = NO_REGS;
          ira_reg_class_subset[cl1][cl2] = NO_REGS;
          COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl1]);
          AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
          COPY_HARD_REG_SET (temp_set2, reg_class_contents[cl2]);
          AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
          if (hard_reg_set_empty_p (temp_hard_regset)
              && hard_reg_set_empty_p (temp_set2))
            {
              /* Both classes have no allocatable hard registers
                 -- take all class hard registers into account and use
                 reg_class_subunion and reg_class_superunion.  */
              for (i = 0;; i++)
                {
                  cl3 = reg_class_subclasses[cl1][i];
                  if (cl3 == LIM_REG_CLASSES)
                    break;
                  if (reg_class_subset_p (ira_reg_class_intersect[cl1][cl2],
                                          (enum reg_class) cl3))
                    ira_reg_class_intersect[cl1][cl2] = (enum reg_class) cl3;
                }
              ira_reg_class_subunion[cl1][cl2] = reg_class_subunion[cl1][cl2];
              ira_reg_class_superunion[cl1][cl2] = reg_class_superunion[cl1][cl2];
              continue;
            }
          ira_reg_classes_intersect_p[cl1][cl2]
            = hard_reg_set_intersect_p (temp_hard_regset, temp_set2);
          if (important_class_p[cl1] && important_class_p[cl2]
              && hard_reg_set_subset_p (temp_hard_regset, temp_set2))
            {
              /* CL1 and CL2 are important classes and CL1 allocatable
                 hard register set is inside of CL2 allocatable hard
                 registers -- make CL1 a superset of CL2.  */
              enum reg_class *p;

              p = &ira_reg_class_super_classes[cl1][0];
              while (*p != LIM_REG_CLASSES)
                p++;
              *p++ = (enum reg_class) cl2;
              *p = LIM_REG_CLASSES;
            }
          ira_reg_class_subunion[cl1][cl2] = NO_REGS;
          ira_reg_class_superunion[cl1][cl2] = NO_REGS;
          COPY_HARD_REG_SET (intersection_set, reg_class_contents[cl1]);
          AND_HARD_REG_SET (intersection_set, reg_class_contents[cl2]);
          AND_COMPL_HARD_REG_SET (intersection_set, no_unit_alloc_regs);
          COPY_HARD_REG_SET (union_set, reg_class_contents[cl1]);
          IOR_HARD_REG_SET (union_set, reg_class_contents[cl2]);
          AND_COMPL_HARD_REG_SET (union_set, no_unit_alloc_regs);
          for (cl3 = 0; cl3 < N_REG_CLASSES; cl3++)
            {
              COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl3]);
              AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
              if (hard_reg_set_subset_p (temp_hard_regset, intersection_set))
                {
                  /* CL3 allocatable hard register set is inside of
                     intersection of allocatable hard register sets
                     of CL1 and CL2.  */
                  if (important_class_p[cl3])
                    {
                      COPY_HARD_REG_SET
                        (temp_set2,
                         reg_class_contents
                         [(int) ira_reg_class_intersect[cl1][cl2]]);
                      AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
                      if (! hard_reg_set_subset_p (temp_hard_regset, temp_set2)
                          /* If the allocatable hard register sets are
                             the same, prefer GENERAL_REGS or the
                             smallest class for debugging
                             purposes.  */
                          || (hard_reg_set_equal_p (temp_hard_regset, temp_set2)
                              && (cl3 == GENERAL_REGS
                                  || ((ira_reg_class_intersect[cl1][cl2]
                                       != GENERAL_REGS)
                                      && hard_reg_set_subset_p
                                         (reg_class_contents[cl3],
                                          reg_class_contents
                                          [(int)
                                           ira_reg_class_intersect[cl1][cl2]])))))
                        ira_reg_class_intersect[cl1][cl2] = (enum reg_class) cl3;
                    }
                  COPY_HARD_REG_SET
                    (temp_set2,
                     reg_class_contents[(int) ira_reg_class_subset[cl1][cl2]]);
                  AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
                  if (! hard_reg_set_subset_p (temp_hard_regset, temp_set2)
                      /* Ignore unavailable hard registers and prefer
                         smallest class for debugging purposes.  */
                      || (hard_reg_set_equal_p (temp_hard_regset, temp_set2)
                          && hard_reg_set_subset_p
                             (reg_class_contents[cl3],
                              reg_class_contents
                              [(int) ira_reg_class_subset[cl1][cl2]])))
                    ira_reg_class_subset[cl1][cl2] = (enum reg_class) cl3;
                }
              if (important_class_p[cl3]
                  && hard_reg_set_subset_p (temp_hard_regset, union_set))
                {
                  /* CL3 allocatable hard register set is inside of
                     union of allocatable hard register sets of CL1
                     and CL2.  */
                  COPY_HARD_REG_SET
                    (temp_set2,
                     reg_class_contents[(int) ira_reg_class_subunion[cl1][cl2]]);
                  AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
                  if (ira_reg_class_subunion[cl1][cl2] == NO_REGS
                      || (hard_reg_set_subset_p (temp_set2, temp_hard_regset)
                          && (! hard_reg_set_equal_p (temp_set2,
                                                      temp_hard_regset)
                              || cl3 == GENERAL_REGS
                              /* If the allocatable hard register sets are the
                                 same, prefer GENERAL_REGS or the smallest
                                 class for debugging purposes.  */
                              || (ira_reg_class_subunion[cl1][cl2] != GENERAL_REGS
                                  && hard_reg_set_subset_p
                                     (reg_class_contents[cl3],
                                      reg_class_contents
                                      [(int) ira_reg_class_subunion[cl1][cl2]])))))
                    ira_reg_class_subunion[cl1][cl2] = (enum reg_class) cl3;
                }
              if (hard_reg_set_subset_p (union_set, temp_hard_regset))
                {
                  /* CL3 allocatable hard register set contains union
                     of allocatable hard register sets of CL1 and
                     CL2.  */
                  COPY_HARD_REG_SET
                    (temp_set2,
                     reg_class_contents[(int) ira_reg_class_superunion[cl1][cl2]]);
                  AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
                  if (ira_reg_class_superunion[cl1][cl2] == NO_REGS
                      || (hard_reg_set_subset_p (temp_hard_regset, temp_set2)
                          && (! hard_reg_set_equal_p (temp_set2,
                                                      temp_hard_regset)
                              || cl3 == GENERAL_REGS
                              /* If the allocatable hard register sets are the
                                 same, prefer GENERAL_REGS or the smallest
                                 class for debugging purposes.  */
                              || (ira_reg_class_superunion[cl1][cl2] != GENERAL_REGS
                                  && hard_reg_set_subset_p
                                     (reg_class_contents[cl3],
                                      reg_class_contents
                                      [(int) ira_reg_class_superunion[cl1][cl2]])))))
                    ira_reg_class_superunion[cl1][cl2] = (enum reg_class) cl3;
                }
            }
        }
    }
}
/* Output all uniform and important classes into file F.  */
static void
print_unform_and_important_classes (FILE *f)
{
  static const char *const reg_class_names[] = REG_CLASS_NAMES;
  int i, cl;

  fprintf (f, "Uniform classes:\n");
  for (cl = 0; cl < N_REG_CLASSES; cl++)
    if (ira_uniform_class_p[cl])
      fprintf (f, " %s", reg_class_names[cl]);
  fprintf (f, "\nImportant classes:\n");
  for (i = 0; i < ira_important_classes_num; i++)
    fprintf (f, " %s", reg_class_names[ira_important_classes[i]]);
  fprintf (f, "\n");
}

/* Output all possible allocno or pressure classes and their
   translation map into file F.  */
static void
print_translated_classes (FILE *f, bool pressure_p)
{
  int classes_num = (pressure_p
                     ? ira_pressure_classes_num : ira_allocno_classes_num);
  enum reg_class *classes = (pressure_p
                             ? ira_pressure_classes : ira_allocno_classes);
  enum reg_class *class_translate = (pressure_p
                                     ? ira_pressure_class_translate
                                     : ira_allocno_class_translate);
  static const char *const reg_class_names[] = REG_CLASS_NAMES;
  int i;

  fprintf (f, "%s classes:\n", pressure_p ? "Pressure" : "Allocno");
  for (i = 0; i < classes_num; i++)
    fprintf (f, " %s", reg_class_names[classes[i]]);
  fprintf (f, "\nClass translation:\n");
  for (i = 0; i < N_REG_CLASSES; i++)
    fprintf (f, " %s -> %s\n", reg_class_names[i],
             reg_class_names[class_translate[i]]);
}

/* Output all possible allocno and translation classes and the
   translation maps into stderr.  */
void
ira_debug_allocno_classes (void)
{
  print_unform_and_important_classes (stderr);
  print_translated_classes (stderr, false);
  print_translated_classes (stderr, true);
}
/* Set up different arrays concerning class subsets, allocno and
   important classes.  */
static void
find_reg_classes (void)
{
  setup_allocno_and_important_classes ();
  setup_class_translate ();
  reorder_important_classes ();
  setup_reg_class_relations ();
}

/* Set up the array above.  */
static void
setup_hard_regno_aclass (void)
{
  int i;

  for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
    {
#if 1
      ira_hard_regno_allocno_class[i]
        = (TEST_HARD_REG_BIT (no_unit_alloc_regs, i)
           ? NO_REGS
           : ira_allocno_class_translate[REGNO_REG_CLASS (i)]);
#else
      int j;
      enum reg_class cl;

      ira_hard_regno_allocno_class[i] = NO_REGS;
      for (j = 0; j < ira_allocno_classes_num; j++)
        {
          cl = ira_allocno_classes[j];
          if (ira_class_hard_reg_index[cl][i] >= 0)
            {
              ira_hard_regno_allocno_class[i] = cl;
              break;
            }
        }
#endif
    }
}
/* Form IRA_REG_CLASS_MAX_NREGS and IRA_REG_CLASS_MIN_NREGS maps.  */
static void
setup_reg_class_nregs (void)
{
  int i, cl, cl2, m;

  for (m = 0; m < MAX_MACHINE_MODE; m++)
    {
      for (cl = 0; cl < N_REG_CLASSES; cl++)
        ira_reg_class_max_nregs[cl][m]
          = ira_reg_class_min_nregs[cl][m]
          = targetm.class_max_nregs ((reg_class_t) cl, (machine_mode) m);
      for (cl = 0; cl < N_REG_CLASSES; cl++)
        for (i = 0;
             (cl2 = alloc_reg_class_subclasses[cl][i]) != LIM_REG_CLASSES;
             i++)
          if (ira_reg_class_min_nregs[cl2][m]
              < ira_reg_class_min_nregs[cl][m])
            ira_reg_class_min_nregs[cl][m] = ira_reg_class_min_nregs[cl2][m];
    }
}
1505 /* Set up IRA_PROHIBITED_CLASS_MODE_REGS and IRA_CLASS_SINGLETON.
1506 This function is called once IRA_CLASS_HARD_REGS has been initialized. */
1507 static void
1508 setup_prohibited_class_mode_regs (void)
1510 int j, k, hard_regno, cl, last_hard_regno, count;
1512 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
1514 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
1515 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
1516 for (j = 0; j < NUM_MACHINE_MODES; j++)
1518 count = 0;
1519 last_hard_regno = -1;
1520 CLEAR_HARD_REG_SET (ira_prohibited_class_mode_regs[cl][j]);
1521 for (k = ira_class_hard_regs_num[cl] - 1; k >= 0; k--)
1523 hard_regno = ira_class_hard_regs[cl][k];
1524 if (! HARD_REGNO_MODE_OK (hard_regno, (machine_mode) j))
1525 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1526 hard_regno);
1527 else if (in_hard_reg_set_p (temp_hard_regset,
1528 (machine_mode) j, hard_regno))
1530 last_hard_regno = hard_regno;
1531 count++;
1534 ira_class_singleton[cl][j] = (count == 1 ? last_hard_regno : -1);
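
/* Illustrative sketch, not built (#if 0): IRA_CLASS_SINGLETON makes it
   cheap to ask whether (CL, MODE) leaves exactly one allocatable hard
   register. EXAMPLE_FORCED_REG_P is a hypothetical helper name. */
#if 0
static bool
example_forced_reg_p (enum reg_class cl, machine_mode mode, int *regno)
{
  int single = ira_class_singleton[cl][mode];

  if (single < 0)
    return false;
  /* Only one allocatable hard register can hold MODE in CL. */
  *regno = single;
  return true;
}
#endif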
1539 /* Clarify IRA_PROHIBITED_CLASS_MODE_REGS by excluding hard registers
1540 spanning from one register pressure class to another. It is
1541 called after the pressure classes have been defined. */
1542 static void
1543 clarify_prohibited_class_mode_regs (void)
1545 int j, k, hard_regno, cl, pclass, nregs;
1547 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
1548 for (j = 0; j < NUM_MACHINE_MODES; j++)
1550 CLEAR_HARD_REG_SET (ira_useful_class_mode_regs[cl][j]);
1551 for (k = ira_class_hard_regs_num[cl] - 1; k >= 0; k--)
1553 hard_regno = ira_class_hard_regs[cl][k];
1554 if (TEST_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j], hard_regno))
1555 continue;
1556 nregs = hard_regno_nregs[hard_regno][j];
1557 if (hard_regno + nregs > FIRST_PSEUDO_REGISTER)
1559 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1560 hard_regno);
1561 continue;
1563 pclass = ira_pressure_class_translate[REGNO_REG_CLASS (hard_regno)];
1564 for (nregs--; nregs >= 0; nregs--)
1565 if (((enum reg_class) pclass
1566 != ira_pressure_class_translate[REGNO_REG_CLASS
1567 (hard_regno + nregs)]))
1569 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1570 hard_regno);
1571 break;
1573 if (!TEST_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1574 hard_regno))
1575 add_to_hard_reg_set (&ira_useful_class_mode_regs[cl][j],
1576 (machine_mode) j, hard_regno);
1581 /* Allocate and initialize IRA_REGISTER_MOVE_COST, IRA_MAY_MOVE_IN_COST
1582 and IRA_MAY_MOVE_OUT_COST for MODE. */
1583 void
1584 ira_init_register_move_cost (machine_mode mode)
1586 static unsigned short last_move_cost[N_REG_CLASSES][N_REG_CLASSES];
1587 bool all_match = true;
1588 unsigned int cl1, cl2;
1590 ira_assert (ira_register_move_cost[mode] == NULL
1591 && ira_may_move_in_cost[mode] == NULL
1592 && ira_may_move_out_cost[mode] == NULL);
1593 ira_assert (have_regs_of_mode[mode]);
1594 for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
1595 for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
1597 int cost;
1598 if (!contains_reg_of_mode[cl1][mode]
1599 || !contains_reg_of_mode[cl2][mode])
1601 if ((ira_reg_class_max_nregs[cl1][mode]
1602 > ira_class_hard_regs_num[cl1])
1603 || (ira_reg_class_max_nregs[cl2][mode]
1604 > ira_class_hard_regs_num[cl2]))
1605 cost = 65535;
1606 else
1607 cost = (ira_memory_move_cost[mode][cl1][0]
1608 + ira_memory_move_cost[mode][cl2][1]) * 2;
1610 else
1612 cost = register_move_cost (mode, (enum reg_class) cl1,
1613 (enum reg_class) cl2);
1614 ira_assert (cost < 65535);
1616 all_match &= (last_move_cost[cl1][cl2] == cost);
1617 last_move_cost[cl1][cl2] = cost;
1619 if (all_match && last_mode_for_init_move_cost != -1)
1621 ira_register_move_cost[mode]
1622 = ira_register_move_cost[last_mode_for_init_move_cost];
1623 ira_may_move_in_cost[mode]
1624 = ira_may_move_in_cost[last_mode_for_init_move_cost];
1625 ira_may_move_out_cost[mode]
1626 = ira_may_move_out_cost[last_mode_for_init_move_cost];
1627 return;
1629 last_mode_for_init_move_cost = mode;
1630 ira_register_move_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
1631 ira_may_move_in_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
1632 ira_may_move_out_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
1633 for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
1634 for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
1636 int cost;
1637 enum reg_class *p1, *p2;
1639 if (last_move_cost[cl1][cl2] == 65535)
1641 ira_register_move_cost[mode][cl1][cl2] = 65535;
1642 ira_may_move_in_cost[mode][cl1][cl2] = 65535;
1643 ira_may_move_out_cost[mode][cl1][cl2] = 65535;
1645 else
1647 cost = last_move_cost[cl1][cl2];
1649 for (p2 = &reg_class_subclasses[cl2][0];
1650 *p2 != LIM_REG_CLASSES; p2++)
1651 if (ira_class_hard_regs_num[*p2] > 0
1652 && (ira_reg_class_max_nregs[*p2][mode]
1653 <= ira_class_hard_regs_num[*p2]))
1654 cost = MAX (cost, ira_register_move_cost[mode][cl1][*p2]);
1656 for (p1 = &reg_class_subclasses[cl1][0];
1657 *p1 != LIM_REG_CLASSES; p1++)
1658 if (ira_class_hard_regs_num[*p1] > 0
1659 && (ira_reg_class_max_nregs[*p1][mode]
1660 <= ira_class_hard_regs_num[*p1]))
1661 cost = MAX (cost, ira_register_move_cost[mode][*p1][cl2]);
1663 ira_assert (cost <= 65535);
1664 ira_register_move_cost[mode][cl1][cl2] = cost;
1666 if (ira_class_subset_p[cl1][cl2])
1667 ira_may_move_in_cost[mode][cl1][cl2] = 0;
1668 else
1669 ira_may_move_in_cost[mode][cl1][cl2] = cost;
1671 if (ira_class_subset_p[cl2][cl1])
1672 ira_may_move_out_cost[mode][cl1][cl2] = 0;
1673 else
1674 ira_may_move_out_cost[mode][cl1][cl2] = cost;
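
/* Illustrative sketch, not built (#if 0): once the tables exist for
   MODE they are consulted directly; 65535 plays the role of
   "effectively impossible". EXAMPLE_MOVE_COST is a hypothetical
   wrapper mirroring the lazy-initialization pattern used by IRA. */
#if 0
static int
example_move_cost (machine_mode mode, enum reg_class from, enum reg_class to)
{
  /* The tables are filled lazily, one mode at a time. */
  if (ira_register_move_cost[mode] == NULL)
    ira_init_register_move_cost (mode);
  return ira_register_move_cost[mode][from][to];
}
#endif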
1681 /* This is called once during the compiler's work. It sets up
1682 different arrays whose values don't depend on the function
1683 being compiled. */
1684 void
1685 ira_init_once (void)
1687 ira_init_costs_once ();
1688 lra_init_once ();
1691 /* Free ira_register_move_cost, ira_may_move_in_cost and
1692 ira_may_move_out_cost for each mode. */
1693 void
1694 target_ira_int::free_register_move_costs (void)
1696 int mode, i;
1698 /* Reset move_cost and friends, making sure we only free shared
1699 table entries once. */
1700 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
1701 if (x_ira_register_move_cost[mode])
1703 for (i = 0;
1704 i < mode && (x_ira_register_move_cost[i]
1705 != x_ira_register_move_cost[mode]);
1706 i++)
1708 if (i == mode)
1710 free (x_ira_register_move_cost[mode]);
1711 free (x_ira_may_move_in_cost[mode]);
1712 free (x_ira_may_move_out_cost[mode]);
1715 memset (x_ira_register_move_cost, 0, sizeof x_ira_register_move_cost);
1716 memset (x_ira_may_move_in_cost, 0, sizeof x_ira_may_move_in_cost);
1717 memset (x_ira_may_move_out_cost, 0, sizeof x_ira_may_move_out_cost);
1718 last_mode_for_init_move_cost = -1;
1721 target_ira_int::~target_ira_int ()
1723 free_ira_costs ();
1724 free_register_move_costs ();
1727 /* This is called every time the register-related information
1728 changes. */
1729 void
1730 ira_init (void)
1732 this_target_ira_int->free_register_move_costs ();
1733 setup_reg_mode_hard_regset ();
1734 setup_alloc_regs (flag_omit_frame_pointer != 0);
1735 setup_class_subset_and_memory_move_costs ();
1736 setup_reg_class_nregs ();
1737 setup_prohibited_class_mode_regs ();
1738 find_reg_classes ();
1739 clarify_prohibited_class_mode_regs ();
1740 setup_hard_regno_aclass ();
1741 ira_init_costs ();
1745 #define ira_prohibited_mode_move_regs_initialized_p \
1746 (this_target_ira_int->x_ira_prohibited_mode_move_regs_initialized_p)
1748 /* Set up IRA_PROHIBITED_MODE_MOVE_REGS. */
1749 static void
1750 setup_prohibited_mode_move_regs (void)
1752 int i, j;
1753 rtx test_reg1, test_reg2, move_pat;
1754 rtx_insn *move_insn;
1756 if (ira_prohibited_mode_move_regs_initialized_p)
1757 return;
1758 ira_prohibited_mode_move_regs_initialized_p = true;
1759 test_reg1 = gen_rtx_REG (VOIDmode, 0);
1760 test_reg2 = gen_rtx_REG (VOIDmode, 0);
1761 move_pat = gen_rtx_SET (VOIDmode, test_reg1, test_reg2);
1762 move_insn = gen_rtx_INSN (VOIDmode, 0, 0, 0, move_pat, 0, -1, 0);
1763 for (i = 0; i < NUM_MACHINE_MODES; i++)
1765 SET_HARD_REG_SET (ira_prohibited_mode_move_regs[i]);
1766 for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
1768 if (! HARD_REGNO_MODE_OK (j, (machine_mode) i))
1769 continue;
1770 SET_REGNO_RAW (test_reg1, j);
1771 PUT_MODE (test_reg1, (machine_mode) i);
1772 SET_REGNO_RAW (test_reg2, j);
1773 PUT_MODE (test_reg2, (machine_mode) i);
1774 INSN_CODE (move_insn) = -1;
1775 recog_memoized (move_insn);
1776 if (INSN_CODE (move_insn) < 0)
1777 continue;
1778 extract_insn (move_insn);
1779 /* We don't know whether the move will be in code that is optimized
1780 for size or speed, so consider all enabled alternatives. */
1781 if (! constrain_operands (1, get_enabled_alternatives (move_insn)))
1782 continue;
1783 CLEAR_HARD_REG_BIT (ira_prohibited_mode_move_regs[i], j);
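
/* Illustrative sketch, not built (#if 0): querying the set computed
   above. EXAMPLE_MODE_MOVE_PROHIBITED_P is a hypothetical name. */
#if 0
static bool
example_mode_move_prohibited_p (machine_mode mode, int hard_regno)
{
  /* The bit is set iff no register-register move pattern was
     recognized for MODE in HARD_REGNO. */
  return TEST_HARD_REG_BIT (ira_prohibited_mode_move_regs[mode],
			    hard_regno);
}
#endif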
1790 /* Setup possible alternatives in ALTS for INSN. */
1791 void
1792 ira_setup_alts (rtx_insn *insn, HARD_REG_SET &alts)
1794 /* Map: (nop, nalt) -> start of the constraints for the given operand
1795 and alternative. */
1796 static vec<const char *> insn_constraints;
1797 int nop, nalt;
1798 bool curr_swapped;
1799 const char *p;
1800 rtx op;
1801 int commutative = -1;
1803 extract_insn (insn);
1804 alternative_mask preferred = get_preferred_alternatives (insn);
1805 CLEAR_HARD_REG_SET (alts);
1806 insn_constraints.release ();
1807 insn_constraints.safe_grow_cleared (recog_data.n_operands
1808 * recog_data.n_alternatives + 1);
1809 /* Check that the hard reg set is big enough to hold all the
1810 alternatives; it is hard to imagine a situation in which the
1811 assertion could fail. */
1812 ira_assert (recog_data.n_alternatives
1813 <= (int) MAX (sizeof (HARD_REG_ELT_TYPE) * CHAR_BIT,
1814 FIRST_PSEUDO_REGISTER));
1815 for (curr_swapped = false;; curr_swapped = true)
1817 /* Calculate some data common for all alternatives to speed up the
1818 function. */
1819 for (nop = 0; nop < recog_data.n_operands; nop++)
1821 for (nalt = 0, p = recog_data.constraints[nop];
1822 nalt < recog_data.n_alternatives;
1823 nalt++)
1825 insn_constraints[nop * recog_data.n_alternatives + nalt] = p;
1826 while (*p && *p != ',')
1827 p++;
1828 if (*p)
1829 p++;
1832 for (nalt = 0; nalt < recog_data.n_alternatives; nalt++)
1834 if (!TEST_BIT (preferred, nalt)
1835 || TEST_HARD_REG_BIT (alts, nalt))
1836 continue;
1838 for (nop = 0; nop < recog_data.n_operands; nop++)
1840 int c, len;
1842 op = recog_data.operand[nop];
1843 p = insn_constraints[nop * recog_data.n_alternatives + nalt];
1844 if (*p == 0 || *p == ',')
1845 continue;
1848 switch (c = *p, len = CONSTRAINT_LEN (c, p), c)
1850 case '#':
1851 case ',':
1852 c = '\0';
1853 case '\0':
1854 len = 0;
1855 break;
1857 case '%':
1858 /* We only support one commutative marker, the
1859 first one. We already set commutative
1860 above. */
1861 if (commutative < 0)
1862 commutative = nop;
1863 break;
1865 case '0': case '1': case '2': case '3': case '4':
1866 case '5': case '6': case '7': case '8': case '9':
1867 goto op_success;
1868 break;
1870 case 'g':
1871 goto op_success;
1872 break;
1874 default:
1876 enum constraint_num cn = lookup_constraint (p);
1877 switch (get_constraint_type (cn))
1879 case CT_REGISTER:
1880 if (reg_class_for_constraint (cn) != NO_REGS)
1881 goto op_success;
1882 break;
1884 case CT_CONST_INT:
1885 if (CONST_INT_P (op)
1886 && (insn_const_int_ok_for_constraint
1887 (INTVAL (op), cn)))
1888 goto op_success;
1889 break;
1891 case CT_ADDRESS:
1892 case CT_MEMORY:
1893 goto op_success;
1895 case CT_FIXED_FORM:
1896 if (constraint_satisfied_p (op, cn))
1897 goto op_success;
1898 break;
1900 break;
1903 while (p += len, c);
1904 break;
1905 op_success:
1908 if (nop >= recog_data.n_operands)
1909 SET_HARD_REG_BIT (alts, nalt);
1911 if (commutative < 0)
1912 break;
1913 if (curr_swapped)
1914 break;
1915 op = recog_data.operand[commutative];
1916 recog_data.operand[commutative] = recog_data.operand[commutative + 1];
1917 recog_data.operand[commutative + 1] = op;
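
/* Illustrative sketch, not built (#if 0): the expected calling pattern
   for ira_setup_alts -- the hard reg set is used as a bitmap over
   alternative numbers. EXAMPLE_SCAN_ALTERNATIVES is a hypothetical
   name. */
#if 0
static void
example_scan_alternatives (rtx_insn *insn)
{
  HARD_REG_SET alts;
  int nalt;

  /* ira_setup_alts calls extract_insn, so recog_data is valid after
     it returns. */
  ira_setup_alts (insn, alts);
  for (nalt = 0; nalt < recog_data.n_alternatives; nalt++)
    if (TEST_HARD_REG_BIT (alts, nalt))
      {
	/* Alternative NALT is really possible for INSN. */
      }
}
#endif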
1922 /* Return the number of the output non-early-clobber operand which
1923 must in any case be the same as the operand with number OP_NUM
1924 (or a negative value if there is no such operand). Only really
1925 possible alternatives are taken into consideration. */
1926 int
1927 ira_get_dup_out_num (int op_num, HARD_REG_SET &alts)
1929 int curr_alt, c, original, dup;
1930 bool ignore_p, use_commut_op_p;
1931 const char *str;
1933 if (op_num < 0 || recog_data.n_alternatives == 0)
1934 return -1;
1935 /* We should find duplications only for input operands. */
1936 if (recog_data.operand_type[op_num] != OP_IN)
1937 return -1;
1938 str = recog_data.constraints[op_num];
1939 use_commut_op_p = false;
1940 for (;;)
1942 rtx op = recog_data.operand[op_num];
1944 for (curr_alt = 0, ignore_p = !TEST_HARD_REG_BIT (alts, curr_alt),
1945 original = -1;;)
1947 c = *str;
1948 if (c == '\0')
1949 break;
1950 if (c == '#')
1951 ignore_p = true;
1952 else if (c == ',')
1954 curr_alt++;
1955 ignore_p = !TEST_HARD_REG_BIT (alts, curr_alt);
1957 else if (! ignore_p)
1958 switch (c)
1960 case 'g':
1961 goto fail;
1962 default:
1964 enum constraint_num cn = lookup_constraint (str);
1965 enum reg_class cl = reg_class_for_constraint (cn);
1966 if (cl != NO_REGS
1967 && !targetm.class_likely_spilled_p (cl))
1968 goto fail;
1969 if (constraint_satisfied_p (op, cn))
1970 goto fail;
1971 break;
1974 case '0': case '1': case '2': case '3': case '4':
1975 case '5': case '6': case '7': case '8': case '9':
1976 if (original != -1 && original != c)
1977 goto fail;
1978 original = c;
1979 break;
1981 str += CONSTRAINT_LEN (c, str);
1983 if (original == -1)
1984 goto fail;
1985 dup = -1;
1986 for (ignore_p = false, str = recog_data.constraints[original - '0'];
1987 *str != 0;
1988 str++)
1989 if (ignore_p)
1991 if (*str == ',')
1992 ignore_p = false;
1994 else if (*str == '#')
1995 ignore_p = true;
1996 else if (! ignore_p)
1998 if (*str == '=')
1999 dup = original - '0';
2000 /* It is better to ignore an alternative with an early clobber. */
2001 else if (*str == '&')
2002 goto fail;
2004 if (dup >= 0)
2005 return dup;
2006 fail:
2007 if (use_commut_op_p)
2008 break;
2009 use_commut_op_p = true;
2010 if (recog_data.constraints[op_num][0] == '%')
2011 str = recog_data.constraints[op_num + 1];
2012 else if (op_num > 0 && recog_data.constraints[op_num - 1][0] == '%')
2013 str = recog_data.constraints[op_num - 1];
2014 else
2015 break;
2017 return -1;
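
/* Illustrative sketch, not built (#if 0): pairing an input operand
   with its matching output, as for a two-address constraint like "0".
   EXAMPLE_FIND_MATCHING_OUTPUT is a hypothetical name. */
#if 0
static int
example_find_matching_output (rtx_insn *insn, int in_op)
{
  HARD_REG_SET alts;

  ira_setup_alts (insn, alts);
  /* Returns the number of the output operand that must share IN_OP's
     location, or a negative value if there is none. */
  return ira_get_dup_out_num (in_op, alts);
}
#endif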
2022 /* Search forward to see if the source register of a copy insn dies
2023 before either it or the destination register is modified, but don't
2024 scan past the end of the basic block. If so, we can replace the
2025 source with the destination and let the source die in the copy
2026 insn.
2028 This will reduce the number of registers live in that range and may
2029 enable coalescing of the destination and the source, thus often
2030 saving one register in addition to a register-register copy. */
2032 static void
2033 decrease_live_ranges_number (void)
2035 basic_block bb;
2036 rtx_insn *insn;
2037 rtx set, src, dest, dest_death, q, note;
2038 rtx_insn *p;
2039 int sregno, dregno;
2041 if (! flag_expensive_optimizations)
2042 return;
2044 if (ira_dump_file)
2045 fprintf (ira_dump_file, "Starting decreasing number of live ranges...\n");
2047 FOR_EACH_BB_FN (bb, cfun)
2048 FOR_BB_INSNS (bb, insn)
2050 set = single_set (insn);
2051 if (! set)
2052 continue;
2053 src = SET_SRC (set);
2054 dest = SET_DEST (set);
2055 if (! REG_P (src) || ! REG_P (dest)
2056 || find_reg_note (insn, REG_DEAD, src))
2057 continue;
2058 sregno = REGNO (src);
2059 dregno = REGNO (dest);
2061 /* We don't want to mess with hard regs if register classes
2062 are small. */
2063 if (sregno == dregno
2064 || (targetm.small_register_classes_for_mode_p (GET_MODE (src))
2065 && (sregno < FIRST_PSEUDO_REGISTER
2066 || dregno < FIRST_PSEUDO_REGISTER))
2067 /* We don't see all updates to SP if they are in an
2068 auto-inc memory reference, so we must disallow this
2069 optimization on them. */
2070 || sregno == STACK_POINTER_REGNUM
2071 || dregno == STACK_POINTER_REGNUM)
2072 continue;
2074 dest_death = NULL_RTX;
2076 for (p = NEXT_INSN (insn); p; p = NEXT_INSN (p))
2078 if (! INSN_P (p))
2079 continue;
2080 if (BLOCK_FOR_INSN (p) != bb)
2081 break;
2083 if (reg_set_p (src, p) || reg_set_p (dest, p)
2084 /* If SRC is an asm-declared register, it must not be
2085 replaced in any asm. Unfortunately, the REG_EXPR
2086 tree for the asm variable may be absent in the SRC
2087 rtx, so we can't check the actual register
2088 declaration easily (the asm operand will have it,
2089 though). To avoid complicating the test for a rare
2090 case, we just don't perform register replacement
2091 for a hard reg mentioned in an asm. */
2092 || (sregno < FIRST_PSEUDO_REGISTER
2093 && asm_noperands (PATTERN (p)) >= 0
2094 && reg_overlap_mentioned_p (src, PATTERN (p)))
2095 /* Don't change hard registers used by a call. */
2096 || (CALL_P (p) && sregno < FIRST_PSEUDO_REGISTER
2097 && find_reg_fusage (p, USE, src))
2098 /* Don't change a USE of a register. */
2099 || (GET_CODE (PATTERN (p)) == USE
2100 && reg_overlap_mentioned_p (src, XEXP (PATTERN (p), 0))))
2101 break;
2103 /* See if all of SRC dies in P. This test is slightly
2104 more conservative than it needs to be. */
2105 if ((note = find_regno_note (p, REG_DEAD, sregno))
2106 && GET_MODE (XEXP (note, 0)) == GET_MODE (src))
2108 int failed = 0;
2110 /* We can do the optimization. Scan forward from INSN
2111 again, replacing regs as we go. Set FAILED if a
2112 replacement can't be done. In that case, we can't
2113 move the death note for SRC. This should be
2114 rare. */
2116 /* Set to stop at next insn. */
2117 for (q = next_real_insn (insn);
2118 q != next_real_insn (p);
2119 q = next_real_insn (q))
2121 if (reg_overlap_mentioned_p (src, PATTERN (q)))
2123 /* If SRC is a hard register, we might miss
2124 some overlapping registers with
2125 validate_replace_rtx, so we would have to
2126 undo it. We can't if DEST is present in
2127 the insn, so fail in that combination of
2128 cases. */
2129 if (sregno < FIRST_PSEUDO_REGISTER
2130 && reg_mentioned_p (dest, PATTERN (q)))
2131 failed = 1;
2133 /* Attempt to replace all uses. */
2134 else if (!validate_replace_rtx (src, dest, q))
2135 failed = 1;
2137 /* If this succeeded, but some part of the
2138 register is still present, undo the
2139 replacement. */
2140 else if (sregno < FIRST_PSEUDO_REGISTER
2141 && reg_overlap_mentioned_p (src, PATTERN (q)))
2143 validate_replace_rtx (dest, src, q);
2144 failed = 1;
2148 /* If DEST dies here, remove the death note and
2149 save it for later. Make sure ALL of DEST dies
2150 here; again, this is overly conservative. */
2151 if (! dest_death
2152 && (dest_death = find_regno_note (q, REG_DEAD, dregno)))
2154 if (GET_MODE (XEXP (dest_death, 0)) == GET_MODE (dest))
2155 remove_note (q, dest_death);
2156 else
2158 failed = 1;
2159 dest_death = 0;
2164 if (! failed)
2166 /* Move death note of SRC from P to INSN. */
2167 remove_note (p, note);
2168 XEXP (note, 1) = REG_NOTES (insn);
2169 REG_NOTES (insn) = note;
2172 /* DEST is also dead if INSN has a REG_UNUSED note for
2173 DEST. */
2174 if (! dest_death
2175 && (dest_death
2176 = find_regno_note (insn, REG_UNUSED, dregno)))
2178 PUT_REG_NOTE_KIND (dest_death, REG_DEAD);
2179 remove_note (insn, dest_death);
2182 /* Put death note of DEST on P if we saw it die. */
2183 if (dest_death)
2185 XEXP (dest_death, 1) = REG_NOTES (p);
2186 REG_NOTES (p) = dest_death;
2188 break;
2191 /* If SRC is a hard register which is set or killed in
2192 some other way, we can't do this optimization. */
2193 else if (sregno < FIRST_PSEUDO_REGISTER && dead_or_set_p (p, src))
2194 break;
2201 /* Return nonzero if REGNO is a particularly bad choice for reloading X. */
2202 static bool
2203 ira_bad_reload_regno_1 (int regno, rtx x)
2205 int x_regno, n, i;
2206 ira_allocno_t a;
2207 enum reg_class pref;
2209 /* We only deal with pseudo regs. */
2210 if (! x || GET_CODE (x) != REG)
2211 return false;
2213 x_regno = REGNO (x);
2214 if (x_regno < FIRST_PSEUDO_REGISTER)
2215 return false;
2217 /* If the pseudo prefers REGNO explicitly, then do not consider
2218 REGNO a bad spill choice. */
2219 pref = reg_preferred_class (x_regno);
2220 if (reg_class_size[pref] == 1)
2221 return !TEST_HARD_REG_BIT (reg_class_contents[pref], regno);
2223 /* If the pseudo conflicts with REGNO, then we consider REGNO a
2224 poor choice for a reload regno. */
2225 a = ira_regno_allocno_map[x_regno];
2226 n = ALLOCNO_NUM_OBJECTS (a);
2227 for (i = 0; i < n; i++)
2229 ira_object_t obj = ALLOCNO_OBJECT (a, i);
2230 if (TEST_HARD_REG_BIT (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), regno))
2231 return true;
2233 return false;
2236 /* Return nonzero if REGNO is a particularly bad choice for reloading
2237 IN or OUT. */
2238 bool
2239 ira_bad_reload_regno (int regno, rtx in, rtx out)
2241 return (ira_bad_reload_regno_1 (regno, in)
2242 || ira_bad_reload_regno_1 (regno, out));
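
/* Illustrative sketch, not built (#if 0): how reload might filter
   spill-register candidates with the predicate above.
   EXAMPLE_SPILL_REG_OK_P is a hypothetical name. */
#if 0
static bool
example_spill_reg_ok_p (int regno, rtx in, rtx out)
{
  /* Prefer hard registers that don't conflict with the pseudos
     appearing in IN and OUT. */
  return !ira_bad_reload_regno (regno, in, out);
}
#endif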
2245 /* Add register clobbers from asm statements. */
2246 static void
2247 compute_regs_asm_clobbered (void)
2249 basic_block bb;
2251 FOR_EACH_BB_FN (bb, cfun)
2253 rtx_insn *insn;
2254 FOR_BB_INSNS_REVERSE (bb, insn)
2256 df_ref def;
2258 if (NONDEBUG_INSN_P (insn) && extract_asm_operands (PATTERN (insn)))
2259 FOR_EACH_INSN_DEF (def, insn)
2261 unsigned int dregno = DF_REF_REGNO (def);
2262 if (HARD_REGISTER_NUM_P (dregno))
2263 add_to_hard_reg_set (&crtl->asm_clobbers,
2264 GET_MODE (DF_REF_REAL_REG (def)),
2265 dregno);
2272 /* Set up ELIMINABLE_REGSET, IRA_NO_ALLOC_REGS, and
2273 REGS_EVER_LIVE. */
2274 void
2275 ira_setup_eliminable_regset (void)
2277 #ifdef ELIMINABLE_REGS
2278 int i;
2279 static const struct { const int from, to; } eliminables[] = ELIMINABLE_REGS;
2280 #endif
2281 /* FIXME: If EXIT_IGNORE_STACK is set, we will not save and restore
2282 sp for alloca. So we can't eliminate the frame pointer in that
2283 case. At some point, we should improve this by emitting the
2284 sp-adjusting insns for this case. */
2285 frame_pointer_needed
2286 = (! flag_omit_frame_pointer
2287 || (cfun->calls_alloca && EXIT_IGNORE_STACK)
2288 /* We need the frame pointer to catch stack overflow exceptions
2289 if the stack pointer is moving. */
2290 || (flag_stack_check && STACK_CHECK_MOVING_SP)
2291 || crtl->accesses_prior_frames
2292 || (SUPPORTS_STACK_ALIGNMENT && crtl->stack_realign_needed)
2293 /* We need a frame pointer for all Cilk Plus functions that use
2294 Cilk keywords. */
2295 || (flag_cilkplus && cfun->is_cilk_function)
2296 || targetm.frame_pointer_required ());
2298 /* The chance that FRAME_POINTER_NEEDED is changed from inspecting
2299 RTL is very small. So if we use frame pointer for RA and RTL
2300 actually prevents this, we will spill pseudos assigned to the
2301 frame pointer in LRA. */
2303 if (frame_pointer_needed)
2304 df_set_regs_ever_live (HARD_FRAME_POINTER_REGNUM, true);
2306 COPY_HARD_REG_SET (ira_no_alloc_regs, no_unit_alloc_regs);
2307 CLEAR_HARD_REG_SET (eliminable_regset);
2309 compute_regs_asm_clobbered ();
2311 /* Build the regset of all eliminable registers and show we can't
2312 use those that we already know won't be eliminated. */
2313 #ifdef ELIMINABLE_REGS
2314 for (i = 0; i < (int) ARRAY_SIZE (eliminables); i++)
2316 bool cannot_elim
2317 = (! targetm.can_eliminate (eliminables[i].from, eliminables[i].to)
2318 || (eliminables[i].to == STACK_POINTER_REGNUM && frame_pointer_needed));
2320 if (!TEST_HARD_REG_BIT (crtl->asm_clobbers, eliminables[i].from))
2322 SET_HARD_REG_BIT (eliminable_regset, eliminables[i].from);
2324 if (cannot_elim)
2325 SET_HARD_REG_BIT (ira_no_alloc_regs, eliminables[i].from);
2327 else if (cannot_elim)
2328 error ("%s cannot be used in asm here",
2329 reg_names[eliminables[i].from]);
2330 else
2331 df_set_regs_ever_live (eliminables[i].from, true);
2333 #if !HARD_FRAME_POINTER_IS_FRAME_POINTER
2334 if (!TEST_HARD_REG_BIT (crtl->asm_clobbers, HARD_FRAME_POINTER_REGNUM))
2336 SET_HARD_REG_BIT (eliminable_regset, HARD_FRAME_POINTER_REGNUM);
2337 if (frame_pointer_needed)
2338 SET_HARD_REG_BIT (ira_no_alloc_regs, HARD_FRAME_POINTER_REGNUM);
2340 else if (frame_pointer_needed)
2341 error ("%s cannot be used in asm here",
2342 reg_names[HARD_FRAME_POINTER_REGNUM]);
2343 else
2344 df_set_regs_ever_live (HARD_FRAME_POINTER_REGNUM, true);
2345 #endif
2347 #else
2348 if (!TEST_HARD_REG_BIT (crtl->asm_clobbers, HARD_FRAME_POINTER_REGNUM))
2350 SET_HARD_REG_BIT (eliminable_regset, FRAME_POINTER_REGNUM);
2351 if (frame_pointer_needed)
2352 SET_HARD_REG_BIT (ira_no_alloc_regs, FRAME_POINTER_REGNUM);
2354 else if (frame_pointer_needed)
2355 error ("%s cannot be used in asm here", reg_names[FRAME_POINTER_REGNUM]);
2356 else
2357 df_set_regs_ever_live (FRAME_POINTER_REGNUM, true);
2358 #endif
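
/* Illustrative sketch, not built (#if 0): the shape of an
   ELIMINABLE_REGS definition a target might provide. The exact set of
   {from, to} pairs is target-specific; this one is hypothetical. */
#if 0
#define EXAMPLE_ELIMINABLE_REGS					\
  {{ ARG_POINTER_REGNUM,   STACK_POINTER_REGNUM },		\
   { ARG_POINTER_REGNUM,   HARD_FRAME_POINTER_REGNUM },		\
   { FRAME_POINTER_REGNUM, STACK_POINTER_REGNUM }}
#endif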
2363 /* Vector of substitutions of register numbers,
2364 used to map pseudo regs into hardware regs.
2365 This is set up as a result of register allocation.
2366 Element N is the hard reg assigned to pseudo reg N,
2367 or is -1 if no hard reg was assigned.
2368 If N is a hard reg number, element N is N. */
2369 short *reg_renumber;
2371 /* Set up REG_RENUMBER and CALLER_SAVE_NEEDED (used by reload) from
2372 the allocation found by IRA. */
2373 static void
2374 setup_reg_renumber (void)
2376 int regno, hard_regno;
2377 ira_allocno_t a;
2378 ira_allocno_iterator ai;
2380 caller_save_needed = 0;
2381 FOR_EACH_ALLOCNO (a, ai)
2383 if (ira_use_lra_p && ALLOCNO_CAP_MEMBER (a) != NULL)
2384 continue;
2385 /* There are no caps at this point. */
2386 ira_assert (ALLOCNO_CAP_MEMBER (a) == NULL);
2387 if (! ALLOCNO_ASSIGNED_P (a))
2388 /* It can happen if A is not referenced but partially anticipated
2389 somewhere in a region. */
2390 ALLOCNO_ASSIGNED_P (a) = true;
2391 ira_free_allocno_updated_costs (a);
2392 hard_regno = ALLOCNO_HARD_REGNO (a);
2393 regno = ALLOCNO_REGNO (a);
2394 reg_renumber[regno] = (hard_regno < 0 ? -1 : hard_regno);
2395 if (hard_regno >= 0)
2397 int i, nwords;
2398 enum reg_class pclass;
2399 ira_object_t obj;
2401 pclass = ira_pressure_class_translate[REGNO_REG_CLASS (hard_regno)];
2402 nwords = ALLOCNO_NUM_OBJECTS (a);
2403 for (i = 0; i < nwords; i++)
2405 obj = ALLOCNO_OBJECT (a, i);
2406 IOR_COMPL_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
2407 reg_class_contents[pclass]);
2409 if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0
2410 && ira_hard_reg_set_intersection_p (hard_regno, ALLOCNO_MODE (a),
2411 call_used_reg_set))
2413 ira_assert (!optimize || flag_caller_saves
2414 || (ALLOCNO_CALLS_CROSSED_NUM (a)
2415 == ALLOCNO_CHEAP_CALLS_CROSSED_NUM (a))
2416 || regno >= ira_reg_equiv_len
2417 || ira_equiv_no_lvalue_p (regno));
2418 caller_save_needed = 1;
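
/* Illustrative sketch, not built (#if 0): how later passes read the
   result recorded in REG_RENUMBER. EXAMPLE_PSEUDO_GOT_HARD_REG_P is a
   hypothetical name. */
#if 0
static bool
example_pseudo_got_hard_reg_p (int regno)
{
  gcc_assert (regno >= FIRST_PSEUDO_REGISTER);
  /* A negative entry means the pseudo was left in memory. */
  return reg_renumber[regno] >= 0;
}
#endif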
2424 /* Set up allocno assignment flags for further allocation
2425 improvements. */
2426 static void
2427 setup_allocno_assignment_flags (void)
2429 int hard_regno;
2430 ira_allocno_t a;
2431 ira_allocno_iterator ai;
2433 FOR_EACH_ALLOCNO (a, ai)
2435 if (! ALLOCNO_ASSIGNED_P (a))
2436 /* It can happen if A is not referenced but partially anticipated
2437 somewhere in a region. */
2438 ira_free_allocno_updated_costs (a);
2439 hard_regno = ALLOCNO_HARD_REGNO (a);
2440 /* Don't assign hard registers to allocnos which are the
2441 destinations of a store removed at the end of a loop. It makes
2442 no sense to keep the same value in different hard registers. It
2443 is also impossible to assign hard registers correctly to such
2444 allocnos because the cost info and the info about intersected
2445 calls are incorrect for them. */
2446 ALLOCNO_ASSIGNED_P (a) = (hard_regno >= 0
2447 || ALLOCNO_EMIT_DATA (a)->mem_optimized_dest_p
2448 || (ALLOCNO_MEMORY_COST (a)
2449 - ALLOCNO_CLASS_COST (a)) < 0);
2450 ira_assert
2451 (hard_regno < 0
2452 || ira_hard_reg_in_set_p (hard_regno, ALLOCNO_MODE (a),
2453 reg_class_contents[ALLOCNO_CLASS (a)]));
2457 /* Evaluate overall allocation cost and the costs for using hard
2458 registers and memory for allocnos. */
2459 static void
2460 calculate_allocation_cost (void)
2462 int hard_regno, cost;
2463 ira_allocno_t a;
2464 ira_allocno_iterator ai;
2466 ira_overall_cost = ira_reg_cost = ira_mem_cost = 0;
2467 FOR_EACH_ALLOCNO (a, ai)
2469 hard_regno = ALLOCNO_HARD_REGNO (a);
2470 ira_assert (hard_regno < 0
2471 || (ira_hard_reg_in_set_p
2472 (hard_regno, ALLOCNO_MODE (a),
2473 reg_class_contents[ALLOCNO_CLASS (a)])));
2474 if (hard_regno < 0)
2476 cost = ALLOCNO_MEMORY_COST (a);
2477 ira_mem_cost += cost;
2479 else if (ALLOCNO_HARD_REG_COSTS (a) != NULL)
2481 cost = (ALLOCNO_HARD_REG_COSTS (a)
2482 [ira_class_hard_reg_index
2483 [ALLOCNO_CLASS (a)][hard_regno]]);
2484 ira_reg_cost += cost;
2486 else
2488 cost = ALLOCNO_CLASS_COST (a);
2489 ira_reg_cost += cost;
2491 ira_overall_cost += cost;
2494 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
2496 fprintf (ira_dump_file,
2497 "+++Costs: overall %"PRId64
2498 ", reg %"PRId64
2499 ", mem %"PRId64
2500 ", ld %"PRId64
2501 ", st %"PRId64
2502 ", move %"PRId64,
2503 ira_overall_cost, ira_reg_cost, ira_mem_cost,
2504 ira_load_cost, ira_store_cost, ira_shuffle_cost);
2505 fprintf (ira_dump_file, "\n+++ move loops %d, new jumps %d\n",
2506 ira_move_loops_num, ira_additional_jumps_num);
2511 #ifdef ENABLE_IRA_CHECKING
2512 /* Check the correctness of the allocation. We do need this because
2513 of the complicated code that transforms the more-than-one-region
2514 internal representation into the one-region representation. */
2515 static void
2516 check_allocation (void)
2518 ira_allocno_t a;
2519 int hard_regno, nregs, conflict_nregs;
2520 ira_allocno_iterator ai;
2522 FOR_EACH_ALLOCNO (a, ai)
2524 int n = ALLOCNO_NUM_OBJECTS (a);
2525 int i;
2527 if (ALLOCNO_CAP_MEMBER (a) != NULL
2528 || (hard_regno = ALLOCNO_HARD_REGNO (a)) < 0)
2529 continue;
2530 nregs = hard_regno_nregs[hard_regno][ALLOCNO_MODE (a)];
2531 if (nregs == 1)
2532 /* We allocated a single hard register. */
2533 n = 1;
2534 else if (n > 1)
2535 /* We allocated multiple hard registers, and we will test
2536 conflicts in a granularity of single hard regs. */
2537 nregs = 1;
2539 for (i = 0; i < n; i++)
2541 ira_object_t obj = ALLOCNO_OBJECT (a, i);
2542 ira_object_t conflict_obj;
2543 ira_object_conflict_iterator oci;
2544 int this_regno = hard_regno;
2545 if (n > 1)
2547 if (REG_WORDS_BIG_ENDIAN)
2548 this_regno += n - i - 1;
2549 else
2550 this_regno += i;
2552 FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
2554 ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
2555 int conflict_hard_regno = ALLOCNO_HARD_REGNO (conflict_a);
2556 if (conflict_hard_regno < 0)
2557 continue;
2559 conflict_nregs
2560 = (hard_regno_nregs
2561 [conflict_hard_regno][ALLOCNO_MODE (conflict_a)]);
2563 if (ALLOCNO_NUM_OBJECTS (conflict_a) > 1
2564 && conflict_nregs == ALLOCNO_NUM_OBJECTS (conflict_a))
2566 if (REG_WORDS_BIG_ENDIAN)
2567 conflict_hard_regno += (ALLOCNO_NUM_OBJECTS (conflict_a)
2568 - OBJECT_SUBWORD (conflict_obj) - 1);
2569 else
2570 conflict_hard_regno += OBJECT_SUBWORD (conflict_obj);
2571 conflict_nregs = 1;
2574 if ((conflict_hard_regno <= this_regno
2575 && this_regno < conflict_hard_regno + conflict_nregs)
2576 || (this_regno <= conflict_hard_regno
2577 && conflict_hard_regno < this_regno + nregs))
2579 fprintf (stderr, "bad allocation for %d and %d\n",
2580 ALLOCNO_REGNO (a), ALLOCNO_REGNO (conflict_a));
2581 gcc_unreachable ();
2587 #endif
2589 /* Allocate REG_EQUIV_INIT. Set it up from IRA_REG_EQUIV, which
2590 should already have been calculated. */
2591 static void
2592 setup_reg_equiv_init (void)
2594 int i;
2595 int max_regno = max_reg_num ();
2597 for (i = 0; i < max_regno; i++)
2598 reg_equiv_init (i) = ira_reg_equiv[i].init_insns;
2601 /* Update the equiv info of a register from the movement of FROM_REGNO
2602 to TO_REGNO. INSNS are the insns which were generated for the
2603 movement. It is assumed that FROM_REGNO and TO_REGNO always have
2604 the same value at the point of any move containing such registers.
2605 This function is used to update equiv info for register shuffles on
2606 region borders and for caller save/restore insns. */
2607 void
2608 ira_update_equiv_info_by_shuffle_insn (int to_regno, int from_regno, rtx_insn *insns)
2610 rtx_insn *insn;
2611 rtx x, note;
2613 if (! ira_reg_equiv[from_regno].defined_p
2614 && (! ira_reg_equiv[to_regno].defined_p
2615 || ((x = ira_reg_equiv[to_regno].memory) != NULL_RTX
2616 && ! MEM_READONLY_P (x))))
2617 return;
2618 insn = insns;
2619 if (NEXT_INSN (insn) != NULL_RTX)
2621 if (! ira_reg_equiv[to_regno].defined_p)
2623 ira_assert (ira_reg_equiv[to_regno].init_insns == NULL_RTX);
2624 return;
2626 ira_reg_equiv[to_regno].defined_p = false;
2627 ira_reg_equiv[to_regno].memory
2628 = ira_reg_equiv[to_regno].constant
2629 = ira_reg_equiv[to_regno].invariant
2630 = ira_reg_equiv[to_regno].init_insns = NULL;
2631 if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
2632 fprintf (ira_dump_file,
2633 " Invalidating equiv info for reg %d\n", to_regno);
2634 return;
2636 /* It is possible that FROM_REGNO still has no equivalence because
2637 in shuffles to_regno<-from_regno and from_regno<-to_regno the 2nd
2638 insn has not been processed yet. */
2639 if (ira_reg_equiv[from_regno].defined_p)
2641 ira_reg_equiv[to_regno].defined_p = true;
2642 if ((x = ira_reg_equiv[from_regno].memory) != NULL_RTX)
2644 ira_assert (ira_reg_equiv[from_regno].invariant == NULL_RTX
2645 && ira_reg_equiv[from_regno].constant == NULL_RTX);
2646 ira_assert (ira_reg_equiv[to_regno].memory == NULL_RTX
2647 || rtx_equal_p (ira_reg_equiv[to_regno].memory, x));
2648 ira_reg_equiv[to_regno].memory = x;
2649 if (! MEM_READONLY_P (x))
2650 /* We don't add the insn to the init-insn list because a memory
2651 equivalence only says which memory it is better to use when the
2652 pseudo is spilled. */
2653 return;
2655 else if ((x = ira_reg_equiv[from_regno].constant) != NULL_RTX)
2657 ira_assert (ira_reg_equiv[from_regno].invariant == NULL_RTX);
2658 ira_assert (ira_reg_equiv[to_regno].constant == NULL_RTX
2659 || rtx_equal_p (ira_reg_equiv[to_regno].constant, x));
2660 ira_reg_equiv[to_regno].constant = x;
2662 else
2664 x = ira_reg_equiv[from_regno].invariant;
2665 ira_assert (x != NULL_RTX);
2666 ira_assert (ira_reg_equiv[to_regno].invariant == NULL_RTX
2667 || rtx_equal_p (ira_reg_equiv[to_regno].invariant, x));
2668 ira_reg_equiv[to_regno].invariant = x;
2670 if (find_reg_note (insn, REG_EQUIV, x) == NULL_RTX)
2672 note = set_unique_reg_note (insn, REG_EQUIV, x);
2673 gcc_assert (note != NULL_RTX);
2674 if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
2676 fprintf (ira_dump_file,
2677 " Adding equiv note to insn %u for reg %d ",
2678 INSN_UID (insn), to_regno);
2679 dump_value_slim (ira_dump_file, x, 1);
2680 fprintf (ira_dump_file, "\n");
2684 ira_reg_equiv[to_regno].init_insns
2685 = gen_rtx_INSN_LIST (VOIDmode, insn,
2686 ira_reg_equiv[to_regno].init_insns);
2687 if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
2688 fprintf (ira_dump_file,
2689 " Adding equiv init move insn %u to reg %d\n",
2690 INSN_UID (insn), to_regno);
2693 /* Fix values of array REG_EQUIV_INIT after live range splitting done
2694 by IRA. */
2695 static void
2696 fix_reg_equiv_init (void)
2698 int max_regno = max_reg_num ();
2699 int i, new_regno, max;
2700 rtx x, prev, next, insn, set;
2702 if (max_regno_before_ira < max_regno)
2704 max = vec_safe_length (reg_equivs);
2705 grow_reg_equivs ();
2706 for (i = FIRST_PSEUDO_REGISTER; i < max; i++)
2707 for (prev = NULL_RTX, x = reg_equiv_init (i);
2708 x != NULL_RTX;
2709 x = next)
2711 next = XEXP (x, 1);
2712 insn = XEXP (x, 0);
2713 set = single_set (as_a <rtx_insn *> (insn));
2714 ira_assert (set != NULL_RTX
2715 && (REG_P (SET_DEST (set)) || REG_P (SET_SRC (set))));
2716 if (REG_P (SET_DEST (set))
2717 && ((int) REGNO (SET_DEST (set)) == i
2718 || (int) ORIGINAL_REGNO (SET_DEST (set)) == i))
2719 new_regno = REGNO (SET_DEST (set));
2720 else if (REG_P (SET_SRC (set))
2721 && ((int) REGNO (SET_SRC (set)) == i
2722 || (int) ORIGINAL_REGNO (SET_SRC (set)) == i))
2723 new_regno = REGNO (SET_SRC (set));
2724 else
2725 gcc_unreachable ();
2726 if (new_regno == i)
2727 prev = x;
2728 else
2730 /* Remove the wrong list element. */
2731 if (prev == NULL_RTX)
2732 reg_equiv_init (i) = next;
2733 else
2734 XEXP (prev, 1) = next;
2735 XEXP (x, 1) = reg_equiv_init (new_regno);
2736 reg_equiv_init (new_regno) = x;
2742 #ifdef ENABLE_IRA_CHECKING
2743 /* Print redundant memory-memory copies. */
2744 static void
2745 print_redundant_copies (void)
2747 int hard_regno;
2748 ira_allocno_t a;
2749 ira_copy_t cp, next_cp;
2750 ira_allocno_iterator ai;
2752 FOR_EACH_ALLOCNO (a, ai)
2754 if (ALLOCNO_CAP_MEMBER (a) != NULL)
2755 /* It is a cap. */
2756 continue;
2757 hard_regno = ALLOCNO_HARD_REGNO (a);
2758 if (hard_regno >= 0)
2759 continue;
2760 for (cp = ALLOCNO_COPIES (a); cp != NULL; cp = next_cp)
2761 if (cp->first == a)
2762 next_cp = cp->next_first_allocno_copy;
2763 else
2765 next_cp = cp->next_second_allocno_copy;
2766 if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL
2767 && cp->insn != NULL_RTX
2768 && ALLOCNO_HARD_REGNO (cp->first) == hard_regno)
2769 fprintf (ira_dump_file,
2770 " Redundant move from %d(freq %d):%d\n",
2771 INSN_UID (cp->insn), cp->freq, hard_regno);
2775 #endif
2777 /* Set up the preferred and alternative classes for the new
2778 pseudo-registers created by IRA, starting with START. */
2779 static void
2780 setup_preferred_alternate_classes_for_new_pseudos (int start)
2782 int i, old_regno;
2783 int max_regno = max_reg_num ();
2785 for (i = start; i < max_regno; i++)
2787 old_regno = ORIGINAL_REGNO (regno_reg_rtx[i]);
2788 ira_assert (i != old_regno);
2789 setup_reg_classes (i, reg_preferred_class (old_regno),
2790 reg_alternate_class (old_regno),
2791 reg_allocno_class (old_regno));
2792 if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
2793 fprintf (ira_dump_file,
2794 " New r%d: setting preferred %s, alternative %s\n",
2795 i, reg_class_names[reg_preferred_class (old_regno)],
2796 reg_class_names[reg_alternate_class (old_regno)]);
2801 /* The number of entries allocated in reg_info. */
2802 static int allocated_reg_info_size;
2804 /* Regional allocation can create new pseudo-registers. This function
2805 expands some arrays for pseudo-registers. */
2806 static void
2807 expand_reg_info (void)
2809 int i;
2810 int size = max_reg_num ();
2812 resize_reg_info ();
2813 for (i = allocated_reg_info_size; i < size; i++)
2814 setup_reg_classes (i, GENERAL_REGS, ALL_REGS, GENERAL_REGS);
2815 setup_preferred_alternate_classes_for_new_pseudos (allocated_reg_info_size);
2816 allocated_reg_info_size = size;
2819 /* Return TRUE if the register pressure in the function is too high.
2820 This is used to decide when stack slot sharing is worth doing. */
2821 static bool
2822 too_high_register_pressure_p (void)
2824 int i;
2825 enum reg_class pclass;
2827 for (i = 0; i < ira_pressure_classes_num; i++)
2829 pclass = ira_pressure_classes[i];
2830 if (ira_loop_tree_root->reg_pressure[pclass] > 10000)
2831 return true;
2833 return false;
2838 /* Indicate that hard register number FROM was eliminated and replaced with
2839 an offset from hard register number TO. The status of hard registers live
2840 at the start of a basic block is updated by replacing a use of FROM with
2841 a use of TO. */
2843 void
2844 mark_elimination (int from, int to)
2846 basic_block bb;
2847 bitmap r;
2849 FOR_EACH_BB_FN (bb, cfun)
2851 r = DF_LR_IN (bb);
2852 if (bitmap_bit_p (r, from))
2854 bitmap_clear_bit (r, from);
2855 bitmap_set_bit (r, to);
2857 if (! df_live)
2858 continue;
2859 r = DF_LIVE_IN (bb);
2860 if (bitmap_bit_p (r, from))
2862 bitmap_clear_bit (r, from);
2863 bitmap_set_bit (r, to);
2870 /* The length of the following array. */
2871 int ira_reg_equiv_len;
2873 /* Info about equiv. info for each register. */
2874 struct ira_reg_equiv_s *ira_reg_equiv;
2876 /* Expand ira_reg_equiv if necessary. */
2877 void
2878 ira_expand_reg_equiv (void)
2880 int old = ira_reg_equiv_len;
2882 if (ira_reg_equiv_len > max_reg_num ())
2883 return;
2884 ira_reg_equiv_len = max_reg_num () * 3 / 2 + 1;
2885 ira_reg_equiv
2886 = (struct ira_reg_equiv_s *) xrealloc (ira_reg_equiv,
2887 ira_reg_equiv_len
2888 * sizeof (struct ira_reg_equiv_s));
2889 gcc_assert (old < ira_reg_equiv_len);
2890 memset (ira_reg_equiv + old, 0,
2891 sizeof (struct ira_reg_equiv_s) * (ira_reg_equiv_len - old));
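
/* Illustrative sketch, not built (#if 0): the 3/2 growth above means a
   call when max_reg_num () == 100 resizes the vector to 151 entries,
   zeroing the new tail. Typical use before touching a possibly new
   regno; EXAMPLE_MARK_EQUIV_DEFINED is a hypothetical name. */
#if 0
static void
example_mark_equiv_defined (int regno)
{
  ira_expand_reg_equiv ();	/* Make sure REGNO is within bounds. */
  ira_reg_equiv[regno].defined_p = true;
}
#endif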
2894 static void
2895 init_reg_equiv (void)
2897 ira_reg_equiv_len = 0;
2898 ira_reg_equiv = NULL;
2899 ira_expand_reg_equiv ();
2902 static void
2903 finish_reg_equiv (void)
2905 free (ira_reg_equiv);
2910 struct equivalence
2912 /* Set when a REG_EQUIV note is found or created. Used to keep
2913 track of what memory accesses might be created later, e.g. by
2914 reload. */
2915 rtx replacement;
2916 rtx *src_p;
2918 /* The list of each instruction which initializes this register.
2920 NULL indicates we know nothing about this register's equivalence
2921 properties.
2923 An INSN_LIST with a NULL insn indicates this pseudo is already
2924 known to not have a valid equivalence. */
2925 rtx_insn_list *init_insns;
2927 /* Loop depth is used to recognize equivalences which appear
2928 to be present within the same loop (or in an inner loop). */
2929 short loop_depth;
2930 /* Nonzero if this had a preexisting REG_EQUIV note. */
2931 unsigned char is_arg_equivalence : 1;
2932 /* Set when an attempt should be made to replace a register
2933 with the associated src_p entry. */
2934 unsigned char replace : 1;
2935 /* Set if this register has no known equivalence. */
2936 unsigned char no_equiv : 1;
2939 /* reg_equiv[N] (where N is a pseudo reg number) is the equivalence
2940 structure for that register. */
2941 static struct equivalence *reg_equiv;
2943 /* Used for communication between the following two functions: contains
2944 a MEM that we wish to ensure remains unchanged. */
2945 static rtx equiv_mem;
2947 /* Set nonzero if EQUIV_MEM is modified. */
2948 static int equiv_mem_modified;
2950 /* If EQUIV_MEM is modified by modifying DEST, indicate that it is modified.
2951 Called via note_stores. */
2952 static void
2953 validate_equiv_mem_from_store (rtx dest, const_rtx set ATTRIBUTE_UNUSED,
2954 void *data ATTRIBUTE_UNUSED)
2956 if ((REG_P (dest)
2957 && reg_overlap_mentioned_p (dest, equiv_mem))
2958 || (MEM_P (dest)
2959 && anti_dependence (equiv_mem, dest)))
2960 equiv_mem_modified = 1;
2963 /* Verify that no store between START and the death of REG invalidates
2964 MEMREF. MEMREF is invalidated by modifying a register used in MEMREF,
2965 by storing into an overlapping memory location, or with a non-const
2966 CALL_INSN.
2968 Return 1 if MEMREF remains valid. */
2969 static int
2970 validate_equiv_mem (rtx_insn *start, rtx reg, rtx memref)
2972 rtx_insn *insn;
2973 rtx note;
2975 equiv_mem = memref;
2976 equiv_mem_modified = 0;
2978 /* If the memory reference has side effects or is volatile, it isn't a
2979 valid equivalence. */
2980 if (side_effects_p (memref))
2981 return 0;
2983 for (insn = start; insn && ! equiv_mem_modified; insn = NEXT_INSN (insn))
2985 if (! INSN_P (insn))
2986 continue;
2988 if (find_reg_note (insn, REG_DEAD, reg))
2989 return 1;
2991 /* This used to ignore readonly memory and const/pure calls. The problem
2992 is the equivalent form may reference a pseudo which gets assigned a
2993 call clobbered hard reg. When we later replace REG with its
2994 equivalent form, the value in the call-clobbered reg has been
2995 changed and all hell breaks loose. */
2996 if (CALL_P (insn))
2997 return 0;
2999 note_stores (PATTERN (insn), validate_equiv_mem_from_store, NULL);
3001 /* If a register mentioned in MEMREF is modified via an
3002 auto-increment, we lose the equivalence. Do the same if one
3003 dies; although we could extend the life, it doesn't seem worth
3004 the trouble. */
3006 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
3007 if ((REG_NOTE_KIND (note) == REG_INC
3008 || REG_NOTE_KIND (note) == REG_DEAD)
3009 && REG_P (XEXP (note, 0))
3010 && reg_overlap_mentioned_p (XEXP (note, 0), memref))
3011 return 0;
3014 return 0;
3017 /* Return nonzero if X varies, i.e. is not known to be invariant. */
3018 static int
3019 equiv_init_varies_p (rtx x)
3021 RTX_CODE code = GET_CODE (x);
3022 int i;
3023 const char *fmt;
3025 switch (code)
3027 case MEM:
3028 return !MEM_READONLY_P (x) || equiv_init_varies_p (XEXP (x, 0));
3030 case CONST:
3031 CASE_CONST_ANY:
3032 case SYMBOL_REF:
3033 case LABEL_REF:
3034 return 0;
3036 case REG:
3037 return reg_equiv[REGNO (x)].replace == 0 && rtx_varies_p (x, 0);
3039 case ASM_OPERANDS:
3040 if (MEM_VOLATILE_P (x))
3041 return 1;
3043 /* Fall through. */
3045 default:
3046 break;
3049 fmt = GET_RTX_FORMAT (code);
3050 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3051 if (fmt[i] == 'e')
3053 if (equiv_init_varies_p (XEXP (x, i)))
3054 return 1;
3056 else if (fmt[i] == 'E')
3058 int j;
3059 for (j = 0; j < XVECLEN (x, i); j++)
3060 if (equiv_init_varies_p (XVECEXP (x, i, j)))
3061 return 1;
3064 return 0;
3067 /* Return nonzero if X (used to initialize register REGNO) is movable.
3068 X is only movable if the registers it uses have equivalent initializations
3069 which appear to be within the same loop (or in an inner loop) and are
3070 themselves movable, or if they are not candidates for local_alloc and don't vary. */
3071 static int
3072 equiv_init_movable_p (rtx x, int regno)
3074 int i, j;
3075 const char *fmt;
3076 enum rtx_code code = GET_CODE (x);
3078 switch (code)
3080 case SET:
3081 return equiv_init_movable_p (SET_SRC (x), regno);
3083 case CC0:
3084 case CLOBBER:
3085 return 0;
3087 case PRE_INC:
3088 case PRE_DEC:
3089 case POST_INC:
3090 case POST_DEC:
3091 case PRE_MODIFY:
3092 case POST_MODIFY:
3093 return 0;
3095 case REG:
3096 return ((reg_equiv[REGNO (x)].loop_depth >= reg_equiv[regno].loop_depth
3097 && reg_equiv[REGNO (x)].replace)
3098 || (REG_BASIC_BLOCK (REGNO (x)) < NUM_FIXED_BLOCKS
3099 && ! rtx_varies_p (x, 0)));
3101 case UNSPEC_VOLATILE:
3102 return 0;
3104 case ASM_OPERANDS:
3105 if (MEM_VOLATILE_P (x))
3106 return 0;
3108 /* Fall through. */
3110 default:
3111 break;
3114 fmt = GET_RTX_FORMAT (code);
3115 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3116 switch (fmt[i])
3118 case 'e':
3119 if (! equiv_init_movable_p (XEXP (x, i), regno))
3120 return 0;
3121 break;
3122 case 'E':
3123 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
3124 if (! equiv_init_movable_p (XVECEXP (x, i, j), regno))
3125 return 0;
3126 break;
3129 return 1;
3132 /* TRUE if X uses any registers for which reg_equiv[REGNO].replace is
3133 true. */
3134 static int
3135 contains_replace_regs (rtx x)
3137 int i, j;
3138 const char *fmt;
3139 enum rtx_code code = GET_CODE (x);
3141 switch (code)
3143 case CONST:
3144 case LABEL_REF:
3145 case SYMBOL_REF:
3146 CASE_CONST_ANY:
3147 case PC:
3148 case CC0:
3149 case HIGH:
3150 return 0;
3152 case REG:
3153 return reg_equiv[REGNO (x)].replace;
3155 default:
3156 break;
3159 fmt = GET_RTX_FORMAT (code);
3160 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3161 switch (fmt[i])
3163 case 'e':
3164 if (contains_replace_regs (XEXP (x, i)))
3165 return 1;
3166 break;
3167 case 'E':
3168 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
3169 if (contains_replace_regs (XVECEXP (x, i, j)))
3170 return 1;
3171 break;
3174 return 0;
3177 /* TRUE if X references a memory location that would be affected by a store
3178 to MEMREF. */
3179 static int
3180 memref_referenced_p (rtx memref, rtx x)
3182 int i, j;
3183 const char *fmt;
3184 enum rtx_code code = GET_CODE (x);
3186 switch (code)
3188 case CONST:
3189 case LABEL_REF:
3190 case SYMBOL_REF:
3191 CASE_CONST_ANY:
3192 case PC:
3193 case CC0:
3194 case HIGH:
3195 case LO_SUM:
3196 return 0;
3198 case REG:
3199 return (reg_equiv[REGNO (x)].replacement
3200 && memref_referenced_p (memref,
3201 reg_equiv[REGNO (x)].replacement));
3203 case MEM:
3204 if (true_dependence (memref, VOIDmode, x))
3205 return 1;
3206 break;
3208 case SET:
3209 /* If we are setting a MEM, it doesn't count (its address does), but any
3210 other SET_DEST that has a MEM in it is referencing the MEM. */
3211 if (MEM_P (SET_DEST (x)))
3213 if (memref_referenced_p (memref, XEXP (SET_DEST (x), 0)))
3214 return 1;
3216 else if (memref_referenced_p (memref, SET_DEST (x)))
3217 return 1;
3219 return memref_referenced_p (memref, SET_SRC (x));
3221 default:
3222 break;
3225 fmt = GET_RTX_FORMAT (code);
3226 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3227 switch (fmt[i])
3229 case 'e':
3230 if (memref_referenced_p (memref, XEXP (x, i)))
3231 return 1;
3232 break;
3233 case 'E':
3234 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
3235 if (memref_referenced_p (memref, XVECEXP (x, i, j)))
3236 return 1;
3237 break;
3240 return 0;
3243 /* TRUE if some insn in the range (START, END] references a memory location
3244 that would be affected by a store to MEMREF. */
3245 static int
3246 memref_used_between_p (rtx memref, rtx_insn *start, rtx_insn *end)
3248 rtx_insn *insn;
3250 for (insn = NEXT_INSN (start); insn != NEXT_INSN (end);
3251 insn = NEXT_INSN (insn))
3253 if (!NONDEBUG_INSN_P (insn))
3254 continue;
3256 if (memref_referenced_p (memref, PATTERN (insn)))
3257 return 1;
3259 /* Nonconst functions may access memory. */
3260 if (CALL_P (insn) && (! RTL_CONST_CALL_P (insn)))
3261 return 1;
3264 return 0;
3267 /* Mark REG as having no known equivalence.
3268 Some instructions might have been processed before and furnished
3269 with REG_EQUIV notes for this register; these notes will have to be
3270 removed.
3271 STORE is the piece of RTL that does the non-constant / conflicting
3272 assignment - a SET, CLOBBER or REG_INC note. It is currently not used,
3273 but needs to be there because this function is called from note_stores. */
3274 static void
3275 no_equiv (rtx reg, const_rtx store ATTRIBUTE_UNUSED,
3276 void *data ATTRIBUTE_UNUSED)
3278 int regno;
3279 rtx_insn_list *list;
3281 if (!REG_P (reg))
3282 return;
3283 regno = REGNO (reg);
3284 reg_equiv[regno].no_equiv = 1;
3285 list = reg_equiv[regno].init_insns;
3286 if (list && list->insn () == NULL)
3287 return;
3288 reg_equiv[regno].init_insns = gen_rtx_INSN_LIST (VOIDmode, NULL_RTX, NULL);
3289 reg_equiv[regno].replacement = NULL_RTX;
3290 /* This doesn't matter for equivalences made for argument registers;
3291 we should keep their initialization insns. */
3292 if (reg_equiv[regno].is_arg_equivalence)
3293 return;
3294 ira_reg_equiv[regno].defined_p = false;
3295 ira_reg_equiv[regno].init_insns = NULL;
3296 for (; list; list = list->next ())
3298 rtx_insn *insn = list->insn ();
3299 remove_note (insn, find_reg_note (insn, REG_EQUIV, NULL_RTX));
3303 /* Scan INSN for paradoxical subregs of pseudos and record any such
3304 pseudo in PDX_SUBREGS. */
3306 static void
3307 set_paradoxical_subreg (rtx_insn *insn, bool *pdx_subregs)
3309 subrtx_iterator::array_type array;
3310 FOR_EACH_SUBRTX (iter, array, PATTERN (insn), NONCONST)
3312 const_rtx subreg = *iter;
3313 if (GET_CODE (subreg) == SUBREG)
3315 const_rtx reg = SUBREG_REG (subreg);
3316 if (REG_P (reg) && paradoxical_subreg_p (subreg))
3317 pdx_subregs[REGNO (reg)] = true;
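
/* Illustrative sketch, not built (#if 0): the kind of RTL the walk
   above detects -- a subreg wider than its inner pseudo, e.g.
   (subreg:DI (reg:SI 123) 0). EXAMPLE_PARADOXICAL_USE_P is a
   hypothetical name. */
#if 0
static bool
example_paradoxical_use_p (rtx x)
{
  return (GET_CODE (x) == SUBREG
	  && REG_P (SUBREG_REG (x))
	  && paradoxical_subreg_p (x));
}
#endif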
3322 /* In a DEBUG_INSN location, adjust REGs from the CLEARED_REGS bitmap
3323 to their equivalent replacements. */
3325 static rtx
3326 adjust_cleared_regs (rtx loc, const_rtx old_rtx ATTRIBUTE_UNUSED, void *data)
3328 if (REG_P (loc))
3330 bitmap cleared_regs = (bitmap) data;
3331 if (bitmap_bit_p (cleared_regs, REGNO (loc)))
3332 return simplify_replace_fn_rtx (copy_rtx (*reg_equiv[REGNO (loc)].src_p),
3333 NULL_RTX, adjust_cleared_regs, data);
3335 return NULL_RTX;
3338 /* Nonzero if we recorded an equivalence for a LABEL_REF. */
3339 static int recorded_label_ref;
3341 /* Find registers that are equivalent to a single value throughout the
3342 compilation (either because they can be referenced in memory or are
3343 set once from a single constant). Lower their priority for a
3344 register.
3346 If such a register is only referenced once, try substituting its
3347 value into the using insn. If it succeeds, we can eliminate the
3348 register completely.
3350 Initialize init_insns in ira_reg_equiv array.
3352 Return non-zero if jump label rebuilding should be done. */
3353 static int
3354 update_equiv_regs (void)
3356 rtx_insn *insn;
3357 basic_block bb;
3358 int loop_depth;
3359 bitmap cleared_regs;
3360 bool *pdx_subregs;
3362 /* We need to keep track of whether or not we recorded a LABEL_REF so
3363 that we know if the jump optimizer needs to be rerun. */
3364 recorded_label_ref = 0;
3366 /* Use pdx_subregs to show whether a reg is used in a paradoxical
3367 subreg. */
3368 pdx_subregs = XCNEWVEC (bool, max_regno);
3370 reg_equiv = XCNEWVEC (struct equivalence, max_regno);
3371 grow_reg_equivs ();
3373 init_alias_analysis ();
3375 /* Scan the insns and set pdx_subregs[regno] if the reg is used in a
3376 paradoxical subreg. Don't set such a reg equivalent to a mem,
3377 because LRA will not substitute such an equiv memory, in order to
3378 prevent accesses beyond the allocated memory for a paradoxical memory subreg. */
3379 FOR_EACH_BB_FN (bb, cfun)
3380 FOR_BB_INSNS (bb, insn)
3381 if (NONDEBUG_INSN_P (insn))
3382 set_paradoxical_subreg (insn, pdx_subregs);
3384 /* Scan the insns and find which registers have equivalences. Do this
3385 in a separate scan of the insns because (due to -fcse-follow-jumps)
3386 a register can be set below its use. */
3387 FOR_EACH_BB_FN (bb, cfun)
3389 loop_depth = bb_loop_depth (bb);
3391 for (insn = BB_HEAD (bb);
3392 insn != NEXT_INSN (BB_END (bb));
3393 insn = NEXT_INSN (insn))
3395 rtx note;
3396 rtx set;
3397 rtx dest, src;
3398 int regno;
3400 if (! INSN_P (insn))
3401 continue;
3403 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
3404 if (REG_NOTE_KIND (note) == REG_INC)
3405 no_equiv (XEXP (note, 0), note, NULL);
3407 set = single_set (insn);
3409 /* If this insn contains more (or less) than a single SET,
3410 just mark all destinations as having no known equivalence. */
3411 if (set == NULL_RTX)
3413 note_stores (PATTERN (insn), no_equiv, NULL);
3414 continue;
3416 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3418 int i;
3420 for (i = XVECLEN (PATTERN (insn), 0) - 1; i >= 0; i--)
3422 rtx part = XVECEXP (PATTERN (insn), 0, i);
3423 if (part != set)
3424 note_stores (part, no_equiv, NULL);
3428 dest = SET_DEST (set);
3429 src = SET_SRC (set);
3431 /* See if this is setting up the equivalence between an argument
3432 register and its stack slot. */
3433 note = find_reg_note (insn, REG_EQUIV, NULL_RTX);
3434 if (note)
3436 gcc_assert (REG_P (dest));
3437 regno = REGNO (dest);
3439 /* Note that we don't want to clear init_insns in
3440 ira_reg_equiv even if there are multiple sets of this
3441 register. */
3442 reg_equiv[regno].is_arg_equivalence = 1;
3444 /* The insn result can have equivalence memory although
3445 the equivalence is not set up by the insn. We add
3446 this insn to init insns as it is a flag for now that
3447 regno has an equivalence. We will remove the insn
3448 from init insn list later. */
3449 if (rtx_equal_p (src, XEXP (note, 0)) || MEM_P (XEXP (note, 0)))
3450 ira_reg_equiv[regno].init_insns
3451 = gen_rtx_INSN_LIST (VOIDmode, insn,
3452 ira_reg_equiv[regno].init_insns);
3454 /* Continue normally in case this is a candidate for
3455 replacements. */
3458 if (!optimize)
3459 continue;
3461 /* We only handle the case of a pseudo register being set
3462 once, or always to the same value. */
3463 /* ??? The mn10200 port breaks if we add equivalences for
3464 values that need an ADDRESS_REGS register and set them equivalent
3465 to a MEM of a pseudo. The actual problem is in the over-conservative
3466 handling of INPADDR_ADDRESS / INPUT_ADDRESS / INPUT triples in
3467 calculate_needs, but we traditionally work around this problem
3468 here by rejecting equivalences when the destination is in a register
3469 that's likely spilled. This is fragile, of course, since the
3470 preferred class of a pseudo depends on all instructions that set
3471 or use it. */
3473 if (!REG_P (dest)
3474 || (regno = REGNO (dest)) < FIRST_PSEUDO_REGISTER
3475 || (reg_equiv[regno].init_insns
3476 && reg_equiv[regno].init_insns->insn () == NULL)
3477 || (targetm.class_likely_spilled_p (reg_preferred_class (regno))
3478 && MEM_P (src) && ! reg_equiv[regno].is_arg_equivalence))
3480 /* This might be setting a SUBREG of a pseudo, a pseudo that is
3481 also set somewhere else to a constant. */
3482 note_stores (set, no_equiv, NULL);
3483 continue;
3486 /* Don't set reg (if pdx_subregs[regno] == true) equivalent to a mem. */
3487 if (MEM_P (src) && pdx_subregs[regno])
3489 note_stores (set, no_equiv, NULL);
3490 continue;
3493 note = find_reg_note (insn, REG_EQUAL, NULL_RTX);
3495 /* cse sometimes generates function invariants, but doesn't put a
3496 REG_EQUAL note on the insn. Since this note would be redundant,
3497 there's no point creating it earlier than here. */
3498 if (! note && ! rtx_varies_p (src, 0))
3499 note = set_unique_reg_note (insn, REG_EQUAL, copy_rtx (src));
3501 /* Don't bother considering a REG_EQUAL note containing an EXPR_LIST
3502 since it represents a function call. */
3503 if (note && GET_CODE (XEXP (note, 0)) == EXPR_LIST)
3504 note = NULL_RTX;
3506 if (DF_REG_DEF_COUNT (regno) != 1)
3508 bool equal_p = true;
3509 rtx_insn_list *list;
3511 	  /* If we have already processed this pseudo and determined it
3512 	     cannot have an equivalence, then honor that decision.  */
3513 if (reg_equiv[regno].no_equiv)
3514 continue;
3516 if (! note
3517 || rtx_varies_p (XEXP (note, 0), 0)
3518 || (reg_equiv[regno].replacement
3519 && ! rtx_equal_p (XEXP (note, 0),
3520 reg_equiv[regno].replacement)))
3522 no_equiv (dest, set, NULL);
3523 continue;
3526 list = reg_equiv[regno].init_insns;
3527 for (; list; list = list->next ())
3529 rtx note_tmp;
3530 rtx_insn *insn_tmp;
3532 insn_tmp = list->insn ();
3533 note_tmp = find_reg_note (insn_tmp, REG_EQUAL, NULL_RTX);
3534 gcc_assert (note_tmp);
3535 if (! rtx_equal_p (XEXP (note, 0), XEXP (note_tmp, 0)))
3537 equal_p = false;
3538 break;
3542 if (! equal_p)
3544 no_equiv (dest, set, NULL);
3545 continue;
3549 /* Record this insn as initializing this register. */
3550 reg_equiv[regno].init_insns
3551 = gen_rtx_INSN_LIST (VOIDmode, insn, reg_equiv[regno].init_insns);
3553 /* If this register is known to be equal to a constant, record that
3554 it is always equivalent to the constant. */
3555 if (DF_REG_DEF_COUNT (regno) == 1
3556 && note && ! rtx_varies_p (XEXP (note, 0), 0))
3558 rtx note_value = XEXP (note, 0);
3559 remove_note (insn, note);
3560 set_unique_reg_note (insn, REG_EQUIV, note_value);
3563 /* If this insn introduces a "constant" register, decrease the priority
3564 of that register. Record this insn if the register is only used once
3565 more and the equivalence value is the same as our source.
3567 The latter condition is checked for two reasons: First, it is an
3568 indication that it may be more efficient to actually emit the insn
3569 as written (if no registers are available, reload will substitute
3570 the equivalence). Secondly, it avoids problems with any registers
3571 dying in this insn whose death notes would be missed.
3573 If we don't have a REG_EQUIV note, see if this insn is loading
3574 a register used only in one basic block from a MEM. If so, and the
3575 MEM remains unchanged for the life of the register, add a REG_EQUIV
3576 note. */
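       /* An illustrative sketch (hypothetical pseudo and RTL, not from
 	 any particular target): given

 	   (insn 10 (set (reg/v 100)
 			 (mem/c (plus (reg/f sfp) (const_int -4)))))

 	 if reg 100 is used in only one basic block and the stack slot
 	 is provably unchanged while reg 100 is live, the code below
 	 attaches a REG_EQUIV note for the MEM to insn 10.  */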
3577 note = find_reg_note (insn, REG_EQUIV, NULL_RTX);
3579 if (note == NULL_RTX && REG_BASIC_BLOCK (regno) >= NUM_FIXED_BLOCKS
3580 && MEM_P (SET_SRC (set))
3581 && validate_equiv_mem (insn, dest, SET_SRC (set)))
3582 note = set_unique_reg_note (insn, REG_EQUIV, copy_rtx (SET_SRC (set)));
3584 if (note)
3586 int regno = REGNO (dest);
3587 rtx x = XEXP (note, 0);
3589 /* If we haven't done so, record for reload that this is an
3590 equivalencing insn. */
3591 if (!reg_equiv[regno].is_arg_equivalence)
3592 ira_reg_equiv[regno].init_insns
3593 = gen_rtx_INSN_LIST (VOIDmode, insn,
3594 ira_reg_equiv[regno].init_insns);
3596 /* Record whether or not we created a REG_EQUIV note for a LABEL_REF.
3597 We might end up substituting the LABEL_REF for uses of the
3598 pseudo here or later. That kind of transformation may turn an
3599 indirect jump into a direct jump, in which case we must rerun the
3600 jump optimizer to ensure that the JUMP_LABEL fields are valid. */
3601 if (GET_CODE (x) == LABEL_REF
3602 || (GET_CODE (x) == CONST
3603 && GET_CODE (XEXP (x, 0)) == PLUS
3604 && (GET_CODE (XEXP (XEXP (x, 0), 0)) == LABEL_REF)))
3605 recorded_label_ref = 1;
3607 reg_equiv[regno].replacement = x;
3608 reg_equiv[regno].src_p = &SET_SRC (set);
3609 reg_equiv[regno].loop_depth = (short) loop_depth;
3611 /* Don't mess with things live during setjmp. */
3612 if (REG_LIVE_LENGTH (regno) >= 0 && optimize)
3614 /* Note that the statement below does not affect the priority
3615 in local-alloc! */
3616 REG_LIVE_LENGTH (regno) *= 2;
3618 /* If the register is referenced exactly twice, meaning it is
3619 set once and used once, indicate that the reference may be
3620 replaced by the equivalence we computed above. Do this
3621 even if the register is only used in one block so that
3622 dependencies can be handled where the last register is
3623 used in a different block (i.e. HIGH / LO_SUM sequences)
3624 and to reduce the number of registers alive across
3625 calls. */
3627 if (REG_N_REFS (regno) == 2
3628 && (rtx_equal_p (x, src)
3629 || ! equiv_init_varies_p (src))
3630 && NONJUMP_INSN_P (insn)
3631 && equiv_init_movable_p (PATTERN (insn), regno))
3632 reg_equiv[regno].replace = 1;
3638 if (!optimize)
3639 goto out;
3641 /* A second pass, to gather additional equivalences with memory. This needs
3642 to be done after we know which registers we are going to replace. */
3644 for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
3646 rtx set, src, dest;
3647 unsigned regno;
3649 if (! INSN_P (insn))
3650 continue;
3652 set = single_set (insn);
3653 if (! set)
3654 continue;
3656 dest = SET_DEST (set);
3657 src = SET_SRC (set);
3659 /* If this sets a MEM to the contents of a REG that is only used
3660 in a single basic block, see if the register is always equivalent
3661 to that memory location and if moving the store from INSN to the
3662 insn that set REG is safe. If so, put a REG_EQUIV note on the
3663 initializing insn.
3665 Don't add a REG_EQUIV note if the insn already has one. The existing
3666 REG_EQUIV is likely more useful than the one we are adding.
3668 If one of the regs in the address has reg_equiv[REGNO].replace set,
3669 then we can't add this REG_EQUIV note. The reg_equiv[REGNO].replace
3670 optimization may move the set of this register immediately before
3671 insn, which puts it after reg_equiv[REGNO].init_insns, and hence
3672 the mention in the REG_EQUIV note would be to an uninitialized
3673 pseudo. */
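      /* For illustration (hypothetical pseudo and insn numbers): given

 	   (insn 5  (set (reg 100) ...))
 	   ...
 	   (insn 20 (set (mem/c (plus (reg/f sfp) (const_int -8)))
 			 (reg 100)))

 	 the checks below may put a REG_EQUIV note for the stack slot on
 	 insn 5, provided reg 100 is used in a single block, the MEM is
 	 unchanged for the life of reg 100, and insn 5 does not already
 	 carry a REG_EQUIV note.  */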
3675 if (MEM_P (dest) && REG_P (src)
3676 && (regno = REGNO (src)) >= FIRST_PSEUDO_REGISTER
3677 && REG_BASIC_BLOCK (regno) >= NUM_FIXED_BLOCKS
3678 && DF_REG_DEF_COUNT (regno) == 1
3679 && reg_equiv[regno].init_insns != NULL
3680 && reg_equiv[regno].init_insns->insn () != NULL
3681 && ! find_reg_note (XEXP (reg_equiv[regno].init_insns, 0),
3682 REG_EQUIV, NULL_RTX)
3683 && ! contains_replace_regs (XEXP (dest, 0))
3684 && ! pdx_subregs[regno])
3686 rtx_insn *init_insn =
3687 as_a <rtx_insn *> (XEXP (reg_equiv[regno].init_insns, 0));
3688 if (validate_equiv_mem (init_insn, src, dest)
3689 && ! memref_used_between_p (dest, init_insn, insn)
3690 /* Attaching a REG_EQUIV note will fail if INIT_INSN has
3691 multiple sets. */
3692 && set_unique_reg_note (init_insn, REG_EQUIV, copy_rtx (dest)))
3694 /* This insn makes the equivalence, not the one initializing
3695 the register. */
3696 ira_reg_equiv[regno].init_insns
3697 = gen_rtx_INSN_LIST (VOIDmode, insn, NULL_RTX);
3698 df_notes_rescan (init_insn);
3703 cleared_regs = BITMAP_ALLOC (NULL);
3704 /* Now scan all regs killed in an insn to see if any of them are
3705      registers that are used only once.  If so, see if we can replace the
3706 reference with the equivalent form. If we can, delete the
3707 initializing reference and this register will go away. If we
3708 can't replace the reference, and the initializing reference is
3709 within the same loop (or in an inner loop), then move the register
3710 initialization just before the use, so that they are in the same
3711 basic block. */
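   /* A sketch of both outcomes (hypothetical numbers): if

        (insn 7 (set (reg 100) (const_int 5)))

      is the sole init insn of reg 100 and reg 100 dies in insn 30, the
      loop below either substitutes (const_int 5) into insn 30 and
      deletes insn 7, or, failing that, re-emits the initialization
      immediately before insn 30.  */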
3712 FOR_EACH_BB_REVERSE_FN (bb, cfun)
3714 loop_depth = bb_loop_depth (bb);
3715 for (insn = BB_END (bb);
3716 insn != PREV_INSN (BB_HEAD (bb));
3717 insn = PREV_INSN (insn))
3719 rtx link;
3721 if (! INSN_P (insn))
3722 continue;
3724 	  /* Don't substitute into a non-local goto; this confuses the CFG.  */
3725 if (JUMP_P (insn)
3726 && find_reg_note (insn, REG_NON_LOCAL_GOTO, NULL_RTX))
3727 continue;
3729 for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
3731 if (REG_NOTE_KIND (link) == REG_DEAD
3732 /* Make sure this insn still refers to the register. */
3733 && reg_mentioned_p (XEXP (link, 0), PATTERN (insn)))
3735 int regno = REGNO (XEXP (link, 0));
3736 rtx equiv_insn;
3738 if (! reg_equiv[regno].replace
3739 || reg_equiv[regno].loop_depth < (short) loop_depth
3740 		    /* It makes no sense to move insns if live range
3741 		       shrinkage or register pressure-sensitive
3742 		       scheduling was done, because it will not
3743 		       improve allocation but will most likely
3744 		       worsen the insn schedule.  */
3745 || flag_live_range_shrinkage
3746 || (flag_sched_pressure && flag_schedule_insns))
3747 continue;
3749 /* reg_equiv[REGNO].replace gets set only when
3750 REG_N_REFS[REGNO] is 2, i.e. the register is set
3751 once and used once. (If it were only set, but
3752 not used, flow would have deleted the setting
3753 insns.) Hence there can only be one insn in
3754 reg_equiv[REGNO].init_insns. */
3755 gcc_assert (reg_equiv[regno].init_insns
3756 && !XEXP (reg_equiv[regno].init_insns, 1));
3757 equiv_insn = XEXP (reg_equiv[regno].init_insns, 0);
3759 /* We may not move instructions that can throw, since
3760 that changes basic block boundaries and we are not
3761 prepared to adjust the CFG to match. */
3762 if (can_throw_internal (equiv_insn))
3763 continue;
3765 if (asm_noperands (PATTERN (equiv_insn)) < 0
3766 && validate_replace_rtx (regno_reg_rtx[regno],
3767 *(reg_equiv[regno].src_p), insn))
3769 rtx equiv_link;
3770 rtx last_link;
3771 rtx note;
3773 /* Find the last note. */
3774 for (last_link = link; XEXP (last_link, 1);
3775 last_link = XEXP (last_link, 1))
3778 /* Append the REG_DEAD notes from equiv_insn. */
3779 equiv_link = REG_NOTES (equiv_insn);
3780 while (equiv_link)
3782 note = equiv_link;
3783 equiv_link = XEXP (equiv_link, 1);
3784 if (REG_NOTE_KIND (note) == REG_DEAD)
3786 remove_note (equiv_insn, note);
3787 XEXP (last_link, 1) = note;
3788 XEXP (note, 1) = NULL_RTX;
3789 last_link = note;
3793 remove_death (regno, insn);
3794 SET_REG_N_REFS (regno, 0);
3795 REG_FREQ (regno) = 0;
3796 delete_insn (equiv_insn);
3798 reg_equiv[regno].init_insns
3799 = reg_equiv[regno].init_insns->next ();
3801 ira_reg_equiv[regno].init_insns = NULL;
3802 bitmap_set_bit (cleared_regs, regno);
3804 /* Move the initialization of the register to just before
3805 INSN. Update the flow information. */
3806 else if (prev_nondebug_insn (insn) != equiv_insn)
3808 rtx_insn *new_insn;
3810 new_insn = emit_insn_before (PATTERN (equiv_insn), insn);
3811 REG_NOTES (new_insn) = REG_NOTES (equiv_insn);
3812 REG_NOTES (equiv_insn) = 0;
3813 /* Rescan it to process the notes. */
3814 df_insn_rescan (new_insn);
3816 /* Make sure this insn is recognized before
3817 reload begins, otherwise
3818 eliminate_regs_in_insn will die. */
3819 INSN_CODE (new_insn) = INSN_CODE (equiv_insn);
3821 delete_insn (equiv_insn);
3823 XEXP (reg_equiv[regno].init_insns, 0) = new_insn;
3825 REG_BASIC_BLOCK (regno) = bb->index;
3826 REG_N_CALLS_CROSSED (regno) = 0;
3827 REG_FREQ_CALLS_CROSSED (regno) = 0;
3828 REG_N_THROWING_CALLS_CROSSED (regno) = 0;
3829 REG_LIVE_LENGTH (regno) = 2;
3831 if (insn == BB_HEAD (bb))
3832 BB_HEAD (bb) = PREV_INSN (insn);
3834 ira_reg_equiv[regno].init_insns
3835 = gen_rtx_INSN_LIST (VOIDmode, new_insn, NULL_RTX);
3836 bitmap_set_bit (cleared_regs, regno);
3843 if (!bitmap_empty_p (cleared_regs))
3845 FOR_EACH_BB_FN (bb, cfun)
3847 bitmap_and_compl_into (DF_LR_IN (bb), cleared_regs);
3848 bitmap_and_compl_into (DF_LR_OUT (bb), cleared_regs);
3849 if (! df_live)
3850 continue;
3851 bitmap_and_compl_into (DF_LIVE_IN (bb), cleared_regs);
3852 bitmap_and_compl_into (DF_LIVE_OUT (bb), cleared_regs);
3855 /* Last pass - adjust debug insns referencing cleared regs. */
3856 if (MAY_HAVE_DEBUG_INSNS)
3857 for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
3858 if (DEBUG_INSN_P (insn))
3860 rtx old_loc = INSN_VAR_LOCATION_LOC (insn);
3861 INSN_VAR_LOCATION_LOC (insn)
3862 = simplify_replace_fn_rtx (old_loc, NULL_RTX,
3863 adjust_cleared_regs,
3864 (void *) cleared_regs);
3865 if (old_loc != INSN_VAR_LOCATION_LOC (insn))
3866 df_insn_rescan (insn);
3870 BITMAP_FREE (cleared_regs);
3872 out:
3873 /* Clean up. */
3875 end_alias_analysis ();
3876 free (reg_equiv);
3877 free (pdx_subregs);
3878 return recorded_label_ref;
3883 /* Set up the fields memory, constant, and invariant of the elements
3884    of array ira_reg_equiv from their init_insns.  */
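/* Illustrative outcomes of the classification below (hypothetical
   pseudo numbers; target-independent RTL sketches):

     (reg 100) equiv (mem (plus (reg/f sfp) (const_int 16)))  -> memory
     (reg 101) equiv (const_int 42)                           -> constant
     (reg 102) equiv (plus (reg/f fp) (const_int 8))          -> invariant  */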
3885 static void
3886 setup_reg_equiv (void)
3888 int i;
3889 rtx_insn_list *elem, *prev_elem, *next_elem;
3890 rtx_insn *insn;
3891 rtx set, x;
3893 for (i = FIRST_PSEUDO_REGISTER; i < ira_reg_equiv_len; i++)
3894 for (prev_elem = NULL, elem = ira_reg_equiv[i].init_insns;
3895 elem;
3896 prev_elem = elem, elem = next_elem)
3898 next_elem = elem->next ();
3899 insn = elem->insn ();
3900 set = single_set (insn);
3902       /* Init insns can set up an equivalence when the reg is a destination or
3903 	 a source (in the latter case the destination is memory).  */
3904 if (set != 0 && (REG_P (SET_DEST (set)) || REG_P (SET_SRC (set))))
3906 if ((x = find_reg_note (insn, REG_EQUIV, NULL_RTX)) != NULL)
3908 x = XEXP (x, 0);
3909 if (REG_P (SET_DEST (set))
3910 && REGNO (SET_DEST (set)) == (unsigned int) i
3911 && ! rtx_equal_p (SET_SRC (set), x) && MEM_P (x))
3913 		  /* This insn reports the equivalence but
3914 		     does not actually set it.  Remove it
3915 		     from the list.  */
3916 if (prev_elem == NULL)
3917 ira_reg_equiv[i].init_insns = next_elem;
3918 else
3919 XEXP (prev_elem, 1) = next_elem;
3920 elem = prev_elem;
3923 else if (REG_P (SET_DEST (set))
3924 && REGNO (SET_DEST (set)) == (unsigned int) i)
3925 x = SET_SRC (set);
3926 else
3928 gcc_assert (REG_P (SET_SRC (set))
3929 && REGNO (SET_SRC (set)) == (unsigned int) i);
3930 x = SET_DEST (set);
3932 if (! function_invariant_p (x)
3933 || ! flag_pic
3934 /* A function invariant is often CONSTANT_P but may
3935 include a register. We promise to only pass
3936 CONSTANT_P objects to LEGITIMATE_PIC_OPERAND_P. */
3937 || (CONSTANT_P (x) && LEGITIMATE_PIC_OPERAND_P (x)))
3939 /* It can happen that a REG_EQUIV note contains a MEM
3940 that is not a legitimate memory operand. As later
3941 stages of reload assume that all addresses found in
3942 the lra_regno_equiv_* arrays were originally
3943 legitimate, we ignore such REG_EQUIV notes. */
3944 if (memory_operand (x, VOIDmode))
3946 ira_reg_equiv[i].defined_p = true;
3947 ira_reg_equiv[i].memory = x;
3948 continue;
3950 else if (function_invariant_p (x))
3952 machine_mode mode;
3954 mode = GET_MODE (SET_DEST (set));
3955 if (GET_CODE (x) == PLUS
3956 || x == frame_pointer_rtx || x == arg_pointer_rtx)
3957 /* This is PLUS of frame pointer and a constant,
3958 or fp, or argp. */
3959 ira_reg_equiv[i].invariant = x;
3960 else if (targetm.legitimate_constant_p (mode, x))
3961 ira_reg_equiv[i].constant = x;
3962 else
3964 ira_reg_equiv[i].memory = force_const_mem (mode, x);
3965 if (ira_reg_equiv[i].memory == NULL_RTX)
3967 ira_reg_equiv[i].defined_p = false;
3968 ira_reg_equiv[i].init_insns = NULL;
3969 break;
3972 ira_reg_equiv[i].defined_p = true;
3973 continue;
3977 ira_reg_equiv[i].defined_p = false;
3978 ira_reg_equiv[i].init_insns = NULL;
3979 break;
3985 /* Print chain C to FILE. */
3986 static void
3987 print_insn_chain (FILE *file, struct insn_chain *c)
3989 fprintf (file, "insn=%d, ", INSN_UID (c->insn));
3990 bitmap_print (file, &c->live_throughout, "live_throughout: ", ", ");
3991 bitmap_print (file, &c->dead_or_set, "dead_or_set: ", "\n");
3995 /* Print all reload_insn_chains to FILE. */
3996 static void
3997 print_insn_chains (FILE *file)
3999 struct insn_chain *c;
4000 for (c = reload_insn_chain; c ; c = c->next)
4001 print_insn_chain (file, c);
4004 /* Return true if pseudo REGNO should be added to set live_throughout
4005 or dead_or_set of the insn chains for reload consideration. */
4006 static bool
4007 pseudo_for_reload_consideration_p (int regno)
4009   /* Consider spilled pseudos too for IRA because they still have a
4010      chance to get hard registers during reload when IRA is used.  */
4011 return (reg_renumber[regno] >= 0 || ira_conflicts_p);
4014 /* Init LIVE_SUBREGS[ALLOCNUM] and LIVE_SUBREGS_USED[ALLOCNUM], using
4015    REG to determine the number of bytes to track and INIT_VALUE to
4016    choose the initialization.  ALLOCNUM need not be the regno of REG.  */
4017 static void
4018 init_live_subregs (bool init_value, sbitmap *live_subregs,
4019 bitmap live_subregs_used, int allocnum, rtx reg)
4021 unsigned int regno = REGNO (SUBREG_REG (reg));
4022 int size = GET_MODE_SIZE (GET_MODE (regno_reg_rtx[regno]));
4024 gcc_assert (size > 0);
4026 /* Been there, done that. */
4027 if (bitmap_bit_p (live_subregs_used, allocnum))
4028 return;
4030 /* Create a new one. */
4031 if (live_subregs[allocnum] == NULL)
4032 live_subregs[allocnum] = sbitmap_alloc (size);
4034   /* If the entire reg was live before being split into subregs,
4035      initialize all of the subreg bits to ones; otherwise to zero.  */
4036 if (init_value)
4037 bitmap_ones (live_subregs[allocnum]);
4038 else
4039 bitmap_clear (live_subregs[allocnum]);
4041 bitmap_set_bit (live_subregs_used, allocnum);
4044 /* Walk the insns of the current function and build reload_insn_chain,
4045 and record register life information. */
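/* A sketch of one resulting chain element, in the format produced by
   print_insn_chain above (hypothetical numbers):

     insn=42, live_throughout: 100, 103, dead_or_set: 104

   i.e. pseudos 100 and 103 are live across insn 42, while reg 104
   dies or is set there.  */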
4046 static void
4047 build_insn_chain (void)
4049 unsigned int i;
4050 struct insn_chain **p = &reload_insn_chain;
4051 basic_block bb;
4052 struct insn_chain *c = NULL;
4053 struct insn_chain *next = NULL;
4054 bitmap live_relevant_regs = BITMAP_ALLOC (NULL);
4055 bitmap elim_regset = BITMAP_ALLOC (NULL);
4056   /* live_subregs is a vector used to keep accurate information about
4057      which hardregs are live in multiword pseudos.  live_subregs and
4058      live_subregs_used are indexed by pseudo number.  The live_subregs
4059      entry for a particular pseudo is only used if the corresponding
4060      element is nonzero in live_subregs_used.  The sbitmap size of
4061      live_subregs[allocno] is the number of bytes that the pseudo can
4062      occupy.  */
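  /* For example (a sketch assuming a 32-bit target): a DImode pseudo
     occupies 8 bytes, so its live_subregs entry has 8 bits, one per
     byte; a def of (subreg:SI (reg:DI 100) 4) clears bits 4-7 while
     bits 0-3 stay live.  */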
4063 sbitmap *live_subregs = XCNEWVEC (sbitmap, max_regno);
4064 bitmap live_subregs_used = BITMAP_ALLOC (NULL);
4066 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
4067 if (TEST_HARD_REG_BIT (eliminable_regset, i))
4068 bitmap_set_bit (elim_regset, i);
4069 FOR_EACH_BB_REVERSE_FN (bb, cfun)
4071 bitmap_iterator bi;
4072 rtx_insn *insn;
4074 CLEAR_REG_SET (live_relevant_regs);
4075 bitmap_clear (live_subregs_used);
4077 EXECUTE_IF_SET_IN_BITMAP (df_get_live_out (bb), 0, i, bi)
4079 if (i >= FIRST_PSEUDO_REGISTER)
4080 break;
4081 bitmap_set_bit (live_relevant_regs, i);
4084 EXECUTE_IF_SET_IN_BITMAP (df_get_live_out (bb),
4085 FIRST_PSEUDO_REGISTER, i, bi)
4087 if (pseudo_for_reload_consideration_p (i))
4088 bitmap_set_bit (live_relevant_regs, i);
4091 FOR_BB_INSNS_REVERSE (bb, insn)
4093 if (!NOTE_P (insn) && !BARRIER_P (insn))
4095 struct df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
4096 df_ref def, use;
4098 c = new_insn_chain ();
4099 c->next = next;
4100 next = c;
4101 *p = c;
4102 p = &c->prev;
4104 c->insn = insn;
4105 c->block = bb->index;
4107 if (NONDEBUG_INSN_P (insn))
4108 FOR_EACH_INSN_INFO_DEF (def, insn_info)
4110 unsigned int regno = DF_REF_REGNO (def);
4112 		/* Ignore may-clobbers because these are generated
4113 		   from calls.  However, every other kind of def is
4114 		   added to dead_or_set.  */
4115 if (!DF_REF_FLAGS_IS_SET (def, DF_REF_MAY_CLOBBER))
4117 if (regno < FIRST_PSEUDO_REGISTER)
4119 if (!fixed_regs[regno])
4120 bitmap_set_bit (&c->dead_or_set, regno);
4122 else if (pseudo_for_reload_consideration_p (regno))
4123 bitmap_set_bit (&c->dead_or_set, regno);
4126 if ((regno < FIRST_PSEUDO_REGISTER
4127 || reg_renumber[regno] >= 0
4128 || ira_conflicts_p)
4129 && (!DF_REF_FLAGS_IS_SET (def, DF_REF_CONDITIONAL)))
4131 rtx reg = DF_REF_REG (def);
4133 /* We can model subregs, but not if they are
4134 wrapped in ZERO_EXTRACTS. */
4135 if (GET_CODE (reg) == SUBREG
4136 && !DF_REF_FLAGS_IS_SET (def, DF_REF_ZERO_EXTRACT))
4138 unsigned int start = SUBREG_BYTE (reg);
4139 unsigned int last = start
4140 + GET_MODE_SIZE (GET_MODE (reg));
4142 init_live_subregs
4143 (bitmap_bit_p (live_relevant_regs, regno),
4144 live_subregs, live_subregs_used, regno, reg);
4146 if (!DF_REF_FLAGS_IS_SET
4147 (def, DF_REF_STRICT_LOW_PART))
4149 /* Expand the range to cover entire words.
4150 Bytes added here are "don't care". */
4151 start
4152 = start / UNITS_PER_WORD * UNITS_PER_WORD;
4153 last = ((last + UNITS_PER_WORD - 1)
4154 / UNITS_PER_WORD * UNITS_PER_WORD);
4157 /* Ignore the paradoxical bits. */
4158 if (last > SBITMAP_SIZE (live_subregs[regno]))
4159 last = SBITMAP_SIZE (live_subregs[regno]);
4161 while (start < last)
4163 bitmap_clear_bit (live_subregs[regno], start);
4164 start++;
4167 if (bitmap_empty_p (live_subregs[regno]))
4169 bitmap_clear_bit (live_subregs_used, regno);
4170 bitmap_clear_bit (live_relevant_regs, regno);
4172 else
4173 /* Set live_relevant_regs here because
4174 that bit has to be true to get us to
4175 look at the live_subregs fields. */
4176 bitmap_set_bit (live_relevant_regs, regno);
4178 else
4180 /* DF_REF_PARTIAL is generated for
4181 subregs, STRICT_LOW_PART, and
4182 ZERO_EXTRACT. We handle the subreg
4183 case above so here we have to keep from
4184 modeling the def as a killing def. */
4185 if (!DF_REF_FLAGS_IS_SET (def, DF_REF_PARTIAL))
4187 bitmap_clear_bit (live_subregs_used, regno);
4188 bitmap_clear_bit (live_relevant_regs, regno);
4194 bitmap_and_compl_into (live_relevant_regs, elim_regset);
4195 bitmap_copy (&c->live_throughout, live_relevant_regs);
4197 if (NONDEBUG_INSN_P (insn))
4198 FOR_EACH_INSN_INFO_USE (use, insn_info)
4200 unsigned int regno = DF_REF_REGNO (use);
4201 rtx reg = DF_REF_REG (use);
4203 /* DF_REF_READ_WRITE on a use means that this use
4204 is fabricated from a def that is a partial set
4205 to a multiword reg. Here, we only model the
4206 subreg case that is not wrapped in ZERO_EXTRACT
4207 precisely so we do not need to look at the
4208 fabricated use. */
4209 if (DF_REF_FLAGS_IS_SET (use, DF_REF_READ_WRITE)
4210 && !DF_REF_FLAGS_IS_SET (use, DF_REF_ZERO_EXTRACT)
4211 && DF_REF_FLAGS_IS_SET (use, DF_REF_SUBREG))
4212 continue;
4214 /* Add the last use of each var to dead_or_set. */
4215 if (!bitmap_bit_p (live_relevant_regs, regno))
4217 if (regno < FIRST_PSEUDO_REGISTER)
4219 if (!fixed_regs[regno])
4220 bitmap_set_bit (&c->dead_or_set, regno);
4222 else if (pseudo_for_reload_consideration_p (regno))
4223 bitmap_set_bit (&c->dead_or_set, regno);
4226 if (regno < FIRST_PSEUDO_REGISTER
4227 || pseudo_for_reload_consideration_p (regno))
4229 if (GET_CODE (reg) == SUBREG
4230 && !DF_REF_FLAGS_IS_SET (use,
4231 DF_REF_SIGN_EXTRACT
4232 | DF_REF_ZERO_EXTRACT))
4234 unsigned int start = SUBREG_BYTE (reg);
4235 unsigned int last = start
4236 + GET_MODE_SIZE (GET_MODE (reg));
4238 init_live_subregs
4239 (bitmap_bit_p (live_relevant_regs, regno),
4240 live_subregs, live_subregs_used, regno, reg);
4242 /* Ignore the paradoxical bits. */
4243 if (last > SBITMAP_SIZE (live_subregs[regno]))
4244 last = SBITMAP_SIZE (live_subregs[regno]);
4246 while (start < last)
4248 bitmap_set_bit (live_subregs[regno], start);
4249 start++;
4252 else
4253 		      /* Resetting live_subregs_used effectively
4254 			 says not to use the subregs, because we
4255 			 are reading the whole
4256 			 pseudo.  */
4257 bitmap_clear_bit (live_subregs_used, regno);
4258 bitmap_set_bit (live_relevant_regs, regno);
4264 /* FIXME!! The following code is a disaster. Reload needs to see the
4265 labels and jump tables that are just hanging out in between
4266 the basic blocks. See pr33676. */
4267 insn = BB_HEAD (bb);
4269 /* Skip over the barriers and cruft. */
4270 while (insn && (BARRIER_P (insn) || NOTE_P (insn)
4271 || BLOCK_FOR_INSN (insn) == bb))
4272 insn = PREV_INSN (insn);
4274       /* Although we add anything except barriers and notes, the
4275 	 focus is to get the labels and jump tables into the
4276 	 reload_insn_chain.  */
4277 while (insn)
4279 if (!NOTE_P (insn) && !BARRIER_P (insn))
4281 if (BLOCK_FOR_INSN (insn))
4282 break;
4284 c = new_insn_chain ();
4285 c->next = next;
4286 next = c;
4287 *p = c;
4288 p = &c->prev;
4290 /* The block makes no sense here, but it is what the old
4291 code did. */
4292 c->block = bb->index;
4293 c->insn = insn;
4294 bitmap_copy (&c->live_throughout, live_relevant_regs);
4296 insn = PREV_INSN (insn);
4300 reload_insn_chain = c;
4301 *p = NULL;
4303 for (i = 0; i < (unsigned int) max_regno; i++)
4304 if (live_subregs[i] != NULL)
4305 sbitmap_free (live_subregs[i]);
4306 free (live_subregs);
4307 BITMAP_FREE (live_subregs_used);
4308 BITMAP_FREE (live_relevant_regs);
4309 BITMAP_FREE (elim_regset);
4311 if (dump_file)
4312 print_insn_chains (dump_file);
4315 /* Examine the rtx found in *LOC, which is read or written to as determined
4316 by TYPE. Return false if we find a reason why an insn containing this
4317 rtx should not be moved (such as accesses to non-constant memory), true
4318 otherwise. */
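/* For instance (illustrative): a SET whose source is a MEM_READONLY_P
   load is moveable, while any other MEM access, a hard register other
   than the frame pointer, or an UNSPEC_VOLATILE makes the insn
   unmoveable.  */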
4319 static bool
4320 rtx_moveable_p (rtx *loc, enum op_type type)
4322 const char *fmt;
4323 rtx x = *loc;
4324 enum rtx_code code = GET_CODE (x);
4325 int i, j;
4328 switch (code)
4330 case CONST:
4331 CASE_CONST_ANY:
4332 case SYMBOL_REF:
4333 case LABEL_REF:
4334 return true;
4336 case PC:
4337 return type == OP_IN;
4339 case CC0:
4340 return false;
4342 case REG:
4343 if (x == frame_pointer_rtx)
4344 return true;
4345 if (HARD_REGISTER_P (x))
4346 return false;
4348 return true;
4350 case MEM:
4351 if (type == OP_IN && MEM_READONLY_P (x))
4352 return rtx_moveable_p (&XEXP (x, 0), OP_IN);
4353 return false;
4355 case SET:
4356 return (rtx_moveable_p (&SET_SRC (x), OP_IN)
4357 && rtx_moveable_p (&SET_DEST (x), OP_OUT));
4359 case STRICT_LOW_PART:
4360 return rtx_moveable_p (&XEXP (x, 0), OP_OUT);
4362 case ZERO_EXTRACT:
4363 case SIGN_EXTRACT:
4364 return (rtx_moveable_p (&XEXP (x, 0), type)
4365 && rtx_moveable_p (&XEXP (x, 1), OP_IN)
4366 && rtx_moveable_p (&XEXP (x, 2), OP_IN));
4368 case CLOBBER:
4369 return rtx_moveable_p (&SET_DEST (x), OP_OUT);
4371 case UNSPEC_VOLATILE:
4372       /* It is a bad idea to consider insns with such rtl
4373 	 as moveable ones.  The insn scheduler also considers them as a barrier
4374 	 for a reason.  */
4375 return false;
4377 default:
4378 break;
4381 fmt = GET_RTX_FORMAT (code);
4382 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
4384 if (fmt[i] == 'e')
4386 if (!rtx_moveable_p (&XEXP (x, i), type))
4387 return false;
4389 else if (fmt[i] == 'E')
4390 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
4392 if (!rtx_moveable_p (&XVECEXP (x, i, j), type))
4393 return false;
4396 return true;
4399 /* A wrapper around dominated_by_p, which uses the information in UID_LUID
4400 to give dominance relationships between two insns I1 and I2. */
4401 static bool
4402 insn_dominated_by_p (rtx i1, rtx i2, int *uid_luid)
4404 basic_block bb1 = BLOCK_FOR_INSN (i1);
4405 basic_block bb2 = BLOCK_FOR_INSN (i2);
4407 if (bb1 == bb2)
4408 return uid_luid[INSN_UID (i2)] < uid_luid[INSN_UID (i1)];
4409 return dominated_by_p (CDI_DOMINATORS, bb1, bb2);
4412 /* Record the range of register numbers added by find_moveable_pseudos. */
4413 int first_moveable_pseudo, last_moveable_pseudo;
4415 /* This vector holds data for every register added by
4416    find_moveable_pseudos, with index 0 holding data for the
4417    first_moveable_pseudo.  */
4418 /* The original home register. */
4419 static vec<rtx> pseudo_replaced_reg;
4421 /* Look for instances where we have an instruction that is known to increase
4422 register pressure, and whose result is not used immediately. If it is
4423 possible to move the instruction downwards to just before its first use,
4424 split its lifetime into two ranges. We create a new pseudo to compute the
4425 value, and emit a move instruction just before the first use. If, after
4426 register allocation, the new pseudo remains unallocated, the function
4427 move_unallocated_pseudos then deletes the move instruction and places
4428 the computation just before the first use.
4430 Such a move is safe and profitable if all the input registers remain live
4431 and unchanged between the original computation and its first use. In such
4432 a situation, the computation is known to increase register pressure, and
4433 moving it is known to at least not worsen it.
4435 We restrict moves to only those cases where a register remains unallocated,
4436 in order to avoid interfering too much with the instruction schedule. As
4437 an exception, we may move insns which only modify their input register
4438 (typically induction variables), as this increases the freedom for our
4439 intended transformation, and does not limit the second instruction
4440 scheduler pass. */
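/* A sketch of the transformation (hypothetical pseudo numbers):

     before:                          after:
       (set (reg 100) (expr))           (set (reg 200) (expr))
       ... unrelated insns ...          ... unrelated insns ...
       first use of (reg 100)           (set (reg 100) (reg 200))
                                        first use of (reg 100)

   If reg 200 is still unallocated after coloring,
   move_unallocated_pseudos deletes the move and sinks the original
   computation down to the first use.  */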
4442 static void
4443 find_moveable_pseudos (void)
4445 unsigned i;
4446 int max_regs = max_reg_num ();
4447 int max_uid = get_max_uid ();
4448 basic_block bb;
4449 int *uid_luid = XNEWVEC (int, max_uid);
4450 rtx_insn **closest_uses = XNEWVEC (rtx_insn *, max_regs);
4451 /* A set of registers which are live but not modified throughout a block. */
4452 bitmap_head *bb_transp_live = XNEWVEC (bitmap_head,
4453 last_basic_block_for_fn (cfun));
4454 /* A set of registers which only exist in a given basic block. */
4455 bitmap_head *bb_local = XNEWVEC (bitmap_head,
4456 last_basic_block_for_fn (cfun));
4457 /* A set of registers which are set once, in an instruction that can be
4458 moved freely downwards, but are otherwise transparent to a block. */
4459 bitmap_head *bb_moveable_reg_sets = XNEWVEC (bitmap_head,
4460 last_basic_block_for_fn (cfun));
4461 bitmap_head live, used, set, interesting, unusable_as_input;
4462 bitmap_iterator bi;
4463 bitmap_initialize (&interesting, 0);
4465 first_moveable_pseudo = max_regs;
4466 pseudo_replaced_reg.release ();
4467 pseudo_replaced_reg.safe_grow_cleared (max_regs);
4469 df_analyze ();
4470 calculate_dominance_info (CDI_DOMINATORS);
4472 i = 0;
4473 bitmap_initialize (&live, 0);
4474 bitmap_initialize (&used, 0);
4475 bitmap_initialize (&set, 0);
4476 bitmap_initialize (&unusable_as_input, 0);
4477 FOR_EACH_BB_FN (bb, cfun)
4479 rtx_insn *insn;
4480 bitmap transp = bb_transp_live + bb->index;
4481 bitmap moveable = bb_moveable_reg_sets + bb->index;
4482 bitmap local = bb_local + bb->index;
4484 bitmap_initialize (local, 0);
4485 bitmap_initialize (transp, 0);
4486 bitmap_initialize (moveable, 0);
4487 bitmap_copy (&live, df_get_live_out (bb));
4488 bitmap_and_into (&live, df_get_live_in (bb));
4489 bitmap_copy (transp, &live);
4490 bitmap_clear (moveable);
4491 bitmap_clear (&live);
4492 bitmap_clear (&used);
4493 bitmap_clear (&set);
4494 FOR_BB_INSNS (bb, insn)
4495 if (NONDEBUG_INSN_P (insn))
4497 df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
4498 df_ref def, use;
4500 uid_luid[INSN_UID (insn)] = i++;
4502 def = df_single_def (insn_info);
4503 use = df_single_use (insn_info);
4504 if (use
4505 && def
4506 && DF_REF_REGNO (use) == DF_REF_REGNO (def)
4507 && !bitmap_bit_p (&set, DF_REF_REGNO (use))
4508 && rtx_moveable_p (&PATTERN (insn), OP_IN))
4510 unsigned regno = DF_REF_REGNO (use);
4511 bitmap_set_bit (moveable, regno);
4512 bitmap_set_bit (&set, regno);
4513 bitmap_set_bit (&used, regno);
4514 bitmap_clear_bit (transp, regno);
4515 continue;
4517 FOR_EACH_INSN_INFO_USE (use, insn_info)
4519 unsigned regno = DF_REF_REGNO (use);
4520 bitmap_set_bit (&used, regno);
4521 if (bitmap_clear_bit (moveable, regno))
4522 bitmap_clear_bit (transp, regno);
4525 FOR_EACH_INSN_INFO_DEF (def, insn_info)
4527 unsigned regno = DF_REF_REGNO (def);
4528 bitmap_set_bit (&set, regno);
4529 bitmap_clear_bit (transp, regno);
4530 bitmap_clear_bit (moveable, regno);
4535 bitmap_clear (&live);
4536 bitmap_clear (&used);
4537 bitmap_clear (&set);
4539 FOR_EACH_BB_FN (bb, cfun)
4541 bitmap local = bb_local + bb->index;
4542 rtx_insn *insn;
4544 FOR_BB_INSNS (bb, insn)
4545 if (NONDEBUG_INSN_P (insn))
4547 df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
4548 rtx_insn *def_insn;
4549 rtx closest_use, note;
4550 df_ref def, use;
4551 unsigned regno;
4552 bool all_dominated, all_local;
4553 machine_mode mode;
4555 def = df_single_def (insn_info);
4556 /* There must be exactly one def in this insn. */
4557 if (!def || !single_set (insn))
4558 continue;
4559 /* This must be the only definition of the reg. We also limit
4560 which modes we deal with so that we can assume we can generate
4561 move instructions. */
4562 regno = DF_REF_REGNO (def);
4563 mode = GET_MODE (DF_REF_REG (def));
4564 if (DF_REG_DEF_COUNT (regno) != 1
4565 || !DF_REF_INSN_INFO (def)
4566 || HARD_REGISTER_NUM_P (regno)
4567 || DF_REG_EQ_USE_COUNT (regno) > 0
4568 || (!INTEGRAL_MODE_P (mode) && !FLOAT_MODE_P (mode)))
4569 continue;
4570 def_insn = DF_REF_INSN (def);
4572 for (note = REG_NOTES (def_insn); note; note = XEXP (note, 1))
4573 if (REG_NOTE_KIND (note) == REG_EQUIV && MEM_P (XEXP (note, 0)))
4574 break;
4576 if (note)
4578 if (dump_file)
4579 fprintf (dump_file, "Ignoring reg %d, has equiv memory\n",
4580 regno);
4581 bitmap_set_bit (&unusable_as_input, regno);
4582 continue;
4585 use = DF_REG_USE_CHAIN (regno);
4586 all_dominated = true;
4587 all_local = true;
4588 closest_use = NULL_RTX;
4589 for (; use; use = DF_REF_NEXT_REG (use))
4591 rtx_insn *insn;
4592 if (!DF_REF_INSN_INFO (use))
4594 all_dominated = false;
4595 all_local = false;
4596 break;
4598 insn = DF_REF_INSN (use);
4599 if (DEBUG_INSN_P (insn))
4600 continue;
4601 if (BLOCK_FOR_INSN (insn) != BLOCK_FOR_INSN (def_insn))
4602 all_local = false;
4603 if (!insn_dominated_by_p (insn, def_insn, uid_luid))
4604 all_dominated = false;
4605 if (closest_use != insn && closest_use != const0_rtx)
4607 if (closest_use == NULL_RTX)
4608 closest_use = insn;
4609 else if (insn_dominated_by_p (closest_use, insn, uid_luid))
4610 closest_use = insn;
4611 else if (!insn_dominated_by_p (insn, closest_use, uid_luid))
4612 closest_use = const0_rtx;
4615 if (!all_dominated)
4617 if (dump_file)
4618 fprintf (dump_file, "Reg %d not all uses dominated by set\n",
4619 regno);
4620 continue;
4622 if (all_local)
4623 bitmap_set_bit (local, regno);
4624 if (closest_use == const0_rtx || closest_use == NULL
4625 || next_nonnote_nondebug_insn (def_insn) == closest_use)
4627 if (dump_file)
4628 fprintf (dump_file, "Reg %d uninteresting%s\n", regno,
4629 closest_use == const0_rtx || closest_use == NULL
4630 ? " (no unique first use)" : "");
4631 continue;
4633 #ifdef HAVE_cc0
4634 if (reg_referenced_p (cc0_rtx, PATTERN (closest_use)))
4636 if (dump_file)
4637 fprintf (dump_file, "Reg %d: closest user uses cc0\n",
4638 regno);
4639 continue;
4641 #endif
4642 bitmap_set_bit (&interesting, regno);
4643 	  /* If we get here, we know closest_use is a non-NULL insn
4644 	     (as opposed to const0_rtx).  */
4645 closest_uses[regno] = as_a <rtx_insn *> (closest_use);
4647 if (dump_file && (all_local || all_dominated))
4649 fprintf (dump_file, "Reg %u:", regno);
4650 if (all_local)
4651 fprintf (dump_file, " local to bb %d", bb->index);
4652 if (all_dominated)
4653 fprintf (dump_file, " def dominates all uses");
4654 if (closest_use != const0_rtx)
4655 fprintf (dump_file, " has unique first use");
4656 fputs ("\n", dump_file);
4661 EXECUTE_IF_SET_IN_BITMAP (&interesting, 0, i, bi)
4663 df_ref def = DF_REG_DEF_CHAIN (i);
4664 rtx_insn *def_insn = DF_REF_INSN (def);
4665 basic_block def_block = BLOCK_FOR_INSN (def_insn);
4666 bitmap def_bb_local = bb_local + def_block->index;
4667 bitmap def_bb_moveable = bb_moveable_reg_sets + def_block->index;
4668 bitmap def_bb_transp = bb_transp_live + def_block->index;
4669 bool local_to_bb_p = bitmap_bit_p (def_bb_local, i);
4670 rtx_insn *use_insn = closest_uses[i];
4671 df_ref use;
4672 bool all_ok = true;
4673 bool all_transp = true;
4675 if (!REG_P (DF_REF_REG (def)))
4676 continue;
4678 if (!local_to_bb_p)
4680 if (dump_file)
4681 fprintf (dump_file, "Reg %u not local to one basic block\n",
4683 continue;
4685 if (reg_equiv_init (i) != NULL_RTX)
4687 if (dump_file)
4688 fprintf (dump_file, "Ignoring reg %u with equiv init insn\n",
4690 continue;
4692 if (!rtx_moveable_p (&PATTERN (def_insn), OP_IN))
4694 if (dump_file)
4695 fprintf (dump_file, "Found def insn %d for %d to be not moveable\n",
4696 INSN_UID (def_insn), i);
4697 continue;
4699 if (dump_file)
4700 fprintf (dump_file, "Examining insn %d, def for %d\n",
4701 INSN_UID (def_insn), i);
4702 FOR_EACH_INSN_USE (use, def_insn)
4704 unsigned regno = DF_REF_REGNO (use);
4705 if (bitmap_bit_p (&unusable_as_input, regno))
4707 all_ok = false;
4708 if (dump_file)
4709 fprintf (dump_file, " found unusable input reg %u.\n", regno);
4710 break;
4712 if (!bitmap_bit_p (def_bb_transp, regno))
4714 if (bitmap_bit_p (def_bb_moveable, regno)
4715 && !control_flow_insn_p (use_insn)
4716 #ifdef HAVE_cc0
4717 && !sets_cc0_p (use_insn)
4718 #endif
4721 if (modified_between_p (DF_REF_REG (use), def_insn, use_insn))
4723 rtx_insn *x = NEXT_INSN (def_insn);
4724 while (!modified_in_p (DF_REF_REG (use), x))
4726 gcc_assert (x != use_insn);
4727 x = NEXT_INSN (x);
4729 if (dump_file)
4730 fprintf (dump_file, " input reg %u modified but insn %d moveable\n",
4731 regno, INSN_UID (x));
4732 emit_insn_after (PATTERN (x), use_insn);
4733 set_insn_deleted (x);
4735 else
4737 if (dump_file)
4738 fprintf (dump_file, " input reg %u modified between def and use\n",
4739 regno);
4740 all_transp = false;
4743 else
4744 all_transp = false;
4747 if (!all_ok)
4748 continue;
4749 if (!dbg_cnt (ira_move))
4750 break;
4751 if (dump_file)
4752 fprintf (dump_file, " all ok%s\n", all_transp ? " and transp" : "");
4754 if (all_transp)
4756 rtx def_reg = DF_REF_REG (def);
4757 rtx newreg = ira_create_new_reg (def_reg);
4758 if (validate_change (def_insn, DF_REF_REAL_LOC (def), newreg, 0))
4760 unsigned nregno = REGNO (newreg);
4761 emit_insn_before (gen_move_insn (def_reg, newreg), use_insn);
4762 nregno -= max_regs;
4763 pseudo_replaced_reg[nregno] = def_reg;
4768 FOR_EACH_BB_FN (bb, cfun)
4770 bitmap_clear (bb_local + bb->index);
4771 bitmap_clear (bb_transp_live + bb->index);
4772 bitmap_clear (bb_moveable_reg_sets + bb->index);
4774 bitmap_clear (&interesting);
4775 bitmap_clear (&unusable_as_input);
4776 free (uid_luid);
4777 free (closest_uses);
4778 free (bb_local);
4779 free (bb_transp_live);
4780 free (bb_moveable_reg_sets);
4782 last_moveable_pseudo = max_reg_num ();
4784 fix_reg_equiv_init ();
4785 expand_reg_info ();
4786 regstat_free_n_sets_and_refs ();
4787 regstat_free_ri ();
4788 regstat_init_n_sets_and_refs ();
4789 regstat_compute_ri ();
4790 free_dominance_info (CDI_DOMINATORS);
4793 /* If SET pattern SET is an assignment from a hard register to a pseudo which
4794 is live at CALL_DOM (if non-NULL, otherwise this check is omitted), return
4795 the destination. Otherwise return NULL. */
4797 static rtx
4798 interesting_dest_for_shprep_1 (rtx set, basic_block call_dom)
4800 rtx src = SET_SRC (set);
4801 rtx dest = SET_DEST (set);
4802 if (!REG_P (src) || !HARD_REGISTER_P (src)
4803 || !REG_P (dest) || HARD_REGISTER_P (dest)
4804 || (call_dom && !bitmap_bit_p (df_get_live_in (call_dom), REGNO (dest))))
4805 return NULL;
4806 return dest;
4809 /* If INSN is interesting for parameter range-splitting shrink-wrapping
4810    preparation, i.e. it is a single set from a hard register to a pseudo
4811    which is live at CALL_DOM (if non-NULL, otherwise this check is
4812    omitted), or a PARALLEL containing exactly one such set, return the
4813    destination.  Otherwise return NULL.  */
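/* E.g. (illustrative; the hard register is hypothetical): an
   incoming-argument copy such as

     (set (reg/v 100) (reg:SI <arg hard reg>))

   qualifies, as does a PARALLEL wrapping exactly one such SET along
   with USEs and CLOBBERs.  */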
4815 static rtx
4816 interesting_dest_for_shprep (rtx_insn *insn, basic_block call_dom)
4818 if (!INSN_P (insn))
4819 return NULL;
4820 rtx pat = PATTERN (insn);
4821 if (GET_CODE (pat) == SET)
4822 return interesting_dest_for_shprep_1 (pat, call_dom);
4824 if (GET_CODE (pat) != PARALLEL)
4825 return NULL;
4826 rtx ret = NULL;
4827 for (int i = 0; i < XVECLEN (pat, 0); i++)
4829 rtx sub = XVECEXP (pat, 0, i);
4830 if (GET_CODE (sub) == USE || GET_CODE (sub) == CLOBBER)
4831 continue;
4832 if (GET_CODE (sub) != SET
4833 || side_effects_p (sub))
4834 return NULL;
4835 rtx dest = interesting_dest_for_shprep_1 (sub, call_dom);
4836 if (dest && ret)
4837 return NULL;
4838 if (dest)
4839 ret = dest;
4841 return ret;
4844 /* Split the live ranges of pseudos that are loaded from hard registers
4845    in the first BB, splitting at a BB that dominates all non-sibling
4846    calls, if such a BB can be found and is not in a loop.  Return true
4847    if the function has made any changes.  */
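/* A sketch of the effect (hypothetical blocks and pseudos): if BB 4
   dominates every non-sibling call and (reg 100) is loaded from a hard
   register in the first BB, each use of reg 100 dominated by BB 4 is
   renamed to a fresh pseudo, and

     (set (reg 200) (reg 100))

   is emitted at the start of BB 4, so that only the new, shorter range
   crosses the calls.  */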
4849 static bool
4850 split_live_ranges_for_shrink_wrap (void)
4852 basic_block bb, call_dom = NULL;
4853 basic_block first = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
4854 rtx_insn *insn, *last_interesting_insn = NULL;
4855 bitmap_head need_new, reachable;
4856 vec<basic_block> queue;
4858 if (!SHRINK_WRAPPING_ENABLED)
4859 return false;
4861 bitmap_initialize (&need_new, 0);
4862 bitmap_initialize (&reachable, 0);
4863 queue.create (n_basic_blocks_for_fn (cfun));
4865 FOR_EACH_BB_FN (bb, cfun)
4866 FOR_BB_INSNS (bb, insn)
4867 if (CALL_P (insn) && !SIBLING_CALL_P (insn))
4869 if (bb == first)
4871 bitmap_clear (&need_new);
4872 bitmap_clear (&reachable);
4873 queue.release ();
4874 return false;
4877 bitmap_set_bit (&need_new, bb->index);
4878 bitmap_set_bit (&reachable, bb->index);
4879 queue.quick_push (bb);
4880 break;
4883 if (queue.is_empty ())
4885 bitmap_clear (&need_new);
4886 bitmap_clear (&reachable);
4887 queue.release ();
4888 return false;
4891 while (!queue.is_empty ())
4893 edge e;
4894 edge_iterator ei;
4896 bb = queue.pop ();
4897 FOR_EACH_EDGE (e, ei, bb->succs)
4898 if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
4899 && bitmap_set_bit (&reachable, e->dest->index))
4900 queue.quick_push (e->dest);
4902 queue.release ();
4904 FOR_BB_INSNS (first, insn)
4906 rtx dest = interesting_dest_for_shprep (insn, NULL);
4907 if (!dest)
4908 continue;
4910 if (DF_REG_DEF_COUNT (REGNO (dest)) > 1)
4912 bitmap_clear (&need_new);
4913 bitmap_clear (&reachable);
4914 return false;
4917       for (df_ref use = DF_REG_USE_CHAIN (REGNO (dest));
4918 use;
4919 use = DF_REF_NEXT_REG (use))
4921 int ubbi = DF_REF_BB (use)->index;
4922 if (bitmap_bit_p (&reachable, ubbi))
4923 bitmap_set_bit (&need_new, ubbi);
4925 last_interesting_insn = insn;
4928 bitmap_clear (&reachable);
4929 if (!last_interesting_insn)
4931 bitmap_clear (&need_new);
4932 return false;
4935 call_dom = nearest_common_dominator_for_set (CDI_DOMINATORS, &need_new);
4936 bitmap_clear (&need_new);
4937 if (call_dom == first)
4938 return false;
4940 loop_optimizer_init (AVOID_CFG_MODIFICATIONS);
4941 while (bb_loop_depth (call_dom) > 0)
4942 call_dom = get_immediate_dominator (CDI_DOMINATORS, call_dom);
4943 loop_optimizer_finalize ();
4945 if (call_dom == first)
4946 return false;
4948 calculate_dominance_info (CDI_POST_DOMINATORS);
4949 if (dominated_by_p (CDI_POST_DOMINATORS, first, call_dom))
4951 free_dominance_info (CDI_POST_DOMINATORS);
4952 return false;
4954 free_dominance_info (CDI_POST_DOMINATORS);
4956 if (dump_file)
4957 fprintf (dump_file, "Will split live ranges of parameters at BB %i\n",
4958 call_dom->index);
4960 bool ret = false;
4961 FOR_BB_INSNS (first, insn)
4963 rtx dest = interesting_dest_for_shprep (insn, call_dom);
4964 if (!dest || dest == pic_offset_table_rtx)
4965 continue;
4967 rtx newreg = NULL_RTX;
4968 df_ref use, next;
4969 for (use = DF_REG_USE_CHAIN (REGNO (dest)); use; use = next)
4971 rtx_insn *uin = DF_REF_INSN (use);
4972 next = DF_REF_NEXT_REG (use);
4974 basic_block ubb = BLOCK_FOR_INSN (uin);
4975 if (ubb == call_dom
4976 || dominated_by_p (CDI_DOMINATORS, ubb, call_dom))
4978 if (!newreg)
4979 newreg = ira_create_new_reg (dest);
4980 validate_change (uin, DF_REF_REAL_LOC (use), newreg, true);
4984 if (newreg)
4986 rtx new_move = gen_move_insn (newreg, dest);
4987 emit_insn_after (new_move, bb_note (call_dom));
4988 if (dump_file)
4990 fprintf (dump_file, "Split live-range of register ");
4991 print_rtl_single (dump_file, dest);
4993 ret = true;
4996 if (insn == last_interesting_insn)
4997 break;
4999 apply_change_group ();
5000 return ret;
5003 /* Perform the second half of the transformation started in
5004 find_moveable_pseudos. We look for instances where the newly introduced
5005 pseudo remains unallocated, and remove it by moving the definition to
5006 just before its use, replacing the move instruction generated by
5007 find_moveable_pseudos. */
5008 static void
5009 move_unallocated_pseudos (void)
5011 int i;
5012 for (i = first_moveable_pseudo; i < last_moveable_pseudo; i++)
5013 if (reg_renumber[i] < 0)
5015 int idx = i - first_moveable_pseudo;
5016 rtx other_reg = pseudo_replaced_reg[idx];
5017 rtx_insn *def_insn = DF_REF_INSN (DF_REG_DEF_CHAIN (i));
5018 /* The use must follow all definitions of OTHER_REG, so we can
5019 insert the new definition immediately after any of them. */
5020 df_ref other_def = DF_REG_DEF_CHAIN (REGNO (other_reg));
5021 rtx_insn *move_insn = DF_REF_INSN (other_def);
5022 rtx_insn *newinsn = emit_insn_after (PATTERN (def_insn), move_insn);
5023 rtx set;
5024 int success;
5026 if (dump_file)
5027 fprintf (dump_file, "moving def of %d (insn %d now) ",
5028 REGNO (other_reg), INSN_UID (def_insn));
5030 delete_insn (move_insn);
5031 while ((other_def = DF_REG_DEF_CHAIN (REGNO (other_reg))))
5032 delete_insn (DF_REF_INSN (other_def));
5033 delete_insn (def_insn);
5035 set = single_set (newinsn);
5036 success = validate_change (newinsn, &SET_DEST (set), other_reg, 0);
5037 gcc_assert (success);
5038 if (dump_file)
5039 fprintf (dump_file, " %d) rather than keep unallocated replacement %d\n",
5040 INSN_UID (newinsn), i);
5041 SET_REG_N_REFS (i, 0);
5045 /* If the backend knows where to allocate pseudos for hard
5046 register initial values, register these allocations now. */
5047 static void
5048 allocate_initial_values (void)
5050 if (targetm.allocate_initial_value)
5052 rtx hreg, preg, x;
5053 int i, regno;
5055 for (i = 0; HARD_REGISTER_NUM_P (i); i++)
5057 if (! initial_value_entry (i, &hreg, &preg))
5058 break;
5060 x = targetm.allocate_initial_value (hreg);
5061 regno = REGNO (preg);
5062 if (x && REG_N_SETS (regno) <= 1)
5064 if (MEM_P (x))
5065 reg_equiv_memory_loc (regno) = x;
5066 else
5068 basic_block bb;
5069 int new_regno;
5071 gcc_assert (REG_P (x));
5072 new_regno = REGNO (x);
5073 reg_renumber[regno] = new_regno;
5074 /* Poke the regno right into regno_reg_rtx so that even
5075 fixed regs are accepted. */
5076 SET_REGNO (preg, new_regno);
5077 /* Update global register liveness information. */
5078 FOR_EACH_BB_FN (bb, cfun)
5080 if (REGNO_REG_SET_P (df_get_live_in (bb), regno))
5081 SET_REGNO_REG_SET (df_get_live_in (bb), new_regno);
5082 if (REGNO_REG_SET_P (df_get_live_out (bb), regno))
5083 SET_REGNO_REG_SET (df_get_live_out (bb), new_regno);
5089 gcc_checking_assert (! initial_value_entry (FIRST_PSEUDO_REGISTER,
5090 &hreg, &preg));
5095 /* True when we use LRA instead of reload pass for the current
5096 function. */
5097 bool ira_use_lra_p;
5099 /* True if we have allocno conflicts. It is false for non-optimized
5100 mode or when the conflict table is too big. */
5101 bool ira_conflicts_p;
5103 /* Saved between IRA and reload. */
5104 static int saved_flag_ira_share_spill_slots;
5106 /* This is the main entry of IRA. */
5107 static void
5108 ira (FILE *f)
5110 bool loops_p;
5111 int ira_max_point_before_emit;
5112 int rebuild_p;
5113 bool saved_flag_caller_saves = flag_caller_saves;
5114 enum ira_region saved_flag_ira_region = flag_ira_region;
5116 /* Perform target specific PIC register initialization. */
5117 targetm.init_pic_reg ();
5119 ira_conflicts_p = optimize > 0;
5121 ira_use_lra_p = targetm.lra_p ();
5122 /* If there are too many pseudos and/or basic blocks (e.g. 10K
5123 pseudos and 10K blocks or 100K pseudos and 1K blocks), we will
5124 use simplified and faster algorithms in LRA. */
5125 lra_simple_p
5126 = (ira_use_lra_p
5127 && max_reg_num () >= (1 << 26) / last_basic_block_for_fn (cfun));
5128 if (lra_simple_p)
5130       /* This permits skipping live range splitting in LRA.  */
5131 flag_caller_saves = false;
5132       /* It makes no sense to do regional allocation when we use
5133 	 the simplified LRA.  */
5134 flag_ira_region = IRA_REGION_ONE;
5135 ira_conflicts_p = false;
5138 #ifndef IRA_NO_OBSTACK
5139 gcc_obstack_init (&ira_obstack);
5140 #endif
5141 bitmap_obstack_initialize (&ira_bitmap_obstack);
5143 /* LRA uses its own infrastructure to handle caller save registers. */
5144 if (flag_caller_saves && !ira_use_lra_p)
5145 init_caller_save ();
5147 if (flag_ira_verbose < 10)
5149 internal_flag_ira_verbose = flag_ira_verbose;
5150 ira_dump_file = f;
5152 else
5154 internal_flag_ira_verbose = flag_ira_verbose - 10;
5155 ira_dump_file = stderr;
5158 setup_prohibited_mode_move_regs ();
5159 decrease_live_ranges_number ();
5160 df_note_add_problem ();
5162 /* DF_LIVE can't be used in the register allocator, too many other
5163 parts of the compiler depend on using the "classic" liveness
5164 interpretation of the DF_LR problem. See PR38711.
5165 Remove the problem, so that we don't spend time updating it in
5166 any of the df_analyze() calls during IRA/LRA. */
5167 if (optimize > 1)
5168 df_remove_problem (df_live);
5169 gcc_checking_assert (df_live == NULL);
5171 #ifdef ENABLE_CHECKING
5172 df->changeable_flags |= DF_VERIFY_SCHEDULED;
5173 #endif
5174 df_analyze ();
5176 init_reg_equiv ();
5177 if (ira_conflicts_p)
5179 calculate_dominance_info (CDI_DOMINATORS);
5181 if (split_live_ranges_for_shrink_wrap ())
5182 df_analyze ();
5184 free_dominance_info (CDI_DOMINATORS);
5187 df_clear_flags (DF_NO_INSN_RESCAN);
5189 regstat_init_n_sets_and_refs ();
5190 regstat_compute_ri ();
5192 /* If we are not optimizing, then this is the only place before
5193 register allocation where dataflow is done. And that is needed
5194 to generate these warnings. */
5195 if (warn_clobbered)
5196 generate_setjmp_warnings ();
5198 /* Determine if the current function is a leaf before running IRA
5199 since this can impact optimizations done by the prologue and
5200 epilogue thus changing register elimination offsets. */
5201 crtl->is_leaf = leaf_function_p ();
5203 if (resize_reg_info () && flag_ira_loop_pressure)
5204 ira_set_pseudo_classes (true, ira_dump_file);
5206 rebuild_p = update_equiv_regs ();
5207 setup_reg_equiv ();
5208 setup_reg_equiv_init ();
5210 if (optimize && rebuild_p)
5212 timevar_push (TV_JUMP);
5213 rebuild_jump_labels (get_insns ());
5214 if (purge_all_dead_edges ())
5215 delete_unreachable_blocks ();
5216 timevar_pop (TV_JUMP);
5219 allocated_reg_info_size = max_reg_num ();
5221 if (delete_trivially_dead_insns (get_insns (), max_reg_num ()))
5222 df_analyze ();
5224   /* It is not worth doing such an improvement when we use a simple
5225      allocation because of -O0 usage or because the function is too
5226      big.  */
5227 if (ira_conflicts_p)
5228 find_moveable_pseudos ();
5230 max_regno_before_ira = max_reg_num ();
5231 ira_setup_eliminable_regset ();
5233 ira_overall_cost = ira_reg_cost = ira_mem_cost = 0;
5234 ira_load_cost = ira_store_cost = ira_shuffle_cost = 0;
5235 ira_move_loops_num = ira_additional_jumps_num = 0;
5237 ira_assert (current_loops == NULL);
5238 if (flag_ira_region == IRA_REGION_ALL || flag_ira_region == IRA_REGION_MIXED)
5239 loop_optimizer_init (AVOID_CFG_MODIFICATIONS | LOOPS_HAVE_RECORDED_EXITS);
5241 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
5242 fprintf (ira_dump_file, "Building IRA IR\n");
5243 loops_p = ira_build ();
5245 ira_assert (ira_conflicts_p || !loops_p);
5247 saved_flag_ira_share_spill_slots = flag_ira_share_spill_slots;
5248 if (too_high_register_pressure_p () || cfun->calls_setjmp)
5249     /* It just wastes the compiler's time to pack spilled pseudos into
5250        stack slots in this case -- prohibit it.  We also do this if
5251        there is a setjmp call: the compiler is required to preserve
5252        the value of a variable not modified between setjmp and longjmp,
5253        and sharing slots does not guarantee that.  */
5254 flag_ira_share_spill_slots = FALSE;
5256 ira_color ();
5258 ira_max_point_before_emit = ira_max_point;
5260 ira_initiate_emit_data ();
5262 ira_emit (loops_p);
5264 max_regno = max_reg_num ();
5265 if (ira_conflicts_p)
5267 if (! loops_p)
5269 if (! ira_use_lra_p)
5270 ira_initiate_assign ();
5272 else
5274 expand_reg_info ();
5276 if (ira_use_lra_p)
5278 ira_allocno_t a;
5279 ira_allocno_iterator ai;
5281 FOR_EACH_ALLOCNO (a, ai)
5283 int old_regno = ALLOCNO_REGNO (a);
5284 int new_regno = REGNO (ALLOCNO_EMIT_DATA (a)->reg);
5286 ALLOCNO_REGNO (a) = new_regno;
5288 if (old_regno != new_regno)
5289 setup_reg_classes (new_regno, reg_preferred_class (old_regno),
5290 reg_alternate_class (old_regno),
5291 reg_allocno_class (old_regno));
5295 else
5297 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
5298 fprintf (ira_dump_file, "Flattening IR\n");
5299 ira_flattening (max_regno_before_ira, ira_max_point_before_emit);
5301 /* New insns were generated: add notes and recalculate live
5302 info. */
5303 df_analyze ();
5305 /* ??? Rebuild the loop tree, but why? Does the loop tree
5306 change if new insns were generated? Can that be handled
5307 by updating the loop tree incrementally? */
5308 loop_optimizer_finalize ();
5309 free_dominance_info (CDI_DOMINATORS);
5310 loop_optimizer_init (AVOID_CFG_MODIFICATIONS
5311 | LOOPS_HAVE_RECORDED_EXITS);
5313 if (! ira_use_lra_p)
5315 setup_allocno_assignment_flags ();
5316 ira_initiate_assign ();
5317 ira_reassign_conflict_allocnos (max_regno);
5322 ira_finish_emit_data ();
5324 setup_reg_renumber ();
5326 calculate_allocation_cost ();
5328 #ifdef ENABLE_IRA_CHECKING
5329 if (ira_conflicts_p)
5330 check_allocation ();
5331 #endif
5333 if (max_regno != max_regno_before_ira)
5335 regstat_free_n_sets_and_refs ();
5336 regstat_free_ri ();
5337 regstat_init_n_sets_and_refs ();
5338 regstat_compute_ri ();
5341 overall_cost_before = ira_overall_cost;
5342 if (! ira_conflicts_p)
5343 grow_reg_equivs ();
5344 else
5346 fix_reg_equiv_init ();
5348 #ifdef ENABLE_IRA_CHECKING
5349 print_redundant_copies ();
5350 #endif
5351 if (! ira_use_lra_p)
5353 ira_spilled_reg_stack_slots_num = 0;
5354 ira_spilled_reg_stack_slots
5355 = ((struct ira_spilled_reg_stack_slot *)
5356 ira_allocate (max_regno
5357 * sizeof (struct ira_spilled_reg_stack_slot)));
5358 memset (ira_spilled_reg_stack_slots, 0,
5359 max_regno * sizeof (struct ira_spilled_reg_stack_slot));
5362 allocate_initial_values ();
5364 /* See comment for find_moveable_pseudos call. */
5365 if (ira_conflicts_p)
5366 move_unallocated_pseudos ();
5368 /* Restore original values. */
5369 if (lra_simple_p)
5371 flag_caller_saves = saved_flag_caller_saves;
5372 flag_ira_region = saved_flag_ira_region;
5376 static void
5377 do_reload (void)
5379 basic_block bb;
5380 bool need_dce;
5381 unsigned pic_offset_table_regno = INVALID_REGNUM;
5383 if (flag_ira_verbose < 10)
5384 ira_dump_file = dump_file;
5386   /* If pic_offset_table_rtx is a pseudo register, then keep it a
5387      pseudo after reload to avoid possible wrong uses of the hard
5388      reg assigned to it.  */
5389 if (pic_offset_table_rtx
5390 && REGNO (pic_offset_table_rtx) >= FIRST_PSEUDO_REGISTER)
5391 pic_offset_table_regno = REGNO (pic_offset_table_rtx);
5393 timevar_push (TV_RELOAD);
5394 if (ira_use_lra_p)
5396 if (current_loops != NULL)
5398 loop_optimizer_finalize ();
5399 free_dominance_info (CDI_DOMINATORS);
5401 FOR_ALL_BB_FN (bb, cfun)
5402 bb->loop_father = NULL;
5403 current_loops = NULL;
5405 ira_destroy ();
5407 lra (ira_dump_file);
5408 /* ???!!! Move it before lra () when we use ira_reg_equiv in
5409 LRA. */
5410 vec_free (reg_equivs);
5411 reg_equivs = NULL;
5412 need_dce = false;
5414 else
5416 df_set_flags (DF_NO_INSN_RESCAN);
5417 build_insn_chain ();
5419 need_dce = reload (get_insns (), ira_conflicts_p);
5423 timevar_pop (TV_RELOAD);
5425 timevar_push (TV_IRA);
5427 if (ira_conflicts_p && ! ira_use_lra_p)
5429 ira_free (ira_spilled_reg_stack_slots);
5430 ira_finish_assign ();
5433 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL
5434 && overall_cost_before != ira_overall_cost)
5435     fprintf (ira_dump_file, "+++Overall after reload %" PRId64 "\n",
5436 ira_overall_cost);
5438 flag_ira_share_spill_slots = saved_flag_ira_share_spill_slots;
5440 if (! ira_use_lra_p)
5442 ira_destroy ();
5443 if (current_loops != NULL)
5445 loop_optimizer_finalize ();
5446 free_dominance_info (CDI_DOMINATORS);
5448 FOR_ALL_BB_FN (bb, cfun)
5449 bb->loop_father = NULL;
5450 current_loops = NULL;
5452 regstat_free_ri ();
5453 regstat_free_n_sets_and_refs ();
5456 if (optimize)
5457 cleanup_cfg (CLEANUP_EXPENSIVE);
5459 finish_reg_equiv ();
5461 bitmap_obstack_release (&ira_bitmap_obstack);
5462 #ifndef IRA_NO_OBSTACK
5463 obstack_free (&ira_obstack, NULL);
5464 #endif
5466 /* The code after the reload has changed so much that at this point
5467 we might as well just rescan everything. Note that
5468 df_rescan_all_insns is not going to help here because it does not
5469 touch the artificial uses and defs. */
5470 df_finish_pass (true);
5471 df_scan_alloc (NULL);
5472 df_scan_blocks ();
5474 if (optimize > 1)
5476 df_live_add_problem ();
5477 df_live_set_all_dirty ();
5480 if (optimize)
5481 df_analyze ();
5483 if (need_dce && optimize)
5484 run_fast_dce ();
5486 /* Diagnose uses of the hard frame pointer when it is used as a global
5487 register. Often we can get away with letting the user appropriate
5488 the frame pointer, but we should let them know when code generation
5489 makes that impossible. */
5490 if (global_regs[HARD_FRAME_POINTER_REGNUM] && frame_pointer_needed)
5492 tree decl = global_regs_decl[HARD_FRAME_POINTER_REGNUM];
5493 error_at (DECL_SOURCE_LOCATION (current_function_decl),
5494 "frame pointer required, but reserved");
5495 inform (DECL_SOURCE_LOCATION (decl), "for %qD", decl);
5498 if (pic_offset_table_regno != INVALID_REGNUM)
5499 pic_offset_table_rtx = gen_rtx_REG (Pmode, pic_offset_table_regno);
5501 timevar_pop (TV_IRA);
5504 /* Run the integrated register allocator. */
5506 namespace {
5508 const pass_data pass_data_ira =
5510 RTL_PASS, /* type */
5511 "ira", /* name */
5512 OPTGROUP_NONE, /* optinfo_flags */
5513 TV_IRA, /* tv_id */
5514 0, /* properties_required */
5515 0, /* properties_provided */
5516 0, /* properties_destroyed */
5517 0, /* todo_flags_start */
5518 TODO_do_not_ggc_collect, /* todo_flags_finish */
5521 class pass_ira : public rtl_opt_pass
5523 public:
5524 pass_ira (gcc::context *ctxt)
5525 : rtl_opt_pass (pass_data_ira, ctxt)
5528 /* opt_pass methods: */
5529 virtual bool gate (function *)
5531 return !targetm.no_register_allocation;
5533 virtual unsigned int execute (function *)
5535 ira (dump_file);
5536 return 0;
5539 }; // class pass_ira
5541 } // anon namespace
5543 rtl_opt_pass *
5544 make_pass_ira (gcc::context *ctxt)
5546 return new pass_ira (ctxt);
5549 namespace {
5551 const pass_data pass_data_reload =
5553 RTL_PASS, /* type */
5554 "reload", /* name */
5555 OPTGROUP_NONE, /* optinfo_flags */
5556 TV_RELOAD, /* tv_id */
5557 0, /* properties_required */
5558 0, /* properties_provided */
5559 0, /* properties_destroyed */
5560 0, /* todo_flags_start */
5561 0, /* todo_flags_finish */
5564 class pass_reload : public rtl_opt_pass
5566 public:
5567 pass_reload (gcc::context *ctxt)
5568 : rtl_opt_pass (pass_data_reload, ctxt)
5571 /* opt_pass methods: */
5572 virtual bool gate (function *)
5574 return !targetm.no_register_allocation;
5576 virtual unsigned int execute (function *)
5578 do_reload ();
5579 return 0;
5582 }; // class pass_reload
5584 } // anon namespace
5586 rtl_opt_pass *
5587 make_pass_reload (gcc::context *ctxt)
5589 return new pass_reload (ctxt);