1 /* Integrated Register Allocator (IRA) entry point.
2 Copyright (C) 2006-2019 Free Software Foundation, Inc.
3 Contributed by Vladimir Makarov <vmakarov@redhat.com>.
5 This file is part of GCC.
7 GCC is free software; you can redistribute it and/or modify it under
8 the terms of the GNU General Public License as published by the Free
9 Software Foundation; either version 3, or (at your option) any later
10 version.
12 GCC is distributed in the hope that it will be useful, but WITHOUT ANY
13 WARRANTY; without even the implied warranty of MERCHANTABILITY or
14 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
15 for more details.
17 You should have received a copy of the GNU General Public License
18 along with GCC; see the file COPYING3. If not see
19 <http://www.gnu.org/licenses/>. */
21 /* The integrated register allocator (IRA) is a
22 regional register allocator performing graph coloring on a top-down
23 traversal of nested regions. Graph coloring in a region is based
24 on the Chaitin-Briggs algorithm. It is called integrated because
25 register coalescing, register live range splitting, and choosing a
26 better hard register are done on-the-fly during coloring. Register
27 coalescing and choosing a cheaper hard register are done by hard
28 register preferencing during hard register assignment. The live
29 range splitting is a byproduct of the regional register allocation.
31 Major IRA notions are:
33 o *Region* is a part of the CFG where graph coloring based on
34 the Chaitin-Briggs algorithm is done. IRA can work on any set of
35 nested CFG regions forming a tree. Currently the regions are
36 the entire function for the root region and natural loops for
37 the other regions. Therefore the data structure representing a
38 region is called loop_tree_node.
40 o *Allocno class* is a register class used for allocation of a
41 given allocno. It means that only a hard register of the given
42 register class can be assigned to the given allocno. In reality,
43 an even smaller subset of (*profitable*) hard registers can be
44 assigned. In rare cases, the subset can be even smaller
45 because our modification of the Chaitin-Briggs algorithm requires
46 that the sets of hard registers assignable to allocnos form a
47 forest, i.e. the sets can be ordered in a way where any
48 previous set is either disjoint from a given set or is a
49 superset of it.
51 o *Pressure class* is a register class belonging to a set of
52 register classes containing all of the hard-registers available
53 for register allocation. The set of all pressure classes for a
54 target is defined in the corresponding machine-description file
55 according to some criteria. Register pressure is calculated only
56 for pressure classes and it affects some IRA decisions such as
57 forming allocation regions.
59 o *Allocno* represents the live range of a pseudo-register in a
60 region. Besides the obvious attributes like the corresponding
61 pseudo-register number, allocno class, conflicting allocnos and
62 conflicting hard-registers, there are a few allocno attributes
63 which are important for understanding the allocation algorithm:
65 - *Live ranges*. This is a list of ranges of *program points*
66 where the allocno lives. Program points represent places
67 where a pseudo can be born or become dead (there are
68 approximately two times more program points than the insns)
69 and they are represented by integers starting with 0. The
70 live ranges are used to find conflicts between allocnos.
71 They also play a very important role in the transformation of
72 the IRA internal representation of several regions into a
73 one-region representation. The latter is used during the reload
74 pass because each allocno represents all of the
75 corresponding pseudo-registers.
77 - *Hard-register costs*. This is a vector of size equal to the
78 number of available hard-registers of the allocno class. The
79 cost of a callee-clobbered hard-register for an allocno is
80 increased by the cost of save/restore code around the calls
81 through the given allocno's life. If the allocno is a move
82 instruction operand and another operand is a hard-register of
83 the allocno class, the cost of the hard-register is decreased
84 by the move cost.
86 When an allocno is assigned, the hard-register with minimal
87 full cost is used. Initially, a hard-register's full cost is
88 the corresponding value from the hard-register's cost vector.
89 If the allocno is connected by a *copy* (see below) to
90 another allocno which has just received a hard-register, the
91 cost of the hard-register is decreased. Before choosing a
92 hard-register for an allocno, the allocno's current costs of
93 the hard-registers are modified by the conflict hard-register
94 costs of all of the conflicting allocnos which are not
95 assigned yet.
97 - *Conflict hard-register costs*. This is a vector of the same
98 size as the hard-register costs vector. To permit an
99 unassigned allocno to get a better hard-register, IRA uses
100 this vector to calculate the final full cost of the
101 available hard-registers. Conflict hard-register costs of an
102 unassigned allocno are also changed with a change of the
103 hard-register cost of the allocno when a copy involving the
104 allocno is processed as described above. This is done to
105 show other unassigned allocnos that a given allocno prefers
106 some hard-registers in order to remove the move instruction
107 corresponding to the copy.
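      The interplay of the two cost vectors can be pictured with this
      editorial sketch (illustrative names only, not IRA's actual data
      structures): the full cost of a hard register is the allocno's own
      cost for it plus the conflict cost contributed by still-unassigned
      conflicting allocnos, and the cheapest hard register wins.

        // Illustrative only: HARD_REG_COST and CONFLICT_COST are the two
        // vectors described above, indexed by hard registers of the
        // allocno class; return the index of the cheapest hard register.
        static int
        choose_best_hard_reg (const int *hard_reg_cost,
                              const int *conflict_cost, int nregs)
        {
          int best = -1, best_cost = 0;
          for (int h = 0; h < nregs; h++)
            {
              int full_cost = hard_reg_cost[h] + conflict_cost[h];
              if (best < 0 || full_cost < best_cost)
                {
                  best = h;
                  best_cost = full_cost;
                }
            }
          return best;
        }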
109 o *Cap*. If a pseudo-register does not live in a region but
110 lives in a nested region, IRA creates a special allocno called
111 a cap in the outer region. A region cap is also created for a
112 subregion cap.
114 o *Copy*. Allocnos can be connected by copies. Copies are used
115 to modify hard-register costs for allocnos during coloring.
116 Such modifications reflect a preference to use the same
117 hard-register for the allocnos connected by copies. Usually
118 copies are created for move insns (in this case it results in
119 register coalescing). But IRA also creates copies for operands
120 of an insn which should be assigned to the same hard-register
121 due to constraints in the machine description (it usually
122 results in removing a move generated in reload to satisfy
123 the constraints) and copies referring to the allocno which is
124 the output operand of an instruction and the allocno which is
125 an input operand dying in the instruction (creation of such
126 copies results in less register shuffling). IRA *does not*
127 create copies between the same register allocnos from different
128 regions because we use another technique for propagating
129 hard-register preference on the borders of regions.
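      How a copy biases coloring can be shown with a further editorial
      sketch (hypothetical names, not IRA's real interface): when one end
      of a copy has just received a hard register, the cost of that hard
      register for the other end is decreased in proportion to the copy's
      execution frequency, so the connecting move is likely to become a
      no-op.

        // Illustrative only.  A copy connects two allocnos; FREQ is the
        // execution frequency of the connecting move or constraint.
        struct copy_sketch { int first, second, freq; };

        // WINNER has just been assigned HARD_REGNO; bias the other end
        // of the copy CP towards the same hard register.
        static void
        update_copy_costs_sketch (const struct copy_sketch *cp, int winner,
                                  int hard_regno, int move_cost,
                                  int **hard_reg_cost)
        {
          int other = cp->first == winner ? cp->second : cp->first;
          hard_reg_cost[other][hard_regno] -= cp->freq * move_cost;
        }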
131 Allocnos (including caps) for the upper region in the region tree
132 *accumulate* information important for coloring from allocnos with
133 the same pseudo-register from nested regions. This includes
134 hard-register and memory costs, conflicts with hard-registers,
135 allocno conflicts, allocno copies and more. *Thus, attributes for
136 allocnos in a region have the same values as if the region had no
137 subregions*. It means that attributes for allocnos in the
138 outermost region corresponding to the function have the same values
139 as though the allocation used only one region which is the entire
140 function. It also means that we can look at IRA's work as if it
141 first did allocation for the whole function, then improved the
142 allocation for loops, then their subloops, and so on.
144 IRA major passes are:
146 o Building IRA internal representation which consists of the
147 following subpasses:
149 * First, IRA builds regions and creates allocnos (file
150 ira-build.c) and initializes most of their attributes.
152 * Then IRA finds an allocno class for each allocno and
153 calculates its initial (non-accumulated) cost of memory and
154 each hard-register of its allocno class (file ira-costs.c).
156 * IRA creates live ranges of each allocno, calculates register
157 pressure for each pressure class in each region, sets up
158 conflict hard registers for each allocno and info about calls
159 the allocno lives through (file ira-lives.c).
161 * IRA removes low register pressure loops from the regions
162 mostly to speed IRA up (file ira-build.c).
164 * IRA propagates accumulated allocno info from lower region
165 allocnos to corresponding upper region allocnos (file
166 ira-build.c).
168 * IRA creates all caps (file ira-build.c).
170 * Having live-ranges of allocnos and their classes, IRA creates
171 conflicting allocnos for each allocno. Conflicting allocnos
172 are stored as a bit vector or array of pointers to the
173 conflicting allocnos, whichever is more profitable (file
174 ira-conflicts.c). At this point IRA creates allocno copies.
176 o Coloring. Now IRA has all necessary info to start graph coloring
177 process. It is done in each region on a top-down traversal of the
178 region tree (file ira-color.c). There are the following subpasses:
180 * Finding profitable hard registers of corresponding allocno
181 class for each allocno. For example, only callee-saved hard
182 registers are frequently profitable for allocnos living
183 through calls. If the profitable hard register sets of the
184 allocnos do not form a tree based on the subset relation, we use
185 some approximation to form the tree. This approximation is
186 used to figure out trivial colorability of allocnos. The
187 approximation is a pretty rare case.
189 * Putting allocnos onto the coloring stack. IRA uses Briggs
190 optimistic coloring which is a major improvement over
191 Chaitin's coloring. Therefore IRA does not spill allocnos at
192 this point. There is some freedom in the order of putting
193 allocnos on the stack which can affect the final result of
194 the allocation. IRA uses some heuristics to improve the
195 order. The major one is to form *threads* from colorable
196 allocnos and push them on the stack by threads. A thread is a
197 set of non-conflicting colorable allocnos connected by
198 copies. The thread contains allocnos from the colorable
199 bucket or colorable allocnos already pushed onto the coloring
200 stack. Pushing thread allocnos one after another onto the
201 stack increases chances of removing copies when the allocnos
202 get the same hard reg.
204 We also use a modification of the Chaitin-Briggs algorithm which
205 works for intersected register classes of allocnos. To
206 figure out trivial colorability of allocnos, the
207 above-mentioned tree of hard register sets is used. To get an idea of
208 how the algorithm works, consider an i386 example: an
209 allocno to which any general hard register can be assigned.
210 If the allocno conflicts with eight allocnos to which only the
211 EAX register can be assigned, the given allocno is still
212 trivially colorable because all conflicting allocnos can only be
213 assigned to EAX while all other general hard registers are
214 still free.
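      The example can be made concrete with a simplified, editorial
      version of the test (not IRA's exact criterion; it assumes the
      conflicting allocnos are confined to disjoint subsets of the given
      allocno's profitable set):

        #include <stdbool.h>

        // Illustrative only.  SUBSET_SIZE[k] is the size of the k-th
        // subset and SUBSET_DEMAND[k] the number of registers requested
        // by conflicting allocnos confined to it; they can never occupy
        // more than min (demand, size) registers of that subset.
        static bool
        trivially_colorable_p (int available, const int *subset_size,
                               const int *subset_demand, int n_subsets)
        {
          int occupied = 0;
          for (int k = 0; k < n_subsets; k++)
            occupied += (subset_demand[k] < subset_size[k]
                         ? subset_demand[k] : subset_size[k]);
          return occupied < available;
        }

        // For the example above: one subset {EAX} of size 1 with demand
        // 8, and 8 available general registers, so occupied = 1 < 8 and
        // the allocno is trivially colorable.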
216 To get an idea of the used trivial colorability criterion, it
217 is also useful to read article "Graph-Coloring Register
218 Allocation for Irregular Architectures" by Michael D. Smith
219 and Glenn Holloway. The major difference between the article's
220 approach and the approach used in IRA is that Smith's approach
221 takes register classes only from the machine description while IRA
222 also calculates register classes from the intermediate code
223 (e.g. an explicit usage of hard registers in RTL code for
224 parameter passing can result in the creation of additional
225 register classes which contain or exclude the hard
226 registers). That makes the IRA approach useful for improving
227 coloring even for architectures with regular register files
228 and in fact some benchmarking shows the improvement for
229 regular class architectures is even bigger than for irregular
230 ones. Another difference is that Smith's approach chooses
231 the intersection of classes of all insn operands in which a given
232 pseudo occurs. IRA can use bigger classes if it is still
233 more profitable than memory usage.
235 * Popping the allocnos from the stack and assigning them hard
236 registers. If IRA cannot assign a hard register to an
237 allocno and the allocno is coalesced, IRA undoes the
238 coalescing and puts the uncoalesced allocnos onto the stack in
239 the hope that some such allocnos will get a hard register
240 separately. If IRA fails to assign a hard register or memory
241 is more profitable for it, IRA spills the allocno. IRA
242 assigns the allocno the hard-register with minimal full
243 allocation cost, which reflects the cost of using the
244 hard-register for the allocno and the cost of using the
245 hard-register for allocnos conflicting with the given allocno.
247 * Chaitin-Briggs coloring assigns as many pseudos as possible
248 to hard registers. After coloring we try to improve the
249 allocation from the cost point of view. We improve the
250 allocation by spilling some allocnos and assigning the freed
251 hard registers to other allocnos if it decreases the overall
252 allocation cost.
254 * After allocno assigning in the region, IRA modifies the hard
255 register and memory costs for the corresponding allocnos in
256 the subregions to reflect the cost of possible loads, stores,
257 or moves on the border of the region and its subregions.
258 When the default regional allocation algorithm is used
259 (-fira-algorithm=mixed), IRA just propagates the assignment
260 for allocnos if the register pressure in the region for the
261 corresponding pressure class is less than the number of available
262 hard registers for the given pressure class.
264 o Spill/restore code moving. When IRA performs an allocation
265 by traversing regions in top-down order, it does not know what
266 happens below in the region tree. Therefore, sometimes IRA
267 misses opportunities to perform a better allocation. A simple
268 optimization tries to improve allocation in a region having
269 subregions and contained in another region. If the
270 corresponding allocnos in the subregion are spilled, it spills
271 the region allocno if it is profitable. The optimization
272 implements a simple iterative algorithm performing profitable
273 transformations while they are still possible. It is fast in
274 practice, so there is no real need for a better time complexity
275 algorithm.
277 o Code change. After coloring, two allocnos representing the
278 same pseudo-register outside and inside a region respectively
279 may be assigned to different locations (hard-registers or
280 memory). In this case IRA creates and uses a new
281 pseudo-register inside the region and adds code to move allocno
282 values on the region's borders. This is done during top-down
283 traversal of the regions (file ira-emit.c). In some
284 complicated cases IRA can create a new allocno to move allocno
285 values (e.g. when a swap of values stored in two hard-registers
286 is needed). At this stage, the new allocno is marked as
287 spilled. IRA still creates the pseudo-register and the moves
288 on the region borders even when both allocnos were assigned to
289 the same hard-register. If the reload pass spills a
290 pseudo-register for some reason, the effect will be smaller
291 because another allocno will still be in the hard-register. In
292 most cases, this is better than spilling both allocnos. If
293 reload does not change the allocation for the two
294 pseudo-registers, the trivial move will be removed by
295 post-reload optimizations. IRA does not generate moves for
296 allocnos assigned to the same hard register when the default
297 regional allocation algorithm is used and the register pressure
298 in the region for the corresponding pressure class is less than
299 the number of available hard registers for the given pressure class.
300 IRA also does some optimizations to remove redundant stores and
301 to reduce code duplication on the region borders.
303 o Flattening internal representation. After changing code, IRA
304 transforms its internal representation for several regions into
305 one region representation (file ira-build.c). This process is
306 called IR flattening. This process is more complicated than IR
307 rebuilding would be, but is much faster.
309 o After IR flattening, IRA tries to assign hard registers to all
310 spilled allocnos. This is implemented by a simple and fast
311 priority coloring algorithm (see function
312 ira_reassign_conflict_allocnos in file ira-color.c). Here new allocnos
313 created during the code change pass can be assigned to hard
314 registers.
316 o At the end IRA calls the reload pass. The reload pass
317 communicates with IRA through several functions in file
318 ira-color.c to improve its decisions in
320 * sharing stack slots for the spilled pseudos based on IRA info
321 about pseudo-register conflicts.
323 * reassigning hard-registers to all spilled pseudos at the end
324 of each reload iteration.
326 * choosing a better hard-register to spill based on IRA info
327 about pseudo-register live ranges and the register pressure
328 in places where the pseudo-register lives.
330 IRA uses a lot of data representing the target processors. These
331 data are initialized in file ira.c.
333 If the function has no loops (or the loops are ignored when
334 -fira-algorithm=CB is used), we have classic Chaitin-Briggs
335 coloring (only instead of separate pass of coalescing, we use hard
336 register preferencing). In such a case, IRA works much faster
337 because many things are not done (like IR flattening, the
338 spill/restore optimization, and the code change).
340 Literature worth reading for a better understanding of the code:
342 o Preston Briggs, Keith D. Cooper, Linda Torczon. Improvements to
343 Graph Coloring Register Allocation.
345 o David Callahan, Brian Koblenz. Register allocation via
346 hierarchical graph coloring.
348 o Keith Cooper, Anshuman Dasgupta, Jason Eckhardt. Revisiting Graph
349 Coloring Register Allocation: A Study of the Chaitin-Briggs and
350 Callahan-Koblenz Algorithms.
352 o Guei-Yuan Lueh, Thomas Gross, and Ali-Reza Adl-Tabatabai. Global
353 Register Allocation Based on Graph Fusion.
355 o Michael D. Smith and Glenn Holloway. Graph-Coloring Register
356 Allocation for Irregular Architectures
358 o Vladimir Makarov. The Integrated Register Allocator for GCC.
360 o Vladimir Makarov. The top-down register allocator for irregular
361 register file architectures.
366 #include "config.h"
367 #include "system.h"
368 #include "coretypes.h"
369 #include "backend.h"
370 #include "target.h"
371 #include "rtl.h"
372 #include "tree.h"
373 #include "df.h"
374 #include "memmodel.h"
375 #include "tm_p.h"
376 #include "insn-config.h"
377 #include "regs.h"
378 #include "ira.h"
379 #include "ira-int.h"
380 #include "diagnostic-core.h"
381 #include "cfgrtl.h"
382 #include "cfgbuild.h"
383 #include "cfgcleanup.h"
384 #include "expr.h"
385 #include "tree-pass.h"
386 #include "output.h"
387 #include "reload.h"
388 #include "cfgloop.h"
389 #include "lra.h"
390 #include "dce.h"
391 #include "dbgcnt.h"
392 #include "rtl-iter.h"
393 #include "shrink-wrap.h"
394 #include "print-rtl.h"
396 struct target_ira default_target_ira;
397 class target_ira_int default_target_ira_int;
398 #if SWITCHABLE_TARGET
399 struct target_ira *this_target_ira = &default_target_ira;
400 class target_ira_int *this_target_ira_int = &default_target_ira_int;
401 #endif
403 /* A modified value of flag `-fira-verbose' used internally. */
404 int internal_flag_ira_verbose;
406 /* Dump file of the allocator if it is not NULL. */
407 FILE *ira_dump_file;
409 /* The number of elements in the following array. */
410 int ira_spilled_reg_stack_slots_num;
412 /* The following array contains info about spilled pseudo-registers
413 stack slots used in current function so far. */
414 class ira_spilled_reg_stack_slot *ira_spilled_reg_stack_slots;
416 /* Correspondingly overall cost of the allocation, overall cost before
417 reload, cost of the allocnos assigned to hard-registers, cost of
418 the allocnos assigned to memory, cost of loads, stores and register
419 move insns generated for pseudo-register live range splitting (see
420 ira-emit.c). */
421 int64_t ira_overall_cost, overall_cost_before;
422 int64_t ira_reg_cost, ira_mem_cost;
423 int64_t ira_load_cost, ira_store_cost, ira_shuffle_cost;
424 int ira_move_loops_num, ira_additional_jumps_num;
426 /* All registers that can be eliminated. */
428 HARD_REG_SET eliminable_regset;
430 /* Value of max_reg_num () before IRA work starts. This value helps
431 us to recognize a situation when new pseudos were created during
432 IRA work. */
433 static int max_regno_before_ira;
435 /* Temporary hard reg set used for a different calculation. */
436 static HARD_REG_SET temp_hard_regset;
438 #define last_mode_for_init_move_cost \
439 (this_target_ira_int->x_last_mode_for_init_move_cost)
442 /* The function sets up the map IRA_REG_MODE_HARD_REGSET. */
443 static void
444 setup_reg_mode_hard_regset (void)
446 int i, m, hard_regno;
448 for (m = 0; m < NUM_MACHINE_MODES; m++)
449 for (hard_regno = 0; hard_regno < FIRST_PSEUDO_REGISTER; hard_regno++)
451 CLEAR_HARD_REG_SET (ira_reg_mode_hard_regset[hard_regno][m]);
452 for (i = hard_regno_nregs (hard_regno, (machine_mode) m) - 1;
453 i >= 0; i--)
454 if (hard_regno + i < FIRST_PSEUDO_REGISTER)
455 SET_HARD_REG_BIT (ira_reg_mode_hard_regset[hard_regno][m],
456 hard_regno + i);
461 #define no_unit_alloc_regs \
462 (this_target_ira_int->x_no_unit_alloc_regs)
464 /* The function sets up the three arrays declared above. */
465 static void
466 setup_class_hard_regs (void)
468 int cl, i, hard_regno, n;
469 HARD_REG_SET processed_hard_reg_set;
471 ira_assert (SHRT_MAX >= FIRST_PSEUDO_REGISTER);
472 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
474 temp_hard_regset = reg_class_contents[cl] & ~no_unit_alloc_regs;
475 CLEAR_HARD_REG_SET (processed_hard_reg_set);
476 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
478 ira_non_ordered_class_hard_regs[cl][i] = -1;
479 ira_class_hard_reg_index[cl][i] = -1;
481 for (n = 0, i = 0; i < FIRST_PSEUDO_REGISTER; i++)
483 #ifdef REG_ALLOC_ORDER
484 hard_regno = reg_alloc_order[i];
485 #else
486 hard_regno = i;
487 #endif
488 if (TEST_HARD_REG_BIT (processed_hard_reg_set, hard_regno))
489 continue;
490 SET_HARD_REG_BIT (processed_hard_reg_set, hard_regno);
491 if (! TEST_HARD_REG_BIT (temp_hard_regset, hard_regno))
492 ira_class_hard_reg_index[cl][hard_regno] = -1;
493 else
495 ira_class_hard_reg_index[cl][hard_regno] = n;
496 ira_class_hard_regs[cl][n++] = hard_regno;
499 ira_class_hard_regs_num[cl] = n;
500 for (n = 0, i = 0; i < FIRST_PSEUDO_REGISTER; i++)
501 if (TEST_HARD_REG_BIT (temp_hard_regset, i))
502 ira_non_ordered_class_hard_regs[cl][n++] = i;
503 ira_assert (ira_class_hard_regs_num[cl] == n);
507 /* Set up global variables defining info about hard registers for the
508 allocation. These depend on USE_HARD_FRAME_P whose TRUE value means
509 that we can use the hard frame pointer for the allocation. */
510 static void
511 setup_alloc_regs (bool use_hard_frame_p)
513 #ifdef ADJUST_REG_ALLOC_ORDER
514 ADJUST_REG_ALLOC_ORDER;
515 #endif
516 no_unit_alloc_regs = fixed_nonglobal_reg_set;
517 if (! use_hard_frame_p)
518 SET_HARD_REG_BIT (no_unit_alloc_regs, HARD_FRAME_POINTER_REGNUM);
519 setup_class_hard_regs ();
524 #define alloc_reg_class_subclasses \
525 (this_target_ira_int->x_alloc_reg_class_subclasses)
527 /* Initialize the table of subclasses of each reg class. */
528 static void
529 setup_reg_subclasses (void)
531 int i, j;
532 HARD_REG_SET temp_hard_regset2;
534 for (i = 0; i < N_REG_CLASSES; i++)
535 for (j = 0; j < N_REG_CLASSES; j++)
536 alloc_reg_class_subclasses[i][j] = LIM_REG_CLASSES;
538 for (i = 0; i < N_REG_CLASSES; i++)
540 if (i == (int) NO_REGS)
541 continue;
543 temp_hard_regset = reg_class_contents[i] & ~no_unit_alloc_regs;
544 if (hard_reg_set_empty_p (temp_hard_regset))
545 continue;
546 for (j = 0; j < N_REG_CLASSES; j++)
547 if (i != j)
549 enum reg_class *p;
551 temp_hard_regset2 = reg_class_contents[j] & ~no_unit_alloc_regs;
552 if (! hard_reg_set_subset_p (temp_hard_regset,
553 temp_hard_regset2))
554 continue;
555 p = &alloc_reg_class_subclasses[j][0];
556 while (*p != LIM_REG_CLASSES) p++;
557 *p = (enum reg_class) i;
564 /* Set up IRA_CLASS_SUBSET_P, IRA_MEMORY_MOVE_COST and IRA_MAX_MEMORY_MOVE_COST. */
565 static void
566 setup_class_subset_and_memory_move_costs (void)
568 int cl, cl2, mode, cost;
569 HARD_REG_SET temp_hard_regset2;
571 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
572 ira_memory_move_cost[mode][NO_REGS][0]
573 = ira_memory_move_cost[mode][NO_REGS][1] = SHRT_MAX;
574 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
576 if (cl != (int) NO_REGS)
577 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
579 ira_max_memory_move_cost[mode][cl][0]
580 = ira_memory_move_cost[mode][cl][0]
581 = memory_move_cost ((machine_mode) mode,
582 (reg_class_t) cl, false);
583 ira_max_memory_move_cost[mode][cl][1]
584 = ira_memory_move_cost[mode][cl][1]
585 = memory_move_cost ((machine_mode) mode,
586 (reg_class_t) cl, true);
587 /* Costs for NO_REGS are used in cost calculation on the
588 1st pass when the preferred register classes are not
589 known yet. In this case we take the best scenario. */
590 if (ira_memory_move_cost[mode][NO_REGS][0]
591 > ira_memory_move_cost[mode][cl][0])
592 ira_max_memory_move_cost[mode][NO_REGS][0]
593 = ira_memory_move_cost[mode][NO_REGS][0]
594 = ira_memory_move_cost[mode][cl][0];
595 if (ira_memory_move_cost[mode][NO_REGS][1]
596 > ira_memory_move_cost[mode][cl][1])
597 ira_max_memory_move_cost[mode][NO_REGS][1]
598 = ira_memory_move_cost[mode][NO_REGS][1]
599 = ira_memory_move_cost[mode][cl][1];
602 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
603 for (cl2 = (int) N_REG_CLASSES - 1; cl2 >= 0; cl2--)
605 temp_hard_regset = reg_class_contents[cl] & ~no_unit_alloc_regs;
606 temp_hard_regset2 = reg_class_contents[cl2] & ~no_unit_alloc_regs;
607 ira_class_subset_p[cl][cl2]
608 = hard_reg_set_subset_p (temp_hard_regset, temp_hard_regset2);
609 if (! hard_reg_set_empty_p (temp_hard_regset2)
610 && hard_reg_set_subset_p (reg_class_contents[cl2],
611 reg_class_contents[cl]))
612 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
614 cost = ira_memory_move_cost[mode][cl2][0];
615 if (cost > ira_max_memory_move_cost[mode][cl][0])
616 ira_max_memory_move_cost[mode][cl][0] = cost;
617 cost = ira_memory_move_cost[mode][cl2][1];
618 if (cost > ira_max_memory_move_cost[mode][cl][1])
619 ira_max_memory_move_cost[mode][cl][1] = cost;
622 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
623 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
625 ira_memory_move_cost[mode][cl][0]
626 = ira_max_memory_move_cost[mode][cl][0];
627 ira_memory_move_cost[mode][cl][1]
628 = ira_max_memory_move_cost[mode][cl][1];
630 setup_reg_subclasses ();
635 /* Define the following macro if allocation through malloc is
636 preferable. */
637 #define IRA_NO_OBSTACK
639 #ifndef IRA_NO_OBSTACK
640 /* Obstack used for storing all dynamic data (except bitmaps) of the
641 IRA. */
642 static struct obstack ira_obstack;
643 #endif
645 /* Obstack used for storing all bitmaps of the IRA. */
646 static struct bitmap_obstack ira_bitmap_obstack;
648 /* Allocate memory of size LEN for IRA data. */
649 void *
650 ira_allocate (size_t len)
652 void *res;
654 #ifndef IRA_NO_OBSTACK
655 res = obstack_alloc (&ira_obstack, len);
656 #else
657 res = xmalloc (len);
658 #endif
659 return res;
662 /* Free memory ADDR allocated for IRA data. */
663 void
664 ira_free (void *addr ATTRIBUTE_UNUSED)
666 #ifndef IRA_NO_OBSTACK
667 /* do nothing */
668 #else
669 free (addr);
670 #endif
674 /* Allocate and return a bitmap for IRA. */
675 bitmap
676 ira_allocate_bitmap (void)
678 return BITMAP_ALLOC (&ira_bitmap_obstack);
681 /* Free bitmap B allocated for IRA. */
682 void
683 ira_free_bitmap (bitmap b ATTRIBUTE_UNUSED)
685 /* do nothing */
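 /* A hedged usage sketch (editorial illustration only, not part of IRA):
    memory obtained from ira_allocate must be released with ira_free, and
    IRA bitmaps must be paired the same way:

      void *buf = ira_allocate (100 * sizeof (int));
      bitmap live_pseudos = ira_allocate_bitmap ();
      ... use buf and live_pseudos ...
      ira_free_bitmap (live_pseudos);
      ira_free (buf);  */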
690 /* Output information about allocation of all allocnos (except for
691 caps) into file F. */
692 void
693 ira_print_disposition (FILE *f)
695 int i, n, max_regno;
696 ira_allocno_t a;
697 basic_block bb;
699 fprintf (f, "Disposition:");
700 max_regno = max_reg_num ();
701 for (n = 0, i = FIRST_PSEUDO_REGISTER; i < max_regno; i++)
702 for (a = ira_regno_allocno_map[i];
703 a != NULL;
704 a = ALLOCNO_NEXT_REGNO_ALLOCNO (a))
706 if (n % 4 == 0)
707 fprintf (f, "\n");
708 n++;
709 fprintf (f, " %4d:r%-4d", ALLOCNO_NUM (a), ALLOCNO_REGNO (a));
710 if ((bb = ALLOCNO_LOOP_TREE_NODE (a)->bb) != NULL)
711 fprintf (f, "b%-3d", bb->index);
712 else
713 fprintf (f, "l%-3d", ALLOCNO_LOOP_TREE_NODE (a)->loop_num);
714 if (ALLOCNO_HARD_REGNO (a) >= 0)
715 fprintf (f, " %3d", ALLOCNO_HARD_REGNO (a));
716 else
717 fprintf (f, " mem");
719 fprintf (f, "\n");
722 /* Outputs information about allocation of all allocnos into
723 stderr. */
724 void
725 ira_debug_disposition (void)
727 ira_print_disposition (stderr);
732 /* Set up ira_stack_reg_pressure_class which is the biggest pressure
733 register class containing stack registers or NO_REGS if there are
734 no stack registers. To find this class, we iterate through all
735 register pressure classes and choose the first register pressure
736 class containing all the stack registers and having the biggest
737 size. */
738 static void
739 setup_stack_reg_pressure_class (void)
741 ira_stack_reg_pressure_class = NO_REGS;
742 #ifdef STACK_REGS
744 int i, best, size;
745 enum reg_class cl;
746 HARD_REG_SET temp_hard_regset2;
748 CLEAR_HARD_REG_SET (temp_hard_regset);
749 for (i = FIRST_STACK_REG; i <= LAST_STACK_REG; i++)
750 SET_HARD_REG_BIT (temp_hard_regset, i);
751 best = 0;
752 for (i = 0; i < ira_pressure_classes_num; i++)
754 cl = ira_pressure_classes[i];
755 temp_hard_regset2 = temp_hard_regset & reg_class_contents[cl];
756 size = hard_reg_set_size (temp_hard_regset2);
757 if (best < size)
759 best = size;
760 ira_stack_reg_pressure_class = cl;
764 #endif
767 /* Find pressure classes which are register classes for which we
768 calculate register pressure in IRA, register pressure sensitive
769 insn scheduling, and register pressure sensitive loop invariant
770 motion.
772 To make register pressure calculation easy, we always use
773 non-intersected register pressure classes. A move between hard
774 registers of one register pressure class is not more expensive
775 than a load and store of the hard registers. Most likely an allocno
776 class will be a subset of a register pressure class and in many
777 cases equal to a register pressure class. That makes usage of register
778 pressure classes a good approximation for finding high register
779 pressure. */
780 static void
781 setup_pressure_classes (void)
783 int cost, i, n, curr;
784 int cl, cl2;
785 enum reg_class pressure_classes[N_REG_CLASSES];
786 int m;
787 HARD_REG_SET temp_hard_regset2;
788 bool insert_p;
790 if (targetm.compute_pressure_classes)
791 n = targetm.compute_pressure_classes (pressure_classes);
792 else
794 n = 0;
795 for (cl = 0; cl < N_REG_CLASSES; cl++)
797 if (ira_class_hard_regs_num[cl] == 0)
798 continue;
799 if (ira_class_hard_regs_num[cl] != 1
800 /* A register class without subclasses may contain a few
801 hard registers and movement between them is costly
802 (e.g. SPARC FPCC registers). We still should consider it
803 as a candidate for a pressure class. */
804 && alloc_reg_class_subclasses[cl][0] < cl)
806 /* Check that the moves between any hard registers of the
807 current class are not more expensive for a legal mode
808 than load/store of the hard registers of the current
809 class. Such class is a potential candidate to be a
810 register pressure class. */
811 for (m = 0; m < NUM_MACHINE_MODES; m++)
813 temp_hard_regset
814 = (reg_class_contents[cl]
815 & ~(no_unit_alloc_regs
816 | ira_prohibited_class_mode_regs[cl][m]));
817 if (hard_reg_set_empty_p (temp_hard_regset))
818 continue;
819 ira_init_register_move_cost_if_necessary ((machine_mode) m);
820 cost = ira_register_move_cost[m][cl][cl];
821 if (cost <= ira_max_memory_move_cost[m][cl][1]
822 || cost <= ira_max_memory_move_cost[m][cl][0])
823 break;
825 if (m >= NUM_MACHINE_MODES)
826 continue;
828 curr = 0;
829 insert_p = true;
830 temp_hard_regset = reg_class_contents[cl] & ~no_unit_alloc_regs;
831 /* Remove so far added pressure classes which are subset of the
832 current candidate class. Prefer GENERAL_REGS as a pressure
833 register class to another class containing the same
834 allocatable hard registers. We do this because machine
835 dependent cost hooks might give wrong costs for the latter
836 class but always give the right cost for the former class
837 (GENERAL_REGS). */
838 for (i = 0; i < n; i++)
840 cl2 = pressure_classes[i];
841 temp_hard_regset2 = (reg_class_contents[cl2]
842 & ~no_unit_alloc_regs);
843 if (hard_reg_set_subset_p (temp_hard_regset, temp_hard_regset2)
844 && (temp_hard_regset != temp_hard_regset2
845 || cl2 == (int) GENERAL_REGS))
847 pressure_classes[curr++] = (enum reg_class) cl2;
848 insert_p = false;
849 continue;
851 if (hard_reg_set_subset_p (temp_hard_regset2, temp_hard_regset)
852 && (temp_hard_regset2 != temp_hard_regset
853 || cl == (int) GENERAL_REGS))
854 continue;
855 if (temp_hard_regset2 == temp_hard_regset)
856 insert_p = false;
857 pressure_classes[curr++] = (enum reg_class) cl2;
859 /* If the current candidate is a subset of a so far added
860 pressure class, don't add it to the list of the pressure
861 classes. */
862 if (insert_p)
863 pressure_classes[curr++] = (enum reg_class) cl;
864 n = curr;
867 #ifdef ENABLE_IRA_CHECKING
869 HARD_REG_SET ignore_hard_regs;
871 /* Check pressure classes correctness: here we check that hard
872 registers from all register pressure classes contain all hard
873 registers available for the allocation. */
874 CLEAR_HARD_REG_SET (temp_hard_regset);
875 CLEAR_HARD_REG_SET (temp_hard_regset2);
876 ignore_hard_regs = no_unit_alloc_regs;
877 for (cl = 0; cl < LIM_REG_CLASSES; cl++)
879 /* For some targets (like MIPS with MD_REGS), there are some
880 classes with hard registers available for allocation but
881 not able to hold a value of any mode.
882 for (m = 0; m < NUM_MACHINE_MODES; m++)
883 if (contains_reg_of_mode[cl][m])
884 break;
885 if (m >= NUM_MACHINE_MODES)
887 ignore_hard_regs |= reg_class_contents[cl];
888 continue;
890 for (i = 0; i < n; i++)
891 if ((int) pressure_classes[i] == cl)
892 break;
893 temp_hard_regset2 |= reg_class_contents[cl];
894 if (i < n)
895 temp_hard_regset |= reg_class_contents[cl];
897 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
898 /* Some targets (like SPARC with ICC reg) have allocatable regs
899 for which no reg class is defined. */
900 if (REGNO_REG_CLASS (i) == NO_REGS)
901 SET_HARD_REG_BIT (ignore_hard_regs, i);
902 temp_hard_regset &= ~ignore_hard_regs;
903 temp_hard_regset2 &= ~ignore_hard_regs;
904 ira_assert (hard_reg_set_subset_p (temp_hard_regset2, temp_hard_regset));
906 #endif
907 ira_pressure_classes_num = 0;
908 for (i = 0; i < n; i++)
910 cl = (int) pressure_classes[i];
911 ira_reg_pressure_class_p[cl] = true;
912 ira_pressure_classes[ira_pressure_classes_num++] = (enum reg_class) cl;
914 setup_stack_reg_pressure_class ();
917 /* Set up IRA_UNIFORM_CLASS_P. A uniform class is a register class
918 whose register move cost between any registers of the class is the
919 same as that of all its subclasses. We use this data to speed up the
920 2nd pass of calculations of allocno costs. */
921 static void
922 setup_uniform_class_p (void)
924 int i, cl, cl2, m;
926 for (cl = 0; cl < N_REG_CLASSES; cl++)
928 ira_uniform_class_p[cl] = false;
929 if (ira_class_hard_regs_num[cl] == 0)
930 continue;
931 /* We cannot use alloc_reg_class_subclasses here because move
932 cost hooks do not take into account that some registers are
933 unavailable for the subtarget. E.g. for i686, INT_SSE_REGS
934 is an element of alloc_reg_class_subclasses for GENERAL_REGS
935 because SSE regs are unavailable. */
936 for (i = 0; (cl2 = reg_class_subclasses[cl][i]) != LIM_REG_CLASSES; i++)
938 if (ira_class_hard_regs_num[cl2] == 0)
939 continue;
940 for (m = 0; m < NUM_MACHINE_MODES; m++)
941 if (contains_reg_of_mode[cl][m] && contains_reg_of_mode[cl2][m])
943 ira_init_register_move_cost_if_necessary ((machine_mode) m);
944 if (ira_register_move_cost[m][cl][cl]
945 != ira_register_move_cost[m][cl2][cl2])
946 break;
948 if (m < NUM_MACHINE_MODES)
949 break;
951 if (cl2 == LIM_REG_CLASSES)
952 ira_uniform_class_p[cl] = true;
956 /* Set up IRA_ALLOCNO_CLASSES, IRA_ALLOCNO_CLASSES_NUM,
957 IRA_IMPORTANT_CLASSES, and IRA_IMPORTANT_CLASSES_NUM.
959 A target may have many subtargets and not all target hard registers can
960 be used for allocation, e.g. the x86 port in 32-bit mode cannot use
961 hard registers introduced in x86-64 (like r8-r15). Some classes
962 might have the same allocatable hard registers, e.g. INDEX_REGS
963 and GENERAL_REGS in the x86 port in 32-bit mode. To decrease the
964 effort of various calculations we introduce allocno classes which contain
965 unique non-empty sets of allocatable hard-registers.
967 Pseudo class cost calculation in ira-costs.c is very expensive.
968 Therefore we try to decrease the number of classes involved in
969 such calculation. Register classes used in the cost calculation
970 are called important classes. They are allocno classes and other
971 non-empty classes whose allocatable hard register sets are inside
972 an allocno class hard register set. At first sight, it
973 looks like they are just the allocno classes. That is not true. In
974 the example of the x86 port in 32-bit mode, the allocno classes will contain
975 GENERAL_REGS but not LEGACY_REGS (because the allocatable hard
976 registers are the same for both classes). The important
977 classes will contain GENERAL_REGS and LEGACY_REGS. This is done
978 because a machine description insn constraint may refer to
979 LEGACY_REGS and the code in ira-costs.c is mostly based on investigation
980 of the insn constraints. */
981 static void
982 setup_allocno_and_important_classes (void)
984 int i, j, n, cl;
985 bool set_p;
986 HARD_REG_SET temp_hard_regset2;
987 static enum reg_class classes[LIM_REG_CLASSES + 1];
989 n = 0;
990 /* Collect classes which contain unique sets of allocatable hard
991 registers. Prefer GENERAL_REGS to other classes containing the
992 same set of hard registers. */
993 for (i = 0; i < LIM_REG_CLASSES; i++)
995 temp_hard_regset = reg_class_contents[i] & ~no_unit_alloc_regs;
996 for (j = 0; j < n; j++)
998 cl = classes[j];
999 temp_hard_regset2 = reg_class_contents[cl] & ~no_unit_alloc_regs;
1000 if (temp_hard_regset == temp_hard_regset2)
1001 break;
1003 if (j >= n || targetm.additional_allocno_class_p (i))
1004 classes[n++] = (enum reg_class) i;
1005 else if (i == GENERAL_REGS)
1006 /* Prefer general regs. For the i386 example, it means that
1007 we prefer GENERAL_REGS over INDEX_REGS or LEGACY_REGS
1008 (all of them consist of the same available hard
1009 registers). */
1010 classes[j] = (enum reg_class) i;
1012 classes[n] = LIM_REG_CLASSES;
1014 /* Set up classes which can be used for allocnos as classes
1015 containing non-empty unique sets of allocatable hard
1016 registers. */
1017 ira_allocno_classes_num = 0;
1018 for (i = 0; (cl = classes[i]) != LIM_REG_CLASSES; i++)
1019 if (ira_class_hard_regs_num[cl] > 0)
1020 ira_allocno_classes[ira_allocno_classes_num++] = (enum reg_class) cl;
1021 ira_important_classes_num = 0;
1022 /* Add non-allocno classes whose non-empty set of allocatable
1023 hard regs is contained in the set of some allocno class. */
1024 for (cl = 0; cl < N_REG_CLASSES; cl++)
1025 if (ira_class_hard_regs_num[cl] > 0)
1027 temp_hard_regset = reg_class_contents[cl] & ~no_unit_alloc_regs;
1028 set_p = false;
1029 for (j = 0; j < ira_allocno_classes_num; j++)
1031 temp_hard_regset2 = (reg_class_contents[ira_allocno_classes[j]]
1032 & ~no_unit_alloc_regs);
1033 if ((enum reg_class) cl == ira_allocno_classes[j])
1034 break;
1035 else if (hard_reg_set_subset_p (temp_hard_regset,
1036 temp_hard_regset2))
1037 set_p = true;
1039 if (set_p && j >= ira_allocno_classes_num)
1040 ira_important_classes[ira_important_classes_num++]
1041 = (enum reg_class) cl;
1043 /* Now add allocno classes to the important classes. */
1044 for (j = 0; j < ira_allocno_classes_num; j++)
1045 ira_important_classes[ira_important_classes_num++]
1046 = ira_allocno_classes[j];
1047 for (cl = 0; cl < N_REG_CLASSES; cl++)
1049 ira_reg_allocno_class_p[cl] = false;
1050 ira_reg_pressure_class_p[cl] = false;
1052 for (j = 0; j < ira_allocno_classes_num; j++)
1053 ira_reg_allocno_class_p[ira_allocno_classes[j]] = true;
1054 setup_pressure_classes ();
1055 setup_uniform_class_p ();
1058 /* Setup translation in CLASS_TRANSLATE of all classes into a class
1059 given by array CLASSES of length CLASSES_NUM. The function is used
1060 to translate any reg class to an allocno class or to a
1061 pressure class. This translation is necessary for some
1062 calculations when we can use only allocno or pressure classes and
1063 such a translation represents an approximate representation of all
1064 classes.
1066 The translation in the case when the allocatable hard register set of a
1067 given class is a subset of the allocatable hard register set of a class
1068 in CLASSES is pretty simple. We use the smallest class from CLASSES
1069 containing the given class. If the allocatable hard register set of a
1070 given class is not a subset of any corresponding set of a class
1071 from CLASSES, we use the cheapest (from the load/store point of view)
1072 class from CLASSES whose set intersects with the given class's set. */
1073 static void
1074 setup_class_translate_array (enum reg_class *class_translate,
1075 int classes_num, enum reg_class *classes)
1077 int cl, mode;
1078 enum reg_class aclass, best_class, *cl_ptr;
1079 int i, cost, min_cost, best_cost;
1081 for (cl = 0; cl < N_REG_CLASSES; cl++)
1082 class_translate[cl] = NO_REGS;
1084 for (i = 0; i < classes_num; i++)
1086 aclass = classes[i];
1087 for (cl_ptr = &alloc_reg_class_subclasses[aclass][0];
1088 (cl = *cl_ptr) != LIM_REG_CLASSES;
1089 cl_ptr++)
1090 if (class_translate[cl] == NO_REGS)
1091 class_translate[cl] = aclass;
1092 class_translate[aclass] = aclass;
1094 /* For classes which are not fully covered by one of given classes
1095 (in other words covered by more than one given class), use the
1096 cheapest class. */
1097 for (cl = 0; cl < N_REG_CLASSES; cl++)
1099 if (cl == NO_REGS || class_translate[cl] != NO_REGS)
1100 continue;
1101 best_class = NO_REGS;
1102 best_cost = INT_MAX;
1103 for (i = 0; i < classes_num; i++)
1105 aclass = classes[i];
1106 temp_hard_regset = (reg_class_contents[aclass]
1107 & reg_class_contents[cl]
1108 & ~no_unit_alloc_regs);
1109 if (! hard_reg_set_empty_p (temp_hard_regset))
1111 min_cost = INT_MAX;
1112 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
1114 cost = (ira_memory_move_cost[mode][aclass][0]
1115 + ira_memory_move_cost[mode][aclass][1]);
1116 if (min_cost > cost)
1117 min_cost = cost;
1119 if (best_class == NO_REGS || best_cost > min_cost)
1121 best_class = aclass;
1122 best_cost = min_cost;
1126 class_translate[cl] = best_class;
1130 /* Set up array IRA_ALLOCNO_CLASS_TRANSLATE and
1131 IRA_PRESSURE_CLASS_TRANSLATE. */
1132 static void
1133 setup_class_translate (void)
1135 setup_class_translate_array (ira_allocno_class_translate,
1136 ira_allocno_classes_num, ira_allocno_classes);
1137 setup_class_translate_array (ira_pressure_class_translate,
1138 ira_pressure_classes_num, ira_pressure_classes);
1141 /* Order numbers of allocno classes in original target allocno class
1142 array, -1 for non-allocno classes. */
1143 static int allocno_class_order[N_REG_CLASSES];
1145 /* The function used to sort the important classes. */
1146 static int
1147 comp_reg_classes_func (const void *v1p, const void *v2p)
1149 enum reg_class cl1 = *(const enum reg_class *) v1p;
1150 enum reg_class cl2 = *(const enum reg_class *) v2p;
1151 enum reg_class tcl1, tcl2;
1152 int diff;
1154 tcl1 = ira_allocno_class_translate[cl1];
1155 tcl2 = ira_allocno_class_translate[cl2];
1156 if (tcl1 != NO_REGS && tcl2 != NO_REGS
1157 && (diff = allocno_class_order[tcl1] - allocno_class_order[tcl2]) != 0)
1158 return diff;
1159 return (int) cl1 - (int) cl2;
1162 /* For correct work of function setup_reg_class_relations we need to
1163 reorder the important classes according to the order of their allocno
1164 classes. This places important classes containing the same
1165 allocatable hard register set adjacent to each other, and the allocno
1166 class with that allocatable hard register set right after the other
1167 important classes with the same set.
1169 In the example from the comments of function
1170 setup_allocno_and_important_classes, it places LEGACY_REGS and
1171 GENERAL_REGS close to each other and GENERAL_REGS is after
1172 LEGACY_REGS. */
1173 static void
1174 reorder_important_classes (void)
1176 int i;
1178 for (i = 0; i < N_REG_CLASSES; i++)
1179 allocno_class_order[i] = -1;
1180 for (i = 0; i < ira_allocno_classes_num; i++)
1181 allocno_class_order[ira_allocno_classes[i]] = i;
1182 qsort (ira_important_classes, ira_important_classes_num,
1183 sizeof (enum reg_class), comp_reg_classes_func);
1184 for (i = 0; i < ira_important_classes_num; i++)
1185 ira_important_class_nums[ira_important_classes[i]] = i;
1188 /* Set up IRA_REG_CLASS_SUBUNION, IRA_REG_CLASS_SUPERUNION,
1189 IRA_REG_CLASS_SUPER_CLASSES, IRA_REG_CLASSES_INTERSECT, and
1190 IRA_REG_CLASSES_INTERSECT_P. For the meaning of the relations,
1191 please see corresponding comments in ira-int.h. */
1192 static void
1193 setup_reg_class_relations (void)
1195 int i, cl1, cl2, cl3;
1196 HARD_REG_SET intersection_set, union_set, temp_set2;
1197 bool important_class_p[N_REG_CLASSES];
1199 memset (important_class_p, 0, sizeof (important_class_p));
1200 for (i = 0; i < ira_important_classes_num; i++)
1201 important_class_p[ira_important_classes[i]] = true;
1202 for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
1204 ira_reg_class_super_classes[cl1][0] = LIM_REG_CLASSES;
1205 for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
1207 ira_reg_classes_intersect_p[cl1][cl2] = false;
1208 ira_reg_class_intersect[cl1][cl2] = NO_REGS;
1209 ira_reg_class_subset[cl1][cl2] = NO_REGS;
1210 temp_hard_regset = reg_class_contents[cl1] & ~no_unit_alloc_regs;
1211 temp_set2 = reg_class_contents[cl2] & ~no_unit_alloc_regs;
1212 if (hard_reg_set_empty_p (temp_hard_regset)
1213 && hard_reg_set_empty_p (temp_set2))
1215 /* Both classes have no allocatable hard registers
1216 -- take all class hard registers into account and use
1217 reg_class_subunion and reg_class_superunion. */
1218 for (i = 0;; i++)
1220 cl3 = reg_class_subclasses[cl1][i];
1221 if (cl3 == LIM_REG_CLASSES)
1222 break;
1223 if (reg_class_subset_p (ira_reg_class_intersect[cl1][cl2],
1224 (enum reg_class) cl3))
1225 ira_reg_class_intersect[cl1][cl2] = (enum reg_class) cl3;
1227 ira_reg_class_subunion[cl1][cl2] = reg_class_subunion[cl1][cl2];
1228 ira_reg_class_superunion[cl1][cl2] = reg_class_superunion[cl1][cl2];
1229 continue;
1231 ira_reg_classes_intersect_p[cl1][cl2]
1232 = hard_reg_set_intersect_p (temp_hard_regset, temp_set2);
1233 if (important_class_p[cl1] && important_class_p[cl2]
1234 && hard_reg_set_subset_p (temp_hard_regset, temp_set2))
1236 /* CL1 and CL2 are important classes and CL1 allocatable
1237 hard register set is inside the CL2 allocatable hard
1238 register set -- record CL2 as a super class of CL1. */
1239 enum reg_class *p;
1241 p = &ira_reg_class_super_classes[cl1][0];
1242 while (*p != LIM_REG_CLASSES)
1243 p++;
1244 *p++ = (enum reg_class) cl2;
1245 *p = LIM_REG_CLASSES;
1247 ira_reg_class_subunion[cl1][cl2] = NO_REGS;
1248 ira_reg_class_superunion[cl1][cl2] = NO_REGS;
1249 intersection_set = (reg_class_contents[cl1]
1250 & reg_class_contents[cl2]
1251 & ~no_unit_alloc_regs);
1252 union_set = ((reg_class_contents[cl1] | reg_class_contents[cl2])
1253 & ~no_unit_alloc_regs);
1254 for (cl3 = 0; cl3 < N_REG_CLASSES; cl3++)
1256 temp_hard_regset = reg_class_contents[cl3] & ~no_unit_alloc_regs;
1257 if (hard_reg_set_subset_p (temp_hard_regset, intersection_set))
1259 /* CL3 allocatable hard register set is inside of
1260 intersection of allocatable hard register sets
1261 of CL1 and CL2. */
1262 if (important_class_p[cl3])
1264 temp_set2
1265 = (reg_class_contents
1266 [ira_reg_class_intersect[cl1][cl2]]);
1267 temp_set2 &= ~no_unit_alloc_regs;
1268 if (! hard_reg_set_subset_p (temp_hard_regset, temp_set2)
1269 /* If the allocatable hard register sets are
1270 the same, prefer GENERAL_REGS or the
1271 smallest class for debugging
1272 purposes. */
1273 || (temp_hard_regset == temp_set2
1274 && (cl3 == GENERAL_REGS
1275 || ((ira_reg_class_intersect[cl1][cl2]
1276 != GENERAL_REGS)
1277 && hard_reg_set_subset_p
1278 (reg_class_contents[cl3],
1279 reg_class_contents
1280 [(int)
1281 ira_reg_class_intersect[cl1][cl2]])))))
1282 ira_reg_class_intersect[cl1][cl2] = (enum reg_class) cl3;
1284 temp_set2
1285 = (reg_class_contents[ira_reg_class_subset[cl1][cl2]]
1286 & ~no_unit_alloc_regs);
1287 if (! hard_reg_set_subset_p (temp_hard_regset, temp_set2)
1288 /* Ignore unavailable hard registers and prefer
1289 smallest class for debugging purposes. */
1290 || (temp_hard_regset == temp_set2
1291 && hard_reg_set_subset_p
1292 (reg_class_contents[cl3],
1293 reg_class_contents
1294 [(int) ira_reg_class_subset[cl1][cl2]])))
1295 ira_reg_class_subset[cl1][cl2] = (enum reg_class) cl3;
1297 if (important_class_p[cl3]
1298 && hard_reg_set_subset_p (temp_hard_regset, union_set))
1300 /* CL3 allocatable hard register set is inside of
1301 union of allocatable hard register sets of CL1
1302 and CL2. */
1303 temp_set2
1304 = (reg_class_contents[ira_reg_class_subunion[cl1][cl2]]
1305 & ~no_unit_alloc_regs);
1306 if (ira_reg_class_subunion[cl1][cl2] == NO_REGS
1307 || (hard_reg_set_subset_p (temp_set2, temp_hard_regset)
1309 && (temp_set2 != temp_hard_regset
1310 || cl3 == GENERAL_REGS
1311 /* If the allocatable hard register sets are the
1312 same, prefer GENERAL_REGS or the smallest
1313 class for debugging purposes. */
1314 || (ira_reg_class_subunion[cl1][cl2] != GENERAL_REGS
1315 && hard_reg_set_subset_p
1316 (reg_class_contents[cl3],
1317 reg_class_contents
1318 [(int) ira_reg_class_subunion[cl1][cl2]])))))
1319 ira_reg_class_subunion[cl1][cl2] = (enum reg_class) cl3;
1321 if (hard_reg_set_subset_p (union_set, temp_hard_regset))
1323 /* CL3 allocatable hard register set contains union
1324 of allocatable hard register sets of CL1 and
1325 CL2. */
1326 temp_set2
1327 = (reg_class_contents[ira_reg_class_superunion[cl1][cl2]]
1328 & ~no_unit_alloc_regs);
1329 if (ira_reg_class_superunion[cl1][cl2] == NO_REGS
1330 || (hard_reg_set_subset_p (temp_hard_regset, temp_set2)
1332 && (temp_set2 != temp_hard_regset
1333 || cl3 == GENERAL_REGS
1334 /* If the allocatable hard register sets are the
1335 same, prefer GENERAL_REGS or the smallest
1336 class for debugging purposes. */
1337 || (ira_reg_class_superunion[cl1][cl2] != GENERAL_REGS
1338 && hard_reg_set_subset_p
1339 (reg_class_contents[cl3],
1340 reg_class_contents
1341 [(int) ira_reg_class_superunion[cl1][cl2]])))))
1342 ira_reg_class_superunion[cl1][cl2] = (enum reg_class) cl3;
1349 /* Output all uniform and important classes into file F. */
1350 static void
1351 print_uniform_and_important_classes (FILE *f)
1353 int i, cl;
1355 fprintf (f, "Uniform classes:\n");
1356 for (cl = 0; cl < N_REG_CLASSES; cl++)
1357 if (ira_uniform_class_p[cl])
1358 fprintf (f, " %s", reg_class_names[cl]);
1359 fprintf (f, "\nImportant classes:\n");
1360 for (i = 0; i < ira_important_classes_num; i++)
1361 fprintf (f, " %s", reg_class_names[ira_important_classes[i]]);
1362 fprintf (f, "\n");
1365 /* Output all possible allocno or pressure classes and their
1366 translation map into file F. */
1367 static void
1368 print_translated_classes (FILE *f, bool pressure_p)
1370 int classes_num = (pressure_p
1371 ? ira_pressure_classes_num : ira_allocno_classes_num);
1372 enum reg_class *classes = (pressure_p
1373 ? ira_pressure_classes : ira_allocno_classes);
1374 enum reg_class *class_translate = (pressure_p
1375 ? ira_pressure_class_translate
1376 : ira_allocno_class_translate);
1377 int i;
1379 fprintf (f, "%s classes:\n", pressure_p ? "Pressure" : "Allocno");
1380 for (i = 0; i < classes_num; i++)
1381 fprintf (f, " %s", reg_class_names[classes[i]]);
1382 fprintf (f, "\nClass translation:\n");
1383 for (i = 0; i < N_REG_CLASSES; i++)
1384 fprintf (f, " %s -> %s\n", reg_class_names[i],
1385 reg_class_names[class_translate[i]]);
1388 /* Output all possible allocno and pressure classes and the
1389 translation maps into stderr. */
1390 void
1391 ira_debug_allocno_classes (void)
1393 print_uniform_and_important_classes (stderr);
1394 print_translated_classes (stderr, false);
1395 print_translated_classes (stderr, true);
1398 /* Set up different arrays concerning class subsets, allocno and
1399 important classes. */
1400 static void
1401 find_reg_classes (void)
1403 setup_allocno_and_important_classes ();
1404 setup_class_translate ();
1405 reorder_important_classes ();
1406 setup_reg_class_relations ();
1411 /* Set up the array above. */
1412 static void
1413 setup_hard_regno_aclass (void)
1415 int i;
1417 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
1419 #if 1
1420 ira_hard_regno_allocno_class[i]
1421 = (TEST_HARD_REG_BIT (no_unit_alloc_regs, i)
1422 ? NO_REGS
1423 : ira_allocno_class_translate[REGNO_REG_CLASS (i)]);
1424 #else
1425 int j;
1426 enum reg_class cl;
1427 ira_hard_regno_allocno_class[i] = NO_REGS;
1428 for (j = 0; j < ira_allocno_classes_num; j++)
1430 cl = ira_allocno_classes[j];
1431 if (ira_class_hard_reg_index[cl][i] >= 0)
1433 ira_hard_regno_allocno_class[i] = cl;
1434 break;
1437 #endif
1443 /* Form IRA_REG_CLASS_MAX_NREGS and IRA_REG_CLASS_MIN_NREGS maps. */
1444 static void
1445 setup_reg_class_nregs (void)
1447 int i, cl, cl2, m;
1449 for (m = 0; m < MAX_MACHINE_MODE; m++)
1451 for (cl = 0; cl < N_REG_CLASSES; cl++)
1452 ira_reg_class_max_nregs[cl][m]
1453 = ira_reg_class_min_nregs[cl][m]
1454 = targetm.class_max_nregs ((reg_class_t) cl, (machine_mode) m);
1455 for (cl = 0; cl < N_REG_CLASSES; cl++)
1456 for (i = 0;
1457 (cl2 = alloc_reg_class_subclasses[cl][i]) != LIM_REG_CLASSES;
1458 i++)
1459 if (ira_reg_class_min_nregs[cl2][m]
1460 < ira_reg_class_min_nregs[cl][m])
1461 ira_reg_class_min_nregs[cl][m] = ira_reg_class_min_nregs[cl2][m];
1467 /* Set up IRA_PROHIBITED_CLASS_MODE_REGS and IRA_CLASS_SINGLETON.
1468 This function is called once IRA_CLASS_HARD_REGS has been initialized. */
1469 static void
1470 setup_prohibited_class_mode_regs (void)
1472 int j, k, hard_regno, cl, last_hard_regno, count;
1474 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
1476 temp_hard_regset = reg_class_contents[cl] & ~no_unit_alloc_regs;
1477 for (j = 0; j < NUM_MACHINE_MODES; j++)
1479 count = 0;
1480 last_hard_regno = -1;
1481 CLEAR_HARD_REG_SET (ira_prohibited_class_mode_regs[cl][j]);
1482 for (k = ira_class_hard_regs_num[cl] - 1; k >= 0; k--)
1484 hard_regno = ira_class_hard_regs[cl][k];
1485 if (!targetm.hard_regno_mode_ok (hard_regno, (machine_mode) j))
1486 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1487 hard_regno);
1488 else if (in_hard_reg_set_p (temp_hard_regset,
1489 (machine_mode) j, hard_regno))
1491 last_hard_regno = hard_regno;
1492 count++;
1495 ira_class_singleton[cl][j] = (count == 1 ? last_hard_regno : -1);
1500 /* Clarify IRA_PROHIBITED_CLASS_MODE_REGS by excluding hard registers
1501 spanning from one register pressure class to another one. It is
1502 called after defining the pressure classes. */
1503 static void
1504 clarify_prohibited_class_mode_regs (void)
1506 int j, k, hard_regno, cl, pclass, nregs;
1508 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
1509 for (j = 0; j < NUM_MACHINE_MODES; j++)
1511 CLEAR_HARD_REG_SET (ira_useful_class_mode_regs[cl][j]);
1512 for (k = ira_class_hard_regs_num[cl] - 1; k >= 0; k--)
1514 hard_regno = ira_class_hard_regs[cl][k];
1515 if (TEST_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j], hard_regno))
1516 continue;
1517 nregs = hard_regno_nregs (hard_regno, (machine_mode) j);
1518 if (hard_regno + nregs > FIRST_PSEUDO_REGISTER)
1520 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1521 hard_regno);
1522 continue;
1524 pclass = ira_pressure_class_translate[REGNO_REG_CLASS (hard_regno)];
1525 for (nregs--; nregs >= 0; nregs--)
1526 if (((enum reg_class) pclass
1527 != ira_pressure_class_translate[REGNO_REG_CLASS
1528 (hard_regno + nregs)]))
1530 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1531 hard_regno);
1532 break;
1534 if (!TEST_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1535 hard_regno))
1536 add_to_hard_reg_set (&ira_useful_class_mode_regs[cl][j],
1537 (machine_mode) j, hard_regno);
1542 /* Allocate and initialize IRA_REGISTER_MOVE_COST, IRA_MAY_MOVE_IN_COST
1543 and IRA_MAY_MOVE_OUT_COST for MODE. */
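/* Sketch of the cost model used below (my reading of the indices: [0] is
   the register-to-memory direction and [1] memory-to-register).  When the
   mode cannot live in a hard register of one of the two classes, the move
   is either treated as effectively impossible (cost 65535, when the class
   has too few registers for the mode) or approximated by a round trip
   through memory:

     cost = (ira_memory_move_cost[mode][cl1][0]      -- store out of cl1
             + ira_memory_move_cost[mode][cl2][1])   -- load into cl2
            * 2;

   otherwise the target's register_move_cost is used directly.  */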
1544 void
1545 ira_init_register_move_cost (machine_mode mode)
1547 static unsigned short last_move_cost[N_REG_CLASSES][N_REG_CLASSES];
1548 bool all_match = true;
1549 unsigned int i, cl1, cl2;
1550 HARD_REG_SET ok_regs;
1552 ira_assert (ira_register_move_cost[mode] == NULL
1553 && ira_may_move_in_cost[mode] == NULL
1554 && ira_may_move_out_cost[mode] == NULL);
1555 CLEAR_HARD_REG_SET (ok_regs);
1556 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
1557 if (targetm.hard_regno_mode_ok (i, mode))
1558 SET_HARD_REG_BIT (ok_regs, i);
1560 /* Note that we might be asked about the move costs of modes that
1561 cannot be stored in any hard register, for example if an inline
1562 asm tries to create a register operand with an impossible mode.
1563 We therefore can't assert have_regs_of_mode[mode] here. */
1564 for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
1565 for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
1567 int cost;
1568 if (!hard_reg_set_intersect_p (ok_regs, reg_class_contents[cl1])
1569 || !hard_reg_set_intersect_p (ok_regs, reg_class_contents[cl2]))
1571 if ((ira_reg_class_max_nregs[cl1][mode]
1572 > ira_class_hard_regs_num[cl1])
1573 || (ira_reg_class_max_nregs[cl2][mode]
1574 > ira_class_hard_regs_num[cl2]))
1575 cost = 65535;
1576 else
1577 cost = (ira_memory_move_cost[mode][cl1][0]
1578 + ira_memory_move_cost[mode][cl2][1]) * 2;
1580 else
1582 cost = register_move_cost (mode, (enum reg_class) cl1,
1583 (enum reg_class) cl2);
1584 ira_assert (cost < 65535);
1586 all_match &= (last_move_cost[cl1][cl2] == cost);
1587 last_move_cost[cl1][cl2] = cost;
1589 if (all_match && last_mode_for_init_move_cost != -1)
1591 ira_register_move_cost[mode]
1592 = ira_register_move_cost[last_mode_for_init_move_cost];
1593 ira_may_move_in_cost[mode]
1594 = ira_may_move_in_cost[last_mode_for_init_move_cost];
1595 ira_may_move_out_cost[mode]
1596 = ira_may_move_out_cost[last_mode_for_init_move_cost];
1597 return;
1599 last_mode_for_init_move_cost = mode;
1600 ira_register_move_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
1601 ira_may_move_in_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
1602 ira_may_move_out_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
1603 for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
1604 for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
1606 int cost;
1607 enum reg_class *p1, *p2;
1609 if (last_move_cost[cl1][cl2] == 65535)
1611 ira_register_move_cost[mode][cl1][cl2] = 65535;
1612 ira_may_move_in_cost[mode][cl1][cl2] = 65535;
1613 ira_may_move_out_cost[mode][cl1][cl2] = 65535;
1615 else
1617 cost = last_move_cost[cl1][cl2];
1619 for (p2 = &reg_class_subclasses[cl2][0];
1620 *p2 != LIM_REG_CLASSES; p2++)
1621 if (ira_class_hard_regs_num[*p2] > 0
1622 && (ira_reg_class_max_nregs[*p2][mode]
1623 <= ira_class_hard_regs_num[*p2]))
1624 cost = MAX (cost, ira_register_move_cost[mode][cl1][*p2]);
1626 for (p1 = &reg_class_subclasses[cl1][0];
1627 *p1 != LIM_REG_CLASSES; p1++)
1628 if (ira_class_hard_regs_num[*p1] > 0
1629 && (ira_reg_class_max_nregs[*p1][mode]
1630 <= ira_class_hard_regs_num[*p1]))
1631 cost = MAX (cost, ira_register_move_cost[mode][*p1][cl2]);
1633 ira_assert (cost <= 65535);
1634 ira_register_move_cost[mode][cl1][cl2] = cost;
1636 if (ira_class_subset_p[cl1][cl2])
1637 ira_may_move_in_cost[mode][cl1][cl2] = 0;
1638 else
1639 ira_may_move_in_cost[mode][cl1][cl2] = cost;
1641 if (ira_class_subset_p[cl2][cl1])
1642 ira_may_move_out_cost[mode][cl1][cl2] = 0;
1643 else
1644 ira_may_move_out_cost[mode][cl1][cl2] = cost;
1651 /* This is called once during compiler work. It sets up
1652 different arrays whose values don't depend on the compiled
1653 function. */
1654 void
1655 ira_init_once (void)
1657 ira_init_costs_once ();
1658 lra_init_once ();
1660 ira_use_lra_p = targetm.lra_p ();
1663 /* Free ira_register_move_cost, ira_may_move_in_cost and
1664 ira_may_move_out_cost for each mode. */
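/* Note (a reading of the code below, not new behaviour): because
   ira_init_register_move_cost lets several modes share one table when all
   their costs match, the loop below frees a table only the first time it
   is encountered; any later mode that points at the same table merely has
   its pointer cleared by the memsets at the end.  */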
1665 void
1666 target_ira_int::free_register_move_costs (void)
1668 int mode, i;
1670 /* Reset move_cost and friends, making sure we only free shared
1671 table entries once. */
1672 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
1673 if (x_ira_register_move_cost[mode])
1675 for (i = 0;
1676 i < mode && (x_ira_register_move_cost[i]
1677 != x_ira_register_move_cost[mode]);
1678 i++)
1680 if (i == mode)
1682 free (x_ira_register_move_cost[mode]);
1683 free (x_ira_may_move_in_cost[mode]);
1684 free (x_ira_may_move_out_cost[mode]);
1687 memset (x_ira_register_move_cost, 0, sizeof x_ira_register_move_cost);
1688 memset (x_ira_may_move_in_cost, 0, sizeof x_ira_may_move_in_cost);
1689 memset (x_ira_may_move_out_cost, 0, sizeof x_ira_may_move_out_cost);
1690 last_mode_for_init_move_cost = -1;
1693 target_ira_int::~target_ira_int ()
1695 free_ira_costs ();
1696 free_register_move_costs ();
1699 /* This is called every time when register related information is
1700 changed. */
1701 void
1702 ira_init (void)
1704 this_target_ira_int->free_register_move_costs ();
1705 setup_reg_mode_hard_regset ();
1706 setup_alloc_regs (flag_omit_frame_pointer != 0);
1707 setup_class_subset_and_memory_move_costs ();
1708 setup_reg_class_nregs ();
1709 setup_prohibited_class_mode_regs ();
1710 find_reg_classes ();
1711 clarify_prohibited_class_mode_regs ();
1712 setup_hard_regno_aclass ();
1713 ira_init_costs ();
1717 #define ira_prohibited_mode_move_regs_initialized_p \
1718 (this_target_ira_int->x_ira_prohibited_mode_move_regs_initialized_p)
1720 /* Set up IRA_PROHIBITED_MODE_MOVE_REGS. */
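/* Sketch of the probing done below: for each (mode M, hard register R)
   pair a throw-away move insn of the form

     (set (reg:M R) (reg:M R))

   is built and passed through recog_memoized/constrain_operands; R is
   removed from ira_prohibited_mode_move_regs[M] only if some enabled
   alternative accepts the move.  */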
1721 static void
1722 setup_prohibited_mode_move_regs (void)
1724 int i, j;
1725 rtx test_reg1, test_reg2, move_pat;
1726 rtx_insn *move_insn;
1728 if (ira_prohibited_mode_move_regs_initialized_p)
1729 return;
1730 ira_prohibited_mode_move_regs_initialized_p = true;
1731 test_reg1 = gen_rtx_REG (word_mode, LAST_VIRTUAL_REGISTER + 1);
1732 test_reg2 = gen_rtx_REG (word_mode, LAST_VIRTUAL_REGISTER + 2);
1733 move_pat = gen_rtx_SET (test_reg1, test_reg2);
1734 move_insn = gen_rtx_INSN (VOIDmode, 0, 0, 0, move_pat, 0, -1, 0);
1735 for (i = 0; i < NUM_MACHINE_MODES; i++)
1737 SET_HARD_REG_SET (ira_prohibited_mode_move_regs[i]);
1738 for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
1740 if (!targetm.hard_regno_mode_ok (j, (machine_mode) i))
1741 continue;
1742 set_mode_and_regno (test_reg1, (machine_mode) i, j);
1743 set_mode_and_regno (test_reg2, (machine_mode) i, j);
1744 INSN_CODE (move_insn) = -1;
1745 recog_memoized (move_insn);
1746 if (INSN_CODE (move_insn) < 0)
1747 continue;
1748 extract_insn (move_insn);
1749 /* We don't know whether the move will be in code that is optimized
1750 for size or speed, so consider all enabled alternatives. */
1751 if (! constrain_operands (1, get_enabled_alternatives (move_insn)))
1752 continue;
1753 CLEAR_HARD_REG_BIT (ira_prohibited_mode_move_regs[i], j);
1760 /* Extract INSN and return the set of alternatives that we should consider.
1761 This excludes any alternatives whose constraints are obviously impossible
1762 to meet (e.g. because the constraint requires a constant and the operand
1763 is nonconstant). It also excludes alternatives that are bound to need
1764 a spill or reload, as long as we have other alternatives that match
1765 exactly. */
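/* Hypothetical example of the distinction made here: for an insn whose
   operand 0 uses the constraints "=r,m" and operand 1 uses "r,r", an
   operand pair (mem, reg) satisfies the second alternative exactly but
   the first only at the cost of a reload of operand 0, so only the second
   alternative is returned; the reload-needing alternative would be kept
   in the result only if no alternative matched exactly.  */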
1766 alternative_mask
1767 ira_setup_alts (rtx_insn *insn)
1769 int nop, nalt;
1770 bool curr_swapped;
1771 const char *p;
1772 int commutative = -1;
1774 extract_insn (insn);
1775 preprocess_constraints (insn);
1776 alternative_mask preferred = get_preferred_alternatives (insn);
1777 alternative_mask alts = 0;
1778 alternative_mask exact_alts = 0;
1779 /* Check that the hard reg set is big enough to hold all the
1780 alternatives. It is hard to imagine a situation in which this
1781 assertion could fail. */
1782 ira_assert (recog_data.n_alternatives
1783 <= (int) MAX (sizeof (HARD_REG_ELT_TYPE) * CHAR_BIT,
1784 FIRST_PSEUDO_REGISTER));
1785 for (nop = 0; nop < recog_data.n_operands; nop++)
1786 if (recog_data.constraints[nop][0] == '%')
1788 commutative = nop;
1789 break;
1791 for (curr_swapped = false;; curr_swapped = true)
1793 for (nalt = 0; nalt < recog_data.n_alternatives; nalt++)
1795 if (!TEST_BIT (preferred, nalt) || TEST_BIT (exact_alts, nalt))
1796 continue;
1798 const operand_alternative *op_alt
1799 = &recog_op_alt[nalt * recog_data.n_operands];
1800 int this_reject = 0;
1801 for (nop = 0; nop < recog_data.n_operands; nop++)
1803 int c, len;
1805 this_reject += op_alt[nop].reject;
1807 rtx op = recog_data.operand[nop];
1808 p = op_alt[nop].constraint;
1809 if (*p == 0 || *p == ',')
1810 continue;
1812 bool win_p = false;
1814 switch (c = *p, len = CONSTRAINT_LEN (c, p), c)
1816 case '#':
1817 case ',':
1818 c = '\0';
1819 /* FALLTHRU */
1820 case '\0':
1821 len = 0;
1822 break;
1824 case '%':
1825 /* The commutative modifier is handled above. */
1826 break;
1828 case '0': case '1': case '2': case '3': case '4':
1829 case '5': case '6': case '7': case '8': case '9':
1831 rtx other = recog_data.operand[c - '0'];
1832 if (MEM_P (other)
1833 ? rtx_equal_p (other, op)
1834 : REG_P (op) || SUBREG_P (op))
1835 goto op_success;
1836 win_p = true;
1838 break;
1840 case 'g':
1841 goto op_success;
1842 break;
1844 default:
1846 enum constraint_num cn = lookup_constraint (p);
1847 switch (get_constraint_type (cn))
1849 case CT_REGISTER:
1850 if (reg_class_for_constraint (cn) != NO_REGS)
1852 if (REG_P (op) || SUBREG_P (op))
1853 goto op_success;
1854 win_p = true;
1856 break;
1858 case CT_CONST_INT:
1859 if (CONST_INT_P (op)
1860 && (insn_const_int_ok_for_constraint
1861 (INTVAL (op), cn)))
1862 goto op_success;
1863 break;
1865 case CT_ADDRESS:
1866 goto op_success;
1868 case CT_MEMORY:
1869 case CT_SPECIAL_MEMORY:
1870 if (MEM_P (op))
1871 goto op_success;
1872 win_p = true;
1873 break;
1875 case CT_FIXED_FORM:
1876 if (constraint_satisfied_p (op, cn))
1877 goto op_success;
1878 break;
1880 break;
1883 while (p += len, c);
1884 if (!win_p)
1885 break;
1886 /* We can make the alternative match by spilling a register
1887 to memory or loading something into a register. Count a
1888 cost of one reload (the equivalent of the '?' constraint). */
1889 this_reject += 6;
1890 op_success:
1894 if (nop >= recog_data.n_operands)
1896 alts |= ALTERNATIVE_BIT (nalt);
1897 if (this_reject == 0)
1898 exact_alts |= ALTERNATIVE_BIT (nalt);
1901 if (commutative < 0)
1902 break;
1903 /* Swap back and forth to avoid permanently changing recog_data. */
1904 std::swap (recog_data.operand[commutative],
1905 recog_data.operand[commutative + 1]);
1906 if (curr_swapped)
1907 break;
1909 return exact_alts ? exact_alts : alts;
1912 /* Return the number of the output non-early-clobber operand which
1913 must in any case be the same as the operand with number OP_NUM (or
1914 a negative value if there is no such operand). ALTS is the mask
1915 of alternatives that we should consider. */
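/* Hypothetical example: if input operand OP_NUM has the constraint string
   "0" in every alternative left in ALTS, it is required to be the same as
   output operand 0 and the function returns 0.  A "g" constraint, a
   register constraint whose class is not likely to be spilled, or a
   constraint the operand already satisfies makes tying the operands
   pointless, so the search fails for that alternative set instead.  */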
1917 ira_get_dup_out_num (int op_num, alternative_mask alts)
1919 int curr_alt, c, original, dup;
1920 bool ignore_p, use_commut_op_p;
1921 const char *str;
1923 if (op_num < 0 || recog_data.n_alternatives == 0)
1924 return -1;
1925 /* We should find duplications only for input operands. */
1926 if (recog_data.operand_type[op_num] != OP_IN)
1927 return -1;
1928 str = recog_data.constraints[op_num];
1929 use_commut_op_p = false;
1930 for (;;)
1932 rtx op = recog_data.operand[op_num];
1934 for (curr_alt = 0, ignore_p = !TEST_BIT (alts, curr_alt),
1935 original = -1;;)
1937 c = *str;
1938 if (c == '\0')
1939 break;
1940 if (c == '#')
1941 ignore_p = true;
1942 else if (c == ',')
1944 curr_alt++;
1945 ignore_p = !TEST_BIT (alts, curr_alt);
1947 else if (! ignore_p)
1948 switch (c)
1950 case 'g':
1951 goto fail;
1952 default:
1954 enum constraint_num cn = lookup_constraint (str);
1955 enum reg_class cl = reg_class_for_constraint (cn);
1956 if (cl != NO_REGS
1957 && !targetm.class_likely_spilled_p (cl))
1958 goto fail;
1959 if (constraint_satisfied_p (op, cn))
1960 goto fail;
1961 break;
1964 case '0': case '1': case '2': case '3': case '4':
1965 case '5': case '6': case '7': case '8': case '9':
1966 if (original != -1 && original != c)
1967 goto fail;
1968 original = c;
1969 break;
1971 str += CONSTRAINT_LEN (c, str);
1973 if (original == -1)
1974 goto fail;
1975 dup = original - '0';
1976 if (recog_data.operand_type[dup] == OP_OUT)
1977 return dup;
1978 fail:
1979 if (use_commut_op_p)
1980 break;
1981 use_commut_op_p = true;
1982 if (recog_data.constraints[op_num][0] == '%')
1983 str = recog_data.constraints[op_num + 1];
1984 else if (op_num > 0 && recog_data.constraints[op_num - 1][0] == '%')
1985 str = recog_data.constraints[op_num - 1];
1986 else
1987 break;
1989 return -1;
1994 /* Search forward to see if the source register of a copy insn dies
1995 before either it or the destination register is modified, but don't
1996 scan past the end of the basic block. If so, we can replace the
1997 source with the destination and let the source die in the copy
1998 insn.
2000 This will reduce the number of registers live in that range and may
2001 enable the destination and the source coalescing, thus often saving
2002 one register in addition to a register-register copy. */
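/* Sketch of the rewrite (pseudo-register numbers are made up):

     before:                          after:
       r201 = r200                      r201 = r200   (r200 now dies here)
       ... = use of r200                ... = use of r201
       ... = last use of r200 (dies)    ... = use of r201

   Uses of the source between the copy and its death are replaced by the
   destination, and the REG_DEAD note for the source is moved up to the
   copy insn.  */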
2004 static void
2005 decrease_live_ranges_number (void)
2007 basic_block bb;
2008 rtx_insn *insn;
2009 rtx set, src, dest, dest_death, note;
2010 rtx_insn *p, *q;
2011 int sregno, dregno;
2013 if (! flag_expensive_optimizations)
2014 return;
2016 if (ira_dump_file)
2017 fprintf (ira_dump_file, "Starting decreasing number of live ranges...\n");
2019 FOR_EACH_BB_FN (bb, cfun)
2020 FOR_BB_INSNS (bb, insn)
2022 set = single_set (insn);
2023 if (! set)
2024 continue;
2025 src = SET_SRC (set);
2026 dest = SET_DEST (set);
2027 if (! REG_P (src) || ! REG_P (dest)
2028 || find_reg_note (insn, REG_DEAD, src))
2029 continue;
2030 sregno = REGNO (src);
2031 dregno = REGNO (dest);
2033 /* We don't want to mess with hard regs if register classes
2034 are small. */
2035 if (sregno == dregno
2036 || (targetm.small_register_classes_for_mode_p (GET_MODE (src))
2037 && (sregno < FIRST_PSEUDO_REGISTER
2038 || dregno < FIRST_PSEUDO_REGISTER))
2039 /* We don't see all updates to SP if they are in an
2040 auto-inc memory reference, so we must disallow this
2041 optimization on them. */
2042 || sregno == STACK_POINTER_REGNUM
2043 || dregno == STACK_POINTER_REGNUM)
2044 continue;
2046 dest_death = NULL_RTX;
2048 for (p = NEXT_INSN (insn); p; p = NEXT_INSN (p))
2050 if (! INSN_P (p))
2051 continue;
2052 if (BLOCK_FOR_INSN (p) != bb)
2053 break;
2055 if (reg_set_p (src, p) || reg_set_p (dest, p)
2056 /* If SRC is an asm-declared register, it must not be
2057 replaced in any asm. Unfortunately, the REG_EXPR
2058 tree for the asm variable may be absent in the SRC
2059 rtx, so we can't check the actual register
2060 declaration easily (the asm operand will have it,
2061 though). To avoid complicating the test for a rare
2062 case, we just don't perform register replacement
2063 for a hard reg mentioned in an asm. */
2064 || (sregno < FIRST_PSEUDO_REGISTER
2065 && asm_noperands (PATTERN (p)) >= 0
2066 && reg_overlap_mentioned_p (src, PATTERN (p)))
2067 /* Don't change hard registers used by a call. */
2068 || (CALL_P (p) && sregno < FIRST_PSEUDO_REGISTER
2069 && find_reg_fusage (p, USE, src))
2070 /* Don't change a USE of a register. */
2071 || (GET_CODE (PATTERN (p)) == USE
2072 && reg_overlap_mentioned_p (src, XEXP (PATTERN (p), 0))))
2073 break;
2075 /* See if all of SRC dies in P. This test is slightly
2076 more conservative than it needs to be. */
2077 if ((note = find_regno_note (p, REG_DEAD, sregno))
2078 && GET_MODE (XEXP (note, 0)) == GET_MODE (src))
2080 int failed = 0;
2082 /* We can do the optimization. Scan forward from INSN
2083 again, replacing regs as we go. Set FAILED if a
2084 replacement can't be done. In that case, we can't
2085 move the death note for SRC. This should be
2086 rare. */
2088 /* Set to stop at next insn. */
2089 for (q = next_real_insn (insn);
2090 q != next_real_insn (p);
2091 q = next_real_insn (q))
2093 if (reg_overlap_mentioned_p (src, PATTERN (q)))
2095 /* If SRC is a hard register, we might miss
2096 some overlapping registers with
2097 validate_replace_rtx, so we would have to
2098 undo it. We can't if DEST is present in
2099 the insn, so fail in that combination of
2100 cases. */
2101 if (sregno < FIRST_PSEUDO_REGISTER
2102 && reg_mentioned_p (dest, PATTERN (q)))
2103 failed = 1;
2105 /* Attempt to replace all uses. */
2106 else if (!validate_replace_rtx (src, dest, q))
2107 failed = 1;
2109 /* If this succeeded, but some part of the
2110 register is still present, undo the
2111 replacement. */
2112 else if (sregno < FIRST_PSEUDO_REGISTER
2113 && reg_overlap_mentioned_p (src, PATTERN (q)))
2115 validate_replace_rtx (dest, src, q);
2116 failed = 1;
2120 /* If DEST dies here, remove the death note and
2121 save it for later. Make sure ALL of DEST dies
2122 here; again, this is overly conservative. */
2123 if (! dest_death
2124 && (dest_death = find_regno_note (q, REG_DEAD, dregno)))
2126 if (GET_MODE (XEXP (dest_death, 0)) == GET_MODE (dest))
2127 remove_note (q, dest_death);
2128 else
2130 failed = 1;
2131 dest_death = 0;
2136 if (! failed)
2138 /* Move death note of SRC from P to INSN. */
2139 remove_note (p, note);
2140 XEXP (note, 1) = REG_NOTES (insn);
2141 REG_NOTES (insn) = note;
2144 /* DEST is also dead if INSN has a REG_UNUSED note for
2145 DEST. */
2146 if (! dest_death
2147 && (dest_death
2148 = find_regno_note (insn, REG_UNUSED, dregno)))
2150 PUT_REG_NOTE_KIND (dest_death, REG_DEAD);
2151 remove_note (insn, dest_death);
2154 /* Put death note of DEST on P if we saw it die. */
2155 if (dest_death)
2157 XEXP (dest_death, 1) = REG_NOTES (p);
2158 REG_NOTES (p) = dest_death;
2160 break;
2163 /* If SRC is a hard register which is set or killed in
2164 some other way, we can't do this optimization. */
2165 else if (sregno < FIRST_PSEUDO_REGISTER && dead_or_set_p (p, src))
2166 break;
2173 /* Return nonzero if REGNO is a particularly bad choice for reloading X. */
2174 static bool
2175 ira_bad_reload_regno_1 (int regno, rtx x)
2177 int x_regno, n, i;
2178 ira_allocno_t a;
2179 enum reg_class pref;
2181 /* We only deal with pseudo regs. */
2182 if (! x || GET_CODE (x) != REG)
2183 return false;
2185 x_regno = REGNO (x);
2186 if (x_regno < FIRST_PSEUDO_REGISTER)
2187 return false;
2189 /* If the pseudo prefers REGNO explicitly, then do not consider
2190 REGNO a bad spill choice. */
2191 pref = reg_preferred_class (x_regno);
2192 if (reg_class_size[pref] == 1)
2193 return !TEST_HARD_REG_BIT (reg_class_contents[pref], regno);
2195 /* If the pseudo conflicts with REGNO, then we consider REGNO a
2196 poor choice for a reload regno. */
2197 a = ira_regno_allocno_map[x_regno];
2198 n = ALLOCNO_NUM_OBJECTS (a);
2199 for (i = 0; i < n; i++)
2201 ira_object_t obj = ALLOCNO_OBJECT (a, i);
2202 if (TEST_HARD_REG_BIT (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), regno))
2203 return true;
2205 return false;
2208 /* Return nonzero if REGNO is a particularly bad choice for reloading
2209 IN or OUT. */
2210 bool
2211 ira_bad_reload_regno (int regno, rtx in, rtx out)
2213 return (ira_bad_reload_regno_1 (regno, in)
2214 || ira_bad_reload_regno_1 (regno, out));
2217 /* Add register clobbers from asm statements. */
2218 static void
2219 compute_regs_asm_clobbered (void)
2221 basic_block bb;
2223 FOR_EACH_BB_FN (bb, cfun)
2225 rtx_insn *insn;
2226 FOR_BB_INSNS_REVERSE (bb, insn)
2228 df_ref def;
2230 if (NONDEBUG_INSN_P (insn) && asm_noperands (PATTERN (insn)) >= 0)
2231 FOR_EACH_INSN_DEF (def, insn)
2233 unsigned int dregno = DF_REF_REGNO (def);
2234 if (HARD_REGISTER_NUM_P (dregno))
2235 add_to_hard_reg_set (&crtl->asm_clobbers,
2236 GET_MODE (DF_REF_REAL_REG (def)),
2237 dregno);
2244 /* Set up ELIMINABLE_REGSET, IRA_NO_ALLOC_REGS, and
2245 REGS_EVER_LIVE. */
2246 void
2247 ira_setup_eliminable_regset (void)
2249 int i;
2250 static const struct {const int from, to; } eliminables[] = ELIMINABLE_REGS;
2252 /* Setup is_leaf as frame_pointer_required may use it. This function
2253 is called by sched_init before ira if scheduling is enabled. */
2254 crtl->is_leaf = leaf_function_p ();
2256 /* FIXME: If EXIT_IGNORE_STACK is set, we will not save and restore
2257 sp for alloca. So we can't eliminate the frame pointer in that
2258 case. At some point, we should improve this by emitting the
2259 sp-adjusting insns for this case. */
2260 frame_pointer_needed
2261 = (! flag_omit_frame_pointer
2262 || (cfun->calls_alloca && EXIT_IGNORE_STACK)
2263 /* We need the frame pointer to catch stack overflow exceptions if
2264 the stack pointer is moving (as for the alloca case just above). */
2265 || (STACK_CHECK_MOVING_SP
2266 && flag_stack_check
2267 && flag_exceptions
2268 && cfun->can_throw_non_call_exceptions)
2269 || crtl->accesses_prior_frames
2270 || (SUPPORTS_STACK_ALIGNMENT && crtl->stack_realign_needed)
2271 || targetm.frame_pointer_required ());
2273 /* The chance that FRAME_POINTER_NEEDED is changed from inspecting
2274 RTL is very small. So if we use frame pointer for RA and RTL
2275 actually prevents this, we will spill pseudos assigned to the
2276 frame pointer in LRA. */
2278 if (frame_pointer_needed)
2279 df_set_regs_ever_live (HARD_FRAME_POINTER_REGNUM, true);
2281 ira_no_alloc_regs = no_unit_alloc_regs;
2282 CLEAR_HARD_REG_SET (eliminable_regset);
2284 compute_regs_asm_clobbered ();
2286 /* Build the regset of all eliminable registers and show we can't
2287 use those that we already know won't be eliminated. */
2288 for (i = 0; i < (int) ARRAY_SIZE (eliminables); i++)
2290 bool cannot_elim
2291 = (! targetm.can_eliminate (eliminables[i].from, eliminables[i].to)
2292 || (eliminables[i].to == STACK_POINTER_REGNUM && frame_pointer_needed));
2294 if (!TEST_HARD_REG_BIT (crtl->asm_clobbers, eliminables[i].from))
2296 SET_HARD_REG_BIT (eliminable_regset, eliminables[i].from);
2298 if (cannot_elim)
2299 SET_HARD_REG_BIT (ira_no_alloc_regs, eliminables[i].from);
2301 else if (cannot_elim)
2302 error ("%s cannot be used in %<asm%> here",
2303 reg_names[eliminables[i].from]);
2304 else
2305 df_set_regs_ever_live (eliminables[i].from, true);
2307 if (!HARD_FRAME_POINTER_IS_FRAME_POINTER)
2309 if (!TEST_HARD_REG_BIT (crtl->asm_clobbers, HARD_FRAME_POINTER_REGNUM))
2311 SET_HARD_REG_BIT (eliminable_regset, HARD_FRAME_POINTER_REGNUM);
2312 if (frame_pointer_needed)
2313 SET_HARD_REG_BIT (ira_no_alloc_regs, HARD_FRAME_POINTER_REGNUM);
2315 else if (frame_pointer_needed)
2316 error ("%s cannot be used in %<asm%> here",
2317 reg_names[HARD_FRAME_POINTER_REGNUM]);
2318 else
2319 df_set_regs_ever_live (HARD_FRAME_POINTER_REGNUM, true);
2325 /* Vector of substitutions of register numbers,
2326 used to map pseudo regs into hardware regs.
2327 This is set up as a result of register allocation.
2328 Element N is the hard reg assigned to pseudo reg N,
2329 or is -1 if no hard reg was assigned.
2330 If N is a hard reg number, element N is N. */
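/* For example (illustrative numbers only): reg_renumber[350] == 3 means
   pseudo 350 was assigned hard register 3, while reg_renumber[351] == -1
   means pseudo 351 got no hard register and will live in memory.  */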
2331 short *reg_renumber;
2333 /* Set up REG_RENUMBER and CALLER_SAVE_NEEDED (used by reload) from
2334 the allocation found by IRA. */
2335 static void
2336 setup_reg_renumber (void)
2338 int regno, hard_regno;
2339 ira_allocno_t a;
2340 ira_allocno_iterator ai;
2342 caller_save_needed = 0;
2343 FOR_EACH_ALLOCNO (a, ai)
2345 if (ira_use_lra_p && ALLOCNO_CAP_MEMBER (a) != NULL)
2346 continue;
2347 /* There are no caps at this point. */
2348 ira_assert (ALLOCNO_CAP_MEMBER (a) == NULL);
2349 if (! ALLOCNO_ASSIGNED_P (a))
2350 /* It can happen if A is not referenced but partially anticipated
2351 somewhere in a region. */
2352 ALLOCNO_ASSIGNED_P (a) = true;
2353 ira_free_allocno_updated_costs (a);
2354 hard_regno = ALLOCNO_HARD_REGNO (a);
2355 regno = ALLOCNO_REGNO (a);
2356 reg_renumber[regno] = (hard_regno < 0 ? -1 : hard_regno);
2357 if (hard_regno >= 0)
2359 int i, nwords;
2360 enum reg_class pclass;
2361 ira_object_t obj;
2363 pclass = ira_pressure_class_translate[REGNO_REG_CLASS (hard_regno)];
2364 nwords = ALLOCNO_NUM_OBJECTS (a);
2365 for (i = 0; i < nwords; i++)
2367 obj = ALLOCNO_OBJECT (a, i);
2368 OBJECT_TOTAL_CONFLICT_HARD_REGS (obj)
2369 |= ~reg_class_contents[pclass];
2371 if (ira_need_caller_save_p (a, hard_regno))
2373 ira_assert (!optimize || flag_caller_saves
2374 || (ALLOCNO_CALLS_CROSSED_NUM (a)
2375 == ALLOCNO_CHEAP_CALLS_CROSSED_NUM (a))
2376 || regno >= ira_reg_equiv_len
2377 || ira_equiv_no_lvalue_p (regno));
2378 caller_save_needed = 1;
2384 /* Set up allocno assignment flags for further allocation
2385 improvements. */
2386 static void
2387 setup_allocno_assignment_flags (void)
2389 int hard_regno;
2390 ira_allocno_t a;
2391 ira_allocno_iterator ai;
2393 FOR_EACH_ALLOCNO (a, ai)
2395 if (! ALLOCNO_ASSIGNED_P (a))
2396 /* It can happen if A is not referenced but partially anticipated
2397 somewhere in a region. */
2398 ira_free_allocno_updated_costs (a);
2399 hard_regno = ALLOCNO_HARD_REGNO (a);
2400 /* Don't assign hard registers to allocnos which are the destination
2401 of a store removed at the end of a loop. It makes no sense to keep
2402 the same value in different hard registers. It is also
2403 impossible to assign hard registers correctly to such
2404 allocnos because the cost info and the info about intersected
2405 calls are incorrect for them. */
2406 ALLOCNO_ASSIGNED_P (a) = (hard_regno >= 0
2407 || ALLOCNO_EMIT_DATA (a)->mem_optimized_dest_p
2408 || (ALLOCNO_MEMORY_COST (a)
2409 - ALLOCNO_CLASS_COST (a)) < 0);
2410 ira_assert
2411 (hard_regno < 0
2412 || ira_hard_reg_in_set_p (hard_regno, ALLOCNO_MODE (a),
2413 reg_class_contents[ALLOCNO_CLASS (a)]));
2417 /* Evaluate overall allocation cost and the costs for using hard
2418 registers and memory for allocnos. */
2419 static void
2420 calculate_allocation_cost (void)
2422 int hard_regno, cost;
2423 ira_allocno_t a;
2424 ira_allocno_iterator ai;
2426 ira_overall_cost = ira_reg_cost = ira_mem_cost = 0;
2427 FOR_EACH_ALLOCNO (a, ai)
2429 hard_regno = ALLOCNO_HARD_REGNO (a);
2430 ira_assert (hard_regno < 0
2431 || (ira_hard_reg_in_set_p
2432 (hard_regno, ALLOCNO_MODE (a),
2433 reg_class_contents[ALLOCNO_CLASS (a)])));
2434 if (hard_regno < 0)
2436 cost = ALLOCNO_MEMORY_COST (a);
2437 ira_mem_cost += cost;
2439 else if (ALLOCNO_HARD_REG_COSTS (a) != NULL)
2441 cost = (ALLOCNO_HARD_REG_COSTS (a)
2442 [ira_class_hard_reg_index
2443 [ALLOCNO_CLASS (a)][hard_regno]]);
2444 ira_reg_cost += cost;
2446 else
2448 cost = ALLOCNO_CLASS_COST (a);
2449 ira_reg_cost += cost;
2451 ira_overall_cost += cost;
2454 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
2456 fprintf (ira_dump_file,
2457 "+++Costs: overall %" PRId64
2458 ", reg %" PRId64
2459 ", mem %" PRId64
2460 ", ld %" PRId64
2461 ", st %" PRId64
2462 ", move %" PRId64,
2463 ira_overall_cost, ira_reg_cost, ira_mem_cost,
2464 ira_load_cost, ira_store_cost, ira_shuffle_cost);
2465 fprintf (ira_dump_file, "\n+++ move loops %d, new jumps %d\n",
2466 ira_move_loops_num, ira_additional_jumps_num);
2471 #ifdef ENABLE_IRA_CHECKING
2472 /* Check the correctness of the allocation. We need this because
2473 of the complicated code that transforms the multi-region internal
2474 representation into a one-region representation. */
2475 static void
2476 check_allocation (void)
2478 ira_allocno_t a;
2479 int hard_regno, nregs, conflict_nregs;
2480 ira_allocno_iterator ai;
2482 FOR_EACH_ALLOCNO (a, ai)
2484 int n = ALLOCNO_NUM_OBJECTS (a);
2485 int i;
2487 if (ALLOCNO_CAP_MEMBER (a) != NULL
2488 || (hard_regno = ALLOCNO_HARD_REGNO (a)) < 0)
2489 continue;
2490 nregs = hard_regno_nregs (hard_regno, ALLOCNO_MODE (a));
2491 if (nregs == 1)
2492 /* We allocated a single hard register. */
2493 n = 1;
2494 else if (n > 1)
2495 /* We allocated multiple hard registers, and we will test
2496 conflicts in a granularity of single hard regs. */
2497 nregs = 1;
2499 for (i = 0; i < n; i++)
2501 ira_object_t obj = ALLOCNO_OBJECT (a, i);
2502 ira_object_t conflict_obj;
2503 ira_object_conflict_iterator oci;
2504 int this_regno = hard_regno;
2505 if (n > 1)
2507 if (REG_WORDS_BIG_ENDIAN)
2508 this_regno += n - i - 1;
2509 else
2510 this_regno += i;
2512 FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
2514 ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
2515 int conflict_hard_regno = ALLOCNO_HARD_REGNO (conflict_a);
2516 if (conflict_hard_regno < 0)
2517 continue;
2519 conflict_nregs = hard_regno_nregs (conflict_hard_regno,
2520 ALLOCNO_MODE (conflict_a));
2522 if (ALLOCNO_NUM_OBJECTS (conflict_a) > 1
2523 && conflict_nregs == ALLOCNO_NUM_OBJECTS (conflict_a))
2525 if (REG_WORDS_BIG_ENDIAN)
2526 conflict_hard_regno += (ALLOCNO_NUM_OBJECTS (conflict_a)
2527 - OBJECT_SUBWORD (conflict_obj) - 1);
2528 else
2529 conflict_hard_regno += OBJECT_SUBWORD (conflict_obj);
2530 conflict_nregs = 1;
2533 if ((conflict_hard_regno <= this_regno
2534 && this_regno < conflict_hard_regno + conflict_nregs)
2535 || (this_regno <= conflict_hard_regno
2536 && conflict_hard_regno < this_regno + nregs))
2538 fprintf (stderr, "bad allocation for %d and %d\n",
2539 ALLOCNO_REGNO (a), ALLOCNO_REGNO (conflict_a));
2540 gcc_unreachable ();
2546 #endif
2548 /* Allocate REG_EQUIV_INIT. Set it up from IRA_REG_EQUIV, which should
2549 already be calculated. */
2550 static void
2551 setup_reg_equiv_init (void)
2553 int i;
2554 int max_regno = max_reg_num ();
2556 for (i = 0; i < max_regno; i++)
2557 reg_equiv_init (i) = ira_reg_equiv[i].init_insns;
2560 /* Update equiv regno from movement of FROM_REGNO to TO_REGNO. INSNS
2561 are insns which were generated for such movement. It is assumed
2562 that FROM_REGNO and TO_REGNO always have the same value at the
2563 point of any move containing such registers. This function is used
2564 to update equiv info for register shuffles on the region borders
2565 and for caller save/restore insns. */
2566 void
2567 ira_update_equiv_info_by_shuffle_insn (int to_regno, int from_regno, rtx_insn *insns)
2569 rtx_insn *insn;
2570 rtx x, note;
2572 if (! ira_reg_equiv[from_regno].defined_p
2573 && (! ira_reg_equiv[to_regno].defined_p
2574 || ((x = ira_reg_equiv[to_regno].memory) != NULL_RTX
2575 && ! MEM_READONLY_P (x))))
2576 return;
2577 insn = insns;
2578 if (NEXT_INSN (insn) != NULL_RTX)
2580 if (! ira_reg_equiv[to_regno].defined_p)
2582 ira_assert (ira_reg_equiv[to_regno].init_insns == NULL_RTX);
2583 return;
2585 ira_reg_equiv[to_regno].defined_p = false;
2586 ira_reg_equiv[to_regno].memory
2587 = ira_reg_equiv[to_regno].constant
2588 = ira_reg_equiv[to_regno].invariant
2589 = ira_reg_equiv[to_regno].init_insns = NULL;
2590 if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
2591 fprintf (ira_dump_file,
2592 " Invalidating equiv info for reg %d\n", to_regno);
2593 return;
2595 /* It is possible that FROM_REGNO still has no equivalence because
2596 in shuffles to_regno<-from_regno and from_regno<-to_regno the 2nd
2597 insn was not processed yet. */
2598 if (ira_reg_equiv[from_regno].defined_p)
2600 ira_reg_equiv[to_regno].defined_p = true;
2601 if ((x = ira_reg_equiv[from_regno].memory) != NULL_RTX)
2603 ira_assert (ira_reg_equiv[from_regno].invariant == NULL_RTX
2604 && ira_reg_equiv[from_regno].constant == NULL_RTX);
2605 ira_assert (ira_reg_equiv[to_regno].memory == NULL_RTX
2606 || rtx_equal_p (ira_reg_equiv[to_regno].memory, x));
2607 ira_reg_equiv[to_regno].memory = x;
2608 if (! MEM_READONLY_P (x))
2609 /* We don't add the insn to the init-insn list because a memory
2610 equivalence only records which memory is better to use
2611 when the pseudo is spilled. */
2612 return;
2614 else if ((x = ira_reg_equiv[from_regno].constant) != NULL_RTX)
2616 ira_assert (ira_reg_equiv[from_regno].invariant == NULL_RTX);
2617 ira_assert (ira_reg_equiv[to_regno].constant == NULL_RTX
2618 || rtx_equal_p (ira_reg_equiv[to_regno].constant, x));
2619 ira_reg_equiv[to_regno].constant = x;
2621 else
2623 x = ira_reg_equiv[from_regno].invariant;
2624 ira_assert (x != NULL_RTX);
2625 ira_assert (ira_reg_equiv[to_regno].invariant == NULL_RTX
2626 || rtx_equal_p (ira_reg_equiv[to_regno].invariant, x));
2627 ira_reg_equiv[to_regno].invariant = x;
2629 if (find_reg_note (insn, REG_EQUIV, x) == NULL_RTX)
2631 note = set_unique_reg_note (insn, REG_EQUIV, copy_rtx (x));
2632 gcc_assert (note != NULL_RTX);
2633 if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
2635 fprintf (ira_dump_file,
2636 " Adding equiv note to insn %u for reg %d ",
2637 INSN_UID (insn), to_regno);
2638 dump_value_slim (ira_dump_file, x, 1);
2639 fprintf (ira_dump_file, "\n");
2643 ira_reg_equiv[to_regno].init_insns
2644 = gen_rtx_INSN_LIST (VOIDmode, insn,
2645 ira_reg_equiv[to_regno].init_insns);
2646 if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
2647 fprintf (ira_dump_file,
2648 " Adding equiv init move insn %u to reg %d\n",
2649 INSN_UID (insn), to_regno);
2652 /* Fix values of array REG_EQUIV_INIT after live range splitting done
2653 by IRA. */
2654 static void
2655 fix_reg_equiv_init (void)
2657 int max_regno = max_reg_num ();
2658 int i, new_regno, max;
2659 rtx set;
2660 rtx_insn_list *x, *next, *prev;
2661 rtx_insn *insn;
2663 if (max_regno_before_ira < max_regno)
2665 max = vec_safe_length (reg_equivs);
2666 grow_reg_equivs ();
2667 for (i = FIRST_PSEUDO_REGISTER; i < max; i++)
2668 for (prev = NULL, x = reg_equiv_init (i);
2669 x != NULL_RTX;
2670 x = next)
2672 next = x->next ();
2673 insn = x->insn ();
2674 set = single_set (insn);
2675 ira_assert (set != NULL_RTX
2676 && (REG_P (SET_DEST (set)) || REG_P (SET_SRC (set))));
2677 if (REG_P (SET_DEST (set))
2678 && ((int) REGNO (SET_DEST (set)) == i
2679 || (int) ORIGINAL_REGNO (SET_DEST (set)) == i))
2680 new_regno = REGNO (SET_DEST (set));
2681 else if (REG_P (SET_SRC (set))
2682 && ((int) REGNO (SET_SRC (set)) == i
2683 || (int) ORIGINAL_REGNO (SET_SRC (set)) == i))
2684 new_regno = REGNO (SET_SRC (set));
2685 else
2686 gcc_unreachable ();
2687 if (new_regno == i)
2688 prev = x;
2689 else
2691 /* Remove the wrong list element. */
2692 if (prev == NULL_RTX)
2693 reg_equiv_init (i) = next;
2694 else
2695 XEXP (prev, 1) = next;
2696 XEXP (x, 1) = reg_equiv_init (new_regno);
2697 reg_equiv_init (new_regno) = x;
2703 #ifdef ENABLE_IRA_CHECKING
2704 /* Print redundant memory-memory copies. */
2705 static void
2706 print_redundant_copies (void)
2708 int hard_regno;
2709 ira_allocno_t a;
2710 ira_copy_t cp, next_cp;
2711 ira_allocno_iterator ai;
2713 FOR_EACH_ALLOCNO (a, ai)
2715 if (ALLOCNO_CAP_MEMBER (a) != NULL)
2716 /* It is a cap. */
2717 continue;
2718 hard_regno = ALLOCNO_HARD_REGNO (a);
2719 if (hard_regno >= 0)
2720 continue;
2721 for (cp = ALLOCNO_COPIES (a); cp != NULL; cp = next_cp)
2722 if (cp->first == a)
2723 next_cp = cp->next_first_allocno_copy;
2724 else
2726 next_cp = cp->next_second_allocno_copy;
2727 if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL
2728 && cp->insn != NULL_RTX
2729 && ALLOCNO_HARD_REGNO (cp->first) == hard_regno)
2730 fprintf (ira_dump_file,
2731 " Redundant move from %d(freq %d):%d\n",
2732 INSN_UID (cp->insn), cp->freq, hard_regno);
2736 #endif
2738 /* Setup preferred and alternative classes for new pseudo-registers
2739 created by IRA starting with START. */
2740 static void
2741 setup_preferred_alternate_classes_for_new_pseudos (int start)
2743 int i, old_regno;
2744 int max_regno = max_reg_num ();
2746 for (i = start; i < max_regno; i++)
2748 old_regno = ORIGINAL_REGNO (regno_reg_rtx[i]);
2749 ira_assert (i != old_regno);
2750 setup_reg_classes (i, reg_preferred_class (old_regno),
2751 reg_alternate_class (old_regno),
2752 reg_allocno_class (old_regno));
2753 if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
2754 fprintf (ira_dump_file,
2755 " New r%d: setting preferred %s, alternative %s\n",
2756 i, reg_class_names[reg_preferred_class (old_regno)],
2757 reg_class_names[reg_alternate_class (old_regno)]);
2762 /* The number of entries allocated in reg_info. */
2763 static int allocated_reg_info_size;
2765 /* Regional allocation can create new pseudo-registers. This function
2766 expands some arrays for pseudo-registers. */
2767 static void
2768 expand_reg_info (void)
2770 int i;
2771 int size = max_reg_num ();
2773 resize_reg_info ();
2774 for (i = allocated_reg_info_size; i < size; i++)
2775 setup_reg_classes (i, GENERAL_REGS, ALL_REGS, GENERAL_REGS);
2776 setup_preferred_alternate_classes_for_new_pseudos (allocated_reg_info_size);
2777 allocated_reg_info_size = size;
2780 /* Return TRUE if register pressure in the function is too high.
2781 It is used to decide when stack slot sharing is worth doing. */
2782 static bool
2783 too_high_register_pressure_p (void)
2785 int i;
2786 enum reg_class pclass;
2788 for (i = 0; i < ira_pressure_classes_num; i++)
2790 pclass = ira_pressure_classes[i];
2791 if (ira_loop_tree_root->reg_pressure[pclass] > 10000)
2792 return true;
2794 return false;
2799 /* Indicate that hard register number FROM was eliminated and replaced with
2800 an offset from hard register number TO. The status of hard registers live
2801 at the start of a basic block is updated by replacing a use of FROM with
2802 a use of TO. */
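/* For instance, when the frame pointer is eliminated in favour of the
   stack pointer this is called roughly as

     mark_elimination (FRAME_POINTER_REGNUM, STACK_POINTER_REGNUM);

   after which every basic block whose DF live-in sets mentioned FROM
   mention TO instead.  */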
2804 void
2805 mark_elimination (int from, int to)
2807 basic_block bb;
2808 bitmap r;
2810 FOR_EACH_BB_FN (bb, cfun)
2812 r = DF_LR_IN (bb);
2813 if (bitmap_bit_p (r, from))
2815 bitmap_clear_bit (r, from);
2816 bitmap_set_bit (r, to);
2818 if (! df_live)
2819 continue;
2820 r = DF_LIVE_IN (bb);
2821 if (bitmap_bit_p (r, from))
2823 bitmap_clear_bit (r, from);
2824 bitmap_set_bit (r, to);
2831 /* The length of the following array. */
2832 int ira_reg_equiv_len;
2834 /* Info about equiv. info for each register. */
2835 struct ira_reg_equiv_s *ira_reg_equiv;
2837 /* Expand ira_reg_equiv if necessary. */
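/* For example, if max_reg_num () is 100 and the array is currently
   shorter than that, it is reallocated to 100 * 3 / 2 + 1 == 151 entries
   and the newly added tail is zeroed, so a burst of new pseudos does not
   force a reallocation on every call.  */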
2838 void
2839 ira_expand_reg_equiv (void)
2841 int old = ira_reg_equiv_len;
2843 if (ira_reg_equiv_len > max_reg_num ())
2844 return;
2845 ira_reg_equiv_len = max_reg_num () * 3 / 2 + 1;
2846 ira_reg_equiv
2847 = (struct ira_reg_equiv_s *) xrealloc (ira_reg_equiv,
2848 ira_reg_equiv_len
2849 * sizeof (struct ira_reg_equiv_s));
2850 gcc_assert (old < ira_reg_equiv_len);
2851 memset (ira_reg_equiv + old, 0,
2852 sizeof (struct ira_reg_equiv_s) * (ira_reg_equiv_len - old));
2855 static void
2856 init_reg_equiv (void)
2858 ira_reg_equiv_len = 0;
2859 ira_reg_equiv = NULL;
2860 ira_expand_reg_equiv ();
2863 static void
2864 finish_reg_equiv (void)
2866 free (ira_reg_equiv);
2871 struct equivalence
2873 /* Set when a REG_EQUIV note is found or created. Use to
2874 keep track of what memory accesses might be created later,
2875 e.g. by reload. */
2876 rtx replacement;
2877 rtx *src_p;
2879 /* The list of each instruction which initializes this register.
2881 NULL indicates we know nothing about this register's equivalence
2882 properties.
2884 An INSN_LIST with a NULL insn indicates this pseudo is already
2885 known to not have a valid equivalence. */
2886 rtx_insn_list *init_insns;
2888 /* Loop depth is used to recognize equivalences which appear
2889 to be present within the same loop (or in an inner loop). */
2890 short loop_depth;
2891 /* Nonzero if this had a preexisting REG_EQUIV note. */
2892 unsigned char is_arg_equivalence : 1;
2893 /* Set when an attempt should be made to replace a register
2894 with the associated src_p entry. */
2895 unsigned char replace : 1;
2896 /* Set if this register has no known equivalence. */
2897 unsigned char no_equiv : 1;
2898 /* Set if this register is mentioned in a paradoxical subreg. */
2899 unsigned char pdx_subregs : 1;
2902 /* reg_equiv[N] (where N is a pseudo reg number) is the equivalence
2903 structure for that register. */
2904 static struct equivalence *reg_equiv;
2906 /* Used for communication between the following two functions. */
2907 struct equiv_mem_data
2909 /* A MEM that we wish to ensure remains unchanged. */
2910 rtx equiv_mem;
2912 /* Set true if EQUIV_MEM is modified. */
2913 bool equiv_mem_modified;
2916 /* If EQUIV_MEM is modified by modifying DEST, indicate that it is modified.
2917 Called via note_stores. */
2918 static void
2919 validate_equiv_mem_from_store (rtx dest, const_rtx set ATTRIBUTE_UNUSED,
2920 void *data)
2922 struct equiv_mem_data *info = (struct equiv_mem_data *) data;
2924 if ((REG_P (dest)
2925 && reg_overlap_mentioned_p (dest, info->equiv_mem))
2926 || (MEM_P (dest)
2927 && anti_dependence (info->equiv_mem, dest)))
2928 info->equiv_mem_modified = true;
2931 enum valid_equiv { valid_none, valid_combine, valid_reload };
2933 /* Verify that no store between START and the death of REG invalidates
2934 MEMREF. MEMREF is invalidated by modifying a register used in MEMREF,
2935 by storing into an overlapping memory location, or with a non-const
2936 CALL_INSN.
2938 Return VALID_RELOAD if MEMREF remains valid for both reload and
2939 combine_and_move insns, VALID_COMBINE if only valid for
2940 combine_and_move_insns, and VALID_NONE otherwise. */
2941 static enum valid_equiv
2942 validate_equiv_mem (rtx_insn *start, rtx reg, rtx memref)
2944 rtx_insn *insn;
2945 rtx note;
2946 struct equiv_mem_data info = { memref, false };
2947 enum valid_equiv ret = valid_reload;
2949 /* If the memory reference has side effects or is volatile, it isn't a
2950 valid equivalence. */
2951 if (side_effects_p (memref))
2952 return valid_none;
2954 for (insn = start; insn; insn = NEXT_INSN (insn))
2956 if (!INSN_P (insn))
2957 continue;
2959 if (find_reg_note (insn, REG_DEAD, reg))
2960 return ret;
2962 if (CALL_P (insn))
2964 /* We can combine a reg def from one insn into a reg use in
2965 another over a call if the memory is readonly or the call
2966 const/pure. However, we can't set reg_equiv notes up for
2967 reload over any call. The problem is the equivalent form
2968 may reference a pseudo which gets assigned a call
2969 clobbered hard reg. When we later replace REG with its
2970 equivalent form, the value in the call-clobbered reg has
2971 been changed and all hell breaks loose. */
2972 ret = valid_combine;
2973 if (!MEM_READONLY_P (memref)
2974 && !RTL_CONST_OR_PURE_CALL_P (insn))
2975 return valid_none;
2978 note_stores (insn, validate_equiv_mem_from_store, &info);
2979 if (info.equiv_mem_modified)
2980 return valid_none;
2982 /* If a register mentioned in MEMREF is modified via an
2983 auto-increment, we lose the equivalence. Do the same if one
2984 dies; although we could extend the life, it doesn't seem worth
2985 the trouble. */
2987 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
2988 if ((REG_NOTE_KIND (note) == REG_INC
2989 || REG_NOTE_KIND (note) == REG_DEAD)
2990 && REG_P (XEXP (note, 0))
2991 && reg_overlap_mentioned_p (XEXP (note, 0), memref))
2992 return valid_none;
2995 return valid_none;
2998 /* Returns zero if X is known to be invariant. */
2999 static int
3000 equiv_init_varies_p (rtx x)
3002 RTX_CODE code = GET_CODE (x);
3003 int i;
3004 const char *fmt;
3006 switch (code)
3008 case MEM:
3009 return !MEM_READONLY_P (x) || equiv_init_varies_p (XEXP (x, 0));
3011 case CONST:
3012 CASE_CONST_ANY:
3013 case SYMBOL_REF:
3014 case LABEL_REF:
3015 return 0;
3017 case REG:
3018 return reg_equiv[REGNO (x)].replace == 0 && rtx_varies_p (x, 0);
3020 case ASM_OPERANDS:
3021 if (MEM_VOLATILE_P (x))
3022 return 1;
3024 /* Fall through. */
3026 default:
3027 break;
3030 fmt = GET_RTX_FORMAT (code);
3031 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3032 if (fmt[i] == 'e')
3034 if (equiv_init_varies_p (XEXP (x, i)))
3035 return 1;
3037 else if (fmt[i] == 'E')
3039 int j;
3040 for (j = 0; j < XVECLEN (x, i); j++)
3041 if (equiv_init_varies_p (XVECEXP (x, i, j)))
3042 return 1;
3045 return 0;
3048 /* Returns nonzero if X (used to initialize register REGNO) is movable.
3049 X is only movable if the registers it uses have equivalent initializations
3050 which appear to be within the same loop (or in an inner loop) and movable
3051 or if they are not candidates for local_alloc and don't vary. */
3052 static int
3053 equiv_init_movable_p (rtx x, int regno)
3055 int i, j;
3056 const char *fmt;
3057 enum rtx_code code = GET_CODE (x);
3059 switch (code)
3061 case SET:
3062 return equiv_init_movable_p (SET_SRC (x), regno);
3064 case CC0:
3065 case CLOBBER:
3066 return 0;
3068 case PRE_INC:
3069 case PRE_DEC:
3070 case POST_INC:
3071 case POST_DEC:
3072 case PRE_MODIFY:
3073 case POST_MODIFY:
3074 return 0;
3076 case REG:
3077 return ((reg_equiv[REGNO (x)].loop_depth >= reg_equiv[regno].loop_depth
3078 && reg_equiv[REGNO (x)].replace)
3079 || (REG_BASIC_BLOCK (REGNO (x)) < NUM_FIXED_BLOCKS
3080 && ! rtx_varies_p (x, 0)));
3082 case UNSPEC_VOLATILE:
3083 return 0;
3085 case ASM_OPERANDS:
3086 if (MEM_VOLATILE_P (x))
3087 return 0;
3089 /* Fall through. */
3091 default:
3092 break;
3095 fmt = GET_RTX_FORMAT (code);
3096 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3097 switch (fmt[i])
3099 case 'e':
3100 if (! equiv_init_movable_p (XEXP (x, i), regno))
3101 return 0;
3102 break;
3103 case 'E':
3104 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
3105 if (! equiv_init_movable_p (XVECEXP (x, i, j), regno))
3106 return 0;
3107 break;
3110 return 1;
3113 static bool memref_referenced_p (rtx memref, rtx x, bool read_p);
3115 /* Auxiliary function for memref_referenced_p. Process setting X for
3116 MEMREF store. */
3117 static bool
3118 process_set_for_memref_referenced_p (rtx memref, rtx x)
3120 /* If we are setting a MEM, it doesn't count (its address does), but any
3121 other SET_DEST that has a MEM in it is referencing the MEM. */
3122 if (MEM_P (x))
3124 if (memref_referenced_p (memref, XEXP (x, 0), true))
3125 return true;
3127 else if (memref_referenced_p (memref, x, false))
3128 return true;
3130 return false;
3133 /* TRUE if X references a memory location (as a read if READ_P) that
3134 would be affected by a store to MEMREF. */
3135 static bool
3136 memref_referenced_p (rtx memref, rtx x, bool read_p)
3138 int i, j;
3139 const char *fmt;
3140 enum rtx_code code = GET_CODE (x);
3142 switch (code)
3144 case CONST:
3145 case LABEL_REF:
3146 case SYMBOL_REF:
3147 CASE_CONST_ANY:
3148 case PC:
3149 case CC0:
3150 case HIGH:
3151 case LO_SUM:
3152 return false;
3154 case REG:
3155 return (reg_equiv[REGNO (x)].replacement
3156 && memref_referenced_p (memref,
3157 reg_equiv[REGNO (x)].replacement, read_p));
3159 case MEM:
3160 /* Memory X might have another effective type than MEMREF. */
3161 if (read_p || true_dependence (memref, VOIDmode, x))
3162 return true;
3163 break;
3165 case SET:
3166 if (process_set_for_memref_referenced_p (memref, SET_DEST (x)))
3167 return true;
3169 return memref_referenced_p (memref, SET_SRC (x), true);
3171 case CLOBBER:
3172 if (process_set_for_memref_referenced_p (memref, XEXP (x, 0)))
3173 return true;
3175 return false;
3177 case PRE_DEC:
3178 case POST_DEC:
3179 case PRE_INC:
3180 case POST_INC:
3181 if (process_set_for_memref_referenced_p (memref, XEXP (x, 0)))
3182 return true;
3184 return memref_referenced_p (memref, XEXP (x, 0), true);
3186 case POST_MODIFY:
3187 case PRE_MODIFY:
3188 /* op0 = op0 + op1 */
3189 if (process_set_for_memref_referenced_p (memref, XEXP (x, 0)))
3190 return true;
3192 if (memref_referenced_p (memref, XEXP (x, 0), true))
3193 return true;
3195 return memref_referenced_p (memref, XEXP (x, 1), true);
3197 default:
3198 break;
3201 fmt = GET_RTX_FORMAT (code);
3202 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3203 switch (fmt[i])
3205 case 'e':
3206 if (memref_referenced_p (memref, XEXP (x, i), read_p))
3207 return true;
3208 break;
3209 case 'E':
3210 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
3211 if (memref_referenced_p (memref, XVECEXP (x, i, j), read_p))
3212 return true;
3213 break;
3216 return false;
3219 /* TRUE if some insn in the range (START, END] references a memory location
3220 that would be affected by a store to MEMREF.
3222 Callers should not call this routine if START is after END in the
3223 RTL chain. */
3225 static int
3226 memref_used_between_p (rtx memref, rtx_insn *start, rtx_insn *end)
3228 rtx_insn *insn;
3230 for (insn = NEXT_INSN (start);
3231 insn && insn != NEXT_INSN (end);
3232 insn = NEXT_INSN (insn))
3234 if (!NONDEBUG_INSN_P (insn))
3235 continue;
3237 if (memref_referenced_p (memref, PATTERN (insn), false))
3238 return 1;
3240 /* Nonconst functions may access memory. */
3241 if (CALL_P (insn) && (! RTL_CONST_CALL_P (insn)))
3242 return 1;
3245 gcc_assert (insn == NEXT_INSN (end));
3246 return 0;
3249 /* Mark REG as having no known equivalence.
3250 Some instructions might have been processed before and furnished
3251 with REG_EQUIV notes for this register; these notes will have to be
3252 removed.
3253 STORE is the piece of RTL that does the non-constant / conflicting
3254 assignment - a SET, CLOBBER or REG_INC note. It is currently not used,
3255 but needs to be there because this function is called from note_stores. */
3256 static void
3257 no_equiv (rtx reg, const_rtx store ATTRIBUTE_UNUSED,
3258 void *data ATTRIBUTE_UNUSED)
3260 int regno;
3261 rtx_insn_list *list;
3263 if (!REG_P (reg))
3264 return;
3265 regno = REGNO (reg);
3266 reg_equiv[regno].no_equiv = 1;
3267 list = reg_equiv[regno].init_insns;
3268 if (list && list->insn () == NULL)
3269 return;
3270 reg_equiv[regno].init_insns = gen_rtx_INSN_LIST (VOIDmode, NULL_RTX, NULL);
3271 reg_equiv[regno].replacement = NULL_RTX;
3272 /* This doesn't matter for equivalences made for argument registers, we
3273 should keep their initialization insns. */
3274 if (reg_equiv[regno].is_arg_equivalence)
3275 return;
3276 ira_reg_equiv[regno].defined_p = false;
3277 ira_reg_equiv[regno].init_insns = NULL;
3278 for (; list; list = list->next ())
3280 rtx_insn *insn = list->insn ();
3281 remove_note (insn, find_reg_note (insn, REG_EQUIV, NULL_RTX));
3285 /* Check whether the SUBREG is a paradoxical subreg and set the result
3286 in PDX_SUBREGS. */
3288 static void
3289 set_paradoxical_subreg (rtx_insn *insn)
3291 subrtx_iterator::array_type array;
3292 FOR_EACH_SUBRTX (iter, array, PATTERN (insn), NONCONST)
3294 const_rtx subreg = *iter;
3295 if (GET_CODE (subreg) == SUBREG)
3297 const_rtx reg = SUBREG_REG (subreg);
3298 if (REG_P (reg) && paradoxical_subreg_p (subreg))
3299 reg_equiv[REGNO (reg)].pdx_subregs = true;
3304 /* In DEBUG_INSN location adjust REGs from CLEARED_REGS bitmap to the
3305 equivalent replacement. */
3307 static rtx
3308 adjust_cleared_regs (rtx loc, const_rtx old_rtx ATTRIBUTE_UNUSED, void *data)
3310 if (REG_P (loc))
3312 bitmap cleared_regs = (bitmap) data;
3313 if (bitmap_bit_p (cleared_regs, REGNO (loc)))
3314 return simplify_replace_fn_rtx (copy_rtx (*reg_equiv[REGNO (loc)].src_p),
3315 NULL_RTX, adjust_cleared_regs, data);
3317 return NULL_RTX;
3320 /* Given register REGNO is set only once, return true if the defining
3321 insn dominates all uses. */
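/* Illustration: if pseudo 300 is defined only in block B2 and used in B2
   and B4, this returns true only when B2 dominates B4 and, for the use
   inside B2 itself, the definition's LUID precedes the use's LUID
   (i.e. the def textually precedes the use in the block).  */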
3323 static bool
3324 def_dominates_uses (int regno)
3326 df_ref def = DF_REG_DEF_CHAIN (regno);
3328 struct df_insn_info *def_info = DF_REF_INSN_INFO (def);
3329 /* If this is an artificial def (eh handler regs, hard frame pointer
3330 for non-local goto, regs defined on function entry) then def_info
3331 is NULL and the reg is always live before any use. We might
3332 reasonably return true in that case, but since the only call
3333 of this function is currently here in ira.c when we are looking
3334 at a defining insn we can't have an artificial def as that would
3335 bump DF_REG_DEF_COUNT. */
3336 gcc_assert (DF_REG_DEF_COUNT (regno) == 1 && def_info != NULL);
3338 rtx_insn *def_insn = DF_REF_INSN (def);
3339 basic_block def_bb = BLOCK_FOR_INSN (def_insn);
3341 for (df_ref use = DF_REG_USE_CHAIN (regno);
3342 use;
3343 use = DF_REF_NEXT_REG (use))
3345 struct df_insn_info *use_info = DF_REF_INSN_INFO (use);
3346 /* Only check real uses, not artificial ones. */
3347 if (use_info)
3349 rtx_insn *use_insn = DF_REF_INSN (use);
3350 if (!DEBUG_INSN_P (use_insn))
3352 basic_block use_bb = BLOCK_FOR_INSN (use_insn);
3353 if (use_bb != def_bb
3354 ? !dominated_by_p (CDI_DOMINATORS, use_bb, def_bb)
3355 : DF_INSN_INFO_LUID (use_info) < DF_INSN_INFO_LUID (def_info))
3356 return false;
3360 return true;
3363 /* Scan the instructions before update_equiv_regs. Record which registers
3364 are referenced as paradoxical subregs. Also check for cases in which
3365 the current function needs to save a register that one of its call
3366 instructions clobbers.
3368 These things are logically unrelated, but it's more efficient to do
3369 them together. */
3371 static void
3372 update_equiv_regs_prescan (void)
3374 basic_block bb;
3375 rtx_insn *insn;
3376 function_abi_aggregator callee_abis;
3378 FOR_EACH_BB_FN (bb, cfun)
3379 FOR_BB_INSNS (bb, insn)
3380 if (NONDEBUG_INSN_P (insn))
3382 set_paradoxical_subreg (insn);
3383 if (CALL_P (insn))
3384 callee_abis.note_callee_abi (insn_callee_abi (insn));
3387 HARD_REG_SET extra_caller_saves = callee_abis.caller_save_regs (*crtl->abi);
3388 if (!hard_reg_set_empty_p (extra_caller_saves))
3389 for (unsigned int regno = 0; regno < FIRST_PSEUDO_REGISTER; ++regno)
3390 if (TEST_HARD_REG_BIT (extra_caller_saves, regno))
3391 df_set_regs_ever_live (regno, true);
3394 /* Find registers that are equivalent to a single value throughout the
3395 compilation (either because they can be referenced in memory or are
3396 set once from a single constant). Lower their priority for a
3397 register.
3399 If such a register is only referenced once, try substituting its
3400 value into the using insn. If it succeeds, we can eliminate the
3401 register completely.
3403 Initialize init_insns in ira_reg_equiv array. */
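/* A typical equivalence recorded here (illustrative RTL): a pseudo set
   exactly once by

     (set (reg:SI 123) (const_int 42))

   gets a REG_EQUIV note for (const_int 42), so later passes can
   rematerialize the constant instead of keeping pseudo 123 in a hard
   register or spilling it.  */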
3404 static void
3405 update_equiv_regs (void)
3407 rtx_insn *insn;
3408 basic_block bb;
3410 /* Scan the insns and find which registers have equivalences. Do this
3411 in a separate scan of the insns because (due to -fcse-follow-jumps)
3412 a register can be set below its use. */
3413 bitmap setjmp_crosses = regstat_get_setjmp_crosses ();
3414 FOR_EACH_BB_FN (bb, cfun)
3416 int loop_depth = bb_loop_depth (bb);
3418 for (insn = BB_HEAD (bb);
3419 insn != NEXT_INSN (BB_END (bb));
3420 insn = NEXT_INSN (insn))
3422 rtx note;
3423 rtx set;
3424 rtx dest, src;
3425 int regno;
3427 if (! INSN_P (insn))
3428 continue;
3430 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
3431 if (REG_NOTE_KIND (note) == REG_INC)
3432 no_equiv (XEXP (note, 0), note, NULL);
3434 set = single_set (insn);
3436 /* If this insn contains more (or less) than a single SET,
3437 only mark all destinations as having no known equivalence. */
3438 if (set == NULL_RTX
3439 || side_effects_p (SET_SRC (set)))
3441 note_pattern_stores (PATTERN (insn), no_equiv, NULL);
3442 continue;
3444 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3446 int i;
3448 for (i = XVECLEN (PATTERN (insn), 0) - 1; i >= 0; i--)
3450 rtx part = XVECEXP (PATTERN (insn), 0, i);
3451 if (part != set)
3452 note_pattern_stores (part, no_equiv, NULL);
3456 dest = SET_DEST (set);
3457 src = SET_SRC (set);
3459 /* See if this is setting up the equivalence between an argument
3460 register and its stack slot. */
3461 note = find_reg_note (insn, REG_EQUIV, NULL_RTX);
3462 if (note)
3464 gcc_assert (REG_P (dest));
3465 regno = REGNO (dest);
3467 /* Note that we don't want to clear init_insns in
3468 ira_reg_equiv even if there are multiple sets of this
3469 register. */
3470 reg_equiv[regno].is_arg_equivalence = 1;
3472 /* The insn result can have an equivalent memory location even
3473 though the equivalence is not set up by the insn itself. We
3474 add this insn to the init insns as a marker, for now, that
3475 regno has an equivalence. The insn will be removed from the
3476 init-insn list later. */
3477 if (rtx_equal_p (src, XEXP (note, 0)) || MEM_P (XEXP (note, 0)))
3478 ira_reg_equiv[regno].init_insns
3479 = gen_rtx_INSN_LIST (VOIDmode, insn,
3480 ira_reg_equiv[regno].init_insns);
3482 /* Continue normally in case this is a candidate for
3483 replacements. */
3486 if (!optimize)
3487 continue;
3489 /* We only handle the case of a pseudo register being set
3490 once, or always to the same value. */
3491 /* ??? The mn10200 port breaks if we add equivalences for
3492 values that need an ADDRESS_REGS register and set them equivalent
3493 to a MEM of a pseudo. The actual problem is in the over-conservative
3494 handling of INPADDR_ADDRESS / INPUT_ADDRESS / INPUT triples in
3495 calculate_needs, but we traditionally work around this problem
3496 here by rejecting equivalences when the destination is in a register
3497 that's likely spilled. This is fragile, of course, since the
3498 preferred class of a pseudo depends on all instructions that set
3499 or use it. */
3501 if (!REG_P (dest)
3502 || (regno = REGNO (dest)) < FIRST_PSEUDO_REGISTER
3503 || (reg_equiv[regno].init_insns
3504 && reg_equiv[regno].init_insns->insn () == NULL)
3505 || (targetm.class_likely_spilled_p (reg_preferred_class (regno))
3506 && MEM_P (src) && ! reg_equiv[regno].is_arg_equivalence))
3508 /* This might be setting a SUBREG of a pseudo, a pseudo that is
3509 also set somewhere else to a constant. */
3510 note_pattern_stores (set, no_equiv, NULL);
3511 continue;
3514 /* Don't set reg mentioned in a paradoxical subreg
3515 equivalent to a mem. */
3516 if (MEM_P (src) && reg_equiv[regno].pdx_subregs)
3518 note_pattern_stores (set, no_equiv, NULL);
3519 continue;
3522 note = find_reg_note (insn, REG_EQUAL, NULL_RTX);
3524 /* cse sometimes generates function invariants, but doesn't put a
3525 REG_EQUAL note on the insn. Since this note would be redundant,
3526 there's no point creating it earlier than here. */
3527 if (! note && ! rtx_varies_p (src, 0))
3528 note = set_unique_reg_note (insn, REG_EQUAL, copy_rtx (src));
3530 /* Don't bother considering a REG_EQUAL note containing an EXPR_LIST
3531 since it represents a function call. */
3532 if (note && GET_CODE (XEXP (note, 0)) == EXPR_LIST)
3533 note = NULL_RTX;
3535 if (DF_REG_DEF_COUNT (regno) != 1)
3537 bool equal_p = true;
3538 rtx_insn_list *list;
3540 /* If we have already processed this pseudo and determined it
3541 cannot have an equivalence, then honor that decision. */
3542 if (reg_equiv[regno].no_equiv)
3543 continue;
3545 if (! note
3546 || rtx_varies_p (XEXP (note, 0), 0)
3547 || (reg_equiv[regno].replacement
3548 && ! rtx_equal_p (XEXP (note, 0),
3549 reg_equiv[regno].replacement)))
3551 no_equiv (dest, set, NULL);
3552 continue;
3555 list = reg_equiv[regno].init_insns;
3556 for (; list; list = list->next ())
3558 rtx note_tmp;
3559 rtx_insn *insn_tmp;
3561 insn_tmp = list->insn ();
3562 note_tmp = find_reg_note (insn_tmp, REG_EQUAL, NULL_RTX);
3563 gcc_assert (note_tmp);
3564 if (! rtx_equal_p (XEXP (note, 0), XEXP (note_tmp, 0)))
3566 equal_p = false;
3567 break;
3571 if (! equal_p)
3573 no_equiv (dest, set, NULL);
3574 continue;
3578 /* Record this insn as initializing this register. */
3579 reg_equiv[regno].init_insns
3580 = gen_rtx_INSN_LIST (VOIDmode, insn, reg_equiv[regno].init_insns);
3582 /* If this register is known to be equal to a constant, record that
3583 it is always equivalent to the constant.
3584 Note that it is possible to have a register use before
3585 the def in loops (see gcc.c-torture/execute/pr79286.c)
3586 where the reg is undefined on first use. If the def insn
3587 won't trap we can use it as an equivalence, effectively
3588 choosing the "undefined" value for the reg to be the
3589 same as the value set by the def. */
3590 if (DF_REG_DEF_COUNT (regno) == 1
3591 && note
3592 && !rtx_varies_p (XEXP (note, 0), 0)
3593 && (!may_trap_or_fault_p (XEXP (note, 0))
3594 || def_dominates_uses (regno)))
3596 rtx note_value = XEXP (note, 0);
3597 remove_note (insn, note);
3598 set_unique_reg_note (insn, REG_EQUIV, note_value);
3601 /* If this insn introduces a "constant" register, decrease the priority
3602 of that register. Record this insn if the register is only used once
3603 more and the equivalence value is the same as our source.
3605 The latter condition is checked for two reasons: First, it is an
3606 indication that it may be more efficient to actually emit the insn
3607 as written (if no registers are available, reload will substitute
3608 the equivalence). Secondly, it avoids problems with any registers
3609 dying in this insn whose death notes would be missed.
3611 If we don't have a REG_EQUIV note, see if this insn is loading
3612 a register used only in one basic block from a MEM. If so, and the
3613 MEM remains unchanged for the life of the register, add a REG_EQUIV
3614 note. */
3615 note = find_reg_note (insn, REG_EQUIV, NULL_RTX);
3617 rtx replacement = NULL_RTX;
3618 if (note)
3619 replacement = XEXP (note, 0);
3620 else if (REG_BASIC_BLOCK (regno) >= NUM_FIXED_BLOCKS
3621 && MEM_P (SET_SRC (set)))
3623 enum valid_equiv validity;
3624 validity = validate_equiv_mem (insn, dest, SET_SRC (set));
3625 if (validity != valid_none)
3627 replacement = copy_rtx (SET_SRC (set));
3628 if (validity == valid_reload)
3629 note = set_unique_reg_note (insn, REG_EQUIV, replacement);
3633 /* If we haven't done so, record for reload that this is an
3634 equivalencing insn. */
3635 if (note && !reg_equiv[regno].is_arg_equivalence)
3636 ira_reg_equiv[regno].init_insns
3637 = gen_rtx_INSN_LIST (VOIDmode, insn,
3638 ira_reg_equiv[regno].init_insns);
3640 if (replacement)
3642 reg_equiv[regno].replacement = replacement;
3643 reg_equiv[regno].src_p = &SET_SRC (set);
3644 reg_equiv[regno].loop_depth = (short) loop_depth;
3646 /* Don't mess with things live during setjmp. */
3647 if (optimize && !bitmap_bit_p (setjmp_crosses, regno))
3649 /* If the register is referenced exactly twice, meaning it is
3650 set once and used once, indicate that the reference may be
3651 replaced by the equivalence we computed above. Do this
3652 even if the register is only used in one block so that
3653 dependencies can be handled where the last register is
3654 used in a different block (i.e. HIGH / LO_SUM sequences)
3655 and to reduce the number of registers alive across
3656 calls. */
3658 if (REG_N_REFS (regno) == 2
3659 && (rtx_equal_p (replacement, src)
3660 || ! equiv_init_varies_p (src))
3661 && NONJUMP_INSN_P (insn)
3662 && equiv_init_movable_p (PATTERN (insn), regno))
3663 reg_equiv[regno].replace = 1;
3670 /* For insns that set a MEM to the contents of a REG that is only used
3671 in a single basic block, see if the register is always equivalent
3672 to that memory location and if moving the store from INSN to the
3673 insn that sets REG is safe. If so, put a REG_EQUIV note on the
3674 initializing insn. */
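/* Illustrative example (register number and location X are made up):
   given

   I1: (set (reg:SI 100) ...)         ;; the only def of reg 100
   ...
   I2: (set (mem:SI X) (reg:SI 100))  ;; store to location X

   if X stays unchanged for the life of reg 100 and the register is
   used in a single block, I1 gets a REG_EQUIV note for (mem:SI X),
   so the pseudo can be spilled directly to that location.  */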
3675 static void
3676 add_store_equivs (void)
3678 auto_bitmap seen_insns;
3680 for (rtx_insn *insn = get_insns (); insn; insn = NEXT_INSN (insn))
3682 rtx set, src, dest;
3683 unsigned regno;
3684 rtx_insn *init_insn;
3686 bitmap_set_bit (seen_insns, INSN_UID (insn));
3688 if (! INSN_P (insn))
3689 continue;
3691 set = single_set (insn);
3692 if (! set)
3693 continue;
3695 dest = SET_DEST (set);
3696 src = SET_SRC (set);
3698 /* Don't add a REG_EQUIV note if the insn already has one. The existing
3699 REG_EQUIV is likely more useful than the one we are adding. */
3700 if (MEM_P (dest) && REG_P (src)
3701 && (regno = REGNO (src)) >= FIRST_PSEUDO_REGISTER
3702 && REG_BASIC_BLOCK (regno) >= NUM_FIXED_BLOCKS
3703 && DF_REG_DEF_COUNT (regno) == 1
3704 && ! reg_equiv[regno].pdx_subregs
3705 && reg_equiv[regno].init_insns != NULL
3706 && (init_insn = reg_equiv[regno].init_insns->insn ()) != 0
3707 && bitmap_bit_p (seen_insns, INSN_UID (init_insn))
3708 && ! find_reg_note (init_insn, REG_EQUIV, NULL_RTX)
3709 && validate_equiv_mem (init_insn, src, dest) == valid_reload
3710 && ! memref_used_between_p (dest, init_insn, insn)
3711 /* Attaching a REG_EQUIV note will fail if INIT_INSN has
3712 multiple sets. */
3713 && set_unique_reg_note (init_insn, REG_EQUIV, copy_rtx (dest)))
3715 /* This insn makes the equivalence, not the one initializing
3716 the register. */
3717 ira_reg_equiv[regno].init_insns
3718 = gen_rtx_INSN_LIST (VOIDmode, insn, NULL_RTX);
3719 df_notes_rescan (init_insn);
3720 if (dump_file)
3721 fprintf (dump_file,
3722 "Adding REG_EQUIV to insn %d for source of insn %d\n",
3723 INSN_UID (init_insn),
3724 INSN_UID (insn));
3729 /* Scan all regs killed in an insn to see if any of them are registers
3730 used only once.  If so, see if we can replace the reference
3731 with the equivalent form. If we can, delete the initializing
3732 reference and this register will go away. If we can't replace the
3733 reference, and the initializing reference is within the same loop
3734 (or in an inner loop), then move the register initialization just
3735 before the use, so that they are in the same basic block. */
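/* Illustrative sketch (insns and register numbers are made up): with

   I1: (set (reg 100) (plus (reg 101) (const_int 4)))  ;; sole def
   ...
   I2: (set (reg 102) (mem (reg 100)))                  ;; sole use

   either the use in I2 is rewritten to the equivalent expression and
   I1 is deleted, or I1 is moved to just before I2.  */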
3736 static void
3737 combine_and_move_insns (void)
3739 auto_bitmap cleared_regs;
3740 int max = max_reg_num ();
3742 for (int regno = FIRST_PSEUDO_REGISTER; regno < max; regno++)
3744 if (!reg_equiv[regno].replace)
3745 continue;
3747 rtx_insn *use_insn = 0;
3748 for (df_ref use = DF_REG_USE_CHAIN (regno);
3749 use;
3750 use = DF_REF_NEXT_REG (use))
3751 if (DF_REF_INSN_INFO (use))
3753 if (DEBUG_INSN_P (DF_REF_INSN (use)))
3754 continue;
3755 gcc_assert (!use_insn);
3756 use_insn = DF_REF_INSN (use);
3758 gcc_assert (use_insn);
3760 /* Don't substitute into jumps. indirect_jump_optimize does
3761 this for anything we are prepared to handle. */
3762 if (JUMP_P (use_insn))
3763 continue;
3765 /* Also don't substitute into a conditional trap insn -- it can become
3766 an unconditional trap, and that is a flow control insn. */
3767 if (GET_CODE (PATTERN (use_insn)) == TRAP_IF)
3768 continue;
3770 df_ref def = DF_REG_DEF_CHAIN (regno);
3771 gcc_assert (DF_REG_DEF_COUNT (regno) == 1 && DF_REF_INSN_INFO (def));
3772 rtx_insn *def_insn = DF_REF_INSN (def);
3774 /* We may not move instructions that can throw, since that
3775 changes basic block boundaries and we are not prepared to
3776 adjust the CFG to match. */
3777 if (can_throw_internal (def_insn))
3778 continue;
3780 basic_block use_bb = BLOCK_FOR_INSN (use_insn);
3781 basic_block def_bb = BLOCK_FOR_INSN (def_insn);
3782 if (bb_loop_depth (use_bb) > bb_loop_depth (def_bb))
3783 continue;
3785 if (asm_noperands (PATTERN (def_insn)) < 0
3786 && validate_replace_rtx (regno_reg_rtx[regno],
3787 *reg_equiv[regno].src_p, use_insn))
3789 rtx link;
3790 /* Append the REG_DEAD notes from def_insn. */
3791 for (rtx *p = &REG_NOTES (def_insn); (link = *p) != 0; )
3793 if (REG_NOTE_KIND (XEXP (link, 0)) == REG_DEAD)
3795 *p = XEXP (link, 1);
3796 XEXP (link, 1) = REG_NOTES (use_insn);
3797 REG_NOTES (use_insn) = link;
3799 else
3800 p = &XEXP (link, 1);
3803 remove_death (regno, use_insn);
3804 SET_REG_N_REFS (regno, 0);
3805 REG_FREQ (regno) = 0;
3806 df_ref use;
3807 FOR_EACH_INSN_USE (use, def_insn)
3809 unsigned int use_regno = DF_REF_REGNO (use);
3810 if (!HARD_REGISTER_NUM_P (use_regno))
3811 reg_equiv[use_regno].replace = 0;
3814 delete_insn (def_insn);
3816 reg_equiv[regno].init_insns = NULL;
3817 ira_reg_equiv[regno].init_insns = NULL;
3818 bitmap_set_bit (cleared_regs, regno);
3821 /* Move the initialization of the register to just before
3822 USE_INSN. Update the flow information. */
3823 else if (prev_nondebug_insn (use_insn) != def_insn)
3825 rtx_insn *new_insn;
3827 new_insn = emit_insn_before (PATTERN (def_insn), use_insn);
3828 REG_NOTES (new_insn) = REG_NOTES (def_insn);
3829 REG_NOTES (def_insn) = 0;
3830 /* Rescan it to process the notes. */
3831 df_insn_rescan (new_insn);
3833 /* Make sure this insn is recognized before reload begins,
3834 otherwise eliminate_regs_in_insn will die. */
3835 INSN_CODE (new_insn) = INSN_CODE (def_insn);
3837 delete_insn (def_insn);
3839 XEXP (reg_equiv[regno].init_insns, 0) = new_insn;
3841 REG_BASIC_BLOCK (regno) = use_bb->index;
3842 REG_N_CALLS_CROSSED (regno) = 0;
3844 if (use_insn == BB_HEAD (use_bb))
3845 BB_HEAD (use_bb) = new_insn;
3847 /* We know regno dies in use_insn, but inside a loop
3848 REG_DEAD notes might be missing when def_insn was in
3849 another basic block. However, when we move def_insn into
3850 this bb we'll definitely get a REG_DEAD note and reload
3851 will see the death. It's possible that update_equiv_regs
3852 set up an equivalence referencing regno for a reg set by
3853 use_insn, when regno was seen as non-local. Now that
3854 regno is local to this block, and dies, such an
3855 equivalence is invalid. */
3856 if (find_reg_note (use_insn, REG_EQUIV, regno_reg_rtx[regno]))
3858 rtx set = single_set (use_insn);
3859 if (set && REG_P (SET_DEST (set)))
3860 no_equiv (SET_DEST (set), set, NULL);
3863 ira_reg_equiv[regno].init_insns
3864 = gen_rtx_INSN_LIST (VOIDmode, new_insn, NULL_RTX);
3865 bitmap_set_bit (cleared_regs, regno);
3869 if (!bitmap_empty_p (cleared_regs))
3871 basic_block bb;
3873 FOR_EACH_BB_FN (bb, cfun)
3875 bitmap_and_compl_into (DF_LR_IN (bb), cleared_regs);
3876 bitmap_and_compl_into (DF_LR_OUT (bb), cleared_regs);
3877 if (!df_live)
3878 continue;
3879 bitmap_and_compl_into (DF_LIVE_IN (bb), cleared_regs);
3880 bitmap_and_compl_into (DF_LIVE_OUT (bb), cleared_regs);
3883 /* Last pass - adjust debug insns referencing cleared regs. */
3884 if (MAY_HAVE_DEBUG_BIND_INSNS)
3885 for (rtx_insn *insn = get_insns (); insn; insn = NEXT_INSN (insn))
3886 if (DEBUG_BIND_INSN_P (insn))
3888 rtx old_loc = INSN_VAR_LOCATION_LOC (insn);
3889 INSN_VAR_LOCATION_LOC (insn)
3890 = simplify_replace_fn_rtx (old_loc, NULL_RTX,
3891 adjust_cleared_regs,
3892 (void *) cleared_regs);
3893 if (old_loc != INSN_VAR_LOCATION_LOC (insn))
3894 df_insn_rescan (insn);
3899 /* A pass over indirect jumps, converting simple cases to direct jumps.
3900 Combine does this optimization too, but only within a basic block. */
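/* For example (illustrative): if the only def of reg 100 is
   (set (reg 100) (label_ref L)), or that def carries a
   REG_EQUAL (label_ref L) note, then an indirect jump
   (set (pc) (reg 100)) is rewritten as a direct jump to L.  */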
3901 static void
3902 indirect_jump_optimize (void)
3904 basic_block bb;
3905 bool rebuild_p = false;
3907 FOR_EACH_BB_REVERSE_FN (bb, cfun)
3909 rtx_insn *insn = BB_END (bb);
3910 if (!JUMP_P (insn)
3911 || find_reg_note (insn, REG_NON_LOCAL_GOTO, NULL_RTX))
3912 continue;
3914 rtx x = pc_set (insn);
3915 if (!x || !REG_P (SET_SRC (x)))
3916 continue;
3918 int regno = REGNO (SET_SRC (x));
3919 if (DF_REG_DEF_COUNT (regno) == 1)
3921 df_ref def = DF_REG_DEF_CHAIN (regno);
3922 if (!DF_REF_IS_ARTIFICIAL (def))
3924 rtx_insn *def_insn = DF_REF_INSN (def);
3925 rtx lab = NULL_RTX;
3926 rtx set = single_set (def_insn);
3927 if (set && GET_CODE (SET_SRC (set)) == LABEL_REF)
3928 lab = SET_SRC (set);
3929 else
3931 rtx eqnote = find_reg_note (def_insn, REG_EQUAL, NULL_RTX);
3932 if (eqnote && GET_CODE (XEXP (eqnote, 0)) == LABEL_REF)
3933 lab = XEXP (eqnote, 0);
3935 if (lab && validate_replace_rtx (SET_SRC (x), lab, insn))
3936 rebuild_p = true;
3941 if (rebuild_p)
3943 timevar_push (TV_JUMP);
3944 rebuild_jump_labels (get_insns ());
3945 if (purge_all_dead_edges ())
3946 delete_unreachable_blocks ();
3947 timevar_pop (TV_JUMP);
3951 /* Set up fields memory, constant, and invariant from init_insns in
3952 the structures of array ira_reg_equiv. */
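/* In short: each recorded equivalence value is classified as memory,
   constant or invariant; when it fits none of these forms, the
   equivalence is dropped and init_insns is cleared.  */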
3953 static void
3954 setup_reg_equiv (void)
3956 int i;
3957 rtx_insn_list *elem, *prev_elem, *next_elem;
3958 rtx_insn *insn;
3959 rtx set, x;
3961 for (i = FIRST_PSEUDO_REGISTER; i < ira_reg_equiv_len; i++)
3962 for (prev_elem = NULL, elem = ira_reg_equiv[i].init_insns;
3963 elem;
3964 prev_elem = elem, elem = next_elem)
3966 next_elem = elem->next ();
3967 insn = elem->insn ();
3968 set = single_set (insn);
3970 /* Init insns can set up equivalence when the reg is a destination or
3971 a source (in this case the destination is memory). */
3972 if (set != 0 && (REG_P (SET_DEST (set)) || REG_P (SET_SRC (set))))
3974 if ((x = find_reg_note (insn, REG_EQUIV, NULL_RTX)) != NULL)
3976 x = XEXP (x, 0);
3977 if (REG_P (SET_DEST (set))
3978 && REGNO (SET_DEST (set)) == (unsigned int) i
3979 && ! rtx_equal_p (SET_SRC (set), x) && MEM_P (x))
3981 /* This insn reports the equivalence but does not
3982 actually set it up.  Remove it from the
3983 list.  */
3984 if (prev_elem == NULL)
3985 ira_reg_equiv[i].init_insns = next_elem;
3986 else
3987 XEXP (prev_elem, 1) = next_elem;
3988 elem = prev_elem;
3991 else if (REG_P (SET_DEST (set))
3992 && REGNO (SET_DEST (set)) == (unsigned int) i)
3993 x = SET_SRC (set);
3994 else
3996 gcc_assert (REG_P (SET_SRC (set))
3997 && REGNO (SET_SRC (set)) == (unsigned int) i);
3998 x = SET_DEST (set);
4000 if (! function_invariant_p (x)
4001 || ! flag_pic
4002 /* A function invariant is often CONSTANT_P but may
4003 include a register. We promise to only pass
4004 CONSTANT_P objects to LEGITIMATE_PIC_OPERAND_P. */
4005 || (CONSTANT_P (x) && LEGITIMATE_PIC_OPERAND_P (x)))
4007 /* It can happen that a REG_EQUIV note contains a MEM
4008 that is not a legitimate memory operand. As later
4009 stages of reload assume that all addresses found in
4010 the lra_regno_equiv_* arrays were originally
4011 legitimate, we ignore such REG_EQUIV notes. */
4012 if (memory_operand (x, VOIDmode))
4014 ira_reg_equiv[i].defined_p = true;
4015 ira_reg_equiv[i].memory = x;
4016 continue;
4018 else if (function_invariant_p (x))
4020 machine_mode mode;
4022 mode = GET_MODE (SET_DEST (set));
4023 if (GET_CODE (x) == PLUS
4024 || x == frame_pointer_rtx || x == arg_pointer_rtx)
4025 /* This is PLUS of frame pointer and a constant,
4026 or fp, or argp. */
4027 ira_reg_equiv[i].invariant = x;
4028 else if (targetm.legitimate_constant_p (mode, x))
4029 ira_reg_equiv[i].constant = x;
4030 else
4032 ira_reg_equiv[i].memory = force_const_mem (mode, x);
4033 if (ira_reg_equiv[i].memory == NULL_RTX)
4035 ira_reg_equiv[i].defined_p = false;
4036 ira_reg_equiv[i].init_insns = NULL;
4037 break;
4040 ira_reg_equiv[i].defined_p = true;
4041 continue;
4045 ira_reg_equiv[i].defined_p = false;
4046 ira_reg_equiv[i].init_insns = NULL;
4047 break;
4053 /* Print chain C to FILE. */
4054 static void
4055 print_insn_chain (FILE *file, class insn_chain *c)
4057 fprintf (file, "insn=%d, ", INSN_UID (c->insn));
4058 bitmap_print (file, &c->live_throughout, "live_throughout: ", ", ");
4059 bitmap_print (file, &c->dead_or_set, "dead_or_set: ", "\n");
4063 /* Print all reload_insn_chains to FILE. */
4064 static void
4065 print_insn_chains (FILE *file)
4067 class insn_chain *c;
4068 for (c = reload_insn_chain; c ; c = c->next)
4069 print_insn_chain (file, c);
4072 /* Return true if pseudo REGNO should be added to set live_throughout
4073 or dead_or_set of the insn chains for reload consideration. */
4074 static bool
4075 pseudo_for_reload_consideration_p (int regno)
4077 /* Consider spilled pseudos too for IRA because they still have a
4078 chance to get hard registers during reload when IRA is used.  */
4079 return (reg_renumber[regno] >= 0 || ira_conflicts_p);
4082 /* Return true if we can track the individual bytes of subreg X.
4083 When returning true, set *OUTER_SIZE to the number of bytes in
4084 X itself, *INNER_SIZE to the number of bytes in the inner register
4085 and *START to the offset of the first byte. */
4086 static bool
4087 get_subreg_tracking_sizes (rtx x, HOST_WIDE_INT *outer_size,
4088 HOST_WIDE_INT *inner_size, HOST_WIDE_INT *start)
4090 rtx reg = regno_reg_rtx[REGNO (SUBREG_REG (x))];
4091 return (GET_MODE_SIZE (GET_MODE (x)).is_constant (outer_size)
4092 && GET_MODE_SIZE (GET_MODE (reg)).is_constant (inner_size)
4093 && SUBREG_BYTE (x).is_constant (start));
4096 /* Init LIVE_SUBREGS[ALLOCNUM] and LIVE_SUBREGS_USED[ALLOCNUM] for
4097 a register with SIZE bytes, making the register live if INIT_VALUE. */
4098 static void
4099 init_live_subregs (bool init_value, sbitmap *live_subregs,
4100 bitmap live_subregs_used, int allocnum, int size)
4102 gcc_assert (size > 0);
4104 /* Been there, done that. */
4105 if (bitmap_bit_p (live_subregs_used, allocnum))
4106 return;
4108 /* Create a new one. */
4109 if (live_subregs[allocnum] == NULL)
4110 live_subregs[allocnum] = sbitmap_alloc (size);
4112 /* If the entire reg was live before being split into subregs, we need
4113 to initialize all of the subregs to ones, else initialize them to 0.  */
4114 if (init_value)
4115 bitmap_ones (live_subregs[allocnum]);
4116 else
4117 bitmap_clear (live_subregs[allocnum]);
4119 bitmap_set_bit (live_subregs_used, allocnum);
4122 /* Walk the insns of the current function and build reload_insn_chain,
4123 and record register life information. */
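/* Note: blocks and insns are walked in reverse so that
   live_relevant_regs can be maintained backwards from
   df_get_live_out; each chain element records the registers live
   throughout the insn and those dead or set by it.  */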
4124 static void
4125 build_insn_chain (void)
4127 unsigned int i;
4128 class insn_chain **p = &reload_insn_chain;
4129 basic_block bb;
4130 class insn_chain *c = NULL;
4131 class insn_chain *next = NULL;
4132 auto_bitmap live_relevant_regs;
4133 auto_bitmap elim_regset;
4134 /* live_subregs is a vector used to keep accurate information about
4135 which hardregs are live in multiword pseudos. live_subregs and
4136 live_subregs_used are indexed by pseudo number. The live_subreg
4137 entry for a particular pseudo is only used if the corresponding
4138 element is nonzero in live_subregs_used.  The sbitmap size of
4139 live_subreg[allocno] is the number of bytes that the pseudo can
4140 occupy. */
4141 sbitmap *live_subregs = XCNEWVEC (sbitmap, max_regno);
4142 auto_bitmap live_subregs_used;
4144 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
4145 if (TEST_HARD_REG_BIT (eliminable_regset, i))
4146 bitmap_set_bit (elim_regset, i);
4147 FOR_EACH_BB_REVERSE_FN (bb, cfun)
4149 bitmap_iterator bi;
4150 rtx_insn *insn;
4152 CLEAR_REG_SET (live_relevant_regs);
4153 bitmap_clear (live_subregs_used);
4155 EXECUTE_IF_SET_IN_BITMAP (df_get_live_out (bb), 0, i, bi)
4157 if (i >= FIRST_PSEUDO_REGISTER)
4158 break;
4159 bitmap_set_bit (live_relevant_regs, i);
4162 EXECUTE_IF_SET_IN_BITMAP (df_get_live_out (bb),
4163 FIRST_PSEUDO_REGISTER, i, bi)
4165 if (pseudo_for_reload_consideration_p (i))
4166 bitmap_set_bit (live_relevant_regs, i);
4169 FOR_BB_INSNS_REVERSE (bb, insn)
4171 if (!NOTE_P (insn) && !BARRIER_P (insn))
4173 struct df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
4174 df_ref def, use;
4176 c = new_insn_chain ();
4177 c->next = next;
4178 next = c;
4179 *p = c;
4180 p = &c->prev;
4182 c->insn = insn;
4183 c->block = bb->index;
4185 if (NONDEBUG_INSN_P (insn))
4186 FOR_EACH_INSN_INFO_DEF (def, insn_info)
4188 unsigned int regno = DF_REF_REGNO (def);
4190 /* Ignore may clobbers because these are generated
4191 from calls. However, every other kind of def is
4192 added to dead_or_set. */
4193 if (!DF_REF_FLAGS_IS_SET (def, DF_REF_MAY_CLOBBER))
4195 if (regno < FIRST_PSEUDO_REGISTER)
4197 if (!fixed_regs[regno])
4198 bitmap_set_bit (&c->dead_or_set, regno);
4200 else if (pseudo_for_reload_consideration_p (regno))
4201 bitmap_set_bit (&c->dead_or_set, regno);
4204 if ((regno < FIRST_PSEUDO_REGISTER
4205 || reg_renumber[regno] >= 0
4206 || ira_conflicts_p)
4207 && (!DF_REF_FLAGS_IS_SET (def, DF_REF_CONDITIONAL)))
4209 rtx reg = DF_REF_REG (def);
4210 HOST_WIDE_INT outer_size, inner_size, start;
4212 /* We can usually track the liveness of individual
4213 bytes within a subreg. The only exceptions are
4214 subregs wrapped in ZERO_EXTRACTs and subregs whose
4215 size is not known; in those cases we need to be
4216 conservative and treat the definition as a partial
4217 definition of the full register rather than a full
4218 definition of a specific part of the register. */
4219 if (GET_CODE (reg) == SUBREG
4220 && !DF_REF_FLAGS_IS_SET (def, DF_REF_ZERO_EXTRACT)
4221 && get_subreg_tracking_sizes (reg, &outer_size,
4222 &inner_size, &start))
4224 HOST_WIDE_INT last = start + outer_size;
4226 init_live_subregs
4227 (bitmap_bit_p (live_relevant_regs, regno),
4228 live_subregs, live_subregs_used, regno,
4229 inner_size);
4231 if (!DF_REF_FLAGS_IS_SET
4232 (def, DF_REF_STRICT_LOW_PART))
4234 /* Expand the range to cover entire words.
4235 Bytes added here are "don't care". */
4236 start
4237 = start / UNITS_PER_WORD * UNITS_PER_WORD;
4238 last = ((last + UNITS_PER_WORD - 1)
4239 / UNITS_PER_WORD * UNITS_PER_WORD);
4242 /* Ignore the paradoxical bits. */
4243 if (last > SBITMAP_SIZE (live_subregs[regno]))
4244 last = SBITMAP_SIZE (live_subregs[regno]);
4246 while (start < last)
4248 bitmap_clear_bit (live_subregs[regno], start);
4249 start++;
4252 if (bitmap_empty_p (live_subregs[regno]))
4254 bitmap_clear_bit (live_subregs_used, regno);
4255 bitmap_clear_bit (live_relevant_regs, regno);
4257 else
4258 /* Set live_relevant_regs here because
4259 that bit has to be true to get us to
4260 look at the live_subregs fields. */
4261 bitmap_set_bit (live_relevant_regs, regno);
4263 else
4265 /* DF_REF_PARTIAL is generated for
4266 subregs, STRICT_LOW_PART, and
4267 ZERO_EXTRACT. We handle the subreg
4268 case above so here we have to keep from
4269 modeling the def as a killing def. */
4270 if (!DF_REF_FLAGS_IS_SET (def, DF_REF_PARTIAL))
4272 bitmap_clear_bit (live_subregs_used, regno);
4273 bitmap_clear_bit (live_relevant_regs, regno);
4279 bitmap_and_compl_into (live_relevant_regs, elim_regset);
4280 bitmap_copy (&c->live_throughout, live_relevant_regs);
4282 if (NONDEBUG_INSN_P (insn))
4283 FOR_EACH_INSN_INFO_USE (use, insn_info)
4285 unsigned int regno = DF_REF_REGNO (use);
4286 rtx reg = DF_REF_REG (use);
4288 /* DF_REF_READ_WRITE on a use means that this use
4289 is fabricated from a def that is a partial set
4290 to a multiword reg. Here, we only model the
4291 subreg case that is not wrapped in ZERO_EXTRACT
4292 precisely so we do not need to look at the
4293 fabricated use. */
4294 if (DF_REF_FLAGS_IS_SET (use, DF_REF_READ_WRITE)
4295 && !DF_REF_FLAGS_IS_SET (use, DF_REF_ZERO_EXTRACT)
4296 && DF_REF_FLAGS_IS_SET (use, DF_REF_SUBREG))
4297 continue;
4299 /* Add the last use of each var to dead_or_set. */
4300 if (!bitmap_bit_p (live_relevant_regs, regno))
4302 if (regno < FIRST_PSEUDO_REGISTER)
4304 if (!fixed_regs[regno])
4305 bitmap_set_bit (&c->dead_or_set, regno);
4307 else if (pseudo_for_reload_consideration_p (regno))
4308 bitmap_set_bit (&c->dead_or_set, regno);
4311 if (regno < FIRST_PSEUDO_REGISTER
4312 || pseudo_for_reload_consideration_p (regno))
4314 HOST_WIDE_INT outer_size, inner_size, start;
4315 if (GET_CODE (reg) == SUBREG
4316 && !DF_REF_FLAGS_IS_SET (use,
4317 DF_REF_SIGN_EXTRACT
4318 | DF_REF_ZERO_EXTRACT)
4319 && get_subreg_tracking_sizes (reg, &outer_size,
4320 &inner_size, &start))
4322 HOST_WIDE_INT last = start + outer_size;
4324 init_live_subregs
4325 (bitmap_bit_p (live_relevant_regs, regno),
4326 live_subregs, live_subregs_used, regno,
4327 inner_size);
4329 /* Ignore the paradoxical bits. */
4330 if (last > SBITMAP_SIZE (live_subregs[regno]))
4331 last = SBITMAP_SIZE (live_subregs[regno]);
4333 while (start < last)
4335 bitmap_set_bit (live_subregs[regno], start);
4336 start++;
4339 else
4340 /* Resetting the live_subregs_used is
4341 effectively saying do not use the subregs
4342 because we are reading the whole
4343 pseudo. */
4344 bitmap_clear_bit (live_subregs_used, regno);
4345 bitmap_set_bit (live_relevant_regs, regno);
4351 /* FIXME!! The following code is a disaster. Reload needs to see the
4352 labels and jump tables that are just hanging out in between
4353 the basic blocks. See pr33676. */
4354 insn = BB_HEAD (bb);
4356 /* Skip over the barriers and cruft. */
4357 while (insn && (BARRIER_P (insn) || NOTE_P (insn)
4358 || BLOCK_FOR_INSN (insn) == bb))
4359 insn = PREV_INSN (insn);
4361 /* While we add anything except barriers and notes, the focus is
4362 to get the labels and jump tables into the
4363 reload_insn_chain. */
4364 while (insn)
4366 if (!NOTE_P (insn) && !BARRIER_P (insn))
4368 if (BLOCK_FOR_INSN (insn))
4369 break;
4371 c = new_insn_chain ();
4372 c->next = next;
4373 next = c;
4374 *p = c;
4375 p = &c->prev;
4377 /* The block makes no sense here, but it is what the old
4378 code did. */
4379 c->block = bb->index;
4380 c->insn = insn;
4381 bitmap_copy (&c->live_throughout, live_relevant_regs);
4383 insn = PREV_INSN (insn);
4387 reload_insn_chain = c;
4388 *p = NULL;
4390 for (i = 0; i < (unsigned int) max_regno; i++)
4391 if (live_subregs[i] != NULL)
4392 sbitmap_free (live_subregs[i]);
4393 free (live_subregs);
4395 if (dump_file)
4396 print_insn_chains (dump_file);
4399 /* Examine the rtx found in *LOC, which is read or written to as determined
4400 by TYPE. Return false if we find a reason why an insn containing this
4401 rtx should not be moved (such as accesses to non-constant memory), true
4402 otherwise. */
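/* For example, a read of read-only memory (MEM_READONLY_P) used as an
   input is considered moveable, while a volatile asm or an
   UNSPEC_VOLATILE is not.  */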
4403 static bool
4404 rtx_moveable_p (rtx *loc, enum op_type type)
4406 const char *fmt;
4407 rtx x = *loc;
4408 int i, j;
4410 enum rtx_code code = GET_CODE (x);
4411 switch (code)
4413 case CONST:
4414 CASE_CONST_ANY:
4415 case SYMBOL_REF:
4416 case LABEL_REF:
4417 return true;
4419 case PC:
4420 return type == OP_IN;
4422 case CC0:
4423 return false;
4425 case REG:
4426 if (x == frame_pointer_rtx)
4427 return true;
4428 if (HARD_REGISTER_P (x))
4429 return false;
4431 return true;
4433 case MEM:
4434 if (type == OP_IN && MEM_READONLY_P (x))
4435 return rtx_moveable_p (&XEXP (x, 0), OP_IN);
4436 return false;
4438 case SET:
4439 return (rtx_moveable_p (&SET_SRC (x), OP_IN)
4440 && rtx_moveable_p (&SET_DEST (x), OP_OUT));
4442 case STRICT_LOW_PART:
4443 return rtx_moveable_p (&XEXP (x, 0), OP_OUT);
4445 case ZERO_EXTRACT:
4446 case SIGN_EXTRACT:
4447 return (rtx_moveable_p (&XEXP (x, 0), type)
4448 && rtx_moveable_p (&XEXP (x, 1), OP_IN)
4449 && rtx_moveable_p (&XEXP (x, 2), OP_IN));
4451 case CLOBBER:
4452 return rtx_moveable_p (&SET_DEST (x), OP_OUT);
4454 case UNSPEC_VOLATILE:
4455 /* It is a bad idea to consider insns with such rtl
4456 as moveable ones.  The insn scheduler also considers them as a barrier
4457 for a reason.  */
4458 return false;
4460 case ASM_OPERANDS:
4461 /* The same is true for volatile asm: it has unknown side effects and
4462 cannot be moved at will.  */
4463 if (MEM_VOLATILE_P (x))
4464 return false;
4466 default:
4467 break;
4470 fmt = GET_RTX_FORMAT (code);
4471 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
4473 if (fmt[i] == 'e')
4475 if (!rtx_moveable_p (&XEXP (x, i), type))
4476 return false;
4478 else if (fmt[i] == 'E')
4479 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
4481 if (!rtx_moveable_p (&XVECEXP (x, i, j), type))
4482 return false;
4485 return true;
4488 /* A wrapper around dominated_by_p, which uses the information in UID_LUID
4489 to give dominance relationships between two insns I1 and I2. */
4490 static bool
4491 insn_dominated_by_p (rtx i1, rtx i2, int *uid_luid)
4493 basic_block bb1 = BLOCK_FOR_INSN (i1);
4494 basic_block bb2 = BLOCK_FOR_INSN (i2);
4496 if (bb1 == bb2)
4497 return uid_luid[INSN_UID (i2)] < uid_luid[INSN_UID (i1)];
4498 return dominated_by_p (CDI_DOMINATORS, bb1, bb2);
4501 /* Record the range of register numbers added by find_moveable_pseudos. */
4502 int first_moveable_pseudo, last_moveable_pseudo;
4504 /* This vector holds data for every register added by
4505 find_moveable_pseudos, with index 0 holding data for the
4506 first_moveable_pseudo. */
4507 /* The original home register. */
4508 static vec<rtx> pseudo_replaced_reg;
4510 /* Look for instances where we have an instruction that is known to increase
4511 register pressure, and whose result is not used immediately. If it is
4512 possible to move the instruction downwards to just before its first use,
4513 split its lifetime into two ranges. We create a new pseudo to compute the
4514 value, and emit a move instruction just before the first use. If, after
4515 register allocation, the new pseudo remains unallocated, the function
4516 move_unallocated_pseudos then deletes the move instruction and places
4517 the computation just before the first use.
4519 Such a move is safe and profitable if all the input registers remain live
4520 and unchanged between the original computation and its first use. In such
4521 a situation, the computation is known to increase register pressure, and
4522 moving it is known to at least not worsen it.
4524 We restrict moves to only those cases where a register remains unallocated,
4525 in order to avoid interfering too much with the instruction schedule. As
4526 an exception, we may move insns which only modify their input register
4527 (typically induction variables), as this increases the freedom for our
4528 intended transformation, and does not limit the second instruction
4529 scheduler pass. */
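/* Illustrative sketch (insns and register numbers are made up):

   I1: (set (reg 100) (plus (reg 101) (reg 102)))  ;; sole def
   ... insns not using reg 100 ...
   I2: first use of (reg 100)

   I1 is changed to set a fresh pseudo, say reg 200, and a move
   (set (reg 100) (reg 200)) is emitted just before I2.  If reg 200
   is still unallocated after coloring, move_unallocated_pseudos
   undoes the split and places the computation just before I2.  */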
4531 static void
4532 find_moveable_pseudos (void)
4534 unsigned i;
4535 int max_regs = max_reg_num ();
4536 int max_uid = get_max_uid ();
4537 basic_block bb;
4538 int *uid_luid = XNEWVEC (int, max_uid);
4539 rtx_insn **closest_uses = XNEWVEC (rtx_insn *, max_regs);
4540 /* A set of registers which are live but not modified throughout a block. */
4541 bitmap_head *bb_transp_live = XNEWVEC (bitmap_head,
4542 last_basic_block_for_fn (cfun));
4543 /* A set of registers which only exist in a given basic block. */
4544 bitmap_head *bb_local = XNEWVEC (bitmap_head,
4545 last_basic_block_for_fn (cfun));
4546 /* A set of registers which are set once, in an instruction that can be
4547 moved freely downwards, but are otherwise transparent to a block. */
4548 bitmap_head *bb_moveable_reg_sets = XNEWVEC (bitmap_head,
4549 last_basic_block_for_fn (cfun));
4550 auto_bitmap live, used, set, interesting, unusable_as_input;
4551 bitmap_iterator bi;
4553 first_moveable_pseudo = max_regs;
4554 pseudo_replaced_reg.release ();
4555 pseudo_replaced_reg.safe_grow_cleared (max_regs);
4557 df_analyze ();
4558 calculate_dominance_info (CDI_DOMINATORS);
4560 i = 0;
4561 FOR_EACH_BB_FN (bb, cfun)
4563 rtx_insn *insn;
4564 bitmap transp = bb_transp_live + bb->index;
4565 bitmap moveable = bb_moveable_reg_sets + bb->index;
4566 bitmap local = bb_local + bb->index;
4568 bitmap_initialize (local, 0);
4569 bitmap_initialize (transp, 0);
4570 bitmap_initialize (moveable, 0);
4571 bitmap_copy (live, df_get_live_out (bb));
4572 bitmap_and_into (live, df_get_live_in (bb));
4573 bitmap_copy (transp, live);
4574 bitmap_clear (moveable);
4575 bitmap_clear (live);
4576 bitmap_clear (used);
4577 bitmap_clear (set);
4578 FOR_BB_INSNS (bb, insn)
4579 if (NONDEBUG_INSN_P (insn))
4581 df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
4582 df_ref def, use;
4584 uid_luid[INSN_UID (insn)] = i++;
4586 def = df_single_def (insn_info);
4587 use = df_single_use (insn_info);
4588 if (use
4589 && def
4590 && DF_REF_REGNO (use) == DF_REF_REGNO (def)
4591 && !bitmap_bit_p (set, DF_REF_REGNO (use))
4592 && rtx_moveable_p (&PATTERN (insn), OP_IN))
4594 unsigned regno = DF_REF_REGNO (use);
4595 bitmap_set_bit (moveable, regno);
4596 bitmap_set_bit (set, regno);
4597 bitmap_set_bit (used, regno);
4598 bitmap_clear_bit (transp, regno);
4599 continue;
4601 FOR_EACH_INSN_INFO_USE (use, insn_info)
4603 unsigned regno = DF_REF_REGNO (use);
4604 bitmap_set_bit (used, regno);
4605 if (bitmap_clear_bit (moveable, regno))
4606 bitmap_clear_bit (transp, regno);
4609 FOR_EACH_INSN_INFO_DEF (def, insn_info)
4611 unsigned regno = DF_REF_REGNO (def);
4612 bitmap_set_bit (set, regno);
4613 bitmap_clear_bit (transp, regno);
4614 bitmap_clear_bit (moveable, regno);
4619 FOR_EACH_BB_FN (bb, cfun)
4621 bitmap local = bb_local + bb->index;
4622 rtx_insn *insn;
4624 FOR_BB_INSNS (bb, insn)
4625 if (NONDEBUG_INSN_P (insn))
4627 df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
4628 rtx_insn *def_insn;
4629 rtx closest_use, note;
4630 df_ref def, use;
4631 unsigned regno;
4632 bool all_dominated, all_local;
4633 machine_mode mode;
4635 def = df_single_def (insn_info);
4636 /* There must be exactly one def in this insn. */
4637 if (!def || !single_set (insn))
4638 continue;
4639 /* This must be the only definition of the reg. We also limit
4640 which modes we deal with so that we can assume we can generate
4641 move instructions. */
4642 regno = DF_REF_REGNO (def);
4643 mode = GET_MODE (DF_REF_REG (def));
4644 if (DF_REG_DEF_COUNT (regno) != 1
4645 || !DF_REF_INSN_INFO (def)
4646 || HARD_REGISTER_NUM_P (regno)
4647 || DF_REG_EQ_USE_COUNT (regno) > 0
4648 || (!INTEGRAL_MODE_P (mode) && !FLOAT_MODE_P (mode)))
4649 continue;
4650 def_insn = DF_REF_INSN (def);
4652 for (note = REG_NOTES (def_insn); note; note = XEXP (note, 1))
4653 if (REG_NOTE_KIND (note) == REG_EQUIV && MEM_P (XEXP (note, 0)))
4654 break;
4656 if (note)
4658 if (dump_file)
4659 fprintf (dump_file, "Ignoring reg %d, has equiv memory\n",
4660 regno);
4661 bitmap_set_bit (unusable_as_input, regno);
4662 continue;
4665 use = DF_REG_USE_CHAIN (regno);
4666 all_dominated = true;
4667 all_local = true;
4668 closest_use = NULL_RTX;
4669 for (; use; use = DF_REF_NEXT_REG (use))
4671 rtx_insn *insn;
4672 if (!DF_REF_INSN_INFO (use))
4674 all_dominated = false;
4675 all_local = false;
4676 break;
4678 insn = DF_REF_INSN (use);
4679 if (DEBUG_INSN_P (insn))
4680 continue;
4681 if (BLOCK_FOR_INSN (insn) != BLOCK_FOR_INSN (def_insn))
4682 all_local = false;
4683 if (!insn_dominated_by_p (insn, def_insn, uid_luid))
4684 all_dominated = false;
4685 if (closest_use != insn && closest_use != const0_rtx)
4687 if (closest_use == NULL_RTX)
4688 closest_use = insn;
4689 else if (insn_dominated_by_p (closest_use, insn, uid_luid))
4690 closest_use = insn;
4691 else if (!insn_dominated_by_p (insn, closest_use, uid_luid))
4692 closest_use = const0_rtx;
4695 if (!all_dominated)
4697 if (dump_file)
4698 fprintf (dump_file, "Reg %d not all uses dominated by set\n",
4699 regno);
4700 continue;
4702 if (all_local)
4703 bitmap_set_bit (local, regno);
4704 if (closest_use == const0_rtx || closest_use == NULL
4705 || next_nonnote_nondebug_insn (def_insn) == closest_use)
4707 if (dump_file)
4708 fprintf (dump_file, "Reg %d uninteresting%s\n", regno,
4709 closest_use == const0_rtx || closest_use == NULL
4710 ? " (no unique first use)" : "");
4711 continue;
4713 if (HAVE_cc0 && reg_referenced_p (cc0_rtx, PATTERN (closest_use)))
4715 if (dump_file)
4716 fprintf (dump_file, "Reg %d: closest user uses cc0\n",
4717 regno);
4718 continue;
4721 bitmap_set_bit (interesting, regno);
4722 /* If we get here, we know closest_use is a non-NULL insn
4723 (as opposed to const_0_rtx). */
4724 closest_uses[regno] = as_a <rtx_insn *> (closest_use);
4726 if (dump_file && (all_local || all_dominated))
4728 fprintf (dump_file, "Reg %u:", regno);
4729 if (all_local)
4730 fprintf (dump_file, " local to bb %d", bb->index);
4731 if (all_dominated)
4732 fprintf (dump_file, " def dominates all uses");
4733 if (closest_use != const0_rtx)
4734 fprintf (dump_file, " has unique first use");
4735 fputs ("\n", dump_file);
4740 EXECUTE_IF_SET_IN_BITMAP (interesting, 0, i, bi)
4742 df_ref def = DF_REG_DEF_CHAIN (i);
4743 rtx_insn *def_insn = DF_REF_INSN (def);
4744 basic_block def_block = BLOCK_FOR_INSN (def_insn);
4745 bitmap def_bb_local = bb_local + def_block->index;
4746 bitmap def_bb_moveable = bb_moveable_reg_sets + def_block->index;
4747 bitmap def_bb_transp = bb_transp_live + def_block->index;
4748 bool local_to_bb_p = bitmap_bit_p (def_bb_local, i);
4749 rtx_insn *use_insn = closest_uses[i];
4750 df_ref use;
4751 bool all_ok = true;
4752 bool all_transp = true;
4754 if (!REG_P (DF_REF_REG (def)))
4755 continue;
4757 if (!local_to_bb_p)
4759 if (dump_file)
4760 fprintf (dump_file, "Reg %u not local to one basic block\n", i);
4762 continue;
4764 if (reg_equiv_init (i) != NULL_RTX)
4766 if (dump_file)
4767 fprintf (dump_file, "Ignoring reg %u with equiv init insn\n", i);
4769 continue;
4771 if (!rtx_moveable_p (&PATTERN (def_insn), OP_IN))
4773 if (dump_file)
4774 fprintf (dump_file, "Found def insn %d for %d to be not moveable\n",
4775 INSN_UID (def_insn), i);
4776 continue;
4778 if (dump_file)
4779 fprintf (dump_file, "Examining insn %d, def for %d\n",
4780 INSN_UID (def_insn), i);
4781 FOR_EACH_INSN_USE (use, def_insn)
4783 unsigned regno = DF_REF_REGNO (use);
4784 if (bitmap_bit_p (unusable_as_input, regno))
4786 all_ok = false;
4787 if (dump_file)
4788 fprintf (dump_file, " found unusable input reg %u.\n", regno);
4789 break;
4791 if (!bitmap_bit_p (def_bb_transp, regno))
4793 if (bitmap_bit_p (def_bb_moveable, regno)
4794 && !control_flow_insn_p (use_insn)
4795 && (!HAVE_cc0 || !sets_cc0_p (use_insn)))
4797 if (modified_between_p (DF_REF_REG (use), def_insn, use_insn))
4799 rtx_insn *x = NEXT_INSN (def_insn);
4800 while (!modified_in_p (DF_REF_REG (use), x))
4802 gcc_assert (x != use_insn);
4803 x = NEXT_INSN (x);
4805 if (dump_file)
4806 fprintf (dump_file, " input reg %u modified but insn %d moveable\n",
4807 regno, INSN_UID (x));
4808 emit_insn_after (PATTERN (x), use_insn);
4809 set_insn_deleted (x);
4811 else
4813 if (dump_file)
4814 fprintf (dump_file, " input reg %u modified between def and use\n",
4815 regno);
4816 all_transp = false;
4819 else
4820 all_transp = false;
4823 if (!all_ok)
4824 continue;
4825 if (!dbg_cnt (ira_move))
4826 break;
4827 if (dump_file)
4828 fprintf (dump_file, " all ok%s\n", all_transp ? " and transp" : "");
4830 if (all_transp)
4832 rtx def_reg = DF_REF_REG (def);
4833 rtx newreg = ira_create_new_reg (def_reg);
4834 if (validate_change (def_insn, DF_REF_REAL_LOC (def), newreg, 0))
4836 unsigned nregno = REGNO (newreg);
4837 emit_insn_before (gen_move_insn (def_reg, newreg), use_insn);
4838 nregno -= max_regs;
4839 pseudo_replaced_reg[nregno] = def_reg;
4844 FOR_EACH_BB_FN (bb, cfun)
4846 bitmap_clear (bb_local + bb->index);
4847 bitmap_clear (bb_transp_live + bb->index);
4848 bitmap_clear (bb_moveable_reg_sets + bb->index);
4850 free (uid_luid);
4851 free (closest_uses);
4852 free (bb_local);
4853 free (bb_transp_live);
4854 free (bb_moveable_reg_sets);
4856 last_moveable_pseudo = max_reg_num ();
4858 fix_reg_equiv_init ();
4859 expand_reg_info ();
4860 regstat_free_n_sets_and_refs ();
4861 regstat_free_ri ();
4862 regstat_init_n_sets_and_refs ();
4863 regstat_compute_ri ();
4864 free_dominance_info (CDI_DOMINATORS);
4867 /* If SET pattern SET is an assignment from a hard register to a pseudo which
4868 is live at CALL_DOM (if non-NULL, otherwise this check is omitted), return
4869 the destination. Otherwise return NULL. */
4871 static rtx
4872 interesting_dest_for_shprep_1 (rtx set, basic_block call_dom)
4874 rtx src = SET_SRC (set);
4875 rtx dest = SET_DEST (set);
4876 if (!REG_P (src) || !HARD_REGISTER_P (src)
4877 || !REG_P (dest) || HARD_REGISTER_P (dest)
4878 || (call_dom && !bitmap_bit_p (df_get_live_in (call_dom), REGNO (dest))))
4879 return NULL;
4880 return dest;
4883 /* If insn is interesting for parameter range-splitting shrink-wrapping
4884 preparation, i.e. it is a single set from a hard register to a pseudo, which
4885 is live at CALL_DOM (if non-NULL, otherwise this check is omitted), or a
4886 parallel statement with only one such statement, return the destination.
4887 Otherwise return NULL. */
4889 static rtx
4890 interesting_dest_for_shprep (rtx_insn *insn, basic_block call_dom)
4892 if (!INSN_P (insn))
4893 return NULL;
4894 rtx pat = PATTERN (insn);
4895 if (GET_CODE (pat) == SET)
4896 return interesting_dest_for_shprep_1 (pat, call_dom);
4898 if (GET_CODE (pat) != PARALLEL)
4899 return NULL;
4900 rtx ret = NULL;
4901 for (int i = 0; i < XVECLEN (pat, 0); i++)
4903 rtx sub = XVECEXP (pat, 0, i);
4904 if (GET_CODE (sub) == USE || GET_CODE (sub) == CLOBBER)
4905 continue;
4906 if (GET_CODE (sub) != SET
4907 || side_effects_p (sub))
4908 return NULL;
4909 rtx dest = interesting_dest_for_shprep_1 (sub, call_dom);
4910 if (dest && ret)
4911 return NULL;
4912 if (dest)
4913 ret = dest;
4915 return ret;
4918 /* Split live ranges of pseudos that are loaded from hard registers in the
4919 first BB, in a BB that dominates all non-sibling calls, if such a BB can be
4920 found and is not in a loop.  Return true if the function has made any
4921 changes. */
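/* Descriptive sketch (the names here are illustrative): for a pseudo P
   loaded from a hard register in the first block and also used in or
   after the block CALL_DOM that dominates all non-sibling calls, a copy
   (set (reg P') (reg P)) is emitted at CALL_DOM and the uses dominated
   by CALL_DOM are rewritten to P', shortening the live range that must
   reach back to the function entry, which helps the shrink-wrapping
   preparation.  */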
4923 static bool
4924 split_live_ranges_for_shrink_wrap (void)
4926 basic_block bb, call_dom = NULL;
4927 basic_block first = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
4928 rtx_insn *insn, *last_interesting_insn = NULL;
4929 auto_bitmap need_new, reachable;
4930 vec<basic_block> queue;
4932 if (!SHRINK_WRAPPING_ENABLED)
4933 return false;
4935 queue.create (n_basic_blocks_for_fn (cfun));
4937 FOR_EACH_BB_FN (bb, cfun)
4938 FOR_BB_INSNS (bb, insn)
4939 if (CALL_P (insn) && !SIBLING_CALL_P (insn))
4941 if (bb == first)
4943 queue.release ();
4944 return false;
4947 bitmap_set_bit (need_new, bb->index);
4948 bitmap_set_bit (reachable, bb->index);
4949 queue.quick_push (bb);
4950 break;
4953 if (queue.is_empty ())
4955 queue.release ();
4956 return false;
4959 while (!queue.is_empty ())
4961 edge e;
4962 edge_iterator ei;
4964 bb = queue.pop ();
4965 FOR_EACH_EDGE (e, ei, bb->succs)
4966 if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
4967 && bitmap_set_bit (reachable, e->dest->index))
4968 queue.quick_push (e->dest);
4970 queue.release ();
4972 FOR_BB_INSNS (first, insn)
4974 rtx dest = interesting_dest_for_shprep (insn, NULL);
4975 if (!dest)
4976 continue;
4978 if (DF_REG_DEF_COUNT (REGNO (dest)) > 1)
4979 return false;
4981 for (df_ref use = DF_REG_USE_CHAIN (REGNO(dest));
4982 use;
4983 use = DF_REF_NEXT_REG (use))
4985 int ubbi = DF_REF_BB (use)->index;
4986 if (bitmap_bit_p (reachable, ubbi))
4987 bitmap_set_bit (need_new, ubbi);
4989 last_interesting_insn = insn;
4992 if (!last_interesting_insn)
4993 return false;
4995 call_dom = nearest_common_dominator_for_set (CDI_DOMINATORS, need_new);
4996 if (call_dom == first)
4997 return false;
4999 loop_optimizer_init (AVOID_CFG_MODIFICATIONS);
5000 while (bb_loop_depth (call_dom) > 0)
5001 call_dom = get_immediate_dominator (CDI_DOMINATORS, call_dom);
5002 loop_optimizer_finalize ();
5004 if (call_dom == first)
5005 return false;
5007 calculate_dominance_info (CDI_POST_DOMINATORS);
5008 if (dominated_by_p (CDI_POST_DOMINATORS, first, call_dom))
5010 free_dominance_info (CDI_POST_DOMINATORS);
5011 return false;
5013 free_dominance_info (CDI_POST_DOMINATORS);
5015 if (dump_file)
5016 fprintf (dump_file, "Will split live ranges of parameters at BB %i\n",
5017 call_dom->index);
5019 bool ret = false;
5020 FOR_BB_INSNS (first, insn)
5022 rtx dest = interesting_dest_for_shprep (insn, call_dom);
5023 if (!dest || dest == pic_offset_table_rtx)
5024 continue;
5026 bool need_newreg = false;
5027 df_ref use, next;
5028 for (use = DF_REG_USE_CHAIN (REGNO (dest)); use; use = next)
5030 rtx_insn *uin = DF_REF_INSN (use);
5031 next = DF_REF_NEXT_REG (use);
5033 if (DEBUG_INSN_P (uin))
5034 continue;
5036 basic_block ubb = BLOCK_FOR_INSN (uin);
5037 if (ubb == call_dom
5038 || dominated_by_p (CDI_DOMINATORS, ubb, call_dom))
5040 need_newreg = true;
5041 break;
5045 if (need_newreg)
5047 rtx newreg = ira_create_new_reg (dest);
5049 for (use = DF_REG_USE_CHAIN (REGNO (dest)); use; use = next)
5051 rtx_insn *uin = DF_REF_INSN (use);
5052 next = DF_REF_NEXT_REG (use);
5054 basic_block ubb = BLOCK_FOR_INSN (uin);
5055 if (ubb == call_dom
5056 || dominated_by_p (CDI_DOMINATORS, ubb, call_dom))
5057 validate_change (uin, DF_REF_REAL_LOC (use), newreg, true);
5060 rtx_insn *new_move = gen_move_insn (newreg, dest);
5061 emit_insn_after (new_move, bb_note (call_dom));
5062 if (dump_file)
5064 fprintf (dump_file, "Split live-range of register ");
5065 print_rtl_single (dump_file, dest);
5067 ret = true;
5070 if (insn == last_interesting_insn)
5071 break;
5073 apply_change_group ();
5074 return ret;
5077 /* Perform the second half of the transformation started in
5078 find_moveable_pseudos. We look for instances where the newly introduced
5079 pseudo remains unallocated, and remove it by moving the definition to
5080 just before its use, replacing the move instruction generated by
5081 find_moveable_pseudos. */
5082 static void
5083 move_unallocated_pseudos (void)
5085 int i;
5086 for (i = first_moveable_pseudo; i < last_moveable_pseudo; i++)
5087 if (reg_renumber[i] < 0)
5089 int idx = i - first_moveable_pseudo;
5090 rtx other_reg = pseudo_replaced_reg[idx];
5091 rtx_insn *def_insn = DF_REF_INSN (DF_REG_DEF_CHAIN (i));
5092 /* The use must follow all definitions of OTHER_REG, so we can
5093 insert the new definition immediately after any of them. */
5094 df_ref other_def = DF_REG_DEF_CHAIN (REGNO (other_reg));
5095 rtx_insn *move_insn = DF_REF_INSN (other_def);
5096 rtx_insn *newinsn = emit_insn_after (PATTERN (def_insn), move_insn);
5097 rtx set;
5098 int success;
5100 if (dump_file)
5101 fprintf (dump_file, "moving def of %d (insn %d now) ",
5102 REGNO (other_reg), INSN_UID (def_insn));
5104 delete_insn (move_insn);
5105 while ((other_def = DF_REG_DEF_CHAIN (REGNO (other_reg))))
5106 delete_insn (DF_REF_INSN (other_def));
5107 delete_insn (def_insn);
5109 set = single_set (newinsn);
5110 success = validate_change (newinsn, &SET_DEST (set), other_reg, 0);
5111 gcc_assert (success);
5112 if (dump_file)
5113 fprintf (dump_file, " %d) rather than keep unallocated replacement %d\n",
5114 INSN_UID (newinsn), i);
5115 SET_REG_N_REFS (i, 0);
5119 /* If the backend knows where to allocate pseudos for hard
5120 register initial values, register these allocations now. */
5121 static void
5122 allocate_initial_values (void)
5124 if (targetm.allocate_initial_value)
5126 rtx hreg, preg, x;
5127 int i, regno;
5129 for (i = 0; HARD_REGISTER_NUM_P (i); i++)
5131 if (! initial_value_entry (i, &hreg, &preg))
5132 break;
5134 x = targetm.allocate_initial_value (hreg);
5135 regno = REGNO (preg);
5136 if (x && REG_N_SETS (regno) <= 1)
5138 if (MEM_P (x))
5139 reg_equiv_memory_loc (regno) = x;
5140 else
5142 basic_block bb;
5143 int new_regno;
5145 gcc_assert (REG_P (x));
5146 new_regno = REGNO (x);
5147 reg_renumber[regno] = new_regno;
5148 /* Poke the regno right into regno_reg_rtx so that even
5149 fixed regs are accepted. */
5150 SET_REGNO (preg, new_regno);
5151 /* Update global register liveness information. */
5152 FOR_EACH_BB_FN (bb, cfun)
5154 if (REGNO_REG_SET_P (df_get_live_in (bb), regno))
5155 SET_REGNO_REG_SET (df_get_live_in (bb), new_regno);
5156 if (REGNO_REG_SET_P (df_get_live_out (bb), regno))
5157 SET_REGNO_REG_SET (df_get_live_out (bb), new_regno);
5163 gcc_checking_assert (! initial_value_entry (FIRST_PSEUDO_REGISTER,
5164 &hreg, &preg));
5169 /* True when we use LRA instead of reload pass for the current
5170 function. */
5171 bool ira_use_lra_p;
5173 /* True if we have allocno conflicts. It is false for non-optimized
5174 mode or when the conflict table is too big. */
5175 bool ira_conflicts_p;
5177 /* Saved between IRA and reload. */
5178 static int saved_flag_ira_share_spill_slots;
5180 /* This is the main entry of IRA. */
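/* Roughly, the phases below are: equivalence discovery
   (update_equiv_regs and friends), building the IRA internal
   representation (ira_build), coloring (ira_color), emitting the
   allocation (ira_emit), flattening/assignment fix-ups, and finally
   setting up reg_renumber for reload or LRA.  */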
5181 static void
5182 ira (FILE *f)
5184 bool loops_p;
5185 int ira_max_point_before_emit;
5186 bool saved_flag_caller_saves = flag_caller_saves;
5187 enum ira_region saved_flag_ira_region = flag_ira_region;
5188 unsigned int i;
5189 int num_used_regs = 0;
5191 clear_bb_flags ();
5193 /* Determine if the current function is a leaf before running IRA
5194 since this can impact optimizations done by the prologue and
5195 epilogue thus changing register elimination offsets.
5196 Other target callbacks may use crtl->is_leaf too, including
5197 SHRINK_WRAPPING_ENABLED, so initialize as early as possible. */
5198 crtl->is_leaf = leaf_function_p ();
5200 /* Perform target specific PIC register initialization. */
5201 targetm.init_pic_reg ();
5203 ira_conflicts_p = optimize > 0;
5205 /* Determine the number of pseudos actually requiring coloring. */
5206 for (i = FIRST_PSEUDO_REGISTER; i < DF_REG_SIZE (df); i++)
5207 num_used_regs += !!(DF_REG_USE_COUNT (i) + DF_REG_DEF_COUNT (i));
5209 /* If there are too many pseudos and/or basic blocks (e.g. 10K
5210 pseudos and 10K blocks or 100K pseudos and 1K blocks), we will
5211 use simplified and faster algorithms in LRA. */
5212 lra_simple_p
5213 = (ira_use_lra_p
5214 && num_used_regs >= (1 << 26) / last_basic_block_for_fn (cfun));
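/* I.e. roughly num_used_regs * last_basic_block_for_fn (cfun) >= 2^26,
   which matches the examples in the comment above.  */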
5216 if (lra_simple_p)
5218 /* This permits skipping live range splitting in LRA.  */
5219 flag_caller_saves = false;
5220 /* It makes no sense to do regional allocation when we use
5221 the simplified LRA.  */
5222 flag_ira_region = IRA_REGION_ONE;
5223 ira_conflicts_p = false;
5226 #ifndef IRA_NO_OBSTACK
5227 gcc_obstack_init (&ira_obstack);
5228 #endif
5229 bitmap_obstack_initialize (&ira_bitmap_obstack);
5231 /* LRA uses its own infrastructure to handle caller save registers. */
5232 if (flag_caller_saves && !ira_use_lra_p)
5233 init_caller_save ();
5235 if (flag_ira_verbose < 10)
5237 internal_flag_ira_verbose = flag_ira_verbose;
5238 ira_dump_file = f;
5240 else
5242 internal_flag_ira_verbose = flag_ira_verbose - 10;
5243 ira_dump_file = stderr;
5246 setup_prohibited_mode_move_regs ();
5247 decrease_live_ranges_number ();
5248 df_note_add_problem ();
5250 /* DF_LIVE can't be used in the register allocator, too many other
5251 parts of the compiler depend on using the "classic" liveness
5252 interpretation of the DF_LR problem. See PR38711.
5253 Remove the problem, so that we don't spend time updating it in
5254 any of the df_analyze() calls during IRA/LRA. */
5255 if (optimize > 1)
5256 df_remove_problem (df_live);
5257 gcc_checking_assert (df_live == NULL);
5259 if (flag_checking)
5260 df->changeable_flags |= DF_VERIFY_SCHEDULED;
5262 df_analyze ();
5264 init_reg_equiv ();
5265 if (ira_conflicts_p)
5267 calculate_dominance_info (CDI_DOMINATORS);
5269 if (split_live_ranges_for_shrink_wrap ())
5270 df_analyze ();
5272 free_dominance_info (CDI_DOMINATORS);
5275 df_clear_flags (DF_NO_INSN_RESCAN);
5277 indirect_jump_optimize ();
5278 if (delete_trivially_dead_insns (get_insns (), max_reg_num ()))
5279 df_analyze ();
5281 regstat_init_n_sets_and_refs ();
5282 regstat_compute_ri ();
5284 /* If we are not optimizing, then this is the only place before
5285 register allocation where dataflow is done. And that is needed
5286 to generate these warnings. */
5287 if (warn_clobbered)
5288 generate_setjmp_warnings ();
5290 if (resize_reg_info () && flag_ira_loop_pressure)
5291 ira_set_pseudo_classes (true, ira_dump_file);
5293 init_alias_analysis ();
5294 loop_optimizer_init (AVOID_CFG_MODIFICATIONS);
5295 reg_equiv = XCNEWVEC (struct equivalence, max_reg_num ());
5296 update_equiv_regs_prescan ();
5297 update_equiv_regs ();
5299 /* Don't move insns if live range shrinkage or register
5300 pressure-sensitive scheduling was done, because doing so will not
5301 improve allocation but will likely worsen insn scheduling.  */
5302 if (optimize
5303 && !flag_live_range_shrinkage
5304 && !(flag_sched_pressure && flag_schedule_insns))
5305 combine_and_move_insns ();
5307 /* Gather additional equivalences with memory. */
5308 if (optimize)
5309 add_store_equivs ();
5311 loop_optimizer_finalize ();
5312 free_dominance_info (CDI_DOMINATORS);
5313 end_alias_analysis ();
5314 free (reg_equiv);
5316 setup_reg_equiv ();
5317 grow_reg_equivs ();
5318 setup_reg_equiv_init ();
5320 allocated_reg_info_size = max_reg_num ();
5322 /* It is not worth doing such an improvement when we use simple
5323 allocation because of -O0 usage or because the function is too
5324 big.  */
5325 if (ira_conflicts_p)
5326 find_moveable_pseudos ();
5328 max_regno_before_ira = max_reg_num ();
5329 ira_setup_eliminable_regset ();
5331 ira_overall_cost = ira_reg_cost = ira_mem_cost = 0;
5332 ira_load_cost = ira_store_cost = ira_shuffle_cost = 0;
5333 ira_move_loops_num = ira_additional_jumps_num = 0;
5335 ira_assert (current_loops == NULL);
5336 if (flag_ira_region == IRA_REGION_ALL || flag_ira_region == IRA_REGION_MIXED)
5337 loop_optimizer_init (AVOID_CFG_MODIFICATIONS | LOOPS_HAVE_RECORDED_EXITS);
5339 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
5340 fprintf (ira_dump_file, "Building IRA IR\n");
5341 loops_p = ira_build ();
5343 ira_assert (ira_conflicts_p || !loops_p);
5345 saved_flag_ira_share_spill_slots = flag_ira_share_spill_slots;
5346 if (too_high_register_pressure_p () || cfun->calls_setjmp)
5347 /* It just wastes the compiler's time to pack spilled pseudos into
5348 stack slots in this case -- prohibit it.  We also do this if
5349 there is a setjmp call, because the compiler is required to
5350 preserve the value of a variable not modified between setjmp and
5351 longjmp, and sharing slots does not guarantee that.  */
5352 flag_ira_share_spill_slots = FALSE;
5354 ira_color ();
5356 ira_max_point_before_emit = ira_max_point;
5358 ira_initiate_emit_data ();
5360 ira_emit (loops_p);
5362 max_regno = max_reg_num ();
5363 if (ira_conflicts_p)
5365 if (! loops_p)
5367 if (! ira_use_lra_p)
5368 ira_initiate_assign ();
5370 else
5372 expand_reg_info ();
5374 if (ira_use_lra_p)
5376 ira_allocno_t a;
5377 ira_allocno_iterator ai;
5379 FOR_EACH_ALLOCNO (a, ai)
5381 int old_regno = ALLOCNO_REGNO (a);
5382 int new_regno = REGNO (ALLOCNO_EMIT_DATA (a)->reg);
5384 ALLOCNO_REGNO (a) = new_regno;
5386 if (old_regno != new_regno)
5387 setup_reg_classes (new_regno, reg_preferred_class (old_regno),
5388 reg_alternate_class (old_regno),
5389 reg_allocno_class (old_regno));
5392 else
5394 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
5395 fprintf (ira_dump_file, "Flattening IR\n");
5396 ira_flattening (max_regno_before_ira, ira_max_point_before_emit);
5398 /* New insns were generated: add notes and recalculate live
5399 info. */
5400 df_analyze ();
5402 /* ??? Rebuild the loop tree, but why? Does the loop tree
5403 change if new insns were generated? Can that be handled
5404 by updating the loop tree incrementally? */
5405 loop_optimizer_finalize ();
5406 free_dominance_info (CDI_DOMINATORS);
5407 loop_optimizer_init (AVOID_CFG_MODIFICATIONS
5408 | LOOPS_HAVE_RECORDED_EXITS);
5410 if (! ira_use_lra_p)
5412 setup_allocno_assignment_flags ();
5413 ira_initiate_assign ();
5414 ira_reassign_conflict_allocnos (max_regno);
5419 ira_finish_emit_data ();
5421 setup_reg_renumber ();
5423 calculate_allocation_cost ();
5425 #ifdef ENABLE_IRA_CHECKING
5426 if (ira_conflicts_p && ! ira_use_lra_p)
5427 /* Unlike the reload pass, LRA does not use any conflict info
5428 from IRA.  We don't rebuild conflict info for LRA (through an
5429 ira_flattening call) and cannot use the check here.  We could
5430 rebuild this info for LRA in checking mode, but there is a risk
5431 that code generated with and without checking would be a bit
5432 different.  Calling ira_flattening in any mode would just be
5433 wasting CPU time.  So do not check the allocation for LRA.  */
5434 check_allocation ();
5435 #endif
5437 if (max_regno != max_regno_before_ira)
5438 {
5439 regstat_free_n_sets_and_refs ();
5440 regstat_free_ri ();
5441 regstat_init_n_sets_and_refs ();
5442 regstat_compute_ri ();
5443 }
5445 overall_cost_before = ira_overall_cost;
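/* The saved value lets do_reload report how much reload/LRA changed
   the estimated allocation cost.  */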
5446 if (! ira_conflicts_p)
5447 grow_reg_equivs ();
5448 else
5449 {
5450 fix_reg_equiv_init ();
5452 #ifdef ENABLE_IRA_CHECKING
5453 print_redundant_copies ();
5454 #endif
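/* For the classic reload pass, IRA manages the stack slots of spilled
   pseudos itself; reload obtains them through ira_reuse_stack_slot and
   ira_mark_new_stack_slot (see ira-color.c).  LRA keeps its own spill
   slot data, so the table below is only needed when reload will run.  */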
5455 if (! ira_use_lra_p)
5456 {
5457 ira_spilled_reg_stack_slots_num = 0;
5458 ira_spilled_reg_stack_slots
5459 = ((class ira_spilled_reg_stack_slot *)
5460 ira_allocate (max_regno
5461 * sizeof (class ira_spilled_reg_stack_slot)));
5462 memset ((void *)ira_spilled_reg_stack_slots, 0,
5463 max_regno * sizeof (class ira_spilled_reg_stack_slot));
5464 }
5465 }
5466 allocate_initial_values ();
5468 /* See comment for find_moveable_pseudos call. */
5469 if (ira_conflicts_p)
5470 move_unallocated_pseudos ();
5472 /* Restore original values. */
5473 if (lra_simple_p)
5474 {
5475 flag_caller_saves = saved_flag_caller_saves;
5476 flag_ira_region = saved_flag_ira_region;
5477 }
5478 }
5480 static void
5481 do_reload (void)
5482 {
5483 basic_block bb;
5484 bool need_dce;
5485 unsigned pic_offset_table_regno = INVALID_REGNUM;
5487 if (flag_ira_verbose < 10)
5488 ira_dump_file = dump_file;
5490 /* If pic_offset_table_rtx is a pseudo register, keep it pointing
5491 at the pseudo after reload to avoid possible wrong uses of the
5492 hard register assigned to it. */
5493 if (pic_offset_table_rtx
5494 && REGNO (pic_offset_table_rtx) >= FIRST_PSEUDO_REGISTER)
5495 pic_offset_table_regno = REGNO (pic_offset_table_rtx);
5497 timevar_push (TV_RELOAD);
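/* Two ways to finish the job: on targets where targetm.lra_p () is
   true we run LRA, which no longer needs IRA's region or loop data;
   otherwise we build the insn chain and run the old reload pass.  */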
5498 if (ira_use_lra_p)
5499 {
5500 if (current_loops != NULL)
5501 {
5502 loop_optimizer_finalize ();
5503 free_dominance_info (CDI_DOMINATORS);
5504 }
5505 FOR_ALL_BB_FN (bb, cfun)
5506 bb->loop_father = NULL;
5507 current_loops = NULL;
5509 ira_destroy ();
5511 lra (ira_dump_file);
5512 /* ???!!! Move it before lra () when we use ira_reg_equiv in
5513 LRA. */
5514 vec_free (reg_equivs);
5515 reg_equivs = NULL;
5516 need_dce = false;
5517 }
5518 else
5519 {
5520 df_set_flags (DF_NO_INSN_RESCAN);
5521 build_insn_chain ();
5523 need_dce = reload (get_insns (), ira_conflicts_p);
5524 }
5526 timevar_pop (TV_RELOAD);
5528 timevar_push (TV_IRA);
5530 if (ira_conflicts_p && ! ira_use_lra_p)
5531 {
5532 ira_free (ira_spilled_reg_stack_slots);
5533 ira_finish_assign ();
5534 }
5536 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL
5537 && overall_cost_before != ira_overall_cost)
5538 fprintf (ira_dump_file, "+++Overall after reload %" PRId64 "\n",
5539 ira_overall_cost);
5541 flag_ira_share_spill_slots = saved_flag_ira_share_spill_slots;
5543 if (! ira_use_lra_p)
5544 {
5545 ira_destroy ();
5546 if (current_loops != NULL)
5547 {
5548 loop_optimizer_finalize ();
5549 free_dominance_info (CDI_DOMINATORS);
5550 }
5551 FOR_ALL_BB_FN (bb, cfun)
5552 bb->loop_father = NULL;
5553 current_loops = NULL;
5555 regstat_free_ri ();
5556 regstat_free_n_sets_and_refs ();
5557 }
5559 if (optimize)
5560 cleanup_cfg (CLEANUP_EXPENSIVE);
5562 finish_reg_equiv ();
5564 bitmap_obstack_release (&ira_bitmap_obstack);
5565 #ifndef IRA_NO_OBSTACK
5566 obstack_free (&ira_obstack, NULL);
5567 #endif
5569 /* The code after the reload has changed so much that at this point
5570 we might as well just rescan everything. Note that
5571 df_rescan_all_insns is not going to help here because it does not
5572 touch the artificial uses and defs. */
5573 df_finish_pass (true);
5574 df_scan_alloc (NULL);
5575 df_scan_blocks ();
5577 if (optimize > 1)
5578 {
5579 df_live_add_problem ();
5580 df_live_set_all_dirty ();
5581 }
5583 if (optimize)
5584 df_analyze ();
5586 if (need_dce && optimize)
5587 run_fast_dce ();
5589 /* Diagnose uses of the hard frame pointer when it is used as a global
5590 register. Often we can get away with letting the user appropriate
5591 the frame pointer, but we should let them know when code generation
5592 makes that impossible. */
5593 if (global_regs[HARD_FRAME_POINTER_REGNUM] && frame_pointer_needed)
5594 {
5595 tree decl = global_regs_decl[HARD_FRAME_POINTER_REGNUM];
5596 error_at (DECL_SOURCE_LOCATION (current_function_decl),
5597 "frame pointer required, but reserved");
5598 inform (DECL_SOURCE_LOCATION (decl), "for %qD", decl);
5599 }
5601 /* If we are doing generic stack checking, give a warning if this
5602 function's frame size is larger than we expect. */
5603 if (flag_stack_check == GENERIC_STACK_CHECK)
5604 {
5605 poly_int64 size = get_frame_size () + STACK_CHECK_FIXED_FRAME_SIZE;
5607 for (int i = 0; i < FIRST_PSEUDO_REGISTER; i++)
5608 if (df_regs_ever_live_p (i)
5609 && !fixed_regs[i]
5610 && !crtl->abi->clobbers_full_reg_p (i))
5611 size += UNITS_PER_WORD;
5613 if (constant_lower_bound (size) > STACK_CHECK_MAX_FRAME_SIZE)
5614 warning (0, "frame size too large for reliable stack checking");
5615 }
5617 if (pic_offset_table_regno != INVALID_REGNUM)
5618 pic_offset_table_rtx = gen_rtx_REG (Pmode, pic_offset_table_regno);
5620 timevar_pop (TV_IRA);
5621 }
5623 /* Run the integrated register allocator. */
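/* The two opt_pass wrappers below expose ira () and do_reload () as
   the "ira" and "reload" RTL passes; both are skipped when the target
   sets targetm.no_register_allocation.  */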
5625 namespace {
5627 const pass_data pass_data_ira =
5628 {
5629 RTL_PASS, /* type */
5630 "ira", /* name */
5631 OPTGROUP_NONE, /* optinfo_flags */
5632 TV_IRA, /* tv_id */
5633 0, /* properties_required */
5634 0, /* properties_provided */
5635 0, /* properties_destroyed */
5636 0, /* todo_flags_start */
5637 TODO_do_not_ggc_collect, /* todo_flags_finish */
5638 };
5640 class pass_ira : public rtl_opt_pass
5641 {
5642 public:
5643 pass_ira (gcc::context *ctxt)
5644 : rtl_opt_pass (pass_data_ira, ctxt)
5645 {}
5647 /* opt_pass methods: */
5648 virtual bool gate (function *)
5649 {
5650 return !targetm.no_register_allocation;
5651 }
5652 virtual unsigned int execute (function *)
5653 {
5654 ira (dump_file);
5655 return 0;
5656 }
5658 }; // class pass_ira
5660 } // anon namespace
5662 rtl_opt_pass *
5663 make_pass_ira (gcc::context *ctxt)
5664 {
5665 return new pass_ira (ctxt);
5666 }
5668 namespace {
5670 const pass_data pass_data_reload =
5671 {
5672 RTL_PASS, /* type */
5673 "reload", /* name */
5674 OPTGROUP_NONE, /* optinfo_flags */
5675 TV_RELOAD, /* tv_id */
5676 0, /* properties_required */
5677 0, /* properties_provided */
5678 0, /* properties_destroyed */
5679 0, /* todo_flags_start */
5680 0, /* todo_flags_finish */
5681 };
5683 class pass_reload : public rtl_opt_pass
5684 {
5685 public:
5686 pass_reload (gcc::context *ctxt)
5687 : rtl_opt_pass (pass_data_reload, ctxt)
5688 {}
5690 /* opt_pass methods: */
5691 virtual bool gate (function *)
5692 {
5693 return !targetm.no_register_allocation;
5694 }
5695 virtual unsigned int execute (function *)
5696 {
5697 do_reload ();
5698 return 0;
5699 }
5701 }; // class pass_reload
5703 } // anon namespace
5705 rtl_opt_pass *
5706 make_pass_reload (gcc::context *ctxt)
5707 {
5708 return new pass_reload (ctxt);
5709 }