gcc/ira.c
1 /* Integrated Register Allocator (IRA) entry point.
2 Copyright (C) 2006-2015 Free Software Foundation, Inc.
3 Contributed by Vladimir Makarov <vmakarov@redhat.com>.
5 This file is part of GCC.
7 GCC is free software; you can redistribute it and/or modify it under
8 the terms of the GNU General Public License as published by the Free
9 Software Foundation; either version 3, or (at your option) any later
10 version.
12 GCC is distributed in the hope that it will be useful, but WITHOUT ANY
13 WARRANTY; without even the implied warranty of MERCHANTABILITY or
14 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
15 for more details.
17 You should have received a copy of the GNU General Public License
18 along with GCC; see the file COPYING3. If not see
19 <http://www.gnu.org/licenses/>. */
21 /* The integrated register allocator (IRA) is a
22 regional register allocator performing graph coloring on a top-down
23 traversal of nested regions. Graph coloring in a region is based
24 on the Chaitin-Briggs algorithm. It is called integrated because
25 register coalescing, register live range splitting, and choosing a
26 better hard register are done on-the-fly during coloring. Register
27 coalescing and choosing a cheaper hard register are done by hard
28 register preferencing during hard register assignment. Live
29 range splitting is a byproduct of the regional register allocation.
31 Major IRA notions are:
33 o *Region* is a part of CFG where graph coloring based on
34 Chaitin-Briggs algorithm is done. IRA can work on any set of
35 nested CFG regions forming a tree. Currently the regions are
36 the entire function for the root region and natural loops for
37 the other regions. Therefore data structure representing a
38 region is called loop_tree_node.
40 o *Allocno class* is a register class used for allocation of
41 a given allocno. It means that only a hard register of the given
42 register class can be assigned to the given allocno. In reality,
43 an even smaller subset of (*profitable*) hard registers can be
44 assigned. In rare cases, the subset can be smaller still
45 because our modification of the Chaitin-Briggs algorithm requires
46 that the sets of hard registers assignable to allocnos form a
47 forest, i.e. the sets can be ordered so that any earlier
48 set either does not intersect a given set or is a superset
49 of the given set.
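
     As an illustration of this forest property (a made-up register
     layout, not that of any particular target), the sets {r0..r7},
     {r0..r3}, {r0,r1} and {r8..r11} can be ordered as required, since
     each pair is either disjoint or related by inclusion:

        {r0..r7}      {r8..r11}
           |
        {r0..r3}
           |
        {r0,r1}

     A set like {r2..r5}, which intersects {r0..r3} without either
     containing the other, could only be handled through the
     approximation used during coloring (see below).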
51 o *Pressure class* is a register class belonging to a set of
52 register classes containing all of the hard-registers available
53 for register allocation. The set of all pressure classes for a
54 target is defined in the corresponding machine-description file
55 according to some criteria. Register pressure is calculated only
56 for pressure classes and it affects some IRA decisions, such as
57 forming allocation regions.
59 o *Allocno* represents the live range of a pseudo-register in a
60 region. Besides the obvious attributes like the corresponding
61 pseudo-register number, allocno class, conflicting allocnos and
62 conflicting hard-registers, there are a few allocno attributes
63 which are important for understanding the allocation algorithm:
65 - *Live ranges*. This is a list of ranges of *program points*
66 where the allocno lives. Program points represent places
67 where a pseudo can be born or become dead (there are
68 approximately two times more program points than insns)
69 and they are represented by integers starting with 0. The
70 live ranges are used to find conflicts between allocnos.
71 They also play a very important role in the transformation of
72 the IRA internal representation of several regions into a
73 one-region representation. The latter is used during the
74 work of the reload pass because each allocno represents all of
75 the corresponding pseudo-registers.
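
       A small made-up illustration (hypothetical pseudos and point
       numbering, not a real dump): for a block containing

          i1: r100 = r101 + r102
          i2: r103 = r100 * 2
          i3: r104 = r103 - r101

       the program points might be numbered 0, 1, ... around the insns;
       r100 is born in i1 and dies in i2, so its allocno gets a single
       live range such as [1..2], while r101, used by both i1 and i3,
       gets a range covering the whole block, e.g. [0..4]. Conflicts
       between allocnos are then found by intersecting such ranges.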
77 - *Hard-register costs*. This is a vector of size equal to the
78 number of available hard-registers of the allocno class. The
79 cost of a callee-clobbered hard-register for an allocno is
80 increased by the cost of save/restore code around the calls
81 through the given allocno's life. If the allocno is a move
82 instruction operand and another operand is a hard-register of
83 the allocno class, the cost of the hard-register is decreased
84 by the move cost.
86 When an allocno is assigned, the hard-register with minimal
87 full cost is used. Initially, a hard-register's full cost is
88 the corresponding value from the hard-register's cost vector.
89 If the allocno is connected by a *copy* (see below) to
90 another allocno which has just received a hard-register, the
91 cost of the hard-register is decreased. Before choosing a
92 hard-register for an allocno, the allocno's current costs of
93 the hard-registers are modified by the conflict hard-register
94 costs of all of the conflicting allocnos which are not
95 assigned yet.
97 - *Conflict hard-register costs*. This is a vector of the same
98 size as the hard-register costs vector. To permit an
99 unassigned allocno to get a better hard-register, IRA uses
100 this vector to calculate the final full cost of the
101 available hard-registers. Conflict hard-register costs of an
102 unassigned allocno are also changed with a change of the
103 hard-register cost of the allocno when a copy involving the
104 allocno is processed as described above. This is done to
105 show other unassigned allocnos that a given allocno prefers
106 some hard-registers in order to remove the move instruction
107 corresponding to the copy.
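
       A minimal sketch of how the two cost vectors above combine when
       a hard register is chosen (illustrative only; hard_reg_cost[],
       conflict_cost[] and n_class_regs are hypothetical names, where
       conflict_cost[] stands for the conflict costs accumulated from
       still-unassigned conflicting allocnos, not IRA's real data
       structures):

          int best = -1, best_cost = INT_MAX;
          for (i = 0; i < n_class_regs; i++)
            {
              int full_cost = hard_reg_cost[i] + conflict_cost[i];
              if (best < 0 || full_cost < best_cost)
                {
                  best_cost = full_cost;
                  best = i;
                }
            }

       best then indexes the profitable hard register with minimal full
       cost, or stays -1 if the class has no hard registers.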
109 o *Cap*. If a pseudo-register does not live in a region but
110 lives in a nested region, IRA creates a special allocno called
111 a cap in the outer region. A region cap is also created for a
112 subregion cap.
114 o *Copy*. Allocnos can be connected by copies. Copies are used
115 to modify hard-register costs for allocnos during coloring.
116 Such modifications reflect a preference to use the same
117 hard-register for the allocnos connected by copies. Usually
118 copies are created for move insns (in this case it results in
119 register coalescing). But IRA also creates copies for operands
120 of an insn which should be assigned to the same hard-register
121 due to constraints in the machine description (it usually
122 results in removing a move generated in reload to satisfy
123 the constraints) and copies connecting the allocno which is
124 the output operand of an instruction with the allocno which is
125 an input operand dying in the instruction (creation of such
126 copies results in less register shuffling). IRA *does not*
127 create copies between allocnos of the same register from different
128 regions because we use another technique for propagating
129 hard-register preferences across the borders of regions.
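
     For instance (illustrative RTL, not taken from a real dump), for a
     move insn

        (set (reg:SI 200) (reg:SI 100))

     IRA creates a copy connecting the allocnos of pseudos 200 and 100,
     so that assigning them the same hard register, and thereby removing
     the move, is preferred during coloring; a two-operand insn whose
     constraints tie the output to an input (a matching "0" constraint)
     produces a similar copy.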
131 Allocnos (including caps) for the upper region in the region tree
132 *accumulate* information important for coloring from allocnos with
133 the same pseudo-register from nested regions. This includes
134 hard-register and memory costs, conflicts with hard-registers,
135 allocno conflicts, allocno copies and more. *Thus, attributes for
136 allocnos in a region have the same values as if the region had no
137 subregions*. It means that attributes for allocnos in the
138 outermost region corresponding to the function have the same values
139 as though the allocation used only one region which is the entire
140 function. It also means that we can view IRA's work as if
141 IRA first did the allocation for the whole function, then improved
142 the allocation for loops, then their subloops, and so on.
144 IRA major passes are:
146 o Building IRA internal representation which consists of the
147 following subpasses:
149 * First, IRA builds regions and creates allocnos (file
150 ira-build.c) and initializes most of their attributes.
152 * Then IRA finds an allocno class for each allocno and
153 calculates its initial (non-accumulated) costs of memory and of
154 each hard-register of its allocno class (file ira-costs.c).
156 * IRA creates live ranges of each allocno, calculates register
157 pressure for each pressure class in each region, sets up
158 conflict hard registers for each allocno and info about calls
159 the allocno lives through (file ira-lives.c).
161 * IRA removes low register pressure loops from the regions
162 mostly to speed IRA up (file ira-build.c).
164 * IRA propagates accumulated allocno info from lower region
165 allocnos to corresponding upper region allocnos (file
166 ira-build.c).
168 * IRA creates all caps (file ira-build.c).
170 * Having live-ranges of allocnos and their classes, IRA creates
171 conflicting allocnos for each allocno. Conflicting allocnos
172 are stored as a bit vector or array of pointers to the
173 conflicting allocnos whatever is more profitable (file
174 ira-conflicts.c). At this point IRA creates allocno copies.
176 o Coloring. Now IRA has all the necessary info to start the graph
177 coloring process. It is done in each region on a top-down traversal
178 of the region tree (file ira-color.c). There are the following subpasses:
180 * Finding the profitable hard registers of the corresponding
181 allocno class for each allocno. For example, often only
182 callee-saved hard registers are profitable for allocnos living
183 through calls. If the profitable hard register sets of the
184 allocnos do not form a tree based on the subset relation, we use
185 some approximation to form the tree. This approximation is
186 used to figure out trivial colorability of allocnos. The
187 approximation is needed only in pretty rare cases.
189 * Putting allocnos onto the coloring stack. IRA uses Briggs
190 optimistic coloring which is a major improvement over
191 Chaitin's coloring. Therefore IRA does not spill allocnos at
192 this point. There is some freedom in the order of putting
193 allocnos on the stack which can affect the final result of
194 the allocation. IRA uses some heuristics to improve the
195 order. The major one is to form *threads* from colorable
196 allocnos and push them onto the stack by threads. A thread is a
197 set of non-conflicting colorable allocnos connected by
198 copies. The thread contains allocnos from the colorable
199 bucket or colorable allocnos already pushed onto the coloring
200 stack. Pushing thread allocnos one after another onto the
201 stack increases the chances of removing copies when the allocnos
202 get the same hard reg.
204 We also use a modification of the Chaitin-Briggs algorithm which
205 works for intersecting register classes of allocnos. To
206 figure out trivial colorability of allocnos, the tree of
207 hard register sets mentioned above is used. To get an idea of how
208 the algorithm works, consider an i386 example: an
209 allocno to which any general hard register can be assigned.
210 If the allocno conflicts with eight allocnos to which only the
211 EAX register can be assigned, the given allocno is still
212 trivially colorable because all the conflicting allocnos can be
213 assigned only to EAX and all other general hard registers are
214 still free.
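
       The counting behind this example can be sketched as follows (a
       deliberately simplified form which assumes every allocno needs a
       single hard register; the real test in ira-color.c works on the
       whole tree of register sets):

          left = 8;                  all general regs profitable for A
          confined = 8;              conflicting allocnos confined to {EAX}
          left -= MIN (confined, 1); they can occupy at most |{EAX}| = 1 reg
          left -= 0;                 no other conflicting allocnos

       A is trivially colorable because left = 7 > 0.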
216 To get an idea of the used trivial colorability criterion, it
217 is also useful to read the article "Graph-Coloring Register
218 Allocation for Irregular Architectures" by Michael D. Smith
219 and Glenn Holloway. The major difference between the article's
220 approach and the approach used in IRA is that Smith's approach
221 takes register classes only from the machine description while IRA
222 also calculates register classes from the intermediate code
223 (e.g. an explicit usage of hard registers in RTL code for
224 parameter passing can result in the creation of additional
225 register classes which contain or exclude the hard
226 registers). That makes the IRA approach useful for improving
227 coloring even for architectures with regular register files,
228 and in fact some benchmarking shows that the improvement for
229 regular class architectures is even bigger than for irregular
230 ones. Another difference is that Smith's approach chooses the
231 intersection of the classes of all insn operands in which a given
232 pseudo occurs, while IRA can use a bigger class if it is still
233 more profitable than using memory.
235 * Popping the allocnos from the stack and assigning them hard
236 registers. If IRA cannot assign a hard register to an
237 allocno and the allocno is coalesced, IRA undoes the
238 coalescing and puts the uncoalesced allocnos onto the stack in
239 the hope that some such allocnos will get a hard register
240 separately. If IRA fails to assign a hard register or memory
241 is more profitable for it, IRA spills the allocno. IRA
242 assigns the allocno the hard-register with minimal full
243 allocation cost which reflects the cost of usage of the
244 hard-register for the allocno and the cost of usage of the
245 hard-register for allocnos conflicting with the given allocno.
247 * Chaitin-Briggs coloring assigns as many pseudos as possible
248 to hard registers. After coloring we try to improve the
249 allocation from the cost point of view. We improve the
250 allocation by spilling some allocnos and assigning the freed
251 hard registers to other allocnos if it decreases the overall
252 allocation cost.
254 * After allocno assignment in the region, IRA modifies the hard
255 register and memory costs for the corresponding allocnos in
256 the subregions to reflect the cost of possible loads, stores,
257 or moves on the border of the region and its subregions.
258 When the default regional allocation algorithm is used
259 (-fira-algorithm=mixed), IRA just propagates the assignment
260 for allocnos if the register pressure in the region for the
261 corresponding pressure class is less than the number of
262 available hard registers for the given pressure class.
264 o Spill/restore code moving. When IRA performs an allocation
265 by traversing regions in top-down order, it does not know what
266 happens below in the region tree. Therefore, sometimes IRA
267 misses opportunities to perform a better allocation. A simple
268 optimization tries to improve the allocation in a region that has
269 subregions and is itself contained in another region. If the
270 corresponding allocnos in the subregion are spilled, it spills
271 the region allocno if it is profitable. The optimization
272 implements a simple iterative algorithm performing profitable
273 transformations while they are still possible. It is fast in
274 practice, so there is no real need for an algorithm with better
275 time complexity.
277 o Code change. After coloring, two allocnos representing the
278 same pseudo-register outside and inside a region respectively
279 may be assigned to different locations (hard-registers or
280 memory). In this case IRA creates and uses a new
281 pseudo-register inside the region and adds code to move allocno
282 values on the region's borders. This is done during top-down
283 traversal of the regions (file ira-emit.c). In some
284 complicated cases IRA can create a new allocno to move allocno
285 values (e.g. when a swap of values stored in two hard-registers
286 is needed). At this stage, the new allocno is marked as
287 spilled. IRA still creates the pseudo-register and the moves
288 on the region borders even when both allocnos were assigned to
289 the same hard-register. If the reload pass spills a
290 pseudo-register for some reason, the effect will be smaller
291 because another allocno will still be in the hard-register. In
292 most cases, this is better than spilling both allocnos. If
293 reload does not change the allocation for the two
294 pseudo-registers, the trivial move will be removed by
295 post-reload optimizations. IRA does not generate moves for
296 allocnos assigned to the same hard register when the default
297 regional allocation algorithm is used and the register pressure
298 in the region for the corresponding pressure class is less than the
299 number of available hard registers for the given pressure class.
300 IRA also does some optimizations to remove redundant stores and
301 to reduce code duplication on the region borders.
303 o Flattening internal representation. After changing code, IRA
304 transforms its internal representation for several regions into
305 one region representation (file ira-build.c). This process is
306 called IR flattening. Such a process is more complicated than IR
307 rebuilding would be, but is much faster.
309 o After IR flattening, IRA tries to assign hard registers to all
310 spilled allocnos. This is implemented by a simple and fast
311 priority coloring algorithm (see the function
312 ira_reassign_conflict_allocnos in ira-color.c). Here new allocnos
313 created during the code change pass can be assigned to hard
314 registers.
316 o At the end IRA calls the reload pass. The reload pass
317 communicates with IRA through several functions in file
318 ira-color.c to improve its decisions in
320 * sharing stack slots for the spilled pseudos based on IRA info
321 about pseudo-register conflicts.
323 * reassigning hard-registers to all spilled pseudos at the end
324 of each reload iteration.
326 * choosing a better hard-register to spill based on IRA info
327 about pseudo-register live ranges and the register pressure
328 in places where the pseudo-register lives.
330 IRA uses a lot of data representing the target processors. These
331 data are initialized in file ira.c.
333 If the function has no loops (or the loops are ignored when
334 -fira-algorithm=CB is used), we have classic Chaitin-Briggs
335 coloring (only instead of a separate coalescing pass, we use hard
336 register preferencing). In such a case, IRA works much faster
337 because many things are not done (like IR flattening, the
338 spill/restore optimization, and the code change).
340 Literature worth reading for a better understanding of the code:
342 o Preston Briggs, Keith D. Cooper, Linda Torczon. Improvements to
343 Graph Coloring Register Allocation.
345 o David Callahan, Brian Koblenz. Register allocation via
346 hierarchical graph coloring.
348 o Keith Cooper, Anshuman Dasgupta, Jason Eckhardt. Revisiting Graph
349 Coloring Register Allocation: A Study of the Chaitin-Briggs and
350 Callahan-Koblenz Algorithms.
352 o Guei-Yuan Lueh, Thomas Gross, and Ali-Reza Adl-Tabatabai. Global
353 Register Allocation Based on Graph Fusion.
355 o Michael D. Smith and Glenn Holloway. Graph-Coloring Register
356 Allocation for Irregular Architectures
358 o Vladimir Makarov. The Integrated Register Allocator for GCC.
360 o Vladimir Makarov. The top-down register allocator for irregular
361 register file architectures.
366 #include "config.h"
367 #include "system.h"
368 #include "coretypes.h"
369 #include "tm.h"
370 #include "regs.h"
371 #include "hash-set.h"
372 #include "machmode.h"
373 #include "vec.h"
374 #include "double-int.h"
375 #include "input.h"
376 #include "alias.h"
377 #include "symtab.h"
378 #include "wide-int.h"
379 #include "inchash.h"
380 #include "tree.h"
381 #include "rtl.h"
382 #include "tm_p.h"
383 #include "target.h"
384 #include "flags.h"
385 #include "obstack.h"
386 #include "bitmap.h"
387 #include "hard-reg-set.h"
388 #include "predict.h"
389 #include "function.h"
390 #include "dominance.h"
391 #include "cfg.h"
392 #include "cfgrtl.h"
393 #include "cfgbuild.h"
394 #include "cfgcleanup.h"
395 #include "basic-block.h"
396 #include "df.h"
397 #include "hashtab.h"
398 #include "statistics.h"
399 #include "real.h"
400 #include "fixed-value.h"
401 #include "insn-config.h"
402 #include "expmed.h"
403 #include "dojump.h"
404 #include "explow.h"
405 #include "calls.h"
406 #include "emit-rtl.h"
407 #include "varasm.h"
408 #include "stmt.h"
409 #include "expr.h"
410 #include "recog.h"
411 #include "params.h"
412 #include "tree-pass.h"
413 #include "output.h"
414 #include "except.h"
415 #include "reload.h"
416 #include "diagnostic-core.h"
417 #include "ggc.h"
418 #include "ira-int.h"
419 #include "lra.h"
420 #include "dce.h"
421 #include "dbgcnt.h"
422 #include "rtl-iter.h"
423 #include "shrink-wrap.h"
425 struct target_ira default_target_ira;
426 struct target_ira_int default_target_ira_int;
427 #if SWITCHABLE_TARGET
428 struct target_ira *this_target_ira = &default_target_ira;
429 struct target_ira_int *this_target_ira_int = &default_target_ira_int;
430 #endif
432 /* A modified value of flag `-fira-verbose' used internally. */
433 int internal_flag_ira_verbose;
435 /* Dump file of the allocator if it is not NULL. */
436 FILE *ira_dump_file;
438 /* The number of elements in the following array. */
439 int ira_spilled_reg_stack_slots_num;
441 /* The following array contains info about stack slots of spilled
442 pseudo-registers used in the current function so far. */
443 struct ira_spilled_reg_stack_slot *ira_spilled_reg_stack_slots;
445 /* Correspondingly: the overall cost of the allocation, the overall cost
446 before reload, the cost of the allocnos assigned to hard-registers, the
447 cost of the allocnos assigned to memory, and the cost of loads, stores and
448 register move insns generated for pseudo-register live range splitting
449 (see ira-emit.c). */
450 int64_t ira_overall_cost, overall_cost_before;
451 int64_t ira_reg_cost, ira_mem_cost;
452 int64_t ira_load_cost, ira_store_cost, ira_shuffle_cost;
453 int ira_move_loops_num, ira_additional_jumps_num;
455 /* All registers that can be eliminated. */
457 HARD_REG_SET eliminable_regset;
459 /* Value of max_reg_num () before IRA starts its work. This value helps
460 us to recognize a situation when new pseudos were created during
461 IRA work. */
462 static int max_regno_before_ira;
464 /* Temporary hard reg set used for different calculations. */
465 static HARD_REG_SET temp_hard_regset;
467 #define last_mode_for_init_move_cost \
468 (this_target_ira_int->x_last_mode_for_init_move_cost)
471 /* The function sets up the map IRA_REG_MODE_HARD_REGSET. */
472 static void
473 setup_reg_mode_hard_regset (void)
475 int i, m, hard_regno;
477 for (m = 0; m < NUM_MACHINE_MODES; m++)
478 for (hard_regno = 0; hard_regno < FIRST_PSEUDO_REGISTER; hard_regno++)
480 CLEAR_HARD_REG_SET (ira_reg_mode_hard_regset[hard_regno][m]);
481 for (i = hard_regno_nregs[hard_regno][m] - 1; i >= 0; i--)
482 if (hard_regno + i < FIRST_PSEUDO_REGISTER)
483 SET_HARD_REG_BIT (ira_reg_mode_hard_regset[hard_regno][m],
484 hard_regno + i);
489 #define no_unit_alloc_regs \
490 (this_target_ira_int->x_no_unit_alloc_regs)
492 /* The function sets up the three arrays declared above. */
493 static void
494 setup_class_hard_regs (void)
496 int cl, i, hard_regno, n;
497 HARD_REG_SET processed_hard_reg_set;
499 ira_assert (SHRT_MAX >= FIRST_PSEUDO_REGISTER);
500 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
502 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
503 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
504 CLEAR_HARD_REG_SET (processed_hard_reg_set);
505 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
507 ira_non_ordered_class_hard_regs[cl][i] = -1;
508 ira_class_hard_reg_index[cl][i] = -1;
510 for (n = 0, i = 0; i < FIRST_PSEUDO_REGISTER; i++)
512 #ifdef REG_ALLOC_ORDER
513 hard_regno = reg_alloc_order[i];
514 #else
515 hard_regno = i;
516 #endif
517 if (TEST_HARD_REG_BIT (processed_hard_reg_set, hard_regno))
518 continue;
519 SET_HARD_REG_BIT (processed_hard_reg_set, hard_regno);
520 if (! TEST_HARD_REG_BIT (temp_hard_regset, hard_regno))
521 ira_class_hard_reg_index[cl][hard_regno] = -1;
522 else
524 ira_class_hard_reg_index[cl][hard_regno] = n;
525 ira_class_hard_regs[cl][n++] = hard_regno;
528 ira_class_hard_regs_num[cl] = n;
529 for (n = 0, i = 0; i < FIRST_PSEUDO_REGISTER; i++)
530 if (TEST_HARD_REG_BIT (temp_hard_regset, i))
531 ira_non_ordered_class_hard_regs[cl][n++] = i;
532 ira_assert (ira_class_hard_regs_num[cl] == n);
536 /* Set up global variables defining info about hard registers for the
537 allocation. These depend on USE_HARD_FRAME_P whose TRUE value means
538 that we can use the hard frame pointer for the allocation. */
539 static void
540 setup_alloc_regs (bool use_hard_frame_p)
542 #ifdef ADJUST_REG_ALLOC_ORDER
543 ADJUST_REG_ALLOC_ORDER;
544 #endif
545 COPY_HARD_REG_SET (no_unit_alloc_regs, fixed_reg_set);
546 if (! use_hard_frame_p)
547 SET_HARD_REG_BIT (no_unit_alloc_regs, HARD_FRAME_POINTER_REGNUM);
548 setup_class_hard_regs ();
553 #define alloc_reg_class_subclasses \
554 (this_target_ira_int->x_alloc_reg_class_subclasses)
556 /* Initialize the table of subclasses of each reg class. */
557 static void
558 setup_reg_subclasses (void)
560 int i, j;
561 HARD_REG_SET temp_hard_regset2;
563 for (i = 0; i < N_REG_CLASSES; i++)
564 for (j = 0; j < N_REG_CLASSES; j++)
565 alloc_reg_class_subclasses[i][j] = LIM_REG_CLASSES;
567 for (i = 0; i < N_REG_CLASSES; i++)
569 if (i == (int) NO_REGS)
570 continue;
572 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[i]);
573 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
574 if (hard_reg_set_empty_p (temp_hard_regset))
575 continue;
576 for (j = 0; j < N_REG_CLASSES; j++)
577 if (i != j)
579 enum reg_class *p;
581 COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[j]);
582 AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
583 if (! hard_reg_set_subset_p (temp_hard_regset,
584 temp_hard_regset2))
585 continue;
586 p = &alloc_reg_class_subclasses[j][0];
587 while (*p != LIM_REG_CLASSES) p++;
588 *p = (enum reg_class) i;
595 /* Set up IRA_MEMORY_MOVE_COST and IRA_MAX_MEMORY_MOVE_COST. */
596 static void
597 setup_class_subset_and_memory_move_costs (void)
599 int cl, cl2, mode, cost;
600 HARD_REG_SET temp_hard_regset2;
602 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
603 ira_memory_move_cost[mode][NO_REGS][0]
604 = ira_memory_move_cost[mode][NO_REGS][1] = SHRT_MAX;
605 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
607 if (cl != (int) NO_REGS)
608 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
610 ira_max_memory_move_cost[mode][cl][0]
611 = ira_memory_move_cost[mode][cl][0]
612 = memory_move_cost ((machine_mode) mode,
613 (reg_class_t) cl, false);
614 ira_max_memory_move_cost[mode][cl][1]
615 = ira_memory_move_cost[mode][cl][1]
616 = memory_move_cost ((machine_mode) mode,
617 (reg_class_t) cl, true);
618 /* Costs for NO_REGS are used in cost calculation on the
619 1st pass when the preferred register classes are not
620 known yet. In this case we take the best scenario. */
621 if (ira_memory_move_cost[mode][NO_REGS][0]
622 > ira_memory_move_cost[mode][cl][0])
623 ira_max_memory_move_cost[mode][NO_REGS][0]
624 = ira_memory_move_cost[mode][NO_REGS][0]
625 = ira_memory_move_cost[mode][cl][0];
626 if (ira_memory_move_cost[mode][NO_REGS][1]
627 > ira_memory_move_cost[mode][cl][1])
628 ira_max_memory_move_cost[mode][NO_REGS][1]
629 = ira_memory_move_cost[mode][NO_REGS][1]
630 = ira_memory_move_cost[mode][cl][1];
633 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
634 for (cl2 = (int) N_REG_CLASSES - 1; cl2 >= 0; cl2--)
636 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
637 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
638 COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl2]);
639 AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
640 ira_class_subset_p[cl][cl2]
641 = hard_reg_set_subset_p (temp_hard_regset, temp_hard_regset2);
642 if (! hard_reg_set_empty_p (temp_hard_regset2)
643 && hard_reg_set_subset_p (reg_class_contents[cl2],
644 reg_class_contents[cl]))
645 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
647 cost = ira_memory_move_cost[mode][cl2][0];
648 if (cost > ira_max_memory_move_cost[mode][cl][0])
649 ira_max_memory_move_cost[mode][cl][0] = cost;
650 cost = ira_memory_move_cost[mode][cl2][1];
651 if (cost > ira_max_memory_move_cost[mode][cl][1])
652 ira_max_memory_move_cost[mode][cl][1] = cost;
655 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
656 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
658 ira_memory_move_cost[mode][cl][0]
659 = ira_max_memory_move_cost[mode][cl][0];
660 ira_memory_move_cost[mode][cl][1]
661 = ira_max_memory_move_cost[mode][cl][1];
663 setup_reg_subclasses ();
668 /* Define the following macro if allocation through malloc is
669 preferable. */
670 #define IRA_NO_OBSTACK
672 #ifndef IRA_NO_OBSTACK
673 /* Obstack used for storing all dynamic data (except bitmaps) of the
674 IRA. */
675 static struct obstack ira_obstack;
676 #endif
678 /* Obstack used for storing all bitmaps of the IRA. */
679 static struct bitmap_obstack ira_bitmap_obstack;
681 /* Allocate memory of size LEN for IRA data. */
682 void *
683 ira_allocate (size_t len)
685 void *res;
687 #ifndef IRA_NO_OBSTACK
688 res = obstack_alloc (&ira_obstack, len);
689 #else
690 res = xmalloc (len);
691 #endif
692 return res;
695 /* Free memory ADDR allocated for IRA data. */
696 void
697 ira_free (void *addr ATTRIBUTE_UNUSED)
699 #ifndef IRA_NO_OBSTACK
700 /* do nothing */
701 #else
702 free (addr);
703 #endif
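
/* A typical usage pattern of the two functions above (an illustrative
   sketch, not a fragment of the actual allocator):

     struct ira_spilled_reg_stack_slot *slots
       = (struct ira_spilled_reg_stack_slot *)
         ira_allocate (max_reg_num () * sizeof (*slots));
     ...
     ira_free (slots);

   With IRA_NO_OBSTACK defined (as it is above), these are thin wrappers
   around xmalloc/free; otherwise the memory comes from ira_obstack and
   ira_free does nothing.  */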
707 /* Allocate and return a bitmap for IRA. */
708 bitmap
709 ira_allocate_bitmap (void)
711 return BITMAP_ALLOC (&ira_bitmap_obstack);
714 /* Free bitmap B allocated for IRA. */
715 void
716 ira_free_bitmap (bitmap b ATTRIBUTE_UNUSED)
718 /* do nothing */
723 /* Output information about allocation of all allocnos (except for
724 caps) into file F. */
725 void
726 ira_print_disposition (FILE *f)
728 int i, n, max_regno;
729 ira_allocno_t a;
730 basic_block bb;
732 fprintf (f, "Disposition:");
733 max_regno = max_reg_num ();
734 for (n = 0, i = FIRST_PSEUDO_REGISTER; i < max_regno; i++)
735 for (a = ira_regno_allocno_map[i];
736 a != NULL;
737 a = ALLOCNO_NEXT_REGNO_ALLOCNO (a))
739 if (n % 4 == 0)
740 fprintf (f, "\n");
741 n++;
742 fprintf (f, " %4d:r%-4d", ALLOCNO_NUM (a), ALLOCNO_REGNO (a));
743 if ((bb = ALLOCNO_LOOP_TREE_NODE (a)->bb) != NULL)
744 fprintf (f, "b%-3d", bb->index);
745 else
746 fprintf (f, "l%-3d", ALLOCNO_LOOP_TREE_NODE (a)->loop_num);
747 if (ALLOCNO_HARD_REGNO (a) >= 0)
748 fprintf (f, " %3d", ALLOCNO_HARD_REGNO (a));
749 else
750 fprintf (f, " mem");
752 fprintf (f, "\n");
755 /* Outputs information about allocation of all allocnos into
756 stderr. */
757 void
758 ira_debug_disposition (void)
760 ira_print_disposition (stderr);
765 /* Set up ira_stack_reg_pressure_class which is the biggest pressure
766 register class containing stack registers or NO_REGS if there are
767 no stack registers. To find this class, we iterate through all
768 register pressure classes and choose the first register pressure
769 class containing all the stack registers and having the biggest
770 size. */
771 static void
772 setup_stack_reg_pressure_class (void)
774 ira_stack_reg_pressure_class = NO_REGS;
775 #ifdef STACK_REGS
777 int i, best, size;
778 enum reg_class cl;
779 HARD_REG_SET temp_hard_regset2;
781 CLEAR_HARD_REG_SET (temp_hard_regset);
782 for (i = FIRST_STACK_REG; i <= LAST_STACK_REG; i++)
783 SET_HARD_REG_BIT (temp_hard_regset, i);
784 best = 0;
785 for (i = 0; i < ira_pressure_classes_num; i++)
787 cl = ira_pressure_classes[i];
788 COPY_HARD_REG_SET (temp_hard_regset2, temp_hard_regset);
789 AND_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl]);
790 size = hard_reg_set_size (temp_hard_regset2);
791 if (best < size)
793 best = size;
794 ira_stack_reg_pressure_class = cl;
798 #endif
801 /* Find pressure classes which are register classes for which we
802 calculate register pressure in IRA, register pressure sensitive
803 insn scheduling, and register pressure sensitive loop invariant
804 motion.
806 To make the register pressure calculation easy, we always use
807 non-intersecting register pressure classes. A move between hard
808 registers of one register pressure class is not more expensive
809 than a load and store of the hard registers. Most likely an allocno
810 class will be a subset of a register pressure class, and in many
811 cases it will itself be a register pressure class. That makes usage of
812 register pressure classes a good approximation for finding high register
813 pressure. */
814 static void
815 setup_pressure_classes (void)
817 int cost, i, n, curr;
818 int cl, cl2;
819 enum reg_class pressure_classes[N_REG_CLASSES];
820 int m;
821 HARD_REG_SET temp_hard_regset2;
822 bool insert_p;
824 n = 0;
825 for (cl = 0; cl < N_REG_CLASSES; cl++)
827 if (ira_class_hard_regs_num[cl] == 0)
828 continue;
829 if (ira_class_hard_regs_num[cl] != 1
830 /* A register class without subclasses may contain a few
831 hard registers and movement between them is costly
832 (e.g. SPARC FPCC registers). We still should consider it
833 as a candidate for a pressure class. */
834 && alloc_reg_class_subclasses[cl][0] < cl)
836 /* Check that the moves between any hard registers of the
837 current class are not more expensive for a legal mode
838 than load/store of the hard registers of the current
839 class. Such class is a potential candidate to be a
840 register pressure class. */
841 for (m = 0; m < NUM_MACHINE_MODES; m++)
843 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
844 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
845 AND_COMPL_HARD_REG_SET (temp_hard_regset,
846 ira_prohibited_class_mode_regs[cl][m]);
847 if (hard_reg_set_empty_p (temp_hard_regset))
848 continue;
849 ira_init_register_move_cost_if_necessary ((machine_mode) m);
850 cost = ira_register_move_cost[m][cl][cl];
851 if (cost <= ira_max_memory_move_cost[m][cl][1]
852 || cost <= ira_max_memory_move_cost[m][cl][0])
853 break;
855 if (m >= NUM_MACHINE_MODES)
856 continue;
858 curr = 0;
859 insert_p = true;
860 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
861 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
862 /* Remove the pressure classes added so far which are subsets of
863 the current candidate class. Prefer GENERAL_REGS as a pressure
864 register class to another class containing the same
865 allocatable hard registers. We do this because machine
866 dependent cost hooks might give wrong costs for the latter
867 class but always give the right cost for the former class
868 (GENERAL_REGS). */
869 for (i = 0; i < n; i++)
871 cl2 = pressure_classes[i];
872 COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl2]);
873 AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
874 if (hard_reg_set_subset_p (temp_hard_regset, temp_hard_regset2)
875 && (! hard_reg_set_equal_p (temp_hard_regset, temp_hard_regset2)
876 || cl2 == (int) GENERAL_REGS))
878 pressure_classes[curr++] = (enum reg_class) cl2;
879 insert_p = false;
880 continue;
882 if (hard_reg_set_subset_p (temp_hard_regset2, temp_hard_regset)
883 && (! hard_reg_set_equal_p (temp_hard_regset2, temp_hard_regset)
884 || cl == (int) GENERAL_REGS))
885 continue;
886 if (hard_reg_set_equal_p (temp_hard_regset2, temp_hard_regset))
887 insert_p = false;
888 pressure_classes[curr++] = (enum reg_class) cl2;
890 /* If the current candidate is a subset of a pressure class added
891 so far, don't add it to the list of the pressure
892 classes. */
893 if (insert_p)
894 pressure_classes[curr++] = (enum reg_class) cl;
895 n = curr;
897 #ifdef ENABLE_IRA_CHECKING
899 HARD_REG_SET ignore_hard_regs;
901 /* Check the correctness of the pressure classes: here we check that
902 the hard registers from all register pressure classes contain all
903 hard registers available for the allocation. */
904 CLEAR_HARD_REG_SET (temp_hard_regset);
905 CLEAR_HARD_REG_SET (temp_hard_regset2);
906 COPY_HARD_REG_SET (ignore_hard_regs, no_unit_alloc_regs);
907 for (cl = 0; cl < LIM_REG_CLASSES; cl++)
909 /* For some targets (like MIPS with MD_REGS), there are some
910 classes with hard registers available for allocation but
911 not able to hold a value of any mode. */
912 for (m = 0; m < NUM_MACHINE_MODES; m++)
913 if (contains_reg_of_mode[cl][m])
914 break;
915 if (m >= NUM_MACHINE_MODES)
917 IOR_HARD_REG_SET (ignore_hard_regs, reg_class_contents[cl]);
918 continue;
920 for (i = 0; i < n; i++)
921 if ((int) pressure_classes[i] == cl)
922 break;
923 IOR_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl]);
924 if (i < n)
925 IOR_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
927 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
928 /* Some targets (like SPARC with ICC reg) have allocatable regs
929 for which no reg class is defined. */
930 if (REGNO_REG_CLASS (i) == NO_REGS)
931 SET_HARD_REG_BIT (ignore_hard_regs, i);
932 AND_COMPL_HARD_REG_SET (temp_hard_regset, ignore_hard_regs);
933 AND_COMPL_HARD_REG_SET (temp_hard_regset2, ignore_hard_regs);
934 ira_assert (hard_reg_set_subset_p (temp_hard_regset2, temp_hard_regset));
936 #endif
937 ira_pressure_classes_num = 0;
938 for (i = 0; i < n; i++)
940 cl = (int) pressure_classes[i];
941 ira_reg_pressure_class_p[cl] = true;
942 ira_pressure_classes[ira_pressure_classes_num++] = (enum reg_class) cl;
944 setup_stack_reg_pressure_class ();
947 /* Set up IRA_UNIFORM_CLASS_P. A uniform class is a register class
948 whose register move cost between any registers of the class is the
949 same as for all its subclasses. We use the data to speed up the
950 2nd pass of calculations of allocno costs. */
951 static void
952 setup_uniform_class_p (void)
954 int i, cl, cl2, m;
956 for (cl = 0; cl < N_REG_CLASSES; cl++)
958 ira_uniform_class_p[cl] = false;
959 if (ira_class_hard_regs_num[cl] == 0)
960 continue;
961 /* We cannot use alloc_reg_class_subclasses here because the move
962 cost hooks do not take into account that some registers are
963 unavailable for the subtarget. E.g. for i686, INT_SSE_REGS
964 is an element of alloc_reg_class_subclasses for GENERAL_REGS
965 because SSE regs are unavailable. */
966 for (i = 0; (cl2 = reg_class_subclasses[cl][i]) != LIM_REG_CLASSES; i++)
968 if (ira_class_hard_regs_num[cl2] == 0)
969 continue;
970 for (m = 0; m < NUM_MACHINE_MODES; m++)
971 if (contains_reg_of_mode[cl][m] && contains_reg_of_mode[cl2][m])
973 ira_init_register_move_cost_if_necessary ((machine_mode) m);
974 if (ira_register_move_cost[m][cl][cl]
975 != ira_register_move_cost[m][cl2][cl2])
976 break;
978 if (m < NUM_MACHINE_MODES)
979 break;
981 if (cl2 == LIM_REG_CLASSES)
982 ira_uniform_class_p[cl] = true;
986 /* Set up IRA_ALLOCNO_CLASSES, IRA_ALLOCNO_CLASSES_NUM,
987 IRA_IMPORTANT_CLASSES, and IRA_IMPORTANT_CLASSES_NUM.
989 A target may have many subtargets and not all target hard registers can
990 be used for allocation, e.g. the x86 port in 32-bit mode cannot use
991 hard registers introduced in x86-64 like r8-r15. Some classes
992 might have the same allocatable hard registers, e.g. INDEX_REGS
993 and GENERAL_REGS in the x86 port in 32-bit mode. To decrease the
994 effort of various calculations we introduce allocno classes which contain
995 unique non-empty sets of allocatable hard-registers.
997 Pseudo class cost calculation in ira-costs.c is very expensive.
998 Therefore we try to decrease the number of classes involved in
999 such calculations. Register classes used in the cost calculation
1000 are called important classes. They are allocno classes and other
1001 non-empty classes whose allocatable hard register sets are inside
1002 an allocno class hard register set. At first sight it
1003 looks like they are just the allocno classes, but that is not true. In
1004 the example of the x86 port in 32-bit mode, the allocno classes will
1005 contain GENERAL_REGS but not LEGACY_REGS (because the allocatable hard
1006 registers are the same for both classes). The important
1007 classes will contain both GENERAL_REGS and LEGACY_REGS. This is done
1008 because a machine description insn constraint may refer to
1009 LEGACY_REGS and the code in ira-costs.c is mostly based on examining
1010 the insn constraints. */
1011 static void
1012 setup_allocno_and_important_classes (void)
1014 int i, j, n, cl;
1015 bool set_p;
1016 HARD_REG_SET temp_hard_regset2;
1017 static enum reg_class classes[LIM_REG_CLASSES + 1];
1019 n = 0;
1020 /* Collect classes which contain unique sets of allocatable hard
1021 registers. Prefer GENERAL_REGS to other classes containing the
1022 same set of hard registers. */
1023 for (i = 0; i < LIM_REG_CLASSES; i++)
1025 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[i]);
1026 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
1027 for (j = 0; j < n; j++)
1029 cl = classes[j];
1030 COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl]);
1031 AND_COMPL_HARD_REG_SET (temp_hard_regset2,
1032 no_unit_alloc_regs);
1033 if (hard_reg_set_equal_p (temp_hard_regset,
1034 temp_hard_regset2))
1035 break;
1037 if (j >= n)
1038 classes[n++] = (enum reg_class) i;
1039 else if (i == GENERAL_REGS)
1040 /* Prefer general regs. For the i386 example, it means that
1041 we prefer GENERAL_REGS over INDEX_REGS or LEGACY_REGS
1042 (all of them consist of the same available hard
1043 registers). */
1044 classes[j] = (enum reg_class) i;
1046 classes[n] = LIM_REG_CLASSES;
1048 /* Set up classes which can be used for allocnos as classes
1049 containing non-empty unique sets of allocatable hard
1050 registers. */
1051 ira_allocno_classes_num = 0;
1052 for (i = 0; (cl = classes[i]) != LIM_REG_CLASSES; i++)
1053 if (ira_class_hard_regs_num[cl] > 0)
1054 ira_allocno_classes[ira_allocno_classes_num++] = (enum reg_class) cl;
1055 ira_important_classes_num = 0;
1056 /* Add non-allocno classes containing a non-empty set of
1057 allocatable hard regs. */
1058 for (cl = 0; cl < N_REG_CLASSES; cl++)
1059 if (ira_class_hard_regs_num[cl] > 0)
1061 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
1062 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
1063 set_p = false;
1064 for (j = 0; j < ira_allocno_classes_num; j++)
1066 COPY_HARD_REG_SET (temp_hard_regset2,
1067 reg_class_contents[ira_allocno_classes[j]]);
1068 AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
1069 if ((enum reg_class) cl == ira_allocno_classes[j])
1070 break;
1071 else if (hard_reg_set_subset_p (temp_hard_regset,
1072 temp_hard_regset2))
1073 set_p = true;
1075 if (set_p && j >= ira_allocno_classes_num)
1076 ira_important_classes[ira_important_classes_num++]
1077 = (enum reg_class) cl;
1079 /* Now add allocno classes to the important classes. */
1080 for (j = 0; j < ira_allocno_classes_num; j++)
1081 ira_important_classes[ira_important_classes_num++]
1082 = ira_allocno_classes[j];
1083 for (cl = 0; cl < N_REG_CLASSES; cl++)
1085 ira_reg_allocno_class_p[cl] = false;
1086 ira_reg_pressure_class_p[cl] = false;
1088 for (j = 0; j < ira_allocno_classes_num; j++)
1089 ira_reg_allocno_class_p[ira_allocno_classes[j]] = true;
1090 setup_pressure_classes ();
1091 setup_uniform_class_p ();
1094 /* Set up the translation in CLASS_TRANSLATE of all classes into a class
1095 given by the array CLASSES of length CLASSES_NUM. The function is used
1096 to translate any reg class to an allocno class or to a
1097 pressure class. This translation is necessary for some
1098 calculations when we can use only allocno or pressure classes;
1099 such a translation is an approximate representation of all
1100 classes.
1102 The translation in the case when the allocatable hard register set of a
1103 given class is a subset of the allocatable hard register set of a class
1104 in CLASSES is pretty simple: we use the smallest class from CLASSES
1105 containing the given class. If the allocatable hard register set of a
1106 given class is not a subset of any corresponding set of a class
1107 from CLASSES, we use the cheapest (from the load/store point of view)
1108 class from CLASSES whose set intersects with the given class set. */
1109 static void
1110 setup_class_translate_array (enum reg_class *class_translate,
1111 int classes_num, enum reg_class *classes)
1113 int cl, mode;
1114 enum reg_class aclass, best_class, *cl_ptr;
1115 int i, cost, min_cost, best_cost;
1117 for (cl = 0; cl < N_REG_CLASSES; cl++)
1118 class_translate[cl] = NO_REGS;
1120 for (i = 0; i < classes_num; i++)
1122 aclass = classes[i];
1123 for (cl_ptr = &alloc_reg_class_subclasses[aclass][0];
1124 (cl = *cl_ptr) != LIM_REG_CLASSES;
1125 cl_ptr++)
1126 if (class_translate[cl] == NO_REGS)
1127 class_translate[cl] = aclass;
1128 class_translate[aclass] = aclass;
1130 /* For classes which are not fully covered by one of the given classes
1131 (in other words, covered by more than one of the given classes),
1132 use the cheapest class. */
1133 for (cl = 0; cl < N_REG_CLASSES; cl++)
1135 if (cl == NO_REGS || class_translate[cl] != NO_REGS)
1136 continue;
1137 best_class = NO_REGS;
1138 best_cost = INT_MAX;
1139 for (i = 0; i < classes_num; i++)
1141 aclass = classes[i];
1142 COPY_HARD_REG_SET (temp_hard_regset,
1143 reg_class_contents[aclass]);
1144 AND_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
1145 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
1146 if (! hard_reg_set_empty_p (temp_hard_regset))
1148 min_cost = INT_MAX;
1149 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
1151 cost = (ira_memory_move_cost[mode][aclass][0]
1152 + ira_memory_move_cost[mode][aclass][1]);
1153 if (min_cost > cost)
1154 min_cost = cost;
1156 if (best_class == NO_REGS || best_cost > min_cost)
1158 best_class = aclass;
1159 best_cost = min_cost;
1163 class_translate[cl] = best_class;
1167 /* Set up array IRA_ALLOCNO_CLASS_TRANSLATE and
1168 IRA_PRESSURE_CLASS_TRANSLATE. */
1169 static void
1170 setup_class_translate (void)
1172 setup_class_translate_array (ira_allocno_class_translate,
1173 ira_allocno_classes_num, ira_allocno_classes);
1174 setup_class_translate_array (ira_pressure_class_translate,
1175 ira_pressure_classes_num, ira_pressure_classes);
1178 /* Order numbers of allocno classes in original target allocno class
1179 array, -1 for non-allocno classes. */
1180 static int allocno_class_order[N_REG_CLASSES];
1182 /* The function used to sort the important classes. */
1183 static int
1184 comp_reg_classes_func (const void *v1p, const void *v2p)
1186 enum reg_class cl1 = *(const enum reg_class *) v1p;
1187 enum reg_class cl2 = *(const enum reg_class *) v2p;
1188 enum reg_class tcl1, tcl2;
1189 int diff;
1191 tcl1 = ira_allocno_class_translate[cl1];
1192 tcl2 = ira_allocno_class_translate[cl2];
1193 if (tcl1 != NO_REGS && tcl2 != NO_REGS
1194 && (diff = allocno_class_order[tcl1] - allocno_class_order[tcl2]) != 0)
1195 return diff;
1196 return (int) cl1 - (int) cl2;
1199 /* For the correct work of the function setup_reg_class_relations we need
1200 to reorder the important classes according to the order of their allocno
1201 classes. This places important classes containing the same
1202 allocatable hard register set adjacent to each other, with the allocno
1203 class with that allocatable hard register set right after the other
1204 important classes with the same set.
1206 In the example from the comments of function
1207 setup_allocno_and_important_classes, it places LEGACY_REGS and
1208 GENERAL_REGS close to each other, with GENERAL_REGS after
1209 LEGACY_REGS. */
1210 static void
1211 reorder_important_classes (void)
1213 int i;
1215 for (i = 0; i < N_REG_CLASSES; i++)
1216 allocno_class_order[i] = -1;
1217 for (i = 0; i < ira_allocno_classes_num; i++)
1218 allocno_class_order[ira_allocno_classes[i]] = i;
1219 qsort (ira_important_classes, ira_important_classes_num,
1220 sizeof (enum reg_class), comp_reg_classes_func);
1221 for (i = 0; i < ira_important_classes_num; i++)
1222 ira_important_class_nums[ira_important_classes[i]] = i;
1225 /* Set up IRA_REG_CLASS_SUBUNION, IRA_REG_CLASS_SUPERUNION,
1226 IRA_REG_CLASS_SUPER_CLASSES, IRA_REG_CLASSES_INTERSECT, and
1227 IRA_REG_CLASSES_INTERSECT_P. For the meaning of the relations,
1228 please see corresponding comments in ira-int.h. */
1229 static void
1230 setup_reg_class_relations (void)
1232 int i, cl1, cl2, cl3;
1233 HARD_REG_SET intersection_set, union_set, temp_set2;
1234 bool important_class_p[N_REG_CLASSES];
1236 memset (important_class_p, 0, sizeof (important_class_p));
1237 for (i = 0; i < ira_important_classes_num; i++)
1238 important_class_p[ira_important_classes[i]] = true;
1239 for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
1241 ira_reg_class_super_classes[cl1][0] = LIM_REG_CLASSES;
1242 for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
1244 ira_reg_classes_intersect_p[cl1][cl2] = false;
1245 ira_reg_class_intersect[cl1][cl2] = NO_REGS;
1246 ira_reg_class_subset[cl1][cl2] = NO_REGS;
1247 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl1]);
1248 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
1249 COPY_HARD_REG_SET (temp_set2, reg_class_contents[cl2]);
1250 AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
1251 if (hard_reg_set_empty_p (temp_hard_regset)
1252 && hard_reg_set_empty_p (temp_set2))
1254 /* Both classes have no allocatable hard registers
1255 -- take all class hard registers into account and use
1256 reg_class_subunion and reg_class_superunion. */
1257 for (i = 0;; i++)
1259 cl3 = reg_class_subclasses[cl1][i];
1260 if (cl3 == LIM_REG_CLASSES)
1261 break;
1262 if (reg_class_subset_p (ira_reg_class_intersect[cl1][cl2],
1263 (enum reg_class) cl3))
1264 ira_reg_class_intersect[cl1][cl2] = (enum reg_class) cl3;
1266 ira_reg_class_subunion[cl1][cl2] = reg_class_subunion[cl1][cl2];
1267 ira_reg_class_superunion[cl1][cl2] = reg_class_superunion[cl1][cl2];
1268 continue;
1270 ira_reg_classes_intersect_p[cl1][cl2]
1271 = hard_reg_set_intersect_p (temp_hard_regset, temp_set2);
1272 if (important_class_p[cl1] && important_class_p[cl2]
1273 && hard_reg_set_subset_p (temp_hard_regset, temp_set2))
1275 /* CL1 and CL2 are important classes and CL1 allocatable
1276 hard register set is inside of CL2 allocatable hard
1277 registers -- make CL1 a superset of CL2. */
1278 enum reg_class *p;
1280 p = &ira_reg_class_super_classes[cl1][0];
1281 while (*p != LIM_REG_CLASSES)
1282 p++;
1283 *p++ = (enum reg_class) cl2;
1284 *p = LIM_REG_CLASSES;
1286 ira_reg_class_subunion[cl1][cl2] = NO_REGS;
1287 ira_reg_class_superunion[cl1][cl2] = NO_REGS;
1288 COPY_HARD_REG_SET (intersection_set, reg_class_contents[cl1]);
1289 AND_HARD_REG_SET (intersection_set, reg_class_contents[cl2]);
1290 AND_COMPL_HARD_REG_SET (intersection_set, no_unit_alloc_regs);
1291 COPY_HARD_REG_SET (union_set, reg_class_contents[cl1]);
1292 IOR_HARD_REG_SET (union_set, reg_class_contents[cl2]);
1293 AND_COMPL_HARD_REG_SET (union_set, no_unit_alloc_regs);
1294 for (cl3 = 0; cl3 < N_REG_CLASSES; cl3++)
1296 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl3]);
1297 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
1298 if (hard_reg_set_subset_p (temp_hard_regset, intersection_set))
1300 /* CL3 allocatable hard register set is inside of
1301 intersection of allocatable hard register sets
1302 of CL1 and CL2. */
1303 if (important_class_p[cl3])
1305 COPY_HARD_REG_SET
1306 (temp_set2,
1307 reg_class_contents
1308 [(int) ira_reg_class_intersect[cl1][cl2]]);
1309 AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
1310 if (! hard_reg_set_subset_p (temp_hard_regset, temp_set2)
1311 /* If the allocatable hard register sets are
1312 the same, prefer GENERAL_REGS or the
1313 smallest class for debugging
1314 purposes. */
1315 || (hard_reg_set_equal_p (temp_hard_regset, temp_set2)
1316 && (cl3 == GENERAL_REGS
1317 || ((ira_reg_class_intersect[cl1][cl2]
1318 != GENERAL_REGS)
1319 && hard_reg_set_subset_p
1320 (reg_class_contents[cl3],
1321 reg_class_contents
1322 [(int)
1323 ira_reg_class_intersect[cl1][cl2]])))))
1324 ira_reg_class_intersect[cl1][cl2] = (enum reg_class) cl3;
1326 COPY_HARD_REG_SET
1327 (temp_set2,
1328 reg_class_contents[(int) ira_reg_class_subset[cl1][cl2]]);
1329 AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
1330 if (! hard_reg_set_subset_p (temp_hard_regset, temp_set2)
1331 /* Ignore unavailable hard registers and prefer
1332 smallest class for debugging purposes. */
1333 || (hard_reg_set_equal_p (temp_hard_regset, temp_set2)
1334 && hard_reg_set_subset_p
1335 (reg_class_contents[cl3],
1336 reg_class_contents
1337 [(int) ira_reg_class_subset[cl1][cl2]])))
1338 ira_reg_class_subset[cl1][cl2] = (enum reg_class) cl3;
1340 if (important_class_p[cl3]
1341 && hard_reg_set_subset_p (temp_hard_regset, union_set))
1343 /* CL3 allocatable hard register set is inside of
1344 union of allocatable hard register sets of CL1
1345 and CL2. */
1346 COPY_HARD_REG_SET
1347 (temp_set2,
1348 reg_class_contents[(int) ira_reg_class_subunion[cl1][cl2]]);
1349 AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
1350 if (ira_reg_class_subunion[cl1][cl2] == NO_REGS
1351 || (hard_reg_set_subset_p (temp_set2, temp_hard_regset)
1353 && (! hard_reg_set_equal_p (temp_set2,
1354 temp_hard_regset)
1355 || cl3 == GENERAL_REGS
1356 /* If the allocatable hard register sets are the
1357 same, prefer GENERAL_REGS or the smallest
1358 class for debugging purposes. */
1359 || (ira_reg_class_subunion[cl1][cl2] != GENERAL_REGS
1360 && hard_reg_set_subset_p
1361 (reg_class_contents[cl3],
1362 reg_class_contents
1363 [(int) ira_reg_class_subunion[cl1][cl2]])))))
1364 ira_reg_class_subunion[cl1][cl2] = (enum reg_class) cl3;
1366 if (hard_reg_set_subset_p (union_set, temp_hard_regset))
1368 /* CL3 allocatable hard register set contains union
1369 of allocatable hard register sets of CL1 and
1370 CL2. */
1371 COPY_HARD_REG_SET
1372 (temp_set2,
1373 reg_class_contents[(int) ira_reg_class_superunion[cl1][cl2]]);
1374 AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
1375 if (ira_reg_class_superunion[cl1][cl2] == NO_REGS
1376 || (hard_reg_set_subset_p (temp_hard_regset, temp_set2)
1378 && (! hard_reg_set_equal_p (temp_set2,
1379 temp_hard_regset)
1380 || cl3 == GENERAL_REGS
1381 /* If the allocatable hard register sets are the
1382 same, prefer GENERAL_REGS or the smallest
1383 class for debugging purposes. */
1384 || (ira_reg_class_superunion[cl1][cl2] != GENERAL_REGS
1385 && hard_reg_set_subset_p
1386 (reg_class_contents[cl3],
1387 reg_class_contents
1388 [(int) ira_reg_class_superunion[cl1][cl2]])))))
1389 ira_reg_class_superunion[cl1][cl2] = (enum reg_class) cl3;
1396 /* Output all uniform and important classes into file F. */
1397 static void
1398 print_unform_and_important_classes (FILE *f)
1400 static const char *const reg_class_names[] = REG_CLASS_NAMES;
1401 int i, cl;
1403 fprintf (f, "Uniform classes:\n");
1404 for (cl = 0; cl < N_REG_CLASSES; cl++)
1405 if (ira_uniform_class_p[cl])
1406 fprintf (f, " %s", reg_class_names[cl]);
1407 fprintf (f, "\nImportant classes:\n");
1408 for (i = 0; i < ira_important_classes_num; i++)
1409 fprintf (f, " %s", reg_class_names[ira_important_classes[i]]);
1410 fprintf (f, "\n");
1413 /* Output all possible allocno or pressure classes and their
1414 translation map into file F. */
1415 static void
1416 print_translated_classes (FILE *f, bool pressure_p)
1418 int classes_num = (pressure_p
1419 ? ira_pressure_classes_num : ira_allocno_classes_num);
1420 enum reg_class *classes = (pressure_p
1421 ? ira_pressure_classes : ira_allocno_classes);
1422 enum reg_class *class_translate = (pressure_p
1423 ? ira_pressure_class_translate
1424 : ira_allocno_class_translate);
1425 static const char *const reg_class_names[] = REG_CLASS_NAMES;
1426 int i;
1428 fprintf (f, "%s classes:\n", pressure_p ? "Pressure" : "Allocno");
1429 for (i = 0; i < classes_num; i++)
1430 fprintf (f, " %s", reg_class_names[classes[i]]);
1431 fprintf (f, "\nClass translation:\n");
1432 for (i = 0; i < N_REG_CLASSES; i++)
1433 fprintf (f, " %s -> %s\n", reg_class_names[i],
1434 reg_class_names[class_translate[i]]);
1437 /* Output all possible allocno and translation classes and the
1438 translation maps into stderr. */
1439 void
1440 ira_debug_allocno_classes (void)
1442 print_unform_and_important_classes (stderr);
1443 print_translated_classes (stderr, false);
1444 print_translated_classes (stderr, true);
1447 /* Set up different arrays concerning class subsets, allocno and
1448 important classes. */
1449 static void
1450 find_reg_classes (void)
1452 setup_allocno_and_important_classes ();
1453 setup_class_translate ();
1454 reorder_important_classes ();
1455 setup_reg_class_relations ();
1460 /* Set up the array above. */
1461 static void
1462 setup_hard_regno_aclass (void)
1464 int i;
1466 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
1468 #if 1
1469 ira_hard_regno_allocno_class[i]
1470 = (TEST_HARD_REG_BIT (no_unit_alloc_regs, i)
1471 ? NO_REGS
1472 : ira_allocno_class_translate[REGNO_REG_CLASS (i)]);
1473 #else
1474 int j;
1475 enum reg_class cl;
1476 ira_hard_regno_allocno_class[i] = NO_REGS;
1477 for (j = 0; j < ira_allocno_classes_num; j++)
1479 cl = ira_allocno_classes[j];
1480 if (ira_class_hard_reg_index[cl][i] >= 0)
1482 ira_hard_regno_allocno_class[i] = cl;
1483 break;
1486 #endif
1492 /* Form IRA_REG_CLASS_MAX_NREGS and IRA_REG_CLASS_MIN_NREGS maps. */
1493 static void
1494 setup_reg_class_nregs (void)
1496 int i, cl, cl2, m;
1498 for (m = 0; m < MAX_MACHINE_MODE; m++)
1500 for (cl = 0; cl < N_REG_CLASSES; cl++)
1501 ira_reg_class_max_nregs[cl][m]
1502 = ira_reg_class_min_nregs[cl][m]
1503 = targetm.class_max_nregs ((reg_class_t) cl, (machine_mode) m);
1504 for (cl = 0; cl < N_REG_CLASSES; cl++)
1505 for (i = 0;
1506 (cl2 = alloc_reg_class_subclasses[cl][i]) != LIM_REG_CLASSES;
1507 i++)
1508 if (ira_reg_class_min_nregs[cl2][m]
1509 < ira_reg_class_min_nregs[cl][m])
1510 ira_reg_class_min_nregs[cl][m] = ira_reg_class_min_nregs[cl2][m];
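/* Editorial example (illustrative): on a typical 32-bit target where
   targetm.class_max_nregs returns 2 for DImode in GENERAL_REGS, the first
   loop sets both ira_reg_class_max_nregs[GENERAL_REGS][DImode] and
   ira_reg_class_min_nregs[GENERAL_REGS][DImode] to 2; the second loop can
   then lower the min entry if some subclass of GENERAL_REGS needs fewer
   hard registers for the same mode.  */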
1516 /* Set up IRA_PROHIBITED_CLASS_MODE_REGS and IRA_CLASS_SINGLETON.
1517 This function is called once IRA_CLASS_HARD_REGS has been initialized. */
1518 static void
1519 setup_prohibited_class_mode_regs (void)
1521 int j, k, hard_regno, cl, last_hard_regno, count;
1523 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
1525 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
1526 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
1527 for (j = 0; j < NUM_MACHINE_MODES; j++)
1529 count = 0;
1530 last_hard_regno = -1;
1531 CLEAR_HARD_REG_SET (ira_prohibited_class_mode_regs[cl][j]);
1532 for (k = ira_class_hard_regs_num[cl] - 1; k >= 0; k--)
1534 hard_regno = ira_class_hard_regs[cl][k];
1535 if (! HARD_REGNO_MODE_OK (hard_regno, (machine_mode) j))
1536 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1537 hard_regno);
1538 else if (in_hard_reg_set_p (temp_hard_regset,
1539 (machine_mode) j, hard_regno))
1541 last_hard_regno = hard_regno;
1542 count++;
1545 ira_class_singleton[cl][j] = (count == 1 ? last_hard_regno : -1);
1550 /* Clarify IRA_PROHIBITED_CLASS_MODE_REGS by excluding hard registers
1551 spanning from one register pressure class to another one. It is
1552 called after defining the pressure classes. */
1553 static void
1554 clarify_prohibited_class_mode_regs (void)
1556 int j, k, hard_regno, cl, pclass, nregs;
1558 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
1559 for (j = 0; j < NUM_MACHINE_MODES; j++)
1561 CLEAR_HARD_REG_SET (ira_useful_class_mode_regs[cl][j]);
1562 for (k = ira_class_hard_regs_num[cl] - 1; k >= 0; k--)
1564 hard_regno = ira_class_hard_regs[cl][k];
1565 if (TEST_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j], hard_regno))
1566 continue;
1567 nregs = hard_regno_nregs[hard_regno][j];
1568 if (hard_regno + nregs > FIRST_PSEUDO_REGISTER)
1570 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1571 hard_regno);
1572 continue;
1574 pclass = ira_pressure_class_translate[REGNO_REG_CLASS (hard_regno)];
1575 for (nregs-- ;nregs >= 0; nregs--)
1576 if (((enum reg_class) pclass
1577 != ira_pressure_class_translate[REGNO_REG_CLASS
1578 (hard_regno + nregs)]))
1580 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1581 hard_regno);
1582 break;
1584 if (!TEST_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1585 hard_regno))
1586 add_to_hard_reg_set (&ira_useful_class_mode_regs[cl][j],
1587 (machine_mode) j, hard_regno);
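/* Editorial example (illustrative): suppose a hypothetical target needs two
   consecutive hard registers for DImode, hard register 7 is the last member
   of one pressure class and hard register 8 belongs to a different one.  A
   DImode value starting in register 7 would span two pressure classes, so
   the loop above marks register 7 as prohibited for DImode; only starting
   registers whose whole group stays within a single pressure class end up
   in ira_useful_class_mode_regs.  */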
1592 /* Allocate and initialize IRA_REGISTER_MOVE_COST, IRA_MAY_MOVE_IN_COST
1593 and IRA_MAY_MOVE_OUT_COST for MODE. */
1594 void
1595 ira_init_register_move_cost (machine_mode mode)
1597 static unsigned short last_move_cost[N_REG_CLASSES][N_REG_CLASSES];
1598 bool all_match = true;
1599 unsigned int cl1, cl2;
1601 ira_assert (ira_register_move_cost[mode] == NULL
1602 && ira_may_move_in_cost[mode] == NULL
1603 && ira_may_move_out_cost[mode] == NULL);
1604 ira_assert (have_regs_of_mode[mode]);
1605 for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
1606 for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
1608 int cost;
1609 if (!contains_reg_of_mode[cl1][mode]
1610 || !contains_reg_of_mode[cl2][mode])
1612 if ((ira_reg_class_max_nregs[cl1][mode]
1613 > ira_class_hard_regs_num[cl1])
1614 || (ira_reg_class_max_nregs[cl2][mode]
1615 > ira_class_hard_regs_num[cl2]))
1616 cost = 65535;
1617 else
1618 cost = (ira_memory_move_cost[mode][cl1][0]
1619 + ira_memory_move_cost[mode][cl2][1]) * 2;
1621 else
1623 cost = register_move_cost (mode, (enum reg_class) cl1,
1624 (enum reg_class) cl2);
1625 ira_assert (cost < 65535);
1627 all_match &= (last_move_cost[cl1][cl2] == cost);
1628 last_move_cost[cl1][cl2] = cost;
1630 if (all_match && last_mode_for_init_move_cost != -1)
1632 ira_register_move_cost[mode]
1633 = ira_register_move_cost[last_mode_for_init_move_cost];
1634 ira_may_move_in_cost[mode]
1635 = ira_may_move_in_cost[last_mode_for_init_move_cost];
1636 ira_may_move_out_cost[mode]
1637 = ira_may_move_out_cost[last_mode_for_init_move_cost];
1638 return;
1640 last_mode_for_init_move_cost = mode;
1641 ira_register_move_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
1642 ira_may_move_in_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
1643 ira_may_move_out_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
1644 for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
1645 for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
1647 int cost;
1648 enum reg_class *p1, *p2;
1650 if (last_move_cost[cl1][cl2] == 65535)
1652 ira_register_move_cost[mode][cl1][cl2] = 65535;
1653 ira_may_move_in_cost[mode][cl1][cl2] = 65535;
1654 ira_may_move_out_cost[mode][cl1][cl2] = 65535;
1656 else
1658 cost = last_move_cost[cl1][cl2];
1660 for (p2 = &reg_class_subclasses[cl2][0];
1661 *p2 != LIM_REG_CLASSES; p2++)
1662 if (ira_class_hard_regs_num[*p2] > 0
1663 && (ira_reg_class_max_nregs[*p2][mode]
1664 <= ira_class_hard_regs_num[*p2]))
1665 cost = MAX (cost, ira_register_move_cost[mode][cl1][*p2]);
1667 for (p1 = &reg_class_subclasses[cl1][0];
1668 *p1 != LIM_REG_CLASSES; p1++)
1669 if (ira_class_hard_regs_num[*p1] > 0
1670 && (ira_reg_class_max_nregs[*p1][mode]
1671 <= ira_class_hard_regs_num[*p1]))
1672 cost = MAX (cost, ira_register_move_cost[mode][*p1][cl2]);
1674 ira_assert (cost <= 65535);
1675 ira_register_move_cost[mode][cl1][cl2] = cost;
1677 if (ira_class_subset_p[cl1][cl2])
1678 ira_may_move_in_cost[mode][cl1][cl2] = 0;
1679 else
1680 ira_may_move_in_cost[mode][cl1][cl2] = cost;
1682 if (ira_class_subset_p[cl2][cl1])
1683 ira_may_move_out_cost[mode][cl1][cl2] = 0;
1684 else
1685 ira_may_move_out_cost[mode][cl1][cl2] = cost;
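/* Editorial sketch (illustrative only, therefore guarded out): one way a
   consumer of the tables built above could obtain a move cost, initializing
   the table for MODE lazily first.  The function name is a placeholder and
   is not part of the allocator.  */
#if 0
static int
example_register_move_cost (machine_mode mode, enum reg_class from,
			    enum reg_class to)
{
  /* ira_init_register_move_cost asserts the table is not yet allocated,
     so only call it when the entry for MODE is still NULL.  */
  if (ira_register_move_cost[mode] == NULL)
    ira_init_register_move_cost (mode);
  return ira_register_move_cost[mode][from][to];
}
#endif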
1692 /* This is called once per compiler run.  It sets up
1693 arrays whose values do not depend on the function
1694 being compiled.  */
1695 void
1696 ira_init_once (void)
1698 ira_init_costs_once ();
1699 lra_init_once ();
1702 /* Free ira_register_move_cost, ira_may_move_in_cost and
1703 ira_may_move_out_cost for each mode.  */
1704 void
1705 target_ira_int::free_register_move_costs (void)
1707 int mode, i;
1709 /* Reset move_cost and friends, making sure we only free shared
1710 table entries once. */
1711 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
1712 if (x_ira_register_move_cost[mode])
1714 for (i = 0;
1715 i < mode && (x_ira_register_move_cost[i]
1716 != x_ira_register_move_cost[mode]);
1717 i++)
1719 if (i == mode)
1721 free (x_ira_register_move_cost[mode]);
1722 free (x_ira_may_move_in_cost[mode]);
1723 free (x_ira_may_move_out_cost[mode]);
1726 memset (x_ira_register_move_cost, 0, sizeof x_ira_register_move_cost);
1727 memset (x_ira_may_move_in_cost, 0, sizeof x_ira_may_move_in_cost);
1728 memset (x_ira_may_move_out_cost, 0, sizeof x_ira_may_move_out_cost);
1729 last_mode_for_init_move_cost = -1;
1732 target_ira_int::~target_ira_int ()
1734 free_ira_costs ();
1735 free_register_move_costs ();
1738 /* This is called every time the register-related information
1739 is changed.  */
1740 void
1741 ira_init (void)
1743 this_target_ira_int->free_register_move_costs ();
1744 setup_reg_mode_hard_regset ();
1745 setup_alloc_regs (flag_omit_frame_pointer != 0);
1746 setup_class_subset_and_memory_move_costs ();
1747 setup_reg_class_nregs ();
1748 setup_prohibited_class_mode_regs ();
1749 find_reg_classes ();
1750 clarify_prohibited_class_mode_regs ();
1751 setup_hard_regno_aclass ();
1752 ira_init_costs ();
1756 #define ira_prohibited_mode_move_regs_initialized_p \
1757 (this_target_ira_int->x_ira_prohibited_mode_move_regs_initialized_p)
1759 /* Set up IRA_PROHIBITED_MODE_MOVE_REGS. */
1760 static void
1761 setup_prohibited_mode_move_regs (void)
1763 int i, j;
1764 rtx test_reg1, test_reg2, move_pat;
1765 rtx_insn *move_insn;
1767 if (ira_prohibited_mode_move_regs_initialized_p)
1768 return;
1769 ira_prohibited_mode_move_regs_initialized_p = true;
1770 test_reg1 = gen_rtx_REG (VOIDmode, 0);
1771 test_reg2 = gen_rtx_REG (VOIDmode, 0);
1772 move_pat = gen_rtx_SET (VOIDmode, test_reg1, test_reg2);
1773 move_insn = gen_rtx_INSN (VOIDmode, 0, 0, 0, move_pat, 0, -1, 0);
1774 for (i = 0; i < NUM_MACHINE_MODES; i++)
1776 SET_HARD_REG_SET (ira_prohibited_mode_move_regs[i]);
1777 for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
1779 if (! HARD_REGNO_MODE_OK (j, (machine_mode) i))
1780 continue;
1781 SET_REGNO_RAW (test_reg1, j);
1782 PUT_MODE (test_reg1, (machine_mode) i);
1783 SET_REGNO_RAW (test_reg2, j);
1784 PUT_MODE (test_reg2, (machine_mode) i);
1785 INSN_CODE (move_insn) = -1;
1786 recog_memoized (move_insn);
1787 if (INSN_CODE (move_insn) < 0)
1788 continue;
1789 extract_insn (move_insn);
1790 /* We don't know whether the move will be in code that is optimized
1791 for size or speed, so consider all enabled alternatives. */
1792 if (! constrain_operands (1, get_enabled_alternatives (move_insn)))
1793 continue;
1794 CLEAR_HARD_REG_BIT (ira_prohibited_mode_move_regs[i], j);
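/* Editorial note (illustrative): the dummy pattern built above is in effect
   (set (reg:M j) (reg:M j)) for each mode M and hard register J; a register
   stays in ira_prohibited_mode_move_regs[M] when no move insn for that
   self-copy can be recognized and constrained, i.e. when the target cannot
   move mode M values directly through that register.  */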
1801 /* Setup possible alternatives in ALTS for INSN. */
1802 void
1803 ira_setup_alts (rtx_insn *insn, HARD_REG_SET &alts)
1805 /* Map operand number * n_alternatives + alternative number -> start
1806 of the constraint string for that operand in that alternative.  */
1807 static vec<const char *> insn_constraints;
1808 int nop, nalt;
1809 bool curr_swapped;
1810 const char *p;
1811 rtx op;
1812 int commutative = -1;
1814 extract_insn (insn);
1815 alternative_mask preferred = get_preferred_alternatives (insn);
1816 CLEAR_HARD_REG_SET (alts);
1817 insn_constraints.release ();
1818 insn_constraints.safe_grow_cleared (recog_data.n_operands
1819 * recog_data.n_alternatives + 1);
1820 /* Check that the hard reg set has enough bits to hold all
1821 alternatives.  It is hard to imagine a situation in which the
1822 assertion would fail.  */
1823 ira_assert (recog_data.n_alternatives
1824 <= (int) MAX (sizeof (HARD_REG_ELT_TYPE) * CHAR_BIT,
1825 FIRST_PSEUDO_REGISTER));
1826 for (curr_swapped = false;; curr_swapped = true)
1828 /* Calculate some data common for all alternatives to speed up the
1829 function. */
1830 for (nop = 0; nop < recog_data.n_operands; nop++)
1832 for (nalt = 0, p = recog_data.constraints[nop];
1833 nalt < recog_data.n_alternatives;
1834 nalt++)
1836 insn_constraints[nop * recog_data.n_alternatives + nalt] = p;
1837 while (*p && *p != ',')
1838 p++;
1839 if (*p)
1840 p++;
1843 for (nalt = 0; nalt < recog_data.n_alternatives; nalt++)
1845 if (!TEST_BIT (preferred, nalt)
1846 || TEST_HARD_REG_BIT (alts, nalt))
1847 continue;
1849 for (nop = 0; nop < recog_data.n_operands; nop++)
1851 int c, len;
1853 op = recog_data.operand[nop];
1854 p = insn_constraints[nop * recog_data.n_alternatives + nalt];
1855 if (*p == 0 || *p == ',')
1856 continue;
1859 switch (c = *p, len = CONSTRAINT_LEN (c, p), c)
1861 case '#':
1862 case ',':
1863 c = '\0';
1864 case '\0':
1865 len = 0;
1866 break;
1868 case '%':
1869 /* We only support one commutative marker, the
1870 first one; any later '%' markers are
1871 ignored.  */
1872 if (commutative < 0)
1873 commutative = nop;
1874 break;
1876 case '0': case '1': case '2': case '3': case '4':
1877 case '5': case '6': case '7': case '8': case '9':
1878 goto op_success;
1879 break;
1881 case 'g':
1882 goto op_success;
1883 break;
1885 default:
1887 enum constraint_num cn = lookup_constraint (p);
1888 switch (get_constraint_type (cn))
1890 case CT_REGISTER:
1891 if (reg_class_for_constraint (cn) != NO_REGS)
1892 goto op_success;
1893 break;
1895 case CT_CONST_INT:
1896 if (CONST_INT_P (op)
1897 && (insn_const_int_ok_for_constraint
1898 (INTVAL (op), cn)))
1899 goto op_success;
1900 break;
1902 case CT_ADDRESS:
1903 case CT_MEMORY:
1904 goto op_success;
1906 case CT_FIXED_FORM:
1907 if (constraint_satisfied_p (op, cn))
1908 goto op_success;
1909 break;
1911 break;
1914 while (p += len, c);
1915 break;
1916 op_success:
1919 if (nop >= recog_data.n_operands)
1920 SET_HARD_REG_BIT (alts, nalt);
1922 if (commutative < 0)
1923 break;
1924 if (curr_swapped)
1925 break;
1926 op = recog_data.operand[commutative];
1927 recog_data.operand[commutative] = recog_data.operand[commutative + 1];
1928 recog_data.operand[commutative + 1] = op;
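/* Editorial example (illustrative): for an insn with two operands and two
   alternatives whose constraint strings are "r,m" and "rI,rI", the parsing
   loop above records

     insn_constraints[0 * 2 + 0] -> "r,m"    (points at the 'r')
     insn_constraints[0 * 2 + 1] -> "m"      (just past the comma)
     insn_constraints[1 * 2 + 0] -> "rI,rI"
     insn_constraints[1 * 2 + 1] -> "rI"

   and an alternative is added to ALTS only when every operand finds a
   constraint the checks above consider satisfiable.  */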
1933 /* Return the number of the output non-early-clobber operand which
1934 should in any case be the same as the operand with number OP_NUM (or
1935 a negative value if there is no such operand).  The function takes
1936 only really possible alternatives into consideration.  */
1937 int
1938 ira_get_dup_out_num (int op_num, HARD_REG_SET &alts)
1940 int curr_alt, c, original, dup;
1941 bool ignore_p, use_commut_op_p;
1942 const char *str;
1944 if (op_num < 0 || recog_data.n_alternatives == 0)
1945 return -1;
1946 /* We should find duplications only for input operands. */
1947 if (recog_data.operand_type[op_num] != OP_IN)
1948 return -1;
1949 str = recog_data.constraints[op_num];
1950 use_commut_op_p = false;
1951 for (;;)
1953 rtx op = recog_data.operand[op_num];
1955 for (curr_alt = 0, ignore_p = !TEST_HARD_REG_BIT (alts, curr_alt),
1956 original = -1;;)
1958 c = *str;
1959 if (c == '\0')
1960 break;
1961 if (c == '#')
1962 ignore_p = true;
1963 else if (c == ',')
1965 curr_alt++;
1966 ignore_p = !TEST_HARD_REG_BIT (alts, curr_alt);
1968 else if (! ignore_p)
1969 switch (c)
1971 case 'g':
1972 goto fail;
1973 default:
1975 enum constraint_num cn = lookup_constraint (str);
1976 enum reg_class cl = reg_class_for_constraint (cn);
1977 if (cl != NO_REGS
1978 && !targetm.class_likely_spilled_p (cl))
1979 goto fail;
1980 if (constraint_satisfied_p (op, cn))
1981 goto fail;
1982 break;
1985 case '0': case '1': case '2': case '3': case '4':
1986 case '5': case '6': case '7': case '8': case '9':
1987 if (original != -1 && original != c)
1988 goto fail;
1989 original = c;
1990 break;
1992 str += CONSTRAINT_LEN (c, str);
1994 if (original == -1)
1995 goto fail;
1996 dup = -1;
1997 for (ignore_p = false, str = recog_data.constraints[original - '0'];
1998 *str != 0;
1999 str++)
2000 if (ignore_p)
2002 if (*str == ',')
2003 ignore_p = false;
2005 else if (*str == '#')
2006 ignore_p = true;
2007 else if (! ignore_p)
2009 if (*str == '=')
2010 dup = original - '0';
2011 /* It is better to ignore an alternative with an early clobber.  */
2012 else if (*str == '&')
2013 goto fail;
2015 if (dup >= 0)
2016 return dup;
2017 fail:
2018 if (use_commut_op_p)
2019 break;
2020 use_commut_op_p = true;
2021 if (recog_data.constraints[op_num][0] == '%')
2022 str = recog_data.constraints[op_num + 1];
2023 else if (op_num > 0 && recog_data.constraints[op_num - 1][0] == '%')
2024 str = recog_data.constraints[op_num - 1];
2025 else
2026 break;
2028 return -1;
2033 /* Search forward to see if the source register of a copy insn dies
2034 before either it or the destination register is modified, but don't
2035 scan past the end of the basic block. If so, we can replace the
2036 source with the destination and let the source die in the copy
2037 insn.
2039 This will reduce the number of registers live in that range and may
2040 enable coalescing of the destination and the source, thus often saving
2041 one register in addition to eliminating a register-register copy.  */
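/* Editorial example (illustrative pseudo-RTL, not taken from any target):

     (set (reg 200) (reg 100))              ;; the copy insn
     ...
     (set (reg 300) (plus (reg 100) ...))   ;; last use of reg 100

   becomes

     (set (reg 200) (reg 100))              ;; reg 100 now dies here
     ...
     (set (reg 300) (plus (reg 200) ...))

   so reg 100 and reg 200 no longer conflict and can be coalesced.  */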
2043 static void
2044 decrease_live_ranges_number (void)
2046 basic_block bb;
2047 rtx_insn *insn;
2048 rtx set, src, dest, dest_death, q, note;
2049 rtx_insn *p;
2050 int sregno, dregno;
2052 if (! flag_expensive_optimizations)
2053 return;
2055 if (ira_dump_file)
2056 fprintf (ira_dump_file, "Starting decreasing number of live ranges...\n");
2058 FOR_EACH_BB_FN (bb, cfun)
2059 FOR_BB_INSNS (bb, insn)
2061 set = single_set (insn);
2062 if (! set)
2063 continue;
2064 src = SET_SRC (set);
2065 dest = SET_DEST (set);
2066 if (! REG_P (src) || ! REG_P (dest)
2067 || find_reg_note (insn, REG_DEAD, src))
2068 continue;
2069 sregno = REGNO (src);
2070 dregno = REGNO (dest);
2072 /* We don't want to mess with hard regs if register classes
2073 are small. */
2074 if (sregno == dregno
2075 || (targetm.small_register_classes_for_mode_p (GET_MODE (src))
2076 && (sregno < FIRST_PSEUDO_REGISTER
2077 || dregno < FIRST_PSEUDO_REGISTER))
2078 /* We don't see all updates to SP if they are in an
2079 auto-inc memory reference, so we must disallow this
2080 optimization on them. */
2081 || sregno == STACK_POINTER_REGNUM
2082 || dregno == STACK_POINTER_REGNUM)
2083 continue;
2085 dest_death = NULL_RTX;
2087 for (p = NEXT_INSN (insn); p; p = NEXT_INSN (p))
2089 if (! INSN_P (p))
2090 continue;
2091 if (BLOCK_FOR_INSN (p) != bb)
2092 break;
2094 if (reg_set_p (src, p) || reg_set_p (dest, p)
2095 /* If SRC is an asm-declared register, it must not be
2096 replaced in any asm. Unfortunately, the REG_EXPR
2097 tree for the asm variable may be absent in the SRC
2098 rtx, so we can't check the actual register
2099 declaration easily (the asm operand will have it,
2100 though). To avoid complicating the test for a rare
2101 case, we just don't perform register replacement
2102 for a hard reg mentioned in an asm. */
2103 || (sregno < FIRST_PSEUDO_REGISTER
2104 && asm_noperands (PATTERN (p)) >= 0
2105 && reg_overlap_mentioned_p (src, PATTERN (p)))
2106 /* Don't change hard registers used by a call. */
2107 || (CALL_P (p) && sregno < FIRST_PSEUDO_REGISTER
2108 && find_reg_fusage (p, USE, src))
2109 /* Don't change a USE of a register. */
2110 || (GET_CODE (PATTERN (p)) == USE
2111 && reg_overlap_mentioned_p (src, XEXP (PATTERN (p), 0))))
2112 break;
2114 /* See if all of SRC dies in P. This test is slightly
2115 more conservative than it needs to be. */
2116 if ((note = find_regno_note (p, REG_DEAD, sregno))
2117 && GET_MODE (XEXP (note, 0)) == GET_MODE (src))
2119 int failed = 0;
2121 /* We can do the optimization. Scan forward from INSN
2122 again, replacing regs as we go. Set FAILED if a
2123 replacement can't be done. In that case, we can't
2124 move the death note for SRC. This should be
2125 rare. */
2127 /* Scan real insns from just after INSN up to and including P.  */
2128 for (q = next_real_insn (insn);
2129 q != next_real_insn (p);
2130 q = next_real_insn (q))
2132 if (reg_overlap_mentioned_p (src, PATTERN (q)))
2134 /* If SRC is a hard register, we might miss
2135 some overlapping registers with
2136 validate_replace_rtx, so we would have to
2137 undo it. We can't if DEST is present in
2138 the insn, so fail in that combination of
2139 cases. */
2140 if (sregno < FIRST_PSEUDO_REGISTER
2141 && reg_mentioned_p (dest, PATTERN (q)))
2142 failed = 1;
2144 /* Attempt to replace all uses. */
2145 else if (!validate_replace_rtx (src, dest, q))
2146 failed = 1;
2148 /* If this succeeded, but some part of the
2149 register is still present, undo the
2150 replacement. */
2151 else if (sregno < FIRST_PSEUDO_REGISTER
2152 && reg_overlap_mentioned_p (src, PATTERN (q)))
2154 validate_replace_rtx (dest, src, q);
2155 failed = 1;
2159 /* If DEST dies here, remove the death note and
2160 save it for later. Make sure ALL of DEST dies
2161 here; again, this is overly conservative. */
2162 if (! dest_death
2163 && (dest_death = find_regno_note (q, REG_DEAD, dregno)))
2165 if (GET_MODE (XEXP (dest_death, 0)) == GET_MODE (dest))
2166 remove_note (q, dest_death);
2167 else
2169 failed = 1;
2170 dest_death = 0;
2175 if (! failed)
2177 /* Move death note of SRC from P to INSN. */
2178 remove_note (p, note);
2179 XEXP (note, 1) = REG_NOTES (insn);
2180 REG_NOTES (insn) = note;
2183 /* DEST is also dead if INSN has a REG_UNUSED note for
2184 DEST. */
2185 if (! dest_death
2186 && (dest_death
2187 = find_regno_note (insn, REG_UNUSED, dregno)))
2189 PUT_REG_NOTE_KIND (dest_death, REG_DEAD);
2190 remove_note (insn, dest_death);
2193 /* Put death note of DEST on P if we saw it die. */
2194 if (dest_death)
2196 XEXP (dest_death, 1) = REG_NOTES (p);
2197 REG_NOTES (p) = dest_death;
2199 break;
2202 /* If SRC is a hard register which is set or killed in
2203 some other way, we can't do this optimization. */
2204 else if (sregno < FIRST_PSEUDO_REGISTER && dead_or_set_p (p, src))
2205 break;
2212 /* Return nonzero if REGNO is a particularly bad choice for reloading X. */
2213 static bool
2214 ira_bad_reload_regno_1 (int regno, rtx x)
2216 int x_regno, n, i;
2217 ira_allocno_t a;
2218 enum reg_class pref;
2220 /* We only deal with pseudo regs. */
2221 if (! x || GET_CODE (x) != REG)
2222 return false;
2224 x_regno = REGNO (x);
2225 if (x_regno < FIRST_PSEUDO_REGISTER)
2226 return false;
2228 /* If the pseudo prefers REGNO explicitly, then do not consider
2229 REGNO a bad spill choice. */
2230 pref = reg_preferred_class (x_regno);
2231 if (reg_class_size[pref] == 1)
2232 return !TEST_HARD_REG_BIT (reg_class_contents[pref], regno);
2234 /* If the pseudo conflicts with REGNO, then we consider REGNO a
2235 poor choice for a reload regno. */
2236 a = ira_regno_allocno_map[x_regno];
2237 n = ALLOCNO_NUM_OBJECTS (a);
2238 for (i = 0; i < n; i++)
2240 ira_object_t obj = ALLOCNO_OBJECT (a, i);
2241 if (TEST_HARD_REG_BIT (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), regno))
2242 return true;
2244 return false;
2247 /* Return nonzero if REGNO is a particularly bad choice for reloading
2248 IN or OUT. */
2249 bool
2250 ira_bad_reload_regno (int regno, rtx in, rtx out)
2252 return (ira_bad_reload_regno_1 (regno, in)
2253 || ira_bad_reload_regno_1 (regno, out));
2256 /* Add register clobbers from asm statements. */
2257 static void
2258 compute_regs_asm_clobbered (void)
2260 basic_block bb;
2262 FOR_EACH_BB_FN (bb, cfun)
2264 rtx_insn *insn;
2265 FOR_BB_INSNS_REVERSE (bb, insn)
2267 df_ref def;
2269 if (NONDEBUG_INSN_P (insn) && extract_asm_operands (PATTERN (insn)))
2270 FOR_EACH_INSN_DEF (def, insn)
2272 unsigned int dregno = DF_REF_REGNO (def);
2273 if (HARD_REGISTER_NUM_P (dregno))
2274 add_to_hard_reg_set (&crtl->asm_clobbers,
2275 GET_MODE (DF_REF_REAL_REG (def)),
2276 dregno);
2283 /* Set up ELIMINABLE_REGSET, IRA_NO_ALLOC_REGS, and
2284 REGS_EVER_LIVE. */
2285 void
2286 ira_setup_eliminable_regset (void)
2288 #ifdef ELIMINABLE_REGS
2289 int i;
2290 static const struct {const int from, to; } eliminables[] = ELIMINABLE_REGS;
2291 #endif
2292 /* FIXME: If EXIT_IGNORE_STACK is set, we will not save and restore
2293 sp for alloca. So we can't eliminate the frame pointer in that
2294 case. At some point, we should improve this by emitting the
2295 sp-adjusting insns for this case. */
2296 frame_pointer_needed
2297 = (! flag_omit_frame_pointer
2298 || (cfun->calls_alloca && EXIT_IGNORE_STACK)
2299 /* We need the frame pointer to catch stack overflow exceptions
2300 if the stack pointer is moving. */
2301 || (flag_stack_check && STACK_CHECK_MOVING_SP)
2302 || crtl->accesses_prior_frames
2303 || (SUPPORTS_STACK_ALIGNMENT && crtl->stack_realign_needed)
2304 /* We need a frame pointer for all Cilk Plus functions that use
2305 Cilk keywords. */
2306 || (flag_cilkplus && cfun->is_cilk_function)
2307 || targetm.frame_pointer_required ());
2309 /* The chance that FRAME_POINTER_NEEDED will change after inspecting
2310 the RTL is very small.  So if we decide to use the frame pointer for
2311 RA and the RTL later turns out to prevent this, LRA will spill the
2312 pseudos assigned to the frame pointer.  */
2314 if (frame_pointer_needed)
2315 df_set_regs_ever_live (HARD_FRAME_POINTER_REGNUM, true);
2317 COPY_HARD_REG_SET (ira_no_alloc_regs, no_unit_alloc_regs);
2318 CLEAR_HARD_REG_SET (eliminable_regset);
2320 compute_regs_asm_clobbered ();
2322 /* Build the regset of all eliminable registers and show we can't
2323 use those that we already know won't be eliminated. */
2324 #ifdef ELIMINABLE_REGS
2325 for (i = 0; i < (int) ARRAY_SIZE (eliminables); i++)
2327 bool cannot_elim
2328 = (! targetm.can_eliminate (eliminables[i].from, eliminables[i].to)
2329 || (eliminables[i].to == STACK_POINTER_REGNUM && frame_pointer_needed));
2331 if (!TEST_HARD_REG_BIT (crtl->asm_clobbers, eliminables[i].from))
2333 SET_HARD_REG_BIT (eliminable_regset, eliminables[i].from);
2335 if (cannot_elim)
2336 SET_HARD_REG_BIT (ira_no_alloc_regs, eliminables[i].from);
2338 else if (cannot_elim)
2339 error ("%s cannot be used in asm here",
2340 reg_names[eliminables[i].from]);
2341 else
2342 df_set_regs_ever_live (eliminables[i].from, true);
2344 if (!HARD_FRAME_POINTER_IS_FRAME_POINTER)
2346 if (!TEST_HARD_REG_BIT (crtl->asm_clobbers, HARD_FRAME_POINTER_REGNUM))
2348 SET_HARD_REG_BIT (eliminable_regset, HARD_FRAME_POINTER_REGNUM);
2349 if (frame_pointer_needed)
2350 SET_HARD_REG_BIT (ira_no_alloc_regs, HARD_FRAME_POINTER_REGNUM);
2352 else if (frame_pointer_needed)
2353 error ("%s cannot be used in asm here",
2354 reg_names[HARD_FRAME_POINTER_REGNUM]);
2355 else
2356 df_set_regs_ever_live (HARD_FRAME_POINTER_REGNUM, true);
2359 #else
2360 if (!TEST_HARD_REG_BIT (crtl->asm_clobbers, HARD_FRAME_POINTER_REGNUM))
2362 SET_HARD_REG_BIT (eliminable_regset, FRAME_POINTER_REGNUM);
2363 if (frame_pointer_needed)
2364 SET_HARD_REG_BIT (ira_no_alloc_regs, FRAME_POINTER_REGNUM);
2366 else if (frame_pointer_needed)
2367 error ("%s cannot be used in asm here", reg_names[FRAME_POINTER_REGNUM]);
2368 else
2369 df_set_regs_ever_live (FRAME_POINTER_REGNUM, true);
2370 #endif
2375 /* Vector of substitutions of register numbers,
2376 used to map pseudo regs into hardware regs.
2377 This is set up as a result of register allocation.
2378 Element N is the hard reg assigned to pseudo reg N,
2379 or is -1 if no hard reg was assigned.
2380 If N is a hard reg number, element N is N. */
2381 short *reg_renumber;
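/* Editorial example (illustrative): after allocation a consumer can map a
   pseudo to its assigned hard register with something like

     int hard_regno = reg_renumber[REGNO (x)];
     if (hard_regno < 0)
       ... the pseudo got no hard register ...

   where X is assumed to be a REG rtx for a pseudo register.  */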
2383 /* Set up REG_RENUMBER and CALLER_SAVE_NEEDED (used by reload) from
2384 the allocation found by IRA. */
2385 static void
2386 setup_reg_renumber (void)
2388 int regno, hard_regno;
2389 ira_allocno_t a;
2390 ira_allocno_iterator ai;
2392 caller_save_needed = 0;
2393 FOR_EACH_ALLOCNO (a, ai)
2395 if (ira_use_lra_p && ALLOCNO_CAP_MEMBER (a) != NULL)
2396 continue;
2397 /* There are no caps at this point. */
2398 ira_assert (ALLOCNO_CAP_MEMBER (a) == NULL);
2399 if (! ALLOCNO_ASSIGNED_P (a))
2400 /* It can happen if A is not referenced but partially anticipated
2401 somewhere in a region. */
2402 ALLOCNO_ASSIGNED_P (a) = true;
2403 ira_free_allocno_updated_costs (a);
2404 hard_regno = ALLOCNO_HARD_REGNO (a);
2405 regno = ALLOCNO_REGNO (a);
2406 reg_renumber[regno] = (hard_regno < 0 ? -1 : hard_regno);
2407 if (hard_regno >= 0)
2409 int i, nwords;
2410 enum reg_class pclass;
2411 ira_object_t obj;
2413 pclass = ira_pressure_class_translate[REGNO_REG_CLASS (hard_regno)];
2414 nwords = ALLOCNO_NUM_OBJECTS (a);
2415 for (i = 0; i < nwords; i++)
2417 obj = ALLOCNO_OBJECT (a, i);
2418 IOR_COMPL_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
2419 reg_class_contents[pclass]);
2421 if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0
2422 && ira_hard_reg_set_intersection_p (hard_regno, ALLOCNO_MODE (a),
2423 call_used_reg_set))
2425 ira_assert (!optimize || flag_caller_saves
2426 || (ALLOCNO_CALLS_CROSSED_NUM (a)
2427 == ALLOCNO_CHEAP_CALLS_CROSSED_NUM (a))
2428 || regno >= ira_reg_equiv_len
2429 || ira_equiv_no_lvalue_p (regno));
2430 caller_save_needed = 1;
2436 /* Set up allocno assignment flags for further allocation
2437 improvements. */
2438 static void
2439 setup_allocno_assignment_flags (void)
2441 int hard_regno;
2442 ira_allocno_t a;
2443 ira_allocno_iterator ai;
2445 FOR_EACH_ALLOCNO (a, ai)
2447 if (! ALLOCNO_ASSIGNED_P (a))
2448 /* It can happen if A is not referenced but partially anticipated
2449 somewhere in a region. */
2450 ira_free_allocno_updated_costs (a);
2451 hard_regno = ALLOCNO_HARD_REGNO (a);
2452 /* Don't assign hard registers to allocnos which are the destinations
2453 of stores removed at the end of a loop.  It makes no sense to keep
2454 the same value in different hard registers.  It is also
2455 impossible to assign hard registers correctly to such
2456 allocnos because the cost info and the info about intersecting
2457 calls are incorrect for them.  */
2458 ALLOCNO_ASSIGNED_P (a) = (hard_regno >= 0
2459 || ALLOCNO_EMIT_DATA (a)->mem_optimized_dest_p
2460 || (ALLOCNO_MEMORY_COST (a)
2461 - ALLOCNO_CLASS_COST (a)) < 0);
2462 ira_assert
2463 (hard_regno < 0
2464 || ira_hard_reg_in_set_p (hard_regno, ALLOCNO_MODE (a),
2465 reg_class_contents[ALLOCNO_CLASS (a)]));
2469 /* Evaluate overall allocation cost and the costs for using hard
2470 registers and memory for allocnos. */
2471 static void
2472 calculate_allocation_cost (void)
2474 int hard_regno, cost;
2475 ira_allocno_t a;
2476 ira_allocno_iterator ai;
2478 ira_overall_cost = ira_reg_cost = ira_mem_cost = 0;
2479 FOR_EACH_ALLOCNO (a, ai)
2481 hard_regno = ALLOCNO_HARD_REGNO (a);
2482 ira_assert (hard_regno < 0
2483 || (ira_hard_reg_in_set_p
2484 (hard_regno, ALLOCNO_MODE (a),
2485 reg_class_contents[ALLOCNO_CLASS (a)])));
2486 if (hard_regno < 0)
2488 cost = ALLOCNO_MEMORY_COST (a);
2489 ira_mem_cost += cost;
2491 else if (ALLOCNO_HARD_REG_COSTS (a) != NULL)
2493 cost = (ALLOCNO_HARD_REG_COSTS (a)
2494 [ira_class_hard_reg_index
2495 [ALLOCNO_CLASS (a)][hard_regno]]);
2496 ira_reg_cost += cost;
2498 else
2500 cost = ALLOCNO_CLASS_COST (a);
2501 ira_reg_cost += cost;
2503 ira_overall_cost += cost;
2506 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
2508 fprintf (ira_dump_file,
2509 "+++Costs: overall %"PRId64
2510 ", reg %"PRId64
2511 ", mem %"PRId64
2512 ", ld %"PRId64
2513 ", st %"PRId64
2514 ", move %"PRId64,
2515 ira_overall_cost, ira_reg_cost, ira_mem_cost,
2516 ira_load_cost, ira_store_cost, ira_shuffle_cost);
2517 fprintf (ira_dump_file, "\n+++ move loops %d, new jumps %d\n",
2518 ira_move_loops_num, ira_additional_jumps_num);
2523 #ifdef ENABLE_IRA_CHECKING
2524 /* Check the correctness of the allocation.  We do need this because
2525 of the complicated code that transforms the internal representation
2526 for more than one region into a one-region representation.  */
2527 static void
2528 check_allocation (void)
2530 ira_allocno_t a;
2531 int hard_regno, nregs, conflict_nregs;
2532 ira_allocno_iterator ai;
2534 FOR_EACH_ALLOCNO (a, ai)
2536 int n = ALLOCNO_NUM_OBJECTS (a);
2537 int i;
2539 if (ALLOCNO_CAP_MEMBER (a) != NULL
2540 || (hard_regno = ALLOCNO_HARD_REGNO (a)) < 0)
2541 continue;
2542 nregs = hard_regno_nregs[hard_regno][ALLOCNO_MODE (a)];
2543 if (nregs == 1)
2544 /* We allocated a single hard register. */
2545 n = 1;
2546 else if (n > 1)
2547 /* We allocated multiple hard registers, and we will test
2548 conflicts in a granularity of single hard regs. */
2549 nregs = 1;
2551 for (i = 0; i < n; i++)
2553 ira_object_t obj = ALLOCNO_OBJECT (a, i);
2554 ira_object_t conflict_obj;
2555 ira_object_conflict_iterator oci;
2556 int this_regno = hard_regno;
2557 if (n > 1)
2559 if (REG_WORDS_BIG_ENDIAN)
2560 this_regno += n - i - 1;
2561 else
2562 this_regno += i;
2564 FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
2566 ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
2567 int conflict_hard_regno = ALLOCNO_HARD_REGNO (conflict_a);
2568 if (conflict_hard_regno < 0)
2569 continue;
2571 conflict_nregs
2572 = (hard_regno_nregs
2573 [conflict_hard_regno][ALLOCNO_MODE (conflict_a)]);
2575 if (ALLOCNO_NUM_OBJECTS (conflict_a) > 1
2576 && conflict_nregs == ALLOCNO_NUM_OBJECTS (conflict_a))
2578 if (REG_WORDS_BIG_ENDIAN)
2579 conflict_hard_regno += (ALLOCNO_NUM_OBJECTS (conflict_a)
2580 - OBJECT_SUBWORD (conflict_obj) - 1);
2581 else
2582 conflict_hard_regno += OBJECT_SUBWORD (conflict_obj);
2583 conflict_nregs = 1;
2586 if ((conflict_hard_regno <= this_regno
2587 && this_regno < conflict_hard_regno + conflict_nregs)
2588 || (this_regno <= conflict_hard_regno
2589 && conflict_hard_regno < this_regno + nregs))
2591 fprintf (stderr, "bad allocation for %d and %d\n",
2592 ALLOCNO_REGNO (a), ALLOCNO_REGNO (conflict_a));
2593 gcc_unreachable ();
2599 #endif
2601 /* Allocate REG_EQUIV_INIT.  Set it up from IRA_REG_EQUIV, which should
2602 already be calculated.  */
2603 static void
2604 setup_reg_equiv_init (void)
2606 int i;
2607 int max_regno = max_reg_num ();
2609 for (i = 0; i < max_regno; i++)
2610 reg_equiv_init (i) = ira_reg_equiv[i].init_insns;
2613 /* Update equiv regno from movement of FROM_REGNO to TO_REGNO. INSNS
2614 are insns which were generated for such movement. It is assumed
2615 that FROM_REGNO and TO_REGNO always have the same value at the
2616 point of any move containing such registers. This function is used
2617 to update equiv info for register shuffles on the region borders
2618 and for caller save/restore insns. */
2619 void
2620 ira_update_equiv_info_by_shuffle_insn (int to_regno, int from_regno, rtx_insn *insns)
2622 rtx_insn *insn;
2623 rtx x, note;
2625 if (! ira_reg_equiv[from_regno].defined_p
2626 && (! ira_reg_equiv[to_regno].defined_p
2627 || ((x = ira_reg_equiv[to_regno].memory) != NULL_RTX
2628 && ! MEM_READONLY_P (x))))
2629 return;
2630 insn = insns;
2631 if (NEXT_INSN (insn) != NULL_RTX)
2633 if (! ira_reg_equiv[to_regno].defined_p)
2635 ira_assert (ira_reg_equiv[to_regno].init_insns == NULL_RTX);
2636 return;
2638 ira_reg_equiv[to_regno].defined_p = false;
2639 ira_reg_equiv[to_regno].memory
2640 = ira_reg_equiv[to_regno].constant
2641 = ira_reg_equiv[to_regno].invariant
2642 = ira_reg_equiv[to_regno].init_insns = NULL;
2643 if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
2644 fprintf (ira_dump_file,
2645 " Invalidating equiv info for reg %d\n", to_regno);
2646 return;
2648 /* It is possible that FROM_REGNO still has no equivalence because
2649 in shuffles to_regno<-from_regno and from_regno<-to_regno the 2nd
2650 insn was not processed yet. */
2651 if (ira_reg_equiv[from_regno].defined_p)
2653 ira_reg_equiv[to_regno].defined_p = true;
2654 if ((x = ira_reg_equiv[from_regno].memory) != NULL_RTX)
2656 ira_assert (ira_reg_equiv[from_regno].invariant == NULL_RTX
2657 && ira_reg_equiv[from_regno].constant == NULL_RTX);
2658 ira_assert (ira_reg_equiv[to_regno].memory == NULL_RTX
2659 || rtx_equal_p (ira_reg_equiv[to_regno].memory, x));
2660 ira_reg_equiv[to_regno].memory = x;
2661 if (! MEM_READONLY_P (x))
2662 /* We don't add the insn to the init insn list because a memory
2663 equivalence only records which memory it is better to use
2664 when the pseudo is spilled.  */
2665 return;
2667 else if ((x = ira_reg_equiv[from_regno].constant) != NULL_RTX)
2669 ira_assert (ira_reg_equiv[from_regno].invariant == NULL_RTX);
2670 ira_assert (ira_reg_equiv[to_regno].constant == NULL_RTX
2671 || rtx_equal_p (ira_reg_equiv[to_regno].constant, x));
2672 ira_reg_equiv[to_regno].constant = x;
2674 else
2676 x = ira_reg_equiv[from_regno].invariant;
2677 ira_assert (x != NULL_RTX);
2678 ira_assert (ira_reg_equiv[to_regno].invariant == NULL_RTX
2679 || rtx_equal_p (ira_reg_equiv[to_regno].invariant, x));
2680 ira_reg_equiv[to_regno].invariant = x;
2682 if (find_reg_note (insn, REG_EQUIV, x) == NULL_RTX)
2684 note = set_unique_reg_note (insn, REG_EQUIV, x);
2685 gcc_assert (note != NULL_RTX);
2686 if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
2688 fprintf (ira_dump_file,
2689 " Adding equiv note to insn %u for reg %d ",
2690 INSN_UID (insn), to_regno);
2691 dump_value_slim (ira_dump_file, x, 1);
2692 fprintf (ira_dump_file, "\n");
2696 ira_reg_equiv[to_regno].init_insns
2697 = gen_rtx_INSN_LIST (VOIDmode, insn,
2698 ira_reg_equiv[to_regno].init_insns);
2699 if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
2700 fprintf (ira_dump_file,
2701 " Adding equiv init move insn %u to reg %d\n",
2702 INSN_UID (insn), to_regno);
2705 /* Fix values of array REG_EQUIV_INIT after live range splitting done
2706 by IRA. */
2707 static void
2708 fix_reg_equiv_init (void)
2710 int max_regno = max_reg_num ();
2711 int i, new_regno, max;
2712 rtx x, prev, next, insn, set;
2714 if (max_regno_before_ira < max_regno)
2716 max = vec_safe_length (reg_equivs);
2717 grow_reg_equivs ();
2718 for (i = FIRST_PSEUDO_REGISTER; i < max; i++)
2719 for (prev = NULL_RTX, x = reg_equiv_init (i);
2720 x != NULL_RTX;
2721 x = next)
2723 next = XEXP (x, 1);
2724 insn = XEXP (x, 0);
2725 set = single_set (as_a <rtx_insn *> (insn));
2726 ira_assert (set != NULL_RTX
2727 && (REG_P (SET_DEST (set)) || REG_P (SET_SRC (set))));
2728 if (REG_P (SET_DEST (set))
2729 && ((int) REGNO (SET_DEST (set)) == i
2730 || (int) ORIGINAL_REGNO (SET_DEST (set)) == i))
2731 new_regno = REGNO (SET_DEST (set));
2732 else if (REG_P (SET_SRC (set))
2733 && ((int) REGNO (SET_SRC (set)) == i
2734 || (int) ORIGINAL_REGNO (SET_SRC (set)) == i))
2735 new_regno = REGNO (SET_SRC (set));
2736 else
2737 gcc_unreachable ();
2738 if (new_regno == i)
2739 prev = x;
2740 else
2742 /* Remove the wrong list element. */
2743 if (prev == NULL_RTX)
2744 reg_equiv_init (i) = next;
2745 else
2746 XEXP (prev, 1) = next;
2747 XEXP (x, 1) = reg_equiv_init (new_regno);
2748 reg_equiv_init (new_regno) = x;
2754 #ifdef ENABLE_IRA_CHECKING
2755 /* Print redundant memory-memory copies. */
2756 static void
2757 print_redundant_copies (void)
2759 int hard_regno;
2760 ira_allocno_t a;
2761 ira_copy_t cp, next_cp;
2762 ira_allocno_iterator ai;
2764 FOR_EACH_ALLOCNO (a, ai)
2766 if (ALLOCNO_CAP_MEMBER (a) != NULL)
2767 /* It is a cap. */
2768 continue;
2769 hard_regno = ALLOCNO_HARD_REGNO (a);
2770 if (hard_regno >= 0)
2771 continue;
2772 for (cp = ALLOCNO_COPIES (a); cp != NULL; cp = next_cp)
2773 if (cp->first == a)
2774 next_cp = cp->next_first_allocno_copy;
2775 else
2777 next_cp = cp->next_second_allocno_copy;
2778 if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL
2779 && cp->insn != NULL_RTX
2780 && ALLOCNO_HARD_REGNO (cp->first) == hard_regno)
2781 fprintf (ira_dump_file,
2782 " Redundant move from %d(freq %d):%d\n",
2783 INSN_UID (cp->insn), cp->freq, hard_regno);
2787 #endif
2789 /* Setup preferred and alternative classes for new pseudo-registers
2790 created by IRA starting with START. */
2791 static void
2792 setup_preferred_alternate_classes_for_new_pseudos (int start)
2794 int i, old_regno;
2795 int max_regno = max_reg_num ();
2797 for (i = start; i < max_regno; i++)
2799 old_regno = ORIGINAL_REGNO (regno_reg_rtx[i]);
2800 ira_assert (i != old_regno);
2801 setup_reg_classes (i, reg_preferred_class (old_regno),
2802 reg_alternate_class (old_regno),
2803 reg_allocno_class (old_regno));
2804 if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
2805 fprintf (ira_dump_file,
2806 " New r%d: setting preferred %s, alternative %s\n",
2807 i, reg_class_names[reg_preferred_class (old_regno)],
2808 reg_class_names[reg_alternate_class (old_regno)]);
2813 /* The number of entries allocated in reg_info. */
2814 static int allocated_reg_info_size;
2816 /* Regional allocation can create new pseudo-registers. This function
2817 expands some arrays for pseudo-registers. */
2818 static void
2819 expand_reg_info (void)
2821 int i;
2822 int size = max_reg_num ();
2824 resize_reg_info ();
2825 for (i = allocated_reg_info_size; i < size; i++)
2826 setup_reg_classes (i, GENERAL_REGS, ALL_REGS, GENERAL_REGS);
2827 setup_preferred_alternate_classes_for_new_pseudos (allocated_reg_info_size);
2828 allocated_reg_info_size = size;
2831 /* Return TRUE if the register pressure in the function is too high.
2832 It is used to decide when stack slot sharing is worth doing.  */
2833 static bool
2834 too_high_register_pressure_p (void)
2836 int i;
2837 enum reg_class pclass;
2839 for (i = 0; i < ira_pressure_classes_num; i++)
2841 pclass = ira_pressure_classes[i];
2842 if (ira_loop_tree_root->reg_pressure[pclass] > 10000)
2843 return true;
2845 return false;
2850 /* Indicate that hard register number FROM was eliminated and replaced with
2851 an offset from hard register number TO. The status of hard registers live
2852 at the start of a basic block is updated by replacing a use of FROM with
2853 a use of TO. */
2855 void
2856 mark_elimination (int from, int to)
2858 basic_block bb;
2859 bitmap r;
2861 FOR_EACH_BB_FN (bb, cfun)
2863 r = DF_LR_IN (bb);
2864 if (bitmap_bit_p (r, from))
2866 bitmap_clear_bit (r, from);
2867 bitmap_set_bit (r, to);
2869 if (! df_live)
2870 continue;
2871 r = DF_LIVE_IN (bb);
2872 if (bitmap_bit_p (r, from))
2874 bitmap_clear_bit (r, from);
2875 bitmap_set_bit (r, to);
2882 /* The length of the following array. */
2883 int ira_reg_equiv_len;
2885 /* Equivalence info for each register.  */
2886 struct ira_reg_equiv_s *ira_reg_equiv;
2888 /* Expand ira_reg_equiv if necessary. */
2889 void
2890 ira_expand_reg_equiv (void)
2892 int old = ira_reg_equiv_len;
2894 if (ira_reg_equiv_len > max_reg_num ())
2895 return;
2896 ira_reg_equiv_len = max_reg_num () * 3 / 2 + 1;
2897 ira_reg_equiv
2898 = (struct ira_reg_equiv_s *) xrealloc (ira_reg_equiv,
2899 ira_reg_equiv_len
2900 * sizeof (struct ira_reg_equiv_s));
2901 gcc_assert (old < ira_reg_equiv_len);
2902 memset (ira_reg_equiv + old, 0,
2903 sizeof (struct ira_reg_equiv_s) * (ira_reg_equiv_len - old));
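/* Editorial note (illustrative): the growth policy above allocates
   max_reg_num () * 3 / 2 + 1 entries, so repeated calls grow the table
   geometrically; for example, if max_reg_num () is 1000 the table is
   resized to 1501 entries, and the memset zero-initializes every entry
   beyond the old length.  */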
2906 static void
2907 init_reg_equiv (void)
2909 ira_reg_equiv_len = 0;
2910 ira_reg_equiv = NULL;
2911 ira_expand_reg_equiv ();
2914 static void
2915 finish_reg_equiv (void)
2917 free (ira_reg_equiv);
2922 struct equivalence
2924 /* Set when a REG_EQUIV note is found or created.  Used to
2925 keep track of what memory accesses might be created later,
2926 e.g. by reload.  */
2927 rtx replacement;
2928 rtx *src_p;
2930 /* The list of each instruction which initializes this register.
2932 NULL indicates we know nothing about this register's equivalence
2933 properties.
2935 An INSN_LIST with a NULL insn indicates this pseudo is already
2936 known to not have a valid equivalence. */
2937 rtx_insn_list *init_insns;
2939 /* Loop depth is used to recognize equivalences which appear
2940 to be present within the same loop (or in an inner loop). */
2941 short loop_depth;
2942 /* Nonzero if this had a preexisting REG_EQUIV note. */
2943 unsigned char is_arg_equivalence : 1;
2944 /* Set when an attempt should be made to replace a register
2945 with the associated src_p entry. */
2946 unsigned char replace : 1;
2947 /* Set if this register has no known equivalence. */
2948 unsigned char no_equiv : 1;
2951 /* reg_equiv[N] (where N is a pseudo reg number) is the equivalence
2952 structure for that register. */
2953 static struct equivalence *reg_equiv;
2955 /* Used for communication between the following two functions: contains
2956 a MEM that we wish to ensure remains unchanged. */
2957 static rtx equiv_mem;
2959 /* Set nonzero if EQUIV_MEM is modified. */
2960 static int equiv_mem_modified;
2962 /* If EQUIV_MEM is modified by modifying DEST, indicate that it is modified.
2963 Called via note_stores. */
2964 static void
2965 validate_equiv_mem_from_store (rtx dest, const_rtx set ATTRIBUTE_UNUSED,
2966 void *data ATTRIBUTE_UNUSED)
2968 if ((REG_P (dest)
2969 && reg_overlap_mentioned_p (dest, equiv_mem))
2970 || (MEM_P (dest)
2971 && anti_dependence (equiv_mem, dest)))
2972 equiv_mem_modified = 1;
2975 /* Verify that no store between START and the death of REG invalidates
2976 MEMREF. MEMREF is invalidated by modifying a register used in MEMREF,
2977 by storing into an overlapping memory location, or with a non-const
2978 CALL_INSN.
2980 Return 1 if MEMREF remains valid. */
2981 static int
2982 validate_equiv_mem (rtx_insn *start, rtx reg, rtx memref)
2984 rtx_insn *insn;
2985 rtx note;
2987 equiv_mem = memref;
2988 equiv_mem_modified = 0;
2990 /* If the memory reference has side effects or is volatile, it isn't a
2991 valid equivalence. */
2992 if (side_effects_p (memref))
2993 return 0;
2995 for (insn = start; insn && ! equiv_mem_modified; insn = NEXT_INSN (insn))
2997 if (! INSN_P (insn))
2998 continue;
3000 if (find_reg_note (insn, REG_DEAD, reg))
3001 return 1;
3003 /* This used to ignore readonly memory and const/pure calls. The problem
3004 is the equivalent form may reference a pseudo which gets assigned a
3005 call clobbered hard reg. When we later replace REG with its
3006 equivalent form, the value in the call-clobbered reg has been
3007 changed and all hell breaks loose. */
3008 if (CALL_P (insn))
3009 return 0;
3011 note_stores (PATTERN (insn), validate_equiv_mem_from_store, NULL);
3013 /* If a register mentioned in MEMREF is modified via an
3014 auto-increment, we lose the equivalence. Do the same if one
3015 dies; although we could extend the life, it doesn't seem worth
3016 the trouble. */
3018 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
3019 if ((REG_NOTE_KIND (note) == REG_INC
3020 || REG_NOTE_KIND (note) == REG_DEAD)
3021 && REG_P (XEXP (note, 0))
3022 && reg_overlap_mentioned_p (XEXP (note, 0), memref))
3023 return 0;
3026 return 0;
3029 /* Returns zero if X is known to be invariant. */
3030 static int
3031 equiv_init_varies_p (rtx x)
3033 RTX_CODE code = GET_CODE (x);
3034 int i;
3035 const char *fmt;
3037 switch (code)
3039 case MEM:
3040 return !MEM_READONLY_P (x) || equiv_init_varies_p (XEXP (x, 0));
3042 case CONST:
3043 CASE_CONST_ANY:
3044 case SYMBOL_REF:
3045 case LABEL_REF:
3046 return 0;
3048 case REG:
3049 return reg_equiv[REGNO (x)].replace == 0 && rtx_varies_p (x, 0);
3051 case ASM_OPERANDS:
3052 if (MEM_VOLATILE_P (x))
3053 return 1;
3055 /* Fall through. */
3057 default:
3058 break;
3061 fmt = GET_RTX_FORMAT (code);
3062 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3063 if (fmt[i] == 'e')
3065 if (equiv_init_varies_p (XEXP (x, i)))
3066 return 1;
3068 else if (fmt[i] == 'E')
3070 int j;
3071 for (j = 0; j < XVECLEN (x, i); j++)
3072 if (equiv_init_varies_p (XVECEXP (x, i, j)))
3073 return 1;
3076 return 0;
3079 /* Returns nonzero if X (used to initialize register REGNO) is movable.
3080 X is only movable if the registers it uses have equivalent initializations
3081 that appear to be within the same loop (or in an inner loop) and are
3082 themselves movable, or if they are not candidates for local_alloc and don't vary.  */
3083 static int
3084 equiv_init_movable_p (rtx x, int regno)
3086 int i, j;
3087 const char *fmt;
3088 enum rtx_code code = GET_CODE (x);
3090 switch (code)
3092 case SET:
3093 return equiv_init_movable_p (SET_SRC (x), regno);
3095 case CC0:
3096 case CLOBBER:
3097 return 0;
3099 case PRE_INC:
3100 case PRE_DEC:
3101 case POST_INC:
3102 case POST_DEC:
3103 case PRE_MODIFY:
3104 case POST_MODIFY:
3105 return 0;
3107 case REG:
3108 return ((reg_equiv[REGNO (x)].loop_depth >= reg_equiv[regno].loop_depth
3109 && reg_equiv[REGNO (x)].replace)
3110 || (REG_BASIC_BLOCK (REGNO (x)) < NUM_FIXED_BLOCKS
3111 && ! rtx_varies_p (x, 0)));
3113 case UNSPEC_VOLATILE:
3114 return 0;
3116 case ASM_OPERANDS:
3117 if (MEM_VOLATILE_P (x))
3118 return 0;
3120 /* Fall through. */
3122 default:
3123 break;
3126 fmt = GET_RTX_FORMAT (code);
3127 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3128 switch (fmt[i])
3130 case 'e':
3131 if (! equiv_init_movable_p (XEXP (x, i), regno))
3132 return 0;
3133 break;
3134 case 'E':
3135 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
3136 if (! equiv_init_movable_p (XVECEXP (x, i, j), regno))
3137 return 0;
3138 break;
3141 return 1;
3144 /* TRUE if X uses any registers for which reg_equiv[REGNO].replace is
3145 true. */
3146 static int
3147 contains_replace_regs (rtx x)
3149 int i, j;
3150 const char *fmt;
3151 enum rtx_code code = GET_CODE (x);
3153 switch (code)
3155 case CONST:
3156 case LABEL_REF:
3157 case SYMBOL_REF:
3158 CASE_CONST_ANY:
3159 case PC:
3160 case CC0:
3161 case HIGH:
3162 return 0;
3164 case REG:
3165 return reg_equiv[REGNO (x)].replace;
3167 default:
3168 break;
3171 fmt = GET_RTX_FORMAT (code);
3172 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3173 switch (fmt[i])
3175 case 'e':
3176 if (contains_replace_regs (XEXP (x, i)))
3177 return 1;
3178 break;
3179 case 'E':
3180 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
3181 if (contains_replace_regs (XVECEXP (x, i, j)))
3182 return 1;
3183 break;
3186 return 0;
3189 /* TRUE if X references a memory location that would be affected by a store
3190 to MEMREF. */
3191 static int
3192 memref_referenced_p (rtx memref, rtx x)
3194 int i, j;
3195 const char *fmt;
3196 enum rtx_code code = GET_CODE (x);
3198 switch (code)
3200 case CONST:
3201 case LABEL_REF:
3202 case SYMBOL_REF:
3203 CASE_CONST_ANY:
3204 case PC:
3205 case CC0:
3206 case HIGH:
3207 case LO_SUM:
3208 return 0;
3210 case REG:
3211 return (reg_equiv[REGNO (x)].replacement
3212 && memref_referenced_p (memref,
3213 reg_equiv[REGNO (x)].replacement));
3215 case MEM:
3216 if (true_dependence (memref, VOIDmode, x))
3217 return 1;
3218 break;
3220 case SET:
3221 /* If we are setting a MEM, it doesn't count (its address does), but any
3222 other SET_DEST that has a MEM in it is referencing the MEM. */
3223 if (MEM_P (SET_DEST (x)))
3225 if (memref_referenced_p (memref, XEXP (SET_DEST (x), 0)))
3226 return 1;
3228 else if (memref_referenced_p (memref, SET_DEST (x)))
3229 return 1;
3231 return memref_referenced_p (memref, SET_SRC (x));
3233 default:
3234 break;
3237 fmt = GET_RTX_FORMAT (code);
3238 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3239 switch (fmt[i])
3241 case 'e':
3242 if (memref_referenced_p (memref, XEXP (x, i)))
3243 return 1;
3244 break;
3245 case 'E':
3246 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
3247 if (memref_referenced_p (memref, XVECEXP (x, i, j)))
3248 return 1;
3249 break;
3252 return 0;
3255 /* TRUE if some insn in the range (START, END] references a memory location
3256 that would be affected by a store to MEMREF. */
3257 static int
3258 memref_used_between_p (rtx memref, rtx_insn *start, rtx_insn *end)
3260 rtx_insn *insn;
3262 for (insn = NEXT_INSN (start); insn != NEXT_INSN (end);
3263 insn = NEXT_INSN (insn))
3265 if (!NONDEBUG_INSN_P (insn))
3266 continue;
3268 if (memref_referenced_p (memref, PATTERN (insn)))
3269 return 1;
3271 /* Nonconst functions may access memory. */
3272 if (CALL_P (insn) && (! RTL_CONST_CALL_P (insn)))
3273 return 1;
3276 return 0;
3279 /* Mark REG as having no known equivalence.
3280 Some instructions might have been processed before and furnished
3281 with REG_EQUIV notes for this register; these notes will have to be
3282 removed.
3283 STORE is the piece of RTL that does the non-constant / conflicting
3284 assignment - a SET, CLOBBER or REG_INC note. It is currently not used,
3285 but needs to be there because this function is called from note_stores. */
3286 static void
3287 no_equiv (rtx reg, const_rtx store ATTRIBUTE_UNUSED,
3288 void *data ATTRIBUTE_UNUSED)
3290 int regno;
3291 rtx_insn_list *list;
3293 if (!REG_P (reg))
3294 return;
3295 regno = REGNO (reg);
3296 reg_equiv[regno].no_equiv = 1;
3297 list = reg_equiv[regno].init_insns;
3298 if (list && list->insn () == NULL)
3299 return;
3300 reg_equiv[regno].init_insns = gen_rtx_INSN_LIST (VOIDmode, NULL_RTX, NULL);
3301 reg_equiv[regno].replacement = NULL_RTX;
3302 /* This doesn't matter for equivalences made for argument registers;
3303 we should keep their initialization insns.  */
3304 if (reg_equiv[regno].is_arg_equivalence)
3305 return;
3306 ira_reg_equiv[regno].defined_p = false;
3307 ira_reg_equiv[regno].init_insns = NULL;
3308 for (; list; list = list->next ())
3310 rtx_insn *insn = list->insn ();
3311 remove_note (insn, find_reg_note (insn, REG_EQUIV, NULL_RTX));
3315 /* Scan INSN for paradoxical subregs of pseudo registers and record
3316 each such pseudo in PDX_SUBREGS.  */
3318 static void
3319 set_paradoxical_subreg (rtx_insn *insn, bool *pdx_subregs)
3321 subrtx_iterator::array_type array;
3322 FOR_EACH_SUBRTX (iter, array, PATTERN (insn), NONCONST)
3324 const_rtx subreg = *iter;
3325 if (GET_CODE (subreg) == SUBREG)
3327 const_rtx reg = SUBREG_REG (subreg);
3328 if (REG_P (reg) && paradoxical_subreg_p (subreg))
3329 pdx_subregs[REGNO (reg)] = true;
3334 /* In a DEBUG_INSN location, replace REGs found in the CLEARED_REGS
3335 bitmap with their equivalent replacement.  */
3337 static rtx
3338 adjust_cleared_regs (rtx loc, const_rtx old_rtx ATTRIBUTE_UNUSED, void *data)
3340 if (REG_P (loc))
3342 bitmap cleared_regs = (bitmap) data;
3343 if (bitmap_bit_p (cleared_regs, REGNO (loc)))
3344 return simplify_replace_fn_rtx (copy_rtx (*reg_equiv[REGNO (loc)].src_p),
3345 NULL_RTX, adjust_cleared_regs, data);
3347 return NULL_RTX;
3350 /* Nonzero if we recorded an equivalence for a LABEL_REF. */
3351 static int recorded_label_ref;
3353 /* Find registers that are equivalent to a single value throughout the
3354 compilation (either because they can be referenced in memory or are
3355 set once from a single constant).  Lower their priority for getting
3356 a hard register.
3358 If such a register is only referenced once, try substituting its
3359 value into the using insn. If it succeeds, we can eliminate the
3360 register completely.
3362 Initialize init_insns in ira_reg_equiv array.
3364 Return non-zero if jump label rebuilding should be done. */
3365 static int
3366 update_equiv_regs (void)
3368 rtx_insn *insn;
3369 basic_block bb;
3370 int loop_depth;
3371 bitmap cleared_regs;
3372 bool *pdx_subregs;
3374 /* We need to keep track of whether or not we recorded a LABEL_REF so
3375 that we know if the jump optimizer needs to be rerun. */
3376 recorded_label_ref = 0;
3378 /* Use pdx_subregs to show whether a reg is used in a paradoxical
3379 subreg. */
3380 pdx_subregs = XCNEWVEC (bool, max_regno);
3382 reg_equiv = XCNEWVEC (struct equivalence, max_regno);
3383 grow_reg_equivs ();
3385 init_alias_analysis ();
3387 /* Scan the insns and set pdx_subregs[regno] if the reg is used in a
3388 paradoxical subreg.  Don't set such a reg equivalent to a mem,
3389 because LRA will not substitute such equivalent memory, in order to
3390 prevent access beyond the memory allocated for a paradoxical subreg.  */
3391 FOR_EACH_BB_FN (bb, cfun)
3392 FOR_BB_INSNS (bb, insn)
3393 if (NONDEBUG_INSN_P (insn))
3394 set_paradoxical_subreg (insn, pdx_subregs);
3396 /* Scan the insns and find which registers have equivalences. Do this
3397 in a separate scan of the insns because (due to -fcse-follow-jumps)
3398 a register can be set below its use. */
3399 FOR_EACH_BB_FN (bb, cfun)
3401 loop_depth = bb_loop_depth (bb);
3403 for (insn = BB_HEAD (bb);
3404 insn != NEXT_INSN (BB_END (bb));
3405 insn = NEXT_INSN (insn))
3407 rtx note;
3408 rtx set;
3409 rtx dest, src;
3410 int regno;
3412 if (! INSN_P (insn))
3413 continue;
3415 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
3416 if (REG_NOTE_KIND (note) == REG_INC)
3417 no_equiv (XEXP (note, 0), note, NULL);
3419 set = single_set (insn);
3421 /* If this insn contains more (or less) than a single SET,
3422 only mark all destinations as having no known equivalence. */
3423 if (set == NULL_RTX)
3425 note_stores (PATTERN (insn), no_equiv, NULL);
3426 continue;
3428 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3430 int i;
3432 for (i = XVECLEN (PATTERN (insn), 0) - 1; i >= 0; i--)
3434 rtx part = XVECEXP (PATTERN (insn), 0, i);
3435 if (part != set)
3436 note_stores (part, no_equiv, NULL);
3440 dest = SET_DEST (set);
3441 src = SET_SRC (set);
3443 /* See if this is setting up the equivalence between an argument
3444 register and its stack slot. */
3445 note = find_reg_note (insn, REG_EQUIV, NULL_RTX);
3446 if (note)
3448 gcc_assert (REG_P (dest));
3449 regno = REGNO (dest);
3451 /* Note that we don't want to clear init_insns in
3452 ira_reg_equiv even if there are multiple sets of this
3453 register. */
3454 reg_equiv[regno].is_arg_equivalence = 1;
3456 /* The insn result can have an equivalent memory location even
3457 though the equivalence is not set up by the insn itself.  We add
3458 this insn to the init insns as, for now, a flag that
3459 regno has an equivalence.  We will remove the insn
3460 from the init insn list later.  */
3461 if (rtx_equal_p (src, XEXP (note, 0)) || MEM_P (XEXP (note, 0)))
3462 ira_reg_equiv[regno].init_insns
3463 = gen_rtx_INSN_LIST (VOIDmode, insn,
3464 ira_reg_equiv[regno].init_insns);
3466 /* Continue normally in case this is a candidate for
3467 replacements. */
3470 if (!optimize)
3471 continue;
3473 /* We only handle the case of a pseudo register being set
3474 once, or always to the same value. */
3475 /* ??? The mn10200 port breaks if we add equivalences for
3476 values that need an ADDRESS_REGS register and set them equivalent
3477 to a MEM of a pseudo. The actual problem is in the over-conservative
3478 handling of INPADDR_ADDRESS / INPUT_ADDRESS / INPUT triples in
3479 calculate_needs, but we traditionally work around this problem
3480 here by rejecting equivalences when the destination is in a register
3481 that's likely spilled. This is fragile, of course, since the
3482 preferred class of a pseudo depends on all instructions that set
3483 or use it. */
3485 if (!REG_P (dest)
3486 || (regno = REGNO (dest)) < FIRST_PSEUDO_REGISTER
3487 || (reg_equiv[regno].init_insns
3488 && reg_equiv[regno].init_insns->insn () == NULL)
3489 || (targetm.class_likely_spilled_p (reg_preferred_class (regno))
3490 && MEM_P (src) && ! reg_equiv[regno].is_arg_equivalence))
3492 /* This might be setting a SUBREG of a pseudo, a pseudo that is
3493 also set somewhere else to a constant. */
3494 note_stores (set, no_equiv, NULL);
3495 continue;
3498 /* Don't set reg (if pdx_subregs[regno] == true) equivalent to a mem. */
3499 if (MEM_P (src) && pdx_subregs[regno])
3501 note_stores (set, no_equiv, NULL);
3502 continue;
3505 note = find_reg_note (insn, REG_EQUAL, NULL_RTX);
3507 /* cse sometimes generates function invariants, but doesn't put a
3508 REG_EQUAL note on the insn. Since this note would be redundant,
3509 there's no point creating it earlier than here. */
3510 if (! note && ! rtx_varies_p (src, 0))
3511 note = set_unique_reg_note (insn, REG_EQUAL, copy_rtx (src));
3513 /* Don't bother considering a REG_EQUAL note containing an EXPR_LIST
3514 since it represents a function call. */
3515 if (note && GET_CODE (XEXP (note, 0)) == EXPR_LIST)
3516 note = NULL_RTX;
3518 if (DF_REG_DEF_COUNT (regno) != 1)
3520 bool equal_p = true;
3521 rtx_insn_list *list;
3523 /* If we have already processed this pseudo and determined it
3524 cannot have an equivalence, then honor that decision. */
3525 if (reg_equiv[regno].no_equiv)
3526 continue;
3528 if (! note
3529 || rtx_varies_p (XEXP (note, 0), 0)
3530 || (reg_equiv[regno].replacement
3531 && ! rtx_equal_p (XEXP (note, 0),
3532 reg_equiv[regno].replacement)))
3534 no_equiv (dest, set, NULL);
3535 continue;
3538 list = reg_equiv[regno].init_insns;
3539 for (; list; list = list->next ())
3541 rtx note_tmp;
3542 rtx_insn *insn_tmp;
3544 insn_tmp = list->insn ();
3545 note_tmp = find_reg_note (insn_tmp, REG_EQUAL, NULL_RTX);
3546 gcc_assert (note_tmp);
3547 if (! rtx_equal_p (XEXP (note, 0), XEXP (note_tmp, 0)))
3549 equal_p = false;
3550 break;
3554 if (! equal_p)
3556 no_equiv (dest, set, NULL);
3557 continue;
3561 /* Record this insn as initializing this register. */
3562 reg_equiv[regno].init_insns
3563 = gen_rtx_INSN_LIST (VOIDmode, insn, reg_equiv[regno].init_insns);
3565 /* If this register is known to be equal to a constant, record that
3566 it is always equivalent to the constant. */
3567 if (DF_REG_DEF_COUNT (regno) == 1
3568 && note && ! rtx_varies_p (XEXP (note, 0), 0))
3570 rtx note_value = XEXP (note, 0);
3571 remove_note (insn, note);
3572 set_unique_reg_note (insn, REG_EQUIV, note_value);
3575 /* If this insn introduces a "constant" register, decrease the priority
3576 of that register. Record this insn if the register is only used once
3577 more and the equivalence value is the same as our source.
3579 The latter condition is checked for two reasons: First, it is an
3580 indication that it may be more efficient to actually emit the insn
3581 as written (if no registers are available, reload will substitute
3582 the equivalence). Secondly, it avoids problems with any registers
3583 dying in this insn whose death notes would be missed.
3585 If we don't have a REG_EQUIV note, see if this insn is loading
3586 a register used only in one basic block from a MEM. If so, and the
3587 MEM remains unchanged for the life of the register, add a REG_EQUIV
3588 note. */
3589 note = find_reg_note (insn, REG_EQUIV, NULL_RTX);
3591 if (note == NULL_RTX && REG_BASIC_BLOCK (regno) >= NUM_FIXED_BLOCKS
3592 && MEM_P (SET_SRC (set))
3593 && validate_equiv_mem (insn, dest, SET_SRC (set)))
3594 note = set_unique_reg_note (insn, REG_EQUIV, copy_rtx (SET_SRC (set)));
3596 if (note)
3598 int regno = REGNO (dest);
3599 rtx x = XEXP (note, 0);
3601 /* If we haven't done so, record for reload that this is an
3602 equivalencing insn. */
3603 if (!reg_equiv[regno].is_arg_equivalence)
3604 ira_reg_equiv[regno].init_insns
3605 = gen_rtx_INSN_LIST (VOIDmode, insn,
3606 ira_reg_equiv[regno].init_insns);
3608 /* Record whether or not we created a REG_EQUIV note for a LABEL_REF.
3609 We might end up substituting the LABEL_REF for uses of the
3610 pseudo here or later. That kind of transformation may turn an
3611 indirect jump into a direct jump, in which case we must rerun the
3612 jump optimizer to ensure that the JUMP_LABEL fields are valid. */
3613 if (GET_CODE (x) == LABEL_REF
3614 || (GET_CODE (x) == CONST
3615 && GET_CODE (XEXP (x, 0)) == PLUS
3616 && (GET_CODE (XEXP (XEXP (x, 0), 0)) == LABEL_REF)))
3617 recorded_label_ref = 1;
3619 reg_equiv[regno].replacement = x;
3620 reg_equiv[regno].src_p = &SET_SRC (set);
3621 reg_equiv[regno].loop_depth = (short) loop_depth;
3623 /* Don't mess with things live during setjmp. */
3624 if (REG_LIVE_LENGTH (regno) >= 0 && optimize)
3626 /* Note that the statement below does not affect the priority
3627 in local-alloc! */
3628 REG_LIVE_LENGTH (regno) *= 2;
3630 /* If the register is referenced exactly twice, meaning it is
3631 set once and used once, indicate that the reference may be
3632 replaced by the equivalence we computed above. Do this
3633 even if the register is only used in one block so that
3634 dependencies can be handled where the last register is
3635 used in a different block (i.e. HIGH / LO_SUM sequences)
3636 and to reduce the number of registers alive across
3637 calls. */
3639 if (REG_N_REFS (regno) == 2
3640 && (rtx_equal_p (x, src)
3641 || ! equiv_init_varies_p (src))
3642 && NONJUMP_INSN_P (insn)
3643 && equiv_init_movable_p (PATTERN (insn), regno))
3644 reg_equiv[regno].replace = 1;
3650 if (!optimize)
3651 goto out;
3653 /* A second pass, to gather additional equivalences with memory. This needs
3654 to be done after we know which registers we are going to replace. */
3656 for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
3658 rtx set, src, dest;
3659 unsigned regno;
3661 if (! INSN_P (insn))
3662 continue;
3664 set = single_set (insn);
3665 if (! set)
3666 continue;
3668 dest = SET_DEST (set);
3669 src = SET_SRC (set);
3671 /* If this sets a MEM to the contents of a REG that is only used
3672 in a single basic block, see if the register is always equivalent
3673 to that memory location and if moving the store from INSN to the
3674 insn that set REG is safe. If so, put a REG_EQUIV note on the
3675 initializing insn.
3677 Don't add a REG_EQUIV note if the insn already has one. The existing
3678 REG_EQUIV is likely more useful than the one we are adding.
3680 If one of the regs in the address has reg_equiv[REGNO].replace set,
3681 then we can't add this REG_EQUIV note. The reg_equiv[REGNO].replace
3682 optimization may move the set of this register immediately before
3683 insn, which puts it after reg_equiv[REGNO].init_insns, and hence
3684 the mention in the REG_EQUIV note would be to an uninitialized
3685 pseudo. */
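 /* A hedged sketch of the case handled here (pseudo numbers invented):

        insn A: (set (reg:SI 200) (...))        -- the only def of 200
        insn B: (set (mem:SI X) (reg:SI 200))   -- store in 200's only block

    If the conditions below hold, a REG_EQUIV note for (mem:SI X) is
    attached to insn A, while insn B is recorded in ira_reg_equiv as the
    insn that actually establishes the equivalence. */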
3687 if (MEM_P (dest) && REG_P (src)
3688 && (regno = REGNO (src)) >= FIRST_PSEUDO_REGISTER
3689 && REG_BASIC_BLOCK (regno) >= NUM_FIXED_BLOCKS
3690 && DF_REG_DEF_COUNT (regno) == 1
3691 && reg_equiv[regno].init_insns != NULL
3692 && reg_equiv[regno].init_insns->insn () != NULL
3693 && ! find_reg_note (XEXP (reg_equiv[regno].init_insns, 0),
3694 REG_EQUIV, NULL_RTX)
3695 && ! contains_replace_regs (XEXP (dest, 0))
3696 && ! pdx_subregs[regno])
3698 rtx_insn *init_insn =
3699 as_a <rtx_insn *> (XEXP (reg_equiv[regno].init_insns, 0));
3700 if (validate_equiv_mem (init_insn, src, dest)
3701 && ! memref_used_between_p (dest, init_insn, insn)
3702 /* Attaching a REG_EQUIV note will fail if INIT_INSN has
3703 multiple sets. */
3704 && set_unique_reg_note (init_insn, REG_EQUIV, copy_rtx (dest)))
3706 /* This insn makes the equivalence, not the one initializing
3707 the register. */
3708 ira_reg_equiv[regno].init_insns
3709 = gen_rtx_INSN_LIST (VOIDmode, insn, NULL_RTX);
3710 df_notes_rescan (init_insn);
3715 cleared_regs = BITMAP_ALLOC (NULL);
3716 /* Now scan all regs killed in an insn to see if any of them are
3717 registers that are used only once. If so, see if we can replace the
3718 reference with the equivalent form. If we can, delete the
3719 initializing reference and this register will go away. If we
3720 can't replace the reference, and the initializing reference is
3721 within the same loop (or in an inner loop), then move the register
3722 initialization just before the use, so that they are in the same
3723 basic block. */
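 /* A hedged illustration of the "move" case (numbers invented): if
    reg 300 is initialized by insn A at the same or greater loop depth
    and dies at insn B in another block, and the use in B cannot simply
    be rewritten, A's pattern is re-emitted immediately before B and
    the original insn A is deleted. */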
3724 FOR_EACH_BB_REVERSE_FN (bb, cfun)
3726 loop_depth = bb_loop_depth (bb);
3727 for (insn = BB_END (bb);
3728 insn != PREV_INSN (BB_HEAD (bb));
3729 insn = PREV_INSN (insn))
3731 rtx link;
3733 if (! INSN_P (insn))
3734 continue;
3736 /* Don't substitute into a non-local goto, this confuses CFG. */
3737 if (JUMP_P (insn)
3738 && find_reg_note (insn, REG_NON_LOCAL_GOTO, NULL_RTX))
3739 continue;
3741 for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
3743 if (REG_NOTE_KIND (link) == REG_DEAD
3744 /* Make sure this insn still refers to the register. */
3745 && reg_mentioned_p (XEXP (link, 0), PATTERN (insn)))
3747 int regno = REGNO (XEXP (link, 0));
3748 rtx equiv_insn;
3750 if (! reg_equiv[regno].replace
3751 || reg_equiv[regno].loop_depth < (short) loop_depth
3752 /* There is no point in moving insns if live range
3753 shrinkage or register pressure-sensitive
3754 scheduling was done, because it will not
3755 improve the allocation but will very probably
3756 worsen the insn schedule. */
3757 || flag_live_range_shrinkage
3758 || (flag_sched_pressure && flag_schedule_insns))
3759 continue;
3761 /* reg_equiv[REGNO].replace gets set only when
3762 REG_N_REFS[REGNO] is 2, i.e. the register is set
3763 once and used once. (If it were only set, but
3764 not used, flow would have deleted the setting
3765 insns.) Hence there can only be one insn in
3766 reg_equiv[REGNO].init_insns. */
3767 gcc_assert (reg_equiv[regno].init_insns
3768 && !XEXP (reg_equiv[regno].init_insns, 1));
3769 equiv_insn = XEXP (reg_equiv[regno].init_insns, 0);
3771 /* We may not move instructions that can throw, since
3772 that changes basic block boundaries and we are not
3773 prepared to adjust the CFG to match. */
3774 if (can_throw_internal (equiv_insn))
3775 continue;
3777 if (asm_noperands (PATTERN (equiv_insn)) < 0
3778 && validate_replace_rtx (regno_reg_rtx[regno],
3779 *(reg_equiv[regno].src_p), insn))
3781 rtx equiv_link;
3782 rtx last_link;
3783 rtx note;
3785 /* Find the last note. */
3786 for (last_link = link; XEXP (last_link, 1);
3787 last_link = XEXP (last_link, 1))
3790 /* Append the REG_DEAD notes from equiv_insn. */
3791 equiv_link = REG_NOTES (equiv_insn);
3792 while (equiv_link)
3794 note = equiv_link;
3795 equiv_link = XEXP (equiv_link, 1);
3796 if (REG_NOTE_KIND (note) == REG_DEAD)
3798 remove_note (equiv_insn, note);
3799 XEXP (last_link, 1) = note;
3800 XEXP (note, 1) = NULL_RTX;
3801 last_link = note;
3805 remove_death (regno, insn);
3806 SET_REG_N_REFS (regno, 0);
3807 REG_FREQ (regno) = 0;
3808 delete_insn (equiv_insn);
3810 reg_equiv[regno].init_insns
3811 = reg_equiv[regno].init_insns->next ();
3813 ira_reg_equiv[regno].init_insns = NULL;
3814 bitmap_set_bit (cleared_regs, regno);
3816 /* Move the initialization of the register to just before
3817 INSN. Update the flow information. */
3818 else if (prev_nondebug_insn (insn) != equiv_insn)
3820 rtx_insn *new_insn;
3822 new_insn = emit_insn_before (PATTERN (equiv_insn), insn);
3823 REG_NOTES (new_insn) = REG_NOTES (equiv_insn);
3824 REG_NOTES (equiv_insn) = 0;
3825 /* Rescan it to process the notes. */
3826 df_insn_rescan (new_insn);
3828 /* Make sure this insn is recognized before
3829 reload begins, otherwise
3830 eliminate_regs_in_insn will die. */
3831 INSN_CODE (new_insn) = INSN_CODE (equiv_insn);
3833 delete_insn (equiv_insn);
3835 XEXP (reg_equiv[regno].init_insns, 0) = new_insn;
3837 REG_BASIC_BLOCK (regno) = bb->index;
3838 REG_N_CALLS_CROSSED (regno) = 0;
3839 REG_FREQ_CALLS_CROSSED (regno) = 0;
3840 REG_N_THROWING_CALLS_CROSSED (regno) = 0;
3841 REG_LIVE_LENGTH (regno) = 2;
3843 if (insn == BB_HEAD (bb))
3844 BB_HEAD (bb) = PREV_INSN (insn);
3846 ira_reg_equiv[regno].init_insns
3847 = gen_rtx_INSN_LIST (VOIDmode, new_insn, NULL_RTX);
3848 bitmap_set_bit (cleared_regs, regno);
3855 if (!bitmap_empty_p (cleared_regs))
3857 FOR_EACH_BB_FN (bb, cfun)
3859 bitmap_and_compl_into (DF_LR_IN (bb), cleared_regs);
3860 bitmap_and_compl_into (DF_LR_OUT (bb), cleared_regs);
3861 if (! df_live)
3862 continue;
3863 bitmap_and_compl_into (DF_LIVE_IN (bb), cleared_regs);
3864 bitmap_and_compl_into (DF_LIVE_OUT (bb), cleared_regs);
3867 /* Last pass - adjust debug insns referencing cleared regs. */
3868 if (MAY_HAVE_DEBUG_INSNS)
3869 for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
3870 if (DEBUG_INSN_P (insn))
3872 rtx old_loc = INSN_VAR_LOCATION_LOC (insn);
3873 INSN_VAR_LOCATION_LOC (insn)
3874 = simplify_replace_fn_rtx (old_loc, NULL_RTX,
3875 adjust_cleared_regs,
3876 (void *) cleared_regs);
3877 if (old_loc != INSN_VAR_LOCATION_LOC (insn))
3878 df_insn_rescan (insn);
3882 BITMAP_FREE (cleared_regs);
3884 out:
3885 /* Clean up. */
3887 end_alias_analysis ();
3888 free (reg_equiv);
3889 free (pdx_subregs);
3890 return recorded_label_ref;
3895 /* Set up fields memory, constant, and invariant from init_insns in
3896 the structures of array ira_reg_equiv. */
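 /* For instance (illustrative only; whether a particular MEM or constant
    is legitimate is target-dependent): an equivalence of
    (mem:SI (symbol_ref "x")) ends up in the memory field, (const_int 42)
    in the constant field, and (plus (reg fp) (const_int -16)) in the
    invariant field. */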
3897 static void
3898 setup_reg_equiv (void)
3900 int i;
3901 rtx_insn_list *elem, *prev_elem, *next_elem;
3902 rtx_insn *insn;
3903 rtx set, x;
3905 for (i = FIRST_PSEUDO_REGISTER; i < ira_reg_equiv_len; i++)
3906 for (prev_elem = NULL, elem = ira_reg_equiv[i].init_insns;
3907 elem;
3908 prev_elem = elem, elem = next_elem)
3910 next_elem = elem->next ();
3911 insn = elem->insn ();
3912 set = single_set (insn);
3914 /* Init insns can set up equivalence when the reg is a destination or
3915 a source (in this case the destination is memory). */
3916 if (set != 0 && (REG_P (SET_DEST (set)) || REG_P (SET_SRC (set))))
3918 if ((x = find_reg_note (insn, REG_EQUIV, NULL_RTX)) != NULL)
3920 x = XEXP (x, 0);
3921 if (REG_P (SET_DEST (set))
3922 && REGNO (SET_DEST (set)) == (unsigned int) i
3923 && ! rtx_equal_p (SET_SRC (set), x) && MEM_P (x))
3925 /* This insn reports the equivalence but does not
3926 actually set it up. Remove it from the
3927 list. */
3928 if (prev_elem == NULL)
3929 ira_reg_equiv[i].init_insns = next_elem;
3930 else
3931 XEXP (prev_elem, 1) = next_elem;
3932 elem = prev_elem;
3935 else if (REG_P (SET_DEST (set))
3936 && REGNO (SET_DEST (set)) == (unsigned int) i)
3937 x = SET_SRC (set);
3938 else
3940 gcc_assert (REG_P (SET_SRC (set))
3941 && REGNO (SET_SRC (set)) == (unsigned int) i);
3942 x = SET_DEST (set);
3944 if (! function_invariant_p (x)
3945 || ! flag_pic
3946 /* A function invariant is often CONSTANT_P but may
3947 include a register. We promise to only pass
3948 CONSTANT_P objects to LEGITIMATE_PIC_OPERAND_P. */
3949 || (CONSTANT_P (x) && LEGITIMATE_PIC_OPERAND_P (x)))
3951 /* It can happen that a REG_EQUIV note contains a MEM
3952 that is not a legitimate memory operand. As later
3953 stages of reload assume that all addresses found in
3954 the lra_regno_equiv_* arrays were originally
3955 legitimate, we ignore such REG_EQUIV notes. */
3956 if (memory_operand (x, VOIDmode))
3958 ira_reg_equiv[i].defined_p = true;
3959 ira_reg_equiv[i].memory = x;
3960 continue;
3962 else if (function_invariant_p (x))
3964 machine_mode mode;
3966 mode = GET_MODE (SET_DEST (set));
3967 if (GET_CODE (x) == PLUS
3968 || x == frame_pointer_rtx || x == arg_pointer_rtx)
3969 /* This is PLUS of frame pointer and a constant,
3970 or fp, or argp. */
3971 ira_reg_equiv[i].invariant = x;
3972 else if (targetm.legitimate_constant_p (mode, x))
3973 ira_reg_equiv[i].constant = x;
3974 else
3976 ira_reg_equiv[i].memory = force_const_mem (mode, x);
3977 if (ira_reg_equiv[i].memory == NULL_RTX)
3979 ira_reg_equiv[i].defined_p = false;
3980 ira_reg_equiv[i].init_insns = NULL;
3981 break;
3984 ira_reg_equiv[i].defined_p = true;
3985 continue;
3989 ira_reg_equiv[i].defined_p = false;
3990 ira_reg_equiv[i].init_insns = NULL;
3991 break;
3997 /* Print chain C to FILE. */
3998 static void
3999 print_insn_chain (FILE *file, struct insn_chain *c)
4001 fprintf (file, "insn=%d, ", INSN_UID (c->insn));
4002 bitmap_print (file, &c->live_throughout, "live_throughout: ", ", ");
4003 bitmap_print (file, &c->dead_or_set, "dead_or_set: ", "\n");
4007 /* Print all reload_insn_chains to FILE. */
4008 static void
4009 print_insn_chains (FILE *file)
4011 struct insn_chain *c;
4012 for (c = reload_insn_chain; c ; c = c->next)
4013 print_insn_chain (file, c);
4016 /* Return true if pseudo REGNO should be added to set live_throughout
4017 or dead_or_set of the insn chains for reload consideration. */
4018 static bool
4019 pseudo_for_reload_consideration_p (int regno)
4021 /* Consider spilled pseudos too for IRA because they still have a
4022 chance to get hard registers during reload when IRA is used. */
4023 return (reg_renumber[regno] >= 0 || ira_conflicts_p);
4026 /* Initialize LIVE_SUBREGS[ALLOCNUM] and LIVE_SUBREGS_USED[ALLOCNUM].
4027 REG determines the size of the subreg bitmap and INIT_VALUE gives
4028 the initial bit value. ALLOCNUM need not be the regno of REG. */
4029 static void
4030 init_live_subregs (bool init_value, sbitmap *live_subregs,
4031 bitmap live_subregs_used, int allocnum, rtx reg)
4033 unsigned int regno = REGNO (SUBREG_REG (reg));
4034 int size = GET_MODE_SIZE (GET_MODE (regno_reg_rtx[regno]));
4036 gcc_assert (size > 0);
4038 /* Been there, done that. */
4039 if (bitmap_bit_p (live_subregs_used, allocnum))
4040 return;
4042 /* Create a new one. */
4043 if (live_subregs[allocnum] == NULL)
4044 live_subregs[allocnum] = sbitmap_alloc (size);
4046 /* If the entire reg was live before being split into subregs, we need
4047 to initialize all of the subregs to ones; otherwise initialize them to 0. */
4048 if (init_value)
4049 bitmap_ones (live_subregs[allocnum]);
4050 else
4051 bitmap_clear (live_subregs[allocnum]);
4053 bitmap_set_bit (live_subregs_used, allocnum);
4056 /* Walk the insns of the current function and build reload_insn_chain,
4057 and record register life information. */
4058 static void
4059 build_insn_chain (void)
4061 unsigned int i;
4062 struct insn_chain **p = &reload_insn_chain;
4063 basic_block bb;
4064 struct insn_chain *c = NULL;
4065 struct insn_chain *next = NULL;
4066 bitmap live_relevant_regs = BITMAP_ALLOC (NULL);
4067 bitmap elim_regset = BITMAP_ALLOC (NULL);
4068 /* live_subregs is a vector used to keep accurate information about
4069 which hardregs are live in multiword pseudos. live_subregs and
4070 live_subregs_used are indexed by pseudo number. The live_subreg
4071 entry for a particular pseudo is only used if the corresponding
4072 element is nonzero in live_subregs_used. The sbitmap size of
4073 live_subreg[allocno] is the number of bytes that the pseudo can
4074 occupy. */
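 /* For example (illustrative, assuming 4-byte words): a DImode pseudo
    occupies 8 bytes, so live_subregs[REGNO] has 8 bits, one per byte;
    a def of (subreg:SI (reg:DI REGNO) 4) then clears bits 4-7 while
    leaving bits 0-3 (the low word) live. */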
4075 sbitmap *live_subregs = XCNEWVEC (sbitmap, max_regno);
4076 bitmap live_subregs_used = BITMAP_ALLOC (NULL);
4078 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
4079 if (TEST_HARD_REG_BIT (eliminable_regset, i))
4080 bitmap_set_bit (elim_regset, i);
4081 FOR_EACH_BB_REVERSE_FN (bb, cfun)
4083 bitmap_iterator bi;
4084 rtx_insn *insn;
4086 CLEAR_REG_SET (live_relevant_regs);
4087 bitmap_clear (live_subregs_used);
4089 EXECUTE_IF_SET_IN_BITMAP (df_get_live_out (bb), 0, i, bi)
4091 if (i >= FIRST_PSEUDO_REGISTER)
4092 break;
4093 bitmap_set_bit (live_relevant_regs, i);
4096 EXECUTE_IF_SET_IN_BITMAP (df_get_live_out (bb),
4097 FIRST_PSEUDO_REGISTER, i, bi)
4099 if (pseudo_for_reload_consideration_p (i))
4100 bitmap_set_bit (live_relevant_regs, i);
4103 FOR_BB_INSNS_REVERSE (bb, insn)
4105 if (!NOTE_P (insn) && !BARRIER_P (insn))
4107 struct df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
4108 df_ref def, use;
4110 c = new_insn_chain ();
4111 c->next = next;
4112 next = c;
4113 *p = c;
4114 p = &c->prev;
4116 c->insn = insn;
4117 c->block = bb->index;
4119 if (NONDEBUG_INSN_P (insn))
4120 FOR_EACH_INSN_INFO_DEF (def, insn_info)
4122 unsigned int regno = DF_REF_REGNO (def);
4124 /* Ignore may clobbers because these are generated
4125 from calls. However, every other kind of def is
4126 added to dead_or_set. */
4127 if (!DF_REF_FLAGS_IS_SET (def, DF_REF_MAY_CLOBBER))
4129 if (regno < FIRST_PSEUDO_REGISTER)
4131 if (!fixed_regs[regno])
4132 bitmap_set_bit (&c->dead_or_set, regno);
4134 else if (pseudo_for_reload_consideration_p (regno))
4135 bitmap_set_bit (&c->dead_or_set, regno);
4138 if ((regno < FIRST_PSEUDO_REGISTER
4139 || reg_renumber[regno] >= 0
4140 || ira_conflicts_p)
4141 && (!DF_REF_FLAGS_IS_SET (def, DF_REF_CONDITIONAL)))
4143 rtx reg = DF_REF_REG (def);
4145 /* We can model subregs, but not if they are
4146 wrapped in ZERO_EXTRACTS. */
4147 if (GET_CODE (reg) == SUBREG
4148 && !DF_REF_FLAGS_IS_SET (def, DF_REF_ZERO_EXTRACT))
4150 unsigned int start = SUBREG_BYTE (reg);
4151 unsigned int last = start
4152 + GET_MODE_SIZE (GET_MODE (reg));
4154 init_live_subregs
4155 (bitmap_bit_p (live_relevant_regs, regno),
4156 live_subregs, live_subregs_used, regno, reg);
4158 if (!DF_REF_FLAGS_IS_SET
4159 (def, DF_REF_STRICT_LOW_PART))
4161 /* Expand the range to cover entire words.
4162 Bytes added here are "don't care". */
4163 start
4164 = start / UNITS_PER_WORD * UNITS_PER_WORD;
4165 last = ((last + UNITS_PER_WORD - 1)
4166 / UNITS_PER_WORD * UNITS_PER_WORD);
4169 /* Ignore the paradoxical bits. */
4170 if (last > SBITMAP_SIZE (live_subregs[regno]))
4171 last = SBITMAP_SIZE (live_subregs[regno]);
4173 while (start < last)
4175 bitmap_clear_bit (live_subregs[regno], start);
4176 start++;
4179 if (bitmap_empty_p (live_subregs[regno]))
4181 bitmap_clear_bit (live_subregs_used, regno);
4182 bitmap_clear_bit (live_relevant_regs, regno);
4184 else
4185 /* Set live_relevant_regs here because
4186 that bit has to be true to get us to
4187 look at the live_subregs fields. */
4188 bitmap_set_bit (live_relevant_regs, regno);
4190 else
4192 /* DF_REF_PARTIAL is generated for
4193 subregs, STRICT_LOW_PART, and
4194 ZERO_EXTRACT. We handle the subreg
4195 case above so here we have to keep from
4196 modeling the def as a killing def. */
4197 if (!DF_REF_FLAGS_IS_SET (def, DF_REF_PARTIAL))
4199 bitmap_clear_bit (live_subregs_used, regno);
4200 bitmap_clear_bit (live_relevant_regs, regno);
4206 bitmap_and_compl_into (live_relevant_regs, elim_regset);
4207 bitmap_copy (&c->live_throughout, live_relevant_regs);
4209 if (NONDEBUG_INSN_P (insn))
4210 FOR_EACH_INSN_INFO_USE (use, insn_info)
4212 unsigned int regno = DF_REF_REGNO (use);
4213 rtx reg = DF_REF_REG (use);
4215 /* DF_REF_READ_WRITE on a use means that this use
4216 is fabricated from a def that is a partial set
4217 to a multiword reg. Here, we only model the
4218 subreg case that is not wrapped in ZERO_EXTRACT
4219 precisely so we do not need to look at the
4220 fabricated use. */
4221 if (DF_REF_FLAGS_IS_SET (use, DF_REF_READ_WRITE)
4222 && !DF_REF_FLAGS_IS_SET (use, DF_REF_ZERO_EXTRACT)
4223 && DF_REF_FLAGS_IS_SET (use, DF_REF_SUBREG))
4224 continue;
4226 /* Add the last use of each var to dead_or_set. */
4227 if (!bitmap_bit_p (live_relevant_regs, regno))
4229 if (regno < FIRST_PSEUDO_REGISTER)
4231 if (!fixed_regs[regno])
4232 bitmap_set_bit (&c->dead_or_set, regno);
4234 else if (pseudo_for_reload_consideration_p (regno))
4235 bitmap_set_bit (&c->dead_or_set, regno);
4238 if (regno < FIRST_PSEUDO_REGISTER
4239 || pseudo_for_reload_consideration_p (regno))
4241 if (GET_CODE (reg) == SUBREG
4242 && !DF_REF_FLAGS_IS_SET (use,
4243 DF_REF_SIGN_EXTRACT
4244 | DF_REF_ZERO_EXTRACT))
4246 unsigned int start = SUBREG_BYTE (reg);
4247 unsigned int last = start
4248 + GET_MODE_SIZE (GET_MODE (reg));
4250 init_live_subregs
4251 (bitmap_bit_p (live_relevant_regs, regno),
4252 live_subregs, live_subregs_used, regno, reg);
4254 /* Ignore the paradoxical bits. */
4255 if (last > SBITMAP_SIZE (live_subregs[regno]))
4256 last = SBITMAP_SIZE (live_subregs[regno]);
4258 while (start < last)
4260 bitmap_set_bit (live_subregs[regno], start);
4261 start++;
4264 else
4265 /* Resetting the live_subregs_used is
4266 effectively saying do not use the subregs
4267 because we are reading the whole
4268 pseudo. */
4269 bitmap_clear_bit (live_subregs_used, regno);
4270 bitmap_set_bit (live_relevant_regs, regno);
4276 /* FIXME!! The following code is a disaster. Reload needs to see the
4277 labels and jump tables that are just hanging out in between
4278 the basic blocks. See pr33676. */
4279 insn = BB_HEAD (bb);
4281 /* Skip over the barriers and cruft. */
4282 while (insn && (BARRIER_P (insn) || NOTE_P (insn)
4283 || BLOCK_FOR_INSN (insn) == bb))
4284 insn = PREV_INSN (insn);
4286 /* While we add anything except barriers and notes, the focus is
4287 to get the labels and jump tables into the
4288 reload_insn_chain. */
4289 while (insn)
4291 if (!NOTE_P (insn) && !BARRIER_P (insn))
4293 if (BLOCK_FOR_INSN (insn))
4294 break;
4296 c = new_insn_chain ();
4297 c->next = next;
4298 next = c;
4299 *p = c;
4300 p = &c->prev;
4302 /* The block makes no sense here, but it is what the old
4303 code did. */
4304 c->block = bb->index;
4305 c->insn = insn;
4306 bitmap_copy (&c->live_throughout, live_relevant_regs);
4308 insn = PREV_INSN (insn);
4312 reload_insn_chain = c;
4313 *p = NULL;
4315 for (i = 0; i < (unsigned int) max_regno; i++)
4316 if (live_subregs[i] != NULL)
4317 sbitmap_free (live_subregs[i]);
4318 free (live_subregs);
4319 BITMAP_FREE (live_subregs_used);
4320 BITMAP_FREE (live_relevant_regs);
4321 BITMAP_FREE (elim_regset);
4323 if (dump_file)
4324 print_insn_chains (dump_file);
4327 /* Examine the rtx found in *LOC, which is read or written to as determined
4328 by TYPE. Return false if we find a reason why an insn containing this
4329 rtx should not be moved (such as accesses to non-constant memory), true
4330 otherwise. */
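 /* E.g. (illustrative): (plus:SI (reg:SI 100) (const_int 4)) is moveable
    when read, whereas a load from non-constant memory, or any hard
    register other than the frame pointer, is not. */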
4331 static bool
4332 rtx_moveable_p (rtx *loc, enum op_type type)
4334 const char *fmt;
4335 rtx x = *loc;
4336 enum rtx_code code = GET_CODE (x);
4337 int i, j;
4339 code = GET_CODE (x);
4340 switch (code)
4342 case CONST:
4343 CASE_CONST_ANY:
4344 case SYMBOL_REF:
4345 case LABEL_REF:
4346 return true;
4348 case PC:
4349 return type == OP_IN;
4351 case CC0:
4352 return false;
4354 case REG:
4355 if (x == frame_pointer_rtx)
4356 return true;
4357 if (HARD_REGISTER_P (x))
4358 return false;
4360 return true;
4362 case MEM:
4363 if (type == OP_IN && MEM_READONLY_P (x))
4364 return rtx_moveable_p (&XEXP (x, 0), OP_IN);
4365 return false;
4367 case SET:
4368 return (rtx_moveable_p (&SET_SRC (x), OP_IN)
4369 && rtx_moveable_p (&SET_DEST (x), OP_OUT));
4371 case STRICT_LOW_PART:
4372 return rtx_moveable_p (&XEXP (x, 0), OP_OUT);
4374 case ZERO_EXTRACT:
4375 case SIGN_EXTRACT:
4376 return (rtx_moveable_p (&XEXP (x, 0), type)
4377 && rtx_moveable_p (&XEXP (x, 1), OP_IN)
4378 && rtx_moveable_p (&XEXP (x, 2), OP_IN));
4380 case CLOBBER:
4381 return rtx_moveable_p (&SET_DEST (x), OP_OUT);
4383 case UNSPEC_VOLATILE:
4384 /* It is a bad idea to consider insns with such rtl
4385 as moveable ones. The insn scheduler also considers them
4386 as barriers for a reason. */
4387 return false;
4389 default:
4390 break;
4393 fmt = GET_RTX_FORMAT (code);
4394 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
4396 if (fmt[i] == 'e')
4398 if (!rtx_moveable_p (&XEXP (x, i), type))
4399 return false;
4401 else if (fmt[i] == 'E')
4402 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
4404 if (!rtx_moveable_p (&XVECEXP (x, i, j), type))
4405 return false;
4408 return true;
4411 /* A wrapper around dominated_by_p, which uses the information in UID_LUID
4412 to give dominance relationships between two insns I1 and I2. */
4413 static bool
4414 insn_dominated_by_p (rtx i1, rtx i2, int *uid_luid)
4416 basic_block bb1 = BLOCK_FOR_INSN (i1);
4417 basic_block bb2 = BLOCK_FOR_INSN (i2);
4419 if (bb1 == bb2)
4420 return uid_luid[INSN_UID (i2)] < uid_luid[INSN_UID (i1)];
4421 return dominated_by_p (CDI_DOMINATORS, bb1, bb2);
4424 /* Record the range of register numbers added by find_moveable_pseudos. */
4425 int first_moveable_pseudo, last_moveable_pseudo;
4427 /* This vector holds data for every register added by
4428 find_moveable_pseudos, with index 0 holding data for the
4429 first_moveable_pseudo. */
4430 /* The original home register. */
4431 static vec<rtx> pseudo_replaced_reg;
4433 /* Look for instances where we have an instruction that is known to increase
4434 register pressure, and whose result is not used immediately. If it is
4435 possible to move the instruction downwards to just before its first use,
4436 split its lifetime into two ranges. We create a new pseudo to compute the
4437 value, and emit a move instruction just before the first use. If, after
4438 register allocation, the new pseudo remains unallocated, the function
4439 move_unallocated_pseudos then deletes the move instruction and places
4440 the computation just before the first use.
4442 Such a move is safe and profitable if all the input registers remain live
4443 and unchanged between the original computation and its first use. In such
4444 a situation, the computation is known to increase register pressure, and
4445 moving it is known to at least not worsen it.
4447 We restrict moves to only those cases where a register remains unallocated,
4448 in order to avoid interfering too much with the instruction schedule. As
4449 an exception, we may move insns which only modify their input register
4450 (typically induction variables), as this increases the freedom for our
4451 intended transformation, and does not limit the second instruction
4452 scheduler pass. */
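 /* A hedged sketch of the transformation (pseudo numbers invented):

        (set (reg 100) (expensive-computation))
        ...
        first use of (reg 100)

    becomes

        (set (reg 150) (expensive-computation))
        ...
        (set (reg 100) (reg 150))   -- emitted just before the first use
        first use of (reg 100)

    If reg 150 is still unallocated after coloring,
    move_unallocated_pseudos deletes the move and re-emits the
    computation, setting reg 100, directly before the use. */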
4454 static void
4455 find_moveable_pseudos (void)
4457 unsigned i;
4458 int max_regs = max_reg_num ();
4459 int max_uid = get_max_uid ();
4460 basic_block bb;
4461 int *uid_luid = XNEWVEC (int, max_uid);
4462 rtx_insn **closest_uses = XNEWVEC (rtx_insn *, max_regs);
4463 /* A set of registers which are live but not modified throughout a block. */
4464 bitmap_head *bb_transp_live = XNEWVEC (bitmap_head,
4465 last_basic_block_for_fn (cfun));
4466 /* A set of registers which only exist in a given basic block. */
4467 bitmap_head *bb_local = XNEWVEC (bitmap_head,
4468 last_basic_block_for_fn (cfun));
4469 /* A set of registers which are set once, in an instruction that can be
4470 moved freely downwards, but are otherwise transparent to a block. */
4471 bitmap_head *bb_moveable_reg_sets = XNEWVEC (bitmap_head,
4472 last_basic_block_for_fn (cfun));
4473 bitmap_head live, used, set, interesting, unusable_as_input;
4474 bitmap_iterator bi;
4475 bitmap_initialize (&interesting, 0);
4477 first_moveable_pseudo = max_regs;
4478 pseudo_replaced_reg.release ();
4479 pseudo_replaced_reg.safe_grow_cleared (max_regs);
4481 df_analyze ();
4482 calculate_dominance_info (CDI_DOMINATORS);
4484 i = 0;
4485 bitmap_initialize (&live, 0);
4486 bitmap_initialize (&used, 0);
4487 bitmap_initialize (&set, 0);
4488 bitmap_initialize (&unusable_as_input, 0);
4489 FOR_EACH_BB_FN (bb, cfun)
4491 rtx_insn *insn;
4492 bitmap transp = bb_transp_live + bb->index;
4493 bitmap moveable = bb_moveable_reg_sets + bb->index;
4494 bitmap local = bb_local + bb->index;
4496 bitmap_initialize (local, 0);
4497 bitmap_initialize (transp, 0);
4498 bitmap_initialize (moveable, 0);
4499 bitmap_copy (&live, df_get_live_out (bb));
4500 bitmap_and_into (&live, df_get_live_in (bb));
4501 bitmap_copy (transp, &live);
4502 bitmap_clear (moveable);
4503 bitmap_clear (&live);
4504 bitmap_clear (&used);
4505 bitmap_clear (&set);
4506 FOR_BB_INSNS (bb, insn)
4507 if (NONDEBUG_INSN_P (insn))
4509 df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
4510 df_ref def, use;
4512 uid_luid[INSN_UID (insn)] = i++;
4514 def = df_single_def (insn_info);
4515 use = df_single_use (insn_info);
4516 if (use
4517 && def
4518 && DF_REF_REGNO (use) == DF_REF_REGNO (def)
4519 && !bitmap_bit_p (&set, DF_REF_REGNO (use))
4520 && rtx_moveable_p (&PATTERN (insn), OP_IN))
4522 unsigned regno = DF_REF_REGNO (use);
4523 bitmap_set_bit (moveable, regno);
4524 bitmap_set_bit (&set, regno);
4525 bitmap_set_bit (&used, regno);
4526 bitmap_clear_bit (transp, regno);
4527 continue;
4529 FOR_EACH_INSN_INFO_USE (use, insn_info)
4531 unsigned regno = DF_REF_REGNO (use);
4532 bitmap_set_bit (&used, regno);
4533 if (bitmap_clear_bit (moveable, regno))
4534 bitmap_clear_bit (transp, regno);
4537 FOR_EACH_INSN_INFO_DEF (def, insn_info)
4539 unsigned regno = DF_REF_REGNO (def);
4540 bitmap_set_bit (&set, regno);
4541 bitmap_clear_bit (transp, regno);
4542 bitmap_clear_bit (moveable, regno);
4547 bitmap_clear (&live);
4548 bitmap_clear (&used);
4549 bitmap_clear (&set);
4551 FOR_EACH_BB_FN (bb, cfun)
4553 bitmap local = bb_local + bb->index;
4554 rtx_insn *insn;
4556 FOR_BB_INSNS (bb, insn)
4557 if (NONDEBUG_INSN_P (insn))
4559 df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
4560 rtx_insn *def_insn;
4561 rtx closest_use, note;
4562 df_ref def, use;
4563 unsigned regno;
4564 bool all_dominated, all_local;
4565 machine_mode mode;
4567 def = df_single_def (insn_info);
4568 /* There must be exactly one def in this insn. */
4569 if (!def || !single_set (insn))
4570 continue;
4571 /* This must be the only definition of the reg. We also limit
4572 which modes we deal with so that we can assume we can generate
4573 move instructions. */
4574 regno = DF_REF_REGNO (def);
4575 mode = GET_MODE (DF_REF_REG (def));
4576 if (DF_REG_DEF_COUNT (regno) != 1
4577 || !DF_REF_INSN_INFO (def)
4578 || HARD_REGISTER_NUM_P (regno)
4579 || DF_REG_EQ_USE_COUNT (regno) > 0
4580 || (!INTEGRAL_MODE_P (mode) && !FLOAT_MODE_P (mode)))
4581 continue;
4582 def_insn = DF_REF_INSN (def);
4584 for (note = REG_NOTES (def_insn); note; note = XEXP (note, 1))
4585 if (REG_NOTE_KIND (note) == REG_EQUIV && MEM_P (XEXP (note, 0)))
4586 break;
4588 if (note)
4590 if (dump_file)
4591 fprintf (dump_file, "Ignoring reg %d, has equiv memory\n",
4592 regno);
4593 bitmap_set_bit (&unusable_as_input, regno);
4594 continue;
4597 use = DF_REG_USE_CHAIN (regno);
4598 all_dominated = true;
4599 all_local = true;
4600 closest_use = NULL_RTX;
4601 for (; use; use = DF_REF_NEXT_REG (use))
4603 rtx_insn *insn;
4604 if (!DF_REF_INSN_INFO (use))
4606 all_dominated = false;
4607 all_local = false;
4608 break;
4610 insn = DF_REF_INSN (use);
4611 if (DEBUG_INSN_P (insn))
4612 continue;
4613 if (BLOCK_FOR_INSN (insn) != BLOCK_FOR_INSN (def_insn))
4614 all_local = false;
4615 if (!insn_dominated_by_p (insn, def_insn, uid_luid))
4616 all_dominated = false;
4617 if (closest_use != insn && closest_use != const0_rtx)
4619 if (closest_use == NULL_RTX)
4620 closest_use = insn;
4621 else if (insn_dominated_by_p (closest_use, insn, uid_luid))
4622 closest_use = insn;
4623 else if (!insn_dominated_by_p (insn, closest_use, uid_luid))
4624 closest_use = const0_rtx;
4627 if (!all_dominated)
4629 if (dump_file)
4630 fprintf (dump_file, "Reg %d not all uses dominated by set\n",
4631 regno);
4632 continue;
4634 if (all_local)
4635 bitmap_set_bit (local, regno);
4636 if (closest_use == const0_rtx || closest_use == NULL
4637 || next_nonnote_nondebug_insn (def_insn) == closest_use)
4639 if (dump_file)
4640 fprintf (dump_file, "Reg %d uninteresting%s\n", regno,
4641 closest_use == const0_rtx || closest_use == NULL
4642 ? " (no unique first use)" : "");
4643 continue;
4645 if (HAVE_cc0 && reg_referenced_p (cc0_rtx, PATTERN (closest_use)))
4647 if (dump_file)
4648 fprintf (dump_file, "Reg %d: closest user uses cc0\n",
4649 regno);
4650 continue;
4653 bitmap_set_bit (&interesting, regno);
4654 /* If we get here, we know closest_use is a non-NULL insn
4655 (as opposed to const0_rtx). */
4656 closest_uses[regno] = as_a <rtx_insn *> (closest_use);
4658 if (dump_file && (all_local || all_dominated))
4660 fprintf (dump_file, "Reg %u:", regno);
4661 if (all_local)
4662 fprintf (dump_file, " local to bb %d", bb->index);
4663 if (all_dominated)
4664 fprintf (dump_file, " def dominates all uses");
4665 if (closest_use != const0_rtx)
4666 fprintf (dump_file, " has unique first use");
4667 fputs ("\n", dump_file);
4672 EXECUTE_IF_SET_IN_BITMAP (&interesting, 0, i, bi)
4674 df_ref def = DF_REG_DEF_CHAIN (i);
4675 rtx_insn *def_insn = DF_REF_INSN (def);
4676 basic_block def_block = BLOCK_FOR_INSN (def_insn);
4677 bitmap def_bb_local = bb_local + def_block->index;
4678 bitmap def_bb_moveable = bb_moveable_reg_sets + def_block->index;
4679 bitmap def_bb_transp = bb_transp_live + def_block->index;
4680 bool local_to_bb_p = bitmap_bit_p (def_bb_local, i);
4681 rtx_insn *use_insn = closest_uses[i];
4682 df_ref use;
4683 bool all_ok = true;
4684 bool all_transp = true;
4686 if (!REG_P (DF_REF_REG (def)))
4687 continue;
4689 if (!local_to_bb_p)
4691 if (dump_file)
4692 fprintf (dump_file, "Reg %u not local to one basic block\n", i);
4694 continue;
4696 if (reg_equiv_init (i) != NULL_RTX)
4698 if (dump_file)
4699 fprintf (dump_file, "Ignoring reg %u with equiv init insn\n", i);
4701 continue;
4703 if (!rtx_moveable_p (&PATTERN (def_insn), OP_IN))
4705 if (dump_file)
4706 fprintf (dump_file, "Found def insn %d for %d to be not moveable\n",
4707 INSN_UID (def_insn), i);
4708 continue;
4710 if (dump_file)
4711 fprintf (dump_file, "Examining insn %d, def for %d\n",
4712 INSN_UID (def_insn), i);
4713 FOR_EACH_INSN_USE (use, def_insn)
4715 unsigned regno = DF_REF_REGNO (use);
4716 if (bitmap_bit_p (&unusable_as_input, regno))
4718 all_ok = false;
4719 if (dump_file)
4720 fprintf (dump_file, " found unusable input reg %u.\n", regno);
4721 break;
4723 if (!bitmap_bit_p (def_bb_transp, regno))
4725 if (bitmap_bit_p (def_bb_moveable, regno)
4726 && !control_flow_insn_p (use_insn)
4727 && (!HAVE_cc0 || !sets_cc0_p (use_insn)))
4729 if (modified_between_p (DF_REF_REG (use), def_insn, use_insn))
4731 rtx_insn *x = NEXT_INSN (def_insn);
4732 while (!modified_in_p (DF_REF_REG (use), x))
4734 gcc_assert (x != use_insn);
4735 x = NEXT_INSN (x);
4737 if (dump_file)
4738 fprintf (dump_file, " input reg %u modified but insn %d moveable\n",
4739 regno, INSN_UID (x));
4740 emit_insn_after (PATTERN (x), use_insn);
4741 set_insn_deleted (x);
4743 else
4745 if (dump_file)
4746 fprintf (dump_file, " input reg %u modified between def and use\n",
4747 regno);
4748 all_transp = false;
4751 else
4752 all_transp = false;
4755 if (!all_ok)
4756 continue;
4757 if (!dbg_cnt (ira_move))
4758 break;
4759 if (dump_file)
4760 fprintf (dump_file, " all ok%s\n", all_transp ? " and transp" : "");
4762 if (all_transp)
4764 rtx def_reg = DF_REF_REG (def);
4765 rtx newreg = ira_create_new_reg (def_reg);
4766 if (validate_change (def_insn, DF_REF_REAL_LOC (def), newreg, 0))
4768 unsigned nregno = REGNO (newreg);
4769 emit_insn_before (gen_move_insn (def_reg, newreg), use_insn);
4770 nregno -= max_regs;
4771 pseudo_replaced_reg[nregno] = def_reg;
4776 FOR_EACH_BB_FN (bb, cfun)
4778 bitmap_clear (bb_local + bb->index);
4779 bitmap_clear (bb_transp_live + bb->index);
4780 bitmap_clear (bb_moveable_reg_sets + bb->index);
4782 bitmap_clear (&interesting);
4783 bitmap_clear (&unusable_as_input);
4784 free (uid_luid);
4785 free (closest_uses);
4786 free (bb_local);
4787 free (bb_transp_live);
4788 free (bb_moveable_reg_sets);
4790 last_moveable_pseudo = max_reg_num ();
4792 fix_reg_equiv_init ();
4793 expand_reg_info ();
4794 regstat_free_n_sets_and_refs ();
4795 regstat_free_ri ();
4796 regstat_init_n_sets_and_refs ();
4797 regstat_compute_ri ();
4798 free_dominance_info (CDI_DOMINATORS);
4801 /* If SET pattern SET is an assignment from a hard register to a pseudo which
4802 is live at CALL_DOM (if non-NULL, otherwise this check is omitted), return
4803 the destination. Otherwise return NULL. */
4805 static rtx
4806 interesting_dest_for_shprep_1 (rtx set, basic_block call_dom)
4808 rtx src = SET_SRC (set);
4809 rtx dest = SET_DEST (set);
4810 if (!REG_P (src) || !HARD_REGISTER_P (src)
4811 || !REG_P (dest) || HARD_REGISTER_P (dest)
4812 || (call_dom && !bitmap_bit_p (df_get_live_in (call_dom), REGNO (dest))))
4813 return NULL;
4814 return dest;
4817 /* If INSN is interesting for parameter range-splitting shrink-wrapping
4818 preparation, i.e. it is a single set from a hard register to a pseudo which
4819 is live at CALL_DOM (if non-NULL, otherwise this check is omitted), or a
4820 PARALLEL containing only one such set, return the destination.
4821 Otherwise return NULL. */
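 /* E.g. (illustrative, register names invented):
    (set (reg:SI 130) (reg:SI 0 r0)) qualifies when pseudo 130 is live
    at CALL_DOM, whereas a set whose destination is a hard register, or
    whose source is not a hard register, does not. */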
4823 static rtx
4824 interesting_dest_for_shprep (rtx_insn *insn, basic_block call_dom)
4826 if (!INSN_P (insn))
4827 return NULL;
4828 rtx pat = PATTERN (insn);
4829 if (GET_CODE (pat) == SET)
4830 return interesting_dest_for_shprep_1 (pat, call_dom);
4832 if (GET_CODE (pat) != PARALLEL)
4833 return NULL;
4834 rtx ret = NULL;
4835 for (int i = 0; i < XVECLEN (pat, 0); i++)
4837 rtx sub = XVECEXP (pat, 0, i);
4838 if (GET_CODE (sub) == USE || GET_CODE (sub) == CLOBBER)
4839 continue;
4840 if (GET_CODE (sub) != SET
4841 || side_effects_p (sub))
4842 return NULL;
4843 rtx dest = interesting_dest_for_shprep_1 (sub, call_dom);
4844 if (dest && ret)
4845 return NULL;
4846 if (dest)
4847 ret = dest;
4849 return ret;
4852 /* Split live ranges of pseudos that are loaded from hard registers in the
4853 first BB, at a BB that dominates all non-sibling calls, if such a BB can be
4854 found and is not in a loop. Return true if the function has made any
4855 changes. */
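 /* In outline (a hedged summary of the code below): find the blocks that
    contain non-sibling calls, compute the blocks reachable from them,
    take the nearest common dominator of the blocks that need a new value,
    hoist it out of any loop, and then rename the uses of each parameter
    copy dominated by that block to a fresh pseudo initialized by a move
    emitted at the top of that block. */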
4857 static bool
4858 split_live_ranges_for_shrink_wrap (void)
4860 basic_block bb, call_dom = NULL;
4861 basic_block first = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
4862 rtx_insn *insn, *last_interesting_insn = NULL;
4863 bitmap_head need_new, reachable;
4864 vec<basic_block> queue;
4866 if (!SHRINK_WRAPPING_ENABLED)
4867 return false;
4869 bitmap_initialize (&need_new, 0);
4870 bitmap_initialize (&reachable, 0);
4871 queue.create (n_basic_blocks_for_fn (cfun));
4873 FOR_EACH_BB_FN (bb, cfun)
4874 FOR_BB_INSNS (bb, insn)
4875 if (CALL_P (insn) && !SIBLING_CALL_P (insn))
4877 if (bb == first)
4879 bitmap_clear (&need_new);
4880 bitmap_clear (&reachable);
4881 queue.release ();
4882 return false;
4885 bitmap_set_bit (&need_new, bb->index);
4886 bitmap_set_bit (&reachable, bb->index);
4887 queue.quick_push (bb);
4888 break;
4891 if (queue.is_empty ())
4893 bitmap_clear (&need_new);
4894 bitmap_clear (&reachable);
4895 queue.release ();
4896 return false;
4899 while (!queue.is_empty ())
4901 edge e;
4902 edge_iterator ei;
4904 bb = queue.pop ();
4905 FOR_EACH_EDGE (e, ei, bb->succs)
4906 if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
4907 && bitmap_set_bit (&reachable, e->dest->index))
4908 queue.quick_push (e->dest);
4910 queue.release ();
4912 FOR_BB_INSNS (first, insn)
4914 rtx dest = interesting_dest_for_shprep (insn, NULL);
4915 if (!dest)
4916 continue;
4918 if (DF_REG_DEF_COUNT (REGNO (dest)) > 1)
4920 bitmap_clear (&need_new);
4921 bitmap_clear (&reachable);
4922 return false;
4925 for (df_ref use = DF_REG_USE_CHAIN (REGNO(dest));
4926 use;
4927 use = DF_REF_NEXT_REG (use))
4929 int ubbi = DF_REF_BB (use)->index;
4930 if (bitmap_bit_p (&reachable, ubbi))
4931 bitmap_set_bit (&need_new, ubbi);
4933 last_interesting_insn = insn;
4936 bitmap_clear (&reachable);
4937 if (!last_interesting_insn)
4939 bitmap_clear (&need_new);
4940 return false;
4943 call_dom = nearest_common_dominator_for_set (CDI_DOMINATORS, &need_new);
4944 bitmap_clear (&need_new);
4945 if (call_dom == first)
4946 return false;
4948 loop_optimizer_init (AVOID_CFG_MODIFICATIONS);
4949 while (bb_loop_depth (call_dom) > 0)
4950 call_dom = get_immediate_dominator (CDI_DOMINATORS, call_dom);
4951 loop_optimizer_finalize ();
4953 if (call_dom == first)
4954 return false;
4956 calculate_dominance_info (CDI_POST_DOMINATORS);
4957 if (dominated_by_p (CDI_POST_DOMINATORS, first, call_dom))
4959 free_dominance_info (CDI_POST_DOMINATORS);
4960 return false;
4962 free_dominance_info (CDI_POST_DOMINATORS);
4964 if (dump_file)
4965 fprintf (dump_file, "Will split live ranges of parameters at BB %i\n",
4966 call_dom->index);
4968 bool ret = false;
4969 FOR_BB_INSNS (first, insn)
4971 rtx dest = interesting_dest_for_shprep (insn, call_dom);
4972 if (!dest || dest == pic_offset_table_rtx)
4973 continue;
4975 rtx newreg = NULL_RTX;
4976 df_ref use, next;
4977 for (use = DF_REG_USE_CHAIN (REGNO (dest)); use; use = next)
4979 rtx_insn *uin = DF_REF_INSN (use);
4980 next = DF_REF_NEXT_REG (use);
4982 basic_block ubb = BLOCK_FOR_INSN (uin);
4983 if (ubb == call_dom
4984 || dominated_by_p (CDI_DOMINATORS, ubb, call_dom))
4986 if (!newreg)
4987 newreg = ira_create_new_reg (dest);
4988 validate_change (uin, DF_REF_REAL_LOC (use), newreg, true);
4992 if (newreg)
4994 rtx new_move = gen_move_insn (newreg, dest);
4995 emit_insn_after (new_move, bb_note (call_dom));
4996 if (dump_file)
4998 fprintf (dump_file, "Split live-range of register ");
4999 print_rtl_single (dump_file, dest);
5001 ret = true;
5004 if (insn == last_interesting_insn)
5005 break;
5007 apply_change_group ();
5008 return ret;
5011 /* Perform the second half of the transformation started in
5012 find_moveable_pseudos. We look for instances where the newly introduced
5013 pseudo remains unallocated, and remove it by moving the definition to
5014 just before its use, replacing the move instruction generated by
5015 find_moveable_pseudos. */
5016 static void
5017 move_unallocated_pseudos (void)
5019 int i;
5020 for (i = first_moveable_pseudo; i < last_moveable_pseudo; i++)
5021 if (reg_renumber[i] < 0)
5023 int idx = i - first_moveable_pseudo;
5024 rtx other_reg = pseudo_replaced_reg[idx];
5025 rtx_insn *def_insn = DF_REF_INSN (DF_REG_DEF_CHAIN (i));
5026 /* The use must follow all definitions of OTHER_REG, so we can
5027 insert the new definition immediately after any of them. */
5028 df_ref other_def = DF_REG_DEF_CHAIN (REGNO (other_reg));
5029 rtx_insn *move_insn = DF_REF_INSN (other_def);
5030 rtx_insn *newinsn = emit_insn_after (PATTERN (def_insn), move_insn);
5031 rtx set;
5032 int success;
5034 if (dump_file)
5035 fprintf (dump_file, "moving def of %d (insn %d now) ",
5036 REGNO (other_reg), INSN_UID (def_insn));
5038 delete_insn (move_insn);
5039 while ((other_def = DF_REG_DEF_CHAIN (REGNO (other_reg))))
5040 delete_insn (DF_REF_INSN (other_def));
5041 delete_insn (def_insn);
5043 set = single_set (newinsn);
5044 success = validate_change (newinsn, &SET_DEST (set), other_reg, 0);
5045 gcc_assert (success);
5046 if (dump_file)
5047 fprintf (dump_file, " %d) rather than keep unallocated replacement %d\n",
5048 INSN_UID (newinsn), i);
5049 SET_REG_N_REFS (i, 0);
5053 /* If the backend knows where to allocate pseudos for hard
5054 register initial values, register these allocations now. */
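 /* E.g. (illustrative): if targetm.allocate_initial_value says the initial
    value of a hard register lives in a stack slot, the corresponding
    pseudo gets that MEM as its equivalence; if it names a hard register,
    the pseudo is simply renumbered to it. */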
5055 static void
5056 allocate_initial_values (void)
5058 if (targetm.allocate_initial_value)
5060 rtx hreg, preg, x;
5061 int i, regno;
5063 for (i = 0; HARD_REGISTER_NUM_P (i); i++)
5065 if (! initial_value_entry (i, &hreg, &preg))
5066 break;
5068 x = targetm.allocate_initial_value (hreg);
5069 regno = REGNO (preg);
5070 if (x && REG_N_SETS (regno) <= 1)
5072 if (MEM_P (x))
5073 reg_equiv_memory_loc (regno) = x;
5074 else
5076 basic_block bb;
5077 int new_regno;
5079 gcc_assert (REG_P (x));
5080 new_regno = REGNO (x);
5081 reg_renumber[regno] = new_regno;
5082 /* Poke the regno right into regno_reg_rtx so that even
5083 fixed regs are accepted. */
5084 SET_REGNO (preg, new_regno);
5085 /* Update global register liveness information. */
5086 FOR_EACH_BB_FN (bb, cfun)
5088 if (REGNO_REG_SET_P (df_get_live_in (bb), regno))
5089 SET_REGNO_REG_SET (df_get_live_in (bb), new_regno);
5090 if (REGNO_REG_SET_P (df_get_live_out (bb), regno))
5091 SET_REGNO_REG_SET (df_get_live_out (bb), new_regno);
5097 gcc_checking_assert (! initial_value_entry (FIRST_PSEUDO_REGISTER,
5098 &hreg, &preg));
5103 /* True when we use LRA instead of reload pass for the current
5104 function. */
5105 bool ira_use_lra_p;
5107 /* True if we have allocno conflicts. It is false for non-optimized
5108 mode or when the conflict table is too big. */
5109 bool ira_conflicts_p;
5111 /* Saved between IRA and reload. */
5112 static int saved_flag_ira_share_spill_slots;
5114 /* This is the main entry of IRA. */
5115 static void
5116 ira (FILE *f)
5118 bool loops_p;
5119 int ira_max_point_before_emit;
5120 int rebuild_p;
5121 bool saved_flag_caller_saves = flag_caller_saves;
5122 enum ira_region saved_flag_ira_region = flag_ira_region;
5124 /* Perform target specific PIC register initialization. */
5125 targetm.init_pic_reg ();
5127 ira_conflicts_p = optimize > 0;
5129 ira_use_lra_p = targetm.lra_p ();
5130 /* If there are too many pseudos and/or basic blocks (e.g. 10K
5131 pseudos and 10K blocks or 100K pseudos and 1K blocks), we will
5132 use simplified and faster algorithms in LRA. */
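 /* For a rough sense of the cutoff below (illustrative arithmetic):
    with about 1,000 basic blocks the simplified path triggers at
    roughly 67K pseudos, and with 10,000 blocks at roughly 6,700. */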
5133 lra_simple_p
5134 = (ira_use_lra_p
5135 && max_reg_num () >= (1 << 26) / last_basic_block_for_fn (cfun));
5136 if (lra_simple_p)
5138 /* This permits skipping live range splitting in LRA. */
5139 flag_caller_saves = false;
5140 /* There is no point in doing regional allocation when we use
5141 simplified LRA. */
5142 flag_ira_region = IRA_REGION_ONE;
5143 ira_conflicts_p = false;
5146 #ifndef IRA_NO_OBSTACK
5147 gcc_obstack_init (&ira_obstack);
5148 #endif
5149 bitmap_obstack_initialize (&ira_bitmap_obstack);
5151 /* LRA uses its own infrastructure to handle caller save registers. */
5152 if (flag_caller_saves && !ira_use_lra_p)
5153 init_caller_save ();
5155 if (flag_ira_verbose < 10)
5157 internal_flag_ira_verbose = flag_ira_verbose;
5158 ira_dump_file = f;
5160 else
5162 internal_flag_ira_verbose = flag_ira_verbose - 10;
5163 ira_dump_file = stderr;
5166 setup_prohibited_mode_move_regs ();
5167 decrease_live_ranges_number ();
5168 df_note_add_problem ();
5170 /* DF_LIVE can't be used in the register allocator, too many other
5171 parts of the compiler depend on using the "classic" liveness
5172 interpretation of the DF_LR problem. See PR38711.
5173 Remove the problem, so that we don't spend time updating it in
5174 any of the df_analyze() calls during IRA/LRA. */
5175 if (optimize > 1)
5176 df_remove_problem (df_live);
5177 gcc_checking_assert (df_live == NULL);
5179 #ifdef ENABLE_CHECKING
5180 df->changeable_flags |= DF_VERIFY_SCHEDULED;
5181 #endif
5182 df_analyze ();
5184 init_reg_equiv ();
5185 if (ira_conflicts_p)
5187 calculate_dominance_info (CDI_DOMINATORS);
5189 if (split_live_ranges_for_shrink_wrap ())
5190 df_analyze ();
5192 free_dominance_info (CDI_DOMINATORS);
5195 df_clear_flags (DF_NO_INSN_RESCAN);
5197 regstat_init_n_sets_and_refs ();
5198 regstat_compute_ri ();
5200 /* If we are not optimizing, then this is the only place before
5201 register allocation where dataflow is done. And that is needed
5202 to generate these warnings. */
5203 if (warn_clobbered)
5204 generate_setjmp_warnings ();
5206 /* Determine if the current function is a leaf before running IRA
5207 since this can impact optimizations done by the prologue and
5208 epilogue thus changing register elimination offsets. */
5209 crtl->is_leaf = leaf_function_p ();
5211 if (resize_reg_info () && flag_ira_loop_pressure)
5212 ira_set_pseudo_classes (true, ira_dump_file);
5214 rebuild_p = update_equiv_regs ();
5215 setup_reg_equiv ();
5216 setup_reg_equiv_init ();
5218 if (optimize && rebuild_p)
5220 timevar_push (TV_JUMP);
5221 rebuild_jump_labels (get_insns ());
5222 if (purge_all_dead_edges ())
5223 delete_unreachable_blocks ();
5224 timevar_pop (TV_JUMP);
5227 allocated_reg_info_size = max_reg_num ();
5229 if (delete_trivially_dead_insns (get_insns (), max_reg_num ()))
5230 df_analyze ();
5232 /* It is not worth doing this improvement when we use simple
5233 allocation because of -O0 usage or because the function is too
5234 big. */
5235 if (ira_conflicts_p)
5236 find_moveable_pseudos ();
5238 max_regno_before_ira = max_reg_num ();
5239 ira_setup_eliminable_regset ();
5241 ira_overall_cost = ira_reg_cost = ira_mem_cost = 0;
5242 ira_load_cost = ira_store_cost = ira_shuffle_cost = 0;
5243 ira_move_loops_num = ira_additional_jumps_num = 0;
5245 ira_assert (current_loops == NULL);
5246 if (flag_ira_region == IRA_REGION_ALL || flag_ira_region == IRA_REGION_MIXED)
5247 loop_optimizer_init (AVOID_CFG_MODIFICATIONS | LOOPS_HAVE_RECORDED_EXITS);
5249 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
5250 fprintf (ira_dump_file, "Building IRA IR\n");
5251 loops_p = ira_build ();
5253 ira_assert (ira_conflicts_p || !loops_p);
5255 saved_flag_ira_share_spill_slots = flag_ira_share_spill_slots;
5256 if (too_high_register_pressure_p () || cfun->calls_setjmp)
5257 /* It just wastes the compiler's time to pack spilled pseudos into
5258 stack slots in this case -- prohibit it. We also do this if
5259 there is a setjmp call, because the compiler is required to
5260 preserve the value of a variable not modified between setjmp
5261 and longjmp, and sharing slots does not guarantee that. */
5262 flag_ira_share_spill_slots = FALSE;
5264 ira_color ();
5266 ira_max_point_before_emit = ira_max_point;
5268 ira_initiate_emit_data ();
5270 ira_emit (loops_p);
5272 max_regno = max_reg_num ();
5273 if (ira_conflicts_p)
5275 if (! loops_p)
5277 if (! ira_use_lra_p)
5278 ira_initiate_assign ();
5280 else
5282 expand_reg_info ();
5284 if (ira_use_lra_p)
5286 ira_allocno_t a;
5287 ira_allocno_iterator ai;
5289 FOR_EACH_ALLOCNO (a, ai)
5291 int old_regno = ALLOCNO_REGNO (a);
5292 int new_regno = REGNO (ALLOCNO_EMIT_DATA (a)->reg);
5294 ALLOCNO_REGNO (a) = new_regno;
5296 if (old_regno != new_regno)
5297 setup_reg_classes (new_regno, reg_preferred_class (old_regno),
5298 reg_alternate_class (old_regno),
5299 reg_allocno_class (old_regno));
5303 else
5305 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
5306 fprintf (ira_dump_file, "Flattening IR\n");
5307 ira_flattening (max_regno_before_ira, ira_max_point_before_emit);
5309 /* New insns were generated: add notes and recalculate live
5310 info. */
5311 df_analyze ();
5313 /* ??? Rebuild the loop tree, but why? Does the loop tree
5314 change if new insns were generated? Can that be handled
5315 by updating the loop tree incrementally? */
5316 loop_optimizer_finalize ();
5317 free_dominance_info (CDI_DOMINATORS);
5318 loop_optimizer_init (AVOID_CFG_MODIFICATIONS
5319 | LOOPS_HAVE_RECORDED_EXITS);
5321 if (! ira_use_lra_p)
5323 setup_allocno_assignment_flags ();
5324 ira_initiate_assign ();
5325 ira_reassign_conflict_allocnos (max_regno);
5330 ira_finish_emit_data ();
5332 setup_reg_renumber ();
5334 calculate_allocation_cost ();
5336 #ifdef ENABLE_IRA_CHECKING
5337 if (ira_conflicts_p)
5338 check_allocation ();
5339 #endif
5341 if (max_regno != max_regno_before_ira)
5343 regstat_free_n_sets_and_refs ();
5344 regstat_free_ri ();
5345 regstat_init_n_sets_and_refs ();
5346 regstat_compute_ri ();
5349 overall_cost_before = ira_overall_cost;
5350 if (! ira_conflicts_p)
5351 grow_reg_equivs ();
5352 else
5354 fix_reg_equiv_init ();
5356 #ifdef ENABLE_IRA_CHECKING
5357 print_redundant_copies ();
5358 #endif
5359 if (! ira_use_lra_p)
5361 ira_spilled_reg_stack_slots_num = 0;
5362 ira_spilled_reg_stack_slots
5363 = ((struct ira_spilled_reg_stack_slot *)
5364 ira_allocate (max_regno
5365 * sizeof (struct ira_spilled_reg_stack_slot)));
5366 memset (ira_spilled_reg_stack_slots, 0,
5367 max_regno * sizeof (struct ira_spilled_reg_stack_slot));
5370 allocate_initial_values ();
5372 /* See comment for find_moveable_pseudos call. */
5373 if (ira_conflicts_p)
5374 move_unallocated_pseudos ();
5376 /* Restore original values. */
5377 if (lra_simple_p)
5379 flag_caller_saves = saved_flag_caller_saves;
5380 flag_ira_region = saved_flag_ira_region;
5384 static void
5385 do_reload (void)
5387 basic_block bb;
5388 bool need_dce;
5389 unsigned pic_offset_table_regno = INVALID_REGNUM;
5391 if (flag_ira_verbose < 10)
5392 ira_dump_file = dump_file;
5394 /* If pic_offset_table_rtx is a pseudo register, then keep it a
5395 pseudo after reload to avoid possible wrong uses of the hard reg
5396 assigned to it. */
5397 if (pic_offset_table_rtx
5398 && REGNO (pic_offset_table_rtx) >= FIRST_PSEUDO_REGISTER)
5399 pic_offset_table_regno = REGNO (pic_offset_table_rtx);
5401 timevar_push (TV_RELOAD);
5402 if (ira_use_lra_p)
5404 if (current_loops != NULL)
5406 loop_optimizer_finalize ();
5407 free_dominance_info (CDI_DOMINATORS);
5409 FOR_ALL_BB_FN (bb, cfun)
5410 bb->loop_father = NULL;
5411 current_loops = NULL;
5413 ira_destroy ();
5415 lra (ira_dump_file);
5416 /* ???!!! Move it before lra () when we use ira_reg_equiv in
5417 LRA. */
5418 vec_free (reg_equivs);
5419 reg_equivs = NULL;
5420 need_dce = false;
5422 else
5424 df_set_flags (DF_NO_INSN_RESCAN);
5425 build_insn_chain ();
5427 need_dce = reload (get_insns (), ira_conflicts_p);
5431 timevar_pop (TV_RELOAD);
5433 timevar_push (TV_IRA);
5435 if (ira_conflicts_p && ! ira_use_lra_p)
5437 ira_free (ira_spilled_reg_stack_slots);
5438 ira_finish_assign ();
5441 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL
5442 && overall_cost_before != ira_overall_cost)
5443 fprintf (ira_dump_file, "+++Overall after reload %"PRId64 "\n",
5444 ira_overall_cost);
5446 flag_ira_share_spill_slots = saved_flag_ira_share_spill_slots;
5448 if (! ira_use_lra_p)
5450 ira_destroy ();
5451 if (current_loops != NULL)
5453 loop_optimizer_finalize ();
5454 free_dominance_info (CDI_DOMINATORS);
5456 FOR_ALL_BB_FN (bb, cfun)
5457 bb->loop_father = NULL;
5458 current_loops = NULL;
5460 regstat_free_ri ();
5461 regstat_free_n_sets_and_refs ();
5464 if (optimize)
5465 cleanup_cfg (CLEANUP_EXPENSIVE);
5467 finish_reg_equiv ();
5469 bitmap_obstack_release (&ira_bitmap_obstack);
5470 #ifndef IRA_NO_OBSTACK
5471 obstack_free (&ira_obstack, NULL);
5472 #endif
5474 /* The code after the reload has changed so much that at this point
5475 we might as well just rescan everything. Note that
5476 df_rescan_all_insns is not going to help here because it does not
5477 touch the artificial uses and defs. */
5478 df_finish_pass (true);
5479 df_scan_alloc (NULL);
5480 df_scan_blocks ();
5482 if (optimize > 1)
5484 df_live_add_problem ();
5485 df_live_set_all_dirty ();
5488 if (optimize)
5489 df_analyze ();
5491 if (need_dce && optimize)
5492 run_fast_dce ();
5494 /* Diagnose uses of the hard frame pointer when it is used as a global
5495 register. Often we can get away with letting the user appropriate
5496 the frame pointer, but we should let them know when code generation
5497 makes that impossible. */
5498 if (global_regs[HARD_FRAME_POINTER_REGNUM] && frame_pointer_needed)
5500 tree decl = global_regs_decl[HARD_FRAME_POINTER_REGNUM];
5501 error_at (DECL_SOURCE_LOCATION (current_function_decl),
5502 "frame pointer required, but reserved");
5503 inform (DECL_SOURCE_LOCATION (decl), "for %qD", decl);
5506 if (pic_offset_table_regno != INVALID_REGNUM)
5507 pic_offset_table_rtx = gen_rtx_REG (Pmode, pic_offset_table_regno);
5509 timevar_pop (TV_IRA);
5512 /* Run the integrated register allocator. */
5514 namespace {
5516 const pass_data pass_data_ira =
5518 RTL_PASS, /* type */
5519 "ira", /* name */
5520 OPTGROUP_NONE, /* optinfo_flags */
5521 TV_IRA, /* tv_id */
5522 0, /* properties_required */
5523 0, /* properties_provided */
5524 0, /* properties_destroyed */
5525 0, /* todo_flags_start */
5526 TODO_do_not_ggc_collect, /* todo_flags_finish */
5529 class pass_ira : public rtl_opt_pass
5531 public:
5532 pass_ira (gcc::context *ctxt)
5533 : rtl_opt_pass (pass_data_ira, ctxt)
5536 /* opt_pass methods: */
5537 virtual bool gate (function *)
5539 return !targetm.no_register_allocation;
5541 virtual unsigned int execute (function *)
5543 ira (dump_file);
5544 return 0;
5547 }; // class pass_ira
5549 } // anon namespace
5551 rtl_opt_pass *
5552 make_pass_ira (gcc::context *ctxt)
5554 return new pass_ira (ctxt);
5557 namespace {
5559 const pass_data pass_data_reload =
5561 RTL_PASS, /* type */
5562 "reload", /* name */
5563 OPTGROUP_NONE, /* optinfo_flags */
5564 TV_RELOAD, /* tv_id */
5565 0, /* properties_required */
5566 0, /* properties_provided */
5567 0, /* properties_destroyed */
5568 0, /* todo_flags_start */
5569 0, /* todo_flags_finish */
5572 class pass_reload : public rtl_opt_pass
5574 public:
5575 pass_reload (gcc::context *ctxt)
5576 : rtl_opt_pass (pass_data_reload, ctxt)
5579 /* opt_pass methods: */
5580 virtual bool gate (function *)
5582 return !targetm.no_register_allocation;
5584 virtual unsigned int execute (function *)
5586 do_reload ();
5587 return 0;
5590 }; // class pass_reload
5592 } // anon namespace
5594 rtl_opt_pass *
5595 make_pass_reload (gcc::context *ctxt)
5597 return new pass_reload (ctxt);