gcc/ira.c
1 /* Integrated Register Allocator (IRA) entry point.
2 Copyright (C) 2006-2017 Free Software Foundation, Inc.
3 Contributed by Vladimir Makarov <vmakarov@redhat.com>.
5 This file is part of GCC.
7 GCC is free software; you can redistribute it and/or modify it under
8 the terms of the GNU General Public License as published by the Free
9 Software Foundation; either version 3, or (at your option) any later
10 version.
12 GCC is distributed in the hope that it will be useful, but WITHOUT ANY
13 WARRANTY; without even the implied warranty of MERCHANTABILITY or
14 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
15 for more details.
17 You should have received a copy of the GNU General Public License
18 along with GCC; see the file COPYING3. If not see
19 <http://www.gnu.org/licenses/>. */
21 /* The integrated register allocator (IRA) is a
22 regional register allocator performing graph coloring on a top-down
23 traversal of nested regions. Graph coloring in a region is based
24 on Chaitin-Briggs algorithm. It is called integrated because
25 register coalescing, register live range splitting, and choosing a
26 better hard register are done on-the-fly during coloring. Register
27 coalescing and choosing a cheaper hard register are done by hard
28 register preferencing during hard register assigning. The live
29 range splitting is a byproduct of the regional register allocation.
31 Major IRA notions are:
33 o *Region* is a part of CFG where graph coloring based on
34 Chaitin-Briggs algorithm is done. IRA can work on any set of
35 nested CFG regions forming a tree. Currently the regions are
36 the entire function for the root region and natural loops for
37 the other regions. Therefore data structure representing a
38 region is called loop_tree_node.
40 o *Allocno class* is a register class used for allocation of a
41 given allocno. It means that only hard registers of the given
42 register class can be assigned to the given allocno. In reality,
43 an even smaller subset of (*profitable*) hard registers can be
44 assigned. In rare cases, the subset can be even smaller
45 because our modification of the Chaitin-Briggs algorithm requires
46 that the sets of hard registers which can be assigned to allocnos
47 form a forest, i.e. the sets can be ordered in a way where any
48 previous set either does not intersect the given set or is a
49 superset of the given set.
51 o *Pressure class* is a register class belonging to a set of
52 register classes containing all of the hard-registers available
53 for register allocation. The set of all pressure classes for a
54 target is defined in the corresponding machine-description file
55 according to some criteria. Register pressure is calculated only
56 for pressure classes and it affects some IRA decisions such
57 as forming allocation regions.
59 o *Allocno* represents the live range of a pseudo-register in a
60 region. Besides the obvious attributes like the corresponding
61 pseudo-register number, allocno class, conflicting allocnos and
62 conflicting hard-registers, there are a few allocno attributes
63 which are important for understanding the allocation algorithm:
65 - *Live ranges*. This is a list of ranges of *program points*
66 where the allocno lives. Program points represent places
67 where a pseudo can be born or become dead (there are
68 approximately two times more program points than the insns)
69 and they are represented by integers starting with 0. The
70 live ranges are used to find conflicts between allocnos.
71 They also play a very important role in the transformation of
72 the IRA internal representation of several regions into a
73 one-region representation. The latter is used during the reload
74 pass because each allocno represents all of the
75 corresponding pseudo-registers.
77 - *Hard-register costs*. This is a vector of size equal to the
78 number of available hard-registers of the allocno class. The
79 cost of a callee-clobbered hard-register for an allocno is
80 increased by the cost of save/restore code around the calls
81 through the given allocno's life. If the allocno is a move
82 instruction operand and another operand is a hard-register of
83 the allocno class, the cost of the hard-register is decreased
84 by the move cost.
86 When an allocno is assigned, the hard-register with minimal
87 full cost is used. Initially, a hard-register's full cost is
88 the corresponding value from the hard-register's cost vector.
89 If the allocno is connected by a *copy* (see below) to
90 another allocno which has just received a hard-register, the
91 cost of the hard-register is decreased. Before choosing a
92 hard-register for an allocno, the allocno's current costs of
93 the hard-registers are modified by the conflict hard-register
94 costs of all of the conflicting allocnos which are not
95 assigned yet.
97 - *Conflict hard-register costs*. This is a vector of the same
98 size as the hard-register costs vector. To permit an
99 unassigned allocno to get a better hard-register, IRA uses
100 this vector to calculate the final full cost of the
101 available hard-registers. Conflict hard-register costs of an
102 unassigned allocno are also changed with a change of the
103 hard-register cost of the allocno when a copy involving the
104 allocno is processed as described above. This is done to
105 show other unassigned allocnos that a given allocno prefers
106 some hard-registers in order to remove the move instruction
107 corresponding to the copy.
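     (A short sketch of how these two cost vectors are combined when
     choosing a hard register follows this list of notions.)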
109 o *Cap*. If a pseudo-register does not live in a region but
110 lives in a nested region, IRA creates a special allocno called
111 a cap in the outer region. A region cap is also created for a
112 subregion cap.
114 o *Copy*. Allocnos can be connected by copies. Copies are used
115 to modify hard-register costs for allocnos during coloring.
116 Such modifications reflect a preference to use the same
117 hard-register for the allocnos connected by copies. Usually
118 copies are created for move insns (in this case it results in
119 register coalescing). But IRA also creates copies for operands
120 of an insn which should be assigned to the same hard-register
121 due to constraints in the machine description (it usually
122 results in removing a move generated in reload to satisfy
123 the constraints) and copies referring to the allocno which is
124 the output operand of an instruction and the allocno which is
125 an input operand dying in the instruction (creation of such
126 copies results in less register shuffling). IRA *does not*
127 create copies between the same register allocnos from different
128 regions because we use another technique for propagating
129 hard-register preference on the borders of regions.
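
   As an illustration of the full-cost computation described in the
   *Hard-register costs* and *Conflict hard-register costs* notions
   above, here is a small sketch. It is not IRA code; the function and
   array names are made up for the example, and the real computation in
   ira-color.c involves more factors:

     // Hypothetical sketch: pick the hard register of an allocno's
     // class with the smallest full cost, where the full cost is the
     // accumulated hard-register cost plus the conflict costs
     // contributed by still-unassigned conflicting allocnos.
     static int
     pick_cheapest_hard_reg (int n_class_regs, const int *hard_reg_cost,
                             const int *conflict_cost)
     {
       int best = -1;
       long best_cost = 0;

       for (int i = 0; i < n_class_regs; i++)
         {
           long full_cost = (long) hard_reg_cost[i] + conflict_cost[i];
           if (best < 0 || full_cost < best_cost)
             {
               best = i;
               best_cost = full_cost;
             }
         }
       return best;  // index into the class's hard register array
     }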
131 Allocnos (including caps) for the upper region in the region tree
132 *accumulate* information important for coloring from allocnos with
133 the same pseudo-register from nested regions. This includes
134 hard-register and memory costs, conflicts with hard-registers,
135 allocno conflicts, allocno copies and more. *Thus, attributes for
136 allocnos in a region have the same values as if the region had no
137 subregions*. It means that attributes for allocnos in the
138 outermost region corresponding to the function have the same values
139 as though the allocation used only one region which is the entire
140 function. It also means that we can look at IRA's work as if it
141 first did allocation for the whole function, then improved the
142 allocation for loops, then for their subloops, and so on.
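
   A schematic sketch of the accumulation step described above (again
   not IRA code; the names are invented, and the real propagation in
   ira-build.c handles many more attributes than a single cost vector):

     // Hypothetical sketch: a parent-region allocno folds in the cost
     // vector of a subregion allocno for the same pseudo, so the
     // root-region allocno ends up describing the entire function.
     static void
     accumulate_hard_reg_costs (int n_class_regs, int *parent_cost,
                                const int *child_cost)
     {
       for (int i = 0; i < n_class_regs; i++)
         parent_cost[i] += child_cost[i];
     }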
144 IRA major passes are:
146 o Building IRA internal representation which consists of the
147 following subpasses:
149 * First, IRA builds regions and creates allocnos (file
150 ira-build.c) and initializes most of their attributes.
152 * Then IRA finds an allocno class for each allocno and
153 calculates its initial (non-accumulated) cost of memory and
154 each hard-register of its allocno class (file ira-cost.c).
156 * IRA creates live ranges of each allocno, calculates register
157 pressure for each pressure class in each region, sets up
158 conflict hard registers for each allocno and info about calls
159 the allocno lives through (file ira-lives.c).
161 * IRA removes low register pressure loops from the regions
162 mostly to speed IRA up (file ira-build.c).
164 * IRA propagates accumulated allocno info from lower region
165 allocnos to corresponding upper region allocnos (file
166 ira-build.c).
168 * IRA creates all caps (file ira-build.c).
170 * Having live-ranges of allocnos and their classes, IRA creates
171 conflicting allocnos for each allocno. Conflicting allocnos
172 are stored as a bit vector or array of pointers to the
173 conflicting allocnos, whichever is more profitable (file
174 ira-conflicts.c). At this point IRA creates allocno copies.
176 o Coloring. Now IRA has all the necessary info to start the graph
177 coloring process. It is done in each region on a top-down traversal
178 of the region tree (file ira-color.c). The subpasses are as follows:
180 * Finding profitable hard registers of the corresponding allocno
181 class for each allocno. For example, only callee-saved hard
182 registers are frequently profitable for allocnos living
183 through calls. If the profitable hard register sets of the
184 allocnos do not form a tree based on the subset relation, we use
185 some approximation to form the tree. This approximation is
186 used to figure out trivial colorability of allocnos. Needing
187 the approximation is a pretty rare case.
189 * Putting allocnos onto the coloring stack. IRA uses Briggs
190 optimistic coloring which is a major improvement over
191 Chaitin's coloring. Therefore IRA does not spill allocnos at
192 this point. There is some freedom in the order of putting
193 allocnos on the stack which can affect the final result of
194 the allocation. IRA uses some heuristics to improve the
195 order. The major one is to form *threads* from colorable
196 allocnos and push them on the stack by threads. A thread is a
197 set of non-conflicting colorable allocnos connected by
198 copies. The thread contains allocnos from the colorable
199 bucket or colorable allocnos already pushed onto the coloring
200 stack. Pushing thread allocnos one after another onto the
201 stack increases chances of removing copies when the allocnos
202 get the same hard reg.
204 We also use a modification of the Chaitin-Briggs algorithm
205 which works for intersecting register classes of allocnos. To
206 figure out trivial colorability of allocnos, the tree of hard
207 register sets mentioned above is used. To see how the
208 algorithm works in an i386 example, consider an allocno to
209 which any general hard register can be assigned. If the
210 allocno conflicts with eight allocnos to which only the EAX
211 register can be assigned, the given allocno is still trivially
212 colorable because all conflicting allocnos can be assigned
213 only to EAX while all other general hard registers remain
214 free (a small sketch of this check follows the pass list below).
216 To get an idea of the trivial colorability criterion used, it
217 is also useful to read the article "Graph-Coloring Register
218 Allocation for Irregular Architectures" by Michael D. Smith
219 and Glenn Holloway. The major difference between the article's
220 approach and the approach used in IRA is that Smith's approach
221 takes register classes only from the machine description while
222 IRA calculates register classes from the intermediate code too
223 (e.g. an explicit usage of hard registers in RTL code for
224 parameter passing can result in the creation of additional
225 register classes which contain or exclude those hard
226 registers). That makes the IRA approach useful for improving
227 coloring even for architectures with regular register files;
228 in fact some benchmarking shows the improvement for
229 regular class architectures is even bigger than for irregular
230 ones. Another difference is that Smith's approach chooses the
231 intersection of the classes of all insn operands in which a given
232 pseudo occurs, while IRA can use bigger classes if that is still
233 more profitable than memory usage.
235 * Popping the allocnos from the stack and assigning them hard
236 registers. If IRA cannot assign a hard register to an
237 allocno and the allocno is coalesced, IRA undoes the
238 coalescing and puts the uncoalesced allocnos onto the stack in
239 the hope that some such allocnos will get a hard register
240 separately. If IRA fails to assign a hard register or memory
241 is more profitable for it, IRA spills the allocno. IRA
242 assigns the allocno the hard-register with minimal full
243 allocation cost which reflects the cost of usage of the
244 hard-register for the allocno and cost of usage of the
245 hard-register for allocnos conflicting with given allocno.
247 * Chaitin-Briggs coloring assigns as many pseudos as possible
248 to hard registers. After coloring we try to improve the
249 allocation from the cost point of view. We improve the
250 allocation by spilling some allocnos and assigning the freed
251 hard registers to other allocnos if it decreases the overall
252 allocation cost.
254 * After allocno assigning in the region, IRA modifies the hard
255 register and memory costs for the corresponding allocnos in
256 the subregions to reflect the cost of possible loads, stores,
257 or moves on the border of the region and its subregions.
258 When the default regional allocation algorithm is used
259 (-fira-algorithm=mixed), IRA just propagates the assignment
260 for allocnos if the register pressure in the region for the
261 corresponding pressure class is less than the number of available
262 hard registers for the given pressure class.
264 o Spill/restore code moving. When IRA performs an allocation
265 by traversing regions in top-down order, it does not know what
266 happens below in the region tree. Therefore, sometimes IRA
267 misses opportunities to perform a better allocation. A simple
268 optimization tries to improve allocation in a region having
269 subregions and contained in another region. If the
270 corresponding allocnos in the subregion are spilled, it spills
271 the region allocno if it is profitable. The optimization
272 implements a simple iterative algorithm performing profitable
273 transformations while they are still possible. It is fast in
274 practice, so there is no real need for a better time complexity
275 algorithm.
277 o Code change. After coloring, two allocnos representing the
278 same pseudo-register outside and inside a region respectively
279 may be assigned to different locations (hard-registers or
280 memory). In this case IRA creates and uses a new
281 pseudo-register inside the region and adds code to move allocno
282 values on the region's borders. This is done during top-down
283 traversal of the regions (file ira-emit.c). In some
284 complicated cases IRA can create a new allocno to move allocno
285 values (e.g. when a swap of values stored in two hard-registers
286 is needed). At this stage, the new allocno is marked as
287 spilled. IRA still creates the pseudo-register and the moves
288 on the region borders even when both allocnos were assigned to
289 the same hard-register. If the reload pass spills a
290 pseudo-register for some reason, the effect will be smaller
291 because another allocno will still be in the hard-register. In
292 most cases, this is better than spilling both allocnos. If
293 reload does not change the allocation for the two
294 pseudo-registers, the trivial move will be removed by
295 post-reload optimizations. IRA does not generate moves for
296 allocnos assigned to the same hard register when the default
297 regional allocation algorithm is used and the register pressure
298 in the region for the corresponding pressure class is less than
299 the number of available hard registers for the given pressure class.
300 IRA also does some optimizations to remove redundant stores and
301 to reduce code duplication on the region borders.
303 o Flattening internal representation. After changing code, IRA
304 transforms its internal representation for several regions into
305 one region representation (file ira-build.c). This process is
306 called IR flattening. Such process is more complicated than IR
307 rebuilding would be, but is much faster.
309 o After IR flattening, IRA tries to assign hard registers to all
310 spilled allocnos. This is implemented by a simple and fast
311 priority coloring algorithm (see function
312 ira_reassign_conflict_allocnos in ira-color.c). Here new allocnos
313 created during the code change pass can be assigned to hard
314 registers.
316 o At the end IRA calls the reload pass. The reload pass
317 communicates with IRA through several functions in file
318 ira-color.c to improve its decisions in
320 * sharing stack slots for the spilled pseudos based on IRA info
321 about pseudo-register conflicts.
323 * reassigning hard-registers to all spilled pseudos at the end
324 of each reload iteration.
326 * choosing a better hard-register to spill based on IRA info
327 about pseudo-register live ranges and the register pressure
328 in places where the pseudo-register lives.
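
   The following sketch makes the trivial-colorability idea from the
   coloring pass above (the i386/EAX example) concrete. It is a
   simplification, not the IRA implementation: the real test in
   ira-color.c works on the tree of profitable hard register sets,
   while here conflicting allocnos are just grouped by the register
   subset they are restricted to, and each group's demand is capped by
   the size of that subset:

     // Hypothetical sketch: an allocno needing REGS_NEEDED registers
     // from a class with CLASS_SIZE allocatable registers is trivially
     // colorable if the worst-case demand of its conflicts still
     // leaves enough registers for it.
     static int
     trivially_colorable_p (int class_size, int regs_needed,
                            int n_groups, const int *group_demand,
                            const int *group_subset_size)
     {
       int taken = 0;

       for (int g = 0; g < n_groups; g++)
         {
           int d = group_demand[g];
           // Eight conflicts restricted to EAX alone still occupy at
           // most one register of the general class.
           if (d > group_subset_size[g])
             d = group_subset_size[g];
           taken += d;
         }
       return class_size - taken >= regs_needed;
     }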
330 IRA uses a lot of data representing the target processors. These
331 data are initialized in file ira.c.
333 If the function has no loops (or the loops are ignored when
334 -fira-algorithm=CB is used), we have classic Chaitin-Briggs
335 coloring (only instead of a separate pass of coalescing, we use hard
336 register preferencing). In such a case, IRA works much faster
337 because many things are not done (like IR flattening, the
338 spill/restore optimization, and the code change).
340 Literature worth reading for a better understanding of the code:
342 o Preston Briggs, Keith D. Cooper, Linda Torczon. Improvements to
343 Graph Coloring Register Allocation.
345 o David Callahan, Brian Koblenz. Register allocation via
346 hierarchical graph coloring.
348 o Keith Cooper, Anshuman Dasgupta, Jason Eckhardt. Revisiting Graph
349 Coloring Register Allocation: A Study of the Chaitin-Briggs and
350 Callahan-Koblenz Algorithms.
352 o Guei-Yuan Lueh, Thomas Gross, and Ali-Reza Adl-Tabatabai. Global
353 Register Allocation Based on Graph Fusion.
355 o Michael D. Smith and Glenn Holloway. Graph-Coloring Register
356 Allocation for Irregular Architectures.
358 o Vladimir Makarov. The Integrated Register Allocator for GCC.
360 o Vladimir Makarov. The top-down register allocator for irregular
361 register file architectures.
366 #include "config.h"
367 #include "system.h"
368 #include "coretypes.h"
369 #include "backend.h"
370 #include "target.h"
371 #include "rtl.h"
372 #include "tree.h"
373 #include "df.h"
374 #include "memmodel.h"
375 #include "tm_p.h"
376 #include "insn-config.h"
377 #include "regs.h"
378 #include "ira.h"
379 #include "ira-int.h"
380 #include "diagnostic-core.h"
381 #include "cfgrtl.h"
382 #include "cfgbuild.h"
383 #include "cfgcleanup.h"
384 #include "expr.h"
385 #include "tree-pass.h"
386 #include "output.h"
387 #include "reload.h"
388 #include "cfgloop.h"
389 #include "lra.h"
390 #include "dce.h"
391 #include "dbgcnt.h"
392 #include "rtl-iter.h"
393 #include "shrink-wrap.h"
394 #include "print-rtl.h"
396 struct target_ira default_target_ira;
397 struct target_ira_int default_target_ira_int;
398 #if SWITCHABLE_TARGET
399 struct target_ira *this_target_ira = &default_target_ira;
400 struct target_ira_int *this_target_ira_int = &default_target_ira_int;
401 #endif
403 /* A modified value of flag `-fira-verbose' used internally. */
404 int internal_flag_ira_verbose;
406 /* Dump file of the allocator if it is not NULL. */
407 FILE *ira_dump_file;
409 /* The number of elements in the following array. */
410 int ira_spilled_reg_stack_slots_num;
412 /* The following array contains info about spilled pseudo-registers
413 stack slots used in current function so far. */
414 struct ira_spilled_reg_stack_slot *ira_spilled_reg_stack_slots;
416 /* Correspondingly overall cost of the allocation, overall cost before
417 reload, cost of the allocnos assigned to hard-registers, cost of
418 the allocnos assigned to memory, cost of loads, stores and register
419 move insns generated for pseudo-register live range splitting (see
420 ira-emit.c). */
421 int64_t ira_overall_cost, overall_cost_before;
422 int64_t ira_reg_cost, ira_mem_cost;
423 int64_t ira_load_cost, ira_store_cost, ira_shuffle_cost;
424 int ira_move_loops_num, ira_additional_jumps_num;
426 /* All registers that can be eliminated. */
428 HARD_REG_SET eliminable_regset;
430 /* Value of max_reg_num () before IRA work start. This value helps
431 us to recognize a situation when new pseudos were created during
432 IRA work. */
433 static int max_regno_before_ira;
435 /* Temporary hard reg set used for a different calculation. */
436 static HARD_REG_SET temp_hard_regset;
438 #define last_mode_for_init_move_cost \
439 (this_target_ira_int->x_last_mode_for_init_move_cost)
442 /* The function sets up the map IRA_REG_MODE_HARD_REGSET. */
443 static void
444 setup_reg_mode_hard_regset (void)
446 int i, m, hard_regno;
448 for (m = 0; m < NUM_MACHINE_MODES; m++)
449 for (hard_regno = 0; hard_regno < FIRST_PSEUDO_REGISTER; hard_regno++)
451 CLEAR_HARD_REG_SET (ira_reg_mode_hard_regset[hard_regno][m]);
452 for (i = hard_regno_nregs[hard_regno][m] - 1; i >= 0; i--)
453 if (hard_regno + i < FIRST_PSEUDO_REGISTER)
454 SET_HARD_REG_BIT (ira_reg_mode_hard_regset[hard_regno][m],
455 hard_regno + i);
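/* For illustration (not target-specific data): if some mode needs two
   consecutive word registers on a target, the loop above leaves
   ira_reg_mode_hard_regset[R][that mode] containing R and R+1, i.e. the
   map answers which hard registers a value of the given mode occupies
   when it starts in hard register R.  */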
460 #define no_unit_alloc_regs \
461 (this_target_ira_int->x_no_unit_alloc_regs)
463 /* The function sets up the three arrays declared above. */
464 static void
465 setup_class_hard_regs (void)
467 int cl, i, hard_regno, n;
468 HARD_REG_SET processed_hard_reg_set;
470 ira_assert (SHRT_MAX >= FIRST_PSEUDO_REGISTER);
471 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
473 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
474 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
475 CLEAR_HARD_REG_SET (processed_hard_reg_set);
476 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
478 ira_non_ordered_class_hard_regs[cl][i] = -1;
479 ira_class_hard_reg_index[cl][i] = -1;
481 for (n = 0, i = 0; i < FIRST_PSEUDO_REGISTER; i++)
483 #ifdef REG_ALLOC_ORDER
484 hard_regno = reg_alloc_order[i];
485 #else
486 hard_regno = i;
487 #endif
488 if (TEST_HARD_REG_BIT (processed_hard_reg_set, hard_regno))
489 continue;
490 SET_HARD_REG_BIT (processed_hard_reg_set, hard_regno);
491 if (! TEST_HARD_REG_BIT (temp_hard_regset, hard_regno))
492 ira_class_hard_reg_index[cl][hard_regno] = -1;
493 else
495 ira_class_hard_reg_index[cl][hard_regno] = n;
496 ira_class_hard_regs[cl][n++] = hard_regno;
499 ira_class_hard_regs_num[cl] = n;
500 for (n = 0, i = 0; i < FIRST_PSEUDO_REGISTER; i++)
501 if (TEST_HARD_REG_BIT (temp_hard_regset, i))
502 ira_non_ordered_class_hard_regs[cl][n++] = i;
503 ira_assert (ira_class_hard_regs_num[cl] == n);
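/* A descriptive note on the result: ira_class_hard_regs[CL] lists the
   allocatable hard registers of CL in allocation order (REG_ALLOC_ORDER
   if the target defines it), ira_class_hard_regs_num[CL] is the length
   of that list, and ira_class_hard_reg_index[CL][REGNO] gives REGNO's
   position in the list, or -1 if REGNO is not allocatable in CL.  */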
507 /* Set up global variables defining info about hard registers for the
508 allocation. These depend on USE_HARD_FRAME_P whose TRUE value means
509 that we can use the hard frame pointer for the allocation. */
510 static void
511 setup_alloc_regs (bool use_hard_frame_p)
513 #ifdef ADJUST_REG_ALLOC_ORDER
514 ADJUST_REG_ALLOC_ORDER;
515 #endif
516 COPY_HARD_REG_SET (no_unit_alloc_regs, fixed_nonglobal_reg_set);
517 if (! use_hard_frame_p)
518 SET_HARD_REG_BIT (no_unit_alloc_regs, HARD_FRAME_POINTER_REGNUM);
519 setup_class_hard_regs ();
524 #define alloc_reg_class_subclasses \
525 (this_target_ira_int->x_alloc_reg_class_subclasses)
527 /* Initialize the table of subclasses of each reg class. */
528 static void
529 setup_reg_subclasses (void)
531 int i, j;
532 HARD_REG_SET temp_hard_regset2;
534 for (i = 0; i < N_REG_CLASSES; i++)
535 for (j = 0; j < N_REG_CLASSES; j++)
536 alloc_reg_class_subclasses[i][j] = LIM_REG_CLASSES;
538 for (i = 0; i < N_REG_CLASSES; i++)
540 if (i == (int) NO_REGS)
541 continue;
543 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[i]);
544 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
545 if (hard_reg_set_empty_p (temp_hard_regset))
546 continue;
547 for (j = 0; j < N_REG_CLASSES; j++)
548 if (i != j)
550 enum reg_class *p;
552 COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[j]);
553 AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
554 if (! hard_reg_set_subset_p (temp_hard_regset,
555 temp_hard_regset2))
556 continue;
557 p = &alloc_reg_class_subclasses[j][0];
558 while (*p != LIM_REG_CLASSES) p++;
559 *p = (enum reg_class) i;
566 /* Set up IRA_MEMORY_MOVE_COST and IRA_MAX_MEMORY_MOVE_COST. */
567 static void
568 setup_class_subset_and_memory_move_costs (void)
570 int cl, cl2, mode, cost;
571 HARD_REG_SET temp_hard_regset2;
573 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
574 ira_memory_move_cost[mode][NO_REGS][0]
575 = ira_memory_move_cost[mode][NO_REGS][1] = SHRT_MAX;
576 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
578 if (cl != (int) NO_REGS)
579 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
581 ira_max_memory_move_cost[mode][cl][0]
582 = ira_memory_move_cost[mode][cl][0]
583 = memory_move_cost ((machine_mode) mode,
584 (reg_class_t) cl, false);
585 ira_max_memory_move_cost[mode][cl][1]
586 = ira_memory_move_cost[mode][cl][1]
587 = memory_move_cost ((machine_mode) mode,
588 (reg_class_t) cl, true);
589 /* Costs for NO_REGS are used in cost calculation on the
590 1st pass when the preferred register classes are not
591 known yet. In this case we take the best scenario. */
592 if (ira_memory_move_cost[mode][NO_REGS][0]
593 > ira_memory_move_cost[mode][cl][0])
594 ira_max_memory_move_cost[mode][NO_REGS][0]
595 = ira_memory_move_cost[mode][NO_REGS][0]
596 = ira_memory_move_cost[mode][cl][0];
597 if (ira_memory_move_cost[mode][NO_REGS][1]
598 > ira_memory_move_cost[mode][cl][1])
599 ira_max_memory_move_cost[mode][NO_REGS][1]
600 = ira_memory_move_cost[mode][NO_REGS][1]
601 = ira_memory_move_cost[mode][cl][1];
604 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
605 for (cl2 = (int) N_REG_CLASSES - 1; cl2 >= 0; cl2--)
607 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
608 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
609 COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl2]);
610 AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
611 ira_class_subset_p[cl][cl2]
612 = hard_reg_set_subset_p (temp_hard_regset, temp_hard_regset2);
613 if (! hard_reg_set_empty_p (temp_hard_regset2)
614 && hard_reg_set_subset_p (reg_class_contents[cl2],
615 reg_class_contents[cl]))
616 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
618 cost = ira_memory_move_cost[mode][cl2][0];
619 if (cost > ira_max_memory_move_cost[mode][cl][0])
620 ira_max_memory_move_cost[mode][cl][0] = cost;
621 cost = ira_memory_move_cost[mode][cl2][1];
622 if (cost > ira_max_memory_move_cost[mode][cl][1])
623 ira_max_memory_move_cost[mode][cl][1] = cost;
626 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
627 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
629 ira_memory_move_cost[mode][cl][0]
630 = ira_max_memory_move_cost[mode][cl][0];
631 ira_memory_move_cost[mode][cl][1]
632 = ira_max_memory_move_cost[mode][cl][1];
634 setup_reg_subclasses ();
639 /* Define the following macro if allocation through malloc is
640 preferable. */
641 #define IRA_NO_OBSTACK
643 #ifndef IRA_NO_OBSTACK
644 /* Obstack used for storing all dynamic data (except bitmaps) of the
645 IRA. */
646 static struct obstack ira_obstack;
647 #endif
649 /* Obstack used for storing all bitmaps of the IRA. */
650 static struct bitmap_obstack ira_bitmap_obstack;
652 /* Allocate memory of size LEN for IRA data. */
653 void *
654 ira_allocate (size_t len)
656 void *res;
658 #ifndef IRA_NO_OBSTACK
659 res = obstack_alloc (&ira_obstack, len);
660 #else
661 res = xmalloc (len);
662 #endif
663 return res;
666 /* Free memory ADDR allocated for IRA data. */
667 void
668 ira_free (void *addr ATTRIBUTE_UNUSED)
670 #ifndef IRA_NO_OBSTACK
671 /* do nothing */
672 #else
673 free (addr);
674 #endif
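/* A minimal usage sketch (hypothetical caller, not code from this file):
   memory obtained from ira_allocate is released with ira_free, so the
   IRA_NO_OBSTACK choice between obstack and malloc allocation stays
   transparent to callers:

       int *v = (int *) ira_allocate (n * sizeof (int));
       ... fill and use v ...
       ira_free (v);  */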
678 /* Allocate and return a bitmap for IRA. */
679 bitmap
680 ira_allocate_bitmap (void)
682 return BITMAP_ALLOC (&ira_bitmap_obstack);
685 /* Free bitmap B allocated for IRA. */
686 void
687 ira_free_bitmap (bitmap b ATTRIBUTE_UNUSED)
689 /* do nothing */
694 /* Output information about allocation of all allocnos (except for
695 caps) into file F. */
696 void
697 ira_print_disposition (FILE *f)
699 int i, n, max_regno;
700 ira_allocno_t a;
701 basic_block bb;
703 fprintf (f, "Disposition:");
704 max_regno = max_reg_num ();
705 for (n = 0, i = FIRST_PSEUDO_REGISTER; i < max_regno; i++)
706 for (a = ira_regno_allocno_map[i];
707 a != NULL;
708 a = ALLOCNO_NEXT_REGNO_ALLOCNO (a))
710 if (n % 4 == 0)
711 fprintf (f, "\n");
712 n++;
713 fprintf (f, " %4d:r%-4d", ALLOCNO_NUM (a), ALLOCNO_REGNO (a));
714 if ((bb = ALLOCNO_LOOP_TREE_NODE (a)->bb) != NULL)
715 fprintf (f, "b%-3d", bb->index);
716 else
717 fprintf (f, "l%-3d", ALLOCNO_LOOP_TREE_NODE (a)->loop_num);
718 if (ALLOCNO_HARD_REGNO (a) >= 0)
719 fprintf (f, " %3d", ALLOCNO_HARD_REGNO (a));
720 else
721 fprintf (f, " mem");
723 fprintf (f, "\n");
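/* The output produced above consists of entries of the form
   "<allocno-num>:r<regno>" followed by "b<bb-index>" or "l<loop-num>"
   and then either the assigned hard register number or "mem", printed
   four entries per line.  */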
726 /* Outputs information about allocation of all allocnos into
727 stderr. */
728 void
729 ira_debug_disposition (void)
731 ira_print_disposition (stderr);
736 /* Set up ira_stack_reg_pressure_class which is the biggest pressure
737 register class containing stack registers or NO_REGS if there are
738 no stack registers. To find this class, we iterate through all
739 register pressure classes and choose the first register pressure
740 class containing all the stack registers and having the biggest
741 size. */
742 static void
743 setup_stack_reg_pressure_class (void)
745 ira_stack_reg_pressure_class = NO_REGS;
746 #ifdef STACK_REGS
748 int i, best, size;
749 enum reg_class cl;
750 HARD_REG_SET temp_hard_regset2;
752 CLEAR_HARD_REG_SET (temp_hard_regset);
753 for (i = FIRST_STACK_REG; i <= LAST_STACK_REG; i++)
754 SET_HARD_REG_BIT (temp_hard_regset, i);
755 best = 0;
756 for (i = 0; i < ira_pressure_classes_num; i++)
758 cl = ira_pressure_classes[i];
759 COPY_HARD_REG_SET (temp_hard_regset2, temp_hard_regset);
760 AND_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl]);
761 size = hard_reg_set_size (temp_hard_regset2);
762 if (best < size)
764 best = size;
765 ira_stack_reg_pressure_class = cl;
769 #endif
772 /* Find pressure classes which are register classes for which we
773 calculate register pressure in IRA, register pressure sensitive
774 insn scheduling, and register pressure sensitive loop invariant
775 motion.
777 To make register pressure calculation easy, we always use
778 non-intersecting register pressure classes. A move of hard
779 registers within one register pressure class is not more expensive
780 than a load and store of the hard registers. Most likely an allocno
781 class will be a subset of a register pressure class and in many
782 cases will itself be a register pressure class. That makes usage of
783 register pressure classes a good approximation to find a high
784 register pressure. */
785 static void
786 setup_pressure_classes (void)
788 int cost, i, n, curr;
789 int cl, cl2;
790 enum reg_class pressure_classes[N_REG_CLASSES];
791 int m;
792 HARD_REG_SET temp_hard_regset2;
793 bool insert_p;
795 if (targetm.compute_pressure_classes)
796 n = targetm.compute_pressure_classes (pressure_classes);
797 else
799 n = 0;
800 for (cl = 0; cl < N_REG_CLASSES; cl++)
802 if (ira_class_hard_regs_num[cl] == 0)
803 continue;
804 if (ira_class_hard_regs_num[cl] != 1
805 /* A register class without subclasses may contain a few
806 hard registers and movement between them is costly
807 (e.g. SPARC FPCC registers). We still should consider it
808 as a candidate for a pressure class. */
809 && alloc_reg_class_subclasses[cl][0] < cl)
811 /* Check that the moves between any hard registers of the
812 current class are not more expensive for a legal mode
813 than load/store of the hard registers of the current
814 class. Such class is a potential candidate to be a
815 register pressure class. */
816 for (m = 0; m < NUM_MACHINE_MODES; m++)
818 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
819 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
820 AND_COMPL_HARD_REG_SET (temp_hard_regset,
821 ira_prohibited_class_mode_regs[cl][m]);
822 if (hard_reg_set_empty_p (temp_hard_regset))
823 continue;
824 ira_init_register_move_cost_if_necessary ((machine_mode) m);
825 cost = ira_register_move_cost[m][cl][cl];
826 if (cost <= ira_max_memory_move_cost[m][cl][1]
827 || cost <= ira_max_memory_move_cost[m][cl][0])
828 break;
830 if (m >= NUM_MACHINE_MODES)
831 continue;
833 curr = 0;
834 insert_p = true;
835 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
836 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
837 /* Remove so far added pressure classes which are subsets of the
838 current candidate class. Prefer GENERAL_REGS as a pressure
839 register class to another class containing the same
840 allocatable hard registers. We do this because machine
841 dependent cost hooks might give wrong costs for the latter
842 class but always give the right cost for the former class
843 (GENERAL_REGS). */
844 for (i = 0; i < n; i++)
846 cl2 = pressure_classes[i];
847 COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl2]);
848 AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
849 if (hard_reg_set_subset_p (temp_hard_regset, temp_hard_regset2)
850 && (! hard_reg_set_equal_p (temp_hard_regset,
851 temp_hard_regset2)
852 || cl2 == (int) GENERAL_REGS))
854 pressure_classes[curr++] = (enum reg_class) cl2;
855 insert_p = false;
856 continue;
858 if (hard_reg_set_subset_p (temp_hard_regset2, temp_hard_regset)
859 && (! hard_reg_set_equal_p (temp_hard_regset2,
860 temp_hard_regset)
861 || cl == (int) GENERAL_REGS))
862 continue;
863 if (hard_reg_set_equal_p (temp_hard_regset2, temp_hard_regset))
864 insert_p = false;
865 pressure_classes[curr++] = (enum reg_class) cl2;
867 /* If the current candidate is a subset of a so far added
868 pressure class, don't add it to the list of the pressure
869 classes. */
870 if (insert_p)
871 pressure_classes[curr++] = (enum reg_class) cl;
872 n = curr;
875 #ifdef ENABLE_IRA_CHECKING
877 HARD_REG_SET ignore_hard_regs;
879 /* Check pressure classes correctness: here we check that hard
880 registers from all register pressure classes contain all hard
881 registers available for the allocation. */
882 CLEAR_HARD_REG_SET (temp_hard_regset);
883 CLEAR_HARD_REG_SET (temp_hard_regset2);
884 COPY_HARD_REG_SET (ignore_hard_regs, no_unit_alloc_regs);
885 for (cl = 0; cl < LIM_REG_CLASSES; cl++)
887 /* For some targets (like MIPS with MD_REGS), there are some
888 classes with hard registers available for allocation but
889 not able to hold a value of any mode. */
890 for (m = 0; m < NUM_MACHINE_MODES; m++)
891 if (contains_reg_of_mode[cl][m])
892 break;
893 if (m >= NUM_MACHINE_MODES)
895 IOR_HARD_REG_SET (ignore_hard_regs, reg_class_contents[cl]);
896 continue;
898 for (i = 0; i < n; i++)
899 if ((int) pressure_classes[i] == cl)
900 break;
901 IOR_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl]);
902 if (i < n)
903 IOR_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
905 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
906 /* Some targets (like SPARC with ICC reg) have allocatable regs
907 for which no reg class is defined. */
908 if (REGNO_REG_CLASS (i) == NO_REGS)
909 SET_HARD_REG_BIT (ignore_hard_regs, i);
910 AND_COMPL_HARD_REG_SET (temp_hard_regset, ignore_hard_regs);
911 AND_COMPL_HARD_REG_SET (temp_hard_regset2, ignore_hard_regs);
912 ira_assert (hard_reg_set_subset_p (temp_hard_regset2, temp_hard_regset));
914 #endif
915 ira_pressure_classes_num = 0;
916 for (i = 0; i < n; i++)
918 cl = (int) pressure_classes[i];
919 ira_reg_pressure_class_p[cl] = true;
920 ira_pressure_classes[ira_pressure_classes_num++] = (enum reg_class) cl;
922 setup_stack_reg_pressure_class ();
925 /* Set up IRA_UNIFORM_CLASS_P. A uniform class is a register class
926 whose register move cost between any registers of the class is the
927 same as for all its subclasses. We use the data to speed up the
928 2nd pass of calculations of allocno costs. */
929 static void
930 setup_uniform_class_p (void)
932 int i, cl, cl2, m;
934 for (cl = 0; cl < N_REG_CLASSES; cl++)
936 ira_uniform_class_p[cl] = false;
937 if (ira_class_hard_regs_num[cl] == 0)
938 continue;
939 /* We cannot use alloc_reg_class_subclasses here because the move
940 cost hooks do not take into account that some registers are
941 unavailable for the subtarget. E.g. for i686, INT_SSE_REGS
942 is an element of alloc_reg_class_subclasses for GENERAL_REGS
943 because SSE regs are unavailable. */
944 for (i = 0; (cl2 = reg_class_subclasses[cl][i]) != LIM_REG_CLASSES; i++)
946 if (ira_class_hard_regs_num[cl2] == 0)
947 continue;
948 for (m = 0; m < NUM_MACHINE_MODES; m++)
949 if (contains_reg_of_mode[cl][m] && contains_reg_of_mode[cl2][m])
951 ira_init_register_move_cost_if_necessary ((machine_mode) m);
952 if (ira_register_move_cost[m][cl][cl]
953 != ira_register_move_cost[m][cl2][cl2])
954 break;
956 if (m < NUM_MACHINE_MODES)
957 break;
959 if (cl2 == LIM_REG_CLASSES)
960 ira_uniform_class_p[cl] = true;
964 /* Set up IRA_ALLOCNO_CLASSES, IRA_ALLOCNO_CLASSES_NUM,
965 IRA_IMPORTANT_CLASSES, and IRA_IMPORTANT_CLASSES_NUM.
967 A target may have many subtargets, and not all target hard registers
968 can be used for allocation (e.g. the x86 port in 32-bit mode cannot use
969 the hard registers introduced in x86-64 like r8-r15). Some classes
970 might have the same allocatable hard registers, e.g. INDEX_REGS
971 and GENERAL_REGS in the x86 port in 32-bit mode. To decrease the
972 effort of different calculations we introduce allocno classes which
973 contain unique non-empty sets of allocatable hard-registers.
975 Pseudo class cost calculation in ira-costs.c is very expensive.
976 Therefore we are trying to decrease the number of classes involved in
977 such calculation. Register classes used in the cost calculation
978 are called important classes. They are allocno classes and other
979 non-empty classes whose allocatable hard register sets are inside
980 an allocno class hard register set. At first sight, it
981 looks like they are just the allocno classes. This is not true. In
982 the example of the x86 port in 32-bit mode, allocno classes will contain
983 GENERAL_REGS but not LEGACY_REGS (because the allocatable hard
984 registers are the same for both classes). The important
985 classes will contain GENERAL_REGS and LEGACY_REGS. This is done
986 because a machine description insn constraint may refer to
987 LEGACY_REGS and the code in ira-costs.c is mostly based on investigation
988 of the insn constraints. */
989 static void
990 setup_allocno_and_important_classes (void)
992 int i, j, n, cl;
993 bool set_p;
994 HARD_REG_SET temp_hard_regset2;
995 static enum reg_class classes[LIM_REG_CLASSES + 1];
997 n = 0;
998 /* Collect classes which contain unique sets of allocatable hard
999 registers. Prefer GENERAL_REGS to other classes containing the
1000 same set of hard registers. */
1001 for (i = 0; i < LIM_REG_CLASSES; i++)
1003 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[i]);
1004 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
1005 for (j = 0; j < n; j++)
1007 cl = classes[j];
1008 COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl]);
1009 AND_COMPL_HARD_REG_SET (temp_hard_regset2,
1010 no_unit_alloc_regs);
1011 if (hard_reg_set_equal_p (temp_hard_regset,
1012 temp_hard_regset2))
1013 break;
1015 if (j >= n || targetm.additional_allocno_class_p (i))
1016 classes[n++] = (enum reg_class) i;
1017 else if (i == GENERAL_REGS)
1018 /* Prefer general regs. For i386 example, it means that
1019 we prefer GENERAL_REGS over INDEX_REGS or LEGACY_REGS
1020 (all of them consist of the same available hard
1021 registers). */
1022 classes[j] = (enum reg_class) i;
1024 classes[n] = LIM_REG_CLASSES;
1026 /* Set up classes which can be used for allocnos as classes
1027 containing non-empty unique sets of allocatable hard
1028 registers. */
1029 ira_allocno_classes_num = 0;
1030 for (i = 0; (cl = classes[i]) != LIM_REG_CLASSES; i++)
1031 if (ira_class_hard_regs_num[cl] > 0)
1032 ira_allocno_classes[ira_allocno_classes_num++] = (enum reg_class) cl;
1033 ira_important_classes_num = 0;
1034 /* Add non-allocno classes containing a non-empty set of
1035 allocatable hard regs. */
1036 for (cl = 0; cl < N_REG_CLASSES; cl++)
1037 if (ira_class_hard_regs_num[cl] > 0)
1039 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
1040 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
1041 set_p = false;
1042 for (j = 0; j < ira_allocno_classes_num; j++)
1044 COPY_HARD_REG_SET (temp_hard_regset2,
1045 reg_class_contents[ira_allocno_classes[j]]);
1046 AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
1047 if ((enum reg_class) cl == ira_allocno_classes[j])
1048 break;
1049 else if (hard_reg_set_subset_p (temp_hard_regset,
1050 temp_hard_regset2))
1051 set_p = true;
1053 if (set_p && j >= ira_allocno_classes_num)
1054 ira_important_classes[ira_important_classes_num++]
1055 = (enum reg_class) cl;
1057 /* Now add allocno classes to the important classes. */
1058 for (j = 0; j < ira_allocno_classes_num; j++)
1059 ira_important_classes[ira_important_classes_num++]
1060 = ira_allocno_classes[j];
1061 for (cl = 0; cl < N_REG_CLASSES; cl++)
1063 ira_reg_allocno_class_p[cl] = false;
1064 ira_reg_pressure_class_p[cl] = false;
1066 for (j = 0; j < ira_allocno_classes_num; j++)
1067 ira_reg_allocno_class_p[ira_allocno_classes[j]] = true;
1068 setup_pressure_classes ();
1069 setup_uniform_class_p ();
1072 /* Set up translation in CLASS_TRANSLATE of all classes into a class
1073 given by array CLASSES of length CLASSES_NUM. The function is used
1074 to translate any reg class to an allocno class or to a
1075 pressure class. This translation is necessary for some
1076 calculations when we can use only allocno or pressure classes and
1077 such a translation represents an approximate representation of all
1078 classes.
1080 The translation in the case when the allocatable hard register set of a
1081 given class is a subset of the allocatable hard register set of a class
1082 in CLASSES is pretty simple. We use the smallest class from CLASSES
1083 containing the given class. If the allocatable hard register set of a
1084 given class is not a subset of any corresponding set of a class
1085 from CLASSES, we use the cheapest (from the load/store point of view)
1086 class from CLASSES whose set intersects with the given class's set. */
1087 static void
1088 setup_class_translate_array (enum reg_class *class_translate,
1089 int classes_num, enum reg_class *classes)
1091 int cl, mode;
1092 enum reg_class aclass, best_class, *cl_ptr;
1093 int i, cost, min_cost, best_cost;
1095 for (cl = 0; cl < N_REG_CLASSES; cl++)
1096 class_translate[cl] = NO_REGS;
1098 for (i = 0; i < classes_num; i++)
1100 aclass = classes[i];
1101 for (cl_ptr = &alloc_reg_class_subclasses[aclass][0];
1102 (cl = *cl_ptr) != LIM_REG_CLASSES;
1103 cl_ptr++)
1104 if (class_translate[cl] == NO_REGS)
1105 class_translate[cl] = aclass;
1106 class_translate[aclass] = aclass;
1108 /* For classes which are not fully covered by one of the given classes
1109 (in other words, covered by more than one given class), use the
1110 cheapest class. */
1111 for (cl = 0; cl < N_REG_CLASSES; cl++)
1113 if (cl == NO_REGS || class_translate[cl] != NO_REGS)
1114 continue;
1115 best_class = NO_REGS;
1116 best_cost = INT_MAX;
1117 for (i = 0; i < classes_num; i++)
1119 aclass = classes[i];
1120 COPY_HARD_REG_SET (temp_hard_regset,
1121 reg_class_contents[aclass]);
1122 AND_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
1123 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
1124 if (! hard_reg_set_empty_p (temp_hard_regset))
1126 min_cost = INT_MAX;
1127 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
1129 cost = (ira_memory_move_cost[mode][aclass][0]
1130 + ira_memory_move_cost[mode][aclass][1]);
1131 if (min_cost > cost)
1132 min_cost = cost;
1134 if (best_class == NO_REGS || best_cost > min_cost)
1136 best_class = aclass;
1137 best_cost = min_cost;
1141 class_translate[cl] = best_class;
1145 /* Set up array IRA_ALLOCNO_CLASS_TRANSLATE and
1146 IRA_PRESSURE_CLASS_TRANSLATE. */
1147 static void
1148 setup_class_translate (void)
1150 setup_class_translate_array (ira_allocno_class_translate,
1151 ira_allocno_classes_num, ira_allocno_classes);
1152 setup_class_translate_array (ira_pressure_class_translate,
1153 ira_pressure_classes_num, ira_pressure_classes);
1156 /* Order numbers of allocno classes in original target allocno class
1157 array, -1 for non-allocno classes. */
1158 static int allocno_class_order[N_REG_CLASSES];
1160 /* The function used to sort the important classes. */
1161 static int
1162 comp_reg_classes_func (const void *v1p, const void *v2p)
1164 enum reg_class cl1 = *(const enum reg_class *) v1p;
1165 enum reg_class cl2 = *(const enum reg_class *) v2p;
1166 enum reg_class tcl1, tcl2;
1167 int diff;
1169 tcl1 = ira_allocno_class_translate[cl1];
1170 tcl2 = ira_allocno_class_translate[cl2];
1171 if (tcl1 != NO_REGS && tcl2 != NO_REGS
1172 && (diff = allocno_class_order[tcl1] - allocno_class_order[tcl2]) != 0)
1173 return diff;
1174 return (int) cl1 - (int) cl2;
1177 /* For correct work of function setup_reg_class_relations we need to
1178 reorder important classes according to the order of their allocno
1179 classes. It places important classes containing the same
1180 allocatable hard register set adjacent to each other, and the allocno
1181 class with that allocatable hard register set right after the other
1182 important classes with the same set.
1184 In the example from the comments of function
1185 setup_allocno_and_important_classes, it places LEGACY_REGS and
1186 GENERAL_REGS close to each other, with GENERAL_REGS after
1187 LEGACY_REGS.
1188 static void
1189 reorder_important_classes (void)
1191 int i;
1193 for (i = 0; i < N_REG_CLASSES; i++)
1194 allocno_class_order[i] = -1;
1195 for (i = 0; i < ira_allocno_classes_num; i++)
1196 allocno_class_order[ira_allocno_classes[i]] = i;
1197 qsort (ira_important_classes, ira_important_classes_num,
1198 sizeof (enum reg_class), comp_reg_classes_func);
1199 for (i = 0; i < ira_important_classes_num; i++)
1200 ira_important_class_nums[ira_important_classes[i]] = i;
1203 /* Set up IRA_REG_CLASS_SUBUNION, IRA_REG_CLASS_SUPERUNION,
1204 IRA_REG_CLASS_SUPER_CLASSES, IRA_REG_CLASSES_INTERSECT, and
1205 IRA_REG_CLASSES_INTERSECT_P. For the meaning of the relations,
1206 please see corresponding comments in ira-int.h. */
1207 static void
1208 setup_reg_class_relations (void)
1210 int i, cl1, cl2, cl3;
1211 HARD_REG_SET intersection_set, union_set, temp_set2;
1212 bool important_class_p[N_REG_CLASSES];
1214 memset (important_class_p, 0, sizeof (important_class_p));
1215 for (i = 0; i < ira_important_classes_num; i++)
1216 important_class_p[ira_important_classes[i]] = true;
1217 for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
1219 ira_reg_class_super_classes[cl1][0] = LIM_REG_CLASSES;
1220 for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
1222 ira_reg_classes_intersect_p[cl1][cl2] = false;
1223 ira_reg_class_intersect[cl1][cl2] = NO_REGS;
1224 ira_reg_class_subset[cl1][cl2] = NO_REGS;
1225 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl1]);
1226 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
1227 COPY_HARD_REG_SET (temp_set2, reg_class_contents[cl2]);
1228 AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
1229 if (hard_reg_set_empty_p (temp_hard_regset)
1230 && hard_reg_set_empty_p (temp_set2))
1232 /* Neither class has any allocatable hard registers
1233 -- take all class hard registers into account and use
1234 reg_class_subunion and reg_class_superunion. */
1235 for (i = 0;; i++)
1237 cl3 = reg_class_subclasses[cl1][i];
1238 if (cl3 == LIM_REG_CLASSES)
1239 break;
1240 if (reg_class_subset_p (ira_reg_class_intersect[cl1][cl2],
1241 (enum reg_class) cl3))
1242 ira_reg_class_intersect[cl1][cl2] = (enum reg_class) cl3;
1244 ira_reg_class_subunion[cl1][cl2] = reg_class_subunion[cl1][cl2];
1245 ira_reg_class_superunion[cl1][cl2] = reg_class_superunion[cl1][cl2];
1246 continue;
1248 ira_reg_classes_intersect_p[cl1][cl2]
1249 = hard_reg_set_intersect_p (temp_hard_regset, temp_set2);
1250 if (important_class_p[cl1] && important_class_p[cl2]
1251 && hard_reg_set_subset_p (temp_hard_regset, temp_set2))
1253 /* CL1 and CL2 are important classes and CL1 allocatable
1254 hard register set is inside of CL2 allocatable hard
1255 registers -- record CL2 as a super class of CL1. */
1256 enum reg_class *p;
1258 p = &ira_reg_class_super_classes[cl1][0];
1259 while (*p != LIM_REG_CLASSES)
1260 p++;
1261 *p++ = (enum reg_class) cl2;
1262 *p = LIM_REG_CLASSES;
1264 ira_reg_class_subunion[cl1][cl2] = NO_REGS;
1265 ira_reg_class_superunion[cl1][cl2] = NO_REGS;
1266 COPY_HARD_REG_SET (intersection_set, reg_class_contents[cl1]);
1267 AND_HARD_REG_SET (intersection_set, reg_class_contents[cl2]);
1268 AND_COMPL_HARD_REG_SET (intersection_set, no_unit_alloc_regs);
1269 COPY_HARD_REG_SET (union_set, reg_class_contents[cl1]);
1270 IOR_HARD_REG_SET (union_set, reg_class_contents[cl2]);
1271 AND_COMPL_HARD_REG_SET (union_set, no_unit_alloc_regs);
1272 for (cl3 = 0; cl3 < N_REG_CLASSES; cl3++)
1274 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl3]);
1275 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
1276 if (hard_reg_set_subset_p (temp_hard_regset, intersection_set))
1278 /* CL3 allocatable hard register set is inside of
1279 intersection of allocatable hard register sets
1280 of CL1 and CL2. */
1281 if (important_class_p[cl3])
1283 COPY_HARD_REG_SET
1284 (temp_set2,
1285 reg_class_contents
1286 [(int) ira_reg_class_intersect[cl1][cl2]]);
1287 AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
1288 if (! hard_reg_set_subset_p (temp_hard_regset, temp_set2)
1289 /* If the allocatable hard register sets are
1290 the same, prefer GENERAL_REGS or the
1291 smallest class for debugging
1292 purposes. */
1293 || (hard_reg_set_equal_p (temp_hard_regset, temp_set2)
1294 && (cl3 == GENERAL_REGS
1295 || ((ira_reg_class_intersect[cl1][cl2]
1296 != GENERAL_REGS)
1297 && hard_reg_set_subset_p
1298 (reg_class_contents[cl3],
1299 reg_class_contents
1300 [(int)
1301 ira_reg_class_intersect[cl1][cl2]])))))
1302 ira_reg_class_intersect[cl1][cl2] = (enum reg_class) cl3;
1304 COPY_HARD_REG_SET
1305 (temp_set2,
1306 reg_class_contents[(int) ira_reg_class_subset[cl1][cl2]]);
1307 AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
1308 if (! hard_reg_set_subset_p (temp_hard_regset, temp_set2)
1309 /* Ignore unavailable hard registers and prefer
1310 smallest class for debugging purposes. */
1311 || (hard_reg_set_equal_p (temp_hard_regset, temp_set2)
1312 && hard_reg_set_subset_p
1313 (reg_class_contents[cl3],
1314 reg_class_contents
1315 [(int) ira_reg_class_subset[cl1][cl2]])))
1316 ira_reg_class_subset[cl1][cl2] = (enum reg_class) cl3;
1318 if (important_class_p[cl3]
1319 && hard_reg_set_subset_p (temp_hard_regset, union_set))
1321 /* CL3 allocatable hard register set is inside of
1322 union of allocatable hard register sets of CL1
1323 and CL2. */
1324 COPY_HARD_REG_SET
1325 (temp_set2,
1326 reg_class_contents[(int) ira_reg_class_subunion[cl1][cl2]]);
1327 AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
1328 if (ira_reg_class_subunion[cl1][cl2] == NO_REGS
1329 || (hard_reg_set_subset_p (temp_set2, temp_hard_regset)
1331 && (! hard_reg_set_equal_p (temp_set2,
1332 temp_hard_regset)
1333 || cl3 == GENERAL_REGS
1334 /* If the allocatable hard register sets are the
1335 same, prefer GENERAL_REGS or the smallest
1336 class for debugging purposes. */
1337 || (ira_reg_class_subunion[cl1][cl2] != GENERAL_REGS
1338 && hard_reg_set_subset_p
1339 (reg_class_contents[cl3],
1340 reg_class_contents
1341 [(int) ira_reg_class_subunion[cl1][cl2]])))))
1342 ira_reg_class_subunion[cl1][cl2] = (enum reg_class) cl3;
1344 if (hard_reg_set_subset_p (union_set, temp_hard_regset))
1346 /* CL3 allocatable hard register set contains union
1347 of allocatable hard register sets of CL1 and
1348 CL2. */
1349 COPY_HARD_REG_SET
1350 (temp_set2,
1351 reg_class_contents[(int) ira_reg_class_superunion[cl1][cl2]]);
1352 AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
1353 if (ira_reg_class_superunion[cl1][cl2] == NO_REGS
1354 || (hard_reg_set_subset_p (temp_hard_regset, temp_set2)
1356 && (! hard_reg_set_equal_p (temp_set2,
1357 temp_hard_regset)
1358 || cl3 == GENERAL_REGS
1359 /* If the allocatable hard register sets are the
1360 same, prefer GENERAL_REGS or the smallest
1361 class for debugging purposes. */
1362 || (ira_reg_class_superunion[cl1][cl2] != GENERAL_REGS
1363 && hard_reg_set_subset_p
1364 (reg_class_contents[cl3],
1365 reg_class_contents
1366 [(int) ira_reg_class_superunion[cl1][cl2]])))))
1367 ira_reg_class_superunion[cl1][cl2] = (enum reg_class) cl3;
1374 /* Output all uniform and important classes into file F. */
1375 static void
1376 print_uniform_and_important_classes (FILE *f)
1378 int i, cl;
1380 fprintf (f, "Uniform classes:\n");
1381 for (cl = 0; cl < N_REG_CLASSES; cl++)
1382 if (ira_uniform_class_p[cl])
1383 fprintf (f, " %s", reg_class_names[cl]);
1384 fprintf (f, "\nImportant classes:\n");
1385 for (i = 0; i < ira_important_classes_num; i++)
1386 fprintf (f, " %s", reg_class_names[ira_important_classes[i]]);
1387 fprintf (f, "\n");
1390 /* Output all possible allocno or pressure classes and their
1391 translation map into file F. */
1392 static void
1393 print_translated_classes (FILE *f, bool pressure_p)
1395 int classes_num = (pressure_p
1396 ? ira_pressure_classes_num : ira_allocno_classes_num);
1397 enum reg_class *classes = (pressure_p
1398 ? ira_pressure_classes : ira_allocno_classes);
1399 enum reg_class *class_translate = (pressure_p
1400 ? ira_pressure_class_translate
1401 : ira_allocno_class_translate);
1402 int i;
1404 fprintf (f, "%s classes:\n", pressure_p ? "Pressure" : "Allocno");
1405 for (i = 0; i < classes_num; i++)
1406 fprintf (f, " %s", reg_class_names[classes[i]]);
1407 fprintf (f, "\nClass translation:\n");
1408 for (i = 0; i < N_REG_CLASSES; i++)
1409 fprintf (f, " %s -> %s\n", reg_class_names[i],
1410 reg_class_names[class_translate[i]]);
1413 /* Output all possible allocno and pressure classes and the
1414 translation maps into stderr. */
1415 void
1416 ira_debug_allocno_classes (void)
1418 print_uniform_and_important_classes (stderr);
1419 print_translated_classes (stderr, false);
1420 print_translated_classes (stderr, true);
1423 /* Set up different arrays concerning class subsets, allocno and
1424 important classes. */
1425 static void
1426 find_reg_classes (void)
1428 setup_allocno_and_important_classes ();
1429 setup_class_translate ();
1430 reorder_important_classes ();
1431 setup_reg_class_relations ();
1436 /* Set up the array above. */
1437 static void
1438 setup_hard_regno_aclass (void)
1440 int i;
1442 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
1444 #if 1
1445 ira_hard_regno_allocno_class[i]
1446 = (TEST_HARD_REG_BIT (no_unit_alloc_regs, i)
1447 ? NO_REGS
1448 : ira_allocno_class_translate[REGNO_REG_CLASS (i)]);
1449 #else
1450 int j;
1451 enum reg_class cl;
1452 ira_hard_regno_allocno_class[i] = NO_REGS;
1453 for (j = 0; j < ira_allocno_classes_num; j++)
1455 cl = ira_allocno_classes[j];
1456 if (ira_class_hard_reg_index[cl][i] >= 0)
1458 ira_hard_regno_allocno_class[i] = cl;
1459 break;
1462 #endif
1468 /* Form IRA_REG_CLASS_MAX_NREGS and IRA_REG_CLASS_MIN_NREGS maps. */
1469 static void
1470 setup_reg_class_nregs (void)
1472 int i, cl, cl2, m;
1474 for (m = 0; m < MAX_MACHINE_MODE; m++)
1476 for (cl = 0; cl < N_REG_CLASSES; cl++)
1477 ira_reg_class_max_nregs[cl][m]
1478 = ira_reg_class_min_nregs[cl][m]
1479 = targetm.class_max_nregs ((reg_class_t) cl, (machine_mode) m);
1480 for (cl = 0; cl < N_REG_CLASSES; cl++)
1481 for (i = 0;
1482 (cl2 = alloc_reg_class_subclasses[cl][i]) != LIM_REG_CLASSES;
1483 i++)
1484 if (ira_reg_class_min_nregs[cl2][m]
1485 < ira_reg_class_min_nregs[cl][m])
1486 ira_reg_class_min_nregs[cl][m] = ira_reg_class_min_nregs[cl2][m];
1492 /* Set up IRA_PROHIBITED_CLASS_MODE_REGS and IRA_CLASS_SINGLETON.
1493 This function is called once IRA_CLASS_HARD_REGS has been initialized. */
1494 static void
1495 setup_prohibited_class_mode_regs (void)
1497 int j, k, hard_regno, cl, last_hard_regno, count;
1499 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
1501 COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
1502 AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
1503 for (j = 0; j < NUM_MACHINE_MODES; j++)
1505 count = 0;
1506 last_hard_regno = -1;
1507 CLEAR_HARD_REG_SET (ira_prohibited_class_mode_regs[cl][j]);
1508 for (k = ira_class_hard_regs_num[cl] - 1; k >= 0; k--)
1510 hard_regno = ira_class_hard_regs[cl][k];
1511 if (! HARD_REGNO_MODE_OK (hard_regno, (machine_mode) j))
1512 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1513 hard_regno);
1514 else if (in_hard_reg_set_p (temp_hard_regset,
1515 (machine_mode) j, hard_regno))
1517 last_hard_regno = hard_regno;
1518 count++;
1521 ira_class_singleton[cl][j] = (count == 1 ? last_hard_regno : -1);
1526 /* Clarify IRA_PROHIBITED_CLASS_MODE_REGS by excluding hard registers
1527 spanning from one register pressure class to another one. It is
1528 called after defining the pressure classes. */
1529 static void
1530 clarify_prohibited_class_mode_regs (void)
1532 int j, k, hard_regno, cl, pclass, nregs;
1534 for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
1535 for (j = 0; j < NUM_MACHINE_MODES; j++)
1537 CLEAR_HARD_REG_SET (ira_useful_class_mode_regs[cl][j]);
1538 for (k = ira_class_hard_regs_num[cl] - 1; k >= 0; k--)
1540 hard_regno = ira_class_hard_regs[cl][k];
1541 if (TEST_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j], hard_regno))
1542 continue;
1543 nregs = hard_regno_nregs[hard_regno][j];
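/* A value of mode J placed in HARD_REGNO occupies NREGS consecutive hard
   registers; HARD_REGNO is prohibited below if that run leaves the
   register file or crosses into a different pressure class.  */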
1544 if (hard_regno + nregs > FIRST_PSEUDO_REGISTER)
1546 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1547 hard_regno);
1548 continue;
1550 pclass = ira_pressure_class_translate[REGNO_REG_CLASS (hard_regno)];
1551 for (nregs-- ;nregs >= 0; nregs--)
1552 if (((enum reg_class) pclass
1553 != ira_pressure_class_translate[REGNO_REG_CLASS
1554 (hard_regno + nregs)]))
1556 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1557 hard_regno);
1558 break;
1560 if (!TEST_HARD_REG_BIT (ira_prohibited_class_mode_regs[cl][j],
1561 hard_regno))
1562 add_to_hard_reg_set (&ira_useful_class_mode_regs[cl][j],
1563 (machine_mode) j, hard_regno);
1568 /* Allocate and initialize IRA_REGISTER_MOVE_COST, IRA_MAY_MOVE_IN_COST
1569 and IRA_MAY_MOVE_OUT_COST for MODE. */
1570 void
1571 ira_init_register_move_cost (machine_mode mode)
1573 static unsigned short last_move_cost[N_REG_CLASSES][N_REG_CLASSES];
1574 bool all_match = true;
1575 unsigned int cl1, cl2;
1577 ira_assert (ira_register_move_cost[mode] == NULL
1578 && ira_may_move_in_cost[mode] == NULL
1579 && ira_may_move_out_cost[mode] == NULL);
1580 ira_assert (have_regs_of_mode[mode]);
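/* First compute the raw cost for every pair of classes.  If the whole
   table turns out to be identical to the one computed for the previously
   initialized mode, that mode's tables are reused instead of allocating
   new ones.  */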
1581 for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
1582 for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
1584 int cost;
1585 if (!contains_reg_of_mode[cl1][mode]
1586 || !contains_reg_of_mode[cl2][mode])
1588 if ((ira_reg_class_max_nregs[cl1][mode]
1589 > ira_class_hard_regs_num[cl1])
1590 || (ira_reg_class_max_nregs[cl2][mode]
1591 > ira_class_hard_regs_num[cl2]))
1592 cost = 65535;
1593 else
1594 cost = (ira_memory_move_cost[mode][cl1][0]
1595 + ira_memory_move_cost[mode][cl2][1]) * 2;
1597 else
1599 cost = register_move_cost (mode, (enum reg_class) cl1,
1600 (enum reg_class) cl2);
1601 ira_assert (cost < 65535);
1603 all_match &= (last_move_cost[cl1][cl2] == cost);
1604 last_move_cost[cl1][cl2] = cost;
1606 if (all_match && last_mode_for_init_move_cost != -1)
1608 ira_register_move_cost[mode]
1609 = ira_register_move_cost[last_mode_for_init_move_cost];
1610 ira_may_move_in_cost[mode]
1611 = ira_may_move_in_cost[last_mode_for_init_move_cost];
1612 ira_may_move_out_cost[mode]
1613 = ira_may_move_out_cost[last_mode_for_init_move_cost];
1614 return;
1616 last_mode_for_init_move_cost = mode;
1617 ira_register_move_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
1618 ira_may_move_in_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
1619 ira_may_move_out_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
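/* Fill the freshly allocated tables.  The recorded cost for CL1 -> CL2 is
   raised to at least the cost of moving through any allocatable subclass,
   and moves within a subset relation are treated as free in the may-move
   tables.  */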
1620 for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
1621 for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
1623 int cost;
1624 enum reg_class *p1, *p2;
1626 if (last_move_cost[cl1][cl2] == 65535)
1628 ira_register_move_cost[mode][cl1][cl2] = 65535;
1629 ira_may_move_in_cost[mode][cl1][cl2] = 65535;
1630 ira_may_move_out_cost[mode][cl1][cl2] = 65535;
1632 else
1634 cost = last_move_cost[cl1][cl2];
1636 for (p2 = &reg_class_subclasses[cl2][0];
1637 *p2 != LIM_REG_CLASSES; p2++)
1638 if (ira_class_hard_regs_num[*p2] > 0
1639 && (ira_reg_class_max_nregs[*p2][mode]
1640 <= ira_class_hard_regs_num[*p2]))
1641 cost = MAX (cost, ira_register_move_cost[mode][cl1][*p2]);
1643 for (p1 = &reg_class_subclasses[cl1][0];
1644 *p1 != LIM_REG_CLASSES; p1++)
1645 if (ira_class_hard_regs_num[*p1] > 0
1646 && (ira_reg_class_max_nregs[*p1][mode]
1647 <= ira_class_hard_regs_num[*p1]))
1648 cost = MAX (cost, ira_register_move_cost[mode][*p1][cl2]);
1650 ira_assert (cost <= 65535);
1651 ira_register_move_cost[mode][cl1][cl2] = cost;
1653 if (ira_class_subset_p[cl1][cl2])
1654 ira_may_move_in_cost[mode][cl1][cl2] = 0;
1655 else
1656 ira_may_move_in_cost[mode][cl1][cl2] = cost;
1658 if (ira_class_subset_p[cl2][cl1])
1659 ira_may_move_out_cost[mode][cl1][cl2] = 0;
1660 else
1661 ira_may_move_out_cost[mode][cl1][cl2] = cost;
1668 /* This is called once during compiler work. It sets up
1669 different arrays whose values don't depend on the compiled
1670 function. */
1671 void
1672 ira_init_once (void)
1674 ira_init_costs_once ();
1675 lra_init_once ();
1677 ira_use_lra_p = targetm.lra_p ();
1680 /* Free ira_register_move_cost, ira_may_move_in_cost and
1681 ira_may_move_out_cost for each mode. */
1682 void
1683 target_ira_int::free_register_move_costs (void)
1685 int mode, i;
1687 /* Reset move_cost and friends, making sure we only free shared
1688 table entries once. */
1689 for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
1690 if (x_ira_register_move_cost[mode])
1692 for (i = 0;
1693 i < mode && (x_ira_register_move_cost[i]
1694 != x_ira_register_move_cost[mode]);
1695 i++)
1697 if (i == mode)
1699 free (x_ira_register_move_cost[mode]);
1700 free (x_ira_may_move_in_cost[mode]);
1701 free (x_ira_may_move_out_cost[mode]);
1704 memset (x_ira_register_move_cost, 0, sizeof x_ira_register_move_cost);
1705 memset (x_ira_may_move_in_cost, 0, sizeof x_ira_may_move_in_cost);
1706 memset (x_ira_may_move_out_cost, 0, sizeof x_ira_may_move_out_cost);
1707 last_mode_for_init_move_cost = -1;
1710 target_ira_int::~target_ira_int ()
1712 free_ira_costs ();
1713 free_register_move_costs ();
1716 /* This is called every time when register related information is
1717 changed. */
1718 void
1719 ira_init (void)
1721 this_target_ira_int->free_register_move_costs ();
1722 setup_reg_mode_hard_regset ();
1723 setup_alloc_regs (flag_omit_frame_pointer != 0);
1724 setup_class_subset_and_memory_move_costs ();
1725 setup_reg_class_nregs ();
1726 setup_prohibited_class_mode_regs ();
1727 find_reg_classes ();
1728 clarify_prohibited_class_mode_regs ();
1729 setup_hard_regno_aclass ();
1730 ira_init_costs ();
1734 #define ira_prohibited_mode_move_regs_initialized_p \
1735 (this_target_ira_int->x_ira_prohibited_mode_move_regs_initialized_p)
1737 /* Set up IRA_PROHIBITED_MODE_MOVE_REGS. */
1738 static void
1739 setup_prohibited_mode_move_regs (void)
1741 int i, j;
1742 rtx test_reg1, test_reg2, move_pat;
1743 rtx_insn *move_insn;
1745 if (ira_prohibited_mode_move_regs_initialized_p)
1746 return;
1747 ira_prohibited_mode_move_regs_initialized_p = true;
1748 test_reg1 = gen_rtx_REG (word_mode, LAST_VIRTUAL_REGISTER + 1);
1749 test_reg2 = gen_rtx_REG (word_mode, LAST_VIRTUAL_REGISTER + 2);
1750 move_pat = gen_rtx_SET (test_reg1, test_reg2);
1751 move_insn = gen_rtx_INSN (VOIDmode, 0, 0, 0, move_pat, 0, -1, 0);
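/* Probe every (mode, hard register) pair with a register-to-register
   move insn; a register stays prohibited for the mode if the move does
   not match an enabled alternative.  */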
1752 for (i = 0; i < NUM_MACHINE_MODES; i++)
1754 SET_HARD_REG_SET (ira_prohibited_mode_move_regs[i]);
1755 for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
1757 if (! HARD_REGNO_MODE_OK (j, (machine_mode) i))
1758 continue;
1759 set_mode_and_regno (test_reg1, (machine_mode) i, j);
1760 set_mode_and_regno (test_reg2, (machine_mode) i, j);
1761 INSN_CODE (move_insn) = -1;
1762 recog_memoized (move_insn);
1763 if (INSN_CODE (move_insn) < 0)
1764 continue;
1765 extract_insn (move_insn);
1766 /* We don't know whether the move will be in code that is optimized
1767 for size or speed, so consider all enabled alternatives. */
1768 if (! constrain_operands (1, get_enabled_alternatives (move_insn)))
1769 continue;
1770 CLEAR_HARD_REG_BIT (ira_prohibited_mode_move_regs[i], j);
1777 /* Setup possible alternatives in ALTS for INSN. */
1778 void
1779 ira_setup_alts (rtx_insn *insn, HARD_REG_SET &alts)
1781 /* MAP nalt * nop -> start of constraints for given operand and
1782 alternative. */
1783 static vec<const char *> insn_constraints;
1784 int nop, nalt;
1785 bool curr_swapped;
1786 const char *p;
1787 int commutative = -1;
1789 extract_insn (insn);
1790 alternative_mask preferred = get_preferred_alternatives (insn);
1791 CLEAR_HARD_REG_SET (alts);
1792 insn_constraints.release ();
1793 insn_constraints.safe_grow_cleared (recog_data.n_operands
1794 * recog_data.n_alternatives + 1);
1795 /* Check that the hard reg set is big enough to hold all the
1796 alternatives. It is hard to imagine a situation in which the
1797 assertion would fail. */
1798 ira_assert (recog_data.n_alternatives
1799 <= (int) MAX (sizeof (HARD_REG_ELT_TYPE) * CHAR_BIT,
1800 FIRST_PSEUDO_REGISTER));
1801 for (curr_swapped = false;; curr_swapped = true)
1803 /* Calculate some data common for all alternatives to speed up the
1804 function. */
1805 for (nop = 0; nop < recog_data.n_operands; nop++)
1807 for (nalt = 0, p = recog_data.constraints[nop];
1808 nalt < recog_data.n_alternatives;
1809 nalt++)
1811 insn_constraints[nop * recog_data.n_alternatives + nalt] = p;
1812 while (*p && *p != ',')
1814 /* We only support one commutative marker, the first
1815 one. We already set commutative above. */
1816 if (*p == '%' && commutative < 0)
1817 commutative = nop;
1818 p++;
1820 if (*p)
1821 p++;
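/* Now test each preferred alternative: it is added to ALTS only when
   every operand has at least one constraint it can satisfy in that
   alternative.  */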
1824 for (nalt = 0; nalt < recog_data.n_alternatives; nalt++)
1826 if (!TEST_BIT (preferred, nalt)
1827 || TEST_HARD_REG_BIT (alts, nalt))
1828 continue;
1830 for (nop = 0; nop < recog_data.n_operands; nop++)
1832 int c, len;
1834 rtx op = recog_data.operand[nop];
1835 p = insn_constraints[nop * recog_data.n_alternatives + nalt];
1836 if (*p == 0 || *p == ',')
1837 continue;
1840 switch (c = *p, len = CONSTRAINT_LEN (c, p), c)
1842 case '#':
1843 case ',':
1844 c = '\0';
1845 /* FALLTHRU */
1846 case '\0':
1847 len = 0;
1848 break;
1850 case '%':
1851 /* The commutative modifier is handled above. */
1852 break;
1854 case '0': case '1': case '2': case '3': case '4':
1855 case '5': case '6': case '7': case '8': case '9':
1856 goto op_success;
1857 break;
1859 case 'g':
1860 goto op_success;
1861 break;
1863 default:
1865 enum constraint_num cn = lookup_constraint (p);
1866 switch (get_constraint_type (cn))
1868 case CT_REGISTER:
1869 if (reg_class_for_constraint (cn) != NO_REGS)
1870 goto op_success;
1871 break;
1873 case CT_CONST_INT:
1874 if (CONST_INT_P (op)
1875 && (insn_const_int_ok_for_constraint
1876 (INTVAL (op), cn)))
1877 goto op_success;
1878 break;
1880 case CT_ADDRESS:
1881 case CT_MEMORY:
1882 case CT_SPECIAL_MEMORY:
1883 goto op_success;
1885 case CT_FIXED_FORM:
1886 if (constraint_satisfied_p (op, cn))
1887 goto op_success;
1888 break;
1890 break;
1893 while (p += len, c);
1894 break;
1895 op_success:
1898 if (nop >= recog_data.n_operands)
1899 SET_HARD_REG_BIT (alts, nalt);
1901 if (commutative < 0)
1902 break;
1903 /* Swap the operands; after the second pass they are swapped back, so recog_data is left unchanged. */
1904 std::swap (recog_data.operand[commutative],
1905 recog_data.operand[commutative + 1]);
1906 if (curr_swapped)
1907 break;
1911 /* Return the number of the output non-early-clobber operand which
1912 must in any case be the same as the operand with number OP_NUM (or
1913 a negative value if there is no such operand). Only alternatives
1914 that are really possible are taken into consideration. */
1915 int
1916 ira_get_dup_out_num (int op_num, HARD_REG_SET &alts)
1918 int curr_alt, c, original, dup;
1919 bool ignore_p, use_commut_op_p;
1920 const char *str;
1922 if (op_num < 0 || recog_data.n_alternatives == 0)
1923 return -1;
1924 /* We should find duplications only for input operands. */
1925 if (recog_data.operand_type[op_num] != OP_IN)
1926 return -1;
1927 str = recog_data.constraints[op_num];
1928 use_commut_op_p = false;
1929 for (;;)
1931 rtx op = recog_data.operand[op_num];
1933 for (curr_alt = 0, ignore_p = !TEST_HARD_REG_BIT (alts, curr_alt),
1934 original = -1;;)
1936 c = *str;
1937 if (c == '\0')
1938 break;
1939 if (c == '#')
1940 ignore_p = true;
1941 else if (c == ',')
1943 curr_alt++;
1944 ignore_p = !TEST_HARD_REG_BIT (alts, curr_alt);
1946 else if (! ignore_p)
1947 switch (c)
1949 case 'g':
1950 goto fail;
1951 default:
1953 enum constraint_num cn = lookup_constraint (str);
1954 enum reg_class cl = reg_class_for_constraint (cn);
1955 if (cl != NO_REGS
1956 && !targetm.class_likely_spilled_p (cl))
1957 goto fail;
1958 if (constraint_satisfied_p (op, cn))
1959 goto fail;
1960 break;
1963 case '0': case '1': case '2': case '3': case '4':
1964 case '5': case '6': case '7': case '8': case '9':
1965 if (original != -1 && original != c)
1966 goto fail;
1967 original = c;
1968 break;
1970 str += CONSTRAINT_LEN (c, str);
1972 if (original == -1)
1973 goto fail;
1974 dup = -1;
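/* A matching-operand digit was found; now look at the constraints of the
   matched operand itself.  The match is used only when that operand is
   written with '=' and is not marked as an early clobber ('&').  */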
1975 for (ignore_p = false, str = recog_data.constraints[original - '0'];
1976 *str != 0;
1977 str++)
1978 if (ignore_p)
1980 if (*str == ',')
1981 ignore_p = false;
1983 else if (*str == '#')
1984 ignore_p = true;
1985 else if (! ignore_p)
1987 if (*str == '=')
1988 dup = original - '0';
1989 /* It is better to ignore an alternative with an early clobber. */
1990 else if (*str == '&')
1991 goto fail;
1993 if (dup >= 0)
1994 return dup;
1995 fail:
1996 if (use_commut_op_p)
1997 break;
1998 use_commut_op_p = true;
1999 if (recog_data.constraints[op_num][0] == '%')
2000 str = recog_data.constraints[op_num + 1];
2001 else if (op_num > 0 && recog_data.constraints[op_num - 1][0] == '%')
2002 str = recog_data.constraints[op_num - 1];
2003 else
2004 break;
2006 return -1;
2011 /* Search forward to see if the source register of a copy insn dies
2012 before either it or the destination register is modified, but don't
2013 scan past the end of the basic block. If so, we can replace the
2014 source with the destination and let the source die in the copy
2015 insn.
2017 This reduces the number of registers live in that range and may
2018 allow the destination and the source to be coalesced, often saving
2019 one register in addition to a register-register copy. */
2021 static void
2022 decrease_live_ranges_number (void)
2024 basic_block bb;
2025 rtx_insn *insn;
2026 rtx set, src, dest, dest_death, note;
2027 rtx_insn *p, *q;
2028 int sregno, dregno;
2030 if (! flag_expensive_optimizations)
2031 return;
2033 if (ira_dump_file)
2034 fprintf (ira_dump_file, "Starting decreasing number of live ranges...\n");
2036 FOR_EACH_BB_FN (bb, cfun)
2037 FOR_BB_INSNS (bb, insn)
2039 set = single_set (insn);
2040 if (! set)
2041 continue;
2042 src = SET_SRC (set);
2043 dest = SET_DEST (set);
2044 if (! REG_P (src) || ! REG_P (dest)
2045 || find_reg_note (insn, REG_DEAD, src))
2046 continue;
2047 sregno = REGNO (src);
2048 dregno = REGNO (dest);
2050 /* We don't want to mess with hard regs if register classes
2051 are small. */
2052 if (sregno == dregno
2053 || (targetm.small_register_classes_for_mode_p (GET_MODE (src))
2054 && (sregno < FIRST_PSEUDO_REGISTER
2055 || dregno < FIRST_PSEUDO_REGISTER))
2056 /* We don't see all updates to SP if they are in an
2057 auto-inc memory reference, so we must disallow this
2058 optimization on them. */
2059 || sregno == STACK_POINTER_REGNUM
2060 || dregno == STACK_POINTER_REGNUM)
2061 continue;
2063 dest_death = NULL_RTX;
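/* Scan forward from INSN within this basic block, looking for the insn
   in which SRC dies; give up if SRC or DEST is modified first, or if
   SRC is used in a way that cannot safely be replaced.  */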
2065 for (p = NEXT_INSN (insn); p; p = NEXT_INSN (p))
2067 if (! INSN_P (p))
2068 continue;
2069 if (BLOCK_FOR_INSN (p) != bb)
2070 break;
2072 if (reg_set_p (src, p) || reg_set_p (dest, p)
2073 /* If SRC is an asm-declared register, it must not be
2074 replaced in any asm. Unfortunately, the REG_EXPR
2075 tree for the asm variable may be absent in the SRC
2076 rtx, so we can't check the actual register
2077 declaration easily (the asm operand will have it,
2078 though). To avoid complicating the test for a rare
2079 case, we just don't perform register replacement
2080 for a hard reg mentioned in an asm. */
2081 || (sregno < FIRST_PSEUDO_REGISTER
2082 && asm_noperands (PATTERN (p)) >= 0
2083 && reg_overlap_mentioned_p (src, PATTERN (p)))
2084 /* Don't change hard registers used by a call. */
2085 || (CALL_P (p) && sregno < FIRST_PSEUDO_REGISTER
2086 && find_reg_fusage (p, USE, src))
2087 /* Don't change a USE of a register. */
2088 || (GET_CODE (PATTERN (p)) == USE
2089 && reg_overlap_mentioned_p (src, XEXP (PATTERN (p), 0))))
2090 break;
2092 /* See if all of SRC dies in P. This test is slightly
2093 more conservative than it needs to be. */
2094 if ((note = find_regno_note (p, REG_DEAD, sregno))
2095 && GET_MODE (XEXP (note, 0)) == GET_MODE (src))
2097 int failed = 0;
2099 /* We can do the optimization. Scan forward from INSN
2100 again, replacing regs as we go. Set FAILED if a
2101 replacement can't be done. In that case, we can't
2102 move the death note for SRC. This should be
2103 rare. */
2105 /* Scan up to, but not including, the first real insn after P. */
2106 for (q = next_real_insn (insn);
2107 q != next_real_insn (p);
2108 q = next_real_insn (q))
2110 if (reg_overlap_mentioned_p (src, PATTERN (q)))
2112 /* If SRC is a hard register, we might miss
2113 some overlapping registers with
2114 validate_replace_rtx, so we would have to
2115 undo it. We can't if DEST is present in
2116 the insn, so fail in that combination of
2117 cases. */
2118 if (sregno < FIRST_PSEUDO_REGISTER
2119 && reg_mentioned_p (dest, PATTERN (q)))
2120 failed = 1;
2122 /* Attempt to replace all uses. */
2123 else if (!validate_replace_rtx (src, dest, q))
2124 failed = 1;
2126 /* If this succeeded, but some part of the
2127 register is still present, undo the
2128 replacement. */
2129 else if (sregno < FIRST_PSEUDO_REGISTER
2130 && reg_overlap_mentioned_p (src, PATTERN (q)))
2132 validate_replace_rtx (dest, src, q);
2133 failed = 1;
2137 /* If DEST dies here, remove the death note and
2138 save it for later. Make sure ALL of DEST dies
2139 here; again, this is overly conservative. */
2140 if (! dest_death
2141 && (dest_death = find_regno_note (q, REG_DEAD, dregno)))
2143 if (GET_MODE (XEXP (dest_death, 0)) == GET_MODE (dest))
2144 remove_note (q, dest_death);
2145 else
2147 failed = 1;
2148 dest_death = 0;
2153 if (! failed)
2155 /* Move death note of SRC from P to INSN. */
2156 remove_note (p, note);
2157 XEXP (note, 1) = REG_NOTES (insn);
2158 REG_NOTES (insn) = note;
2161 /* DEST is also dead if INSN has a REG_UNUSED note for
2162 DEST. */
2163 if (! dest_death
2164 && (dest_death
2165 = find_regno_note (insn, REG_UNUSED, dregno)))
2167 PUT_REG_NOTE_KIND (dest_death, REG_DEAD);
2168 remove_note (insn, dest_death);
2171 /* Put death note of DEST on P if we saw it die. */
2172 if (dest_death)
2174 XEXP (dest_death, 1) = REG_NOTES (p);
2175 REG_NOTES (p) = dest_death;
2177 break;
2180 /* If SRC is a hard register which is set or killed in
2181 some other way, we can't do this optimization. */
2182 else if (sregno < FIRST_PSEUDO_REGISTER && dead_or_set_p (p, src))
2183 break;
2190 /* Return nonzero if REGNO is a particularly bad choice for reloading X. */
2191 static bool
2192 ira_bad_reload_regno_1 (int regno, rtx x)
2194 int x_regno, n, i;
2195 ira_allocno_t a;
2196 enum reg_class pref;
2198 /* We only deal with pseudo regs. */
2199 if (! x || GET_CODE (x) != REG)
2200 return false;
2202 x_regno = REGNO (x);
2203 if (x_regno < FIRST_PSEUDO_REGISTER)
2204 return false;
2206 /* If the pseudo prefers REGNO explicitly, then do not consider
2207 REGNO a bad spill choice. */
2208 pref = reg_preferred_class (x_regno);
2209 if (reg_class_size[pref] == 1)
2210 return !TEST_HARD_REG_BIT (reg_class_contents[pref], regno);
2212 /* If the pseudo conflicts with REGNO, then we consider REGNO a
2213 poor choice for a reload regno. */
2214 a = ira_regno_allocno_map[x_regno];
2215 n = ALLOCNO_NUM_OBJECTS (a);
2216 for (i = 0; i < n; i++)
2218 ira_object_t obj = ALLOCNO_OBJECT (a, i);
2219 if (TEST_HARD_REG_BIT (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), regno))
2220 return true;
2222 return false;
2225 /* Return nonzero if REGNO is a particularly bad choice for reloading
2226 IN or OUT. */
2227 bool
2228 ira_bad_reload_regno (int regno, rtx in, rtx out)
2230 return (ira_bad_reload_regno_1 (regno, in)
2231 || ira_bad_reload_regno_1 (regno, out));
2234 /* Add register clobbers from asm statements. */
2235 static void
2236 compute_regs_asm_clobbered (void)
2238 basic_block bb;
2240 FOR_EACH_BB_FN (bb, cfun)
2242 rtx_insn *insn;
2243 FOR_BB_INSNS_REVERSE (bb, insn)
2245 df_ref def;
2247 if (NONDEBUG_INSN_P (insn) && asm_noperands (PATTERN (insn)) >= 0)
2248 FOR_EACH_INSN_DEF (def, insn)
2250 unsigned int dregno = DF_REF_REGNO (def);
2251 if (HARD_REGISTER_NUM_P (dregno))
2252 add_to_hard_reg_set (&crtl->asm_clobbers,
2253 GET_MODE (DF_REF_REAL_REG (def)),
2254 dregno);
2261 /* Set up ELIMINABLE_REGSET, IRA_NO_ALLOC_REGS, and
2262 REGS_EVER_LIVE. */
2263 void
2264 ira_setup_eliminable_regset (void)
2266 int i;
2267 static const struct {const int from, to; } eliminables[] = ELIMINABLE_REGS;
2269 /* Set up is_leaf, as frame_pointer_required may use it. This function
2270 is called by sched_init before ira if scheduling is enabled. */
2271 crtl->is_leaf = leaf_function_p ();
2273 /* FIXME: If EXIT_IGNORE_STACK is set, we will not save and restore
2274 sp for alloca. So we can't eliminate the frame pointer in that
2275 case. At some point, we should improve this by emitting the
2276 sp-adjusting insns for this case. */
2277 frame_pointer_needed
2278 = (! flag_omit_frame_pointer
2279 || (cfun->calls_alloca && EXIT_IGNORE_STACK)
2280 /* We need the frame pointer to catch stack overflow exceptions if
2281 the stack pointer is moving (as for the alloca case just above). */
2282 || (STACK_CHECK_MOVING_SP
2283 && flag_stack_check
2284 && flag_exceptions
2285 && cfun->can_throw_non_call_exceptions)
2286 || crtl->accesses_prior_frames
2287 || (SUPPORTS_STACK_ALIGNMENT && crtl->stack_realign_needed)
2288 /* We need a frame pointer for all Cilk Plus functions that use
2289 Cilk keywords. */
2290 || (flag_cilkplus && cfun->is_cilk_function)
2291 || targetm.frame_pointer_required ());
2293 /* The chance that FRAME_POINTER_NEEDED is changed from inspecting
2294 RTL is very small. So if we use frame pointer for RA and RTL
2295 actually prevents this, we will spill pseudos assigned to the
2296 frame pointer in LRA. */
2298 if (frame_pointer_needed)
2299 df_set_regs_ever_live (HARD_FRAME_POINTER_REGNUM, true);
2301 COPY_HARD_REG_SET (ira_no_alloc_regs, no_unit_alloc_regs);
2302 CLEAR_HARD_REG_SET (eliminable_regset);
2304 compute_regs_asm_clobbered ();
2306 /* Build the regset of all eliminable registers and show we can't
2307 use those that we already know won't be eliminated. */
2308 for (i = 0; i < (int) ARRAY_SIZE (eliminables); i++)
2310 bool cannot_elim
2311 = (! targetm.can_eliminate (eliminables[i].from, eliminables[i].to)
2312 || (eliminables[i].to == STACK_POINTER_REGNUM && frame_pointer_needed));
2314 if (!TEST_HARD_REG_BIT (crtl->asm_clobbers, eliminables[i].from))
2316 SET_HARD_REG_BIT (eliminable_regset, eliminables[i].from);
2318 if (cannot_elim)
2319 SET_HARD_REG_BIT (ira_no_alloc_regs, eliminables[i].from);
2321 else if (cannot_elim)
2322 error ("%s cannot be used in asm here",
2323 reg_names[eliminables[i].from]);
2324 else
2325 df_set_regs_ever_live (eliminables[i].from, true);
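/* When the hard frame pointer is distinct from the frame pointer, it is
   handled in the same way as the entries of ELIMINABLES above.  */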
2327 if (!HARD_FRAME_POINTER_IS_FRAME_POINTER)
2329 if (!TEST_HARD_REG_BIT (crtl->asm_clobbers, HARD_FRAME_POINTER_REGNUM))
2331 SET_HARD_REG_BIT (eliminable_regset, HARD_FRAME_POINTER_REGNUM);
2332 if (frame_pointer_needed)
2333 SET_HARD_REG_BIT (ira_no_alloc_regs, HARD_FRAME_POINTER_REGNUM);
2335 else if (frame_pointer_needed)
2336 error ("%s cannot be used in asm here",
2337 reg_names[HARD_FRAME_POINTER_REGNUM]);
2338 else
2339 df_set_regs_ever_live (HARD_FRAME_POINTER_REGNUM, true);
2345 /* Vector of substitutions of register numbers,
2346 used to map pseudo regs into hardware regs.
2347 This is set up as a result of register allocation.
2348 Element N is the hard reg assigned to pseudo reg N,
2349 or is -1 if no hard reg was assigned.
2350 If N is a hard reg number, element N is N. */
2351 short *reg_renumber;
2353 /* Set up REG_RENUMBER and CALLER_SAVE_NEEDED (used by reload) from
2354 the allocation found by IRA. */
2355 static void
2356 setup_reg_renumber (void)
2358 int regno, hard_regno;
2359 ira_allocno_t a;
2360 ira_allocno_iterator ai;
2362 caller_save_needed = 0;
2363 FOR_EACH_ALLOCNO (a, ai)
2365 if (ira_use_lra_p && ALLOCNO_CAP_MEMBER (a) != NULL)
2366 continue;
2367 /* There are no caps at this point. */
2368 ira_assert (ALLOCNO_CAP_MEMBER (a) == NULL);
2369 if (! ALLOCNO_ASSIGNED_P (a))
2370 /* It can happen if A is not referenced but partially anticipated
2371 somewhere in a region. */
2372 ALLOCNO_ASSIGNED_P (a) = true;
2373 ira_free_allocno_updated_costs (a);
2374 hard_regno = ALLOCNO_HARD_REGNO (a);
2375 regno = ALLOCNO_REGNO (a);
2376 reg_renumber[regno] = (hard_regno < 0 ? -1 : hard_regno);
2377 if (hard_regno >= 0)
2379 int i, nwords;
2380 enum reg_class pclass;
2381 ira_object_t obj;
2383 pclass = ira_pressure_class_translate[REGNO_REG_CLASS (hard_regno)];
2384 nwords = ALLOCNO_NUM_OBJECTS (a);
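/* Record a conflict with every hard register outside the allocno's
   pressure class for each of its objects.  */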
2385 for (i = 0; i < nwords; i++)
2387 obj = ALLOCNO_OBJECT (a, i);
2388 IOR_COMPL_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
2389 reg_class_contents[pclass]);
2391 if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0
2392 && ira_hard_reg_set_intersection_p (hard_regno, ALLOCNO_MODE (a),
2393 call_used_reg_set))
2395 ira_assert (!optimize || flag_caller_saves
2396 || (ALLOCNO_CALLS_CROSSED_NUM (a)
2397 == ALLOCNO_CHEAP_CALLS_CROSSED_NUM (a))
2398 || regno >= ira_reg_equiv_len
2399 || ira_equiv_no_lvalue_p (regno));
2400 caller_save_needed = 1;
2406 /* Set up allocno assignment flags for further allocation
2407 improvements. */
2408 static void
2409 setup_allocno_assignment_flags (void)
2411 int hard_regno;
2412 ira_allocno_t a;
2413 ira_allocno_iterator ai;
2415 FOR_EACH_ALLOCNO (a, ai)
2417 if (! ALLOCNO_ASSIGNED_P (a))
2418 /* It can happen if A is not referenced but partially anticipated
2419 somewhere in a region. */
2420 ira_free_allocno_updated_costs (a);
2421 hard_regno = ALLOCNO_HARD_REGNO (a);
2422 /* Don't assign hard registers to allocnos which are the destinations
2423 of stores removed at the end of a loop. It makes no sense to keep
2424 the same value in different hard registers. It is also
2425 impossible to assign hard registers correctly to such
2426 allocnos because the cost info and the info about intersected
2427 calls are incorrect for them. */
2428 ALLOCNO_ASSIGNED_P (a) = (hard_regno >= 0
2429 || ALLOCNO_EMIT_DATA (a)->mem_optimized_dest_p
2430 || (ALLOCNO_MEMORY_COST (a)
2431 - ALLOCNO_CLASS_COST (a)) < 0);
2432 ira_assert
2433 (hard_regno < 0
2434 || ira_hard_reg_in_set_p (hard_regno, ALLOCNO_MODE (a),
2435 reg_class_contents[ALLOCNO_CLASS (a)]));
2439 /* Evaluate overall allocation cost and the costs for using hard
2440 registers and memory for allocnos. */
2441 static void
2442 calculate_allocation_cost (void)
2444 int hard_regno, cost;
2445 ira_allocno_t a;
2446 ira_allocno_iterator ai;
2448 ira_overall_cost = ira_reg_cost = ira_mem_cost = 0;
2449 FOR_EACH_ALLOCNO (a, ai)
2451 hard_regno = ALLOCNO_HARD_REGNO (a);
2452 ira_assert (hard_regno < 0
2453 || (ira_hard_reg_in_set_p
2454 (hard_regno, ALLOCNO_MODE (a),
2455 reg_class_contents[ALLOCNO_CLASS (a)])));
2456 if (hard_regno < 0)
2458 cost = ALLOCNO_MEMORY_COST (a);
2459 ira_mem_cost += cost;
2461 else if (ALLOCNO_HARD_REG_COSTS (a) != NULL)
2463 cost = (ALLOCNO_HARD_REG_COSTS (a)
2464 [ira_class_hard_reg_index
2465 [ALLOCNO_CLASS (a)][hard_regno]]);
2466 ira_reg_cost += cost;
2468 else
2470 cost = ALLOCNO_CLASS_COST (a);
2471 ira_reg_cost += cost;
2473 ira_overall_cost += cost;
2476 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
2478 fprintf (ira_dump_file,
2479 "+++Costs: overall %" PRId64
2480 ", reg %" PRId64
2481 ", mem %" PRId64
2482 ", ld %" PRId64
2483 ", st %" PRId64
2484 ", move %" PRId64,
2485 ira_overall_cost, ira_reg_cost, ira_mem_cost,
2486 ira_load_cost, ira_store_cost, ira_shuffle_cost);
2487 fprintf (ira_dump_file, "\n+++ move loops %d, new jumps %d\n",
2488 ira_move_loops_num, ira_additional_jumps_num);
2493 #ifdef ENABLE_IRA_CHECKING
2494 /* Check the correctness of the allocation. We do need this because
2495 of the complicated code that transforms the more-than-one-region
2496 internal representation into the one-region representation. */
2497 static void
2498 check_allocation (void)
2500 ira_allocno_t a;
2501 int hard_regno, nregs, conflict_nregs;
2502 ira_allocno_iterator ai;
2504 FOR_EACH_ALLOCNO (a, ai)
2506 int n = ALLOCNO_NUM_OBJECTS (a);
2507 int i;
2509 if (ALLOCNO_CAP_MEMBER (a) != NULL
2510 || (hard_regno = ALLOCNO_HARD_REGNO (a)) < 0)
2511 continue;
2512 nregs = hard_regno_nregs[hard_regno][ALLOCNO_MODE (a)];
2513 if (nregs == 1)
2514 /* We allocated a single hard register. */
2515 n = 1;
2516 else if (n > 1)
2517 /* We allocated multiple hard registers, and we will test
2518 conflicts in a granularity of single hard regs. */
2519 nregs = 1;
2521 for (i = 0; i < n; i++)
2523 ira_object_t obj = ALLOCNO_OBJECT (a, i);
2524 ira_object_t conflict_obj;
2525 ira_object_conflict_iterator oci;
2526 int this_regno = hard_regno;
2527 if (n > 1)
2529 if (REG_WORDS_BIG_ENDIAN)
2530 this_regno += n - i - 1;
2531 else
2532 this_regno += i;
2534 FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
2536 ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
2537 int conflict_hard_regno = ALLOCNO_HARD_REGNO (conflict_a);
2538 if (conflict_hard_regno < 0)
2539 continue;
2541 conflict_nregs
2542 = (hard_regno_nregs
2543 [conflict_hard_regno][ALLOCNO_MODE (conflict_a)]);
2545 if (ALLOCNO_NUM_OBJECTS (conflict_a) > 1
2546 && conflict_nregs == ALLOCNO_NUM_OBJECTS (conflict_a))
2548 if (REG_WORDS_BIG_ENDIAN)
2549 conflict_hard_regno += (ALLOCNO_NUM_OBJECTS (conflict_a)
2550 - OBJECT_SUBWORD (conflict_obj) - 1);
2551 else
2552 conflict_hard_regno += OBJECT_SUBWORD (conflict_obj);
2553 conflict_nregs = 1;
2556 if ((conflict_hard_regno <= this_regno
2557 && this_regno < conflict_hard_regno + conflict_nregs)
2558 || (this_regno <= conflict_hard_regno
2559 && conflict_hard_regno < this_regno + nregs))
2561 fprintf (stderr, "bad allocation for %d and %d\n",
2562 ALLOCNO_REGNO (a), ALLOCNO_REGNO (conflict_a));
2563 gcc_unreachable ();
2569 #endif
2571 /* Allocate REG_EQUIV_INIT. Set it up from IRA_REG_EQUIV, which should
2572 already be calculated. */
2573 static void
2574 setup_reg_equiv_init (void)
2576 int i;
2577 int max_regno = max_reg_num ();
2579 for (i = 0; i < max_regno; i++)
2580 reg_equiv_init (i) = ira_reg_equiv[i].init_insns;
2583 /* Update the equivalence info for TO_REGNO after movement from FROM_REGNO. INSNS
2584 are the insns which were generated for that movement. It is assumed
2585 that FROM_REGNO and TO_REGNO always have the same value at the
2586 point of any move containing such registers. This function is used
2587 to update equiv info for register shuffles on the region borders
2588 and for caller save/restore insns. */
2589 void
2590 ira_update_equiv_info_by_shuffle_insn (int to_regno, int from_regno, rtx_insn *insns)
2592 rtx_insn *insn;
2593 rtx x, note;
2595 if (! ira_reg_equiv[from_regno].defined_p
2596 && (! ira_reg_equiv[to_regno].defined_p
2597 || ((x = ira_reg_equiv[to_regno].memory) != NULL_RTX
2598 && ! MEM_READONLY_P (x))))
2599 return;
2600 insn = insns;
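/* If more than one insn was generated for the shuffle, no simple
   equivalence can be recorded, so any existing equivalence of TO_REGNO
   is invalidated instead.  */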
2601 if (NEXT_INSN (insn) != NULL_RTX)
2603 if (! ira_reg_equiv[to_regno].defined_p)
2605 ira_assert (ira_reg_equiv[to_regno].init_insns == NULL_RTX);
2606 return;
2608 ira_reg_equiv[to_regno].defined_p = false;
2609 ira_reg_equiv[to_regno].memory
2610 = ira_reg_equiv[to_regno].constant
2611 = ira_reg_equiv[to_regno].invariant
2612 = ira_reg_equiv[to_regno].init_insns = NULL;
2613 if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
2614 fprintf (ira_dump_file,
2615 " Invalidating equiv info for reg %d\n", to_regno);
2616 return;
2618 /* It is possible that FROM_REGNO still has no equivalence because,
2619 in the shuffles to_regno<-from_regno and from_regno<-to_regno, the
2620 second insn has not been processed yet. */
2621 if (ira_reg_equiv[from_regno].defined_p)
2623 ira_reg_equiv[to_regno].defined_p = true;
2624 if ((x = ira_reg_equiv[from_regno].memory) != NULL_RTX)
2626 ira_assert (ira_reg_equiv[from_regno].invariant == NULL_RTX
2627 && ira_reg_equiv[from_regno].constant == NULL_RTX);
2628 ira_assert (ira_reg_equiv[to_regno].memory == NULL_RTX
2629 || rtx_equal_p (ira_reg_equiv[to_regno].memory, x));
2630 ira_reg_equiv[to_regno].memory = x;
2631 if (! MEM_READONLY_P (x))
2632 /* We don't add the insn to the init insn list because a memory
2633 equivalence only indicates which memory it is better to use
2634 when the pseudo is spilled. */
2635 return;
2637 else if ((x = ira_reg_equiv[from_regno].constant) != NULL_RTX)
2639 ira_assert (ira_reg_equiv[from_regno].invariant == NULL_RTX);
2640 ira_assert (ira_reg_equiv[to_regno].constant == NULL_RTX
2641 || rtx_equal_p (ira_reg_equiv[to_regno].constant, x));
2642 ira_reg_equiv[to_regno].constant = x;
2644 else
2646 x = ira_reg_equiv[from_regno].invariant;
2647 ira_assert (x != NULL_RTX);
2648 ira_assert (ira_reg_equiv[to_regno].invariant == NULL_RTX
2649 || rtx_equal_p (ira_reg_equiv[to_regno].invariant, x));
2650 ira_reg_equiv[to_regno].invariant = x;
2652 if (find_reg_note (insn, REG_EQUIV, x) == NULL_RTX)
2654 note = set_unique_reg_note (insn, REG_EQUIV, copy_rtx (x));
2655 gcc_assert (note != NULL_RTX);
2656 if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
2658 fprintf (ira_dump_file,
2659 " Adding equiv note to insn %u for reg %d ",
2660 INSN_UID (insn), to_regno);
2661 dump_value_slim (ira_dump_file, x, 1);
2662 fprintf (ira_dump_file, "\n");
2666 ira_reg_equiv[to_regno].init_insns
2667 = gen_rtx_INSN_LIST (VOIDmode, insn,
2668 ira_reg_equiv[to_regno].init_insns);
2669 if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
2670 fprintf (ira_dump_file,
2671 " Adding equiv init move insn %u to reg %d\n",
2672 INSN_UID (insn), to_regno);
2675 /* Fix values of array REG_EQUIV_INIT after live range splitting done
2676 by IRA. */
2677 static void
2678 fix_reg_equiv_init (void)
2680 int max_regno = max_reg_num ();
2681 int i, new_regno, max;
2682 rtx set;
2683 rtx_insn_list *x, *next, *prev;
2684 rtx_insn *insn;
2686 if (max_regno_before_ira < max_regno)
2688 max = vec_safe_length (reg_equivs);
2689 grow_reg_equivs ();
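/* Move each init insn to the list of the pseudo register it actually
   initializes now, which may have changed because of live range
   splitting.  */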
2690 for (i = FIRST_PSEUDO_REGISTER; i < max; i++)
2691 for (prev = NULL, x = reg_equiv_init (i);
2692 x != NULL_RTX;
2693 x = next)
2695 next = x->next ();
2696 insn = x->insn ();
2697 set = single_set (insn);
2698 ira_assert (set != NULL_RTX
2699 && (REG_P (SET_DEST (set)) || REG_P (SET_SRC (set))));
2700 if (REG_P (SET_DEST (set))
2701 && ((int) REGNO (SET_DEST (set)) == i
2702 || (int) ORIGINAL_REGNO (SET_DEST (set)) == i))
2703 new_regno = REGNO (SET_DEST (set));
2704 else if (REG_P (SET_SRC (set))
2705 && ((int) REGNO (SET_SRC (set)) == i
2706 || (int) ORIGINAL_REGNO (SET_SRC (set)) == i))
2707 new_regno = REGNO (SET_SRC (set));
2708 else
2709 gcc_unreachable ();
2710 if (new_regno == i)
2711 prev = x;
2712 else
2714 /* Remove the wrong list element. */
2715 if (prev == NULL_RTX)
2716 reg_equiv_init (i) = next;
2717 else
2718 XEXP (prev, 1) = next;
2719 XEXP (x, 1) = reg_equiv_init (new_regno);
2720 reg_equiv_init (new_regno) = x;
2726 #ifdef ENABLE_IRA_CHECKING
2727 /* Print redundant memory-memory copies. */
2728 static void
2729 print_redundant_copies (void)
2731 int hard_regno;
2732 ira_allocno_t a;
2733 ira_copy_t cp, next_cp;
2734 ira_allocno_iterator ai;
2736 FOR_EACH_ALLOCNO (a, ai)
2738 if (ALLOCNO_CAP_MEMBER (a) != NULL)
2739 /* It is a cap. */
2740 continue;
2741 hard_regno = ALLOCNO_HARD_REGNO (a);
2742 if (hard_regno >= 0)
2743 continue;
2744 for (cp = ALLOCNO_COPIES (a); cp != NULL; cp = next_cp)
2745 if (cp->first == a)
2746 next_cp = cp->next_first_allocno_copy;
2747 else
2749 next_cp = cp->next_second_allocno_copy;
2750 if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL
2751 && cp->insn != NULL_RTX
2752 && ALLOCNO_HARD_REGNO (cp->first) == hard_regno)
2753 fprintf (ira_dump_file,
2754 " Redundant move from %d(freq %d):%d\n",
2755 INSN_UID (cp->insn), cp->freq, hard_regno);
2759 #endif
2761 /* Set up preferred and alternative classes for new pseudo-registers
2762 created by IRA starting with START. */
2763 static void
2764 setup_preferred_alternate_classes_for_new_pseudos (int start)
2766 int i, old_regno;
2767 int max_regno = max_reg_num ();
2769 for (i = start; i < max_regno; i++)
2771 old_regno = ORIGINAL_REGNO (regno_reg_rtx[i]);
2772 ira_assert (i != old_regno);
2773 setup_reg_classes (i, reg_preferred_class (old_regno),
2774 reg_alternate_class (old_regno),
2775 reg_allocno_class (old_regno));
2776 if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
2777 fprintf (ira_dump_file,
2778 " New r%d: setting preferred %s, alternative %s\n",
2779 i, reg_class_names[reg_preferred_class (old_regno)],
2780 reg_class_names[reg_alternate_class (old_regno)]);
2785 /* The number of entries allocated in reg_info. */
2786 static int allocated_reg_info_size;
2788 /* Regional allocation can create new pseudo-registers. This function
2789 expands some arrays for pseudo-registers. */
2790 static void
2791 expand_reg_info (void)
2793 int i;
2794 int size = max_reg_num ();
2796 resize_reg_info ();
2797 for (i = allocated_reg_info_size; i < size; i++)
2798 setup_reg_classes (i, GENERAL_REGS, ALL_REGS, GENERAL_REGS);
2799 setup_preferred_alternate_classes_for_new_pseudos (allocated_reg_info_size);
2800 allocated_reg_info_size = size;
2803 /* Return TRUE if the register pressure in the function is too high.
2804 It is used to decide when stack slot sharing is worth doing. */
2805 static bool
2806 too_high_register_pressure_p (void)
2808 int i;
2809 enum reg_class pclass;
2811 for (i = 0; i < ira_pressure_classes_num; i++)
2813 pclass = ira_pressure_classes[i];
2814 if (ira_loop_tree_root->reg_pressure[pclass] > 10000)
2815 return true;
2817 return false;
2822 /* Indicate that hard register number FROM was eliminated and replaced with
2823 an offset from hard register number TO. The status of hard registers live
2824 at the start of a basic block is updated by replacing a use of FROM with
2825 a use of TO. */
2827 void
2828 mark_elimination (int from, int to)
2830 basic_block bb;
2831 bitmap r;
2833 FOR_EACH_BB_FN (bb, cfun)
2835 r = DF_LR_IN (bb);
2836 if (bitmap_bit_p (r, from))
2838 bitmap_clear_bit (r, from);
2839 bitmap_set_bit (r, to);
2841 if (! df_live)
2842 continue;
2843 r = DF_LIVE_IN (bb);
2844 if (bitmap_bit_p (r, from))
2846 bitmap_clear_bit (r, from);
2847 bitmap_set_bit (r, to);
2854 /* The length of the following array. */
2855 int ira_reg_equiv_len;
2857 /* Info about equiv. info for each register. */
2858 struct ira_reg_equiv_s *ira_reg_equiv;
2860 /* Expand ira_reg_equiv if necessary. */
2861 void
2862 ira_expand_reg_equiv (void)
2864 int old = ira_reg_equiv_len;
2866 if (ira_reg_equiv_len > max_reg_num ())
2867 return;
2868 ira_reg_equiv_len = max_reg_num () * 3 / 2 + 1;
2869 ira_reg_equiv
2870 = (struct ira_reg_equiv_s *) xrealloc (ira_reg_equiv,
2871 ira_reg_equiv_len
2872 * sizeof (struct ira_reg_equiv_s));
2873 gcc_assert (old < ira_reg_equiv_len);
2874 memset (ira_reg_equiv + old, 0,
2875 sizeof (struct ira_reg_equiv_s) * (ira_reg_equiv_len - old));
2878 static void
2879 init_reg_equiv (void)
2881 ira_reg_equiv_len = 0;
2882 ira_reg_equiv = NULL;
2883 ira_expand_reg_equiv ();
2886 static void
2887 finish_reg_equiv (void)
2889 free (ira_reg_equiv);
2894 struct equivalence
2896 /* Set when a REG_EQUIV note is found or created. Used to
2897 keep track of what memory accesses might be created later,
2898 e.g. by reload. */
2899 rtx replacement;
2900 rtx *src_p;
2902 /* The list of each instruction which initializes this register.
2904 NULL indicates we know nothing about this register's equivalence
2905 properties.
2907 An INSN_LIST with a NULL insn indicates this pseudo is already
2908 known to not have a valid equivalence. */
2909 rtx_insn_list *init_insns;
2911 /* Loop depth is used to recognize equivalences which appear
2912 to be present within the same loop (or in an inner loop). */
2913 short loop_depth;
2914 /* Nonzero if this had a preexisting REG_EQUIV note. */
2915 unsigned char is_arg_equivalence : 1;
2916 /* Set when an attempt should be made to replace a register
2917 with the associated src_p entry. */
2918 unsigned char replace : 1;
2919 /* Set if this register has no known equivalence. */
2920 unsigned char no_equiv : 1;
2921 /* Set if this register is mentioned in a paradoxical subreg. */
2922 unsigned char pdx_subregs : 1;
2925 /* reg_equiv[N] (where N is a pseudo reg number) is the equivalence
2926 structure for that register. */
2927 static struct equivalence *reg_equiv;
2929 /* Used for communication between the following two functions. */
2930 struct equiv_mem_data
2932 /* A MEM that we wish to ensure remains unchanged. */
2933 rtx equiv_mem;
2935 /* Set true if EQUIV_MEM is modified. */
2936 bool equiv_mem_modified;
2939 /* If EQUIV_MEM is modified by modifying DEST, indicate that it is modified.
2940 Called via note_stores. */
2941 static void
2942 validate_equiv_mem_from_store (rtx dest, const_rtx set ATTRIBUTE_UNUSED,
2943 void *data)
2945 struct equiv_mem_data *info = (struct equiv_mem_data *) data;
2947 if ((REG_P (dest)
2948 && reg_overlap_mentioned_p (dest, info->equiv_mem))
2949 || (MEM_P (dest)
2950 && anti_dependence (info->equiv_mem, dest)))
2951 info->equiv_mem_modified = true;
2954 enum valid_equiv { valid_none, valid_combine, valid_reload };
2956 /* Verify that no store between START and the death of REG invalidates
2957 MEMREF. MEMREF is invalidated by modifying a register used in MEMREF,
2958 by storing into an overlapping memory location, or with a non-const
2959 CALL_INSN.
2961 Return VALID_RELOAD if MEMREF remains valid for both reload and
2962 combine_and_move insns, VALID_COMBINE if only valid for
2963 combine_and_move_insns, and VALID_NONE otherwise. */
2964 static enum valid_equiv
2965 validate_equiv_mem (rtx_insn *start, rtx reg, rtx memref)
2967 rtx_insn *insn;
2968 rtx note;
2969 struct equiv_mem_data info = { memref, false };
2970 enum valid_equiv ret = valid_reload;
2972 /* If the memory reference has side effects or is volatile, it isn't a
2973 valid equivalence. */
2974 if (side_effects_p (memref))
2975 return valid_none;
2977 for (insn = start; insn; insn = NEXT_INSN (insn))
2979 if (!INSN_P (insn))
2980 continue;
2982 if (find_reg_note (insn, REG_DEAD, reg))
2983 return ret;
2985 if (CALL_P (insn))
2987 /* We can combine a reg def from one insn into a reg use in
2988 another over a call if the memory is readonly or the call
2989 const/pure. However, we can't set reg_equiv notes up for
2990 reload over any call. The problem is the equivalent form
2991 may reference a pseudo which gets assigned a call
2992 clobbered hard reg. When we later replace REG with its
2993 equivalent form, the value in the call-clobbered reg has
2994 been changed and all hell breaks loose. */
2995 ret = valid_combine;
2996 if (!MEM_READONLY_P (memref)
2997 && !RTL_CONST_OR_PURE_CALL_P (insn))
2998 return valid_none;
3001 note_stores (PATTERN (insn), validate_equiv_mem_from_store, &info);
3002 if (info.equiv_mem_modified)
3003 return valid_none;
3005 /* If a register mentioned in MEMREF is modified via an
3006 auto-increment, we lose the equivalence. Do the same if one
3007 dies; although we could extend the life, it doesn't seem worth
3008 the trouble. */
3010 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
3011 if ((REG_NOTE_KIND (note) == REG_INC
3012 || REG_NOTE_KIND (note) == REG_DEAD)
3013 && REG_P (XEXP (note, 0))
3014 && reg_overlap_mentioned_p (XEXP (note, 0), memref))
3015 return valid_none;
3018 return valid_none;
3021 /* Returns nonzero if X varies, i.e. is not known to be invariant. */
3022 static int
3023 equiv_init_varies_p (rtx x)
3025 RTX_CODE code = GET_CODE (x);
3026 int i;
3027 const char *fmt;
3029 switch (code)
3031 case MEM:
3032 return !MEM_READONLY_P (x) || equiv_init_varies_p (XEXP (x, 0));
3034 case CONST:
3035 CASE_CONST_ANY:
3036 case SYMBOL_REF:
3037 case LABEL_REF:
3038 return 0;
3040 case REG:
3041 return reg_equiv[REGNO (x)].replace == 0 && rtx_varies_p (x, 0);
3043 case ASM_OPERANDS:
3044 if (MEM_VOLATILE_P (x))
3045 return 1;
3047 /* Fall through. */
3049 default:
3050 break;
3053 fmt = GET_RTX_FORMAT (code);
3054 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3055 if (fmt[i] == 'e')
3057 if (equiv_init_varies_p (XEXP (x, i)))
3058 return 1;
3060 else if (fmt[i] == 'E')
3062 int j;
3063 for (j = 0; j < XVECLEN (x, i); j++)
3064 if (equiv_init_varies_p (XVECEXP (x, i, j)))
3065 return 1;
3068 return 0;
3071 /* Returns nonzero if X (used to initialize register REGNO) is movable.
3072 X is movable only if the registers it uses have equivalent initializations
3073 which appear to be within the same loop (or in an inner loop) and are movable,
3074 or if they are not candidates for local_alloc and do not vary. */
3075 static int
3076 equiv_init_movable_p (rtx x, int regno)
3078 int i, j;
3079 const char *fmt;
3080 enum rtx_code code = GET_CODE (x);
3082 switch (code)
3084 case SET:
3085 return equiv_init_movable_p (SET_SRC (x), regno);
3087 case CC0:
3088 case CLOBBER:
3089 return 0;
3091 case PRE_INC:
3092 case PRE_DEC:
3093 case POST_INC:
3094 case POST_DEC:
3095 case PRE_MODIFY:
3096 case POST_MODIFY:
3097 return 0;
3099 case REG:
3100 return ((reg_equiv[REGNO (x)].loop_depth >= reg_equiv[regno].loop_depth
3101 && reg_equiv[REGNO (x)].replace)
3102 || (REG_BASIC_BLOCK (REGNO (x)) < NUM_FIXED_BLOCKS
3103 && ! rtx_varies_p (x, 0)));
3105 case UNSPEC_VOLATILE:
3106 return 0;
3108 case ASM_OPERANDS:
3109 if (MEM_VOLATILE_P (x))
3110 return 0;
3112 /* Fall through. */
3114 default:
3115 break;
3118 fmt = GET_RTX_FORMAT (code);
3119 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3120 switch (fmt[i])
3122 case 'e':
3123 if (! equiv_init_movable_p (XEXP (x, i), regno))
3124 return 0;
3125 break;
3126 case 'E':
3127 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
3128 if (! equiv_init_movable_p (XVECEXP (x, i, j), regno))
3129 return 0;
3130 break;
3133 return 1;
3136 /* TRUE if X references a memory location that would be affected by a store
3137 to MEMREF. */
3138 static int
3139 memref_referenced_p (rtx memref, rtx x)
3141 int i, j;
3142 const char *fmt;
3143 enum rtx_code code = GET_CODE (x);
3145 switch (code)
3147 case CONST:
3148 case LABEL_REF:
3149 case SYMBOL_REF:
3150 CASE_CONST_ANY:
3151 case PC:
3152 case CC0:
3153 case HIGH:
3154 case LO_SUM:
3155 return 0;
3157 case REG:
3158 return (reg_equiv[REGNO (x)].replacement
3159 && memref_referenced_p (memref,
3160 reg_equiv[REGNO (x)].replacement));
3162 case MEM:
3163 if (true_dependence (memref, VOIDmode, x))
3164 return 1;
3165 break;
3167 case SET:
3168 /* If we are setting a MEM, it doesn't count (its address does), but any
3169 other SET_DEST that has a MEM in it is referencing the MEM. */
3170 if (MEM_P (SET_DEST (x)))
3172 if (memref_referenced_p (memref, XEXP (SET_DEST (x), 0)))
3173 return 1;
3175 else if (memref_referenced_p (memref, SET_DEST (x)))
3176 return 1;
3178 return memref_referenced_p (memref, SET_SRC (x));
3180 default:
3181 break;
3184 fmt = GET_RTX_FORMAT (code);
3185 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
3186 switch (fmt[i])
3188 case 'e':
3189 if (memref_referenced_p (memref, XEXP (x, i)))
3190 return 1;
3191 break;
3192 case 'E':
3193 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
3194 if (memref_referenced_p (memref, XVECEXP (x, i, j)))
3195 return 1;
3196 break;
3199 return 0;
3202 /* TRUE if some insn in the range (START, END] references a memory location
3203 that would be affected by a store to MEMREF.
3205 Callers should not call this routine if START is after END in the
3206 RTL chain. */
3208 static int
3209 memref_used_between_p (rtx memref, rtx_insn *start, rtx_insn *end)
3211 rtx_insn *insn;
3213 for (insn = NEXT_INSN (start);
3214 insn && insn != NEXT_INSN (end);
3215 insn = NEXT_INSN (insn))
3217 if (!NONDEBUG_INSN_P (insn))
3218 continue;
3220 if (memref_referenced_p (memref, PATTERN (insn)))
3221 return 1;
3223 /* Nonconst functions may access memory. */
3224 if (CALL_P (insn) && (! RTL_CONST_CALL_P (insn)))
3225 return 1;
3228 gcc_assert (insn == NEXT_INSN (end));
3229 return 0;
3232 /* Mark REG as having no known equivalence.
3233 Some instructions might have been processed before and furnished
3234 with REG_EQUIV notes for this register; these notes will have to be
3235 removed.
3236 STORE is the piece of RTL that does the non-constant / conflicting
3237 assignment - a SET, CLOBBER or REG_INC note. It is currently not used,
3238 but needs to be there because this function is called from note_stores. */
3239 static void
3240 no_equiv (rtx reg, const_rtx store ATTRIBUTE_UNUSED,
3241 void *data ATTRIBUTE_UNUSED)
3243 int regno;
3244 rtx_insn_list *list;
3246 if (!REG_P (reg))
3247 return;
3248 regno = REGNO (reg);
3249 reg_equiv[regno].no_equiv = 1;
3250 list = reg_equiv[regno].init_insns;
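/* An init insn list whose first entry has a NULL insn already marks
   this pseudo as having no valid equivalence; nothing more to do.  */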
3251 if (list && list->insn () == NULL)
3252 return;
3253 reg_equiv[regno].init_insns = gen_rtx_INSN_LIST (VOIDmode, NULL_RTX, NULL);
3254 reg_equiv[regno].replacement = NULL_RTX;
3255 /* This doesn't matter for equivalences made for argument registers; we
3256 should keep their initialization insns. */
3257 if (reg_equiv[regno].is_arg_equivalence)
3258 return;
3259 ira_reg_equiv[regno].defined_p = false;
3260 ira_reg_equiv[regno].init_insns = NULL;
3261 for (; list; list = list->next ())
3263 rtx_insn *insn = list->insn ();
3264 remove_note (insn, find_reg_note (insn, REG_EQUIV, NULL_RTX));
3268 /* Scan INSN for paradoxical subregs and set the pdx_subregs flag in
3269 reg_equiv for each register mentioned in one. */
3271 static void
3272 set_paradoxical_subreg (rtx_insn *insn)
3274 subrtx_iterator::array_type array;
3275 FOR_EACH_SUBRTX (iter, array, PATTERN (insn), NONCONST)
3277 const_rtx subreg = *iter;
3278 if (GET_CODE (subreg) == SUBREG)
3280 const_rtx reg = SUBREG_REG (subreg);
3281 if (REG_P (reg) && paradoxical_subreg_p (subreg))
3282 reg_equiv[REGNO (reg)].pdx_subregs = true;
3287 /* In a DEBUG_INSN location, replace each REG that is in the CLEARED_REGS
3288 bitmap with its equivalent replacement. */
3290 static rtx
3291 adjust_cleared_regs (rtx loc, const_rtx old_rtx ATTRIBUTE_UNUSED, void *data)
3293 if (REG_P (loc))
3295 bitmap cleared_regs = (bitmap) data;
3296 if (bitmap_bit_p (cleared_regs, REGNO (loc)))
3297 return simplify_replace_fn_rtx (copy_rtx (*reg_equiv[REGNO (loc)].src_p),
3298 NULL_RTX, adjust_cleared_regs, data);
3300 return NULL_RTX;
3303 /* Find registers that are equivalent to a single value throughout the
3304 compilation (either because they can be referenced in memory or are
3305 set once from a single constant). Lower their priority for a
3306 register.
3308 If such a register is only referenced once, try substituting its
3309 value into the using insn. If it succeeds, we can eliminate the
3310 register completely.
3312 Initialize init_insns in ira_reg_equiv array. */
3313 static void
3314 update_equiv_regs (void)
3316 rtx_insn *insn;
3317 basic_block bb;
3319 /* Scan the insns and set pdx_subregs for each reg used in a
3320 paradoxical subreg. Such a reg must not be made equivalent to a mem,
3321 because LRA will not substitute such an equivalent memory, in order to
3322 prevent accesses beyond the allocated memory of a paradoxical memory subreg. */
3323 FOR_EACH_BB_FN (bb, cfun)
3324 FOR_BB_INSNS (bb, insn)
3325 if (NONDEBUG_INSN_P (insn))
3326 set_paradoxical_subreg (insn);
3328 /* Scan the insns and find which registers have equivalences. Do this
3329 in a separate scan of the insns because (due to -fcse-follow-jumps)
3330 a register can be set below its use. */
3331 bitmap setjmp_crosses = regstat_get_setjmp_crosses ();
3332 FOR_EACH_BB_FN (bb, cfun)
3334 int loop_depth = bb_loop_depth (bb);
3336 for (insn = BB_HEAD (bb);
3337 insn != NEXT_INSN (BB_END (bb));
3338 insn = NEXT_INSN (insn))
3340 rtx note;
3341 rtx set;
3342 rtx dest, src;
3343 int regno;
3345 if (! INSN_P (insn))
3346 continue;
3348 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
3349 if (REG_NOTE_KIND (note) == REG_INC)
3350 no_equiv (XEXP (note, 0), note, NULL);
3352 set = single_set (insn);
3354 /* If this insn contains more (or less) than a single SET,
3355 only mark all destinations as having no known equivalence. */
3356 if (set == NULL_RTX
3357 || side_effects_p (SET_SRC (set)))
3359 note_stores (PATTERN (insn), no_equiv, NULL);
3360 continue;
3362 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3364 int i;
3366 for (i = XVECLEN (PATTERN (insn), 0) - 1; i >= 0; i--)
3368 rtx part = XVECEXP (PATTERN (insn), 0, i);
3369 if (part != set)
3370 note_stores (part, no_equiv, NULL);
3374 dest = SET_DEST (set);
3375 src = SET_SRC (set);
3377 /* See if this is setting up the equivalence between an argument
3378 register and its stack slot. */
3379 note = find_reg_note (insn, REG_EQUIV, NULL_RTX);
3380 if (note)
3382 gcc_assert (REG_P (dest));
3383 regno = REGNO (dest);
3385 /* Note that we don't want to clear init_insns in
3386 ira_reg_equiv even if there are multiple sets of this
3387 register. */
3388 reg_equiv[regno].is_arg_equivalence = 1;
3390 /* The insn result can have an equivalent memory even though
3391 the equivalence is not set up by the insn itself. We add
3392 this insn to the init insns since, for now, it serves as a flag that
3393 regno has an equivalence. We will remove the insn
3394 from the init insn list later. */
3395 if (rtx_equal_p (src, XEXP (note, 0)) || MEM_P (XEXP (note, 0)))
3396 ira_reg_equiv[regno].init_insns
3397 = gen_rtx_INSN_LIST (VOIDmode, insn,
3398 ira_reg_equiv[regno].init_insns);
3400 /* Continue normally in case this is a candidate for
3401 replacements. */
3404 if (!optimize)
3405 continue;
3407 /* We only handle the case of a pseudo register being set
3408 once, or always to the same value. */
3409 /* ??? The mn10200 port breaks if we add equivalences for
3410 values that need an ADDRESS_REGS register and set them equivalent
3411 to a MEM of a pseudo. The actual problem is in the over-conservative
3412 handling of INPADDR_ADDRESS / INPUT_ADDRESS / INPUT triples in
3413 calculate_needs, but we traditionally work around this problem
3414 here by rejecting equivalences when the destination is in a register
3415 that's likely spilled. This is fragile, of course, since the
3416 preferred class of a pseudo depends on all instructions that set
3417 or use it. */
3419 if (!REG_P (dest)
3420 || (regno = REGNO (dest)) < FIRST_PSEUDO_REGISTER
3421 || (reg_equiv[regno].init_insns
3422 && reg_equiv[regno].init_insns->insn () == NULL)
3423 || (targetm.class_likely_spilled_p (reg_preferred_class (regno))
3424 && MEM_P (src) && ! reg_equiv[regno].is_arg_equivalence))
3426 /* This might be setting a SUBREG of a pseudo, a pseudo that is
3427 also set somewhere else to a constant. */
3428 note_stores (set, no_equiv, NULL);
3429 continue;
3432 /* Don't set reg mentioned in a paradoxical subreg
3433 equivalent to a mem. */
3434 if (MEM_P (src) && reg_equiv[regno].pdx_subregs)
3436 note_stores (set, no_equiv, NULL);
3437 continue;
3440 note = find_reg_note (insn, REG_EQUAL, NULL_RTX);
3442 /* cse sometimes generates function invariants, but doesn't put a
3443 REG_EQUAL note on the insn. Since this note would be redundant,
3444 there's no point creating it earlier than here. */
3445 if (! note && ! rtx_varies_p (src, 0))
3446 note = set_unique_reg_note (insn, REG_EQUAL, copy_rtx (src));
3448 /* Don't bother considering a REG_EQUAL note containing an EXPR_LIST
3449 since it represents a function call. */
3450 if (note && GET_CODE (XEXP (note, 0)) == EXPR_LIST)
3451 note = NULL_RTX;
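/* A pseudo with several definitions can still get an equivalence, but
   only when every definition records the same REG_EQUAL value; the
   checks below enforce that.  */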
3453 if (DF_REG_DEF_COUNT (regno) != 1)
3455 bool equal_p = true;
3456 rtx_insn_list *list;
3458 /* If we have already processed this pseudo and determined it
3459 cannot have an equivalence, then honor that decision. */
3460 if (reg_equiv[regno].no_equiv)
3461 continue;
3463 if (! note
3464 || rtx_varies_p (XEXP (note, 0), 0)
3465 || (reg_equiv[regno].replacement
3466 && ! rtx_equal_p (XEXP (note, 0),
3467 reg_equiv[regno].replacement)))
3469 no_equiv (dest, set, NULL);
3470 continue;
3473 list = reg_equiv[regno].init_insns;
3474 for (; list; list = list->next ())
3476 rtx note_tmp;
3477 rtx_insn *insn_tmp;
3479 insn_tmp = list->insn ();
3480 note_tmp = find_reg_note (insn_tmp, REG_EQUAL, NULL_RTX);
3481 gcc_assert (note_tmp);
3482 if (! rtx_equal_p (XEXP (note, 0), XEXP (note_tmp, 0)))
3484 equal_p = false;
3485 break;
3489 if (! equal_p)
3491 no_equiv (dest, set, NULL);
3492 continue;
3496 /* Record this insn as initializing this register. */
3497 reg_equiv[regno].init_insns
3498 = gen_rtx_INSN_LIST (VOIDmode, insn, reg_equiv[regno].init_insns);
3500 /* If this register is known to be equal to a constant, record that
3501 it is always equivalent to the constant. */
3502 if (DF_REG_DEF_COUNT (regno) == 1
3503 && note && ! rtx_varies_p (XEXP (note, 0), 0))
3505 rtx note_value = XEXP (note, 0);
3506 remove_note (insn, note);
3507 set_unique_reg_note (insn, REG_EQUIV, note_value);
3510 /* If this insn introduces a "constant" register, decrease the priority
3511 of that register. Record this insn if the register is only used once
3512 more and the equivalence value is the same as our source.
3514 The latter condition is checked for two reasons: First, it is an
3515 indication that it may be more efficient to actually emit the insn
3516 as written (if no registers are available, reload will substitute
3517 the equivalence).  Second, it avoids problems with any registers
3518 dying in this insn whose death notes would be missed.
3520 If we don't have a REG_EQUIV note, see if this insn is loading
3521 a register used only in one basic block from a MEM. If so, and the
3522 MEM remains unchanged for the life of the register, add a REG_EQUIV
3523 note. */
3524 note = find_reg_note (insn, REG_EQUIV, NULL_RTX);
3526 rtx replacement = NULL_RTX;
3527 if (note)
3528 replacement = XEXP (note, 0);
3529 else if (REG_BASIC_BLOCK (regno) >= NUM_FIXED_BLOCKS
3530 && MEM_P (SET_SRC (set)))
3532 enum valid_equiv validity;
3533 validity = validate_equiv_mem (insn, dest, SET_SRC (set));
3534 if (validity != valid_none)
3536 replacement = copy_rtx (SET_SRC (set));
3537 if (validity == valid_reload)
3538 note = set_unique_reg_note (insn, REG_EQUIV, replacement);
3542 /* If we haven't done so, record for reload that this is an
3543 equivalencing insn. */
3544 if (note && !reg_equiv[regno].is_arg_equivalence)
3545 ira_reg_equiv[regno].init_insns
3546 = gen_rtx_INSN_LIST (VOIDmode, insn,
3547 ira_reg_equiv[regno].init_insns);
3549 if (replacement)
3551 reg_equiv[regno].replacement = replacement;
3552 reg_equiv[regno].src_p = &SET_SRC (set);
3553 reg_equiv[regno].loop_depth = (short) loop_depth;
3555 /* Don't mess with things live during setjmp. */
3556 if (optimize && !bitmap_bit_p (setjmp_crosses, regno))
3558 /* If the register is referenced exactly twice, meaning it is
3559 set once and used once, indicate that the reference may be
3560 replaced by the equivalence we computed above. Do this
3561 even if the register is only used in one block so that
3562 dependencies can be handled where the last register is
3563 used in a different block (i.e. HIGH / LO_SUM sequences)
3564 and to reduce the number of registers alive across
3565 calls. */
3567 if (REG_N_REFS (regno) == 2
3568 && (rtx_equal_p (replacement, src)
3569 || ! equiv_init_varies_p (src))
3570 && NONJUMP_INSN_P (insn)
3571 && equiv_init_movable_p (PATTERN (insn), regno))
3572 reg_equiv[regno].replace = 1;
3579 /* For insns that set a MEM to the contents of a REG that is only used
3580 in a single basic block, see if the register is always equivalent
3581 to that memory location and if moving the store from INSN to the
3582 insn that sets REG is safe. If so, put a REG_EQUIV note on the
3583 initializing insn. */
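/* A hypothetical example (register numbers are illustrative, not from
   any particular target): given

       insn A: (set (reg:SI 100) (plus:SI (reg:SI 101) (const_int 4)))
       ...
       insn B: (set (mem:SI (reg:SI 102)) (reg:SI 100))

   where pseudo 100 is set exactly once, used only in this block, and
   the memory location is unchanged between A and B, this routine may
   attach a REG_EQUIV note whose value is (mem:SI (reg:SI 102)) to
   insn A, so that reload can use that location as pseudo 100's
   equivalent memory instead of allocating a fresh stack slot.  */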
3584 static void
3585 add_store_equivs (void)
3587 bitmap_head seen_insns;
3589 bitmap_initialize (&seen_insns, NULL);
3590 for (rtx_insn *insn = get_insns (); insn; insn = NEXT_INSN (insn))
3592 rtx set, src, dest;
3593 unsigned regno;
3594 rtx_insn *init_insn;
3596 bitmap_set_bit (&seen_insns, INSN_UID (insn));
3598 if (! INSN_P (insn))
3599 continue;
3601 set = single_set (insn);
3602 if (! set)
3603 continue;
3605 dest = SET_DEST (set);
3606 src = SET_SRC (set);
3608 /* Don't add a REG_EQUIV note if the insn already has one. The existing
3609 REG_EQUIV is likely more useful than the one we are adding. */
3610 if (MEM_P (dest) && REG_P (src)
3611 && (regno = REGNO (src)) >= FIRST_PSEUDO_REGISTER
3612 && REG_BASIC_BLOCK (regno) >= NUM_FIXED_BLOCKS
3613 && DF_REG_DEF_COUNT (regno) == 1
3614 && ! reg_equiv[regno].pdx_subregs
3615 && reg_equiv[regno].init_insns != NULL
3616 && (init_insn = reg_equiv[regno].init_insns->insn ()) != 0
3617 && bitmap_bit_p (&seen_insns, INSN_UID (init_insn))
3618 && ! find_reg_note (init_insn, REG_EQUIV, NULL_RTX)
3619 && validate_equiv_mem (init_insn, src, dest) == valid_reload
3620 && ! memref_used_between_p (dest, init_insn, insn)
3621 /* Attaching a REG_EQUIV note will fail if INIT_INSN has
3622 multiple sets. */
3623 && set_unique_reg_note (init_insn, REG_EQUIV, copy_rtx (dest)))
3625 /* This insn makes the equivalence, not the one initializing
3626 the register. */
3627 ira_reg_equiv[regno].init_insns
3628 = gen_rtx_INSN_LIST (VOIDmode, insn, NULL_RTX);
3629 df_notes_rescan (init_insn);
3630 if (dump_file)
3631 fprintf (dump_file,
3632 "Adding REG_EQUIV to insn %d for source of insn %d\n",
3633 INSN_UID (init_insn),
3634 INSN_UID (insn));
3637 bitmap_clear (&seen_insns);
3640 /* Scan all regs killed in an insn to see if any of them are registers
3641 used only once.  If so, see if we can replace the reference
3642 with the equivalent form. If we can, delete the initializing
3643 reference and this register will go away. If we can't replace the
3644 reference, and the initializing reference is within the same loop
3645 (or in an inner loop), then move the register initialization just
3646 before the use, so that they are in the same basic block. */
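/* A hypothetical sketch of the two cases handled here (pseudo numbers
   are made up):

       insn D: (set (reg:SI 100) (plus:SI (reg:SI 101) (const_int 1)))
       insn U: (set (reg:SI 102) (mult:SI (reg:SI 100) (reg:SI 103)))

   Provided pseudo 100 is set once, used only in U, and U is not in a
   deeper loop than D, we first try to substitute the PLUS expression
   for (reg:SI 100) inside U and delete D.  If that substitution is
   not valid, we instead move D to just before U so that the
   definition and the use end up in the same basic block.  */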
3647 static void
3648 combine_and_move_insns (void)
3650 bitmap cleared_regs = BITMAP_ALLOC (NULL);
3651 int max = max_reg_num ();
3653 for (int regno = FIRST_PSEUDO_REGISTER; regno < max; regno++)
3655 if (!reg_equiv[regno].replace)
3656 continue;
3658 rtx_insn *use_insn = 0;
3659 for (df_ref use = DF_REG_USE_CHAIN (regno);
3660 use;
3661 use = DF_REF_NEXT_REG (use))
3662 if (DF_REF_INSN_INFO (use))
3664 if (DEBUG_INSN_P (DF_REF_INSN (use)))
3665 continue;
3666 gcc_assert (!use_insn);
3667 use_insn = DF_REF_INSN (use);
3669 gcc_assert (use_insn);
3671 /* Don't substitute into jumps. indirect_jump_optimize does
3672 this for anything we are prepared to handle. */
3673 if (JUMP_P (use_insn))
3674 continue;
3676 /* Also don't substitute into a conditional trap insn -- it can become
3677 an unconditional trap, and that is a flow control insn. */
3678 if (GET_CODE (PATTERN (use_insn)) == TRAP_IF)
3679 continue;
3681 df_ref def = DF_REG_DEF_CHAIN (regno);
3682 gcc_assert (DF_REG_DEF_COUNT (regno) == 1 && DF_REF_INSN_INFO (def));
3683 rtx_insn *def_insn = DF_REF_INSN (def);
3685 /* We may not move instructions that can throw, since that
3686 changes basic block boundaries and we are not prepared to
3687 adjust the CFG to match. */
3688 if (can_throw_internal (def_insn))
3689 continue;
3691 basic_block use_bb = BLOCK_FOR_INSN (use_insn);
3692 basic_block def_bb = BLOCK_FOR_INSN (def_insn);
3693 if (bb_loop_depth (use_bb) > bb_loop_depth (def_bb))
3694 continue;
3696 if (asm_noperands (PATTERN (def_insn)) < 0
3697 && validate_replace_rtx (regno_reg_rtx[regno],
3698 *reg_equiv[regno].src_p, use_insn))
3700 rtx link;
3701 /* Append the REG_DEAD notes from def_insn. */
3702 for (rtx *p = &REG_NOTES (def_insn); (link = *p) != 0; )
3704 if (REG_NOTE_KIND (XEXP (link, 0)) == REG_DEAD)
3706 *p = XEXP (link, 1);
3707 XEXP (link, 1) = REG_NOTES (use_insn);
3708 REG_NOTES (use_insn) = link;
3710 else
3711 p = &XEXP (link, 1);
3714 remove_death (regno, use_insn);
3715 SET_REG_N_REFS (regno, 0);
3716 REG_FREQ (regno) = 0;
3717 df_ref use;
3718 FOR_EACH_INSN_USE (use, def_insn)
3720 unsigned int use_regno = DF_REF_REGNO (use);
3721 if (!HARD_REGISTER_NUM_P (use_regno))
3722 reg_equiv[use_regno].replace = 0;
3725 delete_insn (def_insn);
3727 reg_equiv[regno].init_insns = NULL;
3728 ira_reg_equiv[regno].init_insns = NULL;
3729 bitmap_set_bit (cleared_regs, regno);
3732 /* Move the initialization of the register to just before
3733 USE_INSN. Update the flow information. */
3734 else if (prev_nondebug_insn (use_insn) != def_insn)
3736 rtx_insn *new_insn;
3738 new_insn = emit_insn_before (PATTERN (def_insn), use_insn);
3739 REG_NOTES (new_insn) = REG_NOTES (def_insn);
3740 REG_NOTES (def_insn) = 0;
3741 /* Rescan it to process the notes. */
3742 df_insn_rescan (new_insn);
3744 /* Make sure this insn is recognized before reload begins,
3745 otherwise eliminate_regs_in_insn will die. */
3746 INSN_CODE (new_insn) = INSN_CODE (def_insn);
3748 delete_insn (def_insn);
3750 XEXP (reg_equiv[regno].init_insns, 0) = new_insn;
3752 REG_BASIC_BLOCK (regno) = use_bb->index;
3753 REG_N_CALLS_CROSSED (regno) = 0;
3755 if (use_insn == BB_HEAD (use_bb))
3756 BB_HEAD (use_bb) = new_insn;
3758 /* We know regno dies in use_insn, but inside a loop
3759 REG_DEAD notes might be missing when def_insn was in
3760 another basic block. However, when we move def_insn into
3761 this bb we'll definitely get a REG_DEAD note and reload
3762 will see the death. It's possible that update_equiv_regs
3763 set up an equivalence referencing regno for a reg set by
3764 use_insn, when regno was seen as non-local. Now that
3765 regno is local to this block, and dies, such an
3766 equivalence is invalid. */
3767 if (find_reg_note (use_insn, REG_EQUIV, regno_reg_rtx[regno]))
3769 rtx set = single_set (use_insn);
3770 if (set && REG_P (SET_DEST (set)))
3771 no_equiv (SET_DEST (set), set, NULL);
3774 ira_reg_equiv[regno].init_insns
3775 = gen_rtx_INSN_LIST (VOIDmode, new_insn, NULL_RTX);
3776 bitmap_set_bit (cleared_regs, regno);
3780 if (!bitmap_empty_p (cleared_regs))
3782 basic_block bb;
3784 FOR_EACH_BB_FN (bb, cfun)
3786 bitmap_and_compl_into (DF_LR_IN (bb), cleared_regs);
3787 bitmap_and_compl_into (DF_LR_OUT (bb), cleared_regs);
3788 if (!df_live)
3789 continue;
3790 bitmap_and_compl_into (DF_LIVE_IN (bb), cleared_regs);
3791 bitmap_and_compl_into (DF_LIVE_OUT (bb), cleared_regs);
3794 /* Last pass - adjust debug insns referencing cleared regs. */
3795 if (MAY_HAVE_DEBUG_INSNS)
3796 for (rtx_insn *insn = get_insns (); insn; insn = NEXT_INSN (insn))
3797 if (DEBUG_INSN_P (insn))
3799 rtx old_loc = INSN_VAR_LOCATION_LOC (insn);
3800 INSN_VAR_LOCATION_LOC (insn)
3801 = simplify_replace_fn_rtx (old_loc, NULL_RTX,
3802 adjust_cleared_regs,
3803 (void *) cleared_regs);
3804 if (old_loc != INSN_VAR_LOCATION_LOC (insn))
3805 df_insn_rescan (insn);
3809 BITMAP_FREE (cleared_regs);
3812 /* A pass over indirect jumps, converting simple cases to direct jumps.
3813 Combine does this optimization too, but only within a basic block. */
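/* For example (hypothetical numbers):

       insn D: (set (reg:DI 100) (label_ref L1))
       ...
       insn J: (set (pc) (reg:DI 100))

   If pseudo 100 has exactly one definition, the LABEL_REF (found
   either in D's SET_SRC or in a REG_EQUAL note on D) can be
   substituted into J, turning the indirect jump into a direct jump to
   L1; jump labels are then rebuilt and dead edges purged.  */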
3814 static void
3815 indirect_jump_optimize (void)
3817 basic_block bb;
3818 bool rebuild_p = false;
3820 FOR_EACH_BB_REVERSE_FN (bb, cfun)
3822 rtx_insn *insn = BB_END (bb);
3823 if (!JUMP_P (insn)
3824 || find_reg_note (insn, REG_NON_LOCAL_GOTO, NULL_RTX))
3825 continue;
3827 rtx x = pc_set (insn);
3828 if (!x || !REG_P (SET_SRC (x)))
3829 continue;
3831 int regno = REGNO (SET_SRC (x));
3832 if (DF_REG_DEF_COUNT (regno) == 1)
3834 df_ref def = DF_REG_DEF_CHAIN (regno);
3835 if (!DF_REF_IS_ARTIFICIAL (def))
3837 rtx_insn *def_insn = DF_REF_INSN (def);
3838 rtx lab = NULL_RTX;
3839 rtx set = single_set (def_insn);
3840 if (set && GET_CODE (SET_SRC (set)) == LABEL_REF)
3841 lab = SET_SRC (set);
3842 else
3844 rtx eqnote = find_reg_note (def_insn, REG_EQUAL, NULL_RTX);
3845 if (eqnote && GET_CODE (XEXP (eqnote, 0)) == LABEL_REF)
3846 lab = XEXP (eqnote, 0);
3848 if (lab && validate_replace_rtx (SET_SRC (x), lab, insn))
3849 rebuild_p = true;
3854 if (rebuild_p)
3856 timevar_push (TV_JUMP);
3857 rebuild_jump_labels (get_insns ());
3858 if (purge_all_dead_edges ())
3859 delete_unreachable_blocks ();
3860 timevar_pop (TV_JUMP);
3864 /* Set up the fields memory, constant, and invariant from init_insns
3865 in the structures of the array ira_reg_equiv.  */
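/* Roughly, and only as an illustration: a REG_EQUIV value such as
   (const_int 42) ends up in the "constant" field when the target
   accepts it as a legitimate constant; a value such as
   (mem:SI (symbol_ref:SI ("x"))) that passes memory_operand fills in
   the "memory" field; and a PLUS of the frame (or arg) pointer and a
   constant is recorded in the "invariant" field.  */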
3866 static void
3867 setup_reg_equiv (void)
3869 int i;
3870 rtx_insn_list *elem, *prev_elem, *next_elem;
3871 rtx_insn *insn;
3872 rtx set, x;
3874 for (i = FIRST_PSEUDO_REGISTER; i < ira_reg_equiv_len; i++)
3875 for (prev_elem = NULL, elem = ira_reg_equiv[i].init_insns;
3876 elem;
3877 prev_elem = elem, elem = next_elem)
3879 next_elem = elem->next ();
3880 insn = elem->insn ();
3881 set = single_set (insn);
3883 /* Init insns can set up an equivalence when the reg is a destination or
3884 a source (in this case the destination is memory). */
3885 if (set != 0 && (REG_P (SET_DEST (set)) || REG_P (SET_SRC (set))))
3887 if ((x = find_reg_note (insn, REG_EQUIV, NULL_RTX)) != NULL)
3889 x = XEXP (x, 0);
3890 if (REG_P (SET_DEST (set))
3891 && REGNO (SET_DEST (set)) == (unsigned int) i
3892 && ! rtx_equal_p (SET_SRC (set), x) && MEM_P (x))
3894 /* This insn reports the equivalence but does not
3895 actually set it.  Remove it from the
3896 list.  */
3897 if (prev_elem == NULL)
3898 ira_reg_equiv[i].init_insns = next_elem;
3899 else
3900 XEXP (prev_elem, 1) = next_elem;
3901 elem = prev_elem;
3904 else if (REG_P (SET_DEST (set))
3905 && REGNO (SET_DEST (set)) == (unsigned int) i)
3906 x = SET_SRC (set);
3907 else
3909 gcc_assert (REG_P (SET_SRC (set))
3910 && REGNO (SET_SRC (set)) == (unsigned int) i);
3911 x = SET_DEST (set);
3913 if (! function_invariant_p (x)
3914 || ! flag_pic
3915 /* A function invariant is often CONSTANT_P but may
3916 include a register. We promise to only pass
3917 CONSTANT_P objects to LEGITIMATE_PIC_OPERAND_P. */
3918 || (CONSTANT_P (x) && LEGITIMATE_PIC_OPERAND_P (x)))
3920 /* It can happen that a REG_EQUIV note contains a MEM
3921 that is not a legitimate memory operand. As later
3922 stages of reload assume that all addresses found in
3923 the lra_regno_equiv_* arrays were originally
3924 legitimate, we ignore such REG_EQUIV notes. */
3925 if (memory_operand (x, VOIDmode))
3927 ira_reg_equiv[i].defined_p = true;
3928 ira_reg_equiv[i].memory = x;
3929 continue;
3931 else if (function_invariant_p (x))
3933 machine_mode mode;
3935 mode = GET_MODE (SET_DEST (set));
3936 if (GET_CODE (x) == PLUS
3937 || x == frame_pointer_rtx || x == arg_pointer_rtx)
3938 /* This is a PLUS of the frame pointer and a constant,
3939 or fp, or argp.  */
3940 ira_reg_equiv[i].invariant = x;
3941 else if (targetm.legitimate_constant_p (mode, x))
3942 ira_reg_equiv[i].constant = x;
3943 else
3945 ira_reg_equiv[i].memory = force_const_mem (mode, x);
3946 if (ira_reg_equiv[i].memory == NULL_RTX)
3948 ira_reg_equiv[i].defined_p = false;
3949 ira_reg_equiv[i].init_insns = NULL;
3950 break;
3953 ira_reg_equiv[i].defined_p = true;
3954 continue;
3958 ira_reg_equiv[i].defined_p = false;
3959 ira_reg_equiv[i].init_insns = NULL;
3960 break;
3966 /* Print chain C to FILE. */
3967 static void
3968 print_insn_chain (FILE *file, struct insn_chain *c)
3970 fprintf (file, "insn=%d, ", INSN_UID (c->insn));
3971 bitmap_print (file, &c->live_throughout, "live_throughout: ", ", ");
3972 bitmap_print (file, &c->dead_or_set, "dead_or_set: ", "\n");
3976 /* Print all reload_insn_chains to FILE. */
3977 static void
3978 print_insn_chains (FILE *file)
3980 struct insn_chain *c;
3981 for (c = reload_insn_chain; c ; c = c->next)
3982 print_insn_chain (file, c);
3985 /* Return true if pseudo REGNO should be added to set live_throughout
3986 or dead_or_set of the insn chains for reload consideration. */
3987 static bool
3988 pseudo_for_reload_consideration_p (int regno)
3990 /* Consider spilled pseudos too for IRA because they still have a
3991 chance to get hard registers during reload when IRA is used.  */
3992 return (reg_renumber[regno] >= 0 || ira_conflicts_p);
3995 /* Init LIVE_SUBREGS[ALLOCNUM] and LIVE_SUBREGS_USED[ALLOCNUM], using
3996 REG to determine the size in bytes, and INIT_VALUE to get the
3997 initial value.  ALLOCNUM need not be the regno of REG.  */
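/* For instance, a DImode pseudo (8 bytes on typical targets) gets an
   8-bit sbitmap here, one bit per byte of the value; INIT_VALUE then
   selects whether those bits start out all set or all clear.  */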
3998 static void
3999 init_live_subregs (bool init_value, sbitmap *live_subregs,
4000 bitmap live_subregs_used, int allocnum, rtx reg)
4002 unsigned int regno = REGNO (SUBREG_REG (reg));
4003 int size = GET_MODE_SIZE (GET_MODE (regno_reg_rtx[regno]));
4005 gcc_assert (size > 0);
4007 /* Been there, done that. */
4008 if (bitmap_bit_p (live_subregs_used, allocnum))
4009 return;
4011 /* Create a new one. */
4012 if (live_subregs[allocnum] == NULL)
4013 live_subregs[allocnum] = sbitmap_alloc (size);
4015 /* If the entire reg was live before blasting into subregs, init
4016 all of the subregs to ones; otherwise init them to 0.  */
4017 if (init_value)
4018 bitmap_ones (live_subregs[allocnum]);
4019 else
4020 bitmap_clear (live_subregs[allocnum]);
4022 bitmap_set_bit (live_subregs_used, allocnum);
4025 /* Walk the insns of the current function and build reload_insn_chain,
4026 and record register life information. */
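/* As a rough picture: for every insn that is not a note or barrier we
   create an insn_chain node whose live_throughout bitmap holds the
   registers live across the insn (ignoring eliminable hard regs) and
   whose dead_or_set bitmap holds registers set or dying here; reload
   then walks this chain instead of recomputing liveness itself.  */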
4027 static void
4028 build_insn_chain (void)
4030 unsigned int i;
4031 struct insn_chain **p = &reload_insn_chain;
4032 basic_block bb;
4033 struct insn_chain *c = NULL;
4034 struct insn_chain *next = NULL;
4035 bitmap live_relevant_regs = BITMAP_ALLOC (NULL);
4036 bitmap elim_regset = BITMAP_ALLOC (NULL);
4037 /* live_subregs is a vector used to keep accurate information about
4038 which hardregs are live in multiword pseudos. live_subregs and
4039 live_subregs_used are indexed by pseudo number. The live_subreg
4040 entry for a particular pseudo is only used if the corresponding
4041 element is nonzero in live_subregs_used.  The sbitmap size of
4042 live_subreg[allocno] is the number of bytes that the pseudo can
4043 occupy. */
4044 sbitmap *live_subregs = XCNEWVEC (sbitmap, max_regno);
4045 bitmap live_subregs_used = BITMAP_ALLOC (NULL);
4047 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
4048 if (TEST_HARD_REG_BIT (eliminable_regset, i))
4049 bitmap_set_bit (elim_regset, i);
4050 FOR_EACH_BB_REVERSE_FN (bb, cfun)
4052 bitmap_iterator bi;
4053 rtx_insn *insn;
4055 CLEAR_REG_SET (live_relevant_regs);
4056 bitmap_clear (live_subregs_used);
4058 EXECUTE_IF_SET_IN_BITMAP (df_get_live_out (bb), 0, i, bi)
4060 if (i >= FIRST_PSEUDO_REGISTER)
4061 break;
4062 bitmap_set_bit (live_relevant_regs, i);
4065 EXECUTE_IF_SET_IN_BITMAP (df_get_live_out (bb),
4066 FIRST_PSEUDO_REGISTER, i, bi)
4068 if (pseudo_for_reload_consideration_p (i))
4069 bitmap_set_bit (live_relevant_regs, i);
4072 FOR_BB_INSNS_REVERSE (bb, insn)
4074 if (!NOTE_P (insn) && !BARRIER_P (insn))
4076 struct df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
4077 df_ref def, use;
4079 c = new_insn_chain ();
4080 c->next = next;
4081 next = c;
4082 *p = c;
4083 p = &c->prev;
4085 c->insn = insn;
4086 c->block = bb->index;
4088 if (NONDEBUG_INSN_P (insn))
4089 FOR_EACH_INSN_INFO_DEF (def, insn_info)
4091 unsigned int regno = DF_REF_REGNO (def);
4093 /* Ignore may clobbers because these are generated
4094 from calls. However, every other kind of def is
4095 added to dead_or_set. */
4096 if (!DF_REF_FLAGS_IS_SET (def, DF_REF_MAY_CLOBBER))
4098 if (regno < FIRST_PSEUDO_REGISTER)
4100 if (!fixed_regs[regno])
4101 bitmap_set_bit (&c->dead_or_set, regno);
4103 else if (pseudo_for_reload_consideration_p (regno))
4104 bitmap_set_bit (&c->dead_or_set, regno);
4107 if ((regno < FIRST_PSEUDO_REGISTER
4108 || reg_renumber[regno] >= 0
4109 || ira_conflicts_p)
4110 && (!DF_REF_FLAGS_IS_SET (def, DF_REF_CONDITIONAL)))
4112 rtx reg = DF_REF_REG (def);
4114 /* We can model subregs, but not if they are
4115 wrapped in ZERO_EXTRACTS. */
4116 if (GET_CODE (reg) == SUBREG
4117 && !DF_REF_FLAGS_IS_SET (def, DF_REF_ZERO_EXTRACT))
4119 unsigned int start = SUBREG_BYTE (reg);
4120 unsigned int last = start
4121 + GET_MODE_SIZE (GET_MODE (reg));
4123 init_live_subregs
4124 (bitmap_bit_p (live_relevant_regs, regno),
4125 live_subregs, live_subregs_used, regno, reg);
4127 if (!DF_REF_FLAGS_IS_SET
4128 (def, DF_REF_STRICT_LOW_PART))
4130 /* Expand the range to cover entire words.
4131 Bytes added here are "don't care". */
4132 start
4133 = start / UNITS_PER_WORD * UNITS_PER_WORD;
4134 last = ((last + UNITS_PER_WORD - 1)
4135 / UNITS_PER_WORD * UNITS_PER_WORD);
4138 /* Ignore the paradoxical bits. */
4139 if (last > SBITMAP_SIZE (live_subregs[regno]))
4140 last = SBITMAP_SIZE (live_subregs[regno]);
4142 while (start < last)
4144 bitmap_clear_bit (live_subregs[regno], start);
4145 start++;
4148 if (bitmap_empty_p (live_subregs[regno]))
4150 bitmap_clear_bit (live_subregs_used, regno);
4151 bitmap_clear_bit (live_relevant_regs, regno);
4153 else
4154 /* Set live_relevant_regs here because
4155 that bit has to be true to get us to
4156 look at the live_subregs fields. */
4157 bitmap_set_bit (live_relevant_regs, regno);
4159 else
4161 /* DF_REF_PARTIAL is generated for
4162 subregs, STRICT_LOW_PART, and
4163 ZERO_EXTRACT. We handle the subreg
4164 case above so here we have to keep from
4165 modeling the def as a killing def. */
4166 if (!DF_REF_FLAGS_IS_SET (def, DF_REF_PARTIAL))
4168 bitmap_clear_bit (live_subregs_used, regno);
4169 bitmap_clear_bit (live_relevant_regs, regno);
4175 bitmap_and_compl_into (live_relevant_regs, elim_regset);
4176 bitmap_copy (&c->live_throughout, live_relevant_regs);
4178 if (NONDEBUG_INSN_P (insn))
4179 FOR_EACH_INSN_INFO_USE (use, insn_info)
4181 unsigned int regno = DF_REF_REGNO (use);
4182 rtx reg = DF_REF_REG (use);
4184 /* DF_REF_READ_WRITE on a use means that this use
4185 is fabricated from a def that is a partial set
4186 to a multiword reg. Here, we only model the
4187 subreg case that is not wrapped in ZERO_EXTRACT
4188 precisely so we do not need to look at the
4189 fabricated use. */
4190 if (DF_REF_FLAGS_IS_SET (use, DF_REF_READ_WRITE)
4191 && !DF_REF_FLAGS_IS_SET (use, DF_REF_ZERO_EXTRACT)
4192 && DF_REF_FLAGS_IS_SET (use, DF_REF_SUBREG))
4193 continue;
4195 /* Add the last use of each var to dead_or_set. */
4196 if (!bitmap_bit_p (live_relevant_regs, regno))
4198 if (regno < FIRST_PSEUDO_REGISTER)
4200 if (!fixed_regs[regno])
4201 bitmap_set_bit (&c->dead_or_set, regno);
4203 else if (pseudo_for_reload_consideration_p (regno))
4204 bitmap_set_bit (&c->dead_or_set, regno);
4207 if (regno < FIRST_PSEUDO_REGISTER
4208 || pseudo_for_reload_consideration_p (regno))
4210 if (GET_CODE (reg) == SUBREG
4211 && !DF_REF_FLAGS_IS_SET (use,
4212 DF_REF_SIGN_EXTRACT
4213 | DF_REF_ZERO_EXTRACT))
4215 unsigned int start = SUBREG_BYTE (reg);
4216 unsigned int last = start
4217 + GET_MODE_SIZE (GET_MODE (reg));
4219 init_live_subregs
4220 (bitmap_bit_p (live_relevant_regs, regno),
4221 live_subregs, live_subregs_used, regno, reg);
4223 /* Ignore the paradoxical bits. */
4224 if (last > SBITMAP_SIZE (live_subregs[regno]))
4225 last = SBITMAP_SIZE (live_subregs[regno]);
4227 while (start < last)
4229 bitmap_set_bit (live_subregs[regno], start);
4230 start++;
4233 else
4234 /* Resetting the live_subregs_used is
4235 effectively saying do not use the subregs
4236 because we are reading the whole
4237 pseudo. */
4238 bitmap_clear_bit (live_subregs_used, regno);
4239 bitmap_set_bit (live_relevant_regs, regno);
4245 /* FIXME!! The following code is a disaster. Reload needs to see the
4246 labels and jump tables that are just hanging out in between
4247 the basic blocks. See pr33676. */
4248 insn = BB_HEAD (bb);
4250 /* Skip over the barriers and cruft. */
4251 while (insn && (BARRIER_P (insn) || NOTE_P (insn)
4252 || BLOCK_FOR_INSN (insn) == bb))
4253 insn = PREV_INSN (insn);
4255 /* While we add anything except barriers and notes, the focus is
4256 to get the labels and jump tables into the
4257 reload_insn_chain. */
4258 while (insn)
4260 if (!NOTE_P (insn) && !BARRIER_P (insn))
4262 if (BLOCK_FOR_INSN (insn))
4263 break;
4265 c = new_insn_chain ();
4266 c->next = next;
4267 next = c;
4268 *p = c;
4269 p = &c->prev;
4271 /* The block makes no sense here, but it is what the old
4272 code did. */
4273 c->block = bb->index;
4274 c->insn = insn;
4275 bitmap_copy (&c->live_throughout, live_relevant_regs);
4277 insn = PREV_INSN (insn);
4281 reload_insn_chain = c;
4282 *p = NULL;
4284 for (i = 0; i < (unsigned int) max_regno; i++)
4285 if (live_subregs[i] != NULL)
4286 sbitmap_free (live_subregs[i]);
4287 free (live_subregs);
4288 BITMAP_FREE (live_subregs_used);
4289 BITMAP_FREE (live_relevant_regs);
4290 BITMAP_FREE (elim_regset);
4292 if (dump_file)
4293 print_insn_chains (dump_file);
4296 /* Examine the rtx found in *LOC, which is read or written to as determined
4297 by TYPE. Return false if we find a reason why an insn containing this
4298 rtx should not be moved (such as accesses to non-constant memory), true
4299 otherwise. */
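/* For instance, (plus:SI (reg:SI 100) (const_int 8)) read as an input
   is considered moveable, whereas reads of non-read-only memory, uses
   of hard registers other than the frame pointer, and UNSPEC_VOLATILE
   patterns are not.  (Register numbers here are only illustrative.)  */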
4300 static bool
4301 rtx_moveable_p (rtx *loc, enum op_type type)
4303 const char *fmt;
4304 rtx x = *loc;
4305 enum rtx_code code = GET_CODE (x);
4306 int i, j;
4308 code = GET_CODE (x);
4309 switch (code)
4311 case CONST:
4312 CASE_CONST_ANY:
4313 case SYMBOL_REF:
4314 case LABEL_REF:
4315 return true;
4317 case PC:
4318 return type == OP_IN;
4320 case CC0:
4321 return false;
4323 case REG:
4324 if (x == frame_pointer_rtx)
4325 return true;
4326 if (HARD_REGISTER_P (x))
4327 return false;
4329 return true;
4331 case MEM:
4332 if (type == OP_IN && MEM_READONLY_P (x))
4333 return rtx_moveable_p (&XEXP (x, 0), OP_IN);
4334 return false;
4336 case SET:
4337 return (rtx_moveable_p (&SET_SRC (x), OP_IN)
4338 && rtx_moveable_p (&SET_DEST (x), OP_OUT));
4340 case STRICT_LOW_PART:
4341 return rtx_moveable_p (&XEXP (x, 0), OP_OUT);
4343 case ZERO_EXTRACT:
4344 case SIGN_EXTRACT:
4345 return (rtx_moveable_p (&XEXP (x, 0), type)
4346 && rtx_moveable_p (&XEXP (x, 1), OP_IN)
4347 && rtx_moveable_p (&XEXP (x, 2), OP_IN));
4349 case CLOBBER:
4350 return rtx_moveable_p (&SET_DEST (x), OP_OUT);
4352 case UNSPEC_VOLATILE:
4353 /* It is a bad idea to consider insns with such rtl
4354 as moveable ones.  The insn scheduler also considers them as barriers
4355 for a reason. */
4356 return false;
4358 default:
4359 break;
4362 fmt = GET_RTX_FORMAT (code);
4363 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
4365 if (fmt[i] == 'e')
4367 if (!rtx_moveable_p (&XEXP (x, i), type))
4368 return false;
4370 else if (fmt[i] == 'E')
4371 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
4373 if (!rtx_moveable_p (&XVECEXP (x, i, j), type))
4374 return false;
4377 return true;
4380 /* A wrapper around dominated_by_p, which uses the information in UID_LUID
4381 to give dominance relationships between two insns I1 and I2. */
4382 static bool
4383 insn_dominated_by_p (rtx i1, rtx i2, int *uid_luid)
4385 basic_block bb1 = BLOCK_FOR_INSN (i1);
4386 basic_block bb2 = BLOCK_FOR_INSN (i2);
4388 if (bb1 == bb2)
4389 return uid_luid[INSN_UID (i2)] < uid_luid[INSN_UID (i1)];
4390 return dominated_by_p (CDI_DOMINATORS, bb1, bb2);
4393 /* Record the range of register numbers added by find_moveable_pseudos. */
4394 int first_moveable_pseudo, last_moveable_pseudo;
4396 /* This vector holds data for every register added by
4397 find_moveable_pseudos, with index 0 holding data for the
4398 first_moveable_pseudo.  */
4399 /* The original home register. */
4400 static vec<rtx> pseudo_replaced_reg;
4402 /* Look for instances where we have an instruction that is known to increase
4403 register pressure, and whose result is not used immediately. If it is
4404 possible to move the instruction downwards to just before its first use,
4405 split its lifetime into two ranges. We create a new pseudo to compute the
4406 value, and emit a move instruction just before the first use. If, after
4407 register allocation, the new pseudo remains unallocated, the function
4408 move_unallocated_pseudos then deletes the move instruction and places
4409 the computation just before the first use.
4411 Such a move is safe and profitable if all the input registers remain live
4412 and unchanged between the original computation and its first use. In such
4413 a situation, the computation is known to increase register pressure, and
4414 moving it is known to at least not worsen it.
4416 We restrict moves to only those cases where a register remains unallocated,
4417 in order to avoid interfering too much with the instruction schedule. As
4418 an exception, we may move insns which only modify their input register
4419 (typically induction variables), as this increases the freedom for our
4420 intended transformation, and does not limit the second instruction
4421 scheduler pass. */
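/* A hypothetical before/after sketch (pseudo numbers are made up):

       before:  (set (reg 100) (plus (reg 101) (reg 102)))
                ... insns not using reg 100 ...
                (set (reg 103) (minus (reg 100) (reg 104)))

       after:   (set (reg 150) (plus (reg 101) (reg 102)))
                ... insns ...
                (set (reg 100) (reg 150))    <-- new move before first use
                (set (reg 103) (minus (reg 100) (reg 104)))

   If reg 150 still has no hard register after allocation,
   move_unallocated_pseudos deletes the move and re-emits the PLUS
   computation, now targeting reg 100, in its place.  */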
4423 static void
4424 find_moveable_pseudos (void)
4426 unsigned i;
4427 int max_regs = max_reg_num ();
4428 int max_uid = get_max_uid ();
4429 basic_block bb;
4430 int *uid_luid = XNEWVEC (int, max_uid);
4431 rtx_insn **closest_uses = XNEWVEC (rtx_insn *, max_regs);
4432 /* A set of registers which are live but not modified throughout a block. */
4433 bitmap_head *bb_transp_live = XNEWVEC (bitmap_head,
4434 last_basic_block_for_fn (cfun));
4435 /* A set of registers which only exist in a given basic block. */
4436 bitmap_head *bb_local = XNEWVEC (bitmap_head,
4437 last_basic_block_for_fn (cfun));
4438 /* A set of registers which are set once, in an instruction that can be
4439 moved freely downwards, but are otherwise transparent to a block. */
4440 bitmap_head *bb_moveable_reg_sets = XNEWVEC (bitmap_head,
4441 last_basic_block_for_fn (cfun));
4442 bitmap_head live, used, set, interesting, unusable_as_input;
4443 bitmap_iterator bi;
4444 bitmap_initialize (&interesting, 0);
4446 first_moveable_pseudo = max_regs;
4447 pseudo_replaced_reg.release ();
4448 pseudo_replaced_reg.safe_grow_cleared (max_regs);
4450 df_analyze ();
4451 calculate_dominance_info (CDI_DOMINATORS);
4453 i = 0;
4454 bitmap_initialize (&live, 0);
4455 bitmap_initialize (&used, 0);
4456 bitmap_initialize (&set, 0);
4457 bitmap_initialize (&unusable_as_input, 0);
4458 FOR_EACH_BB_FN (bb, cfun)
4460 rtx_insn *insn;
4461 bitmap transp = bb_transp_live + bb->index;
4462 bitmap moveable = bb_moveable_reg_sets + bb->index;
4463 bitmap local = bb_local + bb->index;
4465 bitmap_initialize (local, 0);
4466 bitmap_initialize (transp, 0);
4467 bitmap_initialize (moveable, 0);
4468 bitmap_copy (&live, df_get_live_out (bb));
4469 bitmap_and_into (&live, df_get_live_in (bb));
4470 bitmap_copy (transp, &live);
4471 bitmap_clear (moveable);
4472 bitmap_clear (&live);
4473 bitmap_clear (&used);
4474 bitmap_clear (&set);
4475 FOR_BB_INSNS (bb, insn)
4476 if (NONDEBUG_INSN_P (insn))
4478 df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
4479 df_ref def, use;
4481 uid_luid[INSN_UID (insn)] = i++;
4483 def = df_single_def (insn_info);
4484 use = df_single_use (insn_info);
4485 if (use
4486 && def
4487 && DF_REF_REGNO (use) == DF_REF_REGNO (def)
4488 && !bitmap_bit_p (&set, DF_REF_REGNO (use))
4489 && rtx_moveable_p (&PATTERN (insn), OP_IN))
4491 unsigned regno = DF_REF_REGNO (use);
4492 bitmap_set_bit (moveable, regno);
4493 bitmap_set_bit (&set, regno);
4494 bitmap_set_bit (&used, regno);
4495 bitmap_clear_bit (transp, regno);
4496 continue;
4498 FOR_EACH_INSN_INFO_USE (use, insn_info)
4500 unsigned regno = DF_REF_REGNO (use);
4501 bitmap_set_bit (&used, regno);
4502 if (bitmap_clear_bit (moveable, regno))
4503 bitmap_clear_bit (transp, regno);
4506 FOR_EACH_INSN_INFO_DEF (def, insn_info)
4508 unsigned regno = DF_REF_REGNO (def);
4509 bitmap_set_bit (&set, regno);
4510 bitmap_clear_bit (transp, regno);
4511 bitmap_clear_bit (moveable, regno);
4516 bitmap_clear (&live);
4517 bitmap_clear (&used);
4518 bitmap_clear (&set);
4520 FOR_EACH_BB_FN (bb, cfun)
4522 bitmap local = bb_local + bb->index;
4523 rtx_insn *insn;
4525 FOR_BB_INSNS (bb, insn)
4526 if (NONDEBUG_INSN_P (insn))
4528 df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
4529 rtx_insn *def_insn;
4530 rtx closest_use, note;
4531 df_ref def, use;
4532 unsigned regno;
4533 bool all_dominated, all_local;
4534 machine_mode mode;
4536 def = df_single_def (insn_info);
4537 /* There must be exactly one def in this insn. */
4538 if (!def || !single_set (insn))
4539 continue;
4540 /* This must be the only definition of the reg. We also limit
4541 which modes we deal with so that we can assume we can generate
4542 move instructions. */
4543 regno = DF_REF_REGNO (def);
4544 mode = GET_MODE (DF_REF_REG (def));
4545 if (DF_REG_DEF_COUNT (regno) != 1
4546 || !DF_REF_INSN_INFO (def)
4547 || HARD_REGISTER_NUM_P (regno)
4548 || DF_REG_EQ_USE_COUNT (regno) > 0
4549 || (!INTEGRAL_MODE_P (mode) && !FLOAT_MODE_P (mode)))
4550 continue;
4551 def_insn = DF_REF_INSN (def);
4553 for (note = REG_NOTES (def_insn); note; note = XEXP (note, 1))
4554 if (REG_NOTE_KIND (note) == REG_EQUIV && MEM_P (XEXP (note, 0)))
4555 break;
4557 if (note)
4559 if (dump_file)
4560 fprintf (dump_file, "Ignoring reg %d, has equiv memory\n",
4561 regno);
4562 bitmap_set_bit (&unusable_as_input, regno);
4563 continue;
4566 use = DF_REG_USE_CHAIN (regno);
4567 all_dominated = true;
4568 all_local = true;
4569 closest_use = NULL_RTX;
4570 for (; use; use = DF_REF_NEXT_REG (use))
4572 rtx_insn *insn;
4573 if (!DF_REF_INSN_INFO (use))
4575 all_dominated = false;
4576 all_local = false;
4577 break;
4579 insn = DF_REF_INSN (use);
4580 if (DEBUG_INSN_P (insn))
4581 continue;
4582 if (BLOCK_FOR_INSN (insn) != BLOCK_FOR_INSN (def_insn))
4583 all_local = false;
4584 if (!insn_dominated_by_p (insn, def_insn, uid_luid))
4585 all_dominated = false;
4586 if (closest_use != insn && closest_use != const0_rtx)
4588 if (closest_use == NULL_RTX)
4589 closest_use = insn;
4590 else if (insn_dominated_by_p (closest_use, insn, uid_luid))
4591 closest_use = insn;
4592 else if (!insn_dominated_by_p (insn, closest_use, uid_luid))
4593 closest_use = const0_rtx;
4596 if (!all_dominated)
4598 if (dump_file)
4599 fprintf (dump_file, "Reg %d not all uses dominated by set\n",
4600 regno);
4601 continue;
4603 if (all_local)
4604 bitmap_set_bit (local, regno);
4605 if (closest_use == const0_rtx || closest_use == NULL
4606 || next_nonnote_nondebug_insn (def_insn) == closest_use)
4608 if (dump_file)
4609 fprintf (dump_file, "Reg %d uninteresting%s\n", regno,
4610 closest_use == const0_rtx || closest_use == NULL
4611 ? " (no unique first use)" : "");
4612 continue;
4614 if (HAVE_cc0 && reg_referenced_p (cc0_rtx, PATTERN (closest_use)))
4616 if (dump_file)
4617 fprintf (dump_file, "Reg %d: closest user uses cc0\n",
4618 regno);
4619 continue;
4622 bitmap_set_bit (&interesting, regno);
4623 /* If we get here, we know closest_use is a non-NULL insn
4624 (as opposed to const0_rtx).  */
4625 closest_uses[regno] = as_a <rtx_insn *> (closest_use);
4627 if (dump_file && (all_local || all_dominated))
4629 fprintf (dump_file, "Reg %u:", regno);
4630 if (all_local)
4631 fprintf (dump_file, " local to bb %d", bb->index);
4632 if (all_dominated)
4633 fprintf (dump_file, " def dominates all uses");
4634 if (closest_use != const0_rtx)
4635 fprintf (dump_file, " has unique first use");
4636 fputs ("\n", dump_file);
4641 EXECUTE_IF_SET_IN_BITMAP (&interesting, 0, i, bi)
4643 df_ref def = DF_REG_DEF_CHAIN (i);
4644 rtx_insn *def_insn = DF_REF_INSN (def);
4645 basic_block def_block = BLOCK_FOR_INSN (def_insn);
4646 bitmap def_bb_local = bb_local + def_block->index;
4647 bitmap def_bb_moveable = bb_moveable_reg_sets + def_block->index;
4648 bitmap def_bb_transp = bb_transp_live + def_block->index;
4649 bool local_to_bb_p = bitmap_bit_p (def_bb_local, i);
4650 rtx_insn *use_insn = closest_uses[i];
4651 df_ref use;
4652 bool all_ok = true;
4653 bool all_transp = true;
4655 if (!REG_P (DF_REF_REG (def)))
4656 continue;
4658 if (!local_to_bb_p)
4660 if (dump_file)
4661 fprintf (dump_file, "Reg %u not local to one basic block\n",
4663 continue;
4665 if (reg_equiv_init (i) != NULL_RTX)
4667 if (dump_file)
4668 fprintf (dump_file, "Ignoring reg %u with equiv init insn\n",
4670 continue;
4672 if (!rtx_moveable_p (&PATTERN (def_insn), OP_IN))
4674 if (dump_file)
4675 fprintf (dump_file, "Found def insn %d for %d to be not moveable\n",
4676 INSN_UID (def_insn), i);
4677 continue;
4679 if (dump_file)
4680 fprintf (dump_file, "Examining insn %d, def for %d\n",
4681 INSN_UID (def_insn), i);
4682 FOR_EACH_INSN_USE (use, def_insn)
4684 unsigned regno = DF_REF_REGNO (use);
4685 if (bitmap_bit_p (&unusable_as_input, regno))
4687 all_ok = false;
4688 if (dump_file)
4689 fprintf (dump_file, " found unusable input reg %u.\n", regno);
4690 break;
4692 if (!bitmap_bit_p (def_bb_transp, regno))
4694 if (bitmap_bit_p (def_bb_moveable, regno)
4695 && !control_flow_insn_p (use_insn)
4696 && (!HAVE_cc0 || !sets_cc0_p (use_insn)))
4698 if (modified_between_p (DF_REF_REG (use), def_insn, use_insn))
4700 rtx_insn *x = NEXT_INSN (def_insn);
4701 while (!modified_in_p (DF_REF_REG (use), x))
4703 gcc_assert (x != use_insn);
4704 x = NEXT_INSN (x);
4706 if (dump_file)
4707 fprintf (dump_file, " input reg %u modified but insn %d moveable\n",
4708 regno, INSN_UID (x));
4709 emit_insn_after (PATTERN (x), use_insn);
4710 set_insn_deleted (x);
4712 else
4714 if (dump_file)
4715 fprintf (dump_file, " input reg %u modified between def and use\n",
4716 regno);
4717 all_transp = false;
4720 else
4721 all_transp = false;
4724 if (!all_ok)
4725 continue;
4726 if (!dbg_cnt (ira_move))
4727 break;
4728 if (dump_file)
4729 fprintf (dump_file, " all ok%s\n", all_transp ? " and transp" : "");
4731 if (all_transp)
4733 rtx def_reg = DF_REF_REG (def);
4734 rtx newreg = ira_create_new_reg (def_reg);
4735 if (validate_change (def_insn, DF_REF_REAL_LOC (def), newreg, 0))
4737 unsigned nregno = REGNO (newreg);
4738 emit_insn_before (gen_move_insn (def_reg, newreg), use_insn);
4739 nregno -= max_regs;
4740 pseudo_replaced_reg[nregno] = def_reg;
4745 FOR_EACH_BB_FN (bb, cfun)
4747 bitmap_clear (bb_local + bb->index);
4748 bitmap_clear (bb_transp_live + bb->index);
4749 bitmap_clear (bb_moveable_reg_sets + bb->index);
4751 bitmap_clear (&interesting);
4752 bitmap_clear (&unusable_as_input);
4753 free (uid_luid);
4754 free (closest_uses);
4755 free (bb_local);
4756 free (bb_transp_live);
4757 free (bb_moveable_reg_sets);
4759 last_moveable_pseudo = max_reg_num ();
4761 fix_reg_equiv_init ();
4762 expand_reg_info ();
4763 regstat_free_n_sets_and_refs ();
4764 regstat_free_ri ();
4765 regstat_init_n_sets_and_refs ();
4766 regstat_compute_ri ();
4767 free_dominance_info (CDI_DOMINATORS);
4770 /* If the SET pattern SET is an assignment from a hard register to a pseudo which
4771 is live at CALL_DOM (if non-NULL, otherwise this check is omitted), return
4772 the destination. Otherwise return NULL. */
4774 static rtx
4775 interesting_dest_for_shprep_1 (rtx set, basic_block call_dom)
4777 rtx src = SET_SRC (set);
4778 rtx dest = SET_DEST (set);
4779 if (!REG_P (src) || !HARD_REGISTER_P (src)
4780 || !REG_P (dest) || HARD_REGISTER_P (dest)
4781 || (call_dom && !bitmap_bit_p (df_get_live_in (call_dom), REGNO (dest))))
4782 return NULL;
4783 return dest;
4786 /* If insn is interesting for parameter range-splitting shrink-wrapping
4787 preparation, i.e. it is a single set from a hard register to a pseudo, which
4788 is live at CALL_DOM (if non-NULL, otherwise this check is omitted), or a
4789 PARALLEL containing only one such set, return the destination.
4790 Otherwise return NULL. */
4792 static rtx
4793 interesting_dest_for_shprep (rtx_insn *insn, basic_block call_dom)
4795 if (!INSN_P (insn))
4796 return NULL;
4797 rtx pat = PATTERN (insn);
4798 if (GET_CODE (pat) == SET)
4799 return interesting_dest_for_shprep_1 (pat, call_dom);
4801 if (GET_CODE (pat) != PARALLEL)
4802 return NULL;
4803 rtx ret = NULL;
4804 for (int i = 0; i < XVECLEN (pat, 0); i++)
4806 rtx sub = XVECEXP (pat, 0, i);
4807 if (GET_CODE (sub) == USE || GET_CODE (sub) == CLOBBER)
4808 continue;
4809 if (GET_CODE (sub) != SET
4810 || side_effects_p (sub))
4811 return NULL;
4812 rtx dest = interesting_dest_for_shprep_1 (sub, call_dom);
4813 if (dest && ret)
4814 return NULL;
4815 if (dest)
4816 ret = dest;
4818 return ret;
4821 /* Split the live ranges of pseudos that are loaded from hard registers in
4822 the first BB, at a BB that dominates all non-sibling calls, if such a BB
4823 can be found and is not in a loop.  Return true if the function has made
4824 any changes.  */
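/* Hypothetical illustration: suppose the first BB copies an incoming
   argument with (set (reg 100) (reg:SI 5)), hard reg 5 being an
   argument register on some target, and every non-sibling call is
   dominated by a later block CALL_DOM that is outside any loop.  Uses
   of reg 100 dominated by CALL_DOM are then redirected to a fresh
   pseudo initialized by a move emitted at the start of CALL_DOM, so
   the value copied from the hard register has a short live range
   before the calls, which gives the allocator more freedom and helps
   shrink-wrapping.  */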
4826 static bool
4827 split_live_ranges_for_shrink_wrap (void)
4829 basic_block bb, call_dom = NULL;
4830 basic_block first = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
4831 rtx_insn *insn, *last_interesting_insn = NULL;
4832 bitmap_head need_new, reachable;
4833 vec<basic_block> queue;
4835 if (!SHRINK_WRAPPING_ENABLED)
4836 return false;
4838 bitmap_initialize (&need_new, 0);
4839 bitmap_initialize (&reachable, 0);
4840 queue.create (n_basic_blocks_for_fn (cfun));
4842 FOR_EACH_BB_FN (bb, cfun)
4843 FOR_BB_INSNS (bb, insn)
4844 if (CALL_P (insn) && !SIBLING_CALL_P (insn))
4846 if (bb == first)
4848 bitmap_clear (&need_new);
4849 bitmap_clear (&reachable);
4850 queue.release ();
4851 return false;
4854 bitmap_set_bit (&need_new, bb->index);
4855 bitmap_set_bit (&reachable, bb->index);
4856 queue.quick_push (bb);
4857 break;
4860 if (queue.is_empty ())
4862 bitmap_clear (&need_new);
4863 bitmap_clear (&reachable);
4864 queue.release ();
4865 return false;
4868 while (!queue.is_empty ())
4870 edge e;
4871 edge_iterator ei;
4873 bb = queue.pop ();
4874 FOR_EACH_EDGE (e, ei, bb->succs)
4875 if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
4876 && bitmap_set_bit (&reachable, e->dest->index))
4877 queue.quick_push (e->dest);
4879 queue.release ();
4881 FOR_BB_INSNS (first, insn)
4883 rtx dest = interesting_dest_for_shprep (insn, NULL);
4884 if (!dest)
4885 continue;
4887 if (DF_REG_DEF_COUNT (REGNO (dest)) > 1)
4889 bitmap_clear (&need_new);
4890 bitmap_clear (&reachable);
4891 return false;
4894 for (df_ref use = DF_REG_USE_CHAIN (REGNO(dest));
4895 use;
4896 use = DF_REF_NEXT_REG (use))
4898 int ubbi = DF_REF_BB (use)->index;
4899 if (bitmap_bit_p (&reachable, ubbi))
4900 bitmap_set_bit (&need_new, ubbi);
4902 last_interesting_insn = insn;
4905 bitmap_clear (&reachable);
4906 if (!last_interesting_insn)
4908 bitmap_clear (&need_new);
4909 return false;
4912 call_dom = nearest_common_dominator_for_set (CDI_DOMINATORS, &need_new);
4913 bitmap_clear (&need_new);
4914 if (call_dom == first)
4915 return false;
4917 loop_optimizer_init (AVOID_CFG_MODIFICATIONS);
4918 while (bb_loop_depth (call_dom) > 0)
4919 call_dom = get_immediate_dominator (CDI_DOMINATORS, call_dom);
4920 loop_optimizer_finalize ();
4922 if (call_dom == first)
4923 return false;
4925 calculate_dominance_info (CDI_POST_DOMINATORS);
4926 if (dominated_by_p (CDI_POST_DOMINATORS, first, call_dom))
4928 free_dominance_info (CDI_POST_DOMINATORS);
4929 return false;
4931 free_dominance_info (CDI_POST_DOMINATORS);
4933 if (dump_file)
4934 fprintf (dump_file, "Will split live ranges of parameters at BB %i\n",
4935 call_dom->index);
4937 bool ret = false;
4938 FOR_BB_INSNS (first, insn)
4940 rtx dest = interesting_dest_for_shprep (insn, call_dom);
4941 if (!dest || dest == pic_offset_table_rtx)
4942 continue;
4944 rtx newreg = NULL_RTX;
4945 df_ref use, next;
4946 for (use = DF_REG_USE_CHAIN (REGNO (dest)); use; use = next)
4948 rtx_insn *uin = DF_REF_INSN (use);
4949 next = DF_REF_NEXT_REG (use);
4951 basic_block ubb = BLOCK_FOR_INSN (uin);
4952 if (ubb == call_dom
4953 || dominated_by_p (CDI_DOMINATORS, ubb, call_dom))
4955 if (!newreg)
4956 newreg = ira_create_new_reg (dest);
4957 validate_change (uin, DF_REF_REAL_LOC (use), newreg, true);
4961 if (newreg)
4963 rtx_insn *new_move = gen_move_insn (newreg, dest);
4964 emit_insn_after (new_move, bb_note (call_dom));
4965 if (dump_file)
4967 fprintf (dump_file, "Split live-range of register ");
4968 print_rtl_single (dump_file, dest);
4970 ret = true;
4973 if (insn == last_interesting_insn)
4974 break;
4976 apply_change_group ();
4977 return ret;
4980 /* Perform the second half of the transformation started in
4981 find_moveable_pseudos. We look for instances where the newly introduced
4982 pseudo remains unallocated, and remove it by moving the definition to
4983 just before its use, replacing the move instruction generated by
4984 find_moveable_pseudos. */
4985 static void
4986 move_unallocated_pseudos (void)
4988 int i;
4989 for (i = first_moveable_pseudo; i < last_moveable_pseudo; i++)
4990 if (reg_renumber[i] < 0)
4992 int idx = i - first_moveable_pseudo;
4993 rtx other_reg = pseudo_replaced_reg[idx];
4994 rtx_insn *def_insn = DF_REF_INSN (DF_REG_DEF_CHAIN (i));
4995 /* The use must follow all definitions of OTHER_REG, so we can
4996 insert the new definition immediately after any of them. */
4997 df_ref other_def = DF_REG_DEF_CHAIN (REGNO (other_reg));
4998 rtx_insn *move_insn = DF_REF_INSN (other_def);
4999 rtx_insn *newinsn = emit_insn_after (PATTERN (def_insn), move_insn);
5000 rtx set;
5001 int success;
5003 if (dump_file)
5004 fprintf (dump_file, "moving def of %d (insn %d now) ",
5005 REGNO (other_reg), INSN_UID (def_insn));
5007 delete_insn (move_insn);
5008 while ((other_def = DF_REG_DEF_CHAIN (REGNO (other_reg))))
5009 delete_insn (DF_REF_INSN (other_def));
5010 delete_insn (def_insn);
5012 set = single_set (newinsn);
5013 success = validate_change (newinsn, &SET_DEST (set), other_reg, 0);
5014 gcc_assert (success);
5015 if (dump_file)
5016 fprintf (dump_file, " %d) rather than keep unallocated replacement %d\n",
5017 INSN_UID (newinsn), i);
5018 SET_REG_N_REFS (i, 0);
5022 /* If the backend knows where to allocate pseudos for hard
5023 register initial values, register these allocations now. */
5024 static void
5025 allocate_initial_values (void)
5027 if (targetm.allocate_initial_value)
5029 rtx hreg, preg, x;
5030 int i, regno;
5032 for (i = 0; HARD_REGISTER_NUM_P (i); i++)
5034 if (! initial_value_entry (i, &hreg, &preg))
5035 break;
5037 x = targetm.allocate_initial_value (hreg);
5038 regno = REGNO (preg);
5039 if (x && REG_N_SETS (regno) <= 1)
5041 if (MEM_P (x))
5042 reg_equiv_memory_loc (regno) = x;
5043 else
5045 basic_block bb;
5046 int new_regno;
5048 gcc_assert (REG_P (x));
5049 new_regno = REGNO (x);
5050 reg_renumber[regno] = new_regno;
5051 /* Poke the regno right into regno_reg_rtx so that even
5052 fixed regs are accepted. */
5053 SET_REGNO (preg, new_regno);
5054 /* Update global register liveness information. */
5055 FOR_EACH_BB_FN (bb, cfun)
5057 if (REGNO_REG_SET_P (df_get_live_in (bb), regno))
5058 SET_REGNO_REG_SET (df_get_live_in (bb), new_regno);
5059 if (REGNO_REG_SET_P (df_get_live_out (bb), regno))
5060 SET_REGNO_REG_SET (df_get_live_out (bb), new_regno);
5066 gcc_checking_assert (! initial_value_entry (FIRST_PSEUDO_REGISTER,
5067 &hreg, &preg));
5072 /* True when we use LRA instead of the reload pass for the current
5073 function. */
5074 bool ira_use_lra_p;
5076 /* True if we have allocno conflicts. It is false for non-optimized
5077 mode or when the conflict table is too big. */
5078 bool ira_conflicts_p;
5080 /* Saved between IRA and reload. */
5081 static int saved_flag_ira_share_spill_slots;
5083 /* This is the main entry of IRA. */
5084 static void
5085 ira (FILE *f)
5087 bool loops_p;
5088 int ira_max_point_before_emit;
5089 bool saved_flag_caller_saves = flag_caller_saves;
5090 enum ira_region saved_flag_ira_region = flag_ira_region;
5092 clear_bb_flags ();
5094 /* Determine if the current function is a leaf before running IRA
5095 since this can impact optimizations done by the prologue and
5096 epilogue, thus changing register elimination offsets.
5097 Other target callbacks may use crtl->is_leaf too, including
5098 SHRINK_WRAPPING_ENABLED, so initialize as early as possible. */
5099 crtl->is_leaf = leaf_function_p ();
5101 /* Perform target specific PIC register initialization. */
5102 targetm.init_pic_reg ();
5104 ira_conflicts_p = optimize > 0;
5106 /* If there are too many pseudos and/or basic blocks (e.g. 10K
5107 pseudos and 10K blocks or 100K pseudos and 1K blocks), we will
5108 use simplified and faster algorithms in LRA. */
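/* As a sanity check on those numbers: (1 << 26) is 67108864, so with
   10000 basic blocks the threshold below is about 6710 pseudos, and
   with 1000 blocks it is about 67108 pseudos.  */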
5109 lra_simple_p
5110 = (ira_use_lra_p
5111 && max_reg_num () >= (1 << 26) / last_basic_block_for_fn (cfun));
5112 if (lra_simple_p)
5114 /* It permits skipping live range splitting in LRA.  */
5115 flag_caller_saves = false;
5116 /* It makes no sense to do regional allocation when we use
5117 the simplified LRA.  */
5118 flag_ira_region = IRA_REGION_ONE;
5119 ira_conflicts_p = false;
5122 #ifndef IRA_NO_OBSTACK
5123 gcc_obstack_init (&ira_obstack);
5124 #endif
5125 bitmap_obstack_initialize (&ira_bitmap_obstack);
5127 /* LRA uses its own infrastructure to handle caller save registers. */
5128 if (flag_caller_saves && !ira_use_lra_p)
5129 init_caller_save ();
5131 if (flag_ira_verbose < 10)
5133 internal_flag_ira_verbose = flag_ira_verbose;
5134 ira_dump_file = f;
5136 else
5138 internal_flag_ira_verbose = flag_ira_verbose - 10;
5139 ira_dump_file = stderr;
5142 setup_prohibited_mode_move_regs ();
5143 decrease_live_ranges_number ();
5144 df_note_add_problem ();
5146 /* DF_LIVE can't be used in the register allocator, too many other
5147 parts of the compiler depend on using the "classic" liveness
5148 interpretation of the DF_LR problem. See PR38711.
5149 Remove the problem, so that we don't spend time updating it in
5150 any of the df_analyze() calls during IRA/LRA. */
5151 if (optimize > 1)
5152 df_remove_problem (df_live);
5153 gcc_checking_assert (df_live == NULL);
5155 if (flag_checking)
5156 df->changeable_flags |= DF_VERIFY_SCHEDULED;
5158 df_analyze ();
5160 init_reg_equiv ();
5161 if (ira_conflicts_p)
5163 calculate_dominance_info (CDI_DOMINATORS);
5165 if (split_live_ranges_for_shrink_wrap ())
5166 df_analyze ();
5168 free_dominance_info (CDI_DOMINATORS);
5171 df_clear_flags (DF_NO_INSN_RESCAN);
5173 indirect_jump_optimize ();
5174 if (delete_trivially_dead_insns (get_insns (), max_reg_num ()))
5175 df_analyze ();
5177 regstat_init_n_sets_and_refs ();
5178 regstat_compute_ri ();
5180 /* If we are not optimizing, then this is the only place before
5181 register allocation where dataflow is done. And that is needed
5182 to generate these warnings. */
5183 if (warn_clobbered)
5184 generate_setjmp_warnings ();
5186 if (resize_reg_info () && flag_ira_loop_pressure)
5187 ira_set_pseudo_classes (true, ira_dump_file);
5189 init_alias_analysis ();
5190 loop_optimizer_init (AVOID_CFG_MODIFICATIONS);
5191 reg_equiv = XCNEWVEC (struct equivalence, max_reg_num ());
5192 update_equiv_regs ();
5194 /* Don't move insns if live range shrinkage or register
5195 pressure-sensitive scheduling was done, because moving insns will not
5196 improve allocation but will likely worsen insn scheduling.  */
5197 if (optimize
5198 && !flag_live_range_shrinkage
5199 && !(flag_sched_pressure && flag_schedule_insns))
5200 combine_and_move_insns ();
5202 /* Gather additional equivalences with memory. */
5203 if (optimize)
5204 add_store_equivs ();
5206 loop_optimizer_finalize ();
5207 free_dominance_info (CDI_DOMINATORS);
5208 end_alias_analysis ();
5209 free (reg_equiv);
5211 setup_reg_equiv ();
5212 grow_reg_equivs ();
5213 setup_reg_equiv_init ();
5215 allocated_reg_info_size = max_reg_num ();
5217 /* It is not worth doing such an improvement when we use simple
5218 allocation because of -O0 usage or because the function is too
5219 big.  */
5220 if (ira_conflicts_p)
5221 find_moveable_pseudos ();
5223 max_regno_before_ira = max_reg_num ();
5224 ira_setup_eliminable_regset ();
5226 ira_overall_cost = ira_reg_cost = ira_mem_cost = 0;
5227 ira_load_cost = ira_store_cost = ira_shuffle_cost = 0;
5228 ira_move_loops_num = ira_additional_jumps_num = 0;
5230 ira_assert (current_loops == NULL);
5231 if (flag_ira_region == IRA_REGION_ALL || flag_ira_region == IRA_REGION_MIXED)
5232 loop_optimizer_init (AVOID_CFG_MODIFICATIONS | LOOPS_HAVE_RECORDED_EXITS);
5234 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
5235 fprintf (ira_dump_file, "Building IRA IR\n");
5236 loops_p = ira_build ();
5238 ira_assert (ira_conflicts_p || !loops_p);
5240 saved_flag_ira_share_spill_slots = flag_ira_share_spill_slots;
5241 if (too_high_register_pressure_p () || cfun->calls_setjmp)
5242 /* It just wastes the compiler's time to pack spilled pseudos into
5243 stack slots in this case -- prohibit it.  We also do this if
5244 there is a setjmp call, because the compiler is required to
5245 preserve the value of a variable not modified between setjmp
5246 and longjmp, and sharing slots does not guarantee that.  */
5247 flag_ira_share_spill_slots = FALSE;
5249 ira_color ();
5251 ira_max_point_before_emit = ira_max_point;
5253 ira_initiate_emit_data ();
5255 ira_emit (loops_p);
5257 max_regno = max_reg_num ();
5258 if (ira_conflicts_p)
5260 if (! loops_p)
5262 if (! ira_use_lra_p)
5263 ira_initiate_assign ();
5265 else
5267 expand_reg_info ();
5269 if (ira_use_lra_p)
5271 ira_allocno_t a;
5272 ira_allocno_iterator ai;
5274 FOR_EACH_ALLOCNO (a, ai)
5276 int old_regno = ALLOCNO_REGNO (a);
5277 int new_regno = REGNO (ALLOCNO_EMIT_DATA (a)->reg);
5279 ALLOCNO_REGNO (a) = new_regno;
5281 if (old_regno != new_regno)
5282 setup_reg_classes (new_regno, reg_preferred_class (old_regno),
5283 reg_alternate_class (old_regno),
5284 reg_allocno_class (old_regno));
5288 else
5290 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
5291 fprintf (ira_dump_file, "Flattening IR\n");
5292 ira_flattening (max_regno_before_ira, ira_max_point_before_emit);
5294 /* New insns were generated: add notes and recalculate live
5295 info. */
5296 df_analyze ();
5298 /* ??? Rebuild the loop tree, but why? Does the loop tree
5299 change if new insns were generated? Can that be handled
5300 by updating the loop tree incrementally? */
5301 loop_optimizer_finalize ();
5302 free_dominance_info (CDI_DOMINATORS);
5303 loop_optimizer_init (AVOID_CFG_MODIFICATIONS
5304 | LOOPS_HAVE_RECORDED_EXITS);
5306 if (! ira_use_lra_p)
5308 setup_allocno_assignment_flags ();
5309 ira_initiate_assign ();
5310 ira_reassign_conflict_allocnos (max_regno);
5315 ira_finish_emit_data ();
5317 setup_reg_renumber ();
5319 calculate_allocation_cost ();
5321 #ifdef ENABLE_IRA_CHECKING
5322 if (ira_conflicts_p)
5323 check_allocation ();
5324 #endif
5326 if (max_regno != max_regno_before_ira)
5328 regstat_free_n_sets_and_refs ();
5329 regstat_free_ri ();
5330 regstat_init_n_sets_and_refs ();
5331 regstat_compute_ri ();
5334 overall_cost_before = ira_overall_cost;
5335 if (! ira_conflicts_p)
5336 grow_reg_equivs ();
5337 else
5339 fix_reg_equiv_init ();
5341 #ifdef ENABLE_IRA_CHECKING
5342 print_redundant_copies ();
5343 #endif
5344 if (! ira_use_lra_p)
5346 ira_spilled_reg_stack_slots_num = 0;
5347 ira_spilled_reg_stack_slots
5348 = ((struct ira_spilled_reg_stack_slot *)
5349 ira_allocate (max_regno
5350 * sizeof (struct ira_spilled_reg_stack_slot)));
5351 memset (ira_spilled_reg_stack_slots, 0,
5352 max_regno * sizeof (struct ira_spilled_reg_stack_slot));
5355 allocate_initial_values ();
5357 /* See comment for find_moveable_pseudos call. */
5358 if (ira_conflicts_p)
5359 move_unallocated_pseudos ();
5361 /* Restore original values. */
5362 if (lra_simple_p)
5364 flag_caller_saves = saved_flag_caller_saves;
5365 flag_ira_region = saved_flag_ira_region;
5369 static void
5370 do_reload (void)
5372 basic_block bb;
5373 bool need_dce;
5374 unsigned pic_offset_table_regno = INVALID_REGNUM;
5376 if (flag_ira_verbose < 10)
5377 ira_dump_file = dump_file;
5379 /* If pic_offset_table_rtx is a pseudo register, then keep it a
5380 pseudo after reload to avoid possible wrong usages of the hard
5381 reg assigned to it.  */
5382 if (pic_offset_table_rtx
5383 && REGNO (pic_offset_table_rtx) >= FIRST_PSEUDO_REGISTER)
5384 pic_offset_table_regno = REGNO (pic_offset_table_rtx);
5386 timevar_push (TV_RELOAD);
5387 if (ira_use_lra_p)
5389 if (current_loops != NULL)
5391 loop_optimizer_finalize ();
5392 free_dominance_info (CDI_DOMINATORS);
5394 FOR_ALL_BB_FN (bb, cfun)
5395 bb->loop_father = NULL;
5396 current_loops = NULL;
5398 ira_destroy ();
5400 lra (ira_dump_file);
5401 /* ???!!! Move it before lra () when we use ira_reg_equiv in
5402 LRA. */
5403 vec_free (reg_equivs);
5404 reg_equivs = NULL;
5405 need_dce = false;
5407 else
5409 df_set_flags (DF_NO_INSN_RESCAN);
5410 build_insn_chain ();
5412 need_dce = reload (get_insns (), ira_conflicts_p);
5415 timevar_pop (TV_RELOAD);
5417 timevar_push (TV_IRA);
5419 if (ira_conflicts_p && ! ira_use_lra_p)
5421 ira_free (ira_spilled_reg_stack_slots);
5422 ira_finish_assign ();
5425 if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL
5426 && overall_cost_before != ira_overall_cost)
5427 fprintf (ira_dump_file, "+++Overall after reload %" PRId64 "\n",
5428 ira_overall_cost);
5430 flag_ira_share_spill_slots = saved_flag_ira_share_spill_slots;
5432 if (! ira_use_lra_p)
5434 ira_destroy ();
5435 if (current_loops != NULL)
5437 loop_optimizer_finalize ();
5438 free_dominance_info (CDI_DOMINATORS);
5440 FOR_ALL_BB_FN (bb, cfun)
5441 bb->loop_father = NULL;
5442 current_loops = NULL;
5444 regstat_free_ri ();
5445 regstat_free_n_sets_and_refs ();
5448 if (optimize)
5449 cleanup_cfg (CLEANUP_EXPENSIVE);
5451 finish_reg_equiv ();
5453 bitmap_obstack_release (&ira_bitmap_obstack);
5454 #ifndef IRA_NO_OBSTACK
5455 obstack_free (&ira_obstack, NULL);
5456 #endif
5458 /* The code after the reload has changed so much that at this point
5459 we might as well just rescan everything. Note that
5460 df_rescan_all_insns is not going to help here because it does not
5461 touch the artificial uses and defs. */
5462 df_finish_pass (true);
5463 df_scan_alloc (NULL);
5464 df_scan_blocks ();
5466 if (optimize > 1)
5468 df_live_add_problem ();
5469 df_live_set_all_dirty ();
5472 if (optimize)
5473 df_analyze ();
5475 if (need_dce && optimize)
5476 run_fast_dce ();
5478 /* Diagnose uses of the hard frame pointer when it is used as a global
5479 register. Often we can get away with letting the user appropriate
5480 the frame pointer, but we should let them know when code generation
5481 makes that impossible. */
5482 if (global_regs[HARD_FRAME_POINTER_REGNUM] && frame_pointer_needed)
5484 tree decl = global_regs_decl[HARD_FRAME_POINTER_REGNUM];
5485 error_at (DECL_SOURCE_LOCATION (current_function_decl),
5486 "frame pointer required, but reserved");
5487 inform (DECL_SOURCE_LOCATION (decl), "for %qD", decl);
5490 /* If we are doing generic stack checking, give a warning if this
5491 function's frame size is larger than we expect. */
5492 if (flag_stack_check == GENERIC_STACK_CHECK)
5494 HOST_WIDE_INT size = get_frame_size () + STACK_CHECK_FIXED_FRAME_SIZE;
5496 for (int i = 0; i < FIRST_PSEUDO_REGISTER; i++)
5497 if (df_regs_ever_live_p (i) && !fixed_regs[i] && call_used_regs[i])
5498 size += UNITS_PER_WORD;
5500 if (size > STACK_CHECK_MAX_FRAME_SIZE)
5501 warning (0, "frame size too large for reliable stack checking");
5504 if (pic_offset_table_regno != INVALID_REGNUM)
5505 pic_offset_table_rtx = gen_rtx_REG (Pmode, pic_offset_table_regno);
5507 timevar_pop (TV_IRA);
5510 /* Run the integrated register allocator. */
5512 namespace {
5514 const pass_data pass_data_ira =
5516 RTL_PASS, /* type */
5517 "ira", /* name */
5518 OPTGROUP_NONE, /* optinfo_flags */
5519 TV_IRA, /* tv_id */
5520 0, /* properties_required */
5521 0, /* properties_provided */
5522 0, /* properties_destroyed */
5523 0, /* todo_flags_start */
5524 TODO_do_not_ggc_collect, /* todo_flags_finish */
5527 class pass_ira : public rtl_opt_pass
5529 public:
5530 pass_ira (gcc::context *ctxt)
5531 : rtl_opt_pass (pass_data_ira, ctxt)
5534 /* opt_pass methods: */
5535 virtual bool gate (function *)
5537 return !targetm.no_register_allocation;
5539 virtual unsigned int execute (function *)
5541 ira (dump_file);
5542 return 0;
5545 }; // class pass_ira
5547 } // anon namespace
5549 rtl_opt_pass *
5550 make_pass_ira (gcc::context *ctxt)
5552 return new pass_ira (ctxt);
5555 namespace {
5557 const pass_data pass_data_reload =
5559 RTL_PASS, /* type */
5560 "reload", /* name */
5561 OPTGROUP_NONE, /* optinfo_flags */
5562 TV_RELOAD, /* tv_id */
5563 0, /* properties_required */
5564 0, /* properties_provided */
5565 0, /* properties_destroyed */
5566 0, /* todo_flags_start */
5567 0, /* todo_flags_finish */
5570 class pass_reload : public rtl_opt_pass
5572 public:
5573 pass_reload (gcc::context *ctxt)
5574 : rtl_opt_pass (pass_data_reload, ctxt)
5577 /* opt_pass methods: */
5578 virtual bool gate (function *)
5580 return !targetm.no_register_allocation;
5582 virtual unsigned int execute (function *)
5584 do_reload ();
5585 return 0;
5588 }; // class pass_reload
5590 } // anon namespace
5592 rtl_opt_pass *
5593 make_pass_reload (gcc::context *ctxt)
5595 return new pass_reload (ctxt);