/* Instruction scheduling pass.
   Copyright (C) 1992, 93-97, 1998 Free Software Foundation, Inc.
   Contributed by Michael Tiemann (tiemann@cygnus.com)
   Enhanced by, and currently maintained by, Jim Wilson (wilson@cygnus.com)

This file is part of GNU CC.

GNU CC is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.

GNU CC is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with GNU CC; see the file COPYING.  If not, write to
the Free Software Foundation, 59 Temple Place - Suite 330,
Boston, MA 02111-1307, USA.  */

/* Instruction scheduling pass.

   This pass implements list scheduling within basic blocks.  It is
   run after flow analysis, but before register allocation.  The
   scheduler works as follows:

   We compute insn priorities based on data dependencies.  Flow
   analysis only creates a fraction of the data-dependencies we must
   observe: namely, only those dependencies which the combiner can be
   expected to use.  For this pass, we must therefore create the
   remaining dependencies we need to observe: register dependencies,
   memory dependencies, dependencies to keep function calls in order,
   and the dependence between a conditional branch and the setting of
   condition codes are all dealt with here.

   The scheduler first traverses the data flow graph, starting with
   the last instruction, and proceeding to the first, assigning
   values to insn_priority as it goes.  This sorts the instructions
   topologically by data dependence.

   Once priorities have been established, we order the insns using
   list scheduling.  This works as follows: starting with a list of
   all the ready insns, and sorted according to priority number, we
   schedule the insn from the end of the list by placing its
   predecessors in the list according to their priority order.  We
   consider this insn scheduled by setting the pointer to the "end" of
   the list to point to the previous insn.  When an insn has no
   predecessors, we either queue it until sufficient time has elapsed
   or add it to the ready list.  As the instructions are scheduled or
   when stalls are introduced, the queue advances and dumps insns into
   the ready list.  When all insns down to the lowest priority have
   been scheduled, the critical path of the basic block has been made
   as short as possible.  The remaining insns are then scheduled in
   remaining slots.

   Function unit conflicts are resolved during reverse list scheduling
   by tracking the time when each insn is committed to the schedule
   and from that, the time the function units it uses must be free.
   As insns on the ready list are considered for scheduling, those
   that would result in a blockage of the already committed insns are
   queued until no blockage will result.  Among the remaining insns on
   the ready list to be considered, the first one with the largest
   potential for causing a subsequent blockage is chosen.

   The following list shows the order in which we want to break ties
   among insns in the ready list:

   1.  choose insn with lowest conflict cost, ties broken by
   2.  choose insn with the longest path to end of bb, ties broken by
   3.  choose insn that kills the most registers, ties broken by
   4.  choose insn that conflicts with the most ready insns, or finally
   5.  choose insn with lowest UID.

   Memory references complicate matters.  Only if we can be certain
   that memory references are not part of the data dependency graph
   (via true, anti, or output dependence), can we move operations past
   memory references.  To first approximation, reads can be done
   independently, while writes introduce dependencies.  Better
   approximations will yield fewer dependencies.

   Dependencies set up by memory references are treated in exactly the
   same way as other dependencies, by using LOG_LINKS.

   Having optimized the critical path, we may have also unduly
   extended the lifetimes of some registers.  If an operation requires
   that constants be loaded into registers, it is certainly desirable
   to load those constants as early as necessary, but no earlier.
   I.e., it will not do to load up a bunch of registers at the
   beginning of a basic block only to use them at the end, if they
   could be loaded later, since this may result in excessive register
   utilization.

   Note that since branches are never in basic blocks, but only end
   basic blocks, this pass will not do any branch scheduling.  But
   that is ok, since we can use GNU's delayed branch scheduling
   pass to take care of this case.

   Also note that no further optimizations based on algebraic identities
   are performed, so this pass would be a good one to perform instruction
   splitting, such as breaking up a multiply instruction into shifts
   and adds where that is profitable.

   Given the memory aliasing analysis that this pass should perform,
   it should be possible to remove redundant stores to memory, and to
   load values from registers instead of hitting memory.

   This pass must update information that subsequent passes expect to be
   correct.  Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
   reg_n_calls_crossed, and reg_live_length.  Also, basic_block_head,
   basic_block_end.

   The information in the line number notes is carefully retained by
   this pass.  Notes that refer to the starting and ending of
   exception regions are also carefully retained by this pass.  All
   other NOTE insns are grouped in their same relative order at the
   beginning of basic blocks that have been scheduled.  */

#include "config.h"
#include "system.h"
#include "rtl.h"
#include "basic-block.h"
#include "regs.h"
#include "hard-reg-set.h"
#include "flags.h"
#include "insn-config.h"
#include "insn-attr.h"

#ifdef INSN_SCHEDULING
/* Arrays set up by scheduling for the same respective purposes as
   similar-named arrays set up by flow analysis.  We work with these
   arrays during the scheduling pass so we can compare values against
   unscheduled code.

   Values of these arrays are copied at the end of this pass into the
   arrays set up by flow analysis.  */
static int *sched_reg_n_calls_crossed;
static int *sched_reg_live_length;

/* Element N is the next insn that sets (hard or pseudo) register
   N within the current basic block; or zero, if there is no
   such insn.  Needed for new registers which may be introduced
   by splitting insns.  */
static rtx *reg_last_uses;
static rtx *reg_last_sets;
static regset reg_pending_sets;
static int reg_pending_sets_all;

/* Vector indexed by INSN_UID giving the original ordering of the insns.  */
static int *insn_luid;
#define INSN_LUID(INSN) (insn_luid[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving each instruction a priority.  */
static int *insn_priority;
#define INSN_PRIORITY(INSN) (insn_priority[INSN_UID (INSN)])

static short *insn_costs;
#define INSN_COST(INSN) insn_costs[INSN_UID (INSN)]

/* Vector indexed by INSN_UID giving an encoding of the function units
   used.  */
static short *insn_units;
#define INSN_UNIT(INSN) insn_units[INSN_UID (INSN)]

/* Vector indexed by INSN_UID giving an encoding of the blockage range
   function.  The unit and the range are encoded.  */
static unsigned int *insn_blockage;
#define INSN_BLOCKAGE(INSN) insn_blockage[INSN_UID (INSN)]
#define UNIT_BITS 5
#define BLOCKAGE_MASK ((1 << BLOCKAGE_BITS) - 1)
#define ENCODE_BLOCKAGE(U,R) \
  ((((U) << UNIT_BITS) << BLOCKAGE_BITS \
    | MIN_BLOCKAGE_COST (R)) << BLOCKAGE_BITS \
   | MAX_BLOCKAGE_COST (R))
#define UNIT_BLOCKED(B) ((B) >> (2 * BLOCKAGE_BITS))
#define BLOCKAGE_RANGE(B) \
  (((((B) >> BLOCKAGE_BITS) & BLOCKAGE_MASK) << (HOST_BITS_PER_INT / 2)) \
   | ((B) & BLOCKAGE_MASK))

/* Encodings of the `<name>_unit_blockage_range' function.  */
#define MIN_BLOCKAGE_COST(R) ((R) >> (HOST_BITS_PER_INT / 2))
#define MAX_BLOCKAGE_COST(R) ((R) & ((1 << (HOST_BITS_PER_INT / 2)) - 1))

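/* To make the packing concrete, here is a round trip through the macros
   above, fenced off with #if 0 as a sketch only.  It assumes just that
   BLOCKAGE_BITS, supplied by the genattr-generated insn-attr.h, is large
   enough to hold each cost, and that a range value packs the minimum
   cost in the upper half of an int and the maximum in the lower half,
   as MIN_BLOCKAGE_COST and MAX_BLOCKAGE_COST expect.  */
#if 0
/* Pack a blockage range of [2, 5] cycles the way a
   `<name>_unit_blockage_range' function would return it.  */
unsigned int range = (2 << (HOST_BITS_PER_INT / 2)) | 5;

/* Cache it for unit 3, biased by one as blockage_range does below.  */
unsigned int b = ENCODE_BLOCKAGE (3 + 1, range);

/* Decoding recovers the original range and its two halves:
   BLOCKAGE_RANGE (b) == range,
   MIN_BLOCKAGE_COST (BLOCKAGE_RANGE (b)) == 2,
   MAX_BLOCKAGE_COST (BLOCKAGE_RANGE (b)) == 5.  */
#endif
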
#define DONE_PRIORITY -1
#define MAX_PRIORITY 0x7fffffff
#define TAIL_PRIORITY 0x7ffffffe
#define LAUNCH_PRIORITY 0x7f000001
#define DONE_PRIORITY_P(INSN) (INSN_PRIORITY (INSN) < 0)
#define LOW_PRIORITY_P(INSN) ((INSN_PRIORITY (INSN) & 0x7f000000) == 0)

/* Vector indexed by INSN_UID giving number of insns referring to this insn.  */
static int *insn_ref_count;
#define INSN_REF_COUNT(INSN) (insn_ref_count[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving line-number note in effect for each
   insn.  For line-number notes, this indicates whether the note may be
   reused.  */
static rtx *line_note;
#define LINE_NOTE(INSN) (line_note[INSN_UID (INSN)])

/* Vector indexed by basic block number giving the starting line-number
   for each basic block.  */
static rtx *line_note_head;

/* List of important notes we must keep around.  This is a pointer to the
   last element in the list.  */
static rtx note_list;

/* Regsets telling whether a given register is live or dead before the last
   scheduled insn.  Must scan the instructions once before scheduling to
   determine what registers are live or dead at the end of the block.  */
static regset bb_dead_regs;
static regset bb_live_regs;

/* Regset telling whether a given register is live after the insn currently
   being scheduled.  Before processing an insn, this is equal to bb_live_regs
   above.  This is used so that we can find registers that are newly born/dead
   after processing an insn.  */
static regset old_live_regs;

/* The chain of REG_DEAD notes.  REG_DEAD notes are removed from all insns
   during the initial scan and reused later.  If there are not exactly as
   many REG_DEAD notes in the post scheduled code as there were in the
   prescheduled code then we trigger an abort because this indicates a bug.  */
static rtx dead_notes;

/* Queues, etc.  */

/* An instruction is ready to be scheduled when all insns following it
   have already been scheduled.  It is important to ensure that all
   insns which use its result will not be executed until its result
   has been computed.  An insn is maintained in one of four structures:

   (P) the "Pending" set of insns which cannot be scheduled until
   their dependencies have been satisfied.
   (Q) the "Queued" set of insns that can be scheduled when sufficient
   time has passed.
   (R) the "Ready" list of unscheduled, uncommitted insns.
   (S) the "Scheduled" list of insns.

   Initially, all insns are either "Pending" or "Ready" depending on
   whether their dependencies are satisfied.

   Insns move from the "Ready" list to the "Scheduled" list as they
   are committed to the schedule.  As this occurs, the insns in the
   "Pending" list have their dependencies satisfied and move to either
   the "Ready" list or the "Queued" set depending on whether
   sufficient time has passed to make them ready.  As time passes,
   insns move from the "Queued" set to the "Ready" list.  Insns may
   move from the "Ready" list to the "Queued" set if they are blocked
   due to a function unit conflict.

   The "Pending" list (P) consists of the insns in the LOG_LINKS of the
   unscheduled insns, i.e., those that are ready, queued, and pending.
   The "Queued" set (Q) is implemented by the variable `insn_queue'.
   The "Ready" list (R) is implemented by the variables `ready' and
   `n_ready'.
   The "Scheduled" list (S) is the new insn chain built by this pass.

   The transition (R->S) is implemented in the scheduling loop in
   `schedule_block' when the best insn to schedule is chosen.
   The transition (R->Q) is implemented in `schedule_select' when an
   insn is found to have a function unit conflict with the already
   committed insns.
   The transitions (P->R and P->Q) are implemented in `schedule_insn' as
   insns move from the ready list to the scheduled list.
   The transition (Q->R) is implemented at the top of the scheduling
   loop in `schedule_block' as time passes or stalls are introduced.  */

/* Implement a circular buffer to delay instructions until sufficient
   time has passed.  INSN_QUEUE_SIZE is a power of two larger than
   MAX_BLOCKAGE and MAX_READY_COST computed by genattr.c.  This is the
   longest time an insn may be queued.  */
static rtx insn_queue[INSN_QUEUE_SIZE];
static int q_ptr = 0;
static int q_size = 0;
#define NEXT_Q(X) (((X)+1) & (INSN_QUEUE_SIZE-1))
#define NEXT_Q_AFTER(X,C) (((X)+C) & (INSN_QUEUE_SIZE-1))

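/* The queue above is driven by `queue_insn' and the scheduling loop in
   `schedule_block'; the sketch below (fenced off with #if 0, variables
   hypothetical) shows the intended mechanics.  Slot NEXT_Q_AFTER (q_ptr, N)
   holds the insns that become ready N clock ticks from now, and because
   INSN_QUEUE_SIZE is a power of two larger than any possible delay, the
   masked increment never wraps onto a slot still holding pending insns.  */
#if 0
rtx insn;	/* insn to delay (hypothetical) */
int n_cycles;	/* number of ticks to delay it (hypothetical) */
rtx link;

/* Delay INSN for N_CYCLES ticks: link it into the slot for that tick,
   using the same INSN_LIST allocation idiom as add_dependence.  */
link = rtx_alloc (INSN_LIST);
XEXP (link, 0) = insn;
XEXP (link, 1) = insn_queue[NEXT_Q_AFTER (q_ptr, n_cycles)];
insn_queue[NEXT_Q_AFTER (q_ptr, n_cycles)] = link;
q_size += 1;

/* Advance the clock one tick: every insn in the new slot is now ready.  */
q_ptr = NEXT_Q (q_ptr);
for (link = insn_queue[q_ptr]; link; link = XEXP (link, 1))
  q_size -= 1;	/* and move XEXP (link, 0) to the ready list */
insn_queue[q_ptr] = 0;
#endif
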
/* Vector indexed by INSN_UID giving the minimum clock tick at which
   the insn becomes ready.  This is used to note timing constraints for
   insns in the pending list.  */
static int *insn_tick;
#define INSN_TICK(INSN) (insn_tick[INSN_UID (INSN)])

/* Data structure for keeping track of register information
   during that register's life.  */

struct sometimes
{
  int regno;
  int live_length;
  int calls_crossed;
};

/* Forward declarations.  */
static rtx canon_rtx PROTO((rtx));
static int rtx_equal_for_memref_p PROTO((rtx, rtx));
static rtx find_symbolic_term PROTO((rtx));
static int memrefs_conflict_p PROTO((int, rtx, int, rtx,
				     HOST_WIDE_INT));
static void add_dependence PROTO((rtx, rtx, enum reg_note));
static void remove_dependence PROTO((rtx, rtx));
static rtx find_insn_list PROTO((rtx, rtx));
static int insn_unit PROTO((rtx));
static unsigned int blockage_range PROTO((int, rtx));
static void clear_units PROTO((void));
static void prepare_unit PROTO((int));
static int actual_hazard_this_instance PROTO((int, int, rtx, int, int));
static void schedule_unit PROTO((int, rtx, int));
static int actual_hazard PROTO((int, rtx, int, int));
static int potential_hazard PROTO((int, rtx, int));
static int insn_cost PROTO((rtx, rtx, rtx));
static int priority PROTO((rtx));
static void free_pending_lists PROTO((void));
static void add_insn_mem_dependence PROTO((rtx *, rtx *, rtx, rtx));
static void flush_pending_lists PROTO((rtx, int));
static void sched_analyze_1 PROTO((rtx, rtx));
static void sched_analyze_2 PROTO((rtx, rtx));
static void sched_analyze_insn PROTO((rtx, rtx, rtx));
static int sched_analyze PROTO((rtx, rtx));
static void sched_note_set PROTO((int, rtx, int));
static int rank_for_schedule PROTO((const GENERIC_PTR,
				    const GENERIC_PTR));
static void swap_sort PROTO((rtx *, int));
static void queue_insn PROTO((rtx, int));
static int birthing_insn_p PROTO((rtx));
static void adjust_priority PROTO((rtx));
static int schedule_insn PROTO((rtx, rtx *, int, int));
static int schedule_select PROTO((rtx *, int, int, FILE *));
static void create_reg_dead_note PROTO((rtx, rtx));
static void attach_deaths PROTO((rtx, rtx, int));
static void attach_deaths_insn PROTO((rtx));
static rtx unlink_notes PROTO((rtx, rtx));
static int new_sometimes_live PROTO((struct sometimes *, int, int));
static void finish_sometimes_live PROTO((struct sometimes *, int));
static rtx reemit_notes PROTO((rtx, rtx));
static void schedule_block PROTO((int, FILE *));
static rtx regno_use_in PROTO((int, rtx));
static void split_hard_reg_notes PROTO((rtx, rtx, rtx, rtx));
static void new_insn_dead_notes PROTO((rtx, rtx, rtx, rtx));
static void update_n_sets PROTO((rtx, int));
static void update_flow_info PROTO((rtx, rtx, rtx, rtx));

/* Main entry point of this file.  */
void schedule_insns PROTO((FILE *));

#endif /* INSN_SCHEDULING */

#define SIZE_FOR_MODE(X) (GET_MODE_SIZE (GET_MODE (X)))

/* Vector indexed by N giving the initial (unchanging) value known
   for pseudo-register N.  */
static rtx *reg_known_value;

/* Vector recording for each reg_known_value whether it is due to a
   REG_EQUIV note.  Future passes (viz., reload) may replace the
   pseudo with the equivalent expression and so we account for the
   dependences that would be introduced if that happens.  */
/* ??? This is a problem only on the Convex.  The REG_EQUIV notes created in
   assign_parms mention the arg pointer, and there are explicit insns in the
   RTL that modify the arg pointer.  Thus we must ensure that such insns don't
   get scheduled across each other because that would invalidate the REG_EQUIV
   notes.  One could argue that the REG_EQUIV notes are wrong, but solving
   the problem in the scheduler will likely give better code, so we do it
   here.  */
static char *reg_known_equiv_p;

/* Indicates number of valid entries in reg_known_value.  */
static int reg_known_value_size;

static rtx
canon_rtx (x)
     rtx x;
{
  /* Recursively look for equivalences.  */
  if (GET_CODE (x) == REG && REGNO (x) >= FIRST_PSEUDO_REGISTER
      && REGNO (x) <= reg_known_value_size)
    return reg_known_value[REGNO (x)] == x
      ? x : canon_rtx (reg_known_value[REGNO (x)]);
  else if (GET_CODE (x) == PLUS)
    {
      rtx x0 = canon_rtx (XEXP (x, 0));
      rtx x1 = canon_rtx (XEXP (x, 1));

      if (x0 != XEXP (x, 0) || x1 != XEXP (x, 1))
	{
	  /* We can tolerate LO_SUMs being offset here; these
	     rtl are used for nothing other than comparisons.  */
	  if (GET_CODE (x0) == CONST_INT)
	    return plus_constant_for_output (x1, INTVAL (x0));
	  else if (GET_CODE (x1) == CONST_INT)
	    return plus_constant_for_output (x0, INTVAL (x1));
	  return gen_rtx_PLUS (GET_MODE (x), x0, x1);
	}
    }
  /* This gives us much better alias analysis when called from
     the loop optimizer.  Note we want to leave the original
     MEM alone, but need to return the canonicalized MEM with
     all the flags with their original values.  */
  else if (GET_CODE (x) == MEM)
    {
      rtx copy = copy_rtx (x);
      XEXP (copy, 0) = canon_rtx (XEXP (copy, 0));
      x = copy;
    }
  return x;
}

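/* As a worked example (register number invented for illustration): if
   init_alias_analysis recorded reg_known_value[100] = (symbol_ref "x"),
   then canonicalizing

	(plus:SI (reg:SI 100) (const_int 4))

   rewrites the register operand to (symbol_ref "x") and folds the
   constant through plus_constant_for_output, so this address and any
   other address built from pseudo 100 canonicalize to the same
   "x plus 4" form and compare equal in rtx_equal_for_memref_p.  */
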
/* Set up all info needed to perform alias analysis on memory references.  */

void
init_alias_analysis ()
{
  int maxreg = max_reg_num ();
  rtx insn;
  rtx note;
  rtx set;

  reg_known_value_size = maxreg;

  reg_known_value
    = (rtx *) oballoc ((maxreg - FIRST_PSEUDO_REGISTER) * sizeof (rtx))
      - FIRST_PSEUDO_REGISTER;
  bzero ((char *) (reg_known_value + FIRST_PSEUDO_REGISTER),
	 (maxreg - FIRST_PSEUDO_REGISTER) * sizeof (rtx));

  reg_known_equiv_p
    = (char *) oballoc ((maxreg - FIRST_PSEUDO_REGISTER) * sizeof (char))
      - FIRST_PSEUDO_REGISTER;
  bzero (reg_known_equiv_p + FIRST_PSEUDO_REGISTER,
	 (maxreg - FIRST_PSEUDO_REGISTER) * sizeof (char));

  /* Fill in the entries with known constant values.  */
  for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
    if ((set = single_set (insn)) != 0
	&& GET_CODE (SET_DEST (set)) == REG
	&& REGNO (SET_DEST (set)) >= FIRST_PSEUDO_REGISTER
	&& (((note = find_reg_note (insn, REG_EQUAL, 0)) != 0
	     && REG_N_SETS (REGNO (SET_DEST (set))) == 1)
	    || (note = find_reg_note (insn, REG_EQUIV, NULL_RTX)) != 0)
	&& GET_CODE (XEXP (note, 0)) != EXPR_LIST)
      {
	int regno = REGNO (SET_DEST (set));
	reg_known_value[regno] = XEXP (note, 0);
	reg_known_equiv_p[regno] = REG_NOTE_KIND (note) == REG_EQUIV;
      }

  /* Fill in the remaining entries.  */
  while (--maxreg >= FIRST_PSEUDO_REGISTER)
    if (reg_known_value[maxreg] == 0)
      reg_known_value[maxreg] = regno_reg_rtx[maxreg];
}

/* Return 1 if X and Y are identical-looking rtx's.

   We use the data in reg_known_value above to see if two registers with
   different numbers are, in fact, equivalent.  */

static int
rtx_equal_for_memref_p (x, y)
     rtx x, y;
{
  register int i;
  register int j;
  register enum rtx_code code;
  register char *fmt;

  if (x == 0 && y == 0)
    return 1;
  if (x == 0 || y == 0)
    return 0;
  x = canon_rtx (x);
  y = canon_rtx (y);

  if (x == y)
    return 1;

  code = GET_CODE (x);
  /* Rtx's of different codes cannot be equal.  */
  if (code != GET_CODE (y))
    return 0;

  /* (MULT:SI x y) and (MULT:HI x y) are NOT equivalent.
     (REG:SI x) and (REG:HI x) are NOT equivalent.  */

  if (GET_MODE (x) != GET_MODE (y))
    return 0;

  /* REG, LABEL_REF, and SYMBOL_REF can be compared nonrecursively.  */

  if (code == REG)
    return REGNO (x) == REGNO (y);
  if (code == LABEL_REF)
    return XEXP (x, 0) == XEXP (y, 0);
  if (code == SYMBOL_REF)
    return XSTR (x, 0) == XSTR (y, 0);

  /* For commutative operations, the RTXs match if the operands match in
     any order.  Also handle the simple binary and unary cases without a
     loop.  */
  if (code == EQ || code == NE || GET_RTX_CLASS (code) == 'c')
    return ((rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 0))
	     && rtx_equal_for_memref_p (XEXP (x, 1), XEXP (y, 1)))
	    || (rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 1))
		&& rtx_equal_for_memref_p (XEXP (x, 1), XEXP (y, 0))));
  else if (GET_RTX_CLASS (code) == '<' || GET_RTX_CLASS (code) == '2')
    return (rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 0))
	    && rtx_equal_for_memref_p (XEXP (x, 1), XEXP (y, 1)));
  else if (GET_RTX_CLASS (code) == '1')
    return rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 0));

  /* Compare the elements.  If any pair of corresponding elements
     fail to match, return 0 for the whole thing.  */

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      switch (fmt[i])
	{
	case 'w':
	  if (XWINT (x, i) != XWINT (y, i))
	    return 0;
	  break;

	case 'n':
	case 'i':
	  if (XINT (x, i) != XINT (y, i))
	    return 0;
	  break;

	case 'V':
	case 'E':
	  /* Two vectors must have the same length.  */
	  if (XVECLEN (x, i) != XVECLEN (y, i))
	    return 0;

	  /* And the corresponding elements must match.  */
	  for (j = 0; j < XVECLEN (x, i); j++)
	    if (rtx_equal_for_memref_p (XVECEXP (x, i, j), XVECEXP (y, i, j)) == 0)
	      return 0;
	  break;

	case 'e':
	  if (rtx_equal_for_memref_p (XEXP (x, i), XEXP (y, i)) == 0)
	    return 0;
	  break;

	case 'S':
	case 's':
	  if (strcmp (XSTR (x, i), XSTR (y, i)))
	    return 0;
	  break;

	case 'u':
	  /* These are just backpointers, so they don't matter.  */
	  break;

	case '0':
	  break;

	  /* It is believed that rtx's at this level will never
	     contain anything but integers and other rtx's,
	     except for within LABEL_REFs and SYMBOL_REFs.  */
	default:
	  abort ();
	}
    }
  return 1;
}

/* Given an rtx X, find a SYMBOL_REF or LABEL_REF within
   X and return it, or return 0 if none found.  */

static rtx
find_symbolic_term (x)
     rtx x;
{
  register int i;
  register enum rtx_code code;
  register char *fmt;

  code = GET_CODE (x);
  if (code == SYMBOL_REF || code == LABEL_REF)
    return x;
  if (GET_RTX_CLASS (code) == 'o')
    return 0;

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      rtx t;

      if (fmt[i] == 'e')
	{
	  t = find_symbolic_term (XEXP (x, i));
	  if (t != 0)
	    return t;
	}
      else if (fmt[i] == 'E')
	break;
    }
  return 0;
}

/* Return nonzero if X and Y (memory addresses) could reference the
   same location in memory.  C is an offset accumulator.  When
   C is nonzero, we are testing aliases between X and Y + C.
   XSIZE is the size in bytes of the X reference,
   similarly YSIZE is the size in bytes for Y.

   If XSIZE or YSIZE is zero, we do not know the amount of memory being
   referenced (the reference was BLKmode), so make the most pessimistic
   assumptions.

   We recognize the following cases of non-conflicting memory:

   (1) addresses involving the frame pointer cannot conflict
       with addresses involving static variables.
   (2) static variables with different addresses cannot conflict.

   Nice to notice that varying addresses cannot conflict with fp if no
   local variables had their addresses taken, but that's too hard now.  */

/* ??? In Fortran, references to an array parameter can never conflict with
   another array parameter.  */

static int
memrefs_conflict_p (xsize, x, ysize, y, c)
     rtx x, y;
     int xsize, ysize;
     HOST_WIDE_INT c;
{
  if (GET_CODE (x) == HIGH)
    x = XEXP (x, 0);
  else if (GET_CODE (x) == LO_SUM)
    x = XEXP (x, 1);
  else
    x = canon_rtx (x);
  if (GET_CODE (y) == HIGH)
    y = XEXP (y, 0);
  else if (GET_CODE (y) == LO_SUM)
    y = XEXP (y, 1);
  else
    y = canon_rtx (y);

  if (rtx_equal_for_memref_p (x, y))
    return (xsize == 0 || ysize == 0
	    || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));

  if (y == frame_pointer_rtx || y == hard_frame_pointer_rtx
      || y == stack_pointer_rtx)
    {
      rtx t = y;
      int tsize = ysize;
      y = x; ysize = xsize;
      x = t; xsize = tsize;
    }

  if (x == frame_pointer_rtx || x == hard_frame_pointer_rtx
      || x == stack_pointer_rtx)
    {
      rtx y1;

      if (CONSTANT_P (y))
	return 0;

      if (GET_CODE (y) == PLUS
	  && canon_rtx (XEXP (y, 0)) == x
	  && (y1 = canon_rtx (XEXP (y, 1)))
	  && GET_CODE (y1) == CONST_INT)
	{
	  c += INTVAL (y1);
	  return (xsize == 0 || ysize == 0
		  || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));
	}

      if (GET_CODE (y) == PLUS
	  && (y1 = canon_rtx (XEXP (y, 0)))
	  && CONSTANT_P (y1))
	return 0;

      return 1;
    }

  if (GET_CODE (x) == PLUS)
    {
      /* The fact that X is canonicalized means that this
	 PLUS rtx is canonicalized.  */
      rtx x0 = XEXP (x, 0);
      rtx x1 = XEXP (x, 1);

      if (GET_CODE (y) == PLUS)
	{
	  /* The fact that Y is canonicalized means that this
	     PLUS rtx is canonicalized.  */
	  rtx y0 = XEXP (y, 0);
	  rtx y1 = XEXP (y, 1);

	  if (rtx_equal_for_memref_p (x1, y1))
	    return memrefs_conflict_p (xsize, x0, ysize, y0, c);
	  if (rtx_equal_for_memref_p (x0, y0))
	    return memrefs_conflict_p (xsize, x1, ysize, y1, c);
	  if (GET_CODE (x1) == CONST_INT)
	    if (GET_CODE (y1) == CONST_INT)
	      return memrefs_conflict_p (xsize, x0, ysize, y0,
					 c - INTVAL (x1) + INTVAL (y1));
	    else
	      return memrefs_conflict_p (xsize, x0, ysize, y, c - INTVAL (x1));
	  else if (GET_CODE (y1) == CONST_INT)
	    return memrefs_conflict_p (xsize, x, ysize, y0, c + INTVAL (y1));

	  /* Handle case where we cannot understand iteration operators,
	     but we notice that the base addresses are distinct objects.  */
	  x = find_symbolic_term (x);
	  if (x == 0)
	    return 1;
	  y = find_symbolic_term (y);
	  if (y == 0)
	    return 1;
	  return rtx_equal_for_memref_p (x, y);
	}
      else if (GET_CODE (x1) == CONST_INT)
	return memrefs_conflict_p (xsize, x0, ysize, y, c - INTVAL (x1));
    }
  else if (GET_CODE (y) == PLUS)
    {
      /* The fact that Y is canonicalized means that this
	 PLUS rtx is canonicalized.  */
      rtx y0 = XEXP (y, 0);
      rtx y1 = XEXP (y, 1);

      if (GET_CODE (y1) == CONST_INT)
	return memrefs_conflict_p (xsize, x, ysize, y0, c + INTVAL (y1));
      else
	return 1;
    }

  if (GET_CODE (x) == GET_CODE (y) && GET_CODE (x) == MULT)
    {
      /* Handle cases where we expect the second operands to be the
	 same, and check only whether the first operand would conflict
	 or not.  */
      rtx x0, y0;
      rtx x1 = canon_rtx (XEXP (x, 1));
      rtx y1 = canon_rtx (XEXP (y, 1));
      if (! rtx_equal_for_memref_p (x1, y1))
	return 1;
      x0 = canon_rtx (XEXP (x, 0));
      y0 = canon_rtx (XEXP (y, 0));
      if (rtx_equal_for_memref_p (x0, y0))
	return (xsize == 0 || ysize == 0
		|| (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));

      /* Can't properly adjust our sizes.  */
      if (GET_CODE (x1) != CONST_INT)
	return 1;
      xsize /= INTVAL (x1);
      ysize /= INTVAL (x1);
      c /= INTVAL (x1);
      return memrefs_conflict_p (xsize, x0, ysize, y0, c);
    }

  if (CONSTANT_P (x))
    {
      if (GET_CODE (x) == CONST_INT && GET_CODE (y) == CONST_INT)
	{
	  c += (INTVAL (y) - INTVAL (x));
	  return (xsize == 0 || ysize == 0
		  || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));
	}

      if (GET_CODE (x) == CONST)
	{
	  if (GET_CODE (y) == CONST)
	    return memrefs_conflict_p (xsize, canon_rtx (XEXP (x, 0)),
				       ysize, canon_rtx (XEXP (y, 0)), c);
	  else
	    return memrefs_conflict_p (xsize, canon_rtx (XEXP (x, 0)),
				       ysize, y, c);
	}

      if (GET_CODE (y) == CONST)
	return memrefs_conflict_p (xsize, x, ysize,
				   canon_rtx (XEXP (y, 0)), c);

      if (CONSTANT_P (y))
	return (rtx_equal_for_memref_p (x, y)
		&& (xsize == 0 || ysize == 0
		    || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0)));

      return 1;
    }
  return 1;
}

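/* A worked instance of the overlap test that recurs throughout this
   function: with a common canonical base, X occupies [0, xsize) and Y
   occupies [c, c + ysize).  For two 4-byte references (xsize == ysize == 4)
   at base and base+4 we reach the test with c == 4; (c >= 0 && xsize > c)
   fails, so no conflict is reported.  At base and base+2, c == 2 and
   xsize > c holds, so the references conflict.  And a zero size means a
   BLKmode reference of unknown extent, which conservatively conflicts
   with anything at the same base.  */
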
/* Functions to compute memory dependencies.

   Since we process the insns in execution order, we can build tables
   to keep track of what registers are fixed (and not aliased), what registers
   are varying in known ways, and what registers are varying in unknown
   ways.

   If both memory references are volatile, then there must always be a
   dependence between the two references, since their order can not be
   changed.  A volatile and non-volatile reference can be interchanged
   though.

   A MEM_IN_STRUCT reference at a non-QImode non-AND varying address can never
   conflict with a non-MEM_IN_STRUCT reference at a fixed address.  We must
   allow QImode aliasing because the ANSI C standard allows character
   pointers to alias anything.  We are assuming that characters are
   always QImode here.  We also must allow AND addresses, because they may
   generate accesses outside the object being referenced.  This is used to
   generate aligned addresses from unaligned addresses, for instance, the
   alpha storeqi_unaligned pattern.  */

/* Read dependence: X is read after read in MEM takes place.  There can
   only be a dependence here if both reads are volatile.  */

int
read_dependence (mem, x)
     rtx mem;
     rtx x;
{
  return MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem);
}

/* True dependence: X is read after store in MEM takes place.  */

int
true_dependence (mem, x)
     rtx mem;
     rtx x;
{
  /* If X is an unchanging read, then it can't possibly conflict with any
     non-unchanging store.  It may conflict with an unchanging write though,
     because there may be a single store to this address to initialize it.
     Just fall through to the code below to resolve the case where we have
     both an unchanging read and an unchanging write.  This won't handle all
     cases optimally, but the possible performance loss should be
     negligible.  */
  x = canon_rtx (x);
  mem = canon_rtx (mem);
  if (RTX_UNCHANGING_P (x) && ! RTX_UNCHANGING_P (mem))
    return 0;

  return ((MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
	  /* If we have two memrefs whose MEM_IN_STRUCT_P values differ and
	     one is volatile, show they conflict.  This isn't quite right, but
	     works around the fact that MEM_IN_STRUCT_P cannot be set
	     totally accurately.  */
	  || (MEM_IN_STRUCT_P (x) != MEM_IN_STRUCT_P (mem)
	      && (MEM_VOLATILE_P (x) || MEM_VOLATILE_P (mem)))
	  || (memrefs_conflict_p (SIZE_FOR_MODE (mem), XEXP (mem, 0),
				  SIZE_FOR_MODE (x), XEXP (x, 0), 0)
	      && ! (MEM_IN_STRUCT_P (mem) && rtx_addr_varies_p (mem)
		    && GET_MODE (mem) != QImode
		    && GET_CODE (XEXP (mem, 0)) != AND
		    && ! MEM_IN_STRUCT_P (x) && ! rtx_addr_varies_p (x))
	      && ! (MEM_IN_STRUCT_P (x) && rtx_addr_varies_p (x)
		    && GET_MODE (x) != QImode
		    && GET_CODE (XEXP (x, 0)) != AND
		    && ! MEM_IN_STRUCT_P (mem) && ! rtx_addr_varies_p (mem))));
}

/* Anti dependence: X is written after read in MEM takes place.  */

int
anti_dependence (mem, x)
     rtx mem;
     rtx x;
{
  /* If MEM is an unchanging read, then it can't possibly conflict with
     the store to X, because there is at most one store to MEM, and it must
     have occurred somewhere before MEM.  */
  x = canon_rtx (x);
  mem = canon_rtx (mem);
  if (RTX_UNCHANGING_P (mem))
    return 0;

  return ((MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
	  || (MEM_IN_STRUCT_P (x) != MEM_IN_STRUCT_P (mem)
	      && (MEM_VOLATILE_P (x) || MEM_VOLATILE_P (mem)))
	  || (memrefs_conflict_p (SIZE_FOR_MODE (mem), XEXP (mem, 0),
				  SIZE_FOR_MODE (x), XEXP (x, 0), 0)
	      && ! (MEM_IN_STRUCT_P (mem) && rtx_addr_varies_p (mem)
		    && GET_MODE (mem) != QImode
		    && GET_CODE (XEXP (mem, 0)) != AND
		    && ! MEM_IN_STRUCT_P (x) && ! rtx_addr_varies_p (x))
	      && ! (MEM_IN_STRUCT_P (x) && rtx_addr_varies_p (x)
		    && GET_MODE (x) != QImode
		    && GET_CODE (XEXP (x, 0)) != AND
		    && ! MEM_IN_STRUCT_P (mem) && ! rtx_addr_varies_p (mem))));
}

/* Output dependence: X is written after store in MEM takes place.  */

int
output_dependence (mem, x)
     rtx mem;
     rtx x;
{
  x = canon_rtx (x);
  mem = canon_rtx (mem);
  return ((MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
	  || (MEM_IN_STRUCT_P (x) != MEM_IN_STRUCT_P (mem)
	      && (MEM_VOLATILE_P (x) || MEM_VOLATILE_P (mem)))
	  || (memrefs_conflict_p (SIZE_FOR_MODE (mem), XEXP (mem, 0),
				  SIZE_FOR_MODE (x), XEXP (x, 0), 0)
	      && ! (MEM_IN_STRUCT_P (mem) && rtx_addr_varies_p (mem)
		    && GET_MODE (mem) != QImode
		    && GET_CODE (XEXP (mem, 0)) != AND
		    && ! MEM_IN_STRUCT_P (x) && ! rtx_addr_varies_p (x))
	      && ! (MEM_IN_STRUCT_P (x) && rtx_addr_varies_p (x)
		    && GET_MODE (x) != QImode
		    && GET_CODE (XEXP (x, 0)) != AND
		    && ! MEM_IN_STRUCT_P (mem) && ! rtx_addr_varies_p (mem))));
}

/* Helper functions for instruction scheduling.  */

/* Add ELEM wrapped in an INSN_LIST with reg note kind DEP_TYPE to the
   LOG_LINKS of INSN, if not already there.  DEP_TYPE indicates the type
   of dependence that this link represents.  */

static void
add_dependence (insn, elem, dep_type)
     rtx insn;
     rtx elem;
     enum reg_note dep_type;
{
  rtx link, next;

  /* Don't depend an insn on itself.  */
  if (insn == elem)
    return;

  /* If elem is part of a sequence that must be scheduled together, then
     make the dependence point to the last insn of the sequence.
     When HAVE_cc0, it is possible for NOTEs to exist between users and
     setters of the condition codes, so we must skip past notes here.
     Otherwise, NOTEs are impossible here.  */

  next = NEXT_INSN (elem);

#ifdef HAVE_cc0
  while (next && GET_CODE (next) == NOTE)
    next = NEXT_INSN (next);
#endif

  if (next && SCHED_GROUP_P (next)
      && GET_CODE (next) != CODE_LABEL)
    {
      /* Notes will never intervene here though, so don't bother checking
	 for them.  */
      /* We must reject CODE_LABELs, so that we don't get confused by one
	 that has LABEL_PRESERVE_P set, which is represented by the same
	 bit in the rtl as SCHED_GROUP_P.  A CODE_LABEL can never be
	 SCHED_GROUP_P.  */
      while (NEXT_INSN (next) && SCHED_GROUP_P (NEXT_INSN (next))
	     && GET_CODE (NEXT_INSN (next)) != CODE_LABEL)
	next = NEXT_INSN (next);

      /* Again, don't depend an insn on itself.  */
      if (insn == next)
	return;

      /* Make the dependence to NEXT, the last insn of the group, instead
	 of the original ELEM.  */
      elem = next;
    }

  /* Check that we don't already have this dependence.  */
  for (link = LOG_LINKS (insn); link; link = XEXP (link, 1))
    if (XEXP (link, 0) == elem)
      {
	/* If this is a more restrictive type of dependence than the existing
	   one, then change the existing dependence to this type.  */
	if ((int) dep_type < (int) REG_NOTE_KIND (link))
	  PUT_REG_NOTE_KIND (link, dep_type);
	return;
      }
  /* Might want to check one level of transitivity to save conses.  */

  link = rtx_alloc (INSN_LIST);
  /* Insn dependency, not data dependency.  */
  PUT_REG_NOTE_KIND (link, dep_type);
  XEXP (link, 0) = elem;
  XEXP (link, 1) = LOG_LINKS (insn);
  LOG_LINKS (insn) = link;
}

/* Remove ELEM wrapped in an INSN_LIST from the LOG_LINKS
   of INSN.  Abort if not found.  */

static void
remove_dependence (insn, elem)
     rtx insn;
     rtx elem;
{
  rtx prev, link;
  int found = 0;

  for (prev = 0, link = LOG_LINKS (insn); link; link = XEXP (link, 1))
    {
      if (XEXP (link, 0) == elem)
	{
	  RTX_INTEGRATED_P (link) = 1;
	  if (prev)
	    XEXP (prev, 1) = XEXP (link, 1);
	  else
	    LOG_LINKS (insn) = XEXP (link, 1);
	  found = 1;
	}
      else
	prev = link;
    }

  if (! found)
    abort ();
  return;
}

#ifndef INSN_SCHEDULING
void
schedule_insns (dump_file)
     FILE *dump_file;
{
}
#else
#ifndef __GNUC__
#define __inline
#endif

/* Computation of memory dependencies.  */

/* The *_insns and *_mems are paired lists.  Each pending memory operation
   will have a pointer to the MEM rtx on one list and a pointer to the
   containing insn on the other list in the same place in the list.  */

/* We can't use add_dependence like the old code did, because a single insn
   may have multiple memory accesses, and hence needs to be on the list
   once for each memory access.  Add_dependence won't let you add an insn
   to a list more than once.  */

/* An INSN_LIST containing all insns with pending read operations.  */
static rtx pending_read_insns;

/* An EXPR_LIST containing all MEM rtx's which are pending reads.  */
static rtx pending_read_mems;

/* An INSN_LIST containing all insns with pending write operations.  */
static rtx pending_write_insns;

/* An EXPR_LIST containing all MEM rtx's which are pending writes.  */
static rtx pending_write_mems;

/* Indicates the combined length of the two pending lists.  We must prevent
   these lists from ever growing too large since the number of dependencies
   produced is at least O(N*N), and execution time is at least O(4*N*N), as
   a function of the length of these pending lists.  */
static int pending_lists_length;

/* An INSN_LIST containing all INSN_LISTs allocated but currently unused.  */
static rtx unused_insn_list;

/* An EXPR_LIST containing all EXPR_LISTs allocated but currently unused.  */
static rtx unused_expr_list;

/* The last insn upon which all memory references must depend.
   This is an insn which flushed the pending lists, creating a dependency
   between it and all previously pending memory references.  This creates
   a barrier (or a checkpoint) which no memory reference is allowed to cross.

   This includes all non constant CALL_INSNs.  When we do interprocedural
   alias analysis, this restriction can be relaxed.
   This may also be an INSN that writes memory if the pending lists grow
   too large.  */
static rtx last_pending_memory_flush;

/* The last function call we have seen.  All hard regs, and, of course,
   the last function call, must depend on this.  */
static rtx last_function_call;

/* The LOG_LINKS field of this is a list of insns which use a pseudo register
   that does not already cross a call.  We create dependencies between each
   of those insns and the next call insn, to ensure that they won't cross a
   call after scheduling is done.  */
static rtx sched_before_next_call;

/* Pointer to the last instruction scheduled.  Used by rank_for_schedule,
   so that insns independent of the last scheduled insn will be preferred
   over dependent instructions.  */
static rtx last_scheduled_insn;

/* Process an insn's memory dependencies.  There are four kinds of
   dependencies:

   (0) read dependence: read follows read
   (1) true dependence: read follows write
   (2) anti dependence: write follows read
   (3) output dependence: write follows write

   We are careful to build only dependencies which actually exist, and
   use transitivity to avoid building too many links.  */

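/* For illustration (register numbers invented), given the sequence

	insn A:  (set (mem:SI (reg:SI 100)) (reg:SI 101))
	insn B:  (set (reg:SI 102) (mem:SI (reg:SI 100)))
	insn C:  (set (mem:SI (reg:SI 100)) (reg:SI 103))

   B truly depends on A (read follows write), C anti-depends on B (write
   follows read), and C output-depends on A (write follows write).  Each
   case is decided by the corresponding predicate above and recorded by
   add_dependence in LOG_LINKS, using REG_DEP_ANTI or REG_DEP_OUTPUT as
   the link's note kind where appropriate.  */
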
/* Return the INSN_LIST containing INSN in LIST, or NULL
   if LIST does not contain INSN.  */

__inline static rtx
find_insn_list (insn, list)
     rtx insn;
     rtx list;
{
  while (list)
    {
      if (XEXP (list, 0) == insn)
	return list;
      list = XEXP (list, 1);
    }
  return 0;
}

/* Compute the function units used by INSN.  This caches the value
   returned by function_units_used.  A function unit is encoded as the
   unit number if the value is non-negative and the complement of a
   mask if the value is negative.  A function unit index is the
   non-negative encoding.  */

__inline static int
insn_unit (insn)
     rtx insn;
{
  register int unit = INSN_UNIT (insn);

  if (unit == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
	 We can't pass these directly to function_units_used because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
	unit = -1;
      else
	{
	  unit = function_units_used (insn);
	  /* Increment non-negative values so we can cache zero.  */
	  if (unit >= 0) unit++;
	}
      /* We only cache 16 bits of the result, so if the value is out of
	 range, don't cache it.  */
      if (FUNCTION_UNITS_SIZE < HOST_BITS_PER_SHORT
	  || unit >= 0
	  || (~unit & ((1 << (HOST_BITS_PER_SHORT - 1)) - 1)) == 0)
	INSN_UNIT (insn) = unit;
    }
  return (unit > 0 ? unit - 1 : unit);
}

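/* A sketch (fenced off with #if 0) of consuming this encoding.  A
   non-negative result names one unit directly; a negative result is the
   complement of a mask of units, which callers such as prepare_unit and
   schedule_unit walk a bit at a time.  `use_one_unit' is a hypothetical
   stand-in for whatever per-unit work the caller does.  */
#if 0
int i;
int unit = insn_unit (insn);

if (unit >= 0)
  use_one_unit (unit);
else
  for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
    if ((unit & 1) != 0)
      use_one_unit (i);	/* one call per unit present in the mask */
#endif
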
/* Compute the blockage range for executing INSN on UNIT.  This caches
   the value returned by the blockage_range_function for the unit.
   These values are encoded in an int where the upper half gives the
   minimum value and the lower half gives the maximum value.  */

__inline static unsigned int
blockage_range (unit, insn)
     int unit;
     rtx insn;
{
  unsigned int blockage = INSN_BLOCKAGE (insn);
  unsigned int range;

  if (UNIT_BLOCKED (blockage) != unit + 1)
    {
      range = function_units[unit].blockage_range_function (insn);
      /* We only cache the blockage range for one unit and then only if
	 the values fit.  */
      if (HOST_BITS_PER_INT >= UNIT_BITS + 2 * BLOCKAGE_BITS)
	INSN_BLOCKAGE (insn) = ENCODE_BLOCKAGE (unit + 1, range);
    }
  else
    range = BLOCKAGE_RANGE (blockage);

  return range;
}

/* A vector indexed by function unit instance giving the last insn to use
   the unit.  The value of the function unit instance index for unit U
   instance I is (U + I * FUNCTION_UNITS_SIZE).  */
static rtx unit_last_insn[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit instance giving the minimum time when
   the unit will unblock based on the maximum blockage cost.  */
static int unit_tick[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit number giving the number of insns
   that remain to use the unit.  */
static int unit_n_insns[FUNCTION_UNITS_SIZE];

/* Reset the function unit state to the null state.  */

static void
clear_units ()
{
  bzero ((char *) unit_last_insn, sizeof (unit_last_insn));
  bzero ((char *) unit_tick, sizeof (unit_tick));
  bzero ((char *) unit_n_insns, sizeof (unit_n_insns));
}

/* Record an insn as one that will use the units encoded by UNIT.  */

__inline static void
prepare_unit (unit)
     int unit;
{
  int i;

  if (unit >= 0)
    unit_n_insns[unit]++;
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	prepare_unit (i);
}

/* Return the actual hazard cost of executing INSN on the unit UNIT,
   instance INSTANCE at time CLOCK if the previous actual hazard cost
   was COST.  */

__inline static int
actual_hazard_this_instance (unit, instance, insn, clock, cost)
     int unit, instance, clock, cost;
     rtx insn;
{
  int tick = unit_tick[instance];

  if (tick - clock > cost)
    {
      /* The scheduler is operating in reverse, so INSN is the executing
	 insn and the unit's last insn is the candidate insn.  We want a
	 more exact measure of the blockage if we execute INSN at CLOCK
	 given when we committed the execution of the unit's last insn.

	 The blockage value is given by either the unit's max blockage
	 constant, blockage range function, or blockage function.  Use
	 the most exact form for the given unit.  */

      if (function_units[unit].blockage_range_function)
	{
	  if (function_units[unit].blockage_function)
	    tick += (function_units[unit].blockage_function
		     (insn, unit_last_insn[instance])
		     - function_units[unit].max_blockage);
	  else
	    tick += ((int) MAX_BLOCKAGE_COST (blockage_range (unit, insn))
		     - function_units[unit].max_blockage);
	}
      if (tick - clock > cost)
	cost = tick - clock;
    }
  return cost;
}

/* Record INSN as having begun execution on the units encoded by UNIT at
   time CLOCK.  */

__inline static void
schedule_unit (unit, insn, clock)
     int unit, clock;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      int instance = unit;
#if MAX_MULTIPLICITY > 1
      /* Find the first free instance of the function unit and use that
	 one.  We assume that one is free.  */
      for (i = function_units[unit].multiplicity - 1; i > 0; i--)
	{
	  if (! actual_hazard_this_instance (unit, instance, insn, clock, 0))
	    break;
	  instance += FUNCTION_UNITS_SIZE;
	}
#endif
      unit_last_insn[instance] = insn;
      unit_tick[instance] = (clock + function_units[unit].max_blockage);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	schedule_unit (i, insn, clock);
}

/* Return the actual hazard cost of executing INSN on the units encoded by
   UNIT at time CLOCK if the previous actual hazard cost was COST.  */

__inline static int
actual_hazard (unit, insn, clock, cost)
     int unit, clock, cost;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      /* Find the instance of the function unit with the minimum hazard.  */
      int instance = unit;
      int best_cost = actual_hazard_this_instance (unit, instance, insn,
						   clock, cost);
      int this_cost;

#if MAX_MULTIPLICITY > 1
      if (best_cost > cost)
	{
	  for (i = function_units[unit].multiplicity - 1; i > 0; i--)
	    {
	      instance += FUNCTION_UNITS_SIZE;
	      this_cost = actual_hazard_this_instance (unit, instance, insn,
						       clock, cost);
	      if (this_cost < best_cost)
		{
		  best_cost = this_cost;
		  if (this_cost <= cost)
		    break;
		}
	    }
	}
#endif
      cost = MAX (cost, best_cost);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	cost = actual_hazard (i, insn, clock, cost);

  return cost;
}

/* Return the potential hazard cost of executing an instruction on the
   units encoded by UNIT if the previous potential hazard cost was COST.
   An insn with a large blockage time is chosen in preference to one
   with a smaller time; an insn that uses a unit that is more likely
   to be used is chosen in preference to one with a unit that is less
   used.  We are trying to minimize a subsequent actual hazard.  */

__inline static int
potential_hazard (unit, insn, cost)
     int unit, cost;
     rtx insn;
{
  int i, ncost;
  unsigned int minb, maxb;

  if (unit >= 0)
    {
      minb = maxb = function_units[unit].max_blockage;
      if (maxb > 1)
	{
	  if (function_units[unit].blockage_range_function)
	    {
	      maxb = minb = blockage_range (unit, insn);
	      maxb = MAX_BLOCKAGE_COST (maxb);
	      minb = MIN_BLOCKAGE_COST (minb);
	    }

	  if (maxb > 1)
	    {
	      /* Make the number of instructions left dominate.  Make the
		 minimum delay dominate the maximum delay.  If all these
		 are the same, use the unit number to add an arbitrary
		 ordering.  Other terms can be added.  */
	      ncost = minb * 0x40 + maxb;
	      ncost *= (unit_n_insns[unit] - 1) * 0x1000 + unit;
	      if (ncost > cost)
		cost = ncost;
	    }
	}
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	cost = potential_hazard (i, insn, cost);

  return cost;
}

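/* Plugging numbers into the weighting above (values invented for the
   example): with minb == 2, maxb == 5, unit number 1, and four insns
   left to use the unit, ncost = 2 * 0x40 + 5 = 133, and then
   ncost *= (4 - 1) * 0x1000 + 1 = 12289, giving about 1.6 million.  The
   remaining-insn count thus dwarfs both delay terms, the minimum delay
   dwarfs the maximum, and the unit number only separates exact ties,
   which is precisely the dominance order described in the comment in
   potential_hazard.  */
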
/* Compute cost of executing INSN given the dependence LINK on the insn USED.
   This is the number of virtual cycles taken between instruction issue and
   instruction results.  */

__inline static int
insn_cost (insn, link, used)
     rtx insn, link, used;
{
  register int cost = INSN_COST (insn);

  if (cost == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
	 We can't pass these directly to result_ready_cost because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
	{
	  INSN_COST (insn) = 1;
	  return 1;
	}
      else
	{
	  cost = result_ready_cost (insn);

	  if (cost < 1)
	    cost = 1;

	  INSN_COST (insn) = cost;
	}
    }

  /* A USE insn should never require the value used to be computed.  This
     allows the computation of a function's result and parameter values to
     overlap the return and call.  */
  recog_memoized (used);
  if (INSN_CODE (used) < 0)
    LINK_COST_FREE (link) = 1;

  /* If some dependencies vary the cost, compute the adjustment.  Most
     commonly, the adjustment is complete: either the cost is ignored
     (in the case of an output- or anti-dependence), or the cost is
     unchanged.  These values are cached in the link as LINK_COST_FREE
     and LINK_COST_ZERO.  */

  if (LINK_COST_FREE (link))
    cost = 1;
#ifdef ADJUST_COST
  else if (! LINK_COST_ZERO (link))
    {
      int ncost = cost;

      ADJUST_COST (used, link, insn, ncost);
      if (ncost <= 1)
	LINK_COST_FREE (link) = ncost = 1;
      if (cost == ncost)
	LINK_COST_ZERO (link) = 1;
      cost = ncost;
    }
#endif
  return cost;
}

/* Compute the priority number for INSN.  */

static int
priority (insn)
     rtx insn;
{
  if (insn && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
    {
      int prev_priority;
      int max_priority;
      int this_priority = INSN_PRIORITY (insn);
      rtx prev;

      if (this_priority > 0)
	return this_priority;

      max_priority = 1;

      /* Nonzero if these insns must be scheduled together.  */
      if (SCHED_GROUP_P (insn))
	{
	  prev = insn;
	  while (SCHED_GROUP_P (prev))
	    {
	      prev = PREV_INSN (prev);
	      INSN_REF_COUNT (prev) += 1;
	    }
	}

      for (prev = LOG_LINKS (insn); prev; prev = XEXP (prev, 1))
	{
	  rtx x = XEXP (prev, 0);

	  /* If this was a duplicate of a dependence we already deleted,
	     ignore it.  */
	  if (RTX_INTEGRATED_P (prev))
	    continue;

	  /* A dependence pointing to a note or deleted insn is always
	     obsolete, because sched_analyze_insn will have created any
	     necessary new dependences which replace it.  Notes and deleted
	     insns can be created when instructions are deleted by insn
	     splitting, or by register allocation.  */
	  if (GET_CODE (x) == NOTE || INSN_DELETED_P (x))
	    {
	      remove_dependence (insn, x);
	      continue;
	    }

	  /* Clear the link cost adjustment bits.  */
	  LINK_COST_FREE (prev) = 0;
#ifdef ADJUST_COST
	  LINK_COST_ZERO (prev) = 0;
#endif

	  /* This priority calculation was chosen because it results in the
	     least instruction movement, and does not hurt the performance
	     of the resulting code compared to the old algorithm.
	     This makes the sched algorithm more stable, which results
	     in better code, because there is less register pressure,
	     cross jumping is more likely to work, and debugging is easier.

	     When all instructions have a latency of 1, there is no need to
	     move any instructions.  Subtracting one here ensures that in such
	     cases all instructions will end up with a priority of one, and
	     hence no scheduling will be done.

	     The original code did not subtract the one, and added the
	     insn_cost of the current instruction to its priority (e.g.
	     move the insn_cost call down to the end).  */

	  prev_priority = priority (x) + insn_cost (x, prev, insn) - 1;

	  if (prev_priority > max_priority)
	    max_priority = prev_priority;
	  INSN_REF_COUNT (x) += 1;
	}

      prepare_unit (insn_unit (insn));
      INSN_PRIORITY (insn) = max_priority;
      return INSN_PRIORITY (insn);
    }
  return 0;
}

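/* A small worked example of the recurrence above (costs invented):
   suppose insn C depends on insn B, and B depends on insn A, where A
   has a result latency of 3 cycles and B of 1 cycle.  Then

	priority (A) = 1				(no dependences)
	priority (B) = max (1, priority (A) + 3 - 1) = 3
	priority (C) = max (1, priority (B) + 1 - 1) = 3

   so a long-latency insn raises the priority of everything downstream
   of it, while a block in which every insn has unit latency leaves all
   priorities at one and, as the comment notes, provokes no reordering
   at all.  */
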
/* Remove all INSN_LISTs and EXPR_LISTs from the pending lists and add
   them to the unused_*_list variables, so that they can be reused.  */

static void
free_pending_lists ()
{
  register rtx link, prev_link;

  if (pending_read_insns)
    {
      prev_link = pending_read_insns;
      link = XEXP (prev_link, 1);

      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}

      XEXP (prev_link, 1) = unused_insn_list;
      unused_insn_list = pending_read_insns;
      pending_read_insns = 0;
    }

  if (pending_write_insns)
    {
      prev_link = pending_write_insns;
      link = XEXP (prev_link, 1);

      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}

      XEXP (prev_link, 1) = unused_insn_list;
      unused_insn_list = pending_write_insns;
      pending_write_insns = 0;
    }

  if (pending_read_mems)
    {
      prev_link = pending_read_mems;
      link = XEXP (prev_link, 1);

      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}

      XEXP (prev_link, 1) = unused_expr_list;
      unused_expr_list = pending_read_mems;
      pending_read_mems = 0;
    }

  if (pending_write_mems)
    {
      prev_link = pending_write_mems;
      link = XEXP (prev_link, 1);

      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}

      XEXP (prev_link, 1) = unused_expr_list;
      unused_expr_list = pending_write_mems;
      pending_write_mems = 0;
    }
}

/* Add an INSN and MEM reference pair to a pending INSN_LIST and MEM_LIST.
   The MEM is a memory reference contained within INSN, which we are saving
   so that we can do memory aliasing on it.  */

static void
add_insn_mem_dependence (insn_list, mem_list, insn, mem)
     rtx *insn_list, *mem_list, insn, mem;
{
  register rtx link;

  if (unused_insn_list)
    {
      link = unused_insn_list;
      unused_insn_list = XEXP (link, 1);
    }
  else
    link = rtx_alloc (INSN_LIST);
  XEXP (link, 0) = insn;
  XEXP (link, 1) = *insn_list;
  *insn_list = link;

  if (unused_expr_list)
    {
      link = unused_expr_list;
      unused_expr_list = XEXP (link, 1);
    }
  else
    link = rtx_alloc (EXPR_LIST);
  XEXP (link, 0) = mem;
  XEXP (link, 1) = *mem_list;
  *mem_list = link;

  pending_lists_length++;
}

/* Make a dependency between every memory reference on the pending lists
   and INSN, thus flushing the pending lists.  If ONLY_WRITE, don't flush
   the read list.  */

static void
flush_pending_lists (insn, only_write)
     rtx insn;
     int only_write;
{
  rtx link;

  while (pending_read_insns && ! only_write)
    {
      add_dependence (insn, XEXP (pending_read_insns, 0), REG_DEP_ANTI);

      link = pending_read_insns;
      pending_read_insns = XEXP (pending_read_insns, 1);
      XEXP (link, 1) = unused_insn_list;
      unused_insn_list = link;

      link = pending_read_mems;
      pending_read_mems = XEXP (pending_read_mems, 1);
      XEXP (link, 1) = unused_expr_list;
      unused_expr_list = link;
    }
  while (pending_write_insns)
    {
      add_dependence (insn, XEXP (pending_write_insns, 0), REG_DEP_ANTI);

      link = pending_write_insns;
      pending_write_insns = XEXP (pending_write_insns, 1);
      XEXP (link, 1) = unused_insn_list;
      unused_insn_list = link;

      link = pending_write_mems;
      pending_write_mems = XEXP (pending_write_mems, 1);
      XEXP (link, 1) = unused_expr_list;
      unused_expr_list = link;
    }
  pending_lists_length = 0;

  if (last_pending_memory_flush)
    add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);

  last_pending_memory_flush = insn;
}

1701 /* Analyze a single SET or CLOBBER rtx, X, creating all dependencies generated
1702 by the write to the destination of X, and reads of everything mentioned. */
1704 static void
1705 sched_analyze_1 (x, insn)
1706 rtx x;
1707 rtx insn;
1709 register int regno;
1710 register rtx dest = SET_DEST (x);
1712 if (dest == 0)
1713 return;
1715 while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
1716 || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
1718 if (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
1720 /* The second and third arguments are values read by this insn. */
1721 sched_analyze_2 (XEXP (dest, 1), insn);
1722 sched_analyze_2 (XEXP (dest, 2), insn);
1724 dest = SUBREG_REG (dest);
1727 if (GET_CODE (dest) == REG)
1729 register int i;
1731 regno = REGNO (dest);
1733 /* A hard reg in a wide mode may really be multiple registers.
1734 If so, mark all of them just like the first. */
1735 if (regno < FIRST_PSEUDO_REGISTER)
1737 i = HARD_REGNO_NREGS (regno, GET_MODE (dest));
1738 while (--i >= 0)
1740 rtx u;
1742 for (u = reg_last_uses[regno+i]; u; u = XEXP (u, 1))
1743 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
1744 reg_last_uses[regno + i] = 0;
1745 if (reg_last_sets[regno + i])
1746 add_dependence (insn, reg_last_sets[regno + i],
1747 REG_DEP_OUTPUT);
1748 SET_REGNO_REG_SET (reg_pending_sets, regno + i);
1749 if ((call_used_regs[regno + i] || global_regs[regno + i])
1750 && last_function_call)
1751 /* Function calls clobber all call_used regs. */
1752 add_dependence (insn, last_function_call, REG_DEP_ANTI);
1755 else
1757 rtx u;
1759 for (u = reg_last_uses[regno]; u; u = XEXP (u, 1))
1760 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
1761 reg_last_uses[regno] = 0;
1762 if (reg_last_sets[regno])
1763 add_dependence (insn, reg_last_sets[regno], REG_DEP_OUTPUT);
1764 SET_REGNO_REG_SET (reg_pending_sets, regno);
1766 /* Pseudos that are REG_EQUIV to something may be replaced
1767 by that during reloading. We need only add dependencies for
1768 the address in the REG_EQUIV note. */
1769 if (! reload_completed
1770 && reg_known_equiv_p[regno]
1771 && GET_CODE (reg_known_value[regno]) == MEM)
1772 sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);
1774 /* Don't let it cross a call after scheduling if it doesn't
1775 already cross one. */
1776 if (REG_N_CALLS_CROSSED (regno) == 0 && last_function_call)
1777 add_dependence (insn, last_function_call, REG_DEP_ANTI);
1780 else if (GET_CODE (dest) == MEM)
1782 /* Writing memory. */
1784 if (pending_lists_length > 32)
1786 /* Flush all pending reads and writes to prevent the pending lists
1787 from getting any larger. Insn scheduling runs too slowly when
1788 these lists get long. The number 32 was chosen because it
1789 seems like a reasonable number. When compiling GCC with itself,
1790 this flush occurs 8 times for sparc, and 10 times for m88k using
1791 the number 32. */
1792 flush_pending_lists (insn, 0);
1794 else
1796 rtx pending, pending_mem;
1798 pending = pending_read_insns;
1799 pending_mem = pending_read_mems;
1800 while (pending)
1802 /* If a dependency already exists, don't create a new one. */
1803 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1804 if (anti_dependence (XEXP (pending_mem, 0), dest))
1805 add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);
1807 pending = XEXP (pending, 1);
1808 pending_mem = XEXP (pending_mem, 1);
1811 pending = pending_write_insns;
1812 pending_mem = pending_write_mems;
1813 while (pending)
1815 /* If a dependency already exists, don't create a new one. */
1816 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1817 if (output_dependence (XEXP (pending_mem, 0), dest))
1818 add_dependence (insn, XEXP (pending, 0), REG_DEP_OUTPUT);
1820 pending = XEXP (pending, 1);
1821 pending_mem = XEXP (pending_mem, 1);
1824 if (last_pending_memory_flush)
1825 add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);
1827 add_insn_mem_dependence (&pending_write_insns, &pending_write_mems,
1828 insn, dest);
1830 sched_analyze_2 (XEXP (dest, 0), insn);
1833 /* Analyze reads. */
1834 if (GET_CODE (x) == SET)
1835 sched_analyze_2 (SET_SRC (x), insn);
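
/* Editorial example (hypothetical insn stream): for a write to a pseudo
   r1, the bookkeeping above yields

	insn 1:  r1 = r2 + r3	records reg_last_sets[r1] = insn 1
	insn 2:  r4 = r1 * r5	records insn 2 in reg_last_uses[r1]
	insn 3:  r1 = r6	sched_analyze_1 then adds
				  insn 3 -> insn 2  (REG_DEP_ANTI)
				  insn 3 -> insn 1  (REG_DEP_OUTPUT)

   so insn 3 can neither overwrite r1 before insn 2 reads it, nor be
   reordered with the earlier definition in insn 1.  */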
1838 /* Analyze the uses of memory and registers in rtx X in INSN. */
1840 static void
1841 sched_analyze_2 (x, insn)
1842 rtx x;
1843 rtx insn;
1845 register int i;
1846 register int j;
1847 register enum rtx_code code;
1848 register char *fmt;
1850 if (x == 0)
1851 return;
1853 code = GET_CODE (x);
1855 switch (code)
1857 case CONST_INT:
1858 case CONST_DOUBLE:
1859 case SYMBOL_REF:
1860 case CONST:
1861 case LABEL_REF:
1862 /* Ignore constants. Note that we must handle CONST_DOUBLE here
1863 because it may have a cc0_rtx in its CONST_DOUBLE_CHAIN field, but
1864 this does not mean that this insn is using cc0. */
1865 return;
1867 #ifdef HAVE_cc0
1868 case CC0:
1870 rtx link, prev;
1872 /* User of CC0 depends on immediately preceding insn. */
1873 SCHED_GROUP_P (insn) = 1;
1875 /* There may be a note before this insn now, but all notes will
1876 be removed before we actually try to schedule the insns, so
1877 it won't cause a problem later. We must avoid it here though. */
1878 prev = prev_nonnote_insn (insn);
1880 /* Make a copy of all dependencies on the immediately previous insn,
1881 and add to this insn. This is so that all the dependencies will
1882 apply to the group. Remove an explicit dependence of this insn
1883 on the previous insn, as SCHED_GROUP_P now represents it. */
1885 if (find_insn_list (prev, LOG_LINKS (insn)))
1886 remove_dependence (insn, prev);
1888 for (link = LOG_LINKS (prev); link; link = XEXP (link, 1))
1889 add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));
1891 return;
1893 #endif
1895 case REG:
1897 int regno = REGNO (x);
1898 if (regno < FIRST_PSEUDO_REGISTER)
1900 int i;
1902 i = HARD_REGNO_NREGS (regno, GET_MODE (x));
1903 while (--i >= 0)
1905 reg_last_uses[regno + i]
1906 = gen_rtx_INSN_LIST (VOIDmode,
1907 insn, reg_last_uses[regno + i]);
1908 if (reg_last_sets[regno + i])
1909 add_dependence (insn, reg_last_sets[regno + i], 0);
1910 if ((call_used_regs[regno + i] || global_regs[regno + i])
1911 && last_function_call)
1912 /* Function calls clobber all call_used regs. */
1913 add_dependence (insn, last_function_call, REG_DEP_ANTI);
1916 else
1918 reg_last_uses[regno]
1919 = gen_rtx_INSN_LIST (VOIDmode, insn, reg_last_uses[regno]);
1920 if (reg_last_sets[regno])
1921 add_dependence (insn, reg_last_sets[regno], 0);
1923 /* Pseudos that are REG_EQUIV to something may be replaced
1924 by that during reloading. We need only add dependencies for
1925 the address in the REG_EQUIV note. */
1926 if (! reload_completed
1927 && reg_known_equiv_p[regno]
1928 && GET_CODE (reg_known_value[regno]) == MEM)
1929 sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);
1931 /* If the register does not already cross any calls, then add this
1932 insn to the sched_before_next_call list so that it will still
1933 not cross calls after scheduling. */
1934 if (REG_N_CALLS_CROSSED (regno) == 0)
1935 add_dependence (sched_before_next_call, insn, REG_DEP_ANTI);
1937 return;
1940 case MEM:
1942 /* Reading memory. */
1944 rtx pending, pending_mem;
1946 pending = pending_read_insns;
1947 pending_mem = pending_read_mems;
1948 while (pending)
1950 /* If a dependency already exists, don't create a new one. */
1951 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1952 if (read_dependence (XEXP (pending_mem, 0), x))
1953 add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);
1955 pending = XEXP (pending, 1);
1956 pending_mem = XEXP (pending_mem, 1);
1959 pending = pending_write_insns;
1960 pending_mem = pending_write_mems;
1961 while (pending)
1963 /* If a dependency already exists, don't create a new one. */
1964 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1965 if (true_dependence (XEXP (pending_mem, 0), x))
1966 add_dependence (insn, XEXP (pending, 0), 0);
1968 pending = XEXP (pending, 1);
1969 pending_mem = XEXP (pending_mem, 1);
1971 if (last_pending_memory_flush)
1972 add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);
1974 /* Always add these dependencies to pending_reads, since
1975 this insn may be followed by a write. */
1976 add_insn_mem_dependence (&pending_read_insns, &pending_read_mems,
1977 insn, x);
1979 /* Take advantage of tail recursion here. */
1980 sched_analyze_2 (XEXP (x, 0), insn);
1981 return;
1984 case ASM_OPERANDS:
1985 case ASM_INPUT:
1986 case UNSPEC_VOLATILE:
1987 case TRAP_IF:
1989 rtx u;
1991 /* Traditional and volatile asm instructions must be considered to use
1992 and clobber all hard registers, all pseudo-registers and all of
1993 memory. So must TRAP_IF and UNSPEC_VOLATILE operations.
1995 Consider for instance a volatile asm that changes the FPU rounding
1996 mode. An insn should not be moved across this even if it only uses
1997 pseudo-regs because it might give an incorrectly rounded result. */
1998 if (code != ASM_OPERANDS || MEM_VOLATILE_P (x))
2000 int max_reg = max_reg_num ();
2001 for (i = 0; i < max_reg; i++)
2003 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
2004 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
2005 reg_last_uses[i] = 0;
2006 if (reg_last_sets[i])
2007 add_dependence (insn, reg_last_sets[i], 0);
2009 reg_pending_sets_all = 1;
2011 flush_pending_lists (insn, 0);
2014 /* For all ASM_OPERANDS, we must traverse the vector of input operands.
2015 We cannot just fall through here, since then we would be confused
2016 by the ASM_INPUT rtx inside ASM_OPERANDS, which does not indicate
2017 a traditional asm, unlike its normal usage. */
2019 if (code == ASM_OPERANDS)
2021 for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
2022 sched_analyze_2 (ASM_OPERANDS_INPUT (x, j), insn);
2023 return;
2025 break;
2028 case PRE_DEC:
2029 case POST_DEC:
2030 case PRE_INC:
2031 case POST_INC:
2032 /* These all read and modify their operand. We must handle them as writes
2033 to get proper dependencies for following instructions. We must handle
2034 them as reads to get proper dependencies from this to previous
2035 instructions. Thus we need to pass them to both sched_analyze_1
2036 and sched_analyze_2. We must call sched_analyze_2 first in order
2037 to get the proper antecedent for the read. */
2038 sched_analyze_2 (XEXP (x, 0), insn);
2039 sched_analyze_1 (x, insn);
2040 return;
2042 default:
2043 break;
2046 /* Other cases: walk the insn. */
2047 fmt = GET_RTX_FORMAT (code);
2048 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
2050 if (fmt[i] == 'e')
2051 sched_analyze_2 (XEXP (x, i), insn);
2052 else if (fmt[i] == 'E')
2053 for (j = 0; j < XVECLEN (x, i); j++)
2054 sched_analyze_2 (XVECEXP (x, i, j), insn);
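
/* Editorial sketch: the default case above is the standard rtx walk
   driven by GET_RTX_FORMAT, where 'e' marks one subexpression and 'E' a
   vector of them.  A minimal standalone walker in the same style
   (walk_rtx and visit are hypothetical names; PROTO is assumed to be
   the usual GNU CC prototype macro):  */
#if 0
static void
walk_rtx (x, visit)
     rtx x;
     void (*visit) PROTO ((rtx));
{
  register int i, j;
  register char *fmt;

  (*visit) (x);
  fmt = GET_RTX_FORMAT (GET_CODE (x));
  for (i = GET_RTX_LENGTH (GET_CODE (x)) - 1; i >= 0; i--)
    if (fmt[i] == 'e')
      walk_rtx (XEXP (x, i), visit);
    else if (fmt[i] == 'E')
      for (j = 0; j < XVECLEN (x, i); j++)
	walk_rtx (XVECEXP (x, i, j), visit);
}
#endif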
2058 /* Analyze an INSN with pattern X to find all dependencies. */
2060 static void
2061 sched_analyze_insn (x, insn, loop_notes)
2062 rtx x, insn;
2063 rtx loop_notes;
2065 register RTX_CODE code = GET_CODE (x);
2066 rtx link;
2067 int maxreg = max_reg_num ();
2068 int i;
2070 if (code == SET || code == CLOBBER)
2071 sched_analyze_1 (x, insn);
2072 else if (code == PARALLEL)
2074 register int i;
2075 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
2077 code = GET_CODE (XVECEXP (x, 0, i));
2078 if (code == SET || code == CLOBBER)
2079 sched_analyze_1 (XVECEXP (x, 0, i), insn);
2080 else
2081 sched_analyze_2 (XVECEXP (x, 0, i), insn);
2084 else
2085 sched_analyze_2 (x, insn);
2087 /* Mark registers CLOBBERED or used by called function. */
2088 if (GET_CODE (insn) == CALL_INSN)
2089 for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
2091 if (GET_CODE (XEXP (link, 0)) == CLOBBER)
2092 sched_analyze_1 (XEXP (link, 0), insn);
2093 else
2094 sched_analyze_2 (XEXP (link, 0), insn);
2097 /* If there is a {LOOP,EHREGION}_{BEG,END} note in the middle of a basic block, then
2098 we must be sure that no instructions are scheduled across it.
2099 Otherwise, the reg_n_refs info (which depends on loop_depth) would
2100 become incorrect. */
2102 if (loop_notes)
2104 int max_reg = max_reg_num ();
2105 rtx link;
2107 for (i = 0; i < max_reg; i++)
2109 rtx u;
2110 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
2111 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
2112 reg_last_uses[i] = 0;
2113 if (reg_last_sets[i])
2114 add_dependence (insn, reg_last_sets[i], 0);
2116 reg_pending_sets_all = 1;
2118 flush_pending_lists (insn, 0);
2120 link = loop_notes;
2121 while (XEXP (link, 1))
2122 link = XEXP (link, 1);
2123 XEXP (link, 1) = REG_NOTES (insn);
2124 REG_NOTES (insn) = loop_notes;
2127 /* After reload, it is possible for an instruction to have a REG_DEAD note
2128 for a register that actually dies a few instructions earlier. For
2129 example, this can happen with SECONDARY_MEMORY_NEEDED reloads.
2130 In this case, we must consider the insn to use the register mentioned
2131 in the REG_DEAD note. Otherwise, we may accidentally move this insn
2132 after another insn that sets the register, thus getting obviously invalid
2133 rtl. This confuses reorg, which believes that REG_DEAD notes are still
2134 meaningful.
2136 ??? We would get better code if we fixed reload to put the REG_DEAD
2137 notes in the right places, but that may not be worth the effort. */
2139 if (reload_completed)
2141 rtx note;
2143 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
2144 if (REG_NOTE_KIND (note) == REG_DEAD)
2145 sched_analyze_2 (XEXP (note, 0), insn);
2148 EXECUTE_IF_SET_IN_REG_SET (reg_pending_sets, 0, i,
2150 reg_last_sets[i] = insn;
2152 CLEAR_REG_SET (reg_pending_sets);
2154 if (reg_pending_sets_all)
2156 for (i = 0; i < maxreg; i++)
2157 reg_last_sets[i] = insn;
2158 reg_pending_sets_all = 0;
2161 /* Handle function calls and function returns created by the epilogue
2162 threading code. */
2163 if (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN)
2165 rtx dep_insn;
2166 rtx prev_dep_insn;
2168 /* When scheduling instructions, we make sure calls don't lose their
2169 accompanying USE insns by making them depend on one another in order.
2171 Also, we must do the same thing for returns created by the epilogue
2172 threading code. Note this code works only in this special case,
2173 because other passes make no guarantee that they will never emit
2174 an instruction between a USE and a RETURN. There is such a guarantee
2175 for USE instructions immediately before a call. */
2177 prev_dep_insn = insn;
2178 dep_insn = PREV_INSN (insn);
2179 while (GET_CODE (dep_insn) == INSN
2180 && GET_CODE (PATTERN (dep_insn)) == USE
2181 && GET_CODE (XEXP (PATTERN (dep_insn), 0)) == REG)
2183 SCHED_GROUP_P (prev_dep_insn) = 1;
2185 /* Make a copy of all dependencies on dep_insn, and add to insn.
2186 This is so that all of the dependencies will apply to the
2187 group. */
2189 for (link = LOG_LINKS (dep_insn); link; link = XEXP (link, 1))
2190 add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));
2192 prev_dep_insn = dep_insn;
2193 dep_insn = PREV_INSN (dep_insn);
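
/* Editorial example: given USE insns feeding a call,

	insn 10:  USE r8
	insn 11:  USE r9
	insn 12:  call

   the loop above sets SCHED_GROUP_P on insn 12 and then on insn 11,
   each flag meaning "schedule me immediately after my predecessor", so
   the USE/call group moves as a unit.  */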
2198 /* Analyze every insn between HEAD and TAIL inclusive, creating LOG_LINKS
2199 for every dependency. */
2201 static int
2202 sched_analyze (head, tail)
2203 rtx head, tail;
2205 register rtx insn;
2206 register int n_insns = 0;
2207 register rtx u;
2208 register int luid = 0;
2209 rtx loop_notes = 0;
2211 for (insn = head; ; insn = NEXT_INSN (insn))
2213 INSN_LUID (insn) = luid++;
2215 if (GET_CODE (insn) == INSN || GET_CODE (insn) == JUMP_INSN)
2217 sched_analyze_insn (PATTERN (insn), insn, loop_notes);
2218 loop_notes = 0;
2219 n_insns += 1;
2221 else if (GET_CODE (insn) == CALL_INSN)
2223 rtx x;
2224 register int i;
2226 /* Any instruction using a hard register which may get clobbered
2227 by a call needs to be marked as dependent on this call.
2228 This prevents a use of a hard return reg from being moved
2229 past a void call (i.e. it does not explicitly set the hard
2230 return reg). */
2232 /* If this call is followed by a NOTE_INSN_SETJMP, then assume that
2233 all registers, not just hard registers, may be clobbered by this
2234 call. */
2236 /* Insn, being a CALL_INSN, magically depends on
2237 `last_function_call' already. */
2239 if (NEXT_INSN (insn) && GET_CODE (NEXT_INSN (insn)) == NOTE
2240 && NOTE_LINE_NUMBER (NEXT_INSN (insn)) == NOTE_INSN_SETJMP)
2242 int max_reg = max_reg_num ();
2243 for (i = 0; i < max_reg; i++)
2245 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
2246 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
2247 reg_last_uses[i] = 0;
2248 if (reg_last_sets[i])
2249 add_dependence (insn, reg_last_sets[i], 0);
2251 reg_pending_sets_all = 1;
2253 /* Add a pair of fake REG_NOTEs which we will later
2254 convert back into a NOTE_INSN_SETJMP note. See
2255 reemit_notes for why we use a pair of NOTEs. */
2257 REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_DEAD,
2258 GEN_INT (0),
2259 REG_NOTES (insn));
2260 REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_DEAD,
2261 GEN_INT (NOTE_INSN_SETJMP),
2262 REG_NOTES (insn));
2264 else
2266 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
2267 if (call_used_regs[i] || global_regs[i])
2269 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
2270 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
2271 reg_last_uses[i] = 0;
2272 if (reg_last_sets[i])
2273 add_dependence (insn, reg_last_sets[i], REG_DEP_ANTI);
2274 SET_REGNO_REG_SET (reg_pending_sets, i);
2278 /* For each insn which shouldn't cross a call, add a dependence
2279 between that insn and this call insn. */
2280 x = LOG_LINKS (sched_before_next_call);
2281 while (x)
2283 add_dependence (insn, XEXP (x, 0), REG_DEP_ANTI);
2284 x = XEXP (x, 1);
2286 LOG_LINKS (sched_before_next_call) = 0;
2288 sched_analyze_insn (PATTERN (insn), insn, loop_notes);
2289 loop_notes = 0;
2291 /* In the absence of interprocedural alias analysis, we must flush
2292 all pending reads and writes, and start new dependencies starting
2293 from here. But only flush writes for constant calls (which may
2294 be passed a pointer to something we haven't written yet). */
2295 flush_pending_lists (insn, CONST_CALL_P (insn));
2297 /* Depend this function call (actually, the user of this
2298 function call) on all hard register clobberage. */
2299 last_function_call = insn;
2300 n_insns += 1;
2303 /* See comments on reemit_notes as to why we do this. */
2304 else if (GET_CODE (insn) == NOTE
2305 && (NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_BEG
2306 || NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_END
2307 || NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_BEG
2308 || NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_END
2309 || (NOTE_LINE_NUMBER (insn) == NOTE_INSN_SETJMP
2310 && GET_CODE (PREV_INSN (insn)) != CALL_INSN)))
2312 loop_notes = gen_rtx_EXPR_LIST (REG_DEAD,
2313 GEN_INT (NOTE_BLOCK_NUMBER (insn)),
2314 loop_notes);
2315 loop_notes = gen_rtx_EXPR_LIST (REG_DEAD,
2316 GEN_INT (NOTE_LINE_NUMBER (insn)),
2317 loop_notes);
2318 CONST_CALL_P (loop_notes) = CONST_CALL_P (insn);
2321 if (insn == tail)
2322 return n_insns;
2325 abort ();
2328 /* Called when we see a set of a register. If death is true, then we are
2329 scanning backwards. Mark that register as unborn. If nobody says
2330 otherwise, that is how things will remain. If death is false, then we
2331 are scanning forwards. Mark that register as being born. */
2333 static void
2334 sched_note_set (b, x, death)
2335 int b;
2336 rtx x;
2337 int death;
2339 register int regno;
2340 register rtx reg = SET_DEST (x);
2341 int subreg_p = 0;
2343 if (reg == 0)
2344 return;
2346 while (GET_CODE (reg) == SUBREG || GET_CODE (reg) == STRICT_LOW_PART
2347 || GET_CODE (reg) == SIGN_EXTRACT || GET_CODE (reg) == ZERO_EXTRACT)
2349 /* Must treat modification of just one hardware register of a multi-reg
2350 value or just a byte field of a register exactly the same way that
2351 mark_set_1 in flow.c does, i.e. anything except a paradoxical subreg
2352 does not kill the entire register. */
2353 if (GET_CODE (reg) != SUBREG
2354 || REG_SIZE (SUBREG_REG (reg)) > REG_SIZE (reg))
2355 subreg_p = 1;
2357 reg = SUBREG_REG (reg);
2360 if (GET_CODE (reg) != REG)
2361 return;
2363 /* Global registers are always live, so the code below does not apply
2364 to them. */
2366 regno = REGNO (reg);
2367 if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
2369 if (death)
2371 /* If we only set part of the register, then this set does not
2372 kill it. */
2373 if (subreg_p)
2374 return;
2376 /* Try killing this register. */
2377 if (regno < FIRST_PSEUDO_REGISTER)
2379 int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
2380 while (--j >= 0)
2382 CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
2383 SET_REGNO_REG_SET (bb_dead_regs, regno + j);
2386 else
2388 CLEAR_REGNO_REG_SET (bb_live_regs, regno);
2389 SET_REGNO_REG_SET (bb_dead_regs, regno);
2392 else
2394 /* Make the register live again. */
2395 if (regno < FIRST_PSEUDO_REGISTER)
2397 int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
2398 while (--j >= 0)
2400 SET_REGNO_REG_SET (bb_live_regs, regno + j);
2401 CLEAR_REGNO_REG_SET (bb_dead_regs, regno + j);
2404 else
2406 SET_REGNO_REG_SET (bb_live_regs, regno);
2407 CLEAR_REGNO_REG_SET (bb_dead_regs, regno);
2413 /* Macros and functions for keeping the priority queue sorted, and
2414 dealing with queueing and dequeueing of instructions. */
2416 #define SCHED_SORT(READY, NEW_READY, OLD_READY) \
2417 do { if ((NEW_READY) - (OLD_READY) == 1) \
2418 swap_sort (READY, NEW_READY); \
2419 else if ((NEW_READY) - (OLD_READY) > 1) \
2420 qsort ((char *) READY, NEW_READY, sizeof (rtx), \
2421 rank_for_schedule); } \
2422 while (0)
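
/* Editorial note: SCHED_SORT avoids a full qsort when only one insn was
   appended since READY was last sorted.  Usage sketch (mirrors the
   calls in schedule_block below):  */
#if 0
  /* READY holds NEW_READY insns, of which the first OLD_READY were
     already in rank_for_schedule order.  One new insn is sifted into
     place by swap_sort; more than one triggers a full qsort.  */
  SCHED_SORT (ready, new_ready, n_ready);
#endif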
2424 /* Returns a positive value if y is preferred; returns a negative value if
2425 x is preferred. Should never return 0, since that will make the sort
2426 unstable. */
2428 static int
2429 rank_for_schedule (x, y)
2430 const GENERIC_PTR x;
2431 const GENERIC_PTR y;
2433 rtx tmp = *(rtx *) y;
2434 rtx tmp2 = *(rtx *) x;
2435 rtx link;
2436 int tmp_class, tmp2_class;
2437 int value;
2439 /* Choose the instruction with the highest priority, if different. */
2440 if ((value = INSN_PRIORITY (tmp) - INSN_PRIORITY (tmp2)) != 0)
2441 return value;
2443 if (last_scheduled_insn)
2445 /* Classify the instructions into three classes:
2446 1) Data dependent on last scheduled insn.
2447 2) Anti/Output dependent on last scheduled insn.
2448 3) Independent of last scheduled insn, or has latency of one.
2449 Choose the insn from the highest numbered class if different. */
2450 link = find_insn_list (tmp, LOG_LINKS (last_scheduled_insn));
2451 if (link == 0 || insn_cost (tmp, link, last_scheduled_insn) == 1)
2452 tmp_class = 3;
2453 else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
2454 tmp_class = 1;
2455 else
2456 tmp_class = 2;
2458 link = find_insn_list (tmp2, LOG_LINKS (last_scheduled_insn));
2459 if (link == 0 || insn_cost (tmp2, link, last_scheduled_insn) == 1)
2460 tmp2_class = 3;
2461 else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
2462 tmp2_class = 1;
2463 else
2464 tmp2_class = 2;
2466 if ((value = tmp_class - tmp2_class) != 0)
2467 return value;
2470 /* If insns are equally good, sort by INSN_LUID (original insn order),
2471 so that we make the sort stable. This minimizes instruction movement,
2472 thus minimizing sched's effect on debugging and cross-jumping. */
2473 return INSN_LUID (tmp) - INSN_LUID (tmp2);
2476 /* Resort the N-element array A, in which only the last element may be out of order. */
2478 __inline static void
2479 swap_sort (a, n)
2480 rtx *a;
2481 int n;
2483 rtx insn = a[n-1];
2484 int i = n-2;
2486 while (i >= 0 && rank_for_schedule ((const GENERIC_PTR) (a + i),
2487 (const GENERIC_PTR) &insn) >= 0)
2489 a[i+1] = a[i];
2490 i -= 1;
2492 a[i+1] = insn;
2495 static int max_priority;
2497 /* Add INSN to the insn queue so that it fires at least N_CYCLES
2498 before the currently executing insn. */
2500 __inline static void
2501 queue_insn (insn, n_cycles)
2502 rtx insn;
2503 int n_cycles;
2505 int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
2506 NEXT_INSN (insn) = insn_queue[next_q];
2507 insn_queue[next_q] = insn;
2508 q_size += 1;
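
/* Editorial sketch: insn_queue is a circular array of INSN_QUEUE_SIZE
   future-cycle buckets, each a chain of insns linked through their
   NEXT_INSN fields.  Assuming NEXT_Q_AFTER wraps the index modulo a
   power-of-two queue size (an assumption about its definition, which
   is outside this excerpt), queue_insn amounts to:  */
#if 0
  int next_q = (q_ptr + n_cycles) & (INSN_QUEUE_SIZE - 1);	/* assumed */
  NEXT_INSN (insn) = insn_queue[next_q];	/* push onto the bucket's chain */
  insn_queue[next_q] = insn;
#endif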
2511 /* Return nonzero if PAT is the pattern of an insn which makes a
2512 register live. */
2514 __inline static int
2515 birthing_insn_p (pat)
2516 rtx pat;
2518 int j;
2520 if (reload_completed == 1)
2521 return 0;
2523 if (GET_CODE (pat) == SET
2524 && GET_CODE (SET_DEST (pat)) == REG)
2526 rtx dest = SET_DEST (pat);
2527 int i = REGNO (dest);
2529 /* It would be more accurate to use refers_to_regno_p or
2530 reg_mentioned_p to determine when the dest is not live before this
2531 insn. */
2533 if (REGNO_REG_SET_P (bb_live_regs, i))
2534 return (REG_N_SETS (i) == 1);
2536 return 0;
2538 if (GET_CODE (pat) == PARALLEL)
2540 for (j = 0; j < XVECLEN (pat, 0); j++)
2541 if (birthing_insn_p (XVECEXP (pat, 0, j)))
2542 return 1;
2544 return 0;
2547 /* PREV is an insn that is ready to execute. Adjust its priority if that
2548 will help shorten register lifetimes. */
2550 __inline static void
2551 adjust_priority (prev)
2552 rtx prev;
2554 /* Trying to shorten register lives after reload has completed
2555 is useless and wrong. It gives inaccurate schedules. */
2556 if (reload_completed == 0)
2558 rtx note;
2559 int n_deaths = 0;
2561 /* ??? This code has no effect, because REG_DEAD notes are removed
2562 before we ever get here. */
2563 for (note = REG_NOTES (prev); note; note = XEXP (note, 1))
2564 if (REG_NOTE_KIND (note) == REG_DEAD)
2565 n_deaths += 1;
2567 /* Defer scheduling insns which kill registers, since that
2568 shortens register lives. Prefer scheduling insns which
2569 make registers live for the same reason. */
2570 switch (n_deaths)
2572 default:
2573 INSN_PRIORITY (prev) >>= 3;
2574 break;
2575 case 3:
2576 INSN_PRIORITY (prev) >>= 2;
2577 break;
2578 case 2:
2579 case 1:
2580 INSN_PRIORITY (prev) >>= 1;
2581 break;
2582 case 0:
2583 if (birthing_insn_p (PATTERN (prev)))
2585 int max = max_priority;
2587 if (max > INSN_PRIORITY (prev))
2588 INSN_PRIORITY (prev) = max;
2590 break;
2592 #ifdef ADJUST_PRIORITY
2593 ADJUST_PRIORITY (prev);
2594 #endif
2598 /* INSN is the "currently executing insn". Launch each insn which was
2599 waiting on INSN (in the backwards dataflow sense). READY is a
2600 vector of insns which are ready to fire. N_READY is the number of
2601 elements in READY. CLOCK is the current virtual cycle. */
2603 static int
2604 schedule_insn (insn, ready, n_ready, clock)
2605 rtx insn;
2606 rtx *ready;
2607 int n_ready;
2608 int clock;
2610 rtx link;
2611 int new_ready = n_ready;
2613 if (MAX_BLOCKAGE > 1)
2614 schedule_unit (insn_unit (insn), insn, clock);
2616 if (LOG_LINKS (insn) == 0)
2617 return n_ready;
2619 /* This is used by the function adjust_priority above. */
2620 if (n_ready > 0)
2621 max_priority = MAX (INSN_PRIORITY (ready[0]), INSN_PRIORITY (insn));
2622 else
2623 max_priority = INSN_PRIORITY (insn);
2625 for (link = LOG_LINKS (insn); link != 0; link = XEXP (link, 1))
2627 rtx prev = XEXP (link, 0);
2628 int cost = insn_cost (prev, link, insn);
2630 if ((INSN_REF_COUNT (prev) -= 1) != 0)
2632 /* We satisfied one requirement to fire PREV. Record the earliest
2633 time when PREV can fire. No need to do this if the cost is 1,
2634 because PREV can fire no sooner than the next cycle. */
2635 if (cost > 1)
2636 INSN_TICK (prev) = MAX (INSN_TICK (prev), clock + cost);
2638 else
2640 /* We satisfied the last requirement to fire PREV. Ensure that all
2641 timing requirements are satisfied. */
2642 if (INSN_TICK (prev) - clock > cost)
2643 cost = INSN_TICK (prev) - clock;
2645 /* Adjust the priority of PREV and either put it on the ready
2646 list or queue it. */
2647 adjust_priority (prev);
2648 if (cost <= 1)
2649 ready[new_ready++] = prev;
2650 else
2651 queue_insn (prev, cost);
2655 return new_ready;
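
/* Editorial example: scheduling proceeds backwards, so LOG_LINKS of the
   just-scheduled INSN name the insns that must issue earlier.  Suppose
   PREV feeds two already-scheduled consumers: one at clock 4 with cost
   3 (INSN_REF_COUNT drops to 1 and INSN_TICK (PREV) becomes 7), and
   the last at clock 6 with cost 1 (the count reaches 0; the residual
   requirement 7 - 6 = 1 is not greater than the cost, so PREV goes
   straight onto the ready list instead of the queue).  */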
2658 /* Given N_READY insns in the ready list READY at time CLOCK, queue
2659 those that are blocked due to function unit hazards and rearrange
2660 the remaining ones to minimize subsequent function unit hazards. */
2662 static int
2663 schedule_select (ready, n_ready, clock, file)
2664 rtx *ready;
2665 int n_ready, clock;
2666 FILE *file;
2668 int pri = INSN_PRIORITY (ready[0]);
2669 int i, j, k, q, cost, best_cost, best_insn = 0, new_ready = n_ready;
2670 rtx insn;
2672 /* Work down the ready list in groups of instructions with the same
2673 priority value. Queue insns in the group that are blocked and
2674 select among those that remain for the one with the largest
2675 potential hazard. */
2676 for (i = 0; i < n_ready; i = j)
2678 int opri = pri;
2679 for (j = i + 1; j < n_ready; j++)
2680 if ((pri = INSN_PRIORITY (ready[j])) != opri)
2681 break;
2683 /* Queue insns in the group that are blocked. */
2684 for (k = i, q = 0; k < j; k++)
2686 insn = ready[k];
2687 if ((cost = actual_hazard (insn_unit (insn), insn, clock, 0)) != 0)
2689 q++;
2690 ready[k] = 0;
2691 queue_insn (insn, cost);
2692 if (file)
2693 fprintf (file, "\n;; blocking insn %d for %d cycles",
2694 INSN_UID (insn), cost);
2697 new_ready -= q;
2699 /* Check the next group if all insns were queued. */
2700 if (j - i - q == 0)
2701 continue;
2703 /* If more than one remains, select the first one with the largest
2704 potential hazard. */
2705 else if (j - i - q > 1)
2707 best_cost = -1;
2708 for (k = i; k < j; k++)
2710 if ((insn = ready[k]) == 0)
2711 continue;
2712 if ((cost = potential_hazard (insn_unit (insn), insn, 0))
2713 > best_cost)
2715 best_cost = cost;
2716 best_insn = k;
2720 /* We have found a suitable insn to schedule. */
2721 break;
2724 /* Move the best insn to be front of the ready list. */
2725 if (best_insn != 0)
2727 if (file)
2729 fprintf (file, ", now");
2730 for (i = 0; i < n_ready; i++)
2731 if (ready[i])
2732 fprintf (file, " %d", INSN_UID (ready[i]));
2733 fprintf (file, "\n;; insn %d has a greater potential hazard",
2734 INSN_UID (ready[best_insn]));
2736 for (i = best_insn; i > 0; i--)
2738 insn = ready[i-1];
2739 ready[i-1] = ready[i];
2740 ready[i] = insn;
2744 /* Compact the ready list. */
2745 if (new_ready < n_ready)
2746 for (i = j = 0; i < n_ready; i++)
2747 if (ready[i])
2748 ready[j++] = ready[i];
2750 return new_ready;
2753 /* Add a REG_DEAD note for REG to INSN, reusing a REG_DEAD note from the
2754 dead_notes list. */
2756 static void
2757 create_reg_dead_note (reg, insn)
2758 rtx reg, insn;
2760 rtx link;
2762 /* The number of registers killed after scheduling must be the same as the
2763 number of registers killed before scheduling. The number of REG_DEAD
2764 notes may not be conserved, i.e. two SImode hard register REG_DEAD notes
2765 might become one DImode hard register REG_DEAD note, but the number of
2766 registers killed will be conserved.
2768 We carefully remove REG_DEAD notes from the dead_notes list, so that
2769 there will be none left at the end. If we run out early, then there
2770 is a bug somewhere in flow, combine and/or sched. */
2772 if (dead_notes == 0)
2774 #if 1
2775 abort ();
2776 #else
2777 link = rtx_alloc (EXPR_LIST);
2778 PUT_REG_NOTE_KIND (link, REG_DEAD);
2779 #endif
2781 else
2783 /* Number of regs killed by REG. */
2784 int regs_killed = (REGNO (reg) >= FIRST_PSEUDO_REGISTER ? 1
2785 : HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg)));
2786 /* Number of regs killed by REG_DEAD notes taken off the list. */
2787 int reg_note_regs;
2789 link = dead_notes;
2790 reg_note_regs = (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
2791 : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
2792 GET_MODE (XEXP (link, 0))));
2793 while (reg_note_regs < regs_killed)
2795 link = XEXP (link, 1);
2796 reg_note_regs += (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
2797 : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
2798 GET_MODE (XEXP (link, 0))));
2800 dead_notes = XEXP (link, 1);
2802 /* If we took too many register kills off, put the extra ones back. */
2803 while (reg_note_regs > regs_killed)
2805 rtx temp_reg, temp_link;
2807 temp_reg = gen_rtx_REG (word_mode, 0);
2808 temp_link = rtx_alloc (EXPR_LIST);
2809 PUT_REG_NOTE_KIND (temp_link, REG_DEAD);
2810 XEXP (temp_link, 0) = temp_reg;
2811 XEXP (temp_link, 1) = dead_notes;
2812 dead_notes = temp_link;
2813 reg_note_regs--;
2817 XEXP (link, 0) = reg;
2818 XEXP (link, 1) = REG_NOTES (insn);
2819 REG_NOTES (insn) = link;
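
/* Editorial example of the conservation argument above: killing a
   DImode hard register pair may consume two SImode REG_DEAD notes from
   dead_notes (regs_killed = 2 = 1 + 1); conversely, killing one SImode
   register when only a DImode note is available consumes that note
   (reg_note_regs = 2) and pushes one single-register placeholder note
   back, so the total count of killed registers is unchanged.  */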
2822 /* Subroutine of attach_deaths_insn--handles the recursive search
2823 through INSN. If SET_P is true, then X is being modified by the insn. */
2825 static void
2826 attach_deaths (x, insn, set_p)
2827 rtx x;
2828 rtx insn;
2829 int set_p;
2831 register int i;
2832 register int j;
2833 register enum rtx_code code;
2834 register char *fmt;
2836 if (x == 0)
2837 return;
2839 code = GET_CODE (x);
2841 switch (code)
2843 case CONST_INT:
2844 case CONST_DOUBLE:
2845 case LABEL_REF:
2846 case SYMBOL_REF:
2847 case CONST:
2848 case CODE_LABEL:
2849 case PC:
2850 case CC0:
2851 /* Get rid of the easy cases first. */
2852 return;
2854 case REG:
2856 /* If the register dies in this insn, queue that note, and mark
2857 this register as needing to die. */
2858 /* This code is very similar to mark_used_1 (if set_p is false)
2859 and mark_set_1 (if set_p is true) in flow.c. */
2861 register int regno;
2862 int some_needed;
2863 int all_needed;
2865 if (set_p)
2866 return;
2868 regno = REGNO (x);
2869 all_needed = some_needed = REGNO_REG_SET_P (old_live_regs, regno);
2870 if (regno < FIRST_PSEUDO_REGISTER)
2872 int n;
2874 n = HARD_REGNO_NREGS (regno, GET_MODE (x));
2875 while (--n > 0)
2877 int needed = (REGNO_REG_SET_P (old_live_regs, regno + n));
2878 some_needed |= needed;
2879 all_needed &= needed;
2883 /* If it wasn't live before we started, then add a REG_DEAD note.
2884 We must check the previous lifetime info not the current info,
2885 because we may have to execute this code several times, e.g.
2886 once for a clobber (which doesn't add a note) and later
2887 for a use (which does add a note).
2889 Always make the register live. We must do this even if it was
2890 live before, because this may be an insn which sets and uses
2891 the same register, in which case the register has already been
2892 killed, so we must make it live again.
2894 Global registers are always live, and should never have a REG_DEAD
2895 note added for them, so none of the code below applies to them. */
2897 if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
2899 /* Never add REG_DEAD notes for the FRAME_POINTER_REGNUM or the
2900 STACK_POINTER_REGNUM, since these are always considered to be
2901 live. Similarly for ARG_POINTER_REGNUM if it is fixed. */
2902 if (regno != FRAME_POINTER_REGNUM
2903 #if HARD_FRAME_POINTER_REGNUM != FRAME_POINTER_REGNUM
2904 && ! (regno == HARD_FRAME_POINTER_REGNUM)
2905 #endif
2906 #if ARG_POINTER_REGNUM != FRAME_POINTER_REGNUM
2907 && ! (regno == ARG_POINTER_REGNUM && fixed_regs[regno])
2908 #endif
2909 && regno != STACK_POINTER_REGNUM)
2911 if (! all_needed && ! dead_or_set_p (insn, x))
2913 /* Check for the case where the register dying partially
2914 overlaps the register set by this insn. */
2915 if (regno < FIRST_PSEUDO_REGISTER
2916 && HARD_REGNO_NREGS (regno, GET_MODE (x)) > 1)
2918 int n = HARD_REGNO_NREGS (regno, GET_MODE (x));
2919 while (--n >= 0)
2920 some_needed |= dead_or_set_regno_p (insn, regno + n);
2923 /* If none of the words in X is needed, make a REG_DEAD
2924 note. Otherwise, we must make partial REG_DEAD
2925 notes. */
2926 if (! some_needed)
2927 create_reg_dead_note (x, insn);
2928 else
2930 int i;
2932 /* Don't make a REG_DEAD note for a part of a
2933 register that is set in the insn. */
2934 for (i = HARD_REGNO_NREGS (regno, GET_MODE (x)) - 1;
2935 i >= 0; i--)
2936 if (! REGNO_REG_SET_P (old_live_regs, regno + i)
2937 && ! dead_or_set_regno_p (insn, regno + i))
2938 create_reg_dead_note (gen_rtx_REG
2939 (reg_raw_mode[regno + i],
2940 regno + i),
2941 insn);
2946 if (regno < FIRST_PSEUDO_REGISTER)
2948 int j = HARD_REGNO_NREGS (regno, GET_MODE (x));
2949 while (--j >= 0)
2951 CLEAR_REGNO_REG_SET (bb_dead_regs, regno + j);
2952 SET_REGNO_REG_SET (bb_live_regs, regno + j);
2955 else
2957 CLEAR_REGNO_REG_SET (bb_dead_regs, regno);
2958 SET_REGNO_REG_SET (bb_live_regs, regno);
2961 return;
2964 case MEM:
2965 /* Handle tail-recursive case. */
2966 attach_deaths (XEXP (x, 0), insn, 0);
2967 return;
2969 case SUBREG:
2970 attach_deaths (SUBREG_REG (x), insn,
2971 set_p && ((GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)))
2972 <= UNITS_PER_WORD)
2973 || (GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)))
2974 == GET_MODE_SIZE (GET_MODE ((x))))));
2975 return;
2977 case STRICT_LOW_PART:
2978 attach_deaths (XEXP (x, 0), insn, 0);
2979 return;
2981 case ZERO_EXTRACT:
2982 case SIGN_EXTRACT:
2983 attach_deaths (XEXP (x, 0), insn, 0);
2984 attach_deaths (XEXP (x, 1), insn, 0);
2985 attach_deaths (XEXP (x, 2), insn, 0);
2986 return;
2988 default:
2989 /* Other cases: walk the insn. */
2990 fmt = GET_RTX_FORMAT (code);
2991 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
2993 if (fmt[i] == 'e')
2994 attach_deaths (XEXP (x, i), insn, 0);
2995 else if (fmt[i] == 'E')
2996 for (j = 0; j < XVECLEN (x, i); j++)
2997 attach_deaths (XVECEXP (x, i, j), insn, 0);
3002 /* After INSN has executed, add register death notes for each register
3003 that is dead after INSN. */
3005 static void
3006 attach_deaths_insn (insn)
3007 rtx insn;
3009 rtx x = PATTERN (insn);
3010 register RTX_CODE code = GET_CODE (x);
3011 rtx link;
3013 if (code == SET)
3015 attach_deaths (SET_SRC (x), insn, 0);
3017 /* A register might die here even if it is the destination, e.g.
3018 it is the target of a volatile read and is otherwise unused.
3019 Hence we must always call attach_deaths for the SET_DEST. */
3020 attach_deaths (SET_DEST (x), insn, 1);
3022 else if (code == PARALLEL)
3024 register int i;
3025 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
3027 code = GET_CODE (XVECEXP (x, 0, i));
3028 if (code == SET)
3030 attach_deaths (SET_SRC (XVECEXP (x, 0, i)), insn, 0);
3032 attach_deaths (SET_DEST (XVECEXP (x, 0, i)), insn, 1);
3034 /* Flow does not add REG_DEAD notes to registers that die in
3035 clobbers, so we can't either. */
3036 else if (code != CLOBBER)
3037 attach_deaths (XVECEXP (x, 0, i), insn, 0);
3040 /* If this is a CLOBBER, only add REG_DEAD notes to registers inside a
3041 MEM being clobbered, just like flow. */
3042 else if (code == CLOBBER && GET_CODE (XEXP (x, 0)) == MEM)
3043 attach_deaths (XEXP (XEXP (x, 0), 0), insn, 0);
3044 /* Otherwise don't add a death note to things being clobbered. */
3045 else if (code != CLOBBER)
3046 attach_deaths (x, insn, 0);
3048 /* Make death notes for things used in the called function. */
3049 if (GET_CODE (insn) == CALL_INSN)
3050 for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
3051 attach_deaths (XEXP (XEXP (link, 0), 0), insn,
3052 GET_CODE (XEXP (link, 0)) == CLOBBER);
3055 /* Delete notes beginning with INSN and maybe put them in the chain
3056 of notes ended by NOTE_LIST.
3057 Returns the insn following the notes. */
3059 static rtx
3060 unlink_notes (insn, tail)
3061 rtx insn, tail;
3063 rtx prev = PREV_INSN (insn);
3065 while (insn != tail && GET_CODE (insn) == NOTE)
3067 rtx next = NEXT_INSN (insn);
3068 /* Delete the note from its current position. */
3069 if (prev)
3070 NEXT_INSN (prev) = next;
3071 if (next)
3072 PREV_INSN (next) = prev;
3074 if (write_symbols != NO_DEBUG && NOTE_LINE_NUMBER (insn) > 0)
3075 /* Record line-number notes so they can be reused. */
3076 LINE_NOTE (insn) = insn;
3078 /* Don't save away NOTE_INSN_SETJMPs, because they must remain
3079 immediately after the call they follow. We use a fake
3080 (REG_DEAD (const_int -1)) note to remember them.
3081 Likewise with NOTE_INSN_{LOOP,EHREGION}_{BEG, END}. */
3082 else if (NOTE_LINE_NUMBER (insn) != NOTE_INSN_SETJMP
3083 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_BEG
3084 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_END
3085 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_BEG
3086 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_END)
3088 /* Insert the note at the end of the notes list. */
3089 PREV_INSN (insn) = note_list;
3090 if (note_list)
3091 NEXT_INSN (note_list) = insn;
3092 note_list = insn;
3095 insn = next;
3097 return insn;
3100 /* Constructor for `sometimes' data structure. */
3102 static int
3103 new_sometimes_live (regs_sometimes_live, regno, sometimes_max)
3104 struct sometimes *regs_sometimes_live;
3105 int regno;
3106 int sometimes_max;
3108 register struct sometimes *p;
3110 /* There should never be a register greater than max_regno here. If there
3111 is, it means that a define_split has created a new pseudo reg. This
3112 is not allowed, since there will not be flow info available for any
3113 new register, so catch the error here. */
3114 if (regno >= max_regno)
3115 abort ();
3117 p = &regs_sometimes_live[sometimes_max];
3118 p->regno = regno;
3119 p->live_length = 0;
3120 p->calls_crossed = 0;
3121 sometimes_max++;
3122 return sometimes_max;
3125 /* Count lengths of all regs we are currently tracking,
3126 and find new registers no longer live. */
3128 static void
3129 finish_sometimes_live (regs_sometimes_live, sometimes_max)
3130 struct sometimes *regs_sometimes_live;
3131 int sometimes_max;
3133 int i;
3135 for (i = 0; i < sometimes_max; i++)
3137 register struct sometimes *p = &regs_sometimes_live[i];
3138 int regno = p->regno;
3140 sched_reg_live_length[regno] += p->live_length;
3141 sched_reg_n_calls_crossed[regno] += p->calls_crossed;
3145 /* Search INSN for fake REG_DEAD note pairs for NOTE_INSN_SETJMP,
3146 NOTE_INSN_{LOOP,EHREGION}_{BEG,END}; and convert them back into
3147 NOTEs. The REG_DEAD note following the first one contains the saved
3148 value of NOTE_BLOCK_NUMBER, which is useful for
3149 NOTE_INSN_EH_REGION_{BEG,END} NOTEs. LAST is the last instruction
3150 output by the instruction scheduler. Return the new value of LAST. */
3152 static rtx
3153 reemit_notes (insn, last)
3154 rtx insn;
3155 rtx last;
3157 rtx note;
3159 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
3161 if (REG_NOTE_KIND (note) == REG_DEAD
3162 && GET_CODE (XEXP (note, 0)) == CONST_INT)
3164 if (INTVAL (XEXP (note, 0)) == NOTE_INSN_SETJMP)
3166 CONST_CALL_P (emit_note_after (INTVAL (XEXP (note, 0)), insn))
3167 = CONST_CALL_P (note);
3168 remove_note (insn, note);
3169 note = XEXP (note, 1);
3171 else
3173 last = emit_note_before (INTVAL (XEXP (note, 0)), last);
3174 remove_note (insn, note);
3175 note = XEXP (note, 1);
3176 NOTE_BLOCK_NUMBER (last) = INTVAL (XEXP (note, 0));
3178 remove_note (insn, note);
3181 return last;
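
/* Editorial example: sched_analyze stores each saved note as the pair

	(REG_DEAD (const_int NOTE_LINE_NUMBER))
	(REG_DEAD (const_int NOTE_BLOCK_NUMBER))

   on REG_NOTES; the loop above emits a real NOTE from the first
   element and then reads the block number from the second, except for
   NOTE_INSN_SETJMP, which is re-emitted after INSN (its call) rather
   than before LAST.  */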
3184 /* Use modified list scheduling to rearrange insns in basic block
3185 B. FILE, if nonzero, is where we dump interesting output about
3186 this pass. */
3188 static void
3189 schedule_block (b, file)
3190 int b;
3191 FILE *file;
3193 rtx insn, last;
3194 rtx *ready, link;
3195 int i, j, n_ready = 0, new_ready, n_insns;
3196 int sched_n_insns = 0;
3197 int clock;
3198 #define NEED_NOTHING 0
3199 #define NEED_HEAD 1
3200 #define NEED_TAIL 2
3201 int new_needs;
3203 /* HEAD and TAIL delimit the region being scheduled. */
3204 rtx head = basic_block_head[b];
3205 rtx tail = basic_block_end[b];
3206 /* PREV_HEAD and NEXT_TAIL are the boundaries of the insns
3207 being scheduled. When the insns have been ordered,
3208 these insns delimit where the new insns are to be
3209 spliced back into the insn chain. */
3210 rtx next_tail;
3211 rtx prev_head;
3213 /* Keep life information accurate. */
3214 register struct sometimes *regs_sometimes_live;
3215 int sometimes_max;
3217 if (file)
3218 fprintf (file, ";;\t -- basic block number %d from %d to %d --\n",
3219 b, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));
3221 i = max_reg_num ();
3222 reg_last_uses = (rtx *) alloca (i * sizeof (rtx));
3223 bzero ((char *) reg_last_uses, i * sizeof (rtx));
3224 reg_last_sets = (rtx *) alloca (i * sizeof (rtx));
3225 bzero ((char *) reg_last_sets, i * sizeof (rtx));
3226 reg_pending_sets = ALLOCA_REG_SET ();
3227 CLEAR_REG_SET (reg_pending_sets);
3228 reg_pending_sets_all = 0;
3229 clear_units ();
3231 /* Remove certain insns at the beginning from scheduling,
3232 by advancing HEAD. */
3234 /* At the start of a function, before reload has run, don't delay getting
3235 parameters from hard registers into pseudo registers. */
3236 if (reload_completed == 0 && b == 0)
3238 while (head != tail
3239 && GET_CODE (head) == NOTE
3240 && NOTE_LINE_NUMBER (head) != NOTE_INSN_FUNCTION_BEG)
3241 head = NEXT_INSN (head);
3242 while (head != tail
3243 && GET_CODE (head) == INSN
3244 && GET_CODE (PATTERN (head)) == SET)
3246 rtx src = SET_SRC (PATTERN (head));
3247 while (GET_CODE (src) == SUBREG
3248 || GET_CODE (src) == SIGN_EXTEND
3249 || GET_CODE (src) == ZERO_EXTEND
3250 || GET_CODE (src) == SIGN_EXTRACT
3251 || GET_CODE (src) == ZERO_EXTRACT)
3252 src = XEXP (src, 0);
3253 if (GET_CODE (src) != REG
3254 || REGNO (src) >= FIRST_PSEUDO_REGISTER)
3255 break;
3256 /* Keep this insn from ever being scheduled. */
3257 INSN_REF_COUNT (head) = 1;
3258 head = NEXT_INSN (head);
3262 /* Don't include any notes or labels at the beginning of the
3263 basic block, or notes at the ends of basic blocks. */
3264 while (head != tail)
3266 if (GET_CODE (head) == NOTE)
3267 head = NEXT_INSN (head);
3268 else if (GET_CODE (tail) == NOTE)
3269 tail = PREV_INSN (tail);
3270 else if (GET_CODE (head) == CODE_LABEL)
3271 head = NEXT_INSN (head);
3272 else break;
3274 /* If the only insn left is a NOTE or a CODE_LABEL, then there is no need
3275 to schedule this block. */
3276 if (head == tail
3277 && (GET_CODE (head) == NOTE || GET_CODE (head) == CODE_LABEL))
3278 goto ret;
3280 #if 0
3281 /* This short-cut doesn't work. It does not count call insns crossed by
3282 registers in reg_sometimes_live. It does not mark these registers as
3283 dead if they die in this block. It does not mark these registers live
3284 (or create new reg_sometimes_live entries if necessary) if they are born
3285 in this block.
3287 The easy solution is to just always schedule a block. This block only
3288 has one insn, so this won't slow down this pass by much. */
3290 if (head == tail)
3291 goto ret;
3292 #endif
3294 /* Now HEAD through TAIL are the insns actually to be rearranged;
3295 Let PREV_HEAD and NEXT_TAIL enclose them. */
3296 prev_head = PREV_INSN (head);
3297 next_tail = NEXT_INSN (tail);
3299 /* Initialize basic block data structures. */
3300 dead_notes = 0;
3301 pending_read_insns = 0;
3302 pending_read_mems = 0;
3303 pending_write_insns = 0;
3304 pending_write_mems = 0;
3305 pending_lists_length = 0;
3306 last_pending_memory_flush = 0;
3307 last_function_call = 0;
3308 last_scheduled_insn = 0;
3310 LOG_LINKS (sched_before_next_call) = 0;
3312 n_insns = sched_analyze (head, tail);
3313 if (n_insns == 0)
3315 free_pending_lists ();
3316 goto ret;
3319 /* Allocate vector to hold insns to be rearranged (except those
3320 insns which are controlled by an insn with SCHED_GROUP_P set).
3321 All these insns are included between ORIG_HEAD and ORIG_TAIL,
3322 as those variables ultimately are set up. */
3323 ready = (rtx *) alloca ((n_insns+1) * sizeof (rtx));
3325 /* TAIL is now the last of the insns to be rearranged.
3326 Put those insns into the READY vector. */
3327 insn = tail;
3329 /* For all branches, calls, uses, and cc0 setters, force them to remain
3330 in order at the end of the block by adding dependencies and giving
3331 the last a high priority. There may be notes present, and prev_head
3332 may also be a note.
3334 Branches must obviously remain at the end. Calls should remain at the
3335 end since moving them results in worse register allocation. Uses remain
3336 at the end to ensure proper register allocation. cc0 setters remain
3337 at the end because they can't be moved away from their cc0 user. */
3338 last = 0;
3339 while (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN
3340 || (GET_CODE (insn) == INSN
3341 && (GET_CODE (PATTERN (insn)) == USE
3342 #ifdef HAVE_cc0
3343 || sets_cc0_p (PATTERN (insn))
3344 #endif
3346 || GET_CODE (insn) == NOTE)
3348 if (GET_CODE (insn) != NOTE)
3350 priority (insn);
3351 if (last == 0)
3353 ready[n_ready++] = insn;
3354 INSN_PRIORITY (insn) = TAIL_PRIORITY - i;
3355 INSN_REF_COUNT (insn) = 0;
3357 else if (! find_insn_list (insn, LOG_LINKS (last)))
3359 add_dependence (last, insn, REG_DEP_ANTI);
3360 INSN_REF_COUNT (insn)++;
3362 last = insn;
3364 /* Skip over insns that are part of a group. */
3365 while (SCHED_GROUP_P (insn))
3367 insn = prev_nonnote_insn (insn);
3368 priority (insn);
3372 insn = PREV_INSN (insn);
3373 /* Don't overrun the bounds of the basic block. */
3374 if (insn == prev_head)
3375 break;
3378 /* Assign priorities to instructions. Also check whether they
3379 are in priority order already. If so then I will be nonnegative.
3380 We use this shortcut only before reloading. */
3381 #if 0
3382 i = reload_completed ? DONE_PRIORITY : MAX_PRIORITY;
3383 #endif
3385 for (; insn != prev_head; insn = PREV_INSN (insn))
3387 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
3389 priority (insn);
3390 if (INSN_REF_COUNT (insn) == 0)
3392 if (last == 0)
3393 ready[n_ready++] = insn;
3394 else
3396 /* Make this dependent on the last of the instructions
3397 that must remain in order at the end of the block. */
3398 add_dependence (last, insn, REG_DEP_ANTI);
3399 INSN_REF_COUNT (insn) = 1;
3402 if (SCHED_GROUP_P (insn))
3404 while (SCHED_GROUP_P (insn))
3406 insn = prev_nonnote_insn (insn);
3407 priority (insn);
3409 continue;
3411 #if 0
3412 if (i < 0)
3413 continue;
3414 if (INSN_PRIORITY (insn) < i)
3415 i = INSN_PRIORITY (insn);
3416 else if (INSN_PRIORITY (insn) > i)
3417 i = DONE_PRIORITY;
3418 #endif
3422 #if 0
3423 /* This short-cut doesn't work. It does not count call insns crossed by
3424 registers in reg_sometimes_live. It does not mark these registers as
3425 dead if they die in this block. It does not mark these registers live
3426 (or create new reg_sometimes_live entries if necessary) if they are born
3427 in this block.
3429 The easy solution is to just always schedule a block. These blocks tend
3430 to be very short, so this doesn't slow down this pass by much. */
3432 /* If existing order is good, don't bother to reorder. */
3433 if (i != DONE_PRIORITY)
3435 if (file)
3436 fprintf (file, ";; already scheduled\n");
3438 if (reload_completed == 0)
3440 for (i = 0; i < sometimes_max; i++)
3441 regs_sometimes_live[i].live_length += n_insns;
3443 finish_sometimes_live (regs_sometimes_live, sometimes_max);
3445 free_pending_lists ();
3446 goto ret;
3448 #endif
3450 /* Scan all the insns to be scheduled, removing NOTE insns
3451 and register death notes.
3452 Line number NOTE insns end up in NOTE_LIST.
3453 Register death notes end up in DEAD_NOTES.
3455 Recreate the register life information for the end of this basic
3456 block. */
3458 if (reload_completed == 0)
3460 COPY_REG_SET (bb_live_regs, basic_block_live_at_start[b]);
3461 CLEAR_REG_SET (bb_dead_regs);
3463 if (b == 0)
3465 /* This is the first block in the function. There may be insns
3466 before head that we can't schedule. We still need to examine
3467 them though for accurate register lifetime analysis. */
3469 /* We don't want to remove any REG_DEAD notes as the code below
3470 does. */
3472 for (insn = basic_block_head[b]; insn != head;
3473 insn = NEXT_INSN (insn))
3474 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
3476 /* See if the register gets born here. */
3477 /* We must check for registers being born before we check for
3478 registers dying. It is possible for a register to be born
3479 and die in the same insn, e.g. reading from a volatile
3480 memory location into an otherwise unused register. Such
3481 a register must be marked as dead after this insn. */
3482 if (GET_CODE (PATTERN (insn)) == SET
3483 || GET_CODE (PATTERN (insn)) == CLOBBER)
3484 sched_note_set (b, PATTERN (insn), 0);
3485 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3487 int j;
3488 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3489 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3490 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3491 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3493 /* ??? This code is obsolete and should be deleted. It
3494 is harmless though, so we will leave it in for now. */
3495 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3496 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
3497 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3500 /* Each call clobbers (makes live) all call-clobbered regs
3501 that are not global or fixed. Note that the function-value
3502 reg is a call_clobbered reg. */
3504 if (GET_CODE (insn) == CALL_INSN)
3506 int j;
3507 for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
3508 if (call_used_regs[j] && ! global_regs[j]
3509 && ! fixed_regs[j])
3511 SET_REGNO_REG_SET (bb_live_regs, j);
3512 CLEAR_REGNO_REG_SET (bb_dead_regs, j);
3516 for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
3518 if ((REG_NOTE_KIND (link) == REG_DEAD
3519 || REG_NOTE_KIND (link) == REG_UNUSED)
3520 /* Verify that the REG_NOTE has a valid value. */
3521 && GET_CODE (XEXP (link, 0)) == REG)
3523 register int regno = REGNO (XEXP (link, 0));
3525 if (regno < FIRST_PSEUDO_REGISTER)
3527 int j = HARD_REGNO_NREGS (regno,
3528 GET_MODE (XEXP (link, 0)));
3529 while (--j >= 0)
3531 CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
3532 SET_REGNO_REG_SET (bb_dead_regs, regno + j);
3535 else
3537 CLEAR_REGNO_REG_SET (bb_live_regs, regno);
3538 SET_REGNO_REG_SET (bb_dead_regs, regno);
3546 /* If debugging information is being produced, keep track of the line
3547 number notes for each insn. */
3548 if (write_symbols != NO_DEBUG)
3550 /* We must use the true line number for the first insn in the block
3551 that was computed and saved at the start of this pass. We can't
3552 use the current line number, because scheduling of the previous
3553 block may have changed the current line number. */
3554 rtx line = line_note_head[b];
3556 for (insn = basic_block_head[b];
3557 insn != next_tail;
3558 insn = NEXT_INSN (insn))
3559 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
3560 line = insn;
3561 else
3562 LINE_NOTE (insn) = line;
3565 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3567 rtx prev, next, link;
3569 /* Farm out notes. This is needed to keep the debugger from
3570 getting completely deranged. */
3571 if (GET_CODE (insn) == NOTE)
3573 prev = insn;
3574 insn = unlink_notes (insn, next_tail);
3575 if (prev == tail)
3576 abort ();
3577 if (prev == head)
3578 abort ();
3579 if (insn == next_tail)
3580 abort ();
3583 if (reload_completed == 0
3584 && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
3586 /* See if the register gets born here. */
3587 /* We must check for registers being born before we check for
3588 registers dying. It is possible for a register to be born and
3589 die in the same insn, e.g. reading from a volatile memory
3590 location into an otherwise unused register. Such a register
3591 must be marked as dead after this insn. */
3592 if (GET_CODE (PATTERN (insn)) == SET
3593 || GET_CODE (PATTERN (insn)) == CLOBBER)
3594 sched_note_set (b, PATTERN (insn), 0);
3595 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3597 int j;
3598 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3599 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3600 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3601 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3603 /* ??? This code is obsolete and should be deleted. It
3604 is harmless though, so we will leave it in for now. */
3605 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3606 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
3607 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3610 /* Each call clobbers (makes live) all call-clobbered regs that are
3611 not global or fixed. Note that the function-value reg is a
3612 call_clobbered reg. */
3614 if (GET_CODE (insn) == CALL_INSN)
3616 int j;
3617 for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
3618 if (call_used_regs[j] && ! global_regs[j]
3619 && ! fixed_regs[j])
3621 SET_REGNO_REG_SET (bb_live_regs, j);
3622 CLEAR_REGNO_REG_SET (bb_dead_regs, j);
3626 /* Need to know what registers this insn kills. */
3627 for (prev = 0, link = REG_NOTES (insn); link; link = next)
3629 next = XEXP (link, 1);
3630 if ((REG_NOTE_KIND (link) == REG_DEAD
3631 || REG_NOTE_KIND (link) == REG_UNUSED)
3632 /* Verify that the REG_NOTE has a valid value. */
3633 && GET_CODE (XEXP (link, 0)) == REG)
3635 register int regno = REGNO (XEXP (link, 0));
3637 /* Only unlink REG_DEAD notes; leave REG_UNUSED notes
3638 alone. */
3639 if (REG_NOTE_KIND (link) == REG_DEAD)
3641 if (prev)
3642 XEXP (prev, 1) = next;
3643 else
3644 REG_NOTES (insn) = next;
3645 XEXP (link, 1) = dead_notes;
3646 dead_notes = link;
3648 else
3649 prev = link;
3651 if (regno < FIRST_PSEUDO_REGISTER)
3653 int j = HARD_REGNO_NREGS (regno,
3654 GET_MODE (XEXP (link, 0)));
3655 while (--j >= 0)
3657 CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
3658 SET_REGNO_REG_SET (bb_dead_regs, regno + j);
3661 else
3663 CLEAR_REGNO_REG_SET (bb_live_regs, regno);
3664 SET_REGNO_REG_SET (bb_dead_regs, regno);
3667 else
3668 prev = link;
3673 if (reload_completed == 0)
3675 /* Keep track of register lives. */
3676 old_live_regs = ALLOCA_REG_SET ();
3677 regs_sometimes_live
3678 = (struct sometimes *) alloca (max_regno * sizeof (struct sometimes));
3679 sometimes_max = 0;
3681 /* Start with registers live at end. */
3682 COPY_REG_SET (old_live_regs, bb_live_regs);
3683 EXECUTE_IF_SET_IN_REG_SET (bb_live_regs, 0, j,
3685 sometimes_max
3686 = new_sometimes_live (regs_sometimes_live,
3687 j, sometimes_max);
3691 SCHED_SORT (ready, n_ready, 1);
3693 if (file)
3695 fprintf (file, ";; ready list initially:\n;; ");
3696 for (i = 0; i < n_ready; i++)
3697 fprintf (file, "%d ", INSN_UID (ready[i]));
3698 fprintf (file, "\n\n");
3700 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3701 if (INSN_PRIORITY (insn) > 0)
3702 fprintf (file, ";; insn[%4d]: priority = %4d, ref_count = %4d\n",
3703 INSN_UID (insn), INSN_PRIORITY (insn),
3704 INSN_REF_COUNT (insn));
3707 /* Now HEAD and TAIL are going to become disconnected
3708 entirely from the insn chain. */
3709 tail = 0;
3711 /* Q_SIZE will always be zero here. */
3712 q_ptr = 0; clock = 0;
3713 bzero ((char *) insn_queue, sizeof (insn_queue));
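/* INSN_QUEUE is used as a circular buffer: slot Q_PTR holds the
insns that become ready on the current cycle, and the slot N
entries ahead (reached via NEXT_Q_AFTER) holds those that must
wait N more cycles. For example, if INSN_QUEUE_SIZE were 4 and
Q_PTR were 3, NEXT_Q (Q_PTR) would wrap around to slot 0. */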
3715 /* Now, perform list scheduling. */
3717 /* Where we start inserting insns is after TAIL. */
3718 last = next_tail;
3720 new_needs = (NEXT_INSN (prev_head) == basic_block_head[b]
3721 ? NEED_HEAD : NEED_NOTHING);
3722 if (PREV_INSN (next_tail) == basic_block_end[b])
3723 new_needs |= NEED_TAIL;
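/* Record whether the region being scheduled includes the first
and/or last insn of block B, so that BASIC_BLOCK_HEAD and
BASIC_BLOCK_END can be updated once the insns have been
rechained. */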
3725 new_ready = n_ready;
3726 while (sched_n_insns < n_insns)
3728 q_ptr = NEXT_Q (q_ptr); clock++;
3730 /* Add all pending insns that can be scheduled without stalls to the
3731 ready list. */
3732 for (insn = insn_queue[q_ptr]; insn; insn = NEXT_INSN (insn))
3734 if (file)
3735 fprintf (file, ";; launching %d before %d with no stalls at T-%d\n",
3736 INSN_UID (insn), INSN_UID (last), clock);
3737 ready[new_ready++] = insn;
3738 q_size -= 1;
3740 insn_queue[q_ptr] = 0;
3742 /* If there are no ready insns, stall until one is ready and add all
3743 of the pending insns at that point to the ready list. */
3744 if (new_ready == 0)
3746 register int stalls;
3748 for (stalls = 1; stalls < INSN_QUEUE_SIZE; stalls++)
3749 if ((insn = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]) != 0)
3751 for (; insn; insn = NEXT_INSN (insn))
3753 if (file)
3754 fprintf (file, ";; launching %d before %d with %d stalls at T-%d\n",
3755 INSN_UID (insn), INSN_UID (last), stalls, clock);
3756 ready[new_ready++] = insn;
3757 q_size -= 1;
3759 insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = 0;
3760 break;
3763 q_ptr = NEXT_Q_AFTER (q_ptr, stalls); clock += stalls;
3766 /* There should be some instructions waiting to fire. */
3767 if (new_ready == 0)
3768 abort ();
3770 if (file)
3772 fprintf (file, ";; ready list at T-%d:", clock);
3773 for (i = 0; i < new_ready; i++)
3774 fprintf (file, " %d (%x)",
3775 INSN_UID (ready[i]), INSN_PRIORITY (ready[i]));
3778 /* Sort the ready list and choose the best insn to schedule. Select
3779 which insn should issue in this cycle and queue those that are
3780 blocked by function unit hazards.
3782 N_READY holds the number of items that were scheduled the last time,
3783 minus the one instruction scheduled on the last loop iteration; it
3784 is not modified for any other reason in this loop. */
3786 SCHED_SORT (ready, new_ready, n_ready);
3787 if (MAX_BLOCKAGE > 1)
3789 new_ready = schedule_select (ready, new_ready, clock, file);
3790 if (new_ready == 0)
3792 if (file)
3793 fprintf (file, "\n");
3794 /* We must set n_ready here, to ensure that sorting always
3795 occurs when we come back to the SCHED_SORT line above. */
3796 n_ready = 0;
3797 continue;
3800 n_ready = new_ready;
3801 last_scheduled_insn = insn = ready[0];
3803 /* The first insn scheduled becomes the new tail. */
3804 if (tail == 0)
3805 tail = insn;
3807 if (file)
3809 fprintf (file, ", now");
3810 for (i = 0; i < n_ready; i++)
3811 fprintf (file, " %d", INSN_UID (ready[i]));
3812 fprintf (file, "\n");
3815 if (DONE_PRIORITY_P (insn))
3816 abort ();
3818 if (reload_completed == 0)
3820 /* Process this insn, and each insn linked to this one which must
3821 be immediately output after this insn. */
3824 /* First we kill registers set by this insn, and then we
3825 make registers used by this insn live. This is the opposite
3826 order used above because we are traversing the instructions
3827 backwards. */
3829 /* Strictly speaking, we should scan REG_UNUSED notes and make
3830 every register mentioned there live; however, we would just
3831 kill them again immediately below, so there doesn't seem to
3832 be any reason to bother doing this. */
3834 /* See if this is the last notice we must take of a register. */
3835 if (GET_CODE (PATTERN (insn)) == SET
3836 || GET_CODE (PATTERN (insn)) == CLOBBER)
3837 sched_note_set (b, PATTERN (insn), 1);
3838 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3840 int j;
3841 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3842 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3843 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3844 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 1);
3847 /* This code keeps life analysis information up to date. */
3848 if (GET_CODE (insn) == CALL_INSN)
3850 register struct sometimes *p;
3852 /* A call kills all call used registers that are not
3853 global or fixed, except for those mentioned in the call
3854 pattern which will be made live again later. */
3855 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
3856 if (call_used_regs[i] && ! global_regs[i]
3857 && ! fixed_regs[i])
3859 CLEAR_REGNO_REG_SET (bb_live_regs, i);
3860 SET_REGNO_REG_SET (bb_dead_regs, i);
3863 /* Regs live at the time of a call instruction must not
3864 go in a register clobbered by calls. Record this for
3865 all regs now live. Note that insns which are born or
3866 die in a call do not cross a call, so this must be done
3867 after the killings (above) and before the births
3868 (below). */
3869 p = regs_sometimes_live;
3870 for (i = 0; i < sometimes_max; i++, p++)
3871 if (REGNO_REG_SET_P (bb_live_regs, p->regno))
3872 p->calls_crossed += 1;
3875 /* Make every register used live, and add REG_DEAD notes for
3876 registers which were not live before we started. */
3877 attach_deaths_insn (insn);
3879 /* Find registers now made live by that instruction. */
3880 EXECUTE_IF_AND_COMPL_IN_REG_SET (bb_live_regs, old_live_regs, 0, i,
3882 sometimes_max
3883 = new_sometimes_live (regs_sometimes_live,
3884 i, sometimes_max);
3886 IOR_REG_SET (old_live_regs, bb_live_regs);
3888 /* Count lengths of all regs we are worrying about now,
3889 and handle registers no longer live. */
3891 for (i = 0; i < sometimes_max; i++)
3893 register struct sometimes *p = &regs_sometimes_live[i];
3894 int regno = p->regno;
3896 p->live_length += 1;
3898 if (!REGNO_REG_SET_P (bb_live_regs, p->regno))
3900 /* This is the end of one of this register's lifetime
3901 segments. Save the lifetime info collected so far,
3902 and clear its bit in the old_live_regs entry. */
3903 sched_reg_live_length[regno] += p->live_length;
3904 sched_reg_n_calls_crossed[regno] += p->calls_crossed;
3905 CLEAR_REGNO_REG_SET (old_live_regs, p->regno);
3907 /* Delete the reg_sometimes_live entry for this reg by
3908 copying the last entry over top of it. */
3909 *p = regs_sometimes_live[--sometimes_max];
3910 /* ...and decrement i so that this newly copied entry
3911 will be processed. */
3912 i--;
3916 link = insn;
3917 insn = PREV_INSN (insn);
3919 while (SCHED_GROUP_P (link));
3921 /* Set INSN back to the insn we are scheduling now. */
3922 insn = ready[0];
3925 /* Schedule INSN. Remove it from the ready list. */
3926 ready += 1;
3927 n_ready -= 1;
3929 sched_n_insns += 1;
3930 NEXT_INSN (insn) = last;
3931 PREV_INSN (last) = insn;
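/* The scheduled block is built from back to front: each newly
chosen insn is linked in immediately before LAST, the most
recently placed insn. */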
3933 /* Everything that precedes INSN now either becomes "ready", if
3934 it can execute immediately before INSN, or "pending", if
3935 there must be a delay. Give INSN high enough priority that
3936 at least one (possibly more) reg-killing insn can be launched
3937 ahead of all others. Mark INSN as scheduled by changing its
3938 priority to -1. */
3939 INSN_PRIORITY (insn) = LAUNCH_PRIORITY;
3940 new_ready = schedule_insn (insn, ready, n_ready, clock);
3941 INSN_PRIORITY (insn) = DONE_PRIORITY;
3943 /* Schedule all prior insns that must not be moved. */
3944 if (SCHED_GROUP_P (insn))
3946 /* Disable these insns from being launched, in case one of the
3947 insns in the group has a dependency on an earlier one. */
3948 link = insn;
3949 while (SCHED_GROUP_P (link))
3951 /* Disable these insns from being launched by anybody. */
3952 link = PREV_INSN (link);
3953 INSN_REF_COUNT (link) = 0;
3956 /* Now handle each group insn like the main insn was handled
3957 above. */
3958 link = insn;
3959 while (SCHED_GROUP_P (link))
3961 link = PREV_INSN (link);
3963 sched_n_insns += 1;
3965 /* ??? Why don't we set LAUNCH_PRIORITY here? */
3966 new_ready = schedule_insn (link, ready, new_ready, clock);
3967 INSN_PRIORITY (link) = DONE_PRIORITY;
3971 /* Put back NOTE_INSN_SETJMP,
3972 NOTE_INSN_{LOOP,EHREGION}_{BEGIN,END} notes. */
3974 /* To prime the loop, we need to handle INSN and all the insns in the
3975 sched group. */
3976 last = NEXT_INSN (insn);
3979 insn = PREV_INSN (last);
3981 /* Maintain a valid chain so emit_note_before works.
3982 This is necessary because PREV_INSN (insn) isn't valid
3983 (if ! SCHED_GROUP_P) and if it points to an insn already
3984 scheduled, a circularity will result. */
3985 if (! SCHED_GROUP_P (insn))
3987 NEXT_INSN (prev_head) = insn;
3988 PREV_INSN (insn) = prev_head;
3991 last = reemit_notes (insn, insn);
3993 while (SCHED_GROUP_P (insn));
3995 if (q_size != 0)
3996 abort ();
3998 if (reload_completed == 0)
3999 finish_sometimes_live (regs_sometimes_live, sometimes_max);
4001 /* HEAD is now the first insn in the chain of insns that have
4002 been scheduled by the loop above.
4003 TAIL is the last of those insns. */
4004 head = last;
4006 /* NOTE_LIST is the end of a chain of notes previously found
4007 among the insns. Insert them at the beginning of the insns. */
4008 if (note_list != 0)
4010 rtx note_head = note_list;
4011 while (PREV_INSN (note_head))
4012 note_head = PREV_INSN (note_head);
4014 PREV_INSN (head) = note_list;
4015 NEXT_INSN (note_list) = head;
4016 head = note_head;
4019 /* There should be no REG_DEAD notes leftover at the end.
4020 In practice, this can occur as the result of bugs in flow.c, combine.c,
4021 and/or sched.c. The values of the REG_DEAD notes remaining are
4022 meaningless, because dead_notes is just used as a free list. */
4023 #if 1
4024 if (dead_notes != 0)
4025 abort ();
4026 #endif
4028 if (new_needs & NEED_HEAD)
4029 basic_block_head[b] = head;
4030 PREV_INSN (head) = prev_head;
4031 NEXT_INSN (prev_head) = head;
4033 if (new_needs & NEED_TAIL)
4034 basic_block_end[b] = tail;
4035 NEXT_INSN (tail) = next_tail;
4036 PREV_INSN (next_tail) = tail;
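/* The scheduled insns are now spliced back into the function's
insn chain between PREV_HEAD and NEXT_TAIL. */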
4038 /* Restore the line-number notes of each insn. */
4039 if (write_symbols != NO_DEBUG)
4041 rtx line, note, prev, new;
4042 int notes = 0;
4044 head = basic_block_head[b];
4045 next_tail = NEXT_INSN (basic_block_end[b]);
4047 /* Determine the current line-number. We want to know the current
4048 line number of the first insn of the block here, in case it is
4049 different from the true line number that was saved earlier. If
4050 different, then we need a line number note before the first insn
4051 of this block. If it happens to be the same, then we don't want to
4052 emit another line number note here. */
4053 for (line = head; line; line = PREV_INSN (line))
4054 if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
4055 break;
4057 /* Walk the insns keeping track of the current line-number and inserting
4058 the line-number notes as needed. */
4059 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
4060 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
4061 line = insn;
4062 /* This used to emit line number notes before every non-deleted note.
4063 However, this confuses a debugger, because line notes not separated
4064 by real instructions all end up at the same address. I can find no
4065 use for line number notes before other notes, so none are emitted. */
4066 else if (GET_CODE (insn) != NOTE
4067 && (note = LINE_NOTE (insn)) != 0
4068 && note != line
4069 && (line == 0
4070 || NOTE_LINE_NUMBER (note) != NOTE_LINE_NUMBER (line)
4071 || NOTE_SOURCE_FILE (note) != NOTE_SOURCE_FILE (line)))
4073 line = note;
4074 prev = PREV_INSN (insn);
4075 if (LINE_NOTE (note))
4077 /* Re-use the original line-number note. */
4078 LINE_NOTE (note) = 0;
4079 PREV_INSN (note) = prev;
4080 NEXT_INSN (prev) = note;
4081 PREV_INSN (insn) = note;
4082 NEXT_INSN (note) = insn;
4084 else
4086 notes++;
4087 new = emit_note_after (NOTE_LINE_NUMBER (note), prev);
4088 NOTE_SOURCE_FILE (new) = NOTE_SOURCE_FILE (note);
4089 RTX_INTEGRATED_P (new) = RTX_INTEGRATED_P (note);
4092 if (file && notes)
4093 fprintf (file, ";; added %d line-number notes\n", notes);
4096 if (file)
4098 fprintf (file, ";; total time = %d\n;; new basic block head = %d\n;; new basic block end = %d\n\n",
4099 clock, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));
4102 /* Yow! We're done! */
4103 free_pending_lists ();
4105 ret:
4106 FREE_REG_SET (reg_pending_sets);
4107 FREE_REG_SET (old_live_regs);
4109 return;
4112 /* Subroutine of split_hard_reg_notes. Searches X for any reference to
4113 REGNO, returning the rtx of the reference found if any. Otherwise,
4114 returns 0. */
4116 static rtx
4117 regno_use_in (regno, x)
4118 int regno;
4119 rtx x;
4121 register char *fmt;
4122 int i, j;
4123 rtx tem;
4125 if (GET_CODE (x) == REG && REGNO (x) == regno)
4126 return x;
4128 fmt = GET_RTX_FORMAT (GET_CODE (x));
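/* Scan all operands of X: format letter 'e' denotes a single rtx
operand, 'E' a vector of rtx operands. */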
4129 for (i = GET_RTX_LENGTH (GET_CODE (x)) - 1; i >= 0; i--)
4131 if (fmt[i] == 'e')
4133 if ((tem = regno_use_in (regno, XEXP (x, i))) != 0)
4134 return tem;
4136 else if (fmt[i] == 'E')
4137 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
4138 if ((tem = regno_use_in (regno, XVECEXP (x, i, j))) != 0)
4139 return tem;
4142 return 0;
4145 /* Subroutine of update_flow_info. Determines whether any new REG_NOTEs are
4146 needed for the hard register mentioned in the note. This can happen
4147 if the reference to the hard register in the original insn was split into
4148 several smaller hard register references in the split insns. */
4150 static void
4151 split_hard_reg_notes (note, first, last, orig_insn)
4152 rtx note, first, last, orig_insn;
4154 rtx reg, temp, link;
4155 int n_regs, i, new_reg;
4156 rtx insn;
4158 /* This must be a REG_DEAD note; we do not handle any other kind. */
4159 if (REG_NOTE_KIND (note) != REG_DEAD)
4160 abort ();
4162 reg = XEXP (note, 0);
4164 n_regs = HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg));
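/* N_REGS is the number of consecutive hard registers occupied by
the (possibly multi-word) value in REG. */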
4166 for (i = 0; i < n_regs; i++)
4168 new_reg = REGNO (reg) + i;
4170 /* Check for references to new_reg in the split insns. */
4171 for (insn = last; ; insn = PREV_INSN (insn))
4173 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4174 && (temp = regno_use_in (new_reg, PATTERN (insn))))
4176 /* Create a new reg dead note here. */
4177 link = rtx_alloc (EXPR_LIST);
4178 PUT_REG_NOTE_KIND (link, REG_DEAD);
4179 XEXP (link, 0) = temp;
4180 XEXP (link, 1) = REG_NOTES (insn);
4181 REG_NOTES (insn) = link;
4183 /* If this reference killed multiple registers, skip the extra ones. */
4184 i += HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) - 1;
4186 break;
4188 /* It isn't mentioned anywhere, so no new reg note is needed for
4189 this register. */
4190 if (insn == first)
4191 break;
4196 /* Subroutine of update_flow_info. Determines whether a SET or CLOBBER in an
4197 insn created by splitting needs a REG_DEAD or REG_UNUSED note added. */
4199 static void
4200 new_insn_dead_notes (pat, insn, last, orig_insn)
4201 rtx pat, insn, last, orig_insn;
4203 rtx dest, tem, set;
4205 /* PAT is either a CLOBBER or a SET here. */
4206 dest = XEXP (pat, 0);
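/* Strip any ZERO_EXTRACT, SUBREG, STRICT_LOW_PART or SIGN_EXTRACT
wrappers to expose the register actually being modified. */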
4208 while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
4209 || GET_CODE (dest) == STRICT_LOW_PART
4210 || GET_CODE (dest) == SIGN_EXTRACT)
4211 dest = XEXP (dest, 0);
4213 if (GET_CODE (dest) == REG)
4215 for (tem = last; tem != insn; tem = PREV_INSN (tem))
4217 if (GET_RTX_CLASS (GET_CODE (tem)) == 'i'
4218 && reg_overlap_mentioned_p (dest, PATTERN (tem))
4219 && (set = single_set (tem)))
4221 rtx tem_dest = SET_DEST (set);
4223 while (GET_CODE (tem_dest) == ZERO_EXTRACT
4224 || GET_CODE (tem_dest) == SUBREG
4225 || GET_CODE (tem_dest) == STRICT_LOW_PART
4226 || GET_CODE (tem_dest) == SIGN_EXTRACT)
4227 tem_dest = XEXP (tem_dest, 0);
4229 if (! rtx_equal_p (tem_dest, dest))
4231 /* Use the same scheme as combine.c: don't put both REG_DEAD
4232 and REG_UNUSED notes on the same insn. */
4233 if (! find_regno_note (tem, REG_UNUSED, REGNO (dest))
4234 && ! find_regno_note (tem, REG_DEAD, REGNO (dest)))
4236 rtx note = rtx_alloc (EXPR_LIST);
4237 PUT_REG_NOTE_KIND (note, REG_DEAD);
4238 XEXP (note, 0) = dest;
4239 XEXP (note, 1) = REG_NOTES (tem);
4240 REG_NOTES (tem) = note;
4242 /* The reg only dies in one insn, the last one that uses
4243 it. */
4244 break;
4246 else if (reg_overlap_mentioned_p (dest, SET_SRC (set)))
4247 /* We found an instruction that both uses and sets the register,
4248 so no new REG_NOTE is needed for this set. */
4249 break;
4252 /* If this is a set, it must die somewhere, unless it is the dest of
4253 the original insn, and hence is live after the original insn. Abort
4254 if it isn't supposed to be live after the original insn.
4256 If this is a clobber, then just add a REG_UNUSED note. */
4257 if (tem == insn)
4259 int live_after_orig_insn = 0;
4260 rtx pattern = PATTERN (orig_insn);
4261 int i;
4263 if (GET_CODE (pat) == CLOBBER)
4265 rtx note = rtx_alloc (EXPR_LIST);
4266 PUT_REG_NOTE_KIND (note, REG_UNUSED);
4267 XEXP (note, 0) = dest;
4268 XEXP (note, 1) = REG_NOTES (insn);
4269 REG_NOTES (insn) = note;
4270 return;
4273 /* The original insn could have multiple sets, so search the
4274 insn for all sets. */
4275 if (GET_CODE (pattern) == SET)
4277 if (reg_overlap_mentioned_p (dest, SET_DEST (pattern)))
4278 live_after_orig_insn = 1;
4280 else if (GET_CODE (pattern) == PARALLEL)
4282 for (i = 0; i < XVECLEN (pattern, 0); i++)
4283 if (GET_CODE (XVECEXP (pattern, 0, i)) == SET
4284 && reg_overlap_mentioned_p (dest,
4285 SET_DEST (XVECEXP (pattern,
4286 0, i))))
4287 live_after_orig_insn = 1;
4290 if (! live_after_orig_insn)
4291 abort ();
4296 /* Subroutine of update_flow_info. Update the value of reg_n_sets for all
4297 registers modified by X. INC is -1 if the containing insn is being deleted,
4298 and is 1 if the containing insn is a newly generated insn. */
4300 static void
4301 update_n_sets (x, inc)
4302 rtx x;
4303 int inc;
4305 rtx dest = SET_DEST (x);
4307 while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
4308 || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
4309 dest = SUBREG_REG (dest);
4311 if (GET_CODE (dest) == REG)
4313 int regno = REGNO (dest);
4315 if (regno < FIRST_PSEUDO_REGISTER)
4317 register int i;
4318 int endregno = regno + HARD_REGNO_NREGS (regno, GET_MODE (dest));
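/* A multi-word hard register value occupies several consecutive
register numbers; count the set for each of them. */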
4320 for (i = regno; i < endregno; i++)
4321 REG_N_SETS (i) += inc;
4323 else
4324 REG_N_SETS (regno) += inc;
4328 /* Updates all flow-analysis related quantities (including REG_NOTES) for
4329 the insns from FIRST to LAST inclusive that were created by splitting
4330 ORIG_INSN. NOTES are the original REG_NOTES. */
4332 static void
4333 update_flow_info (notes, first, last, orig_insn)
4334 rtx notes;
4335 rtx first, last;
4336 rtx orig_insn;
4338 rtx insn, note;
4339 rtx next;
4340 rtx orig_dest, temp;
4341 rtx set;
4343 /* Get and save the destination set by the original insn. */
4345 orig_dest = single_set (orig_insn);
4346 if (orig_dest)
4347 orig_dest = SET_DEST (orig_dest);
4349 /* Move REG_NOTES from the original insn to where they now belong. */
4351 for (note = notes; note; note = next)
4353 next = XEXP (note, 1);
4354 switch (REG_NOTE_KIND (note))
4356 case REG_DEAD:
4357 case REG_UNUSED:
4358 /* Move these notes from the original insn to the last new insn where
4359 the register is now set. */
4361 for (insn = last; ; insn = PREV_INSN (insn))
4363 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4364 && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
4366 /* If this note refers to a multiple word hard register, it
4367 may have been split into several smaller hard register
4368 references, so handle it specially. */
4369 temp = XEXP (note, 0);
4370 if (REG_NOTE_KIND (note) == REG_DEAD
4371 && GET_CODE (temp) == REG
4372 && REGNO (temp) < FIRST_PSEUDO_REGISTER
4373 && HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) > 1)
4374 split_hard_reg_notes (note, first, last, orig_insn);
4375 else
4377 XEXP (note, 1) = REG_NOTES (insn);
4378 REG_NOTES (insn) = note;
4381 /* Sometimes we need to convert REG_UNUSED notes to REG_DEAD
4382 notes. */
4383 /* ??? This won't handle multiple word registers correctly,
4384 but should be good enough for now. */
4385 if (REG_NOTE_KIND (note) == REG_UNUSED
4386 && GET_CODE (XEXP (note, 0)) != SCRATCH
4387 && ! dead_or_set_p (insn, XEXP (note, 0)))
4388 PUT_REG_NOTE_KIND (note, REG_DEAD);
4390 /* The reg only dies in one insn, the last one that uses
4391 it. */
4392 break;
4394 /* It must die somewhere; fail if we couldn't find where it died.
4396 If this is a REG_UNUSED note, then it must be a temporary
4397 register that was not needed by this instantiation of the
4398 pattern, so we can safely ignore it. */
4399 if (insn == first)
4401 /* After reload, REG_DEAD notes sometimes come an
4402 instruction after the register actually dies. */
4403 if (reload_completed && REG_NOTE_KIND (note) == REG_DEAD)
4405 XEXP (note, 1) = REG_NOTES (insn);
4406 REG_NOTES (insn) = note;
4407 break;
4410 if (REG_NOTE_KIND (note) != REG_UNUSED)
4411 abort ();
4413 break;
4416 break;
4418 case REG_WAS_0:
4419 /* This note applies to the dest of the original insn. Find the
4420 first new insn that now has the same dest, and move the note
4421 there. */
4423 if (! orig_dest)
4424 abort ();
4426 for (insn = first; ; insn = NEXT_INSN (insn))
4428 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4429 && (temp = single_set (insn))
4430 && rtx_equal_p (SET_DEST (temp), orig_dest))
4432 XEXP (note, 1) = REG_NOTES (insn);
4433 REG_NOTES (insn) = note;
4434 /* The reg is only zero before one insn, the first insn
4435 that sets it. */
4436 break;
4438 /* If this note refers to a multiple word hard
4439 register, it may have been split into several smaller
4440 hard register references. We could split the notes,
4441 but simply dropping them is good enough. */
4442 if (GET_CODE (orig_dest) == REG
4443 && REGNO (orig_dest) < FIRST_PSEUDO_REGISTER
4444 && HARD_REGNO_NREGS (REGNO (orig_dest),
4445 GET_MODE (orig_dest)) > 1)
4446 break;
4447 /* It must be set somewhere, fail if we couldn't find where it
4448 was set. */
4449 if (insn == last)
4450 abort ();
4452 break;
4454 case REG_EQUAL:
4455 case REG_EQUIV:
4456 /* A REG_EQUIV or REG_EQUAL note on an insn with more than one
4457 set is meaningless. Just drop the note. */
4458 if (! orig_dest)
4459 break;
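/* Otherwise fall through: as with REG_NO_CONFLICT, the note
belongs on the last new insn that sets ORIG_DEST. */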
4461 case REG_NO_CONFLICT:
4462 /* These notes apply to the dest of the original insn. Find the last
4463 new insn that now has the same dest, and move the note there. */
4465 if (! orig_dest)
4466 abort ();
4468 for (insn = last; ; insn = PREV_INSN (insn))
4470 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4471 && (temp = single_set (insn))
4472 && rtx_equal_p (SET_DEST (temp), orig_dest))
4474 XEXP (note, 1) = REG_NOTES (insn);
4475 REG_NOTES (insn) = note;
4476 /* Only put this note on one of the new insns. */
4477 break;
4480 /* The original dest must still be set someplace. Abort if we
4481 couldn't find it. */
4482 if (insn == first)
4484 /* However, if this note refers to a multiple word hard
4485 register, it may have been split into several smaller
4486 hard register references. We could split the notes,
4487 but simply dropping them is good enough. */
4488 if (GET_CODE (orig_dest) == REG
4489 && REGNO (orig_dest) < FIRST_PSEUDO_REGISTER
4490 && HARD_REGNO_NREGS (REGNO (orig_dest),
4491 GET_MODE (orig_dest)) > 1)
4492 break;
4493 /* Likewise for multi-word memory references. */
4494 if (GET_CODE (orig_dest) == MEM
4495 && SIZE_FOR_MODE (orig_dest) > MOVE_MAX)
4496 break;
4497 abort ();
4500 break;
4502 case REG_LIBCALL:
4503 /* Move a REG_LIBCALL note to the first insn created, and update
4504 the corresponding REG_RETVAL note. */
4505 XEXP (note, 1) = REG_NOTES (first);
4506 REG_NOTES (first) = note;
4508 insn = XEXP (note, 0);
4509 note = find_reg_note (insn, REG_RETVAL, NULL_RTX);
4510 if (note)
4511 XEXP (note, 0) = first;
4512 break;
4514 case REG_EXEC_COUNT:
4515 /* Move a REG_EXEC_COUNT note to the first insn created. */
4516 XEXP (note, 1) = REG_NOTES (first);
4517 REG_NOTES (first) = note;
4518 break;
4520 case REG_RETVAL:
4521 /* Move a REG_RETVAL note to the last insn created, and update
4522 the corresponding REG_LIBCALL note. */
4523 XEXP (note, 1) = REG_NOTES (last);
4524 REG_NOTES (last) = note;
4526 insn = XEXP (note, 0);
4527 note = find_reg_note (insn, REG_LIBCALL, NULL_RTX);
4528 if (note)
4529 XEXP (note, 0) = last;
4530 break;
4532 case REG_NONNEG:
4533 case REG_BR_PROB:
4534 /* This should be moved to whichever instruction is a JUMP_INSN. */
4536 for (insn = last; ; insn = PREV_INSN (insn))
4538 if (GET_CODE (insn) == JUMP_INSN)
4540 XEXP (note, 1) = REG_NOTES (insn);
4541 REG_NOTES (insn) = note;
4542 /* Only put this note on one of the new insns. */
4543 break;
4545 /* Fail if we couldn't find a JUMP_INSN. */
4546 if (insn == first)
4547 abort ();
4549 break;
4551 case REG_INC:
4552 /* reload sometimes leaves obsolete REG_INC notes around. */
4553 if (reload_completed)
4554 break;
4555 /* This should be moved to whichever instruction now has the
4556 increment operation. */
4557 abort ();
4559 case REG_LABEL:
4560 /* Should be moved to the new insn(s) which use the label. */
4561 for (insn = first; insn != NEXT_INSN (last); insn = NEXT_INSN (insn))
4562 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4563 && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
4564 REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_LABEL,
4565 XEXP (note, 0),
4566 REG_NOTES (insn));
4567 break;
4569 case REG_CC_SETTER:
4570 case REG_CC_USER:
4571 /* These two notes will never appear until after reorg, so we don't
4572 have to handle them here. */
4573 default:
4574 abort ();
4578 /* Each new insn created, except the last, has a new set. If the destination
4579 is a register, then this reg is now live across several insns, whereas
4580 previously the dest reg was born and died within the same insn. To
4581 reflect this, we now need a REG_DEAD note on the insn where this
4582 dest reg dies.
4584 Similarly, the new insns may have clobbers that need REG_UNUSED notes. */
4586 for (insn = first; insn != last; insn = NEXT_INSN (insn))
4588 rtx pat;
4589 int i;
4591 pat = PATTERN (insn);
4592 if (GET_CODE (pat) == SET || GET_CODE (pat) == CLOBBER)
4593 new_insn_dead_notes (pat, insn, last, orig_insn);
4594 else if (GET_CODE (pat) == PARALLEL)
4596 for (i = 0; i < XVECLEN (pat, 0); i++)
4597 if (GET_CODE (XVECEXP (pat, 0, i)) == SET
4598 || GET_CODE (XVECEXP (pat, 0, i)) == CLOBBER)
4599 new_insn_dead_notes (XVECEXP (pat, 0, i), insn, last, orig_insn);
4603 /* If any insn, except the last, uses the register set by the last insn,
4604 then we need a new REG_DEAD note on that insn. In this case, there
4605 would not have been a REG_DEAD note for this register in the original
4606 insn because it was used and set within one insn. */
4608 set = single_set (last);
4609 if (set)
4611 rtx dest = SET_DEST (set);
4613 while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
4614 || GET_CODE (dest) == STRICT_LOW_PART
4615 || GET_CODE (dest) == SIGN_EXTRACT)
4616 dest = XEXP (dest, 0);
4618 if (GET_CODE (dest) == REG
4619 /* Global registers are always live, so the code below does not
4620 apply to them. */
4621 && (REGNO (dest) >= FIRST_PSEUDO_REGISTER
4622 || ! global_regs[REGNO (dest)]))
4624 rtx stop_insn = PREV_INSN (first);
4626 /* If the last insn uses the register that it is setting, then
4627 we don't want to put a REG_DEAD note there. Search backwards
4628 to find the first insn that sets but does not use DEST. */
4630 insn = last;
4631 if (reg_overlap_mentioned_p (dest, SET_SRC (set)))
4633 for (insn = PREV_INSN (insn); insn != first;
4634 insn = PREV_INSN (insn))
4636 if ((set = single_set (insn))
4637 && reg_mentioned_p (dest, SET_DEST (set))
4638 && ! reg_overlap_mentioned_p (dest, SET_SRC (set)))
4639 break;
4643 /* Now find the first insn that uses but does not set DEST. */
4645 for (insn = PREV_INSN (insn); insn != stop_insn;
4646 insn = PREV_INSN (insn))
4648 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4649 && reg_mentioned_p (dest, PATTERN (insn))
4650 && (set = single_set (insn)))
4652 rtx insn_dest = SET_DEST (set);
4654 while (GET_CODE (insn_dest) == ZERO_EXTRACT
4655 || GET_CODE (insn_dest) == SUBREG
4656 || GET_CODE (insn_dest) == STRICT_LOW_PART
4657 || GET_CODE (insn_dest) == SIGN_EXTRACT)
4658 insn_dest = XEXP (insn_dest, 0);
4660 if (insn_dest != dest)
4662 note = rtx_alloc (EXPR_LIST);
4663 PUT_REG_NOTE_KIND (note, REG_DEAD);
4664 XEXP (note, 0) = dest;
4665 XEXP (note, 1) = REG_NOTES (insn);
4666 REG_NOTES (insn) = note;
4667 /* The reg only dies in one insn, the last one
4668 that uses it. */
4669 break;
4676 /* If the original dest is modifying a multiple register target, and the
4677 original instruction was split such that the original dest is now set
4678 by two or more SUBREG sets, then the split insns no longer kill the
4679 destination of the original insn.
4681 In this case, if there exists an instruction in the same basic block,
4682 before the split insn, which uses the original dest, and this use is
4683 killed by the original insn, then we must remove the REG_DEAD note on
4684 this insn, because it is now superfluous.
4686 This does not apply when a hard register gets split, because the code
4687 knows how to handle overlapping hard registers properly. */
4688 if (orig_dest && GET_CODE (orig_dest) == REG)
4690 int found_orig_dest = 0;
4691 int found_split_dest = 0;
4693 for (insn = first; ; insn = NEXT_INSN (insn))
4695 rtx pat = PATTERN (insn);
4696 int i = GET_CODE (pat) == PARALLEL ? XVECLEN (pat, 0) : 0;
4697 set = pat;
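/* Check PAT itself first; for a PARALLEL that test fails, and the
loop below then walks the vector elements from last to first. */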
4698 for (;;)
4700 if (GET_CODE (set) == SET)
4702 if (GET_CODE (SET_DEST (set)) == REG
4703 && REGNO (SET_DEST (set)) == REGNO (orig_dest))
4705 found_orig_dest = 1;
4706 break;
4708 else if (GET_CODE (SET_DEST (set)) == SUBREG
4709 && SUBREG_REG (SET_DEST (set)) == orig_dest)
4711 found_split_dest = 1;
4712 break;
4715 if (--i < 0)
4716 break;
4717 set = XVECEXP (pat, 0, i);
4720 if (insn == last)
4721 break;
4724 if (found_split_dest)
4726 /* Search backwards from FIRST, looking for the first insn that uses
4727 the original dest. Stop if we pass a CODE_LABEL or a JUMP_INSN.
4728 If we find an insn, and it has a REG_DEAD note, then delete the
4729 note. */
4731 for (insn = first; insn; insn = PREV_INSN (insn))
4733 if (GET_CODE (insn) == CODE_LABEL
4734 || GET_CODE (insn) == JUMP_INSN)
4735 break;
4736 else if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4737 && reg_mentioned_p (orig_dest, insn))
4739 note = find_regno_note (insn, REG_DEAD, REGNO (orig_dest));
4740 if (note)
4741 remove_note (insn, note);
4745 else if (! found_orig_dest)
4747 /* This should never happen. */
4748 abort ();
4752 /* Update reg_n_sets. This is necessary to prevent local alloc from
4753 converting REG_EQUAL notes to REG_EQUIV when splitting has changed
4754 a reg from being set once to being set multiple times. */
4757 rtx x = PATTERN (orig_insn);
4758 RTX_CODE code = GET_CODE (x);
4760 if (code == SET || code == CLOBBER)
4761 update_n_sets (x, -1);
4762 else if (code == PARALLEL)
4764 int i;
4765 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
4767 code = GET_CODE (XVECEXP (x, 0, i));
4768 if (code == SET || code == CLOBBER)
4769 update_n_sets (XVECEXP (x, 0, i), -1);
4773 for (insn = first; ; insn = NEXT_INSN (insn))
4775 x = PATTERN (insn);
4776 code = GET_CODE (x);
4778 if (code == SET || code == CLOBBER)
4779 update_n_sets (x, 1);
4780 else if (code == PARALLEL)
4782 int i;
4783 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
4785 code = GET_CODE (XVECEXP (x, 0, i));
4786 if (code == SET || code == CLOBBER)
4787 update_n_sets (XVECEXP (x, 0, i), 1);
4791 if (insn == last)
4792 break;
4797 /* The one entry point in this file. DUMP_FILE is the dump file for
4798 this pass. */
4800 void
4801 schedule_insns (dump_file)
4802 FILE *dump_file;
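/* Splitting (done below) may create new insns, so size the
per-insn arrays assuming each original insn can become up to
MAX_INSNS_PER_SPLIT insns. */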
4804 int max_uid = MAX_INSNS_PER_SPLIT * (get_max_uid () + 1);
4805 int b;
4806 int i;
4807 rtx insn;
4809 /* Taking care of this degenerate case makes the rest of
4810 this code simpler. */
4811 if (n_basic_blocks == 0)
4812 return;
4814 /* Create an insn here so that we can hang dependencies off of it later. */
4815 sched_before_next_call
4816 = gen_rtx_INSN (VOIDmode, 0, NULL_RTX, NULL_RTX,
4817 NULL_RTX, 0, NULL_RTX, NULL_RTX);
4819 /* Initialize the unused_*_lists. We can't use the ones left over from
4820 the previous function, because gcc has freed that memory. We can use
4821 the ones left over from the first sched pass in the second pass however,
4822 so only clear them on the first sched pass. The first pass is before
4823 reload if flag_schedule_insns is set, otherwise it is afterwards. */
4825 if (reload_completed == 0 || ! flag_schedule_insns)
4827 unused_insn_list = 0;
4828 unused_expr_list = 0;
4831 /* We create no insns here, only reorder them, so we
4832 remember how far we can cut back the stack on exit. */
4834 /* Allocate data for this pass. See comments, above,
4835 for what these vectors do. */
4836 insn_luid = (int *) alloca (max_uid * sizeof (int));
4837 insn_priority = (int *) alloca (max_uid * sizeof (int));
4838 insn_tick = (int *) alloca (max_uid * sizeof (int));
4839 insn_costs = (short *) alloca (max_uid * sizeof (short));
4840 insn_units = (short *) alloca (max_uid * sizeof (short));
4841 insn_blockage = (unsigned int *) alloca (max_uid * sizeof (unsigned int));
4842 insn_ref_count = (int *) alloca (max_uid * sizeof (int));
4844 if (reload_completed == 0)
4846 sched_reg_n_calls_crossed = (int *) alloca (max_regno * sizeof (int));
4847 sched_reg_live_length = (int *) alloca (max_regno * sizeof (int));
4848 bb_dead_regs = ALLOCA_REG_SET ();
4849 bb_live_regs = ALLOCA_REG_SET ();
4850 bzero ((char *) sched_reg_n_calls_crossed, max_regno * sizeof (int));
4851 bzero ((char *) sched_reg_live_length, max_regno * sizeof (int));
4852 init_alias_analysis ();
4854 else
4856 sched_reg_n_calls_crossed = 0;
4857 sched_reg_live_length = 0;
4858 bb_dead_regs = 0;
4859 bb_live_regs = 0;
4860 if (! flag_schedule_insns)
4861 init_alias_analysis ();
4864 if (write_symbols != NO_DEBUG)
4866 rtx line;
4868 line_note = (rtx *) alloca (max_uid * sizeof (rtx));
4869 bzero ((char *) line_note, max_uid * sizeof (rtx));
4870 line_note_head = (rtx *) alloca (n_basic_blocks * sizeof (rtx));
4871 bzero ((char *) line_note_head, n_basic_blocks * sizeof (rtx));
4873 /* Determine the line-number at the start of each basic block.
4874 This must be computed and saved now, because after a basic block's
4875 predecessor has been scheduled, it is impossible to accurately
4876 determine the correct line number for the first insn of the block. */
4878 for (b = 0; b < n_basic_blocks; b++)
4879 for (line = basic_block_head[b]; line; line = PREV_INSN (line))
4880 if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
4882 line_note_head[b] = line;
4883 break;
4887 bzero ((char *) insn_luid, max_uid * sizeof (int));
4888 bzero ((char *) insn_priority, max_uid * sizeof (int));
4889 bzero ((char *) insn_tick, max_uid * sizeof (int));
4890 bzero ((char *) insn_costs, max_uid * sizeof (short));
4891 bzero ((char *) insn_units, max_uid * sizeof (short));
4892 bzero ((char *) insn_blockage, max_uid * sizeof (unsigned int));
4893 bzero ((char *) insn_ref_count, max_uid * sizeof (int));
4895 /* Schedule each basic block, block by block. */
4897 /* ??? Add a NOTE after the last insn of the last basic block. It is not
4898 known why this is done. */
4899 /* ??? Perhaps it's done to ensure NEXT_TAIL in schedule_block is a
4900 valid insn. */
4902 insn = basic_block_end[n_basic_blocks-1];
4903 if (NEXT_INSN (insn) == 0
4904 || (GET_CODE (insn) != NOTE
4905 && GET_CODE (insn) != CODE_LABEL
4906 /* Don't emit a NOTE if it would end up between an unconditional
4907 jump and a BARRIER. */
4908 && ! (GET_CODE (insn) == JUMP_INSN
4909 && GET_CODE (NEXT_INSN (insn)) == BARRIER)))
4910 emit_note_after (NOTE_INSN_DELETED, basic_block_end[n_basic_blocks-1]);
4912 for (b = 0; b < n_basic_blocks; b++)
4914 rtx insn, next;
4916 note_list = 0;
4918 for (insn = basic_block_head[b]; ; insn = next)
4920 rtx prev;
4921 rtx set;
4923 /* Can't use `next_real_insn' because that
4924 might go across CODE_LABELS and short-out basic blocks. */
4925 next = NEXT_INSN (insn);
4926 if (GET_CODE (insn) != INSN)
4928 if (insn == basic_block_end[b])
4929 break;
4931 continue;
4934 /* Don't split no-op move insns. These should silently disappear
4935 later in final. Splitting such insns would break the code
4936 that handles REG_NO_CONFLICT blocks. */
4937 set = single_set (insn);
4938 if (set && rtx_equal_p (SET_SRC (set), SET_DEST (set)))
4940 if (insn == basic_block_end[b])
4941 break;
4943 /* Nops get in the way while scheduling, so delete them now if
4944 register allocation has already been done. It is too risky
4945 to try to do this before register allocation, and there are
4946 unlikely to be very many nops then anyway. */
4947 if (reload_completed)
4949 PUT_CODE (insn, NOTE);
4950 NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
4951 NOTE_SOURCE_FILE (insn) = 0;
4954 continue;
4957 /* Split insns here to get max fine-grain parallelism. */
4958 prev = PREV_INSN (insn);
4959 /* It is probably not worthwhile to try to split again in the
4960 second pass. However, if flag_schedule_insns is not set,
4961 the first and only (if any) scheduling pass is after reload. */
4962 if (reload_completed == 0 || ! flag_schedule_insns)
4964 rtx last, first = PREV_INSN (insn);
4965 rtx notes = REG_NOTES (insn);
4967 last = try_split (PATTERN (insn), insn, 1);
4968 if (last != insn)
4970 /* try_split returns the NOTE that INSN became. */
4971 first = NEXT_INSN (first);
4972 update_flow_info (notes, first, last, insn);
4974 PUT_CODE (insn, NOTE);
4975 NOTE_SOURCE_FILE (insn) = 0;
4976 NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
4977 if (insn == basic_block_head[b])
4978 basic_block_head[b] = first;
4979 if (insn == basic_block_end[b])
4981 basic_block_end[b] = last;
4982 break;
4987 if (insn == basic_block_end[b])
4988 break;
4991 schedule_block (b, dump_file);
4993 #ifdef USE_C_ALLOCA
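/* When the portable C implementation of alloca is in use,
alloca (0) reclaims storage from earlier calls that has gone
out of scope. */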
4994 alloca (0);
4995 #endif
4998 /* Reposition the prologue and epilogue notes in case we moved the
4999 prologue/epilogue insns. */
5000 if (reload_completed)
5001 reposition_prologue_and_epilogue_notes (get_insns ());
5003 if (write_symbols != NO_DEBUG)
5005 rtx line = 0;
5006 rtx insn = get_insns ();
5007 int active_insn = 0;
5008 int notes = 0;
5010 /* Walk the insns deleting redundant line-number notes. Many of
5011 these were already redundant before scheduling; the rest tend to
5012 occur at basic block boundaries. */
5013 for (insn = get_last_insn (); insn; insn = PREV_INSN (insn))
5014 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
5016 /* If there are no active insns following, INSN is redundant. */
5017 if (active_insn == 0)
5019 notes++;
5020 NOTE_SOURCE_FILE (insn) = 0;
5021 NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
5023 /* If the line number is unchanged, LINE is redundant. */
5024 else if (line
5025 && NOTE_LINE_NUMBER (line) == NOTE_LINE_NUMBER (insn)
5026 && NOTE_SOURCE_FILE (line) == NOTE_SOURCE_FILE (insn))
5028 notes++;
5029 NOTE_SOURCE_FILE (line) = 0;
5030 NOTE_LINE_NUMBER (line) = NOTE_INSN_DELETED;
5031 line = insn;
5033 else
5034 line = insn;
5035 active_insn = 0;
5037 else if (! ((GET_CODE (insn) == NOTE
5038 && NOTE_LINE_NUMBER (insn) == NOTE_INSN_DELETED)
5039 || (GET_CODE (insn) == INSN
5040 && (GET_CODE (PATTERN (insn)) == USE
5041 || GET_CODE (PATTERN (insn)) == CLOBBER))))
5042 active_insn++;
5044 if (dump_file && notes)
5045 fprintf (dump_file, ";; deleted %d line-number notes\n", notes);
5048 if (reload_completed == 0)
5050 int regno;
5051 for (regno = 0; regno < max_regno; regno++)
5052 if (sched_reg_live_length[regno])
5054 if (dump_file)
5056 if (REG_LIVE_LENGTH (regno) > sched_reg_live_length[regno])
5057 fprintf (dump_file,
5058 ";; register %d life shortened from %d to %d\n",
5059 regno, REG_LIVE_LENGTH (regno),
5060 sched_reg_live_length[regno]);
5061 /* Negative values are special; don't overwrite the current
5062 reg_live_length value if it is negative. */
5063 else if (REG_LIVE_LENGTH (regno) < sched_reg_live_length[regno]
5064 && REG_LIVE_LENGTH (regno) >= 0)
5065 fprintf (dump_file,
5066 ";; register %d life extended from %d to %d\n",
5067 regno, REG_LIVE_LENGTH (regno),
5068 sched_reg_live_length[regno]);
5070 if (! REG_N_CALLS_CROSSED (regno)
5071 && sched_reg_n_calls_crossed[regno])
5072 fprintf (dump_file,
5073 ";; register %d now crosses calls\n", regno);
5074 else if (REG_N_CALLS_CROSSED (regno)
5075 && ! sched_reg_n_calls_crossed[regno]
5076 && REG_BASIC_BLOCK (regno) != REG_BLOCK_GLOBAL)
5077 fprintf (dump_file,
5078 ";; register %d no longer crosses calls\n", regno);
5081 /* Negative values are special; don't overwrite the current
5082 reg_live_length value if it is negative. */
5083 if (REG_LIVE_LENGTH (regno) >= 0)
5084 REG_LIVE_LENGTH (regno) = sched_reg_live_length[regno];
5086 /* We can't change the value of reg_n_calls_crossed to zero for
5087 pseudos which are live in more than one block.
5089 This is because combine might have made an optimization which
5090 invalidated basic_block_live_at_start and reg_n_calls_crossed,
5091 but it does not update them. If we update reg_n_calls_crossed
5092 here, the two variables are now inconsistent, and this might
5093 confuse the caller-save code into saving a register that doesn't
5094 need to be saved. This is only a problem when we zero calls
5095 crossed for a pseudo live in multiple basic blocks.
5097 Alternatively, we could try to correctly update basic block live
5098 at start here in sched, but that seems complicated. */
5099 if (sched_reg_n_calls_crossed[regno]
5100 || REG_BASIC_BLOCK (regno) != REG_BLOCK_GLOBAL)
5101 REG_N_CALLS_CROSSED (regno) = sched_reg_n_calls_crossed[regno];
5105 if (reload_completed == 0)
5107 FREE_REG_SET (bb_dead_regs);
5108 FREE_REG_SET (bb_live_regs);
5112 #endif /* INSN_SCHEDULING */