/* Instruction scheduling pass.
   Copyright (C) 1992, 93-97, 1998 Free Software Foundation, Inc.
   Contributed by Michael Tiemann (tiemann@cygnus.com)
   Enhanced by, and currently maintained by, Jim Wilson (wilson@cygnus.com)

This file is part of GNU CC.

GNU CC is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.

GNU CC is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with GNU CC; see the file COPYING.  If not, write to
the Free Software Foundation, 59 Temple Place - Suite 330,
Boston, MA 02111-1307, USA.  */

/* Instruction scheduling pass.

   This pass implements list scheduling within basic blocks.  It is
   run after flow analysis, but before register allocation.  The
   scheduler works as follows:

   We compute insn priorities based on data dependencies.  Flow
   analysis only creates a fraction of the data-dependencies we must
   observe: namely, only those dependencies which the combiner can be
   expected to use.  For this pass, we must therefore create the
   remaining dependencies we need to observe: register dependencies,
   memory dependencies, dependencies to keep function calls in order,
   and the dependence between a conditional branch and the setting of
   condition codes are all dealt with here.

   The scheduler first traverses the data flow graph, starting with
   the last instruction, and proceeding to the first, assigning
   values to insn_priority as it goes.  This sorts the instructions
   topologically by data dependence.

   Once priorities have been established, we order the insns using
   list scheduling.  This works as follows: starting with a list of
   all the ready insns, and sorted according to priority number, we
   schedule the insn from the end of the list by placing its
   predecessors in the list according to their priority order.  We
   consider this insn scheduled by setting the pointer to the "end" of
   the list to point to the previous insn.  When an insn has no
   predecessors, we either queue it until sufficient time has elapsed
   or add it to the ready list.  As the instructions are scheduled or
   when stalls are introduced, the queue advances and dumps insns into
   the ready list.  When all insns down to the lowest priority have
   been scheduled, the critical path of the basic block has been made
   as short as possible.  The remaining insns are then scheduled in
   remaining slots.

   Function unit conflicts are resolved during reverse list scheduling
   by tracking the time when each insn is committed to the schedule
   and from that, the time the function units it uses must be free.
   As insns on the ready list are considered for scheduling, those
   that would result in a blockage of the already committed insns are
   queued until no blockage will result.  Among the remaining insns on
   the ready list to be considered, the first one with the largest
   potential for causing a subsequent blockage is chosen.

   The following list shows the order in which we want to break ties
   among insns in the ready list:

   1.  choose insn with lowest conflict cost, ties broken by
   2.  choose insn with the longest path to end of bb, ties broken by
   3.  choose insn that kills the most registers, ties broken by
   4.  choose insn that conflicts with the most ready insns, or finally
   5.  choose insn with lowest UID.

   Memory references complicate matters.  Only if we can be certain
   that memory references are not part of the data dependency graph
   (via true, anti, or output dependence), can we move operations past
   memory references.  To first approximation, reads can be done
   independently, while writes introduce dependencies.  Better
   approximations will yield fewer dependencies.

   Dependencies set up by memory references are treated in exactly the
   same way as other dependencies, by using LOG_LINKS.

   Having optimized the critical path, we may have also unduly
   extended the lifetimes of some registers.  If an operation requires
   that constants be loaded into registers, it is certainly desirable
   to load those constants as early as necessary, but no earlier.
   I.e., it will not do to load up a bunch of registers at the
   beginning of a basic block only to use them at the end, if they
   could be loaded later, since this may result in excessive register
   utilization.

   Note that since branches are never in basic blocks, but only end
   basic blocks, this pass will not do any branch scheduling.  But
   that is ok, since we can use GNU's delayed branch scheduling
   pass to take care of this case.

   Also note that no further optimizations based on algebraic identities
   are performed, so this pass would be a good one to perform instruction
   splitting, such as breaking up a multiply instruction into shifts
   and adds where that is profitable.

   Given the memory aliasing analysis that this pass should perform,
   it should be possible to remove redundant stores to memory, and to
   load values from registers instead of hitting memory.

   This pass must update information that subsequent passes expect to be
   correct.  Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
   reg_n_calls_crossed, and reg_live_length.  Also, basic_block_head,
   basic_block_end.

   The information in the line number notes is carefully retained by
   this pass.  Notes that refer to the starting and ending of
   exception regions are also carefully retained by this pass.  All
   other NOTE insns are grouped in their same relative order at the
   beginning of basic blocks that have been scheduled.  */

#include "config.h"
#include <stdio.h>
#include "rtl.h"
#include "basic-block.h"
#include "regs.h"
#include "hard-reg-set.h"
#include "flags.h"
#include "insn-config.h"
#include "insn-attr.h"

#ifdef INSN_SCHEDULING
/* Arrays set up by scheduling for the same respective purposes as
   similar-named arrays set up by flow analysis.  We work with these
   arrays during the scheduling pass so we can compare values against
   unscheduled code.

   Values of these arrays are copied at the end of this pass into the
   arrays set up by flow analysis.  */
static int *sched_reg_n_calls_crossed;
static int *sched_reg_live_length;

/* Element N is the next insn that sets (hard or pseudo) register
   N within the current basic block; or zero, if there is no
   such insn.  Needed for new registers which may be introduced
   by splitting insns.  */
static rtx *reg_last_uses;
static rtx *reg_last_sets;
static regset reg_pending_sets;
static int reg_pending_sets_all;

/* Vector indexed by INSN_UID giving the original ordering of the insns.  */
static int *insn_luid;
#define INSN_LUID(INSN) (insn_luid[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving each instruction a priority.  */
static int *insn_priority;
#define INSN_PRIORITY(INSN) (insn_priority[INSN_UID (INSN)])

static short *insn_costs;
#define INSN_COST(INSN) insn_costs[INSN_UID (INSN)]

/* Vector indexed by INSN_UID giving an encoding of the function units
   used.  */
static short *insn_units;
#define INSN_UNIT(INSN) insn_units[INSN_UID (INSN)]

/* Vector indexed by INSN_UID giving an encoding of the blockage range
   function.  The unit and the range are encoded.  */
static unsigned int *insn_blockage;
#define INSN_BLOCKAGE(INSN) insn_blockage[INSN_UID (INSN)]
#define UNIT_BITS 5
#define BLOCKAGE_MASK ((1 << BLOCKAGE_BITS) - 1)
#define ENCODE_BLOCKAGE(U,R)				\
  ((((U) << UNIT_BITS) << BLOCKAGE_BITS			\
    | MIN_BLOCKAGE_COST (R)) << BLOCKAGE_BITS		\
   | MAX_BLOCKAGE_COST (R))
#define UNIT_BLOCKED(B) ((B) >> (2 * BLOCKAGE_BITS))
#define BLOCKAGE_RANGE(B) \
  (((((B) >> BLOCKAGE_BITS) & BLOCKAGE_MASK) << (HOST_BITS_PER_INT / 2)) \
   | (B) & BLOCKAGE_MASK)

/* Encodings of the `<name>_unit_blockage_range' function.  */
#define MIN_BLOCKAGE_COST(R) ((R) >> (HOST_BITS_PER_INT / 2))
#define MAX_BLOCKAGE_COST(R) ((R) & ((1 << (HOST_BITS_PER_INT / 2)) - 1))

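/* A sketch of the layout: ENCODE_BLOCKAGE packs three fields into one
   int, the unit in the high bits, then the minimum and the maximum
   blockage cost of the range R in two BLOCKAGE_BITS-wide fields, so
   that UNIT_BLOCKED and BLOCKAGE_RANGE can unpack them again with
   shifts and masks.  (BLOCKAGE_BITS is not defined in this file; like
   MAX_BLOCKAGE it is expected from the insn-attr machinery.)  */
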
#define DONE_PRIORITY	-1
#define MAX_PRIORITY	0x7fffffff
#define TAIL_PRIORITY	0x7ffffffe
#define LAUNCH_PRIORITY	0x7f000001
#define DONE_PRIORITY_P(INSN) (INSN_PRIORITY (INSN) < 0)
#define LOW_PRIORITY_P(INSN) ((INSN_PRIORITY (INSN) & 0x7f000000) == 0)

/* Vector indexed by INSN_UID giving number of insns referring to this insn.  */
static int *insn_ref_count;
#define INSN_REF_COUNT(INSN) (insn_ref_count[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving line-number note in effect for each
   insn.  For line-number notes, this indicates whether the note may be
   reused.  */
static rtx *line_note;
#define LINE_NOTE(INSN) (line_note[INSN_UID (INSN)])

/* Vector indexed by basic block number giving the starting line-number
   for each basic block.  */
static rtx *line_note_head;

/* List of important notes we must keep around.  This is a pointer to the
   last element in the list.  */
static rtx note_list;

/* Regsets telling whether a given register is live or dead before the last
   scheduled insn.  Must scan the instructions once before scheduling to
   determine what registers are live or dead at the end of the block.  */
static regset bb_dead_regs;
static regset bb_live_regs;

/* Regset telling whether a given register is live after the insn currently
   being scheduled.  Before processing an insn, this is equal to bb_live_regs
   above.  This is used so that we can find registers that are newly born/dead
   after processing an insn.  */
static regset old_live_regs;

/* The chain of REG_DEAD notes.  REG_DEAD notes are removed from all insns
   during the initial scan and reused later.  If there are not exactly as
   many REG_DEAD notes in the post-scheduled code as there were in the
   prescheduled code then we trigger an abort because this indicates a bug.  */
static rtx dead_notes;

/* Queues, etc.  */

/* An instruction is ready to be scheduled when all insns following it
   have already been scheduled.  It is important to ensure that all
   insns which use its result will not be executed until its result
   has been computed.  An insn is maintained in one of four structures:

   (P) the "Pending" set of insns which cannot be scheduled until
   their dependencies have been satisfied.
   (Q) the "Queued" set of insns that can be scheduled when sufficient
   time has passed.
   (R) the "Ready" list of unscheduled, uncommitted insns.
   (S) the "Scheduled" list of insns.

   Initially, all insns are either "Pending" or "Ready" depending on
   whether their dependencies are satisfied.

   Insns move from the "Ready" list to the "Scheduled" list as they
   are committed to the schedule.  As this occurs, the insns in the
   "Pending" list have their dependencies satisfied and move to either
   the "Ready" list or the "Queued" set depending on whether
   sufficient time has passed to make them ready.  As time passes,
   insns move from the "Queued" set to the "Ready" list.  Insns may
   move from the "Ready" list to the "Queued" set if they are blocked
   due to a function unit conflict.

   The "Pending" list (P) are the insns in the LOG_LINKS of the unscheduled
   insns, i.e., those that are ready, queued, and pending.
   The "Queued" set (Q) is implemented by the variable `insn_queue'.
   The "Ready" list (R) is implemented by the variables `ready' and
   `n_ready'.
   The "Scheduled" list (S) is the new insn chain built by this pass.

   The transition (R->S) is implemented in the scheduling loop in
   `schedule_block' when the best insn to schedule is chosen.
   The transition (R->Q) is implemented in `schedule_select' when an
   insn is found to have a function unit conflict with the already
   committed insns.
   The transitions (P->R and P->Q) are implemented in `schedule_insn' as
   insns move from the ready list to the scheduled list.
   The transition (Q->R) is implemented at the top of the scheduling
   loop in `schedule_block' as time passes or stalls are introduced.  */

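/* Schematically, then, the scheduling loop behaves roughly like this
   (a rough sketch only; the real code in schedule_block also handles
   notes, register liveness, and dump output):

	while (some insn is not yet in S)
	  {
	    advance the clock, moving insns whose delay has elapsed
	      from Q to R;
	    sort R by the tie-breaking rules listed above;
	    move the best insn in R to S, reserving its function units;
	    move insns in P whose dependencies are now satisfied to
	      R or Q, according to their result latency;
	  }  */
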
/* Implement a circular buffer to delay instructions until sufficient
   time has passed.  INSN_QUEUE_SIZE is a power of two larger than
   MAX_BLOCKAGE and MAX_READY_COST computed by genattr.c.  This is the
   longest time an insn may be queued.  */
static rtx insn_queue[INSN_QUEUE_SIZE];
static int q_ptr = 0;
static int q_size = 0;
#define NEXT_Q(X) (((X)+1) & (INSN_QUEUE_SIZE-1))
#define NEXT_Q_AFTER(X,C) (((X)+C) & (INSN_QUEUE_SIZE-1))

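/* Because INSN_QUEUE_SIZE is a power of two, the masking in NEXT_Q and
   NEXT_Q_AFTER implements wraparound; for instance, if INSN_QUEUE_SIZE
   were 16, NEXT_Q_AFTER (14, 3) would yield slot 1.  */
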
/* Vector indexed by INSN_UID giving the minimum clock tick at which
   the insn becomes ready.  This is used to note timing constraints for
   insns in the pending list.  */
static int *insn_tick;
#define INSN_TICK(INSN) (insn_tick[INSN_UID (INSN)])

/* Data structure for keeping track of register information
   during that register's life.  */

struct sometimes
{
  int regno;
  int live_length;
  int calls_crossed;
};

/* Forward declarations.  */
static rtx canon_rtx PROTO((rtx));
static int rtx_equal_for_memref_p PROTO((rtx, rtx));
static rtx find_symbolic_term PROTO((rtx));
static int memrefs_conflict_p PROTO((int, rtx, int, rtx,
				     HOST_WIDE_INT));
static void add_dependence PROTO((rtx, rtx, enum reg_note));
static void remove_dependence PROTO((rtx, rtx));
static rtx find_insn_list PROTO((rtx, rtx));
static int insn_unit PROTO((rtx));
static unsigned int blockage_range PROTO((int, rtx));
static void clear_units PROTO((void));
static void prepare_unit PROTO((int));
static int actual_hazard_this_instance PROTO((int, int, rtx, int, int));
static void schedule_unit PROTO((int, rtx, int));
static int actual_hazard PROTO((int, rtx, int, int));
static int potential_hazard PROTO((int, rtx, int));
static int insn_cost PROTO((rtx, rtx, rtx));
static int priority PROTO((rtx));
static void free_pending_lists PROTO((void));
static void add_insn_mem_dependence PROTO((rtx *, rtx *, rtx, rtx));
static void flush_pending_lists PROTO((rtx, int));
static void sched_analyze_1 PROTO((rtx, rtx));
static void sched_analyze_2 PROTO((rtx, rtx));
static void sched_analyze_insn PROTO((rtx, rtx, rtx));
static int sched_analyze PROTO((rtx, rtx));
static void sched_note_set PROTO((int, rtx, int));
static int rank_for_schedule PROTO((rtx *, rtx *));
static void swap_sort PROTO((rtx *, int));
static void queue_insn PROTO((rtx, int));
static int birthing_insn_p PROTO((rtx));
static void adjust_priority PROTO((rtx));
static int schedule_insn PROTO((rtx, rtx *, int, int));
static int schedule_select PROTO((rtx *, int, int, FILE *));
static void create_reg_dead_note PROTO((rtx, rtx));
static void attach_deaths PROTO((rtx, rtx, int));
static void attach_deaths_insn PROTO((rtx));
static rtx unlink_notes PROTO((rtx, rtx));
static int new_sometimes_live PROTO((struct sometimes *, int, int));
static void finish_sometimes_live PROTO((struct sometimes *, int));
static rtx reemit_notes PROTO((rtx, rtx));
static void schedule_block PROTO((int, FILE *));
static rtx regno_use_in PROTO((int, rtx));
static void split_hard_reg_notes PROTO((rtx, rtx, rtx, rtx));
static void new_insn_dead_notes PROTO((rtx, rtx, rtx, rtx));
static void update_n_sets PROTO((rtx, int));
static void update_flow_info PROTO((rtx, rtx, rtx, rtx));

/* Main entry point of this file.  */
void schedule_insns PROTO((FILE *));

#endif /* INSN_SCHEDULING */

#define SIZE_FOR_MODE(X) (GET_MODE_SIZE (GET_MODE (X)))

/* Vector indexed by N giving the initial (unchanging) value known
   for pseudo-register N.  */
static rtx *reg_known_value;

/* Vector recording for each reg_known_value whether it is due to a
   REG_EQUIV note.  Future passes (viz., reload) may replace the
   pseudo with the equivalent expression and so we account for the
   dependences that would be introduced if that happens.  */
/* ??? This is a problem only on the Convex.  The REG_EQUIV notes created in
   assign_parms mention the arg pointer, and there are explicit insns in the
   RTL that modify the arg pointer.  Thus we must ensure that such insns don't
   get scheduled across each other because that would invalidate the REG_EQUIV
   notes.  One could argue that the REG_EQUIV notes are wrong, but solving
   the problem in the scheduler will likely give better code, so we do it
   here.  */
static char *reg_known_equiv_p;

/* Indicates number of valid entries in reg_known_value.  */
static int reg_known_value_size;

static rtx
canon_rtx (x)
     rtx x;
{
  /* Recursively look for equivalences.  */
  if (GET_CODE (x) == REG && REGNO (x) >= FIRST_PSEUDO_REGISTER
      && REGNO (x) <= reg_known_value_size)
    return reg_known_value[REGNO (x)] == x
      ? x : canon_rtx (reg_known_value[REGNO (x)]);
  else if (GET_CODE (x) == PLUS)
    {
      rtx x0 = canon_rtx (XEXP (x, 0));
      rtx x1 = canon_rtx (XEXP (x, 1));

      if (x0 != XEXP (x, 0) || x1 != XEXP (x, 1))
	{
	  /* We can tolerate LO_SUMs being offset here; these
	     rtl are used for nothing other than comparisons.  */
	  if (GET_CODE (x0) == CONST_INT)
	    return plus_constant_for_output (x1, INTVAL (x0));
	  else if (GET_CODE (x1) == CONST_INT)
	    return plus_constant_for_output (x0, INTVAL (x1));
	  return gen_rtx (PLUS, GET_MODE (x), x0, x1);
	}
    }
  /* This gives us much better alias analysis when called from
     the loop optimizer.  Note we want to leave the original
     MEM alone, but need to return the canonicalized MEM with
     all the flags with their original values.  */
  else if (GET_CODE (x) == MEM)
    {
      rtx copy = copy_rtx (x);
      XEXP (copy, 0) = canon_rtx (XEXP (copy, 0));
      x = copy;
    }
  return x;
}

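/* As an illustration (register numbers invented for the example): if
   reg_known_value records that pseudo 65 always holds (plus (reg 64)
   (const_int 4)), then canon_rtx maps (plus (reg 65) (const_int 8)) to
   (plus (reg 64) (const_int 12)), so that addresses built from
   equivalent registers compare equal in rtx_equal_for_memref_p.  */
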
/* Set up all info needed to perform alias analysis on memory references.  */

void
init_alias_analysis ()
{
  int maxreg = max_reg_num ();
  rtx insn;
  rtx note;
  rtx set;

  reg_known_value_size = maxreg;

  reg_known_value
    = (rtx *) oballoc ((maxreg-FIRST_PSEUDO_REGISTER) * sizeof (rtx))
      - FIRST_PSEUDO_REGISTER;
  bzero ((char *) (reg_known_value + FIRST_PSEUDO_REGISTER),
	 (maxreg-FIRST_PSEUDO_REGISTER) * sizeof (rtx));

  reg_known_equiv_p
    = (char *) oballoc ((maxreg - FIRST_PSEUDO_REGISTER) * sizeof (char))
      - FIRST_PSEUDO_REGISTER;
  bzero (reg_known_equiv_p + FIRST_PSEUDO_REGISTER,
	 (maxreg - FIRST_PSEUDO_REGISTER) * sizeof (char));

  /* Fill in the entries with known constant values.  */
  for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
    if ((set = single_set (insn)) != 0
	&& GET_CODE (SET_DEST (set)) == REG
	&& REGNO (SET_DEST (set)) >= FIRST_PSEUDO_REGISTER
	&& (((note = find_reg_note (insn, REG_EQUAL, 0)) != 0
	     && REG_N_SETS (REGNO (SET_DEST (set))) == 1)
	    || (note = find_reg_note (insn, REG_EQUIV, NULL_RTX)) != 0)
	&& GET_CODE (XEXP (note, 0)) != EXPR_LIST)
      {
	int regno = REGNO (SET_DEST (set));
	reg_known_value[regno] = XEXP (note, 0);
	reg_known_equiv_p[regno] = REG_NOTE_KIND (note) == REG_EQUIV;
      }

  /* Fill in the remaining entries.  */
  while (--maxreg >= FIRST_PSEUDO_REGISTER)
    if (reg_known_value[maxreg] == 0)
      reg_known_value[maxreg] = regno_reg_rtx[maxreg];
}

/* Return 1 if X and Y are identical-looking rtx's.

   We use the data in reg_known_value above to see if two registers with
   different numbers are, in fact, equivalent.  */

static int
rtx_equal_for_memref_p (x, y)
     rtx x, y;
{
  register int i;
  register int j;
  register enum rtx_code code;
  register char *fmt;

  if (x == 0 && y == 0)
    return 1;
  if (x == 0 || y == 0)
    return 0;
  x = canon_rtx (x);
  y = canon_rtx (y);

  if (x == y)
    return 1;

  code = GET_CODE (x);
  /* Rtx's of different codes cannot be equal.  */
  if (code != GET_CODE (y))
    return 0;

  /* (MULT:SI x y) and (MULT:HI x y) are NOT equivalent.
     (REG:SI x) and (REG:HI x) are NOT equivalent.  */

  if (GET_MODE (x) != GET_MODE (y))
    return 0;

  /* REG, LABEL_REF, and SYMBOL_REF can be compared nonrecursively.  */

  if (code == REG)
    return REGNO (x) == REGNO (y);
  if (code == LABEL_REF)
    return XEXP (x, 0) == XEXP (y, 0);
  if (code == SYMBOL_REF)
    return XSTR (x, 0) == XSTR (y, 0);

  /* For commutative operations, the RTX's match if the operands match in
     either order.  Also handle the simple binary and unary cases without
     a loop.  */
  if (code == EQ || code == NE || GET_RTX_CLASS (code) == 'c')
    return ((rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 0))
	     && rtx_equal_for_memref_p (XEXP (x, 1), XEXP (y, 1)))
	    || (rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 1))
		&& rtx_equal_for_memref_p (XEXP (x, 1), XEXP (y, 0))));
  else if (GET_RTX_CLASS (code) == '<' || GET_RTX_CLASS (code) == '2')
    return (rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 0))
	    && rtx_equal_for_memref_p (XEXP (x, 1), XEXP (y, 1)));
  else if (GET_RTX_CLASS (code) == '1')
    return rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 0));

  /* Compare the elements.  If any pair of corresponding elements
     fail to match, return 0 for the whole thing.  */

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      switch (fmt[i])
	{
	case 'w':
	  if (XWINT (x, i) != XWINT (y, i))
	    return 0;
	  break;

	case 'n':
	case 'i':
	  if (XINT (x, i) != XINT (y, i))
	    return 0;
	  break;

	case 'V':
	case 'E':
	  /* Two vectors must have the same length.  */
	  if (XVECLEN (x, i) != XVECLEN (y, i))
	    return 0;

	  /* And the corresponding elements must match.  */
	  for (j = 0; j < XVECLEN (x, i); j++)
	    if (rtx_equal_for_memref_p (XVECEXP (x, i, j), XVECEXP (y, i, j)) == 0)
	      return 0;
	  break;

	case 'e':
	  if (rtx_equal_for_memref_p (XEXP (x, i), XEXP (y, i)) == 0)
	    return 0;
	  break;

	case 'S':
	case 's':
	  if (strcmp (XSTR (x, i), XSTR (y, i)))
	    return 0;
	  break;

	case 'u':
	  /* These are just backpointers, so they don't matter.  */
	  break;

	case '0':
	  break;

	  /* It is believed that rtx's at this level will never
	     contain anything but integers and other rtx's,
	     except for within LABEL_REFs and SYMBOL_REFs.  */
	default:
	  abort ();
	}
    }
  return 1;
}

/* Given an rtx X, find a SYMBOL_REF or LABEL_REF within
   X and return it, or return 0 if none found.  */

static rtx
find_symbolic_term (x)
     rtx x;
{
  register int i;
  register enum rtx_code code;
  register char *fmt;

  code = GET_CODE (x);
  if (code == SYMBOL_REF || code == LABEL_REF)
    return x;
  if (GET_RTX_CLASS (code) == 'o')
    return 0;

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      rtx t;

      if (fmt[i] == 'e')
	{
	  t = find_symbolic_term (XEXP (x, i));
	  if (t != 0)
	    return t;
	}
      else if (fmt[i] == 'E')
	break;
    }
  return 0;
}

/* Return nonzero if X and Y (memory addresses) could reference the
   same location in memory.  C is an offset accumulator.  When
   C is nonzero, we are testing aliases between X and Y + C.
   XSIZE is the size in bytes of the X reference,
   similarly YSIZE is the size in bytes for Y.

   If XSIZE or YSIZE is zero, we do not know the amount of memory being
   referenced (the reference was BLKmode), so make the most pessimistic
   assumptions.

   We recognize the following cases of non-conflicting memory:

   (1) addresses involving the frame pointer cannot conflict
       with addresses involving static variables.
   (2) static variables with different addresses cannot conflict.

   Nice to notice that varying addresses cannot conflict with fp if no
   local variables had their addresses taken, but that's too hard now.  */

/* ??? In Fortran, references to an array parameter can never conflict with
   another array parameter.  */

static int
memrefs_conflict_p (xsize, x, ysize, y, c)
     rtx x, y;
     int xsize, ysize;
     HOST_WIDE_INT c;
{
  if (GET_CODE (x) == HIGH)
    x = XEXP (x, 0);
  else if (GET_CODE (x) == LO_SUM)
    x = XEXP (x, 1);
  else
    x = canon_rtx (x);
  if (GET_CODE (y) == HIGH)
    y = XEXP (y, 0);
  else if (GET_CODE (y) == LO_SUM)
    y = XEXP (y, 1);
  else
    y = canon_rtx (y);

  if (rtx_equal_for_memref_p (x, y))
    return (xsize == 0 || ysize == 0
	    || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));

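  /* The test above says the references overlap iff one starts before
     the other ends.  For example, a 4-byte reference at X against a
     4-byte reference at X + 2 (c == 2) conflicts because xsize (4) > c
     (2); at X + 4 it would not, since 4 > 4 fails.  */
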
  if (y == frame_pointer_rtx || y == hard_frame_pointer_rtx
      || y == stack_pointer_rtx)
    {
      rtx t = y;
      int tsize = ysize;
      y = x; ysize = xsize;
      x = t; xsize = tsize;
    }

  if (x == frame_pointer_rtx || x == hard_frame_pointer_rtx
      || x == stack_pointer_rtx)
    {
      rtx y1;

      if (CONSTANT_P (y))
	return 0;

      if (GET_CODE (y) == PLUS
	  && canon_rtx (XEXP (y, 0)) == x
	  && (y1 = canon_rtx (XEXP (y, 1)))
	  && GET_CODE (y1) == CONST_INT)
	{
	  c += INTVAL (y1);
	  return (xsize == 0 || ysize == 0
		  || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));
	}

      if (GET_CODE (y) == PLUS
	  && (y1 = canon_rtx (XEXP (y, 0)))
	  && CONSTANT_P (y1))
	return 0;

      return 1;
    }

  if (GET_CODE (x) == PLUS)
    {
      /* The fact that X is canonicalized means that this
	 PLUS rtx is canonicalized.  */
      rtx x0 = XEXP (x, 0);
      rtx x1 = XEXP (x, 1);

      if (GET_CODE (y) == PLUS)
	{
	  /* The fact that Y is canonicalized means that this
	     PLUS rtx is canonicalized.  */
	  rtx y0 = XEXP (y, 0);
	  rtx y1 = XEXP (y, 1);

	  if (rtx_equal_for_memref_p (x1, y1))
	    return memrefs_conflict_p (xsize, x0, ysize, y0, c);
	  if (rtx_equal_for_memref_p (x0, y0))
	    return memrefs_conflict_p (xsize, x1, ysize, y1, c);
	  if (GET_CODE (x1) == CONST_INT)
	    if (GET_CODE (y1) == CONST_INT)
	      return memrefs_conflict_p (xsize, x0, ysize, y0,
					 c - INTVAL (x1) + INTVAL (y1));
	    else
	      return memrefs_conflict_p (xsize, x0, ysize, y, c - INTVAL (x1));
	  else if (GET_CODE (y1) == CONST_INT)
	    return memrefs_conflict_p (xsize, x, ysize, y0, c + INTVAL (y1));

	  /* Handle case where we cannot understand iteration operators,
	     but we notice that the base addresses are distinct objects.  */
	  x = find_symbolic_term (x);
	  if (x == 0)
	    return 1;
	  y = find_symbolic_term (y);
	  if (y == 0)
	    return 1;
	  return rtx_equal_for_memref_p (x, y);
	}
      else if (GET_CODE (x1) == CONST_INT)
	return memrefs_conflict_p (xsize, x0, ysize, y, c - INTVAL (x1));
    }
  else if (GET_CODE (y) == PLUS)
    {
      /* The fact that Y is canonicalized means that this
	 PLUS rtx is canonicalized.  */
      rtx y0 = XEXP (y, 0);
      rtx y1 = XEXP (y, 1);

      if (GET_CODE (y1) == CONST_INT)
	return memrefs_conflict_p (xsize, x, ysize, y0, c + INTVAL (y1));
      else
	return 1;
    }

  if (GET_CODE (x) == GET_CODE (y) && GET_CODE (x) == MULT)
    {
      /* Handle cases where we expect the second operands to be the
	 same, and check only whether the first operand would conflict
	 or not.  */
      rtx x0, y0;
      rtx x1 = canon_rtx (XEXP (x, 1));
      rtx y1 = canon_rtx (XEXP (y, 1));
      if (! rtx_equal_for_memref_p (x1, y1))
	return 1;
      x0 = canon_rtx (XEXP (x, 0));
      y0 = canon_rtx (XEXP (y, 0));
      if (rtx_equal_for_memref_p (x0, y0))
	return (xsize == 0 || ysize == 0
		|| (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));

      /* Can't properly adjust our sizes.  */
      if (GET_CODE (x1) != CONST_INT)
	return 1;
      xsize /= INTVAL (x1);
      ysize /= INTVAL (x1);
      c /= INTVAL (x1);
      return memrefs_conflict_p (xsize, x0, ysize, y0, c);
    }

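  /* An example of the MULT case above: for addresses (mult i 4) and
     (mult j 4), the scale factors match, so the conflict question
     reduces to comparing i and j, with the sizes and the offset C
     scaled down by 4.  */
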
  if (CONSTANT_P (x))
    {
      if (GET_CODE (x) == CONST_INT && GET_CODE (y) == CONST_INT)
	{
	  c += (INTVAL (y) - INTVAL (x));
	  return (xsize == 0 || ysize == 0
		  || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));
	}

      if (GET_CODE (x) == CONST)
	{
	  if (GET_CODE (y) == CONST)
	    return memrefs_conflict_p (xsize, canon_rtx (XEXP (x, 0)),
				       ysize, canon_rtx (XEXP (y, 0)), c);
	  else
	    return memrefs_conflict_p (xsize, canon_rtx (XEXP (x, 0)),
				       ysize, y, c);
	}
      if (GET_CODE (y) == CONST)
	return memrefs_conflict_p (xsize, x, ysize,
				   canon_rtx (XEXP (y, 0)), c);

      if (CONSTANT_P (y))
	return (rtx_equal_for_memref_p (x, y)
		&& (xsize == 0 || ysize == 0
		    || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0)));

      return 1;
    }

  return 1;
}

/* Functions to compute memory dependencies.

   Since we process the insns in execution order, we can build tables
   to keep track of what registers are fixed (and not aliased), what registers
   are varying in known ways, and what registers are varying in unknown
   ways.

   If both memory references are volatile, then there must always be a
   dependence between the two references, since their order cannot be
   changed.  A volatile and non-volatile reference can be interchanged
   though.

   A MEM_IN_STRUCT reference at a non-QImode non-AND varying address can never
   conflict with a non-MEM_IN_STRUCT reference at a fixed address.  We must
   allow QImode aliasing because the ANSI C standard allows character
   pointers to alias anything.  We are assuming that characters are
   always QImode here.  We also must allow AND addresses, because they may
   generate accesses outside the object being referenced.  This is used to
   generate aligned addresses from unaligned addresses, for instance, the
   alpha storeqi_unaligned pattern.  */

/* Read dependence: X is read after read in MEM takes place.  There can
   only be a dependence here if both reads are volatile.  */

int
read_dependence (mem, x)
     rtx mem;
     rtx x;
{
  return MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem);
}

/* True dependence: X is read after store in MEM takes place.  */

int
true_dependence (mem, x)
     rtx mem;
     rtx x;
{
  /* If X is an unchanging read, then it can't possibly conflict with any
     non-unchanging store.  It may conflict with an unchanging write though,
     because there may be a single store to this address to initialize it.
     Just fall through to the code below to resolve the case where we have
     both an unchanging read and an unchanging write.  This won't handle all
     cases optimally, but the possible performance loss should be
     negligible.  */
  x = canon_rtx (x);
  mem = canon_rtx (mem);
  if (RTX_UNCHANGING_P (x) && ! RTX_UNCHANGING_P (mem))
    return 0;

  return ((MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
	  || (memrefs_conflict_p (SIZE_FOR_MODE (mem), XEXP (mem, 0),
				  SIZE_FOR_MODE (x), XEXP (x, 0), 0)
	      && ! (MEM_IN_STRUCT_P (mem) && rtx_addr_varies_p (mem)
		    && GET_MODE (mem) != QImode
		    && GET_CODE (XEXP (mem, 0)) != AND
		    && ! MEM_IN_STRUCT_P (x) && ! rtx_addr_varies_p (x))
	      && ! (MEM_IN_STRUCT_P (x) && rtx_addr_varies_p (x)
		    && GET_MODE (x) != QImode
		    && GET_CODE (XEXP (x, 0)) != AND
		    && ! MEM_IN_STRUCT_P (mem) && ! rtx_addr_varies_p (mem))));
}

/* Anti dependence: X is written after read in MEM takes place.  */

int
anti_dependence (mem, x)
     rtx mem;
     rtx x;
{
  /* If MEM is an unchanging read, then it can't possibly conflict with
     the store to X, because there is at most one store to MEM, and it must
     have occurred somewhere before MEM.  */
  x = canon_rtx (x);
  mem = canon_rtx (mem);
  if (RTX_UNCHANGING_P (mem))
    return 0;

  return ((MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
	  || (memrefs_conflict_p (SIZE_FOR_MODE (mem), XEXP (mem, 0),
				  SIZE_FOR_MODE (x), XEXP (x, 0), 0)
	      && ! (MEM_IN_STRUCT_P (mem) && rtx_addr_varies_p (mem)
		    && GET_MODE (mem) != QImode
		    && GET_CODE (XEXP (mem, 0)) != AND
		    && ! MEM_IN_STRUCT_P (x) && ! rtx_addr_varies_p (x))
	      && ! (MEM_IN_STRUCT_P (x) && rtx_addr_varies_p (x)
		    && GET_MODE (x) != QImode
		    && GET_CODE (XEXP (x, 0)) != AND
		    && ! MEM_IN_STRUCT_P (mem) && ! rtx_addr_varies_p (mem))));
}

/* Output dependence: X is written after store in MEM takes place.  */

int
output_dependence (mem, x)
     rtx mem;
     rtx x;
{
  x = canon_rtx (x);
  mem = canon_rtx (mem);
  return ((MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
	  || (memrefs_conflict_p (SIZE_FOR_MODE (mem), XEXP (mem, 0),
				  SIZE_FOR_MODE (x), XEXP (x, 0), 0)
	      && ! (MEM_IN_STRUCT_P (mem) && rtx_addr_varies_p (mem)
		    && GET_MODE (mem) != QImode
		    && GET_CODE (XEXP (mem, 0)) != AND
		    && ! MEM_IN_STRUCT_P (x) && ! rtx_addr_varies_p (x))
	      && ! (MEM_IN_STRUCT_P (x) && rtx_addr_varies_p (x)
		    && GET_MODE (x) != QImode
		    && GET_CODE (XEXP (x, 0)) != AND
		    && ! MEM_IN_STRUCT_P (mem) && ! rtx_addr_varies_p (mem))));
}

/* Helper functions for instruction scheduling.  */

/* Add ELEM wrapped in an INSN_LIST with reg note kind DEP_TYPE to the
   LOG_LINKS of INSN, if not already there.  DEP_TYPE indicates the type
   of dependence that this link represents.  */

static void
add_dependence (insn, elem, dep_type)
     rtx insn;
     rtx elem;
     enum reg_note dep_type;
{
  rtx link, next;

  /* Don't depend an insn on itself.  */
  if (insn == elem)
    return;

  /* If elem is part of a sequence that must be scheduled together, then
     make the dependence point to the last insn of the sequence.
     When HAVE_cc0, it is possible for NOTEs to exist between users and
     setters of the condition codes, so we must skip past notes here.
     Otherwise, NOTEs are impossible here.  */

  next = NEXT_INSN (elem);

#ifdef HAVE_cc0
  while (next && GET_CODE (next) == NOTE)
    next = NEXT_INSN (next);
#endif

  if (next && SCHED_GROUP_P (next)
      && GET_CODE (next) != CODE_LABEL)
    {
      /* Notes will never intervene here though, so don't bother checking
	 for them.  */
      /* We must reject CODE_LABELs, so that we don't get confused by one
	 that has LABEL_PRESERVE_P set, which is represented by the same
	 bit in the rtl as SCHED_GROUP_P.  A CODE_LABEL can never be
	 SCHED_GROUP_P.  */
      while (NEXT_INSN (next) && SCHED_GROUP_P (NEXT_INSN (next))
	     && GET_CODE (NEXT_INSN (next)) != CODE_LABEL)
	next = NEXT_INSN (next);

      /* Again, don't depend an insn on itself.  */
      if (insn == next)
	return;

      /* Make the dependence to NEXT, the last insn of the group, instead
	 of the original ELEM.  */
      elem = next;
    }

  /* Check that we don't already have this dependence.  */
  for (link = LOG_LINKS (insn); link; link = XEXP (link, 1))
    if (XEXP (link, 0) == elem)
      {
	/* If this is a more restrictive type of dependence than the existing
	   one, then change the existing dependence to this type.  */
	if ((int) dep_type < (int) REG_NOTE_KIND (link))
	  PUT_REG_NOTE_KIND (link, dep_type);
	return;
      }
  /* Might want to check one level of transitivity to save conses.  */

  link = rtx_alloc (INSN_LIST);
  /* Insn dependency, not data dependency.  */
  PUT_REG_NOTE_KIND (link, dep_type);
  XEXP (link, 0) = elem;
  XEXP (link, 1) = LOG_LINKS (insn);
  LOG_LINKS (insn) = link;
}

/* Remove ELEM wrapped in an INSN_LIST from the LOG_LINKS
   of INSN.  Abort if not found.  */

static void
remove_dependence (insn, elem)
     rtx insn;
     rtx elem;
{
  rtx prev, link;
  int found = 0;

  for (prev = 0, link = LOG_LINKS (insn); link; link = XEXP (link, 1))
    {
      if (XEXP (link, 0) == elem)
	{
	  RTX_INTEGRATED_P (link) = 1;
	  if (prev)
	    XEXP (prev, 1) = XEXP (link, 1);
	  else
	    LOG_LINKS (insn) = XEXP (link, 1);
	  found = 1;
	}
      else
	prev = link;
    }

  if (! found)
    abort ();
  return;
}

#ifndef INSN_SCHEDULING
void
schedule_insns (dump_file)
     FILE *dump_file;
{
}
#else
#ifndef __GNUC__
#define __inline
#endif

/* Computation of memory dependencies.  */

/* The *_insns and *_mems are paired lists.  Each pending memory operation
   will have a pointer to the MEM rtx on one list and a pointer to the
   containing insn on the other list in the same place in the list.  */

/* We can't use add_dependence like the old code did, because a single insn
   may have multiple memory accesses, and hence needs to be on the list
   once for each memory access.  Add_dependence won't let you add an insn
   to a list more than once.  */

/* An INSN_LIST containing all insns with pending read operations.  */
static rtx pending_read_insns;

/* An EXPR_LIST containing all MEM rtx's which are pending reads.  */
static rtx pending_read_mems;

/* An INSN_LIST containing all insns with pending write operations.  */
static rtx pending_write_insns;

/* An EXPR_LIST containing all MEM rtx's which are pending writes.  */
static rtx pending_write_mems;

/* Indicates the combined length of the two pending lists.  We must prevent
   these lists from ever growing too large, since the number of dependencies
   produced grows as O(N*N) in the length N of these pending lists, and the
   execution time spent producing them grows at the same quadratic rate.  */

static int pending_lists_length;

/* An INSN_LIST containing all INSN_LISTs allocated but currently unused.  */

static rtx unused_insn_list;

/* An EXPR_LIST containing all EXPR_LISTs allocated but currently unused.  */

static rtx unused_expr_list;

/* The last insn upon which all memory references must depend.
   This is an insn which flushed the pending lists, creating a dependency
   between it and all previously pending memory references.  This creates
   a barrier (or a checkpoint) which no memory reference is allowed to cross.

   This includes all non-constant CALL_INSNs.  When we do interprocedural
   alias analysis, this restriction can be relaxed.
   This may also be an INSN that writes memory if the pending lists grow
   too large.  */

static rtx last_pending_memory_flush;

/* The last function call we have seen.  All hard regs, and, of course,
   the last function call, must depend on this.  */

static rtx last_function_call;

/* The LOG_LINKS field of this is a list of insns which use a pseudo register
   that does not already cross a call.  We create dependencies between each
   of those insns and the next call insn, to ensure that they won't cross a
   call after scheduling is done.  */

static rtx sched_before_next_call;

/* Pointer to the last instruction scheduled.  Used by rank_for_schedule,
   so that insns independent of the last scheduled insn will be preferred
   over dependent instructions.  */

static rtx last_scheduled_insn;

/* Process an insn's memory dependencies.  There are four kinds of
   dependencies:

   (0) read dependence: read follows read
   (1) true dependence: read follows write
   (2) anti dependence: write follows read
   (3) output dependence: write follows write

   We are careful to build only dependencies which actually exist, and
   use transitivity to avoid building too many links.  */

/* Return the INSN_LIST containing INSN in LIST, or NULL
   if LIST does not contain INSN.  */

__inline static rtx
find_insn_list (insn, list)
     rtx insn;
     rtx list;
{
  while (list)
    {
      if (XEXP (list, 0) == insn)
	return list;
      list = XEXP (list, 1);
    }
  return 0;
}

/* Compute the function units used by INSN.  This caches the value
   returned by function_units_used.  A function unit is encoded as the
   unit number if the value is non-negative and the complement of a
   mask if the value is negative.  A function unit index is the
   non-negative encoding.  */

__inline static int
insn_unit (insn)
     rtx insn;
{
  register int unit = INSN_UNIT (insn);

  if (unit == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
	 We can't pass these directly to function_units_used because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
	unit = -1;
      else
	{
	  unit = function_units_used (insn);
	  /* Increment non-negative values so we can cache zero.  */
	  if (unit >= 0) unit++;
	}
      /* We only cache 16 bits of the result, so if the value is out of
	 range, don't cache it.  */
      if (FUNCTION_UNITS_SIZE < HOST_BITS_PER_SHORT
	  || unit >= 0
	  || (~unit & ((1 << (HOST_BITS_PER_SHORT - 1)) - 1)) == 0)
	INSN_UNIT (insn) = unit;
    }
  return (unit > 0 ? unit - 1 : unit);
}

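/* For example, an insn that needs only unit 3 is encoded as 3, while
   an insn that may use unit 0 or unit 2 is encoded as the complement
   of the mask (1 << 0 | 1 << 2); prepare_unit and friends decode the
   negative form by walking the complemented mask bit by bit.  */
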
/* Compute the blockage range for executing INSN on UNIT.  This caches
   the value returned by the blockage_range_function for the unit.
   These values are encoded in an int where the upper half gives the
   minimum value and the lower half gives the maximum value.  */

__inline static unsigned int
blockage_range (unit, insn)
     int unit;
     rtx insn;
{
  unsigned int blockage = INSN_BLOCKAGE (insn);
  unsigned int range;

  if (UNIT_BLOCKED (blockage) != unit + 1)
    {
      range = function_units[unit].blockage_range_function (insn);
      /* We only cache the blockage range for one unit and then only if
	 the values fit.  */
      if (HOST_BITS_PER_INT >= UNIT_BITS + 2 * BLOCKAGE_BITS)
	INSN_BLOCKAGE (insn) = ENCODE_BLOCKAGE (unit + 1, range);
    }
  else
    range = BLOCKAGE_RANGE (blockage);

  return range;
}

/* A vector indexed by function unit instance giving the last insn to use
   the unit.  The value of the function unit instance index for unit U
   instance I is (U + I * FUNCTION_UNITS_SIZE).  */
static rtx unit_last_insn[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit instance giving the minimum time when
   the unit will unblock based on the maximum blockage cost.  */
static int unit_tick[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit number giving the number of insns
   that remain to use the unit.  */
static int unit_n_insns[FUNCTION_UNITS_SIZE];

/* Reset the function unit state to the null state.  */

static void
clear_units ()
{
  bzero ((char *) unit_last_insn, sizeof (unit_last_insn));
  bzero ((char *) unit_tick, sizeof (unit_tick));
  bzero ((char *) unit_n_insns, sizeof (unit_n_insns));
}

/* Record an insn as one that will use the units encoded by UNIT.  */

__inline static void
prepare_unit (unit)
     int unit;
{
  int i;

  if (unit >= 0)
    unit_n_insns[unit]++;
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	prepare_unit (i);
}

/* Return the actual hazard cost of executing INSN on the unit UNIT,
   instance INSTANCE at time CLOCK if the previous actual hazard cost
   was COST.  */

__inline static int
actual_hazard_this_instance (unit, instance, insn, clock, cost)
     int unit, instance, clock, cost;
     rtx insn;
{
  int tick = unit_tick[instance];

  if (tick - clock > cost)
    {
      /* The scheduler is operating in reverse, so INSN is the executing
	 insn and the unit's last insn is the candidate insn.  We want a
	 more exact measure of the blockage if we execute INSN at CLOCK
	 given when we committed the execution of the unit's last insn.

	 The blockage value is given by either the unit's max blockage
	 constant, blockage range function, or blockage function.  Use
	 the most exact form for the given unit.  */

      if (function_units[unit].blockage_range_function)
	{
	  if (function_units[unit].blockage_function)
	    tick += (function_units[unit].blockage_function
		     (insn, unit_last_insn[instance])
		     - function_units[unit].max_blockage);
	  else
	    tick += ((int) MAX_BLOCKAGE_COST (blockage_range (unit, insn))
		     - function_units[unit].max_blockage);
	}
      if (tick - clock > cost)
	cost = tick - clock;
    }
  return cost;
}

/* Record INSN as having begun execution on the units encoded by UNIT at
   time CLOCK.  */

__inline static void
schedule_unit (unit, insn, clock)
     int unit, clock;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      int instance = unit;
#if MAX_MULTIPLICITY > 1
      /* Find the first free instance of the function unit and use that
	 one.  We assume that one is free.  */
      for (i = function_units[unit].multiplicity - 1; i > 0; i--)
	{
	  if (! actual_hazard_this_instance (unit, instance, insn, clock, 0))
	    break;
	  instance += FUNCTION_UNITS_SIZE;
	}
#endif
      unit_last_insn[instance] = insn;
      unit_tick[instance] = (clock + function_units[unit].max_blockage);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	schedule_unit (i, insn, clock);
}

/* Return the actual hazard cost of executing INSN on the units encoded by
   UNIT at time CLOCK if the previous actual hazard cost was COST.  */

__inline static int
actual_hazard (unit, insn, clock, cost)
     int unit, clock, cost;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      /* Find the instance of the function unit with the minimum hazard.  */
      int instance = unit;
      int best_cost = actual_hazard_this_instance (unit, instance, insn,
						   clock, cost);
      int this_cost;

#if MAX_MULTIPLICITY > 1
      if (best_cost > cost)
	{
	  for (i = function_units[unit].multiplicity - 1; i > 0; i--)
	    {
	      instance += FUNCTION_UNITS_SIZE;
	      this_cost = actual_hazard_this_instance (unit, instance, insn,
						       clock, cost);
	      if (this_cost < best_cost)
		{
		  best_cost = this_cost;
		  if (this_cost <= cost)
		    break;
		}
	    }
	}
#endif
      cost = MAX (cost, best_cost);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	cost = actual_hazard (i, insn, clock, cost);

  return cost;
}

/* Return the potential hazard cost of executing an instruction on the
   units encoded by UNIT if the previous potential hazard cost was COST.
   An insn with a large blockage time is chosen in preference to one
   with a smaller time; an insn that uses a unit that is more likely
   to be used is chosen in preference to one with a unit that is less
   used.  We are trying to minimize a subsequent actual hazard.  */

__inline static int
potential_hazard (unit, insn, cost)
     int unit, cost;
     rtx insn;
{
  int i, ncost;
  unsigned int minb, maxb;

  if (unit >= 0)
    {
      minb = maxb = function_units[unit].max_blockage;
      if (maxb > 1)
	{
	  if (function_units[unit].blockage_range_function)
	    {
	      maxb = minb = blockage_range (unit, insn);
	      maxb = MAX_BLOCKAGE_COST (maxb);
	      minb = MIN_BLOCKAGE_COST (minb);
	    }

	  if (maxb > 1)
	    {
	      /* Make the number of instructions left dominate.  Make the
		 minimum delay dominate the maximum delay.  If all these
		 are the same, use the unit number to add an arbitrary
		 ordering.  Other terms can be added.  */
	      ncost = minb * 0x40 + maxb;
	      ncost *= (unit_n_insns[unit] - 1) * 0x1000 + unit;
	      if (ncost > cost)
		cost = ncost;
	    }
	}
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	cost = potential_hazard (i, insn, cost);

  return cost;
}

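/* As a worked example of the weighting above: a unit with minb == 1 and
   maxb == 3 gives ncost = 1 * 0x40 + 3 = 0x43; if two insns remain on
   unit 0, the multiplier is (2 - 1) * 0x1000 + 0 = 0x1000, for a final
   potential hazard cost of 0x43000.  */
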
/* Compute cost of executing INSN given the dependence LINK on the insn USED.
   This is the number of virtual cycles taken between instruction issue and
   instruction results.  */

__inline static int
insn_cost (insn, link, used)
     rtx insn, link, used;
{
  register int cost = INSN_COST (insn);

  if (cost == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
	 We can't pass these directly to result_ready_cost because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
	{
	  INSN_COST (insn) = 1;
	  return 1;
	}
      else
	{
	  cost = result_ready_cost (insn);

	  if (cost < 1)
	    cost = 1;

	  INSN_COST (insn) = cost;
	}
    }

  /* A USE insn should never require the value used to be computed.  This
     allows the computation of a function's result and parameter values to
     overlap the return and call.  */
  recog_memoized (used);
  if (INSN_CODE (used) < 0)
    LINK_COST_FREE (link) = 1;

  /* If some dependencies vary the cost, compute the adjustment.  Most
     commonly, the adjustment is complete: either the cost is ignored
     (in the case of an output- or anti-dependence), or the cost is
     unchanged.  These values are cached in the link as LINK_COST_FREE
     and LINK_COST_ZERO.  */

  if (LINK_COST_FREE (link))
    cost = 1;
#ifdef ADJUST_COST
  else if (! LINK_COST_ZERO (link))
    {
      int ncost = cost;

      ADJUST_COST (used, link, insn, ncost);
      if (ncost <= 1)
	LINK_COST_FREE (link) = ncost = 1;
      if (cost == ncost)
	LINK_COST_ZERO (link) = 1;
      cost = ncost;
    }
#endif
  return cost;
}

/* Compute the priority number for INSN.  */

static int
priority (insn)
     rtx insn;
{
  if (insn && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
    {
      int prev_priority;
      int max_priority;
      int this_priority = INSN_PRIORITY (insn);
      rtx prev;

      if (this_priority > 0)
	return this_priority;

      max_priority = 1;

      /* Nonzero if these insns must be scheduled together.  */
      if (SCHED_GROUP_P (insn))
	{
	  prev = insn;
	  while (SCHED_GROUP_P (prev))
	    {
	      prev = PREV_INSN (prev);
	      INSN_REF_COUNT (prev) += 1;
	    }
	}

      for (prev = LOG_LINKS (insn); prev; prev = XEXP (prev, 1))
	{
	  rtx x = XEXP (prev, 0);

	  /* If this was a duplicate of a dependence we already deleted,
	     ignore it.  */
	  if (RTX_INTEGRATED_P (prev))
	    continue;

	  /* A dependence pointing to a note or deleted insn is always
	     obsolete, because sched_analyze_insn will have created any
	     necessary new dependences which replace it.  Notes and deleted
	     insns can be created when instructions are deleted by insn
	     splitting, or by register allocation.  */
	  if (GET_CODE (x) == NOTE || INSN_DELETED_P (x))
	    {
	      remove_dependence (insn, x);
	      continue;
	    }

	  /* Clear the link cost adjustment bits.  */
	  LINK_COST_FREE (prev) = 0;
#ifdef ADJUST_COST
	  LINK_COST_ZERO (prev) = 0;
#endif

	  /* This priority calculation was chosen because it results in the
	     least instruction movement, and does not hurt the performance
	     of the resulting code compared to the old algorithm.
	     This makes the sched algorithm more stable, which results
	     in better code, because there is less register pressure,
	     cross jumping is more likely to work, and debugging is easier.

	     When all instructions have a latency of 1, there is no need to
	     move any instructions.  Subtracting one here ensures that in such
	     cases all instructions will end up with a priority of one, and
	     hence no scheduling will be done.

	     The original code did not subtract the one, and added the
	     insn_cost of the current instruction to its priority (e.g.
	     move the insn_cost call down to the end).  */

	  prev_priority = priority (x) + insn_cost (x, prev, insn) - 1;

	  if (prev_priority > max_priority)
	    max_priority = prev_priority;
	  INSN_REF_COUNT (x) += 1;
	}

      prepare_unit (insn_unit (insn));
      INSN_PRIORITY (insn) = max_priority;
      return INSN_PRIORITY (insn);
    }
  return 0;
}

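/* For example, an insn whose only LOG_LINKS entry X has priority 1 and
   a 3-cycle insn_cost receives priority 1 + 3 - 1 == 3; and when every
   cost is 1, the subtraction leaves all priorities at 1, so no
   reordering is done.  */
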
/* Remove all INSN_LISTs and EXPR_LISTs from the pending lists and add
   them to the unused_*_list variables, so that they can be reused.  */

static void
free_pending_lists ()
{
  register rtx link, prev_link;

  if (pending_read_insns)
    {
      prev_link = pending_read_insns;
      link = XEXP (prev_link, 1);

      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}

      XEXP (prev_link, 1) = unused_insn_list;
      unused_insn_list = pending_read_insns;
      pending_read_insns = 0;
    }

  if (pending_write_insns)
    {
      prev_link = pending_write_insns;
      link = XEXP (prev_link, 1);

      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}

      XEXP (prev_link, 1) = unused_insn_list;
      unused_insn_list = pending_write_insns;
      pending_write_insns = 0;
    }

  if (pending_read_mems)
    {
      prev_link = pending_read_mems;
      link = XEXP (prev_link, 1);

      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}

      XEXP (prev_link, 1) = unused_expr_list;
      unused_expr_list = pending_read_mems;
      pending_read_mems = 0;
    }

  if (pending_write_mems)
    {
      prev_link = pending_write_mems;
      link = XEXP (prev_link, 1);

      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}

      XEXP (prev_link, 1) = unused_expr_list;
      unused_expr_list = pending_write_mems;
      pending_write_mems = 0;
    }
}

/* Add an INSN and MEM reference pair to a pending INSN_LIST and MEM_LIST.
   The MEM is a memory reference contained within INSN, which we are saving
   so that we can do memory aliasing on it.  */

static void
add_insn_mem_dependence (insn_list, mem_list, insn, mem)
     rtx *insn_list, *mem_list, insn, mem;
{
  register rtx link;

  if (unused_insn_list)
    {
      link = unused_insn_list;
      unused_insn_list = XEXP (link, 1);
    }
  else
    link = rtx_alloc (INSN_LIST);
  XEXP (link, 0) = insn;
  XEXP (link, 1) = *insn_list;
  *insn_list = link;

  if (unused_expr_list)
    {
      link = unused_expr_list;
      unused_expr_list = XEXP (link, 1);
    }
  else
    link = rtx_alloc (EXPR_LIST);
  XEXP (link, 0) = mem;
  XEXP (link, 1) = *mem_list;
  *mem_list = link;

  pending_lists_length++;
}

/* Make a dependency between every memory reference on the pending lists
   and INSN, thus flushing the pending lists.  If ONLY_WRITE, don't flush
   the read list.  */

static void
flush_pending_lists (insn, only_write)
     rtx insn;
     int only_write;
{
  rtx link;

  while (pending_read_insns && ! only_write)
    {
      add_dependence (insn, XEXP (pending_read_insns, 0), REG_DEP_ANTI);

      link = pending_read_insns;
      pending_read_insns = XEXP (pending_read_insns, 1);
      XEXP (link, 1) = unused_insn_list;
      unused_insn_list = link;

      link = pending_read_mems;
      pending_read_mems = XEXP (pending_read_mems, 1);
      XEXP (link, 1) = unused_expr_list;
      unused_expr_list = link;
    }
  while (pending_write_insns)
    {
      add_dependence (insn, XEXP (pending_write_insns, 0), REG_DEP_ANTI);

      link = pending_write_insns;
      pending_write_insns = XEXP (pending_write_insns, 1);
      XEXP (link, 1) = unused_insn_list;
      unused_insn_list = link;

      link = pending_write_mems;
      pending_write_mems = XEXP (pending_write_mems, 1);
      XEXP (link, 1) = unused_expr_list;
      unused_expr_list = link;
    }
  pending_lists_length = 0;

  if (last_pending_memory_flush)
    add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);

  last_pending_memory_flush = insn;
}

/* Analyze a single SET or CLOBBER rtx, X, creating all dependencies generated
   by the write to the destination of X, and reads of everything mentioned.  */

static void
sched_analyze_1 (x, insn)
     rtx x;
     rtx insn;
{
  register int regno;
  register rtx dest = SET_DEST (x);

  if (dest == 0)
    return;

  while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
	 || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
    {
      if (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
	{
	  /* The second and third arguments are values read by this insn.  */
	  sched_analyze_2 (XEXP (dest, 1), insn);
	  sched_analyze_2 (XEXP (dest, 2), insn);
	}
      dest = SUBREG_REG (dest);
    }

  if (GET_CODE (dest) == REG)
    {
      register int i;

      regno = REGNO (dest);

      /* A hard reg in a wide mode may really be multiple registers.
	 If so, mark all of them just like the first.  */
      if (regno < FIRST_PSEUDO_REGISTER)
	{
	  i = HARD_REGNO_NREGS (regno, GET_MODE (dest));
	  while (--i >= 0)
	    {
	      rtx u;

	      for (u = reg_last_uses[regno+i]; u; u = XEXP (u, 1))
		add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
	      reg_last_uses[regno + i] = 0;
	      if (reg_last_sets[regno + i])
		add_dependence (insn, reg_last_sets[regno + i],
				REG_DEP_OUTPUT);
	      SET_REGNO_REG_SET (reg_pending_sets, regno + i);
	      if ((call_used_regs[i] || global_regs[i])
		  && last_function_call)
		/* Function calls clobber all call_used regs.  */
		add_dependence (insn, last_function_call, REG_DEP_ANTI);
	    }
	}
      else
	{
	  rtx u;

	  for (u = reg_last_uses[regno]; u; u = XEXP (u, 1))
	    add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
	  reg_last_uses[regno] = 0;
	  if (reg_last_sets[regno])
	    add_dependence (insn, reg_last_sets[regno], REG_DEP_OUTPUT);
	  SET_REGNO_REG_SET (reg_pending_sets, regno);

	  /* Pseudos that are REG_EQUIV to something may be replaced
	     by that during reloading.  We need only add dependencies for
	     the address in the REG_EQUIV note.  */
	  if (! reload_completed
	      && reg_known_equiv_p[regno]
	      && GET_CODE (reg_known_value[regno]) == MEM)
	    sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);

	  /* Don't let it cross a call after scheduling if it doesn't
	     already cross one.  */
	  if (REG_N_CALLS_CROSSED (regno) == 0 && last_function_call)
	    add_dependence (insn, last_function_call, REG_DEP_ANTI);
	}
    }
  else if (GET_CODE (dest) == MEM)
    {
      /* Writing memory.  */

      if (pending_lists_length > 32)
	{
	  /* Flush all pending reads and writes to prevent the pending lists
	     from getting any larger.  Insn scheduling runs too slowly when
	     these lists get long.  The number 32 was chosen because it
	     seems like a reasonable number.  When compiling GCC with itself,
	     this flush occurs 8 times for sparc, and 10 times for m88k using
	     the number 32.  */
	  flush_pending_lists (insn, 0);
	}
      else
	{
	  rtx pending, pending_mem;

	  pending = pending_read_insns;
	  pending_mem = pending_read_mems;
	  while (pending)
	    {
	      /* If a dependency already exists, don't create a new one.  */
	      if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
		if (anti_dependence (XEXP (pending_mem, 0), dest))
		  add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);

	      pending = XEXP (pending, 1);
	      pending_mem = XEXP (pending_mem, 1);
	    }

	  pending = pending_write_insns;
	  pending_mem = pending_write_mems;
	  while (pending)
	    {
	      /* If a dependency already exists, don't create a new one.  */
	      if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
		if (output_dependence (XEXP (pending_mem, 0), dest))
		  add_dependence (insn, XEXP (pending, 0), REG_DEP_OUTPUT);

	      pending = XEXP (pending, 1);
	      pending_mem = XEXP (pending_mem, 1);
	    }

	  if (last_pending_memory_flush)
	    add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);

	  add_insn_mem_dependence (&pending_write_insns, &pending_write_mems,
				   insn, dest);
	}
      sched_analyze_2 (XEXP (dest, 0), insn);
    }

  /* Analyze reads.  */
  if (GET_CODE (x) == SET)
    sched_analyze_2 (SET_SRC (x), insn);
}

1827 /* Analyze the uses of memory and registers in rtx X in INSN. */
1829 static void
1830 sched_analyze_2 (x, insn)
1831 rtx x;
1832 rtx insn;
1834 register int i;
1835 register int j;
1836 register enum rtx_code code;
1837 register char *fmt;
1839 if (x == 0)
1840 return;
1842 code = GET_CODE (x);
1844 switch (code)
1846 case CONST_INT:
1847 case CONST_DOUBLE:
1848 case SYMBOL_REF:
1849 case CONST:
1850 case LABEL_REF:
1851 /* Ignore constants. Note that we must handle CONST_DOUBLE here
1852 because it may have a cc0_rtx in its CONST_DOUBLE_CHAIN field, but
1853 this does not mean that this insn is using cc0. */
1854 return;
1856 #ifdef HAVE_cc0
1857 case CC0:
1859 rtx link, prev;
1861 /* User of CC0 depends on immediately preceding insn. */
1862 SCHED_GROUP_P (insn) = 1;
1864 /* There may be a note before this insn now, but all notes will
1865 be removed before we actually try to schedule the insns, so
1866 it won't cause a problem later. We must avoid it here though. */
1867 prev = prev_nonnote_insn (insn);
1869 /* Make a copy of all dependencies on the immediately previous insn,
1870 and add to this insn. This is so that all the dependencies will
1871 apply to the group. Remove an explicit dependence on this insn
1872 as SCHED_GROUP_P now represents it. */
1874 if (find_insn_list (prev, LOG_LINKS (insn)))
1875 remove_dependence (insn, prev);
1877 for (link = LOG_LINKS (prev); link; link = XEXP (link, 1))
1878 add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));
1880 return;
1882 #endif
1884 case REG:
1886 int regno = REGNO (x);
1887 if (regno < FIRST_PSEUDO_REGISTER)
1889 int i;
1891 i = HARD_REGNO_NREGS (regno, GET_MODE (x));
1892 while (--i >= 0)
1894 reg_last_uses[regno + i]
1895 = gen_rtx (INSN_LIST, VOIDmode,
1896 insn, reg_last_uses[regno + i]);
1897 if (reg_last_sets[regno + i])
1898 add_dependence (insn, reg_last_sets[regno + i], 0);
1899 if ((call_used_regs[regno + i] || global_regs[regno + i])
1900 && last_function_call)
1901 /* Function calls clobber all call_used regs. */
1902 add_dependence (insn, last_function_call, REG_DEP_ANTI);
1905 else
1907 reg_last_uses[regno]
1908 = gen_rtx (INSN_LIST, VOIDmode, insn, reg_last_uses[regno]);
1909 if (reg_last_sets[regno])
1910 add_dependence (insn, reg_last_sets[regno], 0);
1912 /* Pseudos that are REG_EQUIV to something may be replaced
1913 by that during reloading. We need only add dependencies for
1914 the address in the REG_EQUIV note. */
1915 if (! reload_completed
1916 && reg_known_equiv_p[regno]
1917 && GET_CODE (reg_known_value[regno]) == MEM)
1918 sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);
1920 /* If the register does not already cross any calls, then add this
1921 insn to the sched_before_next_call list so that it will still
1922 not cross calls after scheduling. */
1923 if (REG_N_CALLS_CROSSED (regno) == 0)
1924 add_dependence (sched_before_next_call, insn, REG_DEP_ANTI);
1926 return;
1929 case MEM:
1931 /* Reading memory. */
1933 rtx pending, pending_mem;
1935 pending = pending_read_insns;
1936 pending_mem = pending_read_mems;
1937 while (pending)
1939 /* If a dependency already exists, don't create a new one. */
1940 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1941 if (read_dependence (XEXP (pending_mem, 0), x))
1942 add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);
1944 pending = XEXP (pending, 1);
1945 pending_mem = XEXP (pending_mem, 1);
1948 pending = pending_write_insns;
1949 pending_mem = pending_write_mems;
1950 while (pending)
1952 /* If a dependency already exists, don't create a new one. */
1953 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1954 if (true_dependence (XEXP (pending_mem, 0), x))
1955 add_dependence (insn, XEXP (pending, 0), 0);
1957 pending = XEXP (pending, 1);
1958 pending_mem = XEXP (pending_mem, 1);
1960 if (last_pending_memory_flush)
1961 add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);
1963 /* Always add these dependencies to pending_reads, since
1964 this insn may be followed by a write. */
1965 add_insn_mem_dependence (&pending_read_insns, &pending_read_mems,
1966 insn, x);
1968 /* Take advantage of tail recursion here. */
1969 sched_analyze_2 (XEXP (x, 0), insn);
1970 return;
1973 case ASM_OPERANDS:
1974 case ASM_INPUT:
1975 case UNSPEC_VOLATILE:
1976 case TRAP_IF:
1978 rtx u;
1980 /* Traditional and volatile asm instructions must be considered to use
1981 and clobber all hard registers, all pseudo-registers and all of
1982 memory. So must TRAP_IF and UNSPEC_VOLATILE operations.
1984 Consider for instance a volatile asm that changes the FPU rounding
1985 mode. An insn should not be moved across this even if it only uses
1986 pseudo-regs because it might give an incorrectly rounded result. */
1987 if (code != ASM_OPERANDS || MEM_VOLATILE_P (x))
1989 int max_reg = max_reg_num ();
1990 for (i = 0; i < max_reg; i++)
1992 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
1993 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
1994 reg_last_uses[i] = 0;
1995 if (reg_last_sets[i])
1996 add_dependence (insn, reg_last_sets[i], 0);
1998 reg_pending_sets_all = 1;
2000 flush_pending_lists (insn, 0);
2003 /* For all ASM_OPERANDS, we must traverse the vector of input operands.
2004 We cannot just fall through here since then we would be confused
2005 by the ASM_INPUT rtx inside ASM_OPERANDS, which does not indicate
2006 a traditional asm, unlike its normal usage. */
2008 if (code == ASM_OPERANDS)
2010 for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
2011 sched_analyze_2 (ASM_OPERANDS_INPUT (x, j), insn);
2012 return;
2014 break;
2017 case PRE_DEC:
2018 case POST_DEC:
2019 case PRE_INC:
2020 case POST_INC:
2021 /* These both read and modify the result. We must handle them as writes
2022 to get proper dependencies for following instructions. We must handle
2023 them as reads to get proper dependencies from this to previous
2024 instructions. Thus we need to pass them to both sched_analyze_1
2025 and sched_analyze_2. We must call sched_analyze_2 first in order
2026 to get the proper antecedent for the read. */
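/* E.g. for a hypothetical (post_inc:SI (reg:SI N)), the register is
   both read and written: the sched_analyze_2 call below links this
   insn after any earlier setter of the register, and sched_analyze_1
   then treats the register as set, so that later references are
   ordered after this insn.  */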
2027 sched_analyze_2 (XEXP (x, 0), insn);
2028 sched_analyze_1 (x, insn);
2029 return;
2031 default:
2032 break;
2035 /* Other cases: walk the insn. */
2036 fmt = GET_RTX_FORMAT (code);
2037 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
2039 if (fmt[i] == 'e')
2040 sched_analyze_2 (XEXP (x, i), insn);
2041 else if (fmt[i] == 'E')
2042 for (j = 0; j < XVECLEN (x, i); j++)
2043 sched_analyze_2 (XVECEXP (x, i, j), insn);
2047 /* Analyze an INSN with pattern X to find all dependencies. */
2049 static void
2050 sched_analyze_insn (x, insn, loop_notes)
2051 rtx x, insn;
2052 rtx loop_notes;
2054 register RTX_CODE code = GET_CODE (x);
2055 rtx link;
2056 int maxreg = max_reg_num ();
2057 int i;
2059 if (code == SET || code == CLOBBER)
2060 sched_analyze_1 (x, insn);
2061 else if (code == PARALLEL)
2063 register int i;
2064 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
2066 code = GET_CODE (XVECEXP (x, 0, i));
2067 if (code == SET || code == CLOBBER)
2068 sched_analyze_1 (XVECEXP (x, 0, i), insn);
2069 else
2070 sched_analyze_2 (XVECEXP (x, 0, i), insn);
2073 else
2074 sched_analyze_2 (x, insn);
2076 /* Mark registers CLOBBERED or used by called function. */
2077 if (GET_CODE (insn) == CALL_INSN)
2078 for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
2080 if (GET_CODE (XEXP (link, 0)) == CLOBBER)
2081 sched_analyze_1 (XEXP (link, 0), insn);
2082 else
2083 sched_analyze_2 (XEXP (link, 0), insn);
2086 /* If there is a {LOOP,EHREGION}_{BEG,END} note in the middle of a basic
2087 block, then we must be sure that no instructions are scheduled across it.
2088 Otherwise, the reg_n_refs info (which depends on loop_depth) would
2089 become incorrect. */
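/* The note therefore acts as a full scheduling barrier, handled just
   like a volatile asm above: the insn carrying it is made dependent
   on every recorded register use and set, and flush_pending_lists
   pins all pending memory references as well.  */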
2091 if (loop_notes)
2093 int max_reg = max_reg_num ();
2094 rtx link;
2096 for (i = 0; i < max_reg; i++)
2098 rtx u;
2099 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
2100 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
2101 reg_last_uses[i] = 0;
2102 if (reg_last_sets[i])
2103 add_dependence (insn, reg_last_sets[i], 0);
2105 reg_pending_sets_all = 1;
2107 flush_pending_lists (insn, 0);
2109 link = loop_notes;
2110 while (XEXP (link, 1))
2111 link = XEXP (link, 1);
2112 XEXP (link, 1) = REG_NOTES (insn);
2113 REG_NOTES (insn) = loop_notes;
2116 /* After reload, it is possible for an instruction to have a REG_DEAD note
2117 for a register that actually dies a few instructions earlier. For
2118 example, this can happen with SECONDARY_MEMORY_NEEDED reloads.
2119 In this case, we must consider the insn to use the register mentioned
2120 in the REG_DEAD note. Otherwise, we may accidentally move this insn
2121 after another insn that sets the register, thus getting obviously invalid
2122 rtl. This confuses reorg which believes that REG_DEAD notes are still
2123 meaningful.
2125 ??? We would get better code if we fixed reload to put the REG_DEAD
2126 notes in the right places, but that may not be worth the effort. */
2128 if (reload_completed)
2130 rtx note;
2132 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
2133 if (REG_NOTE_KIND (note) == REG_DEAD)
2134 sched_analyze_2 (XEXP (note, 0), insn);
2137 EXECUTE_IF_SET_IN_REG_SET (reg_pending_sets, 0, i,
2139 reg_last_sets[i] = insn;
2141 CLEAR_REG_SET (reg_pending_sets);
2143 if (reg_pending_sets_all)
2145 for (i = 0; i < maxreg; i++)
2146 reg_last_sets[i] = insn;
2147 reg_pending_sets_all = 0;
2150 /* Handle function calls and function returns created by the epilogue
2151 threading code. */
2152 if (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN)
2154 rtx dep_insn;
2155 rtx prev_dep_insn;
2157 /* When scheduling instructions, we make sure calls don't lose their
2158 accompanying USE insns by making each depend on its predecessor, in order.
2160 Also, we must do the same thing for returns created by the epilogue
2161 threading code. Note this code works only in this special case,
2162 because other passes make no guarantee that they will never emit
2163 an instruction between a USE and a RETURN. There is such a guarantee
2164 for USE instructions immediately before a call. */
2166 prev_dep_insn = insn;
2167 dep_insn = PREV_INSN (insn);
2168 while (GET_CODE (dep_insn) == INSN
2169 && GET_CODE (PATTERN (dep_insn)) == USE
2170 && GET_CODE (XEXP (PATTERN (dep_insn), 0)) == REG)
2172 SCHED_GROUP_P (prev_dep_insn) = 1;
2174 /* Make a copy of all dependencies on dep_insn, and add to insn.
2175 This is so that all of the dependencies will apply to the
2176 group. */
2178 for (link = LOG_LINKS (dep_insn); link; link = XEXP (link, 1))
2179 add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));
2181 prev_dep_insn = dep_insn;
2182 dep_insn = PREV_INSN (dep_insn);
2187 /* Analyze every insn between HEAD and TAIL inclusive, creating LOG_LINKS
2188 for every dependency. */
2190 static int
2191 sched_analyze (head, tail)
2192 rtx head, tail;
2194 register rtx insn;
2195 register int n_insns = 0;
2196 register rtx u;
2197 register int luid = 0;
2198 rtx loop_notes = 0;
2200 for (insn = head; ; insn = NEXT_INSN (insn))
2202 INSN_LUID (insn) = luid++;
2204 if (GET_CODE (insn) == INSN || GET_CODE (insn) == JUMP_INSN)
2206 sched_analyze_insn (PATTERN (insn), insn, loop_notes);
2207 loop_notes = 0;
2208 n_insns += 1;
2210 else if (GET_CODE (insn) == CALL_INSN)
2212 rtx x;
2213 register int i;
2215 /* Any instruction using a hard register which may get clobbered
2216 by a call needs to be marked as dependent on this call.
2217 This prevents a use of a hard return reg from being moved
2218 past a void call (i.e. it does not explicitly set the hard
2219 return reg). */
2221 /* If this call is followed by a NOTE_INSN_SETJMP, then assume that
2222 all registers, not just hard registers, may be clobbered by this
2223 call. */
2225 /* Insn, being a CALL_INSN, magically depends on
2226 `last_function_call' already. */
2228 if (NEXT_INSN (insn) && GET_CODE (NEXT_INSN (insn)) == NOTE
2229 && NOTE_LINE_NUMBER (NEXT_INSN (insn)) == NOTE_INSN_SETJMP)
2231 int max_reg = max_reg_num ();
2232 for (i = 0; i < max_reg; i++)
2234 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
2235 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
2236 reg_last_uses[i] = 0;
2237 if (reg_last_sets[i])
2238 add_dependence (insn, reg_last_sets[i], 0);
2240 reg_pending_sets_all = 1;
2242 /* Add a pair of fake REG_NOTEs which we will later
2243 convert back into a NOTE_INSN_SETJMP note. See
2244 reemit_notes for why we use a pair of NOTEs. */
2246 REG_NOTES (insn) = gen_rtx (EXPR_LIST, REG_DEAD,
2247 GEN_INT (0),
2248 REG_NOTES (insn));
2249 REG_NOTES (insn) = gen_rtx (EXPR_LIST, REG_DEAD,
2250 GEN_INT (NOTE_INSN_SETJMP),
2251 REG_NOTES (insn));
2253 else
2255 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
2256 if (call_used_regs[i] || global_regs[i])
2258 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
2259 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
2260 reg_last_uses[i] = 0;
2261 if (reg_last_sets[i])
2262 add_dependence (insn, reg_last_sets[i], REG_DEP_ANTI);
2263 SET_REGNO_REG_SET (reg_pending_sets, i);
2267 /* For each insn which shouldn't cross a call, add a dependence
2268 between that insn and this call insn. */
2269 x = LOG_LINKS (sched_before_next_call);
2270 while (x)
2272 add_dependence (insn, XEXP (x, 0), REG_DEP_ANTI);
2273 x = XEXP (x, 1);
2275 LOG_LINKS (sched_before_next_call) = 0;
2277 sched_analyze_insn (PATTERN (insn), insn, loop_notes);
2278 loop_notes = 0;
2280 /* In the absence of interprocedural alias analysis, we must flush
2281 all pending reads and writes, and start new dependencies starting
2282 from here. But only flush writes for constant calls (which may
2283 be passed a pointer to something we haven't written yet). */
2284 flush_pending_lists (insn, CONST_CALL_P (insn));
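/* A const call may read memory through its pointer arguments but
   never writes it, so pending stores must still be ordered before
   the call, while pending loads are free to cross it (a nonzero
   only_write skips the read lists in flush_pending_lists).  */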
2286 /* Depend this function call (actually, the user of this
2287 function call) on all hard register clobberage. */
2288 last_function_call = insn;
2289 n_insns += 1;
2292 /* See comments on reemit_notes as to why we do this. */
2293 else if (GET_CODE (insn) == NOTE
2294 && (NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_BEG
2295 || NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_END
2296 || NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_BEG
2297 || NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_END
2298 || (NOTE_LINE_NUMBER (insn) == NOTE_INSN_SETJMP
2299 && GET_CODE (PREV_INSN (insn)) != CALL_INSN)))
2301 loop_notes = gen_rtx (EXPR_LIST, REG_DEAD,
2302 GEN_INT (NOTE_BLOCK_NUMBER (insn)), loop_notes);
2303 loop_notes = gen_rtx (EXPR_LIST, REG_DEAD,
2304 GEN_INT (NOTE_LINE_NUMBER (insn)), loop_notes);
2305 CONST_CALL_P (loop_notes) = CONST_CALL_P (insn);
2308 if (insn == tail)
2309 return n_insns;
2312 abort ();
2315 /* Called when we see a set of a register. If death is true, then we are
2316 scanning backwards. Mark that register as unborn. If nobody says
2317 otherwise, that is how things will remain. If death is false, then we
2318 are scanning forwards. Mark that register as being born. */
2320 static void
2321 sched_note_set (b, x, death)
2322 int b;
2323 rtx x;
2324 int death;
2326 register int regno;
2327 register rtx reg = SET_DEST (x);
2328 int subreg_p = 0;
2330 if (reg == 0)
2331 return;
2333 while (GET_CODE (reg) == SUBREG || GET_CODE (reg) == STRICT_LOW_PART
2334 || GET_CODE (reg) == SIGN_EXTRACT || GET_CODE (reg) == ZERO_EXTRACT)
2336 /* Must treat modification of just one hardware register of a multi-reg
2337 value or just a byte field of a register exactly the same way that
2338 mark_set_1 in flow.c does, i.e. anything except a paradoxical subreg
2339 does not kill the entire register. */
2340 if (GET_CODE (reg) != SUBREG
2341 || REG_SIZE (SUBREG_REG (reg)) > REG_SIZE (reg))
2342 subreg_p = 1;
2344 reg = SUBREG_REG (reg);
2347 if (GET_CODE (reg) != REG)
2348 return;
2350 /* Global registers are always live, so the code below does not apply
2351 to them. */
2353 regno = REGNO (reg);
2354 if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
2356 if (death)
2358 /* If we only set part of the register, then this set does not
2359 kill it. */
2360 if (subreg_p)
2361 return;
2363 /* Try killing this register. */
2364 if (regno < FIRST_PSEUDO_REGISTER)
2366 int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
2367 while (--j >= 0)
2369 CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
2370 SET_REGNO_REG_SET (bb_dead_regs, regno + j);
2373 else
2375 CLEAR_REGNO_REG_SET (bb_live_regs, regno);
2376 SET_REGNO_REG_SET (bb_dead_regs, regno);
2379 else
2381 /* Make the register live again. */
2382 if (regno < FIRST_PSEUDO_REGISTER)
2384 int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
2385 while (--j >= 0)
2387 SET_REGNO_REG_SET (bb_live_regs, regno + j);
2388 CLEAR_REGNO_REG_SET (bb_dead_regs, regno + j);
2391 else
2393 SET_REGNO_REG_SET (bb_live_regs, regno);
2394 CLEAR_REGNO_REG_SET (bb_dead_regs, regno);
2400 /* Macros and functions for keeping the priority queue sorted, and
2401 dealing with queueing and dequeueing of instructions. */
2403 #define SCHED_SORT(READY, NEW_READY, OLD_READY) \
2404 do { if ((NEW_READY) - (OLD_READY) == 1) \
2405 swap_sort (READY, NEW_READY); \
2406 else if ((NEW_READY) - (OLD_READY) > 1) \
2407 qsort (READY, NEW_READY, sizeof (rtx), rank_for_schedule); } \
2408 while (0)
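/* OLD_READY is the count of entries already in sorted order; only
   the newly added entries at the end may be misplaced, so a single
   new entry is bubbled into position by swap_sort while two or more
   fall back to a full qsort.  */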
2410 /* Returns a positive value if y is preferred; returns a negative value if
2411 x is preferred. Should never return 0, since that will make the sort
2412 unstable. */
2414 static int
2415 rank_for_schedule (x, y)
2416 rtx *x, *y;
2418 rtx tmp = *y;
2419 rtx tmp2 = *x;
2420 rtx link;
2421 int tmp_class, tmp2_class;
2422 int value;
2424 /* Choose the instruction with the highest priority, if different. */
2425 if ((value = INSN_PRIORITY (tmp) - INSN_PRIORITY (tmp2)) != 0)
2426 return value;
2428 if (last_scheduled_insn)
2430 /* Classify the instructions into three classes:
2431 1) Data dependent on last scheduled insn.
2432 2) Anti/Output dependent on last scheduled insn.
2433 3) Independent of last scheduled insn, or has latency of one.
2434 Choose the insn from the highest numbered class if different. */
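/* Illustration: if the last scheduled insn set a register, an insn
   reading that register falls in class 1, an insn merely rewriting
   it falls in class 2, and an unrelated insn (or one whose result is
   ready next cycle anyway) falls in class 3; the higher classes win
   the tie since issuing them is less likely to stall.  */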
2435 link = find_insn_list (tmp, LOG_LINKS (last_scheduled_insn));
2436 if (link == 0 || insn_cost (tmp, link, last_scheduled_insn) == 1)
2437 tmp_class = 3;
2438 else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
2439 tmp_class = 1;
2440 else
2441 tmp_class = 2;
2443 link = find_insn_list (tmp2, LOG_LINKS (last_scheduled_insn));
2444 if (link == 0 || insn_cost (tmp2, link, last_scheduled_insn) == 1)
2445 tmp2_class = 3;
2446 else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
2447 tmp2_class = 1;
2448 else
2449 tmp2_class = 2;
2451 if ((value = tmp_class - tmp2_class) != 0)
2452 return value;
2455 /* If insns are equally good, sort by INSN_LUID (original insn order),
2456 so that we make the sort stable. This minimizes instruction movement,
2457 thus minimizing sched's effect on debugging and cross-jumping. */
2458 return INSN_LUID (tmp) - INSN_LUID (tmp2);
2461 /* Resort the array A in which only element at index N may be out of order. */
2463 __inline static void
2464 swap_sort (a, n)
2465 rtx *a;
2466 int n;
2468 rtx insn = a[n-1];
2469 int i = n-2;
2471 while (i >= 0 && rank_for_schedule (a+i, &insn) >= 0)
2473 a[i+1] = a[i];
2474 i -= 1;
2476 a[i+1] = insn;
2479 static int max_priority;
2481 /* Add INSN to the insn queue so that it fires at least N_CYCLES
2482 before the currently executing insn. */
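/* insn_queue is a circular buffer of insn lists indexed by virtual
   cycle; NEXT_Q_AFTER wraps the index, and NEXT_INSN is borrowed as
   the link field while an insn waits on the queue.  */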
2484 __inline static void
2485 queue_insn (insn, n_cycles)
2486 rtx insn;
2487 int n_cycles;
2489 int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
2490 NEXT_INSN (insn) = insn_queue[next_q];
2491 insn_queue[next_q] = insn;
2492 q_size += 1;
2495 /* Return nonzero if PAT is the pattern of an insn which makes a
2496 register live. */
2498 __inline static int
2499 birthing_insn_p (pat)
2500 rtx pat;
2502 int j;
2504 if (reload_completed == 1)
2505 return 0;
2507 if (GET_CODE (pat) == SET
2508 && GET_CODE (SET_DEST (pat)) == REG)
2510 rtx dest = SET_DEST (pat);
2511 int i = REGNO (dest);
2513 /* It would be more accurate to use refers_to_regno_p or
2514 reg_mentioned_p to determine when the dest is not live before this
2515 insn. */
2517 if (REGNO_REG_SET_P (bb_live_regs, i))
2518 return (REG_N_SETS (i) == 1);
2520 return 0;
2522 if (GET_CODE (pat) == PARALLEL)
2524 for (j = 0; j < XVECLEN (pat, 0); j++)
2525 if (birthing_insn_p (XVECEXP (pat, 0, j)))
2526 return 1;
2528 return 0;
2531 /* PREV is an insn that is ready to execute. Adjust its priority if that
2532 will help shorten register lifetimes. */
2534 __inline static void
2535 adjust_priority (prev)
2536 rtx prev;
2538 /* Trying to shorten register lives after reload has completed
2539 is useless and wrong. It gives inaccurate schedules. */
2540 if (reload_completed == 0)
2542 rtx note;
2543 int n_deaths = 0;
2545 /* ??? This code has no effect, because REG_DEAD notes are removed
2546 before we ever get here. */
2547 for (note = REG_NOTES (prev); note; note = XEXP (note, 1))
2548 if (REG_NOTE_KIND (note) == REG_DEAD)
2549 n_deaths += 1;
2551 /* Defer scheduling insns which kill registers, since that
2552 shortens register lives. Prefer scheduling insns which
2553 make registers live for the same reason. */
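/* The shifts below scale priority down roughly with the number of
   registers killed: four or more deaths divide it by eight, three by
   four, one or two by two; an insn that makes a register live is
   instead promoted up to the highest priority seen on the ready list
   (max_priority).  */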
2554 switch (n_deaths)
2556 default:
2557 INSN_PRIORITY (prev) >>= 3;
2558 break;
2559 case 3:
2560 INSN_PRIORITY (prev) >>= 2;
2561 break;
2562 case 2:
2563 case 1:
2564 INSN_PRIORITY (prev) >>= 1;
2565 break;
2566 case 0:
2567 if (birthing_insn_p (PATTERN (prev)))
2569 int max = max_priority;
2571 if (max > INSN_PRIORITY (prev))
2572 INSN_PRIORITY (prev) = max;
2574 break;
2576 #ifdef ADJUST_PRIORITY
2577 ADJUST_PRIORITY (prev);
2578 #endif
2582 /* INSN is the "currently executing insn". Launch each insn which was
2583 waiting on INSN (in the backwards dataflow sense). READY is a
2584 vector of insns which are ready to fire. N_READY is the number of
2585 elements in READY. CLOCK is the current virtual cycle. */
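/* Each LOG_LINKS entry of INSN is a not-yet-scheduled predecessor in
   the backwards dataflow; scheduling INSN decrements the
   predecessor's reference count, and when that count hits zero the
   predecessor is either made ready (cost <= 1) or queued for enough
   cycles to cover the latency of the dependence.  */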
2587 static int
2588 schedule_insn (insn, ready, n_ready, clock)
2589 rtx insn;
2590 rtx *ready;
2591 int n_ready;
2592 int clock;
2594 rtx link;
2595 int new_ready = n_ready;
2597 if (MAX_BLOCKAGE > 1)
2598 schedule_unit (insn_unit (insn), insn, clock);
2600 if (LOG_LINKS (insn) == 0)
2601 return n_ready;
2603 /* This is used by the function adjust_priority above. */
2604 if (n_ready > 0)
2605 max_priority = MAX (INSN_PRIORITY (ready[0]), INSN_PRIORITY (insn));
2606 else
2607 max_priority = INSN_PRIORITY (insn);
2609 for (link = LOG_LINKS (insn); link != 0; link = XEXP (link, 1))
2611 rtx prev = XEXP (link, 0);
2612 int cost = insn_cost (prev, link, insn);
2614 if ((INSN_REF_COUNT (prev) -= 1) != 0)
2616 /* We satisfied one requirement to fire PREV. Record the earliest
2617 time when PREV can fire. No need to do this if the cost is 1,
2618 because PREV can fire no sooner than the next cycle. */
2619 if (cost > 1)
2620 INSN_TICK (prev) = MAX (INSN_TICK (prev), clock + cost);
2622 else
2624 /* We satisfied the last requirement to fire PREV. Ensure that all
2625 timing requirements are satisfied. */
2626 if (INSN_TICK (prev) - clock > cost)
2627 cost = INSN_TICK (prev) - clock;
2629 /* Adjust the priority of PREV and either put it on the ready
2630 list or queue it. */
2631 adjust_priority (prev);
2632 if (cost <= 1)
2633 ready[new_ready++] = prev;
2634 else
2635 queue_insn (prev, cost);
2639 return new_ready;
2642 /* Given N_READY insns in the ready list READY at time CLOCK, queue
2643 those that are blocked due to function unit hazards and rearrange
2644 the remaining ones to minimize subsequent function unit hazards. */
2646 static int
2647 schedule_select (ready, n_ready, clock, file)
2648 rtx *ready;
2649 int n_ready, clock;
2650 FILE *file;
2652 int pri = INSN_PRIORITY (ready[0]);
2653 int i, j, k, q, cost, best_cost, best_insn = 0, new_ready = n_ready;
2654 rtx insn;
2656 /* Work down the ready list in groups of instructions with the same
2657 priority value. Queue insns in the group that are blocked and
2658 select among those that remain for the one with the largest
2659 potential hazard. */
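/* For instance, with three equal-priority insns of which two would
   collide with a busy function unit, those two are queued for their
   blockage time and dropped from READY; if several survive, the one
   with the largest potential hazard is moved to the front so it is
   issued first.  */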
2660 for (i = 0; i < n_ready; i = j)
2662 int opri = pri;
2663 for (j = i + 1; j < n_ready; j++)
2664 if ((pri = INSN_PRIORITY (ready[j])) != opri)
2665 break;
2667 /* Queue insns in the group that are blocked. */
2668 for (k = i, q = 0; k < j; k++)
2670 insn = ready[k];
2671 if ((cost = actual_hazard (insn_unit (insn), insn, clock, 0)) != 0)
2673 q++;
2674 ready[k] = 0;
2675 queue_insn (insn, cost);
2676 if (file)
2677 fprintf (file, "\n;; blocking insn %d for %d cycles",
2678 INSN_UID (insn), cost);
2681 new_ready -= q;
2683 /* Check the next group if all insns were queued. */
2684 if (j - i - q == 0)
2685 continue;
2687 /* If more than one remains, select the first one with the largest
2688 potential hazard. */
2689 else if (j - i - q > 1)
2691 best_cost = -1;
2692 for (k = i; k < j; k++)
2694 if ((insn = ready[k]) == 0)
2695 continue;
2696 if ((cost = potential_hazard (insn_unit (insn), insn, 0))
2697 > best_cost)
2699 best_cost = cost;
2700 best_insn = k;
2704 /* We have found a suitable insn to schedule. */
2705 break;
2708 /* Move the best insn to the front of the ready list. */
2709 if (best_insn != 0)
2711 if (file)
2713 fprintf (file, ", now");
2714 for (i = 0; i < n_ready; i++)
2715 if (ready[i])
2716 fprintf (file, " %d", INSN_UID (ready[i]));
2717 fprintf (file, "\n;; insn %d has a greater potential hazard",
2718 INSN_UID (ready[best_insn]));
2720 for (i = best_insn; i > 0; i--)
2722 insn = ready[i-1];
2723 ready[i-1] = ready[i];
2724 ready[i] = insn;
2728 /* Compact the ready list. */
2729 if (new_ready < n_ready)
2730 for (i = j = 0; i < n_ready; i++)
2731 if (ready[i])
2732 ready[j++] = ready[i];
2734 return new_ready;
2737 /* Add a REG_DEAD note for REG to INSN, reusing a REG_DEAD note from the
2738 dead_notes list. */
2740 static void
2741 create_reg_dead_note (reg, insn)
2742 rtx reg, insn;
2744 rtx link;
2746 /* The number of registers killed after scheduling must be the same as the
2747 number of registers killed before scheduling. The number of REG_DEAD
2748 notes may not be conserved, i.e. two SImode hard register REG_DEAD notes
2749 might become one DImode hard register REG_DEAD note, but the number of
2750 registers killed will be conserved.
2752 We carefully remove REG_DEAD notes from the dead_notes list, so that
2753 there will be none left at the end. If we run out early, then there
2754 is a bug somewhere in flow, combine and/or sched. */
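/* Sketch of the accounting: killing a DImode hard register pair
   consumes two register-kills, so notes are taken off dead_notes
   until their register count reaches two; if the last note taken
   covers too many registers, dummy word_mode notes are pushed back
   so the overall count of kills stays conserved.  */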
2756 if (dead_notes == 0)
2758 #if 1
2759 abort ();
2760 #else
2761 link = rtx_alloc (EXPR_LIST);
2762 PUT_REG_NOTE_KIND (link, REG_DEAD);
2763 #endif
2765 else
2767 /* Number of regs killed by REG. */
2768 int regs_killed = (REGNO (reg) >= FIRST_PSEUDO_REGISTER ? 1
2769 : HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg)));
2770 /* Number of regs killed by REG_DEAD notes taken off the list. */
2771 int reg_note_regs;
2773 link = dead_notes;
2774 reg_note_regs = (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
2775 : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
2776 GET_MODE (XEXP (link, 0))));
2777 while (reg_note_regs < regs_killed)
2779 link = XEXP (link, 1);
2780 reg_note_regs += (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
2781 : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
2782 GET_MODE (XEXP (link, 0))));
2784 dead_notes = XEXP (link, 1);
2786 /* If we took too many reg kills off, put the extra ones back. */
2787 while (reg_note_regs > regs_killed)
2789 rtx temp_reg, temp_link;
2791 temp_reg = gen_rtx (REG, word_mode, 0);
2792 temp_link = rtx_alloc (EXPR_LIST);
2793 PUT_REG_NOTE_KIND (temp_link, REG_DEAD);
2794 XEXP (temp_link, 0) = temp_reg;
2795 XEXP (temp_link, 1) = dead_notes;
2796 dead_notes = temp_link;
2797 reg_note_regs--;
2801 XEXP (link, 0) = reg;
2802 XEXP (link, 1) = REG_NOTES (insn);
2803 REG_NOTES (insn) = link;
2806 /* Subroutine of attach_deaths_insn--handles the recursive search
2807 through INSN. If SET_P is true, then X is being modified by the insn. */
2809 static void
2810 attach_deaths (x, insn, set_p)
2811 rtx x;
2812 rtx insn;
2813 int set_p;
2815 register int i;
2816 register int j;
2817 register enum rtx_code code;
2818 register char *fmt;
2820 if (x == 0)
2821 return;
2823 code = GET_CODE (x);
2825 switch (code)
2827 case CONST_INT:
2828 case CONST_DOUBLE:
2829 case LABEL_REF:
2830 case SYMBOL_REF:
2831 case CONST:
2832 case CODE_LABEL:
2833 case PC:
2834 case CC0:
2835 /* Get rid of the easy cases first. */
2836 return;
2838 case REG:
2840 /* If the register dies in this insn, queue that note, and mark
2841 this register as needing to die. */
2842 /* This code is very similar to mark_used_1 (if set_p is false)
2843 and mark_set_1 (if set_p is true) in flow.c. */
2845 register int regno;
2846 int some_needed;
2847 int all_needed;
2849 if (set_p)
2850 return;
2852 regno = REGNO (x);
2853 all_needed = some_needed = REGNO_REG_SET_P (old_live_regs, regno);
2854 if (regno < FIRST_PSEUDO_REGISTER)
2856 int n;
2858 n = HARD_REGNO_NREGS (regno, GET_MODE (x));
2859 while (--n > 0)
2861 int needed = (REGNO_REG_SET_P (old_live_regs, regno + n));
2862 some_needed |= needed;
2863 all_needed &= needed;
2867 /* If it wasn't live before we started, then add a REG_DEAD note.
2868 We must check the previous lifetime info not the current info,
2869 because we may have to execute this code several times, e.g.
2870 once for a clobber (which doesn't add a note) and later
2871 for a use (which does add a note).
2873 Always make the register live. We must do this even if it was
2874 live before, because this may be an insn which sets and uses
2875 the same register, in which case the register has already been
2876 killed, so we must make it live again.
2878 Global registers are always live, and should never have a REG_DEAD
2879 note added for them, so none of the code below applies to them. */
2881 if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
2883 /* Never add REG_DEAD notes for the FRAME_POINTER_REGNUM or the
2884 STACK_POINTER_REGNUM, since these are always considered to be
2885 live. Similarly for ARG_POINTER_REGNUM if it is fixed. */
2886 if (regno != FRAME_POINTER_REGNUM
2887 #if HARD_FRAME_POINTER_REGNUM != FRAME_POINTER_REGNUM
2888 && ! (regno == HARD_FRAME_POINTER_REGNUM)
2889 #endif
2890 #if ARG_POINTER_REGNUM != FRAME_POINTER_REGNUM
2891 && ! (regno == ARG_POINTER_REGNUM && fixed_regs[regno])
2892 #endif
2893 && regno != STACK_POINTER_REGNUM)
2895 if (! all_needed && ! dead_or_set_p (insn, x))
2897 /* Check for the case where the register dying partially
2898 overlaps the register set by this insn. */
2899 if (regno < FIRST_PSEUDO_REGISTER
2900 && HARD_REGNO_NREGS (regno, GET_MODE (x)) > 1)
2902 int n = HARD_REGNO_NREGS (regno, GET_MODE (x));
2903 while (--n >= 0)
2904 some_needed |= dead_or_set_regno_p (insn, regno + n);
2907 /* If none of the words in X is needed, make a REG_DEAD
2908 note. Otherwise, we must make partial REG_DEAD
2909 notes. */
2910 if (! some_needed)
2911 create_reg_dead_note (x, insn);
2912 else
2914 int i;
2916 /* Don't make a REG_DEAD note for a part of a
2917 register that is set in the insn. */
2918 for (i = HARD_REGNO_NREGS (regno, GET_MODE (x)) - 1;
2919 i >= 0; i--)
2920 if (! REGNO_REG_SET_P (old_live_regs, regno + i)
2921 && ! dead_or_set_regno_p (insn, regno + i))
2922 create_reg_dead_note (gen_rtx (REG,
2923 reg_raw_mode[regno + i],
2924 regno + i),
2925 insn);
2930 if (regno < FIRST_PSEUDO_REGISTER)
2932 int j = HARD_REGNO_NREGS (regno, GET_MODE (x));
2933 while (--j >= 0)
2935 CLEAR_REGNO_REG_SET (bb_dead_regs, regno + j);
2936 SET_REGNO_REG_SET (bb_live_regs, regno + j);
2939 else
2941 CLEAR_REGNO_REG_SET (bb_dead_regs, regno);
2942 SET_REGNO_REG_SET (bb_live_regs, regno);
2945 return;
2948 case MEM:
2949 /* Handle tail-recursive case. */
2950 attach_deaths (XEXP (x, 0), insn, 0);
2951 return;
2953 case SUBREG:
2954 attach_deaths (SUBREG_REG (x), insn,
2955 set_p && ((GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)))
2956 <= UNITS_PER_WORD)
2957 || (GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)))
2958 == GET_MODE_SIZE (GET_MODE (x)))));
2959 return;
2961 case STRICT_LOW_PART:
2962 attach_deaths (XEXP (x, 0), insn, 0);
2963 return;
2965 case ZERO_EXTRACT:
2966 case SIGN_EXTRACT:
2967 attach_deaths (XEXP (x, 0), insn, 0);
2968 attach_deaths (XEXP (x, 1), insn, 0);
2969 attach_deaths (XEXP (x, 2), insn, 0);
2970 return;
2972 default:
2973 /* Other cases: walk the insn. */
2974 fmt = GET_RTX_FORMAT (code);
2975 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
2977 if (fmt[i] == 'e')
2978 attach_deaths (XEXP (x, i), insn, 0);
2979 else if (fmt[i] == 'E')
2980 for (j = 0; j < XVECLEN (x, i); j++)
2981 attach_deaths (XVECEXP (x, i, j), insn, 0);
2986 /* After INSN has executed, add register death notes for each register
2987 that is dead after INSN. */
2989 static void
2990 attach_deaths_insn (insn)
2991 rtx insn;
2993 rtx x = PATTERN (insn);
2994 register RTX_CODE code = GET_CODE (x);
2995 rtx link;
2997 if (code == SET)
2999 attach_deaths (SET_SRC (x), insn, 0);
3001 /* A register might die here even if it is the destination, e.g.
3002 it is the target of a volatile read and is otherwise unused.
3003 Hence we must always call attach_deaths for the SET_DEST. */
3004 attach_deaths (SET_DEST (x), insn, 1);
3006 else if (code == PARALLEL)
3008 register int i;
3009 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
3011 code = GET_CODE (XVECEXP (x, 0, i));
3012 if (code == SET)
3014 attach_deaths (SET_SRC (XVECEXP (x, 0, i)), insn, 0);
3016 attach_deaths (SET_DEST (XVECEXP (x, 0, i)), insn, 1);
3018 /* Flow does not add REG_DEAD notes to registers that die in
3019 clobbers, so we can't either. */
3020 else if (code != CLOBBER)
3021 attach_deaths (XVECEXP (x, 0, i), insn, 0);
3024 /* If this is a CLOBBER, only add REG_DEAD notes to registers inside a
3025 MEM being clobbered, just like flow. */
3026 else if (code == CLOBBER && GET_CODE (XEXP (x, 0)) == MEM)
3027 attach_deaths (XEXP (XEXP (x, 0), 0), insn, 0);
3028 /* Otherwise don't add a death note to things being clobbered. */
3029 else if (code != CLOBBER)
3030 attach_deaths (x, insn, 0);
3032 /* Make death notes for things used in the called function. */
3033 if (GET_CODE (insn) == CALL_INSN)
3034 for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
3035 attach_deaths (XEXP (XEXP (link, 0), 0), insn,
3036 GET_CODE (XEXP (link, 0)) == CLOBBER);
3039 /* Delete notes beginning with INSN and maybe put them in the chain
3040 of notes ended by NOTE_LIST.
3041 Returns the insn following the notes. */
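/* When debug info is wanted, line-number notes are remembered in
   LINE_NOTE for later re-emission near their insns; setjmp, loop and
   eh-region notes are dropped entirely here, because sched_analyze
   already encoded them as fake REG_DEAD pairs that reemit_notes will
   turn back into notes.  */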
3043 static rtx
3044 unlink_notes (insn, tail)
3045 rtx insn, tail;
3047 rtx prev = PREV_INSN (insn);
3049 while (insn != tail && GET_CODE (insn) == NOTE)
3051 rtx next = NEXT_INSN (insn);
3052 /* Delete the note from its current position. */
3053 if (prev)
3054 NEXT_INSN (prev) = next;
3055 if (next)
3056 PREV_INSN (next) = prev;
3058 if (write_symbols != NO_DEBUG && NOTE_LINE_NUMBER (insn) > 0)
3059 /* Record line-number notes so they can be reused. */
3060 LINE_NOTE (insn) = insn;
3062 /* Don't save away NOTE_INSN_SETJMPs, because they must remain
3063 immediately after the call they follow. We use fake
3064 (REG_DEAD (const_int)) note pairs to remember them.
3065 Likewise with NOTE_INSN_{LOOP,EHREGION}_{BEG,END}. */
3066 else if (NOTE_LINE_NUMBER (insn) != NOTE_INSN_SETJMP
3067 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_BEG
3068 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_END
3069 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_BEG
3070 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_END)
3072 /* Insert the note at the end of the notes list. */
3073 PREV_INSN (insn) = note_list;
3074 if (note_list)
3075 NEXT_INSN (note_list) = insn;
3076 note_list = insn;
3079 insn = next;
3081 return insn;
3084 /* Constructor for `sometimes' data structure. */
3086 static int
3087 new_sometimes_live (regs_sometimes_live, regno, sometimes_max)
3088 struct sometimes *regs_sometimes_live;
3089 int regno;
3090 int sometimes_max;
3092 register struct sometimes *p;
3094 /* There should never be a register greater than max_regno here. If there
3095 is, it means that a define_split has created a new pseudo reg. This
3096 is not allowed, since there will not be flow info available for any
3097 new register, so catch the error here. */
3098 if (regno >= max_regno)
3099 abort ();
3101 p = &regs_sometimes_live[sometimes_max];
3102 p->regno = regno;
3103 p->live_length = 0;
3104 p->calls_crossed = 0;
3105 sometimes_max++;
3106 return sometimes_max;
3109 /* Count lengths of all regs we are currently tracking,
3110 and find new registers no longer live. */
3112 static void
3113 finish_sometimes_live (regs_sometimes_live, sometimes_max)
3114 struct sometimes *regs_sometimes_live;
3115 int sometimes_max;
3117 int i;
3119 for (i = 0; i < sometimes_max; i++)
3121 register struct sometimes *p = &regs_sometimes_live[i];
3122 int regno = p->regno;
3124 sched_reg_live_length[regno] += p->live_length;
3125 sched_reg_n_calls_crossed[regno] += p->calls_crossed;
3129 /* Search INSN for fake REG_DEAD note pairs for NOTE_INSN_SETJMP,
3130 NOTE_INSN_{LOOP,EHREGION}_{BEG,END}; and convert them back into
3131 NOTEs. The REG_DEAD note following the first one contains the saved
3132 value for NOTE_BLOCK_NUMBER, which is useful for
3133 NOTE_INSN_EH_REGION_{BEG,END} NOTEs. LAST is the last instruction
3134 output by the instruction scheduler. Return the new value of LAST. */
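/* Roughly, each fake pair has the shape

        (expr_list:REG_DEAD (const_int <note line number>)
           (expr_list:REG_DEAD (const_int <block number>) ...))

   where the first constant selects which NOTE to recreate and the
   second restores its NOTE_BLOCK_NUMBER (a SETJMP pair carries 0
   there instead).  */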
3136 static rtx
3137 reemit_notes (insn, last)
3138 rtx insn;
3139 rtx last;
3141 rtx note;
3143 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
3145 if (REG_NOTE_KIND (note) == REG_DEAD
3146 && GET_CODE (XEXP (note, 0)) == CONST_INT)
3148 if (INTVAL (XEXP (note, 0)) == NOTE_INSN_SETJMP)
3150 CONST_CALL_P (emit_note_after (INTVAL (XEXP (note, 0)), insn))
3151 = CONST_CALL_P (note);
3152 remove_note (insn, note);
3153 note = XEXP (note, 1);
3155 else
3157 last = emit_note_before (INTVAL (XEXP (note, 0)), last);
3158 remove_note (insn, note);
3159 note = XEXP (note, 1);
3160 NOTE_BLOCK_NUMBER (last) = INTVAL (XEXP (note, 0));
3162 remove_note (insn, note);
3165 return last;
3168 /* Use modified list scheduling to rearrange insns in basic block
3169 B. FILE, if nonzero, is where we dump interesting output about
3170 this pass. */
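/* Overview: the block is scheduled in phases -- build the dependence
   graph with sched_analyze, seed the ready list from the insns that
   must stay at the tail, strip notes and REG_DEAD notes while
   recomputing life information, run the clock-driven loop below, and
   finally splice the new order back into the insn chain.  */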
3172 static void
3173 schedule_block (b, file)
3174 int b;
3175 FILE *file;
3177 rtx insn, last;
3178 rtx *ready, link;
3179 int i, j, n_ready = 0, new_ready, n_insns;
3180 int sched_n_insns = 0;
3181 int clock;
3182 #define NEED_NOTHING 0
3183 #define NEED_HEAD 1
3184 #define NEED_TAIL 2
3185 int new_needs;
3187 /* HEAD and TAIL delimit the region being scheduled. */
3188 rtx head = basic_block_head[b];
3189 rtx tail = basic_block_end[b];
3190 /* PREV_HEAD and NEXT_TAIL are the boundaries of the insns
3191 being scheduled. When the insns have been ordered,
3192 these insns delimit where the new insns are to be
3193 spliced back into the insn chain. */
3194 rtx next_tail;
3195 rtx prev_head;
3197 /* Keep life information accurate. */
3198 register struct sometimes *regs_sometimes_live;
3199 int sometimes_max;
3201 if (file)
3202 fprintf (file, ";;\t -- basic block number %d from %d to %d --\n",
3203 b, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));
3205 i = max_reg_num ();
3206 reg_last_uses = (rtx *) alloca (i * sizeof (rtx));
3207 bzero ((char *) reg_last_uses, i * sizeof (rtx));
3208 reg_last_sets = (rtx *) alloca (i * sizeof (rtx));
3209 bzero ((char *) reg_last_sets, i * sizeof (rtx));
3210 reg_pending_sets = ALLOCA_REG_SET ();
3211 CLEAR_REG_SET (reg_pending_sets);
3212 reg_pending_sets_all = 0;
3213 clear_units ();
3215 /* Remove certain insns at the beginning from scheduling,
3216 by advancing HEAD. */
3218 /* At the start of a function, before reload has run, don't delay getting
3219 parameters from hard registers into pseudo registers. */
3220 if (reload_completed == 0 && b == 0)
3222 while (head != tail
3223 && GET_CODE (head) == NOTE
3224 && NOTE_LINE_NUMBER (head) != NOTE_INSN_FUNCTION_BEG)
3225 head = NEXT_INSN (head);
3226 while (head != tail
3227 && GET_CODE (head) == INSN
3228 && GET_CODE (PATTERN (head)) == SET)
3230 rtx src = SET_SRC (PATTERN (head));
3231 while (GET_CODE (src) == SUBREG
3232 || GET_CODE (src) == SIGN_EXTEND
3233 || GET_CODE (src) == ZERO_EXTEND
3234 || GET_CODE (src) == SIGN_EXTRACT
3235 || GET_CODE (src) == ZERO_EXTRACT)
3236 src = XEXP (src, 0);
3237 if (GET_CODE (src) != REG
3238 || REGNO (src) >= FIRST_PSEUDO_REGISTER)
3239 break;
3240 /* Keep this insn from ever being scheduled. */
3241 INSN_REF_COUNT (head) = 1;
3242 head = NEXT_INSN (head);
3246 /* Don't include any notes or labels at the beginning of the
3247 basic block, or notes at the ends of basic blocks. */
3248 while (head != tail)
3250 if (GET_CODE (head) == NOTE)
3251 head = NEXT_INSN (head);
3252 else if (GET_CODE (tail) == NOTE)
3253 tail = PREV_INSN (tail);
3254 else if (GET_CODE (head) == CODE_LABEL)
3255 head = NEXT_INSN (head);
3256 else break;
3258 /* If the only insn left is a NOTE or a CODE_LABEL, then there is no need
3259 to schedule this block. */
3260 if (head == tail
3261 && (GET_CODE (head) == NOTE || GET_CODE (head) == CODE_LABEL))
3262 goto ret;
3264 #if 0
3265 /* This short-cut doesn't work. It does not count call insns crossed by
3266 registers in reg_sometimes_live. It does not mark these registers as
3267 dead if they die in this block. It does not mark these registers live
3268 (or create new reg_sometimes_live entries if necessary) if they are born
3269 in this block.
3271 The easy solution is to just always schedule a block. This block only
3272 has one insn, so this won't slow down this pass by much. */
3274 if (head == tail)
3275 goto ret;
3276 #endif
3278 /* Now HEAD through TAIL are the insns actually to be rearranged;
3279 Let PREV_HEAD and NEXT_TAIL enclose them. */
3280 prev_head = PREV_INSN (head);
3281 next_tail = NEXT_INSN (tail);
3283 /* Initialize basic block data structures. */
3284 dead_notes = 0;
3285 pending_read_insns = 0;
3286 pending_read_mems = 0;
3287 pending_write_insns = 0;
3288 pending_write_mems = 0;
3289 pending_lists_length = 0;
3290 last_pending_memory_flush = 0;
3291 last_function_call = 0;
3292 last_scheduled_insn = 0;
3294 LOG_LINKS (sched_before_next_call) = 0;
3296 n_insns = sched_analyze (head, tail);
3297 if (n_insns == 0)
3299 free_pending_lists ();
3300 goto ret;
3303 /* Allocate vector to hold insns to be rearranged (except those
3304 insns which are controlled by an insn with SCHED_GROUP_P set).
3305 All these insns are included between ORIG_HEAD and ORIG_TAIL,
3306 as those variables ultimately are set up. */
3307 ready = (rtx *) alloca ((n_insns+1) * sizeof (rtx));
3309 /* TAIL is now the last of the insns to be rearranged.
3310 Put those insns into the READY vector. */
3311 insn = tail;
3313 /* For all branches, calls, uses, and cc0 setters, force them to remain
3314 in order at the end of the block by adding dependencies and giving
3315 the last a high priority. There may be notes present, and prev_head
3316 may also be a note.
3318 Branches must obviously remain at the end. Calls should remain at the
3319 end since moving them results in worse register allocation. Uses remain
3320 at the end to ensure proper register allocation. cc0 setters remain
3321 at the end because they can't be moved away from their cc0 user. */
3322 last = 0;
3323 while (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN
3324 || (GET_CODE (insn) == INSN
3325 && (GET_CODE (PATTERN (insn)) == USE
3326 #ifdef HAVE_cc0
3327 || sets_cc0_p (PATTERN (insn))
3328 #endif
3330 || GET_CODE (insn) == NOTE)
3332 if (GET_CODE (insn) != NOTE)
3334 priority (insn);
3335 if (last == 0)
3337 ready[n_ready++] = insn;
3338 INSN_PRIORITY (insn) = TAIL_PRIORITY - i;
3339 INSN_REF_COUNT (insn) = 0;
3341 else if (! find_insn_list (insn, LOG_LINKS (last)))
3343 add_dependence (last, insn, REG_DEP_ANTI);
3344 INSN_REF_COUNT (insn)++;
3346 last = insn;
3348 /* Skip over insns that are part of a group. */
3349 while (SCHED_GROUP_P (insn))
3351 insn = prev_nonnote_insn (insn);
3352 priority (insn);
3356 insn = PREV_INSN (insn);
3357 /* Don't overrun the bounds of the basic block. */
3358 if (insn == prev_head)
3359 break;
3362 /* Assign priorities to instructions. Also check whether they
3363 are in priority order already. If so then I will be nonnegative.
3364 We use this shortcut only before reloading. */
3365 #if 0
3366 i = reload_completed ? DONE_PRIORITY : MAX_PRIORITY;
3367 #endif
3369 for (; insn != prev_head; insn = PREV_INSN (insn))
3371 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
3373 priority (insn);
3374 if (INSN_REF_COUNT (insn) == 0)
3376 if (last == 0)
3377 ready[n_ready++] = insn;
3378 else
3380 /* Make this dependent on the last of the instructions
3381 that must remain in order at the end of the block. */
3382 add_dependence (last, insn, REG_DEP_ANTI);
3383 INSN_REF_COUNT (insn) = 1;
3386 if (SCHED_GROUP_P (insn))
3388 while (SCHED_GROUP_P (insn))
3390 insn = prev_nonnote_insn (insn);
3391 priority (insn);
3393 continue;
3395 #if 0
3396 if (i < 0)
3397 continue;
3398 if (INSN_PRIORITY (insn) < i)
3399 i = INSN_PRIORITY (insn);
3400 else if (INSN_PRIORITY (insn) > i)
3401 i = DONE_PRIORITY;
3402 #endif
3406 #if 0
3407 /* This short-cut doesn't work. It does not count call insns crossed by
3408 registers in reg_sometimes_live. It does not mark these registers as
3409 dead if they die in this block. It does not mark these registers live
3410 (or create new reg_sometimes_live entries if necessary) if they are born
3411 in this block.
3413 The easy solution is to just always schedule a block. These blocks tend
3414 to be very short, so this doesn't slow down this pass by much. */
3416 /* If existing order is good, don't bother to reorder. */
3417 if (i != DONE_PRIORITY)
3419 if (file)
3420 fprintf (file, ";; already scheduled\n");
3422 if (reload_completed == 0)
3424 for (i = 0; i < sometimes_max; i++)
3425 regs_sometimes_live[i].live_length += n_insns;
3427 finish_sometimes_live (regs_sometimes_live, sometimes_max);
3429 free_pending_lists ();
3430 goto ret;
3432 #endif
3434 /* Scan all the insns to be scheduled, removing NOTE insns
3435 and register death notes.
3436 Line number NOTE insns end up in NOTE_LIST.
3437 Register death notes end up in DEAD_NOTES.
3439 Recreate the register life information for the end of this basic
3440 block. */
3442 if (reload_completed == 0)
3444 COPY_REG_SET (bb_live_regs, basic_block_live_at_start[b]);
3445 CLEAR_REG_SET (bb_dead_regs);
3447 if (b == 0)
3449 /* This is the first block in the function. There may be insns
3450 before head that we can't schedule. We still need to examine
3451 them though for accurate register lifetime analysis. */
3453 /* We don't want to remove any REG_DEAD notes as the code below
3454 does. */
3456 for (insn = basic_block_head[b]; insn != head;
3457 insn = NEXT_INSN (insn))
3458 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
3460 /* See if the register gets born here. */
3461 /* We must check for registers being born before we check for
3462 registers dying. It is possible for a register to be born
3463 and die in the same insn, e.g. reading from a volatile
3464 memory location into an otherwise unused register. Such
3465 a register must be marked as dead after this insn. */
3466 if (GET_CODE (PATTERN (insn)) == SET
3467 || GET_CODE (PATTERN (insn)) == CLOBBER)
3468 sched_note_set (b, PATTERN (insn), 0);
3469 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3471 int j;
3472 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3473 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3474 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3475 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3477 /* ??? This code is obsolete and should be deleted. It
3478 is harmless though, so we will leave it in for now. */
3479 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3480 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
3481 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3484 /* Each call clobbers (makes live) all call-clobbered regs
3485 that are not global or fixed. Note that the function-value
3486 reg is a call_clobbered reg. */
3488 if (GET_CODE (insn) == CALL_INSN)
3490 int j;
3491 for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
3492 if (call_used_regs[j] && ! global_regs[j]
3493 && ! fixed_regs[j])
3495 SET_REGNO_REG_SET (bb_live_regs, j);
3496 CLEAR_REGNO_REG_SET (bb_dead_regs, j);
3500 for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
3502 if ((REG_NOTE_KIND (link) == REG_DEAD
3503 || REG_NOTE_KIND (link) == REG_UNUSED)
3504 /* Verify that the REG_NOTE has a valid value. */
3505 && GET_CODE (XEXP (link, 0)) == REG)
3507 register int regno = REGNO (XEXP (link, 0));
3509 if (regno < FIRST_PSEUDO_REGISTER)
3511 int j = HARD_REGNO_NREGS (regno,
3512 GET_MODE (XEXP (link, 0)));
3513 while (--j >= 0)
3515 CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
3516 SET_REGNO_REG_SET (bb_dead_regs, regno + j);
3519 else
3521 CLEAR_REGNO_REG_SET (bb_live_regs, regno);
3522 SET_REGNO_REG_SET (bb_dead_regs, regno);
3530 /* If debugging information is being produced, keep track of the line
3531 number notes for each insn. */
3532 if (write_symbols != NO_DEBUG)
3534 /* We must use the true line number for the first insn in the block
3535 that was computed and saved at the start of this pass. We can't
3536 use the current line number, because scheduling of the previous
3537 block may have changed the current line number. */
3538 rtx line = line_note_head[b];
3540 for (insn = basic_block_head[b];
3541 insn != next_tail;
3542 insn = NEXT_INSN (insn))
3543 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
3544 line = insn;
3545 else
3546 LINE_NOTE (insn) = line;
3549 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3551 rtx prev, next, link;
3553 /* Farm out notes. This is needed to keep the debugger from
3554 getting completely deranged. */
3555 if (GET_CODE (insn) == NOTE)
3557 prev = insn;
3558 insn = unlink_notes (insn, next_tail);
3559 if (prev == tail)
3560 abort ();
3561 if (prev == head)
3562 abort ();
3563 if (insn == next_tail)
3564 abort ();
3567 if (reload_completed == 0
3568 && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
3570 /* See if the register gets born here. */
3571 /* We must check for registers being born before we check for
3572 registers dying. It is possible for a register to be born and
3573 die in the same insn, e.g. reading from a volatile memory
3574 location into an otherwise unused register. Such a register
3575 must be marked as dead after this insn. */
3576 if (GET_CODE (PATTERN (insn)) == SET
3577 || GET_CODE (PATTERN (insn)) == CLOBBER)
3578 sched_note_set (b, PATTERN (insn), 0);
3579 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3581 int j;
3582 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3583 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3584 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3585 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3587 /* ??? This code is obsolete and should be deleted. It
3588 is harmless though, so we will leave it in for now. */
3589 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3590 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
3591 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3594 /* Each call clobbers (makes live) all call-clobbered regs that are
3595 not global or fixed. Note that the function-value reg is a
3596 call_clobbered reg. */
3598 if (GET_CODE (insn) == CALL_INSN)
3600 int j;
3601 for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
3602 if (call_used_regs[j] && ! global_regs[j]
3603 && ! fixed_regs[j])
3605 SET_REGNO_REG_SET (bb_live_regs, j);
3606 CLEAR_REGNO_REG_SET (bb_dead_regs, j);
3610 /* Need to know what registers this insn kills. */
3611 for (prev = 0, link = REG_NOTES (insn); link; link = next)
3613 next = XEXP (link, 1);
3614 if ((REG_NOTE_KIND (link) == REG_DEAD
3615 || REG_NOTE_KIND (link) == REG_UNUSED)
3616 /* Verify that the REG_NOTE has a valid value. */
3617 && GET_CODE (XEXP (link, 0)) == REG)
3619 register int regno = REGNO (XEXP (link, 0));
3621 /* Only unlink REG_DEAD notes; leave REG_UNUSED notes
3622 alone. */
3623 if (REG_NOTE_KIND (link) == REG_DEAD)
3625 if (prev)
3626 XEXP (prev, 1) = next;
3627 else
3628 REG_NOTES (insn) = next;
3629 XEXP (link, 1) = dead_notes;
3630 dead_notes = link;
3632 else
3633 prev = link;
3635 if (regno < FIRST_PSEUDO_REGISTER)
3637 int j = HARD_REGNO_NREGS (regno,
3638 GET_MODE (XEXP (link, 0)));
3639 while (--j >= 0)
3641 CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
3642 SET_REGNO_REG_SET (bb_dead_regs, regno + j);
3645 else
3647 CLEAR_REGNO_REG_SET (bb_live_regs, regno);
3648 SET_REGNO_REG_SET (bb_dead_regs, regno);
3651 else
3652 prev = link;
3657 if (reload_completed == 0)
3659 /* Keep track of register lives. */
3660 old_live_regs = ALLOCA_REG_SET ();
3661 regs_sometimes_live
3662 = (struct sometimes *) alloca (max_regno * sizeof (struct sometimes));
3663 sometimes_max = 0;
3665 /* Start with registers live at end. */
3666 COPY_REG_SET (old_live_regs, bb_live_regs);
3667 EXECUTE_IF_SET_IN_REG_SET (bb_live_regs, 0, j,
3669 sometimes_max
3670 = new_sometimes_live (regs_sometimes_live,
3671 j, sometimes_max);
3675 SCHED_SORT (ready, n_ready, 1);
3677 if (file)
3679 fprintf (file, ";; ready list initially:\n;; ");
3680 for (i = 0; i < n_ready; i++)
3681 fprintf (file, "%d ", INSN_UID (ready[i]));
3682 fprintf (file, "\n\n");
3684 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3685 if (INSN_PRIORITY (insn) > 0)
3686 fprintf (file, ";; insn[%4d]: priority = %4d, ref_count = %4d\n",
3687 INSN_UID (insn), INSN_PRIORITY (insn),
3688 INSN_REF_COUNT (insn));
3691 /* Now HEAD and TAIL are going to become disconnected
3692 entirely from the insn chain. */
3693 tail = 0;
3695 /* Q_SIZE will always be zero here. */
3696 q_ptr = 0; clock = 0;
3697 bzero ((char *) insn_queue, sizeof (insn_queue));
3699 /* Now, perform list scheduling. */
3701 /* Where we start inserting insns is after TAIL. */
3702 last = next_tail;
3704 new_needs = (NEXT_INSN (prev_head) == basic_block_head[b]
3705 ? NEED_HEAD : NEED_NOTHING);
3706 if (PREV_INSN (next_tail) == basic_block_end[b])
3707 new_needs |= NEED_TAIL;
3709 new_ready = n_ready;
3710 while (sched_n_insns < n_insns)
3712 q_ptr = NEXT_Q (q_ptr); clock++;
3714 /* Add all pending insns that can be scheduled without stalls to the
3715 ready list. */
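/* Pending insns are chained through NEXT_INSN into INSN_QUEUE, a
   circular buffer indexed by Q_PTR and stepped with NEXT_Q and
   NEXT_Q_AFTER. A hedged sketch of the enqueue side that this loop
   drains (N_CYCLES is a hypothetical stall count): */
#if 0
/* Delay INSN for N_CYCLES ticks by chaining it into the slot the
   clock will reach N_CYCLES from now; sketch only. */
NEXT_INSN (insn) = insn_queue[NEXT_Q_AFTER (q_ptr, n_cycles)];
insn_queue[NEXT_Q_AFTER (q_ptr, n_cycles)] = insn;
q_size += 1;
#endif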
3716 for (insn = insn_queue[q_ptr]; insn; insn = NEXT_INSN (insn))
3718 if (file)
3719 fprintf (file, ";; launching %d before %d with no stalls at T-%d\n",
3720 INSN_UID (insn), INSN_UID (last), clock);
3721 ready[new_ready++] = insn;
3722 q_size -= 1;
3724 insn_queue[q_ptr] = 0;
3726 /* If there are no ready insns, stall until one is ready and add all
3727 of the pending insns at that point to the ready list. */
3728 if (new_ready == 0)
3730 register int stalls;
3732 for (stalls = 1; stalls < INSN_QUEUE_SIZE; stalls++)
3733 if ((insn = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]) != 0)
3735 for (; insn; insn = NEXT_INSN (insn))
3737 if (file)
3738 fprintf (file, ";; launching %d before %d with %d stalls at T-%d\n",
3739 INSN_UID (insn), INSN_UID (last), stalls, clock);
3740 ready[new_ready++] = insn;
3741 q_size -= 1;
3743 insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = 0;
3744 break;
3747 q_ptr = NEXT_Q_AFTER (q_ptr, stalls); clock += stalls;
3750 /* There should be some instructions waiting to fire. */
3751 if (new_ready == 0)
3752 abort ();
3754 if (file)
3756 fprintf (file, ";; ready list at T-%d:", clock);
3757 for (i = 0; i < new_ready; i++)
3758 fprintf (file, " %d (%x)",
3759 INSN_UID (ready[i]), INSN_PRIORITY (ready[i]));
3762 /* Sort the ready list and choose the best insn to schedule. Select
3763 which insn should issue in this cycle and queue those that are
3764 blocked by function unit hazards.
3766 N_READY holds the number of items that were scheduled the last time,
3767 minus the one instruction scheduled on the last loop iteration; it
3768 is not modified for any other reason in this loop. */
3770 SCHED_SORT (ready, new_ready, n_ready);
3771 if (MAX_BLOCKAGE > 1)
3773 new_ready = schedule_select (ready, new_ready, clock, file);
3774 if (new_ready == 0)
3776 if (file)
3777 fprintf (file, "\n");
3778 /* We must set n_ready here, to ensure that sorting always
3779 occurs when we come back to the SCHED_SORT line above. */
3780 n_ready = 0;
3781 continue;
3784 n_ready = new_ready;
3785 last_scheduled_insn = insn = ready[0];
3787 /* The first insn scheduled becomes the new tail. */
3788 if (tail == 0)
3789 tail = insn;
3791 if (file)
3793 fprintf (file, ", now");
3794 for (i = 0; i < n_ready; i++)
3795 fprintf (file, " %d", INSN_UID (ready[i]));
3796 fprintf (file, "\n");
3799 if (DONE_PRIORITY_P (insn))
3800 abort ();
3802 if (reload_completed == 0)
3804 /* Process this insn, and each insn linked to this one which must
3805 be immediately output after this insn. */
3808 /* First we kill registers set by this insn, and then we
3809 make registers used by this insn live. This is the opposite
3810 order used above because we are traversing the instructions
3811 backwards. */
3813 /* Strictly speaking, we should scan REG_UNUSED notes and make
3814 every register mentioned there live; however, we would just
3815 kill them again immediately below, so there seems to be no
3816 reason to bother. */
3818 /* See if this is the last notice we must take of a register. */
3819 if (GET_CODE (PATTERN (insn)) == SET
3820 || GET_CODE (PATTERN (insn)) == CLOBBER)
3821 sched_note_set (b, PATTERN (insn), 1);
3822 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3824 int j;
3825 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3826 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3827 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3828 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 1);
3831 /* This code keeps life analysis information up to date. */
3832 if (GET_CODE (insn) == CALL_INSN)
3834 register struct sometimes *p;
3836 /* A call kills all call used registers that are not
3837 global or fixed, except for those mentioned in the call
3838 pattern which will be made live again later. */
3839 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
3840 if (call_used_regs[i] && ! global_regs[i]
3841 && ! fixed_regs[i])
3843 CLEAR_REGNO_REG_SET (bb_live_regs, i);
3844 SET_REGNO_REG_SET (bb_dead_regs, i);
3847 /* Regs live at the time of a call instruction must not
3848 go in a register clobbered by calls. Record this for
3849 all regs now live. Note that insns which are born or
3850 die in a call do not cross a call, so this must be done
3851 after the killings (above) and before the births
3852 (below). */
3853 p = regs_sometimes_live;
3854 for (i = 0; i < sometimes_max; i++, p++)
3855 if (REGNO_REG_SET_P (bb_live_regs, p->regno))
3856 p->calls_crossed += 1;
3859 /* Make every register used live, and add REG_DEAD notes for
3860 registers which were not live before we started. */
3861 attach_deaths_insn (insn);
3863 /* Find registers now made live by that instruction. */
3864 EXECUTE_IF_AND_COMPL_IN_REG_SET (bb_live_regs, old_live_regs, 0, i,
3866 sometimes_max
3867 = new_sometimes_live (regs_sometimes_live,
3868 i, sometimes_max);
3870 IOR_REG_SET (old_live_regs, bb_live_regs);
3872 /* Count lengths of all regs we are worrying about now,
3873 and handle registers no longer live. */
3875 for (i = 0; i < sometimes_max; i++)
3877 register struct sometimes *p = &regs_sometimes_live[i];
3878 int regno = p->regno;
3880 p->live_length += 1;
3882 if (!REGNO_REG_SET_P (bb_live_regs, p->regno))
3884 /* This is the end of one of this register's lifetime
3885 segments. Save the lifetime info collected so far,
3886 and clear its bit in the old_live_regs entry. */
3887 sched_reg_live_length[regno] += p->live_length;
3888 sched_reg_n_calls_crossed[regno] += p->calls_crossed;
3889 CLEAR_REGNO_REG_SET (old_live_regs, p->regno);
3891 /* Delete the reg_sometimes_live entry for this reg by
3892 copying the last entry over top of it. */
3893 *p = regs_sometimes_live[--sometimes_max];
3894 /* ...and decrement i so that this newly copied entry
3895 will be processed. */
3896 i--;
3900 link = insn;
3901 insn = PREV_INSN (insn);
3903 while (SCHED_GROUP_P (link));
3905 /* Set INSN back to the insn we are scheduling now. */
3906 insn = ready[0];
3909 /* Schedule INSN. Remove it from the ready list. */
3910 ready += 1;
3911 n_ready -= 1;
3913 sched_n_insns += 1;
3914 NEXT_INSN (insn) = last;
3915 PREV_INSN (last) = insn;
3917 /* Everything that precedes INSN now either becomes "ready", if
3918 it can execute immediately before INSN, or "pending", if
3919 there must be a delay. Give INSN high enough priority that
3920 at least one (maybe more) reg-killing insns can be launched
3921 ahead of all others. Mark INSN as scheduled by changing its
3922 priority to -1. */
3923 INSN_PRIORITY (insn) = LAUNCH_PRIORITY;
3924 new_ready = schedule_insn (insn, ready, n_ready, clock);
3925 INSN_PRIORITY (insn) = DONE_PRIORITY;
3927 /* Schedule all prior insns that must not be moved. */
3928 if (SCHED_GROUP_P (insn))
3930 /* Disable these insns from being launched, in case one of the
3931 insns in the group has a dependency on an earlier one. */
3932 link = insn;
3933 while (SCHED_GROUP_P (link))
3935 /* Disable these insns from being launched by anybody. */
3936 link = PREV_INSN (link);
3937 INSN_REF_COUNT (link) = 0;
3940 /* Now handle each group insn like the main insn was handled
3941 above. */
3942 link = insn;
3943 while (SCHED_GROUP_P (link))
3945 link = PREV_INSN (link);
3947 sched_n_insns += 1;
3949 /* ??? Why don't we set LAUNCH_PRIORITY here? */
3950 new_ready = schedule_insn (link, ready, new_ready, clock);
3951 INSN_PRIORITY (link) = DONE_PRIORITY;
3955 /* Put back NOTE_INSN_SETJMP,
3956 NOTE_INSN_{LOOP,EHREGION}_{BEGIN,END} notes. */
3958 /* To prime the loop, we need to handle INSN and all the insns in the
3959 sched group. */
3960 last = NEXT_INSN (insn);
3963 insn = PREV_INSN (last);
3965 /* Maintain a valid chain so emit_note_before works.
3966 This is necessary because PREV_INSN (insn) isn't valid
3967 (if ! SCHED_GROUP_P) and if it points to an insn already
3968 scheduled, a circularity will result. */
3969 if (! SCHED_GROUP_P (insn))
3971 NEXT_INSN (prev_head) = insn;
3972 PREV_INSN (insn) = prev_head;
3975 last = reemit_notes (insn, insn);
3977 while (SCHED_GROUP_P (insn));
3979 if (q_size != 0)
3980 abort ();
3982 if (reload_completed == 0)
3983 finish_sometimes_live (regs_sometimes_live, sometimes_max);
3985 /* HEAD is now the first insn in the chain of insns that have
3986 been scheduled by the loop above.
3987 TAIL is the last of those insns. */
3988 head = last;
3990 /* NOTE_LIST is the end of a chain of notes previously found
3991 among the insns. Insert them at the beginning of the insns. */
3992 if (note_list != 0)
3994 rtx note_head = note_list;
3995 while (PREV_INSN (note_head))
3996 note_head = PREV_INSN (note_head);
3998 PREV_INSN (head) = note_list;
3999 NEXT_INSN (note_list) = head;
4000 head = note_head;
4003 /* There should be no REG_DEAD notes left over at the end.
4004 In practice, this can occur as the result of bugs in flow.c, combine.c,
4005 and/or sched.c. The values of the REG_DEAD notes remaining are
4006 meaningless, because dead_notes is just used as a free list. */
4007 #if 1
4008 if (dead_notes != 0)
4009 abort ();
4010 #endif
4012 if (new_needs & NEED_HEAD)
4013 basic_block_head[b] = head;
4014 PREV_INSN (head) = prev_head;
4015 NEXT_INSN (prev_head) = head;
4017 if (new_needs & NEED_TAIL)
4018 basic_block_end[b] = tail;
4019 NEXT_INSN (tail) = next_tail;
4020 PREV_INSN (next_tail) = tail;
4022 /* Restore the line-number notes of each insn. */
4023 if (write_symbols != NO_DEBUG)
4025 rtx line, note, prev, new;
4026 int notes = 0;
4028 head = basic_block_head[b];
4029 next_tail = NEXT_INSN (basic_block_end[b]);
4031 /* Determine the current line-number. We want to know the current
4032 line number of the first insn of the block here, in case it is
4033 different from the true line number that was saved earlier. If
4034 different, then we need a line number note before the first insn
4035 of this block. If it happens to be the same, then we don't want to
4036 emit another line number note here. */
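/* For example, if the block's saved first line was 42 but scheduling
   moved an insn whose LINE_NOTE is line 57 to the front, LINE will not
   match that note and a fresh line-57 note gets emitted before the
   insn. (Line numbers are illustrative.) */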
4037 for (line = head; line; line = PREV_INSN (line))
4038 if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
4039 break;
4041 /* Walk the insns keeping track of the current line-number and inserting
4042 the line-number notes as needed. */
4043 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
4044 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
4045 line = insn;
4046 /* This used to emit line number notes before every non-deleted note.
4047 However, this confuses a debugger, because line notes not separated
4048 by real instructions all end up at the same address. I can find no
4049 use for line number notes before other notes, so none are emitted. */
4050 else if (GET_CODE (insn) != NOTE
4051 && (note = LINE_NOTE (insn)) != 0
4052 && note != line
4053 && (line == 0
4054 || NOTE_LINE_NUMBER (note) != NOTE_LINE_NUMBER (line)
4055 || NOTE_SOURCE_FILE (note) != NOTE_SOURCE_FILE (line)))
4057 line = note;
4058 prev = PREV_INSN (insn);
4059 if (LINE_NOTE (note))
4061 /* Re-use the original line-number note. */
4062 LINE_NOTE (note) = 0;
4063 PREV_INSN (note) = prev;
4064 NEXT_INSN (prev) = note;
4065 PREV_INSN (insn) = note;
4066 NEXT_INSN (note) = insn;
4068 else
4070 notes++;
4071 new = emit_note_after (NOTE_LINE_NUMBER (note), prev);
4072 NOTE_SOURCE_FILE (new) = NOTE_SOURCE_FILE (note);
4073 RTX_INTEGRATED_P (new) = RTX_INTEGRATED_P (note);
4076 if (file && notes)
4077 fprintf (file, ";; added %d line-number notes\n", notes);
4080 if (file)
4082 fprintf (file, ";; total time = %d\n;; new basic block head = %d\n;; new basic block end = %d\n\n",
4083 clock, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));
4086 /* Yow! We're done! */
4087 free_pending_lists ();
4089 ret:
4090 FREE_REG_SET (reg_pending_sets);
4091 FREE_REG_SET (old_live_regs);
4093 return;
4096 /* Subroutine of split_hard_reg_notes. Searches X for any reference to
4097 REGNO, returning the rtx of the reference found if any. Otherwise,
4098 returns 0. */
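/* For example, if X is (plus:SI (reg:SI 5) (const_int 4)), then
   regno_use_in (5, x) returns the inner (reg:SI 5) rtx, while
   regno_use_in (6, x) returns 0. (Register numbers are
   illustrative.) */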
4100 static rtx
4101 regno_use_in (regno, x)
4102 int regno;
4103 rtx x;
4105 register char *fmt;
4106 int i, j;
4107 rtx tem;
4109 if (GET_CODE (x) == REG && REGNO (x) == regno)
4110 return x;
4112 fmt = GET_RTX_FORMAT (GET_CODE (x));
4113 for (i = GET_RTX_LENGTH (GET_CODE (x)) - 1; i >= 0; i--)
4115 if (fmt[i] == 'e')
4117 if ((tem = regno_use_in (regno, XEXP (x, i))) != 0)
4118 return tem;
4120 else if (fmt[i] == 'E')
4121 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
4122 if ((tem = regno_use_in (regno, XVECEXP (x, i, j))) != 0)
4123 return tem;
4126 return 0;
4129 /* Subroutine of update_flow_info. Determines whether any new REG_NOTEs are
4130 needed for the hard register mentioned in the note. This can happen
4131 if the reference to the hard register in the original insn was split into
4132 several smaller hard register references in the split insns. */
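/* For example, a REG_DEAD note for a double-word (reg:DI 0) on
   ORIG_INSN may, after splitting, correspond to one new insn using
   (reg:SI 0) and another using (reg:SI 1); each such insn then needs
   its own REG_DEAD note. (Register numbers are illustrative.) */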
4134 static void
4135 split_hard_reg_notes (note, first, last, orig_insn)
4136 rtx note, first, last, orig_insn;
4138 rtx reg, temp, link;
4139 int n_regs, i, new_reg;
4140 rtx insn;
4142 /* Assume that this is a REG_DEAD note. */
4143 if (REG_NOTE_KIND (note) != REG_DEAD)
4144 abort ();
4146 reg = XEXP (note, 0);
4148 n_regs = HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg));
4150 for (i = 0; i < n_regs; i++)
4152 new_reg = REGNO (reg) + i;
4154 /* Check for references to new_reg in the split insns. */
4155 for (insn = last; ; insn = PREV_INSN (insn))
4157 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4158 && (temp = regno_use_in (new_reg, PATTERN (insn))))
4160 /* Create a new reg dead note here. */
4161 link = rtx_alloc (EXPR_LIST);
4162 PUT_REG_NOTE_KIND (link, REG_DEAD);
4163 XEXP (link, 0) = temp;
4164 XEXP (link, 1) = REG_NOTES (insn);
4165 REG_NOTES (insn) = link;
4167 /* If killed multiple registers here, then add in the excess. */
4168 i += HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) - 1;
4170 break;
4172 /* It isn't mentioned anywhere, so no new reg note is needed for
4173 this register. */
4174 if (insn == first)
4175 break;
4180 /* Subroutine of update_flow_info. Determines whether a SET or CLOBBER in an
4181 insn created by splitting needs a REG_DEAD or REG_UNUSED note added. */
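/* For example, splitting may produce
	(set (reg:SI 100) (plus:SI (reg:SI 101) (const_int 4)))
	(set (mem:SI (reg:SI 100)) (reg:SI 102))
   where the intermediate (reg:SI 100) now dies at its last use and
   needs a REG_DEAD note there. (Pseudo register numbers are
   illustrative.) */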
4183 static void
4184 new_insn_dead_notes (pat, insn, last, orig_insn)
4185 rtx pat, insn, last, orig_insn;
4187 rtx dest, tem, set;
4189 /* PAT is either a CLOBBER or a SET here. */
4190 dest = XEXP (pat, 0);
4192 while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
4193 || GET_CODE (dest) == STRICT_LOW_PART
4194 || GET_CODE (dest) == SIGN_EXTRACT)
4195 dest = XEXP (dest, 0);
4197 if (GET_CODE (dest) == REG)
4199 for (tem = last; tem != insn; tem = PREV_INSN (tem))
4201 if (GET_RTX_CLASS (GET_CODE (tem)) == 'i'
4202 && reg_overlap_mentioned_p (dest, PATTERN (tem))
4203 && (set = single_set (tem)))
4205 rtx tem_dest = SET_DEST (set);
4207 while (GET_CODE (tem_dest) == ZERO_EXTRACT
4208 || GET_CODE (tem_dest) == SUBREG
4209 || GET_CODE (tem_dest) == STRICT_LOW_PART
4210 || GET_CODE (tem_dest) == SIGN_EXTRACT)
4211 tem_dest = XEXP (tem_dest, 0);
4213 if (! rtx_equal_p (tem_dest, dest))
4215 /* Use the same scheme as combine.c: don't put both REG_DEAD
4216 and REG_UNUSED notes on the same insn. */
4217 if (! find_regno_note (tem, REG_UNUSED, REGNO (dest))
4218 && ! find_regno_note (tem, REG_DEAD, REGNO (dest)))
4220 rtx note = rtx_alloc (EXPR_LIST);
4221 PUT_REG_NOTE_KIND (note, REG_DEAD);
4222 XEXP (note, 0) = dest;
4223 XEXP (note, 1) = REG_NOTES (tem);
4224 REG_NOTES (tem) = note;
4226 /* The reg only dies in one insn, the last one that uses
4227 it. */
4228 break;
4230 else if (reg_overlap_mentioned_p (dest, SET_SRC (set)))
4231 /* We found an instruction that both uses the register
4232 and sets it, so no new REG_NOTE is needed for this set. */
4233 break;
4236 /* If this is a set, it must die somewhere, unless it is the dest of
4237 the original insn, and hence is live after the original insn. Abort
4238 if it isn't supposed to be live after the original insn.
4240 If this is a clobber, then just add a REG_UNUSED note. */
4241 if (tem == insn)
4243 int live_after_orig_insn = 0;
4244 rtx pattern = PATTERN (orig_insn);
4245 int i;
4247 if (GET_CODE (pat) == CLOBBER)
4249 rtx note = rtx_alloc (EXPR_LIST);
4250 PUT_REG_NOTE_KIND (note, REG_UNUSED);
4251 XEXP (note, 0) = dest;
4252 XEXP (note, 1) = REG_NOTES (insn);
4253 REG_NOTES (insn) = note;
4254 return;
4257 /* The original insn could have multiple sets, so search the
4258 insn for all sets. */
4259 if (GET_CODE (pattern) == SET)
4261 if (reg_overlap_mentioned_p (dest, SET_DEST (pattern)))
4262 live_after_orig_insn = 1;
4264 else if (GET_CODE (pattern) == PARALLEL)
4266 for (i = 0; i < XVECLEN (pattern, 0); i++)
4267 if (GET_CODE (XVECEXP (pattern, 0, i)) == SET
4268 && reg_overlap_mentioned_p (dest,
4269 SET_DEST (XVECEXP (pattern,
4270 0, i))))
4271 live_after_orig_insn = 1;
4274 if (! live_after_orig_insn)
4275 abort ();
4280 /* Subroutine of update_flow_info. Update the value of reg_n_sets for all
4281 registers modified by X. INC is -1 if the containing insn is being deleted,
4282 and is 1 if the containing insn is a newly generated insn. */
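/* Typical usage, as in update_flow_info below: call
   update_n_sets (PATTERN (orig_insn), -1) for the insn being deleted,
   and update_n_sets (pat, 1) for each SET or CLOBBER in the insns
   that replace it. */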
4284 static void
4285 update_n_sets (x, inc)
4286 rtx x;
4287 int inc;
4289 rtx dest = SET_DEST (x);
4291 while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
4292 || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
4293 dest = SUBREG_REG (dest);
4295 if (GET_CODE (dest) == REG)
4297 int regno = REGNO (dest);
4299 if (regno < FIRST_PSEUDO_REGISTER)
4301 register int i;
4302 int endregno = regno + HARD_REGNO_NREGS (regno, GET_MODE (dest));
4304 for (i = regno; i < endregno; i++)
4305 REG_N_SETS (i) += inc;
4307 else
4308 REG_N_SETS (regno) += inc;
4312 /* Updates all flow-analysis related quantities (including REG_NOTES) for
4313 the insns from FIRST to LAST inclusive that were created by splitting
4314 ORIG_INSN. NOTES are the original REG_NOTES. */
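/* For example, on a two-address machine

	(set (reg:SI 100) (plus:SI (reg:SI 101) (reg:SI 102)))
	  with notes REG_DEAD (reg:SI 101) and REG_DEAD (reg:SI 102)

   may be split into

	(set (reg:SI 100) (reg:SI 101))
	(set (reg:SI 100) (plus:SI (reg:SI 100) (reg:SI 102)))

   after which the REG_DEAD note for 101 belongs on the first new insn
   and the note for 102 on the second. (Pseudo register numbers are
   illustrative.) */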
4316 static void
4317 update_flow_info (notes, first, last, orig_insn)
4318 rtx notes;
4319 rtx first, last;
4320 rtx orig_insn;
4322 rtx insn, note;
4323 rtx next;
4324 rtx orig_dest, temp;
4325 rtx set;
4327 /* Get and save the destination set by the original insn. */
4329 orig_dest = single_set (orig_insn);
4330 if (orig_dest)
4331 orig_dest = SET_DEST (orig_dest);
4333 /* Move REG_NOTES from the original insn to where they now belong. */
4335 for (note = notes; note; note = next)
4337 next = XEXP (note, 1);
4338 switch (REG_NOTE_KIND (note))
4340 case REG_DEAD:
4341 case REG_UNUSED:
4342 /* Move these notes from the original insn to the last new insn where
4343 the register is now set. */
4345 for (insn = last; ; insn = PREV_INSN (insn))
4347 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4348 && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
4350 /* If this note refers to a multiple word hard register, it
4351 may have been split into several smaller hard register
4352 references, so handle it specially. */
4353 temp = XEXP (note, 0);
4354 if (REG_NOTE_KIND (note) == REG_DEAD
4355 && GET_CODE (temp) == REG
4356 && REGNO (temp) < FIRST_PSEUDO_REGISTER
4357 && HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) > 1)
4358 split_hard_reg_notes (note, first, last, orig_insn);
4359 else
4361 XEXP (note, 1) = REG_NOTES (insn);
4362 REG_NOTES (insn) = note;
4365 /* Sometimes need to convert REG_UNUSED notes to REG_DEAD
4366 notes. */
4367 /* ??? This won't handle multiple word registers correctly,
4368 but should be good enough for now. */
4369 if (REG_NOTE_KIND (note) == REG_UNUSED
4370 && GET_CODE (XEXP (note, 0)) != SCRATCH
4371 && ! dead_or_set_p (insn, XEXP (note, 0)))
4372 PUT_REG_NOTE_KIND (note, REG_DEAD);
4374 /* The reg only dies in one insn, the last one that uses
4375 it. */
4376 break;
4378 /* It must die somewhere; fail if we couldn't find where it died.
4380 If this is a REG_UNUSED note, then it must be a temporary
4381 register that was not needed by this instantiation of the
4382 pattern, so we can safely ignore it. */
4383 if (insn == first)
4385 /* After reload, REG_DEAD notes sometimes appear an
4386 instruction after the register actually dies. */
4387 if (reload_completed && REG_NOTE_KIND (note) == REG_DEAD)
4389 XEXP (note, 1) = REG_NOTES (insn);
4390 REG_NOTES (insn) = note;
4391 break;
4394 if (REG_NOTE_KIND (note) != REG_UNUSED)
4395 abort ();
4397 break;
4400 break;
4402 case REG_WAS_0:
4403 /* This note applies to the dest of the original insn. Find the
4404 first new insn that now has the same dest, and move the note
4405 there. */
4407 if (! orig_dest)
4408 abort ();
4410 for (insn = first; ; insn = NEXT_INSN (insn))
4412 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4413 && (temp = single_set (insn))
4414 && rtx_equal_p (SET_DEST (temp), orig_dest))
4416 XEXP (note, 1) = REG_NOTES (insn);
4417 REG_NOTES (insn) = note;
4418 /* The reg is only zero before one insn, the first that
4419 uses it. */
4420 break;
4422 /* If this note refers to a multiple word hard
4423 register, it may have been split into several smaller
4424 hard register references. We could split the notes,
4425 but simply dropping them is good enough. */
4426 if (GET_CODE (orig_dest) == REG
4427 && REGNO (orig_dest) < FIRST_PSEUDO_REGISTER
4428 && HARD_REGNO_NREGS (REGNO (orig_dest),
4429 GET_MODE (orig_dest)) > 1)
4430 break;
4431 /* It must be set somewhere; fail if we couldn't find where it
4432 was set. */
4433 if (insn == last)
4434 abort ();
4436 break;
4438 case REG_EQUAL:
4439 case REG_EQUIV:
4440 /* A REG_EQUIV or REG_EQUAL note on an insn with more than one
4441 set is meaningless. Just drop the note. */
4442 if (! orig_dest)
4443 break;
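/* Otherwise fall through: the note moves to the last new insn that
   sets ORIG_DEST, just as a REG_NO_CONFLICT note does. */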
4445 case REG_NO_CONFLICT:
4446 /* These notes apply to the dest of the original insn. Find the last
4447 new insn that now has the same dest, and move the note there. */
4449 if (! orig_dest)
4450 abort ();
4452 for (insn = last; ; insn = PREV_INSN (insn))
4454 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4455 && (temp = single_set (insn))
4456 && rtx_equal_p (SET_DEST (temp), orig_dest))
4458 XEXP (note, 1) = REG_NOTES (insn);
4459 REG_NOTES (insn) = note;
4460 /* Only put this note on one of the new insns. */
4461 break;
4464 /* The original dest must still be set someplace. Abort if we
4465 couldn't find it. */
4466 if (insn == first)
4468 /* However, if this note refers to a multiple word hard
4469 register, it may have been split into several smaller
4470 hard register references. We could split the notes,
4471 but simply dropping them is good enough. */
4472 if (GET_CODE (orig_dest) == REG
4473 && REGNO (orig_dest) < FIRST_PSEUDO_REGISTER
4474 && HARD_REGNO_NREGS (REGNO (orig_dest),
4475 GET_MODE (orig_dest)) > 1)
4476 break;
4477 /* Likewise for multi-word memory references. */
4478 if (GET_CODE (orig_dest) == MEM
4479 && SIZE_FOR_MODE (orig_dest) > MOVE_MAX)
4480 break;
4481 abort ();
4484 break;
4486 case REG_LIBCALL:
4487 /* Move a REG_LIBCALL note to the first insn created, and update
4488 the corresponding REG_RETVAL note. */
4489 XEXP (note, 1) = REG_NOTES (first);
4490 REG_NOTES (first) = note;
4492 insn = XEXP (note, 0);
4493 note = find_reg_note (insn, REG_RETVAL, NULL_RTX);
4494 if (note)
4495 XEXP (note, 0) = first;
4496 break;
4498 case REG_EXEC_COUNT:
4499 /* Move a REG_EXEC_COUNT note to the first insn created. */
4500 XEXP (note, 1) = REG_NOTES (first);
4501 REG_NOTES (first) = note;
4502 break;
4504 case REG_RETVAL:
4505 /* Move a REG_RETVAL note to the last insn created, and update
4506 the corresponding REG_LIBCALL note. */
4507 XEXP (note, 1) = REG_NOTES (last);
4508 REG_NOTES (last) = note;
4510 insn = XEXP (note, 0);
4511 note = find_reg_note (insn, REG_LIBCALL, NULL_RTX);
4512 if (note)
4513 XEXP (note, 0) = last;
4514 break;
4516 case REG_NONNEG:
4517 case REG_BR_PROB:
4518 /* This should be moved to whichever instruction is a JUMP_INSN. */
4520 for (insn = last; ; insn = PREV_INSN (insn))
4522 if (GET_CODE (insn) == JUMP_INSN)
4524 XEXP (note, 1) = REG_NOTES (insn);
4525 REG_NOTES (insn) = note;
4526 /* Only put this note on one of the new insns. */
4527 break;
4529 /* Fail if we couldn't find a JUMP_INSN. */
4530 if (insn == first)
4531 abort ();
4533 break;
4535 case REG_INC:
4536 /* reload sometimes leaves obsolete REG_INC notes around. */
4537 if (reload_completed)
4538 break;
4539 /* This should be moved to whichever instruction now has the
4540 increment operation. */
4541 abort ();
4543 case REG_LABEL:
4544 /* Should be moved to the new insn(s) which use the label. */
4545 for (insn = first; insn != NEXT_INSN (last); insn = NEXT_INSN (insn))
4546 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4547 && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
4548 REG_NOTES (insn) = gen_rtx (EXPR_LIST, REG_LABEL,
4549 XEXP (note, 0), REG_NOTES (insn));
4550 break;
4552 case REG_CC_SETTER:
4553 case REG_CC_USER:
4554 /* These two notes will never appear until after reorg, so we don't
4555 have to handle them here. */
4556 default:
4557 abort ();
4561 /* Each new insn created, except the last, has a new set. If the destination
4562 is a register, then this reg is now live across several insns, whereas
4563 previously the dest reg was born and died within the same insn. To
4564 reflect this, we now need a REG_DEAD note on the insn where this
4565 dest reg dies.
4567 Similarly, the new insns may have clobbers that need REG_UNUSED notes. */
4569 for (insn = first; insn != last; insn = NEXT_INSN (insn))
4571 rtx pat;
4572 int i;
4574 pat = PATTERN (insn);
4575 if (GET_CODE (pat) == SET || GET_CODE (pat) == CLOBBER)
4576 new_insn_dead_notes (pat, insn, last, orig_insn);
4577 else if (GET_CODE (pat) == PARALLEL)
4579 for (i = 0; i < XVECLEN (pat, 0); i++)
4580 if (GET_CODE (XVECEXP (pat, 0, i)) == SET
4581 || GET_CODE (XVECEXP (pat, 0, i)) == CLOBBER)
4582 new_insn_dead_notes (XVECEXP (pat, 0, i), insn, last, orig_insn);
4586 /* If any insn, except the last, uses the register set by the last insn,
4587 then we need a new REG_DEAD note on that insn. In this case, there
4588 would not have been a REG_DEAD note for this register in the original
4589 insn because it was used and set within one insn. */
4591 set = single_set (last);
4592 if (set)
4594 rtx dest = SET_DEST (set);
4596 while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
4597 || GET_CODE (dest) == STRICT_LOW_PART
4598 || GET_CODE (dest) == SIGN_EXTRACT)
4599 dest = XEXP (dest, 0);
4601 if (GET_CODE (dest) == REG
4602 /* Global registers are always live, so the code below does not
4603 apply to them. */
4604 && (REGNO (dest) >= FIRST_PSEUDO_REGISTER
4605 || ! global_regs[REGNO (dest)]))
4607 rtx stop_insn = PREV_INSN (first);
4609 /* If the last insn uses the register that it is setting, then
4610 we don't want to put a REG_DEAD note there. Search backwards
4611 to find the first insn that sets but does not use DEST. */
4613 insn = last;
4614 if (reg_overlap_mentioned_p (dest, SET_SRC (set)))
4616 for (insn = PREV_INSN (insn); insn != first;
4617 insn = PREV_INSN (insn))
4619 if ((set = single_set (insn))
4620 && reg_mentioned_p (dest, SET_DEST (set))
4621 && ! reg_overlap_mentioned_p (dest, SET_SRC (set)))
4622 break;
4626 /* Now find the first insn that uses but does not set DEST. */
4628 for (insn = PREV_INSN (insn); insn != stop_insn;
4629 insn = PREV_INSN (insn))
4631 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4632 && reg_mentioned_p (dest, PATTERN (insn))
4633 && (set = single_set (insn)))
4635 rtx insn_dest = SET_DEST (set);
4637 while (GET_CODE (insn_dest) == ZERO_EXTRACT
4638 || GET_CODE (insn_dest) == SUBREG
4639 || GET_CODE (insn_dest) == STRICT_LOW_PART
4640 || GET_CODE (insn_dest) == SIGN_EXTRACT)
4641 insn_dest = XEXP (insn_dest, 0);
4643 if (insn_dest != dest)
4645 note = rtx_alloc (EXPR_LIST);
4646 PUT_REG_NOTE_KIND (note, REG_DEAD);
4647 XEXP (note, 0) = dest;
4648 XEXP (note, 1) = REG_NOTES (insn);
4649 REG_NOTES (insn) = note;
4650 /* The reg only dies in one insn, the last one
4651 that uses it. */
4652 break;
4659 /* If the original dest is modifying a multiple register target, and the
4660 original instruction was split such that the original dest is now set
4661 by two or more SUBREG sets, then the split insns no longer kill the
4662 destination of the original insn.
4664 In this case, if there exists an instruction in the same basic block,
4665 before the split insn, which uses the original dest, and this use is
4666 killed by the original insn, then we must remove the REG_DEAD note on
4667 this insn, because it is now superfluous.
4669 This does not apply when a hard register gets split, because the code
4670 knows how to handle overlapping hard registers properly. */
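/* For example,
	(set (reg:DI 100) (mem:DI ...))
   may have been split into
	(set (subreg:SI (reg:DI 100) 0) (mem:SI ...))
	(set (subreg:SI (reg:DI 100) 1) (mem:SI ...))
   in which case no single new insn kills all of (reg:DI 100).
   (Pseudo register number and modes are illustrative.) */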
4671 if (orig_dest && GET_CODE (orig_dest) == REG)
4673 int found_orig_dest = 0;
4674 int found_split_dest = 0;
4676 for (insn = first; ; insn = NEXT_INSN (insn))
4678 rtx pat = PATTERN (insn);
4679 int i = GET_CODE (pat) == PARALLEL ? XVECLEN (pat, 0) : 0;
4680 set = pat;
4681 for (;;)
4683 if (GET_CODE (set) == SET)
4685 if (GET_CODE (SET_DEST (set)) == REG
4686 && REGNO (SET_DEST (set)) == REGNO (orig_dest))
4688 found_orig_dest = 1;
4689 break;
4691 else if (GET_CODE (SET_DEST (set)) == SUBREG
4692 && SUBREG_REG (SET_DEST (set)) == orig_dest)
4694 found_split_dest = 1;
4695 break;
4698 if (--i < 0)
4699 break;
4700 set = XVECEXP (pat, 0, i);
4703 if (insn == last)
4704 break;
4707 if (found_split_dest)
4709 /* Search backwards from FIRST, looking for the first insn that uses
4710 the original dest. Stop if we pass a CODE_LABEL or a JUMP_INSN.
4711 If we find an insn, and it has a REG_DEAD note, then delete the
4712 note. */
4714 for (insn = first; insn; insn = PREV_INSN (insn))
4716 if (GET_CODE (insn) == CODE_LABEL
4717 || GET_CODE (insn) == JUMP_INSN)
4718 break;
4719 else if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4720 && reg_mentioned_p (orig_dest, insn))
4722 note = find_regno_note (insn, REG_DEAD, REGNO (orig_dest));
4723 if (note)
4724 remove_note (insn, note);
4728 else if (! found_orig_dest)
4730 /* This should never happen. */
4731 abort ();
4735 /* Update reg_n_sets. This is necessary to prevent local alloc from
4736 converting REG_EQUAL notes to REG_EQUIV when splitting has changed
4737 a reg from being set once to being set multiple times. */
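/* For example, a pseudo that was set exactly once and carries a
   REG_EQUAL note could otherwise be treated as equivalent to that
   value everywhere, even though the split insns now set it twice. */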
4740 rtx x = PATTERN (orig_insn);
4741 RTX_CODE code = GET_CODE (x);
4743 if (code == SET || code == CLOBBER)
4744 update_n_sets (x, -1);
4745 else if (code == PARALLEL)
4747 int i;
4748 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
4750 code = GET_CODE (XVECEXP (x, 0, i));
4751 if (code == SET || code == CLOBBER)
4752 update_n_sets (XVECEXP (x, 0, i), -1);
4756 for (insn = first; ; insn = NEXT_INSN (insn))
4758 x = PATTERN (insn);
4759 code = GET_CODE (x);
4761 if (code == SET || code == CLOBBER)
4762 update_n_sets (x, 1);
4763 else if (code == PARALLEL)
4765 int i;
4766 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
4768 code = GET_CODE (XVECEXP (x, 0, i));
4769 if (code == SET || code == CLOBBER)
4770 update_n_sets (XVECEXP (x, 0, i), 1);
4774 if (insn == last)
4775 break;
4780 /* The one entry point in this file. DUMP_FILE is the dump file for
4781 this pass. */
4783 void
4784 schedule_insns (dump_file)
4785 FILE *dump_file;
4787 int max_uid = MAX_INSNS_PER_SPLIT * (get_max_uid () + 1);
4788 int b;
4789 int i;
4790 rtx insn;
4792 /* Taking care of this degenerate case makes the rest of
4793 this code simpler. */
4794 if (n_basic_blocks == 0)
4795 return;
4797 /* Create an insn here so that we can hang dependencies off of it later. */
4798 sched_before_next_call
4799 = gen_rtx (INSN, VOIDmode, 0, NULL_RTX, NULL_RTX,
4800 NULL_RTX, 0, NULL_RTX, NULL_RTX);
4802 /* Initialize the unused_*_lists. We can't use the ones left over from
4803 the previous function, because gcc has freed that memory. We can use
4804 the ones left over from the first sched pass in the second pass, however,
4805 so only clear them on the first sched pass. The first pass is before
4806 reload if flag_schedule_insns is set; otherwise it is afterwards. */
4808 if (reload_completed == 0 || ! flag_schedule_insns)
4810 unused_insn_list = 0;
4811 unused_expr_list = 0;
4814 /* We create no insns here, only reorder them, so we
4815 remember how far we can cut back the stack on exit. */
4817 /* Allocate data for this pass. See comments, above,
4818 for what these vectors do. */
4819 insn_luid = (int *) alloca (max_uid * sizeof (int));
4820 insn_priority = (int *) alloca (max_uid * sizeof (int));
4821 insn_tick = (int *) alloca (max_uid * sizeof (int));
4822 insn_costs = (short *) alloca (max_uid * sizeof (short));
4823 insn_units = (short *) alloca (max_uid * sizeof (short));
4824 insn_blockage = (unsigned int *) alloca (max_uid * sizeof (unsigned int));
4825 insn_ref_count = (int *) alloca (max_uid * sizeof (int));
4827 if (reload_completed == 0)
4829 sched_reg_n_calls_crossed = (int *) alloca (max_regno * sizeof (int));
4830 sched_reg_live_length = (int *) alloca (max_regno * sizeof (int));
4831 bb_dead_regs = ALLOCA_REG_SET ();
4832 bb_live_regs = ALLOCA_REG_SET ();
4833 bzero ((char *) sched_reg_n_calls_crossed, max_regno * sizeof (int));
4834 bzero ((char *) sched_reg_live_length, max_regno * sizeof (int));
4835 init_alias_analysis ();
4837 else
4839 sched_reg_n_calls_crossed = 0;
4840 sched_reg_live_length = 0;
4841 bb_dead_regs = 0;
4842 bb_live_regs = 0;
4843 if (! flag_schedule_insns)
4844 init_alias_analysis ();
4847 if (write_symbols != NO_DEBUG)
4849 rtx line;
4851 line_note = (rtx *) alloca (max_uid * sizeof (rtx));
4852 bzero ((char *) line_note, max_uid * sizeof (rtx));
4853 line_note_head = (rtx *) alloca (n_basic_blocks * sizeof (rtx));
4854 bzero ((char *) line_note_head, n_basic_blocks * sizeof (rtx));
4856 /* Determine the line-number at the start of each basic block.
4857 This must be computed and saved now, because after a basic block's
4858 predecessor has been scheduled, it is impossible to accurately
4859 determine the correct line number for the first insn of the block. */
4861 for (b = 0; b < n_basic_blocks; b++)
4862 for (line = basic_block_head[b]; line; line = PREV_INSN (line))
4863 if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
4865 line_note_head[b] = line;
4866 break;
4870 bzero ((char *) insn_luid, max_uid * sizeof (int));
4871 bzero ((char *) insn_priority, max_uid * sizeof (int));
4872 bzero ((char *) insn_tick, max_uid * sizeof (int));
4873 bzero ((char *) insn_costs, max_uid * sizeof (short));
4874 bzero ((char *) insn_units, max_uid * sizeof (short));
4875 bzero ((char *) insn_blockage, max_uid * sizeof (unsigned int));
4876 bzero ((char *) insn_ref_count, max_uid * sizeof (int));
4878 /* Schedule each basic block, block by block. */
4880 /* ??? Add a NOTE after the last insn of the last basic block. It is not
4881 known why this is done. */
4882 /* ??? Perhaps it's done to ensure NEXT_TAIL in schedule_block is a
4883 valid insn. */
4885 insn = basic_block_end[n_basic_blocks-1];
4886 if (NEXT_INSN (insn) == 0
4887 || (GET_CODE (insn) != NOTE
4888 && GET_CODE (insn) != CODE_LABEL
4889 /* Don't emit a NOTE if it would end up between an unconditional
4890 jump and a BARRIER. */
4891 && ! (GET_CODE (insn) == JUMP_INSN
4892 && GET_CODE (NEXT_INSN (insn)) == BARRIER)))
4893 emit_note_after (NOTE_INSN_DELETED, basic_block_end[n_basic_blocks-1]);
4895 for (b = 0; b < n_basic_blocks; b++)
4897 rtx insn, next;
4899 note_list = 0;
4901 for (insn = basic_block_head[b]; ; insn = next)
4903 rtx prev;
4904 rtx set;
4906 /* Can't use `next_real_insn' because that
4907 might go across CODE_LABELS and short-circuit basic blocks. */
4908 next = NEXT_INSN (insn);
4909 if (GET_CODE (insn) != INSN)
4911 if (insn == basic_block_end[b])
4912 break;
4914 continue;
4917 /* Don't split no-op move insns. These should silently disappear
4918 later in final. Splitting such insns would break the code
4919 that handles REG_NO_CONFLICT blocks. */
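/* A no-op move is e.g. (set (reg:SI 100) (reg:SI 100)); final will
   delete it silently, so nothing is gained by splitting it here.
   (Pseudo register number is illustrative.) */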
4920 set = single_set (insn);
4921 if (set && rtx_equal_p (SET_SRC (set), SET_DEST (set)))
4923 if (insn == basic_block_end[b])
4924 break;
4926 /* Nops get in the way while scheduling, so delete them now if
4927 register allocation has already been done. It is too risky
4928 to try to do this before register allocation, and there are
4929 unlikely to be very many nops then anyway. */
4930 if (reload_completed)
4932 PUT_CODE (insn, NOTE);
4933 NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
4934 NOTE_SOURCE_FILE (insn) = 0;
4937 continue;
4940 /* Split insns here to get max fine-grain parallelism. */
4941 prev = PREV_INSN (insn);
4942 /* It is probably not worthwhile to try to split again in the
4943 second pass. However, if flag_schedule_insns is not set,
4944 the first and only (if any) scheduling pass is after reload. */
4945 if (reload_completed == 0 || ! flag_schedule_insns)
4947 rtx last, first = PREV_INSN (insn);
4948 rtx notes = REG_NOTES (insn);
4950 last = try_split (PATTERN (insn), insn, 1);
4951 if (last != insn)
4953 /* try_split returns the NOTE that INSN became. */
4954 first = NEXT_INSN (first);
4955 update_flow_info (notes, first, last, insn);
4957 PUT_CODE (insn, NOTE);
4958 NOTE_SOURCE_FILE (insn) = 0;
4959 NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
4960 if (insn == basic_block_head[b])
4961 basic_block_head[b] = first;
4962 if (insn == basic_block_end[b])
4964 basic_block_end[b] = last;
4965 break;
4970 if (insn == basic_block_end[b])
4971 break;
4974 schedule_block (b, dump_file);
4976 #ifdef USE_C_ALLOCA
4977 alloca (0);
4978 #endif
4981 /* Reposition the prologue and epilogue notes in case we moved the
4982 prologue/epilogue insns. */
4983 if (reload_completed)
4984 reposition_prologue_and_epilogue_notes (get_insns ());
4986 if (write_symbols != NO_DEBUG)
4988 rtx line = 0;
4989 rtx insn = get_insns ();
4990 int active_insn = 0;
4991 int notes = 0;
4993 /* Walk the insns deleting redundant line-number notes. Many of these
4994 are already present. The remainder tend to occur at basic
4995 block boundaries. */
4996 for (insn = get_last_insn (); insn; insn = PREV_INSN (insn))
4997 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
4999 /* If there are no active insns following, INSN is redundant. */
5000 if (active_insn == 0)
5002 notes++;
5003 NOTE_SOURCE_FILE (insn) = 0;
5004 NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
5006 /* If the line number is unchanged, LINE is redundant. */
5007 else if (line
5008 && NOTE_LINE_NUMBER (line) == NOTE_LINE_NUMBER (insn)
5009 && NOTE_SOURCE_FILE (line) == NOTE_SOURCE_FILE (insn))
5011 notes++;
5012 NOTE_SOURCE_FILE (line) = 0;
5013 NOTE_LINE_NUMBER (line) = NOTE_INSN_DELETED;
5014 line = insn;
5016 else
5017 line = insn;
5018 active_insn = 0;
5020 else if (! ((GET_CODE (insn) == NOTE
5021 && NOTE_LINE_NUMBER (insn) == NOTE_INSN_DELETED)
5022 || (GET_CODE (insn) == INSN
5023 && (GET_CODE (PATTERN (insn)) == USE
5024 || GET_CODE (PATTERN (insn)) == CLOBBER))))
5025 active_insn++;
5027 if (dump_file && notes)
5028 fprintf (dump_file, ";; deleted %d line-number notes\n", notes);
5031 if (reload_completed == 0)
5033 int regno;
5034 for (regno = 0; regno < max_regno; regno++)
5035 if (sched_reg_live_length[regno])
5037 if (dump_file)
5039 if (REG_LIVE_LENGTH (regno) > sched_reg_live_length[regno])
5040 fprintf (dump_file,
5041 ";; register %d life shortened from %d to %d\n",
5042 regno, REG_LIVE_LENGTH (regno),
5043 sched_reg_live_length[regno]);
5044 /* Negative values are special; don't overwrite the current
5045 reg_live_length value if it is negative. */
5046 else if (REG_LIVE_LENGTH (regno) < sched_reg_live_length[regno]
5047 && REG_LIVE_LENGTH (regno) >= 0)
5048 fprintf (dump_file,
5049 ";; register %d life extended from %d to %d\n",
5050 regno, REG_LIVE_LENGTH (regno),
5051 sched_reg_live_length[regno]);
5053 if (! REG_N_CALLS_CROSSED (regno)
5054 && sched_reg_n_calls_crossed[regno])
5055 fprintf (dump_file,
5056 ";; register %d now crosses calls\n", regno);
5057 else if (REG_N_CALLS_CROSSED (regno)
5058 && ! sched_reg_n_calls_crossed[regno]
5059 && REG_BASIC_BLOCK (regno) != REG_BLOCK_GLOBAL)
5060 fprintf (dump_file,
5061 ";; register %d no longer crosses calls\n", regno);
5064 /* Negative values are special; don't overwrite the current
5065 reg_live_length value if it is negative. */
5066 if (REG_LIVE_LENGTH (regno) >= 0)
5067 REG_LIVE_LENGTH (regno) = sched_reg_live_length[regno];
5069 /* We can't change the value of reg_n_calls_crossed to zero for
5070 pseudos which are live in more than one block.
5072 This is because combine might have made an optimization which
5073 invalidated basic_block_live_at_start and reg_n_calls_crossed,
5074 but it does not update them. If we update reg_n_calls_crossed
5075 here, the two variables are now inconsistent, and this might
5076 confuse the caller-save code into saving a register that doesn't
5077 need to be saved. This is only a problem when we zero calls
5078 crossed for a pseudo live in multiple basic blocks.
5080 Alternatively, we could try to correctly update basic block live
5081 at start here in sched, but that seems complicated. */
5082 if (sched_reg_n_calls_crossed[regno]
5083 || REG_BASIC_BLOCK (regno) != REG_BLOCK_GLOBAL)
5084 REG_N_CALLS_CROSSED (regno) = sched_reg_n_calls_crossed[regno];
5088 if (reload_completed == 0)
5090 FREE_REG_SET (bb_dead_regs);
5091 FREE_REG_SET (bb_live_regs);
5095 #endif /* INSN_SCHEDULING */