/* Instruction scheduling pass.
   Copyright (C) 1992, 1993, 1994, 1995 Free Software Foundation, Inc.
   Contributed by Michael Tiemann (tiemann@cygnus.com)
   Enhanced by, and currently maintained by, Jim Wilson (wilson@cygnus.com)

This file is part of GNU CC.

GNU CC is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.

GNU CC is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with GNU CC; see the file COPYING.  If not, write to
the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.  */
/* Instruction scheduling pass.

   This pass implements list scheduling within basic blocks.  It is
   run after flow analysis, but before register allocation.  The
   scheduler works as follows:

   We compute insn priorities based on data dependencies.  Flow
   analysis only creates a fraction of the data-dependencies we must
   observe: namely, only those dependencies which the combiner can be
   expected to use.  For this pass, we must therefore create the
   remaining dependencies we need to observe: register dependencies,
   memory dependencies, dependencies to keep function calls in order,
   and the dependence between a conditional branch and the setting of
   condition codes are all dealt with here.

   The scheduler first traverses the data flow graph, starting with
   the last instruction, and proceeding to the first, assigning
   values to insn_priority as it goes.  This sorts the instructions
   topologically by data dependence.

   Once priorities have been established, we order the insns using
   list scheduling.  This works as follows: starting with a list of
   all the ready insns, and sorted according to priority number, we
   schedule the insn from the end of the list by placing its
   predecessors in the list according to their priority order.  We
   consider this insn scheduled by setting the pointer to the "end" of
   the list to point to the previous insn.  When an insn has no
   predecessors, we either queue it until sufficient time has elapsed
   or add it to the ready list.  As the instructions are scheduled or
   when stalls are introduced, the queue advances and dumps insns into
   the ready list.  When all insns down to the lowest priority have
   been scheduled, the critical path of the basic block has been made
   as short as possible.  The remaining insns are then scheduled in
   remaining slots.

   Function unit conflicts are resolved during reverse list scheduling
   by tracking the time when each insn is committed to the schedule
   and from that, the time the function units it uses must be free.
   As insns on the ready list are considered for scheduling, those
   that would result in a blockage of the already committed insns are
   queued until no blockage will result.  Among the remaining insns on
   the ready list to be considered, the first one with the largest
   potential for causing a subsequent blockage is chosen.

   The following list shows the order in which we want to break ties
   among insns in the ready list:

	1.  choose insn with lowest conflict cost, ties broken by
	2.  choose insn with the longest path to end of bb, ties broken by
	3.  choose insn that kills the most registers, ties broken by
	4.  choose insn that conflicts with the most ready insns, or finally
	5.  choose insn with lowest UID.

   Memory references complicate matters.  Only if we can be certain
   that memory references are not part of the data dependency graph
   (via true, anti, or output dependence), can we move operations past
   memory references.  To a first approximation, reads can be done
   independently, while writes introduce dependencies.  Better
   approximations will yield fewer dependencies.

   Dependencies set up by memory references are treated in exactly the
   same way as other dependencies, by using LOG_LINKS.

   Having optimized the critical path, we may have also unduly
   extended the lifetimes of some registers.  If an operation requires
   that constants be loaded into registers, it is certainly desirable
   to load those constants as early as necessary, but no earlier.
   I.e., it will not do to load up a bunch of registers at the
   beginning of a basic block only to use them at the end, if they
   could be loaded later, since this may result in excessive register
   utilization.

   Note that since branches are never in basic blocks, but only end
   basic blocks, this pass will not do any branch scheduling.  But
   that is ok, since we can use GNU's delayed branch scheduling
   pass to take care of this case.

   Also note that no further optimizations based on algebraic identities
   are performed, so this pass would be a good one to perform instruction
   splitting, such as breaking up a multiply instruction into shifts
   and adds where that is profitable.

   Given the memory aliasing analysis that this pass should perform,
   it should be possible to remove redundant stores to memory, and to
   load values from registers instead of hitting memory.

   This pass must update information that subsequent passes expect to be
   correct.  Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
   reg_n_calls_crossed, and reg_live_length.  Also, basic_block_head,
   basic_block_end.

   The information in the line number notes is carefully retained by this
   pass.  All other NOTE insns are grouped in their same relative order at
   the beginning of basic blocks that have been scheduled.  */
#include <stdio.h>
#include "config.h"
#include "rtl.h"
#include "basic-block.h"
#include "regs.h"
#include "hard-reg-set.h"
#include "flags.h"
#include "insn-config.h"
#include "insn-attr.h"
#ifdef INSN_SCHEDULING
/* Arrays set up by scheduling for the same respective purposes as
   similar-named arrays set up by flow analysis.  We work with these
   arrays during the scheduling pass so we can compare values against
   unscheduled code.

   Values of these arrays are copied at the end of this pass into the
   arrays set up by flow analysis.  */
static short *sched_reg_n_deaths;
static int *sched_reg_n_calls_crossed;
static int *sched_reg_live_length;

/* Element N is the next insn that sets (hard or pseudo) register
   N within the current basic block; or zero, if there is no
   such insn.  Needed for new registers which may be introduced
   by splitting insns.  */
static rtx *reg_last_uses;
static rtx *reg_last_sets;
static regset reg_pending_sets;
static int reg_pending_sets_all;
/* Vector indexed by INSN_UID giving the original ordering of the insns.  */
static int *insn_luid;
#define INSN_LUID(INSN) (insn_luid[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving each instruction a priority.  */
static int *insn_priority;
#define INSN_PRIORITY(INSN) (insn_priority[INSN_UID (INSN)])

static short *insn_costs;
#define INSN_COST(INSN) insn_costs[INSN_UID (INSN)]

/* Vector indexed by INSN_UID giving an encoding of the function units
   used.  */
static short *insn_units;
#define INSN_UNIT(INSN) insn_units[INSN_UID (INSN)]

/* Vector indexed by INSN_UID giving an encoding of the blockage range
   function.  The unit and the range are encoded: the unit occupies the
   top UNIT_BITS of the used field, above two BLOCKAGE_BITS-wide cost
   fields, so that UNIT_BLOCKED recovers exactly the unit number that
   ENCODE_BLOCKAGE stored.  */
static unsigned int *insn_blockage;
#define INSN_BLOCKAGE(INSN) insn_blockage[INSN_UID (INSN)]
#define UNIT_BITS 5
#define BLOCKAGE_MASK ((1 << BLOCKAGE_BITS) - 1)
#define ENCODE_BLOCKAGE(U,R)				\
  (((((U) << BLOCKAGE_BITS) | MIN_BLOCKAGE_COST (R))	\
    << BLOCKAGE_BITS)					\
   | MAX_BLOCKAGE_COST (R))
#define UNIT_BLOCKED(B) ((B) >> (2 * BLOCKAGE_BITS))
#define BLOCKAGE_RANGE(B) \
  (((((B) >> BLOCKAGE_BITS) & BLOCKAGE_MASK) << (HOST_BITS_PER_INT / 2)) \
   | ((B) & BLOCKAGE_MASK))

/* Encodings of the `<name>_unit_blockage_range' function.  */
#define MIN_BLOCKAGE_COST(R) ((R) >> (HOST_BITS_PER_INT / 2))
#define MAX_BLOCKAGE_COST(R) ((R) & ((1 << (HOST_BITS_PER_INT / 2)) - 1))
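/* Worked example (editorial; BLOCKAGE_BITS is generated by genattr, so
   assume BLOCKAGE_BITS == 5 and HOST_BITS_PER_INT == 32 purely for
   illustration).  Encoding unit U == 3 with a blockage range of min 2,
   max 5, i.e. R == (2 << 16) | 5:

	ENCODE_BLOCKAGE (3, R) == ((3 << 5 | 2) << 5) | 5 == 3141

   Decoding: UNIT_BLOCKED (3141) == 3141 >> 10 == 3 recovers the unit,
   and BLOCKAGE_RANGE (3141) == (2 << 16) | 5 recovers the packed
   [min,max] pair that MIN_BLOCKAGE_COST and MAX_BLOCKAGE_COST take
   apart again.  */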
#define DONE_PRIORITY	-1
#define MAX_PRIORITY	0x7fffffff
#define TAIL_PRIORITY	0x7ffffffe
#define LAUNCH_PRIORITY	0x7f000001
#define DONE_PRIORITY_P(INSN) (INSN_PRIORITY (INSN) < 0)
#define LOW_PRIORITY_P(INSN) ((INSN_PRIORITY (INSN) & 0x7f000000) == 0)

/* Vector indexed by INSN_UID giving number of insns referring to this insn.  */
static int *insn_ref_count;
#define INSN_REF_COUNT(INSN) (insn_ref_count[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving line-number note in effect for each
   insn.  For line-number notes, this indicates whether the note may be
   reused.  */
static rtx *line_note;
#define LINE_NOTE(INSN) (line_note[INSN_UID (INSN)])

/* Vector indexed by basic block number giving the starting line-number
   for each basic block.  */
static rtx *line_note_head;

/* List of important notes we must keep around.  This is a pointer to the
   last element in the list.  */
static rtx note_list;

/* Regsets telling whether a given register is live or dead before the last
   scheduled insn.  Must scan the instructions once before scheduling to
   determine what registers are live or dead at the end of the block.  */
static regset bb_dead_regs;
static regset bb_live_regs;

/* Regset telling whether a given register is live after the insn currently
   being scheduled.  Before processing an insn, this is equal to bb_live_regs
   above.  This is used so that we can find registers that are newly born/dead
   after processing an insn.  */
static regset old_live_regs;

/* The chain of REG_DEAD notes.  REG_DEAD notes are removed from all insns
   during the initial scan and reused later.  If there are not exactly as
   many REG_DEAD notes in the post-scheduled code as there were in the
   pre-scheduled code then we trigger an abort because this indicates a bug.  */
static rtx dead_notes;
/* Queues, etc.  */

/* An instruction is ready to be scheduled when all insns following it
   have already been scheduled.  It is important to ensure that all
   insns which use its result will not be executed until its result
   has been computed.  An insn is maintained in one of four structures:

   (P) the "Pending" set of insns which cannot be scheduled until
       their dependencies have been satisfied.
   (Q) the "Queued" set of insns that can be scheduled when sufficient
       time has passed.
   (R) the "Ready" list of unscheduled, uncommitted insns.
   (S) the "Scheduled" list of insns.

   Initially, all insns are either "Pending" or "Ready" depending on
   whether their dependencies are satisfied.

   Insns move from the "Ready" list to the "Scheduled" list as they
   are committed to the schedule.  As this occurs, the insns in the
   "Pending" list have their dependencies satisfied and move to either
   the "Ready" list or the "Queued" set depending on whether
   sufficient time has passed to make them ready.  As time passes,
   insns move from the "Queued" set to the "Ready" list.  Insns may
   move from the "Ready" list to the "Queued" set if they are blocked
   due to a function unit conflict.

   The "Pending" set (P) consists of the insns in the LOG_LINKS of the
   unscheduled insns, i.e., those that are ready, queued, and pending.
   The "Queued" set (Q) is implemented by the variable `insn_queue'.
   The "Ready" list (R) is implemented by the variables `ready' and
   `n_ready'.
   The "Scheduled" list (S) is the new insn chain built by this pass.

   The transition (R->S) is implemented in the scheduling loop in
   `schedule_block' when the best insn to schedule is chosen.
   The transition (R->Q) is implemented in `schedule_select' when an
   insn is found to have a function unit conflict with the already
   committed insns.
   The transitions (P->R and P->Q) are implemented in `schedule_insn' as
   insns move from the ready list to the scheduled list.
   The transition (Q->R) is implemented at the top of the scheduling
   loop in `schedule_block' as time passes or stalls are introduced.  */
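/* Editorial summary of the transitions described above:

	P -> R	dependencies satisfied, no wait needed     (schedule_insn)
	P -> Q	dependencies satisfied, must still wait    (schedule_insn)
	Q -> R	sufficient clock ticks have elapsed        (schedule_block)
	R -> Q	function unit conflict found               (schedule_select)
	R -> S	best ready insn committed to the schedule  (schedule_block)  */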
/* Implement a circular buffer to delay instructions until sufficient
   time has passed.  INSN_QUEUE_SIZE is a power of two larger than
   MAX_BLOCKAGE and MAX_READY_COST computed by genattr.c.  This is the
   longest time an insn may be queued.  */
static rtx insn_queue[INSN_QUEUE_SIZE];
static int q_ptr = 0;
static int q_size = 0;
#define NEXT_Q(X) (((X)+1) & (INSN_QUEUE_SIZE-1))
#define NEXT_Q_AFTER(X,C) (((X)+C) & (INSN_QUEUE_SIZE-1))
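/* Editorial example: if genattr computed INSN_QUEUE_SIZE == 16, then
   NEXT_Q (15) == 0 and NEXT_Q_AFTER (13, 5) == 2; masking with
   INSN_QUEUE_SIZE-1 wraps the index cheaply precisely because
   INSN_QUEUE_SIZE is a power of two.  */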
/* Vector indexed by INSN_UID giving the minimum clock tick at which
   the insn becomes ready.  This is used to note timing constraints for
   insns in the pending list.  */
static int *insn_tick;
#define INSN_TICK(INSN) (insn_tick[INSN_UID (INSN)])
/* Data structure for keeping track of register information
   during that register's life.  */

struct sometimes
{
  short offset; short bit;
  short live_length; short calls_crossed;
};
/* Forward declarations.  */
static rtx canon_rtx PROTO((rtx));
static int rtx_equal_for_memref_p PROTO((rtx, rtx));
static rtx find_symbolic_term PROTO((rtx));
static int memrefs_conflict_p PROTO((int, rtx, int, rtx,
				     HOST_WIDE_INT));
static void add_dependence PROTO((rtx, rtx, enum reg_note));
static void remove_dependence PROTO((rtx, rtx));
static rtx find_insn_list PROTO((rtx, rtx));
static int insn_unit PROTO((rtx));
static unsigned int blockage_range PROTO((int, rtx));
static void clear_units PROTO((void));
static void prepare_unit PROTO((int));
static int actual_hazard_this_instance PROTO((int, int, rtx, int, int));
static void schedule_unit PROTO((int, rtx, int));
static int actual_hazard PROTO((int, rtx, int, int));
static int potential_hazard PROTO((int, rtx, int));
static int insn_cost PROTO((rtx, rtx, rtx));
static int priority PROTO((rtx));
static void free_pending_lists PROTO((void));
static void add_insn_mem_dependence PROTO((rtx *, rtx *, rtx, rtx));
static void flush_pending_lists PROTO((rtx));
static void sched_analyze_1 PROTO((rtx, rtx));
static void sched_analyze_2 PROTO((rtx, rtx));
static void sched_analyze_insn PROTO((rtx, rtx, rtx));
static int sched_analyze PROTO((rtx, rtx));
static void sched_note_set PROTO((int, rtx, int));
static int rank_for_schedule PROTO((rtx *, rtx *));
static void swap_sort PROTO((rtx *, int));
static void queue_insn PROTO((rtx, int));
static int birthing_insn PROTO((rtx));
static void adjust_priority PROTO((rtx));
static int schedule_insn PROTO((rtx, rtx *, int, int));
static int schedule_select PROTO((rtx *, int, int, FILE *));
static void create_reg_dead_note PROTO((rtx, rtx));
static void attach_deaths PROTO((rtx, rtx, int));
static void attach_deaths_insn PROTO((rtx));
static rtx unlink_notes PROTO((rtx, rtx));
static int new_sometimes_live PROTO((struct sometimes *, int, int,
				     int));
static void finish_sometimes_live PROTO((struct sometimes *, int));
static void schedule_block PROTO((int, FILE *));
static rtx regno_use_in PROTO((int, rtx));
static void split_hard_reg_notes PROTO((rtx, rtx, rtx, rtx));
static void new_insn_dead_notes PROTO((rtx, rtx, rtx, rtx));
static void update_n_sets PROTO((rtx, int));
static void update_flow_info PROTO((rtx, rtx, rtx, rtx));

/* Main entry point of this file.  */
void schedule_insns PROTO((FILE *));

#endif /* INSN_SCHEDULING */
#define SIZE_FOR_MODE(X) (GET_MODE_SIZE (GET_MODE (X)))

/* Vector indexed by N giving the initial (unchanging) value known
   for pseudo-register N.  */
static rtx *reg_known_value;

/* Vector recording for each reg_known_value whether it is due to a
   REG_EQUIV note.  Future passes (viz., reload) may replace the
   pseudo with the equivalent expression and so we account for the
   dependences that would be introduced if that happens.  */
/* ??? This is a problem only on the Convex.  The REG_EQUIV notes created in
   assign_parms mention the arg pointer, and there are explicit insns in the
   RTL that modify the arg pointer.  Thus we must ensure that such insns don't
   get scheduled across each other because that would invalidate the REG_EQUIV
   notes.  One could argue that the REG_EQUIV notes are wrong, but solving
   the problem in the scheduler will likely give better code, so we do it
   here.  */
static char *reg_known_equiv_p;

/* Indicates number of valid entries in reg_known_value.  */
static int reg_known_value_size;
static rtx
canon_rtx (x)
     rtx x;
{
  if (GET_CODE (x) == REG && REGNO (x) >= FIRST_PSEUDO_REGISTER
      && REGNO (x) <= reg_known_value_size)
    return reg_known_value[REGNO (x)];
  else if (GET_CODE (x) == PLUS)
    {
      rtx x0 = canon_rtx (XEXP (x, 0));
      rtx x1 = canon_rtx (XEXP (x, 1));

      if (x0 != XEXP (x, 0) || x1 != XEXP (x, 1))
	{
	  /* We can tolerate LO_SUMs being offset here; these
	     rtl are used for nothing other than comparisons.  */
	  if (GET_CODE (x0) == CONST_INT)
	    return plus_constant_for_output (x1, INTVAL (x0));
	  else if (GET_CODE (x1) == CONST_INT)
	    return plus_constant_for_output (x0, INTVAL (x1));
	  return gen_rtx (PLUS, GET_MODE (x), x0, x1);
	}
    }
  return x;
}
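/* Editorial illustration (hypothetical pseudo numbers): if reg 70 is
   known via REG_EQUIV to be (symbol_ref "x"), canon_rtx rewrites
   (plus (reg 70) (const_int 4)) using plus_constant_for_output on
   (symbol_ref "x") and 4, so memory addresses spelled through
   different pseudos can be compared by rtx_equal_for_memref_p and
   memrefs_conflict_p below.  */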
/* Set up all info needed to perform alias analysis on memory references.  */

void
init_alias_analysis ()
{
  int maxreg = max_reg_num ();
  rtx insn;
  rtx note;
  rtx set;

  reg_known_value_size = maxreg;

  reg_known_value
    = (rtx *) oballoc ((maxreg - FIRST_PSEUDO_REGISTER) * sizeof (rtx))
      - FIRST_PSEUDO_REGISTER;
  bzero ((char *) (reg_known_value + FIRST_PSEUDO_REGISTER),
	 (maxreg - FIRST_PSEUDO_REGISTER) * sizeof (rtx));

  reg_known_equiv_p
    = (char *) oballoc ((maxreg - FIRST_PSEUDO_REGISTER) * sizeof (char))
      - FIRST_PSEUDO_REGISTER;
  bzero (reg_known_equiv_p + FIRST_PSEUDO_REGISTER,
	 (maxreg - FIRST_PSEUDO_REGISTER) * sizeof (char));

  /* Fill in the entries with known constant values.  */
  for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
    if ((set = single_set (insn)) != 0
	&& GET_CODE (SET_DEST (set)) == REG
	&& REGNO (SET_DEST (set)) >= FIRST_PSEUDO_REGISTER
	&& (((note = find_reg_note (insn, REG_EQUAL, 0)) != 0
	     && reg_n_sets[REGNO (SET_DEST (set))] == 1)
	    || (note = find_reg_note (insn, REG_EQUIV, NULL_RTX)) != 0)
	&& GET_CODE (XEXP (note, 0)) != EXPR_LIST)
      {
	int regno = REGNO (SET_DEST (set));
	reg_known_value[regno] = XEXP (note, 0);
	reg_known_equiv_p[regno] = REG_NOTE_KIND (note) == REG_EQUIV;
      }

  /* Fill in the remaining entries.  */
  while (--maxreg >= FIRST_PSEUDO_REGISTER)
    if (reg_known_value[maxreg] == 0)
      reg_known_value[maxreg] = regno_reg_rtx[maxreg];
}
/* Return 1 if X and Y are identical-looking rtx's.

   We use the data in reg_known_value above to see if two registers with
   different numbers are, in fact, equivalent.  */

static int
rtx_equal_for_memref_p (x, y)
     rtx x, y;
{
  register int i;
  register int j;
  register enum rtx_code code;
  register char *fmt;

  if (x == 0 && y == 0)
    return 1;
  if (x == 0 || y == 0)
    return 0;
  x = canon_rtx (x);
  y = canon_rtx (y);

  if (x == y)
    return 1;

  code = GET_CODE (x);
  /* Rtx's of different codes cannot be equal.  */
  if (code != GET_CODE (y))
    return 0;

  /* (MULT:SI x y) and (MULT:HI x y) are NOT equivalent.
     (REG:SI x) and (REG:HI x) are NOT equivalent.  */

  if (GET_MODE (x) != GET_MODE (y))
    return 0;

  /* REG, LABEL_REF, and SYMBOL_REF can be compared nonrecursively.  */

  if (code == REG)
    return REGNO (x) == REGNO (y);
  if (code == LABEL_REF)
    return XEXP (x, 0) == XEXP (y, 0);
  if (code == SYMBOL_REF)
    return XSTR (x, 0) == XSTR (y, 0);

  /* For commutative operations, the RTXs match if the operands match in
     any order.  Also handle the simple binary and unary cases without a
     loop.  */
  if (code == EQ || code == NE || GET_RTX_CLASS (code) == 'c')
    return ((rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 0))
	     && rtx_equal_for_memref_p (XEXP (x, 1), XEXP (y, 1)))
	    || (rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 1))
		&& rtx_equal_for_memref_p (XEXP (x, 1), XEXP (y, 0))));
  else if (GET_RTX_CLASS (code) == '<' || GET_RTX_CLASS (code) == '2')
    return (rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 0))
	    && rtx_equal_for_memref_p (XEXP (x, 1), XEXP (y, 1)));
  else if (GET_RTX_CLASS (code) == '1')
    return rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 0));

  /* Compare the elements.  If any pair of corresponding elements
     fail to match, return 0 for the whole thing.  */

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      switch (fmt[i])
	{
	case 'w':
	  if (XWINT (x, i) != XWINT (y, i))
	    return 0;
	  break;

	case 'n':
	case 'i':
	  if (XINT (x, i) != XINT (y, i))
	    return 0;
	  break;

	case 'V':
	case 'E':
	  /* Two vectors must have the same length.  */
	  if (XVECLEN (x, i) != XVECLEN (y, i))
	    return 0;

	  /* And the corresponding elements must match.  */
	  for (j = 0; j < XVECLEN (x, i); j++)
	    if (rtx_equal_for_memref_p (XVECEXP (x, i, j), XVECEXP (y, i, j)) == 0)
	      return 0;
	  break;

	case 'e':
	  if (rtx_equal_for_memref_p (XEXP (x, i), XEXP (y, i)) == 0)
	    return 0;
	  break;

	case 'S':
	case 's':
	  if (strcmp (XSTR (x, i), XSTR (y, i)))
	    return 0;
	  break;

	case 'u':
	  /* These are just backpointers, so they don't matter.  */
	  break;

	case '0':
	  break;

	  /* It is believed that rtx's at this level will never
	     contain anything but integers and other rtx's,
	     except for within LABEL_REFs and SYMBOL_REFs.  */
	default:
	  abort ();
	}
    }
  return 1;
}
/* Given an rtx X, find a SYMBOL_REF or LABEL_REF within
   X and return it, or return 0 if none found.  */

static rtx
find_symbolic_term (x)
     rtx x;
{
  register int i;
  register enum rtx_code code;
  register char *fmt;

  code = GET_CODE (x);
  if (code == SYMBOL_REF || code == LABEL_REF)
    return x;
  if (GET_RTX_CLASS (code) == 'o')
    return 0;

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      rtx t;

      if (fmt[i] == 'e')
	{
	  t = find_symbolic_term (XEXP (x, i));
	  if (t != 0)
	    return t;
	}
      else if (fmt[i] == 'E')
	break;
    }
  return 0;
}
/* Return nonzero if X and Y (memory addresses) could reference the
   same location in memory.  C is an offset accumulator.  When
   C is nonzero, we are testing aliases between X and Y + C.
   XSIZE is the size in bytes of the X reference,
   similarly YSIZE is the size in bytes for Y.

   If XSIZE or YSIZE is zero, we do not know the amount of memory being
   referenced (the reference was BLKmode), so make the most pessimistic
   assumptions.

   We recognize the following cases of non-conflicting memory:

	(1) addresses involving the frame pointer cannot conflict
	    with addresses involving static variables.
	(2) static variables with different addresses cannot conflict.

   Nice to notice that varying addresses cannot conflict with fp if no
   local variables had their addresses taken, but that's too hard now.  */

/* ??? In Fortran, references to an array parameter can never conflict with
   another array parameter.  */
static int
memrefs_conflict_p (xsize, x, ysize, y, c)
     rtx x, y;
     int xsize, ysize;
     HOST_WIDE_INT c;
{
  if (GET_CODE (x) == HIGH)
    x = XEXP (x, 0);
  else if (GET_CODE (x) == LO_SUM)
    x = XEXP (x, 1);
  else
    x = canon_rtx (x);
  if (GET_CODE (y) == HIGH)
    y = XEXP (y, 0);
  else if (GET_CODE (y) == LO_SUM)
    y = XEXP (y, 1);
  else
    y = canon_rtx (y);

  if (rtx_equal_for_memref_p (x, y))
    return (xsize == 0 || ysize == 0 ||
	    (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));

  if (y == frame_pointer_rtx || y == hard_frame_pointer_rtx
      || y == stack_pointer_rtx)
    {
      rtx t = y;
      int tsize = ysize;
      y = x; ysize = xsize;
      x = t; xsize = tsize;
    }

  if (x == frame_pointer_rtx || x == hard_frame_pointer_rtx
      || x == stack_pointer_rtx)
    {
      rtx y1;

      if (CONSTANT_P (y))
	return 0;

      if (GET_CODE (y) == PLUS
	  && canon_rtx (XEXP (y, 0)) == x
	  && (y1 = canon_rtx (XEXP (y, 1)))
	  && GET_CODE (y1) == CONST_INT)
	{
	  c += INTVAL (y1);
	  return (xsize == 0 || ysize == 0
		  || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));
	}

      if (GET_CODE (y) == PLUS
	  && (y1 = canon_rtx (XEXP (y, 0)))
	  && CONSTANT_P (y1))
	return 0;

      return 1;
    }

  if (GET_CODE (x) == PLUS)
    {
      /* The fact that X is canonicalized means that this
	 PLUS rtx is canonicalized.  */
      rtx x0 = XEXP (x, 0);
      rtx x1 = XEXP (x, 1);

      if (GET_CODE (y) == PLUS)
	{
	  /* The fact that Y is canonicalized means that this
	     PLUS rtx is canonicalized.  */
	  rtx y0 = XEXP (y, 0);
	  rtx y1 = XEXP (y, 1);

	  if (rtx_equal_for_memref_p (x1, y1))
	    return memrefs_conflict_p (xsize, x0, ysize, y0, c);
	  if (rtx_equal_for_memref_p (x0, y0))
	    return memrefs_conflict_p (xsize, x1, ysize, y1, c);
	  if (GET_CODE (x1) == CONST_INT)
	    if (GET_CODE (y1) == CONST_INT)
	      return memrefs_conflict_p (xsize, x0, ysize, y0,
					 c - INTVAL (x1) + INTVAL (y1));
	    else
	      return memrefs_conflict_p (xsize, x0, ysize, y, c - INTVAL (x1));
	  else if (GET_CODE (y1) == CONST_INT)
	    return memrefs_conflict_p (xsize, x, ysize, y0, c + INTVAL (y1));

	  /* Handle case where we cannot understand iteration operators,
	     but we notice that the base addresses are distinct objects.  */
	  x = find_symbolic_term (x);
	  if (x == 0)
	    return 1;
	  y = find_symbolic_term (y);
	  if (y == 0)
	    return 1;
	  return rtx_equal_for_memref_p (x, y);
	}
      else if (GET_CODE (x1) == CONST_INT)
	return memrefs_conflict_p (xsize, x0, ysize, y, c - INTVAL (x1));
    }
  else if (GET_CODE (y) == PLUS)
    {
      /* The fact that Y is canonicalized means that this
	 PLUS rtx is canonicalized.  */
      rtx y0 = XEXP (y, 0);
      rtx y1 = XEXP (y, 1);

      if (GET_CODE (y1) == CONST_INT)
	return memrefs_conflict_p (xsize, x, ysize, y0, c + INTVAL (y1));
      else
	return 1;
    }

  if (GET_CODE (x) == GET_CODE (y))
    switch (GET_CODE (x))
      {
      case MULT:
	{
	  /* Handle cases where we expect the second operands to be the
	     same, and check only whether the first operand would conflict
	     or not.  */
	  rtx x0, y0;
	  rtx x1 = canon_rtx (XEXP (x, 1));
	  rtx y1 = canon_rtx (XEXP (y, 1));
	  if (! rtx_equal_for_memref_p (x1, y1))
	    return 1;
	  x0 = canon_rtx (XEXP (x, 0));
	  y0 = canon_rtx (XEXP (y, 0));
	  if (rtx_equal_for_memref_p (x0, y0))
	    return (xsize == 0 || ysize == 0
		    || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));

	  /* Can't properly adjust our sizes.  */
	  if (GET_CODE (x1) != CONST_INT)
	    return 1;
	  xsize /= INTVAL (x1);
	  ysize /= INTVAL (x1);
	  c /= INTVAL (x1);
	  return memrefs_conflict_p (xsize, x0, ysize, y0, c);
	}
      }

  if (CONSTANT_P (x))
    {
      if (GET_CODE (x) == CONST_INT && GET_CODE (y) == CONST_INT)
	{
	  c += (INTVAL (y) - INTVAL (x));
	  return (xsize == 0 || ysize == 0
		  || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));
	}

      if (GET_CODE (x) == CONST)
	{
	  if (GET_CODE (y) == CONST)
	    return memrefs_conflict_p (xsize, canon_rtx (XEXP (x, 0)),
				       ysize, canon_rtx (XEXP (y, 0)), c);
	  else
	    return memrefs_conflict_p (xsize, canon_rtx (XEXP (x, 0)),
				       ysize, y, c);
	}

      if (GET_CODE (y) == CONST)
	return memrefs_conflict_p (xsize, x, ysize,
				   canon_rtx (XEXP (y, 0)), c);

      if (CONSTANT_P (y))
	return (rtx_equal_for_memref_p (x, y)
		&& (xsize == 0 || ysize == 0
		    || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0)));

      return 1;
    }
  return 1;
}
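/* Editorial worked example of the overlap test used throughout
   memrefs_conflict_p: with xsize == 4, ysize == 4, and Y == X + C,
   C == 2 conflicts because c >= 0 && xsize > c (bytes 2-3 of X are
   bytes 0-1 of Y); C == 4 does not, since the references are then
   disjoint; and C == -2 conflicts via the symmetric clause
   ysize + c > 0.  */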
/* Functions to compute memory dependencies.

   Since we process the insns in execution order, we can build tables
   to keep track of what registers are fixed (and not aliased), what registers
   are varying in known ways, and what registers are varying in unknown
   ways.

   If both memory references are volatile, then there must always be a
   dependence between the two references, since their order cannot be
   changed.  A volatile and non-volatile reference can be interchanged
   though.

   A MEM_IN_STRUCT reference at a non-QImode varying address can never
   conflict with a non-MEM_IN_STRUCT reference at a fixed address.  We must
   allow QImode aliasing because the ANSI C standard allows character
   pointers to alias anything.  We are assuming that characters are
   always QImode here.  */
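/* Editorial example of the QImode exemption above: a store through a
   `char *' (a QImode access) must be assumed to conflict even with a
   MEM_IN_STRUCT reference at a varying address, because ANSI C lets
   character pointers alias any object; only non-QImode accesses get
   the MEM_IN_STRUCT/fixed-address disambiguation applied in
   true_dependence, anti_dependence and output_dependence below.  */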
/* Read dependence: X is read after read in MEM takes place.  There can
   only be a dependence here if both reads are volatile.  */

int
read_dependence (mem, x)
     rtx mem;
     rtx x;
{
  return MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem);
}
/* True dependence: X is read after store in MEM takes place.  */

int
true_dependence (mem, x)
     rtx mem;
     rtx x;
{
  /* If X is an unchanging read, then it can't possibly conflict with any
     non-unchanging store.  It may conflict with an unchanging write though,
     because there may be a single store to this address to initialize it.
     Just fall through to the code below to resolve the case where we have
     both an unchanging read and an unchanging write.  This won't handle all
     cases optimally, but the possible performance loss should be
     negligible.  */
  if (RTX_UNCHANGING_P (x) && ! RTX_UNCHANGING_P (mem))
    return 0;

  return ((MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
	  || (memrefs_conflict_p (SIZE_FOR_MODE (mem), XEXP (mem, 0),
				  SIZE_FOR_MODE (x), XEXP (x, 0), 0)
	      && ! (MEM_IN_STRUCT_P (mem) && rtx_addr_varies_p (mem)
		    && GET_MODE (mem) != QImode
		    && ! MEM_IN_STRUCT_P (x) && ! rtx_addr_varies_p (x))
	      && ! (MEM_IN_STRUCT_P (x) && rtx_addr_varies_p (x)
		    && GET_MODE (x) != QImode
		    && ! MEM_IN_STRUCT_P (mem) && ! rtx_addr_varies_p (mem))));
}
/* Anti dependence: X is written after read in MEM takes place.  */

int
anti_dependence (mem, x)
     rtx mem;
     rtx x;
{
  /* If MEM is an unchanging read, then it can't possibly conflict with
     the store to X, because there is at most one store to MEM, and it must
     have occurred somewhere before MEM.  */
  if (RTX_UNCHANGING_P (mem))
    return 0;

  return ((MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
	  || (memrefs_conflict_p (SIZE_FOR_MODE (mem), XEXP (mem, 0),
				  SIZE_FOR_MODE (x), XEXP (x, 0), 0)
	      && ! (MEM_IN_STRUCT_P (mem) && rtx_addr_varies_p (mem)
		    && GET_MODE (mem) != QImode
		    && ! MEM_IN_STRUCT_P (x) && ! rtx_addr_varies_p (x))
	      && ! (MEM_IN_STRUCT_P (x) && rtx_addr_varies_p (x)
		    && GET_MODE (x) != QImode
		    && ! MEM_IN_STRUCT_P (mem) && ! rtx_addr_varies_p (mem))));
}
/* Output dependence: X is written after store in MEM takes place.  */

int
output_dependence (mem, x)
     rtx mem;
     rtx x;
{
  return ((MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
	  || (memrefs_conflict_p (SIZE_FOR_MODE (mem), XEXP (mem, 0),
				  SIZE_FOR_MODE (x), XEXP (x, 0), 0)
	      && ! (MEM_IN_STRUCT_P (mem) && rtx_addr_varies_p (mem)
		    && GET_MODE (mem) != QImode
		    && ! MEM_IN_STRUCT_P (x) && ! rtx_addr_varies_p (x))
	      && ! (MEM_IN_STRUCT_P (x) && rtx_addr_varies_p (x)
		    && GET_MODE (x) != QImode
		    && ! MEM_IN_STRUCT_P (mem) && ! rtx_addr_varies_p (mem))));
}
/* Helper functions for instruction scheduling.  */

/* Add ELEM wrapped in an INSN_LIST with reg note kind DEP_TYPE to the
   LOG_LINKS of INSN, if not already there.  DEP_TYPE indicates the type
   of dependence that this link represents.  */

static void
add_dependence (insn, elem, dep_type)
     rtx insn;
     rtx elem;
     enum reg_note dep_type;
{
  rtx link, next;

  /* Don't depend an insn on itself.  */
  if (insn == elem)
    return;

  /* If elem is part of a sequence that must be scheduled together, then
     make the dependence point to the last insn of the sequence.
     When HAVE_cc0, it is possible for NOTEs to exist between users and
     setters of the condition codes, so we must skip past notes here.
     Otherwise, NOTEs are impossible here.  */

  next = NEXT_INSN (elem);

#ifdef HAVE_cc0
  while (next && GET_CODE (next) == NOTE)
    next = NEXT_INSN (next);
#endif

  if (next && SCHED_GROUP_P (next))
    {
      /* Notes will never intervene here though, so don't bother checking
	 for them.  */
      /* We must reject CODE_LABELs, so that we don't get confused by one
	 that has LABEL_PRESERVE_P set, which is represented by the same
	 bit in the rtl as SCHED_GROUP_P.  A CODE_LABEL can never be
	 SCHED_GROUP_P.  */
      while (NEXT_INSN (next) && SCHED_GROUP_P (NEXT_INSN (next))
	     && GET_CODE (NEXT_INSN (next)) != CODE_LABEL)
	next = NEXT_INSN (next);

      /* Again, don't depend an insn on itself.  */
      if (insn == next)
	return;

      /* Make the dependence to NEXT, the last insn of the group, instead
	 of the original ELEM.  */
      elem = next;
    }

  /* Check that we don't already have this dependence.  */
  for (link = LOG_LINKS (insn); link; link = XEXP (link, 1))
    if (XEXP (link, 0) == elem)
      {
	/* If this is a more restrictive type of dependence than the existing
	   one, then change the existing dependence to this type.  */
	if ((int) dep_type < (int) REG_NOTE_KIND (link))
	  PUT_REG_NOTE_KIND (link, dep_type);
	return;
      }
  /* Might want to check one level of transitivity to save conses.  */

  link = rtx_alloc (INSN_LIST);
  /* Insn dependency, not data dependency.  */
  PUT_REG_NOTE_KIND (link, dep_type);
  XEXP (link, 0) = elem;
  XEXP (link, 1) = LOG_LINKS (insn);
  LOG_LINKS (insn) = link;
}
/* Remove ELEM wrapped in an INSN_LIST from the LOG_LINKS
   of INSN.  Abort if not found.  */

static void
remove_dependence (insn, elem)
     rtx insn;
     rtx elem;
{
  rtx prev, link;
  int found = 0;

  for (prev = 0, link = LOG_LINKS (insn); link;
       prev = link, link = XEXP (link, 1))
    {
      if (XEXP (link, 0) == elem)
	{
	  if (prev)
	    XEXP (prev, 1) = XEXP (link, 1);
	  else
	    LOG_LINKS (insn) = XEXP (link, 1);
	  found = 1;
	}
    }

  if (! found)
    abort ();
  return;
}
#ifndef INSN_SCHEDULING
void
schedule_insns (dump_file)
     FILE *dump_file;
{
}
#else
#ifndef __GNUC__
#define __inline
#endif
/* Computation of memory dependencies.  */

/* The *_insns and *_mems are paired lists.  Each pending memory operation
   will have a pointer to the MEM rtx on one list and a pointer to the
   containing insn on the other list in the same place in the list.  */

/* We can't use add_dependence like the old code did, because a single insn
   may have multiple memory accesses, and hence needs to be on the list
   once for each memory access.  Add_dependence won't let you add an insn
   to a list more than once.  */

/* An INSN_LIST containing all insns with pending read operations.  */
static rtx pending_read_insns;

/* An EXPR_LIST containing all MEM rtx's which are pending reads.  */
static rtx pending_read_mems;

/* An INSN_LIST containing all insns with pending write operations.  */
static rtx pending_write_insns;

/* An EXPR_LIST containing all MEM rtx's which are pending writes.  */
static rtx pending_write_mems;

/* Indicates the combined length of the two pending lists.  We must prevent
   these lists from ever growing too large since the number of dependencies
   produced is at least O(N*N), and execution time is at least O(4*N*N), as
   a function of the length of these pending lists.  */

static int pending_lists_length;

/* An INSN_LIST containing all INSN_LISTs allocated but currently unused.  */

static rtx unused_insn_list;

/* An EXPR_LIST containing all EXPR_LISTs allocated but currently unused.  */

static rtx unused_expr_list;

/* The last insn upon which all memory references must depend.
   This is an insn which flushed the pending lists, creating a dependency
   between it and all previously pending memory references.  This creates
   a barrier (or a checkpoint) which no memory reference is allowed to cross.

   This includes all non-constant CALL_INSNs.  When we do interprocedural
   alias analysis, this restriction can be relaxed.
   This may also be an INSN that writes memory if the pending lists grow
   too large.  */

static rtx last_pending_memory_flush;

/* The last function call we have seen.  All hard regs, and, of course,
   the last function call, must depend on this.  */

static rtx last_function_call;

/* The LOG_LINKS field of this is a list of insns which use a pseudo register
   that does not already cross a call.  We create dependencies between each
   of those insns and the next call insn, to ensure that they won't cross a
   call after scheduling is done.  */

static rtx sched_before_next_call;

/* Pointer to the last instruction scheduled.  Used by rank_for_schedule,
   so that insns independent of the last scheduled insn will be preferred
   over dependent instructions.  */

static rtx last_scheduled_insn;
/* Process an insn's memory dependencies.  There are four kinds of
   dependencies:

   (0) read dependence: read follows read
   (1) true dependence: read follows write
   (2) anti dependence: write follows read
   (3) output dependence: write follows write

   We are careful to build only dependencies which actually exist, and
   use transitivity to avoid building too many links.  */

/* Return the INSN_LIST containing INSN in LIST, or NULL
   if LIST does not contain INSN.  */

__inline static rtx
find_insn_list (insn, list)
     rtx insn;
     rtx list;
{
  while (list)
    {
      if (XEXP (list, 0) == insn)
	return list;
      list = XEXP (list, 1);
    }
  return 0;
}
/* Compute the function units used by INSN.  This caches the value
   returned by function_units_used.  A function unit is encoded as the
   unit number if the value is non-negative and the complement of a
   mask if the value is negative.  A function unit index is the
   non-negative encoding.  */

__inline static int
insn_unit (insn)
     rtx insn;
{
  register int unit = INSN_UNIT (insn);

  if (unit == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
	 We can't pass these directly to function_units_used because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
	unit = -1;
      else
	{
	  unit = function_units_used (insn);
	  /* Increment non-negative values so we can cache zero.  */
	  if (unit >= 0) unit++;
	}

      /* We only cache 16 bits of the result, so if the value is out of
	 range, don't cache it.  */
      if (FUNCTION_UNITS_SIZE < HOST_BITS_PER_SHORT
	  || unit >= 0
	  || (~unit & ((1 << (HOST_BITS_PER_SHORT - 1)) - 1)) == 0)
	INSN_UNIT (insn) = unit;
    }
  return (unit > 0 ? unit - 1 : unit);
}
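/* Editorial example of the encoding: an insn that uses only unit 2 has
   function_units_used return 2; it is cached as 2 + 1 == 3, so that a
   cached zero still means "not yet computed", and insn_unit hands back
   2.  An insn using, say, units 0 and 2 comes back as the complement
   of the mask 0b101, i.e. ~5 == -6, which is cached and returned
   unchanged (the negative encoding).  */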
/* Compute the blockage range for executing INSN on UNIT.  This caches
   the value returned by the blockage_range_function for the unit.
   These values are encoded in an int where the upper half gives the
   minimum value and the lower half gives the maximum value.  */

__inline static unsigned int
blockage_range (unit, insn)
     int unit;
     rtx insn;
{
  unsigned int blockage = INSN_BLOCKAGE (insn);
  unsigned int range;

  if (UNIT_BLOCKED (blockage) != unit + 1)
    {
      range = function_units[unit].blockage_range_function (insn);
      /* We only cache the blockage range for one unit and then only if
	 the values fit.  */
      if (HOST_BITS_PER_INT >= UNIT_BITS + 2 * BLOCKAGE_BITS)
	INSN_BLOCKAGE (insn) = ENCODE_BLOCKAGE (unit + 1, range);
    }
  else
    range = BLOCKAGE_RANGE (blockage);

  return range;
}
/* A vector indexed by function unit instance giving the last insn to use
   the unit.  The value of the function unit instance index for unit U
   instance I is (U + I * FUNCTION_UNITS_SIZE).  */
static rtx unit_last_insn[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit instance giving the minimum time when
   the unit will unblock based on the maximum blockage cost.  */
static int unit_tick[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit number giving the number of insns
   that remain to use the unit.  */
static int unit_n_insns[FUNCTION_UNITS_SIZE];

/* Reset the function unit state to the null state.  */

static void
clear_units ()
{
  bzero ((char *) unit_last_insn, sizeof (unit_last_insn));
  bzero ((char *) unit_tick, sizeof (unit_tick));
  bzero ((char *) unit_n_insns, sizeof (unit_n_insns));
}
/* Record an insn as one that will use the units encoded by UNIT.  */

__inline static void
prepare_unit (unit)
     int unit;
{
  int i;

  if (unit >= 0)
    unit_n_insns[unit]++;
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	prepare_unit (i);
}
/* Return the actual hazard cost of executing INSN on the unit UNIT,
   instance INSTANCE at time CLOCK if the previous actual hazard cost
   was COST.  */

__inline static int
actual_hazard_this_instance (unit, instance, insn, clock, cost)
     int unit, instance, clock, cost;
     rtx insn;
{
  int tick = unit_tick[instance];

  if (tick - clock > cost)
    {
      /* The scheduler is operating in reverse, so INSN is the executing
	 insn and the unit's last insn is the candidate insn.  We want a
	 more exact measure of the blockage if we execute INSN at CLOCK
	 given when we committed the execution of the unit's last insn.

	 The blockage value is given by either the unit's max blockage
	 constant, blockage range function, or blockage function.  Use
	 the most exact form for the given unit.  */

      if (function_units[unit].blockage_range_function)
	{
	  if (function_units[unit].blockage_function)
	    tick += (function_units[unit].blockage_function
		     (insn, unit_last_insn[instance])
		     - function_units[unit].max_blockage);
	  else
	    tick += ((int) MAX_BLOCKAGE_COST (blockage_range (unit, insn))
		     - function_units[unit].max_blockage);
	}
      if (tick - clock > cost)
	cost = tick - clock;
    }
  return cost;
}
/* Record INSN as having begun execution on the units encoded by UNIT at
   time CLOCK.  */

__inline static void
schedule_unit (unit, insn, clock)
     int unit, clock;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      int instance = unit;
#if MAX_MULTIPLICITY > 1
      /* Find the first free instance of the function unit and use that
	 one.  We assume that one is free.  */
      for (i = function_units[unit].multiplicity - 1; i > 0; i--)
	{
	  if (! actual_hazard_this_instance (unit, instance, insn, clock, 0))
	    break;
	  instance += FUNCTION_UNITS_SIZE;
	}
#endif
      unit_last_insn[instance] = insn;
      unit_tick[instance] = (clock + function_units[unit].max_blockage);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	schedule_unit (i, insn, clock);
}
/* Return the actual hazard cost of executing INSN on the units encoded by
   UNIT at time CLOCK if the previous actual hazard cost was COST.  */

__inline static int
actual_hazard (unit, insn, clock, cost)
     int unit, clock, cost;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      /* Find the instance of the function unit with the minimum hazard.  */
      int instance = unit;
      int best_cost = actual_hazard_this_instance (unit, instance, insn,
						   clock, cost);
      int this_cost;

#if MAX_MULTIPLICITY > 1
      if (best_cost > cost)
	{
	  for (i = function_units[unit].multiplicity - 1; i > 0; i--)
	    {
	      instance += FUNCTION_UNITS_SIZE;
	      this_cost = actual_hazard_this_instance (unit, instance, insn,
						       clock, cost);
	      if (this_cost < best_cost)
		{
		  best_cost = this_cost;
		  if (this_cost <= cost)
		    break;
		}
	    }
	}
#endif
      cost = MAX (cost, best_cost);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	cost = actual_hazard (i, insn, clock, cost);

  return cost;
}
/* Return the potential hazard cost of executing an instruction on the
   units encoded by UNIT if the previous potential hazard cost was COST.
   An insn with a large blockage time is chosen in preference to one
   with a smaller time; an insn that uses a unit that is more likely
   to be used is chosen in preference to one with a unit that is less
   used.  We are trying to minimize a subsequent actual hazard.  */

__inline static int
potential_hazard (unit, insn, cost)
     int unit, cost;
     rtx insn;
{
  int i, ncost;
  unsigned int minb, maxb;

  if (unit >= 0)
    {
      minb = maxb = function_units[unit].max_blockage;
      if (maxb > 1)
	{
	  if (function_units[unit].blockage_range_function)
	    {
	      maxb = minb = blockage_range (unit, insn);
	      maxb = MAX_BLOCKAGE_COST (maxb);
	      minb = MIN_BLOCKAGE_COST (minb);
	    }

	  if (maxb > 1)
	    {
	      /* Make the number of instructions left dominate.  Make the
		 minimum delay dominate the maximum delay.  If all these
		 are the same, use the unit number to add an arbitrary
		 ordering.  Other terms can be added.  */
	      ncost = minb * 0x40 + maxb;
	      ncost *= (unit_n_insns[unit] - 1) * 0x1000 + unit;
	      if (ncost > cost)
		cost = ncost;
	    }
	}
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	cost = potential_hazard (i, insn, cost);

  return cost;
}
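/* Editorial worked example of the weighting above: an insn on unit 1
   with unit_n_insns == 3 and a blockage range of [2,4] scores

	ncost == (2 * 0x40 + 4) * ((3 - 1) * 0x1000 + 1)
	      == 0x84 * 0x2001 == 1081476.

   The multipliers make the insns-remaining count dominate, then the
   minimum delay, then the maximum delay, with the unit number as a
   final arbitrary tie-break.  */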
/* Compute cost of executing INSN given the dependence LINK on the insn USED.
   This is the number of virtual cycles taken between instruction issue and
   instruction results.  */

__inline static int
insn_cost (insn, link, used)
     rtx insn, link, used;
{
  register int cost = INSN_COST (insn);

  if (cost == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
	 We can't pass these directly to result_ready_cost because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
	{
	  INSN_COST (insn) = 1;
	  return 1;
	}
      else
	{
	  cost = result_ready_cost (insn);

	  if (cost < 1)
	    cost = 1;

	  INSN_COST (insn) = cost;
	}
    }

  /* A USE insn should never require the value used to be computed.  This
     allows the computation of a function's result and parameter values to
     overlap the return and call.  */
  recog_memoized (used);
  if (INSN_CODE (used) < 0)
    LINK_COST_FREE (link) = 1;

  /* If some dependencies vary the cost, compute the adjustment.  Most
     commonly, the adjustment is complete: either the cost is ignored
     (in the case of an output- or anti-dependence), or the cost is
     unchanged.  These values are cached in the link as LINK_COST_FREE
     and LINK_COST_ZERO.  */

  if (LINK_COST_FREE (link))
    cost = 1;
#ifdef ADJUST_COST
  else if (! LINK_COST_ZERO (link))
    {
      int ncost = cost;

      ADJUST_COST (used, link, insn, ncost);
      if (ncost <= 1)
	LINK_COST_FREE (link) = ncost = 1;
      if (cost == ncost)
	LINK_COST_ZERO (link) = 1;
      cost = ncost;
    }
#endif
  return cost;
}
/* Compute the priority number for INSN.  */

static int
priority (insn)
     rtx insn;
{
  if (insn && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
    {
      int prev_priority;
      int max_priority;
      int this_priority = INSN_PRIORITY (insn);
      rtx prev;

      if (this_priority > 0)
	return this_priority;

      max_priority = 1;

      /* Nonzero if these insns must be scheduled together.  */
      if (SCHED_GROUP_P (insn))
	{
	  prev = insn;
	  while (SCHED_GROUP_P (prev))
	    {
	      prev = PREV_INSN (prev);
	      INSN_REF_COUNT (prev) += 1;
	    }
	}

      for (prev = LOG_LINKS (insn); prev; prev = XEXP (prev, 1))
	{
	  rtx x = XEXP (prev, 0);

	  /* A dependence pointing to a note or deleted insn is always
	     obsolete, because sched_analyze_insn will have created any
	     necessary new dependences which replace it.  Notes and deleted
	     insns can be created when instructions are deleted by insn
	     splitting, or by register allocation.  */
	  if (GET_CODE (x) == NOTE || INSN_DELETED_P (x))
	    {
	      remove_dependence (insn, x);
	      continue;
	    }

	  /* Clear the link cost adjustment bits.  */
	  LINK_COST_FREE (prev) = 0;
#ifdef ADJUST_COST
	  LINK_COST_ZERO (prev) = 0;
#endif

	  /* This priority calculation was chosen because it results in the
	     least instruction movement, and does not hurt the performance
	     of the resulting code compared to the old algorithm.
	     This makes the sched algorithm more stable, which results
	     in better code, because there is less register pressure,
	     cross jumping is more likely to work, and debugging is easier.

	     When all instructions have a latency of 1, there is no need to
	     move any instructions.  Subtracting one here ensures that in such
	     cases all instructions will end up with a priority of one, and
	     hence no scheduling will be done.

	     The original code did not subtract the one, and added the
	     insn_cost of the current instruction to its priority (e.g.
	     move the insn_cost call down to the end).  */

	  prev_priority = priority (x) + insn_cost (x, prev, insn) - 1;

	  if (prev_priority > max_priority)
	    max_priority = prev_priority;
	  INSN_REF_COUNT (x) += 1;
	}

      prepare_unit (insn_unit (insn));
      INSN_PRIORITY (insn) = max_priority;
      return INSN_PRIORITY (insn);
    }
  return 0;
}
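/* Editorial example: for a dependence chain I1 -> I2 -> I3 (I1 earliest),
   where I2 uses I1's result after a latency of 3 cycles and I3 uses
   I2's after 2, the recursion gives priority (I1) == 1, priority (I2)
   == 1 + (3 - 1) == 3, and priority (I3) == 3 + (2 - 1) == 4, so the
   insn ending the longest latency chain sorts first for the reverse
   scheduling loop.  When every latency is 1, all priorities collapse
   to 1 and, as the comment above notes, no reordering is attempted.  */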
/* Remove all INSN_LISTs and EXPR_LISTs from the pending lists and add
   them to the unused_*_list variables, so that they can be reused.  */

static void
free_pending_lists ()
{
  register rtx link, prev_link;

  if (pending_read_insns)
    {
      prev_link = pending_read_insns;
      link = XEXP (prev_link, 1);

      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}

      XEXP (prev_link, 1) = unused_insn_list;
      unused_insn_list = pending_read_insns;
      pending_read_insns = 0;
    }

  if (pending_write_insns)
    {
      prev_link = pending_write_insns;
      link = XEXP (prev_link, 1);

      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}

      XEXP (prev_link, 1) = unused_insn_list;
      unused_insn_list = pending_write_insns;
      pending_write_insns = 0;
    }

  if (pending_read_mems)
    {
      prev_link = pending_read_mems;
      link = XEXP (prev_link, 1);

      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}

      XEXP (prev_link, 1) = unused_expr_list;
      unused_expr_list = pending_read_mems;
      pending_read_mems = 0;
    }

  if (pending_write_mems)
    {
      prev_link = pending_write_mems;
      link = XEXP (prev_link, 1);

      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}

      XEXP (prev_link, 1) = unused_expr_list;
      unused_expr_list = pending_write_mems;
      pending_write_mems = 0;
    }
}
/* Add an INSN and MEM reference pair to a pending INSN_LIST and MEM_LIST.
   The MEM is a memory reference contained within INSN, which we are saving
   so that we can do memory aliasing on it.  */

static void
add_insn_mem_dependence (insn_list, mem_list, insn, mem)
     rtx *insn_list, *mem_list, insn, mem;
{
  register rtx link;

  if (unused_insn_list)
    {
      link = unused_insn_list;
      unused_insn_list = XEXP (link, 1);
    }
  else
    link = rtx_alloc (INSN_LIST);
  XEXP (link, 0) = insn;
  XEXP (link, 1) = *insn_list;
  *insn_list = link;

  if (unused_expr_list)
    {
      link = unused_expr_list;
      unused_expr_list = XEXP (link, 1);
    }
  else
    link = rtx_alloc (EXPR_LIST);
  XEXP (link, 0) = mem;
  XEXP (link, 1) = *mem_list;
  *mem_list = link;

  pending_lists_length++;
}
/* Make a dependency between every memory reference on the pending lists
   and INSN, thus flushing the pending lists.  */

static void
flush_pending_lists (insn)
     rtx insn;
{
  rtx link;

  while (pending_read_insns)
    {
      add_dependence (insn, XEXP (pending_read_insns, 0), REG_DEP_ANTI);

      link = pending_read_insns;
      pending_read_insns = XEXP (pending_read_insns, 1);
      XEXP (link, 1) = unused_insn_list;
      unused_insn_list = link;

      link = pending_read_mems;
      pending_read_mems = XEXP (pending_read_mems, 1);
      XEXP (link, 1) = unused_expr_list;
      unused_expr_list = link;
    }
  while (pending_write_insns)
    {
      add_dependence (insn, XEXP (pending_write_insns, 0), REG_DEP_ANTI);

      link = pending_write_insns;
      pending_write_insns = XEXP (pending_write_insns, 1);
      XEXP (link, 1) = unused_insn_list;
      unused_insn_list = link;

      link = pending_write_mems;
      pending_write_mems = XEXP (pending_write_mems, 1);
      XEXP (link, 1) = unused_expr_list;
      unused_expr_list = link;
    }
  pending_lists_length = 0;

  if (last_pending_memory_flush)
    add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);

  last_pending_memory_flush = insn;
}
1654 /* Analyze a single SET or CLOBBER rtx, X, creating all dependencies generated
1655 by the write to the destination of X, and reads of everything mentioned. */
1657 static void
1658 sched_analyze_1 (x, insn)
1659 rtx x;
1660 rtx insn;
1662 register int regno;
1663 register rtx dest = SET_DEST (x);
1665 if (dest == 0)
1666 return;
1668 while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
1669 || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
1671 if (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
1673 /* The second and third arguments are values read by this insn. */
1674 sched_analyze_2 (XEXP (dest, 1), insn);
1675 sched_analyze_2 (XEXP (dest, 2), insn);
1677 dest = SUBREG_REG (dest);
1680 if (GET_CODE (dest) == REG)
1682 register int i;
1684 regno = REGNO (dest);
1686 /* A hard reg in a wide mode may really be multiple registers.
1687 If so, mark all of them just like the first. */
1688 if (regno < FIRST_PSEUDO_REGISTER)
1690 i = HARD_REGNO_NREGS (regno, GET_MODE (dest));
1691 while (--i >= 0)
1693 rtx u;
1695 for (u = reg_last_uses[regno+i]; u; u = XEXP (u, 1))
1696 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
1697 reg_last_uses[regno + i] = 0;
1698 if (reg_last_sets[regno + i])
1699 add_dependence (insn, reg_last_sets[regno + i],
1700 REG_DEP_OUTPUT);
1701 reg_pending_sets[(regno + i) / REGSET_ELT_BITS]
1702 |= (REGSET_ELT_TYPE) 1 << ((regno + i) % REGSET_ELT_BITS);
1703 if ((call_used_regs[i] || global_regs[i])
1704 && last_function_call)
1705 /* Function calls clobber all call_used regs. */
1706 add_dependence (insn, last_function_call, REG_DEP_ANTI);
1709 else
1711 rtx u;
1713 for (u = reg_last_uses[regno]; u; u = XEXP (u, 1))
1714 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
1715 reg_last_uses[regno] = 0;
1716 if (reg_last_sets[regno])
1717 add_dependence (insn, reg_last_sets[regno], REG_DEP_OUTPUT);
1718 reg_pending_sets[regno / REGSET_ELT_BITS]
1719 |= (REGSET_ELT_TYPE) 1 << (regno % REGSET_ELT_BITS);
1721 /* Pseudos that are REG_EQUIV to something may be replaced
1722 by that during reloading. We need only add dependencies for
1723 the address in the REG_EQUIV note. */
1724 if (! reload_completed
1725 && reg_known_equiv_p[regno]
1726 && GET_CODE (reg_known_value[regno]) == MEM)
1727 sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);
1729 /* Don't let it cross a call after scheduling if it doesn't
1730 already cross one. */
1731 if (reg_n_calls_crossed[regno] == 0 && last_function_call)
1732 add_dependence (insn, last_function_call, REG_DEP_ANTI);
1735 else if (GET_CODE (dest) == MEM)
1737 /* Writing memory. */
1739 if (pending_lists_length > 32)
1741 /* Flush all pending reads and writes to prevent the pending lists
1742 from getting any larger. Insn scheduling runs too slowly when
1743 these lists get long. The number 32 was chosen because it
1744 seems like a reasonable number. When compiling GCC with itself,
1745 this flush occurs 8 times for sparc, and 10 times for m88k using
1746 the number 32. */
1747 flush_pending_lists (insn);
1749 else
1751 rtx pending, pending_mem;
1753 pending = pending_read_insns;
1754 pending_mem = pending_read_mems;
1755 while (pending)
1757 /* If a dependency already exists, don't create a new one. */
1758 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1759 if (anti_dependence (XEXP (pending_mem, 0), dest))
1760 add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);
1762 pending = XEXP (pending, 1);
1763 pending_mem = XEXP (pending_mem, 1);
1766 pending = pending_write_insns;
1767 pending_mem = pending_write_mems;
1768 while (pending)
1770 /* If a dependency already exists, don't create a new one. */
1771 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1772 if (output_dependence (XEXP (pending_mem, 0), dest))
1773 add_dependence (insn, XEXP (pending, 0), REG_DEP_OUTPUT);
1775 pending = XEXP (pending, 1);
1776 pending_mem = XEXP (pending_mem, 1);
1779 if (last_pending_memory_flush)
1780 add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);
1782 add_insn_mem_dependence (&pending_write_insns, &pending_write_mems,
1783 insn, dest);
1785 sched_analyze_2 (XEXP (dest, 0), insn);
1788 /* Analyze reads. */
1789 if (GET_CODE (x) == SET)
1790 sched_analyze_2 (SET_SRC (x), insn);
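/* Illustration (a standalone sketch, not used by this pass): the
   reg_pending_sets updates above use the usual regset encoding -- one
   bit per register number, REGSET_ELT_BITS bits per array element, so
   register R lives at element R / REGSET_ELT_BITS, bit
   R % REGSET_ELT_BITS.  (new_sometimes_live, later in this file, uses
   the inverse mapping offset * REGSET_ELT_BITS + bit.)  A
   self-contained model, assuming a 32-bit element purely for
   concreteness:  */
#if 0
#include <assert.h>

#define ELT_BITS 32                /* stand-in for REGSET_ELT_BITS */
typedef unsigned long elt_type;    /* stand-in for REGSET_ELT_TYPE */

static elt_type regset[4];         /* room for 4 * ELT_BITS registers */

static void
set_regno (int regno)
{
  regset[regno / ELT_BITS] |= (elt_type) 1 << (regno % ELT_BITS);
}

static int
test_regno (int regno)
{
  return (regset[regno / ELT_BITS]
          & ((elt_type) 1 << (regno % ELT_BITS))) != 0;
}

int
main (void)
{
  set_regno (37);                  /* element 1, bit 5 */
  assert (test_regno (37) && ! test_regno (36));
  return 0;
}
#endif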
1793 /* Analyze the uses of memory and registers in rtx X in INSN. */
1795 static void
1796 sched_analyze_2 (x, insn)
1797 rtx x;
1798 rtx insn;
1800 register int i;
1801 register int j;
1802 register enum rtx_code code;
1803 register char *fmt;
1805 if (x == 0)
1806 return;
1808 code = GET_CODE (x);
1810 switch (code)
1812 case CONST_INT:
1813 case CONST_DOUBLE:
1814 case SYMBOL_REF:
1815 case CONST:
1816 case LABEL_REF:
1817 /* Ignore constants. Note that we must handle CONST_DOUBLE here
1818 because it may have a cc0_rtx in its CONST_DOUBLE_CHAIN field, but
1819 this does not mean that this insn is using cc0. */
1820 return;
1822 #ifdef HAVE_cc0
1823 case CC0:
1825 rtx link, prev;
1827 /* There may be a note before this insn now, but all notes will
1828 be removed before we actually try to schedule the insns, so
1829 it won't cause a problem later. We must avoid it here though. */
1831 /* User of CC0 depends on immediately preceding insn. */
1832 SCHED_GROUP_P (insn) = 1;
1834 /* Make a copy of all dependencies on the immediately previous insn,
1835 and add to this insn. This is so that all the dependencies will
1836 apply to the group. Remove an explicit dependence on this insn
1837 as SCHED_GROUP_P now represents it. */
1839 prev = PREV_INSN (insn);
1840 while (GET_CODE (prev) == NOTE)
1841 prev = PREV_INSN (prev);
1843 if (find_insn_list (prev, LOG_LINKS (insn)))
1844 remove_dependence (insn, prev);
1846 for (link = LOG_LINKS (prev); link; link = XEXP (link, 1))
1847 add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));
1849 return;
1851 #endif
1853 case REG:
1855 int regno = REGNO (x);
1856 if (regno < FIRST_PSEUDO_REGISTER)
1858 int i;
1860 i = HARD_REGNO_NREGS (regno, GET_MODE (x));
1861 while (--i >= 0)
1863 reg_last_uses[regno + i]
1864 = gen_rtx (INSN_LIST, VOIDmode,
1865 insn, reg_last_uses[regno + i]);
1866 if (reg_last_sets[regno + i])
1867 add_dependence (insn, reg_last_sets[regno + i], 0);
1868 if ((call_used_regs[regno + i] || global_regs[regno + i])
1869 && last_function_call)
1870 /* Function calls clobber all call_used regs. */
1871 add_dependence (insn, last_function_call, REG_DEP_ANTI);
1874 else
1876 reg_last_uses[regno]
1877 = gen_rtx (INSN_LIST, VOIDmode, insn, reg_last_uses[regno]);
1878 if (reg_last_sets[regno])
1879 add_dependence (insn, reg_last_sets[regno], 0);
1881 /* Pseudos that are REG_EQUIV to something may be replaced
1882 by that during reloading. We need only add dependencies for
1883 the address in the REG_EQUIV note. */
1884 if (! reload_completed
1885 && reg_known_equiv_p[regno]
1886 && GET_CODE (reg_known_value[regno]) == MEM)
1887 sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);
1889 /* If the register does not already cross any calls, then add this
1890 insn to the sched_before_next_call list so that it will still
1891 not cross calls after scheduling. */
1892 if (reg_n_calls_crossed[regno] == 0)
1893 add_dependence (sched_before_next_call, insn, REG_DEP_ANTI);
1895 return;
1898 case MEM:
1900 /* Reading memory. */
1902 rtx pending, pending_mem;
1904 pending = pending_read_insns;
1905 pending_mem = pending_read_mems;
1906 while (pending)
1908 /* If a dependency already exists, don't create a new one. */
1909 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1910 if (read_dependence (XEXP (pending_mem, 0), x))
1911 add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);
1913 pending = XEXP (pending, 1);
1914 pending_mem = XEXP (pending_mem, 1);
1917 pending = pending_write_insns;
1918 pending_mem = pending_write_mems;
1919 while (pending)
1921 /* If a dependency already exists, don't create a new one. */
1922 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1923 if (true_dependence (XEXP (pending_mem, 0), x))
1924 add_dependence (insn, XEXP (pending, 0), 0);
1926 pending = XEXP (pending, 1);
1927 pending_mem = XEXP (pending_mem, 1);
1929 if (last_pending_memory_flush)
1930 add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);
1932 /* Always add these dependencies to pending_reads, since
1933 this insn may be followed by a write. */
1934 add_insn_mem_dependence (&pending_read_insns, &pending_read_mems,
1935 insn, x);
1937 /* Take advantage of tail recursion here. */
1938 sched_analyze_2 (XEXP (x, 0), insn);
1939 return;
1942 case ASM_OPERANDS:
1943 case ASM_INPUT:
1944 case UNSPEC_VOLATILE:
1945 case TRAP_IF:
1947 rtx u;
1949 /* Traditional and volatile asm instructions must be considered to use
1950 and clobber all hard registers, all pseudo-registers and all of
1951 memory. So must TRAP_IF and UNSPEC_VOLATILE operations.
1953 Consider for instance a volatile asm that changes the fpu rounding
1954 mode. An insn should not be moved across this even if it only uses
1955 pseudo-regs because it might give an incorrectly rounded result. */
1956 if (code != ASM_OPERANDS || MEM_VOLATILE_P (x))
1958 int max_reg = max_reg_num ();
1959 for (i = 0; i < max_reg; i++)
1961 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
1962 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
1963 reg_last_uses[i] = 0;
1964 if (reg_last_sets[i])
1965 add_dependence (insn, reg_last_sets[i], 0);
1967 reg_pending_sets_all = 1;
1969 flush_pending_lists (insn);
1972 /* For all ASM_OPERANDS, we must traverse the vector of input operands.
1973 We cannot just fall through here since then we would be confused
1974 by the ASM_INPUT rtx inside ASM_OPERANDS, which does not indicate
1975 a traditional asm, unlike its normal usage. */
1977 if (code == ASM_OPERANDS)
1979 for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
1980 sched_analyze_2 (ASM_OPERANDS_INPUT (x, j), insn);
1981 return;
1983 break;
1986 case PRE_DEC:
1987 case POST_DEC:
1988 case PRE_INC:
1989 case POST_INC:
1990 /* These both read and modify the result. We must handle them as writes
1991 to get proper dependencies for following instructions. We must handle
1992 them as reads to get proper dependencies from this to previous
1993 instructions. Thus we need to pass them to both sched_analyze_1
1994 and sched_analyze_2. We must call sched_analyze_2 first in order
1995 to get the proper antecedent for the read. */
1996 sched_analyze_2 (XEXP (x, 0), insn);
1997 sched_analyze_1 (x, insn);
1998 return;
2001 /* Other cases: walk the insn. */
2002 fmt = GET_RTX_FORMAT (code);
2003 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
2005 if (fmt[i] == 'e')
2006 sched_analyze_2 (XEXP (x, i), insn);
2007 else if (fmt[i] == 'E')
2008 for (j = 0; j < XVECLEN (x, i); j++)
2009 sched_analyze_2 (XVECEXP (x, i, j), insn);
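/* Illustration (a toy sketch, not used by this pass): the default arm
   above is the standard recursive rtx walk.  GET_RTX_FORMAT yields a
   format string in which 'e' marks a subexpression and 'E' a vector
   of subexpressions.  The same shape over an invented tree type, with
   one operand slot per format position and at most one vector per
   node for simplicity:  */
#if 0
struct expr
{
  const char *fmt;       /* like GET_RTX_FORMAT, e.g. "ee" or "E" */
  struct expr **ops;     /* one slot per format position; used for 'e' */
  int nvec;              /* length of the single 'E' vector */
  struct expr **vec;     /* the 'E' vector, if any */
};

static void
walk (struct expr *x)
{
  int i, j;

  if (x == 0)
    return;

  /* ... handle the codes of interest here, then recurse ...  */
  for (i = 0; x->fmt[i]; i++)
    if (x->fmt[i] == 'e')
      walk (x->ops[i]);
    else if (x->fmt[i] == 'E')
      for (j = 0; j < x->nvec; j++)
        walk (x->vec[j]);
}
#endif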
2013 /* Analyze an INSN with pattern X to find all dependencies. */
2015 static void
2016 sched_analyze_insn (x, insn, loop_notes)
2017 rtx x, insn;
2018 rtx loop_notes;
2020 register RTX_CODE code = GET_CODE (x);
2021 rtx link;
2022 int maxreg = max_reg_num ();
2023 int i;
2025 if (code == SET || code == CLOBBER)
2026 sched_analyze_1 (x, insn);
2027 else if (code == PARALLEL)
2029 register int i;
2030 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
2032 code = GET_CODE (XVECEXP (x, 0, i));
2033 if (code == SET || code == CLOBBER)
2034 sched_analyze_1 (XVECEXP (x, 0, i), insn);
2035 else
2036 sched_analyze_2 (XVECEXP (x, 0, i), insn);
2039 else
2040 sched_analyze_2 (x, insn);
2042 /* Mark registers CLOBBERED or used by called function. */
2043 if (GET_CODE (insn) == CALL_INSN)
2044 for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
2046 if (GET_CODE (XEXP (link, 0)) == CLOBBER)
2047 sched_analyze_1 (XEXP (link, 0), insn);
2048 else
2049 sched_analyze_2 (XEXP (link, 0), insn);
2052 /* If there is a LOOP_{BEG,END} note in the middle of a basic block, then
2053 we must be sure that no instructions are scheduled across it.
2054 Otherwise, the reg_n_refs info (which depends on loop_depth) would
2055 become incorrect. */
2057 if (loop_notes)
2059 int max_reg = max_reg_num ();
2060 rtx link;
2062 for (i = 0; i < max_reg; i++)
2064 rtx u;
2065 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
2066 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
2067 reg_last_uses[i] = 0;
2068 if (reg_last_sets[i])
2069 add_dependence (insn, reg_last_sets[i], 0);
2071 reg_pending_sets_all = 1;
2073 flush_pending_lists (insn);
2075 link = loop_notes;
2076 while (XEXP (link, 1))
2077 link = XEXP (link, 1);
2078 XEXP (link, 1) = REG_NOTES (insn);
2079 REG_NOTES (insn) = loop_notes;
2082 /* After reload, it is possible for an instruction to have a REG_DEAD note
2083 for a register that actually dies a few instructions earlier. For
2084 example, this can happen with SECONDARY_MEMORY_NEEDED reloads.
2085 In this case, we must consider the insn to use the register mentioned
2086 in the REG_DEAD note. Otherwise, we may accidentally move this insn
2087 after another insn that sets the register, thus getting obviously invalid
2088 rtl. This confuses reorg which believes that REG_DEAD notes are still
2089 meaningful.
2091 ??? We would get better code if we fixed reload to put the REG_DEAD
2092 notes in the right places, but that may not be worth the effort. */
2094 if (reload_completed)
2096 rtx note;
2098 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
2099 if (REG_NOTE_KIND (note) == REG_DEAD)
2100 sched_analyze_2 (XEXP (note, 0), insn);
2103 for (i = 0; i < regset_size; i++)
2105 REGSET_ELT_TYPE sets = reg_pending_sets[i];
2106 if (sets)
2108 register int bit;
2109 for (bit = 0; bit < REGSET_ELT_BITS; bit++)
2110 if (sets & ((REGSET_ELT_TYPE) 1 << bit))
2111 reg_last_sets[i * REGSET_ELT_BITS + bit] = insn;
2112 reg_pending_sets[i] = 0;
2115 if (reg_pending_sets_all)
2117 for (i = 0; i < maxreg; i++)
2118 reg_last_sets[i] = insn;
2119 reg_pending_sets_all = 0;
2122 /* Handle function calls and function returns created by the epilogue
2123 threading code. */
2124 if (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN)
2126 rtx dep_insn;
2127 rtx prev_dep_insn;
2129 /* When scheduling instructions, we make sure calls don't lose their
2130 accompanying USE insns by making them depend on one another in order.
2132 Also, we must do the same thing for returns created by the epilogue
2133 threading code. Note this code works only in this special case,
2134 because other passes make no guarantee that they will never emit
2135 an instruction between a USE and a RETURN. There is such a guarantee
2136 for USE instructions immediately before a call. */
2138 prev_dep_insn = insn;
2139 dep_insn = PREV_INSN (insn);
2140 while (GET_CODE (dep_insn) == INSN
2141 && GET_CODE (PATTERN (dep_insn)) == USE
2142 && GET_CODE (XEXP (PATTERN (dep_insn), 0)) == REG)
2144 SCHED_GROUP_P (prev_dep_insn) = 1;
2146 /* Make a copy of all dependencies on dep_insn, and add to insn.
2147 This is so that all of the dependencies will apply to the
2148 group. */
2150 for (link = LOG_LINKS (dep_insn); link; link = XEXP (link, 1))
2151 add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));
2153 prev_dep_insn = dep_insn;
2154 dep_insn = PREV_INSN (dep_insn);
2159 /* Analyze every insn between HEAD and TAIL inclusive, creating LOG_LINKS
2160 for every dependency. */
2162 static int
2163 sched_analyze (head, tail)
2164 rtx head, tail;
2166 register rtx insn;
2167 register int n_insns = 0;
2168 register rtx u;
2169 register int luid = 0;
2170 rtx loop_notes = 0;
2172 for (insn = head; ; insn = NEXT_INSN (insn))
2174 INSN_LUID (insn) = luid++;
2176 if (GET_CODE (insn) == INSN || GET_CODE (insn) == JUMP_INSN)
2178 sched_analyze_insn (PATTERN (insn), insn, loop_notes);
2179 loop_notes = 0;
2180 n_insns += 1;
2182 else if (GET_CODE (insn) == CALL_INSN)
2184 rtx x;
2185 register int i;
2187 /* Any instruction using a hard register which may get clobbered
2188 by a call needs to be marked as dependent on this call.
2189 This prevents a use of a hard return reg from being moved
2190 past a void call (i.e. it does not explicitly set the hard
2191 return reg). */
2193 /* If this call is followed by a NOTE_INSN_SETJMP, then assume that
2194 all registers, not just hard registers, may be clobbered by this
2195 call. */
2197 /* Insn, being a CALL_INSN, magically depends on
2198 `last_function_call' already. */
2200 if (NEXT_INSN (insn) && GET_CODE (NEXT_INSN (insn)) == NOTE
2201 && NOTE_LINE_NUMBER (NEXT_INSN (insn)) == NOTE_INSN_SETJMP)
2203 int max_reg = max_reg_num ();
2204 for (i = 0; i < max_reg; i++)
2206 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
2207 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
2208 reg_last_uses[i] = 0;
2209 if (reg_last_sets[i])
2210 add_dependence (insn, reg_last_sets[i], 0);
2212 reg_pending_sets_all = 1;
2214 /* Add a fake REG_NOTE which we will later convert
2215 back into a NOTE_INSN_SETJMP note. */
2216 REG_NOTES (insn) = gen_rtx (EXPR_LIST, REG_DEAD,
2217 GEN_INT (NOTE_INSN_SETJMP),
2218 REG_NOTES (insn));
2220 else
2222 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
2223 if (call_used_regs[i] || global_regs[i])
2225 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
2226 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
2227 reg_last_uses[i] = 0;
2228 if (reg_last_sets[i])
2229 add_dependence (insn, reg_last_sets[i], REG_DEP_ANTI);
2230 reg_pending_sets[i / REGSET_ELT_BITS]
2231 |= (REGSET_ELT_TYPE) 1 << (i % REGSET_ELT_BITS);
2235 /* For each insn which shouldn't cross a call, add a dependence
2236 between that insn and this call insn. */
2237 x = LOG_LINKS (sched_before_next_call);
2238 while (x)
2240 add_dependence (insn, XEXP (x, 0), REG_DEP_ANTI);
2241 x = XEXP (x, 1);
2243 LOG_LINKS (sched_before_next_call) = 0;
2245 sched_analyze_insn (PATTERN (insn), insn, loop_notes);
2246 loop_notes = 0;
2248 /* We don't need to flush memory for a function call which does
2249 not involve memory. */
2250 if (! CONST_CALL_P (insn))
2252 /* In the absence of interprocedural alias analysis,
2253 we must flush all pending reads and writes, and
2254 start new dependencies starting from here. */
2255 flush_pending_lists (insn);
2258 /* Depend this function call (actually, the user of this
2259 function call) on all hard register clobberage. */
2260 last_function_call = insn;
2261 n_insns += 1;
2263 else if (GET_CODE (insn) == NOTE
2264 && (NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_BEG
2265 || NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_END))
2266 loop_notes = gen_rtx (EXPR_LIST, REG_DEAD,
2267 GEN_INT (NOTE_LINE_NUMBER (insn)), loop_notes);
2269 if (insn == tail)
2270 return n_insns;
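/* Illustration (a standalone sketch, not used by this pass): the
   analysis above records three kinds of dependence from an insn back
   to an earlier one -- true (kind 0: read after write), REG_DEP_ANTI
   (write after read) and REG_DEP_OUTPUT (write after write).  For
   example, with
       A:  r1 = r2 + r3
       B:  r2 = r1 * 2
   B is data dependent on A through r1 and anti dependent on A through
   r2.  The classification over register bitmasks (invented helper):  */
#if 0
#include <stdio.h>

/* Report the dependence kinds of a later insn B on an earlier insn A,
   given bitmasks of the registers each one sets and uses.  */
static void
classify (unsigned a_sets, unsigned a_uses,
          unsigned b_sets, unsigned b_uses)
{
  if (a_sets & b_uses)
    printf ("true dependence\n");     /* read after write */
  if (a_uses & b_sets)
    printf ("anti dependence\n");     /* write after read */
  if (a_sets & b_sets)
    printf ("output dependence\n");   /* write after write */
}

int
main (void)
{
  /* A: r1 = r2 + r3;  B: r2 = r1 * 2.  Bit i stands for register ri.
     Prints the true and anti dependences from the example above.  */
  classify (1 << 1, (1 << 2) | (1 << 3), 1 << 2, 1 << 1);
  return 0;
}
#endif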
2274 /* Called when we see a set of a register. If death is true, then we are
2275 scanning backwards. Mark that register as unborn. If nobody says
2276 otherwise, that is how things will remain. If death is false, then we
2277 are scanning forwards. Mark that register as being born. */
2279 static void
2280 sched_note_set (b, x, death)
2281 int b;
2282 rtx x;
2283 int death;
2285 register int regno;
2286 register rtx reg = SET_DEST (x);
2287 int subreg_p = 0;
2289 if (reg == 0)
2290 return;
2292 while (GET_CODE (reg) == SUBREG || GET_CODE (reg) == STRICT_LOW_PART
2293 || GET_CODE (reg) == SIGN_EXTRACT || GET_CODE (reg) == ZERO_EXTRACT)
2295 /* Must treat modification of just one hardware register of a multi-reg
2296 value or just a byte field of a register exactly the same way that
2297 mark_set_1 in flow.c does, i.e. anything except a paradoxical subreg
2298 does not kill the entire register. */
2299 if (GET_CODE (reg) != SUBREG
2300 || REG_SIZE (SUBREG_REG (reg)) > REG_SIZE (reg))
2301 subreg_p = 1;
2303 reg = SUBREG_REG (reg);
2306 if (GET_CODE (reg) != REG)
2307 return;
2309 /* Global registers are always live, so the code below does not apply
2310 to them. */
2312 regno = REGNO (reg);
2313 if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
2315 register int offset = regno / REGSET_ELT_BITS;
2316 register REGSET_ELT_TYPE bit
2317 = (REGSET_ELT_TYPE) 1 << (regno % REGSET_ELT_BITS);
2319 if (death)
2321 /* If we only set part of the register, then this set does not
2322 kill it. */
2323 if (subreg_p)
2324 return;
2326 /* Try killing this register. */
2327 if (regno < FIRST_PSEUDO_REGISTER)
2329 int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
2330 while (--j >= 0)
2332 offset = (regno + j) / REGSET_ELT_BITS;
2333 bit = (REGSET_ELT_TYPE) 1 << ((regno + j) % REGSET_ELT_BITS);
2335 bb_live_regs[offset] &= ~bit;
2336 bb_dead_regs[offset] |= bit;
2339 else
2341 bb_live_regs[offset] &= ~bit;
2342 bb_dead_regs[offset] |= bit;
2345 else
2347 /* Make the register live again. */
2348 if (regno < FIRST_PSEUDO_REGISTER)
2350 int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
2351 while (--j >= 0)
2353 offset = (regno + j) / REGSET_ELT_BITS;
2354 bit = (REGSET_ELT_TYPE) 1 << ((regno + j) % REGSET_ELT_BITS);
2356 bb_live_regs[offset] |= bit;
2357 bb_dead_regs[offset] &= ~bit;
2360 else
2362 bb_live_regs[offset] |= bit;
2363 bb_dead_regs[offset] &= ~bit;
2369 /* Macros and functions for keeping the priority queue sorted, and
2370 dealing with queueing and unqueueing of instructions. */
2372 #define SCHED_SORT(READY, NEW_READY, OLD_READY) \
2373 do { if ((NEW_READY) - (OLD_READY) == 1) \
2374 swap_sort (READY, NEW_READY); \
2375 else if ((NEW_READY) - (OLD_READY) > 1) \
2376 qsort (READY, NEW_READY, sizeof (rtx), rank_for_schedule); } \
2377 while (0)
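/* Illustration (a sketch with plain ints, not used by this pass): the
   macro above re-sorts incrementally.  When exactly one insn was
   added since the last sort, a single insertion pass (swap_sort,
   below) restores order in O(n); when several were added, a full
   qsort is used instead.  Modeled here with a descending order of
   ints:  */
#if 0
#include <stdlib.h>

static int
cmp_desc (const void *a, const void *b)
{
  return *(const int *) b - *(const int *) a;
}

/* Insert the last element of A (length N) into its sorted position,
   as swap_sort does for the newly added insn.  */
static void
insert_last (int *a, int n)
{
  int x = a[n - 1];
  int i = n - 2;

  while (i >= 0 && a[i] < x)
    {
      a[i + 1] = a[i];
      i--;
    }
  a[i + 1] = x;
}

static void
resort (int *a, int new_n, int old_n)
{
  if (new_n - old_n == 1)
    insert_last (a, new_n);                      /* one newcomer */
  else if (new_n - old_n > 1)
    qsort (a, new_n, sizeof (int), cmp_desc);    /* several */
}
#endif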
2379 /* Returns a positive value if y is preferred; returns a negative value if
2380 x is preferred. Should never return 0, since that will make the sort
2381 unstable. */
2383 static int
2384 rank_for_schedule (x, y)
2385 rtx *x, *y;
2387 rtx tmp = *y;
2388 rtx tmp2 = *x;
2389 rtx link;
2390 int tmp_class, tmp2_class;
2391 int value;
2393 /* Choose the instruction with the highest priority, if different. */
2394 if (value = INSN_PRIORITY (tmp) - INSN_PRIORITY (tmp2))
2395 return value;
2397 if (last_scheduled_insn)
2399 /* Classify the instructions into three classes:
2400 1) Data dependent on last scheduled insn.
2401 2) Anti/Output dependent on last scheduled insn.
2402 3) Independent of last scheduled insn, or has latency of one.
2403 Choose the insn from the highest numbered class if different. */
2404 link = find_insn_list (tmp, LOG_LINKS (last_scheduled_insn));
2405 if (link == 0 || insn_cost (tmp, link, last_scheduled_insn) == 1)
2406 tmp_class = 3;
2407 else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
2408 tmp_class = 1;
2409 else
2410 tmp_class = 2;
2412 link = find_insn_list (tmp2, LOG_LINKS (last_scheduled_insn));
2413 if (link == 0 || insn_cost (tmp2, link, last_scheduled_insn) == 1)
2414 tmp2_class = 3;
2415 else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
2416 tmp2_class = 1;
2417 else
2418 tmp2_class = 2;
2420 if (value = tmp_class - tmp2_class)
2421 return value;
2424 /* If insns are equally good, sort by INSN_LUID (original insn order),
2425 so that we make the sort stable. This minimizes instruction movement,
2426 thus minimizing sched's effect on debugging and cross-jumping. */
2427 return INSN_LUID (tmp) - INSN_LUID (tmp2);
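/* Illustration (a sketch over an invented struct, not used by this
   pass): qsort is not a stable sort, so rank_for_schedule above
   breaks every tie by INSN_LUID, giving a deterministic order that
   matches the original insn order.  The same trick in miniature,
   assuming distinct luids:  */
#if 0
struct item { int priority; int luid; };

static int
rank (const void *xp, const void *yp)
{
  const struct item *x = (const struct item *) xp;
  const struct item *y = (const struct item *) yp;

  if (y->priority != x->priority)
    return y->priority - x->priority;   /* higher priority first */
  return x->luid - y->luid;             /* ties: original order */
}

/* Usage: qsort (items, n, sizeof (struct item), rank);  */
#endif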
2430 /* Re-sort the array A, in which only the last element (index N-1) may be out of order. */
2432 __inline static void
2433 swap_sort (a, n)
2434 rtx *a;
2435 int n;
2437 rtx insn = a[n-1];
2438 int i = n-2;
2440 while (i >= 0 && rank_for_schedule (a+i, &insn) >= 0)
2442 a[i+1] = a[i];
2443 i -= 1;
2445 a[i+1] = insn;
2448 static int max_priority;
2450 /* Add INSN to the insn queue so that it fires at least N_CYCLES
2451 before the currently executing insn. */
2453 __inline static void
2454 queue_insn (insn, n_cycles)
2455 rtx insn;
2456 int n_cycles;
2458 int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
2459 NEXT_INSN (insn) = insn_queue[next_q];
2460 insn_queue[next_q] = insn;
2461 q_size += 1;
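/* Illustration (a standalone sketch, not used by this pass): the insn
   queue used above is a calendar queue -- a circular array of
   buckets, one per future cycle, each bucket chaining its insns
   through NEXT_INSN.  A model over an invented node type, assuming
   the usual wrap-around definition of NEXT_Q_AFTER; N_CYCLES must
   stay below the queue size:  */
#if 0
#define QUEUE_SIZE 16          /* stand-in for INSN_QUEUE_SIZE */
#define NEXT_Q_AFTER(q, n) (((q) + (n)) % QUEUE_SIZE)

struct node { struct node *next; int uid; };

static struct node *queue[QUEUE_SIZE];
static int q_ptr, q_size;

/* Queue NODE to fire N_CYCLES from the current cycle.  */
static void
enqueue (struct node *node, int n_cycles)
{
  int slot = NEXT_Q_AFTER (q_ptr, n_cycles);

  node->next = queue[slot];
  queue[slot] = node;
  q_size++;
}

/* Advance one cycle and hand back the chain of nodes now due; the
   caller walks it via ->next and decrements q_size per node.  */
static struct node *
advance (void)
{
  struct node *due;

  q_ptr = NEXT_Q_AFTER (q_ptr, 1);
  due = queue[q_ptr];
  queue[q_ptr] = 0;
  return due;
}
#endif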
2464 /* Return nonzero if PAT is the pattern of an insn which makes a
2465 register live. */
2467 __inline static int
2468 birthing_insn_p (pat)
2469 rtx pat;
2471 int j;
2473 if (reload_completed == 1)
2474 return 0;
2476 if (GET_CODE (pat) == SET
2477 && GET_CODE (SET_DEST (pat)) == REG)
2479 rtx dest = SET_DEST (pat);
2480 int i = REGNO (dest);
2481 int offset = i / REGSET_ELT_BITS;
2482 REGSET_ELT_TYPE bit = (REGSET_ELT_TYPE) 1 << (i % REGSET_ELT_BITS);
2484 /* It would be more accurate to use refers_to_regno_p or
2485 reg_mentioned_p to determine when the dest is not live before this
2486 insn. */
2488 if (bb_live_regs[offset] & bit)
2489 return (reg_n_sets[i] == 1);
2491 return 0;
2493 if (GET_CODE (pat) == PARALLEL)
2495 for (j = 0; j < XVECLEN (pat, 0); j++)
2496 if (birthing_insn_p (XVECEXP (pat, 0, j)))
2497 return 1;
2499 return 0;
2502 /* PREV is an insn that is ready to execute. Adjust its priority if that
2503 will help shorten register lifetimes. */
2505 __inline static void
2506 adjust_priority (prev)
2507 rtx prev;
2509 /* Trying to shorten register lives after reload has completed
2510 is useless and wrong. It gives inaccurate schedules. */
2511 if (reload_completed == 0)
2513 rtx note;
2514 int n_deaths = 0;
2516 /* ??? This code has no effect, because REG_DEAD notes are removed
2517 before we ever get here. */
2518 for (note = REG_NOTES (prev); note; note = XEXP (note, 1))
2519 if (REG_NOTE_KIND (note) == REG_DEAD)
2520 n_deaths += 1;
2522 /* Defer scheduling insns which kill registers, since that
2523 shortens register lives. Prefer scheduling insns which
2524 make registers live for the same reason. */
2525 switch (n_deaths)
2527 default:
2528 INSN_PRIORITY (prev) >>= 3;
2529 break;
2530 case 3:
2531 INSN_PRIORITY (prev) >>= 2;
2532 break;
2533 case 2:
2534 case 1:
2535 INSN_PRIORITY (prev) >>= 1;
2536 break;
2537 case 0:
2538 if (birthing_insn_p (PATTERN (prev)))
2540 int max = max_priority;
2542 if (max > INSN_PRIORITY (prev))
2543 INSN_PRIORITY (prev) = max;
2545 break;
2550 /* INSN is the "currently executing insn". Launch each insn which was
2551 waiting on INSN (in the backwards dataflow sense). READY is a
2552 vector of insns which are ready to fire. N_READY is the number of
2553 elements in READY. CLOCK is the current virtual cycle. */
2555 static int
2556 schedule_insn (insn, ready, n_ready, clock)
2557 rtx insn;
2558 rtx *ready;
2559 int n_ready;
2560 int clock;
2562 rtx link;
2563 int new_ready = n_ready;
2565 if (MAX_BLOCKAGE > 1)
2566 schedule_unit (insn_unit (insn), insn, clock);
2568 if (LOG_LINKS (insn) == 0)
2569 return n_ready;
2571 /* This is used by the function adjust_priority above. */
2572 if (n_ready > 0)
2573 max_priority = MAX (INSN_PRIORITY (ready[0]), INSN_PRIORITY (insn));
2574 else
2575 max_priority = INSN_PRIORITY (insn);
2577 for (link = LOG_LINKS (insn); link != 0; link = XEXP (link, 1))
2579 rtx prev = XEXP (link, 0);
2580 int cost = insn_cost (prev, link, insn);
2582 if ((INSN_REF_COUNT (prev) -= 1) != 0)
2584 /* We satisfied one requirement to fire PREV. Record the earliest
2585 time when PREV can fire. No need to do this if the cost is 1,
2586 because PREV can fire no sooner than the next cycle. */
2587 if (cost > 1)
2588 INSN_TICK (prev) = MAX (INSN_TICK (prev), clock + cost);
2590 else
2592 /* We satisfied the last requirement to fire PREV. Ensure that all
2593 timing requirements are satisfied. */
2594 if (INSN_TICK (prev) - clock > cost)
2595 cost = INSN_TICK (prev) - clock;
2597 /* Adjust the priority of PREV and either put it on the ready
2598 list or queue it. */
2599 adjust_priority (prev);
2600 if (cost <= 1)
2601 ready[new_ready++] = prev;
2602 else
2603 queue_insn (prev, cost);
2607 return new_ready;
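/* Illustration (a sketch over an invented graph representation, not
   used by this pass): the launch step above is reference-count
   driven.  Each insn carries a count of its not-yet-scheduled
   successors (INSN_REF_COUNT); scheduling an insn decrements the
   count of every predecessor, and a predecessor whose count reaches
   zero either becomes ready at once (cost <= 1) or is queued to fire
   COST cycles later.  ENQUEUE stands in for queue_insn.  */
#if 0
struct dep { struct insn *pred; int cost; struct dep *next; };
struct insn { struct dep *deps; int ref_count; int tick; };

extern void enqueue (struct insn *, int);   /* cf. queue_insn above */

/* INSN was just scheduled at cycle CLOCK; release its predecessors.
   READY and *N_READY model the scheduler's ready vector.  */
static void
release_preds (struct insn *insn, int clock,
               struct insn **ready, int *n_ready)
{
  struct dep *d;

  for (d = insn->deps; d; d = d->next)
    {
      struct insn *prev = d->pred;
      int cost = d->cost;

      if (--prev->ref_count != 0)
        {
          /* Still waiting on other successors; just remember the
             earliest cycle at which PREV may fire.  */
          if (cost > 1 && clock + cost > prev->tick)
            prev->tick = clock + cost;
        }
      else
        {
          /* Last requirement satisfied; honor the recorded tick.  */
          if (prev->tick - clock > cost)
            cost = prev->tick - clock;
          if (cost <= 1)
            ready[(*n_ready)++] = prev;
          else
            enqueue (prev, cost);
        }
    }
}
#endif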
2610 /* Given N_READY insns in the ready list READY at time CLOCK, queue
2611 those that are blocked due to function unit hazards and rearrange
2612 the remaining ones to minimize subsequent function unit hazards. */
2614 static int
2615 schedule_select (ready, n_ready, clock, file)
2616 rtx *ready;
2617 int n_ready, clock;
2618 FILE *file;
2620 int pri = INSN_PRIORITY (ready[0]);
2621 int i, j, k, q, cost, best_cost, best_insn = 0, new_ready = n_ready;
2622 rtx insn;
2624 /* Work down the ready list in groups of instructions with the same
2625 priority value. Queue insns in the group that are blocked and
2626 select among those that remain for the one with the largest
2627 potential hazard. */
2628 for (i = 0; i < n_ready; i = j)
2630 int opri = pri;
2631 for (j = i + 1; j < n_ready; j++)
2632 if ((pri = INSN_PRIORITY (ready[j])) != opri)
2633 break;
2635 /* Queue insns in the group that are blocked. */
2636 for (k = i, q = 0; k < j; k++)
2638 insn = ready[k];
2639 if ((cost = actual_hazard (insn_unit (insn), insn, clock, 0)) != 0)
2641 q++;
2642 ready[k] = 0;
2643 queue_insn (insn, cost);
2644 if (file)
2645 fprintf (file, "\n;; blocking insn %d for %d cycles",
2646 INSN_UID (insn), cost);
2649 new_ready -= q;
2651 /* Check the next group if all insns were queued. */
2652 if (j - i - q == 0)
2653 continue;
2655 /* If more than one remains, select the first one with the largest
2656 potential hazard. */
2657 else if (j - i - q > 1)
2659 best_cost = -1;
2660 for (k = i; k < j; k++)
2662 if ((insn = ready[k]) == 0)
2663 continue;
2664 if ((cost = potential_hazard (insn_unit (insn), insn, 0))
2665 > best_cost)
2667 best_cost = cost;
2668 best_insn = k;
2672 /* We have found a suitable insn to schedule. */
2673 break;
2676 /* Move the best insn to the front of the ready list. */
2677 if (best_insn != 0)
2679 if (file)
2681 fprintf (file, ", now");
2682 for (i = 0; i < n_ready; i++)
2683 if (ready[i])
2684 fprintf (file, " %d", INSN_UID (ready[i]));
2685 fprintf (file, "\n;; insn %d has a greater potential hazard",
2686 INSN_UID (ready[best_insn]));
2688 for (i = best_insn; i > 0; i--)
2690 insn = ready[i-1];
2691 ready[i-1] = ready[i];
2692 ready[i] = insn;
2696 /* Compact the ready list. */
2697 if (new_ready < n_ready)
2698 for (i = j = 0; i < n_ready; i++)
2699 if (ready[i])
2700 ready[j++] = ready[i];
2702 return new_ready;
2705 /* Add a REG_DEAD note for REG to INSN, reusing a REG_DEAD note from the
2706 dead_notes list. */
2708 static void
2709 create_reg_dead_note (reg, insn)
2710 rtx reg, insn;
2712 rtx link;
2714 /* The number of registers killed after scheduling must be the same as the
2715 number of registers killed before scheduling. The number of REG_DEAD
2716 notes may not be conserved, i.e. two SImode hard register REG_DEAD notes
2717 might become one DImode hard register REG_DEAD note, but the number of
2718 registers killed will be conserved.
2720 We carefully remove REG_DEAD notes from the dead_notes list, so that
2721 there will be none left at the end. If we run out early, then there
2722 is a bug somewhere in flow, combine and/or sched. */
2724 if (dead_notes == 0)
2726 #if 1
2727 abort ();
2728 #else
2729 link = rtx_alloc (EXPR_LIST);
2730 PUT_REG_NOTE_KIND (link, REG_DEAD);
2731 #endif
2733 else
2735 /* Number of regs killed by REG. */
2736 int regs_killed = (REGNO (reg) >= FIRST_PSEUDO_REGISTER ? 1
2737 : HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg)));
2738 /* Number of regs killed by REG_DEAD notes taken off the list. */
2739 int reg_note_regs;
2741 link = dead_notes;
2742 reg_note_regs = (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
2743 : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
2744 GET_MODE (XEXP (link, 0))));
2745 while (reg_note_regs < regs_killed)
2747 link = XEXP (link, 1);
2748 reg_note_regs += (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
2749 : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
2750 GET_MODE (XEXP (link, 0))));
2752 dead_notes = XEXP (link, 1);
2754 /* If we took off too many register kills, put the extra ones back. */
2755 while (reg_note_regs > regs_killed)
2757 rtx temp_reg, temp_link;
2759 temp_reg = gen_rtx (REG, word_mode, 0);
2760 temp_link = rtx_alloc (EXPR_LIST);
2761 PUT_REG_NOTE_KIND (temp_link, REG_DEAD);
2762 XEXP (temp_link, 0) = temp_reg;
2763 XEXP (temp_link, 1) = dead_notes;
2764 dead_notes = temp_link;
2765 reg_note_regs--;
2769 XEXP (link, 0) = reg;
2770 XEXP (link, 1) = REG_NOTES (insn);
2771 REG_NOTES (insn) = link;
2774 /* Subroutine of attach_deaths_insn--handles the recursive search
2775 through INSN. If SET_P is true, then x is being modified by the insn. */
2777 static void
2778 attach_deaths (x, insn, set_p)
2779 rtx x;
2780 rtx insn;
2781 int set_p;
2783 register int i;
2784 register int j;
2785 register enum rtx_code code;
2786 register char *fmt;
2788 if (x == 0)
2789 return;
2791 code = GET_CODE (x);
2793 switch (code)
2795 case CONST_INT:
2796 case CONST_DOUBLE:
2797 case LABEL_REF:
2798 case SYMBOL_REF:
2799 case CONST:
2800 case CODE_LABEL:
2801 case PC:
2802 case CC0:
2803 /* Get rid of the easy cases first. */
2804 return;
2806 case REG:
2808 /* If the register dies in this insn, queue that note, and mark
2809 this register as needing to die. */
2810 /* This code is very similar to mark_used_1 (if set_p is false)
2811 and mark_set_1 (if set_p is true) in flow.c. */
2813 register int regno = REGNO (x);
2814 register int offset = regno / REGSET_ELT_BITS;
2815 register REGSET_ELT_TYPE bit
2816 = (REGSET_ELT_TYPE) 1 << (regno % REGSET_ELT_BITS);
2817 REGSET_ELT_TYPE all_needed = (old_live_regs[offset] & bit);
2818 REGSET_ELT_TYPE some_needed = (old_live_regs[offset] & bit);
2820 if (set_p)
2821 return;
2823 if (regno < FIRST_PSEUDO_REGISTER)
2825 int n;
2827 n = HARD_REGNO_NREGS (regno, GET_MODE (x));
2828 while (--n > 0)
2830 some_needed |= (old_live_regs[(regno + n) / REGSET_ELT_BITS]
2831 & ((REGSET_ELT_TYPE) 1
2832 << ((regno + n) % REGSET_ELT_BITS)));
2833 all_needed &= (old_live_regs[(regno + n) / REGSET_ELT_BITS]
2834 & ((REGSET_ELT_TYPE) 1
2835 << ((regno + n) % REGSET_ELT_BITS)));
2839 /* If it wasn't live before we started, then add a REG_DEAD note.
2840 We must check the previous lifetime info not the current info,
2841 because we may have to execute this code several times, e.g.
2842 once for a clobber (which doesn't add a note) and later
2843 for a use (which does add a note).
2845 Always make the register live. We must do this even if it was
2846 live before, because this may be an insn which sets and uses
2847 the same register, in which case the register has already been
2848 killed, so we must make it live again.
2850 Global registers are always live, and should never have a REG_DEAD
2851 note added for them, so none of the code below applies to them. */
2853 if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
2855 /* Never add REG_DEAD notes for the FRAME_POINTER_REGNUM or the
2856 STACK_POINTER_REGNUM, since these are always considered to be
2857 live. Similarly for ARG_POINTER_REGNUM if it is fixed. */
2858 if (regno != FRAME_POINTER_REGNUM
2859 #if HARD_FRAME_POINTER_REGNUM != FRAME_POINTER_REGNUM
2860 && ! (regno == HARD_FRAME_POINTER_REGNUM)
2861 #endif
2862 #if ARG_POINTER_REGNUM != FRAME_POINTER_REGNUM
2863 && ! (regno == ARG_POINTER_REGNUM && fixed_regs[regno])
2864 #endif
2865 && regno != STACK_POINTER_REGNUM)
2867 if (! all_needed && ! dead_or_set_p (insn, x))
2869 /* Check for the case where the register dying partially
2870 overlaps the register set by this insn. */
2871 if (regno < FIRST_PSEUDO_REGISTER
2872 && HARD_REGNO_NREGS (regno, GET_MODE (x)) > 1)
2874 int n = HARD_REGNO_NREGS (regno, GET_MODE (x));
2875 while (--n >= 0)
2876 some_needed |= dead_or_set_regno_p (insn, regno + n);
2879 /* If none of the words in X is needed, make a REG_DEAD
2880 note. Otherwise, we must make partial REG_DEAD
2881 notes. */
2882 if (! some_needed)
2883 create_reg_dead_note (x, insn);
2884 else
2886 int i;
2888 /* Don't make a REG_DEAD note for a part of a
2889 register that is set in the insn. */
2890 for (i = HARD_REGNO_NREGS (regno, GET_MODE (x)) - 1;
2891 i >= 0; i--)
2892 if ((old_live_regs[(regno + i) / REGSET_ELT_BITS]
2893 & ((REGSET_ELT_TYPE) 1
2894 << ((regno + i) % REGSET_ELT_BITS))) == 0
2895 && ! dead_or_set_regno_p (insn, regno + i))
2896 create_reg_dead_note (gen_rtx (REG,
2897 reg_raw_mode[regno + i],
2898 regno + i),
2899 insn);
2904 if (regno < FIRST_PSEUDO_REGISTER)
2906 int j = HARD_REGNO_NREGS (regno, GET_MODE (x));
2907 while (--j >= 0)
2909 offset = (regno + j) / REGSET_ELT_BITS;
2910 bit
2911 = (REGSET_ELT_TYPE) 1 << ((regno + j) % REGSET_ELT_BITS);
2913 bb_dead_regs[offset] &= ~bit;
2914 bb_live_regs[offset] |= bit;
2917 else
2919 bb_dead_regs[offset] &= ~bit;
2920 bb_live_regs[offset] |= bit;
2923 return;
2926 case MEM:
2927 /* Handle tail-recursive case. */
2928 attach_deaths (XEXP (x, 0), insn, 0);
2929 return;
2931 case SUBREG:
2932 case STRICT_LOW_PART:
2933 /* These two cases preserve the value of SET_P, so handle them
2934 separately. */
2935 attach_deaths (XEXP (x, 0), insn, set_p);
2936 return;
2938 case ZERO_EXTRACT:
2939 case SIGN_EXTRACT:
2940 /* This case preserves the value of SET_P for the first operand, but
2941 clears it for the other two. */
2942 attach_deaths (XEXP (x, 0), insn, set_p);
2943 attach_deaths (XEXP (x, 1), insn, 0);
2944 attach_deaths (XEXP (x, 2), insn, 0);
2945 return;
2947 default:
2948 /* Other cases: walk the insn. */
2949 fmt = GET_RTX_FORMAT (code);
2950 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
2952 if (fmt[i] == 'e')
2953 attach_deaths (XEXP (x, i), insn, 0);
2954 else if (fmt[i] == 'E')
2955 for (j = 0; j < XVECLEN (x, i); j++)
2956 attach_deaths (XVECEXP (x, i, j), insn, 0);
2961 /* After INSN has executed, add register death notes for each register
2962 that is dead after INSN. */
2964 static void
2965 attach_deaths_insn (insn)
2966 rtx insn;
2968 rtx x = PATTERN (insn);
2969 register RTX_CODE code = GET_CODE (x);
2970 rtx link;
2972 if (code == SET)
2974 attach_deaths (SET_SRC (x), insn, 0);
2976 /* A register might die here even if it is the destination, e.g.
2977 it is the target of a volatile read and is otherwise unused.
2978 Hence we must always call attach_deaths for the SET_DEST. */
2979 attach_deaths (SET_DEST (x), insn, 1);
2981 else if (code == PARALLEL)
2983 register int i;
2984 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
2986 code = GET_CODE (XVECEXP (x, 0, i));
2987 if (code == SET)
2989 attach_deaths (SET_SRC (XVECEXP (x, 0, i)), insn, 0);
2991 attach_deaths (SET_DEST (XVECEXP (x, 0, i)), insn, 1);
2993 /* Flow does not add REG_DEAD notes to registers that die in
2994 clobbers, so we can't either. */
2995 else if (code != CLOBBER)
2996 attach_deaths (XVECEXP (x, 0, i), insn, 0);
2999 /* If this is a CLOBBER, only add REG_DEAD notes to registers inside a
3000 MEM being clobbered, just like flow. */
3001 else if (code == CLOBBER && GET_CODE (XEXP (x, 0)) == MEM)
3002 attach_deaths (XEXP (XEXP (x, 0), 0), insn, 0);
3003 /* Otherwise don't add a death note to things being clobbered. */
3004 else if (code != CLOBBER)
3005 attach_deaths (x, insn, 0);
3007 /* Make death notes for things used in the called function. */
3008 if (GET_CODE (insn) == CALL_INSN)
3009 for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
3010 attach_deaths (XEXP (XEXP (link, 0), 0), insn,
3011 GET_CODE (XEXP (link, 0)) == CLOBBER);
3014 /* Delete notes beginning with INSN and maybe put them in the chain
3015 of notes ended by NOTE_LIST.
3016 Returns the insn following the notes. */
3018 static rtx
3019 unlink_notes (insn, tail)
3020 rtx insn, tail;
3022 rtx prev = PREV_INSN (insn);
3024 while (insn != tail && GET_CODE (insn) == NOTE)
3026 rtx next = NEXT_INSN (insn);
3027 /* Delete the note from its current position. */
3028 if (prev)
3029 NEXT_INSN (prev) = next;
3030 if (next)
3031 PREV_INSN (next) = prev;
3033 if (write_symbols != NO_DEBUG && NOTE_LINE_NUMBER (insn) > 0)
3034 /* Record line-number notes so they can be reused. */
3035 LINE_NOTE (insn) = insn;
3037 /* Don't save away NOTE_INSN_SETJMPs, because they must remain
3038 immediately after the call they follow. We use a fake
3039 (REG_DEAD (const_int <note number>)) note to remember them.
3040 Likewise with NOTE_INSN_LOOP_BEG and NOTE_INSN_LOOP_END. */
3041 else if (NOTE_LINE_NUMBER (insn) != NOTE_INSN_SETJMP
3042 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_BEG
3043 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_END)
3045 /* Insert the note at the end of the notes list. */
3046 PREV_INSN (insn) = note_list;
3047 if (note_list)
3048 NEXT_INSN (note_list) = insn;
3049 note_list = insn;
3052 insn = next;
3054 return insn;
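/* Illustration (a sketch over an invented node type, not used by this
   pass): unlink_notes relies on the insn chain being doubly linked
   through PREV_INSN and NEXT_INSN, so removing a note is the ordinary
   constant-time unlink:  */
#if 0
struct node { struct node *prev, *next; };

/* Splice X out of its chain.  X keeps its own pointers, so it can be
   rethreaded onto another list, as the notes above go onto
   note_list.  */
static void
unlink_node (struct node *x)
{
  if (x->prev)
    x->prev->next = x->next;
  if (x->next)
    x->next->prev = x->prev;
}
#endif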
3057 /* Constructor for `sometimes' data structure. */
3059 static int
3060 new_sometimes_live (regs_sometimes_live, offset, bit, sometimes_max)
3061 struct sometimes *regs_sometimes_live;
3062 int offset, bit;
3063 int sometimes_max;
3065 register struct sometimes *p;
3066 register int regno = offset * REGSET_ELT_BITS + bit;
3068 /* There should never be a register greater than max_regno here. If there
3069 is, it means that a define_split has created a new pseudo reg. This
3070 is not allowed, since there will not be flow info available for any
3071 new register, so catch the error here. */
3072 if (regno >= max_regno)
3073 abort ();
3075 p = &regs_sometimes_live[sometimes_max];
3076 p->offset = offset;
3077 p->bit = bit;
3078 p->live_length = 0;
3079 p->calls_crossed = 0;
3080 sometimes_max++;
3081 return sometimes_max;
3084 /* Count lengths of all regs we are currently tracking,
3085 and fold their totals into the global lifetime data. */
3087 static void
3088 finish_sometimes_live (regs_sometimes_live, sometimes_max)
3089 struct sometimes *regs_sometimes_live;
3090 int sometimes_max;
3092 int i;
3094 for (i = 0; i < sometimes_max; i++)
3096 register struct sometimes *p = &regs_sometimes_live[i];
3097 int regno;
3099 regno = p->offset * REGSET_ELT_BITS + p->bit;
3101 sched_reg_live_length[regno] += p->live_length;
3102 sched_reg_n_calls_crossed[regno] += p->calls_crossed;
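/* Illustration (a sketch over a 0/1 liveness trace, not used by this
   pass): regs_sometimes_live accounts for lifetimes piecewise.  A
   register gets a fresh entry each time it comes to life, the entry's
   live_length grows by one per insn scanned while the register stays
   live, and the total is folded into sched_reg_live_length when the
   segment ends (or, as above, when the block is finished).  The same
   accounting in miniature:  */
#if 0
/* Sum the lengths of the live segments in TRACE[0..N-1], closing any
   segment still open at the end, e.g. 1,1,0,1 yields 2 + 1 = 3.  */
static int
total_live_length (const int *trace, int n)
{
  int i, seg_len = 0, total = 0;

  for (i = 0; i < n; i++)
    {
      if (trace[i])
        seg_len++;              /* register live at this insn */
      else if (seg_len)
        {
          total += seg_len;     /* segment ended: fold it in */
          seg_len = 0;
        }
    }
  return total + seg_len;       /* close an open final segment */
}
#endif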
3106 /* Use modified list scheduling to rearrange insns in basic block
3107 B. FILE, if nonzero, is where we dump interesting output about
3108 this pass. */
3110 static void
3111 schedule_block (b, file)
3112 int b;
3113 FILE *file;
3115 rtx insn, last;
3116 rtx *ready, link;
3117 int i, j, n_ready = 0, new_ready, n_insns = 0;
3118 int sched_n_insns = 0;
3119 int clock;
3120 #define NEED_NOTHING 0
3121 #define NEED_HEAD 1
3122 #define NEED_TAIL 2
3123 int new_needs;
3125 /* HEAD and TAIL delimit the region being scheduled. */
3126 rtx head = basic_block_head[b];
3127 rtx tail = basic_block_end[b];
3128 /* PREV_HEAD and NEXT_TAIL are the boundaries of the insns
3129 being scheduled. When the insns have been ordered,
3130 these insns delimit where the new insns are to be
3131 spliced back into the insn chain. */
3132 rtx next_tail;
3133 rtx prev_head;
3135 /* Keep life information accurate. */
3136 register struct sometimes *regs_sometimes_live;
3137 int sometimes_max;
3139 if (file)
3140 fprintf (file, ";;\t -- basic block number %d from %d to %d --\n",
3141 b, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));
3143 i = max_reg_num ();
3144 reg_last_uses = (rtx *) alloca (i * sizeof (rtx));
3145 bzero ((char *) reg_last_uses, i * sizeof (rtx));
3146 reg_last_sets = (rtx *) alloca (i * sizeof (rtx));
3147 bzero ((char *) reg_last_sets, i * sizeof (rtx));
3148 reg_pending_sets = (regset) alloca (regset_bytes);
3149 bzero ((char *) reg_pending_sets, regset_bytes);
3150 reg_pending_sets_all = 0;
3151 clear_units ();
3153 /* Remove certain insns at the beginning from scheduling,
3154 by advancing HEAD. */
3156 /* At the start of a function, before reload has run, don't delay getting
3157 parameters from hard registers into pseudo registers. */
3158 if (reload_completed == 0 && b == 0)
3160 while (head != tail
3161 && GET_CODE (head) == NOTE
3162 && NOTE_LINE_NUMBER (head) != NOTE_INSN_FUNCTION_BEG)
3163 head = NEXT_INSN (head);
3164 while (head != tail
3165 && GET_CODE (head) == INSN
3166 && GET_CODE (PATTERN (head)) == SET)
3168 rtx src = SET_SRC (PATTERN (head));
3169 while (GET_CODE (src) == SUBREG
3170 || GET_CODE (src) == SIGN_EXTEND
3171 || GET_CODE (src) == ZERO_EXTEND
3172 || GET_CODE (src) == SIGN_EXTRACT
3173 || GET_CODE (src) == ZERO_EXTRACT)
3174 src = XEXP (src, 0);
3175 if (GET_CODE (src) != REG
3176 || REGNO (src) >= FIRST_PSEUDO_REGISTER)
3177 break;
3178 /* Keep this insn from ever being scheduled. */
3179 INSN_REF_COUNT (head) = 1;
3180 head = NEXT_INSN (head);
3184 /* Don't include any notes or labels at the beginning of the
3185 basic block, or notes at the ends of basic blocks. */
3186 while (head != tail)
3188 if (GET_CODE (head) == NOTE)
3189 head = NEXT_INSN (head);
3190 else if (GET_CODE (tail) == NOTE)
3191 tail = PREV_INSN (tail);
3192 else if (GET_CODE (head) == CODE_LABEL)
3193 head = NEXT_INSN (head);
3194 else break;
3196 /* If the only insn left is a NOTE or a CODE_LABEL, then there is no need
3197 to schedule this block. */
3198 if (head == tail
3199 && (GET_CODE (head) == NOTE || GET_CODE (head) == CODE_LABEL))
3200 return;
3202 #if 0
3203 /* This short-cut doesn't work. It does not count call insns crossed by
3204 registers in reg_sometimes_live. It does not mark these registers as
3205 dead if they die in this block. It does not mark these registers live
3206 (or create new reg_sometimes_live entries if necessary) if they are born
3207 in this block.
3209 The easy solution is to just always schedule a block. This block only
3210 has one insn, so this won't slow down this pass by much. */
3212 if (head == tail)
3213 return;
3214 #endif
3216 /* Now HEAD through TAIL are the insns actually to be rearranged;
3217 Let PREV_HEAD and NEXT_TAIL enclose them. */
3218 prev_head = PREV_INSN (head);
3219 next_tail = NEXT_INSN (tail);
3221 /* Initialize basic block data structures. */
3222 dead_notes = 0;
3223 pending_read_insns = 0;
3224 pending_read_mems = 0;
3225 pending_write_insns = 0;
3226 pending_write_mems = 0;
3227 pending_lists_length = 0;
3228 last_pending_memory_flush = 0;
3229 last_function_call = 0;
3230 last_scheduled_insn = 0;
3232 LOG_LINKS (sched_before_next_call) = 0;
3234 n_insns += sched_analyze (head, tail);
3235 if (n_insns == 0)
3237 free_pending_lists ();
3238 return;
3241 /* Allocate vector to hold insns to be rearranged (except those
3242 insns which are controlled by an insn with SCHED_GROUP_P set).
3243 All these insns are included between ORIG_HEAD and ORIG_TAIL,
3244 as those variables ultimately are set up. */
3245 ready = (rtx *) alloca ((n_insns+1) * sizeof (rtx));
3247 /* TAIL is now the last of the insns to be rearranged.
3248 Put those insns into the READY vector. */
3249 insn = tail;
3251 /* For all branches, calls, uses, and cc0 setters, force them to remain
3252 in order at the end of the block by adding dependencies and giving
3253 the last a high priority. There may be notes present, and prev_head
3254 may also be a note.
3256 Branches must obviously remain at the end. Calls should remain at the
3257 end since moving them results in worse register allocation. Uses remain
3258 at the end to ensure proper register allocation. cc0 setters remain
3259 at the end because they can't be moved away from their cc0 user. */
3260 last = 0;
3261 while (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN
3262 || (GET_CODE (insn) == INSN
3263 && (GET_CODE (PATTERN (insn)) == USE
3264 #ifdef HAVE_cc0
3265 || sets_cc0_p (PATTERN (insn))
3266 #endif
3267 ))
3268 || GET_CODE (insn) == NOTE)
3270 if (GET_CODE (insn) != NOTE)
3272 priority (insn);
3273 if (last == 0)
3275 ready[n_ready++] = insn;
3276 INSN_PRIORITY (insn) = TAIL_PRIORITY - i;
3277 INSN_REF_COUNT (insn) = 0;
3279 else if (! find_insn_list (insn, LOG_LINKS (last)))
3281 add_dependence (last, insn, REG_DEP_ANTI);
3282 INSN_REF_COUNT (insn)++;
3284 last = insn;
3286 /* Skip over insns that are part of a group. */
3287 while (SCHED_GROUP_P (insn))
3289 insn = prev_nonnote_insn (insn);
3290 priority (insn);
3294 insn = PREV_INSN (insn);
3295 /* Don't overrun the bounds of the basic block. */
3296 if (insn == prev_head)
3297 break;
3300 /* Assign priorities to instructions. Also check whether they
3301 are in priority order already. If so then I will be nonnegative.
3302 We use this shortcut only before reloading. */
3303 #if 0
3304 i = reload_completed ? DONE_PRIORITY : MAX_PRIORITY;
3305 #endif
3307 for (; insn != prev_head; insn = PREV_INSN (insn))
3309 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
3311 priority (insn);
3312 if (INSN_REF_COUNT (insn) == 0)
3314 if (last == 0)
3315 ready[n_ready++] = insn;
3316 else
3318 /* Make this dependent on the last of the instructions
3319 that must remain in order at the end of the block. */
3320 add_dependence (last, insn, REG_DEP_ANTI);
3321 INSN_REF_COUNT (insn) = 1;
3324 if (SCHED_GROUP_P (insn))
3326 while (SCHED_GROUP_P (insn))
3328 insn = PREV_INSN (insn);
3329 while (GET_CODE (insn) == NOTE)
3330 insn = PREV_INSN (insn);
3331 priority (insn);
3333 continue;
3335 #if 0
3336 if (i < 0)
3337 continue;
3338 if (INSN_PRIORITY (insn) < i)
3339 i = INSN_PRIORITY (insn);
3340 else if (INSN_PRIORITY (insn) > i)
3341 i = DONE_PRIORITY;
3342 #endif
3346 #if 0
3347 /* This short-cut doesn't work. It does not count call insns crossed by
3348 registers in reg_sometimes_live. It does not mark these registers as
3349 dead if they die in this block. It does not mark these registers live
3350 (or create new reg_sometimes_live entries if necessary) if they are born
3351 in this block.
3353 The easy solution is to just always schedule a block. These blocks tend
3354 to be very short, so this doesn't slow down this pass by much. */
3356 /* If existing order is good, don't bother to reorder. */
3357 if (i != DONE_PRIORITY)
3359 if (file)
3360 fprintf (file, ";; already scheduled\n");
3362 if (reload_completed == 0)
3364 for (i = 0; i < sometimes_max; i++)
3365 regs_sometimes_live[i].live_length += n_insns;
3367 finish_sometimes_live (regs_sometimes_live, sometimes_max);
3369 free_pending_lists ();
3370 return;
3372 #endif
3374 /* Scan all the insns to be scheduled, removing NOTE insns
3375 and register death notes.
3376 Line number NOTE insns end up in NOTE_LIST.
3377 Register death notes end up in DEAD_NOTES.
3379 Recreate the register life information for the end of this basic
3380 block. */
3382 if (reload_completed == 0)
3384 bcopy ((char *) basic_block_live_at_start[b], (char *) bb_live_regs,
3385 regset_bytes);
3386 bzero ((char *) bb_dead_regs, regset_bytes);
3388 if (b == 0)
3390 /* This is the first block in the function. There may be insns
3391 before head that we can't schedule. We still need to examine
3392 them though for accurate register lifetime analysis. */
3394 /* We don't want to remove any REG_DEAD notes as the code below
3395 does. */
3397 for (insn = basic_block_head[b]; insn != head;
3398 insn = NEXT_INSN (insn))
3399 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
3401 /* See if the register gets born here. */
3402 /* We must check for registers being born before we check for
3403 registers dying. It is possible for a register to be born
3404 and die in the same insn, e.g. reading from a volatile
3405 memory location into an otherwise unused register. Such
3406 a register must be marked as dead after this insn. */
3407 if (GET_CODE (PATTERN (insn)) == SET
3408 || GET_CODE (PATTERN (insn)) == CLOBBER)
3409 sched_note_set (b, PATTERN (insn), 0);
3410 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3412 int j;
3413 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3414 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3415 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3416 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3418 /* ??? This code is obsolete and should be deleted. It
3419 is harmless though, so we will leave it in for now. */
3420 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3421 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
3422 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3425 for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
3427 if ((REG_NOTE_KIND (link) == REG_DEAD
3428 || REG_NOTE_KIND (link) == REG_UNUSED)
3429 /* Verify that the REG_NOTE has a valid value. */
3430 && GET_CODE (XEXP (link, 0)) == REG)
3432 register int regno = REGNO (XEXP (link, 0));
3433 register int offset = regno / REGSET_ELT_BITS;
3434 register REGSET_ELT_TYPE bit
3435 = (REGSET_ELT_TYPE) 1 << (regno % REGSET_ELT_BITS);
3437 if (regno < FIRST_PSEUDO_REGISTER)
3439 int j = HARD_REGNO_NREGS (regno,
3440 GET_MODE (XEXP (link, 0)));
3441 while (--j >= 0)
3443 offset = (regno + j) / REGSET_ELT_BITS;
3444 bit = ((REGSET_ELT_TYPE) 1
3445 << ((regno + j) % REGSET_ELT_BITS));
3447 bb_live_regs[offset] &= ~bit;
3448 bb_dead_regs[offset] |= bit;
3451 else
3453 bb_live_regs[offset] &= ~bit;
3454 bb_dead_regs[offset] |= bit;
3462 /* If debugging information is being produced, keep track of the line
3463 number notes for each insn. */
3464 if (write_symbols != NO_DEBUG)
3466 /* We must use the true line number for the first insn in the block
3467 that was computed and saved at the start of this pass. We can't
3468 use the current line number, because scheduling of the previous
3469 block may have changed the current line number. */
3470 rtx line = line_note_head[b];
3472 for (insn = basic_block_head[b];
3473 insn != next_tail;
3474 insn = NEXT_INSN (insn))
3475 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
3476 line = insn;
3477 else
3478 LINE_NOTE (insn) = line;
3481 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3483 rtx prev, next, link;
3485 /* Farm out notes. This is needed to keep the debugger from
3486 getting completely deranged. */
3487 if (GET_CODE (insn) == NOTE)
3489 prev = insn;
3490 insn = unlink_notes (insn, next_tail);
3491 if (prev == tail)
3492 abort ();
3493 if (prev == head)
3494 abort ();
3495 if (insn == next_tail)
3496 abort ();
3499 if (reload_completed == 0
3500 && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
3502 /* See if the register gets born here. */
3503 /* We must check for registers being born before we check for
3504 registers dying. It is possible for a register to be born and
3505 die in the same insn, e.g. reading from a volatile memory
3506 location into an otherwise unused register. Such a register
3507 must be marked as dead after this insn. */
3508 if (GET_CODE (PATTERN (insn)) == SET
3509 || GET_CODE (PATTERN (insn)) == CLOBBER)
3510 sched_note_set (b, PATTERN (insn), 0);
3511 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3513 int j;
3514 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3515 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3516 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3517 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3519 /* ??? This code is obsolete and should be deleted. It
3520 is harmless though, so we will leave it in for now. */
3521 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3522 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
3523 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3526 /* Need to know what registers this insn kills. */
3527 for (prev = 0, link = REG_NOTES (insn); link; link = next)
3529 next = XEXP (link, 1);
3530 if ((REG_NOTE_KIND (link) == REG_DEAD
3531 || REG_NOTE_KIND (link) == REG_UNUSED)
3532 /* Verify that the REG_NOTE has a valid value. */
3533 && GET_CODE (XEXP (link, 0)) == REG)
3535 register int regno = REGNO (XEXP (link, 0));
3536 register int offset = regno / REGSET_ELT_BITS;
3537 register REGSET_ELT_TYPE bit
3538 = (REGSET_ELT_TYPE) 1 << (regno % REGSET_ELT_BITS);
3540 /* Only unlink REG_DEAD notes; leave REG_UNUSED notes
3541 alone. */
3542 if (REG_NOTE_KIND (link) == REG_DEAD)
3544 if (prev)
3545 XEXP (prev, 1) = next;
3546 else
3547 REG_NOTES (insn) = next;
3548 XEXP (link, 1) = dead_notes;
3549 dead_notes = link;
3551 else
3552 prev = link;
3554 if (regno < FIRST_PSEUDO_REGISTER)
3556 int j = HARD_REGNO_NREGS (regno,
3557 GET_MODE (XEXP (link, 0)));
3558 while (--j >= 0)
3560 offset = (regno + j) / REGSET_ELT_BITS;
3561 bit = ((REGSET_ELT_TYPE) 1
3562 << ((regno + j) % REGSET_ELT_BITS));
3564 bb_live_regs[offset] &= ~bit;
3565 bb_dead_regs[offset] |= bit;
3568 else
3570 bb_live_regs[offset] &= ~bit;
3571 bb_dead_regs[offset] |= bit;
3574 else
3575 prev = link;
3580 if (reload_completed == 0)
3582 /* Keep track of register lives. */
3583 old_live_regs = (regset) alloca (regset_bytes);
3584 regs_sometimes_live
3585 = (struct sometimes *) alloca (max_regno * sizeof (struct sometimes));
3586 sometimes_max = 0;
3588 /* Start with registers live at end. */
3589 for (j = 0; j < regset_size; j++)
3591 REGSET_ELT_TYPE live = bb_live_regs[j];
3592 old_live_regs[j] = live;
3593 if (live)
3595 register int bit;
3596 for (bit = 0; bit < REGSET_ELT_BITS; bit++)
3597 if (live & ((REGSET_ELT_TYPE) 1 << bit))
3598 sometimes_max = new_sometimes_live (regs_sometimes_live, j,
3599 bit, sometimes_max);
3604 SCHED_SORT (ready, n_ready, 1);
3606 if (file)
3608 fprintf (file, ";; ready list initially:\n;; ");
3609 for (i = 0; i < n_ready; i++)
3610 fprintf (file, "%d ", INSN_UID (ready[i]));
3611 fprintf (file, "\n\n");
3613 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3614 if (INSN_PRIORITY (insn) > 0)
3615 fprintf (file, ";; insn[%4d]: priority = %4d, ref_count = %4d\n",
3616 INSN_UID (insn), INSN_PRIORITY (insn),
3617 INSN_REF_COUNT (insn));
3620 /* Now HEAD and TAIL are going to become disconnected
3621 entirely from the insn chain. */
3622 tail = 0;
3624 /* Q_SIZE will always be zero here. */
3625 q_ptr = 0; clock = 0;
3626 bzero ((char *) insn_queue, sizeof (insn_queue));
3628 /* Now, perform list scheduling. */
3630 /* Where we start inserting insns is after TAIL. */
3631 last = next_tail;
3633 new_needs = (NEXT_INSN (prev_head) == basic_block_head[b]
3634 ? NEED_HEAD : NEED_NOTHING);
3635 if (PREV_INSN (next_tail) == basic_block_end[b])
3636 new_needs |= NEED_TAIL;
3638 new_ready = n_ready;
3639 while (sched_n_insns < n_insns)
3641 q_ptr = NEXT_Q (q_ptr); clock++;
3643 /* Add all pending insns that can be scheduled without stalls to the
3644 ready list. */
3645 for (insn = insn_queue[q_ptr]; insn; insn = NEXT_INSN (insn))
3647 if (file)
3648 fprintf (file, ";; launching %d before %d with no stalls at T-%d\n",
3649 INSN_UID (insn), INSN_UID (last), clock);
3650 ready[new_ready++] = insn;
3651 q_size -= 1;
3653 insn_queue[q_ptr] = 0;
3655 /* If there are no ready insns, stall until one is ready and add all
3656 of the pending insns at that point to the ready list. */
3657 if (new_ready == 0)
3659 register int stalls;
3661 for (stalls = 1; stalls < INSN_QUEUE_SIZE; stalls++)
3662 if (insn = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)])
3664 for (; insn; insn = NEXT_INSN (insn))
3666 if (file)
3667 fprintf (file, ";; launching %d before %d with %d stalls at T-%d\n",
3668 INSN_UID (insn), INSN_UID (last), stalls, clock);
3669 ready[new_ready++] = insn;
3670 q_size -= 1;
3672 insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = 0;
3673 break;
3676 q_ptr = NEXT_Q_AFTER (q_ptr, stalls); clock += stalls;
3679 /* There should be some instructions waiting to fire. */
3680 if (new_ready == 0)
3681 abort ();
3683 if (file)
3685 fprintf (file, ";; ready list at T-%d:", clock);
3686 for (i = 0; i < new_ready; i++)
3687 fprintf (file, " %d (%x)",
3688 INSN_UID (ready[i]), INSN_PRIORITY (ready[i]));
3691 /* Sort the ready list and choose the best insn to schedule. Select
3692 which insn should issue in this cycle and queue those that are
3693 blocked by function unit hazards.
3695 N_READY holds the number of items that were scheduled the last time,
3696 minus the one instruction scheduled on the last loop iteration; it
3697 is not modified for any other reason in this loop. */
3699 SCHED_SORT (ready, new_ready, n_ready);
3700 if (MAX_BLOCKAGE > 1)
3702 new_ready = schedule_select (ready, new_ready, clock, file);
3703 if (new_ready == 0)
3705 if (file)
3706 fprintf (file, "\n");
3707 /* We must set n_ready here, to ensure that sorting always
3708 occurs when we come back to the SCHED_SORT line above. */
3709 n_ready = 0;
3710 continue;
3713 n_ready = new_ready;
3714 last_scheduled_insn = insn = ready[0];
3716 /* The first insn scheduled becomes the new tail. */
3717 if (tail == 0)
3718 tail = insn;
3720 if (file)
3722 fprintf (file, ", now");
3723 for (i = 0; i < n_ready; i++)
3724 fprintf (file, " %d", INSN_UID (ready[i]));
3725 fprintf (file, "\n");
3728 if (DONE_PRIORITY_P (insn))
3729 abort ();
3731 if (reload_completed == 0)
3733 /* Process this insn, and each insn linked to this one which must
3734 be immediately output after this insn. */
3737 /* First we kill registers set by this insn, and then we
3738 make registers used by this insn live. This is the opposite
3739 of the order used above, because we are traversing the
3740 instructions backwards. */
3742 /* Strictly speaking, we should scan REG_UNUSED notes and make
3743 every register mentioned there live; however, we would just
3744 kill them again immediately below, so there seems to be no
3745 reason to bother doing so. */
3747 /* See if this is the last notice we must take of a register. */
3748 if (GET_CODE (PATTERN (insn)) == SET
3749 || GET_CODE (PATTERN (insn)) == CLOBBER)
3750 sched_note_set (b, PATTERN (insn), 1);
3751 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3753 int j;
3754 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3755 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3756 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3757 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 1);
3760 /* This code keeps life analysis information up to date. */
3761 if (GET_CODE (insn) == CALL_INSN)
3763 register struct sometimes *p;
3765 /* A call kills all call used and global registers, except
3766 for those mentioned in the call pattern which will be
3767 made live again later. */
3768 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
3769 if (call_used_regs[i] || global_regs[i])
3771 register int offset = i / REGSET_ELT_BITS;
3772 register REGSET_ELT_TYPE bit
3773 = (REGSET_ELT_TYPE) 1 << (i % REGSET_ELT_BITS);
3775 bb_live_regs[offset] &= ~bit;
3776 bb_dead_regs[offset] |= bit;
3779 /* Regs live at the time of a call instruction must not
3780 go in a register clobbered by calls. Record this for
3781 all regs now live. Note that regs which are born or
3782 die in a call do not cross a call, so this must be done
3783 after the killings (above) and before the births
3784 (below). */
3785 p = regs_sometimes_live;
3786 for (i = 0; i < sometimes_max; i++, p++)
3787 if (bb_live_regs[p->offset]
3788 & ((REGSET_ELT_TYPE) 1 << p->bit))
3789 p->calls_crossed += 1;
3792 /* Make every register used live, and add REG_DEAD notes for
3793 registers which were not live before we started. */
3794 attach_deaths_insn (insn);
3796 /* Find registers now made live by that instruction. */
3797 for (i = 0; i < regset_size; i++)
3799 REGSET_ELT_TYPE diff = bb_live_regs[i] & ~old_live_regs[i];
3800 if (diff)
3802 register int bit;
3803 old_live_regs[i] |= diff;
3804 for (bit = 0; bit < REGSET_ELT_BITS; bit++)
3805 if (diff & ((REGSET_ELT_TYPE) 1 << bit))
3806 sometimes_max
3807 = new_sometimes_live (regs_sometimes_live, i, bit,
3808 sometimes_max);
3812 /* Count lengths of all regs we are worrying about now,
3813 and handle registers no longer live. */
3815 for (i = 0; i < sometimes_max; i++)
3817 register struct sometimes *p = &regs_sometimes_live[i];
3818 int regno = p->offset * REGSET_ELT_BITS + p->bit;
3820 p->live_length += 1;
3822 if ((bb_live_regs[p->offset]
3823 & ((REGSET_ELT_TYPE) 1 << p->bit)) == 0)
3825 /* This is the end of one of this register's lifetime
3826 segments. Save the lifetime info collected so far,
3827 and clear its bit in the old_live_regs entry. */
3828 sched_reg_live_length[regno] += p->live_length;
3829 sched_reg_n_calls_crossed[regno] += p->calls_crossed;
3830 old_live_regs[p->offset]
3831 &= ~((REGSET_ELT_TYPE) 1 << p->bit);
3833 /* Delete the reg_sometimes_live entry for this reg by
3834 copying the last entry over top of it. */
3835 *p = regs_sometimes_live[--sometimes_max];
3836 /* ...and decrement i so that this newly copied entry
3837 will be processed. */
3838 i--;
3842 link = insn;
3843 insn = PREV_INSN (insn);
3845 while (SCHED_GROUP_P (link));
3847 /* Set INSN back to the insn we are scheduling now. */
3848 insn = ready[0];
3851 /* Schedule INSN. Remove it from the ready list. */
3852 ready += 1;
3853 n_ready -= 1;
3855 sched_n_insns += 1;
3856 NEXT_INSN (insn) = last;
3857 PREV_INSN (last) = insn;
3858 last = insn;
3860 /* Check to see if we need to re-emit any notes here. */
3862 rtx note;
3864 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
3866 if (REG_NOTE_KIND (note) == REG_DEAD
3867 && GET_CODE (XEXP (note, 0)) == CONST_INT)
3869 if (INTVAL (XEXP (note, 0)) == NOTE_INSN_SETJMP)
3870 emit_note_after (INTVAL (XEXP (note, 0)), insn);
3871 else
3872 last = emit_note_before (INTVAL (XEXP (note, 0)), last);
3873 remove_note (insn, note);
3878 /* Everything that precedes INSN now either becomes "ready", if
3879 it can execute immediately before INSN, or "pending", if
3880 there must be a delay. Give INSN high enough priority that
3881 at least one (maybe more) reg-killing insns can be launched
3882 ahead of all others. Mark INSN as scheduled by changing its
3883 priority to -1. */
3884 INSN_PRIORITY (insn) = LAUNCH_PRIORITY;
3885 new_ready = schedule_insn (insn, ready, n_ready, clock);
3886 INSN_PRIORITY (insn) = DONE_PRIORITY;
3888 /* Schedule all prior insns that must not be moved. */
3889 if (SCHED_GROUP_P (insn))
3891 /* Disable these insns from being launched. */
3892 link = insn;
3893 while (SCHED_GROUP_P (link))
3895 /* Disable these insns from being launched by anybody. */
3896 link = PREV_INSN (link);
3897 INSN_REF_COUNT (link) = 0;
3900 /* None of these insns can move forward into delay slots. */
3901 while (SCHED_GROUP_P (insn))
3903 insn = PREV_INSN (insn);
3904 new_ready = schedule_insn (insn, ready, new_ready, clock);
3905 INSN_PRIORITY (insn) = DONE_PRIORITY;
3907 sched_n_insns += 1;
3908 NEXT_INSN (insn) = last;
3909 PREV_INSN (last) = insn;
3910 last = insn;
3914 if (q_size != 0)
3915 abort ();
3917 if (reload_completed == 0)
3918 finish_sometimes_live (regs_sometimes_live, sometimes_max);
3920 /* HEAD is now the first insn in the chain of insns that have
3921 been scheduled by the loop above.
3922 TAIL is the last of those insns. */
3923 head = insn;
3925 /* NOTE_LIST is the end of a chain of notes previously found
3926 among the insns. Insert them at the beginning of the insns. */
3927 if (note_list != 0)
3929 rtx note_head = note_list;
3930 while (PREV_INSN (note_head))
3931 note_head = PREV_INSN (note_head);
3933 PREV_INSN (head) = note_list;
3934 NEXT_INSN (note_list) = head;
3935 head = note_head;
3938 /* There should be no REG_DEAD notes leftover at the end.
3939 In practice, this can occur as the result of bugs in flow.c, combine.c,
3940 and/or sched.c. The values of the REG_DEAD notes remaining are
3941 meaningless, because dead_notes is just used as a free list. */
3942 #if 1
3943 if (dead_notes != 0)
3944 abort ();
3945 #endif
3947 if (new_needs & NEED_HEAD)
3948 basic_block_head[b] = head;
3949 PREV_INSN (head) = prev_head;
3950 NEXT_INSN (prev_head) = head;
3952 if (new_needs & NEED_TAIL)
3953 basic_block_end[b] = tail;
3954 NEXT_INSN (tail) = next_tail;
3955 PREV_INSN (next_tail) = tail;
3957 /* Restore the line-number notes of each insn. */
3958 if (write_symbols != NO_DEBUG)
3960 rtx line, note, prev, new;
3961 int notes = 0;
3963 head = basic_block_head[b];
3964 next_tail = NEXT_INSN (basic_block_end[b]);
3966 /* Determine the current line-number. We want to know the current
3967 line number of the first insn of the block here, in case it is
3968 different from the true line number that was saved earlier. If
3969 different, then we need a line number note before the first insn
3970 of this block. If it happens to be the same, then we don't want to
3971 emit another line number note here. */
3972 for (line = head; line; line = PREV_INSN (line))
3973 if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
3974 break;
3976 /* Walk the insns keeping track of the current line-number and inserting
3977 the line-number notes as needed. */
3978 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3979 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
3980 line = insn;
3981 /* This used to emit line number notes before every non-deleted note.
3982 However, this confuses a debugger, because line notes not separated
3983 by real instructions all end up at the same address. I can find no
3984 use for line number notes before other notes, so none are emitted. */
3985 else if (GET_CODE (insn) != NOTE
3986 && (note = LINE_NOTE (insn)) != 0
3987 && note != line
3988 && (line == 0
3989 || NOTE_LINE_NUMBER (note) != NOTE_LINE_NUMBER (line)
3990 || NOTE_SOURCE_FILE (note) != NOTE_SOURCE_FILE (line)))
3992 line = note;
3993 prev = PREV_INSN (insn);
3994 if (LINE_NOTE (note))
3996 /* Re-use the original line-number note. */
3997 LINE_NOTE (note) = 0;
3998 PREV_INSN (note) = prev;
3999 NEXT_INSN (prev) = note;
4000 PREV_INSN (insn) = note;
4001 NEXT_INSN (note) = insn;
4003 else
4005 notes++;
4006 new = emit_note_after (NOTE_LINE_NUMBER (note), prev);
4007 NOTE_SOURCE_FILE (new) = NOTE_SOURCE_FILE (note);
4010 if (file && notes)
4011 fprintf (file, ";; added %d line-number notes\n", notes);
4014 if (file)
4016 fprintf (file, ";; total time = %d\n;; new basic block head = %d\n;; new basic block end = %d\n\n",
4017 clock, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));
4020 /* Yow! We're done! */
4021 free_pending_lists ();
4023 return;
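/* The main loop of schedule_block above interleaves a sorted ready
   list with the clocked stall queue.  What follows is a minimal,
   self-contained sketch of just the queue/stall discipline, kept
   under "#if 0" so it is never compiled: insns are plain integers, a
   zero slot means "empty", and none of the names below are GCC's.  */
#if 0
#include <stdio.h>

#define QUEUE_SIZE 8				/* power of two */
#define SLOT(Q, N) (((Q) + (N)) & (QUEUE_SIZE - 1))

int
main ()
{
  /* queue[t] holds the insn id that becomes ready t ticks from now;
     PENDING matches the number of nonzero slots.  */
  static int queue[QUEUE_SIZE] = { 0, 7, 0, 0, 3, 0, 0, 0 };
  int q_ptr = 0, clock = 0, pending = 2;

  while (pending > 0)
    {
      int stalls;

      /* Advance one tick; if the new slot is empty, stall forward to
	 the next nonempty slot, as the scheduler does when the ready
	 list comes up empty.  */
      q_ptr = SLOT (q_ptr, 1), clock++;
      if (queue[q_ptr] == 0)
	{
	  for (stalls = 1; stalls < QUEUE_SIZE; stalls++)
	    if (queue[SLOT (q_ptr, stalls)] != 0)
	      break;
	  q_ptr = SLOT (q_ptr, stalls), clock += stalls;
	}

      printf (";; launching %d at T-%d\n", queue[q_ptr], clock);
      queue[q_ptr] = 0;
      pending--;
    }
  return 0;
}
#endif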
4026 /* Subroutine of split_hard_reg_notes. Searches X for any reference to
4027 REGNO, returning the rtx of the reference found if any. Otherwise,
4028 returns 0. */
4030 static rtx
4031 regno_use_in (regno, x)
4032 int regno;
4033 rtx x;
4035 register char *fmt;
4036 int i, j;
4037 rtx tem;
4039 if (GET_CODE (x) == REG && REGNO (x) == regno)
4040 return x;
4042 fmt = GET_RTX_FORMAT (GET_CODE (x));
4043 for (i = GET_RTX_LENGTH (GET_CODE (x)) - 1; i >= 0; i--)
4045 if (fmt[i] == 'e')
4047 if ((tem = regno_use_in (regno, XEXP (x, i))) != 0)
4048 return tem;
4050 else if (fmt[i] == 'E')
4051 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
4052 if ((tem = regno_use_in (regno, XVECEXP (x, i, j))) != 0)
4053 return tem;
4056 return 0;
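/* regno_use_in is an instance of the standard rtl-walking idiom:
   consult GET_RTX_FORMAT for each operand and recurse into 'e'
   (expression) and 'E' (vector) slots.  Below is a self-contained toy
   of the same shape, with a hand-rolled binary tree standing in for
   rtx -- every type and name in it is illustrative, not GCC's.  */
#if 0
#include <stddef.h>

struct node
{
  int is_leaf;			/* leaves carry a register number */
  int regno;
  struct node *op[2];		/* interior nodes: operands, may be null */
};

/* Return the leaf mentioning REGNO, or NULL if there is none;
   compare regno_use_in above.  */
static struct node *
find_regno (int regno, struct node *x)
{
  int i;
  struct node *tem;

  if (x == NULL)
    return NULL;
  if (x->is_leaf)
    return x->regno == regno ? x : NULL;
  for (i = 0; i < 2; i++)
    if ((tem = find_regno (regno, x->op[i])) != NULL)
      return tem;
  return NULL;
}
#endif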
4059 /* Subroutine of update_flow_info. Determines whether any new REG_NOTEs are
4060 needed for the hard register mentioned in the note. This can happen
4061 if the reference to the hard register in the original insn was split into
4062 several smaller hard register references in the split insns. */
4064 static void
4065 split_hard_reg_notes (note, first, last, orig_insn)
4066 rtx note, first, last, orig_insn;
4068 rtx reg, temp, link;
4069 int n_regs, i, new_reg;
4070 rtx insn;
4072 /* Assume that this is a REG_DEAD note. */
4073 if (REG_NOTE_KIND (note) != REG_DEAD)
4074 abort ();
4076 reg = XEXP (note, 0);
4078 n_regs = HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg));
4080 for (i = 0; i < n_regs; i++)
4082 new_reg = REGNO (reg) + i;
4084 /* Check for references to new_reg in the split insns. */
4085 for (insn = last; ; insn = PREV_INSN (insn))
4087 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4088 && (temp = regno_use_in (new_reg, PATTERN (insn))))
4090 /* Create a new reg dead note here. */
4091 link = rtx_alloc (EXPR_LIST);
4092 PUT_REG_NOTE_KIND (link, REG_DEAD);
4093 XEXP (link, 0) = temp;
4094 XEXP (link, 1) = REG_NOTES (insn);
4095 REG_NOTES (insn) = link;
4097 /* If we killed multiple registers here, then add in the excess. */
4098 i += HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) - 1;
4100 break;
4102 /* It isn't mentioned anywhere, so no new reg note is needed for
4103 this register. */
4104 if (insn == first)
4105 break;
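/* The note splitting above leans on the fact that a multi-word hard
   register occupies HARD_REGNO_NREGS consecutive register numbers, so
   one death of the wide register may become several deaths of its
   components.  A toy illustration of enumerating those components,
   with a made-up 4-byte word size (nothing here is GCC's):  */
#if 0
#include <stdio.h>

#define WORD_BYTES 4

/* How many consecutive hard regs a SIZE-byte value needs, in the
   spirit of HARD_REGNO_NREGS.  */
static int
nregs_for (int size)
{
  return (size + WORD_BYTES - 1) / WORD_BYTES;
}

int
main ()
{
  int regno = 10, size = 10;	/* a 10-byte value starting in reg 10 */
  int i, n = nregs_for (size);

  for (i = 0; i < n; i++)
    printf ("component hard reg %d\n", regno + i);
  return 0;
}
#endif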
4110 /* Subroutine of update_flow_info. Determines whether a SET or CLOBBER in an
4111 insn created by splitting needs a REG_DEAD or REG_UNUSED note added. */
4113 static void
4114 new_insn_dead_notes (pat, insn, last, orig_insn)
4115 rtx pat, insn, last, orig_insn;
4117 rtx dest, tem, set;
4119 /* PAT is either a CLOBBER or a SET here. */
4120 dest = XEXP (pat, 0);
4122 while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
4123 || GET_CODE (dest) == STRICT_LOW_PART
4124 || GET_CODE (dest) == SIGN_EXTRACT)
4125 dest = XEXP (dest, 0);
4127 if (GET_CODE (dest) == REG)
4129 for (tem = last; tem != insn; tem = PREV_INSN (tem))
4131 if (GET_RTX_CLASS (GET_CODE (tem)) == 'i'
4132 && reg_overlap_mentioned_p (dest, PATTERN (tem))
4133 && (set = single_set (tem)))
4135 rtx tem_dest = SET_DEST (set);
4137 while (GET_CODE (tem_dest) == ZERO_EXTRACT
4138 || GET_CODE (tem_dest) == SUBREG
4139 || GET_CODE (tem_dest) == STRICT_LOW_PART
4140 || GET_CODE (tem_dest) == SIGN_EXTRACT)
4141 tem_dest = XEXP (tem_dest, 0);
4143 if (! rtx_equal_p (tem_dest, dest))
4145 /* Use the same scheme as combine.c: don't put both REG_DEAD
4146 and REG_UNUSED notes on the same insn. */
4147 if (! find_regno_note (tem, REG_UNUSED, REGNO (dest))
4148 && ! find_regno_note (tem, REG_DEAD, REGNO (dest)))
4150 rtx note = rtx_alloc (EXPR_LIST);
4151 PUT_REG_NOTE_KIND (note, REG_DEAD);
4152 XEXP (note, 0) = dest;
4153 XEXP (note, 1) = REG_NOTES (tem);
4154 REG_NOTES (tem) = note;
4156 /* The reg only dies in one insn, the last one that uses
4157 it. */
4158 break;
4160 else if (reg_overlap_mentioned_p (dest, SET_SRC (set)))
4161 /* We found an instruction that both uses and sets the
4162 register, so no new REG_NOTE is needed for this set. */
4163 break;
4166 /* If this is a set, it must die somewhere, unless it is the dest of
4167 the original insn, and hence is live after the original insn. Abort
4168 if it isn't supposed to be live after the original insn.
4170 If this is a clobber, then just add a REG_UNUSED note. */
4171 if (tem == insn)
4173 int live_after_orig_insn = 0;
4174 rtx pattern = PATTERN (orig_insn);
4175 int i;
4177 if (GET_CODE (pat) == CLOBBER)
4179 rtx note = rtx_alloc (EXPR_LIST);
4180 PUT_REG_NOTE_KIND (note, REG_UNUSED);
4181 XEXP (note, 0) = dest;
4182 XEXP (note, 1) = REG_NOTES (insn);
4183 REG_NOTES (insn) = note;
4184 return;
4187 /* The original insn could have multiple sets, so search the
4188 insn for all sets. */
4189 if (GET_CODE (pattern) == SET)
4191 if (reg_overlap_mentioned_p (dest, SET_DEST (pattern)))
4192 live_after_orig_insn = 1;
4194 else if (GET_CODE (pattern) == PARALLEL)
4196 for (i = 0; i < XVECLEN (pattern, 0); i++)
4197 if (GET_CODE (XVECEXP (pattern, 0, i)) == SET
4198 && reg_overlap_mentioned_p (dest,
4199 SET_DEST (XVECEXP (pattern,
4200 0, i))))
4201 live_after_orig_insn = 1;
4204 if (! live_after_orig_insn)
4205 abort ();
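/* The rule new_insn_dead_notes implements is that a register dies in
   exactly one insn: the last one that uses it.  A toy backward scan
   making the same decision, with integers standing in for insns
   (names and data illustrative only):  */
#if 0
#include <stdio.h>

/* uses[i] is nonzero if toy insn I mentions the register.  Scan
   backwards from LAST towards FIRST; the last user found is the insn
   that would receive the REG_DEAD note.  */
static int
last_use (const int *uses, int first, int last)
{
  int i;

  for (i = last; i >= first; i--)
    if (uses[i])
      return i;
  return -1;			/* never used: a REG_UNUSED candidate */
}

int
main ()
{
  static int uses[6] = { 0, 1, 0, 1, 0, 0 };

  printf ("REG_DEAD note belongs on insn %d\n", last_use (uses, 0, 5));
  return 0;
}
#endif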
4210 /* Subroutine of update_flow_info. Update the value of reg_n_sets for all
4211 registers modified by X. INC is -1 if the containing insn is being deleted,
4212 and is 1 if the containing insn is a newly generated insn. */
4214 static void
4215 update_n_sets (x, inc)
4216 rtx x;
4217 int inc;
4219 rtx dest = SET_DEST (x);
4221 while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
4222 || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
4223 dest = SUBREG_REG (dest);
4225 if (GET_CODE (dest) == REG)
4227 int regno = REGNO (dest);
4229 if (regno < FIRST_PSEUDO_REGISTER)
4231 register int i;
4232 int endregno = regno + HARD_REGNO_NREGS (regno, GET_MODE (dest));
4234 for (i = regno; i < endregno; i++)
4235 reg_n_sets[i] += inc;
4237 else
4238 reg_n_sets[regno] += inc;
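/* Why this bookkeeping matters: splitting turns one set of a reg into
   several, so the per-reg count must drop by the deleted insn's sets
   and grow by each new insn's.  A toy of that arithmetic, kept under
   "#if 0" (illustrative only):  */
#if 0
#include <stdio.h>

int
main ()
{
  int n_sets = 1;		/* the reg was set once, by ORIG_INSN */

  n_sets -= 1;			/* update_n_sets (orig_pat, -1) */
  n_sets += 3;			/* update_n_sets (new_pat, 1), three times */
  printf ("reg_n_sets: 1 -> %d\n", n_sets);	/* now set 3 times */
  return 0;
}
#endif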
4242 /* Updates all flow-analysis related quantities (including REG_NOTES) for
4243 the insns from FIRST to LAST inclusive that were created by splitting
4244 ORIG_INSN. NOTES are the original REG_NOTES. */
4246 static void
4247 update_flow_info (notes, first, last, orig_insn)
4248 rtx notes;
4249 rtx first, last;
4250 rtx orig_insn;
4252 rtx insn, note;
4253 rtx next;
4254 rtx orig_dest, temp;
4255 rtx set;
4257 /* Get and save the destination set by the original insn. */
4259 orig_dest = single_set (orig_insn);
4260 if (orig_dest)
4261 orig_dest = SET_DEST (orig_dest);
4263 /* Move REG_NOTES from the original insn to where they now belong. */
4265 for (note = notes; note; note = next)
4267 next = XEXP (note, 1);
4268 switch (REG_NOTE_KIND (note))
4270 case REG_DEAD:
4271 case REG_UNUSED:
4272 /* Move these notes from the original insn to the last new insn where
4273 the register is now set. */
4275 for (insn = last; ; insn = PREV_INSN (insn))
4277 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4278 && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
4280 /* If this note refers to a multiple word hard register, it
4281 may have been split into several smaller hard register
4282 references, so handle it specially. */
4283 temp = XEXP (note, 0);
4284 if (REG_NOTE_KIND (note) == REG_DEAD
4285 && GET_CODE (temp) == REG
4286 && REGNO (temp) < FIRST_PSEUDO_REGISTER
4287 && HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) > 1)
4288 split_hard_reg_notes (note, first, last, orig_insn);
4289 else
4291 XEXP (note, 1) = REG_NOTES (insn);
4292 REG_NOTES (insn) = note;
4295 /* Sometimes need to convert REG_UNUSED notes to REG_DEAD
4296 notes. */
4297 /* ??? This won't handle multiple word registers correctly,
4298 but should be good enough for now. */
4299 if (REG_NOTE_KIND (note) == REG_UNUSED
4300 && ! dead_or_set_p (insn, XEXP (note, 0)))
4301 PUT_REG_NOTE_KIND (note, REG_DEAD);
4303 /* The reg only dies in one insn, the last one that uses
4304 it. */
4305 break;
4307 /* It must die somewhere; fail if we couldn't find where it died.
4309 If this is a REG_UNUSED note, then it must be a temporary
4310 register that was not needed by this instantiation of the
4311 pattern, so we can safely ignore it. */
4312 if (insn == first)
4314 if (REG_NOTE_KIND (note) != REG_UNUSED)
4315 abort ();
4317 break;
4320 break;
4322 case REG_WAS_0:
4323 /* This note applies to the dest of the original insn. Find the
4324 first new insn that now has the same dest, and move the note
4325 there. */
4327 if (! orig_dest)
4328 abort ();
4330 for (insn = first; ; insn = NEXT_INSN (insn))
4332 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4333 && (temp = single_set (insn))
4334 && rtx_equal_p (SET_DEST (temp), orig_dest))
4336 XEXP (note, 1) = REG_NOTES (insn);
4337 REG_NOTES (insn) = note;
4338 /* The reg is only zero before one insn, the first that
4339 uses it. */
4340 break;
4342 /* It must be set somewhere; fail if we couldn't find where it
4343 was set. */
4344 if (insn == last)
4345 abort ();
4347 break;
4349 case REG_EQUAL:
4350 case REG_EQUIV:
4351 /* A REG_EQUIV or REG_EQUAL note on an insn with more than one
4352 set is meaningless. Just drop the note. */
4353 if (! orig_dest)
4354 break;
4356 case REG_NO_CONFLICT:
4357 /* These notes apply to the dest of the original insn. Find the last
4358 new insn that now has the same dest, and move the note there. */
4360 if (! orig_dest)
4361 abort ();
4363 for (insn = last; ; insn = PREV_INSN (insn))
4365 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4366 && (temp = single_set (insn))
4367 && rtx_equal_p (SET_DEST (temp), orig_dest))
4369 XEXP (note, 1) = REG_NOTES (insn);
4370 REG_NOTES (insn) = note;
4371 /* Only put this note on one of the new insns. */
4372 break;
4375 /* The original dest must still be set someplace. Abort if we
4376 couldn't find it. */
4377 if (insn == first)
4378 abort ();
4380 break;
4382 case REG_LIBCALL:
4383 /* Move a REG_LIBCALL note to the first insn created, and update
4384 the corresponding REG_RETVAL note. */
4385 XEXP (note, 1) = REG_NOTES (first);
4386 REG_NOTES (first) = note;
4388 insn = XEXP (note, 0);
4389 note = find_reg_note (insn, REG_RETVAL, NULL_RTX);
4390 if (note)
4391 XEXP (note, 0) = first;
4392 break;
4394 case REG_RETVAL:
4395 /* Move a REG_RETVAL note to the last insn created, and update
4396 the corresponding REG_LIBCALL note. */
4397 XEXP (note, 1) = REG_NOTES (last);
4398 REG_NOTES (last) = note;
4400 insn = XEXP (note, 0);
4401 note = find_reg_note (insn, REG_LIBCALL, NULL_RTX);
4402 if (note)
4403 XEXP (note, 0) = last;
4404 break;
4406 case REG_NONNEG:
4407 /* This should be moved to whichever instruction is a JUMP_INSN. */
4409 for (insn = last; ; insn = PREV_INSN (insn))
4411 if (GET_CODE (insn) == JUMP_INSN)
4413 XEXP (note, 1) = REG_NOTES (insn);
4414 REG_NOTES (insn) = note;
4415 /* Only put this note on one of the new insns. */
4416 break;
4418 /* Fail if we couldn't find a JUMP_INSN. */
4419 if (insn == first)
4420 abort ();
4422 break;
4424 case REG_INC:
4425 /* This should be moved to whichever instruction now has the
4426 increment operation. */
4427 abort ();
4429 case REG_LABEL:
4430 /* Should be moved to the new insn(s) which use the label. */
4431 for (insn = first; insn != NEXT_INSN (last); insn = NEXT_INSN (insn))
4432 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4433 && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
4434 REG_NOTES (insn) = gen_rtx (EXPR_LIST, REG_LABEL,
4435 XEXP (note, 0), REG_NOTES (insn));
4436 break;
4438 case REG_CC_SETTER:
4439 case REG_CC_USER:
4440 /* These two notes will never appear until after reorg, so we don't
4441 have to handle them here. */
4442 default:
4443 abort ();
4447 /* Each new insn created, except the last, has a new set. If the destination
4448 is a register, then this reg is now live across several insns, whereas
4449 previously the dest reg was born and died within the same insn. To
4450 reflect this, we now need a REG_DEAD note on the insn where this
4451 dest reg dies.
4453 Similarly, the new insns may have clobbers that need REG_UNUSED notes. */
4455 for (insn = first; insn != last; insn = NEXT_INSN (insn))
4457 rtx pat;
4458 int i;
4460 pat = PATTERN (insn);
4461 if (GET_CODE (pat) == SET || GET_CODE (pat) == CLOBBER)
4462 new_insn_dead_notes (pat, insn, last, orig_insn);
4463 else if (GET_CODE (pat) == PARALLEL)
4465 for (i = 0; i < XVECLEN (pat, 0); i++)
4466 if (GET_CODE (XVECEXP (pat, 0, i)) == SET
4467 || GET_CODE (XVECEXP (pat, 0, i)) == CLOBBER)
4468 new_insn_dead_notes (XVECEXP (pat, 0, i), insn, last, orig_insn);
4472 /* If any insn, except the last, uses the register set by the last insn,
4473 then we need a new REG_DEAD note on that insn. In this case, there
4474 would not have been a REG_DEAD note for this register in the original
4475 insn because it was used and set within one insn.
4477 There is no new REG_DEAD note needed if the last insn uses the register
4478 that it is setting. */
4480 set = single_set (last);
4481 if (set)
4483 rtx dest = SET_DEST (set);
4485 while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
4486 || GET_CODE (dest) == STRICT_LOW_PART
4487 || GET_CODE (dest) == SIGN_EXTRACT)
4488 dest = XEXP (dest, 0);
4490 if (GET_CODE (dest) == REG
4491 && ! reg_overlap_mentioned_p (dest, SET_SRC (set)))
4493 for (insn = PREV_INSN (last); ; insn = PREV_INSN (insn))
4495 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4496 && reg_mentioned_p (dest, PATTERN (insn))
4497 && (set = single_set (insn)))
4499 rtx insn_dest = SET_DEST (set);
4501 while (GET_CODE (insn_dest) == ZERO_EXTRACT
4502 || GET_CODE (insn_dest) == SUBREG
4503 || GET_CODE (insn_dest) == STRICT_LOW_PART
4504 || GET_CODE (insn_dest) == SIGN_EXTRACT)
4505 insn_dest = XEXP (insn_dest, 0);
4507 if (insn_dest != dest)
4509 note = rtx_alloc (EXPR_LIST);
4510 PUT_REG_NOTE_KIND (note, REG_DEAD);
4511 XEXP (note, 0) = dest;
4512 XEXP (note, 1) = REG_NOTES (insn);
4513 REG_NOTES (insn) = note;
4514 /* The reg only dies in one insn, the last one
4515 that uses it. */
4516 break;
4519 if (insn == first)
4520 break;
4525 /* If the original dest is modifying a multiple register target, and the
4526 original instruction was split such that the original dest is now set
4527 by two or more SUBREG sets, then the split insns no longer kill the
4528 destination of the original insn.
4530 In this case, if there exists an instruction in the same basic block,
4531 before the split insn, which uses the original dest, and this use is
4532 killed by the original insn, then we must remove the REG_DEAD note on
4533 this insn, because it is now superfluous.
4535 This does not apply when a hard register gets split, because the code
4536 knows how to handle overlapping hard registers properly. */
4537 if (orig_dest && GET_CODE (orig_dest) == REG)
4539 int found_orig_dest = 0;
4540 int found_split_dest = 0;
4542 for (insn = first; ; insn = NEXT_INSN (insn))
4544 set = single_set (insn);
4545 if (set)
4547 if (GET_CODE (SET_DEST (set)) == REG
4548 && REGNO (SET_DEST (set)) == REGNO (orig_dest))
4550 found_orig_dest = 1;
4551 break;
4553 else if (GET_CODE (SET_DEST (set)) == SUBREG
4554 && SUBREG_REG (SET_DEST (set)) == orig_dest)
4556 found_split_dest = 1;
4557 break;
4561 if (insn == last)
4562 break;
4565 if (found_split_dest)
4567 /* Search backwards from FIRST, looking for the first insn that uses
4568 the original dest. Stop if we pass a CODE_LABEL or a JUMP_INSN.
4569 If we find an insn, and it has a REG_DEAD note, then delete the
4570 note. */
4572 for (insn = first; insn; insn = PREV_INSN (insn))
4574 if (GET_CODE (insn) == CODE_LABEL
4575 || GET_CODE (insn) == JUMP_INSN)
4576 break;
4577 else if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4578 && reg_mentioned_p (orig_dest, insn))
4580 note = find_regno_note (insn, REG_DEAD, REGNO (orig_dest));
4581 if (note)
4582 remove_note (insn, note);
4586 else if (! found_orig_dest)
4588 /* This should never happen. */
4589 abort ();
4593 /* Update reg_n_sets. This is necessary to prevent local alloc from
4594 converting REG_EQUAL notes to REG_EQUIV when splitting has modified
4595 a reg from set once to set multiple times. */
4598 rtx x = PATTERN (orig_insn);
4599 RTX_CODE code = GET_CODE (x);
4601 if (code == SET || code == CLOBBER)
4602 update_n_sets (x, -1);
4603 else if (code == PARALLEL)
4605 int i;
4606 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
4608 code = GET_CODE (XVECEXP (x, 0, i));
4609 if (code == SET || code == CLOBBER)
4610 update_n_sets (XVECEXP (x, 0, i), -1);
4614 for (insn = first; ; insn = NEXT_INSN (insn))
4616 x = PATTERN (insn);
4617 code = GET_CODE (x);
4619 if (code == SET || code == CLOBBER)
4620 update_n_sets (x, 1);
4621 else if (code == PARALLEL)
4623 int i;
4624 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
4626 code = GET_CODE (XVECEXP (x, 0, i));
4627 if (code == SET || code == CLOBBER)
4628 update_n_sets (XVECEXP (x, 0, i), 1);
4632 if (insn == last)
4633 break;
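/* For reference, where update_flow_info moves each kind of note:

	REG_DEAD, REG_UNUSED	last new insn mentioning the reg
	REG_WAS_0		first new insn setting the original dest
	REG_EQUAL, REG_EQUIV	last new insn setting the original dest
				(dropped if the original had multiple sets)
	REG_NO_CONFLICT		last new insn setting the original dest
	REG_LIBCALL		first new insn; REG_RETVAL updated to match
	REG_RETVAL		last new insn; REG_LIBCALL updated to match
	REG_NONNEG		the new JUMP_INSN
	REG_LABEL		every new insn mentioning the label  */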
4638 /* The one entry point in this file. DUMP_FILE is the dump file for
4639 this pass. */
4641 void
4642 schedule_insns (dump_file)
4643 FILE *dump_file;
4645 int max_uid = MAX_INSNS_PER_SPLIT * (get_max_uid () + 1);
4646 int b;
4647 rtx insn;
4649 /* Taking care of this degenerate case makes the rest of
4650 this code simpler. */
4651 if (n_basic_blocks == 0)
4652 return;
4654 /* Create an insn here so that we can hang dependencies off of it later. */
4655 sched_before_next_call
4656 = gen_rtx (INSN, VOIDmode, 0, NULL_RTX, NULL_RTX,
4657 NULL_RTX, 0, NULL_RTX, 0);
4659 /* Initialize the unused_*_lists. We can't use the ones left over from
4660 the previous function, because gcc has freed that memory. We can use
4661 the ones left over from the first sched pass in the second pass, however,
4662 so only clear them on the first sched pass. The first pass is before
4663 reload if flag_schedule_insns is set, otherwise it is afterwards. */
4665 if (reload_completed == 0 || ! flag_schedule_insns)
4667 unused_insn_list = 0;
4668 unused_expr_list = 0;
4671 /* We create no insns here, only reorder them, so we
4672 remember how far we can cut back the stack on exit. */
4674 /* Allocate data for this pass. See comments, above,
4675 for what these vectors do. */
4676 insn_luid = (int *) alloca (max_uid * sizeof (int));
4677 insn_priority = (int *) alloca (max_uid * sizeof (int));
4678 insn_tick = (int *) alloca (max_uid * sizeof (int));
4679 insn_costs = (short *) alloca (max_uid * sizeof (short));
4680 insn_units = (short *) alloca (max_uid * sizeof (short));
4681 insn_blockage = (unsigned int *) alloca (max_uid * sizeof (unsigned int));
4682 insn_ref_count = (int *) alloca (max_uid * sizeof (int));
4684 if (reload_completed == 0)
4686 sched_reg_n_deaths = (short *) alloca (max_regno * sizeof (short));
4687 sched_reg_n_calls_crossed = (int *) alloca (max_regno * sizeof (int));
4688 sched_reg_live_length = (int *) alloca (max_regno * sizeof (int));
4689 bb_dead_regs = (regset) alloca (regset_bytes);
4690 bb_live_regs = (regset) alloca (regset_bytes);
4691 bzero ((char *) sched_reg_n_calls_crossed, max_regno * sizeof (int));
4692 bzero ((char *) sched_reg_live_length, max_regno * sizeof (int));
4693 bcopy ((char *) reg_n_deaths, (char *) sched_reg_n_deaths,
4694 max_regno * sizeof (short));
4695 init_alias_analysis ();
4697 else
4699 sched_reg_n_deaths = 0;
4700 sched_reg_n_calls_crossed = 0;
4701 sched_reg_live_length = 0;
4702 bb_dead_regs = 0;
4703 bb_live_regs = 0;
4704 if (! flag_schedule_insns)
4705 init_alias_analysis ();
4708 if (write_symbols != NO_DEBUG)
4710 rtx line;
4712 line_note = (rtx *) alloca (max_uid * sizeof (rtx));
4713 bzero ((char *) line_note, max_uid * sizeof (rtx));
4714 line_note_head = (rtx *) alloca (n_basic_blocks * sizeof (rtx));
4715 bzero ((char *) line_note_head, n_basic_blocks * sizeof (rtx));
4717 /* Determine the line-number at the start of each basic block.
4718 This must be computed and saved now, because after a basic block's
4719 predecessor has been scheduled, it is impossible to accurately
4720 determine the correct line number for the first insn of the block. */
4722 for (b = 0; b < n_basic_blocks; b++)
4723 for (line = basic_block_head[b]; line; line = PREV_INSN (line))
4724 if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
4726 line_note_head[b] = line;
4727 break;
4731 bzero ((char *) insn_luid, max_uid * sizeof (int));
4732 bzero ((char *) insn_priority, max_uid * sizeof (int));
4733 bzero ((char *) insn_tick, max_uid * sizeof (int));
4734 bzero ((char *) insn_costs, max_uid * sizeof (short));
4735 bzero ((char *) insn_units, max_uid * sizeof (short));
4736 bzero ((char *) insn_blockage, max_uid * sizeof (unsigned int));
4737 bzero ((char *) insn_ref_count, max_uid * sizeof (int));
4739 /* Schedule each basic block, block by block. */
4741 /* ??? Add a NOTE after the last insn of the last basic block. It is not
4742 known why this is done. */
4744 insn = basic_block_end[n_basic_blocks-1];
4745 if (NEXT_INSN (insn) == 0
4746 || (GET_CODE (insn) != NOTE
4747 && GET_CODE (insn) != CODE_LABEL
4748 /* Don't emit a NOTE if it would end up between an unconditional
4749 jump and a BARRIER. */
4750 && ! (GET_CODE (insn) == JUMP_INSN
4751 && GET_CODE (NEXT_INSN (insn)) == BARRIER)))
4752 emit_note_after (NOTE_INSN_DELETED, basic_block_end[n_basic_blocks-1]);
4754 for (b = 0; b < n_basic_blocks; b++)
4756 rtx insn, next;
4758 note_list = 0;
4760 for (insn = basic_block_head[b]; ; insn = next)
4762 rtx prev;
4763 rtx set;
4765 /* Can't use `next_real_insn' because that
4766 might go across CODE_LABELS and short-out basic blocks. */
4767 next = NEXT_INSN (insn);
4768 if (GET_CODE (insn) != INSN)
4770 if (insn == basic_block_end[b])
4771 break;
4773 continue;
4776 /* Don't split no-op move insns. These should silently disappear
4777 later in final. Splitting such insns would break the code
4778 that handles REG_NO_CONFLICT blocks. */
4779 set = single_set (insn);
4780 if (set && rtx_equal_p (SET_SRC (set), SET_DEST (set)))
4782 if (insn == basic_block_end[b])
4783 break;
4785 /* Nops get in the way while scheduling, so delete them now if
4786 register allocation has already been done. It is too risky
4787 to try to do this before register allocation, and there are
4788 unlikely to be very many nops then anyway. */
4789 if (reload_completed)
4791 PUT_CODE (insn, NOTE);
4792 NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
4793 NOTE_SOURCE_FILE (insn) = 0;
4796 continue;
4799 /* Split insns here to get max fine-grain parallelism. */
4800 prev = PREV_INSN (insn);
4801 if (reload_completed == 0)
4803 rtx last, first = PREV_INSN (insn);
4804 rtx notes = REG_NOTES (insn);
4806 last = try_split (PATTERN (insn), insn, 1);
4807 if (last != insn)
4809 /* try_split returns the NOTE that INSN became. */
4810 first = NEXT_INSN (first);
4811 update_flow_info (notes, first, last, insn);
4813 PUT_CODE (insn, NOTE);
4814 NOTE_SOURCE_FILE (insn) = 0;
4815 NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
4816 if (insn == basic_block_head[b])
4817 basic_block_head[b] = first;
4818 if (insn == basic_block_end[b])
4820 basic_block_end[b] = last;
4821 break;
4826 if (insn == basic_block_end[b])
4827 break;
4830 schedule_block (b, dump_file);
4832 #ifdef USE_C_ALLOCA
4833 alloca (0);
4834 #endif
4837 /* Reposition the prologue and epilogue notes in case we moved the
4838 prologue/epilogue insns. */
4839 if (reload_completed)
4840 reposition_prologue_and_epilogue_notes (get_insns ());
4842 if (write_symbols != NO_DEBUG)
4844 rtx line = 0;
4845 rtx insn = get_insns ();
4846 int active_insn = 0;
4847 int notes = 0;
4849 /* Walk the insns, deleting redundant line-number notes. Many of
4850 these were already present before scheduling; the remainder tend
4851 to occur at basic block boundaries. */
4852 for (insn = get_last_insn (); insn; insn = PREV_INSN (insn))
4853 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
4855 /* If there are no active insns following, INSN is redundant. */
4856 if (active_insn == 0)
4858 notes++;
4859 NOTE_SOURCE_FILE (insn) = 0;
4860 NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
4862 /* If the line number is unchanged, LINE is redundant. */
4863 else if (line
4864 && NOTE_LINE_NUMBER (line) == NOTE_LINE_NUMBER (insn)
4865 && NOTE_SOURCE_FILE (line) == NOTE_SOURCE_FILE (insn))
4867 notes++;
4868 NOTE_SOURCE_FILE (line) = 0;
4869 NOTE_LINE_NUMBER (line) = NOTE_INSN_DELETED;
4870 line = insn;
4872 else
4873 line = insn;
4874 active_insn = 0;
4876 else if (! ((GET_CODE (insn) == NOTE
4877 && NOTE_LINE_NUMBER (insn) == NOTE_INSN_DELETED)
4878 || (GET_CODE (insn) == INSN
4879 && (GET_CODE (PATTERN (insn)) == USE
4880 || GET_CODE (PATTERN (insn)) == CLOBBER))))
4881 active_insn++;
4883 if (dump_file && notes)
4884 fprintf (dump_file, ";; deleted %d line-number notes\n", notes);
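/* A minimal, self-contained model of the backward sweep above, kept
   under "#if 0" so it is never compiled: 0 stands for a real insn,
   any other value for a line note with that line number.  A note with
   no active insn after it, or one repeating the surviving note's
   line, is redundant.  Names and data here are illustrative only.  */
#if 0
#include <stdio.h>

int
main ()
{
  static int stream[8] = { 10, 0, 10, 10, 0, 12, 12, 12 };
  int i, active = 0, last_line = -1, deleted = 0;

  for (i = 7; i >= 0; i--)
    if (stream[i] != 0)
      {
	if (active == 0 || stream[i] == last_line)
	  deleted++;			/* redundant note */
	else
	  last_line = stream[i];
	active = 0;
      }
    else
      active = 1;

  printf (";; deleted %d line-number notes\n", deleted);
  return 0;
}
#endif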
4887 if (reload_completed == 0)
4889 int regno;
4890 for (regno = 0; regno < max_regno; regno++)
4891 if (sched_reg_live_length[regno])
4893 if (dump_file)
4895 if (reg_live_length[regno] > sched_reg_live_length[regno])
4896 fprintf (dump_file,
4897 ";; register %d life shortened from %d to %d\n",
4898 regno, reg_live_length[regno],
4899 sched_reg_live_length[regno]);
4900 /* Negative values are special; don't overwrite the current
4901 reg_live_length value if it is negative. */
4902 else if (reg_live_length[regno] < sched_reg_live_length[regno]
4903 && reg_live_length[regno] >= 0)
4904 fprintf (dump_file,
4905 ";; register %d life extended from %d to %d\n",
4906 regno, reg_live_length[regno],
4907 sched_reg_live_length[regno]);
4909 if (! reg_n_calls_crossed[regno]
4910 && sched_reg_n_calls_crossed[regno])
4911 fprintf (dump_file,
4912 ";; register %d now crosses calls\n", regno);
4913 else if (reg_n_calls_crossed[regno]
4914 && ! sched_reg_n_calls_crossed[regno]
4915 && reg_basic_block[regno] != REG_BLOCK_GLOBAL)
4916 fprintf (dump_file,
4917 ";; register %d no longer crosses calls\n", regno);
4920 /* Negative values are special; don't overwrite the current
4921 reg_live_length value if it is negative. */
4922 if (reg_live_length[regno] >= 0)
4923 reg_live_length[regno] = sched_reg_live_length[regno];
4925 /* We can't change the value of reg_n_calls_crossed to zero for
4926 pseudos which are live in more than one block.
4928 This is because combine might have made an optimization which
4929 invalidated basic_block_live_at_start and reg_n_calls_crossed,
4930 but it does not update them. If we update reg_n_calls_crossed
4931 here, the two variables are now inconsistent, and this might
4932 confuse the caller-save code into saving a register that doesn't
4933 need to be saved. This is only a problem when we zero calls
4934 crossed for a pseudo live in multiple basic blocks.
4936 Alternatively, we could try to correctly update basic block live
4937 at start here in sched, but that seems complicated. */
4938 if (sched_reg_n_calls_crossed[regno]
4939 || reg_basic_block[regno] != REG_BLOCK_GLOBAL)
4940 reg_n_calls_crossed[regno] = sched_reg_n_calls_crossed[regno];
4944 #endif /* INSN_SCHEDULING */