/* Instruction scheduling pass.
   Copyright (C) 1992, 93-97, 1998 Free Software Foundation, Inc.
   Contributed by Michael Tiemann (tiemann@cygnus.com)
   Enhanced by, and currently maintained by, Jim Wilson (wilson@cygnus.com)

This file is part of GNU CC.

GNU CC is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.

GNU CC is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with GNU CC; see the file COPYING.  If not, write to
the Free Software Foundation, 59 Temple Place - Suite 330,
Boston, MA 02111-1307, USA.  */
/* Instruction scheduling pass.

   This pass implements list scheduling within basic blocks.  It is
   run after flow analysis, but before register allocation.  The
   scheduler works as follows:

   We compute insn priorities based on data dependencies.  Flow
   analysis only creates a fraction of the data-dependencies we must
   observe: namely, only those dependencies which the combiner can be
   expected to use.  For this pass, we must therefore create the
   remaining dependencies we need to observe: register dependencies,
   memory dependencies, dependencies to keep function calls in order,
   and the dependence between a conditional branch and the setting of
   condition codes are all dealt with here.

   The scheduler first traverses the data flow graph, starting with
   the last instruction, and proceeding to the first, assigning
   values to insn_priority as it goes.  This sorts the instructions
   topologically by data dependence.

   Once priorities have been established, we order the insns using
   list scheduling.  This works as follows: starting with a list of
   all the ready insns, and sorted according to priority number, we
   schedule the insn from the end of the list by placing its
   predecessors in the list according to their priority order.  We
   consider this insn scheduled by setting the pointer to the "end" of
   the list to point to the previous insn.  When an insn has no
   predecessors, we either queue it until sufficient time has elapsed
   or add it to the ready list.  As the instructions are scheduled or
   when stalls are introduced, the queue advances and dumps insns into
   the ready list.  When all insns down to the lowest priority have
   been scheduled, the critical path of the basic block has been made
   as short as possible.  The remaining insns are then scheduled in
   remaining slots.

   Function unit conflicts are resolved during reverse list scheduling
   by tracking the time when each insn is committed to the schedule
   and from that, the time the function units it uses must be free.
   As insns on the ready list are considered for scheduling, those
   that would result in a blockage of the already committed insns are
   queued until no blockage will result.  Among the remaining insns on
   the ready list to be considered, the first one with the largest
   potential for causing a subsequent blockage is chosen.

   The following list shows the order in which we want to break ties
   among insns in the ready list:

   1.  choose insn with lowest conflict cost, ties broken by
   2.  choose insn with the longest path to end of bb, ties broken by
   3.  choose insn that kills the most registers, ties broken by
   4.  choose insn that conflicts with the most ready insns, or finally
   5.  choose insn with lowest UID.

   Memory references complicate matters.  Only if we can be certain
   that memory references are not part of the data dependency graph
   (via true, anti, or output dependence), can we move operations past
   memory references.  To first approximation, reads can be done
   independently, while writes introduce dependencies.  Better
   approximations will yield fewer dependencies.

   Dependencies set up by memory references are treated in exactly the
   same way as other dependencies, by using LOG_LINKS.

   Having optimized the critical path, we may have also unduly
   extended the lifetimes of some registers.  If an operation requires
   that constants be loaded into registers, it is certainly desirable
   to load those constants as early as necessary, but no earlier.
   I.e., it will not do to load up a bunch of registers at the
   beginning of a basic block only to use them at the end, if they
   could be loaded later, since this may result in excessive register
   utilization.

   Note that since branches are never in basic blocks, but only end
   basic blocks, this pass will not do any branch scheduling.  But
   that is ok, since we can use GNU's delayed branch scheduling
   pass to take care of this case.

   Also note that no further optimizations based on algebraic identities
   are performed, so this pass would be a good one to perform instruction
   splitting, such as breaking up a multiply instruction into shifts
   and adds where that is profitable.

   Given the memory aliasing analysis that this pass should perform,
   it should be possible to remove redundant stores to memory, and to
   load values from registers instead of hitting memory.

   This pass must update information that subsequent passes expect to be
   correct.  Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
   reg_n_calls_crossed, and reg_live_length.  Also, basic_block_head,
   basic_block_end.

   The information in the line number notes is carefully retained by
   this pass.  Notes that refer to the starting and ending of
   exception regions are also carefully retained by this pass.  All
   other NOTE insns are grouped in their same relative order at the
   beginning of basic blocks that have been scheduled.  */
#include "config.h"
#include "system.h"
#include "rtl.h"
#include "basic-block.h"
#include "regs.h"
#include "hard-reg-set.h"
#include "flags.h"
#include "insn-config.h"
#include "insn-attr.h"

extern char *reg_known_equiv_p;
extern rtx *reg_known_value;
#ifdef INSN_SCHEDULING
/* Arrays set up by scheduling for the same respective purposes as
   similar-named arrays set up by flow analysis.  We work with these
   arrays during the scheduling pass so we can compare values against
   unscheduled code.

   Values of these arrays are copied at the end of this pass into the
   arrays set up by flow analysis.  */
static int *sched_reg_n_calls_crossed;
static int *sched_reg_live_length;

/* Element N is the next insn that sets (hard or pseudo) register
   N within the current basic block; or zero, if there is no
   such insn.  Needed for new registers which may be introduced
   by splitting insns.  */
static rtx *reg_last_uses;
static rtx *reg_last_sets;
static regset reg_pending_sets;
static int reg_pending_sets_all;

/* Vector indexed by INSN_UID giving the original ordering of the insns.  */
static int *insn_luid;
#define INSN_LUID(INSN) (insn_luid[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving each instruction a priority.  */
static int *insn_priority;
#define INSN_PRIORITY(INSN) (insn_priority[INSN_UID (INSN)])

static short *insn_costs;
#define INSN_COST(INSN) insn_costs[INSN_UID (INSN)]

/* Vector indexed by INSN_UID giving an encoding of the function units
   used.  */
static short *insn_units;
#define INSN_UNIT(INSN) insn_units[INSN_UID (INSN)]

/* Vector indexed by INSN_UID giving an encoding of the blockage range
   function.  The unit and the range are encoded.  */
static unsigned int *insn_blockage;
#define INSN_BLOCKAGE(INSN) insn_blockage[INSN_UID (INSN)]
#define UNIT_BITS 5
#define BLOCKAGE_MASK ((1 << BLOCKAGE_BITS) - 1)
#define ENCODE_BLOCKAGE(U,R)				\
  ((((U) << UNIT_BITS) << BLOCKAGE_BITS			\
    | MIN_BLOCKAGE_COST (R)) << BLOCKAGE_BITS		\
   | MAX_BLOCKAGE_COST (R))
#define UNIT_BLOCKED(B) ((B) >> (2 * BLOCKAGE_BITS))
#define BLOCKAGE_RANGE(B)                                                \
  (((((B) >> BLOCKAGE_BITS) & BLOCKAGE_MASK) << (HOST_BITS_PER_INT / 2)) \
   | ((B) & BLOCKAGE_MASK))

/* Encodings of the `<name>_unit_blockage_range' function.  */
#define MIN_BLOCKAGE_COST(R) ((R) >> (HOST_BITS_PER_INT / 2))
#define MAX_BLOCKAGE_COST(R) ((R) & ((1 << (HOST_BITS_PER_INT / 2)) - 1))
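
/* A worked example of the range encoding (an illustration, not part of
   the original source): on a host where HOST_BITS_PER_INT is 32, a
   `<name>_unit_blockage_range' result R == (2 << 16) | 6 describes a
   minimum blockage of 2 cycles and a maximum of 6 cycles, since
   MIN_BLOCKAGE_COST (R) == (R) >> 16 == 2 and
   MAX_BLOCKAGE_COST (R) == (R) & 0xffff == 6.  */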
#define DONE_PRIORITY -1
#define MAX_PRIORITY 0x7fffffff
#define TAIL_PRIORITY 0x7ffffffe
#define LAUNCH_PRIORITY 0x7f000001
#define DONE_PRIORITY_P(INSN) (INSN_PRIORITY (INSN) < 0)
#define LOW_PRIORITY_P(INSN) ((INSN_PRIORITY (INSN) & 0x7f000000) == 0)

/* Vector indexed by INSN_UID giving number of insns referring to this insn.  */
static int *insn_ref_count;
#define INSN_REF_COUNT(INSN) (insn_ref_count[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving line-number note in effect for each
   insn.  For line-number notes, this indicates whether the note may be
   reused.  */
static rtx *line_note;
#define LINE_NOTE(INSN) (line_note[INSN_UID (INSN)])

/* Vector indexed by basic block number giving the starting line-number
   for each basic block.  */
static rtx *line_note_head;

/* List of important notes we must keep around.  This is a pointer to the
   last element in the list.  */
static rtx note_list;

/* Regsets telling whether a given register is live or dead before the last
   scheduled insn.  Must scan the instructions once before scheduling to
   determine what registers are live or dead at the end of the block.  */
static regset bb_dead_regs;
static regset bb_live_regs;

/* Regset telling whether a given register is live after the insn currently
   being scheduled.  Before processing an insn, this is equal to bb_live_regs
   above.  This is used so that we can find registers that are newly born/dead
   after processing an insn.  */
static regset old_live_regs;

/* The chain of REG_DEAD notes.  REG_DEAD notes are removed from all insns
   during the initial scan and reused later.  If there are not exactly as
   many REG_DEAD notes in the post-scheduled code as there were in the
   prescheduled code then we trigger an abort because this indicates a bug.  */
static rtx dead_notes;

/* Queues, etc.  */

/* An instruction is ready to be scheduled when all insns following it
   have already been scheduled.  It is important to ensure that all
   insns which use its result will not be executed until its result
   has been computed.  An insn is maintained in one of four structures:

   (P) the "Pending" set of insns which cannot be scheduled until
       their dependencies have been satisfied.
   (Q) the "Queued" set of insns that can be scheduled when sufficient
       time has passed.
   (R) the "Ready" list of unscheduled, uncommitted insns.
   (S) the "Scheduled" list of insns.

   Initially, all insns are either "Pending" or "Ready" depending on
   whether their dependencies are satisfied.

   Insns move from the "Ready" list to the "Scheduled" list as they
   are committed to the schedule.  As this occurs, the insns in the
   "Pending" list have their dependencies satisfied and move to either
   the "Ready" list or the "Queued" set depending on whether
   sufficient time has passed to make them ready.  As time passes,
   insns move from the "Queued" set to the "Ready" list.  Insns may
   move from the "Ready" list to the "Queued" set if they are blocked
   due to a function unit conflict.

   The "Pending" list (P) consists of the insns in the LOG_LINKS of the
   unscheduled insns, i.e., those that are ready, queued, and pending.
   The "Queued" set (Q) is implemented by the variable `insn_queue'.
   The "Ready" list (R) is implemented by the variables `ready' and
   `n_ready'.
   The "Scheduled" list (S) is the new insn chain built by this pass.

   The transition (R->S) is implemented in the scheduling loop in
   `schedule_block' when the best insn to schedule is chosen.
   The transition (R->Q) is implemented in `schedule_select' when an
   insn is found to have a function unit conflict with the already
   committed insns.
   The transitions (P->R and P->Q) are implemented in `schedule_insn' as
   insns move from the ready list to the scheduled list.
   The transition (Q->R) is implemented at the top of the scheduling
   loop in `schedule_block' as time passes or stalls are introduced.  */
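
/* To illustrate (a sketch, not part of the original source), one
   possible path of a single insn through the structures above:

     P --(all dependencies satisfied)--> Q or R
     Q --(enough clock ticks elapse)---> R
     R --(function unit conflict)------> Q
     R --(chosen as best candidate)----> S

   matching the transitions implemented in schedule_insn,
   schedule_select and schedule_block as described above.  */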
/* Implement a circular buffer to delay instructions until sufficient
   time has passed.  INSN_QUEUE_SIZE is a power of two larger than
   MAX_BLOCKAGE and MAX_READY_COST computed by genattr.c.  This is the
   longest time an insn may be queued.  */
static rtx insn_queue[INSN_QUEUE_SIZE];
static int q_ptr = 0;
static int q_size = 0;
#define NEXT_Q(X) (((X)+1) & (INSN_QUEUE_SIZE-1))
#define NEXT_Q_AFTER(X,C) (((X)+C) & (INSN_QUEUE_SIZE-1))
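
/* For illustration (not part of the original source): because
   INSN_QUEUE_SIZE is a power of two, the masking above implements
   cheap wraparound.  If INSN_QUEUE_SIZE were 16, then
   NEXT_Q (15) == (15 + 1) & 15 == 0 and
   NEXT_Q_AFTER (14, 5) == (14 + 5) & 15 == 3.  */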
/* Vector indexed by INSN_UID giving the minimum clock tick at which
   the insn becomes ready.  This is used to note timing constraints for
   insns in the pending list.  */
static int *insn_tick;
#define INSN_TICK(INSN) (insn_tick[INSN_UID (INSN)])

/* Data structure for keeping track of register information
   during that register's life.  */

struct sometimes
{
  int regno;
  int live_length;
  int calls_crossed;
};

/* Forward declarations.  */
static void add_dependence PROTO((rtx, rtx, enum reg_note));
static void remove_dependence PROTO((rtx, rtx));
static rtx find_insn_list PROTO((rtx, rtx));
static int insn_unit PROTO((rtx));
static unsigned int blockage_range PROTO((int, rtx));
static void clear_units PROTO((void));
static void prepare_unit PROTO((int));
static int actual_hazard_this_instance PROTO((int, int, rtx, int, int));
static void schedule_unit PROTO((int, rtx, int));
static int actual_hazard PROTO((int, rtx, int, int));
static int potential_hazard PROTO((int, rtx, int));
static int insn_cost PROTO((rtx, rtx, rtx));
static int priority PROTO((rtx));
static void free_pending_lists PROTO((void));
static void add_insn_mem_dependence PROTO((rtx *, rtx *, rtx, rtx));
static void flush_pending_lists PROTO((rtx, int));
static void sched_analyze_1 PROTO((rtx, rtx));
static void sched_analyze_2 PROTO((rtx, rtx));
static void sched_analyze_insn PROTO((rtx, rtx, rtx));
static int sched_analyze PROTO((rtx, rtx));
static void sched_note_set PROTO((int, rtx, int));
static int rank_for_schedule PROTO((const GENERIC_PTR, const GENERIC_PTR));
static void swap_sort PROTO((rtx *, int));
static void queue_insn PROTO((rtx, int));
static int birthing_insn_p PROTO((rtx));
static void adjust_priority PROTO((rtx));
static int schedule_insn PROTO((rtx, rtx *, int, int));
static int schedule_select PROTO((rtx *, int, int, FILE *));
static void create_reg_dead_note PROTO((rtx, rtx));
static void attach_deaths PROTO((rtx, rtx, int));
static void attach_deaths_insn PROTO((rtx));
static rtx unlink_notes PROTO((rtx, rtx));
static int new_sometimes_live PROTO((struct sometimes *, int, int));
static void finish_sometimes_live PROTO((struct sometimes *, int));
static rtx reemit_notes PROTO((rtx, rtx));
static void schedule_block PROTO((int, FILE *));
static rtx regno_use_in PROTO((int, rtx));
static void split_hard_reg_notes PROTO((rtx, rtx, rtx, rtx));
static void new_insn_dead_notes PROTO((rtx, rtx, rtx, rtx));
static void update_n_sets PROTO((rtx, int));
static void update_flow_info PROTO((rtx, rtx, rtx, rtx));

/* Main entry point of this file.  */
void schedule_insns PROTO((FILE *));

#endif /* INSN_SCHEDULING */

#define SIZE_FOR_MODE(X) (GET_MODE_SIZE (GET_MODE (X)))

/* Helper functions for instruction scheduling.  */

/* Add ELEM wrapped in an INSN_LIST with reg note kind DEP_TYPE to the
   LOG_LINKS of INSN, if not already there.  DEP_TYPE indicates the type
   of dependence that this link represents.  */
static void
add_dependence (insn, elem, dep_type)
     rtx insn;
     rtx elem;
     enum reg_note dep_type;
{
  rtx link, next;

  /* Don't depend an insn on itself.  */
  if (insn == elem)
    return;

  /* If elem is part of a sequence that must be scheduled together, then
     make the dependence point to the last insn of the sequence.
     When HAVE_cc0, it is possible for NOTEs to exist between users and
     setters of the condition codes, so we must skip past notes here.
     Otherwise, NOTEs are impossible here.  */

  next = NEXT_INSN (elem);

#ifdef HAVE_cc0
  while (next && GET_CODE (next) == NOTE)
    next = NEXT_INSN (next);
#endif

  if (next && SCHED_GROUP_P (next)
      && GET_CODE (next) != CODE_LABEL)
    {
      /* Notes will never intervene here though, so don't bother checking
         for them.  */
      /* We must reject CODE_LABELs, so that we don't get confused by one
         that has LABEL_PRESERVE_P set, which is represented by the same
         bit in the rtl as SCHED_GROUP_P.  A CODE_LABEL can never be
         SCHED_GROUP_P.  */
      while (NEXT_INSN (next) && SCHED_GROUP_P (NEXT_INSN (next))
             && GET_CODE (NEXT_INSN (next)) != CODE_LABEL)
        next = NEXT_INSN (next);

      /* Again, don't depend an insn on itself.  */
      if (insn == next)
        return;

      /* Make the dependence to NEXT, the last insn of the group, instead
         of the original ELEM.  */
      elem = next;
    }

  /* Check that we don't already have this dependence.  */
  for (link = LOG_LINKS (insn); link; link = XEXP (link, 1))
    if (XEXP (link, 0) == elem)
      {
        /* If this is a more restrictive type of dependence than the existing
           one, then change the existing dependence to this type.  */
        if ((int) dep_type < (int) REG_NOTE_KIND (link))
          PUT_REG_NOTE_KIND (link, dep_type);
        return;
      }
  /* Might want to check one level of transitivity to save conses.  */

  link = rtx_alloc (INSN_LIST);
  /* Insn dependency, not data dependency.  */
  PUT_REG_NOTE_KIND (link, dep_type);
  XEXP (link, 0) = elem;
  XEXP (link, 1) = LOG_LINKS (insn);
  LOG_LINKS (insn) = link;
}
/* Remove ELEM wrapped in an INSN_LIST from the LOG_LINKS
   of INSN.  Abort if not found.  */

static void
remove_dependence (insn, elem)
     rtx insn;
     rtx elem;
{
  rtx prev, link;
  int found = 0;

  for (prev = 0, link = LOG_LINKS (insn); link; link = XEXP (link, 1))
    {
      if (XEXP (link, 0) == elem)
        {
          RTX_INTEGRATED_P (link) = 1;
          if (prev)
            XEXP (prev, 1) = XEXP (link, 1);
          else
            LOG_LINKS (insn) = XEXP (link, 1);
          found = 1;
        }
      else
        prev = link;
    }

  if (! found)
    abort ();
  return;
}
#ifndef INSN_SCHEDULING
void
schedule_insns (dump_file)
     FILE *dump_file;
{
}
#else
#ifndef __GNUC__
#define __inline
#endif
/* Computation of memory dependencies.  */

/* The *_insns and *_mems are paired lists.  Each pending memory operation
   will have a pointer to the MEM rtx on one list and a pointer to the
   containing insn on the other list in the same place in the list.  */

/* We can't use add_dependence like the old code did, because a single insn
   may have multiple memory accesses, and hence needs to be on the list
   once for each memory access.  Add_dependence won't let you add an insn
   to a list more than once.  */

/* An INSN_LIST containing all insns with pending read operations.  */
static rtx pending_read_insns;

/* An EXPR_LIST containing all MEM rtx's which are pending reads.  */
static rtx pending_read_mems;

/* An INSN_LIST containing all insns with pending write operations.  */
static rtx pending_write_insns;

/* An EXPR_LIST containing all MEM rtx's which are pending writes.  */
static rtx pending_write_mems;

/* Indicates the combined length of the two pending lists.  We must prevent
   these lists from ever growing too large since the number of dependencies
   produced is at least O(N*N), and execution time is at least O(4*N*N), as
   a function of the length of these pending lists.  */

static int pending_lists_length;

/* An INSN_LIST containing all INSN_LISTs allocated but currently unused.  */

static rtx unused_insn_list;

/* An EXPR_LIST containing all EXPR_LISTs allocated but currently unused.  */

static rtx unused_expr_list;

/* The last insn upon which all memory references must depend.
   This is an insn which flushed the pending lists, creating a dependency
   between it and all previously pending memory references.  This creates
   a barrier (or a checkpoint) which no memory reference is allowed to cross.

   This includes all non-constant CALL_INSNs.  When we do interprocedural
   alias analysis, this restriction can be relaxed.
   This may also be an INSN that writes memory if the pending lists grow
   too large.  */

static rtx last_pending_memory_flush;

/* The last function call we have seen.  All hard regs, and, of course,
   the last function call, must depend on this.  */

static rtx last_function_call;

/* The LOG_LINKS field of this is a list of insns which use a pseudo register
   that does not already cross a call.  We create dependencies between each
   of those insns and the next call insn, to ensure that they won't cross a
   call after scheduling is done.  */

static rtx sched_before_next_call;

/* Pointer to the last instruction scheduled.  Used by rank_for_schedule,
   so that insns independent of the last scheduled insn will be preferred
   over dependent instructions.  */

static rtx last_scheduled_insn;

/* Process an insn's memory dependencies.  There are four kinds of
   dependencies:

   (0) read dependence: read follows read
   (1) true dependence: read follows write
   (2) anti dependence: write follows read
   (3) output dependence: write follows write

   We are careful to build only dependencies which actually exist, and
   use transitivity to avoid building too many links.  */
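
/* For example (a sketch, not part of the original source), in the
   sequence

     (1) r1 = mem[a]   ;; read
     (2) mem[a] = r2   ;; write
     (3) mem[a] = r3   ;; write
     (4) r4 = mem[a]   ;; read

   insn (2) has an anti dependence on (1) (write follows read), insn
   (3) has an output dependence on (2) (write follows write), and insn
   (4) has a true dependence on (3) (read follows write).  */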
/* Return the INSN_LIST containing INSN in LIST, or NULL
   if LIST does not contain INSN.  */

__inline static rtx
find_insn_list (insn, list)
     rtx insn;
     rtx list;
{
  while (list)
    {
      if (XEXP (list, 0) == insn)
        return list;
      list = XEXP (list, 1);
    }
  return 0;
}
/* Compute the function units used by INSN.  This caches the value
   returned by function_units_used.  A function unit is encoded as the
   unit number if the value is non-negative and the complement of a
   mask if the value is negative.  A function unit index is the
   non-negative encoding.  */

__inline static int
insn_unit (insn)
     rtx insn;
{
  register int unit = INSN_UNIT (insn);

  if (unit == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
         We can't pass these directly to function_units_used because it will
         trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
        unit = -1;
      else
        {
          unit = function_units_used (insn);
          /* Increment non-negative values so we can cache zero.  */
          if (unit >= 0) unit++;
        }

      /* We only cache 16 bits of the result, so if the value is out of
         range, don't cache it.  */
      if (FUNCTION_UNITS_SIZE < HOST_BITS_PER_SHORT
          || unit >= 0
          || (~unit & ((1 << (HOST_BITS_PER_SHORT - 1)) - 1)) == 0)
        INSN_UNIT (insn) = unit;
    }
  return (unit > 0 ? unit - 1 : unit);
}
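
/* For illustration (not part of the original source): a cached value
   of 3 names function unit 2, since non-negative values are stored
   incremented so that zero can mean "not yet computed".  A negative
   value such as ~6 encodes the mask 0b110, i.e. the insn needs units
   1 and 2; prepare_unit and friends recover the unit indices by
   walking ~unit bit by bit.  */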
/* Compute the blockage range for executing INSN on UNIT.  This caches
   the value returned by the blockage_range_function for the unit.
   These values are encoded in an int where the upper half gives the
   minimum value and the lower half gives the maximum value.  */

__inline static unsigned int
blockage_range (unit, insn)
     int unit;
     rtx insn;
{
  unsigned int blockage = INSN_BLOCKAGE (insn);
  unsigned int range;

  if (UNIT_BLOCKED (blockage) != unit + 1)
    {
      range = function_units[unit].blockage_range_function (insn);
      /* We only cache the blockage range for one unit and then only if
         the values fit.  */
      if (HOST_BITS_PER_INT >= UNIT_BITS + 2 * BLOCKAGE_BITS)
        INSN_BLOCKAGE (insn) = ENCODE_BLOCKAGE (unit + 1, range);
    }
  else
    range = BLOCKAGE_RANGE (blockage);

  return range;
}

/* A vector indexed by function unit instance giving the last insn to use
   the unit.  The value of the function unit instance index for unit U
   instance I is (U + I * FUNCTION_UNITS_SIZE).  */
static rtx unit_last_insn[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit instance giving the minimum time when
   the unit will unblock based on the maximum blockage cost.  */
static int unit_tick[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit number giving the number of insns
   that remain to use the unit.  */
static int unit_n_insns[FUNCTION_UNITS_SIZE];
/* Reset the function unit state to the null state.  */

static void
clear_units ()
{
  bzero ((char *) unit_last_insn, sizeof (unit_last_insn));
  bzero ((char *) unit_tick, sizeof (unit_tick));
  bzero ((char *) unit_n_insns, sizeof (unit_n_insns));
}

/* Record an insn as one that will use the units encoded by UNIT.  */

__inline static void
prepare_unit (unit)
     int unit;
{
  int i;

  if (unit >= 0)
    unit_n_insns[unit]++;
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
        prepare_unit (i);
}

/* Return the actual hazard cost of executing INSN on the unit UNIT,
   instance INSTANCE at time CLOCK if the previous actual hazard cost
   was COST.  */

__inline static int
actual_hazard_this_instance (unit, instance, insn, clock, cost)
     int unit, instance, clock, cost;
     rtx insn;
{
  int tick = unit_tick[instance];

  if (tick - clock > cost)
    {
      /* The scheduler is operating in reverse, so INSN is the executing
         insn and the unit's last insn is the candidate insn.  We want a
         more exact measure of the blockage if we execute INSN at CLOCK
         given when we committed the execution of the unit's last insn.

         The blockage value is given by either the unit's max blockage
         constant, blockage range function, or blockage function.  Use
         the most exact form for the given unit.  */

      if (function_units[unit].blockage_range_function)
        {
          if (function_units[unit].blockage_function)
            tick += (function_units[unit].blockage_function
                     (insn, unit_last_insn[instance])
                     - function_units[unit].max_blockage);
          else
            tick += ((int) MAX_BLOCKAGE_COST (blockage_range (unit, insn))
                     - function_units[unit].max_blockage);
        }
      if (tick - clock > cost)
        cost = tick - clock;
    }
  return cost;
}
/* Record INSN as having begun execution on the units encoded by UNIT at
   time CLOCK.  */

__inline static void
schedule_unit (unit, insn, clock)
     int unit, clock;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      int instance = unit;
#if MAX_MULTIPLICITY > 1
      /* Find the first free instance of the function unit and use that
         one.  We assume that one is free.  */
      for (i = function_units[unit].multiplicity - 1; i > 0; i--)
        {
          if (! actual_hazard_this_instance (unit, instance, insn, clock, 0))
            break;
          instance += FUNCTION_UNITS_SIZE;
        }
#endif
      unit_last_insn[instance] = insn;
      unit_tick[instance] = (clock + function_units[unit].max_blockage);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
        schedule_unit (i, insn, clock);
}
/* Return the actual hazard cost of executing INSN on the units encoded by
   UNIT at time CLOCK if the previous actual hazard cost was COST.  */

__inline static int
actual_hazard (unit, insn, clock, cost)
     int unit, clock, cost;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      /* Find the instance of the function unit with the minimum hazard.  */
      int instance = unit;
      int best_cost = actual_hazard_this_instance (unit, instance, insn,
                                                   clock, cost);
#if MAX_MULTIPLICITY > 1
      int this_cost;

      if (best_cost > cost)
        {
          for (i = function_units[unit].multiplicity - 1; i > 0; i--)
            {
              instance += FUNCTION_UNITS_SIZE;
              this_cost = actual_hazard_this_instance (unit, instance, insn,
                                                       clock, cost);
              if (this_cost < best_cost)
                {
                  best_cost = this_cost;
                  if (this_cost <= cost)
                    break;
                }
            }
        }
#endif
      cost = MAX (cost, best_cost);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
        cost = actual_hazard (i, insn, clock, cost);

  return cost;
}
/* Return the potential hazard cost of executing an instruction on the
   units encoded by UNIT if the previous potential hazard cost was COST.
   An insn with a large blockage time is chosen in preference to one
   with a smaller time; an insn that uses a unit that is more likely
   to be used is chosen in preference to one with a unit that is less
   used.  We are trying to minimize a subsequent actual hazard.  */

__inline static int
potential_hazard (unit, insn, cost)
     int unit, cost;
     rtx insn;
{
  int i, ncost;
  unsigned int minb, maxb;

  if (unit >= 0)
    {
      minb = maxb = function_units[unit].max_blockage;
      if (maxb > 1)
        {
          if (function_units[unit].blockage_range_function)
            {
              maxb = minb = blockage_range (unit, insn);
              maxb = MAX_BLOCKAGE_COST (maxb);
              minb = MIN_BLOCKAGE_COST (minb);
            }

          if (maxb > 1)
            {
              /* Make the number of instructions left dominate.  Make the
                 minimum delay dominate the maximum delay.  If all these
                 are the same, use the unit number to add an arbitrary
                 ordering.  Other terms can be added.  */
              ncost = minb * 0x40 + maxb;
              ncost *= (unit_n_insns[unit] - 1) * 0x1000 + unit;
              if (ncost > cost)
                cost = ncost;
            }
        }
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
        cost = potential_hazard (i, insn, cost);

  return cost;
}
/* Compute cost of executing INSN given the dependence LINK on the insn USED.
   This is the number of virtual cycles taken between instruction issue and
   instruction results.  */

__inline static int
insn_cost (insn, link, used)
     rtx insn, link, used;
{
  register int cost = INSN_COST (insn);

  if (cost == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
         We can't pass these directly to result_ready_cost because it will
         trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
        {
          INSN_COST (insn) = 1;
          return 1;
        }
      else
        {
          cost = result_ready_cost (insn);

          if (cost < 1)
            cost = 1;

          INSN_COST (insn) = cost;
        }
    }

  /* A USE insn should never require the value used to be computed.  This
     allows the computation of a function's result and parameter values to
     overlap the return and call.  */
  recog_memoized (used);
  if (INSN_CODE (used) < 0)
    LINK_COST_FREE (link) = 1;

  /* If some dependencies vary the cost, compute the adjustment.  Most
     commonly, the adjustment is complete: either the cost is ignored
     (in the case of an output- or anti-dependence), or the cost is
     unchanged.  These values are cached in the link as LINK_COST_FREE
     and LINK_COST_ZERO.  */

  if (LINK_COST_FREE (link))
    cost = 1;
#ifdef ADJUST_COST
  else if (! LINK_COST_ZERO (link))
    {
      int ncost = cost;

      ADJUST_COST (used, link, insn, ncost);
      if (ncost <= 1)
        LINK_COST_FREE (link) = ncost = 1;
      if (cost == ncost)
        LINK_COST_ZERO (link) = 1;
      cost = ncost;
    }
#endif
  return cost;
}
/* Compute the priority number for INSN.  */

static int
priority (insn)
     rtx insn;
{
  if (insn && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
    {
      int prev_priority;
      int max_priority;
      int this_priority = INSN_PRIORITY (insn);
      rtx prev;

      if (this_priority > 0)
        return this_priority;

      max_priority = 1;

      /* Nonzero if these insns must be scheduled together.  */
      if (SCHED_GROUP_P (insn))
        {
          prev = insn;
          while (SCHED_GROUP_P (prev))
            {
              prev = PREV_INSN (prev);
              INSN_REF_COUNT (prev) += 1;
            }
        }

      for (prev = LOG_LINKS (insn); prev; prev = XEXP (prev, 1))
        {
          rtx x = XEXP (prev, 0);

          /* If this was a duplicate of a dependence we already deleted,
             ignore it.  */
          if (RTX_INTEGRATED_P (prev))
            continue;

          /* A dependence pointing to a note or deleted insn is always
             obsolete, because sched_analyze_insn will have created any
             necessary new dependences which replace it.  Notes and deleted
             insns can be created when instructions are deleted by insn
             splitting, or by register allocation.  */
          if (GET_CODE (x) == NOTE || INSN_DELETED_P (x))
            {
              remove_dependence (insn, x);
              continue;
            }

          /* Clear the link cost adjustment bits.  */
          LINK_COST_FREE (prev) = 0;
#ifdef ADJUST_COST
          LINK_COST_ZERO (prev) = 0;
#endif

          /* This priority calculation was chosen because it results in the
             least instruction movement, and does not hurt the performance
             of the resulting code compared to the old algorithm.
             This makes the sched algorithm more stable, which results
             in better code, because there is less register pressure,
             cross jumping is more likely to work, and debugging is easier.

             When all instructions have a latency of 1, there is no need to
             move any instructions.  Subtracting one here ensures that in such
             cases all instructions will end up with a priority of one, and
             hence no scheduling will be done.

             The original code did not subtract the one, and added the
             insn_cost of the current instruction to its priority (e.g.
             move the insn_cost call down to the end).  */
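
          /* For instance (an illustration, not in the original source):
             with unit latencies insn_cost is 1 for every link, so the
             expression below reduces to priority (x) + 1 - 1; priorities
             collapse to one as described, and the pass preserves the
             original insn order.  */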
          prev_priority = priority (x) + insn_cost (x, prev, insn) - 1;

          if (prev_priority > max_priority)
            max_priority = prev_priority;
          INSN_REF_COUNT (x) += 1;
        }

      prepare_unit (insn_unit (insn));
      INSN_PRIORITY (insn) = max_priority;
      return INSN_PRIORITY (insn);
    }
  return 0;
}
/* Remove all INSN_LISTs and EXPR_LISTs from the pending lists and add
   them to the unused_*_list variables, so that they can be reused.  */

static void
free_pending_lists ()
{
  register rtx link, prev_link;

  if (pending_read_insns)
    {
      prev_link = pending_read_insns;
      link = XEXP (prev_link, 1);

      while (link)
        {
          prev_link = link;
          link = XEXP (link, 1);
        }

      XEXP (prev_link, 1) = unused_insn_list;
      unused_insn_list = pending_read_insns;
      pending_read_insns = 0;
    }

  if (pending_write_insns)
    {
      prev_link = pending_write_insns;
      link = XEXP (prev_link, 1);

      while (link)
        {
          prev_link = link;
          link = XEXP (link, 1);
        }

      XEXP (prev_link, 1) = unused_insn_list;
      unused_insn_list = pending_write_insns;
      pending_write_insns = 0;
    }

  if (pending_read_mems)
    {
      prev_link = pending_read_mems;
      link = XEXP (prev_link, 1);

      while (link)
        {
          prev_link = link;
          link = XEXP (link, 1);
        }

      XEXP (prev_link, 1) = unused_expr_list;
      unused_expr_list = pending_read_mems;
      pending_read_mems = 0;
    }

  if (pending_write_mems)
    {
      prev_link = pending_write_mems;
      link = XEXP (prev_link, 1);

      while (link)
        {
          prev_link = link;
          link = XEXP (link, 1);
        }

      XEXP (prev_link, 1) = unused_expr_list;
      unused_expr_list = pending_write_mems;
      pending_write_mems = 0;
    }
}
/* Add an INSN and MEM reference pair to a pending INSN_LIST and MEM_LIST.
   The MEM is a memory reference contained within INSN, which we are saving
   so that we can do memory aliasing on it.  */

static void
add_insn_mem_dependence (insn_list, mem_list, insn, mem)
     rtx *insn_list, *mem_list, insn, mem;
{
  register rtx link;

  if (unused_insn_list)
    {
      link = unused_insn_list;
      unused_insn_list = XEXP (link, 1);
    }
  else
    link = rtx_alloc (INSN_LIST);
  XEXP (link, 0) = insn;
  XEXP (link, 1) = *insn_list;
  *insn_list = link;

  if (unused_expr_list)
    {
      link = unused_expr_list;
      unused_expr_list = XEXP (link, 1);
    }
  else
    link = rtx_alloc (EXPR_LIST);
  XEXP (link, 0) = mem;
  XEXP (link, 1) = *mem_list;
  *mem_list = link;

  pending_lists_length++;
}
/* Make a dependency between every memory reference on the pending lists
   and INSN, thus flushing the pending lists.  If ONLY_WRITE, don't flush
   the read list.  */

static void
flush_pending_lists (insn, only_write)
     rtx insn;
     int only_write;
{
  rtx link;

  while (pending_read_insns && ! only_write)
    {
      add_dependence (insn, XEXP (pending_read_insns, 0), REG_DEP_ANTI);

      link = pending_read_insns;
      pending_read_insns = XEXP (pending_read_insns, 1);
      XEXP (link, 1) = unused_insn_list;
      unused_insn_list = link;

      link = pending_read_mems;
      pending_read_mems = XEXP (pending_read_mems, 1);
      XEXP (link, 1) = unused_expr_list;
      unused_expr_list = link;
    }
  while (pending_write_insns)
    {
      add_dependence (insn, XEXP (pending_write_insns, 0), REG_DEP_ANTI);

      link = pending_write_insns;
      pending_write_insns = XEXP (pending_write_insns, 1);
      XEXP (link, 1) = unused_insn_list;
      unused_insn_list = link;

      link = pending_write_mems;
      pending_write_mems = XEXP (pending_write_mems, 1);
      XEXP (link, 1) = unused_expr_list;
      unused_expr_list = link;
    }
  pending_lists_length = 0;

  if (last_pending_memory_flush)
    add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);

  last_pending_memory_flush = insn;
}
/* Analyze a single SET or CLOBBER rtx, X, creating all dependencies generated
   by the write to the destination of X, and reads of everything mentioned.  */

static void
sched_analyze_1 (x, insn)
     rtx x;
     rtx insn;
{
  register int regno;
  register rtx dest = SET_DEST (x);

  if (dest == 0)
    return;

  while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
         || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
    {
      if (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
        {
          /* The second and third arguments are values read by this insn.  */
          sched_analyze_2 (XEXP (dest, 1), insn);
          sched_analyze_2 (XEXP (dest, 2), insn);
        }
      dest = SUBREG_REG (dest);
    }

  if (GET_CODE (dest) == REG)
    {
      register int i;

      regno = REGNO (dest);

      /* A hard reg in a wide mode may really be multiple registers.
         If so, mark all of them just like the first.  */
      if (regno < FIRST_PSEUDO_REGISTER)
        {
          i = HARD_REGNO_NREGS (regno, GET_MODE (dest));
          while (--i >= 0)
            {
              rtx u;

              for (u = reg_last_uses[regno + i]; u; u = XEXP (u, 1))
                add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
              reg_last_uses[regno + i] = 0;
              if (reg_last_sets[regno + i])
                add_dependence (insn, reg_last_sets[regno + i],
                                REG_DEP_OUTPUT);
              SET_REGNO_REG_SET (reg_pending_sets, regno + i);
              if ((call_used_regs[regno + i] || global_regs[regno + i])
                  && last_function_call)
                /* Function calls clobber all call_used regs.  */
                add_dependence (insn, last_function_call, REG_DEP_ANTI);
            }
        }
      else
        {
          rtx u;

          for (u = reg_last_uses[regno]; u; u = XEXP (u, 1))
            add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
          reg_last_uses[regno] = 0;
          if (reg_last_sets[regno])
            add_dependence (insn, reg_last_sets[regno], REG_DEP_OUTPUT);
          SET_REGNO_REG_SET (reg_pending_sets, regno);

          /* Pseudos that are REG_EQUIV to something may be replaced
             by that during reloading.  We need only add dependencies for
             the address in the REG_EQUIV note.  */
          if (! reload_completed
              && reg_known_equiv_p[regno]
              && GET_CODE (reg_known_value[regno]) == MEM)
            sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);

          /* Don't let it cross a call after scheduling if it doesn't
             already cross one.  */
          if (REG_N_CALLS_CROSSED (regno) == 0 && last_function_call)
            add_dependence (insn, last_function_call, REG_DEP_ANTI);
        }
    }
  else if (GET_CODE (dest) == MEM)
    {
      /* Writing memory.  */

      if (pending_lists_length > 32)
        {
          /* Flush all pending reads and writes to prevent the pending lists
             from getting any larger.  Insn scheduling runs too slowly when
             these lists get long.  The number 32 was chosen because it
             seems like a reasonable number.  When compiling GCC with itself,
             this flush occurs 8 times for sparc, and 10 times for m88k using
             the number 32.  */
          flush_pending_lists (insn, 0);
        }
      else
        {
          rtx pending, pending_mem;

          pending = pending_read_insns;
          pending_mem = pending_read_mems;
          while (pending)
            {
              /* If a dependency already exists, don't create a new one.  */
              if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
                if (anti_dependence (XEXP (pending_mem, 0), dest))
                  add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);

              pending = XEXP (pending, 1);
              pending_mem = XEXP (pending_mem, 1);
            }

          pending = pending_write_insns;
          pending_mem = pending_write_mems;
          while (pending)
            {
              /* If a dependency already exists, don't create a new one.  */
              if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
                if (output_dependence (XEXP (pending_mem, 0), dest))
                  add_dependence (insn, XEXP (pending, 0), REG_DEP_OUTPUT);

              pending = XEXP (pending, 1);
              pending_mem = XEXP (pending_mem, 1);
            }

          if (last_pending_memory_flush)
            add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);

          add_insn_mem_dependence (&pending_write_insns, &pending_write_mems,
                                   insn, dest);
        }
      sched_analyze_2 (XEXP (dest, 0), insn);
    }

  /* Analyze reads.  */
  if (GET_CODE (x) == SET)
    sched_analyze_2 (SET_SRC (x), insn);
}
/* Analyze the uses of memory and registers in rtx X in INSN.  */

static void
sched_analyze_2 (x, insn)
     rtx x;
     rtx insn;
{
  register int i;
  register int j;
  register enum rtx_code code;
  register char *fmt;

  if (x == 0)
    return;

  code = GET_CODE (x);

  switch (code)
    {
    case CONST_INT:
    case CONST_DOUBLE:
    case SYMBOL_REF:
    case CONST:
    case LABEL_REF:
      /* Ignore constants.  Note that we must handle CONST_DOUBLE here
         because it may have a cc0_rtx in its CONST_DOUBLE_CHAIN field, but
         this does not mean that this insn is using cc0.  */
      return;

#ifdef HAVE_cc0
    case CC0:
      {
        rtx link, prev;

        /* User of CC0 depends on immediately preceding insn.  */
        SCHED_GROUP_P (insn) = 1;

        /* There may be a note before this insn now, but all notes will
           be removed before we actually try to schedule the insns, so
           it won't cause a problem later.  We must avoid it here though.  */
        prev = prev_nonnote_insn (insn);

        /* Make a copy of all dependencies on the immediately previous insn,
           and add to this insn.  This is so that all the dependencies will
           apply to the group.  Remove an explicit dependence on this insn
           as SCHED_GROUP_P now represents it.  */

        if (find_insn_list (prev, LOG_LINKS (insn)))
          remove_dependence (insn, prev);

        for (link = LOG_LINKS (prev); link; link = XEXP (link, 1))
          add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));

        return;
      }
#endif

    case REG:
      {
        int regno = REGNO (x);
        if (regno < FIRST_PSEUDO_REGISTER)
          {
            int i;

            i = HARD_REGNO_NREGS (regno, GET_MODE (x));
            while (--i >= 0)
              {
                reg_last_uses[regno + i]
                  = gen_rtx_INSN_LIST (VOIDmode,
                                       insn, reg_last_uses[regno + i]);
                if (reg_last_sets[regno + i])
                  add_dependence (insn, reg_last_sets[regno + i], 0);
                if ((call_used_regs[regno + i] || global_regs[regno + i])
                    && last_function_call)
                  /* Function calls clobber all call_used regs.  */
                  add_dependence (insn, last_function_call, REG_DEP_ANTI);
              }
          }
        else
          {
            reg_last_uses[regno]
              = gen_rtx_INSN_LIST (VOIDmode, insn, reg_last_uses[regno]);
            if (reg_last_sets[regno])
              add_dependence (insn, reg_last_sets[regno], 0);

            /* Pseudos that are REG_EQUIV to something may be replaced
               by that during reloading.  We need only add dependencies for
               the address in the REG_EQUIV note.  */
            if (! reload_completed
                && reg_known_equiv_p[regno]
                && GET_CODE (reg_known_value[regno]) == MEM)
              sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);

            /* If the register does not already cross any calls, then add this
               insn to the sched_before_next_call list so that it will still
               not cross calls after scheduling.  */
            if (REG_N_CALLS_CROSSED (regno) == 0)
              add_dependence (sched_before_next_call, insn, REG_DEP_ANTI);
          }
        return;
      }

    case MEM:
      {
        /* Reading memory.  */

        rtx pending, pending_mem;

        pending = pending_read_insns;
        pending_mem = pending_read_mems;
        while (pending)
          {
            /* If a dependency already exists, don't create a new one.  */
            if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
              if (read_dependence (XEXP (pending_mem, 0), x))
                add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);

            pending = XEXP (pending, 1);
            pending_mem = XEXP (pending_mem, 1);
          }

        pending = pending_write_insns;
        pending_mem = pending_write_mems;
        while (pending)
          {
            /* If a dependency already exists, don't create a new one.  */
            if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
              if (true_dependence (XEXP (pending_mem, 0), VOIDmode,
                                   x, rtx_varies_p))
                add_dependence (insn, XEXP (pending, 0), 0);

            pending = XEXP (pending, 1);
            pending_mem = XEXP (pending_mem, 1);
          }
        if (last_pending_memory_flush)
          add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);

        /* Always add these dependencies to pending_reads, since
           this insn may be followed by a write.  */
        add_insn_mem_dependence (&pending_read_insns, &pending_read_mems,
                                 insn, x);

        /* Take advantage of tail recursion here.  */
        sched_analyze_2 (XEXP (x, 0), insn);
        return;
      }

    case ASM_OPERANDS:
    case ASM_INPUT:
    case UNSPEC_VOLATILE:
    case TRAP_IF:
      {
        rtx u;

        /* Traditional and volatile asm instructions must be considered to use
           and clobber all hard registers, all pseudo-registers and all of
           memory.  So must TRAP_IF and UNSPEC_VOLATILE operations.

           Consider for instance a volatile asm that changes the fpu rounding
           mode.  An insn should not be moved across this even if it only uses
           pseudo-regs because it might give an incorrectly rounded result.  */
        if (code != ASM_OPERANDS || MEM_VOLATILE_P (x))
          {
            int max_reg = max_reg_num ();
            for (i = 0; i < max_reg; i++)
              {
                for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
                  add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
                reg_last_uses[i] = 0;
                if (reg_last_sets[i])
                  add_dependence (insn, reg_last_sets[i], 0);
              }
            reg_pending_sets_all = 1;

            flush_pending_lists (insn, 0);
          }

        /* For all ASM_OPERANDS, we must traverse the vector of input operands.
           We can not just fall through here since then we would be confused
           by the ASM_INPUT rtx inside ASM_OPERANDS, which do not indicate
           traditional asms unlike their normal usage.  */

        if (code == ASM_OPERANDS)
          {
            for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
              sched_analyze_2 (ASM_OPERANDS_INPUT (x, j), insn);
            return;
          }
        break;
      }

    case PRE_DEC:
    case POST_DEC:
    case PRE_INC:
    case POST_INC:
      /* These both read and modify the result.  We must handle them as writes
         to get proper dependencies for following instructions.  We must handle
         them as reads to get proper dependencies from this to previous
         instructions.  Thus we need to pass them to both sched_analyze_1
         and sched_analyze_2.  We must call sched_analyze_2 first in order
         to get the proper antecedent for the read.  */
      sched_analyze_2 (XEXP (x, 0), insn);
      sched_analyze_1 (x, insn);
      return;

    default:
      break;
    }

  /* Other cases: walk the insn.  */
  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      if (fmt[i] == 'e')
        sched_analyze_2 (XEXP (x, i), insn);
      else if (fmt[i] == 'E')
        for (j = 0; j < XVECLEN (x, i); j++)
          sched_analyze_2 (XVECEXP (x, i, j), insn);
    }
}
/* Analyze an INSN with pattern X to find all dependencies.  */

static void
sched_analyze_insn (x, insn, loop_notes)
     rtx x, insn;
     rtx loop_notes;
{
  register RTX_CODE code = GET_CODE (x);
  rtx link;
  int maxreg = max_reg_num ();
  int i;

  if (code == SET || code == CLOBBER)
    sched_analyze_1 (x, insn);
  else if (code == PARALLEL)
    {
      register int i;
      for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
        {
          code = GET_CODE (XVECEXP (x, 0, i));
          if (code == SET || code == CLOBBER)
            sched_analyze_1 (XVECEXP (x, 0, i), insn);
          else
            sched_analyze_2 (XVECEXP (x, 0, i), insn);
        }
    }
  else
    sched_analyze_2 (x, insn);

  /* Mark registers CLOBBERED or used by called function.  */
  if (GET_CODE (insn) == CALL_INSN)
    for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
      {
        if (GET_CODE (XEXP (link, 0)) == CLOBBER)
          sched_analyze_1 (XEXP (link, 0), insn);
        else
          sched_analyze_2 (XEXP (link, 0), insn);
      }

  /* If there is a {LOOP,EHREGION}_{BEG,END} note in the middle of a basic
     block, then we must be sure that no instructions are scheduled across
     it.  Otherwise, the reg_n_refs info (which depends on loop_depth)
     would become incorrect.  */

  if (loop_notes)
    {
      int max_reg = max_reg_num ();
      rtx link;

      for (i = 0; i < max_reg; i++)
        {
          rtx u;
          for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
            add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
          reg_last_uses[i] = 0;
          if (reg_last_sets[i])
            add_dependence (insn, reg_last_sets[i], 0);
        }
      reg_pending_sets_all = 1;

      flush_pending_lists (insn, 0);

      link = loop_notes;
      while (XEXP (link, 1))
        link = XEXP (link, 1);
      XEXP (link, 1) = REG_NOTES (insn);
      REG_NOTES (insn) = loop_notes;
    }

  /* After reload, it is possible for an instruction to have a REG_DEAD note
     for a register that actually dies a few instructions earlier.  For
     example, this can happen with SECONDARY_MEMORY_NEEDED reloads.
     In this case, we must consider the insn to use the register mentioned
     in the REG_DEAD note.  Otherwise, we may accidentally move this insn
     after another insn that sets the register, thus getting obviously invalid
     rtl.  This confuses reorg which believes that REG_DEAD notes are still
     meaningful.

     ??? We would get better code if we fixed reload to put the REG_DEAD
     notes in the right places, but that may not be worth the effort.  */

  if (reload_completed)
    {
      rtx note;

      for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
        if (REG_NOTE_KIND (note) == REG_DEAD)
          sched_analyze_2 (XEXP (note, 0), insn);
    }

  EXECUTE_IF_SET_IN_REG_SET (reg_pending_sets, 0, i,
                             {
                               reg_last_sets[i] = insn;
                             });
  CLEAR_REG_SET (reg_pending_sets);

  if (reg_pending_sets_all)
    {
      for (i = 0; i < maxreg; i++)
        reg_last_sets[i] = insn;
      reg_pending_sets_all = 0;
    }

  /* Handle function calls and function returns created by the epilogue
     threading code.  */
  if (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN)
    {
      rtx dep_insn;
      rtx prev_dep_insn;

      /* When scheduling instructions, we make sure calls don't lose their
         accompanying USE insns by depending them one on another in order.

         Also, we must do the same thing for returns created by the epilogue
         threading code.  Note this code works only in this special case,
         because other passes make no guarantee that they will never emit
         an instruction between a USE and a RETURN.  There is such a guarantee
         for USE instructions immediately before a call.  */

      prev_dep_insn = insn;
      dep_insn = PREV_INSN (insn);
      while (GET_CODE (dep_insn) == INSN
             && GET_CODE (PATTERN (dep_insn)) == USE
             && GET_CODE (XEXP (PATTERN (dep_insn), 0)) == REG)
        {
          SCHED_GROUP_P (prev_dep_insn) = 1;

          /* Make a copy of all dependencies on dep_insn, and add to insn.
             This is so that all of the dependencies will apply to the
             group.  */

          for (link = LOG_LINKS (dep_insn); link; link = XEXP (link, 1))
            add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));

          prev_dep_insn = dep_insn;
          dep_insn = PREV_INSN (dep_insn);
        }
    }
}
/* Analyze every insn between HEAD and TAIL inclusive, creating LOG_LINKS
   for every dependency.  */

static int
sched_analyze (head, tail)
     rtx head, tail;
{
  register rtx insn;
  register int n_insns = 0;
  register rtx u;
  register int luid = 0;
  rtx loop_notes = 0;

  for (insn = head; ; insn = NEXT_INSN (insn))
    {
      INSN_LUID (insn) = luid++;

      if (GET_CODE (insn) == INSN || GET_CODE (insn) == JUMP_INSN)
        {
          sched_analyze_insn (PATTERN (insn), insn, loop_notes);
          loop_notes = 0;
          n_insns += 1;
        }
      else if (GET_CODE (insn) == CALL_INSN)
        {
          rtx x;
          register int i;

          /* Any instruction using a hard register which may get clobbered
             by a call needs to be marked as dependent on this call.
             This prevents a use of a hard return reg from being moved
             past a void call (i.e. it does not explicitly set the hard
             return reg).  */

          /* If this call is followed by a NOTE_INSN_SETJMP, then assume that
             all registers, not just hard registers, may be clobbered by this
             call.  */

          /* Insn, being a CALL_INSN, magically depends on
             `last_function_call' already.  */

          if (NEXT_INSN (insn) && GET_CODE (NEXT_INSN (insn)) == NOTE
              && NOTE_LINE_NUMBER (NEXT_INSN (insn)) == NOTE_INSN_SETJMP)
            {
              int max_reg = max_reg_num ();
              for (i = 0; i < max_reg; i++)
                {
                  for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
                    add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
                  reg_last_uses[i] = 0;
                  if (reg_last_sets[i])
                    add_dependence (insn, reg_last_sets[i], 0);
                }
              reg_pending_sets_all = 1;

              /* Add a pair of fake REG_NOTEs which we will later
                 convert back into a NOTE_INSN_SETJMP note.  See
                 reemit_notes for why we use a pair of NOTEs.  */

              REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_DEAD,
                                                    GEN_INT (0),
                                                    REG_NOTES (insn));
              REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_DEAD,
                                                    GEN_INT (NOTE_INSN_SETJMP),
                                                    REG_NOTES (insn));
            }
          else
            {
              for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
                if (call_used_regs[i] || global_regs[i])
                  {
                    for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
                      add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
                    reg_last_uses[i] = 0;
                    if (reg_last_sets[i])
                      add_dependence (insn, reg_last_sets[i], REG_DEP_ANTI);
                    SET_REGNO_REG_SET (reg_pending_sets, i);
                  }
            }

          /* For each insn which shouldn't cross a call, add a dependence
             between that insn and this call insn.  */
          x = LOG_LINKS (sched_before_next_call);
          while (x)
            {
              add_dependence (insn, XEXP (x, 0), REG_DEP_ANTI);
              x = XEXP (x, 1);
            }
          LOG_LINKS (sched_before_next_call) = 0;

          sched_analyze_insn (PATTERN (insn), insn, loop_notes);
          loop_notes = 0;

          /* In the absence of interprocedural alias analysis, we must flush
             all pending reads and writes, and start new dependencies starting
             from here.  But only flush writes for constant calls (which may
             be passed a pointer to something we haven't written yet).  */
          flush_pending_lists (insn, CONST_CALL_P (insn));

          /* Depend this function call (actually, the user of this
             function call) on all hard register clobberage.  */
          last_function_call = insn;
          n_insns += 1;
        }

      /* See comments on reemit_notes as to why we do this.  */
      else if (GET_CODE (insn) == NOTE
               && (NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_BEG
                   || NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_END
                   || NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_BEG
                   || NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_END
                   || NOTE_LINE_NUMBER (insn) == NOTE_INSN_RANGE_START
                   || NOTE_LINE_NUMBER (insn) == NOTE_INSN_RANGE_END
                   || (NOTE_LINE_NUMBER (insn) == NOTE_INSN_SETJMP
                       && GET_CODE (PREV_INSN (insn)) != CALL_INSN)))
        {
          loop_notes = gen_rtx_EXPR_LIST (REG_DEAD,
                                          GEN_INT (NOTE_BLOCK_NUMBER (insn)),
                                          loop_notes);
          loop_notes = gen_rtx_EXPR_LIST (REG_DEAD,
                                          GEN_INT (NOTE_LINE_NUMBER (insn)),
                                          loop_notes);
          CONST_CALL_P (loop_notes) = CONST_CALL_P (insn);
        }

      if (insn == tail)
        return n_insns;
    }

  abort ();
}
/* Called when we see a set of a register.  If death is true, then we are
   scanning backwards.  Mark that register as unborn.  If nobody says
   otherwise, that is how things will remain.  If death is false, then we
   are scanning forwards.  Mark that register as being born.  */

static void
sched_note_set (b, x, death)
     int b;
     rtx x;
     int death;
{
  register int regno;
  register rtx reg = SET_DEST (x);
  int subreg_p = 0;

  if (reg == 0)
    return;

  while (GET_CODE (reg) == SUBREG || GET_CODE (reg) == STRICT_LOW_PART
         || GET_CODE (reg) == SIGN_EXTRACT || GET_CODE (reg) == ZERO_EXTRACT)
    {
      /* Must treat modification of just one hardware register of a multi-reg
         value or just a byte field of a register exactly the same way that
         mark_set_1 in flow.c does, i.e. anything except a paradoxical subreg
         does not kill the entire register.  */
      if (GET_CODE (reg) != SUBREG
          || REG_SIZE (SUBREG_REG (reg)) > REG_SIZE (reg))
        subreg_p = 1;

      reg = SUBREG_REG (reg);
    }

  if (GET_CODE (reg) != REG)
    return;

  /* Global registers are always live, so the code below does not apply
     to them.  */

  regno = REGNO (reg);
  if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
    {
      if (death)
        {
          /* If we only set part of the register, then this set does not
             kill it.  */
          if (subreg_p)
            return;

          /* Try killing this register.  */
          if (regno < FIRST_PSEUDO_REGISTER)
            {
              int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
              while (--j >= 0)
                {
                  CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
                  SET_REGNO_REG_SET (bb_dead_regs, regno + j);
                }
            }
          else
            {
              CLEAR_REGNO_REG_SET (bb_live_regs, regno);
              SET_REGNO_REG_SET (bb_dead_regs, regno);
            }
        }
      else
        {
          /* Make the register live again.  */
          if (regno < FIRST_PSEUDO_REGISTER)
            {
              int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
              while (--j >= 0)
                {
                  SET_REGNO_REG_SET (bb_live_regs, regno + j);
                  CLEAR_REGNO_REG_SET (bb_dead_regs, regno + j);
                }
            }
          else
            {
              SET_REGNO_REG_SET (bb_live_regs, regno);
              CLEAR_REGNO_REG_SET (bb_dead_regs, regno);
            }
        }
    }
}
1846 /* Macros and functions for keeping the priority queue sorted, and
1847 dealing with queueing and dequeueing of instructions. */
1849 #define SCHED_SORT(READY, NEW_READY, OLD_READY) \
1850 do { if ((NEW_READY) - (OLD_READY) == 1) \
1851 swap_sort (READY, NEW_READY); \
1852 else if ((NEW_READY) - (OLD_READY) > 1) \
1853 qsort (READY, NEW_READY, sizeof (rtx), rank_for_schedule); } \
1854 while (0)
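/* Illustrative sketch, not part of this pass: the SCHED_SORT policy
   on plain ints.  When exactly one element was added since the last
   sort, a single insertion pass (what swap_sort does below) is O(n)
   and cheaper than re-running qsort; with more than one new element
   we fall back to a full sort.  `cmp_desc' and `resort' are
   hypothetical stand-ins for rank_for_schedule and SCHED_SORT.  */
#if 0
#include <stdlib.h>

static int
cmp_desc (x, y)
     const void *x;
     const void *y;
{
  /* Sort descending, mirroring "preferred element first".  */
  return *(const int *) y - *(const int *) x;
}

static void
resort (a, new_n, old_n)
     int *a;
     int new_n, old_n;
{
  if (new_n - old_n == 1)
    {
      /* Bubble the single new element a[new_n-1] into place.  */
      int v = a[new_n - 1], i = new_n - 2;
      while (i >= 0 && a[i] < v)
	a[i + 1] = a[i], i--;
      a[i + 1] = v;
    }
  else if (new_n - old_n > 1)
    qsort (a, new_n, sizeof (int), cmp_desc);
}
#endif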
1856 /* Returns a positive value if y is preferred; returns a negative value if
1857 x is preferred. Should never return 0, since that will make the sort
1858 unstable. */
1860 static int
1861 rank_for_schedule (x, y)
1862 const GENERIC_PTR x;
1863 const GENERIC_PTR y;
1865 rtx tmp = *(rtx *)y;
1866 rtx tmp2 = *(rtx *)x;
1867 rtx link;
1868 int tmp_class, tmp2_class;
1869 int value;
1871 /* Choose the instruction with the highest priority, if different. */
1872 if ((value = INSN_PRIORITY (tmp) - INSN_PRIORITY (tmp2)))
1873 return value;
1875 if (last_scheduled_insn)
1877 /* Classify the instructions into three classes:
1878	 1) Data dependent on last scheduled insn.
1879 2) Anti/Output dependent on last scheduled insn.
1880 3) Independent of last scheduled insn, or has latency of one.
1881 Choose the insn from the highest numbered class if different. */
1882 link = find_insn_list (tmp, LOG_LINKS (last_scheduled_insn));
1883 if (link == 0 || insn_cost (tmp, link, last_scheduled_insn) == 1)
1884 tmp_class = 3;
1885 else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
1886 tmp_class = 1;
1887 else
1888 tmp_class = 2;
1890 link = find_insn_list (tmp2, LOG_LINKS (last_scheduled_insn));
1891 if (link == 0 || insn_cost (tmp2, link, last_scheduled_insn) == 1)
1892 tmp2_class = 3;
1893 else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
1894 tmp2_class = 1;
1895 else
1896 tmp2_class = 2;
1898 if ((value = tmp_class - tmp2_class))
1899 return value;
1902 /* If insns are equally good, sort by INSN_LUID (original insn order),
1903 so that we make the sort stable. This minimizes instruction movement,
1904 thus minimizing sched's effect on debugging and cross-jumping. */
1905 return INSN_LUID (tmp) - INSN_LUID (tmp2);
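/* Worked example of the classing above, assuming the last scheduled
   insn sets r1 with a latency of three cycles: an insn reading r1 is
   class 1 (true data dependence, would stall), an insn rewriting r1
   is class 2 (anti/output dependence), and an insn with no link to
   it is class 3.  Since the comparison returns
   tmp_class - tmp2_class, the class-3 insn wins the tie: it can
   issue without waiting for the previous insn's result.  */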
1908 /* Resort the array A in which only element at index N may be out of order. */
1910 __inline static void
1911 swap_sort (a, n)
1912 rtx *a;
1913 int n;
1915 rtx insn = a[n-1];
1916 int i = n-2;
1918 while (i >= 0 && rank_for_schedule (a+i, &insn) >= 0)
1920 a[i+1] = a[i];
1921 i -= 1;
1923 a[i+1] = insn;
1926 static int max_priority;
1928 /* Add INSN to the insn queue so that it fires at least N_CYCLES
1929 before the currently executing insn. */
1931 __inline static void
1932 queue_insn (insn, n_cycles)
1933 rtx insn;
1934 int n_cycles;
1936 int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
1937 NEXT_INSN (insn) = insn_queue[next_q];
1938 insn_queue[next_q] = insn;
1939 q_size += 1;
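/* Sketch, not compiled: the insn queue is a circular array of
   buckets indexed modulo the queue size.  Assuming the size is a
   power of two, NEXT_Q_AFTER amounts to the masking below; the names
   here are illustrative, not the real macros.  */
#if 0
#define Q_SIZE 16			/* stand-in for INSN_QUEUE_SIZE */
#define Q_AFTER(Q, N) (((Q) + (N)) & (Q_SIZE - 1))
/* An insn needing a 3-cycle stall while q_ptr == 5 is chained into
   bucket Q_AFTER (5, 3) == 8; three clock ticks later q_ptr reaches
   bucket 8 and the bucket is drained into the ready list.  */
#endif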
1942 /* Return nonzero if PAT is the pattern of an insn which makes a
1943 register live. */
1945 __inline static int
1946 birthing_insn_p (pat)
1947 rtx pat;
1949 int j;
1951 if (reload_completed == 1)
1952 return 0;
1954 if (GET_CODE (pat) == SET
1955 && GET_CODE (SET_DEST (pat)) == REG)
1957 rtx dest = SET_DEST (pat);
1958 int i = REGNO (dest);
1960 /* It would be more accurate to use refers_to_regno_p or
1961 reg_mentioned_p to determine when the dest is not live before this
1962 insn. */
1964 if (REGNO_REG_SET_P (bb_live_regs, i))
1965 return (REG_N_SETS (i) == 1);
1967 return 0;
1969 if (GET_CODE (pat) == PARALLEL)
1971 for (j = 0; j < XVECLEN (pat, 0); j++)
1972 if (birthing_insn_p (XVECEXP (pat, 0, j)))
1973 return 1;
1975 return 0;
1978 /* PREV is an insn that is ready to execute. Adjust its priority if that
1979 will help shorten register lifetimes. */
1981 __inline static void
1982 adjust_priority (prev)
1983 rtx prev;
1985 /* Trying to shorten register lives after reload has completed
1986 is useless and wrong. It gives inaccurate schedules. */
1987 if (reload_completed == 0)
1989 rtx note;
1990 int n_deaths = 0;
1992 /* ??? This code has no effect, because REG_DEAD notes are removed
1993 before we ever get here. */
1994 for (note = REG_NOTES (prev); note; note = XEXP (note, 1))
1995 if (REG_NOTE_KIND (note) == REG_DEAD)
1996 n_deaths += 1;
1998 /* Defer scheduling insns which kill registers, since that
1999 shortens register lives. Prefer scheduling insns which
2000 make registers live for the same reason. */
2001 switch (n_deaths)
2003 default:
2004 INSN_PRIORITY (prev) >>= 3;
2005 break;
2006 case 3:
2007 INSN_PRIORITY (prev) >>= 2;
2008 break;
2009 case 2:
2010 case 1:
2011 INSN_PRIORITY (prev) >>= 1;
2012 break;
2013 case 0:
2014 if (birthing_insn_p (PATTERN (prev)))
2016 int max = max_priority;
2018 if (max > INSN_PRIORITY (prev))
2019 INSN_PRIORITY (prev) = max;
2021 break;
2023 #ifdef ADJUST_PRIORITY
2024 ADJUST_PRIORITY (prev);
2025 #endif
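/* In tabular form, the net effect of the switch above on a ready
   insn's priority before reload (N_DEATHS counts REG_DEAD notes):

	n_deaths >= 4	priority >>= 3	(defer aggressively)
	n_deaths == 3	priority >>= 2
	n_deaths 1..2	priority >>= 1
	n_deaths == 0	priority raised to max_priority, but only if
			the insn births a register

   i.e. register-killing insns sink and register-birthing insns rise,
   shortening register lifetimes.  (As the ??? comment notes, the
   REG_DEAD scan currently never finds any notes here.)  */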
2029 /* INSN is the "currently executing insn". Launch each insn which was
2030 waiting on INSN (in the backwards dataflow sense). READY is a
2031 vector of insns which are ready to fire. N_READY is the number of
2032 elements in READY. CLOCK is the current virtual cycle. */
2034 static int
2035 schedule_insn (insn, ready, n_ready, clock)
2036 rtx insn;
2037 rtx *ready;
2038 int n_ready;
2039 int clock;
2041 rtx link;
2042 int new_ready = n_ready;
2044 if (MAX_BLOCKAGE > 1)
2045 schedule_unit (insn_unit (insn), insn, clock);
2047 if (LOG_LINKS (insn) == 0)
2048 return n_ready;
2050 /* This is used by the function adjust_priority above. */
2051 if (n_ready > 0)
2052 max_priority = MAX (INSN_PRIORITY (ready[0]), INSN_PRIORITY (insn));
2053 else
2054 max_priority = INSN_PRIORITY (insn);
2056 for (link = LOG_LINKS (insn); link != 0; link = XEXP (link, 1))
2058 rtx prev = XEXP (link, 0);
2059 int cost = insn_cost (prev, link, insn);
2061 if ((INSN_REF_COUNT (prev) -= 1) != 0)
2063 /* We satisfied one requirement to fire PREV. Record the earliest
2064 time when PREV can fire. No need to do this if the cost is 1,
2065 because PREV can fire no sooner than the next cycle. */
2066 if (cost > 1)
2067 INSN_TICK (prev) = MAX (INSN_TICK (prev), clock + cost);
2069 else
2071 /* We satisfied the last requirement to fire PREV. Ensure that all
2072 timing requirements are satisfied. */
2073 if (INSN_TICK (prev) - clock > cost)
2074 cost = INSN_TICK (prev) - clock;
2076 /* Adjust the priority of PREV and either put it on the ready
2077 list or queue it. */
2078 adjust_priority (prev);
2079 if (cost <= 1)
2080 ready[new_ready++] = prev;
2081 else
2082 queue_insn (prev, cost);
2086 return new_ready;
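/* A condensed restatement of the bookkeeping above: each arc from
   PREV satisfied by scheduling INSN decrements INSN_REF_COUNT (prev),
   and PREV becomes schedulable only at ref count zero, no earlier
   than the latest tick any of its arcs demanded:

	earliest = MAX (INSN_TICK (prev), clock + cost);
	earliest - clock <= 1   ->  goes on the ready list
	otherwise               ->  queued for earliest - clock cycles

   Arcs seen while the count is still nonzero only raise
   INSN_TICK (prev), and only when the cost exceeds one cycle.  */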
2089 /* Given N_READY insns in the ready list READY at time CLOCK, queue
2090 those that are blocked due to function unit hazards and rearrange
2091 the remaining ones to minimize subsequent function unit hazards. */
2093 static int
2094 schedule_select (ready, n_ready, clock, file)
2095 rtx *ready;
2096 int n_ready, clock;
2097 FILE *file;
2099 int pri = INSN_PRIORITY (ready[0]);
2100 int i, j, k, q, cost, best_cost, best_insn = 0, new_ready = n_ready;
2101 rtx insn;
2103 /* Work down the ready list in groups of instructions with the same
2104 priority value. Queue insns in the group that are blocked and
2105 select among those that remain for the one with the largest
2106 potential hazard. */
2107 for (i = 0; i < n_ready; i = j)
2109 int opri = pri;
2110 for (j = i + 1; j < n_ready; j++)
2111 if ((pri = INSN_PRIORITY (ready[j])) != opri)
2112 break;
2114 /* Queue insns in the group that are blocked. */
2115 for (k = i, q = 0; k < j; k++)
2117 insn = ready[k];
2118 if ((cost = actual_hazard (insn_unit (insn), insn, clock, 0)) != 0)
2120 q++;
2121 ready[k] = 0;
2122 queue_insn (insn, cost);
2123 if (file)
2124 fprintf (file, "\n;; blocking insn %d for %d cycles",
2125 INSN_UID (insn), cost);
2128 new_ready -= q;
2130 /* Check the next group if all insns were queued. */
2131 if (j - i - q == 0)
2132 continue;
2134 /* If more than one remains, select the first one with the largest
2135 potential hazard. */
2136 else if (j - i - q > 1)
2138 best_cost = -1;
2139 for (k = i; k < j; k++)
2141 if ((insn = ready[k]) == 0)
2142 continue;
2143 if ((cost = potential_hazard (insn_unit (insn), insn, 0))
2144 > best_cost)
2146 best_cost = cost;
2147 best_insn = k;
2151 /* We have found a suitable insn to schedule. */
2152 break;
2155   /* Move the best insn to the front of the ready list. */
2156 if (best_insn != 0)
2158 if (file)
2160 fprintf (file, ", now");
2161 for (i = 0; i < n_ready; i++)
2162 if (ready[i])
2163 fprintf (file, " %d", INSN_UID (ready[i]));
2164 fprintf (file, "\n;; insn %d has a greater potential hazard",
2165 INSN_UID (ready[best_insn]));
2167 for (i = best_insn; i > 0; i--)
2169 insn = ready[i-1];
2170 ready[i-1] = ready[i];
2171 ready[i] = insn;
2175 /* Compact the ready list. */
2176 if (new_ready < n_ready)
2177 for (i = j = 0; i < n_ready; i++)
2178 if (ready[i])
2179 ready[j++] = ready[i];
2181 return new_ready;
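/* Worked example of the selection above: given ready insns A and B
   at priority 5 and C at priority 3, where A's function unit stays
   busy for two more cycles, A is queued for two cycles and B issues.
   C is never weighed against B; a lower-priority group is consulted
   only when every insn in the higher-priority group was blocked.  */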
2184 /* Add a REG_DEAD note for REG to INSN, reusing a REG_DEAD note from the
2185 dead_notes list. */
2187 static void
2188 create_reg_dead_note (reg, insn)
2189 rtx reg, insn;
2191 rtx link;
2193 /* The number of registers killed after scheduling must be the same as the
2194 number of registers killed before scheduling. The number of REG_DEAD
2195 notes may not be conserved, i.e. two SImode hard register REG_DEAD notes
2196 might become one DImode hard register REG_DEAD note, but the number of
2197 registers killed will be conserved.
2199 We carefully remove REG_DEAD notes from the dead_notes list, so that
2200 there will be none left at the end. If we run out early, then there
2201 is a bug somewhere in flow, combine and/or sched. */
2203 if (dead_notes == 0)
2205 #if 1
2206 abort ();
2207 #else
2208 link = rtx_alloc (EXPR_LIST);
2209 PUT_REG_NOTE_KIND (link, REG_DEAD);
2210 #endif
2212 else
2214 /* Number of regs killed by REG. */
2215 int regs_killed = (REGNO (reg) >= FIRST_PSEUDO_REGISTER ? 1
2216 : HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg)));
2217 /* Number of regs killed by REG_DEAD notes taken off the list. */
2218 int reg_note_regs;
2220 link = dead_notes;
2221 reg_note_regs = (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
2222 : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
2223 GET_MODE (XEXP (link, 0))));
2224 while (reg_note_regs < regs_killed)
2226 /* LINK might be zero if we killed more registers after scheduling
2227 than before, and the last hard register we kill is actually
2228 multiple hard regs. */
2229 if (link == NULL_RTX)
2230 abort ();
2232 link = XEXP (link, 1);
2233 reg_note_regs += (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
2234 : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
2235 GET_MODE (XEXP (link, 0))));
2237 dead_notes = XEXP (link, 1);
2239       /* If we took too many reg kills off, put the extra ones back. */
2240 while (reg_note_regs > regs_killed)
2242 rtx temp_reg, temp_link;
2244 temp_reg = gen_rtx_REG (word_mode, 0);
2245 temp_link = rtx_alloc (EXPR_LIST);
2246 PUT_REG_NOTE_KIND (temp_link, REG_DEAD);
2247 XEXP (temp_link, 0) = temp_reg;
2248 XEXP (temp_link, 1) = dead_notes;
2249 dead_notes = temp_link;
2250 reg_note_regs--;
2254 XEXP (link, 0) = reg;
2255 XEXP (link, 1) = REG_NOTES (insn);
2256 REG_NOTES (insn) = link;
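/* Worked example of the conservation argument above, assuming 32-bit
   words: attaching a REG_DEAD note for a DImode hard register (two
   hard regs) consumes two one-reg SImode notes from dead_notes,
   while attaching an SImode note when the next entry on the list is
   a DImode note consumes that entry and pushes one freshly allocated
   one-reg note back -- either way the count of register kills is
   preserved.  */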
2259 /* Subroutine of attach_deaths_insn--handles the recursive search
2260    through INSN.  If SET_P is true, then X is being modified by the insn. */
2262 static void
2263 attach_deaths (x, insn, set_p)
2264 rtx x;
2265 rtx insn;
2266 int set_p;
2268 register int i;
2269 register int j;
2270 register enum rtx_code code;
2271 register char *fmt;
2273 if (x == 0)
2274 return;
2276 code = GET_CODE (x);
2278 switch (code)
2280 case CONST_INT:
2281 case CONST_DOUBLE:
2282 case LABEL_REF:
2283 case SYMBOL_REF:
2284 case CONST:
2285 case CODE_LABEL:
2286 case PC:
2287 case CC0:
2288 /* Get rid of the easy cases first. */
2289 return;
2291 case REG:
2293 /* If the register dies in this insn, queue that note, and mark
2294 this register as needing to die. */
2295 /* This code is very similar to mark_used_1 (if set_p is false)
2296 and mark_set_1 (if set_p is true) in flow.c. */
2298 register int regno;
2299 int some_needed;
2300 int all_needed;
2302 if (set_p)
2303 return;
2305 regno = REGNO (x);
2306 all_needed = some_needed = REGNO_REG_SET_P (old_live_regs, regno);
2307 if (regno < FIRST_PSEUDO_REGISTER)
2309 int n;
2311 n = HARD_REGNO_NREGS (regno, GET_MODE (x));
2312 while (--n > 0)
2314 int needed = (REGNO_REG_SET_P (old_live_regs, regno + n));
2315 some_needed |= needed;
2316 all_needed &= needed;
2320 /* If it wasn't live before we started, then add a REG_DEAD note.
2321 We must check the previous lifetime info not the current info,
2322 because we may have to execute this code several times, e.g.
2323 once for a clobber (which doesn't add a note) and later
2324 for a use (which does add a note).
2326 Always make the register live. We must do this even if it was
2327 live before, because this may be an insn which sets and uses
2328 the same register, in which case the register has already been
2329 killed, so we must make it live again.
2331 Global registers are always live, and should never have a REG_DEAD
2332 note added for them, so none of the code below applies to them. */
2334 if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
2336 /* Never add REG_DEAD notes for the FRAME_POINTER_REGNUM or the
2337 STACK_POINTER_REGNUM, since these are always considered to be
2338 live. Similarly for ARG_POINTER_REGNUM if it is fixed. */
2339 if (regno != FRAME_POINTER_REGNUM
2340 #if HARD_FRAME_POINTER_REGNUM != FRAME_POINTER_REGNUM
2341 && ! (regno == HARD_FRAME_POINTER_REGNUM)
2342 #endif
2343 #if ARG_POINTER_REGNUM != FRAME_POINTER_REGNUM
2344 && ! (regno == ARG_POINTER_REGNUM && fixed_regs[regno])
2345 #endif
2346 && regno != STACK_POINTER_REGNUM)
2348 if (! all_needed && ! dead_or_set_p (insn, x))
2350 /* Check for the case where the register dying partially
2351 overlaps the register set by this insn. */
2352 if (regno < FIRST_PSEUDO_REGISTER
2353 && HARD_REGNO_NREGS (regno, GET_MODE (x)) > 1)
2355 int n = HARD_REGNO_NREGS (regno, GET_MODE (x));
2356 while (--n >= 0)
2357 some_needed |= dead_or_set_regno_p (insn, regno + n);
2360 /* If none of the words in X is needed, make a REG_DEAD
2361 note. Otherwise, we must make partial REG_DEAD
2362 notes. */
2363 if (! some_needed)
2364 create_reg_dead_note (x, insn);
2365 else
2367 int i;
2369 /* Don't make a REG_DEAD note for a part of a
2370 register that is set in the insn. */
2371 for (i = HARD_REGNO_NREGS (regno, GET_MODE (x)) - 1;
2372 i >= 0; i--)
2373 if (! REGNO_REG_SET_P (old_live_regs, regno + i)
2374 && ! dead_or_set_regno_p (insn, regno + i))
2375 create_reg_dead_note (gen_rtx_REG (reg_raw_mode[regno + i],
2376 regno + i),
2377 insn);
2382 if (regno < FIRST_PSEUDO_REGISTER)
2384 int j = HARD_REGNO_NREGS (regno, GET_MODE (x));
2385 while (--j >= 0)
2387 CLEAR_REGNO_REG_SET (bb_dead_regs, regno + j);
2388 SET_REGNO_REG_SET (bb_live_regs, regno + j);
2391 else
2393 CLEAR_REGNO_REG_SET (bb_dead_regs, regno);
2394 SET_REGNO_REG_SET (bb_live_regs, regno);
2397 return;
2400 case MEM:
2401 /* Handle tail-recursive case. */
2402 attach_deaths (XEXP (x, 0), insn, 0);
2403 return;
2405 case SUBREG:
2406 attach_deaths (SUBREG_REG (x), insn,
2407 set_p && ((GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)))
2408 <= UNITS_PER_WORD)
2409 || (GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)))
2410 == GET_MODE_SIZE (GET_MODE ((x))))));
2411 return;
2413 case STRICT_LOW_PART:
2414 attach_deaths (XEXP (x, 0), insn, 0);
2415 return;
2417 case ZERO_EXTRACT:
2418 case SIGN_EXTRACT:
2419 attach_deaths (XEXP (x, 0), insn, 0);
2420 attach_deaths (XEXP (x, 1), insn, 0);
2421 attach_deaths (XEXP (x, 2), insn, 0);
2422 return;
2424 default:
2425 /* Other cases: walk the insn. */
2426 fmt = GET_RTX_FORMAT (code);
2427 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
2429 if (fmt[i] == 'e')
2430 attach_deaths (XEXP (x, i), insn, 0);
2431 else if (fmt[i] == 'E')
2432 for (j = 0; j < XVECLEN (x, i); j++)
2433 attach_deaths (XVECEXP (x, i, j), insn, 0);
2438 /* After INSN has executed, add register death notes for each register
2439 that is dead after INSN. */
2441 static void
2442 attach_deaths_insn (insn)
2443 rtx insn;
2445 rtx x = PATTERN (insn);
2446 register RTX_CODE code = GET_CODE (x);
2447 rtx link;
2449 if (code == SET)
2451 attach_deaths (SET_SRC (x), insn, 0);
2453 /* A register might die here even if it is the destination, e.g.
2454 it is the target of a volatile read and is otherwise unused.
2455 Hence we must always call attach_deaths for the SET_DEST. */
2456 attach_deaths (SET_DEST (x), insn, 1);
2458 else if (code == PARALLEL)
2460 register int i;
2461 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
2463 code = GET_CODE (XVECEXP (x, 0, i));
2464 if (code == SET)
2466 attach_deaths (SET_SRC (XVECEXP (x, 0, i)), insn, 0);
2468 attach_deaths (SET_DEST (XVECEXP (x, 0, i)), insn, 1);
2470 /* Flow does not add REG_DEAD notes to registers that die in
2471 clobbers, so we can't either. */
2472 else if (code != CLOBBER)
2473 attach_deaths (XVECEXP (x, 0, i), insn, 0);
2476 /* If this is a CLOBBER, only add REG_DEAD notes to registers inside a
2477 MEM being clobbered, just like flow. */
2478 else if (code == CLOBBER && GET_CODE (XEXP (x, 0)) == MEM)
2479 attach_deaths (XEXP (XEXP (x, 0), 0), insn, 0);
2480 /* Otherwise don't add a death note to things being clobbered. */
2481 else if (code != CLOBBER)
2482 attach_deaths (x, insn, 0);
2484 /* Make death notes for things used in the called function. */
2485 if (GET_CODE (insn) == CALL_INSN)
2486 for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
2487 attach_deaths (XEXP (XEXP (link, 0), 0), insn,
2488 GET_CODE (XEXP (link, 0)) == CLOBBER);
2491 /* Delete notes beginning with INSN and maybe put them in the chain
2492 of notes ended by NOTE_LIST.
2493 Returns the insn following the notes. */
2495 static rtx
2496 unlink_notes (insn, tail)
2497 rtx insn, tail;
2499 rtx prev = PREV_INSN (insn);
2501 while (insn != tail && GET_CODE (insn) == NOTE)
2503 rtx next = NEXT_INSN (insn);
2504 /* Delete the note from its current position. */
2505 if (prev)
2506 NEXT_INSN (prev) = next;
2507 if (next)
2508 PREV_INSN (next) = prev;
2510 if (write_symbols != NO_DEBUG && NOTE_LINE_NUMBER (insn) > 0)
2511 /* Record line-number notes so they can be reused. */
2512 LINE_NOTE (insn) = insn;
2514 /* Don't save away NOTE_INSN_SETJMPs, because they must remain
2515 immediately after the call they follow. We use a fake
2516 (REG_DEAD (const_int -1)) note to remember them.
2517 Likewise with NOTE_INSN_{LOOP,EHREGION}_{BEG, END}. */
2518 else if (NOTE_LINE_NUMBER (insn) != NOTE_INSN_SETJMP
2519 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_BEG
2520 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_END
2521 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_RANGE_START
2522 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_RANGE_END
2523 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_BEG
2524 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_END)
2526 /* Insert the note at the end of the notes list. */
2527 PREV_INSN (insn) = note_list;
2528 if (note_list)
2529 NEXT_INSN (note_list) = insn;
2530 note_list = insn;
2533 insn = next;
2535 return insn;
2538 /* Constructor for `sometimes' data structure. */
2540 static int
2541 new_sometimes_live (regs_sometimes_live, regno, sometimes_max)
2542 struct sometimes *regs_sometimes_live;
2543 int regno;
2544 int sometimes_max;
2546 register struct sometimes *p;
2548 /* There should never be a register greater than max_regno here. If there
2549 is, it means that a define_split has created a new pseudo reg. This
2550 is not allowed, since there will not be flow info available for any
2551 new register, so catch the error here. */
2552 if (regno >= max_regno)
2553 abort ();
2555 p = &regs_sometimes_live[sometimes_max];
2556 p->regno = regno;
2557 p->live_length = 0;
2558 p->calls_crossed = 0;
2559 sometimes_max++;
2560 return sometimes_max;
2563 /* Record the accumulated lifetime and call-crossing info for all
2564    regs we are currently tracking. */
2566 static void
2567 finish_sometimes_live (regs_sometimes_live, sometimes_max)
2568 struct sometimes *regs_sometimes_live;
2569 int sometimes_max;
2571 int i;
2573 for (i = 0; i < sometimes_max; i++)
2575 register struct sometimes *p = &regs_sometimes_live[i];
2576 int regno = p->regno;
2578 sched_reg_live_length[regno] += p->live_length;
2579 sched_reg_n_calls_crossed[regno] += p->calls_crossed;
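/* Illustrative numbers for the `sometimes' accounting: a pseudo that
   stays live while five insns are scheduled accumulates a segment
   with live_length 5, plus one calls_crossed for each CALL_INSN
   among them; finish_sometimes_live then folds the segment into
   sched_reg_live_length and sched_reg_n_calls_crossed.  */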
2583 /* Search INSN for fake REG_DEAD note pairs for NOTE_INSN_SETJMP,
2584    NOTE_INSN_{LOOP,EHREGION}_{BEG,END}, and convert them back into
2585    NOTEs.  The REG_DEAD note following the first one contains the
2586    saved value of NOTE_BLOCK_NUMBER, which is needed for
2587    NOTE_INSN_EH_REGION_{BEG,END} NOTEs.  LAST is the last instruction
2588 output by the instruction scheduler. Return the new value of LAST. */
2590 static rtx
2591 reemit_notes (insn, last)
2592 rtx insn;
2593 rtx last;
2595 rtx note;
2597 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
2599 if (REG_NOTE_KIND (note) == REG_DEAD
2600 && GET_CODE (XEXP (note, 0)) == CONST_INT)
2602 if (INTVAL (XEXP (note, 0)) == NOTE_INSN_SETJMP)
2604 CONST_CALL_P (emit_note_after (INTVAL (XEXP (note, 0)), insn))
2605 = CONST_CALL_P (note);
2606 remove_note (insn, note);
2607 note = XEXP (note, 1);
2609 else
2611 last = emit_note_before (INTVAL (XEXP (note, 0)), last);
2612 remove_note (insn, note);
2613 note = XEXP (note, 1);
2614 NOTE_BLOCK_NUMBER (last) = INTVAL (XEXP (note, 0));
2616 remove_note (insn, note);
2619 return last;
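/* Sketch of the pair encoding consumed above.  sched_analyze records
   a NOTE on its insn as two consecutive fake notes,

	(REG_DEAD (const_int NOTE_LINE_NUMBER))
	(REG_DEAD (const_int NOTE_BLOCK_NUMBER))

   with the note kind (stored as a line number) first.  reemit_notes
   emits a real note from the first and restores NOTE_BLOCK_NUMBER
   from the second; NOTE_INSN_SETJMP is special-cased through
   emit_note_after so the note lands right after its call.  */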
2622 /* Use modified list scheduling to rearrange insns in basic block
2623 B. FILE, if nonzero, is where we dump interesting output about
2624 this pass. */
2626 static void
2627 schedule_block (b, file)
2628 int b;
2629 FILE *file;
2631 rtx insn, last;
2632 rtx *ready, link;
2633 int i, j, n_ready = 0, new_ready, n_insns;
2634 int sched_n_insns = 0;
2635 int clock;
2636 #define NEED_NOTHING 0
2637 #define NEED_HEAD 1
2638 #define NEED_TAIL 2
2639 int new_needs;
2641 /* HEAD and TAIL delimit the region being scheduled. */
2642 rtx head = basic_block_head[b];
2643 rtx tail = basic_block_end[b];
2644 /* PREV_HEAD and NEXT_TAIL are the boundaries of the insns
2645 being scheduled. When the insns have been ordered,
2646 these insns delimit where the new insns are to be
2647 spliced back into the insn chain. */
2648 rtx next_tail;
2649 rtx prev_head;
2651 /* Keep life information accurate. */
2652 register struct sometimes *regs_sometimes_live;
2653 int sometimes_max;
2655 if (file)
2656 fprintf (file, ";;\t -- basic block number %d from %d to %d --\n",
2657 b, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));
2659 i = max_reg_num ();
2660 reg_last_uses = (rtx *) alloca (i * sizeof (rtx));
2661 bzero ((char *) reg_last_uses, i * sizeof (rtx));
2662 reg_last_sets = (rtx *) alloca (i * sizeof (rtx));
2663 bzero ((char *) reg_last_sets, i * sizeof (rtx));
2664 reg_pending_sets = ALLOCA_REG_SET ();
2665 CLEAR_REG_SET (reg_pending_sets);
2666 reg_pending_sets_all = 0;
2667 clear_units ();
2669 #if 0
2670 /* We used to have code to avoid getting parameters moved from hard
2671 argument registers into pseudos.
2673 However, it was removed when it proved to be of marginal benefit and
2674 caused problems because of different notions of what the "head" insn
2675 was. */
2677 /* Remove certain insns at the beginning from scheduling,
2678 by advancing HEAD. */
2680 /* At the start of a function, before reload has run, don't delay getting
2681 parameters from hard registers into pseudo registers. */
2682 if (reload_completed == 0 && b == 0)
2684 while (head != tail
2685 && GET_CODE (head) == NOTE
2686 && NOTE_LINE_NUMBER (head) != NOTE_INSN_FUNCTION_BEG)
2687 head = NEXT_INSN (head);
2688 while (head != tail
2689 && GET_CODE (head) == INSN
2690 && GET_CODE (PATTERN (head)) == SET)
2692 rtx src = SET_SRC (PATTERN (head));
2693 while (GET_CODE (src) == SUBREG
2694 || GET_CODE (src) == SIGN_EXTEND
2695 || GET_CODE (src) == ZERO_EXTEND
2696 || GET_CODE (src) == SIGN_EXTRACT
2697 || GET_CODE (src) == ZERO_EXTRACT)
2698 src = XEXP (src, 0);
2699 if (GET_CODE (src) != REG
2700 || REGNO (src) >= FIRST_PSEUDO_REGISTER)
2701 break;
2702 /* Keep this insn from ever being scheduled. */
2703 INSN_REF_COUNT (head) = 1;
2704 head = NEXT_INSN (head);
2707 #endif
2709 /* Don't include any notes or labels at the beginning of the
2710 basic block, or notes at the ends of basic blocks. */
2711 while (head != tail)
2713 if (GET_CODE (head) == NOTE)
2714 head = NEXT_INSN (head);
2715 else if (GET_CODE (tail) == NOTE)
2716 tail = PREV_INSN (tail);
2717 else if (GET_CODE (head) == CODE_LABEL)
2718 head = NEXT_INSN (head);
2719 else break;
2721 /* If the only insn left is a NOTE or a CODE_LABEL, then there is no need
2722 to schedule this block. */
2723 if (head == tail
2724 && (GET_CODE (head) == NOTE || GET_CODE (head) == CODE_LABEL))
2725 goto ret;
2727 #if 0
2728 /* This short-cut doesn't work. It does not count call insns crossed by
2729 registers in reg_sometimes_live. It does not mark these registers as
2730 dead if they die in this block. It does not mark these registers live
2731 (or create new reg_sometimes_live entries if necessary) if they are born
2732 in this block.
2734 The easy solution is to just always schedule a block. This block only
2735 has one insn, so this won't slow down this pass by much. */
2737 if (head == tail)
2738 goto ret;
2739 #endif
2741 /* Now HEAD through TAIL are the insns actually to be rearranged;
2742 Let PREV_HEAD and NEXT_TAIL enclose them. */
2743 prev_head = PREV_INSN (head);
2744 next_tail = NEXT_INSN (tail);
2746 /* Initialize basic block data structures. */
2747 dead_notes = 0;
2748 pending_read_insns = 0;
2749 pending_read_mems = 0;
2750 pending_write_insns = 0;
2751 pending_write_mems = 0;
2752 pending_lists_length = 0;
2753 last_pending_memory_flush = 0;
2754 last_function_call = 0;
2755 last_scheduled_insn = 0;
2757 LOG_LINKS (sched_before_next_call) = 0;
2759 n_insns = sched_analyze (head, tail);
2760 if (n_insns == 0)
2762 free_pending_lists ();
2763 goto ret;
2766 /* Allocate vector to hold insns to be rearranged (except those
2767 insns which are controlled by an insn with SCHED_GROUP_P set).
2768 All these insns are included between ORIG_HEAD and ORIG_TAIL,
2769    as those variables are ultimately set up. */
2770 ready = (rtx *) alloca ((n_insns+1) * sizeof (rtx));
2772 /* TAIL is now the last of the insns to be rearranged.
2773 Put those insns into the READY vector. */
2774 insn = tail;
2776 /* For all branches, calls, uses, and cc0 setters, force them to remain
2777 in order at the end of the block by adding dependencies and giving
2778 the last a high priority. There may be notes present, and prev_head
2779 may also be a note.
2781 Branches must obviously remain at the end. Calls should remain at the
2782 end since moving them results in worse register allocation. Uses remain
2783    at the end to ensure proper register allocation.  cc0 setters remain
2784 at the end because they can't be moved away from their cc0 user. */
2785 last = 0;
2786 while (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN
2787 || (GET_CODE (insn) == INSN
2788 && (GET_CODE (PATTERN (insn)) == USE
2789 #ifdef HAVE_cc0
2790 || sets_cc0_p (PATTERN (insn))
2791 #endif
2793 || GET_CODE (insn) == NOTE)
2795 if (GET_CODE (insn) != NOTE)
2797 priority (insn);
2798 if (last == 0)
2800 ready[n_ready++] = insn;
2801 INSN_PRIORITY (insn) = TAIL_PRIORITY - i;
2802 INSN_REF_COUNT (insn) = 0;
2804 else if (! find_insn_list (insn, LOG_LINKS (last)))
2806 add_dependence (last, insn, REG_DEP_ANTI);
2807 INSN_REF_COUNT (insn)++;
2809 last = insn;
2811 /* Skip over insns that are part of a group. */
2812 while (SCHED_GROUP_P (insn))
2814 insn = prev_nonnote_insn (insn);
2815 priority (insn);
2819 insn = PREV_INSN (insn);
2820 /* Don't overrun the bounds of the basic block. */
2821 if (insn == prev_head)
2822 break;
2825 /* Assign priorities to instructions. Also check whether they
2826 are in priority order already. If so then I will be nonnegative.
2827 We use this shortcut only before reloading. */
2828 #if 0
2829 i = reload_completed ? DONE_PRIORITY : MAX_PRIORITY;
2830 #endif
2832 for (; insn != prev_head; insn = PREV_INSN (insn))
2834 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
2836 priority (insn);
2837 if (INSN_REF_COUNT (insn) == 0)
2839 if (last == 0)
2840 ready[n_ready++] = insn;
2841 else
2843 /* Make this dependent on the last of the instructions
2844 that must remain in order at the end of the block. */
2845 add_dependence (last, insn, REG_DEP_ANTI);
2846 INSN_REF_COUNT (insn) = 1;
2849 if (SCHED_GROUP_P (insn))
2851 while (SCHED_GROUP_P (insn))
2853 insn = prev_nonnote_insn (insn);
2854 priority (insn);
2856 continue;
2858 #if 0
2859 if (i < 0)
2860 continue;
2861 if (INSN_PRIORITY (insn) < i)
2862 i = INSN_PRIORITY (insn);
2863 else if (INSN_PRIORITY (insn) > i)
2864 i = DONE_PRIORITY;
2865 #endif
2869 #if 0
2870 /* This short-cut doesn't work. It does not count call insns crossed by
2871 registers in reg_sometimes_live. It does not mark these registers as
2872 dead if they die in this block. It does not mark these registers live
2873 (or create new reg_sometimes_live entries if necessary) if they are born
2874 in this block.
2876 The easy solution is to just always schedule a block. These blocks tend
2877 to be very short, so this doesn't slow down this pass by much. */
2879 /* If existing order is good, don't bother to reorder. */
2880 if (i != DONE_PRIORITY)
2882 if (file)
2883 fprintf (file, ";; already scheduled\n");
2885 if (reload_completed == 0)
2887 for (i = 0; i < sometimes_max; i++)
2888 regs_sometimes_live[i].live_length += n_insns;
2890 finish_sometimes_live (regs_sometimes_live, sometimes_max);
2892 free_pending_lists ();
2893 goto ret;
2895 #endif
2897 /* Scan all the insns to be scheduled, removing NOTE insns
2898 and register death notes.
2899 Line number NOTE insns end up in NOTE_LIST.
2900 Register death notes end up in DEAD_NOTES.
2902 Recreate the register life information for the end of this basic
2903 block. */
2905 if (reload_completed == 0)
2907 COPY_REG_SET (bb_live_regs, basic_block_live_at_start[b]);
2908 CLEAR_REG_SET (bb_dead_regs);
2910 if (b == 0)
2912 /* This is the first block in the function. There may be insns
2913 before head that we can't schedule. We still need to examine
2914 them though for accurate register lifetime analysis. */
2916 /* We don't want to remove any REG_DEAD notes as the code below
2917 does. */
2919 for (insn = basic_block_head[b]; insn != head;
2920 insn = NEXT_INSN (insn))
2921 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
2923 /* See if the register gets born here. */
2924 /* We must check for registers being born before we check for
2925 registers dying. It is possible for a register to be born
2926 and die in the same insn, e.g. reading from a volatile
2927 memory location into an otherwise unused register. Such
2928 a register must be marked as dead after this insn. */
2929 if (GET_CODE (PATTERN (insn)) == SET
2930 || GET_CODE (PATTERN (insn)) == CLOBBER)
2931 sched_note_set (b, PATTERN (insn), 0);
2932 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
2934 int j;
2935 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
2936 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
2937 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
2938 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
2940 /* ??? This code is obsolete and should be deleted. It
2941 is harmless though, so we will leave it in for now. */
2942 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
2943 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
2944 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
2947 /* Each call clobbers (makes live) all call-clobbered regs
2948 that are not global or fixed. Note that the function-value
2949 reg is a call_clobbered reg. */
2951 if (GET_CODE (insn) == CALL_INSN)
2953 int j;
2954 for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
2955 if (call_used_regs[j] && ! global_regs[j]
2956 && ! fixed_regs[j])
2958 SET_REGNO_REG_SET (bb_live_regs, j);
2959 CLEAR_REGNO_REG_SET (bb_dead_regs, j);
2963 for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
2965 if ((REG_NOTE_KIND (link) == REG_DEAD
2966 || REG_NOTE_KIND (link) == REG_UNUSED)
2967 /* Verify that the REG_NOTE has a valid value. */
2968 && GET_CODE (XEXP (link, 0)) == REG)
2970 register int regno = REGNO (XEXP (link, 0));
2972 if (regno < FIRST_PSEUDO_REGISTER)
2974 int j = HARD_REGNO_NREGS (regno,
2975 GET_MODE (XEXP (link, 0)));
2976 while (--j >= 0)
2978 CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
2979 SET_REGNO_REG_SET (bb_dead_regs, regno + j);
2982 else
2984 CLEAR_REGNO_REG_SET (bb_live_regs, regno);
2985 SET_REGNO_REG_SET (bb_dead_regs, regno);
2993 /* If debugging information is being produced, keep track of the line
2994 number notes for each insn. */
2995 if (write_symbols != NO_DEBUG)
2997 /* We must use the true line number for the first insn in the block
2998 that was computed and saved at the start of this pass. We can't
2999 use the current line number, because scheduling of the previous
3000 block may have changed the current line number. */
3001 rtx line = line_note_head[b];
3003 for (insn = basic_block_head[b];
3004 insn != next_tail;
3005 insn = NEXT_INSN (insn))
3006 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
3007 line = insn;
3008 else
3009 LINE_NOTE (insn) = line;
3012 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3014 rtx prev, next, link;
3016 /* Farm out notes. This is needed to keep the debugger from
3017 getting completely deranged. */
3018 if (GET_CODE (insn) == NOTE)
3020 prev = insn;
3021 insn = unlink_notes (insn, next_tail);
3022 if (prev == tail)
3023 abort ();
3024 if (prev == head)
3025 abort ();
3026 if (insn == next_tail)
3027 abort ();
3030 if (reload_completed == 0
3031 && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
3033 /* See if the register gets born here. */
3034 /* We must check for registers being born before we check for
3035 registers dying. It is possible for a register to be born and
3036 die in the same insn, e.g. reading from a volatile memory
3037 location into an otherwise unused register. Such a register
3038 must be marked as dead after this insn. */
3039 if (GET_CODE (PATTERN (insn)) == SET
3040 || GET_CODE (PATTERN (insn)) == CLOBBER)
3041 sched_note_set (b, PATTERN (insn), 0);
3042 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3044 int j;
3045 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3046 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3047 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3048 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3050 /* ??? This code is obsolete and should be deleted. It
3051 is harmless though, so we will leave it in for now. */
3052 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3053 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
3054 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3057 /* Each call clobbers (makes live) all call-clobbered regs that are
3058 not global or fixed. Note that the function-value reg is a
3059 call_clobbered reg. */
3061 if (GET_CODE (insn) == CALL_INSN)
3063 int j;
3064 for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
3065 if (call_used_regs[j] && ! global_regs[j]
3066 && ! fixed_regs[j])
3068 SET_REGNO_REG_SET (bb_live_regs, j);
3069 CLEAR_REGNO_REG_SET (bb_dead_regs, j);
3073 /* Need to know what registers this insn kills. */
3074 for (prev = 0, link = REG_NOTES (insn); link; link = next)
3076 next = XEXP (link, 1);
3077 if ((REG_NOTE_KIND (link) == REG_DEAD
3078 || REG_NOTE_KIND (link) == REG_UNUSED)
3079 /* Verify that the REG_NOTE has a valid value. */
3080 && GET_CODE (XEXP (link, 0)) == REG)
3082 register int regno = REGNO (XEXP (link, 0));
3084 /* Only unlink REG_DEAD notes; leave REG_UNUSED notes
3085 alone. */
3086 if (REG_NOTE_KIND (link) == REG_DEAD)
3088 if (prev)
3089 XEXP (prev, 1) = next;
3090 else
3091 REG_NOTES (insn) = next;
3092 XEXP (link, 1) = dead_notes;
3093 dead_notes = link;
3095 else
3096 prev = link;
3098 if (regno < FIRST_PSEUDO_REGISTER)
3100 int j = HARD_REGNO_NREGS (regno,
3101 GET_MODE (XEXP (link, 0)));
3102 while (--j >= 0)
3104 CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
3105 SET_REGNO_REG_SET (bb_dead_regs, regno + j);
3108 else
3110 CLEAR_REGNO_REG_SET (bb_live_regs, regno);
3111 SET_REGNO_REG_SET (bb_dead_regs, regno);
3114 else
3115 prev = link;
3120 if (reload_completed == 0)
3122 /* Keep track of register lives. */
3123 old_live_regs = ALLOCA_REG_SET ();
3124 regs_sometimes_live
3125 = (struct sometimes *) alloca (max_regno * sizeof (struct sometimes));
3126 sometimes_max = 0;
3128 /* Start with registers live at end. */
3129 COPY_REG_SET (old_live_regs, bb_live_regs);
3130 EXECUTE_IF_SET_IN_REG_SET (bb_live_regs, 0, j,
3132 sometimes_max
3133 = new_sometimes_live (regs_sometimes_live,
3134 j, sometimes_max);
3138 SCHED_SORT (ready, n_ready, 1);
3140 if (file)
3142 fprintf (file, ";; ready list initially:\n;; ");
3143 for (i = 0; i < n_ready; i++)
3144 fprintf (file, "%d ", INSN_UID (ready[i]));
3145 fprintf (file, "\n\n");
3147 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3148 if (INSN_PRIORITY (insn) > 0)
3149 fprintf (file, ";; insn[%4d]: priority = %4d, ref_count = %4d\n",
3150 INSN_UID (insn), INSN_PRIORITY (insn),
3151 INSN_REF_COUNT (insn));
3154 /* Now HEAD and TAIL are going to become disconnected
3155 entirely from the insn chain. */
3156 tail = 0;
3158 /* Q_SIZE will always be zero here. */
3159 q_ptr = 0; clock = 0;
3160 bzero ((char *) insn_queue, sizeof (insn_queue));
3162 /* Now, perform list scheduling. */
3164 /* Where we start inserting insns is after TAIL. */
3165 last = next_tail;
3167 new_needs = (NEXT_INSN (prev_head) == basic_block_head[b]
3168 ? NEED_HEAD : NEED_NOTHING);
3169 if (PREV_INSN (next_tail) == basic_block_end[b])
3170 new_needs |= NEED_TAIL;
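/* Sketch of one iteration of the clock loop below: advance q_ptr one
   bucket and drain it into ready[]; if nothing is ready, stall ahead
   to the next non-empty bucket; sort ready[] (and, when units can
   block for several cycles, let schedule_select queue the hazarded
   insns); then splice ready[0] in front of LAST and let
   schedule_insn release its newly satisfied predecessors into
   ready[] or the queue.  */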
3172 new_ready = n_ready;
3173 while (sched_n_insns < n_insns)
3175 q_ptr = NEXT_Q (q_ptr); clock++;
3177 /* Add all pending insns that can be scheduled without stalls to the
3178 ready list. */
3179 for (insn = insn_queue[q_ptr]; insn; insn = NEXT_INSN (insn))
3181 if (file)
3182 fprintf (file, ";; launching %d before %d with no stalls at T-%d\n",
3183 INSN_UID (insn), INSN_UID (last), clock);
3184 ready[new_ready++] = insn;
3185 q_size -= 1;
3187 insn_queue[q_ptr] = 0;
3189 /* If there are no ready insns, stall until one is ready and add all
3190 of the pending insns at that point to the ready list. */
3191 if (new_ready == 0)
3193 register int stalls;
3195 for (stalls = 1; stalls < INSN_QUEUE_SIZE; stalls++)
3196 if ((insn = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
3198 for (; insn; insn = NEXT_INSN (insn))
3200 if (file)
3201 fprintf (file, ";; launching %d before %d with %d stalls at T-%d\n",
3202 INSN_UID (insn), INSN_UID (last), stalls, clock);
3203 ready[new_ready++] = insn;
3204 q_size -= 1;
3206 insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = 0;
3207 break;
3210 q_ptr = NEXT_Q_AFTER (q_ptr, stalls); clock += stalls;
3213 /* There should be some instructions waiting to fire. */
3214 if (new_ready == 0)
3215 abort ();
3217 if (file)
3219 fprintf (file, ";; ready list at T-%d:", clock);
3220 for (i = 0; i < new_ready; i++)
3221 fprintf (file, " %d (%x)",
3222 INSN_UID (ready[i]), INSN_PRIORITY (ready[i]));
3225 /* Sort the ready list and choose the best insn to schedule. Select
3226 which insn should issue in this cycle and queue those that are
3227 blocked by function unit hazards.
3229 N_READY holds the number of items that were scheduled the last time,
3230 minus the one instruction scheduled on the last loop iteration; it
3231 is not modified for any other reason in this loop. */
3233 SCHED_SORT (ready, new_ready, n_ready);
3234 if (MAX_BLOCKAGE > 1)
3236 new_ready = schedule_select (ready, new_ready, clock, file);
3237 if (new_ready == 0)
3239 if (file)
3240 fprintf (file, "\n");
3241 /* We must set n_ready here, to ensure that sorting always
3242 occurs when we come back to the SCHED_SORT line above. */
3243 n_ready = 0;
3244 continue;
3247 n_ready = new_ready;
3248 last_scheduled_insn = insn = ready[0];
3250 /* The first insn scheduled becomes the new tail. */
3251 if (tail == 0)
3252 tail = insn;
3254 if (file)
3256 fprintf (file, ", now");
3257 for (i = 0; i < n_ready; i++)
3258 fprintf (file, " %d", INSN_UID (ready[i]));
3259 fprintf (file, "\n");
3262 if (DONE_PRIORITY_P (insn))
3263 abort ();
3265 if (reload_completed == 0)
3267 /* Process this insn, and each insn linked to this one which must
3268 be immediately output after this insn. */
3271 /* First we kill registers set by this insn, and then we
3272 make registers used by this insn live. This is the opposite
3273 order used above because we are traversing the instructions
3274 backwards. */
3276 /* Strictly speaking, we should scan REG_UNUSED notes and make
3277 every register mentioned there live, however, we will just
3278 kill them again immediately below, so there doesn't seem to
3279 be any reason why we bother to do this. */
3281 /* See if this is the last notice we must take of a register. */
3282 if (GET_CODE (PATTERN (insn)) == SET
3283 || GET_CODE (PATTERN (insn)) == CLOBBER)
3284 sched_note_set (b, PATTERN (insn), 1);
3285 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3287 int j;
3288 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3289 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3290 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3291 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 1);
3294 /* This code keeps life analysis information up to date. */
3295 if (GET_CODE (insn) == CALL_INSN)
3297 register struct sometimes *p;
3299 /* A call kills all call used registers that are not
3300 global or fixed, except for those mentioned in the call
3301 pattern which will be made live again later. */
3302 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
3303 if (call_used_regs[i] && ! global_regs[i]
3304 && ! fixed_regs[i])
3306 CLEAR_REGNO_REG_SET (bb_live_regs, i);
3307 SET_REGNO_REG_SET (bb_dead_regs, i);
3310 /* Regs live at the time of a call instruction must not
3311 go in a register clobbered by calls. Record this for
3312 all regs now live. Note that insns which are born or
3313 die in a call do not cross a call, so this must be done
3314 after the killings (above) and before the births
3315 (below). */
3316 p = regs_sometimes_live;
3317 for (i = 0; i < sometimes_max; i++, p++)
3318 if (REGNO_REG_SET_P (bb_live_regs, p->regno))
3319 p->calls_crossed += 1;
3322 /* Make every register used live, and add REG_DEAD notes for
3323 registers which were not live before we started. */
3324 attach_deaths_insn (insn);
3326 /* Find registers now made live by that instruction. */
3327 EXECUTE_IF_AND_COMPL_IN_REG_SET (bb_live_regs, old_live_regs, 0, i,
3329 sometimes_max
3330 = new_sometimes_live (regs_sometimes_live,
3331 i, sometimes_max);
3333 IOR_REG_SET (old_live_regs, bb_live_regs);
3335 /* Count lengths of all regs we are worrying about now,
3336 and handle registers no longer live. */
3338 for (i = 0; i < sometimes_max; i++)
3340 register struct sometimes *p = &regs_sometimes_live[i];
3341 int regno = p->regno;
3343 p->live_length += 1;
3345 if (!REGNO_REG_SET_P (bb_live_regs, p->regno))
3347 /* This is the end of one of this register's lifetime
3348 segments. Save the lifetime info collected so far,
3349 and clear its bit in the old_live_regs entry. */
3350 sched_reg_live_length[regno] += p->live_length;
3351 sched_reg_n_calls_crossed[regno] += p->calls_crossed;
3352 CLEAR_REGNO_REG_SET (old_live_regs, p->regno);
3354 /* Delete the reg_sometimes_live entry for this reg by
3355 copying the last entry over top of it. */
3356 *p = regs_sometimes_live[--sometimes_max];
3357 /* ...and decrement i so that this newly copied entry
3358 will be processed. */
3359 i--;
3363 link = insn;
3364 insn = PREV_INSN (insn);
3366 while (SCHED_GROUP_P (link));
3368 /* Set INSN back to the insn we are scheduling now. */
3369 insn = ready[0];
3372 /* Schedule INSN. Remove it from the ready list. */
3373 ready += 1;
3374 n_ready -= 1;
3376 sched_n_insns += 1;
3377 NEXT_INSN (insn) = last;
3378 PREV_INSN (last) = insn;
3380 /* Everything that precedes INSN now either becomes "ready", if
3381 it can execute immediately before INSN, or "pending", if
3382 there must be a delay. Give INSN high enough priority that
3383 at least one (maybe more) reg-killing insns can be launched
3384 ahead of all others. Mark INSN as scheduled by changing its
3385 priority to -1. */
3386 INSN_PRIORITY (insn) = LAUNCH_PRIORITY;
3387 new_ready = schedule_insn (insn, ready, n_ready, clock);
3388 INSN_PRIORITY (insn) = DONE_PRIORITY;
3390 /* Schedule all prior insns that must not be moved. */
3391 if (SCHED_GROUP_P (insn))
3393 /* Disable these insns from being launched, in case one of the
3394 insns in the group has a dependency on an earlier one. */
3395 link = insn;
3396 while (SCHED_GROUP_P (link))
3398 /* Disable these insns from being launched by anybody. */
3399 link = PREV_INSN (link);
3400 INSN_REF_COUNT (link) = 0;
3403 /* Now handle each group insn like the main insn was handled
3404 above. */
3405 link = insn;
3406 while (SCHED_GROUP_P (link))
3408 link = PREV_INSN (link);
3410 sched_n_insns += 1;
3412 /* ??? Why don't we set LAUNCH_PRIORITY here? */
3413 new_ready = schedule_insn (link, ready, new_ready, clock);
3414 INSN_PRIORITY (link) = DONE_PRIORITY;
3418 /* Put back NOTE_INSN_SETJMP,
3419      NOTE_INSN_{LOOP,EHREGION}_{BEG,END} notes. */
3421   /* To prime the loop, we need to handle INSN and all the insns in the
3422 sched group. */
3423 last = NEXT_INSN (insn);
3426 insn = PREV_INSN (last);
3428 /* Maintain a valid chain so emit_note_before works.
3429 This is necessary because PREV_INSN (insn) isn't valid
3430 (if ! SCHED_GROUP_P) and if it points to an insn already
3431 scheduled, a circularity will result. */
3432 if (! SCHED_GROUP_P (insn))
3434 NEXT_INSN (prev_head) = insn;
3435 PREV_INSN (insn) = prev_head;
3438 last = reemit_notes (insn, insn);
3440 while (SCHED_GROUP_P (insn));
3442 if (q_size != 0)
3443 abort ();
3445 if (reload_completed == 0)
3446 finish_sometimes_live (regs_sometimes_live, sometimes_max);
3448   /* HEAD is now the first insn in the chain of insns that have
3449 been scheduled by the loop above.
3450 TAIL is the last of those insns. */
3451 head = last;
3453 /* NOTE_LIST is the end of a chain of notes previously found
3454 among the insns. Insert them at the beginning of the insns. */
3455 if (note_list != 0)
3457 rtx note_head = note_list;
3458 while (PREV_INSN (note_head))
3459 note_head = PREV_INSN (note_head);
3461 PREV_INSN (head) = note_list;
3462 NEXT_INSN (note_list) = head;
3463 head = note_head;
3466 /* There should be no REG_DEAD notes leftover at the end.
3467 In practice, this can occur as the result of bugs in flow, combine.c,
3468 and/or sched.c. The values of the REG_DEAD notes remaining are
3469 meaningless, because dead_notes is just used as a free list. */
3470 #if 1
3471 if (dead_notes != 0)
3472 abort ();
3473 #endif
3475 if (new_needs & NEED_HEAD)
3476 basic_block_head[b] = head;
3477 PREV_INSN (head) = prev_head;
3478 NEXT_INSN (prev_head) = head;
3480 if (new_needs & NEED_TAIL)
3481 basic_block_end[b] = tail;
3482 NEXT_INSN (tail) = next_tail;
3483 PREV_INSN (next_tail) = tail;
3485 /* Restore the line-number notes of each insn. */
3486 if (write_symbols != NO_DEBUG)
3488 rtx line, note, prev, new;
3489 int notes = 0;
3491 head = basic_block_head[b];
3492 next_tail = NEXT_INSN (basic_block_end[b]);
3494 /* Determine the current line-number. We want to know the current
3495 line number of the first insn of the block here, in case it is
3496 different from the true line number that was saved earlier. If
3497 different, then we need a line number note before the first insn
3498 of this block. If it happens to be the same, then we don't want to
3499 emit another line number note here. */
3500 for (line = head; line; line = PREV_INSN (line))
3501 if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
3502 break;
3504 /* Walk the insns keeping track of the current line-number and inserting
3505 the line-number notes as needed. */
3506 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3507 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
3508 line = insn;
3509 /* This used to emit line number notes before every non-deleted note.
3510 However, this confuses a debugger, because line notes not separated
3511 by real instructions all end up at the same address. I can find no
3512 use for line number notes before other notes, so none are emitted. */
3513 else if (GET_CODE (insn) != NOTE
3514 && (note = LINE_NOTE (insn)) != 0
3515 && note != line
3516 && (line == 0
3517 || NOTE_LINE_NUMBER (note) != NOTE_LINE_NUMBER (line)
3518 || NOTE_SOURCE_FILE (note) != NOTE_SOURCE_FILE (line)))
3520 line = note;
3521 prev = PREV_INSN (insn);
3522 if (LINE_NOTE (note))
3524 /* Re-use the original line-number note. */
3525 LINE_NOTE (note) = 0;
3526 PREV_INSN (note) = prev;
3527 NEXT_INSN (prev) = note;
3528 PREV_INSN (insn) = note;
3529 NEXT_INSN (note) = insn;
3531 else
3533 notes++;
3534 new = emit_note_after (NOTE_LINE_NUMBER (note), prev);
3535 NOTE_SOURCE_FILE (new) = NOTE_SOURCE_FILE (note);
3536 RTX_INTEGRATED_P (new) = RTX_INTEGRATED_P (note);
3539 if (file && notes)
3540 fprintf (file, ";; added %d line-number notes\n", notes);
3543 if (file)
3545 fprintf (file, ";; total time = %d\n;; new basic block head = %d\n;; new basic block end = %d\n\n",
3546 clock, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));
3549 /* Yow! We're done! */
3550 free_pending_lists ();
3552 ret:
3553 FREE_REG_SET (reg_pending_sets);
3554 FREE_REG_SET (old_live_regs);
3556 return;
3559 /* Subroutine of split_hard_reg_notes. Searches X for any reference to
3560 REGNO, returning the rtx of the reference found if any. Otherwise,
3561 returns 0. */
3563 static rtx
3564 regno_use_in (regno, x)
3565 int regno;
3566 rtx x;
3568 register char *fmt;
3569 int i, j;
3570 rtx tem;
3572 if (GET_CODE (x) == REG && REGNO (x) == regno)
3573 return x;
3575 fmt = GET_RTX_FORMAT (GET_CODE (x));
3576 for (i = GET_RTX_LENGTH (GET_CODE (x)) - 1; i >= 0; i--)
3578 if (fmt[i] == 'e')
3580 if ((tem = regno_use_in (regno, XEXP (x, i))))
3581 return tem;
3583 else if (fmt[i] == 'E')
3584 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
3585 if ((tem = regno_use_in (regno , XVECEXP (x, i, j))))
3586 return tem;
3589 return 0;
3592 /* Subroutine of update_flow_info. Determines whether any new REG_NOTEs are
3593 needed for the hard register mentioned in the note. This can happen
3594 if the reference to the hard register in the original insn was split into
3595 several smaller hard register references in the split insns. */
3597 static void
3598 split_hard_reg_notes (note, first, last, orig_insn)
3599 rtx note, first, last, orig_insn;
3601 rtx reg, temp, link;
3602 int n_regs, i, new_reg;
3603 rtx insn;
3605 /* Assume that this is a REG_DEAD note. */
3606 if (REG_NOTE_KIND (note) != REG_DEAD)
3607 abort ();
3609 reg = XEXP (note, 0);
3611 n_regs = HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg));
3613 for (i = 0; i < n_regs; i++)
3615 new_reg = REGNO (reg) + i;
3617 /* Check for references to new_reg in the split insns. */
3618 for (insn = last; ; insn = PREV_INSN (insn))
3620 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
3621 && (temp = regno_use_in (new_reg, PATTERN (insn))))
3623 /* Create a new reg dead note here. */
3624 link = rtx_alloc (EXPR_LIST);
3625 PUT_REG_NOTE_KIND (link, REG_DEAD);
3626 XEXP (link, 0) = temp;
3627 XEXP (link, 1) = REG_NOTES (insn);
3628 REG_NOTES (insn) = link;
3630 /* If killed multiple registers here, then add in the excess. */
3631 i += HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) - 1;
3633 break;
3635 /* It isn't mentioned anywhere, so no new reg note is needed for
3636 this register. */
3637 if (insn == first)
3638 break;
3643 /* Subroutine of update_flow_info. Determines whether a SET or CLOBBER in an
3644 insn created by splitting needs a REG_DEAD or REG_UNUSED note added. */
3646 static void
3647 new_insn_dead_notes (pat, insn, last, orig_insn)
3648 rtx pat, insn, last, orig_insn;
3650 rtx dest, tem, set;
3652 /* PAT is either a CLOBBER or a SET here. */
3653 dest = XEXP (pat, 0);
3655 while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
3656 || GET_CODE (dest) == STRICT_LOW_PART
3657 || GET_CODE (dest) == SIGN_EXTRACT)
3658 dest = XEXP (dest, 0);
3660 if (GET_CODE (dest) == REG)
3662 /* If the original insn already used this register, we may not add new
3663 notes for it. One example for a split that needs this test is
3664 when a multi-word memory access with register-indirect addressing
3665 is split into multiple memory accesses with auto-increment and
3666 one adjusting add instruction for the address register. */
3667 if (reg_referenced_p (dest, PATTERN (orig_insn)))
3668 return;
3669 for (tem = last; tem != insn; tem = PREV_INSN (tem))
3671 if (GET_RTX_CLASS (GET_CODE (tem)) == 'i'
3672 && reg_overlap_mentioned_p (dest, PATTERN (tem))
3673 && (set = single_set (tem)))
3675 rtx tem_dest = SET_DEST (set);
3677 while (GET_CODE (tem_dest) == ZERO_EXTRACT
3678 || GET_CODE (tem_dest) == SUBREG
3679 || GET_CODE (tem_dest) == STRICT_LOW_PART
3680 || GET_CODE (tem_dest) == SIGN_EXTRACT)
3681 tem_dest = XEXP (tem_dest, 0);
3683 if (! rtx_equal_p (tem_dest, dest))
3685 /* Use the same scheme as combine.c, don't put both REG_DEAD
3686 and REG_UNUSED notes on the same insn. */
3687 if (! find_regno_note (tem, REG_UNUSED, REGNO (dest))
3688 && ! find_regno_note (tem, REG_DEAD, REGNO (dest)))
3690 rtx note = rtx_alloc (EXPR_LIST);
3691 PUT_REG_NOTE_KIND (note, REG_DEAD);
3692 XEXP (note, 0) = dest;
3693 XEXP (note, 1) = REG_NOTES (tem);
3694 REG_NOTES (tem) = note;
3696 /* The reg only dies in one insn, the last one that uses
3697 it. */
3698 break;
3700 else if (reg_overlap_mentioned_p (dest, SET_SRC (set)))
3701 /* We found an instruction that both uses the register,
3702 and sets it, so no new REG_NOTE is needed for this set. */
3703 break;
3706 /* If this is a set, it must die somewhere, unless it is the dest of
3707 the original insn, and hence is live after the original insn. Abort
3708 if it isn't supposed to be live after the original insn.
3710 If this is a clobber, then just add a REG_UNUSED note. */
3711 if (tem == insn)
3713 int live_after_orig_insn = 0;
3714 rtx pattern = PATTERN (orig_insn);
3715 int i;
3717 if (GET_CODE (pat) == CLOBBER)
3719 rtx note = rtx_alloc (EXPR_LIST);
3720 PUT_REG_NOTE_KIND (note, REG_UNUSED);
3721 XEXP (note, 0) = dest;
3722 XEXP (note, 1) = REG_NOTES (insn);
3723 REG_NOTES (insn) = note;
3724 return;
3727 /* The original insn could have multiple sets, so search the
3728 insn for all sets. */
3729 if (GET_CODE (pattern) == SET)
3731 if (reg_overlap_mentioned_p (dest, SET_DEST (pattern)))
3732 live_after_orig_insn = 1;
3734 else if (GET_CODE (pattern) == PARALLEL)
3736 for (i = 0; i < XVECLEN (pattern, 0); i++)
3737 if (GET_CODE (XVECEXP (pattern, 0, i)) == SET
3738 && reg_overlap_mentioned_p (dest,
3739 SET_DEST (XVECEXP (pattern,
3740 0, i))))
3741 live_after_orig_insn = 1;
3744 if (! live_after_orig_insn)
3745 abort ();
3750 /* Subroutine of update_flow_info. Update the value of reg_n_sets for all
3751 registers modified by X. INC is -1 if the containing insn is being deleted,
3752 and is 1 if the containing insn is a newly generated insn. */
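/* A sketch of the intended use, mirroring the code at the end of
   update_flow_info below.  X must be a single SET or CLOBBER; callers
   walk PARALLEL patterns themselves and pass each element:

       if (GET_CODE (PATTERN (orig_insn)) == SET)
         update_n_sets (PATTERN (orig_insn), -1);

   and likewise with INC == 1 for each newly created insn.  */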
3754 static void
3755 update_n_sets (x, inc)
3756 rtx x;
3757 int inc;
3759 rtx dest = SET_DEST (x);
3761 while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
3762 || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
3763 dest = SUBREG_REG (dest);
3765 if (GET_CODE (dest) == REG)
3767 int regno = REGNO (dest);
3769 if (regno < FIRST_PSEUDO_REGISTER)
3771 register int i;
3772 int endregno = regno + HARD_REGNO_NREGS (regno, GET_MODE (dest));
3774 for (i = regno; i < endregno; i++)
3775 REG_N_SETS (i) += inc;
3777 else
3778 REG_N_SETS (regno) += inc;
3782 /* Updates all flow-analysis related quantities (including REG_NOTES) for
3783 the insns from FIRST to LAST inclusive that were created by splitting
3784 ORIG_INSN. NOTES are the original REG_NOTES. */
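/* An illustrative example (a made-up two-word memory load; the modes,
   register numbers, and offset are hypothetical): if

       (set (reg:DI 0) (mem:DI (reg:SI 10)))

   carrying a REG_DEAD note for (reg:SI 10) is split into

       (set (reg:SI 0) (mem:SI (reg:SI 10)))
       (set (reg:SI 1) (mem:SI (plus:SI (reg:SI 10) (const_int 4))))

   then the REG_DEAD note must end up on the second new insn, the last
   one that mentions (reg:SI 10).  The loop below performs this kind of
   migration for each kind of note.  */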
3786 static void
3787 update_flow_info (notes, first, last, orig_insn)
3788 rtx notes;
3789 rtx first, last;
3790 rtx orig_insn;
3792 rtx insn, note;
3793 rtx next;
3794 rtx orig_dest, temp;
3795 rtx set;
3797 /* Get and save the destination set by the original insn. */
3799 orig_dest = single_set (orig_insn);
3800 if (orig_dest)
3801 orig_dest = SET_DEST (orig_dest);
3803 /* Move REG_NOTES from the original insn to where they now belong. */
3805 for (note = notes; note; note = next)
3807 next = XEXP (note, 1);
3808 switch (REG_NOTE_KIND (note))
3810 case REG_DEAD:
3811 case REG_UNUSED:
3812 /* Move these notes from the original insn to the last new insn where
3813 the register is now set. */
3815 for (insn = last; ; insn = PREV_INSN (insn))
3817 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
3818 && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
3820 /* If this note refers to a multiple word hard register, it
3821 may have been split into several smaller hard register
3822 references, so handle it specially. */
3823 temp = XEXP (note, 0);
3824 if (REG_NOTE_KIND (note) == REG_DEAD
3825 && GET_CODE (temp) == REG
3826 && REGNO (temp) < FIRST_PSEUDO_REGISTER
3827 && HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) > 1)
3828 split_hard_reg_notes (note, first, last, orig_insn);
3829 else
3831 XEXP (note, 1) = REG_NOTES (insn);
3832 REG_NOTES (insn) = note;
3835 /* Sometimes need to convert REG_UNUSED notes to REG_DEAD
3836 notes. */
3837 /* ??? This won't handle multiple word registers correctly,
3838 but should be good enough for now. */
3839 if (REG_NOTE_KIND (note) == REG_UNUSED
3840 && GET_CODE (XEXP (note, 0)) != SCRATCH
3841 && ! dead_or_set_p (insn, XEXP (note, 0)))
3842 PUT_REG_NOTE_KIND (note, REG_DEAD);
3844 /* The reg only dies in one insn, the last one that uses
3845 it. */
3846 break;
3848 /* It must die somewhere; fail if we couldn't find where it died.
3850 If this is a REG_UNUSED note, then it must be a temporary
3851 register that was not needed by this instantiation of the
3852 pattern, so we can safely ignore it. */
3853 if (insn == first)
3855 /* After reload, REG_DEAD notes sometimes appear one
3856 instruction after the register actually dies. */
3857 if (reload_completed && REG_NOTE_KIND (note) == REG_DEAD)
3859 XEXP (note, 1) = REG_NOTES (insn);
3860 REG_NOTES (insn) = note;
3861 break;
3864 if (REG_NOTE_KIND (note) != REG_UNUSED)
3865 abort ();
3867 break;
3870 break;
3872 case REG_WAS_0:
3873 /* If the insn that set the register to 0 was deleted, this
3874 note cannot be relied on any longer. The destination might
3875 even have been moved to memory.
3876 This was observed for SH4 with execute/920501-6.c compilation,
3877 -O2 -fomit-frame-pointer -finline-functions. */
3878 if (GET_CODE (XEXP (note, 0)) == NOTE
3879 || INSN_DELETED_P (XEXP (note, 0)))
3880 break;
3881 /* This note applies to the dest of the original insn. Find the
3882 first new insn that now has the same dest, and move the note
3883 there. */
3885 if (! orig_dest)
3886 abort ();
3888 for (insn = first; ; insn = NEXT_INSN (insn))
3890 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
3891 && (temp = single_set (insn))
3892 && rtx_equal_p (SET_DEST (temp), orig_dest))
3894 XEXP (note, 1) = REG_NOTES (insn);
3895 REG_NOTES (insn) = note;
3896 /* The reg is only zero before one insn, the first that
3897 sets it. */
3898 break;
3900 /* If this note refers to a multiple word hard
3901 register, it may have been split into several smaller
3902 hard register references. We could split the notes,
3903 but simply dropping them is good enough. */
3904 if (GET_CODE (orig_dest) == REG
3905 && REGNO (orig_dest) < FIRST_PSEUDO_REGISTER
3906 && HARD_REGNO_NREGS (REGNO (orig_dest),
3907 GET_MODE (orig_dest)) > 1)
3908 break;
3909 /* It must be set somewhere, fail if we couldn't find where it
3910 was set. */
3911 if (insn == last)
3912 abort ();
3914 break;
3916 case REG_EQUAL:
3917 case REG_EQUIV:
3918 /* A REG_EQUIV or REG_EQUAL note on an insn with more than one set is
3919 meaningless; just drop it. Otherwise, fall through and treat it like REG_NO_CONFLICT. */
3920 if (! orig_dest)
3921 break;
3923 case REG_NO_CONFLICT:
3924 /* These notes apply to the dest of the original insn. Find the last
3925 new insn that now has the same dest, and move the note there. */
3927 if (! orig_dest)
3928 abort ();
3930 for (insn = last; ; insn = PREV_INSN (insn))
3932 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
3933 && (temp = single_set (insn))
3934 && rtx_equal_p (SET_DEST (temp), orig_dest))
3936 XEXP (note, 1) = REG_NOTES (insn);
3937 REG_NOTES (insn) = note;
3938 /* Only put this note on one of the new insns. */
3939 break;
3942 /* The original dest must still be set someplace. Abort if we
3943 couldn't find it. */
3944 if (insn == first)
3946 /* However, if this note refers to a multiple word hard
3947 register, it may have been split into several smaller
3948 hard register references. We could split the notes,
3949 but simply dropping them is good enough. */
3950 if (GET_CODE (orig_dest) == REG
3951 && REGNO (orig_dest) < FIRST_PSEUDO_REGISTER
3952 && HARD_REGNO_NREGS (REGNO (orig_dest),
3953 GET_MODE (orig_dest)) > 1)
3954 break;
3955 /* Likewise for multi-word memory references. */
3956 if (GET_CODE (orig_dest) == MEM
3957 && SIZE_FOR_MODE (orig_dest) > MOVE_MAX)
3958 break;
3959 abort ();
3962 break;
3964 case REG_LIBCALL:
3965 /* Move a REG_LIBCALL note to the first insn created, and update
3966 the corresponding REG_RETVAL note. */
3967 XEXP (note, 1) = REG_NOTES (first);
3968 REG_NOTES (first) = note;
3970 insn = XEXP (note, 0);
3971 note = find_reg_note (insn, REG_RETVAL, NULL_RTX);
3972 if (note)
3973 XEXP (note, 0) = first;
3974 break;
3976 case REG_EXEC_COUNT:
3977 /* Move a REG_EXEC_COUNT note to the first insn created. */
3978 XEXP (note, 1) = REG_NOTES (first);
3979 REG_NOTES (first) = note;
3980 break;
3982 case REG_RETVAL:
3983 /* Move a REG_RETVAL note to the last insn created, and update
3984 the corresponding REG_LIBCALL note. */
3985 XEXP (note, 1) = REG_NOTES (last);
3986 REG_NOTES (last) = note;
3988 insn = XEXP (note, 0);
3989 note = find_reg_note (insn, REG_LIBCALL, NULL_RTX);
3990 if (note)
3991 XEXP (note, 0) = last;
3992 break;
3994 case REG_NONNEG:
3995 case REG_BR_PROB:
3996 /* This should be moved to whichever instruction is a JUMP_INSN. */
3998 for (insn = last; ; insn = PREV_INSN (insn))
4000 if (GET_CODE (insn) == JUMP_INSN)
4002 XEXP (note, 1) = REG_NOTES (insn);
4003 REG_NOTES (insn) = note;
4004 /* Only put this note on one of the new insns. */
4005 break;
4007 /* Fail if we couldn't find a JUMP_INSN. */
4008 if (insn == first)
4009 abort ();
4011 break;
4013 case REG_INC:
4014 /* reload sometimes leaves obsolete REG_INC notes around. */
4015 if (reload_completed)
4016 break;
4017 /* This should be moved to whichever instruction now has the
4018 increment operation. */
4019 abort ();
4021 case REG_LABEL:
4022 /* Should be moved to the new insn(s) which use the label. */
4023 for (insn = first; insn != NEXT_INSN (last); insn = NEXT_INSN (insn))
4024 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4025 && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
4026 REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_LABEL,
4027 XEXP (note, 0),
4028 REG_NOTES (insn));
4029 break;
4031 case REG_CC_SETTER:
4032 case REG_CC_USER:
4033 /* These two notes will never appear until after reorg, so we don't
4034 have to handle them here. */
4035 default:
4036 abort ();
4040 /* Each new insn created, except the last, has a new set. If the destination
4041 is a register, then this reg is now live across several insns, whereas
4042 previously the dest reg was born and died within the same insn. To
4043 reflect this, we now need a REG_DEAD note on the insn where this
4044 dest reg dies.
4046 Similarly, the new insns may have clobbers that need REG_UNUSED notes. */
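/* For instance (illustrative; the flags-register number is made up),
   a new insn

       (parallel [(set (reg:SI 0) (neg:SI (reg:SI 1)))
                  (clobber (reg:CC 17))])

   whose clobbered (reg:CC 17) is not read by any later insn in the
   split sequence gets a REG_UNUSED note for it via
   new_insn_dead_notes.  */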
4048 for (insn = first; insn != last; insn = NEXT_INSN (insn))
4050 rtx pat;
4051 int i;
4053 pat = PATTERN (insn);
4054 if (GET_CODE (pat) == SET || GET_CODE (pat) == CLOBBER)
4055 new_insn_dead_notes (pat, insn, last, orig_insn);
4056 else if (GET_CODE (pat) == PARALLEL)
4058 for (i = 0; i < XVECLEN (pat, 0); i++)
4059 if (GET_CODE (XVECEXP (pat, 0, i)) == SET
4060 || GET_CODE (XVECEXP (pat, 0, i)) == CLOBBER)
4061 new_insn_dead_notes (XVECEXP (pat, 0, i), insn, last, orig_insn);
4065 /* If any insn, except the last, uses the register set by the last insn,
4066 then we need a new REG_DEAD note on that insn. In this case, there
4067 would not have been a REG_DEAD note for this register in the original
4068 insn because it was used and set within one insn. */
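/* E.g. (illustrative register numbers): if

       (set (reg:SI 0) (not:SI (neg:SI (reg:SI 0))))

   was split into

       (set (reg:SI 100) (neg:SI (reg:SI 0)))
       (set (reg:SI 0) (not:SI (reg:SI 100)))

   then the old value of (reg:SI 0) now dies in the first new insn,
   which therefore needs a REG_DEAD note for it.  */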
4070 set = single_set (last);
4071 if (set)
4073 rtx dest = SET_DEST (set);
4075 while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
4076 || GET_CODE (dest) == STRICT_LOW_PART
4077 || GET_CODE (dest) == SIGN_EXTRACT)
4078 dest = XEXP (dest, 0);
4080 if (GET_CODE (dest) == REG
4081 /* Global registers are always live, so the code below does not
4082 apply to them. */
4083 && (REGNO (dest) >= FIRST_PSEUDO_REGISTER
4084 || ! global_regs[REGNO (dest)]))
4086 rtx stop_insn = PREV_INSN (first);
4088 /* If the last insn uses the register that it is setting, then
4089 we don't want to put a REG_DEAD note there. Search backwards
4090 to find the first insn that sets but does not use DEST. */
4092 insn = last;
4093 if (reg_overlap_mentioned_p (dest, SET_SRC (set)))
4095 for (insn = PREV_INSN (insn); insn != first;
4096 insn = PREV_INSN (insn))
4098 if ((set = single_set (insn))
4099 && reg_mentioned_p (dest, SET_DEST (set))
4100 && ! reg_overlap_mentioned_p (dest, SET_SRC (set)))
4101 break;
4105 /* Now find the first insn that uses but does not set DEST. */
4107 for (insn = PREV_INSN (insn); insn != stop_insn;
4108 insn = PREV_INSN (insn))
4110 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4111 && reg_mentioned_p (dest, PATTERN (insn))
4112 && (set = single_set (insn)))
4114 rtx insn_dest = SET_DEST (set);
4116 while (GET_CODE (insn_dest) == ZERO_EXTRACT
4117 || GET_CODE (insn_dest) == SUBREG
4118 || GET_CODE (insn_dest) == STRICT_LOW_PART
4119 || GET_CODE (insn_dest) == SIGN_EXTRACT)
4120 insn_dest = XEXP (insn_dest, 0);
4122 if (insn_dest != dest)
4124 note = rtx_alloc (EXPR_LIST);
4125 PUT_REG_NOTE_KIND (note, REG_DEAD);
4126 XEXP (note, 0) = dest;
4127 XEXP (note, 1) = REG_NOTES (insn);
4128 REG_NOTES (insn) = note;
4129 /* The reg only dies in one insn, the last one
4130 that uses it. */
4131 break;
4138 /* If the original dest is modifying a multiple register target, and the
4139 original instruction was split such that the original dest is now set
4140 by two or more SUBREG sets, then the split insns no longer kill the
4141 destination of the original insn.
4143 In this case, if there exists an instruction in the same basic block,
4144 before the split insn, which uses the original dest, and this use is
4145 killed by the original insn, then we must remove the REG_DEAD note on
4146 this insn, because it is now superfluous.
4148 This does not apply when a hard register gets split, because the code
4149 knows how to handle overlapping hard registers properly. */
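/* E.g. (illustrative pseudo-register number): if

       (set (reg:DI 100) ...)

   was split into

       (set (subreg:SI (reg:DI 100) 0) ...)
       (set (subreg:SI (reg:DI 100) 1) ...)

   then no single new insn kills (reg:DI 100) outright, so a REG_DEAD
   note for it on an earlier insn in the block has become superfluous
   and must be removed.  */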
4150 if (orig_dest && GET_CODE (orig_dest) == REG)
4152 int found_orig_dest = 0;
4153 int found_split_dest = 0;
4155 for (insn = first; ; insn = NEXT_INSN (insn))
4157 rtx pat = PATTERN (insn);
4158 int i = GET_CODE (pat) == PARALLEL ? XVECLEN (pat, 0) : 0;
4159 set = pat;
4160 for (;;)
4162 if (GET_CODE (set) == SET)
4164 if (GET_CODE (SET_DEST (set)) == REG
4165 && REGNO (SET_DEST (set)) == REGNO (orig_dest))
4167 found_orig_dest = 1;
4168 break;
4170 else if (GET_CODE (SET_DEST (set)) == SUBREG
4171 && SUBREG_REG (SET_DEST (set)) == orig_dest)
4173 found_split_dest = 1;
4174 break;
4177 if (--i < 0)
4178 break;
4179 set = XVECEXP (pat, 0, i);
4182 if (insn == last)
4183 break;
4186 if (found_split_dest)
4188 /* Search backwards from FIRST, looking for the first insn that uses
4189 the original dest. Stop if we pass a CODE_LABEL or a JUMP_INSN.
4190 If we find an insn, and it has a REG_DEAD note, then delete the
4191 note. */
4193 for (insn = first; insn; insn = PREV_INSN (insn))
4195 if (GET_CODE (insn) == CODE_LABEL
4196 || GET_CODE (insn) == JUMP_INSN)
4197 break;
4198 else if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4199 && reg_mentioned_p (orig_dest, insn))
4201 note = find_regno_note (insn, REG_DEAD, REGNO (orig_dest));
4202 if (note)
4203 remove_note (insn, note);
4207 else if (! found_orig_dest)
4209 /* This should never happen. */
4210 abort ();
4214 /* Update reg_n_sets. This is necessary to prevent local alloc from
4215 converting REG_EQUAL notes to REG_EQUIV when splitting has modified
4216 a reg from set once to set multiple times. */
4219 rtx x = PATTERN (orig_insn);
4220 RTX_CODE code = GET_CODE (x);
4222 if (code == SET || code == CLOBBER)
4223 update_n_sets (x, -1);
4224 else if (code == PARALLEL)
4226 int i;
4227 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
4229 code = GET_CODE (XVECEXP (x, 0, i));
4230 if (code == SET || code == CLOBBER)
4231 update_n_sets (XVECEXP (x, 0, i), -1);
4235 for (insn = first; ; insn = NEXT_INSN (insn))
4237 x = PATTERN (insn);
4238 code = GET_CODE (x);
4240 if (code == SET || code == CLOBBER)
4241 update_n_sets (x, 1);
4242 else if (code == PARALLEL)
4244 int i;
4245 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
4247 code = GET_CODE (XVECEXP (x, 0, i));
4248 if (code == SET || code == CLOBBER)
4249 update_n_sets (XVECEXP (x, 0, i), 1);
4253 if (insn == last)
4254 break;
4259 /* The one entry point in this file. DUMP_FILE is the dump file for
4260 this pass. */
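/* The expected caller (as of this tree) is rest_of_compilation in
   toplev.c: once before register allocation when -fschedule-insns is
   given, and again after reload when -fschedule-insns2
   (flag_schedule_insns_after_reload) is given.  */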
4262 void
4263 schedule_insns (dump_file)
4264 FILE *dump_file;
4266 int max_uid = MAX_INSNS_PER_SPLIT * (get_max_uid () + 1);
4267 int b;
4268 rtx insn;
4270 /* Taking care of this degenerate case makes the rest of
4271 this code simpler. */
4272 if (n_basic_blocks == 0)
4273 return;
4275 /* Create an insn here so that we can hang dependencies off of it later. */
4276 sched_before_next_call
4277 = gen_rtx_INSN (VOIDmode, 0, NULL_RTX, NULL_RTX,
4278 NULL_RTX, 0, NULL_RTX, NULL_RTX);
4280 /* Initialize the unused_*_lists. We can't use the ones left over from
4281 the previous function, because gcc has freed that memory. We can use
4282 the ones left over from the first sched pass in the second pass however,
4283 so only clear them on the first sched pass. The first pass is before
4284 reload if flag_schedule_insns is set, otherwise it is afterwards. */
4286 if (reload_completed == 0 || ! flag_schedule_insns)
4288 unused_insn_list = 0;
4289 unused_expr_list = 0;
4292 /* We create no insns here, only reorder them, so we
4293 remember how far we can cut back the stack on exit. */
4295 /* Allocate data for this pass. See comments, above,
4296 for what these vectors do. */
4297 insn_luid = (int *) alloca (max_uid * sizeof (int));
4298 insn_priority = (int *) alloca (max_uid * sizeof (int));
4299 insn_tick = (int *) alloca (max_uid * sizeof (int));
4300 insn_costs = (short *) alloca (max_uid * sizeof (short));
4301 insn_units = (short *) alloca (max_uid * sizeof (short));
4302 insn_blockage = (unsigned int *) alloca (max_uid * sizeof (unsigned int));
4303 insn_ref_count = (int *) alloca (max_uid * sizeof (int));
4305 if (reload_completed == 0)
4307 sched_reg_n_calls_crossed = (int *) alloca (max_regno * sizeof (int));
4308 sched_reg_live_length = (int *) alloca (max_regno * sizeof (int));
4309 bb_dead_regs = ALLOCA_REG_SET ();
4310 bb_live_regs = ALLOCA_REG_SET ();
4311 bzero ((char *) sched_reg_n_calls_crossed, max_regno * sizeof (int));
4312 bzero ((char *) sched_reg_live_length, max_regno * sizeof (int));
4314 else
4316 sched_reg_n_calls_crossed = 0;
4317 sched_reg_live_length = 0;
4318 bb_dead_regs = 0;
4319 bb_live_regs = 0;
4321 init_alias_analysis ();
4323 if (write_symbols != NO_DEBUG)
4325 rtx line;
4327 line_note = (rtx *) alloca (max_uid * sizeof (rtx));
4328 bzero ((char *) line_note, max_uid * sizeof (rtx));
4329 line_note_head = (rtx *) alloca (n_basic_blocks * sizeof (rtx));
4330 bzero ((char *) line_note_head, n_basic_blocks * sizeof (rtx));
4332 /* Determine the line-number at the start of each basic block.
4333 This must be computed and saved now, because after a basic block's
4334 predecessor has been scheduled, it is impossible to accurately
4335 determine the correct line number for the first insn of the block. */
4337 for (b = 0; b < n_basic_blocks; b++)
4338 for (line = basic_block_head[b]; line; line = PREV_INSN (line))
4339 if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
4341 line_note_head[b] = line;
4342 break;
4346 bzero ((char *) insn_luid, max_uid * sizeof (int));
4347 bzero ((char *) insn_priority, max_uid * sizeof (int));
4348 bzero ((char *) insn_tick, max_uid * sizeof (int));
4349 bzero ((char *) insn_costs, max_uid * sizeof (short));
4350 bzero ((char *) insn_units, max_uid * sizeof (short));
4351 bzero ((char *) insn_blockage, max_uid * sizeof (unsigned int));
4352 bzero ((char *) insn_ref_count, max_uid * sizeof (int));
4354 /* Schedule each basic block, block by block. */
4356 /* ??? Add a NOTE after the last insn of the last basic block. It is not
4357 known why this is done. */
4358 /* ??? Perhaps it's done to ensure NEXT_TAIL in schedule_block is a
4359 valid insn. */
4361 insn = basic_block_end[n_basic_blocks-1];
4362 if (NEXT_INSN (insn) == 0
4363 || (GET_CODE (insn) != NOTE
4364 && GET_CODE (insn) != CODE_LABEL
4365 /* Don't emit a NOTE if it would end up between an unconditional
4366 jump and a BARRIER. */
4367 && ! (GET_CODE (insn) == JUMP_INSN
4368 && GET_CODE (NEXT_INSN (insn)) == BARRIER)))
4369 emit_note_after (NOTE_INSN_DELETED, basic_block_end[n_basic_blocks-1]);
4371 for (b = 0; b < n_basic_blocks; b++)
4373 rtx insn, next;
4375 note_list = 0;
4377 for (insn = basic_block_head[b]; ; insn = next)
4379 rtx prev;
4380 rtx set;
4382 /* Can't use `next_real_insn' because that
4383 might go across CODE_LABELS and short-out basic blocks. */
4384 next = NEXT_INSN (insn);
4385 if (GET_CODE (insn) != INSN)
4387 if (insn == basic_block_end[b])
4388 break;
4390 continue;
4393 /* Don't split no-op move insns. These should silently disappear
4394 later in final. Splitting such insns would break the code
4395 that handles REG_NO_CONFLICT blocks. */
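/* E.g. (illustrative register number): (set (reg:SI 3) (reg:SI 3)) is
   such a no-op move; the rtx_equal_p test below recognizes it.  */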
4396 set = single_set (insn);
4397 if (set && rtx_equal_p (SET_SRC (set), SET_DEST (set)))
4399 if (insn == basic_block_end[b])
4400 break;
4402 /* Nops get in the way while scheduling, so delete them now if
4403 register allocation has already been done. It is too risky
4404 to try to do this before register allocation, and there are
4405 unlikely to be very many nops then anyway. */
4406 if (reload_completed)
4408 PUT_CODE (insn, NOTE);
4409 NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
4410 NOTE_SOURCE_FILE (insn) = 0;
4413 continue;
4416 /* Split insns here to get max fine-grain parallelism. */
4417 prev = PREV_INSN (insn);
4418 /* It is probably not worthwhile to try to split again in the
4419 second pass. However, if flag_schedule_insns is not set,
4420 the first and only (if any) scheduling pass is after reload. */
4421 if (reload_completed == 0 || ! flag_schedule_insns)
4423 rtx last, first = PREV_INSN (insn);
4424 rtx notes = REG_NOTES (insn);
4426 last = try_split (PATTERN (insn), insn, 1);
4427 if (last != insn)
4429 /* try_split returns the NOTE that INSN became. */
4430 first = NEXT_INSN (first);
4431 update_flow_info (notes, first, last, insn);
4433 PUT_CODE (insn, NOTE);
4434 NOTE_SOURCE_FILE (insn) = 0;
4435 NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
4436 if (insn == basic_block_head[b])
4437 basic_block_head[b] = first;
4438 if (insn == basic_block_end[b])
4440 basic_block_end[b] = last;
4441 break;
4446 if (insn == basic_block_end[b])
4447 break;
4450 schedule_block (b, dump_file);
4452 #ifdef USE_C_ALLOCA
4453 alloca (0);
4454 #endif
4457 /* Reposition the prologue and epilogue notes in case we moved the
4458 prologue/epilogue insns. */
4459 if (reload_completed)
4460 reposition_prologue_and_epilogue_notes (get_insns ());
4462 if (write_symbols != NO_DEBUG)
4464 rtx line = 0;
4465 rtx insn = get_insns ();
4466 int active_insn = 0;
4467 int notes = 0;
4469 /* Walk the insns deleting redundant line-number notes. Many of
4470 these were already present before scheduling; the remainder tend
4471 to occur at basic block boundaries. */
4472 for (insn = get_last_insn (); insn; insn = PREV_INSN (insn))
4473 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
4475 /* If there are no active insns following, INSN is redundant. */
4476 if (active_insn == 0)
4478 notes++;
4479 NOTE_SOURCE_FILE (insn) = 0;
4480 NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
4482 /* If the line number is unchanged, LINE is redundant. */
4483 else if (line
4484 && NOTE_LINE_NUMBER (line) == NOTE_LINE_NUMBER (insn)
4485 && NOTE_SOURCE_FILE (line) == NOTE_SOURCE_FILE (insn))
4487 notes++;
4488 NOTE_SOURCE_FILE (line) = 0;
4489 NOTE_LINE_NUMBER (line) = NOTE_INSN_DELETED;
4490 line = insn;
4492 else
4493 line = insn;
4494 active_insn = 0;
4496 else if (! ((GET_CODE (insn) == NOTE
4497 && NOTE_LINE_NUMBER (insn) == NOTE_INSN_DELETED)
4498 || (GET_CODE (insn) == INSN
4499 && (GET_CODE (PATTERN (insn)) == USE
4500 || GET_CODE (PATTERN (insn)) == CLOBBER))))
4501 active_insn++;
4503 if (dump_file && notes)
4504 fprintf (dump_file, ";; deleted %d line-number notes\n", notes);
4507 if (reload_completed == 0)
4509 int regno;
4510 for (regno = 0; regno < max_regno; regno++)
4511 if (sched_reg_live_length[regno])
4513 if (dump_file)
4515 if (REG_LIVE_LENGTH (regno) > sched_reg_live_length[regno])
4516 fprintf (dump_file,
4517 ";; register %d life shortened from %d to %d\n",
4518 regno, REG_LIVE_LENGTH (regno),
4519 sched_reg_live_length[regno]);
4520 /* Negative values are special; don't overwrite the current
4521 reg_live_length value if it is negative. */
4522 else if (REG_LIVE_LENGTH (regno) < sched_reg_live_length[regno]
4523 && REG_LIVE_LENGTH (regno) >= 0)
4524 fprintf (dump_file,
4525 ";; register %d life extended from %d to %d\n",
4526 regno, REG_LIVE_LENGTH (regno),
4527 sched_reg_live_length[regno]);
4529 if (! REG_N_CALLS_CROSSED (regno)
4530 && sched_reg_n_calls_crossed[regno])
4531 fprintf (dump_file,
4532 ";; register %d now crosses calls\n", regno);
4533 else if (REG_N_CALLS_CROSSED (regno)
4534 && ! sched_reg_n_calls_crossed[regno]
4535 && REG_BASIC_BLOCK (regno) != REG_BLOCK_GLOBAL)
4536 fprintf (dump_file,
4537 ";; register %d no longer crosses calls\n", regno);
4540 /* Negative values are special; don't overwrite the current
4541 reg_live_length value if it is negative. */
4542 if (REG_LIVE_LENGTH (regno) >= 0)
4543 REG_LIVE_LENGTH (regno) = sched_reg_live_length[regno];
4545 /* We can't change the value of reg_n_calls_crossed to zero for
4546 pseudos which are live in more than one block.
4548 This is because combine might have made an optimization which
4549 invalidated basic_block_live_at_start and reg_n_calls_crossed,
4550 but it does not update them. If we update reg_n_calls_crossed
4551 here, the two variables are now inconsistent, and this might
4552 confuse the caller-save code into saving a register that doesn't
4553 need to be saved. This is only a problem when we zero calls
4554 crossed for a pseudo live in multiple basic blocks.
4556 Alternatively, we could try to correctly update basic block live
4557 at start here in sched, but that seems complicated. */
4558 if (sched_reg_n_calls_crossed[regno]
4559 || REG_BASIC_BLOCK (regno) != REG_BLOCK_GLOBAL)
4560 REG_N_CALLS_CROSSED (regno) = sched_reg_n_calls_crossed[regno];
4564 if (reload_completed == 0)
4566 FREE_REG_SET (bb_dead_regs);
4567 FREE_REG_SET (bb_live_regs);
4571 #endif /* INSN_SCHEDULING */