2005-04-05 Andrew MacLeod <amacleod@redhat.com>
[official-gcc.git] / gcc / haifa-sched.c
blob 6a2ac54691f047fbbeb0c9dd8bac599079b6cdbc
1 /* Instruction scheduling pass.
2 Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998,
3 1999, 2000, 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc.
4 Contributed by Michael Tiemann (tiemann@cygnus.com) Enhanced by,
5 and currently maintained by, Jim Wilson (wilson@cygnus.com)
7 This file is part of GCC.
9 GCC is free software; you can redistribute it and/or modify it under
10 the terms of the GNU General Public License as published by the Free
11 Software Foundation; either version 2, or (at your option) any later
12 version.
14 GCC is distributed in the hope that it will be useful, but WITHOUT ANY
15 WARRANTY; without even the implied warranty of MERCHANTABILITY or
16 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
17 for more details.
19 You should have received a copy of the GNU General Public License
20 along with GCC; see the file COPYING. If not, write to the Free
21 Software Foundation, 59 Temple Place - Suite 330, Boston, MA
22 02111-1307, USA. */
24 /* Instruction scheduling pass. This file, along with sched-deps.c,
25 contains the generic parts. The actual entry point for the normal
26 instruction scheduling pass is found in sched-rgn.c.
28 We compute insn priorities based on data dependencies. Flow
29 analysis only creates a fraction of the data-dependencies we must
30 observe: namely, only those dependencies which the combiner can be
31 expected to use. For this pass, we must therefore create the
32 remaining dependencies we need to observe: register dependencies,
33 memory dependencies, dependencies to keep function calls in order,
34 and the dependence between a conditional branch and the setting of
35 condition codes are all dealt with here.
37 The scheduler first traverses the data flow graph, starting with
38 the last instruction, and proceeding to the first, assigning values
39 to insn_priority as it goes. This sorts the instructions
40 topologically by data dependence.
42 Once priorities have been established, we order the insns using
43 list scheduling. This works as follows: starting with a list of
44 all the ready insns, and sorted according to priority number, we
45 schedule the insn from the end of the list by placing its
46 predecessors in the list according to their priority order. We
47 consider this insn scheduled by setting the pointer to the "end" of
48 the list to point to the previous insn. When an insn has no
49 predecessors, we either queue it until sufficient time has elapsed
50 or add it to the ready list. As the instructions are scheduled or
51 when stalls are introduced, the queue advances and dumps insns into
52 the ready list. When all insns down to the lowest priority have
53 been scheduled, the critical path of the basic block has been made
54 as short as possible. The remaining insns are then scheduled in
55 remaining slots.
57 The following list shows the order in which we want to break ties
58 among insns in the ready list:
60 1. choose insn with the longest path to end of bb, ties
61 broken by
62 2. choose insn with least contribution to register pressure,
63 ties broken by
64 3. prefer in-block over interblock motion, ties broken by
65 4. prefer useful over speculative motion, ties broken by
66 5. choose insn with largest control flow probability, ties
67 broken by
68 6. choose insn with the least dependences upon the previously
69 scheduled insn, ties broken by
70 7. choose the insn which has the most insns dependent on it, or finally
71 8. choose insn with lowest UID.
73 Memory references complicate matters. Only if we can be certain
74 that memory references are not part of the data dependency graph
75 (via true, anti, or output dependence), can we move operations past
76 memory references. To first approximation, reads can be done
77 independently, while writes introduce dependencies. Better
78 approximations will yield fewer dependencies.
80 Before reload, an extended analysis of interblock data dependences
81 is required for interblock scheduling. This is performed in
82 compute_block_backward_dependences ().
84 Dependencies set up by memory references are treated in exactly the
85 same way as other dependencies, by using LOG_LINKS backward
86 dependences. LOG_LINKS are translated into INSN_DEPEND forward
87 dependences for the purpose of forward list scheduling.
89 Having optimized the critical path, we may have also unduly
90 extended the lifetimes of some registers. If an operation requires
91 that constants be loaded into registers, it is certainly desirable
92 to load those constants as early as necessary, but no earlier.
93 I.e., it will not do to load up a bunch of registers at the
94 beginning of a basic block only to use them at the end, if they
95 could be loaded later, since this may result in excessive register
96 utilization.
98 Note that since branches are never in basic blocks, but only end
99 basic blocks, this pass will not move branches. But that is ok,
100 since we can use GNU's delayed branch scheduling pass to take care
101 of this case.
103 Also note that no further optimizations based on algebraic
104 identities are performed, so this pass would be a good one to
105 perform instruction splitting, such as breaking up a multiply
106 instruction into shifts and adds where that is profitable.
108 Given the memory aliasing analysis that this pass should perform,
109 it should be possible to remove redundant stores to memory, and to
110 load values from registers instead of hitting memory.
112 Before reload, speculative insns are moved only if a 'proof' exists
113 that no exception will be caused by this, and if no live registers
114 exist that inhibit the motion (live registers constraints are not
115 represented by data dependence edges).
117 This pass must update information that subsequent passes expect to
118 be correct. Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
119 reg_n_calls_crossed, and reg_live_length. Also, BB_HEAD, BB_END.
121 The information in the line number notes is carefully retained by
122 this pass. Notes that refer to the starting and ending of
123 exception regions are also carefully retained by this pass. All
124 other NOTE insns are grouped in their same relative order at the
125 beginning of basic blocks and regions that have been scheduled. */
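/* Editorial sketch: a simplified outline of the list-scheduling loop
   described above, using names defined later in this file.  The real
   loop in schedule_block () also handles multiple issue, target
   reordering hooks, and DFA state copies; this is only a sketch and
   is not compiled.  */
#if 0
  while ((*current_sched_info->schedule_more_p) ())
    {
      clock_var++;
      advance_one_cycle ();                  /* advance the processor DFA state */
      queue_to_ready (&ready);               /* Q -> R as stalls elapse */
      ready_sort (&ready);                   /* apply the tie-breaking rules */
      insn = choose_ready (&ready);          /* pick the best ready insn */
      move_insn (insn, last_scheduled_insn); /* R -> S in the insn chain */
      last_scheduled_insn = insn;
      advance = schedule_insn (insn, &ready, clock_var); /* dependents -> R or Q */
    }
#endif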
127 #include "config.h"
128 #include "system.h"
129 #include "coretypes.h"
130 #include "tm.h"
131 #include "toplev.h"
132 #include "rtl.h"
133 #include "tm_p.h"
134 #include "hard-reg-set.h"
135 #include "regs.h"
136 #include "function.h"
137 #include "flags.h"
138 #include "insn-config.h"
139 #include "insn-attr.h"
140 #include "except.h"
141 #include "toplev.h"
142 #include "recog.h"
143 #include "sched-int.h"
144 #include "target.h"
146 #ifdef INSN_SCHEDULING
148 /* issue_rate is the number of insns that can be scheduled in the same
149 machine cycle. It can be defined in the config/mach/mach.h file,
150 otherwise we set it to 1. */
152 static int issue_rate;
154 /* sched-verbose controls the amount of debugging output the
155 scheduler prints. It is controlled by -fsched-verbose=N:
156 N>0 and no -dSR: the output is directed to stderr.
157 N>=10 will direct the printouts to stderr (regardless of -dSR).
158 N=1: same as -dSR.
159 N=2: bb's probabilities, detailed ready list info, unit/insn info.
160 N=3: rtl at abort point, control-flow, regions info.
161 N=5: dependences info. */
163 static int sched_verbose_param = 0;
164 int sched_verbose = 0;
166 /* Debugging file. All printouts are sent to dump, which is always set,
167 either to stderr, or to the dump listing file (-dSR). */
168 FILE *sched_dump = 0;
170 /* Highest uid before scheduling. */
171 static int old_max_uid;
173 /* fix_sched_param() is called from toplev.c upon detection
174 of the -fsched-verbose=N option. */
176 void
177 fix_sched_param (const char *param, const char *val)
179 if (!strcmp (param, "verbose"))
180 sched_verbose_param = atoi (val);
181 else
182 warning ("fix_sched_param: unknown param: %s", param);
185 struct haifa_insn_data *h_i_d;
187 #define LINE_NOTE(INSN) (h_i_d[INSN_UID (INSN)].line_note)
188 #define INSN_TICK(INSN) (h_i_d[INSN_UID (INSN)].tick)
190 /* Vector indexed by basic block number giving the starting line-number
191 for each basic block. */
192 static rtx *line_note_head;
194 /* List of important notes we must keep around. This is a pointer to the
195 last element in the list. */
196 static rtx note_list;
198 /* Queues, etc. */
200 /* An instruction is ready to be scheduled when all insns preceding it
201 have already been scheduled. It is important to ensure that all
202 insns which use its result will not be executed until its result
203 has been computed. An insn is maintained in one of four structures:
205 (P) the "Pending" set of insns which cannot be scheduled until
206 their dependencies have been satisfied.
207 (Q) the "Queued" set of insns that can be scheduled when sufficient
208 time has passed.
209 (R) the "Ready" list of unscheduled, uncommitted insns.
210 (S) the "Scheduled" list of insns.
212 Initially, all insns are either "Pending" or "Ready" depending on
213 whether their dependencies are satisfied.
215 Insns move from the "Ready" list to the "Scheduled" list as they
216 are committed to the schedule. As this occurs, the insns in the
217 "Pending" list have their dependencies satisfied and move to either
218 the "Ready" list or the "Queued" set depending on whether
219 sufficient time has passed to make them ready. As time passes,
220 insns move from the "Queued" set to the "Ready" list.
222 The "Pending" list (P) are the insns in the INSN_DEPEND of the unscheduled
223 insns, i.e., those that are ready, queued, and pending.
224 The "Queued" set (Q) is implemented by the variable `insn_queue'.
225 The "Ready" list (R) is implemented by the variables `ready' and
226 `n_ready'.
227 The "Scheduled" list (S) is the new insn chain built by this pass.
229 The transition (R->S) is implemented in the scheduling loop in
230 `schedule_block' when the best insn to schedule is chosen.
231 The transitions (P->R and P->Q) are implemented in `schedule_insn' as
232 insns move from the ready list to the scheduled list.
233 The transition (Q->R) is implemented in 'queue_to_ready' as time
234 passes or stalls are introduced. */
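/* Editorial summary of the transitions described above:

     P --(last dependence resolved, cost < 1)--> R    in schedule_insn ()
     P --(last dependence resolved, cost >= 1)--> Q   in schedule_insn ()
     Q --(enough cycles elapsed)--> R                 in queue_to_ready ()
     R --(best insn chosen)--> S                      in schedule_block ()  */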
236 /* Implement a circular buffer to delay instructions until sufficient
237 time has passed. For the new pipeline description interface,
238 MAX_INSN_QUEUE_INDEX is a power of two minus one which is greater
239 than the maximal time of instruction execution computed by genattr.c
240 on the basis of the maximal time of functional unit reservations and
241 of getting a result. This is the longest time an insn may be queued. */
243 static rtx *insn_queue;
244 static int q_ptr = 0;
245 static int q_size = 0;
246 #define NEXT_Q(X) (((X)+1) & max_insn_queue_index)
247 #define NEXT_Q_AFTER(X, C) (((X)+C) & max_insn_queue_index)
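/* Editorial sketch, assuming max_insn_queue_index is 2**k - 1 (here 7):
   the masked increment walks an 8-entry ring without a modulo.  */
#if 0
  int q = 7;                  /* last slot of the ring */
  q = NEXT_Q (q);             /* (7 + 1) & 7 == 0: wraps to the front */
  q = NEXT_Q_AFTER (q, 3);    /* (0 + 3) & 7 == 3: a 3-cycle delay slot */
#endif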
249 /* The following variable describes all current and future
250 reservations of the processor units. */
251 state_t curr_state;
253 /* The following variable is the size of the memory representing all
254 current and future reservations of the processor units. */
255 static size_t dfa_state_size;
257 /* The following array is used to find the best insn from ready when
258 the automaton pipeline interface is used. */
259 static char *ready_try;
261 /* Describe the ready list of the scheduler.
262 VEC holds space enough for all insns in the current region. VECLEN
263 says how many exactly.
264 FIRST is the index of the element with the highest priority; i.e. the
265 last one in the ready list, since elements are ordered by ascending
266 priority.
267 N_READY determines how many insns are on the ready list. */
269 struct ready_list
271 rtx *vec;
272 int veclen;
273 int first;
274 int n_ready;
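/* Editorial sketch of the layout: with veclen == 8, first == 7 and
   n_ready == 3, the ready insns occupy vec[5..7]; vec[first] holds the
   highest-priority insn and ready_lastpos () below returns &vec[5].  */
#if 0
  rtx best    = ready.vec[ready.first];                      /* highest */
  rtx *lowest = ready.vec + ready.first - ready.n_ready + 1; /* lowest */
#endif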
277 static int may_trap_exp (rtx, int);
279 /* Nonzero iff the address is composed of at most one register. */
280 #define CONST_BASED_ADDRESS_P(x) \
281 (REG_P (x) \
282 || ((GET_CODE (x) == PLUS || GET_CODE (x) == MINUS \
283 || (GET_CODE (x) == LO_SUM)) \
284 && (CONSTANT_P (XEXP (x, 0)) \
285 || CONSTANT_P (XEXP (x, 1)))))
287 /* Returns a class that the insn with GET_DEST(insn)=X may belong to,
288 as found by analyzing the insn's expression. */
290 static int
291 may_trap_exp (rtx x, int is_store)
293 enum rtx_code code;
295 if (x == 0)
296 return TRAP_FREE;
297 code = GET_CODE (x);
298 if (is_store)
300 if (code == MEM && may_trap_p (x))
301 return TRAP_RISKY;
302 else
303 return TRAP_FREE;
305 if (code == MEM)
307 /* The insn uses memory: a volatile load. */
308 if (MEM_VOLATILE_P (x))
309 return IRISKY;
310 /* An exception-free load. */
311 if (!may_trap_p (x))
312 return IFREE;
313 /* A load with 1 base register, to be further checked. */
314 if (CONST_BASED_ADDRESS_P (XEXP (x, 0)))
315 return PFREE_CANDIDATE;
316 /* No info on the load, to be further checked. */
317 return PRISKY_CANDIDATE;
319 else
321 const char *fmt;
322 int i, insn_class = TRAP_FREE;
324 /* Neither store nor load, check if it may cause a trap. */
325 if (may_trap_p (x))
326 return TRAP_RISKY;
327 /* Recursive step: walk the insn... */
328 fmt = GET_RTX_FORMAT (code);
329 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
331 if (fmt[i] == 'e')
333 int tmp_class = may_trap_exp (XEXP (x, i), is_store);
334 insn_class = WORST_CLASS (insn_class, tmp_class);
336 else if (fmt[i] == 'E')
338 int j;
339 for (j = 0; j < XVECLEN (x, i); j++)
341 int tmp_class = may_trap_exp (XVECEXP (x, i, j), is_store);
342 insn_class = WORST_CLASS (insn_class, tmp_class);
343 if (insn_class == TRAP_RISKY || insn_class == IRISKY)
344 break;
347 if (insn_class == TRAP_RISKY || insn_class == IRISKY)
348 break;
350 return insn_class;
354 /* Classifies an insn for the purpose of verifying that it can be
355 moved speculatively, by examining its pattern, returning:
356 TRAP_RISKY: store, or risky non-load insn (e.g. division by variable).
357 TRAP_FREE: non-load insn.
358 IFREE: load from a globally safe location.
359 IRISKY: volatile load.
360 PFREE_CANDIDATE, PRISKY_CANDIDATE: loads that need to be checked for
361 being either PFREE or PRISKY. */
363 int
364 haifa_classify_insn (rtx insn)
366 rtx pat = PATTERN (insn);
367 int tmp_class = TRAP_FREE;
368 int insn_class = TRAP_FREE;
369 enum rtx_code code;
371 if (GET_CODE (pat) == PARALLEL)
373 int i, len = XVECLEN (pat, 0);
375 for (i = len - 1; i >= 0; i--)
377 code = GET_CODE (XVECEXP (pat, 0, i));
378 switch (code)
380 case CLOBBER:
381 /* Test if it is a 'store'. */
382 tmp_class = may_trap_exp (XEXP (XVECEXP (pat, 0, i), 0), 1);
383 break;
384 case SET:
385 /* Test if it is a store. */
386 tmp_class = may_trap_exp (SET_DEST (XVECEXP (pat, 0, i)), 1);
387 if (tmp_class == TRAP_RISKY)
388 break;
389 /* Test if it is a load. */
390 tmp_class
391 = WORST_CLASS (tmp_class,
392 may_trap_exp (SET_SRC (XVECEXP (pat, 0, i)),
393 0));
394 break;
395 case COND_EXEC:
396 case TRAP_IF:
397 tmp_class = TRAP_RISKY;
398 break;
399 default:
402 insn_class = WORST_CLASS (insn_class, tmp_class);
403 if (insn_class == TRAP_RISKY || insn_class == IRISKY)
404 break;
407 else
409 code = GET_CODE (pat);
410 switch (code)
412 case CLOBBER:
413 /* Test if it is a 'store'. */
414 tmp_class = may_trap_exp (XEXP (pat, 0), 1);
415 break;
416 case SET:
417 /* Test if it is a store. */
418 tmp_class = may_trap_exp (SET_DEST (pat), 1);
419 if (tmp_class == TRAP_RISKY)
420 break;
421 /* Test if it is a load. */
422 tmp_class =
423 WORST_CLASS (tmp_class,
424 may_trap_exp (SET_SRC (pat), 0));
425 break;
426 case COND_EXEC:
427 case TRAP_IF:
428 tmp_class = TRAP_RISKY;
429 break;
430 default:;
432 insn_class = tmp_class;
435 return insn_class;
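/* Editorial examples of the classification above: a volatile load is
   IRISKY; a store through an arbitrary pointer is TRAP_RISKY; a load
   from a single-register-plus-constant address is PFREE_CANDIDATE and
   is later resolved to PFREE or PRISKY by the interblock analysis.  */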
438 /* Forward declarations. */
440 static int priority (rtx);
441 static int rank_for_schedule (const void *, const void *);
442 static void swap_sort (rtx *, int);
443 static void queue_insn (rtx, int);
444 static int schedule_insn (rtx, struct ready_list *, int);
445 static int find_set_reg_weight (rtx);
446 static void find_insn_reg_weight (int);
447 static void adjust_priority (rtx);
448 static void advance_one_cycle (void);
450 /* Notes handling mechanism:
451 =========================
452 Generally, NOTES are saved before scheduling and restored after scheduling.
453 The scheduler distinguishes between three types of notes:
455 (1) LINE_NUMBER notes, generated and used for debugging. Here,
456 before scheduling a region, a pointer to the LINE_NUMBER note is
457 added to the insn following it (in save_line_notes()), and the note
458 is removed (in rm_line_notes() and unlink_line_notes()). After
459 scheduling the region, this pointer is used for regeneration of
460 the LINE_NUMBER note (in restore_line_notes()).
462 (2) LOOP_BEGIN, LOOP_END, SETJMP, EHREGION_BEG, EHREGION_END notes:
463 Before scheduling a region, a pointer to the note is added to the insn
464 that follows or precedes it. (This happens as part of the data dependence
465 computation). After scheduling an insn, the pointer contained in it is
466 used for regenerating the corresponding note (in reemit_notes).
468 (3) All other notes (e.g. INSN_DELETED): Before scheduling a block,
469 these notes are put in a list (in rm_other_notes() and
470 unlink_other_notes ()). After scheduling the block, these notes are
471 inserted at the beginning of the block (in schedule_block()). */
473 static rtx unlink_other_notes (rtx, rtx);
474 static rtx unlink_line_notes (rtx, rtx);
475 static rtx reemit_notes (rtx, rtx);
477 static rtx *ready_lastpos (struct ready_list *);
478 static void ready_sort (struct ready_list *);
479 static rtx ready_remove_first (struct ready_list *);
481 static void queue_to_ready (struct ready_list *);
482 static int early_queue_to_ready (state_t, struct ready_list *);
484 static void debug_ready_list (struct ready_list *);
486 static rtx move_insn1 (rtx, rtx);
487 static rtx move_insn (rtx, rtx);
489 /* The following functions are used to implement multi-pass scheduling
490 on the first cycle. */
491 static rtx ready_element (struct ready_list *, int);
492 static rtx ready_remove (struct ready_list *, int);
493 static int max_issue (struct ready_list *, int *);
495 static rtx choose_ready (struct ready_list *);
497 #endif /* INSN_SCHEDULING */
499 /* Point to state used for the current scheduling pass. */
500 struct sched_info *current_sched_info;
502 #ifndef INSN_SCHEDULING
503 void
504 schedule_insns (FILE *dump_file ATTRIBUTE_UNUSED)
507 #else
509 /* Pointer to the last instruction scheduled. Used by rank_for_schedule,
510 so that insns independent of the last scheduled insn will be preferred
511 over dependent instructions. */
513 static rtx last_scheduled_insn;
515 /* Compute cost of executing INSN given the dependence LINK on the insn USED.
516 This is the number of cycles between instruction issue and
517 instruction results. */
519 HAIFA_INLINE int
520 insn_cost (rtx insn, rtx link, rtx used)
522 int cost = INSN_COST (insn);
524 if (cost < 0)
526 /* A USE insn, or something else we don't need to
527 understand. We can't pass these directly to
528 result_ready_cost or insn_default_latency because it will
529 trigger a fatal error for unrecognizable insns. */
530 if (recog_memoized (insn) < 0)
532 INSN_COST (insn) = 0;
533 return 0;
535 else
537 cost = insn_default_latency (insn);
538 if (cost < 0)
539 cost = 0;
541 INSN_COST (insn) = cost;
545 /* In this case estimate cost without caring how insn is used. */
546 if (link == 0 || used == 0)
547 return cost;
549 /* A USE insn should never require the value used to be computed.
550 This allows the computation of a function's result and parameter
551 values to overlap the return and call. */
552 if (recog_memoized (used) < 0)
553 cost = 0;
554 else
556 if (INSN_CODE (insn) >= 0)
558 if (REG_NOTE_KIND (link) == REG_DEP_ANTI)
559 cost = 0;
560 else if (REG_NOTE_KIND (link) == REG_DEP_OUTPUT)
562 cost = (insn_default_latency (insn)
563 - insn_default_latency (used));
564 if (cost <= 0)
565 cost = 1;
567 else if (bypass_p (insn))
568 cost = insn_latency (insn, used);
571 if (targetm.sched.adjust_cost)
572 cost = targetm.sched.adjust_cost (used, link, insn, cost);
574 if (cost < 0)
575 cost = 0;
578 return cost;
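/* Editorial summary of the cost rules above, before the target's
   adjust_cost hook runs: an anti dependence costs 0; an output
   dependence costs latency (insn) - latency (used), but at least 1;
   a true dependence costs the default latency of INSN, possibly
   rerouted through a bypass when one exists.  */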
581 /* Compute the priority number for INSN. */
583 static int
584 priority (rtx insn)
586 rtx link;
588 if (! INSN_P (insn))
589 return 0;
591 if (! INSN_PRIORITY_KNOWN (insn))
593 int this_priority = 0;
595 if (INSN_DEPEND (insn) == 0)
596 this_priority = insn_cost (insn, 0, 0);
597 else
599 for (link = INSN_DEPEND (insn); link; link = XEXP (link, 1))
601 rtx next;
602 int next_priority;
604 next = XEXP (link, 0);
606 /* Critical path is meaningful in block boundaries only. */
607 if (! (*current_sched_info->contributes_to_priority) (next, insn))
608 continue;
610 next_priority = insn_cost (insn, link, next) + priority (next);
611 if (next_priority > this_priority)
612 this_priority = next_priority;
615 INSN_PRIORITY (insn) = this_priority;
616 INSN_PRIORITY_KNOWN (insn) = 1;
619 return INSN_PRIORITY (insn);
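/* Editorial worked example: for a dependence chain A -> B -> C where
   each edge costs 2 cycles, priority (C) is insn_cost (C, 0, 0),
   priority (B) is 2 + priority (C), and priority (A) is
   2 + priority (B): the length of the longest data-dependence path
   from the insn to the end of the block.  */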
622 /* Macros and functions for keeping the priority queue sorted, and
623 dealing with queuing and dequeuing of instructions. */
625 #define SCHED_SORT(READY, N_READY) \
626 do { if ((N_READY) == 2) \
627 swap_sort (READY, N_READY); \
628 else if ((N_READY) > 2) \
629 qsort (READY, N_READY, sizeof (rtx), rank_for_schedule); } \
630 while (0)
632 /* Returns a positive value if x is preferred; returns a negative value if
633 y is preferred. Should never return 0, since that will make the sort
634 unstable. */
636 static int
637 rank_for_schedule (const void *x, const void *y)
639 rtx tmp = *(const rtx *) y;
640 rtx tmp2 = *(const rtx *) x;
641 rtx link;
642 int tmp_class, tmp2_class, depend_count1, depend_count2;
643 int val, priority_val, weight_val, info_val;
645 /* The insn in a schedule group should be issued first. */
646 if (SCHED_GROUP_P (tmp) != SCHED_GROUP_P (tmp2))
647 return SCHED_GROUP_P (tmp2) ? 1 : -1;
649 /* Prefer insn with higher priority. */
650 priority_val = INSN_PRIORITY (tmp2) - INSN_PRIORITY (tmp);
652 if (priority_val)
653 return priority_val;
655 /* Prefer an insn with smaller contribution to register pressure. */
656 if (!reload_completed &&
657 (weight_val = INSN_REG_WEIGHT (tmp) - INSN_REG_WEIGHT (tmp2)))
658 return weight_val;
660 info_val = (*current_sched_info->rank) (tmp, tmp2);
661 if (info_val)
662 return info_val;
664 /* Compare insns based on their relation to the last-scheduled-insn. */
665 if (last_scheduled_insn)
667 /* Classify the instructions into three classes:
668 1) Data dependent on last scheduled insn.
669 2) Anti/Output dependent on last scheduled insn.
670 3) Independent of last scheduled insn, or has latency of one.
671 Choose the insn from the highest numbered class if different. */
672 link = find_insn_list (tmp, INSN_DEPEND (last_scheduled_insn));
673 if (link == 0 || insn_cost (last_scheduled_insn, link, tmp) == 1)
674 tmp_class = 3;
675 else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
676 tmp_class = 1;
677 else
678 tmp_class = 2;
680 link = find_insn_list (tmp2, INSN_DEPEND (last_scheduled_insn));
681 if (link == 0 || insn_cost (last_scheduled_insn, link, tmp2) == 1)
682 tmp2_class = 3;
683 else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
684 tmp2_class = 1;
685 else
686 tmp2_class = 2;
688 if ((val = tmp2_class - tmp_class))
689 return val;
692 /* Prefer the insn which has more later insns that depend on it.
693 This gives the scheduler more freedom when scheduling later
694 instructions at the expense of added register pressure. */
695 depend_count1 = 0;
696 for (link = INSN_DEPEND (tmp); link; link = XEXP (link, 1))
697 depend_count1++;
699 depend_count2 = 0;
700 for (link = INSN_DEPEND (tmp2); link; link = XEXP (link, 1))
701 depend_count2++;
703 val = depend_count2 - depend_count1;
704 if (val)
705 return val;
707 /* If insns are equally good, sort by INSN_LUID (original insn order),
708 so that we make the sort stable. This minimizes instruction movement,
709 thus minimizing sched's effect on debugging and cross-jumping. */
710 return INSN_LUID (tmp) - INSN_LUID (tmp2);
713 /* Resort the array A in which only element at index N may be out of order. */
715 HAIFA_INLINE static void
716 swap_sort (rtx *a, int n)
718 rtx insn = a[n - 1];
719 int i = n - 2;
721 while (i >= 0 && rank_for_schedule (a + i, &insn) >= 0)
723 a[i + 1] = a[i];
724 i -= 1;
726 a[i + 1] = insn;
729 /* Add INSN to the insn queue so that it can be executed at least
730 N_CYCLES after the currently executing insn. Preserve insns
731 chain for debugging purposes. */
733 HAIFA_INLINE static void
734 queue_insn (rtx insn, int n_cycles)
736 int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
737 rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]);
738 insn_queue[next_q] = link;
739 q_size += 1;
741 if (sched_verbose >= 2)
743 fprintf (sched_dump, ";;\t\tReady-->Q: insn %s: ",
744 (*current_sched_info->print_insn) (insn, 0));
746 fprintf (sched_dump, "queued for %d cycles.\n", n_cycles);
750 /* Return a pointer to the bottom of the ready list, i.e. the insn
751 with the lowest priority. */
753 HAIFA_INLINE static rtx *
754 ready_lastpos (struct ready_list *ready)
756 if (ready->n_ready == 0)
757 abort ();
758 return ready->vec + ready->first - ready->n_ready + 1;
761 /* Add an element INSN to the ready list so that it ends up with the lowest
762 priority. */
764 HAIFA_INLINE void
765 ready_add (struct ready_list *ready, rtx insn)
767 if (ready->first == ready->n_ready)
769 memmove (ready->vec + ready->veclen - ready->n_ready,
770 ready_lastpos (ready),
771 ready->n_ready * sizeof (rtx));
772 ready->first = ready->veclen - 1;
774 ready->vec[ready->first - ready->n_ready] = insn;
775 ready->n_ready++;
778 /* Remove the element with the highest priority from the ready list and
779 return it. */
781 HAIFA_INLINE static rtx
782 ready_remove_first (struct ready_list *ready)
784 rtx t;
785 if (ready->n_ready == 0)
786 abort ();
787 t = ready->vec[ready->first--];
788 ready->n_ready--;
789 /* If the queue becomes empty, reset it. */
790 if (ready->n_ready == 0)
791 ready->first = ready->veclen - 1;
792 return t;
795 /* The following code implements multi-pass scheduling for the first
796 cycle. In other words, we try to choose the ready insn which
797 permits starting the maximum number of insns on the same cycle. */
799 /* Return a pointer to the element INDEX from the ready list. INDEX for
800 insn with the highest priority is 0, and the lowest priority has
801 N_READY - 1. */
803 HAIFA_INLINE static rtx
804 ready_element (struct ready_list *ready, int index)
806 #ifdef ENABLE_CHECKING
807 if (ready->n_ready == 0 || index >= ready->n_ready)
808 abort ();
809 #endif
810 return ready->vec[ready->first - index];
813 /* Remove the element INDEX from the ready list and return it. INDEX
814 for insn with the highest priority is 0, and the lowest priority
815 has N_READY - 1. */
817 HAIFA_INLINE static rtx
818 ready_remove (struct ready_list *ready, int index)
820 rtx t;
821 int i;
823 if (index == 0)
824 return ready_remove_first (ready);
825 if (ready->n_ready == 0 || index >= ready->n_ready)
826 abort ();
827 t = ready->vec[ready->first - index];
828 ready->n_ready--;
829 for (i = index; i < ready->n_ready; i++)
830 ready->vec[ready->first - i] = ready->vec[ready->first - i - 1];
831 return t;
835 /* Sort the ready list READY by ascending priority, using the SCHED_SORT
836 macro. */
838 HAIFA_INLINE static void
839 ready_sort (struct ready_list *ready)
841 rtx *first = ready_lastpos (ready);
842 SCHED_SORT (first, ready->n_ready);
845 /* PREV is an insn that is ready to execute. Adjust its priority if that
846 will help shorten or lengthen register lifetimes as appropriate. Also
847 provide a hook for the target to tweak itself. */
849 HAIFA_INLINE static void
850 adjust_priority (rtx prev)
852 /* ??? There used to be code here to try and estimate how an insn
853 affected register lifetimes, but it did it by looking at REG_DEAD
854 notes, which we removed in schedule_region. Nor did it try to
855 take into account register pressure or anything useful like that.
857 Revisit when we have a machine model to work with and not before. */
859 if (targetm.sched.adjust_priority)
860 INSN_PRIORITY (prev) =
861 targetm.sched.adjust_priority (prev, INSN_PRIORITY (prev));
864 /* Advance time by one cycle. */
865 HAIFA_INLINE static void
866 advance_one_cycle (void)
868 if (targetm.sched.dfa_pre_cycle_insn)
869 state_transition (curr_state,
870 targetm.sched.dfa_pre_cycle_insn ());
872 state_transition (curr_state, NULL);
874 if (targetm.sched.dfa_post_cycle_insn)
875 state_transition (curr_state,
876 targetm.sched.dfa_post_cycle_insn ());
879 /* Clock at which the previous instruction was issued. */
880 static int last_clock_var;
882 /* INSN is the "currently executing insn". Launch each insn which was
883 waiting on INSN. READY is the ready list which contains the insns
884 that are ready to fire. CLOCK is the current cycle. The function
885 returns necessary cycle advance after issuing the insn (it is not
886 zero for insns in a schedule group). */
888 static int
889 schedule_insn (rtx insn, struct ready_list *ready, int clock)
891 rtx link;
892 int advance = 0;
893 int premature_issue = 0;
895 if (sched_verbose >= 1)
897 char buf[2048];
899 print_insn (buf, insn, 0);
900 buf[40] = 0;
901 fprintf (sched_dump, ";;\t%3i--> %-40s:", clock, buf);
903 if (recog_memoized (insn) < 0)
904 fprintf (sched_dump, "nothing");
905 else
906 print_reservation (sched_dump, insn);
907 fputc ('\n', sched_dump);
910 if (INSN_TICK (insn) > clock)
912 /* 'insn' has been prematurely moved from the queue to the
913 ready list. */
914 premature_issue = INSN_TICK (insn) - clock;
917 for (link = INSN_DEPEND (insn); link != 0; link = XEXP (link, 1))
919 rtx next = XEXP (link, 0);
920 int cost = insn_cost (insn, link, next);
922 INSN_TICK (next) = MAX (INSN_TICK (next), clock + cost + premature_issue);
924 if ((INSN_DEP_COUNT (next) -= 1) == 0)
926 int effective_cost = INSN_TICK (next) - clock;
928 if (! (*current_sched_info->new_ready) (next))
929 continue;
931 if (sched_verbose >= 2)
933 fprintf (sched_dump, ";;\t\tdependences resolved: insn %s ",
934 (*current_sched_info->print_insn) (next, 0));
936 if (effective_cost < 1)
937 fprintf (sched_dump, "into ready\n");
938 else
939 fprintf (sched_dump, "into queue with cost=%d\n",
940 effective_cost);
943 /* Adjust the priority of NEXT and either put it on the ready
944 list or queue it. */
945 adjust_priority (next);
946 if (effective_cost < 1)
947 ready_add (ready, next);
948 else
950 queue_insn (next, effective_cost);
952 if (SCHED_GROUP_P (next) && advance < effective_cost)
953 advance = effective_cost;
958 /* Annotate the instruction with issue information -- TImode
959 indicates that the instruction is expected not to be able
960 to issue on the same cycle as the previous insn. A machine
961 may use this information to decide how the instruction should
962 be aligned. */
963 if (issue_rate > 1
964 && GET_CODE (PATTERN (insn)) != USE
965 && GET_CODE (PATTERN (insn)) != CLOBBER)
967 if (reload_completed)
968 PUT_MODE (insn, clock > last_clock_var ? TImode : VOIDmode);
969 last_clock_var = clock;
971 return advance;
974 /* Functions for handling of notes. */
976 /* Delete notes beginning with INSN and put them in the chain
977 of notes ended by NOTE_LIST.
978 Returns the insn following the notes. */
980 static rtx
981 unlink_other_notes (rtx insn, rtx tail)
983 rtx prev = PREV_INSN (insn);
985 while (insn != tail && NOTE_P (insn))
987 rtx next = NEXT_INSN (insn);
988 /* Delete the note from its current position. */
989 if (prev)
990 NEXT_INSN (prev) = next;
991 if (next)
992 PREV_INSN (next) = prev;
994 /* See sched_analyze to see how these are handled. */
995 if (NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_BEG
996 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_END
997 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_BASIC_BLOCK
998 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_BEG
999 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_END)
1001 /* Insert the note at the end of the notes list. */
1002 PREV_INSN (insn) = note_list;
1003 if (note_list)
1004 NEXT_INSN (note_list) = insn;
1005 note_list = insn;
1008 insn = next;
1010 return insn;
1013 /* Delete line notes beginning with INSN. Record line-number notes so
1014 they can be reused. Returns the insn following the notes. */
1016 static rtx
1017 unlink_line_notes (rtx insn, rtx tail)
1019 rtx prev = PREV_INSN (insn);
1021 while (insn != tail && NOTE_P (insn))
1023 rtx next = NEXT_INSN (insn);
1025 if (write_symbols != NO_DEBUG && NOTE_LINE_NUMBER (insn) > 0)
1027 /* Delete the note from its current position. */
1028 if (prev)
1029 NEXT_INSN (prev) = next;
1030 if (next)
1031 PREV_INSN (next) = prev;
1033 /* Record line-number notes so they can be reused. */
1034 LINE_NOTE (insn) = insn;
1036 else
1037 prev = insn;
1039 insn = next;
1041 return insn;
1044 /* Return the head and tail pointers of BB. */
1046 void
1047 get_block_head_tail (int b, rtx *headp, rtx *tailp)
1049 /* HEAD and TAIL delimit the basic block being scheduled. */
1050 rtx head = BB_HEAD (BASIC_BLOCK (b));
1051 rtx tail = BB_END (BASIC_BLOCK (b));
1053 /* Don't include any notes or labels at the beginning of the
1054 basic block, or notes at the ends of basic blocks. */
1055 while (head != tail)
1057 if (NOTE_P (head))
1058 head = NEXT_INSN (head);
1059 else if (NOTE_P (tail))
1060 tail = PREV_INSN (tail);
1061 else if (LABEL_P (head))
1062 head = NEXT_INSN (head);
1063 else
1064 break;
1067 *headp = head;
1068 *tailp = tail;
1071 /* Return nonzero if there are no real insns in the range [ HEAD, TAIL ]. */
1073 int
1074 no_real_insns_p (rtx head, rtx tail)
1076 while (head != NEXT_INSN (tail))
1078 if (!NOTE_P (head) && !LABEL_P (head))
1079 return 0;
1080 head = NEXT_INSN (head);
1082 return 1;
1085 /* Delete line notes from one block. Save them so they can be later restored
1086 (in restore_line_notes). HEAD and TAIL are the boundaries of the
1087 block in which notes should be processed. */
1089 void
1090 rm_line_notes (rtx head, rtx tail)
1092 rtx next_tail;
1093 rtx insn;
1095 next_tail = NEXT_INSN (tail);
1096 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
1098 rtx prev;
1100 /* Farm out notes, and maybe save them in NOTE_LIST.
1101 This is needed to keep the debugger from
1102 getting completely deranged. */
1103 if (NOTE_P (insn))
1105 prev = insn;
1106 insn = unlink_line_notes (insn, next_tail);
1108 if (prev == tail)
1109 abort ();
1110 if (prev == head)
1111 abort ();
1112 if (insn == next_tail)
1113 abort ();
1118 /* Save line number notes for each insn in block B. HEAD and TAIL are
1119 the boundaries of the block in which notes should be processed. */
1121 void
1122 save_line_notes (int b, rtx head, rtx tail)
1124 rtx next_tail;
1126 /* We must use the true line number for the first insn in the block
1127 that was computed and saved at the start of this pass. We can't
1128 use the current line number, because scheduling of the previous
1129 block may have changed the current line number. */
1131 rtx line = line_note_head[b];
1132 rtx insn;
1134 next_tail = NEXT_INSN (tail);
1136 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
1137 if (NOTE_P (insn) && NOTE_LINE_NUMBER (insn) > 0)
1138 line = insn;
1139 else
1140 LINE_NOTE (insn) = line;
1143 /* After a block was scheduled, insert line notes into the insns list.
1144 HEAD and TAIL are the boundaries of the block in which notes should
1145 be processed. */
1147 void
1148 restore_line_notes (rtx head, rtx tail)
1150 rtx line, note, prev, new;
1151 int added_notes = 0;
1152 rtx next_tail, insn;
1154 head = head;
1155 next_tail = NEXT_INSN (tail);
1157 /* Determine the current line-number. We want to know the current
1158 line number of the first insn of the block here, in case it is
1159 different from the true line number that was saved earlier. If
1160 different, then we need a line number note before the first insn
1161 of this block. If it happens to be the same, then we don't want to
1162 emit another line number note here. */
1163 for (line = head; line; line = PREV_INSN (line))
1164 if (NOTE_P (line) && NOTE_LINE_NUMBER (line) > 0)
1165 break;
1167 /* Walk the insns keeping track of the current line-number and inserting
1168 the line-number notes as needed. */
1169 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
1170 if (NOTE_P (insn) && NOTE_LINE_NUMBER (insn) > 0)
1171 line = insn;
1172 /* This used to emit line number notes before every non-deleted note.
1173 However, this confuses a debugger, because line notes not separated
1174 by real instructions all end up at the same address. I can find no
1175 use for line number notes before other notes, so none are emitted. */
1176 else if (!NOTE_P (insn)
1177 && INSN_UID (insn) < old_max_uid
1178 && (note = LINE_NOTE (insn)) != 0
1179 && note != line
1180 && (line == 0
1181 #ifdef USE_MAPPED_LOCATION
1182 || NOTE_SOURCE_LOCATION (note) != NOTE_SOURCE_LOCATION (line)
1183 #else
1184 || NOTE_LINE_NUMBER (note) != NOTE_LINE_NUMBER (line)
1185 || NOTE_SOURCE_FILE (note) != NOTE_SOURCE_FILE (line)
1186 #endif
1189 line = note;
1190 prev = PREV_INSN (insn);
1191 if (LINE_NOTE (note))
1193 /* Re-use the original line-number note. */
1194 LINE_NOTE (note) = 0;
1195 PREV_INSN (note) = prev;
1196 NEXT_INSN (prev) = note;
1197 PREV_INSN (insn) = note;
1198 NEXT_INSN (note) = insn;
1200 else
1202 added_notes++;
1203 new = emit_note_after (NOTE_LINE_NUMBER (note), prev);
1204 #ifndef USE_MAPPED_LOCATION
1205 NOTE_SOURCE_FILE (new) = NOTE_SOURCE_FILE (note);
1206 #endif
1209 if (sched_verbose && added_notes)
1210 fprintf (sched_dump, ";; added %d line-number notes\n", added_notes);
1213 /* After scheduling the function, delete redundant line notes from the
1214 insns list. */
1216 void
1217 rm_redundant_line_notes (void)
1219 rtx line = 0;
1220 rtx insn = get_insns ();
1221 int active_insn = 0;
1222 int notes = 0;
1224 /* Walk the insns deleting redundant line-number notes. Many of these
1225 are already present. The remainder tend to occur at basic
1226 block boundaries. */
1227 for (insn = get_last_insn (); insn; insn = PREV_INSN (insn))
1228 if (NOTE_P (insn) && NOTE_LINE_NUMBER (insn) > 0)
1230 /* If there are no active insns following, INSN is redundant. */
1231 if (active_insn == 0)
1233 notes++;
1234 SET_INSN_DELETED (insn);
1236 /* If the line number is unchanged, LINE is redundant. */
1237 else if (line
1238 #ifdef USE_MAPPED_LOCATION
1239 && NOTE_SOURCE_LOCATION (line) == NOTE_SOURCE_LOCATION (insn)
1240 #else
1241 && NOTE_LINE_NUMBER (line) == NOTE_LINE_NUMBER (insn)
1242 && NOTE_SOURCE_FILE (line) == NOTE_SOURCE_FILE (insn)
1243 #endif
1246 notes++;
1247 SET_INSN_DELETED (line);
1248 line = insn;
1250 else
1251 line = insn;
1252 active_insn = 0;
1254 else if (!((NOTE_P (insn)
1255 && NOTE_LINE_NUMBER (insn) == NOTE_INSN_DELETED)
1256 || (NONJUMP_INSN_P (insn)
1257 && (GET_CODE (PATTERN (insn)) == USE
1258 || GET_CODE (PATTERN (insn)) == CLOBBER))))
1259 active_insn++;
1261 if (sched_verbose && notes)
1262 fprintf (sched_dump, ";; deleted %d line-number notes\n", notes);
1265 /* Delete notes between HEAD and TAIL and put them in the chain
1266 of notes ended by NOTE_LIST. */
1268 void
1269 rm_other_notes (rtx head, rtx tail)
1271 rtx next_tail;
1272 rtx insn;
1274 note_list = 0;
1275 if (head == tail && (! INSN_P (head)))
1276 return;
1278 next_tail = NEXT_INSN (tail);
1279 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
1281 rtx prev;
1283 /* Farm out notes, and maybe save them in NOTE_LIST.
1284 This is needed to keep the debugger from
1285 getting completely deranged. */
1286 if (NOTE_P (insn))
1288 prev = insn;
1290 insn = unlink_other_notes (insn, next_tail);
1292 if (prev == tail)
1293 abort ();
1294 if (prev == head)
1295 abort ();
1296 if (insn == next_tail)
1297 abort ();
1302 /* Functions for computation of registers live/usage info. */
1304 /* This function looks for a new register being defined.
1305 If the destination register is already used by the source,
1306 a new register is not needed. */
1308 static int
1309 find_set_reg_weight (rtx x)
1311 if (GET_CODE (x) == CLOBBER
1312 && register_operand (SET_DEST (x), VOIDmode))
1313 return 1;
1314 if (GET_CODE (x) == SET
1315 && register_operand (SET_DEST (x), VOIDmode))
1317 if (REG_P (SET_DEST (x)))
1319 if (!reg_mentioned_p (SET_DEST (x), SET_SRC (x)))
1320 return 1;
1321 else
1322 return 0;
1324 return 1;
1326 return 0;
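/* Editorial examples: a SET such as r1 = r2 + r3 defines a new value
   and contributes +1; r1 = r1 + 1 mentions its destination in the
   source and contributes 0; each REG_DEAD or REG_UNUSED note later
   subtracts 1 in find_insn_reg_weight () below.  */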
1329 /* Calculate INSN_REG_WEIGHT for all insns of a block. */
1331 static void
1332 find_insn_reg_weight (int b)
1334 rtx insn, next_tail, head, tail;
1336 get_block_head_tail (b, &head, &tail);
1337 next_tail = NEXT_INSN (tail);
1339 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
1341 int reg_weight = 0;
1342 rtx x;
1344 /* Handle register life information. */
1345 if (! INSN_P (insn))
1346 continue;
1348 /* Increment weight for each register born here. */
1349 x = PATTERN (insn);
1350 reg_weight += find_set_reg_weight (x);
1351 if (GET_CODE (x) == PARALLEL)
1353 int j;
1354 for (j = XVECLEN (x, 0) - 1; j >= 0; j--)
1356 x = XVECEXP (PATTERN (insn), 0, j);
1357 reg_weight += find_set_reg_weight (x);
1360 /* Decrement weight for each register that dies here. */
1361 for (x = REG_NOTES (insn); x; x = XEXP (x, 1))
1363 if (REG_NOTE_KIND (x) == REG_DEAD
1364 || REG_NOTE_KIND (x) == REG_UNUSED)
1365 reg_weight--;
1368 INSN_REG_WEIGHT (insn) = reg_weight;
1372 /* Scheduling clock, modified in schedule_block() and queue_to_ready (). */
1373 static int clock_var;
1375 /* Move insns that became ready to fire from queue to ready list. */
1377 static void
1378 queue_to_ready (struct ready_list *ready)
1380 rtx insn;
1381 rtx link;
1383 q_ptr = NEXT_Q (q_ptr);
1385 /* Add all pending insns that can be scheduled without stalls to the
1386 ready list. */
1387 for (link = insn_queue[q_ptr]; link; link = XEXP (link, 1))
1389 insn = XEXP (link, 0);
1390 q_size -= 1;
1392 if (sched_verbose >= 2)
1393 fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
1394 (*current_sched_info->print_insn) (insn, 0));
1396 ready_add (ready, insn);
1397 if (sched_verbose >= 2)
1398 fprintf (sched_dump, "moving to ready without stalls\n");
1400 insn_queue[q_ptr] = 0;
1402 /* If there are no ready insns, stall until one is ready and add all
1403 of the pending insns at that point to the ready list. */
1404 if (ready->n_ready == 0)
1406 int stalls;
1408 for (stalls = 1; stalls <= max_insn_queue_index; stalls++)
1410 if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
1412 for (; link; link = XEXP (link, 1))
1414 insn = XEXP (link, 0);
1415 q_size -= 1;
1417 if (sched_verbose >= 2)
1418 fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
1419 (*current_sched_info->print_insn) (insn, 0));
1421 ready_add (ready, insn);
1422 if (sched_verbose >= 2)
1423 fprintf (sched_dump, "moving to ready with %d stalls\n", stalls);
1425 insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = 0;
1427 advance_one_cycle ();
1429 break;
1432 advance_one_cycle ();
1435 q_ptr = NEXT_Q_AFTER (q_ptr, stalls);
1436 clock_var += stalls;
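/* Editorial note on the loop above: when the ready list is empty, the
   scan probes insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] for increasing
   STALLS, advancing the DFA one cycle per empty slot, and finally
   bumps clock_var by the number of stall cycles consumed.  */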
1440 /* Used by early_queue_to_ready. Determines whether it is "ok" to
1441 prematurely move INSN from the queue to the ready list. Currently,
1442 if a target defines the hook 'is_costly_dependence', this function
1443 uses the hook to check whether there exist any dependences which are
1444 considered costly by the target, between INSN and other insns that
1445 have already been scheduled. Dependences are checked up to Y cycles
1446 back, with default Y=1; the flag -fsched-stalled-insns-dep=Y allows
1447 controlling this value.
1448 (Other considerations could be taken into account instead (or in
1449 addition) depending on user flags and target hooks.) */
1451 static bool
1452 ok_for_early_queue_removal (rtx insn)
1454 int n_cycles;
1455 rtx prev_insn = last_scheduled_insn;
1457 if (targetm.sched.is_costly_dependence)
1459 for (n_cycles = flag_sched_stalled_insns_dep; n_cycles; n_cycles--)
1461 for ( ; prev_insn; prev_insn = PREV_INSN (prev_insn))
1463 rtx dep_link = 0;
1464 int dep_cost;
1466 if (!NOTE_P (prev_insn))
1468 dep_link = find_insn_list (insn, INSN_DEPEND (prev_insn));
1469 if (dep_link)
1471 dep_cost = insn_cost (prev_insn, dep_link, insn) ;
1472 if (targetm.sched.is_costly_dependence (prev_insn, insn,
1473 dep_link, dep_cost,
1474 flag_sched_stalled_insns_dep - n_cycles))
1475 return false;
1479 if (GET_MODE (prev_insn) == TImode) /* end of dispatch group */
1480 break;
1483 if (!prev_insn)
1484 break;
1485 prev_insn = PREV_INSN (prev_insn);
1489 return true;
1493 /* Remove insns from the queue, before they become "ready" with respect
1494 to FU latency considerations. */
1496 static int
1497 early_queue_to_ready (state_t state, struct ready_list *ready)
1499 rtx insn;
1500 rtx link;
1501 rtx next_link;
1502 rtx prev_link;
1503 bool move_to_ready;
1504 int cost;
1505 state_t temp_state = alloca (dfa_state_size);
1506 int stalls;
1507 int insns_removed = 0;
1510 /* Flag '-fsched-stalled-insns=X' determines the aggressiveness of this
1511 function:
1513 X == 0: There is no limit on how many queued insns can be removed
1514 prematurely. (flag_sched_stalled_insns = -1).
1516 X >= 1: Only X queued insns can be removed prematurely in each
1517 invocation. (flag_sched_stalled_insns = X).
1519 Otherwise: Early queue removal is disabled.
1520 (flag_sched_stalled_insns = 0). */
1523 if (! flag_sched_stalled_insns)
1524 return 0;
1526 for (stalls = 0; stalls <= max_insn_queue_index; stalls++)
1528 if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
1530 if (sched_verbose > 6)
1531 fprintf (sched_dump, ";; look at index %d + %d\n", q_ptr, stalls);
1533 prev_link = 0;
1534 while (link)
1536 next_link = XEXP (link, 1);
1537 insn = XEXP (link, 0);
1538 if (insn && sched_verbose > 6)
1539 print_rtl_single (sched_dump, insn);
1541 memcpy (temp_state, state, dfa_state_size);
1542 if (recog_memoized (insn) < 0)
1543 /* Set a non-negative cost to indicate that the insn is not ready
1544 yet; this avoids an infinite Q->R->Q->R... cycle. */
1545 cost = 0;
1546 else
1547 cost = state_transition (temp_state, insn);
1549 if (sched_verbose >= 6)
1550 fprintf (sched_dump, "transition cost = %d\n", cost);
1552 move_to_ready = false;
1553 if (cost < 0)
1555 move_to_ready = ok_for_early_queue_removal (insn);
1556 if (move_to_ready == true)
1558 /* move from Q to R */
1559 q_size -= 1;
1560 ready_add (ready, insn);
1562 if (prev_link)
1563 XEXP (prev_link, 1) = next_link;
1564 else
1565 insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = next_link;
1567 free_INSN_LIST_node (link);
1569 if (sched_verbose >= 2)
1570 fprintf (sched_dump, ";;\t\tEarly Q-->Ready: insn %s\n",
1571 (*current_sched_info->print_insn) (insn, 0));
1573 insns_removed++;
1574 if (insns_removed == flag_sched_stalled_insns)
1575 /* Remove only one insn from Q at a time. */
1576 return insns_removed;
1580 if (move_to_ready == false)
1581 prev_link = link;
1583 link = next_link;
1584 } /* while link */
1585 } /* if link */
1587 } /* for stalls.. */
1589 return insns_removed;
1593 /* Print the ready list for debugging purposes. Callable from debugger. */
1595 static void
1596 debug_ready_list (struct ready_list *ready)
1598 rtx *p;
1599 int i;
1601 if (ready->n_ready == 0)
1603 fprintf (sched_dump, "\n");
1604 return;
1607 p = ready_lastpos (ready);
1608 for (i = 0; i < ready->n_ready; i++)
1609 fprintf (sched_dump, " %s", (*current_sched_info->print_insn) (p[i], 0));
1610 fprintf (sched_dump, "\n");
1613 /* move_insn1: Remove INSN from insn chain, and link it after LAST insn. */
1615 static rtx
1616 move_insn1 (rtx insn, rtx last)
1618 NEXT_INSN (PREV_INSN (insn)) = NEXT_INSN (insn);
1619 PREV_INSN (NEXT_INSN (insn)) = PREV_INSN (insn);
1621 NEXT_INSN (insn) = NEXT_INSN (last);
1622 PREV_INSN (NEXT_INSN (last)) = insn;
1624 NEXT_INSN (last) = insn;
1625 PREV_INSN (insn) = last;
1627 return insn;
1630 /* Search INSN for REG_SAVE_NOTE note pairs for
1631 NOTE_INSN_{LOOP,EHREGION}_{BEG,END}; and convert them back into
1632 NOTEs. The REG_SAVE_NOTE note following the first one contains the
1633 saved value for NOTE_BLOCK_NUMBER which is useful for
1634 NOTE_INSN_EH_REGION_{BEG,END} NOTEs. LAST is the last instruction
1635 output by the instruction scheduler. Return the new value of LAST. */
1637 static rtx
1638 reemit_notes (rtx insn, rtx last)
1640 rtx note, retval;
1642 retval = last;
1643 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
1645 if (REG_NOTE_KIND (note) == REG_SAVE_NOTE)
1647 enum insn_note note_type = INTVAL (XEXP (note, 0));
1649 last = emit_note_before (note_type, last);
1650 remove_note (insn, note);
1653 return retval;
1656 /* Move INSN. Reemit notes if needed.
1658 Return the last insn emitted by the scheduler, which is the
1659 return value from the first call to reemit_notes. */
1661 static rtx
1662 move_insn (rtx insn, rtx last)
1664 rtx retval = NULL;
1666 move_insn1 (insn, last);
1668 /* If this is the first call to reemit_notes, then record
1669 its return value. */
1670 if (retval == NULL_RTX)
1671 retval = reemit_notes (insn, insn);
1672 else
1673 reemit_notes (insn, insn);
1675 SCHED_GROUP_P (insn) = 0;
1677 return retval;
1680 /* The following structure describes an entry of the stack of choices. */
1681 struct choice_entry
1683 /* Ordinal number of the issued insn in the ready queue. */
1684 int index;
1685 /* The number of remaining insns whose issue we should still try. */
1686 int rest;
1687 /* The number of issued essential insns. */
1688 int n;
1689 /* State after issuing the insn. */
1690 state_t state;
1693 /* The following array is used to implement a stack of choices used in
1694 function max_issue. */
1695 static struct choice_entry *choice_stack;
1697 /* The following variable is the number of essential insns issued on
1698 the current cycle. An insn is essential if it changes the
1699 processor's state. */
1700 static int cycle_issued_insns;
1702 /* The following variable is the maximal number of tries of issuing
1703 insns for the first cycle multipass insn scheduling. We define
1704 this value as constant*(DFA_LOOKAHEAD**ISSUE_RATE). We would not
1705 need this constraint if all real insns (with non-negative codes)
1706 had reservations because in this case the algorithm complexity is
1707 O(DFA_LOOKAHEAD**ISSUE_RATE). Unfortunately, the dfa descriptions
1708 might be incomplete and such an insn might occur. For such
1709 descriptions, the complexity of the algorithm (without the constraint)
1710 could reach DFA_LOOKAHEAD ** N, where N is the queue length. */
1711 static int max_lookahead_tries;
1713 /* The following is the value of the hook
1714 `first_cycle_multipass_dfa_lookahead' at the last call of
1715 `max_issue'. */
1716 static int cached_first_cycle_multipass_dfa_lookahead = 0;
1718 /* The following is the value of `issue_rate' at the last call of
1719 `sched_init'. */
1720 static int cached_issue_rate = 0;
1722 /* The following function returns the maximal (or close to maximal)
1723 number of insns which can be issued on the same cycle, one of which
1724 is the insn with the best rank (the first insn in READY). To find
1725 this, the function tries different samples of ready insns. READY
1726 is the current queue `ready'. The global array READY_TRY reflects
1727 which insns are already issued in this try. INDEX will contain the
1728 index of the best insn in READY. This function is used only for
1729 first cycle multipass scheduling. */
1730 static int
1731 max_issue (struct ready_list *ready, int *index)
1733 int n, i, all, n_ready, best, delay, tries_num;
1734 struct choice_entry *top;
1735 rtx insn;
1737 best = 0;
1738 memcpy (choice_stack->state, curr_state, dfa_state_size);
1739 top = choice_stack;
1740 top->rest = cached_first_cycle_multipass_dfa_lookahead;
1741 top->n = 0;
1742 n_ready = ready->n_ready;
1743 for (all = i = 0; i < n_ready; i++)
1744 if (!ready_try [i])
1745 all++;
1746 i = 0;
1747 tries_num = 0;
1748 for (;;)
1750 if (top->rest == 0 || i >= n_ready)
1752 if (top == choice_stack)
1753 break;
1754 if (best < top - choice_stack && ready_try [0])
1756 best = top - choice_stack;
1757 *index = choice_stack [1].index;
1758 if (top->n == issue_rate - cycle_issued_insns || best == all)
1759 break;
1761 i = top->index;
1762 ready_try [i] = 0;
1763 top--;
1764 memcpy (curr_state, top->state, dfa_state_size);
1766 else if (!ready_try [i])
1768 tries_num++;
1769 if (tries_num > max_lookahead_tries)
1770 break;
1771 insn = ready_element (ready, i);
1772 delay = state_transition (curr_state, insn);
1773 if (delay < 0)
1775 if (state_dead_lock_p (curr_state))
1776 top->rest = 0;
1777 else
1778 top->rest--;
1779 n = top->n;
1780 if (memcmp (top->state, curr_state, dfa_state_size) != 0)
1781 n++;
1782 top++;
1783 top->rest = cached_first_cycle_multipass_dfa_lookahead;
1784 top->index = i;
1785 top->n = n;
1786 memcpy (top->state, curr_state, dfa_state_size);
1787 ready_try [i] = 1;
1788 i = -1;
1791 i++;
1793 while (top != choice_stack)
1795 ready_try [top->index] = 0;
1796 top--;
1798 memcpy (curr_state, choice_stack->state, dfa_state_size);
1799 return best;
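/* Editorial usage sketch: with the candidate slots of ready_try
   cleared, a caller such as choose_ready () below proceeds roughly as
   follows.  */
#if 0
  int index, n;
  n = max_issue (&ready, &index);     /* insns issuable this cycle */
  if (n > 0)
    insn = ready_remove (&ready, index);
  else
    insn = ready_remove_first (&ready);
#endif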
1802 /* The following function chooses an insn from READY, removes it from
1803 the list, and returns it. This function is used only for first
1804 cycle multipass scheduling. */
1806 static rtx
1807 choose_ready (struct ready_list *ready)
1809 int lookahead = 0;
1811 if (targetm.sched.first_cycle_multipass_dfa_lookahead)
1812 lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
1813 if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0)))
1814 return ready_remove_first (ready);
1815 else
1817 /* Try to choose the better insn. */
1818 int index = 0, i;
1819 rtx insn;
1821 if (cached_first_cycle_multipass_dfa_lookahead != lookahead)
1823 cached_first_cycle_multipass_dfa_lookahead = lookahead;
1824 max_lookahead_tries = 100;
1825 for (i = 0; i < issue_rate; i++)
1826 max_lookahead_tries *= lookahead;
1828 insn = ready_element (ready, 0);
1829 if (INSN_CODE (insn) < 0)
1830 return ready_remove_first (ready);
1831 for (i = 1; i < ready->n_ready; i++)
1833 insn = ready_element (ready, i);
1834 ready_try [i]
1835 = (INSN_CODE (insn) < 0
1836 || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard
1837 && !targetm.sched.first_cycle_multipass_dfa_lookahead_guard (insn)));
1839 if (max_issue (ready, &index) == 0)
1840 return ready_remove_first (ready);
1841 else
1842 return ready_remove (ready, index);
1846 /* Use forward list scheduling to rearrange insns of block B in region RGN,
1847 possibly bringing insns from subsequent blocks in the same region. */
1849 void
1850 schedule_block (int b, int rgn_n_insns)
1852 struct ready_list ready;
1853 int i, first_cycle_insn_p;
1854 int can_issue_more;
1855 state_t temp_state = NULL; /* It is used for multipass scheduling. */
1856 int sort_p, advance, start_clock_var;
1858 /* Head/tail info for this block. */
1859 rtx prev_head = current_sched_info->prev_head;
1860 rtx next_tail = current_sched_info->next_tail;
1861 rtx head = NEXT_INSN (prev_head);
1862 rtx tail = PREV_INSN (next_tail);
1864 /* We used to have code to avoid getting parameters moved from hard
1865 argument registers into pseudos.
1867 However, it was removed when it proved to be of marginal benefit
1868 and caused problems because schedule_block and compute_forward_dependences
1869 had different notions of what the "head" insn was. */
1871 if (head == tail && (! INSN_P (head)))
1872 abort ();
1874 /* Debug info. */
1875 if (sched_verbose)
1877 fprintf (sched_dump, ";; ======================================================\n");
1878 fprintf (sched_dump,
1879 ";; -- basic block %d from %d to %d -- %s reload\n",
1880 b, INSN_UID (head), INSN_UID (tail),
1881 (reload_completed ? "after" : "before"));
1882 fprintf (sched_dump, ";; ======================================================\n");
1883 fprintf (sched_dump, "\n");
1886 state_reset (curr_state);
1888 /* Allocate the ready list. */
1889 ready.veclen = rgn_n_insns + 1 + issue_rate;
1890 ready.first = ready.veclen - 1;
1891 ready.vec = xmalloc (ready.veclen * sizeof (rtx));
1892 ready.n_ready = 0;
1894 /* It is used for first cycle multipass scheduling. */
1895 temp_state = alloca (dfa_state_size);
1896 ready_try = xcalloc ((rgn_n_insns + 1), sizeof (char));
1897 choice_stack = xmalloc ((rgn_n_insns + 1)
1898 * sizeof (struct choice_entry));
1899 for (i = 0; i <= rgn_n_insns; i++)
1900 choice_stack[i].state = xmalloc (dfa_state_size);

  (*current_sched_info->init_ready_list) (&ready);

  if (targetm.sched.md_init)
    targetm.sched.md_init (sched_dump, sched_verbose, ready.veclen);

  /* We start inserting insns after PREV_HEAD.  */
  last_scheduled_insn = prev_head;

  /* Initialize INSN_QUEUE.  Q_SIZE is the total number of insns in the
     queue.  */
  q_ptr = 0;
  q_size = 0;

  insn_queue = alloca ((max_insn_queue_index + 1) * sizeof (rtx));
  memset (insn_queue, 0, (max_insn_queue_index + 1) * sizeof (rtx));
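
  /* INSN_QUEUE is used as a circular buffer: an insn queued with a
     delay of N cycles is placed N slots past Q_PTR, and Q_PTR itself
     advances one slot per clock tick.  */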
  last_clock_var = -1;

  /* Start just before the beginning of time.  */
  clock_var = -1;
  advance = 0;

  sort_p = TRUE;
  /* Loop until all the insns in BB are scheduled.  */
  while ((*current_sched_info->schedule_more_p) ())
    {
      do
        {
          start_clock_var = clock_var;

          clock_var++;

          advance_one_cycle ();

          /* Add to the ready list all pending insns that can be issued now.
             If there are no ready insns, increment clock until one
             is ready and add all pending insns at that point to the ready
             list.  */
          queue_to_ready (&ready);

          if (ready.n_ready == 0)
            abort ();

          if (sched_verbose >= 2)
            {
              fprintf (sched_dump, ";;\t\tReady list after queue_to_ready: ");
              debug_ready_list (&ready);
            }
          advance -= clock_var - start_clock_var;
        }
      while (advance > 0);
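
      /* The do-while above has consumed any stall cycles requested via
         a positive ADVANCE, e.g. after issuing an asm insn.  */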

      if (sort_p)
        {
          /* Sort the ready list based on priority.  */
          ready_sort (&ready);

          if (sched_verbose >= 2)
            {
              fprintf (sched_dump, ";;\t\tReady list after ready_sort: ");
              debug_ready_list (&ready);
            }
        }

      /* Allow the target to reorder the list, typically for
         better instruction bundling.  */
      if (sort_p && targetm.sched.reorder
          && (ready.n_ready == 0
              || !SCHED_GROUP_P (ready_element (&ready, 0))))
        can_issue_more =
          targetm.sched.reorder (sched_dump, sched_verbose,
                                 ready_lastpos (&ready),
                                 &ready.n_ready, clock_var);
      else
        can_issue_more = issue_rate;

      first_cycle_insn_p = 1;
      cycle_issued_insns = 0;
      for (;;)
        {
          rtx insn;
          int cost;
          bool asm_p = false;

          if (sched_verbose >= 2)
            {
              fprintf (sched_dump, ";;\tReady list (t =%3d): ",
                       clock_var);
              debug_ready_list (&ready);
            }

          if (ready.n_ready == 0
              && can_issue_more
              && reload_completed)
            {
              /* Allow scheduling insns directly from the queue in case
                 there's nothing better to do (ready list is empty) but
                 there are still vacant dispatch slots in the current cycle.  */
              if (sched_verbose >= 6)
                fprintf (sched_dump, ";;\t\tSecond chance\n");
              memcpy (temp_state, curr_state, dfa_state_size);
              if (early_queue_to_ready (temp_state, &ready))
                ready_sort (&ready);
            }

          if (ready.n_ready == 0 || !can_issue_more
              || state_dead_lock_p (curr_state)
              || !(*current_sched_info->schedule_more_p) ())
            break;

          /* Select and remove the insn from the ready list.  */
          if (sort_p)
            insn = choose_ready (&ready);
          else
            insn = ready_remove_first (&ready);

          if (targetm.sched.dfa_new_cycle
              && targetm.sched.dfa_new_cycle (sched_dump, sched_verbose,
                                              insn, last_clock_var,
                                              clock_var, &sort_p))
            {
              ready_add (&ready, insn);
              break;
            }

          sort_p = TRUE;
          memcpy (temp_state, curr_state, dfa_state_size);
          if (recog_memoized (insn) < 0)
            {
              asm_p = (GET_CODE (PATTERN (insn)) == ASM_INPUT
                       || asm_noperands (PATTERN (insn)) >= 0);
              if (!first_cycle_insn_p && asm_p)
                /* This is an asm insn that we tried to issue on a
                   cycle other than the first.  Issue it on the next
                   cycle.  */
                cost = 1;
              else
                /* A USE insn, or something else we don't need to
                   understand.  We can't pass these directly to
                   state_transition because it will trigger a
                   fatal error for unrecognizable insns.  */
                cost = 0;
            }
          else
            {
              cost = state_transition (temp_state, insn);
              if (cost < 0)
                cost = 0;
              else if (cost == 0)
                cost = 1;
            }
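
          /* A positive COST means INSN cannot be issued on this cycle:
             queue it to become ready again COST ticks from now.  */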
          if (cost >= 1)
            {
              queue_insn (insn, cost);
              if (SCHED_GROUP_P (insn))
                {
                  advance = cost;
                  break;
                }

              continue;
            }

          if (! (*current_sched_info->can_schedule_ready_p) (insn))
            goto next;

          last_scheduled_insn = move_insn (insn, last_scheduled_insn);
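
          /* Issuing INSN changes the DFA state only if INSN occupies
             functional units, so a state change is what marks a real
             issue for CYCLE_ISSUED_INSNS.  */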
          if (memcmp (curr_state, temp_state, dfa_state_size) != 0)
            cycle_issued_insns++;
          memcpy (curr_state, temp_state, dfa_state_size);

          if (targetm.sched.variable_issue)
            can_issue_more =
              targetm.sched.variable_issue (sched_dump, sched_verbose,
                                            insn, can_issue_more);
          /* A naked CLOBBER or USE generates no instruction, so do
             not count them against the issue rate.  */
          else if (GET_CODE (PATTERN (insn)) != USE
                   && GET_CODE (PATTERN (insn)) != CLOBBER)
            can_issue_more--;

          advance = schedule_insn (insn, &ready, clock_var);

          /* After issuing an asm insn we should start a new cycle.  */
          if (advance == 0 && asm_p)
            advance = 1;
          if (advance != 0)
            break;

        next:
          first_cycle_insn_p = 0;

          /* Sort the ready list based on priority.  This must be
             redone here, as schedule_insn may have readied additional
             insns that will not be sorted correctly.  */
          if (ready.n_ready > 0)
            ready_sort (&ready);

          if (targetm.sched.reorder2
              && (ready.n_ready == 0
                  || !SCHED_GROUP_P (ready_element (&ready, 0))))
            {
              can_issue_more =
                targetm.sched.reorder2 (sched_dump, sched_verbose,
                                        ready.n_ready
                                        ? ready_lastpos (&ready) : NULL,
                                        &ready.n_ready, clock_var);
            }
        }
    }

  if (targetm.sched.md_finish)
    targetm.sched.md_finish (sched_dump, sched_verbose);

  /* Debug info.  */
  if (sched_verbose)
    {
      fprintf (sched_dump, ";;\tReady list (final): ");
      debug_ready_list (&ready);
    }

  /* Sanity check -- queue must be empty now.  Meaningless if region has
     multiple bbs.  */
  if (current_sched_info->queue_must_finish_empty && q_size != 0)
    abort ();

  /* Update head/tail boundaries.  */
  head = NEXT_INSN (prev_head);
  tail = last_scheduled_insn;

  if (!reload_completed)
    {
      rtx insn, link, next;

      /* INSN_TICK (the minimum clock tick at which an insn becomes
         ready) may not be correct for insns in the subsequent blocks
         of the region.  We should either use a correct value of
         `clock_var' or modify INSN_TICK.  It is better to keep the
         clock_var value equal to 0 at the start of a basic block, so
         we modify INSN_TICK here.  */
      for (insn = head; insn != tail; insn = NEXT_INSN (insn))
        if (INSN_P (insn))
          {
            for (link = INSN_DEPEND (insn); link != 0; link = XEXP (link, 1))
              {
                next = XEXP (link, 0);
                INSN_TICK (next) -= clock_var;
              }
          }
    }

  /* Restore-other-notes: NOTE_LIST is the end of a chain of notes
     previously found among the insns.  Insert them at the beginning
     of the insns.  */
  if (note_list != 0)
    {
      rtx note_head = note_list;

      while (PREV_INSN (note_head))
        note_head = PREV_INSN (note_head);

      PREV_INSN (note_head) = PREV_INSN (head);
      NEXT_INSN (PREV_INSN (head)) = note_head;
      PREV_INSN (head) = note_list;
      NEXT_INSN (note_list) = head;
      head = note_head;
    }

  /* Debugging.  */
  if (sched_verbose)
    {
      fprintf (sched_dump, ";; total time = %d\n;; new head = %d\n",
               clock_var, INSN_UID (head));
      fprintf (sched_dump, ";; new tail = %d\n\n",
               INSN_UID (tail));
    }

  current_sched_info->head = head;
  current_sched_info->tail = tail;

  free (ready.vec);

  free (ready_try);
  for (i = 0; i <= rgn_n_insns; i++)
    free (choice_stack [i].state);
  free (choice_stack);
}

/* Set_priorities: compute priority of each insn in the block.  */

int
set_priorities (rtx head, rtx tail)
{
  rtx insn;
  int n_insn;
  int sched_max_insns_priority =
    current_sched_info->sched_max_insns_priority;
  rtx prev_head;

  prev_head = PREV_INSN (head);

  if (head == tail && (! INSN_P (head)))
    return 0;

  n_insn = 0;
  sched_max_insns_priority = 0;
  for (insn = tail; insn != prev_head; insn = PREV_INSN (insn))
    {
      if (NOTE_P (insn))
        continue;

      n_insn++;
      (void) priority (insn);

      if (INSN_PRIORITY_KNOWN (insn))
        sched_max_insns_priority =
          MAX (sched_max_insns_priority, INSN_PRIORITY (insn));
    }
  sched_max_insns_priority += 1;
  current_sched_info->sched_max_insns_priority =
    sched_max_insns_priority;

  return n_insn;
}

/* Initialize some global state for the scheduler.  DUMP_FILE is to be used
   for debugging output.  */

void
sched_init (FILE *dump_file)
{
  int luid;
  basic_block b;
  rtx insn;
  int i;

  /* If cc0 is defined, disable speculative loads in its presence.  */
#ifdef HAVE_cc0
  flag_schedule_speculative_load = 0;
#endif

  /* Set dump and sched_verbose for the desired debugging output.  If no
     dump-file was specified, but -fsched-verbose=N (any N), print to stderr.
     For -fsched-verbose=N, N>=10, print everything to stderr.  */
  sched_verbose = sched_verbose_param;
  if (sched_verbose_param == 0 && dump_file)
    sched_verbose = 1;
  sched_dump = ((sched_verbose_param >= 10 || !dump_file)
                ? stderr : dump_file);

  /* Initialize issue_rate.  */
  if (targetm.sched.issue_rate)
    issue_rate = targetm.sched.issue_rate ();
  else
    issue_rate = 1;
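
  /* ISSUE_RATE is the maximum number of insns the target can issue in
     a single cycle; it also bounds the multipass lookahead search in
     choose_ready.  */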

  if (cached_issue_rate != issue_rate)
    {
      cached_issue_rate = issue_rate;
      /* To invalidate max_lookahead_tries:  */
      cached_first_cycle_multipass_dfa_lookahead = 0;
    }

  /* We use LUID 0 for the fake insn (UID 0) which holds dependencies for
     pseudos which do not cross calls.  */
  old_max_uid = get_max_uid () + 1;

  h_i_d = xcalloc (old_max_uid, sizeof (*h_i_d));
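
  /* Mark every insn's cost as not yet computed; it is filled in
     lazily the first time the cost is needed.  */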
  for (i = 0; i < old_max_uid; i++)
    h_i_d [i].cost = -1;

  if (targetm.sched.init_dfa_pre_cycle_insn)
    targetm.sched.init_dfa_pre_cycle_insn ();

  if (targetm.sched.init_dfa_post_cycle_insn)
    targetm.sched.init_dfa_post_cycle_insn ();

  dfa_start ();
  dfa_state_size = state_size ();
  curr_state = xmalloc (dfa_state_size);

  h_i_d[0].luid = 0;
  luid = 1;
  FOR_EACH_BB (b)
    for (insn = BB_HEAD (b); ; insn = NEXT_INSN (insn))
      {
        INSN_LUID (insn) = luid;

        /* Increment the next luid, unless this is a note.  We don't
           really need separate IDs for notes and we don't want to
           schedule differently depending on whether or not there are
           line-number notes, i.e., depending on whether or not we're
           generating debugging information.  */
        if (!NOTE_P (insn))
          ++luid;

        if (insn == BB_END (b))
          break;
      }

  init_dependency_caches (luid);

  init_alias_analysis ();

  if (write_symbols != NO_DEBUG)
    {
      rtx line;

      line_note_head = xcalloc (last_basic_block, sizeof (rtx));

      /* Save-line-note-head:
         Determine the line-number at the start of each basic block.
         This must be computed and saved now, because after a basic block's
         predecessor has been scheduled, it is impossible to accurately
         determine the correct line number for the first insn of the block.  */

      FOR_EACH_BB (b)
        {
          for (line = BB_HEAD (b); line; line = PREV_INSN (line))
            if (NOTE_P (line) && NOTE_LINE_NUMBER (line) > 0)
              {
                line_note_head[b->index] = line;
                break;
              }
          /* Do a forward search as well, since we won't get to see the first
             notes in a basic block.  */
          for (line = BB_HEAD (b); line; line = NEXT_INSN (line))
            {
              if (INSN_P (line))
                break;
              if (NOTE_P (line) && NOTE_LINE_NUMBER (line) > 0)
                line_note_head[b->index] = line;
            }
        }
    }

  /* ??? Add a NOTE after the last insn of the last basic block.  It is not
     known why this is done.  */

  insn = BB_END (EXIT_BLOCK_PTR->prev_bb);
  if (NEXT_INSN (insn) == 0
      || (!NOTE_P (insn)
          && !LABEL_P (insn)
          /* Don't emit a NOTE if it would end up before a BARRIER.  */
          && !BARRIER_P (NEXT_INSN (insn))))
    {
      emit_note_after (NOTE_INSN_DELETED, BB_END (EXIT_BLOCK_PTR->prev_bb));
      /* Make the new note appear outside BB.  */
      BB_END (EXIT_BLOCK_PTR->prev_bb) = PREV_INSN (BB_END (EXIT_BLOCK_PTR->prev_bb));
    }

  /* Compute INSN_REG_WEIGHT for all blocks.  We must do this before
     removing death notes.  */
  FOR_EACH_BB_REVERSE (b)
    find_insn_reg_weight (b->index);

  if (targetm.sched.md_init_global)
    targetm.sched.md_init_global (sched_dump, sched_verbose, old_max_uid);
}

/* Free global data used during insn scheduling.  */

void
sched_finish (void)
{
  free (h_i_d);
  free (curr_state);
  dfa_finish ();
  free_dependency_caches ();
  end_alias_analysis ();
  if (write_symbols != NO_DEBUG)
    free (line_note_head);

  if (targetm.sched.md_finish_global)
    targetm.sched.md_finish_global (sched_dump, sched_verbose);
}
#endif /* INSN_SCHEDULING */