/* Instruction scheduling pass.
   Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998,
   1999, 2000, 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc.
   Contributed by Michael Tiemann (tiemann@cygnus.com).  Enhanced by,
   and currently maintained by, Jim Wilson (wilson@cygnus.com).

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING.  If not, write to the Free
Software Foundation, 51 Franklin Street, Fifth Floor, Boston, MA
02110-1301, USA.  */
/* Instruction scheduling pass.  This file, along with sched-deps.c,
   contains the generic parts.  The actual entry point for the normal
   instruction scheduling pass is found in sched-rgn.c.

   We compute insn priorities based on data dependencies.  Flow
   analysis only creates a fraction of the data dependencies we must
   observe: namely, only those dependencies which the combiner can be
   expected to use.  For this pass, we must therefore create the
   remaining dependencies we need to observe: register dependencies,
   memory dependencies, dependencies to keep function calls in order,
   and the dependence between a conditional branch and the setting of
   condition codes are all dealt with here.
   The scheduler first traverses the data flow graph, starting with
   the last instruction, and proceeding to the first, assigning values
   to insn_priority as it goes.  This sorts the instructions
   topologically by data dependence.

   Once priorities have been established, we order the insns using
   list scheduling.  This works as follows: starting with a list of
   all the ready insns, and sorted according to priority number, we
   schedule the insn from the end of the list by placing its
   predecessors in the list according to their priority order.  We
   consider this insn scheduled by setting the pointer to the "end" of
   the list to point to the previous insn.  When an insn has no
   predecessors, we either queue it until sufficient time has elapsed
   or add it to the ready list.  As the instructions are scheduled or
   when stalls are introduced, the queue advances and dumps insns into
   the ready list.  When all insns down to the lowest priority have
   been scheduled, the critical path of the basic block has been made
   as short as possible.  The remaining insns are then scheduled in
   remaining slots.
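
   In outline, the loop just described looks like this (an editorial
   sketch, much simplified; the real loop in schedule_block below also
   handles multipass lookahead, target reorder hooks and schedule
   groups):

	clock_var = -1;
	while (unscheduled insns remain)
	  {
	    clock_var++;
	    queue_to_ready ();     -- stalls that have elapsed: Q -> R
	    ready_sort ();         -- order R by rank_for_schedule
	    while (issue slots remain and R is nonempty)
	      schedule_insn (best insn of R);   -- R -> S; newly resolved
						   dependents go to R or Q
	  }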
   The following list shows the order in which we want to break ties
   among insns in the ready list:

   1.  choose insn with the longest path to end of bb, ties
       broken by
   2.  choose insn with least contribution to register pressure,
       ties broken by
   3.  prefer in-block over interblock motion, ties broken by
   4.  prefer useful over speculative motion, ties broken by
   5.  choose insn with largest control flow probability, ties
       broken by
   6.  choose insn with the least dependences upon the previously
       scheduled insn, ties broken by
   7.  choose the insn which has the most insns dependent on it,
       ties broken by
   8.  choose insn with lowest UID.
   Memory references complicate matters.  Only if we can be certain
   that memory references are not part of the data dependency graph
   (via true, anti, or output dependence), can we move operations past
   memory references.  To a first approximation, reads can be done
   independently, while writes introduce dependencies.  Better
   approximations will yield fewer dependencies.

   Before reload, an extended analysis of interblock data dependences
   is required for interblock scheduling.  This is performed in
   compute_block_backward_dependences ().

   Dependencies set up by memory references are treated in exactly the
   same way as other dependencies, by using LOG_LINKS backward
   dependences.  LOG_LINKS are translated into INSN_DEPEND forward
   dependences for the purpose of forward list scheduling.

   Having optimized the critical path, we may have also unduly
   extended the lifetimes of some registers.  If an operation requires
   that constants be loaded into registers, it is certainly desirable
   to load those constants as early as necessary, but no earlier.
   I.e., it will not do to load up a bunch of registers at the
   beginning of a basic block only to use them at the end, if they
   could be loaded later, since this may result in excessive register
   utilization.

   Note that since branches are never in basic blocks, but only end
   basic blocks, this pass will not move branches.  But that is ok,
   since we can use GNU's delayed branch scheduling pass to take care
   of this case.

   Also note that no further optimizations based on algebraic
   identities are performed, so this pass would be a good one to
   perform instruction splitting, such as breaking up a multiply
   instruction into shifts and adds where that is profitable.

   Given the memory aliasing analysis that this pass should perform,
   it should be possible to remove redundant stores to memory, and to
   load values from registers instead of hitting memory.

   Before reload, speculative insns are moved only if a 'proof' exists
   that no exception will be caused by this, and if no live registers
   exist that inhibit the motion (live register constraints are not
   represented by data dependence edges).

   This pass must update information that subsequent passes expect to
   be correct.  Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
   reg_n_calls_crossed, and reg_live_length.  Also, BB_HEAD, BB_END.

   The information in the line number notes is carefully retained by
   this pass.  Notes that refer to the starting and ending of
   exception regions are also carefully retained by this pass.  All
   other NOTE insns are grouped in their same relative order at the
   beginning of basic blocks and regions that have been scheduled.  */
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
#include "toplev.h"
#include "rtl.h"
#include "tm_p.h"
#include "hard-reg-set.h"
#include "regs.h"
#include "function.h"
#include "flags.h"
#include "insn-config.h"
#include "insn-attr.h"
#include "except.h"
#include "recog.h"
#include "sched-int.h"
#include "target.h"
#include "output.h"
#ifdef INSN_SCHEDULING

/* issue_rate is the number of insns that can be scheduled in the same
   machine cycle.  It can be defined in the config/mach/mach.h file,
   otherwise we set it to 1.  */

static int issue_rate;

/* sched-verbose controls the amount of debugging output the
   scheduler prints.  It is controlled by -fsched-verbose=N:
   N>0 and no -dSR : the output is directed to stderr.
   N>=10 will direct the printouts to stderr (regardless of -dSR).
   N=1: same as -dSR.
   N=2: bb's probabilities, detailed ready list info, unit/insn info.
   N=3: rtl at abort point, control-flow, regions info.
   N=5: dependences info.  */

static int sched_verbose_param = 0;
int sched_verbose = 0;

/* Debugging file.  All printouts are sent to dump, which is always set,
   either to stderr, or to the dump listing file (-dSR).  */
FILE *sched_dump = 0;
/* Highest uid before scheduling.  */
static int old_max_uid;

/* fix_sched_param() is called from toplev.c upon detection
   of the -fsched-verbose=N option.  */

void
fix_sched_param (const char *param, const char *val)
{
  if (!strcmp (param, "verbose"))
    sched_verbose_param = atoi (val);
  else
    warning (0, "fix_sched_param: unknown param: %s", param);
}
struct haifa_insn_data *h_i_d;

#define LINE_NOTE(INSN)		(h_i_d[INSN_UID (INSN)].line_note)
#define INSN_TICK(INSN)		(h_i_d[INSN_UID (INSN)].tick)

/* Vector indexed by basic block number giving the starting line-number
   for each basic block.  */
static rtx *line_note_head;

/* List of important notes we must keep around.  This is a pointer to the
   last element in the list.  */
static rtx note_list;
/* Queues, etc.  */

/* An instruction is ready to be scheduled when all insns preceding it
   have already been scheduled.  It is important to ensure that all
   insns which use its result will not be executed until its result
   has been computed.  An insn is maintained in one of four structures:

   (P) the "Pending" set of insns which cannot be scheduled until
   their dependencies have been satisfied.
   (Q) the "Queued" set of insns that can be scheduled when sufficient
   time has passed.
   (R) the "Ready" list of unscheduled, uncommitted insns.
   (S) the "Scheduled" list of insns.

   Initially, all insns are either "Pending" or "Ready" depending on
   whether their dependencies are satisfied.

   Insns move from the "Ready" list to the "Scheduled" list as they
   are committed to the schedule.  As this occurs, the insns in the
   "Pending" list have their dependencies satisfied and move to either
   the "Ready" list or the "Queued" set depending on whether
   sufficient time has passed to make them ready.  As time passes,
   insns move from the "Queued" set to the "Ready" list.

   The "Pending" list (P) consists of the insns in the INSN_DEPEND of
   the unscheduled insns, i.e., those that are ready, queued, and pending.
   The "Queued" set (Q) is implemented by the variable `insn_queue'.
   The "Ready" list (R) is implemented by the variables `ready' and
   `n_ready'.
   The "Scheduled" list (S) is the new insn chain built by this pass.

   The transition (R->S) is implemented in the scheduling loop in
   `schedule_block' when the best insn to schedule is chosen.
   The transitions (P->R and P->Q) are implemented in `schedule_insn' as
   insns move from the ready list to the scheduled list.
   The transition (Q->R) is implemented in `queue_to_ready' as time
   passes or stalls are introduced.  */
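
/* The following sketch is editorial and never compiled; it restates
   the transitions above in terms of the functions defined later in
   this file (the real logic lives in schedule_insn).  */
#if 0
static void
transition_sketch (struct ready_list *ready, rtx next, int clock)
{
  int effective_cost = INSN_TICK (next) - clock;

  if (effective_cost < 1)
    ready_add (ready, next);		/* P -> R: no stall needed.  */
  else
    queue_insn (next, effective_cost);	/* P -> Q: stall required.  */
  /* Later, queue_to_ready moves NEXT from Q to R once enough cycles
     have passed, and schedule_block moves it from R to S when it is
     chosen as the best ready insn.  */
}
#endif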
/* Implement a circular buffer to delay instructions until sufficient
   time has passed.  For the new pipeline description interface,
   MAX_INSN_QUEUE_INDEX is a power of two minus one which is larger
   than the maximal time of instruction execution computed by genattr.c
   on the basis of the maximal time of functional unit reservations and
   of getting a result.  This is the longest time an insn may be
   queued.  */

static rtx *insn_queue;
static int q_ptr = 0;
static int q_size = 0;
#define NEXT_Q(X) (((X)+1) & max_insn_queue_index)
#define NEXT_Q_AFTER(X, C) (((X)+(C)) & max_insn_queue_index)
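
/* For example (illustrative, assuming max_insn_queue_index == 31):
   NEXT_Q_AFTER (30, 5) == (30 + 5) & 31 == 3, so an insn queued five
   cycles after slot 30 lands in slot 3: the index wraps around the
   circular buffer instead of running off its end.  */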
/* The following variable describes all current and future
   reservations of the processor units.  */
state_t curr_state;

/* The following variable is the size of the memory representing all
   current and future reservations of the processor units.  */
static size_t dfa_state_size;

/* The following array is used to find the best insn from ready when
   the automaton pipeline interface is used.  */
static char *ready_try;
/* Describe the ready list of the scheduler.
   VEC holds space enough for all insns in the current region.  VECLEN
   says how many exactly.
   FIRST is the index of the element with the highest priority; i.e. the
   last one in the ready list, since elements are ordered by ascending
   priority.
   N_READY determines how many insns are on the ready list.  */

struct ready_list
{
  rtx *vec;
  int veclen;
  int first;
  int n_ready;
};
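
/* An illustrative snapshot (editorial example): with veclen == 8,
   first == 7 and n_ready == 3, the ready insns occupy vec[5..7];
   ready_element (ready, 0) is vec[7], the highest-priority insn, and
   ready_lastpos () returns &vec[5], the lowest-priority one.  */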
static int may_trap_exp (rtx, int);

/* Nonzero iff the address is composed of at most one register.  */
#define CONST_BASED_ADDRESS_P(x)			\
  (REG_P (x)						\
   || ((GET_CODE (x) == PLUS || GET_CODE (x) == MINUS	\
	|| (GET_CODE (x) == LO_SUM))			\
       && (CONSTANT_P (XEXP (x, 0))			\
	   || CONSTANT_P (XEXP (x, 1)))))
/* Returns a class that an insn with GET_DEST(insn)=x may belong to,
   as found by analyzing the insn's expression.  */

static int
may_trap_exp (rtx x, int is_store)
{
  enum rtx_code code;

  if (x == 0)
    return TRAP_FREE;
  code = GET_CODE (x);
  if (is_store)
    {
      if (code == MEM && may_trap_p (x))
	return TRAP_RISKY;
      else
	return TRAP_FREE;
    }
  if (code == MEM)
    {
      /* The insn uses memory: a volatile load.  */
      if (MEM_VOLATILE_P (x))
	return IRISKY;
      /* An exception-free load.  */
      if (!may_trap_p (x))
	return IFREE;
      /* A load with 1 base register, to be further checked.  */
      if (CONST_BASED_ADDRESS_P (XEXP (x, 0)))
	return PFREE_CANDIDATE;
      /* No info on the load, to be further checked.  */
      return PRISKY_CANDIDATE;
    }
  else
    {
      const char *fmt;
      int i, insn_class = TRAP_FREE;

      /* Neither store nor load, check if it may cause a trap.  */
      if (may_trap_p (x))
	return TRAP_RISKY;
      /* Recursive step: walk the insn...  */
      fmt = GET_RTX_FORMAT (code);
      for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
	{
	  if (fmt[i] == 'e')
	    {
	      int tmp_class = may_trap_exp (XEXP (x, i), is_store);
	      insn_class = WORST_CLASS (insn_class, tmp_class);
	    }
	  else if (fmt[i] == 'E')
	    {
	      int j;
	      for (j = 0; j < XVECLEN (x, i); j++)
		{
		  int tmp_class = may_trap_exp (XVECEXP (x, i, j), is_store);
		  insn_class = WORST_CLASS (insn_class, tmp_class);
		  if (insn_class == TRAP_RISKY || insn_class == IRISKY)
		    break;
		}
	    }
	  if (insn_class == TRAP_RISKY || insn_class == IRISKY)
	    break;
	}
      return insn_class;
    }
}
/* Classifies an insn for the purpose of verifying that it can be
   moved speculatively, by examining its pattern, returning:
   TRAP_RISKY: store, or risky non-load insn (e.g. division by variable).
   TRAP_FREE: non-load insn.
   IFREE: load from a globally safe location.
   IRISKY: volatile load.
   PFREE_CANDIDATE, PRISKY_CANDIDATE: loads that need to be checked for
   being either PFREE or PRISKY.  */
static int
haifa_classify_insn (rtx insn)
{
  rtx pat = PATTERN (insn);
  int tmp_class = TRAP_FREE;
  int insn_class = TRAP_FREE;
  enum rtx_code code;

  if (GET_CODE (pat) == PARALLEL)
    {
      int i, len = XVECLEN (pat, 0);

      for (i = len - 1; i >= 0; i--)
	{
	  code = GET_CODE (XVECEXP (pat, 0, i));
	  switch (code)
	    {
	    case CLOBBER:
	      /* Test if it is a 'store'.  */
	      tmp_class = may_trap_exp (XEXP (XVECEXP (pat, 0, i), 0), 1);
	      break;
	    case SET:
	      /* Test if it is a store.  */
	      tmp_class = may_trap_exp (SET_DEST (XVECEXP (pat, 0, i)), 1);
	      if (tmp_class == TRAP_RISKY)
		break;
	      /* Test if it is a load.  */
	      tmp_class
		= WORST_CLASS (tmp_class,
			       may_trap_exp (SET_SRC (XVECEXP (pat, 0, i)),
					     0));
	      break;
	    case COND_EXEC:
	    case TRAP_IF:
	      tmp_class = TRAP_RISKY;
	      break;
	    default:
	      ;
	    }
	  insn_class = WORST_CLASS (insn_class, tmp_class);
	  if (insn_class == TRAP_RISKY || insn_class == IRISKY)
	    break;
	}
    }
  else
    {
      code = GET_CODE (pat);
      switch (code)
	{
	case CLOBBER:
	  /* Test if it is a 'store'.  */
	  tmp_class = may_trap_exp (XEXP (pat, 0), 1);
	  break;
	case SET:
	  /* Test if it is a store.  */
	  tmp_class = may_trap_exp (SET_DEST (pat), 1);
	  if (tmp_class == TRAP_RISKY)
	    break;
	  /* Test if it is a load.  */
	  tmp_class =
	    WORST_CLASS (tmp_class,
			 may_trap_exp (SET_SRC (pat), 0));
	  break;
	case COND_EXEC:
	case TRAP_IF:
	  tmp_class = TRAP_RISKY;
	  break;
	default:;
	}
      insn_class = tmp_class;
    }

  return insn_class;
}
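
/* For example (illustrative patterns; the classifications follow from
   may_trap_exp above):
     (set (reg:SI 0) (reg:SI 1))            -> TRAP_FREE
     (set (mem:SI (reg:SI 0)) (reg:SI 1))   -> TRAP_RISKY, if the store
					       may trap
     (set (reg:SI 0) (mem:SI (reg:SI 1)))   -> PFREE_CANDIDATE, a load
					       with a single base register  */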
/* Forward declarations.  */

static int priority (rtx);
static int rank_for_schedule (const void *, const void *);
static void swap_sort (rtx *, int);
static void queue_insn (rtx, int);
static int schedule_insn (rtx, struct ready_list *, int);
static int find_set_reg_weight (rtx);
static void find_insn_reg_weight (int);
static void adjust_priority (rtx);
static void advance_one_cycle (void);
/* Notes handling mechanism:
   =========================
   Generally, NOTES are saved before scheduling and restored after scheduling.
   The scheduler distinguishes between three types of notes:

   (1) LINE_NUMBER notes, generated and used for debugging.  Here,
   before scheduling a region, a pointer to the LINE_NUMBER note is
   added to the insn following it (in save_line_notes()), and the note
   is removed (in rm_line_notes() and unlink_line_notes()).  After
   scheduling the region, this pointer is used for regeneration of
   the LINE_NUMBER note (in restore_line_notes()).

   (2) LOOP_BEGIN, LOOP_END, SETJMP, EHREGION_BEG, EHREGION_END notes:
   Before scheduling a region, a pointer to the note is added to the insn
   that follows or precedes it.  (This happens as part of the data dependence
   computation.)  After scheduling an insn, the pointer contained in it is
   used for regenerating the corresponding note (in reemit_notes).

   (3) All other notes (e.g. INSN_DELETED):  Before scheduling a block,
   these notes are put in a list (in rm_other_notes() and
   unlink_other_notes()).  After scheduling the block, these notes are
   inserted at the beginning of the block (in schedule_block()).  */
static rtx unlink_other_notes (rtx, rtx);
static rtx unlink_line_notes (rtx, rtx);
static rtx reemit_notes (rtx, rtx);

static rtx *ready_lastpos (struct ready_list *);
static void ready_sort (struct ready_list *);
static rtx ready_remove_first (struct ready_list *);

static void queue_to_ready (struct ready_list *);
static int early_queue_to_ready (state_t, struct ready_list *);

static void debug_ready_list (struct ready_list *);

static rtx move_insn1 (rtx, rtx);
static rtx move_insn (rtx, rtx);

/* The following functions are used to implement multi-pass scheduling
   on the first cycle.  */
static rtx ready_element (struct ready_list *, int);
static rtx ready_remove (struct ready_list *, int);
static int max_issue (struct ready_list *, int *);

static rtx choose_ready (struct ready_list *);
#endif /* INSN_SCHEDULING */

/* Point to state used for the current scheduling pass.  */
struct sched_info *current_sched_info;

#ifndef INSN_SCHEDULING
void
schedule_insns (void)
{
}
#else

/* Pointer to the last instruction scheduled.  Used by rank_for_schedule,
   so that insns independent of the last scheduled insn will be preferred
   over dependent instructions.  */

static rtx last_scheduled_insn;
/* Compute cost of executing INSN given the dependence LINK on the insn USED.
   This is the number of cycles between instruction issue and
   instruction results.  */

HAIFA_INLINE int
insn_cost (rtx insn, rtx link, rtx used)
{
  int cost = INSN_COST (insn);

  if (cost < 0)
    {
      /* A USE insn, or something else we don't need to
	 understand.  We can't pass these directly to
	 result_ready_cost or insn_default_latency because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (recog_memoized (insn) < 0)
	{
	  INSN_COST (insn) = 0;
	  return 0;
	}
      else
	{
	  cost = insn_default_latency (insn);
	  if (cost < 0)
	    cost = 0;

	  INSN_COST (insn) = cost;
	}
    }

  /* In this case estimate cost without caring how insn is used.  */
  if (link == 0 || used == 0)
    return cost;

  /* A USE insn should never require the value used to be computed.
     This allows the computation of a function's result and parameter
     values to overlap the return and call.  */
  if (recog_memoized (used) < 0)
    cost = 0;
  else
    {
      if (INSN_CODE (insn) >= 0)
	{
	  if (REG_NOTE_KIND (link) == REG_DEP_ANTI)
	    cost = 0;
	  else if (REG_NOTE_KIND (link) == REG_DEP_OUTPUT)
	    {
	      cost = (insn_default_latency (insn)
		      - insn_default_latency (used));
	      if (cost <= 0)
		cost = 1;
	    }
	  else if (bypass_p (insn))
	    cost = insn_latency (insn, used);
	}

      if (targetm.sched.adjust_cost)
	cost = targetm.sched.adjust_cost (used, link, insn, cost);

      if (cost < 0)
	cost = 0;
    }

  return cost;
}
/* Compute the priority number for INSN.  */

static int
priority (rtx insn)
{
  rtx link;

  if (! INSN_P (insn))
    return 0;

  if (! INSN_PRIORITY_KNOWN (insn))
    {
      int this_priority = 0;

      if (INSN_DEPEND (insn) == 0)
	this_priority = insn_cost (insn, 0, 0);
      else
	{
	  for (link = INSN_DEPEND (insn); link; link = XEXP (link, 1))
	    {
	      rtx next;
	      int next_priority;

	      next = XEXP (link, 0);

	      /* Critical path is meaningful in block boundaries only.  */
	      if (! (*current_sched_info->contributes_to_priority) (next, insn))
		continue;

	      next_priority = insn_cost (insn, link, next) + priority (next);
	      if (next_priority > this_priority)
		this_priority = next_priority;
	    }
	}
      INSN_PRIORITY (insn) = this_priority;
      INSN_PRIORITY_KNOWN (insn) = 1;
    }

  return INSN_PRIORITY (insn);
}
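
/* A worked example (editorial): for a dependence chain A -> B -> C in
   which each arc has insn_cost 2 and C has no successors,
   priority (C) is just C's own cost, priority (B) == priority (C) + 2
   and priority (A) == priority (C) + 4: the length of the longest path
   from each insn to the end of the block, which is what lets
   rank_for_schedule prefer critical-path insns.  */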
/* Macros and functions for keeping the priority queue sorted, and
   dealing with queuing and dequeuing of instructions.  */

#define SCHED_SORT(READY, N_READY)					\
do { if ((N_READY) == 2)						\
       swap_sort (READY, N_READY);					\
     else if ((N_READY) > 2)						\
       qsort (READY, N_READY, sizeof (rtx), rank_for_schedule); }	\
while (0)
/* Returns a positive value if x is preferred; returns a negative value if
   y is preferred.  Should never return 0, since that will make the sort
   unstable.  */

static int
rank_for_schedule (const void *x, const void *y)
{
  rtx tmp = *(const rtx *) y;
  rtx tmp2 = *(const rtx *) x;
  rtx link;
  int tmp_class, tmp2_class, depend_count1, depend_count2;
  int val, priority_val, weight_val, info_val;

  /* The insn in a schedule group should be issued first.  */
  if (SCHED_GROUP_P (tmp) != SCHED_GROUP_P (tmp2))
    return SCHED_GROUP_P (tmp2) ? 1 : -1;

  /* Prefer insn with higher priority.  */
  priority_val = INSN_PRIORITY (tmp2) - INSN_PRIORITY (tmp);

  if (priority_val)
    return priority_val;

  /* Prefer an insn with smaller contribution to register pressure.  */
  if (!reload_completed
      && (weight_val = INSN_REG_WEIGHT (tmp) - INSN_REG_WEIGHT (tmp2)))
    return weight_val;

  info_val = (*current_sched_info->rank) (tmp, tmp2);
  if (info_val)
    return info_val;

  /* Compare insns based on their relation to the last scheduled insn.  */
  if (last_scheduled_insn)
    {
      /* Classify the instructions into three classes:
	 1) Data dependent on last scheduled insn.
	 2) Anti/Output dependent on last scheduled insn.
	 3) Independent of last scheduled insn, or has latency of one.
	 Choose the insn from the highest numbered class if different.  */
      link = find_insn_list (tmp, INSN_DEPEND (last_scheduled_insn));
      if (link == 0 || insn_cost (last_scheduled_insn, link, tmp) == 1)
	tmp_class = 3;
      else if (REG_NOTE_KIND (link) == 0)	/* Data dependence.  */
	tmp_class = 1;
      else
	tmp_class = 2;

      link = find_insn_list (tmp2, INSN_DEPEND (last_scheduled_insn));
      if (link == 0 || insn_cost (last_scheduled_insn, link, tmp2) == 1)
	tmp2_class = 3;
      else if (REG_NOTE_KIND (link) == 0)	/* Data dependence.  */
	tmp2_class = 1;
      else
	tmp2_class = 2;

      if ((val = tmp2_class - tmp_class))
	return val;
    }

  /* Prefer the insn which has more later insns that depend on it.
     This gives the scheduler more freedom when scheduling later
     instructions at the expense of added register pressure.  */
  depend_count1 = 0;
  for (link = INSN_DEPEND (tmp); link; link = XEXP (link, 1))
    depend_count1++;

  depend_count2 = 0;
  for (link = INSN_DEPEND (tmp2); link; link = XEXP (link, 1))
    depend_count2++;

  val = depend_count2 - depend_count1;
  if (val)
    return val;

  /* If insns are equally good, sort by INSN_LUID (original insn order),
     so that we make the sort stable.  This minimizes instruction movement,
     thus minimizing sched's effect on debugging and cross-jumping.  */
  return INSN_LUID (tmp) - INSN_LUID (tmp2);
}
/* Resort the array A in which only element at index N may be out of order.  */

HAIFA_INLINE static void
swap_sort (rtx *a, int n)
{
  rtx insn = a[n - 1];
  int i = n - 2;

  while (i >= 0 && rank_for_schedule (a + i, &insn) >= 0)
    {
      a[i + 1] = a[i];
      i -= 1;
    }
  a[i + 1] = insn;
}
/* Add INSN to the insn queue so that it can be executed at least
   N_CYCLES after the currently executing insn.  Preserve insns
   chain for debugging purposes.  */

HAIFA_INLINE static void
queue_insn (rtx insn, int n_cycles)
{
  int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
  rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]);

  insn_queue[next_q] = link;
  q_size += 1;

  if (sched_verbose >= 2)
    {
      fprintf (sched_dump, ";;\t\tReady-->Q: insn %s: ",
	       (*current_sched_info->print_insn) (insn, 0));

      fprintf (sched_dump, "queued for %d cycles.\n", n_cycles);
    }
}
/* Return a pointer to the bottom of the ready list, i.e. the insn
   with the lowest priority.  */

HAIFA_INLINE static rtx *
ready_lastpos (struct ready_list *ready)
{
  gcc_assert (ready->n_ready);
  return ready->vec + ready->first - ready->n_ready + 1;
}
/* Add an element INSN to the ready list so that it ends up with the
   lowest priority.  */

HAIFA_INLINE void
ready_add (struct ready_list *ready, rtx insn)
{
  if (ready->first == ready->n_ready)
    {
      memmove (ready->vec + ready->veclen - ready->n_ready,
	       ready_lastpos (ready),
	       ready->n_ready * sizeof (rtx));
      ready->first = ready->veclen - 1;
    }
  ready->vec[ready->first - ready->n_ready] = insn;
  ready->n_ready++;
}
/* Remove the element with the highest priority from the ready list and
   return it.  */

HAIFA_INLINE static rtx
ready_remove_first (struct ready_list *ready)
{
  rtx t;

  gcc_assert (ready->n_ready);
  t = ready->vec[ready->first--];
  ready->n_ready--;
  /* If the queue becomes empty, reset it.  */
  if (ready->n_ready == 0)
    ready->first = ready->veclen - 1;
  return t;
}
/* The following code implements multi-pass scheduling for the first
   cycle.  In other words, we try to choose the ready insn which
   permits starting the maximum number of insns on the same cycle.  */

/* Return a pointer to the element INDEX from the ready list.  INDEX
   for the insn with the highest priority is 0, and the lowest priority
   has N_READY - 1.  */

HAIFA_INLINE static rtx
ready_element (struct ready_list *ready, int index)
{
  gcc_assert (ready->n_ready && index < ready->n_ready);
  return ready->vec[ready->first - index];
}
/* Remove the element INDEX from the ready list and return it.  INDEX
   for the insn with the highest priority is 0, and the lowest priority
   has N_READY - 1.  */

HAIFA_INLINE static rtx
ready_remove (struct ready_list *ready, int index)
{
  rtx t;
  int i;

  if (index == 0)
    return ready_remove_first (ready);
  gcc_assert (ready->n_ready && index < ready->n_ready);
  t = ready->vec[ready->first - index];
  ready->n_ready--;
  for (i = index; i < ready->n_ready; i++)
    ready->vec[ready->first - i] = ready->vec[ready->first - i - 1];
  return t;
}
/* Sort the ready list READY by ascending priority, using the SCHED_SORT
   macro.  */

HAIFA_INLINE static void
ready_sort (struct ready_list *ready)
{
  rtx *first = ready_lastpos (ready);
  SCHED_SORT (first, ready->n_ready);
}
/* PREV is an insn that is ready to execute.  Adjust its priority if that
   will help shorten or lengthen register lifetimes as appropriate.  Also
   provide a hook for the target to tweak itself.  */

HAIFA_INLINE static void
adjust_priority (rtx prev)
{
  /* ??? There used to be code here to try and estimate how an insn
     affected register lifetimes, but it did it by looking at REG_DEAD
     notes, which we removed in schedule_region.  Nor did it try to
     take into account register pressure or anything useful like that.

     Revisit when we have a machine model to work with and not before.  */

  if (targetm.sched.adjust_priority)
    INSN_PRIORITY (prev) =
      targetm.sched.adjust_priority (prev, INSN_PRIORITY (prev));
}
/* Advance time by one cycle.  */
HAIFA_INLINE static void
advance_one_cycle (void)
{
  if (targetm.sched.dfa_pre_cycle_insn)
    state_transition (curr_state,
		      targetm.sched.dfa_pre_cycle_insn ());

  state_transition (curr_state, NULL);

  if (targetm.sched.dfa_post_cycle_insn)
    state_transition (curr_state,
		      targetm.sched.dfa_post_cycle_insn ());
}

/* Clock at which the previous instruction was issued.  */
static int last_clock_var;
/* INSN is the "currently executing insn".  Launch each insn which was
   waiting on INSN.  READY is the ready list which contains the insns
   that are ready to fire.  CLOCK is the current cycle.  The function
   returns necessary cycle advance after issuing the insn (it is not
   zero for insns in a schedule group).  */

static int
schedule_insn (rtx insn, struct ready_list *ready, int clock)
{
  rtx link;
  int advance = 0;
  int premature_issue = 0;

  if (sched_verbose >= 1)
    {
      char buf[2048];

      print_insn (buf, insn, 0);
      buf[40] = 0;
      fprintf (sched_dump, ";;\t%3i--> %-40s:", clock, buf);

      if (recog_memoized (insn) < 0)
	fprintf (sched_dump, "nothing");
      else
	print_reservation (sched_dump, insn);
      fputc ('\n', sched_dump);
    }

  if (INSN_TICK (insn) > clock)
    {
      /* 'insn' has been prematurely moved from the queue to the
	 ready list.  */
      premature_issue = INSN_TICK (insn) - clock;
    }

  for (link = INSN_DEPEND (insn); link != 0; link = XEXP (link, 1))
    {
      rtx next = XEXP (link, 0);
      int cost = insn_cost (insn, link, next);

      INSN_TICK (next) = MAX (INSN_TICK (next), clock + cost + premature_issue);

      if ((INSN_DEP_COUNT (next) -= 1) == 0)
	{
	  int effective_cost = INSN_TICK (next) - clock;

	  if (! (*current_sched_info->new_ready) (next))
	    continue;

	  if (sched_verbose >= 2)
	    {
	      fprintf (sched_dump, ";;\t\tdependences resolved: insn %s ",
		       (*current_sched_info->print_insn) (next, 0));

	      if (effective_cost < 1)
		fprintf (sched_dump, "into ready\n");
	      else
		fprintf (sched_dump, "into queue with cost=%d\n",
			 effective_cost);
	    }

	  /* Adjust the priority of NEXT and either put it on the ready
	     list or queue it.  */
	  adjust_priority (next);
	  if (effective_cost < 1)
	    ready_add (ready, next);
	  else
	    {
	      queue_insn (next, effective_cost);

	      if (SCHED_GROUP_P (next) && advance < effective_cost)
		advance = effective_cost;
	    }
	}
    }

  /* Annotate the instruction with issue information -- TImode
     indicates that the instruction is expected not to be able
     to issue on the same cycle as the previous insn.  A machine
     may use this information to decide how the instruction should
     be aligned.  */
  if (issue_rate > 1
      && GET_CODE (PATTERN (insn)) != USE
      && GET_CODE (PATTERN (insn)) != CLOBBER)
    {
      if (reload_completed)
	PUT_MODE (insn, clock > last_clock_var ? TImode : VOIDmode);
      last_clock_var = clock;
    }

  return advance;
}
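
/* A worked example (editorial): suppose INSN issues at clock 4 and a
   consumer NEXT depends on it through an arc with insn_cost 3, and
   NEXT was not moved prematurely.  Then INSN_TICK (next) becomes at
   least 4 + 3 = 7; if that was NEXT's last unresolved dependence, its
   effective cost is 7 - 4 = 3, so NEXT goes to the queue for three
   cycles instead of straight to the ready list.  */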
/* Functions for handling of notes.  */

/* Delete notes beginning with INSN and put them in the chain
   of notes ended by NOTE_LIST.
   Returns the insn following the notes.  */

static rtx
unlink_other_notes (rtx insn, rtx tail)
{
  rtx prev = PREV_INSN (insn);

  while (insn != tail && NOTE_P (insn))
    {
      rtx next = NEXT_INSN (insn);
      /* Delete the note from its current position.  */
      if (prev)
	NEXT_INSN (prev) = next;
      if (next)
	PREV_INSN (next) = prev;

      /* See sched_analyze to see how these are handled.  */
      if (NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_BEG
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_END
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_BASIC_BLOCK
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_BEG
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_END)
	{
	  /* Insert the note at the end of the notes list.  */
	  PREV_INSN (insn) = note_list;
	  if (note_list)
	    NEXT_INSN (note_list) = insn;
	  note_list = insn;
	}

      insn = next;
    }
  return insn;
}
/* Delete line notes beginning with INSN.  Record line-number notes so
   they can be reused.  Returns the insn following the notes.  */

static rtx
unlink_line_notes (rtx insn, rtx tail)
{
  rtx prev = PREV_INSN (insn);

  while (insn != tail && NOTE_P (insn))
    {
      rtx next = NEXT_INSN (insn);

      if (write_symbols != NO_DEBUG && NOTE_LINE_NUMBER (insn) > 0)
	{
	  /* Delete the note from its current position.  */
	  if (prev)
	    NEXT_INSN (prev) = next;
	  if (next)
	    PREV_INSN (next) = prev;

	  /* Record line-number notes so they can be reused.  */
	  LINE_NOTE (insn) = insn;
	}
      else
	prev = insn;

      insn = next;
    }
  return insn;
}
/* Return the head and tail pointers of BB.  */

void
get_block_head_tail (int b, rtx *headp, rtx *tailp)
{
  /* HEAD and TAIL delimit the basic block being scheduled.  */
  rtx head = BB_HEAD (BASIC_BLOCK (b));
  rtx tail = BB_END (BASIC_BLOCK (b));

  /* Don't include any notes or labels at the beginning of the
     basic block, or notes at the ends of basic blocks.  */
  while (head != tail)
    {
      if (NOTE_P (head))
	head = NEXT_INSN (head);
      else if (NOTE_P (tail))
	tail = PREV_INSN (tail);
      else if (LABEL_P (head))
	head = NEXT_INSN (head);
      else
	break;
    }

  *headp = head;
  *tailp = tail;
}
/* Return nonzero if there are no real insns in the range [ HEAD, TAIL ].  */

int
no_real_insns_p (rtx head, rtx tail)
{
  while (head != NEXT_INSN (tail))
    {
      if (!NOTE_P (head) && !LABEL_P (head))
	return 0;
      head = NEXT_INSN (head);
    }
  return 1;
}
/* Delete line notes from one block.  Save them so they can be later restored
   (in restore_line_notes).  HEAD and TAIL are the boundaries of the
   block in which notes should be processed.  */

void
rm_line_notes (rtx head, rtx tail)
{
  rtx next_tail;
  rtx insn;

  next_tail = NEXT_INSN (tail);
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      rtx prev;

      /* Farm out notes, and maybe save them in NOTE_LIST.
	 This is needed to keep the debugger from
	 getting completely deranged.  */
      if (NOTE_P (insn))
	{
	  prev = insn;
	  insn = unlink_line_notes (insn, next_tail);

	  gcc_assert (prev != tail && prev != head && insn != next_tail);
	}
    }
}
/* Save line number notes for each insn in block B.  HEAD and TAIL are
   the boundaries of the block in which notes should be processed.  */

void
save_line_notes (int b, rtx head, rtx tail)
{
  rtx next_tail;

  /* We must use the true line number for the first insn in the block
     that was computed and saved at the start of this pass.  We can't
     use the current line number, because scheduling of the previous
     block may have changed the current line number.  */

  rtx line = line_note_head[b];
  rtx insn;

  next_tail = NEXT_INSN (tail);

  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    if (NOTE_P (insn) && NOTE_LINE_NUMBER (insn) > 0)
      line = insn;
    else
      LINE_NOTE (insn) = line;
}
/* After a block was scheduled, insert line notes into the insns list.
   HEAD and TAIL are the boundaries of the block in which notes should
   be processed.  */

void
restore_line_notes (rtx head, rtx tail)
{
  rtx line, note, prev, new;
  int added_notes = 0;
  rtx next_tail, insn;

  head = head;	/* HEAD is otherwise unused; this silences the warning.  */
  next_tail = NEXT_INSN (tail);

  /* Determine the current line-number.  We want to know the current
     line number of the first insn of the block here, in case it is
     different from the true line number that was saved earlier.  If
     different, then we need a line number note before the first insn
     of this block.  If it happens to be the same, then we don't want to
     emit another line number note here.  */
  for (line = head; line; line = PREV_INSN (line))
    if (NOTE_P (line) && NOTE_LINE_NUMBER (line) > 0)
      break;

  /* Walk the insns keeping track of the current line-number and inserting
     the line-number notes as needed.  */
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    if (NOTE_P (insn) && NOTE_LINE_NUMBER (insn) > 0)
      line = insn;
    /* This used to emit line number notes before every non-deleted note.
       However, this confuses a debugger, because line notes not separated
       by real instructions all end up at the same address.  I can find no
       use for line number notes before other notes, so none are emitted.  */
    else if (!NOTE_P (insn)
	     && INSN_UID (insn) < old_max_uid
	     && (note = LINE_NOTE (insn)) != 0
	     && note != line
	     && (line == 0
#ifdef USE_MAPPED_LOCATION
		 || NOTE_SOURCE_LOCATION (note) != NOTE_SOURCE_LOCATION (line)
#else
		 || NOTE_LINE_NUMBER (note) != NOTE_LINE_NUMBER (line)
		 || NOTE_SOURCE_FILE (note) != NOTE_SOURCE_FILE (line)
#endif
		 ))
      {
	line = note;
	prev = PREV_INSN (insn);
	if (LINE_NOTE (note))
	  {
	    /* Re-use the original line-number note.  */
	    LINE_NOTE (note) = 0;
	    PREV_INSN (note) = prev;
	    NEXT_INSN (prev) = note;
	    PREV_INSN (insn) = note;
	    NEXT_INSN (note) = insn;
	  }
	else
	  {
	    added_notes++;
	    new = emit_note_after (NOTE_LINE_NUMBER (note), prev);
#ifndef USE_MAPPED_LOCATION
	    NOTE_SOURCE_FILE (new) = NOTE_SOURCE_FILE (note);
#endif
	  }
      }
  if (sched_verbose && added_notes)
    fprintf (sched_dump, ";; added %d line-number notes\n", added_notes);
}
/* After scheduling the function, delete redundant line notes from the
   insns list.  */

void
rm_redundant_line_notes (void)
{
  rtx line = 0;
  rtx insn = get_insns ();
  int active_insn = 0;
  int notes = 0;

  /* Walk the insns deleting redundant line-number notes.  Many of these
     are already present.  The remainder tend to occur at basic
     block boundaries.  */
  for (insn = get_last_insn (); insn; insn = PREV_INSN (insn))
    if (NOTE_P (insn) && NOTE_LINE_NUMBER (insn) > 0)
      {
	/* If there are no active insns following, INSN is redundant.  */
	if (active_insn == 0)
	  {
	    notes++;
	    SET_INSN_DELETED (insn);
	  }
	/* If the line number is unchanged, LINE is redundant.  */
	else if (line
#ifdef USE_MAPPED_LOCATION
		 && NOTE_SOURCE_LOCATION (line) == NOTE_SOURCE_LOCATION (insn)
#else
		 && NOTE_LINE_NUMBER (line) == NOTE_LINE_NUMBER (insn)
		 && NOTE_SOURCE_FILE (line) == NOTE_SOURCE_FILE (insn)
#endif
		 )
	  {
	    notes++;
	    SET_INSN_DELETED (line);
	    line = insn;
	  }
	else
	  line = insn;
	active_insn = 0;
      }
    else if (!((NOTE_P (insn)
		&& NOTE_LINE_NUMBER (insn) == NOTE_INSN_DELETED)
	       || (NONJUMP_INSN_P (insn)
		   && (GET_CODE (PATTERN (insn)) == USE
		       || GET_CODE (PATTERN (insn)) == CLOBBER))))
      active_insn++;

  if (sched_verbose && notes)
    fprintf (sched_dump, ";; deleted %d line-number notes\n", notes);
}
/* Delete notes between HEAD and TAIL and put them in the chain
   of notes ended by NOTE_LIST.  */

void
rm_other_notes (rtx head, rtx tail)
{
  rtx next_tail;
  rtx insn;

  note_list = 0;
  if (head == tail && (! INSN_P (head)))
    return;

  next_tail = NEXT_INSN (tail);
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      rtx prev;

      /* Farm out notes, and maybe save them in NOTE_LIST.
	 This is needed to keep the debugger from
	 getting completely deranged.  */
      if (NOTE_P (insn))
	{
	  prev = insn;

	  insn = unlink_other_notes (insn, next_tail);

	  gcc_assert (prev != tail && prev != head && insn != next_tail);
	}
    }
}
/* Functions for computation of registers live/usage info.  */

/* This function looks for a new register being defined.
   If the destination register is already used by the source,
   a new register is not needed.  */

static int
find_set_reg_weight (rtx x)
{
  if (GET_CODE (x) == CLOBBER
      && register_operand (SET_DEST (x), VOIDmode))
    return 1;
  if (GET_CODE (x) == SET
      && register_operand (SET_DEST (x), VOIDmode))
    {
      if (REG_P (SET_DEST (x)))
	{
	  if (!reg_mentioned_p (SET_DEST (x), SET_SRC (x)))
	    return 1;
	  else
	    return 0;
	}
      return 1;
    }
  return 0;
}
/* Calculate INSN_REG_WEIGHT for all insns of a block.  */

static void
find_insn_reg_weight (int b)
{
  rtx insn, next_tail, head, tail;

  get_block_head_tail (b, &head, &tail);
  next_tail = NEXT_INSN (tail);

  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      int reg_weight = 0;
      rtx x;

      /* Handle register life information.  */
      if (! INSN_P (insn))
	continue;

      /* Increment weight for each register born here.  */
      x = PATTERN (insn);
      reg_weight += find_set_reg_weight (x);
      if (GET_CODE (x) == PARALLEL)
	{
	  int j;
	  for (j = XVECLEN (x, 0) - 1; j >= 0; j--)
	    {
	      x = XVECEXP (PATTERN (insn), 0, j);
	      reg_weight += find_set_reg_weight (x);
	    }
	}
      /* Decrement weight for each register that dies here.  */
      for (x = REG_NOTES (insn); x; x = XEXP (x, 1))
	{
	  if (REG_NOTE_KIND (x) == REG_DEAD
	      || REG_NOTE_KIND (x) == REG_UNUSED)
	    reg_weight--;
	}

      INSN_REG_WEIGHT (insn) = reg_weight;
    }
}
/* Scheduling clock, modified in schedule_block() and queue_to_ready ().  */
static int clock_var;

/* Move insns that became ready to fire from queue to ready list.  */

static void
queue_to_ready (struct ready_list *ready)
{
  rtx insn;
  rtx link;

  q_ptr = NEXT_Q (q_ptr);

  /* Add all pending insns that can be scheduled without stalls to the
     ready list.  */
  for (link = insn_queue[q_ptr]; link; link = XEXP (link, 1))
    {
      insn = XEXP (link, 0);
      q_size -= 1;

      if (sched_verbose >= 2)
	fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
		 (*current_sched_info->print_insn) (insn, 0));

      ready_add (ready, insn);
      if (sched_verbose >= 2)
	fprintf (sched_dump, "moving to ready without stalls\n");
    }
  insn_queue[q_ptr] = 0;

  /* If there are no ready insns, stall until one is ready and add all
     of the pending insns at that point to the ready list.  */
  if (ready->n_ready == 0)
    {
      int stalls;

      for (stalls = 1; stalls <= max_insn_queue_index; stalls++)
	{
	  if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
	    {
	      for (; link; link = XEXP (link, 1))
		{
		  insn = XEXP (link, 0);
		  q_size -= 1;

		  if (sched_verbose >= 2)
		    fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
			     (*current_sched_info->print_insn) (insn, 0));

		  ready_add (ready, insn);
		  if (sched_verbose >= 2)
		    fprintf (sched_dump, "moving to ready with %d stalls\n", stalls);
		}
	      insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = 0;

	      advance_one_cycle ();

	      break;
	    }

	  advance_one_cycle ();
	}

      q_ptr = NEXT_Q_AFTER (q_ptr, stalls);
      clock_var += stalls;
    }
}
/* Used by early_queue_to_ready.  Determines whether it is "ok" to
   prematurely move INSN from the queue to the ready list.  Currently,
   if a target defines the hook 'is_costly_dependence', this function
   uses the hook to check whether there exist any dependences which are
   considered costly by the target, between INSN and other insns that
   have already been scheduled.  Dependences are checked up to Y cycles
   back, with default Y=1; the flag -fsched-stalled-insns-dep=Y allows
   controlling this value.
   (Other considerations could be taken into account instead (or in
   addition) depending on user flags and target hooks.)  */
static bool
ok_for_early_queue_removal (rtx insn)
{
  int n_cycles;
  rtx prev_insn = last_scheduled_insn;

  if (targetm.sched.is_costly_dependence)
    {
      for (n_cycles = flag_sched_stalled_insns_dep; n_cycles; n_cycles--)
	{
	  for ( ; prev_insn; prev_insn = PREV_INSN (prev_insn))
	    {
	      rtx dep_link = 0;
	      int dep_cost;

	      if (!NOTE_P (prev_insn))
		{
		  dep_link = find_insn_list (insn, INSN_DEPEND (prev_insn));
		  if (dep_link)
		    {
		      dep_cost = insn_cost (prev_insn, dep_link, insn);
		      if (targetm.sched.is_costly_dependence (prev_insn, insn,
							      dep_link, dep_cost,
							      flag_sched_stalled_insns_dep - n_cycles))
			return false;
		    }
		}

	      if (GET_MODE (prev_insn) == TImode) /* end of dispatch group */
		break;
	    }

	  if (!prev_insn)
	    break;
	  prev_insn = PREV_INSN (prev_insn);
	}
    }

  return true;
}
/* Remove insns from the queue, before they become "ready" with respect
   to FU latency considerations.  */

static int
early_queue_to_ready (state_t state, struct ready_list *ready)
{
  rtx insn;
  rtx link;
  rtx next_link;
  rtx prev_link;
  bool move_to_ready;
  int cost;
  state_t temp_state = alloca (dfa_state_size);
  int stalls;
  int insns_removed = 0;

  /*
     Flag '-fsched-stalled-insns=X' determines the aggressiveness of this
     function:

     X == 0: There is no limit on how many queued insns can be removed
             prematurely.  (flag_sched_stalled_insns = -1).

     X >= 1: Only X queued insns can be removed prematurely in each
             invocation.  (flag_sched_stalled_insns = X).

     Otherwise: Early queue removal is disabled.
             (flag_sched_stalled_insns = 0).  */

  if (! flag_sched_stalled_insns)
    return 0;
  for (stalls = 0; stalls <= max_insn_queue_index; stalls++)
    {
      if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
	{
	  if (sched_verbose > 6)
	    fprintf (sched_dump, ";; look at index %d + %d\n", q_ptr, stalls);

	  prev_link = 0;
	  while (link)
	    {
	      next_link = XEXP (link, 1);
	      insn = XEXP (link, 0);
	      if (insn && sched_verbose > 6)
		print_rtl_single (sched_dump, insn);

	      memcpy (temp_state, state, dfa_state_size);
	      if (recog_memoized (insn) < 0)
		/* Non-negative to indicate that it's not ready,
		   to avoid infinite Q->R->Q->R...  */
		cost = 0;
	      else
		cost = state_transition (temp_state, insn);

	      if (sched_verbose >= 6)
		fprintf (sched_dump, "transition cost = %d\n", cost);

	      move_to_ready = false;
	      if (cost < 0)
		{
		  move_to_ready = ok_for_early_queue_removal (insn);
		  if (move_to_ready == true)
		    {
		      /* Move from Q to R.  */
		      q_size -= 1;
		      ready_add (ready, insn);

		      if (prev_link)
			XEXP (prev_link, 1) = next_link;
		      else
			insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = next_link;

		      free_INSN_LIST_node (link);

		      if (sched_verbose >= 2)
			fprintf (sched_dump, ";;\t\tEarly Q-->Ready: insn %s\n",
				 (*current_sched_info->print_insn) (insn, 0));

		      insns_removed++;
		      if (insns_removed == flag_sched_stalled_insns)
			/* Remove no more than flag_sched_stalled_insns insns
			   from Q at a time.  */
			return insns_removed;
		    }
		}

	      if (move_to_ready == false)
		prev_link = link;

	      link = next_link;
	    } /* while link */
	} /* if link */
    } /* for stalls.. */

  return insns_removed;
}
/* Print the ready list for debugging purposes.  Callable from debugger.  */

static void
debug_ready_list (struct ready_list *ready)
{
  rtx *p;
  int i;

  if (ready->n_ready == 0)
    {
      fprintf (sched_dump, "\n");
      return;
    }

  p = ready_lastpos (ready);
  for (i = 0; i < ready->n_ready; i++)
    fprintf (sched_dump, "  %s", (*current_sched_info->print_insn) (p[i], 0));
  fprintf (sched_dump, "\n");
}
/* move_insn1: Remove INSN from insn chain, and link it after LAST insn.  */

static rtx
move_insn1 (rtx insn, rtx last)
{
  NEXT_INSN (PREV_INSN (insn)) = NEXT_INSN (insn);
  PREV_INSN (NEXT_INSN (insn)) = PREV_INSN (insn);

  NEXT_INSN (insn) = NEXT_INSN (last);
  PREV_INSN (NEXT_INSN (last)) = insn;

  NEXT_INSN (last) = insn;
  PREV_INSN (insn) = last;

  return insn;
}
/* Search INSN for REG_SAVE_NOTE note pairs for
   NOTE_INSN_{LOOP,EHREGION}_{BEG,END}; and convert them back into
   NOTEs.  The REG_SAVE_NOTE note following the first one contains the
   saved value for NOTE_BLOCK_NUMBER which is useful for
   NOTE_INSN_EH_REGION_{BEG,END} NOTEs.  LAST is the last instruction
   output by the instruction scheduler.  Return the new value of LAST.  */

static rtx
reemit_notes (rtx insn, rtx last)
{
  rtx note, retval;

  retval = last;
  for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
    {
      if (REG_NOTE_KIND (note) == REG_SAVE_NOTE)
	{
	  enum insn_note note_type = INTVAL (XEXP (note, 0));

	  last = emit_note_before (note_type, last);
	  remove_note (insn, note);
	}
    }
  return retval;
}
/* Move INSN.  Reemit notes if needed.

   Return the last insn emitted by the scheduler, which is the
   return value from the first call to reemit_notes.  */

static rtx
move_insn (rtx insn, rtx last)
{
  rtx retval = NULL;

  move_insn1 (insn, last);

  /* If this is the first call to reemit_notes, then record
     its return value.  */
  if (retval == NULL_RTX)
    retval = reemit_notes (insn, insn);
  else
    reemit_notes (insn, insn);

  SCHED_GROUP_P (insn) = 0;

  return retval;
}
/* The following structure describes an entry of the stack of choices.  */
struct choice_entry
{
  /* Ordinal number of the issued insn in the ready queue.  */
  int index;
  /* The number of remaining insns whose issue we should try.  */
  int rest;
  /* The number of issued essential insns.  */
  int n;
  /* State after issuing the insn.  */
  state_t state;
};

/* The following array is used to implement a stack of choices used in
   function max_issue.  */
static struct choice_entry *choice_stack;

/* The following variable is the number of essential insns issued on
   the current cycle.  An insn is essential if it changes the
   processor's state.  */
static int cycle_issued_insns;
/* The following variable is the maximal number of tries of issuing
   insns for the first cycle multipass insn scheduling.  We define
   this value as constant * (DFA_LOOKAHEAD ** ISSUE_RATE).  We would not
   need this constraint if all real insns (with non-negative codes)
   had reservations, because in this case the algorithm complexity is
   O(DFA_LOOKAHEAD ** ISSUE_RATE).  Unfortunately, the dfa descriptions
   might be incomplete and such insns might occur.  For such
   descriptions, the complexity of the algorithm (without the constraint)
   could reach DFA_LOOKAHEAD ** N, where N is the queue length.  */
static int max_lookahead_tries;

/* The following value is the value of the hook
   `first_cycle_multipass_dfa_lookahead' at the last call of
   `max_issue'.  */
static int cached_first_cycle_multipass_dfa_lookahead = 0;

/* The following value is the value of `issue_rate' at the last call of
   `sched_init'.  */
static int cached_issue_rate = 0;
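
/* A worked example (editorial): with issue_rate == 2 and a target
   lookahead of 4, choose_ready below computes max_lookahead_tries ==
   100 * 4 * 4 == 1600, bounding the number of state transitions
   max_issue may try while searching for the best first-cycle insn.  */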
/* The following function returns the maximal (or close to maximal) number
   of insns which can be issued on the same cycle, one of which is the
   insn with the best rank (the first insn in READY).  To do this, the
   function tries different samples of ready insns.  READY is the
   current queue `ready'.  The global array READY_TRY reflects which
   insns are already issued in this try.  INDEX will contain the index
   of the best insn in READY.  The following function is used only for
   first cycle multipass scheduling.  */
static int
max_issue (struct ready_list *ready, int *index)
{
  int n, i, all, n_ready, best, delay, tries_num;
  struct choice_entry *top;
  rtx insn;

  best = 0;
  memcpy (choice_stack->state, curr_state, dfa_state_size);
  top = choice_stack;
  top->rest = cached_first_cycle_multipass_dfa_lookahead;
  top->n = 0;
  n_ready = ready->n_ready;
  for (all = i = 0; i < n_ready; i++)
    if (!ready_try [i])
      all++;
  i = 0;
  tries_num = 0;
  for (;;)
    {
      if (top->rest == 0 || i >= n_ready)
	{
	  if (top == choice_stack)
	    break;
	  if (best < top - choice_stack && ready_try [0])
	    {
	      best = top - choice_stack;
	      *index = choice_stack [1].index;
	      if (top->n == issue_rate - cycle_issued_insns || best == all)
		break;
	    }
	  i = top->index;
	  ready_try [i] = 0;
	  top--;
	  memcpy (curr_state, top->state, dfa_state_size);
	}
      else if (!ready_try [i])
	{
	  tries_num++;
	  if (tries_num > max_lookahead_tries)
	    break;
	  insn = ready_element (ready, i);
	  delay = state_transition (curr_state, insn);
	  if (delay < 0)
	    {
	      if (state_dead_lock_p (curr_state))
		top->rest = 0;
	      else
		top->rest--;
	      n = top->n;
	      if (memcmp (top->state, curr_state, dfa_state_size) != 0)
		n++;
	      top++;
	      top->rest = cached_first_cycle_multipass_dfa_lookahead;
	      top->index = i;
	      top->n = n;
	      memcpy (top->state, curr_state, dfa_state_size);
	      ready_try [i] = 1;
	      i = -1;
	    }
	}
      i++;
    }
  while (top != choice_stack)
    {
      ready_try [top->index] = 0;
      top--;
    }
  memcpy (curr_state, choice_stack->state, dfa_state_size);
  return best;
}
/* The following function chooses an insn from READY and modifies
   *N_READY and READY.  The following function is used only for first
   cycle multipass scheduling.  */

static rtx
choose_ready (struct ready_list *ready)
{
  int lookahead = 0;

  if (targetm.sched.first_cycle_multipass_dfa_lookahead)
    lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
  if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0)))
    return ready_remove_first (ready);
  else
    {
      /* Try to choose the better insn.  */
      int index = 0, i;
      rtx insn;

      if (cached_first_cycle_multipass_dfa_lookahead != lookahead)
	{
	  cached_first_cycle_multipass_dfa_lookahead = lookahead;
	  max_lookahead_tries = 100;
	  for (i = 0; i < issue_rate; i++)
	    max_lookahead_tries *= lookahead;
	}
      insn = ready_element (ready, 0);
      if (INSN_CODE (insn) < 0)
	return ready_remove_first (ready);
      for (i = 1; i < ready->n_ready; i++)
	{
	  insn = ready_element (ready, i);
	  ready_try [i]
	    = (INSN_CODE (insn) < 0
	       || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard
		   && !targetm.sched.first_cycle_multipass_dfa_lookahead_guard (insn)));
	}
      if (max_issue (ready, &index) == 0)
	return ready_remove_first (ready);
      else
	return ready_remove (ready, index);
    }
}
1833 /* Use forward list scheduling to rearrange insns of block B in region RGN,
1834 possibly bringing insns from subsequent blocks in the same region. */
1836 void
1837 schedule_block (int b, int rgn_n_insns)
1839 struct ready_list ready;
1840 int i, first_cycle_insn_p;
1841 int can_issue_more;
1842 state_t temp_state = NULL; /* It is used for multipass scheduling. */
1843 int sort_p, advance, start_clock_var;
1845 /* Head/tail info for this block. */
1846 rtx prev_head = current_sched_info->prev_head;
1847 rtx next_tail = current_sched_info->next_tail;
1848 rtx head = NEXT_INSN (prev_head);
1849 rtx tail = PREV_INSN (next_tail);
1851 /* We used to have code to avoid getting parameters moved from hard
1852 argument registers into pseudos.
1854 However, it was removed when it proved to be of marginal benefit
1855 and caused problems because schedule_block and compute_forward_dependences
1856 had different notions of what the "head" insn was. */
1858 gcc_assert (head != tail || INSN_P (head));
1860 /* Debug info. */
1861 if (sched_verbose)
1863 fprintf (sched_dump,
1864 ";; ======================================================\n");
1865 fprintf (sched_dump,
1866 ";; -- basic block %d from %d to %d -- %s reload\n",
1867 b, INSN_UID (head), INSN_UID (tail),
1868 (reload_completed ? "after" : "before"));
1869 fprintf (sched_dump,
1870 ";; ======================================================\n");
1871 fprintf (sched_dump, "\n");
1874 state_reset (curr_state);
1876 /* Allocate the ready list. */
1877 ready.veclen = rgn_n_insns + 1 + issue_rate;
1878 ready.first = ready.veclen - 1;
1879 ready.vec = XNEWVEC (rtx, ready.veclen);
1880 ready.n_ready = 0;
1882 /* It is used for first cycle multipass scheduling. */
1883 temp_state = alloca (dfa_state_size);
1884 ready_try = XCNEWVEC (char, rgn_n_insns + 1);
1885 choice_stack = XNEWVEC (struct choice_entry, rgn_n_insns + 1);
1886 for (i = 0; i <= rgn_n_insns; i++)
1887 choice_stack[i].state = xmalloc (dfa_state_size);
  (*current_sched_info->init_ready_list) (&ready);

  if (targetm.sched.md_init)
    targetm.sched.md_init (sched_dump, sched_verbose, ready.veclen);

  /* We start inserting insns after PREV_HEAD.  */
  last_scheduled_insn = prev_head;

  /* Initialize INSN_QUEUE.  Q_SIZE is the total number of insns in the
     queue.  */
  q_ptr = 0;
  q_size = 0;

  insn_queue = alloca ((max_insn_queue_index + 1) * sizeof (rtx));
  memset (insn_queue, 0, (max_insn_queue_index + 1) * sizeof (rtx));
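  /* The queue is a circular buffer: the slot at (q_ptr + N) modulo its
     length holds the insns that become ready N clock ticks from now, so
     advancing the clock amounts to stepping q_ptr.  */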
  last_clock_var = -1;

  /* Start just before the beginning of time.  */
  clock_var = -1;
  advance = 0;

  sort_p = TRUE;
  /* Loop until all the insns in BB are scheduled.  */
  while ((*current_sched_info->schedule_more_p) ())
    {
      do
        {
          start_clock_var = clock_var;

          clock_var++;

          advance_one_cycle ();

          /* Add to the ready list all pending insns that can be issued now.
             If there are no ready insns, increment clock until one
             is ready and add all pending insns at that point to the ready
             list.  */
          queue_to_ready (&ready);

          gcc_assert (ready.n_ready);

          if (sched_verbose >= 2)
            {
              fprintf (sched_dump, ";;\t\tReady list after queue_to_ready: ");
              debug_ready_list (&ready);
            }
          advance -= clock_var - start_clock_var;
        }
      while (advance > 0);
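      /* ADVANCE is nonzero when the previous pass requested stall cycles
         (see the queue_insn and asm handling below); the do-while above
         burns through them, draining newly ready insns from the queue
         into the ready list as each cycle passes.  */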
      if (sort_p)
        {
          /* Sort the ready list based on priority.  */
          ready_sort (&ready);

          if (sched_verbose >= 2)
            {
              fprintf (sched_dump, ";;\t\tReady list after ready_sort: ");
              debug_ready_list (&ready);
            }
        }

      /* Allow the target to reorder the list, typically for
         better instruction bundling.  */
      if (sort_p && targetm.sched.reorder
          && (ready.n_ready == 0
              || !SCHED_GROUP_P (ready_element (&ready, 0))))
        can_issue_more =
          targetm.sched.reorder (sched_dump, sched_verbose,
                                 ready_lastpos (&ready),
                                 &ready.n_ready, clock_var);
      else
        can_issue_more = issue_rate;
      first_cycle_insn_p = 1;
      cycle_issued_insns = 0;
      for (;;)
        {
          rtx insn;
          int cost;
          bool asm_p = false;

          if (sched_verbose >= 2)
            {
              fprintf (sched_dump, ";;\tReady list (t =%3d): ",
                       clock_var);
              debug_ready_list (&ready);
            }
          if (ready.n_ready == 0
              && can_issue_more
              && reload_completed)
            {
              /* Allow scheduling insns directly from the queue in case
                 there's nothing better to do (ready list is empty) but
                 there are still vacant dispatch slots in the current cycle.  */
              if (sched_verbose >= 6)
                fprintf (sched_dump, ";;\t\tSecond chance\n");
              memcpy (temp_state, curr_state, dfa_state_size);
              if (early_queue_to_ready (temp_state, &ready))
                ready_sort (&ready);
            }

          if (ready.n_ready == 0 || !can_issue_more
              || state_dead_lock_p (curr_state)
              || !(*current_sched_info->schedule_more_p) ())
            break;
          /* Select and remove the insn from the ready list.  */
          if (sort_p)
            insn = choose_ready (&ready);
          else
            insn = ready_remove_first (&ready);

          if (targetm.sched.dfa_new_cycle
              && targetm.sched.dfa_new_cycle (sched_dump, sched_verbose,
                                              insn, last_clock_var,
                                              clock_var, &sort_p))
            {
              ready_add (&ready, insn);
              break;
            }
          sort_p = TRUE;
          memcpy (temp_state, curr_state, dfa_state_size);
          if (recog_memoized (insn) < 0)
            {
              asm_p = (GET_CODE (PATTERN (insn)) == ASM_INPUT
                       || asm_noperands (PATTERN (insn)) >= 0);
              if (!first_cycle_insn_p && asm_p)
                /* This is an asm insn that we tried to issue on a cycle
                   other than the first.  Issue it on the next cycle.  */
                cost = 1;
              else
                /* A USE insn, or something else we don't need to
                   understand.  We can't pass these directly to
                   state_transition because it will trigger a
                   fatal error for unrecognizable insns.  */
                cost = 0;
            }
          else
            {
              cost = state_transition (temp_state, insn);
              if (cost < 0)
                cost = 0;
              else if (cost == 0)
                cost = 1;
            }
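          /* state_transition returns a negative value when the insn can be
             issued in the current DFA state, and a nonnegative number of
             cycles to wait otherwise; after the normalization above, COST
             is 0 for "issue now" and >= 1 for "queue and retry later".  */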
          if (cost >= 1)
            {
              queue_insn (insn, cost);
              if (SCHED_GROUP_P (insn))
                {
                  advance = cost;
                  break;
                }

              continue;
            }

          if (! (*current_sched_info->can_schedule_ready_p) (insn))
            goto next;
          last_scheduled_insn = move_insn (insn, last_scheduled_insn);

          if (memcmp (curr_state, temp_state, dfa_state_size) != 0)
            cycle_issued_insns++;
          memcpy (curr_state, temp_state, dfa_state_size);

          if (targetm.sched.variable_issue)
            can_issue_more =
              targetm.sched.variable_issue (sched_dump, sched_verbose,
                                            insn, can_issue_more);
          /* A naked CLOBBER or USE generates no instruction, so do
             not count them against the issue rate.  */
          else if (GET_CODE (PATTERN (insn)) != USE
                   && GET_CODE (PATTERN (insn)) != CLOBBER)
            can_issue_more--;

          advance = schedule_insn (insn, &ready, clock_var);

          /* After issuing an asm insn we should start a new cycle.  */
          if (advance == 0 && asm_p)
            advance = 1;
          if (advance != 0)
            break;
        next:
          first_cycle_insn_p = 0;

          /* Sort the ready list based on priority.  This must be
             redone here, as schedule_insn may have readied additional
             insns that will not be sorted correctly.  */
          if (ready.n_ready > 0)
            ready_sort (&ready);

          if (targetm.sched.reorder2
              && (ready.n_ready == 0
                  || !SCHED_GROUP_P (ready_element (&ready, 0))))
            {
              can_issue_more =
                targetm.sched.reorder2 (sched_dump, sched_verbose,
                                        ready.n_ready
                                        ? ready_lastpos (&ready) : NULL,
                                        &ready.n_ready, clock_var);
            }
        }
    }
  if (targetm.sched.md_finish)
    targetm.sched.md_finish (sched_dump, sched_verbose);

  /* Debug info.  */
  if (sched_verbose)
    {
      fprintf (sched_dump, ";;\tReady list (final): ");
      debug_ready_list (&ready);
    }

  /* Sanity check -- queue must be empty now.  Meaningless if region has
     multiple bbs.  */
  gcc_assert (!current_sched_info->queue_must_finish_empty || !q_size);
  /* Update head/tail boundaries.  */
  head = NEXT_INSN (prev_head);
  tail = last_scheduled_insn;

  if (!reload_completed)
    {
      rtx insn, link, next;

      /* INSN_TICK (the minimum clock tick at which an insn becomes
         ready) may not be correct for insns in the subsequent blocks
         of the region.  We should either use a correct value of
         `clock_var' or modify INSN_TICK.  It is better to keep
         clock_var equal to 0 at the start of a basic block, so we
         modify INSN_TICK here.  */
      for (insn = head; insn != tail; insn = NEXT_INSN (insn))
        if (INSN_P (insn))
          {
            for (link = INSN_DEPEND (insn); link != 0; link = XEXP (link, 1))
              {
                next = XEXP (link, 0);
                INSN_TICK (next) -= clock_var;
              }
          }
    }
  /* Restore-other-notes: NOTE_LIST is the end of a chain of notes
     previously found among the insns.  Insert them at the beginning
     of the insns.  */
  if (note_list != 0)
    {
      rtx note_head = note_list;

      while (PREV_INSN (note_head))
        note_head = PREV_INSN (note_head);

      PREV_INSN (note_head) = PREV_INSN (head);
      NEXT_INSN (PREV_INSN (head)) = note_head;
      PREV_INSN (head) = note_list;
      NEXT_INSN (note_list) = head;
      head = note_head;
    }
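  /* To illustrate the splice above: if the saved notes are n1 <-> n2
     (so note_list == n2 and note_head == n1) and the block begins with
     p <-> head, the four pointer updates produce p <-> n1 <-> n2 <-> head,
     and n1 becomes the new head.  */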
  /* Debugging.  */
  if (sched_verbose)
    {
      fprintf (sched_dump, ";; total time = %d\n;; new head = %d\n",
               clock_var, INSN_UID (head));
      fprintf (sched_dump, ";; new tail = %d\n\n",
               INSN_UID (tail));
    }

  current_sched_info->head = head;
  current_sched_info->tail = tail;

  free (ready.vec);

  free (ready_try);
  for (i = 0; i <= rgn_n_insns; i++)
    free (choice_stack [i].state);
  free (choice_stack);
}
/* Set_priorities: compute the priority of each insn in the block.  */

int
set_priorities (rtx head, rtx tail)
{
  rtx insn;
  int n_insn;
  int sched_max_insns_priority =
    current_sched_info->sched_max_insns_priority;
  rtx prev_head;

  prev_head = PREV_INSN (head);

  if (head == tail && (! INSN_P (head)))
    return 0;
  n_insn = 0;
  sched_max_insns_priority = 0;
  for (insn = tail; insn != prev_head; insn = PREV_INSN (insn))
    {
      if (NOTE_P (insn))
        continue;

      n_insn++;
      (void) priority (insn);

      if (INSN_PRIORITY_KNOWN (insn))
        sched_max_insns_priority =
          MAX (sched_max_insns_priority, INSN_PRIORITY (insn));
    }
  sched_max_insns_priority += 1;
  current_sched_info->sched_max_insns_priority =
    sched_max_insns_priority;

  return n_insn;
}
/* Initialize some global state for the scheduler.  */

void
sched_init (void)
{
  int luid;
  basic_block b;
  rtx insn;
  int i;

  /* Disable speculative loads when cc0 is defined.  */
#ifdef HAVE_cc0
  flag_schedule_speculative_load = 0;
#endif

  /* Set dump and sched_verbose for the desired debugging output.  If no
     dump file was specified but -fsched-verbose=N was given (any N),
     print to stderr.  For -fsched-verbose=N with N >= 10, print
     everything to stderr.  */
  sched_verbose = sched_verbose_param;
  if (sched_verbose_param == 0 && dump_file)
    sched_verbose = 1;
  sched_dump = ((sched_verbose_param >= 10 || !dump_file)
                ? stderr : dump_file);
  /* Initialize issue_rate.  */
  if (targetm.sched.issue_rate)
    issue_rate = targetm.sched.issue_rate ();
  else
    issue_rate = 1;

  if (cached_issue_rate != issue_rate)
    {
      cached_issue_rate = issue_rate;
      /* To invalidate max_lookahead_tries:  */
      cached_first_cycle_multipass_dfa_lookahead = 0;
    }

  /* We use LUID 0 for the fake insn (UID 0) which holds dependencies for
     pseudos which do not cross calls.  */
  old_max_uid = get_max_uid () + 1;

  h_i_d = XCNEWVEC (struct haifa_insn_data, old_max_uid);

  for (i = 0; i < old_max_uid; i++)
    h_i_d [i].cost = -1;
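  /* A cost of -1 marks an insn's cost as not yet computed; it is filled
     in lazily the first time the insn's latency is needed.  */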
  if (targetm.sched.init_dfa_pre_cycle_insn)
    targetm.sched.init_dfa_pre_cycle_insn ();

  if (targetm.sched.init_dfa_post_cycle_insn)
    targetm.sched.init_dfa_post_cycle_insn ();

  dfa_start ();
  dfa_state_size = state_size ();
  curr_state = xmalloc (dfa_state_size);

  h_i_d[0].luid = 0;
  luid = 1;
  FOR_EACH_BB (b)
    for (insn = BB_HEAD (b); ; insn = NEXT_INSN (insn))
      {
        INSN_LUID (insn) = luid;

        /* Increment the next luid, unless this is a note.  We don't
           really need separate IDs for notes and we don't want to
           schedule differently depending on whether or not there are
           line-number notes, i.e., depending on whether or not we're
           generating debugging information.  */
        if (!NOTE_P (insn))
          ++luid;

        if (insn == BB_END (b))
          break;
      }
  init_dependency_caches (luid);

  init_alias_analysis ();

  if (write_symbols != NO_DEBUG)
    {
      rtx line;

      line_note_head = XCNEWVEC (rtx, last_basic_block);

      /* Save-line-note-head:
         Determine the line-number at the start of each basic block.
         This must be computed and saved now, because after a basic block's
         predecessor has been scheduled, it is impossible to accurately
         determine the correct line number for the first insn of the block.  */

      FOR_EACH_BB (b)
        {
          for (line = BB_HEAD (b); line; line = PREV_INSN (line))
            if (NOTE_P (line) && NOTE_LINE_NUMBER (line) > 0)
              {
                line_note_head[b->index] = line;
                break;
              }

          /* Do a forward search as well, since we won't get to see the first
             notes in a basic block.  */
          for (line = BB_HEAD (b); line; line = NEXT_INSN (line))
            {
              if (INSN_P (line))
                break;
              if (NOTE_P (line) && NOTE_LINE_NUMBER (line) > 0)
                line_note_head[b->index] = line;
            }
        }
    }
  /* ??? Add a NOTE after the last insn of the last basic block.  It is not
     known why this is done.  */

  insn = BB_END (EXIT_BLOCK_PTR->prev_bb);
  if (NEXT_INSN (insn) == 0
      || (!NOTE_P (insn)
          && !LABEL_P (insn)
          /* Don't emit a NOTE if it would end up before a BARRIER.  */
          && !BARRIER_P (NEXT_INSN (insn))))
    {
      emit_note_after (NOTE_INSN_DELETED, BB_END (EXIT_BLOCK_PTR->prev_bb));
      /* Make the note appear outside the last BB.  */
      BB_END (EXIT_BLOCK_PTR->prev_bb)
        = PREV_INSN (BB_END (EXIT_BLOCK_PTR->prev_bb));
    }
  /* Compute INSN_REG_WEIGHT for all blocks.  We must do this before
     removing death notes.  */
  FOR_EACH_BB_REVERSE (b)
    find_insn_reg_weight (b->index);

  if (targetm.sched.md_init_global)
    targetm.sched.md_init_global (sched_dump, sched_verbose, old_max_uid);
}
/* Free global data used during insn scheduling.  */

void
sched_finish (void)
{
  free (h_i_d);
  free (curr_state);
  dfa_finish ();
  free_dependency_caches ();
  end_alias_analysis ();
  if (write_symbols != NO_DEBUG)
    free (line_note_head);

  if (targetm.sched.md_finish_global)
    targetm.sched.md_finish_global (sched_dump, sched_verbose);

  current_sched_info = NULL;
}

#endif /* INSN_SCHEDULING */