[official-gcc.git] / gcc / haifa-sched.c
/* Instruction scheduling pass.
   Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998,
   1999, 2000, 2001, 2002, 2003, 2004 Free Software Foundation, Inc.
   Contributed by Michael Tiemann (tiemann@cygnus.com) Enhanced by,
   and currently maintained by, Jim Wilson (wilson@cygnus.com)

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING.  If not, write to the Free
Software Foundation, 59 Temple Place - Suite 330, Boston, MA
02111-1307, USA.  */
/* Instruction scheduling pass.  This file, along with sched-deps.c,
   contains the generic parts.  The entry point for the normal
   instruction scheduling pass is found in sched-rgn.c.

   We compute insn priorities based on data dependencies.  Flow
   analysis only creates a fraction of the data-dependencies we must
   observe: namely, only those dependencies which the combiner can be
   expected to use.  For this pass, we must therefore create the
   remaining dependencies we need to observe: register dependencies,
   memory dependencies, dependencies to keep function calls in order,
   and the dependence between a conditional branch and the setting of
   condition codes are all dealt with here.

   The scheduler first traverses the data flow graph, starting with
   the last instruction, and proceeding to the first, assigning values
   to insn_priority as it goes.  This sorts the instructions
   topologically by data dependence.

   Once priorities have been established, we order the insns using
   list scheduling.  This works as follows: starting with a list of
   all the ready insns, sorted according to priority number, we
   schedule the insn from the end of the list by placing its
   predecessors in the list according to their priority order.  We
   consider this insn scheduled by setting the pointer to the "end" of
   the list to point to the previous insn.  When an insn has no
   predecessors, we either queue it until sufficient time has elapsed
   or add it to the ready list.  As the instructions are scheduled or
   when stalls are introduced, the queue advances and dumps insns into
   the ready list.  When all insns down to the lowest priority have
   been scheduled, the critical path of the basic block has been made
   as short as possible.  The remaining insns are then scheduled in
   remaining slots.

   The following list shows the order in which we want to break ties
   among insns in the ready list:

   1.  choose insn with the longest path to end of bb, ties
   broken by
   2.  choose insn with least contribution to register pressure,
   ties broken by
   3.  prefer in-block motion over interblock motion, ties broken by
   4.  prefer useful motion over speculative motion, ties broken by
   5.  choose insn with largest control flow probability, ties
   broken by
   6.  choose insn with the least dependences upon the previously
   scheduled insn, ties broken by
   7.  choose the insn which has the most insns dependent on it,
   ties broken finally by
   8.  choose insn with lowest UID.

   Memory references complicate matters.  Only if we can be certain
   that memory references are not part of the data dependency graph
   (via true, anti, or output dependence), can we move operations past
   memory references.  To first approximation, reads can be done
   independently, while writes introduce dependencies.  Better
   approximations will yield fewer dependencies.

   Before reload, an extended analysis of interblock data dependences
   is required for interblock scheduling.  This is performed in
   compute_block_backward_dependences ().

   Dependencies set up by memory references are treated in exactly the
   same way as other dependencies, by using LOG_LINKS backward
   dependences.  LOG_LINKS are translated into INSN_DEPEND forward
   dependences for the purpose of forward list scheduling.

   Having optimized the critical path, we may have also unduly
   extended the lifetimes of some registers.  If an operation requires
   that constants be loaded into registers, it is certainly desirable
   to load those constants as early as necessary, but no earlier.
   I.e., it will not do to load up a bunch of registers at the
   beginning of a basic block only to use them at the end, if they
   could be loaded later, since this may result in excessive register
   utilization.

   Note that since branches are never in basic blocks, but only end
   basic blocks, this pass will not move branches.  But that is ok,
   since we can use GNU's delayed branch scheduling pass to take care
   of this case.

   Also note that no further optimizations based on algebraic
   identities are performed, so this pass would be a good one to
   perform instruction splitting, such as breaking up a multiply
   instruction into shifts and adds where that is profitable.

   Given the memory aliasing analysis that this pass should perform,
   it should be possible to remove redundant stores to memory, and to
   load values from registers instead of hitting memory.

   Before reload, speculative insns are moved only if a 'proof' exists
   that no exception will be caused by this, and if no live registers
   exist that inhibit the motion (live register constraints are not
   represented by data dependence edges).

   This pass must update information that subsequent passes expect to
   be correct.  Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
   reg_n_calls_crossed, and reg_live_length.  Also, BB_HEAD, BB_END.

   The information in the line number notes is carefully retained by
   this pass.  Notes that refer to the starting and ending of
   exception regions are also carefully retained by this pass.  All
   other NOTE insns are grouped in their same relative order at the
   beginning of basic blocks and regions that have been scheduled.  */
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
#include "toplev.h"
#include "rtl.h"
#include "tm_p.h"
#include "hard-reg-set.h"
#include "basic-block.h"
#include "regs.h"
#include "function.h"
#include "flags.h"
#include "insn-config.h"
#include "insn-attr.h"
#include "except.h"
#include "recog.h"
#include "sched-int.h"
#include "target.h"
#ifdef INSN_SCHEDULING

/* issue_rate is the number of insns that can be scheduled in the same
   machine cycle.  It can be defined in the config/mach/mach.h file,
   otherwise we set it to 1.  */

static int issue_rate;
/* sched-verbose controls the amount of debugging output the
   scheduler prints.  It is controlled by -fsched-verbose=N:
   N>0 and no -DSR : the output is directed to stderr.
   N>=10 will direct the printouts to stderr (regardless of -dSR).
   N=1: same as -dSR.
   N=2: bb's probabilities, detailed ready list info, unit/insn info.
   N=3: rtl at abort point, control-flow, regions info.
   N=5: dependences info.  */

static int sched_verbose_param = 0;
int sched_verbose = 0;

/* Debugging file.  All printouts are sent to dump, which is always set,
   either to stderr, or to the dump listing file (-dRS).  */
FILE *sched_dump = 0;
/* Highest uid before scheduling.  */
static int old_max_uid;

/* fix_sched_param() is called from toplev.c upon detection
   of the -fsched-verbose=N option.  */

void
fix_sched_param (const char *param, const char *val)
{
  if (!strcmp (param, "verbose"))
    sched_verbose_param = atoi (val);
  else
    warning ("fix_sched_param: unknown param: %s", param);
}
struct haifa_insn_data *h_i_d;

#define LINE_NOTE(INSN)  (h_i_d[INSN_UID (INSN)].line_note)
#define INSN_TICK(INSN)  (h_i_d[INSN_UID (INSN)].tick)

/* Vector indexed by basic block number giving the starting line-number
   for each basic block.  */
static rtx *line_note_head;

/* List of important notes we must keep around.  This is a pointer to the
   last element in the list.  */
static rtx note_list;
/* Queues, etc.  */

/* An instruction is ready to be scheduled when all insns preceding it
   have already been scheduled.  It is important to ensure that all
   insns which use its result will not be executed until its result
   has been computed.  An insn is maintained in one of four structures:

   (P) the "Pending" set of insns which cannot be scheduled until
   their dependencies have been satisfied.
   (Q) the "Queued" set of insns that can be scheduled when sufficient
   time has passed.
   (R) the "Ready" list of unscheduled, uncommitted insns.
   (S) the "Scheduled" list of insns.

   Initially, all insns are either "Pending" or "Ready" depending on
   whether their dependencies are satisfied.

   Insns move from the "Ready" list to the "Scheduled" list as they
   are committed to the schedule.  As this occurs, the insns in the
   "Pending" list have their dependencies satisfied and move to either
   the "Ready" list or the "Queued" set depending on whether
   sufficient time has passed to make them ready.  As time passes,
   insns move from the "Queued" set to the "Ready" list.

   The "Pending" list (P) consists of the insns in the INSN_DEPEND
   lists of the unscheduled insns, i.e., of those that are ready,
   queued, or pending.
   The "Queued" set (Q) is implemented by the variable `insn_queue'.
   The "Ready" list (R) is implemented by the variables `ready' and
   `n_ready'.
   The "Scheduled" list (S) is the new insn chain built by this pass.

   The transition (R->S) is implemented in the scheduling loop in
   `schedule_block' when the best insn to schedule is chosen.
   The transitions (P->R and P->Q) are implemented in `schedule_insn' as
   insns move from the ready list to the scheduled list.
   The transition (Q->R) is implemented in `queue_to_ready' as time
   passes or stalls are introduced.  */
/* Implement a circular buffer to delay instructions until sufficient
   time has passed.  For the new pipeline description interface,
   MAX_INSN_QUEUE_INDEX is a power of two minus one which is larger
   than the maximal time of instruction execution computed by
   genattr.c on the basis of the maximal time of functional unit
   reservations and of getting a result.  This is the longest time an
   insn may be queued.  */

static rtx *insn_queue;
static int q_ptr = 0;
static int q_size = 0;
#define NEXT_Q(X) (((X)+1) & max_insn_queue_index)
#define NEXT_Q_AFTER(X, C) (((X)+C) & max_insn_queue_index)
/* The following variable holds the state reflecting all current and
   future reservations of the processor units.  */
state_t curr_state;

/* The following variable is the size of the memory representing all
   current and future reservations of the processor units.  */
static size_t dfa_state_size;

/* The following array is used to find the best insn from ready when
   the automaton pipeline interface is used.  */
static char *ready_try;
/* Describe the ready list of the scheduler.
   VEC holds space enough for all insns in the current region.  VECLEN
   says how many exactly.
   FIRST is the index of the element with the highest priority; i.e. the
   last one in the ready list, since elements are ordered by ascending
   priority.
   N_READY determines how many insns are on the ready list.  */

struct ready_list
{
  rtx *vec;
  int veclen;
  int first;
  int n_ready;
};
static int may_trap_exp (rtx, int);

/* Nonzero iff the address is comprised from at most 1 register.  */
#define CONST_BASED_ADDRESS_P(x)			\
  (REG_P (x)						\
   || ((GET_CODE (x) == PLUS || GET_CODE (x) == MINUS	\
	|| (GET_CODE (x) == LO_SUM))			\
       && (CONSTANT_P (XEXP (x, 0))			\
	   || CONSTANT_P (XEXP (x, 1)))))
/* Returns a class that insn with GET_DEST(insn)=x may belong to,
   as found by analyzing insn's expression.  */

static int
may_trap_exp (rtx x, int is_store)
{
  enum rtx_code code;

  if (x == 0)
    return TRAP_FREE;
  code = GET_CODE (x);
  if (is_store)
    {
      if (code == MEM && may_trap_p (x))
	return TRAP_RISKY;
      else
	return TRAP_FREE;
    }
  if (code == MEM)
    {
      /* The insn uses memory: a volatile load.  */
      if (MEM_VOLATILE_P (x))
	return IRISKY;
      /* An exception-free load.  */
      if (!may_trap_p (x))
	return IFREE;
      /* A load with 1 base register, to be further checked.  */
      if (CONST_BASED_ADDRESS_P (XEXP (x, 0)))
	return PFREE_CANDIDATE;
      /* No info on the load, to be further checked.  */
      return PRISKY_CANDIDATE;
    }
  else
    {
      const char *fmt;
      int i, insn_class = TRAP_FREE;

      /* Neither store nor load, check if it may cause a trap.  */
      if (may_trap_p (x))
	return TRAP_RISKY;
      /* Recursive step: walk the insn...  */
      fmt = GET_RTX_FORMAT (code);
      for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
	{
	  if (fmt[i] == 'e')
	    {
	      int tmp_class = may_trap_exp (XEXP (x, i), is_store);
	      insn_class = WORST_CLASS (insn_class, tmp_class);
	    }
	  else if (fmt[i] == 'E')
	    {
	      int j;
	      for (j = 0; j < XVECLEN (x, i); j++)
		{
		  int tmp_class = may_trap_exp (XVECEXP (x, i, j), is_store);
		  insn_class = WORST_CLASS (insn_class, tmp_class);
		  if (insn_class == TRAP_RISKY || insn_class == IRISKY)
		    break;
		}
	    }
	  if (insn_class == TRAP_RISKY || insn_class == IRISKY)
	    break;
	}
      return insn_class;
    }
}
/* Classifies insn for the purpose of verifying that it can be
   moved speculatively, by examining its pattern, returning:
   TRAP_RISKY: store, or risky non-load insn (e.g. division by variable).
   TRAP_FREE: non-load insn.
   IFREE: load from a globally safe location.
   IRISKY: volatile load.
   PFREE_CANDIDATE, PRISKY_CANDIDATE: a load that needs to be checked for
   being either PFREE or PRISKY.  */

int
haifa_classify_insn (rtx insn)
{
  rtx pat = PATTERN (insn);
  int tmp_class = TRAP_FREE;
  int insn_class = TRAP_FREE;
  enum rtx_code code;

  if (GET_CODE (pat) == PARALLEL)
    {
      int i, len = XVECLEN (pat, 0);

      for (i = len - 1; i >= 0; i--)
	{
	  code = GET_CODE (XVECEXP (pat, 0, i));
	  switch (code)
	    {
	    case CLOBBER:
	      /* Test if it is a 'store'.  */
	      tmp_class = may_trap_exp (XEXP (XVECEXP (pat, 0, i), 0), 1);
	      break;
	    case SET:
	      /* Test if it is a store.  */
	      tmp_class = may_trap_exp (SET_DEST (XVECEXP (pat, 0, i)), 1);
	      if (tmp_class == TRAP_RISKY)
		break;
	      /* Test if it is a load.  */
	      tmp_class
		= WORST_CLASS (tmp_class,
			       may_trap_exp (SET_SRC (XVECEXP (pat, 0, i)),
					     0));
	      break;
	    case COND_EXEC:
	    case TRAP_IF:
	      tmp_class = TRAP_RISKY;
	      break;
	    default:
	      ;
	    }
	  insn_class = WORST_CLASS (insn_class, tmp_class);
	  if (insn_class == TRAP_RISKY || insn_class == IRISKY)
	    break;
	}
    }
  else
    {
      code = GET_CODE (pat);
      switch (code)
	{
	case CLOBBER:
	  /* Test if it is a 'store'.  */
	  tmp_class = may_trap_exp (XEXP (pat, 0), 1);
	  break;
	case SET:
	  /* Test if it is a store.  */
	  tmp_class = may_trap_exp (SET_DEST (pat), 1);
	  if (tmp_class == TRAP_RISKY)
	    break;
	  /* Test if it is a load.  */
	  tmp_class =
	    WORST_CLASS (tmp_class,
			 may_trap_exp (SET_SRC (pat), 0));
	  break;
	case COND_EXEC:
	case TRAP_IF:
	  tmp_class = TRAP_RISKY;
	  break;
	default:;
	}
      insn_class = tmp_class;
    }

  return insn_class;
}
/* Forward declarations.  */

static int priority (rtx);
static int rank_for_schedule (const void *, const void *);
static void swap_sort (rtx *, int);
static void queue_insn (rtx, int);
static int schedule_insn (rtx, struct ready_list *, int);
static int find_set_reg_weight (rtx);
static void find_insn_reg_weight (int);
static void adjust_priority (rtx);
static void advance_one_cycle (void);
/* Notes handling mechanism:
   =========================
   Generally, NOTES are saved before scheduling and restored after scheduling.
   The scheduler distinguishes between three types of notes:

   (1) LINE_NUMBER notes, generated and used for debugging.  Here,
   before scheduling a region, a pointer to the LINE_NUMBER note is
   added to the insn following it (in save_line_notes()), and the note
   is removed (in rm_line_notes() and unlink_line_notes()).  After
   scheduling the region, this pointer is used for regeneration of
   the LINE_NUMBER note (in restore_line_notes()).

   (2) LOOP_BEGIN, LOOP_END, SETJMP, EHREGION_BEG, EHREGION_END notes:
   Before scheduling a region, a pointer to the note is added to the insn
   that follows or precedes it.  (This happens as part of the data dependence
   computation).  After scheduling an insn, the pointer contained in it is
   used for regenerating the corresponding note (in reemit_notes).

   (3) All other notes (e.g. INSN_DELETED):  Before scheduling a block,
   these notes are put in a list (in rm_other_notes() and
   unlink_other_notes ()).  After scheduling the block, these notes are
   inserted at the beginning of the block (in schedule_block()).  */
static rtx unlink_other_notes (rtx, rtx);
static rtx unlink_line_notes (rtx, rtx);
static rtx reemit_notes (rtx, rtx);

static rtx *ready_lastpos (struct ready_list *);
static void ready_sort (struct ready_list *);
static rtx ready_remove_first (struct ready_list *);

static void queue_to_ready (struct ready_list *);
static int early_queue_to_ready (state_t, struct ready_list *);

static void debug_ready_list (struct ready_list *);

static rtx move_insn1 (rtx, rtx);
static rtx move_insn (rtx, rtx);

/* The following functions are used to implement multi-pass scheduling
   on the first cycle.  */
static rtx ready_element (struct ready_list *, int);
static rtx ready_remove (struct ready_list *, int);
static int max_issue (struct ready_list *, int *);

static rtx choose_ready (struct ready_list *);
#endif /* INSN_SCHEDULING */

/* Point to state used for the current scheduling pass.  */
struct sched_info *current_sched_info;

#ifndef INSN_SCHEDULING
void
schedule_insns (FILE *dump_file ATTRIBUTE_UNUSED)
{
}
#else

/* Pointer to the last instruction scheduled.  Used by rank_for_schedule,
   so that insns independent of the last scheduled insn will be preferred
   over dependent instructions.  */

static rtx last_scheduled_insn;
/* Compute cost of executing INSN given the dependence LINK on the insn USED.
   This is the number of cycles between instruction issue and
   instruction results.  */

HAIFA_INLINE int
insn_cost (rtx insn, rtx link, rtx used)
{
  int cost = INSN_COST (insn);

  if (cost < 0)
    {
      /* A USE insn, or something else we don't need to
	 understand.  We can't pass these directly to
	 result_ready_cost or insn_default_latency because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (recog_memoized (insn) < 0)
	{
	  INSN_COST (insn) = 0;
	  return 0;
	}
      else
	{
	  cost = insn_default_latency (insn);
	  if (cost < 0)
	    cost = 0;

	  INSN_COST (insn) = cost;
	}
    }

  /* In this case estimate cost without caring how insn is used.  */
  if (link == 0 || used == 0)
    return cost;

  /* A USE insn should never require the value used to be computed.
     This allows the computation of a function's result and parameter
     values to overlap the return and call.  */
  if (recog_memoized (used) < 0)
    cost = 0;
  else
    {
      if (INSN_CODE (insn) >= 0)
	{
	  if (REG_NOTE_KIND (link) == REG_DEP_ANTI)
	    cost = 0;
	  else if (REG_NOTE_KIND (link) == REG_DEP_OUTPUT)
	    {
	      cost = (insn_default_latency (insn)
		      - insn_default_latency (used));
	      if (cost <= 0)
		cost = 1;
	    }
	  else if (bypass_p (insn))
	    cost = insn_latency (insn, used);
	}

      if (targetm.sched.adjust_cost)
	cost = targetm.sched.adjust_cost (used, link, insn, cost);

      if (cost < 0)
	cost = 0;
    }

  return cost;
}
/* Compute the priority number for INSN.  */

static int
priority (rtx insn)
{
  rtx link;

  if (! INSN_P (insn))
    return 0;

  if (! INSN_PRIORITY_KNOWN (insn))
    {
      int this_priority = 0;

      if (INSN_DEPEND (insn) == 0)
	this_priority = insn_cost (insn, 0, 0);
      else
	{
	  for (link = INSN_DEPEND (insn); link; link = XEXP (link, 1))
	    {
	      rtx next;
	      int next_priority;

	      next = XEXP (link, 0);

	      /* Critical path is meaningful in block boundaries only.  */
	      if (! (*current_sched_info->contributes_to_priority) (next, insn))
		continue;

	      next_priority = insn_cost (insn, link, next) + priority (next);
	      if (next_priority > this_priority)
		this_priority = next_priority;
	    }
	}
      INSN_PRIORITY (insn) = this_priority;
      INSN_PRIORITY_KNOWN (insn) = 1;
    }

  return INSN_PRIORITY (insn);
}
/* Macros and functions for keeping the priority queue sorted, and
   dealing with queuing and dequeuing of instructions.  */

#define SCHED_SORT(READY, N_READY)                                   \
do { if ((N_READY) == 2)                                             \
       swap_sort (READY, N_READY);                                   \
     else if ((N_READY) > 2)                                         \
       qsort (READY, N_READY, sizeof (rtx), rank_for_schedule); }    \
while (0)
/* Returns a positive value if x is preferred; returns a negative value if
   y is preferred.  Should never return 0, since that will make the sort
   unstable.  */

static int
rank_for_schedule (const void *x, const void *y)
{
  rtx tmp = *(const rtx *) y;
  rtx tmp2 = *(const rtx *) x;
  rtx link;
  int tmp_class, tmp2_class, depend_count1, depend_count2;
  int val, priority_val, weight_val, info_val;

  /* An insn in a schedule group should be issued first.  */
  if (SCHED_GROUP_P (tmp) != SCHED_GROUP_P (tmp2))
    return SCHED_GROUP_P (tmp2) ? 1 : -1;

  /* Prefer insn with higher priority.  */
  priority_val = INSN_PRIORITY (tmp2) - INSN_PRIORITY (tmp);

  if (priority_val)
    return priority_val;

  /* Prefer an insn with smaller contribution to register pressure.  */
  if (!reload_completed
      && (weight_val = INSN_REG_WEIGHT (tmp) - INSN_REG_WEIGHT (tmp2)))
    return weight_val;

  info_val = (*current_sched_info->rank) (tmp, tmp2);
  if (info_val)
    return info_val;

  /* Compare insns based on their relation to the last-scheduled-insn.  */
  if (last_scheduled_insn)
    {
      /* Classify the instructions into three classes:
	 1) Data dependent on last scheduled insn.
	 2) Anti/Output dependent on last scheduled insn.
	 3) Independent of last scheduled insn, or has latency of one.
	 Choose the insn from the highest numbered class if different.  */
      link = find_insn_list (tmp, INSN_DEPEND (last_scheduled_insn));
      if (link == 0 || insn_cost (last_scheduled_insn, link, tmp) == 1)
	tmp_class = 3;
      else if (REG_NOTE_KIND (link) == 0)	/* Data dependence.  */
	tmp_class = 1;
      else
	tmp_class = 2;

      link = find_insn_list (tmp2, INSN_DEPEND (last_scheduled_insn));
      if (link == 0 || insn_cost (last_scheduled_insn, link, tmp2) == 1)
	tmp2_class = 3;
      else if (REG_NOTE_KIND (link) == 0)	/* Data dependence.  */
	tmp2_class = 1;
      else
	tmp2_class = 2;

      if ((val = tmp2_class - tmp_class))
	return val;
    }

  /* Prefer the insn which has more later insns that depend on it.
     This gives the scheduler more freedom when scheduling later
     instructions at the expense of added register pressure.  */
  depend_count1 = 0;
  for (link = INSN_DEPEND (tmp); link; link = XEXP (link, 1))
    depend_count1++;

  depend_count2 = 0;
  for (link = INSN_DEPEND (tmp2); link; link = XEXP (link, 1))
    depend_count2++;

  val = depend_count2 - depend_count1;
  if (val)
    return val;

  /* If insns are equally good, sort by INSN_LUID (original insn order),
     so that we make the sort stable.  This minimizes instruction movement,
     thus minimizing sched's effect on debugging and cross-jumping.  */
  return INSN_LUID (tmp) - INSN_LUID (tmp2);
}
/* Resort the array A in which only element at index N may be out of order.  */

HAIFA_INLINE static void
swap_sort (rtx *a, int n)
{
  rtx insn = a[n - 1];
  int i = n - 2;

  while (i >= 0 && rank_for_schedule (a + i, &insn) >= 0)
    {
      a[i + 1] = a[i];
      i -= 1;
    }
  a[i + 1] = insn;
}
/* Add INSN to the insn queue so that it can be executed at least
   N_CYCLES after the currently executing insn.  Preserve insns
   chain for debugging purposes.  */

HAIFA_INLINE static void
queue_insn (rtx insn, int n_cycles)
{
  int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
  rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]);
  insn_queue[next_q] = link;
  q_size += 1;

  if (sched_verbose >= 2)
    {
      fprintf (sched_dump, ";;\t\tReady-->Q: insn %s: ",
	       (*current_sched_info->print_insn) (insn, 0));

      fprintf (sched_dump, "queued for %d cycles.\n", n_cycles);
    }
}
/* Return a pointer to the bottom of the ready list, i.e. the insn
   with the lowest priority.  */

HAIFA_INLINE static rtx *
ready_lastpos (struct ready_list *ready)
{
  if (ready->n_ready == 0)
    abort ();
  return ready->vec + ready->first - ready->n_ready + 1;
}
/* Add an element INSN to the ready list so that it ends up with the lowest
   priority.  */

HAIFA_INLINE void
ready_add (struct ready_list *ready, rtx insn)
{
  if (ready->first == ready->n_ready)
    {
      memmove (ready->vec + ready->veclen - ready->n_ready,
	       ready_lastpos (ready),
	       ready->n_ready * sizeof (rtx));
      ready->first = ready->veclen - 1;
    }
  ready->vec[ready->first - ready->n_ready] = insn;
  ready->n_ready++;
}
/* Remove the element with the highest priority from the ready list and
   return it.  */

HAIFA_INLINE static rtx
ready_remove_first (struct ready_list *ready)
{
  rtx t;
  if (ready->n_ready == 0)
    abort ();
  t = ready->vec[ready->first--];
  ready->n_ready--;
  /* If the queue becomes empty, reset it.  */
  if (ready->n_ready == 0)
    ready->first = ready->veclen - 1;
  return t;
}
/* The following code implements multi-pass scheduling for the first
   cycle.  In other words, we try to choose the ready insn that
   permits starting the maximum number of insns on the same cycle.  */

/* Return a pointer to the element INDEX from the ready.  INDEX for
   insn with the highest priority is 0, and the lowest priority has
   N_READY - 1.  */

HAIFA_INLINE static rtx
ready_element (struct ready_list *ready, int index)
{
#ifdef ENABLE_CHECKING
  if (ready->n_ready == 0 || index >= ready->n_ready)
    abort ();
#endif
  return ready->vec[ready->first - index];
}
/* Remove the element INDEX from the ready list and return it.  INDEX
   for insn with the highest priority is 0, and the lowest priority
   has N_READY - 1.  */

HAIFA_INLINE static rtx
ready_remove (struct ready_list *ready, int index)
{
  rtx t;
  int i;

  if (index == 0)
    return ready_remove_first (ready);
  if (ready->n_ready == 0 || index >= ready->n_ready)
    abort ();
  t = ready->vec[ready->first - index];
  ready->n_ready--;
  for (i = index; i < ready->n_ready; i++)
    ready->vec[ready->first - i] = ready->vec[ready->first - i - 1];
  return t;
}
/* Sort the ready list READY by ascending priority, using the SCHED_SORT
   macro.  */

HAIFA_INLINE static void
ready_sort (struct ready_list *ready)
{
  rtx *first = ready_lastpos (ready);
  SCHED_SORT (first, ready->n_ready);
}
/* PREV is an insn that is ready to execute.  Adjust its priority if that
   will help shorten or lengthen register lifetimes as appropriate.  Also
   provide a hook for the target to tweak itself.  */

HAIFA_INLINE static void
adjust_priority (rtx prev)
{
  /* ??? There used to be code here to try and estimate how an insn
     affected register lifetimes, but it did it by looking at REG_DEAD
     notes, which we removed in schedule_region.  Nor did it try to
     take into account register pressure or anything useful like that.

     Revisit when we have a machine model to work with and not before.  */

  if (targetm.sched.adjust_priority)
    INSN_PRIORITY (prev) =
      targetm.sched.adjust_priority (prev, INSN_PRIORITY (prev));
}
/* Advance time on one cycle.  */
HAIFA_INLINE static void
advance_one_cycle (void)
{
  if (targetm.sched.dfa_pre_cycle_insn)
    state_transition (curr_state,
		      targetm.sched.dfa_pre_cycle_insn ());

  state_transition (curr_state, NULL);

  if (targetm.sched.dfa_post_cycle_insn)
    state_transition (curr_state,
		      targetm.sched.dfa_post_cycle_insn ());
}
/* Clock at which the previous instruction was issued.  */
static int last_clock_var;

/* INSN is the "currently executing insn".  Launch each insn which was
   waiting on INSN.  READY is the ready list which contains the insns
   that are ready to fire.  CLOCK is the current cycle.  The function
   returns the necessary cycle advance after issuing the insn (it is not
   zero for insns in a schedule group).  */

static int
schedule_insn (rtx insn, struct ready_list *ready, int clock)
{
  rtx link;
  int advance = 0;
  int premature_issue = 0;

  if (sched_verbose >= 1)
    {
      char buf[2048];

      print_insn (buf, insn, 0);
      buf[40] = 0;
      fprintf (sched_dump, ";;\t%3i--> %-40s:", clock, buf);

      if (recog_memoized (insn) < 0)
	fprintf (sched_dump, "nothing");
      else
	print_reservation (sched_dump, insn);
      fputc ('\n', sched_dump);
    }

  if (INSN_TICK (insn) > clock)
    {
      /* 'insn' has been prematurely moved from the queue to the
	 ready list.  */
      premature_issue = INSN_TICK (insn) - clock;
    }

  for (link = INSN_DEPEND (insn); link != 0; link = XEXP (link, 1))
    {
      rtx next = XEXP (link, 0);
      int cost = insn_cost (insn, link, next);

      INSN_TICK (next) = MAX (INSN_TICK (next), clock + cost + premature_issue);

      if ((INSN_DEP_COUNT (next) -= 1) == 0)
	{
	  int effective_cost = INSN_TICK (next) - clock;

	  if (! (*current_sched_info->new_ready) (next))
	    continue;

	  if (sched_verbose >= 2)
	    {
	      fprintf (sched_dump, ";;\t\tdependences resolved: insn %s ",
		       (*current_sched_info->print_insn) (next, 0));

	      if (effective_cost < 1)
		fprintf (sched_dump, "into ready\n");
	      else
		fprintf (sched_dump, "into queue with cost=%d\n",
			 effective_cost);
	    }

	  /* Adjust the priority of NEXT and either put it on the ready
	     list or queue it.  */
	  adjust_priority (next);
	  if (effective_cost < 1)
	    ready_add (ready, next);
	  else
	    {
	      queue_insn (next, effective_cost);

	      if (SCHED_GROUP_P (next) && advance < effective_cost)
		advance = effective_cost;
	    }
	}
    }

  /* Annotate the instruction with issue information -- TImode
     indicates that the instruction is expected not to be able
     to issue on the same cycle as the previous insn.  A machine
     may use this information to decide how the instruction should
     be aligned.  */
  if (issue_rate > 1
      && GET_CODE (PATTERN (insn)) != USE
      && GET_CODE (PATTERN (insn)) != CLOBBER)
    {
      if (reload_completed)
	PUT_MODE (insn, clock > last_clock_var ? TImode : VOIDmode);
      last_clock_var = clock;
    }

  return advance;
}
/* Functions for handling of notes.  */

/* Delete notes beginning with INSN and put them in the chain
   of notes ended by NOTE_LIST.
   Returns the insn following the notes.  */

static rtx
unlink_other_notes (rtx insn, rtx tail)
{
  rtx prev = PREV_INSN (insn);

  while (insn != tail && NOTE_P (insn))
    {
      rtx next = NEXT_INSN (insn);
      /* Delete the note from its current position.  */
      if (prev)
	NEXT_INSN (prev) = next;
      if (next)
	PREV_INSN (next) = prev;

      /* See sched_analyze to see how these are handled.  */
      if (NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_BEG
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_END
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_BASIC_BLOCK
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_BEG
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_END)
	{
	  /* Insert the note at the end of the notes list.  */
	  PREV_INSN (insn) = note_list;
	  if (note_list)
	    NEXT_INSN (note_list) = insn;
	  note_list = insn;
	}

      insn = next;
    }
  return insn;
}
/* Delete line notes beginning with INSN.  Record line-number notes so
   they can be reused.  Returns the insn following the notes.  */

static rtx
unlink_line_notes (rtx insn, rtx tail)
{
  rtx prev = PREV_INSN (insn);

  while (insn != tail && NOTE_P (insn))
    {
      rtx next = NEXT_INSN (insn);

      if (write_symbols != NO_DEBUG && NOTE_LINE_NUMBER (insn) > 0)
	{
	  /* Delete the note from its current position.  */
	  if (prev)
	    NEXT_INSN (prev) = next;
	  if (next)
	    PREV_INSN (next) = prev;

	  /* Record line-number notes so they can be reused.  */
	  LINE_NOTE (insn) = insn;
	}
      else
	prev = insn;

      insn = next;
    }
  return insn;
}
/* Return the head and tail pointers of BB.  */

void
get_block_head_tail (int b, rtx *headp, rtx *tailp)
{
  /* HEAD and TAIL delimit the basic block being scheduled.  */
  rtx head = BB_HEAD (BASIC_BLOCK (b));
  rtx tail = BB_END (BASIC_BLOCK (b));

  /* Don't include any notes or labels at the beginning of the
     basic block, or notes at the ends of basic blocks.  */
  while (head != tail)
    {
      if (NOTE_P (head))
	head = NEXT_INSN (head);
      else if (NOTE_P (tail))
	tail = PREV_INSN (tail);
      else if (LABEL_P (head))
	head = NEXT_INSN (head);
      else
	break;
    }

  *headp = head;
  *tailp = tail;
}
/* Return nonzero if there are no real insns in the range [ HEAD, TAIL ].  */

int
no_real_insns_p (rtx head, rtx tail)
{
  while (head != NEXT_INSN (tail))
    {
      if (!NOTE_P (head) && !LABEL_P (head))
	return 0;
      head = NEXT_INSN (head);
    }
  return 1;
}
/* Delete line notes from one block.  Save them so they can be later restored
   (in restore_line_notes).  HEAD and TAIL are the boundaries of the
   block in which notes should be processed.  */

void
rm_line_notes (rtx head, rtx tail)
{
  rtx next_tail;
  rtx insn;

  next_tail = NEXT_INSN (tail);
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      rtx prev;

      /* Farm out notes, and maybe save them in NOTE_LIST.
	 This is needed to keep the debugger from
	 getting completely deranged.  */
      if (NOTE_P (insn))
	{
	  prev = insn;
	  insn = unlink_line_notes (insn, next_tail);

	  if (prev == tail)
	    abort ();
	  if (prev == head)
	    abort ();
	  if (insn == next_tail)
	    abort ();
	}
    }
}
/* Save line number notes for each insn in block B.  HEAD and TAIL are
   the boundaries of the block in which notes should be processed.  */

void
save_line_notes (int b, rtx head, rtx tail)
{
  rtx next_tail;

  /* We must use the true line number for the first insn in the block
     that was computed and saved at the start of this pass.  We can't
     use the current line number, because scheduling of the previous
     block may have changed the current line number.  */

  rtx line = line_note_head[b];
  rtx insn;

  next_tail = NEXT_INSN (tail);

  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    if (NOTE_P (insn) && NOTE_LINE_NUMBER (insn) > 0)
      line = insn;
    else
      LINE_NOTE (insn) = line;
}
/* After a block was scheduled, insert line notes into the insns list.
   HEAD and TAIL are the boundaries of the block in which notes should
   be processed.  */

void
restore_line_notes (rtx head, rtx tail)
{
  rtx line, note, prev, new;
  int added_notes = 0;
  rtx next_tail, insn;

  head = head;
  next_tail = NEXT_INSN (tail);

  /* Determine the current line-number.  We want to know the current
     line number of the first insn of the block here, in case it is
     different from the true line number that was saved earlier.  If
     different, then we need a line number note before the first insn
     of this block.  If it happens to be the same, then we don't want to
     emit another line number note here.  */
  for (line = head; line; line = PREV_INSN (line))
    if (NOTE_P (line) && NOTE_LINE_NUMBER (line) > 0)
      break;

  /* Walk the insns keeping track of the current line-number and inserting
     the line-number notes as needed.  */
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    if (NOTE_P (insn) && NOTE_LINE_NUMBER (insn) > 0)
      line = insn;
    /* This used to emit line number notes before every non-deleted note.
       However, this confuses a debugger, because line notes not separated
       by real instructions all end up at the same address.  I can find no
       use for line number notes before other notes, so none are emitted.  */
    else if (!NOTE_P (insn)
	     && INSN_UID (insn) < old_max_uid
	     && (note = LINE_NOTE (insn)) != 0
	     && note != line
	     && (line == 0
#ifdef USE_MAPPED_LOCATION
		 || NOTE_SOURCE_LOCATION (note) != NOTE_SOURCE_LOCATION (line)
#else
		 || NOTE_LINE_NUMBER (note) != NOTE_LINE_NUMBER (line)
		 || NOTE_SOURCE_FILE (note) != NOTE_SOURCE_FILE (line)
#endif
		 ))
      {
	line = note;
	prev = PREV_INSN (insn);
	if (LINE_NOTE (note))
	  {
	    /* Re-use the original line-number note.  */
	    LINE_NOTE (note) = 0;
	    PREV_INSN (note) = prev;
	    NEXT_INSN (prev) = note;
	    PREV_INSN (insn) = note;
	    NEXT_INSN (note) = insn;
	  }
	else
	  {
	    added_notes++;
	    new = emit_note_after (NOTE_LINE_NUMBER (note), prev);
#ifndef USE_MAPPED_LOCATION
	    NOTE_SOURCE_FILE (new) = NOTE_SOURCE_FILE (note);
#endif
	  }
      }
  if (sched_verbose && added_notes)
    fprintf (sched_dump, ";; added %d line-number notes\n", added_notes);
}
/* After scheduling the function, delete redundant line notes from the
   insns list.  */

void
rm_redundant_line_notes (void)
{
  rtx line = 0;
  rtx insn = get_insns ();
  int active_insn = 0;
  int notes = 0;

  /* Walk the insns deleting redundant line-number notes.  Many of these
     are already present.  The remainder tend to occur at basic
     block boundaries.  */
  for (insn = get_last_insn (); insn; insn = PREV_INSN (insn))
    if (NOTE_P (insn) && NOTE_LINE_NUMBER (insn) > 0)
      {
	/* If there are no active insns following, INSN is redundant.  */
	if (active_insn == 0)
	  {
	    notes++;
	    SET_INSN_DELETED (insn);
	  }
	/* If the line number is unchanged, LINE is redundant.  */
	else if (line
#ifdef USE_MAPPED_LOCATION
		 && NOTE_SOURCE_LOCATION (line) == NOTE_SOURCE_LOCATION (insn)
#else
		 && NOTE_LINE_NUMBER (line) == NOTE_LINE_NUMBER (insn)
		 && NOTE_SOURCE_FILE (line) == NOTE_SOURCE_FILE (insn)
#endif
		 )
	  {
	    notes++;
	    SET_INSN_DELETED (line);
	    line = insn;
	  }
	else
	  line = insn;
	active_insn = 0;
      }
    else if (!((NOTE_P (insn)
		&& NOTE_LINE_NUMBER (insn) == NOTE_INSN_DELETED)
	       || (NONJUMP_INSN_P (insn)
		   && (GET_CODE (PATTERN (insn)) == USE
		       || GET_CODE (PATTERN (insn)) == CLOBBER))))
      active_insn++;

  if (sched_verbose && notes)
    fprintf (sched_dump, ";; deleted %d line-number notes\n", notes);
}
/* Delete notes between HEAD and TAIL and put them in the chain
   of notes ended by NOTE_LIST.  */

void
rm_other_notes (rtx head, rtx tail)
{
  rtx next_tail;
  rtx insn;

  note_list = 0;
  if (head == tail && (! INSN_P (head)))
    return;

  next_tail = NEXT_INSN (tail);
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      rtx prev;

      /* Farm out notes, and maybe save them in NOTE_LIST.
	 This is needed to keep the debugger from
	 getting completely deranged.  */
      if (NOTE_P (insn))
	{
	  prev = insn;

	  insn = unlink_other_notes (insn, next_tail);

	  if (prev == tail)
	    abort ();
	  if (prev == head)
	    abort ();
	  if (insn == next_tail)
	    abort ();
	}
    }
}
/* Functions for computation of registers live/usage info.  */

/* This function looks for a new register being defined.
   If the destination register is already used by the source,
   a new register is not needed.  */

static int
find_set_reg_weight (rtx x)
{
  if (GET_CODE (x) == CLOBBER
      && register_operand (SET_DEST (x), VOIDmode))
    return 1;
  if (GET_CODE (x) == SET
      && register_operand (SET_DEST (x), VOIDmode))
    {
      if (REG_P (SET_DEST (x)))
	{
	  if (!reg_mentioned_p (SET_DEST (x), SET_SRC (x)))
	    return 1;
	  else
	    return 0;
	}
      return 1;
    }
  return 0;
}
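/* For illustration (hypothetical patterns, not taken from these
   sources): given (set (reg 100) (plus (reg 100) (const_int 1))),
   find_set_reg_weight returns 0, since the destination register is
   mentioned in the source and no new register is born; given
   (set (reg 101) (reg 100)), it returns 1, since a fresh value now
   lives in reg 101.  find_insn_reg_weight below sums these per insn
   and subtracts one for each REG_DEAD/REG_UNUSED note.  */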
/* Calculate INSN_REG_WEIGHT for all insns of a block.  */

static void
find_insn_reg_weight (int b)
{
  rtx insn, next_tail, head, tail;

  get_block_head_tail (b, &head, &tail);
  next_tail = NEXT_INSN (tail);

  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      int reg_weight = 0;
      rtx x;

      /* Handle register life information.  */
      if (! INSN_P (insn))
	continue;

      /* Increment weight for each register born here.  */
      x = PATTERN (insn);
      reg_weight += find_set_reg_weight (x);
      if (GET_CODE (x) == PARALLEL)
	{
	  int j;
	  for (j = XVECLEN (x, 0) - 1; j >= 0; j--)
	    {
	      x = XVECEXP (PATTERN (insn), 0, j);
	      reg_weight += find_set_reg_weight (x);
	    }
	}
      /* Decrement weight for each register that dies here.  */
      for (x = REG_NOTES (insn); x; x = XEXP (x, 1))
	{
	  if (REG_NOTE_KIND (x) == REG_DEAD
	      || REG_NOTE_KIND (x) == REG_UNUSED)
	    reg_weight--;
	}

      INSN_REG_WEIGHT (insn) = reg_weight;
    }
}
/* Scheduling clock, modified in schedule_block () and queue_to_ready ().  */
static int clock_var;

/* Move insns that became ready to fire from queue to ready list.  */

static void
queue_to_ready (struct ready_list *ready)
{
  rtx insn;
  rtx link;

  q_ptr = NEXT_Q (q_ptr);

  /* Add all pending insns that can be scheduled without stalls to the
     ready list.  */
  for (link = insn_queue[q_ptr]; link; link = XEXP (link, 1))
    {
      insn = XEXP (link, 0);
      q_size -= 1;

      if (sched_verbose >= 2)
	fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
		 (*current_sched_info->print_insn) (insn, 0));

      ready_add (ready, insn);
      if (sched_verbose >= 2)
	fprintf (sched_dump, "moving to ready without stalls\n");
    }
  insn_queue[q_ptr] = 0;

  /* If there are no ready insns, stall until one is ready and add all
     of the pending insns at that point to the ready list.  */
  if (ready->n_ready == 0)
    {
      int stalls;

      for (stalls = 1; stalls <= max_insn_queue_index; stalls++)
	{
	  if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
	    {
	      for (; link; link = XEXP (link, 1))
		{
		  insn = XEXP (link, 0);
		  q_size -= 1;

		  if (sched_verbose >= 2)
		    fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
			     (*current_sched_info->print_insn) (insn, 0));

		  ready_add (ready, insn);
		  if (sched_verbose >= 2)
		    fprintf (sched_dump, "moving to ready with %d stalls\n", stalls);
		}
	      insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = 0;

	      advance_one_cycle ();

	      break;
	    }

	  advance_one_cycle ();
	}

      q_ptr = NEXT_Q_AFTER (q_ptr, stalls);
      clock_var += stalls;
    }
}
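/* A worked example of the queue mechanics above (illustrative numbers
   only): the insn queue is a circular buffer indexed modulo
   max_insn_queue_index + 1.  With q_ptr == 2, an insn queued with a
   3-cycle latency stall sits in insn_queue[NEXT_Q_AFTER (2, 3)].  If
   the ready list empties, the stall loop calls advance_one_cycle ()
   once per empty bucket until it reaches that slot, drains it into the
   ready list, and then charges the stalls to clock_var in one step.  */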
/* Used by early_queue_to_ready.  Determines whether it is "ok" to
   prematurely move INSN from the queue to the ready list.  Currently,
   if a target defines the hook 'is_costly_dependence', this function
   uses the hook to check whether there exist any dependences which are
   considered costly by the target, between INSN and other insns that
   have already been scheduled.  Dependences are checked up to Y cycles
   back, with default Y=1; the flag -fsched-stalled-insns-dep=Y allows
   controlling this value.
   (Other considerations could be taken into account instead, or in
   addition, depending on user flags and target hooks.)  */

static bool
ok_for_early_queue_removal (rtx insn)
{
  int n_cycles;
  rtx prev_insn = last_scheduled_insn;

  if (targetm.sched.is_costly_dependence)
    {
      for (n_cycles = flag_sched_stalled_insns_dep; n_cycles; n_cycles--)
	{
	  for ( ; prev_insn; prev_insn = PREV_INSN (prev_insn))
	    {
	      rtx dep_link = 0;
	      int dep_cost;

	      if (!NOTE_P (prev_insn))
		{
		  dep_link = find_insn_list (insn, INSN_DEPEND (prev_insn));
		  if (dep_link)
		    {
		      dep_cost = insn_cost (prev_insn, dep_link, insn);
		      if (targetm.sched.is_costly_dependence (prev_insn, insn,
							      dep_link, dep_cost,
							      flag_sched_stalled_insns_dep - n_cycles))
			return false;
		    }
		}

	      if (GET_MODE (prev_insn) == TImode) /* end of dispatch group */
		break;
	    }

	  if (!prev_insn)
	    break;
	  prev_insn = PREV_INSN (prev_insn);
	}
    }

  return true;
}
/* Remove insns from the queue, before they become "ready" with respect
   to FU latency considerations.  */

static int
early_queue_to_ready (state_t state, struct ready_list *ready)
{
  rtx insn;
  rtx link;
  rtx next_link;
  rtx prev_link;
  bool move_to_ready;
  int cost;
  state_t temp_state = alloca (dfa_state_size);
  int stalls;
  int insns_removed = 0;

  /*
     Flag '-fsched-stalled-insns=X' determines the aggressiveness of this
     function:

     X == 0: There is no limit on how many queued insns can be removed
	     prematurely.  (flag_sched_stalled_insns = -1).

     X >= 1: Only X queued insns can be removed prematurely in each
	     invocation.  (flag_sched_stalled_insns = X).

     Otherwise: Early queue removal is disabled.
	 (flag_sched_stalled_insns = 0)
  */

  if (! flag_sched_stalled_insns)
    return 0;

  for (stalls = 0; stalls <= max_insn_queue_index; stalls++)
    {
      if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
	{
	  if (sched_verbose > 6)
	    fprintf (sched_dump, ";; look at index %d + %d\n", q_ptr, stalls);

	  prev_link = 0;
	  while (link)
	    {
	      next_link = XEXP (link, 1);
	      insn = XEXP (link, 0);
	      if (insn && sched_verbose > 6)
		print_rtl_single (sched_dump, insn);

	      memcpy (temp_state, state, dfa_state_size);
	      if (recog_memoized (insn) < 0)
		/* non-negative to indicate that it's not ready
		   to avoid infinite Q->R->Q->R...  */
		cost = 0;
	      else
		cost = state_transition (temp_state, insn);

	      if (sched_verbose >= 6)
		fprintf (sched_dump, "transition cost = %d\n", cost);

	      move_to_ready = false;
	      if (cost < 0)
		{
		  move_to_ready = ok_for_early_queue_removal (insn);
		  if (move_to_ready == true)
		    {
		      /* move from Q to R */
		      q_size -= 1;
		      ready_add (ready, insn);

		      if (prev_link)
			XEXP (prev_link, 1) = next_link;
		      else
			insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = next_link;

		      free_INSN_LIST_node (link);

		      if (sched_verbose >= 2)
			fprintf (sched_dump, ";;\t\tEarly Q-->Ready: insn %s\n",
				 (*current_sched_info->print_insn) (insn, 0));

		      insns_removed++;
		      if (insns_removed == flag_sched_stalled_insns)
			/* Remove only one insn from Q at a time.  */
			return insns_removed;
		    }
		}

	      if (move_to_ready == false)
		prev_link = link;

	      link = next_link;
	    } /* while link */
	} /* if link */
    } /* for stalls.. */

  return insns_removed;
}
/* Print the ready list for debugging purposes.  Callable from debugger.  */

static void
debug_ready_list (struct ready_list *ready)
{
  rtx *p;
  int i;

  if (ready->n_ready == 0)
    {
      fprintf (sched_dump, "\n");
      return;
    }

  p = ready_lastpos (ready);
  for (i = 0; i < ready->n_ready; i++)
    fprintf (sched_dump, "  %s", (*current_sched_info->print_insn) (p[i], 0));
  fprintf (sched_dump, "\n");
}
/* move_insn1: Remove INSN from insn chain, and link it after LAST insn.  */

static rtx
move_insn1 (rtx insn, rtx last)
{
  NEXT_INSN (PREV_INSN (insn)) = NEXT_INSN (insn);
  PREV_INSN (NEXT_INSN (insn)) = PREV_INSN (insn);

  NEXT_INSN (insn) = NEXT_INSN (last);
  PREV_INSN (NEXT_INSN (last)) = insn;

  NEXT_INSN (last) = insn;
  PREV_INSN (insn) = last;

  return insn;
}
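/* Illustrative trace (hypothetical insns, not from these sources):
   with chain A <-> B <-> C and last == X in chain X <-> Y,
   move_insn1 (B, X) first splices B out, leaving A <-> C, then links
   B after X, yielding X <-> B <-> Y.  Both the NEXT_INSN and
   PREV_INSN links of every affected neighbor are updated.  */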
/* Search INSN for REG_SAVE_NOTE note pairs for
   NOTE_INSN_{LOOP,EHREGION}_{BEG,END}; and convert them back into
   NOTEs.  The REG_SAVE_NOTE note following the first one contains the
   saved value of NOTE_BLOCK_NUMBER, which is useful for
   NOTE_INSN_EH_REGION_{BEG,END} NOTEs.  LAST is the last instruction
   output by the instruction scheduler.  Return the new value of LAST.  */

static rtx
reemit_notes (rtx insn, rtx last)
{
  rtx note, retval;

  retval = last;
  for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
    {
      if (REG_NOTE_KIND (note) == REG_SAVE_NOTE)
	{
	  enum insn_note note_type = INTVAL (XEXP (note, 0));

	  last = emit_note_before (note_type, last);
	  remove_note (insn, note);
	  note = XEXP (note, 1);
	  if (note_type == NOTE_INSN_EH_REGION_BEG
	      || note_type == NOTE_INSN_EH_REGION_END)
	    NOTE_EH_HANDLER (last) = INTVAL (XEXP (note, 0));
	  remove_note (insn, note);
	}
    }
  return retval;
}
/* Move INSN.  Reemit notes if needed.

   Return the last insn emitted by the scheduler, which is the
   return value from the first call to reemit_notes.  */

static rtx
move_insn (rtx insn, rtx last)
{
  rtx retval = NULL;

  move_insn1 (insn, last);

  /* If this is the first call to reemit_notes, then record
     its return value.  */
  if (retval == NULL_RTX)
    retval = reemit_notes (insn, insn);
  else
    reemit_notes (insn, insn);

  SCHED_GROUP_P (insn) = 0;

  return retval;
}
/* The following structure describes an entry of the stack of choices.  */
struct choice_entry
{
  /* Ordinal number of the issued insn in the ready queue.  */
  int index;
  /* The number of remaining insns whose issue we should still try.  */
  int rest;
  /* The number of issued essential insns.  */
  int n;
  /* State after issuing the insn.  */
  state_t state;
};

/* The following array is used to implement a stack of choices used in
   function max_issue.  */
static struct choice_entry *choice_stack;

/* The following variable value is number of essential insns issued on
   the current cycle.  An insn is essential if it changes the
   processor state.  */
static int cycle_issued_insns;

/* The following variable value is maximal number of tries of issuing
   insns for the first cycle multipass insn scheduling.  We define
   this value as constant*(DFA_LOOKAHEAD**ISSUE_RATE).  We would not
   need this constraint if all real insns (with non-negative codes)
   had reservations because in this case the algorithm complexity is
   O(DFA_LOOKAHEAD**ISSUE_RATE).  Unfortunately, the dfa descriptions
   might be incomplete and such insns might occur.  For such
   descriptions, the complexity of the algorithm (without the constraint)
   could reach DFA_LOOKAHEAD ** N, where N is the queue length.  */
static int max_lookahead_tries;

/* The following value is the value of hook
   `first_cycle_multipass_dfa_lookahead' at the last call of
   `max_issue'.  */
static int cached_first_cycle_multipass_dfa_lookahead = 0;

/* The following value is the value of `issue_rate' at the last call of
   `sched_init'.  */
static int cached_issue_rate = 0;
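/* A concrete instance of the bound above (illustrative numbers, not
   from these sources): with a lookahead of 4 and issue_rate == 2,
   choose_ready below computes max_lookahead_tries = 100 * 4 * 4 =
   1600, which caps the number of DFA state transitions max_issue may
   attempt in one call even when some insns lack reservations.  */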
/* The following function returns the maximal (or close to maximal)
   number of insns which can be issued on the same cycle, one of which
   is the insn with the best rank (the first insn in READY).  To do
   this, the function tries different samples of ready insns.  READY is
   the current queue `ready'.  Global array READY_TRY reflects what
   insns are already issued in this try.  INDEX will contain the index
   of the best insn in READY.  The following function is used only for
   first cycle multipass scheduling.  */
static int
max_issue (struct ready_list *ready, int *index)
{
  int n, i, all, n_ready, best, delay, tries_num;
  struct choice_entry *top;
  rtx insn;

  best = 0;
  memcpy (choice_stack->state, curr_state, dfa_state_size);
  top = choice_stack;
  top->rest = cached_first_cycle_multipass_dfa_lookahead;
  top->n = 0;
  n_ready = ready->n_ready;
  for (all = i = 0; i < n_ready; i++)
    if (!ready_try [i])
      all++;
  i = 0;
  tries_num = 0;
  for (;;)
    {
      if (top->rest == 0 || i >= n_ready)
	{
	  if (top == choice_stack)
	    break;
	  if (best < top - choice_stack && ready_try [0])
	    {
	      best = top - choice_stack;
	      *index = choice_stack [1].index;
	      if (top->n == issue_rate - cycle_issued_insns || best == all)
		break;
	    }
	  i = top->index;
	  ready_try [i] = 0;
	  top--;
	  memcpy (curr_state, top->state, dfa_state_size);
	}
      else if (!ready_try [i])
	{
	  tries_num++;
	  if (tries_num > max_lookahead_tries)
	    break;
	  insn = ready_element (ready, i);
	  delay = state_transition (curr_state, insn);
	  if (delay < 0)
	    {
	      if (state_dead_lock_p (curr_state))
		top->rest = 0;
	      else
		top->rest--;
	      n = top->n;
	      if (memcmp (top->state, curr_state, dfa_state_size) != 0)
		n++;
	      top++;
	      top->rest = cached_first_cycle_multipass_dfa_lookahead;
	      top->index = i;
	      top->n = n;
	      memcpy (top->state, curr_state, dfa_state_size);
	      ready_try [i] = 1;
	      i = -1;
	    }
	}
      i++;
    }
  while (top != choice_stack)
    {
      ready_try [top->index] = 0;
      top--;
    }
  memcpy (curr_state, choice_stack->state, dfa_state_size);
  return best;
}
/* The following function chooses insn from READY and modifies
   *N_READY and READY.  The following function is used only for first
   cycle multipass scheduling.  */

static rtx
choose_ready (struct ready_list *ready)
{
  int lookahead = 0;

  if (targetm.sched.first_cycle_multipass_dfa_lookahead)
    lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
  if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0)))
    return ready_remove_first (ready);
  else
    {
      /* Try to choose the better insn.  */
      int index = 0, i;
      rtx insn;

      if (cached_first_cycle_multipass_dfa_lookahead != lookahead)
	{
	  cached_first_cycle_multipass_dfa_lookahead = lookahead;
	  max_lookahead_tries = 100;
	  for (i = 0; i < issue_rate; i++)
	    max_lookahead_tries *= lookahead;
	}
      insn = ready_element (ready, 0);
      if (INSN_CODE (insn) < 0)
	return ready_remove_first (ready);
      for (i = 1; i < ready->n_ready; i++)
	{
	  insn = ready_element (ready, i);
	  ready_try [i]
	    = (INSN_CODE (insn) < 0
	       || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard
		   && !targetm.sched.first_cycle_multipass_dfa_lookahead_guard (insn)));
	}
      if (max_issue (ready, &index) == 0)
	return ready_remove_first (ready);
      else
	return ready_remove (ready, index);
    }
}
/* Use forward list scheduling to rearrange insns of block B in region RGN,
   possibly bringing insns from subsequent blocks in the same region.  */

void
schedule_block (int b, int rgn_n_insns)
{
  struct ready_list ready;
  int i, first_cycle_insn_p;
  int can_issue_more;
  state_t temp_state = NULL;  /* It is used for multipass scheduling.  */
  int sort_p, advance, start_clock_var;

  /* Head/tail info for this block.  */
  rtx prev_head = current_sched_info->prev_head;
  rtx next_tail = current_sched_info->next_tail;
  rtx head = NEXT_INSN (prev_head);
  rtx tail = PREV_INSN (next_tail);

  /* We used to have code to avoid getting parameters moved from hard
     argument registers into pseudos.

     However, it was removed when it proved to be of marginal benefit
     and caused problems because schedule_block and compute_forward_dependences
     had different notions of what the "head" insn was.  */

  if (head == tail && (! INSN_P (head)))
    abort ();

  /* Debug info.  */
  if (sched_verbose)
    {
      fprintf (sched_dump, ";;   ======================================================\n");
      fprintf (sched_dump,
	       ";;   -- basic block %d from %d to %d -- %s reload\n",
	       b, INSN_UID (head), INSN_UID (tail),
	       (reload_completed ? "after" : "before"));
      fprintf (sched_dump, ";;   ======================================================\n");
      fprintf (sched_dump, "\n");
    }

  state_reset (curr_state);

  /* Allocate the ready list.  */
  ready.veclen = rgn_n_insns + 1 + issue_rate;
  ready.first = ready.veclen - 1;
  ready.vec = xmalloc (ready.veclen * sizeof (rtx));
  ready.n_ready = 0;

  /* It is used for first cycle multipass scheduling.  */
  temp_state = alloca (dfa_state_size);
  ready_try = xcalloc ((rgn_n_insns + 1), sizeof (char));
  choice_stack = xmalloc ((rgn_n_insns + 1)
			  * sizeof (struct choice_entry));
  for (i = 0; i <= rgn_n_insns; i++)
    choice_stack[i].state = xmalloc (dfa_state_size);

  (*current_sched_info->init_ready_list) (&ready);

  if (targetm.sched.md_init)
    targetm.sched.md_init (sched_dump, sched_verbose, ready.veclen);

  /* We start inserting insns after PREV_HEAD.  */
  last_scheduled_insn = prev_head;

  /* Initialize INSN_QUEUE.  Q_SIZE is the total number of insns in the
     queue.  */
  q_ptr = 0;
  q_size = 0;

  insn_queue = alloca ((max_insn_queue_index + 1) * sizeof (rtx));
  memset (insn_queue, 0, (max_insn_queue_index + 1) * sizeof (rtx));
  last_clock_var = -1;

  /* Start just before the beginning of time.  */
  clock_var = -1;
  advance = 0;

  sort_p = TRUE;
  /* Loop until all the insns in BB are scheduled.  */
  while ((*current_sched_info->schedule_more_p) ())
    {
      do
	{
	  start_clock_var = clock_var;

	  clock_var++;

	  advance_one_cycle ();

	  /* Add to the ready list all pending insns that can be issued now.
	     If there are no ready insns, increment clock until one
	     is ready and add all pending insns at that point to the ready
	     list.  */
	  queue_to_ready (&ready);

	  if (ready.n_ready == 0)
	    abort ();

	  if (sched_verbose >= 2)
	    {
	      fprintf (sched_dump, ";;\t\tReady list after queue_to_ready:  ");
	      debug_ready_list (&ready);
	    }
	  advance -= clock_var - start_clock_var;
	}
      while (advance > 0);

      if (sort_p)
	{
	  /* Sort the ready list based on priority.  */
	  ready_sort (&ready);

	  if (sched_verbose >= 2)
	    {
	      fprintf (sched_dump, ";;\t\tReady list after ready_sort:  ");
	      debug_ready_list (&ready);
	    }
	}

      /* Allow the target to reorder the list, typically for
	 better instruction bundling.  */
      if (sort_p && targetm.sched.reorder
	  && (ready.n_ready == 0
	      || !SCHED_GROUP_P (ready_element (&ready, 0))))
	can_issue_more =
	  targetm.sched.reorder (sched_dump, sched_verbose,
				 ready_lastpos (&ready),
				 &ready.n_ready, clock_var);
      else
	can_issue_more = issue_rate;

      first_cycle_insn_p = 1;
      cycle_issued_insns = 0;
      for (;;)
	{
	  rtx insn;
	  int cost;
	  bool asm_p = false;

	  if (sched_verbose >= 2)
	    {
	      fprintf (sched_dump, ";;\tReady list (t =%3d):  ",
		       clock_var);
	      debug_ready_list (&ready);
	    }

	  if (ready.n_ready == 0
	      && can_issue_more
	      && reload_completed)
	    {
	      /* Allow scheduling insns directly from the queue in case
		 there's nothing better to do (ready list is empty) but
		 there are still vacant dispatch slots in the current cycle.  */
	      if (sched_verbose >= 6)
		fprintf (sched_dump, ";;\t\tSecond chance\n");
	      memcpy (temp_state, curr_state, dfa_state_size);
	      if (early_queue_to_ready (temp_state, &ready))
		ready_sort (&ready);
	    }

	  if (ready.n_ready == 0 || !can_issue_more
	      || state_dead_lock_p (curr_state)
	      || !(*current_sched_info->schedule_more_p) ())
	    break;

	  /* Select and remove the insn from the ready list.  */
	  if (sort_p)
	    insn = choose_ready (&ready);
	  else
	    insn = ready_remove_first (&ready);

	  if (targetm.sched.dfa_new_cycle
	      && targetm.sched.dfa_new_cycle (sched_dump, sched_verbose,
					      insn, last_clock_var,
					      clock_var, &sort_p))
	    {
	      ready_add (&ready, insn);
	      break;
	    }

	  sort_p = TRUE;
	  memcpy (temp_state, curr_state, dfa_state_size);
	  if (recog_memoized (insn) < 0)
	    {
	      asm_p = (GET_CODE (PATTERN (insn)) == ASM_INPUT
		       || asm_noperands (PATTERN (insn)) >= 0);
	      if (!first_cycle_insn_p && asm_p)
		/* This is an asm insn that we tried to issue on a
		   cycle other than the first.  Issue it on the next
		   cycle.  */
		cost = 1;
	      else
		/* A USE insn, or something else we don't need to
		   understand.  We can't pass these directly to
		   state_transition because it will trigger a
		   fatal error for unrecognizable insns.  */
		cost = 0;
	    }
	  else
	    {
	      cost = state_transition (temp_state, insn);
	      if (cost < 0)
		cost = 0;
	      else if (cost == 0)
		cost = 1;
	    }

	  if (cost >= 1)
	    {
	      queue_insn (insn, cost);
	      continue;
	    }

	  if (! (*current_sched_info->can_schedule_ready_p) (insn))
	    goto next;

	  last_scheduled_insn = move_insn (insn, last_scheduled_insn);

	  if (memcmp (curr_state, temp_state, dfa_state_size) != 0)
	    cycle_issued_insns++;
	  memcpy (curr_state, temp_state, dfa_state_size);

	  if (targetm.sched.variable_issue)
	    can_issue_more =
	      targetm.sched.variable_issue (sched_dump, sched_verbose,
					    insn, can_issue_more);
	  /* A naked CLOBBER or USE generates no instruction, so do
	     not count them against the issue rate.  */
	  else if (GET_CODE (PATTERN (insn)) != USE
		   && GET_CODE (PATTERN (insn)) != CLOBBER)
	    can_issue_more--;

	  advance = schedule_insn (insn, &ready, clock_var);

	  /* After issuing an asm insn we should start a new cycle.  */
	  if (advance == 0 && asm_p)
	    advance = 1;
	  if (advance != 0)
	    break;

	next:
	  first_cycle_insn_p = 0;

	  /* Sort the ready list based on priority.  This must be
	     redone here, as schedule_insn may have readied additional
	     insns that will not be sorted correctly.  */
	  if (ready.n_ready > 0)
	    ready_sort (&ready);

	  if (targetm.sched.reorder2
	      && (ready.n_ready == 0
		  || !SCHED_GROUP_P (ready_element (&ready, 0))))
	    {
	      can_issue_more =
		targetm.sched.reorder2 (sched_dump, sched_verbose,
					ready.n_ready
					? ready_lastpos (&ready) : NULL,
					&ready.n_ready, clock_var);
	    }
	}
    }
  if (targetm.sched.md_finish)
    targetm.sched.md_finish (sched_dump, sched_verbose);

  /* Debug info.  */
  if (sched_verbose)
    {
      fprintf (sched_dump, ";;\tReady list (final):  ");
      debug_ready_list (&ready);
    }

  /* Sanity check -- queue must be empty now.  Meaningless if region has
     multiple bbs.  */
  if (current_sched_info->queue_must_finish_empty && q_size != 0)
    abort ();

  /* Update head/tail boundaries.  */
  head = NEXT_INSN (prev_head);
  tail = last_scheduled_insn;

  if (!reload_completed)
    {
      rtx insn, link, next;

      /* INSN_TICK (minimum clock tick at which the insn becomes
	 ready) may be not correct for the insn in the subsequent
	 blocks of the region.  We should use a correct value of
	 `clock_var' or modify INSN_TICK.  It is better to keep
	 clock_var value equal to 0 at the start of a basic block.
	 Therefore we modify INSN_TICK here.  */
      for (insn = head; insn != tail; insn = NEXT_INSN (insn))
	if (INSN_P (insn))
	  {
	    for (link = INSN_DEPEND (insn); link != 0; link = XEXP (link, 1))
	      {
		next = XEXP (link, 0);
		INSN_TICK (next) -= clock_var;
	      }
	  }
    }

  /* Restore-other-notes: NOTE_LIST is the end of a chain of notes
     previously found among the insns.  Insert them at the beginning
     of the insns.  */
  if (note_list != 0)
    {
      rtx note_head = note_list;

      while (PREV_INSN (note_head))
	{
	  note_head = PREV_INSN (note_head);
	}

      PREV_INSN (note_head) = PREV_INSN (head);
      NEXT_INSN (PREV_INSN (head)) = note_head;
      PREV_INSN (head) = note_list;
      NEXT_INSN (note_list) = head;
      head = note_head;
    }

  /* Debugging.  */
  if (sched_verbose)
    {
      fprintf (sched_dump, ";;   total time = %d\n;;   new head = %d\n",
	       clock_var, INSN_UID (head));
      fprintf (sched_dump, ";;   new tail = %d\n\n",
	       INSN_UID (tail));
    }

  current_sched_info->head = head;
  current_sched_info->tail = tail;

  free (ready.vec);

  free (ready_try);
  for (i = 0; i <= rgn_n_insns; i++)
    free (choice_stack [i].state);
  free (choice_stack);
}
/* Set_priorities: compute priority of each insn in the block.  */

int
set_priorities (rtx head, rtx tail)
{
  rtx insn;
  int n_insn;
  int sched_max_insns_priority =
    current_sched_info->sched_max_insns_priority;
  rtx prev_head;

  prev_head = PREV_INSN (head);

  if (head == tail && (! INSN_P (head)))
    return 0;

  n_insn = 0;
  sched_max_insns_priority = 0;
  for (insn = tail; insn != prev_head; insn = PREV_INSN (insn))
    {
      if (NOTE_P (insn))
	continue;

      n_insn++;
      (void) priority (insn);

      if (INSN_PRIORITY_KNOWN (insn))
	sched_max_insns_priority =
	  MAX (sched_max_insns_priority, INSN_PRIORITY (insn));
    }
  sched_max_insns_priority += 1;
  current_sched_info->sched_max_insns_priority = sched_max_insns_priority;

  return n_insn;
}
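The priority () calls driven by the loop above compute, in effect, each insn's longest-path distance to the end of the block along dependence edges, memoized so every insn is visited once (GCC marks this with INSN_PRIORITY_KNOWN). A standalone sketch over a small array-encoded DAG (all names here are hypothetical, not GCC identifiers):

```c
#include <assert.h>

#define MAXN 8

static int n_succ[MAXN];      /* number of dependence successors */
static int succ[MAXN][MAXN];  /* successor indices */
static int cost[MAXN];        /* per-insn issue cost */
static int prio[MAXN];        /* memoized priority */
static int known[MAXN];       /* nonzero once prio[i] is valid */

/* Priority of insn I: its cost plus the maximum priority among its
   dependence successors, i.e. the longest path to the block's end.  */
static int
insn_priority (int i)
{
  int j, best = 0;

  if (known[i])
    return prio[i];
  for (j = 0; j < n_succ[i]; j++)
    {
      int p = insn_priority (succ[i][j]);
      if (p > best)
	best = p;
    }
  prio[i] = cost[i] + best;
  known[i] = 1;
  return prio[i];
}
```

Sorting the ready list by this value favors insns on the critical path, which is the heuristic the scheduler's ready_sort relies on.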
/* Initialize some global state for the scheduler.  DUMP_FILE is to be used
   for debugging output.  */

void
sched_init (FILE *dump_file)
{
  int luid;
  basic_block b;
  rtx insn;
  int i;

  /* Speculative loads are unsafe in the presence of cc0, so disable
     them if cc0 is defined.  */
#ifdef HAVE_cc0
  flag_schedule_speculative_load = 0;
#endif

  /* Set dump and sched_verbose for the desired debugging output.  If no
     dump file was specified, but -fsched-verbose=N (any N), print to stderr.
     For -fsched-verbose=N, N>=10, print everything to stderr.  */
  sched_verbose = sched_verbose_param;
  if (sched_verbose_param == 0 && dump_file)
    sched_verbose = 1;
  sched_dump = ((sched_verbose_param >= 10 || !dump_file)
		? stderr : dump_file);
  /* Initialize issue_rate.  */
  if (targetm.sched.issue_rate)
    issue_rate = targetm.sched.issue_rate ();
  else
    issue_rate = 1;

  if (cached_issue_rate != issue_rate)
    {
      cached_issue_rate = issue_rate;
      /* To invalidate max_lookahead_tries:  */
      cached_first_cycle_multipass_dfa_lookahead = 0;
    }
  /* We use LUID 0 for the fake insn (UID 0) which holds dependencies for
     pseudos which do not cross calls.  */
  old_max_uid = get_max_uid () + 1;

  h_i_d = xcalloc (old_max_uid, sizeof (*h_i_d));

  for (i = 0; i < old_max_uid; i++)
    h_i_d[i].cost = -1;

  if (targetm.sched.init_dfa_pre_cycle_insn)
    targetm.sched.init_dfa_pre_cycle_insn ();

  if (targetm.sched.init_dfa_post_cycle_insn)
    targetm.sched.init_dfa_post_cycle_insn ();

  dfa_start ();
  dfa_state_size = state_size ();
  curr_state = xmalloc (dfa_state_size);

  h_i_d[0].luid = 0;
  luid = 1;
  FOR_EACH_BB (b)
    for (insn = BB_HEAD (b); ; insn = NEXT_INSN (insn))
      {
	INSN_LUID (insn) = luid;

	/* Increment the next luid, unless this is a note.  We don't
	   really need separate IDs for notes and we don't want to
	   schedule differently depending on whether or not there are
	   line-number notes, i.e., depending on whether or not we're
	   generating debugging information.  */
	if (!NOTE_P (insn))
	  ++luid;

	if (insn == BB_END (b))
	  break;
      }
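The LUID numbering above can be isolated: each insn receives the current counter value, and only non-notes advance it, so a note ends up sharing its LUID with the following real insn. A minimal sketch over arrays instead of the insn chain (`assign_luids` is a hypothetical helper, not a GCC function):

```c
#include <assert.h>

/* IS_NOTE[i] is nonzero for note positions.  Store a LUID for each of
   the N slots into OUT, starting at FIRST; notes do not advance the
   counter, matching the loop above.  Returns the next unused LUID.  */
static int
assign_luids (const int *is_note, int n, int *out, int first)
{
  int i, luid = first;

  for (i = 0; i < n; i++)
    {
      out[i] = luid;
      if (!is_note[i])
	++luid;
    }
  return luid;
}
```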
  init_dependency_caches (luid);

  init_alias_analysis ();

  if (write_symbols != NO_DEBUG)
    {
      rtx line;

      line_note_head = xcalloc (last_basic_block, sizeof (rtx));

      /* Save-line-note-head:
	 Determine the line-number at the start of each basic block.
	 This must be computed and saved now, because after a basic
	 block's predecessor has been scheduled, it is impossible to
	 accurately determine the correct line number for the first
	 insn of the block.  */

      FOR_EACH_BB (b)
	{
	  for (line = BB_HEAD (b); line; line = PREV_INSN (line))
	    if (NOTE_P (line) && NOTE_LINE_NUMBER (line) > 0)
	      {
		line_note_head[b->index] = line;
		break;
	      }
	  /* Do a forward search as well, since we won't get to see the
	     first notes in a basic block.  */
	  for (line = BB_HEAD (b); line; line = NEXT_INSN (line))
	    {
	      if (INSN_P (line))
		break;
	      if (NOTE_P (line) && NOTE_LINE_NUMBER (line) > 0)
		line_note_head[b->index] = line;
	    }
	}
    }
  /* ??? Add a NOTE after the last insn of the last basic block.  It is
     not known why this is done.  */

  insn = BB_END (EXIT_BLOCK_PTR->prev_bb);
  if (NEXT_INSN (insn) == 0
      || (!NOTE_P (insn)
	  && !LABEL_P (insn)
	  /* Don't emit a NOTE if it would end up before a BARRIER.  */
	  && !BARRIER_P (NEXT_INSN (insn))))
    {
      emit_note_after (NOTE_INSN_DELETED, BB_END (EXIT_BLOCK_PTR->prev_bb));
      /* Make the note appear outside the basic block.  */
      BB_END (EXIT_BLOCK_PTR->prev_bb)
	= PREV_INSN (BB_END (EXIT_BLOCK_PTR->prev_bb));
    }

  /* Compute INSN_REG_WEIGHT for all blocks.  We must do this before
     removing death notes.  */
  FOR_EACH_BB_REVERSE (b)
    find_insn_reg_weight (b->index);

  if (targetm.sched.md_init_global)
    targetm.sched.md_init_global (sched_dump, sched_verbose, old_max_uid);
}
/* Free global data used during insn scheduling.  */

void
sched_finish (void)
{
  free (h_i_d);
  free (curr_state);
  dfa_finish ();
  free_dependency_caches ();
  end_alias_analysis ();
  if (write_symbols != NO_DEBUG)
    free (line_note_head);

  if (targetm.sched.md_finish_global)
    targetm.sched.md_finish_global (sched_dump, sched_verbose);
}

#endif /* INSN_SCHEDULING */