/* Instruction scheduling pass.
   Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998,
   1999, 2000, 2001 Free Software Foundation, Inc.
   Contributed by Michael Tiemann (tiemann@cygnus.com) Enhanced by,
   and currently maintained by, Jim Wilson (wilson@cygnus.com)

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING.  If not, write to the Free
Software Foundation, 59 Temple Place - Suite 330, Boston, MA
02111-1307, USA.  */
/* Instruction scheduling pass.  This file, along with sched-deps.c,
   contains the generic parts.  The actual entry point for the normal
   instruction scheduling pass is found in sched-rgn.c.
   We compute insn priorities based on data dependencies.  Flow
   analysis only creates a fraction of the data-dependencies we must
   observe: namely, only those dependencies which the combiner can be
   expected to use.  For this pass, we must therefore create the
   remaining dependencies we need to observe: register dependencies,
   memory dependencies, dependencies to keep function calls in order,
   and the dependence between a conditional branch and the setting of
   condition codes are all dealt with here.
   The scheduler first traverses the data flow graph, starting with
   the last instruction, and proceeding to the first, assigning values
   to insn_priority as it goes.  This sorts the instructions
   topologically by data dependence.
   Once priorities have been established, we order the insns using
   list scheduling.  This works as follows: starting with a list of
   all the ready insns, and sorted according to priority number, we
   schedule the insn from the end of the list by placing its
   predecessors in the list according to their priority order.  We
   consider this insn scheduled by setting the pointer to the "end" of
   the list to point to the previous insn.  When an insn has no
   predecessors, we either queue it until sufficient time has elapsed
   or add it to the ready list.  As the instructions are scheduled or
   when stalls are introduced, the queue advances and dumps insns into
   the ready list.  When all insns down to the lowest priority have
   been scheduled, the critical path of the basic block has been made
   as short as possible.  The remaining insns are then scheduled in
   remaining slots as they become available.
   Function unit conflicts are resolved during forward list scheduling
   by tracking the time when each insn is committed to the schedule
   and from that, the time the function units it uses must be free.
   As insns on the ready list are considered for scheduling, those
   that would result in a blockage of the already committed insns are
   queued until no blockage will result.
   The following list shows the order in which we want to break ties
   among insns in the ready list:

   1.  choose insn with the longest path to end of bb, ties
   broken by
   2.  choose insn with least contribution to register pressure,
   ties broken by
   3.  prefer in-block upon interblock motion, ties broken by
   4.  prefer useful upon speculative motion, ties broken by
   5.  choose insn with largest control flow probability, ties
   broken by
   6.  choose insn with the least dependences upon the previously
   scheduled insn, or finally
   7.  choose the insn which has the most insns dependent on it, or
   8.  choose insn with lowest UID.
   Memory references complicate matters.  Only if we can be certain
   that memory references are not part of the data dependency graph
   (via true, anti, or output dependence), can we move operations past
   memory references.  To first approximation, reads can be done
   independently, while writes introduce dependencies.  Better
   approximations will yield fewer dependencies.
   Before reload, an extended analysis of interblock data dependences
   is required for interblock scheduling.  This is performed in
   compute_block_backward_dependences ().
   Dependencies set up by memory references are treated in exactly the
   same way as other dependencies, by using LOG_LINKS backward
   dependences.  LOG_LINKS are translated into INSN_DEPEND forward
   dependences for the purpose of forward list scheduling.
   Having optimized the critical path, we may have also unduly
   extended the lifetimes of some registers.  If an operation requires
   that constants be loaded into registers, it is certainly desirable
   to load those constants as early as necessary, but no earlier.
   I.e., it will not do to load up a bunch of registers at the
   beginning of a basic block only to use them at the end, if they
   could be loaded later, since this may result in excessive register
   lifetimes.
   Note that since branches are never in basic blocks, but only end
   basic blocks, this pass will not move branches.  But that is ok,
   since we can use GNU's delayed branch scheduling pass to take care
   of this case.
   Also note that no further optimizations based on algebraic
   identities are performed, so this pass would be a good one to
   perform instruction splitting, such as breaking up a multiply
   instruction into shifts and adds where that is profitable.
   Given the memory aliasing analysis that this pass should perform,
   it should be possible to remove redundant stores to memory, and to
   load values from registers instead of hitting memory.
   Before reload, speculative insns are moved only if a 'proof' exists
   that no exception will be caused by this, and if no live registers
   exist that inhibit the motion (live registers constraints are not
   represented by data dependence edges).
   This pass must update information that subsequent passes expect to
   be correct.  Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
   reg_n_calls_crossed, and reg_live_length.  Also, BLOCK_HEAD,
   BLOCK_END.
   The information in the line number notes is carefully retained by
   this pass.  Notes that refer to the starting and ending of
   exception regions are also carefully retained by this pass.  All
   other NOTE insns are grouped in their same relative order at the
   beginning of basic blocks and regions that have been scheduled.  */
#include "config.h"
#include "system.h"
#include "toplev.h"
#include "rtl.h"
#include "tm_p.h"
#include "hard-reg-set.h"
#include "basic-block.h"
#include "regs.h"
#include "function.h"
#include "flags.h"
#include "insn-config.h"
#include "insn-attr.h"
#include "except.h"
#include "recog.h"
#include "sched-int.h"
#include "target.h"
#ifdef INSN_SCHEDULING

/* issue_rate is the number of insns that can be scheduled in the same
   machine cycle.  It can be defined in the config/mach/mach.h file,
   otherwise we set it to 1.  */

static int issue_rate;
/* sched-verbose controls the amount of debugging output the
   scheduler prints.  It is controlled by -fsched-verbose=N:
   N>0 and no -DSR : the output is directed to stderr.
   N>=10 will direct the printouts to stderr (regardless of -dSR).
   N=1: same as -dSR.
   N=2: bb's probabilities, detailed ready list info, unit/insn info.
   N=3: rtl at abort point, control-flow, regions info.
   N=5: dependences info.  */

static int sched_verbose_param = 0;
int sched_verbose = 0;
/* Debugging file.  All printouts are sent to dump, which is always set,
   either to stderr, or to the dump listing file (-dRS).  */
FILE *sched_dump = 0;

/* Highest uid before scheduling.  */
static int old_max_uid;
/* fix_sched_param() is called from toplev.c upon detection
   of the -fsched-verbose=N option.  */

void
fix_sched_param (param, val)
     const char *param, *val;
{
  if (!strcmp (param, "verbose"))
    sched_verbose_param = atoi (val);
  else
    warning ("fix_sched_param: unknown param: %s", param);
}
struct haifa_insn_data *h_i_d;

#define DONE_PRIORITY	-1
#define MAX_PRIORITY	0x7fffffff
#define TAIL_PRIORITY	0x7ffffffe
#define LAUNCH_PRIORITY	0x7f000001
#define DONE_PRIORITY_P(INSN) (INSN_PRIORITY (INSN) < 0)
#define LOW_PRIORITY_P(INSN) ((INSN_PRIORITY (INSN) & 0x7f000000) == 0)

#define LINE_NOTE(INSN)		(h_i_d[INSN_UID (INSN)].line_note)
#define INSN_TICK(INSN)		(h_i_d[INSN_UID (INSN)].tick)
/* Vector indexed by basic block number giving the starting line-number
   for each basic block.  */
static rtx *line_note_head;

/* List of important notes we must keep around.  This is a pointer to the
   last element in the list.  */
static rtx note_list;
/* An instruction is ready to be scheduled when all insns preceding it
   have already been scheduled.  It is important to ensure that all
   insns which use its result will not be executed until its result
   has been computed.  An insn is maintained in one of four structures:

   (P) the "Pending" set of insns which cannot be scheduled until
   their dependencies have been satisfied.
   (Q) the "Queued" set of insns that can be scheduled when sufficient
   time has passed.
   (R) the "Ready" list of unscheduled, uncommitted insns.
   (S) the "Scheduled" list of insns.

   Initially, all insns are either "Pending" or "Ready" depending on
   whether their dependencies are satisfied.

   Insns move from the "Ready" list to the "Scheduled" list as they
   are committed to the schedule.  As this occurs, the insns in the
   "Pending" list have their dependencies satisfied and move to either
   the "Ready" list or the "Queued" set depending on whether
   sufficient time has passed to make them ready.  As time passes,
   insns move from the "Queued" set to the "Ready" list.  Insns may
   move from the "Ready" list to the "Queued" set if they are blocked
   due to a function unit conflict.

   The "Pending" list (P) are the insns in the INSN_DEPEND of the unscheduled
   insns, i.e., those that are ready, queued, and pending.
   The "Queued" set (Q) is implemented by the variable `insn_queue'.
   The "Ready" list (R) is implemented by the variables `ready' and
   `n_ready'.
   The "Scheduled" list (S) is the new insn chain built by this pass.

   The transition (R->S) is implemented in the scheduling loop in
   `schedule_block' when the best insn to schedule is chosen.
   The transition (R->Q) is implemented in `queue_insn' when an
   insn is found to have a function unit conflict with the already
   committed insns.
   The transitions (P->R and P->Q) are implemented in `schedule_insn' as
   insns move from the ready list to the scheduled list.
   The transition (Q->R) is implemented in `queue_to_ready' as time
   passes or stalls are introduced.  */
/* Implement a circular buffer to delay instructions until sufficient
   time has passed.  INSN_QUEUE_SIZE is a power of two larger than
   MAX_BLOCKAGE and MAX_READY_COST computed by genattr.c.  This is the
   longest time an insn may be queued.  */
static rtx insn_queue[INSN_QUEUE_SIZE];
static int q_ptr = 0;
static int q_size = 0;
#define NEXT_Q(X) (((X)+1) & (INSN_QUEUE_SIZE-1))
#define NEXT_Q_AFTER(X, C) (((X)+C) & (INSN_QUEUE_SIZE-1))
/* Describe the ready list of the scheduler.
   VEC holds space enough for all insns in the current region.  VECLEN
   says how many exactly.
   FIRST is the index of the element with the highest priority; i.e. the
   last one in the ready list, since elements are ordered by ascending
   priority.
   N_READY determines how many insns are on the ready list.  */

struct ready_list
{
  rtx *vec;
  int veclen;
  int first;
  int n_ready;
};
/* Forward declarations.  */
static unsigned int blockage_range PARAMS ((int, rtx));
static void clear_units PARAMS ((void));
static void schedule_unit PARAMS ((int, rtx, int));
static int actual_hazard PARAMS ((int, rtx, int, int));
static int potential_hazard PARAMS ((int, rtx, int));
static int priority PARAMS ((rtx));
static int rank_for_schedule PARAMS ((const PTR, const PTR));
static void swap_sort PARAMS ((rtx *, int));
static void queue_insn PARAMS ((rtx, int));
static void schedule_insn PARAMS ((rtx, struct ready_list *, int));
static void find_insn_reg_weight PARAMS ((int));
static void adjust_priority PARAMS ((rtx));
/* Notes handling mechanism:
   =========================
   Generally, NOTES are saved before scheduling and restored after scheduling.
   The scheduler distinguishes between three types of notes:

   (1) LINE_NUMBER notes, generated and used for debugging.  Here,
   before scheduling a region, a pointer to the LINE_NUMBER note is
   added to the insn following it (in save_line_notes()), and the note
   is removed (in rm_line_notes() and unlink_line_notes()).  After
   scheduling the region, this pointer is used for regeneration of
   the LINE_NUMBER note (in restore_line_notes()).

   (2) LOOP_BEGIN, LOOP_END, SETJMP, EHREGION_BEG, EHREGION_END notes:
   Before scheduling a region, a pointer to the note is added to the insn
   that follows or precedes it.  (This happens as part of the data dependence
   computation).  After scheduling an insn, the pointer contained in it is
   used for regenerating the corresponding note (in reemit_notes).

   (3) All other notes (e.g. INSN_DELETED):  Before scheduling a block,
   these notes are put in a list (in rm_other_notes() and
   unlink_other_notes ()).  After scheduling the block, these notes are
   inserted at the beginning of the block (in schedule_block()).  */
static rtx unlink_other_notes PARAMS ((rtx, rtx));
static rtx unlink_line_notes PARAMS ((rtx, rtx));
static rtx reemit_notes PARAMS ((rtx, rtx));

static rtx *ready_lastpos PARAMS ((struct ready_list *));
static void ready_sort PARAMS ((struct ready_list *));
static rtx ready_remove_first PARAMS ((struct ready_list *));

static void queue_to_ready PARAMS ((struct ready_list *));

static void debug_ready_list PARAMS ((struct ready_list *));

static rtx move_insn1 PARAMS ((rtx, rtx));
static rtx move_insn PARAMS ((rtx, rtx));
#endif /* INSN_SCHEDULING */

/* Point to state used for the current scheduling pass.  */
struct sched_info *current_sched_info;
#ifndef INSN_SCHEDULING
void
schedule_insns (dump_file)
     FILE *dump_file ATTRIBUTE_UNUSED;
{
}
#else
/* Pointer to the last instruction scheduled.  Used by rank_for_schedule,
   so that insns independent of the last scheduled insn will be preferred
   over dependent instructions.  */
static rtx last_scheduled_insn;
/* Compute the function units used by INSN.  This caches the value
   returned by function_units_used.  A function unit is encoded as the
   unit number if the value is non-negative and the complement of a
   mask if the value is negative.  A function unit index is the
   non-negative encoding.  */

HAIFA_INLINE static int
insn_unit (insn)
     rtx insn;
{
  register int unit = INSN_UNIT (insn);

  if (unit == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
	 We can't pass these directly to function_units_used because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
	unit = -1;
      else
	{
	  unit = function_units_used (insn);
	  /* Increment non-negative values so we can cache zero.  */
	  if (unit >= 0)
	    unit++;
	}

      /* We only cache 16 bits of the result, so if the value is out of
	 range, don't cache it.  */
      if (FUNCTION_UNITS_SIZE < HOST_BITS_PER_SHORT
	  || unit >= 0
	  || (unit & ~((1 << (HOST_BITS_PER_SHORT - 1)) - 1)) == 0)
	INSN_UNIT (insn) = unit;
    }
  return (unit > 0 ? unit - 1 : unit);
}
/* Compute the blockage range for executing INSN on UNIT.  This caches
   the value returned by the blockage_range_function for the unit.
   These values are encoded in an int where the upper half gives the
   minimum value and the lower half gives the maximum value.  */

HAIFA_INLINE static unsigned int
blockage_range (unit, insn)
     int unit;
     rtx insn;
{
  unsigned int blockage = INSN_BLOCKAGE (insn);
  unsigned int range;

  if ((int) UNIT_BLOCKED (blockage) != unit + 1)
    {
      range = function_units[unit].blockage_range_function (insn);
      /* We only cache the blockage range for one unit and then only if
	 the values fit.  */
      if (HOST_BITS_PER_INT >= UNIT_BITS + 2 * BLOCKAGE_BITS)
	INSN_BLOCKAGE (insn) = ENCODE_BLOCKAGE (unit + 1, range);
    }
  else
    range = BLOCKAGE_RANGE (blockage);

  return range;
}
/* A vector indexed by function unit instance giving the last insn to use
   the unit.  The value of the function unit instance index for unit U
   instance I is (U + I * FUNCTION_UNITS_SIZE).  */
static rtx unit_last_insn[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit instance giving the minimum time when
   the unit will unblock based on the maximum blockage cost.  */
static int unit_tick[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit number giving the number of insns
   that remain to use the unit.  */
static int unit_n_insns[FUNCTION_UNITS_SIZE];
/* Access the unit_last_insn array.  Used by the visualization code.  */

rtx
get_unit_last_insn (instance)
     int instance;
{
  return unit_last_insn[instance];
}
/* Reset the function unit state to the null state.  */

static void
clear_units ()
{
  memset ((char *) unit_last_insn, 0, sizeof (unit_last_insn));
  memset ((char *) unit_tick, 0, sizeof (unit_tick));
  memset ((char *) unit_n_insns, 0, sizeof (unit_n_insns));
}
/* Return the issue-delay of an insn.  */

HAIFA_INLINE static int
insn_issue_delay (insn)
     rtx insn;
{
  int i, delay = 0;
  int unit = insn_unit (insn);

  /* Efficiency note: in fact, we are working 'hard' to compute a
     value that was available in md file, and is not available in
     function_units[] structure.  It would be nice to have this
     value there, too.  */
  if (unit >= 0)
    {
      if (function_units[unit].blockage_range_function &&
	  function_units[unit].blockage_function)
	delay = function_units[unit].blockage_function (insn, insn);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0 && function_units[i].blockage_range_function
	  && function_units[i].blockage_function)
	delay = MAX (delay, function_units[i].blockage_function (insn, insn));

  return delay;
}
/* Return the actual hazard cost of executing INSN on the unit UNIT,
   instance INSTANCE at time CLOCK if the previous actual hazard cost
   was COST.  */

HAIFA_INLINE static int
actual_hazard_this_instance (unit, instance, insn, clock, cost)
     int unit, instance, clock, cost;
     rtx insn;
{
  int tick = unit_tick[instance];	/* Issue time of the last issued insn.  */

  if (tick - clock > cost)
    {
      /* The scheduler is operating forward, so unit's last insn is the
	 executing insn and INSN is the candidate insn.  We want a
	 more exact measure of the blockage if we execute INSN at CLOCK
	 given when we committed the execution of the unit's last insn.

	 The blockage value is given by either the unit's max blockage
	 constant, blockage range function, or blockage function.  Use
	 the most exact form for the given unit.  */

      if (function_units[unit].blockage_range_function)
	{
	  if (function_units[unit].blockage_function)
	    tick += (function_units[unit].blockage_function
		     (unit_last_insn[instance], insn)
		     - function_units[unit].max_blockage);
	  else
	    tick += ((int) MAX_BLOCKAGE_COST (blockage_range (unit, insn))
		     - function_units[unit].max_blockage);
	}
      if (tick - clock > cost)
	cost = tick - clock;
    }
  return cost;
}
/* Record INSN as having begun execution on the units encoded by UNIT at
   time CLOCK.  */

HAIFA_INLINE static void
schedule_unit (unit, insn, clock)
     int unit, clock;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      int instance = unit;
#if MAX_MULTIPLICITY > 1
      /* Find the first free instance of the function unit and use that
	 one.  We assume that one is free.  */
      for (i = function_units[unit].multiplicity - 1; i > 0; i--)
	{
	  if (!actual_hazard_this_instance (unit, instance, insn, clock, 0))
	    break;
	  instance += FUNCTION_UNITS_SIZE;
	}
#endif
      unit_last_insn[instance] = insn;
      unit_tick[instance] = (clock + function_units[unit].max_blockage);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	schedule_unit (i, insn, clock);
}
/* Return the actual hazard cost of executing INSN on the units encoded by
   UNIT at time CLOCK if the previous actual hazard cost was COST.  */

HAIFA_INLINE static int
actual_hazard (unit, insn, clock, cost)
     int unit, clock, cost;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      /* Find the instance of the function unit with the minimum hazard.  */
      int instance = unit;
      int best_cost = actual_hazard_this_instance (unit, instance, insn,
						   clock, cost);
#if MAX_MULTIPLICITY > 1
      int this_cost;

      if (best_cost > cost)
	{
	  for (i = function_units[unit].multiplicity - 1; i > 0; i--)
	    {
	      instance += FUNCTION_UNITS_SIZE;
	      this_cost = actual_hazard_this_instance (unit, instance, insn,
						       clock, cost);
	      if (this_cost < best_cost)
		{
		  best_cost = this_cost;
		  if (this_cost <= cost)
		    break;
		}
	    }
	}
#endif
      cost = MAX (cost, best_cost);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	cost = actual_hazard (i, insn, clock, cost);

  return cost;
}
/* Return the potential hazard cost of executing an instruction on the
   units encoded by UNIT if the previous potential hazard cost was COST.
   An insn with a large blockage time is chosen in preference to one
   with a smaller time; an insn that uses a unit that is more likely
   to be used is chosen in preference to one with a unit that is less
   used.  We are trying to minimize a subsequent actual hazard.  */

HAIFA_INLINE static int
potential_hazard (unit, insn, cost)
     int unit, cost;
     rtx insn;
{
  int i, ncost;
  unsigned int minb, maxb;

  if (unit >= 0)
    {
      minb = maxb = function_units[unit].max_blockage;
      if (maxb > 1)
	{
	  if (function_units[unit].blockage_range_function)
	    {
	      maxb = minb = blockage_range (unit, insn);
	      maxb = MAX_BLOCKAGE_COST (maxb);
	      minb = MIN_BLOCKAGE_COST (minb);
	    }

	  if (minb > 1)
	    {
	      /* Make the number of instructions left dominate.  Make the
		 minimum delay dominate the maximum delay.  If all these
		 are the same, use the unit number to add an arbitrary
		 ordering.  Other terms can be added.  */
	      ncost = minb * 0x40 + maxb;
	      ncost *= (unit_n_insns[unit] - 1) * 0x1000 + unit;
	      if (ncost > cost)
		cost = ncost;
	    }
	}
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	cost = potential_hazard (i, insn, cost);

  return cost;
}
/* Compute cost of executing INSN given the dependence LINK on the insn USED.
   This is the number of cycles between instruction issue and
   instruction results.  */

HAIFA_INLINE int
insn_cost (insn, link, used)
     rtx insn, link, used;
{
  register int cost = INSN_COST (insn);

  if (cost == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
	 We can't pass these directly to result_ready_cost because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
	{
	  INSN_COST (insn) = 1;
	  return 1;
	}
      else
	{
	  cost = result_ready_cost (insn);

	  if (cost < 1)
	    cost = 1;

	  INSN_COST (insn) = cost;
	}
    }

  /* In this case estimate cost without caring how insn is used.  */
  if (link == 0 && used == 0)
    return cost;

  /* A USE insn should never require the value used to be computed.  This
     allows the computation of a function's result and parameter values to
     overlap the return and call.  */
  recog_memoized (used);
  if (INSN_CODE (used) < 0)
    LINK_COST_FREE (link) = 1;

  /* If some dependencies vary the cost, compute the adjustment.  Most
     commonly, the adjustment is complete: either the cost is ignored
     (in the case of an output- or anti-dependence), or the cost is
     unchanged.  These values are cached in the link as LINK_COST_FREE
     and LINK_COST_ZERO.  */

  if (LINK_COST_FREE (link))
    cost = 0;
  else if (!LINK_COST_ZERO (link) && targetm.sched.adjust_cost)
    {
      int ncost = (*targetm.sched.adjust_cost) (used, link, insn, cost);
      if (ncost < 1)
	{
	  LINK_COST_FREE (link) = 1;
	  ncost = 0;
	}
      if (cost == ncost)
	LINK_COST_ZERO (link) = 1;
      cost = ncost;
    }

  return cost;
}
/* Compute the priority number for INSN.  */

static int
priority (insn)
     rtx insn;
{
  rtx link;

  if (! INSN_P (insn))
    return 0;

  if (! INSN_PRIORITY_KNOWN (insn))
    {
      int this_priority = 0;

      if (INSN_DEPEND (insn) == 0)
	this_priority = insn_cost (insn, 0, 0);
      else
	{
	  for (link = INSN_DEPEND (insn); link; link = XEXP (link, 1))
	    {
	      rtx next;
	      int next_priority;

	      if (RTX_INTEGRATED_P (link))
		continue;

	      next = XEXP (link, 0);

	      /* Critical path is meaningful in block boundaries only.  */
	      if (! (*current_sched_info->contributes_to_priority) (next, insn))
		continue;

	      next_priority = insn_cost (insn, link, next) + priority (next);
	      if (next_priority > this_priority)
		this_priority = next_priority;
	    }
	}
      INSN_PRIORITY (insn) = this_priority;
      INSN_PRIORITY_KNOWN (insn) = 1;
    }

  return INSN_PRIORITY (insn);
}
/* Macros and functions for keeping the priority queue sorted, and
   dealing with queueing and dequeueing of instructions.  */

#define SCHED_SORT(READY, N_READY)                                   \
do { if ((N_READY) == 2)				             \
       swap_sort (READY, N_READY);			             \
     else if ((N_READY) > 2)                                         \
         qsort (READY, N_READY, sizeof (rtx), rank_for_schedule); }  \
while (0)
/* Returns a positive value if x is preferred; returns a negative value if
   y is preferred.  Should never return 0, since that will make the sort
   unstable.  */

static int
rank_for_schedule (x, y)
     const PTR x;
     const PTR y;
{
  rtx tmp = *(const rtx *) y;
  rtx tmp2 = *(const rtx *) x;
  rtx link;
  int tmp_class, tmp2_class, depend_count1, depend_count2;
  int val, priority_val, weight_val, info_val;

  /* Prefer insn with higher priority.  */
  priority_val = INSN_PRIORITY (tmp2) - INSN_PRIORITY (tmp);
  if (priority_val)
    return priority_val;

  /* Prefer an insn with smaller contribution to registers-pressure.  */
  if (!reload_completed &&
      (weight_val = INSN_REG_WEIGHT (tmp) - INSN_REG_WEIGHT (tmp2)))
    return weight_val;

  info_val = (*current_sched_info->rank) (tmp, tmp2);
  if (info_val)
    return info_val;

  /* Compare insns based on their relation to the last-scheduled-insn.  */
  if (last_scheduled_insn)
    {
      /* Classify the instructions into three classes:
	 1) Data dependent on last schedule insn.
	 2) Anti/Output dependent on last scheduled insn.
	 3) Independent of last scheduled insn, or has latency of one.
	 Choose the insn from the highest numbered class if different.  */
      link = find_insn_list (tmp, INSN_DEPEND (last_scheduled_insn));
      if (link == 0 || insn_cost (last_scheduled_insn, link, tmp) == 1)
	tmp_class = 3;
      else if (REG_NOTE_KIND (link) == 0)	/* Data dependence.  */
	tmp_class = 1;
      else
	tmp_class = 2;

      link = find_insn_list (tmp2, INSN_DEPEND (last_scheduled_insn));
      if (link == 0 || insn_cost (last_scheduled_insn, link, tmp2) == 1)
	tmp2_class = 3;
      else if (REG_NOTE_KIND (link) == 0)	/* Data dependence.  */
	tmp2_class = 1;
      else
	tmp2_class = 2;

      if ((val = tmp2_class - tmp_class))
	return val;
    }

  /* Prefer the insn which has more later insns that depend on it.
     This gives the scheduler more freedom when scheduling later
     instructions at the expense of added register pressure.  */
  depend_count1 = 0;
  for (link = INSN_DEPEND (tmp); link; link = XEXP (link, 1))
    depend_count1++;

  depend_count2 = 0;
  for (link = INSN_DEPEND (tmp2); link; link = XEXP (link, 1))
    depend_count2++;

  val = depend_count2 - depend_count1;
  if (val)
    return val;

  /* If insns are equally good, sort by INSN_LUID (original insn order),
     so that we make the sort stable.  This minimizes instruction movement,
     thus minimizing sched's effect on debugging and cross-jumping.  */
  return INSN_LUID (tmp) - INSN_LUID (tmp2);
}
/* Resort the array A in which only element at index N may be out of order.  */

HAIFA_INLINE static void
swap_sort (a, n)
     rtx *a;
     int n;
{
  rtx insn = a[n - 1];
  int i = n - 2;

  while (i >= 0 && rank_for_schedule (a + i, &insn) >= 0)
    {
      a[i + 1] = a[i];
      i -= 1;
    }
  a[i + 1] = insn;
}
/* Add INSN to the insn queue so that it can be executed at least
   N_CYCLES after the currently executing insn.  Preserve insns
   chain for debugging purposes.  */

HAIFA_INLINE static void
queue_insn (insn, n_cycles)
     rtx insn;
     int n_cycles;
{
  int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
  rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]);
  insn_queue[next_q] = link;
  q_size += 1;

  if (sched_verbose >= 2)
    {
      fprintf (sched_dump, ";;\t\tReady-->Q: insn %s: ",
	       (*current_sched_info->print_insn) (insn, 0));

      fprintf (sched_dump, "queued for %d cycles.\n", n_cycles);
    }
}
/* Return a pointer to the bottom of the ready list, i.e. the insn
   with the lowest priority.  */

HAIFA_INLINE static rtx *
ready_lastpos (ready)
     struct ready_list *ready;
{
  if (ready->n_ready == 0)
    abort ();
  return ready->vec + ready->first - ready->n_ready + 1;
}
/* Add an element INSN to the ready list so that it ends up with the lowest
   priority in the list.  */

HAIFA_INLINE static void
ready_add (ready, insn)
     struct ready_list *ready;
     rtx insn;
{
  if (ready->first == ready->n_ready)
    {
      memmove (ready->vec + ready->veclen - ready->n_ready,
	       ready_lastpos (ready),
	       ready->n_ready * sizeof (rtx));
      ready->first = ready->veclen - 1;
    }
  ready->vec[ready->first - ready->n_ready] = insn;
  ready->n_ready++;
}
/* Remove the element with the highest priority from the ready list and
   return it.  */

HAIFA_INLINE static rtx
ready_remove_first (ready)
     struct ready_list *ready;
{
  rtx t;

  if (ready->n_ready == 0)
    abort ();
  t = ready->vec[ready->first--];
  ready->n_ready--;
  /* If the queue becomes empty, reset it.  */
  if (ready->n_ready == 0)
    ready->first = ready->veclen - 1;
  return t;
}
/* Sort the ready list READY by ascending priority, using the SCHED_SORT
   macro.  */

HAIFA_INLINE static void
ready_sort (ready)
     struct ready_list *ready;
{
  rtx *first = ready_lastpos (ready);
  SCHED_SORT (first, ready->n_ready);
}
/* PREV is an insn that is ready to execute.  Adjust its priority if that
   will help shorten or lengthen register lifetimes as appropriate.  Also
   provide a hook for the target to tweak itself.  */

HAIFA_INLINE static void
adjust_priority (prev)
     rtx prev;
{
  /* ??? There used to be code here to try and estimate how an insn
     affected register lifetimes, but it did it by looking at REG_DEAD
     notes, which we removed in schedule_region.  Nor did it try to
     take into account register pressure or anything useful like that.

     Revisit when we have a machine model to work with and not before.  */

  if (targetm.sched.adjust_priority)
    INSN_PRIORITY (prev) =
      (*targetm.sched.adjust_priority) (prev, INSN_PRIORITY (prev));
}
/* Clock at which the previous instruction was issued.  */
static int last_clock_var;
/* INSN is the "currently executing insn".  Launch each insn which was
   waiting on INSN.  READY is the ready list which contains the insns
   that are ready to fire.  CLOCK is the current cycle.  */

static void
schedule_insn (insn, ready, clock)
     rtx insn;
     struct ready_list *ready;
     int clock;
{
  rtx link;
  int unit = 0;

  if (MAX_BLOCKAGE > 1 || issue_rate > 1 || sched_verbose)
    unit = insn_unit (insn);

  if (sched_verbose >= 2)
    {
      fprintf (sched_dump, ";;\t\t--> scheduling insn <<<%d>>> on unit ",
	       INSN_UID (insn));
      insn_print_units (insn);
      fprintf (sched_dump, "\n");
    }

  if (sched_verbose && unit == -1)
    visualize_no_unit (insn);

  if (MAX_BLOCKAGE > 1 || issue_rate > 1 || sched_verbose)
    schedule_unit (unit, insn, clock);

  if (INSN_DEPEND (insn) == 0)
    return;

  for (link = INSN_DEPEND (insn); link != 0; link = XEXP (link, 1))
    {
      rtx next = XEXP (link, 0);
      int cost = insn_cost (insn, link, next);

      INSN_TICK (next) = MAX (INSN_TICK (next), clock + cost);

      if ((INSN_DEP_COUNT (next) -= 1) == 0)
	{
	  int effective_cost = INSN_TICK (next) - clock;

	  if (! (*current_sched_info->new_ready) (next))
	    continue;

	  if (sched_verbose >= 2)
	    {
	      fprintf (sched_dump, ";;\t\tdependences resolved: insn %s ",
		       (*current_sched_info->print_insn) (next, 0));

	      if (effective_cost < 1)
		fprintf (sched_dump, "into ready\n");
	      else
		fprintf (sched_dump, "into queue with cost=%d\n",
			 effective_cost);
	    }

	  /* Adjust the priority of NEXT and either put it on the ready
	     list or queue it.  */
	  adjust_priority (next);
	  if (effective_cost < 1)
	    ready_add (ready, next);
	  else
	    queue_insn (next, effective_cost);
	}
    }

  /* Annotate the instruction with issue information -- TImode
     indicates that the instruction is expected not to be able
     to issue on the same cycle as the previous insn.  A machine
     may use this information to decide how the instruction should
     be aligned.  */
  if (reload_completed && issue_rate > 1)
    {
      PUT_MODE (insn, clock > last_clock_var ? TImode : VOIDmode);
      last_clock_var = clock;
    }
}
/* Functions for handling of notes.  */

/* Delete notes beginning with INSN and put them in the chain
   of notes ended by NOTE_LIST.
   Returns the insn following the notes.  */

static rtx
unlink_other_notes (insn, tail)
     rtx insn, tail;
{
  rtx prev = PREV_INSN (insn);

  while (insn != tail && GET_CODE (insn) == NOTE)
    {
      rtx next = NEXT_INSN (insn);
      /* Delete the note from its current position.  */
      if (prev)
	NEXT_INSN (prev) = next;
      if (next)
	PREV_INSN (next) = prev;

      /* See sched_analyze to see how these are handled.  */
      if (NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_BEG
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_END
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_RANGE_BEG
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_RANGE_END
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_BEG
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_END)
	{
	  /* Insert the note at the end of the notes list.  */
	  PREV_INSN (insn) = note_list;
	  if (note_list)
	    NEXT_INSN (note_list) = insn;
	  note_list = insn;
	}

      insn = next;
    }
  return insn;
}
/* Delete line notes beginning with INSN.  Record line-number notes so
   they can be reused.  Returns the insn following the notes.  */

static rtx
unlink_line_notes (insn, tail)
     rtx insn, tail;
{
  rtx prev = PREV_INSN (insn);

  while (insn != tail && GET_CODE (insn) == NOTE)
    {
      rtx next = NEXT_INSN (insn);

      if (write_symbols != NO_DEBUG && NOTE_LINE_NUMBER (insn) > 0)
	{
	  /* Delete the note from its current position.  */
	  if (prev)
	    NEXT_INSN (prev) = next;
	  if (next)
	    PREV_INSN (next) = prev;

	  /* Record line-number notes so they can be reused.  */
	  LINE_NOTE (insn) = insn;
	}
      else
	prev = insn;

      insn = next;
    }
  return insn;
}
/* Return the head and tail pointers of BB.  */

void
get_block_head_tail (b, headp, tailp)
     int b;
     rtx *headp;
     rtx *tailp;
{
  /* HEAD and TAIL delimit the basic block being scheduled.  */
  rtx head = BLOCK_HEAD (b);
  rtx tail = BLOCK_END (b);

  /* Don't include any notes or labels at the beginning of the
     basic block, or notes at the ends of basic blocks.  */
  while (head != tail)
    {
      if (GET_CODE (head) == NOTE)
	head = NEXT_INSN (head);
      else if (GET_CODE (tail) == NOTE)
	tail = PREV_INSN (tail);
      else if (GET_CODE (head) == CODE_LABEL)
	head = NEXT_INSN (head);
      else
	break;
    }

  *headp = head;
  *tailp = tail;
}
/* Return nonzero if there are no real insns in the range [ HEAD, TAIL ].  */

int
no_real_insns_p (head, tail)
     rtx head, tail;
{
  while (head != NEXT_INSN (tail))
    {
      if (GET_CODE (head) != NOTE && GET_CODE (head) != CODE_LABEL)
	return 0;
      head = NEXT_INSN (head);
    }
  return 1;
}
/* Delete line notes from one block.  Save them so they can be later restored
   (in restore_line_notes).  HEAD and TAIL are the boundaries of the
   block in which notes should be processed.  */

void
rm_line_notes (head, tail)
     rtx head, tail;
{
  rtx next_tail;
  rtx insn;

  next_tail = NEXT_INSN (tail);
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      rtx prev;

      /* Farm out notes, and maybe save them in NOTE_LIST.
	 This is needed to keep the debugger from
	 getting completely deranged.  */
      if (GET_CODE (insn) == NOTE)
	{
	  prev = insn;
	  insn = unlink_line_notes (insn, next_tail);

	  if (prev == tail)
	    abort ();
	  if (prev == head)
	    abort ();
	  if (insn == next_tail)
	    abort ();
	}
    }
}
/* Save line number notes for each insn in block B.  HEAD and TAIL are
   the boundaries of the block in which notes should be processed.  */

void
save_line_notes (b, head, tail)
     int b;
     rtx head, tail;
{
  rtx next_tail;

  /* We must use the true line number for the first insn in the block
     that was computed and saved at the start of this pass.  We can't
     use the current line number, because scheduling of the previous
     block may have changed the current line number.  */

  rtx line = line_note_head[b];
  rtx insn;

  next_tail = NEXT_INSN (tail);

  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
      line = insn;
    else
      LINE_NOTE (insn) = line;
}
/* After a block was scheduled, insert line notes into the insns list.
   HEAD and TAIL are the boundaries of the block in which notes should
   be processed.  */

void
restore_line_notes (head, tail)
     rtx head, tail;
{
  rtx line, note, prev, new;
  int added_notes = 0;
  rtx next_tail, insn;

  next_tail = NEXT_INSN (tail);

  /* Determine the current line-number.  We want to know the current
     line number of the first insn of the block here, in case it is
     different from the true line number that was saved earlier.  If
     different, then we need a line number note before the first insn
     of this block.  If it happens to be the same, then we don't want to
     emit another line number note here.  */
  for (line = head; line; line = PREV_INSN (line))
    if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
      break;

  /* Walk the insns keeping track of the current line-number and inserting
     the line-number notes as needed.  */
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
      line = insn;
  /* This used to emit line number notes before every non-deleted note.
     However, this confuses a debugger, because line notes not separated
     by real instructions all end up at the same address.  I can find no
     use for line number notes before other notes, so none are emitted.  */
    else if (GET_CODE (insn) != NOTE
	     && INSN_UID (insn) < old_max_uid
	     && (note = LINE_NOTE (insn)) != 0
	     && note != line
	     && (line == 0
		 || NOTE_LINE_NUMBER (note) != NOTE_LINE_NUMBER (line)
		 || NOTE_SOURCE_FILE (note) != NOTE_SOURCE_FILE (line)))
      {
	line = note;
	prev = PREV_INSN (insn);
	if (LINE_NOTE (note))
	  {
	    /* Re-use the original line-number note.  */
	    LINE_NOTE (note) = 0;
	    PREV_INSN (note) = prev;
	    NEXT_INSN (prev) = note;
	    PREV_INSN (insn) = note;
	    NEXT_INSN (note) = insn;
	  }
	else
	  {
	    added_notes++;
	    new = emit_note_after (NOTE_LINE_NUMBER (note), prev);
	    NOTE_SOURCE_FILE (new) = NOTE_SOURCE_FILE (note);
	    RTX_INTEGRATED_P (new) = RTX_INTEGRATED_P (note);
	  }
      }
  if (sched_verbose && added_notes)
    fprintf (sched_dump, ";; added %d line-number notes\n", added_notes);
}
/* After scheduling the function, delete redundant line notes from the
   insns list.  */

void
rm_redundant_line_notes ()
{
  rtx line = 0;
  rtx insn = get_insns ();
  int active_insn = 0;
  int notes = 0;

  /* Walk the insns deleting redundant line-number notes.  Many of these
     are already present.  The remainder tend to occur at basic
     block boundaries.  */
  for (insn = get_last_insn (); insn; insn = PREV_INSN (insn))
    if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
      {
	/* If there are no active insns following, INSN is redundant.  */
	if (active_insn == 0)
	  {
	    notes++;
	    NOTE_SOURCE_FILE (insn) = 0;
	    NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
	  }
	/* If the line number is unchanged, LINE is redundant.  */
	else if (line
		 && NOTE_LINE_NUMBER (line) == NOTE_LINE_NUMBER (insn)
		 && NOTE_SOURCE_FILE (line) == NOTE_SOURCE_FILE (insn))
	  {
	    notes++;
	    NOTE_SOURCE_FILE (line) = 0;
	    NOTE_LINE_NUMBER (line) = NOTE_INSN_DELETED;
	    line = insn;
	  }
	else
	  line = insn;

	active_insn = 0;
      }
    else if (!((GET_CODE (insn) == NOTE
		&& NOTE_LINE_NUMBER (insn) == NOTE_INSN_DELETED)
	       || (GET_CODE (insn) == INSN
		   && (GET_CODE (PATTERN (insn)) == USE
		       || GET_CODE (PATTERN (insn)) == CLOBBER))))
      active_insn++;

  if (sched_verbose && notes)
    fprintf (sched_dump, ";; deleted %d line-number notes\n", notes);
}
/* Delete notes between HEAD and TAIL and put them in the chain
   of notes ended by NOTE_LIST.  */

void
rm_other_notes (head, tail)
     rtx head;
     rtx tail;
{
  rtx next_tail;
  rtx insn;

  note_list = 0;
  if (head == tail && (! INSN_P (head)))
    return;

  next_tail = NEXT_INSN (tail);
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      rtx prev;

      /* Farm out notes, and maybe save them in NOTE_LIST.
	 This is needed to keep the debugger from
	 getting completely deranged.  */
      if (GET_CODE (insn) == NOTE)
	{
	  prev = insn;
	  insn = unlink_other_notes (insn, next_tail);

	  if (prev == tail)
	    abort ();
	  if (prev == head)
	    abort ();
	  if (insn == next_tail)
	    abort ();
	}
    }
}
/* Functions for computation of registers live/usage info.  */

/* Calculate INSN_REG_WEIGHT for all insns of a block.  */

static void
find_insn_reg_weight (b)
     int b;
{
  rtx insn, next_tail, head, tail;

  get_block_head_tail (b, &head, &tail);
  next_tail = NEXT_INSN (tail);

  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      int reg_weight = 0;
      rtx x;

      /* Handle register life information.  */
      if (! INSN_P (insn))
	continue;

      /* Increment weight for each register born here.  */
      x = PATTERN (insn);
      if ((GET_CODE (x) == SET || GET_CODE (x) == CLOBBER)
	  && register_operand (SET_DEST (x), VOIDmode))
	reg_weight++;
      else if (GET_CODE (x) == PARALLEL)
	{
	  int j;
	  for (j = XVECLEN (x, 0) - 1; j >= 0; j--)
	    {
	      x = XVECEXP (PATTERN (insn), 0, j);
	      if ((GET_CODE (x) == SET || GET_CODE (x) == CLOBBER)
		  && register_operand (SET_DEST (x), VOIDmode))
		reg_weight++;
	    }
	}

      /* Decrement weight for each register that dies here.  */
      for (x = REG_NOTES (insn); x; x = XEXP (x, 1))
	{
	  if (REG_NOTE_KIND (x) == REG_DEAD
	      || REG_NOTE_KIND (x) == REG_UNUSED)
	    reg_weight--;
	}

      INSN_REG_WEIGHT (insn) = reg_weight;
    }
}
/* Scheduling clock, modified in schedule_block () and queue_to_ready ().  */
static int clock_var;

/* Move insns that became ready to fire from queue to ready list.  */

static void
queue_to_ready (ready)
     struct ready_list *ready;
{
  rtx insn;
  rtx link;

  q_ptr = NEXT_Q (q_ptr);

  /* Add all pending insns that can be scheduled without stalls to the
     ready list.  */
  for (link = insn_queue[q_ptr]; link; link = XEXP (link, 1))
    {
      insn = XEXP (link, 0);
      q_size -= 1;

      if (sched_verbose >= 2)
	fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
		 (*current_sched_info->print_insn) (insn, 0));

      ready_add (ready, insn);
      if (sched_verbose >= 2)
	fprintf (sched_dump, "moving to ready without stalls\n");
    }
  insn_queue[q_ptr] = 0;

  /* If there are no ready insns, stall until one is ready and add all
     of the pending insns at that point to the ready list.  */
  if (ready->n_ready == 0)
    {
      register int stalls;

      for (stalls = 1; stalls < INSN_QUEUE_SIZE; stalls++)
	{
	  if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
	    {
	      for (; link; link = XEXP (link, 1))
		{
		  insn = XEXP (link, 0);
		  q_size -= 1;

		  if (sched_verbose >= 2)
		    fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
			     (*current_sched_info->print_insn) (insn, 0));

		  ready_add (ready, insn);
		  if (sched_verbose >= 2)
		    fprintf (sched_dump, "moving to ready with %d stalls\n",
			     stalls);
		}
	      insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = 0;

	      break;
	    }
	}

      if (sched_verbose && stalls)
	visualize_stall_cycles (stalls);
      q_ptr = NEXT_Q_AFTER (q_ptr, stalls);
      clock_var += stalls;
    }
}
/* Print the ready list for debugging purposes.  Callable from debugger.  */

static void
debug_ready_list (ready)
     struct ready_list *ready;
{
  rtx *p;
  int i;

  if (ready->n_ready == 0)
    return;

  p = ready_lastpos (ready);
  for (i = 0; i < ready->n_ready; i++)
    fprintf (sched_dump, "  %s", (*current_sched_info->print_insn) (p[i], 0));
  fprintf (sched_dump, "\n");
}
/* move_insn1: Remove INSN from insn chain, and link it after LAST insn.  */

static rtx
move_insn1 (insn, last)
     rtx insn, last;
{
  NEXT_INSN (PREV_INSN (insn)) = NEXT_INSN (insn);
  PREV_INSN (NEXT_INSN (insn)) = PREV_INSN (insn);

  NEXT_INSN (insn) = NEXT_INSN (last);
  PREV_INSN (NEXT_INSN (last)) = insn;

  NEXT_INSN (last) = insn;
  PREV_INSN (insn) = last;

  return insn;
}
/* Search INSN for REG_SAVE_NOTE note pairs for
   NOTE_INSN_{LOOP,EHREGION}_{BEG,END}; and convert them back into
   NOTEs.  The REG_SAVE_NOTE note following the first one contains the
   saved value for NOTE_BLOCK_NUMBER which is useful for
   NOTE_INSN_EH_REGION_{BEG,END} NOTEs.  LAST is the last instruction
   output by the instruction scheduler.  Return the new value of LAST.  */

static rtx
reemit_notes (insn, last)
     rtx insn;
     rtx last;
{
  rtx note, retval;

  retval = last;
  for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
    {
      if (REG_NOTE_KIND (note) == REG_SAVE_NOTE)
	{
	  enum insn_note note_type = INTVAL (XEXP (note, 0));

	  if (note_type == NOTE_INSN_RANGE_BEG
	      || note_type == NOTE_INSN_RANGE_END)
	    {
	      last = emit_note_before (note_type, last);
	      remove_note (insn, note);
	      note = XEXP (note, 1);
	      NOTE_RANGE_INFO (last) = XEXP (note, 0);
	    }
	  else
	    {
	      last = emit_note_before (note_type, last);
	      remove_note (insn, note);
	      note = XEXP (note, 1);
	      if (note_type == NOTE_INSN_EH_REGION_BEG
		  || note_type == NOTE_INSN_EH_REGION_END)
		NOTE_EH_HANDLER (last) = INTVAL (XEXP (note, 0));
	    }
	  remove_note (insn, note);
	}
    }
  return retval;
}
/* Move INSN, and all insns which should be issued before it,
   due to SCHED_GROUP_P flag.  Reemit notes if needed.

   Return the last insn emitted by the scheduler, which is the
   return value from the first call to reemit_notes.  */

static rtx
move_insn (insn, last)
     rtx insn, last;
{
  rtx retval = NULL;

  /* If INSN has SCHED_GROUP_P set, then issue it and any other
     insns with SCHED_GROUP_P set first.  */
  while (SCHED_GROUP_P (insn))
    {
      rtx prev = PREV_INSN (insn);

      /* Move a SCHED_GROUP_P insn.  */
      move_insn1 (insn, last);
      /* If this is the first call to reemit_notes, then record
	 its return value.  */
      if (retval == NULL_RTX)
	retval = reemit_notes (insn, insn);
      else
	reemit_notes (insn, insn);

      insn = prev;
    }

  /* Now move the first non SCHED_GROUP_P insn.  */
  move_insn1 (insn, last);

  /* If this is the first call to reemit_notes, then record
     its return value.  */
  if (retval == NULL_RTX)
    retval = reemit_notes (insn, insn);
  else
    reemit_notes (insn, insn);

  return retval;
}
/* Use forward list scheduling to rearrange insns of block B in region RGN,
   possibly bringing insns from subsequent blocks in the same region.  */

void
schedule_block (b, rgn_n_insns)
     int b;
     int rgn_n_insns;
{
  struct ready_list ready;
  int can_issue_more;
  rtx last;

  /* Head/tail info for this block.  */
  rtx prev_head = current_sched_info->prev_head;
  rtx next_tail = current_sched_info->next_tail;
  rtx head = NEXT_INSN (prev_head);
  rtx tail = PREV_INSN (next_tail);

  /* We used to have code to avoid getting parameters moved from hard
     argument registers into pseudos.

     However, it was removed when it proved to be of marginal benefit
     and caused problems because schedule_block and compute_forward_dependences
     had different notions of what the "head" insn was.  */

  if (head == tail && (! INSN_P (head)))
    abort ();

  /* Debug info.  */
  if (sched_verbose)
    {
      fprintf (sched_dump,
	       ";;   ======================================================\n");
      fprintf (sched_dump,
	       ";;   -- basic block %d from %d to %d -- %s reload\n",
	       b, INSN_UID (head), INSN_UID (tail),
	       (reload_completed ? "after" : "before"));
      fprintf (sched_dump,
	       ";;   ======================================================\n");
      fprintf (sched_dump, "\n");

      init_block_visualization ();
    }

  clear_units ();

  /* Allocate the ready list.  */
  ready.veclen = rgn_n_insns + 1 + issue_rate;
  ready.first = ready.veclen - 1;
  ready.vec = (rtx *) xmalloc (ready.veclen * sizeof (rtx));
  ready.n_ready = 0;

  (*current_sched_info->init_ready_list) (&ready);

  if (targetm.sched.md_init)
    (*targetm.sched.md_init) (sched_dump, sched_verbose, ready.veclen);

  /* No insns scheduled in this block yet.  */
  last_scheduled_insn = 0;

  /* Initialize INSN_QUEUE.  Q_SIZE is the total number of insns in the
     queue.  */
  q_ptr = 0;
  q_size = 0;
  last_clock_var = 0;
  memset ((char *) insn_queue, 0, sizeof (insn_queue));

  /* Start just before the beginning of time.  */
  clock_var = -1;

  /* We start inserting insns after PREV_HEAD.  */
  last = prev_head;

  /* Loop until all the insns in BB are scheduled.  */
  while ((*current_sched_info->schedule_more_p) ())
    {
      clock_var++;

      /* Add to the ready list all pending insns that can be issued now.
	 If there are no ready insns, increment clock until one
	 is ready and add all pending insns at that point to the ready
	 list.  */
      queue_to_ready (&ready);

      if (sched_verbose && targetm.sched.cycle_display)
	last = (*targetm.sched.cycle_display) (clock_var, last);

      if (ready.n_ready == 0)
	abort ();

      if (sched_verbose >= 2)
	{
	  fprintf (sched_dump, ";;\t\tReady list after queue_to_ready:  ");
	  debug_ready_list (&ready);
	}

      /* Sort the ready list based on priority.  */
      ready_sort (&ready);

      /* Allow the target to reorder the list, typically for
	 better instruction bundling.  */
      if (targetm.sched.reorder)
	can_issue_more =
	  (*targetm.sched.reorder) (sched_dump, sched_verbose,
				    ready_lastpos (&ready),
				    &ready.n_ready, clock_var);
      else
	can_issue_more = issue_rate;

      if (sched_verbose)
	{
	  fprintf (sched_dump, "\n;;\tReady list (t =%3d):  ", clock_var);
	  debug_ready_list (&ready);
	}

      /* Issue insns from ready list.  */
      while (ready.n_ready != 0
	     && can_issue_more
	     && (*current_sched_info->schedule_more_p) ())
	{
	  /* Select and remove the insn from the ready list.  */
	  rtx insn = ready_remove_first (&ready);
	  int cost = actual_hazard (insn_unit (insn), insn, clock_var, 0);

	  if (cost >= 1)
	    {
	      queue_insn (insn, cost);
	      continue;
	    }

	  if (! (*current_sched_info->can_schedule_ready_p) (insn))
	    continue;

	  last_scheduled_insn = insn;
	  last = move_insn (insn, last);

	  if (targetm.sched.variable_issue)
	    can_issue_more =
	      (*targetm.sched.variable_issue) (sched_dump, sched_verbose,
					       insn, can_issue_more);
	  else
	    can_issue_more--;

	  schedule_insn (insn, &ready, clock_var);
	}

      if (targetm.sched.reorder2)
	{
	  /* Sort the ready list based on priority.  */
	  if (ready.n_ready > 0)
	    ready_sort (&ready);
	  can_issue_more =
	    (*targetm.sched.reorder2) (sched_dump, sched_verbose,
				       ready.n_ready
				       ? ready_lastpos (&ready) : NULL,
				       &ready.n_ready, clock_var);
	}

      /* Debug info.  */
      if (sched_verbose)
	visualize_scheduled_insns (clock_var);
    }

  if (targetm.sched.md_finish)
    (*targetm.sched.md_finish) (sched_dump, sched_verbose);

  /* Debug info.  */
  if (sched_verbose)
    {
      fprintf (sched_dump, ";;\tReady list (final):  ");
      debug_ready_list (&ready);
      print_block_visualization ("");
    }

  /* Sanity check -- queue must be empty now.  Meaningless if region has
     multiple bbs.  */
  if (current_sched_info->queue_must_finish_empty && q_size != 0)
    abort ();

  /* Update head/tail boundaries.  */
  head = NEXT_INSN (prev_head);
  tail = last;

  /* Restore-other-notes: NOTE_LIST is the end of a chain of notes
     previously found among the insns.  Insert them at the beginning
     of the insns.  */
  if (note_list != 0)
    {
      rtx note_head = note_list;

      while (PREV_INSN (note_head))
	note_head = PREV_INSN (note_head);

      PREV_INSN (note_head) = PREV_INSN (head);
      NEXT_INSN (PREV_INSN (head)) = note_head;
      PREV_INSN (head) = note_list;
      NEXT_INSN (note_list) = head;
      head = note_head;
    }

  /* Debugging.  */
  if (sched_verbose)
    {
      fprintf (sched_dump, ";;   total time = %d\n;;   new head = %d\n",
	       clock_var, INSN_UID (head));
      fprintf (sched_dump, ";;   new tail = %d\n\n",
	       INSN_UID (tail));
    }

  current_sched_info->head = head;
  current_sched_info->tail = tail;

  free (ready.vec);
}
/* Set_priorities: compute priority of each insn in the block.  */

int
set_priorities (head, tail)
     rtx head, tail;
{
  rtx insn;
  int n_insn;

  rtx prev_head;

  prev_head = PREV_INSN (head);

  if (head == tail && (! INSN_P (head)))
    return 0;

  n_insn = 0;
  for (insn = tail; insn != prev_head; insn = PREV_INSN (insn))
    {
      if (GET_CODE (insn) == NOTE)
	continue;

      if (!(SCHED_GROUP_P (insn)))
	n_insn++;
      (void) priority (insn);
    }

  return n_insn;
}
/* Initialize some global state for the scheduler.  DUMP_FILE is to be used
   for debugging output.  */

void
sched_init (dump_file)
     FILE *dump_file;
{
  int luid, b;
  rtx insn;

  /* Disable speculative loads in their presence if cc0 defined.  */
#ifdef HAVE_cc0
  flag_schedule_speculative_load = 0;
#endif

  /* Set dump and sched_verbose for the desired debugging output.  If no
     dump-file was specified, but -fsched-verbose=N (any N), print to stderr.
     For -fsched-verbose=N, N>=10, print everything to stderr.  */
  sched_verbose = sched_verbose_param;
  if (sched_verbose_param == 0 && dump_file)
    sched_verbose = 1;
  sched_dump = ((sched_verbose_param >= 10 || !dump_file)
		? stderr : dump_file);

  /* Initialize issue_rate.  */
  if (targetm.sched.issue_rate)
    issue_rate = (*targetm.sched.issue_rate) ();
  else
    issue_rate = 1;

  /* We use LUID 0 for the fake insn (UID 0) which holds dependencies for
     pseudos which do not cross calls.  */
  old_max_uid = get_max_uid () + 1;

  h_i_d = (struct haifa_insn_data *) xcalloc (old_max_uid, sizeof (*h_i_d));

  luid = 1;
  for (b = 0; b < n_basic_blocks; b++)
    for (insn = BLOCK_HEAD (b);; insn = NEXT_INSN (insn))
      {
	INSN_LUID (insn) = luid;

	/* Increment the next luid, unless this is a note.  We don't
	   really need separate IDs for notes and we don't want to
	   schedule differently depending on whether or not there are
	   line-number notes, i.e., depending on whether or not we're
	   generating debugging information.  */
	if (GET_CODE (insn) != NOTE)
	  ++luid;

	if (insn == BLOCK_END (b))
	  break;
      }

  init_dependency_caches (luid);

  compute_bb_for_insn (old_max_uid);

  init_alias_analysis ();

  if (write_symbols != NO_DEBUG)
    {
      rtx line;

      line_note_head = (rtx *) xcalloc (n_basic_blocks, sizeof (rtx));

      /* Save-line-note-head:
	 Determine the line-number at the start of each basic block.
	 This must be computed and saved now, because after a basic block's
	 predecessor has been scheduled, it is impossible to accurately
	 determine the correct line number for the first insn of the block.  */

      for (b = 0; b < n_basic_blocks; b++)
	{
	  for (line = BLOCK_HEAD (b); line; line = PREV_INSN (line))
	    if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
	      {
		line_note_head[b] = line;
		break;
	      }
	  /* Do a forward search as well, since we won't get to see the first
	     notes in a basic block.  */
	  for (line = BLOCK_HEAD (b); line; line = NEXT_INSN (line))
	    {
	      if (INSN_P (line))
		break;
	      if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
		line_note_head[b] = line;
	    }
	}
    }

  /* Find units used in this function, for visualization.  */
  if (sched_verbose)
    init_target_units ();

  /* ??? Add a NOTE after the last insn of the last basic block.  It is not
     known why this is done.  */

  insn = BLOCK_END (n_basic_blocks - 1);
  if (NEXT_INSN (insn) == 0
      || (GET_CODE (insn) != NOTE
	  && GET_CODE (insn) != CODE_LABEL
	  /* Don't emit a NOTE if it would end up before a BARRIER.  */
	  && GET_CODE (NEXT_INSN (insn)) != BARRIER))
    emit_note_after (NOTE_INSN_DELETED, BLOCK_END (n_basic_blocks - 1));

  /* Compute INSN_REG_WEIGHT for all blocks.  We must do this before
     removing death notes.  */
  for (b = n_basic_blocks - 1; b >= 0; b--)
    find_insn_reg_weight (b);
}
/* Free global data used during insn scheduling.  */

void
sched_finish ()
{
  free (h_i_d);
  free_dependency_caches ();
  end_alias_analysis ();
  if (write_symbols != NO_DEBUG)
    free (line_note_head);
}

#endif /* INSN_SCHEDULING */