[official-gcc.git] / gcc / haifa-sched.c
/* Instruction scheduling pass.
   Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998,
   1999, 2000, 2001 Free Software Foundation, Inc.
   Contributed by Michael Tiemann (tiemann@cygnus.com) Enhanced by,
   and currently maintained by, Jim Wilson (wilson@cygnus.com)

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING.  If not, write to the Free
Software Foundation, 59 Temple Place - Suite 330, Boston, MA
02111-1307, USA.  */
/* Instruction scheduling pass.  This file, along with sched-deps.c,
   contains the generic parts.  The actual entry point for the normal
   instruction scheduling pass is found in sched-rgn.c.

   We compute insn priorities based on data dependencies.  Flow
   analysis only creates a fraction of the data-dependencies we must
   observe: namely, only those dependencies which the combiner can be
   expected to use.  For this pass, we must therefore create the
   remaining dependencies we need to observe: register dependencies,
   memory dependencies, dependencies to keep function calls in order,
   and the dependence between a conditional branch and the setting of
   condition codes are all dealt with here.

   The scheduler first traverses the data flow graph, starting with
   the last instruction, and proceeding to the first, assigning values
   to insn_priority as it goes.  This sorts the instructions
   topologically by data dependence.

   Once priorities have been established, we order the insns using
   list scheduling.  This works as follows: starting with a list of
   all the ready insns, and sorted according to priority number, we
   schedule the insn from the end of the list by placing its
   predecessors in the list according to their priority order.  We
   consider this insn scheduled by setting the pointer to the "end" of
   the list to point to the previous insn.  When an insn has no
   predecessors, we either queue it until sufficient time has elapsed
   or add it to the ready list.  As the instructions are scheduled or
   when stalls are introduced, the queue advances and dumps insns into
   the ready list.  When all insns down to the lowest priority have
   been scheduled, the critical path of the basic block has been made
   as short as possible.  The remaining insns are then scheduled in
   remaining slots.
   Function unit conflicts are resolved during forward list scheduling
   by tracking the time when each insn is committed to the schedule
   and from that, the time the function units it uses must be free.
   As insns on the ready list are considered for scheduling, those
   that would result in a blockage of the already committed insns are
   queued until no blockage will result.

   The following list shows the order in which we want to break ties
   among insns in the ready list:

   1.  choose insn with the longest path to end of bb, ties
   broken by
   2.  choose insn with least contribution to register pressure,
   ties broken by
   3.  prefer in-block over interblock motion, ties broken by
   4.  prefer useful over speculative motion, ties broken by
   5.  choose insn with largest control flow probability, ties
   broken by
   6.  choose insn with the least dependences upon the previously
   scheduled insn, ties broken by
   7.  choose the insn which has the most insns dependent on it,
   ties broken by
   8.  choose insn with lowest UID.
   Memory references complicate matters.  Only if we can be certain
   that memory references are not part of the data dependency graph
   (via true, anti, or output dependence), can we move operations past
   memory references.  To first approximation, reads can be done
   independently, while writes introduce dependencies.  Better
   approximations will yield fewer dependencies.

   Before reload, an extended analysis of interblock data dependences
   is required for interblock scheduling.  This is performed in
   compute_block_backward_dependences ().

   Dependencies set up by memory references are treated in exactly the
   same way as other dependencies, by using LOG_LINKS backward
   dependences.  LOG_LINKS are translated into INSN_DEPEND forward
   dependences for the purpose of forward list scheduling.

   Having optimized the critical path, we may have also unduly
   extended the lifetimes of some registers.  If an operation requires
   that constants be loaded into registers, it is certainly desirable
   to load those constants as early as necessary, but no earlier.
   I.e., it will not do to load up a bunch of registers at the
   beginning of a basic block only to use them at the end, if they
   could be loaded later, since this may result in excessive register
   utilization.

   Note that since branches are never in basic blocks, but only end
   basic blocks, this pass will not move branches.  But that is ok,
   since we can use GNU's delayed branch scheduling pass to take care
   of this case.

   Also note that no further optimizations based on algebraic
   identities are performed, so this pass would be a good one to
   perform instruction splitting, such as breaking up a multiply
   instruction into shifts and adds where that is profitable.

   Given the memory aliasing analysis that this pass should perform,
   it should be possible to remove redundant stores to memory, and to
   load values from registers instead of hitting memory.

   Before reload, speculative insns are moved only if a 'proof' exists
   that no exception will be caused by this, and if no live registers
   exist that inhibit the motion (live registers constraints are not
   represented by data dependence edges).

   This pass must update information that subsequent passes expect to
   be correct.  Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
   reg_n_calls_crossed, and reg_live_length.  Also, BLOCK_HEAD,
   BLOCK_END.

   The information in the line number notes is carefully retained by
   this pass.  Notes that refer to the starting and ending of
   exception regions are also carefully retained by this pass.  All
   other NOTE insns are grouped in their same relative order at the
   beginning of basic blocks and regions that have been scheduled.  */
#include "config.h"
#include "system.h"
#include "toplev.h"
#include "rtl.h"
#include "tm_p.h"
#include "hard-reg-set.h"
#include "basic-block.h"
#include "regs.h"
#include "function.h"
#include "flags.h"
#include "insn-config.h"
#include "insn-attr.h"
#include "except.h"
#include "recog.h"
#include "sched-int.h"
#include "target.h"
#ifdef INSN_SCHEDULING

/* issue_rate is the number of insns that can be scheduled in the same
   machine cycle.  It can be defined in the config/mach/mach.h file,
   otherwise we set it to 1.  */

static int issue_rate;

/* sched-verbose controls the amount of debugging output the
   scheduler prints.  It is controlled by -fsched-verbose=N:
   N>0 and no -dSR : the output is directed to stderr.
   N>=10 will direct the printouts to stderr (regardless of -dSR).
   N=1: same as -dSR.
   N=2: bb's probabilities, detailed ready list info, unit/insn info.
   N=3: rtl at abort point, control-flow, regions info.
   N=5: dependences info.  */

static int sched_verbose_param = 0;
int sched_verbose = 0;

/* Debugging file.  All printouts are sent to dump, which is always set,
   either to stderr, or to the dump listing file (-dRS).  */
FILE *sched_dump = 0;

/* Highest uid before scheduling.  */
static int old_max_uid;
/* fix_sched_param() is called from toplev.c upon detection
   of the -fsched-verbose=N option.  */

void
fix_sched_param (param, val)
     const char *param, *val;
{
  if (!strcmp (param, "verbose"))
    sched_verbose_param = atoi (val);
  else
    warning ("fix_sched_param: unknown param: %s", param);
}
struct haifa_insn_data *h_i_d;

#define DONE_PRIORITY	-1
#define MAX_PRIORITY	0x7fffffff
#define TAIL_PRIORITY	0x7ffffffe
#define LAUNCH_PRIORITY	0x7f000001
#define DONE_PRIORITY_P(INSN)	(INSN_PRIORITY (INSN) < 0)
#define LOW_PRIORITY_P(INSN)	((INSN_PRIORITY (INSN) & 0x7f000000) == 0)

#define LINE_NOTE(INSN)		(h_i_d[INSN_UID (INSN)].line_note)
#define INSN_TICK(INSN)		(h_i_d[INSN_UID (INSN)].tick)

/* Vector indexed by basic block number giving the starting line-number
   for each basic block.  */
static rtx *line_note_head;

/* List of important notes we must keep around.  This is a pointer to the
   last element in the list.  */
static rtx note_list;
/* Queues, etc.  */

/* An instruction is ready to be scheduled when all insns preceding it
   have already been scheduled.  It is important to ensure that all
   insns which use its result will not be executed until its result
   has been computed.  An insn is maintained in one of four structures:

   (P) the "Pending" set of insns which cannot be scheduled until
   their dependencies have been satisfied.
   (Q) the "Queued" set of insns that can be scheduled when sufficient
   time has passed.
   (R) the "Ready" list of unscheduled, uncommitted insns.
   (S) the "Scheduled" list of insns.

   Initially, all insns are either "Pending" or "Ready" depending on
   whether their dependencies are satisfied.

   Insns move from the "Ready" list to the "Scheduled" list as they
   are committed to the schedule.  As this occurs, the insns in the
   "Pending" list have their dependencies satisfied and move to either
   the "Ready" list or the "Queued" set depending on whether
   sufficient time has passed to make them ready.  As time passes,
   insns move from the "Queued" set to the "Ready" list.  Insns may
   move from the "Ready" list to the "Queued" set if they are blocked
   due to a function unit conflict.

   The "Pending" list (P) are the insns in the INSN_DEPEND of the
   unscheduled insns, i.e., those that are ready, queued, and pending.
   The "Queued" set (Q) is implemented by the variable `insn_queue'.
   The "Ready" list (R) is implemented by the variables `ready' and
   `n_ready'.
   The "Scheduled" list (S) is the new insn chain built by this pass.

   The transition (R->S) is implemented in the scheduling loop in
   `schedule_block' when the best insn to schedule is chosen.
   The transition (R->Q) is implemented in `queue_insn' when an
   insn is found to have a function unit conflict with the already
   committed insns.
   The transitions (P->R and P->Q) are implemented in `schedule_insn' as
   insns move from the ready list to the scheduled list.
   The transition (Q->R) is implemented in `queue_to_ready' as time
   passes or stalls are introduced.  */
/* Implement a circular buffer to delay instructions until sufficient
   time has passed.  INSN_QUEUE_SIZE is a power of two larger than
   MAX_BLOCKAGE and MAX_READY_COST computed by genattr.c.  This is the
   longest time an insn may be queued.  */
static rtx insn_queue[INSN_QUEUE_SIZE];
static int q_ptr = 0;
static int q_size = 0;
#define NEXT_Q(X) (((X)+1) & (INSN_QUEUE_SIZE-1))
#define NEXT_Q_AFTER(X, C) (((X)+C) & (INSN_QUEUE_SIZE-1))
/* Describe the ready list of the scheduler.
   VEC holds space enough for all insns in the current region.  VECLEN
   says how many exactly.
   FIRST is the index of the element with the highest priority; i.e. the
   last one in the ready list, since elements are ordered by ascending
   priority.
   N_READY determines how many insns are on the ready list.  */

struct ready_list
{
  rtx *vec;
  int veclen;
  int first;
  int n_ready;
};
/* Forward declarations.  */
static unsigned int blockage_range PARAMS ((int, rtx));
static void clear_units PARAMS ((void));
static void schedule_unit PARAMS ((int, rtx, int));
static int actual_hazard PARAMS ((int, rtx, int, int));
static int potential_hazard PARAMS ((int, rtx, int));
static int priority PARAMS ((rtx));
static int rank_for_schedule PARAMS ((const PTR, const PTR));
static void swap_sort PARAMS ((rtx *, int));
static void queue_insn PARAMS ((rtx, int));
static void schedule_insn PARAMS ((rtx, struct ready_list *, int));
static void find_insn_reg_weight PARAMS ((int));
static void adjust_priority PARAMS ((rtx));
/* Notes handling mechanism:
   =========================
   Generally, NOTES are saved before scheduling and restored after scheduling.
   The scheduler distinguishes between three types of notes:

   (1) LINE_NUMBER notes, generated and used for debugging.  Here,
   before scheduling a region, a pointer to the LINE_NUMBER note is
   added to the insn following it (in save_line_notes()), and the note
   is removed (in rm_line_notes() and unlink_line_notes()).  After
   scheduling the region, this pointer is used for regeneration of
   the LINE_NUMBER note (in restore_line_notes()).

   (2) LOOP_BEGIN, LOOP_END, SETJMP, EHREGION_BEG, EHREGION_END notes:
   Before scheduling a region, a pointer to the note is added to the insn
   that follows or precedes it.  (This happens as part of the data dependence
   computation).  After scheduling an insn, the pointer contained in it is
   used for regenerating the corresponding note (in reemit_notes).

   (3) All other notes (e.g. INSN_DELETED):  Before scheduling a block,
   these notes are put in a list (in rm_other_notes() and
   unlink_other_notes ()).  After scheduling the block, these notes are
   inserted at the beginning of the block (in schedule_block()).  */
static rtx unlink_other_notes PARAMS ((rtx, rtx));
static rtx unlink_line_notes PARAMS ((rtx, rtx));
static rtx reemit_notes PARAMS ((rtx, rtx));

static rtx *ready_lastpos PARAMS ((struct ready_list *));
static void ready_sort PARAMS ((struct ready_list *));
static rtx ready_remove_first PARAMS ((struct ready_list *));

static void queue_to_ready PARAMS ((struct ready_list *));

static void debug_ready_list PARAMS ((struct ready_list *));

static rtx move_insn1 PARAMS ((rtx, rtx));
static rtx move_insn PARAMS ((rtx, rtx));

#endif /* INSN_SCHEDULING */
/* Point to state used for the current scheduling pass.  */
struct sched_info *current_sched_info;

#ifndef INSN_SCHEDULING
void
schedule_insns (dump_file)
     FILE *dump_file ATTRIBUTE_UNUSED;
{
}
#else
/* Pointer to the last instruction scheduled.  Used by rank_for_schedule,
   so that insns independent of the last scheduled insn will be preferred
   over dependent instructions.  */

static rtx last_scheduled_insn;

/* Compute the function units used by INSN.  This caches the value
   returned by function_units_used.  A function unit is encoded as the
   unit number if the value is non-negative and the complement of a
   mask if the value is negative.  A function unit index is the
   non-negative encoding.  */
HAIFA_INLINE int
insn_unit (insn)
     rtx insn;
{
  int unit = INSN_UNIT (insn);

  if (unit == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
	 We can't pass these directly to function_units_used because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
	unit = -1;
      else
	{
	  unit = function_units_used (insn);
	  /* Increment non-negative values so we can cache zero.  */
	  if (unit >= 0)
	    unit++;
	}

      /* We only cache 16 bits of the result, so if the value is out of
	 range, don't cache it.  */
      if (FUNCTION_UNITS_SIZE < HOST_BITS_PER_SHORT
	  || unit >= 0
	  || (unit & ~((1 << (HOST_BITS_PER_SHORT - 1)) - 1)) == 0)
	INSN_UNIT (insn) = unit;
    }
  return (unit > 0 ? unit - 1 : unit);
}
/* Compute the blockage range for executing INSN on UNIT.  This caches
   the value returned by the blockage_range_function for the unit.
   These values are encoded in an int where the upper half gives the
   minimum value and the lower half gives the maximum value.  */

HAIFA_INLINE static unsigned int
blockage_range (unit, insn)
     int unit;
     rtx insn;
{
  unsigned int blockage = INSN_BLOCKAGE (insn);
  unsigned int range;

  if ((int) UNIT_BLOCKED (blockage) != unit + 1)
    {
      range = function_units[unit].blockage_range_function (insn);
      /* We only cache the blockage range for one unit and then only if
	 the values fit.  */
      if (HOST_BITS_PER_INT >= UNIT_BITS + 2 * BLOCKAGE_BITS)
	INSN_BLOCKAGE (insn) = ENCODE_BLOCKAGE (unit + 1, range);
    }
  else
    range = BLOCKAGE_RANGE (blockage);

  return range;
}
/* A vector indexed by function unit instance giving the last insn to use
   the unit.  The value of the function unit instance index for unit U
   instance I is (U + I * FUNCTION_UNITS_SIZE).  */
static rtx unit_last_insn[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit instance giving the minimum time when
   the unit will unblock based on the maximum blockage cost.  */
static int unit_tick[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit number giving the number of insns
   that remain to use the unit.  */
static int unit_n_insns[FUNCTION_UNITS_SIZE];

/* Access the unit_last_insn array.  Used by the visualization code.  */

rtx
get_unit_last_insn (instance)
     int instance;
{
  return unit_last_insn[instance];
}

/* Reset the function unit state to the null state.  */

static void
clear_units ()
{
  memset ((char *) unit_last_insn, 0, sizeof (unit_last_insn));
  memset ((char *) unit_tick, 0, sizeof (unit_tick));
  memset ((char *) unit_n_insns, 0, sizeof (unit_n_insns));
}
/* Return the issue-delay of an insn.  */

HAIFA_INLINE int
insn_issue_delay (insn)
     rtx insn;
{
  int i, delay = 0;
  int unit = insn_unit (insn);

  /* Efficiency note: in fact, we are working 'hard' to compute a
     value that was available in md file, and is not available in
     function_units[] structure.  It would be nice to have this
     value there, too.  */
  if (unit >= 0)
    {
      if (function_units[unit].blockage_range_function
	  && function_units[unit].blockage_function)
	delay = function_units[unit].blockage_function (insn, insn);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0 && function_units[i].blockage_range_function
	  && function_units[i].blockage_function)
	delay = MAX (delay, function_units[i].blockage_function (insn, insn));

  return delay;
}
/* Return the actual hazard cost of executing INSN on the unit UNIT,
   instance INSTANCE at time CLOCK if the previous actual hazard cost
   was COST.  */

HAIFA_INLINE int
actual_hazard_this_instance (unit, instance, insn, clock, cost)
     int unit, instance, clock, cost;
     rtx insn;
{
  int tick = unit_tick[instance]; /* Issue time of the last issued insn.  */

  if (tick - clock > cost)
    {
      /* The scheduler is operating forward, so unit's last insn is the
	 executing insn and INSN is the candidate insn.  We want a
	 more exact measure of the blockage if we execute INSN at CLOCK
	 given when we committed the execution of the unit's last insn.

	 The blockage value is given by either the unit's max blockage
	 constant, blockage range function, or blockage function.  Use
	 the most exact form for the given unit.  */

      if (function_units[unit].blockage_range_function)
	{
	  if (function_units[unit].blockage_function)
	    tick += (function_units[unit].blockage_function
		     (unit_last_insn[instance], insn)
		     - function_units[unit].max_blockage);
	  else
	    tick += ((int) MAX_BLOCKAGE_COST (blockage_range (unit, insn))
		     - function_units[unit].max_blockage);
	}
      if (tick - clock > cost)
	cost = tick - clock;
    }
  return cost;
}
/* Record INSN as having begun execution on the units encoded by UNIT at
   time CLOCK.  */

HAIFA_INLINE static void
schedule_unit (unit, insn, clock)
     int unit, clock;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      int instance = unit;
#if MAX_MULTIPLICITY > 1
      /* Find the first free instance of the function unit and use that
	 one.  We assume that one is free.  */
      for (i = function_units[unit].multiplicity - 1; i > 0; i--)
	{
	  if (!actual_hazard_this_instance (unit, instance, insn, clock, 0))
	    break;
	  instance += FUNCTION_UNITS_SIZE;
	}
#endif
      unit_last_insn[instance] = insn;
      unit_tick[instance] = (clock + function_units[unit].max_blockage);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	schedule_unit (i, insn, clock);
}
/* Return the actual hazard cost of executing INSN on the units encoded by
   UNIT at time CLOCK if the previous actual hazard cost was COST.  */

HAIFA_INLINE static int
actual_hazard (unit, insn, clock, cost)
     int unit, clock, cost;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      /* Find the instance of the function unit with the minimum hazard.  */
      int instance = unit;
      int best_cost = actual_hazard_this_instance (unit, instance, insn,
						   clock, cost);
#if MAX_MULTIPLICITY > 1
      int this_cost;

      if (best_cost > cost)
	{
	  for (i = function_units[unit].multiplicity - 1; i > 0; i--)
	    {
	      instance += FUNCTION_UNITS_SIZE;
	      this_cost = actual_hazard_this_instance (unit, instance, insn,
						       clock, cost);
	      if (this_cost < best_cost)
		{
		  best_cost = this_cost;
		  if (this_cost <= cost)
		    break;
		}
	    }
	}
#endif
      cost = MAX (cost, best_cost);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	cost = actual_hazard (i, insn, clock, cost);

  return cost;
}
/* Return the potential hazard cost of executing an instruction on the
   units encoded by UNIT if the previous potential hazard cost was COST.
   An insn with a large blockage time is chosen in preference to one
   with a smaller time; an insn that uses a unit that is more likely
   to be used is chosen in preference to one with a unit that is less
   used.  We are trying to minimize a subsequent actual hazard.  */

HAIFA_INLINE static int
potential_hazard (unit, insn, cost)
     int unit, cost;
     rtx insn;
{
  int i, ncost;
  unsigned int minb, maxb;

  if (unit >= 0)
    {
      minb = maxb = function_units[unit].max_blockage;
      if (maxb > 1)
	{
	  if (function_units[unit].blockage_range_function)
	    {
	      maxb = minb = blockage_range (unit, insn);
	      maxb = MAX_BLOCKAGE_COST (maxb);
	      minb = MIN_BLOCKAGE_COST (minb);
	    }

	  if (maxb > 1)
	    {
	      /* Make the number of instructions left dominate.  Make the
		 minimum delay dominate the maximum delay.  If all these
		 are the same, use the unit number to add an arbitrary
		 ordering.  Other terms can be added.  */
	      ncost = minb * 0x40 + maxb;
	      ncost *= (unit_n_insns[unit] - 1) * 0x1000 + unit;
	      if (ncost > cost)
		cost = ncost;
	    }
	}
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	cost = potential_hazard (i, insn, cost);

  return cost;
}
/* Compute cost of executing INSN given the dependence LINK on the insn USED.
   This is the number of cycles between instruction issue and
   instruction results.  */

HAIFA_INLINE int
insn_cost (insn, link, used)
     rtx insn, link, used;
{
  int cost = INSN_COST (insn);

  if (cost == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
	 We can't pass these directly to result_ready_cost because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
	{
	  INSN_COST (insn) = 1;
	  return 1;
	}
      else
	{
	  cost = result_ready_cost (insn);

	  if (cost < 1)
	    cost = 1;

	  INSN_COST (insn) = cost;
	}
    }

  /* In this case estimate cost without caring how insn is used.  */
  if (link == 0 && used == 0)
    return cost;

  /* A USE insn should never require the value used to be computed.  This
     allows the computation of a function's result and parameter values to
     overlap the return and call.  */
  recog_memoized (used);
  if (INSN_CODE (used) < 0)
    LINK_COST_FREE (link) = 1;

  /* If some dependencies vary the cost, compute the adjustment.  Most
     commonly, the adjustment is complete: either the cost is ignored
     (in the case of an output- or anti-dependence), or the cost is
     unchanged.  These values are cached in the link as LINK_COST_FREE
     and LINK_COST_ZERO.  */

  if (LINK_COST_FREE (link))
    cost = 0;
  else if (!LINK_COST_ZERO (link) && targetm.sched.adjust_cost)
    {
      int ncost = (*targetm.sched.adjust_cost) (used, link, insn, cost);

      if (ncost < 1)
	{
	  LINK_COST_FREE (link) = 1;
	  ncost = 0;
	}
      if (cost == ncost)
	LINK_COST_ZERO (link) = 1;
      cost = ncost;
    }

  return cost;
}
/* Compute the priority number for INSN.  */

static int
priority (insn)
     rtx insn;
{
  rtx link;

  if (! INSN_P (insn))
    return 0;

  if (! INSN_PRIORITY_KNOWN (insn))
    {
      int this_priority = 0;

      if (INSN_DEPEND (insn) == 0)
	this_priority = insn_cost (insn, 0, 0);
      else
	for (link = INSN_DEPEND (insn); link; link = XEXP (link, 1))
	  {
	    rtx next;
	    int next_priority;

	    if (RTX_INTEGRATED_P (link))
	      continue;

	    next = XEXP (link, 0);

	    /* Critical path is meaningful in block boundaries only.  */
	    if (! (*current_sched_info->contributes_to_priority) (next, insn))
	      continue;

	    next_priority = insn_cost (insn, link, next) + priority (next);
	    if (next_priority > this_priority)
	      this_priority = next_priority;
	  }
      INSN_PRIORITY (insn) = this_priority;
      INSN_PRIORITY_KNOWN (insn) = 1;
    }

  return INSN_PRIORITY (insn);
}
/* Macros and functions for keeping the priority queue sorted, and
   dealing with queueing and dequeueing of instructions.  */

#define SCHED_SORT(READY, N_READY)					\
do { if ((N_READY) == 2)						\
       swap_sort (READY, N_READY);					\
     else if ((N_READY) > 2)						\
	 qsort (READY, N_READY, sizeof (rtx), rank_for_schedule); }	\
while (0)
/* Returns a positive value if x is preferred; returns a negative value if
   y is preferred.  Should never return 0, since that will make the sort
   unstable.  */

static int
rank_for_schedule (x, y)
     const PTR x;
     const PTR y;
{
  rtx tmp = *(const rtx *) y;
  rtx tmp2 = *(const rtx *) x;
  rtx link;
  int tmp_class, tmp2_class, depend_count1, depend_count2;
  int val, priority_val, weight_val, info_val;

  /* Prefer insn with higher priority.  */
  priority_val = INSN_PRIORITY (tmp2) - INSN_PRIORITY (tmp);
  if (priority_val)
    return priority_val;

  /* Prefer an insn with smaller contribution to register pressure.  */
  if (! reload_completed
      && (weight_val = INSN_REG_WEIGHT (tmp) - INSN_REG_WEIGHT (tmp2)))
    return weight_val;

  info_val = (*current_sched_info->rank) (tmp, tmp2);
  if (info_val)
    return info_val;

  /* Compare insns based on their relation to the last-scheduled-insn.  */
  if (last_scheduled_insn)
    {
      /* Classify the instructions into three classes:
	 1) Data dependent on last scheduled insn.
	 2) Anti/Output dependent on last scheduled insn.
	 3) Independent of last scheduled insn, or has latency of one.
	 Choose the insn from the highest numbered class if different.  */
      link = find_insn_list (tmp, INSN_DEPEND (last_scheduled_insn));
      if (link == 0 || insn_cost (last_scheduled_insn, link, tmp) == 1)
	tmp_class = 3;
      else if (REG_NOTE_KIND (link) == 0)	/* Data dependence.  */
	tmp_class = 1;
      else
	tmp_class = 2;

      link = find_insn_list (tmp2, INSN_DEPEND (last_scheduled_insn));
      if (link == 0 || insn_cost (last_scheduled_insn, link, tmp2) == 1)
	tmp2_class = 3;
      else if (REG_NOTE_KIND (link) == 0)	/* Data dependence.  */
	tmp2_class = 1;
      else
	tmp2_class = 2;

      if ((val = tmp2_class - tmp_class))
	return val;
    }

  /* Prefer the insn which has more later insns that depend on it.
     This gives the scheduler more freedom when scheduling later
     instructions at the expense of added register pressure.  */
  depend_count1 = 0;
  for (link = INSN_DEPEND (tmp); link; link = XEXP (link, 1))
    depend_count1++;

  depend_count2 = 0;
  for (link = INSN_DEPEND (tmp2); link; link = XEXP (link, 1))
    depend_count2++;

  val = depend_count2 - depend_count1;
  if (val)
    return val;

  /* If insns are equally good, sort by INSN_LUID (original insn order),
     so that we make the sort stable.  This minimizes instruction movement,
     thus minimizing sched's effect on debugging and cross-jumping.  */
  return INSN_LUID (tmp) - INSN_LUID (tmp2);
}
/* Resort the array A in which only element at index N may be out of order.  */

HAIFA_INLINE static void
swap_sort (a, n)
     rtx *a;
     int n;
{
  rtx insn = a[n - 1];
  int i = n - 2;

  while (i >= 0 && rank_for_schedule (a + i, &insn) >= 0)
    {
      a[i + 1] = a[i];
      i -= 1;
    }
  a[i + 1] = insn;
}
/* Add INSN to the insn queue so that it can be executed at least
   N_CYCLES after the currently executing insn.  Preserve insns
   chain for debugging purposes.  */

HAIFA_INLINE static void
queue_insn (insn, n_cycles)
     rtx insn;
     int n_cycles;
{
  int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
  rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]);
  insn_queue[next_q] = link;
  q_size += 1;

  if (sched_verbose >= 2)
    {
      fprintf (sched_dump, ";;\t\tReady-->Q: insn %s: ",
	       (*current_sched_info->print_insn) (insn, 0));

      fprintf (sched_dump, "queued for %d cycles.\n", n_cycles);
    }
}
/* Return a pointer to the bottom of the ready list, i.e. the insn
   with the lowest priority.  */

HAIFA_INLINE static rtx *
ready_lastpos (ready)
     struct ready_list *ready;
{
  if (ready->n_ready == 0)
    abort ();
  return ready->vec + ready->first - ready->n_ready + 1;
}
/* Add an element INSN to the ready list so that it ends up with the lowest
   priority.  */

HAIFA_INLINE void
ready_add (ready, insn)
     struct ready_list *ready;
     rtx insn;
{
  if (ready->first == ready->n_ready)
    {
      memmove (ready->vec + ready->veclen - ready->n_ready,
	       ready_lastpos (ready),
	       ready->n_ready * sizeof (rtx));
      ready->first = ready->veclen - 1;
    }
  ready->vec[ready->first - ready->n_ready] = insn;
  ready->n_ready++;
}
/* Remove the element with the highest priority from the ready list and
   return it.  */

HAIFA_INLINE static rtx
ready_remove_first (ready)
     struct ready_list *ready;
{
  rtx t;
  if (ready->n_ready == 0)
    abort ();
  t = ready->vec[ready->first--];
  ready->n_ready--;
  /* If the queue becomes empty, reset it.  */
  if (ready->n_ready == 0)
    ready->first = ready->veclen - 1;
  return t;
}
/* Sort the ready list READY by ascending priority, using the SCHED_SORT
   macro.  */

HAIFA_INLINE static void
ready_sort (ready)
     struct ready_list *ready;
{
  rtx *first = ready_lastpos (ready);
  SCHED_SORT (first, ready->n_ready);
}
/* PREV is an insn that is ready to execute.  Adjust its priority if that
   will help shorten or lengthen register lifetimes as appropriate.  Also
   provide a hook for the target to tweak itself.  */

HAIFA_INLINE static void
adjust_priority (prev)
     rtx prev;
{
  /* ??? There used to be code here to try and estimate how an insn
     affected register lifetimes, but it did it by looking at REG_DEAD
     notes, which we removed in schedule_region.  Nor did it try to
     take into account register pressure or anything useful like that.

     Revisit when we have a machine model to work with and not before.  */

  if (targetm.sched.adjust_priority)
    INSN_PRIORITY (prev) =
      (*targetm.sched.adjust_priority) (prev, INSN_PRIORITY (prev));
}
/* Clock at which the previous instruction was issued.  */
static int last_clock_var;

/* INSN is the "currently executing insn".  Launch each insn which was
   waiting on INSN.  READY is the ready list which contains the insns
   that are ready to fire.  CLOCK is the current cycle.  */

static void
schedule_insn (insn, ready, clock)
     rtx insn;
     struct ready_list *ready;
     int clock;
{
  rtx link;
  int unit;

  unit = insn_unit (insn);

  if (sched_verbose >= 2)
    {
      fprintf (sched_dump, ";;\t\t--> scheduling insn <<<%d>>> on unit ",
	       INSN_UID (insn));
      insn_print_units (insn);
      fprintf (sched_dump, "\n");
    }

  if (sched_verbose && unit == -1)
    visualize_no_unit (insn);

  if (MAX_BLOCKAGE > 1 || issue_rate > 1 || sched_verbose)
    schedule_unit (unit, insn, clock);

  if (INSN_DEPEND (insn) == 0)
    return;

  for (link = INSN_DEPEND (insn); link != 0; link = XEXP (link, 1))
    {
      rtx next = XEXP (link, 0);
      int cost = insn_cost (insn, link, next);

      INSN_TICK (next) = MAX (INSN_TICK (next), clock + cost);

      if ((INSN_DEP_COUNT (next) -= 1) == 0)
	{
	  int effective_cost = INSN_TICK (next) - clock;

	  if (! (*current_sched_info->new_ready) (next))
	    continue;

	  if (sched_verbose >= 2)
	    {
	      fprintf (sched_dump, ";;\t\tdependences resolved: insn %s ",
		       (*current_sched_info->print_insn) (next, 0));

	      if (effective_cost < 1)
		fprintf (sched_dump, "into ready\n");
	      else
		fprintf (sched_dump, "into queue with cost=%d\n", effective_cost);
	    }

	  /* Adjust the priority of NEXT and either put it on the ready
	     list or queue it.  */
	  adjust_priority (next);
	  if (effective_cost < 1)
	    ready_add (ready, next);
	  else
	    queue_insn (next, effective_cost);
	}
    }

  /* Annotate the instruction with issue information -- TImode
     indicates that the instruction is expected not to be able
     to issue on the same cycle as the previous insn.  A machine
     may use this information to decide how the instruction should
     be aligned.  */
  if (reload_completed && issue_rate > 1)
    {
      PUT_MODE (insn, clock > last_clock_var ? TImode : VOIDmode);
      last_clock_var = clock;
    }
}
/* Functions for handling of notes.  */

/* Delete notes beginning with INSN and put them in the chain
   of notes ended by NOTE_LIST.
   Returns the insn following the notes.  */

static rtx
unlink_other_notes (insn, tail)
     rtx insn, tail;
{
  rtx prev = PREV_INSN (insn);

  while (insn != tail && GET_CODE (insn) == NOTE)
    {
      rtx next = NEXT_INSN (insn);

      /* Delete the note from its current position.  */
      if (prev)
	NEXT_INSN (prev) = next;
      if (next)
	PREV_INSN (next) = prev;

      /* See sched_analyze to see how these are handled.  */
      if (NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_BEG
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_END
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_RANGE_BEG
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_RANGE_END
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_BEG
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_END)
	{
	  /* Insert the note at the end of the notes list.  */
	  PREV_INSN (insn) = note_list;
	  if (note_list)
	    NEXT_INSN (note_list) = insn;
	  note_list = insn;
	}

      insn = next;
    }
  return insn;
}
/* Delete line notes beginning with INSN.  Record line-number notes so
   they can be reused.  Returns the insn following the notes.  */

static rtx
unlink_line_notes (insn, tail)
     rtx insn, tail;
{
  rtx prev = PREV_INSN (insn);

  while (insn != tail && GET_CODE (insn) == NOTE)
    {
      rtx next = NEXT_INSN (insn);

      if (write_symbols != NO_DEBUG && NOTE_LINE_NUMBER (insn) > 0)
	{
	  /* Delete the note from its current position.  */
	  if (prev)
	    NEXT_INSN (prev) = next;
	  if (next)
	    PREV_INSN (next) = prev;

	  /* Record line-number notes so they can be reused.  */
	  LINE_NOTE (insn) = insn;
	}
      else
	prev = insn;

      insn = next;
    }
  return insn;
}
/* Return the head and tail pointers of BB.  */

void
get_block_head_tail (b, headp, tailp)
     int b;
     rtx *headp;
     rtx *tailp;
{
  /* HEAD and TAIL delimit the basic block being scheduled.  */
  rtx head = BLOCK_HEAD (b);
  rtx tail = BLOCK_END (b);

  /* Don't include any notes or labels at the beginning of the
     basic block, or notes at the ends of basic blocks.  */
  while (head != tail)
    {
      if (GET_CODE (head) == NOTE)
	head = NEXT_INSN (head);
      else if (GET_CODE (tail) == NOTE)
	tail = PREV_INSN (tail);
      else if (GET_CODE (head) == CODE_LABEL)
	head = NEXT_INSN (head);
      else
	break;
    }

  *headp = head;
  *tailp = tail;
}
/* Return nonzero if there are no real insns in the range [ HEAD, TAIL ].  */

int
no_real_insns_p (head, tail)
     rtx head, tail;
{
  while (head != NEXT_INSN (tail))
    {
      if (GET_CODE (head) != NOTE && GET_CODE (head) != CODE_LABEL)
	return 0;
      head = NEXT_INSN (head);
    }
  return 1;
}
/* Delete line notes from one block.  Save them so they can be later restored
   (in restore_line_notes).  HEAD and TAIL are the boundaries of the
   block in which notes should be processed.  */

void
rm_line_notes (head, tail)
     rtx head, tail;
{
  rtx next_tail;
  rtx insn;

  next_tail = NEXT_INSN (tail);
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      rtx prev;

      /* Farm out notes, and maybe save them in NOTE_LIST.
	 This is needed to keep the debugger from
	 getting completely deranged.  */
      if (GET_CODE (insn) == NOTE)
	{
	  prev = insn;
	  insn = unlink_line_notes (insn, next_tail);

	  if (prev == tail)
	    abort ();
	  if (prev == head)
	    abort ();
	  if (insn == next_tail)
	    abort ();
	}
    }
}
/* Save line number notes for each insn in block B.  HEAD and TAIL are
   the boundaries of the block in which notes should be processed.  */

void
save_line_notes (b, head, tail)
     int b;
     rtx head, tail;
{
  rtx next_tail;

  /* We must use the true line number for the first insn in the block
     that was computed and saved at the start of this pass.  We can't
     use the current line number, because scheduling of the previous
     block may have changed the current line number.  */

  rtx line = line_note_head[b];
  rtx insn;

  next_tail = NEXT_INSN (tail);

  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
      line = insn;
    else
      LINE_NOTE (insn) = line;
}
/* After a block was scheduled, insert line notes into the insns list.
   HEAD and TAIL are the boundaries of the block in which notes should
   be processed.  */

void
restore_line_notes (head, tail)
     rtx head, tail;
{
  rtx line, note, prev, new;
  int added_notes = 0;
  rtx next_tail, insn;

  head = head;
  next_tail = NEXT_INSN (tail);

  /* Determine the current line-number.  We want to know the current
     line number of the first insn of the block here, in case it is
     different from the true line number that was saved earlier.  If
     different, then we need a line number note before the first insn
     of this block.  If it happens to be the same, then we don't want to
     emit another line number note here.  */
  for (line = head; line; line = PREV_INSN (line))
    if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
      break;

  /* Walk the insns keeping track of the current line-number and inserting
     the line-number notes as needed.  */
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
      line = insn;
    /* This used to emit line number notes before every non-deleted note.
       However, this confuses a debugger, because line notes not separated
       by real instructions all end up at the same address.  I can find no
       use for line number notes before other notes, so none are emitted.  */
    else if (GET_CODE (insn) != NOTE
	     && INSN_UID (insn) < old_max_uid
	     && (note = LINE_NOTE (insn)) != 0
	     && note != line
	     && (line == 0
		 || NOTE_LINE_NUMBER (note) != NOTE_LINE_NUMBER (line)
		 || NOTE_SOURCE_FILE (note) != NOTE_SOURCE_FILE (line)))
      {
	line = note;
	prev = PREV_INSN (insn);
	if (LINE_NOTE (note))
	  {
	    /* Re-use the original line-number note.  */
	    LINE_NOTE (note) = 0;
	    PREV_INSN (note) = prev;
	    NEXT_INSN (prev) = note;
	    PREV_INSN (insn) = note;
	    NEXT_INSN (note) = insn;
	  }
	else
	  {
	    added_notes++;
	    new = emit_note_after (NOTE_LINE_NUMBER (note), prev);
	    NOTE_SOURCE_FILE (new) = NOTE_SOURCE_FILE (note);
	    RTX_INTEGRATED_P (new) = RTX_INTEGRATED_P (note);
	  }
      }
  if (sched_verbose && added_notes)
    fprintf (sched_dump, ";; added %d line-number notes\n", added_notes);
}
/* After scheduling the function, delete redundant line notes from the
   insns list.  */

void
rm_redundant_line_notes ()
{
  rtx line = 0;
  rtx insn = get_insns ();
  int active_insn = 0;
  int notes = 0;

  /* Walk the insns deleting redundant line-number notes.  Many of these
     are already present.  The remainder tend to occur at basic
     block boundaries.  */
  for (insn = get_last_insn (); insn; insn = PREV_INSN (insn))
    if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
      {
	/* If there are no active insns following, INSN is redundant.  */
	if (active_insn == 0)
	  {
	    notes++;
	    NOTE_SOURCE_FILE (insn) = 0;
	    NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
	  }
	/* If the line number is unchanged, LINE is redundant.  */
	else if (line
		 && NOTE_LINE_NUMBER (line) == NOTE_LINE_NUMBER (insn)
		 && NOTE_SOURCE_FILE (line) == NOTE_SOURCE_FILE (insn))
	  {
	    notes++;
	    NOTE_SOURCE_FILE (line) = 0;
	    NOTE_LINE_NUMBER (line) = NOTE_INSN_DELETED;
	    line = insn;
	  }
	else
	  line = insn;
	active_insn = 0;
      }
    else if (!((GET_CODE (insn) == NOTE
		&& NOTE_LINE_NUMBER (insn) == NOTE_INSN_DELETED)
	       || (GET_CODE (insn) == INSN
		   && (GET_CODE (PATTERN (insn)) == USE
		       || GET_CODE (PATTERN (insn)) == CLOBBER))))
      active_insn++;

  if (sched_verbose && notes)
    fprintf (sched_dump, ";; deleted %d line-number notes\n", notes);
}
/* Delete notes between HEAD and TAIL and put them in the chain
   of notes ended by NOTE_LIST.  */

void
rm_other_notes (head, tail)
     rtx head;
     rtx tail;
{
  rtx next_tail;
  rtx insn;

  note_list = 0;
  if (head == tail && (! INSN_P (head)))
    return;

  next_tail = NEXT_INSN (tail);
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      rtx prev;

      /* Farm out notes, and maybe save them in NOTE_LIST.
	 This is needed to keep the debugger from
	 getting completely deranged.  */
      if (GET_CODE (insn) == NOTE)
	{
	  prev = insn;

	  insn = unlink_other_notes (insn, next_tail);

	  if (prev == tail)
	    abort ();
	  if (prev == head)
	    abort ();
	  if (insn == next_tail)
	    abort ();
	}
    }
}
/* Functions for computation of registers live/usage info.  */

/* Calculate INSN_REG_WEIGHT for all insns of a block.  */

static void
find_insn_reg_weight (b)
     int b;
{
  rtx insn, next_tail, head, tail;

  get_block_head_tail (b, &head, &tail);
  next_tail = NEXT_INSN (tail);

  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      int reg_weight = 0;
      rtx x;

      /* Handle register life information.  */
      if (! INSN_P (insn))
	continue;

      /* Increment weight for each register born here.  */
      x = PATTERN (insn);
      if ((GET_CODE (x) == SET || GET_CODE (x) == CLOBBER)
	  && register_operand (SET_DEST (x), VOIDmode))
	reg_weight++;
      else if (GET_CODE (x) == PARALLEL)
	{
	  int j;
	  for (j = XVECLEN (x, 0) - 1; j >= 0; j--)
	    {
	      x = XVECEXP (PATTERN (insn), 0, j);
	      if ((GET_CODE (x) == SET || GET_CODE (x) == CLOBBER)
		  && register_operand (SET_DEST (x), VOIDmode))
		reg_weight++;
	    }
	}

      /* Decrement weight for each register that dies here.  */
      for (x = REG_NOTES (insn); x; x = XEXP (x, 1))
	{
	  if (REG_NOTE_KIND (x) == REG_DEAD
	      || REG_NOTE_KIND (x) == REG_UNUSED)
	    reg_weight--;
	}

      INSN_REG_WEIGHT (insn) = reg_weight;
    }
}
/* Scheduling clock, modified in schedule_block () and queue_to_ready ().  */
static int clock_var;

/* Move insns that became ready to fire from queue to ready list.  */

static void
queue_to_ready (ready)
     struct ready_list *ready;
{
  rtx insn;
  rtx link;

  q_ptr = NEXT_Q (q_ptr);

  /* Add all pending insns that can be scheduled without stalls to the
     ready list.  */
  for (link = insn_queue[q_ptr]; link; link = XEXP (link, 1))
    {
      insn = XEXP (link, 0);
      q_size -= 1;

      if (sched_verbose >= 2)
	fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
		 (*current_sched_info->print_insn) (insn, 0));

      ready_add (ready, insn);
      if (sched_verbose >= 2)
	fprintf (sched_dump, "moving to ready without stalls\n");
    }
  insn_queue[q_ptr] = 0;

  /* If there are no ready insns, stall until one is ready and add all
     of the pending insns at that point to the ready list.  */
  if (ready->n_ready == 0)
    {
      int stalls;

      for (stalls = 1; stalls < INSN_QUEUE_SIZE; stalls++)
	{
	  if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
	    {
	      for (; link; link = XEXP (link, 1))
		{
		  insn = XEXP (link, 0);
		  q_size -= 1;

		  if (sched_verbose >= 2)
		    fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
			     (*current_sched_info->print_insn) (insn, 0));

		  ready_add (ready, insn);
		  if (sched_verbose >= 2)
		    fprintf (sched_dump, "moving to ready with %d stalls\n", stalls);
		}
	      insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = 0;

	      if (ready->n_ready)
		break;
	    }
	}

      if (sched_verbose && stalls)
	visualize_stall_cycles (stalls);
      q_ptr = NEXT_Q_AFTER (q_ptr, stalls);
      clock_var += stalls;
    }
}
/* Print the ready list for debugging purposes.  Callable from debugger.  */

static void
debug_ready_list (ready)
     struct ready_list *ready;
{
  rtx *p;
  int i;

  if (ready->n_ready == 0)
    return;

  p = ready_lastpos (ready);
  for (i = 0; i < ready->n_ready; i++)
    fprintf (sched_dump, "  %s", (*current_sched_info->print_insn) (p[i], 0));
  fprintf (sched_dump, "\n");
}
/* move_insn1: Remove INSN from insn chain, and link it after LAST insn.  */

static rtx
move_insn1 (insn, last)
     rtx insn, last;
{
  NEXT_INSN (PREV_INSN (insn)) = NEXT_INSN (insn);
  PREV_INSN (NEXT_INSN (insn)) = PREV_INSN (insn);

  NEXT_INSN (insn) = NEXT_INSN (last);
  PREV_INSN (NEXT_INSN (last)) = insn;

  NEXT_INSN (last) = insn;
  PREV_INSN (insn) = last;

  return insn;
}
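The pointer surgery in `move_insn1` is the classic doubly-linked-list splice.  This standalone sketch (not part of GCC; `struct toy_insn` and `toy_move_after` are invented names) shows the same six assignments on an explicit struct, assuming, as the real routine does for insns strictly inside a block, that NODE has both neighbors and AFTER has a successor:

```c
#include <assert.h>
#include <stddef.h>

struct toy_insn
{
  struct toy_insn *prev, *next;
  int uid;
};

/* Unlink NODE from its doubly linked chain and relink it immediately
   after AFTER, mirroring the NEXT_INSN/PREV_INSN updates above.  */
static void
toy_move_after (struct toy_insn *node, struct toy_insn *after)
{
  /* Unlink NODE from its current position.  */
  node->prev->next = node->next;
  node->next->prev = node->prev;

  /* Relink NODE after AFTER.  */
  node->next = after->next;
  after->next->prev = node;

  after->next = node;
  node->prev = after;
}
```

For example, moving B after C in the chain A-B-C-D yields A-C-B-D, with all prev/next pointers kept consistent.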
/* Search INSN for REG_SAVE_NOTE note pairs for
   NOTE_INSN_{LOOP,EHREGION}_{BEG,END}; and convert them back into
   NOTEs.  The REG_SAVE_NOTE note following the first one contains the
   saved value for NOTE_BLOCK_NUMBER which is useful for
   NOTE_INSN_EH_REGION_{BEG,END} NOTEs.  LAST is the last instruction
   output by the instruction scheduler.  Return the new value of LAST.  */

static rtx
reemit_notes (insn, last)
     rtx insn;
     rtx last;
{
  rtx note, retval;

  retval = last;
  for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
    {
      if (REG_NOTE_KIND (note) == REG_SAVE_NOTE)
	{
	  enum insn_note note_type = INTVAL (XEXP (note, 0));

	  if (note_type == NOTE_INSN_RANGE_BEG
	      || note_type == NOTE_INSN_RANGE_END)
	    {
	      last = emit_note_before (note_type, last);
	      remove_note (insn, note);
	      note = XEXP (note, 1);
	      NOTE_RANGE_INFO (last) = XEXP (note, 0);
	    }
	  else
	    {
	      last = emit_note_before (note_type, last);
	      remove_note (insn, note);
	      note = XEXP (note, 1);
	      if (note_type == NOTE_INSN_EH_REGION_BEG
		  || note_type == NOTE_INSN_EH_REGION_END)
		NOTE_EH_HANDLER (last) = INTVAL (XEXP (note, 0));
	    }
	  remove_note (insn, note);
	}
    }
  return retval;
}
/* Move INSN, and all insns which should be issued before it,
   due to SCHED_GROUP_P flag.  Reemit notes if needed.

   Return the last insn emitted by the scheduler, which is the
   return value from the first call to reemit_notes.  */

static rtx
move_insn (insn, last)
     rtx insn, last;
{
  rtx retval = NULL;

  /* If INSN has SCHED_GROUP_P set, then issue it and any other
     insns with SCHED_GROUP_P set first.  */
  while (SCHED_GROUP_P (insn))
    {
      rtx prev = PREV_INSN (insn);

      /* Move a SCHED_GROUP_P insn.  */
      move_insn1 (insn, last);
      /* If this is the first call to reemit_notes, then record
	 its return value.  */
      if (retval == NULL_RTX)
	retval = reemit_notes (insn, insn);
      else
	reemit_notes (insn, insn);
      insn = prev;
    }

  /* Now move the first non SCHED_GROUP_P insn.  */
  move_insn1 (insn, last);

  /* If this is the first call to reemit_notes, then record
     its return value.  */
  if (retval == NULL_RTX)
    retval = reemit_notes (insn, insn);
  else
    reemit_notes (insn, insn);

  return retval;
}
/* Use forward list scheduling to rearrange insns of block B in region RGN,
   possibly bringing insns from subsequent blocks in the same region.  */

void
schedule_block (b, rgn_n_insns)
     int b;
     int rgn_n_insns;
{
  rtx last;
  struct ready_list ready;
  int can_issue_more;

  /* Head/tail info for this block.  */
  rtx prev_head = current_sched_info->prev_head;
  rtx next_tail = current_sched_info->next_tail;
  rtx head = NEXT_INSN (prev_head);
  rtx tail = PREV_INSN (next_tail);

  /* We used to have code to avoid getting parameters moved from hard
     argument registers into pseudos.

     However, it was removed when it proved to be of marginal benefit
     and caused problems because schedule_block and compute_forward_dependences
     had different notions of what the "head" insn was.  */

  if (head == tail && (! INSN_P (head)))
    abort ();

  /* Debug info.  */
  if (sched_verbose)
    {
      fprintf (sched_dump, ";;   ======================================================\n");
      fprintf (sched_dump,
	       ";;   -- basic block %d from %d to %d -- %s reload\n",
	       b, INSN_UID (head), INSN_UID (tail),
	       (reload_completed ? "after" : "before"));
      fprintf (sched_dump, ";;   ======================================================\n");
      fprintf (sched_dump, "\n");

      visualize_alloc ();
      init_block_visualization ();
    }

  clear_units ();

  /* Allocate the ready list.  */
  ready.veclen = rgn_n_insns + 1 + issue_rate;
  ready.first = ready.veclen - 1;
  ready.vec = (rtx *) xmalloc (ready.veclen * sizeof (rtx));
  ready.n_ready = 0;

  (*current_sched_info->init_ready_list) (&ready);

  if (targetm.sched.md_init)
    (*targetm.sched.md_init) (sched_dump, sched_verbose, ready.veclen);

  /* No insns scheduled in this block yet.  */
  last_scheduled_insn = 0;

  /* Initialize INSN_QUEUE.  Q_SIZE is the total number of insns in the
     queue.  */
  q_ptr = 0;
  q_size = 0;
  last_clock_var = 0;
  memset ((char *) insn_queue, 0, sizeof (insn_queue));

  /* Start just before the beginning of time.  */
  clock_var = -1;

  /* We start inserting insns after PREV_HEAD.  */
  last = prev_head;

  /* Loop until all the insns in BB are scheduled.  */
  while ((*current_sched_info->schedule_more_p) ())
    {
      clock_var++;

      /* Add to the ready list all pending insns that can be issued now.
	 If there are no ready insns, increment clock until one
	 is ready and add all pending insns at that point to the ready
	 list.  */
      queue_to_ready (&ready);

      if (sched_verbose && targetm.sched.cycle_display)
	last = (*targetm.sched.cycle_display) (clock_var, last);

      if (ready.n_ready == 0)
	abort ();

      if (sched_verbose >= 2)
	{
	  fprintf (sched_dump, ";;\t\tReady list after queue_to_ready:  ");
	  debug_ready_list (&ready);
	}

      /* Sort the ready list based on priority.  */
      ready_sort (&ready);

      /* Allow the target to reorder the list, typically for
	 better instruction bundling.  */
      if (targetm.sched.reorder)
	can_issue_more =
	  (*targetm.sched.reorder) (sched_dump, sched_verbose,
				    ready_lastpos (&ready),
				    &ready.n_ready, clock_var);
      else
	can_issue_more = issue_rate;

      if (sched_verbose)
	{
	  fprintf (sched_dump, "\n;;\tReady list (t =%3d):  ", clock_var);
	  debug_ready_list (&ready);
	}

      /* Issue insns from ready list.  */
      while (ready.n_ready != 0
	     && can_issue_more
	     && (*current_sched_info->schedule_more_p) ())
	{
	  /* Select and remove the insn from the ready list.  */
	  rtx insn = ready_remove_first (&ready);
	  int cost = actual_hazard (insn_unit (insn), insn, clock_var, 0);

	  if (cost >= 1)
	    {
	      queue_insn (insn, cost);
	      continue;
	    }

	  if (! (*current_sched_info->can_schedule_ready_p) (insn))
	    goto next;

	  last_scheduled_insn = insn;
	  last = move_insn (insn, last);

	  if (targetm.sched.variable_issue)
	    can_issue_more =
	      (*targetm.sched.variable_issue) (sched_dump, sched_verbose,
					       insn, can_issue_more);
	  else
	    can_issue_more--;

	  schedule_insn (insn, &ready, clock_var);

	next:
	  if (targetm.sched.reorder2)
	    {
	      /* Sort the ready list based on priority.  */
	      if (ready.n_ready > 0)
		ready_sort (&ready);
	      can_issue_more =
		(*targetm.sched.reorder2) (sched_dump, sched_verbose,
					   ready.n_ready
					   ? ready_lastpos (&ready) : NULL,
					   &ready.n_ready, clock_var);
	    }
	}

      /* Debug info.  */
      if (sched_verbose)
	visualize_scheduled_insns (clock_var);
    }

  if (targetm.sched.md_finish)
    (*targetm.sched.md_finish) (sched_dump, sched_verbose);

  /* Debug info.  */
  if (sched_verbose)
    {
      fprintf (sched_dump, ";;\tReady list (final):  ");
      debug_ready_list (&ready);
      print_block_visualization ("");
    }

  /* Sanity check -- queue must be empty now.  Meaningless if region has
     multiple bbs.  */
  if (current_sched_info->queue_must_finish_empty && q_size != 0)
    abort ();

  /* Update head/tail boundaries.  */
  head = NEXT_INSN (prev_head);
  tail = last;

  /* Restore-other-notes: NOTE_LIST is the end of a chain of notes
     previously found among the insns.  Insert them at the beginning
     of the insns.  */
  if (note_list != 0)
    {
      rtx note_head = note_list;

      while (PREV_INSN (note_head))
	note_head = PREV_INSN (note_head);

      PREV_INSN (note_head) = PREV_INSN (head);
      NEXT_INSN (PREV_INSN (head)) = note_head;
      PREV_INSN (head) = note_list;
      NEXT_INSN (note_list) = head;
      head = note_head;
    }

  /* Debugging.  */
  if (sched_verbose)
    {
      fprintf (sched_dump, ";;   total time = %d\n;;   new head = %d\n",
	       clock_var, INSN_UID (head));
      fprintf (sched_dump, ";;   new tail = %d\n\n",
	       INSN_UID (tail));
      visualize_free ();
    }

  current_sched_info->head = head;
  current_sched_info->tail = tail;

  free (ready.vec);
}
/* Set_priorities: compute priority of each insn in the block.  */

int
set_priorities (head, tail)
     rtx head, tail;
{
  rtx insn;
  int n_insn;

  rtx prev_head;

  prev_head = PREV_INSN (head);

  if (head == tail && (! INSN_P (head)))
    return 0;

  n_insn = 0;
  for (insn = tail; insn != prev_head; insn = PREV_INSN (insn))
    {
      if (GET_CODE (insn) == NOTE)
	continue;

      if (!(SCHED_GROUP_P (insn)))
	n_insn++;
      (void) priority (insn);
    }

  return n_insn;
}
/* Initialize some global state for the scheduler.  DUMP_FILE is to be used
   for debugging output.  */

void
sched_init (dump_file)
     FILE *dump_file;
{
  int luid, b;
  rtx insn;

  /* Disable speculative loads in their presence if cc0 defined.  */
#ifdef HAVE_cc0
  flag_schedule_speculative_load = 0;
#endif

  /* Set dump and sched_verbose for the desired debugging output.  If no
     dump-file was specified, but -fsched-verbose=N (any N), print to stderr.
     For -fsched-verbose=N, N>=10, print everything to stderr.  */
  sched_verbose = sched_verbose_param;
  if (sched_verbose_param == 0 && dump_file)
    sched_verbose = 1;
  sched_dump = ((sched_verbose_param >= 10 || !dump_file)
		? stderr : dump_file);

  /* Initialize issue_rate.  */
  if (targetm.sched.issue_rate)
    issue_rate = (*targetm.sched.issue_rate) ();
  else
    issue_rate = 1;

  /* We use LUID 0 for the fake insn (UID 0) which holds dependencies for
     pseudos which do not cross calls.  */
  old_max_uid = get_max_uid () + 1;

  h_i_d = (struct haifa_insn_data *) xcalloc (old_max_uid, sizeof (*h_i_d));

  h_i_d[0].luid = 0;
  luid = 1;
  for (b = 0; b < n_basic_blocks; b++)
    for (insn = BLOCK_HEAD (b);; insn = NEXT_INSN (insn))
      {
	INSN_LUID (insn) = luid;

	/* Increment the next luid, unless this is a note.  We don't
	   really need separate IDs for notes and we don't want to
	   schedule differently depending on whether or not there are
	   line-number notes, i.e., depending on whether or not we're
	   generating debugging information.  */
	if (GET_CODE (insn) != NOTE)
	  ++luid;

	if (insn == BLOCK_END (b))
	  break;
      }

  init_dependency_caches (luid);

  compute_bb_for_insn (old_max_uid);

  init_alias_analysis ();

  if (write_symbols != NO_DEBUG)
    {
      rtx line;

      line_note_head = (rtx *) xcalloc (n_basic_blocks, sizeof (rtx));

      /* Save-line-note-head:
	 Determine the line-number at the start of each basic block.
	 This must be computed and saved now, because after a basic block's
	 predecessor has been scheduled, it is impossible to accurately
	 determine the correct line number for the first insn of the block.  */

      for (b = 0; b < n_basic_blocks; b++)
	{
	  for (line = BLOCK_HEAD (b); line; line = PREV_INSN (line))
	    if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
	      {
		line_note_head[b] = line;
		break;
	      }
	  /* Do a forward search as well, since we won't get to see the first
	     notes in a basic block.  */
	  for (line = BLOCK_HEAD (b); line; line = NEXT_INSN (line))
	    {
	      if (INSN_P (line))
		break;
	      if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
		line_note_head[b] = line;
	    }
	}
    }

  /* Find units used in this function, for visualization.  */
  if (sched_verbose)
    init_target_units ();

  /* ??? Add a NOTE after the last insn of the last basic block.  It is not
     known why this is done.  */

  insn = BLOCK_END (n_basic_blocks - 1);
  if (NEXT_INSN (insn) == 0
      || (GET_CODE (insn) != NOTE
	  && GET_CODE (insn) != CODE_LABEL
	  /* Don't emit a NOTE if it would end up before a BARRIER.  */
	  && GET_CODE (NEXT_INSN (insn)) != BARRIER))
    {
      emit_note_after (NOTE_INSN_DELETED, BLOCK_END (n_basic_blocks - 1));
      /* Make insn appear outside BB.  */
      BLOCK_END (n_basic_blocks - 1) = PREV_INSN (BLOCK_END (n_basic_blocks - 1));
    }

  /* Compute INSN_REG_WEIGHT for all blocks.  We must do this before
     removing death notes.  */
  for (b = n_basic_blocks - 1; b >= 0; b--)
    find_insn_reg_weight (b);
}
/* Free global data used during insn scheduling.  */

void
sched_finish ()
{
  free (h_i_d);
  free_dependency_caches ();
  end_alias_analysis ();
  if (write_symbols != NO_DEBUG)
    free (line_note_head);
}

#endif /* INSN_SCHEDULING */