/* Instruction scheduling pass.
   Copyright (C) 1992-2014 Free Software Foundation, Inc.
   Contributed by Michael Tiemann (tiemann@cygnus.com).  Enhanced by,
   and currently maintained by, Jim Wilson (wilson@cygnus.com).

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not, see
<http://www.gnu.org/licenses/>.  */
/* Instruction scheduling pass.  This file, along with sched-deps.c,
   contains the generic parts.  The actual entry point for
   the normal instruction scheduling pass is found in sched-rgn.c.

   We compute insn priorities based on data dependencies.  Flow
   analysis only creates a fraction of the data-dependencies we must
   observe: namely, only those dependencies which the combiner can be
   expected to use.  For this pass, we must therefore create the
   remaining dependencies we need to observe: register dependencies,
   memory dependencies, dependencies to keep function calls in order,
   and the dependence between a conditional branch and the setting of
   condition codes are all dealt with here.

   The scheduler first traverses the data flow graph, starting with
   the last instruction, and proceeding to the first, assigning values
   to insn_priority as it goes.  This sorts the instructions
   topologically by data dependence.

   Once priorities have been established, we order the insns using
   list scheduling.  This works as follows: starting with a list of
   all the ready insns, and sorted according to priority number, we
   schedule the insn from the end of the list by placing its
   predecessors in the list according to their priority order.  We
   consider this insn scheduled by setting the pointer to the "end" of
   the list to point to the previous insn.  When an insn has no
   predecessors, we either queue it until sufficient time has elapsed
   or add it to the ready list.  As the instructions are scheduled or
   when stalls are introduced, the queue advances and dumps insns into
   the ready list.  When all insns down to the lowest priority have
   been scheduled, the critical path of the basic block has been made
   as short as possible.  The remaining insns are then scheduled in
   remaining slots.

   The following list shows the order in which we want to break ties
   among insns in the ready list; a simplified comparator sketch
   follows this comment:

   1.  choose insn with the longest path to end of bb, ties
   broken by
   2.  choose insn with least contribution to register pressure,
   ties broken by
   3.  prefer in-block over interblock motion, ties broken by
   4.  prefer useful over speculative motion, ties broken by
   5.  choose insn with largest control flow probability, ties
   broken by
   6.  choose insn with the least dependences upon the previously
   scheduled insn, ties broken by
   7.  choose the insn which has the most insns dependent on it,
   or finally
   8.  choose insn with lowest UID.

   Memory references complicate matters.  Only if we can be certain
   that memory references are not part of the data dependency graph
   (via true, anti, or output dependence), can we move operations past
   memory references.  To first approximation, reads can be done
   independently, while writes introduce dependencies.  Better
   approximations will yield fewer dependencies.

   Before reload, an extended analysis of interblock data dependences
   is required for interblock scheduling.  This is performed in
   compute_block_dependences ().

   Dependencies set up by memory references are treated in exactly the
   same way as other dependencies, by using insn backward dependences
   INSN_BACK_DEPS.  INSN_BACK_DEPS are translated into forward dependences
   INSN_FORW_DEPS for the purpose of forward list scheduling.

   Having optimized the critical path, we may have also unduly
   extended the lifetimes of some registers.  If an operation requires
   that constants be loaded into registers, it is certainly desirable
   to load those constants as early as necessary, but no earlier.
   I.e., it will not do to load up a bunch of registers at the
   beginning of a basic block only to use them at the end, if they
   could be loaded later, since this may result in excessive register
   utilization.

   Note that since branches are never in basic blocks, but only end
   basic blocks, this pass will not move branches.  But that is OK,
   since we can use GNU's delayed branch scheduling pass to take care
   of this case.

   Also note that no further optimizations based on algebraic
   identities are performed, so this pass would be a good one to
   perform instruction splitting, such as breaking up a multiply
   instruction into shifts and adds where that is profitable.

   Given the memory aliasing analysis that this pass should perform,
   it should be possible to remove redundant stores to memory, and to
   load values from registers instead of hitting memory.

   Before reload, speculative insns are moved only if a 'proof' exists
   that no exception will be caused by this, and if no live registers
   exist that inhibit the motion (live register constraints are not
   represented by data dependence edges).

   This pass must update information that subsequent passes expect to
   be correct.  Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
   reg_n_calls_crossed, and reg_live_length.  Also, BB_HEAD, BB_END.

   The information in the line number notes is carefully retained by
   this pass.  Notes that refer to the starting and ending of
   exception regions are also carefully retained by this pass.  All
   other NOTE insns are grouped in their same relative order at the
   beginning of basic blocks and regions that have been scheduled.  */
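/* As an editorial aid (not part of the original file), the tie-breaking
   rules above can be pictured as a qsort-style comparator.  The helpers
   path_length, pressure_delta and dep_count are hypothetical
   placeholders; the scheduler's real comparator is rank_for_schedule,
   declared further down.  The sketch is kept under #if 0 so it is never
   compiled.  */
#if 0
static int
toy_rank_for_schedule (const rtx_insn *a, const rtx_insn *b)
{
  int diff;

  /* 1.  The longer path to the end of the bb wins.  */
  if ((diff = path_length (b) - path_length (a)) != 0)
    return diff;
  /* 2.  The smaller contribution to register pressure wins.  */
  if ((diff = pressure_delta (a) - pressure_delta (b)) != 0)
    return diff;
  /* 6.  Fewer dependences upon the previously scheduled insn wins.  */
  if ((diff = dep_count (a) - dep_count (b)) != 0)
    return diff;
  /* 8.  The lowest UID is the final, deterministic tie-break.  */
  return INSN_UID (a) - INSN_UID (b);
}
#endif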
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
#include "diagnostic-core.h"
#include "hard-reg-set.h"
#include "rtl.h"
#include "tm_p.h"
#include "regs.h"
#include "function.h"
#include "flags.h"
#include "insn-config.h"
#include "insn-attr.h"
#include "except.h"
#include "recog.h"
#include "sched-int.h"
#include "target.h"
#include "common/common-target.h"
#include "params.h"
#include "dbgcnt.h"
#include "cfgloop.h"
#include "ira.h"
#include "emit-rtl.h"  /* FIXME: Can go away once crtl is moved to rtl.h.  */
#include "hash-table.h"
#include "dumpfile.h"
#ifdef INSN_SCHEDULING

/* True if we do register pressure relief through live-range
   shrinkage.  */
static bool live_range_shrinkage_p;

/* Switch on live range shrinkage.  */
void
initialize_live_range_shrinkage (void)
{
  live_range_shrinkage_p = true;
}

/* Switch off live range shrinkage.  */
void
finish_live_range_shrinkage (void)
{
  live_range_shrinkage_p = false;
}
/* issue_rate is the number of insns that can be scheduled in the same
   machine cycle.  It can be defined in the config/mach/mach.h file,
   otherwise we set it to 1.  */
int issue_rate;

/* This can be set to true by a backend if the scheduler should not
   enable a DCE pass.  */
bool sched_no_dce;

/* The current initiation interval used when modulo scheduling.  */
static int modulo_ii;

/* The maximum number of stages we are prepared to handle.  */
static int modulo_max_stages;

/* The number of insns that exist in each iteration of the loop.  We use this
   to detect when we've scheduled all insns from the first iteration.  */
static int modulo_n_insns;

/* The current count of insns in the first iteration of the loop that have
   already been scheduled.  */
static int modulo_insns_scheduled;

/* The maximum uid of insns from the first iteration of the loop.  */
static int modulo_iter0_max_uid;

/* The number of times we should attempt to backtrack when modulo scheduling.
   Decreased each time we have to backtrack.  */
static int modulo_backtracks_left;

/* The stage in which the last insn from the original loop was
   scheduled.  */
static int modulo_last_stage;
/* sched-verbose controls the amount of debugging output the
   scheduler prints.  It is controlled by -fsched-verbose=N:
   N>0 and no -dSR : the output is directed to stderr.
   N>=10 will direct the printouts to stderr (regardless of -dSR).
   N=1: same as -dSR.
   N=2: bb's probabilities, detailed ready list info, unit/insn info.
   N=3: rtl at abort point, control-flow, regions info.
   N=5: dependences info.  */
int sched_verbose = 0;

/* Debugging file.  All printouts are sent to dump, which is always set,
   either to stderr, or to the dump listing file (-dSR).  */
FILE *sched_dump = 0;
/* This is a placeholder for the scheduler parameters common
   to all schedulers.  */
struct common_sched_info_def *common_sched_info;

#define INSN_TICK(INSN) (HID (INSN)->tick)
#define INSN_EXACT_TICK(INSN) (HID (INSN)->exact_tick)
#define INSN_TICK_ESTIMATE(INSN) (HID (INSN)->tick_estimate)
#define INTER_TICK(INSN) (HID (INSN)->inter_tick)
#define FEEDS_BACKTRACK_INSN(INSN) (HID (INSN)->feeds_backtrack_insn)
#define SHADOW_P(INSN) (HID (INSN)->shadow_p)
#define MUST_RECOMPUTE_SPEC_P(INSN) (HID (INSN)->must_recompute_spec)
/* Cached cost of the instruction.  Use insn_cost to get cost of the
   insn.  -1 here means that the field is not initialized.  */
#define INSN_COST(INSN) (HID (INSN)->cost)

/* If INSN_TICK of an instruction is equal to INVALID_TICK,
   then it should be recalculated from scratch.  */
#define INVALID_TICK (-(max_insn_queue_index + 1))
/* The minimal value of the INSN_TICK of an instruction.  */
#define MIN_TICK (-max_insn_queue_index)

/* List of important notes we must keep around.  This is a pointer to the
   last element in the list.  */
rtx_insn *note_list;
static struct spec_info_def spec_info_var;
/* Description of the speculative part of the scheduling.
   If NULL - no speculation.  */
spec_info_t spec_info = NULL;

/* True if a recovery block was added during scheduling of the current
   block.  Used to determine if we need to fix INSN_TICKs.  */
static bool haifa_recovery_bb_recently_added_p;

/* True if a recovery block was added during this scheduling pass.
   Used to determine whether we should have empty memory pools of
   dependencies after finishing the current region.  */
bool haifa_recovery_bb_ever_added_p;

/* Counters of different types of speculative instructions.  */
static int nr_begin_data, nr_be_in_data, nr_begin_control, nr_be_in_control;

/* Array used in {unlink, restore}_bb_notes.  */
static rtx_insn **bb_header = 0;

/* Basic block after which recovery blocks will be created.  */
static basic_block before_recovery;

/* Basic block just before the EXIT_BLOCK and after recovery, if we have
   created it.  */
basic_block after_recovery;

/* FALSE if we add bb to another region, so we don't need to initialize it.  */
bool adding_bb_to_current_region_p = true;
/* Queues, etc.  */

/* An instruction is ready to be scheduled when all insns preceding it
   have already been scheduled.  It is important to ensure that all
   insns which use its result will not be executed until its result
   has been computed.  An insn is maintained in one of four structures:

   (P) the "Pending" set of insns which cannot be scheduled until
   their dependencies have been satisfied.
   (Q) the "Queued" set of insns that can be scheduled when sufficient
   time has passed.
   (R) the "Ready" list of unscheduled, uncommitted insns.
   (S) the "Scheduled" list of insns.

   Initially, all insns are either "Pending" or "Ready" depending on
   whether their dependencies are satisfied.

   Insns move from the "Ready" list to the "Scheduled" list as they
   are committed to the schedule.  As this occurs, the insns in the
   "Pending" list have their dependencies satisfied and move to either
   the "Ready" list or the "Queued" set depending on whether
   sufficient time has passed to make them ready.  As time passes,
   insns move from the "Queued" set to the "Ready" list.

   The "Pending" list (P) consists of the insns in the INSN_FORW_DEPS
   of the unscheduled insns, i.e., those that are ready, queued, and
   pending.
   The "Queued" set (Q) is implemented by the variable `insn_queue'.
   The "Ready" list (R) is implemented by the variables `ready' and
   `n_ready'.
   The "Scheduled" list (S) is the new insn chain built by this pass.

   The transition (R->S) is implemented in the scheduling loop in
   `schedule_block' when the best insn to schedule is chosen.
   The transitions (P->R and P->Q) are implemented in `schedule_insn' as
   insns move from the ready list to the scheduled list.
   The transition (Q->R) is implemented in `queue_to_ready' as time
   passes or stalls are introduced.  */
/* Implement a circular buffer to delay instructions until sufficient
   time has passed.  For the new pipeline description interface,
   MAX_INSN_QUEUE_INDEX is a power of two minus one which is not less
   than the maximal time of instruction execution computed by genattr.c
   from the maximal time of functional unit reservations and of getting
   a result.  This is the longest time an insn may be queued.  */

static rtx *insn_queue;
static int q_ptr = 0;
static int q_size = 0;
#define NEXT_Q(X) (((X)+1) & max_insn_queue_index)
#define NEXT_Q_AFTER(X, C) (((X)+C) & max_insn_queue_index)
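/* An editorial example of the wrap-around arithmetic (the numbers are
   hypothetical): with max_insn_queue_index == 63, an insn delayed by
   three cycles from the current head is stored at
   NEXT_Q_AFTER (q_ptr, 3), and the index wraps modulo 64:

     NEXT_Q_AFTER (62, 3) == (62 + 3) & 63 == 1

   so the buffer never needs more than max_insn_queue_index + 1 slots.  */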
#define QUEUE_SCHEDULED (-3)
#define QUEUE_NOWHERE   (-2)
#define QUEUE_READY     (-1)
/* QUEUE_SCHEDULED - INSN is scheduled.
   QUEUE_NOWHERE   - INSN isn't scheduled yet and is neither in the
   queue nor in the ready list.
   QUEUE_READY     - INSN is in the ready list.
   N >= 0 - INSN queued for X [where NEXT_Q_AFTER (q_ptr, X) == N] cycles.  */

#define QUEUE_INDEX(INSN) (HID (INSN)->queue_index)

/* The following variable refers to all current and future
   reservations of the processor units.  */
state_t curr_state;

/* The following variable holds the size of memory representing all
   current and future reservations of the processor units.  */
size_t dfa_state_size;

/* The following array is used to find the best insn from the ready list
   when the automaton pipeline interface is used.  */
signed char *ready_try = NULL;

/* The ready list.  */
struct ready_list ready = {NULL, 0, 0, 0, 0};

/* The pointer to the ready list (to be removed).  */
static struct ready_list *readyp = &ready;

/* Scheduling clock.  */
static int clock_var;

/* Clock at which the previous instruction was issued.  */
static int last_clock_var;

/* Set to true if, when queuing a shadow insn, we discover that it would be
   scheduled too late.  */
static bool must_backtrack;

/* The following variable holds the number of essential insns issued on
   the current cycle.  An insn is essential if it changes the
   processor's state.  */
int cycle_issued_insns;

/* This records the actual schedule.  It is built up during the main phase
   of schedule_block, and afterwards used to reorder the insns in the RTL.  */
static vec<rtx_insn *> scheduled_insns;
static int may_trap_exp (const_rtx, int);

/* Nonzero iff the address is comprised of at most 1 register.  */
#define CONST_BASED_ADDRESS_P(x) \
  (REG_P (x) \
   || ((GET_CODE (x) == PLUS || GET_CODE (x) == MINUS \
        || (GET_CODE (x) == LO_SUM)) \
       && (CONSTANT_P (XEXP (x, 0)) \
           || CONSTANT_P (XEXP (x, 1)))))

/* Returns a class that insn with GET_DEST(insn)=x may belong to,
   as found by analyzing insn's expression.  */

static int haifa_luid_for_non_insn (rtx x);

/* Haifa version of sched_info hooks common to all headers.  */
const struct common_sched_info_def haifa_common_sched_info =
  {
    NULL, /* fix_recovery_cfg */
    NULL, /* add_block */
    NULL, /* estimate_number_of_insns */
    haifa_luid_for_non_insn, /* luid_for_non_insn */
    SCHED_PASS_UNKNOWN /* sched_pass_id */
  };

/* Mapping from instruction UID to its Logical UID.  */
vec<int> sched_luids = vNULL;

/* Next LUID to assign to an instruction.  */
int sched_max_luid = 1;

/* Haifa Instruction Data.  */
vec<haifa_insn_data_def> h_i_d = vNULL;

void (* sched_init_only_bb) (basic_block, basic_block);

/* Split block function.  Different schedulers might use different functions
   to keep their internal data consistent.  */
basic_block (* sched_split_block) (basic_block, rtx);

/* Create an empty basic block after the specified block.  */
basic_block (* sched_create_empty_bb) (basic_block);

/* Return the number of cycles until INSN is expected to be ready.
   Return zero if it already is.  */
static int
insn_delay (rtx_insn *insn)
{
  return MAX (INSN_TICK (insn) - clock_var, 0);
}
static int
may_trap_exp (const_rtx x, int is_store)
{
  enum rtx_code code;

  if (x == 0)
    return TRAP_FREE;
  code = GET_CODE (x);
  if (is_store)
    {
      if (code == MEM && may_trap_p (x))
        return TRAP_RISKY;
      else
        return TRAP_FREE;
    }
  if (code == MEM)
    {
      /* The insn uses memory:  a volatile load.  */
      if (MEM_VOLATILE_P (x))
        return IRISKY;
      /* An exception-free load.  */
      if (!may_trap_p (x))
        return IFREE;
      /* A load with 1 base register, to be further checked.  */
      if (CONST_BASED_ADDRESS_P (XEXP (x, 0)))
        return PFREE_CANDIDATE;
      /* No info on the load, to be further checked.  */
      return PRISKY_CANDIDATE;
    }
  else
    {
      const char *fmt;
      int i, insn_class = TRAP_FREE;

      /* Neither store nor load, check if it may cause a trap.  */
      if (may_trap_p (x))
        return TRAP_RISKY;
      /* Recursive step: walk the insn...  */
      fmt = GET_RTX_FORMAT (code);
      for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
        {
          if (fmt[i] == 'e')
            {
              int tmp_class = may_trap_exp (XEXP (x, i), is_store);
              insn_class = WORST_CLASS (insn_class, tmp_class);
            }
          else if (fmt[i] == 'E')
            {
              int j;
              for (j = 0; j < XVECLEN (x, i); j++)
                {
                  int tmp_class = may_trap_exp (XVECEXP (x, i, j), is_store);
                  insn_class = WORST_CLASS (insn_class, tmp_class);
                  if (insn_class == TRAP_RISKY || insn_class == IRISKY)
                    break;
                }
            }
          if (insn_class == TRAP_RISKY || insn_class == IRISKY)
            break;
        }
      return insn_class;
    }
}
/* Classifies rtx X of an insn for the purpose of verifying that X can be
   executed speculatively (and consequently the insn can be moved
   speculatively), by examining X, returning:
   TRAP_RISKY: store, or risky non-load insn (e.g. division by variable).
   TRAP_FREE: non-load insn.
   IFREE: load from a globally safe location.
   IRISKY: volatile load.
   PFREE_CANDIDATE, PRISKY_CANDIDATE: loads that need to be checked for
   being either PFREE or PRISKY.  */
static int
haifa_classify_rtx (const_rtx x)
{
  int tmp_class = TRAP_FREE;
  int insn_class = TRAP_FREE;
  enum rtx_code code;

  if (GET_CODE (x) == PARALLEL)
    {
      int i, len = XVECLEN (x, 0);

      for (i = len - 1; i >= 0; i--)
        {
          tmp_class = haifa_classify_rtx (XVECEXP (x, 0, i));
          insn_class = WORST_CLASS (insn_class, tmp_class);
          if (insn_class == TRAP_RISKY || insn_class == IRISKY)
            break;
        }
    }
  else
    {
      code = GET_CODE (x);
      switch (code)
        {
        case CLOBBER:
          /* Test if it is a 'store'.  */
          tmp_class = may_trap_exp (XEXP (x, 0), 1);
          break;
        case SET:
          /* Test if it is a store.  */
          tmp_class = may_trap_exp (SET_DEST (x), 1);
          if (tmp_class == TRAP_RISKY)
            break;
          /* Test if it is a load.  */
          tmp_class =
            WORST_CLASS (tmp_class,
                         may_trap_exp (SET_SRC (x), 0));
          break;
        case COND_EXEC:
          tmp_class = haifa_classify_rtx (COND_EXEC_CODE (x));
          if (tmp_class == TRAP_RISKY)
            break;
          tmp_class = WORST_CLASS (tmp_class,
                                   may_trap_exp (COND_EXEC_TEST (x), 0));
          break;
        case TRAP_IF:
          tmp_class = TRAP_RISKY;
          break;
        default:;
        }
      insn_class = tmp_class;
    }

  return insn_class;
}

int
haifa_classify_insn (const_rtx insn)
{
  return haifa_classify_rtx (PATTERN (insn));
}
/* After the scheduler initialization function has been called, this function
   can be called to enable modulo scheduling.  II is the initiation interval
   we should use; it affects the delays for delay_pairs that were recorded as
   separated by a given number of stages.

   MAX_STAGES provides us with a limit
   after which we give up scheduling; the caller must have unrolled at least
   as many copies of the loop body and recorded delay_pairs for them.

   INSNS is the number of real (non-debug) insns in one iteration of
   the loop.  MAX_UID can be used to test whether an insn belongs to
   the first iteration of the loop; all of them have a uid lower than
   MAX_UID.  */
void
set_modulo_params (int ii, int max_stages, int insns, int max_uid)
{
  modulo_ii = ii;
  modulo_max_stages = max_stages;
  modulo_n_insns = insns;
  modulo_iter0_max_uid = max_uid;
  modulo_backtracks_left = PARAM_VALUE (PARAM_MAX_MODULO_BACKTRACK_ATTEMPTS);
}
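/* An editorial illustration (all values hypothetical): a port
   modulo-scheduling a loop with an initiation interval of 4 cycles,
   prepared to handle up to 3 stages, with 20 real insns per iteration,
   would call

     set_modulo_params (4, 3, 20, first_iteration_max_uid);

   after unrolling at least 3 copies of the loop body and recording
   delay_pairs for them.  */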
/* A structure to record a pair of insns where the first one is a real
   insn that has delay slots, and the second is its delayed shadow.
   I1 is scheduled normally and will emit an assembly instruction,
   while I2 describes the side effect that takes place at the
   transition between cycles CYCLES and (CYCLES + 1) after I1.  */
struct delay_pair
{
  struct delay_pair *next_same_i1;
  rtx_insn *i1, *i2;
  int cycles;
  /* When doing modulo scheduling, a delay_pair can also be used to
     show that I1 and I2 are the same insn in a different stage.  If that
     is the case, STAGES will be nonzero.  */
  int stages;
};
/* Helpers for delay hashing.  */

struct delay_i1_hasher : typed_noop_remove <delay_pair>
{
  typedef delay_pair value_type;
  typedef void compare_type;
  static inline hashval_t hash (const value_type *);
  static inline bool equal (const value_type *, const compare_type *);
};

/* Returns a hash value for X, based on hashing just I1.  */

inline hashval_t
delay_i1_hasher::hash (const value_type *x)
{
  return htab_hash_pointer (x->i1);
}

/* Return true if I1 of pair X is the same as that of pair Y.  */

inline bool
delay_i1_hasher::equal (const value_type *x, const compare_type *y)
{
  return x->i1 == y;
}

struct delay_i2_hasher : typed_free_remove <delay_pair>
{
  typedef delay_pair value_type;
  typedef void compare_type;
  static inline hashval_t hash (const value_type *);
  static inline bool equal (const value_type *, const compare_type *);
};

/* Returns a hash value for X, based on hashing just I2.  */

inline hashval_t
delay_i2_hasher::hash (const value_type *x)
{
  return htab_hash_pointer (x->i2);
}

/* Return true if I2 of pair X is the same as that of pair Y.  */

inline bool
delay_i2_hasher::equal (const value_type *x, const compare_type *y)
{
  return x->i2 == y;
}

/* Two hash tables to record delay_pairs, one indexed by I1 and the other
   indexed by I2.  */
static hash_table<delay_i1_hasher> *delay_htab;
static hash_table<delay_i2_hasher> *delay_htab_i2;
/* Called through htab_traverse.  Walk the hashtable using I2 as
   index, and delete all elements involving an UID higher than
   that pointed to by *DATA.  */
int
haifa_htab_i2_traverse (delay_pair **slot, int *data)
{
  int maxuid = *data;
  struct delay_pair *p = *slot;
  if (INSN_UID (p->i2) >= maxuid || INSN_UID (p->i1) >= maxuid)
    {
      delay_htab_i2->clear_slot (slot);
    }
  return 1;
}
/* Called through htab_traverse.  Walk the hashtable using I1 as
   index, and delete all elements involving an UID higher than
   that pointed to by *DATA.  */
int
haifa_htab_i1_traverse (delay_pair **pslot, int *data)
{
  int maxuid = *data;
  struct delay_pair *p, *first, **pprev;

  if (INSN_UID ((*pslot)->i1) >= maxuid)
    {
      delay_htab->clear_slot (pslot);
      return 1;
    }
  pprev = &first;
  for (p = *pslot; p; p = p->next_same_i1)
    {
      if (INSN_UID (p->i2) < maxuid)
        {
          *pprev = p;
          pprev = &p->next_same_i1;
        }
    }
  *pprev = NULL;
  if (first == NULL)
    delay_htab->clear_slot (pslot);
  else
    *pslot = first;
  return 1;
}
/* Discard all delay pairs which involve an insn with an UID higher
   than MAX_UID.  */
void
discard_delay_pairs_above (int max_uid)
{
  delay_htab->traverse <int *, haifa_htab_i1_traverse> (&max_uid);
  delay_htab_i2->traverse <int *, haifa_htab_i2_traverse> (&max_uid);
}
/* This function can be called by a port just before it starts the final
   scheduling pass.  It records the fact that an instruction with delay
   slots has been split into two insns, I1 and I2.  The first one will be
   scheduled normally and initiates the operation.  The second one is a
   shadow which must follow a specific number of cycles after I1; its only
   purpose is to show the side effect that occurs at that cycle in the RTL.
   If a JUMP_INSN or a CALL_INSN has been split, I1 should be a normal INSN,
   while I2 retains the original insn type.

   There are two ways in which the number of cycles can be specified,
   involving the CYCLES and STAGES arguments to this function.  If STAGES
   is zero, we just use the value of CYCLES.  Otherwise, STAGES is a factor
   which is multiplied by MODULO_II to give the number of cycles.  This is
   only useful if the caller also calls set_modulo_params to enable modulo
   scheduling.  */

void
record_delay_slot_pair (rtx_insn *i1, rtx_insn *i2, int cycles, int stages)
{
  struct delay_pair *p = XNEW (struct delay_pair);
  struct delay_pair **slot;

  p->i1 = i1;
  p->i2 = i2;
  p->cycles = cycles;
  p->stages = stages;

  if (!delay_htab)
    {
      delay_htab = new hash_table<delay_i1_hasher> (10);
      delay_htab_i2 = new hash_table<delay_i2_hasher> (10);
    }
  slot = delay_htab->find_slot_with_hash (i1, htab_hash_pointer (i1), INSERT);
  p->next_same_i1 = *slot;
  *slot = p;
  slot = delay_htab_i2->find_slot (p, INSERT);
  *slot = p;
}
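/* An editorial illustration (the insn variables are hypothetical): a
   port that split a delayed branch into a normal insn BRANCH_I1 and a
   shadow SHADOW_I2 whose side effect happens two cycles later would
   record

     record_delay_slot_pair (branch_i1, shadow_i2, 2, 0);

   With a nonzero STAGES argument the delay becomes STAGES * modulo_ii
   cycles instead; see pair_delay below.  */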
/* Examine the delay pair hashtable to see if INSN is a shadow for another,
   and return the other insn if so.  Return NULL otherwise.  */
rtx_insn *
real_insn_for_shadow (rtx_insn *insn)
{
  struct delay_pair *pair;

  if (!delay_htab)
    return NULL;

  pair = delay_htab_i2->find_with_hash (insn, htab_hash_pointer (insn));
  if (!pair || pair->stages > 0)
    return NULL;
  return pair->i1;
}

/* For a pair P of insns, return the fixed distance in cycles from the first
   insn after which the second must be scheduled.  */
static int
pair_delay (struct delay_pair *p)
{
  if (p->stages == 0)
    return p->cycles;
  else
    return p->stages * modulo_ii;
}
/* Given an insn INSN, add a dependence on its delayed shadow if it
   has one.  Also try to find situations where shadows depend on each other
   and add dependencies to the real insns to limit the amount of backtracking
   needed.  */
void
add_delay_dependencies (rtx_insn *insn)
{
  struct delay_pair *pair;
  sd_iterator_def sd_it;
  dep_t dep;

  if (!delay_htab)
    return;

  pair = delay_htab_i2->find_with_hash (insn, htab_hash_pointer (insn));
  if (!pair)
    return;
  add_dependence (insn, pair->i1, REG_DEP_ANTI);
  if (pair->stages)
    return;

  FOR_EACH_DEP (pair->i2, SD_LIST_BACK, sd_it, dep)
    {
      rtx_insn *pro = DEP_PRO (dep);
      struct delay_pair *other_pair
        = delay_htab_i2->find_with_hash (pro, htab_hash_pointer (pro));
      if (!other_pair || other_pair->stages)
        continue;
      if (pair_delay (other_pair) >= pair_delay (pair))
        {
          if (sched_verbose >= 4)
            {
              fprintf (sched_dump, ";;\tadding dependence %d <- %d\n",
                       INSN_UID (other_pair->i1),
                       INSN_UID (pair->i1));
              fprintf (sched_dump, ";;\tpair1 %d <- %d, cost %d\n",
                       INSN_UID (pair->i1),
                       INSN_UID (pair->i2),
                       pair_delay (pair));
              fprintf (sched_dump, ";;\tpair2 %d <- %d, cost %d\n",
                       INSN_UID (other_pair->i1),
                       INSN_UID (other_pair->i2),
                       pair_delay (other_pair));
            }
          add_dependence (pair->i1, other_pair->i1, REG_DEP_ANTI);
        }
    }
}
/* Forward declarations.  */

static int priority (rtx_insn *);
static int rank_for_schedule (const void *, const void *);
static void swap_sort (rtx_insn **, int);
static void queue_insn (rtx_insn *, int, const char *);
static int schedule_insn (rtx_insn *);
static void adjust_priority (rtx_insn *);
static void advance_one_cycle (void);
static void extend_h_i_d (void);
/* Notes handling mechanism:
   =========================
   Generally, NOTES are saved before scheduling and restored after scheduling.
   The scheduler distinguishes between two types of notes:

   (1) LOOP_BEGIN, LOOP_END, SETJMP, EHREGION_BEG, EHREGION_END notes:
   Before scheduling a region, a pointer to the note is added to the insn
   that follows or precedes it.  (This happens as part of the data dependence
   computation).  After scheduling an insn, the pointer contained in it is
   used for regenerating the corresponding note (in reemit_notes).

   (2) All other notes (e.g. INSN_DELETED):  Before scheduling a block,
   these notes are put in a list (in rm_other_notes () and
   unlink_other_notes ()).  After scheduling the block, these notes are
   inserted at the beginning of the block (in schedule_block ()).  */
static void ready_add (struct ready_list *, rtx_insn *, bool);
static rtx_insn *ready_remove_first (struct ready_list *);
static rtx_insn *ready_remove_first_dispatch (struct ready_list *ready);

static void queue_to_ready (struct ready_list *);
static int early_queue_to_ready (state_t, struct ready_list *);

/* The following functions are used to implement multi-pass scheduling
   on the first cycle.  */
static rtx_insn *ready_remove (struct ready_list *, int);
static void ready_remove_insn (rtx);

static void fix_inter_tick (rtx_insn *, rtx_insn *);
static int fix_tick_ready (rtx_insn *);
static void change_queue_index (rtx_insn *, int);

/* The following functions are used to implement scheduling of data/control
   speculative instructions.  */

static void extend_h_i_d (void);
static void init_h_i_d (rtx_insn *);
static int haifa_speculate_insn (rtx_insn *, ds_t, rtx *);
static void generate_recovery_code (rtx_insn *);
static void process_insn_forw_deps_be_in_spec (rtx, rtx_insn *, ds_t);
static void begin_speculative_block (rtx_insn *);
static void add_to_speculative_block (rtx_insn *);
static void init_before_recovery (basic_block *);
static void create_check_block_twin (rtx_insn *, bool);
static void fix_recovery_deps (basic_block);
static bool haifa_change_pattern (rtx_insn *, rtx);
static void dump_new_block_header (int, basic_block, rtx_insn *, rtx_insn *);
static void restore_bb_notes (basic_block);
static void fix_jump_move (rtx_insn *);
static void move_block_after_check (rtx_insn *);
static void move_succs (vec<edge, va_gc> **, basic_block);
static void sched_remove_insn (rtx_insn *);
static void clear_priorities (rtx_insn *, rtx_vec_t *);
static void calc_priorities (rtx_vec_t);
static void add_jump_dependencies (rtx_insn *, rtx_insn *);

#endif /* INSN_SCHEDULING */
/* Point to state used for the current scheduling pass.  */
struct haifa_sched_info *current_sched_info;

#ifndef INSN_SCHEDULING
void
schedule_insns (void)
{
}
#else
/* Do register pressure sensitive insn scheduling if the flag is set.  */
enum sched_pressure_algorithm sched_pressure;

/* Map regno -> its pressure class.  The map is defined only when
   SCHED_PRESSURE != SCHED_PRESSURE_NONE.  */
enum reg_class *sched_regno_pressure_class;

/* The current register pressure.  Only elements corresponding to pressure
   classes are defined.  */
static int curr_reg_pressure[N_REG_CLASSES];

/* Saved value of the previous array.  */
static int saved_reg_pressure[N_REG_CLASSES];

/* Registers live at the given scheduling point.  */
static bitmap curr_reg_live;

/* Saved value of the previous bitmap.  */
static bitmap saved_reg_live;

/* Registers mentioned in the current region.  */
static bitmap region_ref_regs;

/* Initiate register pressure related info for scheduling the current
   region.  Currently it only clears the registers mentioned in the
   current region.  */
void
sched_init_region_reg_pressure_info (void)
{
  bitmap_clear (region_ref_regs);
}
/* PRESSURE[CL] describes the pressure on register class CL.  Update it
   for the birth (if BIRTH_P) or death (if !BIRTH_P) of register REGNO.
   LIVE tracks the set of live registers; if it is null, assume that
   every birth or death is genuine.  */
static inline void
mark_regno_birth_or_death (bitmap live, int *pressure, int regno, bool birth_p)
{
  enum reg_class pressure_class;

  pressure_class = sched_regno_pressure_class[regno];
  if (regno >= FIRST_PSEUDO_REGISTER)
    {
      if (pressure_class != NO_REGS)
        {
          if (birth_p)
            {
              if (!live || bitmap_set_bit (live, regno))
                pressure[pressure_class]
                  += (ira_reg_class_max_nregs
                      [pressure_class][PSEUDO_REGNO_MODE (regno)]);
            }
          else
            {
              if (!live || bitmap_clear_bit (live, regno))
                pressure[pressure_class]
                  -= (ira_reg_class_max_nregs
                      [pressure_class][PSEUDO_REGNO_MODE (regno)]);
            }
        }
    }
  else if (pressure_class != NO_REGS
           && ! TEST_HARD_REG_BIT (ira_no_alloc_regs, regno))
    {
      if (birth_p)
        {
          if (!live || bitmap_set_bit (live, regno))
            pressure[pressure_class]++;
        }
      else
        {
          if (!live || bitmap_clear_bit (live, regno))
            pressure[pressure_class]--;
        }
    }
}
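/* An editorial example of the bookkeeping above (class and mode are
   hypothetical): the birth of a pseudo whose pressure class is
   GENERAL_REGS and whose mode occupies two hard registers of that class
   adds ira_reg_class_max_nregs[GENERAL_REGS][mode] == 2 to
   pressure[GENERAL_REGS], while the birth of a single allocatable hard
   register adds exactly 1; the corresponding deaths subtract the same
   amounts.  */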
/* Initiate current register pressure related info from living
   registers given by LIVE.  */
static void
initiate_reg_pressure_info (bitmap live)
{
  int i;
  unsigned int j;
  bitmap_iterator bi;

  for (i = 0; i < ira_pressure_classes_num; i++)
    curr_reg_pressure[ira_pressure_classes[i]] = 0;
  bitmap_clear (curr_reg_live);
  EXECUTE_IF_SET_IN_BITMAP (live, 0, j, bi)
    if (sched_pressure == SCHED_PRESSURE_MODEL
        || current_nr_blocks == 1
        || bitmap_bit_p (region_ref_regs, j))
      mark_regno_birth_or_death (curr_reg_live, curr_reg_pressure, j, true);
}
/* Mark registers in X as mentioned in the current region.  */
static void
setup_ref_regs (rtx x)
{
  int i, j, regno;
  const RTX_CODE code = GET_CODE (x);
  const char *fmt;

  if (REG_P (x))
    {
      regno = REGNO (x);
      if (HARD_REGISTER_NUM_P (regno))
        bitmap_set_range (region_ref_regs, regno,
                          hard_regno_nregs[regno][GET_MODE (x)]);
      else
        bitmap_set_bit (region_ref_regs, REGNO (x));
      return;
    }
  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    if (fmt[i] == 'e')
      setup_ref_regs (XEXP (x, i));
    else if (fmt[i] == 'E')
      {
        for (j = 0; j < XVECLEN (x, i); j++)
          setup_ref_regs (XVECEXP (x, i, j));
      }
}
/* Initiate current register pressure related info at the start of
   basic block BB.  */
static void
initiate_bb_reg_pressure_info (basic_block bb)
{
  unsigned int i ATTRIBUTE_UNUSED;
  rtx insn;

  if (current_nr_blocks > 1)
    FOR_BB_INSNS (bb, insn)
      if (NONDEBUG_INSN_P (insn))
        setup_ref_regs (PATTERN (insn));
  initiate_reg_pressure_info (df_get_live_in (bb));
#ifdef EH_RETURN_DATA_REGNO
  if (bb_has_eh_pred (bb))
    for (i = 0; ; ++i)
      {
        unsigned int regno = EH_RETURN_DATA_REGNO (i);

        if (regno == INVALID_REGNUM)
          break;
        if (! bitmap_bit_p (df_get_live_in (bb), regno))
          mark_regno_birth_or_death (curr_reg_live, curr_reg_pressure,
                                     regno, true);
      }
#endif
}
/* Save current register pressure related info.  */
static void
save_reg_pressure (void)
{
  int i;

  for (i = 0; i < ira_pressure_classes_num; i++)
    saved_reg_pressure[ira_pressure_classes[i]]
      = curr_reg_pressure[ira_pressure_classes[i]];
  bitmap_copy (saved_reg_live, curr_reg_live);
}

/* Restore saved register pressure related info.  */
static void
restore_reg_pressure (void)
{
  int i;

  for (i = 0; i < ira_pressure_classes_num; i++)
    curr_reg_pressure[ira_pressure_classes[i]]
      = saved_reg_pressure[ira_pressure_classes[i]];
  bitmap_copy (curr_reg_live, saved_reg_live);
}

/* Return TRUE if the register is dying after its USE.  */
static bool
dying_use_p (struct reg_use_data *use)
{
  struct reg_use_data *next;

  for (next = use->next_regno_use; next != use; next = next->next_regno_use)
    if (NONDEBUG_INSN_P (next->insn)
        && QUEUE_INDEX (next->insn) != QUEUE_SCHEDULED)
      return false;
  return true;
}
/* Print info about the current register pressure and its excess for
   each pressure class.  */
static void
print_curr_reg_pressure (void)
{
  int i;
  enum reg_class cl;

  fprintf (sched_dump, ";;\t");
  for (i = 0; i < ira_pressure_classes_num; i++)
    {
      cl = ira_pressure_classes[i];
      gcc_assert (curr_reg_pressure[cl] >= 0);
      fprintf (sched_dump, " %s:%d(%d)", reg_class_names[cl],
               curr_reg_pressure[cl],
               curr_reg_pressure[cl] - ira_class_hard_regs_num[cl]);
    }
  fprintf (sched_dump, "\n");
}
/* Determine if INSN has a condition that is clobbered if a register
   in SET_REGS is modified.  */
static bool
cond_clobbered_p (rtx_insn *insn, HARD_REG_SET set_regs)
{
  rtx pat = PATTERN (insn);
  gcc_assert (GET_CODE (pat) == COND_EXEC);
  if (TEST_HARD_REG_BIT (set_regs, REGNO (XEXP (COND_EXEC_TEST (pat), 0))))
    {
      sd_iterator_def sd_it;
      dep_t dep;
      haifa_change_pattern (insn, ORIG_PAT (insn));
      FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
        DEP_STATUS (dep) &= ~DEP_CANCELLED;
      TODO_SPEC (insn) = HARD_DEP;
      if (sched_verbose >= 2)
        fprintf (sched_dump,
                 ";;\t\tdequeue insn %s because of clobbered condition\n",
                 (*current_sched_info->print_insn) (insn, 0));
      return true;
    }

  return false;
}
/* This function should be called after modifying the pattern of INSN,
   to update scheduler data structures as needed.  */
static void
update_insn_after_change (rtx_insn *insn)
{
  sd_iterator_def sd_it;
  dep_t dep;

  dfa_clear_single_insn_cache (insn);

  sd_it = sd_iterator_start (insn,
                             SD_LIST_FORW | SD_LIST_BACK | SD_LIST_RES_BACK);
  while (sd_iterator_cond (&sd_it, &dep))
    {
      DEP_COST (dep) = UNKNOWN_DEP_COST;
      sd_iterator_next (&sd_it);
    }

  /* Invalidate INSN_COST, so it'll be recalculated.  */
  INSN_COST (insn) = -1;
  /* Invalidate INSN_TICK, so it'll be recalculated.  */
  INSN_TICK (insn) = INVALID_TICK;
}
/* Two VECs, one to hold dependencies for which pattern replacements
   need to be applied or restored at the start of the next cycle, and
   another to hold an integer that is either one, to apply the
   corresponding replacement, or zero to restore it.  */
static vec<dep_t> next_cycle_replace_deps;
static vec<int> next_cycle_apply;

static void apply_replacement (dep_t, bool);
static void restore_pattern (dep_t, bool);

/* Look at the remaining dependencies for insn NEXT, and compute and return
   the TODO_SPEC value we should use for it.  This is called after one of
   NEXT's dependencies has been resolved.
   We also perform pattern replacements for predication, and for broken
   replacement dependencies.  The latter is only done if FOR_BACKTRACK is
   false.  */
static ds_t
recompute_todo_spec (rtx_insn *next, bool for_backtrack)
{
  ds_t new_ds;
  sd_iterator_def sd_it;
  dep_t dep, modify_dep = NULL;
  int n_spec = 0;
  int n_control = 0;
  int n_replace = 0;
  bool first_p = true;

  if (sd_lists_empty_p (next, SD_LIST_BACK))
    /* NEXT has all its dependencies resolved.  */
    return 0;

  if (!sd_lists_empty_p (next, SD_LIST_HARD_BACK))
    return HARD_DEP;

  /* Now we've got NEXT with speculative deps only.
     1. Look at the deps to see what we have to do.
     2. Check if we can do 'todo'.  */
  new_ds = 0;

  FOR_EACH_DEP (next, SD_LIST_BACK, sd_it, dep)
    {
      rtx_insn *pro = DEP_PRO (dep);
      ds_t ds = DEP_STATUS (dep) & SPECULATIVE;

      if (DEBUG_INSN_P (pro) && !DEBUG_INSN_P (next))
        continue;

      if (ds)
        {
          n_spec++;
          if (first_p)
            {
              first_p = false;

              new_ds = ds;
            }
          else
            new_ds = ds_merge (new_ds, ds);
        }
      else if (DEP_TYPE (dep) == REG_DEP_CONTROL)
        {
          if (QUEUE_INDEX (pro) != QUEUE_SCHEDULED)
            {
              n_control++;
              modify_dep = dep;
            }
          DEP_STATUS (dep) &= ~DEP_CANCELLED;
        }
      else if (DEP_REPLACE (dep) != NULL)
        {
          if (QUEUE_INDEX (pro) != QUEUE_SCHEDULED)
            {
              n_replace++;
              modify_dep = dep;
            }
          DEP_STATUS (dep) &= ~DEP_CANCELLED;
        }
    }

  if (n_replace > 0 && n_control == 0 && n_spec == 0)
    {
      if (!dbg_cnt (sched_breakdep))
        return HARD_DEP;
      FOR_EACH_DEP (next, SD_LIST_BACK, sd_it, dep)
        {
          struct dep_replacement *desc = DEP_REPLACE (dep);
          if (desc != NULL)
            {
              if (desc->insn == next && !for_backtrack)
                {
                  gcc_assert (n_replace == 1);
                  apply_replacement (dep, true);
                }
              DEP_STATUS (dep) |= DEP_CANCELLED;
            }
        }
      return 0;
    }
  else if (n_control == 1 && n_replace == 0 && n_spec == 0)
    {
      rtx_insn *pro, *other;
      rtx new_pat;
      rtx cond = NULL_RTX;
      bool success;
      rtx_insn *prev = NULL;
      int i;
      unsigned regno;

      if ((current_sched_info->flags & DO_PREDICATION) == 0
          || (ORIG_PAT (next) != NULL_RTX
              && PREDICATED_PAT (next) == NULL_RTX))
        return HARD_DEP;

      pro = DEP_PRO (modify_dep);
      other = real_insn_for_shadow (pro);
      if (other != NULL_RTX)
        pro = other;

      cond = sched_get_reverse_condition_uncached (pro);
      regno = REGNO (XEXP (cond, 0));

      /* Find the last scheduled insn that modifies the condition register.
         We can stop looking once we find the insn we depend on through the
         REG_DEP_CONTROL; if the condition register isn't modified after it,
         we know that it still has the right value.  */
      if (QUEUE_INDEX (pro) == QUEUE_SCHEDULED)
        FOR_EACH_VEC_ELT_REVERSE (scheduled_insns, i, prev)
          {
            HARD_REG_SET t;

            find_all_hard_reg_sets (prev, &t, true);
            if (TEST_HARD_REG_BIT (t, regno))
              return HARD_DEP;
            if (prev == pro)
              break;
          }
      if (ORIG_PAT (next) == NULL_RTX)
        {
          ORIG_PAT (next) = PATTERN (next);

          new_pat = gen_rtx_COND_EXEC (VOIDmode, cond, PATTERN (next));
          success = haifa_change_pattern (next, new_pat);
          if (!success)
            return HARD_DEP;
          PREDICATED_PAT (next) = new_pat;
        }
      else if (PATTERN (next) != PREDICATED_PAT (next))
        {
          bool success = haifa_change_pattern (next,
                                               PREDICATED_PAT (next));
          gcc_assert (success);
        }
      DEP_STATUS (modify_dep) |= DEP_CANCELLED;
      return DEP_CONTROL;
    }

  if (PREDICATED_PAT (next) != NULL_RTX)
    {
      int tick = INSN_TICK (next);
      bool success = haifa_change_pattern (next,
                                           ORIG_PAT (next));
      INSN_TICK (next) = tick;
      gcc_assert (success);
    }

  /* We can't handle the case where there are both speculative and control
     dependencies, so we return HARD_DEP in such a case.  Also fail if
     we have speculative dependencies with not enough points, or more than
     one control dependency.  */
  if ((n_spec > 0 && (n_control > 0 || n_replace > 0))
      || (n_spec > 0
          /* Too few points?  */
          && ds_weak (new_ds) < spec_info->data_weakness_cutoff)
      || n_control > 0
      || n_replace > 0)
    return HARD_DEP;

  return new_ds;
}
/* Pointer to the last instruction scheduled.  */
static rtx_insn *last_scheduled_insn;

/* Pointer to the last nondebug instruction scheduled within the
   block, or the prev_head of the scheduling block.  Used by
   rank_for_schedule, so that insns independent of the last scheduled
   insn will be preferred over dependent instructions.  */
static rtx last_nondebug_scheduled_insn;

/* Pointer that iterates through the list of unscheduled insns if we
   have a dbg_cnt enabled.  It always points at an insn prior to the
   first unscheduled one.  */
static rtx_insn *nonscheduled_insns_begin;
/* Compute cost of executing INSN.
   This is the number of cycles between instruction issue and
   instruction results.  */
int
insn_cost (rtx_insn *insn)
{
  int cost;

  if (sel_sched_p ())
    {
      if (recog_memoized (insn) < 0)
        return 0;

      cost = insn_default_latency (insn);
      if (cost < 0)
        cost = 0;

      return cost;
    }

  cost = INSN_COST (insn);

  if (cost < 0)
    {
      /* A USE insn, or something else we don't need to
         understand.  We can't pass these directly to
         result_ready_cost or insn_default_latency because it will
         trigger a fatal error for unrecognizable insns.  */
      if (recog_memoized (insn) < 0)
        {
          INSN_COST (insn) = 0;
          return 0;
        }
      else
        {
          cost = insn_default_latency (insn);
          if (cost < 0)
            cost = 0;

          INSN_COST (insn) = cost;
        }
    }

  return cost;
}
/* Compute cost of dependence LINK.
   This is the number of cycles between instruction issue and
   instruction results.
   ??? We also use this function to call recog_memoized on all insns.  */
int
dep_cost_1 (dep_t link, dw_t dw)
{
  rtx_insn *insn = DEP_PRO (link);
  rtx_insn *used = DEP_CON (link);
  int cost;

  if (DEP_COST (link) != UNKNOWN_DEP_COST)
    return DEP_COST (link);

  if (delay_htab)
    {
      struct delay_pair *delay_entry;
      delay_entry
        = delay_htab_i2->find_with_hash (used, htab_hash_pointer (used));
      if (delay_entry)
        {
          if (delay_entry->i1 == insn)
            {
              DEP_COST (link) = pair_delay (delay_entry);
              return DEP_COST (link);
            }
        }
    }

  /* A USE insn should never require the value used to be computed.
     This allows the computation of a function's result and parameter
     values to overlap the return and call.  We don't care about the
     dependence cost when only decreasing register pressure.  */
  if (recog_memoized (used) < 0)
    {
      cost = 0;
      recog_memoized (insn);
    }
  else
    {
      enum reg_note dep_type = DEP_TYPE (link);

      cost = insn_cost (insn);

      if (INSN_CODE (insn) >= 0)
        {
          if (dep_type == REG_DEP_ANTI)
            cost = 0;
          else if (dep_type == REG_DEP_OUTPUT)
            {
              cost = (insn_default_latency (insn)
                      - insn_default_latency (used));
              if (cost <= 0)
                cost = 1;
            }
          else if (bypass_p (insn))
            cost = insn_latency (insn, used);
        }

      if (targetm.sched.adjust_cost_2)
        cost = targetm.sched.adjust_cost_2 (used, (int) dep_type, insn, cost,
                                            dw);
      else if (targetm.sched.adjust_cost != NULL)
        {
          /* This variable is used for backward compatibility with the
             targets.  */
          rtx dep_cost_rtx_link = alloc_INSN_LIST (NULL_RTX, NULL_RTX);

          /* Make it self-cycled, so that if someone tries to walk over this
             incomplete list he/she will be caught in an endless loop.  */
          XEXP (dep_cost_rtx_link, 1) = dep_cost_rtx_link;

          /* Targets use only REG_NOTE_KIND of the link.  */
          PUT_REG_NOTE_KIND (dep_cost_rtx_link, DEP_TYPE (link));

          cost = targetm.sched.adjust_cost (used, dep_cost_rtx_link,
                                            insn, cost);

          free_INSN_LIST_node (dep_cost_rtx_link);
        }

      if (cost < 0)
        cost = 0;
    }

  DEP_COST (link) = cost;
  return cost;
}
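/* Editorial examples of the rules above (the latencies are
   hypothetical): for a producer with default latency 3, a true
   dependence costs 3 cycles; an anti dependence costs 0; an output
   dependence against a consumer of default latency 1 costs
   3 - 1 == 2 cycles (never less than 1); and a recorded delay pair
   overrides all of this with pair_delay.  */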
/* Compute cost of dependence LINK.
   This is the number of cycles between instruction issue and
   instruction results.  */
int
dep_cost (dep_t link)
{
  return dep_cost_1 (link, 0);
}

/* Use this sel-sched.c friendly function in reorder2 instead of increasing
   INSN_PRIORITY explicitly.  */
void
increase_insn_priority (rtx_insn *insn, int amount)
{
  if (!sel_sched_p ())
    {
      /* We're dealing with haifa-sched.c INSN_PRIORITY.  */
      if (INSN_PRIORITY_KNOWN (insn))
        INSN_PRIORITY (insn) += amount;
    }
  else
    {
      /* In sel-sched.c INSN_PRIORITY is not kept up to date.
         Use EXPR_PRIORITY instead.  */
      sel_add_to_insn_priority (insn, amount);
    }
}
/* Return 'true' if DEP should be included in priority calculations.  */
static bool
contributes_to_priority_p (dep_t dep)
{
  if (DEBUG_INSN_P (DEP_CON (dep))
      || DEBUG_INSN_P (DEP_PRO (dep)))
    return false;

  /* Critical path is meaningful in block boundaries only.  */
  if (!current_sched_info->contributes_to_priority (DEP_CON (dep),
                                                    DEP_PRO (dep)))
    return false;

  if (DEP_REPLACE (dep) != NULL)
    return false;

  /* If flag COUNT_SPEC_IN_CRITICAL_PATH is set,
     then speculative instructions will be less likely to be
     scheduled.  That is because the priority of
     their producers will increase, and, thus, the
     producers will be more likely to be scheduled, thus
     resolving the dependence.  */
  if (sched_deps_info->generate_spec_deps
      && !(spec_info->flags & COUNT_SPEC_IN_CRITICAL_PATH)
      && (DEP_STATUS (dep) & SPECULATIVE))
    return false;

  return true;
}
/* Compute the number of nondebug deps in list LIST for INSN.  */

static int
dep_list_size (rtx insn, sd_list_types_def list)
{
  sd_iterator_def sd_it;
  dep_t dep;
  int dbgcount = 0, nodbgcount = 0;

  if (!MAY_HAVE_DEBUG_INSNS)
    return sd_lists_size (insn, list);

  FOR_EACH_DEP (insn, list, sd_it, dep)
    {
      if (DEBUG_INSN_P (DEP_CON (dep)))
        dbgcount++;
      else if (!DEBUG_INSN_P (DEP_PRO (dep)))
        nodbgcount++;
    }

  gcc_assert (dbgcount + nodbgcount == sd_lists_size (insn, list));

  return nodbgcount;
}
/* Compute the priority number for INSN.  */
static int
priority (rtx_insn *insn)
{
  if (! INSN_P (insn))
    return 0;

  /* We should not be interested in priority of an already scheduled insn.  */
  gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);

  if (!INSN_PRIORITY_KNOWN (insn))
    {
      int this_priority = -1;

      if (dep_list_size (insn, SD_LIST_FORW) == 0)
        /* ??? We should set INSN_PRIORITY to insn_cost when an insn has
           some forward deps but all of them are ignored by
           contributes_to_priority hook.  At the moment we set priority of
           such insn to 0.  */
        this_priority = insn_cost (insn);
      else
        {
          rtx prev_first, twin;
          basic_block rec;

          /* For recovery check instructions we calculate priority slightly
             differently than for normal instructions.  Instead of walking
             through INSN_FORW_DEPS (check) list, we walk through
             INSN_FORW_DEPS list of each instruction in the corresponding
             recovery block.  */

          /* Selective scheduling does not define RECOVERY_BLOCK macro.  */
          rec = sel_sched_p () ? NULL : RECOVERY_BLOCK (insn);
          if (!rec || rec == EXIT_BLOCK_PTR_FOR_FN (cfun))
            {
              prev_first = PREV_INSN (insn);
              twin = insn;
            }
          else
            {
              prev_first = NEXT_INSN (BB_HEAD (rec));
              twin = PREV_INSN (BB_END (rec));
            }

          do
            {
              sd_iterator_def sd_it;
              dep_t dep;

              FOR_EACH_DEP (twin, SD_LIST_FORW, sd_it, dep)
                {
                  rtx_insn *next;
                  int next_priority;

                  next = DEP_CON (dep);

                  if (BLOCK_FOR_INSN (next) != rec)
                    {
                      int cost;

                      if (!contributes_to_priority_p (dep))
                        continue;

                      if (twin == insn)
                        cost = dep_cost (dep);
                      else
                        {
                          struct _dep _dep1, *dep1 = &_dep1;

                          init_dep (dep1, insn, next, REG_DEP_ANTI);

                          cost = dep_cost (dep1);
                        }

                      next_priority = cost + priority (next);

                      if (next_priority > this_priority)
                        this_priority = next_priority;
                    }
                }

              twin = PREV_INSN (twin);
            }
          while (twin != prev_first);
        }

      if (this_priority < 0)
        {
          gcc_assert (this_priority == -1);

          this_priority = insn_cost (insn);
        }

      INSN_PRIORITY (insn) = this_priority;
      INSN_PRIORITY_STATUS (insn) = 1;
    }

  return INSN_PRIORITY (insn);
}
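/* An editorial example of the recursion above (the costs are
   hypothetical): for a dependence chain A -> B -> C with
   dep_cost (A->B) == 2, dep_cost (B->C) == 1 and insn_cost (C) == 1,
   we get priority (C) == 1, priority (B) == 1 + 1 == 2 and
   priority (A) == 2 + 2 == 4, i.e. the length of the longest remaining
   path to the end of the block.  */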
/* Macros and functions for keeping the priority queue sorted, and
   dealing with queuing and dequeuing of instructions.  */

/* For each pressure class CL, set DEATH[CL] to the number of registers
   in that class that die in INSN.  */

static void
calculate_reg_deaths (rtx_insn *insn, int *death)
{
  int i;
  struct reg_use_data *use;

  for (i = 0; i < ira_pressure_classes_num; i++)
    death[ira_pressure_classes[i]] = 0;
  for (use = INSN_REG_USE_LIST (insn); use != NULL; use = use->next_insn_use)
    if (dying_use_p (use))
      mark_regno_birth_or_death (0, death, use->regno, true);
}
/* Setup info about the current register pressure impact of scheduling
   INSN at the current scheduling point.  */
static void
setup_insn_reg_pressure_info (rtx_insn *insn)
{
  int i, change, before, after, hard_regno;
  int excess_cost_change;
  enum machine_mode mode;
  enum reg_class cl;
  struct reg_pressure_data *pressure_info;
  int *max_reg_pressure;
  static int death[N_REG_CLASSES];

  gcc_checking_assert (!DEBUG_INSN_P (insn));

  excess_cost_change = 0;
  calculate_reg_deaths (insn, death);
  pressure_info = INSN_REG_PRESSURE (insn);
  max_reg_pressure = INSN_MAX_REG_PRESSURE (insn);
  gcc_assert (pressure_info != NULL && max_reg_pressure != NULL);
  for (i = 0; i < ira_pressure_classes_num; i++)
    {
      cl = ira_pressure_classes[i];
      gcc_assert (curr_reg_pressure[cl] >= 0);
      change = (int) pressure_info[i].set_increase - death[cl];
      before = MAX (0, max_reg_pressure[i] - ira_class_hard_regs_num[cl]);
      after = MAX (0, max_reg_pressure[i] + change
                   - ira_class_hard_regs_num[cl]);
      hard_regno = ira_class_hard_regs[cl][0];
      gcc_assert (hard_regno >= 0);
      mode = reg_raw_mode[hard_regno];
      excess_cost_change += ((after - before)
                             * (ira_memory_move_cost[mode][cl][0]
                                + ira_memory_move_cost[mode][cl][1]));
    }
  INSN_REG_PRESSURE_EXCESS_COST_CHANGE (insn) = excess_cost_change;
}
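/* An editorial example of the excess cost computation (the numbers are
   hypothetical): if a pressure class has 16 hard registers, the maximum
   pressure at this point is 17 (BEFORE == 1), and INSN frees two
   registers of that class net of its sets (CHANGE == -2), then
   AFTER == 0 and the insn's INSN_REG_PRESSURE_EXCESS_COST_CHANGE
   decreases by one store-plus-load round trip of memory move cost for
   that class.  */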
/* This is the first page of code related to SCHED_PRESSURE_MODEL.
   It tries to make the scheduler take register pressure into account
   without introducing too many unnecessary stalls.  It hooks into the
   main scheduling algorithm at several points:

    - Before scheduling starts, model_start_schedule constructs a
      "model schedule" for the current block.  This model schedule is
      chosen solely to keep register pressure down.  It does not take the
      target's pipeline or the original instruction order into account,
      except as a tie-breaker.  It also doesn't work to a particular
      pressure limit.

      This model schedule gives us an idea of what pressure can be
      achieved for the block and gives us an example of a schedule that
      keeps to that pressure.  It also makes the final schedule less
      dependent on the original instruction order.  This is important
      because the original order can either be "wide" (many values live
      at once, such as in user-scheduled code) or "narrow" (few values
      live at once, such as after loop unrolling, where several
      iterations are executed sequentially).

      We do not apply this model schedule to the rtx stream.  We simply
      record it in model_schedule.  We also compute the maximum pressure,
      MP, that was seen during this schedule.

    - Instructions are added to the ready queue even if they require
      a stall.  The length of the stall is instead computed as:

         MAX (INSN_TICK (INSN) - clock_var, 0)

      (= insn_delay).  This allows rank_for_schedule to choose between
      introducing a deliberate stall or increasing pressure.

    - Before sorting the ready queue, model_set_excess_costs assigns
      a pressure-based cost to each ready instruction in the queue.
      This is the instruction's INSN_REG_PRESSURE_EXCESS_COST_CHANGE
      (ECC for short) and is effectively measured in cycles.

    - rank_for_schedule ranks instructions based on:

         ECC (insn) + insn_delay (insn)

      then as:

         insn_delay (insn)

      So, for example, an instruction X1 with an ECC of 1 that can issue
      now will win over an instruction X0 with an ECC of zero that would
      introduce a stall of one cycle.  However, an instruction X2 with an
      ECC of 2 that can issue now will lose to both X0 and X1.

    - When an instruction is scheduled, model_recompute updates the model
      schedule with the new pressures (some of which might now exceed the
      original maximum pressure MP).  model_update_limit_points then searches
      for the new point of maximum pressure, if not already known.  */
/* Used to separate high-verbosity debug information for SCHED_PRESSURE_MODEL
   from surrounding debug information.  */
#define MODEL_BAR \
  ";;\t\t+------------------------------------------------------\n"

/* Information about the pressure on a particular register class at a
   particular point of the model schedule.  */
struct model_pressure_data {
  /* The pressure at this point of the model schedule, or -1 if the
     point is associated with an instruction that has already been
     scheduled.  */
  int ref_pressure;

  /* The maximum pressure during or after this point of the model schedule.  */
  int max_pressure;
};
1814 /* Per-instruction information that is used while building the model
1815 schedule. Here, "schedule" refers to the model schedule rather
1816 than the main schedule. */
1817 struct model_insn_info {
1818 /* The instruction itself. */
1819 rtx_insn *insn;
1821 /* If this instruction is in model_worklist, these fields link to the
1822 previous (higher-priority) and next (lower-priority) instructions
1823 in the list. */
1824 struct model_insn_info *prev;
1825 struct model_insn_info *next;
1827 /* While constructing the schedule, QUEUE_INDEX describes whether an
1828 instruction has already been added to the schedule (QUEUE_SCHEDULED),
1829 is in model_worklist (QUEUE_READY), or neither (QUEUE_NOWHERE).
1830 old_queue records the value that QUEUE_INDEX had before scheduling
1831 started, so that we can restore it once the schedule is complete. */
1832 int old_queue;
1834 /* The relative importance of an unscheduled instruction. Higher
1835 values indicate greater importance. */
1836 unsigned int model_priority;
1838 /* The length of the longest path of satisfied true dependencies
1839 that leads to this instruction. */
1840 unsigned int depth;
1842 /* The length of the longest path of dependencies of any kind
1843 that leads from this instruction. */
1844 unsigned int alap;
1846 /* The number of predecessor nodes that must still be scheduled. */
1847 int unscheduled_preds;
1850 /* Information about the pressure limit for a particular register class.
1851 This structure is used when applying a model schedule to the main
1852 schedule. */
1853 struct model_pressure_limit {
1854 /* The maximum register pressure seen in the original model schedule. */
1855 int orig_pressure;
1857 /* The maximum register pressure seen in the current model schedule
1858 (which excludes instructions that have already been scheduled). */
1859 int pressure;
1861 /* The point of the current model schedule at which PRESSURE is first
1862 reached. It is set to -1 if the value needs to be recomputed. */
1863 int point;
1866 /* Describes a particular way of measuring register pressure. */
1867 struct model_pressure_group {
1868 /* Index PCI describes the maximum pressure on ira_pressure_classes[PCI]. */
1869 struct model_pressure_limit limits[N_REG_CLASSES];
1871 /* Index (POINT * ira_num_pressure_classes + PCI) describes the pressure
1872 on register class ira_pressure_classes[PCI] at point POINT of the
1873 current model schedule. A POINT of model_num_insns describes the
1874 pressure at the end of the schedule. */
1875 struct model_pressure_data *model;
1878 /* Index POINT gives the instruction at point POINT of the model schedule.
1879 This array doesn't change during main scheduling. */
1880 static vec<rtx_insn *> model_schedule;
1882 /* The list of instructions in the model worklist, sorted in order of
1883 decreasing priority. */
1884 static struct model_insn_info *model_worklist;
1886 /* Index I describes the instruction with INSN_LUID I. */
1887 static struct model_insn_info *model_insns;
1889 /* The number of instructions in the model schedule. */
1890 static int model_num_insns;
1892 /* The index of the first instruction in model_schedule that hasn't yet been
1893 added to the main schedule, or model_num_insns if all of them have. */
1894 static int model_curr_point;
1896 /* Describes the pressure before each instruction in the model schedule. */
1897 static struct model_pressure_group model_before_pressure;
1899 /* The first unused model_priority value (as used in model_insn_info). */
1900 static unsigned int model_next_priority;
1903 /* The model_pressure_data for ira_pressure_classes[PCI] in GROUP
1904 at point POINT of the model schedule. */
1905 #define MODEL_PRESSURE_DATA(GROUP, POINT, PCI) \
1906 (&(GROUP)->model[(POINT) * ira_pressure_classes_num + (PCI)])
1908 /* The maximum pressure on ira_pressure_classes[PCI] in GROUP at or
1909 after point POINT of the model schedule. */
1910 #define MODEL_MAX_PRESSURE(GROUP, POINT, PCI) \
1911 (MODEL_PRESSURE_DATA (GROUP, POINT, PCI)->max_pressure)
1913 /* The pressure on ira_pressure_classes[PCI] in GROUP at point POINT
1914 of the model schedule. */
1915 #define MODEL_REF_PRESSURE(GROUP, POINT, PCI) \
1916 (MODEL_PRESSURE_DATA (GROUP, POINT, PCI)->ref_pressure)
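
/* Illustrative sketch, not part of the pass: the macros above index a
   row-major matrix with model_num_insns + 1 rows (one per schedule
   point, plus one for the end of the schedule) and
   ira_pressure_classes_num columns.  A hypothetical free-standing
   accessor makes the layout explicit:  */

static struct model_pressure_data *
model_pressure_cell (struct model_pressure_data *matrix,
                     int num_classes, int point, int pci)
{
  /* Row POINT, column PCI of the flattened matrix.  */
  return &matrix[point * num_classes + pci];
}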
1918 /* Information about INSN that is used when creating the model schedule. */
1919 #define MODEL_INSN_INFO(INSN) \
1920 (&model_insns[INSN_LUID (INSN)])
1922 /* The instruction at point POINT of the model schedule. */
1923 #define MODEL_INSN(POINT) \
1924 (model_schedule[POINT])
1927 /* Return INSN's index in the model schedule, or model_num_insns if it
1928 doesn't belong to that schedule. */
1930 static int
1931 model_index (rtx_insn *insn)
1933 if (INSN_MODEL_INDEX (insn) == 0)
1934 return model_num_insns;
1935 return INSN_MODEL_INDEX (insn) - 1;
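
/* Worked example (illustration only): INSN_MODEL_INDEX biases the
   schedule position by one so that the default value 0 means "not in
   the model schedule".  model_add_to_schedule below stores POINT + 1,
   so for a 10-insn model schedule:

     INSN_MODEL_INDEX == 0   ->  model_index () == 10 (model_num_insns)
     INSN_MODEL_INDEX == 1   ->  model_index () == 0  (first point)
     INSN_MODEL_INDEX == 10  ->  model_index () == 9  (last point)  */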
1938 /* Make sure that GROUP->limits is up-to-date for the current point
1939 of the model schedule. */
1941 static void
1942 model_update_limit_points_in_group (struct model_pressure_group *group)
1944 int pci, max_pressure, point;
1946 for (pci = 0; pci < ira_pressure_classes_num; pci++)
1948 /* We may have passed the final point at which the pressure in
1949 group->limits[pci].pressure was reached. Update the limit if so. */
1950 max_pressure = MODEL_MAX_PRESSURE (group, model_curr_point, pci);
1951 group->limits[pci].pressure = max_pressure;
1953 /* Find the point at which MAX_PRESSURE is first reached. We need
1954 to search in three cases:
1956 - We've already moved past the previous pressure point.
1957 In this case we search forward from model_curr_point.
1959 - We scheduled the previous point of maximum pressure ahead of
1960 its position in the model schedule, but doing so didn't bring
1961 the pressure point earlier. In this case we search forward
1962 from that previous pressure point.
1964 - Scheduling an instruction early caused the maximum pressure
1965 to decrease. In this case we will have set the pressure
1966 point to -1, and we search forward from model_curr_point. */
1967 point = MAX (group->limits[pci].point, model_curr_point);
1968 while (point < model_num_insns
1969 && MODEL_REF_PRESSURE (group, point, pci) < max_pressure)
1970 point++;
1971 group->limits[pci].point = point;
1973 gcc_assert (MODEL_REF_PRESSURE (group, point, pci) == max_pressure);
1974 gcc_assert (MODEL_MAX_PRESSURE (group, point, pci) == max_pressure);
1978 /* Make sure that all register-pressure limits are up-to-date for the
1979 current position in the model schedule. */
1981 static void
1982 model_update_limit_points (void)
1984 model_update_limit_points_in_group (&model_before_pressure);
1987 /* Return the model_index of the last unscheduled use in chain USE
1988 outside of USE's instruction. Return -1 if there are no other uses,
1989 or model_num_insns if the register is live at the end of the block. */
1991 static int
1992 model_last_use_except (struct reg_use_data *use)
1994 struct reg_use_data *next;
1995 int last, index;
1997 last = -1;
1998 for (next = use->next_regno_use; next != use; next = next->next_regno_use)
1999 if (NONDEBUG_INSN_P (next->insn)
2000 && QUEUE_INDEX (next->insn) != QUEUE_SCHEDULED)
2002 index = model_index (next->insn);
2003 if (index == model_num_insns)
2004 return model_num_insns;
2005 if (last < index)
2006 last = index;
2008 return last;
2011 /* An instruction with model_index POINT has just been scheduled, and it
2012 adds DELTA to the pressure on ira_pressure_classes[PCI] after POINT - 1.
2013 Update MODEL_REF_PRESSURE (GROUP, POINT, PCI) and
2014 MODEL_MAX_PRESSURE (GROUP, POINT, PCI) accordingly. */
2016 static void
2017 model_start_update_pressure (struct model_pressure_group *group,
2018 int point, int pci, int delta)
2020 int next_max_pressure;
2022 if (point == model_num_insns)
2024 /* The instruction wasn't part of the model schedule; it was moved
2025 from a different block. Update the pressure for the end of
2026 the model schedule. */
2027 MODEL_REF_PRESSURE (group, point, pci) += delta;
2028 MODEL_MAX_PRESSURE (group, point, pci) += delta;
2030 else
2032 /* Record that this instruction has been scheduled. Nothing now
2033 changes between POINT and POINT + 1, so get the maximum pressure
2034 from the latter. If the maximum pressure decreases, the new
2035 pressure point may be before POINT. */
2036 MODEL_REF_PRESSURE (group, point, pci) = -1;
2037 next_max_pressure = MODEL_MAX_PRESSURE (group, point + 1, pci);
2038 if (MODEL_MAX_PRESSURE (group, point, pci) > next_max_pressure)
2040 MODEL_MAX_PRESSURE (group, point, pci) = next_max_pressure;
2041 if (group->limits[pci].point == point)
2042 group->limits[pci].point = -1;
2047 /* Record that scheduling a later instruction has changed the pressure
2048 at point POINT of the model schedule by DELTA (which might be 0).
2049 Update GROUP accordingly. Return nonzero if these changes might
2050 trigger changes to previous points as well. */
2052 static int
2053 model_update_pressure (struct model_pressure_group *group,
2054 int point, int pci, int delta)
2056 int ref_pressure, max_pressure, next_max_pressure;
2058 /* If POINT hasn't yet been scheduled, update its pressure. */
2059 ref_pressure = MODEL_REF_PRESSURE (group, point, pci);
2060 if (ref_pressure >= 0 && delta != 0)
2062 ref_pressure += delta;
2063 MODEL_REF_PRESSURE (group, point, pci) = ref_pressure;
2065 /* Check whether the maximum pressure in the overall schedule
2066 has increased. (This means that the MODEL_MAX_PRESSURE of
2067 every point <= POINT will need to increase too; see below.) */
2068 if (group->limits[pci].pressure < ref_pressure)
2069 group->limits[pci].pressure = ref_pressure;
2071 /* If we are at maximum pressure, and the maximum pressure
2072 point was previously unknown or later than POINT,
2073 bring it forward. */
2074 if (group->limits[pci].pressure == ref_pressure
2075 && !IN_RANGE (group->limits[pci].point, 0, point))
2076 group->limits[pci].point = point;
2078 /* If POINT used to be the point of maximum pressure, but isn't
2079 any longer, we need to recalculate it using a forward walk. */
2080 if (group->limits[pci].pressure > ref_pressure
2081 && group->limits[pci].point == point)
2082 group->limits[pci].point = -1;
2085 /* Update the maximum pressure at POINT. Changes here might also
2086 affect the maximum pressure at POINT - 1. */
2087 next_max_pressure = MODEL_MAX_PRESSURE (group, point + 1, pci);
2088 max_pressure = MAX (ref_pressure, next_max_pressure);
2089 if (MODEL_MAX_PRESSURE (group, point, pci) != max_pressure)
2091 MODEL_MAX_PRESSURE (group, point, pci) = max_pressure;
2092 return 1;
2094 return 0;
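
/* Illustrative sketch, not part of the pass: the code above maintains
   the invariant that MODEL_MAX_PRESSURE (GROUP, POINT, PCI) is the
   maximum reference pressure over all points >= POINT.  Rebuilt from
   scratch for one class it would look like the hypothetical helper
   below (REF[N] being the pressure at the end of the schedule; points
   that have already been scheduled store -1 and so never win the MAX).
   The incremental update only walks backwards while something
   changes.  */

static void
model_suffix_max_example (int *ref, int *max, int n)
{
  int point;

  max[n] = ref[n];
  for (point = n - 1; point >= 0; point--)
    max[point] = MAX (ref[point], max[point + 1]);
}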
2097 /* INSN has just been scheduled. Update the model schedule accordingly. */
2099 static void
2100 model_recompute (rtx_insn *insn)
2102 struct {
2103 int last_use;
2104 int regno;
2105 } uses[FIRST_PSEUDO_REGISTER + MAX_RECOG_OPERANDS];
2106 struct reg_use_data *use;
2107 struct reg_pressure_data *reg_pressure;
2108 int delta[N_REG_CLASSES];
2109 int pci, point, mix, new_last, cl, ref_pressure, queue;
2110 unsigned int i, num_uses, num_pending_births;
2111 bool print_p;
2113 /* The destinations of INSN were previously live from POINT onwards, but are
2114 now live from model_curr_point onwards. Set up DELTA accordingly. */
2115 point = model_index (insn);
2116 reg_pressure = INSN_REG_PRESSURE (insn);
2117 for (pci = 0; pci < ira_pressure_classes_num; pci++)
2119 cl = ira_pressure_classes[pci];
2120 delta[cl] = reg_pressure[pci].set_increase;
2123 /* Record which registers previously died at POINT, but which now die
2124 before POINT. Adjust DELTA so that it represents the effect of
2125 this change after POINT - 1. Set NUM_PENDING_BIRTHS to the number of
2126 registers that will be born in the range [model_curr_point, POINT). */
2127 num_uses = 0;
2128 num_pending_births = 0;
2129 for (use = INSN_REG_USE_LIST (insn); use != NULL; use = use->next_insn_use)
2131 new_last = model_last_use_except (use);
2132 if (new_last < point)
2134 gcc_assert (num_uses < ARRAY_SIZE (uses));
2135 uses[num_uses].last_use = new_last;
2136 uses[num_uses].regno = use->regno;
2137 /* This register is no longer live after POINT - 1. */
2138 mark_regno_birth_or_death (NULL, delta, use->regno, false);
2139 num_uses++;
2140 if (new_last >= 0)
2141 num_pending_births++;
2145 /* Update the MODEL_REF_PRESSURE and MODEL_MAX_PRESSURE for POINT.
2146 Also set each group pressure limit for POINT. */
2147 for (pci = 0; pci < ira_pressure_classes_num; pci++)
2149 cl = ira_pressure_classes[pci];
2150 model_start_update_pressure (&model_before_pressure,
2151 point, pci, delta[cl]);
2154 /* Walk the model schedule backwards, starting immediately before POINT. */
2155 print_p = false;
2156 if (point != model_curr_point)
2159 point--;
2160 insn = MODEL_INSN (point);
2161 queue = QUEUE_INDEX (insn);
2163 if (queue != QUEUE_SCHEDULED)
2165 /* DELTA describes the effect of the move on the register pressure
2166 after POINT. Make it describe the effect on the pressure
2167 before POINT. */
2168 i = 0;
2169 while (i < num_uses)
2171 if (uses[i].last_use == point)
2173 /* This register is now live again. */
2174 mark_regno_birth_or_death (NULL, delta,
2175 uses[i].regno, true);
2177 /* Remove this use from the array. */
2178 uses[i] = uses[num_uses - 1];
2179 num_uses--;
2180 num_pending_births--;
2182 else
2183 i++;
2186 if (sched_verbose >= 5)
2188 if (!print_p)
2190 fprintf (sched_dump, MODEL_BAR);
2191 fprintf (sched_dump, ";;\t\t| New pressure for model"
2192 " schedule\n");
2193 fprintf (sched_dump, MODEL_BAR);
2194 print_p = true;
2197 fprintf (sched_dump, ";;\t\t| %3d %4d %-30s ",
2198 point, INSN_UID (insn),
2199 str_pattern_slim (PATTERN (insn)));
2200 for (pci = 0; pci < ira_pressure_classes_num; pci++)
2202 cl = ira_pressure_classes[pci];
2203 ref_pressure = MODEL_REF_PRESSURE (&model_before_pressure,
2204 point, pci);
2205 fprintf (sched_dump, " %s:[%d->%d]",
2206 reg_class_names[ira_pressure_classes[pci]],
2207 ref_pressure, ref_pressure + delta[cl]);
2209 fprintf (sched_dump, "\n");
2213 /* Adjust the pressure at POINT. Set MIX to nonzero if POINT - 1
2214 might have changed as well. */
2215 mix = num_pending_births;
2216 for (pci = 0; pci < ira_pressure_classes_num; pci++)
2218 cl = ira_pressure_classes[pci];
2219 mix |= delta[cl];
2220 mix |= model_update_pressure (&model_before_pressure,
2221 point, pci, delta[cl]);
2224 while (mix && point > model_curr_point);
2226 if (print_p)
2227 fprintf (sched_dump, MODEL_BAR);
2230 /* After DEP, which was cancelled, has been resolved for insn NEXT,
2231 check whether the insn's pattern needs restoring. */
2232 static bool
2233 must_restore_pattern_p (rtx_insn *next, dep_t dep)
2235 if (QUEUE_INDEX (next) == QUEUE_SCHEDULED)
2236 return false;
2238 if (DEP_TYPE (dep) == REG_DEP_CONTROL)
2240 gcc_assert (ORIG_PAT (next) != NULL_RTX);
2241 gcc_assert (next == DEP_CON (dep));
2243 else
2245 struct dep_replacement *desc = DEP_REPLACE (dep);
2246 if (desc->insn != next)
2248 gcc_assert (*desc->loc == desc->orig);
2249 return false;
2252 return true;
2255 /* model_spill_cost (CL, P, P') returns the cost of increasing the
2256 pressure on CL from P to P'. We use this to calculate a "base ECC",
2257 baseECC (CL, X), for each pressure class CL and each instruction X.
2258 Supposing X changes the pressure on CL from P to P', and that the
2259 maximum pressure on CL in the current model schedule is MP', then:
2261 * if X occurs before or at the next point of maximum pressure in
2262 the model schedule and P' > MP', then:
2264 baseECC (CL, X) = model_spill_cost (CL, MP, P')
2266 The idea is that the pressure after scheduling a fixed set of
2267 instructions -- in this case, the set up to and including the
2268 next maximum pressure point -- is going to be the same regardless
2269 of the order; we simply want to keep the intermediate pressure
2270 under control. Thus X has a cost of zero unless scheduling it
2271 now would exceed MP'.
2273 If all increases in the set are by the same amount, no zero-cost
2274 instruction will ever cause the pressure to exceed MP'. However,
2275 if X is instead moved past an instruction X' with pressure in the
2276 range (MP' - (P' - P), MP'), the pressure at X' will increase
2277 beyond MP'. Since baseECC is very much a heuristic anyway,
2278 it doesn't seem worth the overhead of tracking cases like these.
2280 The cost of exceeding MP' is always based on the original maximum
2281 pressure MP. This is so that going 2 registers over the original
2282 limit has the same cost regardless of whether it comes from two
2283 separate +1 deltas or from a single +2 delta.
2285 * if X occurs after the next point of maximum pressure in the model
2286 schedule and P' > P, then:
2288 baseECC (CL, X) = model_spill_cost (CL, MP, MP' + (P' - P))
2290 That is, if we move X forward across a point of maximum pressure,
2291 and if X increases the pressure by P' - P, then we conservatively
2292 assume that scheduling X next would increase the maximum pressure
2293 by P' - P. Again, the cost of doing this is based on the original
2294 maximum pressure MP, for the same reason as above.
2296 * if P' < P, P > MP, and X occurs at or after the next point of
2297 maximum pressure, then:
2299 baseECC (CL, X) = -model_spill_cost (CL, MAX (MP, P'), P)
2301 That is, if we have already exceeded the original maximum pressure MP,
2302 and if X might reduce the maximum pressure again -- or at least push
2303 it further back, and thus allow more scheduling freedom -- it is given
2304 a negative cost to reflect the improvement.
2306 * otherwise,
2308 baseECC (CL, X) = 0
2310 In this case, X is not expected to affect the maximum pressure MP',
2311 so it has zero cost.
2313 We then create a combined value baseECC (X) that is the sum of
2314 baseECC (CL, X) for each pressure class CL.
2316 baseECC (X) could itself be used as the ECC value described above.
2317 However, this is often too conservative, in the sense that it
2318 tends to make high-priority instructions that increase pressure
2319 wait too long in cases where introducing a spill would be better.
2320 For this reason the final ECC is a priority-adjusted form of
2321 baseECC (X). Specifically, we calculate:
2323 P (X) = INSN_PRIORITY (X) - insn_delay (X) - baseECC (X)
2324 baseP = MAX { P (X) | baseECC (X) <= 0 }
2326 Then:
2328 ECC (X) = MAX (MIN (baseP - P (X), baseECC (X)), 0)
2330 Thus an instruction's effect on pressure is ignored if it has a high
2331 enough priority relative to the ones that don't increase pressure.
2332 Negative values of baseECC (X) do not increase the priority of X
2333 itself, but they do make it harder for other instructions to
2334 increase the pressure further.
2336 This pressure cost is deliberately timid. The intention has been
2337 to choose a heuristic that rarely interferes with the normal list
2338 scheduler in cases where that scheduler would produce good code.
2339 We simply want to curb some of its worst excesses. */
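
/* Illustrative sketch, not part of the pass: the baseP adjustment
   described above, written out for hypothetical parallel arrays
   BASECC[], PRIO[] and DELAY[] describing N ready insns.  The real
   computation lives in model_set_excess_costs below.  */

static void
model_ecc_example (int *basecc, int *prio, int *delay, int *ecc, int n)
{
  int i, p, base_p = 0;

  /* baseP = MAX { P (X) | baseECC (X) <= 0 }, where
     P (X) = INSN_PRIORITY (X) - insn_delay (X) - baseECC (X).  */
  for (i = 0; i < n; i++)
    if (basecc[i] <= 0)
      base_p = MAX (base_p, prio[i] - delay[i] - basecc[i]);

  /* ECC (X) = MAX (MIN (baseP - P (X), baseECC (X)), 0).  */
  for (i = 0; i < n; i++)
    {
      p = prio[i] - delay[i] - basecc[i];
      ecc[i] = MAX (MIN (base_p - p, basecc[i]), 0);
    }
}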
2341 /* Return the cost of increasing the pressure in class CL from FROM to TO.
2343 Here we use the very simplistic cost model that every register above
2344 ira_class_hard_regs_num[CL] has a spill cost of 1. We could use other
2345 measures instead, such as one based on MEMORY_MOVE_COST. However:
2347 (1) In order for an instruction to be scheduled, the higher cost
2348 would need to be justified in a single saving of that many stalls.
2349 This is overly pessimistic, because the benefit of spilling is
2350 often to avoid a sequence of several short stalls rather than
2351 a single long one.
2353 (2) The cost is still arbitrary. Because we are not allocating
2354 registers during scheduling, we have no way of knowing for
2355 sure how many memory accesses will be required by each spill,
2356 where the spills will be placed within the block, or even
2357 which block(s) will contain the spills.
2359 So a higher cost than 1 is often too conservative in practice,
2360 forcing blocks to contain unnecessary stalls instead of spill code.
2361 The simple cost below seems to be the best compromise. It reduces
2362 the interference with the normal list scheduler, which helps make
2363 it more suitable for a default-on option. */
2365 static int
2366 model_spill_cost (int cl, int from, int to)
2368 from = MAX (from, ira_class_hard_regs_num[cl]);
2369 return MAX (to, from) - from;
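
/* Worked example (illustration only): with ira_class_hard_regs_num[CL]
   == 8, model_spill_cost (CL, 6, 10) clamps FROM up to 8 and returns
   10 - 8 = 2, i.e. only the registers beyond the eighth are assumed to
   spill; model_spill_cost (CL, 9, 10) returns 1; and any TO <= FROM
   costs 0.  */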
2372 /* Return baseECC (ira_pressure_classes[PCI], POINT), given that
2373 P = curr_reg_pressure[ira_pressure_classes[PCI]] and that
2374 P' = P + DELTA. */
2376 static int
2377 model_excess_group_cost (struct model_pressure_group *group,
2378 int point, int pci, int delta)
2380 int pressure, cl;
2382 cl = ira_pressure_classes[pci];
2383 if (delta < 0 && point >= group->limits[pci].point)
2385 pressure = MAX (group->limits[pci].orig_pressure,
2386 curr_reg_pressure[cl] + delta);
2387 return -model_spill_cost (cl, pressure, curr_reg_pressure[cl]);
2390 if (delta > 0)
2392 if (point > group->limits[pci].point)
2393 pressure = group->limits[pci].pressure + delta;
2394 else
2395 pressure = curr_reg_pressure[cl] + delta;
2397 if (pressure > group->limits[pci].pressure)
2398 return model_spill_cost (cl, group->limits[pci].orig_pressure,
2399 pressure);
2402 return 0;
2405 /* Return baseECC (MODEL_INSN (INSN)). Dump the costs to sched_dump
2406 if PRINT_P. */
2408 static int
2409 model_excess_cost (rtx_insn *insn, bool print_p)
2411 int point, pci, cl, cost, this_cost, delta;
2412 struct reg_pressure_data *insn_reg_pressure;
2413 int insn_death[N_REG_CLASSES];
2415 calculate_reg_deaths (insn, insn_death);
2416 point = model_index (insn);
2417 insn_reg_pressure = INSN_REG_PRESSURE (insn);
2418 cost = 0;
2420 if (print_p)
2421 fprintf (sched_dump, ";;\t\t| %3d %4d | %4d %+3d |", point,
2422 INSN_UID (insn), INSN_PRIORITY (insn), insn_delay (insn));
2424 /* Sum up the individual costs for each register class. */
2425 for (pci = 0; pci < ira_pressure_classes_num; pci++)
2427 cl = ira_pressure_classes[pci];
2428 delta = insn_reg_pressure[pci].set_increase - insn_death[cl];
2429 this_cost = model_excess_group_cost (&model_before_pressure,
2430 point, pci, delta);
2431 cost += this_cost;
2432 if (print_p)
2433 fprintf (sched_dump, " %s:[%d base cost %d]",
2434 reg_class_names[cl], delta, this_cost);
2437 if (print_p)
2438 fprintf (sched_dump, "\n");
2440 return cost;
2443 /* Dump the next points of maximum pressure for GROUP. */
2445 static void
2446 model_dump_pressure_points (struct model_pressure_group *group)
2448 int pci, cl;
2450 fprintf (sched_dump, ";;\t\t| pressure points");
2451 for (pci = 0; pci < ira_pressure_classes_num; pci++)
2453 cl = ira_pressure_classes[pci];
2454 fprintf (sched_dump, " %s:[%d->%d at ", reg_class_names[cl],
2455 curr_reg_pressure[cl], group->limits[pci].pressure);
2456 if (group->limits[pci].point < model_num_insns)
2457 fprintf (sched_dump, "%d:%d]", group->limits[pci].point,
2458 INSN_UID (MODEL_INSN (group->limits[pci].point)));
2459 else
2460 fprintf (sched_dump, "end]");
2462 fprintf (sched_dump, "\n");
2465 /* Set INSN_REG_PRESSURE_EXCESS_COST_CHANGE for INSNS[0...COUNT-1]. */
2467 static void
2468 model_set_excess_costs (rtx_insn **insns, int count)
2470 int i, cost, priority_base, priority;
2471 bool print_p;
2473 /* Record the baseECC value for each instruction in the model schedule,
2474 except that negative costs are converted to zero ones now rather than
2475 later. Do not assign a cost to debug instructions, since they must
2476 not change code-generation decisions. Experiments suggest we also
2477 get better results by not assigning a cost to instructions from
2478 a different block.
2480 Set PRIORITY_BASE to baseP in the block comment above. This is the
2481 maximum priority of the "cheap" instructions, which should always
2482 include the next model instruction. */
2483 priority_base = 0;
2484 print_p = false;
2485 for (i = 0; i < count; i++)
2486 if (INSN_MODEL_INDEX (insns[i]))
2488 if (sched_verbose >= 6 && !print_p)
2490 fprintf (sched_dump, MODEL_BAR);
2491 fprintf (sched_dump, ";;\t\t| Pressure costs for ready queue\n");
2492 model_dump_pressure_points (&model_before_pressure);
2493 fprintf (sched_dump, MODEL_BAR);
2494 print_p = true;
2496 cost = model_excess_cost (insns[i], print_p);
2497 if (cost <= 0)
2499 priority = INSN_PRIORITY (insns[i]) - insn_delay (insns[i]) - cost;
2500 priority_base = MAX (priority_base, priority);
2501 cost = 0;
2503 INSN_REG_PRESSURE_EXCESS_COST_CHANGE (insns[i]) = cost;
2505 if (print_p)
2506 fprintf (sched_dump, MODEL_BAR);
2508 /* Use MAX (baseECC, 0) and baseP to calculate ECC for each
2509 instruction. */
2510 for (i = 0; i < count; i++)
2512 cost = INSN_REG_PRESSURE_EXCESS_COST_CHANGE (insns[i]);
2513 priority = INSN_PRIORITY (insns[i]) - insn_delay (insns[i]);
2514 if (cost > 0 && priority > priority_base)
2516 cost += priority_base - priority;
2517 INSN_REG_PRESSURE_EXCESS_COST_CHANGE (insns[i]) = MAX (cost, 0);
2523 /* Enum of rank_for_schedule heuristic decisions. */
2524 enum rfs_decision {
2525 RFS_DEBUG, RFS_LIVE_RANGE_SHRINK1, RFS_LIVE_RANGE_SHRINK2,
2526 RFS_SCHED_GROUP, RFS_PRESSURE_DELAY, RFS_PRESSURE_TICK,
2527 RFS_FEEDS_BACKTRACK_INSN, RFS_PRIORITY, RFS_SPECULATION,
2528 RFS_SCHED_RANK, RFS_LAST_INSN, RFS_PRESSURE_INDEX,
2529 RFS_DEP_COUNT, RFS_TIE, RFS_N };
2531 /* Corresponding strings for print outs. */
2532 static const char *rfs_str[RFS_N] = {
2533 "RFS_DEBUG", "RFS_LIVE_RANGE_SHRINK1", "RFS_LIVE_RANGE_SHRINK2",
2534 "RFS_SCHED_GROUP", "RFS_PRESSURE_DELAY", "RFS_PRESSURE_TICK",
2535 "RFS_FEEDS_BACKTRACK_INSN", "RFS_PRIORITY", "RFS_SPECULATION",
2536 "RFS_SCHED_RANK", "RFS_LAST_INSN", "RFS_PRESSURE_INDEX",
2537 "RFS_DEP_COUNT", "RFS_TIE" };
2539 /* Statistical breakdown of rank_for_schedule decisions. */
2540 typedef struct { unsigned stats[RFS_N]; } rank_for_schedule_stats_t;
2541 static rank_for_schedule_stats_t rank_for_schedule_stats;
2543 static int
2544 rfs_result (enum rfs_decision decision, int result)
2546 ++rank_for_schedule_stats.stats[decision];
2547 return result;
2550 /* Returns a positive value if x is preferred; returns a negative value if
2551 y is preferred. Should never return 0, since that will make the sort
2552 unstable. */
2554 static int
2555 rank_for_schedule (const void *x, const void *y)
2557 rtx_insn *tmp = *(rtx_insn * const *) y;
2558 rtx_insn *tmp2 = *(rtx_insn * const *) x;
2559 int tmp_class, tmp2_class;
2560 int val, priority_val, info_val, diff;
2562 if (MAY_HAVE_DEBUG_INSNS)
2564 /* Schedule debug insns as early as possible. */
2565 if (DEBUG_INSN_P (tmp) && !DEBUG_INSN_P (tmp2))
2566 return rfs_result (RFS_DEBUG, -1);
2567 else if (!DEBUG_INSN_P (tmp) && DEBUG_INSN_P (tmp2))
2568 return rfs_result (RFS_DEBUG, 1);
2569 else if (DEBUG_INSN_P (tmp) && DEBUG_INSN_P (tmp2))
2570 return rfs_result (RFS_DEBUG, INSN_LUID (tmp) - INSN_LUID (tmp2));
2573 if (live_range_shrinkage_p)
2575 /* Don't use SCHED_PRESSURE_MODEL -- it results in much worse
2576 code. */
2577 gcc_assert (sched_pressure == SCHED_PRESSURE_WEIGHTED);
2578 if ((INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp) < 0
2579 || INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp2) < 0)
2580 && (diff = (INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp)
2581 - INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp2))) != 0)
2582 return rfs_result (RFS_LIVE_RANGE_SHRINK1, diff);
2583 /* Sort by INSN_LUID (original insn order), so that we make the
2584 sort stable. This minimizes instruction movement, thus
2585 minimizing sched's effect on debugging and cross-jumping. */
2586 return rfs_result (RFS_LIVE_RANGE_SHRINK2,
2587 INSN_LUID (tmp) - INSN_LUID (tmp2));
2590 /* The insn in a schedule group should be issued first. */
2591 if (flag_sched_group_heuristic &&
2592 SCHED_GROUP_P (tmp) != SCHED_GROUP_P (tmp2))
2593 return rfs_result (RFS_SCHED_GROUP, SCHED_GROUP_P (tmp2) ? 1 : -1);
2595 /* Make sure that the priorities of TMP and TMP2 are initialized. */
2596 gcc_assert (INSN_PRIORITY_KNOWN (tmp) && INSN_PRIORITY_KNOWN (tmp2));
2598 if (sched_pressure != SCHED_PRESSURE_NONE)
2600 /* Prefer insn whose scheduling results in the smallest register
2601 pressure excess. */
2602 if ((diff = (INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp)
2603 + insn_delay (tmp)
2604 - INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp2)
2605 - insn_delay (tmp2))))
2606 return rfs_result (RFS_PRESSURE_DELAY, diff);
2609 if (sched_pressure != SCHED_PRESSURE_NONE
2610 && (INSN_TICK (tmp2) > clock_var || INSN_TICK (tmp) > clock_var)
2611 && INSN_TICK (tmp2) != INSN_TICK (tmp))
2613 diff = INSN_TICK (tmp) - INSN_TICK (tmp2);
2614 return rfs_result (RFS_PRESSURE_TICK, diff);
2617 /* If we are doing backtracking in this schedule, prefer insns that
2618 have forward dependencies with negative cost against an insn that
2619 was already scheduled. */
2620 if (current_sched_info->flags & DO_BACKTRACKING)
2622 priority_val = FEEDS_BACKTRACK_INSN (tmp2) - FEEDS_BACKTRACK_INSN (tmp);
2623 if (priority_val)
2624 return rfs_result (RFS_FEEDS_BACKTRACK_INSN, priority_val);
2627 /* Prefer insn with higher priority. */
2628 priority_val = INSN_PRIORITY (tmp2) - INSN_PRIORITY (tmp);
2630 if (flag_sched_critical_path_heuristic && priority_val)
2631 return rfs_result (RFS_PRIORITY, priority_val);
2633 /* Prefer the speculative insn with the greater dependency weakness. */
2634 if (flag_sched_spec_insn_heuristic && spec_info)
2636 ds_t ds1, ds2;
2637 dw_t dw1, dw2;
2638 int dw;
2640 ds1 = TODO_SPEC (tmp) & SPECULATIVE;
2641 if (ds1)
2642 dw1 = ds_weak (ds1);
2643 else
2644 dw1 = NO_DEP_WEAK;
2646 ds2 = TODO_SPEC (tmp2) & SPECULATIVE;
2647 if (ds2)
2648 dw2 = ds_weak (ds2);
2649 else
2650 dw2 = NO_DEP_WEAK;
2652 dw = dw2 - dw1;
2653 if (dw > (NO_DEP_WEAK / 8) || dw < -(NO_DEP_WEAK / 8))
2654 return rfs_result (RFS_SPECULATION, dw);
2657 info_val = (*current_sched_info->rank) (tmp, tmp2);
2658 if (flag_sched_rank_heuristic && info_val)
2659 return rfs_result (RFS_SCHED_RANK, info_val);
2661 /* Compare insns based on their relation to the last scheduled
2662 non-debug insn. */
2663 if (flag_sched_last_insn_heuristic && last_nondebug_scheduled_insn)
2665 dep_t dep1;
2666 dep_t dep2;
2667 rtx last = last_nondebug_scheduled_insn;
2669 /* Classify the instructions into three classes:
2670 1) Data dependent on last scheduled insn.
2671 2) Anti/Output dependent on last scheduled insn.
2672 3) Independent of last scheduled insn, or has latency of one.
2673 Choose the insn from the highest numbered class if different. */
2674 dep1 = sd_find_dep_between (last, tmp, true);
2676 if (dep1 == NULL || dep_cost (dep1) == 1)
2677 tmp_class = 3;
2678 else if (/* Data dependence. */
2679 DEP_TYPE (dep1) == REG_DEP_TRUE)
2680 tmp_class = 1;
2681 else
2682 tmp_class = 2;
2684 dep2 = sd_find_dep_between (last, tmp2, true);
2686 if (dep2 == NULL || dep_cost (dep2) == 1)
2687 tmp2_class = 3;
2688 else if (/* Data dependence. */
2689 DEP_TYPE (dep2) == REG_DEP_TRUE)
2690 tmp2_class = 1;
2691 else
2692 tmp2_class = 2;
2694 if ((val = tmp2_class - tmp_class))
2695 return rfs_result (RFS_LAST_INSN, val);
2698 /* Prefer instructions that occur earlier in the model schedule. */
2699 if (sched_pressure == SCHED_PRESSURE_MODEL
2700 && INSN_BB (tmp) == target_bb && INSN_BB (tmp2) == target_bb)
2702 diff = model_index (tmp) - model_index (tmp2);
2703 gcc_assert (diff != 0);
2704 return rfs_result (RFS_PRESSURE_INDEX, diff);
2707 /* Prefer the insn which has more later insns that depend on it.
2708 This gives the scheduler more freedom when scheduling later
2709 instructions at the expense of added register pressure. */
2711 val = (dep_list_size (tmp2, SD_LIST_FORW)
2712 - dep_list_size (tmp, SD_LIST_FORW));
2714 if (flag_sched_dep_count_heuristic && val != 0)
2715 return rfs_result (RFS_DEP_COUNT, val);
2717 /* If insns are equally good, sort by INSN_LUID (original insn order),
2718 so that we make the sort stable. This minimizes instruction movement,
2719 thus minimizing sched's effect on debugging and cross-jumping. */
2720 return rfs_result (RFS_TIE, INSN_LUID (tmp) - INSN_LUID (tmp2));
2723 /* Resort the array A in which only element at index N may be out of order. */
2725 HAIFA_INLINE static void
2726 swap_sort (rtx_insn **a, int n)
2728 rtx_insn *insn = a[n - 1];
2729 int i = n - 2;
2731 while (i >= 0 && rank_for_schedule (a + i, &insn) >= 0)
2733 a[i + 1] = a[i];
2734 i -= 1;
2736 a[i + 1] = insn;
2739 /* Add INSN to the insn queue so that it can be executed at least
2740 N_CYCLES after the currently executing insn. Preserve insns
2741 chain for debugging purposes. REASON will be printed in debugging
2742 output. */
2744 HAIFA_INLINE static void
2745 queue_insn (rtx_insn *insn, int n_cycles, const char *reason)
2747 int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
2748 rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]);
2749 int new_tick;
2751 gcc_assert (n_cycles <= max_insn_queue_index);
2752 gcc_assert (!DEBUG_INSN_P (insn));
2754 insn_queue[next_q] = link;
2755 q_size += 1;
2757 if (sched_verbose >= 2)
2759 fprintf (sched_dump, ";;\t\tReady-->Q: insn %s: ",
2760 (*current_sched_info->print_insn) (insn, 0));
2762 fprintf (sched_dump, "queued for %d cycles (%s).\n", n_cycles, reason);
2765 QUEUE_INDEX (insn) = next_q;
2767 if (current_sched_info->flags & DO_BACKTRACKING)
2769 new_tick = clock_var + n_cycles;
2770 if (INSN_TICK (insn) == INVALID_TICK || INSN_TICK (insn) < new_tick)
2771 INSN_TICK (insn) = new_tick;
2773 if (INSN_EXACT_TICK (insn) != INVALID_TICK
2774 && INSN_EXACT_TICK (insn) < clock_var + n_cycles)
2776 must_backtrack = true;
2777 if (sched_verbose >= 2)
2778 fprintf (sched_dump, ";;\t\tcausing a backtrack.\n");
2783 /* Remove INSN from queue. */
2784 static void
2785 queue_remove (rtx_insn *insn)
2787 gcc_assert (QUEUE_INDEX (insn) >= 0);
2788 remove_free_INSN_LIST_elem (insn, &insn_queue[QUEUE_INDEX (insn)]);
2789 q_size--;
2790 QUEUE_INDEX (insn) = QUEUE_NOWHERE;
2793 /* Return a pointer to the bottom of the ready list, i.e. the insn
2794 with the lowest priority. */
2796 rtx_insn **
2797 ready_lastpos (struct ready_list *ready)
2799 gcc_assert (ready->n_ready >= 1);
2800 return ready->vec + ready->first - ready->n_ready + 1;
2803 /* Add an element INSN to the ready list so that it ends up with the
2804 lowest/highest priority depending on FIRST_P. */
2806 HAIFA_INLINE static void
2807 ready_add (struct ready_list *ready, rtx_insn *insn, bool first_p)
2809 if (!first_p)
2811 if (ready->first == ready->n_ready)
2813 memmove (ready->vec + ready->veclen - ready->n_ready,
2814 ready_lastpos (ready),
2815 ready->n_ready * sizeof (rtx));
2816 ready->first = ready->veclen - 1;
2818 ready->vec[ready->first - ready->n_ready] = insn;
2820 else
2822 if (ready->first == ready->veclen - 1)
2824 if (ready->n_ready)
2825 /* ready_lastpos() fails when called with (ready->n_ready == 0). */
2826 memmove (ready->vec + ready->veclen - ready->n_ready - 1,
2827 ready_lastpos (ready),
2828 ready->n_ready * sizeof (rtx));
2829 ready->first = ready->veclen - 2;
2831 ready->vec[++(ready->first)] = insn;
2834 ready->n_ready++;
2835 if (DEBUG_INSN_P (insn))
2836 ready->n_debug++;
2838 gcc_assert (QUEUE_INDEX (insn) != QUEUE_READY);
2839 QUEUE_INDEX (insn) = QUEUE_READY;
2841 if (INSN_EXACT_TICK (insn) != INVALID_TICK
2842 && INSN_EXACT_TICK (insn) < clock_var)
2844 must_backtrack = true;
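
/* Illustrative sketch, not part of the pass: the ready list keeps its
   entries in VEC with the highest-priority insn at index FIRST and
   priority decreasing towards the "bottom" at FIRST - N_READY + 1;
   ready_add above shifts the window with memmove when either end of
   VEC is reached.  A hypothetical top-to-bottom dump:  */

static void
ready_dump_example (struct ready_list *ready)
{
  int i;

  /* Index 0 is the next insn to issue; index n_ready - 1 is the
     least attractive one.  */
  for (i = 0; i < ready->n_ready; i++)
    fprintf (sched_dump, ";;\t%d: insn %d\n",
             i, INSN_UID (ready->vec[ready->first - i]));
}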
2848 /* Remove the element with the highest priority from the ready list and
2849 return it. */
2851 HAIFA_INLINE static rtx_insn *
2852 ready_remove_first (struct ready_list *ready)
2854 rtx_insn *t;
2856 gcc_assert (ready->n_ready);
2857 t = ready->vec[ready->first--];
2858 ready->n_ready--;
2859 if (DEBUG_INSN_P (t))
2860 ready->n_debug--;
2861 /* If the queue becomes empty, reset it. */
2862 if (ready->n_ready == 0)
2863 ready->first = ready->veclen - 1;
2865 gcc_assert (QUEUE_INDEX (t) == QUEUE_READY);
2866 QUEUE_INDEX (t) = QUEUE_NOWHERE;
2868 return t;
2871 /* The following code implements multi-pass scheduling for the first
2872 cycle. In other words, we try to choose the ready insn that
2873 permits starting the maximum number of insns on the same cycle.
2875 /* Return a pointer to the element INDEX from the ready list. INDEX for
2876 insn with the highest priority is 0, and the lowest priority has
2877 N_READY - 1. */
2879 rtx_insn *
2880 ready_element (struct ready_list *ready, int index)
2882 gcc_assert (ready->n_ready && index < ready->n_ready);
2884 return ready->vec[ready->first - index];
2887 /* Remove the element INDEX from the ready list and return it. INDEX
2888 for insn with the highest priority is 0, and the lowest priority
2889 has N_READY - 1. */
2891 HAIFA_INLINE static rtx_insn *
2892 ready_remove (struct ready_list *ready, int index)
2894 rtx_insn *t;
2895 int i;
2897 if (index == 0)
2898 return ready_remove_first (ready);
2899 gcc_assert (ready->n_ready && index < ready->n_ready);
2900 t = ready->vec[ready->first - index];
2901 ready->n_ready--;
2902 if (DEBUG_INSN_P (t))
2903 ready->n_debug--;
2904 for (i = index; i < ready->n_ready; i++)
2905 ready->vec[ready->first - i] = ready->vec[ready->first - i - 1];
2906 QUEUE_INDEX (t) = QUEUE_NOWHERE;
2907 return t;
2910 /* Remove INSN from the ready list. */
2911 static void
2912 ready_remove_insn (rtx insn)
2914 int i;
2916 for (i = 0; i < readyp->n_ready; i++)
2917 if (ready_element (readyp, i) == insn)
2919 ready_remove (readyp, i);
2920 return;
2922 gcc_unreachable ();
2925 /* Calculate the difference of the two statistics sets WAS and NOW.
2926 The result is returned in WAS. */
2927 static void
2928 rank_for_schedule_stats_diff (rank_for_schedule_stats_t *was,
2929 const rank_for_schedule_stats_t *now)
2931 for (int i = 0; i < RFS_N; ++i)
2932 was->stats[i] = now->stats[i] - was->stats[i];
2935 /* Print rank_for_schedule statistics. */
2936 static void
2937 print_rank_for_schedule_stats (const char *prefix,
2938 const rank_for_schedule_stats_t *stats)
2940 for (int i = 0; i < RFS_N; ++i)
2941 if (stats->stats[i])
2942 fprintf (sched_dump, "%s%20s: %u\n", prefix, rfs_str[i], stats->stats[i]);
2945 /* Sort the ready list READY by ascending priority, using the SCHED_SORT
2946 macro. */
2948 void
2949 ready_sort (struct ready_list *ready)
2951 int i;
2952 rtx_insn **first = ready_lastpos (ready);
2954 if (sched_pressure == SCHED_PRESSURE_WEIGHTED)
2956 for (i = 0; i < ready->n_ready; i++)
2957 if (!DEBUG_INSN_P (first[i]))
2958 setup_insn_reg_pressure_info (first[i]);
2960 if (sched_pressure == SCHED_PRESSURE_MODEL
2961 && model_curr_point < model_num_insns)
2962 model_set_excess_costs (first, ready->n_ready);
2964 rank_for_schedule_stats_t stats1;
2965 if (sched_verbose >= 4)
2966 stats1 = rank_for_schedule_stats;
2968 if (ready->n_ready == 2)
2969 swap_sort (first, ready->n_ready);
2970 else if (ready->n_ready > 2)
2971 qsort (first, ready->n_ready, sizeof (rtx), rank_for_schedule);
2973 if (sched_verbose >= 4)
2975 rank_for_schedule_stats_diff (&stats1, &rank_for_schedule_stats);
2976 print_rank_for_schedule_stats (";;\t\t", &stats1);
2980 /* PREV is an insn that is ready to execute. Adjust its priority if that
2981 will help shorten or lengthen register lifetimes as appropriate. Also
2982 provide a hook for the target to tweak itself. */
2984 HAIFA_INLINE static void
2985 adjust_priority (rtx_insn *prev)
2987 /* ??? There used to be code here to try and estimate how an insn
2988 affected register lifetimes, but it did it by looking at REG_DEAD
2989 notes, which we removed in schedule_region. Nor did it try to
2990 take into account register pressure or anything useful like that.
2992 Revisit when we have a machine model to work with and not before. */
2994 if (targetm.sched.adjust_priority)
2995 INSN_PRIORITY (prev) =
2996 targetm.sched.adjust_priority (prev, INSN_PRIORITY (prev));
2999 /* Advance DFA state STATE on one cycle. */
3000 void
3001 advance_state (state_t state)
3003 if (targetm.sched.dfa_pre_advance_cycle)
3004 targetm.sched.dfa_pre_advance_cycle ();
3006 if (targetm.sched.dfa_pre_cycle_insn)
3007 state_transition (state,
3008 targetm.sched.dfa_pre_cycle_insn ());
3010 state_transition (state, NULL);
3012 if (targetm.sched.dfa_post_cycle_insn)
3013 state_transition (state,
3014 targetm.sched.dfa_post_cycle_insn ());
3016 if (targetm.sched.dfa_post_advance_cycle)
3017 targetm.sched.dfa_post_advance_cycle ();
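
/* Illustrative usage, not part of the pass: a stall of N cycles is
   simply N state advances, which is the equivalence queue_insn relies
   on when it computes an insn's new tick as clock_var + n_cycles.  */

static void
advance_state_n_example (state_t state, int n)
{
  while (n-- > 0)
    advance_state (state);
}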
3020 /* Advance time on one cycle. */
3021 HAIFA_INLINE static void
3022 advance_one_cycle (void)
3024 advance_state (curr_state);
3025 if (sched_verbose >= 4)
3026 fprintf (sched_dump, ";;\tAdvance the current state.\n");
3029 /* Update register pressure after scheduling INSN. */
3030 static void
3031 update_register_pressure (rtx_insn *insn)
3033 struct reg_use_data *use;
3034 struct reg_set_data *set;
3036 gcc_checking_assert (!DEBUG_INSN_P (insn));
3038 for (use = INSN_REG_USE_LIST (insn); use != NULL; use = use->next_insn_use)
3039 if (dying_use_p (use))
3040 mark_regno_birth_or_death (curr_reg_live, curr_reg_pressure,
3041 use->regno, false);
3042 for (set = INSN_REG_SET_LIST (insn); set != NULL; set = set->next_insn_set)
3043 mark_regno_birth_or_death (curr_reg_live, curr_reg_pressure,
3044 set->regno, true);
3047 /* Set up or update (if UPDATE_P) max register pressure (see its
3048 meaning in sched-int.h::_haifa_insn_data) for all current BB insns
3049 after insn AFTER. */
3050 static void
3051 setup_insn_max_reg_pressure (rtx after, bool update_p)
3053 int i, p;
3054 bool eq_p;
3055 rtx_insn *insn;
3056 static int max_reg_pressure[N_REG_CLASSES];
3058 save_reg_pressure ();
3059 for (i = 0; i < ira_pressure_classes_num; i++)
3060 max_reg_pressure[ira_pressure_classes[i]]
3061 = curr_reg_pressure[ira_pressure_classes[i]];
3062 for (insn = NEXT_INSN (after);
3063 insn != NULL_RTX && ! BARRIER_P (insn)
3064 && BLOCK_FOR_INSN (insn) == BLOCK_FOR_INSN (after);
3065 insn = NEXT_INSN (insn))
3066 if (NONDEBUG_INSN_P (insn))
3068 eq_p = true;
3069 for (i = 0; i < ira_pressure_classes_num; i++)
3071 p = max_reg_pressure[ira_pressure_classes[i]];
3072 if (INSN_MAX_REG_PRESSURE (insn)[i] != p)
3074 eq_p = false;
3075 INSN_MAX_REG_PRESSURE (insn)[i]
3076 = max_reg_pressure[ira_pressure_classes[i]];
3079 if (update_p && eq_p)
3080 break;
3081 update_register_pressure (insn);
3082 for (i = 0; i < ira_pressure_classes_num; i++)
3083 if (max_reg_pressure[ira_pressure_classes[i]]
3084 < curr_reg_pressure[ira_pressure_classes[i]])
3085 max_reg_pressure[ira_pressure_classes[i]]
3086 = curr_reg_pressure[ira_pressure_classes[i]];
3088 restore_reg_pressure ();
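
/* Illustrative sketch, not part of the pass: per pressure class, the
   walk above amounts to recording a running maximum while replaying
   each insn's effect on the pressure.  Simplified to one class and
   ignoring the UPDATE_P early exit:  */

static void
running_max_pressure_example (int *pressure_after, int *insn_max, int n,
                              int start_pressure)
{
  int i, running = start_pressure;

  for (i = 0; i < n; i++)
    {
      insn_max[i] = running;	/* Max pressure seen before insn I.  */
      running = MAX (running, pressure_after[i]);
    }
}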
3091 /* Update the current register pressure after scheduling INSN. Update
3092 also max register pressure for unscheduled insns of the current
3093 BB. */
3094 static void
3095 update_reg_and_insn_max_reg_pressure (rtx_insn *insn)
3097 int i;
3098 int before[N_REG_CLASSES];
3100 for (i = 0; i < ira_pressure_classes_num; i++)
3101 before[i] = curr_reg_pressure[ira_pressure_classes[i]];
3102 update_register_pressure (insn);
3103 for (i = 0; i < ira_pressure_classes_num; i++)
3104 if (curr_reg_pressure[ira_pressure_classes[i]] != before[i])
3105 break;
3106 if (i < ira_pressure_classes_num)
3107 setup_insn_max_reg_pressure (insn, true);
3110 /* Set up register pressure at the beginning of basic block BB whose
3111 insns start after insn AFTER. Also set up the max register pressure
3112 for all insns of the basic block. */
3113 void
3114 sched_setup_bb_reg_pressure_info (basic_block bb, rtx after)
3116 gcc_assert (sched_pressure == SCHED_PRESSURE_WEIGHTED);
3117 initiate_bb_reg_pressure_info (bb);
3118 setup_insn_max_reg_pressure (after, false);
3121 /* If doing predication while scheduling, verify whether INSN, which
3122 has just been scheduled, clobbers the conditions of any
3123 instructions that must be predicated in order to break their
3124 dependencies. If so, remove them from the queues so that they will
3125 only be scheduled once their control dependency is resolved. */
3127 static void
3128 check_clobbered_conditions (rtx insn)
3130 HARD_REG_SET t;
3131 int i;
3133 if ((current_sched_info->flags & DO_PREDICATION) == 0)
3134 return;
3136 find_all_hard_reg_sets (insn, &t, true);
3138 restart:
3139 for (i = 0; i < ready.n_ready; i++)
3141 rtx_insn *x = ready_element (&ready, i);
3142 if (TODO_SPEC (x) == DEP_CONTROL && cond_clobbered_p (x, t))
3144 ready_remove_insn (x);
3145 goto restart;
3148 for (i = 0; i <= max_insn_queue_index; i++)
3150 rtx link;
3151 int q = NEXT_Q_AFTER (q_ptr, i);
3153 restart_queue:
3154 for (link = insn_queue[q]; link; link = XEXP (link, 1))
3156 rtx_insn *x = as_a <rtx_insn *> (XEXP (link, 0));
3157 if (TODO_SPEC (x) == DEP_CONTROL && cond_clobbered_p (x, t))
3159 queue_remove (x);
3160 goto restart_queue;
3166 /* Return (in order):
3168 - positive if INSN adversely affects the pressure on one
3169 register class
3171 - negative if INSN reduces the pressure on one register class
3173 - 0 if INSN doesn't affect the pressure on any register class. */
3175 static int
3176 model_classify_pressure (struct model_insn_info *insn)
3178 struct reg_pressure_data *reg_pressure;
3179 int death[N_REG_CLASSES];
3180 int pci, cl, sum;
3182 calculate_reg_deaths (insn->insn, death);
3183 reg_pressure = INSN_REG_PRESSURE (insn->insn);
3184 sum = 0;
3185 for (pci = 0; pci < ira_pressure_classes_num; pci++)
3187 cl = ira_pressure_classes[pci];
3188 if (death[cl] < reg_pressure[pci].set_increase)
3189 return 1;
3190 sum += reg_pressure[pci].set_increase - death[cl];
3192 return sum;
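
/* Worked example (illustration only): if INSN increases the pressure
   on one class by 2 while killing only 1 register of that class, its
   delta there is +1 and the function returns 1 straight away.  If
   instead every class's delta is <= 0, the result is the (non-positive)
   sum of the deltas, so a more negative value means a bigger overall
   reduction in pressure.  */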
3195 /* Return true if INSN1 should come before INSN2 in the model schedule. */
3197 static int
3198 model_order_p (struct model_insn_info *insn1, struct model_insn_info *insn2)
3200 unsigned int height1, height2;
3201 unsigned int priority1, priority2;
3203 /* Prefer instructions with a higher model priority. */
3204 if (insn1->model_priority != insn2->model_priority)
3205 return insn1->model_priority > insn2->model_priority;
3207 /* Combine the length of the longest path of satisfied true dependencies
3208 that leads to each instruction (depth) with the length of the longest
3209 path of any dependencies that leads from the instruction (alap).
3210 Prefer instructions with the greatest combined length. If the combined
3211 lengths are equal, prefer instructions with the greatest depth.
3213 The idea is that, if we have a set S of "equal" instructions that each
3214 have ALAP value X, and we pick one such instruction I, any true-dependent
3215 successors of I that have ALAP value X - 1 should be preferred over S.
3216 This encourages the schedule to be "narrow" rather than "wide".
3217 However, if I is a low-priority instruction that we decided to
3218 schedule because of its model_classify_pressure, and if there
3219 is a set of higher-priority instructions T, the aforementioned
3220 successors of I should not have the edge over T. */
3221 height1 = insn1->depth + insn1->alap;
3222 height2 = insn2->depth + insn2->alap;
3223 if (height1 != height2)
3224 return height1 > height2;
3225 if (insn1->depth != insn2->depth)
3226 return insn1->depth > insn2->depth;
3228 /* We have no real preference between INSN1 and INSN2 as far as attempts
3229 to reduce pressure go. Prefer instructions with higher priorities. */
3230 priority1 = INSN_PRIORITY (insn1->insn);
3231 priority2 = INSN_PRIORITY (insn2->insn);
3232 if (priority1 != priority2)
3233 return priority1 > priority2;
3235 /* Use the original rtl sequence as a tie-breaker. */
3236 return insn1 < insn2;
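
/* Worked example (illustration only): if INSN1 has depth 3 and alap 2
   while INSN2 has depth 1 and alap 4, both have a combined length of 5,
   so the depth comparison prefers INSN1: its true dependencies are
   already satisfied, which keeps the schedule "narrow".  */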
3239 /* Add INSN to the model worklist immediately after PREV. Add it to the
3240 beginning of the list if PREV is null. */
3242 static void
3243 model_add_to_worklist_at (struct model_insn_info *insn,
3244 struct model_insn_info *prev)
3246 gcc_assert (QUEUE_INDEX (insn->insn) == QUEUE_NOWHERE);
3247 QUEUE_INDEX (insn->insn) = QUEUE_READY;
3249 insn->prev = prev;
3250 if (prev)
3252 insn->next = prev->next;
3253 prev->next = insn;
3255 else
3257 insn->next = model_worklist;
3258 model_worklist = insn;
3260 if (insn->next)
3261 insn->next->prev = insn;
3264 /* Remove INSN from the model worklist. */
3266 static void
3267 model_remove_from_worklist (struct model_insn_info *insn)
3269 gcc_assert (QUEUE_INDEX (insn->insn) == QUEUE_READY);
3270 QUEUE_INDEX (insn->insn) = QUEUE_NOWHERE;
3272 if (insn->prev)
3273 insn->prev->next = insn->next;
3274 else
3275 model_worklist = insn->next;
3276 if (insn->next)
3277 insn->next->prev = insn->prev;
3280 /* Add INSN to the model worklist. Start looking for a suitable position
3281 between neighbors PREV and NEXT, testing at most MAX_SCHED_READY_INSNS
3282 insns either side. A null PREV indicates the beginning of the list and
3283 a null NEXT indicates the end. */
3285 static void
3286 model_add_to_worklist (struct model_insn_info *insn,
3287 struct model_insn_info *prev,
3288 struct model_insn_info *next)
3290 int count;
3292 count = MAX_SCHED_READY_INSNS;
3293 if (count > 0 && prev && model_order_p (insn, prev))
3296 count--;
3297 prev = prev->prev;
3299 while (count > 0 && prev && model_order_p (insn, prev));
3300 else
3301 while (count > 0 && next && model_order_p (next, insn))
3303 count--;
3304 prev = next;
3305 next = next->next;
3307 model_add_to_worklist_at (insn, prev);
3310 /* INSN may now have a higher priority (in the model_order_p sense)
3311 than before. Move it up the worklist if necessary. */
3313 static void
3314 model_promote_insn (struct model_insn_info *insn)
3316 struct model_insn_info *prev;
3317 int count;
3319 prev = insn->prev;
3320 count = MAX_SCHED_READY_INSNS;
3321 while (count > 0 && prev && model_order_p (insn, prev))
3323 count--;
3324 prev = prev->prev;
3326 if (prev != insn->prev)
3328 model_remove_from_worklist (insn);
3329 model_add_to_worklist_at (insn, prev);
3333 /* Add INSN to the end of the model schedule. */
3335 static void
3336 model_add_to_schedule (rtx_insn *insn)
3338 unsigned int point;
3340 gcc_assert (QUEUE_INDEX (insn) == QUEUE_NOWHERE);
3341 QUEUE_INDEX (insn) = QUEUE_SCHEDULED;
3343 point = model_schedule.length ();
3344 model_schedule.quick_push (insn);
3345 INSN_MODEL_INDEX (insn) = point + 1;
3348 /* Analyze the instructions that are to be scheduled, setting up
3349 MODEL_INSN_INFO (...) and model_num_insns accordingly. Add ready
3350 instructions to model_worklist. */
3352 static void
3353 model_analyze_insns (void)
3355 rtx_insn *start, *end, *iter;
3356 sd_iterator_def sd_it;
3357 dep_t dep;
3358 struct model_insn_info *insn, *con;
3360 model_num_insns = 0;
3361 start = PREV_INSN (current_sched_info->next_tail);
3362 end = current_sched_info->prev_head;
3363 for (iter = start; iter != end; iter = PREV_INSN (iter))
3364 if (NONDEBUG_INSN_P (iter))
3366 insn = MODEL_INSN_INFO (iter);
3367 insn->insn = iter;
3368 FOR_EACH_DEP (iter, SD_LIST_FORW, sd_it, dep)
3370 con = MODEL_INSN_INFO (DEP_CON (dep));
3371 if (con->insn && insn->alap < con->alap + 1)
3372 insn->alap = con->alap + 1;
3375 insn->old_queue = QUEUE_INDEX (iter);
3376 QUEUE_INDEX (iter) = QUEUE_NOWHERE;
3378 insn->unscheduled_preds = dep_list_size (iter, SD_LIST_HARD_BACK);
3379 if (insn->unscheduled_preds == 0)
3380 model_add_to_worklist (insn, NULL, model_worklist);
3382 model_num_insns++;
3386 /* The global state describes the register pressure at the start of the
3387 model schedule. Initialize GROUP accordingly. */
3389 static void
3390 model_init_pressure_group (struct model_pressure_group *group)
3392 int pci, cl;
3394 for (pci = 0; pci < ira_pressure_classes_num; pci++)
3396 cl = ira_pressure_classes[pci];
3397 group->limits[pci].pressure = curr_reg_pressure[cl];
3398 group->limits[pci].point = 0;
3400 /* Use index model_num_insns to record the state after the last
3401 instruction in the model schedule. */
3402 group->model = XNEWVEC (struct model_pressure_data,
3403 (model_num_insns + 1) * ira_pressure_classes_num);
3406 /* Record that MODEL_REF_PRESSURE (GROUP, POINT, PCI) is PRESSURE.
3407 Update the maximum pressure for the whole schedule. */
3409 static void
3410 model_record_pressure (struct model_pressure_group *group,
3411 int point, int pci, int pressure)
3413 MODEL_REF_PRESSURE (group, point, pci) = pressure;
3414 if (group->limits[pci].pressure < pressure)
3416 group->limits[pci].pressure = pressure;
3417 group->limits[pci].point = point;
3421 /* INSN has just been added to the end of the model schedule. Record its
3422 register-pressure information. */
3424 static void
3425 model_record_pressures (struct model_insn_info *insn)
3427 struct reg_pressure_data *reg_pressure;
3428 int point, pci, cl, delta;
3429 int death[N_REG_CLASSES];
3431 point = model_index (insn->insn);
3432 if (sched_verbose >= 2)
3434 if (point == 0)
3436 fprintf (sched_dump, "\n;;\tModel schedule:\n;;\n");
3437 fprintf (sched_dump, ";;\t| idx insn | mpri hght dpth prio |\n");
3439 fprintf (sched_dump, ";;\t| %3d %4d | %4d %4d %4d %4d | %-30s ",
3440 point, INSN_UID (insn->insn), insn->model_priority,
3441 insn->depth + insn->alap, insn->depth,
3442 INSN_PRIORITY (insn->insn),
3443 str_pattern_slim (PATTERN (insn->insn)));
3445 calculate_reg_deaths (insn->insn, death);
3446 reg_pressure = INSN_REG_PRESSURE (insn->insn);
3447 for (pci = 0; pci < ira_pressure_classes_num; pci++)
3449 cl = ira_pressure_classes[pci];
3450 delta = reg_pressure[pci].set_increase - death[cl];
3451 if (sched_verbose >= 2)
3452 fprintf (sched_dump, " %s:[%d,%+d]", reg_class_names[cl],
3453 curr_reg_pressure[cl], delta);
3454 model_record_pressure (&model_before_pressure, point, pci,
3455 curr_reg_pressure[cl]);
3457 if (sched_verbose >= 2)
3458 fprintf (sched_dump, "\n");
3461 /* All instructions have been added to the model schedule. Record the
3462 final register pressure in GROUP and set up all MODEL_MAX_PRESSUREs. */
3464 static void
3465 model_record_final_pressures (struct model_pressure_group *group)
3467 int point, pci, max_pressure, ref_pressure, cl;
3469 for (pci = 0; pci < ira_pressure_classes_num; pci++)
3471 /* Record the final pressure for this class. */
3472 cl = ira_pressure_classes[pci];
3473 point = model_num_insns;
3474 ref_pressure = curr_reg_pressure[cl];
3475 model_record_pressure (group, point, pci, ref_pressure);
3477 /* Record the original maximum pressure. */
3478 group->limits[pci].orig_pressure = group->limits[pci].pressure;
3480 /* Update the MODEL_MAX_PRESSURE for every point of the schedule. */
3481 max_pressure = ref_pressure;
3482 MODEL_MAX_PRESSURE (group, point, pci) = max_pressure;
3483 while (point > 0)
3485 point--;
3486 ref_pressure = MODEL_REF_PRESSURE (group, point, pci);
3487 max_pressure = MAX (max_pressure, ref_pressure);
3488 MODEL_MAX_PRESSURE (group, point, pci) = max_pressure;
3493 /* Update all successors of INSN, given that INSN has just been scheduled. */
3495 static void
3496 model_add_successors_to_worklist (struct model_insn_info *insn)
3498 sd_iterator_def sd_it;
3499 struct model_insn_info *con;
3500 dep_t dep;
3502 FOR_EACH_DEP (insn->insn, SD_LIST_FORW, sd_it, dep)
3504 con = MODEL_INSN_INFO (DEP_CON (dep));
3505 /* Ignore debug instructions, and instructions from other blocks. */
3506 if (con->insn)
3508 con->unscheduled_preds--;
3510 /* Update the depth field of each true-dependent successor.
3511 Increasing the depth gives them a higher priority than
3512 before. */
3513 if (DEP_TYPE (dep) == REG_DEP_TRUE && con->depth < insn->depth + 1)
3515 con->depth = insn->depth + 1;
3516 if (QUEUE_INDEX (con->insn) == QUEUE_READY)
3517 model_promote_insn (con);
3520 /* If this is a true dependency, or if there are no remaining
3521 dependencies for CON (meaning that CON only had non-true
3522 dependencies), make sure that CON is on the worklist.
3523 We don't bother otherwise because it would tend to fill the
3524 worklist with a lot of low-priority instructions that are not
3525 yet ready to issue. */
3526 if ((con->depth > 0 || con->unscheduled_preds == 0)
3527 && QUEUE_INDEX (con->insn) == QUEUE_NOWHERE)
3528 model_add_to_worklist (con, insn, insn->next);
3533 /* Give INSN a higher priority than any current instruction, then give
3534 unscheduled predecessors of INSN a higher priority still. If any of
3535 those predecessors are not on the model worklist, do the same for its
3536 predecessors, and so on. */
3538 static void
3539 model_promote_predecessors (struct model_insn_info *insn)
3541 struct model_insn_info *pro, *first;
3542 sd_iterator_def sd_it;
3543 dep_t dep;
3545 if (sched_verbose >= 7)
3546 fprintf (sched_dump, ";;\t+--- priority of %d = %d, priority of",
3547 INSN_UID (insn->insn), model_next_priority);
3548 insn->model_priority = model_next_priority++;
3549 model_remove_from_worklist (insn);
3550 model_add_to_worklist_at (insn, NULL);
3552 first = NULL;
3553 for (;;)
3555 FOR_EACH_DEP (insn->insn, SD_LIST_HARD_BACK, sd_it, dep)
3557 pro = MODEL_INSN_INFO (DEP_PRO (dep));
3558 /* The first test is to ignore debug instructions, and instructions
3559 from other blocks. */
3560 if (pro->insn
3561 && pro->model_priority != model_next_priority
3562 && QUEUE_INDEX (pro->insn) != QUEUE_SCHEDULED)
3564 pro->model_priority = model_next_priority;
3565 if (sched_verbose >= 7)
3566 fprintf (sched_dump, " %d", INSN_UID (pro->insn));
3567 if (QUEUE_INDEX (pro->insn) == QUEUE_READY)
3569 /* PRO is already in the worklist, but it now has
3570 a higher priority than before. Move it to the
3571 appropriate place. */
3572 model_remove_from_worklist (pro);
3573 model_add_to_worklist (pro, NULL, model_worklist);
3575 else
3577 /* PRO isn't in the worklist. Recursively process
3578 its predecessors until we find one that is. */
3579 pro->next = first;
3580 first = pro;
3584 if (!first)
3585 break;
3586 insn = first;
3587 first = insn->next;
3589 if (sched_verbose >= 7)
3590 fprintf (sched_dump, " = %d\n", model_next_priority);
3591 model_next_priority++;
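/* A hypothetical run of the function above: promoting insn I first
   assigns it priority P = model_next_priority, and the loop then gives
   I's unscheduled hard-dependency predecessors (and, transitively,
   theirs) priority P + 1, so the whole chain that feeds I sorts ahead
   of everything already on the worklist.  */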
3594 /* Pick one instruction from model_worklist and process it. */
3596 static void
3597 model_choose_insn (void)
3599 struct model_insn_info *insn, *fallback;
3600 int count;
3602 if (sched_verbose >= 7)
3604 fprintf (sched_dump, ";;\t+--- worklist:\n");
3605 insn = model_worklist;
3606 count = MAX_SCHED_READY_INSNS;
3607 while (count > 0 && insn)
3609 fprintf (sched_dump, ";;\t+--- %d [%d, %d, %d, %d]\n",
3610 INSN_UID (insn->insn), insn->model_priority,
3611 insn->depth + insn->alap, insn->depth,
3612 INSN_PRIORITY (insn->insn));
3613 count--;
3614 insn = insn->next;
3618 /* Look for a ready instruction whose model_classify_pressure is zero
3619 or negative, picking the highest-priority one. Adding such an
3620 instruction to the schedule now should do no harm, and may actually
3621 do some good.
3623 Failing that, see whether there is an instruction with the highest
3624 extant model_priority that is not yet ready, but which would reduce
3625 pressure if it became ready. This is designed to catch cases like:
3627 (set (mem (reg R1)) (reg R2))
3629 where the instruction is the last remaining use of R1 and where the
3630 value of R2 is not yet available (or vice versa). The death of R1
3631 means that this instruction already reduces pressure. It is of
3632 course possible that the computation of R2 involves other registers
3633 that are hard to kill, but such cases are rare enough for this
3634 heuristic to be a win in general.
3636 Failing that, just pick the highest-priority instruction in the
3637 worklist. */
3638 count = MAX_SCHED_READY_INSNS;
3639 insn = model_worklist;
3640 fallback = 0;
3641 for (;;)
3643 if (count == 0 || !insn)
3645 insn = fallback ? fallback : model_worklist;
3646 break;
3648 if (insn->unscheduled_preds)
3650 if (model_worklist->model_priority == insn->model_priority
3651 && !fallback
3652 && model_classify_pressure (insn) < 0)
3653 fallback = insn;
3655 else
3657 if (model_classify_pressure (insn) <= 0)
3658 break;
3660 count--;
3661 insn = insn->next;
3664 if (sched_verbose >= 7 && insn != model_worklist)
3666 if (insn->unscheduled_preds)
3667 fprintf (sched_dump, ";;\t+--- promoting insn %d, with dependencies\n",
3668 INSN_UID (insn->insn));
3669 else
3670 fprintf (sched_dump, ";;\t+--- promoting insn %d, which is ready\n",
3671 INSN_UID (insn->insn));
3673 if (insn->unscheduled_preds)
3674 /* INSN isn't yet ready to issue. Give all its predecessors the
3675 highest priority. */
3676 model_promote_predecessors (insn);
3677 else
3679 /* INSN is ready. Add it to the end of model_schedule and
3680 process its successors. */
3681 model_add_successors_to_worklist (insn);
3682 model_remove_from_worklist (insn);
3683 model_add_to_schedule (insn->insn);
3684 model_record_pressures (insn);
3685 update_register_pressure (insn->insn);
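/* A hypothetical pass over the loop above: if no ready entry within the
   first MAX_SCHED_READY_INSNS worklist entries reduces pressure, but a
   same-priority entry that still has unscheduled predecessors would
   (model_classify_pressure < 0), that entry is remembered as FALLBACK
   and its predecessors are promoted so that it can issue soon.
   Failing even that, the head of the worklist is used.  */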
3689 /* Restore all QUEUE_INDEXs to the values that they had before
3690 model_start_schedule was called. */
3692 static void
3693 model_reset_queue_indices (void)
3695 unsigned int i;
3696 rtx_insn *insn;
3698 FOR_EACH_VEC_ELT (model_schedule, i, insn)
3699 QUEUE_INDEX (insn) = MODEL_INSN_INFO (insn)->old_queue;
3702 /* We have calculated the model schedule and spill costs. Print a summary
3703 to sched_dump. */
3705 static void
3706 model_dump_pressure_summary (void)
3708 int pci, cl;
3710 fprintf (sched_dump, ";; Pressure summary:");
3711 for (pci = 0; pci < ira_pressure_classes_num; pci++)
3713 cl = ira_pressure_classes[pci];
3714 fprintf (sched_dump, " %s:%d", reg_class_names[cl],
3715 model_before_pressure.limits[pci].pressure);
3717 fprintf (sched_dump, "\n\n");
3720 /* Initialize the SCHED_PRESSURE_MODEL information for the current
3721 scheduling region. */
3723 static void
3724 model_start_schedule (void)
3726 basic_block bb;
3728 model_next_priority = 1;
3729 model_schedule.create (sched_max_luid);
3730 model_insns = XCNEWVEC (struct model_insn_info, sched_max_luid);
3732 bb = BLOCK_FOR_INSN (NEXT_INSN (current_sched_info->prev_head));
3733 initiate_reg_pressure_info (df_get_live_in (bb));
3735 model_analyze_insns ();
3736 model_init_pressure_group (&model_before_pressure);
3737 while (model_worklist)
3738 model_choose_insn ();
3739 gcc_assert (model_num_insns == (int) model_schedule.length ());
3740 if (sched_verbose >= 2)
3741 fprintf (sched_dump, "\n");
3743 model_record_final_pressures (&model_before_pressure);
3744 model_reset_queue_indices ();
3746 XDELETEVEC (model_insns);
3748 model_curr_point = 0;
3749 initiate_reg_pressure_info (df_get_live_in (bb));
3750 if (sched_verbose >= 1)
3751 model_dump_pressure_summary ();
3754 /* Free the information associated with GROUP. */
3756 static void
3757 model_finalize_pressure_group (struct model_pressure_group *group)
3759 XDELETEVEC (group->model);
3762 /* Free the information created by model_start_schedule. */
3764 static void
3765 model_end_schedule (void)
3767 model_finalize_pressure_group (&model_before_pressure);
3768 model_schedule.release ();
3771 /* A structure that holds local state for the loop in schedule_block. */
3772 struct sched_block_state
3774 /* True if no real insns have been scheduled in the current cycle. */
3775 bool first_cycle_insn_p;
3776 /* True if a shadow insn has been scheduled in the current cycle, which
3777 means that no more normal insns can be issued. */
3778 bool shadows_only_p;
3779 /* True if we're winding down a modulo schedule, which means that we only
3780 issue insns with INSN_EXACT_TICK set. */
3781 bool modulo_epilogue;
3782 /* Initialized with the machine's issue rate every cycle, and updated
3783 by calls to the variable_issue hook. */
3784 int can_issue_more;
3787 /* INSN is the "currently executing insn". Launch each insn which was
3788 waiting on INSN. The ready list holds the insns that are ready to
3789 fire, and CLOCK_VAR is the current cycle. The function returns the
3790 necessary cycle advance after issuing the insn (it is nonzero for
3791 insns in a schedule group). */
3793 static int
3794 schedule_insn (rtx_insn *insn)
3796 sd_iterator_def sd_it;
3797 dep_t dep;
3798 int i;
3799 int advance = 0;
3801 if (sched_verbose >= 1)
3803 struct reg_pressure_data *pressure_info;
3804 fprintf (sched_dump, ";;\t%3i--> %s %-40s:",
3805 clock_var, (*current_sched_info->print_insn) (insn, 1),
3806 str_pattern_slim (PATTERN (insn)));
3808 if (recog_memoized (insn) < 0)
3809 fprintf (sched_dump, "nothing");
3810 else
3811 print_reservation (sched_dump, insn);
3812 pressure_info = INSN_REG_PRESSURE (insn);
3813 if (pressure_info != NULL)
3815 fputc (':', sched_dump);
3816 for (i = 0; i < ira_pressure_classes_num; i++)
3817 fprintf (sched_dump, "%s%s%+d(%d)",
3818 scheduled_insns.length () > 1
3819 && INSN_LUID (insn)
3820 < INSN_LUID (scheduled_insns[scheduled_insns.length () - 2]) ? "@" : "",
3821 reg_class_names[ira_pressure_classes[i]],
3822 pressure_info[i].set_increase, pressure_info[i].change);
3824 if (sched_pressure == SCHED_PRESSURE_MODEL
3825 && model_curr_point < model_num_insns
3826 && model_index (insn) == model_curr_point)
3827 fprintf (sched_dump, ":model %d", model_curr_point);
3828 fputc ('\n', sched_dump);
3831 if (sched_pressure == SCHED_PRESSURE_WEIGHTED && !DEBUG_INSN_P (insn))
3832 update_reg_and_insn_max_reg_pressure (insn);
3834 /* The instruction being scheduled should have all its dependencies
3835 resolved and should have been removed from the ready list. */
3836 gcc_assert (sd_lists_empty_p (insn, SD_LIST_HARD_BACK));
3838 /* Reset debug insns invalidated by moving this insn. */
3839 if (MAY_HAVE_DEBUG_INSNS && !DEBUG_INSN_P (insn))
3840 for (sd_it = sd_iterator_start (insn, SD_LIST_BACK);
3841 sd_iterator_cond (&sd_it, &dep);)
3843 rtx_insn *dbg = DEP_PRO (dep);
3844 struct reg_use_data *use, *next;
3846 if (DEP_STATUS (dep) & DEP_CANCELLED)
3848 sd_iterator_next (&sd_it);
3849 continue;
3852 gcc_assert (DEBUG_INSN_P (dbg));
3854 if (sched_verbose >= 6)
3855 fprintf (sched_dump, ";;\t\tresetting: debug insn %d\n",
3856 INSN_UID (dbg));
3858 /* ??? Rather than resetting the debug insn, we might be able
3859 to emit a debug temp before the just-scheduled insn, but
3860 this would involve checking that the expression at the
3861 point of the debug insn is equivalent to the expression
3862 before the just-scheduled insn. They might not be: the
3863 expression in the debug insn may depend on other insns not
3864 yet scheduled that set MEMs, REGs or even other debug
3865 insns. It's not clear that attempting to preserve debug
3866 information in these cases is worth the effort, given how
3867 uncommon these resets are and the likelihood that the debug
3868 temps introduced won't survive the schedule change. */
3869 INSN_VAR_LOCATION_LOC (dbg) = gen_rtx_UNKNOWN_VAR_LOC ();
3870 df_insn_rescan (dbg);
3872 /* Unknown location doesn't use any registers. */
3873 for (use = INSN_REG_USE_LIST (dbg); use != NULL; use = next)
3875 struct reg_use_data *prev = use;
3877 /* Remove use from the cyclic next_regno_use chain first. */
3878 while (prev->next_regno_use != use)
3879 prev = prev->next_regno_use;
3880 prev->next_regno_use = use->next_regno_use;
3881 next = use->next_insn_use;
3882 free (use);
3884 INSN_REG_USE_LIST (dbg) = NULL;
3886 /* We delete rather than resolve these deps, otherwise we
3887 crash in sched_free_deps(), because forward deps are
3888 expected to be released before backward deps. */
3889 sd_delete_dep (sd_it);
3892 gcc_assert (QUEUE_INDEX (insn) == QUEUE_NOWHERE);
3893 QUEUE_INDEX (insn) = QUEUE_SCHEDULED;
3895 if (sched_pressure == SCHED_PRESSURE_MODEL
3896 && model_curr_point < model_num_insns
3897 && NONDEBUG_INSN_P (insn))
3899 if (model_index (insn) == model_curr_point)
3901 model_curr_point++;
3902 while (model_curr_point < model_num_insns
3903 && (QUEUE_INDEX (MODEL_INSN (model_curr_point))
3904 == QUEUE_SCHEDULED));
3905 else
3906 model_recompute (insn);
3907 model_update_limit_points ();
3908 update_register_pressure (insn);
3909 if (sched_verbose >= 2)
3910 print_curr_reg_pressure ();
3913 gcc_assert (INSN_TICK (insn) >= MIN_TICK);
3914 if (INSN_TICK (insn) > clock_var)
3915 /* INSN has been prematurely moved from the queue to the ready list.
3916 This is possible only if the following flag is set. */
3917 gcc_assert (flag_sched_stalled_insns);
3919 /* ??? Probably, if INSN is scheduled prematurely, we should leave
3920 INSN_TICK untouched. This is a machine-dependent issue, actually. */
3921 INSN_TICK (insn) = clock_var;
3923 check_clobbered_conditions (insn);
3925 /* Update dependent instructions. First, see if by scheduling this insn
3926 now we broke a dependence in a way that requires us to change another
3927 insn. */
3928 for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
3929 sd_iterator_cond (&sd_it, &dep); sd_iterator_next (&sd_it))
3931 struct dep_replacement *desc = DEP_REPLACE (dep);
3932 rtx_insn *pro = DEP_PRO (dep);
3933 if (QUEUE_INDEX (pro) != QUEUE_SCHEDULED
3934 && desc != NULL && desc->insn == pro)
3935 apply_replacement (dep, false);
3938 /* Go through and resolve forward dependencies. */
3939 for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
3940 sd_iterator_cond (&sd_it, &dep);)
3942 rtx_insn *next = DEP_CON (dep);
3943 bool cancelled = (DEP_STATUS (dep) & DEP_CANCELLED) != 0;
3945 /* Resolve the dependence between INSN and NEXT.
3946 sd_resolve_dep () moves current dep to another list thus
3947 advancing the iterator. */
3948 sd_resolve_dep (sd_it);
3950 if (cancelled)
3952 if (must_restore_pattern_p (next, dep))
3953 restore_pattern (dep, false);
3954 continue;
3957 /* Don't bother trying to mark next as ready if insn is a debug
3958 insn. If insn is the last hard dependency, it will have
3959 already been discounted. */
3960 if (DEBUG_INSN_P (insn) && !DEBUG_INSN_P (next))
3961 continue;
3963 if (!IS_SPECULATION_BRANCHY_CHECK_P (insn))
3965 int effective_cost;
3967 effective_cost = try_ready (next);
3969 if (effective_cost >= 0
3970 && SCHED_GROUP_P (next)
3971 && advance < effective_cost)
3972 advance = effective_cost;
3974 else
3975 /* A check insn always has only one forward dependence (to the first
3976 insn in the recovery block), so this will be executed only once. */
3978 gcc_assert (sd_lists_empty_p (insn, SD_LIST_FORW));
3979 fix_recovery_deps (RECOVERY_BLOCK (insn));
3983 /* Annotate the instruction with issue information -- TImode
3984 indicates that the instruction is expected not to be able
3985 to issue on the same cycle as the previous insn. A machine
3986 may use this information to decide how the instruction should
3987 be aligned. */
3988 if (issue_rate > 1
3989 && GET_CODE (PATTERN (insn)) != USE
3990 && GET_CODE (PATTERN (insn)) != CLOBBER
3991 && !DEBUG_INSN_P (insn))
3993 if (reload_completed)
3994 PUT_MODE (insn, clock_var > last_clock_var ? TImode : VOIDmode);
3995 last_clock_var = clock_var;
3998 if (nonscheduled_insns_begin != NULL_RTX)
3999 /* Indicate to debug counters that INSN is scheduled. */
4000 nonscheduled_insns_begin = insn;
4002 return advance;
4005 /* Functions for handling of notes. */
4007 /* Add the note list that ends at FROM_END to the end of TO_ENDP. */
4008 void
4009 concat_note_lists (rtx_insn *from_end, rtx_insn **to_endp)
4011 rtx_insn *from_start;
4013 /* It's easy when we have nothing to concat. */
4014 if (from_end == NULL)
4015 return;
4017 /* It's also easy when the destination is empty. */
4018 if (*to_endp == NULL)
4020 *to_endp = from_end;
4021 return;
4024 from_start = from_end;
4025 while (PREV_INSN (from_start) != NULL)
4026 from_start = PREV_INSN (from_start);
4028 SET_PREV_INSN (from_start) = *to_endp;
4029 SET_NEXT_INSN (*to_endp) = from_start;
4030 *to_endp = from_end;
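/* A minimal sketch of the splice above, with hypothetical chains: if
   *TO_ENDP ends the chain A <-> B and FROM_END ends the chain C <-> D,
   the code links B <-> C and leaves *TO_ENDP pointing at D, giving the
   single chain A <-> B <-> C <-> D.  */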
4033 /* Delete notes between HEAD and TAIL and put them in the chain
4034 of notes ended by NOTE_LIST. */
4035 void
4036 remove_notes (rtx_insn *head, rtx_insn *tail)
4038 rtx_insn *next_tail, *insn, *next;
4040 note_list = 0;
4041 if (head == tail && !INSN_P (head))
4042 return;
4044 next_tail = NEXT_INSN (tail);
4045 for (insn = head; insn != next_tail; insn = next)
4047 next = NEXT_INSN (insn);
4048 if (!NOTE_P (insn))
4049 continue;
4051 switch (NOTE_KIND (insn))
4053 case NOTE_INSN_BASIC_BLOCK:
4054 continue;
4056 case NOTE_INSN_EPILOGUE_BEG:
4057 if (insn != tail)
4059 remove_insn (insn);
4060 add_reg_note (next, REG_SAVE_NOTE,
4061 GEN_INT (NOTE_INSN_EPILOGUE_BEG));
4062 break;
4064 /* FALLTHRU */
4066 default:
4067 remove_insn (insn);
4069 /* Add the note to list that ends at NOTE_LIST. */
4070 SET_PREV_INSN (insn) = note_list;
4071 SET_NEXT_INSN (insn) = NULL_RTX;
4072 if (note_list)
4073 SET_NEXT_INSN (note_list) = insn;
4074 note_list = insn;
4075 break;
4078 gcc_assert ((sel_sched_p () || insn != tail) && insn != head);
4082 /* A structure to record enough data to allow us to backtrack the scheduler to
4083 a previous state. */
4084 struct haifa_saved_data
4086 /* Next entry on the list. */
4087 struct haifa_saved_data *next;
4089 /* Backtracking is associated with scheduling insns that have delay slots.
4090 DELAY_PAIR points to the structure that contains the insns involved, and
4091 the number of cycles between them. */
4092 struct delay_pair *delay_pair;
4094 /* Data used by the frontend (e.g. sched-ebb or sched-rgn). */
4095 void *fe_saved_data;
4096 /* Data used by the backend. */
4097 void *be_saved_data;
4099 /* Copies of global state. */
4100 int clock_var, last_clock_var;
4101 struct ready_list ready;
4102 state_t curr_state;
4104 rtx_insn *last_scheduled_insn;
4105 rtx last_nondebug_scheduled_insn;
4106 rtx_insn *nonscheduled_insns_begin;
4107 int cycle_issued_insns;
4109 /* Copies of state used in the inner loop of schedule_block. */
4110 struct sched_block_state sched_block;
4112 /* We don't need to save q_ptr, as its value is arbitrary and we can set it
4113 to 0 when restoring. */
4114 int q_size;
4115 rtx *insn_queue;
4117 /* Describe pattern replacements that occurred since this backtrack point
4118 was queued. */
4119 vec<dep_t> replacement_deps;
4120 vec<int> replace_apply;
4122 /* A copy of the next-cycle replacement vectors at the time of the backtrack
4123 point. */
4124 vec<dep_t> next_cycle_deps;
4125 vec<int> next_cycle_apply;
4128 /* A record, in reverse order, of all scheduled insns which have delay slots
4129 and may require backtracking. */
4130 static struct haifa_saved_data *backtrack_queue;
4132 /* For every dependency of INSN, set the FEEDS_BACKTRACK_INSN bit according
4133 to SET_P. */
4134 static void
4135 mark_backtrack_feeds (rtx insn, int set_p)
4137 sd_iterator_def sd_it;
4138 dep_t dep;
4139 FOR_EACH_DEP (insn, SD_LIST_HARD_BACK, sd_it, dep)
4141 FEEDS_BACKTRACK_INSN (DEP_PRO (dep)) = set_p;
4145 /* Save the current scheduler state so that we can backtrack to it
4146 later if necessary. PAIR gives the insns that make it necessary to
4147 save this point. SCHED_BLOCK is the local state of schedule_block
4148 that needs to be saved. */
4149 static void
4150 save_backtrack_point (struct delay_pair *pair,
4151 struct sched_block_state sched_block)
4153 int i;
4154 struct haifa_saved_data *save = XNEW (struct haifa_saved_data);
4156 save->curr_state = xmalloc (dfa_state_size);
4157 memcpy (save->curr_state, curr_state, dfa_state_size);
4159 save->ready.first = ready.first;
4160 save->ready.n_ready = ready.n_ready;
4161 save->ready.n_debug = ready.n_debug;
4162 save->ready.veclen = ready.veclen;
4163 save->ready.vec = XNEWVEC (rtx_insn *, ready.veclen);
4164 memcpy (save->ready.vec, ready.vec, ready.veclen * sizeof (rtx));
4166 save->insn_queue = XNEWVEC (rtx, max_insn_queue_index + 1);
4167 save->q_size = q_size;
4168 for (i = 0; i <= max_insn_queue_index; i++)
4170 int q = NEXT_Q_AFTER (q_ptr, i);
4171 save->insn_queue[i] = copy_INSN_LIST (insn_queue[q]);
4174 save->clock_var = clock_var;
4175 save->last_clock_var = last_clock_var;
4176 save->cycle_issued_insns = cycle_issued_insns;
4177 save->last_scheduled_insn = last_scheduled_insn;
4178 save->last_nondebug_scheduled_insn = last_nondebug_scheduled_insn;
4179 save->nonscheduled_insns_begin = nonscheduled_insns_begin;
4181 save->sched_block = sched_block;
4183 save->replacement_deps.create (0);
4184 save->replace_apply.create (0);
4185 save->next_cycle_deps = next_cycle_replace_deps.copy ();
4186 save->next_cycle_apply = next_cycle_apply.copy ();
4188 if (current_sched_info->save_state)
4189 save->fe_saved_data = (*current_sched_info->save_state) ();
4191 if (targetm.sched.alloc_sched_context)
4193 save->be_saved_data = targetm.sched.alloc_sched_context ();
4194 targetm.sched.init_sched_context (save->be_saved_data, false);
4196 else
4197 save->be_saved_data = NULL;
4199 save->delay_pair = pair;
4201 save->next = backtrack_queue;
4202 backtrack_queue = save;
4204 while (pair)
4206 mark_backtrack_feeds (pair->i2, 1);
4207 INSN_TICK (pair->i2) = INVALID_TICK;
4208 INSN_EXACT_TICK (pair->i2) = clock_var + pair_delay (pair);
4209 SHADOW_P (pair->i2) = pair->stages == 0;
4210 pair = pair->next_same_i1;
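/* For illustration, with hypothetical cycle numbers: backtrack points
   saved at cycles 3, 7 and 9 leave BACKTRACK_QUEUE as 9 -> 7 -> 3, so
   restore_last_backtrack_point always restores the most recently saved
   state first.  */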
4214 /* Walk the ready list and all queues. If any insns have unresolved backwards
4215 dependencies, these must be cancelled deps, broken by predication. Set or
4216 clear (depending on SET) the DEP_CANCELLED bit in DEP_STATUS. */
4218 static void
4219 toggle_cancelled_flags (bool set)
4221 int i;
4222 sd_iterator_def sd_it;
4223 dep_t dep;
4225 if (ready.n_ready > 0)
4227 rtx_insn **first = ready_lastpos (&ready);
4228 for (i = 0; i < ready.n_ready; i++)
4229 FOR_EACH_DEP (first[i], SD_LIST_BACK, sd_it, dep)
4230 if (!DEBUG_INSN_P (DEP_PRO (dep)))
4232 if (set)
4233 DEP_STATUS (dep) |= DEP_CANCELLED;
4234 else
4235 DEP_STATUS (dep) &= ~DEP_CANCELLED;
4238 for (i = 0; i <= max_insn_queue_index; i++)
4240 int q = NEXT_Q_AFTER (q_ptr, i);
4241 rtx link;
4242 for (link = insn_queue[q]; link; link = XEXP (link, 1))
4244 rtx insn = XEXP (link, 0);
4245 FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
4246 if (!DEBUG_INSN_P (DEP_PRO (dep)))
4248 if (set)
4249 DEP_STATUS (dep) |= DEP_CANCELLED;
4250 else
4251 DEP_STATUS (dep) &= ~DEP_CANCELLED;
4257 /* Undo the replacements that have occurred after backtrack point SAVE
4258 was placed. */
4259 static void
4260 undo_replacements_for_backtrack (struct haifa_saved_data *save)
4262 while (!save->replacement_deps.is_empty ())
4264 dep_t dep = save->replacement_deps.pop ();
4265 int apply_p = save->replace_apply.pop ();
4267 if (apply_p)
4268 restore_pattern (dep, true);
4269 else
4270 apply_replacement (dep, true);
4272 save->replacement_deps.release ();
4273 save->replace_apply.release ();
4276 /* Pop entries from the SCHEDULED_INSNS vector up to and including INSN.
4277 Restore their dependencies to an unresolved state, and mark them as
4278 queued nowhere. */
4280 static void
4281 unschedule_insns_until (rtx insn)
4283 auto_vec<rtx_insn *> recompute_vec;
4285 /* Make two passes over the insns to be unscheduled. First, we clear out
4286 dependencies and other trivial bookkeeping. */
4287 for (;;)
4289 rtx_insn *last;
4290 sd_iterator_def sd_it;
4291 dep_t dep;
4293 last = scheduled_insns.pop ();
4295 /* This will be changed by restore_backtrack_point if the insn is in
4296 any queue. */
4297 QUEUE_INDEX (last) = QUEUE_NOWHERE;
4298 if (last != insn)
4299 INSN_TICK (last) = INVALID_TICK;
4301 if (modulo_ii > 0 && INSN_UID (last) < modulo_iter0_max_uid)
4302 modulo_insns_scheduled--;
4304 for (sd_it = sd_iterator_start (last, SD_LIST_RES_FORW);
4305 sd_iterator_cond (&sd_it, &dep);)
4307 rtx_insn *con = DEP_CON (dep);
4308 sd_unresolve_dep (sd_it);
4309 if (!MUST_RECOMPUTE_SPEC_P (con))
4311 MUST_RECOMPUTE_SPEC_P (con) = 1;
4312 recompute_vec.safe_push (con);
4316 if (last == insn)
4317 break;
4320 /* A second pass, to update ready and speculation status for insns
4321 depending on the unscheduled ones. The first pass must have
4322 popped the scheduled_insns vector up to the point where we
4323 restart scheduling, as recompute_todo_spec requires it to be
4324 up-to-date. */
4325 while (!recompute_vec.is_empty ())
4327 rtx_insn *con;
4329 con = recompute_vec.pop ();
4330 MUST_RECOMPUTE_SPEC_P (con) = 0;
4331 if (!sd_lists_empty_p (con, SD_LIST_HARD_BACK))
4333 TODO_SPEC (con) = HARD_DEP;
4334 INSN_TICK (con) = INVALID_TICK;
4335 if (PREDICATED_PAT (con) != NULL_RTX)
4336 haifa_change_pattern (con, ORIG_PAT (con));
4338 else if (QUEUE_INDEX (con) != QUEUE_SCHEDULED)
4339 TODO_SPEC (con) = recompute_todo_spec (con, true);
4343 /* Restore scheduler state from the topmost entry on the backtracking queue.
4344 PSCHED_BLOCK_P points to the local data of schedule_block that we must
4345 overwrite with the saved data.
4346 The caller must already have called unschedule_insns_until. */
4348 static void
4349 restore_last_backtrack_point (struct sched_block_state *psched_block)
4351 rtx link;
4352 int i;
4353 struct haifa_saved_data *save = backtrack_queue;
4355 backtrack_queue = save->next;
4357 if (current_sched_info->restore_state)
4358 (*current_sched_info->restore_state) (save->fe_saved_data);
4360 if (targetm.sched.alloc_sched_context)
4362 targetm.sched.set_sched_context (save->be_saved_data);
4363 targetm.sched.free_sched_context (save->be_saved_data);
4366 /* Do this first since it clobbers INSN_TICK of the involved
4367 instructions. */
4368 undo_replacements_for_backtrack (save);
4370 /* Clear the QUEUE_INDEX of everything in the ready list or one
4371 of the queues. */
4372 if (ready.n_ready > 0)
4374 rtx_insn **first = ready_lastpos (&ready);
4375 for (i = 0; i < ready.n_ready; i++)
4377 rtx_insn *insn = first[i];
4378 QUEUE_INDEX (insn) = QUEUE_NOWHERE;
4379 INSN_TICK (insn) = INVALID_TICK;
4382 for (i = 0; i <= max_insn_queue_index; i++)
4384 int q = NEXT_Q_AFTER (q_ptr, i);
4386 for (link = insn_queue[q]; link; link = XEXP (link, 1))
4388 rtx_insn *x = as_a <rtx_insn *> (XEXP (link, 0));
4389 QUEUE_INDEX (x) = QUEUE_NOWHERE;
4390 INSN_TICK (x) = INVALID_TICK;
4392 free_INSN_LIST_list (&insn_queue[q]);
4395 free (ready.vec);
4396 ready = save->ready;
4398 if (ready.n_ready > 0)
4400 rtx_insn **first = ready_lastpos (&ready);
4401 for (i = 0; i < ready.n_ready; i++)
4403 rtx_insn *insn = first[i];
4404 QUEUE_INDEX (insn) = QUEUE_READY;
4405 TODO_SPEC (insn) = recompute_todo_spec (insn, true);
4406 INSN_TICK (insn) = save->clock_var;
4410 q_ptr = 0;
4411 q_size = save->q_size;
4412 for (i = 0; i <= max_insn_queue_index; i++)
4414 int q = NEXT_Q_AFTER (q_ptr, i);
4416 insn_queue[q] = save->insn_queue[q];
4418 for (link = insn_queue[q]; link; link = XEXP (link, 1))
4420 rtx_insn *x = as_a <rtx_insn *> (XEXP (link, 0));
4421 QUEUE_INDEX (x) = i;
4422 TODO_SPEC (x) = recompute_todo_spec (x, true);
4423 INSN_TICK (x) = save->clock_var + i;
4426 free (save->insn_queue);
4428 toggle_cancelled_flags (true);
4430 clock_var = save->clock_var;
4431 last_clock_var = save->last_clock_var;
4432 cycle_issued_insns = save->cycle_issued_insns;
4433 last_scheduled_insn = save->last_scheduled_insn;
4434 last_nondebug_scheduled_insn = save->last_nondebug_scheduled_insn;
4435 nonscheduled_insns_begin = save->nonscheduled_insns_begin;
4437 *psched_block = save->sched_block;
4439 memcpy (curr_state, save->curr_state, dfa_state_size);
4440 free (save->curr_state);
4442 mark_backtrack_feeds (save->delay_pair->i2, 0);
4444 gcc_assert (next_cycle_replace_deps.is_empty ());
4445 next_cycle_replace_deps = save->next_cycle_deps.copy ();
4446 next_cycle_apply = save->next_cycle_apply.copy ();
4448 free (save);
4450 for (save = backtrack_queue; save; save = save->next)
4452 mark_backtrack_feeds (save->delay_pair->i2, 1);
4456 /* Discard all data associated with the topmost entry in the backtrack
4457 queue. If RESET_TICK is false, we just want to free the data. If true,
4458 we are doing this because we discovered a reason to backtrack. In the
4459 latter case, also reset the INSN_TICK for the shadow insn. */
4460 static void
4461 free_topmost_backtrack_point (bool reset_tick)
4463 struct haifa_saved_data *save = backtrack_queue;
4464 int i;
4466 backtrack_queue = save->next;
4468 if (reset_tick)
4470 struct delay_pair *pair = save->delay_pair;
4471 while (pair)
4473 INSN_TICK (pair->i2) = INVALID_TICK;
4474 INSN_EXACT_TICK (pair->i2) = INVALID_TICK;
4475 pair = pair->next_same_i1;
4477 undo_replacements_for_backtrack (save);
4479 else
4481 save->replacement_deps.release ();
4482 save->replace_apply.release ();
4485 if (targetm.sched.free_sched_context)
4486 targetm.sched.free_sched_context (save->be_saved_data);
4487 if (current_sched_info->restore_state)
4488 free (save->fe_saved_data);
4489 for (i = 0; i <= max_insn_queue_index; i++)
4490 free_INSN_LIST_list (&save->insn_queue[i]);
4491 free (save->insn_queue);
4492 free (save->curr_state);
4493 free (save->ready.vec);
4494 free (save);
4497 /* Free the entire backtrack queue. */
4498 static void
4499 free_backtrack_queue (void)
4501 while (backtrack_queue)
4502 free_topmost_backtrack_point (false);
4505 /* Apply the replacement described by DEP_REPLACE (DEP). If IMMEDIATELY
4506 is false, we may have to postpone the replacement until the start of
4507 the next cycle, at which point we will be called again with
4508 IMMEDIATELY true. This is only done for machines which have
4509 instruction packets with explicit parallelism, however. */
4510 static void
4511 apply_replacement (dep_t dep, bool immediately)
4513 struct dep_replacement *desc = DEP_REPLACE (dep);
4514 if (!immediately && targetm.sched.exposed_pipeline && reload_completed)
4516 next_cycle_replace_deps.safe_push (dep);
4517 next_cycle_apply.safe_push (1);
4519 else
4521 bool success;
4523 if (QUEUE_INDEX (desc->insn) == QUEUE_SCHEDULED)
4524 return;
4526 if (sched_verbose >= 5)
4527 fprintf (sched_dump, "applying replacement for insn %d\n",
4528 INSN_UID (desc->insn));
4530 success = validate_change (desc->insn, desc->loc, desc->newval, 0);
4531 gcc_assert (success);
4533 update_insn_after_change (desc->insn);
4534 if ((TODO_SPEC (desc->insn) & (HARD_DEP | DEP_POSTPONED)) == 0)
4535 fix_tick_ready (desc->insn);
4537 if (backtrack_queue != NULL)
4539 backtrack_queue->replacement_deps.safe_push (dep);
4540 backtrack_queue->replace_apply.safe_push (1);
4545 /* We have determined that a pattern involved in DEP must be restored.
4546 If IMMEDIATELY is false, we may have to postpone the replacement
4547 until the start of the next cycle, at which point we will be called
4548 again with IMMEDIATELY true. */
4549 static void
4550 restore_pattern (dep_t dep, bool immediately)
4552 rtx_insn *next = DEP_CON (dep);
4553 int tick = INSN_TICK (next);
4555 /* If we already scheduled the insn, the modified version is
4556 correct. */
4557 if (QUEUE_INDEX (next) == QUEUE_SCHEDULED)
4558 return;
4560 if (!immediately && targetm.sched.exposed_pipeline && reload_completed)
4562 next_cycle_replace_deps.safe_push (dep);
4563 next_cycle_apply.safe_push (0);
4564 return;
4568 if (DEP_TYPE (dep) == REG_DEP_CONTROL)
4570 if (sched_verbose >= 5)
4571 fprintf (sched_dump, "restoring pattern for insn %d\n",
4572 INSN_UID (next));
4573 haifa_change_pattern (next, ORIG_PAT (next));
4575 else
4577 struct dep_replacement *desc = DEP_REPLACE (dep);
4578 bool success;
4580 if (sched_verbose >= 5)
4581 fprintf (sched_dump, "restoring pattern for insn %d\n",
4582 INSN_UID (desc->insn));
4583 tick = INSN_TICK (desc->insn);
4585 success = validate_change (desc->insn, desc->loc, desc->orig, 0);
4586 gcc_assert (success);
4587 update_insn_after_change (desc->insn);
4588 if (backtrack_queue != NULL)
4590 backtrack_queue->replacement_deps.safe_push (dep);
4591 backtrack_queue->replace_apply.safe_push (0);
4594 INSN_TICK (next) = tick;
4595 if (TODO_SPEC (next) == DEP_POSTPONED)
4596 return;
4598 if (sd_lists_empty_p (next, SD_LIST_BACK))
4599 TODO_SPEC (next) = 0;
4600 else if (!sd_lists_empty_p (next, SD_LIST_HARD_BACK))
4601 TODO_SPEC (next) = HARD_DEP;
4604 /* Perform the pattern replacements that were queued up for the start
4605 of the next cycle. */
4606 static void
4607 perform_replacements_new_cycle (void)
4609 int i;
4610 dep_t dep;
4611 FOR_EACH_VEC_ELT (next_cycle_replace_deps, i, dep)
4613 int apply_p = next_cycle_apply[i];
4614 if (apply_p)
4615 apply_replacement (dep, true);
4616 else
4617 restore_pattern (dep, true);
4619 next_cycle_replace_deps.truncate (0);
4620 next_cycle_apply.truncate (0);
4623 /* Compute INSN_TICK_ESTIMATE for INSN. PROCESSED is a bitmap of
4624 instructions we've previously encountered; a set bit prevents
4625 recursion. BUDGET is a limit on how far ahead we look; it is
4626 reduced on recursive calls. Return true if we produced a good
4627 estimate, or false if we exceeded the budget. */
4628 static bool
4629 estimate_insn_tick (bitmap processed, rtx_insn *insn, int budget)
4631 sd_iterator_def sd_it;
4632 dep_t dep;
4633 int earliest = INSN_TICK (insn);
4635 FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
4637 rtx_insn *pro = DEP_PRO (dep);
4638 int t;
4640 if (DEP_STATUS (dep) & DEP_CANCELLED)
4641 continue;
4643 if (QUEUE_INDEX (pro) == QUEUE_SCHEDULED)
4644 gcc_assert (INSN_TICK (pro) + dep_cost (dep) <= INSN_TICK (insn));
4645 else
4647 int cost = dep_cost (dep);
4648 if (cost >= budget)
4649 return false;
4650 if (!bitmap_bit_p (processed, INSN_LUID (pro)))
4652 if (!estimate_insn_tick (processed, pro, budget - cost))
4653 return false;
4655 gcc_assert (INSN_TICK_ESTIMATE (pro) != INVALID_TICK);
4656 t = INSN_TICK_ESTIMATE (pro) + cost;
4657 if (earliest == INVALID_TICK || t > earliest)
4658 earliest = t;
4661 bitmap_set_bit (processed, INSN_LUID (insn));
4662 INSN_TICK_ESTIMATE (insn) = earliest;
4663 return true;
4666 /* Examine the pair of insns in P, and estimate (optimistically, assuming
4667 infinite resources) the cycle in which the delayed shadow can be issued.
4668 Return the number of cycles that must pass before the real insn can be
4669 issued in order to meet this constraint. */
4670 static int
4671 estimate_shadow_tick (struct delay_pair *p)
4673 bitmap_head processed;
4674 int t;
4675 bool cutoff;
4676 bitmap_initialize (&processed, 0);
4678 cutoff = !estimate_insn_tick (&processed, p->i2,
4679 max_insn_queue_index + pair_delay (p));
4680 bitmap_clear (&processed);
4681 if (cutoff)
4682 return max_insn_queue_index;
4683 t = INSN_TICK_ESTIMATE (p->i2) - (clock_var + pair_delay (p) + 1);
4684 if (t > 0)
4685 return t;
4686 return 0;
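/* A worked example with hypothetical numbers: if CLOCK_VAR is 10,
   PAIR_DELAY (P) is 3 and the estimated tick of the shadow P->i2 is 16,
   the real insn must wait 16 - (10 + 3 + 1) = 2 cycles; an estimate of
   14 or lower yields t <= 0, i.e. no stall at all.  */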
4689 /* If INSN has no unresolved backwards dependencies, add it to the schedule and
4690 recursively resolve all its forward dependencies. */
4691 static void
4692 resolve_dependencies (rtx_insn *insn)
4694 sd_iterator_def sd_it;
4695 dep_t dep;
4697 /* Don't use sd_lists_empty_p; it ignores debug insns. */
4698 if (DEPS_LIST_FIRST (INSN_HARD_BACK_DEPS (insn)) != NULL
4699 || DEPS_LIST_FIRST (INSN_SPEC_BACK_DEPS (insn)) != NULL)
4700 return;
4702 if (sched_verbose >= 4)
4703 fprintf (sched_dump, ";;\tquickly resolving %d\n", INSN_UID (insn));
4705 if (QUEUE_INDEX (insn) >= 0)
4706 queue_remove (insn);
4708 scheduled_insns.safe_push (insn);
4710 /* Update dependent instructions. */
4711 for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
4712 sd_iterator_cond (&sd_it, &dep);)
4714 rtx_insn *next = DEP_CON (dep);
4716 if (sched_verbose >= 4)
4717 fprintf (sched_dump, ";;\t\tdep %d against %d\n", INSN_UID (insn),
4718 INSN_UID (next));
4720 /* Resolve the dependence between INSN and NEXT.
4721 sd_resolve_dep () moves current dep to another list thus
4722 advancing the iterator. */
4723 sd_resolve_dep (sd_it);
4725 if (!IS_SPECULATION_BRANCHY_CHECK_P (insn))
4727 resolve_dependencies (next);
4729 else
4730 /* A check insn always has only one forward dependence (to the first
4731 insn in the recovery block), so this will be executed only once. */
4733 gcc_assert (sd_lists_empty_p (insn, SD_LIST_FORW));
4739 /* Return the head and tail pointers of the ebb starting at BEG and ending
4740 at END. */
4741 void
4742 get_ebb_head_tail (basic_block beg, basic_block end,
4743 rtx_insn **headp, rtx_insn **tailp)
4745 rtx_insn *beg_head = BB_HEAD (beg);
4746 rtx_insn * beg_tail = BB_END (beg);
4747 rtx_insn * end_head = BB_HEAD (end);
4748 rtx_insn * end_tail = BB_END (end);
4750 /* Don't include any notes or labels at the beginning of the BEG
4751 basic block, or notes at the end of the END basic block. */
4753 if (LABEL_P (beg_head))
4754 beg_head = NEXT_INSN (beg_head);
4756 while (beg_head != beg_tail)
4757 if (NOTE_P (beg_head))
4758 beg_head = NEXT_INSN (beg_head);
4759 else if (DEBUG_INSN_P (beg_head))
4761 rtx_insn * note, *next;
4763 for (note = NEXT_INSN (beg_head);
4764 note != beg_tail;
4765 note = next)
4767 next = NEXT_INSN (note);
4768 if (NOTE_P (note))
4770 if (sched_verbose >= 9)
4771 fprintf (sched_dump, "reorder %i\n", INSN_UID (note));
4773 reorder_insns_nobb (note, note, PREV_INSN (beg_head));
4775 if (BLOCK_FOR_INSN (note) != beg)
4776 df_insn_change_bb (note, beg);
4778 else if (!DEBUG_INSN_P (note))
4779 break;
4782 break;
4784 else
4785 break;
4787 *headp = beg_head;
4789 if (beg == end)
4790 end_head = beg_head;
4791 else if (LABEL_P (end_head))
4792 end_head = NEXT_INSN (end_head);
4794 while (end_head != end_tail)
4795 if (NOTE_P (end_tail))
4796 end_tail = PREV_INSN (end_tail);
4797 else if (DEBUG_INSN_P (end_tail))
4799 rtx_insn * note, *prev;
4801 for (note = PREV_INSN (end_tail);
4802 note != end_head;
4803 note = prev)
4805 prev = PREV_INSN (note);
4806 if (NOTE_P (note))
4808 if (sched_verbose >= 9)
4809 fprintf (sched_dump, "reorder %i\n", INSN_UID (note));
4811 reorder_insns_nobb (note, note, end_tail);
4813 if (end_tail == BB_END (end))
4814 BB_END (end) = note;
4816 if (BLOCK_FOR_INSN (note) != end)
4817 df_insn_change_bb (note, end);
4819 else if (!DEBUG_INSN_P (note))
4820 break;
4823 break;
4825 else
4826 break;
4828 *tailp = end_tail;
4831 /* Return nonzero if there are no real insns in the range [ HEAD, TAIL ]. */
4833 int
4834 no_real_insns_p (const_rtx head, const_rtx tail)
4836 while (head != NEXT_INSN (tail))
4838 if (!NOTE_P (head) && !LABEL_P (head))
4839 return 0;
4840 head = NEXT_INSN (head);
4842 return 1;
4845 /* Restore-other-notes: NOTE_LIST is the end of a chain of notes
4846 previously found among the insns. Insert them just before HEAD. */
4847 rtx_insn *
4848 restore_other_notes (rtx_insn *head, basic_block head_bb)
4850 if (note_list != 0)
4852 rtx_insn *note_head = note_list;
4854 if (head)
4855 head_bb = BLOCK_FOR_INSN (head);
4856 else
4857 head = NEXT_INSN (bb_note (head_bb));
4859 while (PREV_INSN (note_head))
4861 set_block_for_insn (note_head, head_bb);
4862 note_head = PREV_INSN (note_head);
4864 /* In the loop above we've missed this note. */
4865 set_block_for_insn (note_head, head_bb);
4867 SET_PREV_INSN (note_head) = PREV_INSN (head);
4868 SET_NEXT_INSN (PREV_INSN (head)) = note_head;
4869 SET_PREV_INSN (head) = note_list;
4870 SET_NEXT_INSN (note_list) = head;
4872 if (BLOCK_FOR_INSN (head) != head_bb)
4873 BB_END (head_bb) = note_list;
4875 head = note_head;
4878 return head;
4881 /* When we know we are going to discard the schedule due to a failed attempt
4882 at modulo scheduling, undo all replacements. */
4883 static void
4884 undo_all_replacements (void)
4886 rtx_insn *insn;
4887 int i;
4889 FOR_EACH_VEC_ELT (scheduled_insns, i, insn)
4891 sd_iterator_def sd_it;
4892 dep_t dep;
4894 /* See if we must undo a replacement. */
4895 for (sd_it = sd_iterator_start (insn, SD_LIST_RES_FORW);
4896 sd_iterator_cond (&sd_it, &dep); sd_iterator_next (&sd_it))
4898 struct dep_replacement *desc = DEP_REPLACE (dep);
4899 if (desc != NULL)
4900 validate_change (desc->insn, desc->loc, desc->orig, 0);
4905 /* Return the first non-scheduled insn in the current scheduling block.
4906 This is mostly used for debug-counter purposes. */
4907 static rtx_insn *
4908 first_nonscheduled_insn (void)
4910 rtx_insn *insn = (nonscheduled_insns_begin != NULL_RTX
4911 ? nonscheduled_insns_begin
4912 : current_sched_info->prev_head);
4916 insn = next_nonnote_nondebug_insn (insn);
4918 while (QUEUE_INDEX (insn) == QUEUE_SCHEDULED);
4920 return insn;
4923 /* Move insns that became ready to fire from queue to ready list. */
4925 static void
4926 queue_to_ready (struct ready_list *ready)
4928 rtx_insn *insn;
4929 rtx link;
4930 rtx skip_insn;
4932 q_ptr = NEXT_Q (q_ptr);
4934 if (dbg_cnt (sched_insn) == false)
4935 /* If the debug counter is activated, do not requeue the first
4936 nonscheduled insn. */
4937 skip_insn = first_nonscheduled_insn ();
4938 else
4939 skip_insn = NULL_RTX;
4941 /* Add all pending insns that can be scheduled without stalls to the
4942 ready list. */
4943 for (link = insn_queue[q_ptr]; link; link = XEXP (link, 1))
4945 insn = as_a <rtx_insn *> (XEXP (link, 0));
4946 q_size -= 1;
4948 if (sched_verbose >= 2)
4949 fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
4950 (*current_sched_info->print_insn) (insn, 0));
4952 /* If the ready list is full, delay the insn for 1 cycle.
4953 See the comment in schedule_block for the rationale. */
4954 if (!reload_completed
4955 && (ready->n_ready - ready->n_debug > MAX_SCHED_READY_INSNS
4956 || (sched_pressure == SCHED_PRESSURE_MODEL
4957 /* Limit pressure recalculations to MAX_SCHED_READY_INSNS
4958 instructions too. */
4959 && model_index (insn) > (model_curr_point
4960 + MAX_SCHED_READY_INSNS)))
4961 && !(sched_pressure == SCHED_PRESSURE_MODEL
4962 && model_curr_point < model_num_insns
4963 /* Always allow the next model instruction to issue. */
4964 && model_index (insn) == model_curr_point)
4965 && !SCHED_GROUP_P (insn)
4966 && insn != skip_insn)
4968 if (sched_verbose >= 2)
4969 fprintf (sched_dump, "keeping in queue, ready full\n");
4970 queue_insn (insn, 1, "ready full");
4972 else
4974 ready_add (ready, insn, false);
4975 if (sched_verbose >= 2)
4976 fprintf (sched_dump, "moving to ready without stalls\n");
4979 free_INSN_LIST_list (&insn_queue[q_ptr]);
4981 /* If there are no ready insns, stall until one is ready and add all
4982 of the pending insns at that point to the ready list. */
4983 if (ready->n_ready == 0)
4985 int stalls;
4987 for (stalls = 1; stalls <= max_insn_queue_index; stalls++)
4989 if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
4991 for (; link; link = XEXP (link, 1))
4993 insn = as_a <rtx_insn *> (XEXP (link, 0));
4994 q_size -= 1;
4996 if (sched_verbose >= 2)
4997 fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
4998 (*current_sched_info->print_insn) (insn, 0));
5000 ready_add (ready, insn, false);
5001 if (sched_verbose >= 2)
5002 fprintf (sched_dump, "moving to ready with %d stalls\n", stalls);
5004 free_INSN_LIST_list (&insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]);
5006 advance_one_cycle ();
5008 break;
5011 advance_one_cycle ();
5014 q_ptr = NEXT_Q_AFTER (q_ptr, stalls);
5015 clock_var += stalls;
5016 if (sched_verbose >= 2)
5017 fprintf (sched_dump, ";;\tAdvancing clock by %d cycle[s] to %d\n",
5018 stalls, clock_var);
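/* For illustration, with a hypothetical queue: if the ready list is
   empty and the nearest nonempty queue slot is 3 cycles away, the loop
   above calls advance_one_cycle () three times, moves that slot's insns
   to the ready list in one go, and advances CLOCK_VAR by 3.  */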
5022 /* Used by early_queue_to_ready. Determines whether it is "ok" to
5023 prematurely move INSN from the queue to the ready list. Currently,
5024 if a target defines the hook 'is_costly_dependence', this function
5025 uses the hook to check whether any dependences considered costly
5026 by the target exist between INSN and other insns that
5027 have already been scheduled. Dependences are checked up to Y cycles
5028 back, with default Y=1. The flag -fsched-stalled-insns-dep=Y allows
5029 controlling this value.
5030 (Other considerations could be taken into account instead (or in
5031 addition) depending on user flags and target hooks.) */
5033 static bool
5034 ok_for_early_queue_removal (rtx insn)
5036 if (targetm.sched.is_costly_dependence)
5038 rtx prev_insn;
5039 int n_cycles;
5040 int i = scheduled_insns.length ();
5041 for (n_cycles = flag_sched_stalled_insns_dep; n_cycles; n_cycles--)
5043 while (i-- > 0)
5045 int cost;
5047 prev_insn = scheduled_insns[i];
5049 if (!NOTE_P (prev_insn))
5051 dep_t dep;
5053 dep = sd_find_dep_between (prev_insn, insn, true);
5055 if (dep != NULL)
5057 cost = dep_cost (dep);
5059 if (targetm.sched.is_costly_dependence (dep, cost,
5060 flag_sched_stalled_insns_dep - n_cycles))
5061 return false;
5065 if (GET_MODE (prev_insn) == TImode) /* end of dispatch group */
5066 break;
5069 if (i == 0)
5070 break;
5074 return true;
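/* For illustration, with a hypothetical flag value: under
   -fsched-stalled-insns-dep=2 the walk above inspects already-scheduled
   insns back through two TImode dispatch-group boundaries, asking the
   target's is_costly_dependence hook about each dependence found on the
   way; any costly one vetoes the early removal.  */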
5078 /* Remove insns from the queue before they become "ready" with respect
5079 to FU latency considerations. */
5081 static int
5082 early_queue_to_ready (state_t state, struct ready_list *ready)
5084 rtx_insn *insn;
5085 rtx link;
5086 rtx next_link;
5087 rtx prev_link;
5088 bool move_to_ready;
5089 int cost;
5090 state_t temp_state = alloca (dfa_state_size);
5091 int stalls;
5092 int insns_removed = 0;
5095 Flag '-fsched-stalled-insns=X' determines the aggressiveness of this
5096 function:
5098 X == 0: There is no limit on how many queued insns can be removed
5099 prematurely. (flag_sched_stalled_insns = -1).
5101 X >= 1: Only X queued insns can be removed prematurely in each
5102 invocation. (flag_sched_stalled_insns = X).
5104 Otherwise: Early queue removal is disabled.
5105 (flag_sched_stalled_insns = 0)
5108 if (! flag_sched_stalled_insns)
5109 return 0;
5111 for (stalls = 0; stalls <= max_insn_queue_index; stalls++)
5113 if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
5115 if (sched_verbose > 6)
5116 fprintf (sched_dump, ";; look at index %d + %d\n", q_ptr, stalls);
5118 prev_link = 0;
5119 while (link)
5121 next_link = XEXP (link, 1);
5122 insn = as_a <rtx_insn *> (XEXP (link, 0));
5123 if (insn && sched_verbose > 6)
5124 print_rtl_single (sched_dump, insn);
5126 memcpy (temp_state, state, dfa_state_size);
5127 if (recog_memoized (insn) < 0)
5128 /* Use a non-negative cost to indicate that it's not ready,
5129 avoiding an infinite Q->R->Q->R... cycle. */
5130 cost = 0;
5131 else
5132 cost = state_transition (temp_state, insn);
5134 if (sched_verbose >= 6)
5135 fprintf (sched_dump, "transition cost = %d\n", cost);
5137 move_to_ready = false;
5138 if (cost < 0)
5140 move_to_ready = ok_for_early_queue_removal (insn);
5141 if (move_to_ready == true)
5143 /* move from Q to R */
5144 q_size -= 1;
5145 ready_add (ready, insn, false);
5147 if (prev_link)
5148 XEXP (prev_link, 1) = next_link;
5149 else
5150 insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = next_link;
5152 free_INSN_LIST_node (link);
5154 if (sched_verbose >= 2)
5155 fprintf (sched_dump, ";;\t\tEarly Q-->Ready: insn %s\n",
5156 (*current_sched_info->print_insn) (insn, 0));
5158 insns_removed++;
5159 if (insns_removed == flag_sched_stalled_insns)
5160 /* Remove no more than flag_sched_stalled_insns insns
5161 from Q at a time. */
5162 return insns_removed;
5166 if (move_to_ready == false)
5167 prev_link = link;
5169 link = next_link;
5170 } /* while link */
5171 } /* if link */
5173 } /* for stalls.. */
5175 return insns_removed;
5179 /* Print the ready list for debugging purposes.
5180 If READY_TRY is non-NULL then only print insns that max_issue
5181 will consider. */
5182 static void
5183 debug_ready_list_1 (struct ready_list *ready, signed char *ready_try)
5185 rtx_insn **p;
5186 int i;
5188 if (ready->n_ready == 0)
5190 fprintf (sched_dump, "\n");
5191 return;
5194 p = ready_lastpos (ready);
5195 for (i = 0; i < ready->n_ready; i++)
5197 if (ready_try != NULL && ready_try[ready->n_ready - i - 1])
5198 continue;
5200 fprintf (sched_dump, " %s:%d",
5201 (*current_sched_info->print_insn) (p[i], 0),
5202 INSN_LUID (p[i]));
5203 if (sched_pressure != SCHED_PRESSURE_NONE)
5204 fprintf (sched_dump, "(cost=%d",
5205 INSN_REG_PRESSURE_EXCESS_COST_CHANGE (p[i]));
5206 fprintf (sched_dump, ":prio=%d", INSN_PRIORITY (p[i]));
5207 if (INSN_TICK (p[i]) > clock_var)
5208 fprintf (sched_dump, ":delay=%d", INSN_TICK (p[i]) - clock_var);
5209 if (sched_pressure != SCHED_PRESSURE_NONE)
5210 fprintf (sched_dump, ")");
5212 fprintf (sched_dump, "\n");
5215 /* Print the ready list. Callable from debugger. */
5216 static void
5217 debug_ready_list (struct ready_list *ready)
5219 debug_ready_list_1 (ready, NULL);
5222 /* Search INSN for REG_SAVE_NOTE notes and convert them back into insn
5223 NOTEs. This is used for NOTE_INSN_EPILOGUE_BEG, so that sched-ebb
5224 replaces the epilogue note in the correct basic block. */
5225 void
5226 reemit_notes (rtx_insn *insn)
5228 rtx note;
5229 rtx_insn *last = insn;
5231 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
5233 if (REG_NOTE_KIND (note) == REG_SAVE_NOTE)
5235 enum insn_note note_type = (enum insn_note) INTVAL (XEXP (note, 0));
5237 last = emit_note_before (note_type, last);
5238 remove_note (insn, note);
5243 /* Move INSN. Reemit notes if needed. Update CFG, if needed. */
5244 static void
5245 move_insn (rtx_insn *insn, rtx last, rtx nt)
5247 if (PREV_INSN (insn) != last)
5249 basic_block bb;
5250 rtx_insn *note;
5251 int jump_p = 0;
5253 bb = BLOCK_FOR_INSN (insn);
5255 /* BB_HEAD is either LABEL or NOTE. */
5256 gcc_assert (BB_HEAD (bb) != insn);
5258 if (BB_END (bb) == insn)
5259 /* If this is last instruction in BB, move end marker one
5260 instruction up. */
5262 /* Jumps are always placed at the end of a basic block. */
5263 jump_p = control_flow_insn_p (insn);
5265 gcc_assert (!jump_p
5266 || ((common_sched_info->sched_pass_id == SCHED_RGN_PASS)
5267 && IS_SPECULATION_BRANCHY_CHECK_P (insn))
5268 || (common_sched_info->sched_pass_id
5269 == SCHED_EBB_PASS));
5271 gcc_assert (BLOCK_FOR_INSN (PREV_INSN (insn)) == bb);
5273 BB_END (bb) = PREV_INSN (insn);
5276 gcc_assert (BB_END (bb) != last);
5278 if (jump_p)
5279 /* We move the block note along with the jump. */
5281 gcc_assert (nt);
5283 note = NEXT_INSN (insn);
5284 while (NOTE_NOT_BB_P (note) && note != nt)
5285 note = NEXT_INSN (note);
5287 if (note != nt
5288 && (LABEL_P (note)
5289 || BARRIER_P (note)))
5290 note = NEXT_INSN (note);
5292 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
5294 else
5295 note = insn;
5297 SET_NEXT_INSN (PREV_INSN (insn)) = NEXT_INSN (note);
5298 SET_PREV_INSN (NEXT_INSN (note)) = PREV_INSN (insn);
5300 SET_NEXT_INSN (note) = NEXT_INSN (last);
5301 SET_PREV_INSN (NEXT_INSN (last)) = note;
5303 SET_NEXT_INSN (last) = insn;
5304 SET_PREV_INSN (insn) = last;
5306 bb = BLOCK_FOR_INSN (last);
5308 if (jump_p)
5310 fix_jump_move (insn);
5312 if (BLOCK_FOR_INSN (insn) != bb)
5313 move_block_after_check (insn);
5315 gcc_assert (BB_END (bb) == last);
5318 df_insn_change_bb (insn, bb);
5320 /* Update BB_END, if needed. */
5321 if (BB_END (bb) == last)
5322 BB_END (bb) = insn;
5325 SCHED_GROUP_P (insn) = 0;
5328 /* Return true if scheduling INSN will finish the current clock cycle. */
5329 static bool
5330 insn_finishes_cycle_p (rtx_insn *insn)
5332 if (SCHED_GROUP_P (insn))
5333 /* After issuing INSN, the rest of the sched_group will be forced to
5334 issue in order. Don't make any plans for the rest of the cycle. */
5335 return true;
5337 /* Finishing the block will, apparently, finish the cycle. */
5338 if (current_sched_info->insn_finishes_block_p
5339 && current_sched_info->insn_finishes_block_p (insn))
5340 return true;
5342 return false;
5345 /* Define type for target data used in multipass scheduling. */
5346 #ifndef TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DATA_T
5347 # define TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DATA_T int
5348 #endif
5349 typedef TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DATA_T first_cycle_multipass_data_t;
5351 /* The following structure describes an entry of the stack of choices. */
5352 struct choice_entry
5354 /* Ordinal number of the issued insn in the ready queue. */
5355 int index;
5356 /* The number of remaining insns whose issue we should still try. */
5357 int rest;
5358 /* The number of issued essential insns. */
5359 int n;
5360 /* State after issuing the insn. */
5361 state_t state;
5362 /* Target-specific data. */
5363 first_cycle_multipass_data_t target_data;
5366 /* The following array is used to implement a stack of choices used in
5367 function max_issue. */
5368 static struct choice_entry *choice_stack;
5370 /* This holds the value of the target dfa_lookahead hook. */
5371 int dfa_lookahead;
5373 /* The following variable holds the maximal number of tries of issuing
5374 insns for the first cycle multipass insn scheduling. We define
5375 this value as constant*(DFA_LOOKAHEAD**ISSUE_RATE). We would not
5376 need this constraint if all real insns (with non-negative codes)
5377 had reservations, because in that case the algorithm complexity is
5378 O(DFA_LOOKAHEAD**ISSUE_RATE). Unfortunately, the DFA descriptions
5379 might be incomplete and such insns might occur. For such
5380 descriptions, the complexity of the algorithm (without the constraint)
5381 could reach DFA_LOOKAHEAD ** N, where N is the queue length. */
5382 static int max_lookahead_tries;
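/* For example, with hypothetical hook values: DFA_LOOKAHEAD == 4 and
   ISSUE_RATE == 2 give max_issue an initial budget of
   100 * 4 * 4 = 1600 tries before the search is cut off.  */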
5384 /* The following value is the value of the hook
5385 `first_cycle_multipass_dfa_lookahead' at the last call of
5386 `max_issue'. */
5387 static int cached_first_cycle_multipass_dfa_lookahead = 0;
5389 /* The following value is the value of `issue_rate' at the last call of
5390 `sched_init'. */
5391 static int cached_issue_rate = 0;
5393 /* The following function returns the maximal (or close to maximal)
5394 number of insns which can be issued in the same cycle, one of which
5395 is the insn with the best rank (the first insn in READY). To do
5396 this, the function tries different samples of ready insns. READY
5397 is the current queue `ready'. The global array READY_TRY reflects
5398 which insns have already been issued in this try. The function stops
5399 immediately if it reaches a solution in which all instructions
5400 can be issued. INDEX will contain the index of the best insn in
5401 READY. This function is used only for first cycle multipass scheduling.
5403 PRIVILEGED_N >= 0
5405 This function expects recognized insns only. All USEs,
5406 CLOBBERs, etc must be filtered elsewhere. */
5407 int
5408 max_issue (struct ready_list *ready, int privileged_n, state_t state,
5409 bool first_cycle_insn_p, int *index)
5411 int n, i, all, n_ready, best, delay, tries_num;
5412 int more_issue;
5413 struct choice_entry *top;
5414 rtx_insn *insn;
5416 n_ready = ready->n_ready;
5417 gcc_assert (dfa_lookahead >= 1 && privileged_n >= 0
5418 && privileged_n <= n_ready);
5420 /* Init MAX_LOOKAHEAD_TRIES. */
5421 if (cached_first_cycle_multipass_dfa_lookahead != dfa_lookahead)
5423 cached_first_cycle_multipass_dfa_lookahead = dfa_lookahead;
5424 max_lookahead_tries = 100;
5425 for (i = 0; i < issue_rate; i++)
5426 max_lookahead_tries *= dfa_lookahead;
5429 /* Init max_points. */
5430 more_issue = issue_rate - cycle_issued_insns;
5431 gcc_assert (more_issue >= 0);
5433 /* The number of the issued insns in the best solution. */
5434 best = 0;
5436 top = choice_stack;
5438 /* Set initial state of the search. */
5439 memcpy (top->state, state, dfa_state_size);
5440 top->rest = dfa_lookahead;
5441 top->n = 0;
5442 if (targetm.sched.first_cycle_multipass_begin)
5443 targetm.sched.first_cycle_multipass_begin (&top->target_data,
5444 ready_try, n_ready,
5445 first_cycle_insn_p);
5447 /* Count the number of the insns to search among. */
5448 for (all = i = 0; i < n_ready; i++)
5449 if (!ready_try [i])
5450 all++;
5452 if (sched_verbose >= 2)
5454 fprintf (sched_dump, ";;\t\tmax_issue among %d insns:", all);
5455 debug_ready_list_1 (ready, ready_try);
5458 /* I is the index of the insn to try next. */
5459 i = 0;
5460 tries_num = 0;
5461 for (;;)
5463 if (/* If we've reached a dead end or searched enough of what we have
5464 been asked... */
5465 top->rest == 0
5466 /* or have nothing else to try... */
5467 || i >= n_ready
5468 /* or should not issue more. */
5469 || top->n >= more_issue)
5471 /* ??? (... || i == n_ready). */
5472 gcc_assert (i <= n_ready);
5474 /* We should not issue more than issue_rate instructions. */
5475 gcc_assert (top->n <= more_issue);
5477 if (top == choice_stack)
5478 break;
5480 if (best < top - choice_stack)
5482 if (privileged_n)
5484 n = privileged_n;
5485 /* Try to find issued privileged insn. */
5486 while (n && !ready_try[--n])
5490 if (/* If all insns are equally good... */
5491 privileged_n == 0
5492 /* Or a privileged insn will be issued. */
5493 || ready_try[n])
5494 /* Then we have a solution. */
5496 best = top - choice_stack;
5497 /* This is the index of the insn issued first in this
5498 solution. */
5499 *index = choice_stack [1].index;
5500 if (top->n == more_issue || best == all)
5501 break;
5505 /* Set ready-list index to point to the last insn
5506 ('i++' below will advance it to the next insn). */
5507 i = top->index;
5509 /* Backtrack. */
5510 ready_try [i] = 0;
5512 if (targetm.sched.first_cycle_multipass_backtrack)
5513 targetm.sched.first_cycle_multipass_backtrack (&top->target_data,
5514 ready_try, n_ready);
5516 top--;
5517 memcpy (state, top->state, dfa_state_size);
5519 else if (!ready_try [i])
5521 tries_num++;
5522 if (tries_num > max_lookahead_tries)
5523 break;
5524 insn = ready_element (ready, i);
5525 delay = state_transition (state, insn);
5526 if (delay < 0)
5528 if (state_dead_lock_p (state)
5529 || insn_finishes_cycle_p (insn))
5530 /* We won't issue any more instructions in the next
5531 choice_state. */
5532 top->rest = 0;
5533 else
5534 top->rest--;
5536 n = top->n;
5537 if (memcmp (top->state, state, dfa_state_size) != 0)
5538 n++;
5540 /* Advance to the next choice_entry. */
5541 top++;
5542 /* Initialize it. */
5543 top->rest = dfa_lookahead;
5544 top->index = i;
5545 top->n = n;
5546 memcpy (top->state, state, dfa_state_size);
5547 ready_try [i] = 1;
5549 if (targetm.sched.first_cycle_multipass_issue)
5550 targetm.sched.first_cycle_multipass_issue (&top->target_data,
5551 ready_try, n_ready,
5552 insn,
5553 &((top - 1)
5554 ->target_data));
5556 i = -1;
5560 /* Increase ready-list index. */
5561 i++;
5564 if (targetm.sched.first_cycle_multipass_end)
5565 targetm.sched.first_cycle_multipass_end (best != 0
5566 ? &choice_stack[1].target_data
5567 : NULL);
5569 /* Restore the original state of the DFA. */
5570 memcpy (state, choice_stack->state, dfa_state_size);
5572 return best;
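/* A rough sketch, in pseudo-code, of the search max_issue performs
   above (target hooks and the privileged-insn details omitted):

     top = choice_stack;            // root entry holds the initial state
     for (;;)
       if (prefix is maximal)       // rest == 0, i >= n_ready,
         {                          //   or top->n >= more_issue
           if (top == choice_stack)
             break;                 // explored everything reachable
           maybe record this prefix as the new best;
           backtrack: ready_try[top->index] = 0, pop, restore DFA state;
         }
       else if (ready insn I fits the current DFA state)
         push a new choice_entry, set ready_try[I], restart I from 0;
       else
         advance I;

   The first insn of the best prefix is kept in choice_stack[1].index,
   which is what gets reported back through *INDEX.  */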
5575 /* The following function chooses an insn from READY and modifies
5576 READY. It is used only for first-cycle multipass scheduling.
5578 Return:
5579 -1 if cycle should be advanced,
5580 0 if INSN_PTR is set to point to the desirable insn,
5581 1 if choose_ready () should be restarted without advancing the cycle. */
5582 static int
5583 choose_ready (struct ready_list *ready, bool first_cycle_insn_p,
5584 rtx_insn **insn_ptr)
5586 int lookahead;
5588 if (dbg_cnt (sched_insn) == false)
5590 if (nonscheduled_insns_begin == NULL_RTX)
5591 nonscheduled_insns_begin = current_sched_info->prev_head;
5593 rtx_insn *insn = first_nonscheduled_insn ();
5595 if (QUEUE_INDEX (insn) == QUEUE_READY)
5596 /* INSN is in the ready_list. */
5598 ready_remove_insn (insn);
5599 *insn_ptr = insn;
5600 return 0;
5603 /* INSN is in the queue. Advance cycle to move it to the ready list. */
5604 gcc_assert (QUEUE_INDEX (insn) >= 0);
5605 return -1;
5608 lookahead = 0;
5610 if (targetm.sched.first_cycle_multipass_dfa_lookahead)
5611 lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
5612 if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0))
5613 || DEBUG_INSN_P (ready_element (ready, 0)))
5615 if (targetm.sched.dispatch (NULL_RTX, IS_DISPATCH_ON))
5616 *insn_ptr = ready_remove_first_dispatch (ready);
5617 else
5618 *insn_ptr = ready_remove_first (ready);
5620 return 0;
5622 else
5624 /* Try to choose the best insn. */
5625 int index = 0, i;
5626 rtx_insn *insn;
5628 insn = ready_element (ready, 0);
5629 if (INSN_CODE (insn) < 0)
5631 *insn_ptr = ready_remove_first (ready);
5632 return 0;
5635 /* Filter the search space. */
5636 for (i = 0; i < ready->n_ready; i++)
5638 ready_try[i] = 0;
5640 insn = ready_element (ready, i);
5642 /* If this insn is recognizable we should have already
5643 recognized it earlier.
5644 ??? Not very clear where this is supposed to be done.
5645 See dep_cost_1. */
5646 gcc_checking_assert (INSN_CODE (insn) >= 0
5647 || recog_memoized (insn) < 0);
5648 if (INSN_CODE (insn) < 0)
5650 /* Non-recognized insns at position 0 are handled above. */
5651 gcc_assert (i > 0);
5652 ready_try[i] = 1;
5653 continue;
5656 if (targetm.sched.first_cycle_multipass_dfa_lookahead_guard)
5658 ready_try[i]
5659 = (targetm.sched.first_cycle_multipass_dfa_lookahead_guard
5660 (insn, i));
5662 if (ready_try[i] < 0)
5663 /* Queue instruction for several cycles.
5664 We need to restart choose_ready as we have changed
5665 the ready list. */
5667 change_queue_index (insn, -ready_try[i]);
5668 return 1;
5671 /* Make sure that we didn't end up with 0'th insn filtered out.
5672 Don't be tempted to make life easier for backends and just
5673 requeue 0'th insn if (ready_try[0] == 0) and restart
5674 choose_ready. Backends should be very considerate about
5675 requeueing instructions -- especially the highest priority
5676 one at position 0. */
5677 gcc_assert (ready_try[i] == 0 || i > 0);
5678 if (ready_try[i])
5679 continue;
5682 gcc_assert (ready_try[i] == 0);
5683 /* INSN made it through the scrutiny of filters! */
5686 if (max_issue (ready, 1, curr_state, first_cycle_insn_p, &index) == 0)
5688 *insn_ptr = ready_remove_first (ready);
5689 if (sched_verbose >= 4)
5690 fprintf (sched_dump, ";;\t\tChosen insn (but can't issue) : %s \n",
5691 (*current_sched_info->print_insn) (*insn_ptr, 0));
5692 return 0;
5694 else
5696 if (sched_verbose >= 4)
5697 fprintf (sched_dump, ";;\t\tChosen insn : %s\n",
5698 (*current_sched_info->print_insn)
5699 (ready_element (ready, index), 0));
5701 *insn_ptr = ready_remove (ready, index);
5702 return 0;
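/* How a caller is expected to drive the return protocol above -- this
   mirrors the use in schedule_block further down in this file:

     res = choose_ready (&ready, ls.first_cycle_insn_p, &insn);
     if (res < 0)
       break;                        // finish the current cycle
     if (res > 0)
       goto restart_choose_ready;    // ready list changed, choose again
     gcc_assert (insn != NULL_RTX);  // res == 0: go schedule INSN  */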
5707 /* This function is called when we have successfully scheduled a
5708 block. It uses the schedule stored in the scheduled_insns vector
5709 to rearrange the RTL. PREV_HEAD is used as the anchor to which we
5710 append the scheduled insns; TAIL is the insn after the scheduled
5711 block. TARGET_BB is the argument passed to schedule_block. */
5713 static void
5714 commit_schedule (rtx_insn *prev_head, rtx_insn *tail, basic_block *target_bb)
5716 unsigned int i;
5717 rtx_insn *insn;
5719 last_scheduled_insn = prev_head;
5720 for (i = 0;
5721 scheduled_insns.iterate (i, &insn);
5722 i++)
5724 if (control_flow_insn_p (last_scheduled_insn)
5725 || current_sched_info->advance_target_bb (*target_bb, insn))
5727 *target_bb = current_sched_info->advance_target_bb (*target_bb, 0);
5729 if (sched_verbose)
5731 rtx_insn *x;
5733 x = next_real_insn (last_scheduled_insn);
5734 gcc_assert (x);
5735 dump_new_block_header (1, *target_bb, x, tail);
5738 last_scheduled_insn = bb_note (*target_bb);
5741 if (current_sched_info->begin_move_insn)
5742 (*current_sched_info->begin_move_insn) (insn, last_scheduled_insn);
5743 move_insn (insn, last_scheduled_insn,
5744 current_sched_info->next_tail);
5745 if (!DEBUG_INSN_P (insn))
5746 reemit_notes (insn);
5747 last_scheduled_insn = insn;
5750 scheduled_insns.truncate (0);
5753 /* Examine all insns on the ready list and queue those which can't be
5754 issued in this cycle. TEMP_STATE is temporary scheduler state we
5755 can use as scratch space. If FIRST_CYCLE_INSN_P is true, no insns
5756 have been issued for the current cycle, which means it is valid to
5757 issue an asm statement.
5759 If SHADOWS_ONLY_P is true, we eliminate all real insns and only
5760 leave those for which SHADOW_P is true. If MODULO_EPILOGUE is true,
5761 we only leave insns which have an INSN_EXACT_TICK. */
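/* Note the cost convention used by this function: COST == 0 leaves an
   insn in the ready list, while any COST >= 1 removes it and requeues
   it COST cycles into the future via queue_insn, with REASON recorded
   for the scheduler dumps.  */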
5763 static void
5764 prune_ready_list (state_t temp_state, bool first_cycle_insn_p,
5765 bool shadows_only_p, bool modulo_epilogue_p)
5767 int i, pass;
5768 bool sched_group_found = false;
5769 int min_cost_group = 1;
5771 for (i = 0; i < ready.n_ready; i++)
5773 rtx_insn *insn = ready_element (&ready, i);
5774 if (SCHED_GROUP_P (insn))
5776 sched_group_found = true;
5777 break;
5781 /* Make two passes if there's a SCHED_GROUP_P insn; make sure to handle
5782 such an insn first and note its cost, then schedule all other insns
5783 for one cycle later. */
5784 for (pass = sched_group_found ? 0 : 1; pass < 2; )
5786 int n = ready.n_ready;
5787 for (i = 0; i < n; i++)
5789 rtx_insn *insn = ready_element (&ready, i);
5790 int cost = 0;
5791 const char *reason = "resource conflict";
5793 if (DEBUG_INSN_P (insn))
5794 continue;
5796 if (sched_group_found && !SCHED_GROUP_P (insn))
5798 if (pass == 0)
5799 continue;
5800 cost = min_cost_group;
5801 reason = "not in sched group";
5803 else if (modulo_epilogue_p
5804 && INSN_EXACT_TICK (insn) == INVALID_TICK)
5806 cost = max_insn_queue_index;
5807 reason = "not an epilogue insn";
5809 else if (shadows_only_p && !SHADOW_P (insn))
5811 cost = 1;
5812 reason = "not a shadow";
5814 else if (recog_memoized (insn) < 0)
5816 if (!first_cycle_insn_p
5817 && (GET_CODE (PATTERN (insn)) == ASM_INPUT
5818 || asm_noperands (PATTERN (insn)) >= 0))
5819 cost = 1;
5820 reason = "asm";
5822 else if (sched_pressure != SCHED_PRESSURE_NONE)
5824 if (sched_pressure == SCHED_PRESSURE_MODEL
5825 && INSN_TICK (insn) <= clock_var)
5827 memcpy (temp_state, curr_state, dfa_state_size);
5828 if (state_transition (temp_state, insn) >= 0)
5829 INSN_TICK (insn) = clock_var + 1;
5831 cost = 0;
5833 else
5835 int delay_cost = 0;
5837 if (delay_htab)
5839 struct delay_pair *delay_entry;
5840 delay_entry
5841 = delay_htab->find_with_hash (insn,
5842 htab_hash_pointer (insn));
5843 while (delay_entry && delay_cost == 0)
5845 delay_cost = estimate_shadow_tick (delay_entry);
5846 if (delay_cost > max_insn_queue_index)
5847 delay_cost = max_insn_queue_index;
5848 delay_entry = delay_entry->next_same_i1;
5852 memcpy (temp_state, curr_state, dfa_state_size);
5853 cost = state_transition (temp_state, insn);
5854 if (cost < 0)
5855 cost = 0;
5856 else if (cost == 0)
5857 cost = 1;
5858 if (cost < delay_cost)
5860 cost = delay_cost;
5861 reason = "shadow tick";
5864 if (cost >= 1)
5866 if (SCHED_GROUP_P (insn) && cost > min_cost_group)
5867 min_cost_group = cost;
5868 ready_remove (&ready, i);
5869 queue_insn (insn, cost, reason);
5870 if (i + 1 < n)
5871 break;
5874 if (i == n)
5875 pass++;
5879 /* Called when we detect that the schedule is impossible. We examine the
5880 backtrack queue to find the earliest insn that caused this condition. */
5882 static struct haifa_saved_data *
5883 verify_shadows (void)
5885 struct haifa_saved_data *save, *earliest_fail = NULL;
5886 for (save = backtrack_queue; save; save = save->next)
5888 int t;
5889 struct delay_pair *pair = save->delay_pair;
5890 rtx_insn *i1 = pair->i1;
5892 for (; pair; pair = pair->next_same_i1)
5894 rtx_insn *i2 = pair->i2;
5896 if (QUEUE_INDEX (i2) == QUEUE_SCHEDULED)
5897 continue;
5899 t = INSN_TICK (i1) + pair_delay (pair);
5900 if (t < clock_var)
5902 if (sched_verbose >= 2)
5903 fprintf (sched_dump,
5904 ";;\t\tfailed delay requirements for %d/%d (%d->%d)"
5905 ", not ready\n",
5906 INSN_UID (pair->i1), INSN_UID (pair->i2),
5907 INSN_TICK (pair->i1), INSN_EXACT_TICK (pair->i2));
5908 earliest_fail = save;
5909 break;
5911 if (QUEUE_INDEX (i2) >= 0)
5913 int queued_for = INSN_TICK (i2);
5915 if (t < queued_for)
5917 if (sched_verbose >= 2)
5918 fprintf (sched_dump,
5919 ";;\t\tfailed delay requirements for %d/%d"
5920 " (%d->%d), queued too late\n",
5921 INSN_UID (pair->i1), INSN_UID (pair->i2),
5922 INSN_TICK (pair->i1), INSN_EXACT_TICK (pair->i2));
5923 earliest_fail = save;
5924 break;
5930 return earliest_fail;
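/* In other words, a save point fails if one of its delay pairs
   (I1, I2) can no longer be honored: the tick T = INSN_TICK (i1)
   + pair_delay by which I2 must issue is already in the past, or I2
   is queued for a cycle later than T.  The queue is walked from the
   most recent save point to the oldest, so the final assignment
   leaves the earliest failing save point in EARLIEST_FAIL.  */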
5933 /* Print instructions together with useful scheduling information between
5934 HEAD and TAIL (inclusive). */
5935 static void
5936 dump_insn_stream (rtx_insn *head, rtx_insn *tail)
5938 fprintf (sched_dump, ";;\t| insn | prio |\n");
5940 rtx_insn *next_tail = NEXT_INSN (tail);
5941 for (rtx_insn *insn = head; insn != next_tail; insn = NEXT_INSN (insn))
5943 int priority = NOTE_P (insn) ? 0 : INSN_PRIORITY (insn);
5944 const char *pattern = (NOTE_P (insn)
5945 ? "note"
5946 : str_pattern_slim (PATTERN (insn)));
5948 fprintf (sched_dump, ";;\t| %4d | %4d | %-30s ",
5949 INSN_UID (insn), priority, pattern);
5951 if (sched_verbose >= 4)
5953 if (NOTE_P (insn) || recog_memoized (insn) < 0)
5954 fprintf (sched_dump, "nothing");
5955 else
5956 print_reservation (sched_dump, insn);
5958 fprintf (sched_dump, "\n");
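/* At verbosity 2 and 3 the table printed above looks something like
   this (UIDs, priorities and patterns invented for illustration):

   ;;   | insn | prio |
   ;;   |   23 |    4 | (set (reg:SI 131) (plus:SI ...))
   ;;   |   24 |    0 | note

   At sched_verbose >= 4 each row additionally ends with the insn's
   DFA reservation, or "nothing" for notes and unrecognized insns.  */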
5962 /* Use forward list scheduling to rearrange insns of block pointed to by
5963 TARGET_BB, possibly bringing insns from subsequent blocks in the same
5964 region. */
5966 bool
5967 schedule_block (basic_block *target_bb, state_t init_state)
5969 int i;
5970 bool success = modulo_ii == 0;
5971 struct sched_block_state ls;
5972 state_t temp_state = NULL; /* It is used for multipass scheduling. */
5973 int sort_p, advance, start_clock_var;
5975 /* Head/tail info for this block. */
5976 rtx_insn *prev_head = current_sched_info->prev_head;
5977 rtx next_tail = current_sched_info->next_tail;
5978 rtx_insn *head = NEXT_INSN (prev_head);
5979 rtx_insn *tail = PREV_INSN (next_tail);
5981 if ((current_sched_info->flags & DONT_BREAK_DEPENDENCIES) == 0
5982 && sched_pressure != SCHED_PRESSURE_MODEL)
5983 find_modifiable_mems (head, tail);
5985 /* We used to have code to avoid getting parameters moved from hard
5986 argument registers into pseudos.
5988 However, it was removed when it proved to be of marginal benefit
5989 and caused problems because schedule_block and compute_forward_dependences
5990 had different notions of what the "head" insn was. */
5992 gcc_assert (head != tail || INSN_P (head));
5994 haifa_recovery_bb_recently_added_p = false;
5996 backtrack_queue = NULL;
5998 /* Debug info. */
5999 if (sched_verbose)
6001 dump_new_block_header (0, *target_bb, head, tail);
6003 if (sched_verbose >= 2)
6005 dump_insn_stream (head, tail);
6006 memset (&rank_for_schedule_stats, 0,
6007 sizeof (rank_for_schedule_stats));
6011 if (init_state == NULL)
6012 state_reset (curr_state);
6013 else
6014 memcpy (curr_state, init_state, dfa_state_size);
6016 /* Clear the ready list. */
6017 ready.first = ready.veclen - 1;
6018 ready.n_ready = 0;
6019 ready.n_debug = 0;
6021 /* It is used for first cycle multipass scheduling. */
6022 temp_state = alloca (dfa_state_size);
6024 if (targetm.sched.init)
6025 targetm.sched.init (sched_dump, sched_verbose, ready.veclen);
6027 /* We start inserting insns after PREV_HEAD. */
6028 last_scheduled_insn = prev_head;
6029 last_nondebug_scheduled_insn = NULL_RTX;
6030 nonscheduled_insns_begin = NULL;
6032 gcc_assert ((NOTE_P (last_scheduled_insn)
6033 || DEBUG_INSN_P (last_scheduled_insn))
6034 && BLOCK_FOR_INSN (last_scheduled_insn) == *target_bb);
6036 /* Initialize INSN_QUEUE. Q_SIZE is the total number of insns in the
6037 queue. */
6038 q_ptr = 0;
6039 q_size = 0;
6041 insn_queue = XALLOCAVEC (rtx, max_insn_queue_index + 1);
6042 memset (insn_queue, 0, (max_insn_queue_index + 1) * sizeof (rtx));
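/* INSN_QUEUE acts as a circular buffer indexed by remaining wait
   cycles: an insn queued with delay N (N <= max_insn_queue_index)
   goes on the list in the slot N positions past Q_PTR, modulo the
   queue size, and the slot Q_PTR reaches is drained into the ready
   list as the clock advances (see queue_insn and queue_to_ready).  */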
6044 /* Start just before the beginning of time. */
6045 clock_var = -1;
6047 /* We need the queue and ready lists and clock_var to be initialized
6048 in try_ready () (which is called through init_ready_list ()). */
6049 (*current_sched_info->init_ready_list) ();
6051 if (sched_pressure == SCHED_PRESSURE_MODEL)
6052 model_start_schedule ();
6054 /* The algorithm is O(n^2) in the number of ready insns at any given
6055 time in the worst case. Before reload we are more likely to have
6056 big lists so truncate them to a reasonable size. */
6057 if (!reload_completed
6058 && ready.n_ready - ready.n_debug > MAX_SCHED_READY_INSNS)
6060 ready_sort (&ready);
6062 /* Find first free-standing insn past MAX_SCHED_READY_INSNS.
6063 If there are debug insns, we know they're first. */
6064 for (i = MAX_SCHED_READY_INSNS + ready.n_debug; i < ready.n_ready; i++)
6065 if (!SCHED_GROUP_P (ready_element (&ready, i)))
6066 break;
6068 if (sched_verbose >= 2)
6070 fprintf (sched_dump,
6071 ";;\t\tReady list on entry: %d insns\n", ready.n_ready);
6072 fprintf (sched_dump,
6073 ";;\t\t before reload => truncated to %d insns\n", i);
6076 /* Delay all insns past it for 1 cycle. If debug counter is
6077 activated make an exception for the insn right after
6078 nonscheduled_insns_begin. */
6080 rtx_insn *skip_insn;
6082 if (dbg_cnt (sched_insn) == false)
6083 skip_insn = first_nonscheduled_insn ();
6084 else
6085 skip_insn = NULL;
6087 while (i < ready.n_ready)
6089 rtx_insn *insn;
6091 insn = ready_remove (&ready, i);
6093 if (insn != skip_insn)
6094 queue_insn (insn, 1, "list truncated");
6096 if (skip_insn)
6097 ready_add (&ready, skip_insn, true);
6101 /* Now we can restore basic block notes and maintain precise cfg. */
6102 restore_bb_notes (*target_bb);
6104 last_clock_var = -1;
6106 advance = 0;
6108 gcc_assert (scheduled_insns.length () == 0);
6109 sort_p = TRUE;
6110 must_backtrack = false;
6111 modulo_insns_scheduled = 0;
6113 ls.modulo_epilogue = false;
6114 ls.first_cycle_insn_p = true;
6116 /* Loop until all the insns in BB are scheduled. */
6117 while ((*current_sched_info->schedule_more_p) ())
6119 perform_replacements_new_cycle ();
6122 start_clock_var = clock_var;
6124 clock_var++;
6126 advance_one_cycle ();
6128 /* Add to the ready list all pending insns that can be issued now.
6129 If there are no ready insns, increment clock until one
6130 is ready and add all pending insns at that point to the ready
6131 list. */
6132 queue_to_ready (&ready);
6134 gcc_assert (ready.n_ready);
6136 if (sched_verbose >= 2)
6138 fprintf (sched_dump, ";;\t\tReady list after queue_to_ready:");
6139 debug_ready_list (&ready);
6141 advance -= clock_var - start_clock_var;
6143 while (advance > 0);
6145 if (ls.modulo_epilogue)
6147 int stage = clock_var / modulo_ii;
6148 if (stage > modulo_last_stage * 2 + 2)
6150 if (sched_verbose >= 2)
6151 fprintf (sched_dump,
6152 ";;\t\tmodulo scheduled succeeded at II %d\n",
6153 modulo_ii);
6154 success = true;
6155 goto end_schedule;
6158 else if (modulo_ii > 0)
6160 int stage = clock_var / modulo_ii;
6161 if (stage > modulo_max_stages)
6163 if (sched_verbose >= 2)
6164 fprintf (sched_dump,
6165 ";;\t\tfailing schedule due to excessive stages\n");
6166 goto end_schedule;
6168 if (modulo_n_insns == modulo_insns_scheduled
6169 && stage > modulo_last_stage)
6171 if (sched_verbose >= 2)
6172 fprintf (sched_dump,
6173 ";;\t\tfound kernel after %d stages, II %d\n",
6174 stage, modulo_ii);
6175 ls.modulo_epilogue = true;
6179 prune_ready_list (temp_state, true, false, ls.modulo_epilogue);
6180 if (ready.n_ready == 0)
6181 continue;
6182 if (must_backtrack)
6183 goto do_backtrack;
6185 ls.shadows_only_p = false;
6186 cycle_issued_insns = 0;
6187 ls.can_issue_more = issue_rate;
6188 for (;;)
6190 rtx_insn *insn;
6191 int cost;
6192 bool asm_p;
6194 if (sort_p && ready.n_ready > 0)
6196 /* Sort the ready list based on priority. This must be
6197 done every iteration through the loop, as schedule_insn
6198 may have readied additional insns that will not be
6199 sorted correctly. */
6200 ready_sort (&ready);
6202 if (sched_verbose >= 2)
6204 fprintf (sched_dump,
6205 ";;\t\tReady list after ready_sort: ");
6206 debug_ready_list (&ready);
6210 /* We don't want md sched reorder to even see debug insns, so put
6211 them out right away. */
6212 if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0))
6213 && (*current_sched_info->schedule_more_p) ())
6215 while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
6217 rtx_insn *insn = ready_remove_first (&ready);
6218 gcc_assert (DEBUG_INSN_P (insn));
6219 (*current_sched_info->begin_schedule_ready) (insn);
6220 scheduled_insns.safe_push (insn);
6221 last_scheduled_insn = insn;
6222 advance = schedule_insn (insn);
6223 gcc_assert (advance == 0);
6224 if (ready.n_ready > 0)
6225 ready_sort (&ready);
6229 if (ls.first_cycle_insn_p && !ready.n_ready)
6230 break;
6232 resume_after_backtrack:
6233 /* Allow the target to reorder the list, typically for
6234 better instruction bundling. */
6235 if (sort_p
6236 && (ready.n_ready == 0
6237 || !SCHED_GROUP_P (ready_element (&ready, 0))))
6239 if (ls.first_cycle_insn_p && targetm.sched.reorder)
6240 ls.can_issue_more
6241 = targetm.sched.reorder (sched_dump, sched_verbose,
6242 ready_lastpos (&ready),
6243 &ready.n_ready, clock_var);
6244 else if (!ls.first_cycle_insn_p && targetm.sched.reorder2)
6245 ls.can_issue_more
6246 = targetm.sched.reorder2 (sched_dump, sched_verbose,
6247 ready.n_ready
6248 ? ready_lastpos (&ready) : NULL,
6249 &ready.n_ready, clock_var);
6252 restart_choose_ready:
6253 if (sched_verbose >= 2)
6255 fprintf (sched_dump, ";;\tReady list (t = %3d): ",
6256 clock_var);
6257 debug_ready_list (&ready);
6258 if (sched_pressure == SCHED_PRESSURE_WEIGHTED)
6259 print_curr_reg_pressure ();
6262 if (ready.n_ready == 0
6263 && ls.can_issue_more
6264 && reload_completed)
6266 /* Allow scheduling insns directly from the queue in case
6267 there's nothing better to do (ready list is empty) but
6268 there are still vacant dispatch slots in the current cycle. */
6269 if (sched_verbose >= 6)
6270 fprintf (sched_dump,";;\t\tSecond chance\n");
6271 memcpy (temp_state, curr_state, dfa_state_size);
6272 if (early_queue_to_ready (temp_state, &ready))
6273 ready_sort (&ready);
6276 if (ready.n_ready == 0
6277 || !ls.can_issue_more
6278 || state_dead_lock_p (curr_state)
6279 || !(*current_sched_info->schedule_more_p) ())
6280 break;
6282 /* Select and remove the insn from the ready list. */
6283 if (sort_p)
6285 int res;
6287 insn = NULL;
6288 res = choose_ready (&ready, ls.first_cycle_insn_p, &insn);
6290 if (res < 0)
6291 /* Finish cycle. */
6292 break;
6293 if (res > 0)
6294 goto restart_choose_ready;
6296 gcc_assert (insn != NULL_RTX);
6298 else
6299 insn = ready_remove_first (&ready);
6301 if (sched_pressure != SCHED_PRESSURE_NONE
6302 && INSN_TICK (insn) > clock_var)
6304 ready_add (&ready, insn, true);
6305 advance = 1;
6306 break;
6309 if (targetm.sched.dfa_new_cycle
6310 && targetm.sched.dfa_new_cycle (sched_dump, sched_verbose,
6311 insn, last_clock_var,
6312 clock_var, &sort_p))
6313 /* SORT_P is used by the target to override sorting
6314 of the ready list. This is needed when the target
6315 has modified its internal structures expecting that
6316 the insn will be issued next. As we need the insn
6317 to have the highest priority (so it will be returned by
6318 the ready_remove_first call above), we invoke
6319 ready_add (&ready, insn, true).
6320 But, still, there is one issue: INSN can be later
6321 discarded by scheduler's front end through
6322 current_sched_info->can_schedule_ready_p, hence, won't
6323 be issued next. */
6325 ready_add (&ready, insn, true);
6326 break;
6329 sort_p = TRUE;
6331 if (current_sched_info->can_schedule_ready_p
6332 && ! (*current_sched_info->can_schedule_ready_p) (insn))
6333 /* We normally get here only if we don't want to move
6334 insn from the split block. */
6336 TODO_SPEC (insn) = DEP_POSTPONED;
6337 goto restart_choose_ready;
6340 if (delay_htab)
6342 /* If this insn is the first part of a delay-slot pair, record a
6343 backtrack point. */
6344 struct delay_pair *delay_entry;
6345 delay_entry
6346 = delay_htab->find_with_hash (insn, htab_hash_pointer (insn));
6347 if (delay_entry)
6349 save_backtrack_point (delay_entry, ls);
6350 if (sched_verbose >= 2)
6351 fprintf (sched_dump, ";;\t\tsaving backtrack point\n");
6355 /* DECISION is made. */
6357 if (modulo_ii > 0 && INSN_UID (insn) < modulo_iter0_max_uid)
6359 modulo_insns_scheduled++;
6360 modulo_last_stage = clock_var / modulo_ii;
6362 if (TODO_SPEC (insn) & SPECULATIVE)
6363 generate_recovery_code (insn);
6365 if (targetm.sched.dispatch (NULL_RTX, IS_DISPATCH_ON))
6366 targetm.sched.dispatch_do (insn, ADD_TO_DISPATCH_WINDOW);
6368 /* Update counters, etc in the scheduler's front end. */
6369 (*current_sched_info->begin_schedule_ready) (insn);
6370 scheduled_insns.safe_push (insn);
6371 gcc_assert (NONDEBUG_INSN_P (insn));
6372 last_nondebug_scheduled_insn = last_scheduled_insn = insn;
6374 if (recog_memoized (insn) >= 0)
6376 memcpy (temp_state, curr_state, dfa_state_size);
6377 cost = state_transition (curr_state, insn);
6378 if (sched_pressure != SCHED_PRESSURE_WEIGHTED)
6379 gcc_assert (cost < 0);
6380 if (memcmp (temp_state, curr_state, dfa_state_size) != 0)
6381 cycle_issued_insns++;
6382 asm_p = false;
6384 else
6385 asm_p = (GET_CODE (PATTERN (insn)) == ASM_INPUT
6386 || asm_noperands (PATTERN (insn)) >= 0);
6388 if (targetm.sched.variable_issue)
6389 ls.can_issue_more =
6390 targetm.sched.variable_issue (sched_dump, sched_verbose,
6391 insn, ls.can_issue_more);
6392 /* A naked CLOBBER or USE generates no instruction, so do
6393 not count them against the issue rate. */
6394 else if (GET_CODE (PATTERN (insn)) != USE
6395 && GET_CODE (PATTERN (insn)) != CLOBBER)
6396 ls.can_issue_more--;
6397 advance = schedule_insn (insn);
6399 if (SHADOW_P (insn))
6400 ls.shadows_only_p = true;
6402 /* After issuing an asm insn we should start a new cycle. */
6403 if (advance == 0 && asm_p)
6404 advance = 1;
6406 if (must_backtrack)
6407 break;
6409 if (advance != 0)
6410 break;
6412 ls.first_cycle_insn_p = false;
6413 if (ready.n_ready > 0)
6414 prune_ready_list (temp_state, false, ls.shadows_only_p,
6415 ls.modulo_epilogue);
6418 do_backtrack:
6419 if (!must_backtrack)
6420 for (i = 0; i < ready.n_ready; i++)
6422 rtx_insn *insn = ready_element (&ready, i);
6423 if (INSN_EXACT_TICK (insn) == clock_var)
6425 must_backtrack = true;
6426 clock_var++;
6427 break;
6430 if (must_backtrack && modulo_ii > 0)
6432 if (modulo_backtracks_left == 0)
6433 goto end_schedule;
6434 modulo_backtracks_left--;
6436 while (must_backtrack)
6438 struct haifa_saved_data *failed;
6439 rtx_insn *failed_insn;
6441 must_backtrack = false;
6442 failed = verify_shadows ();
6443 gcc_assert (failed);
6445 failed_insn = failed->delay_pair->i1;
6446 /* Clear these queues. */
6447 perform_replacements_new_cycle ();
6448 toggle_cancelled_flags (false);
6449 unschedule_insns_until (failed_insn);
6450 while (failed != backtrack_queue)
6451 free_topmost_backtrack_point (true);
6452 restore_last_backtrack_point (&ls);
6453 if (sched_verbose >= 2)
6454 fprintf (sched_dump, ";;\t\trewind to cycle %d\n", clock_var);
6455 /* Delay by at least a cycle. This could cause additional
6456 backtracking. */
6457 queue_insn (failed_insn, 1, "backtracked");
6458 advance = 0;
6459 if (must_backtrack)
6460 continue;
6461 if (ready.n_ready > 0)
6462 goto resume_after_backtrack;
6463 else
6465 if (clock_var == 0 && ls.first_cycle_insn_p)
6466 goto end_schedule;
6467 advance = 1;
6468 break;
6471 ls.first_cycle_insn_p = true;
6473 if (ls.modulo_epilogue)
6474 success = true;
6475 end_schedule:
6476 if (!ls.first_cycle_insn_p)
6477 advance_one_cycle ();
6478 perform_replacements_new_cycle ();
6479 if (modulo_ii > 0)
6481 /* Once again, debug insn suckiness: they can be on the ready list
6482 even if they have unresolved dependencies. To make our view
6483 of the world consistent, remove such "ready" insns. */
6484 restart_debug_insn_loop:
6485 for (i = ready.n_ready - 1; i >= 0; i--)
6487 rtx_insn *x;
6489 x = ready_element (&ready, i);
6490 if (DEPS_LIST_FIRST (INSN_HARD_BACK_DEPS (x)) != NULL
6491 || DEPS_LIST_FIRST (INSN_SPEC_BACK_DEPS (x)) != NULL)
6493 ready_remove (&ready, i);
6494 goto restart_debug_insn_loop;
6497 for (i = ready.n_ready - 1; i >= 0; i--)
6499 rtx_insn *x;
6501 x = ready_element (&ready, i);
6502 resolve_dependencies (x);
6504 for (i = 0; i <= max_insn_queue_index; i++)
6506 rtx link;
6507 while ((link = insn_queue[i]) != NULL)
6509 rtx_insn *x = as_a <rtx_insn *> (XEXP (link, 0));
6510 insn_queue[i] = XEXP (link, 1);
6511 QUEUE_INDEX (x) = QUEUE_NOWHERE;
6512 free_INSN_LIST_node (link);
6513 resolve_dependencies (x);
6518 if (!success)
6519 undo_all_replacements ();
6521 /* Debug info. */
6522 if (sched_verbose)
6524 fprintf (sched_dump, ";;\tReady list (final): ");
6525 debug_ready_list (&ready);
6528 if (modulo_ii == 0 && current_sched_info->queue_must_finish_empty)
6529 /* Sanity check -- queue must be empty now. Meaningless if region has
6530 multiple bbs. */
6531 gcc_assert (!q_size && !ready.n_ready && !ready.n_debug);
6532 else if (modulo_ii == 0)
6534 /* We must maintain QUEUE_INDEX between blocks in region. */
6535 for (i = ready.n_ready - 1; i >= 0; i--)
6537 rtx_insn *x;
6539 x = ready_element (&ready, i);
6540 QUEUE_INDEX (x) = QUEUE_NOWHERE;
6541 TODO_SPEC (x) = HARD_DEP;
6544 if (q_size)
6545 for (i = 0; i <= max_insn_queue_index; i++)
6547 rtx link;
6548 for (link = insn_queue[i]; link; link = XEXP (link, 1))
6550 rtx_insn *x;
6552 x = as_a <rtx_insn *> (XEXP (link, 0));
6553 QUEUE_INDEX (x) = QUEUE_NOWHERE;
6554 TODO_SPEC (x) = HARD_DEP;
6556 free_INSN_LIST_list (&insn_queue[i]);
6560 if (sched_pressure == SCHED_PRESSURE_MODEL)
6561 model_end_schedule ();
6563 if (success)
6565 commit_schedule (prev_head, tail, target_bb);
6566 if (sched_verbose)
6567 fprintf (sched_dump, ";; total time = %d\n", clock_var);
6569 else
6570 last_scheduled_insn = tail;
6572 scheduled_insns.truncate (0);
6574 if (!current_sched_info->queue_must_finish_empty
6575 || haifa_recovery_bb_recently_added_p)
6577 /* INSN_TICK (the minimum clock tick at which an insn becomes
6578 ready) may not be correct for insns in the subsequent
6579 blocks of the region. We should use a correct value of
6580 `clock_var' or modify INSN_TICK. It is better to keep
6581 clock_var value equal to 0 at the start of a basic block.
6582 Therefore we modify INSN_TICK here. */
6583 fix_inter_tick (NEXT_INSN (prev_head), last_scheduled_insn);
6586 if (targetm.sched.finish)
6588 targetm.sched.finish (sched_dump, sched_verbose);
6589 /* Target might have added some instructions to the scheduled block
6590 in its md_finish () hook. These new insns don't have any data
6591 initialized, and to identify them we extend h_i_d so that they'll
6592 get zero luids. */
6593 sched_extend_luids ();
6596 /* Update head/tail boundaries. */
6597 head = NEXT_INSN (prev_head);
6598 tail = last_scheduled_insn;
6600 if (sched_verbose)
6602 fprintf (sched_dump, ";; new head = %d\n;; new tail = %d\n",
6603 INSN_UID (head), INSN_UID (tail));
6605 if (sched_verbose >= 2)
6607 dump_insn_stream (head, tail);
6608 print_rank_for_schedule_stats (";; TOTAL ", &rank_for_schedule_stats);
6611 fprintf (sched_dump, "\n");
6614 head = restore_other_notes (head, NULL);
6616 current_sched_info->head = head;
6617 current_sched_info->tail = tail;
6619 free_backtrack_queue ();
6621 return success;
6624 /* Set_priorities: compute priority of each insn in the block. */
6626 int
6627 set_priorities (rtx_insn *head, rtx_insn *tail)
6629 rtx_insn *insn;
6630 int n_insn;
6631 int sched_max_insns_priority =
6632 current_sched_info->sched_max_insns_priority;
6633 rtx_insn *prev_head;
6635 if (head == tail && ! INSN_P (head))
6636 gcc_unreachable ();
6638 n_insn = 0;
6640 prev_head = PREV_INSN (head);
6641 for (insn = tail; insn != prev_head; insn = PREV_INSN (insn))
6643 if (!INSN_P (insn))
6644 continue;
6646 n_insn++;
6647 (void) priority (insn);
6649 gcc_assert (INSN_PRIORITY_KNOWN (insn));
6651 sched_max_insns_priority = MAX (sched_max_insns_priority,
6652 INSN_PRIORITY (insn));
6655 current_sched_info->sched_max_insns_priority = sched_max_insns_priority;
6657 return n_insn;
6660 /* Set dump and sched_verbose for the desired debugging output. If no
6661 dump-file was specified, but -fsched-verbose=N (any N), print to stderr.
6662 For -fsched-verbose=N, N>=10, print everything to stderr. */
6663 void
6664 setup_sched_dump (void)
6666 sched_verbose = sched_verbose_param;
6667 if (sched_verbose_param == 0 && dump_file)
6668 sched_verbose = 1;
6669 sched_dump = ((sched_verbose_param >= 10 || !dump_file)
6670 ? stderr : dump_file);
6673 /* Allocate data for register pressure sensitive scheduling. */
6674 static void
6675 alloc_global_sched_pressure_data (void)
6677 if (sched_pressure != SCHED_PRESSURE_NONE)
6679 int i, max_regno = max_reg_num ();
6681 if (sched_dump != NULL)
6682 /* We need info about pseudos for rtl dumps about pseudo
6683 classes and costs. */
6684 regstat_init_n_sets_and_refs ();
6685 ira_set_pseudo_classes (true, sched_verbose ? sched_dump : NULL);
6686 sched_regno_pressure_class
6687 = (enum reg_class *) xmalloc (max_regno * sizeof (enum reg_class));
6688 for (i = 0; i < max_regno; i++)
6689 sched_regno_pressure_class[i]
6690 = (i < FIRST_PSEUDO_REGISTER
6691 ? ira_pressure_class_translate[REGNO_REG_CLASS (i)]
6692 : ira_pressure_class_translate[reg_allocno_class (i)]);
6693 curr_reg_live = BITMAP_ALLOC (NULL);
6694 if (sched_pressure == SCHED_PRESSURE_WEIGHTED)
6696 saved_reg_live = BITMAP_ALLOC (NULL);
6697 region_ref_regs = BITMAP_ALLOC (NULL);
6702 /* Free data for register pressure sensitive scheduling. Also called
6703 from schedule_region when stopping sched-pressure early. */
6704 void
6705 free_global_sched_pressure_data (void)
6707 if (sched_pressure != SCHED_PRESSURE_NONE)
6709 if (regstat_n_sets_and_refs != NULL)
6710 regstat_free_n_sets_and_refs ();
6711 if (sched_pressure == SCHED_PRESSURE_WEIGHTED)
6713 BITMAP_FREE (region_ref_regs);
6714 BITMAP_FREE (saved_reg_live);
6716 BITMAP_FREE (curr_reg_live);
6717 free (sched_regno_pressure_class);
6721 /* Initialize some global state for the scheduler. This function works
6722 with the common data shared between all the schedulers. It is called
6723 from the scheduler specific initialization routine. */
6725 void
6726 sched_init (void)
6728 /* Disable speculative loads if cc0 is defined; they are not safe in its presence. */
6729 #ifdef HAVE_cc0
6730 flag_schedule_speculative_load = 0;
6731 #endif
6733 if (targetm.sched.dispatch (NULL_RTX, IS_DISPATCH_ON))
6734 targetm.sched.dispatch_do (NULL_RTX, DISPATCH_INIT);
6736 if (live_range_shrinkage_p)
6737 sched_pressure = SCHED_PRESSURE_WEIGHTED;
6738 else if (flag_sched_pressure
6739 && !reload_completed
6740 && common_sched_info->sched_pass_id == SCHED_RGN_PASS)
6741 sched_pressure = ((enum sched_pressure_algorithm)
6742 PARAM_VALUE (PARAM_SCHED_PRESSURE_ALGORITHM));
6743 else
6744 sched_pressure = SCHED_PRESSURE_NONE;
6746 if (sched_pressure != SCHED_PRESSURE_NONE)
6747 ira_setup_eliminable_regset ();
6749 /* Initialize SPEC_INFO. */
6750 if (targetm.sched.set_sched_flags)
6752 spec_info = &spec_info_var;
6753 targetm.sched.set_sched_flags (spec_info);
6755 if (spec_info->mask != 0)
6757 spec_info->data_weakness_cutoff =
6758 (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF) * MAX_DEP_WEAK) / 100;
6759 spec_info->control_weakness_cutoff =
6760 (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF)
6761 * REG_BR_PROB_BASE) / 100;
6763 else
6764 /* So we won't read anything accidentally. */
6765 spec_info = NULL;
6768 else
6769 /* So we won't read anything accidentally. */
6770 spec_info = 0;
6772 /* Initialize issue_rate. */
6773 if (targetm.sched.issue_rate)
6774 issue_rate = targetm.sched.issue_rate ();
6775 else
6776 issue_rate = 1;
6778 if (cached_issue_rate != issue_rate)
6780 cached_issue_rate = issue_rate;
6781 /* To invalidate max_lookahead_tries: */
6782 cached_first_cycle_multipass_dfa_lookahead = 0;
6785 if (targetm.sched.first_cycle_multipass_dfa_lookahead)
6786 dfa_lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
6787 else
6788 dfa_lookahead = 0;
6790 if (targetm.sched.init_dfa_pre_cycle_insn)
6791 targetm.sched.init_dfa_pre_cycle_insn ();
6793 if (targetm.sched.init_dfa_post_cycle_insn)
6794 targetm.sched.init_dfa_post_cycle_insn ();
6796 dfa_start ();
6797 dfa_state_size = state_size ();
6799 init_alias_analysis ();
6801 if (!sched_no_dce)
6802 df_set_flags (DF_LR_RUN_DCE);
6803 df_note_add_problem ();
6805 /* More problems needed for interloop dep calculation in SMS. */
6806 if (common_sched_info->sched_pass_id == SCHED_SMS_PASS)
6808 df_rd_add_problem ();
6809 df_chain_add_problem (DF_DU_CHAIN + DF_UD_CHAIN);
6812 df_analyze ();
6814 /* Do not run DCE after reload, as this can kill nops inserted
6815 by bundling. */
6816 if (reload_completed)
6817 df_clear_flags (DF_LR_RUN_DCE);
6819 regstat_compute_calls_crossed ();
6821 if (targetm.sched.init_global)
6822 targetm.sched.init_global (sched_dump, sched_verbose, get_max_uid () + 1);
6824 alloc_global_sched_pressure_data ();
6826 curr_state = xmalloc (dfa_state_size);
6829 static void haifa_init_only_bb (basic_block, basic_block);
6831 /* Initialize data structures specific to the Haifa scheduler. */
6832 void
6833 haifa_sched_init (void)
6835 setup_sched_dump ();
6836 sched_init ();
6838 scheduled_insns.create (0);
6840 if (spec_info != NULL)
6842 sched_deps_info->use_deps_list = 1;
6843 sched_deps_info->generate_spec_deps = 1;
6846 /* Initialize luids, dependency caches, target and h_i_d for the
6847 whole function. */
6849 bb_vec_t bbs;
6850 bbs.create (n_basic_blocks_for_fn (cfun));
6851 basic_block bb;
6853 sched_init_bbs ();
6855 FOR_EACH_BB_FN (bb, cfun)
6856 bbs.quick_push (bb);
6857 sched_init_luids (bbs);
6858 sched_deps_init (true);
6859 sched_extend_target ();
6860 haifa_init_h_i_d (bbs);
6862 bbs.release ();
6865 sched_init_only_bb = haifa_init_only_bb;
6866 sched_split_block = sched_split_block_1;
6867 sched_create_empty_bb = sched_create_empty_bb_1;
6868 haifa_recovery_bb_ever_added_p = false;
6870 nr_begin_data = nr_begin_control = nr_be_in_data = nr_be_in_control = 0;
6871 before_recovery = 0;
6872 after_recovery = 0;
6874 modulo_ii = 0;
6877 /* Finish work with the data specific to the Haifa scheduler. */
6878 void
6879 haifa_sched_finish (void)
6881 sched_create_empty_bb = NULL;
6882 sched_split_block = NULL;
6883 sched_init_only_bb = NULL;
6885 if (spec_info && spec_info->dump)
6887 char c = reload_completed ? 'a' : 'b';
6889 fprintf (spec_info->dump,
6890 ";; %s:\n", current_function_name ());
6892 fprintf (spec_info->dump,
6893 ";; Procedure %cr-begin-data-spec motions == %d\n",
6894 c, nr_begin_data);
6895 fprintf (spec_info->dump,
6896 ";; Procedure %cr-be-in-data-spec motions == %d\n",
6897 c, nr_be_in_data);
6898 fprintf (spec_info->dump,
6899 ";; Procedure %cr-begin-control-spec motions == %d\n",
6900 c, nr_begin_control);
6901 fprintf (spec_info->dump,
6902 ";; Procedure %cr-be-in-control-spec motions == %d\n",
6903 c, nr_be_in_control);
6906 scheduled_insns.release ();
6908 /* Finalize h_i_d, dependency caches, and luids for the whole
6909 function. Target will be finalized in md_global_finish (). */
6910 sched_deps_finish ();
6911 sched_finish_luids ();
6912 current_sched_info = NULL;
6913 sched_finish ();
6916 /* Free global data used during insn scheduling. This function works with
6917 the common data shared between the schedulers. */
6919 void
6920 sched_finish (void)
6922 haifa_finish_h_i_d ();
6923 free_global_sched_pressure_data ();
6924 free (curr_state);
6926 if (targetm.sched.finish_global)
6927 targetm.sched.finish_global (sched_dump, sched_verbose);
6929 end_alias_analysis ();
6931 regstat_free_calls_crossed ();
6933 dfa_finish ();
6936 /* Free all delay_pair structures that were recorded. */
6937 void
6938 free_delay_pairs (void)
6940 if (delay_htab)
6942 delay_htab->empty ();
6943 delay_htab_i2->empty ();
6947 /* Fix INSN_TICKs of the instructions in the current block as well as
6948 INSN_TICKs of their dependents.
6949 HEAD and TAIL are the begin and the end of the current scheduled block. */
6950 static void
6951 fix_inter_tick (rtx_insn *head, rtx_insn *tail)
6953 /* Set of instructions with corrected INSN_TICK. */
6954 bitmap_head processed;
6955 /* ??? It is doubtful whether we should assume that cycle advance happens on
6956 basic block boundaries. Basically, insns that are unconditionally ready
6957 at the start of the block are preferable to those which have
6958 a one-cycle dependency on an insn from the previous block. */
6959 int next_clock = clock_var + 1;
6961 bitmap_initialize (&processed, 0);
6963 /* Iterate over the scheduled instructions and fix their INSN_TICKs and
6964 the INSN_TICKs of their dependent instructions, so that INSN_TICKs are
6965 consistent across different blocks. */
6966 for (tail = NEXT_INSN (tail); head != tail; head = NEXT_INSN (head))
6968 if (INSN_P (head))
6970 int tick;
6971 sd_iterator_def sd_it;
6972 dep_t dep;
6974 tick = INSN_TICK (head);
6975 gcc_assert (tick >= MIN_TICK);
6977 /* Fix INSN_TICK of instruction from just scheduled block. */
6978 if (bitmap_set_bit (&processed, INSN_LUID (head)))
6980 tick -= next_clock;
6982 if (tick < MIN_TICK)
6983 tick = MIN_TICK;
6985 INSN_TICK (head) = tick;
6988 if (DEBUG_INSN_P (head))
6989 continue;
6991 FOR_EACH_DEP (head, SD_LIST_RES_FORW, sd_it, dep)
6993 rtx_insn *next;
6995 next = DEP_CON (dep);
6996 tick = INSN_TICK (next);
6998 if (tick != INVALID_TICK
6999 /* If NEXT has its INSN_TICK calculated, fix it.
7000 If not - it will be properly calculated from
7001 scratch later in fix_tick_ready. */
7002 && bitmap_set_bit (&processed, INSN_LUID (next)))
7004 tick -= next_clock;
7006 if (tick < MIN_TICK)
7007 tick = MIN_TICK;
7009 if (tick > INTER_TICK (next))
7010 INTER_TICK (next) = tick;
7011 else
7012 tick = INTER_TICK (next);
7014 INSN_TICK (next) = tick;
7019 bitmap_clear (&processed);
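/* A small example of the normalization above (numbers invented): if
   the block ended with clock_var == 9, so next_clock == 10, then an
   insn whose INSN_TICK was 12 ends up with INSN_TICK 2 -- it becomes
   ready two cycles into the next block, whose clock_var restarts at
   0.  Anything older is clamped to MIN_TICK.  */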
7022 /* Check if NEXT is ready to be added to the ready or queue list.
7023 If "yes", add it to the proper list.
7024 Returns:
7025 -1 - is not ready yet,
7026 0 - added to the ready list,
7027 0 < N - queued for N cycles. */
7028 int
7029 try_ready (rtx_insn *next)
7031 ds_t old_ts, new_ts;
7033 old_ts = TODO_SPEC (next);
7035 gcc_assert (!(old_ts & ~(SPECULATIVE | HARD_DEP | DEP_CONTROL | DEP_POSTPONED))
7036 && (old_ts == HARD_DEP
7037 || old_ts == DEP_POSTPONED
7038 || (old_ts & SPECULATIVE)
7039 || old_ts == DEP_CONTROL));
7041 new_ts = recompute_todo_spec (next, false);
7043 if (new_ts & (HARD_DEP | DEP_POSTPONED))
7044 gcc_assert (new_ts == old_ts
7045 && QUEUE_INDEX (next) == QUEUE_NOWHERE);
7046 else if (current_sched_info->new_ready)
7047 new_ts = current_sched_info->new_ready (next, new_ts);
7049 /* * If !(old_ts & SPECULATIVE) (e.g. HARD_DEP or 0), then the insn might
7050 have either its original pattern or a changed (speculative) one. This is
7051 due to changing ebb in region scheduling.
7052 * But if (old_ts & SPECULATIVE), then we are pretty sure that the insn
7053 has a speculative pattern.
7055 We can't assert (!(new_ts & HARD_DEP) || new_ts == old_ts) here because
7056 control-speculative NEXT could have been discarded by sched-rgn.c
7057 (the same case as when discarded by can_schedule_ready_p ()). */
7059 if ((new_ts & SPECULATIVE)
7060 /* If (old_ts == new_ts), then (old_ts & SPECULATIVE) and we don't
7061 need to change anything. */
7062 && new_ts != old_ts)
7064 int res;
7065 rtx new_pat;
7067 gcc_assert ((new_ts & SPECULATIVE) && !(new_ts & ~SPECULATIVE));
7069 res = haifa_speculate_insn (next, new_ts, &new_pat);
7071 switch (res)
7073 case -1:
7074 /* It would be nice to change DEP_STATUS of all dependences,
7075 which have ((DEP_STATUS & SPECULATIVE) == new_ts) to HARD_DEP,
7076 so we won't reanalyze anything. */
7077 new_ts = HARD_DEP;
7078 break;
7080 case 0:
7081 /* We follow the rule, that every speculative insn
7082 has non-null ORIG_PAT. */
7083 if (!ORIG_PAT (next))
7084 ORIG_PAT (next) = PATTERN (next);
7085 break;
7087 case 1:
7088 if (!ORIG_PAT (next))
7089 /* If we are going to overwrite the original pattern of the insn,
7090 save it. */
7091 ORIG_PAT (next) = PATTERN (next);
7093 res = haifa_change_pattern (next, new_pat);
7094 gcc_assert (res);
7095 break;
7097 default:
7098 gcc_unreachable ();
7102 /* We need to restore pattern only if (new_ts == 0), because otherwise it is
7103 either correct (new_ts & SPECULATIVE),
7104 or we simply don't care (new_ts & HARD_DEP). */
7106 gcc_assert (!ORIG_PAT (next)
7107 || !IS_SPECULATION_BRANCHY_CHECK_P (next));
7109 TODO_SPEC (next) = new_ts;
7111 if (new_ts & (HARD_DEP | DEP_POSTPONED))
7113 /* We can't assert (QUEUE_INDEX (next) == QUEUE_NOWHERE) here because
7114 control-speculative NEXT could have been discarded by sched-rgn.c
7115 (the same case as when discarded by can_schedule_ready_p ()). */
7116 /*gcc_assert (QUEUE_INDEX (next) == QUEUE_NOWHERE);*/
7118 change_queue_index (next, QUEUE_NOWHERE);
7120 return -1;
7122 else if (!(new_ts & BEGIN_SPEC)
7123 && ORIG_PAT (next) && PREDICATED_PAT (next) == NULL_RTX
7124 && !IS_SPECULATION_CHECK_P (next))
7125 /* We should change the pattern of every previously speculative
7126 instruction - and we determine whether NEXT was speculative by using
7127 the ORIG_PAT field. There is one exception - speculation checks have
7128 ORIG_PAT set too, so skip them. */
7130 bool success = haifa_change_pattern (next, ORIG_PAT (next));
7131 gcc_assert (success);
7132 ORIG_PAT (next) = 0;
7135 if (sched_verbose >= 2)
7137 fprintf (sched_dump, ";;\t\tdependencies resolved: insn %s",
7138 (*current_sched_info->print_insn) (next, 0));
7140 if (spec_info && spec_info->dump)
7142 if (new_ts & BEGIN_DATA)
7143 fprintf (spec_info->dump, "; data-spec;");
7144 if (new_ts & BEGIN_CONTROL)
7145 fprintf (spec_info->dump, "; control-spec;");
7146 if (new_ts & BE_IN_CONTROL)
7147 fprintf (spec_info->dump, "; in-control-spec;");
7149 if (TODO_SPEC (next) & DEP_CONTROL)
7150 fprintf (sched_dump, " predicated");
7151 fprintf (sched_dump, "\n");
7154 adjust_priority (next);
7156 return fix_tick_ready (next);
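/* Summing up try_ready: it recomputes the speculative status of NEXT,
   switches NEXT between its original and speculative patterns as that
   status requires, and finally hands the insn to fix_tick_ready, which
   places it in the ready list or the queue.  Insns still blocked by a
   hard or postponed dependence are parked in QUEUE_NOWHERE until a
   later call wakes them up.  */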
7159 /* Calculate INSN_TICK of NEXT and add it to either ready or queue list. */
7160 static int
7161 fix_tick_ready (rtx_insn *next)
7163 int tick, delay;
7165 if (!DEBUG_INSN_P (next) && !sd_lists_empty_p (next, SD_LIST_RES_BACK))
7167 int full_p;
7168 sd_iterator_def sd_it;
7169 dep_t dep;
7171 tick = INSN_TICK (next);
7172 /* If TICK is not equal to INVALID_TICK, then update
7173 INSN_TICK of NEXT with the most recently resolved dependence
7174 cost. Otherwise, recalculate it from scratch. */
7175 full_p = (tick == INVALID_TICK);
7177 FOR_EACH_DEP (next, SD_LIST_RES_BACK, sd_it, dep)
7179 rtx_insn *pro = DEP_PRO (dep);
7180 int tick1;
7182 gcc_assert (INSN_TICK (pro) >= MIN_TICK);
7184 tick1 = INSN_TICK (pro) + dep_cost (dep);
7185 if (tick1 > tick)
7186 tick = tick1;
7188 if (!full_p)
7189 break;
7192 else
7193 tick = -1;
7195 INSN_TICK (next) = tick;
7197 delay = tick - clock_var;
7198 if (delay <= 0 || sched_pressure != SCHED_PRESSURE_NONE)
7199 delay = QUEUE_READY;
7201 change_queue_index (next, delay);
7203 return delay;
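/* A worked example of the computation above (numbers invented): if
   NEXT has two resolved back deps whose producers have INSN_TICK 4
   and 7, with dep_cost 1 and 2 respectively, then INSN_TICK (next)
   becomes MAX (4 + 1, 7 + 2) = 9.  With clock_var == 6 the delay is
   3, so NEXT is queued for three cycles -- unless pressure scheduling
   is active, in which case it goes straight to the ready list.  */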
7206 /* Move NEXT to the proper queue list with (DELAY >= 1),
7207 or add it to the ready list (DELAY == QUEUE_READY),
7208 or remove it from ready and queue lists at all (DELAY == QUEUE_NOWHERE). */
7209 static void
7210 change_queue_index (rtx_insn *next, int delay)
7212 int i = QUEUE_INDEX (next);
7214 gcc_assert (QUEUE_NOWHERE <= delay && delay <= max_insn_queue_index
7215 && delay != 0);
7216 gcc_assert (i != QUEUE_SCHEDULED);
7218 if ((delay > 0 && NEXT_Q_AFTER (q_ptr, delay) == i)
7219 || (delay < 0 && delay == i))
7220 /* We have nothing to do. */
7221 return;
7223 /* Remove NEXT from wherever it is now. */
7224 if (i == QUEUE_READY)
7225 ready_remove_insn (next);
7226 else if (i >= 0)
7227 queue_remove (next);
7229 /* Add it to the proper place. */
7230 if (delay == QUEUE_READY)
7231 ready_add (readyp, next, false);
7232 else if (delay >= 1)
7233 queue_insn (next, delay, "change queue index");
7235 if (sched_verbose >= 2)
7237 fprintf (sched_dump, ";;\t\ttick updated: insn %s",
7238 (*current_sched_info->print_insn) (next, 0));
7240 if (delay == QUEUE_READY)
7241 fprintf (sched_dump, " into ready\n");
7242 else if (delay >= 1)
7243 fprintf (sched_dump, " into queue with cost=%d\n", delay);
7244 else
7245 fprintf (sched_dump, " removed from ready or queue lists\n");
7249 static int sched_ready_n_insns = -1;
7251 /* Initialize per region data structures. */
7252 void
7253 sched_extend_ready_list (int new_sched_ready_n_insns)
7255 int i;
7257 if (sched_ready_n_insns == -1)
7258 /* At the first call we need to initialize one more choice_stack
7259 entry. */
7261 i = 0;
7262 sched_ready_n_insns = 0;
7263 scheduled_insns.reserve (new_sched_ready_n_insns);
7265 else
7266 i = sched_ready_n_insns + 1;
7268 ready.veclen = new_sched_ready_n_insns + issue_rate;
7269 ready.vec = XRESIZEVEC (rtx_insn *, ready.vec, ready.veclen);
7271 gcc_assert (new_sched_ready_n_insns >= sched_ready_n_insns);
7273 ready_try = (signed char *) xrecalloc (ready_try, new_sched_ready_n_insns,
7274 sched_ready_n_insns,
7275 sizeof (*ready_try));
7277 /* We allocate +1 element to save initial state in the choice_stack[0]
7278 entry. */
7279 choice_stack = XRESIZEVEC (struct choice_entry, choice_stack,
7280 new_sched_ready_n_insns + 1);
7282 for (; i <= new_sched_ready_n_insns; i++)
7284 choice_stack[i].state = xmalloc (dfa_state_size);
7286 if (targetm.sched.first_cycle_multipass_init)
7287 targetm.sched.first_cycle_multipass_init (&(choice_stack[i]
7288 .target_data));
7291 sched_ready_n_insns = new_sched_ready_n_insns;
7294 /* Free per region data structures. */
7295 void
7296 sched_finish_ready_list (void)
7298 int i;
7300 free (ready.vec);
7301 ready.vec = NULL;
7302 ready.veclen = 0;
7304 free (ready_try);
7305 ready_try = NULL;
7307 for (i = 0; i <= sched_ready_n_insns; i++)
7309 if (targetm.sched.first_cycle_multipass_fini)
7310 targetm.sched.first_cycle_multipass_fini (&(choice_stack[i]
7311 .target_data));
7313 free (choice_stack [i].state);
7315 free (choice_stack);
7316 choice_stack = NULL;
7318 sched_ready_n_insns = -1;
7321 static int
7322 haifa_luid_for_non_insn (rtx x)
7324 gcc_assert (NOTE_P (x) || LABEL_P (x));
7326 return 0;
7329 /* Generates recovery code for INSN. */
7330 static void
7331 generate_recovery_code (rtx_insn *insn)
7333 if (TODO_SPEC (insn) & BEGIN_SPEC)
7334 begin_speculative_block (insn);
7336 /* Here we have an insn with no dependencies on
7337 instructions other than CHECK_SPEC ones. */
7339 if (TODO_SPEC (insn) & BE_IN_SPEC)
7340 add_to_speculative_block (insn);
7343 /* Helper function.
7344 Tries to add speculative dependencies of type FS between instructions
7345 in deps_list L and TWIN. */
7346 static void
7347 process_insn_forw_deps_be_in_spec (rtx insn, rtx_insn *twin, ds_t fs)
7349 sd_iterator_def sd_it;
7350 dep_t dep;
7352 FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
7354 ds_t ds;
7355 rtx_insn *consumer;
7357 consumer = DEP_CON (dep);
7359 ds = DEP_STATUS (dep);
7361 if (/* If we want to create speculative dep. */
7362 fs
7363 /* And we can do that because this is a true dep. */
7364 && (ds & DEP_TYPES) == DEP_TRUE)
7366 gcc_assert (!(ds & BE_IN_SPEC));
7368 if (/* If this dep can be overcome with 'begin speculation'. */
7369 ds & BEGIN_SPEC)
7370 /* Then we have a choice: keep the dep 'begin speculative'
7371 or transform it into 'be in speculative'. */
7373 if (/* In try_ready we assert that once an insn became ready
7374 it can be removed from the ready (or queue) list only
7375 due to a backend decision. Hence we can't let the
7376 probability of the speculative dep decrease. */
7377 ds_weak (ds) <= ds_weak (fs))
7379 ds_t new_ds;
7381 new_ds = (ds & ~BEGIN_SPEC) | fs;
7383 if (/* consumer can 'be in speculative'. */
7384 sched_insn_is_legitimate_for_speculation_p (consumer,
7385 new_ds))
7386 /* Transform it to be in speculative. */
7387 ds = new_ds;
7390 else
7391 /* Mark the dep as 'be in speculative'. */
7392 ds |= fs;
7396 dep_def _new_dep, *new_dep = &_new_dep;
7398 init_dep_1 (new_dep, twin, consumer, DEP_TYPE (dep), ds);
7399 sd_add_dep (new_dep, false);
7404 /* Generates recovery code for BEGIN speculative INSN. */
7405 static void
7406 begin_speculative_block (rtx_insn *insn)
7408 if (TODO_SPEC (insn) & BEGIN_DATA)
7409 nr_begin_data++;
7410 if (TODO_SPEC (insn) & BEGIN_CONTROL)
7411 nr_begin_control++;
7413 create_check_block_twin (insn, false);
7415 TODO_SPEC (insn) &= ~BEGIN_SPEC;
7418 static void haifa_init_insn (rtx_insn *);
7420 /* Generates recovery code for BE_IN speculative INSN. */
7421 static void
7422 add_to_speculative_block (rtx_insn *insn)
7424 ds_t ts;
7425 sd_iterator_def sd_it;
7426 dep_t dep;
7427 rtx twins = NULL;
7428 rtx_vec_t priorities_roots;
7430 ts = TODO_SPEC (insn);
7431 gcc_assert (!(ts & ~BE_IN_SPEC));
7433 if (ts & BE_IN_DATA)
7434 nr_be_in_data++;
7435 if (ts & BE_IN_CONTROL)
7436 nr_be_in_control++;
7438 TODO_SPEC (insn) &= ~BE_IN_SPEC;
7439 gcc_assert (!TODO_SPEC (insn));
7441 DONE_SPEC (insn) |= ts;
7443 /* First we convert all simple checks to branchy. */
7444 for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
7445 sd_iterator_cond (&sd_it, &dep);)
7447 rtx_insn *check = DEP_PRO (dep);
7449 if (IS_SPECULATION_SIMPLE_CHECK_P (check))
7451 create_check_block_twin (check, true);
7453 /* Restart search. */
7454 sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
7456 else
7457 /* Continue search. */
7458 sd_iterator_next (&sd_it);
7461 priorities_roots.create (0);
7462 clear_priorities (insn, &priorities_roots);
7464 while (1)
7466 rtx_insn *check, *twin;
7467 basic_block rec;
7469 /* Get the first backward dependency of INSN. */
7470 sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
7471 if (!sd_iterator_cond (&sd_it, &dep))
7472 /* INSN has no backward dependencies left. */
7473 break;
7475 gcc_assert ((DEP_STATUS (dep) & BEGIN_SPEC) == 0
7476 && (DEP_STATUS (dep) & BE_IN_SPEC) != 0
7477 && (DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
7479 check = DEP_PRO (dep);
7481 gcc_assert (!IS_SPECULATION_CHECK_P (check) && !ORIG_PAT (check)
7482 && QUEUE_INDEX (check) == QUEUE_NOWHERE);
7484 rec = BLOCK_FOR_INSN (check);
7486 twin = emit_insn_before (copy_insn (PATTERN (insn)), BB_END (rec));
7487 haifa_init_insn (twin);
7489 sd_copy_back_deps (twin, insn, true);
7491 if (sched_verbose && spec_info->dump)
7492 /* INSN_BB (insn) isn't determined for twin insns yet.
7493 So we can't use current_sched_info->print_insn. */
7494 fprintf (spec_info->dump, ";;\t\tGenerated twin insn : %d/rec%d\n",
7495 INSN_UID (twin), rec->index);
7497 twins = alloc_INSN_LIST (twin, twins);
7499 /* Add dependences between TWIN and all appropriate
7500 instructions from REC. */
7501 FOR_EACH_DEP (insn, SD_LIST_SPEC_BACK, sd_it, dep)
7503 rtx_insn *pro = DEP_PRO (dep);
7505 gcc_assert (DEP_TYPE (dep) == REG_DEP_TRUE);
7507 /* INSN might have dependencies from the instructions from
7508 several recovery blocks. At this iteration we process those
7509 producers that reside in REC. */
7510 if (BLOCK_FOR_INSN (pro) == rec)
7512 dep_def _new_dep, *new_dep = &_new_dep;
7514 init_dep (new_dep, pro, twin, REG_DEP_TRUE);
7515 sd_add_dep (new_dep, false);
7519 process_insn_forw_deps_be_in_spec (insn, twin, ts);
7521 /* Remove all dependencies between INSN and insns in REC. */
7522 for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
7523 sd_iterator_cond (&sd_it, &dep);)
7525 rtx_insn *pro = DEP_PRO (dep);
7527 if (BLOCK_FOR_INSN (pro) == rec)
7528 sd_delete_dep (sd_it);
7529 else
7530 sd_iterator_next (&sd_it);
7534 /* We couldn't have added the dependencies between INSN and TWINS earlier
7535 because that would make TWINS appear in the INSN_BACK_DEPS (INSN). */
7536 while (twins)
7538 rtx twin;
7540 twin = XEXP (twins, 0);
7543 dep_def _new_dep, *new_dep = &_new_dep;
7545 init_dep (new_dep, insn, as_a <rtx_insn *> (twin), REG_DEP_OUTPUT);
7546 sd_add_dep (new_dep, false);
7549 twin = XEXP (twins, 1);
7550 free_INSN_LIST_node (twins);
7551 twins = twin;
7554 calc_priorities (priorities_roots);
7555 priorities_roots.release ();
7559 /* Extend the array pointed to by P, filling only the new part with zeros. */
7559 void *
7560 xrecalloc (void *p, size_t new_nmemb, size_t old_nmemb, size_t size)
7562 gcc_assert (new_nmemb >= old_nmemb);
7563 p = XRESIZEVAR (void, p, new_nmemb * size);
7564 memset (((char *) p) + old_nmemb * size, 0, (new_nmemb - old_nmemb) * size);
7565 return p;
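/* Typical use, as for the READY_TRY array elsewhere in this file:

     ready_try = (signed char *) xrecalloc (ready_try, new_n, old_n,
                                            sizeof (*ready_try));

   Entries 0 .. old_n-1 keep their contents; entries old_n .. new_n-1
   are zeroed.  (new_n and old_n here stand for the new and old
   element counts.)  */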
7568 /* Helper function.
7569 Find fallthru edge from PRED. */
7570 edge
7571 find_fallthru_edge_from (basic_block pred)
7573 edge e;
7574 basic_block succ;
7576 succ = pred->next_bb;
7577 gcc_assert (succ->prev_bb == pred);
7579 if (EDGE_COUNT (pred->succs) <= EDGE_COUNT (succ->preds))
7581 e = find_fallthru_edge (pred->succs);
7583 if (e)
7585 gcc_assert (e->dest == succ);
7586 return e;
7589 else
7591 e = find_fallthru_edge (succ->preds);
7593 if (e)
7595 gcc_assert (e->src == pred);
7596 return e;
7600 return NULL;
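/* The EDGE_COUNT comparison above is only an optimization: the
   fallthru edge, if any, appears both in PRED->succs and in
   SUCC->preds, so we scan whichever of the two lists is shorter.  */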
7603 /* Extend per basic block data structures. */
7604 static void
7605 sched_extend_bb (void)
7607 /* The following is done to keep current_sched_info->next_tail non null. */
7608 rtx_insn *end = BB_END (EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb);
7609 rtx_insn *insn = DEBUG_INSN_P (end) ? prev_nondebug_insn (end) : end;
7610 if (NEXT_INSN (end) == 0
7611 || (!NOTE_P (insn)
7612 && !LABEL_P (insn)
7613 /* Don't emit a NOTE if it would end up before a BARRIER. */
7614 && !BARRIER_P (NEXT_INSN (end))))
7616 rtx_note *note = emit_note_after (NOTE_INSN_DELETED, end);
7617 /* Make note appear outside BB. */
7618 set_block_for_insn (note, NULL);
7619 BB_END (EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb) = end;
7623 /* Init per basic block data structures. */
7624 void
7625 sched_init_bbs (void)
7627 sched_extend_bb ();
7630 /* Initialize BEFORE_RECOVERY variable. */
7631 static void
7632 init_before_recovery (basic_block *before_recovery_ptr)
7634 basic_block last;
7635 edge e;
7637 last = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
7638 e = find_fallthru_edge_from (last);
7640 if (e)
7642 /* We create two basic blocks:
7643 1. A single-instruction block that is inserted right after E->SRC
7644 and jumps to
7645 2. An empty block right before EXIT_BLOCK.
7646 Recovery blocks will be emitted between these two blocks. */
7648 basic_block single, empty;
7649 rtx_insn *x;
7650 rtx label;
7652 /* If the fallthrough edge to exit we've found is from the block we've
7653 created before, don't do anything more. */
7654 if (last == after_recovery)
7655 return;
7657 adding_bb_to_current_region_p = false;
7659 single = sched_create_empty_bb (last);
7660 empty = sched_create_empty_bb (single);
7662 /* Add new blocks to the root loop. */
7663 if (current_loops != NULL)
7665 add_bb_to_loop (single, (*current_loops->larray)[0]);
7666 add_bb_to_loop (empty, (*current_loops->larray)[0]);
7669 single->count = last->count;
7670 empty->count = last->count;
7671 single->frequency = last->frequency;
7672 empty->frequency = last->frequency;
7673 BB_COPY_PARTITION (single, last);
7674 BB_COPY_PARTITION (empty, last);
7676 redirect_edge_succ (e, single);
7677 make_single_succ_edge (single, empty, 0);
7678 make_single_succ_edge (empty, EXIT_BLOCK_PTR_FOR_FN (cfun),
7679 EDGE_FALLTHRU);
7681 label = block_label (empty);
7682 x = emit_jump_insn_after (gen_jump (label), BB_END (single));
7683 JUMP_LABEL (x) = label;
7684 LABEL_NUSES (label)++;
7685 haifa_init_insn (x);
7687 emit_barrier_after (x);
7689 sched_init_only_bb (empty, NULL);
7690 sched_init_only_bb (single, NULL);
7691 sched_extend_bb ();
7693 adding_bb_to_current_region_p = true;
7694 before_recovery = single;
7695 after_recovery = empty;
7697 if (before_recovery_ptr)
7698 *before_recovery_ptr = before_recovery;
7700 if (sched_verbose >= 2 && spec_info->dump)
7701 fprintf (spec_info->dump,
7702 ";;\t\tFixed fallthru to EXIT : %d->>%d->%d->>EXIT\n",
7703 last->index, single->index, empty->index);
7705 else
7706 before_recovery = last;
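/* On the fallthru path the result is, schematically,
   LAST -> SINGLE -> EMPTY -> EXIT (matching the dump message above):
   recovery blocks created later are placed between SINGLE (stored in
   before_recovery) and EMPTY (after_recovery), so the fallthru edge to
   the exit block is never disturbed again.  */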
7709 /* Returns new recovery block. */
7710 basic_block
7711 sched_create_recovery_block (basic_block *before_recovery_ptr)
7713 rtx label;
7714 rtx_insn *barrier;
7715 basic_block rec;
7717 haifa_recovery_bb_recently_added_p = true;
7718 haifa_recovery_bb_ever_added_p = true;
7720 init_before_recovery (before_recovery_ptr);
7722 barrier = get_last_bb_insn (before_recovery);
7723 gcc_assert (BARRIER_P (barrier));
7725 label = emit_label_after (gen_label_rtx (), barrier);
7727 rec = create_basic_block (label, label, before_recovery);
7729 /* A recovery block always ends with an unconditional jump. */
7730 emit_barrier_after (BB_END (rec));
7732 if (BB_PARTITION (before_recovery) != BB_UNPARTITIONED)
7733 BB_SET_PARTITION (rec, BB_COLD_PARTITION);
7735 if (sched_verbose && spec_info->dump)
7736 fprintf (spec_info->dump, ";;\t\tGenerated recovery block rec%d\n",
7737 rec->index);
7739 return rec;
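/* Note that REC is created without its terminating jump: the barrier
   emitted after BB_END (rec) keeps the insn stream well-formed until
   sched_create_recovery_edges below adds the actual jump to the
   continuation block.  */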
7742 /* Create edges: FIRST_BB -> REC; FIRST_BB -> SECOND_BB; REC -> SECOND_BB
7743 and emit necessary jumps. */
7744 void
7745 sched_create_recovery_edges (basic_block first_bb, basic_block rec,
7746 basic_block second_bb)
7748 rtx label;
7749 rtx jump;
7750 int edge_flags;
7752 /* Fix the incoming edge. */
7753 /* ??? Which other flags should be specified? */
7754 if (BB_PARTITION (first_bb) != BB_PARTITION (rec))
7755 /* The partition types compare equal when both blocks are "unpartitioned". */
7756 edge_flags = EDGE_CROSSING;
7757 else
7758 edge_flags = 0;
7760 make_edge (first_bb, rec, edge_flags);
7761 label = block_label (second_bb);
7762 jump = emit_jump_insn_after (gen_jump (label), BB_END (rec));
7763 JUMP_LABEL (jump) = label;
7764 LABEL_NUSES (label)++;
7766 if (BB_PARTITION (second_bb) != BB_PARTITION (rec))
7767 /* The partition types compare equal when both blocks are "unpartitioned". */
7769 /* Rewritten from cfgrtl.c. */
7770 if (flag_reorder_blocks_and_partition
7771 && targetm_common.have_named_sections)
7773 /* We don't need the same note for the check because
7774 any_condjump_p (check) == true. */
7775 CROSSING_JUMP_P (jump) = 1;
7777 edge_flags = EDGE_CROSSING;
7779 else
7780 edge_flags = 0;
7782 make_single_succ_edge (rec, second_bb, edge_flags);
7783 if (dom_info_available_p (CDI_DOMINATORS))
7784 set_immediate_dominator (CDI_DOMINATORS, rec, first_bb);
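/* The resulting shape is roughly:

     first_bb:  ... ends with the (conditional) check
        |   \
        |    rec: recovery code, then a jump to second_bb
        v   /
     second_bb: continuation

   with EDGE_CROSSING set on any edge that crosses a hot/cold
   partition boundary.  */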
7787 /* This function creates recovery code for INSN. If MUTATE_P is nonzero,
7788    INSN is a simple check that should be converted to a branchy one. */
7789 static void
7790 create_check_block_twin (rtx_insn *insn, bool mutate_p)
7792 basic_block rec;
7793 rtx_insn *label, *check, *twin;
7794 rtx check_pat;
7795 ds_t fs;
7796 sd_iterator_def sd_it;
7797 dep_t dep;
7798 dep_def _new_dep, *new_dep = &_new_dep;
7799 ds_t todo_spec;
7801 gcc_assert (ORIG_PAT (insn) != NULL_RTX);
7803 if (!mutate_p)
7804 todo_spec = TODO_SPEC (insn);
7805 else
7807 gcc_assert (IS_SPECULATION_SIMPLE_CHECK_P (insn)
7808 && (TODO_SPEC (insn) & SPECULATIVE) == 0);
7810 todo_spec = CHECK_SPEC (insn);
7813 todo_spec &= SPECULATIVE;
7815 /* Create recovery block. */
7816 if (mutate_p || targetm.sched.needs_block_p (todo_spec))
7818 rec = sched_create_recovery_block (NULL);
7819 label = BB_HEAD (rec);
7821 else
7823 rec = EXIT_BLOCK_PTR_FOR_FN (cfun);
7824 label = NULL;
7827 /* Emit CHECK. */
7828 check_pat = targetm.sched.gen_spec_check (insn, label, todo_spec);
7830 if (rec != EXIT_BLOCK_PTR_FOR_FN (cfun))
7832 /* To have mem_reg alive at the beginning of second_bb,
7833    we emit the check BEFORE insn, so that after splitting,
7834    insn ends up at the beginning of second_bb, which
7835    provides us with correct liveness information. */
7836 check = emit_jump_insn_before (check_pat, insn);
7837 JUMP_LABEL (check) = label;
7838 LABEL_NUSES (label)++;
7840 else
7841 check = emit_insn_before (check_pat, insn);
7843 /* Extend data structures. */
7844 haifa_init_insn (check);
7846 /* CHECK is being added to current region. Extend ready list. */
7847 gcc_assert (sched_ready_n_insns != -1);
7848 sched_extend_ready_list (sched_ready_n_insns + 1);
7850 if (current_sched_info->add_remove_insn)
7851 current_sched_info->add_remove_insn (insn, 0);
7853 RECOVERY_BLOCK (check) = rec;
7855 if (sched_verbose && spec_info->dump)
7856 fprintf (spec_info->dump, ";;\t\tGenerated check insn : %s\n",
7857 (*current_sched_info->print_insn) (check, 0));
7859 gcc_assert (ORIG_PAT (insn));
7861 /* Initialize TWIN (twin is a duplicate of original instruction
7862 in the recovery block). */
7863 if (rec != EXIT_BLOCK_PTR_FOR_FN (cfun))
7865 sd_iterator_def sd_it;
7866 dep_t dep;
7868 FOR_EACH_DEP (insn, SD_LIST_RES_BACK, sd_it, dep)
7869 if ((DEP_STATUS (dep) & DEP_OUTPUT) != 0)
7871 struct _dep _dep2, *dep2 = &_dep2;
7873 init_dep (dep2, DEP_PRO (dep), check, REG_DEP_TRUE);
7875 sd_add_dep (dep2, true);
7878 twin = emit_insn_after (ORIG_PAT (insn), BB_END (rec));
7879 haifa_init_insn (twin);
7881 if (sched_verbose && spec_info->dump)
7882 /* INSN_BB (insn) isn't determined for twin insns yet.
7883 So we can't use current_sched_info->print_insn. */
7884 fprintf (spec_info->dump, ";;\t\tGenerated twin insn : %d/rec%d\n",
7885 INSN_UID (twin), rec->index);
7887 else
7889 ORIG_PAT (check) = ORIG_PAT (insn);
7890 HAS_INTERNAL_DEP (check) = 1;
7891 twin = check;
7892 /* ??? We probably should change all OUTPUT dependencies to
7893 (TRUE | OUTPUT). */
7896 /* Copy all resolved back dependencies of INSN to TWIN. This will
7897    provide the correct value for INSN_TICK (TWIN). */
7898 sd_copy_back_deps (twin, insn, true);
7900 if (rec != EXIT_BLOCK_PTR_FOR_FN (cfun))
7901 /* In case of branchy check, fix CFG. */
7903 basic_block first_bb, second_bb;
7904 rtx_insn *jump;
7906 first_bb = BLOCK_FOR_INSN (check);
7907 second_bb = sched_split_block (first_bb, check);
7909 sched_create_recovery_edges (first_bb, rec, second_bb);
7911 sched_init_only_bb (second_bb, first_bb);
7912 sched_init_only_bb (rec, EXIT_BLOCK_PTR_FOR_FN (cfun));
7914 jump = BB_END (rec);
7915 haifa_init_insn (jump);
7918 /* Move backward dependences from INSN to CHECK and
7919 move forward dependences from INSN to TWIN. */
7921 /* First, create dependencies between INSN's producers and CHECK & TWIN. */
7922 FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
7924 rtx_insn *pro = DEP_PRO (dep);
7925 ds_t ds;
7927 /* If BEGIN_DATA: [insn ~~TRUE~~> producer]:
7928 check --TRUE--> producer ??? or ANTI ???
7929 twin --TRUE--> producer
7930 twin --ANTI--> check
7932 If BEGIN_CONTROL: [insn ~~ANTI~~> producer]:
7933 check --ANTI--> producer
7934 twin --ANTI--> producer
7935 twin --ANTI--> check
7937 If BE_IN_SPEC: [insn ~~TRUE~~> producer]:
7938 check ~~TRUE~~> producer
7939 twin ~~TRUE~~> producer
7940 twin --ANTI--> check */
7942 ds = DEP_STATUS (dep);
7944 if (ds & BEGIN_SPEC)
7946 gcc_assert (!mutate_p);
7947 ds &= ~BEGIN_SPEC;
7950 init_dep_1 (new_dep, pro, check, DEP_TYPE (dep), ds);
7951 sd_add_dep (new_dep, false);
7953 if (rec != EXIT_BLOCK_PTR_FOR_FN (cfun))
7955 DEP_CON (new_dep) = twin;
7956 sd_add_dep (new_dep, false);
7960 /* Second, remove backward dependencies of INSN. */
7961 for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
7962 sd_iterator_cond (&sd_it, &dep);)
7964 if ((DEP_STATUS (dep) & BEGIN_SPEC)
7965 || mutate_p)
7966 /* We can delete this dep because we overcome it with
7967 BEGIN_SPECULATION. */
7968 sd_delete_dep (sd_it);
7969 else
7970 sd_iterator_next (&sd_it);
7973 /* Future Speculations. Determine what BE_IN speculations will be like. */
7974 fs = 0;
7976 /* Fields (DONE_SPEC (x) & BEGIN_SPEC) and CHECK_SPEC (x) are set only
7977 here. */
7979 gcc_assert (!DONE_SPEC (insn));
7981 if (!mutate_p)
7983 ds_t ts = TODO_SPEC (insn);
7985 DONE_SPEC (insn) = ts & BEGIN_SPEC;
7986 CHECK_SPEC (check) = ts & BEGIN_SPEC;
7988 /* The luckiness of future speculations depends solely upon the
7989    initial BEGIN speculation. */
7990 if (ts & BEGIN_DATA)
7991 fs = set_dep_weak (fs, BE_IN_DATA, get_dep_weak (ts, BEGIN_DATA));
7992 if (ts & BEGIN_CONTROL)
7993 fs = set_dep_weak (fs, BE_IN_CONTROL,
7994 get_dep_weak (ts, BEGIN_CONTROL));
7996 else
7997 CHECK_SPEC (check) = CHECK_SPEC (insn);
7999 /* Future speculations: call the helper. */
8000 process_insn_forw_deps_be_in_spec (insn, twin, fs);
8002 if (rec != EXIT_BLOCK_PTR_FOR_FN (cfun))
8004 /* Which types of dependencies to use here is, in general, a
8005    machine-dependent question... But, for now,
8006    it is not. */
8008 if (!mutate_p)
8010 init_dep (new_dep, insn, check, REG_DEP_TRUE);
8011 sd_add_dep (new_dep, false);
8013 init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
8014 sd_add_dep (new_dep, false);
8016 else
8018 if (spec_info->dump)
8019 fprintf (spec_info->dump, ";;\t\tRemoved simple check : %s\n",
8020 (*current_sched_info->print_insn) (insn, 0));
8022 /* Remove all dependencies of the INSN. */
8024 sd_it = sd_iterator_start (insn, (SD_LIST_FORW
8025 | SD_LIST_BACK
8026 | SD_LIST_RES_BACK));
8027 while (sd_iterator_cond (&sd_it, &dep))
8028 sd_delete_dep (sd_it);
8031 /* If the former check (INSN) was already moved to the ready (or queue)
8032    list, add the new check (CHECK) there too. */
8033 if (QUEUE_INDEX (insn) != QUEUE_NOWHERE)
8034 try_ready (check);
8036 /* Remove old check from instruction stream and free its
8037 data. */
8038 sched_remove_insn (insn);
8041 init_dep (new_dep, check, twin, REG_DEP_ANTI);
8042 sd_add_dep (new_dep, false);
8044 else
8046 init_dep_1 (new_dep, insn, check, REG_DEP_TRUE, DEP_TRUE | DEP_OUTPUT);
8047 sd_add_dep (new_dep, false);
8050 if (!mutate_p)
8051 /* Fix priorities. If MUTATE_P is nonzero, this is not necessary,
8052 because it'll be done later in add_to_speculative_block. */
8054 rtx_vec_t priorities_roots = rtx_vec_t ();
8056 clear_priorities (twin, &priorities_roots);
8057 calc_priorities (priorities_roots);
8058 priorities_roots.release ();
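/* In outline, create_check_block_twin has now replaced the speculative
   INSN by CHECK (and, for a branchy check, by TWIN in the recovery
   block): INSN's backward dependencies were copied to CHECK and TWIN,
   its forward dependencies went to TWIN, and the anti-dependency from
   CHECK to TWIN keeps the recovery copy ordered after the check.  */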
8062 /* Remove dependencies between instructions in the recovery block REC
8063    and the usual region instructions. Dependencies within REC are
8064    kept, so they need not be recomputed. */
8065 static void
8066 fix_recovery_deps (basic_block rec)
8068 rtx_insn *note, *insn, *jump;
8069 rtx ready_list = 0;
8070 bitmap_head in_ready;
8071 rtx link;
8073 bitmap_initialize (&in_ready, 0);
8075 /* NOTE - a basic block note. */
8076 note = NEXT_INSN (BB_HEAD (rec));
8077 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
8078 insn = BB_END (rec);
8079 gcc_assert (JUMP_P (insn));
8080 insn = PREV_INSN (insn);
8084 sd_iterator_def sd_it;
8085 dep_t dep;
8087 for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
8088 sd_iterator_cond (&sd_it, &dep);)
8090 rtx_insn *consumer = DEP_CON (dep);
8092 if (BLOCK_FOR_INSN (consumer) != rec)
8094 sd_delete_dep (sd_it);
8096 if (bitmap_set_bit (&in_ready, INSN_LUID (consumer)))
8097 ready_list = alloc_INSN_LIST (consumer, ready_list);
8099 else
8101 gcc_assert ((DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
8103 sd_iterator_next (&sd_it);
8107 insn = PREV_INSN (insn);
8109 while (insn != note);
8111 bitmap_clear (&in_ready);
8113 /* Try to add instructions to the ready or queue list. */
8114 for (link = ready_list; link; link = XEXP (link, 1))
8115 try_ready (as_a <rtx_insn *> (XEXP (link, 0)));
8116 free_INSN_LIST_list (&ready_list);
8118 /* Fix the jump's dependencies. */
8119 insn = BB_HEAD (rec);
8120 jump = BB_END (rec);
8122 gcc_assert (LABEL_P (insn));
8123 insn = NEXT_INSN (insn);
8125 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (insn));
8126 add_jump_dependencies (insn, jump);
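/* After this function the only dependencies left inside REC are the
   intra-block true dependencies (asserted in the loop above) plus the
   new anti dependencies on the closing jump; consumers living outside
   REC were detached and re-queued through try_ready instead.  */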
8129 /* Change pattern of INSN to NEW_PAT. Invalidate cached haifa
8130 instruction data. */
8131 static bool
8132 haifa_change_pattern (rtx_insn *insn, rtx new_pat)
8134 int t;
8136 t = validate_change (insn, &PATTERN (insn), new_pat, 0);
8137 if (!t)
8138 return false;
8140 update_insn_after_change (insn);
8141 return true;
8144 /* -1 - can't speculate,
8145 0 - for speculation with REQUEST mode it is OK to use
8146 current instruction pattern,
8147 1 - need to change pattern for *NEW_PAT to be speculative. */
8148 int
8149 sched_speculate_insn (rtx insn, ds_t request, rtx *new_pat)
8151 gcc_assert (current_sched_info->flags & DO_SPECULATION
8152 && (request & SPECULATIVE)
8153 && sched_insn_is_legitimate_for_speculation_p (insn, request));
8155 if ((request & spec_info->mask) != request)
8156 return -1;
8158 if (request & BE_IN_SPEC
8159 && !(request & BEGIN_SPEC))
8160 return 0;
8162 return targetm.sched.speculate_insn (insn, request, new_pat);
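/* A caller might act on the three-way result roughly like this
   (illustrative sketch only):

     rtx new_pat;
     switch (sched_speculate_insn (insn, request, &new_pat))
       {
       case -1:                            /* Can't speculate at all.  */
         break;
       case 0:             /* Current pattern is already acceptable.  */
         break;
       case 1:           /* Must switch to the speculative pattern.  */
         haifa_change_pattern (insn, new_pat);
         break;
       }
*/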
8165 static int
8166 haifa_speculate_insn (rtx_insn *insn, ds_t request, rtx *new_pat)
8168 gcc_assert (sched_deps_info->generate_spec_deps
8169 && !IS_SPECULATION_CHECK_P (insn));
8171 if (HAS_INTERNAL_DEP (insn)
8172 || SCHED_GROUP_P (insn))
8173 return -1;
8175 return sched_speculate_insn (insn, request, new_pat);
8178 /* Print some information about block BB, which starts with HEAD and
8179    ends with TAIL, before scheduling it.
8180    I is zero if the scheduler is about to start with a fresh ebb. */
8181 static void
8182 dump_new_block_header (int i, basic_block bb, rtx_insn *head, rtx_insn *tail)
8184 if (!i)
8185 fprintf (sched_dump,
8186 ";; ======================================================\n");
8187 else
8188 fprintf (sched_dump,
8189 ";; =====================ADVANCING TO=====================\n");
8190 fprintf (sched_dump,
8191 ";; -- basic block %d from %d to %d -- %s reload\n",
8192 bb->index, INSN_UID (head), INSN_UID (tail),
8193 (reload_completed ? "after" : "before"));
8194 fprintf (sched_dump,
8195 ";; ======================================================\n");
8196 fprintf (sched_dump, "\n");
8199 /* Unlink basic block notes and labels and save them, so they
8200    can easily be restored. We unlink basic block notes in EBBs to
8201    provide back-compatibility with the previous code, as target backends
8202    assume that there will be only instructions between
8203    current_sched_info->{head and tail}. We restore these notes as soon
8204    as we can.
8205    FIRST (LAST) is the first (last) basic block in the ebb.
8206    NB: In the usual case (FIRST == LAST) nothing is actually done. */
8207 void
8208 unlink_bb_notes (basic_block first, basic_block last)
8210 /* We DON'T unlink basic block notes of the first block in the ebb. */
8211 if (first == last)
8212 return;
8214 bb_header = XNEWVEC (rtx_insn *, last_basic_block_for_fn (cfun));
8216 /* Make a sentinel. */
8217 if (last->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
8218 bb_header[last->next_bb->index] = 0;
8220 first = first->next_bb;
8223 rtx_insn *prev, *label, *note, *next;
8225 label = BB_HEAD (last);
8226 if (LABEL_P (label))
8227 note = NEXT_INSN (label);
8228 else
8229 note = label;
8230 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
8232 prev = PREV_INSN (label);
8233 next = NEXT_INSN (note);
8234 gcc_assert (prev && next);
8236 SET_NEXT_INSN (prev) = next;
8237 SET_PREV_INSN (next) = prev;
8239 bb_header[last->index] = label;
8241 if (last == first)
8242 break;
8244 last = last->prev_bb;
8246 while (1);
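/* The loop above splices each block's label/basic-block-note pair out
   of the doubly linked insn chain and records the detached label in
   bb_header[], indexed by block number; the detached insns keep their
   own PREV/NEXT pointers, which is what lets restore_bb_notes below
   splice them back in.  */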
8249 /* Restore basic block notes.
8250 FIRST is the first basic block in the ebb. */
8251 static void
8252 restore_bb_notes (basic_block first)
8254 if (!bb_header)
8255 return;
8257 /* We DON'T unlink basic block notes of the first block in the ebb. */
8258 first = first->next_bb;
8259 /* Remember: FIRST is now actually the second basic block in the ebb. */
8261 while (first != EXIT_BLOCK_PTR_FOR_FN (cfun)
8262 && bb_header[first->index])
8264 rtx_insn *prev, *label, *note, *next;
8266 label = bb_header[first->index];
8267 prev = PREV_INSN (label);
8268 next = NEXT_INSN (prev);
8270 if (LABEL_P (label))
8271 note = NEXT_INSN (label);
8272 else
8273 note = label;
8274 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
8276 bb_header[first->index] = 0;
8278 SET_NEXT_INSN (prev) = label;
8279 SET_NEXT_INSN (note) = next;
8280 SET_PREV_INSN (next) = note;
8282 first = first->next_bb;
8285 free (bb_header);
8286 bb_header = 0;
8289 /* Helper function.
8290    Fix the CFG after both intra- and inter-block movement of a
8291    control_flow_insn_p JUMP. */
8292 static void
8293 fix_jump_move (rtx_insn *jump)
8295 basic_block bb, jump_bb, jump_bb_next;
8297 bb = BLOCK_FOR_INSN (PREV_INSN (jump));
8298 jump_bb = BLOCK_FOR_INSN (jump);
8299 jump_bb_next = jump_bb->next_bb;
8301 gcc_assert (common_sched_info->sched_pass_id == SCHED_EBB_PASS
8302 || IS_SPECULATION_BRANCHY_CHECK_P (jump));
8304 if (!NOTE_INSN_BASIC_BLOCK_P (BB_END (jump_bb_next)))
8305 /* If jump_bb_next is not empty. */
8306 BB_END (jump_bb) = BB_END (jump_bb_next);
8308 if (BB_END (bb) != PREV_INSN (jump))
8309 /* Then there are instructions after jump that should be moved
8310    to jump_bb_next. */
8311 BB_END (jump_bb_next) = BB_END (bb);
8312 else
8313 /* Otherwise jump_bb_next is empty. */
8314 BB_END (jump_bb_next) = NEXT_INSN (BB_HEAD (jump_bb_next));
8316 /* To make the assertion in move_insn happy. */
8317 BB_END (bb) = PREV_INSN (jump);
8319 update_bb_for_insn (jump_bb_next);
8322 /* Fix CFG after interblock movement of control_flow_insn_p JUMP. */
8323 static void
8324 move_block_after_check (rtx_insn *jump)
8326 basic_block bb, jump_bb, jump_bb_next;
8327 vec<edge, va_gc> *t;
8329 bb = BLOCK_FOR_INSN (PREV_INSN (jump));
8330 jump_bb = BLOCK_FOR_INSN (jump);
8331 jump_bb_next = jump_bb->next_bb;
8333 update_bb_for_insn (jump_bb);
8335 gcc_assert (IS_SPECULATION_CHECK_P (jump)
8336 || IS_SPECULATION_CHECK_P (BB_END (jump_bb_next)));
8338 unlink_block (jump_bb_next);
8339 link_block (jump_bb_next, bb);
8341 t = bb->succs;
8342 bb->succs = 0;
8343 move_succs (&(jump_bb->succs), bb);
8344 move_succs (&(jump_bb_next->succs), jump_bb);
8345 move_succs (&t, jump_bb_next);
8347 df_mark_solutions_dirty ();
8349 common_sched_info->fix_recovery_cfg
8350 (bb->index, jump_bb->index, jump_bb_next->index);
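/* The three move_succs calls above rotate the successor vectors to
   match the new block order: BB receives JUMP_BB's successors, JUMP_BB
   receives JUMP_BB_NEXT's, and JUMP_BB_NEXT receives BB's original
   ones (saved in T).  */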
8353 /* Helper function for move_block_after_check.
8354    This function attaches the edge vector pointed to by SUCCSP to
8355    block TO. */
8356 static void
8357 move_succs (vec<edge, va_gc> **succsp, basic_block to)
8359 edge e;
8360 edge_iterator ei;
8362 gcc_assert (to->succs == 0);
8364 to->succs = *succsp;
8366 FOR_EACH_EDGE (e, ei, to->succs)
8367 e->src = to;
8369 *succsp = 0;
8372 /* Remove INSN from the instruction stream.
8373    INSN should not have any dependencies. */
8374 static void
8375 sched_remove_insn (rtx_insn *insn)
8377 sd_finish_insn (insn);
8379 change_queue_index (insn, QUEUE_NOWHERE);
8380 current_sched_info->add_remove_insn (insn, 1);
8381 delete_insn (insn);
8384 /* Clear the priorities of all instructions that are forward dependent
8385    on INSN. Store in the vector pointed to by ROOTS_PTR the insns on
8386    which priority () should be invoked to reinitialize all cleared priorities. */
8387 static void
8388 clear_priorities (rtx_insn *insn, rtx_vec_t *roots_ptr)
8390 sd_iterator_def sd_it;
8391 dep_t dep;
8392 bool insn_is_root_p = true;
8394 gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
8396 FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
8398 rtx_insn *pro = DEP_PRO (dep);
8400 if (INSN_PRIORITY_STATUS (pro) >= 0
8401 && QUEUE_INDEX (insn) != QUEUE_SCHEDULED)
8403 /* If DEP doesn't contribute to priority then INSN itself should
8404 be added to priority roots. */
8405 if (contributes_to_priority_p (dep))
8406 insn_is_root_p = false;
8408 INSN_PRIORITY_STATUS (pro) = -1;
8409 clear_priorities (pro, roots_ptr);
8413 if (insn_is_root_p)
8414 roots_ptr->safe_push (insn);
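/* Roughly speaking, clear_priorities above and calc_priorities below
   work as a pair: the backward walk marks the cached priority of every
   affected producer as unknown (-1) and collects the upstream-most
   insns as roots; calling priority () on those roots then recomputes
   the cleared values on demand on the way back down the dependency
   graph.  */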
8417 /* Recompute the priorities of instructions whose priorities might have
8418    been changed. ROOTS is a vector of instructions whose priority computation
8419    will trigger initialization of all cleared priorities. */
8420 static void
8421 calc_priorities (rtx_vec_t roots)
8423 int i;
8424 rtx_insn *insn;
8426 FOR_EACH_VEC_ELT (roots, i, insn)
8427 priority (insn);
8431 /* Add dependences between JUMP and other instructions in the recovery
8432    block. INSN is the first insn in the recovery block. */
8433 static void
8434 add_jump_dependencies (rtx_insn *insn, rtx_insn *jump)
8438 insn = NEXT_INSN (insn);
8439 if (insn == jump)
8440 break;
8442 if (dep_list_size (insn, SD_LIST_FORW) == 0)
8444 dep_def _new_dep, *new_dep = &_new_dep;
8446 init_dep (new_dep, insn, jump, REG_DEP_ANTI);
8447 sd_add_dep (new_dep, false);
8450 while (1);
8452 gcc_assert (!sd_lists_empty_p (jump, SD_LIST_BACK));
8455 /* Extend data structures for logical insn UID. */
8456 void
8457 sched_extend_luids (void)
8459 int new_luids_max_uid = get_max_uid () + 1;
8461 sched_luids.safe_grow_cleared (new_luids_max_uid);
8464 /* Initialize LUID for INSN. */
8465 void
8466 sched_init_insn_luid (rtx_insn *insn)
8468 int i = INSN_P (insn) ? 1 : common_sched_info->luid_for_non_insn (insn);
8469 int luid;
8471 if (i >= 0)
8473 luid = sched_max_luid;
8474 sched_max_luid += i;
8476 else
8477 luid = -1;
8479 SET_INSN_LUID (insn, luid);
8482 /* Initialize luids for BBS.
8483 The hook common_sched_info->luid_for_non_insn () is used to determine
8484 if notes, labels, etc. need luids. */
8485 void
8486 sched_init_luids (bb_vec_t bbs)
8488 int i;
8489 basic_block bb;
8491 sched_extend_luids ();
8492 FOR_EACH_VEC_ELT (bbs, i, bb)
8494 rtx_insn *insn;
8496 FOR_BB_INSNS (bb, insn)
8497 sched_init_insn_luid (insn);
8501 /* Free LUIDs. */
8502 void
8503 sched_finish_luids (void)
8505 sched_luids.release ();
8506 sched_max_luid = 1;
8509 /* Return logical uid of INSN. Helpful while debugging. */
8510 int
8511 insn_luid (rtx_insn *insn)
8513 return INSN_LUID (insn);
8516 /* Extend per insn data in the target. */
8517 void
8518 sched_extend_target (void)
8520 if (targetm.sched.h_i_d_extended)
8521 targetm.sched.h_i_d_extended ();
8524 /* Extend the global scheduler structures (those that live across calls to
8525    schedule_block) to include information about just-emitted insns. */
8526 static void
8527 extend_h_i_d (void)
8529 int reserve = (get_max_uid () + 1 - h_i_d.length ());
8530 if (reserve > 0
8531 && ! h_i_d.space (reserve))
8533 h_i_d.safe_grow_cleared (3 * get_max_uid () / 2);
8534 sched_extend_target ();
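/* Note that h_i_d grows to 3 * get_max_uid () / 2 rather than to the
   exact requirement; the slack amortizes repeated extensions while
   insns are being emitted one at a time (e.g. from haifa_init_insn).  */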
8538 /* Initialize the h_i_d entry of INSN with default values.
8539    Values that are not explicitly initialized here hold zero. */
8540 static void
8541 init_h_i_d (rtx_insn *insn)
8543 if (INSN_LUID (insn) > 0)
8545 INSN_COST (insn) = -1;
8546 QUEUE_INDEX (insn) = QUEUE_NOWHERE;
8547 INSN_TICK (insn) = INVALID_TICK;
8548 INSN_EXACT_TICK (insn) = INVALID_TICK;
8549 INTER_TICK (insn) = INVALID_TICK;
8550 TODO_SPEC (insn) = HARD_DEP;
8554 /* Initialize haifa_insn_data for BBS. */
8555 void
8556 haifa_init_h_i_d (bb_vec_t bbs)
8558 int i;
8559 basic_block bb;
8561 extend_h_i_d ();
8562 FOR_EACH_VEC_ELT (bbs, i, bb)
8564 rtx_insn *insn;
8566 FOR_BB_INSNS (bb, insn)
8567 init_h_i_d (insn);
8571 /* Finalize haifa_insn_data. */
8572 void
8573 haifa_finish_h_i_d (void)
8575 int i;
8576 haifa_insn_data_t data;
8577 struct reg_use_data *use, *next;
8579 FOR_EACH_VEC_ELT (h_i_d, i, data)
8581 free (data->max_reg_pressure);
8582 free (data->reg_pressure);
8583 for (use = data->reg_use_list; use != NULL; use = next)
8585 next = use->next_insn_use;
8586 free (use);
8589 h_i_d.release ();
8592 /* Init data for the new insn INSN. */
8593 static void
8594 haifa_init_insn (rtx_insn *insn)
8596 gcc_assert (insn != NULL);
8598 sched_extend_luids ();
8599 sched_init_insn_luid (insn);
8600 sched_extend_target ();
8601 sched_deps_init (false);
8602 extend_h_i_d ();
8603 init_h_i_d (insn);
8605 if (adding_bb_to_current_region_p)
8607 sd_init_insn (insn);
8609 /* Extend dependency caches by one element. */
8610 extend_dependency_caches (1, false);
8612 if (sched_pressure != SCHED_PRESSURE_NONE)
8613 init_insn_reg_pressure_info (insn);
8616 /* Init data for the new basic block BB which comes after AFTER. */
8617 static void
8618 haifa_init_only_bb (basic_block bb, basic_block after)
8620 gcc_assert (bb != NULL);
8622 sched_init_bbs ();
8624 if (common_sched_info->add_block)
8625 /* This changes only data structures of the front-end. */
8626 common_sched_info->add_block (bb, after);
8629 /* A generic version of sched_split_block (). */
8630 basic_block
8631 sched_split_block_1 (basic_block first_bb, rtx after)
8633 edge e;
8635 e = split_block (first_bb, after);
8636 gcc_assert (e->src == first_bb);
8638 /* sched_split_block emits note if *check == BB_END. Probably it
8639 is better to rip that note off. */
8641 return e->dest;
8644 /* A generic version of sched_create_empty_bb (). */
8645 basic_block
8646 sched_create_empty_bb_1 (basic_block after)
8648 return create_empty_bb (after);
8651 /* Insert PAT as an INSN into the schedule and update the necessary data
8652 structures to account for it. */
8653 rtx_insn *
8654 sched_emit_insn (rtx pat)
8656 rtx_insn *insn = emit_insn_before (pat, first_nonscheduled_insn ());
8657 haifa_init_insn (insn);
8659 if (current_sched_info->add_remove_insn)
8660 current_sched_info->add_remove_insn (insn, 0);
8662 (*current_sched_info->begin_schedule_ready) (insn);
8663 scheduled_insns.safe_push (insn);
8665 last_scheduled_insn = insn;
8666 return insn;
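/* Note that sched_emit_insn does more than add the insn to the region:
   the new insn is immediately treated as scheduled (pushed onto
   scheduled_insns and recorded in last_scheduled_insn), so it takes its
   place in the current schedule right away.  */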
8669 /* This function returns a candidate satisfying dispatch constraints from
8670 the ready list. */
8672 static rtx_insn *
8673 ready_remove_first_dispatch (struct ready_list *ready)
8675 int i;
8676 rtx_insn *insn = ready_element (ready, 0);
8678 if (ready->n_ready == 1
8679 || !INSN_P (insn)
8680 || INSN_CODE (insn) < 0
8681 || !active_insn_p (insn)
8682 || targetm.sched.dispatch (insn, FITS_DISPATCH_WINDOW))
8683 return ready_remove_first (ready);
8685 for (i = 1; i < ready->n_ready; i++)
8687 insn = ready_element (ready, i);
8689 if (!INSN_P (insn)
8690 || INSN_CODE (insn) < 0
8691 || !active_insn_p (insn))
8692 continue;
8694 if (targetm.sched.dispatch (insn, FITS_DISPATCH_WINDOW))
8696 /* Return the i-th element of ready. */
8697 insn = ready_remove (ready, i);
8698 return insn;
8702 if (targetm.sched.dispatch (NULL_RTX, DISPATCH_VIOLATION))
8703 return ready_remove_first (ready);
8705 for (i = 1; i < ready->n_ready; i++)
8707 insn = ready_element (ready, i);
8709 if (!INSN_P (insn)
8710 || INSN_CODE (insn) < 0
8711 || !active_insn_p (insn))
8712 continue;
8714 /* Return i-th element of ready. */
8715 if (targetm.sched.dispatch (insn, IS_CMP))
8716 return ready_remove (ready, i);
8719 return ready_remove_first (ready);
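/* The selection above tries, in order: the head of the ready list if it
   already fits the dispatch window, any other ready insn that fits, the
   head again if the target reports a dispatch violation, and finally
   any comparison (IS_CMP) insn, before falling back to the head of the
   list.  */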
8722 /* Get the number of ready insns in the ready list. */
8724 int
8725 number_in_ready (void)
8727 return ready.n_ready;
8730 /* Get the I-th element of the ready list. */
8732 rtx_insn *
8733 get_ready_element (int i)
8735 return ready_element (&ready, i);
8738 #endif /* INSN_SCHEDULING */