gcc/haifa-sched.c
1 /* Instruction scheduling pass.
2 Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000,
3 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010
4 Free Software Foundation, Inc.
5 Contributed by Michael Tiemann (tiemann@cygnus.com) Enhanced by,
6 and currently maintained by, Jim Wilson (wilson@cygnus.com)
8 This file is part of GCC.
10 GCC is free software; you can redistribute it and/or modify it under
11 the terms of the GNU General Public License as published by the Free
12 Software Foundation; either version 3, or (at your option) any later
13 version.
15 GCC is distributed in the hope that it will be useful, but WITHOUT ANY
16 WARRANTY; without even the implied warranty of MERCHANTABILITY or
17 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
18 for more details.
20 You should have received a copy of the GNU General Public License
21 along with GCC; see the file COPYING3. If not see
22 <http://www.gnu.org/licenses/>. */
24 /* Instruction scheduling pass. This file, along with sched-deps.c,
25      contains the generic parts.  The actual entry point for the
26      normal instruction scheduling pass is found in sched-rgn.c.
28 We compute insn priorities based on data dependencies. Flow
29 analysis only creates a fraction of the data-dependencies we must
30 observe: namely, only those dependencies which the combiner can be
31 expected to use. For this pass, we must therefore create the
32 remaining dependencies we need to observe: register dependencies,
33 memory dependencies, dependencies to keep function calls in order,
34 and the dependence between a conditional branch and the setting of
35      condition codes; all of these are dealt with here.
37 The scheduler first traverses the data flow graph, starting with
38 the last instruction, and proceeding to the first, assigning values
39 to insn_priority as it goes. This sorts the instructions
40 topologically by data dependence.
42 Once priorities have been established, we order the insns using
43 list scheduling. This works as follows: starting with a list of
44 all the ready insns, and sorted according to priority number, we
45 schedule the insn from the end of the list by placing its
46 predecessors in the list according to their priority order. We
47 consider this insn scheduled by setting the pointer to the "end" of
48 the list to point to the previous insn. When an insn has no
49 predecessors, we either queue it until sufficient time has elapsed
50 or add it to the ready list. As the instructions are scheduled or
51 when stalls are introduced, the queue advances and dumps insns into
52 the ready list. When all insns down to the lowest priority have
53 been scheduled, the critical path of the basic block has been made
54 as short as possible. The remaining insns are then scheduled in
55 remaining slots.
57 The following list shows the order in which we want to break ties
58 among insns in the ready list:
60 1. choose insn with the longest path to end of bb, ties
61 broken by
62 2. choose insn with least contribution to register pressure,
63 ties broken by
64      3.  prefer in-block over interblock motion, ties broken by
65      4.  prefer useful over speculative motion, ties broken by
66 5. choose insn with largest control flow probability, ties
67 broken by
68 6. choose insn with the least dependences upon the previously
69 scheduled insn, or finally
70      7.  choose the insn which has the most insns dependent on it, or finally
71 8. choose insn with lowest UID.
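    (These tie-breaking heuristics correspond to rank_for_schedule below,
    most of them gated by the flag_sched_*_heuristic flags.)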
73 Memory references complicate matters. Only if we can be certain
74 that memory references are not part of the data dependency graph
75 (via true, anti, or output dependence), can we move operations past
76      memory references.  To a first approximation, reads can be done
77 independently, while writes introduce dependencies. Better
78 approximations will yield fewer dependencies.
80 Before reload, an extended analysis of interblock data dependences
81 is required for interblock scheduling. This is performed in
82 compute_block_backward_dependences ().
84 Dependencies set up by memory references are treated in exactly the
85 same way as other dependencies, by using insn backward dependences
86 INSN_BACK_DEPS. INSN_BACK_DEPS are translated into forward dependences
87      INSN_FORW_DEPS, for the purpose of forward list scheduling.
89 Having optimized the critical path, we may have also unduly
90 extended the lifetimes of some registers. If an operation requires
91 that constants be loaded into registers, it is certainly desirable
92 to load those constants as early as necessary, but no earlier.
93 I.e., it will not do to load up a bunch of registers at the
94 beginning of a basic block only to use them at the end, if they
95 could be loaded later, since this may result in excessive register
96 utilization.
98 Note that since branches are never in basic blocks, but only end
99 basic blocks, this pass will not move branches. But that is ok,
100 since we can use GNU's delayed branch scheduling pass to take care
101 of this case.
103 Also note that no further optimizations based on algebraic
104 identities are performed, so this pass would be a good one to
105 perform instruction splitting, such as breaking up a multiply
106 instruction into shifts and adds where that is profitable.
108 Given the memory aliasing analysis that this pass should perform,
109 it should be possible to remove redundant stores to memory, and to
110 load values from registers instead of hitting memory.
112 Before reload, speculative insns are moved only if a 'proof' exists
113 that no exception will be caused by this, and if no live registers
114 exist that inhibit the motion (live registers constraints are not
115 represented by data dependence edges).
117 This pass must update information that subsequent passes expect to
118 be correct. Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
119 reg_n_calls_crossed, and reg_live_length. Also, BB_HEAD, BB_END.
121 The information in the line number notes is carefully retained by
122 this pass. Notes that refer to the starting and ending of
123 exception regions are also carefully retained by this pass. All
124 other NOTE insns are grouped in their same relative order at the
125 beginning of basic blocks and regions that have been scheduled. */
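/* Illustrative sketch, added for clarity and not part of the original
   comment: with the structures described above, one cycle of the list
   scheduler can be pictured roughly as follows, using helpers defined
   later in this file (the real loop lives in schedule_block, driven from
   sched-rgn.c / sched-ebb.c, and also handles debug insns, stalls and
   target hooks):

       queue_to_ready (&ready);        Q -> R once enough cycles have passed
       ready_sort (&ready);            sort R using rank_for_schedule
       insn = choose_ready (...);      pick the best insn for this cycle
       schedule_insn (insn);           R -> S; resolving forward deps moves
                                       successors from P to R or to Q
       advance_one_cycle ();           when nothing more can issue  */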
127 #include "config.h"
128 #include "system.h"
129 #include "coretypes.h"
130 #include "tm.h"
131 #include "toplev.h"
132 #include "rtl.h"
133 #include "tm_p.h"
134 #include "hard-reg-set.h"
135 #include "regs.h"
136 #include "function.h"
137 #include "flags.h"
138 #include "insn-config.h"
139 #include "insn-attr.h"
140 #include "except.h"
141 #include "toplev.h"
142 #include "recog.h"
143 #include "sched-int.h"
144 #include "target.h"
145 #include "output.h"
146 #include "params.h"
147 #include "vecprim.h"
148 #include "dbgcnt.h"
149 #include "cfgloop.h"
150 #include "ira.h"
152 #ifdef INSN_SCHEDULING
154 /* issue_rate is the number of insns that can be scheduled in the same
155 machine cycle. It can be defined in the config/mach/mach.h file,
156 otherwise we set it to 1. */
158 int issue_rate;
160 /* sched-verbose controls the amount of debugging output the
161 scheduler prints. It is controlled by -fsched-verbose=N:
162     N>0 and no -dSR : the output is directed to stderr.
163 N>=10 will direct the printouts to stderr (regardless of -dSR).
164 N=1: same as -dSR.
165 N=2: bb's probabilities, detailed ready list info, unit/insn info.
166 N=3: rtl at abort point, control-flow, regions info.
167 N=5: dependences info. */
169 static int sched_verbose_param = 0;
170 int sched_verbose = 0;
172  /* Debugging file.  All printouts are sent to sched_dump, which is
173     always set, either to stderr, or to the dump listing file (-dRS).  */
174 FILE *sched_dump = 0;
176 /* fix_sched_param() is called from toplev.c upon detection
177 of the -fsched-verbose=N option. */
179 void
180 fix_sched_param (const char *param, const char *val)
182 if (!strcmp (param, "verbose"))
183 sched_verbose_param = atoi (val);
184 else
185 warning (0, "fix_sched_param: unknown param: %s", param);
188 /* This is a placeholder for the scheduler parameters common
189 to all schedulers. */
190 struct common_sched_info_def *common_sched_info;
192 #define INSN_TICK(INSN) (HID (INSN)->tick)
193 #define INTER_TICK(INSN) (HID (INSN)->inter_tick)
195 /* If INSN_TICK of an instruction is equal to INVALID_TICK,
196 then it should be recalculated from scratch. */
197 #define INVALID_TICK (-(max_insn_queue_index + 1))
198 /* The minimal value of the INSN_TICK of an instruction. */
199 #define MIN_TICK (-max_insn_queue_index)
201 /* Issue points are used to distinguish between instructions in max_issue ().
202 For now, all instructions are equally good. */
203 #define ISSUE_POINTS(INSN) 1
205 /* List of important notes we must keep around. This is a pointer to the
206 last element in the list. */
207 rtx note_list;
209 static struct spec_info_def spec_info_var;
210 /* Description of the speculative part of the scheduling.
211 If NULL - no speculation. */
212 spec_info_t spec_info = NULL;
214  /* True if a recovery block was added during scheduling of the current
215     block.  Used to determine if we need to fix INSN_TICKs.  */
216 static bool haifa_recovery_bb_recently_added_p;
218  /* True if a recovery block was added during this scheduling pass.
219     Used to determine if we should have empty memory pools of dependencies
220     after finishing the current region.  */
221 bool haifa_recovery_bb_ever_added_p;
223 /* Counters of different types of speculative instructions. */
224 static int nr_begin_data, nr_be_in_data, nr_begin_control, nr_be_in_control;
226 /* Array used in {unlink, restore}_bb_notes. */
227 static rtx *bb_header = 0;
229 /* Basic block after which recovery blocks will be created. */
230 static basic_block before_recovery;
232 /* Basic block just before the EXIT_BLOCK and after recovery, if we have
233 created it. */
234 basic_block after_recovery;
236 /* FALSE if we add bb to another region, so we don't need to initialize it. */
237 bool adding_bb_to_current_region_p = true;
239 /* Queues, etc. */
241 /* An instruction is ready to be scheduled when all insns preceding it
242 have already been scheduled. It is important to ensure that all
243 insns which use its result will not be executed until its result
244 has been computed. An insn is maintained in one of four structures:
246 (P) the "Pending" set of insns which cannot be scheduled until
247 their dependencies have been satisfied.
248 (Q) the "Queued" set of insns that can be scheduled when sufficient
249 time has passed.
250 (R) the "Ready" list of unscheduled, uncommitted insns.
251 (S) the "Scheduled" list of insns.
253 Initially, all insns are either "Pending" or "Ready" depending on
254 whether their dependencies are satisfied.
256 Insns move from the "Ready" list to the "Scheduled" list as they
257 are committed to the schedule. As this occurs, the insns in the
258 "Pending" list have their dependencies satisfied and move to either
259 the "Ready" list or the "Queued" set depending on whether
260 sufficient time has passed to make them ready. As time passes,
261 insns move from the "Queued" set to the "Ready" list.
263 The "Pending" list (P) are the insns in the INSN_FORW_DEPS of the
264 unscheduled insns, i.e., those that are ready, queued, and pending.
265 The "Queued" set (Q) is implemented by the variable `insn_queue'.
266 The "Ready" list (R) is implemented by the variables `ready' and
267 `n_ready'.
268 The "Scheduled" list (S) is the new insn chain built by this pass.
270 The transition (R->S) is implemented in the scheduling loop in
271 `schedule_block' when the best insn to schedule is chosen.
272 The transitions (P->R and P->Q) are implemented in `schedule_insn' as
273 insns move from the ready list to the scheduled list.
274     The transition (Q->R) is implemented in 'queue_to_ready' as time
275 passes or stalls are introduced. */
277 /* Implement a circular buffer to delay instructions until sufficient
278 time has passed. For the new pipeline description interface,
279 MAX_INSN_QUEUE_INDEX is a power of two minus one which is not less
280     than the maximal time of instruction execution computed by genattr.c
281     on the basis of the maximal time of functional unit reservations and
282     of getting a result.  This is the longest time an insn may be queued.  */
284 static rtx *insn_queue;
285 static int q_ptr = 0;
286 static int q_size = 0;
287 #define NEXT_Q(X) (((X)+1) & max_insn_queue_index)
288 #define NEXT_Q_AFTER(X, C) (((X)+C) & max_insn_queue_index)
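/* Illustrative note (assuming, say, max_insn_queue_index == 63): queuing an
   insn C == 3 cycles ahead selects slot NEXT_Q_AFTER (q_ptr, 3)
   == (q_ptr + 3) & 63, so the circular buffer index wraps with a single
   bitwise AND instead of a modulo operation; this works because
   max_insn_queue_index is a power of two minus one.  */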
290 #define QUEUE_SCHEDULED (-3)
291 #define QUEUE_NOWHERE (-2)
292 #define QUEUE_READY (-1)
293 /* QUEUE_SCHEDULED - INSN is scheduled.
294     QUEUE_NOWHERE   - INSN isn't scheduled yet and is neither in the
295     queue nor the ready list.
296 QUEUE_READY - INSN is in ready list.
297 N >= 0 - INSN queued for X [where NEXT_Q_AFTER (q_ptr, X) == N] cycles. */
299 #define QUEUE_INDEX(INSN) (HID (INSN)->queue_index)
301  /* The following variable describes all current and future
302     reservations of the processor units.  */
303 state_t curr_state;
305  /* The following variable is the size of the memory representing all
306 current and future reservations of the processor units. */
307 size_t dfa_state_size;
309 /* The following array is used to find the best insn from ready when
310 the automaton pipeline interface is used. */
311 char *ready_try = NULL;
313 /* The ready list. */
314 struct ready_list ready = {NULL, 0, 0, 0, 0};
316 /* The pointer to the ready list (to be removed). */
317 static struct ready_list *readyp = &ready;
319 /* Scheduling clock. */
320 static int clock_var;
322 static int may_trap_exp (const_rtx, int);
324  /* Nonzero iff the address is comprised of at most 1 register.  */
325 #define CONST_BASED_ADDRESS_P(x) \
326 (REG_P (x) \
327 || ((GET_CODE (x) == PLUS || GET_CODE (x) == MINUS \
328 || (GET_CODE (x) == LO_SUM)) \
329 && (CONSTANT_P (XEXP (x, 0)) \
330 || CONSTANT_P (XEXP (x, 1)))))
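/* For example (illustrative): (reg) and (plus (reg) (const_int 4)) satisfy
   CONST_BASED_ADDRESS_P, while (plus (reg) (reg)) does not, because neither
   operand of the PLUS is CONSTANT_P.  */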
332 /* Returns a class that insn with GET_DEST(insn)=x may belong to,
333 as found by analyzing insn's expression. */
336 static int haifa_luid_for_non_insn (rtx x);
338 /* Haifa version of sched_info hooks common to all headers. */
339 const struct common_sched_info_def haifa_common_sched_info =
341 NULL, /* fix_recovery_cfg */
342 NULL, /* add_block */
343 NULL, /* estimate_number_of_insns */
344 haifa_luid_for_non_insn, /* luid_for_non_insn */
345 SCHED_PASS_UNKNOWN /* sched_pass_id */
348 const struct sched_scan_info_def *sched_scan_info;
350 /* Mapping from instruction UID to its Logical UID. */
351 VEC (int, heap) *sched_luids = NULL;
353 /* Next LUID to assign to an instruction. */
354 int sched_max_luid = 1;
356 /* Haifa Instruction Data. */
357 VEC (haifa_insn_data_def, heap) *h_i_d = NULL;
359 void (* sched_init_only_bb) (basic_block, basic_block);
361 /* Split block function. Different schedulers might use different functions
362     to keep their internal data consistent.  */
363 basic_block (* sched_split_block) (basic_block, rtx);
365 /* Create empty basic block after the specified block. */
366 basic_block (* sched_create_empty_bb) (basic_block);
368 static int
369 may_trap_exp (const_rtx x, int is_store)
371 enum rtx_code code;
373 if (x == 0)
374 return TRAP_FREE;
375 code = GET_CODE (x);
376 if (is_store)
378 if (code == MEM && may_trap_p (x))
379 return TRAP_RISKY;
380 else
381 return TRAP_FREE;
383 if (code == MEM)
385 /* The insn uses memory: a volatile load. */
386 if (MEM_VOLATILE_P (x))
387 return IRISKY;
388 /* An exception-free load. */
389 if (!may_trap_p (x))
390 return IFREE;
391 /* A load with 1 base register, to be further checked. */
392 if (CONST_BASED_ADDRESS_P (XEXP (x, 0)))
393 return PFREE_CANDIDATE;
394 /* No info on the load, to be further checked. */
395 return PRISKY_CANDIDATE;
397 else
399 const char *fmt;
400 int i, insn_class = TRAP_FREE;
402 /* Neither store nor load, check if it may cause a trap. */
403 if (may_trap_p (x))
404 return TRAP_RISKY;
405 /* Recursive step: walk the insn... */
406 fmt = GET_RTX_FORMAT (code);
407 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
409 if (fmt[i] == 'e')
411 int tmp_class = may_trap_exp (XEXP (x, i), is_store);
412 insn_class = WORST_CLASS (insn_class, tmp_class);
414 else if (fmt[i] == 'E')
416 int j;
417 for (j = 0; j < XVECLEN (x, i); j++)
419 int tmp_class = may_trap_exp (XVECEXP (x, i, j), is_store);
420 insn_class = WORST_CLASS (insn_class, tmp_class);
421 if (insn_class == TRAP_RISKY || insn_class == IRISKY)
422 break;
425 if (insn_class == TRAP_RISKY || insn_class == IRISKY)
426 break;
428 return insn_class;
432 /* Classifies rtx X of an insn for the purpose of verifying that X can be
433 executed speculatively (and consequently the insn can be moved
434 speculatively), by examining X, returning:
435 TRAP_RISKY: store, or risky non-load insn (e.g. division by variable).
436 TRAP_FREE: non-load insn.
437 IFREE: load from a globally safe location.
438 IRISKY: volatile load.
439     PFREE_CANDIDATE, PRISKY_CANDIDATE: loads that need to be checked for
440 being either PFREE or PRISKY. */
442 static int
443 haifa_classify_rtx (const_rtx x)
445 int tmp_class = TRAP_FREE;
446 int insn_class = TRAP_FREE;
447 enum rtx_code code;
449 if (GET_CODE (x) == PARALLEL)
451 int i, len = XVECLEN (x, 0);
453 for (i = len - 1; i >= 0; i--)
455 tmp_class = haifa_classify_rtx (XVECEXP (x, 0, i));
456 insn_class = WORST_CLASS (insn_class, tmp_class);
457 if (insn_class == TRAP_RISKY || insn_class == IRISKY)
458 break;
461 else
463 code = GET_CODE (x);
464 switch (code)
466 case CLOBBER:
467 /* Test if it is a 'store'. */
468 tmp_class = may_trap_exp (XEXP (x, 0), 1);
469 break;
470 case SET:
471 /* Test if it is a store. */
472 tmp_class = may_trap_exp (SET_DEST (x), 1);
473 if (tmp_class == TRAP_RISKY)
474 break;
475 /* Test if it is a load. */
476 tmp_class =
477 WORST_CLASS (tmp_class,
478 may_trap_exp (SET_SRC (x), 0));
479 break;
480 case COND_EXEC:
481 tmp_class = haifa_classify_rtx (COND_EXEC_CODE (x));
482 if (tmp_class == TRAP_RISKY)
483 break;
484 tmp_class = WORST_CLASS (tmp_class,
485 may_trap_exp (COND_EXEC_TEST (x), 0));
486 break;
487 case TRAP_IF:
488 tmp_class = TRAP_RISKY;
489 break;
490 default:;
492 insn_class = tmp_class;
495 return insn_class;
499 haifa_classify_insn (const_rtx insn)
501 return haifa_classify_rtx (PATTERN (insn));
504 /* Forward declarations. */
506 static int priority (rtx);
507 static int rank_for_schedule (const void *, const void *);
508 static void swap_sort (rtx *, int);
509 static void queue_insn (rtx, int);
510 static int schedule_insn (rtx);
511 static void adjust_priority (rtx);
512 static void advance_one_cycle (void);
513 static void extend_h_i_d (void);
516 /* Notes handling mechanism:
517 =========================
518 Generally, NOTES are saved before scheduling and restored after scheduling.
519 The scheduler distinguishes between two types of notes:
521 (1) LOOP_BEGIN, LOOP_END, SETJMP, EHREGION_BEG, EHREGION_END notes:
522 Before scheduling a region, a pointer to the note is added to the insn
523 that follows or precedes it. (This happens as part of the data dependence
524 computation). After scheduling an insn, the pointer contained in it is
525 used for regenerating the corresponding note (in reemit_notes).
527 (2) All other notes (e.g. INSN_DELETED): Before scheduling a block,
528 these notes are put in a list (in rm_other_notes() and
529 unlink_other_notes ()). After scheduling the block, these notes are
530 inserted at the beginning of the block (in schedule_block()). */
532 static void ready_add (struct ready_list *, rtx, bool);
533 static rtx ready_remove_first (struct ready_list *);
535 static void queue_to_ready (struct ready_list *);
536 static int early_queue_to_ready (state_t, struct ready_list *);
538 static void debug_ready_list (struct ready_list *);
540 /* The following functions are used to implement multi-pass scheduling
541 on the first cycle. */
542 static rtx ready_remove (struct ready_list *, int);
543 static void ready_remove_insn (rtx);
545 static int choose_ready (struct ready_list *, rtx *);
547 static void fix_inter_tick (rtx, rtx);
548 static int fix_tick_ready (rtx);
549 static void change_queue_index (rtx, int);
551 /* The following functions are used to implement scheduling of data/control
552 speculative instructions. */
554 static void extend_h_i_d (void);
555 static void init_h_i_d (rtx);
556 static void generate_recovery_code (rtx);
557 static void process_insn_forw_deps_be_in_spec (rtx, rtx, ds_t);
558 static void begin_speculative_block (rtx);
559 static void add_to_speculative_block (rtx);
560 static void init_before_recovery (basic_block *);
561 static void create_check_block_twin (rtx, bool);
562 static void fix_recovery_deps (basic_block);
563 static void haifa_change_pattern (rtx, rtx);
564 static void dump_new_block_header (int, basic_block, rtx, rtx);
565 static void restore_bb_notes (basic_block);
566 static void fix_jump_move (rtx);
567 static void move_block_after_check (rtx);
568 static void move_succs (VEC(edge,gc) **, basic_block);
569 static void sched_remove_insn (rtx);
570 static void clear_priorities (rtx, rtx_vec_t *);
571 static void calc_priorities (rtx_vec_t);
572 static void add_jump_dependencies (rtx, rtx);
573 #ifdef ENABLE_CHECKING
574 static int has_edge_p (VEC(edge,gc) *, int);
575 static void check_cfg (rtx, rtx);
576 #endif
578 #endif /* INSN_SCHEDULING */
580 /* Point to state used for the current scheduling pass. */
581 struct haifa_sched_info *current_sched_info;
583 #ifndef INSN_SCHEDULING
584 void
585 schedule_insns (void)
588 #else
590  /* Do register pressure sensitive insn scheduling if the flag is
591     set.  */
592 bool sched_pressure_p;
594  /* Map regno -> its cover class.  The map is defined only when
595 SCHED_PRESSURE_P is true. */
596 enum reg_class *sched_regno_cover_class;
598  /* The current register pressure.  Only elements corresponding to cover
599 classes are defined. */
600 static int curr_reg_pressure[N_REG_CLASSES];
602 /* Saved value of the previous array. */
603 static int saved_reg_pressure[N_REG_CLASSES];
605  /* Registers live at the given scheduling point.  */
606 static bitmap curr_reg_live;
608 /* Saved value of the previous array. */
609 static bitmap saved_reg_live;
611 /* Registers mentioned in the current region. */
612 static bitmap region_ref_regs;
614  /* Initiate register pressure related info for scheduling the current
615     region.  Currently it only clears the registers mentioned in the
616 current region. */
617 void
618 sched_init_region_reg_pressure_info (void)
620 bitmap_clear (region_ref_regs);
623 /* Update current register pressure related info after birth (if
624 BIRTH_P) or death of register REGNO. */
625 static void
626 mark_regno_birth_or_death (int regno, bool birth_p)
628 enum reg_class cover_class;
630 cover_class = sched_regno_cover_class[regno];
631 if (regno >= FIRST_PSEUDO_REGISTER)
633 if (cover_class != NO_REGS)
635 if (birth_p)
637 bitmap_set_bit (curr_reg_live, regno);
638 curr_reg_pressure[cover_class]
639 += ira_reg_class_nregs[cover_class][PSEUDO_REGNO_MODE (regno)];
641 else
643 bitmap_clear_bit (curr_reg_live, regno);
644 curr_reg_pressure[cover_class]
645 -= ira_reg_class_nregs[cover_class][PSEUDO_REGNO_MODE (regno)];
649 else if (cover_class != NO_REGS
650 && ! TEST_HARD_REG_BIT (ira_no_alloc_regs, regno))
652 if (birth_p)
654 bitmap_set_bit (curr_reg_live, regno);
655 curr_reg_pressure[cover_class]++;
657 else
659 bitmap_clear_bit (curr_reg_live, regno);
660 curr_reg_pressure[cover_class]--;
665 /* Initiate current register pressure related info from living
666 registers given by LIVE. */
667 static void
668 initiate_reg_pressure_info (bitmap live)
670 int i;
671 unsigned int j;
672 bitmap_iterator bi;
674 for (i = 0; i < ira_reg_class_cover_size; i++)
675 curr_reg_pressure[ira_reg_class_cover[i]] = 0;
676 bitmap_clear (curr_reg_live);
677 EXECUTE_IF_SET_IN_BITMAP (live, 0, j, bi)
678 if (current_nr_blocks == 1 || bitmap_bit_p (region_ref_regs, j))
679 mark_regno_birth_or_death (j, true);
682 /* Mark registers in X as mentioned in the current region. */
683 static void
684 setup_ref_regs (rtx x)
686 int i, j, regno;
687 const RTX_CODE code = GET_CODE (x);
688 const char *fmt;
690 if (REG_P (x))
692 regno = REGNO (x);
693 if (regno >= FIRST_PSEUDO_REGISTER)
694 bitmap_set_bit (region_ref_regs, REGNO (x));
695 else
696 for (i = hard_regno_nregs[regno][GET_MODE (x)] - 1; i >= 0; i--)
697 bitmap_set_bit (region_ref_regs, regno + i);
698 return;
700 fmt = GET_RTX_FORMAT (code);
701 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
702 if (fmt[i] == 'e')
703 setup_ref_regs (XEXP (x, i));
704 else if (fmt[i] == 'E')
706 for (j = 0; j < XVECLEN (x, i); j++)
707 setup_ref_regs (XVECEXP (x, i, j));
711 /* Initiate current register pressure related info at the start of
712 basic block BB. */
713 static void
714 initiate_bb_reg_pressure_info (basic_block bb)
716 unsigned int i;
717 rtx insn;
719 if (current_nr_blocks > 1)
720 FOR_BB_INSNS (bb, insn)
721 if (INSN_P (insn))
722 setup_ref_regs (PATTERN (insn));
723 initiate_reg_pressure_info (df_get_live_in (bb));
724 #ifdef EH_RETURN_DATA_REGNO
725 if (bb_has_eh_pred (bb))
726 for (i = 0; ; ++i)
728 unsigned int regno = EH_RETURN_DATA_REGNO (i);
730 if (regno == INVALID_REGNUM)
731 break;
732 if (! bitmap_bit_p (df_get_live_in (bb), regno))
733 mark_regno_birth_or_death (regno, true);
735 #endif
738 /* Save current register pressure related info. */
739 static void
740 save_reg_pressure (void)
742 int i;
744 for (i = 0; i < ira_reg_class_cover_size; i++)
745 saved_reg_pressure[ira_reg_class_cover[i]]
746 = curr_reg_pressure[ira_reg_class_cover[i]];
747 bitmap_copy (saved_reg_live, curr_reg_live);
750 /* Restore saved register pressure related info. */
751 static void
752 restore_reg_pressure (void)
754 int i;
756 for (i = 0; i < ira_reg_class_cover_size; i++)
757 curr_reg_pressure[ira_reg_class_cover[i]]
758 = saved_reg_pressure[ira_reg_class_cover[i]];
759 bitmap_copy (curr_reg_live, saved_reg_live);
762 /* Return TRUE if the register is dying after its USE. */
763 static bool
764 dying_use_p (struct reg_use_data *use)
766 struct reg_use_data *next;
768 for (next = use->next_regno_use; next != use; next = next->next_regno_use)
769 if (NONDEBUG_INSN_P (next->insn)
770 && QUEUE_INDEX (next->insn) != QUEUE_SCHEDULED)
771 return false;
772 return true;
775 /* Print info about the current register pressure and its excess for
776 each cover class. */
777 static void
778 print_curr_reg_pressure (void)
780 int i;
781 enum reg_class cl;
783 fprintf (sched_dump, ";;\t");
784 for (i = 0; i < ira_reg_class_cover_size; i++)
786 cl = ira_reg_class_cover[i];
787 gcc_assert (curr_reg_pressure[cl] >= 0);
788 fprintf (sched_dump, " %s:%d(%d)", reg_class_names[cl],
789 curr_reg_pressure[cl],
790 curr_reg_pressure[cl] - ira_available_class_regs[cl]);
792 fprintf (sched_dump, "\n");
795 /* Pointer to the last instruction scheduled. Used by rank_for_schedule,
796 so that insns independent of the last scheduled insn will be preferred
797 over dependent instructions. */
799 static rtx last_scheduled_insn;
801  /* Cached cost of the instruction.  Use the function below to get the
802     cost of the insn.  -1 here means that the field is not initialized.  */
803 #define INSN_COST(INSN) (HID (INSN)->cost)
805 /* Compute cost of executing INSN.
806 This is the number of cycles between instruction issue and
807 instruction results. */
809 insn_cost (rtx insn)
811 int cost;
813 if (sel_sched_p ())
815 if (recog_memoized (insn) < 0)
816 return 0;
818 cost = insn_default_latency (insn);
819 if (cost < 0)
820 cost = 0;
822 return cost;
825 cost = INSN_COST (insn);
827 if (cost < 0)
829 /* A USE insn, or something else we don't need to
830 understand. We can't pass these directly to
831 result_ready_cost or insn_default_latency because it will
832 trigger a fatal error for unrecognizable insns. */
833 if (recog_memoized (insn) < 0)
835 INSN_COST (insn) = 0;
836 return 0;
838 else
840 cost = insn_default_latency (insn);
841 if (cost < 0)
842 cost = 0;
844 INSN_COST (insn) = cost;
848 return cost;
851 /* Compute cost of dependence LINK.
852 This is the number of cycles between instruction issue and
853 instruction results.
854 ??? We also use this function to call recog_memoized on all insns. */
856 dep_cost_1 (dep_t link, dw_t dw)
858 rtx insn = DEP_PRO (link);
859 rtx used = DEP_CON (link);
860 int cost;
862 /* A USE insn should never require the value used to be computed.
863 This allows the computation of a function's result and parameter
864 values to overlap the return and call. We don't care about the
865     dependence cost when only decreasing register pressure.  */
866 if (recog_memoized (used) < 0)
868 cost = 0;
869 recog_memoized (insn);
871 else
873 enum reg_note dep_type = DEP_TYPE (link);
875 cost = insn_cost (insn);
877 if (INSN_CODE (insn) >= 0)
879 if (dep_type == REG_DEP_ANTI)
880 cost = 0;
881 else if (dep_type == REG_DEP_OUTPUT)
883 cost = (insn_default_latency (insn)
884 - insn_default_latency (used));
885 if (cost <= 0)
886 cost = 1;
888 else if (bypass_p (insn))
889 cost = insn_latency (insn, used);
893 if (targetm.sched.adjust_cost_2)
894 cost = targetm.sched.adjust_cost_2 (used, (int) dep_type, insn, cost,
895 dw);
896 else if (targetm.sched.adjust_cost != NULL)
898 /* This variable is used for backward compatibility with the
899 targets. */
900 rtx dep_cost_rtx_link = alloc_INSN_LIST (NULL_RTX, NULL_RTX);
902       /* Make it self-cycled, so that if someone tries to walk over this
903          incomplete list they will be caught in an endless loop.  */
904 XEXP (dep_cost_rtx_link, 1) = dep_cost_rtx_link;
906 /* Targets use only REG_NOTE_KIND of the link. */
907 PUT_REG_NOTE_KIND (dep_cost_rtx_link, DEP_TYPE (link));
909 cost = targetm.sched.adjust_cost (used, dep_cost_rtx_link,
910 insn, cost);
912 free_INSN_LIST_node (dep_cost_rtx_link);
915 if (cost < 0)
916 cost = 0;
919 return cost;
922 /* Compute cost of dependence LINK.
923 This is the number of cycles between instruction issue and
924 instruction results. */
926 dep_cost (dep_t link)
928 return dep_cost_1 (link, 0);
931 /* Use this sel-sched.c friendly function in reorder2 instead of increasing
932 INSN_PRIORITY explicitly. */
933 void
934 increase_insn_priority (rtx insn, int amount)
936 if (!sel_sched_p ())
938 /* We're dealing with haifa-sched.c INSN_PRIORITY. */
939 if (INSN_PRIORITY_KNOWN (insn))
940 INSN_PRIORITY (insn) += amount;
942 else
944 /* In sel-sched.c INSN_PRIORITY is not kept up to date.
945 Use EXPR_PRIORITY instead. */
946 sel_add_to_insn_priority (insn, amount);
950 /* Return 'true' if DEP should be included in priority calculations. */
951 static bool
952 contributes_to_priority_p (dep_t dep)
954 if (DEBUG_INSN_P (DEP_CON (dep))
955 || DEBUG_INSN_P (DEP_PRO (dep)))
956 return false;
958 /* Critical path is meaningful in block boundaries only. */
959 if (!current_sched_info->contributes_to_priority (DEP_CON (dep),
960 DEP_PRO (dep)))
961 return false;
963 /* If flag COUNT_SPEC_IN_CRITICAL_PATH is set,
964 then speculative instructions will less likely be
965 scheduled. That is because the priority of
966 their producers will increase, and, thus, the
967 producers will more likely be scheduled, thus,
968 resolving the dependence. */
969 if (sched_deps_info->generate_spec_deps
970 && !(spec_info->flags & COUNT_SPEC_IN_CRITICAL_PATH)
971 && (DEP_STATUS (dep) & SPECULATIVE))
972 return false;
974 return true;
977 /* Compute the number of nondebug forward deps of an insn. */
979 static int
980 dep_list_size (rtx insn)
982 sd_iterator_def sd_it;
983 dep_t dep;
984 int dbgcount = 0, nodbgcount = 0;
986 if (!MAY_HAVE_DEBUG_INSNS)
987 return sd_lists_size (insn, SD_LIST_FORW);
989 FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
991 if (DEBUG_INSN_P (DEP_CON (dep)))
992 dbgcount++;
993 else if (!DEBUG_INSN_P (DEP_PRO (dep)))
994 nodbgcount++;
997 gcc_assert (dbgcount + nodbgcount == sd_lists_size (insn, SD_LIST_FORW));
999 return nodbgcount;
1002 /* Compute the priority number for INSN. */
1003 static int
1004 priority (rtx insn)
1006 if (! INSN_P (insn))
1007 return 0;
1009 /* We should not be interested in priority of an already scheduled insn. */
1010 gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
1012 if (!INSN_PRIORITY_KNOWN (insn))
1014 int this_priority = -1;
1016 if (dep_list_size (insn) == 0)
1017  /* ??? We should set INSN_PRIORITY to insn_cost when an insn has
1018 some forward deps but all of them are ignored by
1019 contributes_to_priority hook. At the moment we set priority of
1020 such insn to 0. */
1021 this_priority = insn_cost (insn);
1022 else
1024 rtx prev_first, twin;
1025 basic_block rec;
1027 /* For recovery check instructions we calculate priority slightly
1028     differently than for normal instructions.  Instead of walking
1029 through INSN_FORW_DEPS (check) list, we walk through
1030 INSN_FORW_DEPS list of each instruction in the corresponding
1031 recovery block. */
1033 /* Selective scheduling does not define RECOVERY_BLOCK macro. */
1034 rec = sel_sched_p () ? NULL : RECOVERY_BLOCK (insn);
1035 if (!rec || rec == EXIT_BLOCK_PTR)
1037 prev_first = PREV_INSN (insn);
1038 twin = insn;
1040 else
1042 prev_first = NEXT_INSN (BB_HEAD (rec));
1043 twin = PREV_INSN (BB_END (rec));
1048 sd_iterator_def sd_it;
1049 dep_t dep;
1051 FOR_EACH_DEP (twin, SD_LIST_FORW, sd_it, dep)
1053 rtx next;
1054 int next_priority;
1056 next = DEP_CON (dep);
1058 if (BLOCK_FOR_INSN (next) != rec)
1060 int cost;
1062 if (!contributes_to_priority_p (dep))
1063 continue;
1065 if (twin == insn)
1066 cost = dep_cost (dep);
1067 else
1069 struct _dep _dep1, *dep1 = &_dep1;
1071 init_dep (dep1, insn, next, REG_DEP_ANTI);
1073 cost = dep_cost (dep1);
1076 next_priority = cost + priority (next);
1078 if (next_priority > this_priority)
1079 this_priority = next_priority;
1083 twin = PREV_INSN (twin);
1085 while (twin != prev_first);
1088 if (this_priority < 0)
1090 gcc_assert (this_priority == -1);
1092 this_priority = insn_cost (insn);
1095 INSN_PRIORITY (insn) = this_priority;
1096 INSN_PRIORITY_STATUS (insn) = 1;
1099 return INSN_PRIORITY (insn);
1102 /* Macros and functions for keeping the priority queue sorted, and
1103 dealing with queuing and dequeuing of instructions. */
1105 #define SCHED_SORT(READY, N_READY) \
1106 do { if ((N_READY) == 2) \
1107 swap_sort (READY, N_READY); \
1108 else if ((N_READY) > 2) \
1109 qsort (READY, N_READY, sizeof (rtx), rank_for_schedule); } \
1110 while (0)
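/* Note added for clarity (not in the original sources): for a ready list of
   exactly two insns, swap_sort just places the possibly out-of-order element,
   which is cheaper than a qsort call; longer lists fall back to qsort with
   rank_for_schedule as the comparison function.  */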
1112 /* Setup info about the current register pressure impact of scheduling
1113 INSN at the current scheduling point. */
1114 static void
1115 setup_insn_reg_pressure_info (rtx insn)
1117 int i, change, before, after, hard_regno;
1118 int excess_cost_change;
1119 enum machine_mode mode;
1120 enum reg_class cl;
1121 struct reg_pressure_data *pressure_info;
1122 int *max_reg_pressure;
1123 struct reg_use_data *use;
1124 static int death[N_REG_CLASSES];
1126 excess_cost_change = 0;
1127 for (i = 0; i < ira_reg_class_cover_size; i++)
1128 death[ira_reg_class_cover[i]] = 0;
1129 for (use = INSN_REG_USE_LIST (insn); use != NULL; use = use->next_insn_use)
1130 if (dying_use_p (use))
1132 cl = sched_regno_cover_class[use->regno];
1133 if (use->regno < FIRST_PSEUDO_REGISTER)
1134 death[cl]++;
1135 else
1136 death[cl] += ira_reg_class_nregs[cl][PSEUDO_REGNO_MODE (use->regno)];
1138 pressure_info = INSN_REG_PRESSURE (insn);
1139 max_reg_pressure = INSN_MAX_REG_PRESSURE (insn);
1140 gcc_assert (pressure_info != NULL && max_reg_pressure != NULL);
1141 for (i = 0; i < ira_reg_class_cover_size; i++)
1143 cl = ira_reg_class_cover[i];
1144 gcc_assert (curr_reg_pressure[cl] >= 0);
1145 change = (int) pressure_info[i].set_increase - death[cl];
1146 before = MAX (0, max_reg_pressure[i] - ira_available_class_regs[cl]);
1147 after = MAX (0, max_reg_pressure[i] + change
1148 - ira_available_class_regs[cl]);
1149 hard_regno = ira_class_hard_regs[cl][0];
1150 gcc_assert (hard_regno >= 0);
1151 mode = reg_raw_mode[hard_regno];
1152 excess_cost_change += ((after - before)
1153 * (ira_memory_move_cost[mode][cl][0]
1154 + ira_memory_move_cost[mode][cl][1]));
1156 INSN_REG_PRESSURE_EXCESS_COST_CHANGE (insn) = excess_cost_change;
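  /* Note (inferred, not in the original sources): the product above prices
     each extra unit of register-pressure excess roughly as one spill, i.e.
     one memory store plus one later reload, using the IRA memory move costs
     for a representative mode of the class (taken from its first hard
     register).  */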
1159 /* Returns a positive value if x is preferred; returns a negative value if
1160 y is preferred. Should never return 0, since that will make the sort
1161 unstable. */
1163 static int
1164 rank_for_schedule (const void *x, const void *y)
1166 rtx tmp = *(const rtx *) y;
1167 rtx tmp2 = *(const rtx *) x;
1168 rtx last;
1169 int tmp_class, tmp2_class;
1170 int val, priority_val, info_val;
1172 if (MAY_HAVE_DEBUG_INSNS)
1174 /* Schedule debug insns as early as possible. */
1175 if (DEBUG_INSN_P (tmp) && !DEBUG_INSN_P (tmp2))
1176 return -1;
1177 else if (DEBUG_INSN_P (tmp2))
1178 return 1;
1181   /* The insn in a schedule group should be issued first.  */
1182 if (flag_sched_group_heuristic &&
1183 SCHED_GROUP_P (tmp) != SCHED_GROUP_P (tmp2))
1184 return SCHED_GROUP_P (tmp2) ? 1 : -1;
1186 /* Make sure that priority of TMP and TMP2 are initialized. */
1187 gcc_assert (INSN_PRIORITY_KNOWN (tmp) && INSN_PRIORITY_KNOWN (tmp2));
1189 if (sched_pressure_p)
1191 int diff;
1193 /* Prefer insn whose scheduling results in the smallest register
1194 pressure excess. */
1195 if ((diff = (INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp)
1196 + (INSN_TICK (tmp) > clock_var
1197 ? INSN_TICK (tmp) - clock_var : 0)
1198 - INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp2)
1199 - (INSN_TICK (tmp2) > clock_var
1200 ? INSN_TICK (tmp2) - clock_var : 0))) != 0)
1201 return diff;
1205 if (sched_pressure_p
1206 && (INSN_TICK (tmp2) > clock_var || INSN_TICK (tmp) > clock_var))
1208 if (INSN_TICK (tmp) <= clock_var)
1209 return -1;
1210 else if (INSN_TICK (tmp2) <= clock_var)
1211 return 1;
1212 else
1213 return INSN_TICK (tmp) - INSN_TICK (tmp2);
1215 /* Prefer insn with higher priority. */
1216 priority_val = INSN_PRIORITY (tmp2) - INSN_PRIORITY (tmp);
1218 if (flag_sched_critical_path_heuristic && priority_val)
1219 return priority_val;
1221 /* Prefer speculative insn with greater dependencies weakness. */
1222 if (flag_sched_spec_insn_heuristic && spec_info)
1224 ds_t ds1, ds2;
1225 dw_t dw1, dw2;
1226 int dw;
1228 ds1 = TODO_SPEC (tmp) & SPECULATIVE;
1229 if (ds1)
1230 dw1 = ds_weak (ds1);
1231 else
1232 dw1 = NO_DEP_WEAK;
1234 ds2 = TODO_SPEC (tmp2) & SPECULATIVE;
1235 if (ds2)
1236 dw2 = ds_weak (ds2);
1237 else
1238 dw2 = NO_DEP_WEAK;
1240 dw = dw2 - dw1;
1241 if (dw > (NO_DEP_WEAK / 8) || dw < -(NO_DEP_WEAK / 8))
1242 return dw;
1245 info_val = (*current_sched_info->rank) (tmp, tmp2);
1246   if (flag_sched_rank_heuristic && info_val)
1247 return info_val;
1249 if (flag_sched_last_insn_heuristic)
1251 last = last_scheduled_insn;
1253 if (DEBUG_INSN_P (last) && last != current_sched_info->prev_head)
1255 last = PREV_INSN (last);
1256 while (!NONDEBUG_INSN_P (last)
1257 && last != current_sched_info->prev_head);
1260 /* Compare insns based on their relation to the last scheduled
1261 non-debug insn. */
1262 if (flag_sched_last_insn_heuristic && NONDEBUG_INSN_P (last))
1264 dep_t dep1;
1265 dep_t dep2;
1267 /* Classify the instructions into three classes:
1268          1) Data dependent on last scheduled insn.
1269 2) Anti/Output dependent on last scheduled insn.
1270 3) Independent of last scheduled insn, or has latency of one.
1271 Choose the insn from the highest numbered class if different. */
1272 dep1 = sd_find_dep_between (last, tmp, true);
1274 if (dep1 == NULL || dep_cost (dep1) == 1)
1275 tmp_class = 3;
1276 else if (/* Data dependence. */
1277 DEP_TYPE (dep1) == REG_DEP_TRUE)
1278 tmp_class = 1;
1279 else
1280 tmp_class = 2;
1282 dep2 = sd_find_dep_between (last, tmp2, true);
1284 if (dep2 == NULL || dep_cost (dep2) == 1)
1285 tmp2_class = 3;
1286 else if (/* Data dependence. */
1287 DEP_TYPE (dep2) == REG_DEP_TRUE)
1288 tmp2_class = 1;
1289 else
1290 tmp2_class = 2;
1292 if ((val = tmp2_class - tmp_class))
1293 return val;
1296 /* Prefer the insn which has more later insns that depend on it.
1297 This gives the scheduler more freedom when scheduling later
1298 instructions at the expense of added register pressure. */
1300 val = (dep_list_size (tmp2) - dep_list_size (tmp));
1302 if (flag_sched_dep_count_heuristic && val != 0)
1303 return val;
1305 /* If insns are equally good, sort by INSN_LUID (original insn order),
1306 so that we make the sort stable. This minimizes instruction movement,
1307 thus minimizing sched's effect on debugging and cross-jumping. */
1308 return INSN_LUID (tmp) - INSN_LUID (tmp2);
1311 /* Resort the array A in which only element at index N may be out of order. */
1313 HAIFA_INLINE static void
1314 swap_sort (rtx *a, int n)
1316 rtx insn = a[n - 1];
1317 int i = n - 2;
1319 while (i >= 0 && rank_for_schedule (a + i, &insn) >= 0)
1321 a[i + 1] = a[i];
1322 i -= 1;
1324 a[i + 1] = insn;
1327 /* Add INSN to the insn queue so that it can be executed at least
1328 N_CYCLES after the currently executing insn. Preserve insns
1329 chain for debugging purposes. */
1331 HAIFA_INLINE static void
1332 queue_insn (rtx insn, int n_cycles)
1334 int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
1335 rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]);
1337 gcc_assert (n_cycles <= max_insn_queue_index);
1338 gcc_assert (!DEBUG_INSN_P (insn));
1340 insn_queue[next_q] = link;
1341 q_size += 1;
1343 if (sched_verbose >= 2)
1345 fprintf (sched_dump, ";;\t\tReady-->Q: insn %s: ",
1346 (*current_sched_info->print_insn) (insn, 0));
1348 fprintf (sched_dump, "queued for %d cycles.\n", n_cycles);
1351 QUEUE_INDEX (insn) = next_q;
1354 /* Remove INSN from queue. */
1355 static void
1356 queue_remove (rtx insn)
1358 gcc_assert (QUEUE_INDEX (insn) >= 0);
1359 remove_free_INSN_LIST_elem (insn, &insn_queue[QUEUE_INDEX (insn)]);
1360 q_size--;
1361 QUEUE_INDEX (insn) = QUEUE_NOWHERE;
1364 /* Return a pointer to the bottom of the ready list, i.e. the insn
1365 with the lowest priority. */
1367 rtx *
1368 ready_lastpos (struct ready_list *ready)
1370 gcc_assert (ready->n_ready >= 1);
1371 return ready->vec + ready->first - ready->n_ready + 1;
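/* Layout note (inferred from ready_lastpos and ready_element): the ready list
   keeps the highest-priority insn at vec[first], with priority decreasing
   towards vec[first - n_ready + 1], so the pointer returned above addresses
   the lowest-priority element.  */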
1374 /* Add an element INSN to the ready list so that it ends up with the
1375 lowest/highest priority depending on FIRST_P. */
1377 HAIFA_INLINE static void
1378 ready_add (struct ready_list *ready, rtx insn, bool first_p)
1380 if (!first_p)
1382 if (ready->first == ready->n_ready)
1384 memmove (ready->vec + ready->veclen - ready->n_ready,
1385 ready_lastpos (ready),
1386 ready->n_ready * sizeof (rtx));
1387 ready->first = ready->veclen - 1;
1389 ready->vec[ready->first - ready->n_ready] = insn;
1391 else
1393 if (ready->first == ready->veclen - 1)
1395 if (ready->n_ready)
1396 /* ready_lastpos() fails when called with (ready->n_ready == 0). */
1397 memmove (ready->vec + ready->veclen - ready->n_ready - 1,
1398 ready_lastpos (ready),
1399 ready->n_ready * sizeof (rtx));
1400 ready->first = ready->veclen - 2;
1402 ready->vec[++(ready->first)] = insn;
1405 ready->n_ready++;
1406 if (DEBUG_INSN_P (insn))
1407 ready->n_debug++;
1409 gcc_assert (QUEUE_INDEX (insn) != QUEUE_READY);
1410 QUEUE_INDEX (insn) = QUEUE_READY;
1413 /* Remove the element with the highest priority from the ready list and
1414 return it. */
1416 HAIFA_INLINE static rtx
1417 ready_remove_first (struct ready_list *ready)
1419 rtx t;
1421 gcc_assert (ready->n_ready);
1422 t = ready->vec[ready->first--];
1423 ready->n_ready--;
1424 if (DEBUG_INSN_P (t))
1425 ready->n_debug--;
1426   /* If the ready list becomes empty, reset it.  */
1427 if (ready->n_ready == 0)
1428 ready->first = ready->veclen - 1;
1430 gcc_assert (QUEUE_INDEX (t) == QUEUE_READY);
1431 QUEUE_INDEX (t) = QUEUE_NOWHERE;
1433 return t;
1436 /* The following code implements multi-pass scheduling for the first
1437     cycle.  In other words, we will try to choose a ready insn which
1438     permits starting the maximum number of insns on the same cycle.  */
1440  /* Return a pointer to the element at position INDEX in the ready list.
1441     The INDEX of the insn with the highest priority is 0, and the lowest priority has
1442 N_READY - 1. */
1445 ready_element (struct ready_list *ready, int index)
1447 gcc_assert (ready->n_ready && index < ready->n_ready);
1449 return ready->vec[ready->first - index];
1452 /* Remove the element INDEX from the ready list and return it. INDEX
1453 for insn with the highest priority is 0, and the lowest priority
1454 has N_READY - 1. */
1456 HAIFA_INLINE static rtx
1457 ready_remove (struct ready_list *ready, int index)
1459 rtx t;
1460 int i;
1462 if (index == 0)
1463 return ready_remove_first (ready);
1464 gcc_assert (ready->n_ready && index < ready->n_ready);
1465 t = ready->vec[ready->first - index];
1466 ready->n_ready--;
1467 if (DEBUG_INSN_P (t))
1468 ready->n_debug--;
1469 for (i = index; i < ready->n_ready; i++)
1470 ready->vec[ready->first - i] = ready->vec[ready->first - i - 1];
1471 QUEUE_INDEX (t) = QUEUE_NOWHERE;
1472 return t;
1475 /* Remove INSN from the ready list. */
1476 static void
1477 ready_remove_insn (rtx insn)
1479 int i;
1481 for (i = 0; i < readyp->n_ready; i++)
1482 if (ready_element (readyp, i) == insn)
1484 ready_remove (readyp, i);
1485 return;
1487 gcc_unreachable ();
1490 /* Sort the ready list READY by ascending priority, using the SCHED_SORT
1491 macro. */
1493 void
1494 ready_sort (struct ready_list *ready)
1496 int i;
1497 rtx *first = ready_lastpos (ready);
1499 if (sched_pressure_p)
1501 for (i = 0; i < ready->n_ready; i++)
1502 setup_insn_reg_pressure_info (first[i]);
1504 SCHED_SORT (first, ready->n_ready);
1507 /* PREV is an insn that is ready to execute. Adjust its priority if that
1508 will help shorten or lengthen register lifetimes as appropriate. Also
1509 provide a hook for the target to tweak itself. */
1511 HAIFA_INLINE static void
1512 adjust_priority (rtx prev)
1514 /* ??? There used to be code here to try and estimate how an insn
1515 affected register lifetimes, but it did it by looking at REG_DEAD
1516 notes, which we removed in schedule_region. Nor did it try to
1517 take into account register pressure or anything useful like that.
1519 Revisit when we have a machine model to work with and not before. */
1521 if (targetm.sched.adjust_priority)
1522 INSN_PRIORITY (prev) =
1523 targetm.sched.adjust_priority (prev, INSN_PRIORITY (prev));
1526 /* Advance DFA state STATE on one cycle. */
1527 void
1528 advance_state (state_t state)
1530 if (targetm.sched.dfa_pre_advance_cycle)
1531 targetm.sched.dfa_pre_advance_cycle ();
1533 if (targetm.sched.dfa_pre_cycle_insn)
1534 state_transition (state,
1535 targetm.sched.dfa_pre_cycle_insn ());
1537 state_transition (state, NULL);
1539 if (targetm.sched.dfa_post_cycle_insn)
1540 state_transition (state,
1541 targetm.sched.dfa_post_cycle_insn ());
1543 if (targetm.sched.dfa_post_advance_cycle)
1544 targetm.sched.dfa_post_advance_cycle ();
1547 /* Advance time on one cycle. */
1548 HAIFA_INLINE static void
1549 advance_one_cycle (void)
1551 advance_state (curr_state);
1552 if (sched_verbose >= 6)
1553 fprintf (sched_dump, ";;\tAdvanced a state.\n");
1556 /* Clock at which the previous instruction was issued. */
1557 static int last_clock_var;
1559 /* Update register pressure after scheduling INSN. */
1560 static void
1561 update_register_pressure (rtx insn)
1563 struct reg_use_data *use;
1564 struct reg_set_data *set;
1566 for (use = INSN_REG_USE_LIST (insn); use != NULL; use = use->next_insn_use)
1567 if (dying_use_p (use) && bitmap_bit_p (curr_reg_live, use->regno))
1568 mark_regno_birth_or_death (use->regno, false);
1569 for (set = INSN_REG_SET_LIST (insn); set != NULL; set = set->next_insn_set)
1570 mark_regno_birth_or_death (set->regno, true);
1573 /* Set up or update (if UPDATE_P) max register pressure (see its
1574 meaning in sched-int.h::_haifa_insn_data) for all current BB insns
1575 after insn AFTER. */
1576 static void
1577 setup_insn_max_reg_pressure (rtx after, bool update_p)
1579 int i, p;
1580 bool eq_p;
1581 rtx insn;
1582 static int max_reg_pressure[N_REG_CLASSES];
1584 save_reg_pressure ();
1585 for (i = 0; i < ira_reg_class_cover_size; i++)
1586 max_reg_pressure[ira_reg_class_cover[i]]
1587 = curr_reg_pressure[ira_reg_class_cover[i]];
1588 for (insn = NEXT_INSN (after);
1589 insn != NULL_RTX && BLOCK_FOR_INSN (insn) == BLOCK_FOR_INSN (after);
1590 insn = NEXT_INSN (insn))
1591 if (NONDEBUG_INSN_P (insn))
1593 eq_p = true;
1594 for (i = 0; i < ira_reg_class_cover_size; i++)
1596 p = max_reg_pressure[ira_reg_class_cover[i]];
1597 if (INSN_MAX_REG_PRESSURE (insn)[i] != p)
1599 eq_p = false;
1600 INSN_MAX_REG_PRESSURE (insn)[i]
1601 = max_reg_pressure[ira_reg_class_cover[i]];
1604 if (update_p && eq_p)
1605 break;
1606 update_register_pressure (insn);
1607 for (i = 0; i < ira_reg_class_cover_size; i++)
1608 if (max_reg_pressure[ira_reg_class_cover[i]]
1609 < curr_reg_pressure[ira_reg_class_cover[i]])
1610 max_reg_pressure[ira_reg_class_cover[i]]
1611 = curr_reg_pressure[ira_reg_class_cover[i]];
1613 restore_reg_pressure ();
1616 /* Update the current register pressure after scheduling INSN. Update
1617 also max register pressure for unscheduled insns of the current
1618 BB. */
1619 static void
1620 update_reg_and_insn_max_reg_pressure (rtx insn)
1622 int i;
1623 int before[N_REG_CLASSES];
1625 for (i = 0; i < ira_reg_class_cover_size; i++)
1626 before[i] = curr_reg_pressure[ira_reg_class_cover[i]];
1627 update_register_pressure (insn);
1628 for (i = 0; i < ira_reg_class_cover_size; i++)
1629 if (curr_reg_pressure[ira_reg_class_cover[i]] != before[i])
1630 break;
1631 if (i < ira_reg_class_cover_size)
1632 setup_insn_max_reg_pressure (insn, true);
1635 /* Set up register pressure at the beginning of basic block BB whose
1636     insns start after insn AFTER.  Also set up max register pressure
1637 for all insns of the basic block. */
1638 void
1639 sched_setup_bb_reg_pressure_info (basic_block bb, rtx after)
1641 gcc_assert (sched_pressure_p);
1642 initiate_bb_reg_pressure_info (bb);
1643 setup_insn_max_reg_pressure (after, false);
1646 /* INSN is the "currently executing insn". Launch each insn which was
1647 waiting on INSN. READY is the ready list which contains the insns
1648 that are ready to fire. CLOCK is the current cycle. The function
1649 returns necessary cycle advance after issuing the insn (it is not
1650 zero for insns in a schedule group). */
1652 static int
1653 schedule_insn (rtx insn)
1655 sd_iterator_def sd_it;
1656 dep_t dep;
1657 int i;
1658 int advance = 0;
1660 if (sched_verbose >= 1)
1662 struct reg_pressure_data *pressure_info;
1663 char buf[2048];
1665 print_insn (buf, insn, 0);
1666 buf[40] = 0;
1667 fprintf (sched_dump, ";;\t%3i--> %-40s:", clock_var, buf);
1669 if (recog_memoized (insn) < 0)
1670 fprintf (sched_dump, "nothing");
1671 else
1672 print_reservation (sched_dump, insn);
1673 pressure_info = INSN_REG_PRESSURE (insn);
1674 if (pressure_info != NULL)
1676 fputc (':', sched_dump);
1677 for (i = 0; i < ira_reg_class_cover_size; i++)
1678 fprintf (sched_dump, "%s%+d(%d)",
1679 reg_class_names[ira_reg_class_cover[i]],
1680 pressure_info[i].set_increase, pressure_info[i].change);
1682 fputc ('\n', sched_dump);
1685 if (sched_pressure_p)
1686 update_reg_and_insn_max_reg_pressure (insn);
1688   /* The instruction being scheduled should have all its dependencies resolved and
1689 should have been removed from the ready list. */
1690 gcc_assert (sd_lists_empty_p (insn, SD_LIST_BACK));
1692 /* Reset debug insns invalidated by moving this insn. */
1693 if (MAY_HAVE_DEBUG_INSNS && !DEBUG_INSN_P (insn))
1694 for (sd_it = sd_iterator_start (insn, SD_LIST_BACK);
1695 sd_iterator_cond (&sd_it, &dep);)
1697 rtx dbg = DEP_PRO (dep);
1699 gcc_assert (DEBUG_INSN_P (dbg));
1701 if (sched_verbose >= 6)
1702 fprintf (sched_dump, ";;\t\tresetting: debug insn %d\n",
1703 INSN_UID (dbg));
1705 /* ??? Rather than resetting the debug insn, we might be able
1706 to emit a debug temp before the just-scheduled insn, but
1707 this would involve checking that the expression at the
1708 point of the debug insn is equivalent to the expression
1709 before the just-scheduled insn. They might not be: the
1710 expression in the debug insn may depend on other insns not
1711 yet scheduled that set MEMs, REGs or even other debug
1712 insns. It's not clear that attempting to preserve debug
1713 information in these cases is worth the effort, given how
1714 uncommon these resets are and the likelihood that the debug
1715 temps introduced won't survive the schedule change. */
1716 INSN_VAR_LOCATION_LOC (dbg) = gen_rtx_UNKNOWN_VAR_LOC ();
1717 df_insn_rescan (dbg);
1719 /* We delete rather than resolve these deps, otherwise we
1720 crash in sched_free_deps(), because forward deps are
1721 expected to be released before backward deps. */
1722 sd_delete_dep (sd_it);
1725 gcc_assert (QUEUE_INDEX (insn) == QUEUE_NOWHERE);
1726 QUEUE_INDEX (insn) = QUEUE_SCHEDULED;
1728 gcc_assert (INSN_TICK (insn) >= MIN_TICK);
1729 if (INSN_TICK (insn) > clock_var)
1730 /* INSN has been prematurely moved from the queue to the ready list.
1731 This is possible only if following flag is set. */
1732 gcc_assert (flag_sched_stalled_insns);
1734 /* ??? Probably, if INSN is scheduled prematurely, we should leave
1735 INSN_TICK untouched. This is a machine-dependent issue, actually. */
1736 INSN_TICK (insn) = clock_var;
1738 /* Update dependent instructions. */
1739 for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
1740 sd_iterator_cond (&sd_it, &dep);)
1742 rtx next = DEP_CON (dep);
1744 /* Resolve the dependence between INSN and NEXT.
1745 sd_resolve_dep () moves current dep to another list thus
1746 advancing the iterator. */
1747 sd_resolve_dep (sd_it);
1749 /* Don't bother trying to mark next as ready if insn is a debug
1750 insn. If insn is the last hard dependency, it will have
1751 already been discounted. */
1752 if (DEBUG_INSN_P (insn) && !DEBUG_INSN_P (next))
1753 continue;
1755 if (!IS_SPECULATION_BRANCHY_CHECK_P (insn))
1757 int effective_cost;
1759 effective_cost = try_ready (next);
1761 if (effective_cost >= 0
1762 && SCHED_GROUP_P (next)
1763 && advance < effective_cost)
1764 advance = effective_cost;
1766 else
1767 /* Check always has only one forward dependence (to the first insn in
1768 the recovery block), therefore, this will be executed only once. */
1770 gcc_assert (sd_lists_empty_p (insn, SD_LIST_FORW));
1771 fix_recovery_deps (RECOVERY_BLOCK (insn));
1775   /* This is the place where the scheduler basically doesn't need backward and
1776 forward dependencies for INSN anymore. Nevertheless they are used in
1777 heuristics in rank_for_schedule (), early_queue_to_ready () and in
1778 some targets (e.g. rs6000). Thus the earliest place where we *can*
1779 remove dependencies is after targetm.sched.md_finish () call in
1780      schedule_block ().  But, on the other hand, the safest place to remove
1781 dependencies is when we are finishing scheduling entire region. As we
1782 don't generate [many] dependencies during scheduling itself, we won't
1783 need memory until beginning of next region.
1784 Bottom line: Dependencies are removed for all insns in the end of
1785 scheduling the region. */
1787 /* Annotate the instruction with issue information -- TImode
1788 indicates that the instruction is expected not to be able
1789 to issue on the same cycle as the previous insn. A machine
1790 may use this information to decide how the instruction should
1791 be aligned. */
1792 if (issue_rate > 1
1793 && GET_CODE (PATTERN (insn)) != USE
1794 && GET_CODE (PATTERN (insn)) != CLOBBER
1795 && !DEBUG_INSN_P (insn))
1797 if (reload_completed)
1798 PUT_MODE (insn, clock_var > last_clock_var ? TImode : VOIDmode);
1799 last_clock_var = clock_var;
1802 return advance;
1805 /* Functions for handling of notes. */
1807 /* Add note list that ends on FROM_END to the end of TO_ENDP. */
1808 void
1809 concat_note_lists (rtx from_end, rtx *to_endp)
1811 rtx from_start;
1813   /* It's easy when we have nothing to concat.  */
1814 if (from_end == NULL)
1815 return;
1817 /* It's also easy when destination is empty. */
1818 if (*to_endp == NULL)
1820 *to_endp = from_end;
1821 return;
1824 from_start = from_end;
1825 while (PREV_INSN (from_start) != NULL)
1826 from_start = PREV_INSN (from_start);
1828 PREV_INSN (from_start) = *to_endp;
1829 NEXT_INSN (*to_endp) = from_start;
1830 *to_endp = from_end;
1833 /* Delete notes between HEAD and TAIL and put them in the chain
1834 of notes ended by NOTE_LIST. */
1835 void
1836 remove_notes (rtx head, rtx tail)
1838 rtx next_tail, insn, next;
1840 note_list = 0;
1841 if (head == tail && !INSN_P (head))
1842 return;
1844 next_tail = NEXT_INSN (tail);
1845 for (insn = head; insn != next_tail; insn = next)
1847 next = NEXT_INSN (insn);
1848 if (!NOTE_P (insn))
1849 continue;
1851 switch (NOTE_KIND (insn))
1853 case NOTE_INSN_BASIC_BLOCK:
1854 continue;
1856 case NOTE_INSN_EPILOGUE_BEG:
1857 if (insn != tail)
1859 remove_insn (insn);
1860 add_reg_note (next, REG_SAVE_NOTE,
1861 GEN_INT (NOTE_INSN_EPILOGUE_BEG));
1862 break;
1864 /* FALLTHRU */
1866 default:
1867 remove_insn (insn);
1869 /* Add the note to list that ends at NOTE_LIST. */
1870 PREV_INSN (insn) = note_list;
1871 NEXT_INSN (insn) = NULL_RTX;
1872 if (note_list)
1873 NEXT_INSN (note_list) = insn;
1874 note_list = insn;
1875 break;
1878 gcc_assert ((sel_sched_p () || insn != tail) && insn != head);
1883 /* Return the head and tail pointers of ebb starting at BEG and ending
1884 at END. */
1885 void
1886 get_ebb_head_tail (basic_block beg, basic_block end, rtx *headp, rtx *tailp)
1888 rtx beg_head = BB_HEAD (beg);
1889 rtx beg_tail = BB_END (beg);
1890 rtx end_head = BB_HEAD (end);
1891 rtx end_tail = BB_END (end);
1893 /* Don't include any notes or labels at the beginning of the BEG
1894 basic block, or notes at the end of the END basic block. */
1896 if (LABEL_P (beg_head))
1897 beg_head = NEXT_INSN (beg_head);
1899 while (beg_head != beg_tail)
1900 if (NOTE_P (beg_head) || BOUNDARY_DEBUG_INSN_P (beg_head))
1901 beg_head = NEXT_INSN (beg_head);
1902 else
1903 break;
1905 *headp = beg_head;
1907 if (beg == end)
1908 end_head = beg_head;
1909 else if (LABEL_P (end_head))
1910 end_head = NEXT_INSN (end_head);
1912 while (end_head != end_tail)
1913 if (NOTE_P (end_tail) || BOUNDARY_DEBUG_INSN_P (end_tail))
1914 end_tail = PREV_INSN (end_tail);
1915 else
1916 break;
1918 *tailp = end_tail;
1921 /* Return nonzero if there are no real insns in the range [ HEAD, TAIL ]. */
1924 no_real_insns_p (const_rtx head, const_rtx tail)
1926 while (head != NEXT_INSN (tail))
1928 if (!NOTE_P (head) && !LABEL_P (head)
1929 && !BOUNDARY_DEBUG_INSN_P (head))
1930 return 0;
1931 head = NEXT_INSN (head);
1933 return 1;
1936 /* Restore-other-notes: NOTE_LIST is the end of a chain of notes
1937 previously found among the insns. Insert them just before HEAD. */
1939 restore_other_notes (rtx head, basic_block head_bb)
1941 if (note_list != 0)
1943 rtx note_head = note_list;
1945 if (head)
1946 head_bb = BLOCK_FOR_INSN (head);
1947 else
1948 head = NEXT_INSN (bb_note (head_bb));
1950 while (PREV_INSN (note_head))
1952 set_block_for_insn (note_head, head_bb);
1953 note_head = PREV_INSN (note_head);
1955 /* In the loop above we've missed this note. */
1956 set_block_for_insn (note_head, head_bb);
1958 PREV_INSN (note_head) = PREV_INSN (head);
1959 NEXT_INSN (PREV_INSN (head)) = note_head;
1960 PREV_INSN (head) = note_list;
1961 NEXT_INSN (note_list) = head;
1963 if (BLOCK_FOR_INSN (head) != head_bb)
1964 BB_END (head_bb) = note_list;
1966 head = note_head;
1969 return head;
1972 /* Move insns that became ready to fire from queue to ready list. */
1974 static void
1975 queue_to_ready (struct ready_list *ready)
1977 rtx insn;
1978 rtx link;
1979 rtx skip_insn;
1981 q_ptr = NEXT_Q (q_ptr);
1983 if (dbg_cnt (sched_insn) == false)
1985 /* If the debug counter is activated, do not requeue the insn right after
1986 last_scheduled_insn. */
1987 skip_insn = next_nonnote_insn (last_scheduled_insn);
1988 while (skip_insn && DEBUG_INSN_P (skip_insn))
1989 skip_insn = next_nonnote_insn (skip_insn);
1991 else
1992 skip_insn = NULL_RTX;
1994 /* Add all pending insns that can be scheduled without stalls to the
1995 ready list. */
1996 for (link = insn_queue[q_ptr]; link; link = XEXP (link, 1))
1998 insn = XEXP (link, 0);
1999 q_size -= 1;
2001 if (sched_verbose >= 2)
2002 fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
2003 (*current_sched_info->print_insn) (insn, 0));
2005 /* If the ready list is full, delay the insn for 1 cycle.
2006 See the comment in schedule_block for the rationale. */
2007 if (!reload_completed
2008 && ready->n_ready - ready->n_debug > MAX_SCHED_READY_INSNS
2009 && !SCHED_GROUP_P (insn)
2010 && insn != skip_insn)
2012 if (sched_verbose >= 2)
2013 fprintf (sched_dump, "requeued because ready full\n");
2014 queue_insn (insn, 1);
2016 else
2018 ready_add (ready, insn, false);
2019 if (sched_verbose >= 2)
2020 fprintf (sched_dump, "moving to ready without stalls\n");
2023 free_INSN_LIST_list (&insn_queue[q_ptr]);
2025 /* If there are no ready insns, stall until one is ready and add all
2026 of the pending insns at that point to the ready list. */
2027 if (ready->n_ready == 0)
2029 int stalls;
2031 for (stalls = 1; stalls <= max_insn_queue_index; stalls++)
2033 if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
2035 for (; link; link = XEXP (link, 1))
2037 insn = XEXP (link, 0);
2038 q_size -= 1;
2040 if (sched_verbose >= 2)
2041 fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
2042 (*current_sched_info->print_insn) (insn, 0));
2044 ready_add (ready, insn, false);
2045 if (sched_verbose >= 2)
2046 fprintf (sched_dump, "moving to ready with %d stalls\n", stalls);
2048 free_INSN_LIST_list (&insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]);
2050 advance_one_cycle ();
2052 break;
2055 advance_one_cycle ();
2058 q_ptr = NEXT_Q_AFTER (q_ptr, stalls);
2059 clock_var += stalls;
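/* A standalone model of the circular queue indexing used by queue_to_ready
   above.  The NEXT_Q / NEXT_Q_AFTER macros are assumed (earlier in this
   file) to wrap with a bitwise AND, which works because max_insn_queue_index
   is kept at one less than a power of two.  "model_next_q_after" and its
   parameters are illustrative names only.  */
static int
model_next_q_after (int q_ptr_model, int n_cycles, int queue_index_mask)
{
  /* E.g. with queue_index_mask == 63, q_ptr_model == 60 and n_cycles == 6,
     the result is slot 2: the queue of stalled insns wraps around instead
     of growing without bound.  */
  return (q_ptr_model + n_cycles) & queue_index_mask;
}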
2063 /* Used by early_queue_to_ready. Determines whether it is "ok" to
2064 prematurely move INSN from the queue to the ready list. Currently,
2065 if a target defines the hook 'is_costly_dependence', this function
2066 uses the hook to check whether there exist any dependences which are
2067 considered costly by the target, between INSN and other insns that
2068 have already been scheduled. Dependences are checked up to Y cycles
2069 back, with default Y=1; the flag -fsched-stalled-insns-dep=Y allows
2070 controlling this value.
2071 (Other considerations could be taken into account instead (or in
2072 addition), depending on user flags and target hooks.) */
2074 static bool
2075 ok_for_early_queue_removal (rtx insn)
2077 int n_cycles;
2078 rtx prev_insn = last_scheduled_insn;
2080 if (targetm.sched.is_costly_dependence)
2082 for (n_cycles = flag_sched_stalled_insns_dep; n_cycles; n_cycles--)
2084 for ( ; prev_insn; prev_insn = PREV_INSN (prev_insn))
2086 int cost;
2088 if (prev_insn == current_sched_info->prev_head)
2090 prev_insn = NULL;
2091 break;
2094 if (!NOTE_P (prev_insn))
2096 dep_t dep;
2098 dep = sd_find_dep_between (prev_insn, insn, true);
2100 if (dep != NULL)
2102 cost = dep_cost (dep);
2104 if (targetm.sched.is_costly_dependence (dep, cost,
2105 flag_sched_stalled_insns_dep - n_cycles))
2106 return false;
2110 if (GET_MODE (prev_insn) == TImode) /* end of dispatch group */
2111 break;
2114 if (!prev_insn)
2115 break;
2116 prev_insn = PREV_INSN (prev_insn);
2120 return true;
2124 /* Remove insns from the queue, before they become "ready" with respect
2125 to FU latency considerations. */
2127 static int
2128 early_queue_to_ready (state_t state, struct ready_list *ready)
2130 rtx insn;
2131 rtx link;
2132 rtx next_link;
2133 rtx prev_link;
2134 bool move_to_ready;
2135 int cost;
2136 state_t temp_state = alloca (dfa_state_size);
2137 int stalls;
2138 int insns_removed = 0;
2141 /* Flag '-fsched-stalled-insns=X' determines the aggressiveness of this
2142 function:
2144 X == 0: There is no limit on how many queued insns can be removed
2145 prematurely. (flag_sched_stalled_insns = -1).
2147 X >= 1: Only X queued insns can be removed prematurely in each
2148 invocation. (flag_sched_stalled_insns = X).
2150 Otherwise: Early queue removal is disabled.
2151 (flag_sched_stalled_insns = 0). */
2154 if (! flag_sched_stalled_insns)
2155 return 0;
2157 for (stalls = 0; stalls <= max_insn_queue_index; stalls++)
2159 if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
2161 if (sched_verbose > 6)
2162 fprintf (sched_dump, ";; look at index %d + %d\n", q_ptr, stalls);
2164 prev_link = 0;
2165 while (link)
2167 next_link = XEXP (link, 1);
2168 insn = XEXP (link, 0);
2169 if (insn && sched_verbose > 6)
2170 print_rtl_single (sched_dump, insn);
2172 memcpy (temp_state, state, dfa_state_size);
2173 if (recog_memoized (insn) < 0)
2174 /* Use a non-negative cost to indicate that it isn't ready yet,
2175 to avoid an infinite Q->R->Q->R... loop. */
2176 cost = 0;
2177 else
2178 cost = state_transition (temp_state, insn);
2180 if (sched_verbose >= 6)
2181 fprintf (sched_dump, "transition cost = %d\n", cost);
2183 move_to_ready = false;
2184 if (cost < 0)
2186 move_to_ready = ok_for_early_queue_removal (insn);
2187 if (move_to_ready == true)
2189 /* move from Q to R */
2190 q_size -= 1;
2191 ready_add (ready, insn, false);
2193 if (prev_link)
2194 XEXP (prev_link, 1) = next_link;
2195 else
2196 insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = next_link;
2198 free_INSN_LIST_node (link);
2200 if (sched_verbose >= 2)
2201 fprintf (sched_dump, ";;\t\tEarly Q-->Ready: insn %s\n",
2202 (*current_sched_info->print_insn) (insn, 0));
2204 insns_removed++;
2205 if (insns_removed == flag_sched_stalled_insns)
2206 /* Remove no more than flag_sched_stalled_insns insns
2207 from Q at a time. */
2208 return insns_removed;
2212 if (move_to_ready == false)
2213 prev_link = link;
2215 link = next_link;
2216 } /* while link */
2217 } /* if link */
2219 } /* for stalls.. */
2221 return insns_removed;
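/* A sketch of the removal budget implied by the comment at the top of
   early_queue_to_ready above, assuming the documented encoding of
   flag_sched_stalled_insns (-1 = no limit, 0 = disabled, X >= 1 = at most
   X removals per invocation).  "early_removal_budget" is an illustrative
   helper, not part of this file.  */
static int
early_removal_budget (int flag_value)
{
  if (flag_value == 0)
    return 0;            /* Early queue removal is disabled.  */
  else if (flag_value < 0)
    return -1;           /* -fsched-stalled-insns: no limit per call.  */
  else
    return flag_value;   /* -fsched-stalled-insns=X: at most X per call.  */
}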
2225 /* Print the ready list for debugging purposes. Callable from debugger. */
2227 static void
2228 debug_ready_list (struct ready_list *ready)
2230 rtx *p;
2231 int i;
2233 if (ready->n_ready == 0)
2235 fprintf (sched_dump, "\n");
2236 return;
2239 p = ready_lastpos (ready);
2240 for (i = 0; i < ready->n_ready; i++)
2242 fprintf (sched_dump, " %s:%d",
2243 (*current_sched_info->print_insn) (p[i], 0),
2244 INSN_LUID (p[i]));
2245 if (sched_pressure_p)
2246 fprintf (sched_dump, "(cost=%d",
2247 INSN_REG_PRESSURE_EXCESS_COST_CHANGE (p[i]));
2248 if (INSN_TICK (p[i]) > clock_var)
2249 fprintf (sched_dump, ":delay=%d", INSN_TICK (p[i]) - clock_var);
2250 if (sched_pressure_p)
2251 fprintf (sched_dump, ")");
2253 fprintf (sched_dump, "\n");
2256 /* Search INSN for REG_SAVE_NOTE notes and convert them back into insn
2257 NOTEs. This is used for NOTE_INSN_EPILOGUE_BEG, so that sched-ebb
2258 replaces the epilogue note in the correct basic block. */
2259 void
2260 reemit_notes (rtx insn)
2262 rtx note, last = insn;
2264 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
2266 if (REG_NOTE_KIND (note) == REG_SAVE_NOTE)
2268 enum insn_note note_type = (enum insn_note) INTVAL (XEXP (note, 0));
2270 last = emit_note_before (note_type, last);
2271 remove_note (insn, note);
2276 /* Move INSN. Reemit notes if needed. Update CFG, if needed. */
2277 static void
2278 move_insn (rtx insn, rtx last, rtx nt)
2280 if (PREV_INSN (insn) != last)
2282 basic_block bb;
2283 rtx note;
2284 int jump_p = 0;
2286 bb = BLOCK_FOR_INSN (insn);
2288 /* BB_HEAD is either LABEL or NOTE. */
2289 gcc_assert (BB_HEAD (bb) != insn);
2291 if (BB_END (bb) == insn)
2292 /* If this is the last instruction in BB, move the end marker one
2293 instruction up. */
2295 /* Jumps are always placed at the end of basic block. */
2296 jump_p = control_flow_insn_p (insn);
2298 gcc_assert (!jump_p
2299 || ((common_sched_info->sched_pass_id == SCHED_RGN_PASS)
2300 && IS_SPECULATION_BRANCHY_CHECK_P (insn))
2301 || (common_sched_info->sched_pass_id
2302 == SCHED_EBB_PASS));
2304 gcc_assert (BLOCK_FOR_INSN (PREV_INSN (insn)) == bb);
2306 BB_END (bb) = PREV_INSN (insn);
2309 gcc_assert (BB_END (bb) != last);
2311 if (jump_p)
2312 /* We move the block note along with jump. */
2314 gcc_assert (nt);
2316 note = NEXT_INSN (insn);
2317 while (NOTE_NOT_BB_P (note) && note != nt)
2318 note = NEXT_INSN (note);
2320 if (note != nt
2321 && (LABEL_P (note)
2322 || BARRIER_P (note)))
2323 note = NEXT_INSN (note);
2325 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
2327 else
2328 note = insn;
2330 NEXT_INSN (PREV_INSN (insn)) = NEXT_INSN (note);
2331 PREV_INSN (NEXT_INSN (note)) = PREV_INSN (insn);
2333 NEXT_INSN (note) = NEXT_INSN (last);
2334 PREV_INSN (NEXT_INSN (last)) = note;
2336 NEXT_INSN (last) = insn;
2337 PREV_INSN (insn) = last;
2339 bb = BLOCK_FOR_INSN (last);
2341 if (jump_p)
2343 fix_jump_move (insn);
2345 if (BLOCK_FOR_INSN (insn) != bb)
2346 move_block_after_check (insn);
2348 gcc_assert (BB_END (bb) == last);
2351 df_insn_change_bb (insn, bb);
2353 /* Update BB_END, if needed. */
2354 if (BB_END (bb) == last)
2355 BB_END (bb) = insn;
2358 SCHED_GROUP_P (insn) = 0;
2361 /* Return true if scheduling INSN will finish current clock cycle. */
2362 static bool
2363 insn_finishes_cycle_p (rtx insn)
2365 if (SCHED_GROUP_P (insn))
2366 /* After issuing INSN, the rest of the sched_group will be forced to issue
2367 in order. Don't make any plans for the rest of the cycle. */
2368 return true;
2370 /* Finishing the block will, apparently, finish the cycle. */
2371 if (current_sched_info->insn_finishes_block_p
2372 && current_sched_info->insn_finishes_block_p (insn))
2373 return true;
2375 return false;
2378 /* The following structure describes an entry of the stack of choices. */
2379 struct choice_entry
2381 /* Ordinal number of the issued insn in the ready queue. */
2382 int index;
2383 /* The number of remaining insns whose issue we should still try. */
2384 int rest;
2385 /* The number of issued essential insns. */
2386 int n;
2387 /* State after issuing the insn. */
2388 state_t state;
2391 /* The following array is used to implement a stack of choices used in
2392 function max_issue. */
2393 static struct choice_entry *choice_stack;
2395 /* The following variable holds the number of essential insns issued on
2396 the current cycle. An insn is an essential one if it changes the
2397 processor's state. */
2398 int cycle_issued_insns;
2400 /* This holds the value of the target dfa_lookahead hook. */
2401 int dfa_lookahead;
2403 /* The following variable holds the maximal number of tries of issuing
2404 insns for the first cycle multipass insn scheduling. We define
2405 this value as constant*(DFA_LOOKAHEAD**ISSUE_RATE). We would not
2406 need this constraint if all real insns (with non-negative codes)
2407 had reservations, because in that case the algorithm complexity is
2408 O(DFA_LOOKAHEAD**ISSUE_RATE). Unfortunately, the dfa descriptions
2409 might be incomplete and such an insn might occur. For such
2410 descriptions, the complexity of the algorithm (without the constraint)
2411 could reach DFA_LOOKAHEAD ** N, where N is the queue length. */
2412 static int max_lookahead_tries;
2414 /* The following value is the value of the hook
2415 `first_cycle_multipass_dfa_lookahead' at the last call of
2416 `max_issue'. */
2417 static int cached_first_cycle_multipass_dfa_lookahead = 0;
2419 /* The following value is the value of `issue_rate' at the last call of
2420 `sched_init'. */
2421 static int cached_issue_rate = 0;
2423 /* The following function returns the maximal (or close to maximal) number
2424 of insns which can be issued on the same cycle, one of which is the
2425 insn with the best rank (the first insn in READY). To do this, the
2426 function tries different samples of ready insns. READY is the current
2427 queue `ready'. The global array READY_TRY reflects which insns are
2428 already issued in this try. MAX_POINTS is the sum of points of all
2429 instructions in READY. The function stops immediately if it finds a
2430 solution in which all instructions can be issued. INDEX will contain
2431 the index of the best insn in READY. The following function is used
2432 only for first cycle multipass scheduling.
2434 PRIVILEGED_N >= 0
2436 This function expects recognized insns only. All USEs,
2437 CLOBBERs, etc must be filtered elsewhere. */
2439 max_issue (struct ready_list *ready, int privileged_n, state_t state,
2440 int *index)
2442 int n, i, all, n_ready, best, delay, tries_num, max_points;
2443 int more_issue;
2444 struct choice_entry *top;
2445 rtx insn;
2447 n_ready = ready->n_ready;
2448 gcc_assert (dfa_lookahead >= 1 && privileged_n >= 0
2449 && privileged_n <= n_ready);
2451 /* Init MAX_LOOKAHEAD_TRIES. */
2452 if (cached_first_cycle_multipass_dfa_lookahead != dfa_lookahead)
2454 cached_first_cycle_multipass_dfa_lookahead = dfa_lookahead;
2455 max_lookahead_tries = 100;
2456 for (i = 0; i < issue_rate; i++)
2457 max_lookahead_tries *= dfa_lookahead;
2460 /* Init max_points. */
2461 max_points = 0;
2462 more_issue = issue_rate - cycle_issued_insns;
2464 /* ??? We used to assert here that we never issue more insns than issue_rate.
2465 However, some targets (e.g. MIPS/SB1) claim lower issue rate than can be
2466 achieved to get better performance. Until these targets are fixed to use
2467 scheduler hooks to manipulate insns priority instead, the assert should
2468 be disabled.
2470 gcc_assert (more_issue >= 0); */
2472 for (i = 0; i < n_ready; i++)
2473 if (!ready_try [i])
2475 if (more_issue-- > 0)
2476 max_points += ISSUE_POINTS (ready_element (ready, i));
2477 else
2478 break;
2481 /* The number of the issued insns in the best solution. */
2482 best = 0;
2484 top = choice_stack;
2486 /* Set initial state of the search. */
2487 memcpy (top->state, state, dfa_state_size);
2488 top->rest = dfa_lookahead;
2489 top->n = 0;
2491 /* Count the number of the insns to search among. */
2492 for (all = i = 0; i < n_ready; i++)
2493 if (!ready_try [i])
2494 all++;
2496 /* I is the index of the insn to try next. */
2497 i = 0;
2498 tries_num = 0;
2499 for (;;)
2501 if (/* If we've reached a dead end or searched enough of what we have
2502 been asked... */
2503 top->rest == 0
2504 /* Or have nothing else to try. */
2505 || i >= n_ready)
2507 /* ??? (... || i == n_ready). */
2508 gcc_assert (i <= n_ready);
2510 if (top == choice_stack)
2511 break;
2513 if (best < top - choice_stack)
2515 if (privileged_n)
2517 n = privileged_n;
2518 /* Try to find issued privileged insn. */
2519 while (n && !ready_try[--n]);
2522 if (/* If all insns are equally good... */
2523 privileged_n == 0
2524 /* Or a privileged insn will be issued. */
2525 || ready_try[n])
2526 /* Then we have a solution. */
2528 best = top - choice_stack;
2529 /* This is the index of the insn issued first in this
2530 solution. */
2531 *index = choice_stack [1].index;
2532 if (top->n == max_points || best == all)
2533 break;
2537 /* Set ready-list index to point to the last insn
2538 ('i++' below will advance it to the next insn). */
2539 i = top->index;
2541 /* Backtrack. */
2542 ready_try [i] = 0;
2543 top--;
2544 memcpy (state, top->state, dfa_state_size);
2546 else if (!ready_try [i])
2548 tries_num++;
2549 if (tries_num > max_lookahead_tries)
2550 break;
2551 insn = ready_element (ready, i);
2552 delay = state_transition (state, insn);
2553 if (delay < 0)
2555 if (state_dead_lock_p (state)
2556 || insn_finishes_cycle_p (insn))
2557 /* We won't issue any more instructions in the next
2558 choice_state. */
2559 top->rest = 0;
2560 else
2561 top->rest--;
2563 n = top->n;
2564 if (memcmp (top->state, state, dfa_state_size) != 0)
2565 n += ISSUE_POINTS (insn);
2567 /* Advance to the next choice_entry. */
2568 top++;
2569 /* Initialize it. */
2570 top->rest = dfa_lookahead;
2571 top->index = i;
2572 top->n = n;
2573 memcpy (top->state, state, dfa_state_size);
2575 ready_try [i] = 1;
2576 i = -1;
2580 /* Increase ready-list index. */
2581 i++;
2584 /* Restore the original state of the DFA. */
2585 memcpy (state, choice_stack->state, dfa_state_size);
2587 return best;
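/* A worked example of the MAX_LOOKAHEAD_TRIES cap that max_issue above
   initializes as 100 * dfa_lookahead ** issue_rate.  The concrete numbers
   in the comment below are hypothetical; "model_max_lookahead_tries" is an
   illustrative helper, not part of this file.  */
static int
model_max_lookahead_tries (int lookahead, int rate)
{
  int tries = 100;
  int i;

  for (i = 0; i < rate; i++)
    tries *= lookahead;

  /* E.g. lookahead == 4 and rate == 2 give 100 * 4 * 4 == 1600 tries,
     after which the backtracking search in max_issue gives up and keeps
     the best solution found so far.  */
  return tries;
}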
2590 /* The following function chooses an insn from READY and modifies
2591 READY. It is used only for first cycle multipass scheduling.
2593 Return:
2594 -1 if cycle should be advanced,
2595 0 if INSN_PTR is set to point to the desirable insn,
2596 1 if choose_ready () should be restarted without advancing the cycle. */
2597 static int
2598 choose_ready (struct ready_list *ready, rtx *insn_ptr)
2600 int lookahead;
2602 if (dbg_cnt (sched_insn) == false)
2604 rtx insn;
2606 insn = next_nonnote_insn (last_scheduled_insn);
2608 if (QUEUE_INDEX (insn) == QUEUE_READY)
2609 /* INSN is in the ready_list. */
2611 ready_remove_insn (insn);
2612 *insn_ptr = insn;
2613 return 0;
2616 /* INSN is in the queue. Advance cycle to move it to the ready list. */
2617 return -1;
2620 lookahead = 0;
2622 if (targetm.sched.first_cycle_multipass_dfa_lookahead)
2623 lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
2624 if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0))
2625 || DEBUG_INSN_P (ready_element (ready, 0)))
2627 *insn_ptr = ready_remove_first (ready);
2628 return 0;
2630 else
2632 /* Try to choose the better insn. */
2633 int index = 0, i, n;
2634 rtx insn;
2635 int try_data = 1, try_control = 1;
2636 ds_t ts;
2638 insn = ready_element (ready, 0);
2639 if (INSN_CODE (insn) < 0)
2641 *insn_ptr = ready_remove_first (ready);
2642 return 0;
2645 if (spec_info
2646 && spec_info->flags & (PREFER_NON_DATA_SPEC
2647 | PREFER_NON_CONTROL_SPEC))
2649 for (i = 0, n = ready->n_ready; i < n; i++)
2651 rtx x;
2652 ds_t s;
2654 x = ready_element (ready, i);
2655 s = TODO_SPEC (x);
2657 if (spec_info->flags & PREFER_NON_DATA_SPEC
2658 && !(s & DATA_SPEC))
2660 try_data = 0;
2661 if (!(spec_info->flags & PREFER_NON_CONTROL_SPEC)
2662 || !try_control)
2663 break;
2666 if (spec_info->flags & PREFER_NON_CONTROL_SPEC
2667 && !(s & CONTROL_SPEC))
2669 try_control = 0;
2670 if (!(spec_info->flags & PREFER_NON_DATA_SPEC) || !try_data)
2671 break;
2676 ts = TODO_SPEC (insn);
2677 if ((ts & SPECULATIVE)
2678 && (((!try_data && (ts & DATA_SPEC))
2679 || (!try_control && (ts & CONTROL_SPEC)))
2680 || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard_spec
2681 && !targetm.sched
2682 .first_cycle_multipass_dfa_lookahead_guard_spec (insn))))
2683 /* Discard speculative instruction that stands first in the ready
2684 list. */
2686 change_queue_index (insn, 1);
2687 return 1;
2690 ready_try[0] = 0;
2692 for (i = 1; i < ready->n_ready; i++)
2694 insn = ready_element (ready, i);
2696 ready_try [i]
2697 = ((!try_data && (TODO_SPEC (insn) & DATA_SPEC))
2698 || (!try_control && (TODO_SPEC (insn) & CONTROL_SPEC)));
2701 /* Let the target filter the search space. */
2702 for (i = 1; i < ready->n_ready; i++)
2703 if (!ready_try[i])
2705 insn = ready_element (ready, i);
2707 #ifdef ENABLE_CHECKING
2708 /* If this insn is recognizable we should have already
2709 recognized it earlier.
2710 ??? Not very clear where this is supposed to be done.
2711 See dep_cost_1. */
2712 gcc_assert (INSN_CODE (insn) >= 0
2713 || recog_memoized (insn) < 0);
2714 #endif
2716 ready_try [i]
2717 = (/* INSN_CODE check can be omitted here as it is also done later
2718 in max_issue (). */
2719 INSN_CODE (insn) < 0
2720 || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard
2721 && !targetm.sched.first_cycle_multipass_dfa_lookahead_guard
2722 (insn)));
2725 if (max_issue (ready, 1, curr_state, &index) == 0)
2727 *insn_ptr = ready_remove_first (ready);
2728 if (sched_verbose >= 4)
2729 fprintf (sched_dump, ";;\t\tChosen insn (but can't issue) : %s \n",
2730 (*current_sched_info->print_insn) (*insn_ptr, 0));
2731 return 0;
2733 else
2735 if (sched_verbose >= 4)
2736 fprintf (sched_dump, ";;\t\tChosen insn : %s\n",
2737 (*current_sched_info->print_insn)
2738 (ready_element (ready, index), 0));
2740 *insn_ptr = ready_remove (ready, index);
2741 return 0;
2746 /* Use forward list scheduling to rearrange insns of block pointed to by
2747 TARGET_BB, possibly bringing insns from subsequent blocks in the same
2748 region. */
2750 void
2751 schedule_block (basic_block *target_bb)
2753 int i, first_cycle_insn_p;
2754 int can_issue_more;
2755 state_t temp_state = NULL; /* It is used for multipass scheduling. */
2756 int sort_p, advance, start_clock_var;
2758 /* Head/tail info for this block. */
2759 rtx prev_head = current_sched_info->prev_head;
2760 rtx next_tail = current_sched_info->next_tail;
2761 rtx head = NEXT_INSN (prev_head);
2762 rtx tail = PREV_INSN (next_tail);
2764 /* We used to have code to avoid getting parameters moved from hard
2765 argument registers into pseudos.
2767 However, it was removed when it proved to be of marginal benefit
2768 and caused problems because schedule_block and compute_forward_dependences
2769 had different notions of what the "head" insn was. */
2771 gcc_assert (head != tail || INSN_P (head));
2773 haifa_recovery_bb_recently_added_p = false;
2775 /* Debug info. */
2776 if (sched_verbose)
2777 dump_new_block_header (0, *target_bb, head, tail);
2779 state_reset (curr_state);
2781 /* Clear the ready list. */
2782 ready.first = ready.veclen - 1;
2783 ready.n_ready = 0;
2784 ready.n_debug = 0;
2786 /* It is used for first cycle multipass scheduling. */
2787 temp_state = alloca (dfa_state_size);
2789 if (targetm.sched.md_init)
2790 targetm.sched.md_init (sched_dump, sched_verbose, ready.veclen);
2792 /* We start inserting insns after PREV_HEAD. */
2793 last_scheduled_insn = prev_head;
2795 gcc_assert ((NOTE_P (last_scheduled_insn)
2796 || BOUNDARY_DEBUG_INSN_P (last_scheduled_insn))
2797 && BLOCK_FOR_INSN (last_scheduled_insn) == *target_bb);
2799 /* Initialize INSN_QUEUE. Q_SIZE is the total number of insns in the
2800 queue. */
2801 q_ptr = 0;
2802 q_size = 0;
2804 insn_queue = XALLOCAVEC (rtx, max_insn_queue_index + 1);
2805 memset (insn_queue, 0, (max_insn_queue_index + 1) * sizeof (rtx));
2807 /* Start just before the beginning of time. */
2808 clock_var = -1;
2810 /* We need the queue and ready lists and clock_var to be initialized
2811 in try_ready () (which is called through init_ready_list ()). */
2812 (*current_sched_info->init_ready_list) ();
2814 /* The algorithm is O(n^2) in the number of ready insns at any given
2815 time in the worst case. Before reload we are more likely to have
2816 big lists so truncate them to a reasonable size. */
2817 if (!reload_completed
2818 && ready.n_ready - ready.n_debug > MAX_SCHED_READY_INSNS)
2820 ready_sort (&ready);
2822 /* Find first free-standing insn past MAX_SCHED_READY_INSNS.
2823 If there are debug insns, we know they're first. */
2824 for (i = MAX_SCHED_READY_INSNS + ready.n_debug; i < ready.n_ready; i++)
2825 if (!SCHED_GROUP_P (ready_element (&ready, i)))
2826 break;
2828 if (sched_verbose >= 2)
2830 fprintf (sched_dump,
2831 ";;\t\tReady list on entry: %d insns\n", ready.n_ready);
2832 fprintf (sched_dump,
2833 ";;\t\t before reload => truncated to %d insns\n", i);
2836 /* Delay all insns past it for 1 cycle. If debug counter is
2837 activated make an exception for the insn right after
2838 last_scheduled_insn. */
2840 rtx skip_insn;
2842 if (dbg_cnt (sched_insn) == false)
2843 skip_insn = next_nonnote_insn (last_scheduled_insn);
2844 else
2845 skip_insn = NULL_RTX;
2847 while (i < ready.n_ready)
2849 rtx insn;
2851 insn = ready_remove (&ready, i);
2853 if (insn != skip_insn)
2854 queue_insn (insn, 1);
2859 /* Now we can restore basic block notes and maintain precise cfg. */
2860 restore_bb_notes (*target_bb);
2862 last_clock_var = -1;
2864 advance = 0;
2866 sort_p = TRUE;
2867 /* Loop until all the insns in BB are scheduled. */
2868 while ((*current_sched_info->schedule_more_p) ())
2872 start_clock_var = clock_var;
2874 clock_var++;
2876 advance_one_cycle ();
2878 /* Add to the ready list all pending insns that can be issued now.
2879 If there are no ready insns, increment clock until one
2880 is ready and add all pending insns at that point to the ready
2881 list. */
2882 queue_to_ready (&ready);
2884 gcc_assert (ready.n_ready);
2886 if (sched_verbose >= 2)
2888 fprintf (sched_dump, ";;\t\tReady list after queue_to_ready: ");
2889 debug_ready_list (&ready);
2891 advance -= clock_var - start_clock_var;
2893 while (advance > 0);
2895 if (sort_p)
2897 /* Sort the ready list based on priority. */
2898 ready_sort (&ready);
2900 if (sched_verbose >= 2)
2902 fprintf (sched_dump, ";;\t\tReady list after ready_sort: ");
2903 debug_ready_list (&ready);
2907 /* We don't want md sched reorder to even see debug insns, so put
2908 them out right away. */
2909 if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
2911 if (control_flow_insn_p (last_scheduled_insn))
2913 *target_bb = current_sched_info->advance_target_bb
2914 (*target_bb, 0);
2916 if (sched_verbose)
2918 rtx x;
2920 x = next_real_insn (last_scheduled_insn);
2921 gcc_assert (x);
2922 dump_new_block_header (1, *target_bb, x, tail);
2925 last_scheduled_insn = bb_note (*target_bb);
2928 while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
2930 rtx insn = ready_remove_first (&ready);
2931 gcc_assert (DEBUG_INSN_P (insn));
2932 (*current_sched_info->begin_schedule_ready) (insn,
2933 last_scheduled_insn);
2934 move_insn (insn, last_scheduled_insn,
2935 current_sched_info->next_tail);
2936 last_scheduled_insn = insn;
2937 advance = schedule_insn (insn);
2938 gcc_assert (advance == 0);
2939 if (ready.n_ready > 0)
2940 ready_sort (&ready);
2943 if (!ready.n_ready)
2944 continue;
2947 /* Allow the target to reorder the list, typically for
2948 better instruction bundling. */
2949 if (sort_p && targetm.sched.reorder
2950 && (ready.n_ready == 0
2951 || !SCHED_GROUP_P (ready_element (&ready, 0))))
2952 can_issue_more =
2953 targetm.sched.reorder (sched_dump, sched_verbose,
2954 ready_lastpos (&ready),
2955 &ready.n_ready, clock_var);
2956 else
2957 can_issue_more = issue_rate;
2959 first_cycle_insn_p = 1;
2960 cycle_issued_insns = 0;
2961 for (;;)
2963 rtx insn;
2964 int cost;
2965 bool asm_p = false;
2967 if (sched_verbose >= 2)
2969 fprintf (sched_dump, ";;\tReady list (t = %3d): ",
2970 clock_var);
2971 debug_ready_list (&ready);
2972 if (sched_pressure_p)
2973 print_curr_reg_pressure ();
2976 if (ready.n_ready == 0
2977 && can_issue_more
2978 && reload_completed)
2980 /* Allow scheduling insns directly from the queue in case
2981 there's nothing better to do (ready list is empty) but
2982 there are still vacant dispatch slots in the current cycle. */
2983 if (sched_verbose >= 6)
2984 fprintf (sched_dump,";;\t\tSecond chance\n");
2985 memcpy (temp_state, curr_state, dfa_state_size);
2986 if (early_queue_to_ready (temp_state, &ready))
2987 ready_sort (&ready);
2990 if (ready.n_ready == 0
2991 || !can_issue_more
2992 || state_dead_lock_p (curr_state)
2993 || !(*current_sched_info->schedule_more_p) ())
2994 break;
2996 /* Select and remove the insn from the ready list. */
2997 if (sort_p)
2999 int res;
3001 insn = NULL_RTX;
3002 res = choose_ready (&ready, &insn);
3004 if (res < 0)
3005 /* Finish cycle. */
3006 break;
3007 if (res > 0)
3008 /* Restart choose_ready (). */
3009 continue;
3011 gcc_assert (insn != NULL_RTX);
3013 else
3014 insn = ready_remove_first (&ready);
3016 if (sched_pressure_p && INSN_TICK (insn) > clock_var)
3018 ready_add (&ready, insn, true);
3019 advance = 1;
3020 break;
3023 if (targetm.sched.dfa_new_cycle
3024 && targetm.sched.dfa_new_cycle (sched_dump, sched_verbose,
3025 insn, last_clock_var,
3026 clock_var, &sort_p))
3027 /* SORT_P is used by the target to override sorting
3028 of the ready list. This is needed when the target
3029 has modified its internal structures expecting that
3030 the insn will be issued next. As we need the insn
3031 to have the highest priority (so it will be returned by
3032 the ready_remove_first call above), we invoke
3033 ready_add (&ready, insn, true).
3034 But, still, there is one issue: INSN can be later
3035 discarded by scheduler's front end through
3036 current_sched_info->can_schedule_ready_p, hence, won't
3037 be issued next. */
3039 ready_add (&ready, insn, true);
3040 break;
3043 sort_p = TRUE;
3044 memcpy (temp_state, curr_state, dfa_state_size);
3045 if (recog_memoized (insn) < 0)
3047 asm_p = (GET_CODE (PATTERN (insn)) == ASM_INPUT
3048 || asm_noperands (PATTERN (insn)) >= 0);
3049 if (!first_cycle_insn_p && asm_p)
3050 /* This is an asm insn that we are trying to issue on a cycle
3051 other than the first. Issue it on the next cycle. */
3052 cost = 1;
3053 else
3054 /* A USE insn, or something else we don't need to
3055 understand. We can't pass these directly to
3056 state_transition because it will trigger a
3057 fatal error for unrecognizable insns. */
3058 cost = 0;
3060 else if (sched_pressure_p)
3061 cost = 0;
3062 else
3064 cost = state_transition (temp_state, insn);
3065 if (cost < 0)
3066 cost = 0;
3067 else if (cost == 0)
3068 cost = 1;
3071 if (cost >= 1)
3073 queue_insn (insn, cost);
3074 if (SCHED_GROUP_P (insn))
3076 advance = cost;
3077 break;
3080 continue;
3083 if (current_sched_info->can_schedule_ready_p
3084 && ! (*current_sched_info->can_schedule_ready_p) (insn))
3085 /* We normally get here only if we don't want to move
3086 insn from the split block. */
3088 TODO_SPEC (insn) = (TODO_SPEC (insn) & ~SPECULATIVE) | HARD_DEP;
3089 continue;
3092 /* DECISION is made. */
3094 if (TODO_SPEC (insn) & SPECULATIVE)
3095 generate_recovery_code (insn);
3097 if (control_flow_insn_p (last_scheduled_insn)
3098 /* This is used to switch basic blocks on request
3099 from the scheduler front-end (actually, sched-ebb.c only),
3100 to process blocks with a single fallthru
3101 edge. If the succeeding block has a jump, that jump would try to
3102 move to the end of the current bb, thus corrupting the CFG. */
3103 || current_sched_info->advance_target_bb (*target_bb, insn))
3105 *target_bb = current_sched_info->advance_target_bb
3106 (*target_bb, 0);
3108 if (sched_verbose)
3110 rtx x;
3112 x = next_real_insn (last_scheduled_insn);
3113 gcc_assert (x);
3114 dump_new_block_header (1, *target_bb, x, tail);
3117 last_scheduled_insn = bb_note (*target_bb);
3120 /* Update counters, etc in the scheduler's front end. */
3121 (*current_sched_info->begin_schedule_ready) (insn,
3122 last_scheduled_insn);
3124 move_insn (insn, last_scheduled_insn, current_sched_info->next_tail);
3125 reemit_notes (insn);
3126 last_scheduled_insn = insn;
3128 if (memcmp (curr_state, temp_state, dfa_state_size) != 0)
3130 cycle_issued_insns++;
3131 memcpy (curr_state, temp_state, dfa_state_size);
3134 if (targetm.sched.variable_issue)
3135 can_issue_more =
3136 targetm.sched.variable_issue (sched_dump, sched_verbose,
3137 insn, can_issue_more);
3138 /* A naked CLOBBER or USE generates no instruction, so do
3139 not count them against the issue rate. */
3140 else if (GET_CODE (PATTERN (insn)) != USE
3141 && GET_CODE (PATTERN (insn)) != CLOBBER)
3142 can_issue_more--;
3143 advance = schedule_insn (insn);
3145 /* After issuing an asm insn we should start a new cycle. */
3146 if (advance == 0 && asm_p)
3147 advance = 1;
3148 if (advance != 0)
3149 break;
3151 first_cycle_insn_p = 0;
3153 /* Sort the ready list based on priority. This must be
3154 redone here, as schedule_insn may have readied additional
3155 insns that will not be sorted correctly. */
3156 if (ready.n_ready > 0)
3157 ready_sort (&ready);
3159 /* Quickly go through debug insns such that md sched
3160 reorder2 doesn't have to deal with debug insns. */
3161 if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0))
3162 && (*current_sched_info->schedule_more_p) ())
3164 if (control_flow_insn_p (last_scheduled_insn))
3166 *target_bb = current_sched_info->advance_target_bb
3167 (*target_bb, 0);
3169 if (sched_verbose)
3171 rtx x;
3173 x = next_real_insn (last_scheduled_insn);
3174 gcc_assert (x);
3175 dump_new_block_header (1, *target_bb, x, tail);
3178 last_scheduled_insn = bb_note (*target_bb);
3181 while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
3183 insn = ready_remove_first (&ready);
3184 gcc_assert (DEBUG_INSN_P (insn));
3185 (*current_sched_info->begin_schedule_ready)
3186 (insn, last_scheduled_insn);
3187 move_insn (insn, last_scheduled_insn,
3188 current_sched_info->next_tail);
3189 advance = schedule_insn (insn);
3190 last_scheduled_insn = insn;
3191 gcc_assert (advance == 0);
3192 if (ready.n_ready > 0)
3193 ready_sort (&ready);
3197 if (targetm.sched.reorder2
3198 && (ready.n_ready == 0
3199 || !SCHED_GROUP_P (ready_element (&ready, 0))))
3201 can_issue_more =
3202 targetm.sched.reorder2 (sched_dump, sched_verbose,
3203 ready.n_ready
3204 ? ready_lastpos (&ready) : NULL,
3205 &ready.n_ready, clock_var);
3210 /* Debug info. */
3211 if (sched_verbose)
3213 fprintf (sched_dump, ";;\tReady list (final): ");
3214 debug_ready_list (&ready);
3217 if (current_sched_info->queue_must_finish_empty)
3218 /* Sanity check -- queue must be empty now. Meaningless if region has
3219 multiple bbs. */
3220 gcc_assert (!q_size && !ready.n_ready && !ready.n_debug);
3221 else
3223 /* We must maintain QUEUE_INDEX between blocks in region. */
3224 for (i = ready.n_ready - 1; i >= 0; i--)
3226 rtx x;
3228 x = ready_element (&ready, i);
3229 QUEUE_INDEX (x) = QUEUE_NOWHERE;
3230 TODO_SPEC (x) = (TODO_SPEC (x) & ~SPECULATIVE) | HARD_DEP;
3233 if (q_size)
3234 for (i = 0; i <= max_insn_queue_index; i++)
3236 rtx link;
3237 for (link = insn_queue[i]; link; link = XEXP (link, 1))
3239 rtx x;
3241 x = XEXP (link, 0);
3242 QUEUE_INDEX (x) = QUEUE_NOWHERE;
3243 TODO_SPEC (x) = (TODO_SPEC (x) & ~SPECULATIVE) | HARD_DEP;
3245 free_INSN_LIST_list (&insn_queue[i]);
3249 if (sched_verbose)
3250 fprintf (sched_dump, ";; total time = %d\n", clock_var);
3252 if (!current_sched_info->queue_must_finish_empty
3253 || haifa_recovery_bb_recently_added_p)
3255 /* INSN_TICK (minimum clock tick at which the insn becomes
3256 ready) may not be correct for insns in the subsequent
3257 blocks of the region. We should use a correct value of
3258 `clock_var' or modify INSN_TICK. It is better to keep
3259 clock_var equal to 0 at the start of a basic block.
3260 Therefore we modify INSN_TICK here. */
3261 fix_inter_tick (NEXT_INSN (prev_head), last_scheduled_insn);
3264 if (targetm.sched.md_finish)
3266 targetm.sched.md_finish (sched_dump, sched_verbose);
3267 /* Target might have added some instructions to the scheduled block
3268 in its md_finish () hook. These new insns don't have any data
3269 initialized and to identify them we extend h_i_d so that they'll
3270 get zero luids. */
3271 sched_init_luids (NULL, NULL, NULL, NULL);
3274 if (sched_verbose)
3275 fprintf (sched_dump, ";; new head = %d\n;; new tail = %d\n\n",
3276 INSN_UID (head), INSN_UID (tail));
3278 /* Update head/tail boundaries. */
3279 head = NEXT_INSN (prev_head);
3280 tail = last_scheduled_insn;
3282 head = restore_other_notes (head, NULL);
3284 current_sched_info->head = head;
3285 current_sched_info->tail = tail;
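/* Informal summary of the main loop of schedule_block above: each outer
   iteration advances clock_var and calls queue_to_ready so that pending
   insns whose stalls have elapsed join the ready list; the list is sorted
   with ready_sort and optionally reordered by targetm.sched.reorder; an
   inner loop then picks insns with choose_ready (or ready_remove_first),
   requeues an insn when its DFA transition cost is positive, and otherwise
   issues it with move_insn / schedule_insn while decrementing
   can_issue_more; once the cycle is exhausted, targetm.sched.reorder2 may
   rearrange whatever remains ready for the next cycle.  */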
3288 /* Set_priorities: compute priority of each insn in the block. */
3291 set_priorities (rtx head, rtx tail)
3293 rtx insn;
3294 int n_insn;
3295 int sched_max_insns_priority =
3296 current_sched_info->sched_max_insns_priority;
3297 rtx prev_head;
3299 if (head == tail && (! INSN_P (head) || BOUNDARY_DEBUG_INSN_P (head)))
3300 gcc_unreachable ();
3302 n_insn = 0;
3304 prev_head = PREV_INSN (head);
3305 for (insn = tail; insn != prev_head; insn = PREV_INSN (insn))
3307 if (!INSN_P (insn))
3308 continue;
3310 n_insn++;
3311 (void) priority (insn);
3313 gcc_assert (INSN_PRIORITY_KNOWN (insn));
3315 sched_max_insns_priority = MAX (sched_max_insns_priority,
3316 INSN_PRIORITY (insn));
3319 current_sched_info->sched_max_insns_priority = sched_max_insns_priority;
3321 return n_insn;
3324 /* Set dump and sched_verbose for the desired debugging output. If no
3325 dump-file was specified, but -fsched-verbose=N (any N), print to stderr.
3326 For -fsched-verbose=N, N>=10, print everything to stderr. */
3327 void
3328 setup_sched_dump (void)
3330 sched_verbose = sched_verbose_param;
3331 if (sched_verbose_param == 0 && dump_file)
3332 sched_verbose = 1;
3333 sched_dump = ((sched_verbose_param >= 10 || !dump_file)
3334 ? stderr : dump_file);
3337 /* Initialize some global state for the scheduler. This function works
3338 with the common data shared between all the schedulers. It is called
3339 from the scheduler specific initialization routine. */
3341 void
3342 sched_init (void)
3344 /* Disable speculative loads in their presence if cc0 defined. */
3345 #ifdef HAVE_cc0
3346 flag_schedule_speculative_load = 0;
3347 #endif
3349 sched_pressure_p = (flag_sched_pressure && ! reload_completed
3350 && common_sched_info->sched_pass_id == SCHED_RGN_PASS);
3351 if (sched_pressure_p)
3352 ira_setup_eliminable_regset ();
3354 /* Initialize SPEC_INFO. */
3355 if (targetm.sched.set_sched_flags)
3357 spec_info = &spec_info_var;
3358 targetm.sched.set_sched_flags (spec_info);
3360 if (spec_info->mask != 0)
3362 spec_info->data_weakness_cutoff =
3363 (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF) * MAX_DEP_WEAK) / 100;
3364 spec_info->control_weakness_cutoff =
3365 (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF)
3366 * REG_BR_PROB_BASE) / 100;
3368 else
3369 /* So we won't read anything accidentally. */
3370 spec_info = NULL;
3373 else
3374 /* So we won't read anything accidentally. */
3375 spec_info = 0;
3377 /* Initialize issue_rate. */
3378 if (targetm.sched.issue_rate)
3379 issue_rate = targetm.sched.issue_rate ();
3380 else
3381 issue_rate = 1;
3383 if (cached_issue_rate != issue_rate)
3385 cached_issue_rate = issue_rate;
3386 /* To invalidate max_lookahead_tries: */
3387 cached_first_cycle_multipass_dfa_lookahead = 0;
3390 if (targetm.sched.first_cycle_multipass_dfa_lookahead)
3391 dfa_lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
3392 else
3393 dfa_lookahead = 0;
3395 if (targetm.sched.init_dfa_pre_cycle_insn)
3396 targetm.sched.init_dfa_pre_cycle_insn ();
3398 if (targetm.sched.init_dfa_post_cycle_insn)
3399 targetm.sched.init_dfa_post_cycle_insn ();
3401 dfa_start ();
3402 dfa_state_size = state_size ();
3404 init_alias_analysis ();
3406 df_set_flags (DF_LR_RUN_DCE);
3407 df_note_add_problem ();
3409 /* More problems needed for interloop dep calculation in SMS. */
3410 if (common_sched_info->sched_pass_id == SCHED_SMS_PASS)
3412 df_rd_add_problem ();
3413 df_chain_add_problem (DF_DU_CHAIN + DF_UD_CHAIN);
3416 df_analyze ();
3418 /* Do not run DCE after reload, as this can kill nops inserted
3419 by bundling. */
3420 if (reload_completed)
3421 df_clear_flags (DF_LR_RUN_DCE);
3423 regstat_compute_calls_crossed ();
3425 if (targetm.sched.md_init_global)
3426 targetm.sched.md_init_global (sched_dump, sched_verbose,
3427 get_max_uid () + 1);
3429 if (sched_pressure_p)
3431 int i, max_regno = max_reg_num ();
3433 ira_set_pseudo_classes (sched_verbose ? sched_dump : NULL);
3434 sched_regno_cover_class
3435 = (enum reg_class *) xmalloc (max_regno * sizeof (enum reg_class));
3436 for (i = 0; i < max_regno; i++)
3437 sched_regno_cover_class[i]
3438 = (i < FIRST_PSEUDO_REGISTER
3439 ? ira_class_translate[REGNO_REG_CLASS (i)]
3440 : reg_cover_class (i));
3441 curr_reg_live = BITMAP_ALLOC (NULL);
3442 saved_reg_live = BITMAP_ALLOC (NULL);
3443 region_ref_regs = BITMAP_ALLOC (NULL);
3446 curr_state = xmalloc (dfa_state_size);
3449 static void haifa_init_only_bb (basic_block, basic_block);
3451 /* Initialize data structures specific to the Haifa scheduler. */
3452 void
3453 haifa_sched_init (void)
3455 setup_sched_dump ();
3456 sched_init ();
3458 if (spec_info != NULL)
3460 sched_deps_info->use_deps_list = 1;
3461 sched_deps_info->generate_spec_deps = 1;
3464 /* Initialize luids, dependency caches, target and h_i_d for the
3465 whole function. */
3467 bb_vec_t bbs = VEC_alloc (basic_block, heap, n_basic_blocks);
3468 basic_block bb;
3470 sched_init_bbs ();
3472 FOR_EACH_BB (bb)
3473 VEC_quick_push (basic_block, bbs, bb);
3474 sched_init_luids (bbs, NULL, NULL, NULL);
3475 sched_deps_init (true);
3476 sched_extend_target ();
3477 haifa_init_h_i_d (bbs, NULL, NULL, NULL);
3479 VEC_free (basic_block, heap, bbs);
3482 sched_init_only_bb = haifa_init_only_bb;
3483 sched_split_block = sched_split_block_1;
3484 sched_create_empty_bb = sched_create_empty_bb_1;
3485 haifa_recovery_bb_ever_added_p = false;
3487 #ifdef ENABLE_CHECKING
3488 /* This is used preferably for finding bugs in check_cfg () itself.
3489 We must call sched_bbs_init () before check_cfg () because check_cfg ()
3490 assumes that the last insn in the last bb has a non-null successor. */
3491 check_cfg (0, 0);
3492 #endif
3494 nr_begin_data = nr_begin_control = nr_be_in_data = nr_be_in_control = 0;
3495 before_recovery = 0;
3496 after_recovery = 0;
3499 /* Finish work with the data specific to the Haifa scheduler. */
3500 void
3501 haifa_sched_finish (void)
3503 sched_create_empty_bb = NULL;
3504 sched_split_block = NULL;
3505 sched_init_only_bb = NULL;
3507 if (spec_info && spec_info->dump)
3509 char c = reload_completed ? 'a' : 'b';
3511 fprintf (spec_info->dump,
3512 ";; %s:\n", current_function_name ());
3514 fprintf (spec_info->dump,
3515 ";; Procedure %cr-begin-data-spec motions == %d\n",
3516 c, nr_begin_data);
3517 fprintf (spec_info->dump,
3518 ";; Procedure %cr-be-in-data-spec motions == %d\n",
3519 c, nr_be_in_data);
3520 fprintf (spec_info->dump,
3521 ";; Procedure %cr-begin-control-spec motions == %d\n",
3522 c, nr_begin_control);
3523 fprintf (spec_info->dump,
3524 ";; Procedure %cr-be-in-control-spec motions == %d\n",
3525 c, nr_be_in_control);
3528 /* Finalize h_i_d, dependency caches, and luids for the whole
3529 function. Target will be finalized in md_global_finish (). */
3530 sched_deps_finish ();
3531 sched_finish_luids ();
3532 current_sched_info = NULL;
3533 sched_finish ();
3536 /* Free global data used during insn scheduling. This function works with
3537 the common data shared between the schedulers. */
3539 void
3540 sched_finish (void)
3542 haifa_finish_h_i_d ();
3543 if (sched_pressure_p)
3545 free (sched_regno_cover_class);
3546 BITMAP_FREE (region_ref_regs);
3547 BITMAP_FREE (saved_reg_live);
3548 BITMAP_FREE (curr_reg_live);
3550 free (curr_state);
3552 if (targetm.sched.md_finish_global)
3553 targetm.sched.md_finish_global (sched_dump, sched_verbose);
3555 end_alias_analysis ();
3557 regstat_free_calls_crossed ();
3559 dfa_finish ();
3561 #ifdef ENABLE_CHECKING
3562 /* After reload the ia64 backend clobbers the CFG, so we can't check anything. */
3563 if (!reload_completed)
3564 check_cfg (0, 0);
3565 #endif
3568 /* Fix INSN_TICKs of the instructions in the current block as well as
3569 INSN_TICKs of their dependents.
3570 HEAD and TAIL are the begin and the end of the current scheduled block. */
3571 static void
3572 fix_inter_tick (rtx head, rtx tail)
3574 /* Set of instructions with corrected INSN_TICK. */
3575 bitmap_head processed;
3576 /* ??? It is doubtful whether we should assume that cycle advance happens on
3577 basic block boundaries. Basically, insns that are unconditionally ready
3578 at the start of the block are preferable to those which have
3579 a one cycle dependency on an insn from the previous block. */
3580 int next_clock = clock_var + 1;
3582 bitmap_initialize (&processed, 0);
3584 /* Iterate over the scheduled instructions and fix their INSN_TICKs and
3585 the INSN_TICKs of dependent instructions, so that INSN_TICKs are consistent
3586 across different blocks. */
3587 for (tail = NEXT_INSN (tail); head != tail; head = NEXT_INSN (head))
3589 if (INSN_P (head))
3591 int tick;
3592 sd_iterator_def sd_it;
3593 dep_t dep;
3595 tick = INSN_TICK (head);
3596 gcc_assert (tick >= MIN_TICK);
3598 /* Fix INSN_TICK of instruction from just scheduled block. */
3599 if (!bitmap_bit_p (&processed, INSN_LUID (head)))
3601 bitmap_set_bit (&processed, INSN_LUID (head));
3602 tick -= next_clock;
3604 if (tick < MIN_TICK)
3605 tick = MIN_TICK;
3607 INSN_TICK (head) = tick;
3610 FOR_EACH_DEP (head, SD_LIST_RES_FORW, sd_it, dep)
3612 rtx next;
3614 next = DEP_CON (dep);
3615 tick = INSN_TICK (next);
3617 if (tick != INVALID_TICK
3618 /* If NEXT has its INSN_TICK calculated, fix it.
3619 If not - it will be properly calculated from
3620 scratch later in fix_tick_ready. */
3621 && !bitmap_bit_p (&processed, INSN_LUID (next)))
3623 bitmap_set_bit (&processed, INSN_LUID (next));
3624 tick -= next_clock;
3626 if (tick < MIN_TICK)
3627 tick = MIN_TICK;
3629 if (tick > INTER_TICK (next))
3630 INTER_TICK (next) = tick;
3631 else
3632 tick = INTER_TICK (next);
3634 INSN_TICK (next) = tick;
3639 bitmap_clear (&processed);
3642 static int haifa_speculate_insn (rtx, ds_t, rtx *);
3644 /* Check if NEXT is ready to be added to the ready or queue list.
3645 If "yes", add it to the proper list.
3646 Returns:
3647 -1 - is not ready yet,
3648 0 - added to the ready list,
3649 0 < N - queued for N cycles. */
3651 try_ready (rtx next)
3653 ds_t old_ts, *ts;
3655 ts = &TODO_SPEC (next);
3656 old_ts = *ts;
3658 gcc_assert (!(old_ts & ~(SPECULATIVE | HARD_DEP))
3659 && ((old_ts & HARD_DEP)
3660 || (old_ts & SPECULATIVE)));
3662 if (sd_lists_empty_p (next, SD_LIST_BACK))
3663 /* NEXT has all its dependencies resolved. */
3665 /* Remove HARD_DEP bit from NEXT's status. */
3666 *ts &= ~HARD_DEP;
3668 if (current_sched_info->flags & DO_SPECULATION)
3669 /* Remove all speculative bits from NEXT's status. */
3670 *ts &= ~SPECULATIVE;
3672 else
3674 /* One of NEXT's dependencies has been resolved.
3675 Recalculate NEXT's status. */
3677 *ts &= ~SPECULATIVE & ~HARD_DEP;
3679 if (sd_lists_empty_p (next, SD_LIST_HARD_BACK))
3680 /* Now we've got NEXT with speculative deps only.
3681 1. Look at the deps to see what we have to do.
3682 2. Check if we can do 'todo'. */
3684 sd_iterator_def sd_it;
3685 dep_t dep;
3686 bool first_p = true;
3688 FOR_EACH_DEP (next, SD_LIST_BACK, sd_it, dep)
3690 ds_t ds = DEP_STATUS (dep) & SPECULATIVE;
3692 if (DEBUG_INSN_P (DEP_PRO (dep))
3693 && !DEBUG_INSN_P (next))
3694 continue;
3696 if (first_p)
3698 first_p = false;
3700 *ts = ds;
3702 else
3703 *ts = ds_merge (*ts, ds);
3706 if (ds_weak (*ts) < spec_info->data_weakness_cutoff)
3707 /* Too few points. */
3708 *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
3710 else
3711 *ts |= HARD_DEP;
3714 if (*ts & HARD_DEP)
3715 gcc_assert (*ts == old_ts
3716 && QUEUE_INDEX (next) == QUEUE_NOWHERE);
3717 else if (current_sched_info->new_ready)
3718 *ts = current_sched_info->new_ready (next, *ts);
3720 /* * If !(old_ts & SPECULATIVE) (e.g. HARD_DEP or 0), then the insn might
3721 have either its original pattern or a changed (speculative) one. This is
3722 due to changing ebb in region scheduling.
3723 * But if (old_ts & SPECULATIVE), then we are pretty sure that the insn
3724 has a speculative pattern.
3726 We can't assert (!(*ts & HARD_DEP) || *ts == old_ts) here because
3727 control-speculative NEXT could have been discarded by sched-rgn.c
3728 (the same case as when discarded by can_schedule_ready_p ()). */
3730 if ((*ts & SPECULATIVE)
3731 /* If (old_ts == *ts), then (old_ts & SPECULATIVE) and we don't
3732 need to change anything. */
3733 && *ts != old_ts)
3735 int res;
3736 rtx new_pat;
3738 gcc_assert ((*ts & SPECULATIVE) && !(*ts & ~SPECULATIVE));
3740 res = haifa_speculate_insn (next, *ts, &new_pat);
3742 switch (res)
3744 case -1:
3745 /* It would be nice to change DEP_STATUS of all dependences,
3746 which have ((DEP_STATUS & SPECULATIVE) == *ts) to HARD_DEP,
3747 so we won't reanalyze anything. */
3748 *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
3749 break;
3751 case 0:
3752 /* We follow the rule that every speculative insn
3753 has a non-null ORIG_PAT. */
3754 if (!ORIG_PAT (next))
3755 ORIG_PAT (next) = PATTERN (next);
3756 break;
3758 case 1:
3759 if (!ORIG_PAT (next))
3760 /* If we are going to overwrite the original pattern of the insn,
3761 save it. */
3762 ORIG_PAT (next) = PATTERN (next);
3764 haifa_change_pattern (next, new_pat);
3765 break;
3767 default:
3768 gcc_unreachable ();
3772 /* We need to restore pattern only if (*ts == 0), because otherwise it is
3773 either correct (*ts & SPECULATIVE),
3774 or we simply don't care (*ts & HARD_DEP). */
3776 gcc_assert (!ORIG_PAT (next)
3777 || !IS_SPECULATION_BRANCHY_CHECK_P (next));
3779 if (*ts & HARD_DEP)
3781 /* We can't assert (QUEUE_INDEX (next) == QUEUE_NOWHERE) here because
3782 control-speculative NEXT could have been discarded by sched-rgn.c
3783 (the same case as when discarded by can_schedule_ready_p ()). */
3784 /*gcc_assert (QUEUE_INDEX (next) == QUEUE_NOWHERE);*/
3786 change_queue_index (next, QUEUE_NOWHERE);
3787 return -1;
3789 else if (!(*ts & BEGIN_SPEC) && ORIG_PAT (next) && !IS_SPECULATION_CHECK_P (next))
3790 /* We should change the pattern of every previously speculative
3791 instruction - and we determine whether NEXT was speculative by using
3792 the ORIG_PAT field. Except in one case - speculation checks have
3793 ORIG_PAT too, so skip them. */
3795 haifa_change_pattern (next, ORIG_PAT (next));
3796 ORIG_PAT (next) = 0;
3799 if (sched_verbose >= 2)
3801 int s = TODO_SPEC (next);
3803 fprintf (sched_dump, ";;\t\tdependencies resolved: insn %s",
3804 (*current_sched_info->print_insn) (next, 0));
3806 if (spec_info && spec_info->dump)
3808 if (s & BEGIN_DATA)
3809 fprintf (spec_info->dump, "; data-spec;");
3810 if (s & BEGIN_CONTROL)
3811 fprintf (spec_info->dump, "; control-spec;");
3812 if (s & BE_IN_CONTROL)
3813 fprintf (spec_info->dump, "; in-control-spec;");
3816 fprintf (sched_dump, "\n");
3819 adjust_priority (next);
3821 return fix_tick_ready (next);
3824 /* Calculate INSN_TICK of NEXT and add it to either ready or queue list. */
3825 static int
3826 fix_tick_ready (rtx next)
3828 int tick, delay;
3830 if (!sd_lists_empty_p (next, SD_LIST_RES_BACK))
3832 int full_p;
3833 sd_iterator_def sd_it;
3834 dep_t dep;
3836 tick = INSN_TICK (next);
3837 /* If tick is not equal to INVALID_TICK, then update
3838 INSN_TICK of NEXT with the most recently resolved dependence
3839 cost. Otherwise, recalculate it from scratch. */
3840 full_p = (tick == INVALID_TICK);
3842 FOR_EACH_DEP (next, SD_LIST_RES_BACK, sd_it, dep)
3844 rtx pro = DEP_PRO (dep);
3845 int tick1;
3847 gcc_assert (INSN_TICK (pro) >= MIN_TICK);
3849 tick1 = INSN_TICK (pro) + dep_cost (dep);
3850 if (tick1 > tick)
3851 tick = tick1;
3853 if (!full_p)
3854 break;
3857 else
3858 tick = -1;
3860 INSN_TICK (next) = tick;
3862 delay = tick - clock_var;
3863 if (delay <= 0 || sched_pressure_p)
3864 delay = QUEUE_READY;
3866 change_queue_index (next, delay);
3868 return delay;
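/* A standalone numeric model of the computation in fix_tick_ready above:
   the earliest tick of an insn is the maximum, over its resolved backward
   dependences, of producer tick plus dependence cost, and the delay is
   measured from the current cycle.  All values below are hypothetical and
   "model_fix_tick_delay" is an illustrative helper, not part of this
   file.  */
static int
model_fix_tick_delay (const int *prod_tick, const int *costs, int ndeps,
                      int clock)
{
  int tick = -1;
  int i;

  for (i = 0; i < ndeps; i++)
    if (prod_tick[i] + costs[i] > tick)
      tick = prod_tick[i] + costs[i];

  /* E.g. producers ticked at 5 and 6 with costs 3 and 1 give tick == 8;
     with clock == 7 the delay is 1, so the insn is queued for one more
     cycle, while a non-positive delay would put it straight on the ready
     list.  */
  return tick - clock;
}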
3871 /* Move NEXT to the proper queue list with (DELAY >= 1),
3872 or add it to the ready list (DELAY == QUEUE_READY),
3873 or remove it from the ready and queue lists altogether (DELAY == QUEUE_NOWHERE). */
3874 static void
3875 change_queue_index (rtx next, int delay)
3877 int i = QUEUE_INDEX (next);
3879 gcc_assert (QUEUE_NOWHERE <= delay && delay <= max_insn_queue_index
3880 && delay != 0);
3881 gcc_assert (i != QUEUE_SCHEDULED);
3883 if ((delay > 0 && NEXT_Q_AFTER (q_ptr, delay) == i)
3884 || (delay < 0 && delay == i))
3885 /* We have nothing to do. */
3886 return;
3888 /* Remove NEXT from wherever it is now. */
3889 if (i == QUEUE_READY)
3890 ready_remove_insn (next);
3891 else if (i >= 0)
3892 queue_remove (next);
3894 /* Add it to the proper place. */
3895 if (delay == QUEUE_READY)
3896 ready_add (readyp, next, false);
3897 else if (delay >= 1)
3898 queue_insn (next, delay);
3900 if (sched_verbose >= 2)
3902 fprintf (sched_dump, ";;\t\ttick updated: insn %s",
3903 (*current_sched_info->print_insn) (next, 0));
3905 if (delay == QUEUE_READY)
3906 fprintf (sched_dump, " into ready\n");
3907 else if (delay >= 1)
3908 fprintf (sched_dump, " into queue with cost=%d\n", delay);
3909 else
3910 fprintf (sched_dump, " removed from ready or queue lists\n");
3914 static int sched_ready_n_insns = -1;
3916 /* Initialize per region data structures. */
3917 void
3918 sched_extend_ready_list (int new_sched_ready_n_insns)
3920 int i;
3922 if (sched_ready_n_insns == -1)
3923 /* At the first call we need to initialize one more choice_stack
3924 entry. */
3926 i = 0;
3927 sched_ready_n_insns = 0;
3929 else
3930 i = sched_ready_n_insns + 1;
3932 ready.veclen = new_sched_ready_n_insns + issue_rate;
3933 ready.vec = XRESIZEVEC (rtx, ready.vec, ready.veclen);
3935 gcc_assert (new_sched_ready_n_insns >= sched_ready_n_insns);
3937 ready_try = (char *) xrecalloc (ready_try, new_sched_ready_n_insns,
3938 sched_ready_n_insns, sizeof (*ready_try));
3940 /* We allocate +1 element to save initial state in the choice_stack[0]
3941 entry. */
3942 choice_stack = XRESIZEVEC (struct choice_entry, choice_stack,
3943 new_sched_ready_n_insns + 1);
3945 for (; i <= new_sched_ready_n_insns; i++)
3946 choice_stack[i].state = xmalloc (dfa_state_size);
3948 sched_ready_n_insns = new_sched_ready_n_insns;
3951 /* Free per region data structures. */
3952 void
3953 sched_finish_ready_list (void)
3955 int i;
3957 free (ready.vec);
3958 ready.vec = NULL;
3959 ready.veclen = 0;
3961 free (ready_try);
3962 ready_try = NULL;
3964 for (i = 0; i <= sched_ready_n_insns; i++)
3965 free (choice_stack [i].state);
3966 free (choice_stack);
3967 choice_stack = NULL;
3969 sched_ready_n_insns = -1;
3972 static int
3973 haifa_luid_for_non_insn (rtx x)
3975 gcc_assert (NOTE_P (x) || LABEL_P (x));
3977 return 0;
3980 /* Generates recovery code for INSN. */
3981 static void
3982 generate_recovery_code (rtx insn)
3984 if (TODO_SPEC (insn) & BEGIN_SPEC)
3985 begin_speculative_block (insn);
3987 /* Here we have an insn with no dependencies on
3988 instructions other than CHECK_SPEC ones. */
3990 if (TODO_SPEC (insn) & BE_IN_SPEC)
3991 add_to_speculative_block (insn);
3994 /* Helper function.
3995 Tries to add speculative dependencies of type FS between TWIN and the
3996 consumers of INSN's forward dependencies. */
3997 static void
3998 process_insn_forw_deps_be_in_spec (rtx insn, rtx twin, ds_t fs)
4000 sd_iterator_def sd_it;
4001 dep_t dep;
4003 FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
4005 ds_t ds;
4006 rtx consumer;
4008 consumer = DEP_CON (dep);
4010 ds = DEP_STATUS (dep);
4012 if (/* If we want to create speculative dep. */
4013 fs
4014 /* And we can do that because this is a true dep. */
4015 && (ds & DEP_TYPES) == DEP_TRUE)
4017 gcc_assert (!(ds & BE_IN_SPEC));
4019 if (/* If this dep can be overcome with 'begin speculation'. */
4020 ds & BEGIN_SPEC)
4021 /* Then we have a choice: keep the dep 'begin speculative'
4022 or transform it into 'be in speculative'. */
4024 if (/* In try_ready we assert that if insn once became ready
4025 it can be removed from the ready (or queue) list only
4026 due to backend decision. Hence we can't let the
4027 probability of the speculative dep to decrease. */
4028 ds_weak (ds) <= ds_weak (fs))
4030 ds_t new_ds;
4032 new_ds = (ds & ~BEGIN_SPEC) | fs;
4034 if (/* consumer can 'be in speculative'. */
4035 sched_insn_is_legitimate_for_speculation_p (consumer,
4036 new_ds))
4037 /* Transform it to be in speculative. */
4038 ds = new_ds;
4041 else
4042 /* Mark the dep as 'be in speculative'. */
4043 ds |= fs;
4047 dep_def _new_dep, *new_dep = &_new_dep;
4049 init_dep_1 (new_dep, twin, consumer, DEP_TYPE (dep), ds);
4050 sd_add_dep (new_dep, false);
4055 /* Generates recovery code for BEGIN speculative INSN. */
4056 static void
4057 begin_speculative_block (rtx insn)
4059 if (TODO_SPEC (insn) & BEGIN_DATA)
4060 nr_begin_data++;
4061 if (TODO_SPEC (insn) & BEGIN_CONTROL)
4062 nr_begin_control++;
4064 create_check_block_twin (insn, false);
4066 TODO_SPEC (insn) &= ~BEGIN_SPEC;
4069 static void haifa_init_insn (rtx);
4071 /* Generates recovery code for BE_IN speculative INSN. */
4072 static void
4073 add_to_speculative_block (rtx insn)
4075 ds_t ts;
4076 sd_iterator_def sd_it;
4077 dep_t dep;
4078 rtx twins = NULL;
4079 rtx_vec_t priorities_roots;
4081 ts = TODO_SPEC (insn);
4082 gcc_assert (!(ts & ~BE_IN_SPEC));
4084 if (ts & BE_IN_DATA)
4085 nr_be_in_data++;
4086 if (ts & BE_IN_CONTROL)
4087 nr_be_in_control++;
4089 TODO_SPEC (insn) &= ~BE_IN_SPEC;
4090 gcc_assert (!TODO_SPEC (insn));
4092 DONE_SPEC (insn) |= ts;
4094 /* First we convert all simple checks to branchy ones. */
4095 for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4096 sd_iterator_cond (&sd_it, &dep);)
4098 rtx check = DEP_PRO (dep);
4100 if (IS_SPECULATION_SIMPLE_CHECK_P (check))
4102 create_check_block_twin (check, true);
4104 /* Restart search. */
4105 sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4107 else
4108 /* Continue search. */
4109 sd_iterator_next (&sd_it);
4112 priorities_roots = NULL;
4113 clear_priorities (insn, &priorities_roots);
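/* Each iteration of the following loop handles one recovery block REC:
   it emits a twin of INSN at the end of REC, redirects INSN's
   dependencies on producers inside REC to that twin, and then removes
   those dependencies from INSN itself.  */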
4115 while (1)
4117 rtx check, twin;
4118 basic_block rec;
4120 /* Get the first backward dependency of INSN. */
4121 sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4122 if (!sd_iterator_cond (&sd_it, &dep))
4123 /* INSN has no backward dependencies left. */
4124 break;
4126 gcc_assert ((DEP_STATUS (dep) & BEGIN_SPEC) == 0
4127 && (DEP_STATUS (dep) & BE_IN_SPEC) != 0
4128 && (DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
4130 check = DEP_PRO (dep);
4132 gcc_assert (!IS_SPECULATION_CHECK_P (check) && !ORIG_PAT (check)
4133 && QUEUE_INDEX (check) == QUEUE_NOWHERE);
4135 rec = BLOCK_FOR_INSN (check);
4137 twin = emit_insn_before (copy_insn (PATTERN (insn)), BB_END (rec));
4138 haifa_init_insn (twin);
4140 sd_copy_back_deps (twin, insn, true);
4142 if (sched_verbose && spec_info->dump)
4143 /* INSN_BB (insn) isn't determined for twin insns yet.
4144 So we can't use current_sched_info->print_insn. */
4145 fprintf (spec_info->dump, ";;\t\tGenerated twin insn : %d/rec%d\n",
4146 INSN_UID (twin), rec->index);
4148 twins = alloc_INSN_LIST (twin, twins);
4150 /* Add dependences between TWIN and all appropriate
4151 instructions from REC. */
4152 FOR_EACH_DEP (insn, SD_LIST_SPEC_BACK, sd_it, dep)
4154 rtx pro = DEP_PRO (dep);
4156 gcc_assert (DEP_TYPE (dep) == REG_DEP_TRUE);
4158 /* INSN might have dependencies from the instructions from
4159 several recovery blocks. At this iteration we process those
4160 producers that reside in REC. */
4161 if (BLOCK_FOR_INSN (pro) == rec)
4163 dep_def _new_dep, *new_dep = &_new_dep;
4165 init_dep (new_dep, pro, twin, REG_DEP_TRUE);
4166 sd_add_dep (new_dep, false);
4170 process_insn_forw_deps_be_in_spec (insn, twin, ts);
4172 /* Remove all dependencies between INSN and insns in REC. */
4173 for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4174 sd_iterator_cond (&sd_it, &dep);)
4176 rtx pro = DEP_PRO (dep);
4178 if (BLOCK_FOR_INSN (pro) == rec)
4179 sd_delete_dep (sd_it);
4180 else
4181 sd_iterator_next (&sd_it);
4185 /* We couldn't have added the dependencies between INSN and TWINS earlier
4186 because that would make TWINS appear in the INSN_BACK_DEPS (INSN). */
4187 while (twins)
4189 rtx twin;
4191 twin = XEXP (twins, 0);
4194 dep_def _new_dep, *new_dep = &_new_dep;
4196 init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
4197 sd_add_dep (new_dep, false);
4200 twin = XEXP (twins, 1);
4201 free_INSN_LIST_node (twins);
4202 twins = twin;
4205 calc_priorities (priorities_roots);
4206 VEC_free (rtx, heap, priorities_roots);
4209 /* Extends the array pointed to by P and fills only the new part with zeros. */
4210 void *
4211 xrecalloc (void *p, size_t new_nmemb, size_t old_nmemb, size_t size)
4213 gcc_assert (new_nmemb >= old_nmemb);
4214 p = XRESIZEVAR (void, p, new_nmemb * size);
4215 memset (((char *) p) + old_nmemb * size, 0, (new_nmemb - old_nmemb) * size);
4216 return p;
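/* A minimal usage sketch (mirroring the ready_try call in
   sched_extend_ready_list above; OLD_N and NEW_N are illustrative names):

     ready_try = (char *) xrecalloc (ready_try, new_n, old_n,
                                     sizeof (*ready_try));

   Only the elements in [old_n, new_n) are zeroed; the first old_n
   elements keep their previous contents.  */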
4219 /* Helper function.
4220 Find fallthru edge from PRED. */
4221 edge
4222 find_fallthru_edge (basic_block pred)
4224 edge e;
4225 edge_iterator ei;
4226 basic_block succ;
4228 succ = pred->next_bb;
4229 gcc_assert (succ->prev_bb == pred);
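/* The fallthru edge to SUCC, if it exists, is present both in PRED->succs
   and in SUCC->preds, so scan whichever edge vector is shorter.  */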
4231 if (EDGE_COUNT (pred->succs) <= EDGE_COUNT (succ->preds))
4233 FOR_EACH_EDGE (e, ei, pred->succs)
4234 if (e->flags & EDGE_FALLTHRU)
4236 gcc_assert (e->dest == succ);
4237 return e;
4240 else
4242 FOR_EACH_EDGE (e, ei, succ->preds)
4243 if (e->flags & EDGE_FALLTHRU)
4245 gcc_assert (e->src == pred);
4246 return e;
4250 return NULL;
4253 /* Extend per basic block data structures. */
4254 static void
4255 sched_extend_bb (void)
4257 rtx insn;
4259 /* The following is done to keep current_sched_info->next_tail non null. */
4260 insn = BB_END (EXIT_BLOCK_PTR->prev_bb);
4261 if (NEXT_INSN (insn) == 0
4262 || (!NOTE_P (insn)
4263 && !LABEL_P (insn)
4264 /* Don't emit a NOTE if it would end up before a BARRIER. */
4265 && !BARRIER_P (NEXT_INSN (insn))))
4267 rtx note = emit_note_after (NOTE_INSN_DELETED, insn);
4268 /* Make the note appear outside the BB. */
4269 set_block_for_insn (note, NULL);
4270 BB_END (EXIT_BLOCK_PTR->prev_bb) = insn;
4274 /* Init per basic block data structures. */
4275 void
4276 sched_init_bbs (void)
4278 sched_extend_bb ();
4281 /* Initialize BEFORE_RECOVERY variable. */
4282 static void
4283 init_before_recovery (basic_block *before_recovery_ptr)
4285 basic_block last;
4286 edge e;
4288 last = EXIT_BLOCK_PTR->prev_bb;
4289 e = find_fallthru_edge (last);
4291 if (e)
4293 /* We create two basic blocks:
4294 1. a single-instruction block that is inserted right after E->SRC
4295 and jumps to
4296 2. an empty block right before EXIT_BLOCK.
4297 Recovery blocks will be emitted between these two blocks. */
4299 basic_block single, empty;
4300 rtx x, label;
4302 /* If the fallthrough edge to exit we've found is from the block we've
4303 created before, don't do anything more. */
4304 if (last == after_recovery)
4305 return;
4307 adding_bb_to_current_region_p = false;
4309 single = sched_create_empty_bb (last);
4310 empty = sched_create_empty_bb (single);
4312 /* Add new blocks to the root loop. */
4313 if (current_loops != NULL)
4315 add_bb_to_loop (single, VEC_index (loop_p, current_loops->larray, 0));
4316 add_bb_to_loop (empty, VEC_index (loop_p, current_loops->larray, 0));
4319 single->count = last->count;
4320 empty->count = last->count;
4321 single->frequency = last->frequency;
4322 empty->frequency = last->frequency;
4323 BB_COPY_PARTITION (single, last);
4324 BB_COPY_PARTITION (empty, last);
4326 redirect_edge_succ (e, single);
4327 make_single_succ_edge (single, empty, 0);
4328 make_single_succ_edge (empty, EXIT_BLOCK_PTR,
4329 EDGE_FALLTHRU | EDGE_CAN_FALLTHRU);
4331 label = block_label (empty);
4332 x = emit_jump_insn_after (gen_jump (label), BB_END (single));
4333 JUMP_LABEL (x) = label;
4334 LABEL_NUSES (label)++;
4335 haifa_init_insn (x);
4337 emit_barrier_after (x);
4339 sched_init_only_bb (empty, NULL);
4340 sched_init_only_bb (single, NULL);
4341 sched_extend_bb ();
4343 adding_bb_to_current_region_p = true;
4344 before_recovery = single;
4345 after_recovery = empty;
4347 if (before_recovery_ptr)
4348 *before_recovery_ptr = before_recovery;
4350 if (sched_verbose >= 2 && spec_info->dump)
4351 fprintf (spec_info->dump,
4352 ";;\t\tFixed fallthru to EXIT : %d->>%d->%d->>EXIT\n",
4353 last->index, single->index, empty->index);
4355 else
4356 before_recovery = last;
4359 /* Returns new recovery block. */
4360 basic_block
4361 sched_create_recovery_block (basic_block *before_recovery_ptr)
4363 rtx label;
4364 rtx barrier;
4365 basic_block rec;
4367 haifa_recovery_bb_recently_added_p = true;
4368 haifa_recovery_bb_ever_added_p = true;
4370 init_before_recovery (before_recovery_ptr);
4372 barrier = get_last_bb_insn (before_recovery);
4373 gcc_assert (BARRIER_P (barrier));
4375 label = emit_label_after (gen_label_rtx (), barrier);
4377 rec = create_basic_block (label, label, before_recovery);
4379 /* A recovery block always ends with an unconditional jump. */
4380 emit_barrier_after (BB_END (rec));
4382 if (BB_PARTITION (before_recovery) != BB_UNPARTITIONED)
4383 BB_SET_PARTITION (rec, BB_COLD_PARTITION);
4385 if (sched_verbose && spec_info->dump)
4386 fprintf (spec_info->dump, ";;\t\tGenerated recovery block rec%d\n",
4387 rec->index);
4389 return rec;
4392 /* Create edges: FIRST_BB -> REC; FIRST_BB -> SECOND_BB; REC -> SECOND_BB
4393 and emit necessary jumps. */
4394 void
4395 sched_create_recovery_edges (basic_block first_bb, basic_block rec,
4396 basic_block second_bb)
4398 rtx label;
4399 rtx jump;
4400 int edge_flags;
4402 /* Fix the incoming edge. */
4403 /* ??? Which other flags should be specified? */
4404 if (BB_PARTITION (first_bb) != BB_PARTITION (rec))
4405 /* The partition type is the same if it is "unpartitioned". */
4406 edge_flags = EDGE_CROSSING;
4407 else
4408 edge_flags = 0;
4410 make_edge (first_bb, rec, edge_flags);
4411 label = block_label (second_bb);
4412 jump = emit_jump_insn_after (gen_jump (label), BB_END (rec));
4413 JUMP_LABEL (jump) = label;
4414 LABEL_NUSES (label)++;
4416 if (BB_PARTITION (second_bb) != BB_PARTITION (rec))
4417 /* The partition type is the same if it is "unpartitioned". */
4419 /* Rewritten from cfgrtl.c. */
4420 if (flag_reorder_blocks_and_partition
4421 && targetm.have_named_sections)
4423 /* We don't need the same note for the check because
4424 any_condjump_p (check) == true. */
4425 add_reg_note (jump, REG_CROSSING_JUMP, NULL_RTX);
4427 edge_flags = EDGE_CROSSING;
4429 else
4430 edge_flags = 0;
4432 make_single_succ_edge (rec, second_bb, edge_flags);
4435 /* This function creates recovery code for INSN. If MUTATE_P is nonzero,
4436 INSN is a simple check, that should be converted to branchy one. */
4437 static void
4438 create_check_block_twin (rtx insn, bool mutate_p)
4440 basic_block rec;
4441 rtx label, check, twin;
4442 ds_t fs;
4443 sd_iterator_def sd_it;
4444 dep_t dep;
4445 dep_def _new_dep, *new_dep = &_new_dep;
4446 ds_t todo_spec;
4448 gcc_assert (ORIG_PAT (insn) != NULL_RTX);
4450 if (!mutate_p)
4451 todo_spec = TODO_SPEC (insn);
4452 else
4454 gcc_assert (IS_SPECULATION_SIMPLE_CHECK_P (insn)
4455 && (TODO_SPEC (insn) & SPECULATIVE) == 0);
4457 todo_spec = CHECK_SPEC (insn);
4460 todo_spec &= SPECULATIVE;
4462 /* Create recovery block. */
4463 if (mutate_p || targetm.sched.needs_block_p (todo_spec))
4465 rec = sched_create_recovery_block (NULL);
4466 label = BB_HEAD (rec);
4468 else
4470 rec = EXIT_BLOCK_PTR;
4471 label = NULL_RTX;
4474 /* Emit CHECK. */
4475 check = targetm.sched.gen_spec_check (insn, label, todo_spec);
4477 if (rec != EXIT_BLOCK_PTR)
4479 /* To have mem_reg alive at the beginning of second_bb,
4480 we emit the check BEFORE insn, so that after splitting,
4481 insn will be at the beginning of second_bb, which will
4482 provide us with the correct life information. */
4483 check = emit_jump_insn_before (check, insn);
4484 JUMP_LABEL (check) = label;
4485 LABEL_NUSES (label)++;
4487 else
4488 check = emit_insn_before (check, insn);
4490 /* Extend data structures. */
4491 haifa_init_insn (check);
4493 /* CHECK is being added to current region. Extend ready list. */
4494 gcc_assert (sched_ready_n_insns != -1);
4495 sched_extend_ready_list (sched_ready_n_insns + 1);
4497 if (current_sched_info->add_remove_insn)
4498 current_sched_info->add_remove_insn (insn, 0);
4500 RECOVERY_BLOCK (check) = rec;
4502 if (sched_verbose && spec_info->dump)
4503 fprintf (spec_info->dump, ";;\t\tGenerated check insn : %s\n",
4504 (*current_sched_info->print_insn) (check, 0));
4506 gcc_assert (ORIG_PAT (insn));
4508 /* Initialize TWIN (twin is a duplicate of original instruction
4509 in the recovery block). */
4510 if (rec != EXIT_BLOCK_PTR)
4512 sd_iterator_def sd_it;
4513 dep_t dep;
4515 FOR_EACH_DEP (insn, SD_LIST_RES_BACK, sd_it, dep)
4516 if ((DEP_STATUS (dep) & DEP_OUTPUT) != 0)
4518 struct _dep _dep2, *dep2 = &_dep2;
4520 init_dep (dep2, DEP_PRO (dep), check, REG_DEP_TRUE);
4522 sd_add_dep (dep2, true);
4525 twin = emit_insn_after (ORIG_PAT (insn), BB_END (rec));
4526 haifa_init_insn (twin);
4528 if (sched_verbose && spec_info->dump)
4529 /* INSN_BB (insn) isn't determined for twin insns yet.
4530 So we can't use current_sched_info->print_insn. */
4531 fprintf (spec_info->dump, ";;\t\tGenerated twin insn : %d/rec%d\n",
4532 INSN_UID (twin), rec->index);
4534 else
4536 ORIG_PAT (check) = ORIG_PAT (insn);
4537 HAS_INTERNAL_DEP (check) = 1;
4538 twin = check;
4539 /* ??? We probably should change all OUTPUT dependencies to
4540 (TRUE | OUTPUT). */
4543 /* Copy all resolved back dependencies of INSN to TWIN. This will
4544 provide correct value for INSN_TICK (TWIN). */
4545 sd_copy_back_deps (twin, insn, true);
4547 if (rec != EXIT_BLOCK_PTR)
4548 /* In case of branchy check, fix CFG. */
4550 basic_block first_bb, second_bb;
4551 rtx jump;
4553 first_bb = BLOCK_FOR_INSN (check);
4554 second_bb = sched_split_block (first_bb, check);
4556 sched_create_recovery_edges (first_bb, rec, second_bb);
4558 sched_init_only_bb (second_bb, first_bb);
4559 sched_init_only_bb (rec, EXIT_BLOCK_PTR);
4561 jump = BB_END (rec);
4562 haifa_init_insn (jump);
4565 /* Move backward dependences from INSN to CHECK and
4566 move forward dependences from INSN to TWIN. */
4568 /* First, create dependencies between INSN's producers and CHECK & TWIN. */
4569 FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
4571 rtx pro = DEP_PRO (dep);
4572 ds_t ds;
4574 /* If BEGIN_DATA: [insn ~~TRUE~~> producer]:
4575 check --TRUE--> producer ??? or ANTI ???
4576 twin --TRUE--> producer
4577 twin --ANTI--> check
4579 If BEGIN_CONTROL: [insn ~~ANTI~~> producer]:
4580 check --ANTI--> producer
4581 twin --ANTI--> producer
4582 twin --ANTI--> check
4584 If BE_IN_SPEC: [insn ~~TRUE~~> producer]:
4585 check ~~TRUE~~> producer
4586 twin ~~TRUE~~> producer
4587 twin --ANTI--> check */
4589 ds = DEP_STATUS (dep);
4591 if (ds & BEGIN_SPEC)
4593 gcc_assert (!mutate_p);
4594 ds &= ~BEGIN_SPEC;
4597 init_dep_1 (new_dep, pro, check, DEP_TYPE (dep), ds);
4598 sd_add_dep (new_dep, false);
4600 if (rec != EXIT_BLOCK_PTR)
4602 DEP_CON (new_dep) = twin;
4603 sd_add_dep (new_dep, false);
4607 /* Second, remove backward dependencies of INSN. */
4608 for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4609 sd_iterator_cond (&sd_it, &dep);)
4611 if ((DEP_STATUS (dep) & BEGIN_SPEC)
4612 || mutate_p)
4613 /* We can delete this dep because we overcome it with
4614 BEGIN_SPECULATION. */
4615 sd_delete_dep (sd_it);
4616 else
4617 sd_iterator_next (&sd_it);
4620 /* Future Speculations. Determine what BE_IN speculations will be like. */
4621 fs = 0;
4623 /* Fields (DONE_SPEC (x) & BEGIN_SPEC) and CHECK_SPEC (x) are set only
4624 here. */
4626 gcc_assert (!DONE_SPEC (insn));
4628 if (!mutate_p)
4630 ds_t ts = TODO_SPEC (insn);
4632 DONE_SPEC (insn) = ts & BEGIN_SPEC;
4633 CHECK_SPEC (check) = ts & BEGIN_SPEC;
4635 /* Luckiness of future speculations solely depends upon initial
4636 BEGIN speculation. */
4637 if (ts & BEGIN_DATA)
4638 fs = set_dep_weak (fs, BE_IN_DATA, get_dep_weak (ts, BEGIN_DATA));
4639 if (ts & BEGIN_CONTROL)
4640 fs = set_dep_weak (fs, BE_IN_CONTROL,
4641 get_dep_weak (ts, BEGIN_CONTROL));
4643 else
4644 CHECK_SPEC (check) = CHECK_SPEC (insn);
4646 /* Future speculations: call the helper. */
4647 process_insn_forw_deps_be_in_spec (insn, twin, fs);
4649 if (rec != EXIT_BLOCK_PTR)
4651 /* Which types of dependencies we should use here is,
4652 in general, a machine-dependent question... but, for now,
4653 it is not. */
4655 if (!mutate_p)
4657 init_dep (new_dep, insn, check, REG_DEP_TRUE);
4658 sd_add_dep (new_dep, false);
4660 init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
4661 sd_add_dep (new_dep, false);
4663 else
4665 if (spec_info->dump)
4666 fprintf (spec_info->dump, ";;\t\tRemoved simple check : %s\n",
4667 (*current_sched_info->print_insn) (insn, 0));
4669 /* Remove all dependencies of the INSN. */
4671 sd_it = sd_iterator_start (insn, (SD_LIST_FORW
4672 | SD_LIST_BACK
4673 | SD_LIST_RES_BACK));
4674 while (sd_iterator_cond (&sd_it, &dep))
4675 sd_delete_dep (sd_it);
4678 /* If the former check (INSN) was already moved to the ready (or queue)
4679 list, add the new check (CHECK) there too. */
4680 if (QUEUE_INDEX (insn) != QUEUE_NOWHERE)
4681 try_ready (check);
4683 /* Remove old check from instruction stream and free its
4684 data. */
4685 sched_remove_insn (insn);
4688 init_dep (new_dep, check, twin, REG_DEP_ANTI);
4689 sd_add_dep (new_dep, false);
4691 else
4693 init_dep_1 (new_dep, insn, check, REG_DEP_TRUE, DEP_TRUE | DEP_OUTPUT);
4694 sd_add_dep (new_dep, false);
4697 if (!mutate_p)
4698 /* Fix priorities. If MUTATE_P is nonzero, this is not necessary,
4699 because it'll be done later in add_to_speculative_block. */
4701 rtx_vec_t priorities_roots = NULL;
4703 clear_priorities (twin, &priorities_roots);
4704 calc_priorities (priorities_roots);
4705 VEC_free (rtx, heap, priorities_roots);
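/* Summary of the branchy case built above: FIRST_BB now ends with CHECK,
   which conditionally jumps to the recovery block REC; REC contains TWIN
   followed by a jump to SECOND_BB; and FIRST_BB falls through to
   SECOND_BB.  */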
4709 /* Removes dependencies between instructions in the recovery block REC
4710 and the usual region instructions. It keeps inner dependencies so that
4711 it won't be necessary to recompute them. */
4712 static void
4713 fix_recovery_deps (basic_block rec)
4715 rtx note, insn, jump, ready_list = 0;
4716 bitmap_head in_ready;
4717 rtx link;
4719 bitmap_initialize (&in_ready, 0);
4721 /* NOTE - a basic block note. */
4722 note = NEXT_INSN (BB_HEAD (rec));
4723 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4724 insn = BB_END (rec);
4725 gcc_assert (JUMP_P (insn));
4726 insn = PREV_INSN (insn);
4728 do
4730 sd_iterator_def sd_it;
4731 dep_t dep;
4733 for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
4734 sd_iterator_cond (&sd_it, &dep);)
4736 rtx consumer = DEP_CON (dep);
4738 if (BLOCK_FOR_INSN (consumer) != rec)
4740 sd_delete_dep (sd_it);
4742 if (!bitmap_bit_p (&in_ready, INSN_LUID (consumer)))
4744 ready_list = alloc_INSN_LIST (consumer, ready_list);
4745 bitmap_set_bit (&in_ready, INSN_LUID (consumer));
4748 else
4750 gcc_assert ((DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
4752 sd_iterator_next (&sd_it);
4756 insn = PREV_INSN (insn);
4758 while (insn != note);
4760 bitmap_clear (&in_ready);
4762 /* Try to add instructions to the ready or queue list. */
4763 for (link = ready_list; link; link = XEXP (link, 1))
4764 try_ready (XEXP (link, 0));
4765 free_INSN_LIST_list (&ready_list);
4767 /* Fix the jump's dependencies. */
4768 insn = BB_HEAD (rec);
4769 jump = BB_END (rec);
4771 gcc_assert (LABEL_P (insn));
4772 insn = NEXT_INSN (insn);
4774 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (insn));
4775 add_jump_dependencies (insn, jump);
4778 /* Change pattern of INSN to NEW_PAT. */
4779 void
4780 sched_change_pattern (rtx insn, rtx new_pat)
4782 int t;
4784 t = validate_change (insn, &PATTERN (insn), new_pat, 0);
4785 gcc_assert (t);
4786 dfa_clear_single_insn_cache (insn);
4789 /* Change pattern of INSN to NEW_PAT. Invalidate cached haifa
4790 instruction data. */
4791 static void
4792 haifa_change_pattern (rtx insn, rtx new_pat)
4794 sched_change_pattern (insn, new_pat);
4796 /* Invalidate INSN_COST, so it'll be recalculated. */
4797 INSN_COST (insn) = -1;
4798 /* Invalidate INSN_TICK, so it'll be recalculated. */
4799 INSN_TICK (insn) = INVALID_TICK;
4802 /* Return -1 if INSN can't be speculated,
4803 0 if, for speculation with REQUEST mode, it is OK to use the
4804 current instruction pattern, or
4805 1 if the pattern must be changed: *NEW_PAT then holds the speculative pattern. */
4806 int
4807 sched_speculate_insn (rtx insn, ds_t request, rtx *new_pat)
4809 gcc_assert (current_sched_info->flags & DO_SPECULATION
4810 && (request & SPECULATIVE)
4811 && sched_insn_is_legitimate_for_speculation_p (insn, request));
4813 if ((request & spec_info->mask) != request)
4814 return -1;
4816 if (request & BE_IN_SPEC
4817 && !(request & BEGIN_SPEC))
4818 return 0;
4820 return targetm.sched.speculate_insn (insn, request, new_pat);
4823 static int
4824 haifa_speculate_insn (rtx insn, ds_t request, rtx *new_pat)
4826 gcc_assert (sched_deps_info->generate_spec_deps
4827 && !IS_SPECULATION_CHECK_P (insn));
4829 if (HAS_INTERNAL_DEP (insn)
4830 || SCHED_GROUP_P (insn))
4831 return -1;
4833 return sched_speculate_insn (insn, request, new_pat);
4836 /* Print some information about block BB, which starts with HEAD and
4837 ends with TAIL, before scheduling it.
4838 I is zero if the scheduler is about to start with a fresh ebb. */
4839 static void
4840 dump_new_block_header (int i, basic_block bb, rtx head, rtx tail)
4842 if (!i)
4843 fprintf (sched_dump,
4844 ";; ======================================================\n");
4845 else
4846 fprintf (sched_dump,
4847 ";; =====================ADVANCING TO=====================\n");
4848 fprintf (sched_dump,
4849 ";; -- basic block %d from %d to %d -- %s reload\n",
4850 bb->index, INSN_UID (head), INSN_UID (tail),
4851 (reload_completed ? "after" : "before"));
4852 fprintf (sched_dump,
4853 ";; ======================================================\n");
4854 fprintf (sched_dump, "\n");
4857 /* Unlink basic block notes and labels and save them, so they
4858 can be easily restored. We unlink basic block notes in the EBB to
4859 provide backward compatibility with the previous code, as target backends
4860 assume that there will be only instructions between
4861 current_sched_info->{head and tail}. We restore these notes as soon
4862 as we can.
4863 FIRST (LAST) is the first (last) basic block in the ebb.
4864 NB: In the usual case (FIRST == LAST) nothing is really done. */
4865 void
4866 unlink_bb_notes (basic_block first, basic_block last)
4868 /* We DON'T unlink basic block notes of the first block in the ebb. */
4869 if (first == last)
4870 return;
4872 bb_header = XNEWVEC (rtx, last_basic_block);
4874 /* Make a sentinel. */
4875 if (last->next_bb != EXIT_BLOCK_PTR)
4876 bb_header[last->next_bb->index] = 0;
4878 first = first->next_bb;
4879 do
4881 rtx prev, label, note, next;
4883 label = BB_HEAD (last);
4884 if (LABEL_P (label))
4885 note = NEXT_INSN (label);
4886 else
4887 note = label;
4888 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4890 prev = PREV_INSN (label);
4891 next = NEXT_INSN (note);
4892 gcc_assert (prev && next);
4894 NEXT_INSN (prev) = next;
4895 PREV_INSN (next) = prev;
4897 bb_header[last->index] = label;
4899 if (last == first)
4900 break;
4902 last = last->prev_bb;
4904 while (1);
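/* A zero entry in bb_header (such as the sentinel stored above) marks a
   block whose notes were not unlinked; restore_bb_notes below stops at the
   first such entry.  */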
4907 /* Restore basic block notes.
4908 FIRST is the first basic block in the ebb. */
4909 static void
4910 restore_bb_notes (basic_block first)
4912 if (!bb_header)
4913 return;
4915 /* We DON'T unlink basic block notes of the first block in the ebb. */
4916 first = first->next_bb;
4917 /* Remember: FIRST is now actually the second basic block in the ebb. */
4919 while (first != EXIT_BLOCK_PTR
4920 && bb_header[first->index])
4922 rtx prev, label, note, next;
4924 label = bb_header[first->index];
4925 prev = PREV_INSN (label);
4926 next = NEXT_INSN (prev);
4928 if (LABEL_P (label))
4929 note = NEXT_INSN (label);
4930 else
4931 note = label;
4932 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4934 bb_header[first->index] = 0;
4936 NEXT_INSN (prev) = label;
4937 NEXT_INSN (note) = next;
4938 PREV_INSN (next) = note;
4940 first = first->next_bb;
4943 free (bb_header);
4944 bb_header = 0;
4947 /* Helper function.
4948 Fix CFG after both in- and inter-block movement of
4949 control_flow_insn_p JUMP. */
4950 static void
4951 fix_jump_move (rtx jump)
4953 basic_block bb, jump_bb, jump_bb_next;
4955 bb = BLOCK_FOR_INSN (PREV_INSN (jump));
4956 jump_bb = BLOCK_FOR_INSN (jump);
4957 jump_bb_next = jump_bb->next_bb;
4959 gcc_assert (common_sched_info->sched_pass_id == SCHED_EBB_PASS
4960 || IS_SPECULATION_BRANCHY_CHECK_P (jump));
4962 if (!NOTE_INSN_BASIC_BLOCK_P (BB_END (jump_bb_next)))
4963 /* if jump_bb_next is not empty. */
4964 BB_END (jump_bb) = BB_END (jump_bb_next);
4966 if (BB_END (bb) != PREV_INSN (jump))
4967 /* Then there are instructions after jump that should be moved
4968 to jump_bb_next. */
4969 BB_END (jump_bb_next) = BB_END (bb);
4970 else
4971 /* Otherwise jump_bb_next is empty. */
4972 BB_END (jump_bb_next) = NEXT_INSN (BB_HEAD (jump_bb_next));
4974 /* To make assertion in move_insn happy. */
4975 BB_END (bb) = PREV_INSN (jump);
4977 update_bb_for_insn (jump_bb_next);
4980 /* Fix CFG after interblock movement of control_flow_insn_p JUMP. */
4981 static void
4982 move_block_after_check (rtx jump)
4984 basic_block bb, jump_bb, jump_bb_next;
4985 VEC(edge,gc) *t;
4987 bb = BLOCK_FOR_INSN (PREV_INSN (jump));
4988 jump_bb = BLOCK_FOR_INSN (jump);
4989 jump_bb_next = jump_bb->next_bb;
4991 update_bb_for_insn (jump_bb);
4993 gcc_assert (IS_SPECULATION_CHECK_P (jump)
4994 || IS_SPECULATION_CHECK_P (BB_END (jump_bb_next)));
4996 unlink_block (jump_bb_next);
4997 link_block (jump_bb_next, bb);
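/* Rotate the successor vectors: BB takes JUMP_BB's successors, JUMP_BB
   takes JUMP_BB_NEXT's, and JUMP_BB_NEXT takes BB's original ones.  */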
4999 t = bb->succs;
5000 bb->succs = 0;
5001 move_succs (&(jump_bb->succs), bb);
5002 move_succs (&(jump_bb_next->succs), jump_bb);
5003 move_succs (&t, jump_bb_next);
5005 df_mark_solutions_dirty ();
5007 common_sched_info->fix_recovery_cfg
5008 (bb->index, jump_bb->index, jump_bb_next->index);
5011 /* Helper function for move_block_after_check.
5012 This function attaches the edge vector pointed to by SUCCSP to
5013 block TO. */
5014 static void
5015 move_succs (VEC(edge,gc) **succsp, basic_block to)
5017 edge e;
5018 edge_iterator ei;
5020 gcc_assert (to->succs == 0);
5022 to->succs = *succsp;
5024 FOR_EACH_EDGE (e, ei, to->succs)
5025 e->src = to;
5027 *succsp = 0;
5030 /* Remove INSN from the instruction stream.
5031 INSN should not have any dependencies. */
5032 static void
5033 sched_remove_insn (rtx insn)
5035 sd_finish_insn (insn);
5037 change_queue_index (insn, QUEUE_NOWHERE);
5038 current_sched_info->add_remove_insn (insn, 1);
5039 remove_insn (insn);
5042 /* Clear the priorities of all instructions that are forward dependent on INSN.
5043 Store in the vector pointed to by ROOTS_PTR the insns on which priority ()
5044 should be invoked to initialize all cleared priorities. */
5045 static void
5046 clear_priorities (rtx insn, rtx_vec_t *roots_ptr)
5048 sd_iterator_def sd_it;
5049 dep_t dep;
5050 bool insn_is_root_p = true;
5052 gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
5054 FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
5056 rtx pro = DEP_PRO (dep);
5058 if (INSN_PRIORITY_STATUS (pro) >= 0
5059 && QUEUE_INDEX (insn) != QUEUE_SCHEDULED)
5061 /* If DEP doesn't contribute to priority then INSN itself should
5062 be added to priority roots. */
5063 if (contributes_to_priority_p (dep))
5064 insn_is_root_p = false;
5066 INSN_PRIORITY_STATUS (pro) = -1;
5067 clear_priorities (pro, roots_ptr);
5071 if (insn_is_root_p)
5072 VEC_safe_push (rtx, heap, *roots_ptr, insn);
5075 /* Recompute the priorities of instructions whose priorities might have been
5076 changed. ROOTS is a vector of instructions whose priority computation will
5077 trigger initialization of all cleared priorities. */
5078 static void
5079 calc_priorities (rtx_vec_t roots)
5081 int i;
5082 rtx insn;
5084 for (i = 0; VEC_iterate (rtx, roots, i, insn); i++)
5085 priority (insn);
5089 /* Add dependences between JUMP and other instructions in the recovery
5090 block. INSN is the first insn in the recovery block. */
5091 static void
5092 add_jump_dependencies (rtx insn, rtx jump)
5094 do
5096 insn = NEXT_INSN (insn);
5097 if (insn == jump)
5098 break;
5100 if (dep_list_size (insn) == 0)
5102 dep_def _new_dep, *new_dep = &_new_dep;
5104 init_dep (new_dep, insn, jump, REG_DEP_ANTI);
5105 sd_add_dep (new_dep, false);
5108 while (1);
5110 gcc_assert (!sd_lists_empty_p (jump, SD_LIST_BACK));
5113 /* Return the NOTE_INSN_BASIC_BLOCK of BB. */
5114 rtx
5115 bb_note (basic_block bb)
5117 rtx note;
5119 note = BB_HEAD (bb);
5120 if (LABEL_P (note))
5121 note = NEXT_INSN (note);
5123 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
5124 return note;
5127 #ifdef ENABLE_CHECKING
5128 /* Helper function for check_cfg.
5129 Return nonzero if the edge vector pointed to by EL has an edge with TYPE in
5130 its flags. */
5131 static int
5132 has_edge_p (VEC(edge,gc) *el, int type)
5134 edge e;
5135 edge_iterator ei;
5137 FOR_EACH_EDGE (e, ei, el)
5138 if (e->flags & type)
5139 return 1;
5140 return 0;
5143 /* Search back, starting at INSN, for an insn that is not a
5144 NOTE_INSN_VAR_LOCATION. Don't search beyond HEAD, and return it if
5145 no such insn can be found. */
5146 static inline rtx
5147 prev_non_location_insn (rtx insn, rtx head)
5149 while (insn != head && NOTE_P (insn)
5150 && NOTE_KIND (insn) == NOTE_INSN_VAR_LOCATION)
5151 insn = PREV_INSN (insn);
5153 return insn;
5156 /* Check a few properties of the CFG between HEAD and TAIL.
5157 If HEAD (TAIL) is NULL, check from the beginning (to the end) of the
5158 instruction stream. */
5159 static void
5160 check_cfg (rtx head, rtx tail)
5162 rtx next_tail;
5163 basic_block bb = 0;
5164 int not_first = 0, not_last;
5166 if (head == NULL)
5167 head = get_insns ();
5168 if (tail == NULL)
5169 tail = get_last_insn ();
5170 next_tail = NEXT_INSN (tail);
5172 do
5174 not_last = head != tail;
5176 if (not_first)
5177 gcc_assert (NEXT_INSN (PREV_INSN (head)) == head);
5178 if (not_last)
5179 gcc_assert (PREV_INSN (NEXT_INSN (head)) == head);
5181 if (LABEL_P (head)
5182 || (NOTE_INSN_BASIC_BLOCK_P (head)
5183 && (!not_first
5184 || (not_first && !LABEL_P (PREV_INSN (head))))))
5186 gcc_assert (bb == 0);
5187 bb = BLOCK_FOR_INSN (head);
5188 if (bb != 0)
5189 gcc_assert (BB_HEAD (bb) == head);
5190 else
5191 /* This is the case of a jump table. See inside_basic_block_p (). */
5192 gcc_assert (LABEL_P (head) && !inside_basic_block_p (head));
5195 if (bb == 0)
5197 gcc_assert (!inside_basic_block_p (head));
5198 head = NEXT_INSN (head);
5200 else
5202 gcc_assert (inside_basic_block_p (head)
5203 || NOTE_P (head));
5204 gcc_assert (BLOCK_FOR_INSN (head) == bb);
5206 if (LABEL_P (head))
5208 head = NEXT_INSN (head);
5209 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (head));
5211 else
5213 if (control_flow_insn_p (head))
5215 gcc_assert (prev_non_location_insn (BB_END (bb), head)
5216 == head);
5218 if (any_uncondjump_p (head))
5219 gcc_assert (EDGE_COUNT (bb->succs) == 1
5220 && BARRIER_P (NEXT_INSN (head)));
5221 else if (any_condjump_p (head))
5222 gcc_assert (/* Usual case. */
5223 (EDGE_COUNT (bb->succs) > 1
5224 && !BARRIER_P (NEXT_INSN (head)))
5225 /* Or jump to the next instruction. */
5226 || (EDGE_COUNT (bb->succs) == 1
5227 && (BB_HEAD (EDGE_I (bb->succs, 0)->dest)
5228 == JUMP_LABEL (head))));
5230 if (BB_END (bb) == head)
5232 if (EDGE_COUNT (bb->succs) > 1)
5233 gcc_assert (control_flow_insn_p (prev_non_location_insn
5234 (head, BB_HEAD (bb)))
5235 || has_edge_p (bb->succs, EDGE_COMPLEX));
5236 bb = 0;
5239 head = NEXT_INSN (head);
5243 not_first = 1;
5245 while (head != next_tail);
5247 gcc_assert (bb == 0);
5250 #endif /* ENABLE_CHECKING */
5252 /* Extend per basic block data structures. */
5253 static void
5254 extend_bb (void)
5256 if (sched_scan_info->extend_bb)
5257 sched_scan_info->extend_bb ();
5260 /* Init data for BB. */
5261 static void
5262 init_bb (basic_block bb)
5264 if (sched_scan_info->init_bb)
5265 sched_scan_info->init_bb (bb);
5268 /* Extend per insn data structures. */
5269 static void
5270 extend_insn (void)
5272 if (sched_scan_info->extend_insn)
5273 sched_scan_info->extend_insn ();
5276 /* Init data structures for INSN. */
5277 static void
5278 init_insn (rtx insn)
5280 if (sched_scan_info->init_insn)
5281 sched_scan_info->init_insn (insn);
5284 /* Init all insns in BB. */
5285 static void
5286 init_insns_in_bb (basic_block bb)
5288 rtx insn;
5290 FOR_BB_INSNS (bb, insn)
5291 init_insn (insn);
5294 /* A driver function to add a set of basic blocks (BBS),
5295 a single basic block (BB), a set of insns (INSNS) or a single insn (INSN)
5296 to the scheduling region. */
5297 void
5298 sched_scan (const struct sched_scan_info_def *ssi,
5299 bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
5301 sched_scan_info = ssi;
5303 if (bbs != NULL || bb != NULL)
5305 extend_bb ();
5307 if (bbs != NULL)
5309 unsigned i;
5310 basic_block x;
5312 for (i = 0; VEC_iterate (basic_block, bbs, i, x); i++)
5313 init_bb (x);
5316 if (bb != NULL)
5317 init_bb (bb);
5320 extend_insn ();
5322 if (bbs != NULL)
5324 unsigned i;
5325 basic_block x;
5327 for (i = 0; VEC_iterate (basic_block, bbs, i, x); i++)
5328 init_insns_in_bb (x);
5331 if (bb != NULL)
5332 init_insns_in_bb (bb);
5334 if (insns != NULL)
5336 unsigned i;
5337 rtx x;
5339 for (i = 0; VEC_iterate (rtx, insns, i, x); i++)
5340 init_insn (x);
5343 if (insn != NULL)
5344 init_insn (insn);
5348 /* Extend data structures for logical insn UID. */
5349 static void
5350 luids_extend_insn (void)
5352 int new_luids_max_uid = get_max_uid () + 1;
5354 VEC_safe_grow_cleared (int, heap, sched_luids, new_luids_max_uid);
5357 /* Initialize LUID for INSN. */
5358 static void
5359 luids_init_insn (rtx insn)
5361 int i = INSN_P (insn) ? 1 : common_sched_info->luid_for_non_insn (insn);
5362 int luid;
5364 if (i >= 0)
5366 luid = sched_max_luid;
5367 sched_max_luid += i;
5369 else
5370 luid = -1;
5372 SET_INSN_LUID (insn, luid);
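/* Thus a real insn advances sched_max_luid by one, while a note or label
   for which luid_for_non_insn returned zero shares the luid of the insn
   that follows it, and anything with a negative result gets no luid.  */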
5375 /* Initialize luids for BBS, BB, INSNS and INSN.
5376 The hook common_sched_info->luid_for_non_insn () is used to determine
5377 if notes, labels, etc. need luids. */
5378 void
5379 sched_init_luids (bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
5381 const struct sched_scan_info_def ssi =
5383 NULL, /* extend_bb */
5384 NULL, /* init_bb */
5385 luids_extend_insn, /* extend_insn */
5386 luids_init_insn /* init_insn */
5389 sched_scan (&ssi, bbs, bb, insns, insn);
5392 /* Free LUIDs. */
5393 void
5394 sched_finish_luids (void)
5396 VEC_free (int, heap, sched_luids);
5397 sched_max_luid = 1;
5400 /* Return logical uid of INSN. Helpful while debugging. */
5401 int
5402 insn_luid (rtx insn)
5404 return INSN_LUID (insn);
5407 /* Extend per insn data in the target. */
5408 void
5409 sched_extend_target (void)
5411 if (targetm.sched.h_i_d_extended)
5412 targetm.sched.h_i_d_extended ();
5415 /* Extend global scheduler structures (those that live across calls to
5416 schedule_block) to include information about the just-emitted INSN. */
5417 static void
5418 extend_h_i_d (void)
5420 int reserve = (get_max_uid () + 1
5421 - VEC_length (haifa_insn_data_def, h_i_d));
5422 if (reserve > 0
5423 && ! VEC_space (haifa_insn_data_def, h_i_d, reserve))
5425 VEC_safe_grow_cleared (haifa_insn_data_def, heap, h_i_d,
5426 3 * get_max_uid () / 2);
5427 sched_extend_target ();
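/* Note that we grow to 3 * get_max_uid () / 2 rather than just the required
   RESERVE; the extra headroom means that emitting further insns does not
   force a reallocation every time.  */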
5431 /* Initialize the h_i_d entry of INSN with default values.
5432 Values that are not explicitly initialized here hold zero. */
5433 static void
5434 init_h_i_d (rtx insn)
5436 if (INSN_LUID (insn) > 0)
5438 INSN_COST (insn) = -1;
5439 QUEUE_INDEX (insn) = QUEUE_NOWHERE;
5440 INSN_TICK (insn) = INVALID_TICK;
5441 INTER_TICK (insn) = INVALID_TICK;
5442 TODO_SPEC (insn) = HARD_DEP;
5446 /* Initialize haifa_insn_data for BBS, BB, INSNS and INSN. */
5447 void
5448 haifa_init_h_i_d (bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
5450 const struct sched_scan_info_def ssi =
5452 NULL, /* extend_bb */
5453 NULL, /* init_bb */
5454 extend_h_i_d, /* extend_insn */
5455 init_h_i_d /* init_insn */
5458 sched_scan (&ssi, bbs, bb, insns, insn);
5461 /* Finalize haifa_insn_data. */
5462 void
5463 haifa_finish_h_i_d (void)
5465 int i;
5466 haifa_insn_data_t data;
5467 struct reg_use_data *use, *next;
5469 for (i = 0; VEC_iterate (haifa_insn_data_def, h_i_d, i, data); i++)
5471 if (data->reg_pressure != NULL)
5472 free (data->reg_pressure);
5473 for (use = data->reg_use_list; use != NULL; use = next)
5475 next = use->next_insn_use;
5476 free (use);
5479 VEC_free (haifa_insn_data_def, heap, h_i_d);
5482 /* Init data for the new insn INSN. */
5483 static void
5484 haifa_init_insn (rtx insn)
5486 gcc_assert (insn != NULL);
5488 sched_init_luids (NULL, NULL, NULL, insn);
5489 sched_extend_target ();
5490 sched_deps_init (false);
5491 haifa_init_h_i_d (NULL, NULL, NULL, insn);
5493 if (adding_bb_to_current_region_p)
5495 sd_init_insn (insn);
5497 /* Extend dependency caches by one element. */
5498 extend_dependency_caches (1, false);
5502 /* Init data for the new basic block BB which comes after AFTER. */
5503 static void
5504 haifa_init_only_bb (basic_block bb, basic_block after)
5506 gcc_assert (bb != NULL);
5508 sched_init_bbs ();
5510 if (common_sched_info->add_block)
5511 /* This changes only data structures of the front-end. */
5512 common_sched_info->add_block (bb, after);
5515 /* A generic version of sched_split_block (). */
5516 basic_block
5517 sched_split_block_1 (basic_block first_bb, rtx after)
5519 edge e;
5521 e = split_block (first_bb, after);
5522 gcc_assert (e->src == first_bb);
5524 /* sched_split_block emits a note if *check == BB_END. Probably it
5525 would be better to remove that note. */
5527 return e->dest;
5530 /* A generic version of sched_create_empty_bb (). */
5531 basic_block
5532 sched_create_empty_bb_1 (basic_block after)
5534 return create_empty_bb (after);
5537 /* Insert PAT as an INSN into the schedule and update the necessary data
5538 structures to account for it. */
5539 rtx
5540 sched_emit_insn (rtx pat)
5542 rtx insn = emit_insn_after (pat, last_scheduled_insn);
5543 last_scheduled_insn = insn;
5544 haifa_init_insn (insn);
5545 return insn;
5548 #endif /* INSN_SCHEDULING */