gcc/haifa-sched.c
1 /* Instruction scheduling pass.
2 Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000,
3 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010
4 Free Software Foundation, Inc.
5 Contributed by Michael Tiemann (tiemann@cygnus.com) Enhanced by,
6 and currently maintained by, Jim Wilson (wilson@cygnus.com)
8 This file is part of GCC.
10 GCC is free software; you can redistribute it and/or modify it under
11 the terms of the GNU General Public License as published by the Free
12 Software Foundation; either version 3, or (at your option) any later
13 version.
15 GCC is distributed in the hope that it will be useful, but WITHOUT ANY
16 WARRANTY; without even the implied warranty of MERCHANTABILITY or
17 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
18 for more details.
20 You should have received a copy of the GNU General Public License
21 along with GCC; see the file COPYING3. If not see
22 <http://www.gnu.org/licenses/>. */
24 /* Instruction scheduling pass. This file, along with sched-deps.c,
25 contains the generic parts.  The actual entry point for the normal
26 instruction scheduling pass is found in sched-rgn.c.
28 We compute insn priorities based on data dependencies. Flow
29 analysis only creates a fraction of the data-dependencies we must
30 observe: namely, only those dependencies which the combiner can be
31 expected to use. For this pass, we must therefore create the
32 remaining dependencies we need to observe: register dependencies,
33 memory dependencies, dependencies to keep function calls in order,
34 and the dependence between a conditional branch and the setting of
35 condition codes are all dealt with here.
37 The scheduler first traverses the data flow graph, starting with
38 the last instruction, and proceeding to the first, assigning values
39 to insn_priority as it goes. This sorts the instructions
40 topologically by data dependence.
42 Once priorities have been established, we order the insns using
43 list scheduling. This works as follows: starting with a list of
44 all the ready insns, and sorted according to priority number, we
45 schedule the insn from the end of the list by placing its
46 predecessors in the list according to their priority order. We
47 consider this insn scheduled by setting the pointer to the "end" of
48 the list to point to the previous insn. When an insn has no
49 predecessors, we either queue it until sufficient time has elapsed
50 or add it to the ready list. As the instructions are scheduled or
51 when stalls are introduced, the queue advances and dumps insns into
52 the ready list. When all insns down to the lowest priority have
53 been scheduled, the critical path of the basic block has been made
54 as short as possible. The remaining insns are then scheduled in
55 remaining slots.
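
   As a rough sketch of the loop just described (an editor's illustration,
   not the literal code of schedule_block):

     while (not all insns are scheduled)
       {
         move insns whose delay has elapsed from the queue (Q) to ready (R);
         sort R using the tie-breaking rules listed below;
         if (R is empty)
           advance the clock, introducing a stall;
         else
           {
             emit the best insn from R (R -> S);
             for each successor whose last dependence this satisfies,
               add it to R, or to Q if latency must still elapse;
           }
       }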
57 The following list shows the order in which we want to break ties
58 among insns in the ready list:
60 1. choose insn with the longest path to end of bb, ties
61 broken by
62 2. choose insn with least contribution to register pressure,
63 ties broken by
64 3. prefer in-block over interblock motion, ties broken by
65 4. prefer useful over speculative motion, ties broken by
66 5. choose insn with largest control flow probability, ties
67 broken by
68 6. choose insn with the fewest dependences on the previously
69 scheduled insn, ties broken by
70 7. choose the insn which has the most insns depending on it, and
71 8. finally, choose the insn with the lowest UID.
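
   (Editor's note: these tie-breaking rules are applied by
   rank_for_schedule () further down in this file; most of them can be
   switched off individually through the corresponding
   flag_sched_*_heuristic flags tested there.)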
73 Memory references complicate matters. Only if we can be certain
74 that memory references are not part of the data dependency graph
75 (via true, anti, or output dependence), can we move operations past
76 memory references.  To a first approximation, reads can be done
77 independently, while writes introduce dependencies. Better
78 approximations will yield fewer dependencies.
80 Before reload, an extended analysis of interblock data dependences
81 is required for interblock scheduling. This is performed in
82 compute_block_backward_dependences ().
84 Dependencies set up by memory references are treated in exactly the
85 same way as other dependencies, by using insn backward dependences
86 INSN_BACK_DEPS. INSN_BACK_DEPS are translated into forward dependences
87 INSN_FORW_DEPS for the purpose of forward list scheduling.
89 Having optimized the critical path, we may have also unduly
90 extended the lifetimes of some registers. If an operation requires
91 that constants be loaded into registers, it is certainly desirable
92 to load those constants as early as necessary, but no earlier.
93 I.e., it will not do to load up a bunch of registers at the
94 beginning of a basic block only to use them at the end, if they
95 could be loaded later, since this may result in excessive register
96 utilization.
98 Note that since branches are never in basic blocks, but only end
99 basic blocks, this pass will not move branches. But that is ok,
100 since we can use GNU's delayed branch scheduling pass to take care
101 of this case.
103 Also note that no further optimizations based on algebraic
104 identities are performed, so this pass would be a good one to
105 perform instruction splitting, such as breaking up a multiply
106 instruction into shifts and adds where that is profitable.
108 Given the memory aliasing analysis that this pass should perform,
109 it should be possible to remove redundant stores to memory, and to
110 load values from registers instead of hitting memory.
112 Before reload, speculative insns are moved only if a 'proof' exists
113 that no exception will be caused by this, and if no live registers
114 exist that inhibit the motion (live registers constraints are not
115 represented by data dependence edges).
117 This pass must update information that subsequent passes expect to
118 be correct. Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
119 reg_n_calls_crossed, and reg_live_length. Also, BB_HEAD, BB_END.
121 The information in the line number notes is carefully retained by
122 this pass. Notes that refer to the starting and ending of
123 exception regions are also carefully retained by this pass. All
124 other NOTE insns are grouped in their same relative order at the
125 beginning of basic blocks and regions that have been scheduled. */
127 #include "config.h"
128 #include "system.h"
129 #include "coretypes.h"
130 #include "tm.h"
131 #include "diagnostic-core.h"
132 #include "toplev.h"
133 #include "rtl.h"
134 #include "tm_p.h"
135 #include "hard-reg-set.h"
136 #include "regs.h"
137 #include "function.h"
138 #include "flags.h"
139 #include "insn-config.h"
140 #include "insn-attr.h"
141 #include "except.h"
142 #include "recog.h"
143 #include "sched-int.h"
144 #include "target.h"
145 #include "output.h"
146 #include "params.h"
147 #include "vecprim.h"
148 #include "dbgcnt.h"
149 #include "cfgloop.h"
150 #include "ira.h"
151 #include "emit-rtl.h" /* FIXME: Can go away once crtl is moved to rtl.h. */
153 #ifdef INSN_SCHEDULING
155 /* issue_rate is the number of insns that can be scheduled in the same
156 machine cycle. It can be defined in the config/mach/mach.h file,
157 otherwise we set it to 1. */
159 int issue_rate;
161 /* sched-verbose controls the amount of debugging output the
162 scheduler prints. It is controlled by -fsched-verbose=N:
163 N>0 and no -dSR : the output is directed to stderr.
164 N>=10 will direct the printouts to stderr (regardless of -dSR).
165 N=1: same as -dSR.
166 N=2: bb's probabilities, detailed ready list info, unit/insn info.
167 N=3: rtl at abort point, control-flow, regions info.
168 N=5: dependences info. */
170 static int sched_verbose_param = 0;
171 int sched_verbose = 0;
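/* For example (editor's illustration):

     gcc -O2 -fsched-verbose=2 -fdump-rtl-sched2 foo.c

   prints basic block probabilities and detailed ready-list information
   for the second scheduling pass, sent either to the sched2 dump file or
   to stderr as described above.  */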
173 /* Debugging file. All printouts are sent to dump, which is always set,
174 either to stderr, or to the dump listing file (-dSR).
175 FILE *sched_dump = 0;
177 /* fix_sched_param() is called from toplev.c upon detection
178 of the -fsched-verbose=N option. */
180 void
181 fix_sched_param (const char *param, const char *val)
183 if (!strcmp (param, "verbose"))
184 sched_verbose_param = atoi (val);
185 else
186 warning (0, "fix_sched_param: unknown param: %s", param);
189 /* This is a placeholder for the scheduler parameters common
190 to all schedulers. */
191 struct common_sched_info_def *common_sched_info;
193 #define INSN_TICK(INSN) (HID (INSN)->tick)
194 #define INTER_TICK(INSN) (HID (INSN)->inter_tick)
196 /* If INSN_TICK of an instruction is equal to INVALID_TICK,
197 then it should be recalculated from scratch. */
198 #define INVALID_TICK (-(max_insn_queue_index + 1))
199 /* The minimal value of the INSN_TICK of an instruction. */
200 #define MIN_TICK (-max_insn_queue_index)
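/* For instance (editor's note): with max_insn_queue_index == 7, MIN_TICK
   is -7 and INVALID_TICK is -8, i.e. INVALID_TICK lies just outside the
   range of valid ticks.  */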
202 /* List of important notes we must keep around. This is a pointer to the
203 last element in the list. */
204 rtx note_list;
206 static struct spec_info_def spec_info_var;
207 /* Description of the speculative part of the scheduling.
208 If NULL - no speculation. */
209 spec_info_t spec_info = NULL;
211 /* True if a recovery block was added during scheduling of the current block.
212 Used to determine if we need to fix INSN_TICKs.  */
213 static bool haifa_recovery_bb_recently_added_p;
215 /* True if a recovery block was added during this scheduling pass.
216 Used to determine if we should have empty memory pools of dependencies
217 after finishing the current region.  */
218 bool haifa_recovery_bb_ever_added_p;
220 /* Counters of different types of speculative instructions. */
221 static int nr_begin_data, nr_be_in_data, nr_begin_control, nr_be_in_control;
223 /* Array used in {unlink, restore}_bb_notes. */
224 static rtx *bb_header = 0;
226 /* Basic block after which recovery blocks will be created. */
227 static basic_block before_recovery;
229 /* Basic block just before the EXIT_BLOCK and after recovery, if we have
230 created it. */
231 basic_block after_recovery;
233 /* FALSE if we add bb to another region, so we don't need to initialize it. */
234 bool adding_bb_to_current_region_p = true;
236 /* Queues, etc. */
238 /* An instruction is ready to be scheduled when all insns preceding it
239 have already been scheduled. It is important to ensure that all
240 insns which use its result will not be executed until its result
241 has been computed. An insn is maintained in one of four structures:
243 (P) the "Pending" set of insns which cannot be scheduled until
244 their dependencies have been satisfied.
245 (Q) the "Queued" set of insns that can be scheduled when sufficient
246 time has passed.
247 (R) the "Ready" list of unscheduled, uncommitted insns.
248 (S) the "Scheduled" list of insns.
250 Initially, all insns are either "Pending" or "Ready" depending on
251 whether their dependencies are satisfied.
253 Insns move from the "Ready" list to the "Scheduled" list as they
254 are committed to the schedule. As this occurs, the insns in the
255 "Pending" list have their dependencies satisfied and move to either
256 the "Ready" list or the "Queued" set depending on whether
257 sufficient time has passed to make them ready. As time passes,
258 insns move from the "Queued" set to the "Ready" list.
260 The "Pending" list (P) are the insns in the INSN_FORW_DEPS of the
261 unscheduled insns, i.e., those that are ready, queued, and pending.
262 The "Queued" set (Q) is implemented by the variable `insn_queue'.
263 The "Ready" list (R) is implemented by the variables `ready' and
264 `n_ready'.
265 The "Scheduled" list (S) is the new insn chain built by this pass.
267 The transition (R->S) is implemented in the scheduling loop in
268 `schedule_block' when the best insn to schedule is chosen.
269 The transitions (P->R and P->Q) are implemented in `schedule_insn' as
270 insns move from the ready list to the scheduled list.
271 The transition (Q->R) is implemented in 'queue_to_ready' as time
272 passes or stalls are introduced. */
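/* Summary of the transitions just described (editor's sketch):

     P --(last dep resolved, latency still to elapse)--> Q
     P --(last dep resolved, no remaining latency)-----> R
     Q --(enough cycles pass or stalls are added)------> R
     R --(chosen by the scheduling loop)---------------> S   */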
274 /* Implement a circular buffer to delay instructions until sufficient
275 time has passed. For the new pipeline description interface,
276 MAX_INSN_QUEUE_INDEX is a power of two minus one which is not less
277 than maximal time of instruction execution computed by genattr.c on
278 the base maximal time of functional unit reservations and getting a
279 result. This is the longest time an insn may be queued. */
281 static rtx *insn_queue;
282 static int q_ptr = 0;
283 static int q_size = 0;
284 #define NEXT_Q(X) (((X)+1) & max_insn_queue_index)
285 #define NEXT_Q_AFTER(X, C) (((X)+C) & max_insn_queue_index)
287 #define QUEUE_SCHEDULED (-3)
288 #define QUEUE_NOWHERE (-2)
289 #define QUEUE_READY (-1)
290 /* QUEUE_SCHEDULED - INSN is scheduled.
291 QUEUE_NOWHERE - INSN isn't scheduled yet and is neither in
292 queue or ready list.
293 QUEUE_READY - INSN is in ready list.
294 N >= 0 - INSN queued for X [where NEXT_Q_AFTER (q_ptr, X) == N] cycles. */
296 #define QUEUE_INDEX(INSN) (HID (INSN)->queue_index)
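/* Worked example (editor's note): with max_insn_queue_index == 7 and
   q_ptr == 2, queueing an insn for 3 cycles computes
   NEXT_Q_AFTER (2, 3) == (2 + 3) & 7 == 5, so the insn is linked into
   insn_queue[5] and QUEUE_INDEX (insn) becomes 5; NEXT_Q (7) wraps
   around to 0 as the clock advances.  */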
298 /* The following variable describes all current and future
299 reservations of the processor units.  */
300 state_t curr_state;
302 /* The following variable is the size of the memory representing all
303 current and future reservations of the processor units.  */
304 size_t dfa_state_size;
306 /* The following array is used to find the best insn from ready when
307 the automaton pipeline interface is used. */
308 char *ready_try = NULL;
310 /* The ready list. */
311 struct ready_list ready = {NULL, 0, 0, 0, 0};
313 /* The pointer to the ready list (to be removed). */
314 static struct ready_list *readyp = &ready;
316 /* Scheduling clock. */
317 static int clock_var;
319 static int may_trap_exp (const_rtx, int);
321 /* Nonzero iff the address is composed of at most one register.  */
322 #define CONST_BASED_ADDRESS_P(x) \
323 (REG_P (x) \
324 || ((GET_CODE (x) == PLUS || GET_CODE (x) == MINUS \
325 || (GET_CODE (x) == LO_SUM)) \
326 && (CONSTANT_P (XEXP (x, 0)) \
327 || CONSTANT_P (XEXP (x, 1)))))
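/* For example (editor's note), CONST_BASED_ADDRESS_P accepts addresses
   such as (reg R), (plus (reg R) (const_int 4)) and
   (lo_sum (reg R) (symbol_ref "x")), but rejects (plus (reg R1) (reg R2))
   because neither operand is constant.  */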
329 /* Returns a class that insn with GET_DEST(insn)=x may belong to,
330 as found by analyzing insn's expression. */
333 static int haifa_luid_for_non_insn (rtx x);
335 /* Haifa version of sched_info hooks common to all headers. */
336 const struct common_sched_info_def haifa_common_sched_info =
338 NULL, /* fix_recovery_cfg */
339 NULL, /* add_block */
340 NULL, /* estimate_number_of_insns */
341 haifa_luid_for_non_insn, /* luid_for_non_insn */
342 SCHED_PASS_UNKNOWN /* sched_pass_id */
345 const struct sched_scan_info_def *sched_scan_info;
347 /* Mapping from instruction UID to its Logical UID. */
348 VEC (int, heap) *sched_luids = NULL;
350 /* Next LUID to assign to an instruction. */
351 int sched_max_luid = 1;
353 /* Haifa Instruction Data. */
354 VEC (haifa_insn_data_def, heap) *h_i_d = NULL;
356 void (* sched_init_only_bb) (basic_block, basic_block);
358 /* Split block function.  Different schedulers might use different functions
359 to keep their internal data consistent.  */
360 basic_block (* sched_split_block) (basic_block, rtx);
362 /* Create empty basic block after the specified block. */
363 basic_block (* sched_create_empty_bb) (basic_block);
365 static int
366 may_trap_exp (const_rtx x, int is_store)
368 enum rtx_code code;
370 if (x == 0)
371 return TRAP_FREE;
372 code = GET_CODE (x);
373 if (is_store)
375 if (code == MEM && may_trap_p (x))
376 return TRAP_RISKY;
377 else
378 return TRAP_FREE;
380 if (code == MEM)
382 /* The insn uses memory: a volatile load. */
383 if (MEM_VOLATILE_P (x))
384 return IRISKY;
385 /* An exception-free load. */
386 if (!may_trap_p (x))
387 return IFREE;
388 /* A load with 1 base register, to be further checked. */
389 if (CONST_BASED_ADDRESS_P (XEXP (x, 0)))
390 return PFREE_CANDIDATE;
391 /* No info on the load, to be further checked. */
392 return PRISKY_CANDIDATE;
394 else
396 const char *fmt;
397 int i, insn_class = TRAP_FREE;
399 /* Neither store nor load, check if it may cause a trap. */
400 if (may_trap_p (x))
401 return TRAP_RISKY;
402 /* Recursive step: walk the insn... */
403 fmt = GET_RTX_FORMAT (code);
404 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
406 if (fmt[i] == 'e')
408 int tmp_class = may_trap_exp (XEXP (x, i), is_store);
409 insn_class = WORST_CLASS (insn_class, tmp_class);
411 else if (fmt[i] == 'E')
413 int j;
414 for (j = 0; j < XVECLEN (x, i); j++)
416 int tmp_class = may_trap_exp (XVECEXP (x, i, j), is_store);
417 insn_class = WORST_CLASS (insn_class, tmp_class);
418 if (insn_class == TRAP_RISKY || insn_class == IRISKY)
419 break;
422 if (insn_class == TRAP_RISKY || insn_class == IRISKY)
423 break;
425 return insn_class;
429 /* Classifies rtx X of an insn for the purpose of verifying that X can be
430 executed speculatively (and consequently the insn can be moved
431 speculatively), by examining X, returning:
432 TRAP_RISKY: store, or risky non-load insn (e.g. division by variable).
433 TRAP_FREE: non-load insn.
434 IFREE: load from a globally safe location.
435 IRISKY: volatile load.
436 PFREE_CANDIDATE, PRISKY_CANDIDATE: load that needs to be checked for
437 being either PFREE or PRISKY.  */
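/* Examples (editor's illustration): a SET whose destination is a MEM that
   may trap classifies as TRAP_RISKY; a SET reading a volatile MEM yields
   IRISKY; a possibly-trapping load whose address uses at most one base
   register yields PFREE_CANDIDATE, to be refined later into PFREE or
   PRISKY.  */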
439 static int
440 haifa_classify_rtx (const_rtx x)
442 int tmp_class = TRAP_FREE;
443 int insn_class = TRAP_FREE;
444 enum rtx_code code;
446 if (GET_CODE (x) == PARALLEL)
448 int i, len = XVECLEN (x, 0);
450 for (i = len - 1; i >= 0; i--)
452 tmp_class = haifa_classify_rtx (XVECEXP (x, 0, i));
453 insn_class = WORST_CLASS (insn_class, tmp_class);
454 if (insn_class == TRAP_RISKY || insn_class == IRISKY)
455 break;
458 else
460 code = GET_CODE (x);
461 switch (code)
463 case CLOBBER:
464 /* Test if it is a 'store'. */
465 tmp_class = may_trap_exp (XEXP (x, 0), 1);
466 break;
467 case SET:
468 /* Test if it is a store. */
469 tmp_class = may_trap_exp (SET_DEST (x), 1);
470 if (tmp_class == TRAP_RISKY)
471 break;
472 /* Test if it is a load. */
473 tmp_class =
474 WORST_CLASS (tmp_class,
475 may_trap_exp (SET_SRC (x), 0));
476 break;
477 case COND_EXEC:
478 tmp_class = haifa_classify_rtx (COND_EXEC_CODE (x));
479 if (tmp_class == TRAP_RISKY)
480 break;
481 tmp_class = WORST_CLASS (tmp_class,
482 may_trap_exp (COND_EXEC_TEST (x), 0));
483 break;
484 case TRAP_IF:
485 tmp_class = TRAP_RISKY;
486 break;
487 default:;
489 insn_class = tmp_class;
492 return insn_class;
496 haifa_classify_insn (const_rtx insn)
498 return haifa_classify_rtx (PATTERN (insn));
501 /* Forward declarations. */
503 static int priority (rtx);
504 static int rank_for_schedule (const void *, const void *);
505 static void swap_sort (rtx *, int);
506 static void queue_insn (rtx, int);
507 static int schedule_insn (rtx);
508 static void adjust_priority (rtx);
509 static void advance_one_cycle (void);
510 static void extend_h_i_d (void);
513 /* Notes handling mechanism:
514 =========================
515 Generally, NOTES are saved before scheduling and restored after scheduling.
516 The scheduler distinguishes between two types of notes:
518 (1) LOOP_BEGIN, LOOP_END, SETJMP, EHREGION_BEG, EHREGION_END notes:
519 Before scheduling a region, a pointer to the note is added to the insn
520 that follows or precedes it. (This happens as part of the data dependence
521 computation). After scheduling an insn, the pointer contained in it is
522 used for regenerating the corresponding note (in reemit_notes).
524 (2) All other notes (e.g. INSN_DELETED): Before scheduling a block,
525 these notes are put in a list (in rm_other_notes() and
526 unlink_other_notes ()). After scheduling the block, these notes are
527 inserted at the beginning of the block (in schedule_block()). */
529 static void ready_add (struct ready_list *, rtx, bool);
530 static rtx ready_remove_first (struct ready_list *);
531 static rtx ready_remove_first_dispatch (struct ready_list *ready);
533 static void queue_to_ready (struct ready_list *);
534 static int early_queue_to_ready (state_t, struct ready_list *);
536 static void debug_ready_list (struct ready_list *);
538 /* The following functions are used to implement multi-pass scheduling
539 on the first cycle. */
540 static rtx ready_remove (struct ready_list *, int);
541 static void ready_remove_insn (rtx);
543 static void fix_inter_tick (rtx, rtx);
544 static int fix_tick_ready (rtx);
545 static void change_queue_index (rtx, int);
547 /* The following functions are used to implement scheduling of data/control
548 speculative instructions. */
550 static void extend_h_i_d (void);
551 static void init_h_i_d (rtx);
552 static void generate_recovery_code (rtx);
553 static void process_insn_forw_deps_be_in_spec (rtx, rtx, ds_t);
554 static void begin_speculative_block (rtx);
555 static void add_to_speculative_block (rtx);
556 static void init_before_recovery (basic_block *);
557 static void create_check_block_twin (rtx, bool);
558 static void fix_recovery_deps (basic_block);
559 static void haifa_change_pattern (rtx, rtx);
560 static void dump_new_block_header (int, basic_block, rtx, rtx);
561 static void restore_bb_notes (basic_block);
562 static void fix_jump_move (rtx);
563 static void move_block_after_check (rtx);
564 static void move_succs (VEC(edge,gc) **, basic_block);
565 static void sched_remove_insn (rtx);
566 static void clear_priorities (rtx, rtx_vec_t *);
567 static void calc_priorities (rtx_vec_t);
568 static void add_jump_dependencies (rtx, rtx);
569 #ifdef ENABLE_CHECKING
570 static int has_edge_p (VEC(edge,gc) *, int);
571 static void check_cfg (rtx, rtx);
572 #endif
574 #endif /* INSN_SCHEDULING */
576 /* Point to state used for the current scheduling pass. */
577 struct haifa_sched_info *current_sched_info;
579 #ifndef INSN_SCHEDULING
580 void
581 schedule_insns (void)
584 #else
586 /* Do register pressure sensitive insn scheduling if the flag is set
587 up. */
588 bool sched_pressure_p;
590 /* Map regno -> its cover class.  The map is defined only when
591 SCHED_PRESSURE_P is true.  */
592 enum reg_class *sched_regno_cover_class;
594 /* The current register pressure.  Only elements corresponding to cover
595 classes are defined.  */
596 static int curr_reg_pressure[N_REG_CLASSES];
598 /* Saved value of the previous array. */
599 static int saved_reg_pressure[N_REG_CLASSES];
601 /* Registers live at the given scheduling point.  */
602 static bitmap curr_reg_live;
604 /* Saved value of the previous array. */
605 static bitmap saved_reg_live;
607 /* Registers mentioned in the current region. */
608 static bitmap region_ref_regs;
610 /* Initiate register pressure related info for scheduling the current
611 region.  Currently it only clears the registers mentioned in the
612 current region.  */
613 void
614 sched_init_region_reg_pressure_info (void)
616 bitmap_clear (region_ref_regs);
619 /* Update current register pressure related info after birth (if
620 BIRTH_P) or death of register REGNO. */
621 static void
622 mark_regno_birth_or_death (int regno, bool birth_p)
624 enum reg_class cover_class;
626 cover_class = sched_regno_cover_class[regno];
627 if (regno >= FIRST_PSEUDO_REGISTER)
629 if (cover_class != NO_REGS)
631 if (birth_p)
633 bitmap_set_bit (curr_reg_live, regno);
634 curr_reg_pressure[cover_class]
635 += ira_reg_class_nregs[cover_class][PSEUDO_REGNO_MODE (regno)];
637 else
639 bitmap_clear_bit (curr_reg_live, regno);
640 curr_reg_pressure[cover_class]
641 -= ira_reg_class_nregs[cover_class][PSEUDO_REGNO_MODE (regno)];
645 else if (cover_class != NO_REGS
646 && ! TEST_HARD_REG_BIT (ira_no_alloc_regs, regno))
648 if (birth_p)
650 bitmap_set_bit (curr_reg_live, regno);
651 curr_reg_pressure[cover_class]++;
653 else
655 bitmap_clear_bit (curr_reg_live, regno);
656 curr_reg_pressure[cover_class]--;
661 /* Initiate current register pressure related info from living
662 registers given by LIVE. */
663 static void
664 initiate_reg_pressure_info (bitmap live)
666 int i;
667 unsigned int j;
668 bitmap_iterator bi;
670 for (i = 0; i < ira_reg_class_cover_size; i++)
671 curr_reg_pressure[ira_reg_class_cover[i]] = 0;
672 bitmap_clear (curr_reg_live);
673 EXECUTE_IF_SET_IN_BITMAP (live, 0, j, bi)
674 if (current_nr_blocks == 1 || bitmap_bit_p (region_ref_regs, j))
675 mark_regno_birth_or_death (j, true);
678 /* Mark registers in X as mentioned in the current region. */
679 static void
680 setup_ref_regs (rtx x)
682 int i, j, regno;
683 const RTX_CODE code = GET_CODE (x);
684 const char *fmt;
686 if (REG_P (x))
688 regno = REGNO (x);
689 if (regno >= FIRST_PSEUDO_REGISTER)
690 bitmap_set_bit (region_ref_regs, REGNO (x));
691 else
692 for (i = hard_regno_nregs[regno][GET_MODE (x)] - 1; i >= 0; i--)
693 bitmap_set_bit (region_ref_regs, regno + i);
694 return;
696 fmt = GET_RTX_FORMAT (code);
697 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
698 if (fmt[i] == 'e')
699 setup_ref_regs (XEXP (x, i));
700 else if (fmt[i] == 'E')
702 for (j = 0; j < XVECLEN (x, i); j++)
703 setup_ref_regs (XVECEXP (x, i, j));
707 /* Initiate current register pressure related info at the start of
708 basic block BB. */
709 static void
710 initiate_bb_reg_pressure_info (basic_block bb)
712 unsigned int i ATTRIBUTE_UNUSED;
713 rtx insn;
715 if (current_nr_blocks > 1)
716 FOR_BB_INSNS (bb, insn)
717 if (NONDEBUG_INSN_P (insn))
718 setup_ref_regs (PATTERN (insn));
719 initiate_reg_pressure_info (df_get_live_in (bb));
720 #ifdef EH_RETURN_DATA_REGNO
721 if (bb_has_eh_pred (bb))
722 for (i = 0; ; ++i)
724 unsigned int regno = EH_RETURN_DATA_REGNO (i);
726 if (regno == INVALID_REGNUM)
727 break;
728 if (! bitmap_bit_p (df_get_live_in (bb), regno))
729 mark_regno_birth_or_death (regno, true);
731 #endif
734 /* Save current register pressure related info. */
735 static void
736 save_reg_pressure (void)
738 int i;
740 for (i = 0; i < ira_reg_class_cover_size; i++)
741 saved_reg_pressure[ira_reg_class_cover[i]]
742 = curr_reg_pressure[ira_reg_class_cover[i]];
743 bitmap_copy (saved_reg_live, curr_reg_live);
746 /* Restore saved register pressure related info. */
747 static void
748 restore_reg_pressure (void)
750 int i;
752 for (i = 0; i < ira_reg_class_cover_size; i++)
753 curr_reg_pressure[ira_reg_class_cover[i]]
754 = saved_reg_pressure[ira_reg_class_cover[i]];
755 bitmap_copy (curr_reg_live, saved_reg_live);
758 /* Return TRUE if the register is dying after its USE. */
759 static bool
760 dying_use_p (struct reg_use_data *use)
762 struct reg_use_data *next;
764 for (next = use->next_regno_use; next != use; next = next->next_regno_use)
765 if (NONDEBUG_INSN_P (next->insn)
766 && QUEUE_INDEX (next->insn) != QUEUE_SCHEDULED)
767 return false;
768 return true;
771 /* Print info about the current register pressure and its excess for
772 each cover class. */
773 static void
774 print_curr_reg_pressure (void)
776 int i;
777 enum reg_class cl;
779 fprintf (sched_dump, ";;\t");
780 for (i = 0; i < ira_reg_class_cover_size; i++)
782 cl = ira_reg_class_cover[i];
783 gcc_assert (curr_reg_pressure[cl] >= 0);
784 fprintf (sched_dump, " %s:%d(%d)", reg_class_names[cl],
785 curr_reg_pressure[cl],
786 curr_reg_pressure[cl] - ira_available_class_regs[cl]);
788 fprintf (sched_dump, "\n");
791 /* Pointer to the last instruction scheduled. Used by rank_for_schedule,
792 so that insns independent of the last scheduled insn will be preferred
793 over dependent instructions. */
795 static rtx last_scheduled_insn;
797 /* Cached cost of the instruction.  Use the function below to get the cost
798 of the insn.  -1 here means that the field is not initialized.  */
799 #define INSN_COST(INSN) (HID (INSN)->cost)
801 /* Compute cost of executing INSN.
802 This is the number of cycles between instruction issue and
803 instruction results. */
805 insn_cost (rtx insn)
807 int cost;
809 if (sel_sched_p ())
811 if (recog_memoized (insn) < 0)
812 return 0;
814 cost = insn_default_latency (insn);
815 if (cost < 0)
816 cost = 0;
818 return cost;
821 cost = INSN_COST (insn);
823 if (cost < 0)
825 /* A USE insn, or something else we don't need to
826 understand. We can't pass these directly to
827 result_ready_cost or insn_default_latency because it will
828 trigger a fatal error for unrecognizable insns. */
829 if (recog_memoized (insn) < 0)
831 INSN_COST (insn) = 0;
832 return 0;
834 else
836 cost = insn_default_latency (insn);
837 if (cost < 0)
838 cost = 0;
840 INSN_COST (insn) = cost;
844 return cost;
847 /* Compute cost of dependence LINK.
848 This is the number of cycles between instruction issue and
849 instruction results.
850 ??? We also use this function to call recog_memoized on all insns. */
852 dep_cost_1 (dep_t link, dw_t dw)
854 rtx insn = DEP_PRO (link);
855 rtx used = DEP_CON (link);
856 int cost;
858 /* A USE insn should never require the value used to be computed.
859 This allows the computation of a function's result and parameter
860 values to overlap the return and call.  We don't care about the
861 dependence cost when only decreasing register pressure.  */
862 if (recog_memoized (used) < 0)
864 cost = 0;
865 recog_memoized (insn);
867 else
869 enum reg_note dep_type = DEP_TYPE (link);
871 cost = insn_cost (insn);
873 if (INSN_CODE (insn) >= 0)
875 if (dep_type == REG_DEP_ANTI)
876 cost = 0;
877 else if (dep_type == REG_DEP_OUTPUT)
879 cost = (insn_default_latency (insn)
880 - insn_default_latency (used));
881 if (cost <= 0)
882 cost = 1;
884 else if (bypass_p (insn))
885 cost = insn_latency (insn, used);
889 if (targetm.sched.adjust_cost_2)
890 cost = targetm.sched.adjust_cost_2 (used, (int) dep_type, insn, cost,
891 dw);
892 else if (targetm.sched.adjust_cost != NULL)
894 /* This variable is used for backward compatibility with the
895 targets. */
896 rtx dep_cost_rtx_link = alloc_INSN_LIST (NULL_RTX, NULL_RTX);
898 /* Make it self-cycled, so that if someone tries to walk over this
899 incomplete list he/she will be caught in an endless loop.  */
900 XEXP (dep_cost_rtx_link, 1) = dep_cost_rtx_link;
902 /* Targets use only REG_NOTE_KIND of the link. */
903 PUT_REG_NOTE_KIND (dep_cost_rtx_link, DEP_TYPE (link));
905 cost = targetm.sched.adjust_cost (used, dep_cost_rtx_link,
906 insn, cost);
908 free_INSN_LIST_node (dep_cost_rtx_link);
911 if (cost < 0)
912 cost = 0;
915 return cost;
918 /* Compute cost of dependence LINK.
919 This is the number of cycles between instruction issue and
920 instruction results. */
922 dep_cost (dep_t link)
924 return dep_cost_1 (link, 0);
927 /* Use this sel-sched.c friendly function in reorder2 instead of increasing
928 INSN_PRIORITY explicitly. */
929 void
930 increase_insn_priority (rtx insn, int amount)
932 if (!sel_sched_p ())
934 /* We're dealing with haifa-sched.c INSN_PRIORITY. */
935 if (INSN_PRIORITY_KNOWN (insn))
936 INSN_PRIORITY (insn) += amount;
938 else
940 /* In sel-sched.c INSN_PRIORITY is not kept up to date.
941 Use EXPR_PRIORITY instead. */
942 sel_add_to_insn_priority (insn, amount);
946 /* Return 'true' if DEP should be included in priority calculations. */
947 static bool
948 contributes_to_priority_p (dep_t dep)
950 if (DEBUG_INSN_P (DEP_CON (dep))
951 || DEBUG_INSN_P (DEP_PRO (dep)))
952 return false;
954 /* Critical path is meaningful in block boundaries only. */
955 if (!current_sched_info->contributes_to_priority (DEP_CON (dep),
956 DEP_PRO (dep)))
957 return false;
959 /* If flag COUNT_SPEC_IN_CRITICAL_PATH is set,
960 then speculative instructions are less likely to be
961 scheduled.  That is because the priority of
962 their producers will increase, and, thus, the
963 producers are more likely to be scheduled, thus
964 resolving the dependence.  */
965 if (sched_deps_info->generate_spec_deps
966 && !(spec_info->flags & COUNT_SPEC_IN_CRITICAL_PATH)
967 && (DEP_STATUS (dep) & SPECULATIVE))
968 return false;
970 return true;
973 /* Compute the number of nondebug forward deps of an insn. */
975 static int
976 dep_list_size (rtx insn)
978 sd_iterator_def sd_it;
979 dep_t dep;
980 int dbgcount = 0, nodbgcount = 0;
982 if (!MAY_HAVE_DEBUG_INSNS)
983 return sd_lists_size (insn, SD_LIST_FORW);
985 FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
987 if (DEBUG_INSN_P (DEP_CON (dep)))
988 dbgcount++;
989 else if (!DEBUG_INSN_P (DEP_PRO (dep)))
990 nodbgcount++;
993 gcc_assert (dbgcount + nodbgcount == sd_lists_size (insn, SD_LIST_FORW));
995 return nodbgcount;
998 /* Compute the priority number for INSN. */
999 static int
1000 priority (rtx insn)
1002 if (! INSN_P (insn))
1003 return 0;
1005 /* We should not be interested in priority of an already scheduled insn. */
1006 gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
1008 if (!INSN_PRIORITY_KNOWN (insn))
1010 int this_priority = -1;
1012 if (dep_list_size (insn) == 0)
1013 /* ??? We should set INSN_PRIORITY to insn_cost when an insn has
1014 some forward deps but all of them are ignored by
1015 contributes_to_priority hook. At the moment we set priority of
1016 such insn to 0. */
1017 this_priority = insn_cost (insn);
1018 else
1020 rtx prev_first, twin;
1021 basic_block rec;
1023 /* For recovery check instructions we calculate priority slightly
1024 differently than for normal instructions.  Instead of walking
1025 through INSN_FORW_DEPS (check) list, we walk through
1026 INSN_FORW_DEPS list of each instruction in the corresponding
1027 recovery block. */
1029 /* Selective scheduling does not define RECOVERY_BLOCK macro. */
1030 rec = sel_sched_p () ? NULL : RECOVERY_BLOCK (insn);
1031 if (!rec || rec == EXIT_BLOCK_PTR)
1033 prev_first = PREV_INSN (insn);
1034 twin = insn;
1036 else
1038 prev_first = NEXT_INSN (BB_HEAD (rec));
1039 twin = PREV_INSN (BB_END (rec));
1044 sd_iterator_def sd_it;
1045 dep_t dep;
1047 FOR_EACH_DEP (twin, SD_LIST_FORW, sd_it, dep)
1049 rtx next;
1050 int next_priority;
1052 next = DEP_CON (dep);
1054 if (BLOCK_FOR_INSN (next) != rec)
1056 int cost;
1058 if (!contributes_to_priority_p (dep))
1059 continue;
1061 if (twin == insn)
1062 cost = dep_cost (dep);
1063 else
1065 struct _dep _dep1, *dep1 = &_dep1;
1067 init_dep (dep1, insn, next, REG_DEP_ANTI);
1069 cost = dep_cost (dep1);
1072 next_priority = cost + priority (next);
1074 if (next_priority > this_priority)
1075 this_priority = next_priority;
1079 twin = PREV_INSN (twin);
1081 while (twin != prev_first);
1084 if (this_priority < 0)
1086 gcc_assert (this_priority == -1);
1088 this_priority = insn_cost (insn);
1091 INSN_PRIORITY (insn) = this_priority;
1092 INSN_PRIORITY_STATUS (insn) = 1;
1095 return INSN_PRIORITY (insn);
1098 /* Macros and functions for keeping the priority queue sorted, and
1099 dealing with queuing and dequeuing of instructions. */
1101 #define SCHED_SORT(READY, N_READY) \
1102 do { if ((N_READY) == 2) \
1103 swap_sort (READY, N_READY); \
1104 else if ((N_READY) > 2) \
1105 qsort (READY, N_READY, sizeof (rtx), rank_for_schedule); } \
1106 while (0)
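/* Editor's note: SCHED_SORT is invoked by ready_sort () below as
   SCHED_SORT (ready_lastpos (ready), ready->n_ready); with exactly two
   ready insns it degenerates to swap_sort (), avoiding a full qsort.  */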
1108 /* Setup info about the current register pressure impact of scheduling
1109 INSN at the current scheduling point. */
1110 static void
1111 setup_insn_reg_pressure_info (rtx insn)
1113 int i, change, before, after, hard_regno;
1114 int excess_cost_change;
1115 enum machine_mode mode;
1116 enum reg_class cl;
1117 struct reg_pressure_data *pressure_info;
1118 int *max_reg_pressure;
1119 struct reg_use_data *use;
1120 static int death[N_REG_CLASSES];
1122 gcc_checking_assert (!DEBUG_INSN_P (insn));
1124 excess_cost_change = 0;
1125 for (i = 0; i < ira_reg_class_cover_size; i++)
1126 death[ira_reg_class_cover[i]] = 0;
1127 for (use = INSN_REG_USE_LIST (insn); use != NULL; use = use->next_insn_use)
1128 if (dying_use_p (use))
1130 cl = sched_regno_cover_class[use->regno];
1131 if (use->regno < FIRST_PSEUDO_REGISTER)
1132 death[cl]++;
1133 else
1134 death[cl] += ira_reg_class_nregs[cl][PSEUDO_REGNO_MODE (use->regno)];
1136 pressure_info = INSN_REG_PRESSURE (insn);
1137 max_reg_pressure = INSN_MAX_REG_PRESSURE (insn);
1138 gcc_assert (pressure_info != NULL && max_reg_pressure != NULL);
1139 for (i = 0; i < ira_reg_class_cover_size; i++)
1141 cl = ira_reg_class_cover[i];
1142 gcc_assert (curr_reg_pressure[cl] >= 0);
1143 change = (int) pressure_info[i].set_increase - death[cl];
1144 before = MAX (0, max_reg_pressure[i] - ira_available_class_regs[cl]);
1145 after = MAX (0, max_reg_pressure[i] + change
1146 - ira_available_class_regs[cl]);
1147 hard_regno = ira_class_hard_regs[cl][0];
1148 gcc_assert (hard_regno >= 0);
1149 mode = reg_raw_mode[hard_regno];
1150 excess_cost_change += ((after - before)
1151 * (ira_memory_move_cost[mode][cl][0]
1152 + ira_memory_move_cost[mode][cl][1]));
1154 INSN_REG_PRESSURE_EXCESS_COST_CHANGE (insn) = excess_cost_change;
1157 /* Returns a positive value if x is preferred; returns a negative value if
1158 y is preferred. Should never return 0, since that will make the sort
1159 unstable. */
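/* Editor's note: the ready vector keeps its best insn at the highest
   occupied index (ready->first), which ready_remove_first () pops first.
   A positive return value therefore moves X towards that end of the
   array, i.e. makes X be picked earlier.  */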
1161 static int
1162 rank_for_schedule (const void *x, const void *y)
1164 rtx tmp = *(const rtx *) y;
1165 rtx tmp2 = *(const rtx *) x;
1166 rtx last;
1167 int tmp_class, tmp2_class;
1168 int val, priority_val, info_val;
1170 if (MAY_HAVE_DEBUG_INSNS)
1172 /* Schedule debug insns as early as possible. */
1173 if (DEBUG_INSN_P (tmp) && !DEBUG_INSN_P (tmp2))
1174 return -1;
1175 else if (DEBUG_INSN_P (tmp2))
1176 return 1;
1179 /* The insn in a schedule group should be issued first.  */
1180 if (flag_sched_group_heuristic &&
1181 SCHED_GROUP_P (tmp) != SCHED_GROUP_P (tmp2))
1182 return SCHED_GROUP_P (tmp2) ? 1 : -1;
1184 /* Make sure that the priorities of TMP and TMP2 are initialized.  */
1185 gcc_assert (INSN_PRIORITY_KNOWN (tmp) && INSN_PRIORITY_KNOWN (tmp2));
1187 if (sched_pressure_p)
1189 int diff;
1191 /* Prefer insn whose scheduling results in the smallest register
1192 pressure excess. */
1193 if ((diff = (INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp)
1194 + (INSN_TICK (tmp) > clock_var
1195 ? INSN_TICK (tmp) - clock_var : 0)
1196 - INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp2)
1197 - (INSN_TICK (tmp2) > clock_var
1198 ? INSN_TICK (tmp2) - clock_var : 0))) != 0)
1199 return diff;
1203 if (sched_pressure_p
1204 && (INSN_TICK (tmp2) > clock_var || INSN_TICK (tmp) > clock_var))
1206 if (INSN_TICK (tmp) <= clock_var)
1207 return -1;
1208 else if (INSN_TICK (tmp2) <= clock_var)
1209 return 1;
1210 else
1211 return INSN_TICK (tmp) - INSN_TICK (tmp2);
1213 /* Prefer insn with higher priority. */
1214 priority_val = INSN_PRIORITY (tmp2) - INSN_PRIORITY (tmp);
1216 if (flag_sched_critical_path_heuristic && priority_val)
1217 return priority_val;
1219 /* Prefer the speculative insn with the greater dependency weakness.  */
1220 if (flag_sched_spec_insn_heuristic && spec_info)
1222 ds_t ds1, ds2;
1223 dw_t dw1, dw2;
1224 int dw;
1226 ds1 = TODO_SPEC (tmp) & SPECULATIVE;
1227 if (ds1)
1228 dw1 = ds_weak (ds1);
1229 else
1230 dw1 = NO_DEP_WEAK;
1232 ds2 = TODO_SPEC (tmp2) & SPECULATIVE;
1233 if (ds2)
1234 dw2 = ds_weak (ds2);
1235 else
1236 dw2 = NO_DEP_WEAK;
1238 dw = dw2 - dw1;
1239 if (dw > (NO_DEP_WEAK / 8) || dw < -(NO_DEP_WEAK / 8))
1240 return dw;
1243 info_val = (*current_sched_info->rank) (tmp, tmp2);
1244 if (flag_sched_rank_heuristic && info_val)
1245 return info_val;
1247 if (flag_sched_last_insn_heuristic)
1249 last = last_scheduled_insn;
1251 if (DEBUG_INSN_P (last) && last != current_sched_info->prev_head)
1253 last = PREV_INSN (last);
1254 while (!NONDEBUG_INSN_P (last)
1255 && last != current_sched_info->prev_head);
1258 /* Compare insns based on their relation to the last scheduled
1259 non-debug insn. */
1260 if (flag_sched_last_insn_heuristic && NONDEBUG_INSN_P (last))
1262 dep_t dep1;
1263 dep_t dep2;
1265 /* Classify the instructions into three classes:
1266 1) Data dependent on last schedule insn.
1267 2) Anti/Output dependent on last scheduled insn.
1268 3) Independent of last scheduled insn, or has latency of one.
1269 Choose the insn from the highest numbered class if different. */
1270 dep1 = sd_find_dep_between (last, tmp, true);
1272 if (dep1 == NULL || dep_cost (dep1) == 1)
1273 tmp_class = 3;
1274 else if (/* Data dependence. */
1275 DEP_TYPE (dep1) == REG_DEP_TRUE)
1276 tmp_class = 1;
1277 else
1278 tmp_class = 2;
1280 dep2 = sd_find_dep_between (last, tmp2, true);
1282 if (dep2 == NULL || dep_cost (dep2) == 1)
1283 tmp2_class = 3;
1284 else if (/* Data dependence. */
1285 DEP_TYPE (dep2) == REG_DEP_TRUE)
1286 tmp2_class = 1;
1287 else
1288 tmp2_class = 2;
1290 if ((val = tmp2_class - tmp_class))
1291 return val;
1294 /* Prefer the insn which has more later insns that depend on it.
1295 This gives the scheduler more freedom when scheduling later
1296 instructions at the expense of added register pressure. */
1298 val = (dep_list_size (tmp2) - dep_list_size (tmp));
1300 if (flag_sched_dep_count_heuristic && val != 0)
1301 return val;
1303 /* If insns are equally good, sort by INSN_LUID (original insn order),
1304 so that we make the sort stable. This minimizes instruction movement,
1305 thus minimizing sched's effect on debugging and cross-jumping. */
1306 return INSN_LUID (tmp) - INSN_LUID (tmp2);
1309 /* Resort the array A, in which only the last element (at index N-1) may be out of order.  */
1311 HAIFA_INLINE static void
1312 swap_sort (rtx *a, int n)
1314 rtx insn = a[n - 1];
1315 int i = n - 2;
1317 while (i >= 0 && rank_for_schedule (a + i, &insn) >= 0)
1319 a[i + 1] = a[i];
1320 i -= 1;
1322 a[i + 1] = insn;
1325 /* Add INSN to the insn queue so that it can be executed at least
1326 N_CYCLES after the currently executing insn. Preserve insns
1327 chain for debugging purposes. */
1329 HAIFA_INLINE static void
1330 queue_insn (rtx insn, int n_cycles)
1332 int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
1333 rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]);
1335 gcc_assert (n_cycles <= max_insn_queue_index);
1336 gcc_assert (!DEBUG_INSN_P (insn));
1338 insn_queue[next_q] = link;
1339 q_size += 1;
1341 if (sched_verbose >= 2)
1343 fprintf (sched_dump, ";;\t\tReady-->Q: insn %s: ",
1344 (*current_sched_info->print_insn) (insn, 0));
1346 fprintf (sched_dump, "queued for %d cycles.\n", n_cycles);
1349 QUEUE_INDEX (insn) = next_q;
1352 /* Remove INSN from queue. */
1353 static void
1354 queue_remove (rtx insn)
1356 gcc_assert (QUEUE_INDEX (insn) >= 0);
1357 remove_free_INSN_LIST_elem (insn, &insn_queue[QUEUE_INDEX (insn)]);
1358 q_size--;
1359 QUEUE_INDEX (insn) = QUEUE_NOWHERE;
1362 /* Return a pointer to the bottom of the ready list, i.e. the insn
1363 with the lowest priority. */
1365 rtx *
1366 ready_lastpos (struct ready_list *ready)
1368 gcc_assert (ready->n_ready >= 1);
1369 return ready->vec + ready->first - ready->n_ready + 1;
1372 /* Add an element INSN to the ready list so that it ends up with the
1373 lowest/highest priority depending on FIRST_P. */
1375 HAIFA_INLINE static void
1376 ready_add (struct ready_list *ready, rtx insn, bool first_p)
1378 if (!first_p)
1380 if (ready->first == ready->n_ready)
1382 memmove (ready->vec + ready->veclen - ready->n_ready,
1383 ready_lastpos (ready),
1384 ready->n_ready * sizeof (rtx));
1385 ready->first = ready->veclen - 1;
1387 ready->vec[ready->first - ready->n_ready] = insn;
1389 else
1391 if (ready->first == ready->veclen - 1)
1393 if (ready->n_ready)
1394 /* ready_lastpos() fails when called with (ready->n_ready == 0). */
1395 memmove (ready->vec + ready->veclen - ready->n_ready - 1,
1396 ready_lastpos (ready),
1397 ready->n_ready * sizeof (rtx));
1398 ready->first = ready->veclen - 2;
1400 ready->vec[++(ready->first)] = insn;
1403 ready->n_ready++;
1404 if (DEBUG_INSN_P (insn))
1405 ready->n_debug++;
1407 gcc_assert (QUEUE_INDEX (insn) != QUEUE_READY);
1408 QUEUE_INDEX (insn) = QUEUE_READY;
1411 /* Remove the element with the highest priority from the ready list and
1412 return it. */
1414 HAIFA_INLINE static rtx
1415 ready_remove_first (struct ready_list *ready)
1417 rtx t;
1419 gcc_assert (ready->n_ready);
1420 t = ready->vec[ready->first--];
1421 ready->n_ready--;
1422 if (DEBUG_INSN_P (t))
1423 ready->n_debug--;
1424 /* If the queue becomes empty, reset it. */
1425 if (ready->n_ready == 0)
1426 ready->first = ready->veclen - 1;
1428 gcc_assert (QUEUE_INDEX (t) == QUEUE_READY);
1429 QUEUE_INDEX (t) = QUEUE_NOWHERE;
1431 return t;
1434 /* The following code implements multi-pass scheduling for the first
1435 cycle.  In other words, we will try to choose the ready insn that
1436 permits starting the maximum number of insns on the same cycle.  */
1438 /* Return a pointer to the element INDEX from the ready list.  INDEX for
1439 insn with the highest priority is 0, and the lowest priority has
1440 N_READY - 1. */
1443 ready_element (struct ready_list *ready, int index)
1445 gcc_assert (ready->n_ready && index < ready->n_ready);
1447 return ready->vec[ready->first - index];
1450 /* Remove the element INDEX from the ready list and return it. INDEX
1451 for insn with the highest priority is 0, and the lowest priority
1452 has N_READY - 1. */
1454 HAIFA_INLINE static rtx
1455 ready_remove (struct ready_list *ready, int index)
1457 rtx t;
1458 int i;
1460 if (index == 0)
1461 return ready_remove_first (ready);
1462 gcc_assert (ready->n_ready && index < ready->n_ready);
1463 t = ready->vec[ready->first - index];
1464 ready->n_ready--;
1465 if (DEBUG_INSN_P (t))
1466 ready->n_debug--;
1467 for (i = index; i < ready->n_ready; i++)
1468 ready->vec[ready->first - i] = ready->vec[ready->first - i - 1];
1469 QUEUE_INDEX (t) = QUEUE_NOWHERE;
1470 return t;
1473 /* Remove INSN from the ready list. */
1474 static void
1475 ready_remove_insn (rtx insn)
1477 int i;
1479 for (i = 0; i < readyp->n_ready; i++)
1480 if (ready_element (readyp, i) == insn)
1482 ready_remove (readyp, i);
1483 return;
1485 gcc_unreachable ();
1488 /* Sort the ready list READY by ascending priority, using the SCHED_SORT
1489 macro. */
1491 void
1492 ready_sort (struct ready_list *ready)
1494 int i;
1495 rtx *first = ready_lastpos (ready);
1497 if (sched_pressure_p)
1499 for (i = 0; i < ready->n_ready; i++)
1500 if (!DEBUG_INSN_P (first[i]))
1501 setup_insn_reg_pressure_info (first[i]);
1503 SCHED_SORT (first, ready->n_ready);
1506 /* PREV is an insn that is ready to execute. Adjust its priority if that
1507 will help shorten or lengthen register lifetimes as appropriate. Also
1508 provide a hook for the target to tweak itself. */
1510 HAIFA_INLINE static void
1511 adjust_priority (rtx prev)
1513 /* ??? There used to be code here to try and estimate how an insn
1514 affected register lifetimes, but it did it by looking at REG_DEAD
1515 notes, which we removed in schedule_region. Nor did it try to
1516 take into account register pressure or anything useful like that.
1518 Revisit when we have a machine model to work with and not before. */
1520 if (targetm.sched.adjust_priority)
1521 INSN_PRIORITY (prev) =
1522 targetm.sched.adjust_priority (prev, INSN_PRIORITY (prev));
1525 /* Advance DFA state STATE on one cycle. */
1526 void
1527 advance_state (state_t state)
1529 if (targetm.sched.dfa_pre_advance_cycle)
1530 targetm.sched.dfa_pre_advance_cycle ();
1532 if (targetm.sched.dfa_pre_cycle_insn)
1533 state_transition (state,
1534 targetm.sched.dfa_pre_cycle_insn ());
1536 state_transition (state, NULL);
1538 if (targetm.sched.dfa_post_cycle_insn)
1539 state_transition (state,
1540 targetm.sched.dfa_post_cycle_insn ());
1542 if (targetm.sched.dfa_post_advance_cycle)
1543 targetm.sched.dfa_post_advance_cycle ();
1546 /* Advance time on one cycle. */
1547 HAIFA_INLINE static void
1548 advance_one_cycle (void)
1550 advance_state (curr_state);
1551 if (sched_verbose >= 6)
1552 fprintf (sched_dump, ";;\tAdvanced a state.\n");
1555 /* Clock at which the previous instruction was issued. */
1556 static int last_clock_var;
1558 /* Update register pressure after scheduling INSN. */
1559 static void
1560 update_register_pressure (rtx insn)
1562 struct reg_use_data *use;
1563 struct reg_set_data *set;
1565 gcc_checking_assert (!DEBUG_INSN_P (insn));
1567 for (use = INSN_REG_USE_LIST (insn); use != NULL; use = use->next_insn_use)
1568 if (dying_use_p (use) && bitmap_bit_p (curr_reg_live, use->regno))
1569 mark_regno_birth_or_death (use->regno, false);
1570 for (set = INSN_REG_SET_LIST (insn); set != NULL; set = set->next_insn_set)
1571 mark_regno_birth_or_death (set->regno, true);
1574 /* Set up or update (if UPDATE_P) max register pressure (see its
1575 meaning in sched-int.h::_haifa_insn_data) for all current BB insns
1576 after insn AFTER. */
1577 static void
1578 setup_insn_max_reg_pressure (rtx after, bool update_p)
1580 int i, p;
1581 bool eq_p;
1582 rtx insn;
1583 static int max_reg_pressure[N_REG_CLASSES];
1585 save_reg_pressure ();
1586 for (i = 0; i < ira_reg_class_cover_size; i++)
1587 max_reg_pressure[ira_reg_class_cover[i]]
1588 = curr_reg_pressure[ira_reg_class_cover[i]];
1589 for (insn = NEXT_INSN (after);
1590 insn != NULL_RTX && ! BARRIER_P (insn)
1591 && BLOCK_FOR_INSN (insn) == BLOCK_FOR_INSN (after);
1592 insn = NEXT_INSN (insn))
1593 if (NONDEBUG_INSN_P (insn))
1595 eq_p = true;
1596 for (i = 0; i < ira_reg_class_cover_size; i++)
1598 p = max_reg_pressure[ira_reg_class_cover[i]];
1599 if (INSN_MAX_REG_PRESSURE (insn)[i] != p)
1601 eq_p = false;
1602 INSN_MAX_REG_PRESSURE (insn)[i]
1603 = max_reg_pressure[ira_reg_class_cover[i]];
1606 if (update_p && eq_p)
1607 break;
1608 update_register_pressure (insn);
1609 for (i = 0; i < ira_reg_class_cover_size; i++)
1610 if (max_reg_pressure[ira_reg_class_cover[i]]
1611 < curr_reg_pressure[ira_reg_class_cover[i]])
1612 max_reg_pressure[ira_reg_class_cover[i]]
1613 = curr_reg_pressure[ira_reg_class_cover[i]];
1615 restore_reg_pressure ();
1618 /* Update the current register pressure after scheduling INSN. Update
1619 also max register pressure for unscheduled insns of the current
1620 BB. */
1621 static void
1622 update_reg_and_insn_max_reg_pressure (rtx insn)
1624 int i;
1625 int before[N_REG_CLASSES];
1627 for (i = 0; i < ira_reg_class_cover_size; i++)
1628 before[i] = curr_reg_pressure[ira_reg_class_cover[i]];
1629 update_register_pressure (insn);
1630 for (i = 0; i < ira_reg_class_cover_size; i++)
1631 if (curr_reg_pressure[ira_reg_class_cover[i]] != before[i])
1632 break;
1633 if (i < ira_reg_class_cover_size)
1634 setup_insn_max_reg_pressure (insn, true);
1637 /* Set up register pressure at the beginning of basic block BB whose
1638 insns start after insn AFTER.  Also set up the max register pressure
1639 for all insns of the basic block. */
1640 void
1641 sched_setup_bb_reg_pressure_info (basic_block bb, rtx after)
1643 gcc_assert (sched_pressure_p);
1644 initiate_bb_reg_pressure_info (bb);
1645 setup_insn_max_reg_pressure (after, false);
1648 /* INSN is the "currently executing insn". Launch each insn which was
1649 waiting on INSN. READY is the ready list which contains the insns
1650 that are ready to fire. CLOCK is the current cycle. The function
1651 returns necessary cycle advance after issuing the insn (it is not
1652 zero for insns in a schedule group). */
1654 static int
1655 schedule_insn (rtx insn)
1657 sd_iterator_def sd_it;
1658 dep_t dep;
1659 int i;
1660 int advance = 0;
1662 if (sched_verbose >= 1)
1664 struct reg_pressure_data *pressure_info;
1665 char buf[2048];
1667 print_insn (buf, insn, 0);
1668 buf[40] = 0;
1669 fprintf (sched_dump, ";;\t%3i--> %-40s:", clock_var, buf);
1671 if (recog_memoized (insn) < 0)
1672 fprintf (sched_dump, "nothing");
1673 else
1674 print_reservation (sched_dump, insn);
1675 pressure_info = INSN_REG_PRESSURE (insn);
1676 if (pressure_info != NULL)
1678 fputc (':', sched_dump);
1679 for (i = 0; i < ira_reg_class_cover_size; i++)
1680 fprintf (sched_dump, "%s%+d(%d)",
1681 reg_class_names[ira_reg_class_cover[i]],
1682 pressure_info[i].set_increase, pressure_info[i].change);
1684 fputc ('\n', sched_dump);
1687 if (sched_pressure_p && !DEBUG_INSN_P (insn))
1688 update_reg_and_insn_max_reg_pressure (insn);
1690 /* Scheduling instruction should have all its dependencies resolved and
1691 should have been removed from the ready list. */
1692 gcc_assert (sd_lists_empty_p (insn, SD_LIST_BACK));
1694 /* Reset debug insns invalidated by moving this insn. */
1695 if (MAY_HAVE_DEBUG_INSNS && !DEBUG_INSN_P (insn))
1696 for (sd_it = sd_iterator_start (insn, SD_LIST_BACK);
1697 sd_iterator_cond (&sd_it, &dep);)
1699 rtx dbg = DEP_PRO (dep);
1700 struct reg_use_data *use, *next;
1702 gcc_assert (DEBUG_INSN_P (dbg));
1704 if (sched_verbose >= 6)
1705 fprintf (sched_dump, ";;\t\tresetting: debug insn %d\n",
1706 INSN_UID (dbg));
1708 /* ??? Rather than resetting the debug insn, we might be able
1709 to emit a debug temp before the just-scheduled insn, but
1710 this would involve checking that the expression at the
1711 point of the debug insn is equivalent to the expression
1712 before the just-scheduled insn. They might not be: the
1713 expression in the debug insn may depend on other insns not
1714 yet scheduled that set MEMs, REGs or even other debug
1715 insns. It's not clear that attempting to preserve debug
1716 information in these cases is worth the effort, given how
1717 uncommon these resets are and the likelihood that the debug
1718 temps introduced won't survive the schedule change. */
1719 INSN_VAR_LOCATION_LOC (dbg) = gen_rtx_UNKNOWN_VAR_LOC ();
1720 df_insn_rescan (dbg);
1722 /* Unknown location doesn't use any registers. */
1723 for (use = INSN_REG_USE_LIST (dbg); use != NULL; use = next)
1725 struct reg_use_data *prev = use;
1727 /* Remove use from the cyclic next_regno_use chain first. */
1728 while (prev->next_regno_use != use)
1729 prev = prev->next_regno_use;
1730 prev->next_regno_use = use->next_regno_use;
1731 next = use->next_insn_use;
1732 free (use);
1734 INSN_REG_USE_LIST (dbg) = NULL;
1736 /* We delete rather than resolve these deps, otherwise we
1737 crash in sched_free_deps(), because forward deps are
1738 expected to be released before backward deps. */
1739 sd_delete_dep (sd_it);
1742 gcc_assert (QUEUE_INDEX (insn) == QUEUE_NOWHERE);
1743 QUEUE_INDEX (insn) = QUEUE_SCHEDULED;
1745 gcc_assert (INSN_TICK (insn) >= MIN_TICK);
1746 if (INSN_TICK (insn) > clock_var)
1747 /* INSN has been prematurely moved from the queue to the ready list.
1748 This is possible only if the following flag is set.  */
1749 gcc_assert (flag_sched_stalled_insns);
1751 /* ??? Probably, if INSN is scheduled prematurely, we should leave
1752 INSN_TICK untouched. This is a machine-dependent issue, actually. */
1753 INSN_TICK (insn) = clock_var;
1755 /* Update dependent instructions. */
1756 for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
1757 sd_iterator_cond (&sd_it, &dep);)
1759 rtx next = DEP_CON (dep);
1761 /* Resolve the dependence between INSN and NEXT.
1762 sd_resolve_dep () moves current dep to another list thus
1763 advancing the iterator. */
1764 sd_resolve_dep (sd_it);
1766 /* Don't bother trying to mark next as ready if insn is a debug
1767 insn. If insn is the last hard dependency, it will have
1768 already been discounted. */
1769 if (DEBUG_INSN_P (insn) && !DEBUG_INSN_P (next))
1770 continue;
1772 if (!IS_SPECULATION_BRANCHY_CHECK_P (insn))
1774 int effective_cost;
1776 effective_cost = try_ready (next);
1778 if (effective_cost >= 0
1779 && SCHED_GROUP_P (next)
1780 && advance < effective_cost)
1781 advance = effective_cost;
1783 else
1784 /* Check always has only one forward dependence (to the first insn in
1785 the recovery block), therefore, this will be executed only once. */
1787 gcc_assert (sd_lists_empty_p (insn, SD_LIST_FORW));
1788 fix_recovery_deps (RECOVERY_BLOCK (insn));
1792 /* This is the place where the scheduler basically doesn't need backward and
1793 forward dependencies for INSN anymore.  Nevertheless they are used in
1794 heuristics in rank_for_schedule (), early_queue_to_ready () and in
1795 some targets (e.g. rs6000). Thus the earliest place where we *can*
1796 remove dependencies is after targetm.sched.finish () call in
1797 schedule_block ().  But, on the other hand, the safest place to remove
1798 dependencies is when we are finishing scheduling entire region. As we
1799 don't generate [many] dependencies during scheduling itself, we won't
1800 need memory until beginning of next region.
1801 Bottom line: Dependencies are removed for all insns in the end of
1802 scheduling the region. */
1804 /* Annotate the instruction with issue information -- TImode
1805 indicates that the instruction is expected not to be able
1806 to issue on the same cycle as the previous insn. A machine
1807 may use this information to decide how the instruction should
1808 be aligned. */
1809 if (issue_rate > 1
1810 && GET_CODE (PATTERN (insn)) != USE
1811 && GET_CODE (PATTERN (insn)) != CLOBBER
1812 && !DEBUG_INSN_P (insn))
1814 if (reload_completed)
1815 PUT_MODE (insn, clock_var > last_clock_var ? TImode : VOIDmode);
1816 last_clock_var = clock_var;
1819 return advance;
1822 /* Functions for handling of notes. */
1824 /* Add note list that ends on FROM_END to the end of TO_ENDP. */
1825 void
1826 concat_note_lists (rtx from_end, rtx *to_endp)
1828 rtx from_start;
1830 /* It's easy when we have nothing to concat.  */
1831 if (from_end == NULL)
1832 return;
1834 /* It's also easy when destination is empty. */
1835 if (*to_endp == NULL)
1837 *to_endp = from_end;
1838 return;
1841 from_start = from_end;
1842 while (PREV_INSN (from_start) != NULL)
1843 from_start = PREV_INSN (from_start);
1845 PREV_INSN (from_start) = *to_endp;
1846 NEXT_INSN (*to_endp) = from_start;
1847 *to_endp = from_end;
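/* A self-contained sketch of the splice concat_note_lists performs: given
   the tail of a source list and a pointer to the destination tail, walk
   back to the source head and link it after the destination tail.  The
   node type and names here are hypothetical stand-ins for the insn chain.  */
#if 0
struct dl_node { struct dl_node *prev, *next; };

static void
splice_after_tail (struct dl_node *from_end, struct dl_node **to_endp)
{
  struct dl_node *from_start;

  if (from_end == NULL)
    return;

  if (*to_endp == NULL)
    {
      *to_endp = from_end;
      return;
    }

  from_start = from_end;
  while (from_start->prev != NULL)
    from_start = from_start->prev;

  from_start->prev = *to_endp;
  (*to_endp)->next = from_start;
  *to_endp = from_end;
}
#endif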
1850 /* Delete notes between HEAD and TAIL and put them in the chain
1851 of notes ended by NOTE_LIST. */
1852 void
1853 remove_notes (rtx head, rtx tail)
1855 rtx next_tail, insn, next;
1857 note_list = 0;
1858 if (head == tail && !INSN_P (head))
1859 return;
1861 next_tail = NEXT_INSN (tail);
1862 for (insn = head; insn != next_tail; insn = next)
1864 next = NEXT_INSN (insn);
1865 if (!NOTE_P (insn))
1866 continue;
1868 switch (NOTE_KIND (insn))
1870 case NOTE_INSN_BASIC_BLOCK:
1871 continue;
1873 case NOTE_INSN_EPILOGUE_BEG:
1874 if (insn != tail)
1876 remove_insn (insn);
1877 add_reg_note (next, REG_SAVE_NOTE,
1878 GEN_INT (NOTE_INSN_EPILOGUE_BEG));
1879 break;
1881 /* FALLTHRU */
1883 default:
1884 remove_insn (insn);
1886 /* Add the note to list that ends at NOTE_LIST. */
1887 PREV_INSN (insn) = note_list;
1888 NEXT_INSN (insn) = NULL_RTX;
1889 if (note_list)
1890 NEXT_INSN (note_list) = insn;
1891 note_list = insn;
1892 break;
1895 gcc_assert ((sel_sched_p () || insn != tail) && insn != head);
1900 /* Return the head and tail pointers of ebb starting at BEG and ending
1901 at END. */
1902 void
1903 get_ebb_head_tail (basic_block beg, basic_block end, rtx *headp, rtx *tailp)
1905 rtx beg_head = BB_HEAD (beg);
1906 rtx beg_tail = BB_END (beg);
1907 rtx end_head = BB_HEAD (end);
1908 rtx end_tail = BB_END (end);
1910 /* Don't include any notes or labels at the beginning of the BEG
1911 basic block, or notes at the end of the END basic block. */
1913 if (LABEL_P (beg_head))
1914 beg_head = NEXT_INSN (beg_head);
1916 while (beg_head != beg_tail)
1917 if (NOTE_P (beg_head) || BOUNDARY_DEBUG_INSN_P (beg_head))
1918 beg_head = NEXT_INSN (beg_head);
1919 else
1920 break;
1922 *headp = beg_head;
1924 if (beg == end)
1925 end_head = beg_head;
1926 else if (LABEL_P (end_head))
1927 end_head = NEXT_INSN (end_head);
1929 while (end_head != end_tail)
1930 if (NOTE_P (end_tail) || BOUNDARY_DEBUG_INSN_P (end_tail))
1931 end_tail = PREV_INSN (end_tail);
1932 else
1933 break;
1935 *tailp = end_tail;
1938 /* Return nonzero if there are no real insns in the range [ HEAD, TAIL ]. */
1941 no_real_insns_p (const_rtx head, const_rtx tail)
1943 while (head != NEXT_INSN (tail))
1945 if (!NOTE_P (head) && !LABEL_P (head)
1946 && !BOUNDARY_DEBUG_INSN_P (head))
1947 return 0;
1948 head = NEXT_INSN (head);
1950 return 1;
1953 /* Restore other notes: NOTE_LIST is the end of a chain of notes
1954 previously found among the insns. Insert them just before HEAD; return the new head. */
1956 restore_other_notes (rtx head, basic_block head_bb)
1958 if (note_list != 0)
1960 rtx note_head = note_list;
1962 if (head)
1963 head_bb = BLOCK_FOR_INSN (head);
1964 else
1965 head = NEXT_INSN (bb_note (head_bb));
1967 while (PREV_INSN (note_head))
1969 set_block_for_insn (note_head, head_bb);
1970 note_head = PREV_INSN (note_head);
1972 /* The loop above missed this note. */
1973 set_block_for_insn (note_head, head_bb);
1975 PREV_INSN (note_head) = PREV_INSN (head);
1976 NEXT_INSN (PREV_INSN (head)) = note_head;
1977 PREV_INSN (head) = note_list;
1978 NEXT_INSN (note_list) = head;
1980 if (BLOCK_FOR_INSN (head) != head_bb)
1981 BB_END (head_bb) = note_list;
1983 head = note_head;
1986 return head;
1989 /* Move insns that became ready to fire from queue to ready list. */
1991 static void
1992 queue_to_ready (struct ready_list *ready)
1994 rtx insn;
1995 rtx link;
1996 rtx skip_insn;
1998 q_ptr = NEXT_Q (q_ptr);
2000 if (dbg_cnt (sched_insn) == false)
2001 /* If debug counter is activated do not requeue insn next after
2002 last_scheduled_insn. */
2003 skip_insn = next_nonnote_nondebug_insn (last_scheduled_insn);
2004 else
2005 skip_insn = NULL_RTX;
2007 /* Add all pending insns that can be scheduled without stalls to the
2008 ready list. */
2009 for (link = insn_queue[q_ptr]; link; link = XEXP (link, 1))
2011 insn = XEXP (link, 0);
2012 q_size -= 1;
2014 if (sched_verbose >= 2)
2015 fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
2016 (*current_sched_info->print_insn) (insn, 0));
2018 /* If the ready list is full, delay the insn for 1 cycle.
2019 See the comment in schedule_block for the rationale. */
2020 if (!reload_completed
2021 && ready->n_ready - ready->n_debug > MAX_SCHED_READY_INSNS
2022 && !SCHED_GROUP_P (insn)
2023 && insn != skip_insn)
2025 if (sched_verbose >= 2)
2026 fprintf (sched_dump, "requeued because ready full\n");
2027 queue_insn (insn, 1);
2029 else
2031 ready_add (ready, insn, false);
2032 if (sched_verbose >= 2)
2033 fprintf (sched_dump, "moving to ready without stalls\n");
2036 free_INSN_LIST_list (&insn_queue[q_ptr]);
2038 /* If there are no ready insns, stall until one is ready and add all
2039 of the pending insns at that point to the ready list. */
2040 if (ready->n_ready == 0)
2042 int stalls;
2044 for (stalls = 1; stalls <= max_insn_queue_index; stalls++)
2046 if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
2048 for (; link; link = XEXP (link, 1))
2050 insn = XEXP (link, 0);
2051 q_size -= 1;
2053 if (sched_verbose >= 2)
2054 fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
2055 (*current_sched_info->print_insn) (insn, 0));
2057 ready_add (ready, insn, false);
2058 if (sched_verbose >= 2)
2059 fprintf (sched_dump, "moving to ready with %d stalls\n", stalls);
2061 free_INSN_LIST_list (&insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]);
2063 advance_one_cycle ();
2065 break;
2068 advance_one_cycle ();
2071 q_ptr = NEXT_Q_AFTER (q_ptr, stalls);
2072 clock_var += stalls;
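/* A minimal sketch of the circular bucket arithmetic used by Q_PTR,
   NEXT_Q and NEXT_Q_AFTER above: the insn queue is an array of
   max_insn_queue_index + 1 buckets, and an insn queued for STALLS cycles
   is placed STALLS buckets ahead of the current one, wrapping around.
   The helper name and the plain modulo below are illustrative only.  */
#if 0
static int
bucket_after (int cur_bucket, int stalls, int n_buckets)
{
  /* Wrap around the circular queue of N_BUCKETS entries.  */
  return (cur_bucket + stalls) % n_buckets;
}
#endif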
2076 /* Used by early_queue_to_ready. Determines whether it is "ok" to
2077 prematurely move INSN from the queue to the ready list. Currently,
2078 if a target defines the hook 'is_costly_dependence', this function
2079 uses the hook to check whether there exist any dependences which are
2080 considered costly by the target, between INSN and other insns that
2081 have already been scheduled. Dependences are checked up to Y cycles
2082 back, with default Y=1. The flag -fsched-stalled-insns-dep=Y allows
2083 controlling this value.
2084 (Other considerations could be taken into account instead (or in
2085 addition) depending on user flags and target hooks.) */
2087 static bool
2088 ok_for_early_queue_removal (rtx insn)
2090 int n_cycles;
2091 rtx prev_insn = last_scheduled_insn;
2093 if (targetm.sched.is_costly_dependence)
2095 for (n_cycles = flag_sched_stalled_insns_dep; n_cycles; n_cycles--)
2097 for ( ; prev_insn; prev_insn = PREV_INSN (prev_insn))
2099 int cost;
2101 if (prev_insn == current_sched_info->prev_head)
2103 prev_insn = NULL;
2104 break;
2107 if (!NOTE_P (prev_insn))
2109 dep_t dep;
2111 dep = sd_find_dep_between (prev_insn, insn, true);
2113 if (dep != NULL)
2115 cost = dep_cost (dep);
2117 if (targetm.sched.is_costly_dependence (dep, cost,
2118 flag_sched_stalled_insns_dep - n_cycles))
2119 return false;
2123 if (GET_MODE (prev_insn) == TImode) /* end of dispatch group */
2124 break;
2127 if (!prev_insn)
2128 break;
2129 prev_insn = PREV_INSN (prev_insn);
2133 return true;
2137 /* Remove insns from the queue before they become "ready" with respect
2138 to FU latency considerations. */
2140 static int
2141 early_queue_to_ready (state_t state, struct ready_list *ready)
2143 rtx insn;
2144 rtx link;
2145 rtx next_link;
2146 rtx prev_link;
2147 bool move_to_ready;
2148 int cost;
2149 state_t temp_state = alloca (dfa_state_size);
2150 int stalls;
2151 int insns_removed = 0;
2154 Flag '-fsched-stalled-insns=X' determines the aggressiveness of this
2155 function:
2157 X == 0: There is no limit on how many queued insns can be removed
2158 prematurely. (flag_sched_stalled_insns = -1).
2160 X >= 1: Only X queued insns can be removed prematurely in each
2161 invocation. (flag_sched_stalled_insns = X).
2163 Otherwise: Early queue removal is disabled.
2164 (flag_sched_stalled_insns = 0)
2167 if (! flag_sched_stalled_insns)
2168 return 0;
2170 for (stalls = 0; stalls <= max_insn_queue_index; stalls++)
2172 if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
2174 if (sched_verbose > 6)
2175 fprintf (sched_dump, ";; look at index %d + %d\n", q_ptr, stalls);
2177 prev_link = 0;
2178 while (link)
2180 next_link = XEXP (link, 1);
2181 insn = XEXP (link, 0);
2182 if (insn && sched_verbose > 6)
2183 print_rtl_single (sched_dump, insn);
2185 memcpy (temp_state, state, dfa_state_size);
2186 if (recog_memoized (insn) < 0)
2187 /* Use a non-negative cost so the insn is not moved prematurely,
2188 avoiding an infinite Q->R->Q->R... cycle. */
2189 cost = 0;
2190 else
2191 cost = state_transition (temp_state, insn);
2193 if (sched_verbose >= 6)
2194 fprintf (sched_dump, "transition cost = %d\n", cost);
2196 move_to_ready = false;
2197 if (cost < 0)
2199 move_to_ready = ok_for_early_queue_removal (insn);
2200 if (move_to_ready == true)
2202 /* move from Q to R */
2203 q_size -= 1;
2204 ready_add (ready, insn, false);
2206 if (prev_link)
2207 XEXP (prev_link, 1) = next_link;
2208 else
2209 insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = next_link;
2211 free_INSN_LIST_node (link);
2213 if (sched_verbose >= 2)
2214 fprintf (sched_dump, ";;\t\tEarly Q-->Ready: insn %s\n",
2215 (*current_sched_info->print_insn) (insn, 0));
2217 insns_removed++;
2218 if (insns_removed == flag_sched_stalled_insns)
2219 /* Remove no more than flag_sched_stalled_insns insns
2220 from Q at a time. */
2221 return insns_removed;
2225 if (move_to_ready == false)
2226 prev_link = link;
2228 link = next_link;
2229 } /* while link */
2230 } /* if link */
2232 } /* for stalls.. */
2234 return insns_removed;
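/* Worked example of the cap applied above: with -fsched-stalled-insns=2 at
   most two queued insns are moved to the ready list per invocation, because
   INSNS_REMOVED reaches flag_sched_stalled_insns after the second move; with
   -fsched-stalled-insns=0 the flag is stored as -1, so the equality test
   never fires and there is no per-invocation limit.  */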
2238 /* Print the ready list for debugging purposes. Callable from debugger. */
2240 static void
2241 debug_ready_list (struct ready_list *ready)
2243 rtx *p;
2244 int i;
2246 if (ready->n_ready == 0)
2248 fprintf (sched_dump, "\n");
2249 return;
2252 p = ready_lastpos (ready);
2253 for (i = 0; i < ready->n_ready; i++)
2255 fprintf (sched_dump, " %s:%d",
2256 (*current_sched_info->print_insn) (p[i], 0),
2257 INSN_LUID (p[i]));
2258 if (sched_pressure_p)
2259 fprintf (sched_dump, "(cost=%d",
2260 INSN_REG_PRESSURE_EXCESS_COST_CHANGE (p[i]));
2261 if (INSN_TICK (p[i]) > clock_var)
2262 fprintf (sched_dump, ":delay=%d", INSN_TICK (p[i]) - clock_var);
2263 if (sched_pressure_p)
2264 fprintf (sched_dump, ")");
2266 fprintf (sched_dump, "\n");
2269 /* Search INSN for REG_SAVE_NOTE notes and convert them back into insn
2270 NOTEs. This is used for NOTE_INSN_EPILOGUE_BEG, so that sched-ebb
2271 replaces the epilogue note in the correct basic block. */
2272 void
2273 reemit_notes (rtx insn)
2275 rtx note, last = insn;
2277 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
2279 if (REG_NOTE_KIND (note) == REG_SAVE_NOTE)
2281 enum insn_note note_type = (enum insn_note) INTVAL (XEXP (note, 0));
2283 last = emit_note_before (note_type, last);
2284 remove_note (insn, note);
2289 /* Move INSN. Reemit notes if needed. Update CFG, if needed. */
2290 static void
2291 move_insn (rtx insn, rtx last, rtx nt)
2293 if (PREV_INSN (insn) != last)
2295 basic_block bb;
2296 rtx note;
2297 int jump_p = 0;
2299 bb = BLOCK_FOR_INSN (insn);
2301 /* BB_HEAD is either LABEL or NOTE. */
2302 gcc_assert (BB_HEAD (bb) != insn);
2304 if (BB_END (bb) == insn)
2305 /* If this is last instruction in BB, move end marker one
2306 instruction up. */
2308 /* Jumps are always placed at the end of basic block. */
2309 jump_p = control_flow_insn_p (insn);
2311 gcc_assert (!jump_p
2312 || ((common_sched_info->sched_pass_id == SCHED_RGN_PASS)
2313 && IS_SPECULATION_BRANCHY_CHECK_P (insn))
2314 || (common_sched_info->sched_pass_id
2315 == SCHED_EBB_PASS));
2317 gcc_assert (BLOCK_FOR_INSN (PREV_INSN (insn)) == bb);
2319 BB_END (bb) = PREV_INSN (insn);
2322 gcc_assert (BB_END (bb) != last);
2324 if (jump_p)
2325 /* We move the block note along with jump. */
2327 gcc_assert (nt);
2329 note = NEXT_INSN (insn);
2330 while (NOTE_NOT_BB_P (note) && note != nt)
2331 note = NEXT_INSN (note);
2333 if (note != nt
2334 && (LABEL_P (note)
2335 || BARRIER_P (note)))
2336 note = NEXT_INSN (note);
2338 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
2340 else
2341 note = insn;
2343 NEXT_INSN (PREV_INSN (insn)) = NEXT_INSN (note);
2344 PREV_INSN (NEXT_INSN (note)) = PREV_INSN (insn);
2346 NEXT_INSN (note) = NEXT_INSN (last);
2347 PREV_INSN (NEXT_INSN (last)) = note;
2349 NEXT_INSN (last) = insn;
2350 PREV_INSN (insn) = last;
2352 bb = BLOCK_FOR_INSN (last);
2354 if (jump_p)
2356 fix_jump_move (insn);
2358 if (BLOCK_FOR_INSN (insn) != bb)
2359 move_block_after_check (insn);
2361 gcc_assert (BB_END (bb) == last);
2364 df_insn_change_bb (insn, bb);
2366 /* Update BB_END, if needed. */
2367 if (BB_END (bb) == last)
2368 BB_END (bb) = insn;
2371 SCHED_GROUP_P (insn) = 0;
2374 /* Return true if scheduling INSN will finish current clock cycle. */
2375 static bool
2376 insn_finishes_cycle_p (rtx insn)
2378 if (SCHED_GROUP_P (insn))
2379 /* After issuing INSN, the rest of the sched_group will be forced to issue
2380 in order. Don't make any plans for the rest of the cycle. */
2381 return true;
2383 /* Finishing the block will, apparently, finish the cycle. */
2384 if (current_sched_info->insn_finishes_block_p
2385 && current_sched_info->insn_finishes_block_p (insn))
2386 return true;
2388 return false;
2391 /* Define type for target data used in multipass scheduling. */
2392 #ifndef TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DATA_T
2393 # define TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DATA_T int
2394 #endif
2395 typedef TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DATA_T first_cycle_multipass_data_t;
2397 /* The following structure describes an entry of the stack of choices. */
2398 struct choice_entry
2400 /* Ordinal number of the issued insn in the ready queue. */
2401 int index;
2402 /* The number of remaining insns whose issue we should still try. */
2403 int rest;
2404 /* The number of issued essential insns. */
2405 int n;
2406 /* State after issuing the insn. */
2407 state_t state;
2408 /* Target-specific data. */
2409 first_cycle_multipass_data_t target_data;
2412 /* The following array is used to implement a stack of choices used in
2413 function max_issue. */
2414 static struct choice_entry *choice_stack;
2416 /* The following variable is the number of essential insns issued on
2417 the current cycle. An insn is essential if it changes the
2418 processor's state. */
2419 int cycle_issued_insns;
2421 /* This holds the value of the target dfa_lookahead hook. */
2422 int dfa_lookahead;
2424 /* The following variable is the maximal number of tries of issuing
2425 insns for the first-cycle multipass insn scheduling. We define
2426 this value as constant*(DFA_LOOKAHEAD**ISSUE_RATE). We would not
2427 need this constraint if all real insns (with non-negative codes)
2428 had reservations, because in this case the algorithm complexity is
2429 O(DFA_LOOKAHEAD**ISSUE_RATE). Unfortunately, the dfa descriptions
2430 might be incomplete and such insns might occur. For such
2431 descriptions, the complexity of the algorithm (without the constraint)
2432 could reach DFA_LOOKAHEAD ** N, where N is the queue length. */
2433 static int max_lookahead_tries;
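/* A worked sketch of the bound described above: MAX_LOOKAHEAD_TRIES is
   initialized in max_issue as 100 * DFA_LOOKAHEAD ** ISSUE_RATE.  For
   example, with dfa_lookahead == 4 and issue_rate == 2 the search is
   capped at 100 * 4 * 4 == 1600 tries.  The helper name below is
   illustrative only.  */
#if 0
static int
compute_max_lookahead_tries (int lookahead, int rate)
{
  int tries = 100;
  int i;

  for (i = 0; i < rate; i++)
    tries *= lookahead;
  return tries;
}
#endif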
2435 /* The following value is the value of the hook
2436 `first_cycle_multipass_dfa_lookahead' at the last call of
2437 `max_issue'. */
2438 static int cached_first_cycle_multipass_dfa_lookahead = 0;
2440 /* The following value is the value of `issue_rate' at the last call of
2441 `sched_init'. */
2442 static int cached_issue_rate = 0;
2444 /* The following function returns the maximal (or close to maximal) number
2445 of insns which can be issued on the same cycle, one of which is the
2446 insn with the best rank (the first insn in READY). To do this, the
2447 function tries different samples of ready insns. READY is the
2448 current queue `ready'. The global array READY_TRY reflects which
2449 insns are already issued in this try. The function stops immediately
2450 if it reaches a solution in which all instructions can be issued.
2451 INDEX will contain the index of the best insn in READY. This
2452 function is used only for first-cycle multipass scheduling.
2454 PRIVILEGED_N >= 0
2456 This function expects recognized insns only. All USEs,
2457 CLOBBERs, etc. must be filtered elsewhere. */
2459 max_issue (struct ready_list *ready, int privileged_n, state_t state,
2460 bool first_cycle_insn_p, int *index)
2462 int n, i, all, n_ready, best, delay, tries_num;
2463 int more_issue;
2464 struct choice_entry *top;
2465 rtx insn;
2467 n_ready = ready->n_ready;
2468 gcc_assert (dfa_lookahead >= 1 && privileged_n >= 0
2469 && privileged_n <= n_ready);
2471 /* Init MAX_LOOKAHEAD_TRIES. */
2472 if (cached_first_cycle_multipass_dfa_lookahead != dfa_lookahead)
2474 cached_first_cycle_multipass_dfa_lookahead = dfa_lookahead;
2475 max_lookahead_tries = 100;
2476 for (i = 0; i < issue_rate; i++)
2477 max_lookahead_tries *= dfa_lookahead;
2480 /* Init MORE_ISSUE. */
2481 more_issue = issue_rate - cycle_issued_insns;
2482 gcc_assert (more_issue >= 0);
2484 /* The number of the issued insns in the best solution. */
2485 best = 0;
2487 top = choice_stack;
2489 /* Set initial state of the search. */
2490 memcpy (top->state, state, dfa_state_size);
2491 top->rest = dfa_lookahead;
2492 top->n = 0;
2493 if (targetm.sched.first_cycle_multipass_begin)
2494 targetm.sched.first_cycle_multipass_begin (&top->target_data,
2495 ready_try, n_ready,
2496 first_cycle_insn_p);
2498 /* Count the number of the insns to search among. */
2499 for (all = i = 0; i < n_ready; i++)
2500 if (!ready_try [i])
2501 all++;
2503 /* I is the index of the insn to try next. */
2504 i = 0;
2505 tries_num = 0;
2506 for (;;)
2508 if (/* If we've reached a dead end or searched enough of what we have
2509 been asked... */
2510 top->rest == 0
2511 /* or have nothing else to try... */
2512 || i >= n_ready
2513 /* or should not issue more. */
2514 || top->n >= more_issue)
2516 /* ??? (... || i == n_ready). */
2517 gcc_assert (i <= n_ready);
2519 /* We should not issue more than issue_rate instructions. */
2520 gcc_assert (top->n <= more_issue);
2522 if (top == choice_stack)
2523 break;
2525 if (best < top - choice_stack)
2527 if (privileged_n)
2529 n = privileged_n;
2530 /* Try to find issued privileged insn. */
2531 while (n && !ready_try[--n]);
2534 if (/* If all insns are equally good... */
2535 privileged_n == 0
2536 /* Or a privileged insn will be issued. */
2537 || ready_try[n])
2538 /* Then we have a solution. */
2540 best = top - choice_stack;
2541 /* This is the index of the insn issued first in this
2542 solution. */
2543 *index = choice_stack [1].index;
2544 if (top->n == more_issue || best == all)
2545 break;
2549 /* Set ready-list index to point to the last insn
2550 ('i++' below will advance it to the next insn). */
2551 i = top->index;
2553 /* Backtrack. */
2554 ready_try [i] = 0;
2556 if (targetm.sched.first_cycle_multipass_backtrack)
2557 targetm.sched.first_cycle_multipass_backtrack (&top->target_data,
2558 ready_try, n_ready);
2560 top--;
2561 memcpy (state, top->state, dfa_state_size);
2563 else if (!ready_try [i])
2565 tries_num++;
2566 if (tries_num > max_lookahead_tries)
2567 break;
2568 insn = ready_element (ready, i);
2569 delay = state_transition (state, insn);
2570 if (delay < 0)
2572 if (state_dead_lock_p (state)
2573 || insn_finishes_cycle_p (insn))
2574 /* We won't issue any more instructions in the next
2575 choice_state. */
2576 top->rest = 0;
2577 else
2578 top->rest--;
2580 n = top->n;
2581 if (memcmp (top->state, state, dfa_state_size) != 0)
2582 n++;
2584 /* Advance to the next choice_entry. */
2585 top++;
2586 /* Initialize it. */
2587 top->rest = dfa_lookahead;
2588 top->index = i;
2589 top->n = n;
2590 memcpy (top->state, state, dfa_state_size);
2591 ready_try [i] = 1;
2593 if (targetm.sched.first_cycle_multipass_issue)
2594 targetm.sched.first_cycle_multipass_issue (&top->target_data,
2595 ready_try, n_ready,
2596 insn,
2597 &((top - 1)
2598 ->target_data));
2600 i = -1;
2604 /* Increase ready-list index. */
2605 i++;
2608 if (targetm.sched.first_cycle_multipass_end)
2609 targetm.sched.first_cycle_multipass_end (best != 0
2610 ? &choice_stack[1].target_data
2611 : NULL);
2613 /* Restore the original state of the DFA. */
2614 memcpy (state, choice_stack->state, dfa_state_size);
2616 return best;
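/* A much simplified sketch of the search max_issue performs above:
   depth-first, try each still-unused ready insn, recurse while it "fits",
   back out otherwise, and remember the deepest level reached.  The real
   code keeps an explicit choice_stack, threads DFA state, privileged
   insns and target hooks through the walk, and caps the work with
   MAX_LOOKAHEAD_TRIES; the FITS_P predicate and all names here are
   hypothetical.  */
#if 0
static int
best_issue_depth (int (*fits_p) (int insn_index, int depth),
                  int *used, int n_ready, int depth)
{
  int i, best = depth;

  for (i = 0; i < n_ready; i++)
    {
      int sub;

      if (used[i] || !fits_p (i, depth))
        continue;

      used[i] = 1;
      sub = best_issue_depth (fits_p, used, n_ready, depth + 1);
      used[i] = 0;  /* Backtrack.  */

      if (sub > best)
        best = sub;
    }

  return best;
}
#endif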
2619 /* The following function chooses an insn from READY and modifies
2620 READY. It is used only for first-cycle multipass
2621 scheduling.
2622 Return:
2623 -1 if cycle should be advanced,
2624 0 if INSN_PTR is set to point to the desirable insn,
2625 1 if choose_ready () should be restarted without advancing the cycle. */
2626 static int
2627 choose_ready (struct ready_list *ready, bool first_cycle_insn_p,
2628 rtx *insn_ptr)
2630 int lookahead;
2632 if (dbg_cnt (sched_insn) == false)
2634 rtx insn;
2636 insn = next_nonnote_insn (last_scheduled_insn);
2638 if (QUEUE_INDEX (insn) == QUEUE_READY)
2639 /* INSN is in the ready_list. */
2641 ready_remove_insn (insn);
2642 *insn_ptr = insn;
2643 return 0;
2646 /* INSN is in the queue. Advance cycle to move it to the ready list. */
2647 return -1;
2650 lookahead = 0;
2652 if (targetm.sched.first_cycle_multipass_dfa_lookahead)
2653 lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
2654 if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0))
2655 || DEBUG_INSN_P (ready_element (ready, 0)))
2657 if (targetm.sched.dispatch (NULL_RTX, IS_DISPATCH_ON))
2658 *insn_ptr = ready_remove_first_dispatch (ready);
2659 else
2660 *insn_ptr = ready_remove_first (ready);
2662 return 0;
2664 else
2666 /* Try to choose the better insn. */
2667 int index = 0, i, n;
2668 rtx insn;
2669 int try_data = 1, try_control = 1;
2670 ds_t ts;
2672 insn = ready_element (ready, 0);
2673 if (INSN_CODE (insn) < 0)
2675 *insn_ptr = ready_remove_first (ready);
2676 return 0;
2679 if (spec_info
2680 && spec_info->flags & (PREFER_NON_DATA_SPEC
2681 | PREFER_NON_CONTROL_SPEC))
2683 for (i = 0, n = ready->n_ready; i < n; i++)
2685 rtx x;
2686 ds_t s;
2688 x = ready_element (ready, i);
2689 s = TODO_SPEC (x);
2691 if (spec_info->flags & PREFER_NON_DATA_SPEC
2692 && !(s & DATA_SPEC))
2694 try_data = 0;
2695 if (!(spec_info->flags & PREFER_NON_CONTROL_SPEC)
2696 || !try_control)
2697 break;
2700 if (spec_info->flags & PREFER_NON_CONTROL_SPEC
2701 && !(s & CONTROL_SPEC))
2703 try_control = 0;
2704 if (!(spec_info->flags & PREFER_NON_DATA_SPEC) || !try_data)
2705 break;
2710 ts = TODO_SPEC (insn);
2711 if ((ts & SPECULATIVE)
2712 && (((!try_data && (ts & DATA_SPEC))
2713 || (!try_control && (ts & CONTROL_SPEC)))
2714 || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard_spec
2715 && !targetm.sched
2716 .first_cycle_multipass_dfa_lookahead_guard_spec (insn))))
2717 /* Discard speculative instruction that stands first in the ready
2718 list. */
2720 change_queue_index (insn, 1);
2721 return 1;
2724 ready_try[0] = 0;
2726 for (i = 1; i < ready->n_ready; i++)
2728 insn = ready_element (ready, i);
2730 ready_try [i]
2731 = ((!try_data && (TODO_SPEC (insn) & DATA_SPEC))
2732 || (!try_control && (TODO_SPEC (insn) & CONTROL_SPEC)));
2735 /* Let the target filter the search space. */
2736 for (i = 1; i < ready->n_ready; i++)
2737 if (!ready_try[i])
2739 insn = ready_element (ready, i);
2741 /* If this insn is recognizable we should have already
2742 recognized it earlier.
2743 ??? Not very clear where this is supposed to be done.
2744 See dep_cost_1. */
2745 gcc_checking_assert (INSN_CODE (insn) >= 0
2746 || recog_memoized (insn) < 0);
2748 ready_try [i]
2749 = (/* INSN_CODE check can be omitted here as it is also done later
2750 in max_issue (). */
2751 INSN_CODE (insn) < 0
2752 || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard
2753 && !targetm.sched.first_cycle_multipass_dfa_lookahead_guard
2754 (insn)));
2757 if (max_issue (ready, 1, curr_state, first_cycle_insn_p, &index) == 0)
2759 *insn_ptr = ready_remove_first (ready);
2760 if (sched_verbose >= 4)
2761 fprintf (sched_dump, ";;\t\tChosen insn (but can't issue) : %s \n",
2762 (*current_sched_info->print_insn) (*insn_ptr, 0));
2763 return 0;
2765 else
2767 if (sched_verbose >= 4)
2768 fprintf (sched_dump, ";;\t\tChosen insn : %s\n",
2769 (*current_sched_info->print_insn)
2770 (ready_element (ready, index), 0));
2772 *insn_ptr = ready_remove (ready, index);
2773 return 0;
2778 /* Use forward list scheduling to rearrange insns of block pointed to by
2779 TARGET_BB, possibly bringing insns from subsequent blocks in the same
2780 region. */
2782 void
2783 schedule_block (basic_block *target_bb)
2785 int i;
2786 bool first_cycle_insn_p;
2787 int can_issue_more;
2788 state_t temp_state = NULL; /* It is used for multipass scheduling. */
2789 int sort_p, advance, start_clock_var;
2791 /* Head/tail info for this block. */
2792 rtx prev_head = current_sched_info->prev_head;
2793 rtx next_tail = current_sched_info->next_tail;
2794 rtx head = NEXT_INSN (prev_head);
2795 rtx tail = PREV_INSN (next_tail);
2797 /* We used to have code to avoid getting parameters moved from hard
2798 argument registers into pseudos.
2800 However, it was removed when it proved to be of marginal benefit
2801 and caused problems because schedule_block and compute_forward_dependences
2802 had different notions of what the "head" insn was. */
2804 gcc_assert (head != tail || INSN_P (head));
2806 haifa_recovery_bb_recently_added_p = false;
2808 /* Debug info. */
2809 if (sched_verbose)
2810 dump_new_block_header (0, *target_bb, head, tail);
2812 state_reset (curr_state);
2814 /* Clear the ready list. */
2815 ready.first = ready.veclen - 1;
2816 ready.n_ready = 0;
2817 ready.n_debug = 0;
2819 /* It is used for first cycle multipass scheduling. */
2820 temp_state = alloca (dfa_state_size);
2822 if (targetm.sched.init)
2823 targetm.sched.init (sched_dump, sched_verbose, ready.veclen);
2825 /* We start inserting insns after PREV_HEAD. */
2826 last_scheduled_insn = prev_head;
2828 gcc_assert ((NOTE_P (last_scheduled_insn)
2829 || BOUNDARY_DEBUG_INSN_P (last_scheduled_insn))
2830 && BLOCK_FOR_INSN (last_scheduled_insn) == *target_bb);
2832 /* Initialize INSN_QUEUE. Q_SIZE is the total number of insns in the
2833 queue. */
2834 q_ptr = 0;
2835 q_size = 0;
2837 insn_queue = XALLOCAVEC (rtx, max_insn_queue_index + 1);
2838 memset (insn_queue, 0, (max_insn_queue_index + 1) * sizeof (rtx));
2840 /* Start just before the beginning of time. */
2841 clock_var = -1;
2843 /* The queue and ready lists and clock_var must already be initialized,
2844 since they are used in try_ready () (which is called through init_ready_list ()). */
2845 (*current_sched_info->init_ready_list) ();
2847 /* The algorithm is O(n^2) in the number of ready insns at any given
2848 time in the worst case. Before reload we are more likely to have
2849 big lists so truncate them to a reasonable size. */
2850 if (!reload_completed
2851 && ready.n_ready - ready.n_debug > MAX_SCHED_READY_INSNS)
2853 ready_sort (&ready);
2855 /* Find first free-standing insn past MAX_SCHED_READY_INSNS.
2856 If there are debug insns, we know they're first. */
2857 for (i = MAX_SCHED_READY_INSNS + ready.n_debug; i < ready.n_ready; i++)
2858 if (!SCHED_GROUP_P (ready_element (&ready, i)))
2859 break;
2861 if (sched_verbose >= 2)
2863 fprintf (sched_dump,
2864 ";;\t\tReady list on entry: %d insns\n", ready.n_ready);
2865 fprintf (sched_dump,
2866 ";;\t\t before reload => truncated to %d insns\n", i);
2869 /* Delay all insns past it for 1 cycle. If debug counter is
2870 activated make an exception for the insn right after
2871 last_scheduled_insn. */
2873 rtx skip_insn;
2875 if (dbg_cnt (sched_insn) == false)
2876 skip_insn = next_nonnote_insn (last_scheduled_insn);
2877 else
2878 skip_insn = NULL_RTX;
2880 while (i < ready.n_ready)
2882 rtx insn;
2884 insn = ready_remove (&ready, i);
2886 if (insn != skip_insn)
2887 queue_insn (insn, 1);
2892 /* Now we can restore basic block notes and maintain precise cfg. */
2893 restore_bb_notes (*target_bb);
2895 last_clock_var = -1;
2897 advance = 0;
2899 sort_p = TRUE;
2900 /* Loop until all the insns in BB are scheduled. */
2901 while ((*current_sched_info->schedule_more_p) ())
2905 start_clock_var = clock_var;
2907 clock_var++;
2909 advance_one_cycle ();
2911 /* Add to the ready list all pending insns that can be issued now.
2912 If there are no ready insns, increment clock until one
2913 is ready and add all pending insns at that point to the ready
2914 list. */
2915 queue_to_ready (&ready);
2917 gcc_assert (ready.n_ready);
2919 if (sched_verbose >= 2)
2921 fprintf (sched_dump, ";;\t\tReady list after queue_to_ready: ");
2922 debug_ready_list (&ready);
2924 advance -= clock_var - start_clock_var;
2926 while (advance > 0);
2928 if (sort_p)
2930 /* Sort the ready list based on priority. */
2931 ready_sort (&ready);
2933 if (sched_verbose >= 2)
2935 fprintf (sched_dump, ";;\t\tReady list after ready_sort: ");
2936 debug_ready_list (&ready);
2940 /* We don't want md sched reorder to even see debug insns, so put
2941 them out right away. */
2942 if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
2944 if (control_flow_insn_p (last_scheduled_insn))
2946 *target_bb = current_sched_info->advance_target_bb
2947 (*target_bb, 0);
2949 if (sched_verbose)
2951 rtx x;
2953 x = next_real_insn (last_scheduled_insn);
2954 gcc_assert (x);
2955 dump_new_block_header (1, *target_bb, x, tail);
2958 last_scheduled_insn = bb_note (*target_bb);
2961 while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
2963 rtx insn = ready_remove_first (&ready);
2964 gcc_assert (DEBUG_INSN_P (insn));
2965 (*current_sched_info->begin_schedule_ready) (insn,
2966 last_scheduled_insn);
2967 move_insn (insn, last_scheduled_insn,
2968 current_sched_info->next_tail);
2969 last_scheduled_insn = insn;
2970 advance = schedule_insn (insn);
2971 gcc_assert (advance == 0);
2972 if (ready.n_ready > 0)
2973 ready_sort (&ready);
2976 if (!ready.n_ready)
2977 continue;
2980 /* Allow the target to reorder the list, typically for
2981 better instruction bundling. */
2982 if (sort_p && targetm.sched.reorder
2983 && (ready.n_ready == 0
2984 || !SCHED_GROUP_P (ready_element (&ready, 0))))
2985 can_issue_more =
2986 targetm.sched.reorder (sched_dump, sched_verbose,
2987 ready_lastpos (&ready),
2988 &ready.n_ready, clock_var);
2989 else
2990 can_issue_more = issue_rate;
2992 first_cycle_insn_p = true;
2993 cycle_issued_insns = 0;
2994 for (;;)
2996 rtx insn;
2997 int cost;
2998 bool asm_p = false;
3000 if (sched_verbose >= 2)
3002 fprintf (sched_dump, ";;\tReady list (t = %3d): ",
3003 clock_var);
3004 debug_ready_list (&ready);
3005 if (sched_pressure_p)
3006 print_curr_reg_pressure ();
3009 if (ready.n_ready == 0
3010 && can_issue_more
3011 && reload_completed)
3013 /* Allow scheduling insns directly from the queue in case
3014 there's nothing better to do (ready list is empty) but
3015 there are still vacant dispatch slots in the current cycle. */
3016 if (sched_verbose >= 6)
3017 fprintf (sched_dump,";;\t\tSecond chance\n");
3018 memcpy (temp_state, curr_state, dfa_state_size);
3019 if (early_queue_to_ready (temp_state, &ready))
3020 ready_sort (&ready);
3023 if (ready.n_ready == 0
3024 || !can_issue_more
3025 || state_dead_lock_p (curr_state)
3026 || !(*current_sched_info->schedule_more_p) ())
3027 break;
3029 /* Select and remove the insn from the ready list. */
3030 if (sort_p)
3032 int res;
3034 insn = NULL_RTX;
3035 res = choose_ready (&ready, first_cycle_insn_p, &insn);
3037 if (res < 0)
3038 /* Finish cycle. */
3039 break;
3040 if (res > 0)
3041 /* Restart choose_ready (). */
3042 continue;
3044 gcc_assert (insn != NULL_RTX);
3046 else
3047 insn = ready_remove_first (&ready);
3049 if (sched_pressure_p && INSN_TICK (insn) > clock_var)
3051 ready_add (&ready, insn, true);
3052 advance = 1;
3053 break;
3056 if (targetm.sched.dfa_new_cycle
3057 && targetm.sched.dfa_new_cycle (sched_dump, sched_verbose,
3058 insn, last_clock_var,
3059 clock_var, &sort_p))
3060 /* SORT_P is used by the target to override sorting
3061 of the ready list. This is needed when the target
3062 has modified its internal structures expecting that
3063 the insn will be issued next. As we need the insn
3064 to have the highest priority (so it will be returned by
3065 the ready_remove_first call above), we invoke
3066 ready_add (&ready, insn, true).
3067 But, still, there is one issue: INSN can be later
3068 discarded by scheduler's front end through
3069 current_sched_info->can_schedule_ready_p, hence, won't
3070 be issued next. */
3072 ready_add (&ready, insn, true);
3073 break;
3076 sort_p = TRUE;
3077 memcpy (temp_state, curr_state, dfa_state_size);
3078 if (recog_memoized (insn) < 0)
3080 asm_p = (GET_CODE (PATTERN (insn)) == ASM_INPUT
3081 || asm_noperands (PATTERN (insn)) >= 0);
3082 if (!first_cycle_insn_p && asm_p)
3083 /* This is an asm insn that we are trying to issue on a cycle
3084 other than the first. Issue it on the next cycle. */
3085 cost = 1;
3086 else
3087 /* A USE insn, or something else we don't need to
3088 understand. We can't pass these directly to
3089 state_transition because it will trigger a
3090 fatal error for unrecognizable insns. */
3091 cost = 0;
3093 else if (sched_pressure_p)
3094 cost = 0;
3095 else
3097 cost = state_transition (temp_state, insn);
3098 if (cost < 0)
3099 cost = 0;
3100 else if (cost == 0)
3101 cost = 1;
3104 if (cost >= 1)
3106 queue_insn (insn, cost);
3107 if (SCHED_GROUP_P (insn))
3109 advance = cost;
3110 break;
3113 continue;
3116 if (current_sched_info->can_schedule_ready_p
3117 && ! (*current_sched_info->can_schedule_ready_p) (insn))
3118 /* We normally get here only if we don't want to move
3119 insn from the split block. */
3121 TODO_SPEC (insn) = (TODO_SPEC (insn) & ~SPECULATIVE) | HARD_DEP;
3122 continue;
3125 /* DECISION is made. */
3127 if (TODO_SPEC (insn) & SPECULATIVE)
3128 generate_recovery_code (insn);
3130 if (control_flow_insn_p (last_scheduled_insn)
3131 /* This is used to switch basic blocks on request
3132 from the scheduler front-end (actually, sched-ebb.c only),
3133 in order to process blocks with a single fallthru
3134 edge. If the succeeding block has a jump, that jump would
3135 try to move to the end of the current bb, corrupting the CFG. */
3136 || current_sched_info->advance_target_bb (*target_bb, insn))
3138 *target_bb = current_sched_info->advance_target_bb
3139 (*target_bb, 0);
3141 if (sched_verbose)
3143 rtx x;
3145 x = next_real_insn (last_scheduled_insn);
3146 gcc_assert (x);
3147 dump_new_block_header (1, *target_bb, x, tail);
3150 last_scheduled_insn = bb_note (*target_bb);
3153 /* Update counters, etc in the scheduler's front end. */
3154 (*current_sched_info->begin_schedule_ready) (insn,
3155 last_scheduled_insn);
3157 move_insn (insn, last_scheduled_insn, current_sched_info->next_tail);
3159 if (targetm.sched.dispatch (NULL_RTX, IS_DISPATCH_ON))
3160 targetm.sched.dispatch_do (insn, ADD_TO_DISPATCH_WINDOW);
3162 reemit_notes (insn);
3163 last_scheduled_insn = insn;
3165 if (memcmp (curr_state, temp_state, dfa_state_size) != 0)
3167 cycle_issued_insns++;
3168 memcpy (curr_state, temp_state, dfa_state_size);
3171 if (targetm.sched.variable_issue)
3172 can_issue_more =
3173 targetm.sched.variable_issue (sched_dump, sched_verbose,
3174 insn, can_issue_more);
3175 /* A naked CLOBBER or USE generates no instruction, so do
3176 not count them against the issue rate. */
3177 else if (GET_CODE (PATTERN (insn)) != USE
3178 && GET_CODE (PATTERN (insn)) != CLOBBER)
3179 can_issue_more--;
3180 advance = schedule_insn (insn);
3182 /* After issuing an asm insn we should start a new cycle. */
3183 if (advance == 0 && asm_p)
3184 advance = 1;
3185 if (advance != 0)
3186 break;
3188 first_cycle_insn_p = false;
3190 /* Sort the ready list based on priority. This must be
3191 redone here, as schedule_insn may have readied additional
3192 insns that will not be sorted correctly. */
3193 if (ready.n_ready > 0)
3194 ready_sort (&ready);
3196 /* Quickly go through debug insns such that md sched
3197 reorder2 doesn't have to deal with debug insns. */
3198 if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0))
3199 && (*current_sched_info->schedule_more_p) ())
3201 if (control_flow_insn_p (last_scheduled_insn))
3203 *target_bb = current_sched_info->advance_target_bb
3204 (*target_bb, 0);
3206 if (sched_verbose)
3208 rtx x;
3210 x = next_real_insn (last_scheduled_insn);
3211 gcc_assert (x);
3212 dump_new_block_header (1, *target_bb, x, tail);
3215 last_scheduled_insn = bb_note (*target_bb);
3218 while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
3220 insn = ready_remove_first (&ready);
3221 gcc_assert (DEBUG_INSN_P (insn));
3222 (*current_sched_info->begin_schedule_ready)
3223 (insn, last_scheduled_insn);
3224 move_insn (insn, last_scheduled_insn,
3225 current_sched_info->next_tail);
3226 advance = schedule_insn (insn);
3227 last_scheduled_insn = insn;
3228 gcc_assert (advance == 0);
3229 if (ready.n_ready > 0)
3230 ready_sort (&ready);
3234 if (targetm.sched.reorder2
3235 && (ready.n_ready == 0
3236 || !SCHED_GROUP_P (ready_element (&ready, 0))))
3238 can_issue_more =
3239 targetm.sched.reorder2 (sched_dump, sched_verbose,
3240 ready.n_ready
3241 ? ready_lastpos (&ready) : NULL,
3242 &ready.n_ready, clock_var);
3247 /* Debug info. */
3248 if (sched_verbose)
3250 fprintf (sched_dump, ";;\tReady list (final): ");
3251 debug_ready_list (&ready);
3254 if (current_sched_info->queue_must_finish_empty)
3255 /* Sanity check -- queue must be empty now. Meaningless if region has
3256 multiple bbs. */
3257 gcc_assert (!q_size && !ready.n_ready && !ready.n_debug);
3258 else
3260 /* We must maintain QUEUE_INDEX between blocks in region. */
3261 for (i = ready.n_ready - 1; i >= 0; i--)
3263 rtx x;
3265 x = ready_element (&ready, i);
3266 QUEUE_INDEX (x) = QUEUE_NOWHERE;
3267 TODO_SPEC (x) = (TODO_SPEC (x) & ~SPECULATIVE) | HARD_DEP;
3270 if (q_size)
3271 for (i = 0; i <= max_insn_queue_index; i++)
3273 rtx link;
3274 for (link = insn_queue[i]; link; link = XEXP (link, 1))
3276 rtx x;
3278 x = XEXP (link, 0);
3279 QUEUE_INDEX (x) = QUEUE_NOWHERE;
3280 TODO_SPEC (x) = (TODO_SPEC (x) & ~SPECULATIVE) | HARD_DEP;
3282 free_INSN_LIST_list (&insn_queue[i]);
3286 if (sched_verbose)
3287 fprintf (sched_dump, ";; total time = %d\n", clock_var);
3289 if (!current_sched_info->queue_must_finish_empty
3290 || haifa_recovery_bb_recently_added_p)
3292 /* INSN_TICK (the minimum clock tick at which the insn becomes
3293 ready) may not be correct for insns in the subsequent
3294 blocks of the region. We should use a correct value of
3295 `clock_var' or modify INSN_TICK. It is better to keep
3296 the clock_var value equal to 0 at the start of a basic block.
3297 Therefore we modify INSN_TICK here. */
3298 fix_inter_tick (NEXT_INSN (prev_head), last_scheduled_insn);
3301 if (targetm.sched.finish)
3303 targetm.sched.finish (sched_dump, sched_verbose);
3304 /* Target might have added some instructions to the scheduled block
3305 in its md_finish () hook. These new insns don't have any data
3306 initialized and to identify them we extend h_i_d so that they'll
3307 get zero luids. */
3308 sched_init_luids (NULL, NULL, NULL, NULL);
3311 if (sched_verbose)
3312 fprintf (sched_dump, ";; new head = %d\n;; new tail = %d\n\n",
3313 INSN_UID (head), INSN_UID (tail));
3315 /* Update head/tail boundaries. */
3316 head = NEXT_INSN (prev_head);
3317 tail = last_scheduled_insn;
3319 head = restore_other_notes (head, NULL);
3321 current_sched_info->head = head;
3322 current_sched_info->tail = tail;
3325 /* Set_priorities: compute priority of each insn in the block. */
3328 set_priorities (rtx head, rtx tail)
3330 rtx insn;
3331 int n_insn;
3332 int sched_max_insns_priority =
3333 current_sched_info->sched_max_insns_priority;
3334 rtx prev_head;
3336 if (head == tail && (! INSN_P (head) || BOUNDARY_DEBUG_INSN_P (head)))
3337 gcc_unreachable ();
3339 n_insn = 0;
3341 prev_head = PREV_INSN (head);
3342 for (insn = tail; insn != prev_head; insn = PREV_INSN (insn))
3344 if (!INSN_P (insn))
3345 continue;
3347 n_insn++;
3348 (void) priority (insn);
3350 gcc_assert (INSN_PRIORITY_KNOWN (insn));
3352 sched_max_insns_priority = MAX (sched_max_insns_priority,
3353 INSN_PRIORITY (insn));
3356 current_sched_info->sched_max_insns_priority = sched_max_insns_priority;
3358 return n_insn;
3361 /* Set dump and sched_verbose for the desired debugging output. If no
3362 dump-file was specified, but -fsched-verbose=N (any N), print to stderr.
3363 For -fsched-verbose=N, N>=10, print everything to stderr. */
3364 void
3365 setup_sched_dump (void)
3367 sched_verbose = sched_verbose_param;
3368 if (sched_verbose_param == 0 && dump_file)
3369 sched_verbose = 1;
3370 sched_dump = ((sched_verbose_param >= 10 || !dump_file)
3371 ? stderr : dump_file);
3374 /* Initialize some global state for the scheduler. This function works
3375 with the common data shared between all the schedulers. It is called
3376 from the scheduler specific initialization routine. */
3378 void
3379 sched_init (void)
3381 /* Disable speculative loads if cc0 is defined. */
3382 #ifdef HAVE_cc0
3383 flag_schedule_speculative_load = 0;
3384 #endif
3386 if (targetm.sched.dispatch (NULL_RTX, IS_DISPATCH_ON))
3387 targetm.sched.dispatch_do (NULL_RTX, DISPATCH_INIT);
3389 sched_pressure_p = (flag_sched_pressure && ! reload_completed
3390 && common_sched_info->sched_pass_id == SCHED_RGN_PASS);
3392 if (sched_pressure_p)
3393 ira_setup_eliminable_regset ();
3395 /* Initialize SPEC_INFO. */
3396 if (targetm.sched.set_sched_flags)
3398 spec_info = &spec_info_var;
3399 targetm.sched.set_sched_flags (spec_info);
3401 if (spec_info->mask != 0)
3403 spec_info->data_weakness_cutoff =
3404 (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF) * MAX_DEP_WEAK) / 100;
3405 spec_info->control_weakness_cutoff =
3406 (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF)
3407 * REG_BR_PROB_BASE) / 100;
3409 else
3410 /* So we won't read anything accidentally. */
3411 spec_info = NULL;
3414 else
3415 /* So we won't read anything accidentally. */
3416 spec_info = 0;
3418 /* Initialize issue_rate. */
3419 if (targetm.sched.issue_rate)
3420 issue_rate = targetm.sched.issue_rate ();
3421 else
3422 issue_rate = 1;
3424 if (cached_issue_rate != issue_rate)
3426 cached_issue_rate = issue_rate;
3427 /* To invalidate max_lookahead_tries: */
3428 cached_first_cycle_multipass_dfa_lookahead = 0;
3431 if (targetm.sched.first_cycle_multipass_dfa_lookahead)
3432 dfa_lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
3433 else
3434 dfa_lookahead = 0;
3436 if (targetm.sched.init_dfa_pre_cycle_insn)
3437 targetm.sched.init_dfa_pre_cycle_insn ();
3439 if (targetm.sched.init_dfa_post_cycle_insn)
3440 targetm.sched.init_dfa_post_cycle_insn ();
3442 dfa_start ();
3443 dfa_state_size = state_size ();
3445 init_alias_analysis ();
3447 df_set_flags (DF_LR_RUN_DCE);
3448 df_note_add_problem ();
3450 /* More problems needed for interloop dep calculation in SMS. */
3451 if (common_sched_info->sched_pass_id == SCHED_SMS_PASS)
3453 df_rd_add_problem ();
3454 df_chain_add_problem (DF_DU_CHAIN + DF_UD_CHAIN);
3457 df_analyze ();
3459 /* Do not run DCE after reload, as this can kill nops inserted
3460 by bundling. */
3461 if (reload_completed)
3462 df_clear_flags (DF_LR_RUN_DCE);
3464 regstat_compute_calls_crossed ();
3466 if (targetm.sched.init_global)
3467 targetm.sched.init_global (sched_dump, sched_verbose, get_max_uid () + 1);
3469 if (sched_pressure_p)
3471 int i, max_regno = max_reg_num ();
3473 ira_set_pseudo_classes (sched_verbose ? sched_dump : NULL);
3474 sched_regno_cover_class
3475 = (enum reg_class *) xmalloc (max_regno * sizeof (enum reg_class));
3476 for (i = 0; i < max_regno; i++)
3477 sched_regno_cover_class[i]
3478 = (i < FIRST_PSEUDO_REGISTER
3479 ? ira_class_translate[REGNO_REG_CLASS (i)]
3480 : reg_cover_class (i));
3481 curr_reg_live = BITMAP_ALLOC (NULL);
3482 saved_reg_live = BITMAP_ALLOC (NULL);
3483 region_ref_regs = BITMAP_ALLOC (NULL);
3486 curr_state = xmalloc (dfa_state_size);
3489 static void haifa_init_only_bb (basic_block, basic_block);
3491 /* Initialize data structures specific to the Haifa scheduler. */
3492 void
3493 haifa_sched_init (void)
3495 setup_sched_dump ();
3496 sched_init ();
3498 if (spec_info != NULL)
3500 sched_deps_info->use_deps_list = 1;
3501 sched_deps_info->generate_spec_deps = 1;
3504 /* Initialize luids, dependency caches, target and h_i_d for the
3505 whole function. */
3507 bb_vec_t bbs = VEC_alloc (basic_block, heap, n_basic_blocks);
3508 basic_block bb;
3510 sched_init_bbs ();
3512 FOR_EACH_BB (bb)
3513 VEC_quick_push (basic_block, bbs, bb);
3514 sched_init_luids (bbs, NULL, NULL, NULL);
3515 sched_deps_init (true);
3516 sched_extend_target ();
3517 haifa_init_h_i_d (bbs, NULL, NULL, NULL);
3519 VEC_free (basic_block, heap, bbs);
3522 sched_init_only_bb = haifa_init_only_bb;
3523 sched_split_block = sched_split_block_1;
3524 sched_create_empty_bb = sched_create_empty_bb_1;
3525 haifa_recovery_bb_ever_added_p = false;
3527 #ifdef ENABLE_CHECKING
3528 /* This is used mainly for finding bugs in check_cfg () itself.
3529 We must call sched_init_bbs () before check_cfg () because check_cfg ()
3530 assumes that the last insn in the last bb has a non-null successor. */
3531 check_cfg (0, 0);
3532 #endif
3534 nr_begin_data = nr_begin_control = nr_be_in_data = nr_be_in_control = 0;
3535 before_recovery = 0;
3536 after_recovery = 0;
3539 /* Finish work with the data specific to the Haifa scheduler. */
3540 void
3541 haifa_sched_finish (void)
3543 sched_create_empty_bb = NULL;
3544 sched_split_block = NULL;
3545 sched_init_only_bb = NULL;
3547 if (spec_info && spec_info->dump)
3549 char c = reload_completed ? 'a' : 'b';
3551 fprintf (spec_info->dump,
3552 ";; %s:\n", current_function_name ());
3554 fprintf (spec_info->dump,
3555 ";; Procedure %cr-begin-data-spec motions == %d\n",
3556 c, nr_begin_data);
3557 fprintf (spec_info->dump,
3558 ";; Procedure %cr-be-in-data-spec motions == %d\n",
3559 c, nr_be_in_data);
3560 fprintf (spec_info->dump,
3561 ";; Procedure %cr-begin-control-spec motions == %d\n",
3562 c, nr_begin_control);
3563 fprintf (spec_info->dump,
3564 ";; Procedure %cr-be-in-control-spec motions == %d\n",
3565 c, nr_be_in_control);
3568 /* Finalize h_i_d, dependency caches, and luids for the whole
3569 function. Target will be finalized in md_global_finish (). */
3570 sched_deps_finish ();
3571 sched_finish_luids ();
3572 current_sched_info = NULL;
3573 sched_finish ();
3576 /* Free global data used during insn scheduling. This function works with
3577 the common data shared between the schedulers. */
3579 void
3580 sched_finish (void)
3582 haifa_finish_h_i_d ();
3583 if (sched_pressure_p)
3585 free (sched_regno_cover_class);
3586 BITMAP_FREE (region_ref_regs);
3587 BITMAP_FREE (saved_reg_live);
3588 BITMAP_FREE (curr_reg_live);
3590 free (curr_state);
3592 if (targetm.sched.finish_global)
3593 targetm.sched.finish_global (sched_dump, sched_verbose);
3595 end_alias_analysis ();
3597 regstat_free_calls_crossed ();
3599 dfa_finish ();
3601 #ifdef ENABLE_CHECKING
3602 /* After reload the ia64 backend clobbers the CFG, so we can't check anything. */
3603 if (!reload_completed)
3604 check_cfg (0, 0);
3605 #endif
3608 /* Fix INSN_TICKs of the instructions in the current block as well as
3609 INSN_TICKs of their dependents.
3610 HEAD and TAIL are the begin and the end of the current scheduled block. */
3611 static void
3612 fix_inter_tick (rtx head, rtx tail)
3614 /* Set of instructions with corrected INSN_TICK. */
3615 bitmap_head processed;
3616 /* ??? It is doubtful whether we should assume that cycle advance happens on
3617 basic block boundaries. Basically, insns that are unconditionally ready
3618 at the start of the block are preferable to those which have
3619 a one-cycle dependency on an insn from the previous block. */
3620 int next_clock = clock_var + 1;
3622 bitmap_initialize (&processed, 0);
3624 /* Iterate over the scheduled instructions and fix their INSN_TICKs and
3625 the INSN_TICKs of dependent instructions, so that INSN_TICKs are consistent
3626 across different blocks. */
3627 for (tail = NEXT_INSN (tail); head != tail; head = NEXT_INSN (head))
3629 if (INSN_P (head))
3631 int tick;
3632 sd_iterator_def sd_it;
3633 dep_t dep;
3635 tick = INSN_TICK (head);
3636 gcc_assert (tick >= MIN_TICK);
3638 /* Fix INSN_TICK of instruction from just scheduled block. */
3639 if (bitmap_set_bit (&processed, INSN_LUID (head)))
3641 tick -= next_clock;
3643 if (tick < MIN_TICK)
3644 tick = MIN_TICK;
3646 INSN_TICK (head) = tick;
3649 FOR_EACH_DEP (head, SD_LIST_RES_FORW, sd_it, dep)
3651 rtx next;
3653 next = DEP_CON (dep);
3654 tick = INSN_TICK (next);
3656 if (tick != INVALID_TICK
3657 /* If NEXT has its INSN_TICK calculated, fix it.
3658 If not - it will be properly calculated from
3659 scratch later in fix_tick_ready. */
3660 && bitmap_set_bit (&processed, INSN_LUID (next)))
3662 tick -= next_clock;
3664 if (tick < MIN_TICK)
3665 tick = MIN_TICK;
3667 if (tick > INTER_TICK (next))
3668 INTER_TICK (next) = tick;
3669 else
3670 tick = INTER_TICK (next);
3672 INSN_TICK (next) = tick;
3677 bitmap_clear (&processed);
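/* Worked example of the rebasing above (numbers are illustrative): if the
   just-scheduled block finished with clock_var == 7, then next_clock == 8,
   so an insn with INSN_TICK 10 is rebased to 10 - 8 == 2, and anything that
   would drop below MIN_TICK is clamped to MIN_TICK.  This keeps the ticks
   meaningful when clock_var restarts from the beginning in the next block.  */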
3680 static int haifa_speculate_insn (rtx, ds_t, rtx *);
3682 /* Check if NEXT is ready to be added to the ready or queue list.
3683 If "yes", add it to the proper list.
3684 Returns:
3685 -1 - is not ready yet,
3686 0 - added to the ready list,
3687 0 < N - queued for N cycles. */
3689 try_ready (rtx next)
3691 ds_t old_ts, *ts;
3693 ts = &TODO_SPEC (next);
3694 old_ts = *ts;
3696 gcc_assert (!(old_ts & ~(SPECULATIVE | HARD_DEP))
3697 && ((old_ts & HARD_DEP)
3698 || (old_ts & SPECULATIVE)));
3700 if (sd_lists_empty_p (next, SD_LIST_BACK))
3701 /* NEXT has all its dependencies resolved. */
3703 /* Remove HARD_DEP bit from NEXT's status. */
3704 *ts &= ~HARD_DEP;
3706 if (current_sched_info->flags & DO_SPECULATION)
3707 /* Remove all speculative bits from NEXT's status. */
3708 *ts &= ~SPECULATIVE;
3710 else
3712 /* One of the NEXT's dependencies has been resolved.
3713 Recalculate NEXT's status. */
3715 *ts &= ~SPECULATIVE & ~HARD_DEP;
3717 if (sd_lists_empty_p (next, SD_LIST_HARD_BACK))
3718 /* Now we've got NEXT with speculative deps only.
3719 1. Look at the deps to see what we have to do.
3720 2. Check if we can do 'todo'. */
3722 sd_iterator_def sd_it;
3723 dep_t dep;
3724 bool first_p = true;
3726 FOR_EACH_DEP (next, SD_LIST_BACK, sd_it, dep)
3728 ds_t ds = DEP_STATUS (dep) & SPECULATIVE;
3730 if (DEBUG_INSN_P (DEP_PRO (dep))
3731 && !DEBUG_INSN_P (next))
3732 continue;
3734 if (first_p)
3736 first_p = false;
3738 *ts = ds;
3740 else
3741 *ts = ds_merge (*ts, ds);
3744 if (ds_weak (*ts) < spec_info->data_weakness_cutoff)
3745 /* Too few points. */
3746 *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
3748 else
3749 *ts |= HARD_DEP;
3752 if (*ts & HARD_DEP)
3753 gcc_assert (*ts == old_ts
3754 && QUEUE_INDEX (next) == QUEUE_NOWHERE);
3755 else if (current_sched_info->new_ready)
3756 *ts = current_sched_info->new_ready (next, *ts);
3758 /* * if !(old_ts & SPECULATIVE) (e.g. HARD_DEP or 0), then insn might
3759 have its original pattern or changed (speculative) one. This is due
3760 to changing ebb in region scheduling.
3761 * But if (old_ts & SPECULATIVE), then we are pretty sure that insn
3762 has speculative pattern.
3764 We can't assert (!(*ts & HARD_DEP) || *ts == old_ts) here because
3765 control-speculative NEXT could have been discarded by sched-rgn.c
3766 (the same case as when discarded by can_schedule_ready_p ()). */
3768 if ((*ts & SPECULATIVE)
3769 /* If (old_ts == *ts), then (old_ts & SPECULATIVE) and we don't
3770 need to change anything. */
3771 && *ts != old_ts)
3773 int res;
3774 rtx new_pat;
3776 gcc_assert ((*ts & SPECULATIVE) && !(*ts & ~SPECULATIVE));
3778 res = haifa_speculate_insn (next, *ts, &new_pat);
3780 switch (res)
3782 case -1:
3783 /* It would be nice to change DEP_STATUS of all dependences,
3784 which have ((DEP_STATUS & SPECULATIVE) == *ts) to HARD_DEP,
3785 so we won't reanalyze anything. */
3786 *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
3787 break;
3789 case 0:
3790 /* We follow the rule that every speculative insn
3791 has a non-null ORIG_PAT. */
3792 if (!ORIG_PAT (next))
3793 ORIG_PAT (next) = PATTERN (next);
3794 break;
3796 case 1:
3797 if (!ORIG_PAT (next))
3798 /* If we are going to overwrite the original pattern of the insn,
3799 save it. */
3800 ORIG_PAT (next) = PATTERN (next);
3802 haifa_change_pattern (next, new_pat);
3803 break;
3805 default:
3806 gcc_unreachable ();
3810 /* We need to restore pattern only if (*ts == 0), because otherwise it is
3811 either correct (*ts & SPECULATIVE),
3812 or we simply don't care (*ts & HARD_DEP). */
3814 gcc_assert (!ORIG_PAT (next)
3815 || !IS_SPECULATION_BRANCHY_CHECK_P (next));
3817 if (*ts & HARD_DEP)
3819 /* We can't assert (QUEUE_INDEX (next) == QUEUE_NOWHERE) here because
3820 control-speculative NEXT could have been discarded by sched-rgn.c
3821 (the same case as when discarded by can_schedule_ready_p ()). */
3822 /*gcc_assert (QUEUE_INDEX (next) == QUEUE_NOWHERE);*/
3824 change_queue_index (next, QUEUE_NOWHERE);
3825 return -1;
3827 else if (!(*ts & BEGIN_SPEC) && ORIG_PAT (next) && !IS_SPECULATION_CHECK_P (next))
3828 /* We should change the pattern of every previously speculative
3829 instruction - we determine whether NEXT was speculative by using
3830 the ORIG_PAT field. There is one exception: speculation checks also
3831 have ORIG_PAT, so skip them. */
3833 haifa_change_pattern (next, ORIG_PAT (next));
3834 ORIG_PAT (next) = 0;
3837 if (sched_verbose >= 2)
3839 int s = TODO_SPEC (next);
3841 fprintf (sched_dump, ";;\t\tdependencies resolved: insn %s",
3842 (*current_sched_info->print_insn) (next, 0));
3844 if (spec_info && spec_info->dump)
3846 if (s & BEGIN_DATA)
3847 fprintf (spec_info->dump, "; data-spec;");
3848 if (s & BEGIN_CONTROL)
3849 fprintf (spec_info->dump, "; control-spec;");
3850 if (s & BE_IN_CONTROL)
3851 fprintf (spec_info->dump, "; in-control-spec;");
3854 fprintf (sched_dump, "\n");
3857 adjust_priority (next);
3859 return fix_tick_ready (next);
3862 /* Calculate INSN_TICK of NEXT and add it to either ready or queue list. */
3863 static int
3864 fix_tick_ready (rtx next)
3866 int tick, delay;
3868 if (!sd_lists_empty_p (next, SD_LIST_RES_BACK))
3870 int full_p;
3871 sd_iterator_def sd_it;
3872 dep_t dep;
3874 tick = INSN_TICK (next);
3875 /* if tick is not equal to INVALID_TICK, then update
3876 INSN_TICK of NEXT with the most recent resolved dependence
3877 cost. Otherwise, recalculate from scratch. */
3878 full_p = (tick == INVALID_TICK);
3880 FOR_EACH_DEP (next, SD_LIST_RES_BACK, sd_it, dep)
3882 rtx pro = DEP_PRO (dep);
3883 int tick1;
3885 gcc_assert (INSN_TICK (pro) >= MIN_TICK);
3887 tick1 = INSN_TICK (pro) + dep_cost (dep);
3888 if (tick1 > tick)
3889 tick = tick1;
3891 if (!full_p)
3892 break;
3895 else
3896 tick = -1;
3898 INSN_TICK (next) = tick;
3900 delay = tick - clock_var;
3901 if (delay <= 0 || sched_pressure_p)
3902 delay = QUEUE_READY;
3904 change_queue_index (next, delay);
3906 return delay;
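/* Worked example of the computation above (numbers are illustrative): if
   NEXT has two resolved producers with INSN_TICK 3 and 5 and dep costs 2
   and 1 respectively, then INSN_TICK (NEXT) = MAX (3 + 2, 5 + 1) == 6;
   with clock_var == 4 this yields delay == 2, so NEXT is queued for two
   cycles instead of being added to the ready list.  */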
3909 /* Move NEXT to the proper queue list with (DELAY >= 1),
3910 or add it to the ready list (DELAY == QUEUE_READY),
3911 or remove it from the ready and queue lists altogether (DELAY == QUEUE_NOWHERE). */
3912 static void
3913 change_queue_index (rtx next, int delay)
3915 int i = QUEUE_INDEX (next);
3917 gcc_assert (QUEUE_NOWHERE <= delay && delay <= max_insn_queue_index
3918 && delay != 0);
3919 gcc_assert (i != QUEUE_SCHEDULED);
3921 if ((delay > 0 && NEXT_Q_AFTER (q_ptr, delay) == i)
3922 || (delay < 0 && delay == i))
3923 /* We have nothing to do. */
3924 return;
3926 /* Remove NEXT from wherever it is now. */
3927 if (i == QUEUE_READY)
3928 ready_remove_insn (next);
3929 else if (i >= 0)
3930 queue_remove (next);
3932 /* Add it to the proper place. */
3933 if (delay == QUEUE_READY)
3934 ready_add (readyp, next, false);
3935 else if (delay >= 1)
3936 queue_insn (next, delay);
3938 if (sched_verbose >= 2)
3940 fprintf (sched_dump, ";;\t\ttick updated: insn %s",
3941 (*current_sched_info->print_insn) (next, 0));
3943 if (delay == QUEUE_READY)
3944 fprintf (sched_dump, " into ready\n");
3945 else if (delay >= 1)
3946 fprintf (sched_dump, " into queue with cost=%d\n", delay);
3947 else
3948 fprintf (sched_dump, " removed from ready or queue lists\n");
3952 static int sched_ready_n_insns = -1;
3954 /* Initialize per region data structures. */
3955 void
3956 sched_extend_ready_list (int new_sched_ready_n_insns)
3958 int i;
3960 if (sched_ready_n_insns == -1)
3961 /* At the first call we need to initialize one more choice_stack
3962 entry. */
3964 i = 0;
3965 sched_ready_n_insns = 0;
3967 else
3968 i = sched_ready_n_insns + 1;
3970 ready.veclen = new_sched_ready_n_insns + issue_rate;
3971 ready.vec = XRESIZEVEC (rtx, ready.vec, ready.veclen);
3973 gcc_assert (new_sched_ready_n_insns >= sched_ready_n_insns);
3975 ready_try = (char *) xrecalloc (ready_try, new_sched_ready_n_insns,
3976 sched_ready_n_insns, sizeof (*ready_try));
3978 /* We allocate +1 element to save initial state in the choice_stack[0]
3979 entry. */
3980 choice_stack = XRESIZEVEC (struct choice_entry, choice_stack,
3981 new_sched_ready_n_insns + 1);
3983 for (; i <= new_sched_ready_n_insns; i++)
3985 choice_stack[i].state = xmalloc (dfa_state_size);
3987 if (targetm.sched.first_cycle_multipass_init)
3988 targetm.sched.first_cycle_multipass_init (&(choice_stack[i]
3989 .target_data));
3992 sched_ready_n_insns = new_sched_ready_n_insns;
3995 /* Free per region data structures. */
3996 void
3997 sched_finish_ready_list (void)
3999 int i;
4001 free (ready.vec);
4002 ready.vec = NULL;
4003 ready.veclen = 0;
4005 free (ready_try);
4006 ready_try = NULL;
4008 for (i = 0; i <= sched_ready_n_insns; i++)
4010 if (targetm.sched.first_cycle_multipass_fini)
4011 targetm.sched.first_cycle_multipass_fini (&(choice_stack[i]
4012 .target_data));
4014 free (choice_stack [i].state);
4016 free (choice_stack);
4017 choice_stack = NULL;
4019 sched_ready_n_insns = -1;
4022 static int
4023 haifa_luid_for_non_insn (rtx x)
4025 gcc_assert (NOTE_P (x) || LABEL_P (x));
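/* Notes and labels need no luid of their own; return 0 so that no new luid
is reserved for them (see luids_init_insn). */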
4027 return 0;
4030 /* Generates recovery code for INSN. */
4031 static void
4032 generate_recovery_code (rtx insn)
4034 if (TODO_SPEC (insn) & BEGIN_SPEC)
4035 begin_speculative_block (insn);
4037 /* Here we have an insn with no dependencies on
4038 instructions other than CHECK_SPEC ones. */
4040 if (TODO_SPEC (insn) & BE_IN_SPEC)
4041 add_to_speculative_block (insn);
4044 /* Helper function.
4045 Tries to add speculative dependencies of type FS between instructions
4046 in deps_list L and TWIN. */
4047 static void
4048 process_insn_forw_deps_be_in_spec (rtx insn, rtx twin, ds_t fs)
4050 sd_iterator_def sd_it;
4051 dep_t dep;
4053 FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
4055 ds_t ds;
4056 rtx consumer;
4058 consumer = DEP_CON (dep);
4060 ds = DEP_STATUS (dep);
4062 if (/* If we want to create speculative dep. */
4063 fs
4064 /* And we can do that because this is a true dep. */
4065 && (ds & DEP_TYPES) == DEP_TRUE)
4067 gcc_assert (!(ds & BE_IN_SPEC));
4069 if (/* If this dep can be overcome with 'begin speculation'. */
4070 ds & BEGIN_SPEC)
4071 /* Then we have a choice: keep the dep 'begin speculative'
4072 or transform it into 'be in speculative'. */
4074 if (/* In try_ready we assert that if insn once became ready
4075 it can be removed from the ready (or queue) list only
4076 due to backend decision. Hence we can't let the
4077 probability of the speculative dep decrease. */
4078 ds_weak (ds) <= ds_weak (fs))
4080 ds_t new_ds;
4082 new_ds = (ds & ~BEGIN_SPEC) | fs;
4084 if (/* consumer can 'be in speculative'. */
4085 sched_insn_is_legitimate_for_speculation_p (consumer,
4086 new_ds))
4087 /* Transform it to be in speculative. */
4088 ds = new_ds;
4091 else
4092 /* Mark the dep as 'be in speculative'. */
4093 ds |= fs;
4097 dep_def _new_dep, *new_dep = &_new_dep;
4099 init_dep_1 (new_dep, twin, consumer, DEP_TYPE (dep), ds);
4100 sd_add_dep (new_dep, false);
4105 /* Generates recovery code for BEGIN speculative INSN. */
4106 static void
4107 begin_speculative_block (rtx insn)
4109 if (TODO_SPEC (insn) & BEGIN_DATA)
4110 nr_begin_data++;
4111 if (TODO_SPEC (insn) & BEGIN_CONTROL)
4112 nr_begin_control++;
4114 create_check_block_twin (insn, false);
4116 TODO_SPEC (insn) &= ~BEGIN_SPEC;
4119 static void haifa_init_insn (rtx);
4121 /* Generates recovery code for BE_IN speculative INSN. */
4122 static void
4123 add_to_speculative_block (rtx insn)
4125 ds_t ts;
4126 sd_iterator_def sd_it;
4127 dep_t dep;
4128 rtx twins = NULL;
4129 rtx_vec_t priorities_roots;
4131 ts = TODO_SPEC (insn);
4132 gcc_assert (!(ts & ~BE_IN_SPEC));
4134 if (ts & BE_IN_DATA)
4135 nr_be_in_data++;
4136 if (ts & BE_IN_CONTROL)
4137 nr_be_in_control++;
4139 TODO_SPEC (insn) &= ~BE_IN_SPEC;
4140 gcc_assert (!TODO_SPEC (insn));
4142 DONE_SPEC (insn) |= ts;
4144 /* First we convert all simple checks to branchy ones. */
4145 for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4146 sd_iterator_cond (&sd_it, &dep);)
4148 rtx check = DEP_PRO (dep);
4150 if (IS_SPECULATION_SIMPLE_CHECK_P (check))
4152 create_check_block_twin (check, true);
4154 /* Restart search. */
4155 sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4157 else
4158 /* Continue search. */
4159 sd_iterator_next (&sd_it);
4162 priorities_roots = NULL;
4163 clear_priorities (insn, &priorities_roots);
4165 while (1)
4167 rtx check, twin;
4168 basic_block rec;
4170 /* Get the first backward dependency of INSN. */
4171 sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4172 if (!sd_iterator_cond (&sd_it, &dep))
4173 /* INSN has no backward dependencies left. */
4174 break;
4176 gcc_assert ((DEP_STATUS (dep) & BEGIN_SPEC) == 0
4177 && (DEP_STATUS (dep) & BE_IN_SPEC) != 0
4178 && (DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
4180 check = DEP_PRO (dep);
4182 gcc_assert (!IS_SPECULATION_CHECK_P (check) && !ORIG_PAT (check)
4183 && QUEUE_INDEX (check) == QUEUE_NOWHERE);
4185 rec = BLOCK_FOR_INSN (check);
4187 twin = emit_insn_before (copy_insn (PATTERN (insn)), BB_END (rec));
4188 haifa_init_insn (twin);
4190 sd_copy_back_deps (twin, insn, true);
4192 if (sched_verbose && spec_info->dump)
4193 /* INSN_BB (insn) isn't determined for twin insns yet.
4194 So we can't use current_sched_info->print_insn. */
4195 fprintf (spec_info->dump, ";;\t\tGenerated twin insn : %d/rec%d\n",
4196 INSN_UID (twin), rec->index);
4198 twins = alloc_INSN_LIST (twin, twins);
4200 /* Add dependences between TWIN and all appropriate
4201 instructions from REC. */
4202 FOR_EACH_DEP (insn, SD_LIST_SPEC_BACK, sd_it, dep)
4204 rtx pro = DEP_PRO (dep);
4206 gcc_assert (DEP_TYPE (dep) == REG_DEP_TRUE);
4208 /* INSN might have dependencies from the instructions from
4209 several recovery blocks. At this iteration we process those
4210 producers that reside in REC. */
4211 if (BLOCK_FOR_INSN (pro) == rec)
4213 dep_def _new_dep, *new_dep = &_new_dep;
4215 init_dep (new_dep, pro, twin, REG_DEP_TRUE);
4216 sd_add_dep (new_dep, false);
4220 process_insn_forw_deps_be_in_spec (insn, twin, ts);
4222 /* Remove all dependencies between INSN and insns in REC. */
4223 for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4224 sd_iterator_cond (&sd_it, &dep);)
4226 rtx pro = DEP_PRO (dep);
4228 if (BLOCK_FOR_INSN (pro) == rec)
4229 sd_delete_dep (sd_it);
4230 else
4231 sd_iterator_next (&sd_it);
4235 /* We couldn't have added the dependencies between INSN and TWINS earlier
4236 because that would make TWINS appear in the INSN_BACK_DEPS (INSN). */
4237 while (twins)
4239 rtx twin;
4241 twin = XEXP (twins, 0);
4244 dep_def _new_dep, *new_dep = &_new_dep;
4246 init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
4247 sd_add_dep (new_dep, false);
4250 twin = XEXP (twins, 1);
4251 free_INSN_LIST_node (twins);
4252 twins = twin;
4255 calc_priorities (priorities_roots);
4256 VEC_free (rtx, heap, priorities_roots);
4259 /* Extends the array pointed to by P and zero-fills only the new part. */
4260 void *
4261 xrecalloc (void *p, size_t new_nmemb, size_t old_nmemb, size_t size)
4263 gcc_assert (new_nmemb >= old_nmemb);
4264 p = XRESIZEVAR (void, p, new_nmemb * size);
4265 memset (((char *) p) + old_nmemb * size, 0, (new_nmemb - old_nmemb) * size);
4266 return p;
4269 /* Helper function.
4270 Find fallthru edge from PRED. */
4271 edge
4272 find_fallthru_edge_from (basic_block pred)
4274 edge e;
4275 basic_block succ;
4277 succ = pred->next_bb;
4278 gcc_assert (succ->prev_bb == pred);
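/* Look for the fallthru edge in whichever of the two edge lists is shorter. */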
4280 if (EDGE_COUNT (pred->succs) <= EDGE_COUNT (succ->preds))
4282 e = find_fallthru_edge (pred->succs);
4284 if (e)
4286 gcc_assert (e->dest == succ);
4287 return e;
4290 else
4292 e = find_fallthru_edge (succ->preds);
4294 if (e)
4296 gcc_assert (e->src == pred);
4297 return e;
4301 return NULL;
4304 /* Extend per basic block data structures. */
4305 static void
4306 sched_extend_bb (void)
4308 rtx insn;
4310 /* The following is done to keep current_sched_info->next_tail non null. */
4311 insn = BB_END (EXIT_BLOCK_PTR->prev_bb);
4312 if (NEXT_INSN (insn) == 0
4313 || (!NOTE_P (insn)
4314 && !LABEL_P (insn)
4315 /* Don't emit a NOTE if it would end up before a BARRIER. */
4316 && !BARRIER_P (NEXT_INSN (insn))))
4318 rtx note = emit_note_after (NOTE_INSN_DELETED, insn);
4319 /* Make the note appear outside any basic block. */
4320 set_block_for_insn (note, NULL);
4321 BB_END (EXIT_BLOCK_PTR->prev_bb) = insn;
4325 /* Init per basic block data structures. */
4326 void
4327 sched_init_bbs (void)
4329 sched_extend_bb ();
4332 /* Initialize BEFORE_RECOVERY variable. */
4333 static void
4334 init_before_recovery (basic_block *before_recovery_ptr)
4336 basic_block last;
4337 edge e;
4339 last = EXIT_BLOCK_PTR->prev_bb;
4340 e = find_fallthru_edge_from (last);
4342 if (e)
4344 /* We create two basic blocks:
4345 1. Single instruction block is inserted right after E->SRC
4346 and has jump to
4347 2. Empty block right before EXIT_BLOCK.
4348 Between these two blocks recovery blocks will be emitted. */
4350 basic_block single, empty;
4351 rtx x, label;
4353 /* If the fallthrough edge to exit we've found is from the block we've
4354 created before, don't do anything more. */
4355 if (last == after_recovery)
4356 return;
4358 adding_bb_to_current_region_p = false;
4360 single = sched_create_empty_bb (last);
4361 empty = sched_create_empty_bb (single);
4363 /* Add new blocks to the root loop. */
4364 if (current_loops != NULL)
4366 add_bb_to_loop (single, VEC_index (loop_p, current_loops->larray, 0));
4367 add_bb_to_loop (empty, VEC_index (loop_p, current_loops->larray, 0));
4370 single->count = last->count;
4371 empty->count = last->count;
4372 single->frequency = last->frequency;
4373 empty->frequency = last->frequency;
4374 BB_COPY_PARTITION (single, last);
4375 BB_COPY_PARTITION (empty, last);
4377 redirect_edge_succ (e, single);
4378 make_single_succ_edge (single, empty, 0);
4379 make_single_succ_edge (empty, EXIT_BLOCK_PTR,
4380 EDGE_FALLTHRU | EDGE_CAN_FALLTHRU);
4382 label = block_label (empty);
4383 x = emit_jump_insn_after (gen_jump (label), BB_END (single));
4384 JUMP_LABEL (x) = label;
4385 LABEL_NUSES (label)++;
4386 haifa_init_insn (x);
4388 emit_barrier_after (x);
4390 sched_init_only_bb (empty, NULL);
4391 sched_init_only_bb (single, NULL);
4392 sched_extend_bb ();
4394 adding_bb_to_current_region_p = true;
4395 before_recovery = single;
4396 after_recovery = empty;
4398 if (before_recovery_ptr)
4399 *before_recovery_ptr = before_recovery;
4401 if (sched_verbose >= 2 && spec_info->dump)
4402 fprintf (spec_info->dump,
4403 ";;\t\tFixed fallthru to EXIT : %d->>%d->%d->>EXIT\n",
4404 last->index, single->index, empty->index);
4406 else
4407 before_recovery = last;
4410 /* Returns new recovery block. */
4411 basic_block
4412 sched_create_recovery_block (basic_block *before_recovery_ptr)
4414 rtx label;
4415 rtx barrier;
4416 basic_block rec;
4418 haifa_recovery_bb_recently_added_p = true;
4419 haifa_recovery_bb_ever_added_p = true;
4421 init_before_recovery (before_recovery_ptr);
4423 barrier = get_last_bb_insn (before_recovery);
4424 gcc_assert (BARRIER_P (barrier));
4426 label = emit_label_after (gen_label_rtx (), barrier);
4428 rec = create_basic_block (label, label, before_recovery);
4430 /* A recovery block always ends with an unconditional jump. */
4431 emit_barrier_after (BB_END (rec));
4433 if (BB_PARTITION (before_recovery) != BB_UNPARTITIONED)
4434 BB_SET_PARTITION (rec, BB_COLD_PARTITION);
4436 if (sched_verbose && spec_info->dump)
4437 fprintf (spec_info->dump, ";;\t\tGenerated recovery block rec%d\n",
4438 rec->index);
4440 return rec;
4443 /* Create edges: FIRST_BB -> REC; FIRST_BB -> SECOND_BB; REC -> SECOND_BB
4444 and emit necessary jumps. */
4445 void
4446 sched_create_recovery_edges (basic_block first_bb, basic_block rec,
4447 basic_block second_bb)
4449 rtx label;
4450 rtx jump;
4451 int edge_flags;
4453 /* Fix the incoming edge. */
4454 /* ??? Which other flags should be specified? */
4455 if (BB_PARTITION (first_bb) != BB_PARTITION (rec))
4456 /* Partition type is the same, if it is "unpartitioned". */
4457 edge_flags = EDGE_CROSSING;
4458 else
4459 edge_flags = 0;
4461 make_edge (first_bb, rec, edge_flags);
4462 label = block_label (second_bb);
4463 jump = emit_jump_insn_after (gen_jump (label), BB_END (rec));
4464 JUMP_LABEL (jump) = label;
4465 LABEL_NUSES (label)++;
4467 if (BB_PARTITION (second_bb) != BB_PARTITION (rec))
4468 /* Partition type is the same, if it is "unpartitioned". */
4470 /* Rewritten from cfgrtl.c. */
4471 if (flag_reorder_blocks_and_partition
4472 && targetm.have_named_sections)
4474 /* We don't need the same note for the check because
4475 any_condjump_p (check) == true. */
4476 add_reg_note (jump, REG_CROSSING_JUMP, NULL_RTX);
4478 edge_flags = EDGE_CROSSING;
4480 else
4481 edge_flags = 0;
4483 make_single_succ_edge (rec, second_bb, edge_flags);
4486 /* This function creates recovery code for INSN. If MUTATE_P is nonzero,
4487 INSN is a simple check that should be converted to a branchy one. */
4488 static void
4489 create_check_block_twin (rtx insn, bool mutate_p)
4491 basic_block rec;
4492 rtx label, check, twin;
4493 ds_t fs;
4494 sd_iterator_def sd_it;
4495 dep_t dep;
4496 dep_def _new_dep, *new_dep = &_new_dep;
4497 ds_t todo_spec;
4499 gcc_assert (ORIG_PAT (insn) != NULL_RTX);
4501 if (!mutate_p)
4502 todo_spec = TODO_SPEC (insn);
4503 else
4505 gcc_assert (IS_SPECULATION_SIMPLE_CHECK_P (insn)
4506 && (TODO_SPEC (insn) & SPECULATIVE) == 0);
4508 todo_spec = CHECK_SPEC (insn);
4511 todo_spec &= SPECULATIVE;
4513 /* Create recovery block. */
4514 if (mutate_p || targetm.sched.needs_block_p (todo_spec))
4516 rec = sched_create_recovery_block (NULL);
4517 label = BB_HEAD (rec);
4519 else
4521 rec = EXIT_BLOCK_PTR;
4522 label = NULL_RTX;
4525 /* Emit CHECK. */
4526 check = targetm.sched.gen_spec_check (insn, label, todo_spec);
4528 if (rec != EXIT_BLOCK_PTR)
4530 /* To have mem_reg alive at the beginning of second_bb,
4531 we emit the check BEFORE insn, so that after splitting,
4532 insn will be at the beginning of second_bb, which will
4533 provide us with the correct life information. */
4534 check = emit_jump_insn_before (check, insn);
4535 JUMP_LABEL (check) = label;
4536 LABEL_NUSES (label)++;
4538 else
4539 check = emit_insn_before (check, insn);
4541 /* Extend data structures. */
4542 haifa_init_insn (check);
4544 /* CHECK is being added to current region. Extend ready list. */
4545 gcc_assert (sched_ready_n_insns != -1);
4546 sched_extend_ready_list (sched_ready_n_insns + 1);
4548 if (current_sched_info->add_remove_insn)
4549 current_sched_info->add_remove_insn (insn, 0);
4551 RECOVERY_BLOCK (check) = rec;
4553 if (sched_verbose && spec_info->dump)
4554 fprintf (spec_info->dump, ";;\t\tGenerated check insn : %s\n",
4555 (*current_sched_info->print_insn) (check, 0));
4557 gcc_assert (ORIG_PAT (insn));
4559 /* Initialize TWIN (twin is a duplicate of original instruction
4560 in the recovery block). */
4561 if (rec != EXIT_BLOCK_PTR)
4563 sd_iterator_def sd_it;
4564 dep_t dep;
4566 FOR_EACH_DEP (insn, SD_LIST_RES_BACK, sd_it, dep)
4567 if ((DEP_STATUS (dep) & DEP_OUTPUT) != 0)
4569 struct _dep _dep2, *dep2 = &_dep2;
4571 init_dep (dep2, DEP_PRO (dep), check, REG_DEP_TRUE);
4573 sd_add_dep (dep2, true);
4576 twin = emit_insn_after (ORIG_PAT (insn), BB_END (rec));
4577 haifa_init_insn (twin);
4579 if (sched_verbose && spec_info->dump)
4580 /* INSN_BB (insn) isn't determined for twin insns yet.
4581 So we can't use current_sched_info->print_insn. */
4582 fprintf (spec_info->dump, ";;\t\tGenerated twin insn : %d/rec%d\n",
4583 INSN_UID (twin), rec->index);
4585 else
4587 ORIG_PAT (check) = ORIG_PAT (insn);
4588 HAS_INTERNAL_DEP (check) = 1;
4589 twin = check;
4590 /* ??? We probably should change all OUTPUT dependencies to
4591 (TRUE | OUTPUT). */
4594 /* Copy all resolved back dependencies of INSN to TWIN. This will
4595 provide correct value for INSN_TICK (TWIN). */
4596 sd_copy_back_deps (twin, insn, true);
4598 if (rec != EXIT_BLOCK_PTR)
4599 /* In case of branchy check, fix CFG. */
4601 basic_block first_bb, second_bb;
4602 rtx jump;
4604 first_bb = BLOCK_FOR_INSN (check);
4605 second_bb = sched_split_block (first_bb, check);
4607 sched_create_recovery_edges (first_bb, rec, second_bb);
4609 sched_init_only_bb (second_bb, first_bb);
4610 sched_init_only_bb (rec, EXIT_BLOCK_PTR);
4612 jump = BB_END (rec);
4613 haifa_init_insn (jump);
4616 /* Move backward dependences from INSN to CHECK and
4617 move forward dependences from INSN to TWIN. */
4619 /* First, create dependencies between INSN's producers and CHECK & TWIN. */
4620 FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
4622 rtx pro = DEP_PRO (dep);
4623 ds_t ds;
4625 /* If BEGIN_DATA: [insn ~~TRUE~~> producer]:
4626 check --TRUE--> producer ??? or ANTI ???
4627 twin --TRUE--> producer
4628 twin --ANTI--> check
4630 If BEGIN_CONTROL: [insn ~~ANTI~~> producer]:
4631 check --ANTI--> producer
4632 twin --ANTI--> producer
4633 twin --ANTI--> check
4635 If BE_IN_SPEC: [insn ~~TRUE~~> producer]:
4636 check ~~TRUE~~> producer
4637 twin ~~TRUE~~> producer
4638 twin --ANTI--> check */
4640 ds = DEP_STATUS (dep);
4642 if (ds & BEGIN_SPEC)
4644 gcc_assert (!mutate_p);
4645 ds &= ~BEGIN_SPEC;
4648 init_dep_1 (new_dep, pro, check, DEP_TYPE (dep), ds);
4649 sd_add_dep (new_dep, false);
4651 if (rec != EXIT_BLOCK_PTR)
4653 DEP_CON (new_dep) = twin;
4654 sd_add_dep (new_dep, false);
4658 /* Second, remove backward dependencies of INSN. */
4659 for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4660 sd_iterator_cond (&sd_it, &dep);)
4662 if ((DEP_STATUS (dep) & BEGIN_SPEC)
4663 || mutate_p)
4664 /* We can delete this dep because we overcome it with
4665 BEGIN_SPECULATION. */
4666 sd_delete_dep (sd_it);
4667 else
4668 sd_iterator_next (&sd_it);
4671 /* Future Speculations. Determine what BE_IN speculations will be like. */
4672 fs = 0;
4674 /* Fields (DONE_SPEC (x) & BEGIN_SPEC) and CHECK_SPEC (x) are set only
4675 here. */
4677 gcc_assert (!DONE_SPEC (insn));
4679 if (!mutate_p)
4681 ds_t ts = TODO_SPEC (insn);
4683 DONE_SPEC (insn) = ts & BEGIN_SPEC;
4684 CHECK_SPEC (check) = ts & BEGIN_SPEC;
4686 /* Luckiness of future speculations solely depends upon initial
4687 BEGIN speculation. */
4688 if (ts & BEGIN_DATA)
4689 fs = set_dep_weak (fs, BE_IN_DATA, get_dep_weak (ts, BEGIN_DATA));
4690 if (ts & BEGIN_CONTROL)
4691 fs = set_dep_weak (fs, BE_IN_CONTROL,
4692 get_dep_weak (ts, BEGIN_CONTROL));
4694 else
4695 CHECK_SPEC (check) = CHECK_SPEC (insn);
4697 /* Future speculations: call the helper. */
4698 process_insn_forw_deps_be_in_spec (insn, twin, fs);
4700 if (rec != EXIT_BLOCK_PTR)
4702 /* Which types of dependencies we should use here is,
4703 generally, a machine-dependent question... But, for now,
4704 it is not. */
4706 if (!mutate_p)
4708 init_dep (new_dep, insn, check, REG_DEP_TRUE);
4709 sd_add_dep (new_dep, false);
4711 init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
4712 sd_add_dep (new_dep, false);
4714 else
4716 if (spec_info->dump)
4717 fprintf (spec_info->dump, ";;\t\tRemoved simple check : %s\n",
4718 (*current_sched_info->print_insn) (insn, 0));
4720 /* Remove all dependencies of the INSN. */
4722 sd_it = sd_iterator_start (insn, (SD_LIST_FORW
4723 | SD_LIST_BACK
4724 | SD_LIST_RES_BACK));
4725 while (sd_iterator_cond (&sd_it, &dep))
4726 sd_delete_dep (sd_it);
4729 /* If former check (INSN) already was moved to the ready (or queue)
4730 list, add new check (CHECK) there too. */
4731 if (QUEUE_INDEX (insn) != QUEUE_NOWHERE)
4732 try_ready (check);
4734 /* Remove old check from instruction stream and free its
4735 data. */
4736 sched_remove_insn (insn);
4739 init_dep (new_dep, check, twin, REG_DEP_ANTI);
4740 sd_add_dep (new_dep, false);
4742 else
4744 init_dep_1 (new_dep, insn, check, REG_DEP_TRUE, DEP_TRUE | DEP_OUTPUT);
4745 sd_add_dep (new_dep, false);
4748 if (!mutate_p)
4749 /* Fix priorities. If MUTATE_P is nonzero, this is not necessary,
4750 because it'll be done later in add_to_speculative_block. */
4752 rtx_vec_t priorities_roots = NULL;
4754 clear_priorities (twin, &priorities_roots);
4755 calc_priorities (priorities_roots);
4756 VEC_free (rtx, heap, priorities_roots);
4760 /* Removes dependencies between instructions in the recovery block REC
4761 and the usual region instructions. It keeps inner dependences so it
4762 won't be necessary to recompute them. */
4763 static void
4764 fix_recovery_deps (basic_block rec)
4766 rtx note, insn, jump, ready_list = 0;
4767 bitmap_head in_ready;
4768 rtx link;
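/* IN_READY marks consumers already collected on READY_LIST, so each one is
passed to try_ready below only once. */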
4770 bitmap_initialize (&in_ready, 0);
4772 /* NOTE - a basic block note. */
4773 note = NEXT_INSN (BB_HEAD (rec));
4774 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4775 insn = BB_END (rec);
4776 gcc_assert (JUMP_P (insn));
4777 insn = PREV_INSN (insn);
4781 sd_iterator_def sd_it;
4782 dep_t dep;
4784 for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
4785 sd_iterator_cond (&sd_it, &dep);)
4787 rtx consumer = DEP_CON (dep);
4789 if (BLOCK_FOR_INSN (consumer) != rec)
4791 sd_delete_dep (sd_it);
4793 if (bitmap_set_bit (&in_ready, INSN_LUID (consumer)))
4794 ready_list = alloc_INSN_LIST (consumer, ready_list);
4796 else
4798 gcc_assert ((DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
4800 sd_iterator_next (&sd_it);
4804 insn = PREV_INSN (insn);
4806 while (insn != note);
4808 bitmap_clear (&in_ready);
4810 /* Try to add instructions to the ready or queue list. */
4811 for (link = ready_list; link; link = XEXP (link, 1))
4812 try_ready (XEXP (link, 0));
4813 free_INSN_LIST_list (&ready_list);
4815 /* Fixing jump's dependences. */
4816 insn = BB_HEAD (rec);
4817 jump = BB_END (rec);
4819 gcc_assert (LABEL_P (insn));
4820 insn = NEXT_INSN (insn);
4822 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (insn));
4823 add_jump_dependencies (insn, jump);
4826 /* Change pattern of INSN to NEW_PAT. */
4827 void
4828 sched_change_pattern (rtx insn, rtx new_pat)
4830 int t;
4832 t = validate_change (insn, &PATTERN (insn), new_pat, 0);
4833 gcc_assert (t);
4834 dfa_clear_single_insn_cache (insn);
4837 /* Change pattern of INSN to NEW_PAT. Invalidate cached haifa
4838 instruction data. */
4839 static void
4840 haifa_change_pattern (rtx insn, rtx new_pat)
4842 sched_change_pattern (insn, new_pat);
4844 /* Invalidate INSN_COST, so it'll be recalculated. */
4845 INSN_COST (insn) = -1;
4846 /* Invalidate INSN_TICK, so it'll be recalculated. */
4847 INSN_TICK (insn) = INVALID_TICK;
4850 /* Return values: -1 - can't speculate,
4851 0 - for speculation with REQUEST mode it is OK to use
4852 the current instruction pattern,
4853 1 - the pattern needs to be changed; *NEW_PAT holds the speculative form. */
4854 int
4855 sched_speculate_insn (rtx insn, ds_t request, rtx *new_pat)
4857 gcc_assert (current_sched_info->flags & DO_SPECULATION
4858 && (request & SPECULATIVE)
4859 && sched_insn_is_legitimate_for_speculation_p (insn, request));
4861 if ((request & spec_info->mask) != request)
4862 return -1;
4864 if (request & BE_IN_SPEC
4865 && !(request & BEGIN_SPEC))
4866 return 0;
4868 return targetm.sched.speculate_insn (insn, request, new_pat);
4871 static int
4872 haifa_speculate_insn (rtx insn, ds_t request, rtx *new_pat)
4874 gcc_assert (sched_deps_info->generate_spec_deps
4875 && !IS_SPECULATION_CHECK_P (insn));
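/* Never speculate an insn that carries an internal dependence or belongs
to a scheduling group. */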
4877 if (HAS_INTERNAL_DEP (insn)
4878 || SCHED_GROUP_P (insn))
4879 return -1;
4881 return sched_speculate_insn (insn, request, new_pat);
4884 /* Print some information about block BB, which starts with HEAD and
4885 ends with TAIL, before scheduling it.
4886 I is zero if the scheduler is about to start with a fresh ebb. */
4887 static void
4888 dump_new_block_header (int i, basic_block bb, rtx head, rtx tail)
4890 if (!i)
4891 fprintf (sched_dump,
4892 ";; ======================================================\n");
4893 else
4894 fprintf (sched_dump,
4895 ";; =====================ADVANCING TO=====================\n");
4896 fprintf (sched_dump,
4897 ";; -- basic block %d from %d to %d -- %s reload\n",
4898 bb->index, INSN_UID (head), INSN_UID (tail),
4899 (reload_completed ? "after" : "before"));
4900 fprintf (sched_dump,
4901 ";; ======================================================\n");
4902 fprintf (sched_dump, "\n");
4905 /* Unlink basic block notes and labels and save them, so they
4906 can be easily restored. We unlink basic block notes in EBBs to
4907 provide backward compatibility with the previous code, as target backends
4908 assume that there'll be only instructions between
4909 current_sched_info->{head and tail}. We restore these notes as soon
4910 as we can.
4911 FIRST (LAST) is the first (last) basic block in the ebb.
4912 NB: In usual case (FIRST == LAST) nothing is really done. */
4913 void
4914 unlink_bb_notes (basic_block first, basic_block last)
4916 /* We DON'T unlink basic block notes of the first block in the ebb. */
4917 if (first == last)
4918 return;
4920 bb_header = XNEWVEC (rtx, last_basic_block);
4922 /* Make a sentinel. */
4923 if (last->next_bb != EXIT_BLOCK_PTR)
4924 bb_header[last->next_bb->index] = 0;
4926 first = first->next_bb;
4929 rtx prev, label, note, next;
4931 label = BB_HEAD (last);
4932 if (LABEL_P (label))
4933 note = NEXT_INSN (label);
4934 else
4935 note = label;
4936 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4938 prev = PREV_INSN (label);
4939 next = NEXT_INSN (note);
4940 gcc_assert (prev && next);
4942 NEXT_INSN (prev) = next;
4943 PREV_INSN (next) = prev;
4945 bb_header[last->index] = label;
4947 if (last == first)
4948 break;
4950 last = last->prev_bb;
4952 while (1);
4955 /* Restore basic block notes.
4956 FIRST is the first basic block in the ebb. */
4957 static void
4958 restore_bb_notes (basic_block first)
4960 if (!bb_header)
4961 return;
4963 /* We DON'T unlink basic block notes of the first block in the ebb. */
4964 first = first->next_bb;
4965 /* Remember: FIRST is actually the second basic block in the ebb. */
4967 while (first != EXIT_BLOCK_PTR
4968 && bb_header[first->index])
4970 rtx prev, label, note, next;
4972 label = bb_header[first->index];
4973 prev = PREV_INSN (label);
4974 next = NEXT_INSN (prev);
4976 if (LABEL_P (label))
4977 note = NEXT_INSN (label);
4978 else
4979 note = label;
4980 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4982 bb_header[first->index] = 0;
4984 NEXT_INSN (prev) = label;
4985 NEXT_INSN (note) = next;
4986 PREV_INSN (next) = note;
4988 first = first->next_bb;
4991 free (bb_header);
4992 bb_header = 0;
4995 /* Helper function.
4996 Fix CFG after both in- and inter-block movement of
4997 control_flow_insn_p JUMP. */
4998 static void
4999 fix_jump_move (rtx jump)
5001 basic_block bb, jump_bb, jump_bb_next;
5003 bb = BLOCK_FOR_INSN (PREV_INSN (jump));
5004 jump_bb = BLOCK_FOR_INSN (jump);
5005 jump_bb_next = jump_bb->next_bb;
5007 gcc_assert (common_sched_info->sched_pass_id == SCHED_EBB_PASS
5008 || IS_SPECULATION_BRANCHY_CHECK_P (jump));
5010 if (!NOTE_INSN_BASIC_BLOCK_P (BB_END (jump_bb_next)))
5011 /* if jump_bb_next is not empty. */
5012 BB_END (jump_bb) = BB_END (jump_bb_next);
5014 if (BB_END (bb) != PREV_INSN (jump))
5015 /* Then there are instructions after jump that should be placed
5016 in jump_bb_next. */
5017 BB_END (jump_bb_next) = BB_END (bb);
5018 else
5019 /* Otherwise jump_bb_next is empty. */
5020 BB_END (jump_bb_next) = NEXT_INSN (BB_HEAD (jump_bb_next));
5022 /* To make assertion in move_insn happy. */
5023 BB_END (bb) = PREV_INSN (jump);
5025 update_bb_for_insn (jump_bb_next);
5028 /* Fix CFG after interblock movement of control_flow_insn_p JUMP. */
5029 static void
5030 move_block_after_check (rtx jump)
5032 basic_block bb, jump_bb, jump_bb_next;
5033 VEC(edge,gc) *t;
5035 bb = BLOCK_FOR_INSN (PREV_INSN (jump));
5036 jump_bb = BLOCK_FOR_INSN (jump);
5037 jump_bb_next = jump_bb->next_bb;
5039 update_bb_for_insn (jump_bb);
5041 gcc_assert (IS_SPECULATION_CHECK_P (jump)
5042 || IS_SPECULATION_CHECK_P (BB_END (jump_bb_next)));
5044 unlink_block (jump_bb_next);
5045 link_block (jump_bb_next, bb);
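/* Rotate the successor lists: BB takes JUMP_BB's successors, JUMP_BB takes
JUMP_BB_NEXT's, and JUMP_BB_NEXT takes BB's original ones. */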
5047 t = bb->succs;
5048 bb->succs = 0;
5049 move_succs (&(jump_bb->succs), bb);
5050 move_succs (&(jump_bb_next->succs), jump_bb);
5051 move_succs (&t, jump_bb_next);
5053 df_mark_solutions_dirty ();
5055 common_sched_info->fix_recovery_cfg
5056 (bb->index, jump_bb->index, jump_bb_next->index);
5059 /* Helper function for move_block_after_check.
5060 This function attaches the edge vector pointed to by SUCCSP to
5061 block TO. */
5062 static void
5063 move_succs (VEC(edge,gc) **succsp, basic_block to)
5065 edge e;
5066 edge_iterator ei;
5068 gcc_assert (to->succs == 0);
5070 to->succs = *succsp;
5072 FOR_EACH_EDGE (e, ei, to->succs)
5073 e->src = to;
5075 *succsp = 0;
5078 /* Remove INSN from the instruction stream.
5079 INSN should not have any dependencies. */
5080 static void
5081 sched_remove_insn (rtx insn)
5083 sd_finish_insn (insn);
5085 change_queue_index (insn, QUEUE_NOWHERE);
5086 current_sched_info->add_remove_insn (insn, 1);
5087 remove_insn (insn);
5090 /* Clear priorities of all instructions that are forward dependent on INSN.
5091 Store in the vector pointed to by ROOTS_PTR the insns on which priority ()
5092 should be invoked to initialize all cleared priorities. */
5093 static void
5094 clear_priorities (rtx insn, rtx_vec_t *roots_ptr)
5096 sd_iterator_def sd_it;
5097 dep_t dep;
5098 bool insn_is_root_p = true;
5100 gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
5102 FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
5104 rtx pro = DEP_PRO (dep);
5106 if (INSN_PRIORITY_STATUS (pro) >= 0
5107 && QUEUE_INDEX (insn) != QUEUE_SCHEDULED)
5109 /* If DEP doesn't contribute to priority then INSN itself should
5110 be added to priority roots. */
5111 if (contributes_to_priority_p (dep))
5112 insn_is_root_p = false;
5114 INSN_PRIORITY_STATUS (pro) = -1;
5115 clear_priorities (pro, roots_ptr);
5119 if (insn_is_root_p)
5120 VEC_safe_push (rtx, heap, *roots_ptr, insn);
5123 /* Recompute priorities of instructions whose priorities might have been
5124 changed. ROOTS is a vector of instructions whose priority computation will
5125 trigger initialization of all cleared priorities. */
5126 static void
5127 calc_priorities (rtx_vec_t roots)
5129 int i;
5130 rtx insn;
5132 FOR_EACH_VEC_ELT (rtx, roots, i, insn)
5133 priority (insn);
5137 /* Add dependences between JUMP and other instructions in the recovery
5138 block. INSN is the first insn in the recovery block. */
5139 static void
5140 add_jump_dependencies (rtx insn, rtx jump)
5144 insn = NEXT_INSN (insn);
5145 if (insn == jump)
5146 break;
5148 if (dep_list_size (insn) == 0)
5150 dep_def _new_dep, *new_dep = &_new_dep;
5152 init_dep (new_dep, insn, jump, REG_DEP_ANTI);
5153 sd_add_dep (new_dep, false);
5156 while (1);
5158 gcc_assert (!sd_lists_empty_p (jump, SD_LIST_BACK));
5161 /* Return the NOTE_INSN_BASIC_BLOCK of BB. */
5162 rtx
5163 bb_note (basic_block bb)
5165 rtx note;
5167 note = BB_HEAD (bb);
5168 if (LABEL_P (note))
5169 note = NEXT_INSN (note);
5171 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
5172 return note;
5175 #ifdef ENABLE_CHECKING
5176 /* Helper function for check_cfg.
5177 Return nonzero if the edge vector pointed to by EL has an edge with TYPE in
5178 its flags. */
5179 static int
5180 has_edge_p (VEC(edge,gc) *el, int type)
5182 edge e;
5183 edge_iterator ei;
5185 FOR_EACH_EDGE (e, ei, el)
5186 if (e->flags & type)
5187 return 1;
5188 return 0;
5191 /* Search back, starting at INSN, for an insn that is not a
5192 NOTE_INSN_VAR_LOCATION. Don't search beyond HEAD, and return it if
5193 no such insn can be found. */
5194 static inline rtx
5195 prev_non_location_insn (rtx insn, rtx head)
5197 while (insn != head && NOTE_P (insn)
5198 && NOTE_KIND (insn) == NOTE_INSN_VAR_LOCATION)
5199 insn = PREV_INSN (insn);
5201 return insn;
5204 /* Check a few properties of the CFG between HEAD and TAIL.
5205 If HEAD (TAIL) is NULL check from the beginning (till the end) of the
5206 instruction stream. */
5207 static void
5208 check_cfg (rtx head, rtx tail)
5210 rtx next_tail;
5211 basic_block bb = 0;
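/* BB is the basic block currently being walked, or 0 when we are between
blocks. */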
5212 int not_first = 0, not_last;
5214 if (head == NULL)
5215 head = get_insns ();
5216 if (tail == NULL)
5217 tail = get_last_insn ();
5218 next_tail = NEXT_INSN (tail);
5222 not_last = head != tail;
5224 if (not_first)
5225 gcc_assert (NEXT_INSN (PREV_INSN (head)) == head);
5226 if (not_last)
5227 gcc_assert (PREV_INSN (NEXT_INSN (head)) == head);
5229 if (LABEL_P (head)
5230 || (NOTE_INSN_BASIC_BLOCK_P (head)
5231 && (!not_first
5232 || (not_first && !LABEL_P (PREV_INSN (head))))))
5234 gcc_assert (bb == 0);
5235 bb = BLOCK_FOR_INSN (head);
5236 if (bb != 0)
5237 gcc_assert (BB_HEAD (bb) == head);
5238 else
5239 /* This is the case of jump table. See inside_basic_block_p (). */
5240 gcc_assert (LABEL_P (head) && !inside_basic_block_p (head));
5243 if (bb == 0)
5245 gcc_assert (!inside_basic_block_p (head));
5246 head = NEXT_INSN (head);
5248 else
5250 gcc_assert (inside_basic_block_p (head)
5251 || NOTE_P (head));
5252 gcc_assert (BLOCK_FOR_INSN (head) == bb);
5254 if (LABEL_P (head))
5256 head = NEXT_INSN (head);
5257 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (head));
5259 else
5261 if (control_flow_insn_p (head))
5263 gcc_assert (prev_non_location_insn (BB_END (bb), head)
5264 == head);
5266 if (any_uncondjump_p (head))
5267 gcc_assert (EDGE_COUNT (bb->succs) == 1
5268 && BARRIER_P (NEXT_INSN (head)));
5269 else if (any_condjump_p (head))
5270 gcc_assert (/* Usual case. */
5271 (EDGE_COUNT (bb->succs) > 1
5272 && !BARRIER_P (NEXT_INSN (head)))
5273 /* Or jump to the next instruction. */
5274 || (EDGE_COUNT (bb->succs) == 1
5275 && (BB_HEAD (EDGE_I (bb->succs, 0)->dest)
5276 == JUMP_LABEL (head))));
5278 if (BB_END (bb) == head)
5280 if (EDGE_COUNT (bb->succs) > 1)
5281 gcc_assert (control_flow_insn_p (prev_non_location_insn
5282 (head, BB_HEAD (bb)))
5283 || has_edge_p (bb->succs, EDGE_COMPLEX));
5284 bb = 0;
5287 head = NEXT_INSN (head);
5291 not_first = 1;
5293 while (head != next_tail);
5295 gcc_assert (bb == 0);
5298 #endif /* ENABLE_CHECKING */
5300 /* Extend per basic block data structures. */
5301 static void
5302 extend_bb (void)
5304 if (sched_scan_info->extend_bb)
5305 sched_scan_info->extend_bb ();
5308 /* Init data for BB. */
5309 static void
5310 init_bb (basic_block bb)
5312 if (sched_scan_info->init_bb)
5313 sched_scan_info->init_bb (bb);
5316 /* Extend per insn data structures. */
5317 static void
5318 extend_insn (void)
5320 if (sched_scan_info->extend_insn)
5321 sched_scan_info->extend_insn ();
5324 /* Init data structures for INSN. */
5325 static void
5326 init_insn (rtx insn)
5328 if (sched_scan_info->init_insn)
5329 sched_scan_info->init_insn (insn);
5332 /* Init all insns in BB. */
5333 static void
5334 init_insns_in_bb (basic_block bb)
5336 rtx insn;
5338 FOR_BB_INSNS (bb, insn)
5339 init_insn (insn);
5342 /* A driver function to add a set of basic blocks (BBS),
5343 a single basic block (BB), a set of insns (INSNS) or a single insn (INSN)
5344 to the scheduling region. */
5345 void
5346 sched_scan (const struct sched_scan_info_def *ssi,
5347 bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
5349 sched_scan_info = ssi;
5351 if (bbs != NULL || bb != NULL)
5353 extend_bb ();
5355 if (bbs != NULL)
5357 unsigned i;
5358 basic_block x;
5360 FOR_EACH_VEC_ELT (basic_block, bbs, i, x)
5361 init_bb (x);
5364 if (bb != NULL)
5365 init_bb (bb);
5368 extend_insn ();
5370 if (bbs != NULL)
5372 unsigned i;
5373 basic_block x;
5375 FOR_EACH_VEC_ELT (basic_block, bbs, i, x)
5376 init_insns_in_bb (x);
5379 if (bb != NULL)
5380 init_insns_in_bb (bb);
5382 if (insns != NULL)
5384 unsigned i;
5385 rtx x;
5387 FOR_EACH_VEC_ELT (rtx, insns, i, x)
5388 init_insn (x);
5391 if (insn != NULL)
5392 init_insn (insn);
5396 /* Extend data structures for logical insn UID. */
5397 static void
5398 luids_extend_insn (void)
5400 int new_luids_max_uid = get_max_uid () + 1;
5402 VEC_safe_grow_cleared (int, heap, sched_luids, new_luids_max_uid);
5405 /* Initialize LUID for INSN. */
5406 static void
5407 luids_init_insn (rtx insn)
5409 int i = INSN_P (insn) ? 1 : common_sched_info->luid_for_non_insn (insn);
5410 int luid;
5412 if (i >= 0)
5414 luid = sched_max_luid;
5415 sched_max_luid += i;
5417 else
5418 luid = -1;
5420 SET_INSN_LUID (insn, luid);
5423 /* Initialize luids for BBS, BB, INSNS and INSN.
5424 The hook common_sched_info->luid_for_non_insn () is used to determine
5425 if notes, labels, etc. need luids. */
5426 void
5427 sched_init_luids (bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
5429 const struct sched_scan_info_def ssi =
5431 NULL, /* extend_bb */
5432 NULL, /* init_bb */
5433 luids_extend_insn, /* extend_insn */
5434 luids_init_insn /* init_insn */
5437 sched_scan (&ssi, bbs, bb, insns, insn);
5440 /* Free LUIDs. */
5441 void
5442 sched_finish_luids (void)
5444 VEC_free (int, heap, sched_luids);
5445 sched_max_luid = 1;
5448 /* Return logical uid of INSN. Helpful while debugging. */
5449 int
5450 insn_luid (rtx insn)
5452 return INSN_LUID (insn);
5455 /* Extend per insn data in the target. */
5456 void
5457 sched_extend_target (void)
5459 if (targetm.sched.h_i_d_extended)
5460 targetm.sched.h_i_d_extended ();
5463 /* Extend global scheduler structures (those that live across calls to
5464 schedule_block) to include information about just emitted INSN. */
5465 static void
5466 extend_h_i_d (void)
5468 int reserve = (get_max_uid () + 1
5469 - VEC_length (haifa_insn_data_def, h_i_d));
5470 if (reserve > 0
5471 && ! VEC_space (haifa_insn_data_def, h_i_d, reserve))
5473 VEC_safe_grow_cleared (haifa_insn_data_def, heap, h_i_d,
5474 3 * get_max_uid () / 2);
5475 sched_extend_target ();
5479 /* Initialize h_i_d entry of the INSN with default values.
5480 Values that are not explicitly initialized here hold zero. */
5481 static void
5482 init_h_i_d (rtx insn)
5484 if (INSN_LUID (insn) > 0)
5486 INSN_COST (insn) = -1;
5487 QUEUE_INDEX (insn) = QUEUE_NOWHERE;
5488 INSN_TICK (insn) = INVALID_TICK;
5489 INTER_TICK (insn) = INVALID_TICK;
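/* Conservatively mark a fresh insn as carrying a hard dependence; try_ready
will recompute TODO_SPEC from its real dependencies. */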
5490 TODO_SPEC (insn) = HARD_DEP;
5494 /* Initialize haifa_insn_data for BBS, BB, INSNS and INSN. */
5495 void
5496 haifa_init_h_i_d (bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
5498 const struct sched_scan_info_def ssi =
5500 NULL, /* extend_bb */
5501 NULL, /* init_bb */
5502 extend_h_i_d, /* extend_insn */
5503 init_h_i_d /* init_insn */
5506 sched_scan (&ssi, bbs, bb, insns, insn);
5509 /* Finalize haifa_insn_data. */
5510 void
5511 haifa_finish_h_i_d (void)
5513 int i;
5514 haifa_insn_data_t data;
5515 struct reg_use_data *use, *next;
5517 FOR_EACH_VEC_ELT (haifa_insn_data_def, h_i_d, i, data)
5519 if (data->reg_pressure != NULL)
5520 free (data->reg_pressure);
5521 for (use = data->reg_use_list; use != NULL; use = next)
5523 next = use->next_insn_use;
5524 free (use);
5527 VEC_free (haifa_insn_data_def, heap, h_i_d);
5530 /* Init data for the new insn INSN. */
5531 static void
5532 haifa_init_insn (rtx insn)
5534 gcc_assert (insn != NULL);
5536 sched_init_luids (NULL, NULL, NULL, insn);
5537 sched_extend_target ();
5538 sched_deps_init (false);
5539 haifa_init_h_i_d (NULL, NULL, NULL, insn);
5541 if (adding_bb_to_current_region_p)
5543 sd_init_insn (insn);
5545 /* Extend dependency caches by one element. */
5546 extend_dependency_caches (1, false);
5550 /* Init data for the new basic block BB which comes after AFTER. */
5551 static void
5552 haifa_init_only_bb (basic_block bb, basic_block after)
5554 gcc_assert (bb != NULL);
5556 sched_init_bbs ();
5558 if (common_sched_info->add_block)
5559 /* This changes only data structures of the front-end. */
5560 common_sched_info->add_block (bb, after);
5563 /* A generic version of sched_split_block (). */
5564 basic_block
5565 sched_split_block_1 (basic_block first_bb, rtx after)
5567 edge e;
5569 e = split_block (first_bb, after);
5570 gcc_assert (e->src == first_bb);
5572 /* sched_split_block emits note if *check == BB_END. Probably it
5573 is better to rip that note off. */
5575 return e->dest;
5578 /* A generic version of sched_create_empty_bb (). */
5579 basic_block
5580 sched_create_empty_bb_1 (basic_block after)
5582 return create_empty_bb (after);
5585 /* Insert PAT as an INSN into the schedule and update the necessary data
5586 structures to account for it. */
5587 rtx
5588 sched_emit_insn (rtx pat)
5590 rtx insn = emit_insn_after (pat, last_scheduled_insn);
5591 last_scheduled_insn = insn;
5592 haifa_init_insn (insn);
5593 return insn;
5596 /* This function returns a candidate satisfying dispatch constraints from
5597 the ready list. */
5599 static rtx
5600 ready_remove_first_dispatch (struct ready_list *ready)
5602 int i;
5603 rtx insn = ready_element (ready, 0);
5605 if (ready->n_ready == 1
5606 || INSN_CODE (insn) < 0
5607 || !INSN_P (insn)
5608 || !active_insn_p (insn)
5609 || targetm.sched.dispatch (insn, FITS_DISPATCH_WINDOW))
5610 return ready_remove_first (ready);
5612 for (i = 1; i < ready->n_ready; i++)
5614 insn = ready_element (ready, i);
5616 if (INSN_CODE (insn) < 0
5617 || !INSN_P (insn)
5618 || !active_insn_p (insn))
5619 continue;
5621 if (targetm.sched.dispatch (insn, FITS_DISPATCH_WINDOW))
5623 /* Return ith element of ready. */
5624 insn = ready_remove (ready, i);
5625 return insn;
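/* No ready insn fits the dispatch window. */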
5629 if (targetm.sched.dispatch (NULL_RTX, DISPATCH_VIOLATION))
5630 return ready_remove_first (ready);
5632 for (i = 1; i < ready->n_ready; i++)
5634 insn = ready_element (ready, i);
5636 if (INSN_CODE (insn) < 0
5637 || !INSN_P (insn)
5638 || !active_insn_p (insn))
5639 continue;
5641 /* Return i-th element of ready. */
5642 if (targetm.sched.dispatch (insn, IS_CMP))
5643 return ready_remove (ready, i);
5646 return ready_remove_first (ready);
5649 /* Get the number of ready insns in the ready list. */
5651 int
5652 number_in_ready (void)
5654 return ready.n_ready;
5657 /* Get the element of the ready list with index I. */
5659 rtx
5660 get_ready_element (int i)
5662 return ready_element (&ready, i);
5665 #endif /* INSN_SCHEDULING */