1 /* Instruction scheduling pass.
2 Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000,
3 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009
4 Free Software Foundation, Inc.
5 Contributed by Michael Tiemann (tiemann@cygnus.com). Enhanced by,
6 and currently maintained by, Jim Wilson (wilson@cygnus.com).
8 This file is part of GCC.
10 GCC is free software; you can redistribute it and/or modify it under
11 the terms of the GNU General Public License as published by the Free
12 Software Foundation; either version 3, or (at your option) any later
13 version.
15 GCC is distributed in the hope that it will be useful, but WITHOUT ANY
16 WARRANTY; without even the implied warranty of MERCHANTABILITY or
17 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
18 for more details.
20 You should have received a copy of the GNU General Public License
21 along with GCC; see the file COPYING3. If not see
22 <http://www.gnu.org/licenses/>. */
24 /* Instruction scheduling pass. This file, along with sched-deps.c,
25 contains the generic parts. The entry point for the normal
26 instruction scheduling pass is found in sched-rgn.c.
28 We compute insn priorities based on data dependencies. Flow
29 analysis only creates a fraction of the data-dependencies we must
30 observe: namely, only those dependencies which the combiner can be
31 expected to use. For this pass, we must therefore create the
32 remaining dependencies we need to observe: register dependencies,
33 memory dependencies, dependencies to keep function calls in order,
34 and the dependence between a conditional branch and the setting of
35 condition codes. All of these are dealt with here.
37 The scheduler first traverses the data flow graph, starting with
38 the last instruction, and proceeding to the first, assigning values
39 to insn_priority as it goes. This sorts the instructions
40 topologically by data dependence.
42 Once priorities have been established, we order the insns using
43 list scheduling. This works as follows: starting with a list of
44 all the ready insns, and sorted according to priority number, we
45 schedule the insn from the end of the list by placing its
46 predecessors in the list according to their priority order. We
47 consider this insn scheduled by setting the pointer to the "end" of
48 the list to point to the previous insn. When an insn has no
49 predecessors, we either queue it until sufficient time has elapsed
50 or add it to the ready list. As the instructions are scheduled or
51 when stalls are introduced, the queue advances and dumps insns into
52 the ready list. When all insns down to the lowest priority have
53 been scheduled, the critical path of the basic block has been made
54 as short as possible. The remaining insns are then scheduled in
55 remaining slots.
57 The following list shows the order in which we want to break ties
58 among insns in the ready list:
60 1. choose insn with the longest path to end of bb, ties
61 broken by
62 2. choose insn with least contribution to register pressure,
63 ties broken by
64 3. prefer in-block over interblock motion, ties broken by
65 4. prefer useful over speculative motion, ties broken by
66 5. choose insn with largest control flow probability, ties
67 broken by
68 6. choose insn with the least dependences upon the previously
69 scheduled insn, ties broken by
70 7. choose the insn which has the most insns dependent on it, or finally
71 8. choose the insn with the lowest UID.
73 Memory references complicate matters. Only if we can be certain
74 that memory references are not part of the data dependency graph
75 (via true, anti, or output dependence), can we move operations past
76 memory references. To a first approximation, reads can be done
77 independently, while writes introduce dependencies. Better
78 approximations will yield fewer dependencies.
80 Before reload, an extended analysis of interblock data dependences
81 is required for interblock scheduling. This is performed in
82 compute_block_backward_dependences ().
84 Dependencies set up by memory references are treated in exactly the
85 same way as other dependencies, by using insn backward dependences
86 INSN_BACK_DEPS. INSN_BACK_DEPS are translated into forward dependences
87 INSN_FORW_DEPS for the purpose of forward list scheduling.
89 Having optimized the critical path, we may have also unduly
90 extended the lifetimes of some registers. If an operation requires
91 that constants be loaded into registers, it is certainly desirable
92 to load those constants as early as necessary, but no earlier.
93 I.e., it will not do to load up a bunch of registers at the
94 beginning of a basic block only to use them at the end, if they
95 could be loaded later, since this may result in excessive register
96 utilization.
98 Note that since branches are never in basic blocks, but only end
99 basic blocks, this pass will not move branches. But that is ok,
100 since we can use GNU's delayed branch scheduling pass to take care
101 of this case.
103 Also note that no further optimizations based on algebraic
104 identities are performed, so this pass would be a good one to
105 perform instruction splitting, such as breaking up a multiply
106 instruction into shifts and adds where that is profitable.
108 Given the memory aliasing analysis that this pass should perform,
109 it should be possible to remove redundant stores to memory, and to
110 load values from registers instead of hitting memory.
112 Before reload, speculative insns are moved only if a 'proof' exists
113 that no exception will be caused by this, and if no live registers
114 exist that inhibit the motion (live registers constraints are not
115 represented by data dependence edges).
117 This pass must update information that subsequent passes expect to
118 be correct. Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
119 reg_n_calls_crossed, and reg_live_length. Also, BB_HEAD, BB_END.
121 The information in the line number notes is carefully retained by
122 this pass. Notes that refer to the starting and ending of
123 exception regions are also carefully retained by this pass. All
124 other NOTE insns are grouped in their same relative order at the
125 beginning of basic blocks and regions that have been scheduled. */
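/* Illustrative sketch (not part of this file's implementation and never
   compiled): a toy model of the priority computation described above.
   Here the priority of an insn is the length of the longest
   latency-weighted path from it to the end of the block, found by
   walking a small hypothetical dependence graph backwards.  The real
   code caches the result in INSN_PRIORITY; this sketch recomputes it
   for brevity.  All names and numbers below are invented.  */
#if 0
#include <stdio.h>

#define N_TOY 4

/* toy_dep[i][j] is the latency of the edge i -> j (insn j depends on
   insn i), or -1 if there is no dependence.  The graph is acyclic and
   numbered in program order.  */
static const int toy_dep[N_TOY][N_TOY] = {
  { -1,  2,  1, -1 },
  { -1, -1, -1,  1 },
  { -1, -1, -1,  3 },
  { -1, -1, -1, -1 }
};

/* Longest latency-weighted path from insn I to the end of the block.  */
static int
toy_priority (int i)
{
  int j, best = 0;

  for (j = i + 1; j < N_TOY; j++)
    if (toy_dep[i][j] >= 0)
      {
        int p = toy_dep[i][j] + toy_priority (j);

        if (p > best)
          best = p;
      }
  return best;
}

int
main (void)
{
  int i;

  /* Issuing insns in decreasing priority order shortens the critical
     path, which is what the list scheduler aims for.  */
  for (i = 0; i < N_TOY; i++)
    printf ("insn %d: priority %d\n", i, toy_priority (i));
  return 0;
}
#endif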
127 #include "config.h"
128 #include "system.h"
129 #include "coretypes.h"
130 #include "tm.h"
131 #include "toplev.h"
132 #include "rtl.h"
133 #include "tm_p.h"
134 #include "hard-reg-set.h"
135 #include "regs.h"
136 #include "function.h"
137 #include "flags.h"
138 #include "insn-config.h"
139 #include "insn-attr.h"
140 #include "except.h"
141 #include "toplev.h"
142 #include "recog.h"
143 #include "sched-int.h"
144 #include "target.h"
145 #include "output.h"
146 #include "params.h"
147 #include "vecprim.h"
148 #include "dbgcnt.h"
149 #include "cfgloop.h"
150 #include "ira.h"
152 #ifdef INSN_SCHEDULING
154 /* issue_rate is the number of insns that can be scheduled in the same
155 machine cycle. It can be defined in the config/mach/mach.h file,
156 otherwise we set it to 1. */
158 int issue_rate;
160 /* sched-verbose controls the amount of debugging output the
161 scheduler prints. It is controlled by -fsched-verbose=N:
162 N>0 and no -dSR : the output is directed to stderr.
163 N>=10 will direct the printouts to stderr (regardless of -dSR).
164 N=1: same as -dSR.
165 N=2: bb's probabilities, detailed ready list info, unit/insn info.
166 N=3: rtl at abort point, control-flow, regions info.
167 N=5: dependences info. */
169 static int sched_verbose_param = 0;
170 int sched_verbose = 0;
172 /* Debugging file. All printouts are sent to dump, which is always set,
173 either to stderr, or to the dump listing file (-dRS). */
174 FILE *sched_dump = 0;
176 /* fix_sched_param() is called from toplev.c upon detection
177 of the -fsched-verbose=N option. */
179 void
180 fix_sched_param (const char *param, const char *val)
182 if (!strcmp (param, "verbose"))
183 sched_verbose_param = atoi (val);
184 else
185 warning (0, "fix_sched_param: unknown param: %s", param);
188 /* This is a placeholder for the scheduler parameters common
189 to all schedulers. */
190 struct common_sched_info_def *common_sched_info;
192 #define INSN_TICK(INSN) (HID (INSN)->tick)
193 #define INTER_TICK(INSN) (HID (INSN)->inter_tick)
195 /* If INSN_TICK of an instruction is equal to INVALID_TICK,
196 then it should be recalculated from scratch. */
197 #define INVALID_TICK (-(max_insn_queue_index + 1))
198 /* The minimal value of the INSN_TICK of an instruction. */
199 #define MIN_TICK (-max_insn_queue_index)
201 /* Issue points are used to distinguish between instructions in max_issue ().
202 For now, all instructions are equally good. */
203 #define ISSUE_POINTS(INSN) 1
205 /* List of important notes we must keep around. This is a pointer to the
206 last element in the list. */
207 rtx note_list;
209 static struct spec_info_def spec_info_var;
210 /* Description of the speculative part of the scheduling.
211 If NULL - no speculation. */
212 spec_info_t spec_info = NULL;
214 /* True if a recovery block was added during scheduling of the current block.
215 Used to determine if we need to fix INSN_TICKs. */
216 static bool haifa_recovery_bb_recently_added_p;
218 /* True if a recovery block was added during this scheduling pass.
219 Used to determine if we should have empty memory pools of dependencies
220 after finishing the current region. */
221 bool haifa_recovery_bb_ever_added_p;
223 /* Counters of different types of speculative instructions. */
224 static int nr_begin_data, nr_be_in_data, nr_begin_control, nr_be_in_control;
226 /* Array used in {unlink, restore}_bb_notes. */
227 static rtx *bb_header = 0;
229 /* Basic block after which recovery blocks will be created. */
230 static basic_block before_recovery;
232 /* Basic block just before the EXIT_BLOCK and after recovery, if we have
233 created it. */
234 basic_block after_recovery;
236 /* FALSE if we add bb to another region, so we don't need to initialize it. */
237 bool adding_bb_to_current_region_p = true;
239 /* Queues, etc. */
241 /* An instruction is ready to be scheduled when all insns preceding it
242 have already been scheduled. It is important to ensure that all
243 insns which use its result will not be executed until its result
244 has been computed. An insn is maintained in one of four structures:
246 (P) the "Pending" set of insns which cannot be scheduled until
247 their dependencies have been satisfied.
248 (Q) the "Queued" set of insns that can be scheduled when sufficient
249 time has passed.
250 (R) the "Ready" list of unscheduled, uncommitted insns.
251 (S) the "Scheduled" list of insns.
253 Initially, all insns are either "Pending" or "Ready" depending on
254 whether their dependencies are satisfied.
256 Insns move from the "Ready" list to the "Scheduled" list as they
257 are committed to the schedule. As this occurs, the insns in the
258 "Pending" list have their dependencies satisfied and move to either
259 the "Ready" list or the "Queued" set depending on whether
260 sufficient time has passed to make them ready. As time passes,
261 insns move from the "Queued" set to the "Ready" list.
263 The "Pending" list (P) consists of the insns in the INSN_FORW_DEPS of the
264 unscheduled insns, i.e., those that are ready, queued, and pending.
265 The "Queued" set (Q) is implemented by the variable `insn_queue'.
266 The "Ready" list (R) is implemented by the variables `ready' and
267 `n_ready'.
268 The "Scheduled" list (S) is the new insn chain built by this pass.
270 The transition (R->S) is implemented in the scheduling loop in
271 `schedule_block' when the best insn to schedule is chosen.
272 The transitions (P->R and P->Q) are implemented in `schedule_insn' as
273 insns move from the ready list to the scheduled list.
274 The transition (Q->R) is implemented in 'queue_to_ready' as time
275 passes or stalls are introduced. */
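/* Illustrative sketch (not part of this file's implementation and never
   compiled): a toy model of the Pending/Queued/Ready/Scheduled
   transitions described above.  Each hypothetical insn counts its
   unresolved predecessors ("Pending"); when the count drops to zero it
   is queued until the producer's latency has elapsed ("Queued"), then
   becomes eligible to issue ("Ready") and is finally issued
   ("Scheduled").  The insns, the single dependence edge and its
   latency are all invented.  */
#if 0
#include <stdio.h>

#define N_TOY 3

struct toy_insn
{
  int unresolved;   /* "Pending": number of unscheduled predecessors.  */
  int ready_at;     /* "Queued" until the clock reaches this cycle.  */
  int scheduled;    /* Nonzero once "Scheduled".  */
};

int
main (void)
{
  /* Insn 2 depends on insn 0 with a latency of 2 cycles.  */
  struct toy_insn insn[N_TOY] = { { 0, 0, 0 }, { 0, 0, 0 }, { 1, 0, 0 } };
  int clock, i, n_scheduled = 0;

  for (clock = 0; n_scheduled < N_TOY; clock++)
    for (i = 0; i < N_TOY; i++)
      if (!insn[i].scheduled
          && insn[i].unresolved == 0
          && insn[i].ready_at <= clock)
        {
          insn[i].scheduled = 1;
          n_scheduled++;
          printf ("cycle %d: schedule insn %d\n", clock, i);

          /* Resolve the dependence of the consumer: it leaves the
             "Pending" set and stays "Queued" until the latency elapses.  */
          if (i == 0)
            {
              insn[2].unresolved--;
              insn[2].ready_at = clock + 2;
            }
          break;    /* Issue at most one insn per cycle in this toy.  */
        }
  return 0;
}
#endif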
277 /* Implement a circular buffer to delay instructions until sufficient
278 time has passed. For the new pipeline description interface,
279 MAX_INSN_QUEUE_INDEX is a power of two minus one which is not less
280 than the maximal time of instruction execution computed by genattr.c,
281 based on the maximal time of functional unit reservations and of getting a
282 result. This is the longest time an insn may be queued. */
284 static rtx *insn_queue;
285 static int q_ptr = 0;
286 static int q_size = 0;
287 #define NEXT_Q(X) (((X)+1) & max_insn_queue_index)
288 #define NEXT_Q_AFTER(X, C) (((X)+C) & max_insn_queue_index)
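/* Illustrative sketch (not part of this file's implementation and never
   compiled): because max_insn_queue_index is a power of two minus one,
   the additions in NEXT_Q and NEXT_Q_AFTER wrap around the circular
   queue with a single bitwise AND instead of a modulo operation.  The
   queue size and slot values below are invented.  */
#if 0
#include <stdio.h>

int
main (void)
{
  const int toy_queue_index = 7;    /* 2^3 - 1, i.e. an 8-slot queue.  */
  int n;

  /* Starting from slot 5, queueing for 0..5 cycles wraps past the end
     of the buffer back to slots 0, 1 and 2.  */
  for (n = 0; n <= 5; n++)
    printf ("queued for %d cycles from slot 5: slot %d\n",
            n, (5 + n) & toy_queue_index);
  return 0;
}
#endif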
290 #define QUEUE_SCHEDULED (-3)
291 #define QUEUE_NOWHERE (-2)
292 #define QUEUE_READY (-1)
293 /* QUEUE_SCHEDULED - INSN is scheduled.
294 QUEUE_NOWHERE - INSN isn't scheduled yet and is neither in
295 queue or ready list.
296 QUEUE_READY - INSN is in ready list.
297 N >= 0 - INSN queued for X [where NEXT_Q_AFTER (q_ptr, X) == N] cycles. */
299 #define QUEUE_INDEX(INSN) (HID (INSN)->queue_index)
301 /* The following variable holds the state of all current and future
302 reservations of the processor units. */
303 state_t curr_state;
305 /* The following variable is the size of memory representing all
306 current and future reservations of the processor units. */
307 size_t dfa_state_size;
309 /* The following array is used to find the best insn from ready when
310 the automaton pipeline interface is used. */
311 char *ready_try = NULL;
313 /* The ready list. */
314 struct ready_list ready = {NULL, 0, 0, 0, 0};
316 /* The pointer to the ready list (to be removed). */
317 static struct ready_list *readyp = &ready;
319 /* Scheduling clock. */
320 static int clock_var;
322 static int may_trap_exp (const_rtx, int);
324 /* Nonzero iff the address is composed of at most 1 register. */
325 #define CONST_BASED_ADDRESS_P(x) \
326 (REG_P (x) \
327 || ((GET_CODE (x) == PLUS || GET_CODE (x) == MINUS \
328 || (GET_CODE (x) == LO_SUM)) \
329 && (CONSTANT_P (XEXP (x, 0)) \
330 || CONSTANT_P (XEXP (x, 1)))))
332 /* Returns a class that insn with GET_DEST(insn)=x may belong to,
333 as found by analyzing insn's expression. */
336 static int haifa_luid_for_non_insn (rtx x);
338 /* Haifa version of sched_info hooks common to all headers. */
339 const struct common_sched_info_def haifa_common_sched_info =
341 NULL, /* fix_recovery_cfg */
342 NULL, /* add_block */
343 NULL, /* estimate_number_of_insns */
344 haifa_luid_for_non_insn, /* luid_for_non_insn */
345 SCHED_PASS_UNKNOWN /* sched_pass_id */
348 const struct sched_scan_info_def *sched_scan_info;
350 /* Mapping from instruction UID to its Logical UID. */
351 VEC (int, heap) *sched_luids = NULL;
353 /* Next LUID to assign to an instruction. */
354 int sched_max_luid = 1;
356 /* Haifa Instruction Data. */
357 VEC (haifa_insn_data_def, heap) *h_i_d = NULL;
359 void (* sched_init_only_bb) (basic_block, basic_block);
361 /* Split block function. Different schedulers might use different functions
362 to keep their internal data consistent. */
363 basic_block (* sched_split_block) (basic_block, rtx);
365 /* Create empty basic block after the specified block. */
366 basic_block (* sched_create_empty_bb) (basic_block);
368 static int
369 may_trap_exp (const_rtx x, int is_store)
371 enum rtx_code code;
373 if (x == 0)
374 return TRAP_FREE;
375 code = GET_CODE (x);
376 if (is_store)
378 if (code == MEM && may_trap_p (x))
379 return TRAP_RISKY;
380 else
381 return TRAP_FREE;
383 if (code == MEM)
385 /* The insn uses memory: a volatile load. */
386 if (MEM_VOLATILE_P (x))
387 return IRISKY;
388 /* An exception-free load. */
389 if (!may_trap_p (x))
390 return IFREE;
391 /* A load with 1 base register, to be further checked. */
392 if (CONST_BASED_ADDRESS_P (XEXP (x, 0)))
393 return PFREE_CANDIDATE;
394 /* No info on the load, to be further checked. */
395 return PRISKY_CANDIDATE;
397 else
399 const char *fmt;
400 int i, insn_class = TRAP_FREE;
402 /* Neither store nor load, check if it may cause a trap. */
403 if (may_trap_p (x))
404 return TRAP_RISKY;
405 /* Recursive step: walk the insn... */
406 fmt = GET_RTX_FORMAT (code);
407 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
409 if (fmt[i] == 'e')
411 int tmp_class = may_trap_exp (XEXP (x, i), is_store);
412 insn_class = WORST_CLASS (insn_class, tmp_class);
414 else if (fmt[i] == 'E')
416 int j;
417 for (j = 0; j < XVECLEN (x, i); j++)
419 int tmp_class = may_trap_exp (XVECEXP (x, i, j), is_store);
420 insn_class = WORST_CLASS (insn_class, tmp_class);
421 if (insn_class == TRAP_RISKY || insn_class == IRISKY)
422 break;
425 if (insn_class == TRAP_RISKY || insn_class == IRISKY)
426 break;
428 return insn_class;
432 /* Classifies rtx X of an insn for the purpose of verifying that X can be
433 executed speculatively (and consequently the insn can be moved
434 speculatively), by examining X, returning:
435 TRAP_RISKY: store, or risky non-load insn (e.g. division by variable).
436 TRAP_FREE: non-load insn.
437 IFREE: load from a globally safe location.
438 IRISKY: volatile load.
439 PFREE_CANDIDATE, PRISKY_CANDIDATE: loads that need to be checked for
440 being either PFREE or PRISKY. */
442 static int
443 haifa_classify_rtx (const_rtx x)
445 int tmp_class = TRAP_FREE;
446 int insn_class = TRAP_FREE;
447 enum rtx_code code;
449 if (GET_CODE (x) == PARALLEL)
451 int i, len = XVECLEN (x, 0);
453 for (i = len - 1; i >= 0; i--)
455 tmp_class = haifa_classify_rtx (XVECEXP (x, 0, i));
456 insn_class = WORST_CLASS (insn_class, tmp_class);
457 if (insn_class == TRAP_RISKY || insn_class == IRISKY)
458 break;
461 else
463 code = GET_CODE (x);
464 switch (code)
466 case CLOBBER:
467 /* Test if it is a 'store'. */
468 tmp_class = may_trap_exp (XEXP (x, 0), 1);
469 break;
470 case SET:
471 /* Test if it is a store. */
472 tmp_class = may_trap_exp (SET_DEST (x), 1);
473 if (tmp_class == TRAP_RISKY)
474 break;
475 /* Test if it is a load. */
476 tmp_class =
477 WORST_CLASS (tmp_class,
478 may_trap_exp (SET_SRC (x), 0));
479 break;
480 case COND_EXEC:
481 tmp_class = haifa_classify_rtx (COND_EXEC_CODE (x));
482 if (tmp_class == TRAP_RISKY)
483 break;
484 tmp_class = WORST_CLASS (tmp_class,
485 may_trap_exp (COND_EXEC_TEST (x), 0));
486 break;
487 case TRAP_IF:
488 tmp_class = TRAP_RISKY;
489 break;
490 default:;
492 insn_class = tmp_class;
495 return insn_class;
499 haifa_classify_insn (const_rtx insn)
501 return haifa_classify_rtx (PATTERN (insn));
504 /* Forward declarations. */
506 static int priority (rtx);
507 static int rank_for_schedule (const void *, const void *);
508 static void swap_sort (rtx *, int);
509 static void queue_insn (rtx, int);
510 static int schedule_insn (rtx);
511 static void adjust_priority (rtx);
512 static void advance_one_cycle (void);
513 static void extend_h_i_d (void);
516 /* Notes handling mechanism:
517 =========================
518 Generally, NOTES are saved before scheduling and restored after scheduling.
519 The scheduler distinguishes between two types of notes:
521 (1) LOOP_BEGIN, LOOP_END, SETJMP, EHREGION_BEG, EHREGION_END notes:
522 Before scheduling a region, a pointer to the note is added to the insn
523 that follows or precedes it. (This happens as part of the data dependence
524 computation). After scheduling an insn, the pointer contained in it is
525 used for regenerating the corresponding note (in reemit_notes).
527 (2) All other notes (e.g. INSN_DELETED): Before scheduling a block,
528 these notes are put in a list (in rm_other_notes() and
529 unlink_other_notes ()). After scheduling the block, these notes are
530 inserted at the beginning of the block (in schedule_block()). */
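/* Illustrative sketch (not part of this file's implementation and never
   compiled): a toy model of the note handling described above.  "Notes"
   (negative values in the invented stream below) are pulled out of the
   instruction stream before scheduling and re-emitted in their original
   relative order at the beginning of the block afterwards, while real
   insns (positive values) may be reordered freely in between.  */
#if 0
#include <stdio.h>

#define STREAM_LEN 6

int
main (void)
{
  int stream[STREAM_LEN] = { 10, -1, 20, -2, 30, 40 };
  int notes[STREAM_LEN], insns[STREAM_LEN];
  int i, n_notes = 0, n_insns = 0;

  /* rm_other_notes analogue: split the notes from the real insns.  */
  for (i = 0; i < STREAM_LEN; i++)
    if (stream[i] < 0)
      notes[n_notes++] = stream[i];
    else
      insns[n_insns++] = stream[i];

  /* ... the scheduler would reorder INSNS here ... */

  /* restore_other_notes analogue: the saved notes come back first, in
     their original relative order, followed by the insns.  */
  for (i = 0; i < n_notes; i++)
    printf ("note %d\n", notes[i]);
  for (i = 0; i < n_insns; i++)
    printf ("insn %d\n", insns[i]);
  return 0;
}
#endif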
532 static void ready_add (struct ready_list *, rtx, bool);
533 static rtx ready_remove_first (struct ready_list *);
535 static void queue_to_ready (struct ready_list *);
536 static int early_queue_to_ready (state_t, struct ready_list *);
538 static void debug_ready_list (struct ready_list *);
540 /* The following functions are used to implement multi-pass scheduling
541 on the first cycle. */
542 static rtx ready_remove (struct ready_list *, int);
543 static void ready_remove_insn (rtx);
545 static int choose_ready (struct ready_list *, rtx *);
547 static void fix_inter_tick (rtx, rtx);
548 static int fix_tick_ready (rtx);
549 static void change_queue_index (rtx, int);
551 /* The following functions are used to implement scheduling of data/control
552 speculative instructions. */
554 static void extend_h_i_d (void);
555 static void init_h_i_d (rtx);
556 static void generate_recovery_code (rtx);
557 static void process_insn_forw_deps_be_in_spec (rtx, rtx, ds_t);
558 static void begin_speculative_block (rtx);
559 static void add_to_speculative_block (rtx);
560 static void init_before_recovery (basic_block *);
561 static void create_check_block_twin (rtx, bool);
562 static void fix_recovery_deps (basic_block);
563 static void haifa_change_pattern (rtx, rtx);
564 static void dump_new_block_header (int, basic_block, rtx, rtx);
565 static void restore_bb_notes (basic_block);
566 static void fix_jump_move (rtx);
567 static void move_block_after_check (rtx);
568 static void move_succs (VEC(edge,gc) **, basic_block);
569 static void sched_remove_insn (rtx);
570 static void clear_priorities (rtx, rtx_vec_t *);
571 static void calc_priorities (rtx_vec_t);
572 static void add_jump_dependencies (rtx, rtx);
573 #ifdef ENABLE_CHECKING
574 static int has_edge_p (VEC(edge,gc) *, int);
575 static void check_cfg (rtx, rtx);
576 #endif
578 #endif /* INSN_SCHEDULING */
580 /* Point to state used for the current scheduling pass. */
581 struct haifa_sched_info *current_sched_info;
583 #ifndef INSN_SCHEDULING
584 void
585 schedule_insns (void)
588 #else
590 /* Do register pressure sensitive insn scheduling if the flag is set
591 up. */
592 bool sched_pressure_p;
594 /* Map regno -> its cover class. The map is defined only when
595 SCHED_PRESSURE_P is true. */
596 enum reg_class *sched_regno_cover_class;
598 /* The current register pressure. Only elements corresponding to cover
599 classes are defined. */
600 static int curr_reg_pressure[N_REG_CLASSES];
602 /* Saved value of the previous array. */
603 static int saved_reg_pressure[N_REG_CLASSES];
605 /* Registers living at the given scheduling point. */
606 static bitmap curr_reg_live;
608 /* Saved value of the previous array. */
609 static bitmap saved_reg_live;
611 /* Registers mentioned in the current region. */
612 static bitmap region_ref_regs;
614 /* Initiate register pressure related info for scheduling the current
615 region. Currently it only clears the registers mentioned in the
616 current region. */
617 void
618 sched_init_region_reg_pressure_info (void)
620 bitmap_clear (region_ref_regs);
623 /* Update current register pressure related info after birth (if
624 BIRTH_P) or death of register REGNO. */
625 static void
626 mark_regno_birth_or_death (int regno, bool birth_p)
628 enum reg_class cover_class;
630 cover_class = sched_regno_cover_class[regno];
631 if (regno >= FIRST_PSEUDO_REGISTER)
633 if (cover_class != NO_REGS)
635 if (birth_p)
637 bitmap_set_bit (curr_reg_live, regno);
638 curr_reg_pressure[cover_class]
639 += ira_reg_class_nregs[cover_class][PSEUDO_REGNO_MODE (regno)];
641 else
643 bitmap_clear_bit (curr_reg_live, regno);
644 curr_reg_pressure[cover_class]
645 -= ira_reg_class_nregs[cover_class][PSEUDO_REGNO_MODE (regno)];
649 else if (cover_class != NO_REGS
650 && ! TEST_HARD_REG_BIT (ira_no_alloc_regs, regno))
652 if (birth_p)
654 bitmap_set_bit (curr_reg_live, regno);
655 curr_reg_pressure[cover_class]++;
657 else
659 bitmap_clear_bit (curr_reg_live, regno);
660 curr_reg_pressure[cover_class]--;
665 /* Initiate current register pressure related info from living
666 registers given by LIVE. */
667 static void
668 initiate_reg_pressure_info (bitmap live)
670 int i;
671 unsigned int j;
672 bitmap_iterator bi;
674 for (i = 0; i < ira_reg_class_cover_size; i++)
675 curr_reg_pressure[ira_reg_class_cover[i]] = 0;
676 bitmap_clear (curr_reg_live);
677 EXECUTE_IF_SET_IN_BITMAP (live, 0, j, bi)
678 if (current_nr_blocks == 1 || bitmap_bit_p (region_ref_regs, j))
679 mark_regno_birth_or_death (j, true);
682 /* Mark registers in X as mentioned in the current region. */
683 static void
684 setup_ref_regs (rtx x)
686 int i, j, regno;
687 const RTX_CODE code = GET_CODE (x);
688 const char *fmt;
690 if (REG_P (x))
692 regno = REGNO (x);
693 if (regno >= FIRST_PSEUDO_REGISTER)
694 bitmap_set_bit (region_ref_regs, REGNO (x));
695 else
696 for (i = hard_regno_nregs[regno][GET_MODE (x)] - 1; i >= 0; i--)
697 bitmap_set_bit (region_ref_regs, regno + i);
698 return;
700 fmt = GET_RTX_FORMAT (code);
701 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
702 if (fmt[i] == 'e')
703 setup_ref_regs (XEXP (x, i));
704 else if (fmt[i] == 'E')
706 for (j = 0; j < XVECLEN (x, i); j++)
707 setup_ref_regs (XVECEXP (x, i, j));
711 /* Initiate current register pressure related info at the start of
712 basic block BB. */
713 static void
714 initiate_bb_reg_pressure_info (basic_block bb)
716 unsigned int i;
717 rtx insn;
719 if (current_nr_blocks > 1)
720 FOR_BB_INSNS (bb, insn)
721 if (INSN_P (insn))
722 setup_ref_regs (PATTERN (insn));
723 initiate_reg_pressure_info (df_get_live_in (bb));
724 #ifdef EH_RETURN_DATA_REGNO
725 if (bb_has_eh_pred (bb))
726 for (i = 0; ; ++i)
728 unsigned int regno = EH_RETURN_DATA_REGNO (i);
730 if (regno == INVALID_REGNUM)
731 break;
732 if (! bitmap_bit_p (df_get_live_in (bb), regno))
733 mark_regno_birth_or_death (regno, true);
735 #endif
738 /* Save current register pressure related info. */
739 static void
740 save_reg_pressure (void)
742 int i;
744 for (i = 0; i < ira_reg_class_cover_size; i++)
745 saved_reg_pressure[ira_reg_class_cover[i]]
746 = curr_reg_pressure[ira_reg_class_cover[i]];
747 bitmap_copy (saved_reg_live, curr_reg_live);
750 /* Restore saved register pressure related info. */
751 static void
752 restore_reg_pressure (void)
754 int i;
756 for (i = 0; i < ira_reg_class_cover_size; i++)
757 curr_reg_pressure[ira_reg_class_cover[i]]
758 = saved_reg_pressure[ira_reg_class_cover[i]];
759 bitmap_copy (curr_reg_live, saved_reg_live);
762 /* Return TRUE if the register is dying after its USE. */
763 static bool
764 dying_use_p (struct reg_use_data *use)
766 struct reg_use_data *next;
768 for (next = use->next_regno_use; next != use; next = next->next_regno_use)
769 if (QUEUE_INDEX (next->insn) != QUEUE_SCHEDULED)
770 return false;
771 return true;
774 /* Print info about the current register pressure and its excess for
775 each cover class. */
776 static void
777 print_curr_reg_pressure (void)
779 int i;
780 enum reg_class cl;
782 fprintf (sched_dump, ";;\t");
783 for (i = 0; i < ira_reg_class_cover_size; i++)
785 cl = ira_reg_class_cover[i];
786 gcc_assert (curr_reg_pressure[cl] >= 0);
787 fprintf (sched_dump, " %s:%d(%d)", reg_class_names[cl],
788 curr_reg_pressure[cl],
789 curr_reg_pressure[cl] - ira_available_class_regs[cl]);
791 fprintf (sched_dump, "\n");
794 /* Pointer to the last instruction scheduled. Used by rank_for_schedule,
795 so that insns independent of the last scheduled insn will be preferred
796 over dependent instructions. */
798 static rtx last_scheduled_insn;
800 /* Cached cost of the instruction. Use the function below to get the cost
801 of the insn. -1 here means that the field is not initialized. */
802 #define INSN_COST(INSN) (HID (INSN)->cost)
804 /* Compute cost of executing INSN.
805 This is the number of cycles between instruction issue and
806 instruction results. */
808 insn_cost (rtx insn)
810 int cost;
812 if (sel_sched_p ())
814 if (recog_memoized (insn) < 0)
815 return 0;
817 cost = insn_default_latency (insn);
818 if (cost < 0)
819 cost = 0;
821 return cost;
824 cost = INSN_COST (insn);
826 if (cost < 0)
828 /* A USE insn, or something else we don't need to
829 understand. We can't pass these directly to
830 result_ready_cost or insn_default_latency because it will
831 trigger a fatal error for unrecognizable insns. */
832 if (recog_memoized (insn) < 0)
834 INSN_COST (insn) = 0;
835 return 0;
837 else
839 cost = insn_default_latency (insn);
840 if (cost < 0)
841 cost = 0;
843 INSN_COST (insn) = cost;
847 return cost;
850 /* Compute cost of dependence LINK.
851 This is the number of cycles between instruction issue and
852 instruction results.
853 ??? We also use this function to call recog_memoized on all insns. */
855 dep_cost_1 (dep_t link, dw_t dw)
857 rtx insn = DEP_PRO (link);
858 rtx used = DEP_CON (link);
859 int cost;
861 /* A USE insn should never require the value used to be computed.
862 This allows the computation of a function's result and parameter
863 values to overlap the return and call. We don't care about
864 the dependence cost when only decreasing register pressure. */
865 if (recog_memoized (used) < 0)
867 cost = 0;
868 recog_memoized (insn);
870 else
872 enum reg_note dep_type = DEP_TYPE (link);
874 cost = insn_cost (insn);
876 if (INSN_CODE (insn) >= 0)
878 if (dep_type == REG_DEP_ANTI)
879 cost = 0;
880 else if (dep_type == REG_DEP_OUTPUT)
882 cost = (insn_default_latency (insn)
883 - insn_default_latency (used));
884 if (cost <= 0)
885 cost = 1;
887 else if (bypass_p (insn))
888 cost = insn_latency (insn, used);
892 if (targetm.sched.adjust_cost_2)
893 cost = targetm.sched.adjust_cost_2 (used, (int) dep_type, insn, cost,
894 dw);
895 else if (targetm.sched.adjust_cost != NULL)
897 /* This variable is used for backward compatibility with the
898 targets. */
899 rtx dep_cost_rtx_link = alloc_INSN_LIST (NULL_RTX, NULL_RTX);
901 /* Make it self-cycled, so that if someone tries to walk over this
902 incomplete list they will be caught in an endless loop. */
903 XEXP (dep_cost_rtx_link, 1) = dep_cost_rtx_link;
905 /* Targets use only REG_NOTE_KIND of the link. */
906 PUT_REG_NOTE_KIND (dep_cost_rtx_link, DEP_TYPE (link));
908 cost = targetm.sched.adjust_cost (used, dep_cost_rtx_link,
909 insn, cost);
911 free_INSN_LIST_node (dep_cost_rtx_link);
914 if (cost < 0)
915 cost = 0;
918 return cost;
921 /* Compute cost of dependence LINK.
922 This is the number of cycles between instruction issue and
923 instruction results. */
925 dep_cost (dep_t link)
927 return dep_cost_1 (link, 0);
930 /* Use this sel-sched.c friendly function in reorder2 instead of increasing
931 INSN_PRIORITY explicitly. */
932 void
933 increase_insn_priority (rtx insn, int amount)
935 if (!sel_sched_p ())
937 /* We're dealing with haifa-sched.c INSN_PRIORITY. */
938 if (INSN_PRIORITY_KNOWN (insn))
939 INSN_PRIORITY (insn) += amount;
941 else
943 /* In sel-sched.c INSN_PRIORITY is not kept up to date.
944 Use EXPR_PRIORITY instead. */
945 sel_add_to_insn_priority (insn, amount);
949 /* Return 'true' if DEP should be included in priority calculations. */
950 static bool
951 contributes_to_priority_p (dep_t dep)
953 if (DEBUG_INSN_P (DEP_CON (dep))
954 || DEBUG_INSN_P (DEP_PRO (dep)))
955 return false;
957 /* Critical path is meaningful in block boundaries only. */
958 if (!current_sched_info->contributes_to_priority (DEP_CON (dep),
959 DEP_PRO (dep)))
960 return false;
962 /* If flag COUNT_SPEC_IN_CRITICAL_PATH is set,
963 then speculative instructions will be less likely to be
964 scheduled. That is because the priority of
965 their producers will increase, and thus the
966 producers will be more likely to be scheduled,
967 resolving the dependence. */
968 if (sched_deps_info->generate_spec_deps
969 && !(spec_info->flags & COUNT_SPEC_IN_CRITICAL_PATH)
970 && (DEP_STATUS (dep) & SPECULATIVE))
971 return false;
973 return true;
976 /* Compute the number of nondebug forward deps of an insn. */
978 static int
979 dep_list_size (rtx insn)
981 sd_iterator_def sd_it;
982 dep_t dep;
983 int dbgcount = 0, nodbgcount = 0;
985 if (!MAY_HAVE_DEBUG_INSNS)
986 return sd_lists_size (insn, SD_LIST_FORW);
988 FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
990 if (DEBUG_INSN_P (DEP_CON (dep)))
991 dbgcount++;
992 else
993 nodbgcount++;
996 gcc_assert (dbgcount + nodbgcount == sd_lists_size (insn, SD_LIST_FORW));
998 return nodbgcount;
1001 /* Compute the priority number for INSN. */
1002 static int
1003 priority (rtx insn)
1005 if (! INSN_P (insn))
1006 return 0;
1008 /* We should not be interested in priority of an already scheduled insn. */
1009 gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
1011 if (!INSN_PRIORITY_KNOWN (insn))
1013 int this_priority = -1;
1015 if (dep_list_size (insn) == 0)
1016 /* ??? We should set INSN_PRIORITY to insn_cost when an insn has
1017 some forward deps but all of them are ignored by
1018 contributes_to_priority hook. At the moment we set priority of
1019 such insn to 0. */
1020 this_priority = insn_cost (insn);
1021 else
1023 rtx prev_first, twin;
1024 basic_block rec;
1026 /* For recovery check instructions we calculate priority slightly
1027 differently than for normal instructions. Instead of walking
1028 through INSN_FORW_DEPS (check) list, we walk through
1029 INSN_FORW_DEPS list of each instruction in the corresponding
1030 recovery block. */
1032 /* Selective scheduling does not define RECOVERY_BLOCK macro. */
1033 rec = sel_sched_p () ? NULL : RECOVERY_BLOCK (insn);
1034 if (!rec || rec == EXIT_BLOCK_PTR)
1036 prev_first = PREV_INSN (insn);
1037 twin = insn;
1039 else
1041 prev_first = NEXT_INSN (BB_HEAD (rec));
1042 twin = PREV_INSN (BB_END (rec));
1047 sd_iterator_def sd_it;
1048 dep_t dep;
1050 FOR_EACH_DEP (twin, SD_LIST_FORW, sd_it, dep)
1052 rtx next;
1053 int next_priority;
1055 next = DEP_CON (dep);
1057 if (BLOCK_FOR_INSN (next) != rec)
1059 int cost;
1061 if (!contributes_to_priority_p (dep))
1062 continue;
1064 if (twin == insn)
1065 cost = dep_cost (dep);
1066 else
1068 struct _dep _dep1, *dep1 = &_dep1;
1070 init_dep (dep1, insn, next, REG_DEP_ANTI);
1072 cost = dep_cost (dep1);
1075 next_priority = cost + priority (next);
1077 if (next_priority > this_priority)
1078 this_priority = next_priority;
1082 twin = PREV_INSN (twin);
1084 while (twin != prev_first);
1087 if (this_priority < 0)
1089 gcc_assert (this_priority == -1);
1091 this_priority = insn_cost (insn);
1094 INSN_PRIORITY (insn) = this_priority;
1095 INSN_PRIORITY_STATUS (insn) = 1;
1098 return INSN_PRIORITY (insn);
1101 /* Macros and functions for keeping the priority queue sorted, and
1102 dealing with queuing and dequeuing of instructions. */
1104 #define SCHED_SORT(READY, N_READY) \
1105 do { if ((N_READY) == 2) \
1106 swap_sort (READY, N_READY); \
1107 else if ((N_READY) > 2) \
1108 qsort (READY, N_READY, sizeof (rtx), rank_for_schedule); } \
1109 while (0)
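/* Illustrative sketch (not part of this file's implementation and never
   compiled): the ready list is kept as a plain array sorted by qsort
   with a comparator in the style of rank_for_schedule.  The comparator
   returns a positive value when its first argument is the preferred
   insn, so after an ascending sort the best candidate sits at the end
   of the array, where ready_remove_first can pop it in constant time.
   The priorities below are invented.  */
#if 0
#include <stdio.h>
#include <stdlib.h>

static int
toy_rank (const void *xp, const void *yp)
{
  int x = *(const int *) xp;
  int y = *(const int *) yp;

  /* Positive when X is preferred (higher priority), so qsort places
     preferred elements later in the array.  */
  return x - y;
}

int
main (void)
{
  int prio[5] = { 3, 9, 1, 7, 5 };
  int n = 5;

  qsort (prio, n, sizeof (int), toy_rank);

  /* Pop from the back: best candidate first.  */
  while (n > 0)
    printf ("issue insn with priority %d\n", prio[--n]);
  return 0;
}
#endif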
1111 /* Setup info about the current register pressure impact of scheduling
1112 INSN at the current scheduling point. */
1113 static void
1114 setup_insn_reg_pressure_info (rtx insn)
1116 int i, change, before, after, hard_regno;
1117 int excess_cost_change;
1118 enum machine_mode mode;
1119 enum reg_class cl;
1120 struct reg_pressure_data *pressure_info;
1121 int *max_reg_pressure;
1122 struct reg_use_data *use;
1123 static int death[N_REG_CLASSES];
1125 excess_cost_change = 0;
1126 for (i = 0; i < ira_reg_class_cover_size; i++)
1127 death[ira_reg_class_cover[i]] = 0;
1128 for (use = INSN_REG_USE_LIST (insn); use != NULL; use = use->next_insn_use)
1129 if (dying_use_p (use))
1131 cl = sched_regno_cover_class[use->regno];
1132 if (use->regno < FIRST_PSEUDO_REGISTER)
1133 death[cl]++;
1134 else
1135 death[cl] += ira_reg_class_nregs[cl][PSEUDO_REGNO_MODE (use->regno)];
1137 pressure_info = INSN_REG_PRESSURE (insn);
1138 max_reg_pressure = INSN_MAX_REG_PRESSURE (insn);
1139 gcc_assert (pressure_info != NULL && max_reg_pressure != NULL);
1140 for (i = 0; i < ira_reg_class_cover_size; i++)
1142 cl = ira_reg_class_cover[i];
1143 gcc_assert (curr_reg_pressure[cl] >= 0);
1144 change = (int) pressure_info[i].set_increase - death[cl];
1145 before = MAX (0, max_reg_pressure[i] - ira_available_class_regs[cl]);
1146 after = MAX (0, max_reg_pressure[i] + change
1147 - ira_available_class_regs[cl]);
1148 hard_regno = ira_class_hard_regs[cl][0];
1149 gcc_assert (hard_regno >= 0);
1150 mode = reg_raw_mode[hard_regno];
1151 excess_cost_change += ((after - before)
1152 * (ira_memory_move_cost[mode][cl][0]
1153 + ira_memory_move_cost[mode][cl][1]));
1155 INSN_REG_PRESSURE_EXCESS_COST_CHANGE (insn) = excess_cost_change;
1158 /* Returns a positive value if x is preferred; returns a negative value if
1159 y is preferred. Should never return 0, since that will make the sort
1160 unstable. */
1162 static int
1163 rank_for_schedule (const void *x, const void *y)
1165 rtx tmp = *(const rtx *) y;
1166 rtx tmp2 = *(const rtx *) x;
1167 rtx last;
1168 int tmp_class, tmp2_class;
1169 int val, priority_val, info_val;
1171 if (MAY_HAVE_DEBUG_INSNS)
1173 /* Schedule debug insns as early as possible. */
1174 if (DEBUG_INSN_P (tmp) && !DEBUG_INSN_P (tmp2))
1175 return -1;
1176 else if (DEBUG_INSN_P (tmp2))
1177 return 1;
1180 /* The insn in a schedule group should be issued first. */
1181 if (flag_sched_group_heuristic &&
1182 SCHED_GROUP_P (tmp) != SCHED_GROUP_P (tmp2))
1183 return SCHED_GROUP_P (tmp2) ? 1 : -1;
1185 /* Make sure that priority of TMP and TMP2 are initialized. */
1186 gcc_assert (INSN_PRIORITY_KNOWN (tmp) && INSN_PRIORITY_KNOWN (tmp2));
1188 if (sched_pressure_p)
1190 int diff;
1192 /* Prefer insn whose scheduling results in the smallest register
1193 pressure excess. */
1194 if ((diff = (INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp)
1195 + (INSN_TICK (tmp) > clock_var
1196 ? INSN_TICK (tmp) - clock_var : 0)
1197 - INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp2)
1198 - (INSN_TICK (tmp2) > clock_var
1199 ? INSN_TICK (tmp2) - clock_var : 0))) != 0)
1200 return diff;
1204 if (sched_pressure_p
1205 && (INSN_TICK (tmp2) > clock_var || INSN_TICK (tmp) > clock_var))
1207 if (INSN_TICK (tmp) <= clock_var)
1208 return -1;
1209 else if (INSN_TICK (tmp2) <= clock_var)
1210 return 1;
1211 else
1212 return INSN_TICK (tmp) - INSN_TICK (tmp2);
1214 /* Prefer insn with higher priority. */
1215 priority_val = INSN_PRIORITY (tmp2) - INSN_PRIORITY (tmp);
1217 if (flag_sched_critical_path_heuristic && priority_val)
1218 return priority_val;
1220 /* Prefer speculative insn with greater dependencies weakness. */
1221 if (flag_sched_spec_insn_heuristic && spec_info)
1223 ds_t ds1, ds2;
1224 dw_t dw1, dw2;
1225 int dw;
1227 ds1 = TODO_SPEC (tmp) & SPECULATIVE;
1228 if (ds1)
1229 dw1 = ds_weak (ds1);
1230 else
1231 dw1 = NO_DEP_WEAK;
1233 ds2 = TODO_SPEC (tmp2) & SPECULATIVE;
1234 if (ds2)
1235 dw2 = ds_weak (ds2);
1236 else
1237 dw2 = NO_DEP_WEAK;
1239 dw = dw2 - dw1;
1240 if (dw > (NO_DEP_WEAK / 8) || dw < -(NO_DEP_WEAK / 8))
1241 return dw;
1244 info_val = (*current_sched_info->rank) (tmp, tmp2);
1245 if (flag_sched_rank_heuristic && info_val)
1246 return info_val;
1248 if (flag_sched_last_insn_heuristic)
1250 last = last_scheduled_insn;
1252 if (DEBUG_INSN_P (last) && last != current_sched_info->prev_head)
1254 last = PREV_INSN (last);
1255 while (!NONDEBUG_INSN_P (last)
1256 && last != current_sched_info->prev_head);
1259 /* Compare insns based on their relation to the last scheduled
1260 non-debug insn. */
1261 if (flag_sched_last_insn_heuristic && NONDEBUG_INSN_P (last))
1263 dep_t dep1;
1264 dep_t dep2;
1266 /* Classify the instructions into three classes:
1267 1) Data dependent on last scheduled insn.
1268 2) Anti/Output dependent on last scheduled insn.
1269 3) Independent of last scheduled insn, or has latency of one.
1270 Choose the insn from the highest numbered class if different. */
1271 dep1 = sd_find_dep_between (last, tmp, true);
1273 if (dep1 == NULL || dep_cost (dep1) == 1)
1274 tmp_class = 3;
1275 else if (/* Data dependence. */
1276 DEP_TYPE (dep1) == REG_DEP_TRUE)
1277 tmp_class = 1;
1278 else
1279 tmp_class = 2;
1281 dep2 = sd_find_dep_between (last, tmp2, true);
1283 if (dep2 == NULL || dep_cost (dep2) == 1)
1284 tmp2_class = 3;
1285 else if (/* Data dependence. */
1286 DEP_TYPE (dep2) == REG_DEP_TRUE)
1287 tmp2_class = 1;
1288 else
1289 tmp2_class = 2;
1291 if ((val = tmp2_class - tmp_class))
1292 return val;
1295 /* Prefer the insn which has more later insns that depend on it.
1296 This gives the scheduler more freedom when scheduling later
1297 instructions at the expense of added register pressure. */
1299 val = (dep_list_size (tmp2) - dep_list_size (tmp));
1301 if (flag_sched_dep_count_heuristic && val != 0)
1302 return val;
1304 /* If insns are equally good, sort by INSN_LUID (original insn order),
1305 so that we make the sort stable. This minimizes instruction movement,
1306 thus minimizing sched's effect on debugging and cross-jumping. */
1307 return INSN_LUID (tmp) - INSN_LUID (tmp2);
1310 /* Resort the array A in which only element at index N may be out of order. */
1312 HAIFA_INLINE static void
1313 swap_sort (rtx *a, int n)
1315 rtx insn = a[n - 1];
1316 int i = n - 2;
1318 while (i >= 0 && rank_for_schedule (a + i, &insn) >= 0)
1320 a[i + 1] = a[i];
1321 i -= 1;
1323 a[i + 1] = insn;
1326 /* Add INSN to the insn queue so that it can be executed at least
1327 N_CYCLES after the currently executing insn. Preserve insns
1328 chain for debugging purposes. */
1330 HAIFA_INLINE static void
1331 queue_insn (rtx insn, int n_cycles)
1333 int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
1334 rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]);
1336 gcc_assert (n_cycles <= max_insn_queue_index);
1337 gcc_assert (!DEBUG_INSN_P (insn));
1339 insn_queue[next_q] = link;
1340 q_size += 1;
1342 if (sched_verbose >= 2)
1344 fprintf (sched_dump, ";;\t\tReady-->Q: insn %s: ",
1345 (*current_sched_info->print_insn) (insn, 0));
1347 fprintf (sched_dump, "queued for %d cycles.\n", n_cycles);
1350 QUEUE_INDEX (insn) = next_q;
1353 /* Remove INSN from queue. */
1354 static void
1355 queue_remove (rtx insn)
1357 gcc_assert (QUEUE_INDEX (insn) >= 0);
1358 remove_free_INSN_LIST_elem (insn, &insn_queue[QUEUE_INDEX (insn)]);
1359 q_size--;
1360 QUEUE_INDEX (insn) = QUEUE_NOWHERE;
1363 /* Return a pointer to the bottom of the ready list, i.e. the insn
1364 with the lowest priority. */
1366 rtx *
1367 ready_lastpos (struct ready_list *ready)
1369 gcc_assert (ready->n_ready >= 1);
1370 return ready->vec + ready->first - ready->n_ready + 1;
1373 /* Add an element INSN to the ready list so that it ends up with the
1374 lowest/highest priority depending on FIRST_P. */
1376 HAIFA_INLINE static void
1377 ready_add (struct ready_list *ready, rtx insn, bool first_p)
1379 if (!first_p)
1381 if (ready->first == ready->n_ready)
1383 memmove (ready->vec + ready->veclen - ready->n_ready,
1384 ready_lastpos (ready),
1385 ready->n_ready * sizeof (rtx));
1386 ready->first = ready->veclen - 1;
1388 ready->vec[ready->first - ready->n_ready] = insn;
1390 else
1392 if (ready->first == ready->veclen - 1)
1394 if (ready->n_ready)
1395 /* ready_lastpos() fails when called with (ready->n_ready == 0). */
1396 memmove (ready->vec + ready->veclen - ready->n_ready - 1,
1397 ready_lastpos (ready),
1398 ready->n_ready * sizeof (rtx));
1399 ready->first = ready->veclen - 2;
1401 ready->vec[++(ready->first)] = insn;
1404 ready->n_ready++;
1405 if (DEBUG_INSN_P (insn))
1406 ready->n_debug++;
1408 gcc_assert (QUEUE_INDEX (insn) != QUEUE_READY);
1409 QUEUE_INDEX (insn) = QUEUE_READY;
1412 /* Remove the element with the highest priority from the ready list and
1413 return it. */
1415 HAIFA_INLINE static rtx
1416 ready_remove_first (struct ready_list *ready)
1418 rtx t;
1420 gcc_assert (ready->n_ready);
1421 t = ready->vec[ready->first--];
1422 ready->n_ready--;
1423 if (DEBUG_INSN_P (t))
1424 ready->n_debug--;
1425 /* If the queue becomes empty, reset it. */
1426 if (ready->n_ready == 0)
1427 ready->first = ready->veclen - 1;
1429 gcc_assert (QUEUE_INDEX (t) == QUEUE_READY);
1430 QUEUE_INDEX (t) = QUEUE_NOWHERE;
1432 return t;
1435 /* The following code implements multi-pass scheduling for the first
1436 cycle. In other words, we will try to choose the ready insn which
1437 permits starting the maximum number of insns on the same cycle. */
1439 /* Return a pointer to the element INDEX from the ready list. INDEX for
1440 insn with the highest priority is 0, and the lowest priority has
1441 N_READY - 1. */
1444 ready_element (struct ready_list *ready, int index)
1446 gcc_assert (ready->n_ready && index < ready->n_ready);
1448 return ready->vec[ready->first - index];
1451 /* Remove the element INDEX from the ready list and return it. INDEX
1452 for insn with the highest priority is 0, and the lowest priority
1453 has N_READY - 1. */
1455 HAIFA_INLINE static rtx
1456 ready_remove (struct ready_list *ready, int index)
1458 rtx t;
1459 int i;
1461 if (index == 0)
1462 return ready_remove_first (ready);
1463 gcc_assert (ready->n_ready && index < ready->n_ready);
1464 t = ready->vec[ready->first - index];
1465 ready->n_ready--;
1466 if (DEBUG_INSN_P (t))
1467 ready->n_debug--;
1468 for (i = index; i < ready->n_ready; i++)
1469 ready->vec[ready->first - i] = ready->vec[ready->first - i - 1];
1470 QUEUE_INDEX (t) = QUEUE_NOWHERE;
1471 return t;
1474 /* Remove INSN from the ready list. */
1475 static void
1476 ready_remove_insn (rtx insn)
1478 int i;
1480 for (i = 0; i < readyp->n_ready; i++)
1481 if (ready_element (readyp, i) == insn)
1483 ready_remove (readyp, i);
1484 return;
1486 gcc_unreachable ();
1489 /* Sort the ready list READY by ascending priority, using the SCHED_SORT
1490 macro. */
1492 void
1493 ready_sort (struct ready_list *ready)
1495 int i;
1496 rtx *first = ready_lastpos (ready);
1498 if (sched_pressure_p)
1500 for (i = 0; i < ready->n_ready; i++)
1501 setup_insn_reg_pressure_info (first[i]);
1503 SCHED_SORT (first, ready->n_ready);
1506 /* PREV is an insn that is ready to execute. Adjust its priority if that
1507 will help shorten or lengthen register lifetimes as appropriate. Also
1508 provide a hook for the target to tweak itself. */
1510 HAIFA_INLINE static void
1511 adjust_priority (rtx prev)
1513 /* ??? There used to be code here to try and estimate how an insn
1514 affected register lifetimes, but it did it by looking at REG_DEAD
1515 notes, which we removed in schedule_region. Nor did it try to
1516 take into account register pressure or anything useful like that.
1518 Revisit when we have a machine model to work with and not before. */
1520 if (targetm.sched.adjust_priority)
1521 INSN_PRIORITY (prev) =
1522 targetm.sched.adjust_priority (prev, INSN_PRIORITY (prev));
1525 /* Advance DFA state STATE on one cycle. */
1526 void
1527 advance_state (state_t state)
1529 if (targetm.sched.dfa_pre_advance_cycle)
1530 targetm.sched.dfa_pre_advance_cycle ();
1532 if (targetm.sched.dfa_pre_cycle_insn)
1533 state_transition (state,
1534 targetm.sched.dfa_pre_cycle_insn ());
1536 state_transition (state, NULL);
1538 if (targetm.sched.dfa_post_cycle_insn)
1539 state_transition (state,
1540 targetm.sched.dfa_post_cycle_insn ());
1542 if (targetm.sched.dfa_post_advance_cycle)
1543 targetm.sched.dfa_post_advance_cycle ();
1546 /* Advance time on one cycle. */
1547 HAIFA_INLINE static void
1548 advance_one_cycle (void)
1550 advance_state (curr_state);
1551 if (sched_verbose >= 6)
1552 fprintf (sched_dump, ";;\tAdvanced a state.\n");
1555 /* Clock at which the previous instruction was issued. */
1556 static int last_clock_var;
1558 /* Update register pressure after scheduling INSN. */
1559 static void
1560 update_register_pressure (rtx insn)
1562 struct reg_use_data *use;
1563 struct reg_set_data *set;
1565 for (use = INSN_REG_USE_LIST (insn); use != NULL; use = use->next_insn_use)
1566 if (dying_use_p (use) && bitmap_bit_p (curr_reg_live, use->regno))
1567 mark_regno_birth_or_death (use->regno, false);
1568 for (set = INSN_REG_SET_LIST (insn); set != NULL; set = set->next_insn_set)
1569 mark_regno_birth_or_death (set->regno, true);
1572 /* Set up or update (if UPDATE_P) max register pressure (see its
1573 meaning in sched-int.h::_haifa_insn_data) for all current BB insns
1574 after insn AFTER. */
1575 static void
1576 setup_insn_max_reg_pressure (rtx after, bool update_p)
1578 int i, p;
1579 bool eq_p;
1580 rtx insn;
1581 static int max_reg_pressure[N_REG_CLASSES];
1583 save_reg_pressure ();
1584 for (i = 0; i < ira_reg_class_cover_size; i++)
1585 max_reg_pressure[ira_reg_class_cover[i]]
1586 = curr_reg_pressure[ira_reg_class_cover[i]];
1587 for (insn = NEXT_INSN (after);
1588 insn != NULL_RTX && BLOCK_FOR_INSN (insn) == BLOCK_FOR_INSN (after);
1589 insn = NEXT_INSN (insn))
1590 if (NONDEBUG_INSN_P (insn))
1592 eq_p = true;
1593 for (i = 0; i < ira_reg_class_cover_size; i++)
1595 p = max_reg_pressure[ira_reg_class_cover[i]];
1596 if (INSN_MAX_REG_PRESSURE (insn)[i] != p)
1598 eq_p = false;
1599 INSN_MAX_REG_PRESSURE (insn)[i]
1600 = max_reg_pressure[ira_reg_class_cover[i]];
1603 if (update_p && eq_p)
1604 break;
1605 update_register_pressure (insn);
1606 for (i = 0; i < ira_reg_class_cover_size; i++)
1607 if (max_reg_pressure[ira_reg_class_cover[i]]
1608 < curr_reg_pressure[ira_reg_class_cover[i]])
1609 max_reg_pressure[ira_reg_class_cover[i]]
1610 = curr_reg_pressure[ira_reg_class_cover[i]];
1612 restore_reg_pressure ();
1615 /* Update the current register pressure after scheduling INSN. Update
1616 also max register pressure for unscheduled insns of the current
1617 BB. */
1618 static void
1619 update_reg_and_insn_max_reg_pressure (rtx insn)
1621 int i;
1622 int before[N_REG_CLASSES];
1624 for (i = 0; i < ira_reg_class_cover_size; i++)
1625 before[i] = curr_reg_pressure[ira_reg_class_cover[i]];
1626 update_register_pressure (insn);
1627 for (i = 0; i < ira_reg_class_cover_size; i++)
1628 if (curr_reg_pressure[ira_reg_class_cover[i]] != before[i])
1629 break;
1630 if (i < ira_reg_class_cover_size)
1631 setup_insn_max_reg_pressure (insn, true);
1634 /* Set up register pressure at the beginning of basic block BB whose
1635 insns start after insn AFTER. Also set up max register pressure
1636 for all insns of the basic block. */
1637 void
1638 sched_setup_bb_reg_pressure_info (basic_block bb, rtx after)
1640 gcc_assert (sched_pressure_p);
1641 initiate_bb_reg_pressure_info (bb);
1642 setup_insn_max_reg_pressure (after, false);
1645 /* INSN is the "currently executing insn". Launch each insn which was
1646 waiting on INSN. READY is the ready list which contains the insns
1647 that are ready to fire. CLOCK is the current cycle. The function
1648 returns necessary cycle advance after issuing the insn (it is not
1649 zero for insns in a schedule group). */
1651 static int
1652 schedule_insn (rtx insn)
1654 sd_iterator_def sd_it;
1655 dep_t dep;
1656 int i;
1657 int advance = 0;
1659 if (sched_verbose >= 1)
1661 struct reg_pressure_data *pressure_info;
1662 char buf[2048];
1664 print_insn (buf, insn, 0);
1665 buf[40] = 0;
1666 fprintf (sched_dump, ";;\t%3i--> %-40s:", clock_var, buf);
1668 if (recog_memoized (insn) < 0)
1669 fprintf (sched_dump, "nothing");
1670 else
1671 print_reservation (sched_dump, insn);
1672 pressure_info = INSN_REG_PRESSURE (insn);
1673 if (pressure_info != NULL)
1675 fputc (':', sched_dump);
1676 for (i = 0; i < ira_reg_class_cover_size; i++)
1677 fprintf (sched_dump, "%s%+d(%d)",
1678 reg_class_names[ira_reg_class_cover[i]],
1679 pressure_info[i].set_increase, pressure_info[i].change);
1681 fputc ('\n', sched_dump);
1684 if (sched_pressure_p)
1685 update_reg_and_insn_max_reg_pressure (insn);
1687 /* Scheduling instruction should have all its dependencies resolved and
1688 should have been removed from the ready list. */
1689 gcc_assert (sd_lists_empty_p (insn, SD_LIST_BACK));
1691 gcc_assert (QUEUE_INDEX (insn) == QUEUE_NOWHERE);
1692 QUEUE_INDEX (insn) = QUEUE_SCHEDULED;
1694 gcc_assert (INSN_TICK (insn) >= MIN_TICK);
1695 if (INSN_TICK (insn) > clock_var)
1696 /* INSN has been prematurely moved from the queue to the ready list.
1697 This is possible only if following flag is set. */
1698 gcc_assert (flag_sched_stalled_insns);
1700 /* ??? Probably, if INSN is scheduled prematurely, we should leave
1701 INSN_TICK untouched. This is a machine-dependent issue, actually. */
1702 INSN_TICK (insn) = clock_var;
1704 /* Update dependent instructions. */
1705 for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
1706 sd_iterator_cond (&sd_it, &dep);)
1708 rtx next = DEP_CON (dep);
1710 /* Resolve the dependence between INSN and NEXT.
1711 sd_resolve_dep () moves current dep to another list thus
1712 advancing the iterator. */
1713 sd_resolve_dep (sd_it);
1715 if (!IS_SPECULATION_BRANCHY_CHECK_P (insn))
1717 int effective_cost;
1719 effective_cost = try_ready (next);
1721 if (effective_cost >= 0
1722 && SCHED_GROUP_P (next)
1723 && advance < effective_cost)
1724 advance = effective_cost;
1726 else
1727 /* Check always has only one forward dependence (to the first insn in
1728 the recovery block), therefore, this will be executed only once. */
1730 gcc_assert (sd_lists_empty_p (insn, SD_LIST_FORW));
1731 fix_recovery_deps (RECOVERY_BLOCK (insn));
1735 /* This is the place where the scheduler basically doesn't need backward and
1736 forward dependencies for INSN anymore. Nevertheless they are used in
1737 heuristics in rank_for_schedule (), early_queue_to_ready () and in
1738 some targets (e.g. rs6000). Thus the earliest place where we *can*
1739 remove dependencies is after targetm.sched.md_finish () call in
1740 schedule_block (). But, on the other hand, the safest place to remove
1741 dependencies is when we are finishing scheduling entire region. As we
1742 don't generate [many] dependencies during scheduling itself, we won't
1743 need memory until beginning of next region.
1744 Bottom line: Dependencies are removed for all insns in the end of
1745 scheduling the region. */
1747 /* Annotate the instruction with issue information -- TImode
1748 indicates that the instruction is expected not to be able
1749 to issue on the same cycle as the previous insn. A machine
1750 may use this information to decide how the instruction should
1751 be aligned. */
1752 if (issue_rate > 1
1753 && GET_CODE (PATTERN (insn)) != USE
1754 && GET_CODE (PATTERN (insn)) != CLOBBER
1755 && !DEBUG_INSN_P (insn))
1757 if (reload_completed)
1758 PUT_MODE (insn, clock_var > last_clock_var ? TImode : VOIDmode);
1759 last_clock_var = clock_var;
1762 return advance;
1765 /* Functions for handling of notes. */
1767 /* Insert the INSN note at the end of the notes list. */
1768 static void
1769 add_to_note_list (rtx insn, rtx *note_list_end_p)
1771 PREV_INSN (insn) = *note_list_end_p;
1772 if (*note_list_end_p)
1773 NEXT_INSN (*note_list_end_p) = insn;
1774 *note_list_end_p = insn;
1777 /* Add note list that ends on FROM_END to the end of TO_ENDP. */
1778 void
1779 concat_note_lists (rtx from_end, rtx *to_endp)
1781 rtx from_start;
1783 if (from_end == NULL)
1784 /* It's easy when we have nothing to concat. */
1785 return;
1787 if (*to_endp == NULL)
1788 /* It's also easy when destination is empty. */
1790 *to_endp = from_end;
1791 return;
1794 from_start = from_end;
1795 /* A note list should be traversed via PREV_INSN. */
1796 while (PREV_INSN (from_start) != NULL)
1797 from_start = PREV_INSN (from_start);
1799 add_to_note_list (from_start, to_endp);
1800 *to_endp = from_end;
1803 /* Delete notes beginning with INSN and put them in the chain
1804 of notes ended by NOTE_LIST.
1805 Returns the insn following the notes. */
1806 static rtx
1807 unlink_other_notes (rtx insn, rtx tail)
1809 rtx prev = PREV_INSN (insn);
1811 while (insn != tail && NOTE_NOT_BB_P (insn))
1813 rtx next = NEXT_INSN (insn);
1814 basic_block bb = BLOCK_FOR_INSN (insn);
1816 /* Delete the note from its current position. */
1817 if (prev)
1818 NEXT_INSN (prev) = next;
1819 if (next)
1820 PREV_INSN (next) = prev;
1822 if (bb)
1824 /* Basic block can begin with either LABEL or
1825 NOTE_INSN_BASIC_BLOCK. */
1826 gcc_assert (BB_HEAD (bb) != insn);
1828 /* Check if we are removing last insn in the BB. */
1829 if (BB_END (bb) == insn)
1830 BB_END (bb) = prev;
1833 /* See sched_analyze to see how these are handled. */
1834 if (NOTE_KIND (insn) != NOTE_INSN_EH_REGION_BEG
1835 && NOTE_KIND (insn) != NOTE_INSN_EH_REGION_END)
1836 add_to_note_list (insn, &note_list);
1838 insn = next;
1841 if (insn == tail)
1843 gcc_assert (sel_sched_p ());
1844 return prev;
1847 return insn;
1850 /* Return the head and tail pointers of ebb starting at BEG and ending
1851 at END. */
1852 void
1853 get_ebb_head_tail (basic_block beg, basic_block end, rtx *headp, rtx *tailp)
1855 rtx beg_head = BB_HEAD (beg);
1856 rtx beg_tail = BB_END (beg);
1857 rtx end_head = BB_HEAD (end);
1858 rtx end_tail = BB_END (end);
1860 /* Don't include any notes or labels at the beginning of the BEG
1861 basic block, or notes at the end of the END basic blocks. */
1863 if (LABEL_P (beg_head))
1864 beg_head = NEXT_INSN (beg_head);
1866 while (beg_head != beg_tail)
1867 if (NOTE_P (beg_head) || BOUNDARY_DEBUG_INSN_P (beg_head))
1868 beg_head = NEXT_INSN (beg_head);
1869 else
1870 break;
1872 *headp = beg_head;
1874 if (beg == end)
1875 end_head = beg_head;
1876 else if (LABEL_P (end_head))
1877 end_head = NEXT_INSN (end_head);
1879 while (end_head != end_tail)
1880 if (NOTE_P (end_tail) || BOUNDARY_DEBUG_INSN_P (end_tail))
1881 end_tail = PREV_INSN (end_tail);
1882 else
1883 break;
1885 *tailp = end_tail;
1888 /* Return nonzero if there are no real insns in the range [ HEAD, TAIL ]. */
1891 no_real_insns_p (const_rtx head, const_rtx tail)
1893 while (head != NEXT_INSN (tail))
1895 if (!NOTE_P (head) && !LABEL_P (head)
1896 && !BOUNDARY_DEBUG_INSN_P (head))
1897 return 0;
1898 head = NEXT_INSN (head);
1900 return 1;
1903 /* Delete notes between HEAD and TAIL and put them in the chain
1904 of notes ended by NOTE_LIST. */
1905 static void
1906 rm_other_notes (rtx head, rtx tail)
1908 rtx next_tail;
1909 rtx insn;
1911 note_list = 0;
1912 if (head == tail && (! INSN_P (head)))
1913 return;
1915 next_tail = NEXT_INSN (tail);
1916 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
1918 rtx prev;
1920 /* Farm out notes, and maybe save them in NOTE_LIST.
1921 This is needed to keep the debugger from
1922 getting completely deranged. */
1923 if (NOTE_NOT_BB_P (insn))
1925 prev = insn;
1926 insn = unlink_other_notes (insn, next_tail);
1928 gcc_assert ((sel_sched_p ()
1929 || prev != tail) && prev != head && insn != next_tail);
1934 /* Same as above, but also process REG_SAVE_NOTEs of HEAD. */
1935 void
1936 remove_notes (rtx head, rtx tail)
1938 /* rm_other_notes only removes notes which are _inside_ the
1939 block---that is, it won't remove notes before the first real insn
1940 or after the last real insn of the block. So if the first insn
1941 has a REG_SAVE_NOTE which would otherwise be emitted before the
1942 insn, it is redundant with the note before the start of the
1943 block, and so we have to take it out. */
1944 if (INSN_P (head))
1946 rtx note;
1948 for (note = REG_NOTES (head); note; note = XEXP (note, 1))
1949 if (REG_NOTE_KIND (note) == REG_SAVE_NOTE)
1950 remove_note (head, note);
1953 /* Remove remaining note insns from the block, save them in
1954 note_list. These notes are restored at the end of
1955 schedule_block (). */
1956 rm_other_notes (head, tail);
1959 /* Restore-other-notes: NOTE_LIST is the end of a chain of notes
1960 previously found among the insns. Insert them just before HEAD. */
1962 restore_other_notes (rtx head, basic_block head_bb)
1964 if (note_list != 0)
1966 rtx note_head = note_list;
1968 if (head)
1969 head_bb = BLOCK_FOR_INSN (head);
1970 else
1971 head = NEXT_INSN (bb_note (head_bb));
1973 while (PREV_INSN (note_head))
1975 set_block_for_insn (note_head, head_bb);
1976 note_head = PREV_INSN (note_head);
1978 /* In the above loop we've missed this note. */
1979 set_block_for_insn (note_head, head_bb);
1981 PREV_INSN (note_head) = PREV_INSN (head);
1982 NEXT_INSN (PREV_INSN (head)) = note_head;
1983 PREV_INSN (head) = note_list;
1984 NEXT_INSN (note_list) = head;
1986 if (BLOCK_FOR_INSN (head) != head_bb)
1987 BB_END (head_bb) = note_list;
1989 head = note_head;
1992 return head;
1995 /* Move insns that became ready to fire from queue to ready list. */
1997 static void
1998 queue_to_ready (struct ready_list *ready)
2000 rtx insn;
2001 rtx link;
2002 rtx skip_insn;
2004 q_ptr = NEXT_Q (q_ptr);
2006 if (dbg_cnt (sched_insn) == false)
2008 /* If the debug counter is activated, do not requeue the insn right
2009 after last_scheduled_insn. */
2010 skip_insn = next_nonnote_insn (last_scheduled_insn);
2011 while (skip_insn && DEBUG_INSN_P (skip_insn))
2012 skip_insn = next_nonnote_insn (skip_insn);
2014 else
2015 skip_insn = NULL_RTX;
2017 /* Add all pending insns that can be scheduled without stalls to the
2018 ready list. */
2019 for (link = insn_queue[q_ptr]; link; link = XEXP (link, 1))
2021 insn = XEXP (link, 0);
2022 q_size -= 1;
2024 if (sched_verbose >= 2)
2025 fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
2026 (*current_sched_info->print_insn) (insn, 0));
2028 /* If the ready list is full, delay the insn for 1 cycle.
2029 See the comment in schedule_block for the rationale. */
2030 if (!reload_completed
2031 && ready->n_ready - ready->n_debug > MAX_SCHED_READY_INSNS
2032 && !SCHED_GROUP_P (insn)
2033 && insn != skip_insn)
2035 if (sched_verbose >= 2)
2036 fprintf (sched_dump, "requeued because ready full\n");
2037 queue_insn (insn, 1);
2039 else
2041 ready_add (ready, insn, false);
2042 if (sched_verbose >= 2)
2043 fprintf (sched_dump, "moving to ready without stalls\n");
2046 free_INSN_LIST_list (&insn_queue[q_ptr]);
2048 /* If there are no ready insns, stall until one is ready and add all
2049 of the pending insns at that point to the ready list. */
2050 if (ready->n_ready == 0)
2052 int stalls;
2054 for (stalls = 1; stalls <= max_insn_queue_index; stalls++)
2056 if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
2058 for (; link; link = XEXP (link, 1))
2060 insn = XEXP (link, 0);
2061 q_size -= 1;
2063 if (sched_verbose >= 2)
2064 fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
2065 (*current_sched_info->print_insn) (insn, 0));
2067 ready_add (ready, insn, false);
2068 if (sched_verbose >= 2)
2069 fprintf (sched_dump, "moving to ready with %d stalls\n", stalls);
2071 free_INSN_LIST_list (&insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]);
2073 advance_one_cycle ();
2075 break;
2078 advance_one_cycle ();
2081 q_ptr = NEXT_Q_AFTER (q_ptr, stalls);
2082 clock_var += stalls;
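/* An illustrative, self-contained sketch (hypothetical names, not part of
   the scheduler) of the circular stall queue manipulated above: an entry
   delayed by N cycles goes into bucket (head + N) % size, and advancing
   one cycle just advances the head index.  The real insn_queue chains an
   INSN_LIST per bucket; here each bucket holds a single entry.  */
#if 0
#define EX_QUEUE_SIZE 8                 /* would be max_insn_queue_index + 1 */

struct ex_queue
{
  void *bucket[EX_QUEUE_SIZE];          /* one pending entry per future cycle */
  int head;                             /* bucket of the current cycle */
};

/* Record that X becomes ready DELAY cycles from now.  */
static void
ex_queue_insn (struct ex_queue *q, void *x, int delay)
{
  q->bucket[(q->head + delay) % EX_QUEUE_SIZE] = x;
}

/* Advance one cycle and return whatever became ready on it.  */
static void *
ex_queue_advance (struct ex_queue *q)
{
  void *ready;

  q->head = (q->head + 1) % EX_QUEUE_SIZE;
  ready = q->bucket[q->head];
  q->bucket[q->head] = NULL;
  return ready;
}
#endif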
2086 /* Used by early_queue_to_ready. Determines whether it is "ok" to
2087 prematurely move INSN from the queue to the ready list. Currently,
2088 if a target defines the hook 'is_costly_dependence', this function
2089 uses the hook to check whether there exist any dependences which are
2090 considered costly by the target, between INSN and other insns that
2091 have already been scheduled. Dependences are checked up to Y cycles
2092 back, with a default of Y=1; the flag -fsched-stalled-insns-dep=Y allows
2093 controlling this value.
2094 (Other considerations could be taken into account instead, or in
2095 addition, depending on user flags and target hooks.) */
2097 static bool
2098 ok_for_early_queue_removal (rtx insn)
2100 int n_cycles;
2101 rtx prev_insn = last_scheduled_insn;
2103 if (targetm.sched.is_costly_dependence)
2105 for (n_cycles = flag_sched_stalled_insns_dep; n_cycles; n_cycles--)
2107 for ( ; prev_insn; prev_insn = PREV_INSN (prev_insn))
2109 int cost;
2111 if (prev_insn == current_sched_info->prev_head)
2113 prev_insn = NULL;
2114 break;
2117 if (!NOTE_P (prev_insn))
2119 dep_t dep;
2121 dep = sd_find_dep_between (prev_insn, insn, true);
2123 if (dep != NULL)
2125 cost = dep_cost (dep);
2127 if (targetm.sched.is_costly_dependence (dep, cost,
2128 flag_sched_stalled_insns_dep - n_cycles))
2129 return false;
2133 if (GET_MODE (prev_insn) == TImode) /* end of dispatch group */
2134 break;
2137 if (!prev_insn)
2138 break;
2139 prev_insn = PREV_INSN (prev_insn);
2143 return true;
2147 /* Remove insns from the queue, before they become "ready" with respect
2148 to FU latency considerations. */
2150 static int
2151 early_queue_to_ready (state_t state, struct ready_list *ready)
2153 rtx insn;
2154 rtx link;
2155 rtx next_link;
2156 rtx prev_link;
2157 bool move_to_ready;
2158 int cost;
2159 state_t temp_state = alloca (dfa_state_size);
2160 int stalls;
2161 int insns_removed = 0;
2164 Flag '-fsched-stalled-insns=X' determines the aggressiveness of this
2165 function:
2167 X == 0: There is no limit on how many queued insns can be removed
2168 prematurely. (flag_sched_stalled_insns = -1).
2170 X >= 1: Only X queued insns can be removed prematurely in each
2171 invocation. (flag_sched_stalled_insns = X).
2173 Otherwise: Early queue removal is disabled.
2174 (flag_sched_stalled_insns = 0)
2177 if (! flag_sched_stalled_insns)
2178 return 0;
2180 for (stalls = 0; stalls <= max_insn_queue_index; stalls++)
2182 if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
2184 if (sched_verbose > 6)
2185 fprintf (sched_dump, ";; look at index %d + %d\n", q_ptr, stalls);
2187 prev_link = 0;
2188 while (link)
2190 next_link = XEXP (link, 1);
2191 insn = XEXP (link, 0);
2192 if (insn && sched_verbose > 6)
2193 print_rtl_single (sched_dump, insn);
2195 memcpy (temp_state, state, dfa_state_size);
2196 if (recog_memoized (insn) < 0)
2197 /* Use a non-negative cost to indicate that the insn is not
2198 ready yet, to avoid an infinite Q->R->Q->R... loop. */
2199 cost = 0;
2200 else
2201 cost = state_transition (temp_state, insn);
2203 if (sched_verbose >= 6)
2204 fprintf (sched_dump, "transition cost = %d\n", cost);
2206 move_to_ready = false;
2207 if (cost < 0)
2209 move_to_ready = ok_for_early_queue_removal (insn);
2210 if (move_to_ready == true)
2212 /* move from Q to R */
2213 q_size -= 1;
2214 ready_add (ready, insn, false);
2216 if (prev_link)
2217 XEXP (prev_link, 1) = next_link;
2218 else
2219 insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = next_link;
2221 free_INSN_LIST_node (link);
2223 if (sched_verbose >= 2)
2224 fprintf (sched_dump, ";;\t\tEarly Q-->Ready: insn %s\n",
2225 (*current_sched_info->print_insn) (insn, 0));
2227 insns_removed++;
2228 if (insns_removed == flag_sched_stalled_insns)
2229 /* Remove no more than flag_sched_stalled_insns insns
2230 from Q at a time. */
2231 return insns_removed;
2235 if (move_to_ready == false)
2236 prev_link = link;
2238 link = next_link;
2239 } /* while link */
2240 } /* if link */
2242 } /* for stalls.. */
2244 return insns_removed;
2248 /* Print the ready list for debugging purposes. Callable from debugger. */
2250 static void
2251 debug_ready_list (struct ready_list *ready)
2253 rtx *p;
2254 int i;
2256 if (ready->n_ready == 0)
2258 fprintf (sched_dump, "\n");
2259 return;
2262 p = ready_lastpos (ready);
2263 for (i = 0; i < ready->n_ready; i++)
2265 fprintf (sched_dump, " %s:%d",
2266 (*current_sched_info->print_insn) (p[i], 0),
2267 INSN_LUID (p[i]));
2268 if (sched_pressure_p)
2269 fprintf (sched_dump, "(cost=%d",
2270 INSN_REG_PRESSURE_EXCESS_COST_CHANGE (p[i]));
2271 if (INSN_TICK (p[i]) > clock_var)
2272 fprintf (sched_dump, ":delay=%d", INSN_TICK (p[i]) - clock_var);
2273 if (sched_pressure_p)
2274 fprintf (sched_dump, ")");
2276 fprintf (sched_dump, "\n");
2279 /* Search INSN for REG_SAVE_NOTE note pairs for
2280 NOTE_INSN_EH_REGION_{BEG,END}, and convert them back into
2281 NOTEs. The REG_SAVE_NOTE note following the first one contains the
2282 saved value for NOTE_BLOCK_NUMBER, which is useful for
2283 NOTE_INSN_EH_REGION_{BEG,END} NOTEs. */
2284 void
2285 reemit_notes (rtx insn)
2287 rtx note, last = insn;
2289 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
2291 if (REG_NOTE_KIND (note) == REG_SAVE_NOTE)
2293 enum insn_note note_type = (enum insn_note) INTVAL (XEXP (note, 0));
2295 last = emit_note_before (note_type, last);
2296 remove_note (insn, note);
2301 /* Move INSN. Reemit notes if needed. Update CFG, if needed. */
2302 static void
2303 move_insn (rtx insn, rtx last, rtx nt)
2305 if (PREV_INSN (insn) != last)
2307 basic_block bb;
2308 rtx note;
2309 int jump_p = 0;
2311 bb = BLOCK_FOR_INSN (insn);
2313 /* BB_HEAD is either LABEL or NOTE. */
2314 gcc_assert (BB_HEAD (bb) != insn);
2316 if (BB_END (bb) == insn)
2317 /* If this is the last instruction in the BB, move the end
2318 marker one instruction up. */
2320 /* Jumps are always placed at the end of basic block. */
2321 jump_p = control_flow_insn_p (insn);
2323 gcc_assert (!jump_p
2324 || ((common_sched_info->sched_pass_id == SCHED_RGN_PASS)
2325 && IS_SPECULATION_BRANCHY_CHECK_P (insn))
2326 || (common_sched_info->sched_pass_id
2327 == SCHED_EBB_PASS));
2329 gcc_assert (BLOCK_FOR_INSN (PREV_INSN (insn)) == bb);
2331 BB_END (bb) = PREV_INSN (insn);
2334 gcc_assert (BB_END (bb) != last);
2336 if (jump_p)
2337 /* We move the block note along with jump. */
2339 gcc_assert (nt);
2341 note = NEXT_INSN (insn);
2342 while (NOTE_NOT_BB_P (note) && note != nt)
2343 note = NEXT_INSN (note);
2345 if (note != nt
2346 && (LABEL_P (note)
2347 || BARRIER_P (note)))
2348 note = NEXT_INSN (note);
2350 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
2352 else
2353 note = insn;
2355 NEXT_INSN (PREV_INSN (insn)) = NEXT_INSN (note);
2356 PREV_INSN (NEXT_INSN (note)) = PREV_INSN (insn);
2358 NEXT_INSN (note) = NEXT_INSN (last);
2359 PREV_INSN (NEXT_INSN (last)) = note;
2361 NEXT_INSN (last) = insn;
2362 PREV_INSN (insn) = last;
2364 bb = BLOCK_FOR_INSN (last);
2366 if (jump_p)
2368 fix_jump_move (insn);
2370 if (BLOCK_FOR_INSN (insn) != bb)
2371 move_block_after_check (insn);
2373 gcc_assert (BB_END (bb) == last);
2376 df_insn_change_bb (insn, bb);
2378 /* Update BB_END, if needed. */
2379 if (BB_END (bb) == last)
2380 BB_END (bb) = insn;
2383 SCHED_GROUP_P (insn) = 0;
2386 /* Return true if scheduling INSN will finish current clock cycle. */
2387 static bool
2388 insn_finishes_cycle_p (rtx insn)
2390 if (SCHED_GROUP_P (insn))
2391 /* After issuing INSN, the rest of the sched_group will be forced to
2392 issue in order. Don't make any plans for the rest of the cycle. */
2393 return true;
2395 /* Finishing the block will, apparently, finish the cycle. */
2396 if (current_sched_info->insn_finishes_block_p
2397 && current_sched_info->insn_finishes_block_p (insn))
2398 return true;
2400 return false;
2403 /* The following structure describes an entry of the stack of choices. */
2404 struct choice_entry
2406 /* Ordinal number of the issued insn in the ready queue. */
2407 int index;
2408 /* The number of remaining insns whose issue we should still try. */
2409 int rest;
2410 /* The number of issued essential insns. */
2411 int n;
2412 /* State after issuing the insn. */
2413 state_t state;
2416 /* The following array is used to implement a stack of choices used in
2417 function max_issue. */
2418 static struct choice_entry *choice_stack;
2420 /* The following variable is the number of essential insns issued on
2421 the current cycle. An insn is essential if it changes the
2422 processor's state. */
2423 int cycle_issued_insns;
2425 /* This holds the value of the target dfa_lookahead hook. */
2426 int dfa_lookahead;
2428 /* The following variable is the maximal number of tries of issuing
2429 insns for the first-cycle multipass insn scheduling. We define
2430 this value as constant*(DFA_LOOKAHEAD**ISSUE_RATE). We would not
2431 need this constraint if all real insns (with non-negative codes)
2432 had reservations, because in that case the algorithm complexity is
2433 O(DFA_LOOKAHEAD**ISSUE_RATE). Unfortunately, the dfa descriptions
2434 might be incomplete and such insns might occur. For such
2435 descriptions, the complexity of the algorithm (without the constraint)
2436 could reach DFA_LOOKAHEAD ** N, where N is the queue length. */
2437 static int max_lookahead_tries;
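/* For intuition (hypothetical numbers): with dfa_lookahead == 4 and
   issue_rate == 2, the initialization in max_issue () below gives
   100 * 4 * 4 = 1600 tries at most.  */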
2439 /* The following value is the value of the hook
2440 `first_cycle_multipass_dfa_lookahead' at the last call of
2441 `max_issue'. */
2442 static int cached_first_cycle_multipass_dfa_lookahead = 0;
2444 /* The following value is the value of `issue_rate' at the last call
2445 of `sched_init'. */
2446 static int cached_issue_rate = 0;
2448 /* The following function returns the maximal (or close to maximal)
2449 number of insns which can be issued on the same cycle, one of
2450 which is the insn with the best rank (the first insn in READY). To
2451 do this, the function tries different samples of ready insns. READY
2452 is the current queue `ready'. The global array READY_TRY reflects what
2453 insns are already issued in this try. MAX_POINTS is the sum of points
2454 of all instructions in READY. The function stops immediately
2455 if it reaches a solution in which all instructions can be issued.
2456 INDEX will contain the index of the best insn in READY. The following
2457 function is used only for first-cycle multipass scheduling.
2459 PRIVILEGED_N >= 0
2461 This function expects recognized insns only. All USEs,
2462 CLOBBERs, etc. must be filtered elsewhere. */
2464 max_issue (struct ready_list *ready, int privileged_n, state_t state,
2465 int *index)
2467 int n, i, all, n_ready, best, delay, tries_num, points = -1, max_points;
2468 int more_issue;
2469 struct choice_entry *top;
2470 rtx insn;
2472 n_ready = ready->n_ready;
2473 gcc_assert (dfa_lookahead >= 1 && privileged_n >= 0
2474 && privileged_n <= n_ready);
2476 /* Init MAX_LOOKAHEAD_TRIES. */
2477 if (cached_first_cycle_multipass_dfa_lookahead != dfa_lookahead)
2479 cached_first_cycle_multipass_dfa_lookahead = dfa_lookahead;
2480 max_lookahead_tries = 100;
2481 for (i = 0; i < issue_rate; i++)
2482 max_lookahead_tries *= dfa_lookahead;
2485 /* Init max_points. */
2486 max_points = 0;
2487 more_issue = issue_rate - cycle_issued_insns;
2489 /* ??? We used to assert here that we never issue more insns than issue_rate.
2490 However, some targets (e.g. MIPS/SB1) claim lower issue rate than can be
2491 achieved to get better performance. Until these targets are fixed to use
2492 scheduler hooks to manipulate insns priority instead, the assert should
2493 be disabled.
2495 gcc_assert (more_issue >= 0); */
2497 for (i = 0; i < n_ready; i++)
2498 if (!ready_try [i])
2500 if (more_issue-- > 0)
2501 max_points += ISSUE_POINTS (ready_element (ready, i));
2502 else
2503 break;
2506 /* The number of the issued insns in the best solution. */
2507 best = 0;
2509 top = choice_stack;
2511 /* Set initial state of the search. */
2512 memcpy (top->state, state, dfa_state_size);
2513 top->rest = dfa_lookahead;
2514 top->n = 0;
2516 /* Count the number of the insns to search among. */
2517 for (all = i = 0; i < n_ready; i++)
2518 if (!ready_try [i])
2519 all++;
2521 /* I is the index of the insn to try next. */
2522 i = 0;
2523 tries_num = 0;
2524 for (;;)
2526 if (/* If we've reached a dead end or searched enough of what we have
2527 been asked... */
2528 top->rest == 0
2529 /* Or have nothing else to try. */
2530 || i >= n_ready)
2532 /* ??? (... || i == n_ready). */
2533 gcc_assert (i <= n_ready);
2535 if (top == choice_stack)
2536 break;
2538 if (best < top - choice_stack)
2540 if (privileged_n)
2542 n = privileged_n;
2543 /* Try to find issued privileged insn. */
2544 while (n && !ready_try[--n]);
2547 if (/* If all insns are equally good... */
2548 privileged_n == 0
2549 /* Or a privileged insn will be issued. */
2550 || ready_try[n])
2551 /* Then we have a solution. */
2553 best = top - choice_stack;
2554 /* This is the index of the insn issued first in this
2555 solution. */
2556 *index = choice_stack [1].index;
2557 points = top->n;
2558 if (top->n == max_points || best == all)
2559 break;
2563 /* Set ready-list index to point to the last insn
2564 ('i++' below will advance it to the next insn). */
2565 i = top->index;
2567 /* Backtrack. */
2568 ready_try [i] = 0;
2569 top--;
2570 memcpy (state, top->state, dfa_state_size);
2572 else if (!ready_try [i])
2574 tries_num++;
2575 if (tries_num > max_lookahead_tries)
2576 break;
2577 insn = ready_element (ready, i);
2578 delay = state_transition (state, insn);
2579 if (delay < 0)
2581 if (state_dead_lock_p (state)
2582 || insn_finishes_cycle_p (insn))
2583 /* We won't issue any more instructions in the next
2584 choice_state. */
2585 top->rest = 0;
2586 else
2587 top->rest--;
2589 n = top->n;
2590 if (memcmp (top->state, state, dfa_state_size) != 0)
2591 n += ISSUE_POINTS (insn);
2593 /* Advance to the next choice_entry. */
2594 top++;
2595 /* Initialize it. */
2596 top->rest = dfa_lookahead;
2597 top->index = i;
2598 top->n = n;
2599 memcpy (top->state, state, dfa_state_size);
2601 ready_try [i] = 1;
2602 i = -1;
2606 /* Increase ready-list index. */
2607 i++;
2610 /* Restore the original state of the DFA. */
2611 memcpy (state, choice_stack->state, dfa_state_size);
2613 return best;
2616 /* The following function chooses an insn from READY and modifies
2617 READY. It is used only for first-cycle multipass
2618 scheduling.
2619 Return:
2620 -1 if cycle should be advanced,
2621 0 if INSN_PTR is set to point to the desirable insn,
2622 1 if choose_ready () should be restarted without advancing the cycle. */
2623 static int
2624 choose_ready (struct ready_list *ready, rtx *insn_ptr)
2626 int lookahead;
2628 if (dbg_cnt (sched_insn) == false)
2630 rtx insn;
2632 insn = next_nonnote_insn (last_scheduled_insn);
2634 if (QUEUE_INDEX (insn) == QUEUE_READY)
2635 /* INSN is in the ready_list. */
2637 ready_remove_insn (insn);
2638 *insn_ptr = insn;
2639 return 0;
2642 /* INSN is in the queue. Advance cycle to move it to the ready list. */
2643 return -1;
2646 lookahead = 0;
2648 if (targetm.sched.first_cycle_multipass_dfa_lookahead)
2649 lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
2650 if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0))
2651 || DEBUG_INSN_P (ready_element (ready, 0)))
2653 *insn_ptr = ready_remove_first (ready);
2654 return 0;
2656 else
2658 /* Try to choose the better insn. */
2659 int index = 0, i, n;
2660 rtx insn;
2661 int try_data = 1, try_control = 1;
2662 ds_t ts;
2664 insn = ready_element (ready, 0);
2665 if (INSN_CODE (insn) < 0)
2667 *insn_ptr = ready_remove_first (ready);
2668 return 0;
2671 if (spec_info
2672 && spec_info->flags & (PREFER_NON_DATA_SPEC
2673 | PREFER_NON_CONTROL_SPEC))
2675 for (i = 0, n = ready->n_ready; i < n; i++)
2677 rtx x;
2678 ds_t s;
2680 x = ready_element (ready, i);
2681 s = TODO_SPEC (x);
2683 if (spec_info->flags & PREFER_NON_DATA_SPEC
2684 && !(s & DATA_SPEC))
2686 try_data = 0;
2687 if (!(spec_info->flags & PREFER_NON_CONTROL_SPEC)
2688 || !try_control)
2689 break;
2692 if (spec_info->flags & PREFER_NON_CONTROL_SPEC
2693 && !(s & CONTROL_SPEC))
2695 try_control = 0;
2696 if (!(spec_info->flags & PREFER_NON_DATA_SPEC) || !try_data)
2697 break;
2702 ts = TODO_SPEC (insn);
2703 if ((ts & SPECULATIVE)
2704 && (((!try_data && (ts & DATA_SPEC))
2705 || (!try_control && (ts & CONTROL_SPEC)))
2706 || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard_spec
2707 && !targetm.sched
2708 .first_cycle_multipass_dfa_lookahead_guard_spec (insn))))
2709 /* Discard the speculative instruction that stands first in the
2710 ready list. */
2712 change_queue_index (insn, 1);
2713 return 1;
2716 ready_try[0] = 0;
2718 for (i = 1; i < ready->n_ready; i++)
2720 insn = ready_element (ready, i);
2722 ready_try [i]
2723 = ((!try_data && (TODO_SPEC (insn) & DATA_SPEC))
2724 || (!try_control && (TODO_SPEC (insn) & CONTROL_SPEC)));
2727 /* Let the target filter the search space. */
2728 for (i = 1; i < ready->n_ready; i++)
2729 if (!ready_try[i])
2731 insn = ready_element (ready, i);
2733 #ifdef ENABLE_CHECKING
2734 /* If this insn is recognizable we should have already
2735 recognized it earlier.
2736 ??? Not very clear where this is supposed to be done.
2737 See dep_cost_1. */
2738 gcc_assert (INSN_CODE (insn) >= 0
2739 || recog_memoized (insn) < 0);
2740 #endif
2742 ready_try [i]
2743 = (/* INSN_CODE check can be omitted here as it is also done later
2744 in max_issue (). */
2745 INSN_CODE (insn) < 0
2746 || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard
2747 && !targetm.sched.first_cycle_multipass_dfa_lookahead_guard
2748 (insn)));
2751 if (max_issue (ready, 1, curr_state, &index) == 0)
2753 *insn_ptr = ready_remove_first (ready);
2754 if (sched_verbose >= 4)
2755 fprintf (sched_dump, ";;\t\tChosen insn (but can't issue) : %s \n",
2756 (*current_sched_info->print_insn) (*insn_ptr, 0));
2757 return 0;
2759 else
2761 if (sched_verbose >= 4)
2762 fprintf (sched_dump, ";;\t\tChosen insn : %s\n",
2763 (*current_sched_info->print_insn)
2764 (ready_element (ready, index), 0));
2766 *insn_ptr = ready_remove (ready, index);
2767 return 0;
2772 /* Use forward list scheduling to rearrange insns of block pointed to by
2773 TARGET_BB, possibly bringing insns from subsequent blocks in the same
2774 region. */
2776 void
2777 schedule_block (basic_block *target_bb)
2779 int i, first_cycle_insn_p;
2780 int can_issue_more;
2781 state_t temp_state = NULL; /* It is used for multipass scheduling. */
2782 int sort_p, advance, start_clock_var;
2784 /* Head/tail info for this block. */
2785 rtx prev_head = current_sched_info->prev_head;
2786 rtx next_tail = current_sched_info->next_tail;
2787 rtx head = NEXT_INSN (prev_head);
2788 rtx tail = PREV_INSN (next_tail);
2790 /* We used to have code to avoid getting parameters moved from hard
2791 argument registers into pseudos.
2793 However, it was removed when it proved to be of marginal benefit
2794 and caused problems because schedule_block and compute_forward_dependences
2795 had different notions of what the "head" insn was. */
2797 gcc_assert (head != tail || INSN_P (head));
2799 haifa_recovery_bb_recently_added_p = false;
2801 /* Debug info. */
2802 if (sched_verbose)
2803 dump_new_block_header (0, *target_bb, head, tail);
2805 state_reset (curr_state);
2807 /* Clear the ready list. */
2808 ready.first = ready.veclen - 1;
2809 ready.n_ready = 0;
2810 ready.n_debug = 0;
2812 /* It is used for first cycle multipass scheduling. */
2813 temp_state = alloca (dfa_state_size);
2815 if (targetm.sched.md_init)
2816 targetm.sched.md_init (sched_dump, sched_verbose, ready.veclen);
2818 /* We start inserting insns after PREV_HEAD. */
2819 last_scheduled_insn = prev_head;
2821 gcc_assert ((NOTE_P (last_scheduled_insn)
2822 || BOUNDARY_DEBUG_INSN_P (last_scheduled_insn))
2823 && BLOCK_FOR_INSN (last_scheduled_insn) == *target_bb);
2825 /* Initialize INSN_QUEUE. Q_SIZE is the total number of insns in the
2826 queue. */
2827 q_ptr = 0;
2828 q_size = 0;
2830 insn_queue = XALLOCAVEC (rtx, max_insn_queue_index + 1);
2831 memset (insn_queue, 0, (max_insn_queue_index + 1) * sizeof (rtx));
2833 /* Start just before the beginning of time. */
2834 clock_var = -1;
2836 /* We need the queue and ready lists and clock_var to be initialized
2837 in try_ready () (which is called through init_ready_list ()). */
2838 (*current_sched_info->init_ready_list) ();
2840 /* The algorithm is O(n^2) in the number of ready insns at any given
2841 time in the worst case. Before reload we are more likely to have
2842 big lists so truncate them to a reasonable size. */
2843 if (!reload_completed
2844 && ready.n_ready - ready.n_debug > MAX_SCHED_READY_INSNS)
2846 ready_sort (&ready);
2848 /* Find first free-standing insn past MAX_SCHED_READY_INSNS.
2849 If there are debug insns, we know they're first. */
2850 for (i = MAX_SCHED_READY_INSNS + ready.n_debug; i < ready.n_ready; i++)
2851 if (!SCHED_GROUP_P (ready_element (&ready, i)))
2852 break;
2854 if (sched_verbose >= 2)
2856 fprintf (sched_dump,
2857 ";;\t\tReady list on entry: %d insns\n", ready.n_ready);
2858 fprintf (sched_dump,
2859 ";;\t\t before reload => truncated to %d insns\n", i);
2862 /* Delay all insns past it for 1 cycle. If the debug counter is
2863 activated, make an exception for the insn right after
2864 last_scheduled_insn. */
2866 rtx skip_insn;
2868 if (dbg_cnt (sched_insn) == false)
2869 skip_insn = next_nonnote_insn (last_scheduled_insn);
2870 else
2871 skip_insn = NULL_RTX;
2873 while (i < ready.n_ready)
2875 rtx insn;
2877 insn = ready_remove (&ready, i);
2879 if (insn != skip_insn)
2880 queue_insn (insn, 1);
2885 /* Now we can restore basic block notes and maintain precise cfg. */
2886 restore_bb_notes (*target_bb);
2888 last_clock_var = -1;
2890 advance = 0;
2892 sort_p = TRUE;
2893 /* Loop until all the insns in BB are scheduled. */
2894 while ((*current_sched_info->schedule_more_p) ())
2898 start_clock_var = clock_var;
2900 clock_var++;
2902 advance_one_cycle ();
2904 /* Add to the ready list all pending insns that can be issued now.
2905 If there are no ready insns, increment clock until one
2906 is ready and add all pending insns at that point to the ready
2907 list. */
2908 queue_to_ready (&ready);
2910 gcc_assert (ready.n_ready);
2912 if (sched_verbose >= 2)
2914 fprintf (sched_dump, ";;\t\tReady list after queue_to_ready: ");
2915 debug_ready_list (&ready);
2917 advance -= clock_var - start_clock_var;
2919 while (advance > 0);
2921 if (sort_p)
2923 /* Sort the ready list based on priority. */
2924 ready_sort (&ready);
2926 if (sched_verbose >= 2)
2928 fprintf (sched_dump, ";;\t\tReady list after ready_sort: ");
2929 debug_ready_list (&ready);
2933 /* We don't want md sched reorder to even see debug insns, so put
2934 them out right away. */
2935 if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
2937 if (control_flow_insn_p (last_scheduled_insn))
2939 *target_bb = current_sched_info->advance_target_bb
2940 (*target_bb, 0);
2942 if (sched_verbose)
2944 rtx x;
2946 x = next_real_insn (last_scheduled_insn);
2947 gcc_assert (x);
2948 dump_new_block_header (1, *target_bb, x, tail);
2951 last_scheduled_insn = bb_note (*target_bb);
2954 while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
2956 rtx insn = ready_remove_first (&ready);
2957 gcc_assert (DEBUG_INSN_P (insn));
2958 (*current_sched_info->begin_schedule_ready) (insn,
2959 last_scheduled_insn);
2960 move_insn (insn, last_scheduled_insn,
2961 current_sched_info->next_tail);
2962 last_scheduled_insn = insn;
2963 advance = schedule_insn (insn);
2964 gcc_assert (advance == 0);
2965 if (ready.n_ready > 0)
2966 ready_sort (&ready);
2969 if (!ready.n_ready)
2970 continue;
2973 /* Allow the target to reorder the list, typically for
2974 better instruction bundling. */
2975 if (sort_p && targetm.sched.reorder
2976 && (ready.n_ready == 0
2977 || !SCHED_GROUP_P (ready_element (&ready, 0))))
2978 can_issue_more =
2979 targetm.sched.reorder (sched_dump, sched_verbose,
2980 ready_lastpos (&ready),
2981 &ready.n_ready, clock_var);
2982 else
2983 can_issue_more = issue_rate;
2985 first_cycle_insn_p = 1;
2986 cycle_issued_insns = 0;
2987 for (;;)
2989 rtx insn;
2990 int cost;
2991 bool asm_p = false;
2993 if (sched_verbose >= 2)
2995 fprintf (sched_dump, ";;\tReady list (t = %3d): ",
2996 clock_var);
2997 debug_ready_list (&ready);
2998 if (sched_pressure_p)
2999 print_curr_reg_pressure ();
3002 if (ready.n_ready == 0
3003 && can_issue_more
3004 && reload_completed)
3006 /* Allow scheduling insns directly from the queue in case
3007 there's nothing better to do (ready list is empty) but
3008 there are still vacant dispatch slots in the current cycle. */
3009 if (sched_verbose >= 6)
3010 fprintf (sched_dump,";;\t\tSecond chance\n");
3011 memcpy (temp_state, curr_state, dfa_state_size);
3012 if (early_queue_to_ready (temp_state, &ready))
3013 ready_sort (&ready);
3016 if (ready.n_ready == 0
3017 || !can_issue_more
3018 || state_dead_lock_p (curr_state)
3019 || !(*current_sched_info->schedule_more_p) ())
3020 break;
3022 /* Select and remove the insn from the ready list. */
3023 if (sort_p)
3025 int res;
3027 insn = NULL_RTX;
3028 res = choose_ready (&ready, &insn);
3030 if (res < 0)
3031 /* Finish cycle. */
3032 break;
3033 if (res > 0)
3034 /* Restart choose_ready (). */
3035 continue;
3037 gcc_assert (insn != NULL_RTX);
3039 else
3040 insn = ready_remove_first (&ready);
3042 if (sched_pressure_p && INSN_TICK (insn) > clock_var)
3044 ready_add (&ready, insn, true);
3045 advance = 1;
3046 break;
3049 if (targetm.sched.dfa_new_cycle
3050 && targetm.sched.dfa_new_cycle (sched_dump, sched_verbose,
3051 insn, last_clock_var,
3052 clock_var, &sort_p))
3053 /* SORT_P is used by the target to override sorting
3054 of the ready list. This is needed when the target
3055 has modified its internal structures expecting that
3056 the insn will be issued next. As we need the insn
3057 to have the highest priority (so it will be returned by
3058 the ready_remove_first call above), we invoke
3059 ready_add (&ready, insn, true).
3060 But, still, there is one issue: INSN can later be
3061 discarded by the scheduler's front end through
3062 current_sched_info->can_schedule_ready_p, and hence won't
3063 be issued next. */
3065 ready_add (&ready, insn, true);
3066 break;
3069 sort_p = TRUE;
3070 memcpy (temp_state, curr_state, dfa_state_size);
3071 if (recog_memoized (insn) < 0)
3073 asm_p = (GET_CODE (PATTERN (insn)) == ASM_INPUT
3074 || asm_noperands (PATTERN (insn)) >= 0);
3075 if (!first_cycle_insn_p && asm_p)
3076 /* This is an asm insn that we tried to issue on a cycle
3077 other than the first. Issue it on the next cycle. */
3078 cost = 1;
3079 else
3080 /* A USE insn, or something else we don't need to
3081 understand. We can't pass these directly to
3082 state_transition because it will trigger a
3083 fatal error for unrecognizable insns. */
3084 cost = 0;
3086 else if (sched_pressure_p)
3087 cost = 0;
3088 else
3090 cost = state_transition (temp_state, insn);
3091 if (cost < 0)
3092 cost = 0;
3093 else if (cost == 0)
3094 cost = 1;
3097 if (cost >= 1)
3099 queue_insn (insn, cost);
3100 if (SCHED_GROUP_P (insn))
3102 advance = cost;
3103 break;
3106 continue;
3109 if (current_sched_info->can_schedule_ready_p
3110 && ! (*current_sched_info->can_schedule_ready_p) (insn))
3111 /* We normally get here only if we don't want to move
3112 insn from the split block. */
3114 TODO_SPEC (insn) = (TODO_SPEC (insn) & ~SPECULATIVE) | HARD_DEP;
3115 continue;
3118 /* DECISION is made. */
3120 if (TODO_SPEC (insn) & SPECULATIVE)
3121 generate_recovery_code (insn);
3123 if (control_flow_insn_p (last_scheduled_insn)
3124 /* This is used to switch basic blocks by request
3125 from the scheduler front-end (actually, sched-ebb.c only).
3126 It is used to process blocks with a single fallthru
3127 edge. If the succeeding block has a jump, that jump will try
3128 to move to the end of the current bb, thus corrupting the CFG. */
3129 || current_sched_info->advance_target_bb (*target_bb, insn))
3131 *target_bb = current_sched_info->advance_target_bb
3132 (*target_bb, 0);
3134 if (sched_verbose)
3136 rtx x;
3138 x = next_real_insn (last_scheduled_insn);
3139 gcc_assert (x);
3140 dump_new_block_header (1, *target_bb, x, tail);
3143 last_scheduled_insn = bb_note (*target_bb);
3146 /* Update counters, etc in the scheduler's front end. */
3147 (*current_sched_info->begin_schedule_ready) (insn,
3148 last_scheduled_insn);
3150 move_insn (insn, last_scheduled_insn, current_sched_info->next_tail);
3151 reemit_notes (insn);
3152 last_scheduled_insn = insn;
3154 if (memcmp (curr_state, temp_state, dfa_state_size) != 0)
3156 cycle_issued_insns++;
3157 memcpy (curr_state, temp_state, dfa_state_size);
3160 if (targetm.sched.variable_issue)
3161 can_issue_more =
3162 targetm.sched.variable_issue (sched_dump, sched_verbose,
3163 insn, can_issue_more);
3164 /* A naked CLOBBER or USE generates no instruction, so do
3165 not count them against the issue rate. */
3166 else if (GET_CODE (PATTERN (insn)) != USE
3167 && GET_CODE (PATTERN (insn)) != CLOBBER)
3168 can_issue_more--;
3169 advance = schedule_insn (insn);
3171 /* After issuing an asm insn we should start a new cycle. */
3172 if (advance == 0 && asm_p)
3173 advance = 1;
3174 if (advance != 0)
3175 break;
3177 first_cycle_insn_p = 0;
3179 /* Sort the ready list based on priority. This must be
3180 redone here, as schedule_insn may have readied additional
3181 insns that will not be sorted correctly. */
3182 if (ready.n_ready > 0)
3183 ready_sort (&ready);
3185 /* Quickly go through debug insns such that md sched
3186 reorder2 doesn't have to deal with debug insns. */
3187 if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0))
3188 && (*current_sched_info->schedule_more_p) ())
3190 if (control_flow_insn_p (last_scheduled_insn))
3192 *target_bb = current_sched_info->advance_target_bb
3193 (*target_bb, 0);
3195 if (sched_verbose)
3197 rtx x;
3199 x = next_real_insn (last_scheduled_insn);
3200 gcc_assert (x);
3201 dump_new_block_header (1, *target_bb, x, tail);
3204 last_scheduled_insn = bb_note (*target_bb);
3207 while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
3209 insn = ready_remove_first (&ready);
3210 gcc_assert (DEBUG_INSN_P (insn));
3211 (*current_sched_info->begin_schedule_ready)
3212 (insn, last_scheduled_insn);
3213 move_insn (insn, last_scheduled_insn,
3214 current_sched_info->next_tail);
3215 advance = schedule_insn (insn);
3216 last_scheduled_insn = insn;
3217 gcc_assert (advance == 0);
3218 if (ready.n_ready > 0)
3219 ready_sort (&ready);
3223 if (targetm.sched.reorder2
3224 && (ready.n_ready == 0
3225 || !SCHED_GROUP_P (ready_element (&ready, 0))))
3227 can_issue_more =
3228 targetm.sched.reorder2 (sched_dump, sched_verbose,
3229 ready.n_ready
3230 ? ready_lastpos (&ready) : NULL,
3231 &ready.n_ready, clock_var);
3236 /* Debug info. */
3237 if (sched_verbose)
3239 fprintf (sched_dump, ";;\tReady list (final): ");
3240 debug_ready_list (&ready);
3243 if (current_sched_info->queue_must_finish_empty)
3244 /* Sanity check -- queue must be empty now. Meaningless if region has
3245 multiple bbs. */
3246 gcc_assert (!q_size && !ready.n_ready && !ready.n_debug);
3247 else
3249 /* We must maintain QUEUE_INDEX between blocks in region. */
3250 for (i = ready.n_ready - 1; i >= 0; i--)
3252 rtx x;
3254 x = ready_element (&ready, i);
3255 QUEUE_INDEX (x) = QUEUE_NOWHERE;
3256 TODO_SPEC (x) = (TODO_SPEC (x) & ~SPECULATIVE) | HARD_DEP;
3259 if (q_size)
3260 for (i = 0; i <= max_insn_queue_index; i++)
3262 rtx link;
3263 for (link = insn_queue[i]; link; link = XEXP (link, 1))
3265 rtx x;
3267 x = XEXP (link, 0);
3268 QUEUE_INDEX (x) = QUEUE_NOWHERE;
3269 TODO_SPEC (x) = (TODO_SPEC (x) & ~SPECULATIVE) | HARD_DEP;
3271 free_INSN_LIST_list (&insn_queue[i]);
3275 if (sched_verbose)
3276 fprintf (sched_dump, ";; total time = %d\n", clock_var);
3278 if (!current_sched_info->queue_must_finish_empty
3279 || haifa_recovery_bb_recently_added_p)
3281 /* INSN_TICK (the minimum clock tick at which the insn becomes
3282 ready) may not be correct for the insns in the subsequent
3283 blocks of the region. We should use a correct value of
3284 `clock_var' or modify INSN_TICK. It is better to keep
3285 the clock_var value equal to 0 at the start of a basic block.
3286 Therefore we modify INSN_TICK here. */
3287 fix_inter_tick (NEXT_INSN (prev_head), last_scheduled_insn);
3290 if (targetm.sched.md_finish)
3292 targetm.sched.md_finish (sched_dump, sched_verbose);
3293 /* The target might have added some instructions to the scheduled block
3294 in its md_finish () hook. These new insns don't have any data
3295 initialized, and to identify them we extend h_i_d so that they'll
3296 get zero luids. */
3297 sched_init_luids (NULL, NULL, NULL, NULL);
3300 if (sched_verbose)
3301 fprintf (sched_dump, ";; new head = %d\n;; new tail = %d\n\n",
3302 INSN_UID (head), INSN_UID (tail));
3304 /* Update head/tail boundaries. */
3305 head = NEXT_INSN (prev_head);
3306 tail = last_scheduled_insn;
3308 head = restore_other_notes (head, NULL);
3310 current_sched_info->head = head;
3311 current_sched_info->tail = tail;
3314 /* Set_priorities: compute priority of each insn in the block. */
3317 set_priorities (rtx head, rtx tail)
3319 rtx insn;
3320 int n_insn;
3321 int sched_max_insns_priority =
3322 current_sched_info->sched_max_insns_priority;
3323 rtx prev_head;
3325 if (head == tail && (! INSN_P (head) || BOUNDARY_DEBUG_INSN_P (head)))
3326 gcc_unreachable ();
3328 n_insn = 0;
3330 prev_head = PREV_INSN (head);
3331 for (insn = tail; insn != prev_head; insn = PREV_INSN (insn))
3333 if (!INSN_P (insn))
3334 continue;
3336 n_insn++;
3337 (void) priority (insn);
3339 gcc_assert (INSN_PRIORITY_KNOWN (insn));
3341 sched_max_insns_priority = MAX (sched_max_insns_priority,
3342 INSN_PRIORITY (insn));
3345 current_sched_info->sched_max_insns_priority = sched_max_insns_priority;
3347 return n_insn;
3350 /* Set dump and sched_verbose for the desired debugging output. If no
3351 dump-file was specified, but -fsched-verbose=N (any N), print to stderr.
3352 For -fsched-verbose=N, N>=10, print everything to stderr. */
3353 void
3354 setup_sched_dump (void)
3356 sched_verbose = sched_verbose_param;
3357 if (sched_verbose_param == 0 && dump_file)
3358 sched_verbose = 1;
3359 sched_dump = ((sched_verbose_param >= 10 || !dump_file)
3360 ? stderr : dump_file);
3363 /* Initialize some global state for the scheduler. This function works
3364 with the common data shared between all the schedulers. It is called
3365 from the scheduler specific initialization routine. */
3367 void
3368 sched_init (void)
3370 /* If cc0 is defined, disable speculative loads in its presence. */
3371 #ifdef HAVE_cc0
3372 flag_schedule_speculative_load = 0;
3373 #endif
3375 sched_pressure_p = (flag_sched_pressure && ! reload_completed
3376 && common_sched_info->sched_pass_id == SCHED_RGN_PASS);
3377 if (sched_pressure_p)
3378 ira_setup_eliminable_regset ();
3380 /* Initialize SPEC_INFO. */
3381 if (targetm.sched.set_sched_flags)
3383 spec_info = &spec_info_var;
3384 targetm.sched.set_sched_flags (spec_info);
3386 if (spec_info->mask != 0)
3388 spec_info->data_weakness_cutoff =
3389 (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF) * MAX_DEP_WEAK) / 100;
3390 spec_info->control_weakness_cutoff =
3391 (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF)
3392 * REG_BR_PROB_BASE) / 100;
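/* Worked example (hypothetical parameter value): if
   PARAM_SCHED_SPEC_PROB_CUTOFF were 40, data_weakness_cutoff would be
   40 * MAX_DEP_WEAK / 100, i.e. speculative deps whose weakness falls
   below 40% of MAX_DEP_WEAK get turned into hard deps in try_ready (). */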
3394 else
3395 /* So we won't read anything accidentally. */
3396 spec_info = NULL;
3399 else
3400 /* So we won't read anything accidentally. */
3401 spec_info = 0;
3403 /* Initialize issue_rate. */
3404 if (targetm.sched.issue_rate)
3405 issue_rate = targetm.sched.issue_rate ();
3406 else
3407 issue_rate = 1;
3409 if (cached_issue_rate != issue_rate)
3411 cached_issue_rate = issue_rate;
3412 /* To invalidate max_lookahead_tries: */
3413 cached_first_cycle_multipass_dfa_lookahead = 0;
3416 if (targetm.sched.first_cycle_multipass_dfa_lookahead)
3417 dfa_lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
3418 else
3419 dfa_lookahead = 0;
3421 if (targetm.sched.init_dfa_pre_cycle_insn)
3422 targetm.sched.init_dfa_pre_cycle_insn ();
3424 if (targetm.sched.init_dfa_post_cycle_insn)
3425 targetm.sched.init_dfa_post_cycle_insn ();
3427 dfa_start ();
3428 dfa_state_size = state_size ();
3430 init_alias_analysis ();
3432 df_set_flags (DF_LR_RUN_DCE);
3433 df_note_add_problem ();
3435 /* More problems needed for interloop dep calculation in SMS. */
3436 if (common_sched_info->sched_pass_id == SCHED_SMS_PASS)
3438 df_rd_add_problem ();
3439 df_chain_add_problem (DF_DU_CHAIN + DF_UD_CHAIN);
3442 df_analyze ();
3444 /* Do not run DCE after reload, as this can kill nops inserted
3445 by bundling. */
3446 if (reload_completed)
3447 df_clear_flags (DF_LR_RUN_DCE);
3449 regstat_compute_calls_crossed ();
3451 if (targetm.sched.md_init_global)
3452 targetm.sched.md_init_global (sched_dump, sched_verbose,
3453 get_max_uid () + 1);
3455 if (sched_pressure_p)
3457 int i, max_regno = max_reg_num ();
3459 ira_set_pseudo_classes (sched_verbose ? sched_dump : NULL);
3460 sched_regno_cover_class
3461 = (enum reg_class *) xmalloc (max_regno * sizeof (enum reg_class));
3462 for (i = 0; i < max_regno; i++)
3463 sched_regno_cover_class[i]
3464 = (i < FIRST_PSEUDO_REGISTER
3465 ? ira_class_translate[REGNO_REG_CLASS (i)]
3466 : reg_cover_class (i));
3467 curr_reg_live = BITMAP_ALLOC (NULL);
3468 saved_reg_live = BITMAP_ALLOC (NULL);
3469 region_ref_regs = BITMAP_ALLOC (NULL);
3472 curr_state = xmalloc (dfa_state_size);
3475 static void haifa_init_only_bb (basic_block, basic_block);
3477 /* Initialize data structures specific to the Haifa scheduler. */
3478 void
3479 haifa_sched_init (void)
3481 setup_sched_dump ();
3482 sched_init ();
3484 if (spec_info != NULL)
3486 sched_deps_info->use_deps_list = 1;
3487 sched_deps_info->generate_spec_deps = 1;
3490 /* Initialize luids, dependency caches, target and h_i_d for the
3491 whole function. */
3493 bb_vec_t bbs = VEC_alloc (basic_block, heap, n_basic_blocks);
3494 basic_block bb;
3496 sched_init_bbs ();
3498 FOR_EACH_BB (bb)
3499 VEC_quick_push (basic_block, bbs, bb);
3500 sched_init_luids (bbs, NULL, NULL, NULL);
3501 sched_deps_init (true);
3502 sched_extend_target ();
3503 haifa_init_h_i_d (bbs, NULL, NULL, NULL);
3505 VEC_free (basic_block, heap, bbs);
3508 sched_init_only_bb = haifa_init_only_bb;
3509 sched_split_block = sched_split_block_1;
3510 sched_create_empty_bb = sched_create_empty_bb_1;
3511 haifa_recovery_bb_ever_added_p = false;
3513 #ifdef ENABLE_CHECKING
3514 /* This is used preferably for finding bugs in check_cfg () itself.
3515 We must call sched_bbs_init () before check_cfg () because check_cfg ()
3516 assumes that the last insn in the last bb has a non-null successor. */
3517 check_cfg (0, 0);
3518 #endif
3520 nr_begin_data = nr_begin_control = nr_be_in_data = nr_be_in_control = 0;
3521 before_recovery = 0;
3522 after_recovery = 0;
3525 /* Finish work with the data specific to the Haifa scheduler. */
3526 void
3527 haifa_sched_finish (void)
3529 sched_create_empty_bb = NULL;
3530 sched_split_block = NULL;
3531 sched_init_only_bb = NULL;
3533 if (spec_info && spec_info->dump)
3535 char c = reload_completed ? 'a' : 'b';
3537 fprintf (spec_info->dump,
3538 ";; %s:\n", current_function_name ());
3540 fprintf (spec_info->dump,
3541 ";; Procedure %cr-begin-data-spec motions == %d\n",
3542 c, nr_begin_data);
3543 fprintf (spec_info->dump,
3544 ";; Procedure %cr-be-in-data-spec motions == %d\n",
3545 c, nr_be_in_data);
3546 fprintf (spec_info->dump,
3547 ";; Procedure %cr-begin-control-spec motions == %d\n",
3548 c, nr_begin_control);
3549 fprintf (spec_info->dump,
3550 ";; Procedure %cr-be-in-control-spec motions == %d\n",
3551 c, nr_be_in_control);
3554 /* Finalize h_i_d, dependency caches, and luids for the whole
3555 function. Target will be finalized in md_global_finish (). */
3556 sched_deps_finish ();
3557 sched_finish_luids ();
3558 current_sched_info = NULL;
3559 sched_finish ();
3562 /* Free global data used during insn scheduling. This function works with
3563 the common data shared between the schedulers. */
3565 void
3566 sched_finish (void)
3568 haifa_finish_h_i_d ();
3569 if (sched_pressure_p)
3571 free (sched_regno_cover_class);
3572 BITMAP_FREE (region_ref_regs);
3573 BITMAP_FREE (saved_reg_live);
3574 BITMAP_FREE (curr_reg_live);
3576 free (curr_state);
3578 if (targetm.sched.md_finish_global)
3579 targetm.sched.md_finish_global (sched_dump, sched_verbose);
3581 end_alias_analysis ();
3583 regstat_free_calls_crossed ();
3585 dfa_finish ();
3587 #ifdef ENABLE_CHECKING
3588 /* After reload the ia64 backend clobbers the CFG, so we can't check anything. */
3589 if (!reload_completed)
3590 check_cfg (0, 0);
3591 #endif
3594 /* Fix INSN_TICKs of the instructions in the current block as well as
3595 INSN_TICKs of their dependents.
3596 HEAD and TAIL are the begin and the end of the current scheduled block. */
3597 static void
3598 fix_inter_tick (rtx head, rtx tail)
3600 /* Set of instructions with corrected INSN_TICK. */
3601 bitmap_head processed;
3602 /* ??? It is doubtful whether we should assume that cycle advance happens
3603 on basic block boundaries. Basically, insns that are unconditionally
3604 ready at the start of the block are preferable to those which have
3605 a one-cycle dependency on an insn from the previous block. */
3606 int next_clock = clock_var + 1;
3608 bitmap_initialize (&processed, 0);
3610 /* Iterate over the scheduled instructions and fix their INSN_TICKs and
3611 the INSN_TICKs of dependent instructions, so that INSN_TICKs are
3612 consistent across different blocks. */
3613 for (tail = NEXT_INSN (tail); head != tail; head = NEXT_INSN (head))
3615 if (INSN_P (head))
3617 int tick;
3618 sd_iterator_def sd_it;
3619 dep_t dep;
3621 tick = INSN_TICK (head);
3622 gcc_assert (tick >= MIN_TICK);
3624 /* Fix INSN_TICK of instruction from just scheduled block. */
3625 if (!bitmap_bit_p (&processed, INSN_LUID (head)))
3627 bitmap_set_bit (&processed, INSN_LUID (head));
3628 tick -= next_clock;
3630 if (tick < MIN_TICK)
3631 tick = MIN_TICK;
3633 INSN_TICK (head) = tick;
3636 FOR_EACH_DEP (head, SD_LIST_RES_FORW, sd_it, dep)
3638 rtx next;
3640 next = DEP_CON (dep);
3641 tick = INSN_TICK (next);
3643 if (tick != INVALID_TICK
3644 /* If NEXT has its INSN_TICK calculated, fix it.
3645 If not - it will be properly calculated from
3646 scratch later in fix_tick_ready. */
3647 && !bitmap_bit_p (&processed, INSN_LUID (next)))
3649 bitmap_set_bit (&processed, INSN_LUID (next));
3650 tick -= next_clock;
3652 if (tick < MIN_TICK)
3653 tick = MIN_TICK;
3655 if (tick > INTER_TICK (next))
3656 INTER_TICK (next) = tick;
3657 else
3658 tick = INTER_TICK (next);
3660 INSN_TICK (next) = tick;
3665 bitmap_clear (&processed);
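/* Illustrative sketch (hypothetical helper, not used by the pass) of the
   rebasing performed above: a tick from the just-scheduled block is shifted
   so that the next block starts counting again near cycle 0, clamped at
   MIN_TICK.  */
#if 0
static int
ex_rebase_tick (int tick, int next_clock)
{
  tick -= next_clock;
  if (tick < MIN_TICK)
    tick = MIN_TICK;
  return tick;
}
#endif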
3668 static int haifa_speculate_insn (rtx, ds_t, rtx *);
3670 /* Check if NEXT is ready to be added to the ready or queue list.
3671 If "yes", add it to the proper list.
3672 Returns:
3673 -1 - is not ready yet,
3674 0 - added to the ready list,
3675 0 < N - queued for N cycles. */
3677 try_ready (rtx next)
3679 ds_t old_ts, *ts;
3681 ts = &TODO_SPEC (next);
3682 old_ts = *ts;
3684 gcc_assert (!(old_ts & ~(SPECULATIVE | HARD_DEP))
3685 && ((old_ts & HARD_DEP)
3686 || (old_ts & SPECULATIVE)));
3688 if (sd_lists_empty_p (next, SD_LIST_BACK))
3689 /* NEXT has all its dependencies resolved. */
3691 /* Remove HARD_DEP bit from NEXT's status. */
3692 *ts &= ~HARD_DEP;
3694 if (current_sched_info->flags & DO_SPECULATION)
3695 /* Remove all speculative bits from NEXT's status. */
3696 *ts &= ~SPECULATIVE;
3698 else
3700 /* One of NEXT's dependencies has been resolved.
3701 Recalculate NEXT's status. */
3703 *ts &= ~SPECULATIVE & ~HARD_DEP;
3705 if (sd_lists_empty_p (next, SD_LIST_HARD_BACK))
3706 /* Now we've got NEXT with speculative deps only.
3707 1. Look at the deps to see what we have to do.
3708 2. Check if we can do 'todo'. */
3710 sd_iterator_def sd_it;
3711 dep_t dep;
3712 bool first_p = true;
3714 FOR_EACH_DEP (next, SD_LIST_BACK, sd_it, dep)
3716 ds_t ds = DEP_STATUS (dep) & SPECULATIVE;
3718 if (first_p)
3720 first_p = false;
3722 *ts = ds;
3724 else
3725 *ts = ds_merge (*ts, ds);
3728 if (ds_weak (*ts) < spec_info->data_weakness_cutoff)
3729 /* Too few points. */
3730 *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
3732 else
3733 *ts |= HARD_DEP;
3736 if (*ts & HARD_DEP)
3737 gcc_assert (*ts == old_ts
3738 && QUEUE_INDEX (next) == QUEUE_NOWHERE);
3739 else if (current_sched_info->new_ready)
3740 *ts = current_sched_info->new_ready (next, *ts);
3742 /* * If !(old_ts & SPECULATIVE) (e.g. HARD_DEP or 0), then the insn might
3743 have its original pattern or a changed (speculative) one. This is due
3744 to changing the ebb in region scheduling.
3745 * But if (old_ts & SPECULATIVE), then we are pretty sure that the insn
3746 has a speculative pattern.
3748 We can't assert (!(*ts & HARD_DEP) || *ts == old_ts) here because
3749 control-speculative NEXT could have been discarded by sched-rgn.c
3750 (the same case as when discarded by can_schedule_ready_p ()). */
3752 if ((*ts & SPECULATIVE)
3753 /* If (old_ts == *ts), then (old_ts & SPECULATIVE) and we don't
3754 need to change anything. */
3755 && *ts != old_ts)
3757 int res;
3758 rtx new_pat;
3760 gcc_assert ((*ts & SPECULATIVE) && !(*ts & ~SPECULATIVE));
3762 res = haifa_speculate_insn (next, *ts, &new_pat);
3764 switch (res)
3766 case -1:
3767 /* It would be nice to change DEP_STATUS of all dependences,
3768 which have ((DEP_STATUS & SPECULATIVE) == *ts) to HARD_DEP,
3769 so we won't reanalyze anything. */
3770 *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
3771 break;
3773 case 0:
3774 /* We follow the rule that every speculative insn
3775 has a non-null ORIG_PAT. */
3776 if (!ORIG_PAT (next))
3777 ORIG_PAT (next) = PATTERN (next);
3778 break;
3780 case 1:
3781 if (!ORIG_PAT (next))
3782 /* If we are going to overwrite the original pattern of the insn,
3783 save it. */
3784 ORIG_PAT (next) = PATTERN (next);
3786 haifa_change_pattern (next, new_pat);
3787 break;
3789 default:
3790 gcc_unreachable ();
3794 /* We need to restore pattern only if (*ts == 0), because otherwise it is
3795 either correct (*ts & SPECULATIVE),
3796 or we simply don't care (*ts & HARD_DEP). */
3798 gcc_assert (!ORIG_PAT (next)
3799 || !IS_SPECULATION_BRANCHY_CHECK_P (next));
3801 if (*ts & HARD_DEP)
3803 /* We can't assert (QUEUE_INDEX (next) == QUEUE_NOWHERE) here because
3804 control-speculative NEXT could have been discarded by sched-rgn.c
3805 (the same case as when discarded by can_schedule_ready_p ()). */
3806 /*gcc_assert (QUEUE_INDEX (next) == QUEUE_NOWHERE);*/
3808 change_queue_index (next, QUEUE_NOWHERE);
3809 return -1;
3811 else if (!(*ts & BEGIN_SPEC) && ORIG_PAT (next) && !IS_SPECULATION_CHECK_P (next))
3812 /* We should change the pattern of every previously speculative
3813 instruction - and we determine whether NEXT was speculative using
3814 the ORIG_PAT field. There is one exception: speculation checks also
3815 have ORIG_PAT set, so skip them. */
3817 haifa_change_pattern (next, ORIG_PAT (next));
3818 ORIG_PAT (next) = 0;
3821 if (sched_verbose >= 2)
3823 int s = TODO_SPEC (next);
3825 fprintf (sched_dump, ";;\t\tdependencies resolved: insn %s",
3826 (*current_sched_info->print_insn) (next, 0));
3828 if (spec_info && spec_info->dump)
3830 if (s & BEGIN_DATA)
3831 fprintf (spec_info->dump, "; data-spec;");
3832 if (s & BEGIN_CONTROL)
3833 fprintf (spec_info->dump, "; control-spec;");
3834 if (s & BE_IN_CONTROL)
3835 fprintf (spec_info->dump, "; in-control-spec;");
3838 fprintf (sched_dump, "\n");
3841 adjust_priority (next);
3843 return fix_tick_ready (next);
3846 /* Calculate INSN_TICK of NEXT and add it to either ready or queue list. */
3847 static int
3848 fix_tick_ready (rtx next)
3850 int tick, delay;
3852 if (!sd_lists_empty_p (next, SD_LIST_RES_BACK))
3854 int full_p;
3855 sd_iterator_def sd_it;
3856 dep_t dep;
3858 tick = INSN_TICK (next);
3859 /* if tick is not equal to INVALID_TICK, then update
3860 INSN_TICK of NEXT with the most recent resolved dependence
3861 cost. Otherwise, recalculate from scratch. */
3862 full_p = (tick == INVALID_TICK);
3864 FOR_EACH_DEP (next, SD_LIST_RES_BACK, sd_it, dep)
3866 rtx pro = DEP_PRO (dep);
3867 int tick1;
3869 gcc_assert (INSN_TICK (pro) >= MIN_TICK);
3871 tick1 = INSN_TICK (pro) + dep_cost (dep);
3872 if (tick1 > tick)
3873 tick = tick1;
3875 if (!full_p)
3876 break;
3879 else
3880 tick = -1;
3882 INSN_TICK (next) = tick;
3884 delay = tick - clock_var;
3885 if (delay <= 0 || sched_pressure_p)
3886 delay = QUEUE_READY;
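/* For example, if INSN_TICK (next) is 7 and clock_var is 5, NEXT is queued
   for 2 more cycles; if its tick is not in the future (or register-pressure
   scheduling is in effect), it goes straight to the ready list. */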
3888 change_queue_index (next, delay);
3890 return delay;
3893 /* Move NEXT to the proper queue list with (DELAY >= 1),
3894 or add it to the ready list (DELAY == QUEUE_READY),
3895 or remove it from the ready and queue lists altogether (DELAY == QUEUE_NOWHERE). */
3896 static void
3897 change_queue_index (rtx next, int delay)
3899 int i = QUEUE_INDEX (next);
3901 gcc_assert (QUEUE_NOWHERE <= delay && delay <= max_insn_queue_index
3902 && delay != 0);
3903 gcc_assert (i != QUEUE_SCHEDULED);
3905 if ((delay > 0 && NEXT_Q_AFTER (q_ptr, delay) == i)
3906 || (delay < 0 && delay == i))
3907 /* We have nothing to do. */
3908 return;
3910 /* Remove NEXT from wherever it is now. */
3911 if (i == QUEUE_READY)
3912 ready_remove_insn (next);
3913 else if (i >= 0)
3914 queue_remove (next);
3916 /* Add it to the proper place. */
3917 if (delay == QUEUE_READY)
3918 ready_add (readyp, next, false);
3919 else if (delay >= 1)
3920 queue_insn (next, delay);
3922 if (sched_verbose >= 2)
3924 fprintf (sched_dump, ";;\t\ttick updated: insn %s",
3925 (*current_sched_info->print_insn) (next, 0));
3927 if (delay == QUEUE_READY)
3928 fprintf (sched_dump, " into ready\n");
3929 else if (delay >= 1)
3930 fprintf (sched_dump, " into queue with cost=%d\n", delay);
3931 else
3932 fprintf (sched_dump, " removed from ready or queue lists\n");
3936 static int sched_ready_n_insns = -1;
3938 /* Initialize per region data structures. */
3939 void
3940 sched_extend_ready_list (int new_sched_ready_n_insns)
3942 int i;
3944 if (sched_ready_n_insns == -1)
3945 /* At the first call we need to initialize one more choice_stack
3946 entry. */
3948 i = 0;
3949 sched_ready_n_insns = 0;
3951 else
3952 i = sched_ready_n_insns + 1;
3954 ready.veclen = new_sched_ready_n_insns + issue_rate;
3955 ready.vec = XRESIZEVEC (rtx, ready.vec, ready.veclen);
3957 gcc_assert (new_sched_ready_n_insns >= sched_ready_n_insns);
3959 ready_try = (char *) xrecalloc (ready_try, new_sched_ready_n_insns,
3960 sched_ready_n_insns, sizeof (*ready_try));
3962 /* We allocate +1 element to save initial state in the choice_stack[0]
3963 entry. */
3964 choice_stack = XRESIZEVEC (struct choice_entry, choice_stack,
3965 new_sched_ready_n_insns + 1);
3967 for (; i <= new_sched_ready_n_insns; i++)
3968 choice_stack[i].state = xmalloc (dfa_state_size);
3970 sched_ready_n_insns = new_sched_ready_n_insns;
3973 /* Free per region data structures. */
3974 void
3975 sched_finish_ready_list (void)
3977 int i;
3979 free (ready.vec);
3980 ready.vec = NULL;
3981 ready.veclen = 0;
3983 free (ready_try);
3984 ready_try = NULL;
3986 for (i = 0; i <= sched_ready_n_insns; i++)
3987 free (choice_stack [i].state);
3988 free (choice_stack);
3989 choice_stack = NULL;
3991 sched_ready_n_insns = -1;
3994 static int
3995 haifa_luid_for_non_insn (rtx x)
3997 gcc_assert (NOTE_P (x) || LABEL_P (x));
3999 return 0;
4002 /* Generates recovery code for INSN. */
4003 static void
4004 generate_recovery_code (rtx insn)
4006 if (TODO_SPEC (insn) & BEGIN_SPEC)
4007 begin_speculative_block (insn);
4009 /* Here we have an insn with no dependencies to
4010 instructions other than CHECK_SPEC ones. */
4012 if (TODO_SPEC (insn) & BE_IN_SPEC)
4013 add_to_speculative_block (insn);
4016 /* Helper function.
4017 Tries to add speculative dependencies of type FS between instructions
4018 in deps_list L and TWIN. */
4019 static void
4020 process_insn_forw_deps_be_in_spec (rtx insn, rtx twin, ds_t fs)
4022 sd_iterator_def sd_it;
4023 dep_t dep;
4025 FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
4027 ds_t ds;
4028 rtx consumer;
4030 consumer = DEP_CON (dep);
4032 ds = DEP_STATUS (dep);
4034 if (/* If we want to create speculative dep. */
4035 fs
4036 /* And we can do that because this is a true dep. */
4037 && (ds & DEP_TYPES) == DEP_TRUE)
4039 gcc_assert (!(ds & BE_IN_SPEC));
4041 if (/* If this dep can be overcome with 'begin speculation'. */
4042 ds & BEGIN_SPEC)
4043 /* Then we have a choice: keep the dep 'begin speculative'
4044 or transform it into 'be in speculative'. */
4046 if (/* In try_ready we assert that once an insn has become ready
4047 it can be removed from the ready (or queue) list only
4048 due to a backend decision. Hence we can't let the
4049 probability of the speculative dep decrease. */
4050 ds_weak (ds) <= ds_weak (fs))
4052 ds_t new_ds;
4054 new_ds = (ds & ~BEGIN_SPEC) | fs;
4056 if (/* consumer can 'be in speculative'. */
4057 sched_insn_is_legitimate_for_speculation_p (consumer,
4058 new_ds))
4059 /* Transform it to be in speculative. */
4060 ds = new_ds;
4063 else
4064 /* Mark the dep as 'be in speculative'. */
4065 ds |= fs;
4069 dep_def _new_dep, *new_dep = &_new_dep;
4071 init_dep_1 (new_dep, twin, consumer, DEP_TYPE (dep), ds);
4072 sd_add_dep (new_dep, false);
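/* In short: every forward dependence of INSN is mirrored as a dependence
   from TWIN, and true dependences are, where legitimate, additionally
   marked with the BE_IN speculative status FS. */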
4077 /* Generates recovery code for BEGIN speculative INSN. */
4078 static void
4079 begin_speculative_block (rtx insn)
4081 if (TODO_SPEC (insn) & BEGIN_DATA)
4082 nr_begin_data++;
4083 if (TODO_SPEC (insn) & BEGIN_CONTROL)
4084 nr_begin_control++;
4086 create_check_block_twin (insn, false);
4088 TODO_SPEC (insn) &= ~BEGIN_SPEC;
4091 static void haifa_init_insn (rtx);
4093 /* Generates recovery code for BE_IN speculative INSN. */
4094 static void
4095 add_to_speculative_block (rtx insn)
4097 ds_t ts;
4098 sd_iterator_def sd_it;
4099 dep_t dep;
4100 rtx twins = NULL;
4101 rtx_vec_t priorities_roots;
4103 ts = TODO_SPEC (insn);
4104 gcc_assert (!(ts & ~BE_IN_SPEC));
4106 if (ts & BE_IN_DATA)
4107 nr_be_in_data++;
4108 if (ts & BE_IN_CONTROL)
4109 nr_be_in_control++;
4111 TODO_SPEC (insn) &= ~BE_IN_SPEC;
4112 gcc_assert (!TODO_SPEC (insn));
4114 DONE_SPEC (insn) |= ts;
4116 /* First we convert all simple checks to branchy ones. */
4117 for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4118 sd_iterator_cond (&sd_it, &dep);)
4120 rtx check = DEP_PRO (dep);
4122 if (IS_SPECULATION_SIMPLE_CHECK_P (check))
4124 create_check_block_twin (check, true);
4126 /* Restart search. */
4127 sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4129 else
4130 /* Continue search. */
4131 sd_iterator_next (&sd_it);
4134 priorities_roots = NULL;
4135 clear_priorities (insn, &priorities_roots);
4137 while (1)
4139 rtx check, twin;
4140 basic_block rec;
4142 /* Get the first backward dependency of INSN. */
4143 sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4144 if (!sd_iterator_cond (&sd_it, &dep))
4145 /* INSN has no backward dependencies left. */
4146 break;
4148 gcc_assert ((DEP_STATUS (dep) & BEGIN_SPEC) == 0
4149 && (DEP_STATUS (dep) & BE_IN_SPEC) != 0
4150 && (DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
4152 check = DEP_PRO (dep);
4154 gcc_assert (!IS_SPECULATION_CHECK_P (check) && !ORIG_PAT (check)
4155 && QUEUE_INDEX (check) == QUEUE_NOWHERE);
4157 rec = BLOCK_FOR_INSN (check);
4159 twin = emit_insn_before (copy_insn (PATTERN (insn)), BB_END (rec));
4160 haifa_init_insn (twin);
4162 sd_copy_back_deps (twin, insn, true);
4164 if (sched_verbose && spec_info->dump)
4165 /* INSN_BB (insn) isn't determined for twin insns yet.
4166 So we can't use current_sched_info->print_insn. */
4167 fprintf (spec_info->dump, ";;\t\tGenerated twin insn : %d/rec%d\n",
4168 INSN_UID (twin), rec->index);
4170 twins = alloc_INSN_LIST (twin, twins);
4172 /* Add dependences between TWIN and all appropriate
4173 instructions from REC. */
4174 FOR_EACH_DEP (insn, SD_LIST_SPEC_BACK, sd_it, dep)
4176 rtx pro = DEP_PRO (dep);
4178 gcc_assert (DEP_TYPE (dep) == REG_DEP_TRUE);
4180 /* INSN might have dependencies on instructions from
4181 several recovery blocks. At this iteration we process those
4182 producers that reside in REC. */
4183 if (BLOCK_FOR_INSN (pro) == rec)
4185 dep_def _new_dep, *new_dep = &_new_dep;
4187 init_dep (new_dep, pro, twin, REG_DEP_TRUE);
4188 sd_add_dep (new_dep, false);
4192 process_insn_forw_deps_be_in_spec (insn, twin, ts);
4194 /* Remove all dependencies between INSN and insns in REC. */
4195 for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4196 sd_iterator_cond (&sd_it, &dep);)
4198 rtx pro = DEP_PRO (dep);
4200 if (BLOCK_FOR_INSN (pro) == rec)
4201 sd_delete_dep (sd_it);
4202 else
4203 sd_iterator_next (&sd_it);
4207 /* We couldn't have added the dependencies between INSN and TWINS earlier
4208 because that would make TWINS appear in INSN_BACK_DEPS (INSN). */
4209 while (twins)
4211 rtx twin;
4213 twin = XEXP (twins, 0);
4216 dep_def _new_dep, *new_dep = &_new_dep;
4218 init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
4219 sd_add_dep (new_dep, false);
4222 twin = XEXP (twins, 1);
4223 free_INSN_LIST_node (twins);
4224 twins = twin;
4227 calc_priorities (priorities_roots);
4228 VEC_free (rtx, heap, priorities_roots);
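/* On exit INSN no longer depends on any instruction inside a recovery
   block; instead it has a twin in each such block, and the output
   dependences added above keep each twin scheduled after INSN itself. */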
4231 /* Extends the array pointed to by P, filling only the new part with zeros. */
4232 void *
4233 xrecalloc (void *p, size_t new_nmemb, size_t old_nmemb, size_t size)
4235 gcc_assert (new_nmemb >= old_nmemb);
4236 p = XRESIZEVAR (void, p, new_nmemb * size);
4237 memset (((char *) p) + old_nmemb * size, 0, (new_nmemb - old_nmemb) * size);
4238 return p;
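/* E.g. xrecalloc (buf, 8, 5, sizeof (int)) returns a buffer of eight ints
   whose first five elements are preserved and last three are zeroed. */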
4241 /* Helper function.
4242 Find fallthru edge from PRED. */
4243 edge
4244 find_fallthru_edge (basic_block pred)
4246 edge e;
4247 edge_iterator ei;
4248 basic_block succ;
4250 succ = pred->next_bb;
4251 gcc_assert (succ->prev_bb == pred);
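/* Scan whichever of the two edge lists is shorter; a fallthru edge out of
   PRED necessarily leads to SUCC, so it appears in both lists. */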
4253 if (EDGE_COUNT (pred->succs) <= EDGE_COUNT (succ->preds))
4255 FOR_EACH_EDGE (e, ei, pred->succs)
4256 if (e->flags & EDGE_FALLTHRU)
4258 gcc_assert (e->dest == succ);
4259 return e;
4262 else
4264 FOR_EACH_EDGE (e, ei, succ->preds)
4265 if (e->flags & EDGE_FALLTHRU)
4267 gcc_assert (e->src == pred);
4268 return e;
4272 return NULL;
4275 /* Extend per basic block data structures. */
4276 static void
4277 sched_extend_bb (void)
4279 rtx insn;
4281 /* The following is done to keep current_sched_info->next_tail non-null. */
4282 insn = BB_END (EXIT_BLOCK_PTR->prev_bb);
4283 if (NEXT_INSN (insn) == 0
4284 || (!NOTE_P (insn)
4285 && !LABEL_P (insn)
4286 /* Don't emit a NOTE if it would end up before a BARRIER. */
4287 && !BARRIER_P (NEXT_INSN (insn))))
4289 rtx note = emit_note_after (NOTE_INSN_DELETED, insn);
4290 /* Make the note appear outside any basic block. */
4291 set_block_for_insn (note, NULL);
4292 BB_END (EXIT_BLOCK_PTR->prev_bb) = insn;
4296 /* Init per basic block data structures. */
4297 void
4298 sched_init_bbs (void)
4300 sched_extend_bb ();
4303 /* Initialize BEFORE_RECOVERY variable. */
4304 static void
4305 init_before_recovery (basic_block *before_recovery_ptr)
4307 basic_block last;
4308 edge e;
4310 last = EXIT_BLOCK_PTR->prev_bb;
4311 e = find_fallthru_edge (last);
4313 if (e)
4315 /* We create two basic blocks:
4316 1. A single-instruction block that is inserted right after E->SRC
4317 and jumps to
4318 2. An empty block right before EXIT_BLOCK.
4319 Recovery blocks will be emitted between these two blocks. */
4321 basic_block single, empty;
4322 rtx x, label;
4324 /* If the fallthrough edge to exit we've found is from the block we've
4325 created before, don't do anything more. */
4326 if (last == after_recovery)
4327 return;
4329 adding_bb_to_current_region_p = false;
4331 single = sched_create_empty_bb (last);
4332 empty = sched_create_empty_bb (single);
4334 /* Add new blocks to the root loop. */
4335 if (current_loops != NULL)
4337 add_bb_to_loop (single, VEC_index (loop_p, current_loops->larray, 0));
4338 add_bb_to_loop (empty, VEC_index (loop_p, current_loops->larray, 0));
4341 single->count = last->count;
4342 empty->count = last->count;
4343 single->frequency = last->frequency;
4344 empty->frequency = last->frequency;
4345 BB_COPY_PARTITION (single, last);
4346 BB_COPY_PARTITION (empty, last);
4348 redirect_edge_succ (e, single);
4349 make_single_succ_edge (single, empty, 0);
4350 make_single_succ_edge (empty, EXIT_BLOCK_PTR,
4351 EDGE_FALLTHRU | EDGE_CAN_FALLTHRU);
4353 label = block_label (empty);
4354 x = emit_jump_insn_after (gen_jump (label), BB_END (single));
4355 JUMP_LABEL (x) = label;
4356 LABEL_NUSES (label)++;
4357 haifa_init_insn (x);
4359 emit_barrier_after (x);
4361 sched_init_only_bb (empty, NULL);
4362 sched_init_only_bb (single, NULL);
4363 sched_extend_bb ();
4365 adding_bb_to_current_region_p = true;
4366 before_recovery = single;
4367 after_recovery = empty;
4369 if (before_recovery_ptr)
4370 *before_recovery_ptr = before_recovery;
4372 if (sched_verbose >= 2 && spec_info->dump)
4373 fprintf (spec_info->dump,
4374 ";;\t\tFixed fallthru to EXIT : %d->>%d->%d->>EXIT\n",
4375 last->index, single->index, empty->index);
4377 else
4378 before_recovery = last;
4381 /* Returns new recovery block. */
4382 basic_block
4383 sched_create_recovery_block (basic_block *before_recovery_ptr)
4385 rtx label;
4386 rtx barrier;
4387 basic_block rec;
4389 haifa_recovery_bb_recently_added_p = true;
4390 haifa_recovery_bb_ever_added_p = true;
4392 init_before_recovery (before_recovery_ptr);
4394 barrier = get_last_bb_insn (before_recovery);
4395 gcc_assert (BARRIER_P (barrier));
4397 label = emit_label_after (gen_label_rtx (), barrier);
4399 rec = create_basic_block (label, label, before_recovery);
4401 /* A recovery block always ends with an unconditional jump. */
4402 emit_barrier_after (BB_END (rec));
4404 if (BB_PARTITION (before_recovery) != BB_UNPARTITIONED)
4405 BB_SET_PARTITION (rec, BB_COLD_PARTITION);
4407 if (sched_verbose && spec_info->dump)
4408 fprintf (spec_info->dump, ";;\t\tGenerated recovery block rec%d\n",
4409 rec->index);
4411 return rec;
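/* The caller is expected to fill REC with instructions (see
   create_check_block_twin below) and to wire it into the CFG with
   sched_create_recovery_edges. */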
4414 /* Create edges: FIRST_BB -> REC; FIRST_BB -> SECOND_BB; REC -> SECOND_BB
4415 and emit necessary jumps. */
4416 void
4417 sched_create_recovery_edges (basic_block first_bb, basic_block rec,
4418 basic_block second_bb)
4420 rtx label;
4421 rtx jump;
4422 edge e;
4423 int edge_flags;
4425 /* This fixes the incoming edge. */
4426 /* ??? Which other flags should be specified? */
4427 if (BB_PARTITION (first_bb) != BB_PARTITION (rec))
4428 /* Partition type is the same, if it is "unpartitioned". */
4429 edge_flags = EDGE_CROSSING;
4430 else
4431 edge_flags = 0;
4433 e = make_edge (first_bb, rec, edge_flags);
4434 label = block_label (second_bb);
4435 jump = emit_jump_insn_after (gen_jump (label), BB_END (rec));
4436 JUMP_LABEL (jump) = label;
4437 LABEL_NUSES (label)++;
4439 if (BB_PARTITION (second_bb) != BB_PARTITION (rec))
4440 /* Partition type is the same, if it is "unpartitioned". */
4442 /* Rewritten from cfgrtl.c. */
4443 if (flag_reorder_blocks_and_partition
4444 && targetm.have_named_sections)
4446 /* We don't need the same note for the check because
4447 any_condjump_p (check) == true. */
4448 add_reg_note (jump, REG_CROSSING_JUMP, NULL_RTX);
4450 edge_flags = EDGE_CROSSING;
4452 else
4453 edge_flags = 0;
4455 make_single_succ_edge (rec, second_bb, edge_flags);
4458 /* This function creates recovery code for INSN. If MUTATE_P is nonzero,
4459 INSN is a simple check that should be converted to a branchy one. */
4460 static void
4461 create_check_block_twin (rtx insn, bool mutate_p)
4463 basic_block rec;
4464 rtx label, check, twin;
4465 ds_t fs;
4466 sd_iterator_def sd_it;
4467 dep_t dep;
4468 dep_def _new_dep, *new_dep = &_new_dep;
4469 ds_t todo_spec;
4471 gcc_assert (ORIG_PAT (insn) != NULL_RTX);
4473 if (!mutate_p)
4474 todo_spec = TODO_SPEC (insn);
4475 else
4477 gcc_assert (IS_SPECULATION_SIMPLE_CHECK_P (insn)
4478 && (TODO_SPEC (insn) & SPECULATIVE) == 0);
4480 todo_spec = CHECK_SPEC (insn);
4483 todo_spec &= SPECULATIVE;
4485 /* Create recovery block. */
4486 if (mutate_p || targetm.sched.needs_block_p (todo_spec))
4488 rec = sched_create_recovery_block (NULL);
4489 label = BB_HEAD (rec);
4491 else
4493 rec = EXIT_BLOCK_PTR;
4494 label = NULL_RTX;
4497 /* Emit CHECK. */
4498 check = targetm.sched.gen_spec_check (insn, label, todo_spec);
4500 if (rec != EXIT_BLOCK_PTR)
4502 /* To have mem_reg alive at the beginning of second_bb,
4503 we emit the check BEFORE insn, so that after splitting,
4504 insn will be at the beginning of second_bb, which will
4505 provide us with the correct life information. */
4506 check = emit_jump_insn_before (check, insn);
4507 JUMP_LABEL (check) = label;
4508 LABEL_NUSES (label)++;
4510 else
4511 check = emit_insn_before (check, insn);
4513 /* Extend data structures. */
4514 haifa_init_insn (check);
4516 /* CHECK is being added to current region. Extend ready list. */
4517 gcc_assert (sched_ready_n_insns != -1);
4518 sched_extend_ready_list (sched_ready_n_insns + 1);
4520 if (current_sched_info->add_remove_insn)
4521 current_sched_info->add_remove_insn (insn, 0);
4523 RECOVERY_BLOCK (check) = rec;
4525 if (sched_verbose && spec_info->dump)
4526 fprintf (spec_info->dump, ";;\t\tGenerated check insn : %s\n",
4527 (*current_sched_info->print_insn) (check, 0));
4529 gcc_assert (ORIG_PAT (insn));
4531 /* Initialize TWIN (twin is a duplicate of original instruction
4532 in the recovery block). */
4533 if (rec != EXIT_BLOCK_PTR)
4535 sd_iterator_def sd_it;
4536 dep_t dep;
4538 FOR_EACH_DEP (insn, SD_LIST_RES_BACK, sd_it, dep)
4539 if ((DEP_STATUS (dep) & DEP_OUTPUT) != 0)
4541 struct _dep _dep2, *dep2 = &_dep2;
4543 init_dep (dep2, DEP_PRO (dep), check, REG_DEP_TRUE);
4545 sd_add_dep (dep2, true);
4548 twin = emit_insn_after (ORIG_PAT (insn), BB_END (rec));
4549 haifa_init_insn (twin);
4551 if (sched_verbose && spec_info->dump)
4552 /* INSN_BB (insn) isn't determined for twin insns yet.
4553 So we can't use current_sched_info->print_insn. */
4554 fprintf (spec_info->dump, ";;\t\tGenerated twin insn : %d/rec%d\n",
4555 INSN_UID (twin), rec->index);
4557 else
4559 ORIG_PAT (check) = ORIG_PAT (insn);
4560 HAS_INTERNAL_DEP (check) = 1;
4561 twin = check;
4562 /* ??? We probably should change all OUTPUT dependencies to
4563 (TRUE | OUTPUT). */
4566 /* Copy all resolved back dependencies of INSN to TWIN. This will
4567 provide correct value for INSN_TICK (TWIN). */
4568 sd_copy_back_deps (twin, insn, true);
4570 if (rec != EXIT_BLOCK_PTR)
4571 /* In case of branchy check, fix CFG. */
4573 basic_block first_bb, second_bb;
4574 rtx jump;
4576 first_bb = BLOCK_FOR_INSN (check);
4577 second_bb = sched_split_block (first_bb, check);
4579 sched_create_recovery_edges (first_bb, rec, second_bb);
4581 sched_init_only_bb (second_bb, first_bb);
4582 sched_init_only_bb (rec, EXIT_BLOCK_PTR);
4584 jump = BB_END (rec);
4585 haifa_init_insn (jump);
4588 /* Move backward dependences from INSN to CHECK and
4589 move forward dependences from INSN to TWIN. */
4591 /* First, create dependencies between INSN's producers and CHECK & TWIN. */
4592 FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
4594 rtx pro = DEP_PRO (dep);
4595 ds_t ds;
4597 /* If BEGIN_DATA: [insn ~~TRUE~~> producer]:
4598 check --TRUE--> producer ??? or ANTI ???
4599 twin --TRUE--> producer
4600 twin --ANTI--> check
4602 If BEGIN_CONTROL: [insn ~~ANTI~~> producer]:
4603 check --ANTI--> producer
4604 twin --ANTI--> producer
4605 twin --ANTI--> check
4607 If BE_IN_SPEC: [insn ~~TRUE~~> producer]:
4608 check ~~TRUE~~> producer
4609 twin ~~TRUE~~> producer
4610 twin --ANTI--> check */
4612 ds = DEP_STATUS (dep);
4614 if (ds & BEGIN_SPEC)
4616 gcc_assert (!mutate_p);
4617 ds &= ~BEGIN_SPEC;
4620 init_dep_1 (new_dep, pro, check, DEP_TYPE (dep), ds);
4621 sd_add_dep (new_dep, false);
4623 if (rec != EXIT_BLOCK_PTR)
4625 DEP_CON (new_dep) = twin;
4626 sd_add_dep (new_dep, false);
4630 /* Second, remove backward dependencies of INSN. */
4631 for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4632 sd_iterator_cond (&sd_it, &dep);)
4634 if ((DEP_STATUS (dep) & BEGIN_SPEC)
4635 || mutate_p)
4636 /* We can delete this dep because we overcome it with
4637 BEGIN_SPECULATION. */
4638 sd_delete_dep (sd_it);
4639 else
4640 sd_iterator_next (&sd_it);
4643 /* Future Speculations. Determine what BE_IN speculations will be like. */
4644 fs = 0;
4646 /* Fields (DONE_SPEC (x) & BEGIN_SPEC) and CHECK_SPEC (x) are set only
4647 here. */
4649 gcc_assert (!DONE_SPEC (insn));
4651 if (!mutate_p)
4653 ds_t ts = TODO_SPEC (insn);
4655 DONE_SPEC (insn) = ts & BEGIN_SPEC;
4656 CHECK_SPEC (check) = ts & BEGIN_SPEC;
4658 /* Luckiness of future speculations solely depends upon initial
4659 BEGIN speculation. */
4660 if (ts & BEGIN_DATA)
4661 fs = set_dep_weak (fs, BE_IN_DATA, get_dep_weak (ts, BEGIN_DATA));
4662 if (ts & BEGIN_CONTROL)
4663 fs = set_dep_weak (fs, BE_IN_CONTROL,
4664 get_dep_weak (ts, BEGIN_CONTROL));
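/* I.e. a BEGIN_DATA speculation of weakness W gives rise to BE_IN_DATA
   speculations of the same weakness W, and likewise for CONTROL. */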
4666 else
4667 CHECK_SPEC (check) = CHECK_SPEC (insn);
4669 /* Future speculations: call the helper. */
4670 process_insn_forw_deps_be_in_spec (insn, twin, fs);
4672 if (rec != EXIT_BLOCK_PTR)
4674 /* Which types of dependencies to use here is, generally,
4675 a machine-dependent question... But, for now,
4676 it is not. */
4678 if (!mutate_p)
4680 init_dep (new_dep, insn, check, REG_DEP_TRUE);
4681 sd_add_dep (new_dep, false);
4683 init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
4684 sd_add_dep (new_dep, false);
4686 else
4688 if (spec_info->dump)
4689 fprintf (spec_info->dump, ";;\t\tRemoved simple check : %s\n",
4690 (*current_sched_info->print_insn) (insn, 0));
4692 /* Remove all dependencies of the INSN. */
4694 sd_it = sd_iterator_start (insn, (SD_LIST_FORW
4695 | SD_LIST_BACK
4696 | SD_LIST_RES_BACK));
4697 while (sd_iterator_cond (&sd_it, &dep))
4698 sd_delete_dep (sd_it);
4701 /* If the former check (INSN) was already moved to the ready (or queue)
4702 list, add the new check (CHECK) there too. */
4703 if (QUEUE_INDEX (insn) != QUEUE_NOWHERE)
4704 try_ready (check);
4706 /* Remove old check from instruction stream and free its
4707 data. */
4708 sched_remove_insn (insn);
4711 init_dep (new_dep, check, twin, REG_DEP_ANTI);
4712 sd_add_dep (new_dep, false);
4714 else
4716 init_dep_1 (new_dep, insn, check, REG_DEP_TRUE, DEP_TRUE | DEP_OUTPUT);
4717 sd_add_dep (new_dep, false);
4720 if (!mutate_p)
4721 /* Fix priorities. If MUTATE_P is nonzero, this is not necessary,
4722 because it'll be done later in add_to_speculative_block. */
4724 rtx_vec_t priorities_roots = NULL;
4726 clear_priorities (twin, &priorities_roots);
4727 calc_priorities (priorities_roots);
4728 VEC_free (rtx, heap, priorities_roots);
4732 /* Removes dependencies between instructions in the recovery block REC
4733 and ordinary region instructions. It keeps inner dependences so it
4734 won't be necessary to recompute them. */
4735 static void
4736 fix_recovery_deps (basic_block rec)
4738 rtx note, insn, jump, ready_list = 0;
4739 bitmap_head in_ready;
4740 rtx link;
4742 bitmap_initialize (&in_ready, 0);
4744 /* NOTE - a basic block note. */
4745 note = NEXT_INSN (BB_HEAD (rec));
4746 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4747 insn = BB_END (rec);
4748 gcc_assert (JUMP_P (insn));
4749 insn = PREV_INSN (insn);
4751 do
4753 sd_iterator_def sd_it;
4754 dep_t dep;
4756 for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
4757 sd_iterator_cond (&sd_it, &dep);)
4759 rtx consumer = DEP_CON (dep);
4761 if (BLOCK_FOR_INSN (consumer) != rec)
4763 sd_delete_dep (sd_it);
4765 if (!bitmap_bit_p (&in_ready, INSN_LUID (consumer)))
4767 ready_list = alloc_INSN_LIST (consumer, ready_list);
4768 bitmap_set_bit (&in_ready, INSN_LUID (consumer));
4771 else
4773 gcc_assert ((DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
4775 sd_iterator_next (&sd_it);
4779 insn = PREV_INSN (insn);
4781 while (insn != note);
4783 bitmap_clear (&in_ready);
4785 /* Try to add instructions to the ready or queue list. */
4786 for (link = ready_list; link; link = XEXP (link, 1))
4787 try_ready (XEXP (link, 0));
4788 free_INSN_LIST_list (&ready_list);
4790 /* Fixing jump's dependences. */
4791 insn = BB_HEAD (rec);
4792 jump = BB_END (rec);
4794 gcc_assert (LABEL_P (insn));
4795 insn = NEXT_INSN (insn);
4797 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (insn));
4798 add_jump_dependencies (insn, jump);
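/* The consumers collected above are re-examined by try_ready, and
   add_jump_dependencies makes the closing jump of REC depend on every
   otherwise independent instruction, keeping the jump last in the block. */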
4801 /* Change pattern of INSN to NEW_PAT. */
4802 void
4803 sched_change_pattern (rtx insn, rtx new_pat)
4805 int t;
4807 t = validate_change (insn, &PATTERN (insn), new_pat, 0);
4808 gcc_assert (t);
4809 dfa_clear_single_insn_cache (insn);
4812 /* Change pattern of INSN to NEW_PAT. Invalidate cached haifa
4813 instruction data. */
4814 static void
4815 haifa_change_pattern (rtx insn, rtx new_pat)
4817 sched_change_pattern (insn, new_pat);
4819 /* Invalidate INSN_COST, so it'll be recalculated. */
4820 INSN_COST (insn) = -1;
4821 /* Invalidate INSN_TICK, so it'll be recalculated. */
4822 INSN_TICK (insn) = INVALID_TICK;
4825 /* -1 - can't speculate,
4826 0 - for speculation with REQUEST mode it is OK to use
4827 current instruction pattern,
4828 1 - need to change pattern for *NEW_PAT to be speculative. */
4829 int
4830 sched_speculate_insn (rtx insn, ds_t request, rtx *new_pat)
4832 gcc_assert (current_sched_info->flags & DO_SPECULATION
4833 && (request & SPECULATIVE)
4834 && sched_insn_is_legitimate_for_speculation_p (insn, request));
4836 if ((request & spec_info->mask) != request)
4837 return -1;
4839 if (request & BE_IN_SPEC
4840 && !(request & BEGIN_SPEC))
4841 return 0;
4843 return targetm.sched.speculate_insn (insn, request, new_pat);
4846 static int
4847 haifa_speculate_insn (rtx insn, ds_t request, rtx *new_pat)
4849 gcc_assert (sched_deps_info->generate_spec_deps
4850 && !IS_SPECULATION_CHECK_P (insn));
4852 if (HAS_INTERNAL_DEP (insn)
4853 || SCHED_GROUP_P (insn))
4854 return -1;
4856 return sched_speculate_insn (insn, request, new_pat);
4859 /* Print some information about block BB, which starts with HEAD and
4860 ends with TAIL, before scheduling it.
4861 I is zero if the scheduler is about to start with a fresh ebb. */
4862 static void
4863 dump_new_block_header (int i, basic_block bb, rtx head, rtx tail)
4865 if (!i)
4866 fprintf (sched_dump,
4867 ";; ======================================================\n");
4868 else
4869 fprintf (sched_dump,
4870 ";; =====================ADVANCING TO=====================\n");
4871 fprintf (sched_dump,
4872 ";; -- basic block %d from %d to %d -- %s reload\n",
4873 bb->index, INSN_UID (head), INSN_UID (tail),
4874 (reload_completed ? "after" : "before"));
4875 fprintf (sched_dump,
4876 ";; ======================================================\n");
4877 fprintf (sched_dump, "\n");
4880 /* Unlink basic block notes and labels and save them, so they
4881 can be easily restored. We unlink basic block notes in the EBB to
4882 provide back-compatibility with the previous code, as target backends
4883 assume that there'll be only instructions between
4884 current_sched_info->{head and tail}. We restore these notes as soon
4885 as we can.
4886 FIRST (LAST) is the first (last) basic block in the ebb.
4887 NB: In the usual case (FIRST == LAST) nothing is really done. */
4888 void
4889 unlink_bb_notes (basic_block first, basic_block last)
4891 /* We DON'T unlink basic block notes of the first block in the ebb. */
4892 if (first == last)
4893 return;
4895 bb_header = XNEWVEC (rtx, last_basic_block);
4897 /* Make a sentinel. */
4898 if (last->next_bb != EXIT_BLOCK_PTR)
4899 bb_header[last->next_bb->index] = 0;
4901 first = first->next_bb;
4902 do
4904 rtx prev, label, note, next;
4906 label = BB_HEAD (last);
4907 if (LABEL_P (label))
4908 note = NEXT_INSN (label);
4909 else
4910 note = label;
4911 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4913 prev = PREV_INSN (label);
4914 next = NEXT_INSN (note);
4915 gcc_assert (prev && next);
4917 NEXT_INSN (prev) = next;
4918 PREV_INSN (next) = prev;
4920 bb_header[last->index] = label;
4922 if (last == first)
4923 break;
4925 last = last->prev_bb;
4927 while (1);
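/* After this loop the label (if any) and basic block note of every block
   except the first have been spliced out of the insn chain and remembered
   in bb_header[], so only ordinary insns remain between the ebb's head and
   tail. */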
4930 /* Restore basic block notes.
4931 FIRST is the first basic block in the ebb. */
4932 static void
4933 restore_bb_notes (basic_block first)
4935 if (!bb_header)
4936 return;
4938 /* We DON'T unlink basic block notes of the first block in the ebb. */
4939 first = first->next_bb;
4940 /* Remember: FIRST is actually the second basic block in the ebb. */
4942 while (first != EXIT_BLOCK_PTR
4943 && bb_header[first->index])
4945 rtx prev, label, note, next;
4947 label = bb_header[first->index];
4948 prev = PREV_INSN (label);
4949 next = NEXT_INSN (prev);
4951 if (LABEL_P (label))
4952 note = NEXT_INSN (label);
4953 else
4954 note = label;
4955 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4957 bb_header[first->index] = 0;
4959 NEXT_INSN (prev) = label;
4960 NEXT_INSN (note) = next;
4961 PREV_INSN (next) = note;
4963 first = first->next_bb;
4966 free (bb_header);
4967 bb_header = 0;
4970 /* Helper function.
4971 Fix CFG after both in- and inter-block movement of
4972 control_flow_insn_p JUMP. */
4973 static void
4974 fix_jump_move (rtx jump)
4976 basic_block bb, jump_bb, jump_bb_next;
4978 bb = BLOCK_FOR_INSN (PREV_INSN (jump));
4979 jump_bb = BLOCK_FOR_INSN (jump);
4980 jump_bb_next = jump_bb->next_bb;
4982 gcc_assert (common_sched_info->sched_pass_id == SCHED_EBB_PASS
4983 || IS_SPECULATION_BRANCHY_CHECK_P (jump));
4985 if (!NOTE_INSN_BASIC_BLOCK_P (BB_END (jump_bb_next)))
4986 /* if jump_bb_next is not empty. */
4987 BB_END (jump_bb) = BB_END (jump_bb_next);
4989 if (BB_END (bb) != PREV_INSN (jump))
4990 /* Then there are instructions after jump that should be placed
4991 into jump_bb_next. */
4992 BB_END (jump_bb_next) = BB_END (bb);
4993 else
4994 /* Otherwise jump_bb_next is empty. */
4995 BB_END (jump_bb_next) = NEXT_INSN (BB_HEAD (jump_bb_next));
4997 /* To make assertion in move_insn happy. */
4998 BB_END (bb) = PREV_INSN (jump);
5000 update_bb_for_insn (jump_bb_next);
5003 /* Fix CFG after interblock movement of control_flow_insn_p JUMP. */
5004 static void
5005 move_block_after_check (rtx jump)
5007 basic_block bb, jump_bb, jump_bb_next;
5008 VEC(edge,gc) *t;
5010 bb = BLOCK_FOR_INSN (PREV_INSN (jump));
5011 jump_bb = BLOCK_FOR_INSN (jump);
5012 jump_bb_next = jump_bb->next_bb;
5014 update_bb_for_insn (jump_bb);
5016 gcc_assert (IS_SPECULATION_CHECK_P (jump)
5017 || IS_SPECULATION_CHECK_P (BB_END (jump_bb_next)));
5019 unlink_block (jump_bb_next);
5020 link_block (jump_bb_next, bb);
5022 t = bb->succs;
5023 bb->succs = 0;
5024 move_succs (&(jump_bb->succs), bb);
5025 move_succs (&(jump_bb_next->succs), jump_bb);
5026 move_succs (&t, jump_bb_next);
5028 df_mark_solutions_dirty ();
5030 common_sched_info->fix_recovery_cfg
5031 (bb->index, jump_bb->index, jump_bb_next->index);
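/* The net effect is a rotation: jump_bb_next is relinked right after bb,
   bb takes over jump_bb's successors, jump_bb takes jump_bb_next's, and
   jump_bb_next ends up with bb's original successors. */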
5034 /* Helper function for move_block_after_check.
5035 This function attaches the edge vector pointed to by SUCCSP to
5036 block TO. */
5037 static void
5038 move_succs (VEC(edge,gc) **succsp, basic_block to)
5040 edge e;
5041 edge_iterator ei;
5043 gcc_assert (to->succs == 0);
5045 to->succs = *succsp;
5047 FOR_EACH_EDGE (e, ei, to->succs)
5048 e->src = to;
5050 *succsp = 0;
5053 /* Remove INSN from the instruction stream.
5054 INSN should not have any dependencies. */
5055 static void
5056 sched_remove_insn (rtx insn)
5058 sd_finish_insn (insn);
5060 change_queue_index (insn, QUEUE_NOWHERE);
5061 current_sched_info->add_remove_insn (insn, 1);
5062 remove_insn (insn);
5065 /* Clear priorities of all instructions that are forward dependent on INSN.
5066 Store in vector pointed to by ROOTS_PTR insns on which priority () should
5067 be invoked to initialize all cleared priorities. */
5068 static void
5069 clear_priorities (rtx insn, rtx_vec_t *roots_ptr)
5071 sd_iterator_def sd_it;
5072 dep_t dep;
5073 bool insn_is_root_p = true;
5075 gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
5077 FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
5079 rtx pro = DEP_PRO (dep);
5081 if (INSN_PRIORITY_STATUS (pro) >= 0
5082 && QUEUE_INDEX (insn) != QUEUE_SCHEDULED)
5084 /* If DEP doesn't contribute to priority then INSN itself should
5085 be added to priority roots. */
5086 if (contributes_to_priority_p (dep))
5087 insn_is_root_p = false;
5089 INSN_PRIORITY_STATUS (pro) = -1;
5090 clear_priorities (pro, roots_ptr);
5094 if (insn_is_root_p)
5095 VEC_safe_push (rtx, heap, *roots_ptr, insn);
5098 /* Recompute priorities of instructions whose priorities might have been
5099 changed. ROOTS is a vector of instructions whose priority computation will
5100 trigger initialization of all cleared priorities. */
5101 static void
5102 calc_priorities (rtx_vec_t roots)
5104 int i;
5105 rtx insn;
5107 for (i = 0; VEC_iterate (rtx, roots, i, insn); i++)
5108 priority (insn);
5112 /* Add dependences between JUMP and other instructions in the recovery
5113 block. INSN is the first insn in the recovery block. */
5114 static void
5115 add_jump_dependencies (rtx insn, rtx jump)
5117 do
5119 insn = NEXT_INSN (insn);
5120 if (insn == jump)
5121 break;
5123 if (dep_list_size (insn) == 0)
5125 dep_def _new_dep, *new_dep = &_new_dep;
5127 init_dep (new_dep, insn, jump, REG_DEP_ANTI);
5128 sd_add_dep (new_dep, false);
5131 while (1);
5133 gcc_assert (!sd_lists_empty_p (jump, SD_LIST_BACK));
5136 /* Return the NOTE_INSN_BASIC_BLOCK of BB. */
5137 rtx
5138 bb_note (basic_block bb)
5140 rtx note;
5142 note = BB_HEAD (bb);
5143 if (LABEL_P (note))
5144 note = NEXT_INSN (note);
5146 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
5147 return note;
5150 #ifdef ENABLE_CHECKING
5151 /* Helper function for check_cfg.
5152 Return nonzero if the edge vector pointed to by EL has an edge with
5153 TYPE in its flags. */
5154 static int
5155 has_edge_p (VEC(edge,gc) *el, int type)
5157 edge e;
5158 edge_iterator ei;
5160 FOR_EACH_EDGE (e, ei, el)
5161 if (e->flags & type)
5162 return 1;
5163 return 0;
5166 /* Search back, starting at INSN, for an insn that is not a
5167 NOTE_INSN_VAR_LOCATION. Don't search beyond HEAD, and return it if
5168 no such insn can be found. */
5169 static inline rtx
5170 prev_non_location_insn (rtx insn, rtx head)
5172 while (insn != head && NOTE_P (insn)
5173 && NOTE_KIND (insn) == NOTE_INSN_VAR_LOCATION)
5174 insn = PREV_INSN (insn);
5176 return insn;
5179 /* Check a few properties of the CFG between HEAD and TAIL.
5180 If HEAD (TAIL) is NULL, check from the beginning (to the end) of the
5181 instruction stream. */
5182 static void
5183 check_cfg (rtx head, rtx tail)
5185 rtx next_tail;
5186 basic_block bb = 0;
5187 int not_first = 0, not_last;
5189 if (head == NULL)
5190 head = get_insns ();
5191 if (tail == NULL)
5192 tail = get_last_insn ();
5193 next_tail = NEXT_INSN (tail);
5195 do
5197 not_last = head != tail;
5199 if (not_first)
5200 gcc_assert (NEXT_INSN (PREV_INSN (head)) == head);
5201 if (not_last)
5202 gcc_assert (PREV_INSN (NEXT_INSN (head)) == head);
5204 if (LABEL_P (head)
5205 || (NOTE_INSN_BASIC_BLOCK_P (head)
5206 && (!not_first
5207 || (not_first && !LABEL_P (PREV_INSN (head))))))
5209 gcc_assert (bb == 0);
5210 bb = BLOCK_FOR_INSN (head);
5211 if (bb != 0)
5212 gcc_assert (BB_HEAD (bb) == head);
5213 else
5214 /* This is the case of a jump table. See inside_basic_block_p (). */
5215 gcc_assert (LABEL_P (head) && !inside_basic_block_p (head));
5218 if (bb == 0)
5220 gcc_assert (!inside_basic_block_p (head));
5221 head = NEXT_INSN (head);
5223 else
5225 gcc_assert (inside_basic_block_p (head)
5226 || NOTE_P (head));
5227 gcc_assert (BLOCK_FOR_INSN (head) == bb);
5229 if (LABEL_P (head))
5231 head = NEXT_INSN (head);
5232 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (head));
5234 else
5236 if (control_flow_insn_p (head))
5238 gcc_assert (prev_non_location_insn (BB_END (bb), head)
5239 == head);
5241 if (any_uncondjump_p (head))
5242 gcc_assert (EDGE_COUNT (bb->succs) == 1
5243 && BARRIER_P (NEXT_INSN (head)));
5244 else if (any_condjump_p (head))
5245 gcc_assert (/* Usual case. */
5246 (EDGE_COUNT (bb->succs) > 1
5247 && !BARRIER_P (NEXT_INSN (head)))
5248 /* Or jump to the next instruction. */
5249 || (EDGE_COUNT (bb->succs) == 1
5250 && (BB_HEAD (EDGE_I (bb->succs, 0)->dest)
5251 == JUMP_LABEL (head))));
5253 if (BB_END (bb) == head)
5255 if (EDGE_COUNT (bb->succs) > 1)
5256 gcc_assert (control_flow_insn_p (prev_non_location_insn
5257 (head, BB_HEAD (bb)))
5258 || has_edge_p (bb->succs, EDGE_COMPLEX));
5259 bb = 0;
5262 head = NEXT_INSN (head);
5266 not_first = 1;
5268 while (head != next_tail);
5270 gcc_assert (bb == 0);
5273 #endif /* ENABLE_CHECKING */
5275 /* Extend per basic block data structures. */
5276 static void
5277 extend_bb (void)
5279 if (sched_scan_info->extend_bb)
5280 sched_scan_info->extend_bb ();
5283 /* Init data for BB. */
5284 static void
5285 init_bb (basic_block bb)
5287 if (sched_scan_info->init_bb)
5288 sched_scan_info->init_bb (bb);
5291 /* Extend per insn data structures. */
5292 static void
5293 extend_insn (void)
5295 if (sched_scan_info->extend_insn)
5296 sched_scan_info->extend_insn ();
5299 /* Init data structures for INSN. */
5300 static void
5301 init_insn (rtx insn)
5303 if (sched_scan_info->init_insn)
5304 sched_scan_info->init_insn (insn);
5307 /* Init all insns in BB. */
5308 static void
5309 init_insns_in_bb (basic_block bb)
5311 rtx insn;
5313 FOR_BB_INSNS (bb, insn)
5314 init_insn (insn);
5317 /* A driver function to add a set of basic blocks (BBS),
5318 a single basic block (BB), a set of insns (INSNS) or a single insn (INSN)
5319 to the scheduling region. */
5320 void
5321 sched_scan (const struct sched_scan_info_def *ssi,
5322 bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
5324 sched_scan_info = ssi;
5326 if (bbs != NULL || bb != NULL)
5328 extend_bb ();
5330 if (bbs != NULL)
5332 unsigned i;
5333 basic_block x;
5335 for (i = 0; VEC_iterate (basic_block, bbs, i, x); i++)
5336 init_bb (x);
5339 if (bb != NULL)
5340 init_bb (bb);
5343 extend_insn ();
5345 if (bbs != NULL)
5347 unsigned i;
5348 basic_block x;
5350 for (i = 0; VEC_iterate (basic_block, bbs, i, x); i++)
5351 init_insns_in_bb (x);
5354 if (bb != NULL)
5355 init_insns_in_bb (bb);
5357 if (insns != NULL)
5359 unsigned i;
5360 rtx x;
5362 for (i = 0; VEC_iterate (rtx, insns, i, x); i++)
5363 init_insn (x);
5366 if (insn != NULL)
5367 init_insn (insn);
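/* Callers may pass any combination; haifa_init_insn below, for example,
   passes a single INSN and leaves BBS, BB and INSNS NULL. */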
5371 /* Extend data structures for logical insn UID. */
5372 static void
5373 luids_extend_insn (void)
5375 int new_luids_max_uid = get_max_uid () + 1;
5377 VEC_safe_grow_cleared (int, heap, sched_luids, new_luids_max_uid);
5380 /* Initialize LUID for INSN. */
5381 static void
5382 luids_init_insn (rtx insn)
5384 int i = INSN_P (insn) ? 1 : common_sched_info->luid_for_non_insn (insn);
5385 int luid;
5387 if (i >= 0)
5389 luid = sched_max_luid;
5390 sched_max_luid += i;
5392 else
5393 luid = -1;
5395 SET_INSN_LUID (insn, luid);
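/* Thus real insns get a fresh luid and advance sched_max_luid by one,
   notes and labels receive the current sched_max_luid without advancing it
   (sharing a luid with the next real insn), and anything for which
   luid_for_non_insn returns a negative value gets luid -1. */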
5398 /* Initialize luids for BBS, BB, INSNS and INSN.
5399 The hook common_sched_info->luid_for_non_insn () is used to determine
5400 if notes, labels, etc. need luids. */
5401 void
5402 sched_init_luids (bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
5404 const struct sched_scan_info_def ssi =
5406 NULL, /* extend_bb */
5407 NULL, /* init_bb */
5408 luids_extend_insn, /* extend_insn */
5409 luids_init_insn /* init_insn */
5412 sched_scan (&ssi, bbs, bb, insns, insn);
5415 /* Free LUIDs. */
5416 void
5417 sched_finish_luids (void)
5419 VEC_free (int, heap, sched_luids);
5420 sched_max_luid = 1;
5423 /* Return logical uid of INSN. Helpful while debugging. */
5424 int
5425 insn_luid (rtx insn)
5427 return INSN_LUID (insn);
5430 /* Extend per insn data in the target. */
5431 void
5432 sched_extend_target (void)
5434 if (targetm.sched.h_i_d_extended)
5435 targetm.sched.h_i_d_extended ();
5438 /* Extend global scheduler structures (those that live across calls to
5439 schedule_block) to include information about the just-emitted INSN. */
5440 static void
5441 extend_h_i_d (void)
5443 int reserve = (get_max_uid () + 1
5444 - VEC_length (haifa_insn_data_def, h_i_d));
5445 if (reserve > 0
5446 && ! VEC_space (haifa_insn_data_def, h_i_d, reserve))
5448 VEC_safe_grow_cleared (haifa_insn_data_def, heap, h_i_d,
5449 3 * get_max_uid () / 2);
5450 sched_extend_target ();
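/* For example, with get_max_uid () == 100 and 90 entries already in h_i_d
   (and no spare capacity), reserve is 11 and the vector grows,
   zero-initialized, to 150 entries, leaving headroom before the next
   resize. */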
5454 /* Initialize the h_i_d entry of INSN with default values.
5455 Values that are not explicitly initialized here hold zero. */
5456 static void
5457 init_h_i_d (rtx insn)
5459 if (INSN_LUID (insn) > 0)
5461 INSN_COST (insn) = -1;
5462 QUEUE_INDEX (insn) = QUEUE_NOWHERE;
5463 INSN_TICK (insn) = INVALID_TICK;
5464 INTER_TICK (insn) = INVALID_TICK;
5465 TODO_SPEC (insn) = HARD_DEP;
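/* The -1 cost, INVALID_TICK and QUEUE_NOWHERE values act as "not computed
   yet" / "not queued" markers and are filled in or updated later during
   scheduling. */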
5469 /* Initialize haifa_insn_data for BBS, BB, INSNS and INSN. */
5470 void
5471 haifa_init_h_i_d (bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
5473 const struct sched_scan_info_def ssi =
5475 NULL, /* extend_bb */
5476 NULL, /* init_bb */
5477 extend_h_i_d, /* extend_insn */
5478 init_h_i_d /* init_insn */
5481 sched_scan (&ssi, bbs, bb, insns, insn);
5484 /* Finalize haifa_insn_data. */
5485 void
5486 haifa_finish_h_i_d (void)
5488 int i;
5489 haifa_insn_data_t data;
5490 struct reg_use_data *use, *next;
5492 for (i = 0; VEC_iterate (haifa_insn_data_def, h_i_d, i, data); i++)
5494 if (data->reg_pressure != NULL)
5495 free (data->reg_pressure);
5496 for (use = data->reg_use_list; use != NULL; use = next)
5498 next = use->next_insn_use;
5499 free (use);
5502 VEC_free (haifa_insn_data_def, heap, h_i_d);
5505 /* Init data for the new insn INSN. */
5506 static void
5507 haifa_init_insn (rtx insn)
5509 gcc_assert (insn != NULL);
5511 sched_init_luids (NULL, NULL, NULL, insn);
5512 sched_extend_target ();
5513 sched_deps_init (false);
5514 haifa_init_h_i_d (NULL, NULL, NULL, insn);
5516 if (adding_bb_to_current_region_p)
5518 sd_init_insn (insn);
5520 /* Extend dependency caches by one element. */
5521 extend_dependency_caches (1, false);
5525 /* Init data for the new basic block BB which comes after AFTER. */
5526 static void
5527 haifa_init_only_bb (basic_block bb, basic_block after)
5529 gcc_assert (bb != NULL);
5531 sched_init_bbs ();
5533 if (common_sched_info->add_block)
5534 /* This changes only data structures of the front-end. */
5535 common_sched_info->add_block (bb, after);
5538 /* A generic version of sched_split_block (). */
5539 basic_block
5540 sched_split_block_1 (basic_block first_bb, rtx after)
5542 edge e;
5544 e = split_block (first_bb, after);
5545 gcc_assert (e->src == first_bb);
5547 /* sched_split_block emits note if *check == BB_END. Probably it
5548 is better to rip that note off. */
5550 return e->dest;
5553 /* A generic version of sched_create_empty_bb (). */
5554 basic_block
5555 sched_create_empty_bb_1 (basic_block after)
5557 return create_empty_bb (after);
5560 /* Insert PAT as an INSN into the schedule and update the necessary data
5561 structures to account for it. */
5562 rtx
5563 sched_emit_insn (rtx pat)
5565 rtx insn = emit_insn_after (pat, last_scheduled_insn);
5566 last_scheduled_insn = insn;
5567 haifa_init_insn (insn);
5568 return insn;
5571 #endif /* INSN_SCHEDULING */