/* Instruction scheduling pass.
   Copyright (C) 1992, 93-97, 1998 Free Software Foundation, Inc.
   Contributed by Michael Tiemann (tiemann@cygnus.com) Enhanced by,
   and currently maintained by, Jim Wilson (wilson@cygnus.com)

This file is part of GNU CC.

GNU CC is free software; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.

GNU CC is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
General Public License for more details.

You should have received a copy of the GNU General Public License
along with GNU CC; see the file COPYING.  If not, write to the Free
Software Foundation, 59 Temple Place - Suite 330,
Boston, MA 02111-1307, USA.  */
/* Instruction scheduling pass.

   This pass implements list scheduling within basic blocks.  It is
   run twice: (1) after flow analysis, but before register allocation,
   and (2) after register allocation.

   The first run performs interblock scheduling, moving insns between
   different blocks in the same "region", and the second runs only
   basic block scheduling.

   Interblock motions performed are useful motions and speculative
   motions, including speculative loads.  Motions requiring code
   duplication are not supported.  The identification of motion type
   and the check for validity of speculative motions requires
   construction and analysis of the function's control flow graph.
   The scheduler works as follows:

   We compute insn priorities based on data dependencies.  Flow
   analysis only creates a fraction of the data dependencies we must
   observe: namely, only those dependencies which the combiner can be
   expected to use.  For this pass, we must therefore create the
   remaining dependencies we need to observe: register dependencies,
   memory dependencies, dependencies to keep function calls in order,
   and the dependence between a conditional branch and the setting of
   condition codes.  All of these dependencies are dealt with here.

   The scheduler first traverses the data flow graph, starting with
   the last instruction, and proceeding to the first, assigning values
   to insn_priority as it goes.  This sorts the instructions
   topologically by data dependence.

   Once priorities have been established, we order the insns using
   list scheduling.  This works as follows: starting with a list of
   all the ready insns, and sorted according to priority number, we
   schedule the insn from the end of the list by placing its
   predecessors in the list according to their priority order.  We
   consider this insn scheduled by setting the pointer to the "end" of
   the list to point to the previous insn.  When an insn has no
   predecessors, we either queue it until sufficient time has elapsed
   or add it to the ready list.  As the instructions are scheduled or
   when stalls are introduced, the queue advances and dumps insns into
   the ready list.  When all insns down to the lowest priority have
   been scheduled, the critical path of the basic block has been made
   as short as possible.  The remaining insns are then scheduled in
   remaining slots.

   Function unit conflicts are resolved during forward list scheduling
   by tracking the time when each insn is committed to the schedule
   and from that, the time the function units it uses must be free.
   As insns on the ready list are considered for scheduling, those
   that would result in a blockage of the already committed insns are
   queued until no blockage will result.

   The following list shows the order in which we want to break ties
   among insns in the ready list:

   1.  choose insn with the longest path to end of bb, ties broken by
   2.  choose insn with least contribution to register pressure,
       ties broken by
   3.  prefer in-block upon interblock motion, ties broken by
   4.  prefer useful upon speculative motion, ties broken by
   5.  choose insn with largest control flow probability, ties broken by
   6.  choose insn with the least dependences upon the previously
       scheduled insn, ties broken by
   7.  choose the insn which has the most insns dependent on it, and finally
   8.  choose insn with lowest UID.

   Memory references complicate matters.  Only if we can be certain
   that memory references are not part of the data dependency graph
   (via true, anti, or output dependence), can we move operations past
   memory references.  To first approximation, reads can be done
   independently, while writes introduce dependencies.  Better
   approximations will yield fewer dependencies.

   Before reload, an extended analysis of interblock data dependences
   is required for interblock scheduling.  This is performed in
   compute_block_backward_dependences ().

   Dependencies set up by memory references are treated in exactly the
   same way as other dependencies, by using LOG_LINKS backward
   dependences.  LOG_LINKS are translated into INSN_DEPEND forward
   dependences for the purpose of forward list scheduling.

   Having optimized the critical path, we may have also unduly
   extended the lifetimes of some registers.  If an operation requires
   that constants be loaded into registers, it is certainly desirable
   to load those constants as early as necessary, but no earlier.
   I.e., it will not do to load up a bunch of registers at the
   beginning of a basic block only to use them at the end, if they
   could be loaded later, since this may result in excessive register
   lifetimes.

   Note that since branches are never in basic blocks, but only end
   basic blocks, this pass will not move branches.  But that is OK,
   since we can use GNU's delayed branch scheduling pass to take care
   of this case.

   Also note that no further optimizations based on algebraic
   identities are performed, so this pass would be a good one to
   perform instruction splitting, such as breaking up a multiply
   instruction into shifts and adds where that is profitable.

   Given the memory aliasing analysis that this pass should perform,
   it should be possible to remove redundant stores to memory, and to
   load values from registers instead of hitting memory.

   Before reload, speculative insns are moved only if a 'proof' exists
   that no exception will be caused by this, and if no live registers
   exist that inhibit the motion (live register constraints are not
   represented by data dependence edges).

   This pass must update information that subsequent passes expect to
   be correct.  Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
   reg_n_calls_crossed, and reg_live_length.  Also, basic_block_head,
   basic_block_end.

   The information in the line number notes is carefully retained by
   this pass.  Notes that refer to the starting and ending of
   exception regions are also carefully retained by this pass.  All
   other NOTE insns are grouped in their same relative order at the
   beginning of basic blocks and regions that have been scheduled.

   The main entry point for this pass is schedule_insns(), called for
   each function.  The work of the scheduler is organized in three
   levels: (1) function level: insns are subject to splitting, the
   control-flow graph is constructed, regions are computed (after
   reload, each region is of one block), (2) region level: control
   flow graph attributes required for interblock scheduling are
   computed (dominators, reachability, etc.), data dependences and
   priorities are computed, and (3) block level: insns in the block
   are actually scheduled.  */
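/* Illustrative sketch (not part of the pass): the core of the forward
   list scheduling described above, with function units, the clock
   queue and tie-breaking stripped away.  `ready'/`n_ready' and the
   INSN_DEPEND/INSN_DEP_COUNT accessors correspond to the structures
   defined below; `emit_to_schedule' is a hypothetical placeholder for
   moving an insn onto the scheduled chain.  */
#if 0
static void
list_schedule_sketch (ready, n_ready)
     rtx *ready;
     int n_ready;
{
  while (n_ready > 0)
    {
      rtx insn, link;

      /* rank_for_schedule sorts the ready list so that the best
         candidate ends up last.  */
      insn = ready[--n_ready];
      emit_to_schedule (insn);

      /* Resolve forward dependences; a successor whose last incoming
         dependence this was becomes ready itself.  */
      for (link = INSN_DEPEND (insn); link; link = XEXP (link, 1))
        {
          rtx next = XEXP (link, 0);

          if (--INSN_DEP_COUNT (next) == 0)
            ready[n_ready++] = next;
        }
    }
}
#endif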
#include "config.h"
#include "rtl.h"
#include "basic-block.h"
#include "regs.h"
#include "hard-reg-set.h"
#include "flags.h"
#include "insn-config.h"
#include "insn-attr.h"

extern char *reg_known_equiv_p;
extern rtx *reg_known_value;
#ifdef INSN_SCHEDULING

/* target_units bitmask has 1 for each unit in the cpu.  It should be
   possible to compute this variable from the machine description.
   But currently it is computed by examining the insn list.  Since
   this is only needed for visualization, it seems an acceptable
   solution.  (For understanding the mapping of bits to units, see
   definition of function_units[] in "insn-attrtab.c")  */

static int target_units = 0;

/* issue_rate is the number of insns that can be scheduled in the same
   machine cycle.  It can be defined in the config/mach/mach.h file,
   otherwise we set it to 1.  */

static int issue_rate;
/* sched_verbose controls the amount of debugging output the
   scheduler prints.  It is controlled by -fsched-verbose-N:
   N>0 and no -dSR : the output is directed to stderr.
   N>=10 will direct the printouts to stderr (regardless of -dSR).
   N=1: same as -dSR.
   N=2: bb's probabilities, detailed ready list info, unit/insn info.
   N=3: rtl at abort point, control-flow, regions info.
   N=5: dependences info.  */

/* Compile-time limits on the size of a region considered for
   interblock scheduling.  */
#define MAX_RGN_BLOCKS 10
#define MAX_RGN_INSNS 100

static int sched_verbose_param = 0;
static int sched_verbose = 0;
/* nr_inter/spec counts interblock/speculative motion for the function.  */
static int nr_inter, nr_spec;

/* Debugging file.  All printouts are sent to dump, which is always set,
   either to stderr, or to the dump listing file (-dSR).  */
static FILE *dump = 0;
/* fix_sched_param() is called from toplev.c upon detection
   of the -fsched-***-N options.  */

void
fix_sched_param (param, val)
     char *param, *val;
{
  if (!strcmp (param, "verbose"))
    sched_verbose_param = atoi (val);
  else
    warning ("fix_sched_param: unknown param: %s", param);
}
/* Arrays set up by scheduling for the same respective purposes as
   similar-named arrays set up by flow analysis.  We work with these
   arrays during the scheduling pass so we can compare values against
   unscheduled code.

   Values of these arrays are copied at the end of this pass into the
   arrays set up by flow analysis.  */
static int *sched_reg_n_calls_crossed;
static int *sched_reg_live_length;
static int *sched_reg_basic_block;

/* We need to know the current block number during the post scheduling
   update of live register information so that we can also update
   REG_BASIC_BLOCK if a register changes blocks.  */
static int current_block_num;

/* Element N is the next insn that sets (hard or pseudo) register
   N within the current basic block; or zero, if there is no
   such insn.  Needed for new registers which may be introduced
   by splitting insns.  */
static rtx *reg_last_uses;
static rtx *reg_last_sets;
static regset reg_pending_sets;
static int reg_pending_sets_all;
/* Vector indexed by INSN_UID giving the original ordering of the insns.  */
static int *insn_luid;
#define INSN_LUID(INSN) (insn_luid[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving each instruction a priority.  */
static int *insn_priority;
#define INSN_PRIORITY(INSN) (insn_priority[INSN_UID (INSN)])

static short *insn_costs;
#define INSN_COST(INSN) insn_costs[INSN_UID (INSN)]

/* Vector indexed by INSN_UID giving an encoding of the function units
   used.  */
static short *insn_units;
#define INSN_UNIT(INSN) insn_units[INSN_UID (INSN)]

/* Vector indexed by INSN_UID giving each instruction a register-weight.
   This weight is an estimation of the insn's contribution to register
   pressure.  */
static int *insn_reg_weight;
#define INSN_REG_WEIGHT(INSN) (insn_reg_weight[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving a list of insns which
   depend upon INSN.  Unlike LOG_LINKS, it represents forward dependences.  */
static rtx *insn_depend;
#define INSN_DEPEND(INSN) insn_depend[INSN_UID (INSN)]

/* Vector indexed by INSN_UID.  Initialized to the number of incoming
   edges in the forward dependence graph (= number of LOG_LINKS).  As
   scheduling proceeds, dependence counts are decreased.  An
   instruction moves to the ready list when its counter is zero.  */
static int *insn_dep_count;
#define INSN_DEP_COUNT(INSN) (insn_dep_count[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving an encoding of the blockage range
   function.  The unit and the range are encoded.  */
static unsigned int *insn_blockage;
#define INSN_BLOCKAGE(INSN) insn_blockage[INSN_UID (INSN)]
#define UNIT_BITS 5
#define BLOCKAGE_MASK ((1 << BLOCKAGE_BITS) - 1)
#define ENCODE_BLOCKAGE(U, R)                        \
((((U) << UNIT_BITS) << BLOCKAGE_BITS                \
  | MIN_BLOCKAGE_COST (R)) << BLOCKAGE_BITS          \
 | MAX_BLOCKAGE_COST (R))
#define UNIT_BLOCKED(B) ((B) >> (2 * BLOCKAGE_BITS))
#define BLOCKAGE_RANGE(B)                                                \
  (((((B) >> BLOCKAGE_BITS) & BLOCKAGE_MASK) << (HOST_BITS_PER_INT / 2)) \
   | ((B) & BLOCKAGE_MASK))

/* Encodings of the `<name>_unit_blockage_range' function.  */
#define MIN_BLOCKAGE_COST(R) ((R) >> (HOST_BITS_PER_INT / 2))
#define MAX_BLOCKAGE_COST(R) ((R) & ((1 << (HOST_BITS_PER_INT / 2)) - 1))
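/* Worked example (illustrative numbers only): assume BLOCKAGE_BITS is 4
   and HOST_BITS_PER_INT is 32.  A range R with minimum cost 2 and
   maximum cost 5 arrives encoded as R = (2 << 16) | 5, so
   MIN_BLOCKAGE_COST (R) == 2 and MAX_BLOCKAGE_COST (R) == 5.
   ENCODE_BLOCKAGE then packs everything into one int: the low 4 bits
   hold the max cost, the next 4 bits the min cost, and the high bits
   the (shifted) unit field.  UNIT_BLOCKED recovers the unit field with
   a single shift, and BLOCKAGE_RANGE rebuilds the min/max pair in the
   same layout that MIN/MAX_BLOCKAGE_COST decode.  */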
#define DONE_PRIORITY	-1
#define MAX_PRIORITY	0x7fffffff
#define TAIL_PRIORITY	0x7ffffffe
#define LAUNCH_PRIORITY	0x7f000001
#define DONE_PRIORITY_P(INSN) (INSN_PRIORITY (INSN) < 0)
#define LOW_PRIORITY_P(INSN) ((INSN_PRIORITY (INSN) & 0x7f000000) == 0)
/* Vector indexed by INSN_UID giving number of insns referring to this insn.  */
static int *insn_ref_count;
#define INSN_REF_COUNT(INSN) (insn_ref_count[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving line-number note in effect for each
   insn.  For line-number notes, this indicates whether the note may be
   reused.  */
static rtx *line_note;
#define LINE_NOTE(INSN) (line_note[INSN_UID (INSN)])

/* Vector indexed by basic block number giving the starting line-number
   for each basic block.  */
static rtx *line_note_head;

/* List of important notes we must keep around.  This is a pointer to the
   last element in the list.  */
static rtx note_list;
/* Regsets telling whether a given register is live or dead before the last
   scheduled insn.  Must scan the instructions once before scheduling to
   determine what registers are live or dead at the end of the block.  */
static regset bb_live_regs;

/* Regset telling whether a given register is live after the insn currently
   being scheduled.  Before processing an insn, this is equal to bb_live_regs
   above.  This is used so that we can find registers that are newly born/dead
   after processing an insn.  */
static regset old_live_regs;

/* The chain of REG_DEAD notes.  REG_DEAD notes are removed from all insns
   during the initial scan and reused later.  If there are not exactly as
   many REG_DEAD notes in the post scheduled code as there were in the
   prescheduled code then we trigger an abort because this indicates a bug.  */
static rtx dead_notes;
/* An instruction is ready to be scheduled when all insns preceding it
   have already been scheduled.  It is important to ensure that all
   insns which use its result will not be executed until its result
   has been computed.  An insn is maintained in one of four structures:

   (P) the "Pending" set of insns which cannot be scheduled until
   their dependencies have been satisfied.
   (Q) the "Queued" set of insns that can be scheduled when sufficient
   time has passed.
   (R) the "Ready" list of unscheduled, uncommitted insns.
   (S) the "Scheduled" list of insns.

   Initially, all insns are either "Pending" or "Ready" depending on
   whether their dependencies are satisfied.

   Insns move from the "Ready" list to the "Scheduled" list as they
   are committed to the schedule.  As this occurs, the insns in the
   "Pending" list have their dependencies satisfied and move to either
   the "Ready" list or the "Queued" set depending on whether
   sufficient time has passed to make them ready.  As time passes,
   insns move from the "Queued" set to the "Ready" list.  Insns may
   move from the "Ready" list to the "Queued" set if they are blocked
   due to a function unit conflict.

   The "Pending" list (P) consists of the insns in the INSN_DEPEND of
   the unscheduled insns, i.e., those that are ready, queued, and pending.
   The "Queued" set (Q) is implemented by the variable `insn_queue'.
   The "Ready" list (R) is implemented by the variables `ready' and
   `n_ready'.
   The "Scheduled" list (S) is the new insn chain built by this pass.

   The transition (R->S) is implemented in the scheduling loop in
   `schedule_block' when the best insn to schedule is chosen.
   The transition (R->Q) is implemented in `queue_insn' when an
   insn is found to have a function unit conflict with the already
   committed insns.
   The transitions (P->R and P->Q) are implemented in `schedule_insn' as
   insns move from the ready list to the scheduled list.
   The transition (Q->R) is implemented in `queue_to_ready' as time
   passes or stalls are introduced.  */
/* Implement a circular buffer to delay instructions until sufficient
   time has passed.  INSN_QUEUE_SIZE is a power of two larger than
   MAX_BLOCKAGE and MAX_READY_COST computed by genattr.c.  This is the
   longest time an insn may be queued.  */
static rtx insn_queue[INSN_QUEUE_SIZE];
static int q_ptr = 0;
static int q_size = 0;
#define NEXT_Q(X) (((X)+1) & (INSN_QUEUE_SIZE-1))
#define NEXT_Q_AFTER(X, C) (((X)+C) & (INSN_QUEUE_SIZE-1))
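/* Illustrative example (not part of the pass): with INSN_QUEUE_SIZE of
   32, queueing an insn that must wait 3 cycles stores it in the slot
   NEXT_Q_AFTER (q_ptr, 3); each clock tick then advances the head with
   q_ptr = NEXT_Q (q_ptr) and drains that slot into the ready list.
   The AND mask gives cheap wraparound only because INSN_QUEUE_SIZE is
   a power of two, e.g. NEXT_Q_AFTER (30, 3) == (30+3) & 31 == 1.  */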
/* Vector indexed by INSN_UID giving the minimum clock tick at which
   the insn becomes ready.  This is used to note timing constraints for
   insns in the pending list.  */
static int *insn_tick;
#define INSN_TICK(INSN) (insn_tick[INSN_UID (INSN)])
/* Data structure for keeping track of register information
   during that register's life.  */

struct sometimes
  {
    int regno;
    int live_length;
    int calls_crossed;
  };
/* Forward declarations.  */
static void add_dependence PROTO ((rtx, rtx, enum reg_note));
static void remove_dependence PROTO ((rtx, rtx));
static rtx find_insn_list PROTO ((rtx, rtx));
static int insn_unit PROTO ((rtx));
static unsigned int blockage_range PROTO ((int, rtx));
static void clear_units PROTO ((void));
static int actual_hazard_this_instance PROTO ((int, int, rtx, int, int));
static void schedule_unit PROTO ((int, rtx, int));
static int actual_hazard PROTO ((int, rtx, int, int));
static int potential_hazard PROTO ((int, rtx, int));
static int insn_cost PROTO ((rtx, rtx, rtx));
static int priority PROTO ((rtx));
static void free_pending_lists PROTO ((void));
static void add_insn_mem_dependence PROTO ((rtx *, rtx *, rtx, rtx));
static void flush_pending_lists PROTO ((rtx, int));
static void sched_analyze_1 PROTO ((rtx, rtx));
static void sched_analyze_2 PROTO ((rtx, rtx));
static void sched_analyze_insn PROTO ((rtx, rtx, rtx));
static void sched_analyze PROTO ((rtx, rtx));
static void sched_note_set PROTO ((rtx, int));
static int rank_for_schedule PROTO ((const GENERIC_PTR, const GENERIC_PTR));
static void swap_sort PROTO ((rtx *, int));
static void queue_insn PROTO ((rtx, int));
static int schedule_insn PROTO ((rtx, rtx *, int, int));
static void create_reg_dead_note PROTO ((rtx, rtx));
static void attach_deaths PROTO ((rtx, rtx, int));
static void attach_deaths_insn PROTO ((rtx));
static int new_sometimes_live PROTO ((struct sometimes *, int, int));
static void finish_sometimes_live PROTO ((struct sometimes *, int));
static int schedule_block PROTO ((int, int));
static rtx regno_use_in PROTO ((int, rtx));
static void split_hard_reg_notes PROTO ((rtx, rtx, rtx));
static void new_insn_dead_notes PROTO ((rtx, rtx, rtx, rtx));
static void update_n_sets PROTO ((rtx, int));
static void update_flow_info PROTO ((rtx, rtx, rtx, rtx));
static char *safe_concat PROTO ((char *, char *, char *));
static int insn_issue_delay PROTO ((rtx));
static int birthing_insn_p PROTO ((rtx));
static void adjust_priority PROTO ((rtx));
/* Mapping of insns to their original block prior to scheduling.  */
static int *insn_orig_block;
#define INSN_BLOCK(insn) (insn_orig_block[INSN_UID (insn)])

/* Some insns (e.g. call) are not allowed to move across blocks.  */
static char *cant_move;
#define CANT_MOVE(insn) (cant_move[INSN_UID (insn)])

/* Control flow graph edges are kept in circular lists.  */
typedef struct
  {
    int from_block;
    int to_block;
    int next_in;
    int next_out;
  }
edge;
static edge *edge_table;

#define NEXT_IN(edge) (edge_table[edge].next_in)
#define NEXT_OUT(edge) (edge_table[edge].next_out)
#define FROM_BLOCK(edge) (edge_table[edge].from_block)
#define TO_BLOCK(edge) (edge_table[edge].to_block)

/* Number of edges in the control flow graph.  (In fact, larger than
   that by 1, since edge 0 is unused.)  */
static int nr_edges;

/* Circular list of incoming/outgoing edges of a block.  */
static int *in_edges;
static int *out_edges;

#define IN_EDGES(block) (in_edges[block])
#define OUT_EDGES(block) (out_edges[block])
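/* Illustrative sketch (not part of the pass): visiting every successor
   edge of BLOCK.  The per-block edge lists are circular, so iteration
   starts at OUT_EDGES (block) and stops when NEXT_OUT wraps back to
   the first edge; a value of 0 means no edges are recorded.  new_edge,
   later in this file, maintains these lists.  */
#if 0
static void
visit_out_edges (block)
     int block;
{
  int fst_edge, e;

  fst_edge = e = OUT_EDGES (block);
  if (e == 0)
    return;			/* no outgoing edges recorded */
  do
    {
      /* ... use FROM_BLOCK (e) and TO_BLOCK (e) here ...  */
      e = NEXT_OUT (e);
    }
  while (e != fst_edge);
}
#endif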
/* List of labels which cannot be deleted, needed for control
   flow graph construction.  */
extern rtx forced_labels;

static int is_cfg_nonregular PROTO ((void));
static int build_control_flow PROTO ((int_list_ptr *, int_list_ptr *,
				      int *, int *));
static void new_edge PROTO ((int, int));
/* A region is the main entity for interblock scheduling: insns
   are allowed to move between blocks in the same region, along
   control flow graph edges, in the 'up' direction.  */
typedef struct
  {
    int rgn_nr_blocks;		/* Number of blocks in region.  */
    int rgn_blocks;		/* Blocks in the region (actually index in rgn_bb_table).  */
  }
region;

/* Number of regions in the procedure.  */
static int nr_regions;

/* Table of region descriptions.  */
static region *rgn_table;

/* Array of lists of regions' blocks.  */
static int *rgn_bb_table;
/* Topological order of blocks in the region (if b2 is reachable from
   b1, block_to_bb[b2] > block_to_bb[b1]).
   Note: A basic block is always referred to by either block or b,
   while its topological order name (in the region) is referred to by
   bb.  */
static int *block_to_bb;

/* The number of the region containing a block.  */
static int *containing_rgn;

#define RGN_NR_BLOCKS(rgn) (rgn_table[rgn].rgn_nr_blocks)
#define RGN_BLOCKS(rgn) (rgn_table[rgn].rgn_blocks)
#define BLOCK_TO_BB(block) (block_to_bb[block])
#define CONTAINING_RGN(block) (containing_rgn[block])

void debug_regions PROTO ((void));
static void find_single_block_region PROTO ((void));
static void find_rgns PROTO ((int_list_ptr *, int_list_ptr *,
			      int *, int *, sbitmap *));
static int too_large PROTO ((int, int *, int *));

extern void debug_live PROTO ((int, int));

/* Blocks of the current region being scheduled.  */
static int current_nr_blocks;
static int current_blocks;

/* The mapping from bb to block.  */
#define BB_TO_BLOCK(bb) (rgn_bb_table[current_blocks + (bb)])
/* Bit vectors and bitset operations are needed for computations on
   the control flow graph.  */

typedef unsigned HOST_WIDE_INT *bitset;
typedef struct
  {
    int *first_member;		/* Pointer to the list start in bitlst_table.  */
    int nr_members;		/* The number of members of the bit list.  */
  }
bitlst;

static int bitlst_table_last;
static int bitlst_table_size;
static int *bitlst_table;

static char bitset_member PROTO ((bitset, int, int));
static void extract_bitlst PROTO ((bitset, int, bitlst *));
/* Target info declarations.

   The block currently being scheduled is referred to as the "target" block,
   while other blocks in the region from which insns can be moved to the
   target are called "source" blocks.  The candidate structure holds info
   about such sources: are they valid?  Speculative?  Etc.  */
typedef bitlst bblst;
typedef struct
  {
    char is_valid;
    char is_speculative;
    int src_prob;
    bblst split_bbs;
    bblst update_bbs;
  }
candidate;

static candidate *candidate_table;

/* A speculative motion requires checking live information on the path
   from 'source' to 'target'.  The split blocks are those to be checked.
   After a speculative motion, live information should be modified in
   the 'update' blocks.

   Lists of split and update blocks for each candidate of the current
   target are in array bblst_table.  */
static int *bblst_table, bblst_size, bblst_last;

#define IS_VALID(src) ( candidate_table[src].is_valid )
#define IS_SPECULATIVE(src) ( candidate_table[src].is_speculative )
#define SRC_PROB(src) ( candidate_table[src].src_prob )

/* The bb being currently scheduled.  */
static int target_bb;
/* List of edges.  */
typedef bitlst edgelst;

/* Target info functions.  */
static void split_edges PROTO ((int, int, edgelst *));
static void compute_trg_info PROTO ((int));
void debug_candidate PROTO ((int));
void debug_candidates PROTO ((int));
/* Bit-set of bbs, where bit 'i' stands for bb 'i'.  */
typedef bitset bbset;

/* Number of words of the bbset.  */
static int bbset_size;

/* Dominators array: dom[i] contains the bbset of dominators of
   bb i in the region.  */
static bbset *dom;

/* bb 0 is the only region entry.  */
#define IS_RGN_ENTRY(bb) (!bb)

/* Is bb_src dominated by bb_trg?  */
#define IS_DOMINATED(bb_src, bb_trg)				\
( bitset_member (dom[bb_src], bb_trg, bbset_size) )

/* Probability: Prob[i] is a float in [0, 1] which is the probability
   of bb i relative to the region entry.  */
static float *prob;

/* The probability of bb_src, relative to bb_trg.  Note, that while the
   'prob[bb]' is a float in [0, 1], this macro returns an integer
   in [0, 100].  */
#define GET_SRC_PROB(bb_src, bb_trg) ((int) (100.0 * (prob[bb_src] / \
						      prob[bb_trg])))
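/* For example, if prob[bb_src] is 0.36 and prob[bb_trg] is 0.9, then
   GET_SRC_PROB (bb_src, bb_trg) is (int) (100.0 * 0.4) == 40: an insn
   speculatively moved up from bb_src into bb_trg is useful on about
   40% of the paths through bb_trg.  (Illustrative numbers only.)  */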
/* Bit-set of edges, where bit i stands for edge i.  */
typedef bitset edgeset;

/* Number of edges in the region.  */
static int rgn_nr_edges;

/* Array of size rgn_nr_edges.  */
static int *rgn_edges;

/* Number of words in an edgeset.  */
static int edgeset_size;

/* Mapping from each edge in the graph to its number in the rgn.  */
static int *edge_to_bit;
#define EDGE_TO_BIT(edge) (edge_to_bit[edge])

/* The split edges of a source bb are different for each target
   bb.  In order to compute this efficiently, the 'potential-split edges'
   are computed for each bb prior to scheduling a region.  This is actually
   the split edges of each bb relative to the region entry.

   pot_split[bb] is the set of potential split edges of bb.  */
static edgeset *pot_split;

/* For every bb, a set of its ancestor edges.  */
static edgeset *ancestor_edges;

static void compute_dom_prob_ps PROTO ((int));

#define ABS_VALUE(x) (((x)<0)?(-(x)):(x))
#define INSN_PROBABILITY(INSN) (SRC_PROB (BLOCK_TO_BB (INSN_BLOCK (INSN))))
#define IS_SPECULATIVE_INSN(INSN) (IS_SPECULATIVE (BLOCK_TO_BB (INSN_BLOCK (INSN))))
#define INSN_BB(INSN) (BLOCK_TO_BB (INSN_BLOCK (INSN)))
/* Parameters affecting the decision of rank_for_schedule().  */
#define MIN_DIFF_PRIORITY 2
#define MIN_PROBABILITY 40
#define MIN_PROB_DIFF 10

/* Speculative scheduling functions.  */
static int check_live_1 PROTO ((int, rtx));
static void update_live_1 PROTO ((int, rtx));
static int check_live PROTO ((rtx, int));
static void update_live PROTO ((rtx, int));
static void set_spec_fed PROTO ((rtx));
static int is_pfree PROTO ((rtx, int, int));
static int find_conditional_protection PROTO ((rtx, int));
static int is_conditionally_protected PROTO ((rtx, int, int));
static int may_trap_exp PROTO ((rtx, int));
static int haifa_classify_insn PROTO ((rtx));
static int is_prisky PROTO ((rtx, int, int));
static int is_exception_free PROTO ((rtx, int, int));

static char find_insn_mem_list PROTO ((rtx, rtx, rtx, rtx));
static void compute_block_forward_dependences PROTO ((int));
static void init_rgn_data_dependences PROTO ((int));
static void add_branch_dependences PROTO ((rtx, rtx));
static void compute_block_backward_dependences PROTO ((int));
void debug_dependencies PROTO ((void));
/* Notes handling mechanism:
   =========================
   Generally, NOTES are saved before scheduling and restored after scheduling.
   The scheduler distinguishes between three types of notes:

   (1) LINE_NUMBER notes, generated and used for debugging.  Here,
   before scheduling a region, a pointer to the LINE_NUMBER note is
   added to the insn following it (in save_line_notes()), and the note
   is removed (in rm_line_notes() and unlink_line_notes()).  After
   scheduling the region, this pointer is used for regeneration of
   the LINE_NUMBER note (in restore_line_notes()).

   (2) LOOP_BEGIN, LOOP_END, SETJMP, EHREGION_BEG, EHREGION_END notes:
   Before scheduling a region, a pointer to the note is added to the insn
   that follows or precedes it.  (This happens as part of the data dependence
   computation.)  After scheduling an insn, the pointer contained in it is
   used for regenerating the corresponding note (in reemit_notes).

   (3) All other notes (e.g. INSN_DELETED):  Before scheduling a block,
   these notes are put in a list (in rm_other_notes() and
   unlink_other_notes ()).  After scheduling the block, these notes are
   inserted at the beginning of the block (in schedule_block()).  */
static rtx unlink_other_notes PROTO ((rtx, rtx));
static rtx unlink_line_notes PROTO ((rtx, rtx));
static void rm_line_notes PROTO ((int));
static void save_line_notes PROTO ((int));
static void restore_line_notes PROTO ((int));
static void rm_redundant_line_notes PROTO ((void));
static void rm_other_notes PROTO ((rtx, rtx));
static rtx reemit_notes PROTO ((rtx, rtx));

static void get_block_head_tail PROTO ((int, rtx *, rtx *));

static void find_pre_sched_live PROTO ((int));
static void find_post_sched_live PROTO ((int));
static void update_reg_usage PROTO ((void));
static int queue_to_ready PROTO ((rtx [], int));

static void debug_ready_list PROTO ((rtx [], int));
static void init_target_units PROTO ((void));
static void insn_print_units PROTO ((rtx));
static int get_visual_tbl_length PROTO ((void));
static void init_block_visualization PROTO ((void));
static void print_block_visualization PROTO ((int, char *));
static void visualize_scheduled_insns PROTO ((int, int));
static void visualize_no_unit PROTO ((rtx));
static void visualize_stall_cycles PROTO ((int, int));
static void print_exp PROTO ((char *, rtx, int));
static void print_value PROTO ((char *, rtx, int));
static void print_pattern PROTO ((char *, rtx, int));
static void print_insn PROTO ((char *, rtx, int));
void debug_reg_vector PROTO ((regset));

static rtx move_insn1 PROTO ((rtx, rtx));
static rtx move_insn PROTO ((rtx, rtx));
static rtx group_leader PROTO ((rtx));
static int set_priorities PROTO ((int));
static void init_rtx_vector PROTO ((rtx **, rtx *, int, int));
static void schedule_region PROTO ((int));
static void split_block_insns PROTO ((int));

#endif /* INSN_SCHEDULING */

#define SIZE_FOR_MODE(X) (GET_MODE_SIZE (GET_MODE (X)))
/* Helper functions for instruction scheduling.  */

/* An INSN_LIST containing all INSN_LISTs allocated but currently unused.  */
static rtx unused_insn_list;

/* An EXPR_LIST containing all EXPR_LISTs allocated but currently unused.  */
static rtx unused_expr_list;

static void free_list PROTO ((rtx *, rtx *));
static rtx alloc_INSN_LIST PROTO ((rtx, rtx));
static rtx alloc_EXPR_LIST PROTO ((int, rtx, rtx));
static void
free_list (listp, unused_listp)
     rtx *listp, *unused_listp;
{
  register rtx link, prev_link;

  if (*listp == 0)
    return;

  prev_link = *listp;
  link = XEXP (prev_link, 1);

  while (link)
    {
      prev_link = link;
      link = XEXP (link, 1);
    }

  XEXP (prev_link, 1) = *unused_listp;
  *unused_listp = *listp;
  *listp = 0;
}

static rtx
alloc_INSN_LIST (val, next)
     rtx val, next;
{
  rtx r;

  if (unused_insn_list)
    {
      r = unused_insn_list;
      unused_insn_list = XEXP (r, 1);
      XEXP (r, 0) = val;
      XEXP (r, 1) = next;
      PUT_REG_NOTE_KIND (r, VOIDmode);
    }
  else
    r = gen_rtx_INSN_LIST (VOIDmode, val, next);

  return r;
}

static rtx
alloc_EXPR_LIST (kind, val, next)
     int kind;
     rtx val, next;
{
  rtx r;

  /* Draw from the EXPR_LIST free list, not the INSN_LIST one.  */
  if (unused_expr_list)
    {
      r = unused_expr_list;
      unused_expr_list = XEXP (r, 1);
      XEXP (r, 0) = val;
      XEXP (r, 1) = next;
      PUT_REG_NOTE_KIND (r, kind);
    }
  else
    r = gen_rtx_EXPR_LIST (kind, val, next);

  return r;
}
/* Add ELEM wrapped in an INSN_LIST with reg note kind DEP_TYPE to the
   LOG_LINKS of INSN, if not already there.  DEP_TYPE indicates the type
   of dependence that this link represents.  */

static void
add_dependence (insn, elem, dep_type)
     rtx insn;
     rtx elem;
     enum reg_note dep_type;
{
  rtx link, next;

  /* Don't depend an insn on itself.  */
  if (insn == elem)
    return;

  /* If elem is part of a sequence that must be scheduled together, then
     make the dependence point to the last insn of the sequence.
     When HAVE_cc0, it is possible for NOTEs to exist between users and
     setters of the condition codes, so we must skip past notes here.
     Otherwise, NOTEs are impossible here.  */

  next = NEXT_INSN (elem);

#ifdef HAVE_cc0
  while (next && GET_CODE (next) == NOTE)
    next = NEXT_INSN (next);
#endif

  if (next && SCHED_GROUP_P (next)
      && GET_CODE (next) != CODE_LABEL)
    {
      /* Notes will never intervene here though, so don't bother checking
         for them either.  */
      /* We must reject CODE_LABELs, so that we don't get confused by one
         that has LABEL_PRESERVE_P set, which is represented by the same
         bit in the rtl as SCHED_GROUP_P.  A CODE_LABEL can never be
         SCHED_GROUP_P.  */
      while (NEXT_INSN (next) && SCHED_GROUP_P (NEXT_INSN (next))
	     && GET_CODE (NEXT_INSN (next)) != CODE_LABEL)
	next = NEXT_INSN (next);

      /* Again, don't depend an insn on itself.  */
      if (insn == next)
	return;

      /* Make the dependence to NEXT, the last insn of the group, instead
         of the original ELEM.  */
      elem = next;
    }

#ifdef INSN_SCHEDULING
  /* (This code is guarded by INSN_SCHEDULING, otherwise INSN_BB is undefined.)
     No need for interblock dependences with calls, since
     calls are not moved between blocks.  Note: the edge where
     elem is a CALL is still required.  */
  if (GET_CODE (insn) == CALL_INSN
      && (INSN_BB (elem) != INSN_BB (insn)))
    return;
#endif

  /* Check that we don't already have this dependence.  */
  for (link = LOG_LINKS (insn); link; link = XEXP (link, 1))
    if (XEXP (link, 0) == elem)
      {
	/* If this is a more restrictive type of dependence than the existing
	   one, then change the existing dependence to this type.  */
	if ((int) dep_type < (int) REG_NOTE_KIND (link))
	  PUT_REG_NOTE_KIND (link, dep_type);
	return;
      }
  /* Might want to check one level of transitivity to save conses.  */

  link = alloc_INSN_LIST (elem, LOG_LINKS (insn));
  LOG_LINKS (insn) = link;

  /* Insn dependency, not data dependency.  */
  PUT_REG_NOTE_KIND (link, dep_type);
}
/* Remove ELEM wrapped in an INSN_LIST from the LOG_LINKS
   of INSN.  Abort if not found.  */

static void
remove_dependence (insn, elem)
     rtx insn;
     rtx elem;
{
  rtx prev, link, next;
  int found = 0;

  for (prev = 0, link = LOG_LINKS (insn); link; link = next)
    {
      next = XEXP (link, 1);
      if (XEXP (link, 0) == elem)
	{
	  if (prev)
	    XEXP (prev, 1) = next;
	  else
	    LOG_LINKS (insn) = next;

	  XEXP (link, 1) = unused_insn_list;
	  unused_insn_list = link;

	  found = 1;
	}
      else
	prev = link;
    }

  if (!found)
    abort ();
  return;
}
#ifndef INSN_SCHEDULING
void
schedule_insns (dump_file)
     FILE *dump_file;
{
}
#else

#define HAIFA_INLINE __inline
/* Computation of memory dependencies.  */

/* The *_insns and *_mems are paired lists.  Each pending memory operation
   will have a pointer to the MEM rtx on one list and a pointer to the
   containing insn on the other list in the same place in the list.  */

/* We can't use add_dependence like the old code did, because a single insn
   may have multiple memory accesses, and hence needs to be on the list
   once for each memory access.  Add_dependence won't let you add an insn
   to a list more than once.  */
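/* Illustrative sketch (not part of the pass): the two lists are kept in
   lockstep, so walking them in parallel pairs each pending MEM with the
   insn that contains it.  This is the access pattern the dependence
   analysis uses when testing a new memory reference against all pending
   reads or writes.  */
#if 0
static void
walk_pending (insn_list, mem_list)
     rtx insn_list, mem_list;
{
  while (insn_list)
    {
      rtx insn = XEXP (insn_list, 0);	/* insn from the INSN_LIST */
      rtx mem = XEXP (mem_list, 0);	/* its MEM from the EXPR_LIST */

      /* ... e.g. test MEM against a new memory access and call
	 add_dependence (new_insn, insn, ...) on a conflict ...  */

      insn_list = XEXP (insn_list, 1);
      mem_list = XEXP (mem_list, 1);
    }
}
#endif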
/* An INSN_LIST containing all insns with pending read operations.  */
static rtx pending_read_insns;

/* An EXPR_LIST containing all MEM rtx's which are pending reads.  */
static rtx pending_read_mems;

/* An INSN_LIST containing all insns with pending write operations.  */
static rtx pending_write_insns;

/* An EXPR_LIST containing all MEM rtx's which are pending writes.  */
static rtx pending_write_mems;

/* Indicates the combined length of the two pending lists.  We must prevent
   these lists from ever growing too large since the number of dependencies
   produced is at least O(N*N), and execution time is at least O(4*N*N), as
   a function of the length of these pending lists.  */
static int pending_lists_length;

/* The last insn upon which all memory references must depend.
   This is an insn which flushed the pending lists, creating a dependency
   between it and all previously pending memory references.  This creates
   a barrier (or a checkpoint) which no memory reference is allowed to cross.

   This includes all non-constant CALL_INSNs.  When we do interprocedural
   alias analysis, this restriction can be relaxed.
   This may also be an INSN that writes memory if the pending lists grow
   too large.  */
static rtx last_pending_memory_flush;

/* The last function call we have seen.  All hard regs, and, of course,
   the last function call, must depend on this.  */
static rtx last_function_call;

/* The LOG_LINKS field of this is a list of insns which use a pseudo register
   that does not already cross a call.  We create dependencies between each
   of those insns and the next call insn, to ensure that they won't cross a
   call after scheduling is done.  */
static rtx sched_before_next_call;

/* Pointer to the last instruction scheduled.  Used by rank_for_schedule,
   so that insns independent of the last scheduled insn will be preferred
   over dependent instructions.  */
static rtx last_scheduled_insn;
/* Data structures for the computation of data dependences in a region.  We
   keep one copy of each of the variables declared above for each bb in the
   region.  Before analyzing the data dependences for a bb, its variables
   are initialized as a function of the variables of its predecessors.  When
   the analysis for a bb completes, we save the contents of each variable X
   to a corresponding bb_X[bb] variable.  For example, pending_read_insns is
   copied to bb_pending_read_insns[bb].  Another change is that a few
   variables are now a list of insns rather than a single insn:
   last_pending_memory_flush, last_function_call, reg_last_sets.  The
   manipulation of these variables was changed appropriately.  */

static rtx **bb_reg_last_uses;
static rtx **bb_reg_last_sets;

static rtx *bb_pending_read_insns;
static rtx *bb_pending_read_mems;
static rtx *bb_pending_write_insns;
static rtx *bb_pending_write_mems;
static int *bb_pending_lists_length;

static rtx *bb_last_pending_memory_flush;
static rtx *bb_last_function_call;
static rtx *bb_sched_before_next_call;
/* Functions for construction of the control flow graph.  */

/* Return 1 if control flow graph should not be constructed, 0 otherwise.

   We decide not to build the control flow graph if there is possibly more
   than one entry to the function, if computed branches exist, or if we
   have nonlocal gotos.  */

static int
is_cfg_nonregular ()
{
  int b;
  rtx insn;
  RTX_CODE code;

  /* If we have a label that could be the target of a nonlocal goto, then
     the cfg is not well structured.  */
  if (nonlocal_label_rtx_list () != NULL)
    return 1;

  /* If we have any forced labels, then the cfg is not well structured.  */
  if (forced_labels)
    return 1;

  /* If this function has a computed jump, then we consider the cfg
     not well structured.  */
  if (current_function_has_computed_jump)
    return 1;

  /* If we have exception handlers, then we consider the cfg not well
     structured.  ?!?  We should be able to handle this now that flow.c
     computes an accurate cfg for EH.  */
  if (exception_handler_labels)
    return 1;

  /* If we have non-jumping insns which refer to labels, then we consider
     the cfg not well structured.  */
  /* Check for labels referred to other than by jumps.  */
  for (b = 0; b < n_basic_blocks; b++)
    for (insn = basic_block_head[b];; insn = NEXT_INSN (insn))
      {
	code = GET_CODE (insn);
	if (GET_RTX_CLASS (code) == 'i')
	  {
	    rtx note;

	    for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
	      if (REG_NOTE_KIND (note) == REG_LABEL)
		return 1;
	  }

	if (insn == basic_block_end[b])
	  break;
      }

  /* All the tests passed.  Consider the cfg well structured.  */
  return 0;
}
/* Build the control flow graph and set nr_edges.

   Instead of trying to build a cfg ourselves, we rely on flow to
   do it for us.  Stamp out useless code (and bug) duplication.

   Return nonzero if an irregularity in the cfg is found which would
   prevent cross block scheduling.  */

static int
build_control_flow (s_preds, s_succs, num_preds, num_succs)
     int_list_ptr *s_preds;
     int_list_ptr *s_succs;
     int *num_preds;
     int *num_succs;
{
  int i;
  int_list_ptr succ;
  int unreachable;

  /* Count the number of edges in the cfg.  */
  nr_edges = 0;
  unreachable = 0;
  for (i = 0; i < n_basic_blocks; i++)
    {
      nr_edges += num_succs[i];

      /* Unreachable loops with more than one basic block are detected
	 during the DFS traversal in find_rgns.

	 Unreachable loops with a single block are detected here.  This
	 test is redundant with the one in find_rgns, but it's much
	 cheaper to go ahead and catch the trivial case here.  */
      if (num_preds[i] == 0
	  || (num_preds[i] == 1 && INT_LIST_VAL (s_preds[i]) == i))
	unreachable = 1;
    }

  /* Account for entry/exit edges.  */
  nr_edges += 2;

  in_edges = (int *) xmalloc (n_basic_blocks * sizeof (int));
  out_edges = (int *) xmalloc (n_basic_blocks * sizeof (int));
  bzero ((char *) in_edges, n_basic_blocks * sizeof (int));
  bzero ((char *) out_edges, n_basic_blocks * sizeof (int));

  edge_table = (edge *) xmalloc ((nr_edges) * sizeof (edge));
  bzero ((char *) edge_table, ((nr_edges) * sizeof (edge)));

  nr_edges = 0;
  for (i = 0; i < n_basic_blocks; i++)
    for (succ = s_succs[i]; succ; succ = succ->next)
      {
	if (INT_LIST_VAL (succ) != EXIT_BLOCK)
	  new_edge (i, INT_LIST_VAL (succ));
      }

  /* Increment by 1, since edge 0 is unused.  */
  nr_edges++;

  return unreachable;
}
/* Record an edge in the control flow graph from SOURCE to TARGET.

   In theory, this is redundant with the s_succs computed above, but
   we have not converted all of haifa to use information from the
   s_succs lists.  */

static void
new_edge (source, target)
     int source, target;
{
  int e, next_edge;
  int curr_edge, fst_edge;

  /* Check for duplicates.  */
  fst_edge = curr_edge = OUT_EDGES (source);
  while (curr_edge)
    {
      if (FROM_BLOCK (curr_edge) == source
	  && TO_BLOCK (curr_edge) == target)
	{
	  return;
	}

      curr_edge = NEXT_OUT (curr_edge);

      if (fst_edge == curr_edge)
	break;
    }

  e = ++nr_edges;

  FROM_BLOCK (e) = source;
  TO_BLOCK (e) = target;

  if (OUT_EDGES (source))
    {
      next_edge = NEXT_OUT (OUT_EDGES (source));
      NEXT_OUT (OUT_EDGES (source)) = e;
      NEXT_OUT (e) = next_edge;
    }
  else
    {
      OUT_EDGES (source) = e;
      NEXT_OUT (e) = e;
    }

  if (IN_EDGES (target))
    {
      next_edge = NEXT_IN (IN_EDGES (target));
      NEXT_IN (IN_EDGES (target)) = e;
      NEXT_IN (e) = next_edge;
    }
  else
    {
      IN_EDGES (target) = e;
      NEXT_IN (e) = e;
    }
}
/* BITSET macros for operations on the control flow graph.  */

/* Compute bitwise union of two bitsets.  */
#define BITSET_UNION(set1, set2, len)                           \
do { register bitset tp = set1, sp = set2;                      \
     register int i;                                            \
     for (i = 0; i < len; i++)                                  \
       *(tp++) |= *(sp++); } while (0)

/* Compute bitwise intersection of two bitsets.  */
#define BITSET_INTER(set1, set2, len)                           \
do { register bitset tp = set1, sp = set2;                      \
     register int i;                                            \
     for (i = 0; i < len; i++)                                  \
       *(tp++) &= *(sp++); } while (0)

/* Compute bitwise difference of two bitsets.  */
#define BITSET_DIFFER(set1, set2, len)                          \
do { register bitset tp = set1, sp = set2;                      \
     register int i;                                            \
     for (i = 0; i < len; i++)                                  \
       *(tp++) &= ~*(sp++); } while (0)

/* Invert every bit of bitset 'set'.  */
#define BITSET_INVERT(set, len)                                 \
do { register bitset tmpset = set;                              \
     register int i;                                            \
     for (i = 0; i < len; i++, tmpset++)                        \
       *tmpset = ~*tmpset; } while (0)

/* Turn on the index'th bit in bitset set.  */
#define BITSET_ADD(set, index, len)                             \
{                                                               \
  if (index >= HOST_BITS_PER_WIDE_INT * len)                    \
    abort ();                                                   \
  else                                                          \
    set[index/HOST_BITS_PER_WIDE_INT] |=                        \
      1 << (index % HOST_BITS_PER_WIDE_INT);                    \
}

/* Turn off the index'th bit in set.  */
#define BITSET_REMOVE(set, index, len)                          \
{                                                               \
  if (index >= HOST_BITS_PER_WIDE_INT * len)                    \
    abort ();                                                   \
  else                                                          \
    set[index/HOST_BITS_PER_WIDE_INT] &=                        \
      ~(1 << (index%HOST_BITS_PER_WIDE_INT));                   \
}
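/* Illustrative usage (not part of the pass), mirroring what
   compute_dom_prob_ps does below: the dominator set of a join block is
   the intersection of its predecessors' dominator sets, plus the block
   itself.  Assumes dom[bb] was freshly zeroed, so BITSET_INVERT yields
   the universal set.  */
#if 0
static void
dom_example (bb, pred1, pred2)
     int bb, pred1, pred2;
{
  BITSET_INVERT (dom[bb], bbset_size);		/* start from all ones */
  BITSET_INTER (dom[bb], dom[pred1], bbset_size);
  BITSET_INTER (dom[bb], dom[pred2], bbset_size);
  BITSET_ADD (dom[bb], bb, bbset_size);		/* a bb dominates itself */
}
#endif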
/* Check if the index'th bit in bitset set is on.  */

static char
bitset_member (set, index, len)
     bitset set;
     int index, len;
{
  if (index >= HOST_BITS_PER_WIDE_INT * len)
    abort ();
  return (set[index / HOST_BITS_PER_WIDE_INT] &
	  1 << (index % HOST_BITS_PER_WIDE_INT)) ? 1 : 0;
}

/* Translate a bit-set SET to a list BL of the bit-set members.  */

static void
extract_bitlst (set, len, bl)
     bitset set;
     int len;
     bitlst *bl;
{
  int i, j, offset;
  unsigned HOST_WIDE_INT word;

  /* bblst table space is reused in each call to extract_bitlst.  */
  bitlst_table_last = 0;
  bl->first_member = &bitlst_table[bitlst_table_last];
  bl->nr_members = 0;

  for (i = 0; i < len; i++)
    {
      word = set[i];
      offset = i * HOST_BITS_PER_WIDE_INT;
      for (j = 0; word; j++)
	{
	  if (word & 1)
	    {
	      bitlst_table[bitlst_table_last++] = offset;
	      (bl->nr_members)++;
	    }
	  word >>= 1;
	  offset++;
	}
    }
}
/* Functions for the construction of regions.  */

/* Print the regions, for debugging purposes.  Callable from debugger.  */

void
debug_regions ()
{
  int rgn, bb;

  fprintf (dump, "\n;;   ------------ REGIONS ----------\n\n");
  for (rgn = 0; rgn < nr_regions; rgn++)
    {
      fprintf (dump, ";;\trgn %d nr_blocks %d:\n", rgn,
	       rgn_table[rgn].rgn_nr_blocks);
      fprintf (dump, ";;\tbb/block: ");

      for (bb = 0; bb < rgn_table[rgn].rgn_nr_blocks; bb++)
	{
	  current_blocks = RGN_BLOCKS (rgn);

	  if (bb != BLOCK_TO_BB (BB_TO_BLOCK (bb)))
	    abort ();

	  fprintf (dump, " %d/%d ", bb, BB_TO_BLOCK (bb));
	}

      fprintf (dump, "\n\n");
    }
}
/* Build a single block region for each basic block in the function.
   This allows for using the same code for interblock and basic block
   scheduling.  */

static void
find_single_block_region ()
{
  int i;

  for (i = 0; i < n_basic_blocks; i++)
    {
      rgn_bb_table[i] = i;
      RGN_NR_BLOCKS (i) = 1;
      RGN_BLOCKS (i) = i;
      CONTAINING_RGN (i) = i;
      BLOCK_TO_BB (i) = 0;
    }
  nr_regions = n_basic_blocks;
}
/* Update number of blocks and the estimate for number of insns
   in the region.  Return 1 if the region is "too large" for interblock
   scheduling (compile time considerations), otherwise return 0.  */

static int
too_large (block, num_bbs, num_insns)
     int block, *num_bbs, *num_insns;
{
  (*num_bbs)++;
  (*num_insns) += (INSN_LUID (basic_block_end[block]) -
		   INSN_LUID (basic_block_head[block]));
  if ((*num_bbs > MAX_RGN_BLOCKS) || (*num_insns > MAX_RGN_INSNS))
    return 1;
  else
    return 0;
}
/* Update_loop_relations(blk, hdr): Check if the loop headed by max_hdr[blk]
   is still an inner loop.  Put in max_hdr[blk] the header of the innermost
   loop containing blk.  */
#define UPDATE_LOOP_RELATIONS(blk, hdr)                         \
{                                                               \
  if (max_hdr[blk] == -1)                                       \
    max_hdr[blk] = hdr;                                         \
  else if (dfs_nr[max_hdr[blk]] > dfs_nr[hdr])                  \
    RESET_BIT (inner, hdr);                                     \
  else if (dfs_nr[max_hdr[blk]] < dfs_nr[hdr])                  \
    {                                                           \
      RESET_BIT (inner, max_hdr[blk]);                          \
      max_hdr[blk] = hdr;                                       \
    }                                                           \
}
/* Find regions for interblock scheduling.

   A region for scheduling can be:

   * A loop-free procedure, or

   * A reducible inner loop, or

   * A basic block not contained in any other region.

   ?!? In theory we could build other regions based on extended basic
   blocks or reverse extended basic blocks.  Is it worth the trouble?

   Loop blocks that form a region are put into the region's block list
   in topological order.

   This procedure stores its results into the global region data
   structures declared above (nr_regions, rgn_table, rgn_bb_table,
   block_to_bb, containing_rgn).

   We use dominator relationships to avoid making regions out of
   non-reducible loops.

   This procedure needs to be converted to work on pred/succ lists instead
   of edge tables.  That would simplify it somewhat.  */
static void
find_rgns (s_preds, s_succs, num_preds, num_succs, dom)
     int_list_ptr *s_preds;
     int_list_ptr *s_succs;
     int *num_preds;
     int *num_succs;
     sbitmap *dom;
{
  int *max_hdr, *dfs_nr, *stack, *queue, *degree;
  char no_loops = 1;
  int node, child, loop_head, i, head, tail;
  int count = 0, sp, idx = 0, current_edge = out_edges[0];
  int num_bbs, num_insns, unreachable;
  int too_large_failure;

  /* Note if an edge has been passed.  */
  sbitmap passed;

  /* Note if a block is a natural loop header.  */
  sbitmap header;

  /* Note if a block is a natural inner loop header.  */
  sbitmap inner;

  /* Note if a block is in the block queue.  */
  sbitmap in_queue;

  /* Note if a block is on the DFS stack.  */
  sbitmap in_stack;

  /* Perform a DFS traversal of the cfg.  Identify loop headers, inner loops
     and a mapping from block to its loop header (if the block is contained
     in a loop, else -1).

     Store results in HEADER, INNER, and MAX_HDR respectively, these will
     be used as inputs to the second traversal.

     STACK, SP and DFS_NR are only used during the first traversal.  */

  /* Allocate and initialize variables for the first traversal.  */
  max_hdr = (int *) alloca (n_basic_blocks * sizeof (int));
  dfs_nr = (int *) alloca (n_basic_blocks * sizeof (int));
  bzero ((char *) dfs_nr, n_basic_blocks * sizeof (int));
  stack = (int *) alloca (nr_edges * sizeof (int));

  inner = sbitmap_alloc (n_basic_blocks);
  sbitmap_ones (inner);

  header = sbitmap_alloc (n_basic_blocks);
  sbitmap_zero (header);

  passed = sbitmap_alloc (nr_edges);
  sbitmap_zero (passed);

  in_queue = sbitmap_alloc (n_basic_blocks);
  sbitmap_zero (in_queue);

  in_stack = sbitmap_alloc (n_basic_blocks);
  sbitmap_zero (in_stack);

  for (i = 0; i < n_basic_blocks; i++)
    max_hdr[i] = -1;
  /* DFS traversal to find inner loops in the cfg.  */

  sp = -1;
  while (1)
    {
      if (current_edge == 0 || TEST_BIT (passed, current_edge))
	{
	  /* We have reached a leaf node or a node that was already
	     processed.  Pop edges off the stack until we find
	     an edge that has not yet been processed.  */
	  while (sp >= 0
		 && (current_edge == 0 || TEST_BIT (passed, current_edge)))
	    {
	      /* Pop entry off the stack.  */
	      current_edge = stack[sp--];
	      node = FROM_BLOCK (current_edge);
	      child = TO_BLOCK (current_edge);
	      RESET_BIT (in_stack, child);
	      if (max_hdr[child] >= 0 && TEST_BIT (in_stack, max_hdr[child]))
		UPDATE_LOOP_RELATIONS (node, max_hdr[child]);
	      current_edge = NEXT_OUT (current_edge);
	    }

	  /* See if we have finished the DFS tree traversal.  */
	  if (sp < 0 && TEST_BIT (passed, current_edge))
	    break;

	  /* Nope, continue the traversal with the popped node.  */
	  continue;
	}

      /* Process a node.  */
      node = FROM_BLOCK (current_edge);
      child = TO_BLOCK (current_edge);
      SET_BIT (in_stack, node);
      dfs_nr[node] = ++count;

      /* If the successor is in the stack, then we've found a loop.
	 Mark the loop, if it is not a natural loop, then it will
	 be rejected during the second traversal.  */
      if (TEST_BIT (in_stack, child))
	{
	  no_loops = 0;
	  SET_BIT (header, child);
	  UPDATE_LOOP_RELATIONS (node, child);
	  SET_BIT (passed, current_edge);
	  current_edge = NEXT_OUT (current_edge);
	  continue;
	}

      /* If the child was already visited, then there is no need to visit
	 it again.  Just update the loop relationships and restart
	 with a new edge.  */
      if (dfs_nr[child])
	{
	  if (max_hdr[child] >= 0 && TEST_BIT (in_stack, max_hdr[child]))
	    UPDATE_LOOP_RELATIONS (node, max_hdr[child]);
	  SET_BIT (passed, current_edge);
	  current_edge = NEXT_OUT (current_edge);
	  continue;
	}

      /* Push an entry on the stack and continue DFS traversal.  */
      stack[++sp] = current_edge;
      SET_BIT (passed, current_edge);
      current_edge = OUT_EDGES (child);
    }

  /* Another check for unreachable blocks.  The earlier test in
     is_cfg_nonregular only finds unreachable blocks that do not
     form a loop.

     The DFS traversal will mark every block that is reachable from
     the entry node by placing a nonzero value in dfs_nr.  Thus if
     dfs_nr is zero for any block, then it must be unreachable.  */
  unreachable = 0;
  for (i = 0; i < n_basic_blocks; i++)
    if (dfs_nr[i] == 0)
      {
	unreachable = 1;
	break;
      }
  /* Gross.  To avoid wasting memory, the second pass uses the dfs_nr array
     to hold degree counts.  */
  degree = dfs_nr;

  /* Compute the in-degree of every block in the graph.  */
  for (i = 0; i < n_basic_blocks; i++)
    degree[i] = num_preds[i];

  /* Do not perform region scheduling if there are any unreachable
     blocks.  */
  if (!unreachable)
    {
      if (no_loops)
	SET_BIT (header, 0);

      /* Second traversal: find reducible inner loops and topologically sort
	 the blocks of each region.  */

      queue = (int *) alloca (n_basic_blocks * sizeof (int));

      /* Find blocks which are inner loop headers.  We still have non-reducible
	 loops to consider at this point.  */
      for (i = 0; i < n_basic_blocks; i++)
	{
	  if (TEST_BIT (header, i) && TEST_BIT (inner, i))
	    {
	      int_list_ptr ps;
	      int j;

	      /* Now check that the loop is reducible.  We do this separate
		 from finding inner loops so that we do not find a reducible
		 loop which contains an inner non-reducible loop.

		 A simple way to find reducible/natural loops is to verify
		 that each block in the loop is dominated by the loop
		 header.

		 If there exists a block that is not dominated by the loop
		 header, then the block is reachable from outside the loop
		 and thus the loop is not a natural loop.  */
	      for (j = 0; j < n_basic_blocks; j++)
		{
		  /* First identify blocks in the loop, except for the loop
		     entry block.  */
		  if (i == max_hdr[j] && i != j)
		    {
		      /* Now verify that the block is dominated by the loop
			 header.  */
		      if (!TEST_BIT (dom[j], i))
			break;
		    }
		}

	      /* If we exited the loop early, then I is the header of a non
		 reducible loop and we should quit processing it now.  */
	      if (j != n_basic_blocks)
		continue;

	      /* I is a header of an inner loop, or block 0 in a subroutine
		 with no loops at all.  */
	      head = tail = -1;
	      too_large_failure = 0;
	      loop_head = max_hdr[i];

	      /* Decrease degree of all I's successors for topological
		 ordering.  */
	      for (ps = s_succs[i]; ps; ps = ps->next)
		if (INT_LIST_VAL (ps) != EXIT_BLOCK
		    && INT_LIST_VAL (ps) != ENTRY_BLOCK)
		  --degree[INT_LIST_VAL(ps)];

	      /* Estimate # insns, and count # blocks in the region.  */
	      num_bbs = 1;
	      num_insns = (INSN_LUID (basic_block_end[i])
			   - INSN_LUID (basic_block_head[i]));

	      /* Find all loop latches (blocks with back edges to the loop
		 header), or all the leaf blocks if the cfg has no loops.

		 Place those blocks into the queue.  */
	      if (no_loops)
		{
		  for (j = 0; j < n_basic_blocks; j++)
		    /* Leaf nodes have only a single successor which must
		       be EXIT_BLOCK.  */
		    if (num_succs[j] == 1
			&& INT_LIST_VAL (s_succs[j]) == EXIT_BLOCK)
		      {
			queue[++tail] = j;
			SET_BIT (in_queue, j);

			if (too_large (j, &num_bbs, &num_insns))
			  {
			    too_large_failure = 1;
			    break;
			  }
		      }
		}
	      else
		{
		  for (ps = s_preds[i]; ps; ps = ps->next)
		    {
		      node = INT_LIST_VAL (ps);

		      if (node == ENTRY_BLOCK || node == EXIT_BLOCK)
			continue;

		      if (max_hdr[node] == loop_head && node != i)
			{
			  /* This is a loop latch.  */
			  queue[++tail] = node;
			  SET_BIT (in_queue, node);

			  if (too_large (node, &num_bbs, &num_insns))
			    {
			      too_large_failure = 1;
			      break;
			    }
			}
		    }
		}
1738 /* Now add all the blocks in the loop to the queue.
1740 We know the loop is a natural loop; however the algorithm
1741 above will not always mark certain blocks as being in the
1750 The algorithm in the DFS traversal may not mark B & D as part
1751 of the loop (ie they will not have max_hdr set to A).
1753 We know they can not be loop latches (else they would have
1754 had max_hdr set since they'd have a backedge to a dominator
1755 block). So we don't need them on the initial queue.
1757 We know they are part of the loop because they are dominated
1758 by the loop header and can be reached by a backwards walk of
1759 the edges starting with nodes on the initial queue.
1761 It is safe and desirable to include those nodes in the
1762 loop/scheduling region. To do so we would need to decrease
1763 the degree of a node if it is the target of a backedge
1764 within the loop itself as the node is placed in the queue.
1766 We do not do this because I'm not sure that the actual
1767 scheduling code will properly handle this case. ?!? */
1769 while (head
< tail
&& !too_large_failure
)
1772 child
= queue
[++head
];
1774 for (ps
= s_preds
[child
]; ps
; ps
= ps
->next
)
1776 node
= INT_LIST_VAL (ps
);
1778 /* See discussion above about nodes not marked as in
1779 this loop during the initial DFS traversal. */
1780 if (node
== ENTRY_BLOCK
|| node
== EXIT_BLOCK
1781 || max_hdr
[node
] != loop_head
)
1786 else if (!TEST_BIT (in_queue
, node
) && node
!= i
)
1788 queue
[++tail
] = node
;
1789 SET_BIT (in_queue
, node
);
1791 if (too_large (node
, &num_bbs
, &num_insns
))
1793 too_large_failure
= 1;
	      if (tail >= 0 && !too_large_failure)
		{
		  /* Place the loop header into list of region blocks.  */
		  degree[i] = -1;
		  rgn_bb_table[idx] = i;
		  RGN_NR_BLOCKS (nr_regions) = num_bbs;
		  RGN_BLOCKS (nr_regions) = idx++;
		  CONTAINING_RGN (i) = nr_regions;
		  BLOCK_TO_BB (i) = count = 0;

		  /* Remove blocks from queue[] when their in degree becomes
		     zero.  Repeat until no blocks are left on the list.  This
		     produces a topological list of blocks in the region.  */
		  while (tail >= 0)
		    {
		      if (head < 0)
			head = tail;
		      child = queue[head];
		      if (degree[child] == 0)
			{
			  degree[child] = -1;
			  rgn_bb_table[idx++] = child;
			  BLOCK_TO_BB (child) = ++count;
			  CONTAINING_RGN (child) = nr_regions;
			  queue[head] = queue[tail--];

			  for (ps = s_succs[child]; ps; ps = ps->next)
			    if (INT_LIST_VAL (ps) != ENTRY_BLOCK
				&& INT_LIST_VAL (ps) != EXIT_BLOCK)
			      --degree[INT_LIST_VAL (ps)];
			}
		      else
			--head;
		    }
		  ++nr_regions;
		}
  /* Any block that did not end up in a region is placed into a region
     by itself.  */
  for (i = 0; i < n_basic_blocks; i++)
    if (degree[i] >= 0)
      {
	rgn_bb_table[idx] = i;
	RGN_NR_BLOCKS (nr_regions) = 1;
	RGN_BLOCKS (nr_regions) = idx++;
	CONTAINING_RGN (i) = nr_regions++;
	BLOCK_TO_BB (i) = 0;
      }
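/* Editor's sketch (not part of the pass, never compiled): the degree-counting
   topological ordering used by the region code above, on a tiny hypothetical
   DAG.  Plain arrays stand in for s_succs/degree/queue; only the mechanics of
   decrementing successor degrees and queueing zero-degree blocks mirror the
   loop in find_rgns.  */
#if 0
#include <stdio.h>

/* A tiny DAG: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3; -1 terminates each row.  */
#define N 4
static int succs[N][N] = { {1, 2, -1}, {3, -1}, {3, -1}, {-1} };

int
main ()
{
  int degree[N] = {0, 1, 1, 2};	/* precomputed in-degrees */
  int queue[N], head = 0, tail = 0, *s;

  /* Seed the queue with the zero in-degree block (here, block 0).  */
  queue[tail++] = 0;

  /* Pop a block, emit it, decrease the degree of its successors, and
     queue any successor whose degree drops to zero.  */
  while (head < tail)
    {
      int child = queue[head++];
      printf ("block %d\n", child);	/* prints 0 1 2 3 */
      for (s = succs[child]; *s >= 0; s++)
	if (--degree[*s] == 0)
	  queue[tail++] = *s;
    }
  return 0;
}
#endif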
/* functions for region scheduling information */

/* Compute dominators, probability, and potential-split-edges of bb.
   Assume that these values were already computed for bb's predecessors.  */

static void
compute_dom_prob_ps (bb)
     int bb;
{
  int nxt_in_edge, fst_in_edge, pred;
  int fst_out_edge, nxt_out_edge, nr_out_edges, nr_rgn_out_edges;

  prob[bb] = 0.0;
  if (IS_RGN_ENTRY (bb))
    {
      BITSET_ADD (dom[bb], 0, bbset_size);
      prob[bb] = 1.0;
      return;
    }

  fst_in_edge = nxt_in_edge = IN_EDGES (BB_TO_BLOCK (bb));

  /* initialize dom[bb] to '111..1' */
  BITSET_INVERT (dom[bb], bbset_size);

  do
    {
      pred = FROM_BLOCK (nxt_in_edge);
      BITSET_INTER (dom[bb], dom[BLOCK_TO_BB (pred)], bbset_size);

      BITSET_UNION (ancestor_edges[bb], ancestor_edges[BLOCK_TO_BB (pred)],
		    edgeset_size);

      BITSET_ADD (ancestor_edges[bb], EDGE_TO_BIT (nxt_in_edge), edgeset_size);

      nr_out_edges = 1;
      nr_rgn_out_edges = 0;
      fst_out_edge = OUT_EDGES (pred);
      nxt_out_edge = NEXT_OUT (fst_out_edge);
      BITSET_UNION (pot_split[bb], pot_split[BLOCK_TO_BB (pred)],
		    edgeset_size);

      BITSET_ADD (pot_split[bb], EDGE_TO_BIT (fst_out_edge), edgeset_size);

      /* the successor doesn't belong to the region? */
      if (CONTAINING_RGN (TO_BLOCK (fst_out_edge)) !=
	  CONTAINING_RGN (BB_TO_BLOCK (bb)))
	++nr_rgn_out_edges;

      while (fst_out_edge != nxt_out_edge)
	{
	  ++nr_out_edges;
	  /* the successor doesn't belong to the region? */
	  if (CONTAINING_RGN (TO_BLOCK (nxt_out_edge)) !=
	      CONTAINING_RGN (BB_TO_BLOCK (bb)))
	    ++nr_rgn_out_edges;
	  BITSET_ADD (pot_split[bb], EDGE_TO_BIT (nxt_out_edge), edgeset_size);
	  nxt_out_edge = NEXT_OUT (nxt_out_edge);
	}

      /* now nr_rgn_out_edges is the number of region-exit edges from pred,
	 and nr_out_edges will be the number of pred out edges not leaving
	 the region.  */
      nr_out_edges -= nr_rgn_out_edges;
      if (nr_rgn_out_edges > 0)
	prob[bb] += 0.9 * prob[BLOCK_TO_BB (pred)] / nr_out_edges;
      else
	prob[bb] += prob[BLOCK_TO_BB (pred)] / nr_out_edges;

      nxt_in_edge = NEXT_IN (nxt_in_edge);
    }
  while (fst_in_edge != nxt_in_edge);

  BITSET_ADD (dom[bb], bb, bbset_size);
  BITSET_DIFFER (pot_split[bb], ancestor_edges[bb], edgeset_size);

  if (sched_verbose >= 2)
    fprintf (dump, ";;  bb_prob(%d, %d) = %3d\n", bb, BB_TO_BLOCK (bb),
	     (int) (100.0 * prob[bb]));
}				/* compute_dom_prob_ps */
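/* Editor's sketch (never compiled) of the probability update above: each
   predecessor contributes its own probability divided among its in-region
   out edges, scaled by 0.9 when some of its edges leave the region.  All
   numbers below are made up for illustration.  */
#if 0
#include <stdio.h>

int
main ()
{
  double prob_pred = 0.5;	/* prob[BLOCK_TO_BB (pred)] */
  int nr_out_edges = 3;		/* all of pred's out edges */
  int nr_rgn_out_edges = 1;	/* of these, edges leaving the region */
  double contribution;

  nr_out_edges -= nr_rgn_out_edges;	/* edges staying in the region */
  if (nr_rgn_out_edges > 0)
    contribution = 0.9 * prob_pred / nr_out_edges;
  else
    contribution = prob_pred / nr_out_edges;

  printf ("pred contributes %.3f to prob[bb]\n", contribution);
  return 0;
}
#endif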
/* functions for target info */

/* Compute in BL the list of split-edges of bb_src relative to bb_trg.
   Note that bb_trg dominates bb_src.  */

static void
split_edges (bb_src, bb_trg, bl)
     int bb_src;
     int bb_trg;
     edgelst *bl;
{
  int es = edgeset_size;
  edgeset src = (edgeset) alloca (es * sizeof (HOST_WIDE_INT));

  while (es--)
    src[es] = (pot_split[bb_src])[es];
  BITSET_DIFFER (src, pot_split[bb_trg], edgeset_size);
  extract_bitlst (src, edgeset_size, bl);
}
/* Find the valid candidate-source-blocks for the target block TRG, compute
   their probability, and check if they are speculative or not.
   For speculative sources, compute their update-blocks and split-blocks.  */

static void
compute_trg_info (trg)
     int trg;
{
  register candidate *sp;
  edgelst el;
  int check_block, update_idx;
  int i, j, k, fst_edge, nxt_edge;

  /* define some of the fields for the target bb as well */
  sp = candidate_table + trg;
  sp->is_valid = 1;
  sp->is_speculative = 0;
  sp->src_prob = 100;

  for (i = trg + 1; i < current_nr_blocks; i++)
    {
      sp = candidate_table + i;

      sp->is_valid = IS_DOMINATED (i, trg);
      if (sp->is_valid)
	{
	  sp->src_prob = GET_SRC_PROB (i, trg);
	  sp->is_valid = (sp->src_prob >= MIN_PROBABILITY);
	}

      if (sp->is_valid)
	{
	  split_edges (i, trg, &el);
	  sp->is_speculative = (el.nr_members) ? 1 : 0;
	  if (sp->is_speculative && !flag_schedule_speculative)
	    sp->is_valid = 0;
	}

      if (sp->is_valid)
	{
	  sp->split_bbs.first_member = &bblst_table[bblst_last];
	  sp->split_bbs.nr_members = el.nr_members;
	  for (j = 0; j < el.nr_members; bblst_last++, j++)
	    bblst_table[bblst_last] =
	      TO_BLOCK (rgn_edges[el.first_member[j]]);
	  sp->update_bbs.first_member = &bblst_table[bblst_last];
	  update_idx = 0;
	  for (j = 0; j < el.nr_members; j++)
	    {
	      check_block = FROM_BLOCK (rgn_edges[el.first_member[j]]);
	      fst_edge = nxt_edge = OUT_EDGES (check_block);
	      do
		{
		  for (k = 0; k < el.nr_members; k++)
		    if (EDGE_TO_BIT (nxt_edge) == el.first_member[k])
		      break;

		  if (k >= el.nr_members)
		    {
		      bblst_table[bblst_last++] = TO_BLOCK (nxt_edge);
		      update_idx++;
		    }

		  nxt_edge = NEXT_OUT (nxt_edge);
		}
	      while (fst_edge != nxt_edge);
	    }
	  sp->update_bbs.nr_members = update_idx;
	}
      else
	{
	  sp->split_bbs.nr_members = sp->update_bbs.nr_members = 0;

	  sp->is_speculative = 0;
	  sp->src_prob = 0;
	}
    }
}				/* compute_trg_info */
/* Print candidates info, for debugging purposes.  Callable from debugger.  */

void
debug_candidate (i)
     int i;
{
  if (!candidate_table[i].is_valid)
    return;

  if (candidate_table[i].is_speculative)
    {
      int j;
      fprintf (dump, "src b %d bb %d speculative \n", BB_TO_BLOCK (i), i);

      fprintf (dump, "split path: ");
      for (j = 0; j < candidate_table[i].split_bbs.nr_members; j++)
	{
	  int b = candidate_table[i].split_bbs.first_member[j];

	  fprintf (dump, " %d ", b);
	}
      fprintf (dump, "\n");

      fprintf (dump, "update path: ");
      for (j = 0; j < candidate_table[i].update_bbs.nr_members; j++)
	{
	  int b = candidate_table[i].update_bbs.first_member[j];

	  fprintf (dump, " %d ", b);
	}
      fprintf (dump, "\n");
    }
  else
    {
      fprintf (dump, " src %d equivalent\n", BB_TO_BLOCK (i));
    }
}
/* Print candidates info, for debugging purposes.  Callable from debugger.  */

void
debug_candidates (trg)
     int trg;
{
  int i;

  fprintf (dump, "----------- candidate table: target: b=%d bb=%d ---\n",
	   BB_TO_BLOCK (trg), trg);
  for (i = trg + 1; i < current_nr_blocks; i++)
    debug_candidate (i);
}
/* functions for speculative scheduling */

/* Return 0 if x is a set of a register alive in the beginning of one
   of the split-blocks of src, otherwise return 1.  */

static int
check_live_1 (src, x)
     int src;
     rtx x;
{
  register int i;
  register int regno;
  register rtx reg = SET_DEST (x);

  if (reg == 0)
    return 1;

  while (GET_CODE (reg) == SUBREG || GET_CODE (reg) == ZERO_EXTRACT
	 || GET_CODE (reg) == SIGN_EXTRACT
	 || GET_CODE (reg) == STRICT_LOW_PART)
    reg = XEXP (reg, 0);

  if (GET_CODE (reg) != REG)
    return 1;

  regno = REGNO (reg);

  if (regno < FIRST_PSEUDO_REGISTER && global_regs[regno])
    {
      /* Global registers are assumed live.  */
      return 0;
    }
  else
    {
      if (regno < FIRST_PSEUDO_REGISTER)
	{
	  /* check for hard registers */
	  int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
	  while (--j >= 0)
	    {
	      for (i = 0; i < candidate_table[src].split_bbs.nr_members; i++)
		{
		  int b = candidate_table[src].split_bbs.first_member[i];

		  if (REGNO_REG_SET_P (basic_block_live_at_start[b],
				       regno + j))
		    return 0;
		}
	    }
	}
      else
	{
	  /* check for pseudo registers */
	  for (i = 0; i < candidate_table[src].split_bbs.nr_members; i++)
	    {
	      int b = candidate_table[src].split_bbs.first_member[i];

	      if (REGNO_REG_SET_P (basic_block_live_at_start[b], regno))
		return 0;
	    }
	}
    }

  return 1;
}
/* If x is a set of a register R, mark that R is alive in the beginning
   of every update-block of src.  */

static void
update_live_1 (src, x)
     int src;
     rtx x;
{
  register int i;
  register int regno;
  register rtx reg = SET_DEST (x);

  if (reg == 0)
    return;

  while (GET_CODE (reg) == SUBREG || GET_CODE (reg) == ZERO_EXTRACT
	 || GET_CODE (reg) == SIGN_EXTRACT
	 || GET_CODE (reg) == STRICT_LOW_PART)
    reg = XEXP (reg, 0);

  if (GET_CODE (reg) != REG)
    return;

  /* Global registers are always live, so the code below does not apply
     to them.  */

  regno = REGNO (reg);

  if (regno >= FIRST_PSEUDO_REGISTER || !global_regs[regno])
    {
      if (regno < FIRST_PSEUDO_REGISTER)
	{
	  int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
	  while (--j >= 0)
	    {
	      for (i = 0; i < candidate_table[src].update_bbs.nr_members; i++)
		{
		  int b = candidate_table[src].update_bbs.first_member[i];

		  SET_REGNO_REG_SET (basic_block_live_at_start[b], regno + j);
		}
	    }
	}
      else
	{
	  for (i = 0; i < candidate_table[src].update_bbs.nr_members; i++)
	    {
	      int b = candidate_table[src].update_bbs.first_member[i];

	      SET_REGNO_REG_SET (basic_block_live_at_start[b], regno);
	    }
	}
    }
}
/* Return 1 if insn can be speculatively moved from block src to trg,
   otherwise return 0.  Called before first insertion of insn to
   ready-list or before the scheduling.  */

static int
check_live (insn, src)
     rtx insn;
     int src;
{
  /* find the registers set by instruction */
  if (GET_CODE (PATTERN (insn)) == SET
      || GET_CODE (PATTERN (insn)) == CLOBBER)
    return check_live_1 (src, PATTERN (insn));
  else if (GET_CODE (PATTERN (insn)) == PARALLEL)
    {
      int j;
      for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
	if ((GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
	     || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
	    && !check_live_1 (src, XVECEXP (PATTERN (insn), 0, j)))
	  return 0;

      return 1;
    }

  return 1;
}
/* Update the live registers info after insn was moved speculatively from
   block src to trg.  */

static void
update_live (insn, src)
     rtx insn;
     int src;
{
  /* find the registers set by instruction */
  if (GET_CODE (PATTERN (insn)) == SET
      || GET_CODE (PATTERN (insn)) == CLOBBER)
    update_live_1 (src, PATTERN (insn));
  else if (GET_CODE (PATTERN (insn)) == PARALLEL)
    {
      int j;
      for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
	if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
	    || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
	  update_live_1 (src, XVECEXP (PATTERN (insn), 0, j));
    }
}
/* Exception Free Loads:

   We define five classes of speculative loads: IFREE, IRISKY,
   PFREE, PRISKY, and MFREE.

   IFREE loads are loads that are proved to be exception-free, just
   by examining the load insn.  Examples for such loads are loads
   from TOC and loads of global data.

   IRISKY loads are loads that are proved to be exception-risky,
   just by examining the load insn.  Examples for such loads are
   volatile loads and loads from shared memory.

   PFREE loads are loads for which we can prove, by examining other
   insns, that they are exception-free.  Currently, this class consists
   of loads for which we are able to find a "similar load", either in
   the target block, or, if only one split-block exists, in that split
   block.  Load2 is similar to load1 if both have the same single base
   register.  We identify only part of the similar loads, by finding
   an insn upon which both load1 and load2 have a DEF-USE dependence.

   PRISKY loads are loads for which we can prove, by examining other
   insns, that they are exception-risky.  Currently we have two proofs for
   such loads.  The first proof detects loads that are probably guarded by a
   test on the memory address.  This proof is based on the
   backward and forward data dependence information for the region.
   Let load-insn be the examined load.
   Load-insn is PRISKY iff ALL the following hold:

   - insn1 is not in the same block as load-insn
   - there is a DEF-USE dependence chain (insn1, ..., load-insn)
   - test-insn is either a compare or a branch, not in the same block
     as load-insn
   - load-insn is reachable from test-insn
   - there is a DEF-USE dependence chain (insn1, ..., test-insn)

   This proof might fail when the compare and the load are fed
   by an insn not in the region.  To solve this, we will add to this
   group all loads that have no input DEF-USE dependence.

   The second proof detects loads that are directly or indirectly
   fed by a speculative load.  This proof is affected by the
   scheduling process.  We will use the flag fed_by_spec_load.
   Initially, all insns have this flag reset.  After a speculative
   motion of an insn, if insn is either a load, or marked as
   fed_by_spec_load, we will also mark as fed_by_spec_load every
   insn1 for which a DEF-USE dependence (insn, insn1) exists.  A
   load which is fed_by_spec_load is also PRISKY.

   MFREE (maybe-free) loads are all the remaining loads.  They may be
   exception-free, but we cannot prove it.

   Now, all loads in the IFREE and PFREE classes are considered
   exception-free, while all loads in the IRISKY and PRISKY classes are
   considered exception-risky.  As for loads in the MFREE class,
   these are considered either exception-free or exception-risky,
   depending on whether we are pessimistic or optimistic.  We have
   to take the pessimistic approach to assure the safety of
   speculative scheduling, but we can take the optimistic approach
   by invoking the -fsched-spec-load-dangerous option.  */
enum INSN_TRAP_CLASS
{
  TRAP_FREE = 0, IFREE = 1, PFREE_CANDIDATE = 2,
  PRISKY_CANDIDATE = 3, IRISKY = 4, TRAP_RISKY = 5
};

#define WORST_CLASS(class1, class2) \
((class1 > class2) ? class1 : class2)
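/* Editor's sketch (never compiled): the trap classes above form an ordered
   lattice, TRAP_FREE < IFREE < PFREE_CANDIDATE < PRISKY_CANDIDATE < IRISKY
   < TRAP_RISKY, and WORST_CLASS is simply the maximum, so a single risky
   sub-expression makes the whole insn risky.  */
#if 0
#include <stdio.h>

enum INSN_TRAP_CLASS
{
  TRAP_FREE = 0, IFREE = 1, PFREE_CANDIDATE = 2,
  PRISKY_CANDIDATE = 3, IRISKY = 4, TRAP_RISKY = 5
};

#define WORST_CLASS(class1, class2) \
((class1) > (class2) ? (class1) : (class2))

int
main ()
{
  int insn_class = TRAP_FREE;

  /* Fold the classes of an insn's sub-expressions, as may_trap_exp does.  */
  insn_class = WORST_CLASS (insn_class, IFREE);
  insn_class = WORST_CLASS (insn_class, PFREE_CANDIDATE);
  printf ("combined class = %d\n", insn_class);	/* 2 = PFREE_CANDIDATE */
  return 0;
}
#endif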
/* Indexed by INSN_UID, and set if there's DEF-USE dependence between
   some speculatively moved load insn and this one.  */
char *fed_by_spec_load;
char *is_load_insn;

/* Non-zero if block bb_to is equal to, or reachable from block bb_from.  */
#define IS_REACHABLE(bb_from, bb_to)					\
(bb_from == bb_to							\
 || IS_RGN_ENTRY (bb_from)						\
 || (bitset_member (ancestor_edges[bb_to],				\
		    EDGE_TO_BIT (IN_EDGES (BB_TO_BLOCK (bb_from))),	\
		    edgeset_size)))
#define FED_BY_SPEC_LOAD(insn) (fed_by_spec_load[INSN_UID (insn)])
#define IS_LOAD_INSN(insn) (is_load_insn[INSN_UID (insn)])

/* Non-zero iff the address is comprised of at most 1 register.  */
#define CONST_BASED_ADDRESS_P(x)			\
(GET_CODE (x) == REG					\
 || ((GET_CODE (x) == PLUS || GET_CODE (x) == MINUS	\
      || (GET_CODE (x) == LO_SUM))			\
     && (GET_CODE (XEXP (x, 0)) == CONST_INT		\
	 || GET_CODE (XEXP (x, 1)) == CONST_INT)))
/* Turns on the fed_by_spec_load flag for insns fed by load_insn.  */

static void
set_spec_fed (load_insn)
     rtx load_insn;
{
  rtx link;

  for (link = INSN_DEPEND (load_insn); link; link = XEXP (link, 1))
    if (GET_MODE (link) == VOIDmode)
      FED_BY_SPEC_LOAD (XEXP (link, 0)) = 1;
}				/* set_spec_fed */
/* On the path from the insn to load_insn_bb, find a conditional
   branch depending on insn, that guards the speculative load.  */

static int
find_conditional_protection (insn, load_insn_bb)
     rtx insn;
     int load_insn_bb;
{
  rtx link;

  /* iterate through DEF-USE forward dependences */
  for (link = INSN_DEPEND (insn); link; link = XEXP (link, 1))
    {
      rtx next = XEXP (link, 0);
      if ((CONTAINING_RGN (INSN_BLOCK (next)) ==
	   CONTAINING_RGN (BB_TO_BLOCK (load_insn_bb)))
	  && IS_REACHABLE (INSN_BB (next), load_insn_bb)
	  && load_insn_bb != INSN_BB (next)
	  && GET_MODE (link) == VOIDmode
	  && (GET_CODE (next) == JUMP_INSN
	      || find_conditional_protection (next, load_insn_bb)))
	return 1;
    }
  return 0;
}				/* find_conditional_protection */
/* Returns 1 if the same insn1 that participates in the computation
   of load_insn's address is feeding a conditional branch that is
   guarding on load_insn.  This is true if we find the two DEF-USE
   chains:
   insn1 -> ... -> conditional-branch
   insn1 -> ... -> load_insn,
   and if a flow path exists:
   insn1 -> ... -> conditional-branch -> ... -> load_insn,
   and if insn1 is on the path
   region-entry -> ... -> bb_trg -> ... load_insn.

   Locate insn1 by climbing on LOG_LINKS from load_insn.
   Locate the branch by following INSN_DEPEND from insn1.  */

static int
is_conditionally_protected (load_insn, bb_src, bb_trg)
     rtx load_insn;
     int bb_src, bb_trg;
{
  rtx link;

  for (link = LOG_LINKS (load_insn); link; link = XEXP (link, 1))
    {
      rtx insn1 = XEXP (link, 0);

      /* must be a DEF-USE dependence upon non-branch */
      if (GET_MODE (link) != VOIDmode
	  || GET_CODE (insn1) == JUMP_INSN)
	continue;

      /* must exist a path: region-entry -> ... -> bb_trg -> ... load_insn */
      if (INSN_BB (insn1) == bb_src
	  || (CONTAINING_RGN (INSN_BLOCK (insn1))
	      != CONTAINING_RGN (BB_TO_BLOCK (bb_src)))
	  || (!IS_REACHABLE (bb_trg, INSN_BB (insn1))
	      && !IS_REACHABLE (INSN_BB (insn1), bb_trg)))
	continue;

      /* now search for the conditional-branch */
      if (find_conditional_protection (insn1, bb_src))
	return 1;

      /* recursive step: search another insn1, "above" current insn1.  */
      return is_conditionally_protected (insn1, bb_src, bb_trg);
    }

  /* the chain does not exist */
  return 0;
}				/* is_conditionally_protected */
/* Returns 1 if a clue for "similar load" 'insn2' is found, and hence
   load_insn can move speculatively from bb_src to bb_trg.  All the
   following must hold:

   (1) both loads have 1 base register (PFREE_CANDIDATEs).
   (2) load_insn and load1 have a def-use dependence upon
   the same insn 'insn1'.
   (3) either load2 is in bb_trg, or:
   - there's only one split-block, and
   - load1 is on the escape path, and

   From all these we can conclude that the two loads access memory
   addresses that differ at most by a constant, and hence if moving
   load_insn would cause an exception, it would have been caused by
   load2 anyhow.  */

static int
is_pfree (load_insn, bb_src, bb_trg)
     rtx load_insn;
     int bb_src, bb_trg;
{
  rtx back_link;
  register candidate *candp = candidate_table + bb_src;

  if (candp->split_bbs.nr_members != 1)
    /* must have exactly one escape block */
    return 0;

  for (back_link = LOG_LINKS (load_insn);
       back_link; back_link = XEXP (back_link, 1))
    {
      rtx insn1 = XEXP (back_link, 0);

      if (GET_MODE (back_link) == VOIDmode)
	{
	  /* found a DEF-USE dependence (insn1, load_insn) */
	  rtx fore_link;

	  for (fore_link = INSN_DEPEND (insn1);
	       fore_link; fore_link = XEXP (fore_link, 1))
	    {
	      rtx insn2 = XEXP (fore_link, 0);
	      if (GET_MODE (fore_link) == VOIDmode)
		{
		  /* found a DEF-USE dependence (insn1, insn2) */
		  if (haifa_classify_insn (insn2) != PFREE_CANDIDATE)
		    /* insn2 not guaranteed to be a 1 base reg load */
		    continue;

		  if (INSN_BB (insn2) == bb_trg)
		    /* insn2 is the similar load, in the target block */
		    return 1;

		  if (*(candp->split_bbs.first_member) == INSN_BLOCK (insn2))
		    /* insn2 is a similar load, in a split-block */
		    return 1;
		}
	    }
	}
    }

  /* couldn't find a similar load */
  return 0;
}				/* is_pfree */
/* Returns a class that insn with GET_DEST(insn)=x may belong to,
   as found by analyzing insn's expression.  */

static int
may_trap_exp (x, is_store)
     rtx x;
     int is_store;
{
  enum rtx_code code;

  if (x == 0)
    return TRAP_FREE;
  code = GET_CODE (x);
  if (is_store)
    {
      if (code == MEM)
	return TRAP_RISKY;
      else
	return TRAP_FREE;
    }
  if (code == MEM)
    {
      /* The insn uses memory */
      /* a volatile load */
      if (MEM_VOLATILE_P (x))
	return IRISKY;
      /* an exception-free load */
      if (!may_trap_p (x))
	return IFREE;
      /* a load with 1 base register, to be further checked */
      if (CONST_BASED_ADDRESS_P (XEXP (x, 0)))
	return PFREE_CANDIDATE;
      /* no info on the load, to be further checked */
      return PRISKY_CANDIDATE;
    }
  else
    {
      char *fmt;
      int i, insn_class = TRAP_FREE;

      /* neither store nor load, check if it may cause a trap */
      if (may_trap_p (x))
	return TRAP_RISKY;
      /* recursive step: walk the insn... */
      fmt = GET_RTX_FORMAT (code);
      for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
	{
	  if (fmt[i] == 'e')
	    {
	      int tmp_class = may_trap_exp (XEXP (x, i), is_store);
	      insn_class = WORST_CLASS (insn_class, tmp_class);
	    }
	  else if (fmt[i] == 'E')
	    {
	      int j;
	      for (j = 0; j < XVECLEN (x, i); j++)
		{
		  int tmp_class = may_trap_exp (XVECEXP (x, i, j), is_store);
		  insn_class = WORST_CLASS (insn_class, tmp_class);
		  if (insn_class == TRAP_RISKY || insn_class == IRISKY)
		    break;
		}
	    }
	  if (insn_class == TRAP_RISKY || insn_class == IRISKY)
	    break;
	}
      return insn_class;
    }
}				/* may_trap_exp */
/* Classifies insn for the purpose of verifying that it can be
   moved speculatively, by examining its patterns, returning:
   TRAP_RISKY: store, or risky non-load insn (e.g. division by variable).
   TRAP_FREE: non-load insn.
   IFREE: load from a globally safe location.
   IRISKY: volatile load.
   PFREE_CANDIDATE, PRISKY_CANDIDATE: load that needs to be checked for
   being either PFREE or PRISKY.  */

static int
haifa_classify_insn (insn)
     rtx insn;
{
  rtx pat = PATTERN (insn);
  int tmp_class = TRAP_FREE;
  int insn_class = TRAP_FREE;
  enum rtx_code code;

  if (GET_CODE (pat) == PARALLEL)
    {
      int i, len = XVECLEN (pat, 0);

      for (i = len - 1; i >= 0; i--)
	{
	  code = GET_CODE (XVECEXP (pat, 0, i));
	  switch (code)
	    {
	    case CLOBBER:
	      /* test if it is a 'store' */
	      tmp_class = may_trap_exp (XEXP (XVECEXP (pat, 0, i), 0), 1);
	      break;
	    case SET:
	      /* test if it is a store */
	      tmp_class = may_trap_exp (SET_DEST (XVECEXP (pat, 0, i)), 1);
	      if (tmp_class == TRAP_RISKY)
		break;
	      /* test if it is a load */
	      tmp_class =
		WORST_CLASS (tmp_class,
			     may_trap_exp (SET_SRC (XVECEXP (pat, 0, i)), 0));
	      break;
	    case TRAP_IF:
	      tmp_class = TRAP_RISKY;
	      break;
	    default:;
	    }
	  insn_class = WORST_CLASS (insn_class, tmp_class);
	  if (insn_class == TRAP_RISKY || insn_class == IRISKY)
	    break;
	}
    }
  else
    {
      code = GET_CODE (pat);
      switch (code)
	{
	case CLOBBER:
	  /* test if it is a 'store' */
	  tmp_class = may_trap_exp (XEXP (pat, 0), 1);
	  break;
	case SET:
	  /* test if it is a store */
	  tmp_class = may_trap_exp (SET_DEST (pat), 1);
	  if (tmp_class == TRAP_RISKY)
	    break;
	  /* test if it is a load */
	  tmp_class =
	    WORST_CLASS (tmp_class,
			 may_trap_exp (SET_SRC (pat), 0));
	  break;
	case TRAP_IF:
	  tmp_class = TRAP_RISKY;
	  break;
	default:;
	}
      insn_class = tmp_class;
    }

  return insn_class;
}				/* haifa_classify_insn */
/* Return 1 if load_insn is prisky (i.e. if load_insn is fed by
   a load moved speculatively, or if load_insn is protected by
   a compare on load_insn's address).  */

static int
is_prisky (load_insn, bb_src, bb_trg)
     rtx load_insn;
     int bb_src, bb_trg;
{
  if (FED_BY_SPEC_LOAD (load_insn))
    return 1;

  if (LOG_LINKS (load_insn) == NULL)
    /* dependence may 'hide' out of the region */
    return 0;

  if (is_conditionally_protected (load_insn, bb_src, bb_trg))
    return 1;

  return 0;
}
/* Insn is a candidate to be moved speculatively from bb_src to bb_trg.
   Return 1 if insn is exception-free (and the motion is valid)
   and 0 otherwise.  */

static int
is_exception_free (insn, bb_src, bb_trg)
     rtx insn;
     int bb_src, bb_trg;
{
  int insn_class = haifa_classify_insn (insn);

  /* handle non-load insns */
  switch (insn_class)
    {
    case TRAP_FREE:
      return 1;
    case TRAP_RISKY:
      return 0;
    default:;
    }

  /* handle loads */
  if (!flag_schedule_speculative_load)
    return 0;
  IS_LOAD_INSN (insn) = 1;

  /* handle speculative loads */
  switch (insn_class)
    {
    case IFREE:
      return 1;
    case IRISKY:
      return 0;
    case PFREE_CANDIDATE:
      if (is_pfree (insn, bb_src, bb_trg))
	return 1;
      /* don't 'break' here: PFREE-candidate is also PRISKY-candidate */
    case PRISKY_CANDIDATE:
      if (!flag_schedule_speculative_load_dangerous
	  || is_prisky (insn, bb_src, bb_trg))
	return 0;
      break;
    default:;
    }

  return flag_schedule_speculative_load_dangerous;
}				/* is_exception_free */
/* Process an insn's memory dependencies.  There are four kinds of
   dependencies:

   (0) read dependence: read follows read
   (1) true dependence: read follows write
   (2) anti dependence: write follows read
   (3) output dependence: write follows write

   We are careful to build only dependencies which actually exist, and
   use transitivity to avoid building too many links.  */
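/* Editor's sketch (never compiled) of the four dependence kinds listed
   above, classified purely by whether each of the two memory accesses is
   a read or a write.  The helper name is hypothetical; only the
   classification mirrors the comment.  */
#if 0
#include <stdio.h>

static const char *
mem_dep_kind (int first_is_write, int second_is_write)
{
  if (!first_is_write && !second_is_write)
    return "read dependence (read follows read)";
  if (first_is_write && !second_is_write)
    return "true dependence (read follows write)";
  if (!first_is_write && second_is_write)
    return "anti dependence (write follows read)";
  return "output dependence (write follows write)";
}

int
main ()
{
  printf ("%s\n", mem_dep_kind (1, 0));	/* a load after a store */
  printf ("%s\n", mem_dep_kind (0, 1));	/* a store after a load */
  return 0;
}
#endif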
/* Return the INSN_LIST containing INSN in LIST, or NULL
   if LIST does not contain INSN.  */

HAIFA_INLINE static rtx
find_insn_list (insn, list)
     rtx insn;
     rtx list;
{
  while (list)
    {
      if (XEXP (list, 0) == insn)
	return list;
      list = XEXP (list, 1);
    }
  return 0;
}
/* Return 1 if the pair (insn, x) is found in (LIST, LIST1), or 0 otherwise.  */

HAIFA_INLINE static char
find_insn_mem_list (insn, x, list, list1)
     rtx insn, x;
     rtx list, list1;
{
  while (list)
    {
      if (XEXP (list, 0) == insn
	  && XEXP (list1, 0) == x)
	return 1;
      list = XEXP (list, 1);
      list1 = XEXP (list1, 1);
    }
  return 0;
}
/* Compute the function units used by INSN.  This caches the value
   returned by function_units_used.  A function unit is encoded as the
   unit number if the value is non-negative and the complement of a
   mask if the value is negative.  A function unit index is the
   non-negative encoding.  */

HAIFA_INLINE static int
insn_unit (insn)
     rtx insn;
{
  register int unit = INSN_UNIT (insn);

  if (unit == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
	 We can't pass these directly to function_units_used because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
	unit = -1;
      else
	{
	  unit = function_units_used (insn);
	  /* Increment non-negative values so we can cache zero.  */
	  if (unit >= 0)
	    unit++;
	}

      /* We only cache 16 bits of the result, so if the value is out of
	 range, don't cache it.  */
      if (FUNCTION_UNITS_SIZE < HOST_BITS_PER_SHORT
	  || unit >= 0
	  || (~unit & ((1 << (HOST_BITS_PER_SHORT - 1)) - 1)) == 0)
	INSN_UNIT (insn) = unit;
    }
  return (unit > 0 ? unit - 1 : unit);
}
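/* Editor's sketch (never compiled) of the negative-encoding idiom used by
   the unit code above and below: a single unit is stored as its number, a
   set of units as the complement of a bit mask, and the decode loop walks
   the re-complemented mask bit by bit.  */
#if 0
#include <stdio.h>

int
main ()
{
  int unit = ~((1 << 0) | (1 << 2));	/* units 0 and 2, as ~mask */
  int i;

  if (unit >= 0)
    printf ("single unit %d\n", unit);
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	printf ("unit %d\n", i);	/* prints 0, then 2 */
  return 0;
}
#endif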
/* Compute the blockage range for executing INSN on UNIT.  This caches
   the value returned by the blockage_range_function for the unit.
   These values are encoded in an int where the upper half gives the
   minimum value and the lower half gives the maximum value.  */

HAIFA_INLINE static unsigned int
blockage_range (unit, insn)
     int unit;
     rtx insn;
{
  unsigned int blockage = INSN_BLOCKAGE (insn);
  unsigned int range;

  if (UNIT_BLOCKED (blockage) != unit + 1)
    {
      range = function_units[unit].blockage_range_function (insn);
      /* We only cache the blockage range for one unit and then only if
	 the values are small.  */
      if (HOST_BITS_PER_INT >= UNIT_BITS + 2 * BLOCKAGE_BITS)
	INSN_BLOCKAGE (insn) = ENCODE_BLOCKAGE (unit + 1, range);
    }
  else
    range = BLOCKAGE_RANGE (blockage);

  return range;
}
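/* Editor's sketch (never compiled) of the packing the comment above
   describes: the minimum in the upper half of an int, the maximum in the
   lower half.  The shift width here is illustrative; the real code derives
   it from UNIT_BITS and BLOCKAGE_BITS.  */
#if 0
#include <stdio.h>

#define HALF_BITS 8	/* hypothetical half-width */

static unsigned int
encode_range (unsigned int min, unsigned int max)
{
  return (min << HALF_BITS) | max;
}

int
main ()
{
  unsigned int range = encode_range (2, 5);

  printf ("min = %u, max = %u\n",
	  range >> HALF_BITS,			/* MIN_BLOCKAGE_COST-style */
	  range & ((1u << HALF_BITS) - 1));	/* MAX_BLOCKAGE_COST-style */
  return 0;
}
#endif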
/* A vector indexed by function unit instance giving the last insn to use
   the unit.  The value of the function unit instance index for unit U
   instance I is (U + I * FUNCTION_UNITS_SIZE).  */
static rtx unit_last_insn[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit instance giving the minimum time when
   the unit will unblock based on the maximum blockage cost.  */
static int unit_tick[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit number giving the number of insns
   that remain to use the unit.  */
static int unit_n_insns[FUNCTION_UNITS_SIZE];

/* Reset the function unit state to the null state.  */

static void
clear_units ()
{
  bzero ((char *) unit_last_insn, sizeof (unit_last_insn));
  bzero ((char *) unit_tick, sizeof (unit_tick));
  bzero ((char *) unit_n_insns, sizeof (unit_n_insns));
}
/* Return the issue-delay of an insn.  */

HAIFA_INLINE static int
insn_issue_delay (insn)
     rtx insn;
{
  int i, delay = 0;
  int unit = insn_unit (insn);

  /* efficiency note: in fact, we are working 'hard' to compute a
     value that was available in md file, and is not available in
     function_units[] structure.  It would be nice to have this
     value there, too.  */
  if (unit >= 0)
    {
      if (function_units[unit].blockage_range_function &&
	  function_units[unit].blockage_function)
	delay = function_units[unit].blockage_function (insn, insn);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0 && function_units[i].blockage_range_function
	  && function_units[i].blockage_function)
	delay = MAX (delay, function_units[i].blockage_function (insn, insn));

  return delay;
}
/* Return the actual hazard cost of executing INSN on the unit UNIT,
   instance INSTANCE at time CLOCK if the previous actual hazard cost
   was COST.  */

HAIFA_INLINE static int
actual_hazard_this_instance (unit, instance, insn, clock, cost)
     int unit, instance, clock, cost;
     rtx insn;
{
  int tick = unit_tick[instance];	/* issue time of the last issued insn */

  if (tick - clock > cost)
    {
      /* The scheduler is operating forward, so unit's last insn is the
	 executing insn and INSN is the candidate insn.  We want a
	 more exact measure of the blockage if we execute INSN at CLOCK
	 given when we committed the execution of the unit's last insn.

	 The blockage value is given by either the unit's max blockage
	 constant, blockage range function, or blockage function.  Use
	 the most exact form for the given unit.  */

      if (function_units[unit].blockage_range_function)
	{
	  if (function_units[unit].blockage_function)
	    tick += (function_units[unit].blockage_function
		     (unit_last_insn[instance], insn)
		     - function_units[unit].max_blockage);
	  else
	    tick += ((int) MAX_BLOCKAGE_COST (blockage_range (unit, insn))
		     - function_units[unit].max_blockage);
	}
      if (tick - clock > cost)
	cost = tick - clock;
    }
  return cost;
}
/* Record INSN as having begun execution on the units encoded by UNIT at
   time CLOCK.  */

HAIFA_INLINE static void
schedule_unit (unit, insn, clock)
     int unit, clock;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      int instance = unit;
#if MAX_MULTIPLICITY > 1
      /* Find the first free instance of the function unit and use that
	 one.  We assume that one is free.  */
      for (i = function_units[unit].multiplicity - 1; i > 0; i--)
	{
	  if (!actual_hazard_this_instance (unit, instance, insn, clock, 0))
	    break;
	  instance += FUNCTION_UNITS_SIZE;
	}
#endif
      unit_last_insn[instance] = insn;
      unit_tick[instance] = (clock + function_units[unit].max_blockage);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	schedule_unit (i, insn, clock);
}
/* Return the actual hazard cost of executing INSN on the units encoded by
   UNIT at time CLOCK if the previous actual hazard cost was COST.  */

HAIFA_INLINE static int
actual_hazard (unit, insn, clock, cost)
     int unit, clock, cost;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      /* Find the instance of the function unit with the minimum hazard.  */
      int instance = unit;
      int best_cost = actual_hazard_this_instance (unit, instance, insn,
						   clock, cost);
      int this_cost;

#if MAX_MULTIPLICITY > 1
      if (best_cost > cost)
	{
	  for (i = function_units[unit].multiplicity - 1; i > 0; i--)
	    {
	      instance += FUNCTION_UNITS_SIZE;
	      this_cost = actual_hazard_this_instance (unit, instance, insn,
						       clock, cost);
	      if (this_cost < best_cost)
		{
		  best_cost = this_cost;
		  if (this_cost <= cost)
		    break;
		}
	    }
	}
#endif
      cost = MAX (cost, best_cost);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	cost = actual_hazard (i, insn, clock, cost);

  return cost;
}
/* Return the potential hazard cost of executing an instruction on the
   units encoded by UNIT if the previous potential hazard cost was COST.
   An insn with a large blockage time is chosen in preference to one
   with a smaller time; an insn that uses a unit that is more likely
   to be used is chosen in preference to one with a unit that is less
   used.  We are trying to minimize a subsequent actual hazard.  */

HAIFA_INLINE static int
potential_hazard (unit, insn, cost)
     int unit, cost;
     rtx insn;
{
  int i, ncost;
  unsigned int minb, maxb;

  if (unit >= 0)
    {
      minb = maxb = function_units[unit].max_blockage;
      if (maxb > 1)
	{
	  if (function_units[unit].blockage_range_function)
	    {
	      maxb = minb = blockage_range (unit, insn);
	      maxb = MAX_BLOCKAGE_COST (maxb);
	      minb = MIN_BLOCKAGE_COST (minb);
	    }

	  if (maxb > 1)
	    {
	      /* Make the number of instructions left dominate.  Make the
		 minimum delay dominate the maximum delay.  If all these
		 are the same, use the unit number to add an arbitrary
		 ordering.  Other terms can be added.  */
	      ncost = minb * 0x40 + maxb;
	      ncost *= (unit_n_insns[unit] - 1) * 0x1000 + unit;
	      if (ncost > cost)
		cost = ncost;
	    }
	}
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
	cost = potential_hazard (i, insn, cost);

  return cost;
}
/* Compute cost of executing INSN given the dependence LINK on the insn USED.
   This is the number of cycles between instruction issue and
   instruction results.  */

HAIFA_INLINE static int
insn_cost (insn, link, used)
     rtx insn, link, used;
{
  register int cost = INSN_COST (insn);

  if (cost == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
	 We can't pass these directly to result_ready_cost because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
	{
	  INSN_COST (insn) = 1;
	  return 1;
	}
      else
	{
	  cost = result_ready_cost (insn);

	  if (cost < 1)
	    cost = 1;

	  INSN_COST (insn) = cost;
	}
    }

  /* in this case estimate cost without caring how insn is used.  */
  if (link == 0 && used == 0)
    return cost;

  /* A USE insn should never require the value used to be computed.  This
     allows the computation of a function's result and parameter values to
     overlap the return and call.  */
  recog_memoized (used);
  if (INSN_CODE (used) < 0)
    LINK_COST_FREE (link) = 1;

  /* If some dependencies vary the cost, compute the adjustment.  Most
     commonly, the adjustment is complete: either the cost is ignored
     (in the case of an output- or anti-dependence), or the cost is
     unchanged.  These values are cached in the link as LINK_COST_FREE
     and LINK_COST_ZERO.  */

  if (LINK_COST_FREE (link))
    cost = 1;
#ifdef ADJUST_COST
  else if (!LINK_COST_ZERO (link))
    {
      int ncost = cost;

      ADJUST_COST (used, link, insn, ncost);
      if (ncost <= 1)
	LINK_COST_FREE (link) = ncost = 1;
      if (cost == ncost)
	LINK_COST_ZERO (link) = 1;
      cost = ncost;
    }
#endif

  return cost;
}
/* Compute the priority number for INSN.  */

static int
priority (insn)
     rtx insn;
{
  int this_priority;
  rtx link;

  if (GET_RTX_CLASS (GET_CODE (insn)) != 'i')
    return 0;

  if ((this_priority = INSN_PRIORITY (insn)) == 0)
    {
      if (INSN_DEPEND (insn) == 0)
	this_priority = insn_cost (insn, 0, 0);
      else
	for (link = INSN_DEPEND (insn); link; link = XEXP (link, 1))
	  {
	    rtx next;
	    int next_priority;

	    if (RTX_INTEGRATED_P (link))
	      continue;

	    next = XEXP (link, 0);

	    /* critical path is meaningful in block boundaries only */
	    if (INSN_BLOCK (next) != INSN_BLOCK (insn))
	      continue;

	    next_priority = insn_cost (insn, link, next) + priority (next);
	    if (next_priority > this_priority)
	      this_priority = next_priority;
	  }
      INSN_PRIORITY (insn) = this_priority;
    }
  return this_priority;
}
/* Remove all INSN_LISTs and EXPR_LISTs from the pending lists and add
   them to the unused_*_list variables, so that they can be reused.  */

static void
free_pending_lists ()
{
  if (current_nr_blocks <= 1)
    {
      free_list (&pending_read_insns, &unused_insn_list);
      free_list (&pending_write_insns, &unused_insn_list);
      free_list (&pending_read_mems, &unused_expr_list);
      free_list (&pending_write_mems, &unused_expr_list);
    }
  else
    {
      /* interblock scheduling */
      int bb;

      for (bb = 0; bb < current_nr_blocks; bb++)
	{
	  free_list (&bb_pending_read_insns[bb], &unused_insn_list);
	  free_list (&bb_pending_write_insns[bb], &unused_insn_list);
	  free_list (&bb_pending_read_mems[bb], &unused_expr_list);
	  free_list (&bb_pending_write_mems[bb], &unused_expr_list);
	}
    }
}
/* Add an INSN and MEM reference pair to a pending INSN_LIST and MEM_LIST.
   The MEM is a memory reference contained within INSN, which we are saving
   so that we can do memory aliasing on it.  */

static void
add_insn_mem_dependence (insn_list, mem_list, insn, mem)
     rtx *insn_list, *mem_list, insn, mem;
{
  register rtx link;

  link = alloc_INSN_LIST (insn, *insn_list);
  *insn_list = link;

  link = alloc_EXPR_LIST (VOIDmode, mem, *mem_list);
  *mem_list = link;

  pending_lists_length++;
}
/* Make a dependency between every memory reference on the pending lists
   and INSN, thus flushing the pending lists.  If ONLY_WRITE, don't flush
   the read list.  */

static void
flush_pending_lists (insn, only_write)
     rtx insn;
     int only_write;
{
  rtx u;
  rtx link;

  while (pending_read_insns && ! only_write)
    {
      add_dependence (insn, XEXP (pending_read_insns, 0), REG_DEP_ANTI);

      link = pending_read_insns;
      pending_read_insns = XEXP (pending_read_insns, 1);
      XEXP (link, 1) = unused_insn_list;
      unused_insn_list = link;

      link = pending_read_mems;
      pending_read_mems = XEXP (pending_read_mems, 1);
      XEXP (link, 1) = unused_expr_list;
      unused_expr_list = link;
    }
  while (pending_write_insns)
    {
      add_dependence (insn, XEXP (pending_write_insns, 0), REG_DEP_ANTI);

      link = pending_write_insns;
      pending_write_insns = XEXP (pending_write_insns, 1);
      XEXP (link, 1) = unused_insn_list;
      unused_insn_list = link;

      link = pending_write_mems;
      pending_write_mems = XEXP (pending_write_mems, 1);
      XEXP (link, 1) = unused_expr_list;
      unused_expr_list = link;
    }
  pending_lists_length = 0;

  /* last_pending_memory_flush is now a list of insns */
  for (u = last_pending_memory_flush; u; u = XEXP (u, 1))
    add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);

  free_list (&last_pending_memory_flush, &unused_insn_list);
  last_pending_memory_flush = alloc_INSN_LIST (insn, NULL_RTX);
}
/* Analyze a single SET or CLOBBER rtx, X, creating all dependencies generated
   by the write to the destination of X, and reads of everything mentioned.  */

static void
sched_analyze_1 (x, insn)
     rtx x, insn;
{
  register int regno;
  register rtx dest = SET_DEST (x);

  if (dest == 0)
    return;

  while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
	 || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
    {
      if (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
	{
	  /* The second and third arguments are values read by this insn.  */
	  sched_analyze_2 (XEXP (dest, 1), insn);
	  sched_analyze_2 (XEXP (dest, 2), insn);
	}
      dest = SUBREG_REG (dest);
    }

  if (GET_CODE (dest) == REG)
    {
      register int i;
      rtx u;

      regno = REGNO (dest);

      /* A hard reg in a wide mode may really be multiple registers.
	 If so, mark all of them just like the first.  */
      if (regno < FIRST_PSEUDO_REGISTER)
	{
	  i = HARD_REGNO_NREGS (regno, GET_MODE (dest));
	  while (--i >= 0)
	    {
	      for (u = reg_last_uses[regno + i]; u; u = XEXP (u, 1))
		add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
	      reg_last_uses[regno + i] = 0;

	      for (u = reg_last_sets[regno + i]; u; u = XEXP (u, 1))
		add_dependence (insn, XEXP (u, 0), REG_DEP_OUTPUT);

	      SET_REGNO_REG_SET (reg_pending_sets, regno + i);

	      if ((call_used_regs[regno + i] || global_regs[regno + i]))
		/* Function calls clobber all call_used regs.  */
		for (u = last_function_call; u; u = XEXP (u, 1))
		  add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
	    }
	}
      else
	{
	  for (u = reg_last_uses[regno]; u; u = XEXP (u, 1))
	    add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
	  reg_last_uses[regno] = 0;

	  for (u = reg_last_sets[regno]; u; u = XEXP (u, 1))
	    add_dependence (insn, XEXP (u, 0), REG_DEP_OUTPUT);

	  SET_REGNO_REG_SET (reg_pending_sets, regno);

	  /* Pseudos that are REG_EQUIV to something may be replaced
	     by that during reloading.  We need only add dependencies for
	     the address in the REG_EQUIV note.  */
	  if (!reload_completed
	      && reg_known_equiv_p[regno]
	      && GET_CODE (reg_known_value[regno]) == MEM)
	    sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);

	  /* Don't let it cross a call after scheduling if it doesn't
	     already cross one.  */

	  if (REG_N_CALLS_CROSSED (regno) == 0)
	    for (u = last_function_call; u; u = XEXP (u, 1))
	      add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
	}
    }
  else if (GET_CODE (dest) == MEM)
    {
      /* Writing memory.  */

      if (pending_lists_length > 32)
	{
	  /* Flush all pending reads and writes to prevent the pending lists
	     from getting any larger.  Insn scheduling runs too slowly when
	     these lists get long.  The number 32 was chosen because it
	     seems like a reasonable number.  When compiling GCC with itself,
	     this flush occurs 8 times for sparc, and 10 times for m88k using
	     the gcc.c source file.  */
	  flush_pending_lists (insn, 0);
	}
      else
	{
	  rtx u;
	  rtx pending, pending_mem;

	  pending = pending_read_insns;
	  pending_mem = pending_read_mems;
	  while (pending)
	    {
	      /* If a dependency already exists, don't create a new one.  */
	      if (!find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
		if (anti_dependence (XEXP (pending_mem, 0), dest))
		  add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);

	      pending = XEXP (pending, 1);
	      pending_mem = XEXP (pending_mem, 1);
	    }

	  pending = pending_write_insns;
	  pending_mem = pending_write_mems;
	  while (pending)
	    {
	      /* If a dependency already exists, don't create a new one.  */
	      if (!find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
		if (output_dependence (XEXP (pending_mem, 0), dest))
		  add_dependence (insn, XEXP (pending, 0), REG_DEP_OUTPUT);

	      pending = XEXP (pending, 1);
	      pending_mem = XEXP (pending_mem, 1);
	    }

	  for (u = last_pending_memory_flush; u; u = XEXP (u, 1))
	    add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);

	  add_insn_mem_dependence (&pending_write_insns, &pending_write_mems,
				   insn, dest);
	}
      sched_analyze_2 (XEXP (dest, 0), insn);
    }

  /* Analyze reads.  */
  if (GET_CODE (x) == SET)
    sched_analyze_2 (SET_SRC (x), insn);
}
/* Analyze the uses of memory and registers in rtx X in INSN.  */

static void
sched_analyze_2 (x, insn)
     rtx x, insn;
{
  register int i;
  register int j;
  register enum rtx_code code;
  register char *fmt;

  if (x == 0)
    return;

  code = GET_CODE (x);

  switch (code)
    {
    case CONST_INT:
    case CONST_DOUBLE:
    case SYMBOL_REF:
    case CONST:
    case LABEL_REF:
      /* Ignore constants.  Note that we must handle CONST_DOUBLE here
	 because it may have a cc0_rtx in its CONST_DOUBLE_CHAIN field, but
	 this does not mean that this insn is using cc0.  */
      return;

#ifdef HAVE_cc0
    case CC0:
      {
	rtx link, prev;

	/* User of CC0 depends on immediately preceding insn.  */
	SCHED_GROUP_P (insn) = 1;

	/* There may be a note before this insn now, but all notes will
	   be removed before we actually try to schedule the insns, so
	   it won't cause a problem later.  We must avoid it here though.  */
	prev = prev_nonnote_insn (insn);

	/* Make a copy of all dependencies on the immediately previous insn,
	   and add to this insn.  This is so that all the dependencies will
	   apply to the group.  Remove an explicit dependence on this insn
	   as SCHED_GROUP_P now represents it.  */

	if (find_insn_list (prev, LOG_LINKS (insn)))
	  remove_dependence (insn, prev);

	for (link = LOG_LINKS (prev); link; link = XEXP (link, 1))
	  add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));

	return;
      }
#endif

    case REG:
      {
	rtx u;
	int regno = REGNO (x);
	if (regno < FIRST_PSEUDO_REGISTER)
	  {
	    i = HARD_REGNO_NREGS (regno, GET_MODE (x));
	    while (--i >= 0)
	      {
		reg_last_uses[regno + i]
		  = alloc_INSN_LIST (insn, reg_last_uses[regno + i]);

		for (u = reg_last_sets[regno + i]; u; u = XEXP (u, 1))
		  add_dependence (insn, XEXP (u, 0), 0);

		if ((call_used_regs[regno + i] || global_regs[regno + i]))
		  /* Function calls clobber all call_used regs.  */
		  for (u = last_function_call; u; u = XEXP (u, 1))
		    add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
	      }
	  }
	else
	  {
	    reg_last_uses[regno] = alloc_INSN_LIST (insn, reg_last_uses[regno]);

	    for (u = reg_last_sets[regno]; u; u = XEXP (u, 1))
	      add_dependence (insn, XEXP (u, 0), 0);

	    /* Pseudos that are REG_EQUIV to something may be replaced
	       by that during reloading.  We need only add dependencies for
	       the address in the REG_EQUIV note.  */
	    if (!reload_completed
		&& reg_known_equiv_p[regno]
		&& GET_CODE (reg_known_value[regno]) == MEM)
	      sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);

	    /* If the register does not already cross any calls, then add this
	       insn to the sched_before_next_call list so that it will still
	       not cross calls after scheduling.  */
	    if (REG_N_CALLS_CROSSED (regno) == 0)
	      add_dependence (sched_before_next_call, insn, REG_DEP_ANTI);
	  }
	return;
      }

    case MEM:
      {
	/* Reading memory.  */
	rtx u;
	rtx pending, pending_mem;

	pending = pending_read_insns;
	pending_mem = pending_read_mems;
	while (pending)
	  {
	    /* If a dependency already exists, don't create a new one.  */
	    if (!find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
	      if (read_dependence (XEXP (pending_mem, 0), x))
		add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);

	    pending = XEXP (pending, 1);
	    pending_mem = XEXP (pending_mem, 1);
	  }

	pending = pending_write_insns;
	pending_mem = pending_write_mems;
	while (pending)
	  {
	    /* If a dependency already exists, don't create a new one.  */
	    if (!find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
	      if (true_dependence (XEXP (pending_mem, 0), VOIDmode,
				   x, rtx_varies_p))
		add_dependence (insn, XEXP (pending, 0), 0);

	    pending = XEXP (pending, 1);
	    pending_mem = XEXP (pending_mem, 1);
	  }

	for (u = last_pending_memory_flush; u; u = XEXP (u, 1))
	  add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);

	/* Always add these dependencies to pending_reads, since
	   this insn may be followed by a write.  */
	add_insn_mem_dependence (&pending_read_insns, &pending_read_mems,
				 insn, x);

	/* Take advantage of tail recursion here.  */
	sched_analyze_2 (XEXP (x, 0), insn);
	return;
      }

    case TRAP_IF:
      /* Force pending stores to memory in case a trap handler needs them.  */
      flush_pending_lists (insn, 1);
      break;

    case ASM_OPERANDS:
    case ASM_INPUT:
    case UNSPEC_VOLATILE:
      {
	rtx u;

	/* Traditional and volatile asm instructions must be considered to use
	   and clobber all hard registers, all pseudo-registers and all of
	   memory.  So must TRAP_IF and UNSPEC_VOLATILE operations.

	   Consider for instance a volatile asm that changes the fpu rounding
	   mode.  An insn should not be moved across this even if it only uses
	   pseudo-regs because it might give an incorrectly rounded result.  */
	if (code != ASM_OPERANDS || MEM_VOLATILE_P (x))
	  {
	    int max_reg = max_reg_num ();
	    for (i = 0; i < max_reg; i++)
	      {
		for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
		  add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
		reg_last_uses[i] = 0;

		/* reg_last_sets[r] is now a list of insns */
		for (u = reg_last_sets[i]; u; u = XEXP (u, 1))
		  add_dependence (insn, XEXP (u, 0), 0);
	      }
	    reg_pending_sets_all = 1;

	    flush_pending_lists (insn, 0);
	  }

	/* For all ASM_OPERANDS, we must traverse the vector of input operands.
	   We can not just fall through here since then we would be confused
	   by the ASM_INPUT rtx inside ASM_OPERANDS, which do not indicate
	   traditional asms unlike their normal usage.  */

	if (code == ASM_OPERANDS)
	  {
	    for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
	      sched_analyze_2 (ASM_OPERANDS_INPUT (x, j), insn);
	  }
	break;
      }

    case PRE_DEC:
    case POST_DEC:
    case PRE_INC:
    case POST_INC:
      /* These both read and modify the result.  We must handle them as writes
	 to get proper dependencies for following instructions.  We must handle
	 them as reads to get proper dependencies from this to previous
	 instructions.  Thus we need to pass them to both sched_analyze_1
	 and sched_analyze_2.  We must call sched_analyze_2 first in order
	 to get the proper antecedent for the read.  */
      sched_analyze_2 (XEXP (x, 0), insn);
      sched_analyze_1 (x, insn);
      return;

    default:
      break;
    }

  /* Other cases: walk the insn.  */
  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      if (fmt[i] == 'e')
	sched_analyze_2 (XEXP (x, i), insn);
      else if (fmt[i] == 'E')
	for (j = 0; j < XVECLEN (x, i); j++)
	  sched_analyze_2 (XVECEXP (x, i, j), insn);
    }
}
/* Analyze an INSN with pattern X to find all dependencies.  */

static void
sched_analyze_insn (x, insn, loop_notes)
     rtx x, insn;
     rtx loop_notes;
{
  register RTX_CODE code = GET_CODE (x);
  rtx link;
  int maxreg = max_reg_num ();
  int i;

  if (code == SET || code == CLOBBER)
    sched_analyze_1 (x, insn);
  else if (code == PARALLEL)
    {
      for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
	{
	  code = GET_CODE (XVECEXP (x, 0, i));
	  if (code == SET || code == CLOBBER)
	    sched_analyze_1 (XVECEXP (x, 0, i), insn);
	  else
	    sched_analyze_2 (XVECEXP (x, 0, i), insn);
	}
    }
  else
    sched_analyze_2 (x, insn);

  /* Mark registers CLOBBERED or used by called function.  */
  if (GET_CODE (insn) == CALL_INSN)
    for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
      {
	if (GET_CODE (XEXP (link, 0)) == CLOBBER)
	  sched_analyze_1 (XEXP (link, 0), insn);
	else
	  sched_analyze_2 (XEXP (link, 0), insn);
      }

  /* If there is a {LOOP,EHREGION}_{BEG,END} note in the middle of a basic
     block, then we must be sure that no instructions are scheduled across
     it.  Otherwise, the reg_n_refs info (which depends on loop_depth) would
     become incorrect.  */

  if (loop_notes)
    {
      int max_reg = max_reg_num ();
      rtx u;

      for (i = 0; i < max_reg; i++)
	{
	  for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
	    add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
	  reg_last_uses[i] = 0;

	  /* reg_last_sets[r] is now a list of insns */
	  for (u = reg_last_sets[i]; u; u = XEXP (u, 1))
	    add_dependence (insn, XEXP (u, 0), 0);
	}
      reg_pending_sets_all = 1;

      flush_pending_lists (insn, 0);

      link = loop_notes;
      while (XEXP (link, 1))
	link = XEXP (link, 1);
      XEXP (link, 1) = REG_NOTES (insn);
      REG_NOTES (insn) = loop_notes;
    }

  /* After reload, it is possible for an instruction to have a REG_DEAD note
     for a register that actually dies a few instructions earlier.  For
     example, this can happen with SECONDARY_MEMORY_NEEDED reloads.
     In this case, we must consider the insn to use the register mentioned
     in the REG_DEAD note.  Otherwise, we may accidentally move this insn
     after another insn that sets the register, thus getting obviously invalid
     rtl.  This confuses reorg which believes that REG_DEAD notes are still
     meaningful.

     ??? We would get better code if we fixed reload to put the REG_DEAD
     notes in the right places, but that may not be worth the effort.  */

  if (reload_completed)
    {
      rtx note;

      for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
	if (REG_NOTE_KIND (note) == REG_DEAD)
	  sched_analyze_2 (XEXP (note, 0), insn);
    }

  EXECUTE_IF_SET_IN_REG_SET (reg_pending_sets, 0, i,
			     {
			       /* reg_last_sets[r] is now a list of insns */
			       free_list (&reg_last_sets[i],
					  &unused_insn_list);
			       reg_last_sets[i]
				 = alloc_INSN_LIST (insn, NULL_RTX);
			     });
  CLEAR_REG_SET (reg_pending_sets);

  if (reg_pending_sets_all)
    {
      for (i = 0; i < maxreg; i++)
	{
	  /* reg_last_sets[r] is now a list of insns */
	  free_list (&reg_last_sets[i], &unused_insn_list);
	  reg_last_sets[i] = alloc_INSN_LIST (insn, NULL_RTX);
	}

      reg_pending_sets_all = 0;
    }

  /* Handle function calls and function returns created by the epilogue
     threading code.  */
  if (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN)
    {
      rtx dep_insn;
      rtx prev_dep_insn;

      /* When scheduling instructions, we make sure calls don't lose their
	 accompanying USE insns by depending them one on another in order.

	 Also, we must do the same thing for returns created by the epilogue
	 threading code.  Note this code works only in this special case,
	 because other passes make no guarantee that they will never emit
	 an instruction between a USE and a RETURN.  There is such a guarantee
	 for USE instructions immediately before a call.  */

      prev_dep_insn = insn;
      dep_insn = PREV_INSN (insn);
      while (GET_CODE (dep_insn) == INSN
	     && GET_CODE (PATTERN (dep_insn)) == USE
	     && GET_CODE (XEXP (PATTERN (dep_insn), 0)) == REG)
	{
	  SCHED_GROUP_P (prev_dep_insn) = 1;

	  /* Make a copy of all dependencies on dep_insn, and add to insn.
	     This is so that all of the dependencies will apply to the
	     group.  */

	  for (link = LOG_LINKS (dep_insn); link; link = XEXP (link, 1))
	    add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));

	  prev_dep_insn = dep_insn;
	  dep_insn = PREV_INSN (dep_insn);
	}
    }
}
/* Analyze every insn between HEAD and TAIL inclusive, creating LOG_LINKS
   for every dependency.  */

static void
sched_analyze (head, tail)
     rtx head, tail;
{
  register rtx insn;
  register rtx u;
  rtx loop_notes = 0;

  for (insn = head;; insn = NEXT_INSN (insn))
    {
      if (GET_CODE (insn) == INSN || GET_CODE (insn) == JUMP_INSN)
	{
	  sched_analyze_insn (PATTERN (insn), insn, loop_notes);
	  loop_notes = 0;
	}
      else if (GET_CODE (insn) == CALL_INSN)
	{
	  rtx x;
	  register int i;

	  CANT_MOVE (insn) = 1;

	  /* Any instruction using a hard register which may get clobbered
	     by a call needs to be marked as dependent on this call.
	     This prevents a use of a hard return reg from being moved
	     past a void call (i.e. it does not explicitly set the hard
	     return reg).  */

	  /* If this call is followed by a NOTE_INSN_SETJMP, then assume that
	     all registers, not just hard registers, may be clobbered by this
	     call.  */

	  /* Insn, being a CALL_INSN, magically depends on
	     `last_function_call' already.  */

	  if (NEXT_INSN (insn) && GET_CODE (NEXT_INSN (insn)) == NOTE
	      && NOTE_LINE_NUMBER (NEXT_INSN (insn)) == NOTE_INSN_SETJMP)
	    {
	      int max_reg = max_reg_num ();
	      for (i = 0; i < max_reg; i++)
		{
		  for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
		    add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
		  reg_last_uses[i] = 0;

		  /* reg_last_sets[r] is now a list of insns */
		  for (u = reg_last_sets[i]; u; u = XEXP (u, 1))
		    add_dependence (insn, XEXP (u, 0), 0);
		}
	      reg_pending_sets_all = 1;

	      /* Add a pair of fake REG_NOTEs which we will later
		 convert back into a NOTE_INSN_SETJMP note.  See
		 reemit_notes for why we use a pair of NOTEs.  */

	      REG_NOTES (insn) = alloc_EXPR_LIST (REG_DEAD,
						  GEN_INT (0),
						  REG_NOTES (insn));
	      REG_NOTES (insn) = alloc_EXPR_LIST (REG_DEAD,
						  GEN_INT (NOTE_INSN_SETJMP),
						  REG_NOTES (insn));
	    }
	  else
	    {
	      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
		if (call_used_regs[i] || global_regs[i])
		  {
		    for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
		      add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
		    reg_last_uses[i] = 0;

		    /* reg_last_sets[r] is now a list of insns */
		    for (u = reg_last_sets[i]; u; u = XEXP (u, 1))
		      add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);

		    SET_REGNO_REG_SET (reg_pending_sets, i);
		  }
	    }

	  /* For each insn which shouldn't cross a call, add a dependence
	     between that insn and this call insn.  */
	  x = LOG_LINKS (sched_before_next_call);
	  while (x)
	    {
	      add_dependence (insn, XEXP (x, 0), REG_DEP_ANTI);
	      x = XEXP (x, 1);
	    }
	  LOG_LINKS (sched_before_next_call) = 0;

	  sched_analyze_insn (PATTERN (insn), insn, loop_notes);
	  loop_notes = 0;

	  /* In the absence of interprocedural alias analysis, we must flush
	     all pending reads and writes, and start new dependencies starting
	     from here.  But only flush writes for constant calls (which may
	     be passed a pointer to something we haven't written yet).  */
	  flush_pending_lists (insn, CONST_CALL_P (insn));

	  /* Depend this function call (actually, the user of this
	     function call) on all hard register clobberage.  */

	  /* last_function_call is now a list of insns */
	  free_list (&last_function_call, &unused_insn_list);
	  last_function_call = alloc_INSN_LIST (insn, NULL_RTX);
	}

      /* See comments on reemit_notes as to why we do this.  */
      else if (GET_CODE (insn) == NOTE
	       && (NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_BEG
		   || NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_END
		   || NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_BEG
		   || NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_END
		   || NOTE_LINE_NUMBER (insn) == NOTE_INSN_RANGE_START
		   || NOTE_LINE_NUMBER (insn) == NOTE_INSN_RANGE_END
		   || (NOTE_LINE_NUMBER (insn) == NOTE_INSN_SETJMP
		       && GET_CODE (PREV_INSN (insn)) != CALL_INSN)))
	{
	  loop_notes = alloc_EXPR_LIST (REG_DEAD,
					GEN_INT (NOTE_BLOCK_NUMBER (insn)),
					loop_notes);
	  loop_notes = alloc_EXPR_LIST (REG_DEAD,
					GEN_INT (NOTE_LINE_NUMBER (insn)),
					loop_notes);
	  CONST_CALL_P (loop_notes) = CONST_CALL_P (insn);
	}

      if (insn == tail)
	return;
    }
  abort ();
}
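/* Editor's sketch (never compiled) of the cons-list idiom used for the
   INSN_LIST/EXPR_LIST walks throughout this file: XEXP (link, 0) is the
   car (the listed insn) and XEXP (link, 1) the cdr (the rest of the
   list).  A plain struct stands in for the rtx list nodes.  */
#if 0
#include <stdio.h>

struct link { int insn_uid; struct link *next; };

int
main ()
{
  struct link c = {3, 0}, b = {2, &c}, a = {1, &b};
  struct link *u;

  /* Mirrors:  for (u = list; u; u = XEXP (u, 1))
		 add_dependence (insn, XEXP (u, 0), ...);  */
  for (u = &a; u; u = u->next)
    printf ("depend on insn %d\n", u->insn_uid);
  return 0;
}
#endif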
/* Called when we see a set of a register.  If death is true, then we are
   scanning backwards.  Mark that register as unborn.  If nobody says
   otherwise, that is how things will remain.  If death is false, then we
   are scanning forwards.  Mark that register as being born.  */

static void
sched_note_set (x, death)
     rtx x;
     int death;
{
  register int regno;
  register rtx reg = SET_DEST (x);
  int subreg_p = 0;

  if (reg == 0)
    return;

  while (GET_CODE (reg) == SUBREG || GET_CODE (reg) == STRICT_LOW_PART
	 || GET_CODE (reg) == SIGN_EXTRACT || GET_CODE (reg) == ZERO_EXTRACT)
    {
      /* Must treat modification of just one hardware register of a multi-reg
	 value or just a byte field of a register exactly the same way that
	 mark_set_1 in flow.c does, i.e. anything except a paradoxical subreg
	 does not kill the entire register.  */
      if (GET_CODE (reg) != SUBREG
	  || REG_SIZE (SUBREG_REG (reg)) > REG_SIZE (reg))
	subreg_p = 1;

      reg = SUBREG_REG (reg);
    }

  if (GET_CODE (reg) != REG)
    return;

  /* Global registers are always live, so the code below does not apply
     to them.  */

  regno = REGNO (reg);
  if (regno >= FIRST_PSEUDO_REGISTER || !global_regs[regno])
    {
      if (death)
	{
	  /* If we only set part of the register, then this set does not
	     kill it.  */
	  if (subreg_p)
	    return;

	  /* Try killing this register.  */
	  if (regno < FIRST_PSEUDO_REGISTER)
	    {
	      int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
	      while (--j >= 0)
		CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
	    }
	  else
	    {
	      /* Recompute REG_BASIC_BLOCK as we update all the other
		 dataflow information.  */
	      if (sched_reg_basic_block[regno] == REG_BLOCK_UNKNOWN)
		sched_reg_basic_block[regno] = current_block_num;
	      else if (sched_reg_basic_block[regno] != current_block_num)
		sched_reg_basic_block[regno] = REG_BLOCK_GLOBAL;

	      CLEAR_REGNO_REG_SET (bb_live_regs, regno);
	    }
	}
      else
	{
	  /* Make the register live again.  */
	  if (regno < FIRST_PSEUDO_REGISTER)
	    {
	      int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
	      while (--j >= 0)
		SET_REGNO_REG_SET (bb_live_regs, regno + j);
	    }
	  else
	    SET_REGNO_REG_SET (bb_live_regs, regno);
	}
    }
}
/* Macros and functions for keeping the priority queue sorted, and
   dealing with queueing and dequeueing of instructions.  */

#define SCHED_SORT(READY, N_READY)                                   \
do { if ((N_READY) == 2)                                             \
       swap_sort (READY, N_READY);                                   \
     else if ((N_READY) > 2)                                         \
       qsort (READY, N_READY, sizeof (rtx), rank_for_schedule); }    \
while (0)

/* Returns a positive value if x is preferred; returns a negative value if
   y is preferred.  Should never return 0, since that will make the sort
   unstable.  */

static int
rank_for_schedule (x, y)
     const GENERIC_PTR x;
     const GENERIC_PTR y;
{
  rtx tmp = *(rtx *) y;
  rtx tmp2 = *(rtx *) x;
  rtx link;
  int tmp_class, tmp2_class, depend_count1, depend_count2;
  int val, priority_val, spec_val, prob_val, weight_val;

  /* prefer insn with higher priority */
  priority_val = INSN_PRIORITY (tmp2) - INSN_PRIORITY (tmp);
  if (priority_val)
    return priority_val;

  /* prefer an insn with smaller contribution to registers-pressure */
  if (!reload_completed &&
      (weight_val = INSN_REG_WEIGHT (tmp) - INSN_REG_WEIGHT (tmp2)))
    return (weight_val);

  /* some comparisons make sense in interblock scheduling only */
  if (INSN_BB (tmp) != INSN_BB (tmp2))
    {
      /* prefer an inblock motion over an interblock motion */
      if ((INSN_BB (tmp2) == target_bb) && (INSN_BB (tmp) != target_bb))
	return 1;
      if ((INSN_BB (tmp) == target_bb) && (INSN_BB (tmp2) != target_bb))
	return -1;

      /* prefer a useful motion over a speculative one */
      if ((spec_val = IS_SPECULATIVE_INSN (tmp) - IS_SPECULATIVE_INSN (tmp2)))
	return (spec_val);

      /* prefer a more probable (speculative) insn */
      prob_val = INSN_PROBABILITY (tmp2) - INSN_PROBABILITY (tmp);
      if (prob_val)
	return (prob_val);
    }

  /* compare insns based on their relation to the last-scheduled-insn */
  if (last_scheduled_insn)
    {
      /* Classify the instructions into three classes:
	 1) Data dependent on last scheduled insn.
	 2) Anti/Output dependent on last scheduled insn.
	 3) Independent of last scheduled insn, or has latency of one.
	 Choose the insn from the highest numbered class if different.  */
      link = find_insn_list (tmp, INSN_DEPEND (last_scheduled_insn));
      if (link == 0 || insn_cost (last_scheduled_insn, link, tmp) == 1)
	tmp_class = 3;
      else if (REG_NOTE_KIND (link) == 0)	/* Data dependence.  */
	tmp_class = 1;
      else
	tmp_class = 2;

      link = find_insn_list (tmp2, INSN_DEPEND (last_scheduled_insn));
      if (link == 0 || insn_cost (last_scheduled_insn, link, tmp2) == 1)
	tmp2_class = 3;
      else if (REG_NOTE_KIND (link) == 0)	/* Data dependence.  */
	tmp2_class = 1;
      else
	tmp2_class = 2;

      if ((val = tmp2_class - tmp_class))
	return val;
    }

  /* Prefer the insn which has more later insns that depend on it.
     This gives the scheduler more freedom when scheduling later
     instructions at the expense of added register pressure.  */
  depend_count1 = 0;
  for (link = INSN_DEPEND (tmp); link; link = XEXP (link, 1))
    depend_count1++;

  depend_count2 = 0;
  for (link = INSN_DEPEND (tmp2); link; link = XEXP (link, 1))
    depend_count2++;

  val = depend_count2 - depend_count1;
  if (val)
    return val;

  /* If insns are equally good, sort by INSN_LUID (original insn order),
     so that we make the sort stable.  This minimizes instruction movement,
     thus minimizing sched's effect on debugging and cross-jumping.  */
  return INSN_LUID (tmp) - INSN_LUID (tmp2);
}
/* Resort the array A in which only element at index N may be out of order.  */

HAIFA_INLINE static void
swap_sort (a, n)
     rtx *a;
     int n;
{
  rtx insn = a[n - 1];
  int i = n - 2;

  while (i >= 0 && rank_for_schedule (a + i, &insn) >= 0)
    {
      a[i + 1] = a[i];
      i -= 1;
    }
  a[i + 1] = insn;
}
static int max_priority;

/* Add INSN to the insn queue so that it can be executed at least
   N_CYCLES after the currently executing insn.  Preserve insns
   chain for debugging purposes.  */

HAIFA_INLINE static void
queue_insn (insn, n_cycles)
     rtx insn;
     int n_cycles;
{
  int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
  rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]);
  insn_queue[next_q] = link;
  q_size += 1;

  if (sched_verbose >= 2)
    {
      fprintf (dump, ";;\t\tReady-->Q: insn %d: ", INSN_UID (insn));

      if (INSN_BB (insn) != target_bb)
	fprintf (dump, "(b%d) ", INSN_BLOCK (insn));

      fprintf (dump, "queued for %d cycles.\n", n_cycles);
    }
}
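/* For reference, a sketch of the circular-queue arithmetic behind
   NEXT_Q_AFTER above (the real macros are defined earlier in this file;
   this sketch assumes INSN_QUEUE_SIZE is a power of two):  */
#if 0
#define SKETCH_NEXT_Q(X)          (((X) + 1) & (INSN_QUEUE_SIZE - 1))
#define SKETCH_NEXT_Q_AFTER(X, C) (((X) + (C)) & (INSN_QUEUE_SIZE - 1))
#endif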
/* Return nonzero if PAT is the pattern of an insn which makes a
   register live.  */

HAIFA_INLINE static int
birthing_insn_p (pat)
     rtx pat;
{
  int j;

  if (reload_completed == 1)
    return 0;

  if (GET_CODE (pat) == SET
      && GET_CODE (SET_DEST (pat)) == REG)
    {
      rtx dest = SET_DEST (pat);
      int i = REGNO (dest);

      /* It would be more accurate to use refers_to_regno_p or
	 reg_mentioned_p to determine when the dest is not live before this
	 insn.  */
      if (REGNO_REG_SET_P (bb_live_regs, i))
	return (REG_N_SETS (i) == 1);

      return 0;
    }
  if (GET_CODE (pat) == PARALLEL)
    {
      for (j = 0; j < XVECLEN (pat, 0); j++)
	if (birthing_insn_p (XVECEXP (pat, 0, j)))
	  return 1;
      return 0;
    }
  return 0;
}
/* PREV is an insn that is ready to execute.  Adjust its priority if that
   will help shorten register lifetimes.  */

HAIFA_INLINE static void
adjust_priority (prev)
     rtx prev;
{
  /* Trying to shorten register lives after reload has completed
     is useless and wrong.  It gives inaccurate schedules.  */
  if (reload_completed == 0)
    {
      rtx note;
      int n_deaths = 0;

      /* ??? This code has no effect, because REG_DEAD notes are removed
	 before we ever get here.  */
      for (note = REG_NOTES (prev); note; note = XEXP (note, 1))
	if (REG_NOTE_KIND (note) == REG_DEAD)
	  n_deaths += 1;

      /* Defer scheduling insns which kill registers, since that
	 shortens register lives.  Prefer scheduling insns which
	 make registers live for the same reason.  */
      switch (n_deaths)
	{
	default:
	  INSN_PRIORITY (prev) >>= 3;
	  break;
	case 3:
	  INSN_PRIORITY (prev) >>= 2;
	  break;
	case 2:
	case 1:
	  INSN_PRIORITY (prev) >>= 1;
	  break;
	case 0:
	  break;
	}

      if (birthing_insn_p (PATTERN (prev)))
	{
	  int max = max_priority;

	  if (max > INSN_PRIORITY (prev))
	    INSN_PRIORITY (prev) = max;
	}
    }
#ifdef ADJUST_PRIORITY
  ADJUST_PRIORITY (prev);
#endif
}
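/* Illustrative numbers for the scheme above (matching the shifts in the
   switch): an insn with priority 8 that kills one or two registers drops
   to 4, with three kills to 2, with more to 1; a register-birthing insn
   is instead raised to the highest priority currently pending.  */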
/* INSN is the "currently executing insn".  Launch each insn which was
   waiting on INSN.  READY is a vector of insns which are ready to fire.
   N_READY is the number of elements in READY.  CLOCK is the current
   cycle.  */

static int
schedule_insn (insn, ready, n_ready, clock)
     rtx insn;
     rtx *ready;
     int n_ready;
     int clock;
{
  rtx link;
  int unit;

  unit = insn_unit (insn);

  if (sched_verbose >= 2)
    {
      fprintf (dump, ";;\t\t--> scheduling insn <<<%d>>> on unit ", INSN_UID (insn));
      insn_print_units (insn);
      fprintf (dump, "\n");
    }

  if (sched_verbose && unit == -1)
    visualize_no_unit (insn);

  if (MAX_BLOCKAGE > 1 || issue_rate > 1 || sched_verbose)
    schedule_unit (unit, insn, clock);

  if (INSN_DEPEND (insn) == 0)
    return n_ready;

  /* This is used by the function adjust_priority above.  */
  if (n_ready > 0)
    max_priority = MAX (INSN_PRIORITY (ready[0]), INSN_PRIORITY (insn));
  else
    max_priority = INSN_PRIORITY (insn);

  for (link = INSN_DEPEND (insn); link != 0; link = XEXP (link, 1))
    {
      rtx next = XEXP (link, 0);
      int cost = insn_cost (insn, link, next);

      INSN_TICK (next) = MAX (INSN_TICK (next), clock + cost);

      if ((INSN_DEP_COUNT (next) -= 1) == 0)
	{
	  int effective_cost = INSN_TICK (next) - clock;

	  /* For speculative insns, before inserting to ready/queue,
	     check live, exception-free, and issue-delay.  */
	  if (INSN_BB (next) != target_bb
	      && (!IS_VALID (INSN_BB (next))
		  || (IS_SPECULATIVE_INSN (next)
		      && (insn_issue_delay (next) > 3
			  || !check_live (next, INSN_BB (next))
			  || !is_exception_free (next, INSN_BB (next), target_bb)))))
	    continue;

	  if (sched_verbose >= 2)
	    {
	      fprintf (dump, ";;\t\tdependences resolved: insn %d ", INSN_UID (next));

	      if (current_nr_blocks > 1 && INSN_BB (next) != target_bb)
		fprintf (dump, "/b%d ", INSN_BLOCK (next));

	      if (effective_cost <= 1)
		fprintf (dump, "into ready\n");
	      else
		fprintf (dump, "into queue with cost=%d\n", effective_cost);
	    }

	  /* Adjust the priority of NEXT and either put it on the ready
	     list or queue it.  */
	  adjust_priority (next);
	  if (effective_cost <= 1)
	    ready[n_ready++] = next;
	  else
	    queue_insn (next, effective_cost);
	}
    }

  return n_ready;
}
/* Add a REG_DEAD note for REG to INSN, reusing a REG_DEAD note from the
   dead_notes list.  */

static void
create_reg_dead_note (reg, insn)
     rtx reg, insn;
{
  rtx link;

  /* The number of registers killed after scheduling must be the same as the
     number of registers killed before scheduling.  The number of REG_DEAD
     notes may not be conserved, i.e. two SImode hard register REG_DEAD notes
     might become one DImode hard register REG_DEAD note, but the number of
     registers killed will be conserved.

     We carefully remove REG_DEAD notes from the dead_notes list, so that
     there will be none left at the end.  If we run out early, then there
     is a bug somewhere in flow, combine and/or sched.  */
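
  /* Worked example (illustrative, not from the original sources): suppose
     flow recorded two SImode REG_DEAD notes for hard regs 4 and 5, and
     after scheduling the same deaths appear as one DImode REG_DEAD note
     for reg 4.  The code below then takes both SImode notes off dead_notes
     for the single DImode note created, so the running count of killed
     registers still balances.  */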
  if (dead_notes == 0)
    {
      if (current_nr_blocks <= 1)
	abort ();
      else
	link = alloc_EXPR_LIST (REG_DEAD, NULL_RTX, NULL_RTX);
    }
  else
    {
      /* Number of regs killed by REG.  */
      int regs_killed = (REGNO (reg) >= FIRST_PSEUDO_REGISTER ? 1
			 : HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg)));
      /* Number of regs killed by REG_DEAD notes taken off the list.  */
      int reg_note_regs;

      link = dead_notes;
      reg_note_regs = (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
		       : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
					   GET_MODE (XEXP (link, 0))));
      while (reg_note_regs < regs_killed)
	{
	  link = XEXP (link, 1);

	  /* LINK might be zero if we killed more registers after scheduling
	     than before, and the last hard register we kill is actually
	     multiple hard registers.

	     This is normal for interblock scheduling, so deal with it in
	     that case, else abort.  */
	  if (link == NULL_RTX && current_nr_blocks <= 1)
	    abort ();
	  else if (link == NULL_RTX)
	    link = alloc_EXPR_LIST (REG_DEAD, gen_rtx_REG (word_mode, 0),
				    NULL_RTX);

	  reg_note_regs += (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
			    : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
						GET_MODE (XEXP (link, 0))));
	}
      dead_notes = XEXP (link, 1);

      /* If we took too many regs kills off, put the extra ones back.  */
      while (reg_note_regs > regs_killed)
	{
	  rtx temp_reg, temp_link;

	  temp_reg = gen_rtx_REG (word_mode, 0);
	  temp_link = alloc_EXPR_LIST (REG_DEAD, temp_reg, dead_notes);
	  dead_notes = temp_link;
	  reg_note_regs--;
	}
    }

  XEXP (link, 0) = reg;
  XEXP (link, 1) = REG_NOTES (insn);
  REG_NOTES (insn) = link;
}
/* Subroutine of attach_deaths_insn--handles the recursive search
   through INSN.  If SET_P is true, then x is being modified by the insn.  */

static void
attach_deaths (x, insn, set_p)
     rtx x;
     rtx insn;
     int set_p;
{
  register int i;
  register int j;
  register enum rtx_code code;
  register char *fmt;

  if (x == 0)
    return;

  code = GET_CODE (x);

  switch (code)
    {
    case CONST_INT:
    case CONST_DOUBLE:
    case LABEL_REF:
    case SYMBOL_REF:
    case CONST:
    case CODE_LABEL:
    case PC:
    case CC0:
      /* Get rid of the easy cases first.  */
      return;

    case REG:
      {
	/* If the register dies in this insn, queue that note, and mark
	   this register as needing to die.  */
	/* This code is very similar to mark_used_1 (if set_p is false)
	   and mark_set_1 (if set_p is true) in flow.c.  */

	register int regno;
	int all_needed;
	int some_needed;

	if (set_p)
	  return;

	regno = REGNO (x);
	all_needed = some_needed = REGNO_REG_SET_P (old_live_regs, regno);
	if (regno < FIRST_PSEUDO_REGISTER)
	  {
	    int n;

	    n = HARD_REGNO_NREGS (regno, GET_MODE (x));
	    while (--n > 0)
	      {
		int needed = (REGNO_REG_SET_P (old_live_regs, regno + n));
		some_needed |= needed;
		all_needed &= needed;
	      }
	  }

	/* If it wasn't live before we started, then add a REG_DEAD note.
	   We must check the previous lifetime info not the current info,
	   because we may have to execute this code several times, e.g.
	   once for a clobber (which doesn't add a note) and later
	   for a use (which does add a note).

	   Always make the register live.  We must do this even if it was
	   live before, because this may be an insn which sets and uses
	   the same register, in which case the register has already been
	   killed, so we must make it live again.

	   Global registers are always live, and should never have a REG_DEAD
	   note added for them, so none of the code below applies to them.  */

	if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
	  {
	    /* Never add REG_DEAD notes for the FRAME_POINTER_REGNUM or the
	       STACK_POINTER_REGNUM, since these are always considered to be
	       live.  Similarly for ARG_POINTER_REGNUM if it is fixed.  */
	    if (regno != FRAME_POINTER_REGNUM
#if HARD_FRAME_POINTER_REGNUM != FRAME_POINTER_REGNUM
		&& ! (regno == HARD_FRAME_POINTER_REGNUM)
#endif
#if ARG_POINTER_REGNUM != FRAME_POINTER_REGNUM
		&& ! (regno == ARG_POINTER_REGNUM && fixed_regs[regno])
#endif
		&& regno != STACK_POINTER_REGNUM)
	      {
		if (! all_needed && ! dead_or_set_p (insn, x))
		  {
		    /* Check for the case where the register dying partially
		       overlaps the register set by this insn.  */
		    if (regno < FIRST_PSEUDO_REGISTER
			&& HARD_REGNO_NREGS (regno, GET_MODE (x)) > 1)
		      {
			int n = HARD_REGNO_NREGS (regno, GET_MODE (x));
			while (--n >= 0)
			  some_needed |= dead_or_set_regno_p (insn, regno + n);
		      }

		    /* If none of the words in X is needed, make a REG_DEAD
		       note.  Otherwise, we must make partial REG_DEAD
		       notes.  */
		    if (! some_needed)
		      create_reg_dead_note (x, insn);
		    else
		      {
			/* Don't make a REG_DEAD note for a part of a
			   register that is set in the insn.  */
			for (i = HARD_REGNO_NREGS (regno, GET_MODE (x)) - 1;
			     i >= 0; i--)
			  if (! REGNO_REG_SET_P (old_live_regs, regno + i)
			      && ! dead_or_set_regno_p (insn, regno + i))
			    create_reg_dead_note (gen_rtx_REG (reg_raw_mode[regno + i],
							       regno + i),
						  insn);
		      }
		  }
	      }

	    if (regno < FIRST_PSEUDO_REGISTER)
	      {
		int j = HARD_REGNO_NREGS (regno, GET_MODE (x));
		while (--j >= 0)
		  {
		    SET_REGNO_REG_SET (bb_live_regs, regno + j);
		  }
	      }
	    else
	      {
		/* Recompute REG_BASIC_BLOCK as we update all the other
		   dataflow information.  */
		if (sched_reg_basic_block[regno] == REG_BLOCK_UNKNOWN)
		  sched_reg_basic_block[regno] = current_block_num;
		else if (sched_reg_basic_block[regno] != current_block_num)
		  sched_reg_basic_block[regno] = REG_BLOCK_GLOBAL;

		SET_REGNO_REG_SET (bb_live_regs, regno);
	      }
	  }
	return;
      }

    case MEM:
      /* Handle tail-recursive case.  */
      attach_deaths (XEXP (x, 0), insn, 0);
      return;

    case SUBREG:
      attach_deaths (SUBREG_REG (x), insn,
		     set_p && ((GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)))
				<= UNITS_PER_WORD)
			       || (GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)))
				   == GET_MODE_SIZE (GET_MODE ((x))))));
      return;

    case STRICT_LOW_PART:
      attach_deaths (XEXP (x, 0), insn, 0);
      return;

    case ZERO_EXTRACT:
    case SIGN_EXTRACT:
      attach_deaths (XEXP (x, 0), insn, 0);
      attach_deaths (XEXP (x, 1), insn, 0);
      attach_deaths (XEXP (x, 2), insn, 0);
      return;

    default:
      /* Other cases: walk the insn.  */
      fmt = GET_RTX_FORMAT (code);
      for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
	{
	  if (fmt[i] == 'e')
	    attach_deaths (XEXP (x, i), insn, 0);
	  else if (fmt[i] == 'E')
	    for (j = 0; j < XVECLEN (x, i); j++)
	      attach_deaths (XVECEXP (x, i, j), insn, 0);
	}
    }
}
/* After INSN has executed, add register death notes for each register
   that is dead after INSN.  */

static void
attach_deaths_insn (insn)
     rtx insn;
{
  rtx x = PATTERN (insn);
  register RTX_CODE code = GET_CODE (x);
  rtx link;

  if (code == SET)
    {
      attach_deaths (SET_SRC (x), insn, 0);

      /* A register might die here even if it is the destination, e.g.
	 it is the target of a volatile read and is otherwise unused.
	 Hence we must always call attach_deaths for the SET_DEST.  */
      attach_deaths (SET_DEST (x), insn, 1);
    }
  else if (code == PARALLEL)
    {
      register int i;
      for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
	{
	  code = GET_CODE (XVECEXP (x, 0, i));
	  if (code == SET)
	    {
	      attach_deaths (SET_SRC (XVECEXP (x, 0, i)), insn, 0);

	      attach_deaths (SET_DEST (XVECEXP (x, 0, i)), insn, 1);
	    }
	  /* Flow does not add REG_DEAD notes to registers that die in
	     clobbers, so we can't either.  */
	  else if (code != CLOBBER)
	    attach_deaths (XVECEXP (x, 0, i), insn, 0);
	}
    }
  /* If this is a CLOBBER, only add REG_DEAD notes to registers inside a
     MEM being clobbered, just like flow.  */
  else if (code == CLOBBER && GET_CODE (XEXP (x, 0)) == MEM)
    attach_deaths (XEXP (XEXP (x, 0), 0), insn, 0);
  /* Otherwise don't add a death note to things being clobbered.  */
  else if (code != CLOBBER)
    attach_deaths (x, insn, 0);

  /* Make death notes for things used in the called function.  */
  if (GET_CODE (insn) == CALL_INSN)
    for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
      attach_deaths (XEXP (XEXP (link, 0), 0), insn,
		     GET_CODE (XEXP (link, 0)) == CLOBBER);
}
/* Functions for handling notes.  */

/* Delete notes beginning with INSN and put them in the chain
   of notes ended by NOTE_LIST.
   Returns the insn following the notes.  */

static rtx
unlink_other_notes (insn, tail)
     rtx insn, tail;
{
  rtx prev = PREV_INSN (insn);

  while (insn != tail && GET_CODE (insn) == NOTE)
    {
      rtx next = NEXT_INSN (insn);
      /* Delete the note from its current position.  */
      if (prev)
	NEXT_INSN (prev) = next;
      if (next)
	PREV_INSN (next) = prev;

      /* Don't save away NOTE_INSN_SETJMPs, because they must remain
	 immediately after the call they follow.  We use a fake
	 (REG_DEAD (const_int -1)) note to remember them.
	 Likewise with NOTE_INSN_{LOOP,EHREGION}_{BEG, END}.  */
      if (NOTE_LINE_NUMBER (insn) != NOTE_INSN_SETJMP
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_BEG
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_END
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_RANGE_START
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_RANGE_END
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_BEG
	  && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_END)
	{
	  /* Insert the note at the end of the notes list.  */
	  PREV_INSN (insn) = note_list;
	  if (note_list)
	    NEXT_INSN (note_list) = insn;
	  note_list = insn;
	}

      insn = next;
    }
  return insn;
}
/* Delete line notes beginning with INSN.  Record line-number notes so
   they can be reused.  Returns the insn following the notes.  */

static rtx
unlink_line_notes (insn, tail)
     rtx insn, tail;
{
  rtx prev = PREV_INSN (insn);

  while (insn != tail && GET_CODE (insn) == NOTE)
    {
      rtx next = NEXT_INSN (insn);

      if (write_symbols != NO_DEBUG && NOTE_LINE_NUMBER (insn) > 0)
	{
	  /* Delete the note from its current position.  */
	  if (prev)
	    NEXT_INSN (prev) = next;
	  if (next)
	    PREV_INSN (next) = prev;

	  /* Record line-number notes so they can be reused.  */
	  LINE_NOTE (insn) = insn;
	}
      else
	prev = insn;

      insn = next;
    }
  return insn;
}
/* Return the head and tail pointers of BB.  */

HAIFA_INLINE static void
get_block_head_tail (bb, headp, tailp)
     int bb;
     rtx *headp;
     rtx *tailp;
{
  rtx head;
  rtx tail;
  int b;

  b = BB_TO_BLOCK (bb);

  /* HEAD and TAIL delimit the basic block being scheduled.  */
  head = basic_block_head[b];
  tail = basic_block_end[b];

  /* Don't include any notes or labels at the beginning of the
     basic block, or notes at the ends of basic blocks.  */
  while (head != tail)
    {
      if (GET_CODE (head) == NOTE)
	head = NEXT_INSN (head);
      else if (GET_CODE (tail) == NOTE)
	tail = PREV_INSN (tail);
      else if (GET_CODE (head) == CODE_LABEL)
	head = NEXT_INSN (head);
      else
	break;
    }

  *headp = head;
  *tailp = tail;
}
/* Delete line notes from bb.  Save them so they can be later restored
   (in restore_line_notes ()).  */

static void
rm_line_notes (bb)
     int bb;
{
  rtx next_tail;
  rtx head;
  rtx tail;
  rtx insn;

  get_block_head_tail (bb, &head, &tail);

  if (head == tail
      && (GET_RTX_CLASS (GET_CODE (head)) != 'i'))
    return;

  next_tail = NEXT_INSN (tail);
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      rtx prev;

      /* Farm out notes, and maybe save them in NOTE_LIST.
	 This is needed to keep the debugger from
	 getting completely deranged.  */
      if (GET_CODE (insn) == NOTE)
	{
	  prev = insn;
	  insn = unlink_line_notes (insn, next_tail);

	  if (prev == tail)
	    abort ();
	  if (prev == head)
	    abort ();
	  if (insn == next_tail)
	    abort ();
	}
    }
}
/* Save line number notes for each insn in bb.  */

static void
save_line_notes (bb)
     int bb;
{
  rtx head, tail;
  rtx next_tail;
  rtx insn;

  /* We must use the true line number for the first insn in the block
     that was computed and saved at the start of this pass.  We can't
     use the current line number, because scheduling of the previous
     block may have changed the current line number.  */

  rtx line = line_note_head[BB_TO_BLOCK (bb)];

  get_block_head_tail (bb, &head, &tail);
  next_tail = NEXT_INSN (tail);

  for (insn = basic_block_head[BB_TO_BLOCK (bb)];
       insn != next_tail;
       insn = NEXT_INSN (insn))
    if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
      line = insn;
    else
      LINE_NOTE (insn) = line;
}
/* After bb was scheduled, insert line notes into the insns list.  */

static void
restore_line_notes (bb)
     int bb;
{
  rtx line, note, prev, new;
  int added_notes = 0;
  int b;
  rtx head, next_tail, insn;

  b = BB_TO_BLOCK (bb);

  head = basic_block_head[b];
  next_tail = NEXT_INSN (basic_block_end[b]);

  /* Determine the current line-number.  We want to know the current
     line number of the first insn of the block here, in case it is
     different from the true line number that was saved earlier.  If
     different, then we need a line number note before the first insn
     of this block.  If it happens to be the same, then we don't want to
     emit another line number note here.  */
  for (line = head; line; line = PREV_INSN (line))
    if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
      break;

  /* Walk the insns keeping track of the current line-number and inserting
     the line-number notes as needed.  */
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
      line = insn;
    /* This used to emit line number notes before every non-deleted note.
       However, this confuses a debugger, because line notes not separated
       by real instructions all end up at the same address.  I can find no
       use for line number notes before other notes, so none are emitted.  */
    else if (GET_CODE (insn) != NOTE
	     && (note = LINE_NOTE (insn)) != 0
	     && note != line
	     && (line == 0
		 || NOTE_LINE_NUMBER (note) != NOTE_LINE_NUMBER (line)
		 || NOTE_SOURCE_FILE (note) != NOTE_SOURCE_FILE (line)))
      {
	line = note;
	prev = PREV_INSN (insn);
	if (LINE_NOTE (note))
	  {
	    /* Re-use the original line-number note.  */
	    LINE_NOTE (note) = 0;
	    PREV_INSN (note) = prev;
	    NEXT_INSN (prev) = note;
	    PREV_INSN (insn) = note;
	    NEXT_INSN (note) = insn;
	  }
	else
	  {
	    added_notes++;
	    new = emit_note_after (NOTE_LINE_NUMBER (note), prev);
	    NOTE_SOURCE_FILE (new) = NOTE_SOURCE_FILE (note);
	    RTX_INTEGRATED_P (new) = RTX_INTEGRATED_P (note);
	  }
      }
  if (sched_verbose && added_notes)
    fprintf (dump, ";; added %d line-number notes\n", added_notes);
}
/* After scheduling the function, delete redundant line notes from the
   insns list.  */

static void
rm_redundant_line_notes ()
{
  rtx line = 0;
  rtx insn = get_insns ();
  int active_insn = 0;
  int notes = 0;

  /* Walk the insns deleting redundant line-number notes.  Many of these
     are already present.  The remainder tend to occur at basic
     block boundaries.  */
  for (insn = get_last_insn (); insn; insn = PREV_INSN (insn))
    if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
      {
	/* If there are no active insns following, INSN is redundant.  */
	if (active_insn == 0)
	  {
	    notes++;
	    NOTE_SOURCE_FILE (insn) = 0;
	    NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
	  }
	/* If the line number is unchanged, LINE is redundant.  */
	else if (line
		 && NOTE_LINE_NUMBER (line) == NOTE_LINE_NUMBER (insn)
		 && NOTE_SOURCE_FILE (line) == NOTE_SOURCE_FILE (insn))
	  {
	    notes++;
	    NOTE_SOURCE_FILE (line) = 0;
	    NOTE_LINE_NUMBER (line) = NOTE_INSN_DELETED;
	    line = insn;
	  }
	else
	  line = insn;
	active_insn = 0;
      }
    else if (!((GET_CODE (insn) == NOTE
		&& NOTE_LINE_NUMBER (insn) == NOTE_INSN_DELETED)
	       || (GET_CODE (insn) == INSN
		   && (GET_CODE (PATTERN (insn)) == USE
		       || GET_CODE (PATTERN (insn)) == CLOBBER))))
      active_insn++;

  if (sched_verbose && notes)
    fprintf (dump, ";; deleted %d line-number notes\n", notes);
}
/* Delete notes between head and tail and put them in the chain
   of notes ended by NOTE_LIST.  */

static void
rm_other_notes (head, tail)
     rtx head;
     rtx tail;
{
  rtx next_tail;
  rtx insn;

  if (head == tail
      && (GET_RTX_CLASS (GET_CODE (head)) != 'i'))
    return;

  next_tail = NEXT_INSN (tail);
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      rtx prev;

      /* Farm out notes, and maybe save them in NOTE_LIST.
	 This is needed to keep the debugger from
	 getting completely deranged.  */
      if (GET_CODE (insn) == NOTE)
	{
	  prev = insn;

	  insn = unlink_other_notes (insn, next_tail);

	  if (prev == tail)
	    abort ();
	  if (prev == head)
	    abort ();
	  if (insn == next_tail)
	    abort ();
	}
    }
}
/* Constructor for `sometimes' data structure.  */

static int
new_sometimes_live (regs_sometimes_live, regno, sometimes_max)
     struct sometimes *regs_sometimes_live;
     int regno;
     int sometimes_max;
{
  register struct sometimes *p;

  /* There should never be a register greater than max_regno here.  If there
     is, it means that a define_split has created a new pseudo reg.  This
     is not allowed, since there will not be flow info available for any
     new register, so catch the error here.  */
  if (regno >= max_regno)
    abort ();

  p = &regs_sometimes_live[sometimes_max];
  p->regno = regno;
  p->live_length = 0;
  p->calls_crossed = 0;
  sometimes_max++;
  return sometimes_max;
}
/* Count lengths of all regs we are currently tracking,
   and find new registers no longer live.  */

static void
finish_sometimes_live (regs_sometimes_live, sometimes_max)
     struct sometimes *regs_sometimes_live;
     int sometimes_max;
{
  int i;

  for (i = 0; i < sometimes_max; i++)
    {
      register struct sometimes *p = &regs_sometimes_live[i];
      int regno = p->regno;

      sched_reg_live_length[regno] += p->live_length;
      sched_reg_n_calls_crossed[regno] += p->calls_crossed;
    }
}
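/* For reference, an illustrative sketch of the `sometimes' record that the
   two functions above manipulate (the real declaration appears earlier in
   this file; the field names follow the uses above):  */
#if 0
struct sometimes
{
  int regno;			/* register being tracked */
  int live_length;		/* cycles this lifetime segment has lasted */
  int calls_crossed;		/* calls this lifetime segment has crossed */
};
#endif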
/* Functions for computation of registers live/usage info.  */

/* It is assumed that prior to scheduling basic_block_live_at_start (b)
   contains the registers that are alive at the entry to b.

   Two passes follow: The first pass is performed before the scheduling
   of a region.  It scans each block of the region forward, computing
   the set of registers alive at the end of the basic block and
   discarding REG_DEAD notes (done by find_pre_sched_live ()).

   The second pass is invoked after scheduling all region blocks.
   It scans each block of the region backward, a block being traversed
   only after its successors in the region.  When the set of registers
   live at the end of a basic block may be changed by the scheduling
   (this may happen for a multiple-block region), it is computed as
   the union of the registers live at the start of its successors.
   The last-use information is updated by inserting REG_DEAD notes.
   (Done by find_post_sched_live ().)  */
/* Scan all the insns to be scheduled, removing register death notes.
   Register death notes end up in DEAD_NOTES.
   Recreate the register life information for the end of this basic
   block.  */

static void
find_pre_sched_live (bb)
     int bb;
{
  rtx insn, next_tail, head, tail;
  int b = BB_TO_BLOCK (bb);

  get_block_head_tail (bb, &head, &tail);
  COPY_REG_SET (bb_live_regs, basic_block_live_at_start[b]);
  next_tail = NEXT_INSN (tail);

  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      rtx prev, next, link;
      int reg_weight = 0;

      /* Handle register life information.  */
      if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
	{
	  /* See if the register gets born here.  */
	  /* We must check for registers being born before we check for
	     registers dying.  It is possible for a register to be born and
	     die in the same insn, e.g. reading from a volatile memory
	     location into an otherwise unused register.  Such a register
	     must be marked as dead after this insn.  */
	  if (GET_CODE (PATTERN (insn)) == SET
	      || GET_CODE (PATTERN (insn)) == CLOBBER)
	    {
	      sched_note_set (PATTERN (insn), 0);
	      reg_weight++;
	    }
	  else if (GET_CODE (PATTERN (insn)) == PARALLEL)
	    {
	      int j;
	      for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
		if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
		    || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
		  {
		    sched_note_set (XVECEXP (PATTERN (insn), 0, j), 0);
		    reg_weight++;
		  }

	      /* ??? This code is obsolete and should be deleted.  It
		 is harmless though, so we will leave it in for now.  */
	      for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
		if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
		  sched_note_set (XVECEXP (PATTERN (insn), 0, j), 0);
	    }

	  /* Each call clobbers (makes live) all call-clobbered regs
	     that are not global or fixed.  Note that the function-value
	     reg is a call_clobbered reg.  */
	  if (GET_CODE (insn) == CALL_INSN)
	    {
	      int j;
	      for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
		if (call_used_regs[j] && !global_regs[j]
		    && ! fixed_regs[j])
		  {
		    SET_REGNO_REG_SET (bb_live_regs, j);
		  }
	    }

	  /* Need to know what registers this insn kills.  */
	  for (prev = 0, link = REG_NOTES (insn); link; link = next)
	    {
	      next = XEXP (link, 1);
	      if ((REG_NOTE_KIND (link) == REG_DEAD
		   || REG_NOTE_KIND (link) == REG_UNUSED)
		  /* Verify that the REG_NOTE has a valid value.  */
		  && GET_CODE (XEXP (link, 0)) == REG)
		{
		  register int regno = REGNO (XEXP (link, 0));

		  reg_weight--;

		  /* Only unlink REG_DEAD notes; leave REG_UNUSED notes
		     alone.  */
		  if (REG_NOTE_KIND (link) == REG_DEAD)
		    {
		      if (prev)
			XEXP (prev, 1) = next;
		      else
			REG_NOTES (insn) = next;
		      XEXP (link, 1) = dead_notes;
		      dead_notes = link;
		    }
		  else
		    prev = link;

		  if (regno < FIRST_PSEUDO_REGISTER)
		    {
		      int j = HARD_REGNO_NREGS (regno,
						GET_MODE (XEXP (link, 0)));
		      while (--j >= 0)
			{
			  CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
			}
		    }
		  else
		    {
		      CLEAR_REGNO_REG_SET (bb_live_regs, regno);
		    }
		}
	      else
		prev = link;
	    }
	}

      INSN_REG_WEIGHT (insn) = reg_weight;
    }
}
/* Update register life and usage information for block bb
   after scheduling.  Put register dead notes back in the code.  */

static void
find_post_sched_live (bb)
     int bb;
{
  int sometimes_max;
  int j, i;
  int b;
  rtx insn;
  rtx head, tail, prev_head, next_tail;

  register struct sometimes *regs_sometimes_live;

  b = BB_TO_BLOCK (bb);

  /* Compute live regs at the end of bb as a function of its successors.  */
  if (current_nr_blocks > 1)
    {
      int e;
      int first_edge;
      int b_succ;

      first_edge = e = OUT_EDGES (b);
      CLEAR_REG_SET (bb_live_regs);

      if (e)
	do
	  {
	    b_succ = TO_BLOCK (e);
	    IOR_REG_SET (bb_live_regs, basic_block_live_at_start[b_succ]);
	    e = NEXT_OUT (e);
	  }
	while (e != first_edge);
    }

  get_block_head_tail (bb, &head, &tail);
  next_tail = NEXT_INSN (tail);
  prev_head = PREV_INSN (head);

  EXECUTE_IF_SET_IN_REG_SET (bb_live_regs, FIRST_PSEUDO_REGISTER, i,
			     {
			       sched_reg_basic_block[i] = REG_BLOCK_GLOBAL;
			     });

  /* If the block is empty, the same regs are alive at its end and its
     start.  Since this is not guaranteed after interblock scheduling, make
     sure they are truly identical.  */
  if (NEXT_INSN (prev_head) == tail
      && (GET_RTX_CLASS (GET_CODE (tail)) != 'i'))
    {
      if (current_nr_blocks > 1)
	COPY_REG_SET (basic_block_live_at_start[b], bb_live_regs);

      return;
    }

  b = BB_TO_BLOCK (bb);
  current_block_num = b;

  /* Keep track of register lives.  */
  old_live_regs = ALLOCA_REG_SET ();
  regs_sometimes_live
    = (struct sometimes *) alloca (max_regno * sizeof (struct sometimes));
  sometimes_max = 0;

  /* Initiate "sometimes" data, starting with registers live at end.  */
  COPY_REG_SET (old_live_regs, bb_live_regs);
  EXECUTE_IF_SET_IN_REG_SET (bb_live_regs, 0, j,
			     {
			       sometimes_max
				 = new_sometimes_live (regs_sometimes_live,
						       j, sometimes_max);
			     });

  /* Scan insns back, computing regs live info.  */
  for (insn = tail; insn != prev_head; insn = PREV_INSN (insn))
    {
      /* First we kill registers set by this insn, and then we
	 make registers used by this insn live.  This is the opposite
	 order used above because we are traversing the instructions
	 backwards.  */

      /* Strictly speaking, we should scan REG_UNUSED notes and make
	 every register mentioned there live, however, we will just
	 kill them again immediately below, so there doesn't seem to
	 be any reason why we bother to do this.  */

      /* See if this is the last notice we must take of a register.  */
      if (GET_RTX_CLASS (GET_CODE (insn)) != 'i')
	continue;

      if (GET_CODE (PATTERN (insn)) == SET
	  || GET_CODE (PATTERN (insn)) == CLOBBER)
	sched_note_set (PATTERN (insn), 1);
      else if (GET_CODE (PATTERN (insn)) == PARALLEL)
	{
	  for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
	    if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
		|| GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
	      sched_note_set (XVECEXP (PATTERN (insn), 0, j), 1);
	}

      /* This code keeps life analysis information up to date.  */
      if (GET_CODE (insn) == CALL_INSN)
	{
	  register struct sometimes *p;

	  /* A call kills all call used registers that are not
	     global or fixed, except for those mentioned in the call
	     pattern which will be made live again later.  */
	  for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
	    if (call_used_regs[i] && ! global_regs[i]
		&& ! fixed_regs[i])
	      {
		CLEAR_REGNO_REG_SET (bb_live_regs, i);
	      }

	  /* Regs live at the time of a call instruction must not
	     go in a register clobbered by calls.  Record this for
	     all regs now live.  Note that insns which are born or
	     die in a call do not cross a call, so this must be done
	     after the killings (above) and before the births
	     (below).  */
	  p = regs_sometimes_live;
	  for (i = 0; i < sometimes_max; i++, p++)
	    if (REGNO_REG_SET_P (bb_live_regs, p->regno))
	      p->calls_crossed += 1;
	}

      /* Make every register used live, and add REG_DEAD notes for
	 registers which were not live before we started.  */
      attach_deaths_insn (insn);

      /* Find registers now made live by that instruction.  */
      EXECUTE_IF_AND_COMPL_IN_REG_SET (bb_live_regs, old_live_regs, 0, j,
				       {
					 sometimes_max
					   = new_sometimes_live (regs_sometimes_live,
								 j, sometimes_max);
				       });
      IOR_REG_SET (old_live_regs, bb_live_regs);

      /* Count lengths of all regs we are worrying about now,
	 and handle registers no longer live.  */

      for (i = 0; i < sometimes_max; i++)
	{
	  register struct sometimes *p = &regs_sometimes_live[i];
	  int regno = p->regno;

	  p->live_length += 1;

	  if (!REGNO_REG_SET_P (bb_live_regs, regno))
	    {
	      /* This is the end of one of this register's lifetime
		 segments.  Save the lifetime info collected so far,
		 and clear its bit in the old_live_regs entry.  */
	      sched_reg_live_length[regno] += p->live_length;
	      sched_reg_n_calls_crossed[regno] += p->calls_crossed;
	      CLEAR_REGNO_REG_SET (old_live_regs, p->regno);

	      /* Delete the reg_sometimes_live entry for this reg by
		 copying the last entry over top of it.  */
	      *p = regs_sometimes_live[--sometimes_max];
	      /* ...and decrement i so that this newly copied entry
		 will be processed.  */
	      i--;
	    }
	}
    }

  finish_sometimes_live (regs_sometimes_live, sometimes_max);

  /* In interblock scheduling, basic_block_live_at_start may have changed.  */
  if (current_nr_blocks > 1)
    COPY_REG_SET (basic_block_live_at_start[b], bb_live_regs);


  FREE_REG_SET (old_live_regs);
}				/* find_post_sched_live */
/* After scheduling the subroutine, restore information about uses of
   registers.  */

static void
update_reg_usage ()
{
  int regno;

  if (n_basic_blocks > 0)
    EXECUTE_IF_SET_IN_REG_SET (bb_live_regs, FIRST_PSEUDO_REGISTER, regno,
			       {
				 sched_reg_basic_block[regno]
				   = REG_BLOCK_GLOBAL;
			       });

  for (regno = 0; regno < max_regno; regno++)
    if (sched_reg_live_length[regno])
      {
	if (sched_verbose)
	  {
	    if (REG_LIVE_LENGTH (regno) > sched_reg_live_length[regno])
	      fprintf (dump,
		       ";; register %d life shortened from %d to %d\n",
		       regno, REG_LIVE_LENGTH (regno),
		       sched_reg_live_length[regno]);
	    /* Negative values are special; don't overwrite the current
	       reg_live_length value if it is negative.  */
	    else if (REG_LIVE_LENGTH (regno) < sched_reg_live_length[regno]
		     && REG_LIVE_LENGTH (regno) >= 0)
	      fprintf (dump,
		       ";; register %d life extended from %d to %d\n",
		       regno, REG_LIVE_LENGTH (regno),
		       sched_reg_live_length[regno]);

	    if (!REG_N_CALLS_CROSSED (regno)
		&& sched_reg_n_calls_crossed[regno])
	      fprintf (dump,
		       ";; register %d now crosses calls\n", regno);
	    else if (REG_N_CALLS_CROSSED (regno)
		     && !sched_reg_n_calls_crossed[regno]
		     && REG_BASIC_BLOCK (regno) != REG_BLOCK_GLOBAL)
	      fprintf (dump,
		       ";; register %d no longer crosses calls\n", regno);

	    if (REG_BASIC_BLOCK (regno) != sched_reg_basic_block[regno]
		&& sched_reg_basic_block[regno] != REG_BLOCK_UNKNOWN
		&& REG_BASIC_BLOCK(regno) != REG_BLOCK_UNKNOWN)
	      fprintf (dump,
		       ";; register %d changed basic block from %d to %d\n",
		       regno, REG_BASIC_BLOCK(regno),
		       sched_reg_basic_block[regno]);
	  }

	/* Negative values are special; don't overwrite the current
	   reg_live_length value if it is negative.  */
	if (REG_LIVE_LENGTH (regno) >= 0)
	  REG_LIVE_LENGTH (regno) = sched_reg_live_length[regno];

	if (sched_reg_basic_block[regno] != REG_BLOCK_UNKNOWN
	    && REG_BASIC_BLOCK(regno) != REG_BLOCK_UNKNOWN)
	  REG_BASIC_BLOCK(regno) = sched_reg_basic_block[regno];

	/* We can't change the value of reg_n_calls_crossed to zero for
	   pseudos which are live in more than one block.

	   This is because combine might have made an optimization which
	   invalidated basic_block_live_at_start and reg_n_calls_crossed,
	   but it does not update them.  If we update reg_n_calls_crossed
	   here, the two variables are now inconsistent, and this might
	   confuse the caller-save code into saving a register that doesn't
	   need to be saved.  This is only a problem when we zero calls
	   crossed for a pseudo live in multiple basic blocks.

	   Alternatively, we could try to correctly update basic block live
	   at start here in sched, but that seems complicated.

	   Note: it is possible that a global register became local, as result
	   of interblock motion, but will remain marked as a global register.  */
	if (sched_reg_n_calls_crossed[regno]
	    || REG_BASIC_BLOCK (regno) != REG_BLOCK_GLOBAL)
	  REG_N_CALLS_CROSSED (regno) = sched_reg_n_calls_crossed[regno];
      }
}
/* Scheduling clock, modified in schedule_block () and queue_to_ready ().  */
static int clock_var;

/* Move insns that became ready to fire from queue to ready list.  */

static int
queue_to_ready (ready, n_ready)
     rtx ready[];
     int n_ready;
{
  rtx insn;
  rtx link;

  q_ptr = NEXT_Q (q_ptr);

  /* Add all pending insns that can be scheduled without stalls to the
     ready list.  */
  for (link = insn_queue[q_ptr]; link; link = XEXP (link, 1))
    {
      insn = XEXP (link, 0);
      q_size -= 1;

      if (sched_verbose >= 2)
	fprintf (dump, ";;\t\tQ-->Ready: insn %d: ", INSN_UID (insn));

      if (sched_verbose >= 2 && INSN_BB (insn) != target_bb)
	fprintf (dump, "(b%d) ", INSN_BLOCK (insn));

      ready[n_ready++] = insn;
      if (sched_verbose >= 2)
	fprintf (dump, "moving to ready without stalls\n");
    }
  insn_queue[q_ptr] = 0;

  /* If there are no ready insns, stall until one is ready and add all
     of the pending insns at that point to the ready list.  */
  if (n_ready == 0)
    {
      register int stalls;

      for (stalls = 1; stalls < INSN_QUEUE_SIZE; stalls++)
	{
	  if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
	    {
	      for (; link; link = XEXP (link, 1))
		{
		  insn = XEXP (link, 0);
		  q_size -= 1;

		  if (sched_verbose >= 2)
		    fprintf (dump, ";;\t\tQ-->Ready: insn %d: ", INSN_UID (insn));

		  if (sched_verbose >= 2 && INSN_BB (insn) != target_bb)
		    fprintf (dump, "(b%d) ", INSN_BLOCK (insn));

		  ready[n_ready++] = insn;
		  if (sched_verbose >= 2)
		    fprintf (dump, "moving to ready with %d stalls\n", stalls);
		}
	      insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = 0;

	      if (n_ready)
		break;
	    }
	}

      if (sched_verbose && stalls)
	visualize_stall_cycles (BB_TO_BLOCK (target_bb), stalls);
      q_ptr = NEXT_Q_AFTER (q_ptr, stalls);
      clock_var += stalls;
    }
  return n_ready;
}
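/* Worked example for the stall path above (illustrative): with an empty
   ready list and the earliest queued insns two buckets away, the loop
   finds them at stalls == 2, flushes that bucket into READY, advances
   q_ptr two buckets, and charges the two stall cycles to clock_var.  */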
/* Print the ready list for debugging purposes.  Callable from debugger.  */

static void
debug_ready_list (ready, n_ready)
     rtx ready[];
     int n_ready;
{
  int i;

  for (i = 0; i < n_ready; i++)
    {
      fprintf (dump, "  %d", INSN_UID (ready[i]));
      if (current_nr_blocks > 1 && INSN_BB (ready[i]) != target_bb)
	fprintf (dump, "/b%d", INSN_BLOCK (ready[i]));
    }
  fprintf (dump, "\n");
}
/* Print names of units on which insn can/should execute, for debugging.  */

static void
insn_print_units (insn)
     rtx insn;
{
  int i;
  int unit = insn_unit (insn);

  if (unit == -1)
    fprintf (dump, "none");
  else if (unit >= 0)
    fprintf (dump, "%s", function_units[unit].name);
  else
    {
      fprintf (dump, "[");
      for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
	if (unit & 1)
	  {
	    fprintf (dump, "%s", function_units[i].name);
	    if (unit != 1)
	      fprintf (dump, " ");
	  }
      fprintf (dump, "]");
    }
}
/* MAX_VISUAL_LINES is the maximum number of lines in the visualization
   table of a basic block.  If more lines are needed, the table is split
   in two.
   n_visual_lines is the number of lines printed so far for a block.
   visual_tbl contains the block visualization info.
   vis_no_unit holds insns in a cycle that are not mapped to any unit.  */
#define MAX_VISUAL_LINES 100
static int n_visual_lines;
static char *visual_tbl;
static int n_vis_no_unit;
static rtx vis_no_unit[10];

/* Finds units that are in use in this function.  Required only
   for visualization.  */

static void
init_target_units ()
{
  rtx insn;
  int unit;

  for (insn = get_last_insn (); insn; insn = PREV_INSN (insn))
    {
      if (GET_RTX_CLASS (GET_CODE (insn)) != 'i')
	continue;

      unit = insn_unit (insn);

      if (unit < 0)
	target_units |= ~unit;
      else
	target_units |= (1 << unit);
    }
}
/* Return the length of the visualization table.  */

static int
get_visual_tbl_length ()
{
  int unit, i;
  int n, n1;
  char *s;

  /* Compute length of one field in line.  */
  s = (char *) alloca (INSN_LEN + 5);
  sprintf (s, "  %33s", "uname");
  n1 = strlen (s);

  /* Compute length of one line.  */
  n = strlen (";; ") + n1;
  for (unit = 0; unit < FUNCTION_UNITS_SIZE; unit++)
    if (function_units[unit].bitmask & target_units)
      for (i = 0; i < function_units[unit].multiplicity; i++)
	n += n1;
  n += n1;
  n += strlen ("\n") + 2;

  /* Compute length of visualization string.  */
  return (MAX_VISUAL_LINES * n);
}
/* Init block visualization debugging info.  */

static void
init_block_visualization ()
{
  strcpy (visual_tbl, "");
  n_visual_lines = 0;
  n_vis_no_unit = 0;
}
static char *
safe_concat (buf, cur, str)
     char *buf;
     char *cur;
     char *str;
{
  char *end = buf + BUF_LEN - 2;	/* Leave room for null.  */
  int c;

  if (cur > end)
    {
      *end = '\0';
      return end;
    }

  while (cur < end && (c = *str++) != '\0')
    *cur++ = c;

  *cur = '\0';
  return cur;
}
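/* Usage sketch (illustrative only): safe_concat returns the updated write
   position, so calls chain naturally and output is silently truncated at
   BUF_LEN - 2:

       cur = buf;
       cur = safe_concat (buf, cur, "r");
       cur = safe_concat (buf, cur, "12");  */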
/* This recognizes rtx'es which I have classified as expressions.  They
   always represent some action on values or results of other expressions,
   which may be stored in objects representing values.  */

static void
print_exp (buf, x, verbose)
     char *buf;
     rtx x;
     int verbose;
{
  char tmp[BUF_LEN];
  char *st[4];
  char *cur = buf;
  char *fun = (char *)0;
  char *sep;
  rtx op[4];
  int i;

  for (i = 0; i < 4; i++)
    {
      st[i] = (char *)0;
      op[i] = NULL_RTX;
    }

  switch (GET_CODE (x))
    {
    case PLUS:
      op[0] = XEXP (x, 0);
      st[1] = "+";
      op[1] = XEXP (x, 1);
      break;
    case LO_SUM:
      op[0] = XEXP (x, 0);
      st[1] = "+low(";
      op[1] = XEXP (x, 1);
      st[2] = ")";
      break;
    case MINUS:
      op[0] = XEXP (x, 0);
      st[1] = "-";
      op[1] = XEXP (x, 1);
      break;
    case COMPARE:
      fun = "cmp";
      op[0] = XEXP (x, 0);
      op[1] = XEXP (x, 1);
      break;
    case NEG:
      st[0] = "-";
      op[0] = XEXP (x, 0);
      break;
    case MULT:
      op[0] = XEXP (x, 0);
      st[1] = "*";
      op[1] = XEXP (x, 1);
      break;
    case DIV:
      op[0] = XEXP (x, 0);
      st[1] = "/";
      op[1] = XEXP (x, 1);
      break;
    case UDIV:
      fun = "udiv";
      op[0] = XEXP (x, 0);
      op[1] = XEXP (x, 1);
      break;
    case MOD:
      op[0] = XEXP (x, 0);
      st[1] = "%";
      op[1] = XEXP (x, 1);
      break;
    case UMOD:
      fun = "umod";
      op[0] = XEXP (x, 0);
      op[1] = XEXP (x, 1);
      break;
    case SMIN:
      fun = "smin";
      op[0] = XEXP (x, 0);
      op[1] = XEXP (x, 1);
      break;
    case SMAX:
      fun = "smax";
      op[0] = XEXP (x, 0);
      op[1] = XEXP (x, 1);
      break;
    case UMIN:
      fun = "umin";
      op[0] = XEXP (x, 0);
      op[1] = XEXP (x, 1);
      break;
    case UMAX:
      fun = "umax";
      op[0] = XEXP (x, 0);
      op[1] = XEXP (x, 1);
      break;
    case NOT:
      st[0] = "!";
      op[0] = XEXP (x, 0);
      break;
    case AND:
      op[0] = XEXP (x, 0);
      st[1] = "&";
      op[1] = XEXP (x, 1);
      break;
    case IOR:
      op[0] = XEXP (x, 0);
      st[1] = "|";
      op[1] = XEXP (x, 1);
      break;
    case XOR:
      op[0] = XEXP (x, 0);
      st[1] = "^";
      op[1] = XEXP (x, 1);
      break;
    case ASHIFT:
      op[0] = XEXP (x, 0);
      st[1] = "<<";
      op[1] = XEXP (x, 1);
      break;
    case LSHIFTRT:
      op[0] = XEXP (x, 0);
      st[1] = " 0>>";
      op[1] = XEXP (x, 1);
      break;
    case ASHIFTRT:
      op[0] = XEXP (x, 0);
      st[1] = ">>";
      op[1] = XEXP (x, 1);
      break;
    case ROTATE:
      op[0] = XEXP (x, 0);
      st[1] = "<-<";
      op[1] = XEXP (x, 1);
      break;
    case ROTATERT:
      op[0] = XEXP (x, 0);
      st[1] = ">->";
      op[1] = XEXP (x, 1);
      break;
    case ABS:
      fun = "abs";
      op[0] = XEXP (x, 0);
      break;
    case SQRT:
      fun = "sqrt";
      op[0] = XEXP (x, 0);
      break;
    case FFS:
      fun = "ffs";
      op[0] = XEXP (x, 0);
      break;
    case EQ:
      op[0] = XEXP (x, 0);
      st[1] = "==";
      op[1] = XEXP (x, 1);
      break;
    case NE:
      op[0] = XEXP (x, 0);
      st[1] = "!=";
      op[1] = XEXP (x, 1);
      break;
    case GT:
      op[0] = XEXP (x, 0);
      st[1] = ">";
      op[1] = XEXP (x, 1);
      break;
    case GTU:
      fun = "gtu";
      op[0] = XEXP (x, 0);
      op[1] = XEXP (x, 1);
      break;
    case LT:
      op[0] = XEXP (x, 0);
      st[1] = "<";
      op[1] = XEXP (x, 1);
      break;
    case LTU:
      fun = "ltu";
      op[0] = XEXP (x, 0);
      op[1] = XEXP (x, 1);
      break;
    case GE:
      op[0] = XEXP (x, 0);
      st[1] = ">=";
      op[1] = XEXP (x, 1);
      break;
    case GEU:
      fun = "geu";
      op[0] = XEXP (x, 0);
      op[1] = XEXP (x, 1);
      break;
    case LE:
      op[0] = XEXP (x, 0);
      st[1] = "<=";
      op[1] = XEXP (x, 1);
      break;
    case LEU:
      fun = "leu";
      op[0] = XEXP (x, 0);
      op[1] = XEXP (x, 1);
      break;
    case SIGN_EXTRACT:
      fun = (verbose) ? "sign_extract" : "sxt";
      op[0] = XEXP (x, 0);
      op[1] = XEXP (x, 1);
      op[2] = XEXP (x, 2);
      break;
    case ZERO_EXTRACT:
      fun = (verbose) ? "zero_extract" : "zxt";
      op[0] = XEXP (x, 0);
      op[1] = XEXP (x, 1);
      op[2] = XEXP (x, 2);
      break;
    case SIGN_EXTEND:
      fun = (verbose) ? "sign_extend" : "sxn";
      op[0] = XEXP (x, 0);
      break;
    case ZERO_EXTEND:
      fun = (verbose) ? "zero_extend" : "zxn";
      op[0] = XEXP (x, 0);
      break;
    case FLOAT_EXTEND:
      fun = (verbose) ? "float_extend" : "fxn";
      op[0] = XEXP (x, 0);
      break;
    case TRUNCATE:
      fun = (verbose) ? "trunc" : "trn";
      op[0] = XEXP (x, 0);
      break;
    case FLOAT_TRUNCATE:
      fun = (verbose) ? "float_trunc" : "ftr";
      op[0] = XEXP (x, 0);
      break;
    case FLOAT:
      fun = (verbose) ? "float" : "flt";
      op[0] = XEXP (x, 0);
      break;
    case UNSIGNED_FLOAT:
      fun = (verbose) ? "uns_float" : "ufl";
      op[0] = XEXP (x, 0);
      break;
    case FIX:
      fun = "fix";
      op[0] = XEXP (x, 0);
      break;
    case UNSIGNED_FIX:
      fun = (verbose) ? "uns_fix" : "ufx";
      op[0] = XEXP (x, 0);
      break;
    case PRE_DEC:
      st[0] = "--";
      op[0] = XEXP (x, 0);
      break;
    case PRE_INC:
      st[0] = "++";
      op[0] = XEXP (x, 0);
      break;
    case POST_DEC:
      op[0] = XEXP (x, 0);
      st[1] = "--";
      break;
    case POST_INC:
      op[0] = XEXP (x, 0);
      st[1] = "++";
      break;
    case CALL:
      st[0] = "call ";
      op[0] = XEXP (x, 0);
      if (verbose)
	{
	  st[1] = " argc:";
	  op[1] = XEXP (x, 1);
	}
      break;
    case IF_THEN_ELSE:
      st[0] = "{(";
      op[0] = XEXP (x, 0);
      st[1] = ")?";
      op[1] = XEXP (x, 1);
      st[2] = ":";
      op[2] = XEXP (x, 2);
      st[3] = "}";
      break;
    case TRAP_IF:
      fun = "trap_if";
      op[0] = TRAP_CONDITION (x);
      break;
    case UNSPEC:
    case UNSPEC_VOLATILE:
      {
	cur = safe_concat (buf, cur, "unspec");
	if (GET_CODE (x) == UNSPEC_VOLATILE)
	  cur = safe_concat (buf, cur, "/v");
	cur = safe_concat (buf, cur, "[");
	sep = "";
	for (i = 0; i < XVECLEN (x, 0); i++)
	  {
	    print_pattern (tmp, XVECEXP (x, 0, i), verbose);
	    cur = safe_concat (buf, cur, sep);
	    cur = safe_concat (buf, cur, tmp);
	    sep = ",";
	  }
	cur = safe_concat (buf, cur, "] ");
	sprintf (tmp, "%d", XINT (x, 1));
	cur = safe_concat (buf, cur, tmp);
      }
      break;
    default:
      /* if (verbose) debug_rtx (x); */
      st[0] = GET_RTX_NAME (GET_CODE (x));
      break;
    }

  /* Print this as a function?  */
  if (fun)
    {
      cur = safe_concat (buf, cur, fun);
      cur = safe_concat (buf, cur, "(");
    }

  for (i = 0; i < 4; i++)
    {
      if (st[i])
	cur = safe_concat (buf, cur, st[i]);

      if (op[i])
	{
	  if (fun && i != 0)
	    cur = safe_concat (buf, cur, ",");

	  print_value (tmp, op[i], verbose);
	  cur = safe_concat (buf, cur, tmp);
	}
    }

  if (fun)
    cur = safe_concat (buf, cur, ")");
}				/* print_exp */
/* Prints rtx'es which I have classified as values.  They are constants,
   registers, labels, symbols and memory accesses.  */

static void
print_value (buf, x, verbose)
     char *buf;
     rtx x;
     int verbose;
{
  char t[BUF_LEN];
  char *cur = buf;

  switch (GET_CODE (x))
    {
    case CONST_INT:
      sprintf (t, "0x%lx", (long)INTVAL (x));
      cur = safe_concat (buf, cur, t);
      break;
    case CONST_DOUBLE:
      sprintf (t, "<0x%lx,0x%lx>", (long)XWINT (x, 2), (long)XWINT (x, 3));
      cur = safe_concat (buf, cur, t);
      break;
    case CONST_STRING:
      cur = safe_concat (buf, cur, "\"");
      cur = safe_concat (buf, cur, XSTR (x, 0));
      cur = safe_concat (buf, cur, "\"");
      break;
    case SYMBOL_REF:
      cur = safe_concat (buf, cur, "`");
      cur = safe_concat (buf, cur, XSTR (x, 0));
      cur = safe_concat (buf, cur, "'");
      break;
    case LABEL_REF:
      sprintf (t, "L%d", INSN_UID (XEXP (x, 0)));
      cur = safe_concat (buf, cur, t);
      break;
    case CONST:
      print_value (t, XEXP (x, 0), verbose);
      cur = safe_concat (buf, cur, "const(");
      cur = safe_concat (buf, cur, t);
      cur = safe_concat (buf, cur, ")");
      break;
    case HIGH:
      print_value (t, XEXP (x, 0), verbose);
      cur = safe_concat (buf, cur, "high(");
      cur = safe_concat (buf, cur, t);
      cur = safe_concat (buf, cur, ")");
      break;
    case REG:
      if (REGNO (x) < FIRST_PSEUDO_REGISTER)
	{
	  int c = reg_names[ REGNO (x) ][0];
	  if (c >= '0' && c <= '9')
	    cur = safe_concat (buf, cur, "%");

	  cur = safe_concat (buf, cur, reg_names[ REGNO (x) ]);
	}
      else
	{
	  sprintf (t, "r%d", REGNO (x));
	  cur = safe_concat (buf, cur, t);
	}
      break;
    case SUBREG:
      print_value (t, SUBREG_REG (x), verbose);
      cur = safe_concat (buf, cur, t);
      sprintf (t, "#%d", SUBREG_WORD (x));
      cur = safe_concat (buf, cur, t);
      break;
    case SCRATCH:
      cur = safe_concat (buf, cur, "scratch");
      break;
    case CC0:
      cur = safe_concat (buf, cur, "cc0");
      break;
    case PC:
      cur = safe_concat (buf, cur, "pc");
      break;
    case MEM:
      print_value (t, XEXP (x, 0), verbose);
      cur = safe_concat (buf, cur, "[");
      cur = safe_concat (buf, cur, t);
      cur = safe_concat (buf, cur, "]");
      break;
    default:
      print_exp (t, x, verbose);
      cur = safe_concat (buf, cur, t);
      break;
    }
}				/* print_value */
/* The next level of insn detail: recognizing the insn's pattern.  */

static void
print_pattern (buf, x, verbose)
     char *buf;
     rtx x;
     int verbose;
{
  char t1[BUF_LEN], t2[BUF_LEN], t3[BUF_LEN];

  switch (GET_CODE (x))
    {
    case SET:
      print_value (t1, SET_DEST (x), verbose);
      print_value (t2, SET_SRC (x), verbose);
      sprintf (buf, "%s=%s", t1, t2);
      break;
    case RETURN:
      sprintf (buf, "return");
      break;
    case CALL:
      print_exp (buf, x, verbose);
      break;
    case CLOBBER:
      print_value (t1, XEXP (x, 0), verbose);
      sprintf (buf, "clobber %s", t1);
      break;
    case USE:
      print_value (t1, XEXP (x, 0), verbose);
      sprintf (buf, "use %s", t1);
      break;
    case PARALLEL:
      {
	int i;

	sprintf (t1, "{");
	for (i = 0; i < XVECLEN (x, 0); i++)
	  {
	    print_pattern (t2, XVECEXP (x, 0, i), verbose);
	    sprintf (t3, "%s%s;", t1, t2);
	    strcpy (t1, t3);
	  }
	sprintf (buf, "%s}", t1);
      }
      break;
    case SEQUENCE:
      {
	int i;

	sprintf (t1, "%%{");
	for (i = 0; i < XVECLEN (x, 0); i++)
	  {
	    print_insn (t2, XVECEXP (x, 0, i), verbose);
	    sprintf (t3, "%s%s;", t1, t2);
	    strcpy (t1, t3);
	  }
	sprintf (buf, "%s%%}", t1);
      }
      break;
    case ASM_INPUT:
      sprintf (buf, "asm {%s}", XSTR (x, 0));
      break;
    case ADDR_VEC:
      break;
    case ADDR_DIFF_VEC:
      print_value (buf, XEXP (x, 0), verbose);
      break;
    case TRAP_IF:
      print_value (t1, TRAP_CONDITION (x), verbose);
      sprintf (buf, "trap_if %s", t1);
      break;
    case UNSPEC:
      {
	int i;

	sprintf (t1, "unspec{");
	for (i = 0; i < XVECLEN (x, 0); i++)
	  {
	    print_pattern (t2, XVECEXP (x, 0, i), verbose);
	    sprintf (t3, "%s%s;", t1, t2);
	    strcpy (t1, t3);
	  }
	sprintf (buf, "%s}", t1);
      }
      break;
    case UNSPEC_VOLATILE:
      {
	int i;

	sprintf (t1, "unspec/v{");
	for (i = 0; i < XVECLEN (x, 0); i++)
	  {
	    print_pattern (t2, XVECEXP (x, 0, i), verbose);
	    sprintf (t3, "%s%s;", t1, t2);
	    strcpy (t1, t3);
	  }
	sprintf (buf, "%s}", t1);
      }
      break;
    default:
      print_value (buf, x, verbose);
      break;
    }
}				/* print_pattern */
/* This is the main function in the rtl visualization mechanism.  It
   accepts an rtx and tries to recognize it as an insn, then prints it
   properly in human readable form, resembling assembler mnemonics.
   For every insn it prints its UID and the BB the insn belongs to.
   (Probably the last "option" should be extended somehow, since it
   depends now on sched.c inner variables...)  */

static void
print_insn (buf, x, verbose)
     char *buf;
     rtx x;
     int verbose;
{
  char t[BUF_LEN];
  rtx insn = x;

  switch (GET_CODE (x))
    {
    case INSN:
      print_pattern (t, PATTERN (x), verbose);
      if (verbose)
	sprintf (buf, "b%d: i% 4d: %s", INSN_BB (x),
		 INSN_UID (x), t);
      else
	sprintf (buf, "%-4d %s", INSN_UID (x), t);
      break;
    case JUMP_INSN:
      print_pattern (t, PATTERN (x), verbose);
      if (verbose)
	sprintf (buf, "b%d: i% 4d: jump %s", INSN_BB (x),
		 INSN_UID (x), t);
      else
	sprintf (buf, "%-4d %s", INSN_UID (x), t);
      break;
    case CALL_INSN:
      x = PATTERN (insn);
      if (GET_CODE (x) == PARALLEL)
	{
	  x = XVECEXP (x, 0, 0);
	  print_pattern (t, x, verbose);
	}
      else
	strcpy (t, "call <...>");
      if (verbose)
	sprintf (buf, "b%d: i% 4d: %s", INSN_BB (insn),
		 INSN_UID (insn), t);
      else
	sprintf (buf, "%-4d %s", INSN_UID (insn), t);
      break;
    case CODE_LABEL:
      sprintf (buf, "L%d:", INSN_UID (x));
      break;
    case BARRIER:
      sprintf (buf, "i% 4d: barrier", INSN_UID (x));
      break;
    case NOTE:
      if (NOTE_LINE_NUMBER (x) > 0)
	sprintf (buf, "%4d note \"%s\" %d", INSN_UID (x),
		 NOTE_SOURCE_FILE (x), NOTE_LINE_NUMBER (x));
      else
	sprintf (buf, "%4d %s", INSN_UID (x),
		 GET_NOTE_INSN_NAME (NOTE_LINE_NUMBER (x)));
      break;
    default:
      if (verbose)
	{
	  sprintf (buf, "Not an INSN at all\n");
	  debug_rtx (x);
	}
      else
	sprintf (buf, "i%-4d  <What?>", INSN_UID (x));
    }
}				/* print_insn */
/* Print visualization debugging info.  */

static void
print_block_visualization (b, s)
     int b;
     char *s;
{
  int unit, i;

  /* Print header.  */
  fprintf (dump, "\n;;   ==================== scheduling visualization for block %d %s \n", b, s);

  /* Print names of units.  */
  fprintf (dump, ";;   %-8s", "clock");
  for (unit = 0; unit < FUNCTION_UNITS_SIZE; unit++)
    if (function_units[unit].bitmask & target_units)
      for (i = 0; i < function_units[unit].multiplicity; i++)
	fprintf (dump, "  %-33s", function_units[unit].name);
  fprintf (dump, "  %-8s\n", "no-unit");

  fprintf (dump, ";;   %-8s", "=====");
  for (unit = 0; unit < FUNCTION_UNITS_SIZE; unit++)
    if (function_units[unit].bitmask & target_units)
      for (i = 0; i < function_units[unit].multiplicity; i++)
	fprintf (dump, "  %-33s", "==============================");
  fprintf (dump, "  %-8s\n", "=======");

  /* Print insns in each cycle.  */
  fprintf (dump, "%s\n", visual_tbl);
}
/* Print insns in the 'no_unit' column of visualization.  */

static void
visualize_no_unit (insn)
     rtx insn;
{
  vis_no_unit[n_vis_no_unit] = insn;
  n_vis_no_unit++;
}
/* Print insns scheduled in clock, for visualization.  */

static void
visualize_scheduled_insns (b, clock)
     int b, clock;
{
  int i, unit;

  /* If no more room, split table into two.  */
  if (n_visual_lines >= MAX_VISUAL_LINES)
    {
      print_block_visualization (b, "(incomplete)");
      init_block_visualization ();
    }

  n_visual_lines++;

  sprintf (visual_tbl + strlen (visual_tbl), ";;   %-8d", clock);
  for (unit = 0; unit < FUNCTION_UNITS_SIZE; unit++)
    if (function_units[unit].bitmask & target_units)
      for (i = 0; i < function_units[unit].multiplicity; i++)
	{
	  int instance = unit + i * FUNCTION_UNITS_SIZE;
	  rtx insn = unit_last_insn[instance];

	  /* Print insns that still keep the unit busy.  */
	  if (insn &&
	      actual_hazard_this_instance (unit, instance, insn, clock, 0))
	    {
	      char str[BUF_LEN];
	      print_insn (str, insn, 0);
	      str[INSN_LEN] = '\0';
	      sprintf (visual_tbl + strlen (visual_tbl), "  %-33s", str);
	    }
	  else
	    sprintf (visual_tbl + strlen (visual_tbl), "  %-33s", "------------------------------");
	}

  /* Print insns that are not assigned to any unit.  */
  for (i = 0; i < n_vis_no_unit; i++)
    sprintf (visual_tbl + strlen (visual_tbl), "  %-8d",
	     INSN_UID (vis_no_unit[i]));
  n_vis_no_unit = 0;

  sprintf (visual_tbl + strlen (visual_tbl), "\n");
}
/* Print stalled cycles.  */

static void
visualize_stall_cycles (b, stalls)
     int b, stalls;
{
  int i;

  /* If no more room, split table into two.  */
  if (n_visual_lines >= MAX_VISUAL_LINES)
    {
      print_block_visualization (b, "(incomplete)");
      init_block_visualization ();
    }

  n_visual_lines++;

  sprintf (visual_tbl + strlen (visual_tbl), ";;   ");
  for (i = 0; i < stalls; i++)
    sprintf (visual_tbl + strlen (visual_tbl), ".");
  sprintf (visual_tbl + strlen (visual_tbl), "\n");
}
/* move_insn1: Remove INSN from insn chain, and link it after LAST insn.  */

static rtx
move_insn1 (insn, last)
     rtx insn, last;
{
  NEXT_INSN (PREV_INSN (insn)) = NEXT_INSN (insn);
  PREV_INSN (NEXT_INSN (insn)) = PREV_INSN (insn);

  NEXT_INSN (insn) = NEXT_INSN (last);
  PREV_INSN (NEXT_INSN (last)) = insn;

  NEXT_INSN (last) = insn;
  PREV_INSN (insn) = last;

  return insn;
}
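/* Illustration (not from the original sources): given the chains
   A <-> INSN <-> B and ... LAST <-> C, the splice above produces
   A <-> B and LAST <-> INSN <-> C.  INSN must have both neighbors and
   LAST a successor, or the NEXT_INSN/PREV_INSN accesses would fault.  */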
/* Search INSN for fake REG_DEAD note pairs for NOTE_INSN_SETJMP,
   NOTE_INSN_{LOOP,EHREGION}_{BEG,END}; and convert them back into
   NOTEs.  The REG_DEAD note following the first one contains the saved
   value for NOTE_BLOCK_NUMBER which is useful for
   NOTE_INSN_EH_REGION_{BEG,END} NOTEs.  LAST is the last instruction
   output by the instruction scheduler.  Return the new value of LAST.  */

static rtx
reemit_notes (insn, last)
     rtx insn;
     rtx last;
{
  rtx note, retval;

  retval = last;
  for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
    {
      if (REG_NOTE_KIND (note) == REG_DEAD
	  && GET_CODE (XEXP (note, 0)) == CONST_INT)
	{
	  if (INTVAL (XEXP (note, 0)) == NOTE_INSN_SETJMP)
	    {
	      retval = emit_note_after (INTVAL (XEXP (note, 0)), insn);
	      CONST_CALL_P (retval) = CONST_CALL_P (note);
	      remove_note (insn, note);
	      note = XEXP (note, 1);
	    }
	  else
	    {
	      last = emit_note_before (INTVAL (XEXP (note, 0)), last);
	      remove_note (insn, note);
	      note = XEXP (note, 1);
	      NOTE_BLOCK_NUMBER (last) = INTVAL (XEXP (note, 0));
	    }
	  remove_note (insn, note);
	}
    }
  return retval;
}
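/* Illustrative example of the encoding undone above: for an EH region
   note, the scheduler earlier attached the pair
       (REG_DEAD (const_int NOTE_INSN_EH_REGION_BEG))
       (REG_DEAD (const_int <saved NOTE_BLOCK_NUMBER>))
   to an insn; the loop turns the first back into a real NOTE and consumes
   the second as that note's NOTE_BLOCK_NUMBER.  */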
/* Move INSN, and all insns which should be issued before it,
   due to SCHED_GROUP_P flag.  Reemit notes if needed.

   Return the last insn emitted by the scheduler, which is the
   return value from the first call to reemit_notes.  */

static rtx
move_insn (insn, last)
     rtx insn, last;
{
  rtx retval = NULL;

  /* If INSN has SCHED_GROUP_P set, then issue it and any other
     insns with SCHED_GROUP_P set first.  */
  while (SCHED_GROUP_P (insn))
    {
      rtx prev = PREV_INSN (insn);

      /* Move a SCHED_GROUP_P insn.  */
      move_insn1 (insn, last);
      /* If this is the first call to reemit_notes, then record
	 its return value.  */
      if (retval == NULL_RTX)
	retval = reemit_notes (insn, insn);
      else
	reemit_notes (insn, insn);
      insn = prev;
    }

  /* Now move the first non SCHED_GROUP_P insn.  */
  move_insn1 (insn, last);

  /* If this is the first call to reemit_notes, then record
     its return value.  */
  if (retval == NULL_RTX)
    retval = reemit_notes (insn, insn);
  else
    reemit_notes (insn, insn);

  return retval;
}
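/* Illustrative example: for a cc0 setter/user pair where the user has
   SCHED_GROUP_P set, scheduling the user first moves the setter into
   place and then the user, keeping the pair adjacent in the final
   schedule.  */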
/* Return an insn which represents a SCHED_GROUP, which is
   the last insn in the group.  */

static rtx
group_leader (insn)
     rtx insn;
{
  rtx prev;

  do
    {
      prev = insn;
      insn = next_nonnote_insn (insn);
    }
  while (insn && SCHED_GROUP_P (insn) && (GET_CODE (insn) != CODE_LABEL));

  return prev;
}
/* Use forward list scheduling to rearrange insns of block BB in region RGN,
   possibly bringing insns from subsequent blocks in the same region.
   Return number of insns scheduled.  */

static int
schedule_block (bb, rgn_n_insns)
     int bb;
     int rgn_n_insns;
{
  /* Local variables.  */
  rtx insn, last;
  rtx *ready;
  int i;
  int can_issue_more;
  int n_ready;

  /* flow block of this bb */
  int b = BB_TO_BLOCK (bb);

  /* target_n_insns == number of insns in b before scheduling starts.
     sched_target_n_insns == how many of b's insns were scheduled.
     sched_n_insns == how many insns were scheduled in b */
  int target_n_insns = 0;
  int sched_target_n_insns = 0;
  int sched_n_insns = 0;

#define NEED_NOTHING	0
#define NEED_HEAD	1
#define NEED_TAIL	2
  int new_needs;

  /* head/tail info for this block */
  rtx prev_head;
  rtx next_tail;
  rtx head;
  rtx tail;
  int bb_src;

  /* We used to have code to avoid getting parameters moved from hard
     argument registers into pseudos.

     However, it was removed when it proved to be of marginal benefit
     and caused problems because schedule_block and compute_forward_dependences
     had different notions of what the "head" insn was.  */
  get_block_head_tail (bb, &head, &tail);

  /* Interblock scheduling could have moved the original head insn from this
     block into a preceding block.  This may also cause schedule_block and
     compute_forward_dependences to have different notions of what the
     "head" insn was.

     If the interblock movement happened to make this block start with
     some notes (LOOP, EH or SETJMP) before the first real insn, then
     HEAD will have various special notes attached to it which must be
     removed so that we don't end up with extra copies of the notes.  */
  if (GET_RTX_CLASS (GET_CODE (head)) == 'i')
    {
      rtx note;

      for (note = REG_NOTES (head); note; note = XEXP (note, 1))
	if (REG_NOTE_KIND (note) == REG_DEAD
	    && GET_CODE (XEXP (note, 0)) == CONST_INT)
	  remove_note (head, note);
    }

  next_tail = NEXT_INSN (tail);
  prev_head = PREV_INSN (head);

  /* If the only insn left is a NOTE or a CODE_LABEL, then there is no need
     to schedule this block.  */
  if (head == tail
      && (GET_RTX_CLASS (GET_CODE (head)) != 'i'))
    return (sched_n_insns);

  /* debug info */
  if (sched_verbose)
    {
      fprintf (dump, ";; ======================================================\n");
      fprintf (dump,
	       ";; -- basic block %d from %d to %d -- %s reload\n",
	       b, INSN_UID (basic_block_head[b]),
	       INSN_UID (basic_block_end[b]),
	       (reload_completed ? "after" : "before"));
      fprintf (dump, ";; ======================================================\n");
      fprintf (dump, "\n");

      visual_tbl = (char *) alloca (get_visual_tbl_length ());
      init_block_visualization ();
    }

  /* remove remaining note insns from the block, save them in
     note_list.  These notes are restored at the end of
     schedule_block ().  */
  note_list = 0;
  rm_other_notes (head, tail);

  target_bb = bb;

  /* prepare current target block info */
  if (current_nr_blocks > 1)
    {
      candidate_table = (candidate *) alloca (current_nr_blocks * sizeof (candidate));

      bblst_last = 0;
      /* ??? It is not clear why bblst_size is computed this way.  The original
	 number was clearly too small as it resulted in compiler failures.
	 Multiplying the original number by 2 (to account for update_bbs
	 members) seems to be a reasonable solution.  */
      /* ??? Or perhaps there is a bug somewhere else in this file?  */
      bblst_size = (current_nr_blocks - bb) * rgn_nr_edges * 2;
      bblst_table = (int *) alloca (bblst_size * sizeof (int));

      bitlst_table_last = 0;
      bitlst_table_size = rgn_nr_edges;
      bitlst_table = (int *) alloca (rgn_nr_edges * sizeof (int));

      compute_trg_info (bb);
    }

  clear_units ();

  /* Allocate the ready list */
  ready = (rtx *) alloca ((rgn_n_insns + 1) * sizeof (rtx));

  /* Print debugging information.  */
  if (sched_verbose >= 5)
    debug_dependencies ();

  /* Initialize ready list with all 'ready' insns in target block.
     Count number of insns in the target block being scheduled.  */
  n_ready = 0;
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      rtx next;

      if (GET_RTX_CLASS (GET_CODE (insn)) != 'i')
	continue;
      next = NEXT_INSN (insn);

      if (INSN_DEP_COUNT (insn) == 0
	  && (SCHED_GROUP_P (next) == 0 || GET_RTX_CLASS (GET_CODE (next)) != 'i'))
	ready[n_ready++] = insn;
      if (!(SCHED_GROUP_P (insn)))
	target_n_insns++;
    }

  /* Add to ready list all 'ready' insns in valid source blocks.
     For speculative insns, check-live, exception-free, and
     issue-delay.  */
  for (bb_src = bb + 1; bb_src < current_nr_blocks; bb_src++)
    if (IS_VALID (bb_src))
      {
	rtx src_head;
	rtx src_next_tail;
	rtx tail, head;

	get_block_head_tail (bb_src, &head, &tail);
	src_next_tail = NEXT_INSN (tail);
	src_head = head;

	if (head == tail
	    && (GET_RTX_CLASS (GET_CODE (head)) != 'i'))
	  continue;

	for (insn = src_head; insn != src_next_tail; insn = NEXT_INSN (insn))
	  {
	    if (GET_RTX_CLASS (GET_CODE (insn)) != 'i')
	      continue;

	    if (!CANT_MOVE (insn)
		&& (!IS_SPECULATIVE_INSN (insn)
		    || (insn_issue_delay (insn) <= 3
			&& check_live (insn, bb_src)
			&& is_exception_free (insn, bb_src, target_bb))))
	      {
		rtx next;

		next = NEXT_INSN (insn);
		if (INSN_DEP_COUNT (insn) == 0
		    && (SCHED_GROUP_P (next) == 0
			|| GET_RTX_CLASS (GET_CODE (next)) != 'i'))
		  ready[n_ready++] = insn;
	      }
	  }
      }

#ifdef MD_SCHED_INIT
  MD_SCHED_INIT (dump, sched_verbose);
#endif

  /* no insns scheduled in this block yet */
  last_scheduled_insn = 0;

  /* Sort the ready list */
  SCHED_SORT (ready, n_ready);
#ifdef MD_SCHED_REORDER
  MD_SCHED_REORDER (dump, sched_verbose, ready, n_ready);
#endif

  if (sched_verbose >= 2)
    {
      fprintf (dump, ";;\t\tReady list initially: ");
      debug_ready_list (ready, n_ready);
    }

  /* Q_SIZE is the total number of insns in the queue.  */
  q_ptr = 0;
  q_size = 0;
  clock_var = 0;
  bzero ((char *) insn_queue, sizeof (insn_queue));

  /* We start inserting insns after PREV_HEAD.  */
  last = prev_head;

  /* Initialize INSN_QUEUE, LIST and NEW_NEEDS.  */
  new_needs = (NEXT_INSN (prev_head) == basic_block_head[b]
	       ? NEED_HEAD : NEED_NOTHING);
  if (PREV_INSN (next_tail) == basic_block_end[b])
    new_needs |= NEED_TAIL;

  /* loop until all the insns in BB are scheduled.  */
  while (sched_target_n_insns < target_n_insns)
    {
      int b1;

      clock_var++;

      /* Add to the ready list all pending insns that can be issued now.
	 If there are no ready insns, increment clock until one
	 is ready and add all pending insns at that point to the ready
	 list.  */
      n_ready = queue_to_ready (ready, n_ready);

      if (n_ready == 0)
	abort ();

      if (sched_verbose >= 2)
	{
	  fprintf (dump, ";;\t\tReady list after queue_to_ready: ");
	  debug_ready_list (ready, n_ready);
	}

      /* Sort the ready list.  */
      SCHED_SORT (ready, n_ready);
#ifdef MD_SCHED_REORDER
      MD_SCHED_REORDER (dump, sched_verbose, ready, n_ready);
#endif

      if (sched_verbose)
	{
	  fprintf (dump, "\n;;\tReady list (t =%3d): ", clock_var);
	  debug_ready_list (ready, n_ready);
	}

      /* Issue insns from ready list.
	 It is important to count down from n_ready, because n_ready may change
	 as insns are issued.  */
      can_issue_more = issue_rate;
      for (i = n_ready - 1; i >= 0 && can_issue_more; i--)
	{
	  rtx insn = ready[i];
	  int cost = actual_hazard (insn_unit (insn), insn, clock_var, 0);

	  if (cost > 1)
	    {
	      queue_insn (insn, cost);
	      ready[i] = ready[--n_ready];	/* remove insn from ready list */
	    }
	  else if (cost == 0)
	    {
	      /* an interblock motion? */
	      if (INSN_BB (insn) != target_bb)
		{
		  rtx temp;

		  if (IS_SPECULATIVE_INSN (insn))
		    {
		      if (!check_live (insn, INSN_BB (insn)))
			{
			  /* speculative motion, live check failed, remove
			     insn from ready list */
			  ready[i] = ready[--n_ready];
			  continue;
			}
		      update_live (insn, INSN_BB (insn));

		      /* for speculative load, mark insns fed by it.  */
		      if (IS_LOAD_INSN (insn) || FED_BY_SPEC_LOAD (insn))
			set_spec_fed (insn);

		      nr_spec++;
		    }
		  nr_inter++;

		  temp = insn;
		  while (SCHED_GROUP_P (temp))
		    temp = PREV_INSN (temp);

		  /* Update source block boundaries.  */
		  b1 = INSN_BLOCK (temp);
		  if (temp == basic_block_head[b1]
		      && insn == basic_block_end[b1])
		    {
		      /* We moved all the insns in the basic block.
			 Emit a note after the last insn and update the
			 begin/end boundaries to point to the note.  */
		      emit_note_after (NOTE_INSN_DELETED, insn);
		      basic_block_end[b1] = NEXT_INSN (insn);
		      basic_block_head[b1] = NEXT_INSN (insn);
		    }
		  else if (insn == basic_block_end[b1])
		    {
		      /* We took insns from the end of the basic block,
			 so update the end of block boundary so that it
			 points to the first insn we did not move.  */
		      basic_block_end[b1] = PREV_INSN (temp);
		    }
		  else if (temp == basic_block_head[b1])
		    {
		      /* We took insns from the start of the basic block,
			 so update the start of block boundary so that
			 it points to the first insn we did not move.  */
		      basic_block_head[b1] = NEXT_INSN (insn);
		    }
		}
	      else
		{
		  /* in block motion */
		  sched_target_n_insns++;
		}

	      last_scheduled_insn = insn;
	      last = move_insn (insn, last);
	      sched_n_insns++;

#ifdef MD_SCHED_VARIABLE_ISSUE
	      MD_SCHED_VARIABLE_ISSUE (dump, sched_verbose, insn, can_issue_more);
#else
	      can_issue_more--;
#endif

	      n_ready = schedule_insn (insn, ready, n_ready, clock_var);

	      /* remove insn from ready list */
	      ready[i] = ready[--n_ready];

	      /* close this block after scheduling its jump */
	      if (GET_CODE (last_scheduled_insn) == JUMP_INSN)
		break;
	    }
	}

      /* debug info */
      if (sched_verbose)
	visualize_scheduled_insns (b, clock_var);
    }

  /* debug info */
  if (sched_verbose)
    {
      fprintf (dump, ";;\tReady list (final): ");
      debug_ready_list (ready, n_ready);
      print_block_visualization (b, "");
    }

  /* Sanity check -- queue must be empty now.  Meaningless if region has
     multiple bbs.  */
  if (current_nr_blocks > 1)
    if (!flag_schedule_interblock && q_size != 0)
      abort ();

  /* update head/tail boundaries.  */
  head = NEXT_INSN (prev_head);
  tail = last;

  /* Restore-other-notes: NOTE_LIST is the end of a chain of notes
     previously found among the insns.  Insert them at the beginning
     of the insns.  */
  if (note_list != 0)
    {
      rtx note_head = note_list;

      while (PREV_INSN (note_head))
	note_head = PREV_INSN (note_head);

      PREV_INSN (note_head) = PREV_INSN (head);
      NEXT_INSN (PREV_INSN (head)) = note_head;
      PREV_INSN (head) = note_list;
      NEXT_INSN (note_list) = head;
      head = note_head;
    }

  /* update target block boundaries.  */
  if (new_needs & NEED_HEAD)
    basic_block_head[b] = head;

  if (new_needs & NEED_TAIL)
    basic_block_end[b] = tail;

  /* debugging */
  if (sched_verbose)
    {
      fprintf (dump, ";; total time = %d\n;; new basic block head = %d\n",
	       clock_var, INSN_UID (basic_block_head[b]));
      fprintf (dump, ";; new basic block end = %d\n\n",
	       INSN_UID (basic_block_end[b]));
    }

  return (sched_n_insns);
}				/* schedule_block () */
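
/* Added summary of the main loop above: each iteration advances CLOCK_VAR
   by one cycle; queue_to_ready moves every insn whose stall has expired
   from INSN_QUEUE onto READY; at most ISSUE_RATE insns are then issued in
   priority order; and an insn whose function unit is still busy
   (cost > 1) is re-queued COST cycles ahead instead of being issued.  */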
/* print the bit-set of registers, S.  callable from debugger */

void
debug_reg_vector (s)
     regset s;
{
  int regno;

  EXECUTE_IF_SET_IN_REG_SET (s, 0, regno,
			     {
			       fprintf (dump, " %d", regno);
			     });

  fprintf (dump, "\n");
}
/* Use the backward dependences from LOG_LINKS to build
   forward dependences in INSN_DEPEND.  */

static void
compute_block_forward_dependences (bb)
     int bb;
{
  rtx insn, link;
  rtx tail, head;
  rtx next_tail;
  enum reg_note dep_type;

  get_block_head_tail (bb, &head, &tail);
  next_tail = NEXT_INSN (tail);
  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      if (GET_RTX_CLASS (GET_CODE (insn)) != 'i')
	continue;

      insn = group_leader (insn);

      for (link = LOG_LINKS (insn); link; link = XEXP (link, 1))
	{
	  rtx x = group_leader (XEXP (link, 0));
	  rtx new_link;

	  if (x != XEXP (link, 0))
	    continue;

	  /* Ignore dependences upon deleted insn */
	  if (GET_CODE (x) == NOTE || INSN_DELETED_P (x))
	    continue;
	  if (find_insn_list (insn, INSN_DEPEND (x)))
	    continue;

	  new_link = alloc_INSN_LIST (insn, INSN_DEPEND (x));

	  dep_type = REG_NOTE_KIND (link);
	  PUT_REG_NOTE_KIND (new_link, dep_type);

	  INSN_DEPEND (x) = new_link;
	  INSN_DEP_COUNT (insn) += 1;
	}
    }
}
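
/* Added worked example: if LOG_LINKS (I2) records a backward dependence on
   I1, the loop above prepends I2 to INSN_DEPEND (I1) and increments
   INSN_DEP_COUNT (I2); schedule_block later treats I2 as ready only once
   that count has been driven back to zero by scheduling I1.  */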
/* Initialize variables for region data dependence analysis.
   n_bbs is the number of region blocks */

__inline static void
init_rgn_data_dependences (n_bbs)
     int n_bbs;
{
  int bb;

  /* variables for which one copy exists for each block */
  bzero ((char *) bb_pending_read_insns, n_bbs * sizeof (rtx));
  bzero ((char *) bb_pending_read_mems, n_bbs * sizeof (rtx));
  bzero ((char *) bb_pending_write_insns, n_bbs * sizeof (rtx));
  bzero ((char *) bb_pending_write_mems, n_bbs * sizeof (rtx));
  /* bb_pending_lists_length is an array of ints, so clear exactly that
     many bytes rather than n_bbs * sizeof (rtx).  */
  bzero ((char *) bb_pending_lists_length, n_bbs * sizeof (int));
  bzero ((char *) bb_last_pending_memory_flush, n_bbs * sizeof (rtx));
  bzero ((char *) bb_last_function_call, n_bbs * sizeof (rtx));
  bzero ((char *) bb_sched_before_next_call, n_bbs * sizeof (rtx));

  /* Create an insn here so that we can hang dependencies off of it later.  */
  for (bb = 0; bb < n_bbs; bb++)
    {
      bb_sched_before_next_call[bb] =
	gen_rtx_INSN (VOIDmode, 0, NULL_RTX, NULL_RTX,
		      NULL_RTX, 0, NULL_RTX, NULL_RTX);
      LOG_LINKS (bb_sched_before_next_call[bb]) = 0;
    }
}
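
/* Added rationale: the dummy INSN built above is a bare rtx with a null
   pattern; as far as this pass is concerned its only interesting field is
   LOG_LINKS, which accumulates the insns that must complete before the
   next call seen in the block's successors.  */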
/* Add dependences so that branches are scheduled to run last in their block */

static void
add_branch_dependences (head, tail)
     rtx head, tail;
{
  rtx insn, last;

  /* For all branches, calls, uses, and cc0 setters, force them to remain
     in order at the end of the block by adding dependencies and giving
     the last a high priority.  There may be notes present, and prev_head
     may also be a note.

     Branches must obviously remain at the end.  Calls should remain at the
     end since moving them results in worse register allocation.  Uses remain
     at the end to ensure proper register allocation.  cc0 setters remain
     at the end because they can't be moved away from their cc0 user.  */
  insn = tail;
  last = 0;
  while (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN
	 || (GET_CODE (insn) == INSN
	     && (GET_CODE (PATTERN (insn)) == USE
#ifdef HAVE_cc0
		 || sets_cc0_p (PATTERN (insn))
#endif
	     ))
	 || GET_CODE (insn) == NOTE)
    {
      if (GET_CODE (insn) != NOTE)
	{
	  if (last != 0
	      && !find_insn_list (insn, LOG_LINKS (last)))
	    {
	      add_dependence (last, insn, REG_DEP_ANTI);
	      INSN_REF_COUNT (insn)++;
	    }

	  CANT_MOVE (insn) = 1;

	  last = insn;
	  /* Skip over insns that are part of a group.
	     Make each insn explicitly depend on the previous insn.
	     This ensures that only the group header will ever enter
	     the ready queue (and, when scheduled, will automatically
	     schedule the SCHED_GROUP_P block).  */
	  while (SCHED_GROUP_P (insn))
	    {
	      rtx temp = prev_nonnote_insn (insn);
	      add_dependence (insn, temp, REG_DEP_ANTI);
	      insn = temp;
	    }
	}

      /* Don't overrun the bounds of the basic block.  */
      if (insn == head)
	break;

      insn = PREV_INSN (insn);
    }

  /* make sure these insns are scheduled last in their block */
  insn = last;
  if (insn != 0)
    while (insn != head)
      {
	insn = prev_nonnote_insn (insn);

	if (INSN_REF_COUNT (insn) != 0)
	  continue;

	if (!find_insn_list (last, LOG_LINKS (insn)))
	  add_dependence (last, insn, REG_DEP_ANTI);
	INSN_REF_COUNT (insn) = 1;

	/* Skip over insns that are part of a group.  */
	while (SCHED_GROUP_P (insn))
	  insn = prev_nonnote_insn (insn);
      }
}
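
/* Added example: in a block whose tail is "insn; use; call; jump", the
   first loop above anti-depends the jump on the call and the call on the
   use, and marks all three CANT_MOVE, so list scheduling can neither
   reorder the block's tail nor move those insns to another block.  */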
/* Compute backward dependences inside bb.  In a multiple blocks region:
   (1) a bb is analyzed after its predecessors, and (2) the lists in
   effect at the end of bb (after analyzing for bb) are inherited by
   bb's successors.

   Specifically for reg-reg data dependences, the block insns are
   scanned by sched_analyze () top-to-bottom.  Two lists are
   maintained by sched_analyze (): reg_last_defs[] for register DEFs,
   and reg_last_uses[] for register USEs.

   When analysis is completed for bb, we update for its successors:
   ;  - DEFS[succ] = Union (DEFS [succ], DEFS [bb])
   ;  - USES[succ] = Union (USES [succ], USES [bb])

   The mechanism for computing mem-mem data dependence is very
   similar, and the result is interblock dependences in the region.  */

static void
compute_block_backward_dependences (bb)
     int bb;
{
  int b;
  rtx x;
  rtx head, tail;
  int max_reg = max_reg_num ();

  b = BB_TO_BLOCK (bb);

  if (current_nr_blocks == 1)
    {
      reg_last_uses = (rtx *) alloca (max_reg * sizeof (rtx));
      reg_last_sets = (rtx *) alloca (max_reg * sizeof (rtx));

      bzero ((char *) reg_last_uses, max_reg * sizeof (rtx));
      bzero ((char *) reg_last_sets, max_reg * sizeof (rtx));

      pending_read_insns = 0;
      pending_read_mems = 0;
      pending_write_insns = 0;
      pending_write_mems = 0;
      pending_lists_length = 0;
      last_function_call = 0;
      last_pending_memory_flush = 0;
      sched_before_next_call
	= gen_rtx_INSN (VOIDmode, 0, NULL_RTX, NULL_RTX,
			NULL_RTX, 0, NULL_RTX, NULL_RTX);
      LOG_LINKS (sched_before_next_call) = 0;
    }
  else
    {
      reg_last_uses = bb_reg_last_uses[bb];
      reg_last_sets = bb_reg_last_sets[bb];

      pending_read_insns = bb_pending_read_insns[bb];
      pending_read_mems = bb_pending_read_mems[bb];
      pending_write_insns = bb_pending_write_insns[bb];
      pending_write_mems = bb_pending_write_mems[bb];
      pending_lists_length = bb_pending_lists_length[bb];
      last_function_call = bb_last_function_call[bb];
      last_pending_memory_flush = bb_last_pending_memory_flush[bb];

      sched_before_next_call = bb_sched_before_next_call[bb];
    }

  /* do the analysis for this block */
  get_block_head_tail (bb, &head, &tail);
  sched_analyze (head, tail);
  add_branch_dependences (head, tail);

  if (current_nr_blocks > 1)
    {
      int e, first_edge;
      int b_succ, bb_succ;
      int reg;
      rtx link_insn, link_mem;
      rtx u;

      /* these lists should point to the right place, for correct freeing later.  */
      bb_pending_read_insns[bb] = pending_read_insns;
      bb_pending_read_mems[bb] = pending_read_mems;
      bb_pending_write_insns[bb] = pending_write_insns;
      bb_pending_write_mems[bb] = pending_write_mems;

      /* bb's structures are inherited by its successors */
      first_edge = e = OUT_EDGES (b);
      if (e > 0)
	do
	  {
	    b_succ = TO_BLOCK (e);
	    bb_succ = BLOCK_TO_BB (b_succ);

	    /* only bbs "below" bb, in the same region, are interesting */
	    if (CONTAINING_RGN (b) != CONTAINING_RGN (b_succ)
		|| bb_succ <= bb)
	      {
		e = NEXT_OUT (e);
		continue;
	      }

	    for (reg = 0; reg < max_reg; reg++)
	      {
		/* reg-last-uses lists are inherited by bb_succ */
		for (u = reg_last_uses[reg]; u; u = XEXP (u, 1))
		  {
		    if (find_insn_list (XEXP (u, 0), (bb_reg_last_uses[bb_succ])[reg]))
		      continue;

		    (bb_reg_last_uses[bb_succ])[reg]
		      = alloc_INSN_LIST (XEXP (u, 0),
					 (bb_reg_last_uses[bb_succ])[reg]);
		  }

		/* reg-last-defs lists are inherited by bb_succ */
		for (u = reg_last_sets[reg]; u; u = XEXP (u, 1))
		  {
		    if (find_insn_list (XEXP (u, 0), (bb_reg_last_sets[bb_succ])[reg]))
		      continue;

		    (bb_reg_last_sets[bb_succ])[reg]
		      = alloc_INSN_LIST (XEXP (u, 0),
					 (bb_reg_last_sets[bb_succ])[reg]);
		  }
	      }

	    /* mem read/write lists are inherited by bb_succ */
	    link_insn = pending_read_insns;
	    link_mem = pending_read_mems;
	    while (link_insn)
	      {
		if (!(find_insn_mem_list (XEXP (link_insn, 0), XEXP (link_mem, 0),
					  bb_pending_read_insns[bb_succ],
					  bb_pending_read_mems[bb_succ])))
		  add_insn_mem_dependence (&bb_pending_read_insns[bb_succ],
					   &bb_pending_read_mems[bb_succ],
					   XEXP (link_insn, 0), XEXP (link_mem, 0));
		link_insn = XEXP (link_insn, 1);
		link_mem = XEXP (link_mem, 1);
	      }

	    link_insn = pending_write_insns;
	    link_mem = pending_write_mems;
	    while (link_insn)
	      {
		if (!(find_insn_mem_list (XEXP (link_insn, 0), XEXP (link_mem, 0),
					  bb_pending_write_insns[bb_succ],
					  bb_pending_write_mems[bb_succ])))
		  add_insn_mem_dependence (&bb_pending_write_insns[bb_succ],
					   &bb_pending_write_mems[bb_succ],
					   XEXP (link_insn, 0), XEXP (link_mem, 0));

		link_insn = XEXP (link_insn, 1);
		link_mem = XEXP (link_mem, 1);
	      }

	    /* last_function_call is inherited by bb_succ */
	    for (u = last_function_call; u; u = XEXP (u, 1))
	      {
		if (find_insn_list (XEXP (u, 0), bb_last_function_call[bb_succ]))
		  continue;

		bb_last_function_call[bb_succ]
		  = alloc_INSN_LIST (XEXP (u, 0),
				     bb_last_function_call[bb_succ]);
	      }

	    /* last_pending_memory_flush is inherited by bb_succ */
	    for (u = last_pending_memory_flush; u; u = XEXP (u, 1))
	      {
		if (find_insn_list (XEXP (u, 0), bb_last_pending_memory_flush[bb_succ]))
		  continue;

		bb_last_pending_memory_flush[bb_succ]
		  = alloc_INSN_LIST (XEXP (u, 0),
				     bb_last_pending_memory_flush[bb_succ]);
	      }

	    /* sched_before_next_call is inherited by bb_succ */
	    x = LOG_LINKS (sched_before_next_call);
	    for (; x; x = XEXP (x, 1))
	      add_dependence (bb_sched_before_next_call[bb_succ],
			      XEXP (x, 0), REG_DEP_ANTI);

	    e = NEXT_OUT (e);
	  }
	while (e != first_edge);
    }

  /* Free up the INSN_LISTs.

     Note this loop is executed max_reg * nr_regions times.  Its first
     implementation accounted for over 90% of the calls to free_list.
     The list was empty for the vast majority of those calls.  On the PA,
     not calling free_list in those cases improves -O2 compile times by
     3-5% on average.  */
  for (b = 0; b < max_reg; ++b)
    {
      if (reg_last_sets[b])
	free_list (&reg_last_sets[b], &unused_insn_list);
      if (reg_last_uses[b])
	free_list (&reg_last_uses[b], &unused_insn_list);
    }

  /* Assert that we won't need bb_reg_last_* for this block anymore.  */
  if (current_nr_blocks > 1)
    {
      bb_reg_last_uses[bb] = (rtx *) NULL_RTX;
      bb_reg_last_sets[bb] = (rtx *) NULL_RTX;
    }
}
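
/* Added note: the pending read/write lists walked above are parallel
   lists (one insn list and one mem list advance in lock step), which is
   why link_insn and link_mem are stepped together through XEXP (.., 1);
   duplicates are filtered with find_insn_mem_list so an (insn, mem) pair
   is inherited by a successor at most once.  */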
/* Print dependences for debugging, callable from debugger */

void
debug_dependencies ()
{
  int bb;

  fprintf (dump, ";; --------------- forward dependences: ------------ \n");
  for (bb = 0; bb < current_nr_blocks; bb++)
    {
      rtx head, tail;
      rtx next_tail;
      rtx insn;

      get_block_head_tail (bb, &head, &tail);
      next_tail = NEXT_INSN (tail);
      fprintf (dump, "\n;; --- Region Dependences --- b %d bb %d \n",
	       BB_TO_BLOCK (bb), bb);

      fprintf (dump, ";; %7s%6s%6s%6s%6s%6s%11s%6s\n",
	       "insn", "code", "bb", "dep", "prio", "cost", "blockage", "units");
      fprintf (dump, ";; %7s%6s%6s%6s%6s%6s%11s%6s\n",
	       "----", "----", "--", "---", "----", "----", "--------", "-----");
      for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
	{
	  rtx link;
	  int unit, range;

	  if (GET_RTX_CLASS (GET_CODE (insn)) != 'i')
	    {
	      int n;
	      fprintf (dump, ";; %6d ", INSN_UID (insn));
	      if (GET_CODE (insn) == NOTE)
		{
		  n = NOTE_LINE_NUMBER (insn);
		  if (n < 0)
		    fprintf (dump, "%s\n", GET_NOTE_INSN_NAME (n));
		  else
		    fprintf (dump, "line %d, file %s\n", n,
			     NOTE_SOURCE_FILE (insn));
		}
	      else
		fprintf (dump, " {%s}\n", GET_RTX_NAME (GET_CODE (insn)));
	      continue;
	    }

	  unit = insn_unit (insn);
	  range = (unit < 0
		   || function_units[unit].blockage_range_function == 0) ? 0 :
	    function_units[unit].blockage_range_function (insn);
	  fprintf (dump,
		   ";; %s%5d%6d%6d%6d%6d%6d %3d -%3d ",
		   (SCHED_GROUP_P (insn) ? "+" : " "),
		   INSN_UID (insn),
		   INSN_CODE (insn),
		   INSN_BB (insn),
		   INSN_DEP_COUNT (insn),
		   INSN_PRIORITY (insn),
		   insn_cost (insn, 0, 0),
		   (int) MIN_BLOCKAGE_COST (range),
		   (int) MAX_BLOCKAGE_COST (range));
	  insn_print_units (insn);
	  fprintf (dump, "\t: ");
	  for (link = INSN_DEPEND (insn); link; link = XEXP (link, 1))
	    fprintf (dump, "%d ", INSN_UID (XEXP (link, 0)));
	  fprintf (dump, "\n");
	}
    }
  fprintf (dump, "\n");
}
/* Set_priorities: compute priority of each insn in the block */

static int
set_priorities (bb)
     int bb;
{
  rtx insn;
  int n_insn;

  rtx tail;
  rtx prev_head;
  rtx head;

  get_block_head_tail (bb, &head, &tail);
  prev_head = PREV_INSN (head);

  if (head == tail
      && (GET_RTX_CLASS (GET_CODE (head)) != 'i'))
    return 0;

  n_insn = 0;
  for (insn = tail; insn != prev_head; insn = PREV_INSN (insn))
    {
      if (GET_CODE (insn) == NOTE)
	continue;

      if (!(SCHED_GROUP_P (insn)))
	n_insn++;
      (void) priority (insn);
    }

  return n_insn;
}
/* Make each element of VECTOR point at an rtx-vector,
   taking the space for all those rtx-vectors from SPACE.
   SPACE is of type (rtx *), but it is really as long as NELTS rtx-vectors.
   BYTES_PER_ELT is the number of bytes in one rtx-vector.
   (this is the same as init_regset_vector () in flow.c) */

static void
init_rtx_vector (vector, space, nelts, bytes_per_elt)
     rtx **vector;
     rtx *space;
     int nelts;
     int bytes_per_elt;
{
  register int i;
  register rtx *p = space;

  for (i = 0; i < nelts; i++)
    {
      vector[i] = p;
      p += bytes_per_elt / sizeof (*p);
    }
}
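
/* Added layout sketch: SPACE is one flat array of NELTS * BYTES_PER_ELT
   bytes; after the loop, vector[0], vector[1], ... point at consecutive
   BYTES_PER_ELT-sized slices of it.  E.g. with nelts == 2 and
   bytes_per_elt == 4 * sizeof (rtx):

       space:  [ r0 r1 r2 r3 | r4 r5 r6 r7 ]
                 ^vector[0]    ^vector[1]                       */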
/* Schedule a region.  A region is either an inner loop, a loop-free
   subroutine, or a single basic block.  Each bb in the region is
   scheduled after its flow predecessors.  */

static void
schedule_region (rgn)
     int rgn;
{
  int bb;
  int rgn_n_insns = 0;
  int sched_rgn_n_insns = 0;

  /* set variables for the current region */
  current_nr_blocks = RGN_NR_BLOCKS (rgn);
  current_blocks = RGN_BLOCKS (rgn);

  reg_pending_sets = ALLOCA_REG_SET ();
  reg_pending_sets_all = 0;

  /* initializations for region data dependence analysis */
  if (current_nr_blocks > 1)
    {
      rtx *space;
      int maxreg = max_reg_num ();

      bb_reg_last_uses = (rtx **) alloca (current_nr_blocks * sizeof (rtx *));
      space = (rtx *) alloca (current_nr_blocks * maxreg * sizeof (rtx));
      bzero ((char *) space, current_nr_blocks * maxreg * sizeof (rtx));
      init_rtx_vector (bb_reg_last_uses, space, current_nr_blocks, maxreg * sizeof (rtx *));

      bb_reg_last_sets = (rtx **) alloca (current_nr_blocks * sizeof (rtx *));
      space = (rtx *) alloca (current_nr_blocks * maxreg * sizeof (rtx));
      bzero ((char *) space, current_nr_blocks * maxreg * sizeof (rtx));
      init_rtx_vector (bb_reg_last_sets, space, current_nr_blocks, maxreg * sizeof (rtx *));

      bb_pending_read_insns = (rtx *) alloca (current_nr_blocks * sizeof (rtx));
      bb_pending_read_mems = (rtx *) alloca (current_nr_blocks * sizeof (rtx));
      bb_pending_write_insns = (rtx *) alloca (current_nr_blocks * sizeof (rtx));
      bb_pending_write_mems = (rtx *) alloca (current_nr_blocks * sizeof (rtx));
      bb_pending_lists_length = (int *) alloca (current_nr_blocks * sizeof (int));
      bb_last_pending_memory_flush = (rtx *) alloca (current_nr_blocks * sizeof (rtx));
      bb_last_function_call = (rtx *) alloca (current_nr_blocks * sizeof (rtx));
      bb_sched_before_next_call = (rtx *) alloca (current_nr_blocks * sizeof (rtx));

      init_rgn_data_dependences (current_nr_blocks);
    }

  /* compute LOG_LINKS */
  for (bb = 0; bb < current_nr_blocks; bb++)
    compute_block_backward_dependences (bb);

  /* compute INSN_DEPEND */
  for (bb = current_nr_blocks - 1; bb >= 0; bb--)
    compute_block_forward_dependences (bb);

  /* Delete line notes, compute live-regs at block end, and set priorities.  */
  for (bb = 0; bb < current_nr_blocks; bb++)
    {
      if (reload_completed == 0)
	find_pre_sched_live (bb);

      if (write_symbols != NO_DEBUG)
	{
	  save_line_notes (bb);
	  rm_line_notes (bb);
	}

      rgn_n_insns += set_priorities (bb);
    }

  /* compute interblock info: probabilities, split-edges, dominators, etc.  */
  if (current_nr_blocks > 1)
    {
      int i;

      prob = (float *) alloca ((current_nr_blocks) * sizeof (float));

      bbset_size = current_nr_blocks / HOST_BITS_PER_WIDE_INT + 1;
      dom = (bbset *) alloca (current_nr_blocks * sizeof (bbset));
      for (i = 0; i < current_nr_blocks; i++)
	{
	  dom[i] = (bbset) alloca (bbset_size * sizeof (HOST_WIDE_INT));
	  bzero ((char *) dom[i], bbset_size * sizeof (HOST_WIDE_INT));
	}

      /* edge to bit */
      rgn_nr_edges = 0;
      edge_to_bit = (int *) alloca (nr_edges * sizeof (int));
      for (i = 1; i < nr_edges; i++)
	if (CONTAINING_RGN (FROM_BLOCK (i)) == rgn)
	  EDGE_TO_BIT (i) = rgn_nr_edges++;
      rgn_edges = (int *) alloca (rgn_nr_edges * sizeof (int));

      rgn_nr_edges = 0;
      for (i = 1; i < nr_edges; i++)
	if (CONTAINING_RGN (FROM_BLOCK (i)) == (rgn))
	  rgn_edges[rgn_nr_edges++] = i;

      /* split edges */
      edgeset_size = rgn_nr_edges / HOST_BITS_PER_WIDE_INT + 1;
      pot_split = (edgeset *) alloca (current_nr_blocks * sizeof (edgeset));
      ancestor_edges = (edgeset *) alloca (current_nr_blocks * sizeof (edgeset));
      for (i = 0; i < current_nr_blocks; i++)
	{
	  pot_split[i] =
	    (edgeset) alloca (edgeset_size * sizeof (HOST_WIDE_INT));
	  bzero ((char *) pot_split[i],
		 edgeset_size * sizeof (HOST_WIDE_INT));
	  ancestor_edges[i] =
	    (edgeset) alloca (edgeset_size * sizeof (HOST_WIDE_INT));
	  bzero ((char *) ancestor_edges[i],
		 edgeset_size * sizeof (HOST_WIDE_INT));
	}

      /* compute probabilities, dominators, split_edges */
      for (bb = 0; bb < current_nr_blocks; bb++)
	compute_dom_prob_ps (bb);
    }

  /* now we can schedule all blocks */
  for (bb = 0; bb < current_nr_blocks; bb++)
    {
      sched_rgn_n_insns += schedule_block (bb, rgn_n_insns);
    }

  /* sanity check: verify that all region insns were scheduled */
  if (sched_rgn_n_insns != rgn_n_insns)
    abort ();

  /* update register life and usage information */
  if (reload_completed == 0)
    {
      for (bb = current_nr_blocks - 1; bb >= 0; bb--)
	find_post_sched_live (bb);

      if (current_nr_blocks <= 1)
	/* Sanity check.  There should be no REG_DEAD notes leftover at the end.
	   In practice, this can occur as the result of bugs in flow, combine.c,
	   and/or sched.c.  The values of the REG_DEAD notes remaining are
	   meaningless, because dead_notes is just used as a free list.  */
	if (dead_notes != 0)
	  abort ();
    }

  /* restore line notes.  */
  if (write_symbols != NO_DEBUG)
    {
      for (bb = 0; bb < current_nr_blocks; bb++)
	restore_line_notes (bb);
    }

  /* Done with this region */
  free_pending_lists ();

  FREE_REG_SET (reg_pending_sets);
}
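
/* Added sizing note: bbset_size above is current_nr_blocks /
   HOST_BITS_PER_WIDE_INT + 1 words, i.e. one bit per block rounded up to
   whole HOST_WIDE_INTs; on a 64-bit host even a 10-block region allocates
   one full HOST_WIDE_INT per set.  edgeset_size is computed the same way
   over the region's edges.  */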
/* Subroutine of split_hard_reg_notes.  Searches X for any reference to
   REGNO, returning the rtx of the reference found if any.  Otherwise,
   returns NULL_RTX.  */

static rtx
regno_use_in (regno, x)
     int regno;
     rtx x;
{
  register char *fmt;
  int i, j;
  rtx tem;

  if (GET_CODE (x) == REG && REGNO (x) == regno)
    return x;

  fmt = GET_RTX_FORMAT (GET_CODE (x));
  for (i = GET_RTX_LENGTH (GET_CODE (x)) - 1; i >= 0; i--)
    {
      if (fmt[i] == 'e')
	{
	  if ((tem = regno_use_in (regno, XEXP (x, i))))
	    return tem;
	}
      else if (fmt[i] == 'E')
	for (j = XVECLEN (x, i) - 1; j >= 0; j--)
	  if ((tem = regno_use_in (regno, XVECEXP (x, i, j))))
	    return tem;
    }

  return NULL_RTX;
}
/* Subroutine of update_flow_info.  Determines whether any new REG_NOTEs are
   needed for the hard register mentioned in the note.  This can happen
   if the reference to the hard register in the original insn was split into
   several smaller hard register references in the split insns.  */

static void
split_hard_reg_notes (note, first, last)
     rtx note, first, last;
{
  rtx reg, temp, link;
  int n_regs, i, new_reg;
  rtx insn;

  /* Assume that this is a REG_DEAD note.  */
  if (REG_NOTE_KIND (note) != REG_DEAD)
    abort ();

  reg = XEXP (note, 0);

  n_regs = HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg));

  for (i = 0; i < n_regs; i++)
    {
      new_reg = REGNO (reg) + i;

      /* Check for references to new_reg in the split insns.  */
      for (insn = last;; insn = PREV_INSN (insn))
	{
	  if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
	      && (temp = regno_use_in (new_reg, PATTERN (insn))))
	    {
	      /* Create a new reg dead note here.  */
	      link = alloc_EXPR_LIST (REG_DEAD, temp, REG_NOTES (insn));
	      REG_NOTES (insn) = link;

	      /* If killed multiple registers here, then add in the excess.  */
	      i += HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) - 1;

	      break;
	    }
	  /* It isn't mentioned anywhere, so no new reg note is needed for
	     this register.  */
	  if (insn == first)
	    break;
	}
    }
}
/* Subroutine of update_flow_info.  Determines whether a SET or CLOBBER in an
   insn created by splitting needs a REG_DEAD or REG_UNUSED note added.  */

static void
new_insn_dead_notes (pat, insn, last, orig_insn)
     rtx pat, insn, last, orig_insn;
{
  rtx dest, tem, set;

  /* PAT is either a CLOBBER or a SET here.  */
  dest = XEXP (pat, 0);

  while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
	 || GET_CODE (dest) == STRICT_LOW_PART
	 || GET_CODE (dest) == SIGN_EXTRACT)
    dest = XEXP (dest, 0);

  if (GET_CODE (dest) == REG)
    {
      /* If the original insn already used this register, we may not add new
	 notes for it.  One example for a split that needs this test is
	 when a multi-word memory access with register-indirect addressing
	 is split into multiple memory accesses with auto-increment and
	 one adjusting add instruction for the address register.  */
      if (reg_referenced_p (dest, PATTERN (orig_insn)))
	return;

      for (tem = last; tem != insn; tem = PREV_INSN (tem))
	{
	  if (GET_RTX_CLASS (GET_CODE (tem)) == 'i'
	      && reg_overlap_mentioned_p (dest, PATTERN (tem))
	      && (set = single_set (tem)))
	    {
	      rtx tem_dest = SET_DEST (set);

	      while (GET_CODE (tem_dest) == ZERO_EXTRACT
		     || GET_CODE (tem_dest) == SUBREG
		     || GET_CODE (tem_dest) == STRICT_LOW_PART
		     || GET_CODE (tem_dest) == SIGN_EXTRACT)
		tem_dest = XEXP (tem_dest, 0);

	      if (!rtx_equal_p (tem_dest, dest))
		{
		  /* Use the same scheme as combine.c, don't put both REG_DEAD
		     and REG_UNUSED notes on the same insn.  */
		  if (!find_regno_note (tem, REG_UNUSED, REGNO (dest))
		      && !find_regno_note (tem, REG_DEAD, REGNO (dest)))
		    {
		      rtx note = alloc_EXPR_LIST (REG_DEAD, dest,
						  REG_NOTES (tem));
		      REG_NOTES (tem) = note;
		    }
		  /* The reg only dies in one insn, the last one that uses
		     it.  */
		  break;
		}
	      else if (reg_overlap_mentioned_p (dest, SET_SRC (set)))
		/* We found an instruction that both uses the register,
		   and sets it, so no new REG_NOTE is needed for this set.  */
		break;
	    }
	}
      /* If this is a set, it must die somewhere, unless it is the dest of
	 the original insn, and hence is live after the original insn.  Abort
	 if it isn't supposed to be live after the original insn.

	 If this is a clobber, then just add a REG_UNUSED note.  */
      if (tem == insn)
	{
	  int live_after_orig_insn = 0;
	  rtx pattern = PATTERN (orig_insn);
	  int i;

	  if (GET_CODE (pat) == CLOBBER)
	    {
	      rtx note = alloc_EXPR_LIST (REG_UNUSED, dest, REG_NOTES (insn));
	      REG_NOTES (insn) = note;
	      return;
	    }

	  /* The original insn could have multiple sets, so search the
	     insn for all sets.  */
	  if (GET_CODE (pattern) == SET)
	    {
	      if (reg_overlap_mentioned_p (dest, SET_DEST (pattern)))
		live_after_orig_insn = 1;
	    }
	  else if (GET_CODE (pattern) == PARALLEL)
	    {
	      for (i = 0; i < XVECLEN (pattern, 0); i++)
		if (GET_CODE (XVECEXP (pattern, 0, i)) == SET
		    && reg_overlap_mentioned_p (dest,
						SET_DEST (XVECEXP (pattern,
								   0, i))))
		  live_after_orig_insn = 1;
	    }

	  if (!live_after_orig_insn)
	    abort ();
	}
    }
}
/* Subroutine of update_flow_info.  Update the value of reg_n_sets for all
   registers modified by X.  INC is -1 if the containing insn is being deleted,
   and is 1 if the containing insn is a newly generated insn.  */

static void
update_n_sets (x, inc)
     rtx x;
     int inc;
{
  rtx dest = SET_DEST (x);

  while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
	 || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
    dest = SUBREG_REG (dest);

  if (GET_CODE (dest) == REG)
    {
      int regno = REGNO (dest);

      if (regno < FIRST_PSEUDO_REGISTER)
	{
	  register int i;
	  int endregno = regno + HARD_REGNO_NREGS (regno, GET_MODE (dest));

	  for (i = regno; i < endregno; i++)
	    REG_N_SETS (i) += inc;
	}
      else
	REG_N_SETS (regno) += inc;
    }
}
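
/* Added example: deleting an insn "(set (reg 65) ...)" calls update_n_sets
   with INC == -1 and decrements REG_N_SETS (65); each replacement insn
   produced by splitting that sets the same pseudo adds 1 back, so the
   count stays equal to the number of SETs actually present, which keeps
   local-alloc's REG_EQUAL/REG_EQUIV promotion honest.  */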
/* Updates all flow-analysis related quantities (including REG_NOTES) for
   the insns from FIRST to LAST inclusive that were created by splitting
   ORIG_INSN.  NOTES are the original REG_NOTES.  */

static void
update_flow_info (notes, first, last, orig_insn)
     rtx notes;
     rtx first, last;
     rtx orig_insn;
{
  rtx insn, note;
  rtx next;
  rtx orig_dest, temp;
  rtx set;

  /* Get and save the destination set by the original insn.  */

  orig_dest = single_set (orig_insn);
  if (orig_dest)
    orig_dest = SET_DEST (orig_dest);

  /* Move REG_NOTES from the original insn to where they now belong.  */

  for (note = notes; note; note = next)
    {
      next = XEXP (note, 1);
      switch (REG_NOTE_KIND (note))
	{
	case REG_DEAD:
	case REG_UNUSED:
	  /* Move these notes from the original insn to the last new insn where
	     the register is now set.  */

	  for (insn = last;; insn = PREV_INSN (insn))
	    {
	      if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
		  && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
		{
		  /* If this note refers to a multiple word hard register, it
		     may have been split into several smaller hard register
		     references, so handle it specially.  */
		  temp = XEXP (note, 0);
		  if (REG_NOTE_KIND (note) == REG_DEAD
		      && GET_CODE (temp) == REG
		      && REGNO (temp) < FIRST_PSEUDO_REGISTER
		      && HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) > 1)
		    split_hard_reg_notes (note, first, last);
		  else
		    {
		      XEXP (note, 1) = REG_NOTES (insn);
		      REG_NOTES (insn) = note;
		    }

		  /* Sometimes need to convert REG_UNUSED notes to REG_DEAD
		     notes.  */
		  /* ??? This won't handle multiple word registers correctly,
		     but should be good enough for now.  */
		  if (REG_NOTE_KIND (note) == REG_UNUSED
		      && GET_CODE (XEXP (note, 0)) != SCRATCH
		      && !dead_or_set_p (insn, XEXP (note, 0)))
		    PUT_REG_NOTE_KIND (note, REG_DEAD);

		  /* The reg only dies in one insn, the last one that uses
		     it.  */
		  break;
		}
	      /* It must die somewhere, fail if we couldn't find where it died.

		 If this is a REG_UNUSED note, then it must be a temporary
		 register that was not needed by this instantiation of the
		 pattern, so we can safely ignore it.  */
	      if (insn == first)
		{
		  /* After reload, REG_DEAD notes sometimes come an
		     instruction after the register actually dies.  */
		  if (reload_completed && REG_NOTE_KIND (note) == REG_DEAD)
		    {
		      XEXP (note, 1) = REG_NOTES (insn);
		      REG_NOTES (insn) = note;
		      break;
		    }

		  if (REG_NOTE_KIND (note) != REG_UNUSED)
		    abort ();

		  break;
		}
	    }
	  break;

	case REG_WAS_0:
	  /* If the insn that set the register to 0 was deleted, this
	     note cannot be relied on any longer.  The destination might
	     even have been moved to memory.
	     This was observed for SH4 with execute/920501-6.c compilation,
	     -O2 -fomit-frame-pointer -finline-functions .  */
	  if (GET_CODE (XEXP (note, 0)) == NOTE
	      || INSN_DELETED_P (XEXP (note, 0)))
	    break;
	  /* This note applies to the dest of the original insn.  Find the
	     first new insn that now has the same dest, and move the note
	     there.  */

	  if (!orig_dest)
	    abort ();

	  for (insn = first;; insn = NEXT_INSN (insn))
	    {
	      if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
		  && (temp = single_set (insn))
		  && rtx_equal_p (SET_DEST (temp), orig_dest))
		{
		  XEXP (note, 1) = REG_NOTES (insn);
		  REG_NOTES (insn) = note;
		  /* The reg is only zero before one insn, the first that
		     uses it.  */
		  break;
		}
	      /* If this note refers to a multiple word hard
		 register, it may have been split into several smaller
		 hard register references.  We could split the notes,
		 but simply dropping them is good enough.  */
	      if (GET_CODE (orig_dest) == REG
		  && REGNO (orig_dest) < FIRST_PSEUDO_REGISTER
		  && HARD_REGNO_NREGS (REGNO (orig_dest),
				       GET_MODE (orig_dest)) > 1)
		break;
	      /* It must be set somewhere, fail if we couldn't find where it
		 was set.  */
	      if (insn == last)
		abort ();
	    }
	  break;

	case REG_EQUAL:
	case REG_EQUIV:
	  /* A REG_EQUIV or REG_EQUAL note on an insn with more than one
	     set is meaningless.  Just drop the note.  */
	  if (!orig_dest)
	    break;

	case REG_NO_CONFLICT:
	  /* These notes apply to the dest of the original insn.  Find the last
	     new insn that now has the same dest, and move the note there.  */

	  if (!orig_dest)
	    abort ();

	  for (insn = last;; insn = PREV_INSN (insn))
	    {
	      if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
		  && (temp = single_set (insn))
		  && rtx_equal_p (SET_DEST (temp), orig_dest))
		{
		  XEXP (note, 1) = REG_NOTES (insn);
		  REG_NOTES (insn) = note;
		  /* Only put this note on one of the new insns.  */
		  break;
		}

	      /* The original dest must still be set someplace.  Abort if we
		 couldn't find it.  */
	      if (insn == first)
		{
		  /* However, if this note refers to a multiple word hard
		     register, it may have been split into several smaller
		     hard register references.  We could split the notes,
		     but simply dropping them is good enough.  */
		  if (GET_CODE (orig_dest) == REG
		      && REGNO (orig_dest) < FIRST_PSEUDO_REGISTER
		      && HARD_REGNO_NREGS (REGNO (orig_dest),
					   GET_MODE (orig_dest)) > 1)
		    break;
		  /* Likewise for multi-word memory references.  */
		  if (GET_CODE (orig_dest) == MEM
		      && SIZE_FOR_MODE (orig_dest) > UNITS_PER_WORD)
		    break;
		  abort ();
		}
	    }
	  break;

	case REG_LIBCALL:
	  /* Move a REG_LIBCALL note to the first insn created, and update
	     the corresponding REG_RETVAL note.  */
	  XEXP (note, 1) = REG_NOTES (first);
	  REG_NOTES (first) = note;

	  insn = XEXP (note, 0);
	  note = find_reg_note (insn, REG_RETVAL, NULL_RTX);
	  if (note)
	    XEXP (note, 0) = first;
	  break;

	case REG_EXEC_COUNT:
	  /* Move a REG_EXEC_COUNT note to the first insn created.  */
	  XEXP (note, 1) = REG_NOTES (first);
	  REG_NOTES (first) = note;
	  break;

	case REG_RETVAL:
	  /* Move a REG_RETVAL note to the last insn created, and update
	     the corresponding REG_LIBCALL note.  */
	  XEXP (note, 1) = REG_NOTES (last);
	  REG_NOTES (last) = note;

	  insn = XEXP (note, 0);
	  note = find_reg_note (insn, REG_LIBCALL, NULL_RTX);
	  if (note)
	    XEXP (note, 0) = last;
	  break;

	case REG_NONNEG:
	  /* This should be moved to whichever instruction is a JUMP_INSN.  */

	  for (insn = last;; insn = PREV_INSN (insn))
	    {
	      if (GET_CODE (insn) == JUMP_INSN)
		{
		  XEXP (note, 1) = REG_NOTES (insn);
		  REG_NOTES (insn) = note;
		  /* Only put this note on one of the new insns.  */
		  break;
		}
	      /* Fail if we couldn't find a JUMP_INSN.  */
	      if (insn == first)
		abort ();
	    }
	  break;

	case REG_INC:
	  /* reload sometimes leaves obsolete REG_INC notes around.  */
	  if (reload_completed)
	    break;
	  /* This should be moved to whichever instruction now has the
	     increment operation.  */
	  abort ();

	case REG_LABEL:
	  /* Should be moved to the new insn(s) which use the label.  */
	  for (insn = first; insn != NEXT_INSN (last); insn = NEXT_INSN (insn))
	    if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
		&& reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
	      REG_NOTES (insn) = alloc_EXPR_LIST (REG_LABEL,
						  XEXP (note, 0),
						  REG_NOTES (insn));
	  break;

	case REG_CC_SETTER:
	case REG_CC_USER:
	  /* These two notes will never appear until after reorg, so we don't
	     have to handle them here.  */
	default:
	  abort ();
	}
    }

  /* Each new insn created, except the last, has a new set.  If the destination
     is a register, then this reg is now live across several insns, whereas
     previously the dest reg was born and died within the same insn.  To
     reflect this, we now need a REG_DEAD note on the insn where this
     dest reg dies.

     Similarly, the new insns may have clobbers that need REG_UNUSED notes.  */

  for (insn = first; insn != last; insn = NEXT_INSN (insn))
    {
      rtx pat;
      int i;

      pat = PATTERN (insn);
      if (GET_CODE (pat) == SET || GET_CODE (pat) == CLOBBER)
	new_insn_dead_notes (pat, insn, last, orig_insn);
      else if (GET_CODE (pat) == PARALLEL)
	{
	  for (i = 0; i < XVECLEN (pat, 0); i++)
	    if (GET_CODE (XVECEXP (pat, 0, i)) == SET
		|| GET_CODE (XVECEXP (pat, 0, i)) == CLOBBER)
	      new_insn_dead_notes (XVECEXP (pat, 0, i), insn, last, orig_insn);
	}
    }

  /* If any insn, except the last, uses the register set by the last insn,
     then we need a new REG_DEAD note on that insn.  In this case, there
     would not have been a REG_DEAD note for this register in the original
     insn because it was used and set within one insn.  */

  set = single_set (last);
  if (set)
    {
      rtx dest = SET_DEST (set);

      while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
	     || GET_CODE (dest) == STRICT_LOW_PART
	     || GET_CODE (dest) == SIGN_EXTRACT)
	dest = XEXP (dest, 0);

      if (GET_CODE (dest) == REG
	  /* Global registers are always live, so the code below does not
	     apply to them.  */
	  && (REGNO (dest) >= FIRST_PSEUDO_REGISTER
	      || ! global_regs[REGNO (dest)]))
	{
	  rtx stop_insn = PREV_INSN (first);

	  /* If the last insn uses the register that it is setting, then
	     we don't want to put a REG_DEAD note there.  Search backwards
	     to find the first insn that sets but does not use DEST.  */

	  insn = last;
	  if (reg_overlap_mentioned_p (dest, SET_SRC (set)))
	    {
	      for (insn = PREV_INSN (insn); insn != first;
		   insn = PREV_INSN (insn))
		{
		  if ((set = single_set (insn))
		      && reg_mentioned_p (dest, SET_DEST (set))
		      && ! reg_overlap_mentioned_p (dest, SET_SRC (set)))
		    break;
		}
	    }

	  /* Now find the first insn that uses but does not set DEST.  */

	  for (insn = PREV_INSN (insn); insn != stop_insn;
	       insn = PREV_INSN (insn))
	    {
	      if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
		  && reg_mentioned_p (dest, PATTERN (insn))
		  && (set = single_set (insn)))
		{
		  rtx insn_dest = SET_DEST (set);

		  while (GET_CODE (insn_dest) == ZERO_EXTRACT
			 || GET_CODE (insn_dest) == SUBREG
			 || GET_CODE (insn_dest) == STRICT_LOW_PART
			 || GET_CODE (insn_dest) == SIGN_EXTRACT)
		    insn_dest = XEXP (insn_dest, 0);

		  if (insn_dest != dest)
		    {
		      note = alloc_EXPR_LIST (REG_DEAD, dest, REG_NOTES (insn));
		      REG_NOTES (insn) = note;
		      /* The reg only dies in one insn, the last one
			 that uses it.  */
		      break;
		    }
		}
	    }
	}
    }

  /* If the original dest is modifying a multiple register target, and the
     original instruction was split such that the original dest is now set
     by two or more SUBREG sets, then the split insns no longer kill the
     destination of the original insn.

     In this case, if there exists an instruction in the same basic block,
     before the split insn, which uses the original dest, and this use is
     killed by the original insn, then we must remove the REG_DEAD note on
     this insn, because it is now superfluous.

     This does not apply when a hard register gets split, because the code
     knows how to handle overlapping hard registers properly.  */
  if (orig_dest && GET_CODE (orig_dest) == REG)
    {
      int found_orig_dest = 0;
      int found_split_dest = 0;

      for (insn = first;; insn = NEXT_INSN (insn))
	{
	  rtx pat;
	  int i;

	  /* I'm not sure if this can happen, but let's be safe.  */
	  if (GET_RTX_CLASS (GET_CODE (insn)) != 'i')
	    continue;

	  pat = PATTERN (insn);
	  i = GET_CODE (pat) == PARALLEL ? XVECLEN (pat, 0) : 0;
	  set = pat;

	  for (;;)
	    {
	      if (GET_CODE (set) == SET)
		{
		  if (GET_CODE (SET_DEST (set)) == REG
		      && REGNO (SET_DEST (set)) == REGNO (orig_dest))
		    {
		      found_orig_dest = 1;
		      break;
		    }
		  else if (GET_CODE (SET_DEST (set)) == SUBREG
			   && SUBREG_REG (SET_DEST (set)) == orig_dest)
		    {
		      found_split_dest = 1;
		      break;
		    }
		}
	      if (--i < 0)
		break;
	      set = XVECEXP (pat, 0, i);
	    }

	  if (insn == last)
	    break;
	}

      if (found_split_dest)
	{
	  /* Search backwards from FIRST, looking for the first insn that uses
	     the original dest.  Stop if we pass a CODE_LABEL or a JUMP_INSN.
	     If we find an insn, and it has a REG_DEAD note, then delete the
	     note.  */

	  for (insn = first; insn; insn = PREV_INSN (insn))
	    {
	      if (GET_CODE (insn) == CODE_LABEL
		  || GET_CODE (insn) == JUMP_INSN)
		break;
	      else if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
		       && reg_mentioned_p (orig_dest, insn))
		{
		  note = find_regno_note (insn, REG_DEAD, REGNO (orig_dest));
		  if (note)
		    remove_note (insn, note);
		}
	    }
	}
      else if (!found_orig_dest)
	{
	  /* This should never happen.  */
	  abort ();
	}
    }

  /* Update reg_n_sets.  This is necessary to prevent local alloc from
     converting REG_EQUAL notes to REG_EQUIV when splitting has modified
     a reg from set once to set multiple times.  */

  {
    rtx x = PATTERN (orig_insn);
    RTX_CODE code = GET_CODE (x);

    if (code == SET || code == CLOBBER)
      update_n_sets (x, -1);
    else if (code == PARALLEL)
      {
	int i;
	for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
	  {
	    code = GET_CODE (XVECEXP (x, 0, i));
	    if (code == SET || code == CLOBBER)
	      update_n_sets (XVECEXP (x, 0, i), -1);
	  }
      }

    for (insn = first;; insn = NEXT_INSN (insn))
      {
	x = PATTERN (insn);
	code = GET_CODE (x);

	if (code == SET || code == CLOBBER)
	  update_n_sets (x, 1);
	else if (code == PARALLEL)
	  {
	    int i;
	    for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
	      {
		code = GET_CODE (XVECEXP (x, 0, i));
		if (code == SET || code == CLOBBER)
		  update_n_sets (XVECEXP (x, 0, i), 1);
	      }
	  }

	if (insn == last)
	  break;
      }
  }
}
/* Do the splitting of insns in the block b.  */

static void
split_block_insns (b)
     int b;
{
  rtx insn, next;

  for (insn = basic_block_head[b];; insn = next)
    {
      rtx set, last, first, notes;

      /* Can't use `next_real_insn' because that
	 might go across CODE_LABELS and short-out basic blocks.  */
      next = NEXT_INSN (insn);
      if (GET_CODE (insn) != INSN)
	{
	  if (insn == basic_block_end[b])
	    break;

	  continue;
	}

      /* Don't split no-op move insns.  These should silently disappear
	 later in final.  Splitting such insns would break the code
	 that handles REG_NO_CONFLICT blocks.  */
      set = single_set (insn);
      if (set && rtx_equal_p (SET_SRC (set), SET_DEST (set)))
	{
	  if (insn == basic_block_end[b])
	    break;

	  /* Nops get in the way while scheduling, so delete them now if
	     register allocation has already been done.  It is too risky
	     to try to do this before register allocation, and there are
	     unlikely to be very many nops then anyways.  */
	  if (reload_completed)
	    {
	      PUT_CODE (insn, NOTE);
	      NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
	      NOTE_SOURCE_FILE (insn) = 0;
	    }

	  continue;
	}

      /* Split insns here to get max fine-grain parallelism.  */
      first = PREV_INSN (insn);
      notes = REG_NOTES (insn);
      last = try_split (PATTERN (insn), insn, 1);
      if (last != insn)
	{
	  /* try_split returns the NOTE that INSN became.  */
	  first = NEXT_INSN (first);
	  update_flow_info (notes, first, last, insn);

	  PUT_CODE (insn, NOTE);
	  NOTE_SOURCE_FILE (insn) = 0;
	  NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
	  if (insn == basic_block_head[b])
	    basic_block_head[b] = first;
	  if (insn == basic_block_end[b])
	    {
	      basic_block_end[b] = last;
	      break;
	    }
	}

      if (insn == basic_block_end[b])
	break;
    }
}
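
/* Added note on try_split's contract as used above: when no splitter in
   the machine description matches, it returns INSN unchanged and nothing
   further happens; otherwise the return value is treated as the last insn
   of the replacement sequence, the original insn is rewritten into a
   NOTE_INSN_DELETED, and update_flow_info re-homes the saved REG_NOTES
   onto the new insns and patches the block boundaries.  */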
/* The one entry point in this file.  DUMP_FILE is the dump file for
   this pass.  */

void
schedule_insns (dump_file)
     FILE *dump_file;
{
  int max_uid;
  int b;
  rtx insn;
  int rgn;

  int luid;

  /* Disable speculative loads if cc0 is defined, since they do not work
     in its presence.  */
#ifdef HAVE_cc0
  flag_schedule_speculative_load = 0;
#endif

  /* Taking care of this degenerate case makes the rest of
     this code simpler.  */
  if (n_basic_blocks == 0)
    return;

  /* set dump and sched_verbose for the desired debugging output.  If no
     dump-file was specified, but -fsched-verbose-N (any N), print to stderr.
     For -fsched-verbose-N, N>=10, print everything to stderr.  */
  sched_verbose = sched_verbose_param;
  if (sched_verbose_param == 0 && dump_file)
    sched_verbose = 1;
  dump = ((sched_verbose_param >= 10 || !dump_file) ? stderr : dump_file);

  nr_inter = 0;
  nr_spec = 0;

  /* Initialize the unused_*_lists.  We can't use the ones left over from
     the previous function, because gcc has freed that memory.  We can use
     the ones left over from the first sched pass in the second pass however,
     so only clear them on the first sched pass.  The first pass is before
     reload if flag_schedule_insns is set, otherwise it is afterwards.  */

  if (reload_completed == 0 || !flag_schedule_insns)
    {
      unused_insn_list = 0;
      unused_expr_list = 0;
    }

  /* initialize issue_rate */
  issue_rate = ISSUE_RATE;

  /* do the splitting first for all blocks */
  for (b = 0; b < n_basic_blocks; b++)
    split_block_insns (b);

  max_uid = (get_max_uid () + 1);

  cant_move = (char *) xmalloc (max_uid * sizeof (char));
  bzero ((char *) cant_move, max_uid * sizeof (char));

  fed_by_spec_load = (char *) xmalloc (max_uid * sizeof (char));
  bzero ((char *) fed_by_spec_load, max_uid * sizeof (char));

  is_load_insn = (char *) xmalloc (max_uid * sizeof (char));
  bzero ((char *) is_load_insn, max_uid * sizeof (char));

  insn_orig_block = (int *) xmalloc (max_uid * sizeof (int));
  insn_luid = (int *) xmalloc (max_uid * sizeof (int));

  luid = 0;
  for (b = 0; b < n_basic_blocks; b++)
    for (insn = basic_block_head[b];; insn = NEXT_INSN (insn))
      {
	INSN_BLOCK (insn) = b;
	INSN_LUID (insn) = luid++;

	if (insn == basic_block_end[b])
	  break;
      }

  /* after reload, remove inter-block dependences computed before reload.  */
  if (reload_completed)
    {
      int b;
      rtx insn;

      for (b = 0; b < n_basic_blocks; b++)
	for (insn = basic_block_head[b];; insn = NEXT_INSN (insn))
	  {
	    rtx link, prev;

	    if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
	      {
		prev = NULL_RTX;
		link = LOG_LINKS (insn);
		while (link)
		  {
		    rtx x = XEXP (link, 0);

		    if (INSN_BLOCK (x) != b)
		      {
			remove_dependence (insn, x);
			link = prev ? XEXP (prev, 1) : LOG_LINKS (insn);
		      }
		    else
		      prev = link, link = XEXP (prev, 1);
		  }
	      }

	    if (insn == basic_block_end[b])
	      break;
	  }
    }

  nr_regions = 0;
  rgn_table = (region *) alloca ((n_basic_blocks) * sizeof (region));
  rgn_bb_table = (int *) alloca ((n_basic_blocks) * sizeof (int));
  block_to_bb = (int *) alloca ((n_basic_blocks) * sizeof (int));
  containing_rgn = (int *) alloca ((n_basic_blocks) * sizeof (int));

  /* compute regions for scheduling */
  if (reload_completed
      || n_basic_blocks == 1
      || !flag_schedule_interblock)
    {
      find_single_block_region ();
    }
  else
    {
      /* verify that a 'good' control flow graph can be built */
      if (is_cfg_nonregular ())
	{
	  find_single_block_region ();
	}
      else
	{
	  int_list_ptr *s_preds, *s_succs;
	  int *num_preds, *num_succs;
	  sbitmap *dom, *pdom;

	  s_preds = (int_list_ptr *) alloca (n_basic_blocks
					     * sizeof (int_list_ptr));
	  s_succs = (int_list_ptr *) alloca (n_basic_blocks
					     * sizeof (int_list_ptr));
	  num_preds = (int *) alloca (n_basic_blocks * sizeof (int));
	  num_succs = (int *) alloca (n_basic_blocks * sizeof (int));
	  dom = sbitmap_vector_alloc (n_basic_blocks, n_basic_blocks);
	  pdom = sbitmap_vector_alloc (n_basic_blocks, n_basic_blocks);

	  /* The scheduler runs after flow; therefore, we can't blindly call
	     back into find_basic_blocks since doing so could invalidate the
	     info in basic_block_live_at_start.

	     Consider a block consisting entirely of dead stores; after life
	     analysis it would be a block of NOTE_INSN_DELETED notes.  If
	     we call find_basic_blocks again, then the block would be removed
	     entirely and invalidate the register live information.

	     We could (should?) recompute register live information.  Doing
	     so may even be beneficial.  */

	  compute_preds_succs (s_preds, s_succs, num_preds, num_succs);

	  /* Compute the dominators and post dominators.  We don't currently use
	     post dominators, but we should for speculative motion analysis.  */
	  compute_dominators (dom, pdom, s_preds, s_succs);

	  /* build_control_flow will return nonzero if it detects unreachable
	     blocks or any other irregularity with the cfg which prevents
	     cross block scheduling.  */
	  if (build_control_flow (s_preds, s_succs, num_preds, num_succs) != 0)
	    find_single_block_region ();
	  else
	    find_rgns (s_preds, s_succs, num_preds, num_succs, dom);

	  if (sched_verbose >= 3)
	    debug_regions ();

	  /* For now.  This will move as more and more of haifa is converted
	     to using the cfg code in flow.c  */
	  free_bb_mem ();
	  free (dom);
	  free (pdom);
	}
    }

  /* Allocate data for this pass.  See comments, above,
     for what these vectors do.

     We use xmalloc instead of alloca, because max_uid can be very large
     when there is a lot of function inlining.  If we used alloca, we could
     exceed stack limits on some hosts for some inputs.  */
  insn_priority = (int *) xmalloc (max_uid * sizeof (int));
  insn_reg_weight = (int *) xmalloc (max_uid * sizeof (int));
  insn_tick = (int *) xmalloc (max_uid * sizeof (int));
  insn_costs = (short *) xmalloc (max_uid * sizeof (short));
  insn_units = (short *) xmalloc (max_uid * sizeof (short));
  insn_blockage = (unsigned int *) xmalloc (max_uid * sizeof (unsigned int));
  insn_ref_count = (int *) xmalloc (max_uid * sizeof (int));

  /* Allocate for forward dependencies */
  insn_dep_count = (int *) xmalloc (max_uid * sizeof (int));
  insn_depend = (rtx *) xmalloc (max_uid * sizeof (rtx));

  if (reload_completed == 0)
    {
      int i;

      sched_reg_n_calls_crossed = (int *) alloca (max_regno * sizeof (int));
      sched_reg_live_length = (int *) alloca (max_regno * sizeof (int));
      sched_reg_basic_block = (int *) alloca (max_regno * sizeof (int));
      bb_live_regs = ALLOCA_REG_SET ();
      bzero ((char *) sched_reg_n_calls_crossed, max_regno * sizeof (int));
      bzero ((char *) sched_reg_live_length, max_regno * sizeof (int));

      for (i = 0; i < max_regno; i++)
	sched_reg_basic_block[i] = REG_BLOCK_UNKNOWN;
    }
  else
    {
      sched_reg_n_calls_crossed = 0;
      sched_reg_live_length = 0;
      bb_live_regs = 0;
    }
  init_alias_analysis ();

  if (write_symbols != NO_DEBUG)
    {
      rtx line;

      line_note = (rtx *) xmalloc (max_uid * sizeof (rtx));
      bzero ((char *) line_note, max_uid * sizeof (rtx));
      line_note_head = (rtx *) alloca (n_basic_blocks * sizeof (rtx));
      bzero ((char *) line_note_head, n_basic_blocks * sizeof (rtx));

      /* Save-line-note-head:
	 Determine the line-number at the start of each basic block.
	 This must be computed and saved now, because after a basic block's
	 predecessor has been scheduled, it is impossible to accurately
	 determine the correct line number for the first insn of the block.  */

      for (b = 0; b < n_basic_blocks; b++)
	for (line = basic_block_head[b]; line; line = PREV_INSN (line))
	  if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
	    {
	      line_note_head[b] = line;
	      break;
	    }
    }

  bzero ((char *) insn_priority, max_uid * sizeof (int));
  bzero ((char *) insn_reg_weight, max_uid * sizeof (int));
  bzero ((char *) insn_tick, max_uid * sizeof (int));
  bzero ((char *) insn_costs, max_uid * sizeof (short));
  bzero ((char *) insn_units, max_uid * sizeof (short));
  bzero ((char *) insn_blockage, max_uid * sizeof (unsigned int));
  bzero ((char *) insn_ref_count, max_uid * sizeof (int));

  /* Initialize for forward dependencies */
  bzero ((char *) insn_depend, max_uid * sizeof (rtx));
  bzero ((char *) insn_dep_count, max_uid * sizeof (int));

  /* Find units used in this function, for visualization */
  if (sched_verbose)
    init_target_units ();

  /* ??? Add a NOTE after the last insn of the last basic block.  It is not
     known why this is done.  */

  insn = basic_block_end[n_basic_blocks - 1];
  if (NEXT_INSN (insn) == 0
      || (GET_CODE (insn) != NOTE
	  && GET_CODE (insn) != CODE_LABEL
	  /* Don't emit a NOTE if it would end up between an unconditional
	     jump and a BARRIER.  */
	  && !(GET_CODE (insn) == JUMP_INSN
	       && GET_CODE (NEXT_INSN (insn)) == BARRIER)))
    emit_note_after (NOTE_INSN_DELETED, basic_block_end[n_basic_blocks - 1]);

  /* Schedule every region in the subroutine */
  for (rgn = 0; rgn < nr_regions; rgn++)
    {
      schedule_region (rgn);
    }

  /* Reposition the prologue and epilogue notes in case we moved the
     prologue/epilogue insns.  */
  if (reload_completed)
    reposition_prologue_and_epilogue_notes (get_insns ());

  /* delete redundant line notes.  */
  if (write_symbols != NO_DEBUG)
    rm_redundant_line_notes ();

  /* Update information about uses of registers in the subroutine.  */
  if (reload_completed == 0)
    update_reg_usage ();

  if (sched_verbose)
    {
      if (reload_completed == 0 && flag_schedule_interblock)
	{
	  fprintf (dump, "\n;; Procedure interblock/speculative motions == %d/%d \n",
		   nr_inter, nr_spec);
	}
      else
	{
	  if (nr_inter > 0)
	    abort ();
	}
      fprintf (dump, "\n\n");
    }

  free (cant_move);
  free (fed_by_spec_load);
  free (is_load_insn);
  free (insn_orig_block);
  free (insn_luid);

  free (insn_priority);
  free (insn_reg_weight);
  free (insn_tick);
  free (insn_costs);
  free (insn_units);
  free (insn_blockage);
  free (insn_ref_count);

  free (insn_dep_count);
  free (insn_depend);

  if (write_symbols != NO_DEBUG)
    free (line_note);

  if (bb_live_regs)
    FREE_REG_SET (bb_live_regs);

  if (edge_table)
    {
      free (edge_table);
      edge_table = NULL;
    }

  if (in_edges)
    {
      free (in_edges);
      in_edges = NULL;
    }

  if (out_edges)
    {
      free (out_edges);
      out_edges = NULL;
    }
}

#endif /* INSN_SCHEDULING */