2002-02-19 Philip Blundell <philb@gnu.org>
[official-gcc.git] / gcc / reorg.c
blobb1c33f5bf0dfce34a71dcf571172479ac7df1fea
1 /* Perform instruction reorganizations for delay slot filling.
2 Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998,
3 1999, 2000, 2001, 2002 Free Software Foundation, Inc.
4 Contributed by Richard Kenner (kenner@vlsi1.ultra.nyu.edu).
5 Hacked by Michael Tiemann (tiemann@cygnus.com).
7 This file is part of GCC.
9 GCC is free software; you can redistribute it and/or modify it under
10 the terms of the GNU General Public License as published by the Free
11 Software Foundation; either version 2, or (at your option) any later
12 version.
14 GCC is distributed in the hope that it will be useful, but WITHOUT ANY
15 WARRANTY; without even the implied warranty of MERCHANTABILITY or
16 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
17 for more details.
19 You should have received a copy of the GNU General Public License
20 along with GCC; see the file COPYING. If not, write to the Free
21 Software Foundation, 59 Temple Place - Suite 330, Boston, MA
22 02111-1307, USA. */
24 /* Instruction reorganization pass.
26 This pass runs after register allocation and final jump
27 optimization. It should be the last pass to run before peephole.
28 It serves primarily to fill delay slots of insns, typically branch
29 and call insns. Other insns typically involve more complicated
30 interactions of data dependencies and resource constraints, and
31 are better handled by scheduling before register allocation (by the
32 function `schedule_insns').
34 The Branch Penalty is the number of extra cycles that are needed to
35 execute a branch insn. On an ideal machine, branches take a single
36 cycle, and the Branch Penalty is 0. Several RISC machines approach
37 branch delays differently:
39 The MIPS and AMD 29000 have a single branch delay slot. Most insns
40 (except other branches) can be used to fill this slot. When the
41 slot is filled, two insns execute in two cycles, reducing the
42 branch penalty to zero.
44 The Motorola 88000 conditionally exposes its branch delay slot,
45 so code is shorter when it is turned off, but will run faster
46 when useful insns are scheduled there.
48 The IBM ROMP has two forms of branch and call insns, both with and
49 without a delay slot. Much like the 88k, insns not using the delay
50 slot can be shorted (2 bytes vs. 4 bytes), but will run slowed.
52 The SPARC always has a branch delay slot, but its effects can be
53 annulled when the branch is not taken. This means that failing to
54 find other sources of insns, we can hoist an insn from the branch
55 target that would only be safe to execute knowing that the branch
56 is taken.
58 The HP-PA always has a branch delay slot. For unconditional branches
59 its effects can be annulled when the branch is taken. The effects
60 of the delay slot in a conditional branch can be nullified for forward
61 taken branches, or for untaken backward branches. This means
62 we can hoist insns from the fall-through path for forward branches or
63 steal insns from the target of backward branches.
65 The TMS320C3x and C4x have three branch delay slots. When the three
66 slots are filled, the branch penalty is zero. Most insns can fill the
67 delay slots except jump insns.
69 Three techniques for filling delay slots have been implemented so far:
71 (1) `fill_simple_delay_slots' is the simplest, most efficient way
72 to fill delay slots. This pass first looks for insns which come
73 from before the branch and which are safe to execute after the
74 branch. Then it searches after the insn requiring delay slots or,
75 in the case of a branch, for insns that are after the point at
76 which the branch merges into the fallthrough code, if such a point
77 exists. When such insns are found, the branch penalty decreases
78 and no code expansion takes place.
80 (2) `fill_eager_delay_slots' is more complicated: it is used for
81 scheduling conditional jumps, or for scheduling jumps which cannot
82 be filled using (1). A machine need not have annulled jumps to use
83 this strategy, but it helps (by keeping more options open).
84 `fill_eager_delay_slots' tries to guess the direction the branch
85 will go; if it guesses right 100% of the time, it can reduce the
86 branch penalty as much as `fill_simple_delay_slots' does. If it
87 guesses wrong 100% of the time, it might as well schedule nops (or
88 on the m88k, unexpose the branch slot). When
89 `fill_eager_delay_slots' takes insns from the fall-through path of
90 the jump, usually there is no code expansion; when it takes insns
91 from the branch target, there is code expansion if it is not the
92 only way to reach that target.
94 (3) `relax_delay_slots' uses a set of rules to simplify code that
95 has been reorganized by (1) and (2). It finds cases where
96 conditional test can be eliminated, jumps can be threaded, extra
97 insns can be eliminated, etc. It is the job of (1) and (2) to do a
98 good job of scheduling locally; `relax_delay_slots' takes care of
99 making the various individual schedules work well together. It is
100 especially tuned to handle the control flow interactions of branch
101 insns. It does nothing for insns with delay slots that do not
102 branch.
104 On machines that use CC0, we are very conservative. We will not make
105 a copy of an insn involving CC0 since we want to maintain a 1-1
106 correspondence between the insn that sets and uses CC0. The insns are
107 allowed to be separated by placing an insn that sets CC0 (but not an insn
108 that uses CC0; we could do this, but it doesn't seem worthwhile) in a
109 delay slot. In that case, we point each insn at the other with REG_CC_USER
110 and REG_CC_SETTER notes. Note that these restrictions affect very few
111 machines because most RISC machines with delay slots will not use CC0
112 (the RT is the only known exception at this point).
114 Not yet implemented:
116 The Acorn Risc Machine can conditionally execute most insns, so
117 it is profitable to move single insns into a position to execute
118 based on the condition code of the previous insn.
120 The HP-PA can conditionally nullify insns, providing a similar
121 effect to the ARM, differing mostly in which insn is "in charge". */
123 #include "config.h"
124 #include "system.h"
125 #include "toplev.h"
126 #include "rtl.h"
127 #include "tm_p.h"
128 #include "expr.h"
129 #include "function.h"
130 #include "insn-config.h"
131 #include "conditions.h"
132 #include "hard-reg-set.h"
133 #include "basic-block.h"
134 #include "regs.h"
135 #include "recog.h"
136 #include "flags.h"
137 #include "output.h"
138 #include "obstack.h"
139 #include "insn-attr.h"
140 #include "resource.h"
141 #include "except.h"
142 #include "params.h"
144 #ifdef DELAY_SLOTS
146 #define obstack_chunk_alloc xmalloc
147 #define obstack_chunk_free free
149 #ifndef ANNUL_IFTRUE_SLOTS
150 #define eligible_for_annul_true(INSN, SLOTS, TRIAL, FLAGS) 0
151 #endif
152 #ifndef ANNUL_IFFALSE_SLOTS
153 #define eligible_for_annul_false(INSN, SLOTS, TRIAL, FLAGS) 0
154 #endif
156 /* Insns which have delay slots that have not yet been filled. */
158 static struct obstack unfilled_slots_obstack;
159 static rtx *unfilled_firstobj;
161 /* Define macros to refer to the first and last slot containing unfilled
162 insns. These are used because the list may move and its address
163 should be recomputed at each use. */
165 #define unfilled_slots_base \
166 ((rtx *) obstack_base (&unfilled_slots_obstack))
168 #define unfilled_slots_next \
169 ((rtx *) obstack_next_free (&unfilled_slots_obstack))
171 /* Points to the label before the end of the function. */
172 static rtx end_of_function_label;
174 /* Mapping between INSN_UID's and position in the code since INSN_UID's do
175 not always monotonically increase. */
176 static int *uid_to_ruid;
178 /* Highest valid index in `uid_to_ruid'. */
179 static int max_uid;
181 static int stop_search_p PARAMS ((rtx, int));
182 static int resource_conflicts_p PARAMS ((struct resources *,
183 struct resources *));
184 static int insn_references_resource_p PARAMS ((rtx, struct resources *, int));
185 static int insn_sets_resource_p PARAMS ((rtx, struct resources *, int));
186 static rtx find_end_label PARAMS ((void));
187 static rtx emit_delay_sequence PARAMS ((rtx, rtx, int));
188 static rtx add_to_delay_list PARAMS ((rtx, rtx));
189 static rtx delete_from_delay_slot PARAMS ((rtx));
190 static void delete_scheduled_jump PARAMS ((rtx));
191 static void note_delay_statistics PARAMS ((int, int));
192 #if defined(ANNUL_IFFALSE_SLOTS) || defined(ANNUL_IFTRUE_SLOTS)
193 static rtx optimize_skip PARAMS ((rtx));
194 #endif
195 static int get_jump_flags PARAMS ((rtx, rtx));
196 static int rare_destination PARAMS ((rtx));
197 static int mostly_true_jump PARAMS ((rtx, rtx));
198 static rtx get_branch_condition PARAMS ((rtx, rtx));
199 static int condition_dominates_p PARAMS ((rtx, rtx));
200 static int redirect_with_delay_slots_safe_p PARAMS ((rtx, rtx, rtx));
201 static int redirect_with_delay_list_safe_p PARAMS ((rtx, rtx, rtx));
202 static int check_annul_list_true_false PARAMS ((int, rtx));
203 static rtx steal_delay_list_from_target PARAMS ((rtx, rtx, rtx, rtx,
204 struct resources *,
205 struct resources *,
206 struct resources *,
207 int, int *, int *, rtx *));
208 static rtx steal_delay_list_from_fallthrough PARAMS ((rtx, rtx, rtx, rtx,
209 struct resources *,
210 struct resources *,
211 struct resources *,
212 int, int *, int *));
213 static void try_merge_delay_insns PARAMS ((rtx, rtx));
214 static rtx redundant_insn PARAMS ((rtx, rtx, rtx));
215 static int own_thread_p PARAMS ((rtx, rtx, int));
216 static void update_block PARAMS ((rtx, rtx));
217 static int reorg_redirect_jump PARAMS ((rtx, rtx));
218 static void update_reg_dead_notes PARAMS ((rtx, rtx));
219 static void fix_reg_dead_note PARAMS ((rtx, rtx));
220 static void update_reg_unused_notes PARAMS ((rtx, rtx));
221 static void fill_simple_delay_slots PARAMS ((int));
222 static rtx fill_slots_from_thread PARAMS ((rtx, rtx, rtx, rtx, int, int,
223 int, int, int *, rtx));
224 static void fill_eager_delay_slots PARAMS ((void));
225 static void relax_delay_slots PARAMS ((rtx));
226 #ifdef HAVE_return
227 static void make_return_insns PARAMS ((rtx));
228 #endif
230 /* Return TRUE if this insn should stop the search for insn to fill delay
231 slots. LABELS_P indicates that labels should terminate the search.
232 In all cases, jumps terminate the search. */
234 static int
235 stop_search_p (insn, labels_p)
236 rtx insn;
237 int labels_p;
239 if (insn == 0)
240 return 1;
242 switch (GET_CODE (insn))
244 case NOTE:
245 case CALL_INSN:
246 return 0;
248 case CODE_LABEL:
249 return labels_p;
251 case JUMP_INSN:
252 case BARRIER:
253 return 1;
255 case INSN:
256 /* OK unless it contains a delay slot or is an `asm' insn of some type.
257 We don't know anything about these. */
258 return (GET_CODE (PATTERN (insn)) == SEQUENCE
259 || GET_CODE (PATTERN (insn)) == ASM_INPUT
260 || asm_noperands (PATTERN (insn)) >= 0);
262 default:
263 abort ();
267 /* Return TRUE if any resources are marked in both RES1 and RES2 or if either
268 resource set contains a volatile memory reference. Otherwise, return FALSE. */
270 static int
271 resource_conflicts_p (res1, res2)
272 struct resources *res1, *res2;
274 if ((res1->cc && res2->cc) || (res1->memory && res2->memory)
275 || (res1->unch_memory && res2->unch_memory)
276 || res1->volatil || res2->volatil)
277 return 1;
279 #ifdef HARD_REG_SET
280 return (res1->regs & res2->regs) != HARD_CONST (0);
281 #else
283 int i;
285 for (i = 0; i < HARD_REG_SET_LONGS; i++)
286 if ((res1->regs[i] & res2->regs[i]) != 0)
287 return 1;
288 return 0;
290 #endif
293 /* Return TRUE if any resource marked in RES, a `struct resources', is
294 referenced by INSN. If INCLUDE_DELAYED_EFFECTS is set, return if the called
295 routine is using those resources.
297 We compute this by computing all the resources referenced by INSN and
298 seeing if this conflicts with RES. It might be faster to directly check
299 ourselves, and this is the way it used to work, but it means duplicating
300 a large block of complex code. */
302 static int
303 insn_references_resource_p (insn, res, include_delayed_effects)
304 rtx insn;
305 struct resources *res;
306 int include_delayed_effects;
308 struct resources insn_res;
310 CLEAR_RESOURCE (&insn_res);
311 mark_referenced_resources (insn, &insn_res, include_delayed_effects);
312 return resource_conflicts_p (&insn_res, res);
315 /* Return TRUE if INSN modifies resources that are marked in RES.
316 INCLUDE_DELAYED_EFFECTS is set if the actions of that routine should be
317 included. CC0 is only modified if it is explicitly set; see comments
318 in front of mark_set_resources for details. */
320 static int
321 insn_sets_resource_p (insn, res, include_delayed_effects)
322 rtx insn;
323 struct resources *res;
324 int include_delayed_effects;
326 struct resources insn_sets;
328 CLEAR_RESOURCE (&insn_sets);
329 mark_set_resources (insn, &insn_sets, 0, include_delayed_effects);
330 return resource_conflicts_p (&insn_sets, res);
333 /* Find a label at the end of the function or before a RETURN. If there is
334 none, make one. */
336 static rtx
337 find_end_label ()
339 rtx insn;
341 /* If we found one previously, return it. */
342 if (end_of_function_label)
343 return end_of_function_label;
345 /* Otherwise, see if there is a label at the end of the function. If there
346 is, it must be that RETURN insns aren't needed, so that is our return
347 label and we don't have to do anything else. */
349 insn = get_last_insn ();
350 while (GET_CODE (insn) == NOTE
351 || (GET_CODE (insn) == INSN
352 && (GET_CODE (PATTERN (insn)) == USE
353 || GET_CODE (PATTERN (insn)) == CLOBBER)))
354 insn = PREV_INSN (insn);
356 /* When a target threads its epilogue we might already have a
357 suitable return insn. If so put a label before it for the
358 end_of_function_label. */
359 if (GET_CODE (insn) == BARRIER
360 && GET_CODE (PREV_INSN (insn)) == JUMP_INSN
361 && GET_CODE (PATTERN (PREV_INSN (insn))) == RETURN)
363 rtx temp = PREV_INSN (PREV_INSN (insn));
364 end_of_function_label = gen_label_rtx ();
365 LABEL_NUSES (end_of_function_label) = 0;
367 /* Put the label before an USE insns that may proceed the RETURN insn. */
368 while (GET_CODE (temp) == USE)
369 temp = PREV_INSN (temp);
371 emit_label_after (end_of_function_label, temp);
374 else if (GET_CODE (insn) == CODE_LABEL)
375 end_of_function_label = insn;
376 else
378 end_of_function_label = gen_label_rtx ();
379 LABEL_NUSES (end_of_function_label) = 0;
380 /* If the basic block reorder pass moves the return insn to
381 some other place try to locate it again and put our
382 end_of_function_label there. */
383 while (insn && ! (GET_CODE (insn) == JUMP_INSN
384 && (GET_CODE (PATTERN (insn)) == RETURN)))
385 insn = PREV_INSN (insn);
386 if (insn)
388 insn = PREV_INSN (insn);
390 /* Put the label before an USE insns that may proceed the
391 RETURN insn. */
392 while (GET_CODE (insn) == USE)
393 insn = PREV_INSN (insn);
395 emit_label_after (end_of_function_label, insn);
397 else
399 /* Otherwise, make a new label and emit a RETURN and BARRIER,
400 if needed. */
401 emit_label (end_of_function_label);
402 #ifdef HAVE_return
403 if (HAVE_return)
405 /* The return we make may have delay slots too. */
406 rtx insn = gen_return ();
407 insn = emit_jump_insn (insn);
408 emit_barrier ();
409 if (num_delay_slots (insn) > 0)
410 obstack_ptr_grow (&unfilled_slots_obstack, insn);
412 #endif
416 /* Show one additional use for this label so it won't go away until
417 we are done. */
418 ++LABEL_NUSES (end_of_function_label);
420 return end_of_function_label;
423 /* Put INSN and LIST together in a SEQUENCE rtx of LENGTH, and replace
424 the pattern of INSN with the SEQUENCE.
426 Chain the insns so that NEXT_INSN of each insn in the sequence points to
427 the next and NEXT_INSN of the last insn in the sequence points to
428 the first insn after the sequence. Similarly for PREV_INSN. This makes
429 it easier to scan all insns.
431 Returns the SEQUENCE that replaces INSN. */
433 static rtx
434 emit_delay_sequence (insn, list, length)
435 rtx insn;
436 rtx list;
437 int length;
439 int i = 1;
440 rtx li;
441 int had_barrier = 0;
443 /* Allocate the rtvec to hold the insns and the SEQUENCE. */
444 rtvec seqv = rtvec_alloc (length + 1);
445 rtx seq = gen_rtx_SEQUENCE (VOIDmode, seqv);
446 rtx seq_insn = make_insn_raw (seq);
447 rtx first = get_insns ();
448 rtx last = get_last_insn ();
450 /* Make a copy of the insn having delay slots. */
451 rtx delay_insn = copy_rtx (insn);
453 /* If INSN is followed by a BARRIER, delete the BARRIER since it will only
454 confuse further processing. Update LAST in case it was the last insn.
455 We will put the BARRIER back in later. */
456 if (NEXT_INSN (insn) && GET_CODE (NEXT_INSN (insn)) == BARRIER)
458 delete_related_insns (NEXT_INSN (insn));
459 last = get_last_insn ();
460 had_barrier = 1;
463 /* Splice our SEQUENCE into the insn stream where INSN used to be. */
464 NEXT_INSN (seq_insn) = NEXT_INSN (insn);
465 PREV_INSN (seq_insn) = PREV_INSN (insn);
467 if (insn != last)
468 PREV_INSN (NEXT_INSN (seq_insn)) = seq_insn;
470 if (insn != first)
471 NEXT_INSN (PREV_INSN (seq_insn)) = seq_insn;
473 /* Note the calls to set_new_first_and_last_insn must occur after
474 SEQ_INSN has been completely spliced into the insn stream.
476 Otherwise CUR_INSN_UID will get set to an incorrect value because
477 set_new_first_and_last_insn will not find SEQ_INSN in the chain. */
478 if (insn == last)
479 set_new_first_and_last_insn (first, seq_insn);
481 if (insn == first)
482 set_new_first_and_last_insn (seq_insn, last);
484 /* Build our SEQUENCE and rebuild the insn chain. */
485 XVECEXP (seq, 0, 0) = delay_insn;
486 INSN_DELETED_P (delay_insn) = 0;
487 PREV_INSN (delay_insn) = PREV_INSN (seq_insn);
489 for (li = list; li; li = XEXP (li, 1), i++)
491 rtx tem = XEXP (li, 0);
492 rtx note, next;
494 /* Show that this copy of the insn isn't deleted. */
495 INSN_DELETED_P (tem) = 0;
497 XVECEXP (seq, 0, i) = tem;
498 PREV_INSN (tem) = XVECEXP (seq, 0, i - 1);
499 NEXT_INSN (XVECEXP (seq, 0, i - 1)) = tem;
501 for (note = REG_NOTES (tem); note; note = next)
503 next = XEXP (note, 1);
504 switch (REG_NOTE_KIND (note))
506 case REG_DEAD:
507 /* Remove any REG_DEAD notes because we can't rely on them now
508 that the insn has been moved. */
509 remove_note (tem, note);
510 break;
512 case REG_LABEL:
513 /* Keep the label reference count up to date. */
514 LABEL_NUSES (XEXP (note, 0)) ++;
515 break;
517 default:
518 break;
523 NEXT_INSN (XVECEXP (seq, 0, length)) = NEXT_INSN (seq_insn);
525 /* If the previous insn is a SEQUENCE, update the NEXT_INSN pointer on the
526 last insn in that SEQUENCE to point to us. Similarly for the first
527 insn in the following insn if it is a SEQUENCE. */
529 if (PREV_INSN (seq_insn) && GET_CODE (PREV_INSN (seq_insn)) == INSN
530 && GET_CODE (PATTERN (PREV_INSN (seq_insn))) == SEQUENCE)
531 NEXT_INSN (XVECEXP (PATTERN (PREV_INSN (seq_insn)), 0,
532 XVECLEN (PATTERN (PREV_INSN (seq_insn)), 0) - 1))
533 = seq_insn;
535 if (NEXT_INSN (seq_insn) && GET_CODE (NEXT_INSN (seq_insn)) == INSN
536 && GET_CODE (PATTERN (NEXT_INSN (seq_insn))) == SEQUENCE)
537 PREV_INSN (XVECEXP (PATTERN (NEXT_INSN (seq_insn)), 0, 0)) = seq_insn;
539 /* If there used to be a BARRIER, put it back. */
540 if (had_barrier)
541 emit_barrier_after (seq_insn);
543 if (i != length + 1)
544 abort ();
546 return seq_insn;
549 /* Add INSN to DELAY_LIST and return the head of the new list. The list must
550 be in the order in which the insns are to be executed. */
552 static rtx
553 add_to_delay_list (insn, delay_list)
554 rtx insn;
555 rtx delay_list;
557 /* If we have an empty list, just make a new list element. If
558 INSN has its block number recorded, clear it since we may
559 be moving the insn to a new block. */
561 if (delay_list == 0)
563 clear_hashed_info_for_insn (insn);
564 return gen_rtx_INSN_LIST (VOIDmode, insn, NULL_RTX);
567 /* Otherwise this must be an INSN_LIST. Add INSN to the end of the
568 list. */
569 XEXP (delay_list, 1) = add_to_delay_list (insn, XEXP (delay_list, 1));
571 return delay_list;
574 /* Delete INSN from the delay slot of the insn that it is in, which may
575 produce an insn with no delay slots. Return the new insn. */
577 static rtx
578 delete_from_delay_slot (insn)
579 rtx insn;
581 rtx trial, seq_insn, seq, prev;
582 rtx delay_list = 0;
583 int i;
585 /* We first must find the insn containing the SEQUENCE with INSN in its
586 delay slot. Do this by finding an insn, TRIAL, where
587 PREV_INSN (NEXT_INSN (TRIAL)) != TRIAL. */
589 for (trial = insn;
590 PREV_INSN (NEXT_INSN (trial)) == trial;
591 trial = NEXT_INSN (trial))
594 seq_insn = PREV_INSN (NEXT_INSN (trial));
595 seq = PATTERN (seq_insn);
597 /* Create a delay list consisting of all the insns other than the one
598 we are deleting (unless we were the only one). */
599 if (XVECLEN (seq, 0) > 2)
600 for (i = 1; i < XVECLEN (seq, 0); i++)
601 if (XVECEXP (seq, 0, i) != insn)
602 delay_list = add_to_delay_list (XVECEXP (seq, 0, i), delay_list);
604 /* Delete the old SEQUENCE, re-emit the insn that used to have the delay
605 list, and rebuild the delay list if non-empty. */
606 prev = PREV_INSN (seq_insn);
607 trial = XVECEXP (seq, 0, 0);
608 delete_related_insns (seq_insn);
609 add_insn_after (trial, prev);
611 if (GET_CODE (trial) == JUMP_INSN
612 && (simplejump_p (trial) || GET_CODE (PATTERN (trial)) == RETURN))
613 emit_barrier_after (trial);
615 /* If there are any delay insns, remit them. Otherwise clear the
616 annul flag. */
617 if (delay_list)
618 trial = emit_delay_sequence (trial, delay_list, XVECLEN (seq, 0) - 2);
619 else
620 INSN_ANNULLED_BRANCH_P (trial) = 0;
622 INSN_FROM_TARGET_P (insn) = 0;
624 /* Show we need to fill this insn again. */
625 obstack_ptr_grow (&unfilled_slots_obstack, trial);
627 return trial;
630 /* Delete INSN, a JUMP_INSN. If it is a conditional jump, we must track down
631 the insn that sets CC0 for it and delete it too. */
633 static void
634 delete_scheduled_jump (insn)
635 rtx insn;
637 /* Delete the insn that sets cc0 for us. On machines without cc0, we could
638 delete the insn that sets the condition code, but it is hard to find it.
639 Since this case is rare anyway, don't bother trying; there would likely
640 be other insns that became dead anyway, which we wouldn't know to
641 delete. */
643 #ifdef HAVE_cc0
644 if (reg_mentioned_p (cc0_rtx, insn))
646 rtx note = find_reg_note (insn, REG_CC_SETTER, NULL_RTX);
648 /* If a reg-note was found, it points to an insn to set CC0. This
649 insn is in the delay list of some other insn. So delete it from
650 the delay list it was in. */
651 if (note)
653 if (! FIND_REG_INC_NOTE (XEXP (note, 0), NULL_RTX)
654 && sets_cc0_p (PATTERN (XEXP (note, 0))) == 1)
655 delete_from_delay_slot (XEXP (note, 0));
657 else
659 /* The insn setting CC0 is our previous insn, but it may be in
660 a delay slot. It will be the last insn in the delay slot, if
661 it is. */
662 rtx trial = previous_insn (insn);
663 if (GET_CODE (trial) == NOTE)
664 trial = prev_nonnote_insn (trial);
665 if (sets_cc0_p (PATTERN (trial)) != 1
666 || FIND_REG_INC_NOTE (trial, NULL_RTX))
667 return;
668 if (PREV_INSN (NEXT_INSN (trial)) == trial)
669 delete_related_insns (trial);
670 else
671 delete_from_delay_slot (trial);
674 #endif
676 delete_related_insns (insn);
679 /* Counters for delay-slot filling. */
681 #define NUM_REORG_FUNCTIONS 2
682 #define MAX_DELAY_HISTOGRAM 3
683 #define MAX_REORG_PASSES 2
685 static int num_insns_needing_delays[NUM_REORG_FUNCTIONS][MAX_REORG_PASSES];
687 static int num_filled_delays[NUM_REORG_FUNCTIONS][MAX_DELAY_HISTOGRAM+1][MAX_REORG_PASSES];
689 static int reorg_pass_number;
691 static void
692 note_delay_statistics (slots_filled, index)
693 int slots_filled, index;
695 num_insns_needing_delays[index][reorg_pass_number]++;
696 if (slots_filled > MAX_DELAY_HISTOGRAM)
697 slots_filled = MAX_DELAY_HISTOGRAM;
698 num_filled_delays[index][slots_filled][reorg_pass_number]++;
701 #if defined(ANNUL_IFFALSE_SLOTS) || defined(ANNUL_IFTRUE_SLOTS)
703 /* Optimize the following cases:
705 1. When a conditional branch skips over only one instruction,
706 use an annulling branch and put that insn in the delay slot.
707 Use either a branch that annuls when the condition if true or
708 invert the test with a branch that annuls when the condition is
709 false. This saves insns, since otherwise we must copy an insn
710 from the L1 target.
712 (orig) (skip) (otherwise)
713 Bcc.n L1 Bcc',a L1 Bcc,a L1'
714 insn insn insn2
715 L1: L1: L1:
716 insn2 insn2 insn2
717 insn3 insn3 L1':
718 insn3
720 2. When a conditional branch skips over only one instruction,
721 and after that, it unconditionally branches somewhere else,
722 perform the similar optimization. This saves executing the
723 second branch in the case where the inverted condition is true.
725 Bcc.n L1 Bcc',a L2
726 insn insn
727 L1: L1:
728 Bra L2 Bra L2
730 INSN is a JUMP_INSN.
732 This should be expanded to skip over N insns, where N is the number
733 of delay slots required. */
735 static rtx
736 optimize_skip (insn)
737 rtx insn;
739 rtx trial = next_nonnote_insn (insn);
740 rtx next_trial = next_active_insn (trial);
741 rtx delay_list = 0;
742 rtx target_label;
743 int flags;
745 flags = get_jump_flags (insn, JUMP_LABEL (insn));
747 if (trial == 0
748 || GET_CODE (trial) != INSN
749 || GET_CODE (PATTERN (trial)) == SEQUENCE
750 || recog_memoized (trial) < 0
751 || (! eligible_for_annul_false (insn, 0, trial, flags)
752 && ! eligible_for_annul_true (insn, 0, trial, flags)))
753 return 0;
755 /* There are two cases where we are just executing one insn (we assume
756 here that a branch requires only one insn; this should be generalized
757 at some point): Where the branch goes around a single insn or where
758 we have one insn followed by a branch to the same label we branch to.
759 In both of these cases, inverting the jump and annulling the delay
760 slot give the same effect in fewer insns. */
761 if ((next_trial == next_active_insn (JUMP_LABEL (insn))
762 && ! (next_trial == 0 && current_function_epilogue_delay_list != 0))
763 || (next_trial != 0
764 && GET_CODE (next_trial) == JUMP_INSN
765 && JUMP_LABEL (insn) == JUMP_LABEL (next_trial)
766 && (simplejump_p (next_trial)
767 || GET_CODE (PATTERN (next_trial)) == RETURN)))
769 if (eligible_for_annul_false (insn, 0, trial, flags))
771 if (invert_jump (insn, JUMP_LABEL (insn), 1))
772 INSN_FROM_TARGET_P (trial) = 1;
773 else if (! eligible_for_annul_true (insn, 0, trial, flags))
774 return 0;
777 delay_list = add_to_delay_list (trial, NULL_RTX);
778 next_trial = next_active_insn (trial);
779 update_block (trial, trial);
780 delete_related_insns (trial);
782 /* Also, if we are targeting an unconditional
783 branch, thread our jump to the target of that branch. Don't
784 change this into a RETURN here, because it may not accept what
785 we have in the delay slot. We'll fix this up later. */
786 if (next_trial && GET_CODE (next_trial) == JUMP_INSN
787 && (simplejump_p (next_trial)
788 || GET_CODE (PATTERN (next_trial)) == RETURN))
790 target_label = JUMP_LABEL (next_trial);
791 if (target_label == 0)
792 target_label = find_end_label ();
794 /* Recompute the flags based on TARGET_LABEL since threading
795 the jump to TARGET_LABEL may change the direction of the
796 jump (which may change the circumstances in which the
797 delay slot is nullified). */
798 flags = get_jump_flags (insn, target_label);
799 if (eligible_for_annul_true (insn, 0, trial, flags))
800 reorg_redirect_jump (insn, target_label);
803 INSN_ANNULLED_BRANCH_P (insn) = 1;
806 return delay_list;
808 #endif
810 /* Encode and return branch direction and prediction information for
811 INSN assuming it will jump to LABEL.
813 Non conditional branches return no direction information and
814 are predicted as very likely taken. */
816 static int
817 get_jump_flags (insn, label)
818 rtx insn, label;
820 int flags;
822 /* get_jump_flags can be passed any insn with delay slots, these may
823 be INSNs, CALL_INSNs, or JUMP_INSNs. Only JUMP_INSNs have branch
824 direction information, and only if they are conditional jumps.
826 If LABEL is zero, then there is no way to determine the branch
827 direction. */
828 if (GET_CODE (insn) == JUMP_INSN
829 && (condjump_p (insn) || condjump_in_parallel_p (insn))
830 && INSN_UID (insn) <= max_uid
831 && label != 0
832 && INSN_UID (label) <= max_uid)
833 flags
834 = (uid_to_ruid[INSN_UID (label)] > uid_to_ruid[INSN_UID (insn)])
835 ? ATTR_FLAG_forward : ATTR_FLAG_backward;
836 /* No valid direction information. */
837 else
838 flags = 0;
840 /* If insn is a conditional branch call mostly_true_jump to get
841 determine the branch prediction.
843 Non conditional branches are predicted as very likely taken. */
844 if (GET_CODE (insn) == JUMP_INSN
845 && (condjump_p (insn) || condjump_in_parallel_p (insn)))
847 int prediction;
849 prediction = mostly_true_jump (insn, get_branch_condition (insn, label));
850 switch (prediction)
852 case 2:
853 flags |= (ATTR_FLAG_very_likely | ATTR_FLAG_likely);
854 break;
855 case 1:
856 flags |= ATTR_FLAG_likely;
857 break;
858 case 0:
859 flags |= ATTR_FLAG_unlikely;
860 break;
861 case -1:
862 flags |= (ATTR_FLAG_very_unlikely | ATTR_FLAG_unlikely);
863 break;
865 default:
866 abort ();
869 else
870 flags |= (ATTR_FLAG_very_likely | ATTR_FLAG_likely);
872 return flags;
875 /* Return 1 if INSN is a destination that will be branched to rarely (the
876 return point of a function); return 2 if DEST will be branched to very
877 rarely (a call to a function that doesn't return). Otherwise,
878 return 0. */
880 static int
881 rare_destination (insn)
882 rtx insn;
884 int jump_count = 0;
885 rtx next;
887 for (; insn; insn = next)
889 if (GET_CODE (insn) == INSN && GET_CODE (PATTERN (insn)) == SEQUENCE)
890 insn = XVECEXP (PATTERN (insn), 0, 0);
892 next = NEXT_INSN (insn);
894 switch (GET_CODE (insn))
896 case CODE_LABEL:
897 return 0;
898 case BARRIER:
899 /* A BARRIER can either be after a JUMP_INSN or a CALL_INSN. We
900 don't scan past JUMP_INSNs, so any barrier we find here must
901 have been after a CALL_INSN and hence mean the call doesn't
902 return. */
903 return 2;
904 case JUMP_INSN:
905 if (GET_CODE (PATTERN (insn)) == RETURN)
906 return 1;
907 else if (simplejump_p (insn)
908 && jump_count++ < 10)
909 next = JUMP_LABEL (insn);
910 else
911 return 0;
913 default:
914 break;
918 /* If we got here it means we hit the end of the function. So this
919 is an unlikely destination. */
921 return 1;
924 /* Return truth value of the statement that this branch
925 is mostly taken. If we think that the branch is extremely likely
926 to be taken, we return 2. If the branch is slightly more likely to be
927 taken, return 1. If the branch is slightly less likely to be taken,
928 return 0 and if the branch is highly unlikely to be taken, return -1.
930 CONDITION, if non-zero, is the condition that JUMP_INSN is testing. */
932 static int
933 mostly_true_jump (jump_insn, condition)
934 rtx jump_insn, condition;
936 rtx target_label = JUMP_LABEL (jump_insn);
937 rtx insn, note;
938 int rare_dest = rare_destination (target_label);
939 int rare_fallthrough = rare_destination (NEXT_INSN (jump_insn));
941 /* If branch probabilities are available, then use that number since it
942 always gives a correct answer. */
943 note = find_reg_note (jump_insn, REG_BR_PROB, 0);
944 if (note)
946 int prob = INTVAL (XEXP (note, 0));
948 if (prob >= REG_BR_PROB_BASE * 9 / 10)
949 return 2;
950 else if (prob >= REG_BR_PROB_BASE / 2)
951 return 1;
952 else if (prob >= REG_BR_PROB_BASE / 10)
953 return 0;
954 else
955 return -1;
958 /* ??? Ought to use estimate_probability instead. */
960 /* If this is a branch outside a loop, it is highly unlikely. */
961 if (GET_CODE (PATTERN (jump_insn)) == SET
962 && GET_CODE (SET_SRC (PATTERN (jump_insn))) == IF_THEN_ELSE
963 && ((GET_CODE (XEXP (SET_SRC (PATTERN (jump_insn)), 1)) == LABEL_REF
964 && LABEL_OUTSIDE_LOOP_P (XEXP (SET_SRC (PATTERN (jump_insn)), 1)))
965 || (GET_CODE (XEXP (SET_SRC (PATTERN (jump_insn)), 2)) == LABEL_REF
966 && LABEL_OUTSIDE_LOOP_P (XEXP (SET_SRC (PATTERN (jump_insn)), 2)))))
967 return -1;
969 if (target_label)
971 /* If this is the test of a loop, it is very likely true. We scan
972 backwards from the target label. If we find a NOTE_INSN_LOOP_BEG
973 before the next real insn, we assume the branch is to the top of
974 the loop. */
975 for (insn = PREV_INSN (target_label);
976 insn && GET_CODE (insn) == NOTE;
977 insn = PREV_INSN (insn))
978 if (NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_BEG)
979 return 2;
981 /* If this is a jump to the test of a loop, it is likely true. We scan
982 forwards from the target label. If we find a NOTE_INSN_LOOP_VTOP
983 before the next real insn, we assume the branch is to the loop branch
984 test. */
985 for (insn = NEXT_INSN (target_label);
986 insn && GET_CODE (insn) == NOTE;
987 insn = PREV_INSN (insn))
988 if (NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_VTOP)
989 return 1;
992 /* Look at the relative rarities of the fallthrough and destination. If
993 they differ, we can predict the branch that way. */
995 switch (rare_fallthrough - rare_dest)
997 case -2:
998 return -1;
999 case -1:
1000 return 0;
1001 case 0:
1002 break;
1003 case 1:
1004 return 1;
1005 case 2:
1006 return 2;
1009 /* If we couldn't figure out what this jump was, assume it won't be
1010 taken. This should be rare. */
1011 if (condition == 0)
1012 return 0;
1014 /* EQ tests are usually false and NE tests are usually true. Also,
1015 most quantities are positive, so we can make the appropriate guesses
1016 about signed comparisons against zero. */
1017 switch (GET_CODE (condition))
1019 case CONST_INT:
1020 /* Unconditional branch. */
1021 return 1;
1022 case EQ:
1023 return 0;
1024 case NE:
1025 return 1;
1026 case LE:
1027 case LT:
1028 if (XEXP (condition, 1) == const0_rtx)
1029 return 0;
1030 break;
1031 case GE:
1032 case GT:
1033 if (XEXP (condition, 1) == const0_rtx)
1034 return 1;
1035 break;
1037 default:
1038 break;
1041 /* Predict backward branches usually take, forward branches usually not. If
1042 we don't know whether this is forward or backward, assume the branch
1043 will be taken, since most are. */
1044 return (target_label == 0 || INSN_UID (jump_insn) > max_uid
1045 || INSN_UID (target_label) > max_uid
1046 || (uid_to_ruid[INSN_UID (jump_insn)]
1047 > uid_to_ruid[INSN_UID (target_label)]));
1050 /* Return the condition under which INSN will branch to TARGET. If TARGET
1051 is zero, return the condition under which INSN will return. If INSN is
1052 an unconditional branch, return const_true_rtx. If INSN isn't a simple
1053 type of jump, or it doesn't go to TARGET, return 0. */
1055 static rtx
1056 get_branch_condition (insn, target)
1057 rtx insn;
1058 rtx target;
1060 rtx pat = PATTERN (insn);
1061 rtx src;
1063 if (condjump_in_parallel_p (insn))
1064 pat = XVECEXP (pat, 0, 0);
1066 if (GET_CODE (pat) == RETURN)
1067 return target == 0 ? const_true_rtx : 0;
1069 else if (GET_CODE (pat) != SET || SET_DEST (pat) != pc_rtx)
1070 return 0;
1072 src = SET_SRC (pat);
1073 if (GET_CODE (src) == LABEL_REF && XEXP (src, 0) == target)
1074 return const_true_rtx;
1076 else if (GET_CODE (src) == IF_THEN_ELSE
1077 && ((target == 0 && GET_CODE (XEXP (src, 1)) == RETURN)
1078 || (GET_CODE (XEXP (src, 1)) == LABEL_REF
1079 && XEXP (XEXP (src, 1), 0) == target))
1080 && XEXP (src, 2) == pc_rtx)
1081 return XEXP (src, 0);
1083 else if (GET_CODE (src) == IF_THEN_ELSE
1084 && ((target == 0 && GET_CODE (XEXP (src, 2)) == RETURN)
1085 || (GET_CODE (XEXP (src, 2)) == LABEL_REF
1086 && XEXP (XEXP (src, 2), 0) == target))
1087 && XEXP (src, 1) == pc_rtx)
1088 return gen_rtx_fmt_ee (reverse_condition (GET_CODE (XEXP (src, 0))),
1089 GET_MODE (XEXP (src, 0)),
1090 XEXP (XEXP (src, 0), 0), XEXP (XEXP (src, 0), 1));
1092 return 0;
1095 /* Return non-zero if CONDITION is more strict than the condition of
1096 INSN, i.e., if INSN will always branch if CONDITION is true. */
1098 static int
1099 condition_dominates_p (condition, insn)
1100 rtx condition;
1101 rtx insn;
1103 rtx other_condition = get_branch_condition (insn, JUMP_LABEL (insn));
1104 enum rtx_code code = GET_CODE (condition);
1105 enum rtx_code other_code;
1107 if (rtx_equal_p (condition, other_condition)
1108 || other_condition == const_true_rtx)
1109 return 1;
1111 else if (condition == const_true_rtx || other_condition == 0)
1112 return 0;
1114 other_code = GET_CODE (other_condition);
1115 if (GET_RTX_LENGTH (code) != 2 || GET_RTX_LENGTH (other_code) != 2
1116 || ! rtx_equal_p (XEXP (condition, 0), XEXP (other_condition, 0))
1117 || ! rtx_equal_p (XEXP (condition, 1), XEXP (other_condition, 1)))
1118 return 0;
1120 return comparison_dominates_p (code, other_code);
1123 /* Return non-zero if redirecting JUMP to NEWLABEL does not invalidate
1124 any insns already in the delay slot of JUMP. */
1126 static int
1127 redirect_with_delay_slots_safe_p (jump, newlabel, seq)
1128 rtx jump, newlabel, seq;
1130 int flags, i;
1131 rtx pat = PATTERN (seq);
1133 /* Make sure all the delay slots of this jump would still
1134 be valid after threading the jump. If they are still
1135 valid, then return non-zero. */
1137 flags = get_jump_flags (jump, newlabel);
1138 for (i = 1; i < XVECLEN (pat, 0); i++)
1139 if (! (
1140 #ifdef ANNUL_IFFALSE_SLOTS
1141 (INSN_ANNULLED_BRANCH_P (jump)
1142 && INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)))
1143 ? eligible_for_annul_false (jump, i - 1,
1144 XVECEXP (pat, 0, i), flags) :
1145 #endif
1146 #ifdef ANNUL_IFTRUE_SLOTS
1147 (INSN_ANNULLED_BRANCH_P (jump)
1148 && ! INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)))
1149 ? eligible_for_annul_true (jump, i - 1,
1150 XVECEXP (pat, 0, i), flags) :
1151 #endif
1152 eligible_for_delay (jump, i - 1, XVECEXP (pat, 0, i), flags)))
1153 break;
1155 return (i == XVECLEN (pat, 0));
1158 /* Return non-zero if redirecting JUMP to NEWLABEL does not invalidate
1159 any insns we wish to place in the delay slot of JUMP. */
1161 static int
1162 redirect_with_delay_list_safe_p (jump, newlabel, delay_list)
1163 rtx jump, newlabel, delay_list;
1165 int flags, i;
1166 rtx li;
1168 /* Make sure all the insns in DELAY_LIST would still be
1169 valid after threading the jump. If they are still
1170 valid, then return non-zero. */
1172 flags = get_jump_flags (jump, newlabel);
1173 for (li = delay_list, i = 0; li; li = XEXP (li, 1), i++)
1174 if (! (
1175 #ifdef ANNUL_IFFALSE_SLOTS
1176 (INSN_ANNULLED_BRANCH_P (jump)
1177 && INSN_FROM_TARGET_P (XEXP (li, 0)))
1178 ? eligible_for_annul_false (jump, i, XEXP (li, 0), flags) :
1179 #endif
1180 #ifdef ANNUL_IFTRUE_SLOTS
1181 (INSN_ANNULLED_BRANCH_P (jump)
1182 && ! INSN_FROM_TARGET_P (XEXP (li, 0)))
1183 ? eligible_for_annul_true (jump, i, XEXP (li, 0), flags) :
1184 #endif
1185 eligible_for_delay (jump, i, XEXP (li, 0), flags)))
1186 break;
1188 return (li == NULL);
1191 /* DELAY_LIST is a list of insns that have already been placed into delay
1192 slots. See if all of them have the same annulling status as ANNUL_TRUE_P.
1193 If not, return 0; otherwise return 1. */
1195 static int
1196 check_annul_list_true_false (annul_true_p, delay_list)
1197 int annul_true_p;
1198 rtx delay_list;
1200 rtx temp;
1202 if (delay_list)
1204 for (temp = delay_list; temp; temp = XEXP (temp, 1))
1206 rtx trial = XEXP (temp, 0);
1208 if ((annul_true_p && INSN_FROM_TARGET_P (trial))
1209 || (!annul_true_p && !INSN_FROM_TARGET_P (trial)))
1210 return 0;
1214 return 1;
1217 /* INSN branches to an insn whose pattern SEQ is a SEQUENCE. Given that
1218 the condition tested by INSN is CONDITION and the resources shown in
1219 OTHER_NEEDED are needed after INSN, see whether INSN can take all the insns
1220 from SEQ's delay list, in addition to whatever insns it may execute
1221 (in DELAY_LIST). SETS and NEEDED are denote resources already set and
1222 needed while searching for delay slot insns. Return the concatenated
1223 delay list if possible, otherwise, return 0.
1225 SLOTS_TO_FILL is the total number of slots required by INSN, and
1226 PSLOTS_FILLED points to the number filled so far (also the number of
1227 insns in DELAY_LIST). It is updated with the number that have been
1228 filled from the SEQUENCE, if any.
1230 PANNUL_P points to a non-zero value if we already know that we need
1231 to annul INSN. If this routine determines that annulling is needed,
1232 it may set that value non-zero.
1234 PNEW_THREAD points to a location that is to receive the place at which
1235 execution should continue. */
1237 static rtx
1238 steal_delay_list_from_target (insn, condition, seq, delay_list,
1239 sets, needed, other_needed,
1240 slots_to_fill, pslots_filled, pannul_p,
1241 pnew_thread)
1242 rtx insn, condition;
1243 rtx seq;
1244 rtx delay_list;
1245 struct resources *sets, *needed, *other_needed;
1246 int slots_to_fill;
1247 int *pslots_filled;
1248 int *pannul_p;
1249 rtx *pnew_thread;
1251 rtx temp;
1252 int slots_remaining = slots_to_fill - *pslots_filled;
1253 int total_slots_filled = *pslots_filled;
1254 rtx new_delay_list = 0;
1255 int must_annul = *pannul_p;
1256 int used_annul = 0;
1257 int i;
1258 struct resources cc_set;
1260 /* We can't do anything if there are more delay slots in SEQ than we
1261 can handle, or if we don't know that it will be a taken branch.
1262 We know that it will be a taken branch if it is either an unconditional
1263 branch or a conditional branch with a stricter branch condition.
1265 Also, exit if the branch has more than one set, since then it is computing
1266 other results that can't be ignored, e.g. the HPPA mov&branch instruction.
1267 ??? It may be possible to move other sets into INSN in addition to
1268 moving the instructions in the delay slots.
1270 We can not steal the delay list if one of the instructions in the
1271 current delay_list modifies the condition codes and the jump in the
1272 sequence is a conditional jump. We can not do this because we can
1273 not change the direction of the jump because the condition codes
1274 will effect the direction of the jump in the sequence. */
1276 CLEAR_RESOURCE (&cc_set);
1277 for (temp = delay_list; temp; temp = XEXP (temp, 1))
1279 rtx trial = XEXP (temp, 0);
1281 mark_set_resources (trial, &cc_set, 0, MARK_SRC_DEST_CALL);
1282 if (insn_references_resource_p (XVECEXP (seq , 0, 0), &cc_set, 0))
1283 return delay_list;
1286 if (XVECLEN (seq, 0) - 1 > slots_remaining
1287 || ! condition_dominates_p (condition, XVECEXP (seq, 0, 0))
1288 || ! single_set (XVECEXP (seq, 0, 0)))
1289 return delay_list;
1291 #ifdef MD_CAN_REDIRECT_BRANCH
1292 /* On some targets, branches with delay slots can have a limited
1293 displacement. Give the back end a chance to tell us we can't do
1294 this. */
1295 if (! MD_CAN_REDIRECT_BRANCH (insn, XVECEXP (seq, 0, 0)))
1296 return delay_list;
1297 #endif
1299 for (i = 1; i < XVECLEN (seq, 0); i++)
1301 rtx trial = XVECEXP (seq, 0, i);
1302 int flags;
1304 if (insn_references_resource_p (trial, sets, 0)
1305 || insn_sets_resource_p (trial, needed, 0)
1306 || insn_sets_resource_p (trial, sets, 0)
1307 #ifdef HAVE_cc0
1308 /* If TRIAL sets CC0, we can't copy it, so we can't steal this
1309 delay list. */
1310 || find_reg_note (trial, REG_CC_USER, NULL_RTX)
1311 #endif
1312 /* If TRIAL is from the fallthrough code of an annulled branch insn
1313 in SEQ, we cannot use it. */
1314 || (INSN_ANNULLED_BRANCH_P (XVECEXP (seq, 0, 0))
1315 && ! INSN_FROM_TARGET_P (trial)))
1316 return delay_list;
1318 /* If this insn was already done (usually in a previous delay slot),
1319 pretend we put it in our delay slot. */
1320 if (redundant_insn (trial, insn, new_delay_list))
1321 continue;
1323 /* We will end up re-vectoring this branch, so compute flags
1324 based on jumping to the new label. */
1325 flags = get_jump_flags (insn, JUMP_LABEL (XVECEXP (seq, 0, 0)));
1327 if (! must_annul
1328 && ((condition == const_true_rtx
1329 || (! insn_sets_resource_p (trial, other_needed, 0)
1330 && ! may_trap_p (PATTERN (trial)))))
1331 ? eligible_for_delay (insn, total_slots_filled, trial, flags)
1332 : (must_annul || (delay_list == NULL && new_delay_list == NULL))
1333 && (must_annul = 1,
1334 check_annul_list_true_false (0, delay_list)
1335 && check_annul_list_true_false (0, new_delay_list)
1336 && eligible_for_annul_false (insn, total_slots_filled,
1337 trial, flags)))
1339 if (must_annul)
1340 used_annul = 1;
1341 temp = copy_rtx (trial);
1342 INSN_FROM_TARGET_P (temp) = 1;
1343 new_delay_list = add_to_delay_list (temp, new_delay_list);
1344 total_slots_filled++;
1346 if (--slots_remaining == 0)
1347 break;
1349 else
1350 return delay_list;
1353 /* Show the place to which we will be branching. */
1354 *pnew_thread = next_active_insn (JUMP_LABEL (XVECEXP (seq, 0, 0)));
1356 /* Add any new insns to the delay list and update the count of the
1357 number of slots filled. */
1358 *pslots_filled = total_slots_filled;
1359 if (used_annul)
1360 *pannul_p = 1;
1362 if (delay_list == 0)
1363 return new_delay_list;
1365 for (temp = new_delay_list; temp; temp = XEXP (temp, 1))
1366 delay_list = add_to_delay_list (XEXP (temp, 0), delay_list);
1368 return delay_list;
1371 /* Similar to steal_delay_list_from_target except that SEQ is on the
1372 fallthrough path of INSN. Here we only do something if the delay insn
1373 of SEQ is an unconditional branch. In that case we steal its delay slot
1374 for INSN since unconditional branches are much easier to fill. */
1376 static rtx
1377 steal_delay_list_from_fallthrough (insn, condition, seq,
1378 delay_list, sets, needed, other_needed,
1379 slots_to_fill, pslots_filled, pannul_p)
1380 rtx insn, condition;
1381 rtx seq;
1382 rtx delay_list;
1383 struct resources *sets, *needed, *other_needed;
1384 int slots_to_fill;
1385 int *pslots_filled;
1386 int *pannul_p;
1388 int i;
1389 int flags;
1390 int must_annul = *pannul_p;
1391 int used_annul = 0;
1393 flags = get_jump_flags (insn, JUMP_LABEL (insn));
1395 /* We can't do anything if SEQ's delay insn isn't an
1396 unconditional branch. */
1398 if (! simplejump_p (XVECEXP (seq, 0, 0))
1399 && GET_CODE (PATTERN (XVECEXP (seq, 0, 0))) != RETURN)
1400 return delay_list;
1402 for (i = 1; i < XVECLEN (seq, 0); i++)
1404 rtx trial = XVECEXP (seq, 0, i);
1406 /* If TRIAL sets CC0, stealing it will move it too far from the use
1407 of CC0. */
1408 if (insn_references_resource_p (trial, sets, 0)
1409 || insn_sets_resource_p (trial, needed, 0)
1410 || insn_sets_resource_p (trial, sets, 0)
1411 #ifdef HAVE_cc0
1412 || sets_cc0_p (PATTERN (trial))
1413 #endif
1416 break;
1418 /* If this insn was already done, we don't need it. */
1419 if (redundant_insn (trial, insn, delay_list))
1421 delete_from_delay_slot (trial);
1422 continue;
1425 if (! must_annul
1426 && ((condition == const_true_rtx
1427 || (! insn_sets_resource_p (trial, other_needed, 0)
1428 && ! may_trap_p (PATTERN (trial)))))
1429 ? eligible_for_delay (insn, *pslots_filled, trial, flags)
1430 : (must_annul || delay_list == NULL) && (must_annul = 1,
1431 check_annul_list_true_false (1, delay_list)
1432 && eligible_for_annul_true (insn, *pslots_filled, trial, flags)))
1434 if (must_annul)
1435 used_annul = 1;
1436 delete_from_delay_slot (trial);
1437 delay_list = add_to_delay_list (trial, delay_list);
1439 if (++(*pslots_filled) == slots_to_fill)
1440 break;
1442 else
1443 break;
1446 if (used_annul)
1447 *pannul_p = 1;
1448 return delay_list;
1451 /* Try merging insns starting at THREAD which match exactly the insns in
1452 INSN's delay list.
1454 If all insns were matched and the insn was previously annulling, the
1455 annul bit will be cleared.
1457 For each insn that is merged, if the branch is or will be non-annulling,
1458 we delete the merged insn. */
1460 static void
1461 try_merge_delay_insns (insn, thread)
1462 rtx insn, thread;
1464 rtx trial, next_trial;
1465 rtx delay_insn = XVECEXP (PATTERN (insn), 0, 0);
1466 int annul_p = INSN_ANNULLED_BRANCH_P (delay_insn);
1467 int slot_number = 1;
1468 int num_slots = XVECLEN (PATTERN (insn), 0);
1469 rtx next_to_match = XVECEXP (PATTERN (insn), 0, slot_number);
1470 struct resources set, needed;
1471 rtx merged_insns = 0;
1472 int i;
1473 int flags;
1475 flags = get_jump_flags (delay_insn, JUMP_LABEL (delay_insn));
1477 CLEAR_RESOURCE (&needed);
1478 CLEAR_RESOURCE (&set);
1480 /* If this is not an annulling branch, take into account anything needed in
1481 INSN's delay slot. This prevents two increments from being incorrectly
1482 folded into one. If we are annulling, this would be the correct
1483 thing to do. (The alternative, looking at things set in NEXT_TO_MATCH
1484 will essentially disable this optimization. This method is somewhat of
1485 a kludge, but I don't see a better way.) */
1486 if (! annul_p)
1487 for (i = 1 ; i < num_slots; i++)
1488 if (XVECEXP (PATTERN (insn), 0, i))
1489 mark_referenced_resources (XVECEXP (PATTERN (insn), 0, i), &needed, 1);
1491 for (trial = thread; !stop_search_p (trial, 1); trial = next_trial)
1493 rtx pat = PATTERN (trial);
1494 rtx oldtrial = trial;
1496 next_trial = next_nonnote_insn (trial);
1498 /* TRIAL must be a CALL_INSN or INSN. Skip USE and CLOBBER. */
1499 if (GET_CODE (trial) == INSN
1500 && (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER))
1501 continue;
1503 if (GET_CODE (next_to_match) == GET_CODE (trial)
1504 #ifdef HAVE_cc0
1505 /* We can't share an insn that sets cc0. */
1506 && ! sets_cc0_p (pat)
1507 #endif
1508 && ! insn_references_resource_p (trial, &set, 1)
1509 && ! insn_sets_resource_p (trial, &set, 1)
1510 && ! insn_sets_resource_p (trial, &needed, 1)
1511 && (trial = try_split (pat, trial, 0)) != 0
1512 /* Update next_trial, in case try_split succeeded. */
1513 && (next_trial = next_nonnote_insn (trial))
1514 /* Likewise THREAD. */
1515 && (thread = oldtrial == thread ? trial : thread)
1516 && rtx_equal_p (PATTERN (next_to_match), PATTERN (trial))
1517 /* Have to test this condition if annul condition is different
1518 from (and less restrictive than) non-annulling one. */
1519 && eligible_for_delay (delay_insn, slot_number - 1, trial, flags))
1522 if (! annul_p)
1524 update_block (trial, thread);
1525 if (trial == thread)
1526 thread = next_active_insn (thread);
1528 delete_related_insns (trial);
1529 INSN_FROM_TARGET_P (next_to_match) = 0;
1531 else
1532 merged_insns = gen_rtx_INSN_LIST (VOIDmode, trial, merged_insns);
1534 if (++slot_number == num_slots)
1535 break;
1537 next_to_match = XVECEXP (PATTERN (insn), 0, slot_number);
1540 mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL);
1541 mark_referenced_resources (trial, &needed, 1);
1544 /* See if we stopped on a filled insn. If we did, try to see if its
1545 delay slots match. */
1546 if (slot_number != num_slots
1547 && trial && GET_CODE (trial) == INSN
1548 && GET_CODE (PATTERN (trial)) == SEQUENCE
1549 && ! INSN_ANNULLED_BRANCH_P (XVECEXP (PATTERN (trial), 0, 0)))
1551 rtx pat = PATTERN (trial);
1552 rtx filled_insn = XVECEXP (pat, 0, 0);
1554 /* Account for resources set/needed by the filled insn. */
1555 mark_set_resources (filled_insn, &set, 0, MARK_SRC_DEST_CALL);
1556 mark_referenced_resources (filled_insn, &needed, 1);
1558 for (i = 1; i < XVECLEN (pat, 0); i++)
1560 rtx dtrial = XVECEXP (pat, 0, i);
1562 if (! insn_references_resource_p (dtrial, &set, 1)
1563 && ! insn_sets_resource_p (dtrial, &set, 1)
1564 && ! insn_sets_resource_p (dtrial, &needed, 1)
1565 #ifdef HAVE_cc0
1566 && ! sets_cc0_p (PATTERN (dtrial))
1567 #endif
1568 && rtx_equal_p (PATTERN (next_to_match), PATTERN (dtrial))
1569 && eligible_for_delay (delay_insn, slot_number - 1, dtrial, flags))
1571 if (! annul_p)
1573 rtx new;
1575 update_block (dtrial, thread);
1576 new = delete_from_delay_slot (dtrial);
1577 if (INSN_DELETED_P (thread))
1578 thread = new;
1579 INSN_FROM_TARGET_P (next_to_match) = 0;
1581 else
1582 merged_insns = gen_rtx_INSN_LIST (SImode, dtrial,
1583 merged_insns);
1585 if (++slot_number == num_slots)
1586 break;
1588 next_to_match = XVECEXP (PATTERN (insn), 0, slot_number);
1590 else
1592 /* Keep track of the set/referenced resources for the delay
1593 slots of any trial insns we encounter. */
1594 mark_set_resources (dtrial, &set, 0, MARK_SRC_DEST_CALL);
1595 mark_referenced_resources (dtrial, &needed, 1);
1600 /* If all insns in the delay slot have been matched and we were previously
1601 annulling the branch, we need not any more. In that case delete all the
1602 merged insns. Also clear the INSN_FROM_TARGET_P bit of each insn in
1603 the delay list so that we know that it isn't only being used at the
1604 target. */
1605 if (slot_number == num_slots && annul_p)
1607 for (; merged_insns; merged_insns = XEXP (merged_insns, 1))
1609 if (GET_MODE (merged_insns) == SImode)
1611 rtx new;
1613 update_block (XEXP (merged_insns, 0), thread);
1614 new = delete_from_delay_slot (XEXP (merged_insns, 0));
1615 if (INSN_DELETED_P (thread))
1616 thread = new;
1618 else
1620 update_block (XEXP (merged_insns, 0), thread);
1621 delete_related_insns (XEXP (merged_insns, 0));
1625 INSN_ANNULLED_BRANCH_P (delay_insn) = 0;
1627 for (i = 0; i < XVECLEN (PATTERN (insn), 0); i++)
1628 INSN_FROM_TARGET_P (XVECEXP (PATTERN (insn), 0, i)) = 0;
1632 /* See if INSN is redundant with an insn in front of TARGET. Often this
1633 is called when INSN is a candidate for a delay slot of TARGET.
1634 DELAY_LIST are insns that will be placed in delay slots of TARGET in front
1635 of INSN. Often INSN will be redundant with an insn in a delay slot of
1636 some previous insn. This happens when we have a series of branches to the
1637 same label; in that case the first insn at the target might want to go
1638 into each of the delay slots.
1640 If we are not careful, this routine can take up a significant fraction
1641 of the total compilation time (4%), but only wins rarely. Hence we
1642 speed this routine up by making two passes. The first pass goes back
1643 until it hits a label and sees if it find an insn with an identical
1644 pattern. Only in this (relatively rare) event does it check for
1645 data conflicts.
1647 We do not split insns we encounter. This could cause us not to find a
1648 redundant insn, but the cost of splitting seems greater than the possible
1649 gain in rare cases. */
1651 static rtx
1652 redundant_insn (insn, target, delay_list)
1653 rtx insn;
1654 rtx target;
1655 rtx delay_list;
1657 rtx target_main = target;
1658 rtx ipat = PATTERN (insn);
1659 rtx trial, pat;
1660 struct resources needed, set;
1661 int i;
1662 unsigned insns_to_search;
1664 /* If INSN has any REG_UNUSED notes, it can't match anything since we
1665 are allowed to not actually assign to such a register. */
1666 if (find_reg_note (insn, REG_UNUSED, NULL_RTX) != 0)
1667 return 0;
1669 /* Scan backwards looking for a match. */
1670 for (trial = PREV_INSN (target),
1671 insns_to_search = MAX_DELAY_SLOT_INSN_SEARCH;
1672 trial && insns_to_search > 0;
1673 trial = PREV_INSN (trial), --insns_to_search)
1675 if (GET_CODE (trial) == CODE_LABEL)
1676 return 0;
1678 if (! INSN_P (trial))
1679 continue;
1681 pat = PATTERN (trial);
1682 if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)
1683 continue;
1685 if (GET_CODE (pat) == SEQUENCE)
1687 /* Stop for a CALL and its delay slots because it is difficult to
1688 track its resource needs correctly. */
1689 if (GET_CODE (XVECEXP (pat, 0, 0)) == CALL_INSN)
1690 return 0;
1692 /* Stop for an INSN or JUMP_INSN with delayed effects and its delay
1693 slots because it is difficult to track its resource needs
1694 correctly. */
1696 #ifdef INSN_SETS_ARE_DELAYED
1697 if (INSN_SETS_ARE_DELAYED (XVECEXP (pat, 0, 0)))
1698 return 0;
1699 #endif
1701 #ifdef INSN_REFERENCES_ARE_DELAYED
1702 if (INSN_REFERENCES_ARE_DELAYED (XVECEXP (pat, 0, 0)))
1703 return 0;
1704 #endif
1706 /* See if any of the insns in the delay slot match, updating
1707 resource requirements as we go. */
1708 for (i = XVECLEN (pat, 0) - 1; i > 0; i--)
1709 if (GET_CODE (XVECEXP (pat, 0, i)) == GET_CODE (insn)
1710 && rtx_equal_p (PATTERN (XVECEXP (pat, 0, i)), ipat)
1711 && ! find_reg_note (XVECEXP (pat, 0, i), REG_UNUSED, NULL_RTX))
1712 break;
1714 /* If found a match, exit this loop early. */
1715 if (i > 0)
1716 break;
1719 else if (GET_CODE (trial) == GET_CODE (insn) && rtx_equal_p (pat, ipat)
1720 && ! find_reg_note (trial, REG_UNUSED, NULL_RTX))
1721 break;
1724 /* If we didn't find an insn that matches, return 0. */
1725 if (trial == 0)
1726 return 0;
1728 /* See what resources this insn sets and needs. If they overlap, or
1729 if this insn references CC0, it can't be redundant. */
1731 CLEAR_RESOURCE (&needed);
1732 CLEAR_RESOURCE (&set);
1733 mark_set_resources (insn, &set, 0, MARK_SRC_DEST_CALL);
1734 mark_referenced_resources (insn, &needed, 1);
1736 /* If TARGET is a SEQUENCE, get the main insn. */
1737 if (GET_CODE (target) == INSN && GET_CODE (PATTERN (target)) == SEQUENCE)
1738 target_main = XVECEXP (PATTERN (target), 0, 0);
1740 if (resource_conflicts_p (&needed, &set)
1741 #ifdef HAVE_cc0
1742 || reg_mentioned_p (cc0_rtx, ipat)
1743 #endif
1744 /* The insn requiring the delay may not set anything needed or set by
1745 INSN. */
1746 || insn_sets_resource_p (target_main, &needed, 1)
1747 || insn_sets_resource_p (target_main, &set, 1))
1748 return 0;
1750 /* Insns we pass may not set either NEEDED or SET, so merge them for
1751 simpler tests. */
1752 needed.memory |= set.memory;
1753 needed.unch_memory |= set.unch_memory;
1754 IOR_HARD_REG_SET (needed.regs, set.regs);
1756 /* This insn isn't redundant if it conflicts with an insn that either is
1757 or will be in a delay slot of TARGET. */
1759 while (delay_list)
1761 if (insn_sets_resource_p (XEXP (delay_list, 0), &needed, 1))
1762 return 0;
1763 delay_list = XEXP (delay_list, 1);
1766 if (GET_CODE (target) == INSN && GET_CODE (PATTERN (target)) == SEQUENCE)
1767 for (i = 1; i < XVECLEN (PATTERN (target), 0); i++)
1768 if (insn_sets_resource_p (XVECEXP (PATTERN (target), 0, i), &needed, 1))
1769 return 0;
1771 /* Scan backwards until we reach a label or an insn that uses something
1772 INSN sets or sets something insn uses or sets. */
1774 for (trial = PREV_INSN (target),
1775 insns_to_search = MAX_DELAY_SLOT_INSN_SEARCH;
1776 trial && GET_CODE (trial) != CODE_LABEL && insns_to_search > 0;
1777 trial = PREV_INSN (trial), --insns_to_search)
1779 if (GET_CODE (trial) != INSN && GET_CODE (trial) != CALL_INSN
1780 && GET_CODE (trial) != JUMP_INSN)
1781 continue;
1783 pat = PATTERN (trial);
1784 if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)
1785 continue;
1787 if (GET_CODE (pat) == SEQUENCE)
1789 /* If this is a CALL_INSN and its delay slots, it is hard to track
1790 the resource needs properly, so give up. */
1791 if (GET_CODE (XVECEXP (pat, 0, 0)) == CALL_INSN)
1792 return 0;
1794 /* If this is an INSN or JUMP_INSN with delayed effects, it
1795 is hard to track the resource needs properly, so give up. */
1797 #ifdef INSN_SETS_ARE_DELAYED
1798 if (INSN_SETS_ARE_DELAYED (XVECEXP (pat, 0, 0)))
1799 return 0;
1800 #endif
1802 #ifdef INSN_REFERENCES_ARE_DELAYED
1803 if (INSN_REFERENCES_ARE_DELAYED (XVECEXP (pat, 0, 0)))
1804 return 0;
1805 #endif
1807 /* See if any of the insns in the delay slot match, updating
1808 resource requirements as we go. */
1809 for (i = XVECLEN (pat, 0) - 1; i > 0; i--)
1811 rtx candidate = XVECEXP (pat, 0, i);
1813 /* If an insn will be annulled if the branch is false, it isn't
1814 considered as a possible duplicate insn. */
1815 if (rtx_equal_p (PATTERN (candidate), ipat)
1816 && ! (INSN_ANNULLED_BRANCH_P (XVECEXP (pat, 0, 0))
1817 && INSN_FROM_TARGET_P (candidate)))
1819 /* Show that this insn will be used in the sequel. */
1820 INSN_FROM_TARGET_P (candidate) = 0;
1821 return candidate;
1824 /* Unless this is an annulled insn from the target of a branch,
1825 we must stop if it sets anything needed or set by INSN. */
1826 if ((! INSN_ANNULLED_BRANCH_P (XVECEXP (pat, 0, 0))
1827 || ! INSN_FROM_TARGET_P (candidate))
1828 && insn_sets_resource_p (candidate, &needed, 1))
1829 return 0;
1832 /* If the insn requiring the delay slot conflicts with INSN, we
1833 must stop. */
1834 if (insn_sets_resource_p (XVECEXP (pat, 0, 0), &needed, 1))
1835 return 0;
1837 else
1839 /* See if TRIAL is the same as INSN. */
1840 pat = PATTERN (trial);
1841 if (rtx_equal_p (pat, ipat))
1842 return trial;
1844 /* Can't go any further if TRIAL conflicts with INSN. */
1845 if (insn_sets_resource_p (trial, &needed, 1))
1846 return 0;
1850 return 0;
1853 /* Return 1 if THREAD can only be executed in one way. If LABEL is non-zero,
1854 it is the target of the branch insn being scanned. If ALLOW_FALLTHROUGH
1855 is non-zero, we are allowed to fall into this thread; otherwise, we are
1856 not.
1858 If LABEL is used more than one or we pass a label other than LABEL before
1859 finding an active insn, we do not own this thread. */
1861 static int
1862 own_thread_p (thread, label, allow_fallthrough)
1863 rtx thread;
1864 rtx label;
1865 int allow_fallthrough;
1867 rtx active_insn;
1868 rtx insn;
1870 /* We don't own the function end. */
1871 if (thread == 0)
1872 return 0;
1874 /* Get the first active insn, or THREAD, if it is an active insn. */
1875 active_insn = next_active_insn (PREV_INSN (thread));
1877 for (insn = thread; insn != active_insn; insn = NEXT_INSN (insn))
1878 if (GET_CODE (insn) == CODE_LABEL
1879 && (insn != label || LABEL_NUSES (insn) != 1))
1880 return 0;
1882 if (allow_fallthrough)
1883 return 1;
1885 /* Ensure that we reach a BARRIER before any insn or label. */
1886 for (insn = prev_nonnote_insn (thread);
1887 insn == 0 || GET_CODE (insn) != BARRIER;
1888 insn = prev_nonnote_insn (insn))
1889 if (insn == 0
1890 || GET_CODE (insn) == CODE_LABEL
1891 || (GET_CODE (insn) == INSN
1892 && GET_CODE (PATTERN (insn)) != USE
1893 && GET_CODE (PATTERN (insn)) != CLOBBER))
1894 return 0;
1896 return 1;
1899 /* Called when INSN is being moved from a location near the target of a jump.
1900 We leave a marker of the form (use (INSN)) immediately in front
1901 of WHERE for mark_target_live_regs. These markers will be deleted when
1902 reorg finishes.
1904 We used to try to update the live status of registers if WHERE is at
1905 the start of a basic block, but that can't work since we may remove a
1906 BARRIER in relax_delay_slots. */
1908 static void
1909 update_block (insn, where)
1910 rtx insn;
1911 rtx where;
1913 /* Ignore if this was in a delay slot and it came from the target of
1914 a branch. */
1915 if (INSN_FROM_TARGET_P (insn))
1916 return;
1918 emit_insn_before (gen_rtx_USE (VOIDmode, insn), where);
1920 /* INSN might be making a value live in a block where it didn't use to
1921 be. So recompute liveness information for this block. */
1923 incr_ticks_for_insn (insn);
1926 /* Similar to REDIRECT_JUMP except that we update the BB_TICKS entry for
1927 the basic block containing the jump. */
1929 static int
1930 reorg_redirect_jump (jump, nlabel)
1931 rtx jump;
1932 rtx nlabel;
1934 incr_ticks_for_insn (jump);
1935 return redirect_jump (jump, nlabel, 1);
1938 /* Called when INSN is being moved forward into a delay slot of DELAYED_INSN.
1939 We check every instruction between INSN and DELAYED_INSN for REG_DEAD notes
1940 that reference values used in INSN. If we find one, then we move the
1941 REG_DEAD note to INSN.
1943 This is needed to handle the case where an later insn (after INSN) has a
1944 REG_DEAD note for a register used by INSN, and this later insn subsequently
1945 gets moved before a CODE_LABEL because it is a redundant insn. In this
1946 case, mark_target_live_regs may be confused into thinking the register
1947 is dead because it sees a REG_DEAD note immediately before a CODE_LABEL. */
1949 static void
1950 update_reg_dead_notes (insn, delayed_insn)
1951 rtx insn, delayed_insn;
1953 rtx p, link, next;
1955 for (p = next_nonnote_insn (insn); p != delayed_insn;
1956 p = next_nonnote_insn (p))
1957 for (link = REG_NOTES (p); link; link = next)
1959 next = XEXP (link, 1);
1961 if (REG_NOTE_KIND (link) != REG_DEAD
1962 || GET_CODE (XEXP (link, 0)) != REG)
1963 continue;
1965 if (reg_referenced_p (XEXP (link, 0), PATTERN (insn)))
1967 /* Move the REG_DEAD note from P to INSN. */
1968 remove_note (p, link);
1969 XEXP (link, 1) = REG_NOTES (insn);
1970 REG_NOTES (insn) = link;
1975 /* Called when an insn redundant with start_insn is deleted. If there
1976 is a REG_DEAD note for the target of start_insn between start_insn
1977 and stop_insn, then the REG_DEAD note needs to be deleted since the
1978 value no longer dies there.
1980 If the REG_DEAD note isn't deleted, then mark_target_live_regs may be
1981 confused into thinking the register is dead. */
1983 static void
1984 fix_reg_dead_note (start_insn, stop_insn)
1985 rtx start_insn, stop_insn;
1987 rtx p, link, next;
1989 for (p = next_nonnote_insn (start_insn); p != stop_insn;
1990 p = next_nonnote_insn (p))
1991 for (link = REG_NOTES (p); link; link = next)
1993 next = XEXP (link, 1);
1995 if (REG_NOTE_KIND (link) != REG_DEAD
1996 || GET_CODE (XEXP (link, 0)) != REG)
1997 continue;
1999 if (reg_set_p (XEXP (link, 0), PATTERN (start_insn)))
2001 remove_note (p, link);
2002 return;
2007 /* Delete any REG_UNUSED notes that exist on INSN but not on REDUNDANT_INSN.
2009 This handles the case of udivmodXi4 instructions which optimize their
2010 output depending on whether any REG_UNUSED notes are present.
2011 we must make sure that INSN calculates as many results as REDUNDANT_INSN
2012 does. */
2014 static void
2015 update_reg_unused_notes (insn, redundant_insn)
2016 rtx insn, redundant_insn;
2018 rtx link, next;
2020 for (link = REG_NOTES (insn); link; link = next)
2022 next = XEXP (link, 1);
2024 if (REG_NOTE_KIND (link) != REG_UNUSED
2025 || GET_CODE (XEXP (link, 0)) != REG)
2026 continue;
2028 if (! find_regno_note (redundant_insn, REG_UNUSED,
2029 REGNO (XEXP (link, 0))))
2030 remove_note (insn, link);
2034 /* Scan a function looking for insns that need a delay slot and find insns to
2035 put into the delay slot.
2037 NON_JUMPS_P is non-zero if we are to only try to fill non-jump insns (such
2038 as calls). We do these first since we don't want jump insns (that are
2039 easier to fill) to get the only insns that could be used for non-jump insns.
2040 When it is zero, only try to fill JUMP_INSNs.
2042 When slots are filled in this manner, the insns (including the
2043 delay_insn) are put together in a SEQUENCE rtx. In this fashion,
2044 it is possible to tell whether a delay slot has really been filled
2045 or not. `final' knows how to deal with this, by communicating
2046 through FINAL_SEQUENCE. */
2048 static void
2049 fill_simple_delay_slots (non_jumps_p)
2050 int non_jumps_p;
2052 rtx insn, pat, trial, next_trial;
2053 int i;
2054 int num_unfilled_slots = unfilled_slots_next - unfilled_slots_base;
2055 struct resources needed, set;
2056 int slots_to_fill, slots_filled;
2057 rtx delay_list;
2059 for (i = 0; i < num_unfilled_slots; i++)
2061 int flags;
2062 /* Get the next insn to fill. If it has already had any slots assigned,
2063 we can't do anything with it. Maybe we'll improve this later. */
2065 insn = unfilled_slots_base[i];
2066 if (insn == 0
2067 || INSN_DELETED_P (insn)
2068 || (GET_CODE (insn) == INSN
2069 && GET_CODE (PATTERN (insn)) == SEQUENCE)
2070 || (GET_CODE (insn) == JUMP_INSN && non_jumps_p)
2071 || (GET_CODE (insn) != JUMP_INSN && ! non_jumps_p))
2072 continue;
2074 /* It may have been that this insn used to need delay slots, but
2075 now doesn't; ignore in that case. This can happen, for example,
2076 on the HP PA RISC, where the number of delay slots depends on
2077 what insns are nearby. */
2078 slots_to_fill = num_delay_slots (insn);
2080 /* Some machine description have defined instructions to have
2081 delay slots only in certain circumstances which may depend on
2082 nearby insns (which change due to reorg's actions).
2084 For example, the PA port normally has delay slots for unconditional
2085 jumps.
2087 However, the PA port claims such jumps do not have a delay slot
2088 if they are immediate successors of certain CALL_INSNs. This
2089 allows the port to favor filling the delay slot of the call with
2090 the unconditional jump. */
2091 if (slots_to_fill == 0)
2092 continue;
2094 /* This insn needs, or can use, some delay slots. SLOTS_TO_FILL
2095 says how many. After initialization, first try optimizing
2097 call _foo call _foo
2098 nop add %o7,.-L1,%o7
2099 b,a L1
2102 If this case applies, the delay slot of the call is filled with
2103 the unconditional jump. This is done first to avoid having the
2104 delay slot of the call filled in the backward scan. Also, since
2105 the unconditional jump is likely to also have a delay slot, that
2106 insn must exist when it is subsequently scanned.
2108 This is tried on each insn with delay slots as some machines
2109 have insns which perform calls, but are not represented as
2110 CALL_INSNs. */
2112 slots_filled = 0;
2113 delay_list = 0;
2115 if (GET_CODE (insn) == JUMP_INSN)
2116 flags = get_jump_flags (insn, JUMP_LABEL (insn));
2117 else
2118 flags = get_jump_flags (insn, NULL_RTX);
2120 if ((trial = next_active_insn (insn))
2121 && GET_CODE (trial) == JUMP_INSN
2122 && simplejump_p (trial)
2123 && eligible_for_delay (insn, slots_filled, trial, flags)
2124 && no_labels_between_p (insn, trial))
2126 rtx *tmp;
2127 slots_filled++;
2128 delay_list = add_to_delay_list (trial, delay_list);
2130 /* TRIAL may have had its delay slot filled, then unfilled. When
2131 the delay slot is unfilled, TRIAL is placed back on the unfilled
2132 slots obstack. Unfortunately, it is placed on the end of the
2133 obstack, not in its original location. Therefore, we must search
2134 from entry i + 1 to the end of the unfilled slots obstack to
2135 try and find TRIAL. */
2136 tmp = &unfilled_slots_base[i + 1];
2137 while (*tmp != trial && tmp != unfilled_slots_next)
2138 tmp++;
2140 /* Remove the unconditional jump from consideration for delay slot
2141 filling and unthread it. */
2142 if (*tmp == trial)
2143 *tmp = 0;
2145 rtx next = NEXT_INSN (trial);
2146 rtx prev = PREV_INSN (trial);
2147 if (prev)
2148 NEXT_INSN (prev) = next;
2149 if (next)
2150 PREV_INSN (next) = prev;
2154 /* Now, scan backwards from the insn to search for a potential
2155 delay-slot candidate. Stop searching when a label or jump is hit.
2157 For each candidate, if it is to go into the delay slot (moved
2158 forward in execution sequence), it must not need or set any resources
2159 that were set by later insns and must not set any resources that
2160 are needed for those insns.
2162 The delay slot insn itself sets resources unless it is a call
2163 (in which case the called routine, not the insn itself, is doing
2164 the setting). */
2166 if (slots_filled < slots_to_fill)
2168 CLEAR_RESOURCE (&needed);
2169 CLEAR_RESOURCE (&set);
2170 mark_set_resources (insn, &set, 0, MARK_SRC_DEST);
2171 mark_referenced_resources (insn, &needed, 0);
2173 for (trial = prev_nonnote_insn (insn); ! stop_search_p (trial, 1);
2174 trial = next_trial)
2176 next_trial = prev_nonnote_insn (trial);
2178 /* This must be an INSN or CALL_INSN. */
2179 pat = PATTERN (trial);
2181 /* USE and CLOBBER at this level was just for flow; ignore it. */
2182 if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)
2183 continue;
2185 /* Check for resource conflict first, to avoid unnecessary
2186 splitting. */
2187 if (! insn_references_resource_p (trial, &set, 1)
2188 && ! insn_sets_resource_p (trial, &set, 1)
2189 && ! insn_sets_resource_p (trial, &needed, 1)
2190 #ifdef HAVE_cc0
2191 /* Can't separate set of cc0 from its use. */
2192 && ! (reg_mentioned_p (cc0_rtx, pat) && ! sets_cc0_p (pat))
2193 #endif
2196 trial = try_split (pat, trial, 1);
2197 next_trial = prev_nonnote_insn (trial);
2198 if (eligible_for_delay (insn, slots_filled, trial, flags))
2200 /* In this case, we are searching backward, so if we
2201 find insns to put on the delay list, we want
2202 to put them at the head, rather than the
2203 tail, of the list. */
2205 update_reg_dead_notes (trial, insn);
2206 delay_list = gen_rtx_INSN_LIST (VOIDmode,
2207 trial, delay_list);
2208 update_block (trial, trial);
2209 delete_related_insns (trial);
2210 if (slots_to_fill == ++slots_filled)
2211 break;
2212 continue;
2216 mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL);
2217 mark_referenced_resources (trial, &needed, 1);
2221 /* If all needed slots haven't been filled, we come here. */
2223 /* Try to optimize case of jumping around a single insn. */
2224 #if defined(ANNUL_IFFALSE_SLOTS) || defined(ANNUL_IFTRUE_SLOTS)
2225 if (slots_filled != slots_to_fill
2226 && delay_list == 0
2227 && GET_CODE (insn) == JUMP_INSN
2228 && (condjump_p (insn) || condjump_in_parallel_p (insn)))
2230 delay_list = optimize_skip (insn);
2231 if (delay_list)
2232 slots_filled += 1;
2234 #endif
2236 /* Try to get insns from beyond the insn needing the delay slot.
2237 These insns can neither set or reference resources set in insns being
2238 skipped, cannot set resources in the insn being skipped, and, if this
2239 is a CALL_INSN (or a CALL_INSN is passed), cannot trap (because the
2240 call might not return).
2242 There used to be code which continued past the target label if
2243 we saw all uses of the target label. This code did not work,
2244 because it failed to account for some instructions which were
2245 both annulled and marked as from the target. This can happen as a
2246 result of optimize_skip. Since this code was redundant with
2247 fill_eager_delay_slots anyways, it was just deleted. */
2249 if (slots_filled != slots_to_fill
2250 /* If this instruction could throw an exception which is
2251 caught in the same function, then it's not safe to fill
2252 the delay slot with an instruction from beyond this
2253 point. For example, consider:
2255 int i = 2;
2257 try {
2258 f();
2259 i = 3;
2260 } catch (...) {}
2262 return i;
2264 Even though `i' is a local variable, we must be sure not
2265 to put `i = 3' in the delay slot if `f' might throw an
2266 exception.
2268 Presumably, we should also check to see if we could get
2269 back to this function via `setjmp'. */
2270 && !can_throw_internal (insn)
2271 && (GET_CODE (insn) != JUMP_INSN
2272 || ((condjump_p (insn) || condjump_in_parallel_p (insn))
2273 && ! simplejump_p (insn)
2274 && JUMP_LABEL (insn) != 0)))
2276 /* Invariant: If insn is a JUMP_INSN, the insn's jump
2277 label. Otherwise, zero. */
2278 rtx target = 0;
2279 int maybe_never = 0;
2280 rtx pat, trial_delay;
2282 CLEAR_RESOURCE (&needed);
2283 CLEAR_RESOURCE (&set);
2285 if (GET_CODE (insn) == CALL_INSN)
2287 mark_set_resources (insn, &set, 0, MARK_SRC_DEST_CALL);
2288 mark_referenced_resources (insn, &needed, 1);
2289 maybe_never = 1;
2291 else
2293 mark_set_resources (insn, &set, 0, MARK_SRC_DEST_CALL);
2294 mark_referenced_resources (insn, &needed, 1);
2295 if (GET_CODE (insn) == JUMP_INSN)
2296 target = JUMP_LABEL (insn);
2299 if (target == 0)
2300 for (trial = next_nonnote_insn (insn); trial; trial = next_trial)
2302 next_trial = next_nonnote_insn (trial);
2304 if (GET_CODE (trial) == CODE_LABEL
2305 || GET_CODE (trial) == BARRIER)
2306 break;
2308 /* We must have an INSN, JUMP_INSN, or CALL_INSN. */
2309 pat = PATTERN (trial);
2311 /* Stand-alone USE and CLOBBER are just for flow. */
2312 if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)
2313 continue;
2315 /* If this already has filled delay slots, get the insn needing
2316 the delay slots. */
2317 if (GET_CODE (pat) == SEQUENCE)
2318 trial_delay = XVECEXP (pat, 0, 0);
2319 else
2320 trial_delay = trial;
2322 /* Stop our search when seeing an unconditional jump. */
2323 if (GET_CODE (trial_delay) == JUMP_INSN)
2324 break;
2326 /* See if we have a resource problem before we try to
2327 split. */
2328 if (GET_CODE (pat) != SEQUENCE
2329 && ! insn_references_resource_p (trial, &set, 1)
2330 && ! insn_sets_resource_p (trial, &set, 1)
2331 && ! insn_sets_resource_p (trial, &needed, 1)
2332 #ifdef HAVE_cc0
2333 && ! (reg_mentioned_p (cc0_rtx, pat) && ! sets_cc0_p (pat))
2334 #endif
2335 && ! (maybe_never && may_trap_p (pat))
2336 && (trial = try_split (pat, trial, 0))
2337 && eligible_for_delay (insn, slots_filled, trial, flags))
2339 next_trial = next_nonnote_insn (trial);
2340 delay_list = add_to_delay_list (trial, delay_list);
2342 #ifdef HAVE_cc0
2343 if (reg_mentioned_p (cc0_rtx, pat))
2344 link_cc0_insns (trial);
2345 #endif
2347 delete_related_insns (trial);
2348 if (slots_to_fill == ++slots_filled)
2349 break;
2350 continue;
2353 mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL);
2354 mark_referenced_resources (trial, &needed, 1);
2356 /* Ensure we don't put insns between the setting of cc and the
2357 comparison by moving a setting of cc into an earlier delay
2358 slot since these insns could clobber the condition code. */
2359 set.cc = 1;
2361 /* If this is a call or jump, we might not get here. */
2362 if (GET_CODE (trial_delay) == CALL_INSN
2363 || GET_CODE (trial_delay) == JUMP_INSN)
2364 maybe_never = 1;
2367 /* If there are slots left to fill and our search was stopped by an
2368 unconditional branch, try the insn at the branch target. We can
2369 redirect the branch if it works.
2371 Don't do this if the insn at the branch target is a branch. */
2372 if (slots_to_fill != slots_filled
2373 && trial
2374 && GET_CODE (trial) == JUMP_INSN
2375 && simplejump_p (trial)
2376 && (target == 0 || JUMP_LABEL (trial) == target)
2377 && (next_trial = next_active_insn (JUMP_LABEL (trial))) != 0
2378 && ! (GET_CODE (next_trial) == INSN
2379 && GET_CODE (PATTERN (next_trial)) == SEQUENCE)
2380 && GET_CODE (next_trial) != JUMP_INSN
2381 && ! insn_references_resource_p (next_trial, &set, 1)
2382 && ! insn_sets_resource_p (next_trial, &set, 1)
2383 && ! insn_sets_resource_p (next_trial, &needed, 1)
2384 #ifdef HAVE_cc0
2385 && ! reg_mentioned_p (cc0_rtx, PATTERN (next_trial))
2386 #endif
2387 && ! (maybe_never && may_trap_p (PATTERN (next_trial)))
2388 && (next_trial = try_split (PATTERN (next_trial), next_trial, 0))
2389 && eligible_for_delay (insn, slots_filled, next_trial, flags))
2391 rtx new_label = next_active_insn (next_trial);
2393 if (new_label != 0)
2394 new_label = get_label_before (new_label);
2395 else
2396 new_label = find_end_label ();
2398 delay_list
2399 = add_to_delay_list (copy_rtx (next_trial), delay_list);
2400 slots_filled++;
2401 reorg_redirect_jump (trial, new_label);
2403 /* If we merged because we both jumped to the same place,
2404 redirect the original insn also. */
2405 if (target)
2406 reorg_redirect_jump (insn, new_label);
2410 /* If this is an unconditional jump, then try to get insns from the
2411 target of the jump. */
2412 if (GET_CODE (insn) == JUMP_INSN
2413 && simplejump_p (insn)
2414 && slots_filled != slots_to_fill)
2415 delay_list
2416 = fill_slots_from_thread (insn, const_true_rtx,
2417 next_active_insn (JUMP_LABEL (insn)),
2418 NULL, 1, 1,
2419 own_thread_p (JUMP_LABEL (insn),
2420 JUMP_LABEL (insn), 0),
2421 slots_to_fill, &slots_filled,
2422 delay_list);
2424 if (delay_list)
2425 unfilled_slots_base[i]
2426 = emit_delay_sequence (insn, delay_list, slots_filled);
2428 if (slots_to_fill == slots_filled)
2429 unfilled_slots_base[i] = 0;
2431 note_delay_statistics (slots_filled, 0);
2434 #ifdef DELAY_SLOTS_FOR_EPILOGUE
2435 /* See if the epilogue needs any delay slots. Try to fill them if so.
2436 The only thing we can do is scan backwards from the end of the
2437 function. If we did this in a previous pass, it is incorrect to do it
2438 again. */
2439 if (current_function_epilogue_delay_list)
2440 return;
2442 slots_to_fill = DELAY_SLOTS_FOR_EPILOGUE;
2443 if (slots_to_fill == 0)
2444 return;
2446 slots_filled = 0;
2447 CLEAR_RESOURCE (&set);
2449 /* The frame pointer and stack pointer are needed at the beginning of
2450 the epilogue, so instructions setting them can not be put in the
2451 epilogue delay slot. However, everything else needed at function
2452 end is safe, so we don't want to use end_of_function_needs here. */
2453 CLEAR_RESOURCE (&needed);
2454 if (frame_pointer_needed)
2456 SET_HARD_REG_BIT (needed.regs, FRAME_POINTER_REGNUM);
2457 #if HARD_FRAME_POINTER_REGNUM != FRAME_POINTER_REGNUM
2458 SET_HARD_REG_BIT (needed.regs, HARD_FRAME_POINTER_REGNUM);
2459 #endif
2460 #ifdef EXIT_IGNORE_STACK
2461 if (! EXIT_IGNORE_STACK
2462 || current_function_sp_is_unchanging)
2463 #endif
2464 SET_HARD_REG_BIT (needed.regs, STACK_POINTER_REGNUM);
2466 else
2467 SET_HARD_REG_BIT (needed.regs, STACK_POINTER_REGNUM);
2469 #ifdef EPILOGUE_USES
2470 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
2472 if (EPILOGUE_USES (i))
2473 SET_HARD_REG_BIT (needed.regs, i);
2475 #endif
2477 for (trial = get_last_insn (); ! stop_search_p (trial, 1);
2478 trial = PREV_INSN (trial))
2480 if (GET_CODE (trial) == NOTE)
2481 continue;
2482 pat = PATTERN (trial);
2483 if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)
2484 continue;
2486 if (! insn_references_resource_p (trial, &set, 1)
2487 && ! insn_sets_resource_p (trial, &needed, 1)
2488 && ! insn_sets_resource_p (trial, &set, 1)
2489 #ifdef HAVE_cc0
2490 /* Don't want to mess with cc0 here. */
2491 && ! reg_mentioned_p (cc0_rtx, pat)
2492 #endif
2495 trial = try_split (pat, trial, 1);
2496 if (ELIGIBLE_FOR_EPILOGUE_DELAY (trial, slots_filled))
2498 /* Here as well we are searching backward, so put the
2499 insns we find on the head of the list. */
2501 current_function_epilogue_delay_list
2502 = gen_rtx_INSN_LIST (VOIDmode, trial,
2503 current_function_epilogue_delay_list);
2504 mark_end_of_function_resources (trial, 1);
2505 update_block (trial, trial);
2506 delete_related_insns (trial);
2508 /* Clear deleted bit so final.c will output the insn. */
2509 INSN_DELETED_P (trial) = 0;
2511 if (slots_to_fill == ++slots_filled)
2512 break;
2513 continue;
2517 mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL);
2518 mark_referenced_resources (trial, &needed, 1);
2521 note_delay_statistics (slots_filled, 0);
2522 #endif
2525 /* Try to find insns to place in delay slots.
2527 INSN is the jump needing SLOTS_TO_FILL delay slots. It tests CONDITION
2528 or is an unconditional branch if CONDITION is const_true_rtx.
2529 *PSLOTS_FILLED is updated with the number of slots that we have filled.
2531 THREAD is a flow-of-control, either the insns to be executed if the
2532 branch is true or if the branch is false, THREAD_IF_TRUE says which.
2534 OPPOSITE_THREAD is the thread in the opposite direction. It is used
2535 to see if any potential delay slot insns set things needed there.
2537 LIKELY is non-zero if it is extremely likely that the branch will be
2538 taken and THREAD_IF_TRUE is set. This is used for the branch at the
2539 end of a loop back up to the top.
2541 OWN_THREAD and OWN_OPPOSITE_THREAD are true if we are the only user of the
2542 thread. I.e., it is the fallthrough code of our jump or the target of the
2543 jump when we are the only jump going there.
2545 If OWN_THREAD is false, it must be the "true" thread of a jump. In that
2546 case, we can only take insns from the head of the thread for our delay
2547 slot. We then adjust the jump to point after the insns we have taken. */
2549 static rtx
2550 fill_slots_from_thread (insn, condition, thread, opposite_thread, likely,
2551 thread_if_true, own_thread,
2552 slots_to_fill, pslots_filled, delay_list)
2553 rtx insn;
2554 rtx condition;
2555 rtx thread, opposite_thread;
2556 int likely;
2557 int thread_if_true;
2558 int own_thread;
2559 int slots_to_fill, *pslots_filled;
2560 rtx delay_list;
2562 rtx new_thread;
2563 struct resources opposite_needed, set, needed;
2564 rtx trial;
2565 int lose = 0;
2566 int must_annul = 0;
2567 int flags;
2569 /* Validate our arguments. */
2570 if ((condition == const_true_rtx && ! thread_if_true)
2571 || (! own_thread && ! thread_if_true))
2572 abort ();
2574 flags = get_jump_flags (insn, JUMP_LABEL (insn));
2576 /* If our thread is the end of subroutine, we can't get any delay
2577 insns from that. */
2578 if (thread == 0)
2579 return delay_list;
2581 /* If this is an unconditional branch, nothing is needed at the
2582 opposite thread. Otherwise, compute what is needed there. */
2583 if (condition == const_true_rtx)
2584 CLEAR_RESOURCE (&opposite_needed);
2585 else
2586 mark_target_live_regs (get_insns (), opposite_thread, &opposite_needed);
2588 /* If the insn at THREAD can be split, do it here to avoid having to
2589 update THREAD and NEW_THREAD if it is done in the loop below. Also
2590 initialize NEW_THREAD. */
2592 new_thread = thread = try_split (PATTERN (thread), thread, 0);
2594 /* Scan insns at THREAD. We are looking for an insn that can be removed
2595 from THREAD (it neither sets nor references resources that were set
2596 ahead of it and it doesn't set anything needs by the insns ahead of
2597 it) and that either can be placed in an annulling insn or aren't
2598 needed at OPPOSITE_THREAD. */
2600 CLEAR_RESOURCE (&needed);
2601 CLEAR_RESOURCE (&set);
2603 /* If we do not own this thread, we must stop as soon as we find
2604 something that we can't put in a delay slot, since all we can do
2605 is branch into THREAD at a later point. Therefore, labels stop
2606 the search if this is not the `true' thread. */
2608 for (trial = thread;
2609 ! stop_search_p (trial, ! thread_if_true) && (! lose || own_thread);
2610 trial = next_nonnote_insn (trial))
2612 rtx pat, old_trial;
2614 /* If we have passed a label, we no longer own this thread. */
2615 if (GET_CODE (trial) == CODE_LABEL)
2617 own_thread = 0;
2618 continue;
2621 pat = PATTERN (trial);
2622 if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)
2623 continue;
2625 /* If TRIAL conflicts with the insns ahead of it, we lose. Also,
2626 don't separate or copy insns that set and use CC0. */
2627 if (! insn_references_resource_p (trial, &set, 1)
2628 && ! insn_sets_resource_p (trial, &set, 1)
2629 && ! insn_sets_resource_p (trial, &needed, 1)
2630 #ifdef HAVE_cc0
2631 && ! (reg_mentioned_p (cc0_rtx, pat)
2632 && (! own_thread || ! sets_cc0_p (pat)))
2633 #endif
2636 rtx prior_insn;
2638 /* If TRIAL is redundant with some insn before INSN, we don't
2639 actually need to add it to the delay list; we can merely pretend
2640 we did. */
2641 if ((prior_insn = redundant_insn (trial, insn, delay_list)))
2643 fix_reg_dead_note (prior_insn, insn);
2644 if (own_thread)
2646 update_block (trial, thread);
2647 if (trial == thread)
2649 thread = next_active_insn (thread);
2650 if (new_thread == trial)
2651 new_thread = thread;
2654 delete_related_insns (trial);
2656 else
2658 update_reg_unused_notes (prior_insn, trial);
2659 new_thread = next_active_insn (trial);
2662 continue;
2665 /* There are two ways we can win: If TRIAL doesn't set anything
2666 needed at the opposite thread and can't trap, or if it can
2667 go into an annulled delay slot. */
2668 if (!must_annul
2669 && (condition == const_true_rtx
2670 || (! insn_sets_resource_p (trial, &opposite_needed, 1)
2671 && ! may_trap_p (pat))))
2673 old_trial = trial;
2674 trial = try_split (pat, trial, 0);
2675 if (new_thread == old_trial)
2676 new_thread = trial;
2677 if (thread == old_trial)
2678 thread = trial;
2679 pat = PATTERN (trial);
2680 if (eligible_for_delay (insn, *pslots_filled, trial, flags))
2681 goto winner;
2683 else if (0
2684 #ifdef ANNUL_IFTRUE_SLOTS
2685 || ! thread_if_true
2686 #endif
2687 #ifdef ANNUL_IFFALSE_SLOTS
2688 || thread_if_true
2689 #endif
2692 old_trial = trial;
2693 trial = try_split (pat, trial, 0);
2694 if (new_thread == old_trial)
2695 new_thread = trial;
2696 if (thread == old_trial)
2697 thread = trial;
2698 pat = PATTERN (trial);
2699 if ((must_annul || delay_list == NULL) && (thread_if_true
2700 ? check_annul_list_true_false (0, delay_list)
2701 && eligible_for_annul_false (insn, *pslots_filled, trial, flags)
2702 : check_annul_list_true_false (1, delay_list)
2703 && eligible_for_annul_true (insn, *pslots_filled, trial, flags)))
2705 rtx temp;
2707 must_annul = 1;
2708 winner:
2710 #ifdef HAVE_cc0
2711 if (reg_mentioned_p (cc0_rtx, pat))
2712 link_cc0_insns (trial);
2713 #endif
2715 /* If we own this thread, delete the insn. If this is the
2716 destination of a branch, show that a basic block status
2717 may have been updated. In any case, mark the new
2718 starting point of this thread. */
2719 if (own_thread)
2721 rtx note;
2723 update_block (trial, thread);
2724 if (trial == thread)
2726 thread = next_active_insn (thread);
2727 if (new_thread == trial)
2728 new_thread = thread;
2731 /* We are moving this insn, not deleting it. We must
2732 temporarily increment the use count on any referenced
2733 label lest it be deleted by delete_related_insns. */
2734 note = find_reg_note (trial, REG_LABEL, 0);
2735 if (note)
2736 LABEL_NUSES (XEXP (note, 0))++;
2738 delete_related_insns (trial);
2740 if (note)
2741 LABEL_NUSES (XEXP (note, 0))--;
2743 else
2744 new_thread = next_active_insn (trial);
2746 temp = own_thread ? trial : copy_rtx (trial);
2747 if (thread_if_true)
2748 INSN_FROM_TARGET_P (temp) = 1;
2750 delay_list = add_to_delay_list (temp, delay_list);
2752 if (slots_to_fill == ++(*pslots_filled))
2754 /* Even though we have filled all the slots, we
2755 may be branching to a location that has a
2756 redundant insn. Skip any if so. */
2757 while (new_thread && ! own_thread
2758 && ! insn_sets_resource_p (new_thread, &set, 1)
2759 && ! insn_sets_resource_p (new_thread, &needed, 1)
2760 && ! insn_references_resource_p (new_thread,
2761 &set, 1)
2762 && (prior_insn
2763 = redundant_insn (new_thread, insn,
2764 delay_list)))
2766 /* We know we do not own the thread, so no need
2767 to call update_block and delete_insn. */
2768 fix_reg_dead_note (prior_insn, insn);
2769 update_reg_unused_notes (prior_insn, new_thread);
2770 new_thread = next_active_insn (new_thread);
2772 break;
2775 continue;
2780 /* This insn can't go into a delay slot. */
2781 lose = 1;
2782 mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL);
2783 mark_referenced_resources (trial, &needed, 1);
2785 /* Ensure we don't put insns between the setting of cc and the comparison
2786 by moving a setting of cc into an earlier delay slot since these insns
2787 could clobber the condition code. */
2788 set.cc = 1;
2790 /* If this insn is a register-register copy and the next insn has
2791 a use of our destination, change it to use our source. That way,
2792 it will become a candidate for our delay slot the next time
2793 through this loop. This case occurs commonly in loops that
2794 scan a list.
2796 We could check for more complex cases than those tested below,
2797 but it doesn't seem worth it. It might also be a good idea to try
2798 to swap the two insns. That might do better.
2800 We can't do this if the next insn modifies our destination, because
2801 that would make the replacement into the insn invalid. We also can't
2802 do this if it modifies our source, because it might be an earlyclobber
2803 operand. This latter test also prevents updating the contents of
2804 a PRE_INC. */
2806 if (GET_CODE (trial) == INSN && GET_CODE (pat) == SET
2807 && GET_CODE (SET_SRC (pat)) == REG
2808 && GET_CODE (SET_DEST (pat)) == REG)
2810 rtx next = next_nonnote_insn (trial);
2812 if (next && GET_CODE (next) == INSN
2813 && GET_CODE (PATTERN (next)) != USE
2814 && ! reg_set_p (SET_DEST (pat), next)
2815 && ! reg_set_p (SET_SRC (pat), next)
2816 && reg_referenced_p (SET_DEST (pat), PATTERN (next))
2817 && ! modified_in_p (SET_DEST (pat), next))
2818 validate_replace_rtx (SET_DEST (pat), SET_SRC (pat), next);
2822 /* If we stopped on a branch insn that has delay slots, see if we can
2823 steal some of the insns in those slots. */
2824 if (trial && GET_CODE (trial) == INSN
2825 && GET_CODE (PATTERN (trial)) == SEQUENCE
2826 && GET_CODE (XVECEXP (PATTERN (trial), 0, 0)) == JUMP_INSN)
2828 /* If this is the `true' thread, we will want to follow the jump,
2829 so we can only do this if we have taken everything up to here. */
2830 if (thread_if_true && trial == new_thread)
2832 delay_list
2833 = steal_delay_list_from_target (insn, condition, PATTERN (trial),
2834 delay_list, &set, &needed,
2835 &opposite_needed, slots_to_fill,
2836 pslots_filled, &must_annul,
2837 &new_thread);
2838 /* If we owned the thread and are told that it branched
2839 elsewhere, make sure we own the thread at the new location. */
2840 if (own_thread && trial != new_thread)
2841 own_thread = own_thread_p (new_thread, new_thread, 0);
2843 else if (! thread_if_true)
2844 delay_list
2845 = steal_delay_list_from_fallthrough (insn, condition,
2846 PATTERN (trial),
2847 delay_list, &set, &needed,
2848 &opposite_needed, slots_to_fill,
2849 pslots_filled, &must_annul);
2852 /* If we haven't found anything for this delay slot and it is very
2853 likely that the branch will be taken, see if the insn at our target
2854 increments or decrements a register with an increment that does not
2855 depend on the destination register. If so, try to place the opposite
2856 arithmetic insn after the jump insn and put the arithmetic insn in the
2857 delay slot. If we can't do this, return. */
2858 if (delay_list == 0 && likely && new_thread
2859 && GET_CODE (new_thread) == INSN
2860 && GET_CODE (PATTERN (new_thread)) != ASM_INPUT
2861 && asm_noperands (PATTERN (new_thread)) < 0)
2863 rtx pat = PATTERN (new_thread);
2864 rtx dest;
2865 rtx src;
2867 trial = new_thread;
2868 pat = PATTERN (trial);
2870 if (GET_CODE (trial) != INSN || GET_CODE (pat) != SET
2871 || ! eligible_for_delay (insn, 0, trial, flags))
2872 return 0;
2874 dest = SET_DEST (pat), src = SET_SRC (pat);
2875 if ((GET_CODE (src) == PLUS || GET_CODE (src) == MINUS)
2876 && rtx_equal_p (XEXP (src, 0), dest)
2877 && ! reg_overlap_mentioned_p (dest, XEXP (src, 1))
2878 && ! side_effects_p (pat))
2880 rtx other = XEXP (src, 1);
2881 rtx new_arith;
2882 rtx ninsn;
2884 /* If this is a constant adjustment, use the same code with
2885 the negated constant. Otherwise, reverse the sense of the
2886 arithmetic. */
2887 if (GET_CODE (other) == CONST_INT)
2888 new_arith = gen_rtx_fmt_ee (GET_CODE (src), GET_MODE (src), dest,
2889 negate_rtx (GET_MODE (src), other));
2890 else
2891 new_arith = gen_rtx_fmt_ee (GET_CODE (src) == PLUS ? MINUS : PLUS,
2892 GET_MODE (src), dest, other);
2894 ninsn = emit_insn_after (gen_rtx_SET (VOIDmode, dest, new_arith),
2895 insn);
2897 if (recog_memoized (ninsn) < 0
2898 || (extract_insn (ninsn), ! constrain_operands (1)))
2900 delete_related_insns (ninsn);
2901 return 0;
2904 if (own_thread)
2906 update_block (trial, thread);
2907 if (trial == thread)
2909 thread = next_active_insn (thread);
2910 if (new_thread == trial)
2911 new_thread = thread;
2913 delete_related_insns (trial);
2915 else
2916 new_thread = next_active_insn (trial);
2918 ninsn = own_thread ? trial : copy_rtx (trial);
2919 if (thread_if_true)
2920 INSN_FROM_TARGET_P (ninsn) = 1;
2922 delay_list = add_to_delay_list (ninsn, NULL_RTX);
2923 (*pslots_filled)++;
2927 if (delay_list && must_annul)
2928 INSN_ANNULLED_BRANCH_P (insn) = 1;
2930 /* If we are to branch into the middle of this thread, find an appropriate
2931 label or make a new one if none, and redirect INSN to it. If we hit the
2932 end of the function, use the end-of-function label. */
2933 if (new_thread != thread)
2935 rtx label;
2937 if (! thread_if_true)
2938 abort ();
2940 if (new_thread && GET_CODE (new_thread) == JUMP_INSN
2941 && (simplejump_p (new_thread)
2942 || GET_CODE (PATTERN (new_thread)) == RETURN)
2943 && redirect_with_delay_list_safe_p (insn,
2944 JUMP_LABEL (new_thread),
2945 delay_list))
2946 new_thread = follow_jumps (JUMP_LABEL (new_thread));
2948 if (new_thread == 0)
2949 label = find_end_label ();
2950 else if (GET_CODE (new_thread) == CODE_LABEL)
2951 label = new_thread;
2952 else
2953 label = get_label_before (new_thread);
2955 reorg_redirect_jump (insn, label);
2958 return delay_list;
2961 /* Make another attempt to find insns to place in delay slots.
2963 We previously looked for insns located in front of the delay insn
2964 and, for non-jump delay insns, located behind the delay insn.
2966 Here only try to schedule jump insns and try to move insns from either
2967 the target or the following insns into the delay slot. If annulling is
2968 supported, we will be likely to do this. Otherwise, we can do this only
2969 if safe. */
2971 static void
2972 fill_eager_delay_slots ()
2974 rtx insn;
2975 int i;
2976 int num_unfilled_slots = unfilled_slots_next - unfilled_slots_base;
2978 for (i = 0; i < num_unfilled_slots; i++)
2980 rtx condition;
2981 rtx target_label, insn_at_target, fallthrough_insn;
2982 rtx delay_list = 0;
2983 int own_target;
2984 int own_fallthrough;
2985 int prediction, slots_to_fill, slots_filled;
2987 insn = unfilled_slots_base[i];
2988 if (insn == 0
2989 || INSN_DELETED_P (insn)
2990 || GET_CODE (insn) != JUMP_INSN
2991 || ! (condjump_p (insn) || condjump_in_parallel_p (insn)))
2992 continue;
2994 slots_to_fill = num_delay_slots (insn);
2995 /* Some machine description have defined instructions to have
2996 delay slots only in certain circumstances which may depend on
2997 nearby insns (which change due to reorg's actions).
2999 For example, the PA port normally has delay slots for unconditional
3000 jumps.
3002 However, the PA port claims such jumps do not have a delay slot
3003 if they are immediate successors of certain CALL_INSNs. This
3004 allows the port to favor filling the delay slot of the call with
3005 the unconditional jump. */
3006 if (slots_to_fill == 0)
3007 continue;
3009 slots_filled = 0;
3010 target_label = JUMP_LABEL (insn);
3011 condition = get_branch_condition (insn, target_label);
3013 if (condition == 0)
3014 continue;
3016 /* Get the next active fallthrough and target insns and see if we own
3017 them. Then see whether the branch is likely true. We don't need
3018 to do a lot of this for unconditional branches. */
3020 insn_at_target = next_active_insn (target_label);
3021 own_target = own_thread_p (target_label, target_label, 0);
3023 if (condition == const_true_rtx)
3025 own_fallthrough = 0;
3026 fallthrough_insn = 0;
3027 prediction = 2;
3029 else
3031 fallthrough_insn = next_active_insn (insn);
3032 own_fallthrough = own_thread_p (NEXT_INSN (insn), NULL_RTX, 1);
3033 prediction = mostly_true_jump (insn, condition);
3036 /* If this insn is expected to branch, first try to get insns from our
3037 target, then our fallthrough insns. If it is not expected to branch,
3038 try the other order. */
3040 if (prediction > 0)
3042 delay_list
3043 = fill_slots_from_thread (insn, condition, insn_at_target,
3044 fallthrough_insn, prediction == 2, 1,
3045 own_target,
3046 slots_to_fill, &slots_filled, delay_list);
3048 if (delay_list == 0 && own_fallthrough)
3050 /* Even though we didn't find anything for delay slots,
3051 we might have found a redundant insn which we deleted
3052 from the thread that was filled. So we have to recompute
3053 the next insn at the target. */
3054 target_label = JUMP_LABEL (insn);
3055 insn_at_target = next_active_insn (target_label);
3057 delay_list
3058 = fill_slots_from_thread (insn, condition, fallthrough_insn,
3059 insn_at_target, 0, 0,
3060 own_fallthrough,
3061 slots_to_fill, &slots_filled,
3062 delay_list);
3065 else
3067 if (own_fallthrough)
3068 delay_list
3069 = fill_slots_from_thread (insn, condition, fallthrough_insn,
3070 insn_at_target, 0, 0,
3071 own_fallthrough,
3072 slots_to_fill, &slots_filled,
3073 delay_list);
3075 if (delay_list == 0)
3076 delay_list
3077 = fill_slots_from_thread (insn, condition, insn_at_target,
3078 next_active_insn (insn), 0, 1,
3079 own_target,
3080 slots_to_fill, &slots_filled,
3081 delay_list);
3084 if (delay_list)
3085 unfilled_slots_base[i]
3086 = emit_delay_sequence (insn, delay_list, slots_filled);
3088 if (slots_to_fill == slots_filled)
3089 unfilled_slots_base[i] = 0;
3091 note_delay_statistics (slots_filled, 1);
3095 /* Once we have tried two ways to fill a delay slot, make a pass over the
3096 code to try to improve the results and to do such things as more jump
3097 threading. */
3099 static void
3100 relax_delay_slots (first)
3101 rtx first;
3103 rtx insn, next, pat;
3104 rtx trial, delay_insn, target_label;
3106 /* Look at every JUMP_INSN and see if we can improve it. */
3107 for (insn = first; insn; insn = next)
3109 rtx other;
3111 next = next_active_insn (insn);
3113 /* If this is a jump insn, see if it now jumps to a jump, jumps to
3114 the next insn, or jumps to a label that is not the last of a
3115 group of consecutive labels. */
3116 if (GET_CODE (insn) == JUMP_INSN
3117 && (condjump_p (insn) || condjump_in_parallel_p (insn))
3118 && (target_label = JUMP_LABEL (insn)) != 0)
3120 target_label = follow_jumps (target_label);
3121 target_label = prev_label (next_active_insn (target_label));
3123 if (target_label == 0)
3124 target_label = find_end_label ();
3126 if (next_active_insn (target_label) == next
3127 && ! condjump_in_parallel_p (insn))
3129 delete_jump (insn);
3130 continue;
3133 if (target_label != JUMP_LABEL (insn))
3134 reorg_redirect_jump (insn, target_label);
3136 /* See if this jump branches around an unconditional jump.
3137 If so, invert this jump and point it to the target of the
3138 second jump. */
3139 if (next && GET_CODE (next) == JUMP_INSN
3140 && (simplejump_p (next) || GET_CODE (PATTERN (next)) == RETURN)
3141 && next_active_insn (target_label) == next_active_insn (next)
3142 && no_labels_between_p (insn, next))
3144 rtx label = JUMP_LABEL (next);
3146 /* Be careful how we do this to avoid deleting code or
3147 labels that are momentarily dead. See similar optimization
3148 in jump.c.
3150 We also need to ensure we properly handle the case when
3151 invert_jump fails. */
3153 ++LABEL_NUSES (target_label);
3154 if (label)
3155 ++LABEL_NUSES (label);
3157 if (invert_jump (insn, label, 1))
3159 delete_related_insns (next);
3160 next = insn;
3163 if (label)
3164 --LABEL_NUSES (label);
3166 if (--LABEL_NUSES (target_label) == 0)
3167 delete_related_insns (target_label);
3169 continue;
3173 /* If this is an unconditional jump and the previous insn is a
3174 conditional jump, try reversing the condition of the previous
3175 insn and swapping our targets. The next pass might be able to
3176 fill the slots.
3178 Don't do this if we expect the conditional branch to be true, because
3179 we would then be making the more common case longer. */
3181 if (GET_CODE (insn) == JUMP_INSN
3182 && (simplejump_p (insn) || GET_CODE (PATTERN (insn)) == RETURN)
3183 && (other = prev_active_insn (insn)) != 0
3184 && (condjump_p (other) || condjump_in_parallel_p (other))
3185 && no_labels_between_p (other, insn)
3186 && 0 > mostly_true_jump (other,
3187 get_branch_condition (other,
3188 JUMP_LABEL (other))))
3190 rtx other_target = JUMP_LABEL (other);
3191 target_label = JUMP_LABEL (insn);
3193 if (invert_jump (other, target_label, 0))
3194 reorg_redirect_jump (insn, other_target);
3197 /* Now look only at cases where we have filled a delay slot. */
3198 if (GET_CODE (insn) != INSN
3199 || GET_CODE (PATTERN (insn)) != SEQUENCE)
3200 continue;
3202 pat = PATTERN (insn);
3203 delay_insn = XVECEXP (pat, 0, 0);
3205 /* See if the first insn in the delay slot is redundant with some
3206 previous insn. Remove it from the delay slot if so; then set up
3207 to reprocess this insn. */
3208 if (redundant_insn (XVECEXP (pat, 0, 1), delay_insn, 0))
3210 delete_from_delay_slot (XVECEXP (pat, 0, 1));
3211 next = prev_active_insn (next);
3212 continue;
3215 /* See if we have a RETURN insn with a filled delay slot followed
3216 by a RETURN insn with an unfilled a delay slot. If so, we can delete
3217 the first RETURN (but not it's delay insn). This gives the same
3218 effect in fewer instructions.
3220 Only do so if optimizing for size since this results in slower, but
3221 smaller code. */
3222 if (optimize_size
3223 && GET_CODE (PATTERN (delay_insn)) == RETURN
3224 && next
3225 && GET_CODE (next) == JUMP_INSN
3226 && GET_CODE (PATTERN (next)) == RETURN)
3228 int i;
3230 /* Delete the RETURN and just execute the delay list insns.
3232 We do this by deleting the INSN containing the SEQUENCE, then
3233 re-emitting the insns separately, and then deleting the RETURN.
3234 This allows the count of the jump target to be properly
3235 decremented. */
3237 /* Clear the from target bit, since these insns are no longer
3238 in delay slots. */
3239 for (i = 0; i < XVECLEN (pat, 0); i++)
3240 INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)) = 0;
3242 trial = PREV_INSN (insn);
3243 delete_related_insns (insn);
3244 emit_insn_after (pat, trial);
3245 delete_scheduled_jump (delay_insn);
3246 continue;
3249 /* Now look only at the cases where we have a filled JUMP_INSN. */
3250 if (GET_CODE (XVECEXP (PATTERN (insn), 0, 0)) != JUMP_INSN
3251 || ! (condjump_p (XVECEXP (PATTERN (insn), 0, 0))
3252 || condjump_in_parallel_p (XVECEXP (PATTERN (insn), 0, 0))))
3253 continue;
3255 target_label = JUMP_LABEL (delay_insn);
3257 if (target_label)
3259 /* If this jump goes to another unconditional jump, thread it, but
3260 don't convert a jump into a RETURN here. */
3261 trial = follow_jumps (target_label);
3262 /* We use next_real_insn instead of next_active_insn, so that
3263 the special USE insns emitted by reorg won't be ignored.
3264 If they are ignored, then they will get deleted if target_label
3265 is now unreachable, and that would cause mark_target_live_regs
3266 to fail. */
3267 trial = prev_label (next_real_insn (trial));
3268 if (trial == 0 && target_label != 0)
3269 trial = find_end_label ();
3271 if (trial != target_label
3272 && redirect_with_delay_slots_safe_p (delay_insn, trial, insn))
3274 reorg_redirect_jump (delay_insn, trial);
3275 target_label = trial;
3278 /* If the first insn at TARGET_LABEL is redundant with a previous
3279 insn, redirect the jump to the following insn process again. */
3280 trial = next_active_insn (target_label);
3281 if (trial && GET_CODE (PATTERN (trial)) != SEQUENCE
3282 && redundant_insn (trial, insn, 0))
3284 rtx tmp;
3286 /* Figure out where to emit the special USE insn so we don't
3287 later incorrectly compute register live/death info. */
3288 tmp = next_active_insn (trial);
3289 if (tmp == 0)
3290 tmp = find_end_label ();
3292 /* Insert the special USE insn and update dataflow info. */
3293 update_block (trial, tmp);
3295 /* Now emit a label before the special USE insn, and
3296 redirect our jump to the new label. */
3297 target_label = get_label_before (PREV_INSN (tmp));
3298 reorg_redirect_jump (delay_insn, target_label);
3299 next = insn;
3300 continue;
3303 /* Similarly, if it is an unconditional jump with one insn in its
3304 delay list and that insn is redundant, thread the jump. */
3305 if (trial && GET_CODE (PATTERN (trial)) == SEQUENCE
3306 && XVECLEN (PATTERN (trial), 0) == 2
3307 && GET_CODE (XVECEXP (PATTERN (trial), 0, 0)) == JUMP_INSN
3308 && (simplejump_p (XVECEXP (PATTERN (trial), 0, 0))
3309 || GET_CODE (PATTERN (XVECEXP (PATTERN (trial), 0, 0))) == RETURN)
3310 && redundant_insn (XVECEXP (PATTERN (trial), 0, 1), insn, 0))
3312 target_label = JUMP_LABEL (XVECEXP (PATTERN (trial), 0, 0));
3313 if (target_label == 0)
3314 target_label = find_end_label ();
3316 if (redirect_with_delay_slots_safe_p (delay_insn, target_label,
3317 insn))
3319 reorg_redirect_jump (delay_insn, target_label);
3320 next = insn;
3321 continue;
3326 if (! INSN_ANNULLED_BRANCH_P (delay_insn)
3327 && prev_active_insn (target_label) == insn
3328 && ! condjump_in_parallel_p (delay_insn)
3329 #ifdef HAVE_cc0
3330 /* If the last insn in the delay slot sets CC0 for some insn,
3331 various code assumes that it is in a delay slot. We could
3332 put it back where it belonged and delete the register notes,
3333 but it doesn't seem worthwhile in this uncommon case. */
3334 && ! find_reg_note (XVECEXP (pat, 0, XVECLEN (pat, 0) - 1),
3335 REG_CC_USER, NULL_RTX)
3336 #endif
3339 int i;
3341 /* All this insn does is execute its delay list and jump to the
3342 following insn. So delete the jump and just execute the delay
3343 list insns.
3345 We do this by deleting the INSN containing the SEQUENCE, then
3346 re-emitting the insns separately, and then deleting the jump.
3347 This allows the count of the jump target to be properly
3348 decremented. */
3350 /* Clear the from target bit, since these insns are no longer
3351 in delay slots. */
3352 for (i = 0; i < XVECLEN (pat, 0); i++)
3353 INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)) = 0;
3355 trial = PREV_INSN (insn);
3356 delete_related_insns (insn);
3357 emit_insn_after (pat, trial);
3358 delete_scheduled_jump (delay_insn);
3359 continue;
3362 /* See if this is an unconditional jump around a single insn which is
3363 identical to the one in its delay slot. In this case, we can just
3364 delete the branch and the insn in its delay slot. */
3365 if (next && GET_CODE (next) == INSN
3366 && prev_label (next_active_insn (next)) == target_label
3367 && simplejump_p (insn)
3368 && XVECLEN (pat, 0) == 2
3369 && rtx_equal_p (PATTERN (next), PATTERN (XVECEXP (pat, 0, 1))))
3371 delete_related_insns (insn);
3372 continue;
3375 /* See if this jump (with its delay slots) branches around another
3376 jump (without delay slots). If so, invert this jump and point
3377 it to the target of the second jump. We cannot do this for
3378 annulled jumps, though. Again, don't convert a jump to a RETURN
3379 here. */
3380 if (! INSN_ANNULLED_BRANCH_P (delay_insn)
3381 && next && GET_CODE (next) == JUMP_INSN
3382 && (simplejump_p (next) || GET_CODE (PATTERN (next)) == RETURN)
3383 && next_active_insn (target_label) == next_active_insn (next)
3384 && no_labels_between_p (insn, next))
3386 rtx label = JUMP_LABEL (next);
3387 rtx old_label = JUMP_LABEL (delay_insn);
3389 if (label == 0)
3390 label = find_end_label ();
3392 /* find_end_label can generate a new label. Check this first. */
3393 if (no_labels_between_p (insn, next)
3394 && redirect_with_delay_slots_safe_p (delay_insn, label, insn))
3396 /* Be careful how we do this to avoid deleting code or labels
3397 that are momentarily dead. See similar optimization in
3398 jump.c */
3399 if (old_label)
3400 ++LABEL_NUSES (old_label);
3402 if (invert_jump (delay_insn, label, 1))
3404 int i;
3406 /* Must update the INSN_FROM_TARGET_P bits now that
3407 the branch is reversed, so that mark_target_live_regs
3408 will handle the delay slot insn correctly. */
3409 for (i = 1; i < XVECLEN (PATTERN (insn), 0); i++)
3411 rtx slot = XVECEXP (PATTERN (insn), 0, i);
3412 INSN_FROM_TARGET_P (slot) = ! INSN_FROM_TARGET_P (slot);
3415 delete_related_insns (next);
3416 next = insn;
3419 if (old_label && --LABEL_NUSES (old_label) == 0)
3420 delete_related_insns (old_label);
3421 continue;
3425 /* If we own the thread opposite the way this insn branches, see if we
3426 can merge its delay slots with following insns. */
3427 if (INSN_FROM_TARGET_P (XVECEXP (pat, 0, 1))
3428 && own_thread_p (NEXT_INSN (insn), 0, 1))
3429 try_merge_delay_insns (insn, next);
3430 else if (! INSN_FROM_TARGET_P (XVECEXP (pat, 0, 1))
3431 && own_thread_p (target_label, target_label, 0))
3432 try_merge_delay_insns (insn, next_active_insn (target_label));
3434 /* If we get here, we haven't deleted INSN. But we may have deleted
3435 NEXT, so recompute it. */
3436 next = next_active_insn (insn);
3440 #ifdef HAVE_return
3442 /* Look for filled jumps to the end of function label. We can try to convert
3443 them into RETURN insns if the insns in the delay slot are valid for the
3444 RETURN as well. */
3446 static void
3447 make_return_insns (first)
3448 rtx first;
3450 rtx insn, jump_insn, pat;
3451 rtx real_return_label = end_of_function_label;
3452 int slots, i;
3454 /* See if there is a RETURN insn in the function other than the one we
3455 made for END_OF_FUNCTION_LABEL. If so, set up anything we can't change
3456 into a RETURN to jump to it. */
3457 for (insn = first; insn; insn = NEXT_INSN (insn))
3458 if (GET_CODE (insn) == JUMP_INSN && GET_CODE (PATTERN (insn)) == RETURN)
3460 real_return_label = get_label_before (insn);
3461 break;
3464 /* Show an extra usage of REAL_RETURN_LABEL so it won't go away if it
3465 was equal to END_OF_FUNCTION_LABEL. */
3466 LABEL_NUSES (real_return_label)++;
3468 /* Clear the list of insns to fill so we can use it. */
3469 obstack_free (&unfilled_slots_obstack, unfilled_firstobj);
3471 for (insn = first; insn; insn = NEXT_INSN (insn))
3473 int flags;
3475 /* Only look at filled JUMP_INSNs that go to the end of function
3476 label. */
3477 if (GET_CODE (insn) != INSN
3478 || GET_CODE (PATTERN (insn)) != SEQUENCE
3479 || GET_CODE (XVECEXP (PATTERN (insn), 0, 0)) != JUMP_INSN
3480 || JUMP_LABEL (XVECEXP (PATTERN (insn), 0, 0)) != end_of_function_label)
3481 continue;
3483 pat = PATTERN (insn);
3484 jump_insn = XVECEXP (pat, 0, 0);
3486 /* If we can't make the jump into a RETURN, try to redirect it to the best
3487 RETURN and go on to the next insn. */
3488 if (! reorg_redirect_jump (jump_insn, NULL_RTX))
3490 /* Make sure redirecting the jump will not invalidate the delay
3491 slot insns. */
3492 if (redirect_with_delay_slots_safe_p (jump_insn,
3493 real_return_label,
3494 insn))
3495 reorg_redirect_jump (jump_insn, real_return_label);
3496 continue;
3499 /* See if this RETURN can accept the insns current in its delay slot.
3500 It can if it has more or an equal number of slots and the contents
3501 of each is valid. */
3503 flags = get_jump_flags (jump_insn, JUMP_LABEL (jump_insn));
3504 slots = num_delay_slots (jump_insn);
3505 if (slots >= XVECLEN (pat, 0) - 1)
3507 for (i = 1; i < XVECLEN (pat, 0); i++)
3508 if (! (
3509 #ifdef ANNUL_IFFALSE_SLOTS
3510 (INSN_ANNULLED_BRANCH_P (jump_insn)
3511 && INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)))
3512 ? eligible_for_annul_false (jump_insn, i - 1,
3513 XVECEXP (pat, 0, i), flags) :
3514 #endif
3515 #ifdef ANNUL_IFTRUE_SLOTS
3516 (INSN_ANNULLED_BRANCH_P (jump_insn)
3517 && ! INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)))
3518 ? eligible_for_annul_true (jump_insn, i - 1,
3519 XVECEXP (pat, 0, i), flags) :
3520 #endif
3521 eligible_for_delay (jump_insn, i - 1,
3522 XVECEXP (pat, 0, i), flags)))
3523 break;
3525 else
3526 i = 0;
3528 if (i == XVECLEN (pat, 0))
3529 continue;
3531 /* We have to do something with this insn. If it is an unconditional
3532 RETURN, delete the SEQUENCE and output the individual insns,
3533 followed by the RETURN. Then set things up so we try to find
3534 insns for its delay slots, if it needs some. */
3535 if (GET_CODE (PATTERN (jump_insn)) == RETURN)
3537 rtx prev = PREV_INSN (insn);
3539 delete_related_insns (insn);
3540 for (i = 1; i < XVECLEN (pat, 0); i++)
3541 prev = emit_insn_after (PATTERN (XVECEXP (pat, 0, i)), prev);
3543 insn = emit_jump_insn_after (PATTERN (jump_insn), prev);
3544 emit_barrier_after (insn);
3546 if (slots)
3547 obstack_ptr_grow (&unfilled_slots_obstack, insn);
3549 else
3550 /* It is probably more efficient to keep this with its current
3551 delay slot as a branch to a RETURN. */
3552 reorg_redirect_jump (jump_insn, real_return_label);
3555 /* Now delete REAL_RETURN_LABEL if we never used it. Then try to fill any
3556 new delay slots we have created. */
3557 if (--LABEL_NUSES (real_return_label) == 0)
3558 delete_related_insns (real_return_label);
3560 fill_simple_delay_slots (1);
3561 fill_simple_delay_slots (0);
3563 #endif
3565 /* Try to find insns to place in delay slots. */
3567 void
3568 dbr_schedule (first, file)
3569 rtx first;
3570 FILE *file;
3572 rtx insn, next, epilogue_insn = 0;
3573 int i;
3574 #if 0
3575 int old_flag_no_peephole = flag_no_peephole;
3577 /* Execute `final' once in prescan mode to delete any insns that won't be
3578 used. Don't let final try to do any peephole optimization--it will
3579 ruin dataflow information for this pass. */
3581 flag_no_peephole = 1;
3582 final (first, 0, NO_DEBUG, 1, 1);
3583 flag_no_peephole = old_flag_no_peephole;
3584 #endif
3586 /* If the current function has no insns other than the prologue and
3587 epilogue, then do not try to fill any delay slots. */
3588 if (n_basic_blocks == 0)
3589 return;
3591 /* Find the highest INSN_UID and allocate and initialize our map from
3592 INSN_UID's to position in code. */
3593 for (max_uid = 0, insn = first; insn; insn = NEXT_INSN (insn))
3595 if (INSN_UID (insn) > max_uid)
3596 max_uid = INSN_UID (insn);
3597 if (GET_CODE (insn) == NOTE
3598 && NOTE_LINE_NUMBER (insn) == NOTE_INSN_EPILOGUE_BEG)
3599 epilogue_insn = insn;
3602 uid_to_ruid = (int *) xmalloc ((max_uid + 1) * sizeof (int));
3603 for (i = 0, insn = first; insn; i++, insn = NEXT_INSN (insn))
3604 uid_to_ruid[INSN_UID (insn)] = i;
3606 /* Initialize the list of insns that need filling. */
3607 if (unfilled_firstobj == 0)
3609 gcc_obstack_init (&unfilled_slots_obstack);
3610 unfilled_firstobj = (rtx *) obstack_alloc (&unfilled_slots_obstack, 0);
3613 for (insn = next_active_insn (first); insn; insn = next_active_insn (insn))
3615 rtx target;
3617 INSN_ANNULLED_BRANCH_P (insn) = 0;
3618 INSN_FROM_TARGET_P (insn) = 0;
3620 /* Skip vector tables. We can't get attributes for them. */
3621 if (GET_CODE (insn) == JUMP_INSN
3622 && (GET_CODE (PATTERN (insn)) == ADDR_VEC
3623 || GET_CODE (PATTERN (insn)) == ADDR_DIFF_VEC))
3624 continue;
3626 if (num_delay_slots (insn) > 0)
3627 obstack_ptr_grow (&unfilled_slots_obstack, insn);
3629 /* Ensure all jumps go to the last of a set of consecutive labels. */
3630 if (GET_CODE (insn) == JUMP_INSN
3631 && (condjump_p (insn) || condjump_in_parallel_p (insn))
3632 && JUMP_LABEL (insn) != 0
3633 && ((target = prev_label (next_active_insn (JUMP_LABEL (insn))))
3634 != JUMP_LABEL (insn)))
3635 redirect_jump (insn, target, 1);
3638 init_resource_info (epilogue_insn);
3640 /* Show we haven't computed an end-of-function label yet. */
3641 end_of_function_label = 0;
3643 /* Initialize the statistics for this function. */
3644 memset ((char *) num_insns_needing_delays, 0, sizeof num_insns_needing_delays);
3645 memset ((char *) num_filled_delays, 0, sizeof num_filled_delays);
3647 /* Now do the delay slot filling. Try everything twice in case earlier
3648 changes make more slots fillable. */
3650 for (reorg_pass_number = 0;
3651 reorg_pass_number < MAX_REORG_PASSES;
3652 reorg_pass_number++)
3654 fill_simple_delay_slots (1);
3655 fill_simple_delay_slots (0);
3656 fill_eager_delay_slots ();
3657 relax_delay_slots (first);
3660 /* Delete any USE insns made by update_block; subsequent passes don't need
3661 them or know how to deal with them. */
3662 for (insn = first; insn; insn = next)
3664 next = NEXT_INSN (insn);
3666 if (GET_CODE (insn) == INSN && GET_CODE (PATTERN (insn)) == USE
3667 && INSN_P (XEXP (PATTERN (insn), 0)))
3668 next = delete_related_insns (insn);
3671 /* If we made an end of function label, indicate that it is now
3672 safe to delete it by undoing our prior adjustment to LABEL_NUSES.
3673 If it is now unused, delete it. */
3674 if (end_of_function_label && --LABEL_NUSES (end_of_function_label) == 0)
3675 delete_related_insns (end_of_function_label);
3677 #ifdef HAVE_return
3678 if (HAVE_return && end_of_function_label != 0)
3679 make_return_insns (first);
3680 #endif
3682 obstack_free (&unfilled_slots_obstack, unfilled_firstobj);
3684 /* It is not clear why the line below is needed, but it does seem to be. */
3685 unfilled_firstobj = (rtx *) obstack_alloc (&unfilled_slots_obstack, 0);
3687 /* Reposition the prologue and epilogue notes in case we moved the
3688 prologue/epilogue insns. */
3689 reposition_prologue_and_epilogue_notes (first);
3691 if (file)
3693 int i, j, need_comma;
3694 int total_delay_slots[MAX_DELAY_HISTOGRAM + 1];
3695 int total_annul_slots[MAX_DELAY_HISTOGRAM + 1];
3697 for (reorg_pass_number = 0;
3698 reorg_pass_number < MAX_REORG_PASSES;
3699 reorg_pass_number++)
3701 fprintf (file, ";; Reorg pass #%d:\n", reorg_pass_number + 1);
3702 for (i = 0; i < NUM_REORG_FUNCTIONS; i++)
3704 need_comma = 0;
3705 fprintf (file, ";; Reorg function #%d\n", i);
3707 fprintf (file, ";; %d insns needing delay slots\n;; ",
3708 num_insns_needing_delays[i][reorg_pass_number]);
3710 for (j = 0; j < MAX_DELAY_HISTOGRAM + 1; j++)
3711 if (num_filled_delays[i][j][reorg_pass_number])
3713 if (need_comma)
3714 fprintf (file, ", ");
3715 need_comma = 1;
3716 fprintf (file, "%d got %d delays",
3717 num_filled_delays[i][j][reorg_pass_number], j);
3719 fprintf (file, "\n");
3722 memset ((char *) total_delay_slots, 0, sizeof total_delay_slots);
3723 memset ((char *) total_annul_slots, 0, sizeof total_annul_slots);
3724 for (insn = first; insn; insn = NEXT_INSN (insn))
3726 if (! INSN_DELETED_P (insn)
3727 && GET_CODE (insn) == INSN
3728 && GET_CODE (PATTERN (insn)) != USE
3729 && GET_CODE (PATTERN (insn)) != CLOBBER)
3731 if (GET_CODE (PATTERN (insn)) == SEQUENCE)
3733 j = XVECLEN (PATTERN (insn), 0) - 1;
3734 if (j > MAX_DELAY_HISTOGRAM)
3735 j = MAX_DELAY_HISTOGRAM;
3736 if (INSN_ANNULLED_BRANCH_P (XVECEXP (PATTERN (insn), 0, 0)))
3737 total_annul_slots[j]++;
3738 else
3739 total_delay_slots[j]++;
3741 else if (num_delay_slots (insn) > 0)
3742 total_delay_slots[0]++;
3745 fprintf (file, ";; Reorg totals: ");
3746 need_comma = 0;
3747 for (j = 0; j < MAX_DELAY_HISTOGRAM + 1; j++)
3749 if (total_delay_slots[j])
3751 if (need_comma)
3752 fprintf (file, ", ");
3753 need_comma = 1;
3754 fprintf (file, "%d got %d delays", total_delay_slots[j], j);
3757 fprintf (file, "\n");
3758 #if defined (ANNUL_IFTRUE_SLOTS) || defined (ANNUL_IFFALSE_SLOTS)
3759 fprintf (file, ";; Reorg annuls: ");
3760 need_comma = 0;
3761 for (j = 0; j < MAX_DELAY_HISTOGRAM + 1; j++)
3763 if (total_annul_slots[j])
3765 if (need_comma)
3766 fprintf (file, ", ");
3767 need_comma = 1;
3768 fprintf (file, "%d got %d delays", total_annul_slots[j], j);
3771 fprintf (file, "\n");
3772 #endif
3773 fprintf (file, "\n");
3776 /* For all JUMP insns, fill in branch prediction notes, so that during
3777 assembler output a target can set branch prediction bits in the code.
3778 We have to do this now, as up until this point the destinations of
3779 JUMPS can be moved around and changed, but past right here that cannot
3780 happen. */
3781 for (insn = first; insn; insn = NEXT_INSN (insn))
3783 int pred_flags;
3785 if (GET_CODE (insn) == INSN)
3787 rtx pat = PATTERN (insn);
3789 if (GET_CODE (pat) == SEQUENCE)
3790 insn = XVECEXP (pat, 0, 0);
3792 if (GET_CODE (insn) != JUMP_INSN)
3793 continue;
3795 pred_flags = get_jump_flags (insn, JUMP_LABEL (insn));
3796 REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_BR_PRED,
3797 GEN_INT (pred_flags),
3798 REG_NOTES (insn));
3800 free_resource_info ();
3801 free (uid_to_ruid);
3803 #endif /* DELAY_SLOTS */