/* Dataflow support routines.
   Copyright (C) 1999, 2000, 2001, 2002 Free Software Foundation, Inc.
   Contributed by Michael P. Hayes (m.hayes@elec.canterbury.ac.nz,
   mhayes@redhat.com)

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING.  If not, write to the Free
Software Foundation, 59 Temple Place - Suite 330, Boston, MA
02111-1307, USA.

OVERVIEW:

This file provides some dataflow routines for computing reaching defs,
upward exposed uses, live variables, def-use chains, and use-def
chains.  The global dataflow is performed using simple iterative
methods with a worklist and could be sped up by ordering the blocks
with a depth first search order.

A `struct ref' data structure (ref) is allocated for every register
reference (def or use) and this records the insn and bb the ref is
found within.  The refs are linked together in chains of uses and defs
for each insn and for each register.  Each ref also has a chain field
that links all the use refs for a def or all the def refs for a use.
This is used to create use-def or def-use chains.


USAGE:

Here's an example of using the dataflow routines.

      struct df *df;

      df = df_init ();

      df_analyse (df, 0, DF_ALL);

      df_dump (df, DF_ALL, stderr);

      df_finish (df);


df_init simply creates a poor man's object (df) that needs to be
passed to all the dataflow routines.  df_finish destroys this
object and frees up any allocated memory.
df_analyse performs the following:

1. Records defs and uses by scanning the insns in each basic block
   or by scanning the insns queued by df_insn_modify.
2. Links defs and uses into insn-def and insn-use chains.
3. Links defs and uses into reg-def and reg-use chains.
4. Assigns LUIDs to each insn (for modified blocks).
5. Calculates local reaching definitions.
6. Calculates global reaching definitions.
7. Creates use-def chains.
8. Calculates local reaching uses (upwards exposed uses).
9. Calculates global reaching uses.
10. Creates def-use chains.
11. Calculates local live registers.
12. Calculates global live registers.
13. Calculates register lifetimes and determines local registers.

PHILOSOPHY:

Note that the dataflow information is not updated for every newly
deleted or created insn.  If the dataflow information requires
updating then all the changed, new, or deleted insns need to be
marked with df_insn_modify (or df_insns_modify) either directly or
indirectly (say through calling df_insn_delete).  df_insn_modify
marks all the modified insns to get processed the next time df_analyse
is called.

Beware that tinkering with insns may invalidate the dataflow information.
The philosophy behind these routines is that once the dataflow
information has been gathered, the user should store what they require
before they tinker with any insn.  Once a reg is replaced, for example,
then the reg-def/reg-use chains will point to the wrong place.  Once a
whole lot of changes have been made, df_analyse can be called again
to update the dataflow information.  Currently, this is not very smart
with regard to propagating changes to the dataflow so it should not
be called very often.
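
For example, a pass that rewrites insns might follow this pattern
(a sketch only; INSN and BB stand for a hypothetical insn being
changed and its basic block):

      df = df_init ();
      df_analyse (df, 0, DF_ALL);
      ... record what is needed from the chains ...
      ... modify INSN ...
      df_insn_modify (df, BB, INSN);
      ... further modifications, each marked the same way ...
      df_analyse (df, 0, DF_ALL);   (refreshes the stale information)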

DATA STRUCTURES:

The basic object is a REF (reference) and this may either be a DEF
(definition) or a USE of a register.

These are linked into a variety of lists; namely reg-def, reg-use,
insn-def, insn-use, def-use, and use-def lists.  For example,
the reg-def lists contain all the refs that define a given register
while the insn-use lists contain all the refs used by an insn.

Note that the reg-def and reg-use chains are generally short (except for the
hard registers) and thus it is much faster to search these chains
rather than searching the def or use bitmaps.

If the insns are in SSA form then the reg-def and use-def lists
should only contain the single defining ref.
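
For example, all the defs of register REGNO can be visited by walking
its reg-def list (a sketch; whatever a real caller does with each def
is elided):

      struct df_link *link;

      for (link = df->regs[regno].defs; link; link = link->next)
        {
          struct ref *def = link->ref;
          ... examine DEF ...
        }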

TODO:

1) Incremental dataflow analysis.

Note that if a loop invariant insn is hoisted (or sunk), we do not
need to change the def-use or use-def chains.  All we have to do is to
change the bb field for all the associated defs and uses and to
renumber the LUIDs for the original and new basic blocks of the insn.

When shadowing loop mems we create new uses and defs for new pseudos
so we do not affect the existing dataflow information.

My current strategy is to queue up all modified, created, or deleted
insns so when df_analyse is called we can easily determine all the new
or deleted refs.  Currently the global dataflow information is
recomputed from scratch but this could be propagated more efficiently.

2) Improved global data flow computation using depth first search.

3) Reduced memory requirements.

We could operate a pool of ref structures.  When a ref is deleted it
gets returned to the pool (say by linking on to a chain of free refs).
This will require a pair of bitmaps for defs and uses so that we can
tell which ones have been changed.  Alternatively, we could
periodically squeeze the def and use tables and associated bitmaps and
renumber the def and use ids.

4) Ordering of reg-def and reg-use lists.

Should the first entry in the def list be the first def (within a BB)?
Similarly, should the first entry in the use list be the last use
(within a BB)?

5) Working with a sub-CFG.

Often the whole CFG does not need to be analysed, for example,
when optimising a loop, only certain registers are of interest.
Perhaps there should be a bitmap argument to df_analyse to specify
which registers should be analysed?  */
#define HANDLE_SUBREG

#include "config.h"
#include "system.h"
#include "rtl.h"
#include "tm_p.h"
#include "insn-config.h"
#include "recog.h"
#include "function.h"
#include "regs.h"
#include "obstack.h"
#include "hard-reg-set.h"
#include "basic-block.h"
#include "sbitmap.h"
#include "bitmap.h"
#include "df.h"
#include "fibheap.h"
#define FOR_EACH_BB_IN_BITMAP(BITMAP, MIN, BB, CODE)		\
do {								\
  unsigned int node_;						\
  EXECUTE_IF_SET_IN_BITMAP (BITMAP, MIN, node_,			\
    {(BB) = BASIC_BLOCK (node_); CODE;});} while (0)

#define FOR_EACH_BB_IN_BITMAP_REV(BITMAP, MIN, BB, CODE)	\
do {								\
  unsigned int node_;						\
  EXECUTE_IF_SET_IN_BITMAP_REV (BITMAP, node_,			\
    {(BB) = BASIC_BLOCK (node_); CODE;});} while (0)

#define FOR_EACH_BB_IN_SBITMAP(BITMAP, MIN, BB, CODE)		\
do {								\
  unsigned int node_;						\
  EXECUTE_IF_SET_IN_SBITMAP (BITMAP, MIN, node_,		\
    {(BB) = BASIC_BLOCK (node_); CODE;});} while (0)

#define obstack_chunk_alloc xmalloc
#define obstack_chunk_free free
static struct obstack df_ref_obstack;
static struct df *ddf;

static void df_reg_table_realloc PARAMS((struct df *, int));
#if 0
static void df_def_table_realloc PARAMS((struct df *, int));
#endif
static void df_insn_table_realloc PARAMS((struct df *, int));
static void df_bitmaps_alloc PARAMS((struct df *, int));
static void df_bitmaps_free PARAMS((struct df *, int));
static void df_free PARAMS((struct df *));
static void df_alloc PARAMS((struct df *, int));

static rtx df_reg_clobber_gen PARAMS((unsigned int));
static rtx df_reg_use_gen PARAMS((unsigned int));

static inline struct df_link *df_link_create PARAMS((struct ref *,
						     struct df_link *));
static struct df_link *df_ref_unlink PARAMS((struct df_link **, struct ref *));
static void df_def_unlink PARAMS((struct df *, struct ref *));
static void df_use_unlink PARAMS((struct df *, struct ref *));
static void df_insn_refs_unlink PARAMS ((struct df *, basic_block, rtx));
#if 0
static void df_bb_refs_unlink PARAMS ((struct df *, basic_block));
static void df_refs_unlink PARAMS ((struct df *, bitmap));
#endif

static struct ref *df_ref_create PARAMS((struct df *,
					 rtx, rtx *, rtx,
					 enum df_ref_type, enum df_ref_flags));
static void df_ref_record_1 PARAMS((struct df *, rtx, rtx *,
				    rtx, enum df_ref_type,
				    enum df_ref_flags));
static void df_ref_record PARAMS((struct df *, rtx, rtx *,
				  rtx, enum df_ref_type,
				  enum df_ref_flags));
static void df_def_record_1 PARAMS((struct df *, rtx, basic_block, rtx));
static void df_defs_record PARAMS((struct df *, rtx, basic_block, rtx));
static void df_uses_record PARAMS((struct df *, rtx *,
				   enum df_ref_type, basic_block, rtx,
				   enum df_ref_flags));
static void df_insn_refs_record PARAMS((struct df *, basic_block, rtx));
static void df_bb_refs_record PARAMS((struct df *, basic_block));
static void df_refs_record PARAMS((struct df *, bitmap));

static void df_bb_reg_def_chain_create PARAMS((struct df *, basic_block));
static void df_reg_def_chain_create PARAMS((struct df *, bitmap));
static void df_bb_reg_use_chain_create PARAMS((struct df *, basic_block));
static void df_reg_use_chain_create PARAMS((struct df *, bitmap));
static void df_bb_du_chain_create PARAMS((struct df *, basic_block, bitmap));
static void df_du_chain_create PARAMS((struct df *, bitmap));
static void df_bb_ud_chain_create PARAMS((struct df *, basic_block));
static void df_ud_chain_create PARAMS((struct df *, bitmap));
static void df_bb_rd_local_compute PARAMS((struct df *, basic_block));
static void df_rd_local_compute PARAMS((struct df *, bitmap));
static void df_bb_ru_local_compute PARAMS((struct df *, basic_block));
static void df_ru_local_compute PARAMS((struct df *, bitmap));
static void df_bb_lr_local_compute PARAMS((struct df *, basic_block));
static void df_lr_local_compute PARAMS((struct df *, bitmap));
static void df_bb_reg_info_compute PARAMS((struct df *, basic_block, bitmap));
static void df_reg_info_compute PARAMS((struct df *, bitmap));

static int df_bb_luids_set PARAMS((struct df *df, basic_block));
static int df_luids_set PARAMS((struct df *df, bitmap));

static int df_modified_p PARAMS ((struct df *, bitmap));
static int df_refs_queue PARAMS ((struct df *));
static int df_refs_process PARAMS ((struct df *));
static int df_bb_refs_update PARAMS ((struct df *, basic_block));
static int df_refs_update PARAMS ((struct df *));
static void df_analyse_1 PARAMS((struct df *, bitmap, int, int));

static void df_insns_modify PARAMS((struct df *, basic_block,
				    rtx, rtx));
static int df_rtx_mem_replace PARAMS ((rtx *, void *));
static int df_rtx_reg_replace PARAMS ((rtx *, void *));
void df_refs_reg_replace PARAMS ((struct df *, bitmap,
				  struct df_link *, rtx, rtx));

static int df_def_dominates_all_uses_p PARAMS((struct df *, struct ref *def));
static int df_def_dominates_uses_p PARAMS((struct df *,
					   struct ref *def, bitmap));
static struct ref *df_bb_regno_last_use_find PARAMS((struct df *, basic_block,
						     unsigned int));
static struct ref *df_bb_regno_first_def_find PARAMS((struct df *, basic_block,
						      unsigned int));
static struct ref *df_bb_insn_regno_last_use_find PARAMS((struct df *,
							  basic_block,
							  rtx, unsigned int));
static struct ref *df_bb_insn_regno_first_def_find PARAMS((struct df *,
							   basic_block,
							   rtx, unsigned int));

static void df_chain_dump PARAMS((struct df_link *, FILE *file));
static void df_chain_dump_regno PARAMS((struct df_link *, FILE *file));
static void df_regno_debug PARAMS ((struct df *, unsigned int, FILE *));
static void df_ref_debug PARAMS ((struct df *, struct ref *, FILE *));
static void df_rd_transfer_function PARAMS ((int, int *, bitmap, bitmap,
					     bitmap, bitmap, void *));
static void df_ru_transfer_function PARAMS ((int, int *, bitmap, bitmap,
					     bitmap, bitmap, void *));
static void df_lr_transfer_function PARAMS ((int, int *, bitmap, bitmap,
					     bitmap, bitmap, void *));
static void hybrid_search_bitmap PARAMS ((basic_block, bitmap *, bitmap *,
					  bitmap *, bitmap *, enum df_flow_dir,
					  enum df_confluence_op,
					  transfer_function_bitmap,
					  sbitmap, sbitmap, void *));
static void hybrid_search_sbitmap PARAMS ((basic_block, sbitmap *, sbitmap *,
					   sbitmap *, sbitmap *,
					   enum df_flow_dir,
					   enum df_confluence_op,
					   transfer_function_sbitmap,
					   sbitmap, sbitmap, void *));
static inline bool read_modify_subreg_p PARAMS ((rtx));

/* Local memory allocation/deallocation routines.  */


/* Increase the insn info table by SIZE more elements.  */
static void
df_insn_table_realloc (df, size)
     struct df *df;
     int size;
{
  /* Make table 25 percent larger by default.  */
  if (! size)
    size = df->insn_size / 4;

  size += df->insn_size;

  df->insns = (struct insn_info *)
    xrealloc (df->insns, size * sizeof (struct insn_info));

  memset (df->insns + df->insn_size, 0,
	  (size - df->insn_size) * sizeof (struct insn_info));

  df->insn_size = size;

  if (! df->insns_modified)
    {
      df->insns_modified = BITMAP_XMALLOC ();
      bitmap_zero (df->insns_modified);
    }
}

/* Increase the reg info table by SIZE more elements.  */
static void
df_reg_table_realloc (df, size)
     struct df *df;
     int size;
{
  /* Make table 25 percent larger by default.  */
  if (! size)
    size = df->reg_size / 4;

  size += df->reg_size;

  df->regs = (struct reg_info *)
    xrealloc (df->regs, size * sizeof (struct reg_info));

  /* Zero the new entries.  */
  memset (df->regs + df->reg_size, 0,
	  (size - df->reg_size) * sizeof (struct reg_info));

  df->reg_size = size;
}

#if 0
/* Not currently used.  */
static void
df_def_table_realloc (df, size)
     struct df *df;
     int size;
{
  int i;
  struct ref *refs;

  /* Make table 25 percent larger by default.  */
  if (! size)
    size = df->def_size / 4;

  df->def_size += size;
  df->defs = xrealloc (df->defs,
		       df->def_size * sizeof (*df->defs));

  /* Allocate a new block of memory and link into list of blocks
     that will need to be freed later.  */

  refs = xmalloc (size * sizeof (*refs));

  /* Link all the new refs together, overloading the chain field.  */
  for (i = 0; i < size - 1; i++)
    refs[i].chain = (struct df_link *) (refs + i + 1);
  refs[size - 1].chain = 0;
}
#endif

/* Allocate bitmaps for each basic block.  */
static void
df_bitmaps_alloc (df, flags)
     struct df *df;
     int flags;
{
  int dflags = 0;
  basic_block bb;

  /* Free the bitmaps if they need resizing.  */
  if ((flags & DF_LR) && df->n_regs < (unsigned int) max_reg_num ())
    dflags |= DF_LR | DF_RU;
  if ((flags & DF_RU) && df->n_uses < df->use_id)
    dflags |= DF_RU;
  if ((flags & DF_RD) && df->n_defs < df->def_id)
    dflags |= DF_RD;

  if (dflags)
    df_bitmaps_free (df, dflags);

  df->n_defs = df->def_id;
  df->n_uses = df->use_id;

  FOR_ALL_BB (bb)
    {
      struct bb_info *bb_info = DF_BB_INFO (df, bb);

      if (flags & DF_RD && ! bb_info->rd_in)
	{
	  /* Allocate bitmaps for reaching definitions.  */
	  bb_info->rd_kill = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->rd_kill);
	  bb_info->rd_gen = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->rd_gen);
	  bb_info->rd_in = BITMAP_XMALLOC ();
	  bb_info->rd_out = BITMAP_XMALLOC ();
	  bb_info->rd_valid = 0;
	}

      if (flags & DF_RU && ! bb_info->ru_in)
	{
	  /* Allocate bitmaps for upward exposed uses.  */
	  bb_info->ru_kill = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->ru_kill);
	  /* Note the lack of symmetry.  */
	  bb_info->ru_gen = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->ru_gen);
	  bb_info->ru_in = BITMAP_XMALLOC ();
	  bb_info->ru_out = BITMAP_XMALLOC ();
	  bb_info->ru_valid = 0;
	}

      if (flags & DF_LR && ! bb_info->lr_in)
	{
	  /* Allocate bitmaps for live variables.  */
	  bb_info->lr_def = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->lr_def);
	  bb_info->lr_use = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->lr_use);
	  bb_info->lr_in = BITMAP_XMALLOC ();
	  bb_info->lr_out = BITMAP_XMALLOC ();
	  bb_info->lr_valid = 0;
	}
    }
}

/* Free bitmaps for each basic block.  */
static void
df_bitmaps_free (df, flags)
     struct df *df ATTRIBUTE_UNUSED;
     int flags;
{
  basic_block bb;

  FOR_ALL_BB (bb)
    {
      struct bb_info *bb_info = DF_BB_INFO (df, bb);

      if (!bb_info)
	continue;

      if ((flags & DF_RD) && bb_info->rd_in)
	{
	  /* Free bitmaps for reaching definitions.  */
	  BITMAP_XFREE (bb_info->rd_kill);
	  bb_info->rd_kill = NULL;
	  BITMAP_XFREE (bb_info->rd_gen);
	  bb_info->rd_gen = NULL;
	  BITMAP_XFREE (bb_info->rd_in);
	  bb_info->rd_in = NULL;
	  BITMAP_XFREE (bb_info->rd_out);
	  bb_info->rd_out = NULL;
	}

      if ((flags & DF_RU) && bb_info->ru_in)
	{
	  /* Free bitmaps for upward exposed uses.  */
	  BITMAP_XFREE (bb_info->ru_kill);
	  bb_info->ru_kill = NULL;
	  BITMAP_XFREE (bb_info->ru_gen);
	  bb_info->ru_gen = NULL;
	  BITMAP_XFREE (bb_info->ru_in);
	  bb_info->ru_in = NULL;
	  BITMAP_XFREE (bb_info->ru_out);
	  bb_info->ru_out = NULL;
	}

      if ((flags & DF_LR) && bb_info->lr_in)
	{
	  /* Free bitmaps for live variables.  */
	  BITMAP_XFREE (bb_info->lr_def);
	  bb_info->lr_def = NULL;
	  BITMAP_XFREE (bb_info->lr_use);
	  bb_info->lr_use = NULL;
	  BITMAP_XFREE (bb_info->lr_in);
	  bb_info->lr_in = NULL;
	  BITMAP_XFREE (bb_info->lr_out);
	  bb_info->lr_out = NULL;
	}
    }
  df->flags &= ~(flags & (DF_RD | DF_RU | DF_LR));
}

/* Allocate and initialise dataflow memory.  */
static void
df_alloc (df, n_regs)
     struct df *df;
     int n_regs;
{
  int n_insns;
  basic_block bb;

  gcc_obstack_init (&df_ref_obstack);

  /* Perhaps we should use LUIDs to save memory for the insn_refs
     table.  This is only a small saving; a few pointers.  */
  n_insns = get_max_uid () + 1;

  df->def_id = 0;
  df->n_defs = 0;
  /* Approximate number of defs by number of insns.  */
  df->def_size = n_insns;
  df->defs = xmalloc (df->def_size * sizeof (*df->defs));

  df->use_id = 0;
  df->n_uses = 0;
  /* Approximate number of uses by twice number of insns.  */
  df->use_size = n_insns * 2;
  df->uses = xmalloc (df->use_size * sizeof (*df->uses));

  df->n_regs = n_regs;
  df->n_bbs = last_basic_block;

  /* Allocate temporary working array used during local dataflow analysis.  */
  df->reg_def_last = xmalloc (df->n_regs * sizeof (struct ref *));

  df_insn_table_realloc (df, n_insns);

  df_reg_table_realloc (df, df->n_regs);

  df->bbs_modified = BITMAP_XMALLOC ();
  bitmap_zero (df->bbs_modified);

  df->flags = 0;

  df->bbs = xcalloc (last_basic_block, sizeof (struct bb_info));

  df->all_blocks = BITMAP_XMALLOC ();
  FOR_ALL_BB (bb)
    bitmap_set_bit (df->all_blocks, bb->sindex);
}

/* Free all the dataflow info.  */
static void
df_free (df)
     struct df *df;
{
  df_bitmaps_free (df, DF_ALL);

  if (df->bbs)
    free (df->bbs);
  df->bbs = 0;

  if (df->insns)
    free (df->insns);
  df->insns = 0;
  df->insn_size = 0;

  if (df->defs)
    free (df->defs);
  df->defs = 0;
  df->def_size = 0;
  df->def_id = 0;

  if (df->uses)
    free (df->uses);
  df->uses = 0;
  df->use_size = 0;
  df->use_id = 0;

  if (df->regs)
    free (df->regs);
  df->regs = 0;
  df->reg_size = 0;

  if (df->bbs_modified)
    BITMAP_XFREE (df->bbs_modified);
  df->bbs_modified = 0;

  if (df->insns_modified)
    BITMAP_XFREE (df->insns_modified);
  df->insns_modified = 0;

  BITMAP_XFREE (df->all_blocks);
  df->all_blocks = 0;

  obstack_free (&df_ref_obstack, NULL);
}

/* Local miscellaneous routines.  */

/* Return a USE for register REGNO.  */
static rtx df_reg_use_gen (regno)
     unsigned int regno;
{
  rtx reg;
  rtx use;

  reg = regno >= FIRST_PSEUDO_REGISTER
    ? regno_reg_rtx[regno] : gen_rtx_REG (reg_raw_mode[regno], regno);

  use = gen_rtx_USE (GET_MODE (reg), reg);
  return use;
}


/* Return a CLOBBER for register REGNO.  */
static rtx df_reg_clobber_gen (regno)
     unsigned int regno;
{
  rtx reg;
  rtx use;

  reg = regno >= FIRST_PSEUDO_REGISTER
    ? regno_reg_rtx[regno] : gen_rtx_REG (reg_raw_mode[regno], regno);

  use = gen_rtx_CLOBBER (GET_MODE (reg), reg);
  return use;
}

/* Local chain manipulation routines.  */

/* Create a link in a def-use or use-def chain.  */
static inline struct df_link *
df_link_create (ref, next)
     struct ref *ref;
     struct df_link *next;
{
  struct df_link *link;

  link = (struct df_link *) obstack_alloc (&df_ref_obstack,
					   sizeof (*link));
  link->next = next;
  link->ref = ref;
  return link;
}

/* Unlink REF from the chain pointed to by PHEAD.  */
static struct df_link *
df_ref_unlink (phead, ref)
     struct df_link **phead;
     struct ref *ref;
{
  struct df_link *link = *phead;

  if (link)
    {
      if (! link->next)
	{
	  /* Only a single ref.  It must be the one we want.
	     If not, the def-use and use-def chains are likely to
	     be inconsistent.  */
	  if (link->ref != ref)
	    abort ();
	  /* Now have an empty chain.  */
	  *phead = NULL;
	}
      else
	{
	  /* Multiple refs.  One of them must be us.  */
	  if (link->ref == ref)
	    *phead = link->next;
	  else
	    {
	      /* Follow chain.  */
	      for (; link->next; link = link->next)
		{
		  if (link->next->ref == ref)
		    {
		      /* Unlink from list.  */
		      link->next = link->next->next;
		      return link->next;
		    }
		}
	    }
	}
    }
  return link;
}

/* Unlink REF from all def-use/use-def chains, etc.  */
int
df_ref_remove (df, ref)
     struct df *df;
     struct ref *ref;
{
  if (DF_REF_REG_DEF_P (ref))
    {
      df_def_unlink (df, ref);
      df_ref_unlink (&df->insns[DF_REF_INSN_UID (ref)].defs, ref);
    }
  else
    {
      df_use_unlink (df, ref);
      df_ref_unlink (&df->insns[DF_REF_INSN_UID (ref)].uses, ref);
    }
  return 1;
}

/* Unlink DEF from use-def and reg-def chains.  */
static void
df_def_unlink (df, def)
     struct df *df ATTRIBUTE_UNUSED;
     struct ref *def;
{
  struct df_link *du_link;
  unsigned int dregno = DF_REF_REGNO (def);

  /* Follow def-use chain to find all the uses of this def.  */
  for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
    {
      struct ref *use = du_link->ref;

      /* Unlink this def from the use-def chain.  */
      df_ref_unlink (&DF_REF_CHAIN (use), def);
    }
  DF_REF_CHAIN (def) = 0;

  /* Unlink def from reg-def chain.  */
  df_ref_unlink (&df->regs[dregno].defs, def);

  df->defs[DF_REF_ID (def)] = 0;
}

/* Unlink USE from def-use and reg-use chains.  */
static void
df_use_unlink (df, use)
     struct df *df ATTRIBUTE_UNUSED;
     struct ref *use;
{
  struct df_link *ud_link;
  unsigned int uregno = DF_REF_REGNO (use);

  /* Follow use-def chain to find all the defs of this use.  */
  for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
    {
      struct ref *def = ud_link->ref;

      /* Unlink this use from the def-use chain.  */
      df_ref_unlink (&DF_REF_CHAIN (def), use);
    }
  DF_REF_CHAIN (use) = 0;

  /* Unlink use from reg-use chain.  */
  df_ref_unlink (&df->regs[uregno].uses, use);

  df->uses[DF_REF_ID (use)] = 0;
}

/* Local routines for recording refs.  */


/* Create a new ref of type DF_REF_TYPE for register REG at address
   LOC within INSN of BB.  */
static struct ref *
df_ref_create (df, reg, loc, insn, ref_type, ref_flags)
     struct df *df;
     rtx reg;
     rtx *loc;
     rtx insn;
     enum df_ref_type ref_type;
     enum df_ref_flags ref_flags;
{
  struct ref *this_ref;
  unsigned int uid;

  this_ref = (struct ref *) obstack_alloc (&df_ref_obstack,
					   sizeof (*this_ref));
  DF_REF_REG (this_ref) = reg;
  DF_REF_LOC (this_ref) = loc;
  DF_REF_INSN (this_ref) = insn;
  DF_REF_CHAIN (this_ref) = 0;
  DF_REF_TYPE (this_ref) = ref_type;
  DF_REF_FLAGS (this_ref) = ref_flags;
  uid = INSN_UID (insn);

  if (ref_type == DF_REF_REG_DEF)
    {
      if (df->def_id >= df->def_size)
	{
	  /* Make table 25 percent larger.  */
	  df->def_size += (df->def_size / 4);
	  df->defs = xrealloc (df->defs,
			       df->def_size * sizeof (*df->defs));
	}
      DF_REF_ID (this_ref) = df->def_id;
      df->defs[df->def_id++] = this_ref;
    }
  else
    {
      if (df->use_id >= df->use_size)
	{
	  /* Make table 25 percent larger.  */
	  df->use_size += (df->use_size / 4);
	  df->uses = xrealloc (df->uses,
			       df->use_size * sizeof (*df->uses));
	}
      DF_REF_ID (this_ref) = df->use_id;
      df->uses[df->use_id++] = this_ref;
    }
  return this_ref;
}

/* Create a new reference of type DF_REF_TYPE for a single register REG,
   used inside the LOC rtx of INSN.  */
static void
df_ref_record_1 (df, reg, loc, insn, ref_type, ref_flags)
     struct df *df;
     rtx reg;
     rtx *loc;
     rtx insn;
     enum df_ref_type ref_type;
     enum df_ref_flags ref_flags;
{
  df_ref_create (df, reg, loc, insn, ref_type, ref_flags);
}

/* Create new references of type DF_REF_TYPE for each part of register REG
   at address LOC within INSN of BB.  */
static void
df_ref_record (df, reg, loc, insn, ref_type, ref_flags)
     struct df *df;
     rtx reg;
     rtx *loc;
     rtx insn;
     enum df_ref_type ref_type;
     enum df_ref_flags ref_flags;
{
  unsigned int regno;

  if (GET_CODE (reg) != REG && GET_CODE (reg) != SUBREG)
    abort ();

  /* For the reg allocator we are interested in some SUBREG rtx's, but not
     all.  Notably only those representing a word extraction from a multi-word
     reg.  As written in the documentation those should have the form
     (subreg:SI (reg:M A) N), with size(SImode) > size(Mmode).
     XXX Is that true?  We could also use the global word_mode variable.  */
  if (GET_CODE (reg) == SUBREG
      && (GET_MODE_SIZE (GET_MODE (reg)) < GET_MODE_SIZE (word_mode)
	  || GET_MODE_SIZE (GET_MODE (reg))
	       >= GET_MODE_SIZE (GET_MODE (SUBREG_REG (reg)))))
    {
      loc = &SUBREG_REG (reg);
      reg = *loc;
    }

  regno = REGNO (GET_CODE (reg) == SUBREG ? SUBREG_REG (reg) : reg);
  if (regno < FIRST_PSEUDO_REGISTER)
    {
      int i;
      int endregno;

      if (! (df->flags & DF_HARD_REGS))
	return;

      /* GET_MODE (reg) is correct here.  We don't want to go into a SUBREG
	 for the mode, because we only want to add references to regs, which
	 are really referenced.  E.g. a (subreg:SI (reg:DI 0) 0) does _not_
	 reference the whole reg 0 in DI mode (which would also include
	 reg 1, at least, if 0 and 1 are SImode registers).  */
      endregno = regno + HARD_REGNO_NREGS (regno, GET_MODE (reg));

      for (i = regno; i < endregno; i++)
	df_ref_record_1 (df, gen_rtx_REG (reg_raw_mode[i], i),
			 loc, insn, ref_type, ref_flags);
    }
  else
    {
      df_ref_record_1 (df, reg, loc, insn, ref_type, ref_flags);
    }
}

/* Writes to a SUBREG whose inner mode is wider than a word and whose
   outer mode is no wider than a word are read-modify-write.  */

static inline bool
read_modify_subreg_p (x)
     rtx x;
{
  if (GET_CODE (x) != SUBREG)
    return false;
  if (GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))) <= UNITS_PER_WORD)
    return false;
  if (GET_MODE_SIZE (GET_MODE (x)) > UNITS_PER_WORD)
    return false;
  return true;
}

/* Process all the registers defined in the rtx, X.  */
static void
df_def_record_1 (df, x, bb, insn)
     struct df *df;
     rtx x;
     basic_block bb;
     rtx insn;
{
  rtx *loc = &SET_DEST (x);
  rtx dst = *loc;
  enum df_ref_flags flags = 0;

  /* Some targets place small structures in registers for
     return values of functions.  */
  if (GET_CODE (dst) == PARALLEL && GET_MODE (dst) == BLKmode)
    {
      int i;

      for (i = XVECLEN (dst, 0) - 1; i >= 0; i--)
	df_def_record_1 (df, XVECEXP (dst, 0, i), bb, insn);
      return;
    }

  /* Maybe we should flag the use of strict_low_part somehow.  Might be
     handy for the reg allocator.  */
  while (GET_CODE (dst) == STRICT_LOW_PART
	 || GET_CODE (dst) == ZERO_EXTRACT
	 || GET_CODE (dst) == SIGN_EXTRACT
	 || read_modify_subreg_p (dst))
    {
      /* Strict low part always contains SUBREG, but we don't want to make
	 it appear outside, as whole register is always considered.  */
      if (GET_CODE (dst) == STRICT_LOW_PART)
	{
	  loc = &XEXP (dst, 0);
	  dst = *loc;
	}
      loc = &XEXP (dst, 0);
      dst = *loc;
      flags |= DF_REF_READ_WRITE;
    }

  if (GET_CODE (dst) == REG
      || (GET_CODE (dst) == SUBREG && GET_CODE (SUBREG_REG (dst)) == REG))
    df_ref_record (df, dst, loc, insn, DF_REF_REG_DEF, flags);
}

/* Process all the registers defined in the pattern rtx, X.  */
static void
df_defs_record (df, x, bb, insn)
     struct df *df;
     rtx x;
     basic_block bb;
     rtx insn;
{
  RTX_CODE code = GET_CODE (x);

  if (code == SET || code == CLOBBER)
    {
      /* Mark the single def within the pattern.  */
      df_def_record_1 (df, x, bb, insn);
    }
  else if (code == PARALLEL)
    {
      int i;

      /* Mark the multiple defs within the pattern.  */
      for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
	{
	  code = GET_CODE (XVECEXP (x, 0, i));
	  if (code == SET || code == CLOBBER)
	    df_def_record_1 (df, XVECEXP (x, 0, i), bb, insn);
	}
    }
}

/* Process all the registers used in the rtx at address LOC.  */
static void
df_uses_record (df, loc, ref_type, bb, insn, flags)
     struct df *df;
     rtx *loc;
     enum df_ref_type ref_type;
     basic_block bb;
     rtx insn;
     enum df_ref_flags flags;
{
  RTX_CODE code;
  rtx x;

 retry:
  x = *loc;
  if (!x)
    return;
  code = GET_CODE (x);
  switch (code)
    {
    case LABEL_REF:
    case SYMBOL_REF:
    case CONST_INT:
    case CONST:
    case CONST_DOUBLE:
    case CONST_VECTOR:
    case PC:
    case ADDR_VEC:
    case ADDR_DIFF_VEC:
      return;

    case CLOBBER:
      /* If we are clobbering a MEM, mark any registers inside the address
	 as being used.  */
      if (GET_CODE (XEXP (x, 0)) == MEM)
	df_uses_record (df, &XEXP (XEXP (x, 0), 0),
			DF_REF_REG_MEM_STORE, bb, insn, flags);

      /* If we're clobbering a REG then we have a def so ignore.  */
      return;

    case MEM:
      df_uses_record (df, &XEXP (x, 0), DF_REF_REG_MEM_LOAD, bb, insn, flags);
      return;

    case SUBREG:
      /* While we're here, optimize this case.  */

      /* In case the SUBREG is not of a register, don't optimize.  */
      if (GET_CODE (SUBREG_REG (x)) != REG)
	{
	  loc = &SUBREG_REG (x);
	  df_uses_record (df, loc, ref_type, bb, insn, flags);
	  return;
	}

      /* ... Fall through ...  */

    case REG:
      /* See a register (or subreg) other than being set.  */
      df_ref_record (df, x, loc, insn, ref_type, flags);
      return;

    case SET:
      {
	rtx dst = SET_DEST (x);

	df_uses_record (df, &SET_SRC (x), DF_REF_REG_USE, bb, insn, 0);

	switch (GET_CODE (dst))
	  {
	  case SUBREG:
	    if (read_modify_subreg_p (dst))
	      {
		df_uses_record (df, &SUBREG_REG (dst), DF_REF_REG_USE, bb,
				insn, DF_REF_READ_WRITE);
		break;
	      }
	    /* ... FALLTHRU ...  */
	  case REG:
	  case PC:
	    break;
	  case MEM:
	    df_uses_record (df, &XEXP (dst, 0),
			    DF_REF_REG_MEM_STORE,
			    bb, insn, 0);
	    break;
	  case STRICT_LOW_PART:
	    /* A strict_low_part uses the whole reg not only the subreg.  */
	    dst = XEXP (dst, 0);
	    if (GET_CODE (dst) != SUBREG)
	      abort ();
	    df_uses_record (df, &SUBREG_REG (dst), DF_REF_REG_USE, bb,
			    insn, DF_REF_READ_WRITE);
	    break;
	  case ZERO_EXTRACT:
	  case SIGN_EXTRACT:
	    df_uses_record (df, &XEXP (dst, 0), DF_REF_REG_USE, bb, insn,
			    DF_REF_READ_WRITE);
	    df_uses_record (df, &XEXP (dst, 1), DF_REF_REG_USE, bb, insn, 0);
	    df_uses_record (df, &XEXP (dst, 2), DF_REF_REG_USE, bb, insn, 0);
	    dst = XEXP (dst, 0);
	    break;
	  default:
	    abort ();
	  }
	return;
      }

    case RETURN:
      break;

    case ASM_OPERANDS:
    case UNSPEC_VOLATILE:
    case TRAP_IF:
    case ASM_INPUT:
      {
	/* Traditional and volatile asm instructions must be considered to use
	   and clobber all hard registers, all pseudo-registers and all of
	   memory.  So must TRAP_IF and UNSPEC_VOLATILE operations.

	   Consider for instance a volatile asm that changes the fpu rounding
	   mode.  An insn should not be moved across this even if it only uses
	   pseudo-regs because it might give an incorrectly rounded result.

	   For now, just mark any regs we can find in ASM_OPERANDS as
	   used.  */

	/* For all ASM_OPERANDS, we must traverse the vector of input operands.
	   We can not just fall through here since then we would be confused
	   by the ASM_INPUT rtx inside ASM_OPERANDS, which do not indicate
	   traditional asms unlike their normal usage.  */
	if (code == ASM_OPERANDS)
	  {
	    int j;

	    for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
	      df_uses_record (df, &ASM_OPERANDS_INPUT (x, j),
			      DF_REF_REG_USE, bb, insn, 0);
	    return;
	  }
	break;
      }

    case PRE_DEC:
    case POST_DEC:
    case PRE_INC:
    case POST_INC:
    case PRE_MODIFY:
    case POST_MODIFY:
      /* Catch the def of the register being modified.  */
      df_ref_record (df, XEXP (x, 0), &XEXP (x, 0), insn,
		     DF_REF_REG_DEF, DF_REF_READ_WRITE);

      /* ... Fall through to handle uses ...  */

    default:
      break;
    }

  /* Recursively scan the operands of this expression.  */
  {
    const char *fmt = GET_RTX_FORMAT (code);
    int i;

    for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
      {
	if (fmt[i] == 'e')
	  {
	    /* Tail recursive case: save a function call level.  */
	    if (i == 0)
	      {
		loc = &XEXP (x, 0);
		goto retry;
	      }
	    df_uses_record (df, &XEXP (x, i), ref_type, bb, insn, flags);
	  }
	else if (fmt[i] == 'E')
1180 int j;
1181 for (j = 0; j < XVECLEN (x, i); j++)
1182 df_uses_record (df, &XVECEXP (x, i, j), ref_type,
1183 bb, insn, flags);
/* Record all the refs within INSN of basic block BB.  */
static void
df_insn_refs_record (df, bb, insn)
     struct df *df;
     basic_block bb;
     rtx insn;
{
  int i;

  if (INSN_P (insn))
    {
      rtx note;

      /* Record register defs.  */
      df_defs_record (df, PATTERN (insn), bb, insn);

      if (df->flags & DF_EQUIV_NOTES)
        for (note = REG_NOTES (insn); note;
             note = XEXP (note, 1))
          {
            switch (REG_NOTE_KIND (note))
              {
              case REG_EQUIV:
              case REG_EQUAL:
                df_uses_record (df, &XEXP (note, 0), DF_REF_REG_USE,
                                bb, insn, 0);
              default:
                break;
              }
          }

      if (GET_CODE (insn) == CALL_INSN)
        {
          rtx note;
          rtx x;

          /* Record the registers used to pass arguments.  */
          for (note = CALL_INSN_FUNCTION_USAGE (insn); note;
               note = XEXP (note, 1))
            {
              if (GET_CODE (XEXP (note, 0)) == USE)
                df_uses_record (df, &XEXP (XEXP (note, 0), 0), DF_REF_REG_USE,
                                bb, insn, 0);
            }

          /* The stack ptr is used (honorarily) by a CALL insn.  */
          x = df_reg_use_gen (STACK_POINTER_REGNUM);
          df_uses_record (df, &XEXP (x, 0), DF_REF_REG_USE, bb, insn, 0);

          if (df->flags & DF_HARD_REGS)
            {
              /* Calls may also reference any of the global registers,
                 so they are recorded as used.  */
              for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
                if (global_regs[i])
                  {
                    x = df_reg_use_gen (i);
                    df_uses_record (df, &SET_DEST (x),
                                    DF_REF_REG_USE, bb, insn, 0);
                  }
            }
        }

      /* Record the register uses.  */
      df_uses_record (df, &PATTERN (insn),
                      DF_REF_REG_USE, bb, insn, 0);

      if (GET_CODE (insn) == CALL_INSN)
        {
          rtx note;

          if (df->flags & DF_HARD_REGS)
            {
              /* Kill all registers invalidated by a call.  */
              for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
                if (TEST_HARD_REG_BIT (regs_invalidated_by_call, i))
                  {
                    rtx reg_clob = df_reg_clobber_gen (i);
                    df_defs_record (df, reg_clob, bb, insn);
                  }
            }

          /* There may be extra registers to be clobbered.  */
          for (note = CALL_INSN_FUNCTION_USAGE (insn);
               note;
               note = XEXP (note, 1))
            if (GET_CODE (XEXP (note, 0)) == CLOBBER)
              df_defs_record (df, XEXP (note, 0), bb, insn);
        }
    }
}
/* Record all the refs within the basic block BB.  */
static void
df_bb_refs_record (df, bb)
     struct df *df;
     basic_block bb;
{
  rtx insn;

  /* Scan the block an insn at a time from beginning to end.  */
  for (insn = bb->head; ; insn = NEXT_INSN (insn))
    {
      if (INSN_P (insn))
        {
          /* Record defs within INSN.  */
          df_insn_refs_record (df, bb, insn);
        }

      if (insn == bb->end)
        break;
    }
}


/* Record all the refs in the basic blocks specified by BLOCKS.  */
static void
df_refs_record (df, blocks)
     struct df *df;
     bitmap blocks;
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_refs_record (df, bb);
    });
}
/* Dataflow analysis routines.  */


/* Create reg-def chains for basic block BB.  These are a list of
   definitions for each register.  */
static void
df_bb_reg_def_chain_create (df, bb)
     struct df *df;
     basic_block bb;
{
  rtx insn;

  /* Perhaps the defs should be sorted using a depth first search
     of the CFG (or possibly a breadth first search).  We currently
     scan the basic blocks in reverse order so that the first defs
     appear at the start of the chain.  */

  for (insn = bb->end; insn && insn != PREV_INSN (bb->head);
       insn = PREV_INSN (insn))
    {
      struct df_link *link;
      unsigned int uid = INSN_UID (insn);

      if (! INSN_P (insn))
        continue;

      for (link = df->insns[uid].defs; link; link = link->next)
        {
          struct ref *def = link->ref;
          unsigned int dregno = DF_REF_REGNO (def);

          df->regs[dregno].defs
            = df_link_create (def, df->regs[dregno].defs);
        }
    }
}


/* Create reg-def chains for each basic block within BLOCKS.  These
   are a list of definitions for each register.  */
static void
df_reg_def_chain_create (df, blocks)
     struct df *df;
     bitmap blocks;
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP/*_REV*/ (blocks, 0, bb,
    {
      df_bb_reg_def_chain_create (df, bb);
    });
}


/* Create reg-use chains for basic block BB.  These are a list of uses
   for each register.  */
static void
df_bb_reg_use_chain_create (df, bb)
     struct df *df;
     basic_block bb;
{
  rtx insn;

  /* Scan in forward order so that the last uses appear at the
     start of the chain.  */

  for (insn = bb->head; insn && insn != NEXT_INSN (bb->end);
       insn = NEXT_INSN (insn))
    {
      struct df_link *link;
      unsigned int uid = INSN_UID (insn);

      if (! INSN_P (insn))
        continue;

      for (link = df->insns[uid].uses; link; link = link->next)
        {
          struct ref *use = link->ref;
          unsigned int uregno = DF_REF_REGNO (use);

          df->regs[uregno].uses
            = df_link_create (use, df->regs[uregno].uses);
        }
    }
}


/* Create reg-use chains for each basic block within BLOCKS.  These
   are a list of uses for each register.  */
static void
df_reg_use_chain_create (df, blocks)
     struct df *df;
     bitmap blocks;
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_reg_use_chain_create (df, bb);
    });
}
/* Create def-use chains from reaching use bitmaps for basic block BB.  */
static void
df_bb_du_chain_create (df, bb, ru)
     struct df *df;
     basic_block bb;
     bitmap ru;
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  bitmap_copy (ru, bb_info->ru_out);

  /* For each def in BB create a linked list (chain) of uses
     reached from the def.  */
  for (insn = bb->end; insn && insn != PREV_INSN (bb->head);
       insn = PREV_INSN (insn))
    {
      struct df_link *def_link;
      struct df_link *use_link;
      unsigned int uid = INSN_UID (insn);

      if (! INSN_P (insn))
        continue;

      /* For each def in insn...  */
      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
        {
          struct ref *def = def_link->ref;
          unsigned int dregno = DF_REF_REGNO (def);

          DF_REF_CHAIN (def) = 0;

          /* While the reg-use chains are not essential, it
             is _much_ faster to search these short lists rather
             than all the reaching uses, especially for large functions.  */
          for (use_link = df->regs[dregno].uses; use_link;
               use_link = use_link->next)
            {
              struct ref *use = use_link->ref;

              if (bitmap_bit_p (ru, DF_REF_ID (use)))
                {
                  DF_REF_CHAIN (def)
                    = df_link_create (use, DF_REF_CHAIN (def));

                  bitmap_clear_bit (ru, DF_REF_ID (use));
                }
            }
        }

      /* For each use in insn...  */
      for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
        {
          struct ref *use = use_link->ref;
          bitmap_set_bit (ru, DF_REF_ID (use));
        }
    }
}


/* Create def-use chains from reaching use bitmaps for basic blocks
   in BLOCKS.  */
static void
df_du_chain_create (df, blocks)
     struct df *df;
     bitmap blocks;
{
  bitmap ru;
  basic_block bb;

  ru = BITMAP_XMALLOC ();

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_du_chain_create (df, bb, ru);
    });

  BITMAP_XFREE (ru);
}
/* Create use-def chains from reaching def bitmaps for basic block BB.  */
static void
df_bb_ud_chain_create (df, bb)
     struct df *df;
     basic_block bb;
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  struct ref **reg_def_last = df->reg_def_last;
  rtx insn;

  memset (reg_def_last, 0, df->n_regs * sizeof (struct ref *));

  /* For each use in BB create a linked list (chain) of defs
     that reach the use.  */
  for (insn = bb->head; insn && insn != NEXT_INSN (bb->end);
       insn = NEXT_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *use_link;
      struct df_link *def_link;

      if (! INSN_P (insn))
        continue;

      /* For each use in insn...  */
      for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
        {
          struct ref *use = use_link->ref;
          unsigned int regno = DF_REF_REGNO (use);

          DF_REF_CHAIN (use) = 0;

          /* Has regno been defined in this BB yet?  If so, use
             the last def as the single entry for the use-def
             chain for this use.  Otherwise, we need to add all
             the defs using this regno that reach the start of
             this BB.  */
          if (reg_def_last[regno])
            {
              DF_REF_CHAIN (use)
                = df_link_create (reg_def_last[regno], 0);
            }
          else
            {
              /* While the reg-def chains are not essential, it is
                 _much_ faster to search these short lists rather than
                 all the reaching defs, especially for large
                 functions.  */
              for (def_link = df->regs[regno].defs; def_link;
                   def_link = def_link->next)
                {
                  struct ref *def = def_link->ref;

                  if (bitmap_bit_p (bb_info->rd_in, DF_REF_ID (def)))
                    {
                      DF_REF_CHAIN (use)
                        = df_link_create (def, DF_REF_CHAIN (use));
                    }
                }
            }
        }

      /* For each def in insn...  record the last def of each reg.  */
      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
        {
          struct ref *def = def_link->ref;
          int dregno = DF_REF_REGNO (def);

          reg_def_last[dregno] = def;
        }
    }
}


/* Create use-def chains from reaching def bitmaps for basic blocks
   within BLOCKS.  */
static void
df_ud_chain_create (df, blocks)
     struct df *df;
     bitmap blocks;
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_ud_chain_create (df, bb);
    });
}
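/* Commentary sketch, not part of df.c proper: the reg_def_last trick in
   df_bb_ud_chain_create means that, within one block, a use is chained to
   at most the closest preceding def of its register; only uses with no
   earlier local def fall back to the reaching-defs bitmap.  The toy IR
   below (one def or use of a single register per insn) and all names in
   it are hypothetical, chosen only to illustrate that scan.  */

```c
#include <assert.h>

#define NREGS 4

typedef struct { int is_def; int regno; } insn_t;

/* For each use (indexed by insn position), record the index of the
   defining insn, or -1 when the register is upward exposed (i.e. it
   would need the rd_in bitmap, as in the real code).  */
static void
build_local_ud_chains (const insn_t *insns, int n, int *ud)
{
  int reg_def_last[NREGS];
  int i;

  for (i = 0; i < NREGS; i++)
    reg_def_last[i] = -1;

  for (i = 0; i < n; i++)
    {
      if (insns[i].is_def)
        reg_def_last[insns[i].regno] = i;     /* remember last def seen */
      else
        ud[i] = reg_def_last[insns[i].regno]; /* chain use to that def */
    }
}
```

A later def shadows an earlier one, so the second use of a twice-defined
register chains to the second def only — the single-entry chain the real
code builds with df_link_create (reg_def_last[regno], 0).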
static void
df_rd_transfer_function (bb, changed, in, out, gen, kill, data)
     int bb ATTRIBUTE_UNUSED;
     int *changed;
     bitmap in, out, gen, kill;
     void *data ATTRIBUTE_UNUSED;
{
  *changed = bitmap_union_of_diff (out, gen, in, kill);
}


static void
df_ru_transfer_function (bb, changed, in, out, gen, kill, data)
     int bb ATTRIBUTE_UNUSED;
     int *changed;
     bitmap in, out, gen, kill;
     void *data ATTRIBUTE_UNUSED;
{
  *changed = bitmap_union_of_diff (in, gen, out, kill);
}


static void
df_lr_transfer_function (bb, changed, in, out, use, def, data)
     int bb ATTRIBUTE_UNUSED;
     int *changed;
     bitmap in, out, use, def;
     void *data ATTRIBUTE_UNUSED;
{
  *changed = bitmap_union_of_diff (in, use, out, def);
}
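/* Commentary sketch, not part of df.c proper: bitmap_union_of_diff
   (dst, a, b, c) computes dst = a | (b & ~c) and returns nonzero iff
   dst changed.  So df_rd_transfer_function above is exactly the
   reaching-defs equation out = gen | (in & ~kill), with the changed
   flag driving the iterative solver.  A word-sized version of the
   primitive (hypothetical name) for illustration:  */

```c
#include <assert.h>
#include <stdint.h>

/* dst = a | (b & ~c); return nonzero iff dst changed.  Mirrors the
   contract of GCC's bitmap_union_of_diff on a 32-bit toy set.  */
static int
union_of_diff (uint32_t *dst, uint32_t a, uint32_t b, uint32_t c)
{
  uint32_t next = a | (b & ~c);
  int changed = (next != *dst);

  *dst = next;
  return changed;
}
```

Reapplying the transfer function with unchanged inputs returns 0, which
is what lets the worklist solver detect a fixed point and stop.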
/* Compute local reaching def info for basic block BB.  */
static void
df_bb_rd_local_compute (df, bb)
     struct df *df;
     basic_block bb;
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  for (insn = bb->head; insn && insn != NEXT_INSN (bb->end);
       insn = NEXT_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *def_link;

      if (! INSN_P (insn))
        continue;

      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
        {
          struct ref *def = def_link->ref;
          unsigned int regno = DF_REF_REGNO (def);
          struct df_link *def2_link;

          for (def2_link = df->regs[regno].defs; def2_link;
               def2_link = def2_link->next)
            {
              struct ref *def2 = def2_link->ref;

              /* Add all defs of this reg to the set of kills.  This
                 is greedy since many of these defs will not actually
                 be killed by this BB but it keeps things a lot
                 simpler.  */
              bitmap_set_bit (bb_info->rd_kill, DF_REF_ID (def2));

              /* Zap from the set of gens for this BB.  */
              bitmap_clear_bit (bb_info->rd_gen, DF_REF_ID (def2));
            }

          bitmap_set_bit (bb_info->rd_gen, DF_REF_ID (def));
        }
    }

  bb_info->rd_valid = 1;
}


/* Compute local reaching def info for each basic block within BLOCKS.  */
static void
df_rd_local_compute (df, blocks)
     struct df *df;
     bitmap blocks;
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_rd_local_compute (df, bb);
    });
}
/* Compute local reaching use (upward exposed use) info for basic
   block BB.  */
static void
df_bb_ru_local_compute (df, bb)
     struct df *df;
     basic_block bb;
{
  /* This is much more tricky than computing reaching defs.  With
     reaching defs, defs get killed by other defs.  With upwards
     exposed uses, these get killed by defs with the same regno.  */

  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  for (insn = bb->end; insn && insn != PREV_INSN (bb->head);
       insn = PREV_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *def_link;
      struct df_link *use_link;

      if (! INSN_P (insn))
        continue;

      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
        {
          struct ref *def = def_link->ref;
          unsigned int dregno = DF_REF_REGNO (def);

          for (use_link = df->regs[dregno].uses; use_link;
               use_link = use_link->next)
            {
              struct ref *use = use_link->ref;

              /* Add all uses of this reg to the set of kills.  This
                 is greedy since many of these uses will not actually
                 be killed by this BB but it keeps things a lot
                 simpler.  */
              bitmap_set_bit (bb_info->ru_kill, DF_REF_ID (use));

              /* Zap from the set of gens for this BB.  */
              bitmap_clear_bit (bb_info->ru_gen, DF_REF_ID (use));
            }
        }

      for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
        {
          struct ref *use = use_link->ref;
          /* Add use to set of gens in this BB.  */
          bitmap_set_bit (bb_info->ru_gen, DF_REF_ID (use));
        }
    }

  bb_info->ru_valid = 1;
}


/* Compute local reaching use (upward exposed use) info for each basic
   block within BLOCKS.  */
static void
df_ru_local_compute (df, blocks)
     struct df *df;
     bitmap blocks;
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_ru_local_compute (df, bb);
    });
}
/* Compute local live variable info for basic block BB.  */
static void
df_bb_lr_local_compute (df, bb)
     struct df *df;
     basic_block bb;
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  for (insn = bb->end; insn && insn != PREV_INSN (bb->head);
       insn = PREV_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *link;

      if (! INSN_P (insn))
        continue;

      for (link = df->insns[uid].defs; link; link = link->next)
        {
          struct ref *def = link->ref;
          unsigned int dregno = DF_REF_REGNO (def);

          /* Add def to set of defs in this BB.  */
          bitmap_set_bit (bb_info->lr_def, dregno);

          bitmap_clear_bit (bb_info->lr_use, dregno);
        }

      for (link = df->insns[uid].uses; link; link = link->next)
        {
          struct ref *use = link->ref;
          /* Add use to set of uses in this BB.  */
          bitmap_set_bit (bb_info->lr_use, DF_REF_REGNO (use));
        }
    }

  bb_info->lr_valid = 1;
}


/* Compute local live variable info for each basic block within BLOCKS.  */
static void
df_lr_local_compute (df, blocks)
     struct df *df;
     bitmap blocks;
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_lr_local_compute (df, bb);
    });
}
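/* Commentary sketch, not part of df.c proper: df_bb_lr_local_compute
   scans the block backwards so that a def clears a register from the
   upward-exposed (lr_use) set while adding it to lr_def, and a use
   re-adds it.  The same scan over a hypothetical one-ref-per-insn IR,
   with register k represented as bit k of a word-sized set:  */

```c
#include <assert.h>
#include <stdint.h>

typedef struct { int is_def; int regno; } insn_t;

/* Compute the block-local USE (upward exposed) and DEF sets by a
   backward scan, as the real routine does with bitmaps.  */
static void
lr_local_compute (const insn_t *insns, int n, uint32_t *use, uint32_t *def)
{
  int i;

  *use = 0;
  *def = 0;
  for (i = n - 1; i >= 0; i--)
    {
      uint32_t bit = (uint32_t) 1 << insns[i].regno;

      if (insns[i].is_def)
        {
          *def |= bit;    /* reg is defined in this block */
          *use &= ~bit;   /* an earlier def kills upward exposure */
        }
      else
        *use |= bit;      /* reg is (so far) upward exposed */
    }
}
```

These USE/DEF sets then feed df_lr_transfer_function's backward
equation in = use | (out & ~def).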
/* Compute register info: lifetime, bb, and number of defs and uses
   for basic block BB.  */
static void
df_bb_reg_info_compute (df, bb, live)
     struct df *df;
     basic_block bb;
     bitmap live;
{
  struct reg_info *reg_info = df->regs;
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  bitmap_copy (live, bb_info->lr_out);

  for (insn = bb->end; insn && insn != PREV_INSN (bb->head);
       insn = PREV_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      unsigned int regno;
      struct df_link *link;

      if (! INSN_P (insn))
        continue;

      for (link = df->insns[uid].defs; link; link = link->next)
        {
          struct ref *def = link->ref;
          unsigned int dregno = DF_REF_REGNO (def);

          /* Kill this register.  */
          bitmap_clear_bit (live, dregno);
          reg_info[dregno].n_defs++;
        }

      for (link = df->insns[uid].uses; link; link = link->next)
        {
          struct ref *use = link->ref;
          unsigned int uregno = DF_REF_REGNO (use);

          /* This register is now live.  */
          bitmap_set_bit (live, uregno);
          reg_info[uregno].n_uses++;
        }

      /* Increment lifetimes of all live registers.  */
      EXECUTE_IF_SET_IN_BITMAP (live, 0, regno,
        {
          reg_info[regno].lifetime++;
        });
    }
}


/* Compute register info: lifetime, bb, and number of defs and uses.  */
static void
df_reg_info_compute (df, blocks)
     struct df *df;
     bitmap blocks;
{
  basic_block bb;
  bitmap live;

  live = BITMAP_XMALLOC ();

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_reg_info_compute (df, bb, live);
    });

  BITMAP_XFREE (live);
}
/* Assign LUIDs for BB.  */
static int
df_bb_luids_set (df, bb)
     struct df *df;
     basic_block bb;
{
  rtx insn;
  int luid = 0;

  /* The LUIDs are monotonically increasing for each basic block.  */

  for (insn = bb->head; ; insn = NEXT_INSN (insn))
    {
      if (INSN_P (insn))
        DF_INSN_LUID (df, insn) = luid++;
      DF_INSN_LUID (df, insn) = luid;

      if (insn == bb->end)
        break;
    }
  return luid;
}


/* Assign LUIDs for each basic block within BLOCKS.  */
static int
df_luids_set (df, blocks)
     struct df *df;
     bitmap blocks;
{
  basic_block bb;
  int total = 0;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      total += df_bb_luids_set (df, bb);
    });
  return total;
}
/* Perform dataflow analysis using existing DF structure for blocks
   within BLOCKS.  If BLOCKS is zero, use all basic blocks in the CFG.  */
static void
df_analyse_1 (df, blocks, flags, update)
     struct df *df;
     bitmap blocks;
     int flags;
     int update;
{
  int aflags;
  int dflags;
  int i;
  basic_block bb;

  dflags = 0;
  aflags = flags;
  if (flags & DF_UD_CHAIN)
    aflags |= DF_RD | DF_RD_CHAIN;

  if (flags & DF_DU_CHAIN)
    aflags |= DF_RU;

  if (flags & DF_RU)
    aflags |= DF_RU_CHAIN;

  if (flags & DF_REG_INFO)
    aflags |= DF_LR;

  if (! blocks)
    blocks = df->all_blocks;

  df->flags = flags;
  if (update)
    {
      df_refs_update (df);
      /* More fine grained incremental dataflow analysis would be
         nice.  For now recompute the whole shebang for the
         modified blocks.  */
#if 0
      df_refs_unlink (df, blocks);
#endif
      /* All the def-use, use-def chains can be potentially
         modified by changes in one block.  The size of the
         bitmaps can also change.  */
    }
  else
    {
      /* Scan the function for all register defs and uses.  */
      df_refs_queue (df);
      df_refs_record (df, blocks);

      /* Link all the new defs and uses to the insns.  */
      df_refs_process (df);
    }

  /* Allocate the bitmaps now the total number of defs and uses are
     known.  If the number of defs or uses have changed, then
     these bitmaps need to be reallocated.  */
  df_bitmaps_alloc (df, aflags);

  /* Set the LUIDs for each specified basic block.  */
  df_luids_set (df, blocks);

  /* Recreate reg-def and reg-use chains from scratch so that first
     def is at the head of the reg-def chain and the last use is at
     the head of the reg-use chain.  This is only important for
     regs local to a basic block as it speeds up searching.  */
  if (aflags & DF_RD_CHAIN)
    {
      df_reg_def_chain_create (df, blocks);
    }

  if (aflags & DF_RU_CHAIN)
    {
      df_reg_use_chain_create (df, blocks);
    }

  df->dfs_order = xmalloc (sizeof (int) * num_basic_blocks);
  df->rc_order = xmalloc (sizeof (int) * num_basic_blocks);
  df->rts_order = xmalloc (sizeof (int) * num_basic_blocks);
  df->inverse_dfs_map = xmalloc (sizeof (int) * last_basic_block);
  df->inverse_rc_map = xmalloc (sizeof (int) * last_basic_block);
  df->inverse_rts_map = xmalloc (sizeof (int) * last_basic_block);

  flow_depth_first_order_compute (df->dfs_order, df->rc_order);
  flow_reverse_top_sort_order_compute (df->rts_order);
  for (i = 0; i < num_basic_blocks; i++)
    {
      df->inverse_dfs_map[df->dfs_order[i]] = i;
      df->inverse_rc_map[df->rc_order[i]] = i;
      df->inverse_rts_map[df->rts_order[i]] = i;
    }
  if (aflags & DF_RD)
    {
      /* Compute the sets of gens and kills for the defs of each bb.  */
      df_rd_local_compute (df, df->flags & DF_RD ? blocks : df->all_blocks);
      {
        bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
        bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
        bitmap *gen = xmalloc (sizeof (bitmap) * last_basic_block);
        bitmap *kill = xmalloc (sizeof (bitmap) * last_basic_block);
        FOR_ALL_BB (bb)
          {
            in[bb->sindex] = DF_BB_INFO (df, bb)->rd_in;
            out[bb->sindex] = DF_BB_INFO (df, bb)->rd_out;
            gen[bb->sindex] = DF_BB_INFO (df, bb)->rd_gen;
            kill[bb->sindex] = DF_BB_INFO (df, bb)->rd_kill;
          }
        iterative_dataflow_bitmap (in, out, gen, kill, df->all_blocks,
                                   FORWARD, UNION, df_rd_transfer_function,
                                   df->inverse_rc_map, NULL);
        free (in);
        free (out);
        free (gen);
        free (kill);
      }
    }

  if (aflags & DF_UD_CHAIN)
    {
      /* Create use-def chains.  */
      df_ud_chain_create (df, df->all_blocks);

      if (! (flags & DF_RD))
        dflags |= DF_RD;
    }

  if (aflags & DF_RU)
    {
      /* Compute the sets of gens and kills for the upwards exposed
         uses in each bb.  */
      df_ru_local_compute (df, df->flags & DF_RU ? blocks : df->all_blocks);
      {
        bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
        bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
        bitmap *gen = xmalloc (sizeof (bitmap) * last_basic_block);
        bitmap *kill = xmalloc (sizeof (bitmap) * last_basic_block);
        FOR_ALL_BB (bb)
          {
            in[bb->sindex] = DF_BB_INFO (df, bb)->ru_in;
            out[bb->sindex] = DF_BB_INFO (df, bb)->ru_out;
            gen[bb->sindex] = DF_BB_INFO (df, bb)->ru_gen;
            kill[bb->sindex] = DF_BB_INFO (df, bb)->ru_kill;
          }
        iterative_dataflow_bitmap (in, out, gen, kill, df->all_blocks,
                                   BACKWARD, UNION, df_ru_transfer_function,
                                   df->inverse_rts_map, NULL);
        free (in);
        free (out);
        free (gen);
        free (kill);
      }
    }

  if (aflags & DF_DU_CHAIN)
    {
      /* Create def-use chains.  */
      df_du_chain_create (df, df->all_blocks);

      if (! (flags & DF_RU))
        dflags |= DF_RU;
    }

  /* Free up bitmaps that are no longer required.  */
  if (dflags)
    df_bitmaps_free (df, dflags);

  if (aflags & DF_LR)
    {
      /* Compute the sets of defs and uses of live variables.  */
      df_lr_local_compute (df, df->flags & DF_LR ? blocks : df->all_blocks);
      {
        bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
        bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
        bitmap *use = xmalloc (sizeof (bitmap) * last_basic_block);
        bitmap *def = xmalloc (sizeof (bitmap) * last_basic_block);
        FOR_ALL_BB (bb)
          {
            in[bb->sindex] = DF_BB_INFO (df, bb)->lr_in;
            out[bb->sindex] = DF_BB_INFO (df, bb)->lr_out;
            use[bb->sindex] = DF_BB_INFO (df, bb)->lr_use;
            def[bb->sindex] = DF_BB_INFO (df, bb)->lr_def;
          }
        iterative_dataflow_bitmap (in, out, use, def, df->all_blocks,
                                   BACKWARD, UNION, df_lr_transfer_function,
                                   df->inverse_rts_map, NULL);
        free (in);
        free (out);
        free (use);
        free (def);
      }
    }

  if (aflags & DF_REG_INFO)
    {
      df_reg_info_compute (df, df->all_blocks);
    }
  free (df->dfs_order);
  free (df->rc_order);
  free (df->rts_order);
  free (df->inverse_rc_map);
  free (df->inverse_dfs_map);
  free (df->inverse_rts_map);
}
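/* Commentary sketch, not part of df.c proper: df_analyse_1 hands each
   problem to iterative_dataflow_bitmap, which repeatedly applies the
   transfer function in a precomputed order until no block's set
   changes.  A minimal round-robin version of the FORWARD/UNION case
   (reaching defs) on a word-sized toy CFG, with all names hypothetical:  */

```c
#include <assert.h>
#include <stdint.h>

#define NBLOCKS 3

/* Iterate in[b] = union of out[p] over predecessors and
   out[b] = gen[b] | (in[b] & ~kill[b]) until a fixed point.  */
static void
solve_forward_union (uint32_t in[NBLOCKS], uint32_t out[NBLOCKS],
                     const uint32_t gen[NBLOCKS],
                     const uint32_t kill[NBLOCKS],
                     const int pred[NBLOCKS][NBLOCKS],
                     const int npred[NBLOCKS])
{
  int changed = 1;

  while (changed)
    {
      int b, p;

      changed = 0;
      for (b = 0; b < NBLOCKS; b++)
        {
          uint32_t next_in = 0;
          uint32_t next_out;

          for (p = 0; p < npred[b]; p++)
            next_in |= out[pred[b][p]];          /* UNION confluence */
          in[b] = next_in;
          next_out = gen[b] | (in[b] & ~kill[b]);
          if (next_out != out[b])
            {
              out[b] = next_out;
              changed = 1;                       /* another pass needed */
            }
        }
    }
}
```

The real solver does the same thing with a worklist and visits blocks in
reverse-completion order (df->inverse_rc_map) so a forward problem
usually converges in one or two passes.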
/* Initialise dataflow analysis.  */
struct df *
df_init ()
{
  struct df *df;

  df = xcalloc (1, sizeof (struct df));

  /* Squirrel away a global for debugging.  */
  ddf = df;

  return df;
}


/* Start queuing refs.  */
static int
df_refs_queue (df)
     struct df *df;
{
  df->def_id_save = df->def_id;
  df->use_id_save = df->use_id;
  /* ???? Perhaps we should save current obstack state so that we can
     unwind it.  */
  return 0;
}


/* Process queued refs.  */
static int
df_refs_process (df)
     struct df *df;
{
  unsigned int i;

  /* Build new insn-def chains.  */
  for (i = df->def_id_save; i != df->def_id; i++)
    {
      struct ref *def = df->defs[i];
      unsigned int uid = DF_REF_INSN_UID (def);

      /* Add def to head of def list for INSN.  */
      df->insns[uid].defs
        = df_link_create (def, df->insns[uid].defs);
    }

  /* Build new insn-use chains.  */
  for (i = df->use_id_save; i != df->use_id; i++)
    {
      struct ref *use = df->uses[i];
      unsigned int uid = DF_REF_INSN_UID (use);

      /* Add use to head of use list for INSN.  */
      df->insns[uid].uses
        = df_link_create (use, df->insns[uid].uses);
    }
  return 0;
}
/* Update refs for basic block BB.  */
static int
df_bb_refs_update (df, bb)
     struct df *df;
     basic_block bb;
{
  rtx insn;
  int count = 0;

  /* While we have to scan the chain of insns for this BB, we don't
     need to allocate and queue a long chain of BB/INSN pairs.  Using
     a bitmap for insns_modified saves memory and avoids queuing
     duplicates.  */

  for (insn = bb->head; ; insn = NEXT_INSN (insn))
    {
      unsigned int uid;

      uid = INSN_UID (insn);

      if (bitmap_bit_p (df->insns_modified, uid))
        {
          /* Delete any allocated refs of this insn.  MPH, FIXME.  */
          df_insn_refs_unlink (df, bb, insn);

          /* Scan the insn for refs.  */
          df_insn_refs_record (df, bb, insn);

          bitmap_clear_bit (df->insns_modified, uid);
          count++;
        }
      if (insn == bb->end)
        break;
    }
  return count;
}


/* Process all the modified/deleted insns that were queued.  */
static int
df_refs_update (df)
     struct df *df;
{
  basic_block bb;
  int count = 0;

  if ((unsigned int) max_reg_num () >= df->reg_size)
    df_reg_table_realloc (df, 0);

  df_refs_queue (df);

  FOR_EACH_BB_IN_BITMAP (df->bbs_modified, 0, bb,
    {
      count += df_bb_refs_update (df, bb);
    });

  df_refs_process (df);
  return count;
}
/* Return non-zero if any of the requested blocks in the bitmap
   BLOCKS have been modified.  */
static int
df_modified_p (df, blocks)
     struct df *df;
     bitmap blocks;
{
  int update = 0;
  basic_block bb;

  if (!df->n_bbs)
    return 0;

  FOR_ALL_BB (bb)
    if (bitmap_bit_p (df->bbs_modified, bb->sindex)
        && (! blocks || (blocks == (bitmap) -1)
            || bitmap_bit_p (blocks, bb->sindex)))
      {
        update = 1;
        break;
      }

  return update;
}


/* Analyse dataflow info for the basic blocks specified by the bitmap
   BLOCKS, or for the whole CFG if BLOCKS is zero, or just for the
   modified blocks if BLOCKS is -1.  */
int
df_analyse (df, blocks, flags)
     struct df *df;
     bitmap blocks;
     int flags;
{
  int update;

  /* We could deal with additional basic blocks being created by
     rescanning everything again.  */
  if (df->n_bbs && df->n_bbs != (unsigned int) last_basic_block)
    abort ();

  update = df_modified_p (df, blocks);
  if (update || (flags != df->flags))
    {
      if (! blocks)
        {
          if (df->n_bbs)
            {
              /* Recompute everything from scratch.  */
              df_free (df);
            }
          /* Allocate and initialise data structures.  */
          df_alloc (df, max_reg_num ());
          df_analyse_1 (df, 0, flags, 0);
          update = 1;
        }
      else
        {
          if (blocks == (bitmap) -1)
            blocks = df->bbs_modified;

          if (! df->n_bbs)
            abort ();

          df_analyse_1 (df, blocks, flags, 1);
          bitmap_zero (df->bbs_modified);
        }
    }
  return update;
}
/* Free all the dataflow info and the DF structure.  */
void
df_finish (df)
     struct df *df;
{
  df_free (df);
  free (df);
}


/* Unlink INSN from its reference information.  */
static void
df_insn_refs_unlink (df, bb, insn)
     struct df *df;
     basic_block bb ATTRIBUTE_UNUSED;
     rtx insn;
{
  struct df_link *link;
  unsigned int uid;

  uid = INSN_UID (insn);

  /* Unlink all refs defined by this insn.  */
  for (link = df->insns[uid].defs; link; link = link->next)
    df_def_unlink (df, link->ref);

  /* Unlink all refs used by this insn.  */
  for (link = df->insns[uid].uses; link; link = link->next)
    df_use_unlink (df, link->ref);

  df->insns[uid].defs = 0;
  df->insns[uid].uses = 0;
}


#if 0
/* Unlink all the insns within BB from their reference information.  */
static void
df_bb_refs_unlink (df, bb)
     struct df *df;
     basic_block bb;
{
  rtx insn;

  /* Scan the block an insn at a time from beginning to end.  */
  for (insn = bb->head; ; insn = NEXT_INSN (insn))
    {
      if (INSN_P (insn))
        {
          /* Unlink refs for INSN.  */
          df_insn_refs_unlink (df, bb, insn);
        }
      if (insn == bb->end)
        break;
    }
}


/* Unlink all the refs in the basic blocks specified by BLOCKS.
   Not currently used.  */
static void
df_refs_unlink (df, blocks)
     struct df *df;
     bitmap blocks;
{
  basic_block bb;

  if (blocks)
    {
      FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
        {
          df_bb_refs_unlink (df, bb);
        });
    }
  else
    {
      FOR_ALL_BB (bb)
        df_bb_refs_unlink (df, bb);
    }
}
#endif
/* Functions to modify insns.  */


/* Delete INSN and all its reference information.  */
rtx
df_insn_delete (df, bb, insn)
     struct df *df;
     basic_block bb ATTRIBUTE_UNUSED;
     rtx insn;
{
  /* If the insn is a jump, we should perhaps call delete_insn to
     handle the JUMP_LABEL?  */

  /* We should not be deleting the NOTE_INSN_BASIC_BLOCK or label.  */
  if (insn == bb->head)
    abort ();

  /* Delete the insn.  */
  delete_insn (insn);

  df_insn_modify (df, bb, insn);

  return NEXT_INSN (insn);
}


/* Mark that INSN within BB may have changed (created/modified/deleted).
   This may be called multiple times for the same insn.  There is no
   harm calling this function if the insn wasn't changed; it will just
   slow down the rescanning of refs.  */
void
df_insn_modify (df, bb, insn)
     struct df *df;
     basic_block bb;
     rtx insn;
{
  unsigned int uid;

  uid = INSN_UID (insn);

  if (uid >= df->insn_size)
    df_insn_table_realloc (df, 0);

  bitmap_set_bit (df->bbs_modified, bb->sindex);
  bitmap_set_bit (df->insns_modified, uid);

  /* For incremental updating on the fly, perhaps we could make a copy
     of all the refs of the original insn and turn them into
     anti-refs.  When df_refs_update finds these anti-refs, it annihilates
     the original refs.  If validate_change fails then these anti-refs
     will just get ignored.  */
}
typedef struct replace_args
{
  rtx match;
  rtx replacement;
  rtx insn;
  int modified;
} replace_args;


/* Replace mem pointed to by PX with its associated pseudo register.
   DATA is actually a pointer to a structure describing the
   instruction currently being scanned and the MEM we are currently
   replacing.  */
static int
df_rtx_mem_replace (px, data)
     rtx *px;
     void *data;
{
  replace_args *args = (replace_args *) data;
  rtx mem = *px;

  if (mem == NULL_RTX)
    return 0;

  switch (GET_CODE (mem))
    {
    case MEM:
      break;

    case CONST_DOUBLE:
      /* We're not interested in the MEM associated with a
	 CONST_DOUBLE, so there's no need to traverse into one.  */
      return -1;

    default:
      /* This is not a MEM.  */
      return 0;
    }

  if (! rtx_equal_p (args->match, mem))
    /* This is not the MEM we are currently replacing.  */
    return 0;

  /* Actually replace the MEM.  */
  validate_change (args->insn, px, args->replacement, 1);
  args->modified++;

  return 0;
}
/* Replace all occurrences of MEM within INSN with pseudo register REG,
   returning the number of replacements made.  */
int
df_insn_mem_replace (df, bb, insn, mem, reg)
     struct df *df;
     basic_block bb;
     rtx insn;
     rtx mem;
     rtx reg;
{
  replace_args args;

  args.insn = insn;
  args.match = mem;
  args.replacement = reg;
  args.modified = 0;

  /* Search and replace all matching mems within insn.  */
  for_each_rtx (&insn, df_rtx_mem_replace, &args);

  if (args.modified)
    df_insn_modify (df, bb, insn);

  /* ???? FIXME.  We may have a new def or one or more new uses of REG
     in INSN.  REG should be a new pseudo so it won't affect the
     dataflow information that we currently have.  We should add
     the new uses and defs to INSN and then recreate the chains
     when df_analyse is called.  */
  return args.modified;
}
/* Replace one register with another.  Called through for_each_rtx; PX
   points to the rtx being scanned.  DATA is actually a pointer to a
   structure of arguments.  */
static int
df_rtx_reg_replace (px, data)
     rtx *px;
     void *data;
{
  rtx x = *px;
  replace_args *args = (replace_args *) data;

  if (x == NULL_RTX)
    return 0;

  if (x == args->match)
    {
      validate_change (args->insn, px, args->replacement, 1);
      args->modified++;
    }

  return 0;
}
/* Replace the reg within every ref on CHAIN that is within the set
   BLOCKS of basic blocks with NEWREG.  Also update the regs within
   REG_NOTES.  */
void
df_refs_reg_replace (df, blocks, chain, oldreg, newreg)
     struct df *df;
     bitmap blocks;
     struct df_link *chain;
     rtx oldreg;
     rtx newreg;
{
  struct df_link *link;
  replace_args args;

  if (! blocks)
    blocks = df->all_blocks;

  args.match = oldreg;
  args.replacement = newreg;
  args.modified = 0;

  for (link = chain; link; link = link->next)
    {
      struct ref *ref = link->ref;
      rtx insn = DF_REF_INSN (ref);

      if (! INSN_P (insn))
	continue;

      if (bitmap_bit_p (blocks, DF_REF_BBNO (ref)))
	{
	  df_ref_reg_replace (df, ref, oldreg, newreg);

	  /* Replace occurrences of the reg within the REG_NOTES.  */
	  if ((! link->next || DF_REF_INSN (ref)
	       != DF_REF_INSN (link->next->ref))
	      && REG_NOTES (insn))
	    {
	      args.insn = insn;
	      for_each_rtx (&REG_NOTES (insn), df_rtx_reg_replace, &args);
	    }
	}
      else
	{
	  /* Temporary check to ensure that we have a grip on which
	     regs should be replaced.  */
	  abort ();
	}
    }
}
/* Replace all occurrences of register OLDREG with register NEWREG in
   blocks defined by bitmap BLOCKS.  This also replaces occurrences of
   OLDREG in the REG_NOTES but only for insns containing OLDREG.  This
   routine expects the reg-use and reg-def chains to be valid.  */
int
df_reg_replace (df, blocks, oldreg, newreg)
     struct df *df;
     bitmap blocks;
     rtx oldreg;
     rtx newreg;
{
  unsigned int oldregno = REGNO (oldreg);

  df_refs_reg_replace (df, blocks, df->regs[oldregno].defs, oldreg, newreg);
  df_refs_reg_replace (df, blocks, df->regs[oldregno].uses, oldreg, newreg);
  return 1;
}
/* Try replacing the reg within REF with NEWREG.  Do not modify
   def-use/use-def chains.  */
int
df_ref_reg_replace (df, ref, oldreg, newreg)
     struct df *df;
     struct ref *ref;
     rtx oldreg;
     rtx newreg;
{
  /* Check that insn was deleted by being converted into a NOTE.  If
     so ignore this insn.  */
  if (! INSN_P (DF_REF_INSN (ref)))
    return 0;

  if (oldreg && oldreg != DF_REF_REG (ref))
    abort ();

  if (! validate_change (DF_REF_INSN (ref), DF_REF_LOC (ref), newreg, 1))
    return 0;

  df_insn_modify (df, DF_REF_BB (ref), DF_REF_INSN (ref));
  return 1;
}
/* Swap a def of REGNO in DEF_INSN with a use of REGNO in USE_INSN:
   the use is unlinked and the def is moved from DEF_INSN's def list
   onto USE_INSN's.  Return the def, or zero if it cannot be found in
   either insn.  */
struct ref *
df_bb_def_use_swap (df, bb, def_insn, use_insn, regno)
     struct df *df;
     basic_block bb;
     rtx def_insn;
     rtx use_insn;
     unsigned int regno;
{
  struct ref *def;
  struct ref *use;
  int def_uid;
  int use_uid;
  struct df_link *link;

  def = df_bb_insn_regno_first_def_find (df, bb, def_insn, regno);
  if (! def)
    return 0;

  use = df_bb_insn_regno_last_use_find (df, bb, use_insn, regno);
  if (! use)
    return 0;

  /* The USE no longer exists.  */
  use_uid = INSN_UID (use_insn);
  df_use_unlink (df, use);
  df_ref_unlink (&df->insns[use_uid].uses, use);

  /* The DEF requires shifting so remove it from DEF_INSN
     and add it to USE_INSN by reusing LINK.  */
  def_uid = INSN_UID (def_insn);
  link = df_ref_unlink (&df->insns[def_uid].defs, def);
  link->ref = def;
  link->next = df->insns[use_uid].defs;
  df->insns[use_uid].defs = link;

#if 0
  link = df_ref_unlink (&df->regs[regno].defs, def);
  link->ref = def;
  link->next = df->regs[regno].defs;
  df->regs[regno].defs = link;
#endif

  DF_REF_INSN (def) = use_insn;
  return def;
}
/* Record df between FIRST_INSN and LAST_INSN inclusive.  All new
   insns must be processed by this routine.  */
static void
df_insns_modify (df, bb, first_insn, last_insn)
     struct df *df;
     basic_block bb;
     rtx first_insn;
     rtx last_insn;
{
  rtx insn;

  for (insn = first_insn; ; insn = NEXT_INSN (insn))
    {
      unsigned int uid;

      /* A non-const call should not have slipped through the net.  If
	 it does, we need to create a new basic block.  Ouch.  The
	 same applies for a label.  */
      if ((GET_CODE (insn) == CALL_INSN
	   && ! CONST_OR_PURE_CALL_P (insn))
	  || GET_CODE (insn) == CODE_LABEL)
	abort ();

      uid = INSN_UID (insn);

      if (uid >= df->insn_size)
	df_insn_table_realloc (df, 0);

      df_insn_modify (df, bb, insn);

      if (insn == last_insn)
	break;
    }
}
/* Emit PATTERN before INSN within BB.  */
rtx
df_pattern_emit_before (df, pattern, bb, insn)
     struct df *df ATTRIBUTE_UNUSED;
     rtx pattern;
     basic_block bb;
     rtx insn;
{
  rtx ret_insn;
  rtx prev_insn = PREV_INSN (insn);

  /* We should not be inserting before the start of the block.  */
  if (insn == bb->head)
    abort ();
  ret_insn = emit_insn_before (pattern, insn);
  if (ret_insn == insn)
    return ret_insn;

  df_insns_modify (df, bb, NEXT_INSN (prev_insn), ret_insn);
  return ret_insn;
}
/* Emit PATTERN after INSN within BB.  */
rtx
df_pattern_emit_after (df, pattern, bb, insn)
     struct df *df;
     rtx pattern;
     basic_block bb;
     rtx insn;
{
  rtx ret_insn;

  ret_insn = emit_insn_after (pattern, insn);
  if (ret_insn == insn)
    return ret_insn;

  df_insns_modify (df, bb, NEXT_INSN (insn), ret_insn);
  return ret_insn;
}
/* Emit jump PATTERN after INSN within BB.  */
rtx
df_jump_pattern_emit_after (df, pattern, bb, insn)
     struct df *df;
     rtx pattern;
     basic_block bb;
     rtx insn;
{
  rtx ret_insn;

  ret_insn = emit_jump_insn_after (pattern, insn);
  if (ret_insn == insn)
    return ret_insn;

  df_insns_modify (df, bb, NEXT_INSN (insn), ret_insn);
  return ret_insn;
}
/* Move INSN within BB before BEFORE_INSN within BEFORE_BB.

   This function should only be used to move loop invariant insns
   out of a loop where it has been proven that the def-use info
   will still be valid.  */
rtx
df_insn_move_before (df, bb, insn, before_bb, before_insn)
     struct df *df;
     basic_block bb;
     rtx insn;
     basic_block before_bb;
     rtx before_insn;
{
  struct df_link *link;
  unsigned int uid;

  if (! bb)
    return df_pattern_emit_before (df, insn, before_bb, before_insn);

  uid = INSN_UID (insn);

  /* Change bb for all df defined and used by this insn.  */
  for (link = df->insns[uid].defs; link; link = link->next)
    DF_REF_BB (link->ref) = before_bb;
  for (link = df->insns[uid].uses; link; link = link->next)
    DF_REF_BB (link->ref) = before_bb;

  /* The lifetimes of the registers used in this insn will be reduced
     while the lifetimes of the registers defined in this insn
     are likely to be increased.  */

  /* ???? Perhaps all the insns moved should be stored on a list
     which df_analyse removes when it recalculates data flow.  */

  return emit_insn_before (insn, before_insn);
}
/* Functions to query dataflow information.  */


/* Return non-zero if INSN contains a def of REGNO.  */
int
df_insn_regno_def_p (df, bb, insn, regno)
     struct df *df;
     basic_block bb ATTRIBUTE_UNUSED;
     rtx insn;
     unsigned int regno;
{
  unsigned int uid;
  struct df_link *link;

  uid = INSN_UID (insn);

  for (link = df->insns[uid].defs; link; link = link->next)
    {
      struct ref *def = link->ref;

      if (DF_REF_REGNO (def) == regno)
	return 1;
    }

  return 0;
}
/* Return non-zero if DEF dominates all its uses, i.e. DEF is the only
   def on each of its uses' use-def chains.  */
static int
df_def_dominates_all_uses_p (df, def)
     struct df *df ATTRIBUTE_UNUSED;
     struct ref *def;
{
  struct df_link *du_link;

  /* Follow def-use chain to find all the uses of this def.  */
  for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
    {
      struct ref *use = du_link->ref;
      struct df_link *ud_link;

      /* Follow use-def chain to check all the defs for this use.  */
      for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
	if (ud_link->ref != def)
	  return 0;
    }
  return 1;
}


/* Return non-zero if every def of INSN dominates all its uses.  */
int
df_insn_dominates_all_uses_p (df, bb, insn)
     struct df *df;
     basic_block bb ATTRIBUTE_UNUSED;
     rtx insn;
{
  unsigned int uid;
  struct df_link *link;

  uid = INSN_UID (insn);

  for (link = df->insns[uid].defs; link; link = link->next)
    {
      struct ref *def = link->ref;

      if (! df_def_dominates_all_uses_p (df, def))
	return 0;
    }

  return 1;
}
/* Return non-zero if DEF dominates all its uses within the bitmap
   BLOCKS.  */
static int
df_def_dominates_uses_p (df, def, blocks)
     struct df *df ATTRIBUTE_UNUSED;
     struct ref *def;
     bitmap blocks;
{
  struct df_link *du_link;

  /* Follow def-use chain to find all the uses of this def.  */
  for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
    {
      struct ref *use = du_link->ref;
      struct df_link *ud_link;

      /* Only worry about the uses within BLOCKS.  For example,
	 consider a register defined within a loop that is live at the
	 loop exits.  */
      if (bitmap_bit_p (blocks, DF_REF_BBNO (use)))
	{
	  /* Follow use-def chain to check all the defs for this use.  */
	  for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
	    if (ud_link->ref != def)
	      return 0;
	}
    }
  return 1;
}
/* Return non-zero if all the defs of INSN within BB dominate all
   their corresponding uses within BLOCKS.  */
int
df_insn_dominates_uses_p (df, bb, insn, blocks)
     struct df *df;
     basic_block bb ATTRIBUTE_UNUSED;
     rtx insn;
     bitmap blocks;
{
  unsigned int uid;
  struct df_link *link;

  uid = INSN_UID (insn);

  for (link = df->insns[uid].defs; link; link = link->next)
    {
      struct ref *def = link->ref;

      /* Only consider the defs within BLOCKS.  */
      if (bitmap_bit_p (blocks, DF_REF_BBNO (def))
	  && ! df_def_dominates_uses_p (df, def, blocks))
	return 0;
    }

  return 1;
}
/* Return the basic block that REG is referenced in, or NULL if it is
   referenced in multiple basic blocks.  */
basic_block
df_regno_bb (df, regno)
     struct df *df;
     unsigned int regno;
{
  struct df_link *defs = df->regs[regno].defs;
  struct df_link *uses = df->regs[regno].uses;
  struct ref *def = defs ? defs->ref : 0;
  struct ref *use = uses ? uses->ref : 0;
  basic_block bb_def = def ? DF_REF_BB (def) : 0;
  basic_block bb_use = use ? DF_REF_BB (use) : 0;

  /* Compare blocks of first def and last use.  ???? FIXME.  What if
     the reg-def and reg-use lists are not correctly ordered?  */
  return bb_def == bb_use ? bb_def : 0;
}
/* Return non-zero if REG is used in multiple basic blocks.  */
int
df_reg_global_p (df, reg)
     struct df *df;
     rtx reg;
{
  return df_regno_bb (df, REGNO (reg)) != 0;
}


/* Return total lifetime (in insns) of REG.  */
int
df_reg_lifetime (df, reg)
     struct df *df;
     rtx reg;
{
  return df->regs[REGNO (reg)].lifetime;
}
/* Return non-zero if REG is live at start of BB.  */
int
df_bb_reg_live_start_p (df, bb, reg)
     struct df *df ATTRIBUTE_UNUSED;
     basic_block bb;
     rtx reg;
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);

#ifdef ENABLE_CHECKING
  if (! bb_info->lr_in)
    abort ();
#endif

  return bitmap_bit_p (bb_info->lr_in, REGNO (reg));
}


/* Return non-zero if REG is live at end of BB.  */
int
df_bb_reg_live_end_p (df, bb, reg)
     struct df *df ATTRIBUTE_UNUSED;
     basic_block bb;
     rtx reg;
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);

#ifdef ENABLE_CHECKING
  if (! bb_info->lr_in)
    abort ();
#endif

  return bitmap_bit_p (bb_info->lr_out, REGNO (reg));
}
/* Return -1 if life of REG1 is before life of REG2, 1 if life of REG1
   is after life of REG2, or 0 if the lives overlap.  */
int
df_bb_regs_lives_compare (df, bb, reg1, reg2)
     struct df *df;
     basic_block bb;
     rtx reg1;
     rtx reg2;
{
  unsigned int regno1 = REGNO (reg1);
  unsigned int regno2 = REGNO (reg2);
  struct ref *def1;
  struct ref *use1;
  struct ref *def2;
  struct ref *use2;

  /* The regs must be local to BB.  */
  if (df_regno_bb (df, regno1) != bb
      || df_regno_bb (df, regno2) != bb)
    abort ();

  def2 = df_bb_regno_first_def_find (df, bb, regno2);
  use1 = df_bb_regno_last_use_find (df, bb, regno1);

  if (DF_INSN_LUID (df, DF_REF_INSN (def2))
      > DF_INSN_LUID (df, DF_REF_INSN (use1)))
    return -1;

  def1 = df_bb_regno_first_def_find (df, bb, regno1);
  use2 = df_bb_regno_last_use_find (df, bb, regno2);

  if (DF_INSN_LUID (df, DF_REF_INSN (def1))
      > DF_INSN_LUID (df, DF_REF_INSN (use2)))
    return 1;

  return 0;
}
/* Return last use of REGNO within BB.  */
static struct ref *
df_bb_regno_last_use_find (df, bb, regno)
     struct df *df;
     basic_block bb ATTRIBUTE_UNUSED;
     unsigned int regno;
{
  struct df_link *link;

  /* This assumes that the reg-use list is ordered such that for any
     BB, the last use is found first.  However, since the BBs are not
     ordered, the first use in the chain is not necessarily the last
     use in the function.  */
  for (link = df->regs[regno].uses; link; link = link->next)
    {
      struct ref *use = link->ref;

      if (DF_REF_BB (use) == bb)
	return use;
    }
  return 0;
}


/* Return first def of REGNO within BB.  */
static struct ref *
df_bb_regno_first_def_find (df, bb, regno)
     struct df *df;
     basic_block bb ATTRIBUTE_UNUSED;
     unsigned int regno;
{
  struct df_link *link;

  /* This assumes that the reg-def list is ordered such that for any
     BB, the first def is found first.  However, since the BBs are not
     ordered, the first def in the chain is not necessarily the first
     def in the function.  */
  for (link = df->regs[regno].defs; link; link = link->next)
    {
      struct ref *def = link->ref;

      if (DF_REF_BB (def) == bb)
	return def;
    }
  return 0;
}
/* Return last use of REGNO inside INSN within BB.  */
static struct ref *
df_bb_insn_regno_last_use_find (df, bb, insn, regno)
     struct df *df;
     basic_block bb ATTRIBUTE_UNUSED;
     rtx insn;
     unsigned int regno;
{
  unsigned int uid;
  struct df_link *link;

  uid = INSN_UID (insn);

  for (link = df->insns[uid].uses; link; link = link->next)
    {
      struct ref *use = link->ref;

      if (DF_REF_REGNO (use) == regno)
	return use;
    }

  return 0;
}


/* Return first def of REGNO inside INSN within BB.  */
static struct ref *
df_bb_insn_regno_first_def_find (df, bb, insn, regno)
     struct df *df;
     basic_block bb ATTRIBUTE_UNUSED;
     rtx insn;
     unsigned int regno;
{
  unsigned int uid;
  struct df_link *link;

  uid = INSN_UID (insn);

  for (link = df->insns[uid].defs; link; link = link->next)
    {
      struct ref *def = link->ref;

      if (DF_REF_REGNO (def) == regno)
	return def;
    }

  return 0;
}
/* Return the insn using REG if the BB contains only a single
   use and def of REG.  */
rtx
df_bb_single_def_use_insn_find (df, bb, insn, reg)
     struct df *df;
     basic_block bb;
     rtx insn;
     rtx reg;
{
  struct ref *def;
  struct ref *use;
  struct df_link *du_link;

  def = df_bb_insn_regno_first_def_find (df, bb, insn, REGNO (reg));

  if (! def)
    abort ();

  du_link = DF_REF_CHAIN (def);

  if (! du_link)
    return NULL_RTX;

  use = du_link->ref;

  /* Check if def is dead.  */
  if (! use)
    return NULL_RTX;

  /* Check for multiple uses.  */
  if (du_link->next)
    return NULL_RTX;

  return DF_REF_INSN (use);
}
/* Functions for debugging/dumping dataflow information.  */


/* Dump a def-use or use-def chain for REF to FILE.  */
static void
df_chain_dump (link, file)
     struct df_link *link;
     FILE *file;
{
  fprintf (file, "{ ");
  for (; link; link = link->next)
    {
      fprintf (file, "%c%d ",
	       DF_REF_REG_DEF_P (link->ref) ? 'd' : 'u',
	       DF_REF_ID (link->ref));
    }
  fprintf (file, "}");
}


/* Dump a chain to FILE, including the register number of each ref.  */
static void
df_chain_dump_regno (link, file)
     struct df_link *link;
     FILE *file;
{
  fprintf (file, "{ ");
  for (; link; link = link->next)
    {
      fprintf (file, "%c%d(%d) ",
	       DF_REF_REG_DEF_P (link->ref) ? 'd' : 'u',
	       DF_REF_ID (link->ref),
	       DF_REF_REGNO (link->ref));
    }
  fprintf (file, "}");
}
/* Dump dataflow info.  */
void
df_dump (df, flags, file)
     struct df *df;
     int flags;
     FILE *file;
{
  unsigned int j;

  if (! df || ! file)
    return;

  fprintf (file, "\nDataflow summary:\n");
  fprintf (file, "n_regs = %d, n_defs = %d, n_uses = %d, n_bbs = %d\n",
	   df->n_regs, df->n_defs, df->n_uses, df->n_bbs);

  if (flags & DF_RD)
    {
      basic_block bb;

      fprintf (file, "Reaching defs:\n");
      FOR_ALL_BB (bb)
	{
	  struct bb_info *bb_info = DF_BB_INFO (df, bb);

	  if (! bb_info->rd_in)
	    continue;

	  fprintf (file, "bb %d in  \t", bb->sindex);
	  dump_bitmap (file, bb_info->rd_in);
	  fprintf (file, "bb %d gen \t", bb->sindex);
	  dump_bitmap (file, bb_info->rd_gen);
	  fprintf (file, "bb %d kill\t", bb->sindex);
	  dump_bitmap (file, bb_info->rd_kill);
	  fprintf (file, "bb %d out \t", bb->sindex);
	  dump_bitmap (file, bb_info->rd_out);
	}
    }

  if (flags & DF_UD_CHAIN)
    {
      fprintf (file, "Use-def chains:\n");
      for (j = 0; j < df->n_defs; j++)
	{
	  if (df->defs[j])
	    {
	      fprintf (file, "d%d bb %d luid %d insn %d reg %d ",
		       j, DF_REF_BBNO (df->defs[j]),
		       DF_INSN_LUID (df, DF_REF_INSN (df->defs[j])),
		       DF_REF_INSN_UID (df->defs[j]),
		       DF_REF_REGNO (df->defs[j]));
	      if (df->defs[j]->flags & DF_REF_READ_WRITE)
		fprintf (file, "read/write ");
	      df_chain_dump (DF_REF_CHAIN (df->defs[j]), file);
	      fprintf (file, "\n");
	    }
	}
    }

  if (flags & DF_RU)
    {
      basic_block bb;

      fprintf (file, "Reaching uses:\n");
      FOR_ALL_BB (bb)
	{
	  struct bb_info *bb_info = DF_BB_INFO (df, bb);

	  if (! bb_info->ru_in)
	    continue;

	  fprintf (file, "bb %d in  \t", bb->sindex);
	  dump_bitmap (file, bb_info->ru_in);
	  fprintf (file, "bb %d gen \t", bb->sindex);
	  dump_bitmap (file, bb_info->ru_gen);
	  fprintf (file, "bb %d kill\t", bb->sindex);
	  dump_bitmap (file, bb_info->ru_kill);
	  fprintf (file, "bb %d out \t", bb->sindex);
	  dump_bitmap (file, bb_info->ru_out);
	}
    }

  if (flags & DF_DU_CHAIN)
    {
      fprintf (file, "Def-use chains:\n");
      for (j = 0; j < df->n_uses; j++)
	{
	  if (df->uses[j])
	    {
	      fprintf (file, "u%d bb %d luid %d insn %d reg %d ",
		       j, DF_REF_BBNO (df->uses[j]),
		       DF_INSN_LUID (df, DF_REF_INSN (df->uses[j])),
		       DF_REF_INSN_UID (df->uses[j]),
		       DF_REF_REGNO (df->uses[j]));
	      if (df->uses[j]->flags & DF_REF_READ_WRITE)
		fprintf (file, "read/write ");
	      df_chain_dump (DF_REF_CHAIN (df->uses[j]), file);
	      fprintf (file, "\n");
	    }
	}
    }

  if (flags & DF_LR)
    {
      basic_block bb;

      fprintf (file, "Live regs:\n");
      FOR_ALL_BB (bb)
	{
	  struct bb_info *bb_info = DF_BB_INFO (df, bb);

	  if (! bb_info->lr_in)
	    continue;

	  fprintf (file, "bb %d in  \t", bb->sindex);
	  dump_bitmap (file, bb_info->lr_in);
	  fprintf (file, "bb %d use \t", bb->sindex);
	  dump_bitmap (file, bb_info->lr_use);
	  fprintf (file, "bb %d def \t", bb->sindex);
	  dump_bitmap (file, bb_info->lr_def);
	  fprintf (file, "bb %d out \t", bb->sindex);
	  dump_bitmap (file, bb_info->lr_out);
	}
    }

  if (flags & (DF_REG_INFO | DF_RD_CHAIN | DF_RU_CHAIN))
    {
      struct reg_info *reg_info = df->regs;

      fprintf (file, "Register info:\n");
      for (j = 0; j < df->n_regs; j++)
	{
	  if (((flags & DF_REG_INFO)
	       && (reg_info[j].n_uses || reg_info[j].n_defs))
	      || ((flags & DF_RD_CHAIN) && reg_info[j].defs)
	      || ((flags & DF_RU_CHAIN) && reg_info[j].uses))
	    {
	      fprintf (file, "reg %d", j);
	      if ((flags & DF_RD_CHAIN) && (flags & DF_RU_CHAIN))
		{
		  basic_block bb = df_regno_bb (df, j);

		  if (bb)
		    fprintf (file, " bb %d", bb->sindex);
		  else
		    fprintf (file, " bb ?");
		}
	      if (flags & DF_REG_INFO)
		{
		  fprintf (file, " life %d", reg_info[j].lifetime);
		}
	      if ((flags & DF_REG_INFO) || (flags & DF_RD_CHAIN))
		{
		  fprintf (file, " defs ");
		  if (flags & DF_REG_INFO)
		    fprintf (file, "%d ", reg_info[j].n_defs);
		  if (flags & DF_RD_CHAIN)
		    df_chain_dump (reg_info[j].defs, file);
		}
	      if ((flags & DF_REG_INFO) || (flags & DF_RU_CHAIN))
		{
		  fprintf (file, " uses ");
		  if (flags & DF_REG_INFO)
		    fprintf (file, "%d ", reg_info[j].n_uses);
		  if (flags & DF_RU_CHAIN)
		    df_chain_dump (reg_info[j].uses, file);
		}
	      fprintf (file, "\n");
	    }
	}
    }
  fprintf (file, "\n");
}
void
df_insn_debug (df, insn, file)
     struct df *df;
     rtx insn;
     FILE *file;
{
  unsigned int uid;
  int bbi;

  uid = INSN_UID (insn);
  if (uid >= df->insn_size)
    return;

  if (df->insns[uid].defs)
    bbi = DF_REF_BBNO (df->insns[uid].defs->ref);
  else if (df->insns[uid].uses)
    bbi = DF_REF_BBNO (df->insns[uid].uses->ref);
  else
    bbi = -1;

  fprintf (file, "insn %d bb %d luid %d defs ",
	   uid, bbi, DF_INSN_LUID (df, insn));
  df_chain_dump (df->insns[uid].defs, file);
  fprintf (file, " uses ");
  df_chain_dump (df->insns[uid].uses, file);
  fprintf (file, "\n");
}


void
df_insn_debug_regno (df, insn, file)
     struct df *df;
     rtx insn;
     FILE *file;
{
  unsigned int uid;
  int bbi;

  uid = INSN_UID (insn);
  if (uid >= df->insn_size)
    return;

  if (df->insns[uid].defs)
    bbi = DF_REF_BBNO (df->insns[uid].defs->ref);
  else if (df->insns[uid].uses)
    bbi = DF_REF_BBNO (df->insns[uid].uses->ref);
  else
    bbi = -1;

  fprintf (file, "insn %d bb %d luid %d defs ",
	   uid, bbi, DF_INSN_LUID (df, insn));
  df_chain_dump_regno (df->insns[uid].defs, file);
  fprintf (file, " uses ");
  df_chain_dump_regno (df->insns[uid].uses, file);
  fprintf (file, "\n");
}
static void
df_regno_debug (df, regno, file)
     struct df *df;
     unsigned int regno;
     FILE *file;
{
  if (regno >= df->reg_size)
    return;

  fprintf (file, "reg %d life %d defs ",
	   regno, df->regs[regno].lifetime);
  df_chain_dump (df->regs[regno].defs, file);
  fprintf (file, " uses ");
  df_chain_dump (df->regs[regno].uses, file);
  fprintf (file, "\n");
}


static void
df_ref_debug (df, ref, file)
     struct df *df;
     struct ref *ref;
     FILE *file;
{
  fprintf (file, "%c%d ",
	   DF_REF_REG_DEF_P (ref) ? 'd' : 'u',
	   DF_REF_ID (ref));
  fprintf (file, "reg %d bb %d luid %d insn %d chain ",
	   DF_REF_REGNO (ref),
	   DF_REF_BBNO (ref),
	   DF_INSN_LUID (df, DF_REF_INSN (ref)),
	   INSN_UID (DF_REF_INSN (ref)));
  df_chain_dump (DF_REF_CHAIN (ref), file);
  fprintf (file, "\n");
}
void
debug_df_insn (insn)
     rtx insn;
{
  df_insn_debug (ddf, insn, stderr);
  debug_rtx (insn);
}


void
debug_df_reg (reg)
     rtx reg;
{
  df_regno_debug (ddf, REGNO (reg), stderr);
}


void
debug_df_regno (regno)
     unsigned int regno;
{
  df_regno_debug (ddf, regno, stderr);
}


void
debug_df_ref (ref)
     struct ref *ref;
{
  df_ref_debug (ddf, ref, stderr);
}


void
debug_df_defno (defno)
     unsigned int defno;
{
  df_ref_debug (ddf, ddf->defs[defno], stderr);
}


void
debug_df_useno (useno)
     unsigned int useno;
{
  df_ref_debug (ddf, ddf->uses[useno], stderr);
}


void
debug_df_chain (link)
     struct df_link *link;
{
  df_chain_dump (link, stderr);
  fputc ('\n', stderr);
}
/* Hybrid search algorithm from "Implementation Techniques for
   Efficient Data-Flow Analysis of Large Programs".  */
static void
hybrid_search_bitmap (block, in, out, gen, kill, dir,
		      conf_op, transfun, visited, pending,
		      data)
     basic_block block;
     bitmap *in, *out, *gen, *kill;
     enum df_flow_dir dir;
     enum df_confluence_op conf_op;
     transfer_function_bitmap transfun;
     sbitmap visited;
     sbitmap pending;
     void *data;
{
  int changed;
  int i = block->sindex;
  edge e;
  basic_block bb = block;

  SET_BIT (visited, block->sindex);
  if (TEST_BIT (pending, block->sindex))
    {
      if (dir == FORWARD)
	{
	  /* Calculate <conf_op> of predecessor_outs.  */
	  bitmap_zero (in[i]);
	  for (e = bb->pred; e != 0; e = e->pred_next)
	    {
	      if (e->src == ENTRY_BLOCK_PTR)
		continue;
	      switch (conf_op)
		{
		case UNION:
		  bitmap_a_or_b (in[i], in[i], out[e->src->sindex]);
		  break;
		case INTERSECTION:
		  bitmap_a_and_b (in[i], in[i], out[e->src->sindex]);
		  break;
		}
	    }
	}
      else
	{
	  /* Calculate <conf_op> of successor ins.  */
	  bitmap_zero (out[i]);
	  for (e = bb->succ; e != 0; e = e->succ_next)
	    {
	      if (e->dest == EXIT_BLOCK_PTR)
		continue;
	      switch (conf_op)
		{
		case UNION:
		  bitmap_a_or_b (out[i], out[i], in[e->dest->sindex]);
		  break;
		case INTERSECTION:
		  bitmap_a_and_b (out[i], out[i], in[e->dest->sindex]);
		  break;
		}
	    }
	}
      /* Common part.  */
      (*transfun) (i, &changed, in[i], out[i], gen[i], kill[i], data);
      RESET_BIT (pending, i);
      if (changed)
	{
	  if (dir == FORWARD)
	    {
	      for (e = bb->succ; e != 0; e = e->succ_next)
		{
		  if (e->dest == EXIT_BLOCK_PTR || e->dest == block)
		    continue;
		  SET_BIT (pending, e->dest->sindex);
		}
	    }
	  else
	    {
	      for (e = bb->pred; e != 0; e = e->pred_next)
		{
		  if (e->src == ENTRY_BLOCK_PTR || e->src == block)
		    continue;
		  SET_BIT (pending, e->src->sindex);
		}
	    }
	}
    }
  if (dir == FORWARD)
    {
      for (e = bb->succ; e != 0; e = e->succ_next)
	{
	  if (e->dest == EXIT_BLOCK_PTR || e->dest == block)
	    continue;
	  if (! TEST_BIT (visited, e->dest->sindex))
	    hybrid_search_bitmap (e->dest, in, out, gen, kill, dir,
				  conf_op, transfun, visited, pending,
				  data);
	}
    }
  else
    {
      for (e = bb->pred; e != 0; e = e->pred_next)
	{
	  if (e->src == ENTRY_BLOCK_PTR || e->src == block)
	    continue;
	  if (! TEST_BIT (visited, e->src->sindex))
	    hybrid_search_bitmap (e->src, in, out, gen, kill, dir,
				  conf_op, transfun, visited, pending,
				  data);
	}
    }
}
/* Hybrid search for sbitmaps, rather than bitmaps.  */
static void
hybrid_search_sbitmap (block, in, out, gen, kill, dir,
		       conf_op, transfun, visited, pending,
		       data)
     basic_block block;
     sbitmap *in, *out, *gen, *kill;
     enum df_flow_dir dir;
     enum df_confluence_op conf_op;
     transfer_function_sbitmap transfun;
     sbitmap visited;
     sbitmap pending;
     void *data;
{
  int changed;
  int i = block->sindex;
  edge e;
  basic_block bb = block;

  SET_BIT (visited, block->sindex);
  if (TEST_BIT (pending, block->sindex))
    {
      if (dir == FORWARD)
	{
	  /* Calculate <conf_op> of predecessor_outs.  */
	  sbitmap_zero (in[i]);
	  for (e = bb->pred; e != 0; e = e->pred_next)
	    {
	      if (e->src == ENTRY_BLOCK_PTR)
		continue;
	      switch (conf_op)
		{
		case UNION:
		  sbitmap_a_or_b (in[i], in[i], out[e->src->sindex]);
		  break;
		case INTERSECTION:
		  sbitmap_a_and_b (in[i], in[i], out[e->src->sindex]);
		  break;
		}
	    }
	}
      else
	{
	  /* Calculate <conf_op> of successor ins.  */
	  sbitmap_zero (out[i]);
	  for (e = bb->succ; e != 0; e = e->succ_next)
	    {
	      if (e->dest == EXIT_BLOCK_PTR)
		continue;
	      switch (conf_op)
		{
		case UNION:
		  sbitmap_a_or_b (out[i], out[i], in[e->dest->sindex]);
		  break;
		case INTERSECTION:
		  sbitmap_a_and_b (out[i], out[i], in[e->dest->sindex]);
		  break;
		}
	    }
	}
      /* Common part.  */
      (*transfun) (i, &changed, in[i], out[i], gen[i], kill[i], data);
      RESET_BIT (pending, i);
      if (changed)
	{
	  if (dir == FORWARD)
	    {
	      for (e = bb->succ; e != 0; e = e->succ_next)
		{
		  if (e->dest == EXIT_BLOCK_PTR || e->dest == block)
		    continue;
		  SET_BIT (pending, e->dest->sindex);
		}
	    }
	  else
	    {
	      for (e = bb->pred; e != 0; e = e->pred_next)
		{
		  if (e->src == ENTRY_BLOCK_PTR || e->src == block)
		    continue;
		  SET_BIT (pending, e->src->sindex);
		}
	    }
	}
    }
  if (dir == FORWARD)
    {
      for (e = bb->succ; e != 0; e = e->succ_next)
	{
	  if (e->dest == EXIT_BLOCK_PTR || e->dest == block)
	    continue;
	  if (! TEST_BIT (visited, e->dest->sindex))
	    hybrid_search_sbitmap (e->dest, in, out, gen, kill, dir,
				   conf_op, transfun, visited, pending,
				   data);
	}
    }
  else
    {
      for (e = bb->pred; e != 0; e = e->pred_next)
	{
	  if (e->src == ENTRY_BLOCK_PTR || e->src == block)
	    continue;
	  if (! TEST_BIT (visited, e->src->sindex))
	    hybrid_search_sbitmap (e->src, in, out, gen, kill, dir,
				   conf_op, transfun, visited, pending,
				   data);
	}
    }
}
3815 /* gen = GEN set.
3816 kill = KILL set.
3817 in, out = Filled in by function.
3818 blocks = Blocks to analyze.
3819 dir = Dataflow direction.
3820 conf_op = Confluence operation.
3821 transfun = Transfer function.
3822 order = Order to iterate in. (Should map block numbers -> order)
3823 data = Whatever you want. It's passed to the transfer function.
3825 This function will perform iterative bitvector dataflow, producing
3826 the in and out sets. Even if you only want to perform it for a
3827 small number of blocks, the vectors for in and out must be large
3828 enough for *all* blocks, because changing one block might affect
3829 others. However, it'll only put what you say to analyze on the
3830 initial worklist.
3832 For forward problems, you probably want to pass in a mapping of
3833 block number to rc_order (like df->inverse_rc_map).
void
iterative_dataflow_sbitmap (in, out, gen, kill, blocks,
			    dir, conf_op, transfun, order, data)
     sbitmap *in, *out, *gen, *kill;
     bitmap blocks;
     enum df_flow_dir dir;
     enum df_confluence_op conf_op;
     transfer_function_sbitmap transfun;
     int *order;
     void *data;
{
  int i;
  fibheap_t worklist;
  basic_block bb;
  sbitmap visited, pending;

  pending = sbitmap_alloc (last_basic_block);
  visited = sbitmap_alloc (last_basic_block);
  sbitmap_zero (pending);
  sbitmap_zero (visited);
  worklist = fibheap_new ();

  /* Seed the worklist with the requested blocks, keyed on the given
     iteration order, and initialize their starting sets.  */
  EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
  {
    fibheap_insert (worklist, order[i], (void *) (size_t) i);
    SET_BIT (pending, i);
    if (dir == FORWARD)
      sbitmap_copy (out[i], gen[i]);
    else
      sbitmap_copy (in[i], gen[i]);
  });

  while (sbitmap_first_set_bit (pending) != -1)
    {
      while (!fibheap_empty (worklist))
	{
	  i = (size_t) fibheap_extract_min (worklist);
	  bb = BASIC_BLOCK (i);
	  if (!TEST_BIT (visited, bb->sindex))
	    hybrid_search_sbitmap (bb, in, out, gen, kill, dir,
				   conf_op, transfun, visited, pending, data);
	}

      /* If anything is still pending, reseed the worklist and iterate
	 again; otherwise we have reached a fixed point.  */
      if (sbitmap_first_set_bit (pending) != -1)
	{
	  EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
	  {
	    fibheap_insert (worklist, order[i], (void *) (size_t) i);
	  });
	  sbitmap_zero (visited);
	}
      else
	break;
    }

  sbitmap_free (pending);
  sbitmap_free (visited);
  fibheap_delete (worklist);
}
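
/* For example, a forward problem over a set of blocks might be driven
   roughly as follows.  This is only a sketch: the in/out/gen/kill
   vectors, the `blocks' bitmap, and the `transfun' callback are
   assumed to have been allocated and filled in by the caller, and the
   confluence operation shown (UNION) is just one choice.

      iterative_dataflow_sbitmap (in, out, gen, kill, blocks,
				  FORWARD, UNION, transfun,
				  df->inverse_rc_map, NULL);

   For a backward problem, pass BACKWARD and an order that visits
   blocks in reverse completion order instead.  */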
/* Exactly the same as iterative_dataflow_sbitmap, except it works on
   bitmaps instead.  */
void
iterative_dataflow_bitmap (in, out, gen, kill, blocks,
			   dir, conf_op, transfun, order, data)
     bitmap *in, *out, *gen, *kill;
     bitmap blocks;
     enum df_flow_dir dir;
     enum df_confluence_op conf_op;
     transfer_function_bitmap transfun;
     int *order;
     void *data;
{
  int i;
  fibheap_t worklist;
  basic_block bb;
  sbitmap visited, pending;

  pending = sbitmap_alloc (last_basic_block);
  visited = sbitmap_alloc (last_basic_block);
  sbitmap_zero (pending);
  sbitmap_zero (visited);
  worklist = fibheap_new ();

  /* Seed the worklist with the requested blocks, keyed on the given
     iteration order, and initialize their starting sets.  */
  EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
  {
    fibheap_insert (worklist, order[i], (void *) (size_t) i);
    SET_BIT (pending, i);
    if (dir == FORWARD)
      bitmap_copy (out[i], gen[i]);
    else
      bitmap_copy (in[i], gen[i]);
  });

  while (sbitmap_first_set_bit (pending) != -1)
    {
      while (!fibheap_empty (worklist))
	{
	  i = (size_t) fibheap_extract_min (worklist);
	  bb = BASIC_BLOCK (i);
	  if (!TEST_BIT (visited, bb->sindex))
	    hybrid_search_bitmap (bb, in, out, gen, kill, dir,
				  conf_op, transfun, visited, pending, data);
	}

      /* If anything is still pending, reseed the worklist and iterate
	 again; otherwise we have reached a fixed point.  */
      if (sbitmap_first_set_bit (pending) != -1)
	{
	  EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
	  {
	    fibheap_insert (worklist, order[i], (void *) (size_t) i);
	  });
	  sbitmap_zero (visited);
	}
      else
	break;
    }

  sbitmap_free (pending);
  sbitmap_free (visited);
  fibheap_delete (worklist);
}