/* Dataflow support routines.
   Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004
   Free Software Foundation, Inc.
   Contributed by Michael P. Hayes (m.hayes@elec.canterbury.ac.nz,
   mhayes@redhat.com)

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING.  If not, write to the Free
Software Foundation, 59 Temple Place - Suite 330, Boston, MA
02111-1307, USA.

OVERVIEW:

This file provides some dataflow routines for computing reaching defs,
upward exposed uses, live variables, def-use chains, and use-def
chains.  The global dataflow is performed using simple iterative
methods with a worklist and could be sped up by ordering the blocks
with a depth first search order.

A `struct ref' data structure (ref) is allocated for every register
reference (def or use) and this records the insn and bb the ref is
found within.  The refs are linked together in chains of uses and defs
for each insn and for each register.  Each ref also has a chain field
that links all the use refs for a def or all the def refs for a use.
This is used to create use-def or def-use chains.


USAGE:

Here's an example of using the dataflow routines.

      struct df *df;

      df = df_init ();

      df_analyse (df, 0, DF_ALL);

      df_dump (df, DF_ALL, stderr);

      df_finish (df);


df_init simply creates a poor man's object (df) that needs to be
passed to all the dataflow routines.  df_finish destroys this
object and frees up any allocated memory.  DF_ALL says to analyse
everything.

df_analyse performs the following:

1. Records defs and uses by scanning the insns in each basic block
   or by scanning the insns queued by df_insn_modify.
2. Links defs and uses into insn-def and insn-use chains.
3. Links defs and uses into reg-def and reg-use chains.
4. Assigns LUIDs to each insn (for modified blocks).
5. Calculates local reaching definitions.
6. Calculates global reaching definitions.
7. Creates use-def chains.
8. Calculates local reaching uses (upwards exposed uses).
9. Calculates global reaching uses.
10. Creates def-use chains.
11. Calculates local live registers.
12. Calculates global live registers.
13. Calculates register lifetimes and determines local registers.

PHILOSOPHY:

Note that the dataflow information is not updated for every newly
deleted or created insn.  If the dataflow information requires
updating then all the changed, new, or deleted insns need to be
marked with df_insn_modify (or df_insns_modify) either directly or
indirectly (say through calling df_insn_delete).  df_insn_modify
marks all the modified insns to get processed the next time df_analyse
is called.

Beware that tinkering with insns may invalidate the dataflow information.
The philosophy behind these routines is that once the dataflow
information has been gathered, the user should store what they require
before they tinker with any insn.  Once a reg is replaced, for example,
then the reg-def/reg-use chains will point to the wrong place.  Once a
whole lot of changes have been made, df_analyse can be called again
to update the dataflow information.  Currently, this is not very smart
with regard to propagating changes to the dataflow so it should not
be called very often.
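
For example, a pass that rewrites an insn and later wants fresh
information might follow a pattern like this (a sketch in the style
of the USAGE example above; which flags to pass depends on what the
pass actually needs):

      ... change PATTERN (insn) ...
      df_insn_modify (df, bb, insn);

      ... more changes ...

      df_analyse (df, 0, DF_UD_CHAIN | DF_DU_CHAIN);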

DATA STRUCTURES:

The basic object is a REF (reference) and this may either be a DEF
(definition) or a USE of a register.

These are linked into a variety of lists; namely reg-def, reg-use,
insn-def, insn-use, def-use, and use-def lists.  For example,
the reg-def lists contain all the refs that define a given register
while the insn-use lists contain all the refs used by an insn.

Note that the reg-def and reg-use chains are generally short (except for
the hard registers) and thus it is much faster to search these chains
rather than searching the def or use bitmaps.

If the insns are in SSA form then the reg-def and use-def lists
should only contain the single defining ref.
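
As an illustration, all the defs of a given register can be visited
by walking its reg-def list (a sketch; the DF_REF_ accessor macros
live in df.h):

      struct df_link *link;

      for (link = df->regs[regno].defs; link; link = link->next)
        {
          struct ref *def = link->ref;
          rtx insn = DF_REF_INSN (def);
          ... inspect or record def ...
        }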

TODO:

1) Incremental dataflow analysis.

Note that if a loop invariant insn is hoisted (or sunk), we do not
need to change the def-use or use-def chains.  All we have to do is to
change the bb field for all the associated defs and uses and to
renumber the LUIDs for the original and new basic blocks of the insn.

When shadowing loop mems we create new uses and defs for new pseudos
so we do not affect the existing dataflow information.

My current strategy is to queue up all modified, created, or deleted
insns so when df_analyse is called we can easily determine all the new
or deleted refs.  Currently the global dataflow information is
recomputed from scratch but this could be propagated more efficiently.

2) Reduced memory requirements.

We could operate a pool of ref structures.  When a ref is deleted it
gets returned to the pool (say by linking on to a chain of free refs).
This will require a pair of bitmaps for defs and uses so that we can
tell which ones have been changed.  Alternatively, we could
periodically squeeze the def and use tables and associated bitmaps and
renumber the def and use ids.

3) Ordering of reg-def and reg-use lists.

Should the first entry in the def list be the first def (within a BB)?
Similarly, should the first entry in the use list be the last use
(within a BB)?

4) Working with a sub-CFG.

Often the whole CFG does not need to be analyzed, for example,
when optimizing a loop, only certain registers are of interest.
Perhaps there should be a bitmap argument to df_analyse to specify
which registers should be analyzed?

NOTES:

Embedded addressing side-effects, such as POST_INC or PRE_INC, generate
both a use and a def.  These are both marked read/write to show that they
are dependent.  For example, (set (reg 40) (mem (post_inc (reg 42))))
will generate a use of reg 42 followed by a def of reg 42 (both marked
read/write).  Similarly, (set (reg 40) (mem (pre_dec (reg 41))))
generates a use of reg 41 then a def of reg 41 (both marked read/write),
even though reg 41 is decremented before it is used for the memory
address in this second example.

A set to a REG inside a ZERO_EXTRACT, SIGN_EXTRACT, or SUBREG invokes
a read-modify-write operation.  We generate both a use and a def
and again mark them read/write.  */

#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
#include "rtl.h"
#include "tm_p.h"
#include "insn-config.h"
#include "recog.h"
#include "function.h"
#include "regs.h"
#include "alloc-pool.h"
#include "hard-reg-set.h"
#include "basic-block.h"
#include "sbitmap.h"
#include "bitmap.h"
#include "df.h"
#include "fibheap.h"

#define FOR_EACH_BB_IN_BITMAP(BITMAP, MIN, BB, CODE)	\
  do							\
    {							\
      unsigned int node_;				\
      EXECUTE_IF_SET_IN_BITMAP (BITMAP, MIN, node_,	\
	{(BB) = BASIC_BLOCK (node_); CODE;});		\
    }							\
  while (0)
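
/* For illustration only: a minimal sketch of how the iterator above is
   used (modeled on the real callers later in this file, such as
   df_refs_record).  BB is bound to each basic block whose index is set
   in BLOCKS and the statement body runs once per such block.  The
   function name here is hypothetical.  */
#if 0
static void
example_bitmap_walk (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_refs_record (df, bb);
    });
}
#endif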

static alloc_pool df_ref_pool;
static alloc_pool df_link_pool;
static struct df *ddf;

static void df_reg_table_realloc (struct df *, int);
static void df_insn_table_realloc (struct df *, unsigned int);
static void df_bitmaps_alloc (struct df *, int);
static void df_bitmaps_free (struct df *, int);
static void df_free (struct df *);
static void df_alloc (struct df *, int);

static rtx df_reg_clobber_gen (unsigned int);
static rtx df_reg_use_gen (unsigned int);

static inline struct df_link *df_link_create (struct ref *, struct df_link *);
static struct df_link *df_ref_unlink (struct df_link **, struct ref *);
static void df_def_unlink (struct df *, struct ref *);
static void df_use_unlink (struct df *, struct ref *);
static void df_insn_refs_unlink (struct df *, basic_block, rtx);
#if 0
static void df_bb_refs_unlink (struct df *, basic_block);
static void df_refs_unlink (struct df *, bitmap);
#endif

static struct ref *df_ref_create (struct df *, rtx, rtx *, rtx,
				  enum df_ref_type, enum df_ref_flags);
static void df_ref_record_1 (struct df *, rtx, rtx *, rtx, enum df_ref_type,
			     enum df_ref_flags);
static void df_ref_record (struct df *, rtx, rtx *, rtx, enum df_ref_type,
			   enum df_ref_flags);
static void df_def_record_1 (struct df *, rtx, basic_block, rtx);
static void df_defs_record (struct df *, rtx, basic_block, rtx);
static void df_uses_record (struct df *, rtx *, enum df_ref_type,
			    basic_block, rtx, enum df_ref_flags);
static void df_insn_refs_record (struct df *, basic_block, rtx);
static void df_bb_refs_record (struct df *, basic_block);
static void df_refs_record (struct df *, bitmap);

static void df_bb_reg_def_chain_create (struct df *, basic_block);
static void df_reg_def_chain_create (struct df *, bitmap);
static void df_bb_reg_use_chain_create (struct df *, basic_block);
static void df_reg_use_chain_create (struct df *, bitmap);
static void df_bb_du_chain_create (struct df *, basic_block, bitmap);
static void df_du_chain_create (struct df *, bitmap);
static void df_bb_ud_chain_create (struct df *, basic_block);
static void df_ud_chain_create (struct df *, bitmap);
static void df_bb_rd_local_compute (struct df *, basic_block);
static void df_rd_local_compute (struct df *, bitmap);
static void df_bb_ru_local_compute (struct df *, basic_block);
static void df_ru_local_compute (struct df *, bitmap);
static void df_bb_lr_local_compute (struct df *, basic_block);
static void df_lr_local_compute (struct df *, bitmap);
static void df_bb_reg_info_compute (struct df *, basic_block, bitmap);
static void df_reg_info_compute (struct df *, bitmap);

static int df_bb_luids_set (struct df *df, basic_block);
static int df_luids_set (struct df *df, bitmap);

static int df_modified_p (struct df *, bitmap);
static int df_refs_queue (struct df *);
static int df_refs_process (struct df *);
static int df_bb_refs_update (struct df *, basic_block);
static int df_refs_update (struct df *);
static void df_analyse_1 (struct df *, bitmap, int, int);

static void df_insns_modify (struct df *, basic_block, rtx, rtx);
static int df_rtx_mem_replace (rtx *, void *);
static int df_rtx_reg_replace (rtx *, void *);
void df_refs_reg_replace (struct df *, bitmap, struct df_link *, rtx, rtx);

static int df_def_dominates_all_uses_p (struct df *, struct ref *def);
static int df_def_dominates_uses_p (struct df *, struct ref *def, bitmap);
static struct ref *df_bb_regno_last_use_find (struct df *, basic_block,
					      unsigned int);
static struct ref *df_bb_regno_first_def_find (struct df *, basic_block,
					       unsigned int);
static struct ref *df_bb_insn_regno_last_use_find (struct df *, basic_block,
						   rtx, unsigned int);
static struct ref *df_bb_insn_regno_first_def_find (struct df *, basic_block,
						    rtx, unsigned int);

static void df_chain_dump (struct df_link *, FILE *file);
static void df_chain_dump_regno (struct df_link *, FILE *file);
static void df_regno_debug (struct df *, unsigned int, FILE *);
static void df_ref_debug (struct df *, struct ref *, FILE *);
static void df_rd_transfer_function (int, int *, bitmap, bitmap, bitmap,
				     bitmap, void *);
static void df_ru_transfer_function (int, int *, bitmap, bitmap, bitmap,
				     bitmap, void *);
static void df_lr_transfer_function (int, int *, bitmap, bitmap, bitmap,
				     bitmap, void *);
static void hybrid_search_bitmap (basic_block, bitmap *, bitmap *,
				  bitmap *, bitmap *, enum df_flow_dir,
				  enum df_confluence_op,
				  transfer_function_bitmap,
				  sbitmap, sbitmap, void *);
static void hybrid_search_sbitmap (basic_block, sbitmap *, sbitmap *,
				   sbitmap *, sbitmap *, enum df_flow_dir,
				   enum df_confluence_op,
				   transfer_function_sbitmap,
				   sbitmap, sbitmap, void *);


/* Local memory allocation/deallocation routines.  */


/* Increase the insn info table to have space for at least SIZE + 1
   elements.  */
static void
df_insn_table_realloc (struct df *df, unsigned int size)
{
  size++;
  if (size <= df->insn_size)
    return;

  /* Make the table a little larger than requested, so we do not need
     to enlarge it so often.  */
  size += df->insn_size / 4;

  df->insns = xrealloc (df->insns, size * sizeof (struct insn_info));

  memset (df->insns + df->insn_size, 0,
	  (size - df->insn_size) * sizeof (struct insn_info));

  df->insn_size = size;

  if (! df->insns_modified)
    {
      df->insns_modified = BITMAP_XMALLOC ();
      bitmap_zero (df->insns_modified);
    }
}


/* Increase the reg info table by SIZE more elements.  */
static void
df_reg_table_realloc (struct df *df, int size)
{
  /* Make table 25 percent larger by default.  */
  if (! size)
    size = df->reg_size / 4;

  size += df->reg_size;
  if (size < max_reg_num ())
    size = max_reg_num ();

  df->regs = xrealloc (df->regs, size * sizeof (struct reg_info));

  /* Zero the new entries.  */
  memset (df->regs + df->reg_size, 0,
	  (size - df->reg_size) * sizeof (struct reg_info));

  df->reg_size = size;
}


/* Allocate bitmaps for each basic block.  */
static void
df_bitmaps_alloc (struct df *df, int flags)
{
  int dflags = 0;
  basic_block bb;

  /* Free the bitmaps if they need resizing.  */
  if ((flags & DF_LR) && df->n_regs < (unsigned int) max_reg_num ())
    dflags |= DF_LR | DF_RU;
  if ((flags & DF_RU) && df->n_uses < df->use_id)
    dflags |= DF_RU;
  if ((flags & DF_RD) && df->n_defs < df->def_id)
    dflags |= DF_RD;

  if (dflags)
    df_bitmaps_free (df, dflags);

  df->n_defs = df->def_id;
  df->n_uses = df->use_id;

  FOR_EACH_BB (bb)
    {
      struct bb_info *bb_info = DF_BB_INFO (df, bb);

      if (flags & DF_RD && ! bb_info->rd_in)
	{
	  /* Allocate bitmaps for reaching definitions.  */
	  bb_info->rd_kill = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->rd_kill);
	  bb_info->rd_gen = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->rd_gen);
	  bb_info->rd_in = BITMAP_XMALLOC ();
	  bb_info->rd_out = BITMAP_XMALLOC ();
	  bb_info->rd_valid = 0;
	}

      if (flags & DF_RU && ! bb_info->ru_in)
	{
	  /* Allocate bitmaps for upward exposed uses.  */
	  bb_info->ru_kill = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->ru_kill);
	  /* Note the lack of symmetry.  */
	  bb_info->ru_gen = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->ru_gen);
	  bb_info->ru_in = BITMAP_XMALLOC ();
	  bb_info->ru_out = BITMAP_XMALLOC ();
	  bb_info->ru_valid = 0;
	}

      if (flags & DF_LR && ! bb_info->lr_in)
	{
	  /* Allocate bitmaps for live variables.  */
	  bb_info->lr_def = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->lr_def);
	  bb_info->lr_use = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->lr_use);
	  bb_info->lr_in = BITMAP_XMALLOC ();
	  bb_info->lr_out = BITMAP_XMALLOC ();
	  bb_info->lr_valid = 0;
	}
    }
}


/* Free bitmaps for each basic block.  */
static void
df_bitmaps_free (struct df *df, int flags)
{
  basic_block bb;

  FOR_EACH_BB (bb)
    {
      struct bb_info *bb_info = DF_BB_INFO (df, bb);

      if (!bb_info)
	continue;

      if ((flags & DF_RD) && bb_info->rd_in)
	{
	  /* Free bitmaps for reaching definitions.  */
	  BITMAP_XFREE (bb_info->rd_kill);
	  bb_info->rd_kill = NULL;
	  BITMAP_XFREE (bb_info->rd_gen);
	  bb_info->rd_gen = NULL;
	  BITMAP_XFREE (bb_info->rd_in);
	  bb_info->rd_in = NULL;
	  BITMAP_XFREE (bb_info->rd_out);
	  bb_info->rd_out = NULL;
	}

      if ((flags & DF_RU) && bb_info->ru_in)
	{
	  /* Free bitmaps for upward exposed uses.  */
	  BITMAP_XFREE (bb_info->ru_kill);
	  bb_info->ru_kill = NULL;
	  BITMAP_XFREE (bb_info->ru_gen);
	  bb_info->ru_gen = NULL;
	  BITMAP_XFREE (bb_info->ru_in);
	  bb_info->ru_in = NULL;
	  BITMAP_XFREE (bb_info->ru_out);
	  bb_info->ru_out = NULL;
	}

      if ((flags & DF_LR) && bb_info->lr_in)
	{
	  /* Free bitmaps for live variables.  */
	  BITMAP_XFREE (bb_info->lr_def);
	  bb_info->lr_def = NULL;
	  BITMAP_XFREE (bb_info->lr_use);
	  bb_info->lr_use = NULL;
	  BITMAP_XFREE (bb_info->lr_in);
	  bb_info->lr_in = NULL;
	  BITMAP_XFREE (bb_info->lr_out);
	  bb_info->lr_out = NULL;
	}
    }
  df->flags &= ~(flags & (DF_RD | DF_RU | DF_LR));
}


/* Allocate and initialize dataflow memory.  */
static void
df_alloc (struct df *df, int n_regs)
{
  int n_insns;
  basic_block bb;

  df_link_pool = create_alloc_pool ("df_link pool", sizeof (struct df_link),
				    100);
  df_ref_pool = create_alloc_pool ("df_ref pool", sizeof (struct ref), 100);

  /* Perhaps we should use LUIDs to save memory for the insn_refs
     table.  This is only a small saving; a few pointers.  */
  n_insns = get_max_uid () + 1;

  df->def_id = 0;
  df->n_defs = 0;
  /* Approximate number of defs by number of insns.  */
  df->def_size = n_insns;
  df->defs = xmalloc (df->def_size * sizeof (*df->defs));

  df->use_id = 0;
  df->n_uses = 0;
  /* Approximate number of uses by twice number of insns.  */
  df->use_size = n_insns * 2;
  df->uses = xmalloc (df->use_size * sizeof (*df->uses));

  df->n_regs = n_regs;
  df->n_bbs = last_basic_block;

  /* Allocate temporary working array used during local dataflow analysis.  */
  df->reg_def_last = xmalloc (df->n_regs * sizeof (struct ref *));

  df_insn_table_realloc (df, n_insns);

  df_reg_table_realloc (df, df->n_regs);

  df->bbs_modified = BITMAP_XMALLOC ();
  bitmap_zero (df->bbs_modified);

  df->flags = 0;

  df->bbs = xcalloc (last_basic_block, sizeof (struct bb_info));

  df->all_blocks = BITMAP_XMALLOC ();
  FOR_EACH_BB (bb)
    bitmap_set_bit (df->all_blocks, bb->index);
}


/* Free all the dataflow info.  */
static void
df_free (struct df *df)
{
  df_bitmaps_free (df, DF_ALL);

  if (df->bbs)
    free (df->bbs);
  df->bbs = 0;

  if (df->insns)
    free (df->insns);
  df->insns = 0;
  df->insn_size = 0;

  if (df->defs)
    free (df->defs);
  df->defs = 0;
  df->def_size = 0;
  df->def_id = 0;

  if (df->uses)
    free (df->uses);
  df->uses = 0;
  df->use_size = 0;
  df->use_id = 0;

  if (df->regs)
    free (df->regs);
  df->regs = 0;
  df->reg_size = 0;

  if (df->bbs_modified)
    BITMAP_XFREE (df->bbs_modified);
  df->bbs_modified = 0;

  if (df->insns_modified)
    BITMAP_XFREE (df->insns_modified);
  df->insns_modified = 0;

  BITMAP_XFREE (df->all_blocks);
  df->all_blocks = 0;

  free_alloc_pool (df_ref_pool);
  free_alloc_pool (df_link_pool);
}

/* Local miscellaneous routines.  */

/* Return a USE for register REGNO.  */
static rtx df_reg_use_gen (unsigned int regno)
{
  rtx reg;
  rtx use;

  reg = regno_reg_rtx[regno];

  use = gen_rtx_USE (GET_MODE (reg), reg);
  return use;
}


/* Return a CLOBBER for register REGNO.  */
static rtx df_reg_clobber_gen (unsigned int regno)
{
  rtx reg;
  rtx clobber;

  reg = regno_reg_rtx[regno];

  clobber = gen_rtx_CLOBBER (GET_MODE (reg), reg);
  return clobber;
}

/* Local chain manipulation routines.  */

/* Create a link in a def-use or use-def chain.  */
static inline struct df_link *
df_link_create (struct ref *ref, struct df_link *next)
{
  struct df_link *link;

  link = pool_alloc (df_link_pool);
  link->next = next;
  link->ref = ref;
  return link;
}
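
/* Note that links are always pushed onto the head of a chain, as in
   df_refs_process below:

     df->insns[uid].defs = df_link_create (def, df->insns[uid].defs);

   so the most recently created ref appears first in the list.  */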


/* Remove REF from the chain with head pointed to by PHEAD.  */
static struct df_link *
df_ref_unlink (struct df_link **phead, struct ref *ref)
{
  struct df_link *link = *phead;

  if (link)
    {
      if (! link->next)
	{
	  /* Only a single ref.  It must be the one we want.
	     If not, the def-use and use-def chains are likely to
	     be inconsistent.  */
	  if (link->ref != ref)
	    abort ();
	  /* Now have an empty chain.  */
	  *phead = NULL;
	}
      else
	{
	  /* Multiple refs.  One of them must be us.  */
	  if (link->ref == ref)
	    *phead = link->next;
	  else
	    {
	      /* Follow chain.  */
	      for (; link->next; link = link->next)
		{
		  if (link->next->ref == ref)
		    {
		      /* Unlink from list.  */
		      link->next = link->next->next;
		      return link->next;
		    }
		}
	    }
	}
    }
  return link;
}


/* Unlink REF from all def-use/use-def chains, etc.  */
int
df_ref_remove (struct df *df, struct ref *ref)
{
  if (DF_REF_REG_DEF_P (ref))
    {
      df_def_unlink (df, ref);
      df_ref_unlink (&df->insns[DF_REF_INSN_UID (ref)].defs, ref);
    }
  else
    {
      df_use_unlink (df, ref);
      df_ref_unlink (&df->insns[DF_REF_INSN_UID (ref)].uses, ref);
    }
  return 1;
}


/* Unlink DEF from use-def and reg-def chains.  */
static void
df_def_unlink (struct df *df ATTRIBUTE_UNUSED, struct ref *def)
{
  struct df_link *du_link;
  unsigned int dregno = DF_REF_REGNO (def);

  /* Follow def-use chain to find all the uses of this def.  */
  for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
    {
      struct ref *use = du_link->ref;

      /* Unlink this def from the use-def chain.  */
      df_ref_unlink (&DF_REF_CHAIN (use), def);
    }
  DF_REF_CHAIN (def) = 0;

  /* Unlink def from reg-def chain.  */
  df_ref_unlink (&df->regs[dregno].defs, def);

  df->defs[DF_REF_ID (def)] = 0;
}


/* Unlink use from def-use and reg-use chains.  */
static void
df_use_unlink (struct df *df ATTRIBUTE_UNUSED, struct ref *use)
{
  struct df_link *ud_link;
  unsigned int uregno = DF_REF_REGNO (use);

  /* Follow use-def chain to find all the defs of this use.  */
  for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
    {
      struct ref *def = ud_link->ref;

      /* Unlink this use from the def-use chain.  */
      df_ref_unlink (&DF_REF_CHAIN (def), use);
    }
  DF_REF_CHAIN (use) = 0;

  /* Unlink use from reg-use chain.  */
  df_ref_unlink (&df->regs[uregno].uses, use);

  df->uses[DF_REF_ID (use)] = 0;
}

/* Local routines for recording refs.  */


/* Create a new ref of type DF_REF_TYPE for register REG at address
   LOC within INSN of BB.  */
static struct ref *
df_ref_create (struct df *df, rtx reg, rtx *loc, rtx insn,
	       enum df_ref_type ref_type, enum df_ref_flags ref_flags)
{
  struct ref *this_ref;

  this_ref = pool_alloc (df_ref_pool);
  DF_REF_REG (this_ref) = reg;
  DF_REF_LOC (this_ref) = loc;
  DF_REF_INSN (this_ref) = insn;
  DF_REF_CHAIN (this_ref) = 0;
  DF_REF_TYPE (this_ref) = ref_type;
  DF_REF_FLAGS (this_ref) = ref_flags;

  if (ref_type == DF_REF_REG_DEF)
    {
      if (df->def_id >= df->def_size)
	{
	  /* Make table 25 percent larger.  */
	  df->def_size += (df->def_size / 4);
	  df->defs = xrealloc (df->defs,
			       df->def_size * sizeof (*df->defs));
	}
      DF_REF_ID (this_ref) = df->def_id;
      df->defs[df->def_id++] = this_ref;
    }
  else
    {
      if (df->use_id >= df->use_size)
	{
	  /* Make table 25 percent larger.  */
	  df->use_size += (df->use_size / 4);
	  df->uses = xrealloc (df->uses,
			       df->use_size * sizeof (*df->uses));
	}
      DF_REF_ID (this_ref) = df->use_id;
      df->uses[df->use_id++] = this_ref;
    }
  return this_ref;
}


/* Create a new reference of type DF_REF_TYPE for a single register REG,
   used inside the LOC rtx of INSN.  */
static void
df_ref_record_1 (struct df *df, rtx reg, rtx *loc, rtx insn,
		 enum df_ref_type ref_type, enum df_ref_flags ref_flags)
{
  df_ref_create (df, reg, loc, insn, ref_type, ref_flags);
}


/* Create new references of type DF_REF_TYPE for each part of register REG
   at address LOC within INSN of BB.  */
static void
df_ref_record (struct df *df, rtx reg, rtx *loc, rtx insn,
	       enum df_ref_type ref_type, enum df_ref_flags ref_flags)
{
  unsigned int regno;

  if (GET_CODE (reg) != REG && GET_CODE (reg) != SUBREG)
    abort ();

  /* For the reg allocator we are interested in some SUBREG rtx's, but not
     all.  Notably only those representing a word extraction from a multi-word
     reg.  As written in the documentation those should have the form
     (subreg:SI (reg:M A) N), with size(SImode) > size(Mmode).
     XXX Is that true?  We could also use the global word_mode variable.  */
  if (GET_CODE (reg) == SUBREG
      && (GET_MODE_SIZE (GET_MODE (reg)) < GET_MODE_SIZE (word_mode)
	  || GET_MODE_SIZE (GET_MODE (reg))
	     >= GET_MODE_SIZE (GET_MODE (SUBREG_REG (reg)))))
    {
      loc = &SUBREG_REG (reg);
      reg = *loc;
      ref_flags |= DF_REF_STRIPPED;
    }

  regno = REGNO (GET_CODE (reg) == SUBREG ? SUBREG_REG (reg) : reg);
  if (regno < FIRST_PSEUDO_REGISTER)
    {
      int i;
      int endregno;

      if (! (df->flags & DF_HARD_REGS))
	return;

      /* GET_MODE (reg) is correct here.  We do not want to go into a SUBREG
	 for the mode, because we only want to add references to regs, which
	 are really referenced.  E.g., a (subreg:SI (reg:DI 0) 0) does _not_
	 reference the whole reg 0 in DI mode (which would also include
	 reg 1, at least, if 0 and 1 are SImode registers).  */
      endregno = HARD_REGNO_NREGS (regno, GET_MODE (reg));
      if (GET_CODE (reg) == SUBREG)
	regno += subreg_regno_offset (regno, GET_MODE (SUBREG_REG (reg)),
				      SUBREG_BYTE (reg), GET_MODE (reg));
      endregno += regno;

      for (i = regno; i < endregno; i++)
	df_ref_record_1 (df, regno_reg_rtx[i],
			 loc, insn, ref_type, ref_flags);
    }
  else
    {
      df_ref_record_1 (df, reg, loc, insn, ref_type, ref_flags);
    }
}


/* Return nonzero if writes to paradoxical SUBREGs, or SUBREGs which
   are too narrow, are read-modify-write.  */
bool
read_modify_subreg_p (rtx x)
{
  unsigned int isize, osize;
  if (GET_CODE (x) != SUBREG)
    return false;
  isize = GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)));
  osize = GET_MODE_SIZE (GET_MODE (x));
  /* Paradoxical subreg writes don't leave a trace of the old content.  */
  return (isize > osize && isize > UNITS_PER_WORD);
}
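
/* For example, on a 32-bit target a store through (subreg:SI (reg:DI 100) 0)
   leaves the other half of reg 100 intact, so it is read-modify-write;
   a paradoxical write such as (subreg:DI (reg:SI 101) 0) clobbers the
   whole of reg 101 and is not.  (The register numbers here are purely
   illustrative.)  */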


/* Process all the registers defined in the rtx, X.  */
static void
df_def_record_1 (struct df *df, rtx x, basic_block bb, rtx insn)
{
  rtx *loc;
  rtx dst;
  enum df_ref_flags flags = 0;

  /* We may recursively call ourselves on EXPR_LIST when dealing with PARALLEL
     construct.  */
  if (GET_CODE (x) == EXPR_LIST || GET_CODE (x) == CLOBBER)
    loc = &XEXP (x, 0);
  else
    loc = &SET_DEST (x);
  dst = *loc;

  /* Some targets place small structures in registers for
     return values of functions.  */
  if (GET_CODE (dst) == PARALLEL && GET_MODE (dst) == BLKmode)
    {
      int i;

      for (i = XVECLEN (dst, 0) - 1; i >= 0; i--)
	{
	  rtx temp = XVECEXP (dst, 0, i);
	  if (GET_CODE (temp) == EXPR_LIST || GET_CODE (temp) == CLOBBER
	      || GET_CODE (temp) == SET)
	    df_def_record_1 (df, temp, bb, insn);
	}
      return;
    }

  /* Maybe, we should flag the use of STRICT_LOW_PART somehow.  It might
     be handy for the reg allocator.  */
  while (GET_CODE (dst) == STRICT_LOW_PART
	 || GET_CODE (dst) == ZERO_EXTRACT
	 || GET_CODE (dst) == SIGN_EXTRACT
	 || ((df->flags & DF_FOR_REGALLOC) == 0
	     && read_modify_subreg_p (dst)))
    {
      /* Strict low part always contains SUBREG, but we do not want to make
	 it appear outside, as whole register is always considered.  */
      if (GET_CODE (dst) == STRICT_LOW_PART)
	{
	  loc = &XEXP (dst, 0);
	  dst = *loc;
	}
      loc = &XEXP (dst, 0);
      dst = *loc;
      flags |= DF_REF_READ_WRITE;
    }

  if (GET_CODE (dst) == REG
      || (GET_CODE (dst) == SUBREG && GET_CODE (SUBREG_REG (dst)) == REG))
    df_ref_record (df, dst, loc, insn, DF_REF_REG_DEF, flags);
}


/* Process all the registers defined in the pattern rtx, X.  */
static void
df_defs_record (struct df *df, rtx x, basic_block bb, rtx insn)
{
  RTX_CODE code = GET_CODE (x);

  if (code == SET || code == CLOBBER)
    {
      /* Mark the single def within the pattern.  */
      df_def_record_1 (df, x, bb, insn);
    }
  else if (code == PARALLEL)
    {
      int i;

      /* Mark the multiple defs within the pattern.  */
      for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
	{
	  code = GET_CODE (XVECEXP (x, 0, i));
	  if (code == SET || code == CLOBBER)
	    df_def_record_1 (df, XVECEXP (x, 0, i), bb, insn);
	}
    }
}


/* Process all the registers used in the rtx at address LOC.  */
static void
df_uses_record (struct df *df, rtx *loc, enum df_ref_type ref_type,
		basic_block bb, rtx insn, enum df_ref_flags flags)
{
  RTX_CODE code;
  rtx x;
 retry:
  x = *loc;
  if (!x)
    return;
  code = GET_CODE (x);
  switch (code)
    {
    case LABEL_REF:
    case SYMBOL_REF:
    case CONST_INT:
    case CONST:
    case CONST_DOUBLE:
    case CONST_VECTOR:
    case PC:
    case CC0:
    case ADDR_VEC:
    case ADDR_DIFF_VEC:
      return;

    case CLOBBER:
      /* If we are clobbering a MEM, mark any registers inside the address
	 as being used.  */
      if (GET_CODE (XEXP (x, 0)) == MEM)
	df_uses_record (df, &XEXP (XEXP (x, 0), 0),
			DF_REF_REG_MEM_STORE, bb, insn, flags);

      /* If we're clobbering a REG then we have a def so ignore.  */
      return;

    case MEM:
      df_uses_record (df, &XEXP (x, 0), DF_REF_REG_MEM_LOAD, bb, insn, 0);
      return;

    case SUBREG:
      /* While we're here, optimize this case.  */

      /* In case the SUBREG is not of a REG, do not optimize.  */
      if (GET_CODE (SUBREG_REG (x)) != REG)
	{
	  loc = &SUBREG_REG (x);
	  df_uses_record (df, loc, ref_type, bb, insn, flags);
	  return;
	}
      /* ... Fall through ...  */

    case REG:
      df_ref_record (df, x, loc, insn, ref_type, flags);
      return;

    case SET:
      {
	rtx dst = SET_DEST (x);

	df_uses_record (df, &SET_SRC (x), DF_REF_REG_USE, bb, insn, 0);

	switch (GET_CODE (dst))
	  {
	  case SUBREG:
	    if ((df->flags & DF_FOR_REGALLOC) == 0
		&& read_modify_subreg_p (dst))
	      {
		df_uses_record (df, &SUBREG_REG (dst), DF_REF_REG_USE, bb,
				insn, DF_REF_READ_WRITE);
		break;
	      }
	    /* Fall through.  */
	  case REG:
	  case PARALLEL:
	  case PC:
	  case CC0:
	    break;
	  case MEM:
	    df_uses_record (df, &XEXP (dst, 0),
			    DF_REF_REG_MEM_STORE,
			    bb, insn, 0);
	    break;
	  case STRICT_LOW_PART:
	    /* A strict_low_part uses the whole REG and not just the SUBREG.  */
	    dst = XEXP (dst, 0);
	    if (GET_CODE (dst) != SUBREG)
	      abort ();
	    df_uses_record (df, &SUBREG_REG (dst), DF_REF_REG_USE, bb,
			    insn, DF_REF_READ_WRITE);
	    break;
	  case ZERO_EXTRACT:
	  case SIGN_EXTRACT:
	    df_uses_record (df, &XEXP (dst, 0), DF_REF_REG_USE, bb, insn,
			    DF_REF_READ_WRITE);
	    df_uses_record (df, &XEXP (dst, 1), DF_REF_REG_USE, bb, insn, 0);
	    df_uses_record (df, &XEXP (dst, 2), DF_REF_REG_USE, bb, insn, 0);
	    dst = XEXP (dst, 0);
	    break;
	  default:
	    abort ();
	  }
	return;
      }

    case RETURN:
      break;

    case ASM_OPERANDS:
    case UNSPEC_VOLATILE:
    case TRAP_IF:
    case ASM_INPUT:
      {
	/* Traditional and volatile asm instructions must be considered to use
	   and clobber all hard registers, all pseudo-registers and all of
	   memory.  So must TRAP_IF and UNSPEC_VOLATILE operations.

	   Consider for instance a volatile asm that changes the fpu rounding
	   mode.  An insn should not be moved across this even if it only uses
	   pseudo-regs because it might give an incorrectly rounded result.

	   For now, just mark any regs we can find in ASM_OPERANDS as
	   used.  */

	/* For all ASM_OPERANDS, we must traverse the vector of input operands.
	   We cannot just fall through here since then we would be confused
	   by the ASM_INPUT rtx inside ASM_OPERANDS, which do not indicate
	   traditional asms unlike their normal usage.  */
	if (code == ASM_OPERANDS)
	  {
	    int j;

	    for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
	      df_uses_record (df, &ASM_OPERANDS_INPUT (x, j),
			      DF_REF_REG_USE, bb, insn, 0);
	    return;
	  }
	break;
      }

    case PRE_DEC:
    case POST_DEC:
    case PRE_INC:
    case POST_INC:
    case PRE_MODIFY:
    case POST_MODIFY:
      /* Catch the def of the register being modified.  */
      df_ref_record (df, XEXP (x, 0), &XEXP (x, 0), insn, DF_REF_REG_DEF,
		     DF_REF_READ_WRITE);

      /* ... Fall through to handle uses ...  */

    default:
      break;
    }

  /* Recursively scan the operands of this expression.  */
  {
    const char *fmt = GET_RTX_FORMAT (code);
    int i;

    for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
      {
	if (fmt[i] == 'e')
	  {
	    /* Tail recursive case: save a function call level.  */
	    if (i == 0)
	      {
		loc = &XEXP (x, 0);
		goto retry;
	      }
	    df_uses_record (df, &XEXP (x, i), ref_type, bb, insn, flags);
	  }
	else if (fmt[i] == 'E')
	  {
	    int j;
	    for (j = 0; j < XVECLEN (x, i); j++)
	      df_uses_record (df, &XVECEXP (x, i, j), ref_type,
			      bb, insn, flags);
	  }
      }
  }
}


/* Record all the refs within INSN of basic block BB.  */
static void
df_insn_refs_record (struct df *df, basic_block bb, rtx insn)
{
  int i;

  if (INSN_P (insn))
    {
      rtx note;

      /* Record register defs.  */
      df_defs_record (df, PATTERN (insn), bb, insn);

      if (df->flags & DF_EQUIV_NOTES)
	for (note = REG_NOTES (insn); note;
	     note = XEXP (note, 1))
	  {
	    switch (REG_NOTE_KIND (note))
	      {
	      case REG_EQUIV:
	      case REG_EQUAL:
		df_uses_record (df, &XEXP (note, 0), DF_REF_REG_USE,
				bb, insn, 0);
	      default:
		break;
	      }
	  }

      if (GET_CODE (insn) == CALL_INSN)
	{
	  rtx note;
	  rtx x;

	  /* Record the registers used to pass arguments.  */
	  for (note = CALL_INSN_FUNCTION_USAGE (insn); note;
	       note = XEXP (note, 1))
	    {
	      if (GET_CODE (XEXP (note, 0)) == USE)
		df_uses_record (df, &XEXP (XEXP (note, 0), 0), DF_REF_REG_USE,
				bb, insn, 0);
	    }

	  /* The stack ptr is used (honorarily) by a CALL insn.  */
	  x = df_reg_use_gen (STACK_POINTER_REGNUM);
	  df_uses_record (df, &XEXP (x, 0), DF_REF_REG_USE, bb, insn, 0);

	  if (df->flags & DF_HARD_REGS)
	    {
	      /* Calls may also reference any of the global registers,
		 so they are recorded as used.  */
	      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
		if (global_regs[i])
		  {
		    x = df_reg_use_gen (i);
		    df_uses_record (df, &SET_DEST (x),
				    DF_REF_REG_USE, bb, insn, 0);
		  }
	    }
	}

      /* Record the register uses.  */
      df_uses_record (df, &PATTERN (insn),
		      DF_REF_REG_USE, bb, insn, 0);

      if (GET_CODE (insn) == CALL_INSN)
	{
	  rtx note;

	  if (df->flags & DF_HARD_REGS)
	    {
	      /* Kill all registers invalidated by a call.  */
	      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
		if (TEST_HARD_REG_BIT (regs_invalidated_by_call, i))
		  {
		    rtx reg_clob = df_reg_clobber_gen (i);
		    df_defs_record (df, reg_clob, bb, insn);
		  }
	    }

	  /* There may be extra registers to be clobbered.  */
	  for (note = CALL_INSN_FUNCTION_USAGE (insn);
	       note;
	       note = XEXP (note, 1))
	    if (GET_CODE (XEXP (note, 0)) == CLOBBER)
	      df_defs_record (df, XEXP (note, 0), bb, insn);
	}
    }
}


/* Record all the refs within the basic block BB.  */
static void
df_bb_refs_record (struct df *df, basic_block bb)
{
  rtx insn;

  /* Scan the block an insn at a time from beginning to end.  */
  for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn))
    {
      if (INSN_P (insn))
	{
	  /* Record defs within INSN.  */
	  df_insn_refs_record (df, bb, insn);
	}

      if (insn == BB_END (bb))
	break;
    }
}


/* Record all the refs in the basic blocks specified by BLOCKS.  */
static void
df_refs_record (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_refs_record (df, bb);
    });
}

/* Dataflow analysis routines.  */


/* Create reg-def chains for basic block BB.  These are a list of
   definitions for each register.  */
static void
df_bb_reg_def_chain_create (struct df *df, basic_block bb)
{
  rtx insn;

  /* Perhaps the defs should be sorted using a depth first search
     of the CFG (or possibly a breadth first search).  We currently
     scan each block's insns in reverse order so that the first defs
     appear at the start of the chain.  */

  for (insn = BB_END (bb); insn && insn != PREV_INSN (BB_HEAD (bb));
       insn = PREV_INSN (insn))
    {
      struct df_link *link;
      unsigned int uid = INSN_UID (insn);

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].defs; link; link = link->next)
	{
	  struct ref *def = link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  /* Do not add ref's to the chain twice, i.e., only add new
	     refs.  XXX the same could be done by testing if the
	     current insn is a modified (or a new) one.  This would be
	     faster.  */
	  if (DF_REF_ID (def) < df->def_id_save)
	    continue;

	  df->regs[dregno].defs
	    = df_link_create (def, df->regs[dregno].defs);
	}
    }
}


/* Create reg-def chains for each basic block within BLOCKS.  These
   are a list of definitions for each register.  */
static void
df_reg_def_chain_create (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP/*_REV*/ (blocks, 0, bb,
    {
      df_bb_reg_def_chain_create (df, bb);
    });
}


/* Create reg-use chains for basic block BB.  These are a list of uses
   for each register.  */
static void
df_bb_reg_use_chain_create (struct df *df, basic_block bb)
{
  rtx insn;

  /* Scan in forward order so that the last uses appear at the start
     of the chain.  */

  for (insn = BB_HEAD (bb); insn && insn != NEXT_INSN (BB_END (bb));
       insn = NEXT_INSN (insn))
    {
      struct df_link *link;
      unsigned int uid = INSN_UID (insn);

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].uses; link; link = link->next)
	{
	  struct ref *use = link->ref;
	  unsigned int uregno = DF_REF_REGNO (use);

	  /* Do not add ref's to the chain twice, i.e., only add new
	     refs.  XXX the same could be done by testing if the
	     current insn is a modified (or a new) one.  This would be
	     faster.  */
	  if (DF_REF_ID (use) < df->use_id_save)
	    continue;

	  df->regs[uregno].uses
	    = df_link_create (use, df->regs[uregno].uses);
	}
    }
}


/* Create reg-use chains for each basic block within BLOCKS.  These
   are a list of uses for each register.  */
static void
df_reg_use_chain_create (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_reg_use_chain_create (df, bb);
    });
}


/* Create def-use chains from reaching use bitmaps for basic block BB.  */
static void
df_bb_du_chain_create (struct df *df, basic_block bb, bitmap ru)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  bitmap_copy (ru, bb_info->ru_out);

  /* For each def in BB create a linked list (chain) of uses
     reached from the def.  */
  for (insn = BB_END (bb); insn && insn != PREV_INSN (BB_HEAD (bb));
       insn = PREV_INSN (insn))
    {
      struct df_link *def_link;
      struct df_link *use_link;
      unsigned int uid = INSN_UID (insn);

      if (! INSN_P (insn))
	continue;

      /* For each def in insn...  */
      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  DF_REF_CHAIN (def) = 0;

	  /* While the reg-use chains are not essential, it
	     is _much_ faster to search these short lists rather
	     than all the reaching uses, especially for large functions.  */
	  for (use_link = df->regs[dregno].uses; use_link;
	       use_link = use_link->next)
	    {
	      struct ref *use = use_link->ref;

	      if (bitmap_bit_p (ru, DF_REF_ID (use)))
		{
		  DF_REF_CHAIN (def)
		    = df_link_create (use, DF_REF_CHAIN (def));

		  bitmap_clear_bit (ru, DF_REF_ID (use));
		}
	    }
	}

      /* For each use in insn...  */
      for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
	{
	  struct ref *use = use_link->ref;
	  bitmap_set_bit (ru, DF_REF_ID (use));
	}
    }
}


/* Create def-use chains from reaching use bitmaps for basic blocks
   in BLOCKS.  */
static void
df_du_chain_create (struct df *df, bitmap blocks)
{
  bitmap ru;
  basic_block bb;

  ru = BITMAP_XMALLOC ();

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_du_chain_create (df, bb, ru);
    });

  BITMAP_XFREE (ru);
}


/* Create use-def chains from reaching def bitmaps for basic block BB.  */
static void
df_bb_ud_chain_create (struct df *df, basic_block bb)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  struct ref **reg_def_last = df->reg_def_last;
  rtx insn;

  memset (reg_def_last, 0, df->n_regs * sizeof (struct ref *));

  /* For each use in BB create a linked list (chain) of defs
     that reach the use.  */
  for (insn = BB_HEAD (bb); insn && insn != NEXT_INSN (BB_END (bb));
       insn = NEXT_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *use_link;
      struct df_link *def_link;

      if (! INSN_P (insn))
	continue;

      /* For each use in insn...  */
      for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
	{
	  struct ref *use = use_link->ref;
	  unsigned int regno = DF_REF_REGNO (use);

	  DF_REF_CHAIN (use) = 0;

	  /* Has regno been defined in this BB yet?  If so, use
	     the last def as the single entry for the use-def
	     chain for this use.  Otherwise, we need to add all
	     the defs using this regno that reach the start of
	     this BB.  */
	  if (reg_def_last[regno])
	    {
	      DF_REF_CHAIN (use)
		= df_link_create (reg_def_last[regno], 0);
	    }
	  else
	    {
	      /* While the reg-def chains are not essential, it is
		 _much_ faster to search these short lists rather than
		 all the reaching defs, especially for large
		 functions.  */
	      for (def_link = df->regs[regno].defs; def_link;
		   def_link = def_link->next)
		{
		  struct ref *def = def_link->ref;

		  if (bitmap_bit_p (bb_info->rd_in, DF_REF_ID (def)))
		    {
		      DF_REF_CHAIN (use)
			= df_link_create (def, DF_REF_CHAIN (use));
		    }
		}
	    }
	}

      /* For each def in insn... record the last def of each reg.  */
      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  int dregno = DF_REF_REGNO (def);

	  reg_def_last[dregno] = def;
	}
    }
}


/* Create use-def chains from reaching def bitmaps for basic blocks
   within BLOCKS.  */
static void
df_ud_chain_create (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_ud_chain_create (df, bb);
    });
}


static void
df_rd_transfer_function (int bb ATTRIBUTE_UNUSED, int *changed, bitmap in,
			 bitmap out, bitmap gen, bitmap kill,
			 void *data ATTRIBUTE_UNUSED)
{
  *changed = bitmap_union_of_diff (out, gen, in, kill);
}


static void
df_ru_transfer_function (int bb ATTRIBUTE_UNUSED, int *changed, bitmap in,
			 bitmap out, bitmap gen, bitmap kill,
			 void *data ATTRIBUTE_UNUSED)
{
  *changed = bitmap_union_of_diff (in, gen, out, kill);
}


static void
df_lr_transfer_function (int bb ATTRIBUTE_UNUSED, int *changed, bitmap in,
			 bitmap out, bitmap use, bitmap def,
			 void *data ATTRIBUTE_UNUSED)
{
  *changed = bitmap_union_of_diff (in, use, out, def);
}
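
/* All three transfer functions above are instances of the same set
   equation, dst = gen | (src & ~kill), which bitmap_union_of_diff
   computes while reporting whether dst changed.  Reaching defs flow
   forwards (out = gen | (in & ~kill)); reaching uses and liveness
   flow backwards (in = gen | (out & ~kill) and in = use | (out & ~def)
   respectively).  */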


/* Compute local reaching def info for basic block BB.  */
static void
df_bb_rd_local_compute (struct df *df, basic_block bb)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  for (insn = BB_HEAD (bb); insn && insn != NEXT_INSN (BB_END (bb));
       insn = NEXT_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *def_link;

      if (! INSN_P (insn))
	continue;

      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  unsigned int regno = DF_REF_REGNO (def);
	  struct df_link *def2_link;

	  for (def2_link = df->regs[regno].defs; def2_link;
	       def2_link = def2_link->next)
	    {
	      struct ref *def2 = def2_link->ref;

	      /* Add all defs of this reg to the set of kills.  This
		 is greedy since many of these defs will not actually
		 be killed by this BB but it keeps things a lot
		 simpler.  */
	      bitmap_set_bit (bb_info->rd_kill, DF_REF_ID (def2));

	      /* Zap from the set of gens for this BB.  */
	      bitmap_clear_bit (bb_info->rd_gen, DF_REF_ID (def2));
	    }

	  bitmap_set_bit (bb_info->rd_gen, DF_REF_ID (def));
	}
    }

  bb_info->rd_valid = 1;
}


/* Compute local reaching def info for each basic block within BLOCKS.  */
static void
df_rd_local_compute (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_rd_local_compute (df, bb);
    });
}


/* Compute local reaching use (upward exposed use) info for basic
   block BB.  */
static void
df_bb_ru_local_compute (struct df *df, basic_block bb)
{
  /* This is much more tricky than computing reaching defs.  With
     reaching defs, defs get killed by other defs.  With upwards
     exposed uses, these get killed by defs with the same regno.  */

  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  for (insn = BB_END (bb); insn && insn != PREV_INSN (BB_HEAD (bb));
       insn = PREV_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *def_link;
      struct df_link *use_link;

      if (! INSN_P (insn))
	continue;

      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  for (use_link = df->regs[dregno].uses; use_link;
	       use_link = use_link->next)
	    {
	      struct ref *use = use_link->ref;

	      /* Add all uses of this reg to the set of kills.  This
		 is greedy since many of these uses will not actually
		 be killed by this BB but it keeps things a lot
		 simpler.  */
	      bitmap_set_bit (bb_info->ru_kill, DF_REF_ID (use));

	      /* Zap from the set of gens for this BB.  */
	      bitmap_clear_bit (bb_info->ru_gen, DF_REF_ID (use));
	    }
	}

      for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
	{
	  struct ref *use = use_link->ref;
	  /* Add use to set of gens in this BB.  */
	  bitmap_set_bit (bb_info->ru_gen, DF_REF_ID (use));
	}
    }
  bb_info->ru_valid = 1;
}


/* Compute local reaching use (upward exposed use) info for each basic
   block within BLOCKS.  */
static void
df_ru_local_compute (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_ru_local_compute (df, bb);
    });
}


/* Compute local live variable info for basic block BB.  */
static void
df_bb_lr_local_compute (struct df *df, basic_block bb)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  for (insn = BB_END (bb); insn && insn != PREV_INSN (BB_HEAD (bb));
       insn = PREV_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *link;

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].defs; link; link = link->next)
	{
	  struct ref *def = link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  /* Add def to set of defs in this BB.  */
	  bitmap_set_bit (bb_info->lr_def, dregno);

	  bitmap_clear_bit (bb_info->lr_use, dregno);
	}

      for (link = df->insns[uid].uses; link; link = link->next)
	{
	  struct ref *use = link->ref;
	  /* Add use to set of uses in this BB.  */
	  bitmap_set_bit (bb_info->lr_use, DF_REF_REGNO (use));
	}
    }
  bb_info->lr_valid = 1;
}


/* Compute local live variable info for each basic block within BLOCKS.  */
static void
df_lr_local_compute (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_lr_local_compute (df, bb);
    });
}


/* Compute register info: lifetime, bb, and number of defs and uses
   for basic block BB.  */
static void
df_bb_reg_info_compute (struct df *df, basic_block bb, bitmap live)
{
  struct reg_info *reg_info = df->regs;
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  bitmap_copy (live, bb_info->lr_out);

  for (insn = BB_END (bb); insn && insn != PREV_INSN (BB_HEAD (bb));
       insn = PREV_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      unsigned int regno;
      struct df_link *link;

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].defs; link; link = link->next)
	{
	  struct ref *def = link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  /* Kill this register.  */
	  bitmap_clear_bit (live, dregno);
	  reg_info[dregno].n_defs++;
	}

      for (link = df->insns[uid].uses; link; link = link->next)
	{
	  struct ref *use = link->ref;
	  unsigned int uregno = DF_REF_REGNO (use);

	  /* This register is now live.  */
	  bitmap_set_bit (live, uregno);
	  reg_info[uregno].n_uses++;
	}

      /* Increment lifetimes of all live registers.  */
      EXECUTE_IF_SET_IN_BITMAP (live, 0, regno,
	{
	  reg_info[regno].lifetime++;
	});
    }
}


/* Compute register info: lifetime, bb, and number of defs and uses.  */
static void
df_reg_info_compute (struct df *df, bitmap blocks)
{
  basic_block bb;
  bitmap live;

  live = BITMAP_XMALLOC ();

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_reg_info_compute (df, bb, live);
    });

  BITMAP_XFREE (live);
}


/* Assign LUIDs for BB.  */
static int
df_bb_luids_set (struct df *df, basic_block bb)
{
  rtx insn;
  int luid = 0;

  /* The LUIDs are monotonically increasing for each basic block.
     Non-insns (notes, barriers) share the LUID of the next real insn.  */

  for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn))
    {
      if (INSN_P (insn))
	luid++;
      DF_INSN_LUID (df, insn) = luid;

      if (insn == BB_END (bb))
	break;
    }
  return luid;
}


/* Assign LUIDs for each basic block within BLOCKS.  */
static int
df_luids_set (struct df *df, bitmap blocks)
{
  basic_block bb;
  int total = 0;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      total += df_bb_luids_set (df, bb);
    });
  return total;
}


/* Perform dataflow analysis using existing DF structure for blocks
   within BLOCKS.  If BLOCKS is zero, use all basic blocks in the CFG.  */
static void
df_analyse_1 (struct df *df, bitmap blocks, int flags, int update)
{
  int aflags;
  int dflags;
  int i;
  basic_block bb;

  dflags = 0;
  aflags = flags;
  if (flags & DF_UD_CHAIN)
    aflags |= DF_RD | DF_RD_CHAIN;

  if (flags & DF_DU_CHAIN)
    aflags |= DF_RU;

  if (flags & DF_RU)
    aflags |= DF_RU_CHAIN;

  if (flags & DF_REG_INFO)
    aflags |= DF_LR;

  if (! blocks)
    blocks = df->all_blocks;

  df->flags = flags;
  if (update)
    {
      df_refs_update (df);
      /* More fine grained incremental dataflow analysis would be
	 nice.  For now recompute the whole shebang for the
	 modified blocks.  */
#if 0
      df_refs_unlink (df, blocks);
#endif
      /* All the def-use, use-def chains can be potentially
	 modified by changes in one block.  The size of the
	 bitmaps can also change.  */
    }
  else
    {
      /* Scan the function for all register defs and uses.  */
      df_refs_queue (df);
      df_refs_record (df, blocks);

      /* Link all the new defs and uses to the insns.  */
      df_refs_process (df);
    }

  /* Allocate the bitmaps now that the total numbers of defs and uses
     are known.  If the number of defs or uses has changed, then
     these bitmaps need to be reallocated.  */
  df_bitmaps_alloc (df, aflags);

  /* Set the LUIDs for each specified basic block.  */
  df_luids_set (df, blocks);

  /* Recreate reg-def and reg-use chains from scratch so that first
     def is at the head of the reg-def chain and the last use is at
     the head of the reg-use chain.  This is only important for
     regs local to a basic block as it speeds up searching.  */
  if (aflags & DF_RD_CHAIN)
    {
      df_reg_def_chain_create (df, blocks);
    }

  if (aflags & DF_RU_CHAIN)
    {
      df_reg_use_chain_create (df, blocks);
    }

  df->dfs_order = xmalloc (sizeof (int) * n_basic_blocks);
  df->rc_order = xmalloc (sizeof (int) * n_basic_blocks);
  df->rts_order = xmalloc (sizeof (int) * n_basic_blocks);
  df->inverse_dfs_map = xmalloc (sizeof (int) * last_basic_block);
  df->inverse_rc_map = xmalloc (sizeof (int) * last_basic_block);
  df->inverse_rts_map = xmalloc (sizeof (int) * last_basic_block);

  flow_depth_first_order_compute (df->dfs_order, df->rc_order);
  flow_reverse_top_sort_order_compute (df->rts_order);
  for (i = 0; i < n_basic_blocks; i++)
    {
      df->inverse_dfs_map[df->dfs_order[i]] = i;
      df->inverse_rc_map[df->rc_order[i]] = i;
      df->inverse_rts_map[df->rts_order[i]] = i;
    }
  if (aflags & DF_RD)
    {
      /* Compute the sets of gens and kills for the defs of each bb.  */
      df_rd_local_compute (df, df->flags & DF_RD ? blocks : df->all_blocks);
      {
	bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *gen = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *kill = xmalloc (sizeof (bitmap) * last_basic_block);
	FOR_EACH_BB (bb)
	  {
	    in[bb->index] = DF_BB_INFO (df, bb)->rd_in;
	    out[bb->index] = DF_BB_INFO (df, bb)->rd_out;
	    gen[bb->index] = DF_BB_INFO (df, bb)->rd_gen;
	    kill[bb->index] = DF_BB_INFO (df, bb)->rd_kill;
	  }
	iterative_dataflow_bitmap (in, out, gen, kill, df->all_blocks,
				   DF_FORWARD, DF_UNION, df_rd_transfer_function,
				   df->inverse_rc_map, NULL);
	free (in);
	free (out);
	free (gen);
	free (kill);
      }
    }

  if (aflags & DF_UD_CHAIN)
    {
      /* Create use-def chains.  */
      df_ud_chain_create (df, df->all_blocks);

      if (! (flags & DF_RD))
	dflags |= DF_RD;
    }

  if (aflags & DF_RU)
    {
      /* Compute the sets of gens and kills for the upwards exposed
	 uses in each bb.  */
      df_ru_local_compute (df, df->flags & DF_RU ? blocks : df->all_blocks);
      {
	bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *gen = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *kill = xmalloc (sizeof (bitmap) * last_basic_block);
	FOR_EACH_BB (bb)
	  {
	    in[bb->index] = DF_BB_INFO (df, bb)->ru_in;
	    out[bb->index] = DF_BB_INFO (df, bb)->ru_out;
	    gen[bb->index] = DF_BB_INFO (df, bb)->ru_gen;
	    kill[bb->index] = DF_BB_INFO (df, bb)->ru_kill;
	  }
	iterative_dataflow_bitmap (in, out, gen, kill, df->all_blocks,
				   DF_BACKWARD, DF_UNION, df_ru_transfer_function,
				   df->inverse_rts_map, NULL);
	free (in);
	free (out);
	free (gen);
	free (kill);
      }
    }

  if (aflags & DF_DU_CHAIN)
    {
      /* Create def-use chains.  */
      df_du_chain_create (df, df->all_blocks);

      if (! (flags & DF_RU))
	dflags |= DF_RU;
    }

  /* Free up bitmaps that are no longer required.  */
  if (dflags)
    df_bitmaps_free (df, dflags);

  if (aflags & DF_LR)
    {
      /* Compute the sets of defs and uses of live variables.  */
      df_lr_local_compute (df, df->flags & DF_LR ? blocks : df->all_blocks);
      {
	bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *use = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *def = xmalloc (sizeof (bitmap) * last_basic_block);
	FOR_EACH_BB (bb)
	  {
	    in[bb->index] = DF_BB_INFO (df, bb)->lr_in;
	    out[bb->index] = DF_BB_INFO (df, bb)->lr_out;
	    use[bb->index] = DF_BB_INFO (df, bb)->lr_use;
	    def[bb->index] = DF_BB_INFO (df, bb)->lr_def;
	  }
	iterative_dataflow_bitmap (in, out, use, def, df->all_blocks,
				   DF_BACKWARD, DF_UNION, df_lr_transfer_function,
				   df->inverse_rts_map, NULL);
	free (in);
	free (out);
	free (use);
	free (def);
      }
    }

  if (aflags & DF_REG_INFO)
    {
      df_reg_info_compute (df, df->all_blocks);
    }

  free (df->dfs_order);
  free (df->rc_order);
  free (df->rts_order);
  free (df->inverse_rc_map);
  free (df->inverse_dfs_map);
  free (df->inverse_rts_map);
}


/* Initialize dataflow analysis.  */
struct df *
df_init (void)
{
  struct df *df;

  df = xcalloc (1, sizeof (struct df));

  /* Squirrel away a global for debugging.  */
  ddf = df;

  return df;
}


/* Start queuing refs.  */
static int
df_refs_queue (struct df *df)
{
  df->def_id_save = df->def_id;
  df->use_id_save = df->use_id;
  /* ???? Perhaps we should save current obstack state so that we can
     unwind it.  */
  return 0;
}


/* Process queued refs.  */
static int
df_refs_process (struct df *df)
{
  unsigned int i;

  /* Build new insn-def chains.  */
  for (i = df->def_id_save; i != df->def_id; i++)
    {
      struct ref *def = df->defs[i];
      unsigned int uid = DF_REF_INSN_UID (def);

      /* Add def to head of def list for INSN.  */
      df->insns[uid].defs
	= df_link_create (def, df->insns[uid].defs);
    }

  /* Build new insn-use chains.  */
  for (i = df->use_id_save; i != df->use_id; i++)
    {
      struct ref *use = df->uses[i];
      unsigned int uid = DF_REF_INSN_UID (use);

      /* Add use to head of use list for INSN.  */
      df->insns[uid].uses
	= df_link_create (use, df->insns[uid].uses);
    }
  return 0;
}
/* Update refs for basic block BB.  */
static int
df_bb_refs_update (struct df *df, basic_block bb)
{
  rtx insn;
  int count = 0;

  /* While we have to scan the chain of insns for this BB, we do not
     need to allocate and queue a long chain of BB/INSN pairs.  Using
     a bitmap for insns_modified saves memory and avoids queuing
     duplicates.  */
  for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn))
    {
      unsigned int uid;

      uid = INSN_UID (insn);

      if (bitmap_bit_p (df->insns_modified, uid))
        {
          /* Delete any allocated refs of this insn.  MPH, FIXME.  */
          df_insn_refs_unlink (df, bb, insn);

          /* Scan the insn for refs.  */
          df_insn_refs_record (df, bb, insn);

          count++;
        }
      if (insn == BB_END (bb))
        break;
    }
  return count;
}

/* Process all the modified/deleted insns that were queued.  */
static int
df_refs_update (struct df *df)
{
  basic_block bb;
  int count = 0;

  if ((unsigned int) max_reg_num () >= df->reg_size)
    df_reg_table_realloc (df, 0);

  df_refs_queue (df);

  FOR_EACH_BB_IN_BITMAP (df->bbs_modified, 0, bb,
  {
    count += df_bb_refs_update (df, bb);
  });

  df_refs_process (df);
  return count;
}

/* Return nonzero if any of the requested blocks in the bitmap
   BLOCKS have been modified.  */
static int
df_modified_p (struct df *df, bitmap blocks)
{
  int update = 0;
  basic_block bb;

  if (!df->n_bbs)
    return 0;

  FOR_EACH_BB (bb)
    if (bitmap_bit_p (df->bbs_modified, bb->index)
        && (! blocks
            || (blocks == (bitmap) -1)
            || bitmap_bit_p (blocks, bb->index)))
      {
        update = 1;
        break;
      }

  return update;
}

/* Analyze dataflow info for the basic blocks specified by the bitmap
   BLOCKS, or for the whole CFG if BLOCKS is zero, or just for the
   modified blocks if BLOCKS is -1.  */
int
df_analyse (struct df *df, bitmap blocks, int flags)
{
  int update;

  /* We could deal with additional basic blocks being created by
     rescanning everything again.  */
  if (df->n_bbs && df->n_bbs != (unsigned int) last_basic_block)
    abort ();

  update = df_modified_p (df, blocks);
  if (update || (flags != df->flags))
    {
      if (! blocks)
        {
          if (df->n_bbs)
            {
              /* Recompute everything from scratch.  */
              df_free (df);
            }
          /* Allocate and initialize data structures.  */
          df_alloc (df, max_reg_num ());
          df_analyse_1 (df, 0, flags, 0);
          update = 1;
        }
      else
        {
          if (blocks == (bitmap) -1)
            blocks = df->bbs_modified;

          if (! df->n_bbs)
            abort ();

          df_analyse_1 (df, blocks, flags, 1);
          bitmap_zero (df->bbs_modified);
          bitmap_zero (df->insns_modified);
        }
    }
  return update;
}

/* Free all the dataflow info and the DF structure.  */
void
df_finish (struct df *df)
{
  df_free (df);
  free (df);
}


/* Unlink INSN from its reference information.  */
static void
df_insn_refs_unlink (struct df *df, basic_block bb ATTRIBUTE_UNUSED, rtx insn)
{
  struct df_link *link;
  unsigned int uid;

  uid = INSN_UID (insn);

  /* Unlink all refs defined by this insn.  */
  for (link = df->insns[uid].defs; link; link = link->next)
    df_def_unlink (df, link->ref);

  /* Unlink all refs used by this insn.  */
  for (link = df->insns[uid].uses; link; link = link->next)
    df_use_unlink (df, link->ref);

  df->insns[uid].defs = 0;
  df->insns[uid].uses = 0;
}

#if 0
/* Unlink all the insns within BB from their reference information.  */
static void
df_bb_refs_unlink (struct df *df, basic_block bb)
{
  rtx insn;

  /* Scan the block an insn at a time from beginning to end.  */
  for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn))
    {
      if (INSN_P (insn))
        {
          /* Unlink refs for INSN.  */
          df_insn_refs_unlink (df, bb, insn);
        }
      if (insn == BB_END (bb))
        break;
    }
}


/* Unlink all the refs in the basic blocks specified by BLOCKS.
   Not currently used.  */
static void
df_refs_unlink (struct df *df, bitmap blocks)
{
  basic_block bb;

  if (blocks)
    {
      FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
      {
        df_bb_refs_unlink (df, bb);
      });
    }
  else
    {
      FOR_EACH_BB (bb)
        df_bb_refs_unlink (df, bb);
    }
}
#endif

/* Functions to modify insns.  */


/* Delete INSN and all its reference information; return the
   following insn.  */
rtx
df_insn_delete (struct df *df, basic_block bb ATTRIBUTE_UNUSED, rtx insn)
{
  /* If the insn is a jump, we should perhaps call delete_insn to
     handle the JUMP_LABEL?  */

  /* We should not be deleting the NOTE_INSN_BASIC_BLOCK or label.  */
  if (insn == BB_HEAD (bb))
    abort ();

  /* Delete the insn.  */
  delete_insn (insn);

  df_insn_modify (df, bb, insn);

  return NEXT_INSN (insn);
}

/* Mark that INSN within BB may have changed (created/modified/deleted).
   This may be called multiple times for the same insn.  There is no
   harm calling this function if the insn wasn't changed; it will just
   slow down the rescanning of refs.  */
void
df_insn_modify (struct df *df, basic_block bb, rtx insn)
{
  unsigned int uid;

  uid = INSN_UID (insn);
  if (uid >= df->insn_size)
    df_insn_table_realloc (df, uid);

  bitmap_set_bit (df->bbs_modified, bb->index);
  bitmap_set_bit (df->insns_modified, uid);

  /* For incremental updating on the fly, perhaps we could make a copy
     of all the refs of the original insn and turn them into
     anti-refs.  When df_refs_update finds these anti-refs, it annihilates
     the original refs.  If validate_change fails then these anti-refs
     will just get ignored.  */
}

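/* A minimal usage sketch (an illustration; DF, BB and INSN are
   placeholders for a valid analysis object, the insn's block and the
   changed insn):

	df_insn_modify (df, bb, insn);
	... further changes, each marked the same way ...
	df_analyse (df, (bitmap) -1, df->flags);

   Passing -1 as the BLOCKS argument restricts the update to the
   modified blocks recorded above.  */
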
typedef struct replace_args
{
  rtx match;
  rtx replacement;
  rtx insn;
  int modified;
} replace_args;


/* Replace mem pointed to by PX with its associated pseudo register.
   DATA is actually a pointer to a structure describing the
   instruction currently being scanned and the MEM we are currently
   replacing.  */
static int
df_rtx_mem_replace (rtx *px, void *data)
{
  replace_args *args = (replace_args *) data;
  rtx mem = *px;

  if (mem == NULL_RTX)
    return 0;

  switch (GET_CODE (mem))
    {
    case MEM:
      break;

    case CONST_DOUBLE:
      /* We're not interested in the MEM associated with a
         CONST_DOUBLE, so there's no need to traverse into one.  */
      return -1;

    default:
      /* This is not a MEM.  */
      return 0;
    }

  if (!rtx_equal_p (args->match, mem))
    /* This is not the MEM we are currently replacing.  */
    return 0;

  /* Actually replace the MEM.  */
  validate_change (args->insn, px, args->replacement, 1);
  args->modified++;

  return 0;
}

/* Replace all occurrences of MEM in INSN with pseudo REG, marking the
   insn as modified if anything changed; return the number of
   replacements made.  */
int
df_insn_mem_replace (struct df *df, basic_block bb, rtx insn, rtx mem, rtx reg)
{
  replace_args args;

  args.insn = insn;
  args.match = mem;
  args.replacement = reg;
  args.modified = 0;

  /* Search and replace all matching mems within insn.  */
  for_each_rtx (&insn, df_rtx_mem_replace, &args);

  if (args.modified)
    df_insn_modify (df, bb, insn);

  /* ???? FIXME.  We may have a new def or one or more new uses of REG
     in INSN.  REG should be a new pseudo so it won't affect the
     dataflow information that we currently have.  We should add
     the new uses and defs to INSN and then recreate the chains
     when df_analyse is called.  */
  return args.modified;
}

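/* Illustrative sketch (not from the original sources; MEM and INSN are
   placeholders): replace every occurrence of a MEM in INSN by a fresh
   pseudo, then let df_analyse rebuild the chains.

	rtx reg = gen_reg_rtx (GET_MODE (mem));
	if (df_insn_mem_replace (df, bb, insn, mem, reg))
	  df_analyse (df, (bitmap) -1, df->flags);

   As the FIXME above notes, the refs of the new pseudo are not
   recorded until df_analyse runs again.  */
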
/* Replace one register with another.  Called through for_each_rtx; PX
   points to the rtx being scanned.  DATA is actually a pointer to a
   structure of arguments.  */
static int
df_rtx_reg_replace (rtx *px, void *data)
{
  rtx x = *px;
  replace_args *args = (replace_args *) data;

  if (x == NULL_RTX)
    return 0;

  if (x == args->match)
    {
      validate_change (args->insn, px, args->replacement, 1);
      args->modified++;
    }

  return 0;
}

/* Replace the reg within every ref on CHAIN that is within the set
   BLOCKS of basic blocks with NEWREG.  Also update the regs within
   REG_NOTES.  */
void
df_refs_reg_replace (struct df *df, bitmap blocks, struct df_link *chain, rtx oldreg, rtx newreg)
{
  struct df_link *link;
  replace_args args;

  if (! blocks)
    blocks = df->all_blocks;

  args.match = oldreg;
  args.replacement = newreg;
  args.modified = 0;

  for (link = chain; link; link = link->next)
    {
      struct ref *ref = link->ref;
      rtx insn = DF_REF_INSN (ref);

      if (! INSN_P (insn))
        continue;

      if (bitmap_bit_p (blocks, DF_REF_BBNO (ref)))
        {
          df_ref_reg_replace (df, ref, oldreg, newreg);

          /* Replace occurrences of the reg within the REG_NOTES.  */
          if ((! link->next
               || DF_REF_INSN (ref) != DF_REF_INSN (link->next->ref))
              && REG_NOTES (insn))
            {
              args.insn = insn;
              for_each_rtx (&REG_NOTES (insn), df_rtx_reg_replace, &args);
            }
        }
      else
        {
          /* Temporary check to ensure that we have a grip on which
             regs should be replaced.  */
          abort ();
        }
    }
}

/* Replace all occurrences of register OLDREG with register NEWREG in
   blocks defined by bitmap BLOCKS.  This also replaces occurrences of
   OLDREG in the REG_NOTES but only for insns containing OLDREG.  This
   routine expects the reg-use and reg-def chains to be valid.  */
int
df_reg_replace (struct df *df, bitmap blocks, rtx oldreg, rtx newreg)
{
  unsigned int oldregno = REGNO (oldreg);

  df_refs_reg_replace (df, blocks, df->regs[oldregno].defs, oldreg, newreg);
  df_refs_reg_replace (df, blocks, df->regs[oldregno].uses, oldreg, newreg);
  return 1;
}

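/* Illustrative sketch (not from the original sources; OLDREG, NEWREG
   and BLOCKS are placeholders): rename a pseudo throughout a region
   and then refresh the dataflow for the touched blocks.

	df_reg_replace (df, blocks, oldreg, newreg);
	df_analyse (df, (bitmap) -1, df->flags);

   A null BLOCKS bitmap falls back to df->all_blocks, as
   df_refs_reg_replace above shows.  */
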
/* Try replacing the reg within REF with NEWREG.  Do not modify
   def-use/use-def chains.  */
int
df_ref_reg_replace (struct df *df, struct ref *ref, rtx oldreg, rtx newreg)
{
  /* Check that insn was deleted by being converted into a NOTE.  If
     so ignore this insn.  */
  if (! INSN_P (DF_REF_INSN (ref)))
    return 0;

  if (oldreg && oldreg != DF_REF_REG (ref))
    abort ();

  if (! validate_change (DF_REF_INSN (ref), DF_REF_LOC (ref), newreg, 1))
    return 0;

  df_insn_modify (df, DF_REF_BB (ref), DF_REF_INSN (ref));
  return 1;
}

struct ref *
df_bb_def_use_swap (struct df *df, basic_block bb, rtx def_insn, rtx use_insn, unsigned int regno)
{
  struct ref *def;
  struct ref *use;
  int def_uid;
  int use_uid;
  struct df_link *link;

  def = df_bb_insn_regno_first_def_find (df, bb, def_insn, regno);
  if (! def)
    return 0;

  use = df_bb_insn_regno_last_use_find (df, bb, use_insn, regno);
  if (! use)
    return 0;

  /* The USE no longer exists.  */
  use_uid = INSN_UID (use_insn);
  df_use_unlink (df, use);
  df_ref_unlink (&df->insns[use_uid].uses, use);

  /* The DEF requires shifting so remove it from DEF_INSN
     and add it to USE_INSN by reusing LINK.  */
  def_uid = INSN_UID (def_insn);
  link = df_ref_unlink (&df->insns[def_uid].defs, def);
  link->ref = def;
  link->next = df->insns[use_uid].defs;
  df->insns[use_uid].defs = link;

#if 0
  link = df_ref_unlink (&df->regs[regno].defs, def);
  link->ref = def;
  link->next = df->regs[regno].defs;
  df->regs[regno].defs = link;
#endif

  DF_REF_INSN (def) = use_insn;
  return def;
}

/* Mark all the insns between FIRST_INSN and LAST_INSN inclusive as
   modified.  All new insns must be processed by this routine.  */
static void
df_insns_modify (struct df *df, basic_block bb, rtx first_insn, rtx last_insn)
{
  rtx insn;

  for (insn = first_insn; ; insn = NEXT_INSN (insn))
    {
      unsigned int uid;

      /* A non-const call should not have slipped through the net.  If
         it does, we need to create a new basic block.  Ouch.  The
         same applies for a label.  */
      if ((GET_CODE (insn) == CALL_INSN
           && ! CONST_OR_PURE_CALL_P (insn))
          || GET_CODE (insn) == CODE_LABEL)
        abort ();

      uid = INSN_UID (insn);

      if (uid >= df->insn_size)
        df_insn_table_realloc (df, uid);

      df_insn_modify (df, bb, insn);

      if (insn == last_insn)
        break;
    }
}

/* Emit PATTERN before INSN within BB.  */
rtx
df_pattern_emit_before (struct df *df, rtx pattern, basic_block bb, rtx insn)
{
  rtx ret_insn;
  rtx prev_insn = PREV_INSN (insn);

  /* We should not be inserting before the start of the block.  */
  if (insn == BB_HEAD (bb))
    abort ();
  ret_insn = emit_insn_before (pattern, insn);
  if (ret_insn == insn)
    return ret_insn;

  df_insns_modify (df, bb, NEXT_INSN (prev_insn), ret_insn);
  return ret_insn;
}


/* Emit PATTERN after INSN within BB.  */
rtx
df_pattern_emit_after (struct df *df, rtx pattern, basic_block bb, rtx insn)
{
  rtx ret_insn;

  ret_insn = emit_insn_after (pattern, insn);
  if (ret_insn == insn)
    return ret_insn;

  df_insns_modify (df, bb, NEXT_INSN (insn), ret_insn);
  return ret_insn;
}


/* Emit jump PATTERN after INSN within BB.  */
rtx
df_jump_pattern_emit_after (struct df *df, rtx pattern, basic_block bb, rtx insn)
{
  rtx ret_insn;

  ret_insn = emit_jump_insn_after (pattern, insn);
  if (ret_insn == insn)
    return ret_insn;

  df_insns_modify (df, bb, NEXT_INSN (insn), ret_insn);
  return ret_insn;
}

/* Move INSN within BB before BEFORE_INSN within BEFORE_BB.

   This function should only be used to move loop invariant insns
   out of a loop where it has been proven that the def-use info
   will still be valid.  */
rtx
df_insn_move_before (struct df *df, basic_block bb, rtx insn, basic_block before_bb, rtx before_insn)
{
  struct df_link *link;
  unsigned int uid;

  if (! bb)
    return df_pattern_emit_before (df, insn, before_bb, before_insn);

  uid = INSN_UID (insn);

  /* Change bb for all df defined and used by this insn.  */
  for (link = df->insns[uid].defs; link; link = link->next)
    DF_REF_BB (link->ref) = before_bb;
  for (link = df->insns[uid].uses; link; link = link->next)
    DF_REF_BB (link->ref) = before_bb;

  /* The lifetimes of the registers used in this insn will be reduced
     while the lifetimes of the registers defined in this insn
     are likely to be increased.  */

  /* ???? Perhaps all the insns moved should be stored on a list
     which df_analyse removes when it recalculates data flow.  */

  return emit_insn_before (insn, before_insn);
}

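/* Illustrative sketch (not from the original sources; PREHEADER_BB is
   a placeholder for the target block): hoisting a loop-invariant insn
   into a preheader, assuming the def-use info has been proven to stay
   valid as required above.

	df_insn_move_before (df, bb, insn, preheader_bb,
			     BB_END (preheader_bb));  */
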
/* Functions to query dataflow information.  */


/* Return nonzero if INSN defines REGNO.  */
int
df_insn_regno_def_p (struct df *df, basic_block bb ATTRIBUTE_UNUSED,
                     rtx insn, unsigned int regno)
{
  unsigned int uid;
  struct df_link *link;

  uid = INSN_UID (insn);

  for (link = df->insns[uid].defs; link; link = link->next)
    {
      struct ref *def = link->ref;

      if (DF_REF_REGNO (def) == regno)
        return 1;
    }

  return 0;
}

/* Return nonzero if DEF is the only def reaching each of its uses.  */
static int
df_def_dominates_all_uses_p (struct df *df ATTRIBUTE_UNUSED, struct ref *def)
{
  struct df_link *du_link;

  /* Follow def-use chain to find all the uses of this def.  */
  for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
    {
      struct ref *use = du_link->ref;
      struct df_link *ud_link;

      /* Follow use-def chain to check all the defs for this use.  */
      for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
        if (ud_link->ref != def)
          return 0;
    }
  return 1;
}

/* Return nonzero if every def of INSN is the only def reaching its
   uses.  */
int
df_insn_dominates_all_uses_p (struct df *df, basic_block bb ATTRIBUTE_UNUSED,
                              rtx insn)
{
  unsigned int uid;
  struct df_link *link;

  uid = INSN_UID (insn);

  for (link = df->insns[uid].defs; link; link = link->next)
    {
      struct ref *def = link->ref;

      if (! df_def_dominates_all_uses_p (df, def))
        return 0;
    }

  return 1;
}

/* Return nonzero if DEF dominates all its uses within the bitmap
   BLOCKS, i.e. if it is the only def reaching each such use.  */
static int
df_def_dominates_uses_p (struct df *df ATTRIBUTE_UNUSED, struct ref *def,
                         bitmap blocks)
{
  struct df_link *du_link;

  /* Follow def-use chain to find all the uses of this def.  */
  for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
    {
      struct ref *use = du_link->ref;
      struct df_link *ud_link;

      /* Only worry about the uses within BLOCKS.  For example,
         consider a register defined within a loop that is live at the
         loop exits.  */
      if (bitmap_bit_p (blocks, DF_REF_BBNO (use)))
        {
          /* Follow use-def chain to check all the defs for this use.  */
          for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
            if (ud_link->ref != def)
              return 0;
        }
    }
  return 1;
}

/* Return nonzero if all the defs of INSN within BB dominate
   all the corresponding uses.  */
int
df_insn_dominates_uses_p (struct df *df, basic_block bb ATTRIBUTE_UNUSED,
                          rtx insn, bitmap blocks)
{
  unsigned int uid;
  struct df_link *link;

  uid = INSN_UID (insn);

  for (link = df->insns[uid].defs; link; link = link->next)
    {
      struct ref *def = link->ref;

      /* Only consider the defs within BLOCKS.  */
      if (bitmap_bit_p (blocks, DF_REF_BBNO (def))
          && ! df_def_dominates_uses_p (df, def, blocks))
        return 0;
    }
  return 1;
}

/* Return the basic block that REG is referenced in, or NULL if it is
   referenced in multiple basic blocks.  */
basic_block
df_regno_bb (struct df *df, unsigned int regno)
{
  struct df_link *defs = df->regs[regno].defs;
  struct df_link *uses = df->regs[regno].uses;
  struct ref *def = defs ? defs->ref : 0;
  struct ref *use = uses ? uses->ref : 0;
  basic_block bb_def = def ? DF_REF_BB (def) : 0;
  basic_block bb_use = use ? DF_REF_BB (use) : 0;

  /* Compare blocks of first def and last use.  ???? FIXME.  What if
     the reg-def and reg-use lists are not correctly ordered?  */
  return bb_def == bb_use ? bb_def : 0;
}

/* Return nonzero if REG is used in multiple basic blocks.  */
int
df_reg_global_p (struct df *df, rtx reg)
{
  return df_regno_bb (df, REGNO (reg)) == 0;
}


/* Return total lifetime (in insns) of REG.  */
int
df_reg_lifetime (struct df *df, rtx reg)
{
  return df->regs[REGNO (reg)].lifetime;
}

/* Return nonzero if REG is live at the start of BB.  */
int
df_bb_reg_live_start_p (struct df *df, basic_block bb, rtx reg)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);

#ifdef ENABLE_CHECKING
  if (! bb_info->lr_in)
    abort ();
#endif

  return bitmap_bit_p (bb_info->lr_in, REGNO (reg));
}


/* Return nonzero if REG is live at the end of BB.  */
int
df_bb_reg_live_end_p (struct df *df, basic_block bb, rtx reg)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);

#ifdef ENABLE_CHECKING
  if (! bb_info->lr_out)
    abort ();
#endif

  return bitmap_bit_p (bb_info->lr_out, REGNO (reg));
}

/* Return -1 if the life of REG1 within BB ends before the life of
   REG2 starts, 1 if REG1's life starts after REG2's ends, or 0 if
   the lives overlap.  */
int
df_bb_regs_lives_compare (struct df *df, basic_block bb, rtx reg1, rtx reg2)
{
  unsigned int regno1 = REGNO (reg1);
  unsigned int regno2 = REGNO (reg2);
  struct ref *def1;
  struct ref *use1;
  struct ref *def2;
  struct ref *use2;

  /* The regs must be local to BB.  */
  if (df_regno_bb (df, regno1) != bb
      || df_regno_bb (df, regno2) != bb)
    abort ();

  def2 = df_bb_regno_first_def_find (df, bb, regno2);
  use1 = df_bb_regno_last_use_find (df, bb, regno1);

  if (DF_INSN_LUID (df, DF_REF_INSN (def2))
      > DF_INSN_LUID (df, DF_REF_INSN (use1)))
    return -1;

  def1 = df_bb_regno_first_def_find (df, bb, regno1);
  use2 = df_bb_regno_last_use_find (df, bb, regno2);

  if (DF_INSN_LUID (df, DF_REF_INSN (def1))
      > DF_INSN_LUID (df, DF_REF_INSN (use2)))
    return 1;

  return 0;
}

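/* For example (an illustration of the return values): if the
   luid-ordered refs within BB are def(r1), use(r1), def(r2), use(r2),
   the first def of r2 comes after the last use of r1, so the function
   returns -1; with the registers swapped it returns 1; interleave the
   refs and it returns 0.  */
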
/* Return last use of REGNO within BB.  */
static struct ref *
df_bb_regno_last_use_find (struct df *df, basic_block bb, unsigned int regno)
{
  struct df_link *link;

  /* This assumes that the reg-use list is ordered such that for any
     BB, the last use is found first.  However, since the BBs are not
     ordered, the first use in the chain is not necessarily the last
     use in the function.  */
  for (link = df->regs[regno].uses; link; link = link->next)
    {
      struct ref *use = link->ref;

      if (DF_REF_BB (use) == bb)
        return use;
    }

  return 0;
}


/* Return first def of REGNO within BB.  */
static struct ref *
df_bb_regno_first_def_find (struct df *df, basic_block bb, unsigned int regno)
{
  struct df_link *link;

  /* This assumes that the reg-def list is ordered such that for any
     BB, the first def is found first.  However, since the BBs are not
     ordered, the first def in the chain is not necessarily the first
     def in the function.  */
  for (link = df->regs[regno].defs; link; link = link->next)
    {
      struct ref *def = link->ref;

      if (DF_REF_BB (def) == bb)
        return def;
    }

  return 0;
}

/* Return last use of REGNO inside INSN within BB.  */
static struct ref *
df_bb_insn_regno_last_use_find (struct df *df,
                                basic_block bb ATTRIBUTE_UNUSED, rtx insn,
                                unsigned int regno)
{
  unsigned int uid;
  struct df_link *link;

  uid = INSN_UID (insn);

  for (link = df->insns[uid].uses; link; link = link->next)
    {
      struct ref *use = link->ref;

      if (DF_REF_REGNO (use) == regno)
        return use;
    }

  return 0;
}

/* Return first def of REGNO inside INSN within BB.  */
static struct ref *
df_bb_insn_regno_first_def_find (struct df *df,
                                 basic_block bb ATTRIBUTE_UNUSED, rtx insn,
                                 unsigned int regno)
{
  unsigned int uid;
  struct df_link *link;

  uid = INSN_UID (insn);

  for (link = df->insns[uid].defs; link; link = link->next)
    {
      struct ref *def = link->ref;

      if (DF_REF_REGNO (def) == regno)
        return def;
    }

  return 0;
}

/* Return the insn using REG if the BB contains only a single
   use and def of REG, otherwise NULL_RTX.  */
rtx
df_bb_single_def_use_insn_find (struct df *df, basic_block bb, rtx insn, rtx reg)
{
  struct ref *def;
  struct ref *use;
  struct df_link *du_link;

  def = df_bb_insn_regno_first_def_find (df, bb, insn, REGNO (reg));

  if (! def)
    abort ();

  du_link = DF_REF_CHAIN (def);

  if (! du_link)
    return NULL_RTX;

  use = du_link->ref;

  /* Check if def is dead.  */
  if (! use)
    return NULL_RTX;

  /* Check for multiple uses.  */
  if (du_link->next)
    return NULL_RTX;

  return DF_REF_INSN (use);
}

/* Functions for debugging/dumping dataflow information.  */


/* Dump a def-use or use-def chain for REF to FILE.  */
static void
df_chain_dump (struct df_link *link, FILE *file)
{
  fprintf (file, "{ ");
  for (; link; link = link->next)
    {
      fprintf (file, "%c%d ",
               DF_REF_REG_DEF_P (link->ref) ? 'd' : 'u',
               DF_REF_ID (link->ref));
    }
  fprintf (file, "}");
}


/* Dump a chain of refs with the associated regno.  */
static void
df_chain_dump_regno (struct df_link *link, FILE *file)
{
  fprintf (file, "{ ");
  for (; link; link = link->next)
    {
      fprintf (file, "%c%d(%d) ",
               DF_REF_REG_DEF_P (link->ref) ? 'd' : 'u',
               DF_REF_ID (link->ref),
               DF_REF_REGNO (link->ref));
    }
  fprintf (file, "}");
}

/* Dump dataflow info.  */
void
df_dump (struct df *df, int flags, FILE *file)
{
  unsigned int j;
  basic_block bb;

  if (! df || ! file)
    return;

  fprintf (file, "\nDataflow summary:\n");
  fprintf (file, "n_regs = %d, n_defs = %d, n_uses = %d, n_bbs = %d\n",
           df->n_regs, df->n_defs, df->n_uses, df->n_bbs);

  if (flags & DF_RD)
    {
      basic_block bb;

      fprintf (file, "Reaching defs:\n");
      FOR_EACH_BB (bb)
        {
          struct bb_info *bb_info = DF_BB_INFO (df, bb);

          if (! bb_info->rd_in)
            continue;

          fprintf (file, "bb %d in \t", bb->index);
          dump_bitmap (file, bb_info->rd_in);
          fprintf (file, "bb %d gen \t", bb->index);
          dump_bitmap (file, bb_info->rd_gen);
          fprintf (file, "bb %d kill\t", bb->index);
          dump_bitmap (file, bb_info->rd_kill);
          fprintf (file, "bb %d out \t", bb->index);
          dump_bitmap (file, bb_info->rd_out);
        }
    }

  if (flags & DF_UD_CHAIN)
    {
      fprintf (file, "Use-def chains:\n");
      for (j = 0; j < df->n_defs; j++)
        {
          if (df->defs[j])
            {
              fprintf (file, "d%d bb %d luid %d insn %d reg %d ",
                       j, DF_REF_BBNO (df->defs[j]),
                       DF_INSN_LUID (df, DF_REF_INSN (df->defs[j])),
                       DF_REF_INSN_UID (df->defs[j]),
                       DF_REF_REGNO (df->defs[j]));
              if (df->defs[j]->flags & DF_REF_READ_WRITE)
                fprintf (file, "read/write ");
              df_chain_dump (DF_REF_CHAIN (df->defs[j]), file);
              fprintf (file, "\n");
            }
        }
    }

  if (flags & DF_RU)
    {
      fprintf (file, "Reaching uses:\n");
      FOR_EACH_BB (bb)
        {
          struct bb_info *bb_info = DF_BB_INFO (df, bb);

          if (! bb_info->ru_in)
            continue;

          fprintf (file, "bb %d in \t", bb->index);
          dump_bitmap (file, bb_info->ru_in);
          fprintf (file, "bb %d gen \t", bb->index);
          dump_bitmap (file, bb_info->ru_gen);
          fprintf (file, "bb %d kill\t", bb->index);
          dump_bitmap (file, bb_info->ru_kill);
          fprintf (file, "bb %d out \t", bb->index);
          dump_bitmap (file, bb_info->ru_out);
        }
    }

  if (flags & DF_DU_CHAIN)
    {
      fprintf (file, "Def-use chains:\n");
      for (j = 0; j < df->n_uses; j++)
        {
          if (df->uses[j])
            {
              fprintf (file, "u%d bb %d luid %d insn %d reg %d ",
                       j, DF_REF_BBNO (df->uses[j]),
                       DF_INSN_LUID (df, DF_REF_INSN (df->uses[j])),
                       DF_REF_INSN_UID (df->uses[j]),
                       DF_REF_REGNO (df->uses[j]));
              if (df->uses[j]->flags & DF_REF_READ_WRITE)
                fprintf (file, "read/write ");
              df_chain_dump (DF_REF_CHAIN (df->uses[j]), file);
              fprintf (file, "\n");
            }
        }
    }

  if (flags & DF_LR)
    {
      fprintf (file, "Live regs:\n");
      FOR_EACH_BB (bb)
        {
          struct bb_info *bb_info = DF_BB_INFO (df, bb);

          if (! bb_info->lr_in)
            continue;

          fprintf (file, "bb %d in \t", bb->index);
          dump_bitmap (file, bb_info->lr_in);
          fprintf (file, "bb %d use \t", bb->index);
          dump_bitmap (file, bb_info->lr_use);
          fprintf (file, "bb %d def \t", bb->index);
          dump_bitmap (file, bb_info->lr_def);
          fprintf (file, "bb %d out \t", bb->index);
          dump_bitmap (file, bb_info->lr_out);
        }
    }

  if (flags & (DF_REG_INFO | DF_RD_CHAIN | DF_RU_CHAIN))
    {
      struct reg_info *reg_info = df->regs;

      fprintf (file, "Register info:\n");
      for (j = 0; j < df->n_regs; j++)
        {
          if (((flags & DF_REG_INFO)
               && (reg_info[j].n_uses || reg_info[j].n_defs))
              || ((flags & DF_RD_CHAIN) && reg_info[j].defs)
              || ((flags & DF_RU_CHAIN) && reg_info[j].uses))
            {
              fprintf (file, "reg %d", j);
              if ((flags & DF_RD_CHAIN) && (flags & DF_RU_CHAIN))
                {
                  basic_block bb = df_regno_bb (df, j);

                  if (bb)
                    fprintf (file, " bb %d", bb->index);
                  else
                    fprintf (file, " bb ?");
                }
              if (flags & DF_REG_INFO)
                {
                  fprintf (file, " life %d", reg_info[j].lifetime);
                }

              if ((flags & DF_REG_INFO) || (flags & DF_RD_CHAIN))
                {
                  fprintf (file, " defs ");
                  if (flags & DF_REG_INFO)
                    fprintf (file, "%d ", reg_info[j].n_defs);
                  if (flags & DF_RD_CHAIN)
                    df_chain_dump (reg_info[j].defs, file);
                }

              if ((flags & DF_REG_INFO) || (flags & DF_RU_CHAIN))
                {
                  fprintf (file, " uses ");
                  if (flags & DF_REG_INFO)
                    fprintf (file, "%d ", reg_info[j].n_uses);
                  if (flags & DF_RU_CHAIN)
                    df_chain_dump (reg_info[j].uses, file);
                }

              fprintf (file, "\n");
            }
        }
    }
  fprintf (file, "\n");
}

/* Dump the insn-def and insn-use chains of INSN to FILE.  */
void
df_insn_debug (struct df *df, rtx insn, FILE *file)
{
  unsigned int uid;
  int bbi;

  uid = INSN_UID (insn);
  if (uid >= df->insn_size)
    return;

  if (df->insns[uid].defs)
    bbi = DF_REF_BBNO (df->insns[uid].defs->ref);
  else if (df->insns[uid].uses)
    bbi = DF_REF_BBNO (df->insns[uid].uses->ref);
  else
    bbi = -1;

  fprintf (file, "insn %d bb %d luid %d defs ",
           uid, bbi, DF_INSN_LUID (df, insn));
  df_chain_dump (df->insns[uid].defs, file);
  fprintf (file, " uses ");
  df_chain_dump (df->insns[uid].uses, file);
  fprintf (file, "\n");
}


/* As df_insn_debug, but print the regno of each ref as well.  */
void
df_insn_debug_regno (struct df *df, rtx insn, FILE *file)
{
  unsigned int uid;
  int bbi;

  uid = INSN_UID (insn);
  if (uid >= df->insn_size)
    return;

  if (df->insns[uid].defs)
    bbi = DF_REF_BBNO (df->insns[uid].defs->ref);
  else if (df->insns[uid].uses)
    bbi = DF_REF_BBNO (df->insns[uid].uses->ref);
  else
    bbi = -1;

  fprintf (file, "insn %d bb %d luid %d defs ",
           uid, bbi, DF_INSN_LUID (df, insn));
  df_chain_dump_regno (df->insns[uid].defs, file);
  fprintf (file, " uses ");
  df_chain_dump_regno (df->insns[uid].uses, file);
  fprintf (file, "\n");
}


/* Dump the lifetime and the def and use chains of REGNO to FILE.  */
static void
df_regno_debug (struct df *df, unsigned int regno, FILE *file)
{
  if (regno >= df->reg_size)
    return;

  fprintf (file, "reg %d life %d defs ",
           regno, df->regs[regno].lifetime);
  df_chain_dump (df->regs[regno].defs, file);
  fprintf (file, " uses ");
  df_chain_dump (df->regs[regno].uses, file);
  fprintf (file, "\n");
}


/* Dump the ref REF and its def-use or use-def chain to FILE.  */
static void
df_ref_debug (struct df *df, struct ref *ref, FILE *file)
{
  fprintf (file, "%c%d ",
           DF_REF_REG_DEF_P (ref) ? 'd' : 'u',
           DF_REF_ID (ref));
  fprintf (file, "reg %d bb %d luid %d insn %d chain ",
           DF_REF_REGNO (ref),
           DF_REF_BBNO (ref),
           DF_INSN_LUID (df, DF_REF_INSN (ref)),
           INSN_UID (DF_REF_INSN (ref)));
  df_chain_dump (DF_REF_CHAIN (ref), file);
  fprintf (file, "\n");
}

/* Functions for debugging from GDB.  */

void
debug_df_insn (rtx insn)
{
  df_insn_debug (ddf, insn, stderr);
  debug_rtx (insn);
}


void
debug_df_reg (rtx reg)
{
  df_regno_debug (ddf, REGNO (reg), stderr);
}


void
debug_df_regno (unsigned int regno)
{
  df_regno_debug (ddf, regno, stderr);
}


void
debug_df_ref (struct ref *ref)
{
  df_ref_debug (ddf, ref, stderr);
}


void
debug_df_defno (unsigned int defno)
{
  df_ref_debug (ddf, ddf->defs[defno], stderr);
}


void
debug_df_useno (unsigned int useno)
{
  df_ref_debug (ddf, ddf->uses[useno], stderr);
}


void
debug_df_chain (struct df_link *link)
{
  df_chain_dump (link, stderr);
  fputc ('\n', stderr);
}

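/* These entry points are intended to be called interactively, e.g.
   from GDB (a sketch; which refs to inspect depends on the session):

	(gdb) call debug_df_insn (insn)
	(gdb) call debug_df_regno (42)
	(gdb) call debug_df_defno (0)

   They all rely on the global DDF squirrelled away by df_init.  */
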
/* Hybrid search algorithm from "Implementation Techniques for
   Efficient Data-Flow Analysis of Large Programs".  */
static void
hybrid_search_bitmap (basic_block block, bitmap *in, bitmap *out, bitmap *gen,
                      bitmap *kill, enum df_flow_dir dir,
                      enum df_confluence_op conf_op,
                      transfer_function_bitmap transfun, sbitmap visited,
                      sbitmap pending, void *data)
{
  int changed;
  int i = block->index;
  edge e;
  basic_block bb = block;

  SET_BIT (visited, block->index);
  if (TEST_BIT (pending, block->index))
    {
      if (dir == DF_FORWARD)
        {
          /* Calculate <conf_op> of predecessor_outs.  */
          bitmap_zero (in[i]);
          for (e = bb->pred; e != 0; e = e->pred_next)
            {
              if (e->src == ENTRY_BLOCK_PTR)
                continue;
              switch (conf_op)
                {
                case DF_UNION:
                  bitmap_a_or_b (in[i], in[i], out[e->src->index]);
                  break;
                case DF_INTERSECTION:
                  bitmap_a_and_b (in[i], in[i], out[e->src->index]);
                  break;
                }
            }
        }
      else
        {
          /* Calculate <conf_op> of successor ins.  */
          bitmap_zero (out[i]);
          for (e = bb->succ; e != 0; e = e->succ_next)
            {
              if (e->dest == EXIT_BLOCK_PTR)
                continue;
              switch (conf_op)
                {
                case DF_UNION:
                  bitmap_a_or_b (out[i], out[i], in[e->dest->index]);
                  break;
                case DF_INTERSECTION:
                  bitmap_a_and_b (out[i], out[i], in[e->dest->index]);
                  break;
                }
            }
        }
      /* Common part.  */
      (*transfun)(i, &changed, in[i], out[i], gen[i], kill[i], data);
      RESET_BIT (pending, i);
      if (changed)
        {
          if (dir == DF_FORWARD)
            {
              for (e = bb->succ; e != 0; e = e->succ_next)
                {
                  if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
                    continue;
                  SET_BIT (pending, e->dest->index);
                }
            }
          else
            {
              for (e = bb->pred; e != 0; e = e->pred_next)
                {
                  if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
                    continue;
                  SET_BIT (pending, e->src->index);
                }
            }
        }
    }
  if (dir == DF_FORWARD)
    {
      for (e = bb->succ; e != 0; e = e->succ_next)
        {
          if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
            continue;
          if (!TEST_BIT (visited, e->dest->index))
            hybrid_search_bitmap (e->dest, in, out, gen, kill, dir,
                                  conf_op, transfun, visited, pending,
                                  data);
        }
    }
  else
    {
      for (e = bb->pred; e != 0; e = e->pred_next)
        {
          if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
            continue;
          if (!TEST_BIT (visited, e->src->index))
            hybrid_search_bitmap (e->src, in, out, gen, kill, dir,
                                  conf_op, transfun, visited, pending,
                                  data);
        }
    }
}

/* Hybrid search for sbitmaps, rather than bitmaps.  */
static void
hybrid_search_sbitmap (basic_block block, sbitmap *in, sbitmap *out,
                       sbitmap *gen, sbitmap *kill, enum df_flow_dir dir,
                       enum df_confluence_op conf_op,
                       transfer_function_sbitmap transfun, sbitmap visited,
                       sbitmap pending, void *data)
{
  int changed;
  int i = block->index;
  edge e;
  basic_block bb = block;

  SET_BIT (visited, block->index);
  if (TEST_BIT (pending, block->index))
    {
      if (dir == DF_FORWARD)
        {
          /* Calculate <conf_op> of predecessor_outs.  */
          sbitmap_zero (in[i]);
          for (e = bb->pred; e != 0; e = e->pred_next)
            {
              if (e->src == ENTRY_BLOCK_PTR)
                continue;
              switch (conf_op)
                {
                case DF_UNION:
                  sbitmap_a_or_b (in[i], in[i], out[e->src->index]);
                  break;
                case DF_INTERSECTION:
                  sbitmap_a_and_b (in[i], in[i], out[e->src->index]);
                  break;
                }
            }
        }
      else
        {
          /* Calculate <conf_op> of successor ins.  */
          sbitmap_zero (out[i]);
          for (e = bb->succ; e != 0; e = e->succ_next)
            {
              if (e->dest == EXIT_BLOCK_PTR)
                continue;
              switch (conf_op)
                {
                case DF_UNION:
                  sbitmap_a_or_b (out[i], out[i], in[e->dest->index]);
                  break;
                case DF_INTERSECTION:
                  sbitmap_a_and_b (out[i], out[i], in[e->dest->index]);
                  break;
                }
            }
        }
      /* Common part.  */
      (*transfun)(i, &changed, in[i], out[i], gen[i], kill[i], data);
      RESET_BIT (pending, i);
      if (changed)
        {
          if (dir == DF_FORWARD)
            {
              for (e = bb->succ; e != 0; e = e->succ_next)
                {
                  if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
                    continue;
                  SET_BIT (pending, e->dest->index);
                }
            }
          else
            {
              for (e = bb->pred; e != 0; e = e->pred_next)
                {
                  if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
                    continue;
                  SET_BIT (pending, e->src->index);
                }
            }
        }
    }
  if (dir == DF_FORWARD)
    {
      for (e = bb->succ; e != 0; e = e->succ_next)
        {
          if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
            continue;
          if (!TEST_BIT (visited, e->dest->index))
            hybrid_search_sbitmap (e->dest, in, out, gen, kill, dir,
                                   conf_op, transfun, visited, pending,
                                   data);
        }
    }
  else
    {
      for (e = bb->pred; e != 0; e = e->pred_next)
        {
          if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
            continue;
          if (!TEST_BIT (visited, e->src->index))
            hybrid_search_sbitmap (e->src, in, out, gen, kill, dir,
                                   conf_op, transfun, visited, pending,
                                   data);
        }
    }
}

/* gen = GEN set.
   kill = KILL set.
   in, out = Filled in by function.
   blocks = Blocks to analyze.
   dir = Dataflow direction.
   conf_op = Confluence operation.
   transfun = Transfer function.
   order = Order to iterate in.  (Should map block numbers -> order.)
   data = Whatever you want.  It's passed to the transfer function.

   This function will perform iterative bitvector dataflow, producing
   the in and out sets.  Even if you only want to perform it for a
   small number of blocks, the vectors for in and out must be large
   enough for *all* blocks, because changing one block might affect
   others.  However, it'll only put what you say to analyze on the
   initial worklist.

   For forward problems, you probably want to pass in a mapping of
   block number to rc_order (like df->inverse_rc_map).  */
void
iterative_dataflow_sbitmap (sbitmap *in, sbitmap *out, sbitmap *gen,
                            sbitmap *kill, bitmap blocks,
                            enum df_flow_dir dir,
                            enum df_confluence_op conf_op,
                            transfer_function_sbitmap transfun, int *order,
                            void *data)
{
  int i;
  fibheap_t worklist;
  basic_block bb;
  sbitmap visited, pending;

  pending = sbitmap_alloc (last_basic_block);
  visited = sbitmap_alloc (last_basic_block);
  sbitmap_zero (pending);
  sbitmap_zero (visited);
  worklist = fibheap_new ();

  EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
  {
    fibheap_insert (worklist, order[i], (void *) (size_t) i);
    SET_BIT (pending, i);
    if (dir == DF_FORWARD)
      sbitmap_copy (out[i], gen[i]);
    else
      sbitmap_copy (in[i], gen[i]);
  });

  while (sbitmap_first_set_bit (pending) != -1)
    {
      while (!fibheap_empty (worklist))
        {
          i = (size_t) fibheap_extract_min (worklist);
          bb = BASIC_BLOCK (i);
          if (!TEST_BIT (visited, bb->index))
            hybrid_search_sbitmap (bb, in, out, gen, kill, dir,
                                   conf_op, transfun, visited, pending, data);
        }

      if (sbitmap_first_set_bit (pending) != -1)
        {
          EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
          {
            fibheap_insert (worklist, order[i], (void *) (size_t) i);
          });
          sbitmap_zero (visited);
        }
      else
        {
          break;
        }
    }

  sbitmap_free (pending);
  sbitmap_free (visited);
  fibheap_delete (worklist);
}

/* Exactly the same as iterative_dataflow_sbitmap, except it works on
   bitmaps instead.  */
void
iterative_dataflow_bitmap (bitmap *in, bitmap *out, bitmap *gen, bitmap *kill,
                           bitmap blocks, enum df_flow_dir dir,
                           enum df_confluence_op conf_op,
                           transfer_function_bitmap transfun, int *order,
                           void *data)
{
  int i;
  fibheap_t worklist;
  basic_block bb;
  sbitmap visited, pending;

  pending = sbitmap_alloc (last_basic_block);
  visited = sbitmap_alloc (last_basic_block);
  sbitmap_zero (pending);
  sbitmap_zero (visited);
  worklist = fibheap_new ();

  EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
  {
    fibheap_insert (worklist, order[i], (void *) (size_t) i);
    SET_BIT (pending, i);
    if (dir == DF_FORWARD)
      bitmap_copy (out[i], gen[i]);
    else
      bitmap_copy (in[i], gen[i]);
  });

  while (sbitmap_first_set_bit (pending) != -1)
    {
      while (!fibheap_empty (worklist))
        {
          i = (size_t) fibheap_extract_min (worklist);
          bb = BASIC_BLOCK (i);
          if (!TEST_BIT (visited, bb->index))
            hybrid_search_bitmap (bb, in, out, gen, kill, dir,
                                  conf_op, transfun, visited, pending, data);
        }

      if (sbitmap_first_set_bit (pending) != -1)
        {
          EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
          {
            fibheap_insert (worklist, order[i], (void *) (size_t) i);
          });
          sbitmap_zero (visited);
        }
      else
        {
          break;
        }
    }

  sbitmap_free (pending);
  sbitmap_free (visited);
  fibheap_delete (worklist);
}

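/* Illustrative sketch (an illustration mirroring the reaching-defs
   invocation in df_analyse_1 above): a forward, union-confluence
   problem over every block.  IN, OUT, GEN and KILL must each hold one
   bitmap per basic block, even if only a few blocks are analyzed:

	iterative_dataflow_bitmap (in, out, gen, kill, df->all_blocks,
				   DF_FORWARD, DF_UNION,
				   df_rd_transfer_function,
				   df->inverse_rc_map, NULL);

   For a backward problem, pass DF_BACKWARD and df->inverse_rts_map
   instead, as the live-register computation above does.  */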