/* Dataflow support routines.
   Copyright (C) 1999, 2000, 2001, 2002, 2003 Free Software Foundation, Inc.
   Contributed by Michael P. Hayes (m.hayes@elec.canterbury.ac.nz,
   mhayes@redhat.com)

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING.  If not, write to the Free
Software Foundation, 59 Temple Place - Suite 330, Boston, MA
02111-1307, USA.


OVERVIEW:

This file provides some dataflow routines for computing reaching defs,
upward exposed uses, live variables, def-use chains, and use-def
chains.  The global dataflow is performed using simple iterative
methods with a worklist and could be sped up by ordering the blocks
with a depth first search order.

A `struct ref' data structure (ref) is allocated for every register
reference (def or use) and this records the insn and bb the ref is
found within.  The refs are linked together in chains of uses and defs
for each insn and for each register.  Each ref also has a chain field
that links all the use refs for a def or all the def refs for a use.
This is used to create use-def or def-use chains.


USAGE:

Here's an example of using the dataflow routines.

      struct df *df;

      df = df_init ();

      df_analyse (df, 0, DF_ALL);

      df_dump (df, DF_ALL, stderr);

      df_finish (df);


df_init simply creates a poor man's object (df) that needs to be
passed to all the dataflow routines.  df_finish destroys this
object and frees up any allocated memory.  DF_ALL says to analyse
everything.

df_analyse performs the following:

1. Records defs and uses by scanning the insns in each basic block
   or by scanning the insns queued by df_insn_modify.
2. Links defs and uses into insn-def and insn-use chains.
3. Links defs and uses into reg-def and reg-use chains.
4. Assigns LUIDs to each insn (for modified blocks).
5. Calculates local reaching definitions.
6. Calculates global reaching definitions.
7. Creates use-def chains.
8. Calculates local reaching uses (upwards exposed uses).
9. Calculates global reaching uses.
10. Creates def-use chains.
11. Calculates local live registers.
12. Calculates global live registers.
13. Calculates register lifetimes and determines local registers.
PHILOSOPHY:

Note that the dataflow information is not updated for every newly
deleted or created insn.  If the dataflow information requires
updating then all the changed, new, or deleted insns need to be
marked with df_insn_modify (or df_insns_modify) either directly or
indirectly (say through calling df_insn_delete).  df_insn_modify
marks all the modified insns to get processed the next time df_analyse
is called.

Beware that tinkering with insns may invalidate the dataflow information.
The philosophy behind these routines is that once the dataflow
information has been gathered, the user should store what they require
before they tinker with any insn.  Once a reg is replaced, for example,
then the reg-def/reg-use chains will point to the wrong place.  Once a
whole lot of changes have been made, df_analyse can be called again
to update the dataflow information.  Currently, this is not very smart
with regard to propagating changes to the dataflow so it should not
be called very often.
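
For example, a pass that rewrites an insn might follow this pattern
(a sketch only; the exact arguments of df_insn_modify depend on its
declaration in df.h, which is not shown in this file):

      df_insn_modify (df, bb, insn);      queue INSN for rescanning
      ... tinker with INSN ...
      df_analyse (df, 0, DF_ALL);         recompute the stale info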
DATA STRUCTURES:

The basic object is a REF (reference) and this may either be a DEF
(definition) or a USE of a register.

These are linked into a variety of lists; namely reg-def, reg-use,
insn-def, insn-use, def-use, and use-def lists.  For example,
the reg-def lists contain all the refs that define a given register
while the insn-use lists contain all the refs used by an insn.

Note that the reg-def and reg-use chains are generally short (except for
the hard registers) and thus it is much faster to search these chains
rather than searching the def or use bitmaps.

If the insns are in SSA form then the reg-def and use-def lists
should only contain the single defining ref.
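
For example, scanning all the defs of register REGNO uses the same
idiom that appears throughout this file (a sketch):

      struct df_link *link;

      for (link = df->regs[regno].defs; link; link = link->next)
        {
          struct ref *def = link->ref;
          ... inspect DEF ...
        }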
TODO:

1) Incremental dataflow analysis.

Note that if a loop invariant insn is hoisted (or sunk), we do not
need to change the def-use or use-def chains.  All we have to do is to
change the bb field for all the associated defs and uses and to
renumber the LUIDs for the original and new basic blocks of the insn.

When shadowing loop mems we create new uses and defs for new pseudos
so we do not affect the existing dataflow information.

My current strategy is to queue up all modified, created, or deleted
insns so when df_analyse is called we can easily determine all the new
or deleted refs.  Currently the global dataflow information is
recomputed from scratch but this could be propagated more efficiently.

2) Reduced memory requirements.

We could operate a pool of ref structures.  When a ref is deleted it
gets returned to the pool (say by linking on to a chain of free refs).
This will require a pair of bitmaps for defs and uses so that we can
tell which ones have been changed.  Alternatively, we could
periodically squeeze the def and use tables and associated bitmaps and
renumber the def and use ids.

3) Ordering of reg-def and reg-use lists.

Should the first entry in the def list be the first def (within a BB)?
Similarly, should the first entry in the use list be the last use
(within a BB)?

4) Working with a sub-CFG.

Often the whole CFG does not need to be analyzed, for example,
when optimizing a loop, only certain registers are of interest.
Perhaps there should be a bitmap argument to df_analyse to specify
which registers should be analyzed?


NOTES:

Embedded addressing side-effects, such as POST_INC or PRE_INC, generate
both a use and a def.  These are both marked read/write to show that they
are dependent.  For example, (set (reg 40) (mem (post_inc (reg 42))))
will generate a use of reg 42 followed by a def of reg 42 (both marked
read/write).  Similarly, (set (reg 40) (mem (pre_dec (reg 41))))
generates a use of reg 41 then a def of reg 41 (both marked read/write),
even though reg 41 is decremented before it is used for the memory
address in this second example.

A set to a REG inside a ZERO_EXTRACT, SIGN_EXTRACT, or SUBREG invokes
a read-modify write operation.  We generate both a use and a def
and again mark them read/write.  */
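
/* For instance (an illustrative example, not taken from real output),
   a partial store such as

       (set (zero_extract:SI (reg 50) (const_int 8) (const_int 0))
            (reg 51))

   records both a use and a def of reg 50, each marked
   DF_REF_READ_WRITE, since the bits outside the extracted field
   survive the store.  */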
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
#include "rtl.h"
#include "tm_p.h"
#include "insn-config.h"
#include "recog.h"
#include "function.h"
#include "regs.h"
#include "alloc-pool.h"
#include "hard-reg-set.h"
#include "basic-block.h"
#include "sbitmap.h"
#include "bitmap.h"
#include "df.h"
#include "fibheap.h"
#define FOR_EACH_BB_IN_BITMAP(BITMAP, MIN, BB, CODE)	\
  do							\
    {							\
      unsigned int node_;				\
      EXECUTE_IF_SET_IN_BITMAP (BITMAP, MIN, node_,	\
	{(BB) = BASIC_BLOCK (node_); CODE;});		\
    }							\
  while (0)
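
/* For example, the bitmap walkers later in this file all take the
   form (BB must be a basic_block variable):

     FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
       {
	 df_bb_refs_record (df, bb);
       });
*/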
static alloc_pool df_ref_pool;
static alloc_pool df_link_pool;
static struct df *ddf;
static void df_reg_table_realloc (struct df *, int);
static void df_insn_table_realloc (struct df *, unsigned int);
static void df_bitmaps_alloc (struct df *, int);
static void df_bitmaps_free (struct df *, int);
static void df_free (struct df *);
static void df_alloc (struct df *, int);

static rtx df_reg_clobber_gen (unsigned int);
static rtx df_reg_use_gen (unsigned int);

static inline struct df_link *df_link_create (struct ref *, struct df_link *);
static struct df_link *df_ref_unlink (struct df_link **, struct ref *);
static void df_def_unlink (struct df *, struct ref *);
static void df_use_unlink (struct df *, struct ref *);
static void df_insn_refs_unlink (struct df *, basic_block, rtx);
#if 0
static void df_bb_refs_unlink (struct df *, basic_block);
static void df_refs_unlink (struct df *, bitmap);
#endif

static struct ref *df_ref_create (struct df *, rtx, rtx *, rtx,
				  enum df_ref_type, enum df_ref_flags);
static void df_ref_record_1 (struct df *, rtx, rtx *, rtx, enum df_ref_type,
			     enum df_ref_flags);
static void df_ref_record (struct df *, rtx, rtx *, rtx, enum df_ref_type,
			   enum df_ref_flags);
static void df_def_record_1 (struct df *, rtx, basic_block, rtx);
static void df_defs_record (struct df *, rtx, basic_block, rtx);
static void df_uses_record (struct df *, rtx *, enum df_ref_type,
			    basic_block, rtx, enum df_ref_flags);
static void df_insn_refs_record (struct df *, basic_block, rtx);
static void df_bb_refs_record (struct df *, basic_block);
static void df_refs_record (struct df *, bitmap);

static void df_bb_reg_def_chain_create (struct df *, basic_block);
static void df_reg_def_chain_create (struct df *, bitmap);
static void df_bb_reg_use_chain_create (struct df *, basic_block);
static void df_reg_use_chain_create (struct df *, bitmap);
static void df_bb_du_chain_create (struct df *, basic_block, bitmap);
static void df_du_chain_create (struct df *, bitmap);
static void df_bb_ud_chain_create (struct df *, basic_block);
static void df_ud_chain_create (struct df *, bitmap);
static void df_bb_rd_local_compute (struct df *, basic_block);
static void df_rd_local_compute (struct df *, bitmap);
static void df_bb_ru_local_compute (struct df *, basic_block);
static void df_ru_local_compute (struct df *, bitmap);
static void df_bb_lr_local_compute (struct df *, basic_block);
static void df_lr_local_compute (struct df *, bitmap);
static void df_bb_reg_info_compute (struct df *, basic_block, bitmap);
static void df_reg_info_compute (struct df *, bitmap);

static int df_bb_luids_set (struct df *df, basic_block);
static int df_luids_set (struct df *df, bitmap);

static int df_modified_p (struct df *, bitmap);
static int df_refs_queue (struct df *);
static int df_refs_process (struct df *);
static int df_bb_refs_update (struct df *, basic_block);
static int df_refs_update (struct df *);
static void df_analyse_1 (struct df *, bitmap, int, int);

static void df_insns_modify (struct df *, basic_block, rtx, rtx);
static int df_rtx_mem_replace (rtx *, void *);
static int df_rtx_reg_replace (rtx *, void *);
void df_refs_reg_replace (struct df *, bitmap, struct df_link *, rtx, rtx);

static int df_def_dominates_all_uses_p (struct df *, struct ref *def);
static int df_def_dominates_uses_p (struct df *, struct ref *def, bitmap);
static struct ref *df_bb_regno_last_use_find (struct df *, basic_block,
					      unsigned int);
static struct ref *df_bb_regno_first_def_find (struct df *, basic_block,
					       unsigned int);
static struct ref *df_bb_insn_regno_last_use_find (struct df *, basic_block,
						   rtx, unsigned int);
static struct ref *df_bb_insn_regno_first_def_find (struct df *, basic_block,
						    rtx, unsigned int);

static void df_chain_dump (struct df_link *, FILE *file);
static void df_chain_dump_regno (struct df_link *, FILE *file);
static void df_regno_debug (struct df *, unsigned int, FILE *);
static void df_ref_debug (struct df *, struct ref *, FILE *);
static void df_rd_transfer_function (int, int *, bitmap, bitmap, bitmap,
				     bitmap, void *);
static void df_ru_transfer_function (int, int *, bitmap, bitmap, bitmap,
				     bitmap, void *);
static void df_lr_transfer_function (int, int *, bitmap, bitmap, bitmap,
				     bitmap, void *);
static void hybrid_search_bitmap (basic_block, bitmap *, bitmap *,
				  bitmap *, bitmap *, enum df_flow_dir,
				  enum df_confluence_op,
				  transfer_function_bitmap,
				  sbitmap, sbitmap, void *);
static void hybrid_search_sbitmap (basic_block, sbitmap *, sbitmap *,
				   sbitmap *, sbitmap *, enum df_flow_dir,
				   enum df_confluence_op,
				   transfer_function_sbitmap,
				   sbitmap, sbitmap, void *);
/* Local memory allocation/deallocation routines.  */


/* Increase the insn info table to have space for at least SIZE + 1
   elements.  */
static void
df_insn_table_realloc (struct df *df, unsigned int size)
{
  size++;
  if (size <= df->insn_size)
    return;

  /* Make the table a little larger than requested, so we do not need
     to enlarge it so often.  */
  size += df->insn_size / 4;

  df->insns = xrealloc (df->insns, size * sizeof (struct insn_info));

  memset (df->insns + df->insn_size, 0,
	  (size - df->insn_size) * sizeof (struct insn_info));

  df->insn_size = size;

  if (! df->insns_modified)
    {
      df->insns_modified = BITMAP_XMALLOC ();
      bitmap_zero (df->insns_modified);
    }
}
/* Increase the reg info table by SIZE more elements.  */
static void
df_reg_table_realloc (struct df *df, int size)
{
  /* Make table 25 percent larger by default.  */
  if (! size)
    size = df->reg_size / 4;

  size += df->reg_size;
  if (size < max_reg_num ())
    size = max_reg_num ();

  df->regs = xrealloc (df->regs, size * sizeof (struct reg_info));

  /* Zero the new entries.  */
  memset (df->regs + df->reg_size, 0,
	  (size - df->reg_size) * sizeof (struct reg_info));

  df->reg_size = size;
}
/* Allocate bitmaps for each basic block.  */
static void
df_bitmaps_alloc (struct df *df, int flags)
{
  int dflags = 0;
  basic_block bb;

  /* Free the bitmaps if they need resizing.  */
  if ((flags & DF_LR) && df->n_regs < (unsigned int) max_reg_num ())
    dflags |= DF_LR | DF_RU;
  if ((flags & DF_RU) && df->n_uses < df->use_id)
    dflags |= DF_RU;
  if ((flags & DF_RD) && df->n_defs < df->def_id)
    dflags |= DF_RD;

  if (dflags)
    df_bitmaps_free (df, dflags);

  df->n_defs = df->def_id;
  df->n_uses = df->use_id;

  FOR_EACH_BB (bb)
    {
      struct bb_info *bb_info = DF_BB_INFO (df, bb);

      if (flags & DF_RD && ! bb_info->rd_in)
	{
	  /* Allocate bitmaps for reaching definitions.  */
	  bb_info->rd_kill = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->rd_kill);
	  bb_info->rd_gen = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->rd_gen);
	  bb_info->rd_in = BITMAP_XMALLOC ();
	  bb_info->rd_out = BITMAP_XMALLOC ();
	  bb_info->rd_valid = 0;
	}

      if (flags & DF_RU && ! bb_info->ru_in)
	{
	  /* Allocate bitmaps for upward exposed uses.  */
	  bb_info->ru_kill = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->ru_kill);
	  /* Note the lack of symmetry.  */
	  bb_info->ru_gen = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->ru_gen);
	  bb_info->ru_in = BITMAP_XMALLOC ();
	  bb_info->ru_out = BITMAP_XMALLOC ();
	  bb_info->ru_valid = 0;
	}

      if (flags & DF_LR && ! bb_info->lr_in)
	{
	  /* Allocate bitmaps for live variables.  */
	  bb_info->lr_def = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->lr_def);
	  bb_info->lr_use = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->lr_use);
	  bb_info->lr_in = BITMAP_XMALLOC ();
	  bb_info->lr_out = BITMAP_XMALLOC ();
	  bb_info->lr_valid = 0;
	}
    }
}
/* Free bitmaps for each basic block.  */
static void
df_bitmaps_free (struct df *df, int flags)
{
  basic_block bb;

  FOR_EACH_BB (bb)
    {
      struct bb_info *bb_info = DF_BB_INFO (df, bb);

      if (!bb_info)
	continue;

      if ((flags & DF_RD) && bb_info->rd_in)
	{
	  /* Free bitmaps for reaching definitions.  */
	  BITMAP_XFREE (bb_info->rd_kill);
	  bb_info->rd_kill = NULL;
	  BITMAP_XFREE (bb_info->rd_gen);
	  bb_info->rd_gen = NULL;
	  BITMAP_XFREE (bb_info->rd_in);
	  bb_info->rd_in = NULL;
	  BITMAP_XFREE (bb_info->rd_out);
	  bb_info->rd_out = NULL;
	}

      if ((flags & DF_RU) && bb_info->ru_in)
	{
	  /* Free bitmaps for upward exposed uses.  */
	  BITMAP_XFREE (bb_info->ru_kill);
	  bb_info->ru_kill = NULL;
	  BITMAP_XFREE (bb_info->ru_gen);
	  bb_info->ru_gen = NULL;
	  BITMAP_XFREE (bb_info->ru_in);
	  bb_info->ru_in = NULL;
	  BITMAP_XFREE (bb_info->ru_out);
	  bb_info->ru_out = NULL;
	}

      if ((flags & DF_LR) && bb_info->lr_in)
	{
	  /* Free bitmaps for live variables.  */
	  BITMAP_XFREE (bb_info->lr_def);
	  bb_info->lr_def = NULL;
	  BITMAP_XFREE (bb_info->lr_use);
	  bb_info->lr_use = NULL;
	  BITMAP_XFREE (bb_info->lr_in);
	  bb_info->lr_in = NULL;
	  BITMAP_XFREE (bb_info->lr_out);
	  bb_info->lr_out = NULL;
	}
    }
  df->flags &= ~(flags & (DF_RD | DF_RU | DF_LR));
}
/* Allocate and initialize dataflow memory.  */
static void
df_alloc (struct df *df, int n_regs)
{
  int n_insns;
  basic_block bb;

  df_link_pool = create_alloc_pool ("df_link pool", sizeof (struct df_link),
				    100);
  df_ref_pool = create_alloc_pool ("df_ref pool", sizeof (struct ref), 100);

  /* Perhaps we should use LUIDs to save memory for the insn_refs
     table.  This is only a small saving; a few pointers.  */
  n_insns = get_max_uid () + 1;

  df->def_id = 0;
  df->n_defs = 0;
  /* Approximate number of defs by number of insns.  */
  df->def_size = n_insns;
  df->defs = xmalloc (df->def_size * sizeof (*df->defs));

  df->use_id = 0;
  df->n_uses = 0;
  /* Approximate number of uses by twice number of insns.  */
  df->use_size = n_insns * 2;
  df->uses = xmalloc (df->use_size * sizeof (*df->uses));

  df->n_regs = n_regs;
  df->n_bbs = last_basic_block;

  /* Allocate temporary working array used during local dataflow analysis.  */
  df->reg_def_last = xmalloc (df->n_regs * sizeof (struct ref *));

  df_insn_table_realloc (df, n_insns);

  df_reg_table_realloc (df, df->n_regs);

  df->bbs_modified = BITMAP_XMALLOC ();
  bitmap_zero (df->bbs_modified);

  df->flags = 0;

  df->bbs = xcalloc (last_basic_block, sizeof (struct bb_info));

  df->all_blocks = BITMAP_XMALLOC ();
  FOR_EACH_BB (bb)
    bitmap_set_bit (df->all_blocks, bb->index);
}
/* Free all the dataflow info.  */
static void
df_free (struct df *df)
{
  df_bitmaps_free (df, DF_ALL);

  if (df->bbs)
    free (df->bbs);
  df->bbs = 0;

  if (df->insns)
    free (df->insns);
  df->insns = 0;
  df->insn_size = 0;

  if (df->defs)
    free (df->defs);
  df->defs = 0;
  df->def_size = 0;
  df->def_id = 0;

  if (df->uses)
    free (df->uses);
  df->uses = 0;
  df->use_size = 0;
  df->use_id = 0;

  if (df->regs)
    free (df->regs);
  df->regs = 0;
  df->reg_size = 0;

  if (df->bbs_modified)
    BITMAP_XFREE (df->bbs_modified);
  df->bbs_modified = 0;

  if (df->insns_modified)
    BITMAP_XFREE (df->insns_modified);
  df->insns_modified = 0;

  BITMAP_XFREE (df->all_blocks);
  df->all_blocks = 0;

  free_alloc_pool (df_ref_pool);
  free_alloc_pool (df_link_pool);
}
/* Local miscellaneous routines.  */

/* Return a USE for register REGNO.  */
static rtx df_reg_use_gen (unsigned int regno)
{
  rtx reg;
  rtx use;

  reg = regno_reg_rtx[regno];

  use = gen_rtx_USE (GET_MODE (reg), reg);
  return use;
}


/* Return a CLOBBER for register REGNO.  */
static rtx df_reg_clobber_gen (unsigned int regno)
{
  rtx reg;
  rtx use;

  reg = regno_reg_rtx[regno];

  use = gen_rtx_CLOBBER (GET_MODE (reg), reg);
  return use;
}
/* Local chain manipulation routines.  */

/* Create a link in a def-use or use-def chain.  */
static inline struct df_link *
df_link_create (struct ref *ref, struct df_link *next)
{
  struct df_link *link;

  link = pool_alloc (df_link_pool);
  link->next = next;
  link->ref = ref;
  return link;
}
/* Unlink REF from the chain pointed to by PHEAD.  */
static struct df_link *
df_ref_unlink (struct df_link **phead, struct ref *ref)
{
  struct df_link *link = *phead;

  if (link)
    {
      if (! link->next)
	{
	  /* Only a single ref.  It must be the one we want.
	     If not, the def-use and use-def chains are likely to
	     be inconsistent.  */
	  if (link->ref != ref)
	    abort ();
	  /* Now have an empty chain.  */
	  *phead = NULL;
	}
      else
	{
	  /* Multiple refs.  One of them must be us.  */
	  if (link->ref == ref)
	    *phead = link->next;
	  else
	    {
	      /* Follow chain.  */
	      for (; link->next; link = link->next)
		{
		  if (link->next->ref == ref)
		    {
		      /* Unlink from list.  */
		      link->next = link->next->next;
		      return link->next;
		    }
		}
	    }
	}
    }
  return link;
}
/* Unlink REF from all def-use/use-def chains, etc.  */
int
df_ref_remove (struct df *df, struct ref *ref)
{
  if (DF_REF_REG_DEF_P (ref))
    {
      df_def_unlink (df, ref);
      df_ref_unlink (&df->insns[DF_REF_INSN_UID (ref)].defs, ref);
    }
  else
    {
      df_use_unlink (df, ref);
      df_ref_unlink (&df->insns[DF_REF_INSN_UID (ref)].uses, ref);
    }
  return 1;
}
/* Unlink DEF from use-def and reg-def chains.  */
static void
df_def_unlink (struct df *df ATTRIBUTE_UNUSED, struct ref *def)
{
  struct df_link *du_link;
  unsigned int dregno = DF_REF_REGNO (def);

  /* Follow def-use chain to find all the uses of this def.  */
  for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
    {
      struct ref *use = du_link->ref;

      /* Unlink this def from the use-def chain.  */
      df_ref_unlink (&DF_REF_CHAIN (use), def);
    }
  DF_REF_CHAIN (def) = 0;

  /* Unlink def from reg-def chain.  */
  df_ref_unlink (&df->regs[dregno].defs, def);

  df->defs[DF_REF_ID (def)] = 0;
}
/* Unlink use from def-use and reg-use chains.  */
static void
df_use_unlink (struct df *df ATTRIBUTE_UNUSED, struct ref *use)
{
  struct df_link *ud_link;
  unsigned int uregno = DF_REF_REGNO (use);

  /* Follow use-def chain to find all the defs of this use.  */
  for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
    {
      struct ref *def = ud_link->ref;

      /* Unlink this use from the def-use chain.  */
      df_ref_unlink (&DF_REF_CHAIN (def), use);
    }
  DF_REF_CHAIN (use) = 0;

  /* Unlink use from reg-use chain.  */
  df_ref_unlink (&df->regs[uregno].uses, use);

  df->uses[DF_REF_ID (use)] = 0;
}
/* Local routines for recording refs.  */


/* Create a new ref of type DF_REF_TYPE for register REG at address
   LOC within INSN of BB.  */
static struct ref *
df_ref_create (struct df *df, rtx reg, rtx *loc, rtx insn,
	       enum df_ref_type ref_type, enum df_ref_flags ref_flags)
{
  struct ref *this_ref;

  this_ref = pool_alloc (df_ref_pool);
  DF_REF_REG (this_ref) = reg;
  DF_REF_LOC (this_ref) = loc;
  DF_REF_INSN (this_ref) = insn;
  DF_REF_CHAIN (this_ref) = 0;
  DF_REF_TYPE (this_ref) = ref_type;
  DF_REF_FLAGS (this_ref) = ref_flags;

  if (ref_type == DF_REF_REG_DEF)
    {
      if (df->def_id >= df->def_size)
	{
	  /* Make table 25 percent larger.  */
	  df->def_size += (df->def_size / 4);
	  df->defs = xrealloc (df->defs,
			       df->def_size * sizeof (*df->defs));
	}
      DF_REF_ID (this_ref) = df->def_id;
      df->defs[df->def_id++] = this_ref;
    }
  else
    {
      if (df->use_id >= df->use_size)
	{
	  /* Make table 25 percent larger.  */
	  df->use_size += (df->use_size / 4);
	  df->uses = xrealloc (df->uses,
			       df->use_size * sizeof (*df->uses));
	}
      DF_REF_ID (this_ref) = df->use_id;
      df->uses[df->use_id++] = this_ref;
    }
  return this_ref;
}
/* Create a new reference of type DF_REF_TYPE for a single register REG,
   used inside the LOC rtx of INSN.  */
static void
df_ref_record_1 (struct df *df, rtx reg, rtx *loc, rtx insn,
		 enum df_ref_type ref_type, enum df_ref_flags ref_flags)
{
  df_ref_create (df, reg, loc, insn, ref_type, ref_flags);
}
/* Create new references of type DF_REF_TYPE for each part of register REG
   at address LOC within INSN of BB.  */
static void
df_ref_record (struct df *df, rtx reg, rtx *loc, rtx insn,
	       enum df_ref_type ref_type, enum df_ref_flags ref_flags)
{
  unsigned int regno;

  if (GET_CODE (reg) != REG && GET_CODE (reg) != SUBREG)
    abort ();

  /* For the reg allocator we are interested in some SUBREG rtx's, but not
     all.  Notably only those representing a word extraction from a multi-word
     reg.  As written in the documentation those should have the form
     (subreg:SI (reg:M A) N), with size(SImode) > size(Mmode).
     XXX Is that true?  We could also use the global word_mode variable.  */
  if (GET_CODE (reg) == SUBREG
      && (GET_MODE_SIZE (GET_MODE (reg)) < GET_MODE_SIZE (word_mode)
	  || GET_MODE_SIZE (GET_MODE (reg))
	       >= GET_MODE_SIZE (GET_MODE (SUBREG_REG (reg)))))
    {
      loc = &SUBREG_REG (reg);
      reg = *loc;
      ref_flags |= DF_REF_STRIPPED;
    }

  regno = REGNO (GET_CODE (reg) == SUBREG ? SUBREG_REG (reg) : reg);
  if (regno < FIRST_PSEUDO_REGISTER)
    {
      int i;
      int endregno;

      if (! (df->flags & DF_HARD_REGS))
	return;

      /* GET_MODE (reg) is correct here.  We do not want to go into a SUBREG
	 for the mode, because we only want to add references to regs, which
	 are really referenced.  E.g., a (subreg:SI (reg:DI 0) 0) does _not_
	 reference the whole reg 0 in DI mode (which would also include
	 reg 1, at least, if 0 and 1 are SImode registers).  */
      endregno = HARD_REGNO_NREGS (regno, GET_MODE (reg));
      if (GET_CODE (reg) == SUBREG)
	regno += subreg_regno_offset (regno, GET_MODE (SUBREG_REG (reg)),
				      SUBREG_BYTE (reg), GET_MODE (reg));
      endregno += regno;

      for (i = regno; i < endregno; i++)
	df_ref_record_1 (df, regno_reg_rtx[i],
			 loc, insn, ref_type, ref_flags);
    }
  else
    {
      df_ref_record_1 (df, reg, loc, insn, ref_type, ref_flags);
    }
}
/* Return nonzero if writes to paradoxical SUBREGs, or SUBREGs which
   are too narrow, are read-modify-write.  */
bool
read_modify_subreg_p (rtx x)
{
  unsigned int isize, osize;
  if (GET_CODE (x) != SUBREG)
    return false;
  isize = GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)));
  osize = GET_MODE_SIZE (GET_MODE (x));
  /* Paradoxical subreg writes don't leave a trace of the old content.  */
  return (isize > osize && isize > UNITS_PER_WORD);
}
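
/* For example, assuming a 32-bit target where UNITS_PER_WORD == 4 (an
   illustrative assumption): writing (subreg:SI (reg:DI d) 0) gives
   isize == 8 and osize == 4, so the predicate is true; the other word
   of d survives the store.  Writing (subreg:QI (reg:SI s) 0) gives
   isize == 4, which is not greater than UNITS_PER_WORD, so the
   predicate is false.  */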
/* Process all the registers defined in the rtx, X.  */
static void
df_def_record_1 (struct df *df, rtx x, basic_block bb, rtx insn)
{
  rtx *loc;
  rtx dst;
  enum df_ref_flags flags = 0;

  /* We may recursively call ourselves on EXPR_LIST when dealing with PARALLEL
     construct.  */
  if (GET_CODE (x) == EXPR_LIST || GET_CODE (x) == CLOBBER)
    loc = &XEXP (x, 0);
  else
    loc = &SET_DEST (x);
  dst = *loc;

  /* Some targets place small structures in registers for
     return values of functions.  */
  if (GET_CODE (dst) == PARALLEL && GET_MODE (dst) == BLKmode)
    {
      int i;

      for (i = XVECLEN (dst, 0) - 1; i >= 0; i--)
	{
	  rtx temp = XVECEXP (dst, 0, i);
	  if (GET_CODE (temp) == EXPR_LIST || GET_CODE (temp) == CLOBBER
	      || GET_CODE (temp) == SET)
	    df_def_record_1 (df, temp, bb, insn);
	}
      return;
    }

  /* Maybe, we should flag the use of STRICT_LOW_PART somehow.  It might
     be handy for the reg allocator.  */
  while (GET_CODE (dst) == STRICT_LOW_PART
	 || GET_CODE (dst) == ZERO_EXTRACT
	 || GET_CODE (dst) == SIGN_EXTRACT
	 || ((df->flags & DF_FOR_REGALLOC) == 0
	     && read_modify_subreg_p (dst)))
    {
      /* Strict low part always contains SUBREG, but we do not want to make
	 it appear outside, as whole register is always considered.  */
      if (GET_CODE (dst) == STRICT_LOW_PART)
	{
	  loc = &XEXP (dst, 0);
	  dst = *loc;
	}
      loc = &XEXP (dst, 0);
      dst = *loc;
      flags |= DF_REF_READ_WRITE;
    }

  if (GET_CODE (dst) == REG
      || (GET_CODE (dst) == SUBREG && GET_CODE (SUBREG_REG (dst)) == REG))
    df_ref_record (df, dst, loc, insn, DF_REF_REG_DEF, flags);
}
/* Process all the registers defined in the pattern rtx, X.  */
static void
df_defs_record (struct df *df, rtx x, basic_block bb, rtx insn)
{
  RTX_CODE code = GET_CODE (x);

  if (code == SET || code == CLOBBER)
    {
      /* Mark the single def within the pattern.  */
      df_def_record_1 (df, x, bb, insn);
    }
  else if (code == PARALLEL)
    {
      int i;

      /* Mark the multiple defs within the pattern.  */
      for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
	{
	  code = GET_CODE (XVECEXP (x, 0, i));
	  if (code == SET || code == CLOBBER)
	    df_def_record_1 (df, XVECEXP (x, 0, i), bb, insn);
	}
    }
}
/* Process all the registers used in the rtx at address LOC.  */
static void
df_uses_record (struct df *df, rtx *loc, enum df_ref_type ref_type,
		basic_block bb, rtx insn, enum df_ref_flags flags)
{
  RTX_CODE code;
  rtx x;
 retry:
  x = *loc;
  if (!x)
    return;
  code = GET_CODE (x);
  switch (code)
    {
    case LABEL_REF:
    case SYMBOL_REF:
    case CONST_INT:
    case CONST:
    case CONST_DOUBLE:
    case CONST_VECTOR:
    case PC:
    case CC0:
    case ADDR_VEC:
    case ADDR_DIFF_VEC:
      return;

    case CLOBBER:
      /* If we are clobbering a MEM, mark any registers inside the address
	 as being used.  */
      if (GET_CODE (XEXP (x, 0)) == MEM)
	df_uses_record (df, &XEXP (XEXP (x, 0), 0),
			DF_REF_REG_MEM_STORE, bb, insn, flags);

      /* If we're clobbering a REG then we have a def so ignore.  */
      return;

    case MEM:
      df_uses_record (df, &XEXP (x, 0), DF_REF_REG_MEM_LOAD, bb, insn, 0);
      return;

    case SUBREG:
      /* While we're here, optimize this case.  */

      /* In case the SUBREG is not of a REG, do not optimize.  */
      if (GET_CODE (SUBREG_REG (x)) != REG)
	{
	  loc = &SUBREG_REG (x);
	  df_uses_record (df, loc, ref_type, bb, insn, flags);
	  return;
	}
      /* ... Fall through ...  */

    case REG:
      df_ref_record (df, x, loc, insn, ref_type, flags);
      return;

    case SET:
      {
	rtx dst = SET_DEST (x);

	df_uses_record (df, &SET_SRC (x), DF_REF_REG_USE, bb, insn, 0);

	switch (GET_CODE (dst))
	  {
	  case SUBREG:
	    if ((df->flags & DF_FOR_REGALLOC) == 0
		&& read_modify_subreg_p (dst))
	      {
		df_uses_record (df, &SUBREG_REG (dst), DF_REF_REG_USE, bb,
				insn, DF_REF_READ_WRITE);
		break;
	      }
	    /* ... FALLTHRU ...  */
	  case REG:
	  case PARALLEL:
	  case PC:
	  case CC0:
	    break;
	  case MEM:
	    df_uses_record (df, &XEXP (dst, 0),
			    DF_REF_REG_MEM_STORE,
			    bb, insn, 0);
	    break;
	  case STRICT_LOW_PART:
	    /* A strict_low_part uses the whole REG and not just the
	       SUBREG.  */
	    dst = XEXP (dst, 0);
	    if (GET_CODE (dst) != SUBREG)
	      abort ();
	    df_uses_record (df, &SUBREG_REG (dst), DF_REF_REG_USE, bb,
			    insn, DF_REF_READ_WRITE);
	    break;
	  case ZERO_EXTRACT:
	  case SIGN_EXTRACT:
	    df_uses_record (df, &XEXP (dst, 0), DF_REF_REG_USE, bb, insn,
			    DF_REF_READ_WRITE);
	    df_uses_record (df, &XEXP (dst, 1), DF_REF_REG_USE, bb, insn, 0);
	    df_uses_record (df, &XEXP (dst, 2), DF_REF_REG_USE, bb, insn, 0);
	    dst = XEXP (dst, 0);
	    break;
	  default:
	    abort ();
	  }
	return;
      }

    case RETURN:
      break;

    case ASM_OPERANDS:
    case UNSPEC_VOLATILE:
    case TRAP_IF:
    case ASM_INPUT:
      {
	/* Traditional and volatile asm instructions must be considered to use
	   and clobber all hard registers, all pseudo-registers and all of
	   memory.  So must TRAP_IF and UNSPEC_VOLATILE operations.

	   Consider for instance a volatile asm that changes the fpu rounding
	   mode.  An insn should not be moved across this even if it only uses
	   pseudo-regs because it might give an incorrectly rounded result.

	   For now, just mark any regs we can find in ASM_OPERANDS as
	   used.  */

	/* For all ASM_OPERANDS, we must traverse the vector of input operands.
	   We cannot just fall through here since then we would be confused
	   by the ASM_INPUT rtx inside ASM_OPERANDS, which do not indicate
	   traditional asms unlike their normal usage.  */
	if (code == ASM_OPERANDS)
	  {
	    int j;

	    for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
	      df_uses_record (df, &ASM_OPERANDS_INPUT (x, j),
			      DF_REF_REG_USE, bb, insn, 0);
	    return;
	  }
	break;
      }

    case PRE_DEC:
    case POST_DEC:
    case PRE_INC:
    case POST_INC:
    case PRE_MODIFY:
    case POST_MODIFY:
      /* Catch the def of the register being modified.  */
      df_ref_record (df, XEXP (x, 0), &XEXP (x, 0), insn,
		     DF_REF_REG_DEF, DF_REF_READ_WRITE);

      /* ... Fall through to handle uses ...  */

    default:
      break;
    }

  /* Recursively scan the operands of this expression.  */
  {
    const char *fmt = GET_RTX_FORMAT (code);
    int i;

    for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
      {
	if (fmt[i] == 'e')
	  {
	    /* Tail recursive case: save a function call level.  */
	    if (i == 0)
	      {
		loc = &XEXP (x, 0);
		goto retry;
	      }
	    df_uses_record (df, &XEXP (x, i), ref_type, bb, insn, flags);
	  }
	else if (fmt[i] == 'E')
	  {
	    int j;
	    for (j = 0; j < XVECLEN (x, i); j++)
	      df_uses_record (df, &XVECEXP (x, i, j), ref_type,
			      bb, insn, flags);
	  }
      }
  }
}
/* Record all the refs within INSN of basic block BB.  */
static void
df_insn_refs_record (struct df *df, basic_block bb, rtx insn)
{
  int i;

  if (INSN_P (insn))
    {
      rtx note;

      /* Record register defs.  */
      df_defs_record (df, PATTERN (insn), bb, insn);

      if (df->flags & DF_EQUIV_NOTES)
	for (note = REG_NOTES (insn); note;
	     note = XEXP (note, 1))
	  {
	    switch (REG_NOTE_KIND (note))
	      {
	      case REG_EQUIV:
	      case REG_EQUAL:
		df_uses_record (df, &XEXP (note, 0), DF_REF_REG_USE,
				bb, insn, 0);
	      default:
		break;
	      }
	  }

      if (GET_CODE (insn) == CALL_INSN)
	{
	  rtx note;
	  rtx x;

	  /* Record the registers used to pass arguments.  */
	  for (note = CALL_INSN_FUNCTION_USAGE (insn); note;
	       note = XEXP (note, 1))
	    {
	      if (GET_CODE (XEXP (note, 0)) == USE)
		df_uses_record (df, &XEXP (XEXP (note, 0), 0), DF_REF_REG_USE,
				bb, insn, 0);
	    }

	  /* The stack ptr is used (honorarily) by a CALL insn.  */
	  x = df_reg_use_gen (STACK_POINTER_REGNUM);
	  df_uses_record (df, &XEXP (x, 0), DF_REF_REG_USE, bb, insn, 0);

	  if (df->flags & DF_HARD_REGS)
	    {
	      /* Calls may also reference any of the global registers,
		 so they are recorded as used.  */
	      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
		if (global_regs[i])
		  {
		    x = df_reg_use_gen (i);
		    df_uses_record (df, &SET_DEST (x),
				    DF_REF_REG_USE, bb, insn, 0);
		  }
	    }
	}

      /* Record the register uses.  */
      df_uses_record (df, &PATTERN (insn),
		      DF_REF_REG_USE, bb, insn, 0);

      if (GET_CODE (insn) == CALL_INSN)
	{
	  rtx note;

	  if (df->flags & DF_HARD_REGS)
	    {
	      /* Kill all registers invalidated by a call.  */
	      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
		if (TEST_HARD_REG_BIT (regs_invalidated_by_call, i))
		  {
		    rtx reg_clob = df_reg_clobber_gen (i);
		    df_defs_record (df, reg_clob, bb, insn);
		  }
	    }

	  /* There may be extra registers to be clobbered.  */
	  for (note = CALL_INSN_FUNCTION_USAGE (insn);
	       note;
	       note = XEXP (note, 1))
	    if (GET_CODE (XEXP (note, 0)) == CLOBBER)
	      df_defs_record (df, XEXP (note, 0), bb, insn);
	}
    }
}
/* Record all the refs within the basic block BB.  */
static void
df_bb_refs_record (struct df *df, basic_block bb)
{
  rtx insn;

  /* Scan the block an insn at a time from beginning to end.  */
  for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn))
    {
      if (INSN_P (insn))
	{
	  /* Record refs within INSN.  */
	  df_insn_refs_record (df, bb, insn);
	}
      if (insn == BB_END (bb))
	break;
    }
}
/* Record all the refs in the basic blocks specified by BLOCKS.  */
static void
df_refs_record (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_refs_record (df, bb);
    });
}
/* Dataflow analysis routines.  */


/* Create reg-def chains for basic block BB.  These are a list of
   definitions for each register.  */
static void
df_bb_reg_def_chain_create (struct df *df, basic_block bb)
{
  rtx insn;

  /* Perhaps the defs should be sorted using a depth first search
     of the CFG (or possibly a breadth first search).  We currently
     scan the basic blocks in reverse order so that the first defs
     appear at the start of the chain.  */

  for (insn = BB_END (bb); insn && insn != PREV_INSN (BB_HEAD (bb));
       insn = PREV_INSN (insn))
    {
      struct df_link *link;
      unsigned int uid = INSN_UID (insn);

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].defs; link; link = link->next)
	{
	  struct ref *def = link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  /* Do not add refs to the chain twice, i.e., only add new
	     refs.  XXX the same could be done by testing if the
	     current insn is a modified (or a new) one.  This would be
	     faster.  */
	  if (DF_REF_ID (def) < df->def_id_save)
	    continue;

	  df->regs[dregno].defs
	    = df_link_create (def, df->regs[dregno].defs);
	}
    }
}
/* Create reg-def chains for each basic block within BLOCKS.  These
   are a list of definitions for each register.  */
static void
df_reg_def_chain_create (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP/*_REV*/ (blocks, 0, bb,
    {
      df_bb_reg_def_chain_create (df, bb);
    });
}
/* Create reg-use chains for basic block BB.  These are a list of uses
   for each register.  */
static void
df_bb_reg_use_chain_create (struct df *df, basic_block bb)
{
  rtx insn;

  /* Scan in forward order so that the last uses appear at the start
     of the chain.  */

  for (insn = BB_HEAD (bb); insn && insn != NEXT_INSN (BB_END (bb));
       insn = NEXT_INSN (insn))
    {
      struct df_link *link;
      unsigned int uid = INSN_UID (insn);

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].uses; link; link = link->next)
	{
	  struct ref *use = link->ref;
	  unsigned int uregno = DF_REF_REGNO (use);

	  /* Do not add refs to the chain twice, i.e., only add new
	     refs.  XXX the same could be done by testing if the
	     current insn is a modified (or a new) one.  This would be
	     faster.  */
	  if (DF_REF_ID (use) < df->use_id_save)
	    continue;

	  df->regs[uregno].uses
	    = df_link_create (use, df->regs[uregno].uses);
	}
    }
}
/* Create reg-use chains for each basic block within BLOCKS.  These
   are a list of uses for each register.  */
static void
df_reg_use_chain_create (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_reg_use_chain_create (df, bb);
    });
}
/* Create def-use chains from reaching use bitmaps for basic block BB.  */
static void
df_bb_du_chain_create (struct df *df, basic_block bb, bitmap ru)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  bitmap_copy (ru, bb_info->ru_out);

  /* For each def in BB create a linked list (chain) of uses
     reached from the def.  */
  for (insn = BB_END (bb); insn && insn != PREV_INSN (BB_HEAD (bb));
       insn = PREV_INSN (insn))
    {
      struct df_link *def_link;
      struct df_link *use_link;
      unsigned int uid = INSN_UID (insn);

      if (! INSN_P (insn))
	continue;

      /* For each def in insn...  */
      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  DF_REF_CHAIN (def) = 0;

	  /* While the reg-use chains are not essential, it
	     is _much_ faster to search these short lists rather
	     than all the reaching uses, especially for large functions.  */
	  for (use_link = df->regs[dregno].uses; use_link;
	       use_link = use_link->next)
	    {
	      struct ref *use = use_link->ref;

	      if (bitmap_bit_p (ru, DF_REF_ID (use)))
		{
		  DF_REF_CHAIN (def)
		    = df_link_create (use, DF_REF_CHAIN (def));

		  bitmap_clear_bit (ru, DF_REF_ID (use));
		}
	    }
	}

      /* For each use in insn...  */
      for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
	{
	  struct ref *use = use_link->ref;
	  bitmap_set_bit (ru, DF_REF_ID (use));
	}
    }
}
/* Create def-use chains from reaching use bitmaps for basic blocks
   in BLOCKS.  */
static void
df_du_chain_create (struct df *df, bitmap blocks)
{
  bitmap ru;
  basic_block bb;

  ru = BITMAP_XMALLOC ();

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_du_chain_create (df, bb, ru);
    });

  BITMAP_XFREE (ru);
}
/* Create use-def chains from reaching def bitmaps for basic block BB.  */
static void
df_bb_ud_chain_create (struct df *df, basic_block bb)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  struct ref **reg_def_last = df->reg_def_last;
  rtx insn;

  memset (reg_def_last, 0, df->n_regs * sizeof (struct ref *));

  /* For each use in BB create a linked list (chain) of defs
     that reach the use.  */
  for (insn = BB_HEAD (bb); insn && insn != NEXT_INSN (BB_END (bb));
       insn = NEXT_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *use_link;
      struct df_link *def_link;

      if (! INSN_P (insn))
	continue;

      /* For each use in insn...  */
      for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
	{
	  struct ref *use = use_link->ref;
	  unsigned int regno = DF_REF_REGNO (use);

	  DF_REF_CHAIN (use) = 0;

	  /* Has regno been defined in this BB yet?  If so, use
	     the last def as the single entry for the use-def
	     chain for this use.  Otherwise, we need to add all
	     the defs using this regno that reach the start of
	     this BB.  */
	  if (reg_def_last[regno])
	    {
	      DF_REF_CHAIN (use)
		= df_link_create (reg_def_last[regno], 0);
	    }
	  else
	    {
	      /* While the reg-def chains are not essential, it is
		 _much_ faster to search these short lists rather than
		 all the reaching defs, especially for large
		 functions.  */
	      for (def_link = df->regs[regno].defs; def_link;
		   def_link = def_link->next)
		{
		  struct ref *def = def_link->ref;

		  if (bitmap_bit_p (bb_info->rd_in, DF_REF_ID (def)))
		    {
		      DF_REF_CHAIN (use)
			= df_link_create (def, DF_REF_CHAIN (use));
		    }
		}
	    }
	}

      /* For each def in insn... record the last def of each reg.  */
      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  int dregno = DF_REF_REGNO (def);

	  reg_def_last[dregno] = def;
	}
    }
}
/* Create use-def chains from reaching def bitmaps for basic blocks
   within BLOCKS.  */
static void
df_ud_chain_create (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_ud_chain_create (df, bb);
    });
}
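
/* The three transfer functions below all compute a union-of-difference
   over the per-block bitmaps:

     reaching defs (forward):   out = gen U (in - kill)
     reaching uses (backward):  in  = gen U (out - kill)
     live registers (backward): in  = use U (out - def)

   bitmap_union_of_diff returns nonzero if the destination bitmap
   changed, which is what drives the iterative solver to convergence.  */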
static void
df_rd_transfer_function (int bb ATTRIBUTE_UNUSED, int *changed, bitmap in,
			 bitmap out, bitmap gen, bitmap kill,
			 void *data ATTRIBUTE_UNUSED)
{
  *changed = bitmap_union_of_diff (out, gen, in, kill);
}


static void
df_ru_transfer_function (int bb ATTRIBUTE_UNUSED, int *changed, bitmap in,
			 bitmap out, bitmap gen, bitmap kill,
			 void *data ATTRIBUTE_UNUSED)
{
  *changed = bitmap_union_of_diff (in, gen, out, kill);
}


static void
df_lr_transfer_function (int bb ATTRIBUTE_UNUSED, int *changed, bitmap in,
			 bitmap out, bitmap use, bitmap def,
			 void *data ATTRIBUTE_UNUSED)
{
  *changed = bitmap_union_of_diff (in, use, out, def);
}
/* Compute local reaching def info for basic block BB.  */
static void
df_bb_rd_local_compute (struct df *df, basic_block bb)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  for (insn = BB_HEAD (bb); insn && insn != NEXT_INSN (BB_END (bb));
       insn = NEXT_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *def_link;

      if (! INSN_P (insn))
	continue;

      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  unsigned int regno = DF_REF_REGNO (def);
	  struct df_link *def2_link;

	  for (def2_link = df->regs[regno].defs; def2_link;
	       def2_link = def2_link->next)
	    {
	      struct ref *def2 = def2_link->ref;

	      /* Add all defs of this reg to the set of kills.  This
		 is greedy since many of these defs will not actually
		 be killed by this BB but it keeps things a lot
		 simpler.  */
	      bitmap_set_bit (bb_info->rd_kill, DF_REF_ID (def2));

	      /* Zap from the set of gens for this BB.  */
	      bitmap_clear_bit (bb_info->rd_gen, DF_REF_ID (def2));
	    }

	  bitmap_set_bit (bb_info->rd_gen, DF_REF_ID (def));
	}
    }

  bb_info->rd_valid = 1;
}
/* Compute local reaching def info for each basic block within BLOCKS.  */
static void
df_rd_local_compute (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_rd_local_compute (df, bb);
    });
}
/* Compute local reaching use (upward exposed use) info for basic
   block BB.  */
static void
df_bb_ru_local_compute (struct df *df, basic_block bb)
{
  /* This is much more tricky than computing reaching defs.  With
     reaching defs, defs get killed by other defs.  With upwards
     exposed uses, these get killed by defs with the same regno.  */

  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;


  for (insn = BB_END (bb); insn && insn != PREV_INSN (BB_HEAD (bb));
       insn = PREV_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *def_link;
      struct df_link *use_link;

      if (! INSN_P (insn))
	continue;

      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  for (use_link = df->regs[dregno].uses; use_link;
	       use_link = use_link->next)
	    {
	      struct ref *use = use_link->ref;

	      /* Add all uses of this reg to the set of kills.  This
		 is greedy since many of these uses will not actually
		 be killed by this BB but it keeps things a lot
		 simpler.  */
	      bitmap_set_bit (bb_info->ru_kill, DF_REF_ID (use));

	      /* Zap from the set of gens for this BB.  */
	      bitmap_clear_bit (bb_info->ru_gen, DF_REF_ID (use));
	    }
	}

      for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
	{
	  struct ref *use = use_link->ref;
	  /* Add use to set of gens in this BB.  */
	  bitmap_set_bit (bb_info->ru_gen, DF_REF_ID (use));
	}
    }
  bb_info->ru_valid = 1;
}
/* Compute local reaching use (upward exposed use) info for each basic
   block within BLOCKS.  */
static void
df_ru_local_compute (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_ru_local_compute (df, bb);
    });
}
/* Compute local live variable info for basic block BB.  */
static void
df_bb_lr_local_compute (struct df *df, basic_block bb)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  for (insn = BB_END (bb); insn && insn != PREV_INSN (BB_HEAD (bb));
       insn = PREV_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *link;

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].defs; link; link = link->next)
	{
	  struct ref *def = link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  /* Add def to set of defs in this BB.  */
	  bitmap_set_bit (bb_info->lr_def, dregno);

	  bitmap_clear_bit (bb_info->lr_use, dregno);
	}

      for (link = df->insns[uid].uses; link; link = link->next)
	{
	  struct ref *use = link->ref;
	  /* Add use to set of uses in this BB.  */
	  bitmap_set_bit (bb_info->lr_use, DF_REF_REGNO (use));
	}
    }
  bb_info->lr_valid = 1;
}
/* Compute local live variable info for each basic block within BLOCKS.  */
static void
df_lr_local_compute (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_lr_local_compute (df, bb);
    });
}
/* Compute register info: lifetime, bb, and number of defs and uses
   for basic block BB.  */
static void
df_bb_reg_info_compute (struct df *df, basic_block bb, bitmap live)
{
  struct reg_info *reg_info = df->regs;
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  bitmap_copy (live, bb_info->lr_out);

  for (insn = BB_END (bb); insn && insn != PREV_INSN (BB_HEAD (bb));
       insn = PREV_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      unsigned int regno;
      struct df_link *link;

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].defs; link; link = link->next)
	{
	  struct ref *def = link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  /* Kill this register.  */
	  bitmap_clear_bit (live, dregno);
	  reg_info[dregno].n_defs++;
	}

      for (link = df->insns[uid].uses; link; link = link->next)
	{
	  struct ref *use = link->ref;
	  unsigned int uregno = DF_REF_REGNO (use);

	  /* This register is now live.  */
	  bitmap_set_bit (live, uregno);
	  reg_info[uregno].n_uses++;
	}

      /* Increment lifetimes of all live registers.  */
      EXECUTE_IF_SET_IN_BITMAP (live, 0, regno,
	{
	  reg_info[regno].lifetime++;
	});
    }
}
/* Compute register info: lifetime, bb, and number of defs and uses.  */
static void
df_reg_info_compute (struct df *df, bitmap blocks)
{
  basic_block bb;
  bitmap live;

  live = BITMAP_XMALLOC ();

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_reg_info_compute (df, bb, live);
    });

  BITMAP_XFREE (live);
}
/* Assign LUIDs for BB.  */
static int
df_bb_luids_set (struct df *df, basic_block bb)
{
  rtx insn;
  int luid = 0;

  /* The LUIDs are monotonically increasing for each basic block.  */

  for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn))
    {
      if (INSN_P (insn))
	DF_INSN_LUID (df, insn) = luid++;
      DF_INSN_LUID (df, insn) = luid;

      if (insn == BB_END (bb))
	break;
    }
  return luid;
}
/* Assign LUIDs for each basic block within BLOCKS.  */
static int
df_luids_set (struct df *df, bitmap blocks)
{
  basic_block bb;
  int total = 0;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      total += df_bb_luids_set (df, bb);
    });
  return total;
}
/* Perform dataflow analysis using existing DF structure for blocks
   within BLOCKS.  If BLOCKS is zero, use all basic blocks in the CFG.  */
static void
df_analyse_1 (struct df *df, bitmap blocks, int flags, int update)
{
  int aflags;
  int dflags;
  int i;
  basic_block bb;

  dflags = 0;
  aflags = flags;
  if (flags & DF_UD_CHAIN)
    aflags |= DF_RD | DF_RD_CHAIN;

  if (flags & DF_DU_CHAIN)
    aflags |= DF_RU;

  if (flags & DF_RU)
    aflags |= DF_RU_CHAIN;

  if (flags & DF_REG_INFO)
    aflags |= DF_LR;

  if (! blocks)
    blocks = df->all_blocks;

  df->flags = flags;
  if (update)
    {
      df_refs_update (df);
      /* More fine grained incremental dataflow analysis would be
	 nice.  For now recompute the whole shebang for the
	 modified blocks.  */
#if 0
      df_refs_unlink (df, blocks);
#endif
      /* All the def-use, use-def chains can be potentially
	 modified by changes in one block.  The size of the
	 bitmaps can also change.  */
    }
  else
    {
      /* Scan the function for all register defs and uses.  */
      df_refs_queue (df);
      df_refs_record (df, blocks);

      /* Link all the new defs and uses to the insns.  */
      df_refs_process (df);
    }

  /* Allocate the bitmaps now the total number of defs and uses are
     known.  If the number of defs or uses has changed, then
     these bitmaps need to be reallocated.  */
  df_bitmaps_alloc (df, aflags);

  /* Set the LUIDs for each specified basic block.  */
  df_luids_set (df, blocks);

  /* Recreate reg-def and reg-use chains from scratch so that first
     def is at the head of the reg-def chain and the last use is at
     the head of the reg-use chain.  This is only important for
     regs local to a basic block as it speeds up searching.  */
  if (aflags & DF_RD_CHAIN)
    {
      df_reg_def_chain_create (df, blocks);
    }

  if (aflags & DF_RU_CHAIN)
    {
      df_reg_use_chain_create (df, blocks);
    }

  df->dfs_order = xmalloc (sizeof (int) * n_basic_blocks);
  df->rc_order = xmalloc (sizeof (int) * n_basic_blocks);
  df->rts_order = xmalloc (sizeof (int) * n_basic_blocks);
  df->inverse_dfs_map = xmalloc (sizeof (int) * last_basic_block);
  df->inverse_rc_map = xmalloc (sizeof (int) * last_basic_block);
  df->inverse_rts_map = xmalloc (sizeof (int) * last_basic_block);

  flow_depth_first_order_compute (df->dfs_order, df->rc_order);
  flow_reverse_top_sort_order_compute (df->rts_order);
  for (i = 0; i < n_basic_blocks; i++)
    {
      df->inverse_dfs_map[df->dfs_order[i]] = i;
      df->inverse_rc_map[df->rc_order[i]] = i;
      df->inverse_rts_map[df->rts_order[i]] = i;
    }
  if (aflags & DF_RD)
    {
      /* Compute the sets of gens and kills for the defs of each bb.  */
      df_rd_local_compute (df, df->flags & DF_RD ? blocks : df->all_blocks);
      {
	bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *gen = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *kill = xmalloc (sizeof (bitmap) * last_basic_block);
	FOR_EACH_BB (bb)
	  {
	    in[bb->index] = DF_BB_INFO (df, bb)->rd_in;
	    out[bb->index] = DF_BB_INFO (df, bb)->rd_out;
	    gen[bb->index] = DF_BB_INFO (df, bb)->rd_gen;
	    kill[bb->index] = DF_BB_INFO (df, bb)->rd_kill;
	  }
	iterative_dataflow_bitmap (in, out, gen, kill, df->all_blocks,
				   DF_FORWARD, DF_UNION, df_rd_transfer_function,
				   df->inverse_rc_map, NULL);
	free (in);
	free (out);
	free (gen);
	free (kill);
      }
    }

  if (aflags & DF_UD_CHAIN)
    {
      /* Create use-def chains.  */
      df_ud_chain_create (df, df->all_blocks);

      if (! (flags & DF_RD))
	dflags |= DF_RD;
    }

  if (aflags & DF_RU)
    {
      /* Compute the sets of gens and kills for the upwards exposed
	 uses in each bb.  */
      df_ru_local_compute (df, df->flags & DF_RU ? blocks : df->all_blocks);
      {
	bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *gen = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *kill = xmalloc (sizeof (bitmap) * last_basic_block);
	FOR_EACH_BB (bb)
	  {
	    in[bb->index] = DF_BB_INFO (df, bb)->ru_in;
	    out[bb->index] = DF_BB_INFO (df, bb)->ru_out;
	    gen[bb->index] = DF_BB_INFO (df, bb)->ru_gen;
	    kill[bb->index] = DF_BB_INFO (df, bb)->ru_kill;
	  }
	iterative_dataflow_bitmap (in, out, gen, kill, df->all_blocks,
				   DF_BACKWARD, DF_UNION, df_ru_transfer_function,
				   df->inverse_rts_map, NULL);
	free (in);
	free (out);
	free (gen);
	free (kill);
      }
    }

  if (aflags & DF_DU_CHAIN)
    {
      /* Create def-use chains.  */
      df_du_chain_create (df, df->all_blocks);

      if (! (flags & DF_RU))
	dflags |= DF_RU;
    }

  /* Free up bitmaps that are no longer required.  */
  if (dflags)
    df_bitmaps_free (df, dflags);

  if (aflags & DF_LR)
    {
      /* Compute the sets of defs and uses of live variables.  */
      df_lr_local_compute (df, df->flags & DF_LR ? blocks : df->all_blocks);
      {
	bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *use = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *def = xmalloc (sizeof (bitmap) * last_basic_block);
	FOR_EACH_BB (bb)
	  {
	    in[bb->index] = DF_BB_INFO (df, bb)->lr_in;
	    out[bb->index] = DF_BB_INFO (df, bb)->lr_out;
	    use[bb->index] = DF_BB_INFO (df, bb)->lr_use;
	    def[bb->index] = DF_BB_INFO (df, bb)->lr_def;
	  }
	iterative_dataflow_bitmap (in, out, use, def, df->all_blocks,
				   DF_BACKWARD, DF_UNION, df_lr_transfer_function,
				   df->inverse_rts_map, NULL);
	free (in);
	free (out);
	free (use);
	free (def);
      }
    }

  if (aflags & DF_REG_INFO)
    {
      df_reg_info_compute (df, df->all_blocks);
    }

  free (df->dfs_order);
  free (df->rc_order);
  free (df->rts_order);
  free (df->inverse_rc_map);
  free (df->inverse_dfs_map);
  free (df->inverse_rts_map);
}
/* Initialize dataflow analysis.  */
struct df *
df_init (void)
{
  struct df *df;

  df = xcalloc (1, sizeof (struct df));

  /* Squirrel away a global for debugging.  */
  ddf = df;

  return df;
}
/* Start queuing refs.  */
static int
df_refs_queue (struct df *df)
{
  df->def_id_save = df->def_id;
  df->use_id_save = df->use_id;
  /* ???? Perhaps we should save current obstack state so that we can
     unwind it.  */
  return 0;
}
/* Process queued refs.  */
static int
df_refs_process (struct df *df)
{
  unsigned int i;

  /* Build new insn-def chains.  */
  for (i = df->def_id_save; i != df->def_id; i++)
    {
      struct ref *def = df->defs[i];
      unsigned int uid = DF_REF_INSN_UID (def);

      /* Add def to head of def list for INSN.  */
      df->insns[uid].defs
	= df_link_create (def, df->insns[uid].defs);
    }

  /* Build new insn-use chains.  */
  for (i = df->use_id_save; i != df->use_id; i++)
    {
      struct ref *use = df->uses[i];
      unsigned int uid = DF_REF_INSN_UID (use);

      /* Add use to head of use list for INSN.  */
      df->insns[uid].uses
	= df_link_create (use, df->insns[uid].uses);
    }
  return 0;
}
2087 /* Update refs for basic block BB. */
2088 static int
2089 df_bb_refs_update (struct df *df, basic_block bb)
2091 rtx insn;
2092 int count = 0;
2094 /* While we have to scan the chain of insns for this BB, we do not
2095 need to allocate and queue a long chain of BB/INSN pairs. Using
2096 a bitmap for insns_modified saves memory and avoids queuing
2097 duplicates. */
2099 for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn))
2101 unsigned int uid;
2103 uid = INSN_UID (insn);
2105 if (bitmap_bit_p (df->insns_modified, uid))
2107 /* Delete any allocated refs of this insn. MPH, FIXME. */
2108 df_insn_refs_unlink (df, bb, insn);
2110 /* Scan the insn for refs. */
2111 df_insn_refs_record (df, bb, insn);
2113 count++;
2115 if (insn == BB_END (bb))
2116 break;
2118 return count;
2122 /* Process all the modified/deleted insns that were queued. */
2123 static int
2124 df_refs_update (struct df *df)
2126 basic_block bb;
2127 int count = 0;
2129 if ((unsigned int) max_reg_num () >= df->reg_size)
2130 df_reg_table_realloc (df, 0);
2132 df_refs_queue (df);
2134 FOR_EACH_BB_IN_BITMAP (df->bbs_modified, 0, bb,
2136 count += df_bb_refs_update (df, bb);
2139 df_refs_process (df);
2140 return count;
2144 /* Return nonzero if any of the requested blocks in the bitmap
2145 BLOCKS have been modified. */
2146 static int
2147 df_modified_p (struct df *df, bitmap blocks)
2149 int update = 0;
2150 basic_block bb;
2152 if (!df->n_bbs)
2153 return 0;
2155 FOR_EACH_BB (bb)
2156 if (bitmap_bit_p (df->bbs_modified, bb->index)
2157 && (! blocks || (blocks == (bitmap) -1) || bitmap_bit_p (blocks, bb->index)))
2159 update = 1;
2160 break;
2163 return update;
2167 /* Analyze dataflow info for the basic blocks specified by the bitmap
2168 BLOCKS, or for the whole CFG if BLOCKS is zero, or just for the
2169 modified blocks if BLOCKS is -1. */
2170 int
2171 df_analyse (struct df *df, bitmap blocks, int flags)
2173 int update;
2175 /* We could deal with additional basic blocks being created by
2176 rescanning everything again. */
2177 if (df->n_bbs && df->n_bbs != (unsigned int) last_basic_block)
2178 abort ();
2180 update = df_modified_p (df, blocks);
2181 if (update || (flags != df->flags))
2183 if (! blocks)
2185 if (df->n_bbs)
2187 /* Recompute everything from scratch. */
2188 df_free (df);
2190 /* Allocate and initialize data structures. */
2191 df_alloc (df, max_reg_num ());
2192 df_analyse_1 (df, 0, flags, 0);
2193 update = 1;
2195 else
2197 if (blocks == (bitmap) -1)
2198 blocks = df->bbs_modified;
2200 if (! df->n_bbs)
2201 abort ();
2203 df_analyse_1 (df, blocks, flags, 1);
2204 bitmap_zero (df->bbs_modified);
2205 bitmap_zero (df->insns_modified);
2208 return update;
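/* A minimal usage sketch, assuming a pass has already marked its
   changed insns with df_insn_modify or df_insn_delete:

	df_analyse (df, (bitmap) -1, DF_RD | DF_UD_CHAIN);

   Passing -1 restricts the update to the modified blocks; passing
   zero rescans the whole CFG.  The return value indicates whether an
   update took place.  */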
2212 /* Free all the dataflow info and the DF structure. */
2213 void
2214 df_finish (struct df *df)
2216 df_free (df);
2217 free (df);
2221 /* Unlink INSN from its reference information. */
2222 static void
2223 df_insn_refs_unlink (struct df *df, basic_block bb ATTRIBUTE_UNUSED, rtx insn)
2225 struct df_link *link;
2226 unsigned int uid;
2228 uid = INSN_UID (insn);
2230 /* Unlink all refs defined by this insn. */
2231 for (link = df->insns[uid].defs; link; link = link->next)
2232 df_def_unlink (df, link->ref);
2234 /* Unlink all refs used by this insn. */
2235 for (link = df->insns[uid].uses; link; link = link->next)
2236 df_use_unlink (df, link->ref);
2238 df->insns[uid].defs = 0;
2239 df->insns[uid].uses = 0;
2243 #if 0
2244 /* Unlink all the insns within BB from their reference information. */
2245 static void
2246 df_bb_refs_unlink (struct df *df, basic_block bb)
2248 rtx insn;
2250 /* Scan the block an insn at a time from beginning to end. */
2251 for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn))
2253 if (INSN_P (insn))
2255 /* Unlink refs for INSN. */
2256 df_insn_refs_unlink (df, bb, insn);
2258 if (insn == BB_END (bb))
2259 break;
2264 /* Unlink all the refs in the basic blocks specified by BLOCKS.
2265 Not currently used. */
2266 static void
2267 df_refs_unlink (struct df *df, bitmap blocks)
2269 basic_block bb;
2271 if (blocks)
2273 FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
2275 df_bb_refs_unlink (df, bb);
2278 else
2280 FOR_EACH_BB (bb)
2281 df_bb_refs_unlink (df, bb);
2284 #endif
2286 /* Functions to modify insns. */
2289 /* Delete INSN and all its reference information. */
2290 rtx
2291 df_insn_delete (struct df *df, basic_block bb, rtx insn)
2293 /* If the insn is a jump, we should perhaps call delete_insn to
2294 handle the JUMP_LABEL? */
2296 /* We should not be deleting the NOTE_INSN_BASIC_BLOCK or label. */
2297 if (insn == BB_HEAD (bb))
2298 abort ();
2300 /* Delete the insn. */
2301 delete_insn (insn);
2303 df_insn_modify (df, bb, insn);
2305 return NEXT_INSN (insn);
2309 /* Mark that INSN within BB may have changed (created/modified/deleted).
2310 This may be called multiple times for the same insn. There is no
2311 harm calling this function if the insn wasn't changed; it will just
2312 slow down the rescanning of refs. */
2313 void
2314 df_insn_modify (struct df *df, basic_block bb, rtx insn)
2316 unsigned int uid;
2318 uid = INSN_UID (insn);
2319 if (uid >= df->insn_size)
2320 df_insn_table_realloc (df, uid);
2322 bitmap_set_bit (df->bbs_modified, bb->index);
2323 bitmap_set_bit (df->insns_modified, uid);
2325 /* For incremental updating on the fly, perhaps we could make a copy
2326 of all the refs of the original insn and turn them into
2327 anti-refs. When df_refs_update finds these anti-refs, it annihilates
2328 the original refs. If validate_change fails then these anti-refs
2329 will just get ignored. */
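/* A minimal sketch, assuming NEW_PAT is a replacement pattern built
   by the caller: change the insn in place, then mark it so that the
   next call to df_analyse rescans its refs:

	if (validate_change (insn, &PATTERN (insn), new_pat, 0))
	  df_insn_modify (df, bb, insn);  */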
2333 typedef struct replace_args
2335 rtx match;
2336 rtx replacement;
2337 rtx insn;
2338 int modified;
2339 } replace_args;
2342 /* Replace mem pointed to by PX with its associated pseudo register.
2343 DATA is actually a pointer to a structure describing the
2344 instruction currently being scanned and the MEM we are currently
2345 replacing. */
2346 static int
2347 df_rtx_mem_replace (rtx *px, void *data)
2349 replace_args *args = (replace_args *) data;
2350 rtx mem = *px;
2352 if (mem == NULL_RTX)
2353 return 0;
2355 switch (GET_CODE (mem))
2357 case MEM:
2358 break;
2360 case CONST_DOUBLE:
2361 /* We're not interested in the MEM associated with a
2362 CONST_DOUBLE, so there's no need to traverse into one. */
2363 return -1;
2365 default:
2366 /* This is not a MEM. */
2367 return 0;
2370 if (!rtx_equal_p (args->match, mem))
2371 /* This is not the MEM we are currently replacing. */
2372 return 0;
2374 /* Actually replace the MEM. */
2375 validate_change (args->insn, px, args->replacement, 1);
2376 args->modified++;
2378 return 0;
2382 int
2383 df_insn_mem_replace (struct df *df, basic_block bb, rtx insn, rtx mem, rtx reg)
2385 replace_args args;
2387 args.insn = insn;
2388 args.match = mem;
2389 args.replacement = reg;
2390 args.modified = 0;
2392 /* Search and replace all matching mems within insn. */
2393 for_each_rtx (&insn, df_rtx_mem_replace, &args);
2395 if (args.modified)
2396 df_insn_modify (df, bb, insn);
2398 /* ???? FIXME. We may have a new def or one or more new uses of REG
2399 in INSN. REG should be a new pseudo so it won't affect the
2400 dataflow information that we currently have. We should add
2401 the new uses and defs to INSN and then recreate the chains
2402 when df_analyse is called. */
2403 return args.modified;
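/* A minimal sketch of a caller: the replacements above are only
   queued (validate_change with in_group set), so the caller must
   confirm them as a group:

	rtx reg = gen_reg_rtx (GET_MODE (mem));

	if (df_insn_mem_replace (df, bb, insn, mem, reg)
	    && apply_change_group ())
	  ... the matching MEMs in INSN now reference REG ...  */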
2407 /* Replace one register with another. Called through for_each_rtx; PX
2408 points to the rtx being scanned. DATA is actually a pointer to a
2409 structure of arguments. */
2410 static int
2411 df_rtx_reg_replace (rtx *px, void *data)
2413 rtx x = *px;
2414 replace_args *args = (replace_args *) data;
2416 if (x == NULL_RTX)
2417 return 0;
2419 if (x == args->match)
2421 validate_change (args->insn, px, args->replacement, 1);
2422 args->modified++;
2425 return 0;
2429 /* Replace the reg within every ref on CHAIN that is within the set
2430 BLOCKS of basic blocks with NEWREG. Also update the regs within
2431 REG_NOTES. */
2432 void
2433 df_refs_reg_replace (struct df *df, bitmap blocks, struct df_link *chain, rtx oldreg, rtx newreg)
2435 struct df_link *link;
2436 replace_args args;
2438 if (! blocks)
2439 blocks = df->all_blocks;
2441 args.match = oldreg;
2442 args.replacement = newreg;
2443 args.modified = 0;
2445 for (link = chain; link; link = link->next)
2447 struct ref *ref = link->ref;
2448 rtx insn = DF_REF_INSN (ref);
2450 if (! INSN_P (insn))
2451 continue;
2453 if (bitmap_bit_p (blocks, DF_REF_BBNO (ref)))
2455 df_ref_reg_replace (df, ref, oldreg, newreg);
2457 /* Replace occurrences of the reg within the REG_NOTES. */
2458 if ((! link->next || DF_REF_INSN (ref)
2459 != DF_REF_INSN (link->next->ref))
2460 && REG_NOTES (insn))
2462 args.insn = insn;
2463 for_each_rtx (&REG_NOTES (insn), df_rtx_reg_replace, &args);
2466 else
2468 /* Temporary check to ensure that we have a grip on which
2469 regs should be replaced. */
2470 abort ();
2476 /* Replace all occurrences of register OLDREG with register NEWREG in
2477 blocks defined by bitmap BLOCKS. This also replaces occurrences of
2478 OLDREG in the REG_NOTES but only for insns containing OLDREG. This
2479 routine expects the reg-use and reg-def chains to be valid. */
2480 int
2481 df_reg_replace (struct df *df, bitmap blocks, rtx oldreg, rtx newreg)
2483 unsigned int oldregno = REGNO (oldreg);
2485 df_refs_reg_replace (df, blocks, df->regs[oldregno].defs, oldreg, newreg);
2486 df_refs_reg_replace (df, blocks, df->regs[oldregno].uses, oldreg, newreg);
2487 return 1;
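/* A minimal sketch, assuming NEWREG is a freshly created pseudo and
   that the reg-def and reg-use chains have been computed:

	df_analyse (df, 0, DF_RD_CHAIN | DF_RU_CHAIN);
	df_reg_replace (df, blocks, oldreg, newreg);
	apply_change_group ();  */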
2491 /* Try replacing the reg within REF with NEWREG. Do not modify
2492 def-use/use-def chains. */
2493 int
2494 df_ref_reg_replace (struct df *df, struct ref *ref, rtx oldreg, rtx newreg)
2496 /* Check that insn was deleted by being converted into a NOTE. If
2497 so ignore this insn. */
2498 if (! INSN_P (DF_REF_INSN (ref)))
2499 return 0;
2501 if (oldreg && oldreg != DF_REF_REG (ref))
2502 abort ();
2504 if (! validate_change (DF_REF_INSN (ref), DF_REF_LOC (ref), newreg, 1))
2505 return 0;
2507 df_insn_modify (df, DF_REF_BB (ref), DF_REF_INSN (ref));
2508 return 1;
2510 /* Unlink the last use of REGNO in USE_INSN and move the first def
2511 of REGNO in DEF_INSN onto USE_INSN, returning the moved def. */
2512 struct ref *
2513 df_bb_def_use_swap (struct df *df, basic_block bb, rtx def_insn, rtx use_insn, unsigned int regno)
2515 struct ref *def;
2516 struct ref *use;
2517 int def_uid;
2518 int use_uid;
2519 struct df_link *link;
2521 def = df_bb_insn_regno_first_def_find (df, bb, def_insn, regno);
2522 if (! def)
2523 return 0;
2525 use = df_bb_insn_regno_last_use_find (df, bb, use_insn, regno);
2526 if (! use)
2527 return 0;
2529 /* The USE no longer exists. */
2530 use_uid = INSN_UID (use_insn);
2531 df_use_unlink (df, use);
2532 df_ref_unlink (&df->insns[use_uid].uses, use);
2534 /* The DEF requires shifting so remove it from DEF_INSN
2535 and add it to USE_INSN by reusing LINK. */
2536 def_uid = INSN_UID (def_insn);
2537 link = df_ref_unlink (&df->insns[def_uid].defs, def);
2538 link->ref = def;
2539 link->next = df->insns[use_uid].defs;
2540 df->insns[use_uid].defs = link;
2542 #if 0
2543 link = df_ref_unlink (&df->regs[regno].defs, def);
2544 link->ref = def;
2545 link->next = df->regs[regno].defs;
2546 df->regs[regno].defs = link;
2547 #endif
2549 DF_REF_INSN (def) = use_insn;
2550 return def;
2554 /* Mark the insns between FIRST_INSN and LAST_INSN inclusive as
2555 modified. All new insns must be processed by this routine. */
2556 static void
2557 df_insns_modify (struct df *df, basic_block bb, rtx first_insn, rtx last_insn)
2559 rtx insn;
2561 for (insn = first_insn; ; insn = NEXT_INSN (insn))
2563 unsigned int uid;
2565 /* A non-const call should not have slipped through the net. If
2566 it does, we need to create a new basic block. Ouch. The
2567 same applies for a label. */
2568 if ((GET_CODE (insn) == CALL_INSN
2569 && ! CONST_OR_PURE_CALL_P (insn))
2570 || GET_CODE (insn) == CODE_LABEL)
2571 abort ();
2573 uid = INSN_UID (insn);
2575 if (uid >= df->insn_size)
2576 df_insn_table_realloc (df, uid);
2578 df_insn_modify (df, bb, insn);
2580 if (insn == last_insn)
2581 break;
2586 /* Emit PATTERN before INSN within BB. */
2587 rtx
2588 df_pattern_emit_before (struct df *df, rtx pattern, basic_block bb, rtx insn)
2590 rtx ret_insn;
2591 rtx prev_insn = PREV_INSN (insn);
2593 /* We should not be inserting before the start of the block. */
2594 if (insn == BB_HEAD (bb))
2595 abort ();
2596 ret_insn = emit_insn_before (pattern, insn);
2597 if (ret_insn == insn)
2598 return ret_insn;
2600 df_insns_modify (df, bb, NEXT_INSN (prev_insn), ret_insn);
2601 return ret_insn;
2605 /* Emit PATTERN after INSN within BB. */
2606 rtx
2607 df_pattern_emit_after (struct df *df, rtx pattern, basic_block bb, rtx insn)
2609 rtx ret_insn;
2611 ret_insn = emit_insn_after (pattern, insn);
2612 if (ret_insn == insn)
2613 return ret_insn;
2615 df_insns_modify (df, bb, NEXT_INSN (insn), ret_insn);
2616 return ret_insn;
2620 /* Emit jump PATTERN after INSN within BB. */
2621 rtx
2622 df_jump_pattern_emit_after (struct df *df, rtx pattern, basic_block bb, rtx insn)
2624 rtx ret_insn;
2626 ret_insn = emit_jump_insn_after (pattern, insn);
2627 if (ret_insn == insn)
2628 return ret_insn;
2630 df_insns_modify (df, bb, NEXT_INSN (insn), ret_insn);
2631 return ret_insn;
2635 /* Move INSN within BB before BEFORE_INSN within BEFORE_BB.
2637 This function should only be used to move loop invariant insns
2638 out of a loop where it has been proven that the def-use info
2639 will still be valid. */
2640 rtx
2641 df_insn_move_before (struct df *df, basic_block bb, rtx insn, basic_block before_bb, rtx before_insn)
2643 struct df_link *link;
2644 unsigned int uid;
2646 if (! bb)
2647 return df_pattern_emit_before (df, insn, before_bb, before_insn);
2649 uid = INSN_UID (insn);
2651 /* Change the recorded bb of all refs defined and used by this insn. */
2652 for (link = df->insns[uid].defs; link; link = link->next)
2653 DF_REF_BB (link->ref) = before_bb;
2654 for (link = df->insns[uid].uses; link; link = link->next)
2655 DF_REF_BB (link->ref) = before_bb;
2657 /* The lifetimes of the registers used in this insn will be reduced
2658 while the lifetimes of the registers defined in this insn
2659 are likely to be increased. */
2661 /* ???? Perhaps all the insns moved should be stored on a list
2662 which df_analyse removes when it recalculates data flow. */
2664 return emit_insn_before (insn, before_insn);
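/* A minimal sketch, assuming PREHEADER is the loop's preheader block
   and INSN has been proven loop invariant:

	df_insn_move_before (df, bb, insn,
			     preheader, BB_END (preheader));  */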
2667 /* Functions to query dataflow information. */
2670 int
2671 df_insn_regno_def_p (struct df *df, basic_block bb ATTRIBUTE_UNUSED,
2672 rtx insn, unsigned int regno)
2674 unsigned int uid;
2675 struct df_link *link;
2677 uid = INSN_UID (insn);
2679 for (link = df->insns[uid].defs; link; link = link->next)
2681 struct ref *def = link->ref;
2683 if (DF_REF_REGNO (def) == regno)
2684 return 1;
2687 return 0;
2691 static int
2692 df_def_dominates_all_uses_p (struct df *df ATTRIBUTE_UNUSED, struct ref *def)
2694 struct df_link *du_link;
2696 /* Follow def-use chain to find all the uses of this def. */
2697 for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
2699 struct ref *use = du_link->ref;
2700 struct df_link *ud_link;
2702 /* Follow use-def chain to check all the defs for this use. */
2703 for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
2704 if (ud_link->ref != def)
2705 return 0;
2707 return 1;
2711 int
2712 df_insn_dominates_all_uses_p (struct df *df, basic_block bb ATTRIBUTE_UNUSED,
2713 rtx insn)
2715 unsigned int uid;
2716 struct df_link *link;
2718 uid = INSN_UID (insn);
2720 for (link = df->insns[uid].defs; link; link = link->next)
2722 struct ref *def = link->ref;
2724 if (! df_def_dominates_all_uses_p (df, def))
2725 return 0;
2728 return 1;
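/* The loops above illustrate the chain-walking idiom used throughout
   this file.  A minimal sketch counting the uses reached from DEF via
   its def-use chain:

	int n_uses = 0;
	struct df_link *link;

	for (link = DF_REF_CHAIN (def); link; link = link->next)
	  n_uses++;  */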
2732 /* Return nonzero if DEF dominates all its uses within the bitmap
2733 BLOCKS. */
2734 static int
2735 df_def_dominates_uses_p (struct df *df ATTRIBUTE_UNUSED, struct ref *def,
2736 bitmap blocks)
2738 struct df_link *du_link;
2740 /* Follow def-use chain to find all the uses of this def. */
2741 for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
2743 struct ref *use = du_link->ref;
2744 struct df_link *ud_link;
2746 /* Only worry about the uses within BLOCKS. For example,
2747 consider a register defined within a loop that is live at the
2748 loop exits. */
2749 if (bitmap_bit_p (blocks, DF_REF_BBNO (use)))
2751 /* Follow use-def chain to check all the defs for this use. */
2752 for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
2753 if (ud_link->ref != def)
2754 return 0;
2757 return 1;
2761 /* Return nonzero if all the defs of INSN within BB dominate
2762 all the corresponding uses. */
2763 int
2764 df_insn_dominates_uses_p (struct df *df, basic_block bb ATTRIBUTE_UNUSED,
2765 rtx insn, bitmap blocks)
2767 unsigned int uid;
2768 struct df_link *link;
2770 uid = INSN_UID (insn);
2772 for (link = df->insns[uid].defs; link; link = link->next)
2774 struct ref *def = link->ref;
2776 /* Only consider the defs within BLOCKS. */
2777 if (bitmap_bit_p (blocks, DF_REF_BBNO (def))
2778 && ! df_def_dominates_uses_p (df, def, blocks))
2779 return 0;
2781 return 1;
2785 /* Return the basic block that REG is referenced in, or NULL if it is
2786 referenced in multiple basic blocks. */
2787 basic_block
2788 df_regno_bb (struct df *df, unsigned int regno)
2790 struct df_link *defs = df->regs[regno].defs;
2791 struct df_link *uses = df->regs[regno].uses;
2792 struct ref *def = defs ? defs->ref : 0;
2793 struct ref *use = uses ? uses->ref : 0;
2794 basic_block bb_def = def ? DF_REF_BB (def) : 0;
2795 basic_block bb_use = use ? DF_REF_BB (use) : 0;
2797 /* Compare blocks of first def and last use. ???? FIXME. What if
2798 the reg-def and reg-use lists are not correctly ordered? */
2799 return bb_def == bb_use ? bb_def : 0;
2803 /* Return nonzero if REG is used in multiple basic blocks. */
2804 int
2805 df_reg_global_p (struct df *df, rtx reg)
2807 return df_regno_bb (df, REGNO (reg)) != 0;
2811 /* Return the total lifetime (in insns) of REG. */
2812 int
2813 df_reg_lifetime (struct df *df, rtx reg)
2815 return df->regs[REGNO (reg)].lifetime;
2819 /* Return nonzero if REG is live at the start of BB. */
2820 int
2821 df_bb_reg_live_start_p (struct df *df, basic_block bb, rtx reg)
2823 struct bb_info *bb_info = DF_BB_INFO (df, bb);
2825 #ifdef ENABLE_CHECKING
2826 if (! bb_info->lr_in)
2827 abort ();
2828 #endif
2830 return bitmap_bit_p (bb_info->lr_in, REGNO (reg));
2834 /* Return nonzero if REG is live at the end of BB. */
2835 int
2836 df_bb_reg_live_end_p (struct df *df, basic_block bb, rtx reg)
2838 struct bb_info *bb_info = DF_BB_INFO (df, bb);
2840 #ifdef ENABLE_CHECKING
2841 if (! bb_info->lr_in)
2842 abort ();
2843 #endif
2845 return bitmap_bit_p (bb_info->lr_out, REGNO (reg));
2849 /* Return -1 if the life of REG1 is entirely before the life of REG2,
2850 1 if it is entirely after, or 0 if their lives overlap. */
2851 int
2852 df_bb_regs_lives_compare (struct df *df, basic_block bb, rtx reg1, rtx reg2)
2854 unsigned int regno1 = REGNO (reg1);
2855 unsigned int regno2 = REGNO (reg2);
2856 struct ref *def1;
2857 struct ref *use1;
2858 struct ref *def2;
2859 struct ref *use2;
2862 /* The regs must be local to BB. */
2863 if (df_regno_bb (df, regno1) != bb
2864 || df_regno_bb (df, regno2) != bb)
2865 abort ();
2867 def2 = df_bb_regno_first_def_find (df, bb, regno2);
2868 use1 = df_bb_regno_last_use_find (df, bb, regno1);
2870 if (DF_INSN_LUID (df, DF_REF_INSN (def2))
2871 > DF_INSN_LUID (df, DF_REF_INSN (use1)))
2872 return -1;
2874 def1 = df_bb_regno_first_def_find (df, bb, regno1);
2875 use2 = df_bb_regno_last_use_find (df, bb, regno2);
2877 if (DF_INSN_LUID (df, DF_REF_INSN (def1))
2878 > DF_INSN_LUID (df, DF_REF_INSN (use2)))
2879 return 1;
2881 return 0;
2885 /* Return last use of REGNO within BB. */
2886 static struct ref *
2887 df_bb_regno_last_use_find (struct df *df, basic_block bb, unsigned int regno)
2889 struct df_link *link;
2891 /* This assumes that the reg-use list is ordered such that for any
2892 BB, the last use is found first. However, since the BBs are not
2893 ordered, the first use in the chain is not necessarily the last
2894 use in the function. */
2895 for (link = df->regs[regno].uses; link; link = link->next)
2897 struct ref *use = link->ref;
2899 if (DF_REF_BB (use) == bb)
2900 return use;
2902 return 0;
2906 /* Return first def of REGNO within BB. */
2907 static struct ref *
2908 df_bb_regno_first_def_find (struct df *df, basic_block bb, unsigned int regno)
2910 struct df_link *link;
2912 /* This assumes that the reg-def list is ordered such that for any
2913 BB, the first def is found first. However, since the BBs are not
2914 ordered, the first def in the chain is not necessarily the first
2915 def in the function. */
2916 for (link = df->regs[regno].defs; link; link = link->next)
2918 struct ref *def = link->ref;
2920 if (DF_REF_BB (def) == bb)
2921 return def;
2923 return 0;
2927 /* Return the last use of REGNO inside INSN within BB. */
2928 static struct ref *
2929 df_bb_insn_regno_last_use_find (struct df *df,
2930 basic_block bb ATTRIBUTE_UNUSED, rtx insn,
2931 unsigned int regno)
2933 unsigned int uid;
2934 struct df_link *link;
2936 uid = INSN_UID (insn);
2938 for (link = df->insns[uid].uses; link; link = link->next)
2940 struct ref *use = link->ref;
2942 if (DF_REF_REGNO (use) == regno)
2943 return use;
2946 return 0;
2950 /* Return first def of REGNO inside INSN within BB. */
2951 static struct ref *
2952 df_bb_insn_regno_first_def_find (struct df *df,
2953 basic_block bb ATTRIBUTE_UNUSED, rtx insn,
2954 unsigned int regno)
2956 unsigned int uid;
2957 struct df_link *link;
2959 uid = INSN_UID (insn);
2961 for (link = df->insns[uid].defs; link; link = link->next)
2963 struct ref *def = link->ref;
2965 if (DF_REF_REGNO (def) == regno)
2966 return def;
2969 return 0;
2973 /* Return the insn using REG if the BB contains only a single
2974 use and def of REG. */
2975 rtx
2976 df_bb_single_def_use_insn_find (struct df *df, basic_block bb, rtx insn, rtx reg)
2978 struct ref *def;
2979 struct ref *use;
2980 struct df_link *du_link;
2982 def = df_bb_insn_regno_first_def_find (df, bb, insn, REGNO (reg));
2984 if (! def)
2985 abort ();
2987 du_link = DF_REF_CHAIN (def);
2989 if (! du_link)
2990 return NULL_RTX;
2992 use = du_link->ref;
2994 /* Check if def is dead. */
2995 if (! use)
2996 return NULL_RTX;
2998 /* Check for multiple uses. */
2999 if (du_link->next)
3000 return NULL_RTX;
3002 return DF_REF_INSN (use);
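/* A minimal sketch of a caller looking for a combinable def-use pair
   within a block:

	rtx use_insn = df_bb_single_def_use_insn_find (df, bb, insn, reg);

	if (use_insn)
	  ... INSN's def of REG has exactly one use, in USE_INSN ...  */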
3005 /* Functions for debugging/dumping dataflow information. */
3008 /* Dump a def-use or use-def chain for REF to FILE. */
3009 static void
3010 df_chain_dump (struct df_link *link, FILE *file)
3012 fprintf (file, "{ ");
3013 for (; link; link = link->next)
3015 fprintf (file, "%c%d ",
3016 DF_REF_REG_DEF_P (link->ref) ? 'd' : 'u',
3017 DF_REF_ID (link->ref));
3019 fprintf (file, "}");
3023 /* Dump a chain of refs with the associated regno. */
3024 static void
3025 df_chain_dump_regno (struct df_link *link, FILE *file)
3027 fprintf (file, "{ ");
3028 for (; link; link = link->next)
3030 fprintf (file, "%c%d(%d) ",
3031 DF_REF_REG_DEF_P (link->ref) ? 'd' : 'u',
3032 DF_REF_ID (link->ref),
3033 DF_REF_REGNO (link->ref));
3035 fprintf (file, "}");
3039 /* Dump dataflow info. */
3040 void
3041 df_dump (struct df *df, int flags, FILE *file)
3043 unsigned int j;
3044 basic_block bb;
3046 if (! df || ! file)
3047 return;
3049 fprintf (file, "\nDataflow summary:\n");
3050 fprintf (file, "n_regs = %d, n_defs = %d, n_uses = %d, n_bbs = %d\n",
3051 df->n_regs, df->n_defs, df->n_uses, df->n_bbs);
3053 if (flags & DF_RD)
3055 basic_block bb;
3057 fprintf (file, "Reaching defs:\n");
3058 FOR_EACH_BB (bb)
3060 struct bb_info *bb_info = DF_BB_INFO (df, bb);
3062 if (! bb_info->rd_in)
3063 continue;
3065 fprintf (file, "bb %d in \t", bb->index);
3066 dump_bitmap (file, bb_info->rd_in);
3067 fprintf (file, "bb %d gen \t", bb->index);
3068 dump_bitmap (file, bb_info->rd_gen);
3069 fprintf (file, "bb %d kill\t", bb->index);
3070 dump_bitmap (file, bb_info->rd_kill);
3071 fprintf (file, "bb %d out \t", bb->index);
3072 dump_bitmap (file, bb_info->rd_out);
3076 if (flags & DF_UD_CHAIN)
3078 fprintf (file, "Use-def chains:\n");
3079 for (j = 0; j < df->n_defs; j++)
3081 if (df->defs[j])
3083 fprintf (file, "d%d bb %d luid %d insn %d reg %d ",
3084 j, DF_REF_BBNO (df->defs[j]),
3085 DF_INSN_LUID (df, DF_REF_INSN (df->defs[j])),
3086 DF_REF_INSN_UID (df->defs[j]),
3087 DF_REF_REGNO (df->defs[j]));
3088 if (df->defs[j]->flags & DF_REF_READ_WRITE)
3089 fprintf (file, "read/write ");
3090 df_chain_dump (DF_REF_CHAIN (df->defs[j]), file);
3091 fprintf (file, "\n");
3096 if (flags & DF_RU)
3098 fprintf (file, "Reaching uses:\n");
3099 FOR_EACH_BB (bb)
3101 struct bb_info *bb_info = DF_BB_INFO (df, bb);
3103 if (! bb_info->ru_in)
3104 continue;
3106 fprintf (file, "bb %d in \t", bb->index);
3107 dump_bitmap (file, bb_info->ru_in);
3108 fprintf (file, "bb %d gen \t", bb->index);
3109 dump_bitmap (file, bb_info->ru_gen);
3110 fprintf (file, "bb %d kill\t", bb->index);
3111 dump_bitmap (file, bb_info->ru_kill);
3112 fprintf (file, "bb %d out \t", bb->index);
3113 dump_bitmap (file, bb_info->ru_out);
3117 if (flags & DF_DU_CHAIN)
3119 fprintf (file, "Def-use chains:\n");
3120 for (j = 0; j < df->n_uses; j++)
3122 if (df->uses[j])
3124 fprintf (file, "u%d bb %d luid %d insn %d reg %d ",
3125 j, DF_REF_BBNO (df->uses[j]),
3126 DF_INSN_LUID (df, DF_REF_INSN (df->uses[j])),
3127 DF_REF_INSN_UID (df->uses[j]),
3128 DF_REF_REGNO (df->uses[j]));
3129 if (df->uses[j]->flags & DF_REF_READ_WRITE)
3130 fprintf (file, "read/write ");
3131 df_chain_dump (DF_REF_CHAIN (df->uses[j]), file);
3132 fprintf (file, "\n");
3137 if (flags & DF_LR)
3139 fprintf (file, "Live regs:\n");
3140 FOR_EACH_BB (bb)
3142 struct bb_info *bb_info = DF_BB_INFO (df, bb);
3144 if (! bb_info->lr_in)
3145 continue;
3147 fprintf (file, "bb %d in \t", bb->index);
3148 dump_bitmap (file, bb_info->lr_in);
3149 fprintf (file, "bb %d use \t", bb->index);
3150 dump_bitmap (file, bb_info->lr_use);
3151 fprintf (file, "bb %d def \t", bb->index);
3152 dump_bitmap (file, bb_info->lr_def);
3153 fprintf (file, "bb %d out \t", bb->index);
3154 dump_bitmap (file, bb_info->lr_out);
3158 if (flags & (DF_REG_INFO | DF_RD_CHAIN | DF_RU_CHAIN))
3160 struct reg_info *reg_info = df->regs;
3162 fprintf (file, "Register info:\n");
3163 for (j = 0; j < df->n_regs; j++)
3165 if (((flags & DF_REG_INFO)
3166 && (reg_info[j].n_uses || reg_info[j].n_defs))
3167 || ((flags & DF_RD_CHAIN) && reg_info[j].defs)
3168 || ((flags & DF_RU_CHAIN) && reg_info[j].uses))
3170 fprintf (file, "reg %d", j);
3171 if ((flags & DF_RD_CHAIN) && (flags & DF_RU_CHAIN))
3173 basic_block bb = df_regno_bb (df, j);
3175 if (bb)
3176 fprintf (file, " bb %d", bb->index);
3177 else
3178 fprintf (file, " bb ?");
3180 if (flags & DF_REG_INFO)
3182 fprintf (file, " life %d", reg_info[j].lifetime);
3185 if ((flags & DF_REG_INFO) || (flags & DF_RD_CHAIN))
3187 fprintf (file, " defs ");
3188 if (flags & DF_REG_INFO)
3189 fprintf (file, "%d ", reg_info[j].n_defs);
3190 if (flags & DF_RD_CHAIN)
3191 df_chain_dump (reg_info[j].defs, file);
3194 if ((flags & DF_REG_INFO) || (flags & DF_RU_CHAIN))
3196 fprintf (file, " uses ");
3197 if (flags & DF_REG_INFO)
3198 fprintf (file, "%d ", reg_info[j].n_uses);
3199 if (flags & DF_RU_CHAIN)
3200 df_chain_dump (reg_info[j].uses, file);
3203 fprintf (file, "\n");
3207 fprintf (file, "\n");
3211 void
3212 df_insn_debug (struct df *df, rtx insn, FILE *file)
3214 unsigned int uid;
3215 int bbi;
3217 uid = INSN_UID (insn);
3218 if (uid >= df->insn_size)
3219 return;
3221 if (df->insns[uid].defs)
3222 bbi = DF_REF_BBNO (df->insns[uid].defs->ref);
3223 else if (df->insns[uid].uses)
3224 bbi = DF_REF_BBNO (df->insns[uid].uses->ref);
3225 else
3226 bbi = -1;
3228 fprintf (file, "insn %d bb %d luid %d defs ",
3229 uid, bbi, DF_INSN_LUID (df, insn));
3230 df_chain_dump (df->insns[uid].defs, file);
3231 fprintf (file, " uses ");
3232 df_chain_dump (df->insns[uid].uses, file);
3233 fprintf (file, "\n");
3237 void
3238 df_insn_debug_regno (struct df *df, rtx insn, FILE *file)
3240 unsigned int uid;
3241 int bbi;
3243 uid = INSN_UID (insn);
3244 if (uid >= df->insn_size)
3245 return;
3247 if (df->insns[uid].defs)
3248 bbi = DF_REF_BBNO (df->insns[uid].defs->ref);
3249 else if (df->insns[uid].uses)
3250 bbi = DF_REF_BBNO (df->insns[uid].uses->ref);
3251 else
3252 bbi = -1;
3254 fprintf (file, "insn %d bb %d luid %d defs ",
3255 uid, bbi, DF_INSN_LUID (df, insn));
3256 df_chain_dump_regno (df->insns[uid].defs, file);
3257 fprintf (file, " uses ");
3258 df_chain_dump_regno (df->insns[uid].uses, file);
3259 fprintf (file, "\n");
3263 static void
3264 df_regno_debug (struct df *df, unsigned int regno, FILE *file)
3266 if (regno >= df->reg_size)
3267 return;
3269 fprintf (file, "reg %d life %d defs ",
3270 regno, df->regs[regno].lifetime);
3271 df_chain_dump (df->regs[regno].defs, file);
3272 fprintf (file, " uses ");
3273 df_chain_dump (df->regs[regno].uses, file);
3274 fprintf (file, "\n");
3278 static void
3279 df_ref_debug (struct df *df, struct ref *ref, FILE *file)
3281 fprintf (file, "%c%d ",
3282 DF_REF_REG_DEF_P (ref) ? 'd' : 'u',
3283 DF_REF_ID (ref));
3284 fprintf (file, "reg %d bb %d luid %d insn %d chain ",
3285 DF_REF_REGNO (ref),
3286 DF_REF_BBNO (ref),
3287 DF_INSN_LUID (df, DF_REF_INSN (ref)),
3288 INSN_UID (DF_REF_INSN (ref)));
3289 df_chain_dump (DF_REF_CHAIN (ref), file);
3290 fprintf (file, "\n");
3293 /* Functions for debugging from GDB. */
3295 void
3296 debug_df_insn (rtx insn)
3298 df_insn_debug (ddf, insn, stderr);
3299 debug_rtx (insn);
3303 void
3304 debug_df_reg (rtx reg)
3306 df_regno_debug (ddf, REGNO (reg), stderr);
3310 void
3311 debug_df_regno (unsigned int regno)
3313 df_regno_debug (ddf, regno, stderr);
3317 void
3318 debug_df_ref (struct ref *ref)
3320 df_ref_debug (ddf, ref, stderr);
3324 void
3325 debug_df_defno (unsigned int defno)
3327 df_ref_debug (ddf, ddf->defs[defno], stderr);
3331 void
3332 debug_df_useno (unsigned int useno)
3334 df_ref_debug (ddf, ddf->uses[useno], stderr);
3338 void
3339 debug_df_chain (struct df_link *link)
3341 df_chain_dump (link, stderr);
3342 fputc ('\n', stderr);
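/* These entry points consult the global DDF squirrelled away by
   df_init, so they can be invoked straight from a debugger session,
   e.g.:

	(gdb) call debug_df_insn (insn)
	(gdb) call debug_df_regno (42)  */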
3346 /* Hybrid search algorithm from "Implementation Techniques for
3347 Efficient Data-Flow Analysis of Large Programs". */
3348 static void
3349 hybrid_search_bitmap (basic_block block, bitmap *in, bitmap *out, bitmap *gen,
3350 bitmap *kill, enum df_flow_dir dir,
3351 enum df_confluence_op conf_op,
3352 transfer_function_bitmap transfun, sbitmap visited,
3353 sbitmap pending, void *data)
3355 int changed;
3356 int i = block->index;
3357 edge e;
3358 basic_block bb = block;
3360 SET_BIT (visited, block->index);
3361 if (TEST_BIT (pending, block->index))
3363 if (dir == DF_FORWARD)
3365 /* Calculate <conf_op> of predecessor_outs. */
3366 bitmap_zero (in[i]);
3367 for (e = bb->pred; e != 0; e = e->pred_next)
3369 if (e->src == ENTRY_BLOCK_PTR)
3370 continue;
3371 switch (conf_op)
3373 case DF_UNION:
3374 bitmap_a_or_b (in[i], in[i], out[e->src->index]);
3375 break;
3376 case DF_INTERSECTION:
3377 bitmap_a_and_b (in[i], in[i], out[e->src->index]);
3378 break;
3382 else
3384 /* Calculate <conf_op> of successor ins. */
3385 bitmap_zero (out[i]);
3386 for (e = bb->succ; e != 0; e = e->succ_next)
3388 if (e->dest == EXIT_BLOCK_PTR)
3389 continue;
3390 switch (conf_op)
3392 case DF_UNION:
3393 bitmap_a_or_b (out[i], out[i], in[e->dest->index]);
3394 break;
3395 case DF_INTERSECTION:
3396 bitmap_a_and_b (out[i], out[i], in[e->dest->index]);
3397 break;
3401 /* Common part. */
3402 (*transfun)(i, &changed, in[i], out[i], gen[i], kill[i], data);
3403 RESET_BIT (pending, i);
3404 if (changed)
3406 if (dir == DF_FORWARD)
3408 for (e = bb->succ; e != 0; e = e->succ_next)
3410 if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
3411 continue;
3412 SET_BIT (pending, e->dest->index);
3415 else
3417 for (e = bb->pred; e != 0; e = e->pred_next)
3419 if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
3420 continue;
3421 SET_BIT (pending, e->src->index);
3426 if (dir == DF_FORWARD)
3428 for (e = bb->succ; e != 0; e = e->succ_next)
3430 if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
3431 continue;
3432 if (!TEST_BIT (visited, e->dest->index))
3433 hybrid_search_bitmap (e->dest, in, out, gen, kill, dir,
3434 conf_op, transfun, visited, pending,
3435 data);
3438 else
3440 for (e = bb->pred; e != 0; e = e->pred_next)
3442 if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
3443 continue;
3444 if (!TEST_BIT (visited, e->src->index))
3445 hybrid_search_bitmap (e->src, in, out, gen, kill, dir,
3446 conf_op, transfun, visited, pending,
3447 data);
3453 /* Hybrid search for sbitmaps, rather than bitmaps. */
3454 static void
3455 hybrid_search_sbitmap (basic_block block, sbitmap *in, sbitmap *out,
3456 sbitmap *gen, sbitmap *kill, enum df_flow_dir dir,
3457 enum df_confluence_op conf_op,
3458 transfer_function_sbitmap transfun, sbitmap visited,
3459 sbitmap pending, void *data)
3461 int changed;
3462 int i = block->index;
3463 edge e;
3464 basic_block bb = block;
3466 SET_BIT (visited, block->index);
3467 if (TEST_BIT (pending, block->index))
3469 if (dir == DF_FORWARD)
3471 /* Calculate <conf_op> of predecessor_outs. */
3472 sbitmap_zero (in[i]);
3473 for (e = bb->pred; e != 0; e = e->pred_next)
3475 if (e->src == ENTRY_BLOCK_PTR)
3476 continue;
3477 switch (conf_op)
3479 case DF_UNION:
3480 sbitmap_a_or_b (in[i], in[i], out[e->src->index]);
3481 break;
3482 case DF_INTERSECTION:
3483 sbitmap_a_and_b (in[i], in[i], out[e->src->index]);
3484 break;
3488 else
3490 /* Calculate <conf_op> of successor ins. */
3491 sbitmap_zero (out[i]);
3492 for (e = bb->succ; e != 0; e = e->succ_next)
3494 if (e->dest == EXIT_BLOCK_PTR)
3495 continue;
3496 switch (conf_op)
3498 case DF_UNION:
3499 sbitmap_a_or_b (out[i], out[i], in[e->dest->index]);
3500 break;
3501 case DF_INTERSECTION:
3502 sbitmap_a_and_b (out[i], out[i], in[e->dest->index]);
3503 break;
3507 /* Common part. */
3508 (*transfun)(i, &changed, in[i], out[i], gen[i], kill[i], data);
3509 RESET_BIT (pending, i);
3510 if (changed)
3512 if (dir == DF_FORWARD)
3514 for (e = bb->succ; e != 0; e = e->succ_next)
3516 if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
3517 continue;
3518 SET_BIT (pending, e->dest->index);
3521 else
3523 for (e = bb->pred; e != 0; e = e->pred_next)
3525 if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
3526 continue;
3527 SET_BIT (pending, e->src->index);
3532 if (dir == DF_FORWARD)
3534 for (e = bb->succ; e != 0; e = e->succ_next)
3536 if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
3537 continue;
3538 if (!TEST_BIT (visited, e->dest->index))
3539 hybrid_search_sbitmap (e->dest, in, out, gen, kill, dir,
3540 conf_op, transfun, visited, pending,
3541 data);
3544 else
3546 for (e = bb->pred; e != 0; e = e->pred_next)
3548 if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
3549 continue;
3550 if (!TEST_BIT (visited, e->src->index))
3551 hybrid_search_sbitmap (e->src, in, out, gen, kill, dir,
3552 conf_op, transfun, visited, pending,
3553 data);
3559 /* gen = GEN set.
3560 kill = KILL set.
3561 in, out = Filled in by function.
3562 blocks = Blocks to analyze.
3563 dir = Dataflow direction.
3564 conf_op = Confluence operation.
3565 transfun = Transfer function.
3566 order = Order to iterate in. (Should map block numbers -> order)
3567 data = Whatever you want. It's passed to the transfer function.
3569 This function will perform iterative bitvector dataflow, producing
3570 the in and out sets. Even if you only want to perform it for a
3571 small number of blocks, the vectors for in and out must be large
3572 enough for *all* blocks, because changing one block might affect
3573 others. However, it'll only put what you say to analyze on the
3574 initial worklist.
3576 For forward problems, you probably want to pass in a mapping of
3577 block number to rc_order (like df->inverse_rc_map).
3578 */
3579 void
3580 iterative_dataflow_sbitmap (sbitmap *in, sbitmap *out, sbitmap *gen,
3581 sbitmap *kill, bitmap blocks,
3582 enum df_flow_dir dir,
3583 enum df_confluence_op conf_op,
3584 transfer_function_sbitmap transfun, int *order,
3585 void *data)
3587 int i;
3588 fibheap_t worklist;
3589 basic_block bb;
3590 sbitmap visited, pending;
3592 pending = sbitmap_alloc (last_basic_block);
3593 visited = sbitmap_alloc (last_basic_block);
3594 sbitmap_zero (pending);
3595 sbitmap_zero (visited);
3596 worklist = fibheap_new ();
3598 EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
3600 fibheap_insert (worklist, order[i], (void *) (size_t) i);
3601 SET_BIT (pending, i);
3602 if (dir == DF_FORWARD)
3603 sbitmap_copy (out[i], gen[i]);
3604 else
3605 sbitmap_copy (in[i], gen[i]);
3608 while (sbitmap_first_set_bit (pending) != -1)
3610 while (!fibheap_empty (worklist))
3612 i = (size_t) fibheap_extract_min (worklist);
3613 bb = BASIC_BLOCK (i);
3614 if (!TEST_BIT (visited, bb->index))
3615 hybrid_search_sbitmap (bb, in, out, gen, kill, dir,
3616 conf_op, transfun, visited, pending, data);
3619 if (sbitmap_first_set_bit (pending) != -1)
3621 EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
3623 fibheap_insert (worklist, order[i], (void *) (size_t) i);
3625 sbitmap_zero (visited);
3627 else
3629 break;
3633 sbitmap_free (pending);
3634 sbitmap_free (visited);
3635 fibheap_delete (worklist);
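/* A minimal sketch, with TRANSFUN standing for any suitable transfer
   function: a forward problem over all blocks, mirroring the
   reaching-definitions call earlier in this file:

	iterative_dataflow_sbitmap (in, out, gen, kill, df->all_blocks,
				    DF_FORWARD, DF_UNION, transfun,
				    df->inverse_rc_map, NULL);

   A backward problem would pass DF_BACKWARD together with a reverse
   order such as df->inverse_rts_map.  */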
3639 /* Exactly the same as iterative_dataflow_sbitmap, except it works on
3640 bitmaps instead. */
3641 void
3642 iterative_dataflow_bitmap (bitmap *in, bitmap *out, bitmap *gen, bitmap *kill,
3643 bitmap blocks, enum df_flow_dir dir,
3644 enum df_confluence_op conf_op,
3645 transfer_function_bitmap transfun, int *order,
3646 void *data)
3648 int i;
3649 fibheap_t worklist;
3650 basic_block bb;
3651 sbitmap visited, pending;
3653 pending = sbitmap_alloc (last_basic_block);
3654 visited = sbitmap_alloc (last_basic_block);
3655 sbitmap_zero (pending);
3656 sbitmap_zero (visited);
3657 worklist = fibheap_new ();
3659 EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
3661 fibheap_insert (worklist, order[i], (void *) (size_t) i);
3662 SET_BIT (pending, i);
3663 if (dir == DF_FORWARD)
3664 bitmap_copy (out[i], gen[i]);
3665 else
3666 bitmap_copy (in[i], gen[i]);
3669 while (sbitmap_first_set_bit (pending) != -1)
3671 while (!fibheap_empty (worklist))
3673 i = (size_t) fibheap_extract_min (worklist);
3674 bb = BASIC_BLOCK (i);
3675 if (!TEST_BIT (visited, bb->index))
3676 hybrid_search_bitmap (bb, in, out, gen, kill, dir,
3677 conf_op, transfun, visited, pending, data);
3680 if (sbitmap_first_set_bit (pending) != -1)
3682 EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
3684 fibheap_insert (worklist, order[i], (void *) (size_t) i);
3686 sbitmap_zero (visited);
3688 else
3690 break;
3693 sbitmap_free (pending);
3694 sbitmap_free (visited);
3695 fibheap_delete (worklist);