/* Dataflow support routines.
   Copyright (C) 1999, 2000, 2001, 2002, 2003 Free Software Foundation, Inc.
   Contributed by Michael P. Hayes (m.hayes@elec.canterbury.ac.nz,
   mhayes@redhat.com)

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING.  If not, write to the Free
Software Foundation, 59 Temple Place - Suite 330, Boston, MA
02111-1307, USA.

OVERVIEW:

This file provides some dataflow routines for computing reaching defs,
upward exposed uses, live variables, def-use chains, and use-def
chains.  The global dataflow is performed using simple iterative
methods with a worklist and could be sped up by ordering the blocks
with a depth first search order.

A `struct ref' data structure (ref) is allocated for every register
reference (def or use) and this records the insn and bb the ref is
found within.  The refs are linked together in chains of uses and defs
for each insn and for each register.  Each ref also has a chain field
that links all the use refs for a def or all the def refs for a use.
This is used to create use-def or def-use chains.


USAGE:

Here's an example of using the dataflow routines.

      struct df *df;

      df = df_init ();

      df_analyse (df, 0, DF_ALL);

      df_dump (df, DF_ALL, stderr);

      df_finish (df);


df_init simply creates a poor man's object (df) that needs to be
passed to all the dataflow routines.  df_finish destroys this
object and frees up any allocated memory.  DF_ALL says to analyse
everything.

df_analyse performs the following:

1. Records defs and uses by scanning the insns in each basic block
   or by scanning the insns queued by df_insn_modify.
2. Links defs and uses into insn-def and insn-use chains.
3. Links defs and uses into reg-def and reg-use chains.
4. Assigns LUIDs to each insn (for modified blocks).
5. Calculates local reaching definitions.
6. Calculates global reaching definitions.
7. Creates use-def chains.
8. Calculates local reaching uses (upwards exposed uses).
9. Calculates global reaching uses.
10. Creates def-use chains.
11. Calculates local live registers.
12. Calculates global live registers.
13. Calculates register lifetimes and determines local registers.


PHILOSOPHY:

Note that the dataflow information is not updated for every newly
deleted or created insn.  If the dataflow information requires
updating then all the changed, new, or deleted insns need to be
marked with df_insn_modify (or df_insns_modify) either directly or
indirectly (say through calling df_insn_delete).  df_insn_modify
marks all the modified insns to get processed the next time df_analyse
is called.

Beware that tinkering with insns may invalidate the dataflow information.
The philosophy behind these routines is that once the dataflow
information has been gathered, the user should store what they require
before they tinker with any insn.  Once a reg is replaced, for example,
then the reg-def/reg-use chains will point to the wrong place.  Once a
whole lot of changes have been made, df_analyse can be called again
to update the dataflow information.  Currently, this is not very smart
with regard to propagating changes to the dataflow so it should not
be called very often.
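
For example, a pass that rewrites insns might follow this pattern
(just a sketch; df_insn_modify is declared in df.h):

      ... record what is needed, then tinker with PATTERN (insn) ...

      df_insn_modify (df, bb, insn);

      ... once all the changes have been made ...

      df_analyse (df, 0, DF_ALL);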


DATA STRUCTURES:

The basic object is a REF (reference) and this may either be a DEF
(definition) or a USE of a register.

These are linked into a variety of lists; namely reg-def, reg-use,
insn-def, insn-use, def-use, and use-def lists.  For example,
the reg-def lists contain all the refs that define a given register
while the insn-use lists contain all the refs used by an insn.

Note that the reg-def and reg-use chains are generally short (except for
the hard registers) and thus it is much faster to search these chains
rather than searching the def or use bitmaps.

If the insns are in SSA form then the reg-def and use-def lists
should only contain the single defining ref.
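
For example, to visit every def of register REGNO one can walk its
reg-def chain (a sketch, valid once df_analyse has been called):

      struct df_link *link;

      for (link = df->regs[regno].defs; link; link = link->next)
        {
          struct ref *def = link->ref;
          ... inspect DF_REF_INSN (def), DF_REF_LOC (def), ...
        }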


TODO:

1) Incremental dataflow analysis.

Note that if a loop invariant insn is hoisted (or sunk), we do not
need to change the def-use or use-def chains.  All we have to do is to
change the bb field for all the associated defs and uses and to
renumber the LUIDs for the original and new basic blocks of the insn.

When shadowing loop mems we create new uses and defs for new pseudos
so we do not affect the existing dataflow information.

My current strategy is to queue up all modified, created, or deleted
insns so when df_analyse is called we can easily determine all the new
or deleted refs.  Currently the global dataflow information is
recomputed from scratch but this could be propagated more efficiently.

2) Reduced memory requirements.

We could operate a pool of ref structures.  When a ref is deleted it
gets returned to the pool (say by linking on to a chain of free refs).
This will require a pair of bitmaps for defs and uses so that we can
tell which ones have been changed.  Alternatively, we could
periodically squeeze the def and use tables and associated bitmaps and
renumber the def and use ids.

3) Ordering of reg-def and reg-use lists.

Should the first entry in the def list be the first def (within a BB)?
Similarly, should the first entry in the use list be the last use
(within a BB)?

4) Working with a sub-CFG.

Often the whole CFG does not need to be analyzed, for example,
when optimizing a loop, only certain registers are of interest.
Perhaps there should be a bitmap argument to df_analyse to specify
which registers should be analyzed?


NOTES:

Embedded addressing side-effects, such as POST_INC or PRE_INC, generate
both a use and a def.  These are both marked read/write to show that they
are dependent.  For example, (set (reg 40) (mem (post_inc (reg 42))))
will generate a use of reg 42 followed by a def of reg 42 (both marked
read/write).  Similarly, (set (reg 40) (mem (pre_dec (reg 41))))
generates a use of reg 41 then a def of reg 41 (both marked read/write),
even though reg 41 is decremented before it is used for the memory
address in this second example.

A set to a REG inside a ZERO_EXTRACT, SIGN_EXTRACT, or SUBREG invokes
a read-modify-write operation.  We generate both a use and a def
and again mark them read/write.  */

#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
#include "rtl.h"
#include "tm_p.h"
#include "insn-config.h"
#include "recog.h"
#include "function.h"
#include "regs.h"
#include "alloc-pool.h"
#include "hard-reg-set.h"
#include "basic-block.h"
#include "sbitmap.h"
#include "bitmap.h"
#include "df.h"
#include "fibheap.h"
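
/* Iterate over each basic block whose index is set in BITMAP (starting
   from bit MIN), binding BB to the block and executing CODE on it.  */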
#define FOR_EACH_BB_IN_BITMAP(BITMAP, MIN, BB, CODE)		\
  do								\
    {								\
      unsigned int node_;					\
      EXECUTE_IF_SET_IN_BITMAP (BITMAP, MIN, node_,		\
	{(BB) = BASIC_BLOCK (node_); CODE;});			\
    }								\
  while (0)

static alloc_pool df_ref_pool;
static alloc_pool df_link_pool;
static struct df *ddf;

static void df_reg_table_realloc (struct df *, int);
static void df_insn_table_realloc (struct df *, unsigned int);
static void df_bitmaps_alloc (struct df *, int);
static void df_bitmaps_free (struct df *, int);
static void df_free (struct df *);
static void df_alloc (struct df *, int);

static rtx df_reg_clobber_gen (unsigned int);
static rtx df_reg_use_gen (unsigned int);

static inline struct df_link *df_link_create (struct ref *, struct df_link *);
static struct df_link *df_ref_unlink (struct df_link **, struct ref *);
static void df_def_unlink (struct df *, struct ref *);
static void df_use_unlink (struct df *, struct ref *);
static void df_insn_refs_unlink (struct df *, basic_block, rtx);
#if 0
static void df_bb_refs_unlink (struct df *, basic_block);
static void df_refs_unlink (struct df *, bitmap);
#endif

static struct ref *df_ref_create (struct df *, rtx, rtx *, rtx,
				  enum df_ref_type, enum df_ref_flags);
static void df_ref_record_1 (struct df *, rtx, rtx *, rtx, enum df_ref_type,
			     enum df_ref_flags);
static void df_ref_record (struct df *, rtx, rtx *, rtx, enum df_ref_type,
			   enum df_ref_flags);
static void df_def_record_1 (struct df *, rtx, basic_block, rtx);
static void df_defs_record (struct df *, rtx, basic_block, rtx);
static void df_uses_record (struct df *, rtx *, enum df_ref_type,
			    basic_block, rtx, enum df_ref_flags);
static void df_insn_refs_record (struct df *, basic_block, rtx);
static void df_bb_refs_record (struct df *, basic_block);
static void df_refs_record (struct df *, bitmap);

static void df_bb_reg_def_chain_create (struct df *, basic_block);
static void df_reg_def_chain_create (struct df *, bitmap);
static void df_bb_reg_use_chain_create (struct df *, basic_block);
static void df_reg_use_chain_create (struct df *, bitmap);
static void df_bb_du_chain_create (struct df *, basic_block, bitmap);
static void df_du_chain_create (struct df *, bitmap);
static void df_bb_ud_chain_create (struct df *, basic_block);
static void df_ud_chain_create (struct df *, bitmap);
static void df_bb_rd_local_compute (struct df *, basic_block);
static void df_rd_local_compute (struct df *, bitmap);
static void df_bb_ru_local_compute (struct df *, basic_block);
static void df_ru_local_compute (struct df *, bitmap);
static void df_bb_lr_local_compute (struct df *, basic_block);
static void df_lr_local_compute (struct df *, bitmap);
static void df_bb_reg_info_compute (struct df *, basic_block, bitmap);
static void df_reg_info_compute (struct df *, bitmap);

static int df_bb_luids_set (struct df *df, basic_block);
static int df_luids_set (struct df *df, bitmap);

static int df_modified_p (struct df *, bitmap);
static int df_refs_queue (struct df *);
static int df_refs_process (struct df *);
static int df_bb_refs_update (struct df *, basic_block);
static int df_refs_update (struct df *);
static void df_analyse_1 (struct df *, bitmap, int, int);

static void df_insns_modify (struct df *, basic_block, rtx, rtx);
static int df_rtx_mem_replace (rtx *, void *);
static int df_rtx_reg_replace (rtx *, void *);
void df_refs_reg_replace (struct df *, bitmap, struct df_link *, rtx, rtx);

static int df_def_dominates_all_uses_p (struct df *, struct ref *def);
static int df_def_dominates_uses_p (struct df *, struct ref *def, bitmap);
static struct ref *df_bb_regno_last_use_find (struct df *, basic_block,
					      unsigned int);
static struct ref *df_bb_regno_first_def_find (struct df *, basic_block,
					       unsigned int);
static struct ref *df_bb_insn_regno_last_use_find (struct df *, basic_block,
						   rtx, unsigned int);
static struct ref *df_bb_insn_regno_first_def_find (struct df *, basic_block,
						    rtx, unsigned int);

static void df_chain_dump (struct df_link *, FILE *file);
static void df_chain_dump_regno (struct df_link *, FILE *file);
static void df_regno_debug (struct df *, unsigned int, FILE *);
static void df_ref_debug (struct df *, struct ref *, FILE *);
static void df_rd_transfer_function (int, int *, bitmap, bitmap, bitmap,
				     bitmap, void *);
static void df_ru_transfer_function (int, int *, bitmap, bitmap, bitmap,
				     bitmap, void *);
static void df_lr_transfer_function (int, int *, bitmap, bitmap, bitmap,
				     bitmap, void *);
static void hybrid_search_bitmap (basic_block, bitmap *, bitmap *,
				  bitmap *, bitmap *, enum df_flow_dir,
				  enum df_confluence_op,
				  transfer_function_bitmap,
				  sbitmap, sbitmap, void *);
static void hybrid_search_sbitmap (basic_block, sbitmap *, sbitmap *,
				   sbitmap *, sbitmap *, enum df_flow_dir,
				   enum df_confluence_op,
				   transfer_function_sbitmap,
				   sbitmap, sbitmap, void *);


/* Local memory allocation/deallocation routines.  */


/* Increase the insn info table to have space for at least SIZE + 1
   elements.  */
static void
df_insn_table_realloc (struct df *df, unsigned int size)
{
  size++;
  if (size <= df->insn_size)
    return;

  /* Make the table a little larger than requested, so we do not need
     to enlarge it so often.  */
  size += df->insn_size / 4;

  df->insns = xrealloc (df->insns, size * sizeof (struct insn_info));

  memset (df->insns + df->insn_size, 0,
	  (size - df->insn_size) * sizeof (struct insn_info));

  df->insn_size = size;

  if (! df->insns_modified)
    {
      df->insns_modified = BITMAP_XMALLOC ();
      bitmap_zero (df->insns_modified);
    }
}


/* Increase the reg info table by SIZE more elements.  */
static void
df_reg_table_realloc (struct df *df, int size)
{
  /* Make table 25 percent larger by default.  */
  if (! size)
    size = df->reg_size / 4;

  size += df->reg_size;
  if (size < max_reg_num ())
    size = max_reg_num ();

  df->regs = xrealloc (df->regs, size * sizeof (struct reg_info));

  /* Zero the new entries.  */
  memset (df->regs + df->reg_size, 0,
	  (size - df->reg_size) * sizeof (struct reg_info));

  df->reg_size = size;
}


/* Allocate bitmaps for each basic block.  */
static void
df_bitmaps_alloc (struct df *df, int flags)
{
  int dflags = 0;
  basic_block bb;

  /* Free the bitmaps if they need resizing.  */
  if ((flags & DF_LR) && df->n_regs < (unsigned int) max_reg_num ())
    dflags |= DF_LR | DF_RU;
  if ((flags & DF_RU) && df->n_uses < df->use_id)
    dflags |= DF_RU;
  if ((flags & DF_RD) && df->n_defs < df->def_id)
    dflags |= DF_RD;

  if (dflags)
    df_bitmaps_free (df, dflags);

  df->n_defs = df->def_id;
  df->n_uses = df->use_id;

  FOR_EACH_BB (bb)
    {
      struct bb_info *bb_info = DF_BB_INFO (df, bb);

      if (flags & DF_RD && ! bb_info->rd_in)
	{
	  /* Allocate bitmaps for reaching definitions.  */
	  bb_info->rd_kill = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->rd_kill);
	  bb_info->rd_gen = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->rd_gen);
	  bb_info->rd_in = BITMAP_XMALLOC ();
	  bb_info->rd_out = BITMAP_XMALLOC ();
	  bb_info->rd_valid = 0;
	}

      if (flags & DF_RU && ! bb_info->ru_in)
	{
	  /* Allocate bitmaps for upward exposed uses.  */
	  bb_info->ru_kill = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->ru_kill);
	  /* Note the lack of symmetry.  */
	  bb_info->ru_gen = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->ru_gen);
	  bb_info->ru_in = BITMAP_XMALLOC ();
	  bb_info->ru_out = BITMAP_XMALLOC ();
	  bb_info->ru_valid = 0;
	}

      if (flags & DF_LR && ! bb_info->lr_in)
	{
	  /* Allocate bitmaps for live variables.  */
	  bb_info->lr_def = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->lr_def);
	  bb_info->lr_use = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->lr_use);
	  bb_info->lr_in = BITMAP_XMALLOC ();
	  bb_info->lr_out = BITMAP_XMALLOC ();
	  bb_info->lr_valid = 0;
	}
    }
}


/* Free bitmaps for each basic block.  */
static void
df_bitmaps_free (struct df *df, int flags)
{
  basic_block bb;

  FOR_EACH_BB (bb)
    {
      struct bb_info *bb_info = DF_BB_INFO (df, bb);

      if (!bb_info)
	continue;

      if ((flags & DF_RD) && bb_info->rd_in)
	{
	  /* Free bitmaps for reaching definitions.  */
	  BITMAP_XFREE (bb_info->rd_kill);
	  bb_info->rd_kill = NULL;
	  BITMAP_XFREE (bb_info->rd_gen);
	  bb_info->rd_gen = NULL;
	  BITMAP_XFREE (bb_info->rd_in);
	  bb_info->rd_in = NULL;
	  BITMAP_XFREE (bb_info->rd_out);
	  bb_info->rd_out = NULL;
	}

      if ((flags & DF_RU) && bb_info->ru_in)
	{
	  /* Free bitmaps for upward exposed uses.  */
	  BITMAP_XFREE (bb_info->ru_kill);
	  bb_info->ru_kill = NULL;
	  BITMAP_XFREE (bb_info->ru_gen);
	  bb_info->ru_gen = NULL;
	  BITMAP_XFREE (bb_info->ru_in);
	  bb_info->ru_in = NULL;
	  BITMAP_XFREE (bb_info->ru_out);
	  bb_info->ru_out = NULL;
	}

      if ((flags & DF_LR) && bb_info->lr_in)
	{
	  /* Free bitmaps for live variables.  */
	  BITMAP_XFREE (bb_info->lr_def);
	  bb_info->lr_def = NULL;
	  BITMAP_XFREE (bb_info->lr_use);
	  bb_info->lr_use = NULL;
	  BITMAP_XFREE (bb_info->lr_in);
	  bb_info->lr_in = NULL;
	  BITMAP_XFREE (bb_info->lr_out);
	  bb_info->lr_out = NULL;
	}
    }
  df->flags &= ~(flags & (DF_RD | DF_RU | DF_LR));
}


/* Allocate and initialize dataflow memory.  */
static void
df_alloc (struct df *df, int n_regs)
{
  int n_insns;
  basic_block bb;

  df_link_pool = create_alloc_pool ("df_link pool", sizeof (struct df_link),
				    100);
  df_ref_pool = create_alloc_pool ("df_ref pool", sizeof (struct ref), 100);

  /* Perhaps we should use LUIDs to save memory for the insn_refs
     table.  This is only a small saving; a few pointers.  */
  n_insns = get_max_uid () + 1;

  df->def_id = 0;
  df->n_defs = 0;
  /* Approximate number of defs by number of insns.  */
  df->def_size = n_insns;
  df->defs = xmalloc (df->def_size * sizeof (*df->defs));

  df->use_id = 0;
  df->n_uses = 0;
  /* Approximate number of uses by twice number of insns.  */
  df->use_size = n_insns * 2;
  df->uses = xmalloc (df->use_size * sizeof (*df->uses));

  df->n_regs = n_regs;
  df->n_bbs = last_basic_block;

  /* Allocate temporary working array used during local dataflow analysis.  */
  df->reg_def_last = xmalloc (df->n_regs * sizeof (struct ref *));

  df_insn_table_realloc (df, n_insns);

  df_reg_table_realloc (df, df->n_regs);

  df->bbs_modified = BITMAP_XMALLOC ();
  bitmap_zero (df->bbs_modified);

  df->flags = 0;

  df->bbs = xcalloc (last_basic_block, sizeof (struct bb_info));

  df->all_blocks = BITMAP_XMALLOC ();
  FOR_EACH_BB (bb)
    bitmap_set_bit (df->all_blocks, bb->index);
}


/* Free all the dataflow info.  */
static void
df_free (struct df *df)
{
  df_bitmaps_free (df, DF_ALL);

  if (df->bbs)
    free (df->bbs);
  df->bbs = 0;

  if (df->insns)
    free (df->insns);
  df->insns = 0;
  df->insn_size = 0;

  if (df->defs)
    free (df->defs);
  df->defs = 0;
  df->def_size = 0;
  df->def_id = 0;

  if (df->uses)
    free (df->uses);
  df->uses = 0;
  df->use_size = 0;
  df->use_id = 0;

  if (df->regs)
    free (df->regs);
  df->regs = 0;
  df->reg_size = 0;

  if (df->bbs_modified)
    BITMAP_XFREE (df->bbs_modified);
  df->bbs_modified = 0;

  if (df->insns_modified)
    BITMAP_XFREE (df->insns_modified);
  df->insns_modified = 0;

  BITMAP_XFREE (df->all_blocks);
  df->all_blocks = 0;

  free_alloc_pool (df_ref_pool);
  free_alloc_pool (df_link_pool);
}


/* Local miscellaneous routines.  */

/* Return a USE for register REGNO.  */
static rtx
df_reg_use_gen (unsigned int regno)
{
  rtx reg;
  rtx use;

  reg = regno_reg_rtx[regno];

  use = gen_rtx_USE (GET_MODE (reg), reg);
  return use;
}


/* Return a CLOBBER for register REGNO.  */
static rtx
df_reg_clobber_gen (unsigned int regno)
{
  rtx reg;
  rtx use;

  reg = regno_reg_rtx[regno];

  use = gen_rtx_CLOBBER (GET_MODE (reg), reg);
  return use;
}

/* Local chain manipulation routines.  */

/* Create a link in a def-use or use-def chain.  */
static inline struct df_link *
df_link_create (struct ref *ref, struct df_link *next)
{
  struct df_link *link;

  link = pool_alloc (df_link_pool);
  link->next = next;
  link->ref = ref;
  return link;
}


/* Remove REF from the chain pointed to by PHEAD.  */
static struct df_link *
df_ref_unlink (struct df_link **phead, struct ref *ref)
{
  struct df_link *link = *phead;

  if (link)
    {
      if (! link->next)
	{
	  /* Only a single ref.  It must be the one we want.
	     If not, the def-use and use-def chains are likely to
	     be inconsistent.  */
	  if (link->ref != ref)
	    abort ();
	  /* Now have an empty chain.  */
	  *phead = NULL;
	}
      else
	{
	  /* Multiple refs.  One of them must be us.  */
	  if (link->ref == ref)
	    *phead = link->next;
	  else
	    {
	      /* Follow chain.  */
	      for (; link->next; link = link->next)
		if (link->next->ref == ref)
		  {
		    /* Unlink from list.  */
		    link->next = link->next->next;
		    return link->next;
		  }
	    }
	}
    }
  return link;
}


/* Unlink REF from all def-use/use-def chains, etc.  */
int
df_ref_remove (struct df *df, struct ref *ref)
{
  if (DF_REF_REG_DEF_P (ref))
    {
      df_def_unlink (df, ref);
      df_ref_unlink (&df->insns[DF_REF_INSN_UID (ref)].defs, ref);
    }
  else
    {
      df_use_unlink (df, ref);
      df_ref_unlink (&df->insns[DF_REF_INSN_UID (ref)].uses, ref);
    }
  return 1;
}


/* Unlink DEF from use-def and reg-def chains.  */
static void
df_def_unlink (struct df *df ATTRIBUTE_UNUSED, struct ref *def)
{
  struct df_link *du_link;
  unsigned int dregno = DF_REF_REGNO (def);

  /* Follow def-use chain to find all the uses of this def.  */
  for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
    {
      struct ref *use = du_link->ref;

      /* Unlink this def from the use-def chain.  */
      df_ref_unlink (&DF_REF_CHAIN (use), def);
    }
  DF_REF_CHAIN (def) = 0;

  /* Unlink def from reg-def chain.  */
  df_ref_unlink (&df->regs[dregno].defs, def);

  df->defs[DF_REF_ID (def)] = 0;
}


/* Unlink use from def-use and reg-use chains.  */
static void
df_use_unlink (struct df *df ATTRIBUTE_UNUSED, struct ref *use)
{
  struct df_link *ud_link;
  unsigned int uregno = DF_REF_REGNO (use);

  /* Follow use-def chain to find all the defs of this use.  */
  for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
    {
      struct ref *def = ud_link->ref;

      /* Unlink this use from the def-use chain.  */
      df_ref_unlink (&DF_REF_CHAIN (def), use);
    }
  DF_REF_CHAIN (use) = 0;

  /* Unlink use from reg-use chain.  */
  df_ref_unlink (&df->regs[uregno].uses, use);

  df->uses[DF_REF_ID (use)] = 0;
}

/* Local routines for recording refs.  */


/* Create a new ref of type DF_REF_TYPE for register REG at address
   LOC within INSN of BB.  */
static struct ref *
df_ref_create (struct df *df, rtx reg, rtx *loc, rtx insn,
	       enum df_ref_type ref_type, enum df_ref_flags ref_flags)
{
  struct ref *this_ref;

  this_ref = pool_alloc (df_ref_pool);
  DF_REF_REG (this_ref) = reg;
  DF_REF_LOC (this_ref) = loc;
  DF_REF_INSN (this_ref) = insn;
  DF_REF_CHAIN (this_ref) = 0;
  DF_REF_TYPE (this_ref) = ref_type;
  DF_REF_FLAGS (this_ref) = ref_flags;

  if (ref_type == DF_REF_REG_DEF)
    {
      if (df->def_id >= df->def_size)
	{
	  /* Make table 25 percent larger.  */
	  df->def_size += (df->def_size / 4);
	  df->defs = xrealloc (df->defs,
			       df->def_size * sizeof (*df->defs));
	}
      DF_REF_ID (this_ref) = df->def_id;
      df->defs[df->def_id++] = this_ref;
    }
  else
    {
      if (df->use_id >= df->use_size)
	{
	  /* Make table 25 percent larger.  */
	  df->use_size += (df->use_size / 4);
	  df->uses = xrealloc (df->uses,
			       df->use_size * sizeof (*df->uses));
	}
      DF_REF_ID (this_ref) = df->use_id;
      df->uses[df->use_id++] = this_ref;
    }
  return this_ref;
}


/* Create a new reference of type DF_REF_TYPE for a single register REG,
   used inside the LOC rtx of INSN.  */
static void
df_ref_record_1 (struct df *df, rtx reg, rtx *loc, rtx insn,
		 enum df_ref_type ref_type, enum df_ref_flags ref_flags)
{
  df_ref_create (df, reg, loc, insn, ref_type, ref_flags);
}


/* Create new references of type DF_REF_TYPE for each part of register REG
   at address LOC within INSN of BB.  */
static void
df_ref_record (struct df *df, rtx reg, rtx *loc, rtx insn,
	       enum df_ref_type ref_type, enum df_ref_flags ref_flags)
{
  unsigned int regno;

  if (GET_CODE (reg) != REG && GET_CODE (reg) != SUBREG)
    abort ();

  /* For the reg allocator we are interested in some SUBREG rtx's, but not
     all.  Notably only those representing a word extraction from a multi-word
     reg.  As written in the documentation those should have the form
     (subreg:SI (reg:M A) N), with size(SImode) < size(Mmode).
     XXX Is that true?  We could also use the global word_mode variable.  */
  if (GET_CODE (reg) == SUBREG
      && (GET_MODE_SIZE (GET_MODE (reg)) < GET_MODE_SIZE (word_mode)
	  || GET_MODE_SIZE (GET_MODE (reg))
	       >= GET_MODE_SIZE (GET_MODE (SUBREG_REG (reg)))))
    {
      loc = &SUBREG_REG (reg);
      reg = *loc;
      ref_flags |= DF_REF_STRIPPED;
    }

  regno = REGNO (GET_CODE (reg) == SUBREG ? SUBREG_REG (reg) : reg);
  if (regno < FIRST_PSEUDO_REGISTER)
    {
      int i;
      int endregno;

      if (! (df->flags & DF_HARD_REGS))
	return;

      /* GET_MODE (reg) is correct here.  We do not want to go into a SUBREG
	 for the mode, because we only want to add references to regs, which
	 are really referenced.  E.g., a (subreg:SI (reg:DI 0) 0) does _not_
	 reference the whole reg 0 in DI mode (which would also include
	 reg 1, at least, if 0 and 1 are SImode registers).  */
      endregno = HARD_REGNO_NREGS (regno, GET_MODE (reg));
      if (GET_CODE (reg) == SUBREG)
	regno += subreg_regno_offset (regno, GET_MODE (SUBREG_REG (reg)),
				      SUBREG_BYTE (reg), GET_MODE (reg));
      endregno += regno;

      for (i = regno; i < endregno; i++)
	df_ref_record_1 (df, regno_reg_rtx[i],
			 loc, insn, ref_type, ref_flags);
    }
  else
    {
      df_ref_record_1 (df, reg, loc, insn, ref_type, ref_flags);
    }
}


/* Return nonzero if writes to paradoxical SUBREGs, or SUBREGs which
   are too narrow, are read-modify-write.  */
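
/* For example, with UNITS_PER_WORD == 4, a write to (subreg:SI (reg:DI X) 0)
   replaces only one word of X, so it is read-modify-write (isize == 8,
   osize == 4 below).  A paradoxical write such as (subreg:DI (reg:SI Y) 0)
   leaves no trace of the old contents, so it is not (isize < osize).  */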
bool
read_modify_subreg_p (rtx x)
{
  unsigned int isize, osize;
  if (GET_CODE (x) != SUBREG)
    return false;
  isize = GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)));
  osize = GET_MODE_SIZE (GET_MODE (x));
  /* Paradoxical subreg writes don't leave a trace of the old content.  */
  return (isize > osize && isize > UNITS_PER_WORD);
}


/* Process all the registers defined in the rtx, X.  */
static void
df_def_record_1 (struct df *df, rtx x, basic_block bb, rtx insn)
{
  rtx *loc;
  rtx dst;
  enum df_ref_flags flags = 0;

  /* We may recursively call ourselves on EXPR_LIST when dealing with PARALLEL
     construct.  */
  if (GET_CODE (x) == EXPR_LIST || GET_CODE (x) == CLOBBER)
    loc = &XEXP (x, 0);
  else
    loc = &SET_DEST (x);
  dst = *loc;

  /* Some targets place small structures in registers for
     return values of functions.  */
  if (GET_CODE (dst) == PARALLEL && GET_MODE (dst) == BLKmode)
    {
      int i;

      for (i = XVECLEN (dst, 0) - 1; i >= 0; i--)
	{
	  rtx temp = XVECEXP (dst, 0, i);
	  if (GET_CODE (temp) == EXPR_LIST || GET_CODE (temp) == CLOBBER
	      || GET_CODE (temp) == SET)
	    df_def_record_1 (df, temp, bb, insn);
	}
      return;
    }

  /* Maybe, we should flag the use of STRICT_LOW_PART somehow.  It might
     be handy for the reg allocator.  */
  while (GET_CODE (dst) == STRICT_LOW_PART
	 || GET_CODE (dst) == ZERO_EXTRACT
	 || GET_CODE (dst) == SIGN_EXTRACT
	 || ((df->flags & DF_FOR_REGALLOC) == 0
	     && read_modify_subreg_p (dst)))
    {
      /* A strict low part always contains a SUBREG, but we do not want to
	 make it appear outside, as the whole register is always
	 considered.  */
      if (GET_CODE (dst) == STRICT_LOW_PART)
	{
	  loc = &XEXP (dst, 0);
	  dst = *loc;
	}
      loc = &XEXP (dst, 0);
      dst = *loc;
      flags |= DF_REF_READ_WRITE;
    }

  if (GET_CODE (dst) == REG
      || (GET_CODE (dst) == SUBREG && GET_CODE (SUBREG_REG (dst)) == REG))
    df_ref_record (df, dst, loc, insn, DF_REF_REG_DEF, flags);
}


/* Process all the registers defined in the pattern rtx, X.  */
static void
df_defs_record (struct df *df, rtx x, basic_block bb, rtx insn)
{
  RTX_CODE code = GET_CODE (x);

  if (code == SET || code == CLOBBER)
    {
      /* Mark the single def within the pattern.  */
      df_def_record_1 (df, x, bb, insn);
    }
  else if (code == PARALLEL)
    {
      int i;

      /* Mark the multiple defs within the pattern.  */
      for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
	{
	  code = GET_CODE (XVECEXP (x, 0, i));
	  if (code == SET || code == CLOBBER)
	    df_def_record_1 (df, XVECEXP (x, 0, i), bb, insn);
	}
    }
}


/* Process all the registers used in the rtx at address LOC.  */
static void
df_uses_record (struct df *df, rtx *loc, enum df_ref_type ref_type,
		basic_block bb, rtx insn, enum df_ref_flags flags)
{
  RTX_CODE code;
  rtx x;
 retry:
  x = *loc;
  if (!x)
    return;
  code = GET_CODE (x);
  switch (code)
    {
    case LABEL_REF:
    case SYMBOL_REF:
    case CONST_INT:
    case CONST:
    case CONST_DOUBLE:
    case CONST_VECTOR:
    case PC:
    case CC0:
    case ADDR_VEC:
    case ADDR_DIFF_VEC:
      return;

    case CLOBBER:
      /* If we are clobbering a MEM, mark any registers inside the address
	 as being used.  */
      if (GET_CODE (XEXP (x, 0)) == MEM)
	df_uses_record (df, &XEXP (XEXP (x, 0), 0),
			DF_REF_REG_MEM_STORE, bb, insn, flags);

      /* If we're clobbering a REG then we have a def so ignore.  */
      return;

    case MEM:
      df_uses_record (df, &XEXP (x, 0), DF_REF_REG_MEM_LOAD, bb, insn, flags);
      return;

    case SUBREG:
      /* While we're here, optimize this case.  */

      /* In case the SUBREG is not of a REG, do not optimize.  */
      if (GET_CODE (SUBREG_REG (x)) != REG)
	{
	  loc = &SUBREG_REG (x);
	  df_uses_record (df, loc, ref_type, bb, insn, flags);
	  return;
	}
      /* ... Fall through ...  */

    case REG:
      /* See a REG (or SUBREG) other than being set.  */
      df_ref_record (df, x, loc, insn, ref_type, flags);
      return;

    case SET:
      {
	rtx dst = SET_DEST (x);

	df_uses_record (df, &SET_SRC (x), DF_REF_REG_USE, bb, insn, 0);

	switch (GET_CODE (dst))
	  {
	    enum df_ref_flags use_flags;
	    case SUBREG:
	      if ((df->flags & DF_FOR_REGALLOC) == 0
		  && read_modify_subreg_p (dst))
		{
		  use_flags = DF_REF_READ_WRITE;
		  df_uses_record (df, &SUBREG_REG (dst), DF_REF_REG_USE, bb,
				  insn, use_flags);
		  break;
		}
	      /* ... FALLTHRU ...  */
	    case REG:
	    case PARALLEL:
	    case PC:
	    case CC0:
	      break;
	    case MEM:
	      df_uses_record (df, &XEXP (dst, 0),
			      DF_REF_REG_MEM_STORE,
			      bb, insn, 0);
	      break;
	    case STRICT_LOW_PART:
	      /* A strict_low_part uses the whole REG and not just the
		 SUBREG.  */
	      dst = XEXP (dst, 0);
	      if (GET_CODE (dst) != SUBREG)
		abort ();
	      use_flags = DF_REF_READ_WRITE;
	      df_uses_record (df, &SUBREG_REG (dst), DF_REF_REG_USE, bb,
			      insn, use_flags);
	      break;
	    case ZERO_EXTRACT:
	    case SIGN_EXTRACT:
	      df_uses_record (df, &XEXP (dst, 0), DF_REF_REG_USE, bb, insn,
			      DF_REF_READ_WRITE);
	      df_uses_record (df, &XEXP (dst, 1), DF_REF_REG_USE, bb, insn, 0);
	      df_uses_record (df, &XEXP (dst, 2), DF_REF_REG_USE, bb, insn, 0);
	      dst = XEXP (dst, 0);
	      break;
	    default:
	      abort ();
	  }
	return;
      }

    case RETURN:
      break;

    case ASM_OPERANDS:
    case UNSPEC_VOLATILE:
    case TRAP_IF:
    case ASM_INPUT:
      {
	/* Traditional and volatile asm instructions must be considered to use
	   and clobber all hard registers, all pseudo-registers and all of
	   memory.  So must TRAP_IF and UNSPEC_VOLATILE operations.

	   Consider for instance a volatile asm that changes the fpu rounding
	   mode.  An insn should not be moved across this even if it only uses
	   pseudo-regs because it might give an incorrectly rounded result.

	   For now, just mark any regs we can find in ASM_OPERANDS as
	   used.  */

	/* For all ASM_OPERANDS, we must traverse the vector of input operands.
	   We cannot just fall through here since then we would be confused
	   by the ASM_INPUT rtx inside ASM_OPERANDS, which do not indicate
	   traditional asms unlike their normal usage.  */
	if (code == ASM_OPERANDS)
	  {
	    int j;

	    for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
	      df_uses_record (df, &ASM_OPERANDS_INPUT (x, j),
			      DF_REF_REG_USE, bb, insn, 0);
	    return;
	  }
	break;
      }

    case PRE_DEC:
    case POST_DEC:
    case PRE_INC:
    case POST_INC:
    case PRE_MODIFY:
    case POST_MODIFY:
      /* Catch the def of the register being modified.  */
      df_ref_record (df, XEXP (x, 0), &XEXP (x, 0), insn,
		     DF_REF_REG_DEF, DF_REF_READ_WRITE);

      /* ... Fall through to handle uses ...  */

    default:
      break;
    }

  /* Recursively scan the operands of this expression.  */
  {
    const char *fmt = GET_RTX_FORMAT (code);
    int i;

    for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
      {
	if (fmt[i] == 'e')
	  {
	    /* Tail recursive case: save a function call level.  */
	    if (i == 0)
	      {
		loc = &XEXP (x, 0);
		goto retry;
	      }
	    df_uses_record (df, &XEXP (x, i), ref_type, bb, insn, flags);
	  }
	else if (fmt[i] == 'E')
	  {
	    int j;
	    for (j = 0; j < XVECLEN (x, i); j++)
	      df_uses_record (df, &XVECEXP (x, i, j), ref_type,
			      bb, insn, flags);
	  }
      }
  }
}


/* Record all the refs within INSN of basic block BB.  */
static void
df_insn_refs_record (struct df *df, basic_block bb, rtx insn)
{
  int i;

  if (INSN_P (insn))
    {
      rtx note;

      /* Record register defs.  */
      df_defs_record (df, PATTERN (insn), bb, insn);

      if (df->flags & DF_EQUIV_NOTES)
	for (note = REG_NOTES (insn); note;
	     note = XEXP (note, 1))
	  {
	    switch (REG_NOTE_KIND (note))
	      {
	      case REG_EQUIV:
	      case REG_EQUAL:
		df_uses_record (df, &XEXP (note, 0), DF_REF_REG_USE,
				bb, insn, 0);
	      default:
		break;
	      }
	  }

      if (GET_CODE (insn) == CALL_INSN)
	{
	  rtx note;
	  rtx x;

	  /* Record the registers used to pass arguments.  */
	  for (note = CALL_INSN_FUNCTION_USAGE (insn); note;
	       note = XEXP (note, 1))
	    {
	      if (GET_CODE (XEXP (note, 0)) == USE)
		df_uses_record (df, &XEXP (XEXP (note, 0), 0), DF_REF_REG_USE,
				bb, insn, 0);
	    }

	  /* The stack ptr is used (honorarily) by a CALL insn.  */
	  x = df_reg_use_gen (STACK_POINTER_REGNUM);
	  df_uses_record (df, &XEXP (x, 0), DF_REF_REG_USE, bb, insn, 0);

	  if (df->flags & DF_HARD_REGS)
	    {
	      /* Calls may also reference any of the global registers,
		 so they are recorded as used.  */
	      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
		if (global_regs[i])
		  {
		    x = df_reg_use_gen (i);
		    df_uses_record (df, &SET_DEST (x),
				    DF_REF_REG_USE, bb, insn, 0);
		  }
	    }
	}

      /* Record the register uses.  */
      df_uses_record (df, &PATTERN (insn),
		      DF_REF_REG_USE, bb, insn, 0);

      if (GET_CODE (insn) == CALL_INSN)
	{
	  rtx note;

	  if (df->flags & DF_HARD_REGS)
	    {
	      /* Kill all registers invalidated by a call.  */
	      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
		if (TEST_HARD_REG_BIT (regs_invalidated_by_call, i))
		  {
		    rtx reg_clob = df_reg_clobber_gen (i);
		    df_defs_record (df, reg_clob, bb, insn);
		  }
	    }

	  /* There may be extra registers to be clobbered.  */
	  for (note = CALL_INSN_FUNCTION_USAGE (insn);
	       note;
	       note = XEXP (note, 1))
	    if (GET_CODE (XEXP (note, 0)) == CLOBBER)
	      df_defs_record (df, XEXP (note, 0), bb, insn);
	}
    }
}


/* Record all the refs within the basic block BB.  */
static void
df_bb_refs_record (struct df *df, basic_block bb)
{
  rtx insn;

  /* Scan the block an insn at a time from beginning to end.  */
  for (insn = bb->head; ; insn = NEXT_INSN (insn))
    {
      if (INSN_P (insn))
	{
	  /* Record defs within INSN.  */
	  df_insn_refs_record (df, bb, insn);
	}
      if (insn == bb->end)
	break;
    }
}


/* Record all the refs in the basic blocks specified by BLOCKS.  */
static void
df_refs_record (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_refs_record (df, bb);
    });
}

/* Dataflow analysis routines.  */


/* Create reg-def chains for basic block BB.  These are a list of
   definitions for each register.  */
static void
df_bb_reg_def_chain_create (struct df *df, basic_block bb)
{
  rtx insn;

  /* Perhaps the defs should be sorted using a depth first search
     of the CFG (or possibly a breadth first search).  We currently
     scan the basic blocks in reverse order so that the first defs
     appear at the start of the chain.  */

  for (insn = bb->end; insn && insn != PREV_INSN (bb->head);
       insn = PREV_INSN (insn))
    {
      struct df_link *link;
      unsigned int uid = INSN_UID (insn);

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].defs; link; link = link->next)
	{
	  struct ref *def = link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  /* Do not add refs to the chain twice, i.e., only add new
	     refs.  XXX the same could be done by testing if the
	     current insn is a modified (or a new) one.  This would be
	     faster.  */
	  if (DF_REF_ID (def) < df->def_id_save)
	    continue;

	  df->regs[dregno].defs
	    = df_link_create (def, df->regs[dregno].defs);
	}
    }
}


/* Create reg-def chains for each basic block within BLOCKS.  These
   are a list of definitions for each register.  */
static void
df_reg_def_chain_create (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP/*_REV*/ (blocks, 0, bb,
    {
      df_bb_reg_def_chain_create (df, bb);
    });
}


/* Create reg-use chains for basic block BB.  These are a list of uses
   for each register.  */
static void
df_bb_reg_use_chain_create (struct df *df, basic_block bb)
{
  rtx insn;

  /* Scan in forward order so that the last uses appear at the start
     of the chain.  */

  for (insn = bb->head; insn && insn != NEXT_INSN (bb->end);
       insn = NEXT_INSN (insn))
    {
      struct df_link *link;
      unsigned int uid = INSN_UID (insn);

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].uses; link; link = link->next)
	{
	  struct ref *use = link->ref;
	  unsigned int uregno = DF_REF_REGNO (use);

	  /* Do not add refs to the chain twice, i.e., only add new
	     refs.  XXX the same could be done by testing if the
	     current insn is a modified (or a new) one.  This would be
	     faster.  */
	  if (DF_REF_ID (use) < df->use_id_save)
	    continue;

	  df->regs[uregno].uses
	    = df_link_create (use, df->regs[uregno].uses);
	}
    }
}


/* Create reg-use chains for each basic block within BLOCKS.  These
   are a list of uses for each register.  */
static void
df_reg_use_chain_create (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_reg_use_chain_create (df, bb);
    });
}


/* Create def-use chains from reaching use bitmaps for basic block BB.  */
static void
df_bb_du_chain_create (struct df *df, basic_block bb, bitmap ru)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  bitmap_copy (ru, bb_info->ru_out);

  /* For each def in BB create a linked list (chain) of uses
     reached from the def.  */
  for (insn = bb->end; insn && insn != PREV_INSN (bb->head);
       insn = PREV_INSN (insn))
    {
      struct df_link *def_link;
      struct df_link *use_link;
      unsigned int uid = INSN_UID (insn);

      if (! INSN_P (insn))
	continue;

      /* For each def in insn...  */
      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  DF_REF_CHAIN (def) = 0;

	  /* While the reg-use chains are not essential, it
	     is _much_ faster to search these short lists rather
	     than all the reaching uses, especially for large functions.  */
	  for (use_link = df->regs[dregno].uses; use_link;
	       use_link = use_link->next)
	    {
	      struct ref *use = use_link->ref;

	      if (bitmap_bit_p (ru, DF_REF_ID (use)))
		{
		  DF_REF_CHAIN (def)
		    = df_link_create (use, DF_REF_CHAIN (def));

		  bitmap_clear_bit (ru, DF_REF_ID (use));
		}
	    }
	}

      /* For each use in insn...  */
      for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
	{
	  struct ref *use = use_link->ref;
	  bitmap_set_bit (ru, DF_REF_ID (use));
	}
    }
}


/* Create def-use chains from reaching use bitmaps for basic blocks
   in BLOCKS.  */
static void
df_du_chain_create (struct df *df, bitmap blocks)
{
  bitmap ru;
  basic_block bb;

  ru = BITMAP_XMALLOC ();

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_du_chain_create (df, bb, ru);
    });

  BITMAP_XFREE (ru);
}


/* Create use-def chains from reaching def bitmaps for basic block BB.  */
static void
df_bb_ud_chain_create (struct df *df, basic_block bb)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  struct ref **reg_def_last = df->reg_def_last;
  rtx insn;

  memset (reg_def_last, 0, df->n_regs * sizeof (struct ref *));

  /* For each use in BB create a linked list (chain) of defs
     that reach the use.  */
  for (insn = bb->head; insn && insn != NEXT_INSN (bb->end);
       insn = NEXT_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *use_link;
      struct df_link *def_link;

      if (! INSN_P (insn))
	continue;

      /* For each use in insn...  */
      for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
	{
	  struct ref *use = use_link->ref;
	  unsigned int regno = DF_REF_REGNO (use);

	  DF_REF_CHAIN (use) = 0;

	  /* Has regno been defined in this BB yet?  If so, use
	     the last def as the single entry for the use-def
	     chain for this use.  Otherwise, we need to add all
	     the defs using this regno that reach the start of
	     this BB.  */
	  if (reg_def_last[regno])
	    {
	      DF_REF_CHAIN (use)
		= df_link_create (reg_def_last[regno], 0);
	    }
	  else
	    {
	      /* While the reg-def chains are not essential, it is
		 _much_ faster to search these short lists rather than
		 all the reaching defs, especially for large
		 functions.  */
	      for (def_link = df->regs[regno].defs; def_link;
		   def_link = def_link->next)
		{
		  struct ref *def = def_link->ref;

		  if (bitmap_bit_p (bb_info->rd_in, DF_REF_ID (def)))
		    {
		      DF_REF_CHAIN (use)
			= df_link_create (def, DF_REF_CHAIN (use));
		    }
		}
	    }
	}

      /* For each def in insn... record the last def of each reg.  */
      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  int dregno = DF_REF_REGNO (def);

	  reg_def_last[dregno] = def;
	}
    }
}


/* Create use-def chains from reaching def bitmaps for basic blocks
   within BLOCKS.  */
static void
df_ud_chain_create (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_ud_chain_create (df, bb);
    });
}
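

/* Transfer functions for the dataflow solver.  Each one recomputes one
   set from the others via bitmap_union_of_diff (DST = A | (B & ~C)) and
   records in *CHANGED whether the set changed.  Reaching defs is a
   forward problem: out = gen | (in - kill).  Reaching uses and live
   registers are backward problems: in = gen | (out - kill).  */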
static void
df_rd_transfer_function (int bb ATTRIBUTE_UNUSED, int *changed, bitmap in,
			 bitmap out, bitmap gen, bitmap kill,
			 void *data ATTRIBUTE_UNUSED)
{
  *changed = bitmap_union_of_diff (out, gen, in, kill);
}


static void
df_ru_transfer_function (int bb ATTRIBUTE_UNUSED, int *changed, bitmap in,
			 bitmap out, bitmap gen, bitmap kill,
			 void *data ATTRIBUTE_UNUSED)
{
  *changed = bitmap_union_of_diff (in, gen, out, kill);
}


static void
df_lr_transfer_function (int bb ATTRIBUTE_UNUSED, int *changed, bitmap in,
			 bitmap out, bitmap use, bitmap def,
			 void *data ATTRIBUTE_UNUSED)
{
  *changed = bitmap_union_of_diff (in, use, out, def);
}


/* Compute local reaching def info for basic block BB.  */
static void
df_bb_rd_local_compute (struct df *df, basic_block bb)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  for (insn = bb->head; insn && insn != NEXT_INSN (bb->end);
       insn = NEXT_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *def_link;

      if (! INSN_P (insn))
	continue;

      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  unsigned int regno = DF_REF_REGNO (def);
	  struct df_link *def2_link;

	  for (def2_link = df->regs[regno].defs; def2_link;
	       def2_link = def2_link->next)
	    {
	      struct ref *def2 = def2_link->ref;

	      /* Add all defs of this reg to the set of kills.  This
		 is greedy since many of these defs will not actually
		 be killed by this BB but it keeps things a lot
		 simpler.  */
	      bitmap_set_bit (bb_info->rd_kill, DF_REF_ID (def2));

	      /* Zap from the set of gens for this BB.  */
	      bitmap_clear_bit (bb_info->rd_gen, DF_REF_ID (def2));
	    }

	  bitmap_set_bit (bb_info->rd_gen, DF_REF_ID (def));
	}
    }

  bb_info->rd_valid = 1;
}


/* Compute local reaching def info for each basic block within BLOCKS.  */
static void
df_rd_local_compute (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_rd_local_compute (df, bb);
    });
}


/* Compute local reaching use (upward exposed use) info for basic
   block BB.  */
static void
df_bb_ru_local_compute (struct df *df, basic_block bb)
{
  /* This is much more tricky than computing reaching defs.  With
     reaching defs, defs get killed by other defs.  With upwards
     exposed uses, these get killed by defs with the same regno.  */

  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  for (insn = bb->end; insn && insn != PREV_INSN (bb->head);
       insn = PREV_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *def_link;
      struct df_link *use_link;

      if (! INSN_P (insn))
	continue;

      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  for (use_link = df->regs[dregno].uses; use_link;
	       use_link = use_link->next)
	    {
	      struct ref *use = use_link->ref;

	      /* Add all uses of this reg to the set of kills.  This
		 is greedy since many of these uses will not actually
		 be killed by this BB but it keeps things a lot
		 simpler.  */
	      bitmap_set_bit (bb_info->ru_kill, DF_REF_ID (use));

	      /* Zap from the set of gens for this BB.  */
	      bitmap_clear_bit (bb_info->ru_gen, DF_REF_ID (use));
	    }
	}

      for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
	{
	  struct ref *use = use_link->ref;
	  /* Add use to set of gens in this BB.  */
	  bitmap_set_bit (bb_info->ru_gen, DF_REF_ID (use));
	}
    }
  bb_info->ru_valid = 1;
}


/* Compute local reaching use (upward exposed use) info for each basic
   block within BLOCKS.  */
static void
df_ru_local_compute (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_ru_local_compute (df, bb);
    });
}


/* Compute local live variable info for basic block BB.  */
static void
df_bb_lr_local_compute (struct df *df, basic_block bb)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  for (insn = bb->end; insn && insn != PREV_INSN (bb->head);
       insn = PREV_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *link;

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].defs; link; link = link->next)
	{
	  struct ref *def = link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  /* Add def to set of defs in this BB.  */
	  bitmap_set_bit (bb_info->lr_def, dregno);

	  bitmap_clear_bit (bb_info->lr_use, dregno);
	}

      for (link = df->insns[uid].uses; link; link = link->next)
	{
	  struct ref *use = link->ref;
	  /* Add use to set of uses in this BB.  */
	  bitmap_set_bit (bb_info->lr_use, DF_REF_REGNO (use));
	}
    }
  bb_info->lr_valid = 1;
}


/* Compute local live variable info for each basic block within BLOCKS.  */
static void
df_lr_local_compute (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_lr_local_compute (df, bb);
    });
}


/* Compute register info: lifetime, bb, and number of defs and uses
   for basic block BB.  */
static void
df_bb_reg_info_compute (struct df *df, basic_block bb, bitmap live)
{
  struct reg_info *reg_info = df->regs;
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  bitmap_copy (live, bb_info->lr_out);

  for (insn = bb->end; insn && insn != PREV_INSN (bb->head);
       insn = PREV_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      unsigned int regno;
      struct df_link *link;

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].defs; link; link = link->next)
	{
	  struct ref *def = link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  /* Kill this register.  */
	  bitmap_clear_bit (live, dregno);
	  reg_info[dregno].n_defs++;
	}

      for (link = df->insns[uid].uses; link; link = link->next)
	{
	  struct ref *use = link->ref;
	  unsigned int uregno = DF_REF_REGNO (use);

	  /* This register is now live.  */
	  bitmap_set_bit (live, uregno);
	  reg_info[uregno].n_uses++;
	}

      /* Increment lifetimes of all live registers.  */
      EXECUTE_IF_SET_IN_BITMAP (live, 0, regno,
	{
	  reg_info[regno].lifetime++;
	});
    }
}


/* Compute register info: lifetime, bb, and number of defs and uses.  */
static void
df_reg_info_compute (struct df *df, bitmap blocks)
{
  basic_block bb;
  bitmap live;

  live = BITMAP_XMALLOC ();

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_reg_info_compute (df, bb, live);
    });

  BITMAP_XFREE (live);
}


/* Assign LUIDs for BB.  */
static int
df_bb_luids_set (struct df *df, basic_block bb)
{
  rtx insn;
  int luid = 0;

  /* The LUIDs are monotonically increasing within each basic block.  */

  for (insn = bb->head; ; insn = NEXT_INSN (insn))
    {
      if (INSN_P (insn))
	DF_INSN_LUID (df, insn) = luid++;
      DF_INSN_LUID (df, insn) = luid;

      if (insn == bb->end)
	break;
    }
  return luid;
}


/* Assign LUIDs for each basic block within BLOCKS.  */
static int
df_luids_set (struct df *df, bitmap blocks)
{
  basic_block bb;
  int total = 0;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      total += df_bb_luids_set (df, bb);
    });
  return total;
}


/* Perform dataflow analysis using existing DF structure for blocks
   within BLOCKS.  If BLOCKS is zero, use all basic blocks in the CFG.  */
static void
df_analyse_1 (struct df *df, bitmap blocks, int flags, int update)
{
  int aflags;
  int dflags;
  int i;
  basic_block bb;

  dflags = 0;
  aflags = flags;
  if (flags & DF_UD_CHAIN)
    aflags |= DF_RD | DF_RD_CHAIN;

  if (flags & DF_DU_CHAIN)
    aflags |= DF_RU;

  if (flags & DF_RU)
    aflags |= DF_RU_CHAIN;

  if (flags & DF_REG_INFO)
    aflags |= DF_LR;

  if (! blocks)
    blocks = df->all_blocks;

  df->flags = flags;
  if (update)
    {
      df_refs_update (df);
      /* More fine grained incremental dataflow analysis would be
	 nice.  For now recompute the whole shebang for the
	 modified blocks.  */
#if 0
      df_refs_unlink (df, blocks);
#endif
      /* All the def-use, use-def chains can be potentially
	 modified by changes in one block.  The size of the
	 bitmaps can also change.  */
    }
  else
    {
      /* Scan the function for all register defs and uses.  */
      df_refs_queue (df);
      df_refs_record (df, blocks);

      /* Link all the new defs and uses to the insns.  */
      df_refs_process (df);
    }

  /* Allocate the bitmaps now the total number of defs and uses are
     known.  If the number of defs or uses have changed, then
     these bitmaps need to be reallocated.  */
  df_bitmaps_alloc (df, aflags);

  /* Set the LUIDs for each specified basic block.  */
  df_luids_set (df, blocks);

  /* Recreate reg-def and reg-use chains from scratch so that first
     def is at the head of the reg-def chain and the last use is at
     the head of the reg-use chain.  This is only important for
     regs local to a basic block as it speeds up searching.  */
  if (aflags & DF_RD_CHAIN)
    {
      df_reg_def_chain_create (df, blocks);
    }

  if (aflags & DF_RU_CHAIN)
    {
      df_reg_use_chain_create (df, blocks);
    }

  df->dfs_order = xmalloc (sizeof (int) * n_basic_blocks);
  df->rc_order = xmalloc (sizeof (int) * n_basic_blocks);
  df->rts_order = xmalloc (sizeof (int) * n_basic_blocks);
  df->inverse_dfs_map = xmalloc (sizeof (int) * last_basic_block);
  df->inverse_rc_map = xmalloc (sizeof (int) * last_basic_block);
  df->inverse_rts_map = xmalloc (sizeof (int) * last_basic_block);

  flow_depth_first_order_compute (df->dfs_order, df->rc_order);
  flow_reverse_top_sort_order_compute (df->rts_order);
  for (i = 0; i < n_basic_blocks; i++)
    {
      df->inverse_dfs_map[df->dfs_order[i]] = i;
      df->inverse_rc_map[df->rc_order[i]] = i;
      df->inverse_rts_map[df->rts_order[i]] = i;
    }
  if (aflags & DF_RD)
    {
      /* Compute the sets of gens and kills for the defs of each bb.  */
      df_rd_local_compute (df, df->flags & DF_RD ? blocks : df->all_blocks);
      {
	bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *gen = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *kill = xmalloc (sizeof (bitmap) * last_basic_block);
	FOR_EACH_BB (bb)
	  {
	    in[bb->index] = DF_BB_INFO (df, bb)->rd_in;
	    out[bb->index] = DF_BB_INFO (df, bb)->rd_out;
	    gen[bb->index] = DF_BB_INFO (df, bb)->rd_gen;
	    kill[bb->index] = DF_BB_INFO (df, bb)->rd_kill;
	  }
	iterative_dataflow_bitmap (in, out, gen, kill, df->all_blocks,
				   DF_FORWARD, DF_UNION, df_rd_transfer_function,
				   df->inverse_rc_map, NULL);
	free (in);
	free (out);
	free (gen);
	free (kill);
      }
    }

  if (aflags & DF_UD_CHAIN)
    {
      /* Create use-def chains.  */
      df_ud_chain_create (df, df->all_blocks);

      if (! (flags & DF_RD))
	dflags |= DF_RD;
    }

  if (aflags & DF_RU)
    {
      /* Compute the sets of gens and kills for the upwards exposed
	 uses in each bb.  */
      df_ru_local_compute (df, df->flags & DF_RU ? blocks : df->all_blocks);
      {
	bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *gen = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *kill = xmalloc (sizeof (bitmap) * last_basic_block);
	FOR_EACH_BB (bb)
	  {
	    in[bb->index] = DF_BB_INFO (df, bb)->ru_in;
	    out[bb->index] = DF_BB_INFO (df, bb)->ru_out;
	    gen[bb->index] = DF_BB_INFO (df, bb)->ru_gen;
	    kill[bb->index] = DF_BB_INFO (df, bb)->ru_kill;
	  }
	iterative_dataflow_bitmap (in, out, gen, kill, df->all_blocks,
				   DF_BACKWARD, DF_UNION, df_ru_transfer_function,
				   df->inverse_rts_map, NULL);
	free (in);
	free (out);
	free (gen);
	free (kill);
      }
    }

  if (aflags & DF_DU_CHAIN)
    {
      /* Create def-use chains.  */
      df_du_chain_create (df, df->all_blocks);

      if (! (flags & DF_RU))
	dflags |= DF_RU;
    }

  /* Free up bitmaps that are no longer required.  */
  if (dflags)
    df_bitmaps_free (df, dflags);

  if (aflags & DF_LR)
    {
      /* Compute the sets of defs and uses of live variables.  */
      df_lr_local_compute (df, df->flags & DF_LR ? blocks : df->all_blocks);
      {
	bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *use = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *def = xmalloc (sizeof (bitmap) * last_basic_block);
	FOR_EACH_BB (bb)
	  {
	    in[bb->index] = DF_BB_INFO (df, bb)->lr_in;
	    out[bb->index] = DF_BB_INFO (df, bb)->lr_out;
	    use[bb->index] = DF_BB_INFO (df, bb)->lr_use;
	    def[bb->index] = DF_BB_INFO (df, bb)->lr_def;
	  }
	iterative_dataflow_bitmap (in, out, use, def, df->all_blocks,
				   DF_BACKWARD, DF_UNION, df_lr_transfer_function,
				   df->inverse_rts_map, NULL);
	free (in);
	free (out);
	free (use);
	free (def);
      }
    }

  if (aflags & DF_REG_INFO)
    {
      df_reg_info_compute (df, df->all_blocks);
    }

  free (df->dfs_order);
  free (df->rc_order);
  free (df->rts_order);
  free (df->inverse_rc_map);
  free (df->inverse_dfs_map);
  free (df->inverse_rts_map);
}


/* Initialize dataflow analysis.  */
struct df *
df_init (void)
{
  struct df *df;

  df = xcalloc (1, sizeof (struct df));

  /* Squirrel away a global for debugging.  */
  ddf = df;

  return df;
}


/* Start queuing refs.  */
static int
df_refs_queue (struct df *df)
{
  df->def_id_save = df->def_id;
  df->use_id_save = df->use_id;
  /* ???? Perhaps we should save current obstack state so that we can
     unwind it.  */
  return 0;
}


/* Process queued refs.  */
static int
df_refs_process (struct df *df)
{
  unsigned int i;

  /* Build new insn-def chains.  */
  for (i = df->def_id_save; i != df->def_id; i++)
    {
      struct ref *def = df->defs[i];
      unsigned int uid = DF_REF_INSN_UID (def);

      /* Add def to head of def list for INSN.  */
      df->insns[uid].defs
	= df_link_create (def, df->insns[uid].defs);
    }

  /* Build new insn-use chains.  */
  for (i = df->use_id_save; i != df->use_id; i++)
    {
      struct ref *use = df->uses[i];
      unsigned int uid = DF_REF_INSN_UID (use);

      /* Add use to head of use list for INSN.  */
      df->insns[uid].uses
	= df_link_create (use, df->insns[uid].uses);
    }
  return 0;
}
2091 /* Update refs for basic block BB. */
2092 static int
2093 df_bb_refs_update (struct df *df, basic_block bb)
2095 rtx insn;
2096 int count = 0;
2098 /* While we have to scan the chain of insns for this BB, we do not
2099 need to allocate and queue a long chain of BB/INSN pairs. Using
2100 a bitmap for insns_modified saves memory and avoids queuing
2101 duplicates. */
2103 for (insn = bb->head; ; insn = NEXT_INSN (insn))
2105 unsigned int uid;
2107 uid = INSN_UID (insn);
2109 if (bitmap_bit_p (df->insns_modified, uid))
2111 /* Delete any allocated refs of this insn. MPH, FIXME. */
2112 df_insn_refs_unlink (df, bb, insn);
2114 /* Scan the insn for refs. */
2115 df_insn_refs_record (df, bb, insn);
2117 count++;
2119 if (insn == bb->end)
2120 break;
2122 return count;
2126 /* Process all the modified/deleted insns that were queued. */
2127 static int
2128 df_refs_update (struct df *df)
2130 basic_block bb;
2131 int count = 0;
2133 if ((unsigned int) max_reg_num () >= df->reg_size)
2134 df_reg_table_realloc (df, 0);
2136 df_refs_queue (df);
2138 FOR_EACH_BB_IN_BITMAP (df->bbs_modified, 0, bb,
2140 count += df_bb_refs_update (df, bb);
2143 df_refs_process (df);
2144 return count;
2148 /* Return nonzero if any of the requested blocks in the bitmap
2149 BLOCKS have been modified. */
2150 static int
2151 df_modified_p (struct df *df, bitmap blocks)
2153 int update = 0;
2154 basic_block bb;
2156 if (!df->n_bbs)
2157 return 0;
2159 FOR_EACH_BB (bb)
2160 if (bitmap_bit_p (df->bbs_modified, bb->index)
2161 && (! blocks || (blocks == (bitmap) -1) || bitmap_bit_p (blocks, bb->index)))
2163 update = 1;
2164 break;
2167 return update;
2171 /* Analyze dataflow info for the basic blocks specified by the bitmap
2172 BLOCKS, or for the whole CFG if BLOCKS is zero, or just for the
2173 modified blocks if BLOCKS is -1. */
2174 int
2175 df_analyse (struct df *df, bitmap blocks, int flags)
2177 int update;
2179 /* We could deal with additional basic blocks being created by
2180 rescanning everything again; for now we just abort. */
2181 if (df->n_bbs && df->n_bbs != (unsigned int) last_basic_block)
2182 abort ();
2184 update = df_modified_p (df, blocks);
2185 if (update || (flags != df->flags))
2187 if (! blocks)
2189 if (df->n_bbs)
2191 /* Recompute everything from scratch. */
2192 df_free (df);
2194 /* Allocate and initialize data structures. */
2195 df_alloc (df, max_reg_num ());
2196 df_analyse_1 (df, 0, flags, 0);
2197 update = 1;
2199 else
2201 if (blocks == (bitmap) -1)
2202 blocks = df->bbs_modified;
2204 if (! df->n_bbs)
2205 abort ();
2207 df_analyse_1 (df, blocks, flags, 1);
2208 bitmap_zero (df->bbs_modified);
2209 bitmap_zero (df->insns_modified);
2212 return update;
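/* Editorial example (a sketch, not part of the original sources): a
   pass that tinkers with insns would typically mark each change and
   then re-analyse only the modified blocks:

       df_insn_modify (df, bb, insn);
       ... further changes ...
       df_analyse (df, (bitmap) -1, DF_UD_CHAIN | DF_DU_CHAIN);  */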
2216 /* Free all the dataflow info and the DF structure. */
2217 void
2218 df_finish (struct df *df)
2220 df_free (df);
2221 free (df);
2225 /* Unlink INSN from its reference information. */
2226 static void
2227 df_insn_refs_unlink (struct df *df, basic_block bb ATTRIBUTE_UNUSED, rtx insn)
2229 struct df_link *link;
2230 unsigned int uid;
2232 uid = INSN_UID (insn);
2234 /* Unlink all refs defined by this insn. */
2235 for (link = df->insns[uid].defs; link; link = link->next)
2236 df_def_unlink (df, link->ref);
2238 /* Unlink all refs used by this insn. */
2239 for (link = df->insns[uid].uses; link; link = link->next)
2240 df_use_unlink (df, link->ref);
2242 df->insns[uid].defs = 0;
2243 df->insns[uid].uses = 0;
2247 #if 0
2248 /* Unlink all the insns within BB from their reference information. */
2249 static void
2250 df_bb_refs_unlink (struct df *df, basic_block bb)
2252 rtx insn;
2254 /* Scan the block an insn at a time from beginning to end. */
2255 for (insn = bb->head; ; insn = NEXT_INSN (insn))
2257 if (INSN_P (insn))
2259 /* Unlink refs for INSN. */
2260 df_insn_refs_unlink (df, bb, insn);
2262 if (insn == bb->end)
2263 break;
2268 /* Unlink all the refs in the basic blocks specified by BLOCKS.
2269 Not currently used. */
2270 static void
2271 df_refs_unlink (struct df *df, bitmap blocks)
2273 basic_block bb;
2275 if (blocks)
2277 FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
2279 df_bb_refs_unlink (df, bb);
2282 else
2284 FOR_EACH_BB (bb)
2285 df_bb_refs_unlink (df, bb);
2288 #endif
2290 /* Functions to modify insns. */
2293 /* Delete INSN and all its reference information. */
2294 rtx
2295 df_insn_delete (struct df *df, basic_block bb, rtx insn)
2297 /* If the insn is a jump, we should perhaps call delete_insn to
2298 handle the JUMP_LABEL? */
2300 /* We should not be deleting the NOTE_INSN_BASIC_BLOCK or label. */
2301 if (insn == bb->head)
2302 abort ();
2304 /* Delete the insn. */
2305 delete_insn (insn);
2307 df_insn_modify (df, bb, insn);
2309 return NEXT_INSN (insn);
2313 /* Mark that INSN within BB may have changed (created/modified/deleted).
2314 This may be called multiple times for the same insn. There is no
2315 harm in calling this function if the insn wasn't changed; it will
2316 just slow down the rescanning of refs. */
2317 void
2318 df_insn_modify (struct df *df, basic_block bb, rtx insn)
2320 unsigned int uid;
2322 uid = INSN_UID (insn);
2323 if (uid >= df->insn_size)
2324 df_insn_table_realloc (df, uid);
2326 bitmap_set_bit (df->bbs_modified, bb->index);
2327 bitmap_set_bit (df->insns_modified, uid);
2329 /* For incremental updating on the fly, perhaps we could make a copy
2330 of all the refs of the original insn and turn them into
2331 anti-refs. When df_refs_update finds these anti-refs, it annihilates
2332 the original refs. If validate_change fails then these anti-refs
2333 will just get ignored. */
2337 typedef struct replace_args
2339 rtx match;
2340 rtx replacement;
2341 rtx insn;
2342 int modified;
2343 } replace_args;
2346 /* Replace mem pointed to by PX with its associated pseudo register.
2347 DATA is actually a pointer to a structure describing the
2348 instruction currently being scanned and the MEM we are currently
2349 replacing. */
2350 static int
2351 df_rtx_mem_replace (rtx *px, void *data)
2353 replace_args *args = (replace_args *) data;
2354 rtx mem = *px;
2356 if (mem == NULL_RTX)
2357 return 0;
2359 switch (GET_CODE (mem))
2361 case MEM:
2362 break;
2364 case CONST_DOUBLE:
2365 /* We're not interested in the MEM associated with a
2366 CONST_DOUBLE, so there's no need to traverse into one. */
2367 return -1;
2369 default:
2370 /* This is not a MEM. */
2371 return 0;
2374 if (!rtx_equal_p (args->match, mem))
2375 /* This is not the MEM we are currently replacing. */
2376 return 0;
2378 /* Actually replace the MEM. */
2379 validate_change (args->insn, px, args->replacement, 1);
2380 args->modified++;
2382 return 0;
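/* Editorial note: callbacks handed to for_each_rtx, such as the one
   above, follow its usual protocol: return 0 to continue the walk,
   return -1 to skip the sub-rtxes of the current rtx, and return any
   other nonzero value to abort the traversal.  */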
2383 /* Replace, via validate_change, all occurrences of MEM within INSN
2384 with pseudo register REG, marking INSN as modified if any
2385 replacement is queued. Return the number of replacements. */
2386 int
2387 df_insn_mem_replace (struct df *df, basic_block bb, rtx insn, rtx mem, rtx reg)
2389 replace_args args;
2391 args.insn = insn;
2392 args.match = mem;
2393 args.replacement = reg;
2394 args.modified = 0;
2396 /* Search and replace all matching mems within insn. */
2397 for_each_rtx (&insn, df_rtx_mem_replace, &args);
2399 if (args.modified)
2400 df_insn_modify (df, bb, insn);
2402 /* ???? FIXME. We may have a new def or one or more new uses of REG
2403 in INSN. REG should be a new pseudo so it won't affect the
2404 dataflow information that we currently have. We should add
2405 the new uses and defs to INSN and then recreate the chains
2406 when df_analyse is called. */
2407 return args.modified;
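/* Editorial example (a sketch; it assumes, as the FIXME above
   requires, that REG is a fresh pseudo):

       rtx reg = gen_reg_rtx (GET_MODE (mem));
       ... emit an insn loading MEM into REG before INSN ...
       df_insn_mem_replace (df, bb, insn, mem, reg);  */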
2411 /* Replace one register with another. Called through for_each_rtx; PX
2412 points to the rtx being scanned. DATA is actually a pointer to a
2413 structure of arguments. */
2414 static int
2415 df_rtx_reg_replace (rtx *px, void *data)
2417 rtx x = *px;
2418 replace_args *args = (replace_args *) data;
2420 if (x == NULL_RTX)
2421 return 0;
2423 if (x == args->match)
2425 validate_change (args->insn, px, args->replacement, 1);
2426 args->modified++;
2429 return 0;
2433 /* Replace the reg within every ref on CHAIN that is within the set
2434 BLOCKS of basic blocks with NEWREG. Also update the regs within
2435 REG_NOTES. */
2436 void
2437 df_refs_reg_replace (struct df *df, bitmap blocks, struct df_link *chain, rtx oldreg, rtx newreg)
2439 struct df_link *link;
2440 replace_args args;
2442 if (! blocks)
2443 blocks = df->all_blocks;
2445 args.match = oldreg;
2446 args.replacement = newreg;
2447 args.modified = 0;
2449 for (link = chain; link; link = link->next)
2451 struct ref *ref = link->ref;
2452 rtx insn = DF_REF_INSN (ref);
2454 if (! INSN_P (insn))
2455 continue;
2457 if (bitmap_bit_p (blocks, DF_REF_BBNO (ref)))
2459 df_ref_reg_replace (df, ref, oldreg, newreg);
2461 /* Replace occurrences of the reg within the REG_NOTES. */
2462 if ((! link->next || DF_REF_INSN (ref)
2463 != DF_REF_INSN (link->next->ref))
2464 && REG_NOTES (insn))
2466 args.insn = insn;
2467 for_each_rtx (&REG_NOTES (insn), df_rtx_reg_replace, &args);
2470 else
2472 /* Temporary check to ensure that we have a grip on which
2473 regs should be replaced. */
2474 abort ();
2480 /* Replace all occurrences of register OLDREG with register NEWREG in
2481 blocks defined by bitmap BLOCKS. This also replaces occurrences of
2482 OLDREG in the REG_NOTES but only for insns containing OLDREG. This
2483 routine expects the reg-use and reg-def chains to be valid. */
2484 int
2485 df_reg_replace (struct df *df, bitmap blocks, rtx oldreg, rtx newreg)
2487 unsigned int oldregno = REGNO (oldreg);
2489 df_refs_reg_replace (df, blocks, df->regs[oldregno].defs, oldreg, newreg);
2490 df_refs_reg_replace (df, blocks, df->regs[oldregno].uses, oldreg, newreg);
2491 return 1;
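/* Editorial example (sketch): renaming OLDREG to a fresh pseudo
   throughout the function, relying on valid reg-def/reg-use chains:

       rtx newreg = gen_reg_rtx (GET_MODE (oldreg));
       df_reg_replace (df, NULL, oldreg, newreg);
       df_analyse (df, (bitmap) -1, df->flags);  */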
2495 /* Try replacing the reg within REF with NEWREG. Do not modify
2496 def-use/use-def chains. */
2497 int
2498 df_ref_reg_replace (struct df *df, struct ref *ref, rtx oldreg, rtx newreg)
2500 /* Check that insn was deleted by being converted into a NOTE. If
2501 so ignore this insn. */
2502 if (! INSN_P (DF_REF_INSN (ref)))
2503 return 0;
2505 if (oldreg && oldreg != DF_REF_REG (ref))
2506 abort ();
2508 if (! validate_change (DF_REF_INSN (ref), DF_REF_LOC (ref), newreg, 1))
2509 return 0;
2511 df_insn_modify (df, DF_REF_BB (ref), DF_REF_INSN (ref));
2512 return 1;
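/* Editorial comment (the original function is undocumented): remove
   the last use of REGNO within USE_INSN and transfer the first def of
   REGNO within DEF_INSN onto USE_INSN.  Return the def, or 0 if
   either ref cannot be found.  */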
2516 struct ref*
2517 df_bb_def_use_swap (struct df *df, basic_block bb, rtx def_insn, rtx use_insn, unsigned int regno)
2519 struct ref *def;
2520 struct ref *use;
2521 int def_uid;
2522 int use_uid;
2523 struct df_link *link;
2525 def = df_bb_insn_regno_first_def_find (df, bb, def_insn, regno);
2526 if (! def)
2527 return 0;
2529 use = df_bb_insn_regno_last_use_find (df, bb, use_insn, regno);
2530 if (! use)
2531 return 0;
2533 /* The USE no longer exists; unlink it from the reg-use and insn-use chains. */
2534 use_uid = INSN_UID (use_insn);
2535 df_use_unlink (df, use);
2536 df_ref_unlink (&df->insns[use_uid].uses, use);
2538 /* The DEF requires shifting so remove it from DEF_INSN
2539 and add it to USE_INSN by reusing LINK. */
2540 def_uid = INSN_UID (def_insn);
2541 link = df_ref_unlink (&df->insns[def_uid].defs, def);
2542 link->ref = def;
2543 link->next = df->insns[use_uid].defs;
2544 df->insns[use_uid].defs = link;
2546 #if 0
2547 link = df_ref_unlink (&df->regs[regno].defs, def);
2548 link->ref = def;
2549 link->next = df->regs[regno].defs;
2550 df->regs[regno].defs = link;
2551 #endif
2553 DF_REF_INSN (def) = use_insn;
2554 return def;
2558 /* Mark the insns between FIRST_INSN and LAST_INSN inclusive as
2559 modified. All new insns must be processed by this routine. */
2560 static void
2561 df_insns_modify (struct df *df, basic_block bb, rtx first_insn, rtx last_insn)
2563 rtx insn;
2565 for (insn = first_insn; ; insn = NEXT_INSN (insn))
2567 unsigned int uid;
2569 /* A non-const call should not have slipped through the net. If
2570 it does, we need to create a new basic block. Ouch. The
2571 same applies for a label. */
2572 if ((GET_CODE (insn) == CALL_INSN
2573 && ! CONST_OR_PURE_CALL_P (insn))
2574 || GET_CODE (insn) == CODE_LABEL)
2575 abort ();
2577 uid = INSN_UID (insn);
2579 if (uid >= df->insn_size)
2580 df_insn_table_realloc (df, uid);
2582 df_insn_modify (df, bb, insn);
2584 if (insn == last_insn)
2585 break;
2590 /* Emit PATTERN before INSN within BB. */
2591 rtx
2592 df_pattern_emit_before (struct df *df, rtx pattern, basic_block bb, rtx insn)
2594 rtx ret_insn;
2595 rtx prev_insn = PREV_INSN (insn);
2597 /* We should not be inserting before the start of the block. */
2598 if (insn == bb->head)
2599 abort ();
2600 ret_insn = emit_insn_before (pattern, insn);
2601 if (ret_insn == insn)
2602 return ret_insn;
2604 df_insns_modify (df, bb, NEXT_INSN (prev_insn), ret_insn);
2605 return ret_insn;
2609 /* Emit PATTERN after INSN within BB. */
2610 rtx
2611 df_pattern_emit_after (struct df *df, rtx pattern, basic_block bb, rtx insn)
2613 rtx ret_insn;
2615 ret_insn = emit_insn_after (pattern, insn);
2616 if (ret_insn == insn)
2617 return ret_insn;
2619 df_insns_modify (df, bb, NEXT_INSN (insn), ret_insn);
2620 return ret_insn;
2624 /* Emit jump PATTERN after INSN within BB. */
2625 rtx
2626 df_jump_pattern_emit_after (struct df *df, rtx pattern, basic_block bb, rtx insn)
2628 rtx ret_insn;
2630 ret_insn = emit_jump_insn_after (pattern, insn);
2631 if (ret_insn == insn)
2632 return ret_insn;
2634 df_insns_modify (df, bb, NEXT_INSN (insn), ret_insn);
2635 return ret_insn;
2639 /* Move INSN within BB before BEFORE_INSN within BEFORE_BB.
2641 This function should only be used to move loop invariant insns
2642 out of a loop where it has been proven that the def-use info
2643 will still be valid. */
2644 rtx
2645 df_insn_move_before (struct df *df, basic_block bb, rtx insn, basic_block before_bb, rtx before_insn)
2647 struct df_link *link;
2648 unsigned int uid;
2650 if (! bb)
2651 return df_pattern_emit_before (df, insn, before_bb, before_insn);
2653 uid = INSN_UID (insn);
2655 /* Change bb for all df defined and used by this insn. */
2656 for (link = df->insns[uid].defs; link; link = link->next)
2657 DF_REF_BB (link->ref) = before_bb;
2658 for (link = df->insns[uid].uses; link; link = link->next)
2659 DF_REF_BB (link->ref) = before_bb;
2661 /* The lifetimes of the registers used in this insn will be reduced
2662 while the lifetimes of the registers defined in this insn
2663 are likely to be increased. */
2665 /* ???? Perhaps all the insns moved should be stored on a list
2666 which df_analyse removes when it recalculates data flow. */
2668 return emit_insn_before (insn, before_insn);
2671 /* Functions to query dataflow information. */
2673 /* Return nonzero if INSN defines register REGNO. */
2674 int
2675 df_insn_regno_def_p (struct df *df, basic_block bb ATTRIBUTE_UNUSED,
2676 rtx insn, unsigned int regno)
2678 unsigned int uid;
2679 struct df_link *link;
2681 uid = INSN_UID (insn);
2683 for (link = df->insns[uid].defs; link; link = link->next)
2685 struct ref *def = link->ref;
2687 if (DF_REF_REGNO (def) == regno)
2688 return 1;
2691 return 0;
2695 static int
2696 df_def_dominates_all_uses_p (struct df *df ATTRIBUTE_UNUSED, struct ref *def)
2698 struct df_link *du_link;
2700 /* Follow def-use chain to find all the uses of this def. */
2701 for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
2703 struct ref *use = du_link->ref;
2704 struct df_link *ud_link;
2706 /* Follow use-def chain to check all the defs for this use. */
2707 for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
2708 if (ud_link->ref != def)
2709 return 0;
2711 return 1;
2713 /* Return nonzero if all the defs of INSN dominate all the
2714 corresponding uses. */
2715 int
2716 df_insn_dominates_all_uses_p (struct df *df, basic_block bb ATTRIBUTE_UNUSED,
2717 rtx insn)
2719 unsigned int uid;
2720 struct df_link *link;
2722 uid = INSN_UID (insn);
2724 for (link = df->insns[uid].defs; link; link = link->next)
2726 struct ref *def = link->ref;
2728 if (! df_def_dominates_all_uses_p (df, def))
2729 return 0;
2732 return 1;
2736 /* Return nonzero if the def DEF dominates all the uses within the
2737 bitmap BLOCKS. */
2738 static int
2739 df_def_dominates_uses_p (struct df *df ATTRIBUTE_UNUSED, struct ref *def,
2740 bitmap blocks)
2742 struct df_link *du_link;
2744 /* Follow def-use chain to find all the uses of this def. */
2745 for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
2747 struct ref *use = du_link->ref;
2748 struct df_link *ud_link;
2750 /* Only worry about the uses within BLOCKS. For example,
2751 consider a register defined within a loop that is live at the
2752 loop exits. */
2753 if (bitmap_bit_p (blocks, DF_REF_BBNO (use)))
2755 /* Follow use-def chain to check all the defs for this use. */
2756 for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
2757 if (ud_link->ref != def)
2758 return 0;
2761 return 1;
2765 /* Return nonzero if all the defs of INSN within BB dominate all
2766 the corresponding uses within the bitmap BLOCKS. */
2767 int
2768 df_insn_dominates_uses_p (struct df *df, basic_block bb ATTRIBUTE_UNUSED,
2769 rtx insn, bitmap blocks)
2771 unsigned int uid;
2772 struct df_link *link;
2774 uid = INSN_UID (insn);
2776 for (link = df->insns[uid].defs; link; link = link->next)
2778 struct ref *def = link->ref;
2780 /* Only consider the defs within BLOCKS. */
2781 if (bitmap_bit_p (blocks, DF_REF_BBNO (def))
2782 && ! df_def_dominates_uses_p (df, def, blocks))
2783 return 0;
2785 return 1;
2789 /* Return the basic block that REG is referenced in, or NULL if it
2790 is referenced in multiple basic blocks. */
2791 basic_block
2792 df_regno_bb (struct df *df, unsigned int regno)
2794 struct df_link *defs = df->regs[regno].defs;
2795 struct df_link *uses = df->regs[regno].uses;
2796 struct ref *def = defs ? defs->ref : 0;
2797 struct ref *use = uses ? uses->ref : 0;
2798 basic_block bb_def = def ? DF_REF_BB (def) : 0;
2799 basic_block bb_use = use ? DF_REF_BB (use) : 0;
2801 /* Compare blocks of first def and last use. ???? FIXME. What if
2802 the reg-def and reg-use lists are not correctly ordered? */
2803 return bb_def == bb_use ? bb_def : 0;
2807 /* Return nonzero if REG is used in multiple basic blocks; df_regno_bb returns zero precisely in that case. */
2808 int
2809 df_reg_global_p (struct df *df, rtx reg)
2811 return df_regno_bb (df, REGNO (reg)) == 0;
2815 /* Return the total lifetime (in insns) of REG. */
2816 int
2817 df_reg_lifetime (struct df *df, rtx reg)
2819 return df->regs[REGNO (reg)].lifetime;
2823 /* Return nonzero if REG is live at the start of BB. */
2824 int
2825 df_bb_reg_live_start_p (struct df *df, basic_block bb, rtx reg)
2827 struct bb_info *bb_info = DF_BB_INFO (df, bb);
2829 #ifdef ENABLE_CHECKING
2830 if (! bb_info->lr_in)
2831 abort ();
2832 #endif
2834 return bitmap_bit_p (bb_info->lr_in, REGNO (reg));
2838 /* Return nonzero if REG is live at the end of BB. */
2839 int
2840 df_bb_reg_live_end_p (struct df *df, basic_block bb, rtx reg)
2842 struct bb_info *bb_info = DF_BB_INFO (df, bb);
2844 #ifdef ENABLE_CHECKING
2845 if (! bb_info->lr_out)
2846 abort ();
2847 #endif
2849 return bitmap_bit_p (bb_info->lr_out, REGNO (reg));
2853 /* Return -1 if the life of REG1 ends before that of REG2, 1 if it
2854 begins after the life of REG2 ends, or 0 if the lives overlap. */
2855 int
2856 df_bb_regs_lives_compare (struct df *df, basic_block bb, rtx reg1, rtx reg2)
2858 unsigned int regno1 = REGNO (reg1);
2859 unsigned int regno2 = REGNO (reg2);
2860 struct ref *def1;
2861 struct ref *use1;
2862 struct ref *def2;
2863 struct ref *use2;
2866 /* The regs must be local to BB. */
2867 if (df_regno_bb (df, regno1) != bb
2868 || df_regno_bb (df, regno2) != bb)
2869 abort ();
2871 def2 = df_bb_regno_first_def_find (df, bb, regno2);
2872 use1 = df_bb_regno_last_use_find (df, bb, regno1);
2874 if (DF_INSN_LUID (df, DF_REF_INSN (def2))
2875 > DF_INSN_LUID (df, DF_REF_INSN (use1)))
2876 return -1;
2878 def1 = df_bb_regno_first_def_find (df, bb, regno1);
2879 use2 = df_bb_regno_last_use_find (df, bb, regno2);
2881 if (DF_INSN_LUID (df, DF_REF_INSN (def1))
2882 > DF_INSN_LUID (df, DF_REF_INSN (use2)))
2883 return 1;
2885 return 0;
2889 /* Return last use of REGNO within BB. */
2890 static struct ref *
2891 df_bb_regno_last_use_find (struct df *df, basic_block bb, unsigned int regno)
2893 struct df_link *link;
2895 /* This assumes that the reg-use list is ordered such that for any
2896 BB, the last use is found first. However, since the BBs are not
2897 ordered, the first use in the chain is not necessarily the last
2898 use in the function. */
2899 for (link = df->regs[regno].uses; link; link = link->next)
2901 struct ref *use = link->ref;
2903 if (DF_REF_BB (use) == bb)
2904 return use;
2906 return 0;
2910 /* Return first def of REGNO within BB. */
2911 static struct ref *
2912 df_bb_regno_first_def_find (struct df *df, basic_block bb, unsigned int regno)
2914 struct df_link *link;
2916 /* This assumes that the reg-def list is ordered such that for any
2917 BB, the first def is found first. However, since the BBs are not
2918 ordered, the first def in the chain is not necessarily the first
2919 def in the function. */
2920 for (link = df->regs[regno].defs; link; link = link->next)
2922 struct ref *def = link->ref;
2924 if (DF_REF_BB (def) == bb)
2925 return def;
2927 return 0;
2931 /* Return the last use of REGNO within INSN. */
2932 static struct ref *
2933 df_bb_insn_regno_last_use_find (struct df *df,
2934 basic_block bb ATTRIBUTE_UNUSED, rtx insn,
2935 unsigned int regno)
2937 unsigned int uid;
2938 struct df_link *link;
2940 uid = INSN_UID (insn);
2942 for (link = df->insns[uid].uses; link; link = link->next)
2944 struct ref *use = link->ref;
2946 if (DF_REF_REGNO (use) == regno)
2947 return use;
2950 return 0;
2954 /* Return first def of REGNO inside INSN within BB. */
2955 static struct ref *
2956 df_bb_insn_regno_first_def_find (struct df *df,
2957 basic_block bb ATTRIBUTE_UNUSED, rtx insn,
2958 unsigned int regno)
2960 unsigned int uid;
2961 struct df_link *link;
2963 uid = INSN_UID (insn);
2965 for (link = df->insns[uid].defs; link; link = link->next)
2967 struct ref *def = link->ref;
2969 if (DF_REF_REGNO (def) == regno)
2970 return def;
2973 return 0;
2977 /* Return the insn using REG if the BB contains only a single
2978 use and def of REG, otherwise return NULL_RTX. */
2979 rtx
2980 df_bb_single_def_use_insn_find (struct df *df, basic_block bb, rtx insn, rtx reg)
2982 struct ref *def;
2983 struct ref *use;
2984 struct df_link *du_link;
2986 def = df_bb_insn_regno_first_def_find (df, bb, insn, REGNO (reg));
2988 if (! def)
2989 abort ();
2991 du_link = DF_REF_CHAIN (def);
2993 if (! du_link)
2994 return NULL_RTX;
2996 use = du_link->ref;
2998 /* Check if def is dead. */
2999 if (! use)
3000 return NULL_RTX;
3002 /* Check for multiple uses. */
3003 if (du_link->next)
3004 return NULL_RTX;
3006 return DF_REF_INSN (use);
3009 /* Functions for debugging/dumping dataflow information. */
3012 /* Dump a def-use or use-def chain for REF to FILE. */
3013 static void
3014 df_chain_dump (struct df_link *link, FILE *file)
3016 fprintf (file, "{ ");
3017 for (; link; link = link->next)
3019 fprintf (file, "%c%d ",
3020 DF_REF_REG_DEF_P (link->ref) ? 'd' : 'u',
3021 DF_REF_ID (link->ref));
3023 fprintf (file, "}");
3027 /* Dump a chain of refs with the associated regno. */
3028 static void
3029 df_chain_dump_regno (struct df_link *link, FILE *file)
3031 fprintf (file, "{ ");
3032 for (; link; link = link->next)
3034 fprintf (file, "%c%d(%d) ",
3035 DF_REF_REG_DEF_P (link->ref) ? 'd' : 'u',
3036 DF_REF_ID (link->ref),
3037 DF_REF_REGNO (link->ref));
3039 fprintf (file, "}");
3043 /* Dump dataflow info. */
3044 void
3045 df_dump (struct df *df, int flags, FILE *file)
3047 unsigned int j;
3048 basic_block bb;
3050 if (! df || ! file)
3051 return;
3053 fprintf (file, "\nDataflow summary:\n");
3054 fprintf (file, "n_regs = %d, n_defs = %d, n_uses = %d, n_bbs = %d\n",
3055 df->n_regs, df->n_defs, df->n_uses, df->n_bbs);
3057 if (flags & DF_RD)
3059 basic_block bb;
3061 fprintf (file, "Reaching defs:\n");
3062 FOR_EACH_BB (bb)
3064 struct bb_info *bb_info = DF_BB_INFO (df, bb);
3066 if (! bb_info->rd_in)
3067 continue;
3069 fprintf (file, "bb %d in \t", bb->index);
3070 dump_bitmap (file, bb_info->rd_in);
3071 fprintf (file, "bb %d gen \t", bb->index);
3072 dump_bitmap (file, bb_info->rd_gen);
3073 fprintf (file, "bb %d kill\t", bb->index);
3074 dump_bitmap (file, bb_info->rd_kill);
3075 fprintf (file, "bb %d out \t", bb->index);
3076 dump_bitmap (file, bb_info->rd_out);
3080 if (flags & DF_UD_CHAIN)
3082 fprintf (file, "Use-def chains:\n");
3083 for (j = 0; j < df->n_defs; j++)
3085 if (df->defs[j])
3087 fprintf (file, "d%d bb %d luid %d insn %d reg %d ",
3088 j, DF_REF_BBNO (df->defs[j]),
3089 DF_INSN_LUID (df, DF_REF_INSN (df->defs[j])),
3090 DF_REF_INSN_UID (df->defs[j]),
3091 DF_REF_REGNO (df->defs[j]));
3092 if (df->defs[j]->flags & DF_REF_READ_WRITE)
3093 fprintf (file, "read/write ");
3094 df_chain_dump (DF_REF_CHAIN (df->defs[j]), file);
3095 fprintf (file, "\n");
3100 if (flags & DF_RU)
3102 fprintf (file, "Reaching uses:\n");
3103 FOR_EACH_BB (bb)
3105 struct bb_info *bb_info = DF_BB_INFO (df, bb);
3107 if (! bb_info->ru_in)
3108 continue;
3110 fprintf (file, "bb %d in \t", bb->index);
3111 dump_bitmap (file, bb_info->ru_in);
3112 fprintf (file, "bb %d gen \t", bb->index);
3113 dump_bitmap (file, bb_info->ru_gen);
3114 fprintf (file, "bb %d kill\t", bb->index);
3115 dump_bitmap (file, bb_info->ru_kill);
3116 fprintf (file, "bb %d out \t", bb->index);
3117 dump_bitmap (file, bb_info->ru_out);
3121 if (flags & DF_DU_CHAIN)
3123 fprintf (file, "Def-use chains:\n");
3124 for (j = 0; j < df->n_uses; j++)
3126 if (df->uses[j])
3128 fprintf (file, "u%d bb %d luid %d insn %d reg %d ",
3129 j, DF_REF_BBNO (df->uses[j]),
3130 DF_INSN_LUID (df, DF_REF_INSN (df->uses[j])),
3131 DF_REF_INSN_UID (df->uses[j]),
3132 DF_REF_REGNO (df->uses[j]));
3133 if (df->uses[j]->flags & DF_REF_READ_WRITE)
3134 fprintf (file, "read/write ");
3135 df_chain_dump (DF_REF_CHAIN (df->uses[j]), file);
3136 fprintf (file, "\n");
3141 if (flags & DF_LR)
3143 fprintf (file, "Live regs:\n");
3144 FOR_EACH_BB (bb)
3146 struct bb_info *bb_info = DF_BB_INFO (df, bb);
3148 if (! bb_info->lr_in)
3149 continue;
3151 fprintf (file, "bb %d in \t", bb->index);
3152 dump_bitmap (file, bb_info->lr_in);
3153 fprintf (file, "bb %d use \t", bb->index);
3154 dump_bitmap (file, bb_info->lr_use);
3155 fprintf (file, "bb %d def \t", bb->index);
3156 dump_bitmap (file, bb_info->lr_def);
3157 fprintf (file, "bb %d out \t", bb->index);
3158 dump_bitmap (file, bb_info->lr_out);
3162 if (flags & (DF_REG_INFO | DF_RD_CHAIN | DF_RU_CHAIN))
3164 struct reg_info *reg_info = df->regs;
3166 fprintf (file, "Register info:\n");
3167 for (j = 0; j < df->n_regs; j++)
3169 if (((flags & DF_REG_INFO)
3170 && (reg_info[j].n_uses || reg_info[j].n_defs))
3171 || ((flags & DF_RD_CHAIN) && reg_info[j].defs)
3172 || ((flags & DF_RU_CHAIN) && reg_info[j].uses))
3174 fprintf (file, "reg %d", j);
3175 if ((flags & DF_RD_CHAIN) && (flags & DF_RU_CHAIN))
3177 basic_block bb = df_regno_bb (df, j);
3179 if (bb)
3180 fprintf (file, " bb %d", bb->index);
3181 else
3182 fprintf (file, " bb ?");
3184 if (flags & DF_REG_INFO)
3186 fprintf (file, " life %d", reg_info[j].lifetime);
3189 if ((flags & DF_REG_INFO) || (flags & DF_RD_CHAIN))
3191 fprintf (file, " defs ");
3192 if (flags & DF_REG_INFO)
3193 fprintf (file, "%d ", reg_info[j].n_defs);
3194 if (flags & DF_RD_CHAIN)
3195 df_chain_dump (reg_info[j].defs, file);
3198 if ((flags & DF_REG_INFO) || (flags & DF_RU_CHAIN))
3200 fprintf (file, " uses ");
3201 if (flags & DF_REG_INFO)
3202 fprintf (file, "%d ", reg_info[j].n_uses);
3203 if (flags & DF_RU_CHAIN)
3204 df_chain_dump (reg_info[j].uses, file);
3207 fprintf (file, "\n");
3211 fprintf (file, "\n");
3215 void
3216 df_insn_debug (struct df *df, rtx insn, FILE *file)
3218 unsigned int uid;
3219 int bbi;
3221 uid = INSN_UID (insn);
3222 if (uid >= df->insn_size)
3223 return;
3225 if (df->insns[uid].defs)
3226 bbi = DF_REF_BBNO (df->insns[uid].defs->ref);
3227 else if (df->insns[uid].uses)
3228 bbi = DF_REF_BBNO (df->insns[uid].uses->ref);
3229 else
3230 bbi = -1;
3232 fprintf (file, "insn %d bb %d luid %d defs ",
3233 uid, bbi, DF_INSN_LUID (df, insn));
3234 df_chain_dump (df->insns[uid].defs, file);
3235 fprintf (file, " uses ");
3236 df_chain_dump (df->insns[uid].uses, file);
3237 fprintf (file, "\n");
3241 void
3242 df_insn_debug_regno (struct df *df, rtx insn, FILE *file)
3244 unsigned int uid;
3245 int bbi;
3247 uid = INSN_UID (insn);
3248 if (uid >= df->insn_size)
3249 return;
3251 if (df->insns[uid].defs)
3252 bbi = DF_REF_BBNO (df->insns[uid].defs->ref);
3253 else if (df->insns[uid].uses)
3254 bbi = DF_REF_BBNO (df->insns[uid].uses->ref);
3255 else
3256 bbi = -1;
3258 fprintf (file, "insn %d bb %d luid %d defs ",
3259 uid, bbi, DF_INSN_LUID (df, insn));
3260 df_chain_dump_regno (df->insns[uid].defs, file);
3261 fprintf (file, " uses ");
3262 df_chain_dump_regno (df->insns[uid].uses, file);
3263 fprintf (file, "\n");
3267 static void
3268 df_regno_debug (struct df *df, unsigned int regno, FILE *file)
3270 if (regno >= df->reg_size)
3271 return;
3273 fprintf (file, "reg %d life %d defs ",
3274 regno, df->regs[regno].lifetime);
3275 df_chain_dump (df->regs[regno].defs, file);
3276 fprintf (file, " uses ");
3277 df_chain_dump (df->regs[regno].uses, file);
3278 fprintf (file, "\n");
3282 static void
3283 df_ref_debug (struct df *df, struct ref *ref, FILE *file)
3285 fprintf (file, "%c%d ",
3286 DF_REF_REG_DEF_P (ref) ? 'd' : 'u',
3287 DF_REF_ID (ref));
3288 fprintf (file, "reg %d bb %d luid %d insn %d chain ",
3289 DF_REF_REGNO (ref),
3290 DF_REF_BBNO (ref),
3291 DF_INSN_LUID (df, DF_REF_INSN (ref)),
3292 INSN_UID (DF_REF_INSN (ref)));
3293 df_chain_dump (DF_REF_CHAIN (ref), file);
3294 fprintf (file, "\n");
3297 /* Functions for debugging from GDB. */
3299 void
3300 debug_df_insn (rtx insn)
3302 df_insn_debug (ddf, insn, stderr);
3303 debug_rtx (insn);
3307 void
3308 debug_df_reg (rtx reg)
3310 df_regno_debug (ddf, REGNO (reg), stderr);
3314 void
3315 debug_df_regno (unsigned int regno)
3317 df_regno_debug (ddf, regno, stderr);
3321 void
3322 debug_df_ref (struct ref *ref)
3324 df_ref_debug (ddf, ref, stderr);
3328 void
3329 debug_df_defno (unsigned int defno)
3331 df_ref_debug (ddf, ddf->defs[defno], stderr);
3335 void
3336 debug_df_useno (unsigned int useno)
3338 df_ref_debug (ddf, ddf->uses[useno], stderr);
3342 void
3343 debug_df_chain (struct df_link *link)
3345 df_chain_dump (link, stderr);
3346 fputc ('\n', stderr);
3350 /* Hybrid search algorithm from "Implementation Techniques for
3351 Efficient Data-Flow Analysis of Large Programs". */
3352 static void
3353 hybrid_search_bitmap (basic_block block, bitmap *in, bitmap *out, bitmap *gen,
3354 bitmap *kill, enum df_flow_dir dir,
3355 enum df_confluence_op conf_op,
3356 transfer_function_bitmap transfun, sbitmap visited,
3357 sbitmap pending, void *data)
3359 int changed;
3360 int i = block->index;
3361 edge e;
3362 basic_block bb = block;
3364 SET_BIT (visited, block->index);
3365 if (TEST_BIT (pending, block->index))
3367 if (dir == DF_FORWARD)
3369 /* Calculate <conf_op> of predecessor_outs. */
3370 bitmap_zero (in[i]);
3371 for (e = bb->pred; e != 0; e = e->pred_next)
3373 if (e->src == ENTRY_BLOCK_PTR)
3374 continue;
3375 switch (conf_op)
3377 case DF_UNION:
3378 bitmap_a_or_b (in[i], in[i], out[e->src->index]);
3379 break;
3380 case DF_INTERSECTION:
3381 bitmap_a_and_b (in[i], in[i], out[e->src->index]);
3382 break;
3386 else
3388 /* Calculate <conf_op> of successor ins. */
3389 bitmap_zero (out[i]);
3390 for (e = bb->succ; e != 0; e = e->succ_next)
3392 if (e->dest == EXIT_BLOCK_PTR)
3393 continue;
3394 switch (conf_op)
3396 case DF_UNION:
3397 bitmap_a_or_b (out[i], out[i], in[e->dest->index]);
3398 break;
3399 case DF_INTERSECTION:
3400 bitmap_a_and_b (out[i], out[i], in[e->dest->index]);
3401 break;
3405 /* Common part. */
3406 (*transfun)(i, &changed, in[i], out[i], gen[i], kill[i], data);
3407 RESET_BIT (pending, i);
3408 if (changed)
3410 if (dir == DF_FORWARD)
3412 for (e = bb->succ; e != 0; e = e->succ_next)
3414 if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
3415 continue;
3416 SET_BIT (pending, e->dest->index);
3419 else
3421 for (e = bb->pred; e != 0; e = e->pred_next)
3423 if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
3424 continue;
3425 SET_BIT (pending, e->src->index);
3430 if (dir == DF_FORWARD)
3432 for (e = bb->succ; e != 0; e = e->succ_next)
3434 if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
3435 continue;
3436 if (!TEST_BIT (visited, e->dest->index))
3437 hybrid_search_bitmap (e->dest, in, out, gen, kill, dir,
3438 conf_op, transfun, visited, pending,
3439 data);
3442 else
3444 for (e = bb->pred; e != 0; e = e->pred_next)
3446 if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
3447 continue;
3448 if (!TEST_BIT (visited, e->src->index))
3449 hybrid_search_bitmap (e->src, in, out, gen, kill, dir,
3450 conf_op, transfun, visited, pending,
3451 data);
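/* Editorial sketch (not part of the original code): a transfer
   function suitable for these drivers has the shape below, where
   "my_transfer" is a hypothetical name:

       static void
       my_transfer (int bb, int *changed, bitmap in, bitmap out,
                    bitmap gen, bitmap kill, void *data)
       {
         *changed = bitmap_union_of_diff (out, gen, in, kill);
       }

   i.e. it computes out = gen | (in & ~kill) and reports through
   CHANGED whether OUT changed; df_rd_transfer_function is of this
   form.  */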
3457 /* Hybrid search for sbitmaps, rather than bitmaps. */
3458 static void
3459 hybrid_search_sbitmap (basic_block block, sbitmap *in, sbitmap *out,
3460 sbitmap *gen, sbitmap *kill, enum df_flow_dir dir,
3461 enum df_confluence_op conf_op,
3462 transfer_function_sbitmap transfun, sbitmap visited,
3463 sbitmap pending, void *data)
3465 int changed;
3466 int i = block->index;
3467 edge e;
3468 basic_block bb = block;
3470 SET_BIT (visited, block->index);
3471 if (TEST_BIT (pending, block->index))
3473 if (dir == DF_FORWARD)
3475 /* Calculate <conf_op> of predecessor_outs. */
3476 sbitmap_zero (in[i]);
3477 for (e = bb->pred; e != 0; e = e->pred_next)
3479 if (e->src == ENTRY_BLOCK_PTR)
3480 continue;
3481 switch (conf_op)
3483 case DF_UNION:
3484 sbitmap_a_or_b (in[i], in[i], out[e->src->index]);
3485 break;
3486 case DF_INTERSECTION:
3487 sbitmap_a_and_b (in[i], in[i], out[e->src->index]);
3488 break;
3492 else
3494 /* Calculate <conf_op> of successor ins. */
3495 sbitmap_zero (out[i]);
3496 for (e = bb->succ; e != 0; e = e->succ_next)
3498 if (e->dest == EXIT_BLOCK_PTR)
3499 continue;
3500 switch (conf_op)
3502 case DF_UNION:
3503 sbitmap_a_or_b (out[i], out[i], in[e->dest->index]);
3504 break;
3505 case DF_INTERSECTION:
3506 sbitmap_a_and_b (out[i], out[i], in[e->dest->index]);
3507 break;
3511 /* Common part. */
3512 (*transfun)(i, &changed, in[i], out[i], gen[i], kill[i], data);
3513 RESET_BIT (pending, i);
3514 if (changed)
3516 if (dir == DF_FORWARD)
3518 for (e = bb->succ; e != 0; e = e->succ_next)
3520 if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
3521 continue;
3522 SET_BIT (pending, e->dest->index);
3525 else
3527 for (e = bb->pred; e != 0; e = e->pred_next)
3529 if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
3530 continue;
3531 SET_BIT (pending, e->src->index);
3536 if (dir == DF_FORWARD)
3538 for (e = bb->succ; e != 0; e = e->succ_next)
3540 if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
3541 continue;
3542 if (!TEST_BIT (visited, e->dest->index))
3543 hybrid_search_sbitmap (e->dest, in, out, gen, kill, dir,
3544 conf_op, transfun, visited, pending,
3545 data);
3548 else
3550 for (e = bb->pred; e != 0; e = e->pred_next)
3552 if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
3553 continue;
3554 if (!TEST_BIT (visited, e->src->index))
3555 hybrid_search_sbitmap (e->src, in, out, gen, kill, dir,
3556 conf_op, transfun, visited, pending,
3557 data);
3563 /* gen = GEN set.
3564 kill = KILL set.
3565 in, out = Filled in by function.
3566 blocks = Blocks to analyze.
3567 dir = Dataflow direction.
3568 conf_op = Confluence operation.
3569 transfun = Transfer function.
3570 order = Order to iterate in. (Should map block numbers -> order)
3571 data = Whatever you want. It's passed to the transfer function.
3573 This function will perform iterative bitvector dataflow, producing
3574 the in and out sets. Even if you only want to perform it for a
3575 small number of blocks, the vectors for in and out must be large
3576 enough for *all* blocks, because changing one block might affect
3577 others. However, it'll only put what you say to analyze on the
3578 initial worklist.
3580 For forward problems, you probably want to pass in a mapping of
3581 block number to rc_order (like df->inverse_rc_map). */
3583 void
3584 iterative_dataflow_sbitmap (sbitmap *in, sbitmap *out, sbitmap *gen,
3585 sbitmap *kill, bitmap blocks,
3586 enum df_flow_dir dir,
3587 enum df_confluence_op conf_op,
3588 transfer_function_sbitmap transfun, int *order,
3589 void *data)
3591 int i;
3592 fibheap_t worklist;
3593 basic_block bb;
3594 sbitmap visited, pending;
3596 pending = sbitmap_alloc (last_basic_block);
3597 visited = sbitmap_alloc (last_basic_block);
3598 sbitmap_zero (pending);
3599 sbitmap_zero (visited);
3600 worklist = fibheap_new ();
3602 EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
3604 fibheap_insert (worklist, order[i], (void *) (size_t) i);
3605 SET_BIT (pending, i);
3606 if (dir == DF_FORWARD)
3607 sbitmap_copy (out[i], gen[i]);
3608 else
3609 sbitmap_copy (in[i], gen[i]);
3612 while (sbitmap_first_set_bit (pending) != -1)
3614 while (!fibheap_empty (worklist))
3616 i = (size_t) fibheap_extract_min (worklist);
3617 bb = BASIC_BLOCK (i);
3618 if (!TEST_BIT (visited, bb->index))
3619 hybrid_search_sbitmap (bb, in, out, gen, kill, dir,
3620 conf_op, transfun, visited, pending, data);
3623 if (sbitmap_first_set_bit (pending) != -1)
3625 EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
3627 fibheap_insert (worklist, order[i], (void *) (size_t) i);
3629 sbitmap_zero (visited);
3631 else
3633 break;
3637 sbitmap_free (pending);
3638 sbitmap_free (visited);
3639 fibheap_delete (worklist);
3643 /* Exactly the same as iterative_dataflow_sbitmap, except it works on
3644 bitmaps instead. */
3645 void
3646 iterative_dataflow_bitmap (bitmap *in, bitmap *out, bitmap *gen, bitmap *kill,
3647 bitmap blocks, enum df_flow_dir dir,
3648 enum df_confluence_op conf_op,
3649 transfer_function_bitmap transfun, int *order,
3650 void *data)
3652 int i;
3653 fibheap_t worklist;
3654 basic_block bb;
3655 sbitmap visited, pending;
3657 pending = sbitmap_alloc (last_basic_block);
3658 visited = sbitmap_alloc (last_basic_block);
3659 sbitmap_zero (pending);
3660 sbitmap_zero (visited);
3661 worklist = fibheap_new ();
3663 EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
3665 fibheap_insert (worklist, order[i], (void *) (size_t) i);
3666 SET_BIT (pending, i);
3667 if (dir == DF_FORWARD)
3668 bitmap_copy (out[i], gen[i]);
3669 else
3670 bitmap_copy (in[i], gen[i]);
3673 while (sbitmap_first_set_bit (pending) != -1)
3675 while (!fibheap_empty (worklist))
3677 i = (size_t) fibheap_extract_min (worklist);
3678 bb = BASIC_BLOCK (i);
3679 if (!TEST_BIT (visited, bb->index))
3680 hybrid_search_bitmap (bb, in, out, gen, kill, dir,
3681 conf_op, transfun, visited, pending, data);
3684 if (sbitmap_first_set_bit (pending) != -1)
3686 EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
3688 fibheap_insert (worklist, order[i], (void *) (size_t) i);
3690 sbitmap_zero (visited);
3692 else
3694 break;
3697 sbitmap_free (pending);
3698 sbitmap_free (visited);
3699 fibheap_delete (worklist);
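/* Editorial example (a sketch): the reaching-definitions call made
   earlier in this file is representative of how to drive this solver:

       iterative_dataflow_bitmap (in, out, gen, kill, df->all_blocks,
                                  DF_FORWARD, DF_UNION,
                                  df_rd_transfer_function,
                                  df->inverse_rc_map, NULL);

   a forward problem iterated in reverse-completion order, as the
   comment above iterative_dataflow_sbitmap recommends.  */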