gcc/df.c
/* Dataflow support routines.
   Copyright (C) 1999, 2000, 2001, 2002, 2003 Free Software Foundation, Inc.
   Contributed by Michael P. Hayes (m.hayes@elec.canterbury.ac.nz,
   mhayes@redhat.com)

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING.  If not, write to the Free
Software Foundation, 59 Temple Place - Suite 330, Boston, MA
02111-1307, USA.

OVERVIEW:

This file provides some dataflow routines for computing reaching defs,
upward exposed uses, live variables, def-use chains, and use-def
chains.  The global dataflow is performed using simple iterative
methods with a worklist and could be sped up by ordering the blocks
with a depth first search order.

A `struct ref' data structure (ref) is allocated for every register
reference (def or use) and this records the insn and bb the ref is
found within.  The refs are linked together in chains of uses and defs
for each insn and for each register.  Each ref also has a chain field
that links all the use refs for a def or all the def refs for a use.
This is used to create use-def or def-use chains.


USAGE:

Here's an example of using the dataflow routines.

      struct df *df;

      df = df_init ();

      df_analyse (df, 0, DF_ALL);

      df_dump (df, DF_ALL, stderr);

      df_finish (df);


df_init simply creates a poor man's object (df) that needs to be
passed to all the dataflow routines.  df_finish destroys this
object and frees up any allocated memory.  DF_ALL says to analyse
everything.

df_analyse performs the following:

1. Records defs and uses by scanning the insns in each basic block
   or by scanning the insns queued by df_insn_modify.
2. Links defs and uses into insn-def and insn-use chains.
3. Links defs and uses into reg-def and reg-use chains.
4. Assigns LUIDs to each insn (for modified blocks).
5. Calculates local reaching definitions.
6. Calculates global reaching definitions.
7. Creates use-def chains.
8. Calculates local reaching uses (upwards exposed uses).
9. Calculates global reaching uses.
10. Creates def-use chains.
11. Calculates local live registers.
12. Calculates global live registers.
13. Calculates register lifetimes and determines local registers.

PHILOSOPHY:

Note that the dataflow information is not updated for every newly
deleted or created insn.  If the dataflow information requires
updating then all the changed, new, or deleted insns need to be
marked with df_insn_modify (or df_insns_modify) either directly or
indirectly (say through calling df_insn_delete).  df_insn_modify
marks all the modified insns to get processed the next time df_analyse
is called.

Beware that tinkering with insns may invalidate the dataflow information.
The philosophy behind these routines is that once the dataflow
information has been gathered, the user should store what they require
before they tinker with any insn.  Once a reg is replaced, for example,
then the reg-def/reg-use chains will point to the wrong place.  Once a
whole lot of changes have been made, df_analyse can be called again
to update the dataflow information.  Currently, this is not very smart
with regard to propagating changes to the dataflow so it should not
be called very often.
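
For example, a pass that rewrites an insn might follow this pattern
(a minimal sketch; df_insn_modify and df_analyse are the entry points
described above, while new_src and the surrounding pass logic are
hypothetical):

      SET_SRC (PATTERN (insn)) = new_src;
      df_insn_modify (df, bb, insn);
      ...make any further changes, marking each insn the same way...
      df_analyse (df, 0, DF_ALL);
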

DATA STRUCTURES:

The basic object is a REF (reference) and this may either be a DEF
(definition) or a USE of a register.

These are linked into a variety of lists; namely reg-def, reg-use,
insn-def, insn-use, def-use, and use-def lists.  For example,
the reg-def lists contain all the refs that define a given register
while the insn-use lists contain all the refs used by an insn.

Note that the reg-def and reg-use chains are generally short (except for the
hard registers) and thus it is much faster to search these chains
rather than searching the def or use bitmaps.

If the insns are in SSA form then the reg-def and use-def lists
should only contain the single defining ref.
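
As an illustration, walking the reg-use chain for register REGNO and
then the use-def chain of each use might look like this (a sketch
only; note_def is a hypothetical callback, while the DF_REF_CHAIN
accessor and the struct df_link fields are those defined in df.h):

      struct df_link *link;
      for (link = df->regs[regno].uses; link; link = link->next)
        {
          struct ref *use = link->ref;
          struct df_link *def_link;

          for (def_link = DF_REF_CHAIN (use); def_link;
               def_link = def_link->next)
            note_def (def_link->ref);
        }
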

TODO:

1) Incremental dataflow analysis.

Note that if a loop invariant insn is hoisted (or sunk), we do not
need to change the def-use or use-def chains.  All we have to do is to
change the bb field for all the associated defs and uses and to
renumber the LUIDs for the original and new basic blocks of the insn.

When shadowing loop mems we create new uses and defs for new pseudos
so we do not affect the existing dataflow information.

My current strategy is to queue up all modified, created, or deleted
insns so when df_analyse is called we can easily determine all the new
or deleted refs.  Currently the global dataflow information is
recomputed from scratch but this could be propagated more efficiently.

2) Reduced memory requirements.

We could operate a pool of ref structures.  When a ref is deleted it
gets returned to the pool (say by linking on to a chain of free refs).
This will require a pair of bitmaps for defs and uses so that we can
tell which ones have been changed.  Alternatively, we could
periodically squeeze the def and use tables and associated bitmaps and
renumber the def and use ids.

3) Ordering of reg-def and reg-use lists.

Should the first entry in the def list be the first def (within a BB)?
Similarly, should the first entry in the use list be the last use
(within a BB)?

4) Working with a sub-CFG.

Often the whole CFG does not need to be analyzed, for example,
when optimizing a loop, only certain registers are of interest.
Perhaps there should be a bitmap argument to df_analyse to specify
which registers should be analyzed?

NOTES:

Embedded addressing side-effects, such as POST_INC or PRE_INC, generate
both a use and a def.  These are both marked read/write to show that they
are dependent.  For example, (set (reg 40) (mem (post_inc (reg 42))))
will generate a use of reg 42 followed by a def of reg 42 (both marked
read/write).  Similarly, (set (reg 40) (mem (pre_dec (reg 41))))
generates a use of reg 41 then a def of reg 41 (both marked read/write),
even though reg 41 is decremented before it is used for the memory
address in this second example.

A set to a REG inside a ZERO_EXTRACT, SIGN_EXTRACT, or SUBREG invokes
a read-modify-write operation.  We generate both a use and a def
and again mark them read/write.
*/

#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
#include "rtl.h"
#include "tm_p.h"
#include "insn-config.h"
#include "recog.h"
#include "function.h"
#include "regs.h"
#include "alloc-pool.h"
#include "hard-reg-set.h"
#include "basic-block.h"
#include "sbitmap.h"
#include "bitmap.h"
#include "df.h"
#include "fibheap.h"

#define FOR_EACH_BB_IN_BITMAP(BITMAP, MIN, BB, CODE)	\
  do							\
    {							\
      unsigned int node_;				\
      EXECUTE_IF_SET_IN_BITMAP (BITMAP, MIN, node_,	\
	{(BB) = BASIC_BLOCK (node_); CODE;});		\
    }							\
  while (0)
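
/* As a usage illustration (a sketch only; BLOCKS stands for any bitmap
   of basic block indices and do_something_with is a hypothetical
   callback), a caller iterates over the recorded blocks like so:

      basic_block bb;
      FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
	{
	  do_something_with (bb);
	});
*/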

static alloc_pool df_ref_pool;
static alloc_pool df_link_pool;
static struct df *ddf;

static void df_reg_table_realloc (struct df *, int);
static void df_insn_table_realloc (struct df *, unsigned int);
static void df_bitmaps_alloc (struct df *, int);
static void df_bitmaps_free (struct df *, int);
static void df_free (struct df *);
static void df_alloc (struct df *, int);

static rtx df_reg_clobber_gen (unsigned int);
static rtx df_reg_use_gen (unsigned int);

static inline struct df_link *df_link_create (struct ref *, struct df_link *);
static struct df_link *df_ref_unlink (struct df_link **, struct ref *);
static void df_def_unlink (struct df *, struct ref *);
static void df_use_unlink (struct df *, struct ref *);
static void df_insn_refs_unlink (struct df *, basic_block, rtx);
#if 0
static void df_bb_refs_unlink (struct df *, basic_block);
static void df_refs_unlink (struct df *, bitmap);
#endif

static struct ref *df_ref_create (struct df *, rtx, rtx *, rtx,
				  enum df_ref_type, enum df_ref_flags);
static void df_ref_record_1 (struct df *, rtx, rtx *, rtx, enum df_ref_type,
			     enum df_ref_flags);
static void df_ref_record (struct df *, rtx, rtx *, rtx, enum df_ref_type,
			   enum df_ref_flags);
static void df_def_record_1 (struct df *, rtx, basic_block, rtx);
static void df_defs_record (struct df *, rtx, basic_block, rtx);
static void df_uses_record (struct df *, rtx *, enum df_ref_type,
			    basic_block, rtx, enum df_ref_flags);
static void df_insn_refs_record (struct df *, basic_block, rtx);
static void df_bb_refs_record (struct df *, basic_block);
static void df_refs_record (struct df *, bitmap);

static void df_bb_reg_def_chain_create (struct df *, basic_block);
static void df_reg_def_chain_create (struct df *, bitmap);
static void df_bb_reg_use_chain_create (struct df *, basic_block);
static void df_reg_use_chain_create (struct df *, bitmap);
static void df_bb_du_chain_create (struct df *, basic_block, bitmap);
static void df_du_chain_create (struct df *, bitmap);
static void df_bb_ud_chain_create (struct df *, basic_block);
static void df_ud_chain_create (struct df *, bitmap);
static void df_bb_rd_local_compute (struct df *, basic_block);
static void df_rd_local_compute (struct df *, bitmap);
static void df_bb_ru_local_compute (struct df *, basic_block);
static void df_ru_local_compute (struct df *, bitmap);
static void df_bb_lr_local_compute (struct df *, basic_block);
static void df_lr_local_compute (struct df *, bitmap);
static void df_bb_reg_info_compute (struct df *, basic_block, bitmap);
static void df_reg_info_compute (struct df *, bitmap);

static int df_bb_luids_set (struct df *df, basic_block);
static int df_luids_set (struct df *df, bitmap);

static int df_modified_p (struct df *, bitmap);
static int df_refs_queue (struct df *);
static int df_refs_process (struct df *);
static int df_bb_refs_update (struct df *, basic_block);
static int df_refs_update (struct df *);
static void df_analyse_1 (struct df *, bitmap, int, int);

static void df_insns_modify (struct df *, basic_block, rtx, rtx);
static int df_rtx_mem_replace (rtx *, void *);
static int df_rtx_reg_replace (rtx *, void *);
void df_refs_reg_replace (struct df *, bitmap, struct df_link *, rtx, rtx);

static int df_def_dominates_all_uses_p (struct df *, struct ref *def);
static int df_def_dominates_uses_p (struct df *, struct ref *def, bitmap);
static struct ref *df_bb_regno_last_use_find (struct df *, basic_block,
					      unsigned int);
static struct ref *df_bb_regno_first_def_find (struct df *, basic_block,
					       unsigned int);
static struct ref *df_bb_insn_regno_last_use_find (struct df *, basic_block,
						   rtx, unsigned int);
static struct ref *df_bb_insn_regno_first_def_find (struct df *, basic_block,
						    rtx, unsigned int);

static void df_chain_dump (struct df_link *, FILE *file);
static void df_chain_dump_regno (struct df_link *, FILE *file);
static void df_regno_debug (struct df *, unsigned int, FILE *);
static void df_ref_debug (struct df *, struct ref *, FILE *);
static void df_rd_transfer_function (int, int *, bitmap, bitmap, bitmap,
				     bitmap, void *);
static void df_ru_transfer_function (int, int *, bitmap, bitmap, bitmap,
				     bitmap, void *);
static void df_lr_transfer_function (int, int *, bitmap, bitmap, bitmap,
				     bitmap, void *);
static void hybrid_search_bitmap (basic_block, bitmap *, bitmap *,
				  bitmap *, bitmap *, enum df_flow_dir,
				  enum df_confluence_op,
				  transfer_function_bitmap,
				  sbitmap, sbitmap, void *);
static void hybrid_search_sbitmap (basic_block, sbitmap *, sbitmap *,
				   sbitmap *, sbitmap *, enum df_flow_dir,
				   enum df_confluence_op,
				   transfer_function_sbitmap,
				   sbitmap, sbitmap, void *);

/* Local memory allocation/deallocation routines.  */

/* Increase the insn info table to have space for at least SIZE + 1
   elements.  */
static void
df_insn_table_realloc (struct df *df, unsigned int size)
{
  size++;
  if (size <= df->insn_size)
    return;

  /* Make the table a little larger than requested, so we do not need
     to enlarge it so often.  */
  size += df->insn_size / 4;

  df->insns = (struct insn_info *)
    xrealloc (df->insns, size * sizeof (struct insn_info));

  memset (df->insns + df->insn_size, 0,
	  (size - df->insn_size) * sizeof (struct insn_info));

  df->insn_size = size;

  if (! df->insns_modified)
    {
      df->insns_modified = BITMAP_XMALLOC ();
      bitmap_zero (df->insns_modified);
    }
}

/* Increase the reg info table by SIZE more elements.  */
static void
df_reg_table_realloc (struct df *df, int size)
{
  /* Make table 25 percent larger by default.  */
  if (! size)
    size = df->reg_size / 4;

  size += df->reg_size;
  if (size < max_reg_num ())
    size = max_reg_num ();

  df->regs = (struct reg_info *)
    xrealloc (df->regs, size * sizeof (struct reg_info));

  /* Zero the new entries.  */
  memset (df->regs + df->reg_size, 0,
	  (size - df->reg_size) * sizeof (struct reg_info));

  df->reg_size = size;
}

/* Allocate bitmaps for each basic block.  */
static void
df_bitmaps_alloc (struct df *df, int flags)
{
  int dflags = 0;
  basic_block bb;

  /* Free the bitmaps if they need resizing.  */
  if ((flags & DF_LR) && df->n_regs < (unsigned int) max_reg_num ())
    dflags |= DF_LR | DF_RU;
  if ((flags & DF_RU) && df->n_uses < df->use_id)
    dflags |= DF_RU;
  if ((flags & DF_RD) && df->n_defs < df->def_id)
    dflags |= DF_RD;

  if (dflags)
    df_bitmaps_free (df, dflags);

  df->n_defs = df->def_id;
  df->n_uses = df->use_id;

  FOR_EACH_BB (bb)
    {
      struct bb_info *bb_info = DF_BB_INFO (df, bb);

      if (flags & DF_RD && ! bb_info->rd_in)
	{
	  /* Allocate bitmaps for reaching definitions.  */
	  bb_info->rd_kill = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->rd_kill);
	  bb_info->rd_gen = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->rd_gen);
	  bb_info->rd_in = BITMAP_XMALLOC ();
	  bb_info->rd_out = BITMAP_XMALLOC ();
	  bb_info->rd_valid = 0;
	}

      if (flags & DF_RU && ! bb_info->ru_in)
	{
	  /* Allocate bitmaps for upward exposed uses.  */
	  bb_info->ru_kill = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->ru_kill);
	  /* Note the lack of symmetry.  */
	  bb_info->ru_gen = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->ru_gen);
	  bb_info->ru_in = BITMAP_XMALLOC ();
	  bb_info->ru_out = BITMAP_XMALLOC ();
	  bb_info->ru_valid = 0;
	}

      if (flags & DF_LR && ! bb_info->lr_in)
	{
	  /* Allocate bitmaps for live variables.  */
	  bb_info->lr_def = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->lr_def);
	  bb_info->lr_use = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->lr_use);
	  bb_info->lr_in = BITMAP_XMALLOC ();
	  bb_info->lr_out = BITMAP_XMALLOC ();
	  bb_info->lr_valid = 0;
	}
    }
}

/* Free bitmaps for each basic block.  */
static void
df_bitmaps_free (struct df *df, int flags)
{
  basic_block bb;

  FOR_EACH_BB (bb)
    {
      struct bb_info *bb_info = DF_BB_INFO (df, bb);

      if (!bb_info)
	continue;

      if ((flags & DF_RD) && bb_info->rd_in)
	{
	  /* Free bitmaps for reaching definitions.  */
	  BITMAP_XFREE (bb_info->rd_kill);
	  bb_info->rd_kill = NULL;
	  BITMAP_XFREE (bb_info->rd_gen);
	  bb_info->rd_gen = NULL;
	  BITMAP_XFREE (bb_info->rd_in);
	  bb_info->rd_in = NULL;
	  BITMAP_XFREE (bb_info->rd_out);
	  bb_info->rd_out = NULL;
	}

      if ((flags & DF_RU) && bb_info->ru_in)
	{
	  /* Free bitmaps for upward exposed uses.  */
	  BITMAP_XFREE (bb_info->ru_kill);
	  bb_info->ru_kill = NULL;
	  BITMAP_XFREE (bb_info->ru_gen);
	  bb_info->ru_gen = NULL;
	  BITMAP_XFREE (bb_info->ru_in);
	  bb_info->ru_in = NULL;
	  BITMAP_XFREE (bb_info->ru_out);
	  bb_info->ru_out = NULL;
	}

      if ((flags & DF_LR) && bb_info->lr_in)
	{
	  /* Free bitmaps for live variables.  */
	  BITMAP_XFREE (bb_info->lr_def);
	  bb_info->lr_def = NULL;
	  BITMAP_XFREE (bb_info->lr_use);
	  bb_info->lr_use = NULL;
	  BITMAP_XFREE (bb_info->lr_in);
	  bb_info->lr_in = NULL;
	  BITMAP_XFREE (bb_info->lr_out);
	  bb_info->lr_out = NULL;
	}
    }
  df->flags &= ~(flags & (DF_RD | DF_RU | DF_LR));
}

/* Allocate and initialize dataflow memory.  */
static void
df_alloc (struct df *df, int n_regs)
{
  int n_insns;
  basic_block bb;

  df_link_pool = create_alloc_pool ("df_link pool", sizeof (struct df_link),
				    100);
  df_ref_pool = create_alloc_pool ("df_ref pool", sizeof (struct ref), 100);

  /* Perhaps we should use LUIDs to save memory for the insn_refs
     table.  This is only a small saving; a few pointers.  */
  n_insns = get_max_uid () + 1;

  df->def_id = 0;
  df->n_defs = 0;
  /* Approximate number of defs by number of insns.  */
  df->def_size = n_insns;
  df->defs = xmalloc (df->def_size * sizeof (*df->defs));

  df->use_id = 0;
  df->n_uses = 0;
  /* Approximate number of uses by twice number of insns.  */
  df->use_size = n_insns * 2;
  df->uses = xmalloc (df->use_size * sizeof (*df->uses));

  df->n_regs = n_regs;
  df->n_bbs = last_basic_block;

  /* Allocate temporary working array used during local dataflow analysis.  */
  df->reg_def_last = xmalloc (df->n_regs * sizeof (struct ref *));

  df_insn_table_realloc (df, n_insns);

  df_reg_table_realloc (df, df->n_regs);

  df->bbs_modified = BITMAP_XMALLOC ();
  bitmap_zero (df->bbs_modified);

  df->flags = 0;

  df->bbs = xcalloc (last_basic_block, sizeof (struct bb_info));

  df->all_blocks = BITMAP_XMALLOC ();
  FOR_EACH_BB (bb)
    bitmap_set_bit (df->all_blocks, bb->index);
}

/* Free all the dataflow info.  */
static void
df_free (struct df *df)
{
  df_bitmaps_free (df, DF_ALL);

  if (df->bbs)
    free (df->bbs);
  df->bbs = 0;

  if (df->insns)
    free (df->insns);
  df->insns = 0;
  df->insn_size = 0;

  if (df->defs)
    free (df->defs);
  df->defs = 0;
  df->def_size = 0;
  df->def_id = 0;

  if (df->uses)
    free (df->uses);
  df->uses = 0;
  df->use_size = 0;
  df->use_id = 0;

  if (df->regs)
    free (df->regs);
  df->regs = 0;
  df->reg_size = 0;

  if (df->bbs_modified)
    BITMAP_XFREE (df->bbs_modified);
  df->bbs_modified = 0;

  if (df->insns_modified)
    BITMAP_XFREE (df->insns_modified);
  df->insns_modified = 0;

  BITMAP_XFREE (df->all_blocks);
  df->all_blocks = 0;

  free_alloc_pool (df_ref_pool);
  free_alloc_pool (df_link_pool);
}

/* Local miscellaneous routines.  */

/* Return a USE for register REGNO.  */
static rtx
df_reg_use_gen (unsigned int regno)
{
  rtx reg;
  rtx use;

  reg = regno_reg_rtx[regno];

  use = gen_rtx_USE (GET_MODE (reg), reg);
  return use;
}


/* Return a CLOBBER for register REGNO.  */
static rtx
df_reg_clobber_gen (unsigned int regno)
{
  rtx reg;
  rtx use;

  reg = regno_reg_rtx[regno];

  use = gen_rtx_CLOBBER (GET_MODE (reg), reg);
  return use;
}

/* Local chain manipulation routines.  */

/* Create a link in a def-use or use-def chain.  */
static inline struct df_link *
df_link_create (struct ref *ref, struct df_link *next)
{
  struct df_link *link;

  link = pool_alloc (df_link_pool);
  link->next = next;
  link->ref = ref;
  return link;
}

/* Remove REF from the chain pointed to by PHEAD.  */
static struct df_link *
df_ref_unlink (struct df_link **phead, struct ref *ref)
{
  struct df_link *link = *phead;

  if (link)
    {
      if (! link->next)
	{
	  /* Only a single ref.  It must be the one we want.
	     If not, the def-use and use-def chains are likely to
	     be inconsistent.  */
	  if (link->ref != ref)
	    abort ();

	  /* Now have an empty chain.  */
	  *phead = NULL;
	}
      else
	{
	  /* Multiple refs.  One of them must be us.  */
	  if (link->ref == ref)
	    *phead = link->next;
	  else
	    {
	      /* Follow chain.  */
	      for (; link->next; link = link->next)
		if (link->next->ref == ref)
		  {
		    /* Unlink from list.  */
		    link->next = link->next->next;
		    return link->next;
		  }
	    }
	}
    }
  return link;
}

/* Unlink REF from all def-use/use-def chains, etc.  */
int
df_ref_remove (struct df *df, struct ref *ref)
{
  if (DF_REF_REG_DEF_P (ref))
    {
      df_def_unlink (df, ref);
      df_ref_unlink (&df->insns[DF_REF_INSN_UID (ref)].defs, ref);
    }
  else
    {
      df_use_unlink (df, ref);
      df_ref_unlink (&df->insns[DF_REF_INSN_UID (ref)].uses, ref);
    }
  return 1;
}

/* Unlink DEF from use-def and reg-def chains.  */
static void
df_def_unlink (struct df *df ATTRIBUTE_UNUSED, struct ref *def)
{
  struct df_link *du_link;
  unsigned int dregno = DF_REF_REGNO (def);

  /* Follow def-use chain to find all the uses of this def.  */
  for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
    {
      struct ref *use = du_link->ref;

      /* Unlink this def from the use-def chain.  */
      df_ref_unlink (&DF_REF_CHAIN (use), def);
    }
  DF_REF_CHAIN (def) = 0;

  /* Unlink def from reg-def chain.  */
  df_ref_unlink (&df->regs[dregno].defs, def);

  df->defs[DF_REF_ID (def)] = 0;
}

/* Unlink use from def-use and reg-use chains.  */
static void
df_use_unlink (struct df *df ATTRIBUTE_UNUSED, struct ref *use)
{
  struct df_link *ud_link;
  unsigned int uregno = DF_REF_REGNO (use);

  /* Follow use-def chain to find all the defs of this use.  */
  for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
    {
      struct ref *def = ud_link->ref;

      /* Unlink this use from the def-use chain.  */
      df_ref_unlink (&DF_REF_CHAIN (def), use);
    }
  DF_REF_CHAIN (use) = 0;

  /* Unlink use from reg-use chain.  */
  df_ref_unlink (&df->regs[uregno].uses, use);

  df->uses[DF_REF_ID (use)] = 0;
}

/* Local routines for recording refs.  */

/* Create a new ref of type DF_REF_TYPE for register REG at address
   LOC within INSN of BB.  */
static struct ref *
df_ref_create (struct df *df, rtx reg, rtx *loc, rtx insn,
	       enum df_ref_type ref_type, enum df_ref_flags ref_flags)
{
  struct ref *this_ref;

  this_ref = pool_alloc (df_ref_pool);
  DF_REF_REG (this_ref) = reg;
  DF_REF_LOC (this_ref) = loc;
  DF_REF_INSN (this_ref) = insn;
  DF_REF_CHAIN (this_ref) = 0;
  DF_REF_TYPE (this_ref) = ref_type;
  DF_REF_FLAGS (this_ref) = ref_flags;

  if (ref_type == DF_REF_REG_DEF)
    {
      if (df->def_id >= df->def_size)
	{
	  /* Make table 25 percent larger.  */
	  df->def_size += (df->def_size / 4);
	  df->defs = xrealloc (df->defs,
			       df->def_size * sizeof (*df->defs));
	}
      DF_REF_ID (this_ref) = df->def_id;
      df->defs[df->def_id++] = this_ref;
    }
  else
    {
      if (df->use_id >= df->use_size)
	{
	  /* Make table 25 percent larger.  */
	  df->use_size += (df->use_size / 4);
	  df->uses = xrealloc (df->uses,
			       df->use_size * sizeof (*df->uses));
	}
      DF_REF_ID (this_ref) = df->use_id;
      df->uses[df->use_id++] = this_ref;
    }
  return this_ref;
}

/* Create a new reference of type DF_REF_TYPE for a single register REG,
   used inside the LOC rtx of INSN.  */
static void
df_ref_record_1 (struct df *df, rtx reg, rtx *loc, rtx insn,
		 enum df_ref_type ref_type, enum df_ref_flags ref_flags)
{
  df_ref_create (df, reg, loc, insn, ref_type, ref_flags);
}

/* Create new references of type DF_REF_TYPE for each part of register REG
   at address LOC within INSN of BB.  */
static void
df_ref_record (struct df *df, rtx reg, rtx *loc, rtx insn,
	       enum df_ref_type ref_type, enum df_ref_flags ref_flags)
{
  unsigned int regno;

  if (GET_CODE (reg) != REG && GET_CODE (reg) != SUBREG)
    abort ();

  /* For the reg allocator we are interested in some SUBREG rtx's, but not
     all.  Notably only those representing a word extraction from a multi-word
     reg.  As written in the documentation those should have the form
     (subreg:SI (reg:M A) N), with size(SImode) > size(Mmode).
     XXX Is that true?  We could also use the global word_mode variable.  */
  if (GET_CODE (reg) == SUBREG
      && (GET_MODE_SIZE (GET_MODE (reg)) < GET_MODE_SIZE (word_mode)
	  || GET_MODE_SIZE (GET_MODE (reg))
	       >= GET_MODE_SIZE (GET_MODE (SUBREG_REG (reg)))))
    {
      loc = &SUBREG_REG (reg);
      reg = *loc;
      ref_flags |= DF_REF_STRIPPED;
    }

  regno = REGNO (GET_CODE (reg) == SUBREG ? SUBREG_REG (reg) : reg);
  if (regno < FIRST_PSEUDO_REGISTER)
    {
      int i;
      int endregno;

      if (! (df->flags & DF_HARD_REGS))
	return;

      /* GET_MODE (reg) is correct here.  We do not want to go into a SUBREG
	 for the mode, because we only want to add references to regs, which
	 are really referenced.  E.g., a (subreg:SI (reg:DI 0) 0) does _not_
	 reference the whole reg 0 in DI mode (which would also include
	 reg 1, at least, if 0 and 1 are SImode registers).  */
      endregno = HARD_REGNO_NREGS (regno, GET_MODE (reg));
      if (GET_CODE (reg) == SUBREG)
	regno += subreg_regno_offset (regno, GET_MODE (SUBREG_REG (reg)),
				      SUBREG_BYTE (reg), GET_MODE (reg));
      endregno += regno;

      for (i = regno; i < endregno; i++)
	df_ref_record_1 (df, regno_reg_rtx[i],
			 loc, insn, ref_type, ref_flags);
    }
  else
    {
      df_ref_record_1 (df, reg, loc, insn, ref_type, ref_flags);
    }
}

/* Return nonzero if writes to paradoxical SUBREGs, or SUBREGs which
   are too narrow, are read-modify-write.  */
bool
read_modify_subreg_p (rtx x)
{
  unsigned int isize, osize;
  if (GET_CODE (x) != SUBREG)
    return false;
  isize = GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)));
  osize = GET_MODE_SIZE (GET_MODE (x));
  /* Paradoxical subreg writes don't leave a trace of the old content.  */
  return (isize > osize && isize > UNITS_PER_WORD);
}
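
/* For example, on a target with 4-byte words a write to
   (subreg:SI (reg:DI 0) 0) replaces only one word of the DImode
   register, so it is read-modify-write (isize 8 > osize 4 and
   isize > UNITS_PER_WORD), whereas a paradoxical write such as
   (subreg:DI (reg:SI 1) 0) clobbers the whole register and is not.
   The register numbers here are illustrative only.  */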

/* Process all the registers defined in the rtx, X.  */
static void
df_def_record_1 (struct df *df, rtx x, basic_block bb, rtx insn)
{
  rtx *loc;
  rtx dst;
  enum df_ref_flags flags = 0;

  /* We may recursively call ourselves on EXPR_LIST when dealing with a
     PARALLEL construct.  */
  if (GET_CODE (x) == EXPR_LIST || GET_CODE (x) == CLOBBER)
    loc = &XEXP (x, 0);
  else
    loc = &SET_DEST (x);
  dst = *loc;

  /* Some targets place small structures in registers for
     return values of functions.  */
  if (GET_CODE (dst) == PARALLEL && GET_MODE (dst) == BLKmode)
    {
      int i;

      for (i = XVECLEN (dst, 0) - 1; i >= 0; i--)
	{
	  rtx temp = XVECEXP (dst, 0, i);
	  if (GET_CODE (temp) == EXPR_LIST || GET_CODE (temp) == CLOBBER
	      || GET_CODE (temp) == SET)
	    df_def_record_1 (df, temp, bb, insn);
	}
      return;
    }

  /* Maybe, we should flag the use of STRICT_LOW_PART somehow.  It might
     be handy for the reg allocator.  */
  while (GET_CODE (dst) == STRICT_LOW_PART
	 || GET_CODE (dst) == ZERO_EXTRACT
	 || GET_CODE (dst) == SIGN_EXTRACT
	 || ((df->flags & DF_FOR_REGALLOC) == 0
	     && read_modify_subreg_p (dst)))
    {
      /* Strict low part always contains SUBREG, but we do not want to make
	 it appear outside, as whole register is always considered.  */
      if (GET_CODE (dst) == STRICT_LOW_PART)
	{
	  loc = &XEXP (dst, 0);
	  dst = *loc;
	}
      loc = &XEXP (dst, 0);
      dst = *loc;
      flags |= DF_REF_READ_WRITE;
    }

  if (GET_CODE (dst) == REG
      || (GET_CODE (dst) == SUBREG && GET_CODE (SUBREG_REG (dst)) == REG))
    df_ref_record (df, dst, loc, insn, DF_REF_REG_DEF, flags);
}

/* Process all the registers defined in the pattern rtx, X.  */
static void
df_defs_record (struct df *df, rtx x, basic_block bb, rtx insn)
{
  RTX_CODE code = GET_CODE (x);

  if (code == SET || code == CLOBBER)
    {
      /* Mark the single def within the pattern.  */
      df_def_record_1 (df, x, bb, insn);
    }
  else if (code == PARALLEL)
    {
      int i;

      /* Mark the multiple defs within the pattern.  */
      for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
	{
	  code = GET_CODE (XVECEXP (x, 0, i));
	  if (code == SET || code == CLOBBER)
	    df_def_record_1 (df, XVECEXP (x, 0, i), bb, insn);
	}
    }
}

/* Process all the registers used in the rtx at address LOC.  */
static void
df_uses_record (struct df *df, rtx *loc, enum df_ref_type ref_type,
		basic_block bb, rtx insn, enum df_ref_flags flags)
{
  RTX_CODE code;
  rtx x;
 retry:
  x = *loc;
  if (!x)
    return;
  code = GET_CODE (x);
  switch (code)
    {
    case LABEL_REF:
    case SYMBOL_REF:
    case CONST_INT:
    case CONST:
    case CONST_DOUBLE:
    case CONST_VECTOR:
    case PC:
    case CC0:
    case ADDR_VEC:
    case ADDR_DIFF_VEC:
      return;

    case CLOBBER:
      /* If we are clobbering a MEM, mark any registers inside the address
	 as being used.  */
      if (GET_CODE (XEXP (x, 0)) == MEM)
	df_uses_record (df, &XEXP (XEXP (x, 0), 0),
			DF_REF_REG_MEM_STORE, bb, insn, flags);

      /* If we're clobbering a REG then we have a def so ignore.  */
      return;

    case MEM:
      df_uses_record (df, &XEXP (x, 0), DF_REF_REG_MEM_LOAD, bb, insn, flags);
      return;

    case SUBREG:
      /* While we're here, optimize this case.  */

      /* In case the SUBREG is not of a REG, do not optimize.  */
      if (GET_CODE (SUBREG_REG (x)) != REG)
	{
	  loc = &SUBREG_REG (x);
	  df_uses_record (df, loc, ref_type, bb, insn, flags);
	  return;
	}
      /* ... Fall through ...  */

    case REG:
      /* See a REG (or SUBREG) other than being set.  */
      df_ref_record (df, x, loc, insn, ref_type, flags);
      return;

    case SET:
      {
	rtx dst = SET_DEST (x);

	df_uses_record (df, &SET_SRC (x), DF_REF_REG_USE, bb, insn, 0);

	switch (GET_CODE (dst))
	  {
	    enum df_ref_flags use_flags;
	  case SUBREG:
	    if ((df->flags & DF_FOR_REGALLOC) == 0
		&& read_modify_subreg_p (dst))
	      {
		use_flags = DF_REF_READ_WRITE;
		df_uses_record (df, &SUBREG_REG (dst), DF_REF_REG_USE, bb,
				insn, use_flags);
		break;
	      }
	    /* ... FALLTHRU ...  */
	  case REG:
	  case PARALLEL:
	  case PC:
	  case CC0:
	    break;
	  case MEM:
	    df_uses_record (df, &XEXP (dst, 0),
			    DF_REF_REG_MEM_STORE,
			    bb, insn, 0);
	    break;
	  case STRICT_LOW_PART:
	    /* A strict_low_part uses the whole REG and not just the
	       SUBREG.  */
	    dst = XEXP (dst, 0);
	    if (GET_CODE (dst) != SUBREG)
	      abort ();
	    use_flags = DF_REF_READ_WRITE;
	    df_uses_record (df, &SUBREG_REG (dst), DF_REF_REG_USE, bb,
			    insn, use_flags);
	    break;
	  case ZERO_EXTRACT:
	  case SIGN_EXTRACT:
	    df_uses_record (df, &XEXP (dst, 0), DF_REF_REG_USE, bb, insn,
			    DF_REF_READ_WRITE);
	    df_uses_record (df, &XEXP (dst, 1), DF_REF_REG_USE, bb, insn, 0);
	    df_uses_record (df, &XEXP (dst, 2), DF_REF_REG_USE, bb, insn, 0);
	    dst = XEXP (dst, 0);
	    break;
	  default:
	    abort ();
	  }
	return;
      }

    case RETURN:
      break;

    case ASM_OPERANDS:
    case UNSPEC_VOLATILE:
    case TRAP_IF:
    case ASM_INPUT:
      {
	/* Traditional and volatile asm instructions must be considered to use
	   and clobber all hard registers, all pseudo-registers and all of
	   memory.  So must TRAP_IF and UNSPEC_VOLATILE operations.

	   Consider for instance a volatile asm that changes the fpu rounding
	   mode.  An insn should not be moved across this even if it only uses
	   pseudo-regs because it might give an incorrectly rounded result.

	   For now, just mark any regs we can find in ASM_OPERANDS as
	   used.  */

	/* For all ASM_OPERANDS, we must traverse the vector of input operands.
	   We cannot just fall through here since then we would be confused
	   by the ASM_INPUT rtx inside ASM_OPERANDS, which do not indicate
	   traditional asms unlike their normal usage.  */
	if (code == ASM_OPERANDS)
	  {
	    int j;

	    for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
	      df_uses_record (df, &ASM_OPERANDS_INPUT (x, j),
			      DF_REF_REG_USE, bb, insn, 0);
	    return;
	  }
	break;
      }

    case PRE_DEC:
    case POST_DEC:
    case PRE_INC:
    case POST_INC:
    case PRE_MODIFY:
    case POST_MODIFY:
      /* Catch the def of the register being modified.  */
      df_ref_record (df, XEXP (x, 0), &XEXP (x, 0), insn, DF_REF_REG_DEF,
		     DF_REF_READ_WRITE);

      /* ... Fall through to handle uses ...  */

    default:
      break;
    }

  /* Recursively scan the operands of this expression.  */
  {
    const char *fmt = GET_RTX_FORMAT (code);
    int i;

    for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
      {
	if (fmt[i] == 'e')
	  {
	    /* Tail recursive case: save a function call level.  */
	    if (i == 0)
	      {
		loc = &XEXP (x, 0);
		goto retry;
	      }
	    df_uses_record (df, &XEXP (x, i), ref_type, bb, insn, flags);
	  }
	else if (fmt[i] == 'E')
	  {
	    int j;
	    for (j = 0; j < XVECLEN (x, i); j++)
	      df_uses_record (df, &XVECEXP (x, i, j), ref_type,
			      bb, insn, flags);
	  }
      }
  }
}

/* Record all the refs within INSN of basic block BB.  */
static void
df_insn_refs_record (struct df *df, basic_block bb, rtx insn)
{
  int i;

  if (INSN_P (insn))
    {
      rtx note;

      /* Record register defs.  */
      df_defs_record (df, PATTERN (insn), bb, insn);

      if (df->flags & DF_EQUIV_NOTES)
	for (note = REG_NOTES (insn); note;
	     note = XEXP (note, 1))
	  {
	    switch (REG_NOTE_KIND (note))
	      {
	      case REG_EQUIV:
	      case REG_EQUAL:
		df_uses_record (df, &XEXP (note, 0), DF_REF_REG_USE,
				bb, insn, 0);
	      default:
		break;
	      }
	  }

      if (GET_CODE (insn) == CALL_INSN)
	{
	  rtx note;
	  rtx x;

	  /* Record the registers used to pass arguments.  */
	  for (note = CALL_INSN_FUNCTION_USAGE (insn); note;
	       note = XEXP (note, 1))
	    {
	      if (GET_CODE (XEXP (note, 0)) == USE)
		df_uses_record (df, &XEXP (XEXP (note, 0), 0), DF_REF_REG_USE,
				bb, insn, 0);
	    }

	  /* The stack ptr is used (honorarily) by a CALL insn.  */
	  x = df_reg_use_gen (STACK_POINTER_REGNUM);
	  df_uses_record (df, &XEXP (x, 0), DF_REF_REG_USE, bb, insn, 0);

	  if (df->flags & DF_HARD_REGS)
	    {
	      /* Calls may also reference any of the global registers,
		 so they are recorded as used.  */
	      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
		if (global_regs[i])
		  {
		    x = df_reg_use_gen (i);
		    df_uses_record (df, &SET_DEST (x),
				    DF_REF_REG_USE, bb, insn, 0);
		  }
	    }
	}

      /* Record the register uses.  */
      df_uses_record (df, &PATTERN (insn),
		      DF_REF_REG_USE, bb, insn, 0);

      if (GET_CODE (insn) == CALL_INSN)
	{
	  rtx note;

	  if (df->flags & DF_HARD_REGS)
	    {
	      /* Kill all registers invalidated by a call.  */
	      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
		if (TEST_HARD_REG_BIT (regs_invalidated_by_call, i))
		  {
		    rtx reg_clob = df_reg_clobber_gen (i);
		    df_defs_record (df, reg_clob, bb, insn);
		  }
	    }

	  /* There may be extra registers to be clobbered.  */
	  for (note = CALL_INSN_FUNCTION_USAGE (insn);
	       note;
	       note = XEXP (note, 1))
	    if (GET_CODE (XEXP (note, 0)) == CLOBBER)
	      df_defs_record (df, XEXP (note, 0), bb, insn);
	}
    }
}

/* Record all the refs within the basic block BB.  */
static void
df_bb_refs_record (struct df *df, basic_block bb)
{
  rtx insn;

  /* Scan the block an insn at a time from beginning to end.  */
  for (insn = bb->head; ; insn = NEXT_INSN (insn))
    {
      if (INSN_P (insn))
	{
	  /* Record defs within INSN.  */
	  df_insn_refs_record (df, bb, insn);
	}
      if (insn == bb->end)
	break;
    }
}


/* Record all the refs in the basic blocks specified by BLOCKS.  */
static void
df_refs_record (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_refs_record (df, bb);
    });
}

/* Dataflow analysis routines.  */

/* Create reg-def chains for basic block BB.  These are a list of
   definitions for each register.  */
static void
df_bb_reg_def_chain_create (struct df *df, basic_block bb)
{
  rtx insn;

  /* Perhaps the defs should be sorted using a depth first search
     of the CFG (or possibly a breadth first search).  We currently
     scan the basic blocks in reverse order so that the first defs
     appear at the start of the chain.  */

  for (insn = bb->end; insn && insn != PREV_INSN (bb->head);
       insn = PREV_INSN (insn))
    {
      struct df_link *link;
      unsigned int uid = INSN_UID (insn);

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].defs; link; link = link->next)
	{
	  struct ref *def = link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  /* Do not add refs to the chain twice, i.e., only add new
	     refs.  XXX the same could be done by testing if the
	     current insn is a modified (or a new) one.  This would be
	     faster.  */
	  if (DF_REF_ID (def) < df->def_id_save)
	    continue;

	  df->regs[dregno].defs
	    = df_link_create (def, df->regs[dregno].defs);
	}
    }
}


/* Create reg-def chains for each basic block within BLOCKS.  These
   are a list of definitions for each register.  */
static void
df_reg_def_chain_create (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP/*_REV*/ (blocks, 0, bb,
    {
      df_bb_reg_def_chain_create (df, bb);
    });
}

/* Create reg-use chains for basic block BB.  These are a list of uses
   for each register.  */
static void
df_bb_reg_use_chain_create (struct df *df, basic_block bb)
{
  rtx insn;

  /* Scan in forward order so that the last uses appear at the start
     of the chain.  */

  for (insn = bb->head; insn && insn != NEXT_INSN (bb->end);
       insn = NEXT_INSN (insn))
    {
      struct df_link *link;
      unsigned int uid = INSN_UID (insn);

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].uses; link; link = link->next)
	{
	  struct ref *use = link->ref;
	  unsigned int uregno = DF_REF_REGNO (use);

	  /* Do not add refs to the chain twice, i.e., only add new
	     refs.  XXX the same could be done by testing if the
	     current insn is a modified (or a new) one.  This would be
	     faster.  */
	  if (DF_REF_ID (use) < df->use_id_save)
	    continue;

	  df->regs[uregno].uses
	    = df_link_create (use, df->regs[uregno].uses);
	}
    }
}


/* Create reg-use chains for each basic block within BLOCKS.  These
   are a list of uses for each register.  */
static void
df_reg_use_chain_create (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_reg_use_chain_create (df, bb);
    });
}

/* Create def-use chains from reaching use bitmaps for basic block BB.  */
static void
df_bb_du_chain_create (struct df *df, basic_block bb, bitmap ru)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  bitmap_copy (ru, bb_info->ru_out);

  /* For each def in BB create a linked list (chain) of uses
     reached from the def.  */
  for (insn = bb->end; insn && insn != PREV_INSN (bb->head);
       insn = PREV_INSN (insn))
    {
      struct df_link *def_link;
      struct df_link *use_link;
      unsigned int uid = INSN_UID (insn);

      if (! INSN_P (insn))
	continue;

      /* For each def in insn...  */
      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  DF_REF_CHAIN (def) = 0;

	  /* While the reg-use chains are not essential, it
	     is _much_ faster to search these short lists rather
	     than all the reaching uses, especially for large functions.  */
	  for (use_link = df->regs[dregno].uses; use_link;
	       use_link = use_link->next)
	    {
	      struct ref *use = use_link->ref;

	      if (bitmap_bit_p (ru, DF_REF_ID (use)))
		{
		  DF_REF_CHAIN (def)
		    = df_link_create (use, DF_REF_CHAIN (def));

		  bitmap_clear_bit (ru, DF_REF_ID (use));
		}
	    }
	}

      /* For each use in insn...  */
      for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
	{
	  struct ref *use = use_link->ref;
	  bitmap_set_bit (ru, DF_REF_ID (use));
	}
    }
}


/* Create def-use chains from reaching use bitmaps for basic blocks
   in BLOCKS.  */
static void
df_du_chain_create (struct df *df, bitmap blocks)
{
  bitmap ru;
  basic_block bb;

  ru = BITMAP_XMALLOC ();

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_du_chain_create (df, bb, ru);
    });

  BITMAP_XFREE (ru);
}

/* Create use-def chains from reaching def bitmaps for basic block BB.  */
static void
df_bb_ud_chain_create (struct df *df, basic_block bb)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  struct ref **reg_def_last = df->reg_def_last;
  rtx insn;

  memset (reg_def_last, 0, df->n_regs * sizeof (struct ref *));

  /* For each use in BB create a linked list (chain) of defs
     that reach the use.  */
  for (insn = bb->head; insn && insn != NEXT_INSN (bb->end);
       insn = NEXT_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *use_link;
      struct df_link *def_link;

      if (! INSN_P (insn))
	continue;

      /* For each use in insn...  */
      for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
	{
	  struct ref *use = use_link->ref;
	  unsigned int regno = DF_REF_REGNO (use);

	  DF_REF_CHAIN (use) = 0;

	  /* Has regno been defined in this BB yet?  If so, use
	     the last def as the single entry for the use-def
	     chain for this use.  Otherwise, we need to add all
	     the defs using this regno that reach the start of
	     this BB.  */
	  if (reg_def_last[regno])
	    {
	      DF_REF_CHAIN (use)
		= df_link_create (reg_def_last[regno], 0);
	    }
	  else
	    {
	      /* While the reg-def chains are not essential, it is
		 _much_ faster to search these short lists rather than
		 all the reaching defs, especially for large
		 functions.  */
	      for (def_link = df->regs[regno].defs; def_link;
		   def_link = def_link->next)
		{
		  struct ref *def = def_link->ref;

		  if (bitmap_bit_p (bb_info->rd_in, DF_REF_ID (def)))
		    {
		      DF_REF_CHAIN (use)
			= df_link_create (def, DF_REF_CHAIN (use));
		    }
		}
	    }
	}

      /* For each def in insn... record the last def of each reg.  */
      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  int dregno = DF_REF_REGNO (def);

	  reg_def_last[dregno] = def;
	}
    }
}


/* Create use-def chains from reaching def bitmaps for basic blocks
   within BLOCKS.  */
static void
df_ud_chain_create (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_ud_chain_create (df, bb);
    });
}

static void
df_rd_transfer_function (int bb ATTRIBUTE_UNUSED, int *changed, bitmap in,
			 bitmap out, bitmap gen, bitmap kill,
			 void *data ATTRIBUTE_UNUSED)
{
  *changed = bitmap_union_of_diff (out, gen, in, kill);
}


static void
df_ru_transfer_function (int bb ATTRIBUTE_UNUSED, int *changed, bitmap in,
			 bitmap out, bitmap gen, bitmap kill,
			 void *data ATTRIBUTE_UNUSED)
{
  *changed = bitmap_union_of_diff (in, gen, out, kill);
}


static void
df_lr_transfer_function (int bb ATTRIBUTE_UNUSED, int *changed, bitmap in,
			 bitmap out, bitmap use, bitmap def,
			 void *data ATTRIBUTE_UNUSED)
{
  *changed = bitmap_union_of_diff (in, use, out, def);
}
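
/* The three transfer functions above are instances of the standard
   unidirectional dataflow equations; spelling them out makes the
   symmetry plain (this note is explanatory only):

      reaching defs (forward):   out(b) = gen(b) U (in(b) - kill(b))
      reaching uses (backward):  in(b)  = gen(b) U (out(b) - kill(b))
      live registers (backward): in(b)  = use(b) U (out(b) - def(b))

   bitmap_union_of_diff computes the right-hand side into its first
   argument and returns nonzero if that bitmap changed, which is what
   drives the iteration in iterative_dataflow_bitmap until a fixed
   point is reached.  */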

/* Compute local reaching def info for basic block BB.  */
static void
df_bb_rd_local_compute (struct df *df, basic_block bb)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  for (insn = bb->head; insn && insn != NEXT_INSN (bb->end);
       insn = NEXT_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *def_link;

      if (! INSN_P (insn))
	continue;

      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  unsigned int regno = DF_REF_REGNO (def);
	  struct df_link *def2_link;

	  for (def2_link = df->regs[regno].defs; def2_link;
	       def2_link = def2_link->next)
	    {
	      struct ref *def2 = def2_link->ref;

	      /* Add all defs of this reg to the set of kills.  This
		 is greedy since many of these defs will not actually
		 be killed by this BB but it keeps things a lot
		 simpler.  */
	      bitmap_set_bit (bb_info->rd_kill, DF_REF_ID (def2));

	      /* Zap from the set of gens for this BB.  */
	      bitmap_clear_bit (bb_info->rd_gen, DF_REF_ID (def2));
	    }

	  bitmap_set_bit (bb_info->rd_gen, DF_REF_ID (def));
	}
    }

  bb_info->rd_valid = 1;
}


/* Compute local reaching def info for each basic block within BLOCKS.  */
static void
df_rd_local_compute (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_rd_local_compute (df, bb);
    });
}

/* Compute local reaching use (upward exposed use) info for basic
   block BB.  */
static void
df_bb_ru_local_compute (struct df *df, basic_block bb)
{
  /* This is much more tricky than computing reaching defs.  With
     reaching defs, defs get killed by other defs.  With upwards
     exposed uses, these get killed by defs with the same regno.  */

  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  for (insn = bb->end; insn && insn != PREV_INSN (bb->head);
       insn = PREV_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *def_link;
      struct df_link *use_link;

      if (! INSN_P (insn))
	continue;

      for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
	{
	  struct ref *def = def_link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  for (use_link = df->regs[dregno].uses; use_link;
	       use_link = use_link->next)
	    {
	      struct ref *use = use_link->ref;

	      /* Add all uses of this reg to the set of kills.  This
		 is greedy since many of these uses will not actually
		 be killed by this BB but it keeps things a lot
		 simpler.  */
	      bitmap_set_bit (bb_info->ru_kill, DF_REF_ID (use));

	      /* Zap from the set of gens for this BB.  */
	      bitmap_clear_bit (bb_info->ru_gen, DF_REF_ID (use));
	    }
	}

      for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
	{
	  struct ref *use = use_link->ref;
	  /* Add use to set of gens in this BB.  */
	  bitmap_set_bit (bb_info->ru_gen, DF_REF_ID (use));
	}
    }
  bb_info->ru_valid = 1;
}


/* Compute local reaching use (upward exposed use) info for each basic
   block within BLOCKS.  */
static void
df_ru_local_compute (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_ru_local_compute (df, bb);
    });
}

/* Compute local live variable info for basic block BB.  */
static void
df_bb_lr_local_compute (struct df *df, basic_block bb)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  for (insn = bb->end; insn && insn != PREV_INSN (bb->head);
       insn = PREV_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      struct df_link *link;

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].defs; link; link = link->next)
	{
	  struct ref *def = link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  /* Add def to set of defs in this BB.  */
	  bitmap_set_bit (bb_info->lr_def, dregno);

	  bitmap_clear_bit (bb_info->lr_use, dregno);
	}

      for (link = df->insns[uid].uses; link; link = link->next)
	{
	  struct ref *use = link->ref;
	  /* Add use to set of uses in this BB.  */
	  bitmap_set_bit (bb_info->lr_use, DF_REF_REGNO (use));
	}
    }
  bb_info->lr_valid = 1;
}


/* Compute local live variable info for each basic block within BLOCKS.  */
static void
df_lr_local_compute (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_lr_local_compute (df, bb);
    });
}

/* Compute register info: lifetime, bb, and number of defs and uses
   for basic block BB.  */
static void
df_bb_reg_info_compute (struct df *df, basic_block bb, bitmap live)
{
  struct reg_info *reg_info = df->regs;
  struct bb_info *bb_info = DF_BB_INFO (df, bb);
  rtx insn;

  bitmap_copy (live, bb_info->lr_out);

  for (insn = bb->end; insn && insn != PREV_INSN (bb->head);
       insn = PREV_INSN (insn))
    {
      unsigned int uid = INSN_UID (insn);
      unsigned int regno;
      struct df_link *link;

      if (! INSN_P (insn))
	continue;

      for (link = df->insns[uid].defs; link; link = link->next)
	{
	  struct ref *def = link->ref;
	  unsigned int dregno = DF_REF_REGNO (def);

	  /* Kill this register.  */
	  bitmap_clear_bit (live, dregno);
	  reg_info[dregno].n_defs++;
	}

      for (link = df->insns[uid].uses; link; link = link->next)
	{
	  struct ref *use = link->ref;
	  unsigned int uregno = DF_REF_REGNO (use);

	  /* This register is now live.  */
	  bitmap_set_bit (live, uregno);
	  reg_info[uregno].n_uses++;
	}

      /* Increment lifetimes of all live registers.  */
      EXECUTE_IF_SET_IN_BITMAP (live, 0, regno,
	{
	  reg_info[regno].lifetime++;
	});
    }
}


/* Compute register info: lifetime, bb, and number of defs and uses.  */
static void
df_reg_info_compute (struct df *df, bitmap blocks)
{
  basic_block bb;
  bitmap live;

  live = BITMAP_XMALLOC ();

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_reg_info_compute (df, bb, live);
    });

  BITMAP_XFREE (live);
}

/* Assign LUIDs for BB.  */
static int
df_bb_luids_set (struct df *df, basic_block bb)
{
  rtx insn;
  int luid = 0;

  /* The LUIDs are monotonically increasing for each basic block.  */

  for (insn = bb->head; ; insn = NEXT_INSN (insn))
    {
      if (INSN_P (insn))
	DF_INSN_LUID (df, insn) = luid++;
      DF_INSN_LUID (df, insn) = luid;

      if (insn == bb->end)
	break;
    }
  return luid;
}


/* Assign LUIDs for each basic block within BLOCKS.  */
static int
df_luids_set (struct df *df, bitmap blocks)
{
  basic_block bb;
  int total = 0;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      total += df_bb_luids_set (df, bb);
    });
  return total;
}

/* Perform dataflow analysis using existing DF structure for blocks
   within BLOCKS.  If BLOCKS is zero, use all basic blocks in the CFG.  */
static void
df_analyse_1 (struct df *df, bitmap blocks, int flags, int update)
{
  int aflags;
  int dflags;
  int i;
  basic_block bb;

  dflags = 0;
  aflags = flags;
  if (flags & DF_UD_CHAIN)
    aflags |= DF_RD | DF_RD_CHAIN;

  if (flags & DF_DU_CHAIN)
    aflags |= DF_RU;

  if (flags & DF_RU)
    aflags |= DF_RU_CHAIN;

  if (flags & DF_REG_INFO)
    aflags |= DF_LR;

  if (! blocks)
    blocks = df->all_blocks;

  df->flags = flags;
  if (update)
    {
      df_refs_update (df);
      /* More fine grained incremental dataflow analysis would be
	 nice.  For now recompute the whole shebang for the
	 modified blocks.  */
#if 0
      df_refs_unlink (df, blocks);
#endif
      /* All the def-use, use-def chains can be potentially
	 modified by changes in one block.  The size of the
	 bitmaps can also change.  */
    }
  else
    {
      /* Scan the function for all register defs and uses.  */
      df_refs_queue (df);
      df_refs_record (df, blocks);

      /* Link all the new defs and uses to the insns.  */
      df_refs_process (df);
    }

  /* Allocate the bitmaps now the total number of defs and uses are
     known.  If the number of defs or uses has changed, then
     these bitmaps need to be reallocated.  */
  df_bitmaps_alloc (df, aflags);

  /* Set the LUIDs for each specified basic block.  */
  df_luids_set (df, blocks);

  /* Recreate reg-def and reg-use chains from scratch so that first
     def is at the head of the reg-def chain and the last use is at
     the head of the reg-use chain.  This is only important for
     regs local to a basic block as it speeds up searching.  */
  if (aflags & DF_RD_CHAIN)
    {
      df_reg_def_chain_create (df, blocks);
    }

  if (aflags & DF_RU_CHAIN)
    {
      df_reg_use_chain_create (df, blocks);
    }

  df->dfs_order = xmalloc (sizeof (int) * n_basic_blocks);
  df->rc_order = xmalloc (sizeof (int) * n_basic_blocks);
  df->rts_order = xmalloc (sizeof (int) * n_basic_blocks);
  df->inverse_dfs_map = xmalloc (sizeof (int) * last_basic_block);
  df->inverse_rc_map = xmalloc (sizeof (int) * last_basic_block);
  df->inverse_rts_map = xmalloc (sizeof (int) * last_basic_block);

  flow_depth_first_order_compute (df->dfs_order, df->rc_order);
  flow_reverse_top_sort_order_compute (df->rts_order);
  for (i = 0; i < n_basic_blocks; i++)
    {
      df->inverse_dfs_map[df->dfs_order[i]] = i;
      df->inverse_rc_map[df->rc_order[i]] = i;
      df->inverse_rts_map[df->rts_order[i]] = i;
    }
  if (aflags & DF_RD)
    {
      /* Compute the sets of gens and kills for the defs of each bb.  */
      df_rd_local_compute (df, df->flags & DF_RD ? blocks : df->all_blocks);
      {
	bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *gen = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *kill = xmalloc (sizeof (bitmap) * last_basic_block);
	FOR_EACH_BB (bb)
	  {
	    in[bb->index] = DF_BB_INFO (df, bb)->rd_in;
	    out[bb->index] = DF_BB_INFO (df, bb)->rd_out;
	    gen[bb->index] = DF_BB_INFO (df, bb)->rd_gen;
	    kill[bb->index] = DF_BB_INFO (df, bb)->rd_kill;
	  }
	iterative_dataflow_bitmap (in, out, gen, kill, df->all_blocks,
				   DF_FORWARD, DF_UNION, df_rd_transfer_function,
				   df->inverse_rc_map, NULL);
	free (in);
	free (out);
	free (gen);
	free (kill);
      }
    }

  if (aflags & DF_UD_CHAIN)
    {
      /* Create use-def chains.  */
      df_ud_chain_create (df, df->all_blocks);

      if (! (flags & DF_RD))
	dflags |= DF_RD;
    }

  if (aflags & DF_RU)
    {
      /* Compute the sets of gens and kills for the upwards exposed
	 uses in each bb.  */
      df_ru_local_compute (df, df->flags & DF_RU ? blocks : df->all_blocks);
      {
	bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *gen = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *kill = xmalloc (sizeof (bitmap) * last_basic_block);
	FOR_EACH_BB (bb)
	  {
	    in[bb->index] = DF_BB_INFO (df, bb)->ru_in;
	    out[bb->index] = DF_BB_INFO (df, bb)->ru_out;
	    gen[bb->index] = DF_BB_INFO (df, bb)->ru_gen;
	    kill[bb->index] = DF_BB_INFO (df, bb)->ru_kill;
	  }
	iterative_dataflow_bitmap (in, out, gen, kill, df->all_blocks,
				   DF_BACKWARD, DF_UNION, df_ru_transfer_function,
				   df->inverse_rts_map, NULL);
	free (in);
	free (out);
	free (gen);
	free (kill);
      }
    }

  if (aflags & DF_DU_CHAIN)
    {
      /* Create def-use chains.  */
      df_du_chain_create (df, df->all_blocks);

      if (! (flags & DF_RU))
	dflags |= DF_RU;
    }

  /* Free up bitmaps that are no longer required.  */
  if (dflags)
    df_bitmaps_free (df, dflags);

  if (aflags & DF_LR)
    {
      /* Compute the sets of defs and uses of live variables.  */
      df_lr_local_compute (df, df->flags & DF_LR ? blocks : df->all_blocks);
      {
	bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *use = xmalloc (sizeof (bitmap) * last_basic_block);
	bitmap *def = xmalloc (sizeof (bitmap) * last_basic_block);
	FOR_EACH_BB (bb)
	  {
	    in[bb->index] = DF_BB_INFO (df, bb)->lr_in;
	    out[bb->index] = DF_BB_INFO (df, bb)->lr_out;
	    use[bb->index] = DF_BB_INFO (df, bb)->lr_use;
	    def[bb->index] = DF_BB_INFO (df, bb)->lr_def;
	  }
	iterative_dataflow_bitmap (in, out, use, def, df->all_blocks,
				   DF_BACKWARD, DF_UNION, df_lr_transfer_function,
				   df->inverse_rts_map, NULL);
	free (in);
	free (out);
	free (use);
	free (def);
      }
    }

  if (aflags & DF_REG_INFO)
    {
      df_reg_info_compute (df, df->all_blocks);
    }

  free (df->dfs_order);
  free (df->rc_order);
  free (df->rts_order);
  free (df->inverse_rc_map);
  free (df->inverse_dfs_map);
  free (df->inverse_rts_map);
}

/* Initialize dataflow analysis.  */
struct df *
df_init (void)
{
  struct df *df;

  df = xcalloc (1, sizeof (struct df));

  /* Squirrel away a global for debugging.  */
  ddf = df;

  return df;
}

/* Start queuing refs.  */
static int
df_refs_queue (struct df *df)
{
  df->def_id_save = df->def_id;
  df->use_id_save = df->use_id;
  /* ??? Perhaps we should save current obstack state so that we can
     unwind it.  */
  return 0;
}
2062 /* Process queued refs. */
2063 static int
2064 df_refs_process (struct df *df)
2066 unsigned int i;
2068 /* Build new insn-def chains. */
2069 for (i = df->def_id_save; i != df->def_id; i++)
2071 struct ref *def = df->defs[i];
2072 unsigned int uid = DF_REF_INSN_UID (def);
2074 /* Add def to head of def list for INSN. */
2075 df->insns[uid].defs
2076 = df_link_create (def, df->insns[uid].defs);
2079 /* Build new insn-use chains. */
2080 for (i = df->use_id_save; i != df->use_id; i++)
2082 struct ref *use = df->uses[i];
2083 unsigned int uid = DF_REF_INSN_UID (use);
2085 /* Add use to head of use list for INSN. */
2086 df->insns[uid].uses
2087 = df_link_create (use, df->insns[uid].uses);
2089 return 0;
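/* Both loops above cons a ref onto the head of the appropriate insn
   chain.  df_link_create (defined earlier in this file) does the
   consing; roughly:

     link->ref = ref;
     link->next = next;
     return link;

   with LINK allocated from the ref obstack.  */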
2093 /* Update refs for basic block BB. */
2094 static int
2095 df_bb_refs_update (struct df *df, basic_block bb)
2097 rtx insn;
2098 int count = 0;
2100 /* While we have to scan the chain of insns for this BB, we do not
2101 need to allocate and queue a long chain of BB/INSN pairs. Using
2102 a bitmap for insns_modified saves memory and avoids queuing
2103 duplicates. */
2105 for (insn = bb->head; ; insn = NEXT_INSN (insn))
2107 unsigned int uid;
2109 uid = INSN_UID (insn);
2111 if (bitmap_bit_p (df->insns_modified, uid))
2113 /* Delete any allocated refs of this insn. MPH, FIXME. */
2114 df_insn_refs_unlink (df, bb, insn);
2116 /* Scan the insn for refs. */
2117 df_insn_refs_record (df, bb, insn);
2119 count++;
2121 if (insn == bb->end)
2122 break;
2124 return count;
2128 /* Process all the modified/deleted insns that were queued. */
2129 static int
2130 df_refs_update (struct df *df)
2132 basic_block bb;
2133 int count = 0;
2135 if ((unsigned int) max_reg_num () >= df->reg_size)
2136 df_reg_table_realloc (df, 0);
2138 df_refs_queue (df);
2140 FOR_EACH_BB_IN_BITMAP (df->bbs_modified, 0, bb,
2142 count += df_bb_refs_update (df, bb);
2145 df_refs_process (df);
2146 return count;
2150 /* Return nonzero if any of the requested blocks in the bitmap
2151 BLOCKS have been modified. */
2152 static int
2153 df_modified_p (struct df *df, bitmap blocks)
2155 int update = 0;
2156 basic_block bb;
2158 if (!df->n_bbs)
2159 return 0;
2161 FOR_EACH_BB (bb)
2162 if (bitmap_bit_p (df->bbs_modified, bb->index)
2163 && (! blocks || (blocks == (bitmap) -1) || bitmap_bit_p (blocks, bb->index)))
2165 update = 1;
2166 break;
2169 return update;
2173 /* Analyze dataflow info for the basic blocks specified by the bitmap
2174 BLOCKS, or for the whole CFG if BLOCKS is zero, or just for the
2175 modified blocks if BLOCKS is -1. Return nonzero if any dataflow info was recomputed. */
2176 int
2177 df_analyse (struct df *df, bitmap blocks, int flags)
2179 int update;
2181 /* We could deal with additional basic blocks being created by
2182 rescanning everything again. */
2183 if (df->n_bbs && df->n_bbs != (unsigned int) last_basic_block)
2184 abort ();
2186 update = df_modified_p (df, blocks);
2187 if (update || (flags != df->flags))
2189 if (! blocks)
2191 if (df->n_bbs)
2193 /* Recompute everything from scratch. */
2194 df_free (df);
2196 /* Allocate and initialize data structures. */
2197 df_alloc (df, max_reg_num ());
2198 df_analyse_1 (df, 0, flags, 0);
2199 update = 1;
2201 else
2203 if (blocks == (bitmap) -1)
2204 blocks = df->bbs_modified;
2206 if (! df->n_bbs)
2207 abort ();
2209 df_analyse_1 (df, blocks, flags, 1);
2210 bitmap_zero (df->bbs_modified);
2211 bitmap_zero (df->insns_modified);
2214 return update;
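/* A sketch of the intended incremental use: tinker with an insn,
   mark it modified, then reanalyse only the affected blocks by
   passing -1 for BLOCKS:

     df_insn_modify (df, bb, insn);
     ...
     df_analyse (df, (bitmap) -1, DF_UD_CHAIN | DF_DU_CHAIN);  */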
2218 /* Free all the dataflow info and the DF structure. */
2219 void
2220 df_finish (struct df *df)
2222 df_free (df);
2223 free (df);
2227 /* Unlink INSN from its reference information. */
2228 static void
2229 df_insn_refs_unlink (struct df *df, basic_block bb ATTRIBUTE_UNUSED, rtx insn)
2231 struct df_link *link;
2232 unsigned int uid;
2234 uid = INSN_UID (insn);
2236 /* Unlink all refs defined by this insn. */
2237 for (link = df->insns[uid].defs; link; link = link->next)
2238 df_def_unlink (df, link->ref);
2240 /* Unlink all refs used by this insn. */
2241 for (link = df->insns[uid].uses; link; link = link->next)
2242 df_use_unlink (df, link->ref);
2244 df->insns[uid].defs = 0;
2245 df->insns[uid].uses = 0;
2249 #if 0
2250 /* Unlink all the insns within BB from their reference information. */
2251 static void
2252 df_bb_refs_unlink (struct df *df, basic_block bb)
2254 rtx insn;
2256 /* Scan the block an insn at a time from beginning to end. */
2257 for (insn = bb->head; ; insn = NEXT_INSN (insn))
2259 if (INSN_P (insn))
2261 /* Unlink refs for INSN. */
2262 df_insn_refs_unlink (df, bb, insn);
2264 if (insn == bb->end)
2265 break;
2270 /* Unlink all the refs in the basic blocks specified by BLOCKS.
2271 Not currently used. */
2272 static void
2273 df_refs_unlink (struct df *df, bitmap blocks)
2275 basic_block bb;
2277 if (blocks)
2279 FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
2281 df_bb_refs_unlink (df, bb);
2284 else
2286 FOR_EACH_BB (bb)
2287 df_bb_refs_unlink (df, bb);
2290 #endif
2292 /* Functions to modify insns. */
2295 /* Delete INSN and all its reference information. */
2296 rtx
2297 df_insn_delete (struct df *df, basic_block bb, rtx insn)
2299 /* If the insn is a jump, we should perhaps call delete_insn to
2300 handle the JUMP_LABEL? */
2302 /* We should not be deleting the NOTE_INSN_BASIC_BLOCK or label. */
2303 if (insn == bb->head)
2304 abort ();
2306 /* Delete the insn. */
2307 delete_insn (insn);
2309 df_insn_modify (df, bb, insn);
2311 return NEXT_INSN (insn);
2315 /* Mark that INSN within BB may have changed (created/modified/deleted).
2316 This may be called multiple times for the same insn. There is no
2317 harm calling this function if the insn wasn't changed; it will just
2318 slow down the rescanning of refs. */
2319 void
2320 df_insn_modify (struct df *df, basic_block bb, rtx insn)
2322 unsigned int uid;
2324 uid = INSN_UID (insn);
2325 if (uid >= df->insn_size)
2326 df_insn_table_realloc (df, uid);
2328 bitmap_set_bit (df->bbs_modified, bb->index);
2329 bitmap_set_bit (df->insns_modified, uid);
2331 /* For incremental updating on the fly, perhaps we could make a copy
2332 of all the refs of the original insn and turn them into
2333 anti-refs. When df_refs_update finds these anti-refs, it annihilates
2334 the original refs. If validate_change fails then these anti-refs
2335 will just get ignored. */
2339 typedef struct replace_args
2341 rtx match;
2342 rtx replacement;
2343 rtx insn;
2344 int modified;
2345 } replace_args;
2348 /* Replace mem pointed to by PX with its associated pseudo register.
2349 DATA is actually a pointer to a structure describing the
2350 instruction currently being scanned and the MEM we are currently
2351 replacing. */
2352 static int
2353 df_rtx_mem_replace (rtx *px, void *data)
2355 replace_args *args = (replace_args *) data;
2356 rtx mem = *px;
2358 if (mem == NULL_RTX)
2359 return 0;
2361 switch (GET_CODE (mem))
2363 case MEM:
2364 break;
2366 case CONST_DOUBLE:
2367 /* We're not interested in the MEM associated with a
2368 CONST_DOUBLE, so there's no need to traverse into one. */
2369 return -1;
2371 default:
2372 /* This is not a MEM. */
2373 return 0;
2376 if (!rtx_equal_p (args->match, mem))
2377 /* This is not the MEM we are currently replacing. */
2378 return 0;
2380 /* Actually replace the MEM. */
2381 validate_change (args->insn, px, args->replacement, 1);
2382 args->modified++;
2384 return 0;
2388 int
2389 df_insn_mem_replace (struct df *df, basic_block bb, rtx insn, rtx mem, rtx reg)
2391 replace_args args;
2393 args.insn = insn;
2394 args.match = mem;
2395 args.replacement = reg;
2396 args.modified = 0;
2398 /* Search and replace all matching mems within insn. */
2399 for_each_rtx (&insn, df_rtx_mem_replace, &args);
2401 if (args.modified)
2402 df_insn_modify (df, bb, insn);
2404 /* ???? FIXME. We may have a new def or one or more new uses of REG
2405 in INSN. REG should be a new pseudo so it won't affect the
2406 dataflow information that we currently have. We should add
2407 the new uses and defs to INSN and then recreate the chains
2408 when df_analyse is called. */
2409 return args.modified;
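/* A hedged example of the intended use, where REG is assumed to be a
   fresh pseudo so that the existing dataflow info stays valid (as
   noted above):

     rtx reg = gen_reg_rtx (GET_MODE (mem));
     if (df_insn_mem_replace (df, bb, insn, mem, reg))
       ... emit a load of MEM into REG before INSN ...  */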
2413 /* Replace one register with another. Called through for_each_rtx; PX
2414 points to the rtx being scanned. DATA is actually a pointer to a
2415 structure of arguments. */
2416 static int
2417 df_rtx_reg_replace (rtx *px, void *data)
2419 rtx x = *px;
2420 replace_args *args = (replace_args *) data;
2422 if (x == NULL_RTX)
2423 return 0;
2425 if (x == args->match)
2427 validate_change (args->insn, px, args->replacement, 1);
2428 args->modified++;
2431 return 0;
2435 /* Replace the reg within every ref on CHAIN that is within the set
2436 BLOCKS of basic blocks with NEWREG. Also update the regs within
2437 REG_NOTES. */
2438 void
2439 df_refs_reg_replace (struct df *df, bitmap blocks, struct df_link *chain, rtx oldreg, rtx newreg)
2441 struct df_link *link;
2442 replace_args args;
2444 if (! blocks)
2445 blocks = df->all_blocks;
2447 args.match = oldreg;
2448 args.replacement = newreg;
2449 args.modified = 0;
2451 for (link = chain; link; link = link->next)
2453 struct ref *ref = link->ref;
2454 rtx insn = DF_REF_INSN (ref);
2456 if (! INSN_P (insn))
2457 continue;
2459 if (bitmap_bit_p (blocks, DF_REF_BBNO (ref)))
2461 df_ref_reg_replace (df, ref, oldreg, newreg);
2463 /* Replace occurrences of the reg within the REG_NOTES. */
2464 if ((! link->next || DF_REF_INSN (ref)
2465 != DF_REF_INSN (link->next->ref))
2466 && REG_NOTES (insn))
2468 args.insn = insn;
2469 for_each_rtx (&REG_NOTES (insn), df_rtx_reg_replace, &args);
2472 else
2474 /* Temporary check to ensure that we have a grip on which
2475 regs should be replaced. */
2476 abort ();
2482 /* Replace all occurrences of register OLDREG with register NEWREG in
2483 blocks defined by bitmap BLOCKS. This also replaces occurrences of
2484 OLDREG in the REG_NOTES but only for insns containing OLDREG. This
2485 routine expects the reg-use and reg-def chains to be valid. */
2486 int
2487 df_reg_replace (struct df *df, bitmap blocks, rtx oldreg, rtx newreg)
2489 unsigned int oldregno = REGNO (oldreg);
2491 df_refs_reg_replace (df, blocks, df->regs[oldregno].defs, oldreg, newreg);
2492 df_refs_reg_replace (df, blocks, df->regs[oldregno].uses, oldreg, newreg);
2493 return 1;
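/* Example sketch: replace OLDREG throughout the function (a zero
   BLOCKS selects df->all_blocks).  The replacements are queued via
   validate_change in group mode, so the caller is expected to commit
   them with apply_change_group:

     df_reg_replace (df, NULL, oldreg, newreg);
     if (! apply_change_group ())
       ... the replacement was rejected for some insn ...  */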
2497 /* Try replacing the reg within REF with NEWREG. Do not modify
2498 def-use/use-def chains. */
2499 int
2500 df_ref_reg_replace (struct df *df, struct ref *ref, rtx oldreg, rtx newreg)
2502 /* Check that insn was deleted by being converted into a NOTE. If
2503 so ignore this insn. */
2504 if (! INSN_P (DF_REF_INSN (ref)))
2505 return 0;
2507 if (oldreg && oldreg != DF_REF_REG (ref))
2508 abort ();
2510 if (! validate_change (DF_REF_INSN (ref), DF_REF_LOC (ref), newreg, 1))
2511 return 0;
2513 df_insn_modify (df, DF_REF_BB (ref), DF_REF_INSN (ref));
2514 return 1;
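/* Remove the last use of REGNO in USE_INSN and rebind the first def
   of REGNO in DEF_INSN so that it is recorded against USE_INSN
   instead.  Return the def, or 0 if either ref cannot be found.  */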
2518 struct ref*
2519 df_bb_def_use_swap (struct df *df, basic_block bb, rtx def_insn, rtx use_insn, unsigned int regno)
2521 struct ref *def;
2522 struct ref *use;
2523 int def_uid;
2524 int use_uid;
2525 struct df_link *link;
2527 def = df_bb_insn_regno_first_def_find (df, bb, def_insn, regno);
2528 if (! def)
2529 return 0;
2531 use = df_bb_insn_regno_last_use_find (df, bb, use_insn, regno);
2532 if (! use)
2533 return 0;
2535 /* The USE no longer exists. */
2536 use_uid = INSN_UID (use_insn);
2537 df_use_unlink (df, use);
2538 df_ref_unlink (&df->insns[use_uid].uses, use);
2540 /* The DEF requires shifting so remove it from DEF_INSN
2541 and add it to USE_INSN by reusing LINK. */
2542 def_uid = INSN_UID (def_insn);
2543 link = df_ref_unlink (&df->insns[def_uid].defs, def);
2544 link->ref = def;
2545 link->next = df->insns[use_uid].defs;
2546 df->insns[use_uid].defs = link;
2548 #if 0
2549 link = df_ref_unlink (&df->regs[regno].defs, def);
2550 link->ref = def;
2551 link->next = df->regs[regno].defs;
2552 df->regs[regno].defs = link;
2553 #endif
2555 DF_REF_INSN (def) = use_insn;
2556 return def;
2560 /* Mark the insns between FIRST_INSN and LAST_INSN inclusive as
2561 modified. All new insns must be processed by this routine. */
2562 static void
2563 df_insns_modify (struct df *df, basic_block bb, rtx first_insn, rtx last_insn)
2565 rtx insn;
2567 for (insn = first_insn; ; insn = NEXT_INSN (insn))
2569 unsigned int uid;
2571 /* A non-const call should not have slipped through the net. If
2572 it does, we need to create a new basic block. Ouch. The
2573 same applies for a label. */
2574 if ((GET_CODE (insn) == CALL_INSN
2575 && ! CONST_OR_PURE_CALL_P (insn))
2576 || GET_CODE (insn) == CODE_LABEL)
2577 abort ();
2579 uid = INSN_UID (insn);
2581 if (uid >= df->insn_size)
2582 df_insn_table_realloc (df, uid);
2584 df_insn_modify (df, bb, insn);
2586 if (insn == last_insn)
2587 break;
2592 /* Emit PATTERN before INSN within BB. */
2593 rtx
2594 df_pattern_emit_before (struct df *df, rtx pattern, basic_block bb, rtx insn)
2596 rtx ret_insn;
2597 rtx prev_insn = PREV_INSN (insn);
2599 /* We should not be inserting before the start of the block. */
2600 if (insn == bb->head)
2601 abort ();
2602 ret_insn = emit_insn_before (pattern, insn);
2603 if (ret_insn == insn)
2604 return ret_insn;
2606 df_insns_modify (df, bb, NEXT_INSN (prev_insn), ret_insn);
2607 return ret_insn;
2611 /* Emit PATTERN after INSN within BB. */
2612 rtx
2613 df_pattern_emit_after (struct df *df, rtx pattern, basic_block bb, rtx insn)
2615 rtx ret_insn;
2617 ret_insn = emit_insn_after (pattern, insn);
2618 if (ret_insn == insn)
2619 return ret_insn;
2621 df_insns_modify (df, bb, NEXT_INSN (insn), ret_insn);
2622 return ret_insn;
2626 /* Emit jump PATTERN after INSN within BB. */
2627 rtx
2628 df_jump_pattern_emit_after (struct df *df, rtx pattern, basic_block bb, rtx insn)
2630 rtx ret_insn;
2632 ret_insn = emit_jump_insn_after (pattern, insn);
2633 if (ret_insn == insn)
2634 return ret_insn;
2636 df_insns_modify (df, bb, NEXT_INSN (insn), ret_insn);
2637 return ret_insn;
2641 /* Move INSN within BB before BEFORE_INSN within BEFORE_BB.
2643 This function should only be used to move loop invariant insns
2644 out of a loop where it has been proven that the def-use info
2645 will still be valid. */
2646 rtx
2647 df_insn_move_before (struct df *df, basic_block bb, rtx insn, basic_block before_bb, rtx before_insn)
2649 struct df_link *link;
2650 unsigned int uid;
2652 if (! bb)
2653 return df_pattern_emit_before (df, insn, before_bb, before_insn);
2655 uid = INSN_UID (insn);
2657 /* Change bb for all df defined and used by this insn. */
2658 for (link = df->insns[uid].defs; link; link = link->next)
2659 DF_REF_BB (link->ref) = before_bb;
2660 for (link = df->insns[uid].uses; link; link = link->next)
2661 DF_REF_BB (link->ref) = before_bb;
2663 /* The lifetimes of the registers used in this insn will be reduced
2664 while the lifetimes of the registers defined in this insn
2665 are likely to be increased. */
2667 /* ???? Perhaps all the insns moved should be stored on a list
2668 which df_analyse removes when it recalculates data flow. */
2670 return emit_insn_before (insn, before_insn);
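/* Example sketch (PRE_BB is a hypothetical preheader block): hoist a
   loop invariant INSN out of loop block BB to just before the end of
   the preheader:

     df_insn_move_before (df, bb, insn, pre_bb, pre_bb->end);  */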
2673 /* Functions to query dataflow information. */
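/* Return nonzero if INSN defines REGNO.  */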
2676 int
2677 df_insn_regno_def_p (struct df *df, basic_block bb ATTRIBUTE_UNUSED,
2678 rtx insn, unsigned int regno)
2680 unsigned int uid;
2681 struct df_link *link;
2683 uid = INSN_UID (insn);
2685 for (link = df->insns[uid].defs; link; link = link->next)
2687 struct ref *def = link->ref;
2689 if (DF_REF_REGNO (def) == regno)
2690 return 1;
2693 return 0;
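/* Return nonzero if DEF is the only def reaching each of its uses,
   i.e. every use on DEF's def-use chain has a singleton use-def
   chain pointing back at DEF.  */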
2697 static int
2698 df_def_dominates_all_uses_p (struct df *df ATTRIBUTE_UNUSED, struct ref *def)
2700 struct df_link *du_link;
2702 /* Follow def-use chain to find all the uses of this def. */
2703 for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
2705 struct ref *use = du_link->ref;
2706 struct df_link *ud_link;
2708 /* Follow use-def chain to check all the defs for this use. */
2709 for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
2710 if (ud_link->ref != def)
2711 return 0;
2713 return 1;
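/* Return nonzero if every def within INSN dominates all its uses.  */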
2717 int
2718 df_insn_dominates_all_uses_p (struct df *df, basic_block bb ATTRIBUTE_UNUSED,
2719 rtx insn)
2721 unsigned int uid;
2722 struct df_link *link;
2724 uid = INSN_UID (insn);
2726 for (link = df->insns[uid].defs; link; link = link->next)
2728 struct ref *def = link->ref;
2730 if (! df_def_dominates_all_uses_p (df, def))
2731 return 0;
2734 return 1;
2738 /* Return nonzero if DEF dominates all the uses within the bitmap
2739 BLOCKS. */
2740 static int
2741 df_def_dominates_uses_p (struct df *df ATTRIBUTE_UNUSED, struct ref *def,
2742 bitmap blocks)
2744 struct df_link *du_link;
2746 /* Follow def-use chain to find all the uses of this def. */
2747 for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
2749 struct ref *use = du_link->ref;
2750 struct df_link *ud_link;
2752 /* Only worry about the uses within BLOCKS. For example,
2753 consider a register defined within a loop that is live at the
2754 loop exits. */
2755 if (bitmap_bit_p (blocks, DF_REF_BBNO (use)))
2757 /* Follow use-def chain to check all the defs for this use. */
2758 for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
2759 if (ud_link->ref != def)
2760 return 0;
2763 return 1;
2767 /* Return nonzero if all the defs of INSN within BB dominate all
2768 the corresponding uses. */
2769 int
2770 df_insn_dominates_uses_p (struct df *df, basic_block bb ATTRIBUTE_UNUSED,
2771 rtx insn, bitmap blocks)
2773 unsigned int uid;
2774 struct df_link *link;
2776 uid = INSN_UID (insn);
2778 for (link = df->insns[uid].defs; link; link = link->next)
2780 struct ref *def = link->ref;
2782 /* Only consider the defs within BLOCKS. */
2783 if (bitmap_bit_p (blocks, DF_REF_BBNO (def))
2784 && ! df_def_dominates_uses_p (df, def, blocks))
2785 return 0;
2787 return 1;
2791 /* Return the basic block that REG is referenced in, or NULL if it is
2792 referenced in multiple basic blocks. */
2793 basic_block
2794 df_regno_bb (struct df *df, unsigned int regno)
2796 struct df_link *defs = df->regs[regno].defs;
2797 struct df_link *uses = df->regs[regno].uses;
2798 struct ref *def = defs ? defs->ref : 0;
2799 struct ref *use = uses ? uses->ref : 0;
2800 basic_block bb_def = def ? DF_REF_BB (def) : 0;
2801 basic_block bb_use = use ? DF_REF_BB (use) : 0;
2803 /* Compare blocks of first def and last use. ???? FIXME. What if
2804 the reg-def and reg-use lists are not correctly ordered? */
2805 return bb_def == bb_use ? bb_def : 0;
2809 /* Return nonzero if REG is used in multiple basic blocks. */
2810 int
2811 df_reg_global_p (struct df *df, rtx reg)
2813 return df_regno_bb (df, REGNO (reg)) == 0;
2817 /* Return total lifetime (in insns) of REG. */
2818 int
2819 df_reg_lifetime (struct df *df, rtx reg)
2821 return df->regs[REGNO (reg)].lifetime;
2825 /* Return nonzero if REG live at start of BB. */
2826 int
2827 df_bb_reg_live_start_p (struct df *df, basic_block bb, rtx reg)
2829 struct bb_info *bb_info = DF_BB_INFO (df, bb);
2831 #ifdef ENABLE_CHECKING
2832 if (! bb_info->lr_in)
2833 abort ();
2834 #endif
2836 return bitmap_bit_p (bb_info->lr_in, REGNO (reg));
2840 /* Return nonzero if REG live at end of BB. */
2841 int
2842 df_bb_reg_live_end_p (struct df *df, basic_block bb, rtx reg)
2844 struct bb_info *bb_info = DF_BB_INFO (df, bb);
2846 #ifdef ENABLE_CHECKING
2847 if (! bb_info->lr_in)
2848 abort ();
2849 #endif
2851 return bitmap_bit_p (bb_info->lr_out, REGNO (reg));
2855 /* Return -1 if the life of REG1 ends before the life of REG2 begins,
2856 1 if it begins after the life of REG2 ends, or 0 if the lives overlap. */
2857 int
2858 df_bb_regs_lives_compare (struct df *df, basic_block bb, rtx reg1, rtx reg2)
2860 unsigned int regno1 = REGNO (reg1);
2861 unsigned int regno2 = REGNO (reg2);
2862 struct ref *def1;
2863 struct ref *use1;
2864 struct ref *def2;
2865 struct ref *use2;
2868 /* The regs must be local to BB. */
2869 if (df_regno_bb (df, regno1) != bb
2870 || df_regno_bb (df, regno2) != bb)
2871 abort ();
2873 def2 = df_bb_regno_first_def_find (df, bb, regno2);
2874 use1 = df_bb_regno_last_use_find (df, bb, regno1);
2876 if (DF_INSN_LUID (df, DF_REF_INSN (def2))
2877 > DF_INSN_LUID (df, DF_REF_INSN (use1)))
2878 return -1;
2880 def1 = df_bb_regno_first_def_find (df, bb, regno1);
2881 use2 = df_bb_regno_last_use_find (df, bb, regno2);
2883 if (DF_INSN_LUID (df, DF_REF_INSN (def1))
2884 > DF_INSN_LUID (df, DF_REF_INSN (use2)))
2885 return 1;
2887 return 0;
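/* For example, with LUIDs increasing down the block: if the last use
   of REG1 has LUID 4 and the first def of REG2 has LUID 7, REG1's
   life ends before REG2's begins and the result is -1.  */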
2891 /* Return last use of REGNO within BB. */
2892 static struct ref *
2893 df_bb_regno_last_use_find (struct df *df, basic_block bb, unsigned int regno)
2895 struct df_link *link;
2897 /* This assumes that the reg-use list is ordered such that for any
2898 BB, the last use is found first. However, since the BBs are not
2899 ordered, the first use in the chain is not necessarily the last
2900 use in the function. */
2901 for (link = df->regs[regno].uses; link; link = link->next)
2903 struct ref *use = link->ref;
2905 if (DF_REF_BB (use) == bb)
2906 return use;
2908 return 0;
2912 /* Return first def of REGNO within BB. */
2913 static struct ref *
2914 df_bb_regno_first_def_find (struct df *df, basic_block bb, unsigned int regno)
2916 struct df_link *link;
2918 /* This assumes that the reg-def list is ordered such that for any
2919 BB, the first def is found first. However, since the BBs are not
2920 ordered, the first def in the chain is not necessarily the first
2921 def in the function. */
2922 for (link = df->regs[regno].defs; link; link = link->next)
2924 struct ref *def = link->ref;
2926 if (DF_REF_BB (def) == bb)
2927 return def;
2929 return 0;
2933 /* Return the last use of REGNO inside INSN within BB. */
2934 static struct ref *
2935 df_bb_insn_regno_last_use_find (struct df *df,
2936 basic_block bb ATTRIBUTE_UNUSED, rtx insn,
2937 unsigned int regno)
2939 unsigned int uid;
2940 struct df_link *link;
2942 uid = INSN_UID (insn);
2944 for (link = df->insns[uid].uses; link; link = link->next)
2946 struct ref *use = link->ref;
2948 if (DF_REF_REGNO (use) == regno)
2949 return use;
2952 return 0;
2956 /* Return first def of REGNO inside INSN within BB. */
2957 static struct ref *
2958 df_bb_insn_regno_first_def_find (struct df *df,
2959 basic_block bb ATTRIBUTE_UNUSED, rtx insn,
2960 unsigned int regno)
2962 unsigned int uid;
2963 struct df_link *link;
2965 uid = INSN_UID (insn);
2967 for (link = df->insns[uid].defs; link; link = link->next)
2969 struct ref *def = link->ref;
2971 if (DF_REF_REGNO (def) == regno)
2972 return def;
2975 return 0;
2979 /* Return the insn using REG if the BB contains only a single
2980 use and def of REG; the def must occur in INSN. */
2981 rtx
2982 df_bb_single_def_use_insn_find (struct df *df, basic_block bb, rtx insn, rtx reg)
2984 struct ref *def;
2985 struct ref *use;
2986 struct df_link *du_link;
2988 def = df_bb_insn_regno_first_def_find (df, bb, insn, REGNO (reg));
2990 if (! def)
2991 abort ();
2993 du_link = DF_REF_CHAIN (def);
2995 if (! du_link)
2996 return NULL_RTX;
2998 use = du_link->ref;
3000 /* Check if def is dead. */
3001 if (! use)
3002 return NULL_RTX;
3004 /* Check for multiple uses. */
3005 if (du_link->next)
3006 return NULL_RTX;
3008 return DF_REF_INSN (use);
3011 /* Functions for debugging/dumping dataflow information. */
3014 /* Dump a def-use or use-def chain for REF to FILE. */
3015 static void
3016 df_chain_dump (struct df_link *link, FILE *file)
3018 fprintf (file, "{ ");
3019 for (; link; link = link->next)
3021 fprintf (file, "%c%d ",
3022 DF_REF_REG_DEF_P (link->ref) ? 'd' : 'u',
3023 DF_REF_ID (link->ref));
3025 fprintf (file, "}");
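/* The output looks like "{ d1 u4 u7 }": one tagged ref id per chain
   element, 'd' for defs and 'u' for uses.  */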
3029 /* Dump a chain of refs with the associated regno. */
3030 static void
3031 df_chain_dump_regno (struct df_link *link, FILE *file)
3033 fprintf (file, "{ ");
3034 for (; link; link = link->next)
3036 fprintf (file, "%c%d(%d) ",
3037 DF_REF_REG_DEF_P (link->ref) ? 'd' : 'u',
3038 DF_REF_ID (link->ref),
3039 DF_REF_REGNO (link->ref));
3041 fprintf (file, "}");
3045 /* Dump dataflow info. */
3046 void
3047 df_dump (struct df *df, int flags, FILE *file)
3049 unsigned int j;
3050 basic_block bb;
3052 if (! df || ! file)
3053 return;
3055 fprintf (file, "\nDataflow summary:\n");
3056 fprintf (file, "n_regs = %d, n_defs = %d, n_uses = %d, n_bbs = %d\n",
3057 df->n_regs, df->n_defs, df->n_uses, df->n_bbs);
3059 if (flags & DF_RD)
3061 basic_block bb;
3063 fprintf (file, "Reaching defs:\n");
3064 FOR_EACH_BB (bb)
3066 struct bb_info *bb_info = DF_BB_INFO (df, bb);
3068 if (! bb_info->rd_in)
3069 continue;
3071 fprintf (file, "bb %d in \t", bb->index);
3072 dump_bitmap (file, bb_info->rd_in);
3073 fprintf (file, "bb %d gen \t", bb->index);
3074 dump_bitmap (file, bb_info->rd_gen);
3075 fprintf (file, "bb %d kill\t", bb->index);
3076 dump_bitmap (file, bb_info->rd_kill);
3077 fprintf (file, "bb %d out \t", bb->index);
3078 dump_bitmap (file, bb_info->rd_out);
3082 if (flags & DF_UD_CHAIN)
3084 fprintf (file, "Use-def chains:\n");
3085 for (j = 0; j < df->n_defs; j++)
3087 if (df->defs[j])
3089 fprintf (file, "d%d bb %d luid %d insn %d reg %d ",
3090 j, DF_REF_BBNO (df->defs[j]),
3091 DF_INSN_LUID (df, DF_REF_INSN (df->defs[j])),
3092 DF_REF_INSN_UID (df->defs[j]),
3093 DF_REF_REGNO (df->defs[j]));
3094 if (df->defs[j]->flags & DF_REF_READ_WRITE)
3095 fprintf (file, "read/write ");
3096 df_chain_dump (DF_REF_CHAIN (df->defs[j]), file);
3097 fprintf (file, "\n");
3102 if (flags & DF_RU)
3104 fprintf (file, "Reaching uses:\n");
3105 FOR_EACH_BB (bb)
3107 struct bb_info *bb_info = DF_BB_INFO (df, bb);
3109 if (! bb_info->ru_in)
3110 continue;
3112 fprintf (file, "bb %d in \t", bb->index);
3113 dump_bitmap (file, bb_info->ru_in);
3114 fprintf (file, "bb %d gen \t", bb->index);
3115 dump_bitmap (file, bb_info->ru_gen);
3116 fprintf (file, "bb %d kill\t", bb->index);
3117 dump_bitmap (file, bb_info->ru_kill);
3118 fprintf (file, "bb %d out \t", bb->index);
3119 dump_bitmap (file, bb_info->ru_out);
3123 if (flags & DF_DU_CHAIN)
3125 fprintf (file, "Def-use chains:\n");
3126 for (j = 0; j < df->n_uses; j++)
3128 if (df->uses[j])
3130 fprintf (file, "u%d bb %d luid %d insn %d reg %d ",
3131 j, DF_REF_BBNO (df->uses[j]),
3132 DF_INSN_LUID (df, DF_REF_INSN (df->uses[j])),
3133 DF_REF_INSN_UID (df->uses[j]),
3134 DF_REF_REGNO (df->uses[j]));
3135 if (df->uses[j]->flags & DF_REF_READ_WRITE)
3136 fprintf (file, "read/write ");
3137 df_chain_dump (DF_REF_CHAIN (df->uses[j]), file);
3138 fprintf (file, "\n");
3143 if (flags & DF_LR)
3145 fprintf (file, "Live regs:\n");
3146 FOR_EACH_BB (bb)
3148 struct bb_info *bb_info = DF_BB_INFO (df, bb);
3150 if (! bb_info->lr_in)
3151 continue;
3153 fprintf (file, "bb %d in \t", bb->index);
3154 dump_bitmap (file, bb_info->lr_in);
3155 fprintf (file, "bb %d use \t", bb->index);
3156 dump_bitmap (file, bb_info->lr_use);
3157 fprintf (file, "bb %d def \t", bb->index);
3158 dump_bitmap (file, bb_info->lr_def);
3159 fprintf (file, "bb %d out \t", bb->index);
3160 dump_bitmap (file, bb_info->lr_out);
3164 if (flags & (DF_REG_INFO | DF_RD_CHAIN | DF_RU_CHAIN))
3166 struct reg_info *reg_info = df->regs;
3168 fprintf (file, "Register info:\n");
3169 for (j = 0; j < df->n_regs; j++)
3171 if (((flags & DF_REG_INFO)
3172 && (reg_info[j].n_uses || reg_info[j].n_defs))
3173 || ((flags & DF_RD_CHAIN) && reg_info[j].defs)
3174 || ((flags & DF_RU_CHAIN) && reg_info[j].uses))
3176 fprintf (file, "reg %d", j);
3177 if ((flags & DF_RD_CHAIN) && (flags & DF_RU_CHAIN))
3179 basic_block bb = df_regno_bb (df, j);
3181 if (bb)
3182 fprintf (file, " bb %d", bb->index);
3183 else
3184 fprintf (file, " bb ?");
3186 if (flags & DF_REG_INFO)
3188 fprintf (file, " life %d", reg_info[j].lifetime);
3191 if ((flags & DF_REG_INFO) || (flags & DF_RD_CHAIN))
3193 fprintf (file, " defs ");
3194 if (flags & DF_REG_INFO)
3195 fprintf (file, "%d ", reg_info[j].n_defs);
3196 if (flags & DF_RD_CHAIN)
3197 df_chain_dump (reg_info[j].defs, file);
3200 if ((flags & DF_REG_INFO) || (flags & DF_RU_CHAIN))
3202 fprintf (file, " uses ");
3203 if (flags & DF_REG_INFO)
3204 fprintf (file, "%d ", reg_info[j].n_uses);
3205 if (flags & DF_RU_CHAIN)
3206 df_chain_dump (reg_info[j].uses, file);
3209 fprintf (file, "\n");
3213 fprintf (file, "\n");
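/* Dump the def and use chains of INSN to FILE.  */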
3217 void
3218 df_insn_debug (struct df *df, rtx insn, FILE *file)
3220 unsigned int uid;
3221 int bbi;
3223 uid = INSN_UID (insn);
3224 if (uid >= df->insn_size)
3225 return;
3227 if (df->insns[uid].defs)
3228 bbi = DF_REF_BBNO (df->insns[uid].defs->ref);
3229 else if (df->insns[uid].uses)
3230 bbi = DF_REF_BBNO (df->insns[uid].uses->ref);
3231 else
3232 bbi = -1;
3234 fprintf (file, "insn %d bb %d luid %d defs ",
3235 uid, bbi, DF_INSN_LUID (df, insn));
3236 df_chain_dump (df->insns[uid].defs, file);
3237 fprintf (file, " uses ");
3238 df_chain_dump (df->insns[uid].uses, file);
3239 fprintf (file, "\n");
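/* As df_insn_debug, but also print the regno of each ref.  */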
3243 void
3244 df_insn_debug_regno (struct df *df, rtx insn, FILE *file)
3246 unsigned int uid;
3247 int bbi;
3249 uid = INSN_UID (insn);
3250 if (uid >= df->insn_size)
3251 return;
3253 if (df->insns[uid].defs)
3254 bbi = DF_REF_BBNO (df->insns[uid].defs->ref);
3255 else if (df->insns[uid].uses)
3256 bbi = DF_REF_BBNO (df->insns[uid].uses->ref);
3257 else
3258 bbi = -1;
3260 fprintf (file, "insn %d bb %d luid %d defs ",
3261 uid, bbi, DF_INSN_LUID (df, insn));
3262 df_chain_dump_regno (df->insns[uid].defs, file);
3263 fprintf (file, " uses ");
3264 df_chain_dump_regno (df->insns[uid].uses, file);
3265 fprintf (file, "\n");
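/* Dump the def and use chains of register REGNO to FILE.  */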
3269 static void
3270 df_regno_debug (struct df *df, unsigned int regno, FILE *file)
3272 if (regno >= df->reg_size)
3273 return;
3275 fprintf (file, "reg %d life %d defs ",
3276 regno, df->regs[regno].lifetime);
3277 df_chain_dump (df->regs[regno].defs, file);
3278 fprintf (file, " uses ");
3279 df_chain_dump (df->regs[regno].uses, file);
3280 fprintf (file, "\n");
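/* Dump REF and its def-use or use-def chain to FILE.  */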
3284 static void
3285 df_ref_debug (struct df *df, struct ref *ref, FILE *file)
3287 fprintf (file, "%c%d ",
3288 DF_REF_REG_DEF_P (ref) ? 'd' : 'u',
3289 DF_REF_ID (ref));
3290 fprintf (file, "reg %d bb %d luid %d insn %d chain ",
3291 DF_REF_REGNO (ref),
3292 DF_REF_BBNO (ref),
3293 DF_INSN_LUID (df, DF_REF_INSN (ref)),
3294 INSN_UID (DF_REF_INSN (ref)));
3295 df_chain_dump (DF_REF_CHAIN (ref), file);
3296 fprintf (file, "\n");
3299 /* Functions for debugging from GDB. */
3301 void
3302 debug_df_insn (rtx insn)
3304 df_insn_debug (ddf, insn, stderr);
3305 debug_rtx (insn);
3309 void
3310 debug_df_reg (rtx reg)
3312 df_regno_debug (ddf, REGNO (reg), stderr);
3316 void
3317 debug_df_regno (unsigned int regno)
3319 df_regno_debug (ddf, regno, stderr);
3323 void
3324 debug_df_ref (struct ref *ref)
3326 df_ref_debug (ddf, ref, stderr);
3330 void
3331 debug_df_defno (unsigned int defno)
3333 df_ref_debug (ddf, ddf->defs[defno], stderr);
3337 void
3338 debug_df_useno (unsigned int useno)
3340 df_ref_debug (ddf, ddf->uses[useno], stderr);
3344 void
3345 debug_df_chain (struct df_link *link)
3347 df_chain_dump (link, stderr);
3348 fputc ('\n', stderr);
3352 /* Hybrid search algorithm from "Implementation Techniques for
3353 Efficient Data-Flow Analysis of Large Programs". */
3354 static void
3355 hybrid_search_bitmap (basic_block block, bitmap *in, bitmap *out, bitmap *gen,
3356 bitmap *kill, enum df_flow_dir dir,
3357 enum df_confluence_op conf_op,
3358 transfer_function_bitmap transfun, sbitmap visited,
3359 sbitmap pending, void *data)
3361 int changed;
3362 int i = block->index;
3363 edge e;
3364 basic_block bb = block;
3366 SET_BIT (visited, block->index);
3367 if (TEST_BIT (pending, block->index))
3369 if (dir == DF_FORWARD)
3371 /* Calculate <conf_op> of predecessor_outs. */
3372 bitmap_zero (in[i]);
3373 for (e = bb->pred; e != 0; e = e->pred_next)
3375 if (e->src == ENTRY_BLOCK_PTR)
3376 continue;
3377 switch (conf_op)
3379 case DF_UNION:
3380 bitmap_a_or_b (in[i], in[i], out[e->src->index]);
3381 break;
3382 case DF_INTERSECTION:
3383 bitmap_a_and_b (in[i], in[i], out[e->src->index]);
3384 break;
3388 else
3390 /* Calculate <conf_op> of successor ins. */
3391 bitmap_zero (out[i]);
3392 for (e = bb->succ; e != 0; e = e->succ_next)
3394 if (e->dest == EXIT_BLOCK_PTR)
3395 continue;
3396 switch (conf_op)
3398 case DF_UNION:
3399 bitmap_a_or_b (out[i], out[i], in[e->dest->index]);
3400 break;
3401 case DF_INTERSECTION:
3402 bitmap_a_and_b (out[i], out[i], in[e->dest->index]);
3403 break;
3407 /* Common part. */
3408 (*transfun)(i, &changed, in[i], out[i], gen[i], kill[i], data);
3409 RESET_BIT (pending, i);
3410 if (changed)
3412 if (dir == DF_FORWARD)
3414 for (e = bb->succ; e != 0; e = e->succ_next)
3416 if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
3417 continue;
3418 SET_BIT (pending, e->dest->index);
3421 else
3423 for (e = bb->pred; e != 0; e = e->pred_next)
3425 if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
3426 continue;
3427 SET_BIT (pending, e->src->index);
3432 if (dir == DF_FORWARD)
3434 for (e = bb->succ; e != 0; e = e->succ_next)
3436 if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
3437 continue;
3438 if (!TEST_BIT (visited, e->dest->index))
3439 hybrid_search_bitmap (e->dest, in, out, gen, kill, dir,
3440 conf_op, transfun, visited, pending,
3441 data);
3444 else
3446 for (e = bb->pred; e != 0; e = e->pred_next)
3448 if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
3449 continue;
3450 if (!TEST_BIT (visited, e->src->index))
3451 hybrid_search_bitmap (e->src, in, out, gen, kill, dir,
3452 conf_op, transfun, visited, pending,
3453 data);
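/* In outline: for each pending block the routine above applies the
   confluence operation over the neighbours against the flow
   direction, runs the transfer function, re-queues the affected
   neighbours whenever the transfer function reports a change, and
   then recurses depth-first into unvisited neighbours along the
   flow direction.  */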
3459 /* Hybrid search for sbitmaps, rather than bitmaps. */
3460 static void
3461 hybrid_search_sbitmap (basic_block block, sbitmap *in, sbitmap *out,
3462 sbitmap *gen, sbitmap *kill, enum df_flow_dir dir,
3463 enum df_confluence_op conf_op,
3464 transfer_function_sbitmap transfun, sbitmap visited,
3465 sbitmap pending, void *data)
3467 int changed;
3468 int i = block->index;
3469 edge e;
3470 basic_block bb = block;
3472 SET_BIT (visited, block->index);
3473 if (TEST_BIT (pending, block->index))
3475 if (dir == DF_FORWARD)
3477 /* Calculate <conf_op> of predecessor_outs. */
3478 sbitmap_zero (in[i]);
3479 for (e = bb->pred; e != 0; e = e->pred_next)
3481 if (e->src == ENTRY_BLOCK_PTR)
3482 continue;
3483 switch (conf_op)
3485 case DF_UNION:
3486 sbitmap_a_or_b (in[i], in[i], out[e->src->index]);
3487 break;
3488 case DF_INTERSECTION:
3489 sbitmap_a_and_b (in[i], in[i], out[e->src->index]);
3490 break;
3494 else
3496 /* Calculate <conf_op> of successor ins. */
3497 sbitmap_zero (out[i]);
3498 for (e = bb->succ; e != 0; e = e->succ_next)
3500 if (e->dest == EXIT_BLOCK_PTR)
3501 continue;
3502 switch (conf_op)
3504 case DF_UNION:
3505 sbitmap_a_or_b (out[i], out[i], in[e->dest->index]);
3506 break;
3507 case DF_INTERSECTION:
3508 sbitmap_a_and_b (out[i], out[i], in[e->dest->index]);
3509 break;
3513 /* Common part. */
3514 (*transfun)(i, &changed, in[i], out[i], gen[i], kill[i], data);
3515 RESET_BIT (pending, i);
3516 if (changed)
3518 if (dir == DF_FORWARD)
3520 for (e = bb->succ; e != 0; e = e->succ_next)
3522 if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
3523 continue;
3524 SET_BIT (pending, e->dest->index);
3527 else
3529 for (e = bb->pred; e != 0; e = e->pred_next)
3531 if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
3532 continue;
3533 SET_BIT (pending, e->src->index);
3538 if (dir == DF_FORWARD)
3540 for (e = bb->succ; e != 0; e = e->succ_next)
3542 if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
3543 continue;
3544 if (!TEST_BIT (visited, e->dest->index))
3545 hybrid_search_sbitmap (e->dest, in, out, gen, kill, dir,
3546 conf_op, transfun, visited, pending,
3547 data);
3550 else
3552 for (e = bb->pred; e != 0; e = e->pred_next)
3554 if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
3555 continue;
3556 if (!TEST_BIT (visited, e->src->index))
3557 hybrid_search_sbitmap (e->src, in, out, gen, kill, dir,
3558 conf_op, transfun, visited, pending,
3559 data);
3565 /* gen = GEN set.
3566 kill = KILL set.
3567 in, out = Filled in by function.
3568 blocks = Blocks to analyze.
3569 dir = Dataflow direction.
3570 conf_op = Confluence operation.
3571 transfun = Transfer function.
3572 order = Order to iterate in. (Should map block numbers -> order)
3573 data = Whatever you want. It's passed to the transfer function.
3575 This function will perform iterative bitvector dataflow, producing
3576 the in and out sets. Even if you only want to perform it for a
3577 small number of blocks, the vectors for in and out must be large
3578 enough for *all* blocks, because changing one block might affect
3579 others. However, it'll only put what you say to analyze on the
3580 initial worklist.
3582 For forward problems, you probably want to pass in a mapping of
3583 block number to rc_order (like df->inverse_rc_map). */
3585 void
3586 iterative_dataflow_sbitmap (sbitmap *in, sbitmap *out, sbitmap *gen,
3587 sbitmap *kill, bitmap blocks,
3588 enum df_flow_dir dir,
3589 enum df_confluence_op conf_op,
3590 transfer_function_sbitmap transfun, int *order,
3591 void *data)
3593 int i;
3594 fibheap_t worklist;
3595 basic_block bb;
3596 sbitmap visited, pending;
3598 pending = sbitmap_alloc (last_basic_block);
3599 visited = sbitmap_alloc (last_basic_block);
3600 sbitmap_zero (pending);
3601 sbitmap_zero (visited);
3602 worklist = fibheap_new ();
3604 EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
3606 fibheap_insert (worklist, order[i], (void *) (size_t) i);
3607 SET_BIT (pending, i);
3608 if (dir == DF_FORWARD)
3609 sbitmap_copy (out[i], gen[i]);
3610 else
3611 sbitmap_copy (in[i], gen[i]);
3614 while (sbitmap_first_set_bit (pending) != -1)
3616 while (!fibheap_empty (worklist))
3618 i = (size_t) fibheap_extract_min (worklist);
3619 bb = BASIC_BLOCK (i);
3620 if (!TEST_BIT (visited, bb->index))
3621 hybrid_search_sbitmap (bb, in, out, gen, kill, dir,
3622 conf_op, transfun, visited, pending, data);
3625 if (sbitmap_first_set_bit (pending) != -1)
3627 EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
3629 fibheap_insert (worklist, order[i], (void *) (size_t) i);
3631 sbitmap_zero (visited);
3633 else
3635 break;
3639 sbitmap_free (pending);
3640 sbitmap_free (visited);
3641 fibheap_delete (worklist);
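/* A hedged usage sketch for a forward union problem over all blocks,
   where TRANSFER_FN is the caller's transfer_function_sbitmap and
   in/out/gen/kill are per-block sbitmaps sized as described above:

     iterative_dataflow_sbitmap (in, out, gen, kill, df->all_blocks,
                                 DF_FORWARD, DF_UNION, transfer_fn,
                                 df->inverse_rc_map, NULL);  */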
3645 /* Exactly the same as iterative_dataflow_sbitmap, except it works on
3646 bitmaps instead. */
3647 void
3648 iterative_dataflow_bitmap (bitmap *in, bitmap *out, bitmap *gen, bitmap *kill,
3649 bitmap blocks, enum df_flow_dir dir,
3650 enum df_confluence_op conf_op,
3651 transfer_function_bitmap transfun, int *order,
3652 void *data)
3654 int i;
3655 fibheap_t worklist;
3656 basic_block bb;
3657 sbitmap visited, pending;
3659 pending = sbitmap_alloc (last_basic_block);
3660 visited = sbitmap_alloc (last_basic_block);
3661 sbitmap_zero (pending);
3662 sbitmap_zero (visited);
3663 worklist = fibheap_new ();
3665 EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
3667 fibheap_insert (worklist, order[i], (void *) (size_t) i);
3668 SET_BIT (pending, i);
3669 if (dir == DF_FORWARD)
3670 bitmap_copy (out[i], gen[i]);
3671 else
3672 bitmap_copy (in[i], gen[i]);
3675 while (sbitmap_first_set_bit (pending) != -1)
3677 while (!fibheap_empty (worklist))
3679 i = (size_t) fibheap_extract_min (worklist);
3680 bb = BASIC_BLOCK (i);
3681 if (!TEST_BIT (visited, bb->index))
3682 hybrid_search_bitmap (bb, in, out, gen, kill, dir,
3683 conf_op, transfun, visited, pending, data);
3686 if (sbitmap_first_set_bit (pending) != -1)
3688 EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
3690 fibheap_insert (worklist, order[i], (void *) (size_t) i);
3692 sbitmap_zero (visited);
3694 else
3696 break;
3699 sbitmap_free (pending);
3700 sbitmap_free (visited);
3701 fibheap_delete (worklist);