2015-08-04 Thomas Preud'homme <thomas.preudhomme@arm.com>
[official-gcc.git] / gcc / wide-int.h
/* Operations with very long integers.  -*- C++ -*-
   Copyright (C) 2012-2015 Free Software Foundation, Inc.

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the
Free Software Foundation; either version 3, or (at your option) any
later version.

GCC is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */
#ifndef WIDE_INT_H
#define WIDE_INT_H
/* wide-int.[cc|h] implements a class that efficiently performs
   mathematical operations on finite precision integers.  wide_ints
   are designed to be transient - they are not for long term storage
   of values.  There is tight integration between wide_ints and the
   other longer storage GCC representations (rtl and tree).

   The actual precision of a wide_int depends on the flavor.  There
   are three predefined flavors:

     1) wide_int (the default).  This flavor does the math in the
     precision of its input arguments.  It is assumed (and checked)
     that the precisions of the operands and results are consistent.
     This is the most efficient flavor.  It is not possible to examine
     bits above the precision that has been specified.  Because of
     this, the default flavor has semantics that are simple to
     understand and in general model the underlying hardware that the
     compiler is targeted for.

     This flavor must be used at the RTL level of gcc because there
     is, in general, not enough information in the RTL representation
     to extend a value beyond the precision specified in the mode.

     This flavor should also be used at the TREE and GIMPLE levels of
     the compiler except for the circumstances described in the
     descriptions of the other two flavors.

     The default wide_int representation does not contain any
     inherent information about the signedness of the represented value,
     so it can be used to represent both signed and unsigned numbers.
     For operations where the results depend on signedness (full width
     multiply, division, shifts, comparisons, and operations that need
     overflow detected), the signedness must be specified separately.

     2) offset_int.  This is a fixed-size representation that is
     guaranteed to be large enough to compute any bit or byte sized
     address calculation on the target.  Currently the value is 64 + 4
     bits rounded up to the next even multiple of
     HOST_BITS_PER_WIDE_INT (but this can be changed when the first
     port needs more than 64 bits for the size of a pointer).

     This flavor can be used for all address math on the target.  In
     this representation, the values are sign or zero extended based
     on their input types to the internal precision.  All math is done
     in this precision and then the values are truncated to fit in the
     result type.  Unlike most gimple or rtl intermediate code, it is
     not useful to perform the address arithmetic at the same
     precision in which the operands are represented because there has
     been no effort by the front ends to convert most addressing
     arithmetic to canonical types.

     3) widest_int.  This representation is an approximation of
     infinite precision math.  However, it is not really infinite
     precision math as in the GMP library.  It is really finite
     precision math where the precision is 4 times the size of the
     largest integer that the target port can represent.

     widest_int is supposed to be wider than any number that it needs to
     store, meaning that there is always at least one leading sign bit.
     All widest_int values are therefore signed.

     There are several places in GCC where this should/must be used:

       * Code that does induction variable optimizations.  This code
       works with induction variables of many different types at the
       same time.  Because of this, it ends up doing many different
       calculations where the operands are not compatible types.  The
       widest_int makes this easy, because it provides a field where
       nothing is lost when converting from any variable.

       * There are a small number of passes that currently use the
       widest_int that should use the default.  These should be
       changed.

   There are surprising features of offset_int and widest_int
   that the users should be careful about:

     1) Shifts and rotations are just weird.  You have to specify a
     precision in which the shift or rotate is to happen.  The bits
     above this precision are zeroed.  While this is what you
     want, it is clearly non-obvious.

     2) Larger precision math sometimes does not produce the same
     answer as would be expected for doing the math at the proper
     precision.  In particular, a multiply followed by a divide will
     produce a different answer if the first product is larger than
     what can be represented in the input precision.

   The offset_int and the widest_int flavors are more expensive
   than the default wide_int, so in addition to the caveats with these
   two, the default is the preferred representation.

   All three flavors of wide_int are represented as a vector of
   HOST_WIDE_INTs.  The default and widest_int vectors contain enough elements
   to hold a value of MAX_BITSIZE_MODE_ANY_INT bits.  offset_int contains only
   enough elements to hold ADDR_MAX_PRECISION bits.  The values are stored
   in the vector with the least significant HOST_BITS_PER_WIDE_INT bits
   in element 0.

   The default wide_int contains three fields: the vector (VAL),
   the precision and a length (LEN).  The length is the number of HWIs
   needed to represent the value.  widest_int and offset_int have a
   constant precision that cannot be changed, so they only store the
   VAL and LEN fields.

   Since most integers used in a compiler are small values, it is
   generally profitable to use a representation of the value that is
   as small as possible.  LEN is used to indicate the number of
   elements of the vector that are in use.  The numbers are stored as
   sign extended numbers as a means of compression.  Leading
   HOST_WIDE_INTs that contain strings of either -1 or 0 are removed
   as long as they can be reconstructed from the top bit that is being
   represented.

   The precision and length of a wide_int are always greater than 0.
   Any bits in a wide_int above the precision are sign-extended from the
   most significant bit.  For example, a 4-bit value 0x8 is represented as
   VAL = { 0xf...fff8 }.  However, as an optimization, we allow other integer
   constants to be represented with undefined bits above the precision.
   This allows INTEGER_CSTs to be pre-extended according to TYPE_SIGN,
   so that the INTEGER_CST representation can be used both in TYPE_PRECISION
   and in wider precisions.

   There are constructors to create the various forms of wide_int from
   trees, rtl and constants.  For trees you can simply say:

	     tree t = ...;
	     wide_int x = t;

   However, a little more syntax is required for rtl constants since
   they do not have an explicit precision.  To make an rtl into a
   wide_int, you have to pair it with a mode.  The canonical way to do
   this is with std::make_pair as in:

	     rtx r = ...
	     wide_int x = std::make_pair (r, mode);

   Similarly, a wide_int can only be constructed from a host value if
   the target precision is given explicitly, such as in:

	     wide_int x = wi::shwi (c, prec); // sign-extend C if necessary
	     wide_int y = wi::uhwi (c, prec); // zero-extend C if necessary

   However, offset_int and widest_int have an inherent precision and so
   can be initialized directly from a host value:

	     offset_int x = (int) c;          // sign-extend C
	     widest_int x = (unsigned int) c; // zero-extend C

   It is also possible to do arithmetic directly on trees, rtxes and
   constants.  For example:

	     wi::add (t1, t2);	  // add equal-sized INTEGER_CSTs t1 and t2
	     wi::add (t1, 1);	  // add 1 to INTEGER_CST t1
	     wi::add (r1, r2);	  // add equal-sized rtx constants r1 and r2
	     wi::lshift (1, 100); // 1 << 100 as a widest_int

   Many binary operations place restrictions on the combinations of inputs,
   using the following rules:

   - {tree, rtx, wide_int} op {tree, rtx, wide_int} -> wide_int
     The inputs must be the same precision.  The result is a wide_int
     of the same precision.

   - {tree, rtx, wide_int} op (un)signed HOST_WIDE_INT -> wide_int
     (un)signed HOST_WIDE_INT op {tree, rtx, wide_int} -> wide_int
     The HOST_WIDE_INT is extended or truncated to the precision of
     the other input.  The result is a wide_int of the same precision
     as that input.

   - (un)signed HOST_WIDE_INT op (un)signed HOST_WIDE_INT -> widest_int
     The inputs are extended to widest_int precision and produce a
     widest_int result.

   - offset_int op offset_int -> offset_int
     offset_int op (un)signed HOST_WIDE_INT -> offset_int
     (un)signed HOST_WIDE_INT op offset_int -> offset_int

   - widest_int op widest_int -> widest_int
     widest_int op (un)signed HOST_WIDE_INT -> widest_int
     (un)signed HOST_WIDE_INT op widest_int -> widest_int

   Other combinations like:

   - widest_int op offset_int and
   - wide_int op offset_int

   are not allowed.  The inputs should instead be extended or truncated
   so that they match.

   The inputs to comparison functions like wi::eq_p and wi::lts_p
   follow the same compatibility rules, although their return types
   are different.  Unary functions on X produce the same result as
   a binary operation X + X.  Shift functions X op Y also produce
   the same result as X + X; the precision of the shift amount Y
   can be arbitrarily different from X.  */
/* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very
   early examination of the target's mode file.  The WIDE_INT_MAX_ELTS
   can accommodate at least 1 more bit so that unsigned numbers of that
   mode can be represented as a signed value.  Note that it is still
   possible to create fixed_wide_ints that have precisions greater than
   MAX_BITSIZE_MODE_ANY_INT.  This can be useful when representing a
   double-width multiplication result, for example.  */
#define WIDE_INT_MAX_ELTS \
  ((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT) / HOST_BITS_PER_WIDE_INT)

#define WIDE_INT_MAX_PRECISION (WIDE_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
/* This is the max size of any pointer on any machine.  It does not
   seem to be as easy to sniff this out of the machine description as
   it is for MAX_BITSIZE_MODE_ANY_INT since targets may support
   multiple address sizes and may have different address sizes for
   different address spaces.  However, currently the largest pointer
   on any platform is 64 bits.  When that changes, then it is likely
   that a target hook should be defined so that targets can make this
   value larger for those targets.  */
#define ADDR_MAX_BITSIZE 64

/* This is the internal precision used when doing any address
   arithmetic.  The '4' is really 3 + 1.  Three of the bits are for
   the number of extra bits needed to do bit addresses and the other bit
   is to allow everything to be signed without losing any precision.
   Then everything is rounded up to the next HWI for efficiency.  */
#define ADDR_MAX_PRECISION \
  ((ADDR_MAX_BITSIZE + 4 + HOST_BITS_PER_WIDE_INT - 1) \
   & ~(HOST_BITS_PER_WIDE_INT - 1))

/* The number of HWIs needed to store an offset_int.  */
#define OFFSET_INT_ELTS (ADDR_MAX_PRECISION / HOST_BITS_PER_WIDE_INT)
/* The type of result produced by a binary operation on types T1 and T2.
   Defined purely for brevity.  */
#define WI_BINARY_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::result_type

/* The type of result produced by a unary operation on type T.  */
#define WI_UNARY_RESULT(T) \
  typename wi::unary_traits <T>::result_type

/* Define a variable RESULT to hold the result of a binary operation on
   X and Y, which have types T1 and T2 respectively.  Define VAL to
   point to the blocks of RESULT.  Once the user of the macro has
   filled in VAL, it should call RESULT.set_len to set the number
   of initialized blocks.  */
#define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
  WI_BINARY_RESULT (T1, T2) RESULT = \
    wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
  HOST_WIDE_INT *VAL = RESULT.write_val ()

/* Similar for the result of a unary operation on X, which has type T.  */
#define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
  WI_UNARY_RESULT (T) RESULT = \
    wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
  HOST_WIDE_INT *VAL = RESULT.write_val ()

template <typename T> class generic_wide_int;
template <int N> struct fixed_wide_int_storage;
class wide_int_storage;

/* An N-bit integer.  Until we can use typedef templates, use this instead.  */
#define FIXED_WIDE_INT(N) \
  generic_wide_int < fixed_wide_int_storage <N> >

typedef generic_wide_int <wide_int_storage> wide_int;
typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) offset_int;
typedef FIXED_WIDE_INT (WIDE_INT_MAX_PRECISION) widest_int;
template <bool SE>
struct wide_int_ref_storage;

typedef generic_wide_int <wide_int_ref_storage <false> > wide_int_ref;

/* This can be used instead of wide_int_ref if the referenced value is
   known to have type T.  It carries across properties of T's representation,
   such as whether excess upper bits in a HWI are defined, and can therefore
   help avoid redundant work.

   The macro could be replaced with a template typedef, once we're able
   to use those.  */
#define WIDE_INT_REF_FOR(T) \
  generic_wide_int \
    <wide_int_ref_storage <wi::int_traits <T>::is_sign_extended> >
namespace wi
{
  /* Classifies an integer based on its precision.  */
  enum precision_type {
    /* The integer has both a precision and defined signedness.  This allows
       the integer to be converted to any width, since we know whether to fill
       any extra bits with zeros or signs.  */
    FLEXIBLE_PRECISION,

    /* The integer has a variable precision but no defined signedness.  */
    VAR_PRECISION,

    /* The integer has a constant precision (known at GCC compile time)
       but no defined signedness.  */
    CONST_PRECISION
  };

  /* This class, which has no default implementation, is expected to
     provide the following members:

     static const enum precision_type precision_type;
       Classifies the type of T.

     static const unsigned int precision;
       Only defined if precision_type == CONST_PRECISION.  Specifies the
       precision of all integers of type T.

     static const bool host_dependent_precision;
       True if the precision of T depends (or can depend) on the host.

     static unsigned int get_precision (const T &x)
       Return the number of bits in X.

     static wi::storage_ref *decompose (HOST_WIDE_INT *scratch,
					unsigned int precision, const T &x)
       Decompose X as a PRECISION-bit integer, returning the associated
       wi::storage_ref.  SCRATCH is available as scratch space if needed.
       The routine should assert that PRECISION is acceptable.  */
  template <typename T> struct int_traits;

  /* This class provides a single type, result_type, which specifies the
     type of integer produced by a binary operation whose inputs have
     types T1 and T2.  The definition should be symmetric.  */
  template <typename T1, typename T2,
	    enum precision_type P1 = int_traits <T1>::precision_type,
	    enum precision_type P2 = int_traits <T2>::precision_type>
  struct binary_traits;

  /* The result of a unary operation on T is the same as the result of
     a binary operation on two values of type T.  */
  template <typename T>
  struct unary_traits : public binary_traits <T, T> {};

  /* Specify the result type for each supported combination of binary
     inputs.  Note that CONST_PRECISION and VAR_PRECISION cannot be
     mixed, in order to give stronger type checking.  When both inputs
     are CONST_PRECISION, they must have the same precision.  */
  template <typename T1, typename T2>
  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
  {
    typedef widest_int result_type;
  };

  template <typename T1, typename T2>
  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, VAR_PRECISION>
  {
    typedef wide_int result_type;
  };

  template <typename T1, typename T2>
  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, CONST_PRECISION>
  {
    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
       so as not to confuse gengtype.  */
    typedef generic_wide_int < fixed_wide_int_storage
			       <int_traits <T2>::precision> > result_type;
  };

  template <typename T1, typename T2>
  struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
  {
    typedef wide_int result_type;
  };

  template <typename T1, typename T2>
  struct binary_traits <T1, T2, CONST_PRECISION, FLEXIBLE_PRECISION>
  {
    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
       so as not to confuse gengtype.  */
    typedef generic_wide_int < fixed_wide_int_storage
			       <int_traits <T1>::precision> > result_type;
  };

  template <typename T1, typename T2>
  struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
  {
    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
       so as not to confuse gengtype.  */
    STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
    typedef generic_wide_int < fixed_wide_int_storage
			       <int_traits <T1>::precision> > result_type;
  };

  template <typename T1, typename T2>
  struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
  {
    typedef wide_int result_type;
  };
}
/* Public functions for querying and operating on integers.  */
namespace wi
{
  template <typename T>
  unsigned int get_precision (const T &);

  template <typename T1, typename T2>
  unsigned int get_binary_precision (const T1 &, const T2 &);

  template <typename T1, typename T2>
  void copy (T1 &, const T2 &);

#define UNARY_PREDICATE \
  template <typename T> bool
#define UNARY_FUNCTION \
  template <typename T> WI_UNARY_RESULT (T)
#define BINARY_PREDICATE \
  template <typename T1, typename T2> bool
#define BINARY_FUNCTION \
  template <typename T1, typename T2> WI_BINARY_RESULT (T1, T2)
#define SHIFT_FUNCTION \
  template <typename T1, typename T2> WI_UNARY_RESULT (T1)

  UNARY_PREDICATE fits_shwi_p (const T &);
  UNARY_PREDICATE fits_uhwi_p (const T &);
  UNARY_PREDICATE neg_p (const T &, signop = SIGNED);

  template <typename T>
  HOST_WIDE_INT sign_mask (const T &);

  BINARY_PREDICATE eq_p (const T1 &, const T2 &);
  BINARY_PREDICATE ne_p (const T1 &, const T2 &);
  BINARY_PREDICATE lt_p (const T1 &, const T2 &, signop);
  BINARY_PREDICATE lts_p (const T1 &, const T2 &);
  BINARY_PREDICATE ltu_p (const T1 &, const T2 &);
  BINARY_PREDICATE le_p (const T1 &, const T2 &, signop);
  BINARY_PREDICATE les_p (const T1 &, const T2 &);
  BINARY_PREDICATE leu_p (const T1 &, const T2 &);
  BINARY_PREDICATE gt_p (const T1 &, const T2 &, signop);
  BINARY_PREDICATE gts_p (const T1 &, const T2 &);
  BINARY_PREDICATE gtu_p (const T1 &, const T2 &);
  BINARY_PREDICATE ge_p (const T1 &, const T2 &, signop);
  BINARY_PREDICATE ges_p (const T1 &, const T2 &);
  BINARY_PREDICATE geu_p (const T1 &, const T2 &);

  template <typename T1, typename T2>
  int cmp (const T1 &, const T2 &, signop);

  template <typename T1, typename T2>
  int cmps (const T1 &, const T2 &);

  template <typename T1, typename T2>
  int cmpu (const T1 &, const T2 &);

  UNARY_FUNCTION bit_not (const T &);
  UNARY_FUNCTION neg (const T &);
  UNARY_FUNCTION neg (const T &, bool *);
  UNARY_FUNCTION abs (const T &);
  UNARY_FUNCTION ext (const T &, unsigned int, signop);
  UNARY_FUNCTION sext (const T &, unsigned int);
  UNARY_FUNCTION zext (const T &, unsigned int);
  UNARY_FUNCTION set_bit (const T &, unsigned int);

  BINARY_FUNCTION min (const T1 &, const T2 &, signop);
  BINARY_FUNCTION smin (const T1 &, const T2 &);
  BINARY_FUNCTION umin (const T1 &, const T2 &);
  BINARY_FUNCTION max (const T1 &, const T2 &, signop);
  BINARY_FUNCTION smax (const T1 &, const T2 &);
  BINARY_FUNCTION umax (const T1 &, const T2 &);

  BINARY_FUNCTION bit_and (const T1 &, const T2 &);
  BINARY_FUNCTION bit_and_not (const T1 &, const T2 &);
  BINARY_FUNCTION bit_or (const T1 &, const T2 &);
  BINARY_FUNCTION bit_or_not (const T1 &, const T2 &);
  BINARY_FUNCTION bit_xor (const T1 &, const T2 &);
  BINARY_FUNCTION add (const T1 &, const T2 &);
  BINARY_FUNCTION add (const T1 &, const T2 &, signop, bool *);
  BINARY_FUNCTION sub (const T1 &, const T2 &);
  BINARY_FUNCTION sub (const T1 &, const T2 &, signop, bool *);
  BINARY_FUNCTION mul (const T1 &, const T2 &);
  BINARY_FUNCTION mul (const T1 &, const T2 &, signop, bool *);
  BINARY_FUNCTION smul (const T1 &, const T2 &, bool *);
  BINARY_FUNCTION umul (const T1 &, const T2 &, bool *);
  BINARY_FUNCTION mul_high (const T1 &, const T2 &, signop);
  BINARY_FUNCTION div_trunc (const T1 &, const T2 &, signop, bool * = 0);
  BINARY_FUNCTION sdiv_trunc (const T1 &, const T2 &);
  BINARY_FUNCTION udiv_trunc (const T1 &, const T2 &);
  BINARY_FUNCTION div_floor (const T1 &, const T2 &, signop, bool * = 0);
  BINARY_FUNCTION udiv_floor (const T1 &, const T2 &);
  BINARY_FUNCTION sdiv_floor (const T1 &, const T2 &);
  BINARY_FUNCTION div_ceil (const T1 &, const T2 &, signop, bool * = 0);
  BINARY_FUNCTION div_round (const T1 &, const T2 &, signop, bool * = 0);
  BINARY_FUNCTION divmod_trunc (const T1 &, const T2 &, signop,
				WI_BINARY_RESULT (T1, T2) *);
  BINARY_FUNCTION mod_trunc (const T1 &, const T2 &, signop, bool * = 0);
  BINARY_FUNCTION smod_trunc (const T1 &, const T2 &);
  BINARY_FUNCTION umod_trunc (const T1 &, const T2 &);
  BINARY_FUNCTION mod_floor (const T1 &, const T2 &, signop, bool * = 0);
  BINARY_FUNCTION umod_floor (const T1 &, const T2 &);
  BINARY_FUNCTION mod_ceil (const T1 &, const T2 &, signop, bool * = 0);
  BINARY_FUNCTION mod_round (const T1 &, const T2 &, signop, bool * = 0);

  template <typename T1, typename T2>
  bool multiple_of_p (const T1 &, const T2 &, signop);

  template <typename T1, typename T2>
  bool multiple_of_p (const T1 &, const T2 &, signop,
		      WI_BINARY_RESULT (T1, T2) *);

  SHIFT_FUNCTION lshift (const T1 &, const T2 &);
  SHIFT_FUNCTION lrshift (const T1 &, const T2 &);
  SHIFT_FUNCTION arshift (const T1 &, const T2 &);
  SHIFT_FUNCTION rshift (const T1 &, const T2 &, signop sgn);
  SHIFT_FUNCTION lrotate (const T1 &, const T2 &, unsigned int = 0);
  SHIFT_FUNCTION rrotate (const T1 &, const T2 &, unsigned int = 0);

#undef SHIFT_FUNCTION
#undef BINARY_PREDICATE
#undef BINARY_FUNCTION
#undef UNARY_PREDICATE
#undef UNARY_FUNCTION

  bool only_sign_bit_p (const wide_int_ref &, unsigned int);
  bool only_sign_bit_p (const wide_int_ref &);
  int clz (const wide_int_ref &);
  int clrsb (const wide_int_ref &);
  int ctz (const wide_int_ref &);
  int exact_log2 (const wide_int_ref &);
  int floor_log2 (const wide_int_ref &);
  int ffs (const wide_int_ref &);
  int popcount (const wide_int_ref &);
  int parity (const wide_int_ref &);

  template <typename T>
  unsigned HOST_WIDE_INT extract_uhwi (const T &, unsigned int, unsigned int);

  template <typename T>
  unsigned int min_precision (const T &, signop);
}
namespace wi
{
  /* Contains the components of a decomposed integer for easy, direct
     access.  */
  struct storage_ref
  {
    storage_ref (const HOST_WIDE_INT *, unsigned int, unsigned int);

    const HOST_WIDE_INT *val;
    unsigned int len;
    unsigned int precision;

    /* Provide enough trappings for this class to act as storage for
       generic_wide_int.  */
    unsigned int get_len () const;
    unsigned int get_precision () const;
    const HOST_WIDE_INT *get_val () const;
  };
}

inline::wi::storage_ref::storage_ref (const HOST_WIDE_INT *val_in,
				      unsigned int len_in,
				      unsigned int precision_in)
  : val (val_in), len (len_in), precision (precision_in)
{
}

inline unsigned int
wi::storage_ref::get_len () const
{
  return len;
}

inline unsigned int
wi::storage_ref::get_precision () const
{
  return precision;
}

inline const HOST_WIDE_INT *
wi::storage_ref::get_val () const
{
  return val;
}
/* This class defines an integer type using the storage provided by the
   template argument.  The storage class must provide the following
   functions:

   unsigned int get_precision () const
     Return the number of bits in the integer.

   HOST_WIDE_INT *get_val () const
     Return a pointer to the array of blocks that encodes the integer.

   unsigned int get_len () const
     Return the number of blocks in get_val ().  If this is smaller
     than the number of blocks implied by get_precision (), the
     remaining blocks are sign extensions of block get_len () - 1.

   Although not required by generic_wide_int itself, writable storage
   classes can also provide the following functions:

   HOST_WIDE_INT *write_val ()
     Get a modifiable version of get_val ()

   unsigned int set_len (unsigned int len)
     Set the value returned by get_len () to LEN.  */
template <typename storage>
class GTY(()) generic_wide_int : public storage
{
public:
  generic_wide_int ();

  template <typename T>
  generic_wide_int (const T &);

  template <typename T>
  generic_wide_int (const T &, unsigned int);

  /* Conversions.  */
  HOST_WIDE_INT to_shwi (unsigned int) const;
  HOST_WIDE_INT to_shwi () const;
  unsigned HOST_WIDE_INT to_uhwi (unsigned int) const;
  unsigned HOST_WIDE_INT to_uhwi () const;
  HOST_WIDE_INT to_short_addr () const;

  /* Public accessors for the interior of a wide int.  */
  HOST_WIDE_INT sign_mask () const;
  HOST_WIDE_INT elt (unsigned int) const;
  unsigned HOST_WIDE_INT ulow () const;
  unsigned HOST_WIDE_INT uhigh () const;
  HOST_WIDE_INT slow () const;
  HOST_WIDE_INT shigh () const;

  template <typename T>
  generic_wide_int &operator = (const T &);

#define BINARY_PREDICATE(OP, F) \
  template <typename T> \
  bool OP (const T &c) const { return wi::F (*this, c); }

#define UNARY_OPERATOR(OP, F) \
  WI_UNARY_RESULT (generic_wide_int) OP () const { return wi::F (*this); }

#define BINARY_OPERATOR(OP, F) \
  template <typename T> \
  WI_BINARY_RESULT (generic_wide_int, T) \
  OP (const T &c) const { return wi::F (*this, c); }

#define ASSIGNMENT_OPERATOR(OP, F) \
  template <typename T> \
  generic_wide_int &OP (const T &c) { return (*this = wi::F (*this, c)); }

#define INCDEC_OPERATOR(OP, DELTA) \
  generic_wide_int &OP () { *this += DELTA; return *this; }

  UNARY_OPERATOR (operator ~, bit_not)
  UNARY_OPERATOR (operator -, neg)
  BINARY_PREDICATE (operator ==, eq_p)
  BINARY_PREDICATE (operator !=, ne_p)
  BINARY_OPERATOR (operator &, bit_and)
  BINARY_OPERATOR (and_not, bit_and_not)
  BINARY_OPERATOR (operator |, bit_or)
  BINARY_OPERATOR (or_not, bit_or_not)
  BINARY_OPERATOR (operator ^, bit_xor)
  BINARY_OPERATOR (operator +, add)
  BINARY_OPERATOR (operator -, sub)
  BINARY_OPERATOR (operator *, mul)
  ASSIGNMENT_OPERATOR (operator &=, bit_and)
  ASSIGNMENT_OPERATOR (operator |=, bit_or)
  ASSIGNMENT_OPERATOR (operator ^=, bit_xor)
  ASSIGNMENT_OPERATOR (operator +=, add)
  ASSIGNMENT_OPERATOR (operator -=, sub)
  ASSIGNMENT_OPERATOR (operator *=, mul)
  INCDEC_OPERATOR (operator ++, 1)
  INCDEC_OPERATOR (operator --, -1)

#undef BINARY_PREDICATE
#undef UNARY_OPERATOR
#undef BINARY_OPERATOR
#undef ASSIGNMENT_OPERATOR
#undef INCDEC_OPERATOR

  /* Debugging functions.  */
  void dump () const;

  static const bool is_sign_extended
    = wi::int_traits <generic_wide_int <storage> >::is_sign_extended;
};
template <typename storage>
inline generic_wide_int <storage>::generic_wide_int () {}

template <typename storage>
template <typename T>
inline generic_wide_int <storage>::generic_wide_int (const T &x)
  : storage (x)
{
}

template <typename storage>
template <typename T>
inline generic_wide_int <storage>::generic_wide_int (const T &x,
						     unsigned int precision)
  : storage (x, precision)
{
}

/* Return THIS as a signed HOST_WIDE_INT, sign-extending from PRECISION.
   If THIS does not fit in PRECISION, the information is lost.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::to_shwi (unsigned int precision) const
{
  if (precision < HOST_BITS_PER_WIDE_INT)
    return sext_hwi (this->get_val ()[0], precision);
  else
    return this->get_val ()[0];
}

/* Return THIS as a signed HOST_WIDE_INT, in its natural precision.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::to_shwi () const
{
  if (is_sign_extended)
    return this->get_val ()[0];
  else
    return to_shwi (this->get_precision ());
}

/* Return THIS as an unsigned HOST_WIDE_INT, zero-extending from
   PRECISION.  If THIS does not fit in PRECISION, the information
   is lost.  */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::to_uhwi (unsigned int precision) const
{
  if (precision < HOST_BITS_PER_WIDE_INT)
    return zext_hwi (this->get_val ()[0], precision);
  else
    return this->get_val ()[0];
}

/* Return THIS as an unsigned HOST_WIDE_INT, in its natural precision.  */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::to_uhwi () const
{
  return to_uhwi (this->get_precision ());
}

/* TODO: The compiler is half converted from using HOST_WIDE_INT to
   represent addresses to using offset_int to represent addresses.
   We use to_short_addr at the interface from new code to old,
   unconverted code.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::to_short_addr () const
{
  return this->get_val ()[0];
}

/* Return the implicit value of blocks above get_len ().  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::sign_mask () const
{
  unsigned int len = this->get_len ();
  unsigned HOST_WIDE_INT high = this->get_val ()[len - 1];
  if (!is_sign_extended)
    {
      unsigned int precision = this->get_precision ();
      int excess = len * HOST_BITS_PER_WIDE_INT - precision;
      if (excess > 0)
	high <<= excess;
    }
  return (HOST_WIDE_INT) (high) < 0 ? -1 : 0;
}
/* Return the signed value of the least-significant explicitly-encoded
   block.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::slow () const
{
  return this->get_val ()[0];
}

/* Return the signed value of the most-significant explicitly-encoded
   block.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::shigh () const
{
  return this->get_val ()[this->get_len () - 1];
}

/* Return the unsigned value of the least-significant
   explicitly-encoded block.  */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::ulow () const
{
  return this->get_val ()[0];
}

/* Return the unsigned value of the most-significant
   explicitly-encoded block.  */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::uhigh () const
{
  return this->get_val ()[this->get_len () - 1];
}
/* Return block I, which might be implicitly or explicitly encoded.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::elt (unsigned int i) const
{
  if (i >= this->get_len ())
    return sign_mask ();
  else
    return this->get_val ()[i];
}
template <typename storage>
template <typename T>
generic_wide_int <storage> &
generic_wide_int <storage>::operator = (const T &x)
{
  storage::operator = (x);
  return *this;
}

/* Dump the contents of the integer to stderr, for debugging.  */
template <typename storage>
void
generic_wide_int <storage>::dump () const
{
  unsigned int len = this->get_len ();
  const HOST_WIDE_INT *val = this->get_val ();
  unsigned int precision = this->get_precision ();
  fprintf (stderr, "[");
  if (len * HOST_BITS_PER_WIDE_INT < precision)
    fprintf (stderr, "...,");
  for (unsigned int i = 0; i < len - 1; ++i)
    fprintf (stderr, HOST_WIDE_INT_PRINT_HEX ",", val[len - 1 - i]);
  fprintf (stderr, HOST_WIDE_INT_PRINT_HEX "], precision = %d\n",
           val[0], precision);
}
namespace wi
{
  template <typename storage>
  struct int_traits < generic_wide_int <storage> >
    : public wi::int_traits <storage>
  {
    static unsigned int get_precision (const generic_wide_int <storage> &);
    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
                                      const generic_wide_int <storage> &);
  };
}

template <typename storage>
inline unsigned int
wi::int_traits < generic_wide_int <storage> >::
get_precision (const generic_wide_int <storage> &x)
{
  return x.get_precision ();
}

template <typename storage>
inline wi::storage_ref
wi::int_traits < generic_wide_int <storage> >::
decompose (HOST_WIDE_INT *, unsigned int precision,
           const generic_wide_int <storage> &x)
{
  gcc_checking_assert (precision == x.get_precision ());
  return wi::storage_ref (x.get_val (), x.get_len (), precision);
}
/* Provide the storage for a wide_int_ref.  This acts like a read-only
   wide_int, with the optimization that VAL is normally a pointer to
   another integer's storage, so that no array copy is needed.  */
template <bool SE>
struct wide_int_ref_storage : public wi::storage_ref
{
private:
  /* Scratch space that can be used when decomposing the original integer.
     It must live as long as this object.  */
  HOST_WIDE_INT scratch[2];

public:
  wide_int_ref_storage (const wi::storage_ref &);

  template <typename T>
  wide_int_ref_storage (const T &);

  template <typename T>
  wide_int_ref_storage (const T &, unsigned int);
};

/* Create a reference from an existing reference.  */
template <bool SE>
inline wide_int_ref_storage <SE>::
wide_int_ref_storage (const wi::storage_ref &x)
  : storage_ref (x)
{
}

/* Create a reference to integer X in its natural precision.  Note
   that the natural precision is host-dependent for primitive
   types.  */
template <bool SE>
template <typename T>
inline wide_int_ref_storage <SE>::wide_int_ref_storage (const T &x)
  : storage_ref (wi::int_traits <T>::decompose (scratch,
                                                wi::get_precision (x), x))
{
}

/* Create a reference to integer X in precision PRECISION.  */
template <bool SE>
template <typename T>
inline wide_int_ref_storage <SE>::wide_int_ref_storage (const T &x,
                                                        unsigned int precision)
  : storage_ref (wi::int_traits <T>::decompose (scratch, precision, x))
{
}

namespace wi
{
  template <bool SE>
  struct int_traits <wide_int_ref_storage <SE> >
  {
    static const enum precision_type precision_type = VAR_PRECISION;
    /* wi::storage_ref can be a reference to a primitive type,
       so this is the conservatively-correct setting.  */
    static const bool host_dependent_precision = true;
    static const bool is_sign_extended = SE;
  };
}
namespace wi
{
  unsigned int force_to_size (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                              unsigned int, unsigned int, unsigned int,
                              signop sgn);
  unsigned int from_array (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                           unsigned int, unsigned int, bool = true);
}

/* The storage used by wide_int.  */
class GTY(()) wide_int_storage
{
private:
  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
  unsigned int len;
  unsigned int precision;

public:
  wide_int_storage ();
  template <typename T>
  wide_int_storage (const T &);

  /* The standard generic_wide_int storage methods.  */
  unsigned int get_precision () const;
  const HOST_WIDE_INT *get_val () const;
  unsigned int get_len () const;
  HOST_WIDE_INT *write_val ();
  void set_len (unsigned int, bool = false);

  static wide_int from (const wide_int_ref &, unsigned int, signop);
  static wide_int from_array (const HOST_WIDE_INT *, unsigned int,
                              unsigned int, bool = true);
  static wide_int create (unsigned int);

  /* FIXME: target-dependent, so should disappear.  */
  wide_int bswap () const;
};

namespace wi
{
  template <>
  struct int_traits <wide_int_storage>
  {
    static const enum precision_type precision_type = VAR_PRECISION;
    /* Guaranteed by a static assert in the wide_int_storage constructor.  */
    static const bool host_dependent_precision = false;
    static const bool is_sign_extended = true;
    template <typename T1, typename T2>
    static wide_int get_binary_result (const T1 &, const T2 &);
  };
}
inline wide_int_storage::wide_int_storage () {}

/* Initialize the storage from integer X, in its natural precision.
   Note that we do not allow integers with host-dependent precision
   to become wide_ints; wide_ints must always be logically independent
   of the host.  */
template <typename T>
inline wide_int_storage::wide_int_storage (const T &x)
{
  { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
  { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
  WIDE_INT_REF_FOR (T) xi (x);
  precision = xi.precision;
  wi::copy (*this, xi);
}
inline unsigned int
wide_int_storage::get_precision () const
{
  return precision;
}

inline const HOST_WIDE_INT *
wide_int_storage::get_val () const
{
  return val;
}

inline unsigned int
wide_int_storage::get_len () const
{
  return len;
}

inline HOST_WIDE_INT *
wide_int_storage::write_val ()
{
  return val;
}

inline void
wide_int_storage::set_len (unsigned int l, bool is_sign_extended)
{
  len = l;
  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
    val[len - 1] = sext_hwi (val[len - 1],
                             precision % HOST_BITS_PER_WIDE_INT);
}
/* Treat X as having signedness SGN and convert it to a PRECISION-bit
   number.  */
inline wide_int
wide_int_storage::from (const wide_int_ref &x, unsigned int precision,
                        signop sgn)
{
  wide_int result = wide_int::create (precision);
  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
                                     x.precision, precision, sgn));
  return result;
}

/* Create a wide_int from the explicit block encoding given by VAL and
   LEN.  PRECISION is the precision of the integer.  NEED_CANON_P is
   true if the encoding may have redundant trailing blocks.  */
inline wide_int
wide_int_storage::from_array (const HOST_WIDE_INT *val, unsigned int len,
                              unsigned int precision, bool need_canon_p)
{
  wide_int result = wide_int::create (precision);
  result.set_len (wi::from_array (result.write_val (), val, len, precision,
                                  need_canon_p));
  return result;
}

/* Return an uninitialized wide_int with precision PRECISION.  */
inline wide_int
wide_int_storage::create (unsigned int precision)
{
  wide_int x;
  x.precision = precision;
  return x;
}

template <typename T1, typename T2>
inline wide_int
wi::int_traits <wide_int_storage>::get_binary_result (const T1 &x, const T2 &y)
{
  /* This shouldn't be used for two flexible-precision inputs.  */
  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
                 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
    return wide_int::create (wi::get_precision (y));
  else
    return wide_int::create (wi::get_precision (x));
}
/* The storage used by FIXED_WIDE_INT (N).  */
template <int N>
class GTY(()) fixed_wide_int_storage
{
private:
  HOST_WIDE_INT val[(N + HOST_BITS_PER_WIDE_INT + 1) / HOST_BITS_PER_WIDE_INT];
  unsigned int len;

public:
  fixed_wide_int_storage ();
  template <typename T>
  fixed_wide_int_storage (const T &);

  /* The standard generic_wide_int storage methods.  */
  unsigned int get_precision () const;
  const HOST_WIDE_INT *get_val () const;
  unsigned int get_len () const;
  HOST_WIDE_INT *write_val ();
  void set_len (unsigned int, bool = false);

  static FIXED_WIDE_INT (N) from (const wide_int_ref &, signop);
  static FIXED_WIDE_INT (N) from_array (const HOST_WIDE_INT *, unsigned int,
                                        bool = true);
};

namespace wi
{
  template <int N>
  struct int_traits < fixed_wide_int_storage <N> >
  {
    static const enum precision_type precision_type = CONST_PRECISION;
    static const bool host_dependent_precision = false;
    static const bool is_sign_extended = true;
    static const unsigned int precision = N;
    template <typename T1, typename T2>
    static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
  };
}

template <int N>
inline fixed_wide_int_storage <N>::fixed_wide_int_storage () {}

/* Initialize the storage from integer X, in precision N.  */
template <int N>
template <typename T>
inline fixed_wide_int_storage <N>::fixed_wide_int_storage (const T &x)
{
  /* Check for type compatibility.  We don't want to initialize a
     fixed-width integer from something like a wide_int.  */
  WI_BINARY_RESULT (T, FIXED_WIDE_INT (N)) *assertion ATTRIBUTE_UNUSED;
  wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N));
}
template <int N>
inline unsigned int
fixed_wide_int_storage <N>::get_precision () const
{
  return N;
}

template <int N>
inline const HOST_WIDE_INT *
fixed_wide_int_storage <N>::get_val () const
{
  return val;
}

template <int N>
inline unsigned int
fixed_wide_int_storage <N>::get_len () const
{
  return len;
}

template <int N>
inline HOST_WIDE_INT *
fixed_wide_int_storage <N>::write_val ()
{
  return val;
}

template <int N>
inline void
fixed_wide_int_storage <N>::set_len (unsigned int l, bool)
{
  len = l;
  /* There are no excess bits in val[len - 1].  */
  STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
}

/* Treat X as having signedness SGN and convert it to an N-bit number.  */
template <int N>
inline FIXED_WIDE_INT (N)
fixed_wide_int_storage <N>::from (const wide_int_ref &x, signop sgn)
{
  FIXED_WIDE_INT (N) result;
  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
                                     x.precision, N, sgn));
  return result;
}

/* Create a FIXED_WIDE_INT (N) from the explicit block encoding given by
   VAL and LEN.  NEED_CANON_P is true if the encoding may have redundant
   trailing blocks.  */
template <int N>
inline FIXED_WIDE_INT (N)
fixed_wide_int_storage <N>::from_array (const HOST_WIDE_INT *val,
                                        unsigned int len,
                                        bool need_canon_p)
{
  FIXED_WIDE_INT (N) result;
  result.set_len (wi::from_array (result.write_val (), val, len,
                                  N, need_canon_p));
  return result;
}

template <int N>
template <typename T1, typename T2>
inline FIXED_WIDE_INT (N)
wi::int_traits < fixed_wide_int_storage <N> >::
get_binary_result (const T1 &, const T2 &)
{
  return FIXED_WIDE_INT (N) ();
}
/* A reference to one element of a trailing_wide_ints structure.  */
class trailing_wide_int_storage
{
private:
  /* The precision of the integer, which is a fixed property of the
     parent trailing_wide_ints.  */
  unsigned int m_precision;

  /* A pointer to the length field.  */
  unsigned char *m_len;

  /* A pointer to the HWI array.  There are enough elements to hold all
     values of precision M_PRECISION.  */
  HOST_WIDE_INT *m_val;

public:
  trailing_wide_int_storage (unsigned int, unsigned char *, HOST_WIDE_INT *);

  /* The standard generic_wide_int storage methods.  */
  unsigned int get_len () const;
  unsigned int get_precision () const;
  const HOST_WIDE_INT *get_val () const;
  HOST_WIDE_INT *write_val ();
  void set_len (unsigned int, bool = false);

  template <typename T>
  trailing_wide_int_storage &operator = (const T &);
};

typedef generic_wide_int <trailing_wide_int_storage> trailing_wide_int;

/* trailing_wide_int behaves like a wide_int.  */
namespace wi
{
  template <>
  struct int_traits <trailing_wide_int_storage>
    : public int_traits <wide_int_storage> {};
}

/* An array of N wide_int-like objects that can be put at the end of
   a variable-sized structure.  Use extra_size to calculate how many
   bytes beyond the sizeof need to be allocated.  Use set_precision
   to initialize the structure.  */
template <int N>
class GTY(()) trailing_wide_ints
{
private:
  /* The shared precision of each number.  */
  unsigned short m_precision;

  /* The shared maximum length of each number.  */
  unsigned char m_max_len;

  /* The current length of each number.  */
  unsigned char m_len[N];

  /* The variable-length part of the structure, which always contains
     at least one HWI.  Element I starts at index I * M_MAX_LEN.  */
  HOST_WIDE_INT m_val[1];

public:
  void set_precision (unsigned int);
  trailing_wide_int operator [] (unsigned int);
  static size_t extra_size (unsigned int);
};
inline trailing_wide_int_storage::
trailing_wide_int_storage (unsigned int precision, unsigned char *len,
                           HOST_WIDE_INT *val)
  : m_precision (precision), m_len (len), m_val (val)
{
}

inline unsigned int
trailing_wide_int_storage::get_len () const
{
  return *m_len;
}

inline unsigned int
trailing_wide_int_storage::get_precision () const
{
  return m_precision;
}

inline const HOST_WIDE_INT *
trailing_wide_int_storage::get_val () const
{
  return m_val;
}

inline HOST_WIDE_INT *
trailing_wide_int_storage::write_val ()
{
  return m_val;
}

inline void
trailing_wide_int_storage::set_len (unsigned int len, bool is_sign_extended)
{
  *m_len = len;
  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > m_precision)
    m_val[len - 1] = sext_hwi (m_val[len - 1],
                               m_precision % HOST_BITS_PER_WIDE_INT);
}

template <typename T>
inline trailing_wide_int_storage &
trailing_wide_int_storage::operator = (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x, m_precision);
  wi::copy (*this, xi);
  return *this;
}

/* Initialize the structure and record that all elements have precision
   PRECISION.  */
template <int N>
inline void
trailing_wide_ints <N>::set_precision (unsigned int precision)
{
  m_precision = precision;
  m_max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
               / HOST_BITS_PER_WIDE_INT);
}

/* Return a reference to element INDEX.  */
template <int N>
inline trailing_wide_int
trailing_wide_ints <N>::operator [] (unsigned int index)
{
  return trailing_wide_int_storage (m_precision, &m_len[index],
                                    &m_val[index * m_max_len]);
}

/* Return how many extra bytes need to be added to the end of the structure
   in order to handle N wide_ints of precision PRECISION.  */
template <int N>
inline size_t
trailing_wide_ints <N>::extra_size (unsigned int precision)
{
  unsigned int max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
                          / HOST_BITS_PER_WIDE_INT);
  return (N * max_len - 1) * sizeof (HOST_WIDE_INT);
}
/* This macro is used in structures that end with a trailing_wide_ints field
   called FIELD.  It declares get_NAME () and set_NAME () methods to access
   element I of FIELD.  */
#define TRAILING_WIDE_INT_ACCESSOR(NAME, FIELD, I) \
  trailing_wide_int get_##NAME () { return FIELD[I]; } \
  template <typename T> void set_##NAME (const T &x) { FIELD[I] = x; }

namespace wi
{
  /* Implementation of int_traits for primitive integer types like "int".  */
  template <typename T, bool signed_p>
  struct primitive_int_traits
  {
    static const enum precision_type precision_type = FLEXIBLE_PRECISION;
    static const bool host_dependent_precision = true;
    static const bool is_sign_extended = true;
    static unsigned int get_precision (T);
    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int, T);
  };
}

template <typename T, bool signed_p>
inline unsigned int
wi::primitive_int_traits <T, signed_p>::get_precision (T)
{
  return sizeof (T) * CHAR_BIT;
}

template <typename T, bool signed_p>
inline wi::storage_ref
wi::primitive_int_traits <T, signed_p>::decompose (HOST_WIDE_INT *scratch,
                                                   unsigned int precision, T x)
{
  scratch[0] = x;
  if (signed_p || scratch[0] >= 0 || precision <= HOST_BITS_PER_WIDE_INT)
    return wi::storage_ref (scratch, 1, precision);
  scratch[1] = 0;
  return wi::storage_ref (scratch, 2, precision);
}
/* Allow primitive C types to be used in wi:: routines.  */
namespace wi
{
  template <>
  struct int_traits <int>
    : public primitive_int_traits <int, true> {};

  template <>
  struct int_traits <unsigned int>
    : public primitive_int_traits <unsigned int, false> {};

  template <>
  struct int_traits <long>
    : public primitive_int_traits <long, true> {};

  template <>
  struct int_traits <unsigned long>
    : public primitive_int_traits <unsigned long, false> {};

#if defined HAVE_LONG_LONG
  template <>
  struct int_traits <long long>
    : public primitive_int_traits <long long, true> {};

  template <>
  struct int_traits <unsigned long long>
    : public primitive_int_traits <unsigned long long, false> {};
#endif
}
namespace wi
{
  /* Stores HWI-sized integer VAL, treating it as having signedness SGN
     and precision PRECISION.  */
  struct hwi_with_prec
  {
    hwi_with_prec (HOST_WIDE_INT, unsigned int, signop);
    HOST_WIDE_INT val;
    unsigned int precision;
    signop sgn;
  };

  hwi_with_prec shwi (HOST_WIDE_INT, unsigned int);
  hwi_with_prec uhwi (unsigned HOST_WIDE_INT, unsigned int);

  hwi_with_prec minus_one (unsigned int);
  hwi_with_prec zero (unsigned int);
  hwi_with_prec one (unsigned int);
  hwi_with_prec two (unsigned int);
}

inline wi::hwi_with_prec::hwi_with_prec (HOST_WIDE_INT v, unsigned int p,
                                         signop s)
  : val (v), precision (p), sgn (s)
{
}

/* Return a signed integer that has value VAL and precision PRECISION.  */
inline wi::hwi_with_prec
wi::shwi (HOST_WIDE_INT val, unsigned int precision)
{
  return hwi_with_prec (val, precision, SIGNED);
}

/* Return an unsigned integer that has value VAL and precision PRECISION.  */
inline wi::hwi_with_prec
wi::uhwi (unsigned HOST_WIDE_INT val, unsigned int precision)
{
  return hwi_with_prec (val, precision, UNSIGNED);
}

/* Return a wide int of -1 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::minus_one (unsigned int precision)
{
  return wi::shwi (-1, precision);
}

/* Return a wide int of 0 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::zero (unsigned int precision)
{
  return wi::shwi (0, precision);
}

/* Return a wide int of 1 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::one (unsigned int precision)
{
  return wi::shwi (1, precision);
}

/* Return a wide int of 2 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::two (unsigned int precision)
{
  return wi::shwi (2, precision);
}
namespace wi
{
  template <>
  struct int_traits <wi::hwi_with_prec>
  {
    static const enum precision_type precision_type = VAR_PRECISION;
    /* hwi_with_prec has an explicitly-given precision, rather than the
       precision of HOST_WIDE_INT.  */
    static const bool host_dependent_precision = false;
    static const bool is_sign_extended = true;
    static unsigned int get_precision (const wi::hwi_with_prec &);
    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
                                      const wi::hwi_with_prec &);
  };
}

inline unsigned int
wi::int_traits <wi::hwi_with_prec>::get_precision (const wi::hwi_with_prec &x)
{
  return x.precision;
}

inline wi::storage_ref
wi::int_traits <wi::hwi_with_prec>::
decompose (HOST_WIDE_INT *scratch, unsigned int precision,
           const wi::hwi_with_prec &x)
{
  gcc_checking_assert (precision == x.precision);
  scratch[0] = x.val;
  if (x.sgn == SIGNED || x.val >= 0 || precision <= HOST_BITS_PER_WIDE_INT)
    return wi::storage_ref (scratch, 1, precision);
  scratch[1] = 0;
  return wi::storage_ref (scratch, 2, precision);
}
/* Private functions for handling large cases out of line.  They take
   individual length and array parameters because that is cheaper for
   the inline caller than constructing an object on the stack and
   passing a reference to it.  (Although many callers use wide_int_refs,
   we generally want those to be removed by SRA.)  */
namespace wi
{
  bool eq_p_large (const HOST_WIDE_INT *, unsigned int,
                   const HOST_WIDE_INT *, unsigned int, unsigned int);
  bool lts_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
                    const HOST_WIDE_INT *, unsigned int);
  bool ltu_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
                    const HOST_WIDE_INT *, unsigned int);
  int cmps_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
                  const HOST_WIDE_INT *, unsigned int);
  int cmpu_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
                  const HOST_WIDE_INT *, unsigned int);
  unsigned int sext_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                           unsigned int,
                           unsigned int, unsigned int);
  unsigned int zext_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                           unsigned int,
                           unsigned int, unsigned int);
  unsigned int set_bit_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                              unsigned int, unsigned int, unsigned int);
  unsigned int lshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                             unsigned int, unsigned int, unsigned int);
  unsigned int lrshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                              unsigned int, unsigned int, unsigned int,
                              unsigned int);
  unsigned int arshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                              unsigned int, unsigned int, unsigned int,
                              unsigned int);
  unsigned int and_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
                          const HOST_WIDE_INT *, unsigned int, unsigned int);
  unsigned int and_not_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                              unsigned int, const HOST_WIDE_INT *,
                              unsigned int, unsigned int);
  unsigned int or_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
                         const HOST_WIDE_INT *, unsigned int, unsigned int);
  unsigned int or_not_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                             unsigned int, const HOST_WIDE_INT *,
                             unsigned int, unsigned int);
  unsigned int xor_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
                          const HOST_WIDE_INT *, unsigned int, unsigned int);
  unsigned int add_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
                          const HOST_WIDE_INT *, unsigned int, unsigned int,
                          signop, bool *);
  unsigned int sub_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
                          const HOST_WIDE_INT *, unsigned int, unsigned int,
                          signop, bool *);
  unsigned int mul_internal (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                             unsigned int, const HOST_WIDE_INT *,
                             unsigned int, unsigned int, signop, bool *,
                             bool);
  unsigned int divmod_internal (HOST_WIDE_INT *, unsigned int *,
                                HOST_WIDE_INT *, const HOST_WIDE_INT *,
                                unsigned int, unsigned int,
                                const HOST_WIDE_INT *,
                                unsigned int, unsigned int,
                                signop, bool *);
}
/* Return the number of bits that integer X can hold.  */
template <typename T>
inline unsigned int
wi::get_precision (const T &x)
{
  return wi::int_traits <T>::get_precision (x);
}

/* Return the number of bits that the result of a binary operation can
   hold when the input operands are X and Y.  */
template <typename T1, typename T2>
inline unsigned int
wi::get_binary_precision (const T1 &x, const T2 &y)
{
  return get_precision (wi::int_traits <WI_BINARY_RESULT (T1, T2)>::
                        get_binary_result (x, y));
}

/* Copy the contents of Y to X, but keeping X's current precision.  */
template <typename T1, typename T2>
inline void
wi::copy (T1 &x, const T2 &y)
{
  HOST_WIDE_INT *xval = x.write_val ();
  const HOST_WIDE_INT *yval = y.get_val ();
  unsigned int len = y.get_len ();
  unsigned int i = 0;
  do
    xval[i] = yval[i];
  while (++i < len);
  x.set_len (len, y.is_sign_extended);
}
/* Return true if X fits in a HOST_WIDE_INT with no loss of precision.  */
template <typename T>
inline bool
wi::fits_shwi_p (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x);
  return xi.len == 1;
}

/* Return true if X fits in an unsigned HOST_WIDE_INT with no loss of
   precision.  */
template <typename T>
inline bool
wi::fits_uhwi_p (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x);
  if (xi.precision <= HOST_BITS_PER_WIDE_INT)
    return true;
  if (xi.len == 1)
    return xi.slow () >= 0;
  return xi.len == 2 && xi.uhigh () == 0;
}

/* Return true if X is negative based on the interpretation of SGN.
   For UNSIGNED, this is always false.  */
template <typename T>
inline bool
wi::neg_p (const T &x, signop sgn)
{
  WIDE_INT_REF_FOR (T) xi (x);
  if (sgn == UNSIGNED)
    return false;
  return xi.sign_mask () < 0;
}
/* Return -1 if the top bit of X is set and 0 if the top bit is clear.  */
template <typename T>
inline HOST_WIDE_INT
wi::sign_mask (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x);
  return xi.sign_mask ();
}

/* Return true if X == Y.  X and Y must be binary-compatible.  */
template <typename T1, typename T2>
inline bool
wi::eq_p (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (xi.is_sign_extended && yi.is_sign_extended)
    {
      /* This case reduces to array equality.  */
      if (xi.len != yi.len)
        return false;
      unsigned int i = 0;
      do
        if (xi.val[i] != yi.val[i])
          return false;
      while (++i != xi.len);
      return true;
    }
  if (__builtin_expect (yi.len == 1, true))
    {
      /* XI is only equal to YI if it too has a single HWI.  */
      if (xi.len != 1)
        return false;
      /* Excess bits in xi.val[0] will be signs or zeros, so comparisons
         with 0 are simple.  */
      if (STATIC_CONSTANT_P (yi.val[0] == 0))
        return xi.val[0] == 0;
      /* Otherwise flush out any excess bits first.  */
      unsigned HOST_WIDE_INT diff = xi.val[0] ^ yi.val[0];
      int excess = HOST_BITS_PER_WIDE_INT - precision;
      if (excess > 0)
        diff <<= excess;
      return diff == 0;
    }
  return eq_p_large (xi.val, xi.len, yi.val, yi.len, precision);
}
/* Return true if X != Y.  X and Y must be binary-compatible.  */
template <typename T1, typename T2>
inline bool
wi::ne_p (const T1 &x, const T2 &y)
{
  return !eq_p (x, y);
}

/* Return true if X < Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::lts_p (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  /* We optimize x < y, where y is 64 or fewer bits.  */
  if (wi::fits_shwi_p (yi))
    {
      /* Make lts_p (x, 0) as efficient as wi::neg_p (x).  */
      if (STATIC_CONSTANT_P (yi.val[0] == 0))
        return neg_p (xi);
      /* If x fits directly into a shwi, we can compare directly.  */
      if (wi::fits_shwi_p (xi))
        return xi.to_shwi () < yi.to_shwi ();
      /* If x doesn't fit and is negative, then it must be more
         negative than any value in y, and hence smaller than y.  */
      if (neg_p (xi))
        return true;
      /* If x is positive, then it must be larger than any value in y,
         and hence greater than y.  */
      return false;
    }
  /* Optimize the opposite case, if it can be detected at compile time.  */
  if (STATIC_CONSTANT_P (xi.len == 1))
    /* If YI is negative it is lower than the least HWI.
       If YI is positive it is greater than the greatest HWI.  */
    return !neg_p (yi);
  return lts_p_large (xi.val, xi.len, precision, yi.val, yi.len);
}
/* Return true if X < Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::ltu_p (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  /* Optimize comparisons with constants.  */
  if (STATIC_CONSTANT_P (yi.len == 1 && yi.val[0] >= 0))
    return xi.len == 1 && xi.to_uhwi () < (unsigned HOST_WIDE_INT) yi.val[0];
  if (STATIC_CONSTANT_P (xi.len == 1 && xi.val[0] >= 0))
    return yi.len != 1 || yi.to_uhwi () > (unsigned HOST_WIDE_INT) xi.val[0];
  /* Optimize the case of two HWIs.  The HWIs are implicitly sign-extended
     for precisions greater than HOST_BITS_PER_WIDE_INT, but sign-extending
     both values does not change the result.  */
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
      unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
      return xl < yl;
    }
  return ltu_p_large (xi.val, xi.len, precision, yi.val, yi.len);
}
/* Return true if X < Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::lt_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return lts_p (x, y);
  else
    return ltu_p (x, y);
}

/* Return true if X <= Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::les_p (const T1 &x, const T2 &y)
{
  return !lts_p (y, x);
}

/* Return true if X <= Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::leu_p (const T1 &x, const T2 &y)
{
  return !ltu_p (y, x);
}

/* Return true if X <= Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::le_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return les_p (x, y);
  else
    return leu_p (x, y);
}

/* Return true if X > Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::gts_p (const T1 &x, const T2 &y)
{
  return lts_p (y, x);
}

/* Return true if X > Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::gtu_p (const T1 &x, const T2 &y)
{
  return ltu_p (y, x);
}

/* Return true if X > Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::gt_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return gts_p (x, y);
  else
    return gtu_p (x, y);
}

/* Return true if X >= Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::ges_p (const T1 &x, const T2 &y)
{
  return !lts_p (x, y);
}

/* Return true if X >= Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::geu_p (const T1 &x, const T2 &y)
{
  return !ltu_p (x, y);
}

/* Return true if X >= Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::ge_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return ges_p (x, y);
  else
    return geu_p (x, y);
}
/* Return -1 if X < Y, 0 if X == Y and 1 if X > Y.  Treat both X and Y
   as signed values.  */
template <typename T1, typename T2>
inline int
wi::cmps (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (wi::fits_shwi_p (yi))
    {
      /* Special case for comparisons with 0.  */
      if (STATIC_CONSTANT_P (yi.val[0] == 0))
        return neg_p (xi) ? -1 : !(xi.len == 1 && xi.val[0] == 0);
      /* If x fits into a signed HWI, we can compare directly.  */
      if (wi::fits_shwi_p (xi))
        {
          HOST_WIDE_INT xl = xi.to_shwi ();
          HOST_WIDE_INT yl = yi.to_shwi ();
          return xl < yl ? -1 : xl > yl;
        }
      /* If x doesn't fit and is negative, then it must be more
         negative than any signed HWI, and hence smaller than y.  */
      if (neg_p (xi))
        return -1;
      /* If x is positive, then it must be larger than any signed HWI,
         and hence greater than y.  */
      return 1;
    }
  /* Optimize the opposite case, if it can be detected at compile time.  */
  if (STATIC_CONSTANT_P (xi.len == 1))
    /* If YI is negative it is lower than the least HWI.
       If YI is positive it is greater than the greatest HWI.  */
    return neg_p (yi) ? 1 : -1;
  return cmps_large (xi.val, xi.len, precision, yi.val, yi.len);
}
/* Return -1 if X < Y, 0 if X == Y and 1 if X > Y.  Treat both X and Y
   as unsigned values.  */
template <typename T1, typename T2>
inline int
wi::cmpu (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  /* Optimize comparisons with constants.  */
  if (STATIC_CONSTANT_P (yi.len == 1 && yi.val[0] >= 0))
    {
      /* If XI doesn't fit in a HWI then it must be larger than YI.  */
      if (xi.len != 1)
        return 1;
      /* Otherwise compare directly.  */
      unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
      unsigned HOST_WIDE_INT yl = yi.val[0];
      return xl < yl ? -1 : xl > yl;
    }
  if (STATIC_CONSTANT_P (xi.len == 1 && xi.val[0] >= 0))
    {
      /* If YI doesn't fit in a HWI then it must be larger than XI.  */
      if (yi.len != 1)
        return -1;
      /* Otherwise compare directly.  */
      unsigned HOST_WIDE_INT xl = xi.val[0];
      unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
      return xl < yl ? -1 : xl > yl;
    }
  /* Optimize the case of two HWIs.  The HWIs are implicitly sign-extended
     for precisions greater than HOST_BITS_PER_WIDE_INT, but sign-extending
     both values does not change the result.  */
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
      unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
      return xl < yl ? -1 : xl > yl;
    }
  return cmpu_large (xi.val, xi.len, precision, yi.val, yi.len);
}

/* Return -1 if X < Y, 0 if X == Y and 1 if X > Y.  Signedness of
   X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline int
wi::cmp (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return cmps (x, y);
  else
    return cmpu (x, y);
}
/* Return ~X.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::bit_not (const T &x)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  WIDE_INT_REF_FOR (T) xi (x, get_precision (result));
  for (unsigned int i = 0; i < xi.len; ++i)
    val[i] = ~xi.val[i];
  result.set_len (xi.len);
  return result;
}

/* Return -X.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::neg (const T &x)
{
  return sub (0, x);
}

/* Return -X.  Indicate in *OVERFLOW if X is the minimum signed value.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::neg (const T &x, bool *overflow)
{
  *overflow = only_sign_bit_p (x);
  return sub (0, x);
}

/* Return the absolute value of X.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::abs (const T &x)
{
  return neg_p (x) ? neg (x) : WI_UNARY_RESULT (T) (x);
}
/* Return the result of sign-extending the low OFFSET bits of X.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::sext (const T &x, unsigned int offset)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T) xi (x, precision);

  if (offset <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = sext_hwi (xi.ulow (), offset);
      result.set_len (1, true);
    }
  else
    result.set_len (sext_large (val, xi.val, xi.len, precision, offset));
  return result;
}
/* Return the result of zero-extending the low OFFSET bits of X.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::zext (const T &x, unsigned int offset)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T) xi (x, precision);

  /* This is not just an optimization: it is required to keep the
     representation in canonical form.  */
  if (offset >= precision)
    {
      wi::copy (result, xi);
      return result;
    }

  /* In these cases we know that at least the top bit will be clear,
     so no sign extension is necessary.  */
  if (offset < HOST_BITS_PER_WIDE_INT)
    {
      val[0] = zext_hwi (xi.ulow (), offset);
      result.set_len (1, true);
    }
  else
    result.set_len (zext_large (val, xi.val, xi.len, precision, offset), true);
  return result;
}
/* Return the result of extending the low OFFSET bits of X according to
   signedness SGN.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::ext (const T &x, unsigned int offset, signop sgn)
{
  return sgn == SIGNED ? sext (x, offset) : zext (x, offset);
}

/* Return an integer that represents X | (1 << bit).  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::set_bit (const T &x, unsigned int bit)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T) xi (x, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () | ((unsigned HOST_WIDE_INT) 1 << bit);
      result.set_len (1);
    }
  else
    result.set_len (set_bit_large (val, xi.val, xi.len, precision, bit));
  return result;
}

/* Return the minimum of X and Y, treating them both as having
   signedness SGN.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::min (const T1 &x, const T2 &y, signop sgn)
{
  WI_BINARY_RESULT_VAR (result, val ATTRIBUTE_UNUSED, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  if (wi::le_p (x, y, sgn))
    wi::copy (result, WIDE_INT_REF_FOR (T1) (x, precision));
  else
    wi::copy (result, WIDE_INT_REF_FOR (T2) (y, precision));
  return result;
}

/* Return the minimum of X and Y, treating both as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::smin (const T1 &x, const T2 &y)
{
  return wi::min (x, y, SIGNED);
}

/* Return the minimum of X and Y, treating both as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umin (const T1 &x, const T2 &y)
{
  return wi::min (x, y, UNSIGNED);
}

/* Return the maximum of X and Y, treating them both as having
   signedness SGN.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::max (const T1 &x, const T2 &y, signop sgn)
{
  WI_BINARY_RESULT_VAR (result, val ATTRIBUTE_UNUSED, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  if (wi::ge_p (x, y, sgn))
    wi::copy (result, WIDE_INT_REF_FOR (T1) (x, precision));
  else
    wi::copy (result, WIDE_INT_REF_FOR (T2) (y, precision));
  return result;
}

/* Return the maximum of X and Y, treating both as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::smax (const T1 &x, const T2 &y)
{
  return wi::max (x, y, SIGNED);
}

/* Return the maximum of X and Y, treating both as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umax (const T1 &x, const T2 &y)
{
  return wi::max (x, y, UNSIGNED);
}
/* Return X & Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_and (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () & yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (and_large (val, xi.val, xi.len, yi.val, yi.len,
                               precision), is_sign_extended);
  return result;
}

/* Return X & ~Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_and_not (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () & ~yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (and_not_large (val, xi.val, xi.len, yi.val, yi.len,
                                   precision), is_sign_extended);
  return result;
}

/* Return X | Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_or (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () | yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (or_large (val, xi.val, xi.len,
                              yi.val, yi.len, precision), is_sign_extended);
  return result;
}

/* Return X | ~Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_or_not (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () | ~yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (or_not_large (val, xi.val, xi.len, yi.val, yi.len,
                                  precision), is_sign_extended);
  return result;
}

/* Return X ^ Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_xor (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () ^ yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (xor_large (val, xi.val, xi.len,
                               yi.val, yi.len, precision), is_sign_extended);
  return result;
}
/* Return X + Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::add (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () + yi.ulow ();
      result.set_len (1);
    }
  /* If the precision is known at compile time to be greater than
     HOST_BITS_PER_WIDE_INT, we can optimize the single-HWI case
     knowing that (a) all bits in those HWIs are significant and
     (b) the result has room for at least two HWIs.  This provides
     a fast path for things like offset_int and widest_int.

     The STATIC_CONSTANT_P test prevents this path from being
     used for wide_ints.  wide_ints with precisions greater than
     HOST_BITS_PER_WIDE_INT are relatively rare and there's not much
     point handling them inline.  */
  else if (STATIC_CONSTANT_P (precision > HOST_BITS_PER_WIDE_INT)
           && __builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl + yl;
      val[0] = resultl;
      val[1] = (HOST_WIDE_INT) resultl < 0 ? 0 : -1;
      result.set_len (1 + (((resultl ^ xl) & (resultl ^ yl))
                           >> (HOST_BITS_PER_WIDE_INT - 1)));
    }
  else
    result.set_len (add_large (val, xi.val, xi.len,
                               yi.val, yi.len, precision,
                               UNSIGNED, 0));
  return result;
}
/* Return X + Y.  Treat X and Y as having the signedness given by SGN
   and indicate in *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::add (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl + yl;
      if (sgn == SIGNED)
        *overflow = (((resultl ^ xl) & (resultl ^ yl))
                     >> (precision - 1)) & 1;
      else
        *overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
                     < (xl << (HOST_BITS_PER_WIDE_INT - precision)));
      val[0] = resultl;
      result.set_len (1);
    }
  else
    result.set_len (add_large (val, xi.val, xi.len,
                               yi.val, yi.len, precision,
                               sgn, overflow));
  return result;
}
/* Return X - Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sub (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () - yi.ulow ();
      result.set_len (1);
    }
  /* If the precision is known at compile time to be greater than
     HOST_BITS_PER_WIDE_INT, we can optimize the single-HWI case
     knowing that (a) all bits in those HWIs are significant and
     (b) the result has room for at least two HWIs.  This provides
     a fast path for things like offset_int and widest_int.

     The STATIC_CONSTANT_P test prevents this path from being
     used for wide_ints.  wide_ints with precisions greater than
     HOST_BITS_PER_WIDE_INT are relatively rare and there's not much
     point handling them inline.  */
  else if (STATIC_CONSTANT_P (precision > HOST_BITS_PER_WIDE_INT)
           && __builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl - yl;
      val[0] = resultl;
      val[1] = (HOST_WIDE_INT) resultl < 0 ? 0 : -1;
      result.set_len (1 + (((resultl ^ xl) & (xl ^ yl))
                           >> (HOST_BITS_PER_WIDE_INT - 1)));
    }
  else
    result.set_len (sub_large (val, xi.val, xi.len,
                               yi.val, yi.len, precision,
                               UNSIGNED, 0));
  return result;
}
/* Return X - Y.  Treat X and Y as having the signedness given by SGN
   and indicate in *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sub (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl - yl;
      if (sgn == SIGNED)
        *overflow = (((xl ^ yl) & (resultl ^ xl)) >> (precision - 1)) & 1;
      else
        *overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
                     > (xl << (HOST_BITS_PER_WIDE_INT - precision)));
      val[0] = resultl;
      result.set_len (1);
    }
  else
    result.set_len (sub_large (val, xi.val, xi.len,
                               yi.val, yi.len, precision,
                               sgn, overflow));
  return result;
}
/* Return X * Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mul (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () * yi.ulow ();
      result.set_len (1);
    }
  else
    result.set_len (mul_internal (val, xi.val, xi.len, yi.val, yi.len,
                                  precision, UNSIGNED, 0, false));
  return result;
}

/* Return X * Y.  Treat X and Y as having the signedness given by SGN
   and indicate in *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mul (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  result.set_len (mul_internal (val, xi.val, xi.len,
                                yi.val, yi.len, precision,
                                sgn, overflow, false));
  return result;
}

/* Return X * Y, treating both X and Y as signed values.  Indicate in
   *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::smul (const T1 &x, const T2 &y, bool *overflow)
{
  return mul (x, y, SIGNED, overflow);
}

/* Return X * Y, treating both X and Y as unsigned values.  Indicate in
   *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umul (const T1 &x, const T2 &y, bool *overflow)
{
  return mul (x, y, UNSIGNED, overflow);
}

/* Perform a widening multiplication of X and Y, extending the values
   according to SGN, and return the high part of the result.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mul_high (const T1 &x, const T2 &y, signop sgn)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  result.set_len (mul_internal (val, xi.val, xi.len,
                                yi.val, yi.len, precision,
                                sgn, 0, true));
  return result;
}
/* Return X / Y, rounding towards 0.  Treat X and Y as having the
   signedness given by SGN.  Indicate in *OVERFLOW if the result
   overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_trunc (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  quotient.set_len (divmod_internal (quotient_val, 0, 0, xi.val, xi.len,
                                     precision,
                                     yi.val, yi.len, yi.precision,
                                     sgn, overflow));
  return quotient;
}

/* Return X / Y, rounding towards 0.  Treat X and Y as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sdiv_trunc (const T1 &x, const T2 &y)
{
  return div_trunc (x, y, SIGNED);
}

/* Return X / Y, rounding towards 0.  Treat X and Y as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::udiv_trunc (const T1 &x, const T2 &y)
{
  return div_trunc (x, y, UNSIGNED);
}

/* Return X / Y, rounding towards -inf.  Treat X and Y as having the
   signedness given by SGN.  Indicate in *OVERFLOW if the result
   overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_floor (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
                                     &remainder_len, remainder_val,
                                     xi.val, xi.len, precision,
                                     yi.val, yi.len, yi.precision, sgn,
                                     overflow));
  remainder.set_len (remainder_len);
  if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn) && remainder != 0)
    return quotient - 1;
  return quotient;
}
/* Return X / Y, rounding towards -inf.  Treat X and Y as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sdiv_floor (const T1 &x, const T2 &y)
{
  return div_floor (x, y, SIGNED);
}

/* Return X / Y, rounding towards -inf.  Treat X and Y as unsigned values.  */
/* ??? Why do we have both this and udiv_trunc?  Aren't they the same?  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::udiv_floor (const T1 &x, const T2 &y)
{
  return div_floor (x, y, UNSIGNED);
}

/* Return X / Y, rounding towards +inf.  Treat X and Y as having the
   signedness given by SGN.  Indicate in *OVERFLOW if the result
   overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_ceil (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
                                     &remainder_len, remainder_val,
                                     xi.val, xi.len, precision,
                                     yi.val, yi.len, yi.precision, sgn,
                                     overflow));
  remainder.set_len (remainder_len);
  if (wi::neg_p (x, sgn) == wi::neg_p (y, sgn) && remainder != 0)
    return quotient + 1;
  return quotient;
}
/* Return X / Y, rounding towards nearest with ties away from zero.
   Treat X and Y as having the signedness given by SGN.  Indicate
   in *OVERFLOW if the result overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_round (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
                                     &remainder_len, remainder_val,
                                     xi.val, xi.len, precision,
                                     yi.val, yi.len, yi.precision, sgn,
                                     overflow));
  remainder.set_len (remainder_len);

  if (remainder != 0)
    {
      if (sgn == SIGNED)
        {
          WI_BINARY_RESULT (T1, T2) abs_remainder = wi::abs (remainder);
          if (wi::geu_p (abs_remainder, wi::sub (wi::abs (y), abs_remainder)))
            {
              if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn))
                return quotient - 1;
              else
                return quotient + 1;
            }
        }
      else
        {
          if (wi::geu_p (remainder, wi::sub (y, remainder)))
            return quotient + 1;
        }
    }
  return quotient;
}
/* Return X / Y, rounding towards 0.  Treat X and Y as having the
   signedness given by SGN.  Store the remainder in *REMAINDER_PTR.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::divmod_trunc (const T1 &x, const T2 &y, signop sgn,
                  WI_BINARY_RESULT (T1, T2) *remainder_ptr)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
                                     &remainder_len, remainder_val,
                                     xi.val, xi.len, precision,
                                     yi.val, yi.len, yi.precision, sgn, 0));
  remainder.set_len (remainder_len);

  *remainder_ptr = remainder;
  return quotient;
}
/* Compute X / Y, rounding towards 0, and return the remainder.
   Treat X and Y as having the signedness given by SGN.  Indicate
   in *OVERFLOW if the division overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mod_trunc (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (remainder);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  divmod_internal (0, &remainder_len, remainder_val,
                   xi.val, xi.len, precision,
                   yi.val, yi.len, yi.precision, sgn, overflow);
  remainder.set_len (remainder_len);

  return remainder;
}

/* Compute X / Y, rounding towards 0, and return the remainder.
   Treat X and Y as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::smod_trunc (const T1 &x, const T2 &y)
{
  return mod_trunc (x, y, SIGNED);
}

/* Compute X / Y, rounding towards 0, and return the remainder.
   Treat X and Y as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umod_trunc (const T1 &x, const T2 &y)
{
  return mod_trunc (x, y, UNSIGNED);
}

/* Compute X / Y, rounding towards -inf, and return the remainder.
   Treat X and Y as having the signedness given by SGN.  Indicate
   in *OVERFLOW if the division overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mod_floor (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
                                     &remainder_len, remainder_val,
                                     xi.val, xi.len, precision,
                                     yi.val, yi.len, yi.precision, sgn,
                                     overflow));
  remainder.set_len (remainder_len);

  if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn) && remainder != 0)
    return remainder + y;
  return remainder;
}
/* Compute X / Y, rounding towards -inf, and return the remainder.
   Treat X and Y as unsigned values.  */
/* ??? Why do we have both this and umod_trunc?  Aren't they the same?  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umod_floor (const T1 &x, const T2 &y)
{
  return mod_floor (x, y, UNSIGNED);
}

/* Compute X / Y, rounding towards +inf, and return the remainder.
   Treat X and Y as having the signedness given by SGN.  Indicate
   in *OVERFLOW if the division overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mod_ceil (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
                                     &remainder_len, remainder_val,
                                     xi.val, xi.len, precision,
                                     yi.val, yi.len, yi.precision, sgn,
                                     overflow));
  remainder.set_len (remainder_len);

  if (wi::neg_p (x, sgn) == wi::neg_p (y, sgn) && remainder != 0)
    return remainder - y;
  return remainder;
}
/* Compute X / Y, rounding towards nearest with ties away from zero,
   and return the remainder.  Treat X and Y as having the signedness
   given by SGN.  Indicate in *OVERFLOW if the division overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mod_round (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
                                     &remainder_len, remainder_val,
                                     xi.val, xi.len, precision,
                                     yi.val, yi.len, yi.precision, sgn,
                                     overflow));
  remainder.set_len (remainder_len);

  if (remainder != 0)
    {
      if (sgn == SIGNED)
        {
          WI_BINARY_RESULT (T1, T2) abs_remainder = wi::abs (remainder);
          if (wi::geu_p (abs_remainder, wi::sub (wi::abs (y), abs_remainder)))
            {
              if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn))
                return remainder + y;
              else
                return remainder - y;
            }
        }
      else
        {
          if (wi::geu_p (remainder, wi::sub (y, remainder)))
            return remainder - y;
        }
    }
  return remainder;
}
/* Return true if X is a multiple of Y.  Treat X and Y as having the
   signedness given by SGN.  */
template <typename T1, typename T2>
inline bool
wi::multiple_of_p (const T1 &x, const T2 &y, signop sgn)
{
  return wi::mod_trunc (x, y, sgn) == 0;
}

/* Return true if X is a multiple of Y, storing X / Y in *RES if so.
   Treat X and Y as having the signedness given by SGN.  */
template <typename T1, typename T2>
inline bool
wi::multiple_of_p (const T1 &x, const T2 &y, signop sgn,
                   WI_BINARY_RESULT (T1, T2) *res)
{
  WI_BINARY_RESULT (T1, T2) remainder;
  WI_BINARY_RESULT (T1, T2) quotient
    = divmod_trunc (x, y, sgn, &remainder);
  if (remainder == 0)
    {
      *res = quotient;
      return true;
    }
  return false;
}
/* Return X << Y.  Return 0 if Y is greater than or equal to
   the precision of X.  */
template <typename T1, typename T2>
inline WI_UNARY_RESULT (T1)
wi::lshift (const T1 &x, const T2 &y)
{
  WI_UNARY_RESULT_VAR (result, val, T1, x);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);
  /* Handle the simple cases quickly.  */
  if (geu_p (yi, precision))
    {
      val[0] = 0;
      result.set_len (1);
    }
  else
    {
      unsigned int shift = yi.to_uhwi ();
      /* For fixed-precision integers like offset_int and widest_int,
	 handle the case where the shift value is constant and the
	 result is a single nonnegative HWI (meaning that we don't
	 need to worry about val[1]).  This is particularly common
	 for converting a byte count to a bit count.

	 For variable-precision integers like wide_int, handle HWI
	 and sub-HWI integers inline.  */
      if (STATIC_CONSTANT_P (xi.precision > HOST_BITS_PER_WIDE_INT)
	  ? (STATIC_CONSTANT_P (shift < HOST_BITS_PER_WIDE_INT - 1)
	     && xi.len == 1
	     && xi.val[0] <= (HOST_WIDE_INT) ((unsigned HOST_WIDE_INT)
					      HOST_WIDE_INT_MAX >> shift))
	  : precision <= HOST_BITS_PER_WIDE_INT)
	{
	  val[0] = xi.ulow () << shift;
	  result.set_len (1);
	}
      else
	result.set_len (lshift_large (val, xi.val, xi.len,
				      precision, shift));
    }
  return result;
}
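The single-HWI fast path above relies on the fact that a nonnegative value can be shifted left without leaving one HWI exactly when it is at most HOST_WIDE_INT_MAX >> shift. A sketch of that fits-after-shift test on int64_t (the function name is illustrative, not part of this header):

```cpp
#include <cassert>
#include <cstdint>

/* Mirror of the fast-path test above on int64_t: a nonnegative value
   still fits in a single signed 64-bit word after `<< shift` iff
   value <= INT64_MAX >> shift.  */
inline bool
fits_single_hwi_after_lshift (int64_t value, unsigned int shift)
{
  return value >= 0
	 && value <= (int64_t) ((uint64_t) INT64_MAX >> shift);
}
```

For instance, 1 << 62 still fits in a signed 64-bit word, but 1 << 63 would overflow into the sign bit.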
/* Return X >> Y, using a logical shift.  Return 0 if Y is greater than
   or equal to the precision of X.  */
template <typename T1, typename T2>
inline WI_UNARY_RESULT (T1)
wi::lrshift (const T1 &x, const T2 &y)
{
  WI_UNARY_RESULT_VAR (result, val, T1, x);
  /* Do things in the precision of the input rather than the output,
     since the result can be no larger than that.  */
  WIDE_INT_REF_FOR (T1) xi (x);
  WIDE_INT_REF_FOR (T2) yi (y);
  /* Handle the simple cases quickly.  */
  if (geu_p (yi, xi.precision))
    {
      val[0] = 0;
      result.set_len (1);
    }
  else
    {
      unsigned int shift = yi.to_uhwi ();
      /* For fixed-precision integers like offset_int and widest_int,
	 handle the case where the shift value is constant and the
	 shifted value is a single nonnegative HWI (meaning that all
	 bits above the HWI are zero).  This is particularly common
	 for converting a bit count to a byte count.

	 For variable-precision integers like wide_int, handle HWI
	 and sub-HWI integers inline.  */
      if (STATIC_CONSTANT_P (xi.precision > HOST_BITS_PER_WIDE_INT)
	  ? xi.len == 1 && xi.val[0] >= 0
	  : xi.precision <= HOST_BITS_PER_WIDE_INT)
	{
	  val[0] = xi.to_uhwi () >> shift;
	  result.set_len (1);
	}
      else
	result.set_len (lrshift_large (val, xi.val, xi.len, xi.precision,
				       get_precision (result), shift));
    }
  return result;
}
/* Return X >> Y, using an arithmetic shift.  Return a sign mask if
   Y is greater than or equal to the precision of X.  */
template <typename T1, typename T2>
inline WI_UNARY_RESULT (T1)
wi::arshift (const T1 &x, const T2 &y)
{
  WI_UNARY_RESULT_VAR (result, val, T1, x);
  /* Do things in the precision of the input rather than the output,
     since the result can be no larger than that.  */
  WIDE_INT_REF_FOR (T1) xi (x);
  WIDE_INT_REF_FOR (T2) yi (y);
  /* Handle the simple cases quickly.  */
  if (geu_p (yi, xi.precision))
    {
      val[0] = sign_mask (x);
      result.set_len (1);
    }
  else
    {
      unsigned int shift = yi.to_uhwi ();
      if (xi.precision <= HOST_BITS_PER_WIDE_INT)
	{
	  val[0] = sext_hwi (xi.ulow () >> shift, xi.precision - shift);
	  result.set_len (1, true);
	}
      else
	result.set_len (arshift_large (val, xi.val, xi.len, xi.precision,
				       get_precision (result), shift));
    }
  return result;
}
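The sub-HWI branch above does a logical shift and then sign-extends from the remaining precision - shift bits. A sketch of that pattern on a value held zero-extended in a uint64_t, including the flip-and-subtract form of sign extension (helper names are illustrative; GCC's real sext_hwi lives in hwint.h):

```cpp
#include <cassert>
#include <cstdint>

/* Sign-extend the low PREC bits of VAL (0 < PREC <= 64).  */
inline int64_t
sext_hwi_sketch (uint64_t val, unsigned int prec)
{
  uint64_t sign = (uint64_t) 1 << (prec - 1);
  /* XOR moves the sign bit to a known place; the subtraction then
     propagates it through the upper bits.  */
  return (int64_t) ((val ^ sign) - sign);
}

/* Arithmetic right shift of a PREC-bit value stored zero-extended in
   VAL, mirroring the sub-HWI branch of wi::arshift above.  */
inline int64_t
arshift_sketch (uint64_t val, unsigned int shift, unsigned int prec)
{
  return sext_hwi_sketch (val >> shift, prec - shift);
}
```

For example, the 8-bit value 0xF0 (-16) arithmetically shifted right by 2 yields -4.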
/* Return X >> Y, using an arithmetic shift if SGN is SIGNED and a
   logical shift otherwise.  */
template <typename T1, typename T2>
inline WI_UNARY_RESULT (T1)
wi::rshift (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == UNSIGNED)
    return lrshift (x, y);
  else
    return arshift (x, y);
}
/* Return the result of rotating the low WIDTH bits of X left by Y
   bits and zero-extending the result.  Use a full-width rotate if
   WIDTH is zero.  */
template <typename T1, typename T2>
WI_UNARY_RESULT (T1)
wi::lrotate (const T1 &x, const T2 &y, unsigned int width)
{
  unsigned int precision = get_binary_precision (x, x);
  if (width == 0)
    width = precision;
  WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
  WI_UNARY_RESULT (T1) left = wi::lshift (x, ymod);
  WI_UNARY_RESULT (T1) right = wi::lrshift (x, wi::sub (width, ymod));
  if (width != precision)
    return wi::zext (left, width) | wi::zext (right, width);
  return left | right;
}
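The rotate above is composed from two shifts OR'd together, with the result truncated back to WIDTH bits. The same composition on a uint64_t, with an explicit guard for the rotate-by-zero case (where a shift by the full width would be undefined behavior on native integers; this helper name is illustrative only):

```cpp
#include <cassert>
#include <cstdint>

/* Rotate the low WIDTH bits of X left by Y, zero-extending the result,
   following the shift-and-OR composition of wi::lrotate above.
   Assumes 0 < WIDTH <= 64.  */
inline uint64_t
lrotate_sketch (uint64_t x, unsigned int y, unsigned int width)
{
  uint64_t wmask = width == 64 ? ~(uint64_t) 0
			       : ((uint64_t) 1 << width) - 1;
  unsigned int ymod = y % width;	/* as umod_trunc (y, width) */
  x &= wmask;
  if (ymod == 0)			/* avoid an undefined full-width shift */
    return x;
  return ((x << ymod) | (x >> (width - ymod))) & wmask;
}
```

For instance, rotating the 8-bit value 0x81 left by 1 gives 0x03: the top bit wraps around into bit 0.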
/* Return the result of rotating the low WIDTH bits of X right by Y
   bits and zero-extending the result.  Use a full-width rotate if
   WIDTH is zero.  */
template <typename T1, typename T2>
WI_UNARY_RESULT (T1)
wi::rrotate (const T1 &x, const T2 &y, unsigned int width)
{
  unsigned int precision = get_binary_precision (x, x);
  if (width == 0)
    width = precision;
  WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
  WI_UNARY_RESULT (T1) right = wi::lrshift (x, ymod);
  WI_UNARY_RESULT (T1) left = wi::lshift (x, wi::sub (width, ymod));
  if (width != precision)
    return wi::zext (left, width) | wi::zext (right, width);
  return left | right;
}
/* Return 0 if the number of 1s in X is even and 1 if the number of 1s
   is odd.  */
inline int
wi::parity (const wide_int_ref &x)
{
  return popcount (x) & 1;
}
/* Extract WIDTH bits from X, starting at BITPOS.  */
template <typename T>
inline unsigned HOST_WIDE_INT
wi::extract_uhwi (const T &x, unsigned int bitpos, unsigned int width)
{
  unsigned precision = get_precision (x);
  if (precision < bitpos + width)
    precision = bitpos + width;
  WIDE_INT_REF_FOR (T) xi (x, precision);

  /* Handle this rare case after the above, so that we assert about
     bogus BITPOS values.  */
  if (width == 0)
    return 0;

  unsigned int start = bitpos / HOST_BITS_PER_WIDE_INT;
  unsigned int shift = bitpos % HOST_BITS_PER_WIDE_INT;
  unsigned HOST_WIDE_INT res = xi.elt (start);
  res >>= shift;
  if (shift + width > HOST_BITS_PER_WIDE_INT)
    {
      unsigned HOST_WIDE_INT upper = xi.elt (start + 1);
      res |= upper << (-shift % HOST_BITS_PER_WIDE_INT);
    }
  return zext_hwi (res, width);
}
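The extraction above reads one word, shifts the field into place, and, when the field straddles a word boundary, ORs in the spill-over bits from the next word (-shift mod HOST_BITS_PER_WIDE_INT equals the word size minus shift whenever that branch is taken). A sketch over a two-word little-endian value, assuming 64-bit words and width <= 64 (the helper name is hypothetical):

```cpp
#include <cassert>
#include <cstdint>

/* Extract WIDTH bits starting at BITPOS from the 128-bit value whose
   low and high 64-bit words are LO and HI, following the two-word
   logic of wi::extract_uhwi above.  Assumes 0 < WIDTH <= 64 and
   BITPOS + WIDTH <= 128, with the field confined to LO/HI.  */
inline uint64_t
extract2_sketch (uint64_t lo, uint64_t hi,
		 unsigned int bitpos, unsigned int width)
{
  uint64_t val[2] = { lo, hi };
  unsigned int start = bitpos / 64;
  unsigned int shift = bitpos % 64;
  uint64_t res = val[start] >> shift;
  if (shift + width > 64)
    /* The field straddles a word boundary: pull in the low bits of
       the next word.  shift > 0 here, so 64 - shift is in [1, 63].  */
    res |= val[start + 1] << (64 - shift);
  if (width < 64)
    res &= ((uint64_t) 1 << width) - 1;	/* as zext_hwi (res, width) */
  return res;
}
```

For example, extracting 16 bits at position 56 from the pair (0xFF00000000000000, 0x1) joins the top byte of the low word with the bottom byte of the high word, giving 0x1FF.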
/* Return the minimum precision needed to store X with sign SGN.  */
template <typename T>
inline unsigned int
wi::min_precision (const T &x, signop sgn)
{
  if (sgn == SIGNED)
    return get_precision (x) - clrsb (x);
  else
    return get_precision (x) - clz (x);
}
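The same formulas work on native integers with the GCC/Clang count-leading-zeros and count-leading-redundant-sign-bits builtins (note that __builtin_clzll is undefined for zero, so that case needs a guard; wi::clz handles it internally). A sketch for 64-bit values:

```cpp
#include <cassert>
#include <cstdint>

/* Minimum number of bits needed to store X as an unsigned value:
   precision - clz, with clz (0) treated as the full precision.  */
inline unsigned int
min_precision_unsigned (uint64_t x)
{
  return x ? 64 - __builtin_clzll (x) : 0;
}

/* Minimum number of bits needed to store X as a signed value:
   precision - clrsb, i.e. drop all redundant copies of the sign bit
   but keep one.  */
inline unsigned int
min_precision_signed (int64_t x)
{
  return 64 - __builtin_clrsbll (x);
}
```

For instance, 5 needs 3 bits unsigned (101) but 4 bits signed (0101), and -1 needs just 1 bit signed.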
template<typename T>
void
gt_ggc_mx (generic_wide_int <T> *)
{
}

template<typename T>
void
gt_pch_nx (generic_wide_int <T> *)
{
}

template<typename T>
void
gt_pch_nx (generic_wide_int <T> *, void (*) (void *, void *), void *)
{
}

template<int N>
void
gt_ggc_mx (trailing_wide_ints <N> *)
{
}

template<int N>
void
gt_pch_nx (trailing_wide_ints <N> *)
{
}

template<int N>
void
gt_pch_nx (trailing_wide_ints <N> *, void (*) (void *, void *), void *)
{
}
namespace wi
{
  /* Used for overloaded functions in which the only other acceptable
     scalar type is a pointer.  It stops a plain 0 from being treated
     as a null pointer.  */
  struct never_used1 {};
  struct never_used2 {};

  wide_int min_value (unsigned int, signop);
  wide_int min_value (never_used1 *);
  wide_int min_value (never_used2 *);
  wide_int max_value (unsigned int, signop);
  wide_int max_value (never_used1 *);
  wide_int max_value (never_used2 *);

  /* FIXME: this is target dependent, so should be elsewhere.
     It also seems to assume that CHAR_BIT == BITS_PER_UNIT.  */
  wide_int from_buffer (const unsigned char *, unsigned int);

#ifndef GENERATOR_FILE
  void to_mpz (const wide_int_ref &, mpz_t, signop);
#endif

  wide_int mask (unsigned int, bool, unsigned int);
  wide_int shifted_mask (unsigned int, unsigned int, bool, unsigned int);
  wide_int set_bit_in_zero (unsigned int, unsigned int);
  wide_int insert (const wide_int &x, const wide_int &y, unsigned int,
		   unsigned int);

  template <typename T>
  T mask (unsigned int, bool);

  template <typename T>
  T shifted_mask (unsigned int, unsigned int, bool);

  template <typename T>
  T set_bit_in_zero (unsigned int);

  unsigned int mask (HOST_WIDE_INT *, unsigned int, bool, unsigned int);
  unsigned int shifted_mask (HOST_WIDE_INT *, unsigned int, unsigned int,
			     bool, unsigned int);
  unsigned int from_array (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			   unsigned int, unsigned int, bool);
}
/* Return a PRECISION-bit integer in which the low WIDTH bits are set
   and the other bits are clear, or the inverse if NEGATE_P.  */
inline wide_int
wi::mask (unsigned int width, bool negate_p, unsigned int precision)
{
  wide_int result = wide_int::create (precision);
  result.set_len (mask (result.write_val (), width, negate_p, precision));
  return result;
}
/* Return a PRECISION-bit integer in which the low START bits are clear,
   the next WIDTH bits are set, and the other bits are clear,
   or the inverse if NEGATE_P.  */
inline wide_int
wi::shifted_mask (unsigned int start, unsigned int width, bool negate_p,
		  unsigned int precision)
{
  wide_int result = wide_int::create (precision);
  result.set_len (shifted_mask (result.write_val (), start, width, negate_p,
				precision));
  return result;
}
/* Return a PRECISION-bit integer in which bit BIT is set and all the
   others are clear.  */
inline wide_int
wi::set_bit_in_zero (unsigned int bit, unsigned int precision)
{
  return shifted_mask (bit, 1, false, precision);
}
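The three mask builders above reduce to familiar single-word bit tricks when the value fits in one word: a low mask, that mask shifted up, and the one-bit special case of a shifted mask. A sketch on uint64_t (names are illustrative; these are not the wi:: entry points):

```cpp
#include <cassert>
#include <cstdint>

/* Low-WIDTH-bits mask, or its inverse if NEGATE_P, as in wi::mask.
   Assumes WIDTH <= 64.  */
inline uint64_t
mask_sketch (unsigned int width, bool negate_p)
{
  uint64_t m = width >= 64 ? ~(uint64_t) 0
			   : ((uint64_t) 1 << width) - 1;
  return negate_p ? ~m : m;
}

/* WIDTH set bits starting at START, or the inverse if NEGATE_P,
   as in wi::shifted_mask.  Assumes START + WIDTH <= 64.  */
inline uint64_t
shifted_mask_sketch (unsigned int start, unsigned int width, bool negate_p)
{
  uint64_t m = mask_sketch (width, false) << start;
  return negate_p ? ~m : m;
}
```

wi::set_bit_in_zero is then just shifted_mask_sketch (bit, 1, false), mirroring the one-line definition above.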
/* Return an integer of type T in which the low WIDTH bits are set
   and the other bits are clear, or the inverse if NEGATE_P.  */
template <typename T>
inline T
wi::mask (unsigned int width, bool negate_p)
{
  STATIC_ASSERT (wi::int_traits<T>::precision);
  T result;
  result.set_len (mask (result.write_val (), width, negate_p,
			wi::int_traits <T>::precision));
  return result;
}
/* Return an integer of type T in which the low START bits are clear,
   the next WIDTH bits are set, and the other bits are clear, or the
   inverse if NEGATE_P.  */
template <typename T>
inline T
wi::shifted_mask (unsigned int start, unsigned int width, bool negate_p)
{
  STATIC_ASSERT (wi::int_traits<T>::precision);
  T result;
  result.set_len (shifted_mask (result.write_val (), start, width,
				negate_p,
				wi::int_traits <T>::precision));
  return result;
}
/* Return an integer of type T in which bit BIT is set and all the
   others are clear.  */
template <typename T>
inline T
wi::set_bit_in_zero (unsigned int bit)
{
  return shifted_mask <T> (bit, 1, false);
}

#endif /* WIDE_INT_H */