PR c/68024
[official-gcc.git] / gcc / wide-int.h
blobff156791547229c91c0d69dfe74d1d7a2ae06ee7
1 /* Operations with very long integers. -*- C++ -*-
2 Copyright (C) 2012-2015 Free Software Foundation, Inc.
4 This file is part of GCC.
6 GCC is free software; you can redistribute it and/or modify it
7 under the terms of the GNU General Public License as published by the
8 Free Software Foundation; either version 3, or (at your option) any
9 later version.
11 GCC is distributed in the hope that it will be useful, but WITHOUT
12 ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
13 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
14 for more details.
16 You should have received a copy of the GNU General Public License
17 along with GCC; see the file COPYING3. If not see
18 <http://www.gnu.org/licenses/>. */
20 #ifndef WIDE_INT_H
21 #define WIDE_INT_H
23 /* wide-int.[cc|h] implements a class that efficiently performs
24 mathematical operations on finite precision integers. wide_ints
25 are designed to be transient - they are not for long term storage
26 of values. There is tight integration between wide_ints and the
27 other longer storage GCC representations (rtl and tree).
29 The actual precision of a wide_int depends on the flavor. There
30 are three predefined flavors:
32 1) wide_int (the default). This flavor does the math in the
33 precision of its input arguments. It is assumed (and checked)
34 that the precisions of the operands and results are consistent.
35 This is the most efficient flavor. It is not possible to examine
36 bits above the precision that has been specified. Because of
37 this, the default flavor has semantics that are simple to
38 understand and in general model the underlying hardware that the
39 compiler is targetted for.
41 This flavor must be used at the RTL level of gcc because there
42 is, in general, not enough information in the RTL representation
43 to extend a value beyond the precision specified in the mode.
45 This flavor should also be used at the TREE and GIMPLE levels of
46 the compiler except for the circumstances described in the
47 descriptions of the other two flavors.
49 The default wide_int representation does not contain any
50 information inherent about signedness of the represented value,
51 so it can be used to represent both signed and unsigned numbers.
52 For operations where the results depend on signedness (full width
53 multiply, division, shifts, comparisons, and operations that need
54 overflow detected), the signedness must be specified separately.
56 2) offset_int. This is a fixed size representation that is
57 guaranteed to be large enough to compute any bit or byte sized
58 address calculation on the target. Currently the value is 64 + 4
59 bits rounded up to the next number even multiple of
60 HOST_BITS_PER_WIDE_INT (but this can be changed when the first
61 port needs more than 64 bits for the size of a pointer).
63 This flavor can be used for all address math on the target. In
64 this representation, the values are sign or zero extended based
65 on their input types to the internal precision. All math is done
66 in this precision and then the values are truncated to fit in the
67 result type. Unlike most gimple or rtl intermediate code, it is
68 not useful to perform the address arithmetic at the same
69 precision in which the operands are represented because there has
70 been no effort by the front ends to convert most addressing
71 arithmetic to canonical types.
73 3) widest_int. This representation is an approximation of
74 infinite precision math. However, it is not really infinite
75 precision math as in the GMP library. It is really finite
76 precision math where the precision is 4 times the size of the
77 largest integer that the target port can represent.
79 widest_int is supposed to be wider than any number that it needs to
80 store, meaning that there is always at least one leading sign bit.
81 All widest_int values are therefore signed.
83 There are several places in the GCC where this should/must be used:
85 * Code that does induction variable optimizations. This code
86 works with induction variables of many different types at the
87 same time. Because of this, it ends up doing many different
88 calculations where the operands are not compatible types. The
89 widest_int makes this easy, because it provides a field where
90 nothing is lost when converting from any variable,
92 * There are a small number of passes that currently use the
93 widest_int that should use the default. These should be
94 changed.
96 There are surprising features of offset_int and widest_int
97 that the users should be careful about:
99 1) Shifts and rotations are just weird. You have to specify a
100 precision in which the shift or rotate is to happen in. The bits
101 above this precision are zeroed. While this is what you
102 want, it is clearly non obvious.
104 2) Larger precision math sometimes does not produce the same
105 answer as would be expected for doing the math at the proper
106 precision. In particular, a multiply followed by a divide will
107 produce a different answer if the first product is larger than
108 what can be represented in the input precision.
110 The offset_int and the widest_int flavors are more expensive
111 than the default wide int, so in addition to the caveats with these
112 two, the default is the prefered representation.
114 All three flavors of wide_int are represented as a vector of
115 HOST_WIDE_INTs. The default and widest_int vectors contain enough elements
116 to hold a value of MAX_BITSIZE_MODE_ANY_INT bits. offset_int contains only
117 enough elements to hold ADDR_MAX_PRECISION bits. The values are stored
118 in the vector with the least significant HOST_BITS_PER_WIDE_INT bits
119 in element 0.
121 The default wide_int contains three fields: the vector (VAL),
122 the precision and a length (LEN). The length is the number of HWIs
123 needed to represent the value. widest_int and offset_int have a
124 constant precision that cannot be changed, so they only store the
125 VAL and LEN fields.
127 Since most integers used in a compiler are small values, it is
128 generally profitable to use a representation of the value that is
129 as small as possible. LEN is used to indicate the number of
130 elements of the vector that are in use. The numbers are stored as
131 sign extended numbers as a means of compression. Leading
132 HOST_WIDE_INTs that contain strings of either -1 or 0 are removed
133 as long as they can be reconstructed from the top bit that is being
134 represented.
136 The precision and length of a wide_int are always greater than 0.
137 Any bits in a wide_int above the precision are sign-extended from the
138 most significant bit. For example, a 4-bit value 0x8 is represented as
139 VAL = { 0xf...fff8 }. However, as an optimization, we allow other integer
140 constants to be represented with undefined bits above the precision.
141 This allows INTEGER_CSTs to be pre-extended according to TYPE_SIGN,
142 so that the INTEGER_CST representation can be used both in TYPE_PRECISION
143 and in wider precisions.
145 There are constructors to create the various forms of wide_int from
146 trees, rtl and constants. For trees you can simply say:
148 tree t = ...;
149 wide_int x = t;
151 However, a little more syntax is required for rtl constants since
152 they do not have an explicit precision. To make an rtl into a
153 wide_int, you have to pair it with a mode. The canonical way to do
154 this is with std::make_pair as in:
156 rtx r = ...
157 wide_int x = std::make_pair (r, mode);
159 Similarly, a wide_int can only be constructed from a host value if
160 the target precision is given explicitly, such as in:
162 wide_int x = wi::shwi (c, prec); // sign-extend C if necessary
163 wide_int y = wi::uhwi (c, prec); // zero-extend C if necessary
165 However, offset_int and widest_int have an inherent precision and so
166 can be initialized directly from a host value:
168 offset_int x = (int) c; // sign-extend C
169 widest_int x = (unsigned int) c; // zero-extend C
171 It is also possible to do arithmetic directly on trees, rtxes and
172 constants. For example:
174 wi::add (t1, t2); // add equal-sized INTEGER_CSTs t1 and t2
175 wi::add (t1, 1); // add 1 to INTEGER_CST t1
176 wi::add (r1, r2); // add equal-sized rtx constants r1 and r2
177 wi::lshift (1, 100); // 1 << 100 as a widest_int
179 Many binary operations place restrictions on the combinations of inputs,
180 using the following rules:
182 - {tree, rtx, wide_int} op {tree, rtx, wide_int} -> wide_int
183 The inputs must be the same precision. The result is a wide_int
184 of the same precision
186 - {tree, rtx, wide_int} op (un)signed HOST_WIDE_INT -> wide_int
187 (un)signed HOST_WIDE_INT op {tree, rtx, wide_int} -> wide_int
188 The HOST_WIDE_INT is extended or truncated to the precision of
189 the other input. The result is a wide_int of the same precision
190 as that input.
192 - (un)signed HOST_WIDE_INT op (un)signed HOST_WIDE_INT -> widest_int
193 The inputs are extended to widest_int precision and produce a
194 widest_int result.
196 - offset_int op offset_int -> offset_int
197 offset_int op (un)signed HOST_WIDE_INT -> offset_int
198 (un)signed HOST_WIDE_INT op offset_int -> offset_int
200 - widest_int op widest_int -> widest_int
201 widest_int op (un)signed HOST_WIDE_INT -> widest_int
202 (un)signed HOST_WIDE_INT op widest_int -> widest_int
204 Other combinations like:
206 - widest_int op offset_int and
207 - wide_int op offset_int
209 are not allowed. The inputs should instead be extended or truncated
210 so that they match.
212 The inputs to comparison functions like wi::eq_p and wi::lts_p
213 follow the same compatibility rules, although their return types
214 are different. Unary functions on X produce the same result as
215 a binary operation X + X. Shift functions X op Y also produce
216 the same result as X + X; the precision of the shift amount Y
217 can be arbitrarily different from X. */
219 /* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very
220 early examination of the target's mode file. The WIDE_INT_MAX_ELTS
221 can accomodate at least 1 more bit so that unsigned numbers of that
222 mode can be represented as a signed value. Note that it is still
223 possible to create fixed_wide_ints that have precisions greater than
224 MAX_BITSIZE_MODE_ANY_INT. This can be useful when representing a
225 double-width multiplication result, for example. */
226 #define WIDE_INT_MAX_ELTS \
227 ((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT) / HOST_BITS_PER_WIDE_INT)
229 #define WIDE_INT_MAX_PRECISION (WIDE_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
231 /* This is the max size of any pointer on any machine. It does not
232 seem to be as easy to sniff this out of the machine description as
233 it is for MAX_BITSIZE_MODE_ANY_INT since targets may support
234 multiple address sizes and may have different address sizes for
235 different address spaces. However, currently the largest pointer
236 on any platform is 64 bits. When that changes, then it is likely
237 that a target hook should be defined so that targets can make this
238 value larger for those targets. */
239 #define ADDR_MAX_BITSIZE 64
241 /* This is the internal precision used when doing any address
242 arithmetic. The '4' is really 3 + 1. Three of the bits are for
243 the number of extra bits needed to do bit addresses and the other bit
244 is to allow everything to be signed without loosing any precision.
245 Then everything is rounded up to the next HWI for efficiency. */
246 #define ADDR_MAX_PRECISION \
247 ((ADDR_MAX_BITSIZE + 4 + HOST_BITS_PER_WIDE_INT - 1) \
248 & ~(HOST_BITS_PER_WIDE_INT - 1))
250 /* The number of HWIs needed to store an offset_int. */
251 #define OFFSET_INT_ELTS (ADDR_MAX_PRECISION / HOST_BITS_PER_WIDE_INT)
253 /* The type of result produced by a binary operation on types T1 and T2.
254 Defined purely for brevity. */
255 #define WI_BINARY_RESULT(T1, T2) \
256 typename wi::binary_traits <T1, T2>::result_type
258 /* The type of result produced by a unary operation on type T. */
259 #define WI_UNARY_RESULT(T) \
260 typename wi::unary_traits <T>::result_type
262 /* Define a variable RESULT to hold the result of a binary operation on
263 X and Y, which have types T1 and T2 respectively. Define VAL to
264 point to the blocks of RESULT. Once the user of the macro has
265 filled in VAL, it should call RESULT.set_len to set the number
266 of initialized blocks. */
267 #define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
268 WI_BINARY_RESULT (T1, T2) RESULT = \
269 wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
270 HOST_WIDE_INT *VAL = RESULT.write_val ()
272 /* Similar for the result of a unary operation on X, which has type T. */
273 #define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
274 WI_UNARY_RESULT (T) RESULT = \
275 wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
276 HOST_WIDE_INT *VAL = RESULT.write_val ()
278 template <typename T> class generic_wide_int;
279 template <int N> struct fixed_wide_int_storage;
280 class wide_int_storage;
282 /* An N-bit integer. Until we can use typedef templates, use this instead. */
283 #define FIXED_WIDE_INT(N) \
284 generic_wide_int < fixed_wide_int_storage <N> >
286 typedef generic_wide_int <wide_int_storage> wide_int;
287 typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) offset_int;
288 typedef FIXED_WIDE_INT (WIDE_INT_MAX_PRECISION) widest_int;
290 template <bool SE>
291 struct wide_int_ref_storage;
293 typedef generic_wide_int <wide_int_ref_storage <false> > wide_int_ref;
295 /* This can be used instead of wide_int_ref if the referenced value is
296 known to have type T. It carries across properties of T's representation,
297 such as whether excess upper bits in a HWI are defined, and can therefore
298 help avoid redundant work.
300 The macro could be replaced with a template typedef, once we're able
301 to use those. */
302 #define WIDE_INT_REF_FOR(T) \
303 generic_wide_int \
304 <wide_int_ref_storage <wi::int_traits <T>::is_sign_extended> >
306 namespace wi
308 /* Classifies an integer based on its precision. */
309 enum precision_type {
310 /* The integer has both a precision and defined signedness. This allows
311 the integer to be converted to any width, since we know whether to fill
312 any extra bits with zeros or signs. */
313 FLEXIBLE_PRECISION,
315 /* The integer has a variable precision but no defined signedness. */
316 VAR_PRECISION,
318 /* The integer has a constant precision (known at GCC compile time)
319 but no defined signedness. */
320 CONST_PRECISION
323 /* This class, which has no default implementation, is expected to
324 provide the following members:
326 static const enum precision_type precision_type;
327 Classifies the type of T.
329 static const unsigned int precision;
330 Only defined if precision_type == CONST_PRECISION. Specifies the
331 precision of all integers of type T.
333 static const bool host_dependent_precision;
334 True if the precision of T depends (or can depend) on the host.
336 static unsigned int get_precision (const T &x)
337 Return the number of bits in X.
339 static wi::storage_ref *decompose (HOST_WIDE_INT *scratch,
340 unsigned int precision, const T &x)
341 Decompose X as a PRECISION-bit integer, returning the associated
342 wi::storage_ref. SCRATCH is available as scratch space if needed.
343 The routine should assert that PRECISION is acceptable. */
344 template <typename T> struct int_traits;
346 /* This class provides a single type, result_type, which specifies the
347 type of integer produced by a binary operation whose inputs have
348 types T1 and T2. The definition should be symmetric. */
349 template <typename T1, typename T2,
350 enum precision_type P1 = int_traits <T1>::precision_type,
351 enum precision_type P2 = int_traits <T2>::precision_type>
352 struct binary_traits;
354 /* The result of a unary operation on T is the same as the result of
355 a binary operation on two values of type T. */
356 template <typename T>
357 struct unary_traits : public binary_traits <T, T> {};
359 /* Specify the result type for each supported combination of binary
360 inputs. Note that CONST_PRECISION and VAR_PRECISION cannot be
361 mixed, in order to give stronger type checking. When both inputs
362 are CONST_PRECISION, they must have the same precision. */
363 template <typename T1, typename T2>
364 struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
366 typedef widest_int result_type;
369 template <typename T1, typename T2>
370 struct binary_traits <T1, T2, FLEXIBLE_PRECISION, VAR_PRECISION>
372 typedef wide_int result_type;
375 template <typename T1, typename T2>
376 struct binary_traits <T1, T2, FLEXIBLE_PRECISION, CONST_PRECISION>
378 /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
379 so as not to confuse gengtype. */
380 typedef generic_wide_int < fixed_wide_int_storage
381 <int_traits <T2>::precision> > result_type;
384 template <typename T1, typename T2>
385 struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
387 typedef wide_int result_type;
390 template <typename T1, typename T2>
391 struct binary_traits <T1, T2, CONST_PRECISION, FLEXIBLE_PRECISION>
393 /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
394 so as not to confuse gengtype. */
395 typedef generic_wide_int < fixed_wide_int_storage
396 <int_traits <T1>::precision> > result_type;
399 template <typename T1, typename T2>
400 struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
402 /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
403 so as not to confuse gengtype. */
404 STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
405 typedef generic_wide_int < fixed_wide_int_storage
406 <int_traits <T1>::precision> > result_type;
409 template <typename T1, typename T2>
410 struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
412 typedef wide_int result_type;
416 /* Public functions for querying and operating on integers. */
417 namespace wi
419 template <typename T>
420 unsigned int get_precision (const T &);
422 template <typename T1, typename T2>
423 unsigned int get_binary_precision (const T1 &, const T2 &);
425 template <typename T1, typename T2>
426 void copy (T1 &, const T2 &);
428 #define UNARY_PREDICATE \
429 template <typename T> bool
430 #define UNARY_FUNCTION \
431 template <typename T> WI_UNARY_RESULT (T)
432 #define BINARY_PREDICATE \
433 template <typename T1, typename T2> bool
434 #define BINARY_FUNCTION \
435 template <typename T1, typename T2> WI_BINARY_RESULT (T1, T2)
436 #define SHIFT_FUNCTION \
437 template <typename T1, typename T2> WI_UNARY_RESULT (T1)
439 UNARY_PREDICATE fits_shwi_p (const T &);
440 UNARY_PREDICATE fits_uhwi_p (const T &);
441 UNARY_PREDICATE neg_p (const T &, signop = SIGNED);
443 template <typename T>
444 HOST_WIDE_INT sign_mask (const T &);
446 BINARY_PREDICATE eq_p (const T1 &, const T2 &);
447 BINARY_PREDICATE ne_p (const T1 &, const T2 &);
448 BINARY_PREDICATE lt_p (const T1 &, const T2 &, signop);
449 BINARY_PREDICATE lts_p (const T1 &, const T2 &);
450 BINARY_PREDICATE ltu_p (const T1 &, const T2 &);
451 BINARY_PREDICATE le_p (const T1 &, const T2 &, signop);
452 BINARY_PREDICATE les_p (const T1 &, const T2 &);
453 BINARY_PREDICATE leu_p (const T1 &, const T2 &);
454 BINARY_PREDICATE gt_p (const T1 &, const T2 &, signop);
455 BINARY_PREDICATE gts_p (const T1 &, const T2 &);
456 BINARY_PREDICATE gtu_p (const T1 &, const T2 &);
457 BINARY_PREDICATE ge_p (const T1 &, const T2 &, signop);
458 BINARY_PREDICATE ges_p (const T1 &, const T2 &);
459 BINARY_PREDICATE geu_p (const T1 &, const T2 &);
461 template <typename T1, typename T2>
462 int cmp (const T1 &, const T2 &, signop);
464 template <typename T1, typename T2>
465 int cmps (const T1 &, const T2 &);
467 template <typename T1, typename T2>
468 int cmpu (const T1 &, const T2 &);
470 UNARY_FUNCTION bit_not (const T &);
471 UNARY_FUNCTION neg (const T &);
472 UNARY_FUNCTION neg (const T &, bool *);
473 UNARY_FUNCTION abs (const T &);
474 UNARY_FUNCTION ext (const T &, unsigned int, signop);
475 UNARY_FUNCTION sext (const T &, unsigned int);
476 UNARY_FUNCTION zext (const T &, unsigned int);
477 UNARY_FUNCTION set_bit (const T &, unsigned int);
479 BINARY_FUNCTION min (const T1 &, const T2 &, signop);
480 BINARY_FUNCTION smin (const T1 &, const T2 &);
481 BINARY_FUNCTION umin (const T1 &, const T2 &);
482 BINARY_FUNCTION max (const T1 &, const T2 &, signop);
483 BINARY_FUNCTION smax (const T1 &, const T2 &);
484 BINARY_FUNCTION umax (const T1 &, const T2 &);
486 BINARY_FUNCTION bit_and (const T1 &, const T2 &);
487 BINARY_FUNCTION bit_and_not (const T1 &, const T2 &);
488 BINARY_FUNCTION bit_or (const T1 &, const T2 &);
489 BINARY_FUNCTION bit_or_not (const T1 &, const T2 &);
490 BINARY_FUNCTION bit_xor (const T1 &, const T2 &);
491 BINARY_FUNCTION add (const T1 &, const T2 &);
492 BINARY_FUNCTION add (const T1 &, const T2 &, signop, bool *);
493 BINARY_FUNCTION sub (const T1 &, const T2 &);
494 BINARY_FUNCTION sub (const T1 &, const T2 &, signop, bool *);
495 BINARY_FUNCTION mul (const T1 &, const T2 &);
496 BINARY_FUNCTION mul (const T1 &, const T2 &, signop, bool *);
497 BINARY_FUNCTION smul (const T1 &, const T2 &, bool *);
498 BINARY_FUNCTION umul (const T1 &, const T2 &, bool *);
499 BINARY_FUNCTION mul_high (const T1 &, const T2 &, signop);
500 BINARY_FUNCTION div_trunc (const T1 &, const T2 &, signop, bool * = 0);
501 BINARY_FUNCTION sdiv_trunc (const T1 &, const T2 &);
502 BINARY_FUNCTION udiv_trunc (const T1 &, const T2 &);
503 BINARY_FUNCTION div_floor (const T1 &, const T2 &, signop, bool * = 0);
504 BINARY_FUNCTION udiv_floor (const T1 &, const T2 &);
505 BINARY_FUNCTION sdiv_floor (const T1 &, const T2 &);
506 BINARY_FUNCTION div_ceil (const T1 &, const T2 &, signop, bool * = 0);
507 BINARY_FUNCTION div_round (const T1 &, const T2 &, signop, bool * = 0);
508 BINARY_FUNCTION divmod_trunc (const T1 &, const T2 &, signop,
509 WI_BINARY_RESULT (T1, T2) *);
510 BINARY_FUNCTION gcd (const T1 &, const T2 &, signop = UNSIGNED);
511 BINARY_FUNCTION mod_trunc (const T1 &, const T2 &, signop, bool * = 0);
512 BINARY_FUNCTION smod_trunc (const T1 &, const T2 &);
513 BINARY_FUNCTION umod_trunc (const T1 &, const T2 &);
514 BINARY_FUNCTION mod_floor (const T1 &, const T2 &, signop, bool * = 0);
515 BINARY_FUNCTION umod_floor (const T1 &, const T2 &);
516 BINARY_FUNCTION mod_ceil (const T1 &, const T2 &, signop, bool * = 0);
517 BINARY_FUNCTION mod_round (const T1 &, const T2 &, signop, bool * = 0);
519 template <typename T1, typename T2>
520 bool multiple_of_p (const T1 &, const T2 &, signop);
522 template <typename T1, typename T2>
523 bool multiple_of_p (const T1 &, const T2 &, signop,
524 WI_BINARY_RESULT (T1, T2) *);
526 SHIFT_FUNCTION lshift (const T1 &, const T2 &);
527 SHIFT_FUNCTION lrshift (const T1 &, const T2 &);
528 SHIFT_FUNCTION arshift (const T1 &, const T2 &);
529 SHIFT_FUNCTION rshift (const T1 &, const T2 &, signop sgn);
530 SHIFT_FUNCTION lrotate (const T1 &, const T2 &, unsigned int = 0);
531 SHIFT_FUNCTION rrotate (const T1 &, const T2 &, unsigned int = 0);
533 #undef SHIFT_FUNCTION
534 #undef BINARY_PREDICATE
535 #undef BINARY_FUNCTION
536 #undef UNARY_PREDICATE
537 #undef UNARY_FUNCTION
539 bool only_sign_bit_p (const wide_int_ref &, unsigned int);
540 bool only_sign_bit_p (const wide_int_ref &);
541 int clz (const wide_int_ref &);
542 int clrsb (const wide_int_ref &);
543 int ctz (const wide_int_ref &);
544 int exact_log2 (const wide_int_ref &);
545 int floor_log2 (const wide_int_ref &);
546 int ffs (const wide_int_ref &);
547 int popcount (const wide_int_ref &);
548 int parity (const wide_int_ref &);
550 template <typename T>
551 unsigned HOST_WIDE_INT extract_uhwi (const T &, unsigned int, unsigned int);
553 template <typename T>
554 unsigned int min_precision (const T &, signop);
557 namespace wi
559 /* Contains the components of a decomposed integer for easy, direct
560 access. */
561 struct storage_ref
563 storage_ref (const HOST_WIDE_INT *, unsigned int, unsigned int);
565 const HOST_WIDE_INT *val;
566 unsigned int len;
567 unsigned int precision;
569 /* Provide enough trappings for this class to act as storage for
570 generic_wide_int. */
571 unsigned int get_len () const;
572 unsigned int get_precision () const;
573 const HOST_WIDE_INT *get_val () const;
577 inline::wi::storage_ref::storage_ref (const HOST_WIDE_INT *val_in,
578 unsigned int len_in,
579 unsigned int precision_in)
580 : val (val_in), len (len_in), precision (precision_in)
584 inline unsigned int
585 wi::storage_ref::get_len () const
587 return len;
590 inline unsigned int
591 wi::storage_ref::get_precision () const
593 return precision;
596 inline const HOST_WIDE_INT *
597 wi::storage_ref::get_val () const
599 return val;
602 /* This class defines an integer type using the storage provided by the
603 template argument. The storage class must provide the following
604 functions:
606 unsigned int get_precision () const
607 Return the number of bits in the integer.
609 HOST_WIDE_INT *get_val () const
610 Return a pointer to the array of blocks that encodes the integer.
612 unsigned int get_len () const
613 Return the number of blocks in get_val (). If this is smaller
614 than the number of blocks implied by get_precision (), the
615 remaining blocks are sign extensions of block get_len () - 1.
617 Although not required by generic_wide_int itself, writable storage
618 classes can also provide the following functions:
620 HOST_WIDE_INT *write_val ()
621 Get a modifiable version of get_val ()
623 unsigned int set_len (unsigned int len)
624 Set the value returned by get_len () to LEN. */
625 template <typename storage>
626 class GTY(()) generic_wide_int : public storage
628 public:
629 generic_wide_int ();
631 template <typename T>
632 generic_wide_int (const T &);
634 template <typename T>
635 generic_wide_int (const T &, unsigned int);
637 /* Conversions. */
638 HOST_WIDE_INT to_shwi (unsigned int) const;
639 HOST_WIDE_INT to_shwi () const;
640 unsigned HOST_WIDE_INT to_uhwi (unsigned int) const;
641 unsigned HOST_WIDE_INT to_uhwi () const;
642 HOST_WIDE_INT to_short_addr () const;
644 /* Public accessors for the interior of a wide int. */
645 HOST_WIDE_INT sign_mask () const;
646 HOST_WIDE_INT elt (unsigned int) const;
647 unsigned HOST_WIDE_INT ulow () const;
648 unsigned HOST_WIDE_INT uhigh () const;
649 HOST_WIDE_INT slow () const;
650 HOST_WIDE_INT shigh () const;
652 template <typename T>
653 generic_wide_int &operator = (const T &);
655 #define BINARY_PREDICATE(OP, F) \
656 template <typename T> \
657 bool OP (const T &c) const { return wi::F (*this, c); }
659 #define UNARY_OPERATOR(OP, F) \
660 WI_UNARY_RESULT (generic_wide_int) OP () const { return wi::F (*this); }
662 #define BINARY_OPERATOR(OP, F) \
663 template <typename T> \
664 WI_BINARY_RESULT (generic_wide_int, T) \
665 OP (const T &c) const { return wi::F (*this, c); }
667 #define ASSIGNMENT_OPERATOR(OP, F) \
668 template <typename T> \
669 generic_wide_int &OP (const T &c) { return (*this = wi::F (*this, c)); }
671 #define INCDEC_OPERATOR(OP, DELTA) \
672 generic_wide_int &OP () { *this += DELTA; return *this; }
674 UNARY_OPERATOR (operator ~, bit_not)
675 UNARY_OPERATOR (operator -, neg)
676 BINARY_PREDICATE (operator ==, eq_p)
677 BINARY_PREDICATE (operator !=, ne_p)
678 BINARY_OPERATOR (operator &, bit_and)
679 BINARY_OPERATOR (and_not, bit_and_not)
680 BINARY_OPERATOR (operator |, bit_or)
681 BINARY_OPERATOR (or_not, bit_or_not)
682 BINARY_OPERATOR (operator ^, bit_xor)
683 BINARY_OPERATOR (operator +, add)
684 BINARY_OPERATOR (operator -, sub)
685 BINARY_OPERATOR (operator *, mul)
686 ASSIGNMENT_OPERATOR (operator &=, bit_and)
687 ASSIGNMENT_OPERATOR (operator |=, bit_or)
688 ASSIGNMENT_OPERATOR (operator ^=, bit_xor)
689 ASSIGNMENT_OPERATOR (operator +=, add)
690 ASSIGNMENT_OPERATOR (operator -=, sub)
691 ASSIGNMENT_OPERATOR (operator *=, mul)
692 INCDEC_OPERATOR (operator ++, 1)
693 INCDEC_OPERATOR (operator --, -1)
695 #undef BINARY_PREDICATE
696 #undef UNARY_OPERATOR
697 #undef BINARY_OPERATOR
698 #undef ASSIGNMENT_OPERATOR
699 #undef INCDEC_OPERATOR
701 /* Debugging functions. */
702 void dump () const;
704 static const bool is_sign_extended
705 = wi::int_traits <generic_wide_int <storage> >::is_sign_extended;
708 template <typename storage>
709 inline generic_wide_int <storage>::generic_wide_int () {}
711 template <typename storage>
712 template <typename T>
713 inline generic_wide_int <storage>::generic_wide_int (const T &x)
714 : storage (x)
718 template <typename storage>
719 template <typename T>
720 inline generic_wide_int <storage>::generic_wide_int (const T &x,
721 unsigned int precision)
722 : storage (x, precision)
726 /* Return THIS as a signed HOST_WIDE_INT, sign-extending from PRECISION.
727 If THIS does not fit in PRECISION, the information is lost. */
728 template <typename storage>
729 inline HOST_WIDE_INT
730 generic_wide_int <storage>::to_shwi (unsigned int precision) const
732 if (precision < HOST_BITS_PER_WIDE_INT)
733 return sext_hwi (this->get_val ()[0], precision);
734 else
735 return this->get_val ()[0];
738 /* Return THIS as a signed HOST_WIDE_INT, in its natural precision. */
739 template <typename storage>
740 inline HOST_WIDE_INT
741 generic_wide_int <storage>::to_shwi () const
743 if (is_sign_extended)
744 return this->get_val ()[0];
745 else
746 return to_shwi (this->get_precision ());
749 /* Return THIS as an unsigned HOST_WIDE_INT, zero-extending from
750 PRECISION. If THIS does not fit in PRECISION, the information
751 is lost. */
752 template <typename storage>
753 inline unsigned HOST_WIDE_INT
754 generic_wide_int <storage>::to_uhwi (unsigned int precision) const
756 if (precision < HOST_BITS_PER_WIDE_INT)
757 return zext_hwi (this->get_val ()[0], precision);
758 else
759 return this->get_val ()[0];
762 /* Return THIS as an signed HOST_WIDE_INT, in its natural precision. */
763 template <typename storage>
764 inline unsigned HOST_WIDE_INT
765 generic_wide_int <storage>::to_uhwi () const
767 return to_uhwi (this->get_precision ());
770 /* TODO: The compiler is half converted from using HOST_WIDE_INT to
771 represent addresses to using offset_int to represent addresses.
772 We use to_short_addr at the interface from new code to old,
773 unconverted code. */
774 template <typename storage>
775 inline HOST_WIDE_INT
776 generic_wide_int <storage>::to_short_addr () const
778 return this->get_val ()[0];
781 /* Return the implicit value of blocks above get_len (). */
782 template <typename storage>
783 inline HOST_WIDE_INT
784 generic_wide_int <storage>::sign_mask () const
786 unsigned int len = this->get_len ();
787 unsigned HOST_WIDE_INT high = this->get_val ()[len - 1];
788 if (!is_sign_extended)
790 unsigned int precision = this->get_precision ();
791 int excess = len * HOST_BITS_PER_WIDE_INT - precision;
792 if (excess > 0)
793 high <<= excess;
795 return (HOST_WIDE_INT) (high) < 0 ? -1 : 0;
798 /* Return the signed value of the least-significant explicitly-encoded
799 block. */
800 template <typename storage>
801 inline HOST_WIDE_INT
802 generic_wide_int <storage>::slow () const
804 return this->get_val ()[0];
807 /* Return the signed value of the most-significant explicitly-encoded
808 block. */
809 template <typename storage>
810 inline HOST_WIDE_INT
811 generic_wide_int <storage>::shigh () const
813 return this->get_val ()[this->get_len () - 1];
816 /* Return the unsigned value of the least-significant
817 explicitly-encoded block. */
818 template <typename storage>
819 inline unsigned HOST_WIDE_INT
820 generic_wide_int <storage>::ulow () const
822 return this->get_val ()[0];
825 /* Return the unsigned value of the most-significant
826 explicitly-encoded block. */
827 template <typename storage>
828 inline unsigned HOST_WIDE_INT
829 generic_wide_int <storage>::uhigh () const
831 return this->get_val ()[this->get_len () - 1];
834 /* Return block I, which might be implicitly or explicit encoded. */
835 template <typename storage>
836 inline HOST_WIDE_INT
837 generic_wide_int <storage>::elt (unsigned int i) const
839 if (i >= this->get_len ())
840 return sign_mask ();
841 else
842 return this->get_val ()[i];
845 template <typename storage>
846 template <typename T>
847 generic_wide_int <storage> &
848 generic_wide_int <storage>::operator = (const T &x)
850 storage::operator = (x);
851 return *this;
854 /* Dump the contents of the integer to stderr, for debugging. */
855 template <typename storage>
856 void
857 generic_wide_int <storage>::dump () const
859 unsigned int len = this->get_len ();
860 const HOST_WIDE_INT *val = this->get_val ();
861 unsigned int precision = this->get_precision ();
862 fprintf (stderr, "[");
863 if (len * HOST_BITS_PER_WIDE_INT < precision)
864 fprintf (stderr, "...,");
865 for (unsigned int i = 0; i < len - 1; ++i)
866 fprintf (stderr, HOST_WIDE_INT_PRINT_HEX ",", val[len - 1 - i]);
867 fprintf (stderr, HOST_WIDE_INT_PRINT_HEX "], precision = %d\n",
868 val[0], precision);
871 namespace wi
873 template <typename storage>
874 struct int_traits < generic_wide_int <storage> >
875 : public wi::int_traits <storage>
877 static unsigned int get_precision (const generic_wide_int <storage> &);
878 static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
879 const generic_wide_int <storage> &);
883 template <typename storage>
884 inline unsigned int
885 wi::int_traits < generic_wide_int <storage> >::
886 get_precision (const generic_wide_int <storage> &x)
888 return x.get_precision ();
891 template <typename storage>
892 inline wi::storage_ref
893 wi::int_traits < generic_wide_int <storage> >::
894 decompose (HOST_WIDE_INT *, unsigned int precision,
895 const generic_wide_int <storage> &x)
897 gcc_checking_assert (precision == x.get_precision ());
898 return wi::storage_ref (x.get_val (), x.get_len (), precision);
901 /* Provide the storage for a wide_int_ref. This acts like a read-only
902 wide_int, with the optimization that VAL is normally a pointer to
903 another integer's storage, so that no array copy is needed. */
904 template <bool SE>
905 struct wide_int_ref_storage : public wi::storage_ref
907 private:
908 /* Scratch space that can be used when decomposing the original integer.
909 It must live as long as this object. */
910 HOST_WIDE_INT scratch[2];
912 public:
913 wide_int_ref_storage (const wi::storage_ref &);
915 template <typename T>
916 wide_int_ref_storage (const T &);
918 template <typename T>
919 wide_int_ref_storage (const T &, unsigned int);
922 /* Create a reference from an existing reference. */
923 template <bool SE>
924 inline wide_int_ref_storage <SE>::
925 wide_int_ref_storage (const wi::storage_ref &x)
926 : storage_ref (x)
929 /* Create a reference to integer X in its natural precision. Note
930 that the natural precision is host-dependent for primitive
931 types. */
932 template <bool SE>
933 template <typename T>
934 inline wide_int_ref_storage <SE>::wide_int_ref_storage (const T &x)
935 : storage_ref (wi::int_traits <T>::decompose (scratch,
936 wi::get_precision (x), x))
940 /* Create a reference to integer X in precision PRECISION. */
941 template <bool SE>
942 template <typename T>
943 inline wide_int_ref_storage <SE>::wide_int_ref_storage (const T &x,
944 unsigned int precision)
945 : storage_ref (wi::int_traits <T>::decompose (scratch, precision, x))
949 namespace wi
951 template <bool SE>
952 struct int_traits <wide_int_ref_storage <SE> >
954 static const enum precision_type precision_type = VAR_PRECISION;
955 /* wi::storage_ref can be a reference to a primitive type,
956 so this is the conservatively-correct setting. */
957 static const bool host_dependent_precision = true;
958 static const bool is_sign_extended = SE;
962 namespace wi
964 unsigned int force_to_size (HOST_WIDE_INT *, const HOST_WIDE_INT *,
965 unsigned int, unsigned int, unsigned int,
966 signop sgn);
967 unsigned int from_array (HOST_WIDE_INT *, const HOST_WIDE_INT *,
968 unsigned int, unsigned int, bool = true);
971 /* The storage used by wide_int. */
972 class GTY(()) wide_int_storage
974 private:
975 HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
976 unsigned int len;
977 unsigned int precision;
979 public:
980 wide_int_storage ();
981 template <typename T>
982 wide_int_storage (const T &);
984 /* The standard generic_wide_int storage methods. */
985 unsigned int get_precision () const;
986 const HOST_WIDE_INT *get_val () const;
987 unsigned int get_len () const;
988 HOST_WIDE_INT *write_val ();
989 void set_len (unsigned int, bool = false);
991 static wide_int from (const wide_int_ref &, unsigned int, signop);
992 static wide_int from_array (const HOST_WIDE_INT *, unsigned int,
993 unsigned int, bool = true);
994 static wide_int create (unsigned int);
996 /* FIXME: target-dependent, so should disappear. */
997 wide_int bswap () const;
1000 namespace wi
1002 template <>
1003 struct int_traits <wide_int_storage>
1005 static const enum precision_type precision_type = VAR_PRECISION;
1006 /* Guaranteed by a static assert in the wide_int_storage constructor. */
1007 static const bool host_dependent_precision = false;
1008 static const bool is_sign_extended = true;
1009 template <typename T1, typename T2>
1010 static wide_int get_binary_result (const T1 &, const T2 &);
1014 inline wide_int_storage::wide_int_storage () {}
1016 /* Initialize the storage from integer X, in its natural precision.
1017 Note that we do not allow integers with host-dependent precision
1018 to become wide_ints; wide_ints must always be logically independent
1019 of the host. */
1020 template <typename T>
1021 inline wide_int_storage::wide_int_storage (const T &x)
1023 { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
1024 { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
1025 WIDE_INT_REF_FOR (T) xi (x);
1026 precision = xi.precision;
1027 wi::copy (*this, xi);
1030 inline unsigned int
1031 wide_int_storage::get_precision () const
1033 return precision;
1036 inline const HOST_WIDE_INT *
1037 wide_int_storage::get_val () const
1039 return val;
1042 inline unsigned int
1043 wide_int_storage::get_len () const
1045 return len;
1048 inline HOST_WIDE_INT *
1049 wide_int_storage::write_val ()
1051 return val;
1054 inline void
1055 wide_int_storage::set_len (unsigned int l, bool is_sign_extended)
1057 len = l;
1058 if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
1059 val[len - 1] = sext_hwi (val[len - 1],
1060 precision % HOST_BITS_PER_WIDE_INT);
1063 /* Treat X as having signedness SGN and convert it to a PRECISION-bit
1064 number. */
1065 inline wide_int
1066 wide_int_storage::from (const wide_int_ref &x, unsigned int precision,
1067 signop sgn)
1069 wide_int result = wide_int::create (precision);
1070 result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
1071 x.precision, precision, sgn));
1072 return result;
1075 /* Create a wide_int from the explicit block encoding given by VAL and
1076 LEN. PRECISION is the precision of the integer. NEED_CANON_P is
1077 true if the encoding may have redundant trailing blocks. */
1078 inline wide_int
1079 wide_int_storage::from_array (const HOST_WIDE_INT *val, unsigned int len,
1080 unsigned int precision, bool need_canon_p)
1082 wide_int result = wide_int::create (precision);
1083 result.set_len (wi::from_array (result.write_val (), val, len, precision,
1084 need_canon_p));
1085 return result;
1088 /* Return an uninitialized wide_int with precision PRECISION. */
1089 inline wide_int
1090 wide_int_storage::create (unsigned int precision)
1092 wide_int x;
1093 x.precision = precision;
1094 return x;
1097 template <typename T1, typename T2>
1098 inline wide_int
1099 wi::int_traits <wide_int_storage>::get_binary_result (const T1 &x, const T2 &y)
1101 /* This shouldn't be used for two flexible-precision inputs. */
1102 STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
1103 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
1104 if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
1105 return wide_int::create (wi::get_precision (y));
1106 else
1107 return wide_int::create (wi::get_precision (x));
1110 /* The storage used by FIXED_WIDE_INT (N). */
1111 template <int N>
1112 class GTY(()) fixed_wide_int_storage
1114 private:
1115 HOST_WIDE_INT val[(N + HOST_BITS_PER_WIDE_INT + 1) / HOST_BITS_PER_WIDE_INT];
1116 unsigned int len;
1118 public:
1119 fixed_wide_int_storage ();
1120 template <typename T>
1121 fixed_wide_int_storage (const T &);
1123 /* The standard generic_wide_int storage methods. */
1124 unsigned int get_precision () const;
1125 const HOST_WIDE_INT *get_val () const;
1126 unsigned int get_len () const;
1127 HOST_WIDE_INT *write_val ();
1128 void set_len (unsigned int, bool = false);
1130 static FIXED_WIDE_INT (N) from (const wide_int_ref &, signop);
1131 static FIXED_WIDE_INT (N) from_array (const HOST_WIDE_INT *, unsigned int,
1132 bool = true);
1135 namespace wi
1137 template <int N>
1138 struct int_traits < fixed_wide_int_storage <N> >
1140 static const enum precision_type precision_type = CONST_PRECISION;
1141 static const bool host_dependent_precision = false;
1142 static const bool is_sign_extended = true;
1143 static const unsigned int precision = N;
1144 template <typename T1, typename T2>
1145 static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
1149 template <int N>
1150 inline fixed_wide_int_storage <N>::fixed_wide_int_storage () {}
1152 /* Initialize the storage from integer X, in precision N. */
1153 template <int N>
1154 template <typename T>
1155 inline fixed_wide_int_storage <N>::fixed_wide_int_storage (const T &x)
1157 /* Check for type compatibility. We don't want to initialize a
1158 fixed-width integer from something like a wide_int. */
1159 WI_BINARY_RESULT (T, FIXED_WIDE_INT (N)) *assertion ATTRIBUTE_UNUSED;
1160 wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N));
1163 template <int N>
1164 inline unsigned int
1165 fixed_wide_int_storage <N>::get_precision () const
1167 return N;
1170 template <int N>
1171 inline const HOST_WIDE_INT *
1172 fixed_wide_int_storage <N>::get_val () const
1174 return val;
1177 template <int N>
1178 inline unsigned int
1179 fixed_wide_int_storage <N>::get_len () const
1181 return len;
1184 template <int N>
1185 inline HOST_WIDE_INT *
1186 fixed_wide_int_storage <N>::write_val ()
1188 return val;
1191 template <int N>
1192 inline void
1193 fixed_wide_int_storage <N>::set_len (unsigned int l, bool)
1195 len = l;
1196 /* There are no excess bits in val[len - 1]. */
1197 STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
1200 /* Treat X as having signedness SGN and convert it to an N-bit number. */
1201 template <int N>
1202 inline FIXED_WIDE_INT (N)
1203 fixed_wide_int_storage <N>::from (const wide_int_ref &x, signop sgn)
1205 FIXED_WIDE_INT (N) result;
1206 result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
1207 x.precision, N, sgn));
1208 return result;
1211 /* Create a FIXED_WIDE_INT (N) from the explicit block encoding given by
1212 VAL and LEN. NEED_CANON_P is true if the encoding may have redundant
1213 trailing blocks. */
1214 template <int N>
1215 inline FIXED_WIDE_INT (N)
1216 fixed_wide_int_storage <N>::from_array (const HOST_WIDE_INT *val,
1217 unsigned int len,
1218 bool need_canon_p)
1220 FIXED_WIDE_INT (N) result;
1221 result.set_len (wi::from_array (result.write_val (), val, len,
1222 N, need_canon_p));
1223 return result;
1226 template <int N>
1227 template <typename T1, typename T2>
1228 inline FIXED_WIDE_INT (N)
1229 wi::int_traits < fixed_wide_int_storage <N> >::
1230 get_binary_result (const T1 &, const T2 &)
1232 return FIXED_WIDE_INT (N) ();
1235 /* A reference to one element of a trailing_wide_ints structure. */
1236 class trailing_wide_int_storage
1238 private:
1239 /* The precision of the integer, which is a fixed property of the
1240 parent trailing_wide_ints. */
1241 unsigned int m_precision;
1243 /* A pointer to the length field. */
1244 unsigned char *m_len;
1246 /* A pointer to the HWI array. There are enough elements to hold all
1247 values of precision M_PRECISION. */
1248 HOST_WIDE_INT *m_val;
1250 public:
1251 trailing_wide_int_storage (unsigned int, unsigned char *, HOST_WIDE_INT *);
1253 /* The standard generic_wide_int storage methods. */
1254 unsigned int get_len () const;
1255 unsigned int get_precision () const;
1256 const HOST_WIDE_INT *get_val () const;
1257 HOST_WIDE_INT *write_val ();
1258 void set_len (unsigned int, bool = false);
1260 template <typename T>
1261 trailing_wide_int_storage &operator = (const T &);
1264 typedef generic_wide_int <trailing_wide_int_storage> trailing_wide_int;
1266 /* trailing_wide_int behaves like a wide_int. */
1267 namespace wi
1269 template <>
1270 struct int_traits <trailing_wide_int_storage>
1271 : public int_traits <wide_int_storage> {};
1274 /* An array of N wide_int-like objects that can be put at the end of
1275 a variable-sized structure. Use extra_size to calculate how many
1276 bytes beyond the sizeof need to be allocated. Use set_precision
1277 to initialize the structure. */
1278 template <int N>
1279 class GTY(()) trailing_wide_ints
1281 private:
1282 /* The shared precision of each number. */
1283 unsigned short m_precision;
1285 /* The shared maximum length of each number. */
1286 unsigned char m_max_len;
1288 /* The current length of each number. */
1289 unsigned char m_len[N];
1291 /* The variable-length part of the structure, which always contains
1292 at least one HWI. Element I starts at index I * M_MAX_LEN. */
1293 HOST_WIDE_INT m_val[1];
1295 public:
1296 void set_precision (unsigned int);
1297 trailing_wide_int operator [] (unsigned int);
1298 static size_t extra_size (unsigned int);
1301 inline trailing_wide_int_storage::
1302 trailing_wide_int_storage (unsigned int precision, unsigned char *len,
1303 HOST_WIDE_INT *val)
1304 : m_precision (precision), m_len (len), m_val (val)
1308 inline unsigned int
1309 trailing_wide_int_storage::get_len () const
1311 return *m_len;
1314 inline unsigned int
1315 trailing_wide_int_storage::get_precision () const
1317 return m_precision;
1320 inline const HOST_WIDE_INT *
1321 trailing_wide_int_storage::get_val () const
1323 return m_val;
1326 inline HOST_WIDE_INT *
1327 trailing_wide_int_storage::write_val ()
1329 return m_val;
1332 inline void
1333 trailing_wide_int_storage::set_len (unsigned int len, bool is_sign_extended)
1335 *m_len = len;
1336 if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > m_precision)
1337 m_val[len - 1] = sext_hwi (m_val[len - 1],
1338 m_precision % HOST_BITS_PER_WIDE_INT);
1341 template <typename T>
1342 inline trailing_wide_int_storage &
1343 trailing_wide_int_storage::operator = (const T &x)
1345 WIDE_INT_REF_FOR (T) xi (x, m_precision);
1346 wi::copy (*this, xi);
1347 return *this;
1350 /* Initialize the structure and record that all elements have precision
1351 PRECISION. */
1352 template <int N>
1353 inline void
1354 trailing_wide_ints <N>::set_precision (unsigned int precision)
1356 m_precision = precision;
1357 m_max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
1358 / HOST_BITS_PER_WIDE_INT);
1361 /* Return a reference to element INDEX. */
1362 template <int N>
1363 inline trailing_wide_int
1364 trailing_wide_ints <N>::operator [] (unsigned int index)
1366 return trailing_wide_int_storage (m_precision, &m_len[index],
1367 &m_val[index * m_max_len]);
1370 /* Return how many extra bytes need to be added to the end of the structure
1371 in order to handle N wide_ints of precision PRECISION. */
1372 template <int N>
1373 inline size_t
1374 trailing_wide_ints <N>::extra_size (unsigned int precision)
1376 unsigned int max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
1377 / HOST_BITS_PER_WIDE_INT);
1378 return (N * max_len - 1) * sizeof (HOST_WIDE_INT);
1381 /* This macro is used in structures that end with a trailing_wide_ints field
1382 called FIELD. It declares get_NAME() and set_NAME() methods to access
1383 element I of FIELD. */
1384 #define TRAILING_WIDE_INT_ACCESSOR(NAME, FIELD, I) \
1385 trailing_wide_int get_##NAME () { return FIELD[I]; } \
1386 template <typename T> void set_##NAME (const T &x) { FIELD[I] = x; }
1388 namespace wi
1390 /* Implementation of int_traits for primitive integer types like "int". */
1391 template <typename T, bool signed_p>
1392 struct primitive_int_traits
1394 static const enum precision_type precision_type = FLEXIBLE_PRECISION;
1395 static const bool host_dependent_precision = true;
1396 static const bool is_sign_extended = true;
1397 static unsigned int get_precision (T);
1398 static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int, T);
1402 template <typename T, bool signed_p>
1403 inline unsigned int
1404 wi::primitive_int_traits <T, signed_p>::get_precision (T)
1406 return sizeof (T) * CHAR_BIT;
1409 template <typename T, bool signed_p>
1410 inline wi::storage_ref
1411 wi::primitive_int_traits <T, signed_p>::decompose (HOST_WIDE_INT *scratch,
1412 unsigned int precision, T x)
1414 scratch[0] = x;
1415 if (signed_p || scratch[0] >= 0 || precision <= HOST_BITS_PER_WIDE_INT)
1416 return wi::storage_ref (scratch, 1, precision);
1417 scratch[1] = 0;
1418 return wi::storage_ref (scratch, 2, precision);
1421 /* Allow primitive C types to be used in wi:: routines. */
1422 namespace wi
1424 template <>
1425 struct int_traits <int>
1426 : public primitive_int_traits <int, true> {};
1428 template <>
1429 struct int_traits <unsigned int>
1430 : public primitive_int_traits <unsigned int, false> {};
1432 template <>
1433 struct int_traits <long>
1434 : public primitive_int_traits <long, true> {};
1436 template <>
1437 struct int_traits <unsigned long>
1438 : public primitive_int_traits <unsigned long, false> {};
1440 #if defined HAVE_LONG_LONG
1441 template <>
1442 struct int_traits <long long>
1443 : public primitive_int_traits <long long, true> {};
1445 template <>
1446 struct int_traits <unsigned long long>
1447 : public primitive_int_traits <unsigned long long, false> {};
1448 #endif
1451 namespace wi
1453 /* Stores HWI-sized integer VAL, treating it as having signedness SGN
1454 and precision PRECISION. */
1455 struct hwi_with_prec
1457 hwi_with_prec (HOST_WIDE_INT, unsigned int, signop);
1458 HOST_WIDE_INT val;
1459 unsigned int precision;
1460 signop sgn;
1463 hwi_with_prec shwi (HOST_WIDE_INT, unsigned int);
1464 hwi_with_prec uhwi (unsigned HOST_WIDE_INT, unsigned int);
1466 hwi_with_prec minus_one (unsigned int);
1467 hwi_with_prec zero (unsigned int);
1468 hwi_with_prec one (unsigned int);
1469 hwi_with_prec two (unsigned int);
1472 inline wi::hwi_with_prec::hwi_with_prec (HOST_WIDE_INT v, unsigned int p,
1473 signop s)
1474 : val (v), precision (p), sgn (s)
1478 /* Return a signed integer that has value VAL and precision PRECISION. */
1479 inline wi::hwi_with_prec
1480 wi::shwi (HOST_WIDE_INT val, unsigned int precision)
1482 return hwi_with_prec (val, precision, SIGNED);
1485 /* Return an unsigned integer that has value VAL and precision PRECISION. */
1486 inline wi::hwi_with_prec
1487 wi::uhwi (unsigned HOST_WIDE_INT val, unsigned int precision)
1489 return hwi_with_prec (val, precision, UNSIGNED);
1492 /* Return a wide int of -1 with precision PRECISION. */
1493 inline wi::hwi_with_prec
1494 wi::minus_one (unsigned int precision)
1496 return wi::shwi (-1, precision);
1499 /* Return a wide int of 0 with precision PRECISION. */
1500 inline wi::hwi_with_prec
1501 wi::zero (unsigned int precision)
1503 return wi::shwi (0, precision);
1506 /* Return a wide int of 1 with precision PRECISION. */
1507 inline wi::hwi_with_prec
1508 wi::one (unsigned int precision)
1510 return wi::shwi (1, precision);
1513 /* Return a wide int of 2 with precision PRECISION. */
1514 inline wi::hwi_with_prec
1515 wi::two (unsigned int precision)
1517 return wi::shwi (2, precision);
1520 namespace wi
1522 template <>
1523 struct int_traits <wi::hwi_with_prec>
1525 static const enum precision_type precision_type = VAR_PRECISION;
1526 /* hwi_with_prec has an explicitly-given precision, rather than the
1527 precision of HOST_WIDE_INT. */
1528 static const bool host_dependent_precision = false;
1529 static const bool is_sign_extended = true;
1530 static unsigned int get_precision (const wi::hwi_with_prec &);
1531 static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
1532 const wi::hwi_with_prec &);
1536 inline unsigned int
1537 wi::int_traits <wi::hwi_with_prec>::get_precision (const wi::hwi_with_prec &x)
1539 return x.precision;
1542 inline wi::storage_ref
1543 wi::int_traits <wi::hwi_with_prec>::
1544 decompose (HOST_WIDE_INT *scratch, unsigned int precision,
1545 const wi::hwi_with_prec &x)
1547 gcc_checking_assert (precision == x.precision);
1548 scratch[0] = x.val;
1549 if (x.sgn == SIGNED || x.val >= 0 || precision <= HOST_BITS_PER_WIDE_INT)
1550 return wi::storage_ref (scratch, 1, precision);
1551 scratch[1] = 0;
1552 return wi::storage_ref (scratch, 2, precision);
1555 /* Private functions for handling large cases out of line. They take
1556 individual length and array parameters because that is cheaper for
1557 the inline caller than constructing an object on the stack and
1558 passing a reference to it. (Although many callers use wide_int_refs,
1559 we generally want those to be removed by SRA.) */
1560 namespace wi
1562 bool eq_p_large (const HOST_WIDE_INT *, unsigned int,
1563 const HOST_WIDE_INT *, unsigned int, unsigned int);
1564 bool lts_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
1565 const HOST_WIDE_INT *, unsigned int);
1566 bool ltu_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
1567 const HOST_WIDE_INT *, unsigned int);
1568 int cmps_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
1569 const HOST_WIDE_INT *, unsigned int);
1570 int cmpu_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
1571 const HOST_WIDE_INT *, unsigned int);
1572 unsigned int sext_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1573 unsigned int,
1574 unsigned int, unsigned int);
1575 unsigned int zext_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1576 unsigned int,
1577 unsigned int, unsigned int);
1578 unsigned int set_bit_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1579 unsigned int, unsigned int, unsigned int);
1580 unsigned int lshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1581 unsigned int, unsigned int, unsigned int);
1582 unsigned int lrshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1583 unsigned int, unsigned int, unsigned int,
1584 unsigned int);
1585 unsigned int arshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1586 unsigned int, unsigned int, unsigned int,
1587 unsigned int);
1588 unsigned int and_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
1589 const HOST_WIDE_INT *, unsigned int, unsigned int);
1590 unsigned int and_not_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1591 unsigned int, const HOST_WIDE_INT *,
1592 unsigned int, unsigned int);
1593 unsigned int or_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
1594 const HOST_WIDE_INT *, unsigned int, unsigned int);
1595 unsigned int or_not_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1596 unsigned int, const HOST_WIDE_INT *,
1597 unsigned int, unsigned int);
1598 unsigned int xor_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
1599 const HOST_WIDE_INT *, unsigned int, unsigned int);
1600 unsigned int add_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
1601 const HOST_WIDE_INT *, unsigned int, unsigned int,
1602 signop, bool *);
1603 unsigned int sub_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
1604 const HOST_WIDE_INT *, unsigned int, unsigned int,
1605 signop, bool *);
1606 unsigned int mul_internal (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1607 unsigned int, const HOST_WIDE_INT *,
1608 unsigned int, unsigned int, signop, bool *,
1609 bool);
1610 unsigned int divmod_internal (HOST_WIDE_INT *, unsigned int *,
1611 HOST_WIDE_INT *, const HOST_WIDE_INT *,
1612 unsigned int, unsigned int,
1613 const HOST_WIDE_INT *,
1614 unsigned int, unsigned int,
1615 signop, bool *);
1618 /* Return the number of bits that integer X can hold. */
1619 template <typename T>
1620 inline unsigned int
1621 wi::get_precision (const T &x)
1623 return wi::int_traits <T>::get_precision (x);
1626 /* Return the number of bits that the result of a binary operation can
1627 hold when the input operands are X and Y. */
1628 template <typename T1, typename T2>
1629 inline unsigned int
1630 wi::get_binary_precision (const T1 &x, const T2 &y)
1632 return get_precision (wi::int_traits <WI_BINARY_RESULT (T1, T2)>::
1633 get_binary_result (x, y));
1636 /* Copy the contents of Y to X, but keeping X's current precision. */
1637 template <typename T1, typename T2>
1638 inline void
1639 wi::copy (T1 &x, const T2 &y)
1641 HOST_WIDE_INT *xval = x.write_val ();
1642 const HOST_WIDE_INT *yval = y.get_val ();
1643 unsigned int len = y.get_len ();
1644 unsigned int i = 0;
1646 xval[i] = yval[i];
1647 while (++i < len);
1648 x.set_len (len, y.is_sign_extended);
1651 /* Return true if X fits in a HOST_WIDE_INT with no loss of precision. */
1652 template <typename T>
1653 inline bool
1654 wi::fits_shwi_p (const T &x)
1656 WIDE_INT_REF_FOR (T) xi (x);
1657 return xi.len == 1;
1660 /* Return true if X fits in an unsigned HOST_WIDE_INT with no loss of
1661 precision. */
1662 template <typename T>
1663 inline bool
1664 wi::fits_uhwi_p (const T &x)
1666 WIDE_INT_REF_FOR (T) xi (x);
1667 if (xi.precision <= HOST_BITS_PER_WIDE_INT)
1668 return true;
1669 if (xi.len == 1)
1670 return xi.slow () >= 0;
1671 return xi.len == 2 && xi.uhigh () == 0;
1674 /* Return true if X is negative based on the interpretation of SGN.
1675 For UNSIGNED, this is always false. */
1676 template <typename T>
1677 inline bool
1678 wi::neg_p (const T &x, signop sgn)
1680 WIDE_INT_REF_FOR (T) xi (x);
1681 if (sgn == UNSIGNED)
1682 return false;
1683 return xi.sign_mask () < 0;
1686 /* Return -1 if the top bit of X is set and 0 if the top bit is clear. */
1687 template <typename T>
1688 inline HOST_WIDE_INT
1689 wi::sign_mask (const T &x)
1691 WIDE_INT_REF_FOR (T) xi (x);
1692 return xi.sign_mask ();
1695 /* Return true if X == Y. X and Y must be binary-compatible. */
1696 template <typename T1, typename T2>
1697 inline bool
1698 wi::eq_p (const T1 &x, const T2 &y)
1700 unsigned int precision = get_binary_precision (x, y);
1701 WIDE_INT_REF_FOR (T1) xi (x, precision);
1702 WIDE_INT_REF_FOR (T2) yi (y, precision);
1703 if (xi.is_sign_extended && yi.is_sign_extended)
1705 /* This case reduces to array equality. */
1706 if (xi.len != yi.len)
1707 return false;
1708 unsigned int i = 0;
1710 if (xi.val[i] != yi.val[i])
1711 return false;
1712 while (++i != xi.len);
1713 return true;
1715 if (__builtin_expect (yi.len == 1, true))
1717 /* XI is only equal to YI if it too has a single HWI. */
1718 if (xi.len != 1)
1719 return false;
1720 /* Excess bits in xi.val[0] will be signs or zeros, so comparisons
1721 with 0 are simple. */
1722 if (STATIC_CONSTANT_P (yi.val[0] == 0))
1723 return xi.val[0] == 0;
1724 /* Otherwise flush out any excess bits first. */
1725 unsigned HOST_WIDE_INT diff = xi.val[0] ^ yi.val[0];
1726 int excess = HOST_BITS_PER_WIDE_INT - precision;
1727 if (excess > 0)
1728 diff <<= excess;
1729 return diff == 0;
1731 return eq_p_large (xi.val, xi.len, yi.val, yi.len, precision);
1734 /* Return true if X != Y. X and Y must be binary-compatible. */
1735 template <typename T1, typename T2>
1736 inline bool
1737 wi::ne_p (const T1 &x, const T2 &y)
1739 return !eq_p (x, y);
1742 /* Return true if X < Y when both are treated as signed values. */
1743 template <typename T1, typename T2>
1744 inline bool
1745 wi::lts_p (const T1 &x, const T2 &y)
1747 unsigned int precision = get_binary_precision (x, y);
1748 WIDE_INT_REF_FOR (T1) xi (x, precision);
1749 WIDE_INT_REF_FOR (T2) yi (y, precision);
1750 /* We optimize x < y, where y is 64 or fewer bits. */
1751 if (wi::fits_shwi_p (yi))
1753 /* Make lts_p (x, 0) as efficient as wi::neg_p (x). */
1754 if (STATIC_CONSTANT_P (yi.val[0] == 0))
1755 return neg_p (xi);
1756 /* If x fits directly into a shwi, we can compare directly. */
1757 if (wi::fits_shwi_p (xi))
1758 return xi.to_shwi () < yi.to_shwi ();
1759 /* If x doesn't fit and is negative, then it must be more
1760 negative than any value in y, and hence smaller than y. */
1761 if (neg_p (xi))
1762 return true;
1763 /* If x is positive, then it must be larger than any value in y,
1764 and hence greater than y. */
1765 return false;
1767 /* Optimize the opposite case, if it can be detected at compile time. */
1768 if (STATIC_CONSTANT_P (xi.len == 1))
1769 /* If YI is negative it is lower than the least HWI.
1770 If YI is positive it is greater than the greatest HWI. */
1771 return !neg_p (yi);
1772 return lts_p_large (xi.val, xi.len, precision, yi.val, yi.len);
1775 /* Return true if X < Y when both are treated as unsigned values. */
1776 template <typename T1, typename T2>
1777 inline bool
1778 wi::ltu_p (const T1 &x, const T2 &y)
1780 unsigned int precision = get_binary_precision (x, y);
1781 WIDE_INT_REF_FOR (T1) xi (x, precision);
1782 WIDE_INT_REF_FOR (T2) yi (y, precision);
1783 /* Optimize comparisons with constants. */
1784 if (STATIC_CONSTANT_P (yi.len == 1 && yi.val[0] >= 0))
1785 return xi.len == 1 && xi.to_uhwi () < (unsigned HOST_WIDE_INT) yi.val[0];
1786 if (STATIC_CONSTANT_P (xi.len == 1 && xi.val[0] >= 0))
1787 return yi.len != 1 || yi.to_uhwi () > (unsigned HOST_WIDE_INT) xi.val[0];
1788 /* Optimize the case of two HWIs. The HWIs are implicitly sign-extended
1789 for precisions greater than HOST_BITS_WIDE_INT, but sign-extending both
1790 values does not change the result. */
1791 if (__builtin_expect (xi.len + yi.len == 2, true))
1793 unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
1794 unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
1795 return xl < yl;
1797 return ltu_p_large (xi.val, xi.len, precision, yi.val, yi.len);
1800 /* Return true if X < Y. Signedness of X and Y is indicated by SGN. */
1801 template <typename T1, typename T2>
1802 inline bool
1803 wi::lt_p (const T1 &x, const T2 &y, signop sgn)
1805 if (sgn == SIGNED)
1806 return lts_p (x, y);
1807 else
1808 return ltu_p (x, y);
1811 /* Return true if X <= Y when both are treated as signed values. */
1812 template <typename T1, typename T2>
1813 inline bool
1814 wi::les_p (const T1 &x, const T2 &y)
1816 return !lts_p (y, x);
1819 /* Return true if X <= Y when both are treated as unsigned values. */
1820 template <typename T1, typename T2>
1821 inline bool
1822 wi::leu_p (const T1 &x, const T2 &y)
1824 return !ltu_p (y, x);
1827 /* Return true if X <= Y. Signedness of X and Y is indicated by SGN. */
1828 template <typename T1, typename T2>
1829 inline bool
1830 wi::le_p (const T1 &x, const T2 &y, signop sgn)
1832 if (sgn == SIGNED)
1833 return les_p (x, y);
1834 else
1835 return leu_p (x, y);
1838 /* Return true if X > Y when both are treated as signed values. */
1839 template <typename T1, typename T2>
1840 inline bool
1841 wi::gts_p (const T1 &x, const T2 &y)
1843 return lts_p (y, x);
1846 /* Return true if X > Y when both are treated as unsigned values. */
1847 template <typename T1, typename T2>
1848 inline bool
1849 wi::gtu_p (const T1 &x, const T2 &y)
1851 return ltu_p (y, x);
1854 /* Return true if X > Y. Signedness of X and Y is indicated by SGN. */
1855 template <typename T1, typename T2>
1856 inline bool
1857 wi::gt_p (const T1 &x, const T2 &y, signop sgn)
1859 if (sgn == SIGNED)
1860 return gts_p (x, y);
1861 else
1862 return gtu_p (x, y);
1865 /* Return true if X >= Y when both are treated as signed values. */
1866 template <typename T1, typename T2>
1867 inline bool
1868 wi::ges_p (const T1 &x, const T2 &y)
1870 return !lts_p (x, y);
1873 /* Return true if X >= Y when both are treated as unsigned values. */
1874 template <typename T1, typename T2>
1875 inline bool
1876 wi::geu_p (const T1 &x, const T2 &y)
1878 return !ltu_p (x, y);
1881 /* Return true if X >= Y. Signedness of X and Y is indicated by SGN. */
1882 template <typename T1, typename T2>
1883 inline bool
1884 wi::ge_p (const T1 &x, const T2 &y, signop sgn)
1886 if (sgn == SIGNED)
1887 return ges_p (x, y);
1888 else
1889 return geu_p (x, y);
1892 /* Return -1 if X < Y, 0 if X == Y and 1 if X > Y. Treat both X and Y
1893 as signed values. */
1894 template <typename T1, typename T2>
1895 inline int
1896 wi::cmps (const T1 &x, const T2 &y)
1898 unsigned int precision = get_binary_precision (x, y);
1899 WIDE_INT_REF_FOR (T1) xi (x, precision);
1900 WIDE_INT_REF_FOR (T2) yi (y, precision);
1901 if (wi::fits_shwi_p (yi))
1903 /* Special case for comparisons with 0. */
1904 if (STATIC_CONSTANT_P (yi.val[0] == 0))
1905 return neg_p (xi) ? -1 : !(xi.len == 1 && xi.val[0] == 0);
1906 /* If x fits into a signed HWI, we can compare directly. */
1907 if (wi::fits_shwi_p (xi))
1909 HOST_WIDE_INT xl = xi.to_shwi ();
1910 HOST_WIDE_INT yl = yi.to_shwi ();
1911 return xl < yl ? -1 : xl > yl;
1913 /* If x doesn't fit and is negative, then it must be more
1914 negative than any signed HWI, and hence smaller than y. */
1915 if (neg_p (xi))
1916 return -1;
1917 /* If x is positive, then it must be larger than any signed HWI,
1918 and hence greater than y. */
1919 return 1;
1921 /* Optimize the opposite case, if it can be detected at compile time. */
1922 if (STATIC_CONSTANT_P (xi.len == 1))
1923 /* If YI is negative it is lower than the least HWI.
1924 If YI is positive it is greater than the greatest HWI. */
1925 return neg_p (yi) ? 1 : -1;
1926 return cmps_large (xi.val, xi.len, precision, yi.val, yi.len);
1929 /* Return -1 if X < Y, 0 if X == Y and 1 if X > Y. Treat both X and Y
1930 as unsigned values. */
1931 template <typename T1, typename T2>
1932 inline int
1933 wi::cmpu (const T1 &x, const T2 &y)
1935 unsigned int precision = get_binary_precision (x, y);
1936 WIDE_INT_REF_FOR (T1) xi (x, precision);
1937 WIDE_INT_REF_FOR (T2) yi (y, precision);
1938 /* Optimize comparisons with constants. */
1939 if (STATIC_CONSTANT_P (yi.len == 1 && yi.val[0] >= 0))
1941 /* If XI doesn't fit in a HWI then it must be larger than YI. */
1942 if (xi.len != 1)
1943 return 1;
1944 /* Otherwise compare directly. */
1945 unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
1946 unsigned HOST_WIDE_INT yl = yi.val[0];
1947 return xl < yl ? -1 : xl > yl;
1949 if (STATIC_CONSTANT_P (xi.len == 1 && xi.val[0] >= 0))
1951 /* If YI doesn't fit in a HWI then it must be larger than XI. */
1952 if (yi.len != 1)
1953 return -1;
1954 /* Otherwise compare directly. */
1955 unsigned HOST_WIDE_INT xl = xi.val[0];
1956 unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
1957 return xl < yl ? -1 : xl > yl;
1959 /* Optimize the case of two HWIs. The HWIs are implicitly sign-extended
1960 for precisions greater than HOST_BITS_WIDE_INT, but sign-extending both
1961 values does not change the result. */
1962 if (__builtin_expect (xi.len + yi.len == 2, true))
1964 unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
1965 unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
1966 return xl < yl ? -1 : xl > yl;
1968 return cmpu_large (xi.val, xi.len, precision, yi.val, yi.len);
1971 /* Return -1 if X < Y, 0 if X == Y and 1 if X > Y. Signedness of
1972 X and Y indicated by SGN. */
1973 template <typename T1, typename T2>
1974 inline int
1975 wi::cmp (const T1 &x, const T2 &y, signop sgn)
1977 if (sgn == SIGNED)
1978 return cmps (x, y);
1979 else
1980 return cmpu (x, y);
1983 /* Return ~x. */
1984 template <typename T>
1985 inline WI_UNARY_RESULT (T)
1986 wi::bit_not (const T &x)
1988 WI_UNARY_RESULT_VAR (result, val, T, x);
1989 WIDE_INT_REF_FOR (T) xi (x, get_precision (result));
1990 for (unsigned int i = 0; i < xi.len; ++i)
1991 val[i] = ~xi.val[i];
1992 result.set_len (xi.len);
1993 return result;
1996 /* Return -x. */
1997 template <typename T>
1998 inline WI_UNARY_RESULT (T)
1999 wi::neg (const T &x)
2001 return sub (0, x);
2004 /* Return -x. Indicate in *OVERFLOW if X is the minimum signed value. */
2005 template <typename T>
2006 inline WI_UNARY_RESULT (T)
2007 wi::neg (const T &x, bool *overflow)
2009 *overflow = only_sign_bit_p (x);
2010 return sub (0, x);
2013 /* Return the absolute value of x. */
2014 template <typename T>
2015 inline WI_UNARY_RESULT (T)
2016 wi::abs (const T &x)
2018 return neg_p (x) ? neg (x) : WI_UNARY_RESULT (T) (x);
2021 /* Return the result of sign-extending the low OFFSET bits of X. */
2022 template <typename T>
2023 inline WI_UNARY_RESULT (T)
2024 wi::sext (const T &x, unsigned int offset)
2026 WI_UNARY_RESULT_VAR (result, val, T, x);
2027 unsigned int precision = get_precision (result);
2028 WIDE_INT_REF_FOR (T) xi (x, precision);
2030 if (offset <= HOST_BITS_PER_WIDE_INT)
2032 val[0] = sext_hwi (xi.ulow (), offset);
2033 result.set_len (1, true);
2035 else
2036 result.set_len (sext_large (val, xi.val, xi.len, precision, offset));
2037 return result;
2040 /* Return the result of zero-extending the low OFFSET bits of X. */
2041 template <typename T>
2042 inline WI_UNARY_RESULT (T)
2043 wi::zext (const T &x, unsigned int offset)
2045 WI_UNARY_RESULT_VAR (result, val, T, x);
2046 unsigned int precision = get_precision (result);
2047 WIDE_INT_REF_FOR (T) xi (x, precision);
2049 /* This is not just an optimization, it is actually required to
2050 maintain canonization. */
2051 if (offset >= precision)
2053 wi::copy (result, xi);
2054 return result;
2057 /* In these cases we know that at least the top bit will be clear,
2058 so no sign extension is necessary. */
2059 if (offset < HOST_BITS_PER_WIDE_INT)
2061 val[0] = zext_hwi (xi.ulow (), offset);
2062 result.set_len (1, true);
2064 else
2065 result.set_len (zext_large (val, xi.val, xi.len, precision, offset), true);
2066 return result;
2069 /* Return the result of extending the low OFFSET bits of X according to
2070 signedness SGN. */
2071 template <typename T>
2072 inline WI_UNARY_RESULT (T)
2073 wi::ext (const T &x, unsigned int offset, signop sgn)
2075 return sgn == SIGNED ? sext (x, offset) : zext (x, offset);
2078 /* Return an integer that represents X | (1 << bit). */
2079 template <typename T>
2080 inline WI_UNARY_RESULT (T)
2081 wi::set_bit (const T &x, unsigned int bit)
2083 WI_UNARY_RESULT_VAR (result, val, T, x);
2084 unsigned int precision = get_precision (result);
2085 WIDE_INT_REF_FOR (T) xi (x, precision);
2086 if (precision <= HOST_BITS_PER_WIDE_INT)
2088 val[0] = xi.ulow () | ((unsigned HOST_WIDE_INT) 1 << bit);
2089 result.set_len (1);
2091 else
2092 result.set_len (set_bit_large (val, xi.val, xi.len, precision, bit));
2093 return result;
2096 /* Return the mininum of X and Y, treating them both as having
2097 signedness SGN. */
2098 template <typename T1, typename T2>
2099 inline WI_BINARY_RESULT (T1, T2)
2100 wi::min (const T1 &x, const T2 &y, signop sgn)
2102 WI_BINARY_RESULT_VAR (result, val ATTRIBUTE_UNUSED, T1, x, T2, y);
2103 unsigned int precision = get_precision (result);
2104 if (wi::le_p (x, y, sgn))
2105 wi::copy (result, WIDE_INT_REF_FOR (T1) (x, precision));
2106 else
2107 wi::copy (result, WIDE_INT_REF_FOR (T2) (y, precision));
2108 return result;
2111 /* Return the minimum of X and Y, treating both as signed values. */
2112 template <typename T1, typename T2>
2113 inline WI_BINARY_RESULT (T1, T2)
2114 wi::smin (const T1 &x, const T2 &y)
2116 return wi::min (x, y, SIGNED);
2119 /* Return the minimum of X and Y, treating both as unsigned values. */
2120 template <typename T1, typename T2>
2121 inline WI_BINARY_RESULT (T1, T2)
2122 wi::umin (const T1 &x, const T2 &y)
2124 return wi::min (x, y, UNSIGNED);
2127 /* Return the maxinum of X and Y, treating them both as having
2128 signedness SGN. */
2129 template <typename T1, typename T2>
2130 inline WI_BINARY_RESULT (T1, T2)
2131 wi::max (const T1 &x, const T2 &y, signop sgn)
2133 WI_BINARY_RESULT_VAR (result, val ATTRIBUTE_UNUSED, T1, x, T2, y);
2134 unsigned int precision = get_precision (result);
2135 if (wi::ge_p (x, y, sgn))
2136 wi::copy (result, WIDE_INT_REF_FOR (T1) (x, precision));
2137 else
2138 wi::copy (result, WIDE_INT_REF_FOR (T2) (y, precision));
2139 return result;
2142 /* Return the maximum of X and Y, treating both as signed values. */
2143 template <typename T1, typename T2>
2144 inline WI_BINARY_RESULT (T1, T2)
2145 wi::smax (const T1 &x, const T2 &y)
2147 return wi::max (x, y, SIGNED);
2150 /* Return the maximum of X and Y, treating both as unsigned values. */
2151 template <typename T1, typename T2>
2152 inline WI_BINARY_RESULT (T1, T2)
2153 wi::umax (const T1 &x, const T2 &y)
2155 return wi::max (x, y, UNSIGNED);
2158 /* Return X & Y. */
2159 template <typename T1, typename T2>
2160 inline WI_BINARY_RESULT (T1, T2)
2161 wi::bit_and (const T1 &x, const T2 &y)
2163 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2164 unsigned int precision = get_precision (result);
2165 WIDE_INT_REF_FOR (T1) xi (x, precision);
2166 WIDE_INT_REF_FOR (T2) yi (y, precision);
2167 bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
2168 if (__builtin_expect (xi.len + yi.len == 2, true))
2170 val[0] = xi.ulow () & yi.ulow ();
2171 result.set_len (1, is_sign_extended);
2173 else
2174 result.set_len (and_large (val, xi.val, xi.len, yi.val, yi.len,
2175 precision), is_sign_extended);
2176 return result;
2179 /* Return X & ~Y. */
2180 template <typename T1, typename T2>
2181 inline WI_BINARY_RESULT (T1, T2)
2182 wi::bit_and_not (const T1 &x, const T2 &y)
2184 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2185 unsigned int precision = get_precision (result);
2186 WIDE_INT_REF_FOR (T1) xi (x, precision);
2187 WIDE_INT_REF_FOR (T2) yi (y, precision);
2188 bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
2189 if (__builtin_expect (xi.len + yi.len == 2, true))
2191 val[0] = xi.ulow () & ~yi.ulow ();
2192 result.set_len (1, is_sign_extended);
2194 else
2195 result.set_len (and_not_large (val, xi.val, xi.len, yi.val, yi.len,
2196 precision), is_sign_extended);
2197 return result;
2200 /* Return X | Y. */
2201 template <typename T1, typename T2>
2202 inline WI_BINARY_RESULT (T1, T2)
2203 wi::bit_or (const T1 &x, const T2 &y)
2205 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2206 unsigned int precision = get_precision (result);
2207 WIDE_INT_REF_FOR (T1) xi (x, precision);
2208 WIDE_INT_REF_FOR (T2) yi (y, precision);
2209 bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
2210 if (__builtin_expect (xi.len + yi.len == 2, true))
2212 val[0] = xi.ulow () | yi.ulow ();
2213 result.set_len (1, is_sign_extended);
2215 else
2216 result.set_len (or_large (val, xi.val, xi.len,
2217 yi.val, yi.len, precision), is_sign_extended);
2218 return result;
2221 /* Return X | ~Y. */
2222 template <typename T1, typename T2>
2223 inline WI_BINARY_RESULT (T1, T2)
2224 wi::bit_or_not (const T1 &x, const T2 &y)
2226 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2227 unsigned int precision = get_precision (result);
2228 WIDE_INT_REF_FOR (T1) xi (x, precision);
2229 WIDE_INT_REF_FOR (T2) yi (y, precision);
2230 bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
2231 if (__builtin_expect (xi.len + yi.len == 2, true))
2233 val[0] = xi.ulow () | ~yi.ulow ();
2234 result.set_len (1, is_sign_extended);
2236 else
2237 result.set_len (or_not_large (val, xi.val, xi.len, yi.val, yi.len,
2238 precision), is_sign_extended);
2239 return result;
2242 /* Return X ^ Y. */
2243 template <typename T1, typename T2>
2244 inline WI_BINARY_RESULT (T1, T2)
2245 wi::bit_xor (const T1 &x, const T2 &y)
2247 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2248 unsigned int precision = get_precision (result);
2249 WIDE_INT_REF_FOR (T1) xi (x, precision);
2250 WIDE_INT_REF_FOR (T2) yi (y, precision);
2251 bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
2252 if (__builtin_expect (xi.len + yi.len == 2, true))
2254 val[0] = xi.ulow () ^ yi.ulow ();
2255 result.set_len (1, is_sign_extended);
2257 else
2258 result.set_len (xor_large (val, xi.val, xi.len,
2259 yi.val, yi.len, precision), is_sign_extended);
2260 return result;
2263 /* Return X + Y. */
2264 template <typename T1, typename T2>
2265 inline WI_BINARY_RESULT (T1, T2)
2266 wi::add (const T1 &x, const T2 &y)
2268 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2269 unsigned int precision = get_precision (result);
2270 WIDE_INT_REF_FOR (T1) xi (x, precision);
2271 WIDE_INT_REF_FOR (T2) yi (y, precision);
2272 if (precision <= HOST_BITS_PER_WIDE_INT)
2274 val[0] = xi.ulow () + yi.ulow ();
2275 result.set_len (1);
2277 /* If the precision is known at compile time to be greater than
2278 HOST_BITS_PER_WIDE_INT, we can optimize the single-HWI case
2279 knowing that (a) all bits in those HWIs are significant and
2280 (b) the result has room for at least two HWIs. This provides
2281 a fast path for things like offset_int and widest_int.
2283 The STATIC_CONSTANT_P test prevents this path from being
2284 used for wide_ints. wide_ints with precisions greater than
2285 HOST_BITS_PER_WIDE_INT are relatively rare and there's not much
2286 point handling them inline. */
2287 else if (STATIC_CONSTANT_P (precision > HOST_BITS_PER_WIDE_INT)
2288 && __builtin_expect (xi.len + yi.len == 2, true))
2290 unsigned HOST_WIDE_INT xl = xi.ulow ();
2291 unsigned HOST_WIDE_INT yl = yi.ulow ();
2292 unsigned HOST_WIDE_INT resultl = xl + yl;
2293 val[0] = resultl;
2294 val[1] = (HOST_WIDE_INT) resultl < 0 ? 0 : -1;
2295 result.set_len (1 + (((resultl ^ xl) & (resultl ^ yl))
2296 >> (HOST_BITS_PER_WIDE_INT - 1)));
2298 else
2299 result.set_len (add_large (val, xi.val, xi.len,
2300 yi.val, yi.len, precision,
2301 UNSIGNED, 0));
2302 return result;
2305 /* Return X + Y. Treat X and Y as having the signednes given by SGN
2306 and indicate in *OVERFLOW whether the operation overflowed. */
2307 template <typename T1, typename T2>
2308 inline WI_BINARY_RESULT (T1, T2)
2309 wi::add (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2311 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2312 unsigned int precision = get_precision (result);
2313 WIDE_INT_REF_FOR (T1) xi (x, precision);
2314 WIDE_INT_REF_FOR (T2) yi (y, precision);
2315 if (precision <= HOST_BITS_PER_WIDE_INT)
2317 unsigned HOST_WIDE_INT xl = xi.ulow ();
2318 unsigned HOST_WIDE_INT yl = yi.ulow ();
2319 unsigned HOST_WIDE_INT resultl = xl + yl;
2320 if (sgn == SIGNED)
2321 *overflow = (((resultl ^ xl) & (resultl ^ yl))
2322 >> (precision - 1)) & 1;
2323 else
2324 *overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
2325 < (xl << (HOST_BITS_PER_WIDE_INT - precision)));
2326 val[0] = resultl;
2327 result.set_len (1);
2329 else
2330 result.set_len (add_large (val, xi.val, xi.len,
2331 yi.val, yi.len, precision,
2332 sgn, overflow));
2333 return result;
2336 /* Return X - Y. */
2337 template <typename T1, typename T2>
2338 inline WI_BINARY_RESULT (T1, T2)
2339 wi::sub (const T1 &x, const T2 &y)
2341 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2342 unsigned int precision = get_precision (result);
2343 WIDE_INT_REF_FOR (T1) xi (x, precision);
2344 WIDE_INT_REF_FOR (T2) yi (y, precision);
2345 if (precision <= HOST_BITS_PER_WIDE_INT)
2347 val[0] = xi.ulow () - yi.ulow ();
2348 result.set_len (1);
2350 /* If the precision is known at compile time to be greater than
2351 HOST_BITS_PER_WIDE_INT, we can optimize the single-HWI case
2352 knowing that (a) all bits in those HWIs are significant and
2353 (b) the result has room for at least two HWIs. This provides
2354 a fast path for things like offset_int and widest_int.
2356 The STATIC_CONSTANT_P test prevents this path from being
2357 used for wide_ints. wide_ints with precisions greater than
2358 HOST_BITS_PER_WIDE_INT are relatively rare and there's not much
2359 point handling them inline. */
2360 else if (STATIC_CONSTANT_P (precision > HOST_BITS_PER_WIDE_INT)
2361 && __builtin_expect (xi.len + yi.len == 2, true))
2363 unsigned HOST_WIDE_INT xl = xi.ulow ();
2364 unsigned HOST_WIDE_INT yl = yi.ulow ();
2365 unsigned HOST_WIDE_INT resultl = xl - yl;
2366 val[0] = resultl;
2367 val[1] = (HOST_WIDE_INT) resultl < 0 ? 0 : -1;
2368 result.set_len (1 + (((resultl ^ xl) & (xl ^ yl))
2369 >> (HOST_BITS_PER_WIDE_INT - 1)));
2371 else
2372 result.set_len (sub_large (val, xi.val, xi.len,
2373 yi.val, yi.len, precision,
2374 UNSIGNED, 0));
2375 return result;
2378 /* Return X - Y. Treat X and Y as having the signednes given by SGN
2379 and indicate in *OVERFLOW whether the operation overflowed. */
2380 template <typename T1, typename T2>
2381 inline WI_BINARY_RESULT (T1, T2)
2382 wi::sub (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2384 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2385 unsigned int precision = get_precision (result);
2386 WIDE_INT_REF_FOR (T1) xi (x, precision);
2387 WIDE_INT_REF_FOR (T2) yi (y, precision);
2388 if (precision <= HOST_BITS_PER_WIDE_INT)
2390 unsigned HOST_WIDE_INT xl = xi.ulow ();
2391 unsigned HOST_WIDE_INT yl = yi.ulow ();
2392 unsigned HOST_WIDE_INT resultl = xl - yl;
2393 if (sgn == SIGNED)
2394 *overflow = (((xl ^ yl) & (resultl ^ xl)) >> (precision - 1)) & 1;
2395 else
2396 *overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
2397 > (xl << (HOST_BITS_PER_WIDE_INT - precision)));
2398 val[0] = resultl;
2399 result.set_len (1);
2401 else
2402 result.set_len (sub_large (val, xi.val, xi.len,
2403 yi.val, yi.len, precision,
2404 sgn, overflow));
2405 return result;
2408 /* Return X * Y. */
2409 template <typename T1, typename T2>
2410 inline WI_BINARY_RESULT (T1, T2)
2411 wi::mul (const T1 &x, const T2 &y)
2413 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2414 unsigned int precision = get_precision (result);
2415 WIDE_INT_REF_FOR (T1) xi (x, precision);
2416 WIDE_INT_REF_FOR (T2) yi (y, precision);
2417 if (precision <= HOST_BITS_PER_WIDE_INT)
2419 val[0] = xi.ulow () * yi.ulow ();
2420 result.set_len (1);
2422 else
2423 result.set_len (mul_internal (val, xi.val, xi.len, yi.val, yi.len,
2424 precision, UNSIGNED, 0, false));
2425 return result;
2428 /* Return X * Y. Treat X and Y as having the signednes given by SGN
2429 and indicate in *OVERFLOW whether the operation overflowed. */
2430 template <typename T1, typename T2>
2431 inline WI_BINARY_RESULT (T1, T2)
2432 wi::mul (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2434 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2435 unsigned int precision = get_precision (result);
2436 WIDE_INT_REF_FOR (T1) xi (x, precision);
2437 WIDE_INT_REF_FOR (T2) yi (y, precision);
2438 result.set_len (mul_internal (val, xi.val, xi.len,
2439 yi.val, yi.len, precision,
2440 sgn, overflow, false));
2441 return result;
2444 /* Return X * Y, treating both X and Y as signed values. Indicate in
2445 *OVERFLOW whether the operation overflowed. */
2446 template <typename T1, typename T2>
2447 inline WI_BINARY_RESULT (T1, T2)
2448 wi::smul (const T1 &x, const T2 &y, bool *overflow)
2450 return mul (x, y, SIGNED, overflow);
2453 /* Return X * Y, treating both X and Y as unsigned values. Indicate in
2454 *OVERFLOW whether the operation overflowed. */
2455 template <typename T1, typename T2>
2456 inline WI_BINARY_RESULT (T1, T2)
2457 wi::umul (const T1 &x, const T2 &y, bool *overflow)
2459 return mul (x, y, UNSIGNED, overflow);
2462 /* Perform a widening multiplication of X and Y, extending the values
2463 according to SGN, and return the high part of the result. */
2464 template <typename T1, typename T2>
2465 inline WI_BINARY_RESULT (T1, T2)
2466 wi::mul_high (const T1 &x, const T2 &y, signop sgn)
2468 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2469 unsigned int precision = get_precision (result);
2470 WIDE_INT_REF_FOR (T1) xi (x, precision);
2471 WIDE_INT_REF_FOR (T2) yi (y, precision);
2472 result.set_len (mul_internal (val, xi.val, xi.len,
2473 yi.val, yi.len, precision,
2474 sgn, 0, true));
2475 return result;
2478 /* Return X / Y, rouding towards 0. Treat X and Y as having the
2479 signedness given by SGN. Indicate in *OVERFLOW if the result
2480 overflows. */
2481 template <typename T1, typename T2>
2482 inline WI_BINARY_RESULT (T1, T2)
2483 wi::div_trunc (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2485 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2486 unsigned int precision = get_precision (quotient);
2487 WIDE_INT_REF_FOR (T1) xi (x, precision);
2488 WIDE_INT_REF_FOR (T2) yi (y);
2490 quotient.set_len (divmod_internal (quotient_val, 0, 0, xi.val, xi.len,
2491 precision,
2492 yi.val, yi.len, yi.precision,
2493 sgn, overflow));
2494 return quotient;
2497 /* Return X / Y, rouding towards 0. Treat X and Y as signed values. */
2498 template <typename T1, typename T2>
2499 inline WI_BINARY_RESULT (T1, T2)
2500 wi::sdiv_trunc (const T1 &x, const T2 &y)
2502 return div_trunc (x, y, SIGNED);
2505 /* Return X / Y, rouding towards 0. Treat X and Y as unsigned values. */
2506 template <typename T1, typename T2>
2507 inline WI_BINARY_RESULT (T1, T2)
2508 wi::udiv_trunc (const T1 &x, const T2 &y)
2510 return div_trunc (x, y, UNSIGNED);
2513 /* Return X / Y, rouding towards -inf. Treat X and Y as having the
2514 signedness given by SGN. Indicate in *OVERFLOW if the result
2515 overflows. */
2516 template <typename T1, typename T2>
2517 inline WI_BINARY_RESULT (T1, T2)
2518 wi::div_floor (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2520 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2521 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2522 unsigned int precision = get_precision (quotient);
2523 WIDE_INT_REF_FOR (T1) xi (x, precision);
2524 WIDE_INT_REF_FOR (T2) yi (y);
2526 unsigned int remainder_len;
2527 quotient.set_len (divmod_internal (quotient_val,
2528 &remainder_len, remainder_val,
2529 xi.val, xi.len, precision,
2530 yi.val, yi.len, yi.precision, sgn,
2531 overflow));
2532 remainder.set_len (remainder_len);
2533 if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn) && remainder != 0)
2534 return quotient - 1;
2535 return quotient;
2538 /* Return X / Y, rouding towards -inf. Treat X and Y as signed values. */
2539 template <typename T1, typename T2>
2540 inline WI_BINARY_RESULT (T1, T2)
2541 wi::sdiv_floor (const T1 &x, const T2 &y)
2543 return div_floor (x, y, SIGNED);
2546 /* Return X / Y, rouding towards -inf. Treat X and Y as unsigned values. */
2547 /* ??? Why do we have both this and udiv_trunc. Aren't they the same? */
2548 template <typename T1, typename T2>
2549 inline WI_BINARY_RESULT (T1, T2)
2550 wi::udiv_floor (const T1 &x, const T2 &y)
2552 return div_floor (x, y, UNSIGNED);
2555 /* Return X / Y, rouding towards +inf. Treat X and Y as having the
2556 signedness given by SGN. Indicate in *OVERFLOW if the result
2557 overflows. */
2558 template <typename T1, typename T2>
2559 inline WI_BINARY_RESULT (T1, T2)
2560 wi::div_ceil (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2562 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2563 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2564 unsigned int precision = get_precision (quotient);
2565 WIDE_INT_REF_FOR (T1) xi (x, precision);
2566 WIDE_INT_REF_FOR (T2) yi (y);
2568 unsigned int remainder_len;
2569 quotient.set_len (divmod_internal (quotient_val,
2570 &remainder_len, remainder_val,
2571 xi.val, xi.len, precision,
2572 yi.val, yi.len, yi.precision, sgn,
2573 overflow));
2574 remainder.set_len (remainder_len);
2575 if (wi::neg_p (x, sgn) == wi::neg_p (y, sgn) && remainder != 0)
2576 return quotient + 1;
2577 return quotient;
2580 /* Return X / Y, rouding towards nearest with ties away from zero.
2581 Treat X and Y as having the signedness given by SGN. Indicate
2582 in *OVERFLOW if the result overflows. */
2583 template <typename T1, typename T2>
2584 inline WI_BINARY_RESULT (T1, T2)
2585 wi::div_round (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2587 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2588 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2589 unsigned int precision = get_precision (quotient);
2590 WIDE_INT_REF_FOR (T1) xi (x, precision);
2591 WIDE_INT_REF_FOR (T2) yi (y);
2593 unsigned int remainder_len;
2594 quotient.set_len (divmod_internal (quotient_val,
2595 &remainder_len, remainder_val,
2596 xi.val, xi.len, precision,
2597 yi.val, yi.len, yi.precision, sgn,
2598 overflow));
2599 remainder.set_len (remainder_len);
2601 if (remainder != 0)
2603 if (sgn == SIGNED)
2605 WI_BINARY_RESULT (T1, T2) abs_remainder = wi::abs (remainder);
2606 if (wi::geu_p (abs_remainder, wi::sub (wi::abs (y), abs_remainder)))
2608 if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn))
2609 return quotient - 1;
2610 else
2611 return quotient + 1;
2614 else
2616 if (wi::geu_p (remainder, wi::sub (y, remainder)))
2617 return quotient + 1;
2620 return quotient;
2623 /* Return X / Y, rouding towards 0. Treat X and Y as having the
2624 signedness given by SGN. Store the remainder in *REMAINDER_PTR. */
2625 template <typename T1, typename T2>
2626 inline WI_BINARY_RESULT (T1, T2)
2627 wi::divmod_trunc (const T1 &x, const T2 &y, signop sgn,
2628 WI_BINARY_RESULT (T1, T2) *remainder_ptr)
2630 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2631 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2632 unsigned int precision = get_precision (quotient);
2633 WIDE_INT_REF_FOR (T1) xi (x, precision);
2634 WIDE_INT_REF_FOR (T2) yi (y);
2636 unsigned int remainder_len;
2637 quotient.set_len (divmod_internal (quotient_val,
2638 &remainder_len, remainder_val,
2639 xi.val, xi.len, precision,
2640 yi.val, yi.len, yi.precision, sgn, 0));
2641 remainder.set_len (remainder_len);
2643 *remainder_ptr = remainder;
2644 return quotient;
2647 /* Compute the greatest common divisor of two numbers A and B using
2648 Euclid's algorithm. */
2649 template <typename T1, typename T2>
2650 inline WI_BINARY_RESULT (T1, T2)
2651 wi::gcd (const T1 &a, const T2 &b, signop sgn)
2653 T1 x, y, z;
2655 x = wi::abs (a);
2656 y = wi::abs (b);
2658 while (gt_p (x, 0, sgn))
2660 z = mod_trunc (y, x, sgn);
2661 y = x;
2662 x = z;
2665 return y;
2668 /* Compute X / Y, rouding towards 0, and return the remainder.
2669 Treat X and Y as having the signedness given by SGN. Indicate
2670 in *OVERFLOW if the division overflows. */
2671 template <typename T1, typename T2>
2672 inline WI_BINARY_RESULT (T1, T2)
2673 wi::mod_trunc (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2675 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2676 unsigned int precision = get_precision (remainder);
2677 WIDE_INT_REF_FOR (T1) xi (x, precision);
2678 WIDE_INT_REF_FOR (T2) yi (y);
2680 unsigned int remainder_len;
2681 divmod_internal (0, &remainder_len, remainder_val,
2682 xi.val, xi.len, precision,
2683 yi.val, yi.len, yi.precision, sgn, overflow);
2684 remainder.set_len (remainder_len);
2686 return remainder;
2689 /* Compute X / Y, rouding towards 0, and return the remainder.
2690 Treat X and Y as signed values. */
2691 template <typename T1, typename T2>
2692 inline WI_BINARY_RESULT (T1, T2)
2693 wi::smod_trunc (const T1 &x, const T2 &y)
2695 return mod_trunc (x, y, SIGNED);
2698 /* Compute X / Y, rouding towards 0, and return the remainder.
2699 Treat X and Y as unsigned values. */
2700 template <typename T1, typename T2>
2701 inline WI_BINARY_RESULT (T1, T2)
2702 wi::umod_trunc (const T1 &x, const T2 &y)
2704 return mod_trunc (x, y, UNSIGNED);
2707 /* Compute X / Y, rouding towards -inf, and return the remainder.
2708 Treat X and Y as having the signedness given by SGN. Indicate
2709 in *OVERFLOW if the division overflows. */
2710 template <typename T1, typename T2>
2711 inline WI_BINARY_RESULT (T1, T2)
2712 wi::mod_floor (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2714 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2715 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2716 unsigned int precision = get_precision (quotient);
2717 WIDE_INT_REF_FOR (T1) xi (x, precision);
2718 WIDE_INT_REF_FOR (T2) yi (y);
2720 unsigned int remainder_len;
2721 quotient.set_len (divmod_internal (quotient_val,
2722 &remainder_len, remainder_val,
2723 xi.val, xi.len, precision,
2724 yi.val, yi.len, yi.precision, sgn,
2725 overflow));
2726 remainder.set_len (remainder_len);
2728 if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn) && remainder != 0)
2729 return remainder + y;
2730 return remainder;
2733 /* Compute X / Y, rouding towards -inf, and return the remainder.
2734 Treat X and Y as unsigned values. */
2735 /* ??? Why do we have both this and umod_trunc. Aren't they the same? */
2736 template <typename T1, typename T2>
2737 inline WI_BINARY_RESULT (T1, T2)
2738 wi::umod_floor (const T1 &x, const T2 &y)
2740 return mod_floor (x, y, UNSIGNED);
2743 /* Compute X / Y, rouding towards +inf, and return the remainder.
2744 Treat X and Y as having the signedness given by SGN. Indicate
2745 in *OVERFLOW if the division overflows. */
2746 template <typename T1, typename T2>
2747 inline WI_BINARY_RESULT (T1, T2)
2748 wi::mod_ceil (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2750 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2751 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2752 unsigned int precision = get_precision (quotient);
2753 WIDE_INT_REF_FOR (T1) xi (x, precision);
2754 WIDE_INT_REF_FOR (T2) yi (y);
2756 unsigned int remainder_len;
2757 quotient.set_len (divmod_internal (quotient_val,
2758 &remainder_len, remainder_val,
2759 xi.val, xi.len, precision,
2760 yi.val, yi.len, yi.precision, sgn,
2761 overflow));
2762 remainder.set_len (remainder_len);
2764 if (wi::neg_p (x, sgn) == wi::neg_p (y, sgn) && remainder != 0)
2765 return remainder - y;
2766 return remainder;
2769 /* Compute X / Y, rouding towards nearest with ties away from zero,
2770 and return the remainder. Treat X and Y as having the signedness
2771 given by SGN. Indicate in *OVERFLOW if the division overflows. */
2772 template <typename T1, typename T2>
2773 inline WI_BINARY_RESULT (T1, T2)
2774 wi::mod_round (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2776 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2777 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2778 unsigned int precision = get_precision (quotient);
2779 WIDE_INT_REF_FOR (T1) xi (x, precision);
2780 WIDE_INT_REF_FOR (T2) yi (y);
2782 unsigned int remainder_len;
2783 quotient.set_len (divmod_internal (quotient_val,
2784 &remainder_len, remainder_val,
2785 xi.val, xi.len, precision,
2786 yi.val, yi.len, yi.precision, sgn,
2787 overflow));
2788 remainder.set_len (remainder_len);
2790 if (remainder != 0)
2792 if (sgn == SIGNED)
2794 WI_BINARY_RESULT (T1, T2) abs_remainder = wi::abs (remainder);
2795 if (wi::geu_p (abs_remainder, wi::sub (wi::abs (y), abs_remainder)))
2797 if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn))
2798 return remainder + y;
2799 else
2800 return remainder - y;
2803 else
2805 if (wi::geu_p (remainder, wi::sub (y, remainder)))
2806 return remainder - y;
2809 return remainder;
2812 /* Return true if X is a multiple of Y. Treat X and Y as having the
2813 signedness given by SGN. */
2814 template <typename T1, typename T2>
2815 inline bool
2816 wi::multiple_of_p (const T1 &x, const T2 &y, signop sgn)
2818 return wi::mod_trunc (x, y, sgn) == 0;
2821 /* Return true if X is a multiple of Y, storing X / Y in *RES if so.
2822 Treat X and Y as having the signedness given by SGN. */
2823 template <typename T1, typename T2>
2824 inline bool
2825 wi::multiple_of_p (const T1 &x, const T2 &y, signop sgn,
2826 WI_BINARY_RESULT (T1, T2) *res)
2828 WI_BINARY_RESULT (T1, T2) remainder;
2829 WI_BINARY_RESULT (T1, T2) quotient
2830 = divmod_trunc (x, y, sgn, &remainder);
2831 if (remainder == 0)
2833 *res = quotient;
2834 return true;
2836 return false;
2839 /* Return X << Y. Return 0 if Y is greater than or equal to
2840 the precision of X. */
2841 template <typename T1, typename T2>
2842 inline WI_UNARY_RESULT (T1)
2843 wi::lshift (const T1 &x, const T2 &y)
2845 WI_UNARY_RESULT_VAR (result, val, T1, x);
2846 unsigned int precision = get_precision (result);
2847 WIDE_INT_REF_FOR (T1) xi (x, precision);
2848 WIDE_INT_REF_FOR (T2) yi (y);
2849 /* Handle the simple cases quickly. */
2850 if (geu_p (yi, precision))
2852 val[0] = 0;
2853 result.set_len (1);
2855 else
2857 unsigned int shift = yi.to_uhwi ();
2858 /* For fixed-precision integers like offset_int and widest_int,
2859 handle the case where the shift value is constant and the
2860 result is a single nonnegative HWI (meaning that we don't
2861 need to worry about val[1]). This is particularly common
2862 for converting a byte count to a bit count.
2864 For variable-precision integers like wide_int, handle HWI
2865 and sub-HWI integers inline. */
2866 if (STATIC_CONSTANT_P (xi.precision > HOST_BITS_PER_WIDE_INT)
2867 ? (STATIC_CONSTANT_P (shift < HOST_BITS_PER_WIDE_INT - 1)
2868 && xi.len == 1
2869 && xi.val[0] <= (HOST_WIDE_INT) ((unsigned HOST_WIDE_INT)
2870 HOST_WIDE_INT_MAX >> shift))
2871 : precision <= HOST_BITS_PER_WIDE_INT)
2873 val[0] = xi.ulow () << shift;
2874 result.set_len (1);
2876 else
2877 result.set_len (lshift_large (val, xi.val, xi.len,
2878 precision, shift));
2880 return result;
2883 /* Return X >> Y, using a logical shift. Return 0 if Y is greater than
2884 or equal to the precision of X. */
2885 template <typename T1, typename T2>
2886 inline WI_UNARY_RESULT (T1)
2887 wi::lrshift (const T1 &x, const T2 &y)
2889 WI_UNARY_RESULT_VAR (result, val, T1, x);
2890 /* Do things in the precision of the input rather than the output,
2891 since the result can be no larger than that. */
2892 WIDE_INT_REF_FOR (T1) xi (x);
2893 WIDE_INT_REF_FOR (T2) yi (y);
2894 /* Handle the simple cases quickly. */
2895 if (geu_p (yi, xi.precision))
2897 val[0] = 0;
2898 result.set_len (1);
2900 else
2902 unsigned int shift = yi.to_uhwi ();
2903 /* For fixed-precision integers like offset_int and widest_int,
2904 handle the case where the shift value is constant and the
2905 shifted value is a single nonnegative HWI (meaning that all
2906 bits above the HWI are zero). This is particularly common
2907 for converting a bit count to a byte count.
2909 For variable-precision integers like wide_int, handle HWI
2910 and sub-HWI integers inline. */
2911 if (STATIC_CONSTANT_P (xi.precision > HOST_BITS_PER_WIDE_INT)
2912 ? xi.len == 1 && xi.val[0] >= 0
2913 : xi.precision <= HOST_BITS_PER_WIDE_INT)
2915 val[0] = xi.to_uhwi () >> shift;
2916 result.set_len (1);
2918 else
2919 result.set_len (lrshift_large (val, xi.val, xi.len, xi.precision,
2920 get_precision (result), shift));
2922 return result;
2925 /* Return X >> Y, using an arithmetic shift. Return a sign mask if
2926 Y is greater than or equal to the precision of X. */
2927 template <typename T1, typename T2>
2928 inline WI_UNARY_RESULT (T1)
2929 wi::arshift (const T1 &x, const T2 &y)
2931 WI_UNARY_RESULT_VAR (result, val, T1, x);
2932 /* Do things in the precision of the input rather than the output,
2933 since the result can be no larger than that. */
2934 WIDE_INT_REF_FOR (T1) xi (x);
2935 WIDE_INT_REF_FOR (T2) yi (y);
2936 /* Handle the simple cases quickly. */
2937 if (geu_p (yi, xi.precision))
2939 val[0] = sign_mask (x);
2940 result.set_len (1);
2942 else
2944 unsigned int shift = yi.to_uhwi ();
2945 if (xi.precision <= HOST_BITS_PER_WIDE_INT)
2947 val[0] = sext_hwi (xi.ulow () >> shift, xi.precision - shift);
2948 result.set_len (1, true);
2950 else
2951 result.set_len (arshift_large (val, xi.val, xi.len, xi.precision,
2952 get_precision (result), shift));
2954 return result;
2957 /* Return X >> Y, using an arithmetic shift if SGN is SIGNED and a
2958 logical shift otherwise. */
2959 template <typename T1, typename T2>
2960 inline WI_UNARY_RESULT (T1)
2961 wi::rshift (const T1 &x, const T2 &y, signop sgn)
2963 if (sgn == UNSIGNED)
2964 return lrshift (x, y);
2965 else
2966 return arshift (x, y);
2969 /* Return the result of rotating the low WIDTH bits of X left by Y
2970 bits and zero-extending the result. Use a full-width rotate if
2971 WIDTH is zero. */
2972 template <typename T1, typename T2>
2973 WI_UNARY_RESULT (T1)
2974 wi::lrotate (const T1 &x, const T2 &y, unsigned int width)
2976 unsigned int precision = get_binary_precision (x, x);
2977 if (width == 0)
2978 width = precision;
2979 WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
2980 WI_UNARY_RESULT (T1) left = wi::lshift (x, ymod);
2981 WI_UNARY_RESULT (T1) right = wi::lrshift (x, wi::sub (width, ymod));
2982 if (width != precision)
2983 return wi::zext (left, width) | wi::zext (right, width);
2984 return left | right;
2987 /* Return the result of rotating the low WIDTH bits of X right by Y
2988 bits and zero-extending the result. Use a full-width rotate if
2989 WIDTH is zero. */
2990 template <typename T1, typename T2>
2991 WI_UNARY_RESULT (T1)
2992 wi::rrotate (const T1 &x, const T2 &y, unsigned int width)
2994 unsigned int precision = get_binary_precision (x, x);
2995 if (width == 0)
2996 width = precision;
2997 WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
2998 WI_UNARY_RESULT (T1) right = wi::lrshift (x, ymod);
2999 WI_UNARY_RESULT (T1) left = wi::lshift (x, wi::sub (width, ymod));
3000 if (width != precision)
3001 return wi::zext (left, width) | wi::zext (right, width);
3002 return left | right;
3005 /* Return 0 if the number of 1s in X is even and 1 if the number of 1s
3006 is odd. */
3007 inline int
3008 wi::parity (const wide_int_ref &x)
3010 return popcount (x) & 1;
3013 /* Extract WIDTH bits from X, starting at BITPOS. */
3014 template <typename T>
3015 inline unsigned HOST_WIDE_INT
3016 wi::extract_uhwi (const T &x, unsigned int bitpos, unsigned int width)
3018 unsigned precision = get_precision (x);
3019 if (precision < bitpos + width)
3020 precision = bitpos + width;
3021 WIDE_INT_REF_FOR (T) xi (x, precision);
3023 /* Handle this rare case after the above, so that we assert about
3024 bogus BITPOS values. */
3025 if (width == 0)
3026 return 0;
3028 unsigned int start = bitpos / HOST_BITS_PER_WIDE_INT;
3029 unsigned int shift = bitpos % HOST_BITS_PER_WIDE_INT;
3030 unsigned HOST_WIDE_INT res = xi.elt (start);
3031 res >>= shift;
3032 if (shift + width > HOST_BITS_PER_WIDE_INT)
3034 unsigned HOST_WIDE_INT upper = xi.elt (start + 1);
3035 res |= upper << (-shift % HOST_BITS_PER_WIDE_INT);
3037 return zext_hwi (res, width);
3040 /* Return the minimum precision needed to store X with sign SGN. */
3041 template <typename T>
3042 inline unsigned int
3043 wi::min_precision (const T &x, signop sgn)
3045 if (sgn == SIGNED)
3046 return get_precision (x) - clrsb (x);
3047 else
3048 return get_precision (x) - clz (x);
3051 template<typename T>
3052 void
3053 gt_ggc_mx (generic_wide_int <T> *)
3057 template<typename T>
3058 void
3059 gt_pch_nx (generic_wide_int <T> *)
3063 template<typename T>
3064 void
3065 gt_pch_nx (generic_wide_int <T> *, void (*) (void *, void *), void *)
3069 template<int N>
3070 void
3071 gt_ggc_mx (trailing_wide_ints <N> *)
3075 template<int N>
3076 void
3077 gt_pch_nx (trailing_wide_ints <N> *)
3081 template<int N>
3082 void
3083 gt_pch_nx (trailing_wide_ints <N> *, void (*) (void *, void *), void *)
3087 namespace wi
3089 /* Used for overloaded functions in which the only other acceptable
3090 scalar type is a pointer. It stops a plain 0 from being treated
3091 as a null pointer. */
3092 struct never_used1 {};
3093 struct never_used2 {};
3095 wide_int min_value (unsigned int, signop);
3096 wide_int min_value (never_used1 *);
3097 wide_int min_value (never_used2 *);
3098 wide_int max_value (unsigned int, signop);
3099 wide_int max_value (never_used1 *);
3100 wide_int max_value (never_used2 *);
3102 /* FIXME: this is target dependent, so should be elsewhere.
3103 It also seems to assume that CHAR_BIT == BITS_PER_UNIT. */
3104 wide_int from_buffer (const unsigned char *, unsigned int);
3106 #ifndef GENERATOR_FILE
3107 void to_mpz (const wide_int_ref &, mpz_t, signop);
3108 #endif
3110 wide_int mask (unsigned int, bool, unsigned int);
3111 wide_int shifted_mask (unsigned int, unsigned int, bool, unsigned int);
3112 wide_int set_bit_in_zero (unsigned int, unsigned int);
3113 wide_int insert (const wide_int &x, const wide_int &y, unsigned int,
3114 unsigned int);
3116 template <typename T>
3117 T mask (unsigned int, bool);
3119 template <typename T>
3120 T shifted_mask (unsigned int, unsigned int, bool);
3122 template <typename T>
3123 T set_bit_in_zero (unsigned int);
3125 unsigned int mask (HOST_WIDE_INT *, unsigned int, bool, unsigned int);
3126 unsigned int shifted_mask (HOST_WIDE_INT *, unsigned int, unsigned int,
3127 bool, unsigned int);
3128 unsigned int from_array (HOST_WIDE_INT *, const HOST_WIDE_INT *,
3129 unsigned int, unsigned int, bool);
3132 /* Return a PRECISION-bit integer in which the low WIDTH bits are set
3133 and the other bits are clear, or the inverse if NEGATE_P. */
3134 inline wide_int
3135 wi::mask (unsigned int width, bool negate_p, unsigned int precision)
3137 wide_int result = wide_int::create (precision);
3138 result.set_len (mask (result.write_val (), width, negate_p, precision));
3139 return result;
3142 /* Return a PRECISION-bit integer in which the low START bits are clear,
3143 the next WIDTH bits are set, and the other bits are clear,
3144 or the inverse if NEGATE_P. */
3145 inline wide_int
3146 wi::shifted_mask (unsigned int start, unsigned int width, bool negate_p,
3147 unsigned int precision)
3149 wide_int result = wide_int::create (precision);
3150 result.set_len (shifted_mask (result.write_val (), start, width, negate_p,
3151 precision));
3152 return result;
3155 /* Return a PRECISION-bit integer in which bit BIT is set and all the
3156 others are clear. */
3157 inline wide_int
3158 wi::set_bit_in_zero (unsigned int bit, unsigned int precision)
3160 return shifted_mask (bit, 1, false, precision);
3163 /* Return an integer of type T in which the low WIDTH bits are set
3164 and the other bits are clear, or the inverse if NEGATE_P. */
3165 template <typename T>
3166 inline T
3167 wi::mask (unsigned int width, bool negate_p)
3169 STATIC_ASSERT (wi::int_traits<T>::precision);
3170 T result;
3171 result.set_len (mask (result.write_val (), width, negate_p,
3172 wi::int_traits <T>::precision));
3173 return result;
3176 /* Return an integer of type T in which the low START bits are clear,
3177 the next WIDTH bits are set, and the other bits are clear, or the
3178 inverse if NEGATE_P. */
3179 template <typename T>
3180 inline T
3181 wi::shifted_mask (unsigned int start, unsigned int width, bool negate_p)
3183 STATIC_ASSERT (wi::int_traits<T>::precision);
3184 T result;
3185 result.set_len (shifted_mask (result.write_val (), start, width,
3186 negate_p,
3187 wi::int_traits <T>::precision));
3188 return result;
3191 /* Return an integer of type T in which bit BIT is set and all the
3192 others are clear. */
3193 template <typename T>
3194 inline T
3195 wi::set_bit_in_zero (unsigned int bit)
3197 return shifted_mask <T> (bit, 1, false);
3200 #endif /* WIDE_INT_H */