1 /* Operations with very long integers. -*- C++ -*-
2 Copyright (C) 2012-2013 Free Software Foundation, Inc.
4 This file is part of GCC.
6 GCC is free software; you can redistribute it and/or modify it
7 under the terms of the GNU General Public License as published by the
8 Free Software Foundation; either version 3, or (at your option) any
9 later version.
11 GCC is distributed in the hope that it will be useful, but WITHOUT
12 ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
13 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
14 for more details.
16 You should have received a copy of the GNU General Public License
17 along with GCC; see the file COPYING3. If not see
18 <http://www.gnu.org/licenses/>. */
20 #ifndef WIDE_INT_H
21 #define WIDE_INT_H
23 /* wide-int.[cc|h] implements a class that efficiently performs
24 mathematical operations on finite precision integers. wide_ints
25 are designed to be transient - they are not for long term storage
26 of values. There is tight integration between wide_ints and the
27 other longer storage GCC representations (rtl and tree).
29 The actual precision of a wide_int depends on the flavor. There
30 are three predefined flavors:
32 1) wide_int (the default). This flavor does the math in the
33 precision of its input arguments. It is assumed (and checked)
34 that the precisions of the operands and results are consistent.
35 This is the most efficient flavor. It is not possible to examine
36 bits above the precision that has been specified. Because of
37 this, the default flavor has semantics that are simple to
38 understand and in general model the underlying hardware that the
39   compiler is targeted for.
41 This flavor must be used at the RTL level of gcc because there
42 is, in general, not enough information in the RTL representation
43 to extend a value beyond the precision specified in the mode.
45 This flavor should also be used at the TREE and GIMPLE levels of
46 the compiler except for the circumstances described in the
47 descriptions of the other two flavors.
49 The default wide_int representation does not contain any
50 information inherent about signedness of the represented value,
51 so it can be used to represent both signed and unsigned numbers.
52 For operations where the results depend on signedness (full width
53 multiply, division, shifts, comparisons, and operations that need
54 overflow detected), the signedness must be specified separately.
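   As an illustrative sketch (assuming two equal-precision wide_ints
   X and Y), the signedness is passed either through a signop argument
   or encoded in the function name:

     wi::lt_p (x, y, SIGNED);         // signed x < y
     wi::ltu_p (x, y);                // unsigned x < y
     wi::div_trunc (x, y, UNSIGNED);  // unsigned truncating division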
56 2) offset_int. This is a fixed size representation that is
57 guaranteed to be large enough to compute any bit or byte sized
58 address calculation on the target. Currently the value is 64 + 4
59   bits rounded up to the next multiple of
60 HOST_BITS_PER_WIDE_INT (but this can be changed when the first
61 port needs more than 64 bits for the size of a pointer).
63 This flavor can be used for all address math on the target. In
64 this representation, the values are sign or zero extended based
65 on their input types to the internal precision. All math is done
66 in this precision and then the values are truncated to fit in the
67 result type. Unlike most gimple or rtl intermediate code, it is
68 not useful to perform the address arithmetic at the same
69 precision in which the operands are represented because there has
70 been no effort by the front ends to convert most addressing
71 arithmetic to canonical types.
73 3) widest_int. This representation is an approximation of
74 infinite precision math. However, it is not really infinite
75 precision math as in the GMP library. It is really finite
76 precision math where the precision is 4 times the size of the
77 largest integer that the target port can represent.
79 widest_int is supposed to be wider than any number that it needs to
80 store, meaning that there is always at least one leading sign bit.
81 All widest_int values are therefore signed.
83 There are several places in the GCC where this should/must be used:
85 * Code that does induction variable optimizations. This code
86 works with induction variables of many different types at the
87 same time. Because of this, it ends up doing many different
88 calculations where the operands are not compatible types. The
89   widest_int makes this easy, because it provides a representation in
90   which nothing is lost when converting from any variable.
92 * There are a small number of passes that currently use the
93 widest_int that should use the default. These should be
94 changed.
96 There are surprising features of offset_int and widest_int
97 that the users should be careful about:
99 1) Shifts and rotations are just weird. You have to specify a
100   precision in which the shift or rotate is to happen.  The bits
101   above this precision remain unchanged.  While this is what you
102   want, it is clearly non-obvious.
104 2) Larger precision math sometimes does not produce the same
105 answer as would be expected for doing the math at the proper
106 precision. In particular, a multiply followed by a divide will
107 produce a different answer if the first product is larger than
108 what can be represented in the input precision.
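   For example, treating everything as unsigned, (200 * 2) / 2 computed
   at 8 bits is 72, because the product wraps to 144 before the division,
   whereas the same calculation done in widest_int yields 200 (a purely
   illustrative calculation).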
110 The offset_int and the widest_int flavors are more expensive
111 than the default wide int, so in addition to the caveats with these
112   two, the default is the preferred representation.
114 All three flavors of wide_int are represented as a vector of
115 HOST_WIDE_INTs. The default and widest_int vectors contain enough elements
116 to hold a value of MAX_BITSIZE_MODE_ANY_INT bits. offset_int contains only
117 enough elements to hold ADDR_MAX_PRECISION bits. The values are stored
118 in the vector with the least significant HOST_BITS_PER_WIDE_INT bits
119 in element 0.
121 The default wide_int contains three fields: the vector (VAL),
122 the precision and a length (LEN). The length is the number of HWIs
123 needed to represent the value. widest_int and offset_int have a
124 constant precision that cannot be changed, so they only store the
125 VAL and LEN fields.
127 Since most integers used in a compiler are small values, it is
128 generally profitable to use a representation of the value that is
129 as small as possible. LEN is used to indicate the number of
130 elements of the vector that are in use. The numbers are stored as
131 sign extended numbers as a means of compression. Leading
132 HOST_WIDE_INTs that contain strings of either -1 or 0 are removed
133 as long as they can be reconstructed from the top bit that is being
134 represented.
136 The precision and length of a wide_int are always greater than 0.
137 Any bits in a wide_int above the precision are sign-extended from the
138 most significant bit. For example, a 4-bit value 0x8 is represented as
139 VAL = { 0xf...fff8 }. However, as an optimization, we allow other integer
140 constants to be represented with undefined bits above the precision.
141 This allows INTEGER_CSTs to be pre-extended according to TYPE_SIGN,
142 so that the INTEGER_CST representation can be used both in TYPE_PRECISION
143 and in wider precisions.
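   As an illustrative example with 64-bit HWIs: a 128-bit wide_int with
   value 1 is stored as LEN = 1, VAL = { 1 }; value -1 is likewise
   LEN = 1, VAL = { -1 }; but the unsigned value 1 << 63 needs LEN = 2,
   VAL = { 1 << 63, 0 }, since dropping the zero block would make the
   implicit sign extension of the remaining top bit turn the value
   negative.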
145 There are constructors to create the various forms of wide_int from
146 trees, rtl and constants. For trees you can simply say:
148 tree t = ...;
149 wide_int x = t;
151 However, a little more syntax is required for rtl constants since
152   they do not have an explicit precision.  To make an rtl into a
153 wide_int, you have to pair it with a mode. The canonical way to do
154 this is with std::make_pair as in:
156 rtx r = ...
157 wide_int x = std::make_pair (r, mode);
159 Similarly, a wide_int can only be constructed from a host value if
160 the target precision is given explicitly, such as in:
162   wide_int x = wi::shwi (c, prec);  // sign-extend C if necessary
163   wide_int y = wi::uhwi (c, prec);  // zero-extend C if necessary
165 However, offset_int and widest_int have an inherent precision and so
166 can be initialized directly from a host value:
168 offset_int x = (int) c; // sign-extend C
169   widest_int y = (unsigned int) c;  // zero-extend C
171 It is also possible to do arithmetic directly on trees, rtxes and
172 constants. For example:
174 wi::add (t1, t2); // add equal-sized INTEGER_CSTs t1 and t2
175 wi::add (t1, 1); // add 1 to INTEGER_CST t1
176 wi::add (r1, r2); // add equal-sized rtx constants r1 and r2
177 wi::lshift (1, 100); // 1 << 100 as a widest_int
179 Many binary operations place restrictions on the combinations of inputs,
180 using the following rules:
182 - {tree, rtx, wide_int} op {tree, rtx, wide_int} -> wide_int
183 The inputs must be the same precision. The result is a wide_int
184 of the same precision
186 - {tree, rtx, wide_int} op (un)signed HOST_WIDE_INT -> wide_int
187 (un)signed HOST_WIDE_INT op {tree, rtx, wide_int} -> wide_int
188 The HOST_WIDE_INT is extended or truncated to the precision of
189 the other input. The result is a wide_int of the same precision
190 as that input.
192 - (un)signed HOST_WIDE_INT op (un)signed HOST_WIDE_INT -> widest_int
193 The inputs are extended to widest_int precision and produce a
194 widest_int result.
196 - offset_int op offset_int -> offset_int
197 offset_int op (un)signed HOST_WIDE_INT -> offset_int
198 (un)signed HOST_WIDE_INT op offset_int -> offset_int
200 - widest_int op widest_int -> widest_int
201 widest_int op (un)signed HOST_WIDE_INT -> widest_int
202 (un)signed HOST_WIDE_INT op widest_int -> widest_int
204 Other combinations like:
206 - widest_int op offset_int and
207 - wide_int op offset_int
209 are not allowed. The inputs should instead be extended or truncated
210 so that they match.
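   For example (an illustrative sketch rather than a required idiom),
   an offset_int OFF can be combined with a wide_int W by converting
   it to W's precision first:

     wide_int sum = wi::add (w, wide_int::from (off, w.get_precision (),
                                                 SIGNED));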
212 The inputs to comparison functions like wi::eq_p and wi::lts_p
213 follow the same compatibility rules, although their return types
214 are different. Unary functions on X produce the same result as
215 a binary operation X + X. Shift functions X op Y also produce
216 the same result as X + X; the precision of the shift amount Y
217 can be arbitrarily different from X. */
220 #include <utility>
221 #include "system.h"
222 #include "hwint.h"
223 #include "signop.h"
224 #include "insn-modes.h"
226 #if 0
227 #define DEBUG_WIDE_INT
228 #endif
230 /* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very
231   early examination of the target's mode file.  Thus it is safe to
232   assume that some small multiple of this number is larger than any
233   number that the target could compute.  The place in the compiler that
234 currently needs the widest ints is the code that determines the
235 range of a multiply. This code needs 2n + 2 bits. */
237 #define WIDE_INT_MAX_ELTS \
238 ((4 * MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT - 1) \
239 / HOST_BITS_PER_WIDE_INT)
241 /* This is the max size of any pointer on any machine. It does not
242 seem to be as easy to sniff this out of the machine description as
243 it is for MAX_BITSIZE_MODE_ANY_INT since targets may support
244 multiple address sizes and may have different address sizes for
245 different address spaces. However, currently the largest pointer
246   on any platform is 64 bits.  When that changes, it is likely
247 that a target hook should be defined so that targets can make this
248 value larger for those targets. */
249 #define ADDR_MAX_BITSIZE 64
251 /* This is the internal precision used when doing any address
252 arithmetic. The '4' is really 3 + 1. Three of the bits are for
253   the number of extra bits needed to do bit addresses and the single bit
254   is to allow everything to be signed without losing any precision.  Then
255 everything is rounded up to the next HWI for efficiency. */
256 #define ADDR_MAX_PRECISION \
257 ((ADDR_MAX_BITSIZE + 4 + HOST_BITS_PER_WIDE_INT - 1) & ~(HOST_BITS_PER_WIDE_INT - 1))
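/* For example, on a host with 64-bit HOST_WIDE_INTs the expression above
   works out to (64 + 4 + 63) & ~63 = 128 bits, i.e. two HWIs per
   offset_int.  */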
259 /* The type of result produced by a binary operation on types T1 and T2.
260 Defined purely for brevity. */
261 #define WI_BINARY_RESULT(T1, T2) \
262 typename wi::binary_traits <T1, T2>::result_type
264 /* The type of result produced by a unary operation on type T. */
265 #define WI_UNARY_RESULT(T) \
266 typename wi::unary_traits <T>::result_type
268 /* Define a variable RESULT to hold the result of a binary operation on
269 X and Y, which have types T1 and T2 respectively. Define VAR to
270 point to the blocks of RESULT. Once the user of the macro has
271 filled in VAR, it should call RESULT.set_len to set the number
272 of initialized blocks. */
273 #define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
274 WI_BINARY_RESULT (T1, T2) RESULT = \
275 wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
276 HOST_WIDE_INT *VAL = RESULT.write_val ()
278 /* Similar for the result of a unary operation on X, which has type T. */
279 #define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
280 WI_UNARY_RESULT (T) RESULT = \
281 wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
282 HOST_WIDE_INT *VAL = RESULT.write_val ()
284 template <typename T> struct generic_wide_int;
285 template <int N> struct fixed_wide_int_storage;
286 struct wide_int_storage;
288 /* An N-bit integer. Until we can use typedef templates, use this instead. */
289 #define FIXED_WIDE_INT(N) \
290 generic_wide_int < fixed_wide_int_storage <N> >
292 typedef generic_wide_int <wide_int_storage> wide_int;
293 typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) offset_int;
294 typedef FIXED_WIDE_INT (MAX_BITSIZE_MODE_ANY_INT) widest_int;
296 template <bool SE>
297 struct wide_int_ref_storage;
299 typedef generic_wide_int <wide_int_ref_storage <false> > wide_int_ref;
301 /* This can be used instead of wide_int_ref if the referenced value is
302 known to have type T. It carries across properties of T's representation,
303 such as whether excess upper bits in a HWI are defined, and can therefore
304 help avoid redundant work.
306 The macro could be replaced with a template typedef, once we're able
307 to use those. */
308 #define WIDE_INT_REF_FOR(T) \
309 generic_wide_int \
310 <wide_int_ref_storage <wi::int_traits <T>::is_sign_extended> >
312 namespace wi
314 /* Classifies an integer based on its precision. */
315 enum precision_type {
316 /* The integer has both a precision and defined signedness. This allows
317 the integer to be converted to any width, since we know whether to fill
318 any extra bits with zeros or signs. */
319 FLEXIBLE_PRECISION,
321 /* The integer has a variable precision but no defined signedness. */
322 VAR_PRECISION,
324 /* The integer has a constant precision (known at GCC compile time)
325 but no defined signedness. */
326 CONST_PRECISION
329 /* This class, which has no default implementation, is expected to
330 provide the following members:
332 static const enum precision_type precision_type;
333 Classifies the type of T.
335 static const unsigned int precision;
336 Only defined if precision_type == CONST_PRECISION. Specifies the
337 precision of all integers of type T.
339 static const bool host_dependent_precision;
340 True if the precision of T depends (or can depend) on the host.
342 static unsigned int get_precision (const T &x)
343 Return the number of bits in X.
345 static wi::storage_ref *decompose (HOST_WIDE_INT *scratch,
346 unsigned int precision, const T &x)
347 Decompose X as a PRECISION-bit integer, returning the associated
348 wi::storage_ref. SCRATCH is available as scratch space if needed.
349 The routine should assert that PRECISION is acceptable. */
350 template <typename T> struct int_traits;
352 /* This class provides a single type, result_type, which specifies the
353 type of integer produced by a binary operation whose inputs have
354 types T1 and T2. The definition should be symmetric. */
355 template <typename T1, typename T2,
356 enum precision_type P1 = int_traits <T1>::precision_type,
357 enum precision_type P2 = int_traits <T2>::precision_type>
358 struct binary_traits;
360 /* The result of a unary operation on T is the same as the result of
361 a binary operation on two values of type T. */
362 template <typename T>
363 struct unary_traits : public binary_traits <T, T> {};
365 /* Specify the result type for each supported combination of binary
366 inputs. Note that CONST_PRECISION and VAR_PRECISION cannot be
367 mixed, in order to give stronger type checking. When both inputs
368 are CONST_PRECISION, they must have the same precision. */
369 template <>
370 template <typename T1, typename T2>
371 struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
373 typedef widest_int result_type;
376 template <>
377 template <typename T1, typename T2>
378 struct binary_traits <T1, T2, FLEXIBLE_PRECISION, VAR_PRECISION>
380 typedef wide_int result_type;
383 template <>
384 template <typename T1, typename T2>
385 struct binary_traits <T1, T2, FLEXIBLE_PRECISION, CONST_PRECISION>
387 /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
388 so as not to confuse gengtype. */
389 typedef generic_wide_int < fixed_wide_int_storage
390 <int_traits <T2>::precision> > result_type;
393 template <>
394 template <typename T1, typename T2>
395 struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
397 typedef wide_int result_type;
400 template <>
401 template <typename T1, typename T2>
402 struct binary_traits <T1, T2, CONST_PRECISION, FLEXIBLE_PRECISION>
404 /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
405 so as not to confuse gengtype. */
406 typedef generic_wide_int < fixed_wide_int_storage
407 <int_traits <T1>::precision> > result_type;
410 template <>
411 template <typename T1, typename T2>
412 struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
414 /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
415 so as not to confuse gengtype. */
416 STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
417 typedef generic_wide_int < fixed_wide_int_storage
418 <int_traits <T1>::precision> > result_type;
421 template <>
422 template <typename T1, typename T2>
423 struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
425 typedef wide_int result_type;
429 /* Public functions for querying and operating on integers. */
430 namespace wi
432 template <typename T>
433 unsigned int get_precision (const T &);
435 template <typename T1, typename T2>
436 unsigned int get_binary_precision (const T1 &, const T2 &);
438 template <typename T1, typename T2>
439 void copy (T1 &, const T2 &);
441 #define UNARY_PREDICATE \
442 template <typename T> bool
443 #define UNARY_FUNCTION \
444 template <typename T> WI_UNARY_RESULT (T)
445 #define BINARY_PREDICATE \
446 template <typename T1, typename T2> bool
447 #define BINARY_FUNCTION \
448 template <typename T1, typename T2> WI_BINARY_RESULT (T1, T2)
449 #define SHIFT_FUNCTION \
450 template <typename T> WI_UNARY_RESULT (T)
452 UNARY_PREDICATE fits_shwi_p (const T &);
453 UNARY_PREDICATE fits_uhwi_p (const T &);
454 UNARY_PREDICATE neg_p (const T &, signop = SIGNED);
456 template <typename T>
457 HOST_WIDE_INT sign_mask (const T &);
459 BINARY_PREDICATE eq_p (const T1 &, const T2 &);
460 BINARY_PREDICATE ne_p (const T1 &, const T2 &);
461 BINARY_PREDICATE lt_p (const T1 &, const T2 &, signop);
462 BINARY_PREDICATE lts_p (const T1 &, const T2 &);
463 BINARY_PREDICATE ltu_p (const T1 &, const T2 &);
464 BINARY_PREDICATE le_p (const T1 &, const T2 &, signop);
465 BINARY_PREDICATE les_p (const T1 &, const T2 &);
466 BINARY_PREDICATE leu_p (const T1 &, const T2 &);
467 BINARY_PREDICATE gt_p (const T1 &, const T2 &, signop);
468 BINARY_PREDICATE gts_p (const T1 &, const T2 &);
469 BINARY_PREDICATE gtu_p (const T1 &, const T2 &);
470 BINARY_PREDICATE ge_p (const T1 &, const T2 &, signop);
471 BINARY_PREDICATE ges_p (const T1 &, const T2 &);
472 BINARY_PREDICATE geu_p (const T1 &, const T2 &);
474 template <typename T1, typename T2>
475 int cmp (const T1 &, const T2 &, signop);
477 template <typename T1, typename T2>
478 int cmps (const T1 &, const T2 &);
480 template <typename T1, typename T2>
481 int cmpu (const T1 &, const T2 &);
483 UNARY_FUNCTION bit_not (const T &);
484 UNARY_FUNCTION neg (const T &);
485 UNARY_FUNCTION neg (const T &, bool *);
486 UNARY_FUNCTION abs (const T &);
487 UNARY_FUNCTION ext (const T &, unsigned int, signop);
488 UNARY_FUNCTION sext (const T &, unsigned int);
489 UNARY_FUNCTION zext (const T &, unsigned int);
490 UNARY_FUNCTION set_bit (const T &, unsigned int);
492 BINARY_FUNCTION min (const T1 &, const T2 &, signop);
493 BINARY_FUNCTION smin (const T1 &, const T2 &);
494 BINARY_FUNCTION umin (const T1 &, const T2 &);
495 BINARY_FUNCTION max (const T1 &, const T2 &, signop);
496 BINARY_FUNCTION smax (const T1 &, const T2 &);
497 BINARY_FUNCTION umax (const T1 &, const T2 &);
499 BINARY_FUNCTION bit_and (const T1 &, const T2 &);
500 BINARY_FUNCTION bit_and_not (const T1 &, const T2 &);
501 BINARY_FUNCTION bit_or (const T1 &, const T2 &);
502 BINARY_FUNCTION bit_or_not (const T1 &, const T2 &);
503 BINARY_FUNCTION bit_xor (const T1 &, const T2 &);
504 BINARY_FUNCTION add (const T1 &, const T2 &);
505 BINARY_FUNCTION add (const T1 &, const T2 &, signop, bool *);
506 BINARY_FUNCTION sub (const T1 &, const T2 &);
507 BINARY_FUNCTION sub (const T1 &, const T2 &, signop, bool *);
508 BINARY_FUNCTION mul (const T1 &, const T2 &);
509 BINARY_FUNCTION mul (const T1 &, const T2 &, signop, bool *);
510 BINARY_FUNCTION smul (const T1 &, const T2 &, bool *);
511 BINARY_FUNCTION umul (const T1 &, const T2 &, bool *);
512 BINARY_FUNCTION mul_high (const T1 &, const T2 &, signop);
513 BINARY_FUNCTION div_trunc (const T1 &, const T2 &, signop, bool * = 0);
514 BINARY_FUNCTION sdiv_trunc (const T1 &, const T2 &);
515 BINARY_FUNCTION udiv_trunc (const T1 &, const T2 &);
516 BINARY_FUNCTION div_floor (const T1 &, const T2 &, signop, bool * = 0);
517 BINARY_FUNCTION udiv_floor (const T1 &, const T2 &);
518 BINARY_FUNCTION sdiv_floor (const T1 &, const T2 &);
519 BINARY_FUNCTION div_ceil (const T1 &, const T2 &, signop, bool * = 0);
520 BINARY_FUNCTION div_round (const T1 &, const T2 &, signop, bool * = 0);
521 BINARY_FUNCTION divmod_trunc (const T1 &, const T2 &, signop,
522 WI_BINARY_RESULT (T1, T2) *);
523 BINARY_FUNCTION mod_trunc (const T1 &, const T2 &, signop, bool * = 0);
524 BINARY_FUNCTION smod_trunc (const T1 &, const T2 &);
525 BINARY_FUNCTION umod_trunc (const T1 &, const T2 &);
526 BINARY_FUNCTION mod_floor (const T1 &, const T2 &, signop, bool * = 0);
527 BINARY_FUNCTION umod_floor (const T1 &, const T2 &);
528 BINARY_FUNCTION mod_ceil (const T1 &, const T2 &, signop, bool * = 0);
529 BINARY_FUNCTION mod_round (const T1 &, const T2 &, signop, bool * = 0);
531 template <typename T1, typename T2>
532 bool multiple_of_p (const T1 &, const T2 &, signop,
533 WI_BINARY_RESULT (T1, T2) *);
535 unsigned int trunc_shift (const wide_int_ref &, unsigned int, unsigned int);
537 SHIFT_FUNCTION lshift (const T &, const wide_int_ref &, unsigned int = 0);
538 SHIFT_FUNCTION lrshift (const T &, const wide_int_ref &, unsigned int = 0);
539 SHIFT_FUNCTION arshift (const T &, const wide_int_ref &, unsigned int = 0);
540 SHIFT_FUNCTION rshift (const T &, const wide_int_ref &, signop sgn,
541 unsigned int = 0);
542 SHIFT_FUNCTION lrotate (const T &, const wide_int_ref &, unsigned int = 0);
543 SHIFT_FUNCTION rrotate (const T &, const wide_int_ref &, unsigned int = 0);
545 #undef SHIFT_FUNCTION
546 #undef BINARY_PREDICATE
547 #undef BINARY_FUNCTION
548 #undef UNARY_PREDICATE
549 #undef UNARY_FUNCTION
551 bool only_sign_bit_p (const wide_int_ref &, unsigned int);
552 bool only_sign_bit_p (const wide_int_ref &);
553 int clz (const wide_int_ref &);
554 int clrsb (const wide_int_ref &);
555 int ctz (const wide_int_ref &);
556 int exact_log2 (const wide_int_ref &);
557 int floor_log2 (const wide_int_ref &);
558 int ffs (const wide_int_ref &);
559 int popcount (const wide_int_ref &);
560 int parity (const wide_int_ref &);
562 template <typename T>
563 unsigned HOST_WIDE_INT extract_uhwi (const T &, unsigned int, unsigned int);
566 namespace wi
568 /* Contains the components of a decomposed integer for easy, direct
569 access. */
570 struct storage_ref
572 storage_ref (const HOST_WIDE_INT *, unsigned int, unsigned int);
574 const HOST_WIDE_INT *val;
575 unsigned int len;
576 unsigned int precision;
578 /* Provide enough trappings for this class to act as storage for
579 generic_wide_int. */
580 unsigned int get_len () const;
581 unsigned int get_precision () const;
582 const HOST_WIDE_INT *get_val () const;
586 inline::wi::storage_ref::storage_ref (const HOST_WIDE_INT *val_in,
587 unsigned int len_in,
588 unsigned int precision_in)
589 : val (val_in), len (len_in), precision (precision_in)
593 inline unsigned int
594 wi::storage_ref::get_len () const
596 return len;
599 inline unsigned int
600 wi::storage_ref::get_precision () const
602 return precision;
605 inline const HOST_WIDE_INT *
606 wi::storage_ref::get_val () const
608 return val;
611 /* This class defines an integer type using the storage provided by the
612 template argument. The storage class must provide the following
613 functions:
615 unsigned int get_precision () const
616 Return the number of bits in the integer.
618 HOST_WIDE_INT *get_val () const
619 Return a pointer to the array of blocks that encodes the integer.
621 unsigned int get_len () const
622 Return the number of blocks in get_val (). If this is smaller
623 than the number of blocks implied by get_precision (), the
624 remaining blocks are sign extensions of block get_len () - 1.
626 Although not required by generic_wide_int itself, writable storage
627 classes can also provide the following functions:
629 HOST_WIDE_INT *write_val ()
630 Get a modifiable version of get_val ()
632 unsigned int set_len (unsigned int len)
633 Set the value returned by get_len () to LEN. */
634 template <typename storage>
635 class GTY(()) generic_wide_int : public storage
637 public:
638 generic_wide_int ();
640 template <typename T>
641 generic_wide_int (const T &);
643 template <typename T>
644 generic_wide_int (const T &, unsigned int);
646 /* Conversions. */
647 HOST_WIDE_INT to_shwi (unsigned int = 0) const;
648 unsigned HOST_WIDE_INT to_uhwi (unsigned int = 0) const;
649 HOST_WIDE_INT to_short_addr () const;
651 /* Public accessors for the interior of a wide int. */
652 HOST_WIDE_INT sign_mask () const;
653 HOST_WIDE_INT elt (unsigned int) const;
654 unsigned HOST_WIDE_INT ulow () const;
655 unsigned HOST_WIDE_INT uhigh () const;
656 HOST_WIDE_INT slow () const;
657 HOST_WIDE_INT shigh () const;
659 #define BINARY_PREDICATE(OP, F) \
660 template <typename T> \
661 bool OP (const T &c) const { return wi::F (*this, c); }
663 #define UNARY_OPERATOR(OP, F) \
664 WI_UNARY_RESULT (generic_wide_int) OP () const { return wi::F (*this); }
666 #define BINARY_OPERATOR(OP, F) \
667 template <typename T> \
668 WI_BINARY_RESULT (generic_wide_int, T) \
669 OP (const T &c) const { return wi::F (*this, c); }
671 #define ASSIGNMENT_OPERATOR(OP, F) \
672 template <typename T> \
673 generic_wide_int &OP (const T &c) { return (*this = wi::F (*this, c)); }
675 #define INCDEC_OPERATOR(OP, DELTA) \
676 generic_wide_int &OP () { *this += DELTA; return *this; }
678 UNARY_OPERATOR (operator ~, bit_not)
679 UNARY_OPERATOR (operator -, neg)
680 BINARY_PREDICATE (operator ==, eq_p)
681 BINARY_PREDICATE (operator !=, ne_p)
682 BINARY_OPERATOR (operator &, bit_and)
683 BINARY_OPERATOR (and_not, bit_and_not)
684 BINARY_OPERATOR (operator |, bit_or)
685 BINARY_OPERATOR (or_not, bit_or_not)
686 BINARY_OPERATOR (operator ^, bit_xor)
687 BINARY_OPERATOR (operator +, add)
688 BINARY_OPERATOR (operator -, sub)
689 BINARY_OPERATOR (operator *, mul)
690 ASSIGNMENT_OPERATOR (operator &=, bit_and)
691 ASSIGNMENT_OPERATOR (operator |=, bit_or)
692 ASSIGNMENT_OPERATOR (operator ^=, bit_xor)
693 ASSIGNMENT_OPERATOR (operator +=, add)
694 ASSIGNMENT_OPERATOR (operator -=, sub)
695 ASSIGNMENT_OPERATOR (operator *=, mul)
696 INCDEC_OPERATOR (operator ++, 1)
697 INCDEC_OPERATOR (operator --, -1)
699 #undef BINARY_PREDICATE
700 #undef UNARY_OPERATOR
701 #undef BINARY_OPERATOR
702 #undef ASSIGNMENT_OPERATOR
703 #undef INCDEC_OPERATOR
705 char *dump (char *) const;
707 static const bool is_sign_extended
708 = wi::int_traits <generic_wide_int <storage> >::is_sign_extended;
711 template <typename storage>
712 inline generic_wide_int <storage>::generic_wide_int () {}
714 template <typename storage>
715 template <typename T>
716 inline generic_wide_int <storage>::generic_wide_int (const T &x)
717 : storage (x)
721 template <typename storage>
722 template <typename T>
723 inline generic_wide_int <storage>::generic_wide_int (const T &x,
724 unsigned int precision)
725 : storage (x, precision)
729 /* Return THIS as a signed HOST_WIDE_INT, sign-extending from PRECISION.
730 If THIS does not fit in PRECISION, the information is lost. */
731 template <typename storage>
732 inline HOST_WIDE_INT
733 generic_wide_int <storage>::to_shwi (unsigned int precision) const
735 if (precision == 0)
737 if (is_sign_extended)
738 return this->get_val ()[0];
739 precision = this->get_precision ();
741 if (precision < HOST_BITS_PER_WIDE_INT)
742 return sext_hwi (this->get_val ()[0], precision);
743 else
744 return this->get_val ()[0];
747 /* Return THIS as an unsigned HOST_WIDE_INT, zero-extending from
748 PRECISION. If THIS does not fit in PRECISION, the information
749 is lost. */
750 template <typename storage>
751 inline unsigned HOST_WIDE_INT
752 generic_wide_int <storage>::to_uhwi (unsigned int precision) const
754 if (precision == 0)
755 precision = this->get_precision ();
756 if (precision < HOST_BITS_PER_WIDE_INT)
757 return zext_hwi (this->get_val ()[0], precision);
758 else
759 return this->get_val ()[0];
762 /* TODO: The compiler is half converted from using HOST_WIDE_INT to
763 represent addresses to using offset_int to represent addresses.
764 We use to_short_addr at the interface from new code to old,
765 unconverted code. */
766 template <typename storage>
767 inline HOST_WIDE_INT
768 generic_wide_int <storage>::to_short_addr () const
770 return this->get_val ()[0];
773 /* Return the implicit value of blocks above get_len (). */
774 template <typename storage>
775 inline HOST_WIDE_INT
776 generic_wide_int <storage>::sign_mask () const
778 unsigned int len = this->get_len ();
779 unsigned HOST_WIDE_INT high = this->get_val ()[len - 1];
780 if (!is_sign_extended)
782 unsigned int precision = this->get_precision ();
783 int excess = len * HOST_BITS_PER_WIDE_INT - precision;
784 if (excess > 0)
785 high <<= excess;
787 return HOST_WIDE_INT (high) < 0 ? -1 : 0;
790 /* Return the signed value of the least-significant explicitly-encoded
791 block. */
792 template <typename storage>
793 inline HOST_WIDE_INT
794 generic_wide_int <storage>::slow () const
796 return this->get_val ()[0];
799 /* Return the signed value of the most-significant explicitly-encoded
800 block. */
801 template <typename storage>
802 inline HOST_WIDE_INT
803 generic_wide_int <storage>::shigh () const
805 return this->get_val ()[this->get_len () - 1];
808 /* Return the unsigned value of the least-significant
809 explicitly-encoded block. */
810 template <typename storage>
811 inline unsigned HOST_WIDE_INT
812 generic_wide_int <storage>::ulow () const
814 return this->get_val ()[0];
817 /* Return the unsigned value of the most-significant
818 explicitly-encoded block. */
819 template <typename storage>
820 inline unsigned HOST_WIDE_INT
821 generic_wide_int <storage>::uhigh () const
823 return this->get_val ()[this->get_len () - 1];
826 /* Return block I, which might be implicitly or explicitly encoded.  */
827 template <typename storage>
828 inline HOST_WIDE_INT
829 generic_wide_int <storage>::elt (unsigned int i) const
831 if (i >= this->get_len ())
832 return sign_mask ();
833 else
834 return this->get_val ()[i];
837 namespace wi
839 template <>
840 template <typename storage>
841 struct int_traits < generic_wide_int <storage> >
842 : public wi::int_traits <storage>
844 static unsigned int get_precision (const generic_wide_int <storage> &);
845 static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
846 const generic_wide_int <storage> &);
850 template <typename storage>
851 inline unsigned int
852 wi::int_traits < generic_wide_int <storage> >::
853 get_precision (const generic_wide_int <storage> &x)
855 return x.get_precision ();
858 template <typename storage>
859 inline wi::storage_ref
860 wi::int_traits < generic_wide_int <storage> >::
861 decompose (HOST_WIDE_INT *, unsigned int precision,
862 const generic_wide_int <storage> &x)
864 gcc_checking_assert (precision == x.get_precision ());
865 return wi::storage_ref (x.get_val (), x.get_len (), precision);
868 /* Provide the storage for a wide_int_ref. This acts like a read-only
869 wide_int, with the optimization that VAL is normally a pointer to
870 another integer's storage, so that no array copy is needed. */
871 template <bool SE>
872 struct wide_int_ref_storage : public wi::storage_ref
874 private:
875 /* Scratch space that can be used when decomposing the original integer.
876 It must live as long as this object. */
877 HOST_WIDE_INT scratch[2];
879 public:
880 template <typename T>
881 wide_int_ref_storage (const T &);
883 template <typename T>
884 wide_int_ref_storage (const T &, unsigned int);
887 /* Create a reference to integer X in its natural precision. Note
888 that the natural precision is host-dependent for primitive
889 types. */
890 template <bool SE>
891 template <typename T>
892 inline wide_int_ref_storage <SE>::wide_int_ref_storage (const T &x)
893 : storage_ref (wi::int_traits <T>::decompose (scratch,
894 wi::get_precision (x), x))
898 /* Create a reference to integer X in precision PRECISION. */
899 template <bool SE>
900 template <typename T>
901 inline wide_int_ref_storage <SE>::wide_int_ref_storage (const T &x,
902 unsigned int precision)
903 : storage_ref (wi::int_traits <T>::decompose (scratch, precision, x))
907 namespace wi
909 template <>
910 template <bool SE>
911 struct int_traits <wide_int_ref_storage <SE> >
913 static const enum precision_type precision_type = VAR_PRECISION;
914 /* wi::storage_ref can be a reference to a primitive type,
915 so this is the conservatively-correct setting. */
916 static const bool host_dependent_precision = true;
917 static const bool is_sign_extended = SE;
921 namespace wi
923 unsigned int force_to_size (HOST_WIDE_INT *, const HOST_WIDE_INT *,
924 unsigned int, unsigned int, unsigned int,
925 signop sgn);
926 unsigned int from_array (HOST_WIDE_INT *, const HOST_WIDE_INT *,
927 unsigned int, unsigned int, bool = true);
930 /* The storage used by wide_int. */
931 class GTY(()) wide_int_storage
933 private:
934 HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
935 unsigned int len;
936 unsigned int precision;
938 public:
939 wide_int_storage ();
940 template <typename T>
941 wide_int_storage (const T &);
943 /* The standard generic_wide_int storage methods. */
944 unsigned int get_precision () const;
945 const HOST_WIDE_INT *get_val () const;
946 unsigned int get_len () const;
947 HOST_WIDE_INT *write_val ();
948 void set_len (unsigned int, bool = false);
950 static wide_int from (const wide_int_ref &, unsigned int, signop);
951 static wide_int from_array (const HOST_WIDE_INT *, unsigned int,
952 unsigned int, bool = true);
953 static wide_int create (unsigned int);
955 /* FIXME: target-dependent, so should disappear. */
956 wide_int bswap () const;
959 namespace wi
961 template <>
962 struct int_traits <wide_int_storage>
964 static const enum precision_type precision_type = VAR_PRECISION;
965 /* Guaranteed by a static assert in the wide_int_storage constructor. */
966 static const bool host_dependent_precision = false;
967 static const bool is_sign_extended = true;
968 template <typename T1, typename T2>
969 static wide_int get_binary_result (const T1 &, const T2 &);
973 inline wide_int_storage::wide_int_storage () {}
975 /* Initialize the storage from integer X, in its natural precision.
976 Note that we do not allow integers with host-dependent precision
977 to become wide_ints; wide_ints must always be logically independent
978 of the host. */
979 template <typename T>
980 inline wide_int_storage::wide_int_storage (const T &x)
982 STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
983 WIDE_INT_REF_FOR (T) xi (x);
984 precision = xi.precision;
985 wi::copy (*this, xi);
988 inline unsigned int
989 wide_int_storage::get_precision () const
991 return precision;
994 inline const HOST_WIDE_INT *
995 wide_int_storage::get_val () const
997 return val;
1000 inline unsigned int
1001 wide_int_storage::get_len () const
1003 return len;
1006 inline HOST_WIDE_INT *
1007 wide_int_storage::write_val ()
1009 return val;
1012 inline void
1013 wide_int_storage::set_len (unsigned int l, bool is_sign_extended)
1015 len = l;
1016 if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
1017 val[len - 1] = sext_hwi (val[len - 1],
1018 precision % HOST_BITS_PER_WIDE_INT);
1021 /* Treat X as having signedness SGN and convert it to a PRECISION-bit
1022 number. */
1023 inline wide_int
1024 wide_int_storage::from (const wide_int_ref &x, unsigned int precision,
1025 signop sgn)
1027 wide_int result = wide_int::create (precision);
1028 result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
1029 x.precision, precision, sgn));
1030 return result;
1033 /* Create a wide_int from the explicit block encoding given by VAL and
1034 LEN. PRECISION is the precision of the integer. NEED_CANON_P is
1035 true if the encoding may have redundant trailing blocks. */
1036 inline wide_int
1037 wide_int_storage::from_array (const HOST_WIDE_INT *val, unsigned int len,
1038 unsigned int precision, bool need_canon_p)
1040 wide_int result = wide_int::create (precision);
1041 result.set_len (wi::from_array (result.write_val (), val, len, precision,
1042 need_canon_p));
1043 return result;
1046 /* Return an uninitialized wide_int with precision PRECISION. */
1047 inline wide_int
1048 wide_int_storage::create (unsigned int precision)
1050 wide_int x;
1051 x.precision = precision;
1052 return x;
1055 template <typename T1, typename T2>
1056 inline wide_int
1057 wi::int_traits <wide_int_storage>::get_binary_result (const T1 &x, const T2 &y)
1059 /* This shouldn't be used for two flexible-precision inputs. */
1060 STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
1061 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
1062 if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
1063 return wide_int::create (wi::get_precision (y));
1064 else
1065 return wide_int::create (wi::get_precision (x));
1068 /* The storage used by FIXED_WIDE_INT (N). */
1069 template <int N>
1070 class GTY(()) fixed_wide_int_storage
1072 private:
1073 HOST_WIDE_INT val[(N + HOST_BITS_PER_WIDE_INT + 1) / HOST_BITS_PER_WIDE_INT];
1074 unsigned int len;
1076 public:
1077 fixed_wide_int_storage ();
1078 template <typename T>
1079 fixed_wide_int_storage (const T &);
1081 /* The standard generic_wide_int storage methods. */
1082 unsigned int get_precision () const;
1083 const HOST_WIDE_INT *get_val () const;
1084 unsigned int get_len () const;
1085 HOST_WIDE_INT *write_val ();
1086 void set_len (unsigned int, bool = false);
1088 static FIXED_WIDE_INT (N) from (const wide_int_ref &, signop);
1089 static FIXED_WIDE_INT (N) from_array (const HOST_WIDE_INT *, unsigned int,
1090 bool = true);
1093 namespace wi
1095 template <>
1096 template <int N>
1097 struct int_traits < fixed_wide_int_storage <N> >
1099 static const enum precision_type precision_type = CONST_PRECISION;
1100 static const bool host_dependent_precision = false;
1101 static const bool is_sign_extended = true;
1102 static const unsigned int precision = N;
1103 template <typename T1, typename T2>
1104 static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
1108 template <int N>
1109 inline fixed_wide_int_storage <N>::fixed_wide_int_storage () {}
1111 /* Initialize the storage from integer X, in precision N. */
1112 template <int N>
1113 template <typename T>
1114 inline fixed_wide_int_storage <N>::fixed_wide_int_storage (const T &x)
1116 /* Check for type compatibility. We don't want to initialize a
1117 fixed-width integer from something like a wide_int. */
1118 WI_BINARY_RESULT (T, FIXED_WIDE_INT (N)) *assertion ATTRIBUTE_UNUSED;
1119 wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N));
1122 template <int N>
1123 inline unsigned int
1124 fixed_wide_int_storage <N>::get_precision () const
1126 return N;
1129 template <int N>
1130 inline const HOST_WIDE_INT *
1131 fixed_wide_int_storage <N>::get_val () const
1133 return val;
1136 template <int N>
1137 inline unsigned int
1138 fixed_wide_int_storage <N>::get_len () const
1140 return len;
1143 template <int N>
1144 inline HOST_WIDE_INT *
1145 fixed_wide_int_storage <N>::write_val ()
1147 return val;
1150 template <int N>
1151 inline void
1152 fixed_wide_int_storage <N>::set_len (unsigned int l, bool)
1154 len = l;
1155 /* There are no excess bits in val[len - 1]. */
1156 STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
1159 /* Treat X as having signedness SGN and convert it to an N-bit number. */
1160 template <int N>
1161 inline FIXED_WIDE_INT (N)
1162 fixed_wide_int_storage <N>::from (const wide_int_ref &x, signop sgn)
1164 FIXED_WIDE_INT (N) result;
1165 result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
1166 x.precision, N, sgn));
1167 return result;
1170 /* Create a FIXED_WIDE_INT (N) from the explicit block encoding given by
1171 VAL and LEN. NEED_CANON_P is true if the encoding may have redundant
1172 trailing blocks. */
1173 template <int N>
1174 inline FIXED_WIDE_INT (N)
1175 fixed_wide_int_storage <N>::from_array (const HOST_WIDE_INT *val,
1176 unsigned int len,
1177 bool need_canon_p)
1179 FIXED_WIDE_INT (N) result;
1180 result.set_len (wi::from_array (result.write_val (), val, len,
1181 N, need_canon_p));
1182 return result;
1185 template <int N>
1186 template <typename T1, typename T2>
1187 inline FIXED_WIDE_INT (N)
1188 wi::int_traits < fixed_wide_int_storage <N> >::
1189 get_binary_result (const T1 &, const T2 &)
1191 return FIXED_WIDE_INT (N) ();
1194 namespace wi
1196 /* Implementation of int_traits for primitive integer types like "int". */
1197 template <typename T, bool signed_p>
1198 struct primitive_int_traits
1200 static const enum precision_type precision_type = FLEXIBLE_PRECISION;
1201 static const bool host_dependent_precision = true;
1202 static const bool is_sign_extended = true;
1203 static unsigned int get_precision (T);
1204 static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int, T);
1208 template <typename T, bool signed_p>
1209 inline unsigned int
1210 wi::primitive_int_traits <T, signed_p>::get_precision (T)
1212 return sizeof (T) * CHAR_BIT;
1215 template <typename T, bool signed_p>
1216 inline wi::storage_ref
1217 wi::primitive_int_traits <T, signed_p>::decompose (HOST_WIDE_INT *scratch,
1218 unsigned int precision, T x)
1220 scratch[0] = x;
1221 if (signed_p || scratch[0] >= 0 || precision <= HOST_BITS_PER_WIDE_INT)
1222 return wi::storage_ref (scratch, 1, precision);
1223 scratch[1] = 0;
1224 return wi::storage_ref (scratch, 2, precision);
1227 /* Allow primitive C types to be used in wi:: routines. */
1228 namespace wi
1230 template <>
1231 struct int_traits <int>
1232 : public primitive_int_traits <int, true> {};
1234 template <>
1235 struct int_traits <unsigned int>
1236 : public primitive_int_traits <unsigned int, false> {};
1238 #if HOST_BITS_PER_INT != HOST_BITS_PER_WIDE_INT
1239 template <>
1240 struct int_traits <HOST_WIDE_INT>
1241 : public primitive_int_traits <HOST_WIDE_INT, true> {};
1243 template <>
1244 struct int_traits <unsigned HOST_WIDE_INT>
1245 : public primitive_int_traits <unsigned HOST_WIDE_INT, false> {};
1246 #endif
1249 namespace wi
1251 /* Stores HWI-sized integer VAL, treating it as having signedness SGN
1252 and precision PRECISION. */
1253 struct hwi_with_prec
1255 hwi_with_prec (HOST_WIDE_INT, unsigned int, signop);
1256 HOST_WIDE_INT val;
1257 unsigned int precision;
1258 signop sgn;
1261 hwi_with_prec shwi (HOST_WIDE_INT, unsigned int);
1262 hwi_with_prec uhwi (unsigned HOST_WIDE_INT, unsigned int);
1264 hwi_with_prec minus_one (unsigned int);
1265 hwi_with_prec zero (unsigned int);
1266 hwi_with_prec one (unsigned int);
1267 hwi_with_prec two (unsigned int);
1270 inline wi::hwi_with_prec::hwi_with_prec (HOST_WIDE_INT v, unsigned int p,
1271 signop s)
1272 : val (v), precision (p), sgn (s)
1276 /* Return a signed integer that has value VAL and precision PRECISION. */
1277 inline wi::hwi_with_prec
1278 wi::shwi (HOST_WIDE_INT val, unsigned int precision)
1280 return hwi_with_prec (val, precision, SIGNED);
1283 /* Return an unsigned integer that has value VAL and precision PRECISION. */
1284 inline wi::hwi_with_prec
1285 wi::uhwi (unsigned HOST_WIDE_INT val, unsigned int precision)
1287 return hwi_with_prec (val, precision, UNSIGNED);
1290 /* Return a wide int of -1 with precision PRECISION. */
1291 inline wi::hwi_with_prec
1292 wi::minus_one (unsigned int precision)
1294 return wi::shwi (-1, precision);
1297 /* Return a wide int of 0 with precision PRECISION. */
1298 inline wi::hwi_with_prec
1299 wi::zero (unsigned int precision)
1301 return wi::shwi (0, precision);
1304 /* Return a wide int of 1 with precision PRECISION. */
1305 inline wi::hwi_with_prec
1306 wi::one (unsigned int precision)
1308 return wi::shwi (1, precision);
1311 /* Return a wide int of 2 with precision PRECISION. */
1312 inline wi::hwi_with_prec
1313 wi::two (unsigned int precision)
1315 return wi::shwi (2, precision);
1318 namespace wi
1320 template <>
1321 struct int_traits <wi::hwi_with_prec>
1323 static const enum precision_type precision_type = VAR_PRECISION;
1324 /* hwi_with_prec has an explicitly-given precision, rather than the
1325 precision of HOST_WIDE_INT. */
1326 static const bool host_dependent_precision = false;
1327 static const bool is_sign_extended = true;
1328 static unsigned int get_precision (const wi::hwi_with_prec &);
1329 static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
1330 const wi::hwi_with_prec &);
1334 inline unsigned int
1335 wi::int_traits <wi::hwi_with_prec>::get_precision (const wi::hwi_with_prec &x)
1337 return x.precision;
1340 inline wi::storage_ref
1341 wi::int_traits <wi::hwi_with_prec>::
1342 decompose (HOST_WIDE_INT *scratch, unsigned int precision,
1343 const wi::hwi_with_prec &x)
1345 gcc_checking_assert (precision == x.precision);
1346 scratch[0] = x.val;
1347 if (x.sgn == SIGNED || x.val >= 0 || precision <= HOST_BITS_PER_WIDE_INT)
1348 return wi::storage_ref (scratch, 1, precision);
1349 scratch[1] = 0;
1350 return wi::storage_ref (scratch, 2, precision);
1353 /* Private functions for handling large cases out of line. They take
1354 individual length and array parameters because that is cheaper for
1355 the inline caller than constructing an object on the stack and
1356 passing a reference to it. (Although many callers use wide_int_refs,
1357 we generally want those to be removed by SRA.) */
1358 namespace wi
1360 bool eq_p_large (const HOST_WIDE_INT *, unsigned int,
1361 const HOST_WIDE_INT *, unsigned int, unsigned int);
1362 bool lts_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
1363 const HOST_WIDE_INT *, unsigned int);
1364 bool ltu_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
1365 const HOST_WIDE_INT *, unsigned int);
1366 int cmps_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
1367 const HOST_WIDE_INT *, unsigned int);
1368 int cmpu_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
1369 const HOST_WIDE_INT *, unsigned int);
1370 unsigned int sext_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1371 unsigned int,
1372 unsigned int, unsigned int);
1373 unsigned int zext_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1374 unsigned int,
1375 unsigned int, unsigned int);
1376 unsigned int set_bit_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1377 unsigned int, unsigned int, unsigned int);
1378 unsigned int lshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1379 unsigned int, unsigned int, unsigned int);
1380 unsigned int lrshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1381 unsigned int, unsigned int, unsigned int,
1382 unsigned int);
1383 unsigned int arshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1384 unsigned int, unsigned int, unsigned int,
1385 unsigned int);
1386 unsigned int and_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
1387 const HOST_WIDE_INT *, unsigned int, unsigned int);
1388 unsigned int and_not_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1389 unsigned int, const HOST_WIDE_INT *,
1390 unsigned int, unsigned int);
1391 unsigned int or_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
1392 const HOST_WIDE_INT *, unsigned int, unsigned int);
1393 unsigned int or_not_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1394 unsigned int, const HOST_WIDE_INT *,
1395 unsigned int, unsigned int);
1396 unsigned int xor_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
1397 const HOST_WIDE_INT *, unsigned int, unsigned int);
1398 unsigned int add_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
1399 const HOST_WIDE_INT *, unsigned int, unsigned int,
1400 signop, bool *);
1401 unsigned int sub_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
1402 const HOST_WIDE_INT *, unsigned int, unsigned int,
1403 signop, bool *);
1404 unsigned int mul_internal (HOST_WIDE_INT *, const HOST_WIDE_INT *,
1405 unsigned int, const HOST_WIDE_INT *,
1406 unsigned int, unsigned int, signop, bool *,
1407 bool, bool);
1408 unsigned int divmod_internal (HOST_WIDE_INT *, unsigned int *,
1409 HOST_WIDE_INT *, const HOST_WIDE_INT *,
1410 unsigned int, unsigned int,
1411 const HOST_WIDE_INT *,
1412 unsigned int, unsigned int,
1413 signop, bool *);
1416 /* Return the number of bits that integer X can hold. */
1417 template <typename T>
1418 inline unsigned int
1419 wi::get_precision (const T &x)
1421 return wi::int_traits <T>::get_precision (x);
1424 /* Return the number of bits that the result of a binary operation can
1425 hold when the input operands are X and Y. */
1426 template <typename T1, typename T2>
1427 inline unsigned int
1428 wi::get_binary_precision (const T1 &x, const T2 &y)
1430 return get_precision (wi::int_traits <WI_BINARY_RESULT (T1, T2)>::
1431 get_binary_result (x, y));
1434 /* Copy the contents of Y to X, but keeping X's current precision. */
1435 template <typename T1, typename T2>
1436 inline void
1437 wi::copy (T1 &x, const T2 &y)
1439 HOST_WIDE_INT *xval = x.write_val ();
1440 const HOST_WIDE_INT *yval = y.get_val ();
1441 unsigned int len = y.get_len ();
1442 unsigned int i = 0;
1444 xval[i] = yval[i];
1445 while (++i < len);
1446 x.set_len (len, y.is_sign_extended);
1449 /* Return true if X fits in a HOST_WIDE_INT with no loss of precision. */
1450 template <typename T>
1451 inline bool
1452 wi::fits_shwi_p (const T &x)
1454 WIDE_INT_REF_FOR (T) xi (x);
1455 return xi.len == 1;
1458 /* Return true if X fits in an unsigned HOST_WIDE_INT with no loss of
1459 precision. */
1460 template <typename T>
1461 inline bool
1462 wi::fits_uhwi_p (const T &x)
1464 WIDE_INT_REF_FOR (T) xi (x);
1465 if (xi.precision <= HOST_BITS_PER_WIDE_INT)
1466 return true;
1467 if (xi.len == 1)
1468 return xi.slow () >= 0;
1469 return xi.len == 2 && xi.uhigh () == 0;
1472 /* Return true if X is negative based on the interpretation of SGN.
1473 For UNSIGNED, this is always false. */
1474 template <typename T>
1475 inline bool
1476 wi::neg_p (const T &x, signop sgn)
1478 WIDE_INT_REF_FOR (T) xi (x);
1479 if (sgn == UNSIGNED)
1480 return false;
1481 return xi.sign_mask () < 0;
1484 /* Return -1 if the top bit of X is set and 0 if the top bit is clear. */
1485 template <typename T>
1486 inline HOST_WIDE_INT
1487 wi::sign_mask (const T &x)
1489 WIDE_INT_REF_FOR (T) xi (x);
1490 return xi.sign_mask ();
1493 /* Return true if X == Y. X and Y must be binary-compatible. */
1494 template <typename T1, typename T2>
1495 inline bool
1496 wi::eq_p (const T1 &x, const T2 &y)
1498 unsigned int precision = get_binary_precision (x, y);
1499 WIDE_INT_REF_FOR (T1) xi (x, precision);
1500 WIDE_INT_REF_FOR (T2) yi (y, precision);
1501 if (xi.is_sign_extended && yi.is_sign_extended)
1503 /* This case reduces to array equality. */
1504 if (xi.len != yi.len)
1505 return false;
1506 unsigned int i = 0;
1508 if (xi.val[i] != yi.val[i])
1509 return false;
1510 while (++i != xi.len);
1511 return true;
1513 if (yi.len == 1)
1515 /* XI is only equal to YI if it too has a single HWI. */
1516 if (xi.len != 1)
1517 return false;
1518 /* Excess bits in xi.val[0] will be signs or zeros, so comparisons
1519 with 0 are simple. */
1520 if (STATIC_CONSTANT_P (yi.val[0] == 0))
1521 return xi.val[0] == 0;
1522 /* Otherwise flush out any excess bits first. */
1523 unsigned HOST_WIDE_INT diff = xi.val[0] ^ yi.val[0];
1524 int excess = HOST_BITS_PER_WIDE_INT - precision;
1525 if (excess > 0)
1526 diff <<= excess;
1527 return diff == 0;
1529 return eq_p_large (xi.val, xi.len, yi.val, yi.len, precision);
1532 /* Return true if X != Y. X and Y must be binary-compatible. */
1533 template <typename T1, typename T2>
1534 inline bool
1535 wi::ne_p (const T1 &x, const T2 &y)
1537 return !eq_p (x, y);
1540 /* Return true if X < Y when both are treated as signed values. */
1541 template <typename T1, typename T2>
1542 inline bool
1543 wi::lts_p (const T1 &x, const T2 &y)
1545 unsigned int precision = get_binary_precision (x, y);
1546 WIDE_INT_REF_FOR (T1) xi (x, precision);
1547 WIDE_INT_REF_FOR (T2) yi (y, precision);
1548 /* We optimize x < y, where y is 64 or fewer bits. */
1549 if (wi::fits_shwi_p (yi))
1551 /* Make lts_p (x, 0) as efficient as wi::neg_p (x). */
1552 if (STATIC_CONSTANT_P (yi.val[0] == 0))
1553 return neg_p (xi);
1554 /* If x fits directly into a shwi, we can compare directly. */
1555 if (wi::fits_shwi_p (xi))
1556 return xi.to_shwi () < yi.to_shwi ();
1557 /* If x doesn't fit and is negative, then it must be more
1558 negative than any value in y, and hence smaller than y. */
1559 if (neg_p (xi))
1560 return true;
1561 /* If x is positive, then it must be larger than any value in y,
1562 and hence greater than y. */
1563 return false;
1565 /* Optimize the opposite case, if it can be detected at compile time. */
1566 if (STATIC_CONSTANT_P (xi.len == 1))
1567 /* If YI is negative it is lower than the least HWI.
1568 If YI is positive it is greater than the greatest HWI. */
1569 return !neg_p (yi);
1570 return lts_p_large (xi.val, xi.len, precision, yi.val, yi.len);
1573 /* Return true if X < Y when both are treated as unsigned values. */
1574 template <typename T1, typename T2>
1575 inline bool
1576 wi::ltu_p (const T1 &x, const T2 &y)
1578 unsigned int precision = get_binary_precision (x, y);
1579 WIDE_INT_REF_FOR (T1) xi (x, precision);
1580 WIDE_INT_REF_FOR (T2) yi (y, precision);
1581 /* Optimize comparisons with constants and with sub-HWI unsigned
1582 integers. */
1583 if (STATIC_CONSTANT_P (yi.len == 1 && yi.val[0] >= 0))
1584 return xi.len == 1 && xi.to_uhwi () < (unsigned HOST_WIDE_INT) yi.val[0];
1585 if (STATIC_CONSTANT_P (xi.len == 1 && xi.val[0] >= 0))
1586 return yi.len != 1 || yi.to_uhwi () > (unsigned HOST_WIDE_INT) xi.val[0];
1587 if (precision <= HOST_BITS_PER_WIDE_INT)
1589 unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
1590 unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
1591 return xl < yl;
1593 return ltu_p_large (xi.val, xi.len, precision, yi.val, yi.len);
1596 /* Return true if X < Y. Signedness of X and Y is indicated by SGN. */
1597 template <typename T1, typename T2>
1598 inline bool
1599 wi::lt_p (const T1 &x, const T2 &y, signop sgn)
1601 if (sgn == SIGNED)
1602 return lts_p (x, y);
1603 else
1604 return ltu_p (x, y);
1607 /* Return true if X <= Y when both are treated as signed values. */
1608 template <typename T1, typename T2>
1609 inline bool
1610 wi::les_p (const T1 &x, const T2 &y)
1612 return !lts_p (y, x);
1615 /* Return true if X <= Y when both are treated as unsigned values. */
1616 template <typename T1, typename T2>
1617 inline bool
1618 wi::leu_p (const T1 &x, const T2 &y)
1620 return !ltu_p (y, x);
1623 /* Return true if X <= Y. Signedness of X and Y is indicated by SGN. */
1624 template <typename T1, typename T2>
1625 inline bool
1626 wi::le_p (const T1 &x, const T2 &y, signop sgn)
1628 if (sgn == SIGNED)
1629 return les_p (x, y);
1630 else
1631 return leu_p (x, y);
1634 /* Return true if X > Y when both are treated as signed values. */
1635 template <typename T1, typename T2>
1636 inline bool
1637 wi::gts_p (const T1 &x, const T2 &y)
1639 return lts_p (y, x);
1642 /* Return true if X > Y when both are treated as unsigned values. */
1643 template <typename T1, typename T2>
1644 inline bool
1645 wi::gtu_p (const T1 &x, const T2 &y)
1647 return ltu_p (y, x);
1650 /* Return true if X > Y. Signedness of X and Y is indicated by SGN. */
1651 template <typename T1, typename T2>
1652 inline bool
1653 wi::gt_p (const T1 &x, const T2 &y, signop sgn)
1655 if (sgn == SIGNED)
1656 return gts_p (x, y);
1657 else
1658 return gtu_p (x, y);
1661 /* Return true if X >= Y when both are treated as signed values. */
1662 template <typename T1, typename T2>
1663 inline bool
1664 wi::ges_p (const T1 &x, const T2 &y)
1666 return !lts_p (x, y);
1669 /* Return true if X >= Y when both are treated as unsigned values. */
1670 template <typename T1, typename T2>
1671 inline bool
1672 wi::geu_p (const T1 &x, const T2 &y)
1674 return !ltu_p (x, y);
1677 /* Return true if X >= Y. Signedness of X and Y is indicated by SGN. */
1678 template <typename T1, typename T2>
1679 inline bool
1680 wi::ge_p (const T1 &x, const T2 &y, signop sgn)
1682 if (sgn == SIGNED)
1683 return ges_p (x, y);
1684 else
1685 return geu_p (x, y);
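/* Illustrative sketch, not part of the original header: the predicates
   above compose into a simple range check.  The helper name is
   hypothetical.  */
inline bool
example_in_range_p (const wide_int &x, const wide_int &lo,
		    const wide_int &hi, signop sgn)
{
  /* le_p dispatches to les_p or leu_p according to SGN.  */
  return wi::le_p (lo, x, sgn) && wi::le_p (x, hi, sgn);
}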
1688 /* Return -1 if X < Y, 0 if X == Y and 1 if X > Y. Treat both X and Y
1689 as signed values. */
1690 template <typename T1, typename T2>
1691 inline int
1692 wi::cmps (const T1 &x, const T2 &y)
1694 unsigned int precision = get_binary_precision (x, y);
1695 WIDE_INT_REF_FOR (T1) xi (x, precision);
1696 WIDE_INT_REF_FOR (T2) yi (y, precision);
1697 if (precision <= HOST_BITS_PER_WIDE_INT)
1699 HOST_WIDE_INT xl = xi.to_shwi ();
1700 HOST_WIDE_INT yl = yi.to_shwi ();
1701 if (xl < yl)
1702 return -1;
1703 else if (xl > yl)
1704 return 1;
1705 else
1706 return 0;
1708 return cmps_large (xi.val, xi.len, precision, yi.val, yi.len);
1711 /* Return -1 if X < Y, 0 if X == Y and 1 if X > Y. Treat both X and Y
1712 as unsigned values. */
1713 template <typename T1, typename T2>
1714 inline int
1715 wi::cmpu (const T1 &x, const T2 &y)
1717 unsigned int precision = get_binary_precision (x, y);
1718 WIDE_INT_REF_FOR (T1) xi (x, precision);
1719 WIDE_INT_REF_FOR (T2) yi (y, precision);
1720 if (precision <= HOST_BITS_PER_WIDE_INT)
1722 unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
1723 unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
1724 if (xl < yl)
1725 return -1;
1726 else if (xl == yl)
1727 return 0;
1728 else
1729 return 1;
1731 return cmpu_large (xi.val, xi.len, precision, yi.val, yi.len);
1734 /* Return -1 if X < Y, 0 if X == Y and 1 if X > Y. Signedness of
1735 X and Y indicated by SGN. */
1736 template <typename T1, typename T2>
1737 inline int
1738 wi::cmp (const T1 &x, const T2 &y, signop sgn)
1740 if (sgn == SIGNED)
1741 return cmps (x, y);
1742 else
1743 return cmpu (x, y);
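/* Illustrative sketch, not part of the original header: wi::cmp returns
   a qsort-style -1/0/1, so one call can stand in for separate lt/eq/gt
   tests.  The helper name is hypothetical.  */
inline bool
example_sorted_pair_p (const wide_int &x, const wide_int &y, signop sgn)
{
  /* True if X <= Y under signedness SGN.  */
  return wi::cmp (x, y, sgn) <= 0;
}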
1746 /* Return ~x. */
1747 template <typename T>
1748 inline WI_UNARY_RESULT (T)
1749 wi::bit_not (const T &x)
1751 WI_UNARY_RESULT_VAR (result, val, T, x);
1752 WIDE_INT_REF_FOR (T) xi (x, get_precision (result));
1753 for (unsigned int i = 0; i < xi.len; ++i)
1754 val[i] = ~xi.val[i];
1755 result.set_len (xi.len);
1756 return result;
1759 /* Return -x. */
1760 template <typename T>
1761 inline WI_UNARY_RESULT (T)
1762 wi::neg (const T &x)
1764 return sub (0, x);
1767 /* Return -x. Indicate in *OVERFLOW if X is the minimum signed value. */
1768 template <typename T>
1769 inline WI_UNARY_RESULT (T)
1770 wi::neg (const T &x, bool *overflow)
1772 *overflow = only_sign_bit_p (x);
1773 return sub (0, x);
1776 /* Return the absolute value of x. */
1777 template <typename T>
1778 inline WI_UNARY_RESULT (T)
1779 wi::abs (const T &x)
1781 return neg_p (x) ? neg (x) : x;
1784 /* Return the result of sign-extending the low OFFSET bits of X. */
1785 template <typename T>
1786 inline WI_UNARY_RESULT (T)
1787 wi::sext (const T &x, unsigned int offset)
1789 WI_UNARY_RESULT_VAR (result, val, T, x);
1790 unsigned int precision = get_precision (result);
1791 WIDE_INT_REF_FOR (T) xi (x, precision);
1793 if (offset <= HOST_BITS_PER_WIDE_INT)
1795 val[0] = sext_hwi (xi.ulow (), offset);
1796 result.set_len (1, true);
1798 else
1799 result.set_len (sext_large (val, xi.val, xi.len, precision, offset));
1800 return result;
1803 /* Return the result of zero-extending the low OFFSET bits of X. */
1804 template <typename T>
1805 inline WI_UNARY_RESULT (T)
1806 wi::zext (const T &x, unsigned int offset)
1808 WI_UNARY_RESULT_VAR (result, val, T, x);
1809 unsigned int precision = get_precision (result);
1810 WIDE_INT_REF_FOR (T) xi (x, precision);
1812 /* This is not just an optimization; it is actually required to
1813 maintain canonization. */
1814 if (offset >= precision)
1816 wi::copy (result, xi);
1817 return result;
1820 /* In these cases we know that at least the top bit will be clear,
1821 so no sign extension is necessary. */
1822 if (offset < HOST_BITS_PER_WIDE_INT)
1824 val[0] = zext_hwi (xi.ulow (), offset);
1825 result.set_len (1, true);
1827 else
1828 result.set_len (zext_large (val, xi.val, xi.len, precision, offset), true);
1829 return result;
1832 /* Return the result of extending the low OFFSET bits of X according to
1833 signedness SGN. */
1834 template <typename T>
1835 inline WI_UNARY_RESULT (T)
1836 wi::ext (const T &x, unsigned int offset, signop sgn)
1838 return sgn == SIGNED ? sext (x, offset) : zext (x, offset);
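/* Illustrative sketch, not part of the original header: extend the low
   8 bits of X both ways.  For a 32-bit X holding 0xff, sext yields -1
   and zext yields 255 when read back as a host integer.  The helper
   name is hypothetical.  */
inline void
example_extend_byte (const wide_int &x, HOST_WIDE_INT *as_signed,
		     HOST_WIDE_INT *as_unsigned)
{
  *as_signed = wi::sext (x, 8).to_shwi ();
  *as_unsigned = wi::zext (x, 8).to_shwi ();
}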
1841 /* Return an integer that represents X | (1 << bit). */
1842 template <typename T>
1843 inline WI_UNARY_RESULT (T)
1844 wi::set_bit (const T &x, unsigned int bit)
1846 WI_UNARY_RESULT_VAR (result, val, T, x);
1847 unsigned int precision = get_precision (result);
1848 WIDE_INT_REF_FOR (T) xi (x, precision);
1849 if (precision <= HOST_BITS_PER_WIDE_INT)
1851 val[0] = xi.ulow () | ((unsigned HOST_WIDE_INT) 1 << bit);
1852 result.set_len (1);
1854 else
1855 result.set_len (set_bit_large (val, xi.val, xi.len, precision, bit));
1856 return result;
1859 /* Return the minimum of X and Y, treating them both as having
1860 signedness SGN. */
1861 template <typename T1, typename T2>
1862 inline WI_BINARY_RESULT (T1, T2)
1863 wi::min (const T1 &x, const T2 &y, signop sgn)
1865 WI_BINARY_RESULT_VAR (result, val ATTRIBUTE_UNUSED, T1, x, T2, y);
1866 unsigned int precision = get_precision (result);
1867 if (wi::le_p (x, y, sgn))
1868 wi::copy (result, WIDE_INT_REF_FOR (T1) (x, precision));
1869 else
1870 wi::copy (result, WIDE_INT_REF_FOR (T2) (y, precision));
1871 return result;
1874 /* Return the minimum of X and Y, treating both as signed values. */
1875 template <typename T1, typename T2>
1876 inline WI_BINARY_RESULT (T1, T2)
1877 wi::smin (const T1 &x, const T2 &y)
1879 return min (x, y, SIGNED);
1882 /* Return the minimum of X and Y, treating both as unsigned values. */
1883 template <typename T1, typename T2>
1884 inline WI_BINARY_RESULT (T1, T2)
1885 wi::umin (const T1 &x, const T2 &y)
1887 return min (x, y, UNSIGNED);
1890 /* Return the maximum of X and Y, treating them both as having
1891 signedness SGN. */
1892 template <typename T1, typename T2>
1893 inline WI_BINARY_RESULT (T1, T2)
1894 wi::max (const T1 &x, const T2 &y, signop sgn)
1896 WI_BINARY_RESULT_VAR (result, val ATTRIBUTE_UNUSED, T1, x, T2, y);
1897 unsigned int precision = get_precision (result);
1898 if (wi::ge_p (x, y, sgn))
1899 wi::copy (result, WIDE_INT_REF_FOR (T1) (x, precision));
1900 else
1901 wi::copy (result, WIDE_INT_REF_FOR (T2) (y, precision));
1902 return result;
1905 /* Return the maximum of X and Y, treating both as signed values. */
1906 template <typename T1, typename T2>
1907 inline WI_BINARY_RESULT (T1, T2)
1908 wi::smax (const T1 &x, const T2 &y)
1910 return max (x, y, SIGNED);
1913 /* Return the maximum of X and Y, treating both as unsigned values. */
1914 template <typename T1, typename T2>
1915 inline WI_BINARY_RESULT (T1, T2)
1916 wi::umax (const T1 &x, const T2 &y)
1918 return max (x, y, UNSIGNED);
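/* Illustrative sketch, not part of the original header: clamp X to the
   inclusive range [LO, HI] using the min/max routines above.  The
   helper name is hypothetical.  */
inline wide_int
example_clamp (const wide_int &x, const wide_int &lo,
	       const wide_int &hi, signop sgn)
{
  return wi::min (wi::max (x, lo, sgn), hi, sgn);
}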
1921 /* Return X & Y. */
1922 template <typename T1, typename T2>
1923 inline WI_BINARY_RESULT (T1, T2)
1924 wi::bit_and (const T1 &x, const T2 &y)
1926 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
1927 unsigned int precision = get_precision (result);
1928 WIDE_INT_REF_FOR (T1) xi (x, precision);
1929 WIDE_INT_REF_FOR (T2) yi (y, precision);
1930 bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
1931 if (xi.len + yi.len == 2)
1933 val[0] = xi.ulow () & yi.ulow ();
1934 result.set_len (1, is_sign_extended);
1936 else
1937 result.set_len (and_large (val, xi.val, xi.len, yi.val, yi.len,
1938 precision), is_sign_extended);
1939 return result;
1942 /* Return X & ~Y. */
1943 template <typename T1, typename T2>
1944 inline WI_BINARY_RESULT (T1, T2)
1945 wi::bit_and_not (const T1 &x, const T2 &y)
1947 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
1948 unsigned int precision = get_precision (result);
1949 WIDE_INT_REF_FOR (T1) xi (x, precision);
1950 WIDE_INT_REF_FOR (T2) yi (y, precision);
1951 bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
1952 if (xi.len + yi.len == 2)
1954 val[0] = xi.ulow () & ~yi.ulow ();
1955 result.set_len (1, is_sign_extended);
1957 else
1958 result.set_len (and_not_large (val, xi.val, xi.len, yi.val, yi.len,
1959 precision), is_sign_extended);
1960 return result;
1963 /* Return X | Y. */
1964 template <typename T1, typename T2>
1965 inline WI_BINARY_RESULT (T1, T2)
1966 wi::bit_or (const T1 &x, const T2 &y)
1968 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
1969 unsigned int precision = get_precision (result);
1970 WIDE_INT_REF_FOR (T1) xi (x, precision);
1971 WIDE_INT_REF_FOR (T2) yi (y, precision);
1972 bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
1973 if (xi.len + yi.len == 2)
1975 val[0] = xi.ulow () | yi.ulow ();
1976 result.set_len (1, is_sign_extended);
1978 else
1979 result.set_len (or_large (val, xi.val, xi.len,
1980 yi.val, yi.len, precision), is_sign_extended);
1981 return result;
1984 /* Return X | ~Y. */
1985 template <typename T1, typename T2>
1986 inline WI_BINARY_RESULT (T1, T2)
1987 wi::bit_or_not (const T1 &x, const T2 &y)
1989 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
1990 unsigned int precision = get_precision (result);
1991 WIDE_INT_REF_FOR (T1) xi (x, precision);
1992 WIDE_INT_REF_FOR (T2) yi (y, precision);
1993 bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
1994 if (xi.len + yi.len == 2)
1996 val[0] = xi.ulow () | ~yi.ulow ();
1997 result.set_len (1, is_sign_extended);
1999 else
2000 result.set_len (or_not_large (val, xi.val, xi.len, yi.val, yi.len,
2001 precision), is_sign_extended);
2002 return result;
2005 /* Return X ^ Y. */
2006 template <typename T1, typename T2>
2007 inline WI_BINARY_RESULT (T1, T2)
2008 wi::bit_xor (const T1 &x, const T2 &y)
2010 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2011 unsigned int precision = get_precision (result);
2012 WIDE_INT_REF_FOR (T1) xi (x, precision);
2013 WIDE_INT_REF_FOR (T2) yi (y, precision);
2014 bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
2015 if (xi.len + yi.len == 2)
2017 val[0] = xi.ulow () ^ yi.ulow ();
2018 result.set_len (1, is_sign_extended);
2020 else
2021 result.set_len (xor_large (val, xi.val, xi.len,
2022 yi.val, yi.len, precision), is_sign_extended);
2023 return result;
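/* Illustrative sketch, not part of the original header: test whether
   every bit of MASK is set in X, i.e. (X & MASK) == MASK, using the
   logical operations above.  The helper name is hypothetical.  */
inline bool
example_mask_subset_p (const wide_int &x, const wide_int &mask)
{
  return wi::eq_p (wi::bit_and (x, mask), mask);
}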
2026 /* Return X + Y. */
2027 template <typename T1, typename T2>
2028 inline WI_BINARY_RESULT (T1, T2)
2029 wi::add (const T1 &x, const T2 &y)
2031 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2032 unsigned int precision = get_precision (result);
2033 WIDE_INT_REF_FOR (T1) xi (x, precision);
2034 WIDE_INT_REF_FOR (T2) yi (y, precision);
2035 if (precision <= HOST_BITS_PER_WIDE_INT)
2037 val[0] = xi.ulow () + yi.ulow ();
2038 result.set_len (1);
2040 else
2041 result.set_len (add_large (val, xi.val, xi.len,
2042 yi.val, yi.len, precision,
2043 UNSIGNED, 0));
2044 return result;
2047 /* Return X + Y. Treat X and Y as having the signedness given by SGN
2048 and indicate in *OVERFLOW whether the operation overflowed. */
2049 template <typename T1, typename T2>
2050 inline WI_BINARY_RESULT (T1, T2)
2051 wi::add (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2053 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2054 unsigned int precision = get_precision (result);
2055 WIDE_INT_REF_FOR (T1) xi (x, precision);
2056 WIDE_INT_REF_FOR (T2) yi (y, precision);
2057 if (precision <= HOST_BITS_PER_WIDE_INT)
2059 unsigned HOST_WIDE_INT xl = xi.ulow ();
2060 unsigned HOST_WIDE_INT yl = yi.ulow ();
2061 unsigned HOST_WIDE_INT resultl = xl + yl;
2062 if (precision == 0)
2063 *overflow = false;
2064 else if (sgn == SIGNED)
2065 *overflow = (((resultl ^ xl) & (resultl ^ yl))
2066 >> (precision - 1)) & 1;
2067 else
2068 *overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
2069 < (xl << (HOST_BITS_PER_WIDE_INT - precision)));
2070 val[0] = resultl;
2071 result.set_len (1);
2073 else
2074 result.set_len (add_large (val, xi.val, xi.len,
2075 yi.val, yi.len, precision,
2076 sgn, overflow));
2077 return result;
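/* Illustrative sketch, not part of the original header: the overflow
   variant is the usual way to do checked arithmetic; the sum is still
   truncated to the operands' precision and the flag says whether that
   truncation lost information.  The helper name is hypothetical.  */
inline wide_int
example_checked_add (const wide_int &x, const wide_int &y, signop sgn,
		     bool *lost_p)
{
  return wi::add (x, y, sgn, lost_p);
}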
2080 /* Return X - Y. */
2081 template <typename T1, typename T2>
2082 inline WI_BINARY_RESULT (T1, T2)
2083 wi::sub (const T1 &x, const T2 &y)
2085 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2086 unsigned int precision = get_precision (result);
2087 WIDE_INT_REF_FOR (T1) xi (x, precision);
2088 WIDE_INT_REF_FOR (T2) yi (y, precision);
2089 if (precision <= HOST_BITS_PER_WIDE_INT)
2091 val[0] = xi.ulow () - yi.ulow ();
2092 result.set_len (1);
2094 else
2095 result.set_len (sub_large (val, xi.val, xi.len,
2096 yi.val, yi.len, precision,
2097 UNSIGNED, 0));
2098 return result;
2101 /* Return X - Y. Treat X and Y as having the signedness given by SGN
2102 and indicate in *OVERFLOW whether the operation overflowed. */
2103 template <typename T1, typename T2>
2104 inline WI_BINARY_RESULT (T1, T2)
2105 wi::sub (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2107 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2108 unsigned int precision = get_precision (result);
2109 WIDE_INT_REF_FOR (T1) xi (x, precision);
2110 WIDE_INT_REF_FOR (T2) yi (y, precision);
2111 if (precision <= HOST_BITS_PER_WIDE_INT)
2113 unsigned HOST_WIDE_INT xl = xi.ulow ();
2114 unsigned HOST_WIDE_INT yl = yi.ulow ();
2115 unsigned HOST_WIDE_INT resultl = xl - yl;
2116 if (precision == 0)
2117 *overflow = false;
2118 else if (sgn == SIGNED)
2119 *overflow = (((xl ^ yl) & (resultl ^ xl)) >> (precision - 1)) & 1;
2120 else
2121 *overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
2122 > (xl << (HOST_BITS_PER_WIDE_INT - precision)));
2123 val[0] = resultl;
2124 result.set_len (1);
2126 else
2127 result.set_len (sub_large (val, xi.val, xi.len,
2128 yi.val, yi.len, precision,
2129 sgn, overflow));
2130 return result;
2133 /* Return X * Y. */
2134 template <typename T1, typename T2>
2135 inline WI_BINARY_RESULT (T1, T2)
2136 wi::mul (const T1 &x, const T2 &y)
2138 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2139 unsigned int precision = get_precision (result);
2140 WIDE_INT_REF_FOR (T1) xi (x, precision);
2141 WIDE_INT_REF_FOR (T2) yi (y, precision);
2142 if (precision <= HOST_BITS_PER_WIDE_INT)
2144 val[0] = xi.ulow () * yi.ulow ();
2145 result.set_len (1);
2147 else
2148 result.set_len (mul_internal (val, xi.val, xi.len, yi.val, yi.len,
2149 precision, UNSIGNED, 0, false, false));
2150 return result;
2153 /* Return X * Y. Treat X and Y as having the signedness given by SGN
2154 and indicate in *OVERFLOW whether the operation overflowed. */
2155 template <typename T1, typename T2>
2156 inline WI_BINARY_RESULT (T1, T2)
2157 wi::mul (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2159 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2160 unsigned int precision = get_precision (result);
2161 WIDE_INT_REF_FOR (T1) xi (x, precision);
2162 WIDE_INT_REF_FOR (T2) yi (y, precision);
2163 result.set_len (mul_internal (val, xi.val, xi.len,
2164 yi.val, yi.len, precision,
2165 sgn, overflow, false, false));
2166 return result;
2169 /* Return X * Y, treating both X and Y as signed values. Indicate in
2170 *OVERFLOW whether the operation overflowed. */
2171 template <typename T1, typename T2>
2172 inline WI_BINARY_RESULT (T1, T2)
2173 wi::smul (const T1 &x, const T2 &y, bool *overflow)
2175 return mul (x, y, SIGNED, overflow);
2178 /* Return X * Y, treating both X and Y as unsigned values. Indicate in
2179 *OVERFLOW whether the operation overflowed. */
2180 template <typename T1, typename T2>
2181 inline WI_BINARY_RESULT (T1, T2)
2182 wi::umul (const T1 &x, const T2 &y, bool *overflow)
2184 return mul (x, y, UNSIGNED, overflow);
2187 /* Perform a widening multiplication of X and Y, extending the values
2188 according to SGN, and return the high part of the result. */
2189 template <typename T1, typename T2>
2190 inline WI_BINARY_RESULT (T1, T2)
2191 wi::mul_high (const T1 &x, const T2 &y, signop sgn)
2193 WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
2194 unsigned int precision = get_precision (result);
2195 WIDE_INT_REF_FOR (T1) xi (x, precision);
2196 WIDE_INT_REF_FOR (T2) yi (y, precision);
2197 result.set_len (mul_internal (val, xi.val, xi.len,
2198 yi.val, yi.len, precision,
2199 sgn, 0, true, false));
2200 return result;
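/* Illustrative sketch, not part of the original header: wi::mul gives
   the low PRECISION bits of the product and wi::mul_high the high bits,
   so together they describe the full double-width result.  The helper
   name is hypothetical.  */
inline void
example_mul_parts (const wide_int &x, const wide_int &y,
		   wide_int *lo, wide_int *hi)
{
  *lo = wi::mul (x, y);
  *hi = wi::mul_high (x, y, SIGNED);
}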
2203 /* Return X / Y, rounding towards 0. Treat X and Y as having the
2204 signedness given by SGN. Indicate in *OVERFLOW if the result
2205 overflows. */
2206 template <typename T1, typename T2>
2207 inline WI_BINARY_RESULT (T1, T2)
2208 wi::div_trunc (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2210 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2211 unsigned int precision = get_precision (quotient);
2212 WIDE_INT_REF_FOR (T1) xi (x, precision);
2213 WIDE_INT_REF_FOR (T2) yi (y);
2215 quotient.set_len (divmod_internal (quotient_val, 0, 0, xi.val, xi.len,
2216 precision,
2217 yi.val, yi.len, yi.precision,
2218 sgn, overflow));
2219 return quotient;
2222 /* Return X / Y, rounding towards 0. Treat X and Y as signed values. */
2223 template <typename T1, typename T2>
2224 inline WI_BINARY_RESULT (T1, T2)
2225 wi::sdiv_trunc (const T1 &x, const T2 &y)
2227 return div_trunc (x, y, SIGNED);
2230 /* Return X / Y, rounding towards 0. Treat X and Y as unsigned values. */
2231 template <typename T1, typename T2>
2232 inline WI_BINARY_RESULT (T1, T2)
2233 wi::udiv_trunc (const T1 &x, const T2 &y)
2235 return div_trunc (x, y, UNSIGNED);
2238 /* Return X / Y, rounding towards -inf. Treat X and Y as having the
2239 signedness given by SGN. Indicate in *OVERFLOW if the result
2240 overflows. */
2241 template <typename T1, typename T2>
2242 inline WI_BINARY_RESULT (T1, T2)
2243 wi::div_floor (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2245 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2246 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2247 unsigned int precision = get_precision (quotient);
2248 WIDE_INT_REF_FOR (T1) xi (x, precision);
2249 WIDE_INT_REF_FOR (T2) yi (y);
2251 unsigned int remainder_len;
2252 quotient.set_len (divmod_internal (quotient_val,
2253 &remainder_len, remainder_val,
2254 xi.val, xi.len, precision,
2255 yi.val, yi.len, yi.precision, sgn,
2256 overflow));
2257 remainder.set_len (remainder_len);
2258 if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn) && remainder != 0)
2259 return quotient - 1;
2260 return quotient;
2263 /* Return X / Y, rounding towards -inf. Treat X and Y as signed values. */
2264 template <typename T1, typename T2>
2265 inline WI_BINARY_RESULT (T1, T2)
2266 wi::sdiv_floor (const T1 &x, const T2 &y)
2268 return div_floor (x, y, SIGNED);
2271 /* Return X / Y, rounding towards -inf. Treat X and Y as unsigned values. */
2272 /* ??? Why do we have both this and udiv_trunc? Aren't they the same? */
2273 template <typename T1, typename T2>
2274 inline WI_BINARY_RESULT (T1, T2)
2275 wi::udiv_floor (const T1 &x, const T2 &y)
2277 return div_floor (x, y, UNSIGNED);
2280 /* Return X / Y, rounding towards +inf. Treat X and Y as having the
2281 signedness given by SGN. Indicate in *OVERFLOW if the result
2282 overflows. */
2283 template <typename T1, typename T2>
2284 inline WI_BINARY_RESULT (T1, T2)
2285 wi::div_ceil (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2287 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2288 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2289 unsigned int precision = get_precision (quotient);
2290 WIDE_INT_REF_FOR (T1) xi (x, precision);
2291 WIDE_INT_REF_FOR (T2) yi (y);
2293 unsigned int remainder_len;
2294 quotient.set_len (divmod_internal (quotient_val,
2295 &remainder_len, remainder_val,
2296 xi.val, xi.len, precision,
2297 yi.val, yi.len, yi.precision, sgn,
2298 overflow));
2299 remainder.set_len (remainder_len);
2300 if (wi::neg_p (x, sgn) == wi::neg_p (y, sgn) && remainder != 0)
2301 return quotient + 1;
2302 return quotient;
2305 /* Return X / Y, rounding towards nearest with ties away from zero.
2306 Treat X and Y as having the signedness given by SGN. Indicate
2307 in *OVERFLOW if the result overflows. */
2308 template <typename T1, typename T2>
2309 inline WI_BINARY_RESULT (T1, T2)
2310 wi::div_round (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2312 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2313 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2314 unsigned int precision = get_precision (quotient);
2315 WIDE_INT_REF_FOR (T1) xi (x, precision);
2316 WIDE_INT_REF_FOR (T2) yi (y);
2318 unsigned int remainder_len;
2319 quotient.set_len (divmod_internal (quotient_val,
2320 &remainder_len, remainder_val,
2321 xi.val, xi.len, precision,
2322 yi.val, yi.len, yi.precision, sgn,
2323 overflow));
2324 remainder.set_len (remainder_len);
2326 if (remainder != 0)
2328 if (sgn == SIGNED)
2330 if (wi::ges_p (wi::abs (remainder),
2331 wi::lrshift (wi::abs (y), 1)))
2333 if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn))
2334 return quotient - 1;
2335 else
2336 return quotient + 1;
2339 else
2341 if (wi::geu_p (remainder, wi::lrshift (y, 1)))
2342 return quotient + 1;
2345 return quotient;
2348 /* Return X / Y, rounding towards 0. Treat X and Y as having the
2349 signedness given by SGN. Store the remainder in *REMAINDER_PTR. */
2350 template <typename T1, typename T2>
2351 inline WI_BINARY_RESULT (T1, T2)
2352 wi::divmod_trunc (const T1 &x, const T2 &y, signop sgn,
2353 WI_BINARY_RESULT (T1, T2) *remainder_ptr)
2355 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2356 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2357 unsigned int precision = get_precision (quotient);
2358 WIDE_INT_REF_FOR (T1) xi (x, precision);
2359 WIDE_INT_REF_FOR (T2) yi (y);
2361 unsigned int remainder_len;
2362 quotient.set_len (divmod_internal (quotient_val,
2363 &remainder_len, remainder_val,
2364 xi.val, xi.len, precision,
2365 yi.val, yi.len, yi.precision, sgn, 0));
2366 remainder.set_len (remainder_len);
2368 *remainder_ptr = remainder;
2369 return quotient;
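/* Illustrative sketch, not part of the original header: when both the
   quotient and the remainder are needed, divmod_trunc computes them
   with a single division.  The helper name is hypothetical.  */
inline bool
example_split_div (const wide_int &x, const wide_int &y, signop sgn,
		   wide_int *quot, wide_int *rem)
{
  *quot = wi::divmod_trunc (x, y, sgn, rem);
  /* True if Y divides X exactly.  */
  return wi::eq_p (*rem, 0);
}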
2372 /* Compute X / Y, rounding towards 0, and return the remainder.
2373 Treat X and Y as having the signedness given by SGN. Indicate
2374 in *OVERFLOW if the division overflows. */
2375 template <typename T1, typename T2>
2376 inline WI_BINARY_RESULT (T1, T2)
2377 wi::mod_trunc (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2379 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2380 unsigned int precision = get_precision (remainder);
2381 WIDE_INT_REF_FOR (T1) xi (x, precision);
2382 WIDE_INT_REF_FOR (T2) yi (y);
2384 unsigned int remainder_len;
2385 divmod_internal (0, &remainder_len, remainder_val,
2386 xi.val, xi.len, precision,
2387 yi.val, yi.len, yi.precision, sgn, overflow);
2388 remainder.set_len (remainder_len);
2390 return remainder;
2393 /* Compute X / Y, rounding towards 0, and return the remainder.
2394 Treat X and Y as signed values. */
2395 template <typename T1, typename T2>
2396 inline WI_BINARY_RESULT (T1, T2)
2397 wi::smod_trunc (const T1 &x, const T2 &y)
2399 return mod_trunc (x, y, SIGNED);
2402 /* Compute X / Y, rounding towards 0, and return the remainder.
2403 Treat X and Y as unsigned values. */
2404 template <typename T1, typename T2>
2405 inline WI_BINARY_RESULT (T1, T2)
2406 wi::umod_trunc (const T1 &x, const T2 &y)
2408 return mod_trunc (x, y, UNSIGNED);
2411 /* Compute X / Y, rounding towards -inf, and return the remainder.
2412 Treat X and Y as having the signedness given by SGN. Indicate
2413 in *OVERFLOW if the division overflows. */
2414 template <typename T1, typename T2>
2415 inline WI_BINARY_RESULT (T1, T2)
2416 wi::mod_floor (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2418 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2419 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2420 unsigned int precision = get_precision (quotient);
2421 WIDE_INT_REF_FOR (T1) xi (x, precision);
2422 WIDE_INT_REF_FOR (T2) yi (y);
2424 unsigned int remainder_len;
2425 quotient.set_len (divmod_internal (quotient_val,
2426 &remainder_len, remainder_val,
2427 xi.val, xi.len, precision,
2428 yi.val, yi.len, yi.precision, sgn,
2429 overflow));
2430 remainder.set_len (remainder_len);
2432 if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn) && remainder != 0)
2433 return remainder + y;
2434 return remainder;
2437 /* Compute X / Y, rounding towards -inf, and return the remainder.
2438 Treat X and Y as unsigned values. */
2439 /* ??? Why do we have both this and umod_trunc? Aren't they the same? */
2440 template <typename T1, typename T2>
2441 inline WI_BINARY_RESULT (T1, T2)
2442 wi::umod_floor (const T1 &x, const T2 &y)
2444 return mod_floor (x, y, UNSIGNED);
2447 /* Compute X / Y, rounding towards +inf, and return the remainder.
2448 Treat X and Y as having the signedness given by SGN. Indicate
2449 in *OVERFLOW if the division overflows. */
2450 template <typename T1, typename T2>
2451 inline WI_BINARY_RESULT (T1, T2)
2452 wi::mod_ceil (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2454 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2455 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2456 unsigned int precision = get_precision (quotient);
2457 WIDE_INT_REF_FOR (T1) xi (x, precision);
2458 WIDE_INT_REF_FOR (T2) yi (y);
2460 unsigned int remainder_len;
2461 quotient.set_len (divmod_internal (quotient_val,
2462 &remainder_len, remainder_val,
2463 xi.val, xi.len, precision,
2464 yi.val, yi.len, yi.precision, sgn,
2465 overflow));
2466 remainder.set_len (remainder_len);
2468 if (wi::neg_p (x, sgn) == wi::neg_p (y, sgn) && remainder != 0)
2469 return remainder - y;
2470 return remainder;
2473 /* Compute X / Y, rounding towards nearest with ties away from zero,
2474 and return the remainder. Treat X and Y as having the signedness
2475 given by SGN. Indicate in *OVERFLOW if the division overflows. */
2476 template <typename T1, typename T2>
2477 inline WI_BINARY_RESULT (T1, T2)
2478 wi::mod_round (const T1 &x, const T2 &y, signop sgn, bool *overflow)
2480 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2481 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2482 unsigned int precision = get_precision (quotient);
2483 WIDE_INT_REF_FOR (T1) xi (x, precision);
2484 WIDE_INT_REF_FOR (T2) yi (y);
2486 unsigned int remainder_len;
2487 quotient.set_len (divmod_internal (quotient_val,
2488 &remainder_len, remainder_val,
2489 xi.val, xi.len, precision,
2490 yi.val, yi.len, yi.precision, sgn,
2491 overflow));
2492 remainder.set_len (remainder_len);
2494 if (remainder != 0)
2496 if (sgn == SIGNED)
2498 if (wi::ges_p (wi::abs (remainder),
2499 wi::lrshift (wi::abs (y), 1)))
2501 if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn))
2502 return remainder + y;
2503 else
2504 return remainder - y;
2507 else
2509 if (wi::geu_p (remainder, wi::lrshift (y, 1)))
2510 return remainder - y;
2513 return remainder;
2516 /* Return true if X is a multiple of Y, storing X / Y in *RES if so.
2517 Treat X and Y as having the signedness given by SGN. */
2518 template <typename T1, typename T2>
2519 inline bool
2520 wi::multiple_of_p (const T1 &x, const T2 &y, signop sgn,
2521 WI_BINARY_RESULT (T1, T2) *res)
2523 WI_BINARY_RESULT (T1, T2) remainder;
2524 WI_BINARY_RESULT (T1, T2) quotient
2525 = divmod_trunc (x, y, sgn, &remainder);
2526 if (remainder == 0)
2528 *res = quotient;
2529 return true;
2531 return false;
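/* Illustrative sketch, not part of the original header: multiple_of_p
   combines the divisibility test with the division itself, which is
   handy when folding expressions of the form (X * Y) / Y.  The helper
   name is hypothetical.  */
inline bool
example_exact_udiv (const wide_int &x, const wide_int &y, wide_int *res)
{
  /* Only succeeds when Y divides X exactly under an unsigned reading.  */
  return wi::multiple_of_p (x, y, UNSIGNED, res);
}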
2534 /* Truncate the shift value X so that it is within BITSIZE.
2535 PRECISION is the number of bits in the value being
2536 shifted. */
2537 inline unsigned int
2538 wi::trunc_shift (const wide_int_ref &x, unsigned int bitsize,
2539 unsigned int precision)
2541 if (bitsize == 0)
2543 gcc_checking_assert (!neg_p (x));
2544 if (geu_p (x, precision))
2545 return precision;
2547 return x.to_uhwi () & (bitsize - 1);
2550 /* Return X << Y. If BITSIZE is nonzero, only use the low BITSIZE
2551 bits of Y. */
2552 template <typename T>
2553 inline WI_UNARY_RESULT (T)
2554 wi::lshift (const T &x, const wide_int_ref &y, unsigned int bitsize)
2556 WI_UNARY_RESULT_VAR (result, val, T, x);
2557 unsigned int precision = get_precision (result);
2558 WIDE_INT_REF_FOR (T) xi (x, precision);
2559 unsigned int shift = trunc_shift (y, bitsize, precision);
2560 /* Handle the simple cases quickly. */
2561 if (shift >= precision)
2563 val[0] = 0;
2564 result.set_len (1);
2566 else if (precision <= HOST_BITS_PER_WIDE_INT)
2568 val[0] = xi.ulow () << shift;
2569 result.set_len (1);
2571 else
2572 result.set_len (lshift_large (val, xi.val, xi.len,
2573 precision, shift));
2574 return result;
2577 /* Return X >> Y, using a logical shift. If BITSIZE is nonzero, only
2578 use the low BITSIZE bits of Y. */
2579 template <typename T>
2580 inline WI_UNARY_RESULT (T)
2581 wi::lrshift (const T &x, const wide_int_ref &y, unsigned int bitsize)
2583 WI_UNARY_RESULT_VAR (result, val, T, x);
2584 /* Do things in the precision of the input rather than the output,
2585 since the result can be no larger than that. */
2586 WIDE_INT_REF_FOR (T) xi (x);
2587 unsigned int shift = trunc_shift (y, bitsize, xi.precision);
2588 /* Handle the simple cases quickly. */
2589 if (shift >= xi.precision)
2591 val[0] = 0;
2592 result.set_len (1);
2594 else if (xi.precision <= HOST_BITS_PER_WIDE_INT)
2596 val[0] = xi.to_uhwi () >> shift;
2597 result.set_len (1);
2599 else
2600 result.set_len (lrshift_large (val, xi.val, xi.len, xi.precision,
2601 get_precision (result), shift));
2602 return result;
2605 /* Return X >> Y, using an arithmetic shift. If BITSIZE is nonzero,
2606 only use the low BITSIZE bits of Y. */
2607 template <typename T>
2608 inline WI_UNARY_RESULT (T)
2609 wi::arshift (const T &x, const wide_int_ref &y, unsigned int bitsize)
2611 WI_UNARY_RESULT_VAR (result, val, T, x);
2612 /* Do things in the precision of the input rather than the output,
2613 since the result can be no larger than that. */
2614 WIDE_INT_REF_FOR (T) xi (x);
2615 unsigned int shift = trunc_shift (y, bitsize, xi.precision);
2616 /* Handle the simple case quickly. */
2617 if (shift >= xi.precision)
2619 val[0] = sign_mask (x);
2620 result.set_len (1);
2622 else if (xi.precision <= HOST_BITS_PER_WIDE_INT)
2624 val[0] = sext_hwi (xi.ulow () >> shift, xi.precision - shift);
2625 result.set_len (1, true);
2627 else
2628 result.set_len (arshift_large (val, xi.val, xi.len, xi.precision,
2629 get_precision (result), shift));
2630 return result;
2633 /* Return X >> Y, using an arithmetic shift if SGN is SIGNED and a
2634 logical shift otherwise. If BITSIZE is nonzero, only use the low
2635 BITSIZE bits of Y. */
2636 template <typename T>
2637 inline WI_UNARY_RESULT (T)
2638 wi::rshift (const T &x, const wide_int_ref &y, signop sgn,
2639 unsigned int bitsize)
2641 if (sgn == UNSIGNED)
2642 return lrshift (x, y, bitsize);
2643 else
2644 return arshift (x, y, bitsize);
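/* Illustrative sketch, not part of the original header: for a negative
   value the two right shifts differ; arshift replicates the sign bit
   while lrshift shifts in zeros.  The helper name is hypothetical and
   the default BITSIZE of the shift routines is assumed.  */
inline void
example_right_shifts (const wide_int &x, unsigned int count,
		      wide_int *arith, wide_int *logical)
{
  *arith = wi::arshift (x, count);
  *logical = wi::lrshift (x, count);
}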
2647 /* Return the result of rotating the low WIDTH bits of X left by Y
2648 bits and zero-extending the result. Use a full-width rotate if
2649 WIDTH is zero. */
2650 template <typename T>
2651 inline WI_UNARY_RESULT (T)
2652 wi::lrotate (const T &x, const wide_int_ref &y, unsigned int width)
2654 unsigned int precision = get_binary_precision (x, x);
2655 if (width == 0)
2656 width = precision;
2657 gcc_checking_assert ((width & -width) == width);
2658 WI_UNARY_RESULT (T) left = wi::lshift (x, y, width);
2659 WI_UNARY_RESULT (T) right = wi::lrshift (x, wi::sub (width, y), width);
2660 if (width != precision)
2661 return wi::zext (left, width) | wi::zext (right, width);
2662 return left | right;
2665 /* Return the result of rotating the low WIDTH bits of X right by Y
2666 bits and zero-extending the result. Use a full-width rotate if
2667 WIDTH is zero. */
2668 template <typename T>
2669 inline WI_UNARY_RESULT (T)
2670 wi::rrotate (const T &x, const wide_int_ref &y, unsigned int width)
2672 unsigned int precision = get_binary_precision (x, x);
2673 if (width == 0)
2674 width = precision;
2675 gcc_checking_assert ((width & -width) == width);
2676 WI_UNARY_RESULT (T) right = wi::lrshift (x, y, width);
2677 WI_UNARY_RESULT (T) left = wi::lshift (x, wi::sub (width, y), width);
2678 if (width != precision)
2679 return wi::zext (left, width) | wi::zext (right, width);
2680 return left | right;
2683 /* Return 0 if the number of 1s in X is even and 1 if the number of 1s
2684 is odd. */
2685 inline int
2686 wi::parity (const wide_int_ref &x)
2688 return popcount (x) & 1;
2691 /* Extract WIDTH bits from X, starting at BITPOS. */
2692 template <typename T>
2693 inline unsigned HOST_WIDE_INT
2694 wi::extract_uhwi (const T &x, unsigned int bitpos,
2695 unsigned int width)
2697 unsigned precision = get_precision (x);
2698 if (precision < bitpos + width)
2699 precision = bitpos + width;
2700 WIDE_INT_REF_FOR (T) xi (x, precision);
2702 /* Handle this rare case after the above, so that we assert about
2703 bogus BITPOS values. */
2704 if (width == 0)
2705 return 0;
2707 unsigned int start = bitpos / HOST_BITS_PER_WIDE_INT;
2708 unsigned int shift = bitpos % HOST_BITS_PER_WIDE_INT;
2709 unsigned HOST_WIDE_INT res = xi.elt (start);
2710 res >>= shift;
2711 if (shift + width > HOST_BITS_PER_WIDE_INT)
2713 unsigned HOST_WIDE_INT upper = xi.elt (start + 1);
2714 res |= upper << (-shift % HOST_BITS_PER_WIDE_INT);
2716 return zext_hwi (res, width);
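/* Illustrative sketch, not part of the original header: pull out the
   second-lowest byte of X, i.e. bits [8, 16), as a host integer.  The
   helper name is hypothetical.  */
inline unsigned HOST_WIDE_INT
example_extract_byte1 (const wide_int &x)
{
  return wi::extract_uhwi (x, 8, 8);
}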
2719 template<typename T>
2720 void
2721 gt_ggc_mx (generic_wide_int <T> *)
2725 template<typename T>
2726 void
2727 gt_pch_nx (generic_wide_int <T> *)
2731 template<typename T>
2732 void
2733 gt_pch_nx (generic_wide_int <T> *, void (*) (void *, void *), void *)
2737 namespace wi
2739 /* Used for overloaded functions in which the only other acceptable
2740 scalar type is a pointer. It stops a plain 0 from being treated
2741 as a null pointer. */
2742 struct never_used1 {};
2743 struct never_used2 {};
2745 wide_int min_value (unsigned int, signop);
2746 wide_int min_value (never_used1 *);
2747 wide_int min_value (never_used2 *);
2748 wide_int max_value (unsigned int, signop);
2749 wide_int max_value (never_used1 *);
2750 wide_int max_value (never_used2 *);
2752 wide_int mul_full (const wide_int_ref &, const wide_int_ref &, signop);
2754 /* FIXME: this is target dependent, so should be elsewhere.
2755 It also seems to assume that CHAR_BIT == BITS_PER_UNIT. */
2756 wide_int from_buffer (const unsigned char *, unsigned int);
2758 #ifndef GENERATOR_FILE
2759 void to_mpz (wide_int, mpz_t, signop);
2760 #endif
2762 wide_int mask (unsigned int, bool, unsigned int);
2763 wide_int shifted_mask (unsigned int, unsigned int, bool, unsigned int);
2764 wide_int set_bit_in_zero (unsigned int, unsigned int);
2765 wide_int insert (const wide_int &x, const wide_int &y, unsigned int,
2766 unsigned int);
2768 template <typename T>
2769 T mask (unsigned int, bool);
2771 template <typename T>
2772 T shifted_mask (unsigned int, unsigned int, bool);
2774 template <typename T>
2775 T set_bit_in_zero (unsigned int);
2777 unsigned int mask (HOST_WIDE_INT *, unsigned int, bool, unsigned int);
2778 unsigned int shifted_mask (HOST_WIDE_INT *, unsigned int, unsigned int,
2779 bool, unsigned int);
2780 unsigned int from_array (HOST_WIDE_INT *, const HOST_WIDE_INT *,
2781 unsigned int, unsigned int, bool);
2784 /* Perform a widening multiplication of X and Y, extending the values
2785 according to SGN. */
2786 inline wide_int
2787 wi::mul_full (const wide_int_ref &x, const wide_int_ref &y, signop sgn)
2789 gcc_checking_assert (x.precision == y.precision);
2790 wide_int result = wide_int::create (x.precision * 2);
2791 result.set_len (mul_internal (result.write_val (), x.val, x.len,
2792 y.val, y.len, x.precision,
2793 sgn, 0, false, true));
2794 return result;
2797 /* Return a PRECISION-bit integer in which the low WIDTH bits are set
2798 and the other bits are clear, or the inverse if NEGATE_P. */
2799 inline wide_int
2800 wi::mask (unsigned int width, bool negate_p, unsigned int precision)
2802 wide_int result = wide_int::create (precision);
2803 result.set_len (mask (result.write_val (), width, negate_p, precision));
2804 return result;
2807 /* Return a PRECISION-bit integer in which the low START bits are clear,
2808 the next WIDTH bits are set, and the other bits are clear,
2809 or the inverse if NEGATE_P. */
2810 inline wide_int
2811 wi::shifted_mask (unsigned int start, unsigned int width, bool negate_p,
2812 unsigned int precision)
2814 wide_int result = wide_int::create (precision);
2815 result.set_len (shifted_mask (result.write_val (), start, width, negate_p,
2816 precision));
2817 return result;
2820 /* Return a PRECISION-bit integer in which bit BIT is set and all the
2821 others are clear. */
2822 inline wide_int
2823 wi::set_bit_in_zero (unsigned int bit, unsigned int precision)
2825 return shifted_mask (bit, 1, false, precision);
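/* Illustrative sketch, not part of the original header: the mask
   builders compose; a mask of bits [8, 16) of a 32-bit value can be
   built directly with shifted_mask or from two plain masks.  The helper
   name is hypothetical.  */
inline wide_int
example_byte1_mask (void)
{
  wide_int direct = wi::shifted_mask (8, 8, false, 32);
  wide_int composed = wi::bit_and_not (wi::mask (16, false, 32),
				       wi::mask (8, false, 32));
  gcc_checking_assert (wi::eq_p (direct, composed));
  return direct;
}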
2828 /* Return an integer of type T in which the low WIDTH bits are set
2829 and the other bits are clear, or the inverse if NEGATE_P. */
2830 template <typename T>
2831 inline T
2832 wi::mask (unsigned int width, bool negate_p)
2834 STATIC_ASSERT (wi::int_traits<T>::precision);
2835 T result;
2836 result.set_len (mask (result.write_val (), width, negate_p,
2837 wi::int_traits <T>::precision));
2838 return result;
2841 /* Return an integer of type T in which the low START bits are clear,
2842 the next WIDTH bits are set, and the other bits are clear, or the
2843 inverse if NEGATE_P. */
2844 template <typename T>
2845 inline T
2846 wi::shifted_mask (unsigned int start, unsigned int width, bool negate_p)
2848 STATIC_ASSERT (wi::int_traits<T>::precision);
2849 T result;
2850 result.set_len (shifted_mask (result.write_val (), start, width,
2851 negate_p,
2852 wi::int_traits <T>::precision));
2853 return result;
2856 /* Return an integer of type T in which bit BIT is set and all the
2857 others are clear. */
2858 template <typename T>
2859 inline T
2860 wi::set_bit_in_zero (unsigned int bit)
2862 return shifted_mask <T> (bit, 1, false);
2865 #endif /* WIDE_INT_H */