/* Operations with very long integers.  -*- C++ -*-
   Copyright (C) 2012-2016 Free Software Foundation, Inc.

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the
Free Software Foundation; either version 3, or (at your option) any
later version.

GCC is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */
/* wide-int.[cc|h] implements a class that efficiently performs
   mathematical operations on finite precision integers.  wide_ints
   are designed to be transient - they are not for long term storage
   of values.  There is tight integration between wide_ints and the
   other longer storage GCC representations (rtl and tree).

   The actual precision of a wide_int depends on the flavor.  There
   are three predefined flavors:

     1) wide_int (the default).  This flavor does the math in the
     precision of its input arguments.  It is assumed (and checked)
     that the precisions of the operands and results are consistent.
     This is the most efficient flavor.  It is not possible to examine
     bits above the precision that has been specified.  Because of
     this, the default flavor has semantics that are simple to
     understand and in general model the underlying hardware that the
     compiler is targeted for.
     This flavor must be used at the RTL level of gcc because there
     is, in general, not enough information in the RTL representation
     to extend a value beyond the precision specified in the mode.

     This flavor should also be used at the TREE and GIMPLE levels of
     the compiler except for the circumstances described in the
     descriptions of the other two flavors.

     The default wide_int representation does not contain any
     information inherent about signedness of the represented value,
     so it can be used to represent both signed and unsigned numbers.
     For operations where the results depend on signedness (full width
     multiply, division, shifts, comparisons, and operations that need
     overflow detected), the signedness must be specified separately.
     2) offset_int.  This is a fixed-precision integer that can hold
     any address offset, measured in either bits or bytes, with at
     least one extra sign bit.  At the moment the maximum address
     size GCC supports is 64 bits.  With 8-bit bytes and an extra
     sign bit, offset_int therefore needs to have at least 68 bits
     of precision.  We round this up to 128 bits for efficiency.
     Values of type T are converted to this precision by sign- or
     zero-extending them based on the signedness of T.

     The extra sign bit means that offset_int is effectively a signed
     128-bit integer, i.e. it behaves like int128_t.

     Since the values are logically signed, there is no need to
     distinguish between signed and unsigned operations.  Sign-sensitive
     comparison operators <, <=, > and >= are therefore supported.
     Shift operators << and >> are also supported, with >> being
     an _arithmetic_ right shift.

     [ Note that, even though offset_int is effectively int128_t,
       it can still be useful to use unsigned comparisons like
       wi::leu_p (a, b) as a more efficient short-hand for
       "a >= 0 && a <= b". ]
     3) widest_int.  This representation is an approximation of
     infinite precision math.  However, it is not really infinite
     precision math as in the GMP library.  It is really finite
     precision math where the precision is 4 times the size of the
     largest integer that the target port can represent.

     Like offset_int, widest_int is wider than all the values that
     it needs to represent, so the integers are logically signed.
     Sign-sensitive comparison operators <, <=, > and >= are supported,
     as are << and >>.
     There are several places in the GCC where this should/must be used:

       * Code that does induction variable optimizations.  This code
         works with induction variables of many different types at the
         same time.  Because of this, it ends up doing many different
         calculations where the operands are not compatible types.  The
         widest_int makes this easy, because it provides a field where
         nothing is lost when converting from any variable.

       * There are a small number of passes that currently use the
         widest_int that should use the default.  These should be
         changed.
   There are surprising features of offset_int and widest_int
   that the users should be careful about:

     1) Shifts and rotations are just weird.  You have to specify a
     precision in which the shift or rotate is to happen.  The bits
     above this precision are zeroed.  While this is what you
     want, it is clearly non-obvious.

     2) Larger precision math sometimes does not produce the same
     answer as would be expected for doing the math at the proper
     precision.  In particular, a multiply followed by a divide will
     produce a different answer if the first product is larger than
     what can be represented in the input precision.
   The offset_int and the widest_int flavors are more expensive
   than the default wide_int, so in addition to the caveats with these
   two, the default is the preferred representation.
   All three flavors of wide_int are represented as a vector of
   HOST_WIDE_INTs.  The default and widest_int vectors contain enough elements
   to hold a value of MAX_BITSIZE_MODE_ANY_INT bits.  offset_int contains only
   enough elements to hold ADDR_MAX_PRECISION bits.  The values are stored
   in the vector with the least significant HOST_BITS_PER_WIDE_INT bits
   in element 0.

   The default wide_int contains three fields: the vector (VAL),
   the precision and a length (LEN).  The length is the number of HWIs
   needed to represent the value.  widest_int and offset_int have a
   constant precision that cannot be changed, so they only store the
   VAL and LEN fields.
   Since most integers used in a compiler are small values, it is
   generally profitable to use a representation of the value that is
   as small as possible.  LEN is used to indicate the number of
   elements of the vector that are in use.  The numbers are stored as
   sign extended numbers as a means of compression.  Leading
   HOST_WIDE_INTs that contain strings of either -1 or 0 are removed
   as long as they can be reconstructed from the top bit that is being
   represented.

   The precision and length of a wide_int are always greater than 0.
   Any bits in a wide_int above the precision are sign-extended from the
   most significant bit.  For example, a 4-bit value 0x8 is represented as
   VAL = { 0xf...fff8 }.  However, as an optimization, we allow other integer
   constants to be represented with undefined bits above the precision.
   This allows INTEGER_CSTs to be pre-extended according to TYPE_SIGN,
   so that the INTEGER_CST representation can be used both in TYPE_PRECISION
   and in wider precisions.
   There are constructors to create the various forms of wide_int from
   trees, rtl and constants.  For trees you can simply say:

	     tree t = ...;
	     wide_int x = t;

   However, a little more syntax is required for rtl constants since
   they do not have an explicit precision.  To make an rtl into a
   wide_int, you have to pair it with a mode.  The canonical way to do
   this is with std::make_pair as in:

	     rtx r = ...
	     wide_int x = std::make_pair (r, mode);

   Similarly, a wide_int can only be constructed from a host value if
   the target precision is given explicitly, such as in:

	     wide_int x = wi::shwi (c, prec); // sign-extend C if necessary
	     wide_int y = wi::uhwi (c, prec); // zero-extend C if necessary
   However, offset_int and widest_int have an inherent precision and so
   can be initialized directly from a host value:

	     offset_int x = (int) c;          // sign-extend C
	     widest_int x = (unsigned int) c; // zero-extend C

   It is also possible to do arithmetic directly on trees, rtxes and
   constants.  For example:

	     wi::add (t1, t2);	  // add equal-sized INTEGER_CSTs t1 and t2
	     wi::add (t1, 1);	  // add 1 to INTEGER_CST t1
	     wi::add (r1, r2);	  // add equal-sized rtx constants r1 and r2
	     wi::lshift (1, 100); // 1 << 100 as a widest_int
   Many binary operations place restrictions on the combinations of inputs,
   using the following rules:

     - {tree, rtx, wide_int} op {tree, rtx, wide_int} -> wide_int
       The inputs must be the same precision.  The result is a wide_int
       of the same precision

     - {tree, rtx, wide_int} op (un)signed HOST_WIDE_INT -> wide_int
       (un)signed HOST_WIDE_INT op {tree, rtx, wide_int} -> wide_int
       The HOST_WIDE_INT is extended or truncated to the precision of
       the other input.  The result is a wide_int of the same precision

     - (un)signed HOST_WIDE_INT op (un)signed HOST_WIDE_INT -> widest_int
       The inputs are extended to widest_int precision and produce a
       widest_int result.

     - offset_int op offset_int -> offset_int
       offset_int op (un)signed HOST_WIDE_INT -> offset_int
       (un)signed HOST_WIDE_INT op offset_int -> offset_int

     - widest_int op widest_int -> widest_int
       widest_int op (un)signed HOST_WIDE_INT -> widest_int
       (un)signed HOST_WIDE_INT op widest_int -> widest_int

   Other combinations like:

     - widest_int op offset_int and
     - wide_int op offset_int

   are not allowed.  The inputs should instead be extended or truncated
   so that they match.

   The inputs to comparison functions like wi::eq_p and wi::lts_p
   follow the same compatibility rules, although their return types
   are different.  Unary functions on X produce the same result as
   a binary operation X + X.  Shift functions X op Y also produce
   the same result as X + X; the precision of the shift amount Y
   can be arbitrarily different from X.  */
/* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very
   early examination of the target's mode file.  The WIDE_INT_MAX_ELTS
   can accommodate at least 1 more bit so that unsigned numbers of that
   mode can be represented as a signed value.  Note that it is still
   possible to create fixed_wide_ints that have precisions greater than
   MAX_BITSIZE_MODE_ANY_INT.  This can be useful when representing a
   double-width multiplication result, for example.  */
#define WIDE_INT_MAX_ELTS \
  ((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT) / HOST_BITS_PER_WIDE_INT)
#define WIDE_INT_MAX_PRECISION (WIDE_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
/* This is the max size of any pointer on any machine.  It does not
   seem to be as easy to sniff this out of the machine description as
   it is for MAX_BITSIZE_MODE_ANY_INT since targets may support
   multiple address sizes and may have different address sizes for
   different address spaces.  However, currently the largest pointer
   on any platform is 64 bits.  When that changes, then it is likely
   that a target hook should be defined so that targets can make this
   value larger for those targets.  */
#define ADDR_MAX_BITSIZE 64
/* This is the internal precision used when doing any address
   arithmetic.  The '4' is really 3 + 1.  Three of the bits are for
   the number of extra bits needed to do bit addresses and the other bit
   is to allow everything to be signed without losing any precision.
   Then everything is rounded up to the next HWI for efficiency.  */
#define ADDR_MAX_PRECISION \
  ((ADDR_MAX_BITSIZE + 4 + HOST_BITS_PER_WIDE_INT - 1) \
   & ~(HOST_BITS_PER_WIDE_INT - 1))
/* The number of HWIs needed to store an offset_int.  */
#define OFFSET_INT_ELTS (ADDR_MAX_PRECISION / HOST_BITS_PER_WIDE_INT)
/* The type of result produced by a binary operation on types T1 and T2.
   Defined purely for brevity.  */
#define WI_BINARY_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::result_type

/* The type of result produced by T1 << T2.  Leads to substitution failure
   if the operation isn't supported.  Defined purely for brevity.  */
#define WI_SIGNED_SHIFT_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::signed_shift_result_type

/* The type of result produced by a signed binary predicate on types T1 and T2.
   This is bool if signed comparisons make sense for T1 and T2 and leads to
   substitution failure otherwise.  */
#define WI_SIGNED_BINARY_PREDICATE_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::signed_predicate_result

/* The type of result produced by a unary operation on type T.  */
#define WI_UNARY_RESULT(T) \
  typename wi::unary_traits <T>::result_type

/* Define a variable RESULT to hold the result of a binary operation on
   X and Y, which have types T1 and T2 respectively.  Define VAL to
   point to the blocks of RESULT.  Once the user of the macro has
   filled in VAL, it should call RESULT.set_len to set the number
   of initialized blocks.  */
#define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
  WI_BINARY_RESULT (T1, T2) RESULT = \
    wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
  HOST_WIDE_INT *VAL = RESULT.write_val ()

/* Similar for the result of a unary operation on X, which has type T.  */
#define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
  WI_UNARY_RESULT (T) RESULT = \
    wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
  HOST_WIDE_INT *VAL = RESULT.write_val ()
template <typename T> class generic_wide_int;
template <int N> class fixed_wide_int_storage;
class wide_int_storage;
/* An N-bit integer.  Until we can use typedef templates, use this instead.  */
#define FIXED_WIDE_INT(N) \
  generic_wide_int < fixed_wide_int_storage <N> >

typedef generic_wide_int <wide_int_storage> wide_int;
typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) offset_int;
typedef FIXED_WIDE_INT (WIDE_INT_MAX_PRECISION) widest_int;
template <bool SE>
struct wide_int_ref_storage;

typedef generic_wide_int <wide_int_ref_storage <false> > wide_int_ref;
/* This can be used instead of wide_int_ref if the referenced value is
   known to have type T.  It carries across properties of T's representation,
   such as whether excess upper bits in a HWI are defined, and can therefore
   help avoid redundant work.

   The macro could be replaced with a template typedef, once we're able
   to use them.  */
#define WIDE_INT_REF_FOR(T) \
  generic_wide_int \
    <wide_int_ref_storage <wi::int_traits <T>::is_sign_extended> >
/* Classifies an integer based on its precision.  */
enum precision_type
{
  /* The integer has both a precision and defined signedness.  This allows
     the integer to be converted to any width, since we know whether to fill
     any extra bits with zeros or signs.  */
  FLEXIBLE_PRECISION,

  /* The integer has a variable precision but no defined signedness.  */
  VAR_PRECISION,

  /* The integer has a constant precision (known at GCC compile time)
     and is signed.  */
  CONST_PRECISION
};
/* This class, which has no default implementation, is expected to
   provide the following members:

   static const enum precision_type precision_type;
     Classifies the type of T.

   static const unsigned int precision;
     Only defined if precision_type == CONST_PRECISION.  Specifies the
     precision of all integers of type T.

   static const bool host_dependent_precision;
     True if the precision of T depends (or can depend) on the host.

   static unsigned int get_precision (const T &x)
     Return the number of bits in X.

   static wi::storage_ref *decompose (HOST_WIDE_INT *scratch,
				      unsigned int precision, const T &x)
     Decompose X as a PRECISION-bit integer, returning the associated
     wi::storage_ref.  SCRATCH is available as scratch space if needed.
     The routine should assert that PRECISION is acceptable.  */
template <typename T> struct int_traits;
/* This class provides a single type, result_type, which specifies the
   type of integer produced by a binary operation whose inputs have
   types T1 and T2.  The definition should be symmetric.  */
template <typename T1, typename T2,
	  enum precision_type P1 = int_traits <T1>::precision_type,
	  enum precision_type P2 = int_traits <T2>::precision_type>
struct binary_traits;
/* The result of a unary operation on T is the same as the result of
   a binary operation on two values of type T.  */
template <typename T>
struct unary_traits : public binary_traits <T, T> {};
/* Specify the result type for each supported combination of binary
   inputs.  Note that CONST_PRECISION and VAR_PRECISION cannot be
   mixed, in order to give stronger type checking.  When both inputs
   are CONST_PRECISION, they must have the same precision.  */
template <typename T1, typename T2>
struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
{
  typedef widest_int result_type;
};
template <typename T1, typename T2>
struct binary_traits <T1, T2, FLEXIBLE_PRECISION, VAR_PRECISION>
{
  typedef wide_int result_type;
};
template <typename T1, typename T2>
struct binary_traits <T1, T2, FLEXIBLE_PRECISION, CONST_PRECISION>
{
  /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
     so as not to confuse gengtype.  */
  typedef generic_wide_int < fixed_wide_int_storage
			     <int_traits <T2>::precision> > result_type;
  typedef bool signed_predicate_result;
};
template <typename T1, typename T2>
struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
{
  typedef wide_int result_type;
};
template <typename T1, typename T2>
struct binary_traits <T1, T2, CONST_PRECISION, FLEXIBLE_PRECISION>
{
  /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
     so as not to confuse gengtype.  */
  typedef generic_wide_int < fixed_wide_int_storage
			     <int_traits <T1>::precision> > result_type;
  typedef result_type signed_shift_result_type;
  typedef bool signed_predicate_result;
};
template <typename T1, typename T2>
struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
{
  /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
     so as not to confuse gengtype.  */
  STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
  typedef generic_wide_int < fixed_wide_int_storage
			     <int_traits <T1>::precision> > result_type;
  typedef result_type signed_shift_result_type;
  typedef bool signed_predicate_result;
};
template <typename T1, typename T2>
struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
{
  typedef wide_int result_type;
};
/* Public functions for querying and operating on integers.  */

template <typename T>
unsigned int get_precision (const T &);

template <typename T1, typename T2>
unsigned int get_binary_precision (const T1 &, const T2 &);

template <typename T1, typename T2>
void copy (T1 &, const T2 &);
#define UNARY_PREDICATE \
  template <typename T> bool
#define UNARY_FUNCTION \
  template <typename T> WI_UNARY_RESULT (T)
#define BINARY_PREDICATE \
  template <typename T1, typename T2> bool
#define BINARY_FUNCTION \
  template <typename T1, typename T2> WI_BINARY_RESULT (T1, T2)
#define SHIFT_FUNCTION \
  template <typename T1, typename T2> WI_UNARY_RESULT (T1)
UNARY_PREDICATE fits_shwi_p (const T &);
UNARY_PREDICATE fits_uhwi_p (const T &);
UNARY_PREDICATE neg_p (const T &, signop = SIGNED);

template <typename T>
HOST_WIDE_INT sign_mask (const T &);
BINARY_PREDICATE eq_p (const T1 &, const T2 &);
BINARY_PREDICATE ne_p (const T1 &, const T2 &);
BINARY_PREDICATE lt_p (const T1 &, const T2 &, signop);
BINARY_PREDICATE lts_p (const T1 &, const T2 &);
BINARY_PREDICATE ltu_p (const T1 &, const T2 &);
BINARY_PREDICATE le_p (const T1 &, const T2 &, signop);
BINARY_PREDICATE les_p (const T1 &, const T2 &);
BINARY_PREDICATE leu_p (const T1 &, const T2 &);
BINARY_PREDICATE gt_p (const T1 &, const T2 &, signop);
BINARY_PREDICATE gts_p (const T1 &, const T2 &);
BINARY_PREDICATE gtu_p (const T1 &, const T2 &);
BINARY_PREDICATE ge_p (const T1 &, const T2 &, signop);
BINARY_PREDICATE ges_p (const T1 &, const T2 &);
BINARY_PREDICATE geu_p (const T1 &, const T2 &);
template <typename T1, typename T2>
int cmp (const T1 &, const T2 &, signop);

template <typename T1, typename T2>
int cmps (const T1 &, const T2 &);

template <typename T1, typename T2>
int cmpu (const T1 &, const T2 &);
UNARY_FUNCTION bit_not (const T &);
UNARY_FUNCTION neg (const T &);
UNARY_FUNCTION neg (const T &, bool *);
UNARY_FUNCTION abs (const T &);
UNARY_FUNCTION ext (const T &, unsigned int, signop);
UNARY_FUNCTION sext (const T &, unsigned int);
UNARY_FUNCTION zext (const T &, unsigned int);
UNARY_FUNCTION set_bit (const T &, unsigned int);
BINARY_FUNCTION min (const T1 &, const T2 &, signop);
BINARY_FUNCTION smin (const T1 &, const T2 &);
BINARY_FUNCTION umin (const T1 &, const T2 &);
BINARY_FUNCTION max (const T1 &, const T2 &, signop);
BINARY_FUNCTION smax (const T1 &, const T2 &);
BINARY_FUNCTION umax (const T1 &, const T2 &);
BINARY_FUNCTION bit_and (const T1 &, const T2 &);
BINARY_FUNCTION bit_and_not (const T1 &, const T2 &);
BINARY_FUNCTION bit_or (const T1 &, const T2 &);
BINARY_FUNCTION bit_or_not (const T1 &, const T2 &);
BINARY_FUNCTION bit_xor (const T1 &, const T2 &);
BINARY_FUNCTION add (const T1 &, const T2 &);
BINARY_FUNCTION add (const T1 &, const T2 &, signop, bool *);
BINARY_FUNCTION sub (const T1 &, const T2 &);
BINARY_FUNCTION sub (const T1 &, const T2 &, signop, bool *);
BINARY_FUNCTION mul (const T1 &, const T2 &);
BINARY_FUNCTION mul (const T1 &, const T2 &, signop, bool *);
BINARY_FUNCTION smul (const T1 &, const T2 &, bool *);
BINARY_FUNCTION umul (const T1 &, const T2 &, bool *);
BINARY_FUNCTION mul_high (const T1 &, const T2 &, signop);
BINARY_FUNCTION div_trunc (const T1 &, const T2 &, signop, bool * = 0);
BINARY_FUNCTION sdiv_trunc (const T1 &, const T2 &);
BINARY_FUNCTION udiv_trunc (const T1 &, const T2 &);
BINARY_FUNCTION div_floor (const T1 &, const T2 &, signop, bool * = 0);
BINARY_FUNCTION udiv_floor (const T1 &, const T2 &);
BINARY_FUNCTION sdiv_floor (const T1 &, const T2 &);
BINARY_FUNCTION div_ceil (const T1 &, const T2 &, signop, bool * = 0);
BINARY_FUNCTION div_round (const T1 &, const T2 &, signop, bool * = 0);
BINARY_FUNCTION divmod_trunc (const T1 &, const T2 &, signop,
			      WI_BINARY_RESULT (T1, T2) *);
BINARY_FUNCTION gcd (const T1 &, const T2 &, signop = UNSIGNED);
BINARY_FUNCTION mod_trunc (const T1 &, const T2 &, signop, bool * = 0);
BINARY_FUNCTION smod_trunc (const T1 &, const T2 &);
BINARY_FUNCTION umod_trunc (const T1 &, const T2 &);
BINARY_FUNCTION mod_floor (const T1 &, const T2 &, signop, bool * = 0);
BINARY_FUNCTION umod_floor (const T1 &, const T2 &);
BINARY_FUNCTION mod_ceil (const T1 &, const T2 &, signop, bool * = 0);
BINARY_FUNCTION mod_round (const T1 &, const T2 &, signop, bool * = 0);
template <typename T1, typename T2>
bool multiple_of_p (const T1 &, const T2 &, signop);

template <typename T1, typename T2>
bool multiple_of_p (const T1 &, const T2 &, signop,
		    WI_BINARY_RESULT (T1, T2) *);
SHIFT_FUNCTION lshift (const T1 &, const T2 &);
SHIFT_FUNCTION lrshift (const T1 &, const T2 &);
SHIFT_FUNCTION arshift (const T1 &, const T2 &);
SHIFT_FUNCTION rshift (const T1 &, const T2 &, signop sgn);
SHIFT_FUNCTION lrotate (const T1 &, const T2 &, unsigned int = 0);
SHIFT_FUNCTION rrotate (const T1 &, const T2 &, unsigned int = 0);
#undef SHIFT_FUNCTION
#undef BINARY_PREDICATE
#undef BINARY_FUNCTION
#undef UNARY_PREDICATE
#undef UNARY_FUNCTION
bool only_sign_bit_p (const wide_int_ref &, unsigned int);
bool only_sign_bit_p (const wide_int_ref &);
int clz (const wide_int_ref &);
int clrsb (const wide_int_ref &);
int ctz (const wide_int_ref &);
int exact_log2 (const wide_int_ref &);
int floor_log2 (const wide_int_ref &);
int ffs (const wide_int_ref &);
int popcount (const wide_int_ref &);
int parity (const wide_int_ref &);
template <typename T>
unsigned HOST_WIDE_INT extract_uhwi (const T &, unsigned int, unsigned int);

template <typename T>
unsigned int min_precision (const T &, signop);
/* Contains the components of a decomposed integer for easy, direct
   access.  */
struct storage_ref
{
  storage_ref (const HOST_WIDE_INT *, unsigned int, unsigned int);

  const HOST_WIDE_INT *val;
  unsigned int len;
  unsigned int precision;

  /* Provide enough trappings for this class to act as storage for
     generic_wide_int.  */
  unsigned int get_len () const;
  unsigned int get_precision () const;
  const HOST_WIDE_INT *get_val () const;
};
inline wi::storage_ref::storage_ref (const HOST_WIDE_INT *val_in,
				     unsigned int len_in,
				     unsigned int precision_in)
  : val (val_in), len (len_in), precision (precision_in)
{
}

inline unsigned int
wi::storage_ref::get_len () const
{
  return len;
}

inline unsigned int
wi::storage_ref::get_precision () const
{
  return precision;
}

inline const HOST_WIDE_INT *
wi::storage_ref::get_val () const
{
  return val;
}
/* This class defines an integer type using the storage provided by the
   template argument.  The storage class must provide the following
   functions:

   unsigned int get_precision () const
     Return the number of bits in the integer.

   HOST_WIDE_INT *get_val () const
     Return a pointer to the array of blocks that encodes the integer.

   unsigned int get_len () const
     Return the number of blocks in get_val ().  If this is smaller
     than the number of blocks implied by get_precision (), the
     remaining blocks are sign extensions of block get_len () - 1.

   Although not required by generic_wide_int itself, writable storage
   classes can also provide the following functions:

   HOST_WIDE_INT *write_val ()
     Get a modifiable version of get_val ()

   unsigned int set_len (unsigned int len)
     Set the value returned by get_len () to LEN.  */
template <typename storage>
class GTY(()) generic_wide_int : public storage
{
public:
  generic_wide_int ();

  template <typename T>
  generic_wide_int (const T &);

  template <typename T>
  generic_wide_int (const T &, unsigned int);

  /* Conversions.  */
  HOST_WIDE_INT to_shwi (unsigned int) const;
  HOST_WIDE_INT to_shwi () const;
  unsigned HOST_WIDE_INT to_uhwi (unsigned int) const;
  unsigned HOST_WIDE_INT to_uhwi () const;
  HOST_WIDE_INT to_short_addr () const;

  /* Public accessors for the interior of a wide int.  */
  HOST_WIDE_INT sign_mask () const;
  HOST_WIDE_INT elt (unsigned int) const;
  unsigned HOST_WIDE_INT ulow () const;
  unsigned HOST_WIDE_INT uhigh () const;
  HOST_WIDE_INT slow () const;
  HOST_WIDE_INT shigh () const;

  template <typename T>
  generic_wide_int &operator = (const T &);

#define BINARY_PREDICATE(OP, F) \
  template <typename T> \
  bool OP (const T &c) const { return wi::F (*this, c); }

#define UNARY_OPERATOR(OP, F) \
  WI_UNARY_RESULT (generic_wide_int) OP () const { return wi::F (*this); }

#define BINARY_OPERATOR(OP, F) \
  template <typename T> \
  WI_BINARY_RESULT (generic_wide_int, T) \
  OP (const T &c) const { return wi::F (*this, c); }

#define ASSIGNMENT_OPERATOR(OP, F) \
  template <typename T> \
  generic_wide_int &OP (const T &c) { return (*this = wi::F (*this, c)); }

/* Restrict these to cases where the shift operator is defined.  */
#define SHIFT_ASSIGNMENT_OPERATOR(OP, OP2) \
  template <typename T> \
  generic_wide_int &OP (const T &c) { return (*this = *this OP2 c); }

#define INCDEC_OPERATOR(OP, DELTA) \
  generic_wide_int &OP () { *this += DELTA; return *this; }

  UNARY_OPERATOR (operator ~, bit_not)
  UNARY_OPERATOR (operator -, neg)
  BINARY_PREDICATE (operator ==, eq_p)
  BINARY_PREDICATE (operator !=, ne_p)
  BINARY_OPERATOR (operator &, bit_and)
  BINARY_OPERATOR (and_not, bit_and_not)
  BINARY_OPERATOR (operator |, bit_or)
  BINARY_OPERATOR (or_not, bit_or_not)
  BINARY_OPERATOR (operator ^, bit_xor)
  BINARY_OPERATOR (operator +, add)
  BINARY_OPERATOR (operator -, sub)
  BINARY_OPERATOR (operator *, mul)
  ASSIGNMENT_OPERATOR (operator &=, bit_and)
  ASSIGNMENT_OPERATOR (operator |=, bit_or)
  ASSIGNMENT_OPERATOR (operator ^=, bit_xor)
  ASSIGNMENT_OPERATOR (operator +=, add)
  ASSIGNMENT_OPERATOR (operator -=, sub)
  ASSIGNMENT_OPERATOR (operator *=, mul)
  SHIFT_ASSIGNMENT_OPERATOR (operator <<=, <<)
  SHIFT_ASSIGNMENT_OPERATOR (operator >>=, >>)
  INCDEC_OPERATOR (operator ++, 1)
  INCDEC_OPERATOR (operator --, -1)

#undef BINARY_PREDICATE
#undef UNARY_OPERATOR
#undef BINARY_OPERATOR
#undef SHIFT_ASSIGNMENT_OPERATOR
#undef ASSIGNMENT_OPERATOR
#undef INCDEC_OPERATOR

  /* Debugging functions.  */
  void dump () const;

  static const bool is_sign_extended
    = wi::int_traits <generic_wide_int <storage> >::is_sign_extended;
};
template <typename storage>
inline generic_wide_int <storage>::generic_wide_int () {}

template <typename storage>
template <typename T>
inline generic_wide_int <storage>::generic_wide_int (const T &x)
  : storage (x)
{
}

template <typename storage>
template <typename T>
inline generic_wide_int <storage>::generic_wide_int (const T &x,
						     unsigned int precision)
  : storage (x, precision)
{
}
/* Return THIS as a signed HOST_WIDE_INT, sign-extending from PRECISION.
   If THIS does not fit in PRECISION, the information is lost.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::to_shwi (unsigned int precision) const
{
  if (precision < HOST_BITS_PER_WIDE_INT)
    return sext_hwi (this->get_val ()[0], precision);
  else
    return this->get_val ()[0];
}
/* Return THIS as a signed HOST_WIDE_INT, in its natural precision.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::to_shwi () const
{
  if (is_sign_extended)
    return this->get_val ()[0];
  else
    return to_shwi (this->get_precision ());
}
/* Return THIS as an unsigned HOST_WIDE_INT, zero-extending from
   PRECISION.  If THIS does not fit in PRECISION, the information
   is lost.  */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::to_uhwi (unsigned int precision) const
{
  if (precision < HOST_BITS_PER_WIDE_INT)
    return zext_hwi (this->get_val ()[0], precision);
  else
    return this->get_val ()[0];
}
/* Return THIS as an unsigned HOST_WIDE_INT, in its natural precision.  */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::to_uhwi () const
{
  return to_uhwi (this->get_precision ());
}
/* TODO: The compiler is half converted from using HOST_WIDE_INT to
   represent addresses to using offset_int to represent addresses.
   We use to_short_addr at the interface from new code to old,
   unconverted code.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::to_short_addr () const
{
  return this->get_val ()[0];
}
/* Return the implicit value of blocks above get_len ().  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::sign_mask () const
{
  unsigned int len = this->get_len ();
  unsigned HOST_WIDE_INT high = this->get_val ()[len - 1];
  if (!is_sign_extended)
    {
      /* Move the sign bit of the value into the top bit of HIGH by
	 shifting out any undefined bits above the precision.  */
      unsigned int precision = this->get_precision ();
      int excess = len * HOST_BITS_PER_WIDE_INT - precision;
      if (excess > 0)
	high <<= excess;
    }
  return (HOST_WIDE_INT) (high) < 0 ? -1 : 0;
}
/* Return the signed value of the least-significant explicitly-encoded
   block.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::slow () const
{
  return this->get_val ()[0];
}

/* Return the signed value of the most-significant explicitly-encoded
   block.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::shigh () const
{
  return this->get_val ()[this->get_len () - 1];
}
/* Return the unsigned value of the least-significant
   explicitly-encoded block.  */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::ulow () const
{
  return this->get_val ()[0];
}

/* Return the unsigned value of the most-significant
   explicitly-encoded block.  */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::uhigh () const
{
  return this->get_val ()[this->get_len () - 1];
}
/* Return block I, which might be implicitly or explicitly encoded.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::elt (unsigned int i) const
{
  if (i >= this->get_len ())
    return sign_mask ();
  else
    return this->get_val ()[i];
}
template <typename storage>
template <typename T>
inline generic_wide_int <storage> &
generic_wide_int <storage>::operator = (const T &x)
{
  storage::operator = (x);
  return *this;
}
/* Dump the contents of the integer to stderr, for debugging.  */
template <typename storage>
inline void
generic_wide_int <storage>::dump () const
{
  unsigned int len = this->get_len ();
  const HOST_WIDE_INT *val = this->get_val ();
  unsigned int precision = this->get_precision ();
  fprintf (stderr, "[");
  if (len * HOST_BITS_PER_WIDE_INT < precision)
    fprintf (stderr, "...,");
  for (unsigned int i = 0; i < len - 1; ++i)
    fprintf (stderr, HOST_WIDE_INT_PRINT_HEX ",", val[len - 1 - i]);
  fprintf (stderr, HOST_WIDE_INT_PRINT_HEX "], precision = %d\n",
           val[0], precision);
}
namespace wi
{
  template <typename storage>
  struct int_traits < generic_wide_int <storage> >
    : public wi::int_traits <storage>
  {
    static unsigned int get_precision (const generic_wide_int <storage> &);
    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
                                      const generic_wide_int <storage> &);
  };
}
template <typename storage>
inline unsigned int
wi::int_traits < generic_wide_int <storage> >::
get_precision (const generic_wide_int <storage> &x)
{
  return x.get_precision ();
}

template <typename storage>
inline wi::storage_ref
wi::int_traits < generic_wide_int <storage> >::
decompose (HOST_WIDE_INT *, unsigned int precision,
           const generic_wide_int <storage> &x)
{
  gcc_checking_assert (precision == x.get_precision ());
  return wi::storage_ref (x.get_val (), x.get_len (), precision);
}
/* Provide the storage for a wide_int_ref.  This acts like a read-only
   wide_int, with the optimization that VAL is normally a pointer to
   another integer's storage, so that no array copy is needed.  */
template <bool SE>
struct wide_int_ref_storage : public wi::storage_ref
{
private:
  /* Scratch space that can be used when decomposing the original integer.
     It must live as long as this object.  */
  HOST_WIDE_INT scratch[2];

public:
  wide_int_ref_storage (const wi::storage_ref &);

  template <typename T>
  wide_int_ref_storage (const T &);

  template <typename T>
  wide_int_ref_storage (const T &, unsigned int);
};
/* Create a reference from an existing reference.  */
template <bool SE>
inline wide_int_ref_storage <SE>::
wide_int_ref_storage (const wi::storage_ref &x)
  : storage_ref (x)
{}

/* Create a reference to integer X in its natural precision.  Note
   that the natural precision is host-dependent for primitive
   types.  */
template <bool SE>
template <typename T>
inline wide_int_ref_storage <SE>::wide_int_ref_storage (const T &x)
  : storage_ref (wi::int_traits <T>::decompose (scratch,
                                                wi::get_precision (x), x))
{
}
/* Create a reference to integer X in precision PRECISION.  */
template <bool SE>
template <typename T>
inline wide_int_ref_storage <SE>::wide_int_ref_storage (const T &x,
                                                        unsigned int precision)
  : storage_ref (wi::int_traits <T>::decompose (scratch, precision, x))
{
}
namespace wi
{
  template <bool SE>
  struct int_traits <wide_int_ref_storage <SE> >
  {
    static const enum precision_type precision_type = VAR_PRECISION;
    /* wi::storage_ref can be a reference to a primitive type,
       so this is the conservatively-correct setting.  */
    static const bool host_dependent_precision = true;
    static const bool is_sign_extended = SE;
  };
}
namespace wi
{
  unsigned int force_to_size (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                              unsigned int, unsigned int, unsigned int,
                              signop sgn);
  unsigned int from_array (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                           unsigned int, unsigned int, bool = true);
}
/* The storage used by wide_int.  */
class GTY(()) wide_int_storage
{
private:
  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
  unsigned int len;
  unsigned int precision;

public:
  wide_int_storage ();
  template <typename T>
  wide_int_storage (const T &);

  /* The standard generic_wide_int storage methods.  */
  unsigned int get_precision () const;
  const HOST_WIDE_INT *get_val () const;
  unsigned int get_len () const;
  HOST_WIDE_INT *write_val ();
  void set_len (unsigned int, bool = false);

  static wide_int from (const wide_int_ref &, unsigned int, signop);
  static wide_int from_array (const HOST_WIDE_INT *, unsigned int,
                              unsigned int, bool = true);
  static wide_int create (unsigned int);

  /* FIXME: target-dependent, so should disappear.  */
  wide_int bswap () const;
};
namespace wi
{
  template <>
  struct int_traits <wide_int_storage>
  {
    static const enum precision_type precision_type = VAR_PRECISION;
    /* Guaranteed by a static assert in the wide_int_storage constructor.  */
    static const bool host_dependent_precision = false;
    static const bool is_sign_extended = true;
    template <typename T1, typename T2>
    static wide_int get_binary_result (const T1 &, const T2 &);
  };
}
inline wide_int_storage::wide_int_storage () {}

/* Initialize the storage from integer X, in its natural precision.
   Note that we do not allow integers with host-dependent precision
   to become wide_ints; wide_ints must always be logically independent
   of the host.  */
template <typename T>
inline wide_int_storage::wide_int_storage (const T &x)
{
  { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
  { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
  WIDE_INT_REF_FOR (T) xi (x);
  precision = xi.precision;
  wi::copy (*this, xi);
}
inline unsigned int
wide_int_storage::get_precision () const
{
  return precision;
}

inline const HOST_WIDE_INT *
wide_int_storage::get_val () const
{
  return val;
}

inline unsigned int
wide_int_storage::get_len () const
{
  return len;
}

inline HOST_WIDE_INT *
wide_int_storage::write_val ()
{
  return val;
}

inline void
wide_int_storage::set_len (unsigned int l, bool is_sign_extended)
{
  len = l;
  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
    val[len - 1] = sext_hwi (val[len - 1],
                             precision % HOST_BITS_PER_WIDE_INT);
}
/* Treat X as having signedness SGN and convert it to a PRECISION-bit
   number.  */
inline wide_int
wide_int_storage::from (const wide_int_ref &x, unsigned int precision,
                        signop sgn)
{
  wide_int result = wide_int::create (precision);
  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
                                     x.precision, precision, sgn));
  return result;
}
/* Create a wide_int from the explicit block encoding given by VAL and
   LEN.  PRECISION is the precision of the integer.  NEED_CANON_P is
   true if the encoding may have redundant trailing blocks.  */
inline wide_int
wide_int_storage::from_array (const HOST_WIDE_INT *val, unsigned int len,
                              unsigned int precision, bool need_canon_p)
{
  wide_int result = wide_int::create (precision);
  result.set_len (wi::from_array (result.write_val (), val, len, precision,
                                  need_canon_p));
  return result;
}
/* Return an uninitialized wide_int with precision PRECISION.  */
inline wide_int
wide_int_storage::create (unsigned int precision)
{
  wide_int x;
  x.precision = precision;
  return x;
}
template <typename T1, typename T2>
inline wide_int
wi::int_traits <wide_int_storage>::get_binary_result (const T1 &x, const T2 &y)
{
  /* This shouldn't be used for two flexible-precision inputs.  */
  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
                 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
    return wide_int::create (wi::get_precision (y));
  else
    return wide_int::create (wi::get_precision (x));
}
/* The storage used by FIXED_WIDE_INT (N).  */
template <int N>
class GTY(()) fixed_wide_int_storage
{
private:
  HOST_WIDE_INT val[(N + HOST_BITS_PER_WIDE_INT + 1) / HOST_BITS_PER_WIDE_INT];
  unsigned int len;

public:
  fixed_wide_int_storage ();
  template <typename T>
  fixed_wide_int_storage (const T &);

  /* The standard generic_wide_int storage methods.  */
  unsigned int get_precision () const;
  const HOST_WIDE_INT *get_val () const;
  unsigned int get_len () const;
  HOST_WIDE_INT *write_val ();
  void set_len (unsigned int, bool = false);

  static FIXED_WIDE_INT (N) from (const wide_int_ref &, signop);
  static FIXED_WIDE_INT (N) from_array (const HOST_WIDE_INT *, unsigned int,
                                        bool = true);
};
namespace wi
{
  template <int N>
  struct int_traits < fixed_wide_int_storage <N> >
  {
    static const enum precision_type precision_type = CONST_PRECISION;
    static const bool host_dependent_precision = false;
    static const bool is_sign_extended = true;
    static const unsigned int precision = N;
    template <typename T1, typename T2>
    static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
  };
}

template <int N>
inline fixed_wide_int_storage <N>::fixed_wide_int_storage () {}
/* Initialize the storage from integer X, in precision N.  */
template <int N>
template <typename T>
inline fixed_wide_int_storage <N>::fixed_wide_int_storage (const T &x)
{
  /* Check for type compatibility.  We don't want to initialize a
     fixed-width integer from something like a wide_int.  */
  WI_BINARY_RESULT (T, FIXED_WIDE_INT (N)) *assertion ATTRIBUTE_UNUSED;
  wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N));
}
template <int N>
inline unsigned int
fixed_wide_int_storage <N>::get_precision () const
{
  return N;
}

template <int N>
inline const HOST_WIDE_INT *
fixed_wide_int_storage <N>::get_val () const
{
  return val;
}

template <int N>
inline unsigned int
fixed_wide_int_storage <N>::get_len () const
{
  return len;
}

template <int N>
inline HOST_WIDE_INT *
fixed_wide_int_storage <N>::write_val ()
{
  return val;
}

template <int N>
inline void
fixed_wide_int_storage <N>::set_len (unsigned int l, bool)
{
  len = l;
  /* There are no excess bits in val[len - 1].  */
  STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
}
/* Treat X as having signedness SGN and convert it to an N-bit number.  */
template <int N>
inline FIXED_WIDE_INT (N)
fixed_wide_int_storage <N>::from (const wide_int_ref &x, signop sgn)
{
  FIXED_WIDE_INT (N) result;
  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
                                     x.precision, N, sgn));
  return result;
}
/* Create a FIXED_WIDE_INT (N) from the explicit block encoding given by
   VAL and LEN.  NEED_CANON_P is true if the encoding may have redundant
   trailing blocks.  */
template <int N>
inline FIXED_WIDE_INT (N)
fixed_wide_int_storage <N>::from_array (const HOST_WIDE_INT *val,
                                        unsigned int len,
                                        bool need_canon_p)
{
  FIXED_WIDE_INT (N) result;
  result.set_len (wi::from_array (result.write_val (), val, len,
                                  N, need_canon_p));
  return result;
}
template <int N>
template <typename T1, typename T2>
inline FIXED_WIDE_INT (N)
wi::int_traits < fixed_wide_int_storage <N> >::
get_binary_result (const T1 &, const T2 &)
{
  return FIXED_WIDE_INT (N) ();
}
/* A reference to one element of a trailing_wide_ints structure.  */
class trailing_wide_int_storage
{
private:
  /* The precision of the integer, which is a fixed property of the
     parent trailing_wide_ints.  */
  unsigned int m_precision;

  /* A pointer to the length field.  */
  unsigned char *m_len;

  /* A pointer to the HWI array.  There are enough elements to hold all
     values of precision M_PRECISION.  */
  HOST_WIDE_INT *m_val;

public:
  trailing_wide_int_storage (unsigned int, unsigned char *, HOST_WIDE_INT *);

  /* The standard generic_wide_int storage methods.  */
  unsigned int get_len () const;
  unsigned int get_precision () const;
  const HOST_WIDE_INT *get_val () const;
  HOST_WIDE_INT *write_val ();
  void set_len (unsigned int, bool = false);

  template <typename T>
  trailing_wide_int_storage &operator = (const T &);
};

typedef generic_wide_int <trailing_wide_int_storage> trailing_wide_int;

/* trailing_wide_int behaves like a wide_int.  */
namespace wi
{
  template <>
  struct int_traits <trailing_wide_int_storage>
    : public int_traits <wide_int_storage> {};
}
/* An array of N wide_int-like objects that can be put at the end of
   a variable-sized structure.  Use extra_size to calculate how many
   bytes beyond the sizeof need to be allocated.  Use set_precision
   to initialize the structure.  */
template <int N>
class GTY(()) trailing_wide_ints
{
private:
  /* The shared precision of each number.  */
  unsigned short m_precision;

  /* The shared maximum length of each number.  */
  unsigned char m_max_len;

  /* The current length of each number.  */
  unsigned char m_len[N];

  /* The variable-length part of the structure, which always contains
     at least one HWI.  Element I starts at index I * M_MAX_LEN.  */
  HOST_WIDE_INT m_val[1];

public:
  void set_precision (unsigned int);
  trailing_wide_int operator [] (unsigned int);
  static size_t extra_size (unsigned int);
};
inline trailing_wide_int_storage::
trailing_wide_int_storage (unsigned int precision, unsigned char *len,
                           HOST_WIDE_INT *val)
  : m_precision (precision), m_len (len), m_val (val)
{
}

inline unsigned int
trailing_wide_int_storage::get_len () const
{
  return *m_len;
}

inline unsigned int
trailing_wide_int_storage::get_precision () const
{
  return m_precision;
}

inline const HOST_WIDE_INT *
trailing_wide_int_storage::get_val () const
{
  return m_val;
}

inline HOST_WIDE_INT *
trailing_wide_int_storage::write_val ()
{
  return m_val;
}

inline void
trailing_wide_int_storage::set_len (unsigned int len, bool is_sign_extended)
{
  *m_len = len;
  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > m_precision)
    m_val[len - 1] = sext_hwi (m_val[len - 1],
                               m_precision % HOST_BITS_PER_WIDE_INT);
}
template <typename T>
inline trailing_wide_int_storage &
trailing_wide_int_storage::operator = (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x, m_precision);
  wi::copy (*this, xi);
  return *this;
}
/* Initialize the structure and record that all elements have precision
   PRECISION.  */
template <int N>
inline void
trailing_wide_ints <N>::set_precision (unsigned int precision)
{
  m_precision = precision;
  m_max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
               / HOST_BITS_PER_WIDE_INT);
}
/* Return a reference to element INDEX.  */
template <int N>
inline trailing_wide_int
trailing_wide_ints <N>::operator [] (unsigned int index)
{
  return trailing_wide_int_storage (m_precision, &m_len[index],
                                    &m_val[index * m_max_len]);
}
/* Return how many extra bytes need to be added to the end of the structure
   in order to handle N wide_ints of precision PRECISION.  */
template <int N>
inline size_t
trailing_wide_ints <N>::extra_size (unsigned int precision)
{
  unsigned int max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
                          / HOST_BITS_PER_WIDE_INT);
  return (N * max_len - 1) * sizeof (HOST_WIDE_INT);
}
/* This macro is used in structures that end with a trailing_wide_ints field
   called FIELD.  It declares get_NAME () and set_NAME () methods to access
   element I of FIELD.  */
#define TRAILING_WIDE_INT_ACCESSOR(NAME, FIELD, I) \
  trailing_wide_int get_##NAME () { return FIELD[I]; } \
  template <typename T> void set_##NAME (const T &x) { FIELD[I] = x; }
namespace wi
{
  /* Implementation of int_traits for primitive integer types like "int".  */
  template <typename T, bool signed_p>
  struct primitive_int_traits
  {
    static const enum precision_type precision_type = FLEXIBLE_PRECISION;
    static const bool host_dependent_precision = true;
    static const bool is_sign_extended = true;
    static unsigned int get_precision (T);
    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int, T);
  };
}
template <typename T, bool signed_p>
inline unsigned int
wi::primitive_int_traits <T, signed_p>::get_precision (T)
{
  return sizeof (T) * CHAR_BIT;
}

template <typename T, bool signed_p>
inline wi::storage_ref
wi::primitive_int_traits <T, signed_p>::decompose (HOST_WIDE_INT *scratch,
                                                   unsigned int precision, T x)
{
  scratch[0] = x;
  if (signed_p || scratch[0] >= 0 || precision <= HOST_BITS_PER_WIDE_INT)
    return wi::storage_ref (scratch, 1, precision);
  scratch[1] = 0;
  return wi::storage_ref (scratch, 2, precision);
}
/* Allow primitive C types to be used in wi:: routines.  */
namespace wi
{
  template <>
  struct int_traits <int>
    : public primitive_int_traits <int, true> {};

  template <>
  struct int_traits <unsigned int>
    : public primitive_int_traits <unsigned int, false> {};

  template <>
  struct int_traits <long>
    : public primitive_int_traits <long, true> {};

  template <>
  struct int_traits <unsigned long>
    : public primitive_int_traits <unsigned long, false> {};

#if defined HAVE_LONG_LONG
  template <>
  struct int_traits <long long>
    : public primitive_int_traits <long long, true> {};

  template <>
  struct int_traits <unsigned long long>
    : public primitive_int_traits <unsigned long long, false> {};
#endif
}
namespace wi
{
  /* Stores HWI-sized integer VAL, treating it as having signedness SGN
     and precision PRECISION.  */
  struct hwi_with_prec
  {
    hwi_with_prec (HOST_WIDE_INT, unsigned int, signop);
    HOST_WIDE_INT val;
    unsigned int precision;
    signop sgn;
  };

  hwi_with_prec shwi (HOST_WIDE_INT, unsigned int);
  hwi_with_prec uhwi (unsigned HOST_WIDE_INT, unsigned int);

  hwi_with_prec minus_one (unsigned int);
  hwi_with_prec zero (unsigned int);
  hwi_with_prec one (unsigned int);
  hwi_with_prec two (unsigned int);
}
inline wi::hwi_with_prec::hwi_with_prec (HOST_WIDE_INT v, unsigned int p,
                                         signop s)
  : val (v), precision (p), sgn (s)
{
}

/* Return a signed integer that has value VAL and precision PRECISION.  */
inline wi::hwi_with_prec
wi::shwi (HOST_WIDE_INT val, unsigned int precision)
{
  return hwi_with_prec (val, precision, SIGNED);
}

/* Return an unsigned integer that has value VAL and precision PRECISION.  */
inline wi::hwi_with_prec
wi::uhwi (unsigned HOST_WIDE_INT val, unsigned int precision)
{
  return hwi_with_prec (val, precision, UNSIGNED);
}

/* Return a wide int of -1 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::minus_one (unsigned int precision)
{
  return wi::shwi (-1, precision);
}

/* Return a wide int of 0 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::zero (unsigned int precision)
{
  return wi::shwi (0, precision);
}

/* Return a wide int of 1 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::one (unsigned int precision)
{
  return wi::shwi (1, precision);
}

/* Return a wide int of 2 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::two (unsigned int precision)
{
  return wi::shwi (2, precision);
}
namespace wi
{
  template <>
  struct int_traits <wi::hwi_with_prec>
  {
    static const enum precision_type precision_type = VAR_PRECISION;
    /* hwi_with_prec has an explicitly-given precision, rather than the
       precision of HOST_WIDE_INT.  */
    static const bool host_dependent_precision = false;
    static const bool is_sign_extended = true;
    static unsigned int get_precision (const wi::hwi_with_prec &);
    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
                                      const wi::hwi_with_prec &);
  };
}

inline unsigned int
wi::int_traits <wi::hwi_with_prec>::get_precision (const wi::hwi_with_prec &x)
{
  return x.precision;
}
inline wi::storage_ref
wi::int_traits <wi::hwi_with_prec>::
decompose (HOST_WIDE_INT *scratch, unsigned int precision,
           const wi::hwi_with_prec &x)
{
  gcc_checking_assert (precision == x.precision);
  scratch[0] = x.val;
  if (x.sgn == SIGNED || x.val >= 0 || precision <= HOST_BITS_PER_WIDE_INT)
    return wi::storage_ref (scratch, 1, precision);
  scratch[1] = 0;
  return wi::storage_ref (scratch, 2, precision);
}
/* Private functions for handling large cases out of line.  They take
   individual length and array parameters because that is cheaper for
   the inline caller than constructing an object on the stack and
   passing a reference to it.  (Although many callers use wide_int_refs,
   we generally want those to be removed by SRA.)  */
namespace wi
{
  bool eq_p_large (const HOST_WIDE_INT *, unsigned int,
                   const HOST_WIDE_INT *, unsigned int, unsigned int);
  bool lts_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
                    const HOST_WIDE_INT *, unsigned int);
  bool ltu_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
                    const HOST_WIDE_INT *, unsigned int);
  int cmps_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
                  const HOST_WIDE_INT *, unsigned int);
  int cmpu_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
                  const HOST_WIDE_INT *, unsigned int);
  unsigned int sext_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                           unsigned int, unsigned int, unsigned int);
  unsigned int zext_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                           unsigned int, unsigned int, unsigned int);
  unsigned int set_bit_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                              unsigned int, unsigned int, unsigned int);
  unsigned int lshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                             unsigned int, unsigned int, unsigned int);
  unsigned int lrshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                              unsigned int, unsigned int, unsigned int,
                              unsigned int);
  unsigned int arshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                              unsigned int, unsigned int, unsigned int,
                              unsigned int);
  unsigned int and_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
                          const HOST_WIDE_INT *, unsigned int, unsigned int);
  unsigned int and_not_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                              unsigned int, const HOST_WIDE_INT *,
                              unsigned int, unsigned int);
  unsigned int or_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
                         const HOST_WIDE_INT *, unsigned int, unsigned int);
  unsigned int or_not_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                             unsigned int, const HOST_WIDE_INT *,
                             unsigned int, unsigned int);
  unsigned int xor_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
                          const HOST_WIDE_INT *, unsigned int, unsigned int);
  unsigned int add_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
                          const HOST_WIDE_INT *, unsigned int, unsigned int,
                          signop, bool *);
  unsigned int sub_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
                          const HOST_WIDE_INT *, unsigned int, unsigned int,
                          signop, bool *);
  unsigned int mul_internal (HOST_WIDE_INT *, const HOST_WIDE_INT *,
                             unsigned int, const HOST_WIDE_INT *,
                             unsigned int, unsigned int, signop, bool *,
                             bool);
  unsigned int divmod_internal (HOST_WIDE_INT *, unsigned int *,
                                HOST_WIDE_INT *, const HOST_WIDE_INT *,
                                unsigned int, unsigned int,
                                const HOST_WIDE_INT *,
                                unsigned int, unsigned int,
                                signop, bool *);
}
/* Return the number of bits that integer X can hold.  */
template <typename T>
inline unsigned int
wi::get_precision (const T &x)
{
  return wi::int_traits <T>::get_precision (x);
}

/* Return the number of bits that the result of a binary operation can
   hold when the input operands are X and Y.  */
template <typename T1, typename T2>
inline unsigned int
wi::get_binary_precision (const T1 &x, const T2 &y)
{
  return get_precision (wi::int_traits <WI_BINARY_RESULT (T1, T2)>::
                        get_binary_result (x, y));
}
/* Copy the contents of Y to X, but keeping X's current precision.  */
template <typename T1, typename T2>
inline void
wi::copy (T1 &x, const T2 &y)
{
  HOST_WIDE_INT *xval = x.write_val ();
  const HOST_WIDE_INT *yval = y.get_val ();
  unsigned int len = y.get_len ();
  unsigned int i = 0;
  do
    xval[i] = yval[i];
  while (++i < len);
  x.set_len (len, y.is_sign_extended);
}
/* Return true if X fits in a HOST_WIDE_INT with no loss of precision.  */
template <typename T>
inline bool
wi::fits_shwi_p (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x);
  return xi.len == 1;
}

/* Return true if X fits in an unsigned HOST_WIDE_INT with no loss of
   precision.  */
template <typename T>
inline bool
wi::fits_uhwi_p (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x);
  if (xi.precision <= HOST_BITS_PER_WIDE_INT)
    return true;
  if (xi.len == 1)
    return xi.slow () >= 0;
  return xi.len == 2 && xi.uhigh () == 0;
}
/* Return true if X is negative based on the interpretation of SGN.
   For UNSIGNED, this is always false.  */
template <typename T>
inline bool
wi::neg_p (const T &x, signop sgn)
{
  WIDE_INT_REF_FOR (T) xi (x);
  if (sgn == UNSIGNED)
    return false;
  return xi.sign_mask () < 0;
}

/* Return -1 if the top bit of X is set and 0 if the top bit is clear.  */
template <typename T>
inline HOST_WIDE_INT
wi::sign_mask (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x);
  return xi.sign_mask ();
}
/* Return true if X == Y.  X and Y must be binary-compatible.  */
template <typename T1, typename T2>
inline bool
wi::eq_p (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (xi.is_sign_extended && yi.is_sign_extended)
    {
      /* This case reduces to array equality.  */
      if (xi.len != yi.len)
        return false;
      unsigned int i = 0;
      do
        if (xi.val[i] != yi.val[i])
          return false;
      while (++i != xi.len);
      return true;
    }
  if (__builtin_expect (yi.len == 1, true))
    {
      /* XI is only equal to YI if it too has a single HWI.  */
      if (xi.len != 1)
        return false;
      /* Excess bits in xi.val[0] will be signs or zeros, so comparisons
         with 0 are simple.  */
      if (STATIC_CONSTANT_P (yi.val[0] == 0))
        return xi.val[0] == 0;
      /* Otherwise flush out any excess bits first.  */
      unsigned HOST_WIDE_INT diff = xi.val[0] ^ yi.val[0];
      int excess = HOST_BITS_PER_WIDE_INT - precision;
      if (excess > 0)
        diff <<= excess;
      return diff == 0;
    }
  return eq_p_large (xi.val, xi.len, yi.val, yi.len, precision);
}
/* Return true if X != Y.  X and Y must be binary-compatible.  */
template <typename T1, typename T2>
inline bool
wi::ne_p (const T1 &x, const T2 &y)
{
  return !eq_p (x, y);
}
/* Return true if X < Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::lts_p (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  /* We optimize x < y, where y is 64 or fewer bits.  */
  if (wi::fits_shwi_p (yi))
    {
      /* Make lts_p (x, 0) as efficient as wi::neg_p (x).  */
      if (STATIC_CONSTANT_P (yi.val[0] == 0))
        return neg_p (xi);
      /* If x fits directly into a shwi, we can compare directly.  */
      if (wi::fits_shwi_p (xi))
        return xi.to_shwi () < yi.to_shwi ();
      /* If x doesn't fit and is negative, then it must be more
         negative than any value in y, and hence smaller than y.  */
      if (neg_p (xi))
        return true;
      /* If x is positive, then it must be larger than any value in y,
         and hence greater than y.  */
      return false;
    }
  /* Optimize the opposite case, if it can be detected at compile time.  */
  if (STATIC_CONSTANT_P (xi.len == 1))
    /* If YI is negative it is lower than the least HWI.
       If YI is positive it is greater than the greatest HWI.  */
    return neg_p (yi) ? false : true;
  return lts_p_large (xi.val, xi.len, precision, yi.val, yi.len);
}
/* Return true if X < Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::ltu_p (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  /* Optimize comparisons with constants.  */
  if (STATIC_CONSTANT_P (yi.len == 1 && yi.val[0] >= 0))
    return xi.len == 1 && xi.to_uhwi () < (unsigned HOST_WIDE_INT) yi.val[0];
  if (STATIC_CONSTANT_P (xi.len == 1 && xi.val[0] >= 0))
    return yi.len != 1 || yi.to_uhwi () > (unsigned HOST_WIDE_INT) xi.val[0];
  /* Optimize the case of two HWIs.  The HWIs are implicitly sign-extended
     for precisions greater than HOST_BITS_PER_WIDE_INT, but sign-extending
     both values does not change the result.  */
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
      unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
      return xl < yl;
    }
  return ltu_p_large (xi.val, xi.len, precision, yi.val, yi.len);
}
/* Return true if X < Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::lt_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return lts_p (x, y);
  else
    return ltu_p (x, y);
}

/* Return true if X <= Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::les_p (const T1 &x, const T2 &y)
{
  return !lts_p (y, x);
}

/* Return true if X <= Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::leu_p (const T1 &x, const T2 &y)
{
  return !ltu_p (y, x);
}

/* Return true if X <= Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::le_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return les_p (x, y);
  else
    return leu_p (x, y);
}

/* Return true if X > Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::gts_p (const T1 &x, const T2 &y)
{
  return lts_p (y, x);
}

/* Return true if X > Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::gtu_p (const T1 &x, const T2 &y)
{
  return ltu_p (y, x);
}

/* Return true if X > Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::gt_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return gts_p (x, y);
  else
    return gtu_p (x, y);
}

/* Return true if X >= Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::ges_p (const T1 &x, const T2 &y)
{
  return !lts_p (x, y);
}

/* Return true if X >= Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::geu_p (const T1 &x, const T2 &y)
{
  return !ltu_p (x, y);
}

/* Return true if X >= Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::ge_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return ges_p (x, y);
  else
    return geu_p (x, y);
}
/* Return -1 if X < Y, 0 if X == Y and 1 if X > Y.  Treat both X and Y
   as signed values.  */
template <typename T1, typename T2>
inline int
wi::cmps (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (wi::fits_shwi_p (yi))
    {
      /* Special case for comparisons with 0.  */
      if (STATIC_CONSTANT_P (yi.val[0] == 0))
        return neg_p (xi) ? -1 : !(xi.len == 1 && xi.val[0] == 0);
      /* If x fits into a signed HWI, we can compare directly.  */
      if (wi::fits_shwi_p (xi))
        {
          HOST_WIDE_INT xl = xi.to_shwi ();
          HOST_WIDE_INT yl = yi.to_shwi ();
          return xl < yl ? -1 : xl > yl;
        }
      /* If x doesn't fit and is negative, then it must be more
         negative than any signed HWI, and hence smaller than y.  */
      if (neg_p (xi))
        return -1;
      /* If x is positive, then it must be larger than any signed HWI,
         and hence greater than y.  */
      return 1;
    }
  /* Optimize the opposite case, if it can be detected at compile time.  */
  if (STATIC_CONSTANT_P (xi.len == 1))
    /* If YI is negative it is lower than the least HWI.
       If YI is positive it is greater than the greatest HWI.  */
    return neg_p (yi) ? 1 : -1;
  return cmps_large (xi.val, xi.len, precision, yi.val, yi.len);
}
/* Return -1 if X < Y, 0 if X == Y and 1 if X > Y.  Treat both X and Y
   as unsigned values.  */
template <typename T1, typename T2>
inline int
wi::cmpu (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  /* Optimize comparisons with constants.  */
  if (STATIC_CONSTANT_P (yi.len == 1 && yi.val[0] >= 0))
    {
      /* If XI doesn't fit in a HWI then it must be larger than YI.  */
      if (xi.len != 1)
        return 1;
      /* Otherwise compare directly.  */
      unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
      unsigned HOST_WIDE_INT yl = yi.val[0];
      return xl < yl ? -1 : xl > yl;
    }
  if (STATIC_CONSTANT_P (xi.len == 1 && xi.val[0] >= 0))
    {
      /* If YI doesn't fit in a HWI then it must be larger than XI.  */
      if (yi.len != 1)
        return -1;
      /* Otherwise compare directly.  */
      unsigned HOST_WIDE_INT xl = xi.val[0];
      unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
      return xl < yl ? -1 : xl > yl;
    }
  /* Optimize the case of two HWIs.  The HWIs are implicitly sign-extended
     for precisions greater than HOST_BITS_PER_WIDE_INT, but sign-extending
     both values does not change the result.  */
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
      unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
      return xl < yl ? -1 : xl > yl;
    }
  return cmpu_large (xi.val, xi.len, precision, yi.val, yi.len);
}
/* Return -1 if X < Y, 0 if X == Y and 1 if X > Y.  Signedness of
   X and Y indicated by SGN.  */
template <typename T1, typename T2>
inline int
wi::cmp (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return cmps (x, y);
  else
    return cmpu (x, y);
}
/* Return ~x.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::bit_not (const T &x)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  WIDE_INT_REF_FOR (T) xi (x, get_precision (result));
  for (unsigned int i = 0; i < xi.len; ++i)
    val[i] = ~xi.val[i];
  result.set_len (xi.len);
  return result;
}
/* Return -x.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::neg (const T &x)
{
  return sub (0, x);
}
/* Return -x.  Indicate in *OVERFLOW if X is the minimum signed value.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::neg (const T &x, bool *overflow)
{
  *overflow = only_sign_bit_p (x);
  return sub (0, x);
}
/* Return the absolute value of x.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::abs (const T &x)
{
  return neg_p (x) ? neg (x) : WI_UNARY_RESULT (T) (x);
}
/* Return the result of sign-extending the low OFFSET bits of X.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::sext (const T &x, unsigned int offset)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T) xi (x, precision);

  if (offset <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = sext_hwi (xi.ulow (), offset);
      result.set_len (1, true);
    }
  else
    result.set_len (sext_large (val, xi.val, xi.len, precision, offset));
  return result;
}
/* Return the result of zero-extending the low OFFSET bits of X.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::zext (const T &x, unsigned int offset)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T) xi (x, precision);

  /* This is not just an optimization, it is actually required to
     maintain canonization.  */
  if (offset >= precision)
    {
      wi::copy (result, xi);
      return result;
    }

  /* In these cases we know that at least the top bit will be clear,
     so no sign extension is necessary.  */
  if (offset < HOST_BITS_PER_WIDE_INT)
    {
      val[0] = zext_hwi (xi.ulow (), offset);
      result.set_len (1, true);
    }
  else
    result.set_len (zext_large (val, xi.val, xi.len, precision, offset), true);
  return result;
}
/* Return the result of extending the low OFFSET bits of X according to
   signedness SGN.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::ext (const T &x, unsigned int offset, signop sgn)
{
  return sgn == SIGNED ? sext (x, offset) : zext (x, offset);
}
/* Return an integer that represents X | (1 << bit).  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::set_bit (const T &x, unsigned int bit)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T) xi (x, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () | ((unsigned HOST_WIDE_INT) 1 << bit);
      result.set_len (1);
    }
  else
    result.set_len (set_bit_large (val, xi.val, xi.len, precision, bit));
  return result;
}
/* Return the minimum of X and Y, treating them both as having
   signedness SGN.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::min (const T1 &x, const T2 &y, signop sgn)
{
  WI_BINARY_RESULT_VAR (result, val ATTRIBUTE_UNUSED, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  if (wi::le_p (x, y, sgn))
    wi::copy (result, WIDE_INT_REF_FOR (T1) (x, precision));
  else
    wi::copy (result, WIDE_INT_REF_FOR (T2) (y, precision));
  return result;
}
/* Return the minimum of X and Y, treating both as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::smin (const T1 &x, const T2 &y)
{
  return wi::min (x, y, SIGNED);
}
/* Return the minimum of X and Y, treating both as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umin (const T1 &x, const T2 &y)
{
  return wi::min (x, y, UNSIGNED);
}
/* Return the maximum of X and Y, treating them both as having
   signedness SGN.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::max (const T1 &x, const T2 &y, signop sgn)
{
  WI_BINARY_RESULT_VAR (result, val ATTRIBUTE_UNUSED, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  if (wi::ge_p (x, y, sgn))
    wi::copy (result, WIDE_INT_REF_FOR (T1) (x, precision));
  else
    wi::copy (result, WIDE_INT_REF_FOR (T2) (y, precision));
  return result;
}
/* Return the maximum of X and Y, treating both as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::smax (const T1 &x, const T2 &y)
{
  return wi::max (x, y, SIGNED);
}
/* Return the maximum of X and Y, treating both as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umax (const T1 &x, const T2 &y)
{
  return wi::max (x, y, UNSIGNED);
}
/* Return X & Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_and (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () & yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (and_large (val, xi.val, xi.len, yi.val, yi.len,
			       precision), is_sign_extended);
  return result;
}
/* Return X & ~Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_and_not (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () & ~yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (and_not_large (val, xi.val, xi.len, yi.val, yi.len,
				   precision), is_sign_extended);
  return result;
}
/* Return X | Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_or (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () | yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (or_large (val, xi.val, xi.len,
			      yi.val, yi.len, precision), is_sign_extended);
  return result;
}
/* Return X | ~Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_or_not (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () | ~yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (or_not_large (val, xi.val, xi.len, yi.val, yi.len,
				  precision), is_sign_extended);
  return result;
}
/* Return X ^ Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_xor (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () ^ yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (xor_large (val, xi.val, xi.len,
			       yi.val, yi.len, precision), is_sign_extended);
  return result;
}
/* Return X + Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::add (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () + yi.ulow ();
      result.set_len (1);
    }
  /* If the precision is known at compile time to be greater than
     HOST_BITS_PER_WIDE_INT, we can optimize the single-HWI case
     knowing that (a) all bits in those HWIs are significant and
     (b) the result has room for at least two HWIs.  This provides
     a fast path for things like offset_int and widest_int.

     The STATIC_CONSTANT_P test prevents this path from being
     used for wide_ints.  wide_ints with precisions greater than
     HOST_BITS_PER_WIDE_INT are relatively rare and there's not much
     point handling them inline.  */
  else if (STATIC_CONSTANT_P (precision > HOST_BITS_PER_WIDE_INT)
	   && __builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl + yl;
      val[0] = resultl;
      val[1] = (HOST_WIDE_INT) resultl < 0 ? 0 : -1;
      result.set_len (1 + (((resultl ^ xl) & (resultl ^ yl))
			   >> (HOST_BITS_PER_WIDE_INT - 1)));
    }
  else
    result.set_len (add_large (val, xi.val, xi.len,
			       yi.val, yi.len, precision,
			       UNSIGNED, 0));
  return result;
}
/* Return X + Y.  Treat X and Y as having the signedness given by SGN
   and indicate in *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::add (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl + yl;
      if (sgn == SIGNED)
	*overflow = (((resultl ^ xl) & (resultl ^ yl))
		     >> (precision - 1)) & 1;
      else
	*overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
		     < (xl << (HOST_BITS_PER_WIDE_INT - precision)));
      val[0] = resultl;
      result.set_len (1);
    }
  else
    result.set_len (add_large (val, xi.val, xi.len,
			       yi.val, yi.len, precision,
			       sgn, overflow));
  return result;
}
/* Return X - Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sub (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () - yi.ulow ();
      result.set_len (1);
    }
  /* If the precision is known at compile time to be greater than
     HOST_BITS_PER_WIDE_INT, we can optimize the single-HWI case
     knowing that (a) all bits in those HWIs are significant and
     (b) the result has room for at least two HWIs.  This provides
     a fast path for things like offset_int and widest_int.

     The STATIC_CONSTANT_P test prevents this path from being
     used for wide_ints.  wide_ints with precisions greater than
     HOST_BITS_PER_WIDE_INT are relatively rare and there's not much
     point handling them inline.  */
  else if (STATIC_CONSTANT_P (precision > HOST_BITS_PER_WIDE_INT)
	   && __builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl - yl;
      val[0] = resultl;
      val[1] = (HOST_WIDE_INT) resultl < 0 ? 0 : -1;
      result.set_len (1 + (((resultl ^ xl) & (xl ^ yl))
			   >> (HOST_BITS_PER_WIDE_INT - 1)));
    }
  else
    result.set_len (sub_large (val, xi.val, xi.len,
			       yi.val, yi.len, precision,
			       UNSIGNED, 0));
  return result;
}
/* Return X - Y.  Treat X and Y as having the signedness given by SGN
   and indicate in *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sub (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl - yl;
      if (sgn == SIGNED)
	*overflow = (((xl ^ yl) & (resultl ^ xl)) >> (precision - 1)) & 1;
      else
	*overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
		     > (xl << (HOST_BITS_PER_WIDE_INT - precision)));
      val[0] = resultl;
      result.set_len (1);
    }
  else
    result.set_len (sub_large (val, xi.val, xi.len,
			       yi.val, yi.len, precision,
			       sgn, overflow));
  return result;
}
/* Return X * Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mul (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () * yi.ulow ();
      result.set_len (1);
    }
  else
    result.set_len (mul_internal (val, xi.val, xi.len, yi.val, yi.len,
				  precision, UNSIGNED, 0, false));
  return result;
}
/* Return X * Y.  Treat X and Y as having the signedness given by SGN
   and indicate in *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mul (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  result.set_len (mul_internal (val, xi.val, xi.len,
				yi.val, yi.len, precision,
				sgn, overflow, false));
  return result;
}
/* Return X * Y, treating both X and Y as signed values.  Indicate in
   *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::smul (const T1 &x, const T2 &y, bool *overflow)
{
  return mul (x, y, SIGNED, overflow);
}
/* Return X * Y, treating both X and Y as unsigned values.  Indicate in
   *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umul (const T1 &x, const T2 &y, bool *overflow)
{
  return mul (x, y, UNSIGNED, overflow);
}
/* Perform a widening multiplication of X and Y, extending the values
   according to SGN, and return the high part of the result.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mul_high (const T1 &x, const T2 &y, signop sgn)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  result.set_len (mul_internal (val, xi.val, xi.len,
				yi.val, yi.len, precision,
				sgn, 0, true));
  return result;
}
/* Return X / Y, rounding towards 0.  Treat X and Y as having the
   signedness given by SGN.  Indicate in *OVERFLOW if the result
   overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_trunc (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  quotient.set_len (divmod_internal (quotient_val, 0, 0, xi.val, xi.len,
				     precision,
				     yi.val, yi.len, yi.precision,
				     sgn, overflow));
  return quotient;
}
/* Return X / Y, rounding towards 0.  Treat X and Y as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sdiv_trunc (const T1 &x, const T2 &y)
{
  return div_trunc (x, y, SIGNED);
}
/* Return X / Y, rounding towards 0.  Treat X and Y as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::udiv_trunc (const T1 &x, const T2 &y)
{
  return div_trunc (x, y, UNSIGNED);
}
/* Return X / Y, rounding towards -inf.  Treat X and Y as having the
   signedness given by SGN.  Indicate in *OVERFLOW if the result
   overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_floor (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
				     &remainder_len, remainder_val,
				     xi.val, xi.len, precision,
				     yi.val, yi.len, yi.precision, sgn,
				     overflow));
  remainder.set_len (remainder_len);
  if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn) && remainder != 0)
    return quotient - 1;
  return quotient;
}
/* Return X / Y, rounding towards -inf.  Treat X and Y as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sdiv_floor (const T1 &x, const T2 &y)
{
  return div_floor (x, y, SIGNED);
}
/* Return X / Y, rounding towards -inf.  Treat X and Y as unsigned values.  */
/* ??? Why do we have both this and udiv_trunc.  Aren't they the same?  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::udiv_floor (const T1 &x, const T2 &y)
{
  return div_floor (x, y, UNSIGNED);
}
/* Return X / Y, rounding towards +inf.  Treat X and Y as having the
   signedness given by SGN.  Indicate in *OVERFLOW if the result
   overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_ceil (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
				     &remainder_len, remainder_val,
				     xi.val, xi.len, precision,
				     yi.val, yi.len, yi.precision, sgn,
				     overflow));
  remainder.set_len (remainder_len);
  if (wi::neg_p (x, sgn) == wi::neg_p (y, sgn) && remainder != 0)
    return quotient + 1;
  return quotient;
}
/* Return X / Y, rounding towards nearest with ties away from zero.
   Treat X and Y as having the signedness given by SGN.  Indicate
   in *OVERFLOW if the result overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_round (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
				     &remainder_len, remainder_val,
				     xi.val, xi.len, precision,
				     yi.val, yi.len, yi.precision, sgn,
				     overflow));
  remainder.set_len (remainder_len);

  if (remainder != 0)
    {
      if (sgn == SIGNED)
	{
	  WI_BINARY_RESULT (T1, T2) abs_remainder = wi::abs (remainder);
	  if (wi::geu_p (abs_remainder, wi::sub (wi::abs (y), abs_remainder)))
	    {
	      if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn))
		return quotient - 1;
	      else
		return quotient + 1;
	    }
	}
      else
	{
	  if (wi::geu_p (remainder, wi::sub (y, remainder)))
	    return quotient + 1;
	}
    }
  return quotient;
}
/* Return X / Y, rounding towards 0.  Treat X and Y as having the
   signedness given by SGN.  Store the remainder in *REMAINDER_PTR.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::divmod_trunc (const T1 &x, const T2 &y, signop sgn,
		  WI_BINARY_RESULT (T1, T2) *remainder_ptr)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
				     &remainder_len, remainder_val,
				     xi.val, xi.len, precision,
				     yi.val, yi.len, yi.precision, sgn, 0));
  remainder.set_len (remainder_len);

  *remainder_ptr = remainder;
  return quotient;
}
/* Compute the greatest common divisor of two numbers A and B using
   Euclid's algorithm.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::gcd (const T1 &a, const T2 &b, signop sgn)
{
  T1 x, y, z;

  x = wi::abs (a);
  y = wi::abs (b);

  while (gt_p (x, 0, sgn))
    {
      z = mod_trunc (y, x, sgn);
      y = x;
      x = z;
    }

  return y;
}
/* Compute X / Y, rounding towards 0, and return the remainder.
   Treat X and Y as having the signedness given by SGN.  Indicate
   in *OVERFLOW if the division overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mod_trunc (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (remainder);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  divmod_internal (0, &remainder_len, remainder_val,
		   xi.val, xi.len, precision,
		   yi.val, yi.len, yi.precision, sgn, overflow);
  remainder.set_len (remainder_len);

  return remainder;
}
/* Compute X / Y, rounding towards 0, and return the remainder.
   Treat X and Y as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::smod_trunc (const T1 &x, const T2 &y)
{
  return mod_trunc (x, y, SIGNED);
}
/* Compute X / Y, rounding towards 0, and return the remainder.
   Treat X and Y as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umod_trunc (const T1 &x, const T2 &y)
{
  return mod_trunc (x, y, UNSIGNED);
}
/* Compute X / Y, rounding towards -inf, and return the remainder.
   Treat X and Y as having the signedness given by SGN.  Indicate
   in *OVERFLOW if the division overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mod_floor (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
				     &remainder_len, remainder_val,
				     xi.val, xi.len, precision,
				     yi.val, yi.len, yi.precision, sgn,
				     overflow));
  remainder.set_len (remainder_len);

  if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn) && remainder != 0)
    return remainder + y;
  return remainder;
}
/* Compute X / Y, rounding towards -inf, and return the remainder.
   Treat X and Y as unsigned values.  */
/* ??? Why do we have both this and umod_trunc.  Aren't they the same?  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umod_floor (const T1 &x, const T2 &y)
{
  return mod_floor (x, y, UNSIGNED);
}
/* Compute X / Y, rounding towards +inf, and return the remainder.
   Treat X and Y as having the signedness given by SGN.  Indicate
   in *OVERFLOW if the division overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mod_ceil (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
				     &remainder_len, remainder_val,
				     xi.val, xi.len, precision,
				     yi.val, yi.len, yi.precision, sgn,
				     overflow));
  remainder.set_len (remainder_len);

  if (wi::neg_p (x, sgn) == wi::neg_p (y, sgn) && remainder != 0)
    return remainder - y;
  return remainder;
}
/* Compute X / Y, rounding towards nearest with ties away from zero,
   and return the remainder.  Treat X and Y as having the signedness
   given by SGN.  Indicate in *OVERFLOW if the division overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mod_round (const T1 &x, const T2 &y, signop sgn, bool *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
				     &remainder_len, remainder_val,
				     xi.val, xi.len, precision,
				     yi.val, yi.len, yi.precision, sgn,
				     overflow));
  remainder.set_len (remainder_len);

  if (remainder != 0)
    {
      if (sgn == SIGNED)
	{
	  WI_BINARY_RESULT (T1, T2) abs_remainder = wi::abs (remainder);
	  if (wi::geu_p (abs_remainder, wi::sub (wi::abs (y), abs_remainder)))
	    {
	      if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn))
		return remainder + y;
	      else
		return remainder - y;
	    }
	}
      else
	{
	  if (wi::geu_p (remainder, wi::sub (y, remainder)))
	    return remainder - y;
	}
    }
  return remainder;
}
/* Return true if X is a multiple of Y.  Treat X and Y as having the
   signedness given by SGN.  */
template <typename T1, typename T2>
inline bool
wi::multiple_of_p (const T1 &x, const T2 &y, signop sgn)
{
  return wi::mod_trunc (x, y, sgn) == 0;
}
/* Return true if X is a multiple of Y, storing X / Y in *RES if so.
   Treat X and Y as having the signedness given by SGN.  */
template <typename T1, typename T2>
inline bool
wi::multiple_of_p (const T1 &x, const T2 &y, signop sgn,
		   WI_BINARY_RESULT (T1, T2) *res)
{
  WI_BINARY_RESULT (T1, T2) remainder;
  WI_BINARY_RESULT (T1, T2) quotient
    = divmod_trunc (x, y, sgn, &remainder);
  if (remainder == 0)
    {
      *res = quotient;
      return true;
    }
  return false;
}
/* Return X << Y.  Return 0 if Y is greater than or equal to
   the precision of X.  */
template <typename T1, typename T2>
inline WI_UNARY_RESULT (T1)
wi::lshift (const T1 &x, const T2 &y)
{
  WI_UNARY_RESULT_VAR (result, val, T1, x);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);
  /* Handle the simple cases quickly.  */
  if (geu_p (yi, precision))
    {
      val[0] = 0;
      result.set_len (1);
    }
  else
    {
      unsigned int shift = yi.to_uhwi ();
      /* For fixed-precision integers like offset_int and widest_int,
	 handle the case where the shift value is constant and the
	 result is a single nonnegative HWI (meaning that we don't
	 need to worry about val[1]).  This is particularly common
	 for converting a byte count to a bit count.

	 For variable-precision integers like wide_int, handle HWI
	 and sub-HWI integers inline.  */
      if (STATIC_CONSTANT_P (xi.precision > HOST_BITS_PER_WIDE_INT)
	  ? (STATIC_CONSTANT_P (shift < HOST_BITS_PER_WIDE_INT - 1)
	     && xi.len == 1
	     && xi.val[0] <= (HOST_WIDE_INT) ((unsigned HOST_WIDE_INT)
					      HOST_WIDE_INT_MAX >> shift))
	  : precision <= HOST_BITS_PER_WIDE_INT)
	{
	  val[0] = xi.ulow () << shift;
	  result.set_len (1);
	}
      else
	result.set_len (lshift_large (val, xi.val, xi.len,
				      precision, shift));
    }
  return result;
}
/* Return X >> Y, using a logical shift.  Return 0 if Y is greater than
   or equal to the precision of X.  */
template <typename T1, typename T2>
inline WI_UNARY_RESULT (T1)
wi::lrshift (const T1 &x, const T2 &y)
{
  WI_UNARY_RESULT_VAR (result, val, T1, x);
  /* Do things in the precision of the input rather than the output,
     since the result can be no larger than that.  */
  WIDE_INT_REF_FOR (T1) xi (x);
  WIDE_INT_REF_FOR (T2) yi (y);
  /* Handle the simple cases quickly.  */
  if (geu_p (yi, xi.precision))
    {
      val[0] = 0;
      result.set_len (1);
    }
  else
    {
      unsigned int shift = yi.to_uhwi ();
      /* For fixed-precision integers like offset_int and widest_int,
	 handle the case where the shift value is constant and the
	 shifted value is a single nonnegative HWI (meaning that all
	 bits above the HWI are zero).  This is particularly common
	 for converting a bit count to a byte count.

	 For variable-precision integers like wide_int, handle HWI
	 and sub-HWI integers inline.  */
      if (STATIC_CONSTANT_P (xi.precision > HOST_BITS_PER_WIDE_INT)
	  ? (shift < HOST_BITS_PER_WIDE_INT
	     && xi.len == 1
	     && xi.val[0] >= 0)
	  : xi.precision <= HOST_BITS_PER_WIDE_INT)
	{
	  val[0] = xi.to_uhwi () >> shift;
	  result.set_len (1);
	}
      else
	result.set_len (lrshift_large (val, xi.val, xi.len, xi.precision,
				       get_precision (result), shift));
    }
  return result;
}
/* Return X >> Y, using an arithmetic shift.  Return a sign mask if
   Y is greater than or equal to the precision of X.  */
template <typename T1, typename T2>
inline WI_UNARY_RESULT (T1)
wi::arshift (const T1 &x, const T2 &y)
{
  WI_UNARY_RESULT_VAR (result, val, T1, x);
  /* Do things in the precision of the input rather than the output,
     since the result can be no larger than that.  */
  WIDE_INT_REF_FOR (T1) xi (x);
  WIDE_INT_REF_FOR (T2) yi (y);
  /* Handle the simple cases quickly.  */
  if (geu_p (yi, xi.precision))
    {
      val[0] = sign_mask (x);
      result.set_len (1);
    }
  else
    {
      unsigned int shift = yi.to_uhwi ();
      if (xi.precision <= HOST_BITS_PER_WIDE_INT)
	{
	  val[0] = sext_hwi (xi.ulow () >> shift, xi.precision - shift);
	  result.set_len (1, true);
	}
      else
	result.set_len (arshift_large (val, xi.val, xi.len, xi.precision,
				       get_precision (result), shift));
    }
  return result;
}
/* Return X >> Y, using an arithmetic shift if SGN is SIGNED and a
   logical shift otherwise.  */
template <typename T1, typename T2>
inline WI_UNARY_RESULT (T1)
wi::rshift (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == UNSIGNED)
    return lrshift (x, y);
  else
    return arshift (x, y);
}
/* Return the result of rotating the low WIDTH bits of X left by Y
   bits and zero-extending the result.  Use a full-width rotate if
   WIDTH is zero.  */
template <typename T1, typename T2>
WI_UNARY_RESULT (T1)
wi::lrotate (const T1 &x, const T2 &y, unsigned int width)
{
  unsigned int precision = get_binary_precision (x, x);
  if (width == 0)
    width = precision;
  WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
  WI_UNARY_RESULT (T1) left = wi::lshift (x, ymod);
  WI_UNARY_RESULT (T1) right = wi::lrshift (x, wi::sub (width, ymod));
  if (width != precision)
    return wi::zext (left, width) | wi::zext (right, width);
  return left | right;
}
/* Return the result of rotating the low WIDTH bits of X right by Y
   bits and zero-extending the result.  Use a full-width rotate if
   WIDTH is zero.  */
template <typename T1, typename T2>
WI_UNARY_RESULT (T1)
wi::rrotate (const T1 &x, const T2 &y, unsigned int width)
{
  unsigned int precision = get_binary_precision (x, x);
  if (width == 0)
    width = precision;
  WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
  WI_UNARY_RESULT (T1) right = wi::lrshift (x, ymod);
  WI_UNARY_RESULT (T1) left = wi::lshift (x, wi::sub (width, ymod));
  if (width != precision)
    return wi::zext (left, width) | wi::zext (right, width);
  return left | right;
}
/* Return 0 if the number of 1s in X is even and 1 if the number of 1s
   is odd.  */
inline int
wi::parity (const wide_int_ref &x)
{
  return popcount (x) & 1;
}
/* Extract WIDTH bits from X, starting at BITPOS.  */
template <typename T>
inline unsigned HOST_WIDE_INT
wi::extract_uhwi (const T &x, unsigned int bitpos, unsigned int width)
{
  unsigned precision = get_precision (x);
  if (precision < bitpos + width)
    precision = bitpos + width;
  WIDE_INT_REF_FOR (T) xi (x, precision);

  /* Handle this rare case after the above, so that we assert about
     bogus BITPOS values.  */
  if (width == 0)
    return 0;

  unsigned int start = bitpos / HOST_BITS_PER_WIDE_INT;
  unsigned int shift = bitpos % HOST_BITS_PER_WIDE_INT;
  unsigned HOST_WIDE_INT res = xi.elt (start);
  res >>= shift;
  if (shift + width > HOST_BITS_PER_WIDE_INT)
    {
      unsigned HOST_WIDE_INT upper = xi.elt (start + 1);
      res |= upper << (-shift % HOST_BITS_PER_WIDE_INT);
    }
  return zext_hwi (res, width);
}
/* Return the minimum precision needed to store X with sign SGN.  */
template <typename T>
inline unsigned int
wi::min_precision (const T &x, signop sgn)
{
  if (sgn == SIGNED)
    return get_precision (x) - clrsb (x);
  else
    return get_precision (x) - clz (x);
}
#define SIGNED_BINARY_PREDICATE(OP, F)			\
  template <typename T1, typename T2>			\
    inline WI_SIGNED_BINARY_PREDICATE_RESULT (T1, T2)	\
    OP (const T1 &x, const T2 &y)			\
    {							\
      return wi::F (x, y);				\
    }

SIGNED_BINARY_PREDICATE (operator <, lts_p)
SIGNED_BINARY_PREDICATE (operator <=, les_p)
SIGNED_BINARY_PREDICATE (operator >, gts_p)
SIGNED_BINARY_PREDICATE (operator >=, ges_p)

#undef SIGNED_BINARY_PREDICATE
template <typename T1, typename T2>
inline WI_SIGNED_SHIFT_RESULT (T1, T2)
operator << (const T1 &x, const T2 &y)
{
  return wi::lshift (x, y);
}

template <typename T1, typename T2>
inline WI_SIGNED_SHIFT_RESULT (T1, T2)
operator >> (const T1 &x, const T2 &y)
{
  return wi::arshift (x, y);
}
template<typename T>
void
gt_ggc_mx (generic_wide_int <T> *)
{
}

template<typename T>
void
gt_pch_nx (generic_wide_int <T> *)
{
}

template<typename T>
void
gt_pch_nx (generic_wide_int <T> *, void (*) (void *, void *), void *)
{
}

template<int N>
void
gt_ggc_mx (trailing_wide_ints <N> *)
{
}

template<int N>
void
gt_pch_nx (trailing_wide_ints <N> *)
{
}

template<int N>
void
gt_pch_nx (trailing_wide_ints <N> *, void (*) (void *, void *), void *)
{
}
namespace wi
{
  /* Used for overloaded functions in which the only other acceptable
     scalar type is a pointer.  It stops a plain 0 from being treated
     as a null pointer.  */
  struct never_used1 {};
  struct never_used2 {};

  wide_int min_value (unsigned int, signop);
  wide_int min_value (never_used1 *);
  wide_int min_value (never_used2 *);
  wide_int max_value (unsigned int, signop);
  wide_int max_value (never_used1 *);
  wide_int max_value (never_used2 *);

  /* FIXME: this is target dependent, so should be elsewhere.
     It also seems to assume that CHAR_BIT == BITS_PER_UNIT.  */
  wide_int from_buffer (const unsigned char *, unsigned int);

#ifndef GENERATOR_FILE
  void to_mpz (const wide_int_ref &, mpz_t, signop);
#endif

  wide_int mask (unsigned int, bool, unsigned int);
  wide_int shifted_mask (unsigned int, unsigned int, bool, unsigned int);
  wide_int set_bit_in_zero (unsigned int, unsigned int);
  wide_int insert (const wide_int &x, const wide_int &y, unsigned int,
		   unsigned int);

  template <typename T>
  T mask (unsigned int, bool);

  template <typename T>
  T shifted_mask (unsigned int, unsigned int, bool);

  template <typename T>
  T set_bit_in_zero (unsigned int);

  unsigned int mask (HOST_WIDE_INT *, unsigned int, bool, unsigned int);
  unsigned int shifted_mask (HOST_WIDE_INT *, unsigned int, unsigned int,
			     bool, unsigned int);
  unsigned int from_array (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			   unsigned int, unsigned int, bool);
}
/* Return a PRECISION-bit integer in which the low WIDTH bits are set
   and the other bits are clear, or the inverse if NEGATE_P.  */
inline wide_int
wi::mask (unsigned int width, bool negate_p, unsigned int precision)
{
  wide_int result = wide_int::create (precision);
  result.set_len (mask (result.write_val (), width, negate_p, precision));
  return result;
}
/* Return a PRECISION-bit integer in which the low START bits are clear,
   the next WIDTH bits are set, and the other bits are clear,
   or the inverse if NEGATE_P.  */
inline wide_int
wi::shifted_mask (unsigned int start, unsigned int width, bool negate_p,
		  unsigned int precision)
{
  wide_int result = wide_int::create (precision);
  result.set_len (shifted_mask (result.write_val (), start, width, negate_p,
				precision));
  return result;
}
/* Return a PRECISION-bit integer in which bit BIT is set and all the
   others are clear.  */
inline wide_int
wi::set_bit_in_zero (unsigned int bit, unsigned int precision)
{
  return shifted_mask (bit, 1, false, precision);
}
/* Return an integer of type T in which the low WIDTH bits are set
   and the other bits are clear, or the inverse if NEGATE_P.  */
template <typename T>
inline T
wi::mask (unsigned int width, bool negate_p)
{
  STATIC_ASSERT (wi::int_traits<T>::precision);
  T result;
  result.set_len (mask (result.write_val (), width, negate_p,
			wi::int_traits <T>::precision));
  return result;
}
/* Return an integer of type T in which the low START bits are clear,
   the next WIDTH bits are set, and the other bits are clear, or the
   inverse if NEGATE_P.  */
template <typename T>
inline T
wi::shifted_mask (unsigned int start, unsigned int width, bool negate_p)
{
  STATIC_ASSERT (wi::int_traits<T>::precision);
  T result;
  result.set_len (shifted_mask (result.write_val (), start, width,
				negate_p,
				wi::int_traits <T>::precision));
  return result;
}
/* Return an integer of type T in which bit BIT is set and all the
   others are clear.  */
template <typename T>
inline T
wi::set_bit_in_zero (unsigned int bit)
{
  return shifted_mask <T> (bit, 1, false);
}
#endif /* WIDE_INT_H */