/* Operations with very long integers.  -*- C++ -*-
   Copyright (C) 2012-2021 Free Software Foundation, Inc.

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the
Free Software Foundation; either version 3, or (at your option) any
later version.

GCC is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */
/* wide-int.[cc|h] implements a class that efficiently performs
   mathematical operations on finite precision integers.  wide_ints
   are designed to be transient - they are not for long term storage
   of values.  There is tight integration between wide_ints and the
   other longer storage GCC representations (rtl and tree).

   The actual precision of a wide_int depends on the flavor.  There
   are three predefined flavors:

     1) wide_int (the default).  This flavor does the math in the
     precision of its input arguments.  It is assumed (and checked)
     that the precisions of the operands and results are consistent.
     This is the most efficient flavor.  It is not possible to examine
     bits above the precision that has been specified.  Because of
     this, the default flavor has semantics that are simple to
     understand and in general model the underlying hardware that the
     compiler is targeted for.

     This flavor must be used at the RTL level of gcc because there
     is, in general, not enough information in the RTL representation
     to extend a value beyond the precision specified in the mode.

     This flavor should also be used at the TREE and GIMPLE levels of
     the compiler except for the circumstances described in the
     descriptions of the other two flavors.

     The default wide_int representation does not contain any inherent
     information about the signedness of the represented value,
     so it can be used to represent both signed and unsigned numbers.
     For operations where the results depend on signedness (full width
     multiply, division, shifts, comparisons, and operations that need
     overflow detected), the signedness must be specified separately.
     2) offset_int.  This is a fixed-precision integer that can hold
     any address offset, measured in either bits or bytes, with at
     least one extra sign bit.  At the moment the maximum address
     size GCC supports is 64 bits.  With 8-bit bytes and an extra
     sign bit, offset_int therefore needs to have at least 68 bits
     of precision.  We round this up to 128 bits for efficiency.
     Values of type T are converted to this precision by sign- or
     zero-extending them based on the signedness of T.

     The extra sign bit means that offset_int is effectively a signed
     128-bit integer, i.e. it behaves like int128_t.

     Since the values are logically signed, there is no need to
     distinguish between signed and unsigned operations.  Sign-sensitive
     comparison operators <, <=, > and >= are therefore supported.
     Shift operators << and >> are also supported, with >> being
     an _arithmetic_ right shift.

     [ Note that, even though offset_int is effectively int128_t,
     it can still be useful to use unsigned comparisons like
     wi::leu_p (a, b) as a more efficient short-hand for
     "a >= 0 && a <= b". ]
     3) widest_int.  This representation is an approximation of
     infinite precision math.  However, it is not really infinite
     precision math as in the GMP library.  It is really finite
     precision math where the precision is 4 times the size of the
     largest integer that the target port can represent.

     Like offset_int, widest_int is wider than all the values that
     it needs to represent, so the integers are logically signed.
     Sign-sensitive comparison operators <, <=, > and >= are supported,
     as are << and >>.
   There are several places in GCC where this should/must be used:

     * Code that does induction variable optimizations.  This code
       works with induction variables of many different types at the
       same time.  Because of this, it ends up doing many different
       calculations where the operands are not compatible types.  The
       widest_int makes this easy, because it provides a field where
       nothing is lost when converting from any variable.

     * There are a small number of passes that currently use the
       widest_int that should use the default.  These should be
       changed.
   There are surprising features of offset_int and widest_int
   that the users should be careful about:

     1) Shifts and rotations are just weird.  You have to specify a
     precision in which the shift or rotate is to happen.  The bits
     above this precision are zeroed.  While this is what you
     want, it is clearly non-obvious.

     2) Larger precision math sometimes does not produce the same
     answer as would be expected for doing the math at the proper
     precision.  In particular, a multiply followed by a divide will
     produce a different answer if the first product is larger than
     what can be represented in the input precision.
   The offset_int and the widest_int flavors are more expensive
   than the default wide_int, so in addition to the caveats with these
   two, the default is the preferred representation.
   All three flavors of wide_int are represented as a vector of
   HOST_WIDE_INTs.  The default and widest_int vectors contain enough elements
   to hold a value of MAX_BITSIZE_MODE_ANY_INT bits.  offset_int contains only
   enough elements to hold ADDR_MAX_PRECISION bits.  The values are stored
   in the vector with the least significant HOST_BITS_PER_WIDE_INT bits
   in element 0.

   The default wide_int contains three fields: the vector (VAL),
   the precision and a length (LEN).  The length is the number of HWIs
   needed to represent the value.  widest_int and offset_int have a
   constant precision that cannot be changed, so they only store the
   VAL and LEN fields.
   Since most integers used in a compiler are small values, it is
   generally profitable to use a representation of the value that is
   as small as possible.  LEN is used to indicate the number of
   elements of the vector that are in use.  The numbers are stored as
   sign extended numbers as a means of compression.  Leading
   HOST_WIDE_INTs that contain strings of either -1 or 0 are removed
   as long as they can be reconstructed from the top bit that is being
   represented.

   The precision and length of a wide_int are always greater than 0.
   Any bits in a wide_int above the precision are sign-extended from the
   most significant bit.  For example, a 4-bit value 0x8 is represented as
   VAL = { 0xf...fff8 }.  However, as an optimization, we allow other integer
   constants to be represented with undefined bits above the precision.
   This allows INTEGER_CSTs to be pre-extended according to TYPE_SIGN,
   so that the INTEGER_CST representation can be used both in TYPE_PRECISION
   and in wider precisions.
   There are constructors to create the various forms of wide_int from
   trees, rtl and constants.  For trees the options are:

	     wi::to_wide (t)	 // Treat T as a wide_int
	     wi::to_offset (t)	 // Treat T as an offset_int
	     wi::to_widest (t)	 // Treat T as a widest_int

   All three are light-weight accessors that should have no overhead
   in release builds.  If it is useful for readability reasons to
   store the result in a temporary variable, the preferred method is:

	     wi::tree_to_wide_ref twide = wi::to_wide (t);
	     wi::tree_to_offset_ref toffset = wi::to_offset (t);
	     wi::tree_to_widest_ref twidest = wi::to_widest (t);
   To make an rtx into a wide_int, you have to pair it with a mode.
   The canonical way to do this is with rtx_mode_t as in:

	     wide_int x = rtx_mode_t (r, mode);

   Similarly, a wide_int can only be constructed from a host value if
   the target precision is given explicitly, such as in:

	     wide_int x = wi::shwi (c, prec);	// sign-extend C if necessary
	     wide_int y = wi::uhwi (c, prec);	// zero-extend C if necessary

   However, offset_int and widest_int have an inherent precision and so
   can be initialized directly from a host value:

	     offset_int x = (int) c;		// sign-extend C
	     widest_int x = (unsigned int) c;	// zero-extend C

   It is also possible to do arithmetic directly on rtx_mode_ts and
   constants.  For example:

	     wi::add (r1, r2);	  // add equal-sized rtx_mode_ts r1 and r2
	     wi::add (r1, 1);	  // add 1 to rtx_mode_t r1
	     wi::lshift (1, 100); // 1 << 100 as a widest_int
   Many binary operations place restrictions on the combinations of inputs,
   using the following rules:

     - {rtx, wide_int} op {rtx, wide_int} -> wide_int
       The inputs must be the same precision.  The result is a wide_int
       of the same precision.

     - {rtx, wide_int} op (un)signed HOST_WIDE_INT -> wide_int
       (un)signed HOST_WIDE_INT op {rtx, wide_int} -> wide_int
       The HOST_WIDE_INT is extended or truncated to the precision of
       the other input.  The result is a wide_int of the same precision
       as that input.

     - (un)signed HOST_WIDE_INT op (un)signed HOST_WIDE_INT -> widest_int
       The inputs are extended to widest_int precision and produce a
       widest_int result.

     - offset_int op offset_int -> offset_int
       offset_int op (un)signed HOST_WIDE_INT -> offset_int
       (un)signed HOST_WIDE_INT op offset_int -> offset_int

     - widest_int op widest_int -> widest_int
       widest_int op (un)signed HOST_WIDE_INT -> widest_int
       (un)signed HOST_WIDE_INT op widest_int -> widest_int

   Other combinations like:

     - widest_int op offset_int and
     - wide_int op offset_int

   are not allowed.  The inputs should instead be extended or truncated
   so that they match.

   The inputs to comparison functions like wi::eq_p and wi::lts_p
   follow the same compatibility rules, although their return types
   are different.  Unary functions on X produce the same result as
   a binary operation X + X.  Shift functions X op Y also produce
   the same result as X + X; the precision of the shift amount Y
   can be arbitrarily different from X.  */
/* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very
   early examination of the target's mode file.  The WIDE_INT_MAX_ELTS
   can accommodate at least 1 more bit so that unsigned numbers of that
   mode can be represented as a signed value.  Note that it is still
   possible to create fixed_wide_ints that have precisions greater than
   MAX_BITSIZE_MODE_ANY_INT.  This can be useful when representing a
   double-width multiplication result, for example.  */
#define WIDE_INT_MAX_ELTS \
  ((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT) / HOST_BITS_PER_WIDE_INT)

#define WIDE_INT_MAX_PRECISION (WIDE_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
/* This is the max size of any pointer on any machine.  It does not
   seem to be as easy to sniff this out of the machine description as
   it is for MAX_BITSIZE_MODE_ANY_INT since targets may support
   multiple address sizes and may have different address sizes for
   different address spaces.  However, currently the largest pointer
   on any platform is 64 bits.  When that changes, then it is likely
   that a target hook should be defined so that targets can make this
   value larger for those targets.  */
#define ADDR_MAX_BITSIZE 64

/* This is the internal precision used when doing any address
   arithmetic.  The '4' is really 3 + 1.  Three of the bits are for
   the number of extra bits needed to do bit addresses and the other bit
   is to allow everything to be signed without losing any precision.
   Then everything is rounded up to the next HWI for efficiency.  */
#define ADDR_MAX_PRECISION \
  ((ADDR_MAX_BITSIZE + 4 + HOST_BITS_PER_WIDE_INT - 1) \
   & ~(HOST_BITS_PER_WIDE_INT - 1))
/* The number of HWIs needed to store an offset_int.  */
#define OFFSET_INT_ELTS (ADDR_MAX_PRECISION / HOST_BITS_PER_WIDE_INT)

/* The type of result produced by a binary operation on types T1 and T2.
   Defined purely for brevity.  */
#define WI_BINARY_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::result_type

/* Likewise for binary operators, which excludes the case in which neither
   T1 nor T2 is a wide-int-based type.  */
#define WI_BINARY_OPERATOR_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::operator_result

/* The type of result produced by T1 << T2.  Leads to substitution failure
   if the operation isn't supported.  Defined purely for brevity.  */
#define WI_SIGNED_SHIFT_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::signed_shift_result_type

/* The type of result produced by a sign-agnostic binary predicate on
   types T1 and T2.  This is bool if wide-int operations make sense for
   T1 and T2 and leads to substitution failure otherwise.  */
#define WI_BINARY_PREDICATE_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::predicate_result

/* The type of result produced by a signed binary predicate on types T1 and T2.
   This is bool if signed comparisons make sense for T1 and T2 and leads to
   substitution failure otherwise.  */
#define WI_SIGNED_BINARY_PREDICATE_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::signed_predicate_result

/* The type of result produced by a unary operation on type T.  */
#define WI_UNARY_RESULT(T) \
  typename wi::binary_traits <T, T>::result_type

/* Define a variable RESULT to hold the result of a binary operation on
   X and Y, which have types T1 and T2 respectively.  Define VAL to
   point to the blocks of RESULT.  Once the user of the macro has
   filled in VAL, it should call RESULT.set_len to set the number
   of initialized blocks.  */
#define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
  WI_BINARY_RESULT (T1, T2) RESULT = \
    wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
  HOST_WIDE_INT *VAL = RESULT.write_val ()

/* Similar for the result of a unary operation on X, which has type T.  */
#define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
  WI_UNARY_RESULT (T) RESULT = \
    wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
  HOST_WIDE_INT *VAL = RESULT.write_val ()
template <typename T> class generic_wide_int;
template <int N> class fixed_wide_int_storage;
class wide_int_storage;

/* An N-bit integer.  Until we can use typedef templates, use this instead.  */
#define FIXED_WIDE_INT(N) \
  generic_wide_int < fixed_wide_int_storage <N> >

typedef generic_wide_int <wide_int_storage> wide_int;
typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) offset_int;
typedef FIXED_WIDE_INT (WIDE_INT_MAX_PRECISION) widest_int;
/* Spelled out explicitly (rather than through FIXED_WIDE_INT)
   so as not to confuse gengtype.  */
typedef generic_wide_int < fixed_wide_int_storage
			   <WIDE_INT_MAX_PRECISION * 2> > widest2_int;

/* wi::storage_ref can be a reference to a primitive type,
   so this is the conservatively-correct setting.  */
template <bool SE, bool HDP = true>
class wide_int_ref_storage;

typedef generic_wide_int <wide_int_ref_storage <false> > wide_int_ref;

/* This can be used instead of wide_int_ref if the referenced value is
   known to have type T.  It carries across properties of T's representation,
   such as whether excess upper bits in a HWI are defined, and can therefore
   help avoid redundant work.

   The macro could be replaced with a template typedef, once we're able
   to use those.  */
#define WIDE_INT_REF_FOR(T) \
  generic_wide_int \
    <wide_int_ref_storage <wi::int_traits <T>::is_sign_extended, \
			   wi::int_traits <T>::host_dependent_precision> >
/* Operations that calculate overflow do so even for
   TYPE_OVERFLOW_WRAPS types.  For example, adding 1 to +MAX_INT in
   an unsigned int is 0 and does not overflow in C/C++, but wi::add
   will set the overflow argument in case it's needed for further
   analysis.

   For operations that require overflow, these are the different
   types of overflow.  */
enum overflow_type
{
  OVF_NONE = 0,
  OVF_UNDERFLOW = -1,
  OVF_OVERFLOW = 1,
  /* There was an overflow, but we are unsure whether it was an
     overflow or an underflow.  */
  OVF_UNKNOWN = 2
};

/* Classifies an integer based on its precision.  */
enum precision_type
{
  /* The integer has both a precision and defined signedness.  This allows
     the integer to be converted to any width, since we know whether to fill
     any extra bits with zeros or signs.  */
  FLEXIBLE_PRECISION,

  /* The integer has a variable precision but no defined signedness.  */
  VAR_PRECISION,

  /* The integer has a constant precision (known at GCC compile time)
     and is signed.  */
  CONST_PRECISION
};
/* This class, which has no default implementation, is expected to
   provide the following members:

   static const enum precision_type precision_type;
     Classifies the type of T.

   static const unsigned int precision;
     Only defined if precision_type == CONST_PRECISION.  Specifies the
     precision of all integers of type T.

   static const bool host_dependent_precision;
     True if the precision of T depends (or can depend) on the host.

   static unsigned int get_precision (const T &x)
     Return the number of bits in X.

   static wi::storage_ref *decompose (HOST_WIDE_INT *scratch,
				      unsigned int precision, const T &x)
     Decompose X as a PRECISION-bit integer, returning the associated
     wi::storage_ref.  SCRATCH is available as scratch space if needed.
     The routine should assert that PRECISION is acceptable.  */
template <typename T> struct int_traits;

/* This class provides a single type, result_type, which specifies the
   type of integer produced by a binary operation whose inputs have
   types T1 and T2.  The definition should be symmetric.  */
template <typename T1, typename T2,
	  enum precision_type P1 = int_traits <T1>::precision_type,
	  enum precision_type P2 = int_traits <T2>::precision_type>
struct binary_traits;
/* Specify the result type for each supported combination of binary
   inputs.  Note that CONST_PRECISION and VAR_PRECISION cannot be
   mixed, in order to give stronger type checking.  When both inputs
   are CONST_PRECISION, they must have the same precision.  */
template <typename T1, typename T2>
struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
{
  typedef widest_int result_type;
  /* Don't define operators for this combination.  */
};

template <typename T1, typename T2>
struct binary_traits <T1, T2, FLEXIBLE_PRECISION, VAR_PRECISION>
{
  typedef wide_int result_type;
  typedef result_type operator_result;
  typedef bool predicate_result;
};

template <typename T1, typename T2>
struct binary_traits <T1, T2, FLEXIBLE_PRECISION, CONST_PRECISION>
{
  /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
     so as not to confuse gengtype.  */
  typedef generic_wide_int < fixed_wide_int_storage
			     <int_traits <T2>::precision> > result_type;
  typedef result_type operator_result;
  typedef bool predicate_result;
  typedef result_type signed_shift_result_type;
  typedef bool signed_predicate_result;
};

template <typename T1, typename T2>
struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
{
  typedef wide_int result_type;
  typedef result_type operator_result;
  typedef bool predicate_result;
};

template <typename T1, typename T2>
struct binary_traits <T1, T2, CONST_PRECISION, FLEXIBLE_PRECISION>
{
  /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
     so as not to confuse gengtype.  */
  typedef generic_wide_int < fixed_wide_int_storage
			     <int_traits <T1>::precision> > result_type;
  typedef result_type operator_result;
  typedef bool predicate_result;
  typedef result_type signed_shift_result_type;
  typedef bool signed_predicate_result;
};

template <typename T1, typename T2>
struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
{
  STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
  /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
     so as not to confuse gengtype.  */
  typedef generic_wide_int < fixed_wide_int_storage
			     <int_traits <T1>::precision> > result_type;
  typedef result_type operator_result;
  typedef bool predicate_result;
  typedef result_type signed_shift_result_type;
  typedef bool signed_predicate_result;
};

template <typename T1, typename T2>
struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
{
  typedef wide_int result_type;
  typedef result_type operator_result;
  typedef bool predicate_result;
};
/* Public functions for querying and operating on integers.  */

template <typename T>
unsigned int get_precision (const T &);

template <typename T1, typename T2>
unsigned int get_binary_precision (const T1 &, const T2 &);

template <typename T1, typename T2>
void copy (T1 &, const T2 &);

#define UNARY_PREDICATE \
  template <typename T> bool
#define UNARY_FUNCTION \
  template <typename T> WI_UNARY_RESULT (T)
#define BINARY_PREDICATE \
  template <typename T1, typename T2> bool
#define BINARY_FUNCTION \
  template <typename T1, typename T2> WI_BINARY_RESULT (T1, T2)
#define SHIFT_FUNCTION \
  template <typename T1, typename T2> WI_UNARY_RESULT (T1)
UNARY_PREDICATE fits_shwi_p (const T &);
UNARY_PREDICATE fits_uhwi_p (const T &);
UNARY_PREDICATE neg_p (const T &, signop = SIGNED);

template <typename T>
HOST_WIDE_INT sign_mask (const T &);

BINARY_PREDICATE eq_p (const T1 &, const T2 &);
BINARY_PREDICATE ne_p (const T1 &, const T2 &);
BINARY_PREDICATE lt_p (const T1 &, const T2 &, signop);
BINARY_PREDICATE lts_p (const T1 &, const T2 &);
BINARY_PREDICATE ltu_p (const T1 &, const T2 &);
BINARY_PREDICATE le_p (const T1 &, const T2 &, signop);
BINARY_PREDICATE les_p (const T1 &, const T2 &);
BINARY_PREDICATE leu_p (const T1 &, const T2 &);
BINARY_PREDICATE gt_p (const T1 &, const T2 &, signop);
BINARY_PREDICATE gts_p (const T1 &, const T2 &);
BINARY_PREDICATE gtu_p (const T1 &, const T2 &);
BINARY_PREDICATE ge_p (const T1 &, const T2 &, signop);
BINARY_PREDICATE ges_p (const T1 &, const T2 &);
BINARY_PREDICATE geu_p (const T1 &, const T2 &);

template <typename T1, typename T2>
int cmp (const T1 &, const T2 &, signop);

template <typename T1, typename T2>
int cmps (const T1 &, const T2 &);

template <typename T1, typename T2>
int cmpu (const T1 &, const T2 &);
UNARY_FUNCTION bit_not (const T &);
UNARY_FUNCTION neg (const T &);
UNARY_FUNCTION neg (const T &, overflow_type *);
UNARY_FUNCTION abs (const T &);
UNARY_FUNCTION ext (const T &, unsigned int, signop);
UNARY_FUNCTION sext (const T &, unsigned int);
UNARY_FUNCTION zext (const T &, unsigned int);
UNARY_FUNCTION set_bit (const T &, unsigned int);

BINARY_FUNCTION min (const T1 &, const T2 &, signop);
BINARY_FUNCTION smin (const T1 &, const T2 &);
BINARY_FUNCTION umin (const T1 &, const T2 &);
BINARY_FUNCTION max (const T1 &, const T2 &, signop);
BINARY_FUNCTION smax (const T1 &, const T2 &);
BINARY_FUNCTION umax (const T1 &, const T2 &);

BINARY_FUNCTION bit_and (const T1 &, const T2 &);
BINARY_FUNCTION bit_and_not (const T1 &, const T2 &);
BINARY_FUNCTION bit_or (const T1 &, const T2 &);
BINARY_FUNCTION bit_or_not (const T1 &, const T2 &);
BINARY_FUNCTION bit_xor (const T1 &, const T2 &);
BINARY_FUNCTION add (const T1 &, const T2 &);
BINARY_FUNCTION add (const T1 &, const T2 &, signop, overflow_type *);
BINARY_FUNCTION sub (const T1 &, const T2 &);
BINARY_FUNCTION sub (const T1 &, const T2 &, signop, overflow_type *);
BINARY_FUNCTION mul (const T1 &, const T2 &);
BINARY_FUNCTION mul (const T1 &, const T2 &, signop, overflow_type *);
BINARY_FUNCTION smul (const T1 &, const T2 &, overflow_type *);
BINARY_FUNCTION umul (const T1 &, const T2 &, overflow_type *);
BINARY_FUNCTION mul_high (const T1 &, const T2 &, signop);
BINARY_FUNCTION div_trunc (const T1 &, const T2 &, signop,
			   overflow_type * = 0);
BINARY_FUNCTION sdiv_trunc (const T1 &, const T2 &);
BINARY_FUNCTION udiv_trunc (const T1 &, const T2 &);
BINARY_FUNCTION div_floor (const T1 &, const T2 &, signop,
			   overflow_type * = 0);
BINARY_FUNCTION udiv_floor (const T1 &, const T2 &);
BINARY_FUNCTION sdiv_floor (const T1 &, const T2 &);
BINARY_FUNCTION div_ceil (const T1 &, const T2 &, signop,
			  overflow_type * = 0);
BINARY_FUNCTION udiv_ceil (const T1 &, const T2 &);
BINARY_FUNCTION div_round (const T1 &, const T2 &, signop,
			   overflow_type * = 0);
BINARY_FUNCTION divmod_trunc (const T1 &, const T2 &, signop,
			      WI_BINARY_RESULT (T1, T2) *);
BINARY_FUNCTION gcd (const T1 &, const T2 &, signop = UNSIGNED);
BINARY_FUNCTION mod_trunc (const T1 &, const T2 &, signop,
			   overflow_type * = 0);
BINARY_FUNCTION smod_trunc (const T1 &, const T2 &);
BINARY_FUNCTION umod_trunc (const T1 &, const T2 &);
BINARY_FUNCTION mod_floor (const T1 &, const T2 &, signop,
			   overflow_type * = 0);
BINARY_FUNCTION umod_floor (const T1 &, const T2 &);
BINARY_FUNCTION mod_ceil (const T1 &, const T2 &, signop,
			  overflow_type * = 0);
BINARY_FUNCTION mod_round (const T1 &, const T2 &, signop,
			   overflow_type * = 0);

template <typename T1, typename T2>
bool multiple_of_p (const T1 &, const T2 &, signop);

template <typename T1, typename T2>
bool multiple_of_p (const T1 &, const T2 &, signop,
		    WI_BINARY_RESULT (T1, T2) *);

SHIFT_FUNCTION lshift (const T1 &, const T2 &);
SHIFT_FUNCTION lrshift (const T1 &, const T2 &);
SHIFT_FUNCTION arshift (const T1 &, const T2 &);
SHIFT_FUNCTION rshift (const T1 &, const T2 &, signop sgn);
SHIFT_FUNCTION lrotate (const T1 &, const T2 &, unsigned int = 0);
SHIFT_FUNCTION rrotate (const T1 &, const T2 &, unsigned int = 0);

#undef SHIFT_FUNCTION
#undef BINARY_PREDICATE
#undef BINARY_FUNCTION
#undef UNARY_PREDICATE
#undef UNARY_FUNCTION
bool only_sign_bit_p (const wide_int_ref &, unsigned int);
bool only_sign_bit_p (const wide_int_ref &);
int clz (const wide_int_ref &);
int clrsb (const wide_int_ref &);
int ctz (const wide_int_ref &);
int exact_log2 (const wide_int_ref &);
int floor_log2 (const wide_int_ref &);
int ffs (const wide_int_ref &);
int popcount (const wide_int_ref &);
int parity (const wide_int_ref &);

template <typename T>
unsigned HOST_WIDE_INT extract_uhwi (const T &, unsigned int, unsigned int);

template <typename T>
unsigned int min_precision (const T &, signop);

static inline void accumulate_overflow (overflow_type &, overflow_type);
namespace wi
{
  /* Contains the components of a decomposed integer for easy, direct
     access.  */
  class storage_ref
  {
  public:
    storage_ref () {}
    storage_ref (const HOST_WIDE_INT *, unsigned int, unsigned int);

    const HOST_WIDE_INT *val;
    unsigned int len;
    unsigned int precision;

    /* Provide enough trappings for this class to act as storage for
       generic_wide_int.  */
    unsigned int get_len () const;
    unsigned int get_precision () const;
    const HOST_WIDE_INT *get_val () const;
  };
}

inline wi::storage_ref::storage_ref (const HOST_WIDE_INT *val_in,
				     unsigned int len_in,
				     unsigned int precision_in)
  : val (val_in), len (len_in), precision (precision_in)
{
}

inline unsigned int
wi::storage_ref::get_len () const
{
  return len;
}

inline unsigned int
wi::storage_ref::get_precision () const
{
  return precision;
}

inline const HOST_WIDE_INT *
wi::storage_ref::get_val () const
{
  return val;
}
/* This class defines an integer type using the storage provided by the
   template argument.  The storage class must provide the following
   functions:

   unsigned int get_precision () const
     Return the number of bits in the integer.

   HOST_WIDE_INT *get_val () const
     Return a pointer to the array of blocks that encodes the integer.

   unsigned int get_len () const
     Return the number of blocks in get_val ().  If this is smaller
     than the number of blocks implied by get_precision (), the
     remaining blocks are sign extensions of block get_len () - 1.

   Although not required by generic_wide_int itself, writable storage
   classes can also provide the following functions:

   HOST_WIDE_INT *write_val ()
     Get a modifiable version of get_val ().

   unsigned int set_len (unsigned int len)
     Set the value returned by get_len () to LEN.  */
template <typename storage>
class GTY(()) generic_wide_int : public storage
{
public:
  generic_wide_int ();

  template <typename T>
  generic_wide_int (const T &);

  template <typename T>
  generic_wide_int (const T &, unsigned int);

  /* Conversions.  */
  HOST_WIDE_INT to_shwi (unsigned int) const;
  HOST_WIDE_INT to_shwi () const;
  unsigned HOST_WIDE_INT to_uhwi (unsigned int) const;
  unsigned HOST_WIDE_INT to_uhwi () const;
  HOST_WIDE_INT to_short_addr () const;

  /* Public accessors for the interior of a wide int.  */
  HOST_WIDE_INT sign_mask () const;
  HOST_WIDE_INT elt (unsigned int) const;
  HOST_WIDE_INT sext_elt (unsigned int) const;
  unsigned HOST_WIDE_INT ulow () const;
  unsigned HOST_WIDE_INT uhigh () const;
  HOST_WIDE_INT slow () const;
  HOST_WIDE_INT shigh () const;

  template <typename T>
  generic_wide_int &operator = (const T &);

#define ASSIGNMENT_OPERATOR(OP, F) \
  template <typename T> \
  generic_wide_int &OP (const T &c) { return (*this = wi::F (*this, c)); }

/* Restrict these to cases where the shift operator is defined.  */
#define SHIFT_ASSIGNMENT_OPERATOR(OP, OP2) \
  template <typename T> \
  generic_wide_int &OP (const T &c) { return (*this = *this OP2 c); }

#define INCDEC_OPERATOR(OP, DELTA) \
  generic_wide_int &OP () { *this += DELTA; return *this; }

  ASSIGNMENT_OPERATOR (operator &=, bit_and)
  ASSIGNMENT_OPERATOR (operator |=, bit_or)
  ASSIGNMENT_OPERATOR (operator ^=, bit_xor)
  ASSIGNMENT_OPERATOR (operator +=, add)
  ASSIGNMENT_OPERATOR (operator -=, sub)
  ASSIGNMENT_OPERATOR (operator *=, mul)
  ASSIGNMENT_OPERATOR (operator <<=, lshift)
  SHIFT_ASSIGNMENT_OPERATOR (operator >>=, >>)
  INCDEC_OPERATOR (operator ++, 1)
  INCDEC_OPERATOR (operator --, -1)

#undef SHIFT_ASSIGNMENT_OPERATOR
#undef ASSIGNMENT_OPERATOR
#undef INCDEC_OPERATOR

  /* Debugging functions.  */
  void dump () const;

  static const bool is_sign_extended
    = wi::int_traits <generic_wide_int <storage> >::is_sign_extended;
};
template <typename storage>
inline generic_wide_int <storage>::generic_wide_int () {}

template <typename storage>
template <typename T>
inline generic_wide_int <storage>::generic_wide_int (const T &x)
  : storage (x)
{
}

template <typename storage>
template <typename T>
inline generic_wide_int <storage>::generic_wide_int (const T &x,
						     unsigned int precision)
  : storage (x, precision)
{
}
/* Return THIS as a signed HOST_WIDE_INT, sign-extending from PRECISION.
   If THIS does not fit in PRECISION, the information is lost.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::to_shwi (unsigned int precision) const
{
  if (precision < HOST_BITS_PER_WIDE_INT)
    return sext_hwi (this->get_val ()[0], precision);
  else
    return this->get_val ()[0];
}
/* Return THIS as a signed HOST_WIDE_INT, in its natural precision.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::to_shwi () const
{
  if (is_sign_extended)
    return this->get_val ()[0];
  else
    return to_shwi (this->get_precision ());
}
/* Return THIS as an unsigned HOST_WIDE_INT, zero-extending from
   PRECISION.  If THIS does not fit in PRECISION, the information
   is lost.  */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::to_uhwi (unsigned int precision) const
{
  if (precision < HOST_BITS_PER_WIDE_INT)
    return zext_hwi (this->get_val ()[0], precision);
  else
    return this->get_val ()[0];
}
/* Return THIS as an unsigned HOST_WIDE_INT, in its natural precision.  */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::to_uhwi () const
{
  return to_uhwi (this->get_precision ());
}
838 /* TODO: The compiler is half converted from using HOST_WIDE_INT to
839 represent addresses to using offset_int to represent addresses.
840 We use to_short_addr at the interface from new code to old,
842 template <typename storage
>
844 generic_wide_int
<storage
>::to_short_addr () const
846 return this->get_val ()[0];
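The conversions above lean on `sext_hwi`/`zext_hwi`, which extend the low PRECISION bits of a single 64-bit block. A minimal standalone sketch of that semantics (the `_sketch` names are invented for illustration; GCC's real helpers live in hwint.h):

```cpp
#include <cassert>
#include <cstdint>

// Sign-extend the low PREC bits of VAL (0 < prec <= 64).  The
// shift-up/arithmetic-shift-down trick relies on arithmetic right
// shift of signed values, which all mainstream compilers provide.
int64_t sext_hwi_sketch (int64_t val, unsigned int prec)
{
  int shift = 64 - prec;
  return (int64_t) ((uint64_t) val << shift) >> shift;
}

// Zero-extend the low PREC bits of VAL (0 < prec < 64).
uint64_t zext_hwi_sketch (uint64_t val, unsigned int prec)
{
  return val & (((uint64_t) 1 << prec) - 1);
}
```

For example, the byte 0x80 viewed at precision 8 reads as -128 when sign-extended and as 128 when zero-extended, which is exactly the to_shwi/to_uhwi distinction.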
/* Return the implicit value of blocks above get_len ().  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::sign_mask () const
{
  unsigned int len = this->get_len ();
  gcc_assert (len > 0);

  unsigned HOST_WIDE_INT high = this->get_val ()[len - 1];
  if (!is_sign_extended)
    {
      unsigned int precision = this->get_precision ();
      int excess = len * HOST_BITS_PER_WIDE_INT - precision;
      if (excess > 0)
	high <<= excess;
    }
  return (HOST_WIDE_INT) (high) < 0 ? -1 : 0;
}
/* Return the signed value of the least-significant explicitly-encoded
   block.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::slow () const
{
  return this->get_val ()[0];
}

/* Return the signed value of the most-significant explicitly-encoded
   block.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::shigh () const
{
  return this->get_val ()[this->get_len () - 1];
}

/* Return the unsigned value of the least-significant
   explicitly-encoded block.  */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::ulow () const
{
  return this->get_val ()[0];
}

/* Return the unsigned value of the most-significant
   explicitly-encoded block.  */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::uhigh () const
{
  return this->get_val ()[this->get_len () - 1];
}
/* Return block I, which might be implicitly or explicitly encoded.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::elt (unsigned int i) const
{
  if (i >= this->get_len ())
    return sign_mask ();
  else
    return this->get_val ()[i];
}

/* Like elt, but sign-extend beyond the upper bit, instead of returning
   the raw encoding.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::sext_elt (unsigned int i) const
{
  HOST_WIDE_INT elt_i = elt (i);
  if (!is_sign_extended)
    {
      unsigned int precision = this->get_precision ();
      unsigned int lsb = i * HOST_BITS_PER_WIDE_INT;
      if (precision - lsb < HOST_BITS_PER_WIDE_INT)
	elt_i = sext_hwi (elt_i, precision - lsb);
    }
  return elt_i;
}
template <typename storage>
template <typename T>
inline generic_wide_int <storage> &
generic_wide_int <storage>::operator = (const T &x)
{
  storage::operator = (x);
  return *this;
}

/* Dump the contents of the integer to stderr, for debugging.  */
template <typename storage>
void
generic_wide_int <storage>::dump () const
{
  unsigned int len = this->get_len ();
  const HOST_WIDE_INT *val = this->get_val ();
  unsigned int precision = this->get_precision ();
  fprintf (stderr, "[");
  if (len * HOST_BITS_PER_WIDE_INT < precision)
    fprintf (stderr, "...,");
  for (unsigned int i = 0; i < len - 1; ++i)
    fprintf (stderr, HOST_WIDE_INT_PRINT_HEX ",", val[len - 1 - i]);
  fprintf (stderr, HOST_WIDE_INT_PRINT_HEX "], precision = %d\n",
	   val[0], precision);
}
namespace wi
{
  template <typename storage>
  struct int_traits < generic_wide_int <storage> >
    : public wi::int_traits <storage>
  {
    static unsigned int get_precision (const generic_wide_int <storage> &);
    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
				      const generic_wide_int <storage> &);
  };
}

template <typename storage>
unsigned int
wi::int_traits < generic_wide_int <storage> >::
get_precision (const generic_wide_int <storage> &x)
{
  return x.get_precision ();
}

template <typename storage>
inline wi::storage_ref
wi::int_traits < generic_wide_int <storage> >::
decompose (HOST_WIDE_INT *, unsigned int precision,
	   const generic_wide_int <storage> &x)
{
  gcc_checking_assert (precision == x.get_precision ());
  return wi::storage_ref (x.get_val (), x.get_len (), precision);
}
/* Provide the storage for a wide_int_ref.  This acts like a read-only
   wide_int, with the optimization that VAL is normally a pointer to
   another integer's storage, so that no array copy is needed.  */
template <bool SE, bool HDP>
class wide_int_ref_storage : public wi::storage_ref
{
private:
  /* Scratch space that can be used when decomposing the original integer.
     It must live as long as this object.  */
  HOST_WIDE_INT scratch[2];

public:
  wide_int_ref_storage () {}

  wide_int_ref_storage (const wi::storage_ref &);

  template <typename T>
  wide_int_ref_storage (const T &);

  template <typename T>
  wide_int_ref_storage (const T &, unsigned int);
};

/* Create a reference from an existing reference.  */
template <bool SE, bool HDP>
inline wide_int_ref_storage <SE, HDP>::
wide_int_ref_storage (const wi::storage_ref &x)
  : storage_ref (x)
{}

/* Create a reference to integer X in its natural precision.  Note
   that the natural precision is host-dependent for primitive
   types.  */
template <bool SE, bool HDP>
template <typename T>
inline wide_int_ref_storage <SE, HDP>::wide_int_ref_storage (const T &x)
  : storage_ref (wi::int_traits <T>::decompose (scratch,
						wi::get_precision (x), x))
{
}

/* Create a reference to integer X in precision PRECISION.  */
template <bool SE, bool HDP>
template <typename T>
inline wide_int_ref_storage <SE, HDP>::
wide_int_ref_storage (const T &x, unsigned int precision)
  : storage_ref (wi::int_traits <T>::decompose (scratch, precision, x))
{
}
namespace wi
{
  template <bool SE, bool HDP>
  struct int_traits <wide_int_ref_storage <SE, HDP> >
  {
    static const enum precision_type precision_type = VAR_PRECISION;
    static const bool host_dependent_precision = HDP;
    static const bool is_sign_extended = SE;
  };
}

namespace wi
{
  unsigned int force_to_size (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			      unsigned int, unsigned int, unsigned int,
			      signop sgn);
  unsigned int from_array (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			   unsigned int, unsigned int, bool = true);
}
/* The storage used by wide_int.  */
class GTY(()) wide_int_storage
{
private:
  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
  unsigned int len;
  unsigned int precision;

public:
  wide_int_storage ();
  template <typename T>
  wide_int_storage (const T &);

  /* The standard generic_wide_int storage methods.  */
  unsigned int get_precision () const;
  const HOST_WIDE_INT *get_val () const;
  unsigned int get_len () const;
  HOST_WIDE_INT *write_val ();
  void set_len (unsigned int, bool = false);

  template <typename T>
  wide_int_storage &operator = (const T &);

  static wide_int from (const wide_int_ref &, unsigned int, signop);
  static wide_int from_array (const HOST_WIDE_INT *, unsigned int,
			      unsigned int, bool = true);
  static wide_int create (unsigned int);

  /* FIXME: target-dependent, so should disappear.  */
  wide_int bswap () const;
};

namespace wi
{
  template <>
  struct int_traits <wide_int_storage>
  {
    static const enum precision_type precision_type = VAR_PRECISION;
    /* Guaranteed by a static assert in the wide_int_storage constructor.  */
    static const bool host_dependent_precision = false;
    static const bool is_sign_extended = true;
    template <typename T1, typename T2>
    static wide_int get_binary_result (const T1 &, const T2 &);
  };
}
inline wide_int_storage::wide_int_storage () {}

/* Initialize the storage from integer X, in its natural precision.
   Note that we do not allow integers with host-dependent precision
   to become wide_ints; wide_ints must always be logically independent
   of the host.  */
template <typename T>
inline wide_int_storage::wide_int_storage (const T &x)
{
  { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
  { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
  WIDE_INT_REF_FOR (T) xi (x);
  precision = xi.precision;
  wi::copy (*this, xi);
}

template <typename T>
inline wide_int_storage &
wide_int_storage::operator = (const T &x)
{
  { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
  { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
  WIDE_INT_REF_FOR (T) xi (x);
  precision = xi.precision;
  wi::copy (*this, xi);
  return *this;
}
inline unsigned int
wide_int_storage::get_precision () const
{
  return precision;
}

inline const HOST_WIDE_INT *
wide_int_storage::get_val () const
{
  return val;
}

inline unsigned int
wide_int_storage::get_len () const
{
  return len;
}

inline HOST_WIDE_INT *
wide_int_storage::write_val ()
{
  return val;
}

inline void
wide_int_storage::set_len (unsigned int l, bool is_sign_extended)
{
  len = l;
  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
    val[len - 1] = sext_hwi (val[len - 1],
			     precision % HOST_BITS_PER_WIDE_INT);
}
/* Treat X as having signedness SGN and convert it to a PRECISION-bit
   number.  */
inline wide_int
wide_int_storage::from (const wide_int_ref &x, unsigned int precision,
			signop sgn)
{
  wide_int result = wide_int::create (precision);
  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
				     x.precision, precision, sgn));
  return result;
}

/* Create a wide_int from the explicit block encoding given by VAL and
   LEN.  PRECISION is the precision of the integer.  NEED_CANON_P is
   true if the encoding may have redundant trailing blocks.  */
inline wide_int
wide_int_storage::from_array (const HOST_WIDE_INT *val, unsigned int len,
			      unsigned int precision, bool need_canon_p)
{
  wide_int result = wide_int::create (precision);
  result.set_len (wi::from_array (result.write_val (), val, len, precision,
				  need_canon_p));
  return result;
}

/* Return an uninitialized wide_int with precision PRECISION.  */
inline wide_int
wide_int_storage::create (unsigned int precision)
{
  wide_int x;
  x.precision = precision;
  return x;
}
template <typename T1, typename T2>
inline wide_int
wi::int_traits <wide_int_storage>::get_binary_result (const T1 &x, const T2 &y)
{
  /* This shouldn't be used for two flexible-precision inputs.  */
  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
    return wide_int::create (wi::get_precision (y));
  else
    return wide_int::create (wi::get_precision (x));
}
/* The storage used by FIXED_WIDE_INT (N).  */
template <int N>
class GTY(()) fixed_wide_int_storage
{
private:
  HOST_WIDE_INT val[(N + HOST_BITS_PER_WIDE_INT + 1) / HOST_BITS_PER_WIDE_INT];
  unsigned int len;

public:
  fixed_wide_int_storage ();
  template <typename T>
  fixed_wide_int_storage (const T &);

  /* The standard generic_wide_int storage methods.  */
  unsigned int get_precision () const;
  const HOST_WIDE_INT *get_val () const;
  unsigned int get_len () const;
  HOST_WIDE_INT *write_val ();
  void set_len (unsigned int, bool = false);

  static FIXED_WIDE_INT (N) from (const wide_int_ref &, signop);
  static FIXED_WIDE_INT (N) from_array (const HOST_WIDE_INT *, unsigned int,
					bool = true);
};

namespace wi
{
  template <int N>
  struct int_traits < fixed_wide_int_storage <N> >
  {
    static const enum precision_type precision_type = CONST_PRECISION;
    static const bool host_dependent_precision = false;
    static const bool is_sign_extended = true;
    static const unsigned int precision = N;
    template <typename T1, typename T2>
    static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
  };
}
template <int N>
inline fixed_wide_int_storage <N>::fixed_wide_int_storage () {}

/* Initialize the storage from integer X, in precision N.  */
template <int N>
template <typename T>
inline fixed_wide_int_storage <N>::fixed_wide_int_storage (const T &x)
{
  /* Check for type compatibility.  We don't want to initialize a
     fixed-width integer from something like a wide_int.  */
  WI_BINARY_RESULT (T, FIXED_WIDE_INT (N)) *assertion ATTRIBUTE_UNUSED;
  wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N));
}

template <int N>
inline unsigned int
fixed_wide_int_storage <N>::get_precision () const
{
  return N;
}

template <int N>
inline const HOST_WIDE_INT *
fixed_wide_int_storage <N>::get_val () const
{
  return val;
}

template <int N>
inline unsigned int
fixed_wide_int_storage <N>::get_len () const
{
  return len;
}

template <int N>
inline HOST_WIDE_INT *
fixed_wide_int_storage <N>::write_val ()
{
  return val;
}

template <int N>
inline void
fixed_wide_int_storage <N>::set_len (unsigned int l, bool)
{
  len = l;
  /* There are no excess bits in val[len - 1].  */
  STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
}
/* Treat X as having signedness SGN and convert it to an N-bit number.  */
template <int N>
inline FIXED_WIDE_INT (N)
fixed_wide_int_storage <N>::from (const wide_int_ref &x, signop sgn)
{
  FIXED_WIDE_INT (N) result;
  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
				     x.precision, N, sgn));
  return result;
}

/* Create a FIXED_WIDE_INT (N) from the explicit block encoding given by
   VAL and LEN.  NEED_CANON_P is true if the encoding may have redundant
   trailing blocks.  */
template <int N>
inline FIXED_WIDE_INT (N)
fixed_wide_int_storage <N>::from_array (const HOST_WIDE_INT *val,
					unsigned int len,
					bool need_canon_p)
{
  FIXED_WIDE_INT (N) result;
  result.set_len (wi::from_array (result.write_val (), val, len,
				  N, need_canon_p));
  return result;
}

template <int N>
template <typename T1, typename T2>
inline FIXED_WIDE_INT (N)
wi::int_traits < fixed_wide_int_storage <N> >::
get_binary_result (const T1 &, const T2 &)
{
  return FIXED_WIDE_INT (N) ();
}
/* A reference to one element of a trailing_wide_ints structure.  */
class trailing_wide_int_storage
{
private:
  /* The precision of the integer, which is a fixed property of the
     parent trailing_wide_ints.  */
  unsigned int m_precision;

  /* A pointer to the length field.  */
  unsigned char *m_len;

  /* A pointer to the HWI array.  There are enough elements to hold all
     values of precision M_PRECISION.  */
  HOST_WIDE_INT *m_val;

public:
  trailing_wide_int_storage (unsigned int, unsigned char *, HOST_WIDE_INT *);

  /* The standard generic_wide_int storage methods.  */
  unsigned int get_len () const;
  unsigned int get_precision () const;
  const HOST_WIDE_INT *get_val () const;
  HOST_WIDE_INT *write_val ();
  void set_len (unsigned int, bool = false);

  template <typename T>
  trailing_wide_int_storage &operator = (const T &);
};

typedef generic_wide_int <trailing_wide_int_storage> trailing_wide_int;

/* trailing_wide_int behaves like a wide_int.  */
namespace wi
{
  template <>
  struct int_traits <trailing_wide_int_storage>
    : public int_traits <wide_int_storage> {};
}
/* An array of N wide_int-like objects that can be put at the end of
   a variable-sized structure.  Use extra_size to calculate how many
   bytes beyond the sizeof need to be allocated.  Use set_precision
   to initialize the structure.  */
template <int N>
struct GTY((user)) trailing_wide_ints
{
private:
  /* The shared precision of each number.  */
  unsigned short m_precision;

  /* The shared maximum length of each number.  */
  unsigned char m_max_len;

  /* The current length of each number.
     Avoid char array so the whole structure is not a typeless storage
     that will, in turn, turn off TBAA on gimple, trees and RTL.  */
  struct {unsigned char len;} m_len[N];

  /* The variable-length part of the structure, which always contains
     at least one HWI.  Element I starts at index I * M_MAX_LEN.  */
  HOST_WIDE_INT m_val[1];

public:
  typedef WIDE_INT_REF_FOR (trailing_wide_int_storage) const_reference;

  void set_precision (unsigned int);
  unsigned int get_precision () const { return m_precision; }
  trailing_wide_int operator [] (unsigned int);
  const_reference operator [] (unsigned int) const;
  static size_t extra_size (unsigned int);
  size_t extra_size () const { return extra_size (m_precision); }
};
inline trailing_wide_int_storage::
trailing_wide_int_storage (unsigned int precision, unsigned char *len,
			   HOST_WIDE_INT *val)
  : m_precision (precision), m_len (len), m_val (val)
{
}

inline unsigned int
trailing_wide_int_storage::get_len () const
{
  return *m_len;
}

inline unsigned int
trailing_wide_int_storage::get_precision () const
{
  return m_precision;
}

inline const HOST_WIDE_INT *
trailing_wide_int_storage::get_val () const
{
  return m_val;
}

inline HOST_WIDE_INT *
trailing_wide_int_storage::write_val ()
{
  return m_val;
}

inline void
trailing_wide_int_storage::set_len (unsigned int len, bool is_sign_extended)
{
  *m_len = len;
  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > m_precision)
    m_val[len - 1] = sext_hwi (m_val[len - 1],
			       m_precision % HOST_BITS_PER_WIDE_INT);
}

template <typename T>
inline trailing_wide_int_storage &
trailing_wide_int_storage::operator = (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x, m_precision);
  wi::copy (*this, xi);
  return *this;
}
/* Initialize the structure and record that all elements have precision
   PRECISION.  */
template <int N>
inline void
trailing_wide_ints <N>::set_precision (unsigned int precision)
{
  m_precision = precision;
  m_max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
	       / HOST_BITS_PER_WIDE_INT);
}

/* Return a reference to element INDEX.  */
template <int N>
inline trailing_wide_int
trailing_wide_ints <N>::operator [] (unsigned int index)
{
  return trailing_wide_int_storage (m_precision, &m_len[index].len,
				    &m_val[index * m_max_len]);
}

template <int N>
inline typename trailing_wide_ints <N>::const_reference
trailing_wide_ints <N>::operator [] (unsigned int index) const
{
  return wi::storage_ref (&m_val[index * m_max_len],
			  m_len[index].len, m_precision);
}

/* Return how many extra bytes need to be added to the end of the structure
   in order to handle N wide_ints of precision PRECISION.  */
template <int N>
inline size_t
trailing_wide_ints <N>::extra_size (unsigned int precision)
{
  unsigned int max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
			  / HOST_BITS_PER_WIDE_INT);
  return (N * max_len - 1) * sizeof (HOST_WIDE_INT);
}
/* This macro is used in structures that end with a trailing_wide_ints field
   called FIELD.  It declares get_NAME() and set_NAME() methods to access
   element I of FIELD.  */
#define TRAILING_WIDE_INT_ACCESSOR(NAME, FIELD, I) \
  trailing_wide_int get_##NAME () { return FIELD[I]; } \
  template <typename T> void set_##NAME (const T &x) { FIELD[I] = x; }
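A self-contained sketch of the "trailing array" pattern behind trailing_wide_ints: a fixed header declares a one-element array as its last member, and the caller allocates sizeof (header) plus extra_size () bytes in one block. All names here (trailing_demo, make_demo) are invented for illustration, not GCC API:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

struct trailing_demo
{
  unsigned int m_count;		// number of trailing elements
  long m_val[1];		// at least one element is always present

  // Bytes needed beyond sizeof (trailing_demo); one element is
  // already accounted for, mirroring the "N * max_len - 1" above.
  static size_t extra_size (unsigned int count)
  {
    return (count - 1) * sizeof (long);
  }
};

// Allocate header and trailing storage in a single block.
trailing_demo *make_demo (unsigned int count)
{
  trailing_demo *p
    = (trailing_demo *) malloc (sizeof (trailing_demo)
				+ trailing_demo::extra_size (count));
  p->m_count = count;
  memset (p->m_val, 0, count * sizeof (long));
  return p;
}
```

Indexing past a one-element array is formally outside the C++ object model, but the single-allocation layout is exactly what makes trailing_wide_ints cheap to embed at the end of variable-sized IR structures.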
namespace wi
{
  /* Implementation of int_traits for primitive integer types like "int".  */
  template <typename T, bool signed_p>
  struct primitive_int_traits
  {
    static const enum precision_type precision_type = FLEXIBLE_PRECISION;
    static const bool host_dependent_precision = true;
    static const bool is_sign_extended = true;
    static unsigned int get_precision (T);
    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int, T);
  };
}

template <typename T, bool signed_p>
inline unsigned int
wi::primitive_int_traits <T, signed_p>::get_precision (T)
{
  return sizeof (T) * CHAR_BIT;
}

template <typename T, bool signed_p>
inline wi::storage_ref
wi::primitive_int_traits <T, signed_p>::decompose (HOST_WIDE_INT *scratch,
						   unsigned int precision, T x)
{
  scratch[0] = x;
  if (signed_p || scratch[0] >= 0 || precision <= HOST_BITS_PER_WIDE_INT)
    return wi::storage_ref (scratch, 1, precision);
  scratch[1] = 0;
  return wi::storage_ref (scratch, 2, precision);
}
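Why decompose above sometimes needs a second, zero block: blocks beyond get_len () are implicitly sign-extended, so a single 64-bit block with its top bit set would read as negative at any higher precision. A standalone illustration (reads_as_negative is an invented helper, not GCC code):

```cpp
#include <cassert>
#include <cstdint>

// Conceptual sign of a multi-block two's-complement number: the most
// significant explicit block supplies the implicit extension.
bool reads_as_negative (const int64_t *blocks, unsigned int len)
{
  return blocks[len - 1] < 0;
}
```

An unsigned value like 0x8000000000000000 stored as one block would be misread as negative at, say, 128-bit precision; appending an explicit zero block (len == 2) restores the intended non-negative reading, which is exactly the `scratch[1] = 0` case.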
/* Allow primitive C types to be used in wi:: routines.  */
namespace wi
{
  template <>
  struct int_traits <unsigned char>
    : public primitive_int_traits <unsigned char, false> {};

  template <>
  struct int_traits <unsigned short>
    : public primitive_int_traits <unsigned short, false> {};

  template <>
  struct int_traits <int>
    : public primitive_int_traits <int, true> {};

  template <>
  struct int_traits <unsigned int>
    : public primitive_int_traits <unsigned int, false> {};

  template <>
  struct int_traits <long>
    : public primitive_int_traits <long, true> {};

  template <>
  struct int_traits <unsigned long>
    : public primitive_int_traits <unsigned long, false> {};

#if defined HAVE_LONG_LONG
  template <>
  struct int_traits <long long>
    : public primitive_int_traits <long long, true> {};

  template <>
  struct int_traits <unsigned long long>
    : public primitive_int_traits <unsigned long long, false> {};
#endif
}
namespace wi
{
  /* Stores HWI-sized integer VAL, treating it as having signedness SGN
     and precision PRECISION.  */
  struct hwi_with_prec
  {
    hwi_with_prec () {}
    hwi_with_prec (HOST_WIDE_INT, unsigned int, signop);
    HOST_WIDE_INT val;
    unsigned int precision;
    signop sgn;
  };

  hwi_with_prec shwi (HOST_WIDE_INT, unsigned int);
  hwi_with_prec uhwi (unsigned HOST_WIDE_INT, unsigned int);

  hwi_with_prec minus_one (unsigned int);
  hwi_with_prec zero (unsigned int);
  hwi_with_prec one (unsigned int);
  hwi_with_prec two (unsigned int);
}

inline wi::hwi_with_prec::hwi_with_prec (HOST_WIDE_INT v, unsigned int p,
					 signop s)
  : precision (p), sgn (s)
{
  if (precision < HOST_BITS_PER_WIDE_INT)
    val = sext_hwi (v, precision);
  else
    val = v;
}
/* Return a signed integer that has value VAL and precision PRECISION.  */
inline wi::hwi_with_prec
wi::shwi (HOST_WIDE_INT val, unsigned int precision)
{
  return hwi_with_prec (val, precision, SIGNED);
}

/* Return an unsigned integer that has value VAL and precision PRECISION.  */
inline wi::hwi_with_prec
wi::uhwi (unsigned HOST_WIDE_INT val, unsigned int precision)
{
  return hwi_with_prec (val, precision, UNSIGNED);
}

/* Return a wide int of -1 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::minus_one (unsigned int precision)
{
  return wi::shwi (-1, precision);
}

/* Return a wide int of 0 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::zero (unsigned int precision)
{
  return wi::shwi (0, precision);
}

/* Return a wide int of 1 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::one (unsigned int precision)
{
  return wi::shwi (1, precision);
}

/* Return a wide int of 2 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::two (unsigned int precision)
{
  return wi::shwi (2, precision);
}
namespace wi
{
  /* ints_for<T>::zero (X) returns a zero that, when assigned to a T,
     gives that T the same precision as X.  */
  template<typename T, precision_type = int_traits<T>::precision_type>
  struct ints_for
  {
    static int zero (const T &) { return 0; }
  };

  template<typename T>
  struct ints_for<T, VAR_PRECISION>
  {
    static hwi_with_prec zero (const T &);
  };
}

template<typename T>
inline wi::hwi_with_prec
wi::ints_for <T, wi::VAR_PRECISION>::zero (const T &x)
{
  return wi::zero (wi::get_precision (x));
}
namespace wi
{
  template <>
  struct int_traits <wi::hwi_with_prec>
  {
    static const enum precision_type precision_type = VAR_PRECISION;
    /* hwi_with_prec has an explicitly-given precision, rather than the
       precision of HOST_WIDE_INT.  */
    static const bool host_dependent_precision = false;
    static const bool is_sign_extended = true;
    static unsigned int get_precision (const wi::hwi_with_prec &);
    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
				      const wi::hwi_with_prec &);
  };
}

inline unsigned int
wi::int_traits <wi::hwi_with_prec>::get_precision (const wi::hwi_with_prec &x)
{
  return x.precision;
}

inline wi::storage_ref
wi::int_traits <wi::hwi_with_prec>::
decompose (HOST_WIDE_INT *scratch, unsigned int precision,
	   const wi::hwi_with_prec &x)
{
  gcc_checking_assert (precision == x.precision);
  scratch[0] = x.val;
  if (x.sgn == SIGNED || x.val >= 0 || precision <= HOST_BITS_PER_WIDE_INT)
    return wi::storage_ref (scratch, 1, precision);
  scratch[1] = 0;
  return wi::storage_ref (scratch, 2, precision);
}
/* Private functions for handling large cases out of line.  They take
   individual length and array parameters because that is cheaper for
   the inline caller than constructing an object on the stack and
   passing a reference to it.  (Although many callers use wide_int_refs,
   we generally want those to be removed by SRA.)  */
namespace wi
{
  bool eq_p_large (const HOST_WIDE_INT *, unsigned int,
		   const HOST_WIDE_INT *, unsigned int, unsigned int);
  bool lts_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
		    const HOST_WIDE_INT *, unsigned int);
  bool ltu_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
		    const HOST_WIDE_INT *, unsigned int);
  int cmps_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
		  const HOST_WIDE_INT *, unsigned int);
  int cmpu_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
		  const HOST_WIDE_INT *, unsigned int);
  unsigned int sext_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			   unsigned int, unsigned int, unsigned int);
  unsigned int zext_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			   unsigned int, unsigned int, unsigned int);
  unsigned int set_bit_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			      unsigned int, unsigned int, unsigned int);
  unsigned int lshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			     unsigned int, unsigned int, unsigned int);
  unsigned int lrshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			      unsigned int, unsigned int, unsigned int,
			      unsigned int);
  unsigned int arshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			      unsigned int, unsigned int, unsigned int,
			      unsigned int);
  unsigned int and_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
			  const HOST_WIDE_INT *, unsigned int, unsigned int);
  unsigned int and_not_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			      unsigned int, const HOST_WIDE_INT *,
			      unsigned int, unsigned int);
  unsigned int or_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
			 const HOST_WIDE_INT *, unsigned int, unsigned int);
  unsigned int or_not_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			     unsigned int, const HOST_WIDE_INT *,
			     unsigned int, unsigned int);
  unsigned int xor_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
			  const HOST_WIDE_INT *, unsigned int, unsigned int);
  unsigned int add_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
			  const HOST_WIDE_INT *, unsigned int, unsigned int,
			  signop, overflow_type *);
  unsigned int sub_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
			  const HOST_WIDE_INT *, unsigned int, unsigned int,
			  signop, overflow_type *);
  unsigned int mul_internal (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			     unsigned int, const HOST_WIDE_INT *,
			     unsigned int, unsigned int, signop,
			     overflow_type *, bool);
  unsigned int divmod_internal (HOST_WIDE_INT *, unsigned int *,
				HOST_WIDE_INT *, const HOST_WIDE_INT *,
				unsigned int, unsigned int,
				const HOST_WIDE_INT *,
				unsigned int, unsigned int,
				signop, overflow_type *);
}
/* Return the number of bits that integer X can hold.  */
template <typename T>
inline unsigned int
wi::get_precision (const T &x)
{
  return wi::int_traits <T>::get_precision (x);
}

/* Return the number of bits that the result of a binary operation can
   hold when the input operands are X and Y.  */
template <typename T1, typename T2>
inline unsigned int
wi::get_binary_precision (const T1 &x, const T2 &y)
{
  return get_precision (wi::int_traits <WI_BINARY_RESULT (T1, T2)>::
			get_binary_result (x, y));
}

/* Copy the contents of Y to X, but keeping X's current precision.  */
template <typename T1, typename T2>
inline void
wi::copy (T1 &x, const T2 &y)
{
  HOST_WIDE_INT *xval = x.write_val ();
  const HOST_WIDE_INT *yval = y.get_val ();
  unsigned int len = y.get_len ();
  unsigned int i = 0;
  do
    xval[i] = yval[i];
  while (++i < len);
  x.set_len (len, y.is_sign_extended);
}
/* Return true if X fits in a HOST_WIDE_INT with no loss of precision.  */
template <typename T>
inline bool
wi::fits_shwi_p (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x);
  return xi.len == 1;
}

/* Return true if X fits in an unsigned HOST_WIDE_INT with no loss of
   precision.  */
template <typename T>
inline bool
wi::fits_uhwi_p (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x);
  if (xi.precision <= HOST_BITS_PER_WIDE_INT)
    return true;
  if (xi.len == 1)
    return xi.slow () >= 0;
  return xi.len == 2 && xi.uhigh () == 0;
}

/* Return true if X is negative based on the interpretation of SGN.
   For UNSIGNED, this is always false.  */
template <typename T>
inline bool
wi::neg_p (const T &x, signop sgn)
{
  WIDE_INT_REF_FOR (T) xi (x);
  if (sgn == UNSIGNED)
    return false;
  return xi.sign_mask () < 0;
}

/* Return -1 if the top bit of X is set and 0 if the top bit is clear.  */
template <typename T>
inline HOST_WIDE_INT
wi::sign_mask (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x);
  return xi.sign_mask ();
}
/* Return true if X == Y.  X and Y must be binary-compatible.  */
template <typename T1, typename T2>
inline bool
wi::eq_p (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (xi.is_sign_extended && yi.is_sign_extended)
    {
      /* This case reduces to array equality.  */
      if (xi.len != yi.len)
	return false;
      unsigned int i = 0;
      do
	if (xi.val[i] != yi.val[i])
	  return false;
      while (++i != xi.len);
      return true;
    }
  if (__builtin_expect (yi.len == 1, true))
    {
      /* XI is only equal to YI if it too has a single HWI.  */
      if (xi.len != 1)
	return false;
      /* Excess bits in xi.val[0] will be signs or zeros, so comparisons
	 with 0 are simple.  */
      if (STATIC_CONSTANT_P (yi.val[0] == 0))
	return xi.val[0] == 0;
      /* Otherwise flush out any excess bits first.  */
      unsigned HOST_WIDE_INT diff = xi.val[0] ^ yi.val[0];
      int excess = HOST_BITS_PER_WIDE_INT - precision;
      if (excess > 0)
	diff <<= excess;
      return diff == 0;
    }
  return eq_p_large (xi.val, xi.len, yi.val, yi.len, precision);
}

/* Return true if X != Y.  X and Y must be binary-compatible.  */
template <typename T1, typename T2>
inline bool
wi::ne_p (const T1 &x, const T2 &y)
{
  return !eq_p (x, y);
}
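The "flush out any excess bits" step in eq_p can be demonstrated in isolation: below the precision boundary the two encodings must agree, above it they may differ freely, so XOR the blocks and shift the excess bits off the top before testing for zero. A standalone sketch with an invented name:

```cpp
#include <cassert>
#include <cstdint>

// Equality of two single-block values at a precision of at most 64
// bits: bits at positions >= precision are ignored by shifting the
// XOR difference left until only in-precision bits remain.
bool eq_single_block (int64_t x, int64_t y, unsigned int precision)
{
  uint64_t diff = (uint64_t) x ^ (uint64_t) y;
  int excess = 64 - precision;
  if (excess > 0)
    diff <<= excess;
  return diff == 0;
}
```

At precision 8, 0x1ff and 0xff differ only in bit 8, which the shift discards, so they compare equal; 0xfe and 0xff differ in bit 0 and stay unequal.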
/* Return true if X < Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::lts_p (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  /* We optimize x < y, where y is 64 or fewer bits.  */
  if (wi::fits_shwi_p (yi))
    {
      /* Make lts_p (x, 0) as efficient as wi::neg_p (x).  */
      if (STATIC_CONSTANT_P (yi.val[0] == 0))
	return neg_p (xi);
      /* If x fits directly into a shwi, we can compare directly.  */
      if (wi::fits_shwi_p (xi))
	return xi.to_shwi () < yi.to_shwi ();
      /* If x doesn't fit and is negative, then it must be more
	 negative than any value in y, and hence smaller than y.  */
      if (neg_p (xi))
	return true;
      /* If x is positive, then it must be larger than any value in y,
	 and hence greater than y.  */
      return false;
    }
  /* Optimize the opposite case, if it can be detected at compile time.  */
  if (STATIC_CONSTANT_P (xi.len == 1))
    /* If YI is negative it is lower than the least HWI.
       If YI is positive it is greater than the greatest HWI.  */
    return !neg_p (yi);
  return lts_p_large (xi.val, xi.len, precision, yi.val, yi.len);
}
/* Return true if X < Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::ltu_p (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  /* Optimize comparisons with constants.  */
  if (STATIC_CONSTANT_P (yi.len == 1 && yi.val[0] >= 0))
    return xi.len == 1 && xi.to_uhwi () < (unsigned HOST_WIDE_INT) yi.val[0];
  if (STATIC_CONSTANT_P (xi.len == 1 && xi.val[0] >= 0))
    return yi.len != 1 || yi.to_uhwi () > (unsigned HOST_WIDE_INT) xi.val[0];
  /* Optimize the case of two HWIs.  The HWIs are implicitly sign-extended
     for precisions greater than HOST_BITS_PER_WIDE_INT, but sign-extending
     both values does not change the result.  */
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
      unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
      return xl < yl;
    }
  return ltu_p_large (xi.val, xi.len, precision, yi.val, yi.len);
}
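The observation ltu_p relies on ("sign-extending both values does not change the result") can be checked in a standalone sketch: for a precision above 64 bits, each single block stands for its sign-extension, and the unsigned order of those wide values matches the unsigned order of the raw 64-bit blocks. The helper name is invented for illustration:

```cpp
#include <cassert>
#include <cstdint>

// Unsigned comparison of two implicitly sign-extended single blocks:
// negative blocks sign-extend to the top of the unsigned range at any
// wider precision, and do so in the same relative order as their raw
// 64-bit unsigned reinterpretations.
bool ltu_single_block (int64_t x, int64_t y)
{
  return (uint64_t) x < (uint64_t) y;
}
```

For example, a block of -1 sign-extends to the all-ones (maximum unsigned) value at every precision, so it compares above any non-negative block, exactly as its raw reinterpretation 0xffff... does.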
/* Return true if X < Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::lt_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return lts_p (x, y);
  else
    return ltu_p (x, y);
}
/* Return true if X <= Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::les_p (const T1 &x, const T2 &y)
{
  return !lts_p (y, x);
}

/* Return true if X <= Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::leu_p (const T1 &x, const T2 &y)
{
  return !ltu_p (y, x);
}
/* Return true if X <= Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::le_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return les_p (x, y);
  else
    return leu_p (x, y);
}
/* Return true if X > Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::gts_p (const T1 &x, const T2 &y)
{
  return lts_p (y, x);
}

/* Return true if X > Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::gtu_p (const T1 &x, const T2 &y)
{
  return ltu_p (y, x);
}
/* Return true if X > Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::gt_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return gts_p (x, y);
  else
    return gtu_p (x, y);
}
/* Return true if X >= Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::ges_p (const T1 &x, const T2 &y)
{
  return !lts_p (x, y);
}

/* Return true if X >= Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::geu_p (const T1 &x, const T2 &y)
{
  return !ltu_p (x, y);
}
/* Return true if X >= Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::ge_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return ges_p (x, y);
  else
    return geu_p (x, y);
}
/* Return -1 if X < Y, 0 if X == Y and 1 if X > Y.  Treat both X and Y
   as signed values.  */
template <typename T1, typename T2>
inline int
wi::cmps (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (wi::fits_shwi_p (yi))
    {
      /* Special case for comparisons with 0.  */
      if (STATIC_CONSTANT_P (yi.val[0] == 0))
	return neg_p (xi) ? -1 : !(xi.len == 1 && xi.val[0] == 0);
      /* If x fits into a signed HWI, we can compare directly.  */
      if (wi::fits_shwi_p (xi))
	{
	  HOST_WIDE_INT xl = xi.to_shwi ();
	  HOST_WIDE_INT yl = yi.to_shwi ();
	  return xl < yl ? -1 : xl > yl;
	}
      /* If x doesn't fit and is negative, then it must be more
	 negative than any signed HWI, and hence smaller than y.  */
      if (neg_p (xi))
	return -1;
      /* If x is positive, then it must be larger than any signed HWI,
	 and hence greater than y.  */
      return 1;
    }
  /* Optimize the opposite case, if it can be detected at compile time.  */
  if (STATIC_CONSTANT_P (xi.len == 1))
    /* If YI is negative it is lower than the least HWI.
       If YI is positive it is greater than the greatest HWI.  */
    return neg_p (yi) ? 1 : -1;
  return cmps_large (xi.val, xi.len, precision, yi.val, yi.len);
}
/* Return -1 if X < Y, 0 if X == Y and 1 if X > Y.  Treat both X and Y
   as unsigned values.  */
template <typename T1, typename T2>
inline int
wi::cmpu (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  /* Optimize comparisons with constants.  */
  if (STATIC_CONSTANT_P (yi.len == 1 && yi.val[0] >= 0))
    {
      /* If XI doesn't fit in a HWI then it must be larger than YI.  */
      if (xi.len != 1)
	return 1;
      /* Otherwise compare directly.  */
      unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
      unsigned HOST_WIDE_INT yl = yi.val[0];
      return xl < yl ? -1 : xl > yl;
    }
  if (STATIC_CONSTANT_P (xi.len == 1 && xi.val[0] >= 0))
    {
      /* If YI doesn't fit in a HWI then it must be larger than XI.  */
      if (yi.len != 1)
	return -1;
      /* Otherwise compare directly.  */
      unsigned HOST_WIDE_INT xl = xi.val[0];
      unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
      return xl < yl ? -1 : xl > yl;
    }
  /* Optimize the case of two HWIs.  The HWIs are implicitly sign-extended
     for precisions greater than HOST_BITS_PER_WIDE_INT, but sign-extending
     both values does not change the result.  */
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
      unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
      return xl < yl ? -1 : xl > yl;
    }
  return cmpu_large (xi.val, xi.len, precision, yi.val, yi.len);
}
/* Return -1 if X < Y, 0 if X == Y and 1 if X > Y.  Signedness of
   X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline int
wi::cmp (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return cmps (x, y);
  else
    return cmpu (x, y);
}
/* Return ~x.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::bit_not (const T &x)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  WIDE_INT_REF_FOR (T) xi (x, get_precision (result));
  for (unsigned int i = 0; i < xi.len; ++i)
    val[i] = ~xi.val[i];
  result.set_len (xi.len);
  return result;
}
/* Return -x.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::neg (const T &x)
{
  return sub (0, x);
}

/* Return -x.  Indicate in *OVERFLOW if performing the negation would
   cause an overflow.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::neg (const T &x, overflow_type *overflow)
{
  *overflow = only_sign_bit_p (x) ? OVF_OVERFLOW : OVF_NONE;
  return sub (0, x);
}
/* Return the absolute value of x.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::abs (const T &x)
{
  return neg_p (x) ? neg (x) : WI_UNARY_RESULT (T) (x);
}
/* Return the result of sign-extending the low OFFSET bits of X.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::sext (const T &x, unsigned int offset)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T) xi (x, precision);

  if (offset <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = sext_hwi (xi.ulow (), offset);
      result.set_len (1, true);
    }
  else
    result.set_len (sext_large (val, xi.val, xi.len, precision, offset));
  return result;
}
/* Return the result of zero-extending the low OFFSET bits of X.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::zext (const T &x, unsigned int offset)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T) xi (x, precision);

  /* This is not just an optimization, it is actually required to
     maintain canonization.  */
  if (offset >= precision)
    {
      wi::copy (result, xi);
      return result;
    }

  /* In these cases we know that at least the top bit will be clear,
     so no sign extension is necessary.  */
  if (offset < HOST_BITS_PER_WIDE_INT)
    {
      val[0] = zext_hwi (xi.ulow (), offset);
      result.set_len (1, true);
    }
  else
    result.set_len (zext_large (val, xi.val, xi.len, precision, offset),
		    true);
  return result;
}
/* Return the result of extending the low OFFSET bits of X according to
   signedness SGN.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::ext (const T &x, unsigned int offset, signop sgn)
{
  return sgn == SIGNED ? sext (x, offset) : zext (x, offset);
}
/* Return an integer that represents X | (1 << bit).  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::set_bit (const T &x, unsigned int bit)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T) xi (x, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () | (HOST_WIDE_INT_1U << bit);
      result.set_len (1);
    }
  else
    result.set_len (set_bit_large (val, xi.val, xi.len, precision, bit));
  return result;
}
/* Return the minimum of X and Y, treating them both as having
   signedness SGN.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::min (const T1 &x, const T2 &y, signop sgn)
{
  WI_BINARY_RESULT_VAR (result, val ATTRIBUTE_UNUSED, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  if (wi::le_p (x, y, sgn))
    wi::copy (result, WIDE_INT_REF_FOR (T1) (x, precision));
  else
    wi::copy (result, WIDE_INT_REF_FOR (T2) (y, precision));
  return result;
}

/* Return the minimum of X and Y, treating both as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::smin (const T1 &x, const T2 &y)
{
  return wi::min (x, y, SIGNED);
}

/* Return the minimum of X and Y, treating both as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umin (const T1 &x, const T2 &y)
{
  return wi::min (x, y, UNSIGNED);
}
/* Return the maximum of X and Y, treating them both as having
   signedness SGN.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::max (const T1 &x, const T2 &y, signop sgn)
{
  WI_BINARY_RESULT_VAR (result, val ATTRIBUTE_UNUSED, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  if (wi::ge_p (x, y, sgn))
    wi::copy (result, WIDE_INT_REF_FOR (T1) (x, precision));
  else
    wi::copy (result, WIDE_INT_REF_FOR (T2) (y, precision));
  return result;
}

/* Return the maximum of X and Y, treating both as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::smax (const T1 &x, const T2 &y)
{
  return wi::max (x, y, SIGNED);
}

/* Return the maximum of X and Y, treating both as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umax (const T1 &x, const T2 &y)
{
  return wi::max (x, y, UNSIGNED);
}
/* Return X & Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_and (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () & yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (and_large (val, xi.val, xi.len, yi.val, yi.len,
			       precision), is_sign_extended);
  return result;
}
/* Return X & ~Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_and_not (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () & ~yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (and_not_large (val, xi.val, xi.len, yi.val, yi.len,
				   precision), is_sign_extended);
  return result;
}
/* Return X | Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_or (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () | yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (or_large (val, xi.val, xi.len,
			      yi.val, yi.len, precision), is_sign_extended);
  return result;
}
/* Return X | ~Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_or_not (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () | ~yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (or_not_large (val, xi.val, xi.len, yi.val, yi.len,
				  precision), is_sign_extended);
  return result;
}
/* Return X ^ Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_xor (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () ^ yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (xor_large (val, xi.val, xi.len,
			       yi.val, yi.len, precision), is_sign_extended);
  return result;
}
/* Return X + Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::add (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () + yi.ulow ();
      result.set_len (1);
    }
  /* If the precision is known at compile time to be greater than
     HOST_BITS_PER_WIDE_INT, we can optimize the single-HWI case
     knowing that (a) all bits in those HWIs are significant and
     (b) the result has room for at least two HWIs.  This provides
     a fast path for things like offset_int and widest_int.

     The STATIC_CONSTANT_P test prevents this path from being
     used for wide_ints.  wide_ints with precisions greater than
     HOST_BITS_PER_WIDE_INT are relatively rare and there's not much
     point handling them inline.  */
  else if (STATIC_CONSTANT_P (precision > HOST_BITS_PER_WIDE_INT)
	   && __builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl + yl;
      val[0] = resultl;
      val[1] = (HOST_WIDE_INT) resultl < 0 ? 0 : -1;
      result.set_len (1 + (((resultl ^ xl) & (resultl ^ yl))
			   >> (HOST_BITS_PER_WIDE_INT - 1)));
    }
  else
    result.set_len (add_large (val, xi.val, xi.len,
			       yi.val, yi.len, precision,
			       UNSIGNED, 0));
  return result;
}
/* Return X + Y.  Treat X and Y as having the signedness given by SGN
   and indicate in *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::add (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl + yl;
      if (sgn == SIGNED)
	{
	  if ((((resultl ^ xl) & (resultl ^ yl))
	       >> (precision - 1)) & 1)
	    {
	      if (xl > resultl)
		*overflow = OVF_UNDERFLOW;
	      else if (xl < resultl)
		*overflow = OVF_OVERFLOW;
	      else
		*overflow = OVF_NONE;
	    }
	  else
	    *overflow = OVF_NONE;
	}
      else
	*overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
		     < (xl << (HOST_BITS_PER_WIDE_INT - precision)))
	  ? OVF_OVERFLOW : OVF_NONE;
      val[0] = resultl;
      result.set_len (1);
    }
  else
    result.set_len (add_large (val, xi.val, xi.len,
			       yi.val, yi.len, precision,
			       sgn, overflow));
  return result;
}
/* Return X - Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sub (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () - yi.ulow ();
      result.set_len (1);
    }
  /* If the precision is known at compile time to be greater than
     HOST_BITS_PER_WIDE_INT, we can optimize the single-HWI case
     knowing that (a) all bits in those HWIs are significant and
     (b) the result has room for at least two HWIs.  This provides
     a fast path for things like offset_int and widest_int.

     The STATIC_CONSTANT_P test prevents this path from being
     used for wide_ints.  wide_ints with precisions greater than
     HOST_BITS_PER_WIDE_INT are relatively rare and there's not much
     point handling them inline.  */
  else if (STATIC_CONSTANT_P (precision > HOST_BITS_PER_WIDE_INT)
	   && __builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl - yl;
      val[0] = resultl;
      val[1] = (HOST_WIDE_INT) resultl < 0 ? 0 : -1;
      result.set_len (1 + (((resultl ^ xl) & (xl ^ yl))
			   >> (HOST_BITS_PER_WIDE_INT - 1)));
    }
  else
    result.set_len (sub_large (val, xi.val, xi.len,
			       yi.val, yi.len, precision,
			       UNSIGNED, 0));
  return result;
}
/* Return X - Y.  Treat X and Y as having the signedness given by SGN
   and indicate in *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sub (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl - yl;
      if (sgn == SIGNED)
	{
	  if ((((xl ^ yl) & (resultl ^ xl)) >> (precision - 1)) & 1)
	    {
	      if (xl > yl)
		*overflow = OVF_UNDERFLOW;
	      else if (xl < yl)
		*overflow = OVF_OVERFLOW;
	      else
		*overflow = OVF_NONE;
	    }
	  else
	    *overflow = OVF_NONE;
	}
      else
	*overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
		     > (xl << (HOST_BITS_PER_WIDE_INT - precision)))
	  ? OVF_UNDERFLOW : OVF_NONE;
      val[0] = resultl;
      result.set_len (1);
    }
  else
    result.set_len (sub_large (val, xi.val, xi.len,
			       yi.val, yi.len, precision,
			       sgn, overflow));
  return result;
}
/* Return X * Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mul (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () * yi.ulow ();
      result.set_len (1);
    }
  else
    result.set_len (mul_internal (val, xi.val, xi.len, yi.val, yi.len,
				  precision, UNSIGNED, 0, false));
  return result;
}
/* Return X * Y.  Treat X and Y as having the signedness given by SGN
   and indicate in *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mul (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  result.set_len (mul_internal (val, xi.val, xi.len,
				yi.val, yi.len, precision,
				sgn, overflow, false));
  return result;
}
/* Return X * Y, treating both X and Y as signed values.  Indicate in
   *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::smul (const T1 &x, const T2 &y, overflow_type *overflow)
{
  return mul (x, y, SIGNED, overflow);
}

/* Return X * Y, treating both X and Y as unsigned values.  Indicate in
   *OVERFLOW if the result overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umul (const T1 &x, const T2 &y, overflow_type *overflow)
{
  return mul (x, y, UNSIGNED, overflow);
}
/* Perform a widening multiplication of X and Y, extending the values
   according to SGN, and return the high part of the result.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mul_high (const T1 &x, const T2 &y, signop sgn)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  result.set_len (mul_internal (val, xi.val, xi.len,
				yi.val, yi.len, precision,
				sgn, 0, true));
  return result;
}
/* Return X / Y, rounding towards 0.  Treat X and Y as having the
   signedness given by SGN.  Indicate in *OVERFLOW if the result
   overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_trunc (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  quotient.set_len (divmod_internal (quotient_val, 0, 0, xi.val, xi.len,
				     precision,
				     yi.val, yi.len, yi.precision,
				     sgn, overflow));
  return quotient;
}

/* Return X / Y, rounding towards 0.  Treat X and Y as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sdiv_trunc (const T1 &x, const T2 &y)
{
  return div_trunc (x, y, SIGNED);
}

/* Return X / Y, rounding towards 0.  Treat X and Y as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::udiv_trunc (const T1 &x, const T2 &y)
{
  return div_trunc (x, y, UNSIGNED);
}
/* Return X / Y, rounding towards -inf.  Treat X and Y as having the
   signedness given by SGN.  Indicate in *OVERFLOW if the result
   overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_floor (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
				     &remainder_len, remainder_val,
				     xi.val, xi.len, precision,
				     yi.val, yi.len, yi.precision, sgn,
				     overflow));
  remainder.set_len (remainder_len);
  if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn) && remainder != 0)
    return quotient - 1;
  return quotient;
}
/* Return X / Y, rounding towards -inf.  Treat X and Y as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sdiv_floor (const T1 &x, const T2 &y)
{
  return div_floor (x, y, SIGNED);
}

/* Return X / Y, rounding towards -inf.  Treat X and Y as unsigned values.  */
/* ??? Why do we have both this and udiv_trunc.  Aren't they the same?  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::udiv_floor (const T1 &x, const T2 &y)
{
  return div_floor (x, y, UNSIGNED);
}
/* Return X / Y, rounding towards +inf.  Treat X and Y as having the
   signedness given by SGN.  Indicate in *OVERFLOW if the result
   overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_ceil (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
				     &remainder_len, remainder_val,
				     xi.val, xi.len, precision,
				     yi.val, yi.len, yi.precision, sgn,
				     overflow));
  remainder.set_len (remainder_len);
  if (wi::neg_p (x, sgn) == wi::neg_p (y, sgn) && remainder != 0)
    return quotient + 1;
  return quotient;
}
/* Return X / Y, rounding towards +inf.  Treat X and Y as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::udiv_ceil (const T1 &x, const T2 &y)
{
  return div_ceil (x, y, UNSIGNED);
}
/* Return X / Y, rounding towards nearest with ties away from zero.
   Treat X and Y as having the signedness given by SGN.  Indicate
   in *OVERFLOW if the result overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_round (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
				     &remainder_len, remainder_val,
				     xi.val, xi.len, precision,
				     yi.val, yi.len, yi.precision, sgn,
				     overflow));
  remainder.set_len (remainder_len);

  if (remainder != 0)
    {
      if (sgn == SIGNED)
	{
	  WI_BINARY_RESULT (T1, T2) abs_remainder = wi::abs (remainder);
	  if (wi::geu_p (abs_remainder, wi::sub (wi::abs (y), abs_remainder)))
	    {
	      if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn))
		return quotient - 1;
	      else
		return quotient + 1;
	    }
	}
      else
	{
	  if (wi::geu_p (remainder, wi::sub (y, remainder)))
	    return quotient + 1;
	}
    }
  return quotient;
}
/* Return X / Y, rounding towards 0.  Treat X and Y as having the
   signedness given by SGN.  Store the remainder in *REMAINDER_PTR.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::divmod_trunc (const T1 &x, const T2 &y, signop sgn,
		  WI_BINARY_RESULT (T1, T2) *remainder_ptr)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
				     &remainder_len, remainder_val,
				     xi.val, xi.len, precision,
				     yi.val, yi.len, yi.precision, sgn, 0));
  remainder.set_len (remainder_len);

  *remainder_ptr = remainder;
  return quotient;
}
/* Compute the greatest common divisor of two numbers A and B using
   Euclid's algorithm.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::gcd (const T1 &a, const T2 &b, signop sgn)
{
  T1 x, y, z;

  x = wi::abs (a);
  y = wi::abs (b);

  while (gt_p (x, 0, sgn))
    {
      z = mod_trunc (y, x, sgn);
      y = x;
      x = z;
    }

  return y;
}
/* Compute X / Y, rounding towards 0, and return the remainder.
   Treat X and Y as having the signedness given by SGN.  Indicate
   in *OVERFLOW if the division overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mod_trunc (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (remainder);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  divmod_internal (0, &remainder_len, remainder_val,
		   xi.val, xi.len, precision,
		   yi.val, yi.len, yi.precision, sgn, overflow);
  remainder.set_len (remainder_len);

  return remainder;
}

/* Compute X / Y, rounding towards 0, and return the remainder.
   Treat X and Y as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::smod_trunc (const T1 &x, const T2 &y)
{
  return mod_trunc (x, y, SIGNED);
}

/* Compute X / Y, rounding towards 0, and return the remainder.
   Treat X and Y as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umod_trunc (const T1 &x, const T2 &y)
{
  return mod_trunc (x, y, UNSIGNED);
}
/* Compute X / Y, rounding towards -inf, and return the remainder.
   Treat X and Y as having the signedness given by SGN.  Indicate
   in *OVERFLOW if the division overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mod_floor (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
				     &remainder_len, remainder_val,
				     xi.val, xi.len, precision,
				     yi.val, yi.len, yi.precision, sgn,
				     overflow));
  remainder.set_len (remainder_len);

  if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn) && remainder != 0)
    return remainder + y;
  return remainder;
}

/* Compute X / Y, rounding towards -inf, and return the remainder.
   Treat X and Y as unsigned values.  */
/* ??? Why do we have both this and umod_trunc.  Aren't they the same?  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umod_floor (const T1 &x, const T2 &y)
{
  return mod_floor (x, y, UNSIGNED);
}
/* Compute X / Y, rounding towards +inf, and return the remainder.
   Treat X and Y as having the signedness given by SGN.  Indicate
   in *OVERFLOW if the division overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mod_ceil (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
				     &remainder_len, remainder_val,
				     xi.val, xi.len, precision,
				     yi.val, yi.len, yi.precision, sgn,
				     overflow));
  remainder.set_len (remainder_len);

  if (wi::neg_p (x, sgn) == wi::neg_p (y, sgn) && remainder != 0)
    return remainder - y;
  return remainder;
}
/* Compute X / Y, rounding towards nearest with ties away from zero,
   and return the remainder.  Treat X and Y as having the signedness
   given by SGN.  Indicate in *OVERFLOW if the division overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mod_round (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
				     &remainder_len, remainder_val,
				     xi.val, xi.len, precision,
				     yi.val, yi.len, yi.precision, sgn,
				     overflow));
  remainder.set_len (remainder_len);

  if (remainder != 0)
    {
      if (sgn == SIGNED)
	{
	  WI_BINARY_RESULT (T1, T2) abs_remainder = wi::abs (remainder);
	  if (wi::geu_p (abs_remainder, wi::sub (wi::abs (y), abs_remainder)))
	    {
	      if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn))
		return remainder + y;
	      else
		return remainder - y;
	    }
	}
      else
	{
	  if (wi::geu_p (remainder, wi::sub (y, remainder)))
	    return remainder - y;
	}
    }
  return remainder;
}
/* Return true if X is a multiple of Y.  Treat X and Y as having the
   signedness given by SGN.  */
template <typename T1, typename T2>
inline bool
wi::multiple_of_p (const T1 &x, const T2 &y, signop sgn)
{
  return wi::mod_trunc (x, y, sgn) == 0;
}

/* Return true if X is a multiple of Y, storing X / Y in *RES if so.
   Treat X and Y as having the signedness given by SGN.  */
template <typename T1, typename T2>
inline bool
wi::multiple_of_p (const T1 &x, const T2 &y, signop sgn,
		   WI_BINARY_RESULT (T1, T2) *res)
{
  WI_BINARY_RESULT (T1, T2) remainder;
  WI_BINARY_RESULT (T1, T2) quotient
    = divmod_trunc (x, y, sgn, &remainder);
  if (remainder == 0)
    {
      *res = quotient;
      return true;
    }
  return false;
}

/* Return X << Y.  Return 0 if Y is greater than or equal to
   the precision of X.  */
template <typename T1, typename T2>
inline WI_UNARY_RESULT (T1)
wi::lshift (const T1 &x, const T2 &y)
{
  WI_UNARY_RESULT_VAR (result, val, T1, x);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);
  /* Handle the simple cases quickly.  */
  if (geu_p (yi, precision))
    {
      val[0] = 0;
      result.set_len (1);
    }
  else
    {
      unsigned int shift = yi.to_uhwi ();
      /* For fixed-precision integers like offset_int and widest_int,
	 handle the case where the shift value is constant and the
	 result is a single nonnegative HWI (meaning that we don't
	 need to worry about val[1]).  This is particularly common
	 for converting a byte count to a bit count.

	 For variable-precision integers like wide_int, handle HWI
	 and sub-HWI integers inline.  */
      if (STATIC_CONSTANT_P (xi.precision > HOST_BITS_PER_WIDE_INT)
	  ? (STATIC_CONSTANT_P (shift < HOST_BITS_PER_WIDE_INT - 1)
	     && xi.len == 1
	     && IN_RANGE (xi.val[0], 0, HOST_WIDE_INT_MAX >> shift))
	  : precision <= HOST_BITS_PER_WIDE_INT)
	{
	  val[0] = xi.ulow () << shift;
	  result.set_len (1);
	}
      else
	result.set_len (lshift_large (val, xi.val, xi.len,
				      precision, shift));
    }
  return result;
}

/* Return X >> Y, using a logical shift.  Return 0 if Y is greater than
   or equal to the precision of X.  */
template <typename T1, typename T2>
inline WI_UNARY_RESULT (T1)
wi::lrshift (const T1 &x, const T2 &y)
{
  WI_UNARY_RESULT_VAR (result, val, T1, x);
  /* Do things in the precision of the input rather than the output,
     since the result can be no larger than that.  */
  WIDE_INT_REF_FOR (T1) xi (x);
  WIDE_INT_REF_FOR (T2) yi (y);
  /* Handle the simple cases quickly.  */
  if (geu_p (yi, xi.precision))
    {
      val[0] = 0;
      result.set_len (1);
    }
  else
    {
      unsigned int shift = yi.to_uhwi ();
      /* For fixed-precision integers like offset_int and widest_int,
	 handle the case where the shift value is constant and the
	 shifted value is a single nonnegative HWI (meaning that all
	 bits above the HWI are zero).  This is particularly common
	 for converting a bit count to a byte count.

	 For variable-precision integers like wide_int, handle HWI
	 and sub-HWI integers inline.  */
      if (STATIC_CONSTANT_P (xi.precision > HOST_BITS_PER_WIDE_INT)
	  ? (shift < HOST_BITS_PER_WIDE_INT
	     && xi.len == 1
	     && xi.val[0] >= 0)
	  : xi.precision <= HOST_BITS_PER_WIDE_INT)
	{
	  val[0] = xi.to_uhwi () >> shift;
	  result.set_len (1);
	}
      else
	result.set_len (lrshift_large (val, xi.val, xi.len, xi.precision,
				       get_precision (result), shift));
    }
  return result;
}

/* Return X >> Y, using an arithmetic shift.  Return a sign mask if
   Y is greater than or equal to the precision of X.  */
template <typename T1, typename T2>
inline WI_UNARY_RESULT (T1)
wi::arshift (const T1 &x, const T2 &y)
{
  WI_UNARY_RESULT_VAR (result, val, T1, x);
  /* Do things in the precision of the input rather than the output,
     since the result can be no larger than that.  */
  WIDE_INT_REF_FOR (T1) xi (x);
  WIDE_INT_REF_FOR (T2) yi (y);
  /* Handle the simple cases quickly.  */
  if (geu_p (yi, xi.precision))
    {
      val[0] = sign_mask (x);
      result.set_len (1);
    }
  else
    {
      unsigned int shift = yi.to_uhwi ();
      if (xi.precision <= HOST_BITS_PER_WIDE_INT)
	{
	  val[0] = sext_hwi (xi.ulow () >> shift, xi.precision - shift);
	  result.set_len (1, true);
	}
      else
	result.set_len (arshift_large (val, xi.val, xi.len, xi.precision,
				       get_precision (result), shift));
    }
  return result;
}

/* Return X >> Y, using an arithmetic shift if SGN is SIGNED and a
   logical shift otherwise.  */
template <typename T1, typename T2>
inline WI_UNARY_RESULT (T1)
wi::rshift (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == UNSIGNED)
    return lrshift (x, y);
  else
    return arshift (x, y);
}

/* Return the result of rotating the low WIDTH bits of X left by Y
   bits and zero-extending the result.  Use a full-width rotate if
   WIDTH is zero.  */
template <typename T1, typename T2>
WI_UNARY_RESULT (T1)
wi::lrotate (const T1 &x, const T2 &y, unsigned int width)
{
  unsigned int precision = get_binary_precision (x, x);
  if (width == 0)
    width = precision;
  WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
  WI_UNARY_RESULT (T1) left = wi::lshift (x, ymod);
  WI_UNARY_RESULT (T1) right = wi::lrshift (x, wi::sub (width, ymod));
  if (width != precision)
    return wi::zext (left, width) | wi::zext (right, width);
  return left | right;
}

/* Return the result of rotating the low WIDTH bits of X right by Y
   bits and zero-extending the result.  Use a full-width rotate if
   WIDTH is zero.  */
template <typename T1, typename T2>
WI_UNARY_RESULT (T1)
wi::rrotate (const T1 &x, const T2 &y, unsigned int width)
{
  unsigned int precision = get_binary_precision (x, x);
  if (width == 0)
    width = precision;
  WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
  WI_UNARY_RESULT (T1) right = wi::lrshift (x, ymod);
  WI_UNARY_RESULT (T1) left = wi::lshift (x, wi::sub (width, ymod));
  if (width != precision)
    return wi::zext (left, width) | wi::zext (right, width);
  return left | right;
}

/* Return 0 if the number of 1s in X is even and 1 if the number of 1s
   is odd.  */
inline int
wi::parity (const wide_int_ref &x)
{
  return popcount (x) & 1;
}

/* Extract WIDTH bits from X, starting at BITPOS.  */
template <typename T>
inline unsigned HOST_WIDE_INT
wi::extract_uhwi (const T &x, unsigned int bitpos, unsigned int width)
{
  unsigned precision = get_precision (x);
  if (precision < bitpos + width)
    precision = bitpos + width;
  WIDE_INT_REF_FOR (T) xi (x, precision);

  /* Handle this rare case after the above, so that we assert about
     bogus BITPOS values.  */
  if (width == 0)
    return 0;

  unsigned int start = bitpos / HOST_BITS_PER_WIDE_INT;
  unsigned int shift = bitpos % HOST_BITS_PER_WIDE_INT;
  unsigned HOST_WIDE_INT res = xi.elt (start);
  res >>= shift;
  if (shift + width > HOST_BITS_PER_WIDE_INT)
    {
      unsigned HOST_WIDE_INT upper = xi.elt (start + 1);
      res |= upper << (-shift % HOST_BITS_PER_WIDE_INT);
    }
  return zext_hwi (res, width);
}

/* Return the minimum precision needed to store X with sign SGN.  */
template <typename T>
inline unsigned int
wi::min_precision (const T &x, signop sgn)
{
  if (sgn == SIGNED)
    return get_precision (x) - clrsb (x);
  else
    return get_precision (x) - clz (x);
}

#define SIGNED_BINARY_PREDICATE(OP, F)			\
  template <typename T1, typename T2>			\
  inline WI_SIGNED_BINARY_PREDICATE_RESULT (T1, T2)	\
  OP (const T1 &x, const T2 &y)				\
  {							\
    return wi::F (x, y);				\
  }

SIGNED_BINARY_PREDICATE (operator <, lts_p)
SIGNED_BINARY_PREDICATE (operator <=, les_p)
SIGNED_BINARY_PREDICATE (operator >, gts_p)
SIGNED_BINARY_PREDICATE (operator >=, ges_p)

#undef SIGNED_BINARY_PREDICATE

#define UNARY_OPERATOR(OP, F) \
  template<typename T> \
  WI_UNARY_RESULT (generic_wide_int<T>) \
  OP (const generic_wide_int<T> &x) \
  { \
    return wi::F (x); \
  }

#define BINARY_PREDICATE(OP, F) \
  template<typename T1, typename T2> \
  WI_BINARY_PREDICATE_RESULT (T1, T2) \
  OP (const T1 &x, const T2 &y) \
  { \
    return wi::F (x, y); \
  }

#define BINARY_OPERATOR(OP, F) \
  template<typename T1, typename T2> \
  WI_BINARY_OPERATOR_RESULT (T1, T2) \
  OP (const T1 &x, const T2 &y) \
  { \
    return wi::F (x, y); \
  }

#define SHIFT_OPERATOR(OP, F) \
  template<typename T1, typename T2> \
  WI_BINARY_OPERATOR_RESULT (T1, T1) \
  OP (const T1 &x, const T2 &y) \
  { \
    return wi::F (x, y); \
  }

UNARY_OPERATOR (operator ~, bit_not)
UNARY_OPERATOR (operator -, neg)
BINARY_PREDICATE (operator ==, eq_p)
BINARY_PREDICATE (operator !=, ne_p)
BINARY_OPERATOR (operator &, bit_and)
BINARY_OPERATOR (operator |, bit_or)
BINARY_OPERATOR (operator ^, bit_xor)
BINARY_OPERATOR (operator +, add)
BINARY_OPERATOR (operator -, sub)
BINARY_OPERATOR (operator *, mul)
SHIFT_OPERATOR (operator <<, lshift)

#undef UNARY_OPERATOR
#undef BINARY_PREDICATE
#undef BINARY_OPERATOR
#undef SHIFT_OPERATOR

template <typename T1, typename T2>
inline WI_SIGNED_SHIFT_RESULT (T1, T2)
operator >> (const T1 &x, const T2 &y)
{
  return wi::arshift (x, y);
}

template <typename T1, typename T2>
inline WI_SIGNED_SHIFT_RESULT (T1, T2)
operator / (const T1 &x, const T2 &y)
{
  return wi::sdiv_trunc (x, y);
}

template <typename T1, typename T2>
inline WI_SIGNED_SHIFT_RESULT (T1, T2)
operator % (const T1 &x, const T2 &y)
{
  return wi::smod_trunc (x, y);
}

template<typename T>
void
gt_ggc_mx (generic_wide_int <T> *)
{
}

template<typename T>
void
gt_pch_nx (generic_wide_int <T> *)
{
}

template<typename T>
void
gt_pch_nx (generic_wide_int <T> *, void (*) (void *, void *), void *)
{
}

template<int N>
void
gt_ggc_mx (trailing_wide_ints <N> *)
{
}

template<int N>
void
gt_pch_nx (trailing_wide_ints <N> *)
{
}

template<int N>
void
gt_pch_nx (trailing_wide_ints <N> *, void (*) (void *, void *), void *)
{
}

namespace wi
{
  /* Used for overloaded functions in which the only other acceptable
     scalar type is a pointer.  It stops a plain 0 from being treated
     as a null pointer.  */
  struct never_used1 {};
  struct never_used2 {};

  wide_int min_value (unsigned int, signop);
  wide_int min_value (never_used1 *);
  wide_int min_value (never_used2 *);
  wide_int max_value (unsigned int, signop);
  wide_int max_value (never_used1 *);
  wide_int max_value (never_used2 *);

  /* FIXME: this is target dependent, so should be elsewhere.
     It also seems to assume that CHAR_BIT == BITS_PER_UNIT.  */
  wide_int from_buffer (const unsigned char *, unsigned int);

#ifndef GENERATOR_FILE
  void to_mpz (const wide_int_ref &, mpz_t, signop);
#endif

  wide_int mask (unsigned int, bool, unsigned int);
  wide_int shifted_mask (unsigned int, unsigned int, bool, unsigned int);
  wide_int set_bit_in_zero (unsigned int, unsigned int);
  wide_int insert (const wide_int &x, const wide_int &y, unsigned int,
		   unsigned int);
  wide_int round_down_for_mask (const wide_int &, const wide_int &);
  wide_int round_up_for_mask (const wide_int &, const wide_int &);

  wide_int mod_inv (const wide_int &a, const wide_int &b);

  template <typename T>
  T mask (unsigned int, bool);

  template <typename T>
  T shifted_mask (unsigned int, unsigned int, bool);

  template <typename T>
  T set_bit_in_zero (unsigned int);

  unsigned int mask (HOST_WIDE_INT *, unsigned int, bool, unsigned int);
  unsigned int shifted_mask (HOST_WIDE_INT *, unsigned int, unsigned int,
			     bool, unsigned int);
  unsigned int from_array (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			   unsigned int, unsigned int, bool);
}

/* Return a PRECISION-bit integer in which the low WIDTH bits are set
   and the other bits are clear, or the inverse if NEGATE_P.  */
inline wide_int
wi::mask (unsigned int width, bool negate_p, unsigned int precision)
{
  wide_int result = wide_int::create (precision);
  result.set_len (mask (result.write_val (), width, negate_p, precision));
  return result;
}

/* Return a PRECISION-bit integer in which the low START bits are clear,
   the next WIDTH bits are set, and the other bits are clear,
   or the inverse if NEGATE_P.  */
inline wide_int
wi::shifted_mask (unsigned int start, unsigned int width, bool negate_p,
		  unsigned int precision)
{
  wide_int result = wide_int::create (precision);
  result.set_len (shifted_mask (result.write_val (), start, width, negate_p,
				precision));
  return result;
}

/* Return a PRECISION-bit integer in which bit BIT is set and all the
   others are clear.  */
inline wide_int
wi::set_bit_in_zero (unsigned int bit, unsigned int precision)
{
  return shifted_mask (bit, 1, false, precision);
}

/* Return an integer of type T in which the low WIDTH bits are set
   and the other bits are clear, or the inverse if NEGATE_P.  */
template <typename T>
inline T
wi::mask (unsigned int width, bool negate_p)
{
  STATIC_ASSERT (wi::int_traits<T>::precision);
  T result;
  result.set_len (mask (result.write_val (), width, negate_p,
			wi::int_traits <T>::precision));
  return result;
}

/* Return an integer of type T in which the low START bits are clear,
   the next WIDTH bits are set, and the other bits are clear, or the
   inverse if NEGATE_P.  */
template <typename T>
inline T
wi::shifted_mask (unsigned int start, unsigned int width, bool negate_p)
{
  STATIC_ASSERT (wi::int_traits<T>::precision);
  T result;
  result.set_len (shifted_mask (result.write_val (), start, width,
				negate_p,
				wi::int_traits <T>::precision));
  return result;
}

/* Return an integer of type T in which bit BIT is set and all the
   others are clear.  */
template <typename T>
inline T
wi::set_bit_in_zero (unsigned int bit)
{
  return shifted_mask <T> (bit, 1, false);
}

/* Accumulate a set of overflows into OVERFLOW.  */
inline void
wi::accumulate_overflow (wi::overflow_type &overflow,
			 wi::overflow_type suboverflow)
{
  if (!suboverflow)
    return;
  if (!overflow)
    overflow = suboverflow;
  else if (overflow != suboverflow)
    overflow = wi::OVF_UNKNOWN;
}

#endif /* WIDE_INT_H */