PR tree-optimization/86415 - strlen() not folded for substrings within constant arrays
[official-gcc.git] / gcc / wide-int.h
/* Operations with very long integers.  -*- C++ -*-
   Copyright (C) 2012-2018 Free Software Foundation, Inc.

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the
Free Software Foundation; either version 3, or (at your option) any
later version.

GCC is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */
#ifndef WIDE_INT_H
#define WIDE_INT_H
/* wide-int.[cc|h] implements a class that efficiently performs
   mathematical operations on finite precision integers.  wide_ints
   are designed to be transient - they are not for long term storage
   of values.  There is tight integration between wide_ints and the
   other longer storage GCC representations (rtl and tree).

   The actual precision of a wide_int depends on the flavor.  There
   are three predefined flavors:

     1) wide_int (the default).  This flavor does the math in the
     precision of its input arguments.  It is assumed (and checked)
     that the precisions of the operands and results are consistent.
     This is the most efficient flavor.  It is not possible to examine
     bits above the precision that has been specified.  Because of
     this, the default flavor has semantics that are simple to
     understand and in general model the underlying hardware that the
     compiler is targeted for.

     This flavor must be used at the RTL level of gcc because there
     is, in general, not enough information in the RTL representation
     to extend a value beyond the precision specified in the mode.

     This flavor should also be used at the TREE and GIMPLE levels of
     the compiler except for the circumstances described in the
     descriptions of the other two flavors.

     The default wide_int representation does not contain any inherent
     information about the signedness of the represented value,
     so it can be used to represent both signed and unsigned numbers.
     For operations where the results depend on signedness (full width
     multiply, division, shifts, comparisons, and operations that need
     overflow detected), the signedness must be specified separately.
     2) offset_int.  This is a fixed-precision integer that can hold
     any address offset, measured in either bits or bytes, with at
     least one extra sign bit.  At the moment the maximum address
     size GCC supports is 64 bits.  With 8-bit bytes and an extra
     sign bit, offset_int therefore needs to have at least 68 bits
     of precision.  We round this up to 128 bits for efficiency.
     Values of type T are converted to this precision by sign- or
     zero-extending them based on the signedness of T.

     The extra sign bit means that offset_int is effectively a signed
     128-bit integer, i.e. it behaves like int128_t.

     Since the values are logically signed, there is no need to
     distinguish between signed and unsigned operations.  Sign-sensitive
     comparison operators <, <=, > and >= are therefore supported.
     Shift operators << and >> are also supported, with >> being
     an _arithmetic_ right shift.

     [ Note that, even though offset_int is effectively int128_t,
       it can still be useful to use unsigned comparisons like
       wi::leu_p (a, b) as a more efficient short-hand for
       "a >= 0 && a <= b". ]
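
   That shorthand can be illustrated outside GCC with plain 64-bit
   integers (a standalone sketch; the function names are hypothetical,
   but the trick is the same one wi::leu_p exploits):

```cpp
#include <cassert>
#include <cstdint>

// For a signed value A and a non-negative bound B, a single unsigned
// comparison is equivalent to the two signed checks "a >= 0 && a <= b":
// a negative A reinterprets as a huge unsigned value and fails the test.
bool in_range_two_checks (int64_t a, int64_t b)
{
  return a >= 0 && a <= b;
}

bool in_range_unsigned (int64_t a, int64_t b)
{
  return (uint64_t) a <= (uint64_t) b;
}
```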
     3) widest_int.  This representation is an approximation of
     infinite precision math.  However, it is not really infinite
     precision math as in the GMP library.  It is really finite
     precision math where the precision is 4 times the size of the
     largest integer that the target port can represent.

     Like offset_int, widest_int is wider than all the values that
     it needs to represent, so the integers are logically signed.
     Sign-sensitive comparison operators <, <=, > and >= are supported,
     as are << and >>.

     There are several places in GCC where this should/must be used:

       * Code that does induction variable optimizations.  This code
	 works with induction variables of many different types at the
	 same time.  Because of this, it ends up doing many different
	 calculations where the operands are not compatible types.  The
	 widest_int makes this easy, because it provides a field where
	 nothing is lost when converting from any variable.

       * There are a small number of passes that currently use the
	 widest_int that should use the default.  These should be
	 changed.
   There are surprising features of offset_int and widest_int
   that users should be careful about:

     1) Shifts and rotations are just weird.  You have to specify a
     precision in which the shift or rotate is to happen.  The bits
     above this precision are zeroed.  While this is what you
     want, it is clearly non-obvious.

     2) Larger precision math sometimes does not produce the same
     answer as would be expected for doing the math at the proper
     precision.  In particular, a multiply followed by a divide will
     produce a different answer if the first product is larger than
     what can be represented in the input precision.
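
   Caveat 2 can be demonstrated standalone with narrow host integers
   standing in for narrow-precision wide_ints (a sketch, not GCC code;
   the function names are hypothetical):

```cpp
#include <cassert>
#include <cstdint>

// Multiply-then-divide done at the operands' own 8-bit precision:
// the intermediate product wraps modulo 2^8 before the division,
// as it would in 8-bit hardware.  (The narrowing cast is modular on
// all mainstream compilers, and defined to be so since C++20.)
int8_t muldiv_narrow (int8_t a, int8_t b, int8_t c)
{
  return (int8_t) ((int8_t) (a * b) / c);
}

// The same computation at a wider precision, widest_int-style:
// the intermediate product is kept exactly, so the answer differs
// whenever the product overflows 8 bits.
int muldiv_wide (int8_t a, int8_t b, int8_t c)
{
  return a * b / c;
}
```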
   The offset_int and widest_int flavors are more expensive
   than the default wide_int, so in addition to the caveats with these
   two, the default is the preferred representation.
   All three flavors of wide_int are represented as a vector of
   HOST_WIDE_INTs.  The default and widest_int vectors contain enough elements
   to hold a value of MAX_BITSIZE_MODE_ANY_INT bits.  offset_int contains only
   enough elements to hold ADDR_MAX_PRECISION bits.  The values are stored
   in the vector with the least significant HOST_BITS_PER_WIDE_INT bits
   in element 0.

   The default wide_int contains three fields: the vector (VAL),
   the precision and a length (LEN).  The length is the number of HWIs
   needed to represent the value.  widest_int and offset_int have a
   constant precision that cannot be changed, so they only store the
   VAL and LEN fields.
   Since most integers used in a compiler are small values, it is
   generally profitable to use a representation of the value that is
   as small as possible.  LEN is used to indicate the number of
   elements of the vector that are in use.  The numbers are stored as
   sign extended numbers as a means of compression.  Leading
   HOST_WIDE_INTs that contain strings of either -1 or 0 are removed
   as long as they can be reconstructed from the top bit that is being
   represented.
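
   The compression rule can be sketched standalone (a hypothetical
   helper, not GCC's implementation, with int64_t standing in for
   HOST_WIDE_INT):

```cpp
#include <cassert>
#include <cstdint>

// Drop leading blocks that are pure sign extensions of the block
// below them: a block equal to 0 (or -1) carries no information if
// the top bit of the preceding block already implies 0 (or -1).
unsigned int canonical_len (const int64_t *val, unsigned int len)
{
  while (len > 1)
    {
      int64_t top = val[len - 1];
      int64_t implied = val[len - 2] < 0 ? -1 : 0;
      if (top != implied)
	break;
      --len;
    }
  return len;
}
```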
   The precision and length of a wide_int are always greater than 0.
   Any bits in a wide_int above the precision are sign-extended from the
   most significant bit.  For example, a 4-bit value 0x8 is represented as
   VAL = { 0xf...fff8 }.  However, as an optimization, we allow other integer
   constants to be represented with undefined bits above the precision.
   This allows INTEGER_CSTs to be pre-extended according to TYPE_SIGN,
   so that the INTEGER_CST representation can be used both in TYPE_PRECISION
   and in wider precisions.
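
   The canonical sign-extension rule can be modeled standalone for a
   single block (a sketch mirroring what GCC's sext_hwi does; the
   function name here is hypothetical):

```cpp
#include <cassert>
#include <cstdint>

// Fill the bits above PRECISION with copies of bit PRECISION - 1 by
// shifting the valid bits to the top of the block and arithmetically
// shifting them back down.  (Arithmetic right shift of a negative
// value is guaranteed since C++20 and universal in practice.)
int64_t sign_extend (uint64_t x, unsigned int precision)
{
  unsigned int shift = 64 - precision;
  return (int64_t) (x << shift) >> shift;
}
```

   For example, the 4-bit value 0x8 extends to the all-ones-prefixed
   block 0xf...fff8, i.e. -8.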
   There are constructors to create the various forms of wide_int from
   trees, rtl and constants.  For trees the options are:

	     tree t = ...;
	     wi::to_wide (t)	 // Treat T as a wide_int
	     wi::to_offset (t)	 // Treat T as an offset_int
	     wi::to_widest (t)	 // Treat T as a widest_int

   All three are light-weight accessors that should have no overhead
   in release builds.  If it is useful for readability reasons to
   store the result in a temporary variable, the preferred method is:

	     wi::tree_to_wide_ref twide = wi::to_wide (t);
	     wi::tree_to_offset_ref toffset = wi::to_offset (t);
	     wi::tree_to_widest_ref twidest = wi::to_widest (t);

   To make an rtx into a wide_int, you have to pair it with a mode.
   The canonical way to do this is with rtx_mode_t as in:

	     rtx r = ...
	     wide_int x = rtx_mode_t (r, mode);

   Similarly, a wide_int can only be constructed from a host value if
   the target precision is given explicitly, such as in:

	     wide_int x = wi::shwi (c, prec);	// sign-extend C if necessary
	     wide_int y = wi::uhwi (c, prec);	// zero-extend C if necessary

   However, offset_int and widest_int have an inherent precision and so
   can be initialized directly from a host value:

	     offset_int x = (int) c;	      // sign-extend C
	     widest_int x = (unsigned int) c; // zero-extend C

   It is also possible to do arithmetic directly on rtx_mode_ts and
   constants.  For example:

	     wi::add (r1, r2);	  // add equal-sized rtx_mode_ts r1 and r2
	     wi::add (r1, 1);	  // add 1 to rtx_mode_t r1
	     wi::lshift (1, 100); // 1 << 100 as a widest_int
   Many binary operations place restrictions on the combinations of inputs,
   using the following rules:

     - {rtx, wide_int} op {rtx, wide_int} -> wide_int
       The inputs must be the same precision.  The result is a wide_int
       of the same precision.

     - {rtx, wide_int} op (un)signed HOST_WIDE_INT -> wide_int
       (un)signed HOST_WIDE_INT op {rtx, wide_int} -> wide_int
       The HOST_WIDE_INT is extended or truncated to the precision of
       the other input.  The result is a wide_int of the same precision
       as that input.

     - (un)signed HOST_WIDE_INT op (un)signed HOST_WIDE_INT -> widest_int
       The inputs are extended to widest_int precision and produce a
       widest_int result.

     - offset_int op offset_int -> offset_int
       offset_int op (un)signed HOST_WIDE_INT -> offset_int
       (un)signed HOST_WIDE_INT op offset_int -> offset_int

     - widest_int op widest_int -> widest_int
       widest_int op (un)signed HOST_WIDE_INT -> widest_int
       (un)signed HOST_WIDE_INT op widest_int -> widest_int

   Other combinations like:

     - widest_int op offset_int and
     - wide_int op offset_int

   are not allowed.  The inputs should instead be extended or truncated
   so that they match.

   The inputs to comparison functions like wi::eq_p and wi::lts_p
   follow the same compatibility rules, although their return types
   are different.  Unary functions on X produce the same result as
   a binary operation X + X.  Shift functions X op Y also produce
   the same result as X + X; the precision of the shift amount Y
   can be arbitrarily different from X.  */
/* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very
   early examination of the target's mode file.  The WIDE_INT_MAX_ELTS
   can accommodate at least 1 more bit so that unsigned numbers of that
   mode can be represented as a signed value.  Note that it is still
   possible to create fixed_wide_ints that have precisions greater than
   MAX_BITSIZE_MODE_ANY_INT.  This can be useful when representing a
   double-width multiplication result, for example.  */
#define WIDE_INT_MAX_ELTS \
  ((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT) / HOST_BITS_PER_WIDE_INT)

#define WIDE_INT_MAX_PRECISION (WIDE_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
/* This is the max size of any pointer on any machine.  It does not
   seem to be as easy to sniff this out of the machine description as
   it is for MAX_BITSIZE_MODE_ANY_INT since targets may support
   multiple address sizes and may have different address sizes for
   different address spaces.  However, currently the largest pointer
   on any platform is 64 bits.  When that changes, then it is likely
   that a target hook should be defined so that targets can make this
   value larger for those targets.  */
#define ADDR_MAX_BITSIZE 64
/* This is the internal precision used when doing any address
   arithmetic.  The '4' is really 3 + 1.  Three of the bits are for
   the number of extra bits needed to do bit addresses and the other bit
   is to allow everything to be signed without losing any precision.
   Then everything is rounded up to the next HWI for efficiency.  */
#define ADDR_MAX_PRECISION \
  ((ADDR_MAX_BITSIZE + 4 + HOST_BITS_PER_WIDE_INT - 1) \
   & ~(HOST_BITS_PER_WIDE_INT - 1))
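
The round-up in the macro can be re-derived standalone with the
constants spelled out (a sketch assuming the usual 64-bit HWI; the
lowercase names are local stand-ins, not GCC identifiers):

```cpp
// (64 + 4 + 63) & ~63 rounds 68 bits up to the next multiple of the
// HWI width, giving offset_int its 128-bit precision.
constexpr unsigned int addr_max_bitsize = 64;
constexpr unsigned int hwi_bits = 64;
constexpr unsigned int addr_max_precision
  = (addr_max_bitsize + 4 + hwi_bits - 1) & ~(hwi_bits - 1);
```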
/* The number of HWIs needed to store an offset_int.  */
#define OFFSET_INT_ELTS (ADDR_MAX_PRECISION / HOST_BITS_PER_WIDE_INT)
/* The type of result produced by a binary operation on types T1 and T2.
   Defined purely for brevity.  */
#define WI_BINARY_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::result_type

/* Likewise for binary operators, which excludes the case in which neither
   T1 nor T2 is a wide-int-based type.  */
#define WI_BINARY_OPERATOR_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::operator_result

/* The type of result produced by T1 << T2.  Leads to substitution failure
   if the operation isn't supported.  Defined purely for brevity.  */
#define WI_SIGNED_SHIFT_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::signed_shift_result_type

/* The type of result produced by a sign-agnostic binary predicate on
   types T1 and T2.  This is bool if wide-int operations make sense for
   T1 and T2 and leads to substitution failure otherwise.  */
#define WI_BINARY_PREDICATE_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::predicate_result

/* The type of result produced by a signed binary predicate on types T1 and T2.
   This is bool if signed comparisons make sense for T1 and T2 and leads to
   substitution failure otherwise.  */
#define WI_SIGNED_BINARY_PREDICATE_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::signed_predicate_result

/* The type of result produced by a unary operation on type T.  */
#define WI_UNARY_RESULT(T) \
  typename wi::binary_traits <T, T>::result_type

/* Define a variable RESULT to hold the result of a binary operation on
   X and Y, which have types T1 and T2 respectively.  Define VAL to
   point to the blocks of RESULT.  Once the user of the macro has
   filled in VAL, it should call RESULT.set_len to set the number
   of initialized blocks.  */
#define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
  WI_BINARY_RESULT (T1, T2) RESULT = \
    wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
  HOST_WIDE_INT *VAL = RESULT.write_val ()

/* Similar for the result of a unary operation on X, which has type T.  */
#define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
  WI_UNARY_RESULT (T) RESULT = \
    wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
  HOST_WIDE_INT *VAL = RESULT.write_val ()
template <typename T> class generic_wide_int;
template <int N> class fixed_wide_int_storage;
class wide_int_storage;

/* An N-bit integer.  Until we can use typedef templates, use this instead.  */
#define FIXED_WIDE_INT(N) \
  generic_wide_int < fixed_wide_int_storage <N> >

typedef generic_wide_int <wide_int_storage> wide_int;
typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) offset_int;
typedef FIXED_WIDE_INT (WIDE_INT_MAX_PRECISION) widest_int;
/* wi::storage_ref can be a reference to a primitive type,
   so this is the conservatively-correct setting.  */
template <bool SE, bool HDP = true>
struct wide_int_ref_storage;

typedef generic_wide_int <wide_int_ref_storage <false> > wide_int_ref;

/* This can be used instead of wide_int_ref if the referenced value is
   known to have type T.  It carries across properties of T's representation,
   such as whether excess upper bits in a HWI are defined, and can therefore
   help avoid redundant work.

   The macro could be replaced with a template typedef, once we're able
   to use those.  */
#define WIDE_INT_REF_FOR(T) \
  generic_wide_int \
    <wide_int_ref_storage <wi::int_traits <T>::is_sign_extended, \
			   wi::int_traits <T>::host_dependent_precision> >
namespace wi
{
  /* Operations that calculate overflow do so even for
     TYPE_OVERFLOW_WRAPS types.  For example, adding 1 to +MAX_INT in
     an unsigned int is 0 and does not overflow in C/C++, but wi::add
     will set the overflow argument in case it's needed for further
     analysis.

     For operations that require overflow, these are the different
     types of overflow.  */
  enum overflow_type {
    OVF_NONE = 0,
    OVF_UNDERFLOW = -1,
    OVF_OVERFLOW = 1,
    /* There was an overflow, but we are unsure whether it was an
       overflow or an underflow.  */
    OVF_UNKNOWN = 2
  };

  /* Classifies an integer based on its precision.  */
  enum precision_type {
    /* The integer has both a precision and defined signedness.  This allows
       the integer to be converted to any width, since we know whether to fill
       any extra bits with zeros or signs.  */
    FLEXIBLE_PRECISION,

    /* The integer has a variable precision but no defined signedness.  */
    VAR_PRECISION,

    /* The integer has a constant precision (known at GCC compile time)
       and is signed.  */
    CONST_PRECISION
  };

  /* This class, which has no default implementation, is expected to
     provide the following members:

     static const enum precision_type precision_type;
       Classifies the type of T.

     static const unsigned int precision;
       Only defined if precision_type == CONST_PRECISION.  Specifies the
       precision of all integers of type T.

     static const bool host_dependent_precision;
       True if the precision of T depends (or can depend) on the host.

     static unsigned int get_precision (const T &x)
       Return the number of bits in X.

     static wi::storage_ref *decompose (HOST_WIDE_INT *scratch,
					unsigned int precision, const T &x)
       Decompose X as a PRECISION-bit integer, returning the associated
       wi::storage_ref.  SCRATCH is available as scratch space if needed.
       The routine should assert that PRECISION is acceptable.  */
  template <typename T> struct int_traits;

  /* This class provides a single type, result_type, which specifies the
     type of integer produced by a binary operation whose inputs have
     types T1 and T2.  The definition should be symmetric.  */
  template <typename T1, typename T2,
	    enum precision_type P1 = int_traits <T1>::precision_type,
	    enum precision_type P2 = int_traits <T2>::precision_type>
  struct binary_traits;

  /* Specify the result type for each supported combination of binary
     inputs.  Note that CONST_PRECISION and VAR_PRECISION cannot be
     mixed, in order to give stronger type checking.  When both inputs
     are CONST_PRECISION, they must have the same precision.  */
  template <typename T1, typename T2>
  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
  {
    typedef widest_int result_type;
    /* Don't define operators for this combination.  */
  };

  template <typename T1, typename T2>
  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, VAR_PRECISION>
  {
    typedef wide_int result_type;
    typedef result_type operator_result;
    typedef bool predicate_result;
  };

  template <typename T1, typename T2>
  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, CONST_PRECISION>
  {
    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
       so as not to confuse gengtype.  */
    typedef generic_wide_int < fixed_wide_int_storage
			       <int_traits <T2>::precision> > result_type;
    typedef result_type operator_result;
    typedef bool predicate_result;
    typedef result_type signed_shift_result_type;
    typedef bool signed_predicate_result;
  };

  template <typename T1, typename T2>
  struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
  {
    typedef wide_int result_type;
    typedef result_type operator_result;
    typedef bool predicate_result;
  };

  template <typename T1, typename T2>
  struct binary_traits <T1, T2, CONST_PRECISION, FLEXIBLE_PRECISION>
  {
    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
       so as not to confuse gengtype.  */
    typedef generic_wide_int < fixed_wide_int_storage
			       <int_traits <T1>::precision> > result_type;
    typedef result_type operator_result;
    typedef bool predicate_result;
    typedef result_type signed_shift_result_type;
    typedef bool signed_predicate_result;
  };

  template <typename T1, typename T2>
  struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
  {
    STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
       so as not to confuse gengtype.  */
    typedef generic_wide_int < fixed_wide_int_storage
			       <int_traits <T1>::precision> > result_type;
    typedef result_type operator_result;
    typedef bool predicate_result;
    typedef result_type signed_shift_result_type;
    typedef bool signed_predicate_result;
  };

  template <typename T1, typename T2>
  struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
  {
    typedef wide_int result_type;
    typedef result_type operator_result;
    typedef bool predicate_result;
  };
}
/* Public functions for querying and operating on integers.  */
namespace wi
{
  template <typename T>
  unsigned int get_precision (const T &);

  template <typename T1, typename T2>
  unsigned int get_binary_precision (const T1 &, const T2 &);

  template <typename T1, typename T2>
  void copy (T1 &, const T2 &);

#define UNARY_PREDICATE \
  template <typename T> bool
#define UNARY_FUNCTION \
  template <typename T> WI_UNARY_RESULT (T)
#define BINARY_PREDICATE \
  template <typename T1, typename T2> bool
#define BINARY_FUNCTION \
  template <typename T1, typename T2> WI_BINARY_RESULT (T1, T2)
#define SHIFT_FUNCTION \
  template <typename T1, typename T2> WI_UNARY_RESULT (T1)

  UNARY_PREDICATE fits_shwi_p (const T &);
  UNARY_PREDICATE fits_uhwi_p (const T &);
  UNARY_PREDICATE neg_p (const T &, signop = SIGNED);

  template <typename T>
  HOST_WIDE_INT sign_mask (const T &);

  BINARY_PREDICATE eq_p (const T1 &, const T2 &);
  BINARY_PREDICATE ne_p (const T1 &, const T2 &);
  BINARY_PREDICATE lt_p (const T1 &, const T2 &, signop);
  BINARY_PREDICATE lts_p (const T1 &, const T2 &);
  BINARY_PREDICATE ltu_p (const T1 &, const T2 &);
  BINARY_PREDICATE le_p (const T1 &, const T2 &, signop);
  BINARY_PREDICATE les_p (const T1 &, const T2 &);
  BINARY_PREDICATE leu_p (const T1 &, const T2 &);
  BINARY_PREDICATE gt_p (const T1 &, const T2 &, signop);
  BINARY_PREDICATE gts_p (const T1 &, const T2 &);
  BINARY_PREDICATE gtu_p (const T1 &, const T2 &);
  BINARY_PREDICATE ge_p (const T1 &, const T2 &, signop);
  BINARY_PREDICATE ges_p (const T1 &, const T2 &);
  BINARY_PREDICATE geu_p (const T1 &, const T2 &);

  template <typename T1, typename T2>
  int cmp (const T1 &, const T2 &, signop);

  template <typename T1, typename T2>
  int cmps (const T1 &, const T2 &);

  template <typename T1, typename T2>
  int cmpu (const T1 &, const T2 &);

  UNARY_FUNCTION bit_not (const T &);
  UNARY_FUNCTION neg (const T &);
  UNARY_FUNCTION neg (const T &, overflow_type *);
  UNARY_FUNCTION abs (const T &);
  UNARY_FUNCTION ext (const T &, unsigned int, signop);
  UNARY_FUNCTION sext (const T &, unsigned int);
  UNARY_FUNCTION zext (const T &, unsigned int);
  UNARY_FUNCTION set_bit (const T &, unsigned int);

  BINARY_FUNCTION min (const T1 &, const T2 &, signop);
  BINARY_FUNCTION smin (const T1 &, const T2 &);
  BINARY_FUNCTION umin (const T1 &, const T2 &);
  BINARY_FUNCTION max (const T1 &, const T2 &, signop);
  BINARY_FUNCTION smax (const T1 &, const T2 &);
  BINARY_FUNCTION umax (const T1 &, const T2 &);

  BINARY_FUNCTION bit_and (const T1 &, const T2 &);
  BINARY_FUNCTION bit_and_not (const T1 &, const T2 &);
  BINARY_FUNCTION bit_or (const T1 &, const T2 &);
  BINARY_FUNCTION bit_or_not (const T1 &, const T2 &);
  BINARY_FUNCTION bit_xor (const T1 &, const T2 &);
  BINARY_FUNCTION add (const T1 &, const T2 &);
  BINARY_FUNCTION add (const T1 &, const T2 &, signop, overflow_type *);
  BINARY_FUNCTION sub (const T1 &, const T2 &);
  BINARY_FUNCTION sub (const T1 &, const T2 &, signop, overflow_type *);
  BINARY_FUNCTION mul (const T1 &, const T2 &);
  BINARY_FUNCTION mul (const T1 &, const T2 &, signop, overflow_type *);
  BINARY_FUNCTION smul (const T1 &, const T2 &, overflow_type *);
  BINARY_FUNCTION umul (const T1 &, const T2 &, overflow_type *);
  BINARY_FUNCTION mul_high (const T1 &, const T2 &, signop);
  BINARY_FUNCTION div_trunc (const T1 &, const T2 &, signop,
			     overflow_type * = 0);
  BINARY_FUNCTION sdiv_trunc (const T1 &, const T2 &);
  BINARY_FUNCTION udiv_trunc (const T1 &, const T2 &);
  BINARY_FUNCTION div_floor (const T1 &, const T2 &, signop,
			     overflow_type * = 0);
  BINARY_FUNCTION udiv_floor (const T1 &, const T2 &);
  BINARY_FUNCTION sdiv_floor (const T1 &, const T2 &);
  BINARY_FUNCTION div_ceil (const T1 &, const T2 &, signop,
			    overflow_type * = 0);
  BINARY_FUNCTION udiv_ceil (const T1 &, const T2 &);
  BINARY_FUNCTION div_round (const T1 &, const T2 &, signop,
			     overflow_type * = 0);
  BINARY_FUNCTION divmod_trunc (const T1 &, const T2 &, signop,
				WI_BINARY_RESULT (T1, T2) *);
  BINARY_FUNCTION gcd (const T1 &, const T2 &, signop = UNSIGNED);
  BINARY_FUNCTION mod_trunc (const T1 &, const T2 &, signop,
			     overflow_type * = 0);
  BINARY_FUNCTION smod_trunc (const T1 &, const T2 &);
  BINARY_FUNCTION umod_trunc (const T1 &, const T2 &);
  BINARY_FUNCTION mod_floor (const T1 &, const T2 &, signop,
			     overflow_type * = 0);
  BINARY_FUNCTION umod_floor (const T1 &, const T2 &);
  BINARY_FUNCTION mod_ceil (const T1 &, const T2 &, signop,
			    overflow_type * = 0);
  BINARY_FUNCTION mod_round (const T1 &, const T2 &, signop,
			     overflow_type * = 0);

  template <typename T1, typename T2>
  bool multiple_of_p (const T1 &, const T2 &, signop);

  template <typename T1, typename T2>
  bool multiple_of_p (const T1 &, const T2 &, signop,
		      WI_BINARY_RESULT (T1, T2) *);

  SHIFT_FUNCTION lshift (const T1 &, const T2 &);
  SHIFT_FUNCTION lrshift (const T1 &, const T2 &);
  SHIFT_FUNCTION arshift (const T1 &, const T2 &);
  SHIFT_FUNCTION rshift (const T1 &, const T2 &, signop sgn);
  SHIFT_FUNCTION lrotate (const T1 &, const T2 &, unsigned int = 0);
  SHIFT_FUNCTION rrotate (const T1 &, const T2 &, unsigned int = 0);

#undef SHIFT_FUNCTION
#undef BINARY_PREDICATE
#undef BINARY_FUNCTION
#undef UNARY_PREDICATE
#undef UNARY_FUNCTION

  bool only_sign_bit_p (const wide_int_ref &, unsigned int);
  bool only_sign_bit_p (const wide_int_ref &);
  int clz (const wide_int_ref &);
  int clrsb (const wide_int_ref &);
  int ctz (const wide_int_ref &);
  int exact_log2 (const wide_int_ref &);
  int floor_log2 (const wide_int_ref &);
  int ffs (const wide_int_ref &);
  int popcount (const wide_int_ref &);
  int parity (const wide_int_ref &);

  template <typename T>
  unsigned HOST_WIDE_INT extract_uhwi (const T &, unsigned int, unsigned int);

  template <typename T>
  unsigned int min_precision (const T &, signop);

  static inline void accumulate_overflow (overflow_type &, overflow_type);
}
namespace wi
{
  /* Contains the components of a decomposed integer for easy, direct
     access.  */
  struct storage_ref
  {
    storage_ref () {}
    storage_ref (const HOST_WIDE_INT *, unsigned int, unsigned int);

    const HOST_WIDE_INT *val;
    unsigned int len;
    unsigned int precision;

    /* Provide enough trappings for this class to act as storage for
       generic_wide_int.  */
    unsigned int get_len () const;
    unsigned int get_precision () const;
    const HOST_WIDE_INT *get_val () const;
  };
}

inline::wi::storage_ref::storage_ref (const HOST_WIDE_INT *val_in,
				      unsigned int len_in,
				      unsigned int precision_in)
  : val (val_in), len (len_in), precision (precision_in)
{
}

inline unsigned int
wi::storage_ref::get_len () const
{
  return len;
}

inline unsigned int
wi::storage_ref::get_precision () const
{
  return precision;
}

inline const HOST_WIDE_INT *
wi::storage_ref::get_val () const
{
  return val;
}
/* This class defines an integer type using the storage provided by the
   template argument.  The storage class must provide the following
   functions:

   unsigned int get_precision () const
     Return the number of bits in the integer.

   HOST_WIDE_INT *get_val () const
     Return a pointer to the array of blocks that encodes the integer.

   unsigned int get_len () const
     Return the number of blocks in get_val ().  If this is smaller
     than the number of blocks implied by get_precision (), the
     remaining blocks are sign extensions of block get_len () - 1.

   Although not required by generic_wide_int itself, writable storage
   classes can also provide the following functions:

   HOST_WIDE_INT *write_val ()
     Get a modifiable version of get_val ().

   unsigned int set_len (unsigned int len)
     Set the value returned by get_len () to LEN.  */
template <typename storage>
class GTY(()) generic_wide_int : public storage
{
public:
  generic_wide_int ();

  template <typename T>
  generic_wide_int (const T &);

  template <typename T>
  generic_wide_int (const T &, unsigned int);

  /* Conversions.  */
  HOST_WIDE_INT to_shwi (unsigned int) const;
  HOST_WIDE_INT to_shwi () const;
  unsigned HOST_WIDE_INT to_uhwi (unsigned int) const;
  unsigned HOST_WIDE_INT to_uhwi () const;
  HOST_WIDE_INT to_short_addr () const;

  /* Public accessors for the interior of a wide int.  */
  HOST_WIDE_INT sign_mask () const;
  HOST_WIDE_INT elt (unsigned int) const;
  unsigned HOST_WIDE_INT ulow () const;
  unsigned HOST_WIDE_INT uhigh () const;
  HOST_WIDE_INT slow () const;
  HOST_WIDE_INT shigh () const;

  template <typename T>
  generic_wide_int &operator = (const T &);

#define ASSIGNMENT_OPERATOR(OP, F) \
  template <typename T> \
  generic_wide_int &OP (const T &c) { return (*this = wi::F (*this, c)); }

/* Restrict these to cases where the shift operator is defined.  */
#define SHIFT_ASSIGNMENT_OPERATOR(OP, OP2) \
  template <typename T> \
  generic_wide_int &OP (const T &c) { return (*this = *this OP2 c); }

#define INCDEC_OPERATOR(OP, DELTA) \
  generic_wide_int &OP () { *this += DELTA; return *this; }

  ASSIGNMENT_OPERATOR (operator &=, bit_and)
  ASSIGNMENT_OPERATOR (operator |=, bit_or)
  ASSIGNMENT_OPERATOR (operator ^=, bit_xor)
  ASSIGNMENT_OPERATOR (operator +=, add)
  ASSIGNMENT_OPERATOR (operator -=, sub)
  ASSIGNMENT_OPERATOR (operator *=, mul)
  ASSIGNMENT_OPERATOR (operator <<=, lshift)
  SHIFT_ASSIGNMENT_OPERATOR (operator >>=, >>)
  INCDEC_OPERATOR (operator ++, 1)
  INCDEC_OPERATOR (operator --, -1)

#undef SHIFT_ASSIGNMENT_OPERATOR
#undef ASSIGNMENT_OPERATOR
#undef INCDEC_OPERATOR

  /* Debugging functions.  */
  void dump () const;

  static const bool is_sign_extended
    = wi::int_traits <generic_wide_int <storage> >::is_sign_extended;
};
template <typename storage>
inline generic_wide_int <storage>::generic_wide_int () {}

template <typename storage>
template <typename T>
inline generic_wide_int <storage>::generic_wide_int (const T &x)
  : storage (x)
{
}

template <typename storage>
template <typename T>
inline generic_wide_int <storage>::generic_wide_int (const T &x,
						     unsigned int precision)
  : storage (x, precision)
{
}
/* Return THIS as a signed HOST_WIDE_INT, sign-extending from PRECISION.
   If THIS does not fit in PRECISION, the information is lost.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::to_shwi (unsigned int precision) const
{
  if (precision < HOST_BITS_PER_WIDE_INT)
    return sext_hwi (this->get_val ()[0], precision);
  else
    return this->get_val ()[0];
}
801 /* Return THIS as a signed HOST_WIDE_INT, in its natural precision. */
802 template <typename storage>
803 inline HOST_WIDE_INT
804 generic_wide_int <storage>::to_shwi () const
806 if (is_sign_extended)
807 return this->get_val ()[0];
808 else
809 return to_shwi (this->get_precision ());
812 /* Return THIS as an unsigned HOST_WIDE_INT, zero-extending from
813 PRECISION. If THIS does not fit in PRECISION, the information
814 is lost. */
815 template <typename storage>
816 inline unsigned HOST_WIDE_INT
817 generic_wide_int <storage>::to_uhwi (unsigned int precision) const
819 if (precision < HOST_BITS_PER_WIDE_INT)
820 return zext_hwi (this->get_val ()[0], precision);
821 else
822 return this->get_val ()[0];
825 /* Return THIS as an signed HOST_WIDE_INT, in its natural precision. */
826 template <typename storage>
827 inline unsigned HOST_WIDE_INT
828 generic_wide_int <storage>::to_uhwi () const
830 return to_uhwi (this->get_precision ());
/* TODO: The compiler is half converted from using HOST_WIDE_INT to
   represent addresses to using offset_int to represent addresses.
   We use to_short_addr at the interface from new code to old,
   unconverted code.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::to_short_addr () const
{
  return this->get_val ()[0];
}

/* Return the implicit value of blocks above get_len ().  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::sign_mask () const
{
  unsigned int len = this->get_len ();
  unsigned HOST_WIDE_INT high = this->get_val ()[len - 1];
  if (!is_sign_extended)
    {
      unsigned int precision = this->get_precision ();
      int excess = len * HOST_BITS_PER_WIDE_INT - precision;
      if (excess > 0)
	high <<= excess;
    }
  return (HOST_WIDE_INT) (high) < 0 ? -1 : 0;
}

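/* For example (an illustrative sketch, not part of the API): a 128-bit
   wide_int holding -1 can be stored with get_len () == 1 and
   val[0] == -1; the block above get_len () is implied by sign_mask (),
   which here returns -1.  For a 128-bit value of 5, val[0] == 5 and
   sign_mask () returns 0, so the implicit upper block is zero.  */
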
/* Return the signed value of the least-significant explicitly-encoded
   block.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::slow () const
{
  return this->get_val ()[0];
}

/* Return the signed value of the most-significant explicitly-encoded
   block.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::shigh () const
{
  return this->get_val ()[this->get_len () - 1];
}

/* Return the unsigned value of the least-significant
   explicitly-encoded block.  */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::ulow () const
{
  return this->get_val ()[0];
}

/* Return the unsigned value of the most-significant
   explicitly-encoded block.  */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::uhigh () const
{
  return this->get_val ()[this->get_len () - 1];
}

/* Return block I, which might be implicitly or explicitly encoded.  */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::elt (unsigned int i) const
{
  if (i >= this->get_len ())
    return sign_mask ();
  else
    return this->get_val ()[i];
}

template <typename storage>
template <typename T>
inline generic_wide_int <storage> &
generic_wide_int <storage>::operator = (const T &x)
{
  storage::operator = (x);
  return *this;
}

/* Dump the contents of the integer to stderr, for debugging.  */
template <typename storage>
void
generic_wide_int <storage>::dump () const
{
  unsigned int len = this->get_len ();
  const HOST_WIDE_INT *val = this->get_val ();
  unsigned int precision = this->get_precision ();
  fprintf (stderr, "[");
  if (len * HOST_BITS_PER_WIDE_INT < precision)
    fprintf (stderr, "...,");
  for (unsigned int i = 0; i < len - 1; ++i)
    fprintf (stderr, HOST_WIDE_INT_PRINT_HEX ",", val[len - 1 - i]);
  fprintf (stderr, HOST_WIDE_INT_PRINT_HEX "], precision = %d\n",
	   val[0], precision);
}

namespace wi
{
  template <typename storage>
  struct int_traits < generic_wide_int <storage> >
    : public wi::int_traits <storage>
  {
    static unsigned int get_precision (const generic_wide_int <storage> &);
    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
				      const generic_wide_int <storage> &);
  };
}

template <typename storage>
inline unsigned int
wi::int_traits < generic_wide_int <storage> >::
get_precision (const generic_wide_int <storage> &x)
{
  return x.get_precision ();
}

template <typename storage>
inline wi::storage_ref
wi::int_traits < generic_wide_int <storage> >::
decompose (HOST_WIDE_INT *, unsigned int precision,
	   const generic_wide_int <storage> &x)
{
  gcc_checking_assert (precision == x.get_precision ());
  return wi::storage_ref (x.get_val (), x.get_len (), precision);
}

/* Provide the storage for a wide_int_ref.  This acts like a read-only
   wide_int, with the optimization that VAL is normally a pointer to
   another integer's storage, so that no array copy is needed.  */
template <bool SE, bool HDP>
struct wide_int_ref_storage : public wi::storage_ref
{
private:
  /* Scratch space that can be used when decomposing the original integer.
     It must live as long as this object.  */
  HOST_WIDE_INT scratch[2];

public:
  wide_int_ref_storage () {}

  wide_int_ref_storage (const wi::storage_ref &);

  template <typename T>
  wide_int_ref_storage (const T &);

  template <typename T>
  wide_int_ref_storage (const T &, unsigned int);
};

/* Create a reference from an existing reference.  */
template <bool SE, bool HDP>
inline wide_int_ref_storage <SE, HDP>::
wide_int_ref_storage (const wi::storage_ref &x)
  : storage_ref (x)
{
}

/* Create a reference to integer X in its natural precision.  Note
   that the natural precision is host-dependent for primitive
   types.  */
template <bool SE, bool HDP>
template <typename T>
inline wide_int_ref_storage <SE, HDP>::wide_int_ref_storage (const T &x)
  : storage_ref (wi::int_traits <T>::decompose (scratch,
						wi::get_precision (x), x))
{
}

/* Create a reference to integer X in precision PRECISION.  */
template <bool SE, bool HDP>
template <typename T>
inline wide_int_ref_storage <SE, HDP>::
wide_int_ref_storage (const T &x, unsigned int precision)
  : storage_ref (wi::int_traits <T>::decompose (scratch, precision, x))
{
}

namespace wi
{
  template <bool SE, bool HDP>
  struct int_traits <wide_int_ref_storage <SE, HDP> >
  {
    static const enum precision_type precision_type = VAR_PRECISION;
    static const bool host_dependent_precision = HDP;
    static const bool is_sign_extended = SE;
  };
}

namespace wi
{
  unsigned int force_to_size (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			      unsigned int, unsigned int, unsigned int,
			      signop sgn);
  unsigned int from_array (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			   unsigned int, unsigned int, bool = true);
}

/* The storage used by wide_int.  */
class GTY(()) wide_int_storage
{
private:
  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
  unsigned int len;
  unsigned int precision;

public:
  wide_int_storage ();
  template <typename T>
  wide_int_storage (const T &);

  /* The standard generic_wide_int storage methods.  */
  unsigned int get_precision () const;
  const HOST_WIDE_INT *get_val () const;
  unsigned int get_len () const;
  HOST_WIDE_INT *write_val ();
  void set_len (unsigned int, bool = false);

  template <typename T>
  wide_int_storage &operator = (const T &);

  static wide_int from (const wide_int_ref &, unsigned int, signop);
  static wide_int from_array (const HOST_WIDE_INT *, unsigned int,
			      unsigned int, bool = true);
  static wide_int create (unsigned int);

  /* FIXME: target-dependent, so should disappear.  */
  wide_int bswap () const;
};

namespace wi
{
  template <>
  struct int_traits <wide_int_storage>
  {
    static const enum precision_type precision_type = VAR_PRECISION;
    /* Guaranteed by a static assert in the wide_int_storage constructor.  */
    static const bool host_dependent_precision = false;
    static const bool is_sign_extended = true;
    template <typename T1, typename T2>
    static wide_int get_binary_result (const T1 &, const T2 &);
  };
}

inline wide_int_storage::wide_int_storage () {}

/* Initialize the storage from integer X, in its natural precision.
   Note that we do not allow integers with host-dependent precision
   to become wide_ints; wide_ints must always be logically independent
   of the host.  */
template <typename T>
inline wide_int_storage::wide_int_storage (const T &x)
{
  { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
  { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
  WIDE_INT_REF_FOR (T) xi (x);
  precision = xi.precision;
  wi::copy (*this, xi);
}

template <typename T>
inline wide_int_storage&
wide_int_storage::operator = (const T &x)
{
  { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
  { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
  WIDE_INT_REF_FOR (T) xi (x);
  precision = xi.precision;
  wi::copy (*this, xi);
  return *this;
}

inline unsigned int
wide_int_storage::get_precision () const
{
  return precision;
}

inline const HOST_WIDE_INT *
wide_int_storage::get_val () const
{
  return val;
}

inline unsigned int
wide_int_storage::get_len () const
{
  return len;
}

inline HOST_WIDE_INT *
wide_int_storage::write_val ()
{
  return val;
}

inline void
wide_int_storage::set_len (unsigned int l, bool is_sign_extended)
{
  len = l;
  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
    val[len - 1] = sext_hwi (val[len - 1],
			     precision % HOST_BITS_PER_WIDE_INT);
}

/* Treat X as having signedness SGN and convert it to a PRECISION-bit
   number.  */
inline wide_int
wide_int_storage::from (const wide_int_ref &x, unsigned int precision,
			signop sgn)
{
  wide_int result = wide_int::create (precision);
  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
				     x.precision, precision, sgn));
  return result;
}

/* Create a wide_int from the explicit block encoding given by VAL and
   LEN.  PRECISION is the precision of the integer.  NEED_CANON_P is
   true if the encoding may have redundant trailing blocks.  */
inline wide_int
wide_int_storage::from_array (const HOST_WIDE_INT *val, unsigned int len,
			      unsigned int precision, bool need_canon_p)
{
  wide_int result = wide_int::create (precision);
  result.set_len (wi::from_array (result.write_val (), val, len, precision,
				  need_canon_p));
  return result;
}

/* Return an uninitialized wide_int with precision PRECISION.  */
inline wide_int
wide_int_storage::create (unsigned int precision)
{
  wide_int x;
  x.precision = precision;
  return x;
}

template <typename T1, typename T2>
inline wide_int
wi::int_traits <wide_int_storage>::get_binary_result (const T1 &x, const T2 &y)
{
  /* This shouldn't be used for two flexible-precision inputs.  */
  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
    return wide_int::create (wi::get_precision (y));
  else
    return wide_int::create (wi::get_precision (x));
}

/* The storage used by FIXED_WIDE_INT (N).  */
template <int N>
class GTY(()) fixed_wide_int_storage
{
private:
  HOST_WIDE_INT val[(N + HOST_BITS_PER_WIDE_INT + 1) / HOST_BITS_PER_WIDE_INT];
  unsigned int len;

public:
  fixed_wide_int_storage ();
  template <typename T>
  fixed_wide_int_storage (const T &);

  /* The standard generic_wide_int storage methods.  */
  unsigned int get_precision () const;
  const HOST_WIDE_INT *get_val () const;
  unsigned int get_len () const;
  HOST_WIDE_INT *write_val ();
  void set_len (unsigned int, bool = false);

  static FIXED_WIDE_INT (N) from (const wide_int_ref &, signop);
  static FIXED_WIDE_INT (N) from_array (const HOST_WIDE_INT *, unsigned int,
					bool = true);
};

namespace wi
{
  template <int N>
  struct int_traits < fixed_wide_int_storage <N> >
  {
    static const enum precision_type precision_type = CONST_PRECISION;
    static const bool host_dependent_precision = false;
    static const bool is_sign_extended = true;
    static const unsigned int precision = N;
    template <typename T1, typename T2>
    static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
  };
}

template <int N>
inline fixed_wide_int_storage <N>::fixed_wide_int_storage () {}

/* Initialize the storage from integer X, in precision N.  */
template <int N>
template <typename T>
inline fixed_wide_int_storage <N>::fixed_wide_int_storage (const T &x)
{
  /* Check for type compatibility.  We don't want to initialize a
     fixed-width integer from something like a wide_int.  */
  WI_BINARY_RESULT (T, FIXED_WIDE_INT (N)) *assertion ATTRIBUTE_UNUSED;
  wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N));
}

template <int N>
inline unsigned int
fixed_wide_int_storage <N>::get_precision () const
{
  return N;
}

template <int N>
inline const HOST_WIDE_INT *
fixed_wide_int_storage <N>::get_val () const
{
  return val;
}

template <int N>
inline unsigned int
fixed_wide_int_storage <N>::get_len () const
{
  return len;
}

template <int N>
inline HOST_WIDE_INT *
fixed_wide_int_storage <N>::write_val ()
{
  return val;
}

template <int N>
inline void
fixed_wide_int_storage <N>::set_len (unsigned int l, bool)
{
  len = l;
  /* There are no excess bits in val[len - 1].  */
  STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
}

/* Treat X as having signedness SGN and convert it to an N-bit number.  */
template <int N>
inline FIXED_WIDE_INT (N)
fixed_wide_int_storage <N>::from (const wide_int_ref &x, signop sgn)
{
  FIXED_WIDE_INT (N) result;
  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
				     x.precision, N, sgn));
  return result;
}

/* Create a FIXED_WIDE_INT (N) from the explicit block encoding given by
   VAL and LEN.  NEED_CANON_P is true if the encoding may have redundant
   trailing blocks.  */
template <int N>
inline FIXED_WIDE_INT (N)
fixed_wide_int_storage <N>::from_array (const HOST_WIDE_INT *val,
					unsigned int len,
					bool need_canon_p)
{
  FIXED_WIDE_INT (N) result;
  result.set_len (wi::from_array (result.write_val (), val, len,
				  N, need_canon_p));
  return result;
}

template <int N>
template <typename T1, typename T2>
inline FIXED_WIDE_INT (N)
wi::int_traits < fixed_wide_int_storage <N> >::
get_binary_result (const T1 &, const T2 &)
{
  return FIXED_WIDE_INT (N) ();
}

/* A reference to one element of a trailing_wide_ints structure.  */
class trailing_wide_int_storage
{
private:
  /* The precision of the integer, which is a fixed property of the
     parent trailing_wide_ints.  */
  unsigned int m_precision;

  /* A pointer to the length field.  */
  unsigned char *m_len;

  /* A pointer to the HWI array.  There are enough elements to hold all
     values of precision M_PRECISION.  */
  HOST_WIDE_INT *m_val;

public:
  trailing_wide_int_storage (unsigned int, unsigned char *, HOST_WIDE_INT *);

  /* The standard generic_wide_int storage methods.  */
  unsigned int get_len () const;
  unsigned int get_precision () const;
  const HOST_WIDE_INT *get_val () const;
  HOST_WIDE_INT *write_val ();
  void set_len (unsigned int, bool = false);

  template <typename T>
  trailing_wide_int_storage &operator = (const T &);
};

typedef generic_wide_int <trailing_wide_int_storage> trailing_wide_int;

/* trailing_wide_int behaves like a wide_int.  */
namespace wi
{
  template <>
  struct int_traits <trailing_wide_int_storage>
    : public int_traits <wide_int_storage> {};
}

/* An array of N wide_int-like objects that can be put at the end of
   a variable-sized structure.  Use extra_size to calculate how many
   bytes beyond the sizeof need to be allocated.  Use set_precision
   to initialize the structure.  */
template <int N>
class GTY((user)) trailing_wide_ints
{
private:
  /* The shared precision of each number.  */
  unsigned short m_precision;

  /* The shared maximum length of each number.  */
  unsigned char m_max_len;

  /* The current length of each number.  */
  unsigned char m_len[N];

  /* The variable-length part of the structure, which always contains
     at least one HWI.  Element I starts at index I * M_MAX_LEN.  */
  HOST_WIDE_INT m_val[1];

public:
  typedef WIDE_INT_REF_FOR (trailing_wide_int_storage) const_reference;

  void set_precision (unsigned int);
  unsigned int get_precision () const { return m_precision; }
  trailing_wide_int operator [] (unsigned int);
  const_reference operator [] (unsigned int) const;
  static size_t extra_size (unsigned int);
  size_t extra_size () const { return extra_size (m_precision); }
};

inline trailing_wide_int_storage::
trailing_wide_int_storage (unsigned int precision, unsigned char *len,
			   HOST_WIDE_INT *val)
  : m_precision (precision), m_len (len), m_val (val)
{
}

inline unsigned int
trailing_wide_int_storage::get_len () const
{
  return *m_len;
}

inline unsigned int
trailing_wide_int_storage::get_precision () const
{
  return m_precision;
}

inline const HOST_WIDE_INT *
trailing_wide_int_storage::get_val () const
{
  return m_val;
}

inline HOST_WIDE_INT *
trailing_wide_int_storage::write_val ()
{
  return m_val;
}

inline void
trailing_wide_int_storage::set_len (unsigned int len, bool is_sign_extended)
{
  *m_len = len;
  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > m_precision)
    m_val[len - 1] = sext_hwi (m_val[len - 1],
			       m_precision % HOST_BITS_PER_WIDE_INT);
}

template <typename T>
inline trailing_wide_int_storage &
trailing_wide_int_storage::operator = (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x, m_precision);
  wi::copy (*this, xi);
  return *this;
}

/* Initialize the structure and record that all elements have precision
   PRECISION.  */
template <int N>
inline void
trailing_wide_ints <N>::set_precision (unsigned int precision)
{
  m_precision = precision;
  m_max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
	       / HOST_BITS_PER_WIDE_INT);
}

/* Return a reference to element INDEX.  */
template <int N>
inline trailing_wide_int
trailing_wide_ints <N>::operator [] (unsigned int index)
{
  return trailing_wide_int_storage (m_precision, &m_len[index],
				    &m_val[index * m_max_len]);
}

template <int N>
inline typename trailing_wide_ints <N>::const_reference
trailing_wide_ints <N>::operator [] (unsigned int index) const
{
  return wi::storage_ref (&m_val[index * m_max_len],
			  m_len[index], m_precision);
}

/* Return how many extra bytes need to be added to the end of the structure
   in order to handle N wide_ints of precision PRECISION.  */
template <int N>
inline size_t
trailing_wide_ints <N>::extra_size (unsigned int precision)
{
  unsigned int max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
			  / HOST_BITS_PER_WIDE_INT);
  return (N * max_len - 1) * sizeof (HOST_WIDE_INT);
}

/* This macro is used in structures that end with a trailing_wide_ints field
   called FIELD.  It declares get_NAME () and set_NAME () methods to access
   element I of FIELD.  */
#define TRAILING_WIDE_INT_ACCESSOR(NAME, FIELD, I) \
  trailing_wide_int get_##NAME () { return FIELD[I]; } \
  template <typename T> void set_##NAME (const T &x) { FIELD[I] = x; }

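/* Illustrative sketch of the intended allocation pattern, using a
   hypothetical structure NODE that is not part of this file:

     struct node
     {
       tree type;
       trailing_wide_ints <2> ints;
     };

     node *p = (node *) xmalloc (sizeof (node)
				 + trailing_wide_ints <2>::extra_size (prec));
     p->ints.set_precision (prec);
     p->ints[0] = wi::zero (prec);

   TRAILING_WIDE_INT_ACCESSOR (min, ints, 0) inside NODE would then
   declare get_min () and set_min () wrappers for element 0.  */
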
namespace wi
{
  /* Implementation of int_traits for primitive integer types like "int".  */
  template <typename T, bool signed_p>
  struct primitive_int_traits
  {
    static const enum precision_type precision_type = FLEXIBLE_PRECISION;
    static const bool host_dependent_precision = true;
    static const bool is_sign_extended = true;
    static unsigned int get_precision (T);
    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int, T);
  };
}

template <typename T, bool signed_p>
inline unsigned int
wi::primitive_int_traits <T, signed_p>::get_precision (T)
{
  return sizeof (T) * CHAR_BIT;
}

template <typename T, bool signed_p>
inline wi::storage_ref
wi::primitive_int_traits <T, signed_p>::decompose (HOST_WIDE_INT *scratch,
						   unsigned int precision, T x)
{
  scratch[0] = x;
  if (signed_p || scratch[0] >= 0 || precision <= HOST_BITS_PER_WIDE_INT)
    return wi::storage_ref (scratch, 1, precision);
  scratch[1] = 0;
  return wi::storage_ref (scratch, 2, precision);
}

/* Allow primitive C types to be used in wi:: routines.  */
namespace wi
{
  template <>
  struct int_traits <unsigned char>
    : public primitive_int_traits <unsigned char, false> {};

  template <>
  struct int_traits <unsigned short>
    : public primitive_int_traits <unsigned short, false> {};

  template <>
  struct int_traits <int>
    : public primitive_int_traits <int, true> {};

  template <>
  struct int_traits <unsigned int>
    : public primitive_int_traits <unsigned int, false> {};

  template <>
  struct int_traits <long>
    : public primitive_int_traits <long, true> {};

  template <>
  struct int_traits <unsigned long>
    : public primitive_int_traits <unsigned long, false> {};

#if defined HAVE_LONG_LONG
  template <>
  struct int_traits <long long>
    : public primitive_int_traits <long long, true> {};

  template <>
  struct int_traits <unsigned long long>
    : public primitive_int_traits <unsigned long long, false> {};
#endif
}

namespace wi
{
  /* Stores HWI-sized integer VAL, treating it as having signedness SGN
     and precision PRECISION.  */
  struct hwi_with_prec
  {
    hwi_with_prec () {}
    hwi_with_prec (HOST_WIDE_INT, unsigned int, signop);
    HOST_WIDE_INT val;
    unsigned int precision;
    signop sgn;
  };

  hwi_with_prec shwi (HOST_WIDE_INT, unsigned int);
  hwi_with_prec uhwi (unsigned HOST_WIDE_INT, unsigned int);

  hwi_with_prec minus_one (unsigned int);
  hwi_with_prec zero (unsigned int);
  hwi_with_prec one (unsigned int);
  hwi_with_prec two (unsigned int);
}

inline wi::hwi_with_prec::hwi_with_prec (HOST_WIDE_INT v, unsigned int p,
					 signop s)
  : precision (p), sgn (s)
{
  if (precision < HOST_BITS_PER_WIDE_INT)
    val = sext_hwi (v, precision);
  else
    val = v;
}

/* Return a signed integer that has value VAL and precision PRECISION.  */
inline wi::hwi_with_prec
wi::shwi (HOST_WIDE_INT val, unsigned int precision)
{
  return hwi_with_prec (val, precision, SIGNED);
}

/* Return an unsigned integer that has value VAL and precision PRECISION.  */
inline wi::hwi_with_prec
wi::uhwi (unsigned HOST_WIDE_INT val, unsigned int precision)
{
  return hwi_with_prec (val, precision, UNSIGNED);
}

/* Return a wide int of -1 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::minus_one (unsigned int precision)
{
  return wi::shwi (-1, precision);
}

/* Return a wide int of 0 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::zero (unsigned int precision)
{
  return wi::shwi (0, precision);
}

/* Return a wide int of 1 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::one (unsigned int precision)
{
  return wi::shwi (1, precision);
}

/* Return a wide int of 2 with precision PRECISION.  */
inline wi::hwi_with_prec
wi::two (unsigned int precision)
{
  return wi::shwi (2, precision);
}

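/* Illustrative sketch (X and PREC here are hypothetical): these helpers
   attach an explicit precision to a plain HWI so that it can be mixed
   with other wide ints of precision PREC:

     if (wi::eq_p (x, wi::minus_one (prec)))
       ...
     wide_int next = wi::add (x, wi::one (prec));  */
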
namespace wi
{
  /* ints_for<T>::zero (X) returns a zero that, when assigned to a T,
     gives that T the same precision as X.  */
  template<typename T, precision_type = int_traits<T>::precision_type>
  struct ints_for
  {
    static int zero (const T &) { return 0; }
  };

  template<typename T>
  struct ints_for<T, VAR_PRECISION>
  {
    static hwi_with_prec zero (const T &);
  };
}

template<typename T>
inline wi::hwi_with_prec
wi::ints_for<T, wi::VAR_PRECISION>::zero (const T &x)
{
  return wi::zero (wi::get_precision (x));
}

namespace wi
{
  template <>
  struct int_traits <wi::hwi_with_prec>
  {
    static const enum precision_type precision_type = VAR_PRECISION;
    /* hwi_with_prec has an explicitly-given precision, rather than the
       precision of HOST_WIDE_INT.  */
    static const bool host_dependent_precision = false;
    static const bool is_sign_extended = true;
    static unsigned int get_precision (const wi::hwi_with_prec &);
    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
				      const wi::hwi_with_prec &);
  };
}

inline unsigned int
wi::int_traits <wi::hwi_with_prec>::get_precision (const wi::hwi_with_prec &x)
{
  return x.precision;
}

inline wi::storage_ref
wi::int_traits <wi::hwi_with_prec>::
decompose (HOST_WIDE_INT *scratch, unsigned int precision,
	   const wi::hwi_with_prec &x)
{
  gcc_checking_assert (precision == x.precision);
  scratch[0] = x.val;
  if (x.sgn == SIGNED || x.val >= 0 || precision <= HOST_BITS_PER_WIDE_INT)
    return wi::storage_ref (scratch, 1, precision);
  scratch[1] = 0;
  return wi::storage_ref (scratch, 2, precision);
}

/* Private functions for handling large cases out of line.  They take
   individual length and array parameters because that is cheaper for
   the inline caller than constructing an object on the stack and
   passing a reference to it.  (Although many callers use wide_int_refs,
   we generally want those to be removed by SRA.)  */
namespace wi
{
  bool eq_p_large (const HOST_WIDE_INT *, unsigned int,
		   const HOST_WIDE_INT *, unsigned int, unsigned int);
  bool lts_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
		    const HOST_WIDE_INT *, unsigned int);
  bool ltu_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
		    const HOST_WIDE_INT *, unsigned int);
  int cmps_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
		  const HOST_WIDE_INT *, unsigned int);
  int cmpu_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
		  const HOST_WIDE_INT *, unsigned int);
  unsigned int sext_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			   unsigned int,
			   unsigned int, unsigned int);
  unsigned int zext_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			   unsigned int,
			   unsigned int, unsigned int);
  unsigned int set_bit_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			      unsigned int, unsigned int, unsigned int);
  unsigned int lshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			     unsigned int, unsigned int, unsigned int);
  unsigned int lrshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			      unsigned int, unsigned int, unsigned int,
			      unsigned int);
  unsigned int arshift_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			      unsigned int, unsigned int, unsigned int,
			      unsigned int);
  unsigned int and_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
			  const HOST_WIDE_INT *, unsigned int, unsigned int);
  unsigned int and_not_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			      unsigned int, const HOST_WIDE_INT *,
			      unsigned int, unsigned int);
  unsigned int or_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
			 const HOST_WIDE_INT *, unsigned int, unsigned int);
  unsigned int or_not_large (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			     unsigned int, const HOST_WIDE_INT *,
			     unsigned int, unsigned int);
  unsigned int xor_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
			  const HOST_WIDE_INT *, unsigned int, unsigned int);
  unsigned int add_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
			  const HOST_WIDE_INT *, unsigned int, unsigned int,
			  signop, overflow_type *);
  unsigned int sub_large (HOST_WIDE_INT *, const HOST_WIDE_INT *, unsigned int,
			  const HOST_WIDE_INT *, unsigned int, unsigned int,
			  signop, overflow_type *);
  unsigned int mul_internal (HOST_WIDE_INT *, const HOST_WIDE_INT *,
			     unsigned int, const HOST_WIDE_INT *,
			     unsigned int, unsigned int, signop,
			     overflow_type *, bool);
  unsigned int divmod_internal (HOST_WIDE_INT *, unsigned int *,
				HOST_WIDE_INT *, const HOST_WIDE_INT *,
				unsigned int, unsigned int,
				const HOST_WIDE_INT *,
				unsigned int, unsigned int,
				signop, overflow_type *);
}

/* Return the number of bits that integer X can hold.  */
template <typename T>
inline unsigned int
wi::get_precision (const T &x)
{
  return wi::int_traits <T>::get_precision (x);
}

/* Return the number of bits that the result of a binary operation can
   hold when the input operands are X and Y.  */
template <typename T1, typename T2>
inline unsigned int
wi::get_binary_precision (const T1 &x, const T2 &y)
{
  return get_precision (wi::int_traits <WI_BINARY_RESULT (T1, T2)>::
			get_binary_result (x, y));
}

/* Copy the contents of Y to X, but keeping X's current precision.  */
template <typename T1, typename T2>
inline void
wi::copy (T1 &x, const T2 &y)
{
  HOST_WIDE_INT *xval = x.write_val ();
  const HOST_WIDE_INT *yval = y.get_val ();
  unsigned int len = y.get_len ();
  unsigned int i = 0;
  do
    xval[i] = yval[i];
  while (++i < len);
  x.set_len (len, y.is_sign_extended);
}

/* Return true if X fits in a HOST_WIDE_INT with no loss of precision.  */
template <typename T>
inline bool
wi::fits_shwi_p (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x);
  return xi.len == 1;
}

/* Return true if X fits in an unsigned HOST_WIDE_INT with no loss of
   precision.  */
template <typename T>
inline bool
wi::fits_uhwi_p (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x);
  if (xi.precision <= HOST_BITS_PER_WIDE_INT)
    return true;
  if (xi.len == 1)
    return xi.slow () >= 0;
  return xi.len == 2 && xi.uhigh () == 0;
}

/* Return true if X is negative based on the interpretation of SGN.
   For UNSIGNED, this is always false.  */
template <typename T>
inline bool
wi::neg_p (const T &x, signop sgn)
{
  WIDE_INT_REF_FOR (T) xi (x);
  if (sgn == UNSIGNED)
    return false;
  return xi.sign_mask () < 0;
}

/* Return -1 if the top bit of X is set and 0 if the top bit is clear.  */
template <typename T>
inline HOST_WIDE_INT
wi::sign_mask (const T &x)
{
  WIDE_INT_REF_FOR (T) xi (x);
  return xi.sign_mask ();
}

/* Return true if X == Y.  X and Y must be binary-compatible.  */
template <typename T1, typename T2>
inline bool
wi::eq_p (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (xi.is_sign_extended && yi.is_sign_extended)
    {
      /* This case reduces to array equality.  */
      if (xi.len != yi.len)
        return false;
      unsigned int i = 0;
      do
        if (xi.val[i] != yi.val[i])
          return false;
      while (++i != xi.len);
      return true;
    }
  if (__builtin_expect (yi.len == 1, true))
    {
      /* XI is only equal to YI if it too has a single HWI.  */
      if (xi.len != 1)
        return false;
      /* Excess bits in xi.val[0] will be signs or zeros, so comparisons
         with 0 are simple.  */
      if (STATIC_CONSTANT_P (yi.val[0] == 0))
        return xi.val[0] == 0;
      /* Otherwise flush out any excess bits first.  */
      unsigned HOST_WIDE_INT diff = xi.val[0] ^ yi.val[0];
      int excess = HOST_BITS_PER_WIDE_INT - precision;
      if (excess > 0)
        diff <<= excess;
      return diff == 0;
    }
  return eq_p_large (xi.val, xi.len, yi.val, yi.len, precision);
}
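As an editorial aside (not part of GCC), the excess-bit flush above can be sketched on a single 64-bit HWI: bits at or above the precision are sign/zero copies that may disagree between two equal values, so the XOR of the operands is shifted until only in-precision bits remain.

```cpp
#include <cstdint>

// Sketch of wi::eq_p's single-HWI path, with uint64_t standing in
// for unsigned HOST_WIDE_INT and a hypothetical helper name.
bool eq_flush_excess (uint64_t x, uint64_t y, unsigned int precision)
{
  uint64_t diff = x ^ y;
  int excess = 64 - precision;
  if (excess > 0)
    diff <<= excess;	  // discard the bits above the precision
  return diff == 0;
}
```

For example, at precision 8 two HWIs that agree in their low 8 bits but differ above compare equal.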
/* Return true if X != Y.  X and Y must be binary-compatible.  */
template <typename T1, typename T2>
inline bool
wi::ne_p (const T1 &x, const T2 &y)
{
  return !eq_p (x, y);
}
/* Return true if X < Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::lts_p (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  /* We optimize x < y, where y is 64 or fewer bits.  */
  if (wi::fits_shwi_p (yi))
    {
      /* Make lts_p (x, 0) as efficient as wi::neg_p (x).  */
      if (STATIC_CONSTANT_P (yi.val[0] == 0))
        return neg_p (xi);
      /* If x fits directly into a shwi, we can compare directly.  */
      if (wi::fits_shwi_p (xi))
        return xi.to_shwi () < yi.to_shwi ();
      /* If x doesn't fit and is negative, then it must be more
         negative than any value in y, and hence smaller than y.  */
      if (neg_p (xi))
        return true;
      /* If x is positive, then it must be larger than any value in y,
         and hence greater than y.  */
      return false;
    }
  /* Optimize the opposite case, if it can be detected at compile time.  */
  if (STATIC_CONSTANT_P (xi.len == 1))
    /* If YI is negative it is lower than the least HWI.
       If YI is positive it is greater than the greatest HWI.  */
    return !neg_p (yi);
  return lts_p_large (xi.val, xi.len, precision, yi.val, yi.len);
}
/* Return true if X < Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::ltu_p (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  /* Optimize comparisons with constants.  */
  if (STATIC_CONSTANT_P (yi.len == 1 && yi.val[0] >= 0))
    return xi.len == 1 && xi.to_uhwi () < (unsigned HOST_WIDE_INT) yi.val[0];
  if (STATIC_CONSTANT_P (xi.len == 1 && xi.val[0] >= 0))
    return yi.len != 1 || yi.to_uhwi () > (unsigned HOST_WIDE_INT) xi.val[0];
  /* Optimize the case of two HWIs.  The HWIs are implicitly sign-extended
     for precisions greater than HOST_BITS_PER_WIDE_INT, but sign-extending
     both values does not change the result.  */
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
      unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
      return xl < yl;
    }
  return ltu_p_large (xi.val, xi.len, precision, yi.val, yi.len);
}
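As an editorial aside (not part of GCC), the claim in the comment above — that sign-extending both single-HWI operands to a wider precision does not change the unsigned comparison — can be checked with `__int128` (a GCC extension) standing in for the wider precision:

```cpp
#include <cstdint>

// Comparison done on the low HWIs, as in wi::ltu_p's fast path.
bool ltu_low_hwi (int64_t x, int64_t y)
{
  return (uint64_t) x < (uint64_t) y;
}

// Reference: sign-extend each HWI to 128 bits, then compare the
// results as unsigned numbers.
bool ltu_sign_extended (int64_t x, int64_t y)
{
  unsigned __int128 xw = (__int128) x;	// sign-extend, view as unsigned
  unsigned __int128 yw = (__int128) y;
  return xw < yw;
}
```

Both predicates agree for every pair of inputs, including mixed signs, which is why the fast path is valid.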
/* Return true if X < Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::lt_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return lts_p (x, y);
  else
    return ltu_p (x, y);
}

/* Return true if X <= Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::les_p (const T1 &x, const T2 &y)
{
  return !lts_p (y, x);
}

/* Return true if X <= Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::leu_p (const T1 &x, const T2 &y)
{
  return !ltu_p (y, x);
}

/* Return true if X <= Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::le_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return les_p (x, y);
  else
    return leu_p (x, y);
}

/* Return true if X > Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::gts_p (const T1 &x, const T2 &y)
{
  return lts_p (y, x);
}

/* Return true if X > Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::gtu_p (const T1 &x, const T2 &y)
{
  return ltu_p (y, x);
}

/* Return true if X > Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::gt_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return gts_p (x, y);
  else
    return gtu_p (x, y);
}

/* Return true if X >= Y when both are treated as signed values.  */
template <typename T1, typename T2>
inline bool
wi::ges_p (const T1 &x, const T2 &y)
{
  return !lts_p (x, y);
}

/* Return true if X >= Y when both are treated as unsigned values.  */
template <typename T1, typename T2>
inline bool
wi::geu_p (const T1 &x, const T2 &y)
{
  return !ltu_p (x, y);
}

/* Return true if X >= Y.  Signedness of X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline bool
wi::ge_p (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return ges_p (x, y);
  else
    return geu_p (x, y);
}
/* Return -1 if X < Y, 0 if X == Y and 1 if X > Y.  Treat both X and Y
   as signed values.  */
template <typename T1, typename T2>
inline int
wi::cmps (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (wi::fits_shwi_p (yi))
    {
      /* Special case for comparisons with 0.  */
      if (STATIC_CONSTANT_P (yi.val[0] == 0))
        return neg_p (xi) ? -1 : !(xi.len == 1 && xi.val[0] == 0);
      /* If x fits into a signed HWI, we can compare directly.  */
      if (wi::fits_shwi_p (xi))
        {
          HOST_WIDE_INT xl = xi.to_shwi ();
          HOST_WIDE_INT yl = yi.to_shwi ();
          return xl < yl ? -1 : xl > yl;
        }
      /* If x doesn't fit and is negative, then it must be more
         negative than any signed HWI, and hence smaller than y.  */
      if (neg_p (xi))
        return -1;
      /* If x is positive, then it must be larger than any signed HWI,
         and hence greater than y.  */
      return 1;
    }
  /* Optimize the opposite case, if it can be detected at compile time.  */
  if (STATIC_CONSTANT_P (xi.len == 1))
    /* If YI is negative it is lower than the least HWI.
       If YI is positive it is greater than the greatest HWI.  */
    return neg_p (yi) ? 1 : -1;
  return cmps_large (xi.val, xi.len, precision, yi.val, yi.len);
}
/* Return -1 if X < Y, 0 if X == Y and 1 if X > Y.  Treat both X and Y
   as unsigned values.  */
template <typename T1, typename T2>
inline int
wi::cmpu (const T1 &x, const T2 &y)
{
  unsigned int precision = get_binary_precision (x, y);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  /* Optimize comparisons with constants.  */
  if (STATIC_CONSTANT_P (yi.len == 1 && yi.val[0] >= 0))
    {
      /* If XI doesn't fit in a HWI then it must be larger than YI.  */
      if (xi.len != 1)
        return 1;
      /* Otherwise compare directly.  */
      unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
      unsigned HOST_WIDE_INT yl = yi.val[0];
      return xl < yl ? -1 : xl > yl;
    }
  if (STATIC_CONSTANT_P (xi.len == 1 && xi.val[0] >= 0))
    {
      /* If YI doesn't fit in a HWI then it must be larger than XI.  */
      if (yi.len != 1)
        return -1;
      /* Otherwise compare directly.  */
      unsigned HOST_WIDE_INT xl = xi.val[0];
      unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
      return xl < yl ? -1 : xl > yl;
    }
  /* Optimize the case of two HWIs.  The HWIs are implicitly sign-extended
     for precisions greater than HOST_BITS_PER_WIDE_INT, but sign-extending
     both values does not change the result.  */
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.to_uhwi ();
      unsigned HOST_WIDE_INT yl = yi.to_uhwi ();
      return xl < yl ? -1 : xl > yl;
    }
  return cmpu_large (xi.val, xi.len, precision, yi.val, yi.len);
}
/* Return -1 if X < Y, 0 if X == Y and 1 if X > Y.  Signedness of
   X and Y is indicated by SGN.  */
template <typename T1, typename T2>
inline int
wi::cmp (const T1 &x, const T2 &y, signop sgn)
{
  if (sgn == SIGNED)
    return cmps (x, y);
  else
    return cmpu (x, y);
}
/* Return ~x.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::bit_not (const T &x)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  WIDE_INT_REF_FOR (T) xi (x, get_precision (result));
  for (unsigned int i = 0; i < xi.len; ++i)
    val[i] = ~xi.val[i];
  result.set_len (xi.len);
  return result;
}

/* Return -x.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::neg (const T &x)
{
  return sub (0, x);
}

/* Return -x.  Indicate in *OVERFLOW if performing the negation would
   cause an overflow.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::neg (const T &x, overflow_type *overflow)
{
  *overflow = only_sign_bit_p (x) ? OVF_OVERFLOW : OVF_NONE;
  return sub (0, x);
}

/* Return the absolute value of x.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::abs (const T &x)
{
  return neg_p (x) ? neg (x) : WI_UNARY_RESULT (T) (x);
}
/* Return the result of sign-extending the low OFFSET bits of X.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::sext (const T &x, unsigned int offset)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T) xi (x, precision);

  if (offset <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = sext_hwi (xi.ulow (), offset);
      result.set_len (1, true);
    }
  else
    result.set_len (sext_large (val, xi.val, xi.len, precision, offset));
  return result;
}

/* Return the result of zero-extending the low OFFSET bits of X.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::zext (const T &x, unsigned int offset)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T) xi (x, precision);

  /* This is not just an optimization; it is actually required to
     maintain canonicalization.  */
  if (offset >= precision)
    {
      wi::copy (result, xi);
      return result;
    }

  /* In these cases we know that at least the top bit will be clear,
     so no sign extension is necessary.  */
  if (offset < HOST_BITS_PER_WIDE_INT)
    {
      val[0] = zext_hwi (xi.ulow (), offset);
      result.set_len (1, true);
    }
  else
    result.set_len (zext_large (val, xi.val, xi.len, precision, offset), true);
  return result;
}

/* Return the result of extending the low OFFSET bits of X according to
   signedness SGN.  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::ext (const T &x, unsigned int offset, signop sgn)
{
  return sgn == SIGNED ? sext (x, offset) : zext (x, offset);
}
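As an editorial aside (not part of GCC), single-HWI versions of the `sext_hwi`/`zext_hwi` helpers used above can be sketched on `uint64_t`; the function names with the `_sketch` suffix are hypothetical:

```cpp
#include <cstdint>

// Sign-extend the low OFFSET bits of V.  Relies on arithmetic right
// shift of negative values, which GCC guarantees (implementation-
// defined in C++ before C++20).
int64_t sext_hwi_sketch (int64_t v, unsigned int offset)
{
  if (offset == 64)
    return v;			// shifting by 64 would be undefined
  int shift = 64 - offset;
  return (int64_t) ((uint64_t) v << shift) >> shift;
}

// Zero-extend the low OFFSET bits of V, i.e. mask off the rest.
uint64_t zext_hwi_sketch (uint64_t v, unsigned int offset)
{
  if (offset == 64)
    return v;
  return v & (((uint64_t) 1 << offset) - 1);
}
```

For example, sign-extending the low 8 bits of 0xFF yields -1, while zero-extending the low 8 bits of 0xFFFF yields 0xFF.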
/* Return an integer that represents X | (1 << bit).  */
template <typename T>
inline WI_UNARY_RESULT (T)
wi::set_bit (const T &x, unsigned int bit)
{
  WI_UNARY_RESULT_VAR (result, val, T, x);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T) xi (x, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () | (HOST_WIDE_INT_1U << bit);
      result.set_len (1);
    }
  else
    result.set_len (set_bit_large (val, xi.val, xi.len, precision, bit));
  return result;
}
/* Return the minimum of X and Y, treating them both as having
   signedness SGN.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::min (const T1 &x, const T2 &y, signop sgn)
{
  WI_BINARY_RESULT_VAR (result, val ATTRIBUTE_UNUSED, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  if (wi::le_p (x, y, sgn))
    wi::copy (result, WIDE_INT_REF_FOR (T1) (x, precision));
  else
    wi::copy (result, WIDE_INT_REF_FOR (T2) (y, precision));
  return result;
}

/* Return the minimum of X and Y, treating both as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::smin (const T1 &x, const T2 &y)
{
  return wi::min (x, y, SIGNED);
}

/* Return the minimum of X and Y, treating both as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umin (const T1 &x, const T2 &y)
{
  return wi::min (x, y, UNSIGNED);
}

/* Return the maximum of X and Y, treating them both as having
   signedness SGN.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::max (const T1 &x, const T2 &y, signop sgn)
{
  WI_BINARY_RESULT_VAR (result, val ATTRIBUTE_UNUSED, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  if (wi::ge_p (x, y, sgn))
    wi::copy (result, WIDE_INT_REF_FOR (T1) (x, precision));
  else
    wi::copy (result, WIDE_INT_REF_FOR (T2) (y, precision));
  return result;
}

/* Return the maximum of X and Y, treating both as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::smax (const T1 &x, const T2 &y)
{
  return wi::max (x, y, SIGNED);
}

/* Return the maximum of X and Y, treating both as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umax (const T1 &x, const T2 &y)
{
  return wi::max (x, y, UNSIGNED);
}
/* Return X & Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_and (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () & yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (and_large (val, xi.val, xi.len, yi.val, yi.len,
                               precision), is_sign_extended);
  return result;
}

/* Return X & ~Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_and_not (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () & ~yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (and_not_large (val, xi.val, xi.len, yi.val, yi.len,
                                   precision), is_sign_extended);
  return result;
}

/* Return X | Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_or (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () | yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (or_large (val, xi.val, xi.len,
                              yi.val, yi.len, precision), is_sign_extended);
  return result;
}

/* Return X | ~Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_or_not (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () | ~yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (or_not_large (val, xi.val, xi.len, yi.val, yi.len,
                                  precision), is_sign_extended);
  return result;
}

/* Return X ^ Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::bit_xor (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
  if (__builtin_expect (xi.len + yi.len == 2, true))
    {
      val[0] = xi.ulow () ^ yi.ulow ();
      result.set_len (1, is_sign_extended);
    }
  else
    result.set_len (xor_large (val, xi.val, xi.len,
                               yi.val, yi.len, precision), is_sign_extended);
  return result;
}
/* Return X + Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::add (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () + yi.ulow ();
      result.set_len (1);
    }
  /* If the precision is known at compile time to be greater than
     HOST_BITS_PER_WIDE_INT, we can optimize the single-HWI case
     knowing that (a) all bits in those HWIs are significant and
     (b) the result has room for at least two HWIs.  This provides
     a fast path for things like offset_int and widest_int.

     The STATIC_CONSTANT_P test prevents this path from being
     used for wide_ints.  wide_ints with precisions greater than
     HOST_BITS_PER_WIDE_INT are relatively rare and there's not much
     point handling them inline.  */
  else if (STATIC_CONSTANT_P (precision > HOST_BITS_PER_WIDE_INT)
           && __builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl + yl;
      val[0] = resultl;
      val[1] = (HOST_WIDE_INT) resultl < 0 ? 0 : -1;
      result.set_len (1 + (((resultl ^ xl) & (resultl ^ yl))
                           >> (HOST_BITS_PER_WIDE_INT - 1)));
    }
  else
    result.set_len (add_large (val, xi.val, xi.len,
                               yi.val, yi.len, precision,
                               UNSIGNED, 0));
  return result;
}
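As an editorial aside (not part of GCC), the length computation in the fast path above can be sketched and checked against a 128-bit reference sum: `((r ^ x) & (r ^ y)) >> 63` is 1 exactly when the low-HWI addition overflowed as a signed operation, which is when a second HWI is needed, and `val[1]` then holds the sign word the carry pushed out. `__int128` is a GCC extension; the helper names are hypothetical.

```cpp
#include <cstdint>

// Two-HWI addition fast path, with uint64_t standing in for a HWI.
// Returns the number of significant HWIs (1 or 2) written to VAL.
unsigned int add_fast_path (uint64_t xl, uint64_t yl, uint64_t val[2])
{
  uint64_t resultl = xl + yl;
  val[0] = resultl;
  // If signed overflow occurred, the true sign is the opposite of
  // resultl's sign bit.
  val[1] = (int64_t) resultl < 0 ? 0 : (uint64_t) -1;
  return 1 + (unsigned int) (((resultl ^ xl) & (resultl ^ yl)) >> 63);
}

// Reconstruct the signed value represented by LEN HWIs.
__int128 decode (const uint64_t val[2], unsigned int len)
{
  if (len == 1)
    return (int64_t) val[0];
  return ((__int128) (int64_t) val[1] << 64) | val[0];
}
```

For example, `INT64_MAX + 1` overflows the low HWI and needs two HWIs, while `-1 + -1` stays within one.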
/* Return X + Y.  Treat X and Y as having the signedness given by SGN
   and indicate in *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::add (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl + yl;
      if (sgn == SIGNED)
        {
          if ((((resultl ^ xl) & (resultl ^ yl))
               >> (precision - 1)) & 1)
            {
              if (xl > resultl)
                *overflow = OVF_UNDERFLOW;
              else if (xl < resultl)
                *overflow = OVF_OVERFLOW;
              else
                *overflow = OVF_NONE;
            }
          else
            *overflow = OVF_NONE;
        }
      else
        *overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
                     < (xl << (HOST_BITS_PER_WIDE_INT - precision)))
          ? OVF_OVERFLOW : OVF_NONE;
      val[0] = resultl;
      result.set_len (1);
    }
  else
    result.set_len (add_large (val, xi.val, xi.len,
                               yi.val, yi.len, precision,
                               sgn, overflow));
  return result;
}
/* Return X - Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sub (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () - yi.ulow ();
      result.set_len (1);
    }
  /* If the precision is known at compile time to be greater than
     HOST_BITS_PER_WIDE_INT, we can optimize the single-HWI case
     knowing that (a) all bits in those HWIs are significant and
     (b) the result has room for at least two HWIs.  This provides
     a fast path for things like offset_int and widest_int.

     The STATIC_CONSTANT_P test prevents this path from being
     used for wide_ints.  wide_ints with precisions greater than
     HOST_BITS_PER_WIDE_INT are relatively rare and there's not much
     point handling them inline.  */
  else if (STATIC_CONSTANT_P (precision > HOST_BITS_PER_WIDE_INT)
           && __builtin_expect (xi.len + yi.len == 2, true))
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl - yl;
      val[0] = resultl;
      val[1] = (HOST_WIDE_INT) resultl < 0 ? 0 : -1;
      result.set_len (1 + (((resultl ^ xl) & (xl ^ yl))
                           >> (HOST_BITS_PER_WIDE_INT - 1)));
    }
  else
    result.set_len (sub_large (val, xi.val, xi.len,
                               yi.val, yi.len, precision,
                               UNSIGNED, 0));
  return result;
}
/* Return X - Y.  Treat X and Y as having the signedness given by SGN
   and indicate in *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sub (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      unsigned HOST_WIDE_INT xl = xi.ulow ();
      unsigned HOST_WIDE_INT yl = yi.ulow ();
      unsigned HOST_WIDE_INT resultl = xl - yl;
      if (sgn == SIGNED)
        {
          if ((((xl ^ yl) & (resultl ^ xl)) >> (precision - 1)) & 1)
            {
              if (xl > yl)
                *overflow = OVF_UNDERFLOW;
              else if (xl < yl)
                *overflow = OVF_OVERFLOW;
              else
                *overflow = OVF_NONE;
            }
          else
            *overflow = OVF_NONE;
        }
      else
        *overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
                     > (xl << (HOST_BITS_PER_WIDE_INT - precision)))
          ? OVF_UNDERFLOW : OVF_NONE;
      val[0] = resultl;
      result.set_len (1);
    }
  else
    result.set_len (sub_large (val, xi.val, xi.len,
                               yi.val, yi.len, precision,
                               sgn, overflow));
  return result;
}
/* Return X * Y.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mul (const T1 &x, const T2 &y)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  if (precision <= HOST_BITS_PER_WIDE_INT)
    {
      val[0] = xi.ulow () * yi.ulow ();
      result.set_len (1);
    }
  else
    result.set_len (mul_internal (val, xi.val, xi.len, yi.val, yi.len,
                                  precision, UNSIGNED, 0, false));
  return result;
}

/* Return X * Y.  Treat X and Y as having the signedness given by SGN
   and indicate in *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mul (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  result.set_len (mul_internal (val, xi.val, xi.len,
                                yi.val, yi.len, precision,
                                sgn, overflow, false));
  return result;
}

/* Return X * Y, treating both X and Y as signed values.  Indicate in
   *OVERFLOW whether the operation overflowed.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::smul (const T1 &x, const T2 &y, overflow_type *overflow)
{
  return mul (x, y, SIGNED, overflow);
}

/* Return X * Y, treating both X and Y as unsigned values.  Indicate in
   *OVERFLOW if the result overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::umul (const T1 &x, const T2 &y, overflow_type *overflow)
{
  return mul (x, y, UNSIGNED, overflow);
}

/* Perform a widening multiplication of X and Y, extending the values
   according to SGN, and return the high part of the result.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::mul_high (const T1 &x, const T2 &y, signop sgn)
{
  WI_BINARY_RESULT_VAR (result, val, T1, x, T2, y);
  unsigned int precision = get_precision (result);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y, precision);
  result.set_len (mul_internal (val, xi.val, xi.len,
                                yi.val, yi.len, precision,
                                sgn, 0, true));
  return result;
}
/* Return X / Y, rounding towards 0.  Treat X and Y as having the
   signedness given by SGN.  Indicate in *OVERFLOW if the result
   overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_trunc (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  quotient.set_len (divmod_internal (quotient_val, 0, 0, xi.val, xi.len,
                                     precision,
                                     yi.val, yi.len, yi.precision,
                                     sgn, overflow));
  return quotient;
}

/* Return X / Y, rounding towards 0.  Treat X and Y as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sdiv_trunc (const T1 &x, const T2 &y)
{
  return div_trunc (x, y, SIGNED);
}

/* Return X / Y, rounding towards 0.  Treat X and Y as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::udiv_trunc (const T1 &x, const T2 &y)
{
  return div_trunc (x, y, UNSIGNED);
}
/* Return X / Y, rounding towards -inf.  Treat X and Y as having the
   signedness given by SGN.  Indicate in *OVERFLOW if the result
   overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_floor (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
                                     &remainder_len, remainder_val,
                                     xi.val, xi.len, precision,
                                     yi.val, yi.len, yi.precision, sgn,
                                     overflow));
  remainder.set_len (remainder_len);
  if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn) && remainder != 0)
    return quotient - 1;
  return quotient;
}
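As an editorial aside (not part of GCC), the floor adjustment above can be sketched on plain `int64_t`: C++ `/` truncates towards zero, and when the operands have opposite signs and the division is inexact, the truncated quotient is one above the floored one. The helper name is hypothetical.

```cpp
#include <cstdint>

// Floor division on int64_t, mirroring wi::div_floor's adjustment.
int64_t div_floor_sketch (int64_t x, int64_t y)
{
  int64_t q = x / y;		// truncates towards zero
  int64_t r = x % y;
  if ((x < 0) != (y < 0) && r != 0)
    return q - 1;		// step down to the floor
  return q;
}
```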
/* Return X / Y, rounding towards -inf.  Treat X and Y as signed values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::sdiv_floor (const T1 &x, const T2 &y)
{
  return div_floor (x, y, SIGNED);
}

/* Return X / Y, rounding towards -inf.  Treat X and Y as unsigned values.  */
/* ??? Why do we have both this and udiv_trunc?  Aren't they the same?  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::udiv_floor (const T1 &x, const T2 &y)
{
  return div_floor (x, y, UNSIGNED);
}
/* Return X / Y, rounding towards +inf.  Treat X and Y as having the
   signedness given by SGN.  Indicate in *OVERFLOW if the result
   overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_ceil (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
                                     &remainder_len, remainder_val,
                                     xi.val, xi.len, precision,
                                     yi.val, yi.len, yi.precision, sgn,
                                     overflow));
  remainder.set_len (remainder_len);
  if (wi::neg_p (x, sgn) == wi::neg_p (y, sgn) && remainder != 0)
    return quotient + 1;
  return quotient;
}
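As an editorial aside (not part of GCC), the ceiling adjustment is the mirror image of the floor case: a truncated quotient is one below the ceiling exactly when the operands have the same sign and the division is inexact. A minimal `int64_t` sketch (hypothetical helper name):

```cpp
#include <cstdint>

// Ceiling division on int64_t, mirroring wi::div_ceil's adjustment.
int64_t div_ceil_sketch (int64_t x, int64_t y)
{
  int64_t q = x / y;		// truncates towards zero
  int64_t r = x % y;
  if ((x < 0) == (y < 0) && r != 0)
    return q + 1;		// step up to the ceiling
  return q;
}
```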
/* Return X / Y, rounding towards +inf.  Treat X and Y as unsigned values.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::udiv_ceil (const T1 &x, const T2 &y)
{
  return div_ceil (x, y, UNSIGNED);
}
/* Return X / Y, rounding to nearest with ties away from zero.
   Treat X and Y as having the signedness given by SGN.  Indicate
   in *OVERFLOW if the result overflows.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::div_round (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
                                     &remainder_len, remainder_val,
                                     xi.val, xi.len, precision,
                                     yi.val, yi.len, yi.precision, sgn,
                                     overflow));
  remainder.set_len (remainder_len);

  if (remainder != 0)
    {
      if (sgn == SIGNED)
        {
          WI_BINARY_RESULT (T1, T2) abs_remainder = wi::abs (remainder);
          if (wi::geu_p (abs_remainder, wi::sub (wi::abs (y), abs_remainder)))
            {
              if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn))
                return quotient - 1;
              else
                return quotient + 1;
            }
        }
      else
        {
          if (wi::geu_p (remainder, wi::sub (y, remainder)))
            return quotient + 1;
        }
    }
  return quotient;
}
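As an editorial aside (not part of GCC), the signed branch of the rounding test above, `|r| >= |y| - |r|` (equivalently `2|r| >= |y|`), can be sketched on `int64_t`; the comparison deliberately makes exact ties round away from zero. The helper name is hypothetical.

```cpp
#include <cstdint>

// Round-to-nearest division with ties away from zero, mirroring the
// signed branch of wi::div_round.
int64_t div_round_sketch (int64_t x, int64_t y)
{
  int64_t q = x / y;		// truncating quotient
  int64_t r = x % y;
  if (r != 0)
    {
      uint64_t ar = r < 0 ? - (uint64_t) r : (uint64_t) r;  // |r|
      uint64_t ay = y < 0 ? - (uint64_t) y : (uint64_t) y;  // |y|
      /* Round away from zero when |r| >= |y| - |r|, i.e. 2|r| >= |y|.  */
      if (ar >= ay - ar)
        return (x < 0) != (y < 0) ? q - 1 : q + 1;
    }
  return q;
}
```

For example, 7/2 rounds to 4 and -7/2 to -4 (ties away from zero), while 4/3 stays at 1.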
/* Return X / Y, rounding towards 0.  Treat X and Y as having the
   signedness given by SGN.  Store the remainder in *REMAINDER_PTR.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::divmod_trunc (const T1 &x, const T2 &y, signop sgn,
                  WI_BINARY_RESULT (T1, T2) *remainder_ptr)
{
  WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
  WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
  unsigned int precision = get_precision (quotient);
  WIDE_INT_REF_FOR (T1) xi (x, precision);
  WIDE_INT_REF_FOR (T2) yi (y);

  unsigned int remainder_len;
  quotient.set_len (divmod_internal (quotient_val,
                                     &remainder_len, remainder_val,
                                     xi.val, xi.len, precision,
                                     yi.val, yi.len, yi.precision, sgn, 0));
  remainder.set_len (remainder_len);

  *remainder_ptr = remainder;
  return quotient;
}
/* Compute the greatest common divisor of two numbers A and B using
   Euclid's algorithm.  */
template <typename T1, typename T2>
inline WI_BINARY_RESULT (T1, T2)
wi::gcd (const T1 &a, const T2 &b, signop sgn)
{
  T1 x, y, z;

  x = wi::abs (a);
  y = wi::abs (b);

  while (gt_p (x, 0, sgn))
    {
      z = mod_trunc (y, x, sgn);
      y = x;
      x = z;
    }

  return y;
}
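As an editorial aside (not part of GCC), the Euclid loop above can be sketched on `uint64_t` magnitudes (the wide-int version first takes absolute values, so only non-negative inputs matter here); the helper name is hypothetical:

```cpp
#include <cstdint>

// Euclid's algorithm, mirroring wi::gcd's loop shape exactly:
// repeatedly replace (x, y) with (y mod x, x) until x reaches 0.
uint64_t gcd_sketch (uint64_t a, uint64_t b)
{
  uint64_t x = a, y = b;
  while (x > 0)
    {
      uint64_t z = y % x;	// plays the role of mod_trunc
      y = x;
      x = z;
    }
  return y;			// gcd (a, 0) == a by convention
}
```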
2831 /* Compute X / Y, rouding towards 0, and return the remainder.
2832 Treat X and Y as having the signedness given by SGN. Indicate
2833 in *OVERFLOW if the division overflows. */
2834 template <typename T1, typename T2>
2835 inline WI_BINARY_RESULT (T1, T2)
2836 wi::mod_trunc (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
2838 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2839 unsigned int precision = get_precision (remainder);
2840 WIDE_INT_REF_FOR (T1) xi (x, precision);
2841 WIDE_INT_REF_FOR (T2) yi (y);
2843 unsigned int remainder_len;
2844 divmod_internal (0, &remainder_len, remainder_val,
2845 xi.val, xi.len, precision,
2846 yi.val, yi.len, yi.precision, sgn, overflow);
2847 remainder.set_len (remainder_len);
2849 return remainder;
2852 /* Compute X / Y, rounding towards 0, and return the remainder.
2853 Treat X and Y as signed values. */
2854 template <typename T1, typename T2>
2855 inline WI_BINARY_RESULT (T1, T2)
2856 wi::smod_trunc (const T1 &x, const T2 &y)
2858 return mod_trunc (x, y, SIGNED);
2861 /* Compute X / Y, rounding towards 0, and return the remainder.
2862 Treat X and Y as unsigned values. */
2863 template <typename T1, typename T2>
2864 inline WI_BINARY_RESULT (T1, T2)
2865 wi::umod_trunc (const T1 &x, const T2 &y)
2867 return mod_trunc (x, y, UNSIGNED);
2870 /* Compute X / Y, rounding towards -inf, and return the remainder.
2871 Treat X and Y as having the signedness given by SGN. Indicate
2872 in *OVERFLOW if the division overflows. */
2873 template <typename T1, typename T2>
2874 inline WI_BINARY_RESULT (T1, T2)
2875 wi::mod_floor (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
2877 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2878 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2879 unsigned int precision = get_precision (quotient);
2880 WIDE_INT_REF_FOR (T1) xi (x, precision);
2881 WIDE_INT_REF_FOR (T2) yi (y);
2883 unsigned int remainder_len;
2884 quotient.set_len (divmod_internal (quotient_val,
2885 &remainder_len, remainder_val,
2886 xi.val, xi.len, precision,
2887 yi.val, yi.len, yi.precision, sgn,
2888 overflow));
2889 remainder.set_len (remainder_len);
2891 if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn) && remainder != 0)
2892 return remainder + y;
2893 return remainder;
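Since C++'s built-in % truncates towards zero, the sign-based adjustment above can be illustrated on host integers (the helper name `mod_floor_int` is made up for this example):

```cpp
#include <cassert>

// Sketch of the adjustment wi::mod_floor applies: C's % truncates
// towards zero, so when the operands have opposite signs and the
// remainder is nonzero, adding Y back yields the floor remainder.
int mod_floor_int (int x, int y)
{
  int r = x % y;
  if ((x < 0) != (y < 0) && r != 0)
    r += y;
  return r;
}
```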
2896 /* Compute X / Y, rounding towards -inf, and return the remainder.
2897 Treat X and Y as unsigned values. */
2898 /* ??? Why do we have both this and umod_trunc. Aren't they the same? */
2899 template <typename T1, typename T2>
2900 inline WI_BINARY_RESULT (T1, T2)
2901 wi::umod_floor (const T1 &x, const T2 &y)
2903 return mod_floor (x, y, UNSIGNED);
2906 /* Compute X / Y, rounding towards +inf, and return the remainder.
2907 Treat X and Y as having the signedness given by SGN. Indicate
2908 in *OVERFLOW if the division overflows. */
2909 template <typename T1, typename T2>
2910 inline WI_BINARY_RESULT (T1, T2)
2911 wi::mod_ceil (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
2913 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2914 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2915 unsigned int precision = get_precision (quotient);
2916 WIDE_INT_REF_FOR (T1) xi (x, precision);
2917 WIDE_INT_REF_FOR (T2) yi (y);
2919 unsigned int remainder_len;
2920 quotient.set_len (divmod_internal (quotient_val,
2921 &remainder_len, remainder_val,
2922 xi.val, xi.len, precision,
2923 yi.val, yi.len, yi.precision, sgn,
2924 overflow));
2925 remainder.set_len (remainder_len);
2927 if (wi::neg_p (x, sgn) == wi::neg_p (y, sgn) && remainder != 0)
2928 return remainder - y;
2929 return remainder;
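The ceiling case mirrors the floor case with the sign test inverted; a host-integer sketch (the helper name `mod_ceil_int` is invented for illustration):

```cpp
#include <cassert>

// Sketch of wi::mod_ceil's adjustment: when the operands have the
// same sign and the truncating remainder is nonzero, subtracting Y
// gives the remainder for a quotient rounded towards +inf.
int mod_ceil_int (int x, int y)
{
  int r = x % y;
  if ((x < 0) == (y < 0) && r != 0)
    r -= y;
  return r;
}
```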
2932 /* Compute X / Y, rounding towards nearest with ties away from zero,
2933 and return the remainder. Treat X and Y as having the signedness
2934 given by SGN. Indicate in *OVERFLOW if the division overflows. */
2935 template <typename T1, typename T2>
2936 inline WI_BINARY_RESULT (T1, T2)
2937 wi::mod_round (const T1 &x, const T2 &y, signop sgn, overflow_type *overflow)
2939 WI_BINARY_RESULT_VAR (quotient, quotient_val, T1, x, T2, y);
2940 WI_BINARY_RESULT_VAR (remainder, remainder_val, T1, x, T2, y);
2941 unsigned int precision = get_precision (quotient);
2942 WIDE_INT_REF_FOR (T1) xi (x, precision);
2943 WIDE_INT_REF_FOR (T2) yi (y);
2945 unsigned int remainder_len;
2946 quotient.set_len (divmod_internal (quotient_val,
2947 &remainder_len, remainder_val,
2948 xi.val, xi.len, precision,
2949 yi.val, yi.len, yi.precision, sgn,
2950 overflow));
2951 remainder.set_len (remainder_len);
2953 if (remainder != 0)
2955 if (sgn == SIGNED)
2957 WI_BINARY_RESULT (T1, T2) abs_remainder = wi::abs (remainder);
2958 if (wi::geu_p (abs_remainder, wi::sub (wi::abs (y), abs_remainder)))
2960 if (wi::neg_p (x, sgn) != wi::neg_p (y, sgn))
2961 return remainder + y;
2962 else
2963 return remainder - y;
2966 else
2968 if (wi::geu_p (remainder, wi::sub (y, remainder)))
2969 return remainder - y;
2972 return remainder;
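For signed operands, the test above asks whether the remainder's magnitude is at least half of |Y|; a host-integer sketch of the same logic (`mod_round_int` is an illustrative name, not wi:: API):

```cpp
#include <cassert>
#include <cstdlib>

// Sketch of wi::mod_round for signed operands: start from the
// truncating remainder, and if its magnitude is at least half of
// |Y|, step it across zero (so ties round away from zero).
int mod_round_int (int x, int y)
{
  int r = x % y;
  if (r != 0 && std::abs (r) >= std::abs (y) - std::abs (r))
    r += ((x < 0) != (y < 0)) ? y : -y;
  return r;
}
```

For example, 10 / 4 rounds to 3 (the tie 2.5 goes away from zero), leaving the remainder 10 - 3*4 = -2.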
2975 /* Return true if X is a multiple of Y. Treat X and Y as having the
2976 signedness given by SGN. */
2977 template <typename T1, typename T2>
2978 inline bool
2979 wi::multiple_of_p (const T1 &x, const T2 &y, signop sgn)
2981 return wi::mod_trunc (x, y, sgn) == 0;
2984 /* Return true if X is a multiple of Y, storing X / Y in *RES if so.
2985 Treat X and Y as having the signedness given by SGN. */
2986 template <typename T1, typename T2>
2987 inline bool
2988 wi::multiple_of_p (const T1 &x, const T2 &y, signop sgn,
2989 WI_BINARY_RESULT (T1, T2) *res)
2991 WI_BINARY_RESULT (T1, T2) remainder;
2992 WI_BINARY_RESULT (T1, T2) quotient
2993 = divmod_trunc (x, y, sgn, &remainder);
2994 if (remainder == 0)
2996 *res = quotient;
2997 return true;
2999 return false;
3002 /* Return X << Y. Return 0 if Y is greater than or equal to
3003 the precision of X. */
3004 template <typename T1, typename T2>
3005 inline WI_UNARY_RESULT (T1)
3006 wi::lshift (const T1 &x, const T2 &y)
3008 WI_UNARY_RESULT_VAR (result, val, T1, x);
3009 unsigned int precision = get_precision (result);
3010 WIDE_INT_REF_FOR (T1) xi (x, precision);
3011 WIDE_INT_REF_FOR (T2) yi (y);
3012 /* Handle the simple cases quickly. */
3013 if (geu_p (yi, precision))
3015 val[0] = 0;
3016 result.set_len (1);
3018 else
3020 unsigned int shift = yi.to_uhwi ();
3021 /* For fixed-precision integers like offset_int and widest_int,
3022 handle the case where the shift value is constant and the
3023 result is a single nonnegative HWI (meaning that we don't
3024 need to worry about val[1]). This is particularly common
3025 for converting a byte count to a bit count.
3027 For variable-precision integers like wide_int, handle HWI
3028 and sub-HWI integers inline. */
3029 if (STATIC_CONSTANT_P (xi.precision > HOST_BITS_PER_WIDE_INT)
3030 ? (STATIC_CONSTANT_P (shift < HOST_BITS_PER_WIDE_INT - 1)
3031 && xi.len == 1
3032 && xi.val[0] <= (HOST_WIDE_INT) ((unsigned HOST_WIDE_INT)
3033 HOST_WIDE_INT_MAX >> shift))
3034 : precision <= HOST_BITS_PER_WIDE_INT)
3036 val[0] = xi.ulow () << shift;
3037 result.set_len (1);
3039 else
3040 result.set_len (lshift_large (val, xi.val, xi.len,
3041 precision, shift));
3043 return result;
3046 /* Return X >> Y, using a logical shift. Return 0 if Y is greater than
3047 or equal to the precision of X. */
3048 template <typename T1, typename T2>
3049 inline WI_UNARY_RESULT (T1)
3050 wi::lrshift (const T1 &x, const T2 &y)
3052 WI_UNARY_RESULT_VAR (result, val, T1, x);
3053 /* Do things in the precision of the input rather than the output,
3054 since the result can be no larger than that. */
3055 WIDE_INT_REF_FOR (T1) xi (x);
3056 WIDE_INT_REF_FOR (T2) yi (y);
3057 /* Handle the simple cases quickly. */
3058 if (geu_p (yi, xi.precision))
3060 val[0] = 0;
3061 result.set_len (1);
3063 else
3065 unsigned int shift = yi.to_uhwi ();
3066 /* For fixed-precision integers like offset_int and widest_int,
3067 handle the case where the shift value is constant and the
3068 shifted value is a single nonnegative HWI (meaning that all
3069 bits above the HWI are zero). This is particularly common
3070 for converting a bit count to a byte count.
3072 For variable-precision integers like wide_int, handle HWI
3073 and sub-HWI integers inline. */
3074 if (STATIC_CONSTANT_P (xi.precision > HOST_BITS_PER_WIDE_INT)
3075 ? (shift < HOST_BITS_PER_WIDE_INT
3076 && xi.len == 1
3077 && xi.val[0] >= 0)
3078 : xi.precision <= HOST_BITS_PER_WIDE_INT)
3080 val[0] = xi.to_uhwi () >> shift;
3081 result.set_len (1);
3083 else
3084 result.set_len (lrshift_large (val, xi.val, xi.len, xi.precision,
3085 get_precision (result), shift));
3087 return result;
3090 /* Return X >> Y, using an arithmetic shift. Return a sign mask if
3091 Y is greater than or equal to the precision of X. */
3092 template <typename T1, typename T2>
3093 inline WI_UNARY_RESULT (T1)
3094 wi::arshift (const T1 &x, const T2 &y)
3096 WI_UNARY_RESULT_VAR (result, val, T1, x);
3097 /* Do things in the precision of the input rather than the output,
3098 since the result can be no larger than that. */
3099 WIDE_INT_REF_FOR (T1) xi (x);
3100 WIDE_INT_REF_FOR (T2) yi (y);
3101 /* Handle the simple cases quickly. */
3102 if (geu_p (yi, xi.precision))
3104 val[0] = sign_mask (x);
3105 result.set_len (1);
3107 else
3109 unsigned int shift = yi.to_uhwi ();
3110 if (xi.precision <= HOST_BITS_PER_WIDE_INT)
3112 val[0] = sext_hwi (xi.ulow () >> shift, xi.precision - shift);
3113 result.set_len (1, true);
3115 else
3116 result.set_len (arshift_large (val, xi.val, xi.len, xi.precision,
3117 get_precision (result), shift));
3119 return result;
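In the single-HWI branch above, the logical shift is followed by a sign extension from the surviving PRECISION - SHIFT bits (that is what sext_hwi does). A stand-alone sketch of that step, assuming shift < precision as the branch guarantees (`arshift_prec` is an invented name):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the single-HWI case of wi::arshift: shift logically,
// then sign-extend from the remaining (precision - shift) bits
// using the xor/subtract identity.  Requires shift < precision.
int64_t arshift_prec (uint64_t x, unsigned shift, unsigned precision)
{
  uint64_t v = x >> shift;
  unsigned bits = precision - shift;
  uint64_t sign = (uint64_t) 1 << (bits - 1);
  return (int64_t) ((v ^ sign) - sign);  // sign-extend from BITS bits
}
```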
3122 /* Return X >> Y, using an arithmetic shift if SGN is SIGNED and a
3123 logical shift otherwise. */
3124 template <typename T1, typename T2>
3125 inline WI_UNARY_RESULT (T1)
3126 wi::rshift (const T1 &x, const T2 &y, signop sgn)
3128 if (sgn == UNSIGNED)
3129 return lrshift (x, y);
3130 else
3131 return arshift (x, y);
3134 /* Return the result of rotating the low WIDTH bits of X left by Y
3135 bits and zero-extending the result. Use a full-width rotate if
3136 WIDTH is zero. */
3137 template <typename T1, typename T2>
3138 WI_UNARY_RESULT (T1)
3139 wi::lrotate (const T1 &x, const T2 &y, unsigned int width)
3141 unsigned int precision = get_binary_precision (x, x);
3142 if (width == 0)
3143 width = precision;
3144 WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
3145 WI_UNARY_RESULT (T1) left = wi::lshift (x, ymod);
3146 WI_UNARY_RESULT (T1) right = wi::lrshift (x, wi::sub (width, ymod));
3147 if (width != precision)
3148 return wi::zext (left, width) | wi::zext (right, width);
3149 return left | right;
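The structure above can be seen on a fixed 32-bit width, where the extra care is that C++ shifts by the full width are undefined (wi::lshift/lrshift handle that case by returning 0 instead). The helper name `lrotate32` is illustrative:

```cpp
#include <cassert>
#include <cstdint>

// Fixed-width analogue of wi::lrotate: reduce the rotate count
// modulo the width, then combine a left shift with a logical right
// shift of the bits rotated out the top.
uint32_t lrotate32 (uint32_t x, unsigned y)
{
  const unsigned w = 32;
  unsigned ymod = y % w;
  if (ymod == 0)
    return x;  // avoid the undefined full-width shift
  return (x << ymod) | (x >> (w - ymod));
}
```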
3152 /* Return the result of rotating the low WIDTH bits of X right by Y
3153 bits and zero-extending the result. Use a full-width rotate if
3154 WIDTH is zero. */
3155 template <typename T1, typename T2>
3156 WI_UNARY_RESULT (T1)
3157 wi::rrotate (const T1 &x, const T2 &y, unsigned int width)
3159 unsigned int precision = get_binary_precision (x, x);
3160 if (width == 0)
3161 width = precision;
3162 WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
3163 WI_UNARY_RESULT (T1) right = wi::lrshift (x, ymod);
3164 WI_UNARY_RESULT (T1) left = wi::lshift (x, wi::sub (width, ymod));
3165 if (width != precision)
3166 return wi::zext (left, width) | wi::zext (right, width);
3167 return left | right;
3170 /* Return 0 if the number of 1s in X is even and 1 if the number of 1s
3171 is odd. */
3172 inline int
3173 wi::parity (const wide_int_ref &x)
3175 return popcount (x) & 1;
3178 /* Extract WIDTH bits from X, starting at BITPOS. */
3179 template <typename T>
3180 inline unsigned HOST_WIDE_INT
3181 wi::extract_uhwi (const T &x, unsigned int bitpos, unsigned int width)
3183 unsigned precision = get_precision (x);
3184 if (precision < bitpos + width)
3185 precision = bitpos + width;
3186 WIDE_INT_REF_FOR (T) xi (x, precision);
3188 /* Handle this rare case after the above, so that we assert about
3189 bogus BITPOS values. */
3190 if (width == 0)
3191 return 0;
3193 unsigned int start = bitpos / HOST_BITS_PER_WIDE_INT;
3194 unsigned int shift = bitpos % HOST_BITS_PER_WIDE_INT;
3195 unsigned HOST_WIDE_INT res = xi.elt (start);
3196 res >>= shift;
3197 if (shift + width > HOST_BITS_PER_WIDE_INT)
3199 unsigned HOST_WIDE_INT upper = xi.elt (start + 1);
3200 res |= upper << (-shift % HOST_BITS_PER_WIDE_INT);
3202 return zext_hwi (res, width);
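The interesting case above is a field that crosses a 64-bit element boundary, where the upper element is shifted up to fill the vacated bits before the final masking. A self-contained sketch on a raw array of 64-bit words (`extract_bits` and `demo_words` are invented for the example):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the word-crossing case in wi::extract_uhwi: the field
// starts in one 64-bit element and may spill into the next, so the
// upper word is shifted left by (64 - shift) before being merged.
uint64_t extract_bits (const uint64_t *words, unsigned bitpos, unsigned width)
{
  unsigned start = bitpos / 64;
  unsigned shift = bitpos % 64;
  uint64_t res = words[start] >> shift;
  if (shift + width > 64)  // implies shift > 0, so 64 - shift < 64
    res |= words[start + 1] << (64 - shift);
  return width == 64 ? res : res & (((uint64_t) 1 << width) - 1);
}

// Two elements: bits 56..63 are 0xFF, bit 64 is set.
const uint64_t demo_words[2] = { 0xFF00000000000000ull, 0x1ull };
```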
3205 /* Return the minimum precision needed to store X with sign SGN. */
3206 template <typename T>
3207 inline unsigned int
3208 wi::min_precision (const T &x, signop sgn)
3210 if (sgn == SIGNED)
3211 return get_precision (x) - clrsb (x);
3212 else
3213 return get_precision (x) - clz (x);
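A 64-bit analogue may clarify the two cases: UNSIGNED needs everything above the leading zeros, while SIGNED needs the unsigned width of the value (after complementing negatives) plus one sign bit, which is what precision - clrsb computes. The helper names are invented; `__builtin_clzll` is the GCC count-leading-zeros builtin, undefined for a zero argument, hence the guards:

```cpp
#include <cassert>
#include <cstdint>

// UNSIGNED: position of the highest set bit plus one (64 - clz).
unsigned min_precision_u64 (uint64_t x)
{
  return x ? 64 - __builtin_clzll (x) : 0;
}

// SIGNED: complement negatives, then unsigned width plus a sign bit;
// equivalent to 64 - clrsb (x).
unsigned min_precision_s64 (int64_t x)
{
  uint64_t ux = x < 0 ? ~(uint64_t) x : (uint64_t) x;
  return (ux ? 64 - __builtin_clzll (ux) : 0) + 1;
}
```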
3216 #define SIGNED_BINARY_PREDICATE(OP, F) \
3217 template <typename T1, typename T2> \
3218 inline WI_SIGNED_BINARY_PREDICATE_RESULT (T1, T2) \
3219 OP (const T1 &x, const T2 &y) \
3221 return wi::F (x, y); \
3224 SIGNED_BINARY_PREDICATE (operator <, lts_p)
3225 SIGNED_BINARY_PREDICATE (operator <=, les_p)
3226 SIGNED_BINARY_PREDICATE (operator >, gts_p)
3227 SIGNED_BINARY_PREDICATE (operator >=, ges_p)
3229 #undef SIGNED_BINARY_PREDICATE
3231 #define UNARY_OPERATOR(OP, F) \
3232 template<typename T> \
3233 WI_UNARY_RESULT (generic_wide_int<T>) \
3234 OP (const generic_wide_int<T> &x) \
3236 return wi::F (x); \
3239 #define BINARY_PREDICATE(OP, F) \
3240 template<typename T1, typename T2> \
3241 WI_BINARY_PREDICATE_RESULT (T1, T2) \
3242 OP (const T1 &x, const T2 &y) \
3244 return wi::F (x, y); \
3247 #define BINARY_OPERATOR(OP, F) \
3248 template<typename T1, typename T2> \
3249 WI_BINARY_OPERATOR_RESULT (T1, T2) \
3250 OP (const T1 &x, const T2 &y) \
3252 return wi::F (x, y); \
3255 #define SHIFT_OPERATOR(OP, F) \
3256 template<typename T1, typename T2> \
3257 WI_BINARY_OPERATOR_RESULT (T1, T1) \
3258 OP (const T1 &x, const T2 &y) \
3260 return wi::F (x, y); \
3263 UNARY_OPERATOR (operator ~, bit_not)
3264 UNARY_OPERATOR (operator -, neg)
3265 BINARY_PREDICATE (operator ==, eq_p)
3266 BINARY_PREDICATE (operator !=, ne_p)
3267 BINARY_OPERATOR (operator &, bit_and)
3268 BINARY_OPERATOR (operator |, bit_or)
3269 BINARY_OPERATOR (operator ^, bit_xor)
3270 BINARY_OPERATOR (operator +, add)
3271 BINARY_OPERATOR (operator -, sub)
3272 BINARY_OPERATOR (operator *, mul)
3273 SHIFT_OPERATOR (operator <<, lshift)
3275 #undef UNARY_OPERATOR
3276 #undef BINARY_PREDICATE
3277 #undef BINARY_OPERATOR
3278 #undef SHIFT_OPERATOR
3280 template <typename T1, typename T2>
3281 inline WI_SIGNED_SHIFT_RESULT (T1, T2)
3282 operator >> (const T1 &x, const T2 &y)
3284 return wi::arshift (x, y);
3287 template <typename T1, typename T2>
3288 inline WI_SIGNED_SHIFT_RESULT (T1, T2)
3289 operator / (const T1 &x, const T2 &y)
3291 return wi::sdiv_trunc (x, y);
3294 template <typename T1, typename T2>
3295 inline WI_SIGNED_SHIFT_RESULT (T1, T2)
3296 operator % (const T1 &x, const T2 &y)
3298 return wi::smod_trunc (x, y);
3301 template<typename T>
3302 void
3303 gt_ggc_mx (generic_wide_int <T> *)
3307 template<typename T>
3308 void
3309 gt_pch_nx (generic_wide_int <T> *)
3313 template<typename T>
3314 void
3315 gt_pch_nx (generic_wide_int <T> *, void (*) (void *, void *), void *)
3319 template<int N>
3320 void
3321 gt_ggc_mx (trailing_wide_ints <N> *)
3325 template<int N>
3326 void
3327 gt_pch_nx (trailing_wide_ints <N> *)
3331 template<int N>
3332 void
3333 gt_pch_nx (trailing_wide_ints <N> *, void (*) (void *, void *), void *)
3337 namespace wi
3339 /* Used for overloaded functions in which the only other acceptable
3340 scalar type is a pointer. It stops a plain 0 from being treated
3341 as a null pointer. */
3342 struct never_used1 {};
3343 struct never_used2 {};
3345 wide_int min_value (unsigned int, signop);
3346 wide_int min_value (never_used1 *);
3347 wide_int min_value (never_used2 *);
3348 wide_int max_value (unsigned int, signop);
3349 wide_int max_value (never_used1 *);
3350 wide_int max_value (never_used2 *);
3352 /* FIXME: this is target dependent, so should be elsewhere.
3353 It also seems to assume that CHAR_BIT == BITS_PER_UNIT. */
3354 wide_int from_buffer (const unsigned char *, unsigned int);
3356 #ifndef GENERATOR_FILE
3357 void to_mpz (const wide_int_ref &, mpz_t, signop);
3358 #endif
3360 wide_int mask (unsigned int, bool, unsigned int);
3361 wide_int shifted_mask (unsigned int, unsigned int, bool, unsigned int);
3362 wide_int set_bit_in_zero (unsigned int, unsigned int);
3363 wide_int insert (const wide_int &x, const wide_int &y, unsigned int,
3364 unsigned int);
3365 wide_int round_down_for_mask (const wide_int &, const wide_int &);
3366 wide_int round_up_for_mask (const wide_int &, const wide_int &);
3368 template <typename T>
3369 T mask (unsigned int, bool);
3371 template <typename T>
3372 T shifted_mask (unsigned int, unsigned int, bool);
3374 template <typename T>
3375 T set_bit_in_zero (unsigned int);
3377 unsigned int mask (HOST_WIDE_INT *, unsigned int, bool, unsigned int);
3378 unsigned int shifted_mask (HOST_WIDE_INT *, unsigned int, unsigned int,
3379 bool, unsigned int);
3380 unsigned int from_array (HOST_WIDE_INT *, const HOST_WIDE_INT *,
3381 unsigned int, unsigned int, bool);
3384 /* Return a PRECISION-bit integer in which the low WIDTH bits are set
3385 and the other bits are clear, or the inverse if NEGATE_P. */
3386 inline wide_int
3387 wi::mask (unsigned int width, bool negate_p, unsigned int precision)
3389 wide_int result = wide_int::create (precision);
3390 result.set_len (mask (result.write_val (), width, negate_p, precision));
3391 return result;
3394 /* Return a PRECISION-bit integer in which the low START bits are clear,
3395 the next WIDTH bits are set, and the other bits are clear,
3396 or the inverse if NEGATE_P. */
3397 inline wide_int
3398 wi::shifted_mask (unsigned int start, unsigned int width, bool negate_p,
3399 unsigned int precision)
3401 wide_int result = wide_int::create (precision);
3402 result.set_len (shifted_mask (result.write_val (), start, width, negate_p,
3403 precision));
3404 return result;
3407 /* Return a PRECISION-bit integer in which bit BIT is set and all the
3408 others are clear. */
3409 inline wide_int
3410 wi::set_bit_in_zero (unsigned int bit, unsigned int precision)
3412 return shifted_mask (bit, 1, false, precision);
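The relationship between these helpers can be sketched at a fixed 64-bit precision (the name `shifted_mask64` is illustrative): shifted_mask builds WIDTH set bits starting at START, mask is the START == 0 case, and set_bit_in_zero is the WIDTH == 1 case.

```cpp
#include <cassert>
#include <cstdint>

// Fixed-width sketch of wi::shifted_mask: clear the low START bits,
// set the next WIDTH bits, and invert the whole thing if NEGATE_P.
uint64_t shifted_mask64 (unsigned start, unsigned width, bool negate_p)
{
  uint64_t low = width < 64 ? ((uint64_t) 1 << width) - 1 : ~(uint64_t) 0;
  uint64_t m = low << start;
  return negate_p ? ~m : m;
}
```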
3415 /* Return an integer of type T in which the low WIDTH bits are set
3416 and the other bits are clear, or the inverse if NEGATE_P. */
3417 template <typename T>
3418 inline T
3419 wi::mask (unsigned int width, bool negate_p)
3421 STATIC_ASSERT (wi::int_traits<T>::precision);
3422 T result;
3423 result.set_len (mask (result.write_val (), width, negate_p,
3424 wi::int_traits <T>::precision));
3425 return result;
3428 /* Return an integer of type T in which the low START bits are clear,
3429 the next WIDTH bits are set, and the other bits are clear, or the
3430 inverse if NEGATE_P. */
3431 template <typename T>
3432 inline T
3433 wi::shifted_mask (unsigned int start, unsigned int width, bool negate_p)
3435 STATIC_ASSERT (wi::int_traits<T>::precision);
3436 T result;
3437 result.set_len (shifted_mask (result.write_val (), start, width,
3438 negate_p,
3439 wi::int_traits <T>::precision));
3440 return result;
3443 /* Return an integer of type T in which bit BIT is set and all the
3444 others are clear. */
3445 template <typename T>
3446 inline T
3447 wi::set_bit_in_zero (unsigned int bit)
3449 return shifted_mask <T> (bit, 1, false);
3452 /* Accumulate a set of overflows into OVERFLOW. */
3454 static inline void
3455 wi::accumulate_overflow (wi::overflow_type &overflow,
3456 wi::overflow_type suboverflow)
3458 if (!suboverflow)
3459 return;
3460 if (!overflow)
3461 overflow = suboverflow;
3462 else if (overflow != suboverflow)
3463 overflow = wi::OVF_UNKNOWN;
3466 #endif /* WIDE_INT_H */