\section{Implementation details}

\subsection{An interior point of a polyhedron}
\label{s:interior}

We often need a point that lies in the interior of a polyhedron.
The function \ai[\tt]{inner\_point} implements the following algorithm.
Each polyhedron $P$ can be written as the sum of a polytope $P'$ and a cone $C$
(the \ai{recession cone} or \ai{characteristic cone} of $P$).
Adding a positive multiple of the sum of the extremal rays of $C$ to
the \ai{barycenter}
$$
\frac 1 N \sum_i \vec v_i(\vec p)
$$
of $P'$, where $N$ is the number of vertices, results in a point
in the interior of $P$.
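As a concrete illustration of this algorithm (a hypothetical sketch, not the actual \ai[\tt]{inner\_point} code, which operates on barvinok's internal representation), assume $P$ is given by the vertices of $P'$ and the extremal rays of $C$:

```python
from fractions import Fraction

def inner_point(vertices, rays):
    """Sketch of the algorithm above: barycenter of the vertices of P'
    plus a positive multiple (here 1) of the sum of the extremal rays."""
    n = len(vertices)
    dim = len(vertices[0])
    barycenter = [sum(Fraction(v[i]) for v in vertices) / n
                  for i in range(dim)]
    # add the sum of the rays to move strictly into the recession cone
    return [barycenter[i] + sum(Fraction(r[i]) for r in rays)
            for i in range(dim)]

# P = conv{(0,0), (0,1)} + cone{(1,0)}, i.e. the strip 0 <= y <= 1, x >= 0;
# the result (1, 1/2) lies strictly inside P.
p = inner_point([(0, 0), (0, 1)], [(1, 0)])
```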
\subsection{The integer points in the fundamental parallelepiped of a simple cone}
\label{s:fundamental}

This section is based on \shortciteN[Lemma 5.1]{Barvinok1992volume} and
\shortciteN{Koeppe2006experiments}.

\sindex{simple}{cone}
\sindex{open}{facet}
\sindex{open}{ray}
\sindex{explicit}{representation}
In this section we deal exclusively with \ai{simple cone}s,
i.e., $d$-dimensional cones with $d$ extremal rays and $d$ facets.
\index{open facet}%
Some of the facets of these cones may be open.
Since we will mostly be dealing with cones in their
\ai{explicit representation}, we will have occasion to speak of
``\ai{open ray}s'', by which we mean that the facet not
containing the ray is open. (There is only one such facet because the cone
is simple.)
\sindex{fundamental}{parallelepiped}
\begin{definition}[Fundamental parallelepiped]
Let $K = \vec v + \poshull \lb\, \vec u_i \,\rb$ be
a closed (shifted) cone; then the \defindex{fundamental parallelepiped} $\Pi$
of $K$ is
$$
\Pi = \vec v +
\lb\, \sum_i \alpha_i \vec u_i \mid 0 \leq \alpha_i < 1 \,\rb.
$$
If some of the rays $\vec u_i$ of $K$ are open, then the constraints on
the corresponding coefficients $\alpha_i$ are such that $0 < \alpha_i \le 1$.
\end{definition}
\begin{lemma}[Integer points in the fundamental parallelepiped of a simple cone]
\label{l:fundamental}
Let $K = \vec v + \poshull \lb\, \vec u_i \,\rb$ be a closed simple cone
and let $A$ be the matrix with the generators $\vec u_i$ of $K$
as rows.
Furthermore, let $V A W^{-1} = S = \diag \vec s$ be the \indac{SNF} of $A$.
Then the integer points in the fundamental parallelepiped of $K$ are given
by
\begin{eqnarray}
\label{eq:parallelepiped}
\vec w^\T & = & \vec v^\T + \fractional{(\vec k^\T W - \vec v^\T) A^{-1}} A
\\
\nonumber
& = &
\vec v^\T +
\sum_{i=1}^d
\fractional{\sps{\sum_{j=1}^d k_j \vec w^\T_j - \vec v^\T}{\vec u^*_i}}
\vec u_i^\T,
\end{eqnarray}
where $\vec u^*_i$ are the columns of $A^{-1}$ and $k_j \in \ZZ$ ranges
over $0 \le k_j < s_j$.
\end{lemma}
\begin{proof}
Since $0 \le \fractional{x} < 1$, it is clear that each such $\vec w$
lies inside the fundamental parallelepiped.
Furthermore,
\begin{eqnarray*}
\vec w^\T & = & \vec v^\T + \fractional{(\vec k^\T W - \vec v^\T) A^{-1}} A
\\
& = &
\vec v^\T +
\left(
(\vec k^\T W - \vec v^\T) A^{-1} - \floor{(\vec k^\T W - \vec v^\T) A^{-1}}
\right) A
\\
& = &
\underbrace{\vec k^\T W\mathstrut}_{\in \ZZ^{1\times d}}
-
\underbrace{\floor{(\vec k^\T W - \vec v^\T) A^{-1}}}_{\in \ZZ^{1\times d}}
\underbrace{A\mathstrut}_{\in \ZZ^{d\times d}} \in \ZZ^{1\times d}.
\end{eqnarray*}
Finally, if two such $\vec w$ are equal, i.e., $\vec w_1 = \vec w_2$,
then
\begin{eqnarray*}
\vec 0^\T = \vec w_1^\T - \vec w_2^\T
& = & \vec k_1^\T W - \vec k_2^\T W + \vec p^\T A
\\
& = & \left(\vec k_1^\T - \vec k_2^\T \right) W + \vec p^\T V^{-1} S W,
\end{eqnarray*}
with $\vec p \in \ZZ^d$,
or $\vec k_1 \equiv \vec k_2 \mod \vec s$, i.e., $\vec k_1 = \vec k_2$.
Since $\det S = \det A$, we obtain all points in the fundamental parallelepiped
by taking all $\vec k \in \ZZ^d$ satisfying $0 \le k_j < s_j$.
\end{proof}

If the cone $K$ is not closed, then the coefficients of the open rays
should be in $(0,1]$ rather than in $[0,1)$.
In (\ref{eq:parallelepiped}),
we therefore need to replace the fractional part $\fractional{x} = x - \floor{x}$
by $\cractional{x} = x - \ceil{x-1}$ for the open rays.

\begin{figure}
\intercol=1.2cm
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS@i@={(0,-3),(0,0),(4,2),(4,-3)},{0*[grey]\xypolyline{*}}
\POS@i@={(0,-3),(0,0),(4,2)},{0*[|(2)]\xypolyline{}}
\POS(-1,0)\ar(4.5,0)
\POS(0,-3)\ar(0,3)
\POS(0,0)\ar@[|(3)](0,-1)
\POS(0,0)\ar@[|(3)](2,1)
\POS(0,-1)\ar@{--}@[|(2)](2,0)
\POS(2,1)\ar@{--}@[|(2)](2,0)
\POS(0,0)*{\bullet}
\POS(1,0)*{\bullet}
\end{xy}
\caption{The integer points in the fundamental parallelepiped of $K$}
\label{f:parallelepiped}
\end{figure}
\begin{example}
Let $K$ be the cone
$$
K = \sm{0 \\ 0} + \poshull \lb\, \sm{2 \\ 1}, \sm{0 \\ -1} \,\rb
$$
shown in Figure~\ref{f:parallelepiped}.
Then
$$
A = \sm{2 & 1\\0 & -1} \qquad A^{-1} = \sm{1/2 & 1/2 \\ 0 & -1 }
$$
and
$$
\sm{1 & 0 \\ 1 & 1 } \sm{2 & 1\\0 & -1} = \sm{1 & 0 \\ 0 & 2} \sm{2 & 1 \\ 1 & 0}.
$$
We have $\det A = \det S = 2$ and
$\vec k_1^\T = \sm{0 & 0}$ and $\vec k_2^\T = \sm{0 & 1}$.
Therefore,
$$
\vec w_1^\T = \fractional{\sm{0 & 0} \sm{2 & 1 \\ 1 & 0} \sm{1/2 & 1/2 \\ 0 & -1 }}
\sm{2 & 1\\0 & -1} = \sm{0 & 0}
$$
and
\begin{eqnarray*}
\vec w_2^\T & = &
\fractional{\sm{0 & 1} \sm{2 & 1 \\ 1 & 0} \sm{1/2 & 1/2 \\ 0 & -1 }}
\sm{2 & 1\\0 & -1}
\\
& = &
\sm{1/2 & 1/2} \sm{2 & 1\\0 & -1} = \sm{1 & 0}.
\end{eqnarray*}
\end{example}
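The computation in this example can be checked mechanically. The following sketch (illustrative only; the matrices $A$, $A^{-1}$, $W$ and $\vec s$ are hardcoded from the example) evaluates (\ref{eq:parallelepiped}) for all $0 \le k_j < s_j$:

```python
from fractions import Fraction
from itertools import product

# Data from the example: the generators of K are the rows of A,
# and V A W^{-1} = S = diag(s) is the Smith normal form of A.
A = [[2, 1], [0, -1]]
Ainv = [[Fraction(1, 2), Fraction(1, 2)], [Fraction(0), Fraction(-1)]]
W = [[2, 1], [1, 0]]
s = [1, 2]                        # diagonal of S
v = [Fraction(0), Fraction(0)]    # vertex of K

def matvec(row, M):
    """Row vector times matrix."""
    return [sum(row[i] * M[i][j] for i in range(len(row)))
            for j in range(len(M[0]))]

def frac(x):
    """Fractional part {x} = x - floor(x)."""
    x = Fraction(x)
    return x - (x.numerator // x.denominator)

points = set()
for k in product(*(range(sj) for sj in s)):             # 0 <= k_j < s_j
    t = [a - b for a, b in zip(matvec(list(k), W), v)]  # k^T W - v^T
    t = [frac(x) for x in matvec(t, Ainv)]              # {(k^T W - v^T) A^{-1}}
    w = matvec(t, A)                                    # ... times A
    points.add(tuple(a + b for a, b in zip(v, w)))      # v^T + ...

# points now holds the integer points (0,0) and (1,0) of the example
```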
\subsection{Barvinok's decomposition of simple cones in primal space}
\label{s:decomposition}

As described by \shortciteN{DeLoera2003effective}, the first
implementation of Barvinok's counting algorithm applied
\ai{Barvinok's decomposition} \shortcite{Barvinok1994} in the \ai{dual space}.
\ai{Brion's polarization trick} \shortcite{Brion88} then ensures that one
does not need to worry about lower-dimensional faces in the decomposition.
Another way of avoiding the lower-dimensional faces, in the \ai{primal space},
is to perturb the vertex of the cone such that none of the lower-dimensional
faces encountered contain any integer points \shortcite{Koeppe2006primal}.
In this section, we describe another technique based on allowing
some of the facets of the cone to be open.

The basic step in Barvinok's decomposition is to replace a
$d$-dimensional simple cone
$K = \poshull \lb\, \vec u_i \,\rb_{i=1}^d \subset \QQ^d$
by a signed sum of (at most) $d$ cones $K_j$
with a smaller determinant (in absolute value).
The cones are obtained by successively replacing each generator
of $K$ by an appropriately chosen
$\vec w = \sum_{i=1}^d \alpha_i \vec u_i$, i.e.,
\begin{equation}
\label{eq:K_j}
K_j =
\poshull \left(\lb\, \vec u_i \,\rb_{i=1}^d
\setminus \{\, \vec u_j \,\} \cup \{\, \vec w \,\}\right)
.
\end{equation}
To see that we can use these $K_j$ to perform a decomposition,
rearrange the $\vec u_i$ such that for all $1 \le i \le k$ we have
$\alpha_i < 0$ and for all $k+1 \le i \le d'$ we have $\alpha_i > 0$,
with $d - d'$ the number of zero $\alpha_i$.
We may assume $k < d'$; otherwise replace $\vec w \in B$ by
$-\vec w \in B$. We have
$$
\vec w + \sum_{i=1}^k (-\alpha_i) \vec u_i =
\sum_{i=k+1}^{d'} \alpha_i \vec u_i
$$
or
\begin{equation}
\label{eq:sub}
\sum_{i=0}^k \beta_i \vec u_i =
\sum_{i=k+1}^{d'} \alpha_i \vec u_i
\end{equation}
with $\vec u_0 = \vec w$, $\beta_0 = 1$ and $\beta_i = -\alpha_i > 0$
for $1 \le i \le k$. Any two $\vec u_j$ and $\vec u_l$ on the same side
of the equality are on opposite sides of the linear hull $H$ of
the other $\vec u_i$s, since there exists a convex combination
of $\vec u_j$ and $\vec u_l$ on this hyperplane.
In particular, since $\alpha_j$ and $\alpha_l$ have the same sign,
we have
\begin{equation}
\label{eq:opposite}
\frac {\alpha_j}{\alpha_j+\alpha_l} \vec u_j
+
\frac {\alpha_l}{\alpha_j+\alpha_l} \vec u_l
\in H
\qquad\text{for $\alpha_j \alpha_l > 0$}.
\end{equation}
The corresponding cones $K_j$ and $K_l$ (with $K_0 = K$)
therefore intersect in a common face $F \subset H$.
Let
$$
K' :=
\poshull \left(\lb\, \vec u_i \,\rb_{i=1}^d \cup \{\, \vec w \,\}\right);
$$
then any $\vec x \in K'$ lies both in some cone $K_i$ with
$0 \le i \le k$ and in some cone $K_i$ with $k+1 \le i \le d'$.
(Just subtract an appropriate multiple of Equation~(\ref{eq:sub}).)
The cones
$\{\, K_i \,\}_{i=0}^k$
and
$\{\, K_i \,\}_{i=k+1}^{d'}$
therefore both form a triangulation of $K'$ and hence
\begin{equation}
\label{eq:triangulations}
\indf{K'}
=
\indf{K} + \sum_{i=1}^k \indf{K_i} - \sum_{j\in J_1} \indf{F_j}
=
\sum_{i=k+1}^{d'} \indf{K_i} - \sum_{j\in J_2} \indf{F_j}
\end{equation}
or
\begin{equation}
\label{eq:decomposition}
\indf{K} = \sum_{i=1}^{d'} \varepsilon_i \indf{K_i} + \sum_j \delta_j \indf{F_j}
\end{equation}
with $\varepsilon_i = -1$ for $1 \le i \le k$,
$\varepsilon_i = 1$ for $k+1 \le i \le d'$,
$\delta_j \in \{ -1, 1 \}$ and $F_j$ some lower-dimensional faces.
Figure~\ref{fig:w} shows the possible configurations
in the case of a $3$-dimensional cone.
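For illustration, consider a made-up $2$-dimensional instance of this reduction step (the data below is not taken from the text): for $K = \poshull\{(1,0),(1,4)\}$ with index $|\det A| = 4$, choosing $\vec w = (1,2) = \frac12 \vec u_1 + \frac12 \vec u_2$, i.e., $\alpha_1 = \alpha_2 = 1/2 > 0$, replaces $K$ by two cones of index $2$:

```python
def det2(u, v):
    """Determinant of the 2x2 matrix with rows u and v."""
    return u[0] * v[1] - u[1] * v[0]

u1, u2 = (1, 0), (1, 4)
w = (1, 2)  # = (u1 + u2)/2, so alpha_1 = alpha_2 = 1/2, both positive

index_K = abs(det2(u1, u2))   # 4: index of the original cone
index_K1 = abs(det2(w, u2))   # 2: u1 replaced by w
index_K2 = abs(det2(u1, w))   # 2: u2 replaced by w
```

Since both $\alpha_i$ are positive ($k = 0$, $d' = 2$), $[K] = [K_1] + [K_2]$ up to lower-dimensional faces, and each $K_i$ has strictly smaller index.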

\begin{figure}
\intercol=0.48cm
\begin{center}
\begin{minipage}{0cm}
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\xybox{
\POS(-2,-1)="a"*+!U{+}
\POS(2,0)="b"*+!L{+}
\POS(0,2)="c"*+!D{+}
\POS(0,0)="w"*+!DR{\vec w}
\POS"a"\ar@{-}"b"
\POS"b"\ar@{-}"c"
\POS"c"\ar@{-}"a"
\POS"a"\ar@{--}"w"
\POS"b"\ar@{--}"w"
\POS"c"\ar@{--}"w"
}="a"
+R+(2,0)*!L
\xybox{
\POS(-2,-1)="a"*+!U{+}
\POS(2,0)="b"*+!L{-}
\POS(0,2)="c"*+!D{+}
\POS(-3,1)="w"*+!DR{\vec w}
\POS"a"\ar@{-}"b"
\POS"b"\ar@{-}"c"
\POS"c"\ar@{-}"a"
\POS"a"\ar@{--}"w"
\POS"b"\ar@{--}"w"
\POS"c"\ar@{--}"w"
}="b"
+R+(2,0)*!L
\xybox{
\POS(-2,-1)="a"*+!U{-}
\POS(2,0)="b"*+!U{+}
\POS(0,2)="c"*+!D{-}
\POS(5,-1)="w"*+!L{\vec w}
\POS"a"\ar@{-}"b"
\POS"b"\ar@{-}"c"
\POS"c"\ar@{-}"a"
\POS"a"\ar@{--}"w"
\POS"b"\ar@{--}"w"
\POS"c"\ar@{--}"w"
}
\POS"a"
+D-(0,1)*!U
\xybox{
\POS(-2,-1)="a"*+!U{0}
\POS(2,0)="b"*+!L{+}
\POS(0,2)="c"*+!D{+}
\POS(1,1)="w"*+!DL{\vec w}
\POS"a"\ar@{-}"b"
\POS"b"\ar@{-}"c"
\POS"c"\ar@{-}"a"
\POS"a"\ar@{--}"w"
}
\POS"b"
+DL-(0,1)*!UL
\xybox{
\POS(-2,-1)="a"*+!U{0}
\POS(2,0)="b"*+!U{+}
\POS(0,2)="c"*+!D{-}
\POS(4,-2)="w"*+!L{\vec w}
\POS"a"\ar@{-}"b"
\POS"b"\ar@{-}"c"
\POS"c"\ar@{-}"a"
\POS"a"\ar@{--}"w"
\POS"b"\ar@{--}"w"
}
\end{xy}
\end{minipage}
\end{center}
\caption[Possible locations of the vector $\vec w$ with respect to the rays
of a $3$-dimensional cone.]
{Possible locations of $\vec w$ with respect to the rays
of a $3$-dimensional cone. The figure shows a section of the cones.}
\label{fig:w}
\end{figure}
348 As explained above there are several ways of avoiding the lower-dimensional
349 faces in (\ref{eq:decomposition}). Here we will apply the following proposition.
350 \begin{proposition}[\shortciteN{Koeppe2007parametric}]
351 \label{p:inclusion-exclusion}
352 Let
353 \begin{equation}
354 \label{eq:full-source-identity}
355 \sum_{i\in {I_1}} \epsilon_i [P_i] + \sum_{i\in {I_2}} \delta_k [P_i] = 0
356 \end{equation}
357 be a (finite) linear identity of indicator functions of closed
358 polyhedra~$P_i\subseteq\QQ^d$, where the
359 polyhedra~$P_i$ with $i \in I_1$ are full-dimensional and those with $i \in I_2$
360 lower-dimensional. Let each closed polyhedron be given as
362 P_i = \left\{\, \vec x \mid \sp{b^*_{i,j}}{x} \ge \beta_{i,j} \text{
363 for $j\in J_i$}\,\right\}
366 Let $\vec y\in\QQ^d$ be a vector such that $\langle \vec b^*_{i,j}, \vec
367 y\rangle \neq 0$ for all $i\in I_1\cup I_2$, $j\in J_i$.
368 For each $i\in I_1$, we define the half-open polyhedron
369 \begin{equation}
370 \label{eq:half-open-by-y}
371 \begin{aligned}
372 \tilde P_i = \Bigl\{\, \vec x\in\QQ^d \mid {}&
373 \sp{b^*_{i,j}}{x} \ge \beta_{i,j}
374 \text{ for $j\in J_i$ with $\sp{b^*_{i,j}}{y} > 0$,} \\
375 & \sp{b^*_{i,j}}{x} > \beta_{i,j}
376 \text{ for $j\in J_i$ with $\sp{b^*_{i,j}}{y} < 0$} \,\Bigr\}.
377 \end{aligned}
378 \end{equation}
379 Then
380 \begin{equation}
381 \label{eq:target-identity}
382 \sum_{i\in I_1} \epsilon_i [\tilde P_i] = 0.
383 \end{equation}
384 \end{proposition}
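The rule in (\ref{eq:half-open-by-y}) is easy to state in code. The sketch below is a hypothetical helper (not part of barvinok): a closed description is given as pairs of a normal $\vec b^*$ and a right-hand side $\beta$, and each constraint is kept closed or made open according to the sign of $\sp{b^*}{y}$:

```python
def half_open(constraints, y):
    """Relax {x : <b,x> >= beta} to the half-open polyhedron of the
    proposition: keep a facet closed when <b,y> > 0, open it when <b,y> < 0.
    y must not lie on any facet hyperplane."""
    relaxed = []
    for b, beta in constraints:
        s = sum(bi * yi for bi, yi in zip(b, y))
        if s == 0:
            raise ValueError("y lies on a facet hyperplane")
        relaxed.append((b, beta, ">=" if s > 0 else ">"))
    return relaxed

# The first quadrant with y = (1, -1): the facet x_2 >= 0 becomes open.
quadrant = [((1, 0), 0), ((0, 1), 0)]
print(half_open(quadrant, (1, -1)))
# [((1, 0), 0, '>='), ((0, 1), 0, '>')]
```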
When applying this proposition to (\ref{eq:decomposition}), we obtain
\begin{equation}
\label{eq:decomposition:2}
\indf{\tilde K} = \sum_{i=1}^{d'} \varepsilon_i \indf{\tilde K_i},
\end{equation}
where we start out
from a given $\tilde K$, which may be $K$ itself, i.e., a fully closed cone,
or the result of a previous application of the proposition, either through
a triangulation (Section~\ref{s:triangulation}) or a previous decomposition.
In either case, a suitable $\vec y$ is available, either as an interior
point of the cone or as the vector used in the previous application
(which may require a slight perturbation if it happens to lie on one of
the new facets of the cones $K_i$).
We are, however, free to construct a new $\vec y$ on each application
of the proposition.
In fact, we will not even construct such a vector explicitly, but
rather apply a set of rules that is equivalent to a valid choice of $\vec y$.
Below, we present an ``intuitive'' motivation for these rules.
For a more algebraic, shorter, and arguably simpler motivation, we
refer to \shortciteN{Koeppe2007parametric}.

The vector $\vec y$ has to satisfy $\sp{b^*_j}y > 0$ for normals $\vec b^*_j$
of closed facets and $\sp{b^*_j}y < 0$ for normals $\vec b^*_j$ of open facets of
$\tilde K$.
These constraints delineate a non-empty open cone $R$ from which
$\vec y$ should be selected. For some of the new facets of the cones
$\tilde K_j$, the cone $R$ will not be cut by the affine hull of the facet.
The closedness of these facets is therefore predetermined by $\tilde K$.
For the other facets, a choice will have to be made.
To be able to make the choice based on local information and without
computing an explicit vector $\vec y$, we use the following convention.
We first assign an arbitrary total order to the rays.
If (the affine hull of) a facet separates the two rays $\vec u_i$
and $\vec u_j$ not on the facet, i.e., $\alpha_i \alpha_j > 0$ (\ref{eq:opposite}), then
we choose $\vec y$ to lie on the side of the smallest ray, according
to the chosen order.
That is, $\sp{{\tilde n}_{ij}}y > 0$, for
$\vec {\tilde n}_{ij}$ the normal of the facet pointing towards this smallest ray.
Otherwise, i.e., if $\alpha_i \alpha_j < 0$,
the interior of $K$ will lie on one side
of the facet and then we choose $\vec y$ to lie on the other side.
That is, $\sp{{\tilde n}_{ij}}y > 0$, for
$\vec {\tilde n}_{ij}$ the normal of the facet pointing away from the cone $K$.
Figure~\ref{fig:primal:examples} shows some example decompositions with
an explicitly marked $\vec y$.

\begin{figure}
\begin{align*}
\intercol=0.48cm
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{+}
\POS(2,0)="b"*+!L{+}
\POS(0,2)="c"*+!D{+}
\POS"a"\ar@{-}@[|(3)]"b"
\POS"b"\ar@{-}@[|(3)]"c"
\POS"c"\ar@{-}@[|(3)]"a"
\POS(0.3,0.6)*{\bullet},*+!L{\vec y}
\end{xy}
& =
\intercol=0.48cm
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(2,0)="b"*+!L{+}
\POS(0,2)="c"*+!D{+}
\POS(0,0)="w"*+!DR{\vec w}
\POS"b"\ar@{-}@[|(3)]"c"
\POS"b"\ar@{-}@[|(3)]"w"
\POS"c"\ar@{-}@[|(3)]"w"
\POS(0.3,0.6)*{\bullet},*+!L{\vec y}
\end{xy}
+
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{+}
\POS(0,2)="c"*+!D{+}
\POS(0,0)="w"*+!DR{\vec w}
\POS"c"\ar@{-}@[|(3)]"a"
\POS"a"\ar@{-}@[|(3)]"w"
\POS"c"\ar@{--}@[|(3)]"w"
\POS(0.3,0.6)*{\bullet},*+!L{\vec y}
\end{xy}
+
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{+}
\POS(2,0)="b"*+!L{+}
\POS(0,0)="w"*+!DR{\vec w}
\POS"a"\ar@{-}@[|(3)]"b"
\POS"a"\ar@{--}@[|(3)]"w"
\POS"b"\ar@{--}@[|(3)]"w"
\POS(0.3,0.6)*{\bullet},*+!L{\vec y}
\end{xy}
\\
\intercol=0.48cm
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{+}
\POS(2,0)="b"*+!L{+}
\POS(0,2)="c"*+!D{+}
\POS"a"\ar@{--}@[|(3)]"b"
\POS"b"\ar@{-}@[|(3)]"c"
\POS"c"\ar@{--}@[|(3)]"a"
\POS(-2.5,-1.5)*{\bullet},*+!U{\vec y}
\end{xy}
& =
\intercol=0.48cm
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(2,0)="b"*+!L{+}
\POS(0,2)="c"*+!D{+}
\POS(0,0)="w"*+!DR{\vec w}
\POS"b"\ar@{-}@[|(3)]"c"
\POS"b"\ar@{--}@[|(3)]"w"
\POS"c"\ar@{--}@[|(3)]"w"
\POS(-2.5,-1.5)*{\bullet},*+!U{\vec y}
\end{xy}
+
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{+}
\POS(0,2)="c"*+!D{+}
\POS(0,0)="w"*+!DR{\vec w}
\POS"c"\ar@{--}@[|(3)]"a"
\POS"a"\ar@{--}@[|(3)]"w"
\POS"c"\ar@{-}@[|(3)]"w"
\POS(-2.5,-1.5)*{\bullet},*+!U{\vec y}
\end{xy}
+
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{+}
\POS(2,0)="b"*+!L{+}
\POS(0,0)="w"*+!DR{\vec w}
\POS"a"\ar@{--}@[|(3)]"b"
\POS"a"\ar@{-}@[|(3)]"w"
\POS"b"\ar@{-}@[|(3)]"w"
\POS(-2.5,-1.5)*{\bullet},*+!U{\vec y}
\end{xy}
\\
\intercol=0.48cm
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{+}
\POS(2,0)="b"*+!L{+}
\POS(0,2)="c"*+!D{+}
\POS"a"\ar@{--}@[|(3)]"b"
\POS"b"\ar@{-}@[|(3)]"c"
\POS"c"\ar@{-}@[|(3)]"a"
\POS(1,-1.5)*{\bullet},*+!L{\vec y}
\end{xy}
& =
\intercol=0.48cm
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(2,0)="b"*+!L{+}
\POS(0,2)="c"*+!D{+}
\POS(0,0)="w"*+!DR{\vec w}
\POS"b"\ar@{-}@[|(3)]"c"
\POS"b"\ar@{--}@[|(3)]"w"
\POS"c"\ar@{-}@[|(3)]"w"
\POS(1,-1.5)*{\bullet},*+!L{\vec y}
\end{xy}
+
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{+}
\POS(0,2)="c"*+!D{+}
\POS(0,0)="w"*+!DR{\vec w}
\POS"c"\ar@{-}@[|(3)]"a"
\POS"a"\ar@{--}@[|(3)]"w"
\POS"c"\ar@{--}@[|(3)]"w"
\POS(1,-1.5)*{\bullet},*+!L{\vec y}
\end{xy}
+
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{+}
\POS(2,0)="b"*+!L{+}
\POS(0,0)="w"*+!DR{\vec w}
\POS"a"\ar@{--}@[|(3)]"b"
\POS"a"\ar@{-}@[|(3)]"w"
\POS"b"\ar@{-}@[|(3)]"w"
\POS(1,-1.5)*{\bullet},*+!L{\vec y}
\end{xy}
\\
\intercol=0.48cm
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{0}
\POS(2,0)="b"*+!L{+}
\POS(0,2)="c"*+!D{+}
\POS"a"\ar@{-}@[|(3)]"b"
\POS"b"\ar@{-}@[|(3)]"c"
\POS"c"\ar@{-}@[|(3)]"a"
\POS(1,0.2)*{\bullet},*+!R{\vec y}
\end{xy}
& =
\intercol=0.48cm
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{0}
\POS(2,0)="b"*+!L{+}
\POS(1,1)="w"*+!DL{\vec w}
\POS"a"\ar@{-}@[|(3)]"b"
\POS"a"\ar@{-}@[|(3)]"w"
\POS"b"\ar@{-}@[|(3)]"w"
\POS(1,0.2)*{\bullet},*+!R{\vec y}
\end{xy}
+
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{0}
\POS(0,2)="c"*+!D{+}
\POS(1,1)="w"*+!DL{\vec w}
\POS"c"\ar@{-}@[|(3)]"a"
\POS"a"\ar@{--}@[|(3)]"w"
\POS"c"\ar@{-}@[|(3)]"w"
\POS(1,0.2)*{\bullet},*+!R{\vec y}
\end{xy}
\\
\intercol=0.48cm
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{0}
\POS(2,0)="b"*+!U{+}
\POS(0,2)="c"*+!D{-}
\POS"a"\ar@{-}@[|(3)]"b"
\POS"b"\ar@{--}@[|(3)]"c"
\POS"c"\ar@{-}@[|(3)]"a"
\POS(1.5,1.5)*{\bullet},*+!D{\vec y}
\end{xy}
& =
\intercol=0.48cm
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{0}
\POS(0,2)="c"*+!D{-}
\POS(4,-2)="w"*+!L{\vec w}
\POS"c"\ar@{-}@[|(3)]"a"
\POS"a"\ar@{-}@[|(3)]"w"
\POS"c"\ar@{--}@[|(3)]"w"
\POS(1.5,1.5)*{\bullet},*+!D{\vec y}
\end{xy}
-
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{0}
\POS(2,0)="b"*+!U{+}
\POS(4,-2)="w"*+!L{\vec w}
\POS"a"\ar@{--}@[|(3)]"b"
\POS"a"\ar@{-}@[|(3)]"w"
\POS"b"\ar@{--}@[|(3)]"w"
\POS(1.5,1.5)*{\bullet},*+!D{\vec y}
\end{xy}
\\
\intercol=0.48cm
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{0}
\POS(2,0)="b"*+!U{+}
\POS(0,2)="c"*+!D{-}
\POS"a"\ar@{--}@[|(3)]"b"
\POS"b"\ar@{--}@[|(3)]"c"
\POS"c"\ar@{-}@[|(3)]"a"
\POS(4.7,-2.5)*{\bullet},*+!R{\vec y}
\end{xy}
& =
\intercol=0.48cm
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{0}
\POS(0,2)="c"*+!D{-}
\POS(4,-2)="w"*+!L{\vec w}
\POS"c"\ar@{-}@[|(3)]"a"
\POS"a"\ar@{--}@[|(3)]"w"
\POS"c"\ar@{--}@[|(3)]"w"
\POS(4.7,-2.5)*{\bullet},*+!R{\vec y}
\end{xy}
-
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-2,-1)="a"*+!U{0}
\POS(2,0)="b"*+!U{+}
\POS(4,-2)="w"*+!L{\vec w}
\POS"a"\ar@{-}@[|(3)]"b"
\POS"a"\ar@{--}@[|(3)]"w"
\POS"b"\ar@{--}@[|(3)]"w"
\POS(4.7,-2.5)*{\bullet},*+!R{\vec y}
\end{xy}
\end{align*}
\caption{Examples of decompositions in primal space.}
\label{fig:primal:examples}
\end{figure}

To see that there is a $\vec y$ satisfying the above constraints,
we need to show that $R \cap S$ is non-empty, with
$S = \{ \vec y \mid \sp{{\tilde n}_{i_kj_k}}y > 0 \text{ for all $k$}\}$.
It will be easier to show this set is non-empty when the $\vec u_i$ form
an orthogonal basis. Applying a non-singular linear transformation $T$
does not change the decomposition of $\vec w$ in terms of the $\vec u_i$ (i.e., the
$\alpha_i$ remain unchanged), nor does it change
any of the scalar products in the constraints that define $R \cap S$
(the normals are transformed by $\left(T^{-1}\right)^\T$).
Finding a vector $\vec y \in T(R \cap S)$ ensures that
$T^{-1}(\vec y) \in R \cap S$.
Without loss of generality, we can therefore assume for the purpose of
showing that $R \cap S$ is non-empty that
the $\vec u_i$ indeed form an orthogonal basis.

In the orthogonal basis, we have $\vec b_i^* = \vec u_i$
and the corresponding inward normal $\vec N_i$ is either
$\vec u_i$ or $-\vec u_i$.
Furthermore, each normal of a facet of $S$ of the first type is of the
form $\vec {\tilde n}_{i_kj_k} = a_k \vec u_{i_k} - b_k \vec u_{j_k}$, with
$a_k, b_k > 0$ and ${i_k} < {j_k}$,
while for the second type each normal is of the form
$\vec {\tilde n}_{i_kj_k} = -a_k \vec u_{i_k} - b_k \vec u_{j_k}$, with
$a_k, b_k > 0$.
If $\vec {\tilde n}_{i_kj_k} = a_k \vec u_{i_k} - b_k \vec u_{j_k}$
is the normal of a facet of $S$,
then either
$(\vec N_{i_k}, \vec N_{j_k}) = (\vec u_{i_k}, \vec u_{j_k})$
or
$(\vec N_{i_k}, \vec N_{j_k}) = (-\vec u_{i_k}, -\vec u_{j_k})$.
Otherwise, the facet would not cut $R$.
Similarly,
if $\vec {\tilde n}_{i_kj_k} = -a_k \vec u_{i_k} - b_k \vec u_{j_k}$
is the normal of a facet of $S$,
then either
$(\vec N_{i_k}, \vec N_{j_k}) = (\vec u_{i_k}, -\vec u_{j_k})$
or
$(\vec N_{i_k}, \vec N_{j_k}) = (-\vec u_{i_k}, \vec u_{j_k})$.
Assume now that $R \cap S$ is empty; then there exist
$\lambda_k, \mu_i \ge 0$, not all zero,
such that
$\sum_k \lambda_k \vec {\tilde n}_{i_kj_k} + \sum_i \mu_i \vec N_i = \vec 0$.
Assume $\lambda_k > 0$ for some facet of the first type.
If $\vec N_{j_k} = -\vec u_{j_k}$, then $-b_k$ can only be canceled
by another facet $k'$ of the first type with $j_k = i_{k'}$, but then
also $\vec N_{j_{k'}} = -\vec u_{j_{k'}}$. Since the $j_k$ are strictly
increasing, this sequence has to stop with a strictly positive coefficient
for the largest $\vec u_{j_k}$ in this sequence.
If, on the other hand, $\vec N_{i_k} = \vec u_{i_k}$, then $a_k$ can only
be canceled by the normal of a facet $k'$ of the second kind
with $i_k = j_{k'}$, but then
$\vec N_{i_{k'}} = -\vec u_{i_{k'}}$ and we return to the first case.
Finally, if $\lambda_k > 0$ only for normals of facets of the second type,
then either $\vec N_{i_k} = -\vec u_{i_k}$ or $\vec N_{j_k} = -\vec u_{j_k}$,
and so the coefficient of one of these basis vectors will be strictly
negative.
That is, the sum of the normals will never be zero and
the set $R \cap S$ is non-empty.

For each ray $\vec u_j$ of cone $K_i$, i.e., the cone with $\vec u_i$ replaced
by $\vec w$, we now need to determine whether the facet not containing this
ray is closed or not. We denote the (inward) normal of this facet by
$\vec n_{ij}$. Note that cone $K_j$ (if it appears in (\ref{eq:triangulations}),
i.e., $\alpha_j \ne 0$) has the same facet opposite $\vec u_i$,
and its normal $\vec n_{ji}$ will be equal to either $\vec n_{ij}$ or
$-\vec n_{ij}$, depending on whether we are dealing with an ``external'' facet,
i.e., a facet of $K'$, or an ``internal'' facet.
If, on the other hand, $\alpha_j = 0$, then $\vec n_{ij} = \vec n_{0j}$.
If $\sp{n_{ij}}y > 0$, then the facet is closed;
otherwise it is open.
It follows that the two (or more) occurrences of external facets are either all open
or all closed, while for internal facets, exactly one is closed.

First consider the facet not containing $\vec u_0 = \vec w$.
If $\alpha_i > 0$, then $\vec u_i$ and $\vec w$ are on the same side of the facet,
and so $\vec n_{i0} = \vec n_{0i}$. Otherwise, $\vec n_{i0} = -\vec n_{0i}$.
Second, if $\alpha_j = 0$, then replacing $\vec u_i$ by $\vec w$ does not
change the affine hull of the facet, and so $\vec n_{ij} = \vec n_{0j}$.
Now consider the case $\alpha_i \alpha_j < 0$, i.e., $\vec u_i$
and $\vec u_j$ are on the same side of the hyperplane through the other rays.
If we project $\vec u_i$, $\vec u_j$ and $\vec w$ onto a plane orthogonal
to the ridge through the other rays, then the possible locations of $\vec w$
with respect to $\vec u_i$ and $\vec u_j$ are shown in Figure~\ref{fig:w:same}.
If both $\vec n_{0i}$ and $\vec n_{0j}$ are closed, then $\vec y$ lies in region~1
and therefore $\vec n_{ij}$ (as well as $\vec n_{ji}$) is closed too.
Similarly, if both $\vec n_{0i}$ and $\vec n_{0j}$ are open, then so is
$\vec n_{ij}$. If only one of the facets is closed, then, as explained above,
we choose $\vec n_{ij}$ to be open, i.e., we take $\vec y$ to lie in region~3
or~5.
Figure~\ref{fig:w:opposite} shows the possible configurations
for the case $\alpha_i \alpha_j > 0$.
If exactly one of $\vec n_{0i}$ and $\vec n_{0j}$ is closed, then
$\vec y$ lies in region~3 or region~5, and therefore $\vec n_{ij}$ is closed iff
$\vec n_{0j}$ is closed.
Otherwise, as explained above, we choose $\vec n_{ij}$ to be closed if $i < j$.

\begin{figure}
\intercol=0.7cm
\begin{minipage}{0cm}
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-4,0)="a"\ar@[|(2)]@{-}(4,0)
\POS?(0)/\intercol/="b"\POS(2,0)*\xybox{"b"-"a":(0,0)\ar^{\vec n_{0i}}(0,0.75)}
\POS(2,0)*{\bullet},*+!U{\vec u_j}
\POS(-2,-3)="a"\ar@[|(2)]@{-}(2,3)
\POS?(0)/\intercol/="b"\POS(1,1.5)*\xybox{"b"-"a":(0,0)\ar^{\vec n_{0j}}(0,-0.75)}
\POS(1,1.5)*{\bullet},*+!R{\vec u_i}
\POS(-2,3)="a"\ar@{-}(2,-3)
\POS?(0)/\intercol/="b"\POS(1.5,-2.25)
*\xybox{"b"-"a":(0,0)\ar_{\vec n_{ji}}^{\vec n_{ij}}(0,+0.75)}
\POS(1.5,-2.25)*{\bullet},*+!R{\vec u_0 = \vec w}
\POS(0,0)*{\bullet}
\POS(3,1.5)*+[o][F]{\scriptstyle 1}
\POS(0,2.5)*+[o][F]{\scriptstyle 2}
\POS(-3,1.5)*+[o][F]{\scriptstyle 3}
\POS(-3,-1.5)*+[o][F]{\scriptstyle 4}
\POS(0,-3)*+[o][F]{\scriptstyle 5}
\POS(3,-1.5)*+[o][F]{\scriptstyle 6}
\end{xy}
\end{minipage}
\begin{minipage}{0cm}
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-4,0)="a"\ar@[|(2)]@{-}(4,0)
\POS?(0)/\intercol/="b"\POS(2,0)*\xybox{"b"-"a":(0,0)\ar^{\vec n_{0i}}(0,0.75)}
\POS(2,0)*{\bullet},*+!U{\vec u_j}
\POS(-2,-3)="a"\ar@[|(2)]@{-}(2,3)
\POS?(0)/\intercol/="b"\POS(1,1.5)*\xybox{"b"-"a":(0,0)\ar^{\vec n_{0j}}(0,-0.75)}
\POS(1,1.5)*{\bullet},*+!R{\vec u_i}
\POS(-2,3)="a"\ar@{-}(2,-3)
\POS?(0)/\intercol/="b"\POS(-1.5,2.25)
*\xybox{"b"-"a":(0,0)\ar_{\vec n_{ji}}^{\vec n_{ij}}(0,+0.75)}
\POS(-1.5,2.25)*{\bullet},*+!R{\vec u_0 = \vec w}
\POS(0,0)*{\bullet}
\POS(3,1.5)*+[o][F]{\scriptstyle 1}
\POS(0,2.5)*+[o][F]{\scriptstyle 2}
\POS(-3,1.5)*+[o][F]{\scriptstyle 3}
\POS(-3,-1.5)*+[o][F]{\scriptstyle 4}
\POS(0,-3)*+[o][F]{\scriptstyle 5}
\POS(3,-1.5)*+[o][F]{\scriptstyle 6}
\end{xy}
\end{minipage}
\caption{Possible locations of $\vec w$ with respect to $\vec u_i$ and
$\vec u_j$, projected onto a plane orthogonal to the other rays, when
$\alpha_i \alpha_j < 0$.}
\label{fig:w:same}
\end{figure}

\begin{figure}
\intercol=0.7cm
\begin{minipage}{0cm}
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-4,0)="a"\ar@[|(2)]@{-}(4,0)
\POS?(0)/\intercol/="b"\POS(2,0)*\xybox{"b"-"a":(0,0)\ar^{\vec n_{0i}}(0,0.75)}
\POS(2,0)*{\bullet},*+!U{\vec u_j}
\POS(-2,3)="a"\ar@[|(2)]@{-}(2,-3)
\POS?(0)/\intercol/="b"\POS(-1,1.5)*\xybox{"b"-"a":(0,0)\ar^{\vec n_{0j}}(0,+0.75)}
\POS(-1,1.5)*{\bullet},*+!R{\vec u_i}
\POS(-2,-3)="a"\ar@{-}(2,3)
\POS?(0)/\intercol/="b"\POS(1.5,2.25)
*\xybox{"b"-"a":(0,0)\ar^{\vec n_{ji}}(0,+0.75)
\POS(0,0)\ar_{\vec n_{ij}}(0,-0.75)}
\POS(1.5,2.25)*{\bullet},*+!L{\vec u_0 = \vec w}
\POS(0,0)*{\bullet}
\POS(3,1.5)*+[o][F]{\scriptstyle 1}
\POS(0,2.5)*+[o][F]{\scriptstyle 2}
\POS(-3,1.5)*+[o][F]{\scriptstyle 3}
\POS(-3,-1.5)*+[o][F]{\scriptstyle 4}
\POS(0,-3)*+[o][F]{\scriptstyle 5}
\POS(3,-1.5)*+[o][F]{\scriptstyle 6}
\end{xy}
\end{minipage}
\begin{minipage}{0cm}
\begin{xy}
<\intercol,0pt>:<0pt,\intercol>::
\POS(-4,0)="a"\ar@[|(2)]@{-}(4,0)
\POS?(0)/\intercol/="b"\POS(2,0)*\xybox{"b"-"a":(0,0)\ar^{\vec n_{0i}}(0,0.75)}
\POS(2,0)*{\bullet},*+!U{\vec u_j}
\POS(-2,3)="a"\ar@[|(2)]@{-}(2,-3)
\POS?(0)/\intercol/="b"\POS(-1,1.5)*\xybox{"b"-"a":(0,0)\ar^{\vec n_{0j}}(0,+0.75)}
\POS(-1,1.5)*{\bullet},*+!R{\vec u_i}
\POS(-2,-3)="a"\ar@{-}(2,3)
\POS?(0)/\intercol/="b"\POS(-1.5,-2.25)
*\xybox{"b"-"a":(0,0)\ar^{\vec n_{ji}}(0,+0.75)
\POS(0,0)\ar_{\vec n_{ij}}(0,-0.75)}
\POS(-1.5,-2.25)*{\bullet},*+!L{\vec u_0 = \vec w}
\POS(0,0)*{\bullet}
\POS(3,1.5)*+[o][F]{\scriptstyle 1}
\POS(0,2.5)*+[o][F]{\scriptstyle 2}
\POS(-3,1.5)*+[o][F]{\scriptstyle 3}
\POS(-3,-1.5)*+[o][F]{\scriptstyle 4}
\POS(0,-3)*+[o][F]{\scriptstyle 5}
\POS(3,-1.5)*+[o][F]{\scriptstyle 6}
\end{xy}
\end{minipage}
\caption{Possible locations of $\vec w$ with respect to $\vec u_i$ and
$\vec u_j$, projected onto a plane orthogonal to the other rays, when
$\alpha_i \alpha_j > 0$.}
\label{fig:w:opposite}
\end{figure}

The algorithm is summarized in Algorithm~\ref{alg:closed}, where
we use the convention that in cone $K_i$, $\vec u_i$ refers to
$\vec u_0 = \vec w$.
Note that we do not need any of the rays or normals in this code.
The only information we need is the closedness of the facets in the
original cone and the signs of the $\alpha_i$.
\begin{algorithm}
\begin{tabbing}
next \= next \= next \= \kill
if $\alpha_j = 0$ \\
\> closed[$K_i$][$\vec u_j$] := closed[$\tilde K$][$\vec u_j$] \\
else if $i = j$ \\
\> if $\alpha_j > 0$ \\
\> \> closed[$K_i$][$\vec u_j$] := closed[$\tilde K$][$\vec u_j$] \\
\> else \\
\> \> closed[$K_i$][$\vec u_j$] := $\lnot$closed[$\tilde K$][$\vec u_j$] \\
else if $\alpha_i \alpha_j > 0$ \\
\> if closed[$\tilde K$][$\vec u_i$] = closed[$\tilde K$][$\vec u_j$] \\
\> \> closed[$K_i$][$\vec u_j$] := $i < j$ \\
\> else \\
\> \> closed[$K_i$][$\vec u_j$] := closed[$\tilde K$][$\vec u_j$] \\
else \\
\> closed[$K_i$][$\vec u_j$] := closed[$\tilde K$][$\vec u_i$] and
closed[$\tilde K$][$\vec u_j$]
\end{tabbing}
\caption{Determine whether the facet opposite $\vec u_j$ is closed in $K_i$.}
\label{alg:closed}
\end{algorithm}
\subsection{Triangulation in primal space}
\label{s:triangulation}

As in the case of Barvinok's decomposition (Section~\ref{s:decomposition}),
we can transform a triangulation of a (closed) cone into closed simple cones
into a triangulation into half-open simple cones that fully partitions the
original cone, i.e., such that the half-open simple cones do not intersect at their
facets.
Again, we apply Proposition~\ref{p:inclusion-exclusion} with $\vec y$
an interior point of the cone (Section~\ref{s:interior}).
\subsection{Multivariate quasi-polynomials as lists of polynomials}

There are many definitions of a (univariate) \ai{quasi-polynomial}.
\shortciteN{Ehrhart1977} uses a definition based on {\em periodic number}s.

\begin{definition}
\label{d:periodic:1}
A rational \defindex{periodic number} $U(p)$
is a function $\ZZ \to \QQ$,
such that there exists a \defindex{period} $q$
such that $U(p) = U(p')$ whenever $p \equiv p' \mod q$.
\end{definition}
\begin{definition}
\label{d:qp:1}
A (univariate)
\defindex{quasi-polynomial}\/ $f$ of degree $d$ is
a function
$$
f(n) = c_d(n) \, n^d + \cdots + c_1(n) \, n + c_0
,
$$
where $c_i(n)$ are rational periodic numbers.
I.e., it is a polynomial expression of degree $d$
with rational periodic numbers for coefficients.
The \defindex{period} of a quasi-polynomial is the \ac{lcm}
of the periods of its coefficients.
\end{definition}
Other authors (e.g., \shortciteNP{Stanley1986})
use the following definition of a quasi-polynomial.
\begin{definition}
\label{d:qp:1:list}
A function $f : \ZZ \to \QQ$ is
a (univariate) \defindex{quasi-polynomial} of period $q$ if there
exists a list of $q$ polynomials $g_i \in \QQ[T]$ for $0 \le i < q$ such
that
$$
f (s) = g_i(s) \qquad \hbox{if $s \equiv i \mod {q}$.}
$$
The functions $g_i$ are called the {\em constituents}.
\index{constituent}
\end{definition}
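The two definitions describe the same class of functions.
As a small illustrative instance (the quasi-polynomial and all names below
are ours, not part of the implementation), take
$f(n) = \lfloor n/2 \rfloor + 1$, which has period $2$: it can be written
either with a periodic coefficient expressed through a fractional part, or as
a list of two constituents.

```python
from math import floor

def f_fractional(n):
    # periodic-number form: n/2 - {n/2} + 1, where {.} is the fractional part
    return n / 2 - (n / 2 - floor(n / 2)) + 1

# constituent form: g_0(n) = n/2 + 1 for n even, g_1(n) = (n + 1)/2 for n odd
constituents = [lambda n: n / 2 + 1, lambda n: (n + 1) / 2]

def f_constituents(n):
    return constituents[n % 2](n)
```

Both forms agree with $\lfloor n/2 \rfloor + 1$ on all integers, including
negative ones.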
In our implementation, we use Definition~\ref{d:qp:1},
but whereas
\shortciteN{Ehrhart1977} uses a list of $q$ rational
numbers enclosed in square brackets to represent periodic
numbers, our periodic numbers are polynomial expressions
in fractional parts (Section~\ref{a:data}).
These fractional parts naturally extend to multivariate
quasi-polynomials.
The bracketed (``explicit'') periodic numbers can
be extended to multiple variables by nesting them
(e.g., \shortciteNP{Loechner1999}).

Definition~\ref{d:qp:1:list} could be extended in a similar way
by having a constituent for each residue modulo a vector period $\vec q$.
However, as pointed out by \citeN{Woods2006personal}, this may not result
in the minimum number of constituents.
A vector period can be considered as a lattice with orthogonal generators, and
the number of constituents is equal to the index or determinant of that lattice.
By considering more general lattices, we can potentially reduce the number
of constituents.
\begin{definition}
\label{d:qp}
A function $f : \ZZ^n \to \QQ$ is
a (multivariate) \defindex{quasi-polynomial} of period $L$ if there
exists a list of $\det L$ polynomials $g_{\vec i} \in \QQ[T_1,\ldots,T_n]$
for $\vec i$ in the fundamental parallelepiped of $L$ such
that
$$
f (\vec s) = g_{\vec i}(\vec s) \qquad \hbox{if $\vec s \equiv \vec i \mod L$.}
$$
\end{definition}
To compute the period lattice from a fractional representation, we compute
the appropriate lattice for each fractional part and then take their intersection.
Recall that the argument of each fractional part is an affine expression
in the parameters $(\sp a p + c)/m$,
with $\vec a \in \ZZ^n$ and $c, m \in \ZZ$.
Such a fractional part is invariant over a translation by
any (integer) value of $\vec p$
such that $\sp a p + m t = 0$ for some $t \in \ZZ$.
Solving this homogeneous equation over the integers (in our implementation,
we use \PolyLib/'s \ai[\tt]{SolveDiophantine}) gives the general solution
$$
\begin{bmatrix}
\vec p \\ t
\end{bmatrix}
=
\begin{bmatrix}
U_1 \\ U_2
\end{bmatrix}
\vec x
\qquad\text{for $\vec x \in \ZZ^n$.}
$$
The matrix $U_1 \in \ZZ^{n \times n}$ then has the generators of
the required lattice as columns.
The constituents are computed by plugging in each integer point
in the fundamental parallelepiped of the lattice.
These points themselves are computed as explained in Section~\ref{s:fundamental}.
Note that for computing the constituents, it is sufficient to take any
representative of the residue class. For example, we could take
$\vec w^\T = \vec k^\T W$ in the notations of Lemma~\ref{l:fundamental}.
\begin{example}[\shortciteN{Woods2006personal}]
Consider the parametric polytope
$$
P_{s,t}=\{\, x \mid 0 \le x \le (s+t)/2 \,\}
.
$$
The enumerator of $P_{s,t}$ is
$$
\begin{cases}
\frac s 2 + \frac t 2 + 1 &
\text{if $\begin{bmatrix}s \\ t \end{bmatrix} \in
\begin{bmatrix}
-1 & -2 \\ 1 & 0
\end{bmatrix}
\ZZ^2 +
\begin{bmatrix}
0 \\ 0
\end{bmatrix}
$}
\\
\frac s 2 + \frac t 2 + \frac 1 2 &
\text{if $\begin{bmatrix}s \\ t \end{bmatrix} \in
\begin{bmatrix}
-1 & -2 \\ 1 & 0
\end{bmatrix}
\ZZ^2 +
\begin{bmatrix}
-1 \\ 0
\end{bmatrix}
$}
\end{cases}
$$
The corresponding output of \ai[\tt]{barvinok\_enumerate} is
\begin{verbatim}
s + t >= 0
1 >= 0

Lattice:
[[-1 1]
[-2 0]

[0 0]
( 1/2 * s + ( 1/2 * t + 1 )

[-1 0]
( 1/2 * s + ( 1/2 * t + 1/2 )

\end{verbatim}
\end{example}
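The example above can be checked directly: the lattice generated by the
columns of the matrix consists exactly of the $(s,t)$ with $s+t$ even,
and the offset $(-1,0)$ selects the odd residue class. A brute-force
verification (the function names below are ours) compares the two
constituents against an explicit point count.

```python
from fractions import Fraction

def count_points(s, t):
    # brute-force count of integers x with 0 <= x <= (s + t)/2
    if s + t < 0:
        return 0
    return (s + t) // 2 + 1

def enumerator(s, t):
    # the two constituents, selected by the residue of (s, t) modulo the
    # lattice generated by the columns of [[-1, -2], [1, 0]]; a point lies
    # in that lattice exactly when s + t is even
    if (s + t) % 2 == 0:
        return Fraction(s, 2) + Fraction(t, 2) + 1
    return Fraction(s, 2) + Fraction(t, 2) + Fraction(1, 2)
```

For every non-negative $(s,t)$ the selected constituent reproduces the
exact count $\lfloor (s+t)/2 \rfloor + 1$.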
\subsection{Left inverse of an affine embedding}
\label{s:inverse}

We often map a polytope onto a lower dimensional space to remove possible
equalities in the polytope. These maps are typically represented
by the inverse, mapping the coordinates $\vec x'$ of the lower-dimensional
space to the coordinates $\vec x$ of (an affine subspace of) the original space,
i.e.,
$$
\begin{bmatrix}
\vec x \\ 1
\end{bmatrix}
=
\begin{bmatrix}
T & \vec v \\ \vec 0^\T & 1
\end{bmatrix}
\begin{bmatrix}
\vec x' \\ 1
\end{bmatrix}
,
$$
where, as usual in \PolyLib/, we work with homogeneous coordinates.
To obtain the transformation that maps the coordinates of the original
space to the coordinates of the lower dimensional space,
we need to compute the \ai{left inverse} of the above \ai{affine embedding},
i.e., an $A$, $\vec b$ and $d$ such that
$$
d
\begin{bmatrix}
\vec x' \\ 1
\end{bmatrix}
=
\begin{bmatrix}
A & \vec b \\ \vec 0^\T & d
\end{bmatrix}
\begin{bmatrix}
\vec x \\ 1
\end{bmatrix}
.
$$
To compute this left inverse, we first compute the
(right) \indac{HNF} of $T$,
$$
\begin{bmatrix}
U_1 \\ U_2
\end{bmatrix}
T
=
\begin{bmatrix}
H \\ 0
\end{bmatrix}
.
$$
The left inverse is then simply
$$
\begin{bmatrix}
d H^{-1}U_1 & -d H^{-1} U_1 \vec v \\ \vec 0^\T & d
\end{bmatrix}
.
$$
We often also want a description of the affine subspace that is the range
of the affine embedding and this is given by
$$
\begin{bmatrix}
U_2 & - U_2 \vec v
\end{bmatrix}
\begin{bmatrix}
\vec x \\ 1
\end{bmatrix}
=
\vec 0
.
$$
This computation is implemented in \ai[\tt]{left\_inverse}.
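The construction can be verified on a small concrete instance.
The sketch below (all names and the numerical example are ours, not the
actual \ai[\tt]{left\_inverse} code) computes a unimodular $U$ with
$UT = [H; 0]$ by integer row reduction, forms $A = dH^{-1}U_1$ and
$\vec b = -dH^{-1}U_1\vec v$ with $d = \det H$, and checks that
$A(T\vec x' + \vec v) + \vec b = d\,\vec x'$.

```python
def unimodular_reduce(T):
    """Integer row reduction: return U (unimodular) and R = U T of the
    form [H; 0] with H square and invertible (an echelon/HNF-like form)."""
    M = [row[:] for row in T]
    n, m = len(M), len(M[0])
    U = [[int(i == j) for j in range(n)] for i in range(n)]
    r = 0
    for col in range(m):
        while True:
            nz = [i for i in range(r, n) if M[i][col] != 0]
            if not nz:
                break
            i = min(nz, key=lambda k: abs(M[k][col]))
            M[r], M[i] = M[i], M[r]
            U[r], U[i] = U[i], U[r]
            if all(M[k][col] == 0 for k in range(r + 1, n)):
                r += 1
                break
            for k in range(r + 1, n):
                q = M[k][col] // M[r][col]
                M[k] = [a - q * b for a, b in zip(M[k], M[r])]
                U[k] = [a - q * b for a, b in zip(U[k], U[r])]
    return U, M

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# hypothetical embedding x = T x' + v
T = [[2, 0], [1, 2], [0, 0]]
v = [1, 0, 5]
U, R = unimodular_reduce(T)
U1, H = U[:2], [R[0][:2], R[1][:2]]
d = H[0][0] * H[1][1] - H[0][1] * H[1][0]         # det H
adj = [[H[1][1], -H[0][1]], [-H[1][0], H[0][0]]]  # d * H^{-1} (2x2 adjugate)
A = mat_mul(adj, U1)                              # d H^{-1} U_1
b = [-y for y in matvec(A, v)]                    # -d H^{-1} U_1 v

# check on a sample point: A (T x' + v) + b = d x'
xp = [3, -2]
x = [sum(t * c for t, c in zip(row, xp)) + w for row, w in zip(T, v)]
lhs = [p + q for p, q in zip(matvec(A, x), b)]
```

Any unimodular $U$ with $UT = [H;0]$ suffices for this check, so the
reduction above need not produce the exact HNF used by the implementation.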
\subsection{Integral basis of the orthogonal complement of a linear subspace}
\label{s:completion}

Let $M_1 \in \ZZ^{m \times n}$ be a basis of a linear subspace.
We first extend $M_1$ with zero rows to obtain a square matrix $M'$
and then compute the (left) \indac{HNF} of $M'$,
$$
\begin{bmatrix}
M_1 \\ 0
\end{bmatrix}
=
\begin{bmatrix}
H & 0 \\ 0 & 0
\end{bmatrix}
\begin{bmatrix}
Q_1 \\ Q_2
\end{bmatrix}
.
$$
The rows of $Q_2$ span the orthogonal complement of the given subspace.
Since $Q_2$ can be extended to a unimodular matrix, these rows form
an integral basis.
If the entries on the diagonal of $H$ are all $1$, then $M_1$
can be extended to a unimodular matrix by concatenating $M_1$ and $Q_2$.
The resulting matrix is unimodular, since
$$
\begin{bmatrix}
M_1 \\ Q_2
\end{bmatrix}
=
\begin{bmatrix}
H & 0 \\ 0 & I_{n-m,n-m}
\end{bmatrix}
\begin{bmatrix}
Q_1 \\ Q_2
\end{bmatrix}
.
$$
This method for extending a matrix of which
only a few rows are known to a \ai{unimodular matrix}
is more general than the method described by \shortciteN{Bik1996PhD},
which only considers extending a matrix given by a single row.
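For a tiny instance the unimodularity claim is easy to check by hand.
Taking $M_1 = [1\; 2\; 3]$, the left HNF of $M_1$ extended with zero rows has
$H = [1]$ (diagonal entry $1$) with $Q_1 = [1\; 2\; 3]$ and
$Q_2 = \begin{bsmallmatrix}0&1&0\\0&0&1\end{bsmallmatrix}$; the numbers and
helper below are ours, used only to confirm that $[M_1; Q_2]$ has
determinant $\pm 1$.

```python
from fractions import Fraction

def det(M):
    # determinant via Gaussian elimination over the rationals
    A = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(A), 1, Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if A[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            A[i], A[p] = A[p], A[i]
            sign = -sign
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [x - f * y for x, y in zip(A[r], A[i])]
    return sign * d

M1 = [[1, 2, 3]]
Q2 = [[0, 1, 0], [0, 0, 1]]
completion = M1 + Q2   # concatenate the rows of M1 and Q2
```

Since the diagonal of $H$ consists of ones, the concatenation is indeed
unimodular.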
\subsection{Ensuring a polyhedron has only revlex-positive rays}
\label{s:revlexpos}

The \ai[\tt]{barvinok\_series\_with\_options} function and all
further \ai[\tt]{gen\_fun} manipulations assume that the effective
parameter domain has only \ai{revlex-positive} rays.
When used to compute \rgf/s, the \ai[\tt]{barvinok\_enumerate}
application will therefore transform the effective parameter domain
of a problem if it has revlex-negative rays.
It will then not compute the generating function
$$
f(\vec x) = \sum_{\vec p \in \ZZ^m} \#(P_{\vec p} \cap \ZZ^d) \, \vec x^{\vec p}
,
$$
but
$$
g(\vec z) = \sum_{\vec p' \in \ZZ^n}
\#(P_{T \vec p' + \vec t} \cap \ZZ^d) \, \vec z^{\vec p'}
$$
instead, where $\vec p = T \vec p' + \vec t$,
with $T \in \ZZ^{m \times n}$ and $\vec t \in \ZZ^m$, is an affine transformation
that maps the transformed parameter space back to the original parameter space.
First assume that the parameter domain does not contain any lines and
that there are no equalities in the description of $P_{\vec p}$ that force
the values of $\vec p$ for which $P_{\vec p}$ contains integer points
to lie on a non-standard lattice.
Let the effective parameter domain be given as
$$
\{\, \vec p \mid A \vec p + \vec c \ge \vec 0 \,\}
,
$$
where $A \in \ZZ^{s \times n}$ has full column rank $n$;
otherwise the effective parameter domain would contain a line.
Let $H$ be the (left) \indac{HNF} of $A$, i.e.,
$$
A = H Q
,
$$
with $H$ lower-triangular with positive diagonal elements and
$Q$ unimodular.
Let $\tilde Q$ be the matrix obtained from $Q$ by reversing its rows,
and, similarly, $\tilde H$ from $H$ by reversing its columns.
After performing the transformation
$\vec p' = \tilde Q \vec p$, i.e.,
$\vec p = \tilde Q^{-1} \vec p'$, the transformed parameter domain
is given by
$$
\{\, \vec p' \mid A \tilde Q^{-1} \vec p' + \vec c \ge \vec 0 \,\}
=
\{\, \vec p' \mid \tilde H \vec p' + \vec c \ge \vec 0 \,\}
.
$$
The first constraint of this domain is
$h_{11} p'_n + c_1 \ge 0$. A ray with non-zero final coordinate
therefore has a positive final coordinate.
Similarly, the second constraint is
$h_{22} p'_{n-1} + h_{21} p'_n + c_2 \ge 0$.
A ray with zero $n$th coordinate, but non-zero $(n-1)$st coordinate,
will therefore have a positive $(n-1)$st coordinate.
Continuing this reasoning, we see that all rays in the transformed
domain are revlex-positive.
If the parameter domain does contain lines, but is not restricted
to a non-standard lattice, then the number of points in the parametric
polytope is invariant over a translation along the lines.
It is therefore sufficient to compute the number of points in the
orthogonal complement of the linear subspace spanned by the lines.
That is, we apply a prior transformation that maps a reduced parameter
domain to this subspace,
$$
\vec p = L^\perp \vec p' =
\begin{bmatrix}
L & L^\perp
\end{bmatrix}
\begin{bmatrix}
0 \\ I
\end{bmatrix}
\vec p'
,
$$
where $L$ has the lines as columns, and $L^\perp$ an integral basis
for the orthogonal complement (Section~\ref{s:completion}).
Note that the inverse transformation
$$
\vec p' =
L^{-\perp}
\vec p =
\begin{bmatrix}
0 & I
\end{bmatrix}
\begin{bmatrix}
L & L^\perp
\end{bmatrix}^{-1}
\vec p
$$
has integral coefficients since $L^\perp$ can be extended to a unimodular matrix.
If the parameter values $\vec p$ for which $P_{\vec p}$ contains integer points
are restricted to a non-standard lattice, we first replace the parameters
by a different set of parameters that lie on the standard lattice
through ``\ai{parameter compression}''~\shortcite{Meister2004PhD},
$$
\vec p = C \vec p'
.
$$
The (left) inverse of $C$ can be computed as explained in
Section~\ref{s:inverse}, giving
$$
\vec p' = C^{-L} \vec p
.
$$
We have to be careful to apply this transformation only when
both the equalities computed in Section~\ref{s:inverse} and some
additional divisibility constraints are satisfied.
In particular, if $\vec a^\T/d$ is a row of $C^{-L}$, with $\vec a \in \ZZ^{n'}$
and $d \in \ZZ$, the transformation can only be applied to parameter values
$\vec p$ such that $d$ divides $\sp a p$.

The complete transformation is given by
$$
\vec p = C L^\perp \hat Q^{-1} \vec p'
$$
and
$$
\vec p' = \hat Q L^{-\perp} C^{-L} \vec p
.
$$
\subsection{Parametric Volume Computation}

The \ai{volume} of a (parametric) polytope can serve as an approximation
for the number of integer points in the polytope.
We basically follow the description of~\shortciteN{Rabl2006} here, except that we
focus on volume computation for {\em linearly}
parametrized polytopes, which we exploit to determine the sign
of the determinants we compute, as explained below.

Note first that
the vertices of a linearly parametrized polytope are affine expressions
in the parameters that may be valid only in parts (chambers)
of the parameter domain.
Since the volume computation is based on the (active) vertices, we perform
the computation in each chamber separately.
Also note that since the vertices are affine expressions, it is
easy to check whether they belong to a facet.
The volume of a $d$-simplex, i.e., a $d$-dimensional polytope with
$d+1$ vertices, is relatively easy to compute.
In particular, if $\vec v_i(\vec p)$, for $0 \le i \le d$,
are the (parametric) vertices
of the simplex $P$ then
\begin{equation}
\label{eq:volume}
\vol P =
\frac 1 {d!}
\left|
\det
\begin{bmatrix}
v_{11}(\vec p) - v_{01}(\vec p) &
v_{12}(\vec p) - v_{02}(\vec p) &
\ldots &
v_{1d}(\vec p) - v_{0d}(\vec p)
\\
v_{21}(\vec p) - v_{01}(\vec p) &
v_{22}(\vec p) - v_{02}(\vec p) &
\ldots &
v_{2d}(\vec p) - v_{0d}(\vec p)
\\
\vdots & \vdots & \ddots & \vdots
\\
v_{d1}(\vec p) - v_{01}(\vec p) &
v_{d2}(\vec p) - v_{02}(\vec p) &
\ldots &
v_{dd}(\vec p) - v_{0d}(\vec p)
\end{bmatrix}
\right|.
\end{equation}
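For concrete (non-parametric) vertices, \eqref{eq:volume} is a direct
computation. The following transcription (names ours) uses exact rational
arithmetic and Gaussian elimination for the determinant.

```python
from fractions import Fraction

def simplex_volume(vertices):
    """Volume of a d-simplex from its d+1 vertices: |det| / d!, as in the
    volume formula, with rows v_i - v_0 for i = 1..d."""
    d = len(vertices) - 1
    rows = [[Fraction(vertices[i][j] - vertices[0][j]) for j in range(d)]
            for i in range(1, d + 1)]
    detval, sign = Fraction(1), 1
    for i in range(d):
        p = next((r for r in range(i, d) if rows[r][i] != 0), None)
        if p is None:
            return Fraction(0)   # degenerate simplex
        if p != i:
            rows[i], rows[p] = rows[p], rows[i]
            sign = -sign
        detval *= rows[i][i]
        for r in range(i + 1, d):
            f = rows[r][i] / rows[i][i]
            rows[r] = [x - f * y for x, y in zip(rows[r], rows[i])]
    fact = 1
    for k in range(2, d + 1):
        fact *= k
    return abs(sign * detval) / fact
```

In the parametric case the same determinant is a polynomial in $\vec p$ and
the absolute value is resolved by the sign test described below.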
If $P$ is not a simplex, i.e., $N > d+1$, with $N$ the number of
vertices of $P$, then the standard way of computing the volume of $P$
is to first {\em triangulate} $P$, i.e., subdivide $P$ into simplices,
and then to compute and sum the volumes of the resulting simplices.
One way of computing a triangulation is to
compute the \ai{barycenter}
$$
\frac 1 N \sum_i \vec v_i(\vec p)
$$
of $P$
and to perform a subdivision by computing the convex hulls
of the barycenter with each of the facets of $P$.
If a given facet of $P$ is itself a simplex, then this convex hull
is also a simplex. Otherwise the facet is further subdivided.
This recursive process terminates as every $1$-dimensional polytope
is a simplex.
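In two dimensions every facet is an edge and hence already a simplex, so the
recursion bottoms out immediately. The sketch below (names and the example
polygon are ours) subdivides a convex polygon from its barycenter and sums
the areas of the resulting triangles, illustrating the barycentric
subdivision for this simplest case.

```python
from fractions import Fraction

def triangle_area(a, b, c):
    # area of a triangle via the 2x2 determinant of edge vectors
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (b[1] - a[1]) * (c[0] - a[0])) / 2

def barycentric_subdivision_area(poly):
    """Cone the barycenter over each facet (edge) of a convex polygon and
    sum the simplex volumes; no recursion is needed since every edge is
    already a simplex."""
    n = len(poly)
    bx = sum(Fraction(p[0]) for p in poly) / n
    by = sum(Fraction(p[1]) for p in poly) / n
    return sum(triangle_area((bx, by), poly[i], poly[(i + 1) % n])
               for i in range(n))

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
```

For the square, the four triangles based at the barycenter $(1,1)$ each have
area $1$, so the subdivision recovers the total area $4$.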
The triangulation described above is known as
the boundary triangulation~\shortcite{Bueler2000exact} and is used
by \shortciteN{Rabl2006} in his implementation.
The Cohen-Hickey triangulation~\shortcite{Cohen1979volumes,Bueler2000exact}
is a much more efficient variation and uses one of the vertices
instead of the barycenter. The facets incident on this vertex
do not have to be considered in this case because the resulting subpolytopes
would have zero volume.
Another possibility is to use a
``lifting'' triangulation~\shortcite{Lee1991,DeLoera1995}.
In this triangulation, each vertex is assigned a (random) ``height'' in
an extra dimension.
The projection of the ``lower envelope'' of the resulting polytope onto
the original space results in a subdivision, which is a triangulation
with very high probability.

A complication with the lifting triangulation is that the constraint system
of the lifted polytope will in general not be linearly parametrized,
even if the original polytope is.
It is, however, sufficient to perform the triangulation for a particular
value of the parameters inside the chamber since the parametric polytope
has the same combinatorial structure throughout the chamber.
The triangulation obtained for the instantiated vertices can then
be carried over to the corresponding parametric vertices.
We only need to be careful to select a value for the parameters that
does not lie on any facet of the chambers, since on these facets some
of the vertices may coincide.
For linearly parametrized polytopes, it is easy to find a parameter
point in the interior of a chamber, as explained in Section~\ref{s:interior}.
Note that this point need not be integer.
A direct application of the above algorithm, using any of the triangulations,
would yield for each chamber
a volume expressed as the sum of the absolute values of polynomials in
the parameters. To remove the absolute value, we plug in a particular
value of the parameters (not necessarily integer)
belonging to the given chamber for which we know that the volume is non-zero.
Again, it is sufficient to take any point in the interior of the chamber.
The sign of the resulting value then determines the sign of the whole
polynomial, since polynomials are continuous functions and will not change
sign without passing through zero.
\subsection{Maclaurin series division}
\label{s:division}

If $P(t)$ and $Q(t)$ are two Maclaurin series
\begin{align*}
P(t) & = a_0 + a_1 t + a_2 t^2 + \cdots \\
Q(t) & = b_0 + b_1 t + b_2 t^2 + \cdots
,
\end{align*}
then, as outlined by \shortciteN[241--247]{Henrici1974},
we can compute the coefficients $c_l$ in
$$
\frac{P(t)}{Q(t)} =: c_0 + c_1 t + c_2 t^2 + \cdots
$$
by applying the recurrence relation
$$
c_l = \frac 1 {b_0} \left( a_l - \sum_{i=1}^l b_i c_{l-i} \right)
.
$$
To avoid dealing with denominators, we can also compute
$d_l = b_0^{l+1} c_l$ instead as
$$
d_l = b_0^l a_l - \sum_{i=1}^l b_0^{i-1} b_i d_{l-i}
.
$$
The coefficients $c_l$ can then be directly read off as
$$
c_l = \frac{d_l}{b_0^{l+1}}
.
$$
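Both recurrences are a few lines of code. The sketch below (function names
ours) implements them with exact rational arithmetic; the second variant
stays denominator-free whenever the $a_l$ and $b_l$ are integer.

```python
from fractions import Fraction

def series_div(a, b, n):
    """Coefficients c_0..c_n of P/Q via
    c_l = (a_l - sum_{i=1}^l b_i c_{l-i}) / b_0."""
    c = []
    for l in range(n + 1):
        c.append((Fraction(a[l]) - sum(b[i] * c[l - i]
                                       for i in range(1, l + 1))) / b[0])
    return c

def series_div_denom_free(a, b, n):
    """Denominator-free variant: d_l = b_0^{l+1} c_l computed as
    d_l = b_0^l a_l - sum_{i=1}^l b_0^{i-1} b_i d_{l-i}."""
    d = []
    for l in range(n + 1):
        d.append(b[0]**l * a[l]
                 - sum(b[0]**(i - 1) * b[i] * d[l - i]
                       for i in range(1, l + 1)))
    return d
```

For instance, dividing $1$ by $1 - t - t^2$ yields the Fibonacci numbers,
and for $Q$ with $b_0 \ne 1$ one can check that $d_l = b_0^{l+1} c_l$ holds.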
\subsection{Specialization through exponential substitution}

This section draws heavily from \shortciteN{Koeppe2006experiments}.

After computing the \rgf/ of a polytope,
\begin{equation}
\label{eq:rgf}
f(\vec x)=
\sum_{i\in I}\alpha_i
\frac{\sum_{k=1}^{r} \vec x^{\vec w_{ik} }}
{\prod_{j=1}^{d}\left(1-\vec x^{\vec b_{ij}}\right)}
,
\end{equation}
the number of lattice points in the polytope can be obtained
by evaluating $f(\vec 1)$. Since $\vec 1$ is a pole of each
term, we need to compute the constant term in the Laurent expansion
of each term in~\eqref{eq:rgf} about $\vec 1$.
Since it is easier to work with univariate series, a substitution is usually
applied, either a \ai{polynomial substitution}
$$
\vec x = (1+t)^{\vec \lambda}
,
$$
as implemented in \LattE/ \shortcite{latte1.1},
or an \ai{exponential substitution} (see, e.g., \shortciteNP{Barvinok1999}),
$$
\vec x = e^{t \vec \lambda}
,
$$
as implemented in \LattEmk/ \shortcite{latte-macchiato}.
In each case, $\vec \lambda \in \ZZ^d$ is a vector that is not orthogonal
to any of the $\vec b_{ij}$.
Both substitutions transform the problem of computing the
constant term in the Laurent expansions about $\vec x = \vec 1$
into that of computing the constant term in the
Laurent expansion about $t = 0$.
Consider now one of the terms in~\eqref{eq:rgf},
$$
g(t) =
\frac{\sum_{k=1}^{r} e^{a_k t}}
{\prod_{j=1}^{d}\left(1-e^{c_j t}\right)}
,
$$
with $a_k = \sp{w_{ik}}{\lambda}$ and $c_j = \sp{b_{ij}}{\lambda}$.
We rewrite this equation as
$$
g(t) =
(-1)^d
\frac{\sum_{k=1}^{r} e^{a_k t}}
{t^d \prod_{j=1}^d c_j}
\prod_{j=1}^d \frac{-c_j t}
{1-e^{c_j t}}
.
$$
The second factor is analytic in a neighborhood of the origin
$t = c_1 = \cdots = c_d = 0$ and therefore has a Taylor series expansion
\begin{equation}
\label{eq:todd}
\prod_{j=1}^d \frac{-c_j t}
{1-e^{c_j t}}
=
\sum_{m=0}^{\infty} \todd_m(-c_1, \ldots, -c_d) t^m
,
\end{equation}
where $\todd_m$ is a homogeneous polynomial of degree $m$ called
the $m$-th \ai{Todd polynomial}~\cite{Barvinok1999}.
Also expanding the numerator in the first factor, we find
$$
g(t) = \frac{(-1)^d}{t^d \prod_{j=1}^d c_j}
\left(
\sum_{n=0}^{\infty}\frac{\sum_{k=1}^{r} a_k^n}{n!} t^n
\right)
\left(
\sum_{m=0}^{\infty} \todd_m(-c_1, \ldots, -c_d) t^m
\right)
,
$$
with constant term
\begin{multline}
\label{eq:todd:constant}
\frac{(-1)^d}{t^d \prod_{j=1}^d c_j}
\left(\sum_{i=0}^d \frac{\sum_{k=1}^{r} a_k^i}{i!}
\todd_{d-i}(-c_1, \ldots, -c_d)\right)t^d
= \\
\frac{(-1)^d}{\prod_{j=1}^d c_j}
\sum_{i=0}^d \frac{\sum_{k=1}^{r} a_k^i}{i!} \todd_{d-i}(-c_1, \ldots, -c_d)
.
\end{multline}
To compute the first $d+1$ terms in the Taylor series~\eqref{eq:todd},
we write down the truncated Taylor series
$$
\frac{e^t -1}t \equiv
\sum_{i=0}^d \frac 1{(i+1)!} t^i \equiv
\frac 1 {(d+1)!} \sum_{i=0}^d \frac{(d+1)!}{(i+1)!} t^i
\mod t^{d+1}
,
$$
where we have
$$
\frac 1 {(d+1)!} \sum_{i=0}^d \frac{(d+1)!}{(i+1)!} t^i
\in \frac 1{(d+1)!} \ZZ[t]
.
$$
Computing the reciprocal as explained in Section~\ref{s:division},
we find
\begin{equation}
\label{eq:t-exp-1}
\frac{t}{e^t-1} = \frac 1{\frac{e^t -1}t}
\equiv (d+1)! \frac 1{\sum_{i=0}^d \frac{(d+1)!}{(i+1)!} t^i}
=: \sum_{i=0}^d b_i t^i
.
\end{equation}
Note that the constant term of the denominator is $1/(d+1)!$.
The denominators of the quotient are therefore $((d+1)!)^{i+1}/(d+1)!$.
Also note that the $b_i$ are independent of the generating function
and can be computed in advance.
An alternative way of computing the $b_i$ is to note that
$$
\frac{t}{e^t-1} = \sum_{i=0}^\infty B_i \frac{t^i}{i!}
,
$$
with $B_i = i! \, b_i$ the \ai{Bernoulli number}s, which can be computed
using the recurrence~\eqref{eq:Bernoulli} (see Section~\ref{s:nested}).
Substituting $t$ by $c_j t$ in~\eqref{eq:t-exp-1}, we have
$$
\frac{-c_j t}{1-e^{c_j t}} = \sum_{i=0}^d b_i c_j^i t^i
.
$$
Multiplication of these truncated Taylor series for each $c_j$
results in the first $d+1$ terms of~\eqref{eq:todd},
$$
\sum_{m=0}^{d} \todd_m(-c_1, \ldots, -c_d) t^m
=
\sum_{m=0}^{d} \frac{\beta_m}{((d+1)!)^m} t^m
,
$$
from which
it is easy to compute the constant term~\eqref{eq:todd:constant}.
Note that this convolution can also be computed without the use
of rational coefficients,
$$
\frac{(-1)^d}{\prod_{j=1}^d c_j}
\sum_{i=0}^d \frac{\alpha_i}{i!} \frac{\beta_{d-i}}{((d+1)!)^{d-i}}
=
\frac{(-1)^d}{((d+1)!)^d\prod_{j=1}^d c_j}
\sum_{i=0}^d \left(\frac{((d+1)!)^i}{i!}\alpha_i\right) \beta_{d-i}
,
$$
with $\alpha_i = \sum_{k=1}^{r} a_k^i$.
\begin{example}
Consider the \rgf/
\begin{multline*}
\f T x =
\frac{x_1^2}{(1-x_1^{-1})(1-x_1^{-1}x_2)}
+
\frac{x_2^2}{(1-x_2^{-1})(1-x_1 x_2^{-1})}
+ {} \\
\frac1{(1-x_1)(1-x_2)}
\end{multline*}
from \shortciteN[Example~39]{Verdoolaege2005PhD}.
Since this is a 2-dimensional problem, we first compute the first
3 Todd polynomials (evaluated at $-1$),
$$
\frac{e^t -1}t \equiv
1 + \frac 1 2 t + \frac 1 6 t^2 =
\frac 1 6
\begin{bmatrix}
6 & 3 & 1
\end{bmatrix}
$$
and
$$
\frac t{e^t -1} \equiv
\begin{bmatrix}
\displaystyle\frac{-1}{1} & \displaystyle\frac{3}{6} & \displaystyle\frac{-3}{36}
\end{bmatrix}
,
$$
where we represent each truncated power series by a vector of its
coefficients.
The vector $\vec\lambda = (1, -1)$ is not
orthogonal to any of the rays, so we can use the substitution
$\vec x = e^{(1, -1)t}$
and obtain
$$
\frac{e^{2t}}{(1-e^{-t})(1-e^{-2t})}
+
\frac{e^{-2t}}{(1-e^{t})(1-e^{2t})}
+
\frac1{(1-e^{t})(1-e^{-t})}
.
$$
We have
\begin{align*}
\frac{t}{1-e^{- t}} & =
\begin{bmatrix}
\displaystyle\frac{-1}{1} & \displaystyle\frac{-3}{6} & \displaystyle\frac{-3}{36}
\end{bmatrix}
\\
\frac{2t}{1-e^{-2 t}} & =
\begin{bmatrix}
\displaystyle\frac{-1}{1} & \displaystyle\frac{-6}{6} & \displaystyle\frac{-12}{36}
\end{bmatrix}
\\
\frac{-t}{1-e^{t}} & =
\begin{bmatrix}
\displaystyle\frac{-1}{1} & \displaystyle\frac{3}{6} & \displaystyle\frac{-3}{36}
\end{bmatrix}
\\
\frac{-2t}{1-e^{2t}} & =
\begin{bmatrix}
\displaystyle\frac{-1}{1} & \displaystyle\frac{6}{6} & \displaystyle\frac{-12}{36}
\end{bmatrix}
.
\end{align*}
The first term in the \rgf/ evaluates to
\begin{align*}
&
\frac 1{-1 \cdot -2}
\begin{bmatrix}
\displaystyle\frac{1}{1} & \displaystyle\frac{2}{1} & \displaystyle\frac{4}{2}
\end{bmatrix}
*
\left(
\begin{bmatrix}
\displaystyle\frac{-1}{1} & \displaystyle\frac{-3}{6} & \displaystyle\frac{-3}{36}
\end{bmatrix}
*
\begin{bmatrix}
\displaystyle\frac{-1}{1} & \displaystyle\frac{-6}{6} & \displaystyle\frac{-12}{36}
\end{bmatrix}
\right)
\\
= {} &
\frac 1{2}
\begin{bmatrix}
\displaystyle\frac{1}{1} & \displaystyle\frac{2}{1} & \displaystyle\frac{4}{2}
\end{bmatrix}
*
\begin{bmatrix}
\displaystyle\frac{1}{1} & \displaystyle\frac{9}{6} & \displaystyle\frac{33}{36}
\end{bmatrix}
\\
= {} &
\frac 1{72}
\begin{bmatrix}
1 & 2 \cdot 6 & 4 \cdot 18
\end{bmatrix}
*
\begin{bmatrix}
1 & 9 & 33
\end{bmatrix}
= \frac {213}{72} = \frac{71}{24}
,
\end{align*}
where $*$ denotes the truncated convolution of the coefficient vectors,
of which only the final (degree-$d$) coefficient needs to be retained.
Due to symmetry, the second term evaluates to the same value,
while for the third term we find
$$
\frac{1}{-1\cdot 1 \cdot 36}
\begin{bmatrix}
1 & 0 \cdot 6 & 0 \cdot 18
\end{bmatrix}
*
\begin{bmatrix}
1 & 0 & -3
\end{bmatrix}
=
\frac{-3}{-36} = \frac 1{12}
.
$$
The sum is
$$
\frac{71}{24} + \frac{71}{24} + \frac 1{12} = 6
.
$$
\end{example}
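The whole computation in the example can be replayed mechanically: compute
the $b_i$ by series division, build each factor $\sum_i b_i c_j^i t^i$,
convolve with the numerator series $\sum_n (\sum_k a_k^n/n!)\, t^n$, and keep
the degree-$d$ coefficient. The sketch below (function names ours) does
exactly this with exact rational arithmetic.

```python
from fractions import Fraction
from math import factorial

def series_mul(p, q, d):
    # truncated product of two Taylor series, modulo t^{d+1}
    return [sum(p[i] * q[n - i] for i in range(n + 1)) for n in range(d + 1)]

def todd_factor(c, d):
    # -c t / (1 - e^{c t}) = c t / (e^{c t} - 1) = sum_i b_i c^i t^i,
    # with b_i obtained by dividing 1 by the truncated series of (e^t - 1)/t
    q = [Fraction(1, factorial(i + 1)) for i in range(d + 1)]
    b = []
    for l in range(d + 1):
        a_l = Fraction(1 if l == 0 else 0)
        b.append((a_l - sum(q[i] * b[l - i] for i in range(1, l + 1))) / q[0])
    return [b[i] * c**i for i in range(d + 1)]

def term_constant(a_ks, c_js):
    # constant term of sum_k e^{a_k t} / prod_j (1 - e^{c_j t})
    d = len(c_js)
    series = [Fraction(sum(a**n for a in a_ks), factorial(n))
              for n in range(d + 1)]
    for c in c_js:
        series = series_mul(series, todd_factor(c, d), d)
    denom = Fraction(1)
    for c in c_js:
        denom *= c
    return (-1)**d * series[d] / denom

# the three terms of the example, after substituting x = e^{(1,-1)t}
count = (term_constant([2], [-1, -2])    # first term
         + term_constant([-2], [1, 2])   # second term
         + term_constant([0], [1, -1]))  # third term
```

The three terms evaluate to $71/24$, $71/24$ and $1/12$, summing to the
$6$ lattice points of the example.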
Note that the run-time complexities of polynomial and exponential
substitution are basically the same. The experiments of
\citeN{Koeppe2006primal} are somewhat misleading in this respect,
since the polynomial substitution (unlike the exponential
substitution) had not been optimized to take full
advantage of the stopped Barvinok decomposition.
For comparison, \autoref{t:hickerson} shows running times
for the same experiments of that paper, but using
barvinok version \verb+barvinok-0.23-47-gaa9024e+
on an Athlon MP 1500+ with 512MiB internal memory.
This machine appears to be slightly slower than the
machine used in the experiments of \citeN{Koeppe2006primal},
as computing {\tt hickerson-14} using the dual decomposition
with polynomial substitution and maximal index 1
took 2768 seconds on this machine using \LattEmk/.
At this stage, it is not clear yet why the number of
cones in the dual decomposition of {\tt hickerson-13}
differs from that of \LattE/~\shortcite{latte1.1} and
\LattEmk/~\cite{latte-macchiato}.
We conclude from \autoref{t:hickerson} that (our implementation of)
the exponential substitution is always slightly faster than
(our implementation of) the polynomial substitution.
The optimal maximal index for these examples is about 500,
which agrees with the experiments of \citeN{Koeppe2006primal}.
\begin{table}
\begin{center}
\begin{tabular}{rrrrrrr}
\hline
&
\multicolumn{3}{c}{Dual decomposition} &
\multicolumn{3}{c}{Primal decomposition}
\\
& &
\multicolumn{2}{c}{Time (s)} & &
\multicolumn{2}{c}{Time (s)}
\\
\cline{3-4}
\cline{6-7}
Max.\ index & Cones & Poly & Exp & Cones & Poly & Exp \\
\hline
\multicolumn{7}{c}{{\tt hickerson-12}}
\\
\hline
1 & 11625 & 9.24 & 8.90 & 7929 & 4.80 & 4.55 \\
10 & 4251 & 4.32 & 4.19 & 803 & 0.66 & 0.62 \\
100 & 980 & 1.42 & 1.35 & 84 & 0.13 & 0.12 \\
200 & 550 & 1.00 & 0.92 & 76 & 0.12 & 0.12 \\
300 & 474 & 0.93 & 0.86 & 58 & 0.12 & 0.10 \\
500 & 410 & 0.90 & 0.83 & 42 & 0.10 & 0.10 \\
1000 & 130 & 0.42 & 0.38 & 22 & {\bf 0.10} & {\bf 0.07} \\
2000 & 10 & {\bf 0.10} & {\bf 0.10} & 22 & 0.10 & 0.09 \\
5000 & 7 & 0.12 & 0.11 & 7 & 0.12 & 0.10 \\
\hline
\multicolumn{7}{c}{{\tt hickerson-13}}
\\
\hline
1 & 494836 & 489 & 463 & 483507 & 339 & 315 \\
10 & 296151 & 325 & 309 & 55643 & 51 & 48 \\
100 & 158929 & 203 & 192 & 9158 & 11 & 10 \\
200 & 138296 & 184 & 173 & 6150 & 9 & 8 \\
300 & 110438 & 168 & 157 & 4674 & 8 & 7 \\
500 & 102403 & 163 & 151 & 3381 & {\bf 8} & {\bf 7} \\
1000 & 83421 & {\bf 163} & {\bf 149} & 2490 & 8 & 7 \\
2000 & 77055 & 170 & 153 & 1857 & 10 & 8 \\
5000 & 57265 & 246 & 211 & 1488 & 13 & 11 \\
10000 & 50963 & 319 & 269 & 1011 & 26 & 21 \\
\hline
\multicolumn{7}{c}{{\tt hickerson-14}}
\\
\hline
1 & 1682743 & 2171 & 2064 & 552065 & 508 & 475 \\
10 & 1027619 & 1453 & 1385 & 49632 & 62 & 59 \\
100 & 455474 & 768 & 730 & 8470 & 14 & 13 \\
200 & 406491 & 699 & 661 & 5554 & 11 & 10 \\
300 & 328340 & 627 & 590 & 4332 & 11 & 9 \\
500 & 303566 & 605 & 565 & 3464 & {\bf 11} & {\bf 9} \\
1000 & 232626 & {\bf 581} & {\bf 532} & 2384 & 12 & 10 \\
2000 & 195368 & 607 & 545 & 1792 & 14 & 12 \\
5000 & 147496 & 785 & 682 & 1276 & 19 & 16 \\
10000 & 128372 & 966 & 824 & 956 & 29 & 23 \\
\hline
\end{tabular}
\caption{Timing results of dual and primal decomposition with
polynomial or exponential substitution on the Hickerson examples}
\label{t:hickerson}
\end{center}
\end{table}

\subsection{Approximate Enumeration using Nested Sums}
\label{s:nested}

If $P \subseteq \QQ^d$ is a polyhedron and $p(\vec x) \in \QQ[\vec x]$ is a
polynomial, and we want to sum $p(\vec x)$ over all integer values
of (a subset of) the variables $\vec x$, then we can do so incrementally
by taking a variable $x_1$ with lower bound $L(\vec{\hat x})$
and upper bound $U(\vec{\hat x})$, with $\vec{\hat x} = (x_2, \ldots, x_d)$,
and computing
\begin{equation}
\label{eq:nested:sum}
Q(\vec{\hat x}) = \sum_{x_1 = L(\vec{\hat x})}^{U(\vec{\hat x})} p(\vec x)
.
\end{equation}
Since $P$ is a polytope, the lower bound is a maximum of affine expressions
in the remaining variables, while the upper bound is a minimum of such expressions.
If the coefficients in these expressions are all integer, then we can
compute $Q(\vec{\hat x})$ exactly as a piecewise polynomial using formulas
for sums of powers, as proposed by, e.g.,
\shortciteN{Tawbi1994,Sakellariou1997sums,VanEngelen2004}.
If some of the coefficients are not integer, we can apply the same formulas
to obtain an approximation, which can in some cases be shown
to be an overapproximation~\shortcite{VanEngelen2004}.
Note that if we take the initial polynomial to be the constant $1$, then
this gives us a method for computing an approximation of the number
of integer points in a (parametric) polytope.

The first step is to compute the chamber decomposition of $P$ when viewed
as a 1-dimensional parametric polytope. That is, we need to partition
the projection of $P$ onto the remaining variables into polyhedral cells
such that in each cell, both the upper and the lower bound are described
by a single affine expression. Basically, for each pair of lower and upper
bound, we compute the cell where the chosen lower bound is (strictly)
larger than all other lower bounds and the chosen upper bound is (strictly)
smaller than all other upper bounds.
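This cell selection can be illustrated with a small Python sketch. The
two lower bounds, two upper bounds and the sample parameter values below
are our own hypothetical example, not taken from the text; each affine
bound in a single parameter $q$ is encoded as a pair $(a, b)$ meaning
$a q + b$:

```python
# Hypothetical example: lower bounds 0 and q - 2, upper bounds q and 4
# on the variable x_1; (a, b) encodes the affine form a*q + b.
lowers = [(0, 0), (1, -2)]
uppers = [(1, 0), (0, 4)]

def ev(f, q):
    a, b = f
    return a * q + b

def active_pair(q):
    """Indices of the largest lower bound and the smallest upper bound at q."""
    l = max(range(len(lowers)), key=lambda i: ev(lowers[i], q))
    u = min(range(len(uppers)), key=lambda i: ev(uppers[i], q))
    return l, u

# The active pair changes at q = 2 and q = 4, giving three cells.
print(active_pair(1))  # -> (0, 0)
print(active_pair(3))  # -> (1, 0)
print(active_pair(5))  # -> (1, 1)
```

Each distinct value of \texttt{active\_pair} over the parameter domain
corresponds to one cell of the decomposition.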

For any given pair of lower and upper bounds $(l(\vec {\hat x}), u(\vec{\hat x}))$,
the formula~\eqref{eq:nested:sum} is computed for each monomial of $p(\vec x)$
separately. For the constant term $\alpha_0$, we have
$$
\sum_{x_1 = l(\vec {\hat x})}^{u(\vec{\hat x})} \alpha_0(\vec{\hat x})
= \alpha_0(\vec{\hat x}) \left(u(\vec{\hat x}) - l(\vec {\hat x}) + 1\right)
.
$$
For the higher degree monomials, we use the formula
\begin{equation}
\label{eq:summation}
\sum_{k=0}^{m-1} k^n = {1\over{n+1}}\sum_{k=0}^n{n+1\choose{k}} B_k m^{n+1-k}
=: S_n(m)
,
\end{equation}
with $B_i$ the \ai{Bernoulli number}s, which can be computed
using the recurrence
\begin{equation}
\label{eq:Bernoulli}
\sum_{j=0}^m{m+1\choose{j}}B_j = 0
\qquad B_0 = 1
.
\end{equation}
Note that \eqref{eq:summation} is also valid if $m = 0$,
i.e., $S_n(0) = 0$, a fact
that can be easily shown using Newton series~\shortcite{VanEngelen2004}.
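The recurrence~\eqref{eq:Bernoulli} and the closed form~\eqref{eq:summation}
can be checked with a few lines of Python using exact rational arithmetic.
This is only a sketch of the formulas, not the implementation used in the
library:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """B_0, ..., B_n from the recurrence sum_j binom(m+1, j) B_j = 0, B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        # binom(m+1, m) = m + 1 multiplies B_m in the recurrence
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

def S(n, m):
    """S_n(m) = sum_{k=0}^{m-1} k^n via the Bernoulli closed form."""
    B = bernoulli(n)
    return sum(comb(n + 1, k) * B[k] * Fraction(m) ** (n + 1 - k)
               for k in range(n + 1)) / (n + 1)

# Brute-force checks of the closed form:
assert bernoulli(3) == [1, Fraction(-1, 2), Fraction(1, 6), 0]
assert S(3, 10) == sum(k ** 3 for k in range(10))  # 2025
assert S(4, 0) == 0  # S_n(0) = 0
```

Because the arguments are \texttt{Fraction}s, the same routine also
evaluates the polynomial extension of $S_n$ at rational points, which is
what the approximate cases below rely on.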

\newcounter{saveenumi}

Since we can only directly apply the summation formula when
the lower bound is zero (or one), we need to consider several
cases.
\begin{enumerate}
\item $l(\vec {\hat x}) \ge 1$
\begin{align*}
\sum_{x_1 = l(\vec {\hat x})}^{u(\vec{\hat x})} \alpha_n(\vec{\hat x}) \, x_1^n
& =
\alpha_n(\vec{\hat x})
\left(
\sum_{x_1 = 1}^{u(\vec{\hat x})} x_1^n
-
\sum_{x_1 = 1}^{l(\vec {\hat x})-1} x_1^n
\right)
\\
& =
\alpha_n(\vec{\hat x})
\left( S_n(u(\vec{\hat x})+1) - S_n(l(\vec {\hat x})) \right)
\end{align*}

\item $u(\vec{\hat x}) \le -1$
\begin{align*}
\sum_{x_1 = l(\vec {\hat x})}^{u(\vec{\hat x})} \alpha_n(\vec{\hat x}) \, x_1^n
& =
\alpha_n(\vec{\hat x}) (-1)^n
\sum_{x_1 = -u(\vec {\hat x})}^{-l(\vec{\hat x})} x_1^n
\\
& =
\alpha_n(\vec{\hat x}) (-1)^n
\left( S_n(-l(\vec{\hat x})+1) - S_n(-u(\vec {\hat x})) \right)
\end{align*}

\item $l(\vec {\hat x}) \le 0$ and $u(\vec{\hat x}) \ge 0$
\begin{align*}
\sum_{x_1 = l(\vec {\hat x})}^{u(\vec{\hat x})} \alpha_n(\vec{\hat x}) \, x_1^n
& =
\alpha_n(\vec{\hat x})
\left(
\sum_{x_1 = 0}^{u(\vec{\hat x})} x_1^n
+
(-1)^n
\sum_{x_1 = 1}^{-l(\vec {\hat x})} x_1^n
\right)
\\
& =
\alpha_n(\vec{\hat x})
\left(
S_n(u(\vec{\hat x})+1)
+
(-1)^n
S_n(-l(\vec{\hat x})+1)
\right)
\end{align*}

\setcounter{saveenumi}{\value{enumi}}
\end{enumerate}

If the coefficients in the lower and upper bound are all
integer, then the above three cases partition (the integer points in)
the projection of $P$ onto the remaining variables.
However, if some of the coefficients are rational, then the lower
and upper bound can lie in the open interval $(0,1)$ for some
values of $\vec{\hat x}$. We therefore also need to consider
the following two cases.
Note that the results will not be correct in these cases, but
not taking them into account would lead to a greater loss in accuracy.

\begin{enumerate}
\setcounter{enumi}{\value{saveenumi}}
\item $0 < l(\vec {\hat x}) < 1$
$$
\sum_{x_1 = l(\vec {\hat x})}^{u(\vec{\hat x})} \alpha_n(\vec{\hat x}) \, x_1^n
\approx
\alpha_n(\vec{\hat x})
S_n(u(\vec{\hat x})+1)
$$

\item $0 < -u(\vec {\hat x}) < 1$ and $l(\vec {\hat x}) \le 0$
$$
\sum_{x_1 = l(\vec {\hat x})}^{u(\vec{\hat x})} \alpha_n(\vec{\hat x}) \, x_1^n
\approx
\alpha_n(\vec{\hat x})
(-1)^n
S_n(-l(\vec{\hat x})+1)
$$

\end{enumerate}
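The five cases can be collected into a single routine. The sketch below is
our own illustration (the monomial is summed without its coefficient
$\alpha_n$, and $S_n$ is recomputed from the Bernoulli formula so the
snippet is self-contained); it is exact for integer bounds and reproduces
the approximation for rational ones:

```python
from fractions import Fraction
from math import comb

def S(n, m):
    """Polynomial extension of S_n(m) = sum_{k=0}^{m-1} k^n (Bernoulli formula)."""
    B = [Fraction(1)]
    for i in range(1, n + 1):
        B.append(-sum(comb(i + 1, j) * B[j] for j in range(i)) / (i + 1))
    return sum(comb(n + 1, k) * B[k] * Fraction(m) ** (n + 1 - k)
               for k in range(n + 1)) / (n + 1)

def monomial_sum(n, l, u):
    """(Approximate) sum of x^n over integer x in [l, u], following the case split."""
    l, u = Fraction(l), Fraction(u)
    if l >= 1:                       # case 1
        return S(n, u + 1) - S(n, l)
    if u <= -1:                      # case 2
        return (-1) ** n * (S(n, -l + 1) - S(n, -u))
    if l <= 0 and u >= 0:            # case 3
        return S(n, u + 1) + (-1) ** n * S(n, -l + 1)
    if 0 < l < 1:                    # case 4 (approximate)
        return S(n, u + 1)
    return (-1) ** n * S(n, -l + 1)  # case 5 (approximate)

# For integer bounds the case split is exact:
assert monomial_sum(2, 3, 7) == sum(x ** 2 for x in range(3, 8))
assert monomial_sum(3, -5, -2) == sum(x ** 3 for x in range(-5, -1))
assert monomial_sum(2, -3, 4) == sum(x ** 2 for x in range(-3, 5))
```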

\subsection{Using TOPCOM to compute Chamber Decompositions}

In this section, we describe how to use the correspondence
between the \ai{regular triangulation}s of a point set
and the chambers of the \ai{Gale transform}
of the point set~\shortcite{Gelfand1994}
to compute the chamber decomposition of a parametric polytope.
This correspondence was also used by \shortciteN{Pfeifle2003}
and by \shortciteN{Eisenschmidt2007integrally}.

Let us first assume that the parametric polytope can be written as
\begin{equation}
\label{eq:TOPCOM:polytope}
\begin{cases}
\begin{aligned}
\vec x &\ge 0
\\
A \, \vec x &\le \vec b(\vec p)
,
\end{aligned}
\end{cases}
\end{equation}
where the right hand side $\vec b(\vec p)$ is arbitrary and
may depend on the parameters.
The first step is to add slack variables $\vec s$ to obtain
the \ai{vector partition} problem
$$
\begin{cases}
\begin{aligned}
A \, \vec x + I \, \vec s & = \vec b(\vec p)
\\
\vec x, \vec s &\ge 0
,
\end{aligned}
\end{cases}
$$
with $I$ the identity matrix.
Then we compute the (right) kernel $K$ of the matrix
$\begin{bmatrix}
A & I
\end{bmatrix}$, i.e.,
$$
\begin{bmatrix}
A & I
\end{bmatrix}
K = 0
,
$$
and use \ai[\tt]{TOPCOM}'s \ai[\tt]{points2triangs} to
compute the \ai{regular triangulation}s of the points specified
by the rows of $K$.
Each of the resulting triangulations corresponds to a chamber
in the chamber complex of the above vector partition problem.
Each simplex in a triangulation corresponds to a parametric
vertex active on the corresponding chamber and
each point in the simplex (i.e., a row of $K$) corresponds
to a variable ($x_j$ or $s_j$) that is set to zero to obtain
this parametric vertex.
In the original formulation of the problem~\eqref{eq:TOPCOM:polytope},
each such variable set to zero reflects the saturation of the
corresponding constraint ($x_j = 0$ for $x_j = 0$ and
$\sps {\vec a_j}{\vec x} = b_j(\vec p)$ for $s_j = 0$).
A description of the chamber can then be obtained by plugging
the parametric vertices into the remaining constraints.
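The kernel $K$ can be computed with exact rational elimination. The short
routine below is our own sketch (not the library's code); applied to the
matrix $\begin{bmatrix}A & I\end{bmatrix}$ of the example that follows, it
recovers the matrix $K$ whose rows are fed to \ai[\tt]{TOPCOM}:

```python
from fractions import Fraction

def right_kernel(M):
    """Basis of { v : M v = 0 } for a rational matrix M, returned as the
    columns of K (so that M K = 0), via Gauss-Jordan elimination."""
    m, n = len(M), len(M[0])
    A = [[Fraction(x) for x in row] for row in M]
    pivots, r = [], 0
    for c in range(n):
        pr = next((i for i in range(r, m) if A[i][c] != 0), None)
        if pr is None:
            continue
        A[r], A[pr] = A[pr], A[r]
        A[r] = [x / A[r][c] for x in A[r]]       # normalize pivot row
        for i in range(m):
            if i != r and A[i][c] != 0:          # eliminate column c elsewhere
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    K = [[Fraction(0)] * len(free) for _ in range(n)]
    for j, fc in enumerate(free):                # one basis vector per free column
        K[fc][j] = Fraction(1)
        for i, pc in enumerate(pivots):
            K[pc][j] = -A[i][fc]
    return K

# [A I] with A as in the example below: rows (1,0,0), (-1,0,1), (-2,1,0).
AI = [[1, 0, 0, 1, 0, 0],
      [-1, 0, 1, 0, 1, 0],
      [-2, 1, 0, 0, 0, 1]]
K = right_kernel(AI)
assert K == [[-1, 0, 0], [-2, 0, -1], [-1, -1, 0],
             [1, 0, 0], [0, 1, 0], [0, 0, 1]]
```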

\begin{example}
Consider the parametric polytope
$$
P(p,q,r) = \{\,
(i,j,k) \mid 0 \le i \le p \wedge
0 \le j \le 2 i + q \wedge
0 \le k \le i - p + r \wedge
p \ge 0 \wedge
q \ge 0 \wedge
r \ge 0
\,\}
.
$$

The constraints involving the variables are
$$
\begin{cases}
\begin{aligned}
\begin{bmatrix}
1 & & \\
& 1 & \\
& & 1
\end{bmatrix}
\begin{bmatrix}
i \\ j \\ k
\end{bmatrix}
& \ge
\begin{bmatrix}
0 \\ 0 \\ 0
\end{bmatrix}
\\
\begin{bmatrix}
1 & 0 & 0
\\
-1 & 0 & 1
\\
-2 & 1 & 0
\end{bmatrix}
\begin{bmatrix}
i \\ j \\ k
\end{bmatrix}
& \le
\begin{bmatrix}
p \\ -p + r \\ q
\end{bmatrix}
\end{aligned}
\end{cases}
$$
We have
$$
\begin{bmatrix}
1 & 0 & 0 & 1 & 0 & 0 \\
-1 & 0 & 1 & 0 & 1 & 0 \\
-2 & 1 & 0 & 0 & 0 & 1 \\
\end{bmatrix}
\begin{bmatrix}
-1 & 0 & 0 \\
-2 & 0 & -1 \\
-1 & -1 & 0 \\
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{bmatrix}
= 0
.
$$
Computing the \ai{regular triangulation}s of the rows of $K$
using \ai[\tt]{TOPCOM}, we obtain
\begin{verbatim}
> cat e2.topcom
[
[ -1 0 0 ]
[ -2 0 -1 ]
[ -1 -1 0 ]
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]
]
> points2triangs --regular < e2.topcom
T[1]:={{0,1,2},{1,2,3},{0,1,4},{1,3,4},{0,2,5},{2,3,5},{0,4,5},{3,4,5}};
T[2]:={{1,2,3},{1,3,4},{2,3,5},{3,4,5},{1,2,5},{1,4,5}};
T[3]:={{1,2,3},{1,3,4},{2,3,5},{3,4,5},{1,2,4},{2,4,5}};
\end{verbatim}

We see that we have three chambers in the decomposition,
one with 8 vertices and two with 6 vertices.
Take the second vertex (``\verb+{1,2,3}+'') of the first chamber.
This vertex corresponds
to the saturation of the constraints $j \ge 0$, $k \ge 0$
and $i \le p$, i.e., $(i,j,k) = (p,0,0)$. Plugging this
vertex into the remaining constraints, we see that it is only valid
if $p \ge 0$, $r \ge 0$ and $2p + q \ge 0$.
For the remaining vertices of the first chamber, we similarly find

\begin{tabular}{ccc}
% e0
\verb+{0,1,2}+ & $(0,0,0)$ & $p \ge 0$, $-p + r \ge 0$ and $q \ge 0$
\\
% 70
\verb+{1,2,3}+ & $(p,0,0)$ & $p \ge 0$, $r \ge 0$ and $2p + q \ge 0$
\\
% c8
\verb+{0,1,4}+ & $(0,0,-p+r)$ & $-p + r \ge 0$, $p \ge 0$ and $q \ge 0$
\\
% 58
\verb+{1,3,4}+ & $(p,0,r)$ & $p \ge 0$, $r \ge 0$ and $2p + q \ge 0$
\\
% a4
\verb+{0,2,5}+ & $(0,q,0)$ & $q \ge 0$, $p \ge 0$ and $-p + r \ge 0$
\\
% 34
\verb+{2,3,5}+ & $(p, 2p+q, 0)$ & $p \ge 0$, $2p + q \ge 0$ and $r \ge 0$
\\
% 8c
\verb+{0,4,5}+ & $(0, q, -p+r)$ & $q \ge 0$, $-p + r \ge 0$ and $p \ge 0$
\\
% 1c
\verb+{3,4,5}+ & $(p, 2p+q, r)$ & $p \ge 0$, $2p + q \ge 0$ and $r \ge 0$
\end{tabular}

Combining these constraints with the initial constraints of the problem
on the parameters
$p \ge 0$, $q \ge 0$ and $r \ge 0$, we find the chamber
$$
\{\,
(p,q,r) \mid p \ge 0 \wedge -p + r \ge 0 \wedge q \ge 0
\,\}
.
$$
For the second chamber, we have

\begin{tabular}{ccc}
% 70
\verb+{1,2,3}+ & $(p,0,0)$ & $p \ge 0$, $r \ge 0$ and $2p + q \ge 0$
\\
% 58
\verb+{1,3,4}+ & $(p,0,r)$ & $p \ge 0$, $r \ge 0$ and $2p + q \ge 0$
\\
% 34
\verb+{2,3,5}+ & $(p, 2p+q, 0)$ & $p \ge 0$, $2p + q \ge 0$ and $r \ge 0$
\\
% 1c
\verb+{3,4,5}+ & $(p, 2p+q, r)$ & $p \ge 0$, $2p + q \ge 0$ and $r \ge 0$
\\
% 64
\verb+{1,2,5}+ & $(-\frac q 2,0,0)$ &
$-q \ge 0$, $2p + q \ge 0$ and $-2p -q+2r \ge 0$
\\
% 4c
\verb+{1,4,5}+ & $(-\frac q 2,0,-p-\frac q 2+r)$ &
$-q \ge 0$, $-2p -q+2r \ge 0$ and $2p + q \ge 0$
\end{tabular}

The chamber is therefore
$$
\{\,
(p,q,r) \mid q = 0 \wedge p \ge 0 \wedge -p +r \ge 0
\,\}
.
$$
Note that after intersection with the initial constraints, this chamber
is no longer full-dimensional and can therefore be discarded.
Finally, for the third chamber, we have

\begin{tabular}{ccc}
% 70
\verb+{1,2,3}+ & $(p,0,0)$ & $p \ge 0$, $r \ge 0$ and $2p + q \ge 0$
\\
% 58
\verb+{1,3,4}+ & $(p,0,r)$ & $p \ge 0$, $r \ge 0$ and $2p + q \ge 0$
\\
% 34
\verb+{2,3,5}+ & $(p, 2p+q, 0)$ & $p \ge 0$, $2p + q \ge 0$ and $r \ge 0$
\\
% 1c
\verb+{3,4,5}+ & $(p, 2p+q, r)$ & $p \ge 0$, $2p + q \ge 0$ and $r \ge 0$
\\
% 68
\verb+{1,2,4}+ & $(p-r,0,0)$ &
$p -r \ge 0$, $r \ge 0$ and $2p +q -2r \ge 0$
\\
% 2c
\verb+{2,4,5}+ & $(p-r,2p+q-2r, 0)$ &
$p -r \ge 0$, $2p +q -2r \ge 0$ and $r \ge 0$
\end{tabular}

The chamber is therefore
$$
\{\,
(p,q,r) \mid p - r \ge 0 \wedge q \ge 0 \wedge r \ge 0
\,\}
.
$$
\end{example}
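The triangulation lists printed by \ai[\tt]{points2triangs} are easy to
consume programmatically. The following small parser is our own
hypothetical helper, not part of the library or of \ai[\tt]{TOPCOM}:

```python
import re

def parse_triangulations(text):
    """Map each T[i] in points2triangs output to its list of simplices,
    where a simplex is a tuple of point indices."""
    triangs = {}
    for name, body in re.findall(r"T\[(\d+)\]:=\{(.*)\};", text):
        simplices = [tuple(map(int, s.split(",")))
                     for s in re.findall(r"\{([^{}]*)\}", body) if s]
        triangs[int(name)] = simplices
    return triangs

out = "T[2]:={{1,2,3},{1,3,4},{2,3,5},{3,4,5},{1,2,5},{1,4,5}};"
assert parse_triangulations(out)[2][0] == (1, 2, 3)
assert len(parse_triangulations(out)[2]) == 6
```

Each parsed simplex then gives the set of variables to set to zero, i.e.,
the set of constraints to saturate, for one parametric vertex.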

Now let us consider general parametric polytopes.
First note that we can follow the same procedure as above
if we replace $\vec x$ by $\vec x' - \vec c(\vec p)$
in \eqref{eq:TOPCOM:polytope}, i.e.,
if our problem has the form
\begin{equation}
\label{eq:TOPCOM:polytope:2}
\begin{cases}
\begin{aligned}
\vec x' &\ge \vec c(\vec p)
\\
A \, \vec x' &\le \vec b(\vec p) + A \vec c(\vec p)
,
\end{aligned}
\end{cases}
\end{equation}
as saturating a constraint $x_i = 0$ is equivalent
to saturating the constraint $x_i' = c_i(\vec p)$
and, similarly, $\sps {\vec a_j}{\vec x} = b_j(\vec p)$
is equivalent to
$\sps {\vec a_j}{\vec x'} = b_j(\vec p) + \sps {\vec a_j}{\vec c(\vec p)}$.

In the general case, the problem has the form
$$
A \vec x \ge \vec b(\vec p)
.
$$
Let $A'$ be a non-singular square submatrix of $A$ with the same number
of columns and compute the (left) \indac{HNF} $H = A' U$ with $U$ unimodular
and $H$ lower-triangular with non-positive elements below the diagonal.
Replacing $\vec x$ by $U \vec x'$, we obtain
$$
\begin{cases}
\begin{aligned}
H \vec x' &\ge \vec b'(\vec p)
\\
-A''U \, \vec x' &\le -\vec b''(\vec p)
,
\end{aligned}
\end{cases}
$$
with $A''$ the remaining rows of $A$ and $\vec b(\vec p)$ split
in the same way.
If $H$ happens to be the identity matrix, then our problem is
of the form \eqref{eq:TOPCOM:polytope:2} and we already know how
to solve it.
Note that, again, saturating any of the transformed constraints
in $\vec x'$ is equivalent to saturating the corresponding constraint
in $\vec x$. We therefore only need to compute $-A'' U$ for the
computation of the kernel $K$. To construct the parametric vertices
in the original coordinate system, we can simply use the original
constraints.
The same reasoning holds if $H$ is any diagonal matrix, since
we can virtually replace $H \vec x'$ by $\vec x'$ without affecting
the non-negativity of the variables.

If $H$ is not diagonal, then we can introduce new constraints
$x'_j \ge d(\vec p)$, where $d(\vec p)$ is some symbolic constant.
These constraints do not remove any solutions
since each row in $H$ expresses that the corresponding variable is
greater than or equal to a non-negative combination of the
previous variables plus some constant.
We can then proceed as before. However, to avoid unnecessary computations,
we may remove from $K$ the rows that correspond to these new constraints.
Any solution saturating such a new constraint would also saturate
the corresponding constraint $\vec h_j^\T$ and all
the constraints corresponding to the non-zero
entries in $\vec h_j^\T$.
If a chamber contains a vertex obtained by saturating such a new
constraint, the vertex would appear multiple times in the same chamber,
each time combined with different constraints from the above set.
Furthermore, there would also be another (as it turns out, identical)
chamber where the vertex is only defined by the other constraints.

\begin{example}
Consider the parametric polytope
$$
P(n) = \{\,
(i,j) \mid
1 \le i \wedge 2 i \le 3 j \wedge j \le n
\,\}
.
$$
The constraints are
$$
\begin{bmatrix}
1 & 0 \\
-2 & 3 \\
0 & -1
\end{bmatrix}
\begin{bmatrix}
i \\ j
\end{bmatrix}
\ge
\begin{bmatrix}
1 \\
0 \\
-n
\end{bmatrix}
.
$$
The top $2 \times 2$ submatrix is already in \indac{HNF}.
We have $3 j \ge 2i \ge 2$, so we can add a constraint
of the form $j \ge c(n)$ and obtain
$$
\begin{bmatrix}
A & I
\end{bmatrix}
=
\begin{bmatrix}
0 & 1 & 1 & 0
\\
2 & -3 & 0 & 1
\end{bmatrix}
,
$$
while $K$ with $\begin{bmatrix}A & I\end{bmatrix} K = 0$ is given
by
$$
\begin{bmatrix}
0 & 1 & 1 & 0
\\
2 & -3 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 \\
0 & 1 \\
0 & -1 \\
-2 & 3
\end{bmatrix}
= 0
.
$$
The second row of $K$ corresponds to the second variable,
which in turn corresponds to the newly added constraint.
Passing all rows of $K$ to \ai[\tt]{TOPCOM} we would get
\begin{verbatim}
> points2triangs --regular <<EOF
> [[1 0],[0,1],[0,-1],[-2,3]]
> EOF
T[1]:={{0,1},{0,2},{1,3},{2,3}};
T[2]:={{0,2},{2,3},{0,3}};
T[3]:={};
\end{verbatim}
The first vertex in the first chamber saturates the second row
(row 1) and therefore also saturates both the first (0) and the
fourth (3), so it appears a second time as \verb+{1,3}+. Combining
these ``two'' vertices into one as \verb+{0,3}+ results in the
second (identical) chamber.
Removing the row corresponding to the new constraint from $K$,
we remove the duplicates:
\begin{verbatim}
> points2triangs --regular <<EOF
> [[1 0],[0,-1],[-2,3]]
> EOF
T[1]:={{0,1},{1,2},{0,2}};
T[2]:={};
\end{verbatim}
Note that in this example, we could also have interchanged
the second and the third constraint and then replaced $j$ by $-j'$.
\end{example}

In practice, this method of computing a \ai{chamber decomposition}
does not seem to perform very well, mostly because
\ai[\tt]{TOPCOM} cannot exploit all available information
about the parametric polytope and will therefore compute
many redundant triangulations/chambers.
In particular, any chamber that does not intersect with
the parameter domain of the parametric polytope, or only
intersects in a face of this parameter domain, is completely redundant.
Furthermore, if the parametric polytope is not simple, then many
different combinations of the constraints will lead to the same parametric
vertex. Many triangulations will therefore correspond to one and the
same chamber in the chamber complex of the parametric polytope.
For example, for a dilated octahedron, \ai[\tt]{TOPCOM} will
compute 150 triangulations/chambers, 104 of which are empty,
while the remaining 46 refer to the same single chamber.