2 Let's try a simple generator:
18 "Falling off the end" stops the generator:
21 Traceback (most recent call last):
22 File "<stdin>", line 1, in ?
23 File "<stdin>", line 2, in g
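(A minimal self-contained sketch of the same behavior, with illustrative names not taken from the surrounding doctest:)

>>> def g():
...     yield 1
>>> it = g()
>>> it.next()
1
>>> it.next()
Traceback (most recent call last):
  ...
StopIteration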
26 "return" also stops the generator:
31 ... yield 2 # never reached
37 Traceback (most recent call last):
38 File "<stdin>", line 1, in ?
39 File "<stdin>", line 3, in f
41 >>> g.next() # once stopped, can't be resumed
42 Traceback (most recent call last):
43 File "<stdin>", line 1, in ?
46 "raise StopIteration" stops the generator too:
50 ... raise StopIteration
51 ... yield 2 # never reached
57 Traceback (most recent call last):
58 File "<stdin>", line 1, in ?
61 Traceback (most recent call last):
62 File "<stdin>", line 1, in ?
65 However, they are not exactly equivalent:
78 ... raise StopIteration
84 This may be surprising at first:
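(The example meant here is elided; a sketch reconstructed for illustration. A "return" inside try/finally still runs the finally suite, so the yield in the finally clause produces a value before the generator stops:)

>>> def g3():
...     try:
...         return
...     finally:
...         yield 1
>>> list(g3())
[1]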
95 Let's create an alternate range() function implemented as a generator:
98 ... for i in range(n):
104 Generators always return to the most recent caller:
108 ... print "creator", r.next()
114 ... print "caller", i
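(Most of this example is elided; a self-contained sketch, assuming the yrange() generator defined above:)

>>> def creator():
...     r = yrange(5)
...     print "creator", r.next()
...     return r
>>> def caller():
...     r = creator()
...     for i in r:
...         print "caller", i
>>> caller()
creator 0
caller 1
caller 2
caller 3
caller 4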
123 Generators can call other generators:
126 ... for i in yrange(n):
134 # The examples from PEP 255.
140 Restriction: A generator cannot be resumed while it is actively running:
148 Traceback (most recent call last):
150 File "<string>", line 2, in g
151 ValueError: generator already executing
153 Specification: Return
155 Note that return isn't always equivalent to raising StopIteration: the
156 difference lies in how enclosing try/except constructs are treated.
167 because, as in any function, return simply exits, but
171 ... raise StopIteration
177 because StopIteration is captured by a bare "except", as is any exception.
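(The two functions discussed in the sentences above are elided; a sketch reconstructed for illustration:)

>>> def f1():
...     try:
...         return
...     except:
...         yield 1
>>> print list(f1())
[]

>>> def f2():
...     try:
...         raise StopIteration
...     except:
...         yield 42
>>> print list(f2())
[42]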
180 Specification: Generators and Exception Propagation
185 ... yield f() # the zero division exception propagates
186 ... yield 42 # and we'll never get here
189 Traceback (most recent call last):
190 File "<stdin>", line 1, in ?
191 File "<stdin>", line 2, in g
192 File "<stdin>", line 2, in f
193 ZeroDivisionError: integer division or modulo by zero
194 >>> k.next() # and the generator cannot be resumed
195 Traceback (most recent call last):
196 File "<stdin>", line 1, in ?
200 Specification: Try/Except/Finally
208 ... yield 3 # never get here
209 ... except ZeroDivisionError:
215 ... yield 7 # the "raise" above stops this
225 [1, 2, 4, 5, 8, 9, 10, 11]
228 Guido's binary tree example.
230 >>> # A binary tree class.
233 ... def __init__(self, label, left=None, right=None):
234 ... self.label = label
236 ... self.right = right
238 ... def __repr__(self, level=0, indent=" "):
239 ... s = level*indent + repr(self.label)
241 ... s = s + "\\n" + self.left.__repr__(level+1, indent)
243 ... s = s + "\\n" + self.right.__repr__(level+1, indent)
246 ... def __iter__(self):
247 ... return inorder(self)
249 >>> # Create a Tree from a list.
255 ... return Tree(list[i], tree(list[:i]), tree(list[i+1:]))
257 >>> # Show it off: create a tree.
258 >>> t = tree("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
260 >>> # A recursive generator that generates Tree labels in in-order.
263 ... for x in inorder(t.left):
266 ... for x in inorder(t.right):
269 >>> # Show it off: create a tree.
270 >>> t = tree("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
271 >>> # Print the nodes of the tree in in-order.
274 A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
276 >>> # A non-recursive generator.
277 >>> def inorder(node):
281 ... stack.append(node)
284 ... while not node.right:
286 ... node = stack.pop()
287 ... except IndexError:
290 ... node = node.right
292 >>> # Exercise the non-recursive generator.
295 A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
299 # Examples from Iterator-List and Python-Dev and c.l.py.
303 The difference between yielding None and returning it.
306 ... for i in range(3):
311 [None, None, None, None]
313 Ensure that explicitly raising StopIteration acts like any other exception
314 in try/except, not like a return.
319 ... raise StopIteration
326 Next one was posted to c.l.py.
329 ... "Generate all combinations of k elements from list x."
336 ... first, rest = x[0], x[1:]
337 ... # A combination does or doesn't contain first.
338 ... # If it does, the remainder is a k-1 comb of rest.
339 ... for c in gcomb(rest, k-1):
340 ... c.insert(0, first)
342 ... # If it doesn't contain first, it's a k comb of rest.
343 ... for c in gcomb(rest, k):
346 >>> seq = range(1, 5)
347 >>> for k in range(len(seq) + 2):
348 ... print "%d-combs of %s:" % (k, seq)
349 ... for c in gcomb(seq, k):
351 0-combs of [1, 2, 3, 4]:
353 1-combs of [1, 2, 3, 4]:
358 2-combs of [1, 2, 3, 4]:
365 3-combs of [1, 2, 3, 4]:
370 4-combs of [1, 2, 3, 4]:
372 5-combs of [1, 2, 3, 4]:
374 From the Iterators list, about the types of these things.
384 >>> [s for s in dir(i) if not s.startswith('_')]
385 ['close', 'gi_code', 'gi_frame', 'gi_running', 'next', 'send', 'throw']
386 >>> print i.next.__doc__
387 x.next() -> the next value, or raise StopIteration
391 >>> isinstance(i, types.GeneratorType)
394 And more, added later.
400 >>> i.gi_running = 42
401 Traceback (most recent call last):
403 TypeError: readonly attribute
405 ... yield me.gi_running
414 A clever union-find implementation from c.l.py, due to David Eppstein.
415 Sent: Friday, June 29, 2001 12:16 PM
416 To: python-list@python.org
417 Subject: Re: PEP 255: Simple Generators
419 >>> class disjointSet:
420 ... def __init__(self, name):
422 ... self.parent = None
423 ... self.generator = self.generate()
425 ... def generate(self):
426 ... while not self.parent:
428 ... for x in self.parent.generator:
432 ... return self.generator.next()
434 ... def union(self, parent):
436 ... raise ValueError("Sorry, I'm not a root!")
437 ... self.parent = parent
439 ... def __str__(self):
442 >>> names = "ABCDEFGHIJKLM"
443 >>> sets = [disjointSet(name) for name in names]
447 >>> gen = random.WichmannHill(42)
450 ... print "%s->%s" % (s, s.find()),
452 ... if len(roots) > 1:
453 ... s1 = gen.choice(roots)
455 ... s2 = gen.choice(roots)
457 ... print "merged", s1, "into", s2
460 A->A B->B C->C D->D E->E F->F G->G H->H I->I J->J K->K L->L M->M
462 A->A B->B C->C D->G E->E F->F G->G H->H I->I J->J K->K L->L M->M
464 A->A B->B C->F D->G E->E F->F G->G H->H I->I J->J K->K L->L M->M
466 A->A B->B C->F D->G E->E F->F G->G H->H I->I J->J K->K L->A M->M
468 A->A B->B C->F D->G E->E F->F G->G H->E I->I J->J K->K L->A M->M
470 A->A B->E C->F D->G E->E F->F G->G H->E I->I J->J K->K L->A M->M
472 A->A B->E C->F D->G E->E F->F G->G H->E I->I J->G K->K L->A M->M
474 A->A B->G C->F D->G E->G F->F G->G H->G I->I J->G K->K L->A M->M
476 A->A B->G C->F D->G E->G F->F G->G H->G I->I J->G K->K L->A M->G
478 A->A B->G C->F D->G E->G F->F G->G H->G I->K J->G K->K L->A M->G
480 A->A B->G C->F D->G E->G F->F G->G H->G I->A J->G K->A L->A M->G
482 A->A B->G C->A D->G E->G F->A G->G H->G I->A J->G K->A L->A M->G
484 A->G B->G C->G D->G E->G F->G G->G H->G I->G J->G K->G L->G M->G
489 # Fun tests (for sufficiently warped notions of "fun").
493 Build up to a recursive Sieve of Eratosthenes generator.
495 >>> def firstn(g, n):
496 ... return [g.next() for i in range(n)]
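(intsfrom() is elided here; a minimal sketch consistent with its use below, an unbounded counter starting at i:)

>>> def intsfrom(i):
...     while 1:
...         yield i
...         i += 1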
503 >>> firstn(intsfrom(5), 7)
504 [5, 6, 7, 8, 9, 10, 11]
506 >>> def exclude_multiples(n, ints):
511 >>> firstn(exclude_multiples(3, intsfrom(1)), 6)
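(The body and expected output are elided; a sketch for illustration, passing through only the integers not divisible by n:)

>>> def exclude_multiples(n, ints):
...     for i in ints:
...         if i % n:
...             yield i
>>> firstn(exclude_multiples(3, intsfrom(1)), 6)
[1, 2, 4, 5, 7, 8]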
515 ... prime = ints.next()
517 ... not_divisible_by_prime = exclude_multiples(prime, ints)
518 ... for p in sieve(not_divisible_by_prime):
521 >>> primes = sieve(intsfrom(2))
522 >>> firstn(primes, 20)
523 [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71]
526 Another famous problem: generate all integers of the form
528 in increasing order, where i,j,k >= 0. Trickier than it may look at first!
529 Try writing it without generators, and correctly, and without generating
530 3 internal results for each result output.
535 >>> firstn(times(10, intsfrom(1)), 10)
536 [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
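(merge() is used below but elided here; a sketch of merging two increasing streams while dropping duplicates, consistent with its use in m235:)

>>> def merge(g, h):
...     ng = g.next()
...     nh = h.next()
...     while 1:
...         if ng < nh:
...             yield ng
...             ng = g.next()
...         elif ng > nh:
...             yield nh
...             nh = h.next()
...         else:
...             yield ng
...             ng = g.next()
...             nh = h.next()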
553 The following works, but is doing a whale of a lot of redundant work --
554 it's not clear how to get the internal uses of m235 to share a single
555 generator. Note that me_times2 (etc) each need to see every element in the
556 result sequence. So this is an example where lazy lists are more natural
557 (you can look at the head of a lazy list any number of times).
561 ... me_times2 = times(2, m235())
562 ... me_times3 = times(3, m235())
563 ... me_times5 = times(5, m235())
564 ... for i in merge(merge(me_times2,
569 Don't print "too many" of these -- the implementation above is extremely
570 inefficient: each call of m235() leads to 3 recursive calls, and in
571 turn each of those 3 more, and so on, and so on, until we've descended
572 enough levels to satisfy the print statements. Very odd: when I printed 5
573 lines of results below, this managed to screw up Win98's malloc in "the
574 usual" way, i.e. the heap grew over 4Mb so Win98 started fragmenting
575 address space, and it *looked* like a very slow leak.
578 >>> for i in range(3):
579 ... print firstn(result, 15)
580 [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24]
581 [25, 27, 30, 32, 36, 40, 45, 48, 50, 54, 60, 64, 72, 75, 80]
582 [81, 90, 96, 100, 108, 120, 125, 128, 135, 144, 150, 160, 162, 180, 192]
584 Heh. Here's one way to get a shared list, complete with an excruciating
585 namespace renaming trick. The *pretty* part is that the times() and merge()
586 functions can be reused as-is, because they only assume their stream
587 arguments are iterable -- a LazyList is the same as a generator to times().
590 ... def __init__(self, g):
592 ... self.fetch = g.next
594 ... def __getitem__(self, i):
595 ... sofar, fetch = self.sofar, self.fetch
596 ... while i >= len(sofar):
597 ... sofar.append(fetch())
602 ... # Gack: m235 below actually refers to a LazyList.
603 ... me_times2 = times(2, m235)
604 ... me_times3 = times(3, m235)
605 ... me_times5 = times(5, m235)
606 ... for i in merge(merge(me_times2,
611 Print as many of these as you like -- *this* implementation is memory-efficient.
614 >>> m235 = LazyList(m235())
615 >>> for i in range(5):
616 ... print [m235[j] for j in range(15*i, 15*(i+1))]
617 [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24]
618 [25, 27, 30, 32, 36, 40, 45, 48, 50, 54, 60, 64, 72, 75, 80]
619 [81, 90, 96, 100, 108, 120, 125, 128, 135, 144, 150, 160, 162, 180, 192]
620 [200, 216, 225, 240, 243, 250, 256, 270, 288, 300, 320, 324, 360, 375, 384]
621 [400, 405, 432, 450, 480, 486, 500, 512, 540, 576, 600, 625, 640, 648, 675]
623 Ye olde Fibonacci generator, LazyList style.
625 >>> def fibgen(a, b):
629 ... yield g.next() + h.next()
632 ... g.next() # throw first away
638 ... for s in sum(iter(fib),
639 ... tail(iter(fib))):
642 >>> fib = LazyList(fibgen(1, 2))
643 >>> firstn(iter(fib), 17)
644 [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584]
647 Running after your tail with itertools.tee (new in version 2.4)
649 The algorithms "m235" (Hamming) and Fibonacci presented above are both
650 examples of a whole family of FP (functional programming) algorithms
651 where a function produces and returns a list, while the production algorithm
652 assumes that list has already been produced, obtaining it by calling itself recursively.
653 For these algorithms to work, they must:
655 - produce at least a first element without presupposing the existence of the rest of the list
657 - produce their elements in a lazy manner
659 To work efficiently, the beginning of the list must not be recomputed over
660 and over again. This is ensured in most FP languages as a built-in feature.
661 In Python, we have to explicitly maintain a list of already computed results
662 and abandon genuine recursion.
664 This is what was attempted above with the LazyList class. One problem
665 with that class is that it keeps a list of all of the generated results and
666 therefore continually grows. This partially defeats the goal of the generator
667 concept, viz. produce the results only as needed instead of producing them
668 all and thereby wasting memory.
670 Thanks to itertools.tee, it is now clear "how to get the internal uses of
671 m235 to share a single generator".
673 >>> from itertools import tee
677 ... for n in merge(times(2, m2),
678 ... merge(times(3, m3),
682 ... m2, m3, m5, mRes = tee(m1, 4)
686 >>> for i in range(5):
687 ... print firstn(it, 15)
688 [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24]
689 [25, 27, 30, 32, 36, 40, 45, 48, 50, 54, 60, 64, 72, 75, 80]
690 [81, 90, 96, 100, 108, 120, 125, 128, 135, 144, 150, 160, 162, 180, 192]
691 [200, 216, 225, 240, 243, 250, 256, 270, 288, 300, 320, 324, 360, 375, 384]
692 [400, 405, 432, 450, 480, 486, 500, 512, 540, 576, 600, 625, 640, 648, 675]
694 The "tee" function does just what we want. It internally keeps a generated
695 result for as long as it has not been "consumed" from all of the duplicated
696 iterators, whereupon it is deleted. You can therefore print the hamming
697 sequence during hours without increasing memory usage, or very little.
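(A tiny demonstration of that buffering, for illustration:)

>>> from itertools import tee
>>> a, b = tee(iter(range(4)))
>>> a.next(), a.next()
(0, 1)
>>> b.next()    # tee kept 0 buffered until b consumed it
0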
699 The beauty of it is that recursive running-after-their-tail FP algorithms
700 are quite straightforwardly expressed with this Python idiom.
702 Ye olde Fibonacci generator, tee style.
708 ... yield g.next() + h.next()
713 ... fibTail.next() # throw first away
714 ... for res in _isum(fibHead, fibTail):
718 ... fibHead, fibTail, fibRes = tee(realfib, 3)
721 >>> firstn(fib(), 17)
722 [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584]
726 # syntax_tests mostly provokes SyntaxErrors. Also fiddling with #if 0 hackery.
734 Traceback (most recent call last):
736 SyntaxError: 'return' with argument inside generator (<doctest test.test_generators.__test__.syntax[0]>, line 3)
741 Traceback (most recent call last):
743 SyntaxError: 'return' with argument inside generator (<doctest test.test_generators.__test__.syntax[1]>, line 3)
745 "return None" is not the same as "return" in a generator:
750 Traceback (most recent call last):
752 SyntaxError: 'return' with argument inside generator (<doctest test.test_generators.__test__.syntax[2]>, line 3)
770 ... except ZeroDivisionError:
782 ... except ZeroDivisionError:
827 ... except SyntaxError:
833 ... yield 2 # don't blink
854 ... def __init__(self):
872 ... lambda x: x # shouldn't trigger here
875 ... return 2*i # or here
877 ... return 3 # but *this* sucks (line 8)
879 ... yield 2 # because it's a generator (line 10)
880 Traceback (most recent call last):
881 SyntaxError: 'return' with argument inside generator (<doctest test.test_generators.__test__.syntax[24]>, line 10)
883 This one caused a crash (see SF bug 567538):
886 ... for i in range(3):
900 Traceback (most recent call last):
904 Test the gi_code attribute
910 >>> g.gi_code is f.func_code
915 Traceback (most recent call last):
917 >>> g.gi_code is f.func_code
921 Test the __name__ attribute and the repr()
929 >>> repr(g) # doctest: +ELLIPSIS
930 '<generator object f at ...>'
932 Lambdas shouldn't have their usual return behavior.
934 >>> x = lambda: (yield 1)
938 >>> x = lambda: ((yield 1), (yield 2))
943 # conjoin is a simple backtracking generator, named in honor of Icon's
944 # "conjunction" control structure. Pass a list of no-argument functions
945 # that return iterable objects. Easiest to explain by example: assume the
946 # function list [x, y, z] is passed. Then conjoin acts like:
949 # values = [None] * 3
950 # for values[0] in x():
951 # for values[1] in y():
952 # for values[2] in z():
955 # So some 3-lists of values *may* be generated, each time we successfully
956 # get into the innermost loop. If an iterator fails (is exhausted) before
957 # then, it "backtracks" to get the next value from the nearest enclosing
958 # iterator (the one "to the left"), and starts all over again at the next
959 # slot (pumps a fresh iterator). Of course this is most useful when the
960 # iterators have side-effects, so that which values *can* be generated at
961 # each slot depend on the values iterated at previous slots.
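# An illustrative sanity check (not part of the original tests): two
# two-valued iterables enumerate the full cross-product, conjoin rebinding
# the slots of one shared list as it goes:
#
#     for values in simple_conjoin([lambda: iter((0, 1))] * 2):
#         print values    # [0, 0] then [0, 1] then [1, 0] then [1, 1]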
963 def simple_conjoin(gs):
965 values = [None] * len(gs)
971 for values[i] in gs[i]():
978 # That works fine, but recursing a level and checking i against len(gs) for
979 # each item produced is inefficient. By doing manual loop unrolling across
980 # generator boundaries, it's possible to eliminate most of that overhead.
981 # This isn't worth the bother *in general* for generators, but conjoin() is
982 # a core building block for some CPU-intensive generator applications.
989 # Do one loop nest at a time recursively, until the # of loop nests
990 # remaining is divisible by 3.
998 for values[i] in gs[i]():
1006 # Do three loop nests at a time, recursing only if at least three more
1007 # remain. Don't call directly: this is an internal optimization for use by conjoin.
1011 assert i < n and (n-i) % 3 == 0
1012 ip1, ip2, ip3 = i+1, i+2, i+3
1013 g, g1, g2 = gs[i : ip3]
1016 # These are the last three, so we can yield values directly.
1017 for values[i] in g():
1018 for values[ip1] in g1():
1019 for values[ip2] in g2():
1023 # At least 6 loop nests remain; peel off 3 and recurse for the rest.
1025 for values[i] in g():
1026 for values[ip1] in g1():
1027 for values[ip2] in g2():
1028 for x in _gen3(ip3):
1034 # And one more approach: For backtracking apps like the Knight's Tour
1035 # solver below, the number of backtracking levels can be enormous (one
1036 # level per square, for the Knight's Tour, so that e.g. a 100x100 board
1037 # needs 10,000 levels). In such cases Python is likely to run out of
1038 # stack space due to recursion. So here's a recursion-free version of
1040 # NOTE WELL: This allows large problems to be solved with only trivial
1041 # demands on stack space. Without explicitly resumable generators, this is
1042 # much harder to achieve. OTOH, this is much slower (up to a factor of 2)
1043 # than the fancy unrolled recursive conjoin.
1045 def flat_conjoin(gs):  # rename to conjoin to run tests with this instead
1049 _StopIteration = StopIteration  # make local because caught a *lot*
1055 it = iters[i] = gs[i]().next
1058 except _StopIteration:
1064 # Backtrack until an older iterator can be resumed.
1068 values[i] = iters[i]()
1069 # Success! Start fresh at next level.
1072 except _StopIteration:
1073 # Continue backtracking.
1079 # A conjoin-based N-Queens solver.
1082 def __init__(self, n):
1086 # Assign a unique int to each column and diagonal.
1087 # columns: n of those, range(n).
1088 # NW-SE diagonals: 2n-1 of these, i-j unique and invariant along
1089 # each, smallest i-j is 0-(n-1) = 1-n, so add n-1 to shift to 0-based.
1091 # NE-SW diagonals: 2n-1 of these, i+j unique and invariant along
1092 # each, smallest i+j is 0, largest is 2n-2.
1094 # For each square, compute a bit vector of the columns and
1095 # diagonals it covers, and for each row compute a function that
1096 # generates the possibilities for the columns in that row.
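# An illustrative aside (not in the original file): with n=8, the square
# (i=0, j=0) sets column bit 0, NW-SE bit 8 + (0-0) + 7 = 15, and NE-SW
# bit 8 + 15 + (0+0) = 23, so its coverage packs into the long integer
# (1L << 0) | (1L << 15) | (1L << 23), and "is this square attacked?"
# reduces to a single bitwise AND against self.used.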
1097 self.rowgenerators = []
1099 rowuses = [(1L << j) |               # column ordinal
1100 (1L << (n + i-j + n-1)) |            # NW-SE ordinal
1101 (1L << (n + 2*n-1 + i+j))            # NE-SW ordinal
1104 def rowgen(rowuses=rowuses):
1107 if uses & self.used == 0:
1112 self.rowgenerators.append(rowgen)
1114 # Generate solutions.
1117 for row2col in conjoin(self.rowgenerators):
1120 def printsolution(self, row2col):
1122 assert n == len(row2col)
1123 sep = "+" + "-+" * n
1126 squares = [" " for j in range(n)]
1127 squares[row2col[i]] = "Q"
1128 print "|" + "|".join(squares) + "|"
1131 # A conjoin-based Knight's Tour solver. This is pretty sophisticated
1132 # (e.g., when used with flat_conjoin above, and passing hard=1 to the
1133 # constructor, a 200x200 Knight's Tour was found quickly -- note that we're
1134 # creating 10s of thousands of generators then!), and is lengthy.
1137 def __init__(self, m, n, hard=0):
1138 self.m, self.n = m, n
1140 # solve() will set up succs[i] to be a list of square #i's successors.
1142 succs = self.succs = []
1144 # Remove i0 from each of its successor's successor lists, i.e.
1145 # successors can't go back to i0 again. Return 0 if we can
1146 # detect this makes a solution impossible, else return 1.
1148 def remove_from_successors(i0, len=len):
1149 # If we remove all exits from a free square, we're dead:
1150 # even if we move to it next, we can't leave it again.
1151 # If we create a square with one exit, we must visit it next;
1152 # else somebody else will have to visit it, and since there's
1153 # only one adjacent, there won't be a way to leave it again.
1154 # Finally, if we create more than one free square with a
1155 # single exit, we can only move to one of them next, leaving
1156 # the other one a dead end.
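# A sketch of the elided counting this comment describes (reconstructed for
# illustration; ne0 and ne1 are the names used in the return below): remove
# i0 from each successor list and tally how many lists go empty (ne0) or
# drop to a single exit (ne1).
#
#     ne0 = ne1 = 0
#     for i in succs[i0]:
#         s = succs[i]
#         s.remove(i0)
#         e = len(s)
#         if e == 0:
#             ne0 += 1
#         elif e == 1:
#             ne1 += 1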
1166 return ne0 == 0 and ne1 < 2
1168 # Put i0 back in each of its successor's successor lists.
1170 def add_to_successors(i0):
1174 # Generate the first move.
1179 # Since we're looking for a cycle, it doesn't matter where we
1180 # start. Starting in a corner makes the 2nd move easy.
1181 corner = self.coords2index(0, 0)
1182 remove_from_successors(corner)
1183 self.lastij = corner
1185 add_to_successors(corner)
1187 # Generate the second moves.
1189 corner = self.coords2index(0, 0)
1190 assert self.lastij == corner  # i.e., we started in the corner
1193 assert len(succs[corner]) == 2
1194 assert self.coords2index(1, 2) in succs[corner]
1195 assert self.coords2index(2, 1) in succs[corner]
1196 # Only two choices. Whichever we pick, the other must be the
1197 # square picked on move m*n, as it's the only way to get back
1198 # to (0, 0). Save its index in self.final so that moves before
1199 # the last know it must be kept free.
1200 for i, j in (1, 2), (2, 1):
1201 this = self.coords2index(i, j)
1202 final = self.coords2index(3-i, 3-j)
1205 remove_from_successors(this)
1206 succs[final].append(corner)
1209 succs[final].remove(corner)
1210 add_to_successors(this)
1212 # Generate moves 3 thru m*n-1.
1213 def advance(len=len):
1214 # If some successor has only one exit, must take it.
1215 # Else favor successors with fewer exits.
1217 for i in succs[self.lastij]:
1219 assert e > 0, "else remove_from_successors() pruning flawed"
1221 candidates = [(e, i)]
1223 candidates.append((e, i))
1227 for e, i in candidates:
1229 if remove_from_successors(i):
1232 add_to_successors(i)
1234 # Generate moves 3 thru m*n-1. Alternative version using a
1235 # stronger (but more expensive) heuristic to order successors.
1236 # Since the # of backtracking levels is m*n, a poor move early on
1237 # can take eons to undo. Smallest square board for which this
1238 # matters a lot is 52x52.
1239 def advance_hard(vmid=(m-1)/2.0, hmid=(n-1)/2.0, len=len):
1240 # If some successor has only one exit, must take it.
1241 # Else favor successors with fewer exits.
1242 # Break ties via max distance from board centerpoint (favor
1243 # corners and edges whenever possible).
1245 for i in succs[self.lastij]:
1247 assert e > 0, "else remove_from_successors() pruning flawed"
1249 candidates = [(e, 0, i)]
1251 i1, j1 = self.index2coords(i)
1252 d = (i1 - vmid)**2 + (j1 - hmid)**2
1253 candidates.append((e, -d, i))
1257 for e, d, i in candidates:
1259 if remove_from_successors(i):
1262 add_to_successors(i)
1264 # Generate the last move.
1266 assert self.final in succs[self.lastij]
1270 self.squaregenerators = [first]
1272 self.squaregenerators = [first, second] + \
1273 [hard and advance_hard or advance] * (m*n - 3) + \
1276 def coords2index(self, i, j):
1277 assert 0 <= i < self.m
1278 assert 0 <= j < self.n
1279 return i * self.n + j
1281 def index2coords(self, index):
1282 assert 0 <= index < self.m * self.n
1283 return divmod(index, self.n)
1285 def _init_board(self):
1288 m, n = self.m, self.n
1289 c2i = self.coords2index
1291 offsets = [( 1, 2), ( 2, 1), ( 2, -1), ( 1, -2),
1292 (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
1296 s = [c2i(i+io, j+jo) for io, jo in offsets
1297 if 0 <= i+io < m and 0 <= j+jo < n]
1304 for x in conjoin(self.squaregenerators):
1307 def printsolution(self, x):
1308 m, n = self.m, self.n
1309 assert len(x) == m*n
1311 format = "%" + str(w) + "d"
1313 squares = [[None] * n for i in range(m)]
1316 i1, j1 = self.index2coords(i)
1317 squares[i1][j1] = format % k
1320 sep = "+" + ("-" * w + "+") * n
1324 print "|" + "|".join(row) + "|"
1329 Generate the 3-bit binary numbers in order. This illustrates dumbest-
1330 possible use of conjoin, just to generate the full cross-product.
1332 >>> for c in conjoin([lambda: iter((0, 1))] * 3):
1343 For efficiency in typical backtracking apps, conjoin() yields the same list
1344 object each time. So if you want to save away a full account of its
1345 generated sequence, you need to copy its results.
1347 >>> def gencopy(iterator):
1348 ... for x in iterator:
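(The body is elided above; the natural completion copies each yielded list, sketched here for illustration:)

>>> def gencopy(iterator):
...     for x in iterator:
...         yield x[:]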
1351 >>> for n in range(10):
1352 ... all = list(gencopy(conjoin([lambda: iter((0, 1))] * n)))
1353 ... print n, len(all), all[0] == [0] * n, all[-1] == [1] * n
1365 And run an 8-queens solver.
1370 >>> for row2col in q.solve():
1372 ... if count <= LIMIT:
1373 ... print "Solution", count
1374 ... q.printsolution(row2col)
1412 >>> print count, "solutions in all."
1413 92 solutions in all.
1415 And run a Knight's Tour on a 10x10 board. Note that there are about
1416 20,000 solutions even on a 6x6 board, so don't dare run this to exhaustion.
1418 >>> k = Knights(10, 10)
1421 >>> for x in k.solve():
1423 ... if count <= LIMIT:
1424 ... print "Solution", count
1425 ... k.printsolution(x)
1429 +---+---+---+---+---+---+---+---+---+---+
1430 | 1| 58| 27| 34| 3| 40| 29| 10| 5| 8|
1431 +---+---+---+---+---+---+---+---+---+---+
1432 | 26| 35| 2| 57| 28| 33| 4| 7| 30| 11|
1433 +---+---+---+---+---+---+---+---+---+---+
1434 | 59|100| 73| 36| 41| 56| 39| 32| 9| 6|
1435 +---+---+---+---+---+---+---+---+---+---+
1436 | 74| 25| 60| 55| 72| 37| 42| 49| 12| 31|
1437 +---+---+---+---+---+---+---+---+---+---+
1438 | 61| 86| 99| 76| 63| 52| 47| 38| 43| 50|
1439 +---+---+---+---+---+---+---+---+---+---+
1440 | 24| 75| 62| 85| 54| 71| 64| 51| 48| 13|
1441 +---+---+---+---+---+---+---+---+---+---+
1442 | 87| 98| 91| 80| 77| 84| 53| 46| 65| 44|
1443 +---+---+---+---+---+---+---+---+---+---+
1444 | 90| 23| 88| 95| 70| 79| 68| 83| 14| 17|
1445 +---+---+---+---+---+---+---+---+---+---+
1446 | 97| 92| 21| 78| 81| 94| 19| 16| 45| 66|
1447 +---+---+---+---+---+---+---+---+---+---+
1448 | 22| 89| 96| 93| 20| 69| 82| 67| 18| 15|
1449 +---+---+---+---+---+---+---+---+---+---+
1451 +---+---+---+---+---+---+---+---+---+---+
1452 | 1| 58| 27| 34| 3| 40| 29| 10| 5| 8|
1453 +---+---+---+---+---+---+---+---+---+---+
1454 | 26| 35| 2| 57| 28| 33| 4| 7| 30| 11|
1455 +---+---+---+---+---+---+---+---+---+---+
1456 | 59|100| 73| 36| 41| 56| 39| 32| 9| 6|
1457 +---+---+---+---+---+---+---+---+---+---+
1458 | 74| 25| 60| 55| 72| 37| 42| 49| 12| 31|
1459 +---+---+---+---+---+---+---+---+---+---+
1460 | 61| 86| 99| 76| 63| 52| 47| 38| 43| 50|
1461 +---+---+---+---+---+---+---+---+---+---+
1462 | 24| 75| 62| 85| 54| 71| 64| 51| 48| 13|
1463 +---+---+---+---+---+---+---+---+---+---+
1464 | 87| 98| 89| 80| 77| 84| 53| 46| 65| 44|
1465 +---+---+---+---+---+---+---+---+---+---+
1466 | 90| 23| 92| 95| 70| 79| 68| 83| 14| 17|
1467 +---+---+---+---+---+---+---+---+---+---+
1468 | 97| 88| 21| 78| 81| 94| 19| 16| 45| 66|
1469 +---+---+---+---+---+---+---+---+---+---+
1470 | 22| 91| 96| 93| 20| 69| 82| 67| 18| 15|
1471 +---+---+---+---+---+---+---+---+---+---+
1474 weakref_tests = """\
1475 Generators are weakly referenceable:
1481 >>> wr = weakref.ref(gen)
1484 >>> p = weakref.proxy(gen)
1486 Generator-iterators are weakly referenceable as well:
1489 >>> wr = weakref.ref(gi)
1492 >>> p = weakref.proxy(gi)
1498 coroutine_tests = """\
1499 Sending a value into a started generator:
1511 Sending a value into a new generator produces a TypeError:
1514 Traceback (most recent call last):
1516 TypeError: can't send non-None value to a just-started generator
1519 Yield by itself yields None:
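(The elided example, sketched:)

>>> def f(): x = yield
>>> list(f())
[None]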
1527 An obscene abuse of a yield expression within a generator expression:
1529 >>> list((yield 21) for i in range(4))
1530 [21, None, 21, None, 21, None, 21, None]
1532 And a more sane, but still weird usage:
1534 >>> def f(): list(i for i in [(yield 26)])
1539 A yield expression with augmented assignment.
1541 >>> def coroutine(seq):
1543 ... while count < 200:
1545 ... seq.append(count)
1547 >>> c = coroutine(seq)
1562 Check some syntax errors for yield expressions:
1564 >>> f=lambda: (yield 1),(yield 2)
1565 Traceback (most recent call last):
1567 File "<doctest test.test_generators.__test__.coroutine[21]>", line 1
1568 SyntaxError: 'yield' outside function
1570 >>> def f(): return lambda x=(yield): 1
1571 Traceback (most recent call last):
1573 SyntaxError: 'return' with argument inside generator (<doctest test.test_generators.__test__.coroutine[22]>, line 1)
1575 >>> def f(): x = yield = y
1576 Traceback (most recent call last):
1578 File "<doctest test.test_generators.__test__.coroutine[23]>", line 1
1579 SyntaxError: assignment to yield expression not possible
1581 >>> def f(): (yield bar) = y
1582 Traceback (most recent call last):
1584 File "<doctest test.test_generators.__test__.coroutine[24]>", line 1
1585 SyntaxError: can't assign to yield expression
1587 >>> def f(): (yield bar) += y
1588 Traceback (most recent call last):
1590 File "<doctest test.test_generators.__test__.coroutine[25]>", line 1
1591 SyntaxError: can't assign to yield expression
1594 Now check some throw() conditions:
1600 ... except ValueError,v:
1601 ... print "caught ValueError (%s)" % (v),
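(The generator under test is mostly elided above; putting the fragments into one complete sketch, consistent with the throw() outputs that follow:)

>>> def f():
...     while True:
...         try:
...             x = yield
...         except ValueError, v:
...             print "caught ValueError (%s)" % (v),
>>> g = f()
>>> g.next()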
1606 >>> g.throw(ValueError) # type only
1607 caught ValueError ()
1609 >>> g.throw(ValueError("xyz")) # value only
1610 caught ValueError (xyz)
1612 >>> g.throw(ValueError, ValueError(1)) # value+matching type
1613 caught ValueError (1)
1615 >>> g.throw(ValueError, TypeError(1)) # mismatched type, rewrapped
1616 caught ValueError (1)
1618 >>> g.throw(ValueError, ValueError(1), None) # explicit None traceback
1619 caught ValueError (1)
1621 >>> g.throw(ValueError(1), "foo") # bad args
1622 Traceback (most recent call last):
1624 TypeError: instance exception may not have a separate value
1626 >>> g.throw(ValueError, "foo", 23) # bad args
1627 Traceback (most recent call last):
1629 TypeError: throw() third argument must be a traceback object
1631 >>> def throw(g,exc):
1635 ... g.throw(*sys.exc_info())
1636 >>> throw(g,ValueError) # do it with traceback included
1637 caught ValueError ()
1642 >>> throw(g,TypeError) # terminate the generator
1643 Traceback (most recent call last):
1647 >>> print g.gi_frame
1651 Traceback (most recent call last):
1655 >>> g.throw(ValueError,6) # throw on closed generator
1656 Traceback (most recent call last):
1660 >>> f().throw(ValueError,7) # throw on just-opened generator
1661 Traceback (most recent call last):
1665 >>> f().throw("abc") # throw on just-opened generator
1666 Traceback (most recent call last):
1668 TypeError: exceptions must be classes, or instances, not str
1670 Now let's try closing a generator:
1674 ... except GeneratorExit:
1681 >>> g.close() # should be no-op now
1683 >>> f().close() # close on just-opened generator should be fine
1685 >>> def f(): yield # an even simpler generator
1686 >>> f().close() # close before opening
1689 >>> g.close() # close normally
1703 >>> class context(object):
1704 ... def __enter__(self): pass
1705 ... def __exit__(self, *args): print 'exiting'
1715 GeneratorExit is not caught by except Exception:
1719 ... except Exception: print 'except'
1720 ... finally: print 'finally'
1728 Now let's try some ill-behaved generators:
1732 ... except GeneratorExit:
1737 Traceback (most recent call last):
1739 RuntimeError: generator ignored GeneratorExit
1743 Our ill-behaved code should be invoked during GC:
1745 >>> import sys, StringIO
1746 >>> old, sys.stderr = sys.stderr, StringIO.StringIO()
1750 >>> sys.stderr.getvalue().startswith(
1751 ... "Exception RuntimeError: 'generator ignored GeneratorExit' in "
1754 >>> sys.stderr = old
1757 And errors thrown during closing should propagate:
1761 ... except GeneratorExit:
1762 ... raise TypeError("fie!")
1766 Traceback (most recent call last):
1771 Ensure that various yield expression constructs make their
1772 enclosing function a generator:
1774 >>> def f(): x += yield
1778 >>> def f(): x = yield
1782 >>> def f(): lambda x=(yield): 1
1786 >>> def f(): x=(i for i in (yield) if (yield))
1790 >>> def f(d): d[(yield "a")] = d[(yield "b")] = 27
1804 ... except StopIteration: pass
1810 refleaks_tests = """
1811 Prior to adding cycle-GC support to itertools.tee, this code would leak
1812 references. We add it to the standard suite so the routine refleak tests
1813 will trigger if it ever becomes uncleanable again.
1815 >>> import itertools
1818 ... def __iter__(self):
1821 ... return self.item
1823 ... head, tail = itertools.tee(g)
1828 Make sure to also test the involvement of the tee-internal teedataobject,
1829 which stores returned items.
1831 >>> item = it.next()
1835 This test leaked at one point due to generator finalization/destruction.
1836 It was copied from Lib/test/leakers/test_generator_cycle.py before the file
1849 This test isn't really generator related, but rather exception-in-cleanup
1850 related. The coroutine tests (above) just happen to cause an exception in
1851 the generator's __del__ (tp_del) method. We can also test for this
1852 explicitly, without generators. We do have to redirect stderr to avoid
1853 printing warnings and to double-check that we actually tested what we wanted
1856 >>> import sys, StringIO
1857 >>> old = sys.stderr
1859 ... sys.stderr = StringIO.StringIO()
1861 ... def __del__(self):
1862 ... raise RuntimeError
1866 ... err = sys.stderr.getvalue().strip()
1868 ... "Exception RuntimeError: RuntimeError() in <"
1870 ... err.endswith("> ignored")
1871 ... len(err.splitlines())
1873 ... sys.stderr = old
1880 These refleak tests should perhaps be in a testfile of their own;
1881 test_generators just happened to be the test that drew these out.
1885 __test__ = {"tut": tutorial_tests,
1887 "email": email_tests,
1889 "syntax": syntax_tests,
1890 "conjoin": conjoin_tests,
1891 "weakref": weakref_tests,
1892 "coroutine": coroutine_tests,
1893 "refleaks": refleaks_tests,
1896 # Magic test name that regrtest.py invokes *after* importing this module.
1897 # This worms around a bootstrap problem.
1898 # Note that doctest and regrtest both look in sys.argv for a "-v" argument,
1899 # so this works as expected in both ways of running regrtest.
1900 def test_main(verbose=None):
1901 from test import test_support, test_generators
1902 test_support.run_doctest(test_generators, verbose)
1904 # This part isn't needed for regrtest, but for running the test directly.
1905 if __name__ == "__main__":