:mod:`tokenize` --- Tokenizer for Python source
===============================================

.. module:: tokenize
   :synopsis: Lexical scanner for Python source code.
.. moduleauthor:: Ka Ping Yee
.. sectionauthor:: Fred L. Drake, Jr. <fdrake@acm.org>

The :mod:`tokenize` module provides a lexical scanner for Python source code,
implemented in Python.  The scanner in this module returns comments as tokens
as well, making it useful for implementing "pretty-printers," including
colorizers for on-screen displays.

The primary entry point is a :term:`generator`:

.. function:: generate_tokens(readline)

   The :func:`generate_tokens` generator requires one argument, *readline*,
   which must be a callable object that provides the same interface as the
   :meth:`readline` method of built-in file objects (see section
   :ref:`bltin-file-objects`).  Each call to the function should return one
   line of input as a string.

   The generator produces 5-tuples with these members: the token type; the
   token string; a 2-tuple ``(srow, scol)`` of ints specifying the row and
   column where the token begins in the source; a 2-tuple ``(erow, ecol)`` of
   ints specifying the row and column where the token ends in the source; and
   the line on which the token was found.  The line passed is the *logical*
   line; continuation lines are included.

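   For example, a minimal sketch (the sample source string and variable names
   are illustrative, not part of the API) that tokenizes a string by wrapping
   it in a :class:`~StringIO.StringIO` object::

      from StringIO import StringIO
      from tokenize import generate_tokens, tok_name

      # Any object with a suitable readline() method will do; StringIO
      # turns an in-memory string into one.
      source = StringIO("x = 3.14  # a comment\n")
      for toknum, tokval, start, end, line in generate_tokens(source.readline):
          # start and end are the (srow, scol) and (erow, ecol) 2-tuples.
          print tok_name[toknum], repr(tokval), start, end
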
An older entry point is retained for backward compatibility:

.. function:: tokenize(readline[, tokeneater])

   The :func:`tokenize` function accepts two parameters: one representing the
   input stream, and one providing an output mechanism for :func:`tokenize`.

   The first parameter, *readline*, must be a callable object which provides
   the same interface as the :meth:`readline` method of built-in file objects
   (see section :ref:`bltin-file-objects`).  Each call to the function should
   return one line of input as a string.  Alternatively, *readline* may be a
   callable object that signals completion by raising :exc:`StopIteration`.

   .. versionchanged:: 2.5
      Added :exc:`StopIteration` support.

   The second parameter, *tokeneater*, must also be a callable object.  It is
   called once for each token, with five arguments, corresponding to the
   tuples generated by :func:`generate_tokens`.

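   A minimal sketch of a *tokeneater* callback (the name ``show_token`` is
   illustrative)::

      from StringIO import StringIO
      from tokenize import tokenize, tok_name

      def show_token(toknum, tokval, start, end, line):
          # Called once per token with the same five values that
          # generate_tokens() packs into each 5-tuple.
          print tok_name[toknum], repr(tokval)

      tokenize(StringIO("total = 1 + 2\n").readline, show_token)
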
All constants from the :mod:`token` module are also exported from
:mod:`tokenize`, as are two additional token type values that might be passed
to the *tokeneater* function by :func:`tokenize`:

.. data:: COMMENT

   Token value used to indicate a comment.

.. data:: NL

   Token value used to indicate a non-terminating newline.  The NEWLINE token
   indicates the end of a logical line of Python code; NL tokens are generated
   when a logical line of code is continued over multiple physical lines.

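   For instance, in the following sketch (the sample source is arbitrary) the
   newline inside the parentheses produces an NL token, while the newline that
   ends the statement produces NEWLINE::

      from StringIO import StringIO
      from tokenize import generate_tokens, NL, NEWLINE

      source = StringIO("total = (1 +\n         2)\n")
      for toknum, tokval, start, end, line in generate_tokens(source.readline):
          if toknum == NL:
              print "NL at", start        # newline inside the parentheses
          elif toknum == NEWLINE:
              print "NEWLINE at", start   # end of the logical line
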
Another function is provided to reverse the tokenization process.  This is
useful for creating tools that tokenize a script, modify the token stream, and
write back the modified script.

.. function:: untokenize(iterable)

   Converts tokens back into Python source code.  The *iterable* must return
   sequences with at least two elements, the token type and the token string.
   Any additional sequence elements are ignored.

   The reconstructed script is returned as a single string.  The result is
   guaranteed to tokenize back to match the input so that the conversion is
   lossless and round-trips are assured.  The guarantee applies only to the
   token type and token string, as the spacing between tokens (column
   positions) may change.

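   A round-trip sketch under these rules (only the type and string of each
   token are kept, so the spacing of the result may differ from the input)::

      from StringIO import StringIO
      from tokenize import generate_tokens, untokenize

      source = "1 + 2\n"
      tokens = [(toknum, tokval) for toknum, tokval, _, _, _
                in generate_tokens(StringIO(source).readline)]
      print untokenize(tokens)
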
Example of a script re-writer that transforms float literals into Decimal
objects::

96 """Substitute Decimals for floats in a string of statements.
98 >>> from decimal import Decimal
99 >>> s = 'print +21.3e-5*-.1234/81.7'
101 "print +Decimal ('21.3e-5')*-Decimal ('.1234')/Decimal ('81.7')"
105 >>> exec(decistmt(s))
106 -3.217160342717258261933904529E-7
110 g = generate_tokens(StringIO(s).readline) # tokenize the string
111 for toknum, tokval, _, _, _ in g:
112 if toknum == NUMBER and '.' in tokval: # replace NUMBER tokens
116 (STRING, repr(tokval)),
120 result.append((toknum, tokval))
121 return untokenize(result)