:mod:`zlib` --- Compression compatible with :program:`gzip`
===========================================================

.. module:: zlib
   :synopsis: Low-level interface to compression and decompression routines
              compatible with gzip.

For applications that require data compression, the functions in this module
allow compression and decompression, using the zlib library. The zlib library
has its own home page at http://www.zlib.net. There are known
incompatibilities between the Python module and versions of the zlib library
earlier than 1.1.3; 1.1.3 has a security vulnerability, so we recommend using
1.1.4 or later.

zlib's functions have many options and often need to be used in a particular
order. This documentation doesn't attempt to cover all of the permutations;
consult the zlib manual at http://www.zlib.net/manual.html for authoritative
information.

For reading and writing ``.gz`` files see the :mod:`gzip` module. For
other archive formats, see the :mod:`bz2`, :mod:`zipfile`, and
:mod:`tarfile` modules.

The available exception and functions in this module are:


.. exception:: error

   Exception raised on compression and decompression errors.

.. function:: adler32(data[, value])

   Computes an Adler-32 checksum of *data*. (An Adler-32 checksum is almost as
   reliable as a CRC32 but can be computed much more quickly.) If *value* is
   present, it is used as the starting value of the checksum; otherwise, a fixed
   default value is used. This allows computing a running checksum over the
   concatenation of several inputs. The algorithm is not cryptographically
   strong, and should not be used for authentication or digital signatures. Since
   the algorithm is designed for use as a checksum algorithm, it is not suitable
   for use as a general hash algorithm.

   This function always returns an integer object.

   .. note::

      To generate the same numeric value across all Python versions and
      platforms, use ``adler32(data) & 0xffffffff``. If you are only using
      the checksum in packed binary format this is not necessary as the
      return value is the correct 32-bit binary representation.

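   For example, a running checksum computed chunk by chunk matches the checksum
   of the concatenated input (a small illustrative snippet; the sample text is
   arbitrary)::

      import zlib

      whole = zlib.adler32("Hello, world!") & 0xffffffff

      running = zlib.adler32("Hello, ")
      running = zlib.adler32("world!", running)   # continue from the previous value
      assert (running & 0xffffffff) == whole
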
   .. versionchanged:: 2.6
      The return value is in the range [-2**31, 2**31-1]
      regardless of platform. In older versions the value would be
      signed on some platforms and unsigned on others.

   .. versionchanged:: 3.0
      The return value is unsigned and in the range [0, 2**32-1]
      regardless of platform.

.. function:: compress(string[, level])

   Compresses the data in *string*, returning a string containing the compressed
   data. *level* is an integer from ``1`` to ``9`` controlling the level of
   compression; ``1`` is fastest and produces the least compression, ``9`` is
   slowest and produces the most. The default value is ``6``. Raises the
   :exc:`error` exception if any error occurs.

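   For example (an illustrative snippet; the sample text and the level ``9`` are
   arbitrary choices)::

      import zlib

      data = "witch which has which witches wrist watch" * 10
      compressed = zlib.compress(data, 9)     # level 9: slowest, best compression
      assert len(compressed) < len(data)
      assert zlib.decompress(compressed) == data
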
.. function:: compressobj([level])

   Returns a compression object, to be used for compressing data streams that won't
   fit into memory at once. *level* is an integer from ``1`` to ``9`` controlling
   the level of compression; ``1`` is fastest and produces the least compression,
   ``9`` is slowest and produces the most. The default value is ``6``.

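   For example, a large input can be compressed piecewise and the pieces
   concatenated into a single valid stream (a minimal sketch; the helper name
   ``compress_chunks`` and the 4096-byte chunk size are arbitrary)::

      import zlib

      def compress_chunks(chunks, level=6):
          compressor = zlib.compressobj(level)
          pieces = [compressor.compress(chunk) for chunk in chunks]
          pieces.append(compressor.flush())    # emit whatever zlib still buffers
          return "".join(pieces)

      original = "a fairly repetitive line of text\n" * 5000
      chunks = [original[i:i + 4096] for i in range(0, len(original), 4096)]
      assert zlib.decompress(compress_chunks(chunks)) == original
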
.. function:: crc32(data[, value])

   .. index::
      single: Cyclic Redundancy Check
      single: checksum; Cyclic Redundancy Check

   Computes a CRC (Cyclic Redundancy Check) checksum of *data*. If *value* is
   present, it is used as the starting value of the checksum; otherwise, a fixed
   default value is used. This allows computing a running checksum over the
   concatenation of several inputs. The algorithm is not cryptographically
   strong, and should not be used for authentication or digital signatures. Since
   the algorithm is designed for use as a checksum algorithm, it is not suitable
   for use as a general hash algorithm.

   This function always returns an integer object.

   .. note::

      To generate the same numeric value across all Python versions and
      platforms, use ``crc32(data) & 0xffffffff``. If you are only using
      the checksum in packed binary format this is not necessary as the
      return value is the correct 32-bit binary representation.

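   For example (a small illustrative snippet)::

      import zlib

      # Mask with 0xffffffff for a portable unsigned 32-bit result.
      crc = zlib.crc32("hello ")
      crc = zlib.crc32("world", crc)           # continue from the previous value
      assert (crc & 0xffffffff) == (zlib.crc32("hello world") & 0xffffffff)
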
   .. versionchanged:: 2.6
      The return value is in the range [-2**31, 2**31-1]
      regardless of platform. In older versions the value would be
      signed on some platforms and unsigned on others.

   .. versionchanged:: 3.0
      The return value is unsigned and in the range [0, 2**32-1]
      regardless of platform.

.. function:: decompress(string[, wbits[, bufsize]])

   Decompresses the data in *string*, returning a string containing the
   uncompressed data. The *wbits* parameter controls the size of the window
   buffer. If *bufsize* is given, it is used as the initial size of the output
   buffer. Raises the :exc:`error` exception if any error occurs.

   The absolute value of *wbits* is the base two logarithm of the size of the
   history buffer (the "window size") used when compressing data. Its absolute
   value should be between 8 and 15 for the most recent versions of the zlib
   library, larger values resulting in better compression at the expense of greater
   memory usage. The default value is 15. When *wbits* is negative, the standard
   :program:`gzip` header is suppressed; this is an undocumented feature of the
   zlib library, used for compatibility with :program:`unzip`'s compression file
   format.

   *bufsize* is the initial size of the buffer used to hold decompressed data. If
   more space is required, the buffer size will be increased as needed, so you
   don't have to get this value exactly right; tuning it will only save a few calls
   to :cfunc:`malloc`. The default size is 16384.

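   For example (an illustrative snippet; the *wbits* and *bufsize* arguments
   simply repeat the defaults)::

      import zlib

      data = "some data worth storing compressed " * 100
      compressed = zlib.compress(data)

      assert zlib.decompress(compressed) == data
      # Explicit window size and initial buffer size; bufsize is only a hint.
      assert zlib.decompress(compressed, 15, 16384) == data
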
.. function:: decompressobj([wbits])

   Returns a decompression object, to be used for decompressing data streams that
   won't fit into memory at once. The *wbits* parameter controls the size of the
   window buffer.

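   For example, a stream arriving in pieces can be decompressed incrementally
   (a minimal sketch; the sample data and the split point are arbitrary)::

      import zlib

      data = ", ".join(str(n) for n in range(10000))
      compressed = zlib.compress(data)

      decompressor = zlib.decompressobj()
      output = decompressor.decompress(compressed[:1024])    # first piece
      output += decompressor.decompress(compressed[1024:])   # remainder
      output += decompressor.flush()
      assert output == data
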
Compression objects support the following methods:


.. method:: Compress.compress(string)

   Compress *string*, returning a string containing compressed data for at least
   part of the data in *string*. This data should be concatenated to the output
   produced by any preceding calls to the :meth:`compress` method. Some input may
   be kept in internal buffers for later processing.

.. method:: Compress.flush([mode])

   All pending input is processed, and a string containing the remaining compressed
   output is returned. *mode* can be selected from the constants
   :const:`Z_SYNC_FLUSH`, :const:`Z_FULL_FLUSH`, or :const:`Z_FINISH`,
   defaulting to :const:`Z_FINISH`. :const:`Z_SYNC_FLUSH` and
   :const:`Z_FULL_FLUSH` allow compressing further strings of data, while
   :const:`Z_FINISH` finishes the compressed stream and prevents compressing any
   more data. After calling :meth:`flush` with *mode* set to :const:`Z_FINISH`,
   the :meth:`compress` method cannot be called again; the only realistic action is
   to delete the object.

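   For example, :const:`Z_SYNC_FLUSH` forces out everything compressed so far
   while keeping the stream open for more data (a small sketch; the record
   strings are arbitrary)::

      import zlib

      compressor = zlib.compressobj()
      part1 = compressor.compress("first record\n")
      part1 += compressor.flush(zlib.Z_SYNC_FLUSH)   # stream remains usable
      part2 = compressor.compress("second record\n")
      part2 += compressor.flush()                    # Z_FINISH closes the stream
      assert zlib.decompress(part1 + part2) == "first record\nsecond record\n"
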
.. method:: Compress.copy()

   Returns a copy of the compression object. This can be used to efficiently
   compress a set of data that share a common initial prefix.

   .. versionadded:: 2.5

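   For example, the state built up over a shared prefix can be duplicated instead
   of being recompressed for every record (a minimal sketch; the header and
   payload strings are arbitrary)::

      import zlib

      header = "common header shared by every record\n"
      base = zlib.compressobj()
      prefix = base.compress(header)           # output emitted for the prefix so far

      first = base.copy()
      record1 = prefix + first.compress("payload one") + first.flush()

      second = base.copy()
      record2 = prefix + second.compress("payload two") + second.flush()

      assert zlib.decompress(record1) == header + "payload one"
      assert zlib.decompress(record2) == header + "payload two"
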
Decompression objects support the following methods and attributes:


.. attribute:: Decompress.unused_data

   A string which contains any bytes past the end of the compressed data. That is,
   this remains ``""`` until the last byte that contains compression data is
   available. If the whole string turned out to contain compressed data, this is
   ``""``, the empty string.

   The only way to determine where a string of compressed data ends is by actually
   decompressing it. This means that when compressed data is contained in part of a
   larger file, you can only find the end of it by reading data and feeding it
   followed by some non-empty string into a decompression object's
   :meth:`decompress` method until the :attr:`unused_data` attribute is no longer
   the empty string.

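   For example, anything appended after a complete compressed stream shows up in
   :attr:`unused_data` once the end of that stream has been reached (a small
   illustrative snippet)::

      import zlib

      payload = zlib.compress("compressed part") + "trailing bytes"
      decompressor = zlib.decompressobj()

      assert decompressor.decompress(payload) == "compressed part"
      assert decompressor.unused_data == "trailing bytes"
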
.. attribute:: Decompress.unconsumed_tail

   A string that contains any data that was not consumed by the last
   :meth:`decompress` call because it exceeded the limit for the uncompressed data
   buffer. This data has not yet been seen by the zlib machinery, so you must feed
   it (possibly with further data concatenated to it) back to a subsequent
   :meth:`decompress` method call in order to get correct output.

.. method:: Decompress.decompress(string[, max_length])

   Decompress *string*, returning a string containing the uncompressed data
   corresponding to at least part of the data in *string*. This data should be
   concatenated to the output produced by any preceding calls to the
   :meth:`decompress` method. Some of the input data may be preserved in internal
   buffers for later processing.

   If the optional parameter *max_length* is supplied then the return value will be
   no longer than *max_length*. This may mean that not all of the compressed input
   can be processed; unconsumed data will be stored in the attribute
   :attr:`unconsumed_tail`. This string must be passed to a subsequent call to
   :meth:`decompress` if decompression is to continue. If *max_length* is not
   supplied then the whole input is decompressed, and :attr:`unconsumed_tail` is
   empty.

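   For example, *max_length* can be used to cap how much output each call may
   produce, feeding :attr:`unconsumed_tail` back in until it is exhausted (a
   sketch; the 1024-byte cap and the sample data are arbitrary)::

      import zlib

      compressed = zlib.compress("x" * 100000)
      decompressor = zlib.decompressobj()

      pieces = []
      buf = compressed
      while buf:
          pieces.append(decompressor.decompress(buf, 1024))  # at most 1024 bytes out
          buf = decompressor.unconsumed_tail
      pieces.append(decompressor.flush())
      assert "".join(pieces) == "x" * 100000
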
.. method:: Decompress.flush([length])

   All pending input is processed, and a string containing the remaining
   uncompressed output is returned. After calling :meth:`flush`, the
   :meth:`decompress` method cannot be called again; the only realistic action is
   to delete the object.

   The optional parameter *length* sets the initial size of the output buffer.

.. method:: Decompress.copy()

   Returns a copy of the decompression object. This can be used to save the state
   of the decompressor midway through the data stream in order to speed up random
   seeks into the stream at a future point.

   .. versionadded:: 2.5

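   For example, a snapshot of the state taken partway through a stream can later
   be resumed without decompressing the beginning again (a minimal sketch; the
   sample data and the split point are arbitrary)::

      import zlib

      data = ", ".join(str(n) for n in range(10000))
      compressed = zlib.compress(data)
      decompressor = zlib.decompressobj()

      head = decompressor.decompress(compressed[:200])
      snapshot = decompressor.copy()           # remember the state at this point

      # ... later, resume from the snapshot rather than from the start.
      tail = snapshot.decompress(compressed[200:]) + snapshot.flush()
      assert head + tail == data
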
.. seealso::

   Module :mod:`gzip`
      Reading and writing :program:`gzip`\ -format files.

   http://www.zlib.net
      The zlib library home page.

   http://www.zlib.net/manual.html
      The zlib manual explains the semantics and usage of the library's many
      functions.