subunit: A streaming protocol for test results
Copyright (C) 2005-2009 Robert Collins <robertc@robertcollins.net>

Licensed under either the Apache License, Version 2.0, or the BSD 3-clause
licence, at the user's choice. Copies of both licences are available in the
project source as Apache-2.0 and BSD. You may not use this file except in
compliance with one of these two licences.

Unless required by applicable law or agreed to in writing, software
distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
license you chose for the specific language governing permissions and
limitations under that license.

See the COPYING file for full details on the licensing of Subunit.

subunit reuses iso8601 by Michael Twomey, distributed under an MIT style
licence - see python/iso8601/LICENSE for details.

Subunit is a streaming protocol for test results. The protocol is human
readable and easily generated and parsed. By design all the components of
the protocol conceptually fit into the xUnit TestCase->TestResult interaction.

Subunit comes with command line filters to process a subunit stream and
language bindings for python, C, C++ and shell. Bindings are easy to write
for other languages.

A number of useful things can be done easily with subunit:
 * Test aggregation: Tests run separately can be combined and then
   reported/displayed together. For instance, tests from different languages
   can be shown as a seamless whole.
 * Test archiving: A test run may be recorded and replayed later.
 * Test isolation: Tests that may crash or otherwise interact badly with each
   other can be run separately and then aggregated, rather than interfering
   with each other.
 * Grid testing: subunit can act as the necessary serialisation and
   deserialisation to get test runs on distributed machines to be reported in
   real time.

Subunit supplies the following filters, which can be combined as shown below:
 * tap2subunit - convert Perl's TestAnythingProtocol to subunit.
 * subunit2pyunit - convert a subunit stream to pyunit test results.
 * subunit2gtk - show a subunit stream in GTK.
 * subunit2junitxml - convert a subunit stream to JUnit's XML format.
 * subunit-diff - compare two subunit streams.
 * subunit-filter - filter out tests from a subunit stream.
 * subunit-ls - list info about tests present in a subunit stream.
 * subunit-stats - generate a summary of a subunit stream.
 * subunit-tags - add or remove tags from a stream.
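
These filters generally read from stdin and write to stdout, so they can be
chained with ordinary pipes. For example (a sketch;
``mypackage.tests.test_suite`` is a placeholder for a real test id)::

  $ python -m subunit.run mypackage.tests.test_suite | subunit-stats
  $ python -m subunit.run mypackage.tests.test_suite | subunit2junitxml > results.xml
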
Integration with other tools
----------------------------

Subunit's language bindings act as integration with various test runners like
'check', 'cppunit' and Python's 'unittest'. Beyond that, a small amount of glue
(typically a few lines) will allow Subunit to be used in more sophisticated
ways.

Python
------

Subunit has excellent Python support: most of the filters and tools are written
in python and there are facilities for using Subunit to increase test isolation
seamlessly within a test suite.

One simple way to run an existing python test suite and have it output subunit
is the module ``subunit.run``::

  $ python -m subunit.run mypackage.tests.test_suite

For more information on the Python support Subunit offers, please see
``pydoc subunit``, or the source in ``python/subunit/__init__.py``.
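
As a library-level illustration, the sketch below runs a suite in-process and
then replays the captured stream into a plain ``unittest`` result. It assumes
the python-subunit bindings are importable as ``subunit`` and reuses the
placeholder test id from above; exact stream handling (bytes versus text) can
differ between Subunit versions::

  import io
  import unittest

  import subunit

  # Emit: TestProtocolClient is a TestResult that writes a subunit stream.
  stream = io.BytesIO()
  suite = unittest.TestLoader().loadTestsFromName('mypackage.tests.test_suite')
  suite.run(subunit.TestProtocolClient(stream))

  # Consume: ProtocolTestCase replays a subunit stream into any TestResult.
  stream.seek(0)
  result = unittest.TestResult()
  subunit.ProtocolTestCase(stream).run(result)
  print('%d tests, %d failures' % (result.testsRun, len(result.failures)))
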
C
-

Subunit has C bindings to emit the protocol, and comes with a patch for 'check'
which has been nominally accepted by the 'check' developers. See 'c/README' for
details.

C++
---

The C library is includable and usable directly from C++. A TestListener for
CPPUnit is included in the Subunit distribution. See 'c++/README' for details.

shell
-----

Similar to C, the shell bindings consist of simple functions to output protocol
elements, and a patch for adding subunit output to the 'ShUnit' shell test
runner. See 'shell/README' for details.

Filter recipes
--------------

To ignore some failing tests whose root cause is already known::

  subunit-filter --without 'AttributeError.*flavor'

Sample subunit wire contents
----------------------------

A stream with one passing and one failing test::

  test: test foo works
  success: test foo works.
  test: tar a file.
  failure: tar a file. [
  foo.c:34 WARNING foo is not defined.
  ]

When run through subunit2pyunit::

  .F
  ========================
  FAILURE: tar a file.
  --------------------
  foo.c:34 WARNING foo is not defined.

Subunit protocol description
============================

This description is being ported to an EBNF style. Currently it is only partly
in that style, but should be fairly clear all the same. When in doubt, refer to
the source (and ideally help fix up the description!). Generally the protocol
is line orientated and consists of either directives and their parameters, or
(when outside a DETAILS region) unexpected lines which are not interpreted by
the parser - they should be forwarded unaltered.

test|testing|test:|testing: test LABEL
success|success:|successful|successful: test LABEL
success|success:|successful|successful: test LABEL DETAILS
failure: test LABEL
failure: test LABEL DETAILS
error: test LABEL
error: test LABEL DETAILS
skip[:] test LABEL
skip[:] test LABEL DETAILS
xfail[:] test LABEL
xfail[:] test LABEL DETAILS
uxsuccess[:] test LABEL
uxsuccess[:] test LABEL DETAILS
progress: [+|-]COUNT
progress: push
progress: pop
tags: [-]TAG ...
time: YYYY-MM-DD HH:MM:SSZ

DETAILS ::= BRACKETED | MULTIPART
BRACKETED ::= '[' CR UTF8-lines ']' CR
MULTIPART ::= '[ multipart' CR PART* ']' CR
PART ::= PART_TYPE CR NAME CR PART_BYTES CR
PART_TYPE ::= Content-Type: type/sub-type(;parameter=value,parameter=value)
PART_BYTES ::= (DIGITS CR LF BYTE{DIGITS})* '0' CR LF

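For illustration, a failure whose details are sent as a single multipart part
could look roughly like this on the wire (a sketch following the grammar
above; the part name ``traceback`` and the eight-byte chunk ``content\n`` are
made up, and exact CR/LF placement is as defined by PART_BYTES)::

  failure: tar a file. [ multipart
  Content-Type: text/plain;charset=utf8
  traceback
  8
  content
  0
  ]
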
unexpected output on stdout -> stdout.
exit w/o the last test completing -> error

Tags given outside a test are applied to all following tests. Tags given after
a test: line and before the result line for the same test apply only to that
test, and inherit the current global tags. A '-' before a tag is used to
remove tags - e.g. to prevent a global tag applying to a single test, or to
cancel a global tag.
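
For example (an illustrative stream, not taken from a real run)::

  tags: ui
  test: test foo
  tags: windows
  success: test foo
  test: test bar
  success: test bar

Here ``test foo`` is tagged with both ``ui`` and ``windows``, while
``test bar`` only picks up the global ``ui`` tag.
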
The progress directive is used to provide progress information about a stream
so that stream consumers can provide completion estimates, progress bars and so
on. Stream generators that know how many tests will be present in the stream
should output "progress: COUNT". Stream filters that add tests should output
"progress: +COUNT", and those that remove tests should output
"progress: -COUNT". An absolute count should reset the progress indicators in
use - it indicates that two separate streams from different generators have
been trivially concatenated together, and there is no knowledge of how many
more complete streams are incoming. Smart concatenation could scan each stream
for its count and sum them, or alternatively translate absolute counts into
relative counts inline. It is recommended that outputters avoid absolute counts
unless necessary.

The push and pop directives are used to provide local regions for progress
reporting. This fits with hierarchically operating test environments - such as
those that organise tests into suites - the top-most runner can report on the
number of suites, and each suite can surround its output with a (push, pop)
pair. Interpreters should interpret a pop as also advancing the progress of the
restored level by one step.

Encountering progress directives between the start and end of a test pair
indicates that a previous test was interrupted and did not cleanly terminate:
it should be implicitly closed with an error (the same as when a stream ends
with no closing test directive for the most recently started test).
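
For example, a runner that knows it will emit two suites of two tests each
might produce (an illustrative stream)::

  progress: 2
  progress: push
  progress: 2
  test: suite one test one
  success: suite one test one
  test: suite one test two
  success: suite one test two
  progress: pop
  progress: push
  progress: 2
  test: suite two test one
  success: suite two test one
  test: suite two test two
  success: suite two test two
  progress: pop

Each pop also advances the outer (suite-level) progress by one step, so a
consumer can report 50% completion once the first suite finishes.
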
The time directive acts as a clock event - it sets the time for all future
events. The value should be a valid ISO8601 time.
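
For example, bracketing a test with two time directives lets a consumer derive
its duration (an illustrative stream)::

  time: 2009-10-21 19:04:01Z
  test: test foo works
  time: 2009-10-21 19:04:03Z
  success: test foo works

A consumer that tracks the clock can infer that the test took about two
seconds.
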
The skip, xfail and uxsuccess outcomes are not supported by all testing
environments. In Python the testtools (https://launchpad.net/testtools)
library is used to translate these automatically if an older Python version
that does not support them is in use. See the testtools documentation for the
details.

skip is used to indicate a test was discovered but not executed. xfail is used
to indicate a test that errored in some expected fashion (also known as "TODO"
tests in some frameworks). uxsuccess is used to indicate an unexpected success
where a test thought to be failing actually passes. It is complementary to
xfail.
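
For example, these outcomes appear on the wire just like the others (an
illustrative stream; the labels and detail text are made up)::

  test: test unicode filenames
  skip: test unicode filenames [
  this filesystem cannot store unicode filenames
  ]
  test: test known bug
  xfail: test known bug [
  expected failure, tracked in the upstream bug tracker
  ]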