# Copyright (C) 2001-2007, Parrot Foundation.

=head1 NAME

docs/tests.pod - Testing Parrot
=head1 A basic guide to writing and running tests for Parrot
This is a quick and dirty pointer to how the Parrot test suite is executed
and to how new tests for Parrot should be written. The testing system is
liable to change in the future, but tests written following the guidelines
below should be easy to port to a new test suite.
=head1 How to test parrot

The easy way to test parrot is to run C<make test>. If you have updated
your code recently and tests begin failing, run C<make realclean> and
recompile parrot before complaining.
C<make languages-test> runs the test suite for most language implementations
in the F<languages> directory.
=head2 Submitting smolder test results

Parrot has a status page with smoke test results at
L<http://smolder.plusthree.com/app/public_projects/details/8>.

You can supply new test results by simply running C<make smoke>. It will
run the same tests as C<make test> would, but will upload the test results
to the website.
=head1 Location of the test files

The parrot test files, the F<*.t> files, can be found in the F<t> directory.
A quick overview of the subdirectories in F<t> can be found in F<t/README>.

The language implementations usually have their test files in
F<languages/*/t>.

New tests should be added to an existing F<*.t> file. If a previously
untested feature is tested, it might also make sense to create a new
F<*.t> file.
=head1 How to write a test

Test scripts must emit text that conforms to the C<Test Anything Protocol>.
Test scripts are currently usually written in Perl 5 or PIR. The Perl 5
module C<Parrot::Test> and the PIR module C<Test::More> help with writing
tests.
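For orientation, a passing run of a two-test script emits TAP output that
looks roughly like this (the test names here are made-up placeholders):

    1..2
    ok 1 - first test name
    ok 2 - second test name

The C<1..2> line is the plan; the harness compares it against the number of
C<ok>/C<not ok> lines actually seen.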
The testing framework needs to know how many tests it should expect, so the
number of planned tests needs to be incremented when adding a new test. This
is done near the top of a test file, in a line that looks like:

    use Parrot::Test tests => 8;

for Perl 5 based test scripts.
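Putting the pieces together, a minimal but complete Perl 5 test file might
look like the following sketch (the C<use lib> paths and the test body are
illustrative assumptions, not an exact file from the tree):

    use strict;
    use warnings;
    use lib qw( . lib ../lib ../../lib );
    use Test::More;
    use Parrot::Test tests => 1;

    pasm_output_is( <<'CODE', <<'OUTPUT', "hello world" );
        print "Hello, World!\n"
        end
    CODE
    Hello, World!
    OUTPUT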
=head2 Testing Parrot Assembler

PASM tests are mostly used for testing ops. Appropriate test files for basic
ops are F<t/op/*.t>. Polymorphic Containers are tested in F<t/pmc/*.t>. Add
the new tests at the end of the appropriate file, in the form:

    pasm_output_is(<<'CODE', <<'OUTPUT', "name for test");
    *** a big chunk of assembler, eg:
        print "\n"       # you can even comment it if it's obscure
        end              # don't forget this...!
    CODE
    *** what you expect the output of the chunk to be, eg.

    OUTPUT
=head2 Testing Parrot Intermediate Representation

Writing tests in B<PIR> is more convenient. This is done with
C<pir_output_is> and friends. For example:

    pir_output_is(<<'CODE', <<'OUT', 'nothing useful');
    .sub main :main
        print "hi\n"
    .end
    CODE
    hi
    OUT
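Tests can also be written directly in PIR using the C<Test::More> library
mentioned above. A hedged sketch, assuming the library is loaded via
F<test_more.pir> and exports C<plan>, C<ok> and C<is>:

    .sub main :main
        .include 'test_more.pir'

        plan(2)
        ok(1, "ok takes a truth value and a description")
        is(42, 42, "is compares two values")
    .end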
=head2 Testing C source

C source tests are usually located in F<t/src/*.t>. A simple test looks
like:

    c_output_is(<<'CODE', <<'OUTPUT', "name for test");

    #include <stdio.h>
    #include "parrot/parrot.h"
    #include "parrot/embed.h"

    static opcode_t *the_test(Parrot_Interp, opcode_t *, opcode_t *);

    int main(int argc, char* argv[]) {
        Parrot_Interp interp;
        interp = Parrot_new(NULL);
        if (!interp)
            return 1;

        Parrot_run_native(interp, the_test);
        printf("done\n");
        fflush(stdout);
        return 0;
    }

    static opcode_t *
    the_test(PARROT_INTERP,
             opcode_t *cur_op, opcode_t *start)
    {
        /* Your test goes here. */

        return NULL; /* always return NULL */
    }
    CODE
    # Anything that might be output prior to "done".
    done
    OUTPUT
Note that it's always a good idea to output "done" to confirm that the
compiled code executed completely. When mixing C<printf> and
C<Parrot_io_printf>, always append a C<fflush(stdout);> after the former.
=head2 Testing Perl 5 components

At the present time most, if not all, of the programs used to configure,
build and install Parrot are written in Perl 5. These programs take the
form of program files (F<*.pl>) and Perl modules (F<*.pm>) holding
subroutines and other variables imported into the program files. Examples
of such program files can be found under F<tools/>; examples of such Perl
modules can be found under F<lib/Parrot/>.
All of these Perl 5 components ought to be tested. Fortunately, over the
last decade, under the leadership of Michael Schwern, chromatic, Andy
Lester and many others, the Perl 5 community has developed a rigorous
approach to testing in which:

=over 4

=item *

Subroutines found in F<*.pl> files are extracted and placed in F<*.pm>
files.

=item *

Those subroutines are then imported back into the program file.

=item *

Those subroutines are also imported into test files (F<*.t>), where tests
are run by Test::Builder-based modules such as Test::Simple and Test::More.

=item *

Those test files are run by Test::Harness-based functionality such as
ExtUtils::MakeMaker's C<make test>, Module::Build's C<./Build test>, or
Test::Harness's F<prove>.

=item *

The extent to which the test files exercise all statements in the Perl
modules being tested is measured in coverage analysis using the CPAN module
Devel::Cover.

=item *

The underlying code is refactored and improved on the basis of the results
of tests and coverage analysis.

=back
Tests reflecting this approach can be found in F<t/configure/>,
F<t/postconfigure/>, F<t/tools/>, and so on.

It is our objective to test all Perl 5 components of the Parrot
distribution using the methodology above.
=head3 Build Tools Tests

The files in F<t/postconfigure> are tests for the build system. The build
tools tests are intended to be run after someone has made changes in
modules such as those in F<lib/Parrot/Pmc2cUtils/>, F<Ops2cUtils/> and
F<Ops2pmutils/>. They're set up to be run after F<Configure.pl> has
completed but before C<make> has been invoked. (In fact, they will generate
errors if C<make> has completed.) You can run them with any of the
following:

    perl Configure.pl --test
    perl Configure.pl --test=build
    make buildtools_tests    (following Configure.pl)
=head2 Testing language implementations

Language implementations are usually tested with C<language_output_is> and
friends.
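A hedged sketch of such a test, assuming the first argument names the
language under test (the exact signature may differ between versions):

    language_output_is( 'perl6', <<'CODE', <<'OUT', 'say' );
    say 'hello';
    CODE
    hello
    OUT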
=head1 Ideal tests:

=over 4

=item *

Probe the boundaries (including edge cases, errors thrown, etc.) of
whatever code they're testing. These should include potentially out-of-band
input unless we decide that compilers should check for this themselves.

=item *

Are small and self-contained, so that if the tested feature breaks we can
quickly identify where and why.

=item *

Are valid. Essentially, they should conform to the additional documentation
that accompanies the feature (if any). [If there isn't any documentation,
then feel free to add some and/or complain to the mailing list.]

=item *

Are a chunk of assembler and a chunk of expected output.

=back
=head1 TODO tests

In test driven development, tests are implemented first, so the tests are
initially expected to fail. This can be expressed by marking the tests as
TODO. See L<Test::More> on how to do that.

It is important to have the TODO tests actually executed, so that
unexpected success can be detected. In the case of missing requirements and
in the case of serious breakdowns, the execution of tests can be skipped.
See L<Test::More> on how to do that.
=head1 SEE ALSO

L<http://qa.perl.org/>

L<http://testanything.org/>

L<http://en.wikipedia.org/wiki/Test_Anything_Protocol>

F<t/TESTS.STATUS.pod>