<!-- doc/src/sgml/regress.sgml -->

<chapter id="regress">
 <title>Regression Tests</title>

 <indexterm zone="regress">
  <primary>regression tests</primary>
 </indexterm>

 <indexterm zone="regress">
  <primary>test</primary>
 </indexterm>

 <para>
  The regression tests are a comprehensive set of tests for the SQL
  implementation in <productname>PostgreSQL</productname>. They test
  standard SQL operations as well as the extended capabilities of
  <productname>PostgreSQL</productname>.
 </para>
 <sect1 id="regress-run">
  <title>Running the Tests</title>

  <para>
   The regression tests can be run against an already installed and
   running server, or using a temporary installation within the build
   tree. Furthermore, there is a <quote>parallel</quote> and a
   <quote>sequential</quote> mode for running the tests. The
   sequential method runs each test script alone, while the
   parallel method starts up multiple server processes to run groups
   of tests in parallel. Parallel testing adds confidence that
   interprocess communication and locking are working correctly.
   Some tests may run sequentially even in <quote>parallel</quote>
   mode when the test requires it.
  </para>
  <sect2 id="regress-run-temp-inst">
   <title>Running the Tests Against a Temporary Installation</title>

   <para>
    To run the parallel regression tests after building but before installation,
    type:
<screen>
make check
</screen>
    in the top-level directory. (Or you can change to
    <filename>src/test/regress</filename> and run the command there.)
    Tests which are run in parallel are prefixed with <quote>+</quote>, and
    tests which run sequentially are prefixed with <quote>-</quote>.
    At the end you should see something like:
<screen>
<computeroutput>
# All 213 tests passed.
</computeroutput>
</screen>
    or otherwise a note about which tests failed. See <xref
    linkend="regress-evaluation"/> below before assuming that a
    <quote>failure</quote> represents a serious problem.
   </para>
   <para>
    Because this test method runs a temporary server, it will not work
    if you did the build as the root user, since the server will not start as
    root. The recommended procedure is not to do the build as root, or else
    to perform the testing only after completing the installation.
   </para>
   <para>
    If you have configured <productname>PostgreSQL</productname> to install
    into a location where an older <productname>PostgreSQL</productname>
    installation already exists, and you perform <literal>make check</literal>
    before installing the new version, you might find that the tests fail
    because the new programs try to use the already-installed shared
    libraries. (Typical symptoms are complaints about undefined symbols.)
    If you wish to run the tests before overwriting the old installation,
    you'll need to build with <literal>configure --disable-rpath</literal>.
    It is not recommended that you use this option for the final installation,
    however.
   </para>
   <para>
    The parallel regression test starts quite a few processes under your
    user ID. Presently, the maximum concurrency is twenty parallel test
    scripts, which means forty processes: there's a server process and a
    <application>psql</application> process for each test script.
    So if your system enforces a per-user limit on the number of processes,
    make sure this limit is at least fifty or so, else you might get
    random-seeming failures in the parallel test. If you are not in
    a position to raise the limit, you can cut down the degree of parallelism
    by setting the <literal>MAX_CONNECTIONS</literal> parameter. For example:
<screen>
make MAX_CONNECTIONS=10 check
</screen>
    runs no more than ten tests concurrently.
   </para>
  </sect2>
  <sect2 id="regress-run-existing-inst">
   <title>Running the Tests Against an Existing Installation</title>

   <para>
    To run the tests after installation (see <xref linkend="installation"/>),
    initialize a data directory and start the
    server as explained in <xref linkend="runtime"/>, then type:
<screen>
make installcheck
</screen>
    or for a parallel test:
<screen>
make installcheck-parallel
</screen>
    The tests will expect to contact the server at the local host and the
    default port number, unless directed otherwise by the <envar>PGHOST</envar> and
    <envar>PGPORT</envar> environment variables. The tests will be run in a
    database named <literal>regression</literal>; any existing database by this name
    will be dropped.
   </para>

   <para>
    The tests will also transiently create some cluster-wide objects, such as
    roles, tablespaces, and subscriptions. These objects will have names
    beginning with <literal>regress_</literal>. Beware of
    using <literal>installcheck</literal> mode with an installation that has
    any actual global objects named that way.
   </para>
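For instance, to point the tests at a server listening on a non-default port (the port number here is purely illustrative):

```shell
# Illustrative only: 5433 stands in for whatever port your server uses.
PGPORT=5433 make installcheck
```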
  </sect2>
  <sect2 id="regress-additional">
   <title>Additional Test Suites</title>

   <para>
    The <literal>make check</literal> and <literal>make installcheck</literal> commands
    run only the <quote>core</quote> regression tests, which test built-in
    functionality of the <productname>PostgreSQL</productname> server. The source
    distribution contains many additional test suites, most of them having
    to do with add-on functionality such as optional procedural languages.
   </para>

   <para>
    To run all test suites applicable to the modules that have been selected
    to be built, including the core tests, type one of these commands at the
    top of the build tree:
<screen>
make check-world
make installcheck-world
</screen>
    These commands run the tests using temporary servers or an
    already-installed server, respectively, just as previously explained
    for <literal>make check</literal> and <literal>make installcheck</literal>. Other
    considerations are the same as previously explained for each method.
    Note that <literal>make check-world</literal> builds a separate instance
    (temporary data directory) for each tested module, so it requires more
    time and disk space than <literal>make installcheck-world</literal>.
   </para>
   <para>
    On a modern machine with multiple CPU cores and no tight operating-system
    limits, you can make things go substantially faster with parallelism.
    The recipe that most PostgreSQL developers actually use for running all
    tests is something like
<screen>
make check-world -j8 >/dev/null
</screen>
    with a <option>-j</option> limit near to or a bit more than the number
    of available cores. Discarding <systemitem>stdout</systemitem>
    eliminates chatter that's not interesting when you just want to verify
    success. (In case of failure, the <systemitem>stderr</systemitem>
    messages are usually enough to determine where to look closer.)
   </para>

   <para>
    Alternatively, you can run individual test suites by typing
    <literal>make check</literal> or <literal>make installcheck</literal> in the appropriate
    subdirectory of the build tree. Keep in mind that <literal>make
    installcheck</literal> assumes you've installed the relevant module(s), not
    only the core server.
   </para>

   <para>
    The additional tests that can be invoked this way include:
   </para>
   <itemizedlist>
    <listitem>
     <para>
      Regression tests for optional procedural languages.
      These are located under <filename>src/pl</filename>.
     </para>
    </listitem>
    <listitem>
     <para>
      Regression tests for <filename>contrib</filename> modules,
      located under <filename>contrib</filename>.
      Not all <filename>contrib</filename> modules have tests.
     </para>
    </listitem>
    <listitem>
     <para>
      Regression tests for the interface libraries,
      located in <filename>src/interfaces/libpq/test</filename> and
      <filename>src/interfaces/ecpg/test</filename>.
     </para>
    </listitem>
    <listitem>
     <para>
      Tests for core-supported authentication methods,
      located in <filename>src/test/authentication</filename>.
      (See below for additional authentication-related tests.)
     </para>
    </listitem>
    <listitem>
     <para>
      Tests stressing behavior of concurrent sessions,
      located in <filename>src/test/isolation</filename>.
     </para>
    </listitem>
    <listitem>
     <para>
      Tests for crash recovery and physical replication,
      located in <filename>src/test/recovery</filename>.
     </para>
    </listitem>
    <listitem>
     <para>
      Tests for logical replication,
      located in <filename>src/test/subscription</filename>.
     </para>
    </listitem>
    <listitem>
     <para>
      Tests of client programs, located under <filename>src/bin</filename>.
     </para>
    </listitem>
   </itemizedlist>
   <para>
    When using <literal>installcheck</literal> mode, these tests will create
    and destroy test databases whose names
    include <literal>regression</literal>, for
    example <literal>pl_regression</literal>
    or <literal>contrib_regression</literal>. Beware of
    using <literal>installcheck</literal> mode with an installation that has
    any non-test databases named that way.
   </para>

   <para>
    Some of these auxiliary test suites use the TAP infrastructure explained
    in <xref linkend="regress-tap"/>.
    The TAP-based tests are run only when PostgreSQL was configured with the
    option <option>--enable-tap-tests</option>. This is recommended for
    development, but can be omitted if there is no suitable Perl installation.
   </para>
   <para>
    Some test suites are not run by default, either because they are not secure
    to run on a multiuser system or because they require special software. You
    can decide which test suites to run additionally by setting the
    <command>make</command> or environment variable
    <varname>PG_TEST_EXTRA</varname> to a whitespace-separated list, for
    example:
<programlisting>
make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance'
</programlisting>
    The following values are currently supported:
    <variablelist>
     <varlistentry>
      <term><literal>kerberos</literal></term>
      <listitem>
       <para>
        Runs the test suite under <filename>src/test/kerberos</filename>. This
        requires an MIT Kerberos installation and opens TCP/IP listen sockets.
       </para>
      </listitem>
     </varlistentry>

     <varlistentry>
      <term><literal>ldap</literal></term>
      <listitem>
       <para>
        Runs the test suite under <filename>src/test/ldap</filename>. This
        requires an <productname>OpenLDAP</productname> installation and opens
        TCP/IP listen sockets.
       </para>
      </listitem>
     </varlistentry>

     <varlistentry>
      <term><literal>ssl</literal></term>
      <listitem>
       <para>
        Runs the test suite under <filename>src/test/ssl</filename>. This opens
        TCP/IP listen sockets.
       </para>
      </listitem>
     </varlistentry>

     <varlistentry>
      <term><literal>load_balance</literal></term>
      <listitem>
       <para>
        Runs the test <filename>src/interfaces/libpq/t/004_load_balance_dns.pl</filename>.
        This requires editing the system <filename>hosts</filename> file and
        opens TCP/IP listen sockets.
       </para>
      </listitem>
     </varlistentry>

     <varlistentry>
      <term><literal>wal_consistency_checking</literal></term>
      <listitem>
       <para>
        Uses <literal>wal_consistency_checking=all</literal> while running
        certain tests under <filename>src/test/recovery</filename>. Not
        enabled by default because it is resource intensive.
       </para>
      </listitem>
     </varlistentry>
    </variablelist>

    Tests for features that are not supported by the current build
    configuration are not run even if they are mentioned in
    <varname>PG_TEST_EXTRA</varname>.
   </para>
   <para>
    In addition, there are tests in <filename>src/test/modules</filename>
    which will be run by <literal>make check-world</literal> but not
    by <literal>make installcheck-world</literal>. This is because they
    install non-production extensions or have other side-effects that are
    considered undesirable for a production installation. You can
    use <literal>make install</literal> and <literal>make
    installcheck</literal> in one of those subdirectories if you wish,
    but it's not recommended to do so with a non-test server.
   </para>
  </sect2>
  <sect2 id="regress-run-locale">
   <title>Locale and Encoding</title>

   <para>
    By default, tests using a temporary installation use the
    locale defined in the current environment and the corresponding
    database encoding as determined by <command>initdb</command>. It
    can be useful to test different locales by setting the appropriate
    environment variables, for example:
<screen>
make check LANG=C
make check LC_COLLATE=en_US.utf8 LC_CTYPE=fr_CA.utf8
</screen>
    For implementation reasons, setting <envar>LC_ALL</envar> does not
    work for this purpose; all the other locale-related environment
    variables do work.
   </para>

   <para>
    When testing against an existing installation, the locale is
    determined by the existing database cluster and cannot be set
    separately for the test run.
   </para>

   <para>
    You can also choose the database encoding explicitly by setting
    the variable <envar>ENCODING</envar>, for example:
<screen>
make check LANG=C ENCODING=EUC_JP
</screen>
    Setting the database encoding this way typically only makes sense
    if the locale is C; otherwise the encoding is chosen automatically
    from the locale, and specifying an encoding that does not match
    the locale will result in an error.
   </para>

   <para>
    The database encoding can be set for tests against either a temporary or
    an existing installation, though in the latter case it must be
    compatible with the installation's locale.
   </para>
  </sect2>
  <sect2 id="regress-run-custom-settings">
   <title>Custom Server Settings</title>

   <para>
    Custom server settings to use when running a regression test suite can be
    set in the <varname>PGOPTIONS</varname> environment variable (for settings
    that allow this):
<screen>
make check PGOPTIONS="-c debug_parallel_query=regress -c work_mem=50MB"
</screen>
    When running against a temporary installation, custom settings can also be
    set by supplying a pre-written <filename>postgresql.conf</filename>:
<screen>
echo 'log_checkpoints = on' > test_postgresql.conf
echo 'work_mem = 50MB' >> test_postgresql.conf
make check EXTRA_REGRESS_OPTS="--temp-config=test_postgresql.conf"
</screen>
   </para>

   <para>
    This can be useful to enable additional logging, adjust resource limits,
    or enable extra run-time checks such as <xref
    linkend="guc-debug-discard-caches"/>.
   </para>
  </sect2>
  <sect2 id="regress-run-extra-tests">
   <title>Extra Tests</title>

   <para>
    The core regression test suite contains a few test files that are not
    run by default, because they might be platform-dependent or take a
    very long time to run. You can run these or other extra test
    files by setting the variable <envar>EXTRA_TESTS</envar>. For
    example, to run the <literal>numeric_big</literal> test:
<screen>
make check EXTRA_TESTS=numeric_big
</screen>
   </para>
  </sect2>
 </sect1>
 <sect1 id="regress-evaluation">
  <title>Test Evaluation</title>

  <para>
   Some properly installed and fully functional
   <productname>PostgreSQL</productname> installations can
   <quote>fail</quote> some of these regression tests due to
   platform-specific artifacts such as varying floating-point representation
   and message wording. The tests are currently evaluated using a simple
   <command>diff</command> comparison against the outputs
   generated on a reference system, so the results are sensitive to
   small system differences. When a test is reported as
   <quote>failed</quote>, always examine the differences between
   expected and actual results; you might find that the
   differences are not significant. Nonetheless, we still strive to
   maintain accurate reference files across all supported platforms,
   so it can be expected that all tests pass.
  </para>
  <para>
   The actual outputs of the regression tests are in files in the
   <filename>src/test/regress/results</filename> directory. The test
   script uses <command>diff</command> to compare each output
   file against the reference outputs stored in the
   <filename>src/test/regress/expected</filename> directory. Any
   differences are saved for your inspection in
   <filename>src/test/regress/regression.diffs</filename>.
   (When running a test suite other than the core tests, these files
   of course appear in the relevant subdirectory,
   not <filename>src/test/regress</filename>.)
  </para>

  <para>
   If you don't
   like the <command>diff</command> options that are used by default, set the
   environment variable <envar>PG_REGRESS_DIFF_OPTS</envar>, for
   instance <literal>PG_REGRESS_DIFF_OPTS='-c'</literal>. (Or you
   can run <command>diff</command> yourself, if you prefer.)
  </para>
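Running <command>diff</command> by hand can be sketched as follows; the directory layout mirrors the one described above, but the file name and its contents are invented stand-ins, not real regression outputs:

```shell
# Invented stand-in files; real outputs live under results/ and expected/.
mkdir -p demo/expected demo/results
printf 'one\ntwo\nthree\n' > demo/expected/sample.out
printf 'one\nTWO\nthree\n' > demo/results/sample.out
# Context-format comparison, as PG_REGRESS_DIFF_OPTS='-c' would request;
# diff exits nonzero when the files differ, so guard it with "|| true".
diff -c demo/expected/sample.out demo/results/sample.out || true
```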
  <para>
   If for some reason a particular platform generates a <quote>failure</quote>
   for a given test, but inspection of the output convinces you that
   the result is valid, you can add a new comparison file to silence
   the failure report in future test runs. See
   <xref linkend="regress-variant"/> for details.
  </para>
  <sect2 id="regress-evaluation-message-differences">
   <title>Error Message Differences</title>

   <para>
    Some of the regression tests involve intentional invalid input
    values. Error messages can come from either the
    <productname>PostgreSQL</productname> code or from the host
    platform system routines. In the latter case, the messages can
    vary between platforms, but should reflect similar
    information. These differences in messages will result in a
    <quote>failed</quote> regression test that can be validated by
    inspection.
   </para>
  </sect2>
  <sect2 id="regress-evaluation-locale-differences">
   <title>Locale Differences</title>

   <para>
    If you run the tests against a server that was
    initialized with a collation-order locale other than C, then
    there might be differences due to sort order and subsequent
    failures. The regression test suite is set up to handle this
    problem by providing alternate result files that together are
    known to handle a large number of locales.
   </para>

   <para>
    To run the tests in a different locale when using the
    temporary-installation method, pass the appropriate
    locale-related environment variables on
    the <command>make</command> command line, for example:
<programlisting>
make check LANG=de_DE.utf8
</programlisting>
    (The regression test driver unsets <envar>LC_ALL</envar>, so it
    does not work to choose the locale using that variable.) To use
    no locale, either unset all locale-related environment variables
    (or set them to <literal>C</literal>) or use the following
    special invocation:
<programlisting>
make check NO_LOCALE=1
</programlisting>
    When running the tests against an existing installation, the
    locale setup is determined by the existing installation. To
    change it, initialize the database cluster with a different
    locale by passing the appropriate options
    to <command>initdb</command>.
   </para>

   <para>
    In general, it is advisable to try to run the
    regression tests in the locale setup that is wanted for
    production use, as this will exercise the locale- and
    encoding-related code portions that will actually be used in
    production. Depending on the operating system environment, you
    might get failures, but then you will at least know what
    locale-specific behaviors to expect when running real
    applications.
   </para>
  </sect2>
  <sect2 id="regress-evaluation-date-time-differences">
   <title>Date and Time Differences</title>

   <para>
    Most of the date and time results are dependent on the time zone
    environment. The reference files are generated for time zone
    <literal>PST8PDT</literal> (Berkeley, California), and there will be
    apparent failures if the tests are not run with that time zone setting.
    The regression test driver sets environment variable
    <envar>PGTZ</envar> to <literal>PST8PDT</literal>, which normally
    ensures proper results.
   </para>
  </sect2>
  <sect2 id="regress-evaluation-float-differences">
   <title>Floating-Point Differences</title>

   <para>
    Some of the tests involve computing 64-bit floating-point numbers (<type>double
    precision</type>) from table columns. Differences in
    results involving mathematical functions of <type>double
    precision</type> columns have been observed. The <literal>float8</literal> and
    <literal>geometry</literal> tests are particularly prone to small differences
    across platforms, or even with different compiler optimization settings.
    Human eyeball comparison is needed to determine the real
    significance of these differences, which are usually 10 places to
    the right of the decimal point.
   </para>

   <para>
    Some systems display minus zero as <literal>-0</literal>, while others
    just show <literal>0</literal>.
   </para>

   <para>
    Some systems signal errors from <function>pow()</function> and
    <function>exp()</function> differently from the mechanism
    expected by the current <productname>PostgreSQL</productname>
    code.
   </para>
  </sect2>
  <sect2 id="regress-evaluation-ordering-differences">
   <title>Row Ordering Differences</title>

   <para>
    You might see differences in which the same rows are output in a
    different order than what appears in the expected file. In most cases
    this is not, strictly speaking, a bug. Most of the regression test
    scripts are not so pedantic as to use an <literal>ORDER BY</literal> for every single
    <literal>SELECT</literal>, and so their result row orderings are not well-defined
    according to the SQL specification. In practice, since we are
    looking at the same queries being executed on the same data by the same
    software, we usually get the same result ordering on all platforms,
    so the lack of <literal>ORDER BY</literal> is not a problem. Some queries do exhibit
    cross-platform ordering differences, however. When testing against an
    already-installed server, ordering differences can also be caused by
    non-C locale settings or non-default parameter settings, such as custom values
    of <varname>work_mem</varname> or the planner cost parameters.
   </para>

   <para>
    Therefore, if you see an ordering difference, it's not something to
    worry about, unless the query does have an <literal>ORDER BY</literal> that your
    result is violating. However, please report it anyway, so that we can add an
    <literal>ORDER BY</literal> to that particular query to eliminate the bogus
    <quote>failure</quote> in future releases.
   </para>

   <para>
    You might wonder why we don't order all the regression test queries explicitly
    to get rid of this issue once and for all. The reason is that doing so would
    make the regression tests less useful, not more, since they'd tend
    to exercise query plan types that produce ordered results to the
    exclusion of those that don't.
   </para>
  </sect2>
  <sect2 id="regress-evaluation-stack-depth">
   <title>Insufficient Stack Depth</title>

   <para>
    If the <literal>errors</literal> test results in a server crash
    at the <literal>select infinite_recurse()</literal> command, it means that
    the platform's limit on process stack size is smaller than the
    <xref linkend="guc-max-stack-depth"/> parameter indicates. This
    can be fixed by running the server under a higher stack
    size limit (4MB is recommended with the default value of
    <varname>max_stack_depth</varname>). If you are unable to do that, an
    alternative is to reduce the value of <varname>max_stack_depth</varname>.
   </para>

   <para>
    On platforms supporting <function>getrlimit()</function>, the server should
    automatically choose a safe value of <varname>max_stack_depth</varname>;
    so unless you've manually overridden this setting, a failure of this
    kind is a reportable bug.
   </para>
  </sect2>
  <sect2 id="regress-evaluation-random-test">
   <title>The <quote>random</quote> Test</title>

   <para>
    The <literal>random</literal> test script is intended to produce
    random results. In very rare cases, this causes that regression
    test to fail. Typing:
<programlisting>
diff results/random.out expected/random.out
</programlisting>
    should produce only one or a few lines of differences. You need
    not worry unless the random test fails repeatedly.
   </para>
  </sect2>
  <sect2 id="regress-evaluation-config-params">
   <title>Configuration Parameters</title>

   <para>
    When running the tests against an existing installation, some non-default
    parameter settings could cause the tests to fail. For example, changing
    parameters such as <varname>enable_seqscan</varname> or
    <varname>enable_indexscan</varname> could cause plan changes that would
    affect the results of tests that use <command>EXPLAIN</command>.
   </para>
  </sect2>
 </sect1>
<!-- We might want to move the following section into the developer's guide. -->
 <sect1 id="regress-variant">
  <title>Variant Comparison Files</title>

  <para>
   Since some of the tests inherently produce environment-dependent
   results, we have provided ways to specify alternate <quote>expected</quote>
   result files. Each regression test can have several comparison files
   showing possible results on different platforms. There are two
   independent mechanisms for determining which comparison file is used
   for each test.
  </para>

  <para>
   The first mechanism allows comparison files to be selected for
   specific platforms. There is a mapping file,
   <filename>src/test/regress/resultmap</filename>, that defines
   which comparison file to use for each platform.
   To eliminate bogus test <quote>failures</quote> for a particular platform,
   you first choose or make a variant result file, and then add a line to the
   <filename>resultmap</filename> file.
  </para>

  <para>
   Each line in the mapping file is of the form
<synopsis>
testname:output:platformpattern=comparisonfilename
</synopsis>
   The test name is just the name of the particular regression test
   module. The output value indicates which output file to check. For the
   standard regression tests, this is always <literal>out</literal>. The
   value corresponds to the file extension of the output file.
   The platform pattern is a pattern in the style of the Unix
   tool <command>expr</command> (that is, a regular expression with an implicit
   <literal>^</literal> anchor at the start). It is matched against the
   platform name as printed by <command>config.guess</command>.
   The comparison file name is the base name of the substitute result
   comparison file.
  </para>
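The anchored matching behavior can be checked directly from the shell with `expr`; the platform string below is merely a plausible example of `config.guess` output, not tied to any particular machine:

```shell
# expr STRING : REGEX performs an implicitly ^-anchored basic-regex match
# and prints the number of characters matched (0 when there is no match).
platform="x86_64-pc-cygwin"
expr "$platform" : '.*-.*-cygwin.*'    # whole string matches: prints 16
expr "$platform" : 'cygwin.*' || true  # anchored at the start, so no match: prints 0
```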
  <para>
   For example: some systems lack a working <literal>strtof</literal> function,
   for which our workaround causes rounding errors in the
   <filename>float4</filename> regression test.
   Therefore, we provide a variant comparison file,
   <filename>float4-misrounded-input.out</filename>, which includes
   the results to be expected on these systems. To silence the bogus
   <quote>failure</quote> message on <systemitem>Cygwin</systemitem>
   platforms, <filename>resultmap</filename> includes:
<programlisting>
float4:out:.*-.*-cygwin.*=float4-misrounded-input.out
</programlisting>
   which will trigger on any machine where the output of
   <command>config.guess</command> matches <literal>.*-.*-cygwin.*</literal>.
   Other lines in <filename>resultmap</filename> select the variant comparison
   file for other platforms where it's appropriate.
  </para>

  <para>
   The second selection mechanism for variant comparison files is
   much more automatic: it simply uses the <quote>best match</quote> among
   several supplied comparison files. The regression test driver
   script considers both the standard comparison file for a test,
   <literal><replaceable>testname</replaceable>.out</literal>, and variant files named
   <literal><replaceable>testname</replaceable>_<replaceable>digit</replaceable>.out</literal>
   (where the <replaceable>digit</replaceable> is any single digit
   <literal>0</literal>-<literal>9</literal>). If any such file is an exact match,
   the test is considered to pass; otherwise, the one that generates
   the shortest diff is used to create the failure report. (If
   <filename>resultmap</filename> includes an entry for the particular
   test, then the base <replaceable>testname</replaceable> is the substitute
   name given in <filename>resultmap</filename>.)
  </para>
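The exact-match part of this selection can be sketched in shell; the file names and contents below are invented for illustration, and the real logic (including the shortest-diff fallback) is implemented in C inside pg_regress:

```shell
# Invented example: the standard file differs, but one variant matches exactly.
printf 'apple\nbanana\n' > actual.out
printf 'apple\nBANANA\n' > sample.out    # standard comparison file (differs)
printf 'apple\nbanana\n' > sample_1.out  # variant comparison file (exact match)
best=""
for f in sample.out sample_1.out; do
  # cmp -s is silent and exits 0 only on a byte-for-byte match
  if cmp -s "$f" actual.out; then best="$f"; break; fi
done
echo "${best:-need shortest-diff fallback}"   # prints: sample_1.out
```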
  <para>
   For example, for the <literal>char</literal> test, the comparison file
   <filename>char.out</filename> contains results that are expected
   in the <literal>C</literal> and <literal>POSIX</literal> locales, while
   the file <filename>char_1.out</filename> contains results sorted as
   they appear in many other locales.
  </para>

  <para>
   The best-match mechanism was devised to cope with locale-dependent
   results, but it can be used in any situation where the test results
   cannot be predicted easily from the platform name alone. A limitation of
   this mechanism is that the test driver cannot tell which variant is
   actually <quote>correct</quote> for the current environment; it will just pick
   the variant that seems to work best. Therefore it is safest to use this
   mechanism only for variant results that you are willing to consider
   equally valid in all contexts.
  </para>
 </sect1>
 <sect1 id="regress-tap">
  <title>TAP Tests</title>

  <para>
   Various tests, particularly the client program tests
   under <filename>src/bin</filename>, use the Perl TAP tools and are run
   using the Perl testing program <command>prove</command>. You can pass
   command-line options to <command>prove</command> by setting
   the <command>make</command> variable <varname>PROVE_FLAGS</varname>, for example:
<programlisting>
make -C src/bin check PROVE_FLAGS='--timer'
</programlisting>
   See the manual page of <command>prove</command> for more information.
  </para>

  <para>
   The <command>make</command> variable <varname>PROVE_TESTS</varname>
   can be used to define a whitespace-separated list of paths relative
   to the <filename>Makefile</filename> invoking <command>prove</command>
   to run the specified subset of tests instead of the default
   <filename>t/*.pl</filename>. For example:
<programlisting>
make check PROVE_TESTS='t/001_test1.pl t/003_test3.pl'
</programlisting>
  </para>

  <para>
   The TAP tests require the Perl module <literal>IPC::Run</literal>.
   This module is available from
   <ulink url="https://metacpan.org/dist/IPC-Run">CPAN</ulink>
   or an operating system package.
   They also require <productname>PostgreSQL</productname> to be
   configured with the option <option>--enable-tap-tests</option>.
  </para>
  <para>
   Generally speaking, the TAP tests will test the executables in a
   previously-installed installation tree if you say <literal>make
   installcheck</literal>, or will build a new local installation tree from
   current sources if you say <literal>make check</literal>.  In either
   case they will initialize a local instance (data directory) and
   transiently run a server in it.  Some of these tests run more than one
   server.  Thus, these tests can be fairly resource-intensive.
  </para>
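  <para>
   For example, to run the TAP tests of a single client program both ways
   (the <filename>src/bin/psql</filename> directory is chosen here only for
   illustration):
<screen>
make -C src/bin/psql check          # builds and tests a temporary installation
make -C src/bin/psql installcheck   # tests previously installed executables
</screen>
  </para>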
  <para>
   It's important to realize that the TAP tests will start test server(s)
   even when you say <literal>make installcheck</literal>; this is unlike
   the traditional non-TAP testing infrastructure, which expects to use an
   already-running test server in that case.  Some PostgreSQL
   subdirectories contain both traditional-style and TAP-style tests,
   meaning that <literal>make installcheck</literal> will produce a mix of
   results from temporary servers and the already-running test server.
  </para>
  <sect2 id="regress-tap-vars">
   <title>Environment Variables</title>

   <para>
    Data directories are named according to the test filename, and will be
    retained if a test fails.  If the environment variable
    <varname>PG_TEST_NOCLEAN</varname> is set, data directories will be
    retained regardless of test status.  For example, to retain the data
    directory regardless of test results when running the
    <application>pg_dump</application> tests:
<programlisting>
PG_TEST_NOCLEAN=1 make -C src/bin/pg_dump check
</programlisting>
    This environment variable also prevents the tests' temporary directories
    from being removed.
   </para>
   <para>
    Many operations in the test suites use a 180-second timeout, which on
    slow or heavily loaded hosts may expire spuriously.  Setting the
    environment variable <varname>PG_TEST_TIMEOUT_DEFAULT</varname> to a
    higher number of seconds changes the default and avoids such failures.
   </para>
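   <para>
    For example, to allow each such operation up to 20 minutes (the value
    shown is arbitrary; any number of seconds can be used):
<screen>
PG_TEST_TIMEOUT_DEFAULT=1200 make -C src/bin/pg_dump check
</screen>
   </para>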
  </sect2>
 </sect1>

 <sect1 id="regress-coverage">
  <title>Test Coverage Examination</title>

  <para>
   The PostgreSQL source code can be compiled with coverage testing
   instrumentation, so that it becomes possible to examine which
   parts of the code are covered by the regression tests or any other
   test suite that is run with the code.  This is currently supported
   when compiling with GCC, and it requires the <literal>gcov</literal>
   and <literal>lcov</literal> packages.
  </para>
  <sect2 id="regress-coverage-configure">
   <title>Coverage with Autoconf and Make</title>
   <para>
    A typical workflow looks like this:
<screen>
./configure --enable-coverage ... OTHER OPTIONS ...
make
make check # or other test suite
make coverage-html
</screen>
    Then point your HTML browser
    to <filename>coverage/index.html</filename>.
   </para>

   <para>
    If you don't have <command>lcov</command> or prefer text output over an
    HTML report, you can run
<screen>
make coverage
</screen>
    instead of <literal>make coverage-html</literal>, which will
    produce <filename>.gcov</filename> output files for each source file
    relevant to the test.  (<literal>make coverage</literal> and <literal>make
    coverage-html</literal> will overwrite each other's files, so mixing them
    might be confusing.)
   </para>

   <para>
    You can run several different tests before making the coverage report;
    the execution counts will accumulate.  If you want
    to reset the execution counts between test runs, run:
<screen>
make coverage-clean
</screen>
   </para>
   <para>
    You can run the <literal>make coverage-html</literal> or <literal>make
    coverage</literal> command in a subdirectory if you want a coverage
    report for only a portion of the code tree.
   </para>
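   <para>
    For instance, to build a coverage report for just one part of the tree
    (the path shown is only an example):
<screen>
cd src/backend/executor
make coverage-html
</screen>
   </para>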
   <para>
    Use <literal>make distclean</literal> to clean up when done.
   </para>
  </sect2>
  <sect2 id="regress-coverage-meson">
   <title>Coverage with Meson</title>
   <para>
    A typical workflow looks like this:
<screen>
meson setup -Db_coverage=true ... OTHER OPTIONS ... builddir/
meson compile -C builddir/
meson test -C builddir/
cd builddir/
ninja coverage-html
</screen>
    Then point your HTML browser
    to <filename>./meson-logs/coveragereport/index.html</filename>.
   </para>
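   <para>
    If you prefer text output over an HTML report, Meson's coverage support
    also offers a plain-text target (this assumes the
    <command>gcovr</command> tool is installed):
<screen>
cd builddir/
ninja coverage-text
</screen>
    The resulting report is written under <filename>./meson-logs/</filename>.
   </para>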
   <para>
    You can run several different tests before making the coverage report;
    the execution counts will accumulate.
   </para>
  </sect2>
 </sect1>

</chapter>