1 =================================
2 LLVM Testing Infrastructure Guide
3 =================================
.. toctree::
   :hidden:

   TestSuiteMakefileGuide
16 This document is the reference manual for the LLVM testing
17 infrastructure. It documents the structure of the LLVM testing
infrastructure, the tools needed to use it, and how to add and run tests.
Requirements
============

In order to use the LLVM testing infrastructure, you will need all of the
software required to build LLVM, as well as `Python <http://python.org>`_ 2.7 or
later.
28 If you intend to run the :ref:`test-suite <test-suite-overview>`, you will also
need a development version of zlib (zlib1g-dev is known to work on several
Linux distributions).
32 LLVM testing infrastructure organization
33 ========================================
35 The LLVM testing infrastructure contains two major categories of tests:
36 regression tests and whole programs. The regression tests are contained
37 inside the LLVM repository itself under ``llvm/test`` and are expected
38 to always pass -- they should be run before every commit.
The whole-program tests are referred to as the "LLVM test suite" (or
"test-suite") and are in the ``test-suite`` module in Subversion. For
42 historical reasons, these tests are also referred to as the "nightly
43 tests" in places, which is less ambiguous than "test-suite" and remains
44 in use although we run them much more often than nightly.
Regression tests
----------------

The regression tests are small pieces of code that test a specific
50 feature of LLVM or trigger a specific bug in LLVM. The language they are
51 written in depends on the part of LLVM being tested. These tests are driven by
52 the :doc:`Lit <CommandGuide/lit>` testing tool (which is part of LLVM), and
53 are located in the ``llvm/test`` directory.
55 Typically when a bug is found in LLVM, a regression test containing just
56 enough code to reproduce the problem should be written and placed
57 somewhere underneath this directory. For example, it can be a small
58 piece of LLVM IR distilled from an actual application or benchmark.
``test-suite``
--------------

The test suite contains whole programs, which are pieces of code which
can be compiled and linked into a stand-alone program that can be
executed. These programs are generally written in high level languages
such as C or C++.

These programs are compiled using a user-specified compiler and set of
69 flags, and then executed to capture the program output and timing
70 information. The output of these programs is compared to a reference
71 output to ensure that the program is being compiled correctly.
73 In addition to compiling and executing programs, whole program tests
74 serve as a way of benchmarking LLVM performance, both in terms of the
75 efficiency of the programs generated as well as the speed with which
76 LLVM compiles, optimizes, and generates code.
78 The test-suite is located in the ``test-suite`` Subversion module.
80 Debugging Information tests
81 ---------------------------
The test suite contains tests to check the quality of debugging information.
The tests are written in C-based languages or in LLVM assembly language.

These tests are compiled and run under a debugger. The debugger output
is checked to validate the debugging information. See ``README.txt`` in the
test suite for more information. This test suite is located in the
89 ``debuginfo-tests`` Subversion module.
Quick start
===========

The tests are located in two separate Subversion modules. The
regression tests are in the main "llvm" module under the directory
96 ``llvm/test`` (so you get these tests for free with the main LLVM tree).
97 Use ``make check-all`` to run the regression tests after building LLVM.
99 The more comprehensive test suite that includes whole programs in C and C++
100 is in the ``test-suite`` module. See :ref:`test-suite Quickstart
101 <test-suite-quickstart>` for more information on running these tests.
Regression tests
----------------

To run all of the LLVM regression tests, use the ``check-llvm`` target:
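.. code-block:: console

    % make check-llvm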
112 If you have `Clang <http://clang.llvm.org/>`_ checked out and built, you
113 can run the LLVM and Clang tests simultaneously using:
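.. code-block:: console

    % make check-all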
119 To run the tests with Valgrind (Memcheck by default), use the ``LIT_ARGS`` make
120 variable to pass the required options to lit. For example, you can use:
.. code-block:: console

    % make check LIT_ARGS="-v --vg --vg-leak"
126 to enable testing with valgrind and with leak checking enabled.
128 To run individual tests or subsets of tests, you can use the ``llvm-lit``
129 script which is built as part of LLVM. For example, to run the
130 ``Integer/BitPacked.ll`` test by itself you can run:
.. code-block:: console

    % llvm-lit ~/llvm/test/Integer/BitPacked.ll
136 or to run all of the ARM CodeGen tests:
.. code-block:: console

    % llvm-lit ~/llvm/test/CodeGen/ARM
142 For more information on using the :program:`lit` tool, see ``llvm-lit --help``
143 or the :doc:`lit man page <CommandGuide/lit>`.
145 Debugging Information tests
146 ---------------------------
To run the debugging information tests, simply check out the tests inside the
``clang/test`` directory:
.. code-block:: console

    % cd clang/test
    % svn co http://llvm.org/svn/llvm-project/debuginfo-tests/trunk debuginfo-tests
156 These tests are already set up to run as part of clang regression tests.
158 Regression test structure
159 =========================
161 The LLVM regression tests are driven by :program:`lit` and are located in the
162 ``llvm/test`` directory.
This directory contains a large array of small tests that exercise
various features of LLVM and ensure that regressions do not occur.
166 The directory is broken into several sub-directories, each focused on a
167 particular area of LLVM.
169 Writing new regression tests
170 ----------------------------
172 The regression test structure is very simple, but does require some
173 information to be set. This information is gathered via ``configure``
174 and is written to a file, ``test/lit.site.cfg`` in the build directory.
175 The ``llvm/test`` Makefile does this work for you.
177 In order for the regression tests to work, each directory of tests must
178 have a ``lit.local.cfg`` file. :program:`lit` looks for this file to determine
179 how to run the tests. This file is just Python code and thus is very
180 flexible, but we've standardized it for the LLVM regression tests. If
181 you're adding a directory of tests, just copy ``lit.local.cfg`` from
182 another directory to get running. The standard ``lit.local.cfg`` simply
183 specifies which files to look in for tests. Any directory that contains
184 only directories does not need the ``lit.local.cfg`` file. Read the :doc:`Lit
185 documentation <CommandGuide/lit>` for more information.
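For instance, a minimal ``lit.local.cfg`` for a directory of LLVM IR tests
might look like the following sketch (the exact suffix list depends on the
directory):

.. code-block:: python

    # Treat only files with these suffixes as test cases in this directory.
    config.suffixes = ['.ll']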
187 Each test file must contain lines starting with "RUN:" that tell :program:`lit`
188 how to run it. If there are no RUN lines, :program:`lit` will issue an error
189 while running a test.
191 RUN lines are specified in the comments of the test program using the
192 keyword ``RUN`` followed by a colon, and lastly the command (pipeline)
193 to execute. Together, these lines form the "script" that :program:`lit`
194 executes to run the test case. The syntax of the RUN lines is similar to a
195 shell's syntax for pipelines including I/O redirection and variable
196 substitution. However, even though these lines may *look* like a shell
197 script, they are not. RUN lines are interpreted by :program:`lit`.
198 Consequently, the syntax differs from shell in a few ways. You can specify
199 as many RUN lines as needed.
201 :program:`lit` performs substitution on each RUN line to replace LLVM tool names
202 with the full paths to the executable built for each tool (in
``$(LLVM_OBJ_ROOT)/$(BuildMode)/bin``). This ensures that :program:`lit` does
204 not invoke any stray LLVM tools in the user's path during testing.
206 Each RUN line is executed on its own, distinct from other lines unless
207 its last character is ``\``. This continuation character causes the RUN
208 line to be concatenated with the next one. In this way you can build up
209 long pipelines of commands without making huge line lengths. The lines
210 ending in ``\`` are concatenated until a RUN line that doesn't end in
211 ``\`` is found. This concatenated set of RUN lines then constitutes one
212 execution. :program:`lit` will substitute variables and arrange for the pipeline
213 to be executed. If any process in the pipeline fails, the entire line (and
214 test case) fails too.
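For example, a single logical RUN line can be split across two physical lines
(the tools and pass chosen here are just an illustration):

.. code-block:: llvm

    ; RUN: llvm-as < %s | opt -instcombine | \
    ; RUN:   llvm-dis | FileCheck %s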
216 Below is an example of legal RUN lines in a ``.ll`` file:
.. code-block:: llvm

    ; RUN: llvm-as < %s | llvm-dis > %t1
    ; RUN: llvm-dis < %s.bc-13 > %t2
    ; RUN: diff %t1 %t2
224 As with a Unix shell, the RUN lines permit pipelines and I/O
225 redirection to be used.
227 There are some quoting rules that you must pay attention to when writing
228 your RUN lines. In general nothing needs to be quoted. :program:`lit` won't
229 strip off any quote characters so they will get passed to the invoked program.
To avoid this, use curly braces to tell :program:`lit` that it should treat
231 everything enclosed as one value.
233 In general, you should strive to keep your RUN lines as simple as possible,
234 using them only to run tools that generate textual output you can then examine.
235 The recommended way to examine output to figure out if the test passes is using
236 the :doc:`FileCheck tool <CommandGuide/FileCheck>`. *[The usage of grep in RUN
237 lines is deprecated - please do not send or commit patches that use it.]*
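As a sketch of the recommended style, a test can pipe a tool's textual output
straight into FileCheck and keep the CHECK lines next to the code being tested
(the pass and function names here are illustrative only):

.. code-block:: llvm

    ; RUN: opt < %s -instcombine -S | FileCheck %s

    ; CHECK-LABEL: @test(
    ; CHECK: ret i32 %a
    define i32 @test(i32 %a) {
      %b = add i32 %a, 0
      ret i32 %b
    }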
239 Put related tests into a single file rather than having a separate file per
240 test. Check if there are files already covering your feature and consider
241 adding your code there instead of creating a new file.
Extra files
-----------

If your test requires extra files besides the file containing the ``RUN:``
247 lines, the idiomatic place to put them is in a subdirectory ``Inputs``.
248 You can then refer to the extra files as ``%S/Inputs/foo.bar``.
For example, consider ``test/Linker/ident.ll``. The directory structure is as
follows::
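    test/
      Linker/
        ident.ll
        Inputs/
          ident.a.ll
          ident.b.ll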
260 For convenience, these are the contents:
.. code-block:: llvm

    ;;;;; ident.ll:

    ; RUN: llvm-link %S/Inputs/ident.a.ll %S/Inputs/ident.b.ll -S | FileCheck %s

    ; Verify that multiple input llvm.ident metadata are linked together.

    ; CHECK-DAG: !llvm.ident = !{!0, !1, !2}
    ; CHECK-DAG: "Compiler V1"
    ; CHECK-DAG: "Compiler V2"
    ; CHECK-DAG: "Compiler V3"

    ;;;;; Inputs/ident.a.ll:

    !llvm.ident = !{!0, !1}
    !0 = metadata !{metadata !"Compiler V1"}
    !1 = metadata !{metadata !"Compiler V2"}

    ;;;;; Inputs/ident.b.ll:

    !llvm.ident = !{!0}
    !0 = metadata !{metadata !"Compiler V3"}
286 For symmetry reasons, ``ident.ll`` is just a dummy file that doesn't
287 actually participate in the test besides holding the ``RUN:`` lines.
291 Some existing tests use ``RUN: true`` in extra files instead of just
putting the extra files in an ``Inputs/`` directory. This pattern is
deprecated.

Fragile tests
-------------
298 It is easy to write a fragile test that would fail spuriously if the tool being
299 tested outputs a full path to the input file. For example, :program:`opt` by
300 default outputs a ``ModuleID``:
.. code-block:: llvm

    define i32 @main() nounwind {
        ret i32 0
    }

.. code-block:: console

    $ opt -S /path/to/example.ll
    ; ModuleID = '/path/to/example.ll'

    define i32 @main() nounwind {
        ret i32 0
    }
``ModuleID`` can unexpectedly match against ``CHECK`` lines. For example:
.. code-block:: llvm

    ; RUN: opt -S %s | FileCheck %s

    define i32 @main() nounwind {
        ; CHECK-NOT: load
        ret i32 0
    }
327 This test will fail if placed into a ``download`` directory.
329 To make your tests robust, always use ``opt ... < %s`` in the RUN line.
330 :program:`opt` does not output a ``ModuleID`` when input comes from stdin.
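For instance, the fragile example above becomes robust once the input is read
from stdin (a sketch):

.. code-block:: llvm

    ; RUN: opt < %s -S | FileCheck %s

    define i32 @main() nounwind {
        ; CHECK-NOT: load
        ret i32 0
    }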
332 Platform-Specific Tests
333 -----------------------
Whenever adding tests that require the knowledge of a specific platform,
whether related to code generation, specific output, or back-end features,
you must make sure to isolate those features, so that buildbots that
run on different architectures (and don't even compile all back-ends)
don't fail.
The first problem is to check for target-specific output, such as sizes
of structures, paths, and architecture names. For example:
344 * Tests containing Windows paths will fail on Linux and vice-versa.
345 * Tests that check for ``x86_64`` somewhere in the text will fail anywhere else.
346 * Tests where the debug information calculates the size of types and structures.
Also, if the test relies on any behaviour that is coded in any back-end, it must
go in its own directory. So, for instance, code generator tests for ARM go
into ``test/CodeGen/ARM`` and so on. Those directories contain a special
``lit`` configuration file that ensures all tests in that directory will
only run if a specific back-end is compiled and available.
For instance, in ``test/CodeGen/ARM``, the ``lit.local.cfg`` is:
356 .. code-block:: python
    config.suffixes = ['.ll', '.c', '.cpp', '.test']
    if 'ARM' not in config.root.targets:
        config.unsupported = True
Other platform-specific tests are those that depend on a specific feature
of a specific sub-architecture, for example features available only on Intel
chips that support ``AVX2``.

For instance, ``test/CodeGen/X86/psubus.ll`` tests three sub-architecture
variants:
.. code-block:: llvm

    ; RUN: llc -mcpu=core2 < %s | FileCheck %s -check-prefix=SSE2
    ; RUN: llc -mcpu=corei7-avx < %s | FileCheck %s -check-prefix=AVX1
    ; RUN: llc -mcpu=core-avx2 < %s | FileCheck %s -check-prefix=AVX2
374 And the checks are different:
.. code-block:: llvm

    ; SSE2: psubusw LCPI0_0(%rip), %xmm0

    ; AVX1: vpsubusw LCPI0_0(%rip), %xmm0, %xmm0

    ; AVX2: vpsubusw LCPI0_0(%rip), %xmm0, %xmm0
So, if you're testing for a behaviour that you know is platform-specific or
depends on special features of sub-architectures, you must add the specific
triple, test with the appropriate FileCheck prefixes, and put the test into the
specific directory that will filter out all other architectures.
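For example, a hypothetical X86-only test could pin both the triple and the
CPU on its RUN line and live under ``test/CodeGen/X86``:

.. code-block:: llvm

    ; RUN: llc -mtriple=x86_64-unknown-linux-gnu -mcpu=core-avx2 < %s \
    ; RUN:   | FileCheck %s -check-prefix=AVX2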
390 REQUIRES and REQUIRES-ANY directive
391 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some tests can be enabled only in specific situations, such as a build with
assertions enabled. Use the ``REQUIRES`` directive to specify those
requirements.
.. code-block:: llvm

    ; This test will be only enabled in the build with asserts.
    ; REQUIRES: asserts
401 You can separate requirements by a comma.
402 ``REQUIRES`` means all listed requirements must be satisfied.
403 ``REQUIRES-ANY`` means at least one must be satisfied.
The list of features that can be used in ``REQUIRES`` and ``REQUIRES-ANY`` can
be found in the lit.cfg files.
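For illustration, a test that needs both an asserts build and loadable-module
support could state (these feature names are taken from the examples above;
check your ``lit.cfg`` for the authoritative list):

.. code-block:: llvm

    ; REQUIRES: asserts, loadable_module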
Substitutions
-------------

Besides replacing LLVM tool names, the following substitutions are performed in
RUN lines:
``%%``
    Replaced by a single ``%``. This allows escaping other substitutions.
``%s``
    File path to the test case's source. This is suitable for passing on the
    command line as the input to an LLVM tool.

    Example: ``/home/user/llvm/test/MC/ELF/foo_test.s``
``%S``
    Directory path to the test case's source.

    Example: ``/home/user/llvm/test/MC/ELF``
``%t``
    File path to a temporary file name that could be used for this test case.
    The file name won't conflict with other test cases. You can append to it
    if you need multiple temporaries. This is useful as the destination of
    some redirected output.

    Example: ``/home/user/llvm.build/test/MC/ELF/Output/foo_test.s.tmp``
``%T``
    Directory of ``%t``.

    Example: ``/home/user/llvm.build/test/MC/ELF/Output``
``%{pathsep}``
    Expands to the path separator, i.e. ``:`` (or ``;`` on Windows).
446 **LLVM-specific substitutions:**
``%shlibext``
    The suffix for the host platform's shared library files. This includes the
    period as the first character.

    Example: ``.so`` (Linux), ``.dylib`` (OS X), ``.dll`` (Windows)
``%exeext``
    The suffix for the host platform's executable files. This includes the
    period as the first character.

    Example: ``.exe`` (Windows), empty on Linux.
460 ``%(line)``, ``%(line+<number>)``, ``%(line-<number>)``
    The number of the line where this substitution is used, with an optional
    integer offset. This can be used in tests with multiple RUN lines, which
    reference the test file's line numbers.
466 **Clang-specific substitutions:**
``%clang``
    Invokes the Clang driver.

``%clang_cpp``
    Invokes the Clang driver for C++.

``%clang_cl``
    Invokes the CL-compatible Clang driver.

``%clangxx``
    Invokes the G++-compatible Clang driver.

``%clang_cc1``
    Invokes the Clang frontend.
483 ``%itanium_abi_triple``, ``%ms_abi_triple``
484 These substitutions can be used to get the current target triple adjusted to
485 the desired ABI. For example, if the test suite is running with the
486 ``i686-pc-win32`` target, ``%itanium_abi_triple`` will expand to
487 ``i686-pc-mingw32``. This allows a test to run with a specific ABI without
488 constraining it to a specific triple.
To add more substitutions, look at ``test/lit.cfg`` or ``lit.local.cfg``.
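As a quick illustration of how several substitutions combine, a test might
assemble its own source into a temporary bitcode file and then disassemble it
again (the tools used here are only an example):

.. code-block:: llvm

    ; RUN: llvm-as %s -o %t
    ; RUN: llvm-dis < %t | FileCheck %s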
The LLVM lit configuration allows some things to be customized with user
options:
498 ``llc``, ``opt``, ...
    Substitute the respective llvm tool name with a custom command line. This
    allows you to specify custom paths and default arguments for these tools.
    Example:
    .. code-block:: console

        % llvm-lit "-Dllc=llc -verify-machineinstrs"
``run_long_tests``
    Enable the execution of long running tests.
``llvm_site_config``
    Load the specified lit configuration instead of the default one.
515 To make RUN line writing easier, there are several helper programs. These
516 helpers are in the PATH when running tests, so you can just call them using
517 their name. For example:
``not``
    This program runs its arguments and then inverts the result code from it.
    Zero result codes become 1. Non-zero result codes become 0.
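For example, ``not`` lets a test assert that a tool rejects its input; the
exact diagnostic checked for here is hypothetical:

.. code-block:: llvm

    ; RUN: not llvm-as < %s 2>&1 | FileCheck %s
    ; CHECK: error:
    ; (this assumes the rest of the file is intentionally malformed IR)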
523 Sometimes it is necessary to mark a test case as "expected fail" or
524 XFAIL. You can easily mark a test as XFAIL just by including ``XFAIL:``
on a line near the top of the file. This signals that the test case is
expected to fail and is counted separately
527 by the testing tool. To specify an expected fail, use the XFAIL keyword
528 in the comments of the test program followed by a colon and one or more
529 failure patterns. Each failure pattern can be either ``*`` (to specify
530 fail everywhere), or a part of a target triple (indicating the test
531 should fail on that platform), or the name of a configurable feature
532 (for example, ``loadable_module``). If there is a match, the test is
533 expected to fail. If not, the test is expected to succeed. To XFAIL
everywhere just specify ``XFAIL: *``. Here is an example of an ``XFAIL`` line:
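.. code-block:: llvm

    ; XFAIL: darwin,sun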
541 To make the output more useful, :program:`lit` will scan
542 the lines of the test case for ones that contain a pattern that matches
543 ``PR[0-9]+``. This is the syntax for specifying a PR (Problem Report) number
544 that is related to the test case. The number after "PR" specifies the
545 LLVM bugzilla number. When a PR number is specified, it will be used in
the pass/fail reporting. This is useful to quickly get some context when a
test fails.
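For example, a regression test for a hypothetical bug report might carry the
PR number in a comment near its RUN line (PR12345 is a placeholder):

.. code-block:: llvm

    ; RUN: opt < %s -instcombine -disable-output
    ; PR12345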
549 Finally, any line that contains "END." will cause the special
550 interpretation of lines to terminate. This is generally done right after
551 the last RUN: line. This has two side effects:
553 (a) it prevents special interpretation of lines that are part of the test
554 program, not the instructions to the test case, and
556 (b) it speeds things up for really big test cases by avoiding
557 interpretation of the remainder of the file.
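For instance, a test whose body happens to contain strings that look like
directives can cut off interpretation explicitly (a sketch):

.. code-block:: llvm

    ; RUN: llvm-as < %s | llvm-dis | FileCheck %s
    ; END.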
559 .. _test-suite-overview:
561 ``test-suite`` Overview
562 =======================
564 The ``test-suite`` module contains a number of programs that can be
565 compiled and executed. The ``test-suite`` includes reference outputs for
566 all of the programs, so that the output of the executed program can be
567 checked for correctness.
569 ``test-suite`` tests are divided into three types of tests: MultiSource,
570 SingleSource, and External.
572 - ``test-suite/SingleSource``
574 The SingleSource directory contains test programs that are only a
575 single source file in size. These are usually small benchmark
576 programs or small programs that calculate a particular value. Several
577 such programs are grouped together in each directory.
579 - ``test-suite/MultiSource``
581 The MultiSource directory contains subdirectories which contain
582 entire programs with multiple source files. Large benchmarks and
583 whole applications go here.
585 - ``test-suite/External``
587 The External directory contains Makefiles for building code that is
588 external to (i.e., not distributed with) LLVM. The most prominent
589 members of this directory are the SPEC 95 and SPEC 2000 benchmark
590 suites. The ``External`` directory does not contain these actual
591 tests, but only the Makefiles that know how to properly compile these
592 programs from somewhere else. When using ``LNT``, use the
593 ``--test-externals`` option to include these tests in the results.
595 .. _test-suite-quickstart:
597 ``test-suite`` Quickstart
598 -------------------------
600 The modern way of running the ``test-suite`` is focused on testing and
601 benchmarking complete compilers using the
602 `LNT <http://llvm.org/docs/lnt>`_ testing infrastructure.
604 For more information on using LNT to execute the ``test-suite``, please
see the `LNT Quickstart <http://llvm.org/docs/lnt/quickstart.html>`_
documentation.
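A typical invocation, following the LNT quickstart, looks roughly like this
(the sandbox path and the compiler under test are placeholders):

.. code-block:: console

    % lnt runtest nt --sandbox /tmp/sandbox \
          --cc ~/llvm/build/bin/clang \
          --test-suite ~/test-suite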
608 ``test-suite`` Makefiles
609 ------------------------
611 Historically, the ``test-suite`` was executed using a complicated setup
612 of Makefiles. The LNT based approach above is recommended for most
613 users, but there are some testing scenarios which are not supported by
614 the LNT approach. In addition, LNT currently uses the Makefile setup
615 under the covers and so developers who are interested in how LNT works
616 under the hood may want to understand the Makefile based setup.
618 For more information on the ``test-suite`` Makefile setup, please see
619 the :doc:`Test Suite Makefile Guide <TestSuiteMakefileGuide>`.