This directory holds performance testing scripts for git tools. The
first part of this document describes the various ways in which you
can run them.

When fixing the tools or adding enhancements, you are strongly
encouraged to add tests in this directory to cover what you are
trying to fix or enhance. The later part of this short document
describes how your test scripts should be organized.

Running Tests
-------------

The easiest way to run tests is to say "make". This runs all
the tests on the current git repository.

    === Running 2 tests in this tree ===
    [...]
    Test                                      this tree
    ---------------------------------------------------------
    0001.1: rev-list --all                    0.54(0.51+0.02)
    0001.2: rev-list --all --objects          6.14(5.99+0.11)
    7810.1: grep worktree, cheap regex        0.16(0.16+0.35)
    7810.2: grep worktree, expensive regex    7.90(29.75+0.37)
    7810.3: grep --cached, cheap regex        3.07(3.02+0.25)
    7810.4: grep --cached, expensive regex    9.39(30.57+0.24)

Output format is in seconds "Elapsed(User + System)".

You can compare multiple repositories and even git revisions with the
'run' script:

    $ ./run . origin/next /path/to/git-tree p0001-rev-list.sh

where . stands for the current git tree. The full invocation is

    ./run [<revision|directory>...] [--] [<test-script>...]

A '.' argument is implied if you do not pass any other
revisions/directories.
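
For example, to compare two tagged releases on a single script, or to run
that script against only the current tree (relying on the implied '.'),
invocations along these lines should work (the tag names are placeholders):

    $ ./run v2.39.0 HEAD -- p0001-rev-list.sh
    $ ./run -- p0001-rev-list.sh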

You can also manually test this or another git build tree, and then
call the aggregation script to summarize the results:

    $ ./p0001-rev-list.sh
    [...]
    $ ./run /path/to/other/git -- ./p0001-rev-list.sh
    [...]
    $ ./aggregate.perl . /path/to/other/git ./p0001-rev-list.sh

aggregate.perl has the same invocation as 'run'; it just does not run
anything.

You can set the following variables (also in your config.mak; see the
sample fragment after this list):

    GIT_PERF_REPEAT_COUNT
        Number of times a test should be repeated for best-of-N
        measurements. Defaults to 3.

    GIT_PERF_MAKE_OPTS
        Options to use when automatically building a git tree for
        performance testing. E.g., -j6 would be useful. Passed
        directly to make as "make $GIT_PERF_MAKE_OPTS".

    GIT_PERF_MAKE_COMMAND
        An arbitrary command that'll be run in place of the make
        command. If it is set, the GIT_PERF_MAKE_OPTS variable is
        ignored. Useful in cases where source tree changes might
        require issuing a different make command to different
        revisions.

        This can be (ab)used to monkeypatch or otherwise change the
        tree about to be built. Note that the build directory can be
        re-used for subsequent runs, so the make command might get
        executed multiple times on the same tree; but don't count on
        any of that, since it is an implementation detail that might
        change in the future.

    GIT_PERF_REPO
    GIT_PERF_LARGE_REPO
        Repositories to copy for the performance tests. The normal
        repo should be at least git.git size. The large repo should
        probably be about linux.git size for optimal results.
        Both default to the git.git you are running from.

    GIT_PERF_EXTRA
        Boolean to enable additional tests. Most test scripts are
        written to detect regressions between two versions of Git, and
        the output will compare timings for individual tests between
        those versions. Some scripts have additional tests which are not
        run by default but which show patterns within a single version of
        Git (e.g., performance of index-pack as the number of threads
        changes). These can be enabled with GIT_PERF_EXTRA.

    GIT_PERF_USE_SCALAR
        Boolean indicating whether to register test repo(s) with Scalar
        before executing tests.
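
For illustration, a config.mak fragment setting a few of these variables
might look like the following; the values and paths are arbitrary examples,
not recommendations:

    GIT_PERF_REPEAT_COUNT = 5
    GIT_PERF_MAKE_OPTS = -j8
    GIT_PERF_LARGE_REPO = /path/to/a/clone/of/linux.git

The same variables can also be set for a single invocation from the
environment, e.g. "GIT_PERF_REPEAT_COUNT=10 ./run".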

You can also pass the options taken by ordinary git tests; the most
useful one is:

--root=<directory>::
	Create "trash" directories used to store all temporary data during
	testing under <directory>, instead of the t/ directory.
	Using this option with a RAM-based filesystem (such as tmpfs)
	can massively speed up the test suite.
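
For instance, assuming a tmpfs is mounted at /dev/shm, a single perf script
could be run along these lines (the directory name is arbitrary):

    $ ./p0001-rev-list.sh --root=/dev/shm/perf-trash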

Naming Tests
------------

The performance test files are named as:

    pNNNN-commandname-details.sh

where N is a decimal digit. The same conventions for choosing NNNN as
for normal tests apply.
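
For example, the grep timings in the sample output above come from
p7810-grep.sh.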

Writing Tests
-------------

The perf script starts much like a normal test script, except it
sources perf-lib.sh:

    #!/bin/sh
    #
    # Copyright (c) 2005 Junio C Hamano
    #

    test_description='xxx performance test'
    . ./perf-lib.sh

After that you will want to use some of the following:

    test_perf_fresh_repo    # sets up an empty repository
    test_perf_default_repo  # sets up a "normal" repository
    test_perf_large_repo    # sets up a "large" repository

    test_perf_default_repo sub  # ditto, in a subdir "sub"

    test_checkout_worktree  # if you need the worktree too

At least one of the first two is required!
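
Putting the preamble and the setup calls together, the top of a script that
also needs a worktree might look like this (the description string is just
an illustration):

    #!/bin/sh

    test_description='example: working tree status performance'
    . ./perf-lib.sh

    test_perf_default_repo
    test_checkout_worktree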

You can use test_expect_success as usual. In both test_expect_success
and in test_perf, running "git" points to the version that is being
perf-tested. The $MODERN_GIT variable points to the git wrapper for the
currently checked-out version (i.e., the one that matches the t/perf
scripts you are running). This is useful if your setup uses commands
that only work with newer versions of git than what you might want to
test (but obviously your new commands must still create a state that can
be used by the older version of git you are testing).
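
For example, a setup step might use the modern wrapper for a command that
the version being tested may not have yet; the choice of command here is
purely illustrative:

    test_expect_success 'setup commit-graph with modern git' '
        "$MODERN_GIT" commit-graph write --reachable
    '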

For actual performance tests, use

    test_perf 'descriptive string' '
        command1 &&
        command2
    '
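
For instance, a test that times a full history walk (mirroring the rev-list
results shown earlier) could be written as:

    test_perf 'rev-list --all' '
        git rev-list --all >/dev/null
    '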

test_perf spawns a subshell, for lack of better options. This means
that

* you _must_ export all variables that you need in the subshell

* you _must_ flag all variables that you want to persist from the
  subshell with 'test_export':

    test_perf 'descriptive string' '
        foo=$(git rev-parse HEAD) &&
        test_export foo
    '

The so-exported variables are automatically marked for export in the
shell executing the perf test. For your convenience, test_export is
the same as export in the main shell.

This feature relies on a bit of magic using 'set' and 'source'.
While we have tried to make sure that it can cope with embedded
whitespace and other special characters, it will not work with
multi-line data.
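
To illustrate, a later test in the same script can then rely on the value
persisted above (the command is only an illustration):

    test_perf 'read the exported commit' '
        git cat-file commit "$foo" >/dev/null
    '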

Rather than tracking the performance by run-time as `test_perf` does, you
may also track output size by using `test_size`. The stdout of the
function should be a single numeric value, which will be captured and
shown in the aggregated output. For example:

    test_perf 'time foo' '
        ./foo >foo.out
    '

    test_size 'output size' '
        wc -c <foo.out
    '

might produce output like:

    Test                 origin           HEAD
    -------------------------------------------------------------
    1234.1 time foo      0.37(0.79+0.02)  0.26(0.51+0.02) -29.7%
    1234.2 output size   4.3M             3.6M            -14.7%

The item being measured (and its units) is up to the test; the context
and the test title should make it clear to the user whether bigger or
smaller numbers are better. Unlike test_perf, the test code will only be
run once, since output sizes tend to be more deterministic than timings.
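
As a further sketch, a script that tracks both repack time and the resulting
pack size might contain tests along these lines; the commands and the choice
of kilobytes as the unit are illustrative, not a fixed convention:

    test_perf 'repack' '
        git repack -ad
    '

    test_size 'pack size (KiB)' '
        du -sk .git/objects/pack | cut -f1
    '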