1 #!/usr/bin/perl -w
3 =encoding utf8
5 =head1 NAME
7 parallel - build and execute shell command lines from standard input
8 in parallel
11 =head1 SYNOPSIS
13 B<parallel> [options] [I<command> [arguments]] < list_of_arguments
15 B<parallel> [options] [I<command> [arguments]] ( B<:::> arguments |
16 B<:::+> arguments | B<::::> argfile(s) | B<::::+> argfile(s) ) ...
18 B<parallel> --semaphore [options] I<command>
20 B<#!/usr/bin/parallel> --shebang [options] [I<command> [arguments]]
22 B<#!/usr/bin/parallel> --shebang-wrap [options] [I<command>
23 [arguments]]
26 =head1 DESCRIPTION
28 STOP!
30 Read the B<Reader's guide> below if you are new to GNU B<parallel>.
32 GNU B<parallel> is a shell tool for executing jobs in parallel using
33 one or more computers. A job can be a single command or a small script
34 that has to be run for each of the lines in the input. The typical
35 input is a list of files, a list of hosts, a list of users, a list of
36 URLs, or a list of tables. A job can also be a command that reads from
37 a pipe. GNU B<parallel> can then split the input into blocks and pipe
38 a block into each command in parallel.
40 If you use xargs and tee today you will find GNU B<parallel> very easy
41 to use as GNU B<parallel> is written to have the same options as
42 xargs. If you write loops in shell, you will find GNU B<parallel> may
43 be able to replace most of the loops and make them run faster by
44 running several jobs in parallel.
46 GNU B<parallel> makes sure output from the commands is the same output
47 as you would get had you run the commands sequentially. This makes it
48 possible to use output from GNU B<parallel> as input for other
49 programs.
51 For each line of input GNU B<parallel> will execute I<command> with
52 the line as arguments. If no I<command> is given, the line of input is
53 executed. Several lines will be run in parallel. GNU B<parallel> can
54 often be used as a substitute for B<xargs> or B<cat | bash>.
56 =head2 Reader's guide
58 If you prefer reading a book buy B<GNU Parallel 2018> at
59 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html
60 or download it at: https://doi.org/10.5281/zenodo.1146014
62 Otherwise start by watching the intro videos for a quick introduction:
63 http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
65 If you need a one page printable cheat sheet you can find it on:
66 https://www.gnu.org/software/parallel/parallel_cheat.pdf
68 You can find a lot of B<EXAMPLE>s of use after the list of B<OPTIONS>
69 in B<man parallel> (Use B<LESS=+/EXAMPLE: man parallel>). That will
70 give you an idea of what GNU B<parallel> is capable of, and you may
71 find a solution you can simply adapt to your situation.
73 If you want to dive even deeper: spend a couple of hours walking
74 through the tutorial (B<man parallel_tutorial>). Your command line
75 will love you for it.
77 Finally you may want to look at the rest of the manual (B<man
78 parallel>) if you have special needs not already covered.
80 If you want to know the design decisions behind GNU B<parallel>, try:
81 B<man parallel_design>. This is also a good intro if you intend to
82 change GNU B<parallel>.
85 =head1 OPTIONS
87 =over 4
89 =item I<command>
91 Command to execute. If I<command> or the following arguments contain
92 replacement strings (such as B<{}>) every instance will be substituted
93 with the input.
If I<command> is given, GNU B<parallel> solves the same tasks as
B<xargs>. If I<command> is not given, GNU B<parallel> will behave
similarly to B<cat | sh>.
99 The I<command> must be an executable, a script, a composed command, an
100 alias, or a function.
102 B<Bash functions>: B<export -f> the function first or use B<env_parallel>.
104 B<Bash, Csh, or Tcsh aliases>: Use B<env_parallel>.
106 B<Zsh, Fish, Ksh, and Pdksh functions and aliases>: Use B<env_parallel>.
108 =item B<{}> (beta testing)
110 Input line. This replacement string will be replaced by a full line
111 read from the input source. The input source is normally stdin
112 (standard input), but can also be given with B<-a>, B<:::>, or
113 B<::::>.
115 The replacement string B<{}> can be changed with B<-I>.
117 If the command line contains no replacement strings then B<{}> will be
118 appended to the command line.
120 Replacement strings are normally quoted, so special characters are not
121 parsed by the shell. The exception is if the command starts with a
122 replacement string; then the string is not quoted.
125 =item B<{.}>
127 Input line without extension. This replacement string will be replaced
128 by the input with the extension removed. If the input line contains
129 B<.> after the last B</>, the last B<.> until the end of the string
130 will be removed and B<{.}> will be replaced with the
131 remaining. E.g. I<foo.jpg> becomes I<foo>, I<subdir/foo.jpg> becomes
132 I<subdir/foo>, I<sub.dir/foo.jpg> becomes I<sub.dir/foo>,
133 I<sub.dir/bar> remains I<sub.dir/bar>. If the input line does not
134 contain B<.> it will remain unchanged.
136 The replacement string B<{.}> can be changed with B<--er>.
138 To understand replacement strings see B<{}>.
141 =item B<{/}>
143 Basename of input line. This replacement string will be replaced by
144 the input with the directory part removed.
146 The replacement string B<{/}> can be changed with
147 B<--basenamereplace>.
149 To understand replacement strings see B<{}>.
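For example, with an illustrative input path this should print
I<file.txt>:

  parallel echo {/} ::: subdir/file.txt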
152 =item B<{//}>
154 Dirname of input line. This replacement string will be replaced by the
155 dir of the input line. See B<dirname>(1).
157 The replacement string B<{//}> can be changed with
158 B<--dirnamereplace>.
160 To understand replacement strings see B<{}>.
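For example, with an illustrative input path this should print
I<subdir>:

  parallel echo {//} ::: subdir/file.txt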
163 =item B<{/.}>
165 Basename of input line without extension. This replacement string will
166 be replaced by the input with the directory and extension part
167 removed. It is a combination of B<{/}> and B<{.}>.
169 The replacement string B<{/.}> can be changed with
170 B<--basenameextensionreplace>.
172 To understand replacement strings see B<{}>.
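For example, with an illustrative input path this should print
I<file>:

  parallel echo {/.} ::: subdir/file.txt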
175 =item B<{#}>
177 Sequence number of the job to run. This replacement string will be
178 replaced by the sequence number of the job being run. It contains the
179 same number as $PARALLEL_SEQ.
181 The replacement string B<{#}> can be changed with B<--seqreplace>.
183 To understand replacement strings see B<{}>.
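For example, this should print the sequence number in front of each
argument (B<-k> keeps the output in input order):

  parallel -k echo job {#}: {} ::: a b c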
186 =item B<{%}>
188 Job slot number. This replacement string will be replaced by the job's
189 slot number between 1 and number of jobs to run in parallel. There
190 will never be 2 jobs running at the same time with the same job slot
191 number.
193 The replacement string B<{%}> can be changed with B<--slotreplace>.
195 To understand replacement strings see B<{}>.
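For example, with 2 jobslots every job should report slot 1 or slot 2,
and no two jobs running at the same time share a slot (output order
may vary):

  parallel -j2 'echo {} ran in slot {%}' ::: a b c d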
198 =item B<{>I<n>B<}>
200 Argument from input source I<n> or the I<n>'th argument. This
201 positional replacement string will be replaced by the input from input
202 source I<n> (when used with B<-a> or B<::::>) or with the I<n>'th
203 argument (when used with B<-N>). If I<n> is negative it refers to the
204 I<n>'th last argument.
206 To understand replacement strings see B<{}>.
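For example, with two input sources this should print the value from
the second source before the value from the first, for every
combination of the two sources:

  parallel echo {2} {1} ::: a b ::: x y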
209 =item B<{>I<n>.B<}>
211 Argument from input source I<n> or the I<n>'th argument without
212 extension. It is a combination of B<{>I<n>B<}> and B<{.}>.
214 This positional replacement string will be replaced by the input from
215 input source I<n> (when used with B<-a> or B<::::>) or with the
216 I<n>'th argument (when used with B<-N>). The input will have the
217 extension removed.
219 To understand positional replacement strings see B<{>I<n>B<}>.
222 =item B<{>I<n>/B<}>
224 Basename of argument from input source I<n> or the I<n>'th argument.
225 It is a combination of B<{>I<n>B<}> and B<{/}>.
227 This positional replacement string will be replaced by the input from
228 input source I<n> (when used with B<-a> or B<::::>) or with the
229 I<n>'th argument (when used with B<-N>). The input will have the
230 directory (if any) removed.
232 To understand positional replacement strings see B<{>I<n>B<}>.
235 =item B<{>I<n>//B<}>
237 Dirname of argument from input source I<n> or the I<n>'th argument.
238 It is a combination of B<{>I<n>B<}> and B<{//}>.
240 This positional replacement string will be replaced by the dir of the
241 input from input source I<n> (when used with B<-a> or B<::::>) or with
242 the I<n>'th argument (when used with B<-N>). See B<dirname>(1).
244 To understand positional replacement strings see B<{>I<n>B<}>.
247 =item B<{>I<n>/.B<}>
249 Basename of argument from input source I<n> or the I<n>'th argument
250 without extension. It is a combination of B<{>I<n>B<}>, B<{/}>, and
251 B<{.}>.
253 This positional replacement string will be replaced by the input from
254 input source I<n> (when used with B<-a> or B<::::>) or with the
255 I<n>'th argument (when used with B<-N>). The input will have the
256 directory (if any) and extension removed.
258 To understand positional replacement strings see B<{>I<n>B<}>.
261 =item B<{=>I<perl expression>B<=}>
263 Replace with calculated I<perl expression>. B<$_> will contain the
264 same as B<{}>. After evaluating I<perl expression> B<$_> will be used
265 as the value. It is recommended to only change $_ but you have full
266 access to all of GNU B<parallel>'s internal functions and data
267 structures. A few convenience functions and data structures have been
268 made:
270 =over 15
272 =item Z<> B<Q(>I<string>B<)>
274 shell quote a string
276 =item Z<> B<pQ(>I<string>B<)>
278 perl quote a string
280 =item Z<> B<uq()> (or B<uq>)
282 (beta testing)
283 do not quote current replacement string
285 =item Z<> B<total_jobs()>
287 number of jobs in total
289 =item Z<> B<slot()>
291 slot number of job
293 =item Z<> B<seq()>
295 sequence number of job
297 =item Z<> B<@arg>
299 the arguments
301 =back
303 Example:
305 seq 10 | parallel echo {} + 1 is {= '$_++' =}
306 parallel csh -c {= '$_="mkdir ".Q($_)' =} ::: '12" dir'
307 seq 50 | parallel echo job {#} of {= '$_=total_jobs()' =}
309 See also: B<--rpl> B<--parens>
312 =item B<{=>I<n> I<perl expression>B<=}>
314 Positional equivalent to B<{=perl expression=}>. To understand
315 positional replacement strings see B<{>I<n>B<}>.
317 See also: B<{=perl expression=}> B<{>I<n>B<}>.
320 =item B<:::> I<arguments>
322 Use arguments from the command line as input source instead of stdin
323 (standard input). Unlike other options for GNU B<parallel> B<:::> is
324 placed after the I<command> and before the arguments.
326 The following are equivalent:
328 (echo file1; echo file2) | parallel gzip
329 parallel gzip ::: file1 file2
330 parallel gzip {} ::: file1 file2
331 parallel --arg-sep ,, gzip {} ,, file1 file2
332 parallel --arg-sep ,, gzip ,, file1 file2
333 parallel ::: "gzip file1" "gzip file2"
335 To avoid treating B<:::> as special use B<--arg-sep> to set the
336 argument separator to something else. See also B<--arg-sep>.
338 If multiple B<:::> are given, each group will be treated as an input
339 source, and all combinations of input sources will be
340 generated. E.g. ::: 1 2 ::: a b c will result in the combinations
341 (1,a) (1,b) (1,c) (2,a) (2,b) (2,c). This is useful for replacing
342 nested for-loops.
344 B<:::> and B<::::> can be mixed. So these are equivalent:
346 parallel echo {1} {2} {3} ::: 6 7 ::: 4 5 ::: 1 2 3
347 parallel echo {1} {2} {3} :::: <(seq 6 7) <(seq 4 5) \
348 :::: <(seq 1 3)
349 parallel -a <(seq 6 7) echo {1} {2} {3} :::: <(seq 4 5) \
350 :::: <(seq 1 3)
351 parallel -a <(seq 6 7) -a <(seq 4 5) echo {1} {2} {3} \
352 ::: 1 2 3
353 seq 6 7 | parallel -a - -a <(seq 4 5) echo {1} {2} {3} \
354 ::: 1 2 3
355 seq 4 5 | parallel echo {1} {2} {3} :::: <(seq 6 7) - \
356 ::: 1 2 3
359 =item B<:::+> I<arguments>
361 Like B<:::> but linked like B<--link> to the previous input source.
363 Contrary to B<--link>, values do not wrap: The shortest input source
364 determines the length.
366 Example:
368 parallel echo ::: a b c :::+ 1 2 3 ::: X Y :::+ 11 22
371 =item B<::::> I<argfiles>
373 Another way to write B<-a> I<argfile1> B<-a> I<argfile2> ...
375 B<:::> and B<::::> can be mixed.
377 See B<-a>, B<:::> and B<--link>.
380 =item B<::::+> I<argfiles>
382 Like B<::::> but linked like B<--link> to the previous input source.
384 Contrary to B<--link>, values do not wrap: The shortest input source
385 determines the length.
388 =item B<--null>
390 =item B<-0>
392 Use NUL as delimiter. Normally input lines will end in \n
393 (newline). If they end in \0 (NUL), then use this option. It is useful
394 for processing arguments that may contain \n (newline).
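A typical use is together with B<find -print0>, so file names
containing newlines are passed safely (the file pattern is only
illustrative):

  find . -name '*.txt' -print0 | parallel -0 wc -l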
397 =item B<--arg-file> I<input-file>
399 =item B<-a> I<input-file>
401 Use I<input-file> as input source. If you use this option, stdin
402 (standard input) is given to the first process run. Otherwise, stdin
403 (standard input) is redirected from /dev/null.
405 If multiple B<-a> are given, each I<input-file> will be treated as an
406 input source, and all combinations of input sources will be
407 generated. E.g. The file B<foo> contains B<1 2>, the file B<bar>
408 contains B<a b c>. B<-a foo> B<-a bar> will result in the combinations
409 (1,a) (1,b) (1,c) (2,a) (2,b) (2,c). This is useful for replacing
410 nested for-loops.
412 See also B<--link> and B<{>I<n>B<}>.
415 =item B<--arg-file-sep> I<sep-str>
417 Use I<sep-str> instead of B<::::> as separator string between command
418 and argument files. Useful if B<::::> is used for something else by the
419 command.
421 See also: B<::::>.
424 =item B<--arg-sep> I<sep-str>
426 Use I<sep-str> instead of B<:::> as separator string. Useful if B<:::>
427 is used for something else by the command.
Also useful if your command uses B<:::> but you still want to read
430 arguments from stdin (standard input): Simply change B<--arg-sep> to a
431 string that is not in the command line.
433 See also: B<:::>.
436 =item B<--bar>
Show progress as a progress bar. The bar shows: % of jobs completed,
estimated seconds left, and number of jobs started.
441 It is compatible with B<zenity>:
443 seq 1000 | parallel -j30 --bar '(echo {};sleep 0.1)' \
444 2> >(zenity --progress --auto-kill) | wc
447 =item B<--basefile> I<file>
449 =item B<--bf> I<file>
451 I<file> will be transferred to each sshlogin before a job is
452 started. It will be removed if B<--cleanup> is active. The file may be
453 a script to run or some common base data needed for the job.
454 Multiple B<--bf> can be specified to transfer more basefiles. The
455 I<file> will be transferred the same way as B<--transferfile>.
458 =item B<--basenamereplace> I<replace-str>
460 =item B<--bnr> I<replace-str>
462 Use the replacement string I<replace-str> instead of B<{/}> for
463 basename of input line.
466 =item B<--basenameextensionreplace> I<replace-str>
468 =item B<--bner> I<replace-str>
470 Use the replacement string I<replace-str> instead of B<{/.}> for basename of input line without extension.
473 =item B<--bin> I<binexpr> (alpha testing)
475 Use I<binexpr> as binning key and bin input to the jobs.
477 I<binexpr> is [column number|column name] [perlexpression] e.g. 3,
478 Address, 3 $_%=100, Address s/\D//g.
Each input line is split using B<--colsep>. The value of the column is
put into $_, the perl expression is executed, and the resulting value
is the job slot that will be given the line. If the value is bigger
than the number of jobslots, the value will be taken modulo the number
of jobslots.
485 This is similar to B<--shard> but the hashing algorithm is a simple
modulo, which makes it predictable which jobslot will receive which
487 value.
489 The performance is in the order of 100K rows per second. Faster if the
490 I<bincol> is small (<10), slower if it is big (>100).
492 B<--bin> requires B<--pipe> and a fixed numeric value for B<--jobs>.
494 See also B<--shard>, B<--group-by>, B<--roundrobin>.
497 =item B<--bg>
499 Run command in background thus GNU B<parallel> will not wait for
500 completion of the command before exiting. This is the default if
501 B<--semaphore> is set.
503 See also: B<--fg>, B<man sem>.
505 Implies B<--semaphore>.
508 =item B<--bibtex>
510 =item B<--citation>
512 Print the citation notice and BibTeX entry for GNU B<parallel>,
513 silence citation notice for all future runs, and exit. It will not run
514 any commands.
516 If it is impossible for you to run B<--citation> you can instead use
517 B<--will-cite>, which will run commands, but which will only silence
518 the citation notice for this single run.
520 If you use B<--will-cite> in scripts to be run by others you are
521 making it harder for others to see the citation notice. The
522 development of GNU B<parallel> is indirectly financed through
523 citations, so if your users do not know they should cite then you are
524 making it harder to finance development. However, if you pay 10000
525 EUR, you have done your part to finance future development and should
526 feel free to use B<--will-cite> in scripts.
528 If you do not want to help financing future development by letting
529 other users see the citation notice or by paying, then please use
530 another tool instead of GNU B<parallel>. You can find some of the
531 alternatives in B<man parallel_alternatives>.
534 =item B<--block> I<size>
536 =item B<--block-size> I<size>
538 Size of block in bytes to read at a time. The I<size> can be postfixed
539 with K, M, G, T, P, E, k, m, g, t, p, or e which would multiply the
540 size with 1024, 1048576, 1073741824, 1099511627776, 1125899906842624,
541 1152921504606846976, 1000, 1000000, 1000000000, 1000000000000,
542 1000000000000000, or 1000000000000000000 respectively.
544 GNU B<parallel> tries to meet the block size but can be off by the
545 length of one record. For performance reasons I<size> should be bigger
than two records. GNU B<parallel> will warn you and automatically
547 increase the size if you choose a I<size> that is too small.
549 If you use B<-N>, B<--block-size> should be bigger than N+1 records.
551 I<size> defaults to 1M.
553 When using B<--pipepart> a negative block size is not interpreted as a
554 blocksize but as the number of blocks each jobslot should have. So
555 this will run 10*5 = 50 jobs in total:
557 parallel --pipepart -a myfile --block -10 -j5 wc
559 This is an efficient alternative to B<--roundrobin> because data is
560 never read by GNU B<parallel>, but you can still have very few
561 jobslots process a large amount of data.
563 See B<--pipe> and B<--pipepart> for use of this.
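A minimal illustration with B<--pipe> (I<bigfile> is a placeholder):
read blocks of roughly 10 MB from stdin and give one block to each
B<wc> job:

  cat bigfile | parallel --pipe --block 10M wc -l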
566 =item B<--cat>
568 Create a temporary file with content. Normally B<--pipe>/B<--pipepart>
569 will give data to the program on stdin (standard input). With B<--cat>
570 GNU B<parallel> will create a temporary file with the name in B<{}>, so
571 you can do: B<parallel --pipe --cat wc {}>.
573 Implies B<--pipe> unless B<--pipepart> is used.
575 See also B<--fifo>.
578 =item B<--cleanup>
580 Remove transferred files. B<--cleanup> will remove the transferred
581 files on the remote computer after processing is done.
583 find log -name '*gz' | parallel \
584 --sshlogin server.example.com --transferfile {} \
585 --return {.}.bz2 --cleanup "zcat {} | bzip -9 >{.}.bz2"
587 With B<--transferfile {}> the file transferred to the remote computer
588 will be removed on the remote computer. Directories created will not
589 be removed - even if they are empty.
591 With B<--return> the file transferred from the remote computer will be
592 removed on the remote computer. Directories created will not be
593 removed - even if they are empty.
595 B<--cleanup> is ignored when not used with B<--transferfile> or
596 B<--return>.
599 =item B<--colsep> I<regexp>
601 =item B<-C> I<regexp>
603 Column separator. The input will be treated as a table with I<regexp>
604 separating the columns. The n'th column can be accessed using
605 B<{>I<n>B<}> or B<{>I<n>.B<}>. E.g. B<{3}> is the 3rd column.
If there are multiple input sources, each input source will be separated,
608 but the columns from each input source will be linked (see B<--link>).
610 parallel --colsep '-' echo {4} {3} {2} {1} \
611 ::: A-B C-D ::: e-f g-h
613 B<--colsep> implies B<--trim rl>, which can be overridden with
614 B<--trim n>.
616 I<regexp> is a Perl Regular Expression:
617 http://perldoc.perl.org/perlre.html
620 =item B<--compress>
622 Compress temporary files. If the output is big and very compressible
623 this will take up less disk space in $TMPDIR and possibly be faster
624 due to less disk I/O.
626 GNU B<parallel> will try B<pzstd>, B<lbzip2>, B<pbzip2>, B<zstd>,
627 B<pigz>, B<lz4>, B<lzop>, B<plzip>, B<lzip>, B<lrz>, B<gzip>, B<pxz>,
628 B<lzma>, B<bzip2>, B<xz>, B<clzip>, in that order, and use the first
629 available.
632 =item B<--compress-program> I<prg>
634 =item B<--decompress-program> I<prg>
636 Use I<prg> for (de)compressing temporary files. It is assumed that I<prg
637 -dc> will decompress stdin (standard input) to stdout (standard
638 output) unless B<--decompress-program> is given.
641 =item B<--csv>
643 Treat input as CSV-format. B<--colsep> sets the field delimiter. It
644 works very much like B<--colsep> except it deals correctly with
645 quoting:
647 echo '"1 big, 2 small","2""x4"" plank",12.34' |
648 parallel --csv echo {1} of {2} at {3}
650 Even quoted newlines are parsed correctly:
652 (echo '"Start of field 1 with newline'
653 echo 'Line 2 in field 1";value 2') |
654 parallel --csv --colsep ';' echo Field 1: {1} Field 2: {2}
656 When used with B<--pipe> only pass full CSV-records.
659 =item B<--delimiter> I<delim>
661 =item B<-d> I<delim>
663 Input items are terminated by I<delim>. Quotes and backslash are not
664 special; every character in the input is taken literally. Disables
665 the end-of-file string, which is treated like any other argument. The
666 specified delimiter may be characters, C-style character escapes such
667 as \n, or octal or hexadecimal escape codes. Octal and hexadecimal
668 escape codes are understood as for the printf command. Multibyte
669 characters are not supported.
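For example, using B<:> as the delimiter should run one B<echo> per
colon separated item (printing a, b, and c):

  printf 'a:b:c' | parallel -d : echo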
672 =item B<--dirnamereplace> I<replace-str>
674 =item B<--dnr> I<replace-str>
676 Use the replacement string I<replace-str> instead of B<{//}> for
677 dirname of input line.
680 =item B<-E> I<eof-str>
682 Set the end of file string to I<eof-str>. If the end of file string
683 occurs as a line of input, the rest of the input is not read. If
684 neither B<-E> nor B<-e> is used, no end of file string is used.
687 =item B<--delay> I<mytime>
689 Delay starting next job by I<mytime>. GNU B<parallel> will pause
690 I<mytime> after starting each job. I<mytime> is normally in seconds,
691 but can be floats postfixed with B<s>, B<m>, B<h>, or B<d> which would
692 multiply the float by 1, 60, 3600, or 86400. Thus these are
693 equivalent: B<--delay 100000> and B<--delay 1d3.5h16.6m4s>.
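For example, to wait 2.5 seconds between starting each ssh connection
(the server names are placeholders):

  parallel --delay 2.5 ssh {} uptime ::: server1 server2 server3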
696 =item B<--dry-run>
698 Print the job to run on stdout (standard output), but do not run the
699 job. Use B<-v -v> to include the wrapping that GNU B<parallel>
700 generates (for remote jobs, B<--tmux>, B<--nice>, B<--pipe>,
701 B<--pipepart>, B<--fifo> and B<--cat>). Do not count on this
702 literally, though, as the job may be scheduled on another computer or
703 the local computer if : is in the list.
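For example, this should only print the B<gzip> commands that would be
run (the file pattern is only illustrative):

  parallel --dry-run gzip {} ::: *.log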
706 =item B<--eof>[=I<eof-str>]
708 =item B<-e>[I<eof-str>]
710 This option is a synonym for the B<-E> option. Use B<-E> instead,
711 because it is POSIX compliant for B<xargs> while this option is not.
712 If I<eof-str> is omitted, there is no end of file string. If neither
713 B<-E> nor B<-e> is used, no end of file string is used.
716 =item B<--embed>
718 Embed GNU B<parallel> in a shell script. If you need to distribute your
719 script to someone who does not want to install GNU B<parallel> you can
720 embed GNU B<parallel> in your own shell script:
722 parallel --embed > new_script
724 After which you add your code at the end of B<new_script>. This is tested
725 on B<ash>, B<bash>, B<dash>, B<ksh>, B<sh>, and B<zsh>.
728 =item B<--env> I<var>
730 Copy environment variable I<var>. This will copy I<var> to the
731 environment that the command is run in. This is especially useful for
732 remote execution.
734 In Bash I<var> can also be a Bash function - just remember to B<export
735 -f> the function, see B<command>.
737 The variable '_' is special. It will copy all exported environment
738 variables except for the ones mentioned in ~/.parallel/ignored_vars.
740 To copy the full environment (both exported and not exported
741 variables, arrays, and functions) use B<env_parallel>.
743 See also: B<--record-env>, B<--session>.
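A minimal sketch (I<server> is a placeholder for a working sshlogin):
export the variable, then ask GNU B<parallel> to copy it to the
environment the command runs in:

  export MYVAR=foo
  parallel --env MYVAR -S server 'echo $MYVAR {}' ::: 1 2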
746 =item B<--eta>
748 Show the estimated number of seconds before finishing. This forces GNU
749 B<parallel> to read all jobs before starting to find the number of
750 jobs. GNU B<parallel> normally only reads the next job to run.
752 The estimate is based on the runtime of finished jobs, so the first
753 estimate will only be shown when the first job has finished.
755 Implies B<--progress>.
757 See also: B<--bar>, B<--progress>.
760 =item B<--fg>
762 Run command in foreground.
764 With B<--tmux> and B<--tmuxpane> GNU B<parallel> will start B<tmux> in
765 the foreground.
767 With B<--semaphore> GNU B<parallel> will run the command in the
768 foreground (opposite B<--bg>), and wait for completion of the command
769 before exiting.
772 See also B<--bg>, B<man sem>.
775 =item B<--fifo>
777 Create a temporary fifo with content. Normally B<--pipe> and
778 B<--pipepart> will give data to the program on stdin (standard
779 input). With B<--fifo> GNU B<parallel> will create a temporary fifo
780 with the name in B<{}>, so you can do: B<parallel --pipe --fifo wc {}>.
782 Beware: If data is not read from the fifo, the job will block forever.
784 Implies B<--pipe> unless B<--pipepart> is used.
786 See also B<--cat>.
789 =item B<--filter-hosts>
791 Remove down hosts. For each remote host: check that login through ssh
792 works. If not: do not use this host.
794 For performance reasons, this check is performed only at the start and
every time B<--sshloginfile> is changed. If a host goes down after
796 the first check, it will go undetected until B<--sshloginfile> is
797 changed; B<--retries> can be used to mitigate this.
799 Currently you can I<not> put B<--filter-hosts> in a profile,
800 $PARALLEL, /etc/parallel/config or similar. This is because GNU
801 B<parallel> uses GNU B<parallel> to compute this, so you will get an
802 infinite loop. This will likely be fixed in a later release.
805 =item B<--gnu>
807 Behave like GNU B<parallel>. This option historically took precedence
808 over B<--tollef>. The B<--tollef> option is now retired, and therefore
809 may not be used. B<--gnu> is kept for compatibility.
812 =item B<--group>
814 Group output. Output from each job is grouped together and is only
815 printed when the command is finished. Stdout (standard output) first
816 followed by stderr (standard error).
818 This takes in the order of 0.5ms per job and depends on the speed of
819 your disk for larger output. It can be disabled with B<-u>, but this
820 means output from different commands can get mixed.
822 B<--group> is the default. Can be reversed with B<-u>.
824 See also: B<--line-buffer> B<--ungroup>
827 =item B<--group-by> I<val>
829 Group input by value. Combined with B<--pipe>/B<--pipepart>
830 B<--group-by> groups lines with the same value into a record.
832 The value can be computed from the full line or from a single column.
834 I<val> can be:
836 =over 15
838 =item Z<> column number
840 Use the value in the column numbered.
842 =item Z<> column name
844 Treat the first line as a header and use the value in the column
845 named.
847 (Not supported with B<--pipepart>).
849 =item Z<> perl expression
851 Run the perl expression and use $_ as the value.
853 =item Z<> column number perl expression
Put the value of the column in $_, run the perl expression, and use $_ as the value.
857 =item Z<> column name perl expression
Put the value of the column in $_, run the perl expression, and use $_ as the value.
861 (Not supported with B<--pipepart>).
863 =back
865 Example:
867 UserID, Consumption
868 123, 1
869 123, 2
870 12-3, 1
871 221, 3
872 221, 1
873 2/21, 5
875 If you want to group 123, 12-3, 221, and 2/21 into 4 records and pass
876 one record at a time to B<wc>:
878 tail -n +2 table.csv | \
879 parallel --pipe --colsep , --group-by 1 -kN1 wc
881 Make GNU B<parallel> treat the first line as a header:
883 cat table.csv | \
884 parallel --pipe --colsep , --header : --group-by 1 -kN1 wc
886 Address column by column name:
888 cat table.csv | \
889 parallel --pipe --colsep , --header : --group-by UserID -kN1 wc
891 If 12-3 and 123 are really the same UserID, remove non-digits in
892 UserID when grouping:
894 cat table.csv | parallel --pipe --colsep , --header : \
895 --group-by 'UserID s/\D//g' -kN1 wc
897 See also B<--shard>, B<--roundrobin>.
900 =item B<--help>
902 =item B<-h>
904 Print a summary of the options to GNU B<parallel> and exit.
907 =item B<--halt-on-error> I<val>
909 =item B<--halt> I<val>
911 When should GNU B<parallel> terminate? In some situations it makes no
912 sense to run all jobs. GNU B<parallel> should simply give up as soon
913 as a condition is met.
915 I<val> defaults to B<never>, which runs all jobs no matter what.
917 I<val> can also take on the form of I<when>,I<why>.
919 I<when> can be 'now' which means kill all running jobs and halt
920 immediately, or it can be 'soon' which means wait for all running jobs
921 to complete, but start no new jobs.
923 I<why> can be 'fail=X', 'fail=Y%', 'success=X', 'success=Y%',
'done=X', or 'done=Y%' where X is the number of jobs that have to
fail, succeed, or be done before halting, and Y is the percentage of
jobs that have to fail, succeed, or be done before halting.
928 Example:
930 =over 23
932 =item Z<> --halt now,fail=1
934 exit when the first job fails. Kill running jobs.
936 =item Z<> --halt soon,fail=3
938 exit when 3 jobs fail, but wait for running jobs to complete.
940 =item Z<> --halt soon,fail=3%
942 exit when 3% of the jobs have failed, but wait for running jobs to complete.
944 =item Z<> --halt now,success=1
946 exit when a job succeeds. Kill running jobs.
948 =item Z<> --halt soon,success=3
exit when 3 jobs succeed, but wait for running jobs to complete.
952 =item Z<> --halt now,success=3%
954 exit when 3% of the jobs have succeeded. Kill running jobs.
956 =item Z<> --halt now,done=1
958 exit when one of the jobs finishes. Kill running jobs.
960 =item Z<> --halt soon,done=3
exit when 3 jobs finish, but wait for running jobs to complete.
964 =item Z<> --halt now,done=3%
966 exit when 3% of the jobs have finished. Kill running jobs.
968 =back
970 For backwards compatibility these also work:
972 =over 12
974 =item Z<>0
976 never
978 =item Z<>1
980 soon,fail=1
982 =item Z<>2
984 now,fail=1
986 =item Z<>-1
988 soon,success=1
990 =item Z<>-2
992 now,success=1
994 =item Z<>1-99%
996 soon,fail=1-99%
998 =back
1001 =item B<--header> I<regexp>
1003 Use regexp as header. For normal usage the matched header (typically
1004 the first line: B<--header '.*\n'>) will be split using B<--colsep>
1005 (which will default to '\t') and column names can be used as
1006 replacement variables: B<{column name}>, B<{column name/}>, B<{column
1007 name//}>, B<{column name/.}>, B<{column name.}>, B<{=column name perl
1008 expression =}>, ..
1010 For B<--pipe> the matched header will be prepended to each output.
1012 B<--header :> is an alias for B<--header '.*\n'>.
1014 If I<regexp> is a number, it is a fixed number of lines.
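A small illustration using a TAB separated header (the data is made
up): the column names from the header become replacement strings:

  printf 'name\tage\nAlice\t40\nBob\t25\n' |
    parallel --colsep '\t' --header : echo {name} is {age}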
1017 =item B<--hostgroups>
1019 =item B<--hgrp>
1021 Enable hostgroups on arguments. If an argument contains '@' the string
1022 after '@' will be removed and treated as a list of hostgroups on which
1023 this job is allowed to run. If there is no B<--sshlogin> with a
1024 corresponding group, the job will run on any hostgroup.
1026 Example:
1028 parallel --hostgroups \
1029 --sshlogin @grp1/myserver1 -S @grp1+grp2/myserver2 \
1030 --sshlogin @grp3/myserver3 \
1031 echo ::: my_grp1_arg@grp1 arg_for_grp2@grp2 third@grp1+grp3
1033 B<my_grp1_arg> may be run on either B<myserver1> or B<myserver2>,
1034 B<third> may be run on either B<myserver1> or B<myserver3>,
1035 but B<arg_for_grp2> will only be run on B<myserver2>.
1037 See also: B<--sshlogin>.
1040 =item B<-I> I<replace-str>
1042 Use the replacement string I<replace-str> instead of B<{}>.
1045 =item B<--replace>[=I<replace-str>]
1047 =item B<-i>[I<replace-str>]
1049 This option is a synonym for B<-I>I<replace-str> if I<replace-str> is
1050 specified, and for B<-I {}> otherwise. This option is deprecated;
1051 use B<-I> instead.
1054 =item B<--joblog> I<logfile>
1056 Logfile for executed jobs. Save a list of the executed jobs to
1057 I<logfile> in the following TAB separated format: sequence number,
1058 sshlogin, start time as seconds since epoch, run time in seconds,
1059 bytes in files transferred, bytes in files returned, exit status,
1060 signal, and command run.
For B<--pipe> the bytes transferred and bytes returned are the number
of bytes of input and output.
1065 If B<logfile> is prepended with '+' log lines will be appended to the
1066 logfile.
1068 To convert the times into ISO-8601 strict do:
1070 cat logfile | perl -a -F"\t" -ne \
1071 'chomp($F[2]=`date -d \@$F[2] +%FT%T`); print join("\t",@F)'
If the sshlogin column is long, you can use B<column -t> to pretty print the joblog:
1075 cat joblog | column -t
1077 See also B<--resume> B<--resume-failed>.
1080 =item B<--jobs> I<N>
1082 =item B<-j> I<N>
1084 =item B<--max-procs> I<N>
1086 =item B<-P> I<N>
1088 Number of jobslots on each machine. Run up to N jobs in parallel. 0
1089 means as many as possible. Default is 100% which will run one job per
1090 CPU on each machine.
1092 If B<--semaphore> is set, the default is 1 thus making a mutex.
1095 =item B<--jobs> I<+N>
1097 =item B<-j> I<+N>
1099 =item B<--max-procs> I<+N>
1101 =item B<-P> I<+N>
1103 Add N to the number of CPUs. Run this many jobs in parallel. See
1104 also B<--use-cores-instead-of-threads> and
1105 B<--use-sockets-instead-of-threads>.
1108 =item B<--jobs> I<-N>
1110 =item B<-j> I<-N>
1112 =item B<--max-procs> I<-N>
1114 =item B<-P> I<-N>
1116 Subtract N from the number of CPUs. Run this many jobs in parallel.
1117 If the evaluated number is less than 1 then 1 will be used. See also
1118 B<--use-cores-instead-of-threads> and
1119 B<--use-sockets-instead-of-threads>.
1122 =item B<--jobs> I<N>%
1124 =item B<-j> I<N>%
1126 =item B<--max-procs> I<N>%
1128 =item B<-P> I<N>%
1130 Multiply N% with the number of CPUs. Run this many jobs in
1131 parallel. See also B<--use-cores-instead-of-threads> and
1132 B<--use-sockets-instead-of-threads>.
1135 =item B<--jobs> I<procfile>
1137 =item B<-j> I<procfile>
1139 =item B<--max-procs> I<procfile>
1141 =item B<-P> I<procfile>
1143 Read parameter from file. Use the content of I<procfile> as parameter
1144 for I<-j>. E.g. I<procfile> could contain the string 100% or +2 or
1145 10. If I<procfile> is changed when a job completes, I<procfile> is
1146 read again and the new number of jobs is computed. If the number is
1147 lower than before, running jobs will be allowed to finish but new jobs
1148 will not be started until the wanted number of jobs has been reached.
1149 This makes it possible to change the number of simultaneous running
1150 jobs while GNU B<parallel> is running.
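A minimal sketch (the file name and the I<work> command are
placeholders): start with one job per two CPUs, then lower the limit
from another shell while GNU B<parallel> is running:

  echo 50% > /tmp/jobs
  parallel -j /tmp/jobs work {} ::: *.dat
  # From another shell, while the above runs:
  echo 2 > /tmp/jobs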
1153 =item B<--keep-order>
1155 =item B<-k>
1157 Keep sequence of output same as the order of input. Normally the
1158 output of a job will be printed as soon as the job completes. Try this
1159 to see the difference:
1161 parallel -j4 sleep {}\; echo {} ::: 2 1 4 3
1162 parallel -j4 -k sleep {}\; echo {} ::: 2 1 4 3
If used with B<--onall> or B<--nonall> the output will be grouped by
1165 sshlogin in sorted order.
1167 If used with B<--pipe --roundrobin> and the same input, the jobslots
1168 will get the same blocks in the same order in every run.
1170 B<-k> only affects the order in which the output is printed - not the
1171 order in which jobs are run.
1174 =item B<-L> I<recsize>
1176 When used with B<--pipe>: Read records of I<recsize>.
1178 When used otherwise: Use at most I<recsize> nonblank input lines per
1179 command line. Trailing blanks cause an input line to be logically
1180 continued on the next input line.
1182 B<-L 0> means read one line, but insert 0 arguments on the command
1183 line.
1185 Implies B<-X> unless B<-m>, B<--xargs>, or B<--pipe> is set.
1188 =item B<--max-lines>[=I<recsize>]
1190 =item B<-l>[I<recsize>]
1192 When used with B<--pipe>: Read records of I<recsize> lines.
1194 When used otherwise: Synonym for the B<-L> option. Unlike B<-L>, the
1195 I<recsize> argument is optional. If I<recsize> is not specified,
1196 it defaults to one. The B<-l> option is deprecated since the POSIX
1197 standard specifies B<-L> instead.
1199 B<-l 0> is an alias for B<-l 1>.
1201 Implies B<-X> unless B<-m>, B<--xargs>, or B<--pipe> is set.
1204 =item B<--limit> "I<command> I<args>"
1206 Dynamic job limit. Before starting a new job run I<command> with
1207 I<args>. The exit value of I<command> determines what GNU B<parallel>
1208 will do:
1210 =over 4
1212 =item Z<>0
1214 Below limit. Start another job.
1216 =item Z<>1
1218 Over limit. Start no jobs.
1220 =item Z<>2
1222 Way over limit. Kill the youngest job.
1224 =back
1226 You can use any shell command. There are 3 predefined commands:
1228 =over 10
1230 =item "io I<n>"
1232 Limit for I/O. The amount of disk I/O will be computed as a value
0-100, where 0 means no I/O and 100 means at least one disk is 100%
saturated.
1236 =item "load I<n>"
1238 Similar to B<--load>.
1240 =item "mem I<n>"
1242 Similar to B<--memfree>.
1244 =back
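For example, using the predefined I<io> limit to only start new jobs
while disk I/O is below 70% (I<dostuff> and the file pattern are
placeholders):

  parallel --limit 'io 70' dostuff {} ::: *.jpg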
1247 =item B<--line-buffer> (beta testing)
1249 =item B<--lb> (beta testing)
1251 Buffer output on line basis. B<--group> will keep the output together
for a whole job. B<--ungroup> allows output to mix, with half a line
1253 coming from one job and half a line coming from another
1254 job. B<--line-buffer> fits between these two: GNU B<parallel> will
1255 print a full line, but will allow for mixing lines of different jobs.
1257 B<--line-buffer> takes more CPU power than both B<--group> and
1258 B<--ungroup>, but can be much faster than B<--group> if the CPU is not
1259 the limiting factor.
1261 Normally B<--line-buffer> does not buffer on disk, and can thus
1262 process an infinite amount of data, but it will buffer on disk when
1263 combined with: B<--keep-order>, B<--results>, B<--compress>, and
1264 B<--files>. This will make it as slow as B<--group> and will limit
1265 output to the available disk space.
1267 With B<--keep-order> B<--line-buffer> will output lines from the first
1268 job continuously while it is running, then lines from the second job
1269 while that is running. It will buffer full lines, but jobs will not
1270 mix. Compare:
1272 parallel -j0 'echo {};sleep {};echo {}' ::: 1 3 2 4
1273 parallel -j0 --lb 'echo {};sleep {};echo {}' ::: 1 3 2 4
1274 parallel -j0 -k --lb 'echo {};sleep {};echo {}' ::: 1 3 2 4
1276 See also: B<--group> B<--ungroup>
1279 =item B<--xapply>
1281 =item B<--link>
1283 Link input sources. Read multiple input sources like B<xapply>. If
1284 multiple input sources are given, one argument will be read from each
1285 of the input sources. The arguments can be accessed in the command as
1286 B<{1}> .. B<{>I<n>B<}>, so B<{1}> will be a line from the first input
1287 source, and B<{6}> will refer to the line with the same line number
1288 from the 6th input source.
1290 Compare these two:
1292 parallel echo {1} {2} ::: 1 2 3 ::: a b c
1293 parallel --link echo {1} {2} ::: 1 2 3 ::: a b c
1295 Arguments will be recycled if one input source has more arguments than the others:
1297 parallel --link echo {1} {2} {3} \
1298 ::: 1 2 ::: I II III ::: a b c d e f g
1300 See also B<--header>, B<:::+>, B<::::+>.
1303 =item B<--load> I<max-load>
1305 Do not start new jobs on a given computer unless the number of running
1306 processes on the computer is less than I<max-load>. I<max-load> uses
1307 the same syntax as B<--jobs>, so I<100%> for one per CPU is a valid
setting. The only difference is that 0 is interpreted as 0.01.
1311 =item B<--controlmaster>
1313 =item B<-M>
1315 Use ssh's ControlMaster to make ssh connections faster. Useful if jobs
run remotely and are very fast to run. This is disabled for sshlogins
1317 that specify their own ssh command.
1320 =item B<--xargs>
1322 Multiple arguments. Insert as many arguments as the command line
1323 length permits.
1325 If B<{}> is not used the arguments will be appended to the
1326 line. If B<{}> is used multiple times each B<{}> will be replaced
1327 with all the arguments.
1329 Support for B<--xargs> with B<--sshlogin> is limited and may fail.
1331 See also B<-X> for context replace. If in doubt use B<-X> as that will
1332 most likely do what is needed.
1335 =item B<-m>
1337 Multiple arguments. Insert as many arguments as the command line
1338 length permits. If multiple jobs are being run in parallel: distribute
1339 the arguments evenly among the jobs. Use B<-j1> or B<--xargs> to avoid this.
1341 If B<{}> is not used the arguments will be appended to the
1342 line. If B<{}> is used multiple times each B<{}> will be replaced
1343 with all the arguments.
1345 Support for B<-m> with B<--sshlogin> is limited and may fail.
1347 See also B<-X> for context replace. If in doubt use B<-X> as that will
1348 most likely do what is needed.
1351 =item B<--memfree> I<size>
1353 Minimum memory free when starting another job. The I<size> can be
1354 postfixed with K, M, G, T, P, k, m, g, t, or p which would multiply
1355 the size with 1024, 1048576, 1073741824, 1099511627776,
1356 1125899906842624, 1000, 1000000, 1000000000, 1000000000000, or
1357 1000000000000000, respectively.
If the jobs take up very different amounts of RAM, GNU B<parallel> will
1360 only start as many as there is memory for. If less than I<size> bytes
1361 are free, no more jobs will be started. If less than 50% I<size> bytes
1362 are free, the youngest job will be killed, and put back on the queue
1363 to be run later.
1365 B<--retries> must be set to determine how many times GNU B<parallel>
1366 should retry a given job.
1369 =item B<--minversion> I<version>
Print the version of GNU B<parallel> and exit. If the current version of
1372 GNU B<parallel> is less than I<version> the exit code is
1373 255. Otherwise it is 0.
1375 This is useful for scripts that depend on features only available from
1376 a certain version of GNU B<parallel>.
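For example, a script can refuse to run with an older GNU B<parallel>
(the version number is only illustrative):

  parallel --minversion 20170422 >/dev/null ||
    { echo 'GNU parallel is too old' >&2; exit 1; }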
1379 =item B<--nonall>
1381 B<--onall> with no arguments. Run the command on all computers given
1382 with B<--sshlogin> but take no arguments. GNU B<parallel> will log
into B<--jobs> number of computers in parallel and run the job on
each computer. B<-j> adjusts how many computers to log into in parallel.
1386 This is useful for running the same command (e.g. uptime) on a list of
1387 servers.
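For example, to run B<uptime> once on each server (the server names
are placeholders):

  parallel --nonall -S server1,server2 uptime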
1390 =item B<--onall>
1392 Run all the jobs on all computers given with B<--sshlogin>. GNU
1393 B<parallel> will log into B<--jobs> number of computers in parallel
and run one job at a time on each computer. The order of the jobs will
1395 not be changed, but some computers may finish before others.
1397 When using B<--group> the output will be grouped by each server, so
1398 all the output from one server will be grouped together.
1400 B<--joblog> will contain an entry for each job on each server, so
there will be several entries with job sequence 1.
1404 =item B<--output-as-files>
1406 =item B<--outputasfiles>
1408 =item B<--files>
1410 Instead of printing the output to stdout (standard output) the output
1411 of each job is saved in a file and the filename is then printed.
1413 See also: B<--results>
1416 =item B<--pipe>
1418 =item B<--spreadstdin>
1420 Spread input to jobs on stdin (standard input). Read a block of data
1421 from stdin (standard input) and give one block of data as input to one
1422 job.
1424 The block size is determined by B<--block>. The strings B<--recstart>
1425 and B<--recend> tell GNU B<parallel> how a record starts and/or
1426 ends. The block read will have the final partial record removed before
1427 the block is passed on to the job. The partial record will be
1428 prepended to next block.
1430 If B<--recstart> is given this will be used to split at record start.
1432 If B<--recend> is given this will be used to split at record end.
1434 If both B<--recstart> and B<--recend> are given both will have to
1435 match to find a split position.
1437 If neither B<--recstart> nor B<--recend> are given B<--recend>
1438 defaults to '\n'. To have no record separator use B<--recend "">.
1440 B<--files> is often used with B<--pipe>.
1442 B<--pipe> maxes out at around 1 GB/s input, and 100 MB/s output. If
1443 performance is important use B<--pipepart>.
1445 See also: B<--recstart>, B<--recend>, B<--fifo>, B<--cat>,
1446 B<--pipepart>, B<--files>.
1449 =item B<--pipepart>
1451 Pipe parts of a physical file. B<--pipepart> works similar to
1452 B<--pipe>, but is much faster.
1454 B<--pipepart> has a few limitations:
1456 =over 3
1458 =item *
1460 The file must be a normal file or a block device (technically it must
1461 be seekable) and must be given using B<-a> or B<::::>. The file cannot
1462 be a pipe or a fifo as they are not seekable.
If using a block device with a lot of NUL bytes, remember to set
1465 B<--recend ''>.
1467 =item *
1469 Record counting (B<-N>) and line counting (B<-L>/B<-l>) do not work.
1471 =back
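A minimal illustration (I<bigfile> is a placeholder for a seekable
file given with B<-a>):

  parallel --pipepart -a bigfile --block 100M wc -l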
1474 =item B<--plain>
1476 Ignore any B<--profile>, $PARALLEL, and ~/.parallel/config to get full
1477 control on the command line (used by GNU B<parallel> internally when
1478 called with B<--sshlogin>).
1481 =item B<--plus>
1483 Activate additional replacement strings: {+/} {+.} {+..} {+...} {..}
1484 {...} {/..} {/...} {##}. The idea being that '{+foo}' matches the opposite of
1485 '{foo}' and {} = {+/}/{/} = {.}.{+.} = {+/}/{/.}.{+.} = {..}.{+..} =
1486 {+/}/{/..}.{+..} = {...}.{+...} = {+/}/{/...}.{+...}
1488 B<{##}> is the number of jobs to be run. It is incompatible with
1489 B<-X>/B<-m>/B<--xargs>.
1491 B<{choose_k}> is inspired by n choose k: Given a list of n elements,
1492 choose k. k is the number of input sources and n is the number of
1493 arguments in an input source. The content of the input sources must
1494 be the same and the arguments must be unique.
1496 The following dynamic replacement strings are also activated. They are
1497 inspired by bash's parameter expansion:
1499 {:-str} str if the value is empty
1500 {:num} remove the first num characters
1501 {:num1:num2} characters from num1 to num2
1502 {#str} remove prefix str
1503 {%str} remove postfix str
1504 {/str1/str2} replace str1 with str2
1505 {^str} uppercase str if found at the start
1506 {^^str} uppercase str
1507 {,str} lowercase str if found at the start
1508 {,,str} lowercase str
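For example, B<{##}> gives the total number of jobs (B<-k> keeps the
output in input order):

  parallel -k --plus echo {} is job {#} of {##} ::: a b c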
1511 =item B<--progress>
1513 Show progress of computations. List the computers involved in the task
1514 with number of CPUs detected and the max number of jobs to run. After
1515 that show progress for each computer: number of running jobs, number
1516 of completed jobs, and percentage of all jobs done by this
1517 computer. The percentage will only be available after all jobs have
been scheduled as GNU B<parallel> only reads the next job when ready to
1519 schedule it - this is to avoid wasting time and memory by reading
1520 everything at startup.
1522 By sending GNU B<parallel> SIGUSR2 you can toggle turning on/off
1523 B<--progress> on a running GNU B<parallel> process.
1525 See also B<--eta> and B<--bar>.
1528 =item B<--max-args>=I<max-args>
1530 =item B<-n> I<max-args>
1532 Use at most I<max-args> arguments per command line. Fewer than
1533 I<max-args> arguments will be used if the size (see the B<-s> option)
1534 is exceeded, unless the B<-x> option is given, in which case
1535 GNU B<parallel> will exit.
1537 B<-n 0> means read one argument, but insert 0 arguments on the command
1538 line.
1540 Implies B<-X> unless B<-m> is set.
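For example, passing at most 3 arguments per command line (B<-k>
keeps the output in input order) should print "1 2 3" and "4 5 6":

  seq 6 | parallel -k -n 3 echo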
1543 =item B<--max-replace-args>=I<max-args>
1545 =item B<-N> I<max-args>
1547 Use at most I<max-args> arguments per command line. Like B<-n> but
1548 also makes replacement strings B<{1}> .. B<{>I<max-args>B<}> that
represent arguments 1 .. I<max-args>. If there are too few args, the
B<{>I<n>B<}> will be empty.
1552 B<-N 0> means read one argument, but insert 0 arguments on the command
1553 line.
1555 This will set the owner of the homedir to the user:
1557 tr ':' '\n' < /etc/passwd | parallel -N7 chown {1} {6}
1559 Implies B<-X> unless B<-m> or B<--pipe> is set.
1561 When used with B<--pipe> B<-N> is the number of records to read. This
1562 is somewhat slower than B<--block>.
1565 =item B<--max-line-length-allowed>
1567 Print the maximal number of characters allowed on the command line and
1568 exit (used by GNU B<parallel> itself to determine the line length
1569 on remote computers).
1572 =item B<--number-of-cpus> (obsolete)
1574 Print the number of physical CPU cores and exit.
1577 =item B<--number-of-cores> (beta testing)
1579 Print the number of physical CPU cores and exit (used by GNU B<parallel> itself
1580 to determine the number of physical CPU cores on remote computers).
1583 =item B<--number-of-sockets> (beta testing)
1585 Print the number of filled CPU sockets and exit (used by GNU
1586 B<parallel> itself to determine the number of filled CPU sockets on
1587 remote computers).
1590 =item B<--number-of-threads> (beta testing)
1592 Print the number of hyperthreaded CPU cores and exit (used by GNU
1593 B<parallel> itself to determine the number of hyperthreaded CPU cores
1594 on remote computers).
1597 =item B<--no-keep-order>
1599 Overrides an earlier B<--keep-order> (e.g. if set in
1600 B<~/.parallel/config>).
1603 =item B<--nice> I<niceness> (alpha testing)
1605 Run the command at this niceness.
1607 By default GNU B<parallel> will run jobs at the same nice level as GNU
1608 B<parallel> is started - both on the local machine and remote servers,
1609 so you are unlikely to ever use this option.
1611 Setting B<--nice> will override this nice level. If the nice level is
1612 smaller than the current nice level, it will only affect remote jobs
1613 (e.g. current level is 10 and B<--nice 5> will cause local jobs to be
1614 run at level 10, but remote jobs run at nice level 5).
1617 =item B<--interactive>
1619 =item B<-p>
1621 Prompt the user about whether to run each command line and read a line
1622 from the terminal. Only run the command line if the response starts
1623 with 'y' or 'Y'. Implies B<-t>.
1626 =item B<--parens> I<parensstring>
1628 Define start and end parenthesis for B<{= perl expression =}>. The
1629 left and the right parenthesis can be multiple characters and are
1630 assumed to be the same length. The default is B<{==}> giving B<{=> as
1631 the start parenthesis and B<=}> as the end parenthesis.
1633 Another useful setting is B<,,,,> which would make both parenthesis
1634 B<,,>:
1636 parallel --parens ,,,, echo foo is ,,s/I/O/g,, ::: FII
1638 See also: B<--rpl> B<{= perl expression =}>
1641 =item B<--profile> I<profilename> (beta testing)
1643 =item B<-J> I<profilename> (beta testing)
1645 Use profile I<profilename> for options. This is useful if you want to
1646 have multiple profiles. You could have one profile for running jobs in
1647 parallel on the local computer and a different profile for running jobs
1648 on remote computers. See the section PROFILE FILES for examples.
1650 I<profilename> corresponds to the file ~/.parallel/I<profilename>.
1652 You can give multiple profiles by repeating B<--profile>. If parts of
1653 the profiles conflict, the later ones will be used.
1655 Default: config
1658 =item B<--quote>
1660 =item B<-q>
1662 Quote I<command>. If your command contains special characters that
1663 should not be interpreted by the shell (e.g. ; \ | *), use B<--quote> to
1664 escape these. The command must be a simple command (see B<man
1665 bash>) without redirections and without variable assignments.
1667 See the section QUOTING. Most people will not need this. Quoting is
1668 disabled by default.
1671 =item B<--no-run-if-empty>
1673 =item B<-r>
1675 If the stdin (standard input) only contains whitespace, do not run the command.
1677 If used with B<--pipe> this is slow.
1680 =item B<--noswap>
1682 Do not start new jobs on a given computer if there is both swap-in and
1683 swap-out activity.
1685 The swap activity is only sampled every 10 seconds as the sampling
1686 takes 1 second to do.
1688 Swap activity is computed as (swap-in)*(swap-out) which in practice is
1689 a good value: swapping out is not a problem, swapping in is not a
1690 problem, but both swapping in and out usually indicates a problem.
1692 B<--memfree> may give better results, so try using that first.
1695 =item B<--record-env>
1697 Record current environment variables in ~/.parallel/ignored_vars. This
1698 is useful before using B<--env _>.
1700 See also B<--env>, B<--session>.
1703 =item B<--recstart> I<startstring>
1705 =item B<--recend> I<endstring>
1707 If B<--recstart> is given I<startstring> will be used to split at record start.
1709 If B<--recend> is given I<endstring> will be used to split at record end.
1711 If both B<--recstart> and B<--recend> are given the combined string
1712 I<endstring>I<startstring> will have to match to find a split
1713 position. This is useful if either I<startstring> or I<endstring>
1714 match in the middle of a record.
1716 If neither B<--recstart> nor B<--recend> are given then B<--recend>
1717 defaults to '\n'. To have no record separator use B<--recend "">.
1719 B<--recstart> and B<--recend> are used with B<--pipe>.
1721 Use B<--regexp> to interpret B<--recstart> and B<--recend> as regular
1722 expressions. This is slow, however.
1725 =item B<--regexp>
1727 Use B<--regexp> to interpret B<--recstart> and B<--recend> as regular
1728 expressions. This is slow, however.
1731 =item B<--remove-rec-sep>
1733 =item B<--removerecsep>
1735 =item B<--rrs>
1737 Remove the text matched by B<--recstart> and B<--recend> before piping
1738 it to the command.
1740 Only used with B<--pipe>.
1743 =item B<--results> I<name>
1745 =item B<--res> I<name>
1747 Save the output into files.
1749 B<Simple string output dir>
1751 If I<name> does not contain replacement strings and does not end in
1752 B<.csv/.tsv>, the output will be stored in a directory tree rooted at
1753 I<name>. Within this directory tree, each command will result in
three files: I<name>/<ARGS>/seq, I<name>/<ARGS>/stdout, and
I<name>/<ARGS>/stderr, where <ARGS> is a sequence of directories
1756 representing the header of the input source (if using B<--header :>)
1757 or the number of the input source and corresponding values.
1759 E.g:
1761 parallel --header : --results foo echo {a} {b} \
1762 ::: a I II ::: b III IIII
1764 will generate the files:
1766 foo/a/II/b/III/seq
1767 foo/a/II/b/III/stderr
1768 foo/a/II/b/III/stdout
1769 foo/a/II/b/IIII/seq
1770 foo/a/II/b/IIII/stderr
1771 foo/a/II/b/IIII/stdout
1772 foo/a/I/b/III/seq
1773 foo/a/I/b/III/stderr
1774 foo/a/I/b/III/stdout
1775 foo/a/I/b/IIII/seq
1776 foo/a/I/b/IIII/stderr
1777 foo/a/I/b/IIII/stdout
1781 parallel --results foo echo {1} {2} ::: I II ::: III IIII
1783 will generate the files:
1785 foo/1/II/2/III/seq
1786 foo/1/II/2/III/stderr
1787 foo/1/II/2/III/stdout
1788 foo/1/II/2/IIII/seq
1789 foo/1/II/2/IIII/stderr
1790 foo/1/II/2/IIII/stdout
1791 foo/1/I/2/III/seq
1792 foo/1/I/2/III/stderr
1793 foo/1/I/2/III/stdout
1794 foo/1/I/2/IIII/seq
1795 foo/1/I/2/IIII/stderr
1796 foo/1/I/2/IIII/stdout
1799 B<CSV file output>
1801 If I<name> ends in B<.csv>/B<.tsv> the output will be a CSV-file
1802 named I<name>.
1804 B<.csv> gives a comma separated value file. B<.tsv> gives a TAB
1805 separated value file.
B<-.csv>/B<-.tsv> are special: the file will be written to stdout
(standard output).
1811 B<Replacement string output file>
1813 If I<name> contains a replacement string and the replaced result does
1814 not end in /, then the standard output will be stored in a file named
1815 by this result. Standard error will be stored in the same file name
1816 with '.err' added, and the sequence number will be stored in the same
1817 file name with '.seq' added.
1819 E.g.
1821 parallel --results my_{} echo ::: foo bar baz
1823 will generate the files:
1825 my_bar
1826 my_bar.err
1827 my_bar.seq
1828 my_baz
1829 my_baz.err
1830 my_baz.seq
1831 my_foo
1832 my_foo.err
1833 my_foo.seq
1836 B<Replacement string output dir>
1838 If I<name> contains a replacement string and the replaced result ends
1839 in /, then output files will be stored in the resulting dir.
1841 E.g.
1843 parallel --results my_{}/ echo ::: foo bar baz
1845 will generate the files:
1847 my_bar/seq
1848 my_bar/stderr
1849 my_bar/stdout
1850 my_baz/seq
1851 my_baz/stderr
1852 my_baz/stdout
1853 my_foo/seq
1854 my_foo/stderr
1855 my_foo/stdout
1857 See also B<--files>, B<--tag>, B<--header>, B<--joblog>.
1860 =item B<--resume>
1862 Resumes from the last unfinished job. By reading B<--joblog> or the
1863 B<--results> dir GNU B<parallel> will figure out the last unfinished
job and continue from there. As GNU B<parallel> only looks at the
sequence numbers in B<--joblog>, the input, the command, and
B<--joblog> all have to remain unchanged; otherwise GNU B<parallel>
may run the wrong commands.
1869 See also B<--joblog>, B<--results>, B<--resume-failed>, B<--retries>.
1872 =item B<--resume-failed>
1874 Retry all failed and resume from the last unfinished job. By reading
1875 B<--joblog> GNU B<parallel> will figure out the failed jobs and run
those again. After that it will resume the last unfinished job and
continue from there. As GNU B<parallel> only looks at the sequence
numbers in B<--joblog>, the input, the command, and B<--joblog>
all have to remain unchanged; otherwise GNU B<parallel> may run the
wrong commands.
1882 See also B<--joblog>, B<--resume>, B<--retry-failed>, B<--retries>.
1885 =item B<--retry-failed>
1887 Retry all failed jobs in joblog. By reading B<--joblog> GNU
1888 B<parallel> will figure out the failed jobs and run those again.
1890 B<--retry-failed> ignores the command and arguments on the command
1891 line: It only looks at the joblog.
1893 B<Differences between --resume, --resume-failed, --retry-failed>
1895 In this example B<exit {= $_%=2 =}> will cause every other job to fail.
1897 timeout -k 1 4 parallel --joblog log -j10 \
1898 'sleep {}; exit {= $_%=2 =}' ::: {10..1}
1900 4 jobs completed. 2 failed:
1902 Seq [...] Exitval Signal Command
1903 10 [...] 1 0 sleep 1; exit 1
1904 9 [...] 0 0 sleep 2; exit 0
1905 8 [...] 1 0 sleep 3; exit 1
1906 7 [...] 0 0 sleep 4; exit 0
1908 B<--resume> does not care about the Exitval, but only looks at Seq. If
1909 the Seq is run, it will not be run again. So if needed, you can change
1910 the command for the seqs not run yet:
1912 parallel --resume --joblog log -j10 \
1913 'sleep .{}; exit {= $_%=2 =}' ::: {10..1}
1915 Seq [...] Exitval Signal Command
1916 [... as above ...]
1917 1 [...] 0 0 sleep .10; exit 0
1918 6 [...] 1 0 sleep .5; exit 1
1919 5 [...] 0 0 sleep .6; exit 0
1920 4 [...] 1 0 sleep .7; exit 1
1921 3 [...] 0 0 sleep .8; exit 0
1922 2 [...] 1 0 sleep .9; exit 1
1924 B<--resume-failed> cares about the Exitval, but also only looks at Seq
1925 to figure out which commands to run. Again this means you can change
1926 the command, but not the arguments. It will run the failed seqs and
1927 the seqs not yet run:
1929 parallel --resume-failed --joblog log -j10 \
1930 'echo {};sleep .{}; exit {= $_%=3 =}' ::: {10..1}
1932 Seq [...] Exitval Signal Command
1933 [... as above ...]
1934 10 [...] 1 0 echo 1;sleep .1; exit 1
1935 8 [...] 0 0 echo 3;sleep .3; exit 0
1936 6 [...] 2 0 echo 5;sleep .5; exit 2
1937 4 [...] 1 0 echo 7;sleep .7; exit 1
1938 2 [...] 0 0 echo 9;sleep .9; exit 0
1940 B<--retry-failed> cares about the Exitval, but takes the command from
1941 the joblog. It ignores any arguments or commands given on the command
1942 line:
1944 parallel --retry-failed --joblog log -j10 this part is ignored
1946 Seq [...] Exitval Signal Command
1947 [... as above ...]
1948 10 [...] 1 0 echo 1;sleep .1; exit 1
1949 6 [...] 2 0 echo 5;sleep .5; exit 2
1950 4 [...] 1 0 echo 7;sleep .7; exit 1
1952 See also B<--joblog>, B<--resume>, B<--resume-failed>, B<--retries>.
1955 =item B<--retries> I<n>
1957 If a job fails, retry it on another computer on which it has not
1958 failed. Do this I<n> times. If there are fewer than I<n> computers in
1959 B<--sshlogin> GNU B<parallel> will re-use all the computers. This is
1960 useful if some jobs fail for no apparent reason (such as network
1961 failure).
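A sketch of typical use (the servers and the command here are
placeholders for this illustration):

  # server1, server2 and ./process_file are placeholders
  parallel --retries 3 -S server1,server2 ./process_file {} ::: data/*.txt

Each failing job is retried (up to 3 times) on a computer where it has
not yet failed.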
1964 =item B<--return> I<filename>
1966 Transfer files from remote computers. B<--return> is used with
1967 B<--sshlogin> when the arguments are files on the remote computers. When
1968 processing is done the file I<filename> will be transferred
1969 from the remote computer using B<rsync> and will be put relative to
1970 the default login dir. E.g.
1972 echo foo/bar.txt | parallel --return {.}.out \
1973 --sshlogin server.example.com touch {.}.out
1975 This will transfer the file I<$HOME/foo/bar.out> from the computer
1976 I<server.example.com> to the file I<foo/bar.out> after running
1977 B<touch foo/bar.out> on I<server.example.com>.
1979 parallel -S server --trc out/./{}.out touch {}.out ::: in/file
1981 This will transfer the file I<in/file.out> from the computer
I<server.example.com> to the file I<out/in/file.out> after running
1983 B<touch in/file.out> on I<server>.
1985 echo /tmp/foo/bar.txt | parallel --return {.}.out \
1986 --sshlogin server.example.com touch {.}.out
1988 This will transfer the file I</tmp/foo/bar.out> from the computer
1989 I<server.example.com> to the file I</tmp/foo/bar.out> after running
1990 B<touch /tmp/foo/bar.out> on I<server.example.com>.
1992 Multiple files can be transferred by repeating the option multiple
1993 times:
1995 echo /tmp/foo/bar.txt | parallel \
1996 --sshlogin server.example.com \
1997 --return {.}.out --return {.}.out2 touch {.}.out {.}.out2
1999 B<--return> is often used with B<--transferfile> and B<--cleanup>.
2001 B<--return> is ignored when used with B<--sshlogin :> or when not used
2002 with B<--sshlogin>.
2005 =item B<--round-robin>
2007 =item B<--round>
Normally B<--pipe> will give a single block to each instance of the
command. With B<--roundrobin> all blocks will be written at random to
commands that are already running. This is useful if the command takes
a long time to initialize.
2014 B<--keep-order> will not work with B<--roundrobin> as it is
2015 impossible to track which input block corresponds to which output.
2017 B<--roundrobin> implies B<--pipe>, except if B<--pipepart> is given.
2019 See also B<--group-by>, B<--shard>.
2022 =item B<--rpl> 'I<tag> I<perl expression>'
2024 Use I<tag> as a replacement string for I<perl expression>. This makes
2025 it possible to define your own replacement strings. GNU B<parallel>'s
2026 7 replacement strings are implemented as:
2028 --rpl '{} '
2029 --rpl '{#} 1 $_=$job->seq()'
2030 --rpl '{%} 1 $_=$job->slot()'
2031 --rpl '{/} s:.*/::'
2032 --rpl '{//} $Global::use{"File::Basename"} ||=
2033 eval "use File::Basename; 1;"; $_ = dirname($_);'
2034 --rpl '{/.} s:.*/::; s:\.[^/.]+$::;'
2035 --rpl '{.} s:\.[^/.]+$::'
2037 The B<--plus> replacement strings are implemented as:
2039 --rpl '{+/} s:/[^/]*$::'
2040 --rpl '{+.} s:.*\.::'
2041 --rpl '{+..} s:.*\.([^.]*\.):$1:'
2042 --rpl '{+...} s:.*\.([^.]*\.[^.]*\.):$1:'
2043 --rpl '{..} s:\.[^/.]+$::; s:\.[^/.]+$::'
2044 --rpl '{...} s:\.[^/.]+$::; s:\.[^/.]+$::; s:\.[^/.]+$::'
2045 --rpl '{/..} s:.*/::; s:\.[^/.]+$::; s:\.[^/.]+$::'
2046 --rpl '{/...} s:.*/::;s:\.[^/.]+$::;s:\.[^/.]+$::;s:\.[^/.]+$::'
2047 --rpl '{##} $_=total_jobs()'
2048 --rpl '{:-(.+?)} $_ ||= $$1'
2049 --rpl '{:(\d+?)} substr($_,0,$$1) = ""'
2050 --rpl '{:(\d+?):(\d+?)} $_ = substr($_,$$1,$$2);'
2051 --rpl '{#([^#].*?)} s/^$$1//;'
2052 --rpl '{%(.+?)} s/$$1$//;'
2053 --rpl '{/(.+?)/(.*?)} s/$$1/$$2/;'
2054 --rpl '{^(.+?)} s/^($$1)/uc($1)/e;'
2055 --rpl '{^^(.+?)} s/($$1)/uc($1)/eg;'
2056 --rpl '{,(.+?)} s/^($$1)/lc($1)/e;'
2057 --rpl '{,,(.+?)} s/($$1)/lc($1)/eg;'
2060 If the user defined replacement string starts with '{' it can also be
2061 used as a positional replacement string (like B<{2.}>).
2063 It is recommended to only change $_ but you have full access to all
2064 of GNU B<parallel>'s internal functions and data structures.
2066 Here are a few examples:
2068 Is the job sequence even or odd?
2069 --rpl '{odd} $_ = seq() % 2 ? "odd" : "even"'
2070 Pad job sequence with leading zeros to get equal width
2071 --rpl '{0#} $f=1+int("".(log(total_jobs())/log(10)));
2072 $_=sprintf("%0${f}d",seq())'
2073 Job sequence counting from 0
2074 --rpl '{#0} $_ = seq() - 1'
2075 Job slot counting from 2
2076 --rpl '{%1} $_ = slot() + 1'
2077 Remove all extensions
2078 --rpl '{:} s:(\.[^/]+)*$::'
You can have dynamic replacement strings by including parentheses in
the replacement string and adding a regular expression between the
parentheses. The matching string will be inserted as $$1:
2084 parallel --rpl '{%(.*?)} s/$$1//' echo {%.tar.gz} ::: my.tar.gz
2085 parallel --rpl '{:%(.+?)} s:$$1(\.[^/]+)*$::' \
2086 echo {:%_file} ::: my_file.tar.gz
2087 parallel -n3 --rpl '{/:%(.*?)} s:.*/(.*)$$1(\.[^/]+)*$:$1:' \
2088 echo job {#}: {2} {2.} {3/:%_1} ::: a/b.c c/d.e f/g_1.h.i
2090 You can even use multiple matches:
parallel --rpl '{/(.+?)/(.*?)} s/$$1/$$2/;' \
2093 echo {/replacethis/withthis} {/b/C} ::: a_replacethis_b
2095 parallel --rpl '{(.*?)/(.*?)} $_="$$2$_$$1"' \
2096 echo {swap/these} ::: -middle-
2098 See also: B<{= perl expression =}> B<--parens>
2101 =item B<--rsync-opts> I<options>
2103 Options to pass on to B<rsync>. Setting B<--rsync-opts> takes
2104 precedence over setting the environment variable $PARALLEL_RSYNC_OPTS.
2107 =item B<--max-chars>=I<max-chars>
2109 =item B<-s> I<max-chars>
2111 Use at most I<max-chars> characters per command line, including the
2112 command and initial-arguments and the terminating nulls at the ends of
2113 the argument strings. The largest allowed value is system-dependent,
2114 and is calculated as the argument length limit for exec, less the size
2115 of your environment. The default value is the maximum.
2117 Implies B<-X> unless B<-m> is set.
2120 =item B<--show-limits>
2122 Display the limits on the command-line length which are imposed by the
2123 operating system and the B<-s> option. Pipe the input from /dev/null
2124 (and perhaps specify --no-run-if-empty) if you don't want GNU B<parallel>
2125 to do anything.
2128 =item B<--semaphore>
2130 Work as a counting semaphore. B<--semaphore> will cause GNU
2131 B<parallel> to start I<command> in the background. When the number of
2132 jobs given by B<--jobs> is reached, GNU B<parallel> will wait for one of
2133 these to complete before starting another command.
2135 B<--semaphore> implies B<--bg> unless B<--fg> is specified.
2137 B<--semaphore> implies B<--semaphorename `tty`> unless
2138 B<--semaphorename> is specified.
2140 Used with B<--fg>, B<--wait>, and B<--semaphorename>.
2142 The command B<sem> is an alias for B<parallel --semaphore>.
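A small illustration: start two jobs in the background, never running
more than one at a time, then wait for both to finish:

  sem -j1 'sleep 3; echo first done'
  sem -j1 'sleep 3; echo second done'
  sem --wait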
2144 See also B<man sem>.
2147 =item B<--semaphorename> I<name>
2149 =item B<--id> I<name>
2151 Use B<name> as the name of the semaphore. Default is the name of the
2152 controlling tty (output from B<tty>).
2154 The default normally works as expected when used interactively, but
when used in a script I<name> should be set. I<$$> or I<my_task_name>
are often good values.
2158 The semaphore is stored in ~/.parallel/semaphores/
2160 Implies B<--semaphore>.
2162 See also B<man sem>.
2165 =item B<--semaphoretimeout> I<secs>
2167 =item B<--st> I<secs>
2169 If I<secs> > 0: If the semaphore is not released within I<secs> seconds, take it anyway.
If I<secs> < 0: If the semaphore is not released within -I<secs> seconds, exit.
2173 Implies B<--semaphore>.
2175 See also B<man sem>.
2178 =item B<--seqreplace> I<replace-str>
2180 Use the replacement string I<replace-str> instead of B<{#}> for
2181 job sequence number.
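E.g. to use B<@@> instead of B<{#}>:

  parallel -k --seqreplace @@ echo job @@: {} ::: a b c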
2184 =item B<--session>
2186 Record names in current environment in B<$PARALLEL_IGNORED_NAMES> and
2187 exit. Only used with B<env_parallel>. Aliases, functions, and
2188 variables with names in B<$PARALLEL_IGNORED_NAMES> will not be copied.
2190 Only supported in B<Ash, Bash, Dash, Ksh, Sh, and Zsh>.
2192 See also B<--env>, B<--record-env>.
2195 =item B<--shard> I<shardexpr> (alpha testing)
2197 Use I<shardexpr> as shard key and shard input to the jobs.
2199 I<shardexpr> is [column number|column name] [perlexpression] e.g. 3,
2200 Address, 3 $_%=100, Address s/\d//g.
2202 Each input line is split using B<--colsep>. The value of the column is
2203 put into $_, the perl expression is executed, the resulting value is
hashed so that all lines with a given value are given to the same job
slot.
2207 This is similar to sharding in databases.
2209 The performance is in the order of 100K rows per second. Faster if the
2210 I<shardcol> is small (<10), slower if it is big (>100).
2212 B<--shard> requires B<--pipe> and a fixed numeric value for B<--jobs>.
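A minimal sketch (the input lines are made up): shard on column 1 so
that all lines with the same key end up in the same one of 2 jobs:

  printf 'a 1\nb 2\na 3\nb 4\n' |
    parallel --pipe --colsep ' ' --shard 1 -j2 wc -l

Each of the two jobs should count 2 lines.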
2214 See also B<--bin>, B<--group-by>, B<--roundrobin>.
2217 =item B<--shebang>
2219 =item B<--hashbang>
2221 GNU B<parallel> can be called as a shebang (#!) command as the first
line of a script. The content of the file will be treated as the
input source.
2225 Like this:
2227 #!/usr/bin/parallel --shebang -r wget
2229 https://ftpmirror.gnu.org/parallel/parallel-20120822.tar.bz2
2230 https://ftpmirror.gnu.org/parallel/parallel-20130822.tar.bz2
2231 https://ftpmirror.gnu.org/parallel/parallel-20140822.tar.bz2
2233 B<--shebang> must be set as the first option.
2235 On FreeBSD B<env> is needed:
2237 #!/usr/bin/env -S parallel --shebang -r wget
2239 https://ftpmirror.gnu.org/parallel/parallel-20120822.tar.bz2
2240 https://ftpmirror.gnu.org/parallel/parallel-20130822.tar.bz2
2241 https://ftpmirror.gnu.org/parallel/parallel-20140822.tar.bz2
2243 There are many limitations of shebang (#!) depending on your operating
2244 system. See details on http://www.in-ulm.de/~mascheck/various/shebang/
2247 =item B<--shebang-wrap>
2249 GNU B<parallel> can parallelize scripts by wrapping the shebang
2250 line. If the program can be run like this:
2252 cat arguments | parallel the_program
2254 then the script can be changed to:
2256 #!/usr/bin/parallel --shebang-wrap /original/parser --options
2258 E.g.
2260 #!/usr/bin/parallel --shebang-wrap /usr/bin/python
2262 If the program can be run like this:
2264 cat data | parallel --pipe the_program
2266 then the script can be changed to:
2268 #!/usr/bin/parallel --shebang-wrap --pipe /orig/parser --opts
2270 E.g.
2272 #!/usr/bin/parallel --shebang-wrap --pipe /usr/bin/perl -w
2274 B<--shebang-wrap> must be set as the first option.
2277 =item B<--shellquote>
2279 Does not run the command but quotes it. Useful for making quoted
2280 composed commands for GNU B<parallel>.
Multiple B<--shellquote> will quote the string multiple times, so
2283 B<parallel --shellquote | parallel --shellquote> can be written as
2284 B<parallel --shellquote --shellquote>.
2287 =item B<--shuf>
Shuffle jobs. When there are multiple input sources it is hard to
randomize jobs. B<--shuf> will generate all jobs, and shuffle them before
2291 running them. This is useful to get a quick preview of the results
2292 before running the full batch.
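E.g.:

  parallel --shuf echo ::: A B ::: 1 2 3

runs all 6 combinations, but in random order.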
2295 =item B<--skip-first-line>
2297 Do not use the first line of input (used by GNU B<parallel> itself
2298 when called with B<--shebang>).
2301 =item B<--sql> I<DBURL> (obsolete)
2303 Use B<--sqlmaster> instead.
2306 =item B<--sqlmaster> I<DBURL>
2308 Submit jobs via SQL server. I<DBURL> must point to a table, which will
2309 contain the same information as B<--joblog>, the values from the input
2310 sources (stored in columns V1 .. Vn), and the output (stored in
2311 columns Stdout and Stderr).
2313 If I<DBURL> is prepended with '+' GNU B<parallel> assumes the table is
2314 already made with the correct columns and appends the jobs to it.
2316 If I<DBURL> is not prepended with '+' the table will be dropped and
created with the correct amount of V-columns.
2319 B<--sqlmaster> does not run any jobs, but it creates the values for
2320 the jobs to be run. One or more B<--sqlworker> must be run to actually
2321 execute the jobs.
2323 If B<--wait> is set, GNU B<parallel> will wait for the jobs to
2324 complete.
2326 The format of a DBURL is:
2328 [sql:]vendor://[[user][:pwd]@][host][:port]/[db]/table
2330 E.g.
2332 sql:mysql://hr:hr@localhost:3306/hrdb/jobs
2333 mysql://scott:tiger@my.example.com/pardb/paralleljobs
2334 sql:oracle://scott:tiger@ora.example.com/xe/parjob
2335 postgresql://scott:tiger@pg.example.com/pgdb/parjob
2336 pg:///parjob
2337 sqlite3:///pardb/parjob
2339 It can also be an alias from ~/.sql/aliases:
2341 :myalias mysql:///mydb/paralleljobs
2344 =item B<--sqlandworker> I<DBURL>
2346 Shorthand for: B<--sqlmaster> I<DBURL> B<--sqlworker> I<DBURL>.
2349 =item B<--sqlworker> I<DBURL>
Execute jobs via SQL server. Read the input source variables from the
2352 table pointed to by I<DBURL>. The I<command> on the command line
2353 should be the same as given by B<--sqlmaster>.
If you have more than one B<--sqlworker>, jobs may be run more than
2356 once.
2358 If B<--sqlworker> runs on the local machine, the hostname in the SQL
2359 table will not be ':' but instead the hostname of the machine.
2362 =item B<--ssh> I<sshcommand>
2364 GNU B<parallel> defaults to using B<ssh> for remote access. This can
2365 be overridden with B<--ssh>. It can also be set on a per server
2366 basis (see B<--sshlogin>).
2369 =item B<--sshdelay> I<secs>
2371 Delay starting next ssh by I<secs> seconds. GNU B<parallel> will pause
2372 I<secs> seconds after starting each ssh. I<secs> can be less than 1
second.
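E.g. (I<server.example.com> is a placeholder):

  parallel --sshdelay 0.2 -S server.example.com echo ::: a b c

pauses 0.2 seconds between each new ssh connection, which can help if
the remote B<sshd> limits the rate of new connections.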
2376 =item B<-S> I<[@hostgroups/][ncpus/]sshlogin[,[@hostgroups/][ncpus/]sshlogin[,...]]>
2378 =item B<-S> I<@hostgroup>
2380 =item B<--sshlogin> I<[@hostgroups/][ncpus/]sshlogin[,[@hostgroups/][ncpus/]sshlogin[,...]]>
2382 =item B<--sshlogin> I<@hostgroup>
2384 Distribute jobs to remote computers. The jobs will be run on a list of
2385 remote computers.
2387 If I<hostgroups> is given, the I<sshlogin> will be added to that
2388 hostgroup. Multiple hostgroups are separated by '+'. The I<sshlogin>
2389 will always be added to a hostgroup named the same as I<sshlogin>.
2391 If only the I<@hostgroup> is given, only the sshlogins in that
2392 hostgroup will be used. Multiple I<@hostgroup> can be given.
2394 GNU B<parallel> will determine the number of CPUs on the remote
2395 computers and run the number of jobs as specified by B<-j>. If the
number I<ncpus> is given GNU B<parallel> will use this number as the
number of CPUs on the host. Normally I<ncpus> will not be needed.
2400 An I<sshlogin> is of the form:
2402 [sshcommand [options]] [username@]hostname
2404 The sshlogin must not require a password (B<ssh-agent>,
2405 B<ssh-copy-id>, and B<sshpass> may help with that).
2407 The sshlogin ':' is special, it means 'no ssh' and will therefore run
2408 on the local computer.
The sshlogin '..' is special: it reads sshlogins from
~/.parallel/sshloginfile or $XDG_CONFIG_HOME/parallel/sshloginfile.

The sshlogin '-' is special, too: it reads sshlogins from stdin
(standard input).
2416 To specify more sshlogins separate the sshlogins by comma, newline (in
2417 the same string), or repeat the options multiple times.
2419 For examples: see B<--sshloginfile>.
2421 The remote host must have GNU B<parallel> installed.
2423 B<--sshlogin> is known to cause problems with B<-m> and B<-X>.
2425 B<--sshlogin> is often used with B<--transferfile>, B<--return>,
2426 B<--cleanup>, and B<--trc>.
2429 =item B<--sshloginfile> I<filename>
2431 =item B<--slf> I<filename>
2433 File with sshlogins. The file consists of sshlogins on separate
2434 lines. Empty lines and lines starting with '#' are ignored. Example:
2436 server.example.com
2437 username@server2.example.com
2438 8/my-8-cpu-server.example.com
2439 2/my_other_username@my-dualcore.example.net
2440 # This server has SSH running on port 2222
2441 ssh -p 2222 server.example.net
2442 4/ssh -p 2222 quadserver.example.net
2443 # Use a different ssh program
2444 myssh -p 2222 -l myusername hexacpu.example.net
2445 # Use a different ssh program with default number of CPUs
2446 //usr/local/bin/myssh -p 2222 -l myusername hexacpu
2447 # Use a different ssh program with 6 CPUs
2448 6//usr/local/bin/myssh -p 2222 -l myusername hexacpu
2449 # Assume 16 CPUs on the local computer
2450 16/:
2451 # Put server1 in hostgroup1
2452 @hostgroup1/server1
2453 # Put myusername@server2 in hostgroup1+hostgroup2
2454 @hostgroup1+hostgroup2/myusername@server2
2455 # Force 4 CPUs and put 'ssh -p 2222 server3' in hostgroup1
2456 @hostgroup1/4/ssh -p 2222 server3
2458 When using a different ssh program the last argument must be the hostname.
2460 Multiple B<--sshloginfile> are allowed.
GNU B<parallel> will first look for the file in the current dir; if
that fails it looks for the file in ~/.parallel.

The sshloginfile '..' is special: it reads sshlogins from
~/.parallel/sshloginfile.

The sshloginfile '.' is special: it reads sshlogins from
/etc/parallel/sshloginfile.

The sshloginfile '-' is special, too: it reads sshlogins from stdin
(standard input).
If the sshloginfile is changed it will be re-read when a job finishes,
though at most once per second. This makes it possible to add and
2476 remove hosts while running.
2478 This can be used to have a daemon that updates the sshloginfile to
2479 only contain servers that are up:
  cp original.slf tmp2.slf
  while [ 1 ] ; do
    nice parallel --nonall -j0 -k --slf original.slf \
      --tag echo | perl -pe 's/\t$//' > tmp.slf
    if diff tmp.slf tmp2.slf; then
      mv tmp.slf tmp2.slf
    fi
    sleep 10
  done &
  parallel --slf tmp2.slf ...
2493 =item B<--slotreplace> I<replace-str>
2495 Use the replacement string I<replace-str> instead of B<{%}> for
2496 job slot number.
2499 =item B<--silent>
2501 Silent. The job to be run will not be printed. This is the default.
2502 Can be reversed with B<-v>.
2505 =item B<--tty>
2507 Open terminal tty. If GNU B<parallel> is used for starting a program
2508 that accesses the tty (such as an interactive program) then this
2509 option may be needed. It will default to starting only one job at a
2510 time (i.e. B<-j1>), not buffer the output (i.e. B<-u>), and it will
2511 open a tty for the job.
2513 You can of course override B<-j1> and B<-u>.
2515 Using B<--tty> unfortunately means that GNU B<parallel> cannot kill
2516 the jobs (with B<--timeout>, B<--memfree>, or B<--halt>). This is due
2517 to GNU B<parallel> giving each child its own process group, which is
then killed. Process groups are dependent on the tty.
2521 =item B<--tag> (beta testing)
2523 Tag lines with arguments. Each output line will be prepended with the
2524 arguments and TAB (\t). When combined with B<--onall> or B<--nonall>
2525 the lines will be prepended with the sshlogin instead.
2527 B<--tag> is ignored when using B<-u>.
2530 =item B<--tagstring> I<str> (beta testing)
2532 Tag lines with a string. Each output line will be prepended with
2533 I<str> and TAB (\t). I<str> can contain replacement strings such as
2534 B<{}>.
2536 B<--tagstring> is ignored when using B<-u>, B<--onall>, and B<--nonall>.
2539 =item B<--tee>
2541 Pipe all data to all jobs. Used with B<--pipe>/B<--pipepart> and
2542 B<:::>.
2544 seq 1000 | parallel --pipe --tee -v wc {} ::: -w -l -c
2546 How many numbers in 1..1000 contain 0..9, and how many bytes do they
2547 fill:
2549 seq 1000 | parallel --pipe --tee --tag \
2550 'grep {1} | wc {2}' ::: {0..9} ::: -l -c
2552 How many words contain a..z and how many bytes do they fill?
2554 parallel -a /usr/share/dict/words --pipepart --tee --tag \
2555 'grep {1} | wc {2}' ::: {a..z} ::: -l -c
2558 =item B<--termseq> I<sequence>
2560 Termination sequence. When a job is killed due to B<--timeout>,
2561 B<--memfree>, B<--halt>, or abnormal termination of GNU B<parallel>,
2562 I<sequence> determines how the job is killed. The default is:
2564 TERM,200,TERM,100,TERM,50,KILL,25
2566 which sends a TERM signal, waits 200 ms, sends another TERM signal,
2567 waits 100 ms, sends another TERM signal, waits 50 ms, sends a KILL
2568 signal, waits 25 ms, and exits. GNU B<parallel> detects if a process
2569 dies before the waiting time is up.
2572 =item B<--tmpdir> I<dirname>
2574 Directory for temporary files. GNU B<parallel> normally buffers output
2575 into temporary files in /tmp. By setting B<--tmpdir> you can use a
2576 different dir for the files. Setting B<--tmpdir> is equivalent to
2577 setting $TMPDIR.
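E.g. to buffer in RAM instead of /tmp (assuming I</dev/shm> exists on
your system):

  parallel --tmpdir /dev/shm seq ::: 1000000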
2580 =item B<--tmux> (Long beta testing)
2582 Use B<tmux> for output. Start a B<tmux> session and run each job in a
2583 window in that session. No other output will be produced.
2586 =item B<--tmuxpane> (Long beta testing)
2588 Use B<tmux> for output but put output into panes in the first window.
Useful if you want to monitor the progress of fewer than 100
concurrent jobs.
2593 =item B<--timeout> I<duration>
2595 Time out for command. If the command runs for longer than I<duration>
2596 seconds it will get killed as per B<--termseq>.
If I<duration> is followed by a % then the timeout will dynamically be
computed as a percentage of the median runtime of successful
jobs. Only values > 100% will make sense.
2602 I<duration> is normally in seconds, but can be floats postfixed with
2603 B<s>, B<m>, B<h>, or B<d> which would multiply the float by 1, 60,
2604 3600, or 86400. Thus these are equivalent: B<--timeout 100000> and
2605 B<--timeout 1d3.5h16.6m4s>.
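Two small illustrations:

  parallel --timeout 5 'sleep {}; echo {}' ::: 2 4 6 8
  parallel --timeout 300% 'sleep {}; echo {}' ::: 2 2 2 2 10

In the first line the jobs sleeping 6 and 8 seconds are killed; in the
second, jobs running much longer than 3 times the median runtime of
the completed jobs are killed.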
2608 =item B<--verbose>
2610 =item B<-t>
2612 Print the job to be run on stderr (standard error).
2614 See also B<-v>, B<-p>.
2617 =item B<--transfer>
2619 Transfer files to remote computers. Shorthand for: B<--transferfile {}>.
2622 =item B<--transferfile> I<filename>
2624 =item B<--tf> I<filename>
2626 B<--transferfile> is used with B<--sshlogin> to transfer files to the
2627 remote computers. The files will be transferred using B<rsync> and
2628 will be put relative to the default work dir. If the path contains /./
2629 the remaining path will be relative to the work dir. E.g.
2631 echo foo/bar.txt | parallel --transferfile {} \
2632 --sshlogin server.example.com wc
2634 This will transfer the file I<foo/bar.txt> to the computer
2635 I<server.example.com> to the file I<$HOME/foo/bar.txt> before running
2636 B<wc foo/bar.txt> on I<server.example.com>.
2638 echo /tmp/foo/bar.txt | parallel --transferfile {} \
2639 --sshlogin server.example.com wc
2641 This will transfer the file I</tmp/foo/bar.txt> to the computer
2642 I<server.example.com> to the file I</tmp/foo/bar.txt> before running
2643 B<wc /tmp/foo/bar.txt> on I<server.example.com>.
2645 echo /tmp/./foo/bar.txt | parallel --transferfile {} \
2646 --sshlogin server.example.com wc {= s:.*/./:./: =}
2648 This will transfer the file I</tmp/foo/bar.txt> to the computer
2649 I<server.example.com> to the file I<foo/bar.txt> before running
2650 B<wc ./foo/bar.txt> on I<server.example.com>.
2652 B<--transferfile> is often used with B<--return> and B<--cleanup>. A
2653 shorthand for B<--transferfile {}> is B<--transfer>.
2655 B<--transferfile> is ignored when used with B<--sshlogin :> or when
2656 not used with B<--sshlogin>.
2659 =item B<--trc> I<filename>
2661 Transfer, Return, Cleanup. Shorthand for:
2663 B<--transferfile {}> B<--return> I<filename> B<--cleanup>
2666 =item B<--trim> <n|l|r|lr|rl>
2668 Trim white space in input.
2670 =over 4
2672 =item n
2674 No trim. Input is not modified. This is the default.
2676 =item l
2678 Left trim. Remove white space from start of input. E.g. " a bc " -> "a bc ".
2680 =item r
2682 Right trim. Remove white space from end of input. E.g. " a bc " -> " a bc".
2684 =item lr
2686 =item rl
2688 Both trim. Remove white space from both start and end of input. E.g. "
2689 a bc " -> "a bc". This is the default if B<--colsep> is used.
2691 =back
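E.g.:

  parallel --trim lr echo "[{}]" ::: "  a bc  "
  # prints: [a bc]   (without --trim: [  a bc  ])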
2694 =item B<--ungroup>
2696 =item B<-u>
2698 Ungroup output. Output is printed as soon as possible and bypasses
GNU B<parallel> internal processing. This may cause output from
different commands to be mixed, and should thus only be used if you do
not care about the output. Compare these:
2703 seq 4 | parallel -j0 \
2704 'sleep {};echo -n start{};sleep {};echo {}end'
2705 seq 4 | parallel -u -j0 \
2706 'sleep {};echo -n start{};sleep {};echo {}end'
2708 It also disables B<--tag>. GNU B<parallel> outputs faster with
2709 B<-u>. Compare the speeds of these:
2711 parallel seq ::: 300000000 >/dev/null
2712 parallel -u seq ::: 300000000 >/dev/null
2713 parallel --line-buffer seq ::: 300000000 >/dev/null
2715 Can be reversed with B<--group>.
2717 See also: B<--line-buffer> B<--group>
2720 =item B<--extensionreplace> I<replace-str>
2722 =item B<--er> I<replace-str>
2724 Use the replacement string I<replace-str> instead of B<{.}> for input
2725 line without extension.
2728 =item B<--use-sockets-instead-of-threads>
2730 =item B<--use-cores-instead-of-threads>
2732 =item B<--use-cpus-instead-of-cores> (obsolete)
2734 Determine how GNU B<parallel> counts the number of CPUs. GNU
2735 B<parallel> uses this number when the number of jobslots is computed
2736 relative to the number of CPUs (e.g. 100% or +1).
2738 CPUs can be counted in three different ways:
2740 =over 8
2742 =item sockets
2744 The number of filled CPU sockets (i.e. the number of physical chips).
2746 =item cores
2748 The number of physical cores (i.e. the number of physical compute
2749 cores).
2751 =item threads
2753 The number of hyperthreaded cores (i.e. the number of virtual
2754 cores - with some of them possibly being hyperthreaded)
2756 =back
2758 Normally the number of CPUs is computed as the number of CPU
2759 threads. With B<--use-sockets-instead-of-threads> or
2760 B<--use-cores-instead-of-threads> you can force it to be computed as
2761 the number of filled sockets or number of cores instead.
2763 Most users will not need these options.
2765 B<--use-cpus-instead-of-cores> is a (misleading) alias for
2766 B<--use-sockets-instead-of-threads> and is kept for backwards
2767 compatibility.
2770 =item B<-v>
2772 Verbose. Print the job to be run on stdout (standard output). Can be reversed
2773 with B<--silent>. See also B<-t>.
2775 Use B<-v> B<-v> to print the wrapping ssh command when running remotely.
2778 =item B<--version>
2780 =item B<-V>
Print the version of GNU B<parallel> and exit.
2785 =item B<--workdir> I<mydir>
2787 =item B<--wd> I<mydir>
2789 Files transferred using B<--transferfile> and B<--return> will be
2790 relative to I<mydir> on remote computers, and the command will be
2791 executed in the dir I<mydir>.
2793 The special I<mydir> value B<...> will create working dirs under
2794 B<~/.parallel/tmp/> on the remote computers. If B<--cleanup> is given
2795 these dirs will be removed.
2797 The special I<mydir> value B<.> uses the current working dir. If the
2798 current working dir is beneath your home dir, the value B<.> is
2799 treated as the relative path to your home dir. This means that if your
2800 home dir is different on remote computers (e.g. if your login is
2801 different) the relative path will still be relative to your home dir.
2803 To see the difference try:
2805 parallel -S server pwd ::: ""
2806 parallel --wd . -S server pwd ::: ""
2807 parallel --wd ... -S server pwd ::: ""
2809 I<mydir> can contain GNU B<parallel>'s replacement strings.
2812 =item B<--wait>
2814 Wait for all commands to complete.
2816 Used with B<--semaphore> or B<--sqlmaster>.
2818 See also B<man sem>.
2821 =item B<-X>
2823 Multiple arguments with context replace. Insert as many arguments as
2824 the command line length permits. If multiple jobs are being run in
2825 parallel: distribute the arguments evenly among the jobs. Use B<-j1>
2826 to avoid this.
2828 If B<{}> is not used the arguments will be appended to the line. If
2829 B<{}> is used as part of a word (like I<pic{}.jpg>) then the whole
2830 word will be repeated. If B<{}> is used multiple times each B<{}> will
2831 be replaced with the arguments.
2833 Normally B<-X> will do the right thing, whereas B<-m> can give
2834 unexpected results if B<{}> is used as part of a word.
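A small illustration of the difference:

  seq 4 | parallel -j1 -X echo pic{}.jpg
  # pic1.jpg pic2.jpg pic3.jpg pic4.jpg
  seq 4 | parallel -j1 -m echo pic{}.jpg
  # pic1 2 3 4.jpg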
2836 Support for B<-X> with B<--sshlogin> is limited and may fail.
2838 See also B<-m>.
2841 =item B<--exit>
2843 =item B<-x>
2845 Exit if the size (see the B<-s> option) is exceeded.
2848 =back
2850 =head1 EXAMPLE: Working as xargs -n1. Argument appending
GNU B<parallel> can work similarly to B<xargs -n1>.
2854 To compress all html files using B<gzip> run:
2856 find . -name '*.html' | parallel gzip --best
2858 If the file names may contain a newline use B<-0>. Substitute FOO BAR with
2859 FUBAR in all files in this dir and subdirs:
2861 find . -type f -print0 | \
2862 parallel -q0 perl -i -pe 's/FOO BAR/FUBAR/g'
2864 Note B<-q> is needed because of the space in 'FOO BAR'.
2867 =head1 EXAMPLE: Simple network scanner
2869 B<prips> can generate IP-addresses from CIDR notation. With GNU
2870 B<parallel> you can build a simple network scanner to see which
2871 addresses respond to B<ping>:
2873 prips 130.229.16.0/20 | \
2874 parallel --timeout 2 -j0 \
2875 'ping -c 1 {} >/dev/null && echo {}' 2>/dev/null
2878 =head1 EXAMPLE: Reading arguments from command line
2880 GNU B<parallel> can take the arguments from command line instead of
2881 stdin (standard input). To compress all html files in the current dir
2882 using B<gzip> run:
2884 parallel gzip --best ::: *.html
2886 To convert *.wav to *.mp3 using LAME running one process per CPU run:
2888 parallel lame {} -o {.}.mp3 ::: *.wav
2891 =head1 EXAMPLE: Inserting multiple arguments
2893 When moving a lot of files like this: B<mv *.log destdir> you will
2894 sometimes get the error:
2896 bash: /bin/mv: Argument list too long
2898 because there are too many files. You can instead do:
2900 ls | grep -E '\.log$' | parallel mv {} destdir
This will run B<mv> for each file. It can be done faster if B<mv> gets
as many arguments as will fit on the line:
2905 ls | grep -E '\.log$' | parallel -m mv {} destdir
2907 In many shells you can also use B<printf>:
2909 printf '%s\0' *.log | parallel -0 -m mv {} destdir
2912 =head1 EXAMPLE: Context replace
2914 To remove the files I<pict0000.jpg> .. I<pict9999.jpg> you could do:
2916 seq -w 0 9999 | parallel rm pict{}.jpg
2918 You could also do:
2920 seq -w 0 9999 | perl -pe 's/(.*)/pict$1.jpg/' | parallel -m rm
The first will run B<rm> 10000 times, while the last will only run
B<rm> as many times as needed to keep the command line length short
enough to avoid B<Argument list too long> (it typically runs 1-2 times).
2926 You could also run:
2928 seq -w 0 9999 | parallel -X rm pict{}.jpg
This will also only run B<rm> as many times as needed to keep the
command line length short enough.
2934 =head1 EXAMPLE: Compute intensive jobs and substitution
2936 If ImageMagick is installed this will generate a thumbnail of a jpg
2937 file:
2939 convert -geometry 120 foo.jpg thumb_foo.jpg
2941 This will run with number-of-cpus jobs in parallel for all jpg files
2942 in a directory:
2944 ls *.jpg | parallel convert -geometry 120 {} thumb_{}
2946 To do it recursively use B<find>:
2948 find . -name '*.jpg' | \
2949 parallel convert -geometry 120 {} {}_thumb.jpg
Notice how the argument has to start with B<{}> as B<{}> will include the path
2952 (e.g. running B<convert -geometry 120 ./foo/bar.jpg
2953 thumb_./foo/bar.jpg> would clearly be wrong). The command will
2954 generate files like ./foo/bar.jpg_thumb.jpg.
2956 Use B<{.}> to avoid the extra .jpg in the file name. This command will
2957 make files like ./foo/bar_thumb.jpg:
2959 find . -name '*.jpg' | \
2960 parallel convert -geometry 120 {} {.}_thumb.jpg
2963 =head1 EXAMPLE: Substitution and redirection
2965 This will generate an uncompressed version of .gz-files next to the .gz-file:
2967 parallel zcat {} ">"{.} ::: *.gz
2969 Quoting of > is necessary to postpone the redirection. Another
2970 solution is to quote the whole command:
2972 parallel "zcat {} >{.}" ::: *.gz
2974 Other special shell characters (such as * ; $ > < | >> <<) also need
2975 to be put in quotes, as they may otherwise be interpreted by the shell
2976 and not given to GNU B<parallel>.
2979 =head1 EXAMPLE: Composed commands
2981 A job can consist of several commands. This will print the number of
2982 files in each directory:
2984 ls | parallel 'echo -n {}" "; ls {}|wc -l'
2986 To put the output in a file called <name>.dir:
2988 ls | parallel '(echo -n {}" "; ls {}|wc -l) >{}.dir'
2990 Even small shell scripts can be run by GNU B<parallel>:
2992 find . | parallel 'a={}; name=${a##*/};' \
2993 'upper=$(echo "$name" | tr "[:lower:]" "[:upper:]");'\
2994 'echo "$name - $upper"'
2996 ls | parallel 'mv {} "$(echo {} | tr "[:upper:]" "[:lower:]")"'
2998 Given a list of URLs, list all URLs that fail to download. Print the
2999 line number and the URL.
3001 cat urlfile | parallel "wget {} 2>/dev/null || grep -n {} urlfile"
3003 Create a mirror directory with the same filenames except all files and
3004 symlinks are empty files.
3006 cp -rs /the/source/dir mirror_dir
3007 find mirror_dir -type l | parallel -m rm {} '&&' touch {}
3009 Find the files in a list that do not exist
3011 cat file_list | parallel 'if [ ! -e {} ] ; then echo {}; fi'
3014 =head1 EXAMPLE: Composed command with perl replacement string
You have a bunch of files. You want them sorted into dirs. The dir of
each file should be named after the first letter of the file name.
3019 parallel 'mkdir -p {=s/(.).*/$1/=}; mv {} {=s/(.).*/$1/=}' ::: *
3022 =head1 EXAMPLE: Composed command with multiple input sources
3024 You have a dir with files named as 24 hours in 5 minute intervals:
00:00, 00:05, 00:10 .. 23:55. You want to find the missing files:
3027 parallel [ -f {1}:{2} ] "||" echo {1}:{2} does not exist \
3028 ::: {00..23} ::: {00..55..5}
3031 =head1 EXAMPLE: Calling Bash functions
3033 If the composed command is longer than a line, it becomes hard to
3034 read. In Bash you can use functions. Just remember to B<export -f> the
3035 function.
  doit() {
    echo Doing it for $1
    sleep 2
    echo Done with $1
  }
  export -f doit
  parallel doit ::: 1 2 3

  doubleit() {
    echo Doing it for $1 $2
    sleep 2
    echo Done with $1 $2
  }
  export -f doubleit
  parallel doubleit ::: 1 2 3 ::: a b
3053 To do this on remote servers you need to transfer the function using
3054 B<--env>:
3056 parallel --env doit -S server doit ::: 1 2 3
3057 parallel --env doubleit -S server doubleit ::: 1 2 3 ::: a b
3059 If your environment (aliases, variables, and functions) is small you
3060 can copy the full environment without having to B<export -f>
3061 anything. See B<env_parallel>.
3064 =head1 EXAMPLE: Function tester
3066 To test a program with different parameters:
  tester() {
    if (eval "$@") >&/dev/null; then
      perl -e 'printf "\033[30;102m[ OK ]\033[0m @ARGV\n"' "$@"
    else
      perl -e 'printf "\033[30;101m[FAIL]\033[0m @ARGV\n"' "$@"
    fi
  }
  export -f tester
3076 parallel tester my_program ::: arg1 arg2
3077 parallel tester exit ::: 1 0 2 0
3079 If B<my_program> fails a red FAIL will be printed followed by the failing
3080 command; otherwise a green OK will be printed followed by the command.
3083 =head1 EXAMPLE: Log rotate
3085 Log rotation renames a logfile to an extension with a higher number:
3086 log.1 becomes log.2, log.2 becomes log.3, and so on. The oldest log is
3087 removed. To avoid overwriting files the process starts backwards from
3088 the high number to the low number. This will keep 10 old versions of
3089 the log:
3091 seq 9 -1 1 | parallel -j1 mv log.{} log.'{= $_++ =}'
3092 mv log log.1
3095 =head1 EXAMPLE: Removing file extension when processing files
3097 When processing files removing the file extension using B<{.}> is
3098 often useful.
3100 Create a directory for each zip-file and unzip it in that dir:
3102 parallel 'mkdir {.}; cd {.}; unzip ../{}' ::: *.zip
3104 Recompress all .gz files in current directory using B<bzip2> running 1
3105 job per CPU in parallel:
3107 parallel "zcat {} | bzip2 >{.}.bz2 && rm {}" ::: *.gz
3109 Convert all WAV files to MP3 using LAME:
3111 find sounddir -type f -name '*.wav' | parallel lame {} -o {.}.mp3
Put all converted files in the same directory:
3115 find sounddir -type f -name '*.wav' | \
3116 parallel lame {} -o mydir/{/.}.mp3
3119 =head1 EXAMPLE: Removing strings from the argument
If you have a directory with tar.gz files and want these extracted in
the corresponding dir (e.g. foo.tar.gz will be extracted in the dir
foo) you can do:
3125 parallel --plus 'mkdir {..}; tar -C {..} -xf {}' ::: *.tar.gz
3127 If you want to remove a different ending, you can use {%string}:
3129 parallel --plus echo {%_demo} ::: mycode_demo keep_demo_here
3131 You can also remove a starting string with {#string}
3133 parallel --plus echo {#demo_} ::: demo_mycode keep_demo_here
3135 To remove a string anywhere you can use regular expressions with
3136 {/regexp/replacement} and leave the replacement empty:
3138 parallel --plus echo {/demo_/} ::: demo_mycode remove_demo_here
3141 =head1 EXAMPLE: Download 24 images for each of the past 30 days
3143 Let us assume a website stores images like:
3145 http://www.example.com/path/to/YYYYMMDD_##.jpg
3147 where YYYYMMDD is the date and ## is the number 01-24. This will
3148 download images for the past 30 days:
  getit() {
    date=$(date -d "today -$1 days" +%Y%m%d)
    num=$2
    echo wget http://www.example.com/path/to/${date}_${num}.jpg
  }
  export -f getit
3157 parallel getit ::: $(seq 30) ::: $(seq -w 24)
3159 B<$(date -d "today -$1 days" +%Y%m%d)> will give the dates in
3160 YYYYMMDD with B<$1> days subtracted.
3163 =head1 EXAMPLE: Download world map from NASA
3165 NASA provides tiles to download on earthdata.nasa.gov. Download tiles
3166 for Blue Marble world map and create a 10240x20480 map.
3168 base=https://map1a.vis.earthdata.nasa.gov/wmts-geo/wmts.cgi
3169 service="SERVICE=WMTS&REQUEST=GetTile&VERSION=1.0.0"
3170 layer="LAYER=BlueMarble_ShadedRelief_Bathymetry"
3171 set="STYLE=&TILEMATRIXSET=EPSG4326_500m&TILEMATRIX=5"
3172 tile="TILEROW={1}&TILECOL={2}"
3173 format="FORMAT=image%2Fjpeg"
3174 url="$base?$service&$layer&$set&$tile&$format"
3176 parallel -j0 -q wget "$url" -O {1}_{2}.jpg ::: {0..19} ::: {0..39}
3177 parallel eval convert +append {}_{0..39}.jpg line{}.jpg ::: {0..19}
3178 convert -append line{0..19}.jpg world.jpg
3181 =head1 EXAMPLE: Download Apollo-11 images from NASA using jq
Search NASA using their API to get JSON for images related to 'apollo
11' that have 'moon landing' in the description.
3186 The search query returns JSON containing URLs to JSON containing
collections of pictures. One of the pictures in each of these
collections is I<large>.
3190 B<wget> is used to get the JSON for the search query. B<jq> is then
3191 used to extract the URLs of the collections. B<parallel> then calls
3192 B<wget> to get each collection, which is passed to B<jq> to extract
3193 the URLs of all images. B<grep> filters out the I<large> images, and
3194 B<parallel> finally uses B<wget> to fetch the images.
3196 base="https://images-api.nasa.gov/search"
3197 q="q=apollo 11"
3198 description="description=moon landing"
3199 media_type="media_type=image"
3200 wget -O - "$base?$q&$description&$media_type" |
3201 jq -r .collection.items[].href |
3202 parallel wget -O - |
3203 jq -r .[] |
3204 grep large |
3205 parallel wget
3208 =head1 EXAMPLE: Download video playlist in parallel
B<youtube-dl> is an excellent tool to download videos. It cannot,
however, download videos in parallel. This takes a playlist and
downloads 10 videos in parallel.
3214 url='youtu.be/watch?v=0wOf2Fgi3DE&list=UU_cznB5YZZmvAmeq7Y3EriQ'
3215 export url
3216 youtube-dl --flat-playlist "https://$url" |
3217 parallel --tagstring {#} --lb -j10 \
3218 youtube-dl --playlist-start {#} --playlist-end {#} '"https://$url"'
3221 =head1 EXAMPLE: Prepend last modified date (ISO8601) to file name
3223 parallel mv {} '{= $a=pQ($_); $b=$_;' \
3224 '$_=qx{date -r "$a" +%FT%T}; chomp; $_="$_ $b" =}' ::: *
3226 B<{=> and B<=}> mark a perl expression. B<pQ> perl-quotes the
3227 string. B<date +%FT%T> is the date in ISO8601 with time.
3229 =head1 EXAMPLE: Save output in ISO8601 dirs
3231 Save output from B<ps aux> every second into dirs named
3232 yyyy-mm-ddThh:mm:ss+zz:zz.
3234 seq 1000 | parallel -N0 -j1 --delay 1 \
3235 --results '{= $_=`date -Isec`; chomp=}/' ps aux
3238 =head1 EXAMPLE: Digital clock with "blinking" :
The : in a digital clock blinks. To make every other line have a ':'
and the rest a ' ', a perl expression is used to look at the 3rd input
source. If the value modulo 2 is 1, use ":", otherwise use " ":
3244 parallel -k echo {1}'{=3 $_=$_%2?":":" "=}'{2}{3} \
3245 ::: {0..12} ::: {0..5} ::: {0..9}
3248 =head1 EXAMPLE: Aggregating content of files
3250 This:
3252 parallel --header : echo x{X}y{Y}z{Z} \> x{X}y{Y}z{Z} \
3253 ::: X {1..5} ::: Y {01..10} ::: Z {1..5}
3255 will generate the files x1y01z1 .. x5y10z5. If you want to aggregate
3256 the output grouping on x and z you can do this:
3258 parallel eval 'cat {=s/y01/y*/=} > {=s/y01//=}' ::: *y01*
3260 For all values of x and z it runs commands like:
3262 cat x1y*z1 > x1z1
3264 So you end up with x1z1 .. x5z5 each containing the content of all
3265 values of y.
3268 =head1 EXAMPLE: Breadth first parallel web crawler/mirrorer
The script below will crawl and mirror a URL in parallel. It first
downloads pages that are 1 click down, then 2 clicks down, then 3;
instead of the normal depth first, where the first link on each page
is fetched first.
3275 Run like this:
3277 PARALLEL=-j100 ./parallel-crawl http://gatt.org.yeslab.org/
3279 Remove the B<wget> part if you only want a web crawler.
3281 It works by fetching a page from a list of URLs and looking for links
3282 in that page that are within the same starting URL and that have not
already been seen. These links are added to a new queue. When all the
pages from the list are done, the new queue is moved to the list of
URLs and the process is started over until no unseen links are found.
3287 #!/bin/bash
3289 # E.g. http://gatt.org.yeslab.org/
3290 URL=$1
3291 # Stay inside the start dir
3292 BASEURL=$(echo $URL | perl -pe 's:#.*::; s:(//.*/)[^/]*:$1:')
3293 URLLIST=$(mktemp urllist.XXXX)
3294 URLLIST2=$(mktemp urllist.XXXX)
3295 SEEN=$(mktemp seen.XXXX)
3297 # Spider to get the URLs
3298 echo $URL >$URLLIST
3299 cp $URLLIST $SEEN
3301 while [ -s $URLLIST ] ; do
3302 cat $URLLIST |
3303 parallel lynx -listonly -image_links -dump {} \; \
3304 wget -qm -l1 -Q1 {} \; echo Spidered: {} \>\&2 |
3305 perl -ne 's/#.*//; s/\s+\d+.\s(\S+)$/$1/ and
3306 do { $seen{$1}++ or print }' |
3307 grep -F $BASEURL |
3308 grep -v -x -F -f $SEEN | tee -a $SEEN > $URLLIST2
3309 mv $URLLIST2 $URLLIST
3310 done
3312 rm -f $URLLIST $URLLIST2 $SEEN
3315 =head1 EXAMPLE: Process files from a tar file while unpacking
3317 If the files to be processed are in a tar file then unpacking one file
3318 and processing it immediately may be faster than first unpacking all
3319 files.
3321 tar xvf foo.tgz | perl -ne 'print $l;$l=$_;END{print $l}' | \
3322 parallel echo
3324 The Perl one-liner is needed to make sure the file is complete before
3325 handing it to GNU B<parallel>.
3328 =head1 EXAMPLE: Rewriting a for-loop and a while-read-loop
3330 for-loops like this:
3332 (for x in `cat list` ; do
3333 do_something $x
3334 done) | process_output
3336 and while-read-loops like this:
3338 cat list | (while read x ; do
3339 do_something $x
3340 done) | process_output
3342 can be written like this:
3344 cat list | parallel do_something | process_output
For example: Find which host name in a list has IP address 1.2.3.4:
3348 cat hosts.txt | parallel -P 100 host | grep 1.2.3.4
3350 If the processing requires more steps the for-loop like this:
3352 (for x in `cat list` ; do
3353 no_extension=${x%.*};
3354 do_step1 $x scale $no_extension.jpg
3355 do_step2 <$x $no_extension
3356 done) | process_output
3358 and while-loops like this:
3360 cat list | (while read x ; do
3361 no_extension=${x%.*};
3362 do_step1 $x scale $no_extension.jpg
3363 do_step2 <$x $no_extension
3364 done) | process_output
3366 can be written like this:
3368 cat list | parallel "do_step1 {} scale {.}.jpg ; do_step2 <{} {.}" |\
3369 process_output
3371 If the body of the loop is bigger, it improves readability to use a function:
3373 (for x in `cat list` ; do
3374 do_something $x
3375 [... 100 lines that do something with $x ...]
3376 done) | process_output
3378 cat list | (while read x ; do
3379 do_something $x
3380 [... 100 lines that do something with $x ...]
3381 done) | process_output
3383 can both be rewritten as:
  doit() {
    x=$1
    do_something $x
    [... 100 lines that do something with $x ...]
  }
  export -f doit
3391 cat list | parallel doit
3393 =head1 EXAMPLE: Rewriting nested for-loops
3395 Nested for-loops like this:
3397 (for x in `cat xlist` ; do
3398 for y in `cat ylist` ; do
3399 do_something $x $y
3400 done
3401 done) | process_output
3403 can be written like this:
3405 parallel do_something {1} {2} :::: xlist ylist | process_output
3407 Nested for-loops like this:
3409 (for colour in red green blue ; do
3410 for size in S M L XL XXL ; do
3411 echo $colour $size
3412 done
3413 done) | sort
3415 can be written like this:
3417 parallel echo {1} {2} ::: red green blue ::: S M L XL XXL | sort
3420 =head1 EXAMPLE: Finding the lowest difference between files
3422 B<diff> is good for finding differences in text files. B<diff | wc -l>
3423 gives an indication of the size of the difference. To find the
3424 differences between all files in the current dir do:
3426 parallel --tag 'diff {1} {2} | wc -l' ::: * ::: * | sort -nk3
3428 This way it is possible to see if some files are closer to other
3429 files.
3432 =head1 EXAMPLE: for-loops with column names
When doing multiple nested for-loops it can be easier to keep track of
the loop variable if it is named instead of just having a number. Use
B<--header :> to let the first argument be a named alias for the
positional replacement string:
3439 parallel --header : echo {colour} {size} \
3440 ::: colour red green blue ::: size S M L XL XXL
3442 This also works if the input file is a file with columns:
3444 cat addressbook.tsv | \
3445 parallel --colsep '\t' --header : echo {Name} {E-mail address}
3448 =head1 EXAMPLE: All combinations in a list
3450 GNU B<parallel> makes all combinations when given two lists.
3452 To make all combinations in a single list with unique values, you
3453 repeat the list and use replacement string B<{choose_k}>:
3455 parallel --plus echo {choose_k} ::: A B C D ::: A B C D
3457 parallel --plus echo 2{2choose_k} 1{1choose_k} ::: A B C D ::: A B C D
3459 B<{choose_k}> works for any number of input sources:
3461 parallel --plus echo {choose_k} ::: A B C D ::: A B C D ::: A B C D
3464 =head1 EXAMPLE: From a to b and b to c
3466 Assume you have input like:
  aardvark
  babble
  cab
  dab
  each
3474 and want to run combinations like:
3476 aardvark babble
3477 babble cab
3478 cab dab
3479 dab each
3481 If the input is in the file in.txt:
3483 parallel echo {1} - {2} ::::+ <(head -n -1 in.txt) <(tail -n +2 in.txt)
3485 If the input is in the array $a here are two solutions:
3487 seq $((${#a[@]}-1)) | \
3488 env_parallel --env a echo '${a[{=$_--=}]} - ${a[{}]}'
3489 parallel echo {1} - {2} ::: "${a[@]::${#a[@]}-1}" :::+ "${a[@]:1}"
3492 =head1 EXAMPLE: Count the differences between all files in a dir
3494 Using B<--results> the results are saved in /tmp/diffcount*.
3496 parallel --results /tmp/diffcount "diff -U 0 {1} {2} | \
3497 tail -n +3 |grep -v '^@'|wc -l" ::: * ::: *
3499 To see the difference between file A and file B look at the file
3500 '/tmp/diffcount/1/A/2/B'.
3503 =head1 EXAMPLE: Speeding up fast jobs
3505 Starting a job on the local machine takes around 10 ms. This can be a
3506 big overhead if the job takes very few ms to run. Often you can group
3507 small jobs together using B<-X> which will make the overhead less
3508 significant. Compare the speed of these:
3510 seq -w 0 9999 | parallel touch pict{}.jpg
3511 seq -w 0 9999 | parallel -X touch pict{}.jpg
3513 If your program cannot take multiple arguments, then you can use GNU
3514 B<parallel> to spawn multiple GNU B<parallel>s:
3516 seq -w 0 9999999 | \
3517 parallel -j10 -q -I,, --pipe parallel -j0 touch pict{}.jpg
3519 If B<-j0> normally spawns 252 jobs, then the above will try to spawn
3520 2520 jobs. On a normal GNU/Linux system you can spawn 32000 jobs using
3521 this technique with no problems. To raise the 32000 jobs limit raise
3522 /proc/sys/kernel/pid_max to 4194303.
3524 If you do not need GNU B<parallel> to have control over each job (so
3525 no need for B<--retries> or B<--joblog> or similar), then it can be
3526 even faster if you can generate the command lines and pipe those to a
3527 shell. So if you can do this:
3529 mygenerator | sh
3531 Then that can be parallelized like this:
3533 mygenerator | parallel --pipe --block 10M sh
3535 E.g.
  mygenerator() {
    seq 10000000 | perl -pe 'print "echo This is fast job number "';
  }
  mygenerator | parallel --pipe --block 10M sh
The overhead is 100000 times smaller, namely around 100 nanoseconds
per job.
3546 =head1 EXAMPLE: Using shell variables
3548 When using shell variables you need to quote them correctly as they
3549 may otherwise be interpreted by the shell.
3551 Notice the difference between:
3553 ARR=("My brother's 12\" records are worth <\$\$\$>"'!' Foo Bar)
3554 parallel echo ::: ${ARR[@]} # This is probably not what you want
3556 and:
3558 ARR=("My brother's 12\" records are worth <\$\$\$>"'!' Foo Bar)
3559 parallel echo ::: "${ARR[@]}"
3561 When using variables in the actual command that contains special
3562 characters (e.g. space) you can quote them using B<'"$VAR"'> or using
3563 "'s and B<-q>:
3565 VAR="My brother's 12\" records are worth <\$\$\$>"
3566 parallel -q echo "$VAR" ::: '!'
3567 export VAR
3568 parallel echo '"$VAR"' ::: '!'
3570 If B<$VAR> does not contain ' then B<"'$VAR'"> will also work
3571 (and does not need B<export>):
3573 VAR="My 12\" records are worth <\$\$\$>"
3574 parallel echo "'$VAR'" ::: '!'
3576 If you use them in a function you just quote as you normally would do:
3578 VAR="My brother's 12\" records are worth <\$\$\$>"
3579 export VAR
3580 myfunc() { echo "$VAR" "$1"; }
3581 export -f myfunc
3582 parallel myfunc ::: '!'
3585 =head1 EXAMPLE: Group output lines
3587 When running jobs that output data, you often do not want the output
3588 of multiple jobs to run together. GNU B<parallel> defaults to grouping
3589 the output of each job, so the output is printed when the job
3590 finishes. If you want full lines to be printed while the job is
3591 running you can use B<--line-buffer>. If you want output to be
3592 printed as soon as possible you can use B<-u>.
3594 Compare the output of:
3596 parallel wget --limit-rate=100k \
3597 https://ftpmirror.gnu.org/parallel/parallel-20{}0822.tar.bz2 \
3598 ::: {12..16}
3599 parallel --line-buffer wget --limit-rate=100k \
3600 https://ftpmirror.gnu.org/parallel/parallel-20{}0822.tar.bz2 \
3601 ::: {12..16}
3602 parallel -u wget --limit-rate=100k \
3603 https://ftpmirror.gnu.org/parallel/parallel-20{}0822.tar.bz2 \
3604 ::: {12..16}
3606 =head1 EXAMPLE: Tag output lines
3608 GNU B<parallel> groups the output lines, but it can be hard to see
3609 where the different jobs begin. B<--tag> prepends the argument to make
3610 that more visible:
3612 parallel --tag wget --limit-rate=100k \
3613 https://ftpmirror.gnu.org/parallel/parallel-20{}0822.tar.bz2 \
3614 ::: {12..16}
3616 B<--tag> works with B<--line-buffer> but not with B<-u>:
3618 parallel --tag --line-buffer wget --limit-rate=100k \
3619 https://ftpmirror.gnu.org/parallel/parallel-20{}0822.tar.bz2 \
3620 ::: {12..16}
3622 Check the uptime of the servers in I<~/.parallel/sshloginfile>:
3624 parallel --tag -S .. --nonall uptime
3627 =head1 EXAMPLE: Colorize output
3629 Give each job a new color. Most terminals support ANSI colors with the
3630 escape code "\033[30;3Xm" where 0 <= X <= 7:
3632 seq 10 | \
3633 parallel --tagstring '\033[30;3{=$_=++$::color%8=}m' seq {}
3634 parallel --rpl '{color} $_="\033[30;3".(++$::color%8)."m"' \
3635 --tagstring {color} seq {} ::: {1..10}
3637 To get rid of the initial \t (which comes from B<--tagstring>):
3639 ... | perl -pe 's/\t//'
3642 =head1 EXAMPLE: Keep order of output same as order of input
3644 Normally the output of a job will be printed as soon as it
3645 completes. Sometimes you want the order of the output to remain the
3646 same as the order of the input. This is often important, if the output
3647 is used as input for another system. B<-k> will make sure the order of
3648 output will be in the same order as input even if later jobs end
3649 before earlier jobs.
3651 Append a string to every line in a text file:
3653 cat textfile | parallel -k echo {} append_string
3655 If you remove B<-k> some of the lines may come out in the wrong order.
3657 Another example is B<traceroute>:
3659 parallel traceroute ::: qubes-os.org debian.org freenetproject.org
3661 will give traceroute of qubes-os.org, debian.org and
3662 freenetproject.org, but it will be sorted according to which job
3663 completed first.
3665 To keep the order the same as input run:
3667 parallel -k traceroute ::: qubes-os.org debian.org freenetproject.org
3669 This will make sure the traceroute to qubes-os.org will be printed
3670 first.
3672 A bit more complex example is downloading a huge file in chunks in
3673 parallel: Some internet connections will deliver more data if you
3674 download files in parallel. For downloading files in parallel see:
3675 "EXAMPLE: Download 10 images for each of the past 30 days". But if you
3676 are downloading a big file you can download the file in chunks in
3677 parallel.
3679 To download byte 10000000-19999999 you can use B<curl>:
3681 curl -r 10000000-19999999 http://example.com/the/big/file >file.part
3683 To download a 1 GB file we need 100 10MB chunks downloaded and
3684 combined in the correct order.
3686 seq 0 99 | parallel -k curl -r \
3687 {}0000000-{}9999999 http://example.com/the/big/file > file
3690 =head1 EXAMPLE: Parallel grep
3692 B<grep -r> greps recursively through directories. On multicore CPUs
3693 GNU B<parallel> can often speed this up.
3695 find . -type f | parallel -k -j150% -n 1000 -m grep -H -n STRING {}
3697 This will run 1.5 jobs per CPU, and give 1000 arguments to B<grep>.
3700 =head1 EXAMPLE: Grepping n lines for m regular expressions.
3702 The simplest solution to grep a big file for a lot of regexps is:
3704 grep -f regexps.txt bigfile
3706 Or if the regexps are fixed strings:
3708 grep -F -f regexps.txt bigfile
3710 There are 3 limiting factors: CPU, RAM, and disk I/O.
3712 RAM is easy to measure: If the B<grep> process takes up most of your
3713 free memory (e.g. when running B<top>), then RAM is a limiting factor.
3715 CPU is also easy to measure: If the B<grep> takes >90% CPU in B<top>,
3716 then the CPU is a limiting factor, and parallelization will speed this up.
3719 It is harder to see if disk I/O is the limiting factor, and depending
3720 on the disk system it may be faster or slower to parallelize. The only
3721 way to know for certain is to test and measure.
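A rough way to test is to time a serial run against a parallelized run and compare (a sketch, assuming I<bigfile> and I<regexps.txt> from above; the parallel run may print matches in a different order):

  time grep -f regexps.txt bigfile >/dev/null
  time parallel --pipepart --block -1 -a bigfile grep -f regexps.txt \
    >/dev/null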
3724 =head2 Limiting factor: RAM
3726 The normal B<grep -f regexps.txt bigfile> works no matter the size of
3727 bigfile, but if regexps.txt is so big it cannot fit into memory, then
3728 you need to split this.
3730 B<grep -F> takes around 100 bytes of RAM and B<grep> takes about 500
3731 bytes of RAM per 1 byte of regexp. So if regexps.txt is 1% of your
3732 RAM, then it may be too big.
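As a rough sketch of this rule of thumb, you can estimate the RAM needed from the size of I<regexps.txt>:

  # ~100 bytes of RAM per byte of regexp for grep -F
  echo $(( $(wc -c < regexps.txt) * 100 )) bytes
  # ~500 bytes of RAM per byte of regexp for normal grep
  echo $(( $(wc -c < regexps.txt) * 500 )) bytes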
3734 If you can convert your regexps into fixed strings do that. E.g. if
3735 the lines you are looking for in bigfile all look like:
3737 ID1 foo bar baz Identifier1 quux
3738 fubar ID2 foo bar baz Identifier2
3740 then your regexps.txt can be converted from:
3742 ID1.*Identifier1
3743 ID2.*Identifier2
3745 into:
3747 ID1 foo bar baz Identifier1
3748 ID2 foo bar baz Identifier2
3750 This way you can use B<grep -F> which takes around 80% less memory and
3751 is much faster.
3753 If it still does not fit in memory you can do this:
3755 parallel --pipepart -a regexps.txt --block 1M grep -Ff - -n bigfile | \
3756 sort -un | perl -pe 's/^\d+://'
3758 The 1M should be your free memory divided by the number of CPU threads and
3759 divided by 200 for B<grep -F> and by 1000 for normal B<grep>. On
3760 GNU/Linux you can do:
3762 free=$(awk '/^((Swap)?Cached|MemFree|Buffers):/ { sum += $2 }
3763 END { print sum }' /proc/meminfo)
3764 percpu=$((free / 200 / $(parallel --number-of-threads)))k
3766 parallel --pipepart -a regexps.txt --block $percpu --compress \
3767 grep -F -f - -n bigfile | \
3768 sort -un | perl -pe 's/^\d+://'
3770 If you can live with duplicated lines and wrong order, it is faster to do:
3772 parallel --pipepart -a regexps.txt --block $percpu --compress \
3773 grep -F -f - bigfile
3775 =head2 Limiting factor: CPU
3777 If the CPU is the limiting factor parallelization should be done on
3778 the regexps:
3780 cat regexp.txt | parallel --pipe -L1000 --roundrobin --compress \
3781 grep -f - -n bigfile | \
3782 sort -un | perl -pe 's/^\d+://'
3784 The command will start one B<grep> per CPU and read I<bigfile> one
3785 time per CPU, but as that is done in parallel, all reads except the
3786 first will be cached in RAM. Depending on the size of I<regexp.txt> it
3787 may be faster to use B<--block 10m> instead of B<-L1000>.
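For example, the same command reading 10 MB blocks (a sketch; the best block size depends on your I<regexp.txt>):

  cat regexp.txt | parallel --pipe --block 10m --roundrobin --compress \
    grep -f - -n bigfile | \
    sort -un | perl -pe 's/^\d+://'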
3789 Some storage systems perform better when reading multiple chunks in
3790 parallel. This is true for some RAID systems and for some network file
3791 systems. To parallelize the reading of I<bigfile>:
3793 parallel --pipepart --block 100M -a bigfile -k --compress \
3794 grep -f regexp.txt
3796 This will split I<bigfile> into 100MB chunks and run B<grep> on each of
3797 these chunks. To parallelize both reading of I<bigfile> and I<regexp.txt>
3798 combine the two using B<--fifo>:
3800 parallel --pipepart --block 100M -a bigfile --fifo cat regexp.txt \
3801 \| parallel --pipe -L1000 --roundrobin grep -f - {}
3803 If a line matches multiple regexps, the line may be duplicated.
3805 =head2 Bigger problem
3807 If the problem is too big to be solved by this, you are probably ready
3808 for Lucene.
3811 =head1 EXAMPLE: Using remote computers
3813 To run commands on a remote computer SSH needs to be set up and you
3814 must be able to log in without entering a password (The commands
3815 B<ssh-copy-id>, B<ssh-agent>, and B<sshpass> may help you do that).
3817 If you need to log in to a whole cluster, you typically do not want to
3818 accept the host key for every host. You want to accept them the first
3819 time and be warned if they are ever changed. To do that:
3821 # Add the servers to the sshloginfile
3822 (echo servera; echo serverb) > .parallel/my_cluster
3823 # Make sure .ssh/config exist
3824 touch .ssh/config
3825 cp .ssh/config .ssh/config.backup
3826 # Disable StrictHostKeyChecking temporarily
3827 (echo 'Host *'; echo StrictHostKeyChecking no) >> .ssh/config
3828 parallel --slf my_cluster --nonall true
3829 # Remove the disabling of StrictHostKeyChecking
3830 mv .ssh/config.backup .ssh/config
3832 The servers in B<.parallel/my_cluster> are now added in B<.ssh/known_hosts>.
3834 To run B<echo> on B<server.example.com>:
3836 seq 10 | parallel --sshlogin server.example.com echo
3838 To run commands on more than one remote computer run:
3840 seq 10 | parallel --sshlogin s1.example.com,s2.example.net echo
3842 Or:
3844 seq 10 | parallel --sshlogin server.example.com \
3845 --sshlogin server2.example.net echo
3847 If the login username is I<foo> on I<server2.example.net> use:
3849 seq 10 | parallel --sshlogin server.example.com \
3850 --sshlogin foo@server2.example.net echo
3852 If your list of hosts is I<server1-88.example.net> with login I<foo>:
3854 seq 10 | parallel -Sfoo@server{1..88}.example.net echo
3856 To distribute the commands to a list of computers, make a file
3857 I<mycomputers> with all the computers:
3859 server.example.com
3860 foo@server2.example.com
3861 server3.example.com
3863 Then run:
3865 seq 10 | parallel --sshloginfile mycomputers echo
3867 To include the local computer add the special sshlogin ':' to the list:
3869 server.example.com
3870 foo@server2.example.com
3871 server3.example.com
3872 :
3874 GNU B<parallel> will try to determine the number of CPUs on each of
3875 the remote computers, and run one job per CPU - even if the remote
3876 computers do not have the same number of CPUs.
3878 If the number of CPUs on the remote computers is not identified
3879 correctly the number of CPUs can be added in front. Here the computer
3880 has 8 CPUs.
3882 seq 10 | parallel --sshlogin 8/server.example.com echo
3885 =head1 EXAMPLE: Transferring of files
3887 To recompress gzipped files with B<bzip2> using a remote computer run:
3889 find logs/ -name '*.gz' | \
3890 parallel --sshlogin server.example.com \
3891 --transfer "zcat {} | bzip2 -9 >{.}.bz2"
3893 This will list the .gz-files in the I<logs> directory and all
3894 directories below. Then it will transfer the files to the
3895 corresponding directory in I<$HOME/logs> on
3896 I<server.example.com>. On I<server.example.com> the file will be recompressed
3897 using B<zcat> and B<bzip2> resulting in the corresponding file with
3898 I<.gz> replaced with I<.bz2>.
3900 If you want the resulting bz2-file to be transferred back to the local
3901 computer add I<--return {.}.bz2>:
3903 find logs/ -name '*.gz' | \
3904 parallel --sshlogin server.example.com \
3905 --transfer --return {.}.bz2 "zcat {} | bzip2 -9 >{.}.bz2"
3907 After the recompressing is done the I<.bz2>-file is transferred back to
3908 the local computer and put next to the original I<.gz>-file.
3910 If you want to delete the transferred files on the remote computer add
3911 I<--cleanup>. This will remove both the file transferred to the remote
3912 computer and the files transferred from the remote computer:
3914 find logs/ -name '*.gz' | \
3915 parallel --sshlogin server.example.com \
3916 --transfer --return {.}.bz2 --cleanup "zcat {} | bzip2 -9 >{.}.bz2"
3918 If you want to run on several computers add the computers to I<--sshlogin>
3919 either using ',' or multiple I<--sshlogin>:
3921 find logs/ -name '*.gz' | \
3922 parallel --sshlogin server.example.com,server2.example.com \
3923 --sshlogin server3.example.com \
3924 --transfer --return {.}.bz2 --cleanup "zcat {} | bzip2 -9 >{.}.bz2"
3926 You can add the local computer using I<--sshlogin :>. This will disable the
3927 removing and transferring for the local computer only:
3929 find logs/ -name '*.gz' | \
3930 parallel --sshlogin server.example.com,server2.example.com \
3931 --sshlogin server3.example.com \
3932 --sshlogin : \
3933 --transfer --return {.}.bz2 --cleanup "zcat {} | bzip2 -9 >{.}.bz2"
3935 Often I<--transfer>, I<--return> and I<--cleanup> are used together. They can be
3936 shortened to I<--trc>:
3938 find logs/ -name '*.gz' | \
3939 parallel --sshlogin server.example.com,server2.example.com \
3940 --sshlogin server3.example.com \
3941 --sshlogin : \
3942 --trc {.}.bz2 "zcat {} | bzip2 -9 >{.}.bz2"
3944 With the file I<mycomputers> containing the list of computers it becomes:
3946 find logs/ -name '*.gz' | parallel --sshloginfile mycomputers \
3947 --trc {.}.bz2 "zcat {} | bzip2 -9 >{.}.bz2"
3949 If the file I<~/.parallel/sshloginfile> contains the list of computers
3950 the special shorthand I<-S ..> can be used:
3952 find logs/ -name '*.gz' | parallel -S .. \
3953 --trc {.}.bz2 "zcat {} | bzip2 -9 >{.}.bz2"
3956 =head1 EXAMPLE: Distributing work to local and remote computers
3958 Convert *.mp3 to *.ogg running one process per CPU on local computer
3959 and server2:
3961 parallel --trc {.}.ogg -S server2,: \
3962 'mpg321 -w - {} | oggenc -q0 - -o {.}.ogg' ::: *.mp3
3965 =head1 EXAMPLE: Running the same command on remote computers
3967 To run the command B<uptime> on remote computers you can do:
3969 parallel --tag --nonall -S server1,server2 uptime
3971 B<--nonall> reads no arguments. If you have a list of jobs you want
3972 to run on each computer you can do:
3974 parallel --tag --onall -S server1,server2 echo ::: 1 2 3
3976 Remove B<--tag> if you do not want the sshlogin added before the
3977 output.
3979 If you have a lot of hosts use '-j0' to access more hosts in parallel.
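A sketch, assuming the hosts are listed in a hypothetical I<hosts.txt>:

  parallel -j0 --tag --nonall --sshloginfile hosts.txt uptime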
3982 =head1 EXAMPLE: Using remote computers behind NAT wall
3984 If the workers are behind a NAT wall, you need some trickery to get to
3985 them.
3987 If you can B<ssh> to a jumphost, and reach the workers from there,
3988 then the obvious solution would be this, but it B<does not work>:
3990 parallel --ssh 'ssh jumphost ssh' -S host1 echo ::: DOES NOT WORK
3992 It does not work because the command is dequoted by B<ssh> twice,
3993 whereas GNU B<parallel> only expects it to be dequoted once.
3995 So instead put this in B<~/.ssh/config>:
3997 Host host1 host2 host3
3998 ProxyCommand ssh jumphost.domain nc -w 1 %h 22
4000 It requires B<nc> (netcat) to be installed on jumphost. With this you
4001 can simply:
4003 parallel -S host1,host2,host3 echo ::: This does work
4005 =head2 No jumphost, but port forwards
4007 If there is no jumphost but each server has port 22 forwarded from the
4008 firewall (e.g. the firewall's port 22001 = port 22 on host1, 22002 = host2,
4009 22003 = host3) then you can use B<~/.ssh/config>:
4011 Host host1.v
4012 Port 22001
4013 Host host2.v
4014 Port 22002
4015 Host host3.v
4016 Port 22003
4017 Host *.v
4018 Hostname firewall
4020 And then use host{1..3}.v as normal hosts:
4022 parallel -S host1.v,host2.v,host3.v echo ::: a b c
4024 =head2 No jumphost, no port forwards
4026 If ports cannot be forwarded, you need some sort of VPN to traverse
4027 the NAT-wall. TOR is one option for that, as it is very easy to get
4028 working.
4030 You need to install TOR and set up a hidden service. In B<torrc> put:
4032 HiddenServiceDir /var/lib/tor/hidden_service/
4033 HiddenServicePort 22 127.0.0.1:22
4035 Then start TOR: B</etc/init.d/tor restart>
4037 The TOR hostname is now in B</var/lib/tor/hidden_service/hostname> and
4038 is something similar to B<izjafdceobowklhz.onion>. Now you simply
4039 prepend B<torsocks> to B<ssh>:
4041 parallel --ssh 'torsocks ssh' -S izjafdceobowklhz.onion \
4042 -S zfcdaeiojoklbwhz.onion,auclucjzobowklhi.onion echo ::: a b c
4044 If not all hosts are accessible through TOR:
4046 parallel -S 'torsocks ssh izjafdceobowklhz.onion,host2,host3' \
4047 echo ::: a b c
4049 See more B<ssh> tricks on https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Proxies_and_Jump_Hosts
4052 =head1 EXAMPLE: Parallelizing rsync
4054 B<rsync> is a great tool, but sometimes it will not fill up the
4055 available bandwidth. Running multiple B<rsync> in parallel can fix
4056 this.
4058 cd src-dir
4059 find . -type f |
4060 parallel -j10 -X rsync -zR -Ha ./{} fooserver:/dest-dir/
4062 Adjust B<-j10> until you find the optimal number.
4064 B<rsync -R> will create the needed subdirectories, so the files are
4065 not all put into a single dir. The B<./> is needed so the resulting command
4066 looks similar to:
4068 rsync -zR ././sub/dir/file fooserver:/dest-dir/
4070 The B</./> is what B<rsync -R> works on.
4072 If you are unable to push data, but need to pull them and the files
4073 are called digits.png (e.g. 000000.png) you might be able to do:
4075 seq -w 0 99 | parallel rsync -Havessh fooserver:src/*{}.png destdir/
4078 =head1 EXAMPLE: Use multiple inputs in one command
4080 Copy files like foo.es.ext to foo.ext:
4082 ls *.es.* | perl -pe 'print; s/\.es//' | parallel -N2 cp {1} {2}
4084 The perl command spits out 2 lines for each input. GNU B<parallel>
4085 takes 2 inputs (using B<-N2>) and replaces {1} and {2} with the inputs.
4087 Count in binary:
4089 parallel -k echo ::: 0 1 ::: 0 1 ::: 0 1 ::: 0 1 ::: 0 1 ::: 0 1
4091 Print the number on the opposing sides of a six sided die:
4093 parallel --link -a <(seq 6) -a <(seq 6 -1 1) echo
4094 parallel --link echo :::: <(seq 6) <(seq 6 -1 1)
4096 Convert files from all subdirs to PNG-files with consecutive numbers
4097 (useful for making input PNG's for B<ffmpeg>):
4099 parallel --link -a <(find . -type f | sort) \
4100 -a <(seq $(find . -type f|wc -l)) convert {1} {2}.png
4102 Alternative version:
4104 find . -type f | sort | parallel convert {} {#}.png
4107 =head1 EXAMPLE: Use a table as input
4109 Content of table_file.tsv:
4111 foo<TAB>bar
4112 baz <TAB> quux
4114 To run:
4116 cmd -o bar -i foo
4117 cmd -o quux -i baz
4119 you can run:
4121 parallel -a table_file.tsv --colsep '\t' cmd -o {2} -i {1}
4123 Note: The default for GNU B<parallel> is to remove the spaces around
4124 the columns. To keep the spaces:
4126 parallel -a table_file.tsv --trim n --colsep '\t' cmd -o {2} -i {1}
4129 =head1 EXAMPLE: Output to database
4131 GNU B<parallel> can output to a database table and a CSV-file:
4133 DBURL=csv:///%2Ftmp%2Fmy.csv
4134 DBTABLEURL=$DBURL/mytable
4135 parallel --sqlandworker $DBTABLEURL seq ::: {1..10}
4137 It is rather slow and takes up a lot of CPU time because GNU
4138 B<parallel> parses the whole CSV file for each update.
4140 A better approach is to use an SQLite database and then convert that to CSV:
4142 DBURL=sqlite3:///%2Ftmp%2Fmy.sqlite
4143 DBTABLEURL=$DBURL/mytable
4144 parallel --sqlandworker $DBTABLEURL seq ::: {1..10}
4145 sql $DBURL '.headers on' '.mode csv' 'SELECT * FROM mytable;'
4147 This takes around a second per job.
4149 If you have access to a real database system, such as PostgreSQL, it
4150 is even faster:
4152 DBURL=pg://user:pass@host/mydb
4153 DBTABLEURL=$DBURL/mytable
4154 parallel --sqlandworker $DBTABLEURL seq ::: {1..10}
4155 sql $DBURL \
4156 "COPY (SELECT * FROM mytable) TO stdout DELIMITER ',' CSV HEADER;"
4158 Or MySQL:
4160 DBURL=mysql://user:pass@host/mydb
4161 DBTABLEURL=$DBURL/mytable
4162 parallel --sqlandworker $DBTABLEURL seq ::: {1..10}
4163 sql -p -B $DBURL "SELECT * FROM mytable;" > mytable.tsv
4164 perl -pe 's/"/""/g; s/\t/","/g; s/^/"/; s/$/"/; s/\\\\/\\/g;
4165 s/\\t/\t/g; s/\\n/\n/g;' mytable.tsv
4168 =head1 EXAMPLE: Output to CSV-file for R
4170 If you have no need for the advanced job distribution control that a
4171 database provides, but you simply want output into a CSV file that you
4172 can read into R or LibreCalc, then you can use B<--results>:
4174 parallel --results my.csv seq ::: 10 20 30
4176 > mydf <- read.csv("my.csv");
4177 > print(mydf[2,])
4178 > write(as.character(mydf[2,c("Stdout")]),'')
4181 =head1 EXAMPLE: Use XML as input
4183 The show Aflyttet on Radio 24syv publishes an RSS feed with their audio
4184 podcasts on: http://arkiv.radio24syv.dk/audiopodcast/channel/4466232
4186 Using B<xpath> you can extract the URLs for 2019 and download them
4187 using GNU B<parallel>:
4189 wget -O - http://arkiv.radio24syv.dk/audiopodcast/channel/4466232 | \
4190 xpath -e "//pubDate[contains(text(),'2019')]/../enclosure/@url" | \
4191 parallel -u wget '{= s/ url="//; s/"//; =}'
4194 =head1 EXAMPLE: Run the same command 10 times
4196 If you want to run the same command with the same arguments 10 times
4197 in parallel you can do:
4199 seq 10 | parallel -n0 my_command my_args
4202 =head1 EXAMPLE: Working as cat | sh. Resource inexpensive jobs and evaluation
4204 GNU B<parallel> can work similar to B<cat | sh>.
4206 A resource inexpensive job is a job that takes very little CPU, disk
4207 I/O and network I/O. Ping is an example of a resource inexpensive
4208 job. wget is too - if the webpages are small.
4210 The content of the file jobs_to_run:
4212 ping -c 1 10.0.0.1
4213 wget http://example.com/status.cgi?ip=10.0.0.1
4214 ping -c 1 10.0.0.2
4215 wget http://example.com/status.cgi?ip=10.0.0.2
4216 ...
4217 ping -c 1 10.0.0.255
4218 wget http://example.com/status.cgi?ip=10.0.0.255
4220 To run 100 processes simultaneously do:
4222 parallel -j 100 < jobs_to_run
4224 As there is no I<command> the jobs will be evaluated by the shell.
4227 =head1 EXAMPLE: Processing a big file using more CPUs
4229 To process a big file or some output you can use B<--pipe> to split up
4230 the data into blocks and pipe the blocks into the processing program.
4232 If the program is B<gzip -9> you can do:
4234 cat bigfile | parallel --pipe --recend '' -k gzip -9 > bigfile.gz
4236 This will split B<bigfile> into blocks of 1 MB and pass that to B<gzip
4237 -9> in parallel. One B<gzip> will be run per CPU. The output of B<gzip
4238 -9> will be kept in order and saved to B<bigfile.gz>.
4240 B<gzip> works fine if the output is appended, but some processing does
4241 not work like that - for example sorting. For this GNU B<parallel> can
4242 put the output of each command into a file. This will sort a big file
4243 in parallel:
4245 cat bigfile | parallel --pipe --files sort |\
4246 parallel -Xj1 sort -m {} ';' rm {} >bigfile.sort
4248 Here B<bigfile> is split into blocks of around 1MB, each block ending
4249 in '\n' (which is the default for B<--recend>). Each block is passed
4250 to B<sort> and the output from B<sort> is saved into files. These
4251 files are passed to the second B<parallel> that runs B<sort -m> on the
4252 files before it removes the files. The output is saved to
4253 B<bigfile.sort>.
4255 GNU B<parallel>'s B<--pipe> maxes out at around 100 MB/s because every
4256 byte has to be copied through GNU B<parallel>. But if B<bigfile> is a
4257 real (seekable) file GNU B<parallel> can bypass the copying and send
4258 the parts directly to the program:
4260 parallel --pipepart --block 100m -a bigfile --files sort |\
4261 parallel -Xj1 sort -m {} ';' rm {} >bigfile.sort
4264 =head1 EXAMPLE: Grouping input lines
4266 When processing with B<--pipe> you may have lines grouped by a
4267 value. Here is I<my.csv>:
4269 Transaction Customer Item
4270 1 a 53
4271 2 b 65
4272 3 b 82
4273 4 c 96
4274 5 c 67
4275 6 c 13
4276 7 d 90
4277 8 d 43
4278 9 d 91
4279 10 d 84
4280 11 e 72
4281 12 e 102
4282 13 e 63
4283 14 e 56
4284 15 e 74
4286 Let us assume you want GNU B<parallel> to process each customer. In
4287 other words: You want all the transactions for a single customer to be
4288 treated as a single record.
4290 To do this we preprocess the data with a program that inserts a record
4291 separator before each customer (column 2 = $F[1]). Here we first make
4292 a 50 character random string, which we then use as the separator:
4294 sep=`perl -e 'print map { ("a".."z","A".."Z")[rand(52)] } (1..50);'`
4295 cat my.csv | \
4296 perl -ape '$F[1] ne $l and print "'$sep'"; $l = $F[1]' | \
4297 parallel --recend $sep --rrs --pipe -N1 wc
4299 If your program can process multiple customers replace B<-N1> with a
4300 reasonable B<--blocksize>.
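For example (a sketch; 50k is an arbitrary block size, tune it for your program):

  cat my.csv | \
    perl -ape '$F[1] ne $l and print "'$sep'"; $l = $F[1]' | \
    parallel --recend $sep --rrs --pipe --block 50k wc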
4303 =head1 EXAMPLE: Running more than 250 jobs workaround
4305 If you need to run a massive amount of jobs in parallel, then you will
4306 likely hit the filehandle limit which is often around 250 jobs. If you
4307 are super user you can raise the limit in /etc/security/limits.conf
4308 but you can also use this workaround. The filehandle limit is per
4309 process. That means that if you just spawn more GNU B<parallel>s then
4310 each of them can run 250 jobs. This will spawn up to 2500 jobs:
4312 cat myinput |\
4313 parallel --pipe -N 50 --roundrobin -j50 parallel -j50 your_prg
4315 This will spawn up to 62500 jobs (use with caution - you need 64 GB
4316 RAM to do this, and you may need to increase /proc/sys/kernel/pid_max):
4318 cat myinput |\
4319 parallel --pipe -N 250 --roundrobin -j250 parallel -j250 your_prg
4322 =head1 EXAMPLE: Working as mutex and counting semaphore
4324 The command B<sem> is an alias for B<parallel --semaphore>.
4326 A counting semaphore will allow a given number of jobs to be started
4327 in the background. When this number of jobs is running in the
4328 background, GNU B<sem> will wait for one of these to complete before
4329 starting another command. B<sem --wait> will wait for all jobs to
4330 complete.
4332 Run 10 jobs concurrently in the background:
4334 for i in *.log ; do
4335 echo $i
4336 sem -j10 gzip $i ";" echo done
4337 done
4338 sem --wait
4340 A mutex is a counting semaphore allowing only one job to run. This
4341 will edit the file I<myfile> and prepend it with lines containing the
4342 numbers 1 to 3.
4344 seq 3 | parallel sem sed -i -e '1i{}' myfile
4346 As I<myfile> can be very big it is important that only one process
4347 edits the file at a time.
4349 Name the semaphore to have multiple different semaphores active at the
4350 same time:
4352 seq 3 | parallel sem --id mymutex sed -i -e '1i{}' myfile
4355 =head1 EXAMPLE: Mutex for a script
4357 Assume a script is called from cron or from a web service, but only
4358 one instance can be run at a time. With B<sem> and B<--shebang-wrap>
4359 the script can be made to wait for other instances to finish. Here in
4360 B<bash>:
4362 #!/usr/bin/sem --shebang-wrap -u --id $0 --fg /bin/bash
4364 echo This will run
4365 sleep 5
4366 echo exclusively
4368 Here B<perl>:
4370 #!/usr/bin/sem --shebang-wrap -u --id $0 --fg /usr/bin/perl
4372 print "This will run ";
4373 sleep 5;
4374 print "exclusively\n";
4376 Here B<python>:
4378 #!/usr/local/bin/sem --shebang-wrap -u --id $0 --fg /usr/bin/python
4380 import time
4381 print "This will run ";
4382 time.sleep(5)
4383 print "exclusively";
4386 =head1 EXAMPLE: Start editor with filenames from stdin (standard input)
4388 You can use GNU B<parallel> to start interactive programs like emacs or vi:
4390 cat filelist | parallel --tty -X emacs
4391 cat filelist | parallel --tty -X vi
4393 If there are more files than will fit on a single command line, the
4394 editor will be started again with the remaining files.
4397 =head1 EXAMPLE: Running sudo
4399 B<sudo> requires a password to run a command as root. It caches the
4400 access, so you only need to enter the password again if you have not
4401 used B<sudo> for a while.
4403 The command:
4405 parallel sudo echo ::: This is a bad idea
4407 is no good, as you would be prompted for the sudo password for each of
4408 the jobs. You can either do:
4410 sudo echo This
4411 parallel sudo echo ::: is a good idea
4413 or:
4415 sudo parallel echo ::: This is a good idea
4417 This way you only have to enter the sudo password once.
4420 =head1 EXAMPLE: GNU Parallel as queue system/batch manager
4422 GNU B<parallel> can work as a simple job queue system or batch manager.
4423 The idea is to put the jobs into a file and have GNU B<parallel> read
4424 from that continuously. As GNU B<parallel> will stop at end of file we
4425 use B<tail> to continue reading:
4427 true >jobqueue; tail -n+0 -f jobqueue | parallel
4429 To submit your jobs to the queue:
4431 echo my_command my_arg >> jobqueue
4433 You can of course use B<-S> to distribute the jobs to remote
4434 computers:
4436 true >jobqueue; tail -n+0 -f jobqueue | parallel -S ..
4438 If you keep this running for a long time, jobqueue will grow. A way of
4439 removing the jobs already run is by making GNU B<parallel> stop when
4440 it hits a special value and then restart. To use B<--eof> to make GNU
4441 B<parallel> exit, B<tail> also needs to be forced to exit:
4443 true >jobqueue;
4444 while true; do
4445 tail -n+0 -f jobqueue |
4446 (parallel -E StOpHeRe -S ..; echo GNU Parallel is now done;
4447 perl -e 'while(<>){/StOpHeRe/ and last};print <>' jobqueue > j2;
4448 (seq 1000 >> jobqueue &);
4449 echo Done appending dummy data forcing tail to exit)
4450 echo tail exited;
4451 mv j2 jobqueue
4452 done
4454 In some cases you can run on more CPUs and computers during the night:
4456 # Day time
4457 echo 50% > jobfile
4458 cp day_server_list ~/.parallel/sshloginfile
4459 # Night time
4460 echo 100% > jobfile
4461 cp night_server_list ~/.parallel/sshloginfile
4462 tail -n+0 -f jobqueue | parallel --jobs jobfile -S ..
4464 GNU B<parallel> discovers if B<jobfile> or B<~/.parallel/sshloginfile>
4465 changes.
4467 There is a small issue when using GNU B<parallel> as queue
4468 system/batch manager: You have to submit JobSlot number of jobs before
4469 they will start, and after that you can submit one at a time, and the
4470 job will start immediately if free slots are available. Output from
4471 the running or completed jobs is held back and will only be printed
4472 when JobSlots more jobs have been started (unless you use --ungroup
4473 or --line-buffer, in which case the output from the jobs is printed
4474 immediately). E.g. if you have 10 jobslots then the output from the
4475 first completed job will only be printed when job 11 has started, and
4476 the output of the second completed job will only be printed when job
4477 12 has started.
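If you want the output as soon as a job completes, a sketch of the same queue using B<--line-buffer>:

  true >jobqueue; tail -n+0 -f jobqueue | parallel --line-buffer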
4480 =head1 EXAMPLE: GNU Parallel as dir processor
4482 If you have a dir in which users drop files that need to be processed
4483 you can do this on GNU/Linux (If you know what B<inotifywait> is
4484 called on other platforms file a bug report):
4486 inotifywait -qmre MOVED_TO -e CLOSE_WRITE --format %w%f my_dir |\
4487 parallel -u echo
4489 This will run the command B<echo> on each file put into B<my_dir> or
4490 subdirs of B<my_dir>.
4492 You can of course use B<-S> to distribute the jobs to remote
4493 computers:
4495 inotifywait -qmre MOVED_TO -e CLOSE_WRITE --format %w%f my_dir |\
4496 parallel -S .. -u echo
4498 If the files to be processed are in a tar file then unpacking one file
4499 and processing it immediately may be faster than first unpacking all
4500 files. Set up the dir processor as above and unpack into the dir.
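A sketch of that (I<process_file> and I<archive.tar> are stand-ins for your own command and tar file):

  # Watch my_dir and process each file as it appears
  inotifywait -qmre MOVED_TO -e CLOSE_WRITE --format %w%f my_dir |\
    parallel -u process_file &
  # Unpack into the watched dir
  tar -xf archive.tar -C my_dir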
4502 Using GNU B<parallel> as dir processor has the same limitations as
4503 using GNU B<parallel> as queue system/batch manager.
4506 =head1 EXAMPLE: Locate the missing package
4508 If you have downloaded source and tried compiling it, you may have seen:
4510 $ ./configure
4511 [...]
4512 checking for something.h... no
4513 configure: error: "libsomething not found"
4515 Often it is not obvious which package you should install to get that
4516 file. Debian has `apt-file` to search for a file. `tracefile` from
4517 https://gitlab.com/ole.tange/tangetools can tell which files a program
4518 tried to access. In this case we are interested in one of the last
4519 files:
4521 $ tracefile -un ./configure | tail | parallel -j0 apt-file search
4524 =head1 SPREADING BLOCKS OF DATA
4526 B<--round-robin>, B<--pipe-part>, B<--shard>, B<--bin> and
4527 B<--group-by> are all specialized versions of B<--pipe>.
4529 In the following I<n> is the number of jobslots given by B<--jobs>. A
4530 record starts with B<--recstart> and ends with B<--recend>. It is
4531 typically a full line. A chunk is a number of full records that is
4532 approximately the size of a block. A block can contain half records, a
4533 chunk cannot.
4535 B<--pipe> starts one job per chunk. It reads blocks from stdin
4536 (standard input). It finds a record end near a block border and passes
4537 a chunk to the program.
4539 B<--pipe-part> starts one job per chunk - just like normal
4540 B<--pipe>. It first finds record endings near all block borders in the
4541 file and then starts the jobs. By using B<--block -1> it will set the
4542 block size to 1/I<n> * size-of-file. Used this way it will start I<n>
4543 jobs in total.
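A minimal sketch, assuming a seekable I<bigfile>:

  parallel --pipepart --block -1 -a bigfile wc -l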
4545 B<--round-robin> starts I<n> jobs in total. It reads a block and
4546 passes a chunk to whichever job is ready to read. It does not parse
4547 the content except for identifying where a record ends to make sure it
4548 only passes full records.
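A minimal sketch, counting how many bytes each of 4 jobslots receives:

  cat bigfile | parallel --pipe --roundrobin -j4 wc -c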
4550 B<--shard> starts I<n> jobs in total. It parses each line to read the
4551 value in the given column. Based on this value the line is passed to
4552 one of the I<n> jobs. All lines having this value will be given to the
4553 same jobslot.
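A minimal sketch (the exact B<--shard> syntax, e.g. whether B<--colsep> is needed, is an assumption; here the value in column 1 is the shard key):

  seq 10 | parallel --pipe --shard 1 -j3 wc -l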
4555 B<--bin> works like B<--shard> but the value of the column is the
4556 jobslot number it will be passed to. If the value is bigger than I<n>,
4557 then I<n> will be subtracted from the value until the value is
4558 smaller than or equal to I<n>.
4560 B<--group-by> starts one job per chunk. Record borders are not given
4561 by B<--recend>/B<--recstart>. Instead a record is defined by a number
4562 of lines having the same value in a given column. So the value of a
4563 given column changes at a chunk border. With B<--pipe> every line is
4564 parsed, with B<--pipe-part> only a few lines are parsed to find the
4565 chunk border.
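A sketch, assuming I<my.csv> from "EXAMPLE: Grouping input lines" and assuming B<--colsep> is used to define the columns (the exact option interplay is an assumption):

  cat my.csv | parallel --pipe --colsep '\s+' --group-by 2 -kN1 wc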
4567 B<--group-by> can be combined with B<--round-robin> or B<--pipe-part>.
4569 =head1 QUOTING
4571 GNU B<parallel> is very liberal in quoting. You only need to quote
4572 characters that have special meaning in shell:
4574 ( ) $ ` ' " < > ; | \
4576 and depending on context these need to be quoted, too:
4578 ~ & # ! ? space * {
4580 Therefore most people will never need more quoting than putting '\'
4581 in front of the special characters.
4583 Often you can simply put \' around every ':
4585 perl -ne '/^\S+\s+\S+$/ and print $ARGV,"\n"' file
4587 can be quoted:
4589 parallel perl -ne \''/^\S+\s+\S+$/ and print $ARGV,"\n"'\' ::: file
4591 However, when you want to use a shell variable you need to quote the
4592 $-sign. Here is an example using $PARALLEL_SEQ. This variable is set
4593 by GNU B<parallel> itself, so the evaluation of the $ must be done by
4594 the sub shell started by GNU B<parallel>:
4596 seq 10 | parallel -N2 echo seq:\$PARALLEL_SEQ arg1:{1} arg2:{2}
4598 If the variable is set before GNU B<parallel> starts you can do this:
4600 VAR=this_is_set_before_starting
4601 echo test | parallel echo {} $VAR
4603 Prints: B<test this_is_set_before_starting>
4605 It is a little more tricky if the variable contains more than one space in a row:
4607 VAR="two spaces between each word"
4608 echo test | parallel echo {} \'"$VAR"\'
4610 Prints: B<test two spaces between each word>
4612 If the variable should not be evaluated by the shell starting GNU
4613 B<parallel> but be evaluated by the sub shell started by GNU
4614 B<parallel>, then you need to quote it:
4616 echo test | parallel VAR=this_is_set_after_starting \; echo {} \$VAR
4618 Prints: B<test this_is_set_after_starting>
4620 It is a little more tricky if the variable contains space:
4622 echo test |\
4623 parallel VAR='"two spaces between each word"' echo {} \'"$VAR"\'
4625 Prints: B<test two spaces between each word>
4627 $$ is the shell variable containing the process id of the shell. This
4628 will print the process id of the shell running GNU B<parallel>:
4630 seq 10 | parallel echo $$
4632 And this will print the process ids of the sub shells started by GNU
4633 B<parallel>:
4635 seq 10 | parallel echo \$\$
4637 If the special characters should not be evaluated by the sub shell
4638 then you need to protect them against evaluation from both the shell
4639 starting GNU B<parallel> and the sub shell:
4641 echo test | parallel echo {} \\\$VAR
4643 Prints: B<test $VAR>
4645 GNU B<parallel> can protect against evaluation by the sub shell by
4646 using -q:
4648 echo test | parallel -q echo {} \$VAR
4650 Prints: B<test $VAR>
4652 This is particularly useful if you have lots of quoting. If you want
4653 to run a perl script like this:
4655 perl -ne '/^\S+\s+\S+$/ and print $ARGV,"\n"' file
4657 It needs to be quoted like one of these:
4659 ls | parallel perl -ne '/^\\S+\\s+\\S+\$/\ and\ print\ \$ARGV,\"\\n\"'
4660 ls | parallel perl -ne \''/^\S+\s+\S+$/ and print $ARGV,"\n"'\'
4662 Notice how spaces, \'s, "'s, and $'s need to be quoted. GNU B<parallel>
4663 can do the quoting by using option -q:
4665 ls | parallel -q perl -ne '/^\S+\s+\S+$/ and print $ARGV,"\n"'
4667 However, this means you cannot make the sub shell interpret special
4668 characters. For example because of B<-q> this WILL NOT WORK:
4670 ls *.gz | parallel -q "zcat {} >{.}"
4671 ls *.gz | parallel -q "zcat {} | bzip2 >{.}.bz2"
4673 because > and | need to be interpreted by the sub shell.
4675 If you get errors like:
4677 sh: -c: line 0: syntax error near unexpected token
4678 sh: Syntax error: Unterminated quoted string
4679 sh: -c: line 0: unexpected EOF while looking for matching `''
4680 sh: -c: line 1: syntax error: unexpected end of file
4681 zsh:1: no matches found:
4683 then you might try using B<-q>.
4685 If you are using B<bash> process substitution like B<<(cat foo)> then
4686 you may try B<-q> and prepending I<command> with B<bash -c>:
4688 ls | parallel -q bash -c 'wc -c <(echo {})'
4690 Or for substituting output:
4692 ls | parallel -q bash -c \
4693 'tar c {} | tee >(gzip >{}.tar.gz) | bzip2 >{}.tar.bz2'
4695 B<Conclusion>: To avoid dealing with the quoting problems it may be
4696 easier just to write a small script or a function (remember to
4697 B<export -f> the function) and have GNU B<parallel> call that.
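A sketch of the function approach (I<recompress> is a hypothetical name; B<export -f> requires B<bash>):

  recompress() { zcat "$1" | bzip2 -9 > "${1%.gz}.bz2"; }
  export -f recompress
  ls *.gz | parallel recompress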
4700 =head1 LIST RUNNING JOBS
4702 If you want a list of the jobs currently running you can run:
4704 killall -USR1 parallel
4706 GNU B<parallel> will then print the currently running jobs on stderr
4707 (standard error).
4710 =head1 COMPLETE RUNNING JOBS BUT DO NOT START NEW JOBS
4712 If you regret starting a lot of jobs you can simply break GNU B<parallel> (Ctrl-C),
4713 but if you want to make sure you do not have half-completed jobs you
4714 should send the signal B<SIGHUP> to GNU B<parallel>:
4716 killall -HUP parallel
4718 This will tell GNU B<parallel> to not start any new jobs, but wait until
4719 the currently running jobs are finished before exiting.
4722 =head1 ENVIRONMENT VARIABLES
4724 =over 9
4726 =item $PARALLEL_HOME
4728 Dir where GNU B<parallel> stores config files, semaphores, and caches
4729 information between invocations. Default: $HOME/.parallel.
4731 =item $PARALLEL_PID
4733 The environment variable $PARALLEL_PID is set by GNU B<parallel> and
4734 is visible to the jobs started from GNU B<parallel>. This makes it
4735 possible for the jobs to communicate directly with GNU B<parallel>.
4736 Remember to quote the $, so it gets evaluated by the correct
4737 shell.
4739 B<Example:> If each of the jobs tests a solution and one of the jobs finds
4740 the solution the job can tell GNU B<parallel> not to start more jobs
4741 by: B<kill -HUP $PARALLEL_PID>. This only works on the local
4742 computer.
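A sketch (I<check_solution> is a hypothetical command; the single quotes make the sub shell, not the shell starting GNU B<parallel>, expand B<$PARALLEL_PID>):

  seq 1000 | parallel 'check_solution {} && kill -HUP $PARALLEL_PID'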
4745 =item $PARALLEL_RSYNC_OPTS
4747 Options to pass on to B<rsync>. Defaults to: -rlDzR.
4750 =item $PARALLEL_SHELL
4752 Use this shell for the commands run by GNU B<parallel>:
4754 =over 2
4756 =item *
4758 $PARALLEL_SHELL. If undefined use:
4760 =item *
4762 The shell that started GNU B<parallel>. If that cannot be determined:
4764 =item *
4766 $SHELL. If undefined use:
4768 =item *
4770 /bin/sh
4772 =back
4775 =item $PARALLEL_SSH
4777 GNU B<parallel> defaults to using B<ssh> for remote access. This can
4778 be overridden with $PARALLEL_SSH, which again can be overridden with
4779 B<--ssh>. It can also be set on a per server basis (see
4780 B<--sshlogin>).
4783 =item $PARALLEL_SSHLOGIN (beta testing)
4785 The environment variable $PARALLEL_SSHLOGIN is set by GNU B<parallel>
4786 and is visible to the jobs started from GNU B<parallel>. The value is
4787 the sshlogin line with number of cores removed. E.g.
4789 4//usr/bin/specialssh user@host
4791 becomes:
4793 /usr/bin/specialssh user@host
4796 =item $PARALLEL_SEQ
4798 $PARALLEL_SEQ will be set to the sequence number of the job
4799 running. Remember to quote the $, so it gets evaluated by the correct
4800 shell.
4802 B<Example:>
4804 seq 10 | parallel -N2 \
4805 echo seq:'$'PARALLEL_SEQ arg1:{1} arg2:{2}
4808 =item $PARALLEL_TMUX
4810 Path to B<tmux>. If unset the B<tmux> in $PATH is used.
4813 =item $TMPDIR
4815 Directory for temporary files. See: B<--tmpdir>.
4818 =item $PARALLEL
4820 The environment variable $PARALLEL will be used as default options for
4821 GNU B<parallel>. If the variable contains special shell characters
4822 (e.g. $, *, or space) then these need to be escaped with \.
4824 B<Example:>
4826 cat list | parallel -j1 -k -v ls
4827 cat list | parallel -j1 -k -v -S"myssh user@server" ls
4829 can be written as:
4831 cat list | PARALLEL="-kvj1" parallel ls
4832 cat list | PARALLEL='-kvj1 -S myssh\ user@server' \
4833 parallel echo
4835 Notice the \ in the middle is needed because 'myssh' and 'user@server'
4836 must be one argument.
4838 =back
4841 =head1 DEFAULT PROFILE (CONFIG FILE)
4843 The global configuration file /etc/parallel/config, followed by user
4844 configuration file ~/.parallel/config (formerly known as .parallelrc)
4845 will be read in turn if they exist. Lines starting with '#' will be
4846 ignored. The format can follow that of the environment variable
4847 $PARALLEL, but it is often easier to simply put each option on its own
4848 line.
4850 Options on the command line take precedence, followed by the
4851 environment variable $PARALLEL, user configuration file
4852 ~/.parallel/config, and finally the global configuration file
4853 /etc/parallel/config.
4855 Note that no file that is read for options, nor the environment
4856 variable $PARALLEL, may contain retired options such as B<--tollef>.
4858 =head1 PROFILE FILES
4860 If B<--profile> is set, GNU B<parallel> will read the profile from that
4861 file rather than the global or user configuration files. You can have
4862 multiple B<--profiles>.
4864 Profiles are searched for in B<~/.parallel>. If the name starts with
4865 B</> it is seen as an absolute path. If the name starts with B<./> it
4866 is seen as a relative path from current dir.
4868 Example: Profile for running a command on every sshlogin in
4869 ~/.parallel/sshloginfile and prepend the output with the sshlogin:
4871 echo --tag -S .. --nonall > ~/.parallel/n
4872 parallel -Jn uptime
4874 Example: Profile for running every command with B<-j-1> and B<nice>:
4876 echo -j-1 nice > ~/.parallel/nice_profile
4877 parallel -J nice_profile bzip2 -9 ::: *
4879 Example: Profile for running a perl script before every command:
4881 echo "perl -e '\$a=\$\$; print \$a,\" \",'\$PARALLEL_SEQ',\" \";';" \
4882 > ~/.parallel/pre_perl
4883 parallel -J pre_perl echo ::: *
4885 Note how the $ and " need to be quoted using \.
4887 Example: Profile for running distributed jobs with B<nice> on the
4888 remote computers:
4890 echo -S .. nice > ~/.parallel/dist
4891 parallel -J dist --trc {.}.bz2 bzip2 -9 ::: *
4894 =head1 EXIT STATUS
4896 Exit status depends on B<--halt-on-error> if one of these is used:
4897 success=X, success=Y%, fail=Y%.
4899 =over 6
4901 =item Z<>0
4903 All jobs ran without error. If success=X is used: X jobs ran without
4904 error. If success=Y% is used: Y% of the jobs ran without error.
4906 =item Z<>1-100
4908 Some of the jobs failed. The exit status gives the number of failed
4909 jobs. If Y% is used the exit status is the percentage of jobs that
4910 failed.
4912 =item Z<>101
4914 More than 100 jobs failed.
4916 =item Z<>255
4918 Other error.
4920 =item Z<>-1 (In joblog and SQL table)
4922 Killed by Ctrl-C, timeout, not enough memory or similar.
4924 =item Z<>-2 (In joblog and SQL table)
4926 skip() was called in B<{= =}>.
4928 =item Z<>-1000 (In SQL table)
4930 Job is ready to run (set by --sqlmaster).
4932 =item Z<>-1220 (In SQL table)
4934 Job is taken by worker (set by --sqlworker).
4936 =back
4938 If fail=1 is used, the exit status will be the exit status of the
4939 failing job.
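For example (a sketch): running four jobs of which two fail gives exit status 2:

  parallel ::: true false false true; echo $?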
4942 =head1 DIFFERENCES BETWEEN GNU Parallel AND ALTERNATIVES
4944 See: B<man parallel_alternatives>
4947 =head1 BUGS
4949 =head2 Quoting of newline
4951 Because of the way newline is quoted this will not work:
4953 echo 1,2,3 | parallel -vkd, "echo 'a{}b'"
4955 However, these will all work:
4957 echo 1,2,3 | parallel -vkd, echo a{}b
4958 echo 1,2,3 | parallel -vkd, "echo 'a'{}'b'"
4959 echo 1,2,3 | parallel -vkd, "echo 'a'"{}"'b'"
4962 =head2 Speed
4964 =head3 Startup
4966 GNU B<parallel> is slow at starting up - around 250 ms the first time
4967 and 150 ms after that.
4969 =head3 Job startup
4971 Starting a job on the local machine takes around 10 ms. This can be a
4972 big overhead if the job takes very few ms to run. Often you can group
4973 small jobs together using B<-X> which will make the overhead less
4974 significant. Or you can run multiple GNU B<parallel>s as described in
4975 B<EXAMPLE: Speeding up fast jobs>.
4977 =head3 SSH
4979 When using multiple computers GNU B<parallel> opens B<ssh> connections
4980 to them to figure out how many connections can be used reliably
4981 simultaneously (Namely SSHD's MaxStartups). This test is done for each
4982 host in serial, so if your B<--sshloginfile> contains many hosts it may
4983 be slow.
4985 If your jobs are short you may see that there are fewer jobs running
4986 on the remote systems than expected. This is due to time spent logging
4987 in and out. B<-M> may help here.
4989 =head3 Disk access
4991 A single disk can normally read data faster if it reads one file at a
4992 time instead of reading a lot of files in parallel, as this will avoid
4993 disk seeks. However, newer disk systems with multiple drives can read
4994 faster if reading from multiple files in parallel.
4996 If the jobs are of the form read-all-compute-all-write-all, so
4997 everything is read before anything is written, it may be faster to
4998 force only one disk access at a time:
5000 sem --id diskio cat file | compute | sem --id diskio cat > file
5002 If the jobs are of the form read-compute-write, so writing starts
5003 before all reading is done, it may be faster to force only one reader
5004 and writer at a time:
5006 sem --id read cat file | compute | sem --id write cat > file
5008 If the jobs are of the form read-compute-read-compute, it may be
5009 faster to run more jobs in parallel than the system has CPUs, as some
5010 of the jobs will be stuck waiting for disk access.
5012 =head2 --nice limits command length
5014 The current implementation of B<--nice> is too pessimistic in the max
5015 allowed command length. It only uses a little more than half of what
5016 it could. This affects B<-X> and B<-m>. If this becomes a real problem for
5017 you, file a bug-report.
5019 =head2 Aliases and functions do not work
5021 If you get:
5023 Can't exec "command": No such file or directory
5025 or
5027 open3: exec of by command failed
5029 or
5031 /bin/bash: command: command not found
5033 it may be because I<command> is not known, but it could also be
5034 because I<command> is an alias or a function. If it is a function you
5035 need to B<export -f> the function first or use B<env_parallel>. An
5036 alias will only work if you use B<env_parallel>.
5038 =head2 Database with MySQL fails randomly
5040 The B<--sql*> options may fail randomly with MySQL. This problem does
5041 not exist with PostgreSQL.
5044 =head1 REPORTING BUGS
5046 Report bugs to <bug-parallel@gnu.org> or
5047 https://savannah.gnu.org/bugs/?func=additem&group=parallel
5049 See a perfect bug report on
5050 https://lists.gnu.org/archive/html/bug-parallel/2015-01/msg00000.html
5052 Your bug report should always include:
5054 =over 2
5056 =item *
5058 The error message you get (if any). If the error message is not from
5059 GNU B<parallel> you need to show why you think GNU B<parallel> caused
5060 it.
5062 =item *
5064 The complete output of B<parallel --version>. If you are not running
5065 the latest released version (see http://ftp.gnu.org/gnu/parallel/) you
5066 should specify why you believe the problem is not fixed in that
5067 version.
5069 =item *
5071 A minimal, complete, and verifiable example (See description on
5072 http://stackoverflow.com/help/mcve).
5074 It should be a complete example that others can run that shows the problem
5075 including all files needed to run the example. This should preferably
5076 be small and simple, so try to remove as many options as possible. A
5077 combination of B<yes>, B<seq>, B<cat>, B<echo>, and B<sleep> can
5078 reproduce most errors. If your example requires large files, see if
5079 you can make them by something like B<seq 1000000> > B<file> or B<yes
5080 | head -n 10000000> > B<file>.
5082 If your example requires remote execution, see if you can use
5083 B<localhost> - maybe using another login.
5085 If you have access to a different system, test if the MCVE shows the
5086 problem on that system.
5088 =item *
5090 The output of your example. If your problem is not easily reproduced
5091 by others, the output might help them figure out the problem.
5093 =item *
5095 Whether you have watched the intro videos
5096 (http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1), walked
5097 through the tutorial (man parallel_tutorial), and read the EXAMPLE
5098 section in the man page (man parallel - search for EXAMPLE:).
5100 =back
5102 If you suspect the error is dependent on your environment or
5103 distribution, please see if you can reproduce the error on one of
5104 these VirtualBox images:
5105 http://sourceforge.net/projects/virtualboximage/files/
5106 http://www.osboxes.org/virtualbox-images/
5108 Specifying the name of your distribution is not enough as you may have
5109 installed software that is not in the VirtualBox images.
5111 If you cannot reproduce the error on any of the VirtualBox images
5112 above, see if you can build a VirtualBox image on which you can
5113 reproduce the error. If not, you should assume the debugging will be
5114 done through you. That will put more burden on you and it is extra
5115 important you give any information that may help. In general the
5116 problem will be fixed faster and with less work for you if you can
5117 reproduce the error on a VirtualBox image.
5120 =head1 AUTHOR
5122 When using GNU B<parallel> for a publication please cite:
5124 O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login:
5125 The USENIX Magazine, February 2011:42-47.
5127 This helps funding further development; and it won't cost you a cent.
5128 If you pay 10000 EUR you should feel free to use GNU Parallel without citing.
5130 Copyright (C) 2007-10-18 Ole Tange, http://ole.tange.dk
5132 Copyright (C) 2008-2010 Ole Tange, http://ole.tange.dk
5134 Copyright (C) 2010-2019 Ole Tange,
5135 http://ole.tange.dk and Free Software Foundation, Inc.
5137 Parts of the manual concerning B<xargs> compatibility are inspired by
5138 the manual of B<xargs> from GNU findutils 4.4.2.
5141 =head1 LICENSE
5143 This program is free software; you can redistribute it and/or modify
5144 it under the terms of the GNU General Public License as published by
5145 the Free Software Foundation; either version 3 of the License, or
5146 at your option any later version.
5148 This program is distributed in the hope that it will be useful,
5149 but WITHOUT ANY WARRANTY; without even the implied warranty of
5150 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
5151 GNU General Public License for more details.
5153 You should have received a copy of the GNU General Public License
5154 along with this program. If not, see <http://www.gnu.org/licenses/>.
5156 =head2 Documentation license I
5158 Permission is granted to copy, distribute and/or modify this documentation
5159 under the terms of the GNU Free Documentation License, Version 1.3 or
5160 any later version published by the Free Software Foundation; with no
5161 Invariant Sections, with no Front-Cover Texts, and with no Back-Cover
5162 Texts. A copy of the license is included in the file fdl.txt.
5164 =head2 Documentation license II
5166 You are free:
5168 =over 9
5170 =item B<to Share>
5172 to copy, distribute and transmit the work
5174 =item B<to Remix>
5176 to adapt the work
5178 =back
5180 Under the following conditions:
5182 =over 9
5184 =item B<Attribution>
5186 You must attribute the work in the manner specified by the author or
5187 licensor (but not in any way that suggests that they endorse you or
5188 your use of the work).
5190 =item B<Share Alike>
5192 If you alter, transform, or build upon this work, you may distribute
5193 the resulting work only under the same, similar or a compatible
5194 license.
5196 =back
5198 With the understanding that:
5200 =over 9
5202 =item B<Waiver>
5204 Any of the above conditions can be waived if you get permission from
5205 the copyright holder.
5207 =item B<Public Domain>
5209 Where the work or any of its elements is in the public domain under
5210 applicable law, that status is in no way affected by the license.
5212 =item B<Other Rights>
5214 In no way are any of the following rights affected by the license:
5216 =over 2
5218 =item *
5220 Your fair dealing or fair use rights, or other applicable
5221 copyright exceptions and limitations;
5223 =item *
5225 The author's moral rights;
5227 =item *
5229 Rights other persons may have either in the work itself or in
5230 how the work is used, such as publicity or privacy rights.
5232 =back
5234 =back
5236 =over 9
5238 =item B<Notice>
5240 For any reuse or distribution, you must make clear to others the
5241 license terms of this work.
5243 =back
5245 A copy of the full license is included in the file cc-by-sa.txt.
5248 =head1 DEPENDENCIES
5250 GNU B<parallel> uses Perl, and the Perl modules Getopt::Long,
5251 IPC::Open3, Symbol, IO::File, POSIX, and File::Temp. For remote usage
5252 it also uses rsync with ssh.
5255 =head1 SEE ALSO
5257 B<ssh>(1), B<ssh-agent>(1), B<sshpass>(1), B<ssh-copy-id>(1),
5258 B<rsync>(1), B<find>(1), B<xargs>(1), B<dirname>(1), B<make>(1),
5259 B<pexec>(1), B<ppss>(1), B<xjobs>(1), B<prll>(1), B<dxargs>(1),
5260 B<mdm>(1)
5262 =cut