modes["sh", "\"", "submodes"]="\\\\||\\$\\(";
Fangle is a tool for fangled literate programming. Newfangled is defined by TheFreeDictionary.com as New and often needlessly novel.
In this case, fangled means yet another not-so-new1. but improved ^1 method for literate programming.
Literate Programming has a long history, starting with the great Donald Knuth himself, whose literate programming tools seem to make use of as many escape sequences for semantic markup as TeX (also by Donald Knuth).
Norman Ramsey wrote the Noweb set of tools (notangle, noweave and noroots) and helpfully reduced the amount of magic character sequences to pretty much just <<, >> and @, and in doing so brought the wonders of literate programming within my reach.
While using the LyX editor for LaTeX editing I had various troubles with the noweb tools, some of which were my fault, some of which were noweb's fault and some of which were LyX's fault.
Noweb generally brought literate programming to the masses by removing some of the complexity of the original literate programming, but this would be of no advantage to me if the LyX/LaTeX combination brought more complications in their place.
Fangle was thus born (originally called Newfangle) as an awk replacement for notangle, adding some important features, like better integration with LyX and LaTeX (and later TeXmacs), multiple output format conversions, and fixing notangle bugs like indentation when using -L for line numbers.
Significantly, fangle is just one program which replaces various programs in Noweb. Noweave is done away with and implemented directly as LaTeX macros, and noroots is implemented as a function of the untangler fangle.
Fangle is written in awk for portability reasons, awk being available for most platforms. A Python version2. hasn't anyone implemented awk in python yet? ^2 was considered for the benefit of LyX, but a scheme version for TeXmacs will probably materialise first, as TeXmacs macro capabilities help make edit-time and format-time rendering of fangle chunks simple enough for my weak brain.
As an extension to many literate-programming styles, Fangle permits code chunks to take parameters and thus operate somewhat like C pre-processor macros, or like C++ templates. Named parameters (or even local variables in the caller's scope) are anticipated, as parameterized chunks, useful though they are, are hard to comprehend in the literate document.
Fangle is licensed under the GPL 3 (or later).
This doesn't mean that sources generated by fangle must be licensed under the GPL 3.
This doesn't mean that you can't use or distribute fangle with sources of an incompatible license, but it does mean you must make the source of fangle available too.
As fangle is currently written in awk, an interpreted language, this should not be too hard.
4a <gpl3-copyright[1](), lang=text> ≡
________________________________________________________________________
1 | fangle - fully featured notangle replacement in awk
3 | Copyright (C) 2009-2010 Sam Liddicott <sam@liddicott.com>
5 | This program is free software: you can redistribute it and/or modify
6 | it under the terms of the GNU General Public License as published by
7 | the Free Software Foundation, either version 3 of the License, or
8 | (at your option) any later version.
10 | This program is distributed in the hope that it will be useful,
11 | but WITHOUT ANY WARRANTY; without even the implied warranty of
12 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 | GNU General Public License for more details.
15 | You should have received a copy of the GNU General Public License
16 | along with this program. If not, see <http://www.gnu.org/licenses/>.
|________________________________________________________________________
1 Introduction to Literate Programming 11
2.2 Extracting roots 13
2.3 Formatting the document 13
3 Using Fangle with LaTeX 15
4 Using Fangle with LyX 17
4.1 Installing the LyX module 17
4.2 Obtaining a decent mono font 17
4.3 Formatting your LyX document 18
4.3.1 Customising the listing appearance 18
4.3.2 Global customisations 18
4.4 Configuring the build script 19
5 Using Fangle with TeXmacs 21
6 Fangle with Makefiles 23
6.1 A word about makefile formats 23
6.2 Extracting Sources 23
6.2.1 Converting from LyX to LaTeX 24
6.2.2 Converting from TeXmacs 24
6.3 Extracting Program Source 25
6.4 Extracting Source Files 25
6.5 Extracting Documentation 27
6.5.1 Formatting TeX 28
6.5.1.1 Running pdflatex 28
6.5.2 Formatting TeXmacs 28
6.5.3 Building the Documentation as a Whole 28
6.7 Boot-strapping the extraction 29
6.8 Incorporating Makefile.inc into existing projects 30
7 Fangle awk source code 35
7.2 Catching errors 36
8 TeXmacs args 37
9 LaTeX and lstlistings 39
9.1 Additional lstlistings parameters 39
9.2 Parsing chunk arguments 41
9.3 Expanding parameters in the text 42
10 Language Modes & Quoting 45
10.1 Modes to keep code together 45
10.2 Modes affect included chunks 45
10.3 Modes operation 46
10.4 Quoting scenarios 47
10.4.1 Direct quoting 47
10.5 Language Mode Definitions 47
10.5.3 Parentheses, Braces and Brackets 49
10.5.4 Customizing Standard Modes 50
10.7 A non-recursive mode tracker 54
10.7.1 Constructor 54
10.7.3.1 One happy chunk 58
10.8 Escaping and Quoting 59
11 Recognizing Chunks 61
11.1.1 TeXmacs 61
11.1.2 lstlistings 62
11.2.1 TeXmacs 63
11.3.1 lstlistings 64
11.4 Chunk contents 65
11.4.1 lstlistings 66
12 Processing Options 69
13 Generating the Output 71
13.1 Assembling the Chunks 72
13.1.1 Chunk Parts 72
16 Fangle LaTeX source code 83
16.1 fangle module 83
16.1.1 The Chunk style 83
16.1.2 The chunkref style 84
16.2.1 The chunk command 85
16.2.1.1 Chunk parameters 86
16.2.2 The noweb styled caption 86
16.2.3 The chunk counter 86
16.2.4 Cross references 89
17 Extracting fangle 91
17.1 Extracting from LyX 91
17.2 Extracting documentation 91
17.3 Extracting from the command line 92
19 Chunk Parameters 97
19.2 TeXmacs 97
20 Compile-log-lyx 99
Chapter 1 Introduction to Literate Programming
Todo: Should really follow on from a part-0 explanation of what literate programming is.
Chapter 2 Running Fangle
Fangle is a replacement for noweb, which consists of notangle, noroots and noweave.
Like notangle and noroots, fangle can read multiple named files, or from stdin.
The -r option causes fangle to behave like noroots.
fangle -r filename.tex
will print out the fangle roots of a tex file.
Unlike the noroots command, the printed roots are not enclosed in angle brackets e.g. <<name>>, unless at least one of the roots is defined using the notangle notation <<name>>=.
Also, unlike noroots, it prints out all roots, not just those that are not used elsewhere. I find that a root not being used doesn't make it particularly top level; so-called top level roots could also be included in another root as well.
My convention is that top level roots to be extracted begin with ./ and have the form of a filename.
Makefile.inc, discussed in 6, can automatically extract all such sources prefixed with ./
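This convention can be exercised from the shell. The sketch below uses a hypothetical root list standing in for real fangle -r output (the exact output depends on how the chunks were defined), and applies the same strip-and-filter idea:

```shell
# Hypothetical output of "fangle -r" on some document: two top-level
# roots and one internal chunk, decorated noweb-style with << >>.
roots='<<./Makefile.inc>>
<<./fangle>>
<<some-internal-chunk>>'

# Strip any noweb-style << >> decoration, then keep only the
# top-level roots: those whose names begin with ./
top_level=$(printf '%s\n' "$roots" |
  sed -e 's/^<<//; s/>>$//' -e '\,^\./,!d')

printf '%s\n' "$top_level"
```

This is essentially the filtering that Makefile.inc performs with sed when it computes the list of sources to extract.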
notangle's -R and -L options are supported.
If you are using LyX or LaTeX, the standard way to extract a file would be:
fangle -R./Makefile.inc fangle.tex > ./Makefile.inc
If you are using TeXmacs, the standard way to extract a file would similarly be:
fangle -R./Makefile.inc fangle.txt > ./Makefile.inc
TeXmacs users would obtain the text file with a verbatim export from TeXmacs, which can be done on the command line with texmacs -s -c fangle.tm fangle.txt -q
Unlike with notangle, the -L option to generate C pre-processor #file style line-number directives does not break the indenting of the generated file.
Also, thanks to mode tracking (described in 10) the -L option does not interrupt (and break) multi-line C macros either.
This does mean that sometimes the compiler might calculate the source line wrongly when generating error messages in such cases, but there isn't any way around this if multi-line macros include other chunks.
Future releases will include a mapping file so that line/character references from the C compiler can be converted to the correct part of the source document.
2.3 Formatting the document
The noweave replacement is built into the editing and formatting environment for TeXmacs, LyX (which uses LaTeX), and even for raw LaTeX.
Use of fangle with TeXmacs, LyX and LaTeX is explained in the next few chapters.
Chapter 3 Using Fangle with LaTeX
Because the noweave replacement is implemented in LaTeX, there is no processing stage required before running the LaTeX command. Of course, LaTeX may need running two or more times, so that the code chunk references can be fully calculated.
The formatting is managed by a set of macros shown in 16, and can be included with:
\usepackage{fangle.sty}
Norman Ramsey's original noweb.sty package is currently required as it is used for formatting the code chunk captions.
The listings.sty package is required, and is used for formatting the code chunks and syntax highlighting.
The xargs.sty package is also required, and makes writing LaTeX macros so much more pleasant.
To do: Add examples of use of Macros
Chapter 4 Using Fangle with LyX
LyX uses the same LaTeX macros shown in 16 as part of a LyX module file fangle.module, which automatically includes the macros in the document pre-amble provided that the fangle LyX module is used in the document.
4.1 Installing the LyX module
Copy fangle.module to your LyX layouts directory, which for unix users will be ~/.lyx/layouts
In order to make the new literate styles available, you will need to reconfigure LyX by clicking Tools->Reconfigure, and then re-start LyX.
4.2 Obtaining a decent mono font
The syntax highlighting features of lstlistings make use of bold; however a mono-space tt font is used to typeset the listings. Obtaining a bold tt font can be impossibly difficult and amazingly easy. I spent many hours at it, following complicated instructions from those who had spent many hours over it, and was finally delivered the simple solution on the lyx mailing list.
The simple way was to add this to my preamble:
\renewcommand{\ttdefault}{txtt}
The next simplest way was to use AMS poor-man's-bold, by adding this to the pre-amble:
%\renewcommand{\ttdefault}{txtt}
%somehow make \pmb be the command for bold, forgot how, sorry, above line not work
It works, but looks wretched on the dvi viewer.
The lstlistings documentation suggests using Luximono.
Luximono was installed according to the instructions in Ubuntu Forums thread 11591811. http://ubuntuforums.org/showthread.php?t=1159181 ^1 with tips from miknight2. http://miknight.blogspot.com/2005/11/how-to-install-luxi-mono-font-in.html ^2 stating that sudo updmap --enable MixedMap ul9.map is required. It looks fine in PDF and PS view but still looks rotten in dvi view.
4.3 Formatting your LyX document
It is not necessary to base your literate document on any of the original LyX literate classes; so select a regular class for your document type.
Add the new module Fangle Literate Listings and also Logical Markup which is very useful.
In the drop-down style listbox you should notice a new style defined, called Chunk.
When you wish to insert a literate chunk, you enter its plain name in the Chunk style, instead of the old noweb method that uses <<name>>= type tags. In the line (or paragraph) following the chunk name, you insert a listing with: Insert->Program Listing.
Inside the white listing box you can type (or paste using shift+ctrl+V) your listing. There is no need to use ctrl+enter at the end of lines as with some older LyX literate techniques; just press enter as normal.
4.3.1 Customising the listing appearance
The code is formatted using the lstlistings package. The chunk style doesn't just define the chunk name, but can also define any other chunk options supported by the lstlistings package \lstset command. In fact, what you type in the chunk style is raw latex. If you want to set the chunk language without having to right-click the listing, just add ,language=C after the chunk name. (Currently the language will affect all subsequent listings, so you may need to specify ,language= quite a lot).
To do: so fix the bug
Of course you can do this by editing the listings box advanced properties by right-clicking on the listings box, but that takes longer, and you can't see at-a-glance what the advanced settings are while editing the document; also advanced settings apply only to that box, while the chunk settings apply through the rest of the document3. It ought to apply only to subsequent chunks of the same name. I'll fix that later ^3.
To do: So make sure they only apply to chunks of that name
4.3.2 Global customisations
As lstlistings is used to set the code chunks, its \lstset command can be used in the pre-amble to set some document wide settings.
If your source has many words with long sequences of capital letters, then columns=fullflexible may be a good idea, or the capital letters will get crowded. (I think lstlistings ought to use a slightly smaller font for capital letters so that they still fit).
The font family \ttfamily looks more normal for code, but has no bold (an alternate typewriter font is used).
With \ttfamily, I must also specify columns=fullflexible or the wrong letter spacing is used.
In my LaTeX pre-amble I usually specialise my code format with:
19a <document-preamble[1](), lang=tex> ≡
________________________________________________________________________
2 | numbers=left, stepnumber=1, numbersep=5pt,
3 | breaklines=false,
4 | basicstyle=\footnotesize\ttfamily,
5 | numberstyle=\tiny,
7 | columns=fullflexible,
8 | numberfirstline=true
|________________________________________________________________________
4.4 Configuring the build script
You can invoke code extraction and building from the LyX menu option Document->Build Program.
First, make sure you don't have a conversion defined for LyX->Program
From the menu Tools->Preferences, add a conversion from Latex(Plain)->Program as:
set -x ; fangle -Rlyx-build $$i |
env LYX_b=$$b LYX_i=$$i LYX_o=$$o LYX_p=$$p LYX_r=$$r bash
(But don't cut-n-paste it from this document or you may be pasting a multi-line string which will break your LyX preferences file).
I hope that one day, LyX will set these into the environment when calling the build script.
You may also want to consider adding options to this conversion...
parselog=/usr/share/lyx/scripts/listerrors
...but if you do you will lose your stderr4. There is some bash plumbing to get a copy of stderr but this footnote is too small ^4.
Now, a shell script chunk called lyx-build will be extracted and run whenever you choose the Document->Build Program menu item.
This document was originally managed using LyX, and the lyx-build script for this document is shown here for historical reference.
lyx -e latex fangle.lyx && \
fangle fangle.lyx > ./autoboot
This looks simple enough, but as mentioned, fangle has to be had from somewhere before it can be extracted.
When the lyx-build chunk is executed, the current directory will be a temporary directory, and LYX_SOURCE will refer to the tex file in this temporary directory. This is unfortunate as our makefile wants to run from the project directory where the LyX file is kept.
We can extract the project directory from $$r, and derive the probable LyX filename from the noweb file that LyX generated.
19b <lyx-build-helper[1](), lang=sh> ≡ 91b⊳
________________________________________________________________________
1 | PROJECT_DIR="$LYX_r"
2 | LYX_SRC="$PROJECT_DIR/${LYX_i%.tex}.lyx"
4 | TEX_SRC="$TEX_DIR/$LYX_i"
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
And then we can define a lyx-build fragment similar to the autoboot fragment.
20a <lyx-build[1](), lang=sh> ≡ 91a⊳
________________________________________________________________________
2 | «lyx-build-helper 19b»
3 | cd $PROJECT_DIR || exit 1
5 | #/usr/bin/fangle -filter ./notanglefix-filter \
6 | # -R./Makefile.inc "../../noweb-lyx/noweb-lyx3.lyx" \
7 | # | sed '/NOWEB_SOURCE=/s/=.*/=samba4-dfs.lyx/' \
8 | # > ./Makefile.inc
10 | #make -f ./Makefile.inc fangle_sources
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
Chapter 5 Using Fangle with TeXmacs
To do: Write this chapter
Chapter 6 Fangle with Makefiles
Here we describe a Makefile.inc that you can include in your own Makefiles, or glue as a recursive make to other projects.
Makefile.inc will cope with extracting all the other source files from this or any specified literate document and keeping them up to date.
It may also be included by a Makefile or Makefile.am defined in a literate document to automatically deal with the extraction of source files and documents during normal builds.
Thus, if Makefile.inc is included into a main project makefile it adds rules for the source files, capable of extracting the source files from the literate document.
6.1 A word about makefile formats
Whitespace formatting is very important in a Makefile. The first character of each action line must be a TAB.
target: pre-requisite
This requires that the literate programming environment have the ability to represent a TAB character in a way that fangle will generate an actual TAB character.
We also adopt a convention that code chunks whose names begin with ./ should always be automatically extracted from the document. Code chunks whose names do not begin with ./ are for internal reference. Such chunks may be extracted directly, but will not be automatically extracted by this Makefile.
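The TAB rule above can be checked mechanically. A small sketch (the file name Makefile.demo is illustrative only):

```shell
# Write a two-line makefile rule; the \t in printf becomes the
# literal TAB that make requires at the start of the action line.
printf 'hello: prerequisite\n\techo hello\n' > Makefile.demo

# Inspect the first character of the action line: a TAB passes,
# spaces would make make fail with a "missing separator" error.
first=$(awk 'NR==2 { print substr($0, 1, 1) }' Makefile.demo)
if [ "$first" = "$(printf '\t')" ]; then
  echo "action line starts with TAB"
else
  echo "action line is NOT TAB-indented"
fi
```

A literate environment that silently converts TABs to spaces would fail this check, which is exactly why fangle must be able to emit a real TAB.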
6.2 Extracting Sources
Our makefile has two parts; variables must be defined before the targets that use them.
As we progress through this chapter, explaining concepts, we will be adding lines to <Makefile.inc-vars 23b> and <Makefile.inc-targets 24c> which are included in <./Makefile.inc 23a> below.
23a <./Makefile.inc[1](), lang=make> ≡
________________________________________________________________________
1 | «Makefile.inc-vars 23b»
2 | «Makefile.inc-default-targets 28a»
3 | «Makefile.inc-targets 24c»
|________________________________________________________________________
We first define a placeholder for the tool fangle in case it cannot be found in the path.
23b <Makefile.inc-vars[1](), lang=> ≡ 24a⊳
________________________________________________________________________
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We also define a placeholder for LITERATE_SOURCE to hold the name of this document. This will normally be passed on the command line.
24a <Makefile.inc-vars[2]() ⇑23b, lang=> +≡ ⊲23b 24b▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
Fangle cannot process LyX or TeXmacs documents directly, so the first stage is to convert these to more suitable text based formats1. LyX and TeXmacs formats are text-based, but not suitable for fangle ^1.
6.2.1 Converting from LyX to LaTeX
The first stage will always be to convert the LyX file to a LaTeX file. Fangle must run on a TeX file because the LyX command server-goto-file-line2. The LyX command server-goto-file-line is used to position the LyX cursor at the compiler errors. ^2 requires that the line number provided be a line of the TeX file, and always maps this to the line in the LyX document. We use server-goto-file-line when moving the cursor to error lines during compile failures.
The command lyx -e literate fangle.lyx will produce fangle.tex, a TeX file; so we define a make target to be the same as the LyX file but with the .tex extension.
The EXTRA_DIST is for automake support so that the TeX files will automatically be distributed with the source, to help those who don't have LyX installed.
24b <Makefile.inc-vars[3]() ⇑23b, lang=> +≡ ▵24a 24d▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3 | LYX_SOURCE=$(LITERATE_SOURCE) # but only the .lyx files
4 | TEX_SOURCE=$(LYX_SOURCE:.lyx=.tex)
5 | EXTRA_DIST+=$(TEX_SOURCE)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We then specify that the TeX source is to be generated from the LyX source.
24c <Makefile.inc-targets[1](), lang=> ≡ 25a⊳
________________________________________________________________________
1 | .SUFFIXES: .tex .lyx
5 | ↦rm -f -- $(TEX_SOURCE)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
6.2.2 Converting from TeXmacs
Fangle cannot process TeXmacs files directly3. but this is planned when TeXmacs uses xml as its native format ^3, but must first convert them to text files.
The command texmacs -c fangle.tm fangle.txt -q will produce fangle.txt, a text file; so we define a make target to be the same as the TeXmacs file but with the .txt extension.
The EXTRA_DIST is for automake support so that the text files will automatically be distributed with the source, to help those who don't have TeXmacs installed.
24d <Makefile.inc-vars[4]() ⇑23b, lang=> +≡ ▵24b 25b⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
6 | TEXMACS_SOURCE=$(LITERATE_SOURCE) # but only the .tm files
7 | TXT_SOURCE=$(LITERATE_SOURCE:.tm=.txt)
8 | EXTRA_DIST+=$(TXT_SOURCE)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
To do: Add loop around each $< so multiple targets can be specified
25a <Makefile.inc-targets[2]() ⇑24c, lang=> +≡ ⊲24c 25d▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
7 | .SUFFIXES: .txt .tm
9 | ↦texmacs -s -c $< $@ -q
10 | .PHONY: clean_txt
12 | ↦rm -f -- $(TXT_SOURCE)
13 | clean: clean_txt
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
6.3 Extracting Program Source
The program source is extracted using fangle, which is designed to operate on text or LaTeX documents4. LaTeX documents are just slightly special text documents ^4.
25b <Makefile.inc-vars[5]() ⇑23b, lang=> +≡ ⊲24d 25c▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
9 | FANGLE_SOURCE=$(TXT_SOURCE)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
The literate document can result in any number of source files, but not all of these will be changed each time the document is updated. We certainly don't want to update the timestamps of these files and cause the whole source tree to be recompiled just because the literate explanation was revised. We use CPIF from the Noweb tools to avoid updating the file if the content has not changed, but should probably write our own.
However, if a source file is not updated, then the fangle file will always have a newer time-stamp and the makefile would always re-attempt to extract a newer source file, which would be a waste of time.
Because of this, we use a stamp file which is always updated each time the sources are fully extracted from the LaTeX document. If the stamp file is newer than the document, then we can avoid an attempt to re-extract any of the sources. Because this stamp file is only updated when extraction is complete, it is safe for the user to interrupt the build-process mid-extraction.
We use echo rather than touch to update the stamp file because the touch command does not work very well over an sshfs mount that I was using.
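Such a copy-if-changed helper is small enough to sketch in shell. This is an illustrative stand-in for noweb's cpif (the function name cpif_sketch is made up here), not its actual implementation:

```shell
# cpif_sketch FILE: read new content from stdin and install it as
# FILE only if it differs, so an unchanged file keeps its timestamp.
cpif_sketch() {
  tmp="$1.tmp"
  cat > "$tmp"
  if cmp -s "$tmp" "$1" 2>/dev/null; then
    rm -f -- "$tmp"     # content identical: leave FILE untouched
  else
    mv -- "$tmp" "$1"   # content new or changed: install it
  fi
}

echo 'int x;' | cpif_sketch demo.h   # first run creates demo.h
echo 'int x;' | cpif_sketch demo.h   # second run changes nothing
```

Because the second invocation removes its temporary file without touching demo.h, make sees an unchanged timestamp and skips the dependent recompilations.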
25c <Makefile.inc-vars[6]() ⇑23b, lang=> +≡ ▵25b 26a⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
10 | FANGLE_SOURCE_STAMP=$(FANGLE_SOURCE).stamp
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
25d <Makefile.inc-targets[3]() ⇑24c, lang=> +≡ ▵25a 26b⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
14 | $(FANGLE_SOURCE_STAMP): $(FANGLE_SOURCE) \
15 | ↦ $(FANGLE_SOURCES) ; \
16 | ↦echo -n > $(FANGLE_SOURCE_STAMP)
18 | ↦rm -f $(FANGLE_SOURCE_STAMP)
19 | clean: clean_stamp
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
6.4 Extracting Source Files
We compute FANGLE_SOURCES to hold the names of all the source files defined in the document. We compute this only once, by means of := in the assignment. The sed deletes any << and >> which may surround the root names (for compatibility with Noweb's noroots command).
As we use chunk names beginning with ./ to denote top level fragments that should be extracted, we filter out all fragments that do not begin with ./
Note 1. FANGLE_PREFIX is set to ./ by default, but whatever it may be overridden to, the prefix is replaced by a literal ./ before extraction so that files will be extracted in the current directory whatever the prefix. This helps namespace or sub-project prefixes like documents: for chunks like documents:docbook/intro.xml
To do: This doesn't work though, because it loses the full name and doesn't know what to extract!
26a <Makefile.inc-vars[7]() ⇑23b, lang=> +≡ ⊲25c 26e▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
11 | FANGLE_PREFIX:=\.\/
12 | FANGLE_SOURCES:=$(shell \
13 | $(FANGLE) -r $(FANGLE_SOURCE) |\
14 | sed -e 's/^[<][<]//;s/[>][>]$$//;/^$(FANGLE_PREFIX)/!d' \
15 | -e 's/^$(FANGLE_PREFIX)/\.\//' )
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
The target below, echo_fangle_sources, is a helpful debugging target and shows the names of the files that would be extracted.
26b <Makefile.inc-targets[4]() ⇑24c, lang=> +≡ ⊲25d 26c▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
20 | .PHONY: echo_fangle_sources
21 | echo_fangle_sources: ; @echo $(FANGLE_SOURCES)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We define a convenient target called fangle_sources so that make fangle_sources will re-extract the source if the literate document has been updated.
26c <Makefile.inc-targets[5]() ⇑24c, lang=> +≡ ▵26b 26d▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
22 | .PHONY: fangle_sources
23 | fangle_sources: $(FANGLE_SOURCE_STAMP)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
And also a convenient target to remove extracted sources.
26d <Makefile.inc-targets[6]() ⇑24c, lang=> +≡ ▵26c 27e⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
24 | .PHONY: clean_fangle_sources
25 | clean_fangle_sources: ; \
26 | rm -f -- $(FANGLE_SOURCE_STAMP) $(FANGLE_SOURCES)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We now look at the extraction of the source files.
This makefile macro if_extension takes 4 arguments: the filename $(1), some extensions to match $(2), a shell command to return if the filename does match the extensions $(3), and a shell command to return if it does not match the extensions $(4).
26e <Makefile.inc-vars[8]() ⇑23b, lang=> +≡ ▵26a 26f▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
16 | if_extension=$(if $(findstring $(suffix $(1)),$(2)),$(3),$(4))
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
For some source files like C files, we want to output the line number and filename of the original LaTeX document from which the source came5. I plan to replace this option with a separate mapping file so as not to pollute the generated source, and also to allow a code pretty-printing reformatter like indent to be able to re-format the file and adjust for changes through comparing the character streams. ^5.
To make this easier we define the file extensions for which we want to do this.
26f <Makefile.inc-vars[9]() ⇑23b, lang=> +≡ ▵26e 27a⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
17 | C_EXTENSIONS=.c .h
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We can then use the if_extension macro to define a macro which expands out to the -L option if fangle is being invoked on a C source file, so that C compile errors will refer to the line number in the TeX document.
27a <Makefile.inc-vars[10]() ⇑23b, lang=> +≡ ⊲26f 27b▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
19 | nf_line=-L -T$(TABS)
20 | fangle=$(FANGLE) $(call if_extension,$(2),$(C_EXTENSIONS),$(nf_line)) -R"$(2)" $(1)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We can use a similar trick to define an indent macro which takes just the filename as an argument and can return a pipeline stage calling the indent command. Indent can be turned off with make fangle_sources indent=
27b <Makefile.inc-vars[11]() ⇑23b, lang=> +≡ ▵27a 27c▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
21 | indent_options=-npro -kr -i8 -ts8 -sob -l80 -ss -ncs
22 | indent=$(call if_extension,$(1),$(C_EXTENSIONS), | indent $(indent_options))
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
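The same extension dispatch can be pictured in plain shell. This is a rough analogue of the if_extension/indent idea, with illustrative names (stage_for is not part of Makefile.inc):

```shell
# stage_for FILE: print the extra pipeline stage to apply when
# extracting FILE; C sources get an indent pass, others pass through.
stage_for() {
  case "$1" in
    *.c|*.h) echo "indent -npro -kr -i8" ;;  # re-indent C sources
    *)       echo "cat" ;;                   # no-op stage for the rest
  esac
}

stage_for main.c     # prints the indent stage
stage_for README     # prints cat, a pass-through stage
```

Dispatching on the suffix keeps the extraction rule itself uniform: every file goes through some pipeline stage, and only the stage's content varies.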
We now define the pattern for extracting a file. The files are written using noweb's cpif so that the file timestamp will not be touched if the contents haven't changed. This avoids the need to rebuild the entire project because of a typographical change in the documentation, or when none or only a few of the C source files have changed.
27c <Makefile.inc-vars[12]() ⇑23b, lang=> +≡ ▵27b 27d▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
23 | fangle_extract=@mkdir -p $(dir $(1)) && \
24 | $(call fangle,$(2),$(1)) > "$(1).tmp" && \
25 | cat "$(1).tmp" $(indent) | cpif "$(1)" \
26 | && rm -f -- "$(1).tmp" || \
27 | (echo error fangling $(1) from $(2) ; exit 1)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We define a target which will extract or update all sources. To do this we first define a makefile template that can do this for any source file in the LaTeX document.
27d <Makefile.inc-vars[13]() ⇑23b, lang=> +≡ ▵27c 28b⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
28 | define FANGLE_template
30 | ↦$$(call fangle_extract,$(1),$(2))
31 | FANGLE_TARGETS+=$(1)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We then enumerate the discovered FANGLE_SOURCES to generate a makefile rule for each one, using the makefile template we defined above.
505 27e <Makefile.inc-targets[7](
\v) ⇑24c, lang=> +≡ ⊲26d 27f▿
506 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
507 27 | $(foreach source,$(FANGLE_SOURCES),\
508 28 | $(eval $(call FANGLE_template,$(source),$(FANGLE_SOURCE))) \
510 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
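The define / call / eval / foreach machinery used here can be tried in isolation with a toy makefile. Everything below (EXTRACT_template and the fake source names) is invented for the demonstration:

```shell
# A toy version of the template-stamping pattern. Semicolon recipes
# are used so the heredoc needs no literal tab characters.
dir=$(mktemp -d)
cat > "$dir/demo.mk" <<'EOF'
SOURCES := alpha.c beta.c

define EXTRACT_template
$(1): ; @echo extracting $(1)
EXTRACT_TARGETS += $(1)
endef

$(foreach source,$(SOURCES),$(eval $(call EXTRACT_template,$(source))))

all: $(EXTRACT_TARGETS)
EOF
(cd "$dir" && make -f demo.mk all)
```

Each word of SOURCES stamps out one rule, and all collects the accumulated EXTRACT_TARGETS, just as FANGLE_template accumulates FANGLE_TARGETS.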
511 These will all be built with FANGLE_SOURCE_STAMP.
512 We also remove the generated sources on a make distclean.
514 27f <Makefile.inc-targets[8](
\v) ⇑24c, lang=> +≡ ▵27e 28c⊳
515 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
516 30 | _distclean: clean_fangle_sources
517 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
518 6.5 Extracting Documentation
519 We then identify the intermediate stages of the documentation and their build and clean targets.
521 28a <Makefile.inc-default-targets[1](
\v), lang=> ≡
522 ________________________________________________________________________
523 1 | .PHONY : clean_pdf
524 |________________________________________________________________________
6.5.1 Formatting LaTeX
528 6.5.1.1 Running pdflatex
529 We produce a pdf file from the tex file.
531 28b <Makefile.inc-vars[14](
\v) ⇑23b, lang=> +≡ ⊲27d 28d▿
532 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
533 33 | FANGLE_PDF+=$(TEX_SOURCE:.tex=.pdf)
534 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
535 We run pdflatex twice to be sure that the contents and aux files are up to date. We certainly are required to run pdflatex at least twice if these files do not exist.
537 28c <Makefile.inc-targets[9](
\v) ⇑24c, lang=> +≡ ⊲27f 28e▿
538 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
539 31 | .SUFFIXES: .tex .pdf
541 33 | ↦pdflatex $< && pdflatex $<
544 36 | ↦rm -f -- $(FANGLE_PDF) $(TEX_SOURCE:.tex=.toc) \
545 37 | ↦ $(TEX_SOURCE:.tex=.log) $(TEX_SOURCE:.tex=.aux)
546 38 | clean_pdf: clean_pdf_tex
547 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
548 6.5.2 Formatting TeXmacs
549 TeXmacs can produce a PDF file directly.
551 28d <Makefile.inc-vars[15](
\v) ⇑23b, lang=> +≡ ▵28b 28f▿
552 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
553 34 | FANGLE_PDF+=$(LITERATE_SOURCE:.tm=.pdf)
554 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
555 To do: Outputting the PDF may not be enough to update the links and page references. I think
556 we need to update twice, generate a PDF, update twice more and generate a new PDF.
557 Basically the PDF export of TeXmacs is pretty rotten and doesn't work properly from the CLI.
560 28e <Makefile.inc-targets[10](
\v) ⇑24c, lang=> +≡ ▵28c 29a⊳
561 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
562 39 | .SUFFIXES: .tm .pdf
564 41 | ↦texmacs -s -c $< $@ -q
566 43 | clean_pdf_texmacs:
567 44 | ↦rm -f -- $(FANGLE_PDF)
568 45 | clean_pdf: clean_pdf_texmacs
569 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
570 6.5.3 Building the Documentation as a Whole
571 Currently we only build pdf as a final format, but FANGLE_DOCS may later hold other output formats.
573 28f <Makefile.inc-vars[16](
\v) ⇑23b, lang=> +≡ ▵28d
574 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
575 35 | FANGLE_DOCS=$(FANGLE_PDF)
576 |________________________________________________________________________
579 We also define fangle_docs as a convenient phony target.
581 29a <Makefile.inc-targets[11](
\v) ⇑24c, lang=> +≡ ⊲28e 29b▿
582 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
583 46 | .PHONY: fangle_docs
584 47 | fangle_docs: $(FANGLE_DOCS)
585 48 | docs: fangle_docs
586 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
587 And we define a convenient clean_fangle_docs which we add to the regular clean target.
589 29b <Makefile.inc-targets[12](
\v) ⇑24c, lang=> +≡ ▵29a
590 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
591 49 | .PHONY: clean_fangle_docs
592 50 | clean_fangle_docs: clean_tex clean_pdf
593 51 | clean: clean_fangle_docs
595 53 | distclean_fangle_docs: clean_tex clean_fangle_docs
596 54 | distclean: clean distclean_fangle_docs
597 |________________________________________________________________________
601 If Makefile.inc is included into Makefile, then extracted files can be updated with this command:
604 make -f Makefile.inc fangle_sources
605 6.7 Boot-strapping the extraction
606 As well as having the makefile extract or update the source files as part of its operation, it also seems convenient to have the makefile re-extract itself from this document.
607 It would also be convenient for the code that extracts the makefile from this document to itself be part of this document; however, we have to start somewhere, and this unfortunately requires us to type at least a few words by hand to start things off.
608 Therefore we will have a minimal root fragment, which, when extracted, can cope with extracting the rest of the source. This shell script fragment can do that. Its name is * — out of regard for Noweb, but when extracted it might better be called autoupdate.
612 29c <*[1](
\v), lang=sh> ≡
613 ________________________________________________________________________
616 3 | MAKE_SRC="${1:-${NW_LYX:-../../noweb-lyx/noweb-lyx3.lyx}}"
617 4 | MAKE_SRC=`dirname "$MAKE_SRC"`/`basename "$MAKE_SRC" .lyx`
618 5 | NOWEB_SRC="${2:-${NOWEB_SRC:-$MAKE_SRC.lyx}}"
619 6 | lyx -e latex $MAKE_SRC
621 8 | fangle -R./Makefile.inc ${MAKE_SRC}.tex \
622 9 | | sed "/FANGLE_SOURCE=/s/^/#/;T;aNOWEB_SOURCE=$FANGLE_SRC" \
623 10 | | cpif ./Makefile.inc
625 12 | make -f ./Makefile.inc fangle_sources
626 |________________________________________________________________________
629 The general Makefile can be invoked with ./autoboot and can also be included into any automake file to automatically re-generate the source files.
630 The autoboot can be extracted with this command:
631 lyx -e latex fangle.lyx && \
632 fangle fangle.lyx > ./autoboot
633 This looks simple enough, but as mentioned, fangle has to be had from somewhere before it can be extracted.
634 On a unix system this will extract fangle.module and the fangle awk script, and run some basic tests.
635 To do: cross-ref to test chapter when it is a chapter all on its own
637 6.8 Incorporating Makefile.inc into existing projects
638 If you are writing a literate module of an existing non-literate program, you may find it easier to use a slightly recursive make instead of directly including Makefile.inc in the project's makefile.
639 This way there is less chance of definitions in Makefile.inc interfering with definitions in the main makefile, or with definitions in other Makefile.inc files from other literate modules of the same project.
640 To do this we add some glue to the project makefile that invokes Makefile.inc in the right way. The glue works by adding a .PHONY target to call the recursive make, and adding this target as an additional pre-requisite to the existing targets.
641 Example: Sub-module of an existing system
642 In this example, we are building module.so as a literate module of a larger project.
643 We will show the sort of glue that can be inserted into the project's Makefile — or, more likely, into a regular Makefile included in or invoked by the project's Makefile.
645 30a <makefile-glue[1](
\v), lang=> ≡ 30b▿
646 ________________________________________________________________________
647 1 | module_srcdir=modules/module
648 2 | MODULE_SOURCE=module.tm
649 3 | MODULE_STAMP=$(MODULE_SOURCE).stamp
650 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
651 The existing build system may already have a build target for module.o, but we just add another pre-requisite to that. In this case we use module.tm.stamp as a pre-requisite, the stamp file's modified time indicating when all sources were extracted6. If the project's build system does not know how to build the module from the extracted sources, then just add build actions here as normal. ^6.
653 30b <makefile-glue[2](
\v) ⇑30a, lang=make> +≡ ▵30a 30c▿
654 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
655 4 | $(module_srcdir)/module.o: $(module_srcdir)/$(MODULE_STAMP)
656 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
657 The target for this new pre-requisite will be generated by a recursive make using Makefile.inc, which will make sure that the source is up to date before it is built by the main project's makefile.
659 30c <makefile-glue[3](
\v) ⇑30a, lang=> +≡ ▵30b 31a⊳
660 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
661 5 | $(module_srcdir)/$(MODULE_STAMP): $(module_srcdir)/$(MODULE_SOURCE)
662 6 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc fangle_sources LITERATE_SOURCE=$(MODULE_SOURCE)
663 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
664 We can add similar glue for the docs, clean and distclean targets. In this example the main project was using a double colon for these targets, so we must use the same in our glue.
666 31a <makefile-glue[4](
\v) ⇑30a, lang=> +≡ ⊲30c
667 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
668 7 | docs:: docs_module
669 8 | .PHONY: docs_module
671 10 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc docs LITERATE_SOURCE=$(MODULE_SOURCE)
673 12 | clean:: clean_module
674 13 | .PHONY: clean_module
676 15 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc clean LITERATE_SOURCE=$(MODULE_SOURCE)
678 17 | distclean:: distclean_module
679 18 | .PHONY: distclean_module
680 19 | distclean_module:
681 20 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc distclean LITERATE_SOURCE=$(MODULE_SOURCE)
682 |________________________________________________________________________
685 We could do similarly for install targets to install the generated docs.
687 Chapter 7Fangle awk source code
688 We use the copyright notice from chapter 2.
690 35a <./fangle[1](
\v), lang=awk> ≡ 35b▿
691 ________________________________________________________________________
692 1 | #! /usr/bin/awk -f
693 2 | # «gpl3-copyright 4a»
694 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
695 We also use code from Arnold Robbins' public-domain getopt (1993 revision) defined in 81a, and naturally want to attribute this appropriately.
697 35b <./fangle[2](
\v) ⇑35a, lang=> +≡ ▵35a 35c▿
698 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
699 3 | # NOTE: Arnold Robbins public domain getopt for awk is also used:
700 4 | «getopt.awk-header 79a»
701 5 | «getopt.awk-getopt() 79c»
703 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
704 And include the following chunks (which are explained further on) to make up the program:
706 35c <./fangle[3](
\v) ⇑35a, lang=> +≡ ▵35b 40a⊳
707 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
708 7 | «helper-functions 36d»
709 8 | «mode-tracker 58c»
710 9 | «parse_chunk_args 42a»
711 10 | «chunk-storage-functions 77b»
712 11 | «output_chunk_names() 71d»
713 12 | «output_chunks() 71e»
714 13 | «write_chunk() 72a»
715 14 | «expand_chunk_args() 42b»
718 17 | «recognize-chunk 61a»
720 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
722 The portable way to erase an array in awk is to split the empty string, so we define a fangle macro that can split an array, like this:
724 35d <awk-delete-array[1](ARRAY
\v\v), lang=awk> ≡
725 ________________________________________________________________________
726 1 | split("", ${ARRAY});
727 |________________________________________________________________________
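This can be verified from the shell with any awk:

```shell
# Verify that split("", a) empties an array. Whole-array "delete a"
# is a common extension but is not guaranteed by older awks, which
# is why fangle uses the split() idiom.
awk 'BEGIN {
    a["x"] = 1; a["y"] = 2; a["z"] = 3;
    split("", a);                   # portable array erasure
    n = 0; for (k in a) n++;
    print "elements left: " n;
}'
```

which prints elements left: 0.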
730 For debugging it is sometimes convenient to be able to dump the contents of an array to stderr, and so this macro is also useful.
732 35e <dump-array[1](ARRAY
\v\v), lang=awk> ≡
733 ________________________________________________________________________
734 1 | print "\nDump: ${ARRAY}\n--------\n" > "/dev/stderr";
735 2 | for (_x in ${ARRAY}) {
736 3 | print _x "=" ${ARRAY}[_x] "\n" > "/dev/stderr";
738 5 | print "========\n" > "/dev/stderr";
739 |________________________________________________________________________
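Instantiating the macro by hand for an array named a (fangle would substitute ${ARRAY} for us) gives a runnable example:

```shell
# The dump-array macro with ${ARRAY} substituted by "a" by hand.
# Note that the iteration order of "for (_x in a)" is unspecified.
awk 'BEGIN {
    a["name"] = "freddie"; a["addr"] = "unknown";
    print "\nDump: a\n--------\n" > "/dev/stderr";
    for (_x in a) {
        print _x "=" a[_x] "\n" > "/dev/stderr";
    }
    print "========\n" > "/dev/stderr";
}' 2>&1
```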
743 Fatal errors are issued with the error function:
745 36a <error()[1](
\v), lang=awk> ≡ 36b▿
746 ________________________________________________________________________
747 1 | function error(message)
749 3 | print "ERROR: " FILENAME ":" FNR " " message > "/dev/stderr";
752 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
753 and likewise for non-fatal warnings:
755 36b <error()[2](
\v) ⇑36a, lang=awk> +≡ ▵36a 36c▿
756 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
757 6 | function warning(message)
759 8 | print "WARNING: " FILENAME ":" FNR " " message > "/dev/stderr";
762 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
763 and debug output too:
765 36c <error()[3](
\v) ⇑36a, lang=awk> +≡ ▵36b
766 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
767 11 | function debug_log(message)
769 13 | print "DEBUG: " FILENAME ":" FNR " " message > "/dev/stderr";
771 |________________________________________________________________________
774 To do: append=helper-functions
777 36d <helper-functions[1](
\v), lang=> ≡
778 ________________________________________________________________________
780 |________________________________________________________________________
783 Chapter 8TeXmacs args
784 TeXmacs functions with arguments1. or function declarations with parameters ^1 appear like this:
785 blah(I came, I saw, I conquered^K, and then went home asd^K) --- where "I came, I saw, I conquered" is argument 1, "^K, " is the separator, "and then went home asd" is the next argument, and "^K)" is the terminator.
786 Arguments commence after the opening parenthesis. The first argument runs up till the next ^K.
787 If the following character is a , then another argument follows. If the next character after the , is a space character, then it is also eaten. The fangle stylesheet emits ^K,space as separators, but the fangle untangler will forgive a missing space.
788 If the following character is ) then this is a terminator and there are no more arguments.
790 37a <constants[1](
\v), lang=> ≡ 77a⊳
791 ________________________________________________________________________
792 1 | ARG_SEPARATOR=sprintf("%c", 11);
793 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
794 To process the text in this fashion, we split the string on ^K.
797 37b <get_chunk_args[1](
\v), lang=> ≡
798 ________________________________________________________________________
799 1 | function get_texmacs_chunk_args(text, args, a, done) {
800 2 | split(text, args, ARG_SEPARATOR);
803 5 | for (a=1; (a in args); a++) if (a>1) {
804 6 | if (args[a] == "" || substr(args[a], 1, 1) == ")") done=1;
810 12 | if (substr(args[a], 1, 2) == ", ") args[a]=substr(args[a], 3);
811 13 | else if (substr(args[a], 1, 1) == ",") args[a]=substr(args[a], 2);
814 |________________________________________________________________________
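The separator handling can be exercised on its own. The text below is an invented stand-in for what the TeXmacs stylesheet would emit:

```shell
# Exercise the ^K (ASCII 11) argument splitting standalone.
awk 'BEGIN {
    ARG_SEPARATOR = sprintf("%c", 11);
    text = "first arg" ARG_SEPARATOR ", second arg" ARG_SEPARATOR ")";
    n = split(text, args, ARG_SEPARATOR);
    for (a = 2; a <= n; a++) {
        # forgive a missing space after the comma, as fangle does
        if (substr(args[a], 1, 2) == ", ")     args[a] = substr(args[a], 3);
        else if (substr(args[a], 1, 1) == ",") args[a] = substr(args[a], 2);
    }
    print args[1] "|" args[2] "|" args[3];
}'
```

which prints first arg|second arg|) — the final ")" being the terminator that tells the caller there are no more arguments.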
817 Chapter 9LaTeX and lstlistings
818 To do: Split LyX and TeXmacs parts
820 For L Y X and LaTeX, the lstlistings package is used to format the lines of code chunks. You may recall from chapter XXX that arguments to a chunk definition are pure LaTeX code. This means that fangle needs to be able to parse LaTeX a little.
821 LaTeX arguments to lstlistings macros are a comma-separated list of key-value pairs, and values containing commas are enclosed in { braces } (which is to be expected for LaTeX).
822 A sample expression is:
823 name=thomas, params={a, b}, something, something-else
824 but we see that this is just a simpler form of this expression:
825 name=freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc
826 We may consider that we need a function that can parse such LaTeX expressions and assign the values to an AWK associative array, perhaps using a recursive parser into a multi-dimensional hash1. as AWK doesn't have nested-hash support ^1, resulting in:
a[name] freddie
a[foo, bar] baz
a[foo, quux, quirk]
a[foo, quux, a] fleeg
a[etc]
834 Yet, also, on reflection it seems that sometimes such nesting is not desirable, as the braces are also used to delimit values that contain commas --- we may consider that
835 name={williamson, freddie}
836 should assign williamson, freddie to name.
837 In fact we are not so interested in the detail as to be bothered by this, which turns out to be a good thing for two reasons. Firstly, TeX has a malleable parser with no strict syntax, and secondly, whether or not williamson and freddie should count as two items will be context-dependent anyway.
838 We need to parse this LaTeX for only one reason: we are extending lstlistings to add some additional arguments which will be used to express chunk parameters and other chunk options.
839 9.1 Additional lstlistings parameters
840 Further on we define a \Chunk LaTeX macro whose arguments will consist of the chunk name, optionally followed by a comma and then a comma-separated list of arguments. In fact we will just need to prefix name= to the arguments in order to create valid lstlistings arguments.
841 There will be other arguments supported too:
842 params.As an extension to many literate-programming styles, fangle permits code chunks to take parameters and thus operate somewhat like C pre-processor macros, or like C++ templates. Chunk parameters are declared with a chunk argument called params, which holds a semi-colon separated list of parameters, like this:
843 achunk,language=C,params=name;address
844 addto.a named chunk that this chunk is to be included into. This saves the effort of having to declare another listing of the named chunk merely to include this one.
845 Function get_chunk_args() will accept two parameters: text, being the text to parse, and values, being an array to receive the parsed values as described above. The optional parameter path is used during recursion to build up the multi-dimensional array path.
847 40a <./fangle[4](
\v) ⇑35a, lang=> +≡ ⊲35c
848 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
849 19 | «get_chunk_args() 40b»
850 |________________________________________________________________________
854 40b <get_chunk_args()[1](
\v), lang=> ≡ 40c▿
855 ________________________________________________________________________
856 1 | function get_tex_chunk_args(text, values,
857 2 | # optional parameters
858 3 | path, # hierarchical precursors
861 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
862 The strategy is to parse the name, and then look for a value. If the value begins with a brace {, then we recurse and consume as much of the text as necessary, returning the remaining text when we encounter a leading close-brace }. This being the strategy --- and executed in a loop --- we realise that we must first look for the closing brace (perhaps preceded by white space) in order to terminate the recursion and return the remaining text.
864 40c <get_chunk_args()[2](
\v) ⇑40b, lang=> +≡ ▵40b
865 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
867 7 | split("", values);
868 8 | while(length(text)) {
869 9 | if (match(text, "^ *}(.*)", a)) {
872 12 | «parse-chunk-args 40d»
876 |________________________________________________________________________
879 We can see that the text could be inspected with this regex:
881 40d <parse-chunk-args[1](
\v), lang=> ≡ 41a⊳
882 ________________________________________________________________________
883 1 | if (! match(text, " *([^,=]*[^,= ]) *(([,=]) *(([^,}]*) *,* *(.*))|)$", a)) {
886 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
887 and that a will have the following values:
889 1 name
890 2 =freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc
891 3 =
892 4 freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc
893 5 freddie
894 6 , foo={bar=baz, quux={quirk, a=fleeg}}, etc
896 a[3] will be either = or , and signify whether the option named in a[1] has a value or not (respectively).
897 If the option does have a value, then if the expression substr(a[4],1,1) returns a brace { it will signify that we need to recurse:
899 41a <parse-chunk-args[2](
\v) ⇑40d, lang=> +≡ ⊲40d
900 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
902 5 | if (a[3] == "=") {
903 6 | if (substr(a[4],1,1) == "{") {
904 7 | text = get_tex_chunk_args(substr(a[4],2), values, path name SUBSEP);
906 9 | values[path name]=a[5];
910 13 | values[path name]="";
913 |________________________________________________________________________
916 We can test this function like this:
918 41b <gca-test.awk[1](
\v), lang=> ≡
919 ________________________________________________________________________
920 1 | «get_chunk_args() 40b»
924 5 | print get_tex_chunk_args("name=freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc", a);
926 7 | print "a[" b "] => " a[b];
929 |________________________________________________________________________
932 which should give this output:
934 41c <gca-test.awk-results[1](
\v), lang=> ≡
935 ________________________________________________________________________
936 1 | a[foo.quux.quirk] =>
937 2 | a[foo.quux.a] => fleeg
938 3 | a[foo.bar] => baz
939 4 | a[etc] =>
940 5 | a[name] => freddie
941 |________________________________________________________________________
944 9.2 Parsing chunk arguments
945 Arguments to parameterized chunks are expressed in round brackets as a comma-separated list of optional arguments. For example, a chunk that is defined with:
946 \Chunk{achunk, params=name ; address}
947 can be invoked as:
948 \chunkref{achunk}(John Jones, jones@example.com)
949 An argument list may be as simple as in \chunkref{pull}(thing, otherthing) or as complex as:
950 \chunkref{pull}(things[x, y], get_other_things(a, "(all)"))
951 --- which for all its commas and quotes and parentheses represents only two parameters: things[x, y] and get_other_things(a, "(all)").
952 If we simply split the parameter list on commas, then the comma in things[x, y] would split it into two separate arguments: things[x and y] --- neither of which makes sense on its own.
953 One way to prevent this would be by refusing to split text between matching delimiters, such as [, ], (, ), {, } and most likely also ", " and ', '. Of course this also makes it impossible to pass such mis-matched code fragments as parameters, but I think that it would be hard for readers to cope with authors who would pass such unbalanced code fragments as chunk parameters2. I know that I couldn't cope with users doing such things, and although the GPL3 license prevents me from actually forbidding anyone from trying, if they want it to work they'll have to write the code themselves and not expect any support from me. ^2.
954 Unfortunately, the full set of matching delimiters may vary from language to language. In certain C++ template contexts, < and > would count as delimiters, and yet in other contexts they would not.
955 This puts me in the unfortunate position of having to somewhat parse all programming languages without knowing what they are!
956 However, if this universal mode-tracking is possible, then parsing the arguments would be trivial. Such a mode tracker is described in chapter 10 and used here with simplicity.
958 42a <parse_chunk_args[1](
\v), lang=> ≡
959 ________________________________________________________________________
960 1 | function parse_chunk_args(language, text, values, mode,
962 3 | c, context, rest)
964 5 | «new-mode-tracker
\v(context
\v, language
\v, mode
\v) 54d»
965 6 | rest = mode_tracker(context, text, values);
967 8 | for(c=1; c <= context[0, "values"]; c++) {
968 9 | values[c] = context[0, "values", c];
972 |________________________________________________________________________
975 9.3 Expanding parameters in the text
976 Within the body of the chunk, the parameters are referred to with: ${name} and ${address}. There is a strong case that a LaTeX style notation should be used, like \param{name} which would be expressed in the listing as =<\param{name}> and be rendered as ${name}. Such notation would make me go blind, but I do intend to adopt it.
977 We therefore need a function expand_chunk_args which will take a block of text, a list of permitted parameters, and the arguments which must substitute for the parameters.
978 Here we split the text on ${ which means that all parts except the first will begin with a parameter name which will be terminated by }. The split function will consume the literal ${ in each case.
980 42b <expand_chunk_args()[1](
\v), lang=> ≡
981 ________________________________________________________________________
982 1 | function expand_chunk_args(text, params, args,
983 2 | p, text_array, next_text, v, t, l)
985 4 | if (split(text, text_array, "\\${")) {
986 5 | «substitute-chunk-args 43a»
991 |________________________________________________________________________
994 First, we produce an associative array of substitution values indexed by parameter names. This will serve as a cache, allowing us to look up the replacement values as we extract each name.
996 43a <substitute-chunk-args[1](
\v), lang=> ≡ 43b▿
997 ________________________________________________________________________
998 1 | for(p in params) {
999 2 | v[params[p]]=args[p];
1001 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1002 We accumulate substituted text in the variable text. As the first part of the split function is the part before the delimiter --- which is ${ in our case --- this part will never contain a parameter reference, so we assign it directly to the result kept in text.
1004 43b <substitute-chunk-args[2](
\v) ⇑43a, lang=> +≡ ▵43a 43c▿
1005 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1006 4 | text=text_array[1];
1007 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1008 We then iterate over the remaining values in the array, and substitute each reference for its argument.
1010 43c <substitute-chunk-args[3](
\v) ⇑43a, lang=> +≡ ▵43b
1011 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1012 5 | for(t=2; t in text_array; t++) {
1013 6 | «substitute-chunk-arg 43d»
1015 |________________________________________________________________________
1018 After the split on ${ a valid parameter reference will consist of a valid parameter name terminated by a close-brace }. A valid parameter name begins with an underscore or a letter, and may contain letters, digits or underscores.
1019 A valid-looking reference that is not actually the name of a parameter will be left alone and not substituted. This is good because there is nothing to substitute anyway, and it avoids clashes when writing code for languages where ${...} is a valid construct --- such constructs will not be interfered with unless the parameter name also matches.
1021 43d <substitute-chunk-arg[1](
\v), lang=> ≡
1022 ________________________________________________________________________
1023 1 | if (match(text_array[t], "^([a-zA-Z_][a-zA-Z0-9_]*)}", l) &&
1026 4 | text = text v[l[1]] substr(text_array[t], length(l[1])+2);
1028 6 | text = text "${" text_array[t];
1030 |________________________________________________________________________
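The substitution scheme just described can be sketched as one standalone awk program (the parameter names and values here are invented):

```shell
# A standalone sketch of the ${name} substitution: split on "${",
# then replace each piece that begins with a known parameter name
# followed by "}". Unknown names such as ${PATH} are left untouched.
awk 'BEGIN {
    text = "Dear ${name}, write to ${address}; ${PATH} is not ours";
    v["name"] = "freddie"; v["address"] = "somewhere";
    n = split(text, parts, "\\${");
    out = parts[1];
    for (t = 2; t <= n; t++) {
        if (match(parts[t], /^[a-zA-Z_][a-zA-Z0-9_]*}/)) {
            nm = substr(parts[t], 1, RLENGTH - 1);
            if (nm in v) {
                out = out v[nm] substr(parts[t], RLENGTH + 1);
                continue;
            }
        }
        out = out "${" parts[t];   # not a parameter: restore the ${
    }
    print out;
}'
```

which prints Dear freddie, write to somewhere; ${PATH} is not ours — demonstrating both the substitution and the non-interference with unknown ${...} constructs.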
1033 Chapter 10Language Modes & Quoting
1034 lstlistings and fangle both recognize source languages, and perform some basic parsing and syntax highlighting in the rendered document1. although lstlisting supports many more languages ^1. lstlistings can detect strings and comments within a language definition and perform suitable rendering, such as italics for comments, and visible-spaces within strings.
1035 Fangle similarly can recognize strings and comments, etc., within a language, so that any chunks included with \chunkref{a-chunk} or <a-chunk ?> can be suitably escaped or quoted.
1036 10.1 Modes to keep code together
1037 As an example, the C language has a few parse modes, which affect the interpretation of characters.
1038 One parse mode is the string mode. The string mode is commenced by an un-escaped quotation mark " and terminated by the same. Within the string mode, only one additional mode can be commenced: the backslash mode \, which is always terminated by the following character.
1039 Another mode is [ which is terminated by a ] (unless it occurs in a string).
1040 Consider this fragment of C code:
1041 do_something( things[x, y] , get_other_things(a, "(all)") ) --- where the outer do_something( parenthesis opens mode 1 (the ( mode), [x, y] is mode 2 (the [ mode), the get_other_things( parenthesis opens mode 3 (the ( mode), and "(all)" is mode 4 (the " mode).
1043 Mode nesting prevents the close parenthesis in the quoted string (part 4) from terminating the parenthesis mode (part 3).
1044 Each language has a set of modes, the default mode being the null mode. Each mode can lead to other modes.
1045 10.2 Modes affect included chunks
1046 For instance, consider this chunk with language=perl:
1048 45a <test:example-perl[1](
\v), lang=perl> ≡
1049 ________________________________________________________________________
1050 1 | print "hello world $0\n";
1051 |________________________________________________________________________
1054 If it were included in a chunk with language=sh, like this:
1056 45b <test:example-sh[1](
\v), lang=sh> ≡
1057 ________________________________________________________________________
1058 1 | perl -e "«test:example-perl 45a»"
1059 |________________________________________________________________________
1062 we might want fangle to generate output like this:
1064 46a <test:example-sh.result[1](
\v), lang=sh> ≡
1065 ________________________________________________________________________
1066 1 | perl -e "print \"hello world \$0\\n\";"
1067 |________________________________________________________________________
1070 See that the double quote ", back-slash \ and $ have been quoted with a back-slash to protect them from shell interpretation.
1071 If that were then included in a chunk with language=make, like this:
1073 46b <test:example-makefile[1](
\v), lang=make> ≡
1074 ________________________________________________________________________
1076 2 | ↦«test:example-sh 45b»
1077 |________________________________________________________________________
1080 We would need the output to look like this --- note the $$ as the single $ has been makefile-quoted with another $.
1082 46c <test:example-makefile.result[1](
\v), lang=make> ≡
1083 ________________________________________________________________________
1085 2 | ↦perl -e "print \"hello world \$$0\\n\";"
1086 |________________________________________________________________________
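The $$ quoting can be checked with a toy makefile (the file and target names are invented):

```shell
# In a make recipe, $$ reaches the shell as a single $. A semicolon
# recipe avoids needing a literal tab character in the heredoc.
dir=$(mktemp -d)
cat > "$dir/dollar.mk" <<'EOF'
show: ; @printf '%s\n' 'price is $$5'
EOF
(cd "$dir" && make -f dollar.mk show)
```

which prints price is $5: make consumed one $ and the shell saw the other as a literal character inside single quotes.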
1089 10.3 Modes operation
1090 In order to make this work, we must define a mode-tracker supporting each language, that can detect the various quoting modes, and provide a transformation that may be applied to any included text so that included text will be interpreted correctly after any interpolation that it may be subject to at run-time.
1091 For example, the sed transformation for text to be inserted into shell double-quoted strings would be something like:
1092 s/\\/\\\\/g;s/\$/\\$/g;s/"/\\"/g;
1093 which would protect \ $ "
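A transformation of this kind can be checked against the perl one-liner from the earlier example:

```shell
# Apply the double-quote transformation to the earlier perl example.
# Order matters: backslashes must be doubled first, or the escapes
# added for $ and " would themselves get doubled.
src='print "hello world $0\n";'
quoted=$(printf '%s' "$src" | sed 's/\\/\\\\/g;s/\$/\\$/g;s/"/\\"/g')
printf 'perl -e "%s"\n' "$quoted"
```

which reproduces the quoted command shown in 46a.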
1094 The mode tracker must also track nested mode-changes, as in this shell example:
1095 echo "hello `id ...`"
1097 Any shell special characters inserted at the point marked ↑ would need to be escaped if their plain-text meaning is to be preserved, including ` | * among others. The set of characters that need escaping in the back-ticks ` is not the same as the set that need escaping in the double-quotes ". However, in shell syntax, a " at the point marked ↑ does not close the leading " and so would not need additional escaping because of the nesting of the two modes.
1099 Escaping need not occur if the format and mode of the included chunk matches that of the including chunk.
1100 As each chunk is output, a new mode tracker for that language is initialized in its normal state. As text is output for that chunk the output mode is tracked. When a new chunk is included, a transformation appropriate to that mode is selected and pushed onto a stack of transformations. Any text to be output is passed through this stack of transformations.
1101 It remains to consider whether the chunk-include function should return its generated text so that the caller can apply any transformations (and formatting), or whether it should apply the stack of transformations itself.
1102 Note that the transformed included text should have the property of not being able to change the mode in the current chunk.
1103 To do: Note chunk parameters should probably also be transformed
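The stacked-transformation idea can be sketched in awk (a hypothetical standalone sketch, not fangle's own code):

```shell
# Hypothetical sketch: included text passes through a stack of escape
# transformations, one per enclosing mode, applied in order.
awk 'BEGIN {
  text = "a\\b \"c\"";
  n = 0;
  s[++n] = "\\\\"; r[n] = "\\\\\\\\";  # escape backslashes first: \ -> \\
  s[++n] = "\"";   r[n] = "\\\\\"";    # then escape quotes:        " -> \"
  for (i = 1; i <= n; i++) gsub(s[i], r[i], text);
  print text;
}'
# → a\\b \"c\"
```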
1105 10.4 Quoting scenarios
1106 10.4.1 Direct quoting
1107 Here we give examples of various quoting scenarios and discuss what the expected outcome might be and how this could be obtained.
1109 47a <test:q:1[1](
\v), lang=sh> ≡
1110 ________________________________________________________________________
1111 1 | echo "$(«test:q:1-inc 47b»)"
1112 |________________________________________________________________________
1116 47b <test:q:1-inc[1](
\v), lang=sh> ≡
1117 ________________________________________________________________________
1119 |________________________________________________________________________
1122 Should this example produce echo "$(echo "hello")" or echo "\$(echo \"hello\")" ?
1123 This depends on what the author intended, but we must provide a way to express that intent.
1124 We might argue that as both chunks have lang=sh the intent must have been to quote the included chunk — but consider that this might be shell script that writes shell script.
1125 If <test:q:1-inc 47b> had lang=text then it certainly would have been right to quote it, which leads us to ask: in what ways can we reduce quoting if lang of the included chunk is compatible with the lang of the including chunk?
1126 If we take a completely nested approach then even though $( mode might do no quoting of its own, " mode will still do its own quoting. We need a model where the nested $( mode will prevent " from quoting.
1127 This gives rise to the tunneling feature. In bash, the $( gives rise to a new top-level parsing scenario, so we need to enter the null mode, and also ignore any quoting, and then undo this when the $( mode is terminated by the corresponding close ).
1128 We shall say that tunneling is when a mode in a language ignores other modes in the same language and arrives back at an earlier null mode of the same language.
1129 In example <test:q:1 47a> above, the nesting of modes is: null, ", $(
1130 When mode $( is commenced, the stack of nested modes will be traversed. If the null mode can be found in the same language, without the language varying, then a tunnel will be established so that the intervening modes, " in this case, can be skipped when the modes are enumerated to quote the text being emitted.
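The behaviour being modelled can be verified at a shell prompt (a plain illustration, not fangle code): the shell restarts top-level parsing inside $( ), so the inner double quotes are independent of the outer ones.

```shell
# The inner "..." inside $( ) neither closes nor needs escaping within the
# outer "..."; command substitution tunnels back to a top-level parsing context.
echo "outer $(echo "inner") outer"
# prints: outer inner outer
```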
1131 10.5 Language Mode Definitions
1132 All mode definitions are stored in a single multi-dimensional hash. The first index is the language, and the second index is the mode-identifier. The third index selects a property: terminators, and optionally, submodes, delimiters, and tunnel targets.
1133 A useful set of mode definitions for a nameless general C-type language is shown here. (Don't be confused by the double backslash escaping needed in awk. One set of escaping is for the string, and the second set of escaping is for the regex).
1134 To do: TODO: Add =<\mode{}> command which will allow us to signify that a string is
1135 regex and thus fangle will quote it for us.
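As a reminder of how awk's multi-dimensional indices behave (a standalone sketch, nothing fangle-specific), the (language, mode, property) triple is really one SUBSEP-joined key:

```shell
# awk has no true multi-dimensional arrays: modes[lang, mode, prop] joins the
# indices with SUBSEP, and the (i, j, k) in arr test uses the same joining.
awk 'BEGIN {
  modes["c-like", "\"", "terminators"] = "\"";
  if (("c-like", "\"", "terminators") in modes)
    print "terminator:", modes["c-like", "\"", "terminators"];
}'
# prints: terminator: "
```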
1137 Submodes are entered by the characters " ' { ( [ /*
1139 48a <common-mode-definitions[1](language
\v\v), lang=> ≡ 48b▿
1140 ________________________________________________________________________
1141 1 | modes[${language}, "", "submodes"]="\\\\|\"|'|{|\\(|\\[";
1142 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1143 In the default mode, a comma surrounded by unimportant white space is a delimiter of language items2. whatever a language item might be ^2.
1145 48b <common-mode-definitions[2](language
\v\v) ⇑48a, lang=> +≡ ▵48a 48d▿
1146 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1147 2 | modes[${language}, "", "delimiters"]=" *, *";
1148 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1149 and should pass this test:
1150 To do: Why do the tests run in ?(? mode and not ?? mode
1153 48c <test:mode-definitions[1](
\v), lang=> ≡ 49g⊳
1154 ________________________________________________________________________
1155 1 | parse_chunk_args("c-like", "1,2,3", a, "");
1156 2 | if (a[1] != "1") e++;
1157 3 | if (a[2] != "2") e++;
1158 4 | if (a[3] != "3") e++;
1159 5 | if (length(a) != 3) e++;
1160 6 | «pca-test.awk:summary 59a»
1162 8 | parse_chunk_args("c-like", "joe, red", a, "");
1163 9 | if (a[1] != "joe") e++;
1164 10 | if (a[2] != "red") e++;
1165 11 | if (length(a) != 2) e++;
1166 12 | «pca-test.awk:summary 59a»
1168 14 | parse_chunk_args("c-like", "${colour}", a, "");
1169 15 | if (a[1] != "${colour}") e++;
1170 16 | if (length(a) != 1) e++;
1171 17 | «pca-test.awk:summary 59a»
1172 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1173 Nested modes are identified by a backslash, a double or single quote, various bracket styles or a /* comment.
1174 For each of these sub-modes we must also identify a mode terminator, and any sub-modes or delimiters that may be entered3. Because we are using the sub-mode characters as the mode identifier it means we can't currently have a mode character dependent on its context; i.e. { can't behave differently when it is inside [. ^3.
1176 The backslash mode has no submodes or delimiters, and is terminated by any character. Note that we are not so much interested in evaluating or interpolating content as we are in delineating content. It does not matter that a double backslash (\\) may represent a single backslash while a backslash-newline may represent white space, but it does matter that the newline in a backslash-newline should not be able to terminate a C pre-processor statement; and so the newline will be consumed by the backslash however it is to be interpreted.
1178 48d <common-mode-definitions[3](language
\v\v) ⇑48a, lang=> +≡ ▵48b 49f⊳
1179 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1180 3 | modes[${language}, "\\", "terminators"]=".";
1181 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1183 Common languages support two kinds of string quoting: double quotes and single quotes.
1184 In a string we have one special mode, which is the backslash. This may escape an embedded quote and prevent us from thinking that it should terminate the string.
1186 49a <mode:common-string[1](language
\v, quote
\v\v), lang=> ≡ 49b▿
1187 ________________________________________________________________________
1188 1 | modes[${language}, ${quote}, "submodes"]="\\\\";
1189 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1190 Otherwise, the string will be terminated by the same character that commenced it.
1192 49b <mode:common-string[2](language
\v, quote
\v\v) ⇑49a, lang=> +≡ ▵49a 49c▿
1193 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1194 2 | modes[${language}, ${quote}, "terminators"]=${quote};
1195 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1196 In C type languages, certain escape sequences exist in strings. We need to define a mechanism to encode any chunks included in this mode using those escape sequences. These are expressed in two parts, s meaning search, and r meaning replace.
1197 The first substitution is to replace a backslash with a double backslash. We do this first as other substitutions may introduce a backslash which we would not then want to escape again here.
1198 Note: Backslashes need double-escaping in the search pattern but not in the replacement string, hence we are replacing a literal \ with a literal \\.
1200 49c <mode:common-string[3](language
\v, quote
\v\v) ⇑49a, lang=> +≡ ▵49b 49d▿
1201 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1202 3 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]="\\\\";
1203 4 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\\\";
1204 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1205 If the quote character occurs in the text, it should be preceded by a backslash, otherwise it would terminate the string unexpectedly.
1207 49d <mode:common-string[4](language
\v, quote
\v\v) ⇑49a, lang=> +≡ ▵49c 49e▿
1208 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1209 5 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]=${quote};
1210 6 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\" ${quote};
1211 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1212 Any newlines in the string must be replaced by \n.
1214 49e <mode:common-string[5](language
\v, quote
\v\v) ⇑49a, lang=> +≡ ▵49d
1215 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1216 7 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]="\n";
1217 8 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\n";
1218 |________________________________________________________________________
1221 For the common modes, we define this string handling for double and single quotes.
1223 49f <common-mode-definitions[4](language
\v\v) ⇑48a, lang=> +≡ ⊲48d 50b⊳
1224 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1225 4 | «mode:common-string
\v(${language}
\v, "\""
\v) 49a»
1226 5 | «mode:common-string
\v(${language}
\v, "'"
\v) 49a»
1227 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1228 Working strings should pass this test:
1230 49g <test:mode-definitions[2](
\v) ⇑48c, lang=> +≡ ⊲48c 54a⊳
1231 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1232 18 | parse_chunk_args("c-like", "say \"I said, \\\"Hello, how are you\\\".\", for me", a, "");
1233 19 | if (a[1] != "say \"I said, \\\"Hello, how are you\\\".\"") e++;
1234 20 | if (a[2] != "for me") e++;
1235 21 | if (length(a) != 2) e++;
1236 22 | «pca-test.awk:summary 59a»
1237 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1238 10.5.3 Parentheses, Braces and Brackets
1239 Whereas quotes are closed by the same character that opened them, parentheses, brackets and braces are closed by a different character.
1241 50a <mode:common-brackets[1](language
\v, open
\v, close
\v\v), lang=> ≡
1242 ________________________________________________________________________
1243 1 | modes[${language}, ${open}, "submodes" ]="\\\\|\"|{|\\(|\\[|'|/\\*";
1244 2 | modes[${language}, ${open}, "delimiters"]=" *, *";
1245 3 | modes[${language}, ${open}, "terminators"]=${close};
1246 |________________________________________________________________________
1249 Note that the open is NOT a regex but the close token IS.
1250 To do: When we can quote regex we won't have to put the slashes in here
1253 50b <common-mode-definitions[5](language
\v\v) ⇑48a, lang=> +≡ ⊲49f
1254 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1255 6 | «mode:common-brackets
\v(${language}
\v, "{"
\v, "}"
\v) 50a»
1256 7 | «mode:common-brackets
\v(${language}
\v, "["
\v, "\\]"
\v) 50a»
1257 8 | «mode:common-brackets
\v(${language}
\v, "("
\v, "\\)"
\v) 50a»
1258 |________________________________________________________________________
1261 10.5.4 Customizing Standard Modes
1263 50c <mode:add-submode[1](language
\v, mode
\v, submode
\v\v), lang=> ≡
1264 ________________________________________________________________________
1265 1 | modes[${language}, ${mode}, "submodes"] = modes[${language}, ${mode}, "submodes"] "|" ${submode};
1266 |________________________________________________________________________
1270 50d <mode:add-escapes[1](language
\v, mode
\v, search
\v, replace
\v\v), lang=> ≡
1271 ________________________________________________________________________
1272 1 | escapes[${language}, ${mode}, ++escapes[${language}, ${mode}], "s"]=${search};
1273 2 | escapes[${language}, ${mode}, escapes[${language}, ${mode}], "r"]=${replace};
1274 |________________________________________________________________________
1279 We can define /* comment */ style comments and //comment style comments to be added to any language:
1281 50e <mode:multi-line-comments[1](language
\v\v), lang=> ≡
1282 ________________________________________________________________________
1283 1 | «mode:add-submode
\v(${language}
\v, ""
\v, "/\\*"
\v) 50c»
1284 2 | modes[${language}, "/*", "terminators"]="\\*/";
1285 |________________________________________________________________________
1289 50f <mode:single-line-slash-comments[1](language
\v\v), lang=> ≡
1290 ________________________________________________________________________
1291 1 | «mode:add-submode
\v(${language}
\v, ""
\v, "//"
\v) 50c»
1292 2 | modes[${language}, "//", "terminators"]="\n";
1293 3 | «mode:add-escapes
\v(${language}
\v, "//"
\v, "\n"
\v, "\n//"
\v) 50d»
1294 |________________________________________________________________________
1297 We can also define # comment style comments (as used in awk and shell scripts) in a similar manner.
1298 To do: I'm having to use # for hash and \textbackslash{} for \ and have hacky work-arounds in the parser for now
1301 50g <mode:add-hash-comments[1](language
\v\v), lang=> ≡
1302 ________________________________________________________________________
1303 1 | «mode:add-submode
\v(${language}
\v, ""
\v, "#"
\v) 50c»
1304 2 | modes[${language}, "#", "terminators"]="\n";
1305 3 | «mode:add-escapes
\v(${language}
\v, "#"
\v, "\n"
\v, "\n#"
\v) 50d»
1306 |________________________________________________________________________
1309 In C, the # denotes pre-processor directives, which can be multi-line.
1311 50h <mode:add-hash-defines[1](language
\v\v), lang=> ≡
1312 ________________________________________________________________________
1313 1 | «mode:add-submode
\v(${language}
\v, ""
\v, "#"
\v) 50c»
1314 2 | modes[${language}, "#", "submodes" ]="\\\\";
1315 3 | modes[${language}, "#", "terminators"]="\n";
1316 4 | «mode:add-escapes
\v(${language}
\v, "#"
\v, "\n"
\v, "\\\\\n"
\v) 50d»
1317 |________________________________________________________________________
1321 51a <mode:quote-dollar-escape[1](language
\v, quote
\v\v), lang=> ≡
1322 ________________________________________________________________________
1323 1 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]="\\$";
1324 2 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\$";
1325 |________________________________________________________________________
1328 We can add these definitions to various languages:
1330 51b <mode-definitions[1](
\v), lang=> ≡ 52a⊳
1331 ________________________________________________________________________
1332 1 | «common-mode-definitions
\v("c-like"
\v) 48a»
1334 3 | «common-mode-definitions
\v("c"
\v) 48a»
1335 4 | «mode:multi-line-comments
\v("c"
\v) 50e»
1336 5 | «mode:single-line-slash-comments
\v("c"
\v) 50f»
1337 6 | «mode:add-hash-defines
\v("c"
\v) 50h»
1339 8 | «common-mode-definitions
\v("awk"
\v) 48a»
1340 9 | «mode:add-hash-comments
\v("awk"
\v) 50g»
1341 10 | «mode:add-naked-regex
\v("awk"
\v) 51g»
1342 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1343 The awk definitions should allow a comment block like this:
1345 51c <test:comment-quote[1](
\v), lang=awk> ≡
1346 ________________________________________________________________________
1347 1 | # Comment: «test:comment-text 51d»
1348 |________________________________________________________________________
1352 51d <test:comment-text[1](
\v), lang=> ≡
1353 ________________________________________________________________________
1354 1 | Now is the time for
1355 2 | the quick brown fox to bring lemonade
1357 |________________________________________________________________________
1360 to come out like this:
1362 51e <test:comment-quote:result[1](
\v), lang=> ≡
1363 ________________________________________________________________________
1364 1 | # Comment: Now is the time for
1365 2 | #the quick brown fox to bring lemonade
1367 |________________________________________________________________________
1370 The C definition for such a block should have it come out like this:
1372 51f <test:comment-quote:C-result[1](
\v), lang=> ≡
1373 ________________________________________________________________________
1374 1 | # Comment: Now is the time for\
1375 2 | the quick brown fox to bring lemonade\
1377 |________________________________________________________________________
1381 This pattern is incomplete; it is meant to detect naked regular expressions in awk and perl, e.g. /.*$/, but the required capabilities are not present.
1382 Currently it only detects regexes anchored with ^ as used in fangle.
1383 For full regex support, modes need to be named not after their starting character, but some other more fully qualified name.
1385 51g <mode:add-naked-regex[1](language
\v\v), lang=> ≡
1386 ________________________________________________________________________
1387 1 | «mode:add-submode
\v(${language}
\v, ""
\v, "/\\^"
\v) 50c»
1388 2 | modes[${language}, "/^", "terminators"]="/";
1389 |________________________________________________________________________
1394 52a <mode-definitions[2](
\v) ⇑51b, lang=> +≡ ⊲51b 52b▿
1395 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1396 11 | «common-mode-definitions
\v("perl"
\v) 48a»
1397 12 | «mode:multi-line-comments
\v("perl"
\v) 50e»
1398 13 | «mode:add-hash-comments
\v("perl"
\v) 50g»
1399 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1400 We still need to add s/, with submode /, and terminate both with //. This is likely to be impossible as perl regexes can contain perl.
1402 Shell single-quote strings are different from other strings and have no escape characters. The only special character is the single quote ' which always closes the string. Therefore we cannot use <common-mode-definitions
\v("sh"
\v) 48a> but we will invoke most of its definition apart from single-quote strings.
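The standard replacement for an embedded single quote can be tried directly at a shell prompt (a plain illustration):

```shell
# A single-quoted shell string cannot contain a literal '; instead the string
# is closed, an escaped quote \' is emitted, and a new string is opened: '\''
echo 'it'\''s fine'
# prints: it's fine
```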
1404 52b <mode-definitions[3](
\v) ⇑51b, lang=awk> +≡ ▵52a 53a⊳
1405 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1406 14 | modes["sh", "", "submodes"]="\\\\|\"|'|{|\\(|\\[|\\$\\(";
1407 15 | modes["sh", "\\", "terminators"]=".";
1409 17 | modes["sh", "\"", "submodes"]="\\\\|\\$\\(";
1410 18 | modes["sh", "\"", "terminators"]="\"";
1411 19 | escapes["sh", "\"", ++escapes["sh", "\""], "s"]="\\\\";
1412 20 | escapes["sh", "\"", escapes["sh", "\""], "r"]="\\\\";
1413 21 | escapes["sh", "\"", ++escapes["sh", "\""], "s"]="\"";
1414 22 | escapes["sh", "\"", escapes["sh", "\""], "r"]="\\" "\"";
1415 23 | escapes["sh", "\"", ++escapes["sh", "\""], "s"]="\n";
1416 24 | escapes["sh", "\"", escapes["sh", "\""], "r"]="\\n";
1418 26 | modes["sh", "'", "terminators"]="'";
1419 27 | escapes["sh", "'", ++escapes["sh", "'"], "s"]="'";
1420 28 | escapes["sh", "'", escapes["sh", "'"], "r"]="'\\'" "'";
1421 29 | «mode:common-brackets
\v("sh"
\v, "$("
\v, "\\)"
\v) 50a»
1422 30 | «mode:add-tunnel
\v("sh"
\v, "$("
\v, ""
\v) 52c»
1423 31 | «mode:common-brackets
\v("sh"
\v, "{"
\v, "}"
\v) 50a»
1424 32 | «mode:common-brackets
\v("sh"
\v, "["
\v, "\\]"
\v) 50a»
1425 33 | «mode:common-brackets
\v("sh"
\v, "("
\v, "\\)"
\v) 50a»
1426 34 | «mode:add-hash-comments
\v("sh"
\v) 50g»
1427 35 | «mode:quote-dollar-escape
\v("sh"
\v, "\""
\v) 51a»
1428 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1429 The definition of add-tunnel is:
1431 52c <mode:add-tunnel[1](language
\v, mode
\v, tunnel
\v\v), lang=> ≡
1432 ________________________________________________________________________
1433 1 | escapes[${language}, ${mode}, ++escapes[${language}, ${mode}], "tunnel"]=${tunnel};
1434 |________________________________________________________________________
1438 For makefiles, we currently recognize two modes: the null mode and ↦ mode, which is tabbed mode and contains the makefile recipe. In the null mode the only escape is $ which must be converted to $$.
1439 Tabbed mode is harder to manage, as the GNU Make Manual says in the section on splitting lines4. http://www.gnu.org/s/hello/manual/make/Splitting-Lines.html ^4. There is no way to escape a multi-line text that occurs as part of a makefile recipe.
1440 Despite this sad fact, if the newlines in the shell script all occur at points of top-level shell syntax, then we could replace them with ;\n↦ and largely get the right effect.
1442 53a <test:make:1[1](
\v), lang=make> ≡
1443 ________________________________________________________________________
1446 3 | ↦«test:make:1-inc
\v($@) ?b»
1448 |________________________________________________________________________
1452 53b <test:make:1-inc[1](target
\v\v), lang=sh> ≡
1453 ________________________________________________________________________
1454 1 | if test "${target}" = "all"
1455 2 | then echo yes, all
1456 3 | else echo not all
1458 |________________________________________________________________________
1461 The two chunks above could reasonably produce this:
1463 53c <test:make:1.result-ideal[1](
\v), lang=make> ≡
1464 ________________________________________________________________________
1466 2 | ↦echo making test
1467 3 | ↦if test "$@" = "all" ;\
1468 4 | ↦then echo yes, all ;\
1469 5 | ↦else echo not all ;\
1471 |________________________________________________________________________
1474 But will more likely produce this:
1476 53d <test:make:1.result[1](
\v), lang=make> ≡
1477 ________________________________________________________________________
1479 2 | ↦echo making test
1480 3 | ↦if test "$$@" = "all" ;\
1481 4 | ↦ then echo yes, all ;\
1482 5 | ↦ else echo not all ;\
1484 |________________________________________________________________________
1487 The chunk argument $@ has been quoted (which would have been fine if we were passing the name of a shell variable), and the other shell lines are (harmlessly) indented by one space as part of fangle's indent-matching. Indent-matching should take into account the expanded tab size, and generally the expanded prefix of the line whose indent it is trying to match, but in this case we want it to have no effect at all!
1488 To do: The $@ was passed from a make fragment. In what cases should it be converted to $$@?
1489 Do we need to track the language of sources of arguments?
1491 An uglier work-around, until this problem can be solved, would be:
1493 ?a <test:make:2[1](
\v), lang=make> ≡
1494 ________________________________________________________________________
1497 3 | ↦ARG="$@"; «test:make:1-inc
\v($ARG) ?b»
1498 |________________________________________________________________________
1501 which produces the more useful:
1503 ?b <test:make:2.result[1](
\v), lang=make> ≡
1504 ________________________________________________________________________
1506 2 | ↦echo making test
1507 3 | ↦ARG="$@"; if test "$$ARG" = "all" ;\
1508 4 | ↦ then echo yes, all ;\
1509 5 | ↦ else echo not all ;\
1511 |________________________________________________________________________
1514 If, however, the shell fragment contained strings with literal newline characters then there would be no easy way to escape these and preserve the value of the string.
1515 A different style of makefile construction might be used — the recipe could be stored in a target-specific variable5. http://www.gnu.org/s/hello/manual/make/Target_002dspecific.html ^5 which contains the recipe with a more normal escape mechanism.
1518 53a <mode-definitions[4](
\v) ⇑51b, lang=awk> +≡ ⊲52b
1519 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1520 36 | modes["make", "", "submodes"]="↦";
1521 37 | escapes["make", "", ++escapes["make", ""], "s"]="\\$";
1522 38 | escapes["make", "", escapes["make", ""], "r"]="$$";
1523 39 | modes["make", "↦", "terminators"]="\\n";
1524 40 | escapes["make", "↦", ++escapes["make", "↦"], "s"]="\\n";
1525 41 | escapes["make", "↦", escapes["make", "↦"], "r"]=" ;\\\n↦";
1526 |________________________________________________________________________
1529 Note also that the tab character is hard-wired into the pattern, and that the make variable .RECIPEPREFIX might change this to something else.
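For example (a hypothetical fragment, GNU Make 3.82 or later), a makefile can change the recipe prefix, in which case a mode pattern hard-wired to the tab character would no longer match:

```make
# With .RECIPEPREFIX set, recipe lines start with > rather than a tab,
# so a tracker matching only the tab character would miss them.
.RECIPEPREFIX = >
test:
>echo making test
```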
1531 Also, the parser must return any spare text at the end that has not been processed due to a mode terminator being found.
1533 54a <test:mode-definitions[3](
\v) ⇑48c, lang=> +≡ ⊲49g 54b▿
1534 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1535 23 | rest = parse_chunk_args("c-like", "1, 2, 3) spare", a, "(");
1536 24 | if (a[1] != 1) e++;
1537 25 | if (a[2] != 2) e++;
1538 26 | if (a[3] != 3) e++;
1539 27 | if (length(a) != 3) e++;
1540 28 | if (rest != " spare") e++;
1541 29 | «pca-test.awk:summary 59a»
1542 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1543 We must also be able to parse the example given earlier.
1545 54b <test:mode-definitions[4](
\v) ⇑48c, lang=> +≡ ▵54a
1546 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1547 30 | parse_chunk_args("c-like", "things[x, y], get_other_things(a, \"(all)\"), 99", a, "(");
1548 31 | if (a[1] != "things[x, y]") e++;
1549 32 | if (a[2] != "get_other_things(a, \"(all)\")") e++;
1550 33 | if (a[3] != "99") e++;
1551 34 | if (length(a) != 3) e++;
1552 35 | «pca-test.awk:summary 59a»
1553 |________________________________________________________________________
1556 10.7 A non-recursive mode tracker
1558 The mode tracker holds its state in a stack based on a numerically indexed hash. This function, when passed an empty hash, will initialize it.
1560 54c <new_mode_tracker()[1](
\v), lang=> ≡
1561 ________________________________________________________________________
1562 1 | function new_mode_tracker(context, language, mode) {
1563 2 | context[""] = 0;
1564 3 | context[0, "language"] = language;
1565 4 | context[0, "mode"] = mode;
1567 |________________________________________________________________________
1570 Because awk functions cannot return an array, we must create the array first and pass it in, so we have a fangle macro to do this:
1572 54d <new-mode-tracker[1](context
\v, language
\v, mode
\v\v), lang=awk> ≡
1573 ________________________________________________________________________
1574 1 | «awk-delete-array
\v(context
\v) 35d»
1575 2 | new_mode_tracker(${context}, ${language}, ${mode});
1576 |________________________________________________________________________
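The reason this macro works is that awk passes arrays by reference; a standalone sketch (not fangle's own code, using a hypothetical new_tracker helper):

```shell
# awk scalars are passed by value but arrays by reference, so a function can
# initialize an array supplied by its caller even though it cannot return one.
awk 'BEGIN {
  new_tracker(ctx, "sh", "");
  print ctx[0, "language"];
}
function new_tracker(context, language, mode) {
  context[""] = 0;
  context[0, "language"] = language;
  context[0, "mode"] = mode;
}'
# prints: sh
```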
1580 And for tracking modes, we dispatch to a mode-tracker action based on the current language
1582 54e <mode_tracker[1](
\v), lang=awk> ≡ 55a⊳
1583 ________________________________________________________________________
1584 1 | function push_mode_tracker(context, language, mode,
1588 5 | if (! ("" in context)) {
1589 6 | «new-mode-tracker
\v(context
\v, language
\v, mode
\v) 54d»
1592 9 | top = context[""];
1593 10 | if (context[top, "language"] == language && mode=="") mode = context[top, "mode"];
1596 13 | context[top, "language"] = language;
1597 14 | context[top, "mode"] = mode;
1598 15 | context[""] = top;
1600 17 | return old_top;
1602 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1604 55a <mode_tracker[2](
\v) ⇑54e, lang=> +≡ ⊲54e 55b▿
1605 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1606 19 | function dump_mode_tracker(context,
1609 22 | for(c=0; c <= context[""]; c++) {
1610 23 | printf(" %2d %s:%s\n", c, context[c, "language"], context[c, "mode"]) > "/dev/stderr";
1611 24 | for(d=1; ( (c, "values", d) in context); d++) {
1612 25 | printf(" %2d %s\n", d, context[c, "values", d]) > "/dev/stderr";
1616 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1618 55b <mode_tracker[3](
\v) ⇑54e, lang=> +≡ ▵55a 59c⊳
1619 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1620 29 | function pop_mode_tracker(context, context_origin)
1622 31 | if ( (context_origin) && ("" in context) && context[""] != (1+context_origin)) return 0;
1623 32 | context[""] = context_origin;
1626 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1627 This implies that any chunk must be syntactically whole; for instance, this is fine:
1629 55c <test:whole-chunk[1](
\v), lang=> ≡
1630 ________________________________________________________________________
1632 2 | «test:say-hello 55d»
1634 |________________________________________________________________________
1638 55d <test:say-hello[1](
\v), lang=> ≡
1639 ________________________________________________________________________
1641 |________________________________________________________________________
1644 But this is not fine; the chunk <test:hidden-else 55f> is not properly cromulent.
1646 55e <test:partial-chunk[1](
\v), lang=> ≡
1647 ________________________________________________________________________
1649 2 | «test:hidden-else 55f»
1651 |________________________________________________________________________
1655 55f <test:hidden-else[1](
\v), lang=> ≡
1656 ________________________________________________________________________
1657 1 | print "I'm fine";
1659 3 | print "I'm not";
1660 |________________________________________________________________________
1663 These tests will check for correct behaviour:
1665 56a <test:cromulence[1](
\v), lang=> ≡
1666 ________________________________________________________________________
1667 1 | echo Cromulence test
1668 2 | passtest $FANGLE -Rtest:whole-chunk $TXT_SRC &>/dev/null || ( echo "Whole chunk failed" && exit 1 )
1669 3 | failtest $FANGLE -Rtest:partial-chunk $TXT_SRC &>/dev/null || ( echo "Partial chunk failed" && exit 1 )
1670 |________________________________________________________________________
1674 We must avoid recursion as a language construct because we intend to employ mode-tracking to track the language mode of emitted code, and the code is emitted from a function which is itself recursive; so instead we implement pseudo-recursion using our own stack based on a hash.
1676 56b <mode_tracker()[1](
\v), lang=awk> ≡ 56c▿
1677 ________________________________________________________________________
1678 1 | function mode_tracker(context, text, values,
1679 2 | # optional parameters
1681 4 | mode, submodes, language,
1682 5 | cindex, c, a, part, item, name, result, new_values, new_mode,
1683 6 | delimiters, terminators)
1685 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1686 We could be re-commencing with a valid context, so we need to set up the state according to the last context.
1688 56c <mode_tracker()[2](
\v) ⇑56b, lang=> +≡ ▵56b 56f▿
1689 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1690 8 | cindex = context[""] + 0;
1691 9 | mode = context[cindex, "mode"];
1692 10 | language = context[cindex, "language" ];
1693 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1694 First we construct a single large regex combining the possible sub-modes for the current mode along with the terminators for the current mode.
1696 56d <parse_chunk_args-reset-modes[1](
\v), lang=> ≡ 56e▿
1697 ________________________________________________________________________
1698 1 | submodes=modes[language, mode, "submodes"];
1700 3 | if ((language, mode, "delimiters") in modes) {
1701 4 | delimiters = modes[language, mode, "delimiters"];
1702 5 | if (length(submodes)>0) submodes = submodes "|";
1703 6 | submodes=submodes delimiters;
1704 7 | } else delimiters="";
1705 8 | if ((language, mode, "terminators") in modes) {
1706 9 | terminators = modes[language, mode, "terminators"];
1707 10 | if (length(submodes)>0) submodes = submodes "|";
1708 11 | submodes=submodes terminators;
1709 12 | } else terminators="";
1710 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1711 If we don't find anything to match on --- probably because the language is not supported --- then we return the entire text without matching anything.
1713 56e <parse_chunk_args-reset-modes[2](
\v) ⇑56d, lang=> +≡ ▵56d
1714 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1715 13 | if (! length(submodes)) return text;
1716 |________________________________________________________________________
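The regex-combining step above can be seen in isolation (a minimal sketch with made-up sub-mode and delimiter patterns, assuming any POSIX awk):

```shell
awk 'BEGIN {
  submodes = "\\\\.|\"";      # e.g. escaped-pair and double-quote sub-modes
  delimiters = ",";
  # only add the "|" separator when there is something to separate
  if (length(submodes) > 0) submodes = submodes "|";
  submodes = submodes delimiters;
  print submodes;
}'
```

which prints the combined alternation ready for use in match().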
1720 56f <mode_tracker()[3](
\v) ⇑56b, lang=> +≡ ▵56c 57a⊳
1721 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1722 11 | «parse_chunk_args-reset-modes 56d»
1723 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1724 We then iterate the text (until there is none left) looking for sub-modes or terminators in the regex.
1726 57a <mode_tracker()[4](
\v) ⇑56b, lang=> +≡ ⊲56f 57b▿
1727 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1728 12 | while((cindex >= 0) && length(text)) {
1729 13 | if (match(text, "(" submodes ")", a)) {
1730 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1731 A bug that creeps in regularly during development is a bad regex that matches a zero-length string, which results in an infinite loop (as no text is consumed), so I catch that right away with this test.
1733 57b <mode_tracker()[5](
\v) ⇑56b, lang=> +≡ ▵57a 57c▿
1734 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1735 14 | if (RLENGTH<1) {
1736 15 | error(sprintf("Internal error, matched zero length submode, should be impossible - likely regex computation error\n" \
1737 16 | "Language=%s\nmode=%s\nmatch=%s\n", language, mode, submodes));
1739 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
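The failure mode being guarded against is easy to reproduce: a regex such as q* happily matches the empty string, so a loop that consumes RLENGTH characters per iteration never advances. A small demonstration (not fangle code):

```shell
awk 'BEGIN {
  text = "abc";
  if (match(text, "q*")) {
    # q* matches the empty string at position 1
    print RSTART, RLENGTH;
  }
}'
```

This prints 1 0: the match succeeds but consumes nothing, which is exactly the case the RLENGTH<1 test rejects.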
1740 part is defined as the text up to the sub-mode or terminator, and this is appended to item --- which is the current text being gathered. If a mode has a delimiter, then item is reset each time a delimiter is found.
1741 For example, in ("hello", "there") he said. each span up to a delimiter is gathered as an item: first "hello", then "there", and the trailing he said. forms the final item.
1743 57c <mode_tracker()[6](
\v) ⇑56b, lang=> +≡ ▵57b 57d▿
1744 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1745 18 | part = substr(text, 1, RSTART -1);
1746 19 | item = item part;
1747 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1748 We must now determine what was matched. If it was a terminator, then we must restore the previous mode.
1750 57d <mode_tracker()[7](
\v) ⇑56b, lang=> +≡ ▵57c 57e▿
1751 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1752 20 | if (match(a[1], "^" terminators "$")) {
1753 21 | #printf("%2d EXIT MODE [%s] by [%s] [%s]\n", cindex, mode, a[1], text) > "/dev/stderr"
1754 22 | context[cindex, "values", ++context[cindex, "values"]] = item;
1755 23 | delete context[cindex];
1756 24 | context[""] = --cindex;
1757 25 | if (cindex>=0) {
1758 26 | mode = context[cindex, "mode"];
1759 27 | language = context[cindex, "language"];
1760 28 | «parse_chunk_args-reset-modes 56d»
1762 30 | item = item a[1];
1763 31 | text = substr(text, 1 + length(part) + length(a[1]));
1765 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1766 If a delimiter was matched, then we must store the current item in the parsed values array, and reset the item.
1768 57e <mode_tracker()[8](
\v) ⇑56b, lang=> +≡ ▵57d 58a⊳
1769 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1770 33 | else if (match(a[1], "^" delimiters "$")) {
1771 34 | if (cindex==0) {
1772 35 | context[cindex, "values", ++context[cindex, "values"]] = item;
1775 38 | item = item a[1];
1777 40 | text = substr(text, 1 + length(part) + length(a[1]));
1779 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1780 Otherwise, if a new submode is detected (all submodes have terminators), we must create a nested parse context until we find the terminator for this mode.
1782 58a <mode_tracker()[9](
\v) ⇑56b, lang=> +≡ ⊲57e 58b▿
1783 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1784 42 | else if ((language, a[1], "terminators") in modes) {
1785 43 | #check if new_mode is defined
1786 44 | item = item a[1];
1787 45 | #printf("%2d ENTER MODE [%s] in [%s]\n", cindex, a[1], text) > "/dev/stderr"
1788 46 | text = substr(text, 1 + length(part) + length(a[1]));
1789 47 | context[""] = ++cindex;
1790 48 | context[cindex, "mode"] = a[1];
1791 49 | context[cindex, "language"] = language;
1793 51 | «parse_chunk_args-reset-modes 56d»
1795 53 | error(sprintf("Submode '%s' set unknown mode in text: %s\nLanguage %s Mode %s\n", a[1], text, language, mode));
1796 54 | text = substr(text, 1 + length(part) + length(a[1]));
1799 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1800 In the final case, we parsed to the end of the string. If the string was complete, then we should have no nested mode context, but if the string was just a fragment we may have a mode context which must be preserved for the next fragment. To do: consideration ought to be given to the case where sub-mode strings are split over two fragments.
1802 58b <mode_tracker()[10](
\v) ⇑56b, lang=> +≡ ▵58a
1803 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1805 58 | context[cindex, "values", ++context[cindex, "values"]] = item text;
1811 64 | context["item"] = item;
1813 66 | if (length(item)) context[cindex, "values", ++context[cindex, "values"]] = item;
1816 |________________________________________________________________________
1819 10.7.3.1 One happy chunk
1820 All the mode tracker chunks are referred to here:
1822 58c <mode-tracker[1](
\v), lang=> ≡
1823 ________________________________________________________________________
1824 1 | «new_mode_tracker() 54c»
1825 2 | «mode_tracker() 56b»
1826 |________________________________________________________________________
1830 We can test this function like this:
1832 58d <pca-test.awk[1](
\v), lang=awk> ≡
1833 ________________________________________________________________________
1835 2 | «mode-tracker 58c»
1836 3 | «parse_chunk_args() ?»
1839 6 | «mode-definitions 51b»
1841 8 | «test:mode-definitions 48c»
1843 |________________________________________________________________________
1847 59a <pca-test.awk:summary[1](
\v), lang=awk> ≡
1848 ________________________________________________________________________
1850 2 | printf "Failed " e
1852 4 | print "a[" b "] => " a[b];
1859 |________________________________________________________________________
1862 which should give this output:
1864 59b <pca-test.awk-results[1](
\v), lang=> ≡
1865 ________________________________________________________________________
1866 1 | a[foo.quux.quirk] =>
1867 2 | a[foo.quux.a] => fleeg
1868 3 | a[foo.bar] => baz
1870 5 | a[name] => freddie
1871 |________________________________________________________________________
1874 10.8 Escaping and Quoting
1875 For the time being and to get around TeXmacs inability to export a TAB character, the right arrow ↦ whose UTF-8 sequence is ...
1878 Another special character, the left arrow ↤ with UTF-8 sequence 0xE2 0x86 0xA4, is used to strip any preceding white space as a way of un-tabbing and removing indent that has been applied; this is important for bash here documents, and the like. It's a filthy hack.
1879 To do: remove the hack
1882 59c <mode_tracker[4](
\v) ⇑54e, lang=> +≡ ⊲55b 59d▿
1883 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1885 35 | function untab(text) {
1886 36 | gsub("[[:space:]]*\xE2\x86\xA4","", text);
1889 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
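The effect of untab() can be tried from the shell (a sketch that builds the ↤ byte sequence with printf; the gsub is the same one used above):

```shell
# build the left-arrow character 0xE2 0x86 0xA4 portably (octal escapes)
arrow=$(printf '\342\206\244')
printf '    %secho hi\n' "$arrow" |
  awk -v a="$arrow" '{ gsub("[[:space:]]*" a, ""); print }'
```

The four leading spaces and the arrow are stripped, leaving echo hi.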
1890 Each nested mode can optionally define a set of transforms to be applied to any text that is included from another language.
1891 This code can perform transforms from index c downwards.
1893 59d <mode_tracker[5](
\v) ⇑54e, lang=awk> +≡ ▵59c 58c⊳
1894 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1895 39 | function transform_escape(context, text, top,
1896 40 | c, cp, cpl, s, r)
1898 42 | for(c = top; c >= 0; c--) {
1899 43 | if ( (context[c, "language"], context[c, "mode"]) in escapes) {
1900 44 | cpl = escapes[context[c, "language"], context[c, "mode"]];
1901 45 | for (cp = 1; cp <= cpl; cp ++) {
1902 46 | s = escapes[context[c, "language"], context[c, "mode"], cp, "s"];
1903 47 | r = escapes[context[c, "language"], context[c, "mode"], cp, "r"];
1904 48 | if (length(s)) {
1905 49 | gsub(s, r, text);
1907 51 | if ( (context[c, "language"], context[c, "mode"], cp, "t") in escapes ) {
1908 52 | quotes[src, "t"] = escapes[context[c, "language"], context[c, "mode"], cp, "t"];
1915 59 | function dump_escaper(quotes, r, cc) {
1916 60 | for(cc=1; cc<=c; cc++) {
1917 61 | printf("%2d s[%s] r[%s]\n", cc, quotes[cc, "s"], quotes[cc, "r"]) > "/dev/stderr"
1920 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1922 60a <test:escapes[1](
\v), lang=sh> ≡
1923 ________________________________________________________________________
1924 1 | echo escapes test
1925 2 | passtest $FANGLE -Rtest:comment-quote $TXT_SRC &>/dev/null || ( echo "Comment-quote failed" && exit 1 )
1926 |________________________________________________________________________
1929 Chapter 11 Recognizing Chunks
1930 Fangle recognizes noweb chunks, but as we also want better LaTeX integration we will recognize any of these:
1931 • notangle chunks matching the pattern ^<<.*?>>=
1932 • chunks beginning with \begin{lstlisting}, possibly with \Chunk{...} on the previous line
1933 • an older form I have used, beginning with \begin{Chunk}[options] --- also more suitable for plain LaTeX users1. Is there such a thing as plain LaTeX? ^1.
1935 The variable chunking is used to signify that we are processing a code chunk and not document text. In such a state, input lines will be assigned to the current chunk; otherwise they are ignored.
1937 We don't handle TeXmacs files natively yet, but instead emit unicode character sequences to mark up the text-export file, which we do process.
1938 These hacks detect the unicode character sequences and retro-fit in the old TeX parsing.
1939 We convert ↦ into a tab character.
1941 61a <recognize-chunk[1](
\v), lang=> ≡ 61b▿
1942 ________________________________________________________________________
1945 2 | # gsub("\n*$","");
1946 3 | # gsub("\n", " ");
1949 6 | /\xE2\x86\xA6/ {
1950 7 | gsub("\\xE2\\x86\\xA6", "\x09");
1952 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1953 TeXmacs back-tick handling is obscure, and a cut-n-paste back-tick from a shell window comes out as a unicode sequence2. that won't export to html, except as a NULL character (literal 0x00) ^2 that is fixed-up here.
1955 61b <recognize-chunk[2](
\v) ⇑61a, lang=> +≡ ▵61a 62a⊳
1956 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1958 10 | /\xE2\x80\x98/ {
1959 11 | gsub("\\xE2\\x80\\x98", "‘");
1961 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1962 In the TeXmacs output, the start of a chunk will appear like this:
1963 5b <example-chunk[1](arg1, arg2), lang=C> ≡
1964 We detect the start of a TeXmacs chunk by detecting the ≡ symbol which occurs near the end of the line. We obtain the chunk name, the chunk parameters, and the chunk language.
1966 62a <recognize-chunk[3](
\v) ⇑61a, lang=> +≡ ⊲61b 62b▿
1967 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1969 14 | /\xE2\x89\xA1/ {
1970 15 | if (match($0, "^ *([^[ ]* |)<([^[ ]*)\\[[0-9]*\\][(](.*)[)].*, lang=([^ ]*)>", line)) {
1971 16 | next_chunk_name=line[2];
1972 17 | get_texmacs_chunk_args(line[3], next_chunk_params);
1973 18 | gsub(ARG_SEPARATOR ",? ?", ";", line[3]);
1974 19 | params = "params=" line[3];
1975 20 | if ((line[4])) {
1976 21 | params = params ",language=" line[4]
1978 23 | get_tex_chunk_args(params, next_chunk_opts);
1979 24 | new_chunk(next_chunk_name, next_chunk_opts, next_chunk_params);
1980 25 | texmacs_chunking = 1;
1982 27 | # warning(sprintf("Unexpected chunk match: %s\n", $_))
1986 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1988 Our current scheme is to recognize the new lstlisting chunks, but these may be preceded by a \Chunk command which in LyX is a more convenient way to pass the chunk name to the \begin{lstlisting} command, and a more visible way to specify other lstset settings.
1989 The arguments to the \Chunk command are a name, and then a comma-separated list of key-value pairs after the manner of \lstset. (In fact within the LaTeX \Chunk macro (section 16.2.1) the text name= is prefixed to the argument which is then literally passed to \lstset).
1991 62b <recognize-chunk[4](
\v) ⇑61a, lang=awk> +≡ ▵62a 62c▿
1992 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1994 32 | if (match($0, "^\\\\Chunk{ *([^ ,}]*),?(.*)}", line)) {
1995 33 | next_chunk_name = line[1];
1996 34 | get_tex_chunk_args(line[2], next_chunk_opts);
2000 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2001 We also make a basic attempt to parse the name out of the \begin{lstlisting}[name=chunk-name] text, otherwise we fall back to the name found in the previous chunk command. This attempt is very basic and doesn't support commas, spaces or square brackets as part of the chunk name. We also recognize \begin{Chunk} which is convenient for some users3. but not yet supported in the LaTeX macros ^3.
2003 62c <recognize-chunk[5](
\v) ⇑61a, lang=> +≡ ▵62b 63a⊳
2004 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2005 38 | /^\\begin{lstlisting}|^\\begin{Chunk}/ {
2006 39 | if (match($0, "}.*[[,] *name= *{? *([^], }]*)", line)) {
2007 40 | new_chunk(line[1]);
2009 42 | new_chunk(next_chunk_name, next_chunk_opts);
2014 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2017 A chunk body in TeXmacs ends with |________... if it is the final chunklet of a chunk, or if there are further chunklets it ends with |\/\/\/... which is a depiction of a jagged line of torn paper.
2019 63a <recognize-chunk[6](
\v) ⇑61a, lang=> +≡ ⊲62c 63b▿
2020 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2021 47 | /^ *\|____________*/ && texmacs_chunking {
2022 48 | active_chunk="";
2023 49 | texmacs_chunking=0;
2026 52 | /^ *\|\/\\/ && texmacs_chunking {
2027 53 | texmacs_chunking=0;
2029 55 | active_chunk="";
2031 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2032 It has been observed that not every line of output while a TeXmacs chunk is active is a line of the chunk. This may no longer be true, but we set a variable texmacs_chunk if the current line is a chunk line.
2033 Initially we set this to zero...
2035 63b <recognize-chunk[7](
\v) ⇑61a, lang=> +≡ ▵63a 63c▿
2036 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2037 57 | texmacs_chunk=0;
2038 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2039 ...and then we look to see if the current line is a chunk line.
2040 TeXmacs lines look like this: 3 | main() { so we detect the lines by leading white space, digits, more white space and a vertical bar followed by at least one space.
2041 If we find such a line, we remove this line-header and set texmacs_chunk=1 as well as chunking=1.
2043 63c <recognize-chunk[8](
\v) ⇑61a, lang=> +≡ ▵63b 63d▿
2044 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2045 58 | /^ *[1-9][0-9]* *\| / {
2046 59 | if (texmacs_chunking) {
2048 61 | texmacs_chunk=1;
2049 62 | gsub("^ *[1-9][0-9]* *\\| ", "")
2052 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
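The line-header recognition and removal can be exercised on its own (a minimal sketch using the same regex as the chunk above):

```shell
printf '  3 | main() {\n' |
  awk '{ if ($0 ~ /^ *[1-9][0-9]* *\| /) { gsub("^ *[1-9][0-9]* *\\| ", ""); print } }'
```

which strips the numeric header and prints main() { alone.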
2053 When TeXmacs chunking, lines that commence with \/ or __ are not chunk content but visual framing, and are skipped.
2055 63d <recognize-chunk[9](
\v) ⇑61a, lang=> +≡ ▵63c 64a⊳
2056 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2057 65 | /^ *\.\/\\/ && texmacs_chunking {
2060 68 | /^ *__*$/ && texmacs_chunking {
2063 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2064 Any other line while TeXmacs chunking is active is considered to be a line-wrapped continuation line.
2066 64a <recognize-chunk[10](
\v) ⇑61a, lang=> +≡ ⊲63d 64b▿
2067 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2068 71 | texmacs_chunking {
2069 72 | if (! texmacs_chunk) {
2070 73 | # must be a texmacs continued line
2072 75 | texmacs_chunk=1;
2075 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2076 This final chunklet seems bogus and probably stops LyX working.
2078 64b <recognize-chunk[11](
\v) ⇑61a, lang=> +≡ ▵64a 64c▿
2079 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2080 78 | ! texmacs_chunk {
2081 79 | # texmacs_chunking=0;
2084 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2086 We recognize notangle style chunks too:
2088 64c <recognize-chunk[12](
\v) ⇑61a, lang=awk> +≡ ▵64b 64d▿
2089 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2090 82 | /^[<]<.*[>]>=/ {
2091 83 | if (match($0, "^[<]<(.*)[>]>= *$", line)) {
2093 85 | notangle_mode=1;
2094 86 | new_chunk(line[1]);
2098 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
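The notangle form is easy to test in isolation (a portable sketch using sub() rather than gawk's three-argument match):

```shell
printf '<<hello-world>>=\n' |
  awk '/^<<.*>>= *$/ { sub(/^<</, ""); sub(/>>= *$/, ""); print "chunk:", $0 }'
```

which extracts hello-world as the chunk name.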
2100 Likewise, we need to recognize when a chunk ends.
2102 The e in [e]nd{lstlisting} is surrounded by square brackets so that when this document is processed, this chunk doesn't terminate early when the lstlistings package recognizes its own end-string!4. This doesn't make sense as the regex is anchored with ^, which this line does not begin with! ^4
2104 64d <recognize-chunk[13](
\v) ⇑61a, lang=> +≡ ▵64c 65a⊳
2105 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2106 90 | /^\\[e]nd{lstlisting}|^\\[e]nd{Chunk}/ {
2108 92 | active_chunk="";
2111 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2114 65a <recognize-chunk[14](
\v) ⇑61a, lang=> +≡ ⊲64d 65b▿
2115 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2118 97 | active_chunk="";
2120 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2121 All other recognizers only take effect if we are chunking; there's no point in looking at lines if they aren't part of a chunk, so we just ignore them as efficiently as we can.
2123 65b <recognize-chunk[15](
\v) ⇑61a, lang=> +≡ ▵65a 65c▿
2124 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2125 99 | ! chunking { next; }
2126 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2128 Chunk contents are any lines read while chunking is true. Some chunk contents are special in that they refer to other chunks, and will be replaced by the contents of these chunks when the file is generated.
2129 We add the output record separator ORS to the line now, because we will set ORS to the empty string when we generate the output5. So that we can print partial lines using print instead of printf.
2130 To do: This doesn't make sense
2133 65c <recognize-chunk[16](
\v) ⇑61a, lang=> +≡ ▵65b
2134 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2135 100 | length(active_chunk) {
2136 101 | «process-chunk-tabs 65e»
2137 102 | «process-chunk 66b»
2139 |________________________________________________________________________
2142 If a chunk just consisted of plain text, we could handle the chunk like this:
2144 65d <process-chunk-simple[1](
\v), lang=> ≡
2145 ________________________________________________________________________
2146 1 | chunk_line(active_chunk, $0 ORS);
2147 |________________________________________________________________________
2150 but in fact a chunk can include references to other chunks. Chunk includes are traditionally written as <<chunk-name>> but we support other variations, some of which are more suitable for particular editing systems.
2151 However, we also process tabs at this point. A tab at input can be replaced by a number of spaces defined by the tabs variable, set by the -T option. Of course this is poor tab behaviour; we should probably have the option to use proper counted tab-stops and process this on output.
2153 65e <process-chunk-tabs[1](
\v), lang=> ≡
2154 ________________________________________________________________________
2155 1 | if (length(tabs)) {
2156 2 | gsub("\t", tabs);
2158 |________________________________________________________________________
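The behaviour is simply gsub with a precomputed spaces string; for example (a sketch, with tabs passed directly rather than via indent_string and the -T option):

```shell
printf '\tint x;\n' |
  awk -v tabs="    " '{ if (length(tabs)) gsub("\t", tabs); print }'
```

Each tab becomes four spaces on output.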
2162 If \lstset{escapeinside={=<}{>}} is set, then we can use <chunk-name ?> in listings. The sequence =< was chosen because:
2163 1. it is a better mnemonic than <<chunk-name>> in that the = sign signifies equivalence or substitutability.
2164 2. and because =< is not valid in C or any language I can think of.
2165 3. and also because lstlistings doesn't like >> as an end delimiter for the texcl escape, so we must make do with a single > which is better complemented by =< than by <<.
2166 Unfortunately the =<...> that we use re-enters a LaTeX parsing mode in which some characters are special, e.g. # \ and so these cause trouble if used in arguments to \chunkref. At some point I must fix the LaTeX command \chunkref so that it can accept these literally, but until then, when writing chunkref arguments that need these characters, I must use the forms \textbackslash{} and \#; so I also define a hacky chunk delatex, to be used further on, whose purpose is to remove these from any arguments parsed by fangle.
2168 66a <delatex[1](text
\v\v), lang=> ≡
2169 ________________________________________________________________________
2171 2 | gsub("\\\\#", "#", ${text});
2172 3 | gsub("\\\\textbackslash{}", "\\", ${text});
2173 4 | gsub("\\\\\\^", "^", ${text});
2174 |________________________________________________________________________
2177 As each chunk line may contain more than one chunk include, we will split out chunk includes in an iterative fashion6. Contrary to our use of split when substituting parameters in chapter ? ^6.
2178 First, as long as the chunk contains a \chunkref command we take as much as we can up to the first \chunkref command.
2179 TeXmacs text output uses «...» which comes out as the unicode sequences 0xC2 0xAB ... 0xC2 0xBB. Modern awk will interpret [^\xC2\xBB] as a single unicode character if LANG is set correctly to the sub-type UTF-8, e.g. LANG=en_GB.UTF-8, otherwise [^\xC2\xBB] will be treated as a two character negated match, but this should not interfere with the function.
2181 66b <process-chunk[1](
\v), lang=> ≡ 66c▿
2182 ________________________________________________________________________
2185 3 | while(match(chunk,"(\xC2\xAB)([^\xC2\xBB]*) [^\xC2\xBB]*\xC2\xBB", line) ||
2187 5 | "([=]<\\\\chunkref{([^}>]*)}(\\(.*\\)|)>|<<([a-zA-Z_][-a-zA-Z0-9_]*)>>)",
2190 8 | chunklet = substr(chunk, 1, RSTART - 1);
2191 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2192 We keep track of the indent count, by counting the number of literal characters found. We can then preserve this indent on each output line when multi-line chunks are expanded.
2193 We then process this first part literal text, and set the chunk which is still to be processed to be the text after the \chunkref command, which we will process next as we continue around the loop.
2195 66c <process-chunk[2](
\v) ⇑66b, lang=> +≡ ▵66b 67a⊳
2196 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2197 9 | indent += length(chunklet);
2198 10 | chunk_line(active_chunk, chunklet);
2199 11 | chunk = substr(chunk, RSTART + RLENGTH);
2200 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2201 We then consider the type of chunk command we have found, whether it is the fangle style command beginning with =< or the older notangle style beginning with <<.
2202 Fangle chunks may have parameters contained within square brackets. These will be matched in line[3] and are considered at this stage of processing to be part of the name of the chunk to be included.
2204 67a <process-chunk[3](
\v) ⇑66b, lang=> +≡ ⊲66c 67b▿
2205 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2206 12 | if (substr(line[1], 1, 1) == "=") {
2207 13 | # chunk name up to }
2208 14 | «delatex
\v(line[3]
\v) 66a»
2209 15 | chunk_include(active_chunk, line[2] line[3], indent);
2210 16 | } else if (substr(line[1], 1, 1) == "<") {
2211 17 | chunk_include(active_chunk, line[4], indent);
2212 18 | } else if (line[1] == "\xC2\xAB") {
2213 19 | chunk_include(active_chunk, line[2], indent);
2215 21 | error("Unknown chunk fragment: " line[1]);
2217 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2218 The loop will continue until there are no more chunkref statements in the text, at which point we process the final part of the chunk.
2220 67b <process-chunk[4](
\v) ⇑66b, lang=> +≡ ▵67a 67c▿
2221 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2223 24 | chunk_line(active_chunk, chunk);
2224 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2225 We add the newline character as a chunklet on its own, to make it easier to detect new lines and thus manage indentation when processing the output.
2227 67c <process-chunk[5](
\v) ⇑66b, lang=> +≡ ▵67b
2228 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2229 25 | chunk_line(active_chunk, "\n");
2230 |________________________________________________________________________
2233 We will also permit a chunk-part number to follow in square brackets, so that <chunk-name[1] ?> will refer to the first part only. This can make it easy to include a C function prototype in a header file, if the first part of the chunk is just the function prototype without the trailing semi-colon. The header file would include the prototype with the trailing semi-colon, like this:
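For illustration (a hypothetical sketch with a made-up chunk name, since the original example did not survive export), the header file chunk could read:

```
=<\chunkref{C-function[1]}>;
```

where part 1 of C-function is the bare prototype and the trailing semi-colon is supplied literally by the header chunk.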
2235 This is handled in section 13.1.1
2236 We should perhaps introduce a notion of language specific chunk options; so that perhaps we could specify:
2237 =<\chunkref{chunk-name[function-declaration]}
2238 which applies a transform function-declaration to the chunk --- which in this case would extract a function prototype from a function.
2241 Chapter 12 Processing Options
2242 At the start, first we set the default options.
2244 69a <default-options[1](
\v), lang=> ≡
2245 ________________________________________________________________________
2248 3 | notangle_mode=0;
2251 |________________________________________________________________________
2254 Then we use getopt in the standard way, and null out ARGV afterwards in the normal AWK fashion.
2256 69b <read-options[1](
\v), lang=> ≡
2257 ________________________________________________________________________
2258 1 | Optind = 1 # skip ARGV[0]
2259 2 | while(getopt(ARGC, ARGV, "R:LdT:hr")!=-1) {
2260 3 | «handle-options 69c»
2262 5 | for (i=1; i<Optind; i++) { ARGV[i]=""; }
2263 |________________________________________________________________________
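Note that getopt here is the user-level awk function distributed with gawk, not a builtin. The nulling idiom matters because awk treats every remaining ARGV entry as a file to read; setting an entry to the empty string makes awk skip it. A self-contained sketch (simulating the argument list rather than calling getopt):

```shell
awk 'BEGIN {
  # simulate: fangle -d /dev/null
  ARGC = 3; ARGV[1] = "-d"; ARGV[2] = "/dev/null";
  for (i = 1; i < ARGC; i++) {
    if (ARGV[i] == "-d") { debug = 1; ARGV[i] = ""; }  # consume the option
  }
  print debug + 0;
}'
```

This prints 1, and awk then reads only /dev/null instead of trying to open a file literally named -d.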
2266 This is how we handle our options:
2268 69c <handle-options[1](
\v), lang=> ≡
2269 ________________________________________________________________________
2270 1 | if (Optopt == "R") root = Optarg;
2271 2 | else if (Optopt == "r") root="";
2272 3 | else if (Optopt == "L") linenos = 1;
2273 4 | else if (Optopt == "d") debug = 1;
2274 5 | else if (Optopt == "T") tabs = indent_string(Optarg+0);
2275 6 | else if (Optopt == "h") help();
2276 7 | else if (Optopt == "?") help();
2277 |________________________________________________________________________
2280 We do all of this at the beginning of the program:
2282 69d <begin[1](
\v), lang=> ≡
2283 ________________________________________________________________________
2286 3 | «mode-definitions 51b»
2287 4 | «default-options 69a»
2289 6 | «read-options 69b»
2291 |________________________________________________________________________
2294 And we have a simple help function:
2296 69e <help()[1](
\v), lang=> ≡
2297 ________________________________________________________________________
2298 1 | function help() {
2300 3 | print " fangle [-L] -R<rootname> [source.tex ...]"
2301 4 | print " fangle -r [source.tex ...]"
2302 5 | print " If the filename, source.tex is not specified then stdin is used"
2304 7 | print "-L causes the C statement: #line <lineno> \"filename\" to be issued"
2305 8 | print "-R causes the named root to be written to stdout"
2306 9 | print "-r lists all roots in the file (even those used elsewhere)"
2309 |________________________________________________________________________
2312 Chapter 13 Generating the Output
2313 We generate output by calling output_chunk, or listing the chunk names.
2315 71a <generate-output[1](
\v), lang=> ≡
2316 ________________________________________________________________________
2317 1 | if (length(root)) output_chunk(root);
2318 2 | else output_chunk_names();
2319 |________________________________________________________________________
2322 We also have some other output debugging:
2324 71b <debug-output[1](
\v), lang=> ≡
2325 ________________________________________________________________________
2327 2 | print "------ chunk names "
2328 3 | output_chunk_names();
2329 4 | print "====== chunks"
2330 5 | output_chunks();
2331 6 | print "++++++ debug"
2332 7 | for (a in chunks) {
2333 8 | print a "=" chunks[a];
2336 |________________________________________________________________________
2339 We do both of these at the end. We also set ORS="" because each chunklet is not necessarily a complete line, and we already added ORS to each input line in section 11.4.
2341 71c <end[1](
\v), lang=> ≡
2342 ________________________________________________________________________
2344 2 | «debug-output 71b»
2346 4 | «generate-output 71a»
2348 |________________________________________________________________________
2351 We write chunk names like this. If we seem to be running in notangle compatibility mode, then we enclose the name in << and >> the same way notangle does:
2353 71d <output_chunk_names()[1](
\v), lang=> ≡
2354 ________________________________________________________________________
2355 1 | function output_chunk_names( c, prefix, suffix)
2357 3 | if (notangle_mode) {
2361 7 | for (c in chunk_names) {
2362 8 | print prefix c suffix "\n";
2365 |________________________________________________________________________
2368 This function would write out all chunks
2370 71e <output_chunks()[1](
\v), lang=> ≡
2371 ________________________________________________________________________
2372 1 | function output_chunks( a)
2374 3 | for (a in chunk_names) {
2375 4 | output_chunk(a);
2379 8 | function output_chunk(chunk) {
2381 10 | lineno_needed = linenos;
2383 12 | write_chunk(chunk);
2386 |________________________________________________________________________
2389 13.1 Assembling the Chunks
2390 chunk_path holds a string consisting of the names of all the chunks that resulted in this chunk being output. It should probably also contain the source line numbers at which each inclusion occurred.
2391 We first initialize the mode tracker for this chunk.
2393 72a <write_chunk()[1](
\v), lang=awk> ≡ 72b▿
2394 ________________________________________________________________________
2395 1 | function write_chunk(chunk_name) {
2396 2 | «awk-delete-array
\v(context
\v) 35d»
2397 3 | return write_chunk_r(chunk_name, context);
2400 6 | function write_chunk_r(chunk_name, context, indent, tail,
2402 8 | chunk_path, chunk_args,
2404 10 | context_origin,
2405 11 | chunk_params, part, max_part, part_line, frag, max_frag, text,
2406 12 | chunklet, only_part, call_chunk_args, new_context)
2408 14 | if (debug) debug_log("write_chunk_r(" chunk_name ")");
2409 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
As mentioned in section ?, a chunk name may contain a part specifier in square brackets, limiting the parts that should be emitted.
72b <write_chunk()[2]() ⇑72a, lang=> +≡ ▵72a 72c▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
15 | if (match(chunk_name, "^(.*)\\[([0-9]*)\\]$", chunk_name_parts)) {
16 | chunk_name = chunk_name_parts[1];
17 | only_part = chunk_name_parts[2];
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We then create a mode tracker.
72c <write_chunk()[3]() ⇑72a, lang=> +≡ ▵72b 73a⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
19 | context_origin = push_mode_tracker(context, chunks[chunk_name, "language"], "");
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We extract into chunk_params the names of the parameters that this chunk accepts, whose values were (optionally) passed in chunk_args.
73a <write_chunk()[4]() ⇑72a, lang=> +≡ ⊲72c 73b▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
20 | split(chunks[chunk_name, "params"], chunk_params, " *; *");
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
To assemble a chunk, we write out each part.
73b <write_chunk()[5]() ⇑72a, lang=> +≡ ▵73a
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
21 | if (! (chunk_name in chunk_names)) {
22 | error(sprintf(_"The root module <<%s>> was not defined.\nUsed by: %s",\
23 | chunk_name, chunk_path));
26 | max_part = chunks[chunk_name, "part"];
27 | for(part = 1; part <= max_part; part++) {
28 | if (! only_part || part == only_part) {
29 | «write-part 73c»
32 | if (! pop_mode_tracker(context, context_origin)) {
33 | dump_mode_tracker(context);
34 | error(sprintf(_"Module %s did not close context properly.\nUsed by: %s\n", chunk_name, chunk_path));
|________________________________________________________________________
A part can either be a chunklet of lines, or an include of another chunk.
Chunks may also have parameters, specified in LaTeX style with braces after the chunk name --- looking like this in the document: chunkname{param1, param2}. Arguments are passed in square brackets: \chunkref{chunkname}[arg1, arg2].
Before we process each part, we check that the source position hasn't changed unexpectedly, so that we know whether we need to output a new file-line directive.
73c <write-part[1](), lang=> ≡
________________________________________________________________________
1 | «check-source-jump 75d»
3 | chunklet = chunks[chunk_name, "part", part];
4 | if (chunks[chunk_name, "part", part, "type"] == part_type_chunk) {
5 | «write-included-chunk 73d»
6 | } else if (chunklet SUBSEP "line" in chunks) {
7 | «write-chunklets 74a»
9 | # empty last chunklet
|________________________________________________________________________
To write an included chunk, we must detect any optional chunk arguments in parentheses. Then we recurse, calling write_chunk().
73d <write-included-chunk[1](), lang=> ≡
________________________________________________________________________
1 | if (match(chunklet, "^([^\\[\\(]*)\\((.*)\\)$", chunklet_parts)) {
2 | chunklet = chunklet_parts[1];
4 | gsub(sprintf("%c",11), "", chunklet);
5 | gsub(sprintf("%c",11), "", chunklet_parts[2]);
6 | parse_chunk_args("c-like", chunklet_parts[2], call_chunk_args, "(");
7 | for (c in call_chunk_args) {
8 | call_chunk_args[c] = expand_chunk_args(call_chunk_args[c], chunk_params, chunk_args);
11 | split("", call_chunk_args);
14 | write_chunk_r(chunklet, context,
15 | chunks[chunk_name, "part", part, "indent"] indent,
16 | chunks[chunk_name, "part", part, "tail"],
17 | chunk_path "\n " chunk_name,
18 | call_chunk_args);
|________________________________________________________________________
Before we output a chunklet of lines, we first emit the file and line number if we have one, and if it is safe to do so.
Chunklets are generally broken up by includes, so the start of a chunklet is a good place to do this. Then we output each line of the chunklet.
When it is not safe, such as in the middle of a multi-line macro definition, lineno_suppressed is set to true, and in such a case we note that we want to emit the line statement when it is next safe.
74a <write-chunklets[1](), lang=> ≡ 74b▿
________________________________________________________________________
1 | max_frag = chunks[chunklet, "line"];
2 | for(frag = 1; frag <= max_frag; frag++) {
3 | «write-file-line 75c»
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We then extract the chunklet text and expand any arguments.
74b <write-chunklets[2]() ⇑74a, lang=> +≡ ▵74a 74c▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
5 | text = chunks[chunklet, frag];
7 | /* check params */
8 | text = expand_chunk_args(text, chunk_params, chunk_args);
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
If the text is a single newline (which we keep separate --- see 6) then we increment the line number. In the case where this is the last line of a chunk, and it is not a top-level chunk, we replace the newline with an empty string --- because the chunk that included this chunk will have the newline at the end of the line that included this chunk.
We also note by newline = 1 that we have started a new line, so that indentation can be managed with the following piece of text.
74c <write-chunklets[3]() ⇑74a, lang=> +≡ ▵74b 74d▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
10 | if (text == "\n") {
12 | if (part == max_part && frag == max_frag && length(chunk_path)) {
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
If this text does not represent a newline, but we see that we are the first piece of text on a new line, then we prefix our text with the current indent.
Note 1. newline is a global output-state variable, but the indent is not.
74d <write-chunklets[4]() ⇑74a, lang=> +≡ ▵74c 75a⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
18 | } else if (length(text) || length(tail)) {
19 | if (newline) text = indent text;
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
Tail will soon no longer be relevant once mode-detection is in place.
75a <write-chunklets[5]() ⇑74a, lang=> +≡ ⊲74d 75b▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
23 | text = text tail;
24 | mode_tracker(context, text);
25 | print untab(transform_escape(context, text, context_origin));
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
If a line ends in a backslash --- suggesting continuation --- then we suppress outputting file-line, as it would probably break the continued lines.
75b <write-chunklets[6]() ⇑74a, lang=> +≡ ▵75a
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
27 | lineno_suppressed = substr(lastline, length(lastline)) == "\\";
|________________________________________________________________________
Of course there is no point in actually outputting the source filename and line number (file-line) if they don't say anything new! We only need to emit them if they aren't what is expected, or if we were not able to emit one when they changed.
75c <write-file-line[1](), lang=> ≡
________________________________________________________________________
1 | if (newline && lineno_needed && ! lineno_suppressed) {
2 | filename = a_filename;
3 | lineno = a_lineno;
4 | print "#line " lineno " \"" filename "\"\n"
5 | lineno_needed = 0;
|________________________________________________________________________
We check if a new file-line is needed by checking if the source line matches what we (or a compiler) would expect.
75d <check-source-jump[1](), lang=> ≡
________________________________________________________________________
1 | if (linenos && (chunk_name SUBSEP "part" SUBSEP part SUBSEP "FILENAME" in chunks)) {
2 | a_filename = chunks[chunk_name, "part", part, "FILENAME"];
3 | a_lineno = chunks[chunk_name, "part", part, "LINENO"];
4 | if (a_filename != filename || a_lineno != lineno) {
5 | lineno_needed++;
|________________________________________________________________________
Chapter 14 Storing Chunks
Awk has pretty limited data structures, so we will use two main hashes. Uninterrupted sequences of a chunk will be stored in chunklets, and the chunklets used in a chunk will be stored in chunks.
77a <constants[2]() ⇑37a, lang=> +≡ ⊲37a
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2 | part_type_chunk=1;
|________________________________________________________________________
The params mentioned are not chunk parameters for parameterized chunks, as mentioned in 9.2, but the lstlistings style parameters used in the \Chunk command1. The params parameter is used to hold the parameters for parameterized chunks ^1.
77b <chunk-storage-functions[1](), lang=> ≡ 77c▿
________________________________________________________________________
1 | function new_chunk(chunk_name, opts, args,
5 | # HACK WHILE WE CHANGE TO ( ) for PARAM CHUNKS
6 | gsub("\\(\\)$", "", chunk_name);
7 | if (! (chunk_name in chunk_names)) {
8 | if (debug) print "New chunk " chunk_name;
9 | chunk_names[chunk_name];
10 | for (p in opts) {
11 | chunks[chunk_name, p] = opts[p];
12 | if (debug) print "chunks[" chunk_name "," p "] = " opts[p];
14 | for (p in args) {
15 | chunks[chunk_name, "params", p] = args[p];
17 | if ("append" in opts) {
18 | append=opts["append"];
19 | if (! (append in chunk_names)) {
20 | warning("Chunk " chunk_name " is appended to chunk " append " which is not defined yet");
21 | new_chunk(append);
23 | chunk_include(append, chunk_name);
24 | chunk_line(append, ORS);
27 | active_chunk = chunk_name;
28 | prime_chunk(chunk_name);
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
77c <chunk-storage-functions[2]() ⇑77b, lang=> +≡ ▵77b 78a⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
31 | function prime_chunk(chunk_name)
33 | chunks[chunk_name, "part", ++chunks[chunk_name, "part"] ] = \
34 | chunk_name SUBSEP "chunklet" SUBSEP "" ++chunks[chunk_name, "chunklet"];
35 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "FILENAME"] = FILENAME;
36 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "LINENO"] = FNR + 1;
39 | function chunk_line(chunk_name, line){
40 | chunks[chunk_name, "chunklet", chunks[chunk_name, "chunklet"],
41 | ++chunks[chunk_name, "chunklet", chunks[chunk_name, "chunklet"], "line"] ] = line;
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
chunk_include represents a chunkref statement, and stores the requirement to include another chunk. The parameter indent represents the quantity of literal text characters that preceded this chunkref statement, and therefore by how much additional lines of the included chunk should be indented.
78a <chunk-storage-functions[3]() ⇑77b, lang=> +≡ ⊲77c 78b▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
44 | function chunk_include(chunk_name, chunk_ref, indent, tail)
46 | chunks[chunk_name, "part", ++chunks[chunk_name, "part"] ] = chunk_ref;
47 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "type" ] = part_type_chunk;
48 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "indent" ] = indent_string(indent);
49 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "tail" ] = tail;
50 | prime_chunk(chunk_name);
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
The indent is calculated by indent_string, which may in future convert some spaces into tab characters. This function works by generating a printf padded format string, like %22s for an indent of 22, and then printing an empty string using that format.
78b <chunk-storage-functions[4]() ⇑77b, lang=> +≡ ▵78a
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
53 | function indent_string(indent) {
54 | return sprintf("%" indent "s", "");
|________________________________________________________________________
I use Arnold Robbins' public domain getopt (1993 revision). This is probably the same one that is covered in chapter 12 of "Edition 3 of GAWK: Effective AWK Programming: A User's Guide for GNU Awk", but as that is licensed under the GNU Free Documentation License, Version 1.3, which conflicts with the GPL3, I can't use it from there (or its accompanying explanations), so I do my best to explain how it works here.
The getopt.awk header is:
79a <getopt.awk-header[1](), lang=> ≡
________________________________________________________________________
1 | # getopt.awk --- do C library getopt(3) function in awk
3 | # Arnold Robbins, arnold@skeeve.com, Public Domain
5 | # Initial version: March, 1991
6 | # Revised: May, 1993
|________________________________________________________________________
The provided explanation is:
79b <getopt.awk-notes[1](), lang=> ≡
________________________________________________________________________
1 | # External variables:
2 | # Optind -- index in ARGV of first nonoption argument
3 | # Optarg -- string value of argument to current option
4 | # Opterr -- if nonzero, print our own diagnostic
5 | # Optopt -- current option letter
8 | # -1 at end of options
9 | # ? for unrecognized option
10 | # <c> a character representing the current option
12 | # Private Data:
13 | # _opti -- index in multi-flag option, e.g., -abc
|________________________________________________________________________
The function follows. The final two parameters, thisopt and i, are local variables and not parameters --- as indicated by the multiple spaces preceding them. Awk doesn't care; the multiple spaces are a convention to help us humans.
79c <getopt.awk-getopt()[1](), lang=> ≡ 80a⊳
________________________________________________________________________
1 | function getopt(argc, argv, options, thisopt, i)
3 | if (length(options) == 0) # no options given
5 | if (argv[Optind] == "--") { # all done
9 | } else if (argv[Optind] !~ /^-[^: \t\n\f\r\v\b]/) {
13 | if (_opti == 0)
15 | thisopt = substr(argv[Optind], _opti, 1)
16 | Optopt = thisopt
17 | i = index(options, thisopt)
20 | printf("%c -- invalid option\n",
21 | thisopt) > "/dev/stderr"
22 | if (_opti >= length(argv[Optind])) {
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
At this point, the option has been found and we need to know if it takes any arguments.
80a <getopt.awk-getopt()[2]() ⇑79c, lang=> +≡ ⊲79c
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
29 | if (substr(options, i + 1, 1) == ":") {
30 | # get option argument
31 | if (length(substr(argv[Optind], _opti + 1)) > 0)
32 | Optarg = substr(argv[Optind], _opti + 1)
34 | Optarg = argv[++Optind]
38 | if (_opti == 0 || _opti >= length(argv[Optind])) {
|________________________________________________________________________
A test program is built in, too:
80b <getopt.awk-begin[1](), lang=> ≡
________________________________________________________________________
2 | Opterr = 1 # default is to diagnose
3 | Optind = 1 # skip ARGV[0]
5 | if (_getopt_test) {
6 | while ((_go_c = getopt(ARGC, ARGV, "ab:cd")) != -1)
7 | printf("c = <%c>, optarg = <%s>\n",
9 | printf("non-option arguments:\n")
10 | for (; Optind < ARGC; Optind++)
11 | printf("\tARGV[%d] = <%s>\n",
12 | Optind, ARGV[Optind])
|________________________________________________________________________
The entire getopt.awk is made out of these chunks, in order:
80c <getopt.awk[1](), lang=> ≡
________________________________________________________________________
1 | «getopt.awk-header 79a»
3 | «getopt.awk-notes 79b»
4 | «getopt.awk-getopt() 79c»
5 | «getopt.awk-begin 80b»
|________________________________________________________________________
Although we only want the header and function:
81a <getopt[1](), lang=> ≡
________________________________________________________________________
1 | # try: locate getopt.awk for the full original file
2 | # as part of your standard awk installation
3 | «getopt.awk-header 79a»
5 | «getopt.awk-getopt() 79c»
|________________________________________________________________________
Chapter 16 Fangle LaTeX source code
Here we define a LyX .module file that makes it convenient to use LyX for writing such literate programs.
This file ./fangle.module can be installed in your personal .lyx/layouts folder. You will need to Tools > Reconfigure so that LyX notices it. It adds a new format Chunk, which should precede every listing and contain the chunk name.
83a <./fangle.module[1](), lang=lyx-module> ≡
________________________________________________________________________
1 | #\DeclareLyXModule{Fangle Literate Listings}
2 | #DescriptionBegin
3 | # Fangle literate listings allow one to write
4 | # literate programs after the fashion of noweb, but without having
5 | # to use noweave to generate the documentation. Instead the listings
6 | # package is extended in conjunction with the noweb package to implement
7 | # the code formatting directly as latex.
8 | # The fangle awk script
11 | «gpl3-copyright.hashed 83b»
16 | «./fangle.sty 84d»
19 | «chunkstyle 84a»
|________________________________________________________________________
Because LyX modules are not yet a language supported by fangle or lstlistings, we resort to this fake awk chunk below in order to have each line of the GPL3 license commence with a #.
83b <gpl3-copyright.hashed[1](), lang=awk> ≡
________________________________________________________________________
1 | #«gpl3-copyright 4a»
|________________________________________________________________________
16.1.1 The Chunk style
The purpose of the chunk style is to make it easier for LyX users to provide the name to lstlistings. Normally this requires right-clicking on the listing, choosing settings, advanced, and then typing name=chunk-name. This has the further disadvantage that the name (and other options) are not generally visible during document editing.
The chunk style is defined as a LaTeX command, so that all text on the same line is passed to the LaTeX command Chunk. This makes it easy to parse using fangle, and easy to pass these options on to the listings package. The first word in a chunk section should be the chunk name, and will have name= prepended to it. Any other words are accepted arguments to lstset.
We set PassThru to 1 because the user is actually entering raw LaTeX.
84a <chunkstyle[1](), lang=> ≡ 84b▿
________________________________________________________________________
2 | LatexType Command
4 | Margin First_Dynamic
5 | LeftMargin Chunk:xxx
7 | LabelType Static
8 | LabelString "Chunk:"
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
To make the label very visible we choose a larger font coloured red.
84b <chunkstyle[2]() ⇑84a, lang=> +≡ ▵84a
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
|________________________________________________________________________
16.1.2 The chunkref style
We also define the Chunkref style, which can be used to express cross references to chunks.
84c <chunkref[1](), lang=> ≡
________________________________________________________________________
1 | InsetLayout Chunkref
2 | LyxType charstyle
3 | LatexType Command
4 | LatexName chunkref
|________________________________________________________________________
We require the listings, noweb and xargs packages. As noweb defines its own \code environment, we re-define the one that the LyX logical markup module expects here.
84d <./fangle.sty[1](), lang=tex> ≡ 85a⊳
________________________________________________________________________
1 | \usepackage{listings}%
2 | \usepackage{noweb}%
3 | \usepackage{xargs}%
4 | \renewcommand{\code}[1]{\texttt{#1}}%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We also define a CChunk macro, for use as \begin{CChunk}, which will need renaming to \begin{Chunk} when I can do this without clashing with \Chunk.
85a <./fangle.sty[2]() ⇑84d, lang=> +≡ ⊲84d 85b▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
5 | \lstnewenvironment{Chunk}{\relax}{\relax}%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We also define a suitable \lstset of parameters that suit the literate programming style, after the fashion of noweave.
85b <./fangle.sty[3]() ⇑84d, lang=> +≡ ▵85a 85c▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
6 | \lstset{numbers=left, stepnumber=5, numbersep=5pt,
7 | breaklines=false,basicstyle=\ttfamily,
8 | numberstyle=\tiny, language=C}%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We also define a notangle-like mechanism for escaping to LaTeX from the listing, and by which we can refer to other listings. We declare the =<...> sequence to contain LaTeX code, and include another chunk like this: <chunkname ?>. However, because =<...> is already defined to contain LaTeX code for this document --- this is a fangle document after all --- the code fragment below effectively contains the LaTeX code: }{. To avoid problems with document generation, I had to declare an lstlistings property, escapeinside={}, for this listing only; in LyX this was done by right-clicking the listings inset and choosing settings->advanced. Therefore =< isn't interpreted literally here, in a listing where the escape sequence is already defined as shown... we need to somehow escape this representation...
85c <./fangle.sty[4]() ⇑84d, lang=> +≡ ▵85b 85d▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
9 | \lstset{escapeinside={=<}{>}}%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
Although our macros will contain the @ symbol, they will be included in a \makeatletter section by LyX; however we keep the commented-out \makeatletter as a reminder. The listings package likes to centre the titles, but noweb titles are specially formatted and must be left aligned. The simplest way to do this turned out to be by removing the definition of \lst@maketitle. This may interact badly if other listings want a regular title or caption. We remember the old maketitle in case we need it.
85d <./fangle.sty[5]() ⇑84d, lang=> +≡ ▵85c 85e▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
11 | %somehow re-defining maketitle gives us a left-aligned title
12 | %which is exactly what our specially formatted title needs!
13 | \global\let\fangle@lst@maketitle\lst@maketitle%
14 | \global\def\lst@maketitle{}%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
16.2.1 The chunk command
Our chunk command accepts one argument, and calls \lstset. Although \lstset will note the name, this is erased when the next lstlisting starts, so we make a note of it in \lst@chunkname and restore it in the lstlistings Init hook.
85e <./fangle.sty[6]() ⇑84d, lang=> +≡ ▵85d 86a⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
16 | \lstset{title={\fanglecaption},name=#1}%
17 | \global\edef\lst@chunkname{\lst@intname}%
19 | \def\lst@chunkname{\empty}%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
16.2.1.1 Chunk parameters
Fangle permits parameterized chunks, and requires the parameters to be specified as listings options. The fangle script uses this, and although we don't do anything with these in the LaTeX code right now, we need to stop the listings package complaining.
86a <./fangle.sty[7]() ⇑84d, lang=> +≡ ⊲85e 86b▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
20 | \lst@Key{params}\relax{\def\fangle@chunk@params{#1}}%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
As it is common to define a chunk which then needs appending to another chunk, and annoying to have to declare a single line chunk to manage the include, we support an append= option.
86b <./fangle.sty[8]() ⇑84d, lang=> +≡ ▵86a 86c▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
21 | \lst@Key{append}\relax{\def\fangle@chunk@append{#1}}%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
16.2.2 The noweb styled caption
We define a public macro \fanglecaption which can be set as a regular title. By means of \protect, it expands to \fangle@caption at the appropriate time when the caption is emitted.
86c <./fangle.sty[9]() ⇑84d, lang=> +≡ ▵86b 86d▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\def\fanglecaption{\protect\fangle@caption}%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
22c ⟨some-chunk 19b⟩+≡ ⊲22b 24d⊳
In this example, the current chunk is 22c, and therefore the third chunk on page 22.
Its name is some-chunk.
The first chunk with this name (19b) occurs as the second chunk on page 19.
The previous chunk (22b) with the same name is the second chunk on page 22.
The next chunk (24d) is the fourth chunk on page 24.
Figure 1. Noweb Heading
The general noweb output format compactly identifies the current chunk, and references to the first chunk, and the previous and next chunks that have the same name.
This means that we need to keep a counter for each chunk-name, that we use to count chunks of the same name.
16.2.3 The chunk counter
It would be natural to have a counter for each chunk name, but TeX would soon run out of counters1. ...soon did run out of counters, and so I had to re-write the LaTeX macros to share a counter as described here. ^1, so we have one counter which we save at the end of a chunk and restore at the beginning of a chunk.
86d <./fangle.sty[10]() ⇑84d, lang=> +≡ ▵86c 87c⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
22 | \newcounter{fangle@chunkcounter}%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We construct the name of the variable used to store the counter by prefixing the text lst-chunk- onto the chunk's own name, and store it in \chunkcount.
We save the counter like this:
87a <save-counter[1](), lang=> ≡
________________________________________________________________________
\global\expandafter\edef\csname \chunkcount\endcsname{\arabic{fangle@chunkcounter}}%
|________________________________________________________________________
and restore the counter like this:
87b <restore-counter[1](), lang=> ≡
________________________________________________________________________
\setcounter{fangle@chunkcounter}{\csname \chunkcount\endcsname}%
|________________________________________________________________________
If there does not already exist a variable whose name is stored in \chunkcount, then we know we are the first chunk with this name, and then define a counter.
Although chunks of the same name share a common counter, they must still be distinguished. We use the internal name of the listing, suffixed by the counter value. So the first chunk might be something-1 and the second chunk something-2, etc.
We also calculate the name of the previous chunk if we can (before we increment the chunk counter). If this is the first chunk of that name, then \prevchunkname is set to \relax, which the noweb package will interpret as not existing.
87c <./fangle.sty[11]() ⇑84d, lang=> +≡ ⊲86d 87d▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
23 | \def\fangle@caption{%
24 | \edef\chunkcount{lst-chunk-\lst@intname}%
25 | \@ifundefined{\chunkcount}{%
26 | \expandafter\gdef\csname \chunkcount\endcsname{0}%
27 | \setcounter{fangle@chunkcounter}{\csname \chunkcount\endcsname}%
28 | \let\prevchunkname\relax%
30 | \setcounter{fangle@chunkcounter}{\csname \chunkcount\endcsname}%
31 | \edef\prevchunkname{\lst@intname-\arabic{fangle@chunkcounter}}%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
After incrementing the chunk counter, we then define the name of this chunk, as well as the name of the first chunk.
87d <./fangle.sty[12]() ⇑84d, lang=> +≡ ▵87c 87e▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
33 | \addtocounter{fangle@chunkcounter}{1}%
34 | \global\expandafter\edef\csname \chunkcount\endcsname{\arabic{fangle@chunkcounter}}%
35 | \edef\chunkname{\lst@intname-\arabic{fangle@chunkcounter}}%
36 | \edef\firstchunkname{\lst@intname-1}%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We now need to calculate the name of the next chunk. We do this by temporarily stepping the counter on by one; however there may not actually be another chunk with this name! We detect this by also defining a label for each chunk based on the chunkname. If there is a next chunkname then it will define a label with that name. As labels are persistent, we can at least tell the second time LaTeX is run. If we don't find such a defined label then we define \nextchunkname to \relax.
87e <./fangle.sty[13]() ⇑84d, lang=> +≡ ▵87d 88a⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
37 | \addtocounter{fangle@chunkcounter}{1}%
38 | \edef\nextchunkname{\lst@intname-\arabic{fangle@chunkcounter}}%
39 | \@ifundefined{r@label-\nextchunkname}{\let\nextchunkname\relax}{}%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
The noweb package requires that we define a \sublabel for every chunk, with a unique name, which is then used to print out its navigation hints.
We also define a regular label for this chunk, as was mentioned above when we calculated \nextchunkname. This requires LaTeX to be run at least twice after new chunk sections are added --- but noweb required that anyway.
88a <./fangle.sty[14]() ⇑84d, lang=> +≡ ⊲87e 88b▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
40 | \sublabel{\chunkname}%
41 | % define this label for every chunk instance, so we
42 | % can tell when we are the last chunk of this name
43 | \label{label-\chunkname}%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We also try to add the chunk to the list of listings, but I'm afraid we don't do very well. We want each chunk name listed once, with all of its references.
88b <./fangle.sty[15]() ⇑84d, lang=> +≡ ▵88a 88c▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
44 | \addcontentsline{lol}{lstlisting}{\lst@name~[\protect\subpageref{\chunkname}]}%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We then call the noweb output macros in the same way that noweave generates them, except that we don't need to call \nwstartdeflinemarkup or \nwenddeflinemarkup — and if we do, it messes up the output somewhat.
88c <./fangle.sty[16]() ⇑84d, lang=> +≡ ▵88b 88d▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
48 | \subpageref{\chunkname}%
55 | \nwtagstyle{}\/%
56 | \@ifundefined{fangle@chunk@params}{}{%
57 | (\fangle@chunk@params)%
59 | [\csname \chunkcount\endcsname]~%
60 | \subpageref{\firstchunkname}%
62 | \@ifundefined{fangle@chunk@append}{}{%
63 | \ifx{}\fangle@chunk@append{x}\else%
64 | ,~add~to~\fangle@chunk@append%
67 | \global\def\fangle@chunk@append{}%
68 | \lstset{append=x}%
71 | \ifx\relax\prevchunkname\endmoddef\else\plusendmoddef\fi%
72 | % \nwstartdeflinemarkup%
73 | \nwprevnextdefs{\prevchunkname}{\nextchunkname}%
74 | % \nwenddeflinemarkup%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
Originally this was developed as a listings aspect, in the Init hook, but it was found easier to affect the title without using a hook — \lst@AddToHookExe{PreSet} is still required to set the listings name to the name passed to the \Chunk command, though.
88d <./fangle.sty[17]() ⇑84d, lang=> +≡ ▵88c 89a⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
76 | %\lst@BeginAspect{fangle}
77 | %\lst@Key{fangle}{true}[t]{\lstKV@SetIf{#1}{true}}
78 | \lst@AddToHookExe{PreSet}{\global\let\lst@intname\lst@chunkname}
79 | \lst@AddToHook{Init}{}%\fangle@caption}
80 | %\lst@EndAspect
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
16.2.4 Cross references
We define the \chunkref command which makes it easy to generate visual references to different code chunks, e.g.
\chunkref[3]{preamble}
\chunkref{preamble}[arg1, arg2]
\chunkref can also be used within a code chunk to include another code chunk. The third optional parameter to \chunkref is a comma-separated list of arguments, which will replace defined parameters in the referenced chunk.
Note 1. Darn it, if I have: =<\chunkref{new-mode-tracker}[{chunks[chunk_name, "language"]},{mode}]> the inner braces (inside [ ]) cause _ to signify subscript even though we have \lst@ReplaceIn.
89a <./fangle.sty[18]() ⇑84d, lang=> +≡ ⊲88d 90a⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
81 | \def\chunkref@args#1,{%
83 | \lst@ReplaceIn\arg\lst@filenamerpl%
85 | \@ifnextchar){\relax}{, \chunkref@args}%
87 | \newcommand\chunkref[2][0]{%
88 | \@ifnextchar({\chunkref@i{#1}{#2}}{\chunkref@i{#1}{#2}()}%
90 | \def\chunkref@i#1#2(#3){%
92 | \def\chunk{#2}%
93 | \def\chunkno{#1}%
94 | \def\chunkargs{#3}%
95 | \ifx\chunkno\zero%
96 | \def\chunkname{#2-1}%
98 | \def\chunkname{#2-\chunkno}%
100 | \let\lst@arg\chunk%
101 | \lst@ReplaceIn\chunk\lst@filenamerpl%
102 | \LA{%\moddef{%
105 | \nwtagstyle{}\/%
106 | \ifx\chunkno\zero%
110 | \ifx\chunkargs\empty%
112 | (\chunkref@args #3,)%
114 | ~\subpageref{\chunkname}%
117 | \RA%\endmoddef%
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
90a <./fangle.sty[19]() ⇑84d, lang=> +≡ ⊲89a
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
|________________________________________________________________________
Chapter 17 Extracting fangle
17.1 Extracting from LyX
To extract from LyX, you will need to configure LyX as explained in section ?.
And this lyx-build scrap will extract fangle for me.
91a <lyx-build[2]() ⇑20a, lang=sh> +≡ ⊲20a
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
14 | «lyx-build-helper 19b»
15 | cd $PROJECT_DIR || exit 1
17 | /usr/local/bin/fangle -R./fangle $TEX_SRC > ./fangle
18 | /usr/local/bin/fangle -R./fangle.module $TEX_SRC > ./fangle.module
20 | export FANGLE=./fangle
21 | export TMP=${TMP:-/tmp}
|________________________________________________________________________
With a lyx-build-helper:
91b <lyx-build-helper[2]() ⇑19b, lang=sh> +≡ ⊲19b
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
5 | PROJECT_DIR="$LYX_r"
6 | LYX_SRC="$PROJECT_DIR/${LYX_i%.tex}.lyx"
7 | TEX_DIR="$LYX_p"
8 | TEX_SRC="$TEX_DIR/$LYX_i"
9 | TXT_SRC="$TEX_SRC"
|________________________________________________________________________
17.2 Extracting documentation
91c <./gen-www[1](), lang=> ≡
________________________________________________________________________
1 | #python -m elyxer --css lyx.css $LYX_SRC | \
2 | # iconv -c -f utf-8 -t ISO-8859-1//TRANSLIT | \
3 | # sed 's/UTF-8"\(.\)>/ISO-8859-1"\1>/' > www/docs/fangle.html
5 | python -m elyxer --css lyx.css --iso885915 --html --destdirectory www/docs/fangle.e \
6 | fangle.lyx > www/docs/fangle.e/fangle.html
8 | ( mkdir -p www/docs/fangle && cd www/docs/fangle && \
9 | lyx -e latex ../../../fangle.lyx && \
10 | htlatex ../../../fangle.tex "xhtml,fn-in" && \
11 | sed -i -e 's/<!--l\. [0-9][0-9]* *-->//g' fangle.html
14 | ( mkdir -p www/docs/literate && cd www/docs/literate && \
15 | lyx -e latex ../../../literate.lyx && \
16 | htlatex ../../../literate.tex "xhtml,fn-in" && \
17 | sed -i -e 's/<!--l\. [0-9][0-9]* *-->$//g' literate.html
|________________________________________________________________________
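The sed substitutions in gen-www strip the line-number comments that htlatex leaves in its HTML output. A standalone sketch of the same substitution on sample input (the markup here is invented for illustration, not taken from a real htlatex run):

```shell
# htlatex emits markers like <!--l. 123 --> recording source line numbers;
# this substitution deletes them while leaving other content untouched.
printf '%s\n' \
  '<p><!--l. 123 -->Some documentation text.</p>' \
  '<p>No marker on this line.</p>' \
| sed 's/<!--l\. [0-9][0-9]* *-->//g'
```

In gen-www the same expression is applied in place with sed -i; the variant for literate.html anchors the marker to end-of-line with a trailing $.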
17.3 Extracting from the command line
First you will need the tex output, then you can extract:
92a <lyx-build-manual[1](), lang=sh> ≡
________________________________________________________________________
1 | lyx -e latex fangle.lyx
2 | fangle -R./fangle fangle.tex > ./fangle
3 | fangle -R./fangle.module fangle.tex > ./fangle.module
|________________________________________________________________________
95a <test:*[1](), lang=> ≡
________________________________________________________________________
3 | export TXT_SRC=${TXT_SRC:-fangle.txt}
4 | export FANGLE=${FANGLE:-./fangle}
5 | export TMP=${TMP:-/tmp}
7 | tm -s -c fangle.tm fangle.txt -q
9 | «test:helpers 95c»
10 | «test:run-tests 95b»
11 | # Now check that we can extract a fangle that also passes the tests!
12 | $FANGLE -R./fangle $TXT_SRC > ./fangle.new
13 | export FANGLE=./fangle.new
14 | «test:run-tests 95b»
|________________________________________________________________________
95b <test:run-tests[1](), lang=sh> ≡
________________________________________________________________________
2 | $FANGLE -Rpca-test.awk $TXT_SRC | awk -f - || exit 1
3 | «test:cromulence 56a»
4 | «test:escapes 60a»
5 | «test:test-chunk(test:example-sh, test:example-sh.result) 95d»
6 | «test:test-chunk(test:example-makefile, test:example-makefile.result) 95d»
7 | «test:chunk-params 97e»
|________________________________________________________________________
95c <test:helpers[1](), lang=> ≡
________________________________________________________________________
3 | then echo "Passed $TEST"
4 | else echo "Failed $TEST"
11 | then echo "Passed $TEST"
12 | else echo "Failed $TEST"
|________________________________________________________________________
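The test:helpers chunk is only partially reproduced here; what survives is the Passed/Failed reporting of two helpers, one of which (passtest) is used by test:test-chunk below. The following is a hypothetical reconstruction from that visible behaviour alone — the function bodies, and the name failtest for the second helper, are my assumptions, not the original definitions:

```shell
# Hypothetical sketch of the test helpers. passtest reports Passed when
# its command succeeds; failtest reports Passed when its command fails.
# $TEST carries the test's name into the report line.
passtest() {
  if "$@"
  then echo "Passed $TEST"
  else echo "Failed $TEST"
  fi
}

failtest() {
  if ! "$@"
  then echo "Passed $TEST"
  else echo "Failed $TEST"
  fi
}

TEST="example" passtest true    # prints: Passed example
TEST="example" failtest false   # prints: Passed example
```

Prefixing the call with TEST=... sets the name only for that invocation, which matches how test:test-chunk invokes passtest.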
This chunk will render a named chunk and compare it to another rendered named chunk.
95d <test:test-chunk[1](chunk, result), lang=sh> ≡
________________________________________________________________________
1 | TEST="${result}" passtest diff -u --label "${chunk}" <( $FANGLE -R${chunk} $TXT_SRC ) \
2 | --label "${result}" <( $FANGLE -R${result} $TXT_SRC )
|________________________________________________________________________
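test:test-chunk leans on bash process substitution, <( ... ), so that diff can compare the rendered output of two chunks without writing temporary files, while --label names each side readably in the diff header. A standalone sketch of the same pattern, with printf standing in for the $FANGLE invocations:

```shell
#!/bin/bash
# diff -u over process substitution: compare two command outputs directly.
# --label replaces the /dev/fd/N pseudo-filenames in the diff header.
if diff -u --label expected --label actual \
     <( printf 'hello\nworld\n' ) \
     <( printf 'hello\nworld\n' ) >/dev/null
then echo "Passed"
else echo "Failed"
fi
```

Note that process substitution is a bash/ksh/zsh feature; the chunk is declared lang=sh but will not run under a strictly POSIX sh.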
Chapter 19 Chunk Parameters
97a <test:lyx:chunk-params:sub[1](THING, colour), lang=> ≡
________________________________________________________________________
1 | I see a ${THING},
2 | a ${THING} of colour ${colour},
3 | and looking closer =<\chunkref{test:lyx:chunk-params:sub:sub}(${colour})>
|________________________________________________________________________
97b <test:lyx:chunk-params:sub:sub[1](colour), lang=> ≡
________________________________________________________________________
1 | a funny shade of ${colour}
|________________________________________________________________________
97c <test:lyx:chunk-params:text[1](), lang=> ≡
________________________________________________________________________
1 | What do you see? "=<\chunkref{test:lyx:chunk-params:sub}(joe, red)>"
|________________________________________________________________________
Should generate output:
97d <test:lyx:chunk-params:result[1](), lang=> ≡
________________________________________________________________________
1 | What do you see? "I see a joe,
2 | a joe of colour red,
3 | and looking closer a funny shade of red"
|________________________________________________________________________
And this chunk will perform the test:
97e <test:chunk-params[1](), lang=> ≡ 98b⊳
________________________________________________________________________
1 | «test:test-chunk(test:lyx:chunk-params:text, test:lyx:chunk-params:result) 95d» || exit 1
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
97f <test:chunk-params:sub[1](THING, colour), lang=> ≡
________________________________________________________________________
1 | I see a ${THING},
2 | a ${THING} of colour ${colour},
3 | and looking closer «test:chunk-params:sub:sub(${colour}) 97g»
|________________________________________________________________________
97g <test:chunk-params:sub:sub[1](colour), lang=> ≡
________________________________________________________________________
1 | a funny shade of ${colour}
|________________________________________________________________________
97h <test:chunk-params:text[1](), lang=> ≡ 96a⊳
________________________________________________________________________
1 | What do you see? "«test:chunk-params:sub(joe, red) 97f»"
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
Should generate output:
98a <test:chunk-params:result[1](), lang=> ≡
________________________________________________________________________
1 | What do you see? "I see a joe,
2 | a joe of colour red,
3 | and looking closer a funny shade of red"
|________________________________________________________________________
And this chunk will perform the test:
98b <test:chunk-params[2]() ⇑97e, lang=> +≡ ⊲97e
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2 | «test:test-chunk(test:chunk-params:text, test:chunk-params:result) 95d» || exit 1
|________________________________________________________________________
Chapter 20 Compile-log-lyx
99a <Chunk:./compile-log-lyx[1](), lang=sh> ≡
________________________________________________________________________
2 | # can't use gtkdialog -i, cos it uses the "source" command which ubuntu sh doesn't have
5 | errors="/tmp/compile.log.$$"
6 | # if grep '^[^ ]*:\( In \|[0-9][0-9]*: [^ ]*:\)' > $errors
7 | if grep '^[^ ]*(\([0-9][0-9]*\)) *: *\(error\|warning\)' > $errors
9 | sed -i -e 's/^[^ ]*[/\\]\([^/\\]*\)(\([ 0-9][ 0-9]*\)) *: */\1:\2|\2|/' $errors
10 | COMPILE_DIALOG='
13 | <label>Compiler errors:</label>
15 | <tree exported_column="0">
16 | <variable>LINE</variable>
17 | <height>400</height><width>800</width>
18 | <label>File | Line | Message</label>
19 | <action>'". $SELF ; "'lyxgoto $LINE</action>
20 | <input>'"cat $errors"'</input>
23 | <button><label>Build</label>
24 | <action>lyxclient -c "LYXCMD:build-program" &</action>
26 | <button ok></button>
30 | export COMPILE_DIALOG
31 | ( gtkdialog --program=COMPILE_DIALOG ; rm $errors ) &
38 | file="${LINE%:*}"
39 | line="${LINE##*:}"
40 | extraline=`cat $file | head -n $line | tac | sed '/^\\\\begin{lstlisting}/q' | wc -l`
41 | extraline=`expr $extraline - 1`
42 | lyxclient -c "LYXCMD:command-sequence server-goto-file-row $file $line ; char-forward ; repeat $extraline paragraph-down ; paragraph-up-select"
46 | if test -z "$COMPILE_DIALOG"
|________________________________________________________________________
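The lyxgoto helper above must translate a compiler-reported line number into paragraph moves inside LyX: it counts how many lines separate the error line from the enclosing \begin{lstlisting}, by taking the file head up to the error line, reversing it with tac, and printing until the marker is reached. A standalone sketch of that pipeline on a synthetic file (the file contents and line number are invented for the demonstration; the original quotes the sed pattern with four backslashes because it sits inside backquotes):

```shell
# Count lines from an "error" line back to the opening \begin{lstlisting}.
# head takes the file up to the error line; tac reverses it; sed prints
# until (and including) the marker; wc -l counts; minus 1 gives the
# offset of the error line within the listing.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
preamble
\begin{lstlisting}
code line 1
code line 2
code line 3
EOF
line=5   # pretend the compiler reported an error on line 5 (code line 3)
extraline=$(head -n "$line" "$tmp" | tac | sed '/^\\begin{lstlisting}/q' | wc -l)
extraline=$((extraline - 1))
echo "$extraline"   # prints: 3
rm -f "$tmp"
```

tac is a GNU coreutils tool (BSD systems would use tail -r), and the arithmetic here uses $((...)) where the original uses expr; both yield the same offset.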