Fangle is a tool for fangled literate programming. Newfangled is defined as "new and often needlessly novel" by TheFreeDictionary.com.
In this case, fangled means yet another not-so-new (but improved) method for literate programming.
Literate programming has a long history, starting with the great Donald Knuth himself, whose literate programming tools seem to make use of as many escape sequences for semantic markup as TeX (also by Donald Knuth).
Norman Ramsey wrote the Noweb set of tools (notangle, noweave and noroots) and helpfully reduced the magic character sequences to pretty much just <<, >> and @, and in doing so brought the wonders of literate programming within my reach.
While using the LyX editor for LaTeX editing I had various troubles with the noweb tools, some of which were my fault, some of which were noweb's fault and some of which were LyX's fault.
Noweb generally brought literate programming to the masses by removing some of the complexity of the original literate programming, but this would be of no advantage to me if the LyX/LaTeX combination brought more complications in their place.
Fangle was thus born (originally called Newfangle) as an awk replacement for notangle, adding some important features, like better integration with LyX and LaTeX (and later TeXmacs), multiple output format conversions, and fixing notangle bugs like broken indentation when using -L for line numbers.
Significantly, fangle is just one program which replaces various programs in Noweb: noweave is done away with and implemented directly as LaTeX macros, and noroots is implemented as a function of the untangler fangle.
Fangle is written in awk for portability reasons, awk being available for most platforms. A Python version (hasn't anyone implemented awk in Python yet?) was considered for the benefit of LyX, but a Scheme version for TeXmacs will probably materialise first, as TeXmacs macro capabilities help make edit-time and format-time rendering of fangle chunks simple enough for my weak brain.
As an extension to many literate-programming styles, Fangle permits code chunks to take parameters and thus operate somewhat like C pre-processor macros, or like C++ templates. Named parameters (or even local variables in the caller's scope) are anticipated, as parameterized chunks, useful though they are, can be hard to comprehend in the literate document.
Fangle is licensed under the GPL 3 (or later).
This doesn't mean that sources generated by fangle must be licensed under the GPL 3.
This doesn't mean that you can't use or distribute fangle with sources of an incompatible license, but it means you must make the source of fangle available too.
As fangle is currently written in awk, an interpreted language, this should not be too hard.
4a <gpl3-copyright[1](), lang=text> ≡
________________________________________________________________________
1 | # fangle - fully featured notangle replacement in awk
2 | #
3 | # Copyright (C) 2009-2010 Sam Liddicott <sam@liddicott.com>
4 | #
5 | # This program is free software: you can redistribute it and/or modify
6 | # it under the terms of the GNU General Public License as published by
7 | # the Free Software Foundation, either version 3 of the License, or
8 | # (at your option) any later version.
9 | #
10 | # This program is distributed in the hope that it will be useful,
11 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 | # GNU General Public License for more details.
14 | #
15 | # You should have received a copy of the GNU General Public License
16 | # along with this program. If not, see <http://www.gnu.org/licenses/>.
|________________________________________________________________________
1 Introduction to Literate Programming
2.2 Extracting roots
2.3 Formatting the document
3 Using Fangle with LaTeX
4 Using Fangle with LyX
4.1 Installing the LyX module
4.2 Obtaining a decent mono font
4.3 Formatting your LyX document
4.3.1 Customising the listing appearance
4.3.2 Global customisations
4.4 Configuring the build script
5 Using Fangle with TeXmacs
6 Fangle with Makefiles
6.1 A word about makefile formats
6.2 Extracting Sources
6.2.1 Converting from LyX to LaTeX
6.2.2 Converting from TeXmacs
6.3 Extracting Program Source
6.4 Extracting Source Files
6.5 Extracting Documentation
6.5.1 Formatting TeX
6.5.1.1 Running pdflatex
6.5.2 Formatting TeXmacs
6.5.3 Building the Documentation as a Whole
6.7 Boot-strapping the extraction
6.8 Incorporating Makefile.inc into existing projects
7 Fangle awk source code
7.2 Catching errors
8 LaTeX and lstlistings
8.1 Additional lstlistings parameters
8.2 Parsing chunk arguments
8.3 Expanding parameters in the text
9 Language Modes & Quoting
9.1.1 Modes to keep code together
9.1.2 Modes affect included chunks
9.2 Language Mode Definitions
9.2.3 Parentheses, Braces and Brackets
9.2.4 Customizing Standard Modes
9.4 A non-recursive mode tracker
9.4.3.1 One happy chunk
9.5 Escaping and Quoting
10 Recognizing Chunks
10.1.1 TeXmacs hackery
10.1.2 lstlistings
10.1.3 TeXmacs
10.2.1 lstlistings
10.3 Chunk contents
10.3.1 lstlistings
11 Processing Options
12 Generating the Output
12.1 Assembling the Chunks
12.1.1 Chunk Parts
15 Fangle LaTeX source code
15.1 fangle module
15.1.1 The Chunk style
15.1.2 The chunkref style
15.2.1 The chunk command
15.2.1.1 Chunk parameters
15.2.2 The noweb styled caption
15.2.3 The chunk counter
15.2.4 Cross references
16 Extracting fangle
16.1 Extracting from LyX
16.2 Extracting documentation
16.3 Extracting from the command line
17 Chunk Parameters
18 Compile-log-lyx
Chapter 1 Introduction to Literate Programming
Todo: Should really follow on from a part-0 explanation of what literate programming is.
Chapter 2 Running Fangle
Fangle is a replacement for noweb, which consists of notangle, noroots and noweave.
Like notangle and noroots, fangle can read multiple named files, or read from stdin.
The -r option causes fangle to behave like noroots.
fangle -r filename.tex
will print out the fangle roots of a tex file.
Unlike the noroots command, the printed roots are not enclosed in angle brackets (e.g. <<name>>) unless at least one of the roots is defined using the notangle notation <<name>>=.
Also, unlike noroots, it prints out all roots, not just those that are not used elsewhere. I find that a root not being used doesn't make it particularly top level; so-called top level roots could also be included in another root as well.
My convention is that top level roots to be extracted begin with ./ and have the form of a filename.
Makefile.inc, discussed in chapter 6, can automatically extract all such sources prefixed with ./
notangle's -R and -L options are supported.
If you are using LyX or LaTeX, the standard way to extract a file would be:
fangle -R./Makefile.inc fangle.tex > ./Makefile.inc
If you are using TeXmacs, the standard way to extract a file would similarly be:
fangle -R./Makefile.inc fangle.txt > ./Makefile.inc
TeXmacs users would obtain the text file with a verbatim export from TeXmacs, which can be done on the command line with texmacs -s -c fangle.tm fangle.txt -q
Unlike notangle, the -L option, which generates C pre-processor #line style line-number directives, does not break the indentation of the generated file.
Also, thanks to mode tracking (described in chapter 9), the -L option does not interrupt (and break) multi-line C macros either.
This does mean that sometimes the compiler might calculate the source line wrongly when generating error messages in such cases, but there isn't any other way around it if multi-line macros include other chunks.
Future releases will include a mapping file so that line/character references from the C compiler can be converted to the correct part of the source document.
2.3 Formatting the document
The noweave replacement is built into the editing and formatting environment: for TeXmacs, for LyX (which uses LaTeX), and even for raw LaTeX.
Use of fangle with TeXmacs, LyX and LaTeX is explained in the next few chapters.
Chapter 3 Using Fangle with LaTeX
Because the noweave replacement is implemented in LaTeX, there is no processing stage required before running the LaTeX command. Of course, LaTeX may need running two or more times, so that the code chunk references can be fully calculated.
The formatting is managed by a set of macros shown in chapter 15, and can be included with:
\usepackage{fangle.sty}
Norman Ramsey's original noweb.sty package is currently required, as it is used for formatting the code chunk captions.
The listings.sty package is required, and is used for formatting the code chunks and syntax highlighting.
The xargs.sty package is also required, and makes writing LaTeX macros so much more pleasant.
To do: Add examples of use of Macros
Chapter 4 Using Fangle with LyX
LyX uses the same LaTeX macros shown in chapter 15 as part of a LyX module file, fangle.module, which automatically includes the macros in the document preamble provided that the fangle LyX module is used in the document.
4.1 Installing the LyX module
Copy fangle.module to your LyX layouts directory, which for unix users will be ~/.lyx/layouts
In order to make the new literate styles available, you will need to reconfigure LyX by clicking Tools->Reconfigure, and then re-start LyX.
4.2 Obtaining a decent mono font
The syntax highlighting features of lstlistings make use of bold; however, a mono-space tt font is used to typeset the listings. Obtaining a bold tt font can be impossibly difficult and amazingly easy. I spent many hours at it, following complicated instructions from those who had spent many hours over it, and was finally delivered the simple solution on the lyx mailing list.
The simple way was to add this to my preamble:
\renewcommand{\ttdefault}{txtt}
The next simplest way was to use AMS poor-man's-bold, by adding this to the preamble:
%\renewcommand{\ttdefault}{txtt}
%somehow make \pmb be the command for bold, forgot how, sorry, above line not work
It works, but looks wretched on the dvi viewer.
The lstlistings documentation suggests using Luximono.
Luximono was installed according to the instructions in Ubuntu Forums thread 1159181 (http://ubuntuforums.org/showthread.php?t=1159181), with tips from miknight (http://miknight.blogspot.com/2005/11/how-to-install-luxi-mono-font-in.html) stating that sudo updmap --enable MixedMap ul9.map is required. It looks fine in PDF and PS view but still looks rotten in dvi view.
4.3 Formatting your LyX document
It is not necessary to base your literate document on any of the original LyX literate classes; so select a regular class for your document type.
Add the new module Fangle Literate Listings, and also Logical Markup, which is very useful.
In the drop-down style listbox you should notice a new style defined, called Chunk.
When you wish to insert a literate chunk, you enter its plain name in the Chunk style, instead of the old noweb method that uses <<name>>= type tags. In the line (or paragraph) following the chunk name, you insert a listing with Insert->Program Listing.
Inside the white listing box you can type (or paste using shift+ctrl+V) your listing. There is no need to use ctrl+enter at the end of lines as with some older LyX literate techniques; just press enter as normal.
4.3.1 Customising the listing appearance
The code is formatted using the lstlistings package. The chunk style doesn't just define the chunk name, but can also define any other chunk options supported by the lstlistings package's \lstset command. In fact, what you type in the chunk style is raw LaTeX. If you want to set the chunk language without having to right-click the listing, just add ,language=C after the chunk name. (Currently the language will affect all subsequent listings, so you may need to specify ,language= quite a lot.)
To do: so fix the bug
Of course you can do this by editing the listing box's advanced properties by right-clicking on the listings box, but that takes longer, and you can't see at-a-glance what the advanced settings are while editing the document; also, advanced settings apply only to that box, whereas the chunk settings apply through the rest of the document (though they ought to apply only to subsequent chunks of the same name; I'll fix that later).
To do: So make sure they only apply to chunks of that name
4.3.2 Global customisations
As lstlistings is used to set the code chunks, its \lstset command can be used in the preamble to set some document-wide settings.
If your source has many words with long sequences of capital letters, then columns=fullflexible may be a good idea, or the capital letters will get crowded. (I think lstlistings ought to use a slightly smaller font for capital letters so that they still fit.)
The font family \ttfamily looks more normal for code, but has no bold (an alternate typewriter font is used).
With \ttfamily, I must also specify columns=fullflexible or the wrong letter spacing is used.
In my LaTeX preamble I usually specialise my code format with:
19a <document-preamble[1](), lang=tex> ≡
________________________________________________________________________
2 | numbers=left, stepnumber=1, numbersep=5pt,
3 | breaklines=false,
4 | basicstyle=\footnotesize\ttfamily,
5 | numberstyle=\tiny,
7 | columns=fullflexible,
8 | numberfirstline=true
|________________________________________________________________________
4.4 Configuring the build script
You can invoke code extraction and building from the LyX menu option Document->Build Program.
First, make sure you don't have a conversion defined for LyX->Program.
From the menu Tools->Preferences, add a conversion from LaTeX(Plain)->Program as:
set -x ; fangle -Rlyx-build $$i |
env LYX_b=$$b LYX_i=$$i LYX_o=$$o LYX_p=$$p LYX_r=$$r bash
(But don't cut-n-paste it from this document, or you may be pasting a multi-line string which will break your lyx preferences file.)
I hope that one day, LyX will set these into the environment when calling the build script.
You may also want to consider adding options to this conversion...
parselog=/usr/share/lyx/scripts/listerrors
...but if you do you will lose your stderr. (There is some bash plumbing to get a copy of stderr, but this footnote is too small.)
Now, a shell script chunk called lyx-build will be extracted and run whenever you choose the Document->Build Program menu item.
This document was originally managed using LyX, and the lyx-build script for this document is shown here for historical reference.
lyx -e latex fangle.lyx && \
fangle fangle.lyx > ./autoboot
This looks simple enough, but as mentioned, fangle has to be had from somewhere before it can be extracted.
When the lyx-build chunk is executed, the current directory will be a temporary directory, and LYX_SOURCE will refer to the tex file in this temporary directory. This is unfortunate, as our makefile wants to run from the project directory where the LyX file is kept.
We can extract the project directory from $$r, and derive the probable LyX filename from the noweb file that LyX generated.
19b <lyx-build-helper[1](), lang=sh> ≡ 83c⊳
________________________________________________________________________
1 | PROJECT_DIR="$LYX_r"
2 | LYX_SRC="$PROJECT_DIR/${LYX_i%.tex}.lyx"
4 | TEX_SRC="$TEX_DIR/$LYX_i"
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
And then we can define a lyx-build fragment similar to the autoboot fragment.
20a <lyx-build[1](), lang=sh> ≡ 83a⊳
________________________________________________________________________
2 | =<\chunkref{lyx-build-helper}>
3 | cd $PROJECT_DIR || exit 1
5 | #/usr/bin/fangle -filter ./notanglefix-filter \
6 | # -R./Makefile.inc "../../noweb-lyx/noweb-lyx3.lyx" \
7 | # | sed '/NOWEB_SOURCE=/s/=.*/=samba4-dfs.lyx/' \
8 | # > ./Makefile.inc
10 | #make -f ./Makefile.inc fangle_sources
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
Chapter 5 Using Fangle with TeXmacs
To do: Write this chapter
Chapter 6 Fangle with Makefiles
Here we describe a Makefile.inc that you can include in your own Makefiles, or glue as a recursive make to other projects.
Makefile.inc will cope with extracting all the other source files from this or any specified literate document and keeping them up to date.
It may also be included by a Makefile or Makefile.am defined in a literate document, to automatically deal with the extraction of source files and documents during normal builds.
Thus, if Makefile.inc is included into a main project makefile, it adds rules for the source files, capable of extracting the source files from the literate document.
6.1 A word about makefile formats
Whitespace formatting is very important in a Makefile. The first character of each action line must be a TAB.
target: pre-requisite
↦action
This requires that the literate programming environment have the ability to represent a TAB character in such a way that fangle will generate an actual TAB character.
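Since a space-indented action line is one of the commonest makefile mistakes, a quick check can catch it before make complains. This is a hypothetical helper for illustration, not part of Makefile.inc; it only assumes that action lines must start with a literal TAB, so any line starting with a space is suspect:

```shell
# Flag makefile action lines that begin with spaces instead of the
# required TAB. (Illustrative helper; the sample makefiles are made up.)
printf 'target: prerequisite\n\ttouch target\n' > Makefile.good
printf 'target: prerequisite\n    touch target\n' > Makefile.bad

# make expects a TAB in column one of an action line, never a space.
grep -n '^ ' Makefile.bad || true     # reports the offending line
grep -n '^ ' Makefile.good || true    # reports nothing
```

The `|| true` keeps the check from aborting a `set -e` script when a file is clean, since grep exits non-zero when nothing matches.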
We also adopt a convention that code chunks whose names begin with ./ should always be automatically extracted from the document. Code chunks whose names do not begin with ./ are for internal reference. Such chunks may be extracted directly, but will not be automatically extracted by this Makefile.
6.2 Extracting Sources
Our makefile has two parts: variables must be defined before the targets that use them.
As we progress through this chapter, explaining concepts, we will be adding lines to <Makefile.inc-vars 23b> and <Makefile.inc-targets 24b>, which are included in <./Makefile.inc 23a> below.
23a <./Makefile.inc[1](), lang=make> ≡
________________________________________________________________________
1 | «Makefile.inc-vars 23b»
2 | «Makefile.inc-targets 24b»
|________________________________________________________________________
We first define a placeholder for LITERATE_SOURCE to hold the name of this document. This will normally be passed on the command line.
23b <Makefile.inc-vars[1](), lang=> ≡ 24a⊳
________________________________________________________________________
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
Fangle cannot process LyX or TeXmacs documents directly, so the first stage is to convert these to more suitable text-based formats. (LyX and TeXmacs formats are text-based, but not suitable for fangle.)
6.2.1 Converting from LyX to LaTeX
The first stage will always be to convert the LyX file to a LaTeX file. Fangle must run on a TeX file because the LyX command server-goto-file-line (used to position the LyX cursor at compiler errors) requires that the line number provided be a line of the TeX file, and always maps this to the corresponding line in the LyX document. We use server-goto-file-line when moving the cursor to error lines during compile failures.
The command lyx -e literate fangle.lyx will produce fangle.tex, a TeX file; so we define a make target to be the same as the LyX file but with the .tex extension.
The EXTRA_DIST is for automake support, so that the TeX files will automatically be distributed with the source, to help those who don't have LyX installed.
24a <Makefile.inc-vars[2]() ⇑23b, lang=> +≡ ⊲23b 24c▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2 | TEX_SOURCE=$(LYX_SOURCE:.lyx=.tex)
3 | EXTRA_DIST+=$(TEX_SOURCE)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We then specify that the TeX source is to be generated from the LyX source.
24b <Makefile.inc-targets[1](), lang=> ≡ 24d▿
________________________________________________________________________
1 | $(TEX_SOURCE): $(LYX_SOURCE)
4 | ↦rm -f -- $(TEX_SOURCE)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
6.2.2 Converting from TeXmacs
Fangle cannot process TeXmacs files directly (though this is planned for when TeXmacs uses XML as its native format), but must first convert them to text files.
The command texmacs -c fangle.tm fangle.txt -q will produce fangle.txt, a text file; so we define a make target to be the same as the TeXmacs file but with the .txt extension.
The EXTRA_DIST is for automake support, so that the text files will automatically be distributed with the source, to help those who don't have TeXmacs installed.
24c <Makefile.inc-vars[3]() ⇑23b, lang=> +≡ ▵24a 25a⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
4 | TXT_SOURCE=$(LITERATE_SOURCE:.tm=.txt)
5 | EXTRA_DIST+=$(TXT_SOURCE)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
To do: Add loop around each $< so multiple targets can be specified
24d <Makefile.inc-targets[2]() ⇑24b, lang=> +≡ ▵24b 25c⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
6 | $(TXT_SOURCE): $(LITERATE_SOURCE)
7 | ↦texmacs -c $< $(TXT_SOURCE) -q
9 | ↦rm -f -- $(TXT_SOURCE)
10 | clean: clean_txt
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
6.3 Extracting Program Source
The program source is extracted using fangle, which is designed to operate on text or LaTeX documents. (LaTeX documents are just slightly special text documents.)
25a <Makefile.inc-vars[4]() ⇑23b, lang=> +≡ ⊲24c 25b▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
6 | FANGLE_SOURCE=$(TEX_SOURCE) $(TXT_SOURCE)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
The literate document can result in any number of source files, but not all of these will be changed each time the document is updated. We certainly don't want to update the timestamps of these files and cause the whole source tree to be recompiled just because the literate explanation was revised. We use cpif from the Noweb tools to avoid updating the file if the content has not changed, but should probably write our own.
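The copy-if-changed idea behind cpif is small enough to sketch in shell. This my_cpif is a hypothetical stand-in written for illustration, not Noweb's actual implementation:

```shell
# my_cpif: copy stdin to the named file only if the content differs,
# so the file's timestamp is preserved when nothing has changed.
# (A sketch of the copy-if-changed technique, not Noweb's cpif.)
my_cpif() {
    tmp=$(mktemp) || return 1
    cat > "$tmp"
    if cmp -s "$tmp" "$1"; then
        rm -f "$tmp"        # unchanged: keep the old file and timestamp
    else
        mv "$tmp" "$1"      # changed (or new): replace the file
    fi
}

echo "hello" | my_cpif out.txt   # creates out.txt
echo "hello" | my_cpif out.txt   # leaves out.txt and its timestamp alone
```

When the target file does not yet exist, cmp fails silently and the temporary file is simply moved into place.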
However, if a source file is not updated, then the fangle file will always have a newer timestamp, and the makefile would always re-attempt to extract a newer source file, which would be a waste of time.
Because of this, we use a stamp file which is always updated each time the sources are fully extracted from the LaTeX document. If the stamp file is newer than the document, then we can avoid an attempt to re-extract any of the sources. Because this stamp file is only updated when extraction is complete, it is safe for the user to interrupt the build process mid-extraction.
We use echo rather than touch to update the stamp file because the touch command does not work very well over an sshfs mount that I was using.
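The stamp-file logic amounts to a freshness test plus an echo. A minimal sketch in shell, with hypothetical file names and the extraction step replaced by a placeholder:

```shell
# Re-extract only when the literate document is newer than the stamp,
# then refresh the stamp with echo rather than touch (touch misbehaved
# over an sshfs mount). File names here are hypothetical.
doc=fangle.tex
stamp=$doc.stamp

touch "$doc"                         # stand-in for editing the document
if [ ! -e "$stamp" ] || [ "$doc" -nt "$stamp" ]; then
    echo "extracting sources..."     # fangle extraction would run here
    echo > "$stamp"                  # update the stamp only when done
fi
```

Writing the stamp last is what makes an interrupted extraction safe: the stamp stays stale, so the next run extracts again.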
25b <Makefile.inc-vars[5]() ⇑23b, lang=> +≡ ▵25a 26a⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
7 | FANGLE_SOURCE_STAMP=$(FANGLE_SOURCE).stamp
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
25c <Makefile.inc-targets[3]() ⇑24b, lang=> +≡ ⊲24d 26b⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
11 | $(FANGLE_SOURCE_STAMP): $(FANGLE_SOURCE) \
12 | ↦ $(FANGLE_SOURCES) ; \
13 | ↦echo -n > $(FANGLE_SOURCE_STAMP)
15 | ↦rm -f $(FANGLE_SOURCE_STAMP)
16 | clean: clean_stamp
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
6.4 Extracting Source Files
We compute FANGLE_SOURCES to hold the names of all the source files defined in the document. We compute this only once, by means of := in the assignment. The sed deletes any << and >> which may surround the root names (for compatibility with Noweb's noroots command).
As we use chunk names beginning with ./ to denote top level fragments that should be extracted, we filter out all fragments that do not begin with ./
Note 1. FANGLE_PREFIX is set to ./ by default, but whatever it may be overridden to, the prefix is replaced by a literal ./ before extraction, so that files will be extracted in the current directory whatever the prefix. This helps namespace or sub-project prefixes like documents: for chunks like documents:docbook/intro.xml
To do: This doesn't work though, because it loses the full name and doesn't know what to extract!
26a <Makefile.inc-vars[6]() ⇑23b, lang=> +≡ ⊲25b 26e▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
8 | FANGLE_PREFIX:=\.\/
9 | FANGLE_SOURCES:=$(shell \
10 | fangle -r $(FANGLE_SOURCE) |\
11 | sed -e 's/^[<][<]//;s/[>][>]$$//;/^$(FANGLE_PREFIX)/!d' \
12 | -e 's/^$(FANGLE_PREFIX)/\.\//' )
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
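To see what that sed does, here is the same filter run by hand on some hypothetical root names. Outside make, the escaped $$ becomes a single $, and FANGLE_PREFIX is its default ./ value:

```shell
# Strip any << >> that noroots-style output may wrap around a root name,
# then keep only roots that name files to extract, i.e. those beginning
# with ./ (the root names below are hypothetical examples):
printf '%s\n' '<<./Makefile.inc>>' 'helper-chunk' './hello.c' |
  sed -e 's/^[<][<]//;s/[>][>]$//;/^\.\//!d'
# prints ./Makefile.inc and ./hello.c; helper-chunk is filtered out
```

The final -e of the make version then rewrites the (possibly overridden) prefix back to a literal ./, which is a no-op with the default prefix.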
The target below, echo_fangle_sources, is a helpful debugging target and shows the names of the files that would be extracted.
26b <Makefile.inc-targets[4]() ⇑24b, lang=> +≡ ⊲25c 26c▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
17 | .PHONY: echo_fangle_sources
18 | echo_fangle_sources: ; @echo $(FANGLE_SOURCES)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We define a convenient target called fangle_sources, so that make fangle_sources will re-extract the source if the literate document has been updated.
26c <Makefile.inc-targets[5]() ⇑24b, lang=> +≡ ▵26b 26d▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
19 | .PHONY: fangle_sources
20 | fangle_sources: $(FANGLE_SOURCE_STAMP)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
And also a convenient target to remove extracted sources.
26d <Makefile.inc-targets[6]() ⇑24b, lang=> +≡ ▵26c 27d⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
21 | .PHONY: clean_fangle_sources
22 | clean_fangle_sources: ; \
23 | rm -f -- $(FANGLE_SOURCE_STAMP) $(FANGLE_SOURCES)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We now look at the extraction of the source files.
This makefile macro, if_extension, takes four arguments: a filename $(1), some extensions to match $(2), a shell command to return if the filename does match the extensions $(3), and a shell command to return if it does not match the extensions $(4).
26e <Makefile.inc-vars[7]() ⇑23b, lang=> +≡ ▵26a 26f▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
13 | if_extension=$(if $(findstring $(suffix $(1)),$(2)),$(3),$(4))
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
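The same dispatch may be easier to read as a shell function. This is a hypothetical analogue of the make macro, written only to show the behaviour; it is not part of Makefile.inc:

```shell
# Shell analogue of the if_extension make macro: choose one of two
# results depending on whether the file's suffix appears in a
# space-separated list of extensions. (Illustrative, not Makefile.inc.)
if_extension() {
    # $1=filename  $2=extension list  $3=then-result  $4=else-result
    case " $2 " in
        *" .${1##*.} "*) printf '%s\n' "$3" ;;
        *)               printf '%s\n' "$4" ;;
    esac
}

if_extension main.c ".c .h" "-L" ""     # prints -L
if_extension notes.txt ".c .h" "-L" ""  # prints an empty line
```

Like the make version, it is purely textual: a filename with no dot at all simply fails to match and falls through to the else branch.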
For some source files, like C files, we want to output the line number and filename of the original LaTeX document from which the source came. (I plan to replace this option with a separate mapping file, so as not to pollute the generated source, and also to allow a code pretty-printing reformatter like indent to re-format the file and adjust for changes by comparing the character streams.)
To make this easier we define the file extensions for which we want to do this.
26f <Makefile.inc-vars[8]() ⇑23b, lang=> +≡ ▵26e 26g▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
14 | C_EXTENSIONS=.c .h
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We can then use the if_extension macro to define a macro which expands out to the -L option if fangle is being invoked on a C source file, so that C compile errors will refer to the line number in the TeX document.
26g <Makefile.inc-vars[9]() ⇑23b, lang=> +≡ ▵26f 27a⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
16 | nf_line=-L -T$(TABS)
17 | fangle=fangle $(call if_extension,$(2),$(C_EXTENSIONS),$(nf_line)) -R"$(2)" $(1)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We can use a similar trick to define an indent macro, which takes just the filename as an argument and can return a pipeline stage calling the indent command. Indent can be turned off with make fangle_sources indent=
27a <Makefile.inc-vars[10]() ⇑23b, lang=> +≡ ⊲26g 27b▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
18 | indent_options=-npro -kr -i8 -ts8 -sob -l80 -ss -ncs
19 | indent=$(call if_extension,$(1),$(C_EXTENSIONS), | indent $(indent_options))
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We now define the pattern for extracting a file. The files are written using noweb's cpif, so that the file timestamp will not be touched if the contents haven't changed. This avoids the need to rebuild the entire project because of a typographical change in the documentation, or when only a few C source files have changed.
27b <Makefile.inc-vars[11]() ⇑23b, lang=> +≡ ▵27a 27c▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
20 | fangle_extract=@mkdir -p $(dir $(1)) && \
21 | $(call fangle,$(2),$(1)) > "$(1).tmp" && \
22 | cat "$(1).tmp" $(indent) | cpif "$(1)" \
23 | && rm -- "$(1).tmp" || \
24 | (echo error newfangling $(1) from $(2) ; exit 1)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We define a target which will extract or update all sources. To do this we first define a makefile template that can do this for any source file in the LaTeX document.
27c <Makefile.inc-vars[12]() ⇑23b, lang=> +≡ ▵27b 28a⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
25 | define FANGLE_template
27 | ↦$$(call fangle_extract,$(1),$(2))
28 | FANGLE_TARGETS+=$(1)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
We then enumerate the discovered FANGLE_SOURCES to generate a makefile rule for each one, using the makefile template we defined above.
27d <Makefile.inc-targets[7]() ⇑24b, lang=> +≡ ⊲26d 27e▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
24 | $(foreach source,$(FANGLE_SOURCES),\
25 | $(eval $(call FANGLE_template,$(source),$(FANGLE_SOURCE))) \
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
These will all be built with FANGLE_SOURCE_STAMP.
We also remove the generated sources on a make distclean.
27e <Makefile.inc-targets[8]() ⇑24b, lang=> +≡ ▵27d 28b⊳
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
27 | _distclean: clean_fangle_sources
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
6.5 Extracting Documentation
We then identify the intermediate stages of the documentation, and their build and clean targets.
6.5.1.1 Running pdflatex
We produce a pdf file from the tex file.
28a <Makefile.inc-vars[13]() ⇑23b, lang=> +≡ ⊲27c 28c▿
./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
30 | FANGLE_PDF=$(TEX_SOURCE:.tex=.pdf)
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
505 We run pdflatex twice to be sure that the contents and aux files are up to date. We are certainly required to run pdflatex at least twice if these files do not exist yet.
507 28b <Makefile.inc-targets[9]() ⇑24b, lang=> +≡ ⊲27e 28d▿
508 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
509 28 | $(FANGLE_PDF): $(TEX_SOURCE)
510 29 | ↦pdflatex $< && pdflatex $<
513 32 | ↦rm -f -- $(FANGLE_PDF) $(TEX_SOURCE:.tex=.toc) \
514 33 | ↦ $(TEX_SOURCE:.tex=.log) $(TEX_SOURCE:.tex=.aux)
515 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
516 6.5.2 Formatting TeXmacs
517 TeXmacs can produce a PDF file directly.
519 28c <Makefile.inc-vars[14]() ⇑23b, lang=> +≡ ▵28a 28e▿
520 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
521 31 | FANGLE_PDF=$(TEX_SOURCE:.tm=.pdf)
522 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
523 To do: Outputting the PDF may not be enough to update the links and page references. I think
524 we need to update twice, generate a PDF, update twice more and generate a new PDF.
525 Basically the PDF export of TeXmacs is pretty rotten and doesn't work properly from the CLI.
528 28d <Makefile.inc-targets[10]() ⇑24b, lang=> +≡ ▵28b 28f▿
529 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
530 34 | $(FANGLE_PDF): $(TEXMACS_SOURCE)
531 35 | ↦texmacs -c $(TEXMACS_SOURCE) $< -q
534 38 | ↦rm -f -- $(FANGLE_PDF)
535 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
536 6.5.3 Building the Documentation as a Whole
537 Currently we only build pdf as a final format, but FANGLE_DOCS may later hold other output formats.
539 28e <Makefile.inc-vars[15]() ⇑23b, lang=> +≡ ▵28c
540 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
541 32 | FANGLE_DOCS=$(FANGLE_PDF)
542 |________________________________________________________________________
545 We also define fangle_docs as a convenient phony target.
547 28f <Makefile.inc-targets[11]() ⇑24b, lang=> +≡ ▵28d 28g▿
548 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
549 39 | .PHONY: fangle_docs
550 40 | fangle_docs: $(FANGLE_DOCS)
551 41 | docs: fangle_docs
552 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
553 And we define a convenient clean_fangle_docs which we add to the regular clean target.
555 28g <Makefile.inc-targets[12]() ⇑24b, lang=> +≡ ▵28f
556 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
557 42 | .PHONY: clean_fangle_docs
558 43 | clean_fangle_docs: clean_tex clean_pdf
559 44 | clean: clean_fangle_docs
561 46 | distclean_fangle_docs: clean_tex clean_fangle_docs
562 47 | distclean: clean distclean_fangle_docs
563 |________________________________________________________________________
567 If Makefile.inc is included into Makefile, then extracted files can be updated with this command:
570 make -f Makefile.inc fangle_sources
571 6.7 Boot-strapping the extraction
572 As well as having the makefile extract or update the source files as part of its operation, it also seems convenient to have the makefile re-extract itself from this document.
573 It would also be convenient to have the code that extracts the makefile from this document to also be part of this document, however we have to start somewhere and this unfortunately requires us to type at least a few words by hand to start things off.
574 Therefore we will have a minimal root fragment which, when extracted, can cope with extracting the rest of the source. This shell script fragment can do that. Its name is * — out of regard for Noweb, but when extracted it might better be called autoupdate.
578 29a <*[1](), lang=sh> ≡
579 ________________________________________________________________________
582 3 | MAKE_SRC="${1:-${NW_LYX:-../../noweb-lyx/noweb-lyx3.lyx}}"
583 4 | MAKE_SRC=‘dirname "$MAKE_SRC"‘/‘basename "$MAKE_SRC" .lyx‘
584 5 | NOWEB_SRC="${2:-${NOWEB_SRC:-$MAKE_SRC.lyx}}"
585 6 | lyx -e latex $MAKE_SRC
587 8 | fangle -R./Makefile.inc ${MAKE_SRC}.tex \
588 9 | | sed "/FANGLE_SOURCE=/s/^/#/;T;aNOWEB_SOURCE=$FANGLE_SRC" \
589 10 | | cpif ./Makefile.inc
591 12 | make -f ./Makefile.inc fangle_sources
592 |________________________________________________________________________
595 The general Makefile can be invoked with ./autoboot and can also be included into any automake file to automatically re-generate the source files.
596 The autoboot can be extracted with this command:
597 lyx -e latex fangle.lyx && \
598 fangle fangle.lyx > ./autoboot
599 This looks simple enough, but as mentioned, fangle has to be had from somewhere before it can be extracted.
600 On a unix system this will extract fangle.module and the fangle awk script, and run some basic tests.
601 To do: cross-ref to test chapter when it is a chapter all on its own
603 6.8 Incorporating Makefile.inc into existing projects
604 If you are writing a literate module of an existing non-literate program, you may find it easier to use a slight recursive make instead of directly including Makefile.inc in the project's makefile.
605 This way there is less chance of definitions in Makefile.inc interfering with definitions in the main makefile, or with definitions in other Makefile.inc files from other literate modules of the same project.
606 To do this we add some glue to the project makefile that invokes Makefile.inc in the right way. The glue works by adding a .PHONY target to call the recursive make, and adding this target as an additional pre-requisite to the existing targets.
607 Example Sub-module of existing system
608 In this example, we are building module.so as a literate module of a larger project.
609 We will show the sort of glue that can be inserted into the project's Makefile — or more likely — a regular Makefile included in or invoked by the project's Makefile.
611 30a <makefile-glue[1](), lang=> ≡ 30b▿
612 ________________________________________________________________________
613 1 | module_srcdir=modules/module
614 2 | MODULE_SOURCE=module.tm
615 3 | MODULE_STAMP=$(MODULE_SOURCE).stamp
616 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
617 The existing build system may already have a build target for module.o, but we just add another pre-requisite to that. In this case we use module.tm.stamp as a pre-requisite, the stamp file's modification time indicating when all sources were last extracted6. If the project's build system does not know how to build the module from the extracted sources, then just add build actions here as normal. ^6.
619 30b <makefile-glue[2]() ⇑30a, lang=make> +≡ ▵30a 30c▿
620 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
621 4 | $(module_srcdir)/module.o: $(module_srcdir)/$(MODULE_STAMP)
622 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
623 The target for this new pre-requisite will be generated by a recursive make using Makefile.inc, which will make sure that the source is up to date before it is built by the main project's makefile.
625 30c <makefile-glue[3]() ⇑30a, lang=> +≡ ▵30b 30d▿
626 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
627 5 | $(module_srcdir)/$(MODULE_STAMP): $(module_srcdir)/$(MODULE_SOURCE)
628 6 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc fangle_sources LITERATE_SOURCE=$(MODULE_SOURCE)
629 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
630 We can add similar glue for the docs, clean and distclean targets. In this example the main project was using a double colon for these targets, so we must use the same in our glue.
632 30d <makefile-glue[4]() ⇑30a, lang=> +≡ ▵30c
633 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
634 7 | docs:: docs_module
635 8 | .PHONY: docs_module
637 10 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc docs LITERATE_SOURCE=$(MODULE_SOURCE)
639 12 | clean:: clean_module
640 13 | .PHONY: clean_module
642 15 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc clean LITERATE_SOURCE=$(MODULE_SOURCE)
644 17 | distclean:: distclean_module
645 18 | .PHONY: distclean_module
646 19 | distclean_module:
647 20 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc distclean LITERATE_SOURCE=$(MODULE_SOURCE)
648 |________________________________________________________________________
651 We could do similarly for install targets to install the generated docs.
653 Chapter 7 Fangle awk source code
654 We use the copyright notice from chapter 2.
656 33a <./fangle[1](), lang=awk> ≡ 33b▿
657 ________________________________________________________________________
658 1 | #! /usr/bin/awk -f
659 2 | # «gpl3-copyright 4a»
660 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
661 We also use code from Arnold Robbins' public-domain getopt (1993 revision) defined in 73a, and naturally want to attribute this appropriately.
663 33b <./fangle[2]() ⇑33a, lang=> +≡ ▵33a 33c▿
664 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
665 3 | # NOTE: Arnold Robbins public domain getopt for awk is also used:
666 4 | «getopt.awk-header 71a»
667 5 | «getopt.awk-getopt() 71c»
669 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
670 And include the following chunks (which are explained further on) to make up the program:
672 33c <./fangle[3]() ⇑33a, lang=> +≡ ▵33b 36a⊳
673 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
674 7 | «helper-functions 34d»
675 8 | «mode-tracker 52a»
676 9 | «parse_chunk_args 38a»
677 10 | «chunk-storage-functions 69b»
678 11 | «output_chunk_names() 63d»
679 12 | «output_chunks() 63e»
680 13 | «write_chunk() 64a»
681 14 | «expand_chunk_args() 38b»
684 17 | «recognize-chunk 55a»
686 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
688 The portable way to erase an array in awk is to split the empty string, so we define a fangle macro that can erase an array, like this:
690 33d <awk-delete-array[1](ARRAY), lang=awk> ≡
691 ________________________________________________________________________
692 1 | split("", ${ARRAY});
693 |________________________________________________________________________
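As a quick check (a sketch, not part of the fangle listing; the sample array is made up and any POSIX awk should behave the same), we can expand the macro by hand and count the elements before and after the split:

```shell
# Count the elements of an awk array before and after clearing it with
# split("", arr) -- the portable equivalent of gawk's "delete arr".
out=$(awk 'BEGIN {
  arr["a"] = 1; arr["b"] = 2;
  n = 0; for (k in arr) n++;
  printf "before=%d ", n;
  split("", arr);              # erase the array portably
  n = 0; for (k in arr) n++;
  printf "after=%d\n", n;
}')
printf '%s\n' "$out"
```

The split of the empty string produces no fields, so the target array is left empty.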
696 For debugging it is sometimes convenient to be able to dump the contents of an array to stderr, and so this macro is also useful.
698 33e <dump-array[1](ARRAY), lang=awk> ≡
699 ________________________________________________________________________
700 1 | print "\nDump: ${ARRAY}\n--------\n" > "/dev/stderr";
701 2 | for (_x in ${ARRAY}) {
702 3 | print _x "=" ${ARRAY}[_x] "\n" > "/dev/stderr";
704 5 | print "========\n" > "/dev/stderr";
705 |________________________________________________________________________
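Expanding the macro by hand with ARRAY=a (a hypothetical two-element array) and redirecting stderr back to stdout shows the dump format; note that for-in enumeration order is unspecified in awk:

```shell
# Hand-expansion of the dump-array macro with ARRAY=a; the dump goes
# to stderr, which we fold into stdout here so it can be captured.
out=$(awk 'BEGIN {
  a["x"] = 1; a["y"] = 2;
  print "\nDump: a\n--------\n" > "/dev/stderr";
  for (_x in a) {
    print _x "=" a[_x] "\n" > "/dev/stderr";
  }
  print "========\n" > "/dev/stderr";
}' 2>&1)
printf '%s\n' "$out"
```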
709 Fatal errors are issued with the error function:
711 34a <error()[1](), lang=awk> ≡ 34b▿
712 ________________________________________________________________________
713 1 | function error(message)
715 3 | print "ERROR: " FILENAME ":" FNR " " message > "/dev/stderr";
718 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
719 and likewise for non-fatal warnings:
721 34b <error()[2]() ⇑34a, lang=awk> +≡ ▵34a 34c▿
722 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
723 6 | function warning(message)
725 8 | print "WARNING: " FILENAME ":" FNR " " message > "/dev/stderr";
728 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
729 and debug output too:
731 34c <error()[3]() ⇑34a, lang=awk> +≡ ▵34b
732 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
733 11 | function debug_log(message)
735 13 | print "DEBUG: " FILENAME ":" FNR " " message > "/dev/stderr";
737 |________________________________________________________________________
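The listing elides the function bodies' braces above; a hand-expanded sketch behaves like this (the exit 1 that makes error() fatal is an assumption based on the surrounding text, as it is not visible in the listing):

```shell
# A sketch of the fatal error() helper: print to stderr, then exit 1.
# The "exit 1" is assumed; the listing elides the closing lines.
msg=$(printf 'x\n' | awk '
  function error(message) {
    print "ERROR: " FILENAME ":" FNR " " message > "/dev/stderr";
    exit 1;
  }
  { error("bad input") }' 2>&1)
status=$?
printf '%s\n' "$msg" "status=$status"
```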
740 To do: append=helper-functions
743 34d <helper-functions[1](), lang=> ≡
744 ________________________________________________________________________
746 |________________________________________________________________________
749 Chapter 8 LaTeX and lstlistings
750 To do: Split LyX and TeXmacs parts
752 For L Y X and LaTeX, the lstlistings package is used to format the lines of code chunks. You may recall from chapter XXX that arguments to a chunk definition are pure LaTeX code. This means that fangle needs to be able to parse LaTeX a little.
753 LaTeX arguments to lstlistings macros are a comma-separated list of key-value pairs, and values containing commas are enclosed in { braces } (which is to be expected for LaTeX).
754 A sample expression is:
755 name=thomas, params={a, b}, something, something-else
756 but we see that this is just a simpler form of this expression:
757 name=freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc
758 We may consider that we need a function that can parse such LaTeX expressions and assign the values to an AWK associated array, perhaps using a recursive parser into a multi-dimensional hash1. as AWK doesn't have nested-hash support ^1, resulting in:
763 a[foo, quux, a] fleeg
766 Yet, on reflection, it also seems that such nesting is sometimes not desirable, as the braces are also used to delimit values that contain commas --- we may consider that
767 name={williamson, freddie}
768 should assign williamson, freddie to name.
769 In fact we are not so interested in the detail as to be bothered by this, which turns out to be a good thing for two reasons. Firstly, TeX has a malleable parser with no strict syntax, and secondly, whether or not williamson and freddie should count as two items will be context-dependent anyway.
770 We need to parse this LaTeX for only one reason: we are extending lstlistings to add some additional arguments which will be used to express chunk parameters and other chunk options.
771 8.1 Additional lstlistings parameters
772 Further on we define a \Chunk LaTeX macro whose arguments will consist of the chunk name, optionally followed by a comma and then a comma-separated list of arguments. In fact we will just need to prefix name= to the arguments in order to create valid lstlistings arguments.
773 There will be other arguments supported too:
774 params.As an extension to many literate-programming styles, fangle permits code chunks to take parameters and thus operate somewhat like C pre-processor macros, or like C++ templates. Chunk parameters are declared with a chunk argument called params, which holds a semi-colon separated list of parameters, like this:
775 achunk,language=C,params=name;address
776 addto.a named chunk that this chunk is to be included into. This saves the effort of having to declare another listing of the named chunk merely to include this one.
777 Function get_chunk_args() will accept two parameters: text, being the text to parse, and values, being an array to receive the parsed values as described above. The optional parameter path is used during recursion to build up the multi-dimensional array path.
779 36a <./fangle[4]() ⇑33a, lang=> +≡ ⊲33c
780 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
781 19 | =<\chunkref{get_chunk_args()}>
782 |________________________________________________________________________
786 36b <get_chunk_args()[1](), lang=> ≡ 36c▿
787 ________________________________________________________________________
788 1 | function get_chunk_args(text, values,
789 2 | # optional parameters
790 3 | path, # hierarchical precursors
793 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
794 The strategy is to parse the name and then look for a value. If the value begins with an open-brace {, then we recurse, consuming as much of the text as necessary and returning the remaining text when we encounter a leading close-brace }. This being the strategy --- and executed in a loop --- we realise that we must first look for the closing brace (perhaps preceded by white space) in order to terminate the recursion and return the remaining text.
796 36c <get_chunk_args()[2]() ⇑36b, lang=> +≡ ▵36b
797 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
799 7 | split("", next_chunk_args);
800 8 | while(length(text)) {
801 9 | if (match(text, "^ *}(.*)", a)) {
804 12 | =<\chunkref{parse-chunk-args}>
808 |________________________________________________________________________
811 We can see that the text could be inspected with this regex:
813 36d <parse-chunk-args[1](), lang=> ≡ 37a⊳
814 ________________________________________________________________________
815 1 | if (! match(text, " *([^,=]*[^,= ]) *(([,=]) *(([^,}]*) *,* *(.*))|)$", a)) {
818 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
819 and that a will have the following values:
822 2 =freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc
824 4 freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc
826 6 , foo={bar=baz, quux={quirk, a=fleeg}}, etc
828 a[3] will be either = or , and signify whether the option named in a[1] has a value or not (respectively).
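We can exercise the same pattern outside awk, here with sed -E as a stand-in for awk's match(text, regex, a); the trailing empty alternative is written as a ? quantifier for ERE portability, which keeps the group numbering identical. Group \1 is the option name, \3 the separator and \5 the value:

```shell
# Extract a[1], a[3] and a[5] from the sample expression using the
# same regex via sed -E; "(X|)" is spelled "(X)?" here.
out=$(printf '%s\n' 'name=freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc' \
  | sed -E 's/^ *([^,=]*[^,= ]) *(([,=]) *(([^,}]*) *,* *(.*)))?$/name:\1 sep:\3 value:\5/')
printf '%s\n' "$out"
```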
829 If the option does have a value, then if the expression substr(a[4],1,1) returns a brace { it will signify that we need to recurse:
831 37a <parse-chunk-args[2]() ⇑36d, lang=> +≡ ⊲36d
832 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
834 5 | if (a[3] == "=") {
835 6 | if (substr(a[4],1,1) == "{") {
836 7 | text = get_chunk_args(substr(a[4],2), values, path name SUBSEP);
838 9 | values[path name]=a[5];
842 13 | values[path name]="";
845 |________________________________________________________________________
848 We can test this function like this:
850 37b <gca-test.awk[1](), lang=> ≡
851 ________________________________________________________________________
852 1 | =<\chunkref{get_chunk_args()}>
856 5 | print get_chunk_args("name=freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc", a);
858 7 | print "a[" b "] => " a[b];
861 |________________________________________________________________________
864 which should give this output:
866 37c <gca-test.awk-results[1](), lang=> ≡
867 ________________________________________________________________________
868 1 | a[foo.quux.quirk] =>
869 2 | a[foo.quux.a] => fleeg
870 3 | a[foo.bar] => baz
872 5 | a[name] => freddie
873 |________________________________________________________________________
876 8.2 Parsing chunk arguments
877 Arguments to parameterized chunks are expressed in round brackets as a comma-separated list of optional arguments. For example, a chunk that is defined with:
878 \Chunk{achunk, params=name ; address}
880 \chunkref{achunk}(John Jones, jones@example.com)
881 An argument list may be as simple as in \chunkref{pull}(thing, otherthing) or as complex as:
882 \chunkref{pull}(things[x, y], get_other_things(a, "(all)"))
883 --- which for all its commas, quotes and parentheses represents only two parameters: things[x, y] and get_other_things(a, "(all)").
884 If we simply split the parameter list on commas, then the comma in things[x, y] would split it into two separate arguments, things[x and y], neither of which makes sense on its own.
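A two-line sketch (plain awk, not part of the fangle listing) shows the damage a naive split would do to this example:

```shell
# Naively splitting the argument list on commas breaks things[x, y]
# apart, yielding four fields where only two parameters were intended.
out=$(awk 'BEGIN {
  n = split("things[x, y], get_other_things(a, \"(all)\")", parts, ",");
  print n;
}')
printf '%s\n' "$out"
```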
885 One way to prevent this would be to refuse to split text between matching delimiters, such as [, ], (, ), {, } and most likely also " and '. Of course this also makes it impossible to pass such mis-matched code fragments as parameters, but I think that it would be hard for readers to cope with authors who would pass such unbalanced code fragments as chunk parameters2. I know that I couldn't cope with users doing such things, and although the GPL3 license prevents me from actually forbidding anyone from trying, if they want it to work they'll have to write the code themselves and not expect any support from me. ^2.
886 Unfortunately, the full set of matching delimiters may vary from language to language. In certain C++ template contexts, < and > would count as delimiters, and yet in other contexts they would not.
887 This puts me in the unfortunate position of having to somewhat parse all programming languages without knowing what they are!
888 However, if this universal mode-tracking is possible, then parsing the arguments would be trivial. Such a mode tracker is described in chapter 9 and used here with simplicity.
890 38a <parse_chunk_args[1](), lang=> ≡
891 ________________________________________________________________________
892 1 | function parse_chunk_args(language, text, values, mode,
894 3 | c, context, rest)
896 5 | =<\chunkref{new-mode-tracker}(context, language, mode)>
897 6 | rest = mode_tracker(context, text, values);
899 8 | for(c=1; c <= context[0, "values"]; c++) {
900 9 | values[c] = context[0, "values", c];
904 |________________________________________________________________________
907 8.3 Expanding parameters in the text
908 Within the body of the chunk, the parameters are referred to with: ${name} and ${address}. There is a strong case that a LaTeX style notation should be used, like \param{name} which would be expressed in the listing as =<\param{name}> and be rendered as ${name}. Such notation would make me go blind, but I do intend to adopt it.
909 We therefore need a function expand_chunk_args which will take a block of text, a list of permitted parameters, and the arguments which must substitute for the parameters.
910 Here we split the text on ${ which means that all parts except the first will begin with a parameter name which will be terminated by }. The split function will consume the literal ${ in each case.
912 38b <expand_chunk_args()[1](), lang=> ≡
913 ________________________________________________________________________
914 1 | function expand_chunk_args(text, params, args,
915 2 | p, text_array, next_text, v, t, l)
917 4 | if (split(text, text_array, "\\${")) {
918 5 | «substitute-chunk-args 39a»
923 |________________________________________________________________________
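A sketch of that split (the sample text is made up; any awk will do) shows that every part after the first starts with a parameter name still carrying its closing brace:

```shell
# Splitting on the literal ${ consumes the delimiter, leaving the
# parameter name (terminated by }) at the start of parts 2..n.
out=$(awk 'BEGIN {
  n = split("Dear ${name}, write to ${address}.", parts, "\\$\\{");
  print n ": " parts[2];
}')
printf '%s\n' "$out"
```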
926 First, we produce an associative array of substitution values indexed by parameter names. This will serve as a cache, allowing us to look up the replacement values as we extract each name.
928 39a <substitute-chunk-args[1](), lang=> ≡ 39b▿
929 ________________________________________________________________________
930 1 | for(p in params) {
931 2 | v[params[p]]=args[p];
933 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
934 We accumulate the substituted text in the variable text. As the first part from the split function is the part before the delimiter --- which is ${ in our case --- this part will never contain a parameter reference, so we assign it directly to the result kept in the variable text.
936 39b <substitute-chunk-args[2]() ⇑39a, lang=> +≡ ▵39a 39c▿
937 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
938 4 | text=text_array[1];
939 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
940 We then iterate over the remaining values in the array3. I don't know why I think that it will enumerate the array in order, but it seems to work ^3
941 To do: fix or prove it
942 , and substitute each reference with its argument.
944 39c <substitute-chunk-args[3]() ⇑39a, lang=> +≡ ▵39b
945 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
946 5 | for(t=2; t in text_array; t++) {
947 6 | =<\chunkref{substitute-chunk-arg}>
949 |________________________________________________________________________
952 After the split on ${, a valid parameter reference will consist of a valid parameter name terminated by a close-brace }. A valid parameter name begins with an underscore or a letter, and may contain letters, digits or underscores.
953 A valid-looking reference that is not actually the name of a parameter will be left alone and not substituted. This is good because there is nothing to substitute anyway, and it avoids clashes when writing code for languages where ${...} is a valid construct --- such constructs will not be interfered with unless the parameter name also matches.
955 39d <substitute-chunk-arg[1](), lang=> ≡
956 ________________________________________________________________________
957 1 | if (match(text_array[t], "^([a-zA-Z_][a-zA-Z0-9_]*)}", l) &&
960 4 | text = text v[l[1]] substr(text_array[t], length(l[1])+2);
962 6 | text = text "${" text_array[t];
964 |________________________________________________________________________
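A hand-expanded, portable sketch of this logic (using RSTART/RLENGTH rather than gawk's three-argument match; the sample text and the parameter value "Alice" are made up) shows a declared parameter being substituted while ${PATH}, which is not a declared parameter, survives untouched:

```shell
# Mini expand_chunk_args: substitute declared parameters after
# splitting on ${, and re-emit ${ for anything not in the table.
out=$(awk 'BEGIN {
  v["name"] = "Alice";                     # declared parameters
  text = "hello ${name}; path is ${PATH}";
  n = split(text, parts, "\\$\\{");
  result = parts[1];
  for (t = 2; t <= n; t++) {
    pname = "";
    if (match(parts[t], "^[a-zA-Z_][a-zA-Z0-9_]*}"))
      pname = substr(parts[t], 1, RLENGTH - 1);
    if (pname != "" && pname in v)
      result = result v[pname] substr(parts[t], RLENGTH + 1);
    else
      result = result "${" parts[t];       # not a parameter: restore ${
  }
  print result;
}')
printf '%s\n' "$out"
```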
967 Chapter 9 Language Modes & Quoting
969 lstlistings and fangle both recognize source languages and perform some basic parsing. lstlistings can detect strings and comments within a language definition and perform suitable rendering, such as italics for comments and visible spaces within strings.
970 Fangle can similarly recognize strings, comments, etc., within a language, so that any chunks included with \chunkref can be suitably escaped or quoted.
971 9.1.1 Modes to keep code together
972 As an example, in the C language there are a few parse modes which affect the interpretation of characters.
973 One parse mode is the string mode. The string mode is commenced by an un-escaped quotation mark " and terminated by the same. Within the string mode, only one additional mode can be commenced: the backslash mode \, which is always terminated after the following character.
974 Another mode is [, which is terminated by a ] (unless it occurs in a string).
975 Consider this fragment of C code:
977 things([x, y]), get_other_things((a, "(all)"))
with the modes marked: 1. the [ mode spans [x, y]; 2. the ( mode spans the argument list of get_other_things; 3. the " mode spans "(all)".
979 Mode nesting prevents the close parenthesis in the quoted string (part 3) from terminating the parenthesis mode (part 2).
980 Each language has a set of modes, the default mode being the null mode. Each mode can lead to other modes.
981 9.1.2 Modes affect included chunks
982 For instance, consider this chunk with language=perl:
984 41a <example-perl[1](), lang=perl> ≡
985 ________________________________________________________________________
986 print "hello world $0\n";
987 |________________________________________________________________________
990 If it were included in a chunk with language=sh, like this:
992 41b <example-sh[1](), lang=sh> ≡
993 ________________________________________________________________________
994 perl -e "=<\chunkref{example-perl}>"
995 |________________________________________________________________________
998 fangle would want to generate output like this:
999 perl -e "print \"hello world \$0\\n\";"
1000 See that the double quote ", back-slash \ and $ have been quoted with a back-slash to protect them from shell interpretation.
1001 If that were then included in a chunk with language=make, like this:
1003 42a <example-makefile[1](), lang=make> ≡
1004 ________________________________________________________________________
1006 2 | =<\chunkref{example-sh}>
1007 |________________________________________________________________________
1010 We would need the output to look like this --- note the $$:
1012 perl -e "print \"hello world \$$0\\n\";"
1013 In order to make this work, we need to define a mode-tracker supporting each language, one that can detect the various quoting modes and provide a transformation that must be applied to any included text, so that the included text will be interpreted correctly after any interpolation it may be subject to at run-time.
1014 For example, the sed transformation for text to be inserted into shell double-quoted strings would be something like:
1015 s/\\/\\\\/g;s/$/\\$/g;s/"/\\"/g;
1016 which protects \ $ ".
1017 To do: I don't think this example is true
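We can at least try the transformation on the perl fragment from above. This sketch escapes the $ in the second expression (s/\$/\\$/g); written with an unescaped $ as in the listing, that expression would instead anchor at end-of-line, which may be what the to-do note is about:

```shell
# Apply the quoting transformation for sh double-quoted strings to the
# perl fragment; note the escaped \$ in the second sed expression.
out=$(printf '%s\n' 'print "hello world $0\n";' \
  | sed 's/\\/\\\\/g; s/\$/\\$/g; s/"/\\"/g')
printf '%s\n' "$out"
```

The result matches the output fangle is said to want to generate for the sh chunk.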
1018 The mode tracker must also track nested mode-changes, as in this sh example.
1019 echo "hello ‘id ...‘"
1021 Any characters inserted at the point marked ↑ would need to be escaped, including ‘ | * among others. First they would need escaping for the back-ticks ‘, and then for the double-quotes ".
1023 Escaping need not occur if the format and mode of the included chunk matches that of the including chunk.
1024 As each chunk is output, a new mode tracker for that language is initialized in its normal state. As text is output for that chunk, the output mode is tracked. When a new chunk is included, a transformation appropriate to that mode is selected and pushed onto a stack of transformations. Any text to be output is first passed through this stack of transformations.
1025 It remains to consider whether the chunk-include function should return its generated text so that the caller can apply any transformations (and formatting), or whether it should apply the stack of transformations itself.
1026 Note that the transformed text should have the property of not being able to change the mode in the current chunk.
1027 To do: Note chunk parameters should probably also be transformed
1029 9.2 Language Mode Definitions
1030 All modes are stored in a single multi-dimensional hash. The first index is the language, and the second index is the mode-identifier. The third index selects one of terminators, and optionally, submodes or delimiters.
1031 A useful set of mode definitions for a nameless general C-type language is shown here. (Don't be confused by the double backslash escaping needed in awk. One set of escaping is for the string, and the second set of escaping is for the regex).
1032 To do: TODO: Add =<\mode{}> command which will allow us to signify that a string is
1033 regex and thus fangle will quote it for us.
1035 Submodes are entered by the characters " ' { ( [ /*
1037 43a <common-mode-definitions[1](language), lang=> ≡ 43b▿
1038 ________________________________________________________________________
1039 1 | modes[${language}, "", "submodes"]="\\\\|\"|'|{|\\(|\\[";
1040 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1041 In the default mode, a comma surrounded by un-important white space is a delimiter of language items1. whatever a language item might be ^1.
1043 43b <common-mode-definitions[2](language) ⇑43a, lang=> +≡ ▵43a 43d▿
1044 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1045 2 | modes[${language}, "", "delimiters"]=" *, *";
1046 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1047 and should pass this test:
1048 To do: Why do the tests run in ?(? mode and not ?? mode
1051 43c <test:mode-definitions[1](), lang=> ≡ 44g⊳
1052 ________________________________________________________________________
1053 1 | parse_chunk_args("c-like", "1,2,3", a, "");
1054 2 | if (a[1] != "1") e++;
1055 3 | if (a[2] != "2") e++;
1056 4 | if (a[3] != "3") e++;
1057 5 | if (length(a) != 3) e++;
1058 6 | =<\chunkref{pca-test.awk:summary}>
1060 8 | parse_chunk_args("c-like", "joe, red", a, "");
1061 9 | if (a[1] != "joe") e++;
1062 10 | if (a[2] != "red") e++;
1063 11 | if (length(a) != 2) e++;
1064 12 | =<\chunkref{pca-test.awk:summary}>
1066 14 | parse_chunk_args("c-like", "${colour}", a, "");
1067 15 | if (a[1] != "${colour}") e++;
1068 16 | if (length(a) != 1) e++;
1069 17 | =<\chunkref{pca-test.awk:summary}>
1070 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1071 Nested modes are identified by a backslash, a double or single quote, various bracket styles, or a /* comment.
1072 For each of these sub-modes we must also identify a mode terminator, and any sub-modes or delimiters that may be entered2. Because we are using the sub-mode characters as the mode identifier, it means we can't currently have a mode character dependent on its context; i.e. { can't behave differently when it is inside [. ^2.
1074 The backslash mode has no submodes or delimiters, and is terminated by any character. Note that we are not so much interested in evaluating or interpolating content as we are in delineating content. It is no matter that a double backslash (\\) may represent a single backslash while a backslash-newline may represent white space, but it does matter that the newline in a backslash newline should not be able to terminate a C pre-processor statement; and so the newline will be consumed by the backslash however it is to be interpreted.
1076 43d <common-mode-definitions[3](language) ⇑43a, lang=> +≡ ▵43b 44f⊳
1077 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1078 3 | modes[${language}, "\\", "terminators"]=".";
1079 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
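The effect of making "." the terminator can be sketched in isolation. The following is a simplified stand-alone illustration in shell and gawk (an assumption-laden sketch, not fangle's actual parser): once a backslash is seen, the next character, whatever it is, is consumed with it.

```shell
# Simplified sketch, not fangle's parser: each backslash consumes the
# following character (shown here as "[esc]"), so the newline in a
# backslash-newline cannot act as a bare newline for the enclosing mode.
printf '#define X 1 \\\n+ 2\nint y;\n' | awk '
  { line = line $0 "\n" }
  END {
    out = ""
    for (i = 1; i <= length(line); i++) {
      c = substr(line, i, 1)
      if (c == "\\") { i++; out = out "[esc]" }   # backslash eats next char
      else out = out c
    }
    printf "%s", out
  }'
```

Here the backslash-newline pair in the #define line is swallowed as one token, so the directive is not terminated early.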
1081 Common languages support two kinds of string quoting: double quotes and single quotes.
1082 Within a string we have one special sub-mode, the backslash. It may escape an embedded quote and prevent us from thinking that the quote terminates the string.
1084 44a <mode:common-string[1](language, quote), lang=> ≡ 44b▿
1085 ________________________________________________________________________
1086 1 | modes[${language}, ${quote}, "submodes"]="\\\\";
1087 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1088 Otherwise, the string will be terminated by the same character that commenced it.
1090 44b <mode:common-string[2](language, quote) ⇑44a, lang=> +≡ ▵44a 44c▿
1091 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1092 2 | modes[${language}, ${quote}, "terminators"]=${quote};
1093 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1094 In C-type languages, certain escape sequences exist in strings. We need to define a mechanism to encode any chunks included in this mode using those escape sequences. These are expressed in two parts: s meaning search, and r meaning replace.
1095 The first substitution is to replace a backslash with a double backslash. We do this first as other substitutions may introduce a backslash which we would not then want to escape again here.
1096 Note: Backslashes need double-escaping in the search pattern but not in the replacement string, hence we are replacing a literal \ with a literal \\.
1098 44c <mode:common-string[3](language, quote) ⇑44a, lang=> +≡ ▵44b 44d▿
1099 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1100 3 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]="\\\\";
1101 4 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\\\";
1102 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
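The ordering can be verified with a small stand-alone sketch (plain shell and awk, not fangle itself), where reversing the two substitutions corrupts the result:

```shell
# Sketch: backslash-doubling must precede quote-escaping. In the gsub
# replacement text, "&&" doubles the matched backslash, and "\\\\&"
# prefixes the matched quote with a backslash.
s='say "hi" \ bye'
right=$(printf '%s' "$s" | awk '{ gsub(/\\/, "&&"); gsub(/"/, "\\\\&"); print }')
wrong=$(printf '%s' "$s" | awk '{ gsub(/"/, "\\\\&"); gsub(/\\/, "&&"); print }')
printf 'right: %s\n' "$right"
printf 'wrong: %s\n' "$wrong"
```

The first order produces say \"hi\" \\ bye, while the reversed order produces say \\"hi\\" \\ bye: the backslash introduced by the quote escape has itself been doubled.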
1103 If the quote character occurs in the text, it should be preceded by a backslash, otherwise it would terminate the string unexpectedly.
1105 44d <mode:common-string[4](language, quote) ⇑44a, lang=> +≡ ▵44c 44e▿
1106 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1107 5 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]=${quote};
1108 6 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\" ${quote};
1109 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1110 Any newlines in the string must be replaced by \n.
1112 44e <mode:common-string[5](language, quote) ⇑44a, lang=> +≡ ▵44d
1113 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1114 7 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]="\n";
1115 8 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\n";
1116 |________________________________________________________________________
1119 For the common modes, we define this string handling for double and single quotes.
1121 44f <common-mode-definitions[4](language) ⇑43a, lang=> +≡ ⊲43d 45b⊳
1122 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1123 4 | =<\chunkref{mode:common-string}(${language}, "\textbackslash{}"")>
1124 5 | =<\chunkref{mode:common-string}(${language}, "'")>
1125 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1126 Working strings should pass this test:
1128 44g <test:mode-definitions[2]() ⇑43c, lang=> +≡ ⊲43c 47d⊳
1129 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1130 18 | parse_chunk_args("c-like", "say \"I said, \\\"Hello, how are you\\\".\", for me", a, "");
1131 19 | if (a[1] != "say \"I said, \\\"Hello, how are you\\\".\"") e++;
1132 20 | if (a[2] != "for me") e++;
1133 21 | if (length(a) != 2) e++;
1134 22 | =<\chunkref{pca-test.awk:summary}>
1135 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1136 9.2.3 Parentheses, Braces and Brackets
1137 Whereas quotes are closed by the same character that opened them, parentheses, brackets and braces are closed by an alternate character.
1139 45a <mode:common-brackets[1](language, open, close), lang=> ≡
1140 ________________________________________________________________________
1141 1 | modes[${language}, ${open}, "submodes" ]="\\\\|\"|{|\\(|\\[|'|/\\*";
1142 2 | modes[${language}, ${open}, "delimiters"]=" *, *";
1143 3 | modes[${language}, ${open}, "terminators"]=${close};
1144 |________________________________________________________________________
1147 Note that the open token is NOT a regex but the close token IS.
1148 To do: When we can quote regex we won't have to put the slashes in here
1151 45b <common-mode-definitions[5](language) ⇑43a, lang=> +≡ ⊲44f
1152 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1153 6 | =<\chunkref{mode:common-brackets}(${language}, "{", "}")>
1154 7 | =<\chunkref{mode:common-brackets}(${language}, "[", "\textbackslash{}\textbackslash{}]")>
1155 8 | =<\chunkref{mode:common-brackets}(${language}, "(", "\textbackslash{}\textbackslash{})")>
1156 |________________________________________________________________________
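The asymmetry can be seen in a stand-alone sketch (a simplification for illustration, not fangle's own code): the open token only ever serves as an array subscript, while the terminator string is interpolated into a dynamic regex, so it must arrive already regex-quoted.

```shell
# Sketch: the open token "(" is a plain array key; the terminator "\\)" is
# spliced into a dynamic regex, so regex metacharacters must be escaped.
awk 'BEGIN {
  opentok = "("; closetok = "\\)"
  modes["c-like", opentok, "terminators"] = closetok
  if (match("x, y) rest", "(" modes["c-like", opentok, "terminators"] ")"))
    print "terminator at " RSTART
}'
```

The ")" at offset 5 is found by the regex; had closetok been a bare ")", the dynamic regex would have contained an unbalanced parenthesis.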
1159 9.2.4 Customizing Standard Modes
1161 45c <mode:add-submode[1](language, mode, submode), lang=> ≡
1162 ________________________________________________________________________
1163 1 | modes[${language}, ${mode}, "submodes"] = modes[${language}, ${mode}, "submodes"] "|" ${submode};
1164 |________________________________________________________________________
1168 45d <mode:add-escapes[1](language, mode, search, replace), lang=> ≡
1169 ________________________________________________________________________
1170 1 | escapes[${language}, ${mode}, ++escapes[${language}, ${mode}], "s"]=${search};
1171 2 | escapes[${language}, ${mode}, escapes[${language}, ${mode}], "r"]=${replace};
1172 |________________________________________________________________________
1177 We can define /* comment */ style comments and //comment style comments to be added to any language:
1179 45e <mode:multi-line-comments[1](language), lang=> ≡
1180 ________________________________________________________________________
1181 1 | =<\chunkref{mode:add-submode}(${language}, "", "/\textbackslash{}\textbackslash{}*")>
1182 2 | modes[${language}, "/*", "terminators"]="\\*/";
1183 |________________________________________________________________________
1187 45f <mode:single-line-slash-comments[1](language), lang=> ≡
1188 ________________________________________________________________________
1189 1 | =<\chunkref{mode:add-submode}(${language}, "", "//")>
1190 2 | modes[${language}, "//", "terminators"]="\n";
1191 3 | =<\chunkref{mode:add-escapes}(${language}, "//", "\textbackslash{}n", "\textbackslash{}n//")>
1192 |________________________________________________________________________
1195 We can also define # comment style comments (as used in awk and shell scripts) in a similar manner.
1196 To do: I'm having to use \# for hash and \textbackslash{} for backslash, and have hacky work-arounds in the parser for now
1199 45g <mode:add-hash-comments[1](language), lang=> ≡
1200 ________________________________________________________________________
1201 1 | =<\chunkref{mode:add-submode}(${language}, "", "\#")>
1202 2 | modes[${language}, "#", "terminators"]="\n";
1203 3 | =<\chunkref{mode:add-escapes}(${language}, "\#", "\textbackslash{}n", "\textbackslash{}n\#")>
1204 |________________________________________________________________________
1207 In C, the # denotes pre-processor directives, which can be multi-line.
1209 46a <mode:add-hash-defines[1](language), lang=> ≡
1210 ________________________________________________________________________
1211 1 | =<\chunkref{mode:add-submode}(${language}, "", "\#")>
1212 2 | modes[${language}, "#", "submodes" ]="\\\\";
1213 3 | modes[${language}, "#", "terminators"]="\n";
1214 4 | =<\chunkref{mode:add-escapes}(${language}, "\#", "\textbackslash{}n", "\textbackslash{}\textbackslash{}\textbackslash{}\textbackslash{}\textbackslash{}n")>
1215 |________________________________________________________________________
1219 46b <mode:quote-dollar-escape[1](language, quote), lang=> ≡
1220 ________________________________________________________________________
1221 1 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]="\\$";
1222 2 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\$";
1223 |________________________________________________________________________
1226 We can add these definitions to various languages:
1228 46c <mode-definitions[1](), lang=> ≡ 47b⊳
1229 ________________________________________________________________________
1230 1 | «common-mode-definitions("c-like") 43a»
1232 3 | «common-mode-definitions("c") 43a»
1233 4 | =<\chunkref{mode:multi-line-comments}("c")>
1234 5 | =<\chunkref{mode:single-line-slash-comments}("c")>
1235 6 | =<\chunkref{mode:add-hash-defines}("c")>
1237 8 | =<\chunkref{common-mode-definitions}("awk")>
1238 9 | =<\chunkref{mode:add-hash-comments}("awk")>
1239 10 | =<\chunkref{mode:add-naked-regex}("awk")>
1240 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1241 The awk definitions should allow a comment block like this:
1243 46d <test:comment-quote[1](), lang=awk> ≡
1244 ________________________________________________________________________
1245 1 | # Comment: =<\chunkref{test:comment-text}>
1246 |________________________________________________________________________
1250 46e <test:comment-text[1](), lang=> ≡
1251 ________________________________________________________________________
1252 1 | Now is the time for
1253 2 | the quick brown fox to bring lemonade
1255 |________________________________________________________________________
1258 to come out like this:
1260 46f <test:comment-quote:result[1](), lang=> ≡
1261 ________________________________________________________________________
1262 1 | # Comment: Now is the time for
1263 2 | #the quick brown fox to bring lemonade
1265 |________________________________________________________________________
1268 The C definition for such a block should have it come out like this:
1270 46g <test:comment-quote:C-result[1](), lang=> ≡
1271 ________________________________________________________________________
1272 1 | # Comment: Now is the time for\
1273 2 | the quick brown fox to bring lemonade\
1275 |________________________________________________________________________
1279 This pattern is incomplete, but is meant to detect naked regular expressions in awk and perl, e.g. /.*$/; however, the required capabilities are not present.
1280 Currently it only detects regexes anchored with ^, as used in fangle.
1281 For full regex support, modes need to be named not after their starting character, but some other more fully qualified name.
1283 47a <mode:add-naked-regex[1](language), lang=> ≡
1284 ________________________________________________________________________
1285 1 | =<\chunkref{mode:add-submode}(${language}, "", "/\textbackslash{}\textbackslash{}\^")>
1286 2 | modes[${language}, "/^", "terminators"]="/";
1287 |________________________________________________________________________
1292 47b <mode-definitions[2]() ⇑46c, lang=> +≡ ⊲46c 47c▿
1293 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1294 11 | =<\chunkref{common-mode-definitions}("perl")>
1295 12 | =<\chunkref{mode:multi-line-comments}("perl")>
1296 13 | =<\chunkref{mode:add-hash-comments}("perl")>
1297 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1298 We still need to add s/, submode /, and terminate both with //. This is likely to be impossible, as perl regexes can contain perl.
1301 47c <mode-definitions[3]() ⇑46c, lang=> +≡ ▵47b
1302 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1303 14 | =<\chunkref{common-mode-definitions}("sh")>
1304 15 | #<\chunkref{mode:common-string}("sh", "\textbackslash{}"")>
1305 16 | #<\chunkref{mode:common-string}("sh", "'")>
1306 17 | =<\chunkref{mode:add-hash-comments}("sh")>
1307 18 | =<\chunkref{mode:quote-dollar-escape}("sh", "\"")>
1308 |________________________________________________________________________
1312 Also, the parser must return any spare text at the end that has not been processed due to a mode terminator being found.
1314 47d <test:mode-definitions[3]() ⇑43c, lang=> +≡ ⊲44g 47e▿
1315 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1316 23 | rest = parse_chunk_args("c-like", "1, 2, 3) spare", a, "(");
1317 24 | if (a[1] != 1) e++;
1318 25 | if (a[2] != 2) e++;
1319 26 | if (a[3] != 3) e++;
1320 27 | if (length(a) != 3) e++;
1321 28 | if (rest != " spare") e++;
1322 29 | =<\chunkref{pca-test.awk:summary}>
1323 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1324 We must also be able to parse the example given earlier.
1326 47e <test:mode-definitions[4]() ⇑43c, lang=> +≡ ▵47d
1327 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1328 30 | parse_chunk_args("c-like", "things[x, y], get_other_things(a, \"(all)\"), 99", a, "(");
1329 31 | if (a[1] != "things[x, y]") e++;
1330 32 | if (a[2] != "get_other_things(a, \"(all)\")") e++;
1331 33 | if (a[3] != "99") e++;
1332 34 | if (length(a) != 3) e++;
1333 35 | =<\chunkref{pca-test.awk:summary}>
1334 |________________________________________________________________________
1337 9.4 A non-recursive mode tracker
1339 The mode tracker holds its state in a stack based on a numerically indexed hash. This function, when passed an empty hash, will initialize it.
1341 48a <new_mode_tracker()[1](), lang=> ≡
1342 ________________________________________________________________________
1343 1 | function new_mode_tracker(context, language, mode) {
1344 2 | context[""] = 0;
1345 3 | context[0, "language"] = language;
1346 4 | context[0, "mode"] = mode;
1348 |________________________________________________________________________
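The stack-in-a-hash convention can be sketched in isolation (a hand-written illustration, not fangle's own code): context[""] holds the index of the top frame, and each frame's fields live under (index, field) keys.

```shell
# Sketch of the tracker's stack convention: context[""] is the top index;
# pushing increments it, popping decrements it.
awk 'BEGIN {
  context[""] = 0                      # frame 0: outermost mode
  context[0, "mode"] = ""
  top = ++context[""]                  # push a frame for a string sub-mode
  context[top, "mode"] = "\""
  print "depth " context[""] ", mode " context[context[""], "mode"]
  context[""] = --top                  # pop back to the outer mode
  print "depth " context[""]
}'
```

This is the same shape the real tracker manipulates when it enters and leaves sub-modes.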
1351 Because awk functions cannot return an array, we must create the array first and pass it in, so we have a fangle macro to do this:
1353 48b <new-mode-tracker[1](context, language, mode), lang=awk> ≡
1354 ________________________________________________________________________
1355 1 | «awk-delete-array(context) 33d»
1356 2 | new_mode_tracker(${context}, ${language}, ${mode});
1357 |________________________________________________________________________
1361 And for tracking modes, we dispatch to a mode-tracker action based on the current language
1363 48c <mode_tracker[1](), lang=awk> ≡ 48d▿
1364 ________________________________________________________________________
1365 1 | function push_mode_tracker(context, language, mode,
1369 5 | if (! ("" in context)) {
1370 6 | «new-mode-tracker(context, language, mode) 48b»
1372 8 | top = context[""];
1373 9 | if (context[top, "language"] == language && mode=="") mode = context[top, "mode"];
1375 11 | context[top, "language"] = language;
1376 12 | context[top, "mode"] = mode;
1377 13 | context[""] = top;
1380 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1382 48d <mode_tracker[2]() ⇑48c, lang=> +≡ ▵48c 48e▿
1383 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1384 16 | function dump_mode_tracker(context,
1387 19 | for(c=0; c <= context[""]; c++) {
1388 20 | printf(" %2d %s:%s\n", c, context[c, "language"], context[c, "mode"]) > "/dev/stderr";
1389 21 | for(d=1; ( (c, "values", d) in context); d++) {
1390 22 | printf(" %2d %s\n", d, context[c, "values", d]) > "/dev/stderr";
1394 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1396 48e <mode_tracker[3]() ⇑48c, lang=> +≡ ▵48d 53a⊳
1397 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1398 26 | function finalize_mode_tracker(context)
1400 28 | if ( ("" in context) && context[""] != 0) return 0;
1403 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1404 This implies that any chunk must be syntactically whole; for instance, this is fine:
1406 49a <test:whole-chunk[1](), lang=> ≡
1407 ________________________________________________________________________
1409 2 | =<\chunkref{test:say-hello}>
1411 |________________________________________________________________________
1415 49b <test:say-hello[1](), lang=> ≡
1416 ________________________________________________________________________
1418 |________________________________________________________________________
1421 But this is not fine; the chunk <test:hidden-else 49d> is not properly cromulent.
1423 49c <test:partial-chunk[1](), lang=> ≡
1424 ________________________________________________________________________
1426 2 | =<\chunkref{test:hidden-else}>
1428 |________________________________________________________________________
1432 49d <test:hidden-else[1](), lang=> ≡
1433 ________________________________________________________________________
1434 1 | print "I'm fine";
1436 3 | print "I'm not";
1437 |________________________________________________________________________
1440 These tests will check for correct behaviour:
1442 49e <test:cromulence[1](), lang=> ≡
1443 ________________________________________________________________________
1444 1 | echo Cromulence test
1445 2 | passtest $FANGLE -Rtest:whole-chunk $TEX_SRC &>/dev/null || ( echo "Whole chunk failed" && exit 1 )
1446 3 | failtest $FANGLE -Rtest:partial-chunk $TEX_SRC &>/dev/null || ( echo "Partial chunk failed" && exit 1 )
1447 |________________________________________________________________________
1451 We must avoid recursion as a language construct because we intend to employ mode-tracking to track the language mode of emitted code, and the code is emitted from a function which is itself recursive; so instead we implement pseudo-recursion using our own stack based on a hash.
1453 49f <mode_tracker()[1](), lang=awk> ≡ 49g▿
1454 ________________________________________________________________________
1455 1 | function mode_tracker(context, text, values,
1456 2 | # optional parameters
1458 4 | mode, submodes, language,
1459 5 | cindex, c, a, part, item, name, result, new_values, new_mode,
1460 6 | delimiters, terminators)
1462 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1463 We could be re-commencing with a valid context, so we need to set up the state according to the last context.
1465 49g <mode_tracker()[2]() ⇑49f, lang=> +≡ ▵49f 50c⊳
1466 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1467 8 | cindex = context[""] + 0;
1468 9 | mode = context[cindex, "mode"];
1469 10 | language = context[cindex, "language" ];
1470 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1471 First we construct a single large regex combining the possible sub-modes for the current mode along with the terminators for the current mode.
1473 50a <parse_chunk_args-reset-modes[1](), lang=> ≡ 50b▿
1474 ________________________________________________________________________
1475 1 | submodes=modes[language, mode, "submodes"];
1477 3 | if ((language, mode, "delimiters") in modes) {
1478 4 | delimiters = modes[language, mode, "delimiters"];
1479 5 | if (length(submodes)>0) submodes = submodes "|";
1480 6 | submodes=submodes delimiters;
1481 7 | } else delimiters="";
1482 8 | if ((language, mode, "terminators") in modes) {
1483 9 | terminators = modes[language, mode, "terminators"];
1484 10 | if (length(submodes)>0) submodes = submodes "|";
1485 11 | submodes=submodes terminators;
1486 12 | } else terminators="";
1487 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1488 If we don't find anything to match on --- probably because the language is not supported --- then we return the entire text without matching anything.
1490 50b <parse_chunk_args-reset-modes[2]() ⇑50a, lang=> +≡ ▵50a
1491 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1492 13 | if (! length(submodes)) return text;
1493 |________________________________________________________________________
1497 50c <mode_tracker()[3]() ⇑49f, lang=> +≡ ⊲49g 50d▿
1498 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1499 11 | =<\chunkref{parse_chunk_args-reset-modes}>
1500 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1501 We then iterate the text (until there is none left) looking for sub-modes or terminators in the regex.
1503 50d <mode_tracker()[4]() ⇑49f, lang=> +≡ ▵50c 50e▿
1504 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1505 12 | while((cindex >= 0) && length(text)) {
1506 13 | if (match(text, "(" submodes ")", a)) {
1507 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1508 A bug that creeps in regularly during development is bad regexes of zero length which result in an infinite loop (as no text is consumed), so I catch that right away with this test.
1510 50e <mode_tracker()[5]() ⇑49f, lang=> +≡ ▵50d 50f▿
1511 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1512 14 | if (RLENGTH<1) {
1513 15 | error(sprintf("Internal error, matched zero length submode, should be impossible - likely regex computation error\n" \
1514 16 | "Language=%s\nmode=%s\nmatch=%s\n", language, mode, submodes));
1516 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1517 part is defined as the text up to the sub-mode or terminator, and this is appended to item --- which is the current text being gathered. If a mode has a delimiter, then item is reset each time a delimiter is found.
1518 For example, in "hello, there", he said. the quoted text "hello, there" forms one item (the comma inside the string is not a delimiter) and he said. forms the next item.
1520 50f <mode_tracker()[6]() ⇑49f, lang=> +≡ ▵50e 50g▿
1521 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1522 18 | part = substr(text, 1, RSTART -1);
1523 19 | item = item part;
1524 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1525 We must now determine what was matched. If it was a terminator, then we must restore the previous mode.
1527 50g <mode_tracker()[7]() ⇑49f, lang=> +≡ ▵50f 51a⊳
1528 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1529 20 | if (match(a[1], "^" terminators "$")) {
1530 21 | #printf("%2d EXIT MODE [%s] by [%s] [%s]\n", cindex, mode, a[1], text) > "/dev/stderr"
1531 22 | context[cindex, "values", ++context[cindex, "values"]] = item;
1532 23 | delete context[cindex];
1533 24 | context[""] = --cindex;
1534 25 | if (cindex>=0) {
1535 26 | mode = context[cindex, "mode"];
1536 27 | language = context[cindex, "language"];
1537 28 | =<\chunkref{parse_chunk_args-reset-modes}>
1539 30 | item = item a[1];
1540 31 | text = substr(text, 1 + length(part) + length(a[1]));
1542 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1543 If a delimiter was matched, then we must store the current item in the parsed values array, and reset the item.
1545 51a <mode_tracker()[8]() ⇑49f, lang=> +≡ ⊲50g 51b▿
1546 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1547 33 | else if (match(a[1], "^" delimiters "$")) {
1548 34 | if (cindex==0) {
1549 35 | context[cindex, "values", ++context[cindex, "values"]] = item;
1552 38 | item = item a[1];
1554 40 | text = substr(text, 1 + length(part) + length(a[1]));
1556 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1557 Otherwise, if a new submode is detected (all submodes have terminators), we must create a nested parse context until we find the terminator for this mode.
1559 51b <mode_tracker()[9]() ⇑49f, lang=> +≡ ▵51a 51c▿
1560 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1561 42 | else if ((language, a[1], "terminators") in modes) {
1562 43 | #check if new_mode is defined
1563 44 | item = item a[1];
1564 45 | #printf("%2d ENTER MODE [%s] in [%s]\n", cindex, a[1], text) > "/dev/stderr"
1565 46 | text = substr(text, 1 + length(part) + length(a[1]));
1566 47 | context[""] = ++cindex;
1567 48 | context[cindex, "mode"] = a[1];
1568 49 | context[cindex, "language"] = language;
1570 51 | =<\chunkref{parse_chunk_args-reset-modes}>
1572 53 | error(sprintf("Submode '%s' set unknown mode in text: %s\nLanguage %s Mode %s\n", a[1], text, language, mode));
1573 54 | text = substr(text, 1 + length(part) + length(a[1]));
1576 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1577 In the final case, we parsed to the end of the string. If the string was entire, then we should have no nested mode context, but if the string was just a fragment we may have a mode context which must be preserved for the next fragment. To do: consideration ought to be given to the case where sub-mode strings are split over two fragments.
1579 51c <mode_tracker()[10]() ⇑49f, lang=> +≡ ▵51b
1580 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1582 58 | context[cindex, "values", ++context[cindex, "values"]] = item text;
1588 64 | context["item"] = item;
1590 66 | if (length(item)) context[cindex, "values", ++context[cindex, "values"]] = item;
1593 |________________________________________________________________________
1596 9.4.3.1 One happy chunk
1597 All the mode tracker chunks are referred to here:
1599 52a <mode-tracker[1](), lang=> ≡
1600 ________________________________________________________________________
1601 1 | «new_mode_tracker() 48a»
1602 2 | «mode_tracker() 49f»
1603 |________________________________________________________________________
1607 We can test this function like this:
1609 52b <pca-test.awk[1](), lang=awk> ≡
1610 ________________________________________________________________________
1611 1 | =<\chunkref{error()}>
1612 2 | =<\chunkref{mode-tracker}>
1613 3 | =<\chunkref{parse_chunk_args()}>
1616 6 | =<\chunkref{mode-definitions}>
1618 8 | =<\chunkref{test:mode-definitions}>
1620 |________________________________________________________________________
1624 52c <pca-test.awk:summary[1](), lang=awk> ≡
1625 ________________________________________________________________________
1627 2 | printf "Failed " e
1629 4 | print "a[" b "] => " a[b];
1636 |________________________________________________________________________
1639 which should give this output:
1641 52d <pca-test.awk-results[1](), lang=> ≡
1642 ________________________________________________________________________
1643 1 | a[foo.quux.quirk] =>
1644 2 | a[foo.quux.a] => fleeg
1645 3 | a[foo.bar] => baz
1647 5 | a[name] => freddie
1648 |________________________________________________________________________
1651 9.5 Escaping and Quoting
1652 For the time being, and to get around TeXmacs' inability to export a TAB character, the right arrow, whose UTF-8 sequence is ..., is used.
1655 Another special character, the left arrow with UTF-8 sequence 0xE2 0x86 0xA4, is used to strip any preceding white space as a way of un-tabbing and removing indent that has been applied; this is important for bash here documents, and the like. It's a filthy hack.
1656 To do: remove the hack
1659 53a <mode_tracker[4]() ⇑48c, lang=> +≡ ⊲48e 53b▿
1660 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1662 31 | function untab(text) {
1663 32 | gsub("[[:space:]]*\xE2\x86\xA4","", text);
1666 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
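The untab transform can be exercised on its own. In this stand-alone sketch the left-arrow is written with octal escapes (\342\206\244, i.e. the bytes 0xE2 0x86 0xA4), which gawk substitutes before the dynamic regex is compiled:

```shell
# Sketch: untab behaviour in isolation - white space followed by the
# left-arrow (bytes 0xE2 0x86 0xA4, in octal here) is stripped, so the
# line loses its applied indent.
awk 'BEGIN {
  text = "        \342\206\244cat <<EOF"
  gsub("[[:space:]]*\342\206\244", "", text)
  print text
}'
```

This is what keeps here-document bodies flush-left even when the chunk itself is displayed indented.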
1667 Each nested mode can optionally define a set of transforms to be applied to any text that is included from another language.
1668 This code performs those transforms:
1670 53b <mode_tracker[5]() ⇑48c, lang=awk> +≡ ▵53a 53c▿
1671 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1672 35 | function transform_escape(s, r, text,
1678 41 | for(c=1; c <= max && (c in s); c++) {
1679 42 | gsub(s[c], r[c], text);
1683 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1684 This function appends, from index c onwards, the escape transforms from the supplied context, and returns c plus the number of new transforms.
1686 53c <mode_tracker[6]() ⇑48c, lang=awk> +≡ ▵53b
1687 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1688 46 | function mode_escaper(context, s, r, src,
1691 49 | for(c = context[""]; c >= 0; c--) {
1692 50 | if ( (context[c, "language"], context[c, "mode"]) in escapes) {
1693 51 | cpl = escapes[context[c, "language"], context[c, "mode"]];
1694 52 | for (cp = 1; cp <= cpl; cp ++) {
1696 54 | s[src] = escapes[context[c, "language"], context[c, "mode"], cp, "s"];
1697 55 | r[src] = escapes[context[c, "language"], context[c, "mode"], cp, "r"];
1703 61 | function dump_escaper(c, s, r, cc) {
1704 62 | for(cc=1; cc<=c; cc++) {
1705 63 | printf("%2d s[%s] r[%s]\n", cc, s[cc], r[cc]) > "/dev/stderr"
1708 |________________________________________________________________________
1712 53d <test:escapes[1](), lang=sh> ≡
1713 ________________________________________________________________________
1714 1 | echo escapes test
1715 2 | passtest $FANGLE -Rtest:comment-quote $TEX_SRC &>/dev/null || ( echo "Comment-quote failed" && exit 1 )
1716 |________________________________________________________________________
1719 Chapter 10 Recognizing Chunks
1720 Fangle recognizes noweb chunks, but as we also want better LaTeX integration we will recognize any of these:
1721 • notangle chunks matching the pattern ^<<.*?>>=
1722 • chunks beginning with \begin{lstlistings}, possibly with \Chunk{...} on the previous line
1723 • an older form I have used, beginning with \begin{Chunk}[options] --- also more suitable for plain LaTeX users1. Is there such a thing as plain LaTeX? ^1.
1725 The variable chunking is used to signify that we are processing a code chunk and not documentation. In such a state, input lines are assigned to the current chunk; otherwise they are ignored.
1726 10.1.1 TeXmacs hackery
1727 We don't handle TeXmacs files natively but instead emit unicode character sequences to mark up the text-export file which we work on.
1728 These hacks detect such sequences and retro-fit in the old TeX parsing.
1730 55a <recognize-chunk[1](), lang=> ≡ 56a⊳
1731 ________________________________________________________________________
1734 2 | # gsub("\n*$","");
1735 3 | # gsub("\n", " ");
1738 6 | /\xE2\x86\xA6/ {
1739 7 | gsub("\\xE2\\x86\\xA6", "\x09");
1742 10 | /\xE2\x80\x98/ {
1743 11 | gsub("\\xE2\\x80\\x98", "‘");
1746 14 | /\xE2\x89\xA1/ {
1747 15 | if (match($0, "^ *([^[ ]* |)<([^[ ]*)\\[[0-9]*\\][(](.*)[)].*, lang=([^ ]*)", line)) {
1748 16 | next_chunk_name=line[2];
1749 17 | gsub(",",";",line[3]);
1750 18 | params="params=" line[3];
1751 19 | if ((line[4])) {
1752 20 | params = params ",language=" line[4]
1754 22 | get_chunk_args(params, next_chunk_args);
1755 23 | new_chunk(next_chunk_name, next_chunk_args);
1756 24 | texmacs_chunking = 1;
1758 26 | #print "Unexpected
1765 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1767 Our current scheme is to recognize the new lstlisting chunks, but these may be preceded by a \Chunk command which in LyX is a more convenient way to pass the chunk name to the \begin{lstlisting} command, and a more visible way to specify other lstset settings.
1768 The arguments to the \Chunk command are a name, and then a comma-separated list of key-value pairs after the manner of \lstset. (In fact, within the LaTeX \Chunk macro (section 15.2.1) the text name= is prefixed to the argument, which is then literally passed to \lstset.)
1770 56a <recognize-chunk[2]() ⇑55a, lang=awk> +≡ ⊲55a 56b▿
1771 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1773 34 | if (match($0, "^\\\\Chunk{ *([^ ,}]*),?(.*)}", line)) {
1774 35 | next_chunk_name = line[1];
1775 36 | get_chunk_args(line[2], next_chunk_args);
1779 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1780 We also make a basic attempt to parse the name out of the \lstlistings[name=chunk-name] text, otherwise we fall back to the name found in the previous chunk command. This attempt is very basic and doesn't support commas or spaces or square brackets as part of the chunkname. We also recognize \begin{Chunk} which is convenient for some users2. but not yet supported in the LaTeX macros ^2.
1782 56b <recognize-chunk[3]() ⇑55a, lang=> +≡ ▵56a 56c▿
1783 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1784 40 | /^\\begin{lstlisting}|^\\begin{Chunk}/ {
1785 41 | if (match($0, "}.*[[,] *name= *{? *([^], }]*)", line)) {
1786 42 | new_chunk(line[1]);
1788 44 | new_chunk(next_chunk_name, next_chunk_args);
1793 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1797 56c <recognize-chunk[4]() ⇑55a, lang=> +≡ ▵56b 57a⊳
1798 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1800 50 | /^ *\|____________*/ && texmacs_chunking {
1801 51 | active_chunk="";
1802 52 | texmacs_chunking=0;
1805 55 | /^ *\|\/\\/ && texmacs_chunking {
1806 56 | texmacs_chunking=0;
1808 58 | active_chunk="";
1810 60 | texmacs_chunk=0;
1811 61 | /^ *[1-9][0-9]* *\| / {
1812 62 | if (texmacs_chunking) {
1814 64 | texmacs_chunk=1;
1815 65 | gsub("^ *[1-9][0-9]* *\\| ", "")
1818 68 | /^ *\.\/\\/ && texmacs_chunking {
1821 71 | /^ *__*$/ && texmacs_chunking {
1825 75 | texmacs_chunking {
1826 76 | if (! texmacs_chunk) {
1827 77 | # must be a texmacs continued line
1829 79 | texmacs_chunk=1;
1832 82 | ! texmacs_chunk {
1833 83 | # texmacs_chunking=0;
1838 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1840 We recognize notangle style chunks too:
1842 57a <recognize-chunk[5]() ⇑55a, lang=awk> +≡ ⊲56c 58a⊳
1843 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1844 88 | /^[<]<.*[>]>=/ {
1845 89 | if (match($0, "^[<]<(.*)[>]>= *$", line)) {
1847 91 | notangle_mode=1;
1848 92 | new_chunk(line[1]);
1852 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1854 Likewise, we need to recognize when a chunk ends.
1856 The e in [e]nd{lstlisting} is surrounded by square brackets so that when this document is processed, this chunk doesn't terminate early when the lstlistings package recognizes its own end-string!3. This doesn't make sense as the regex is anchored with ^, which this line does not begin with! ^3
1858 58a <recognize-chunk[6]() ⇑55a, lang=> +≡ ⊲57a 58b▿
1859 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1860 96 | /^\\[e]nd{lstlisting}|^\\[e]nd{Chunk}/ {
1862 98 | active_chunk="";
1865 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1868 58b <recognize-chunk[7]() ⇑55a, lang=> +≡ ▵58a 58c▿
1869 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1872 103 | active_chunk="";
1874 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1875 All other recognizers only take effect if we are chunking; there's no point in looking at lines that aren't part of a chunk, so we just ignore them as efficiently as we can.
1877 58c <recognize-chunk[8]() ⇑55a, lang=> +≡ ▵58b 58d▿
1878 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1879 105 | ! chunking { next; }
1880 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
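The guard above relies on awk's next statement, which abandons the current input line before any later rules see it; a tiny demonstration on made-up input:

```shell
printf 'one\ntwo\nthree\n' | awk 'NR == 2 { next } { print "kept: " $0 }'
# kept: one
# kept: three
```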
1882 Chunk contents are any lines read while chunking is true. Some chunk contents are special in that they refer to other chunks, and will be replaced by the contents of these chunks when the file is generated.
1883 We add the output record separator ORS to the line now, because we will set ORS to the empty string when we generate the output4. So that we can print partial lines using print instead of printf. ^4
1884 To do: This doesn't make sense
1887 58d <recognize-chunk[9]() ⇑55a, lang=> +≡ ▵58c
1888 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1889 106 | length(active_chunk) {
1890 107 | =<\chunkref{process-chunk-tabs}>
1891 108 | =<\chunkref{process-chunk}>
1893 |________________________________________________________________________
1896 If a chunk just consisted of plain text, we could handle the chunk like this:
1898 58e <process-chunk-simple[1](), lang=> ≡
1899 ________________________________________________________________________
1900 1 | chunk_line(active_chunk, $0 ORS);
1901 |________________________________________________________________________
1904 but in fact a chunk can include references to other chunks. Chunk includes are traditionally written as <<chunk-name>> but we support other variations, some of which are more suitable for particular editing systems.
1905 However, we also process tabs at this point: a tab in the input can be replaced by a number of spaces defined by the tabs variable, set by the -T option. Of course this is poor tab behaviour; we should probably have the option to use proper counted tab-stops and process this on output.
1907 59a <process-chunk-tabs[1](), lang=> ≡
1908 ________________________________________________________________________
1909 1 | if (length(tabs)) {
1910 2 | gsub("\t", tabs);
1912 |________________________________________________________________________
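The substitution can be tried directly; here we assume the -T option set tabs to four spaces (a hypothetical value), passed in with -v:

```shell
printf '\tx = 1;\n' | awk -v tabs='    ' '{ if (length(tabs)) gsub("\t", tabs); print "[" $0 "]" }'
# [    x = 1;]
```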
1916 If \lstset{escapeinside={=<}{>}} is set, then we can use =<\chunkref{chunk-name}> in listings. The sequence =< was chosen because:
1917 1.it is a better mnemonic than <<chunk-name>> in that the = sign signifies equivalence or substitutability.
1918 2.and because =< is not valid in C or any language I can think of.
1919 3.and also because lstlistings doesn't like >> as an end delimiter for the texcl escape, so we must make do with a single > which is better complemented by =< than by <<.
1920 Unfortunately the =<...> that we use re-enters a LaTeX parsing mode in which some characters are special, e.g. # and \, and so these cause trouble if used in arguments to \chunkref. At some point I must fix the LaTeX command \chunkref so that it can accept these literally, but until then, when writing chunkref arguments that need these characters, I must use the forms \textbackslash{} and \#; so I also define a hacky chunk delatex, used further on, whose purpose is to remove these from any arguments parsed by fangle.
1922 59b <delatex[1](text), lang=> ≡
1923 ________________________________________________________________________
1925 2 | gsub("\\\\#", "#", ${text});
1926 3 | gsub("\\\\textbackslash{}", "\\", ${text});
1927 4 | gsub("\\\\\\^", "^", ${text});
1928 |________________________________________________________________________
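The same substitutions can be tried on a sample argument (the text is made up; note the doubled backslashes that awk string literals require):

```shell
awk 'BEGIN {
  text = "\\#define MAX and \\textbackslash{}n";
  gsub("\\\\#", "#", text);
  gsub("\\\\textbackslash{}", "\\\\", text);
  print text;
}'
# #define MAX and \n
```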
1931 As each chunk line may contain more than one chunk include, we will split out chunk includes in an iterative fashion5. Contrary to our use of split when substituting parameters in chapter ? ^5.
1932 First, as long as the chunk contains a \chunkref command we take as much as we can up to the first \chunkref command.
1934 59c <process-chunk[1](), lang=> ≡ 60a⊳
1935 ________________________________________________________________________
1938 3 | while(match(chunk,"(\xC2\xAB)([^\xC2]*) [^\xC2]*\xC2\xBB", line) ||
1940 5 | "([=]<\\\\chunkref{([^}>]*)}(\\(.*\\)|)>|<<([a-zA-Z_][-a-zA-Z0-9_]*)>>)",
1943 8 | chunklet = substr(chunk, 1, RSTART - 1);
1944 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1945 We keep track of the indent count, by counting the number of literal characters found. We can then preserve this indent on each output line when multi-line chunks are expanded.
1946 We then process this first part literal text, and set the chunk which is still to be processed to be the text after the \chunkref command, which we will process next as we continue around the loop.
1948 60a <process-chunk[2]() ⇑59c, lang=> +≡ ⊲59c 60b▿
1949 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1950 9 | indent += length(chunklet);
1951 10 | chunk_line(active_chunk, chunklet);
1952 11 | chunk = substr(chunk, RSTART + RLENGTH);
1953 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1954 We then consider the type of chunk command we have found, whether it is the fangle style command beginning with =< the older notangle style beginning with <<.
1955 Fangle chunks may have parameters contained within square brackets. These will be matched in line[3] and are considered at this stage of processing to be part of the name of the chunk to be included.
1957 60b <process-chunk[3]() ⇑59c, lang=> +≡ ▵60a 60c▿
1958 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1959 12 | if (substr(line[1], 1, 1) == "=") {
1960 13 | # chunk name up to }
1961 14 | =<\chunkref{delatex}(line[3])>
1962 15 | chunk_include(active_chunk, line[2] line[3], indent);
1963 16 | } else if (substr(line[1], 1, 1) == "<") {
1964 17 | chunk_include(active_chunk, line[4], indent);
1965 18 | } else if (line[1] == "\xC2\xAB") {
1966 19 | chunk_include(active_chunk, line[2], indent);
1968 21 | error("Unknown chunk fragment: " line[1]);
1970 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1971 The loop will continue until there are no more chunkref statements in the text, at which point we process the final part of the chunk.
1973 60c <process-chunk[4]() ⇑59c, lang=> +≡ ▵60b 60d▿
1974 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1976 24 | chunk_line(active_chunk, chunk);
1977 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1978 We add the newline character as a chunklet on its own, to make it easier to detect new lines and thus manage indentation when processing the output.
1980 60d <process-chunk[5]() ⇑59c, lang=> +≡ ▵60c
1981 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1982 25 | chunk_line(active_chunk, "\n");
1983 |________________________________________________________________________
1986 We will also permit a chunk-part number to follow in square brackets, so that =<\chunkref{chunk-name[1]}> will refer to the first part only. This can make it easy to include a C function prototype in a header file, if the first part of the chunk is just the function prototype without the trailing semi-colon. The header file would include the prototype with the trailing semi-colon, like this:
1987 =<\chunkref{chunk-name[1]}>
1988 This is handled in section 12.1.1
1989 We should perhaps introduce a notion of language specific chunk options; so that perhaps we could specify:
1990 =<\chunkref{chunk-name[function-declaration]}>
1991 which applies a transform function-declaration to the chunk --- which in this case would extract a function prototype from a function.
1994 Chapter 11 Processing Options
1995 At the start, first we set the default options.
1997 61a <default-options[1](), lang=> ≡
1998 ________________________________________________________________________
2001 3 | notangle_mode=0;
2004 |________________________________________________________________________
2007 Then we use getopt the standard way, and null out ARGV afterwards in the normal AWK fashion.
2009 61b <read-options[1](), lang=> ≡
2010 ________________________________________________________________________
2011 1 | Optind = 1 # skip ARGV[0]
2012 2 | while(getopt(ARGC, ARGV, "R:LdT:hr")!=-1) {
2013 3 | =<\chunkref{handle-options}>
2015 5 | for (i=1; i<Optind; i++) { ARGV[i]=""; }
2016 |________________________________________________________________________
2019 This is how we handle our options:
2021 61c <handle-options[1](), lang=> ≡
2022 ________________________________________________________________________
2023 1 | if (Optopt == "R") root = Optarg;
2024 2 | else if (Optopt == "r") root="";
2025 3 | else if (Optopt == "L") linenos = 1;
2026 4 | else if (Optopt == "d") debug = 1;
2027 5 | else if (Optopt == "T") tabs = indent_string(Optarg+0);
2028 6 | else if (Optopt == "h") help();
2029 7 | else if (Optopt == "?") help();
2030 |________________________________________________________________________
2033 We do all of this at the beginning of the program:
2035 61d <begin[1](), lang=> ≡
2036 ________________________________________________________________________
2038 2 | =<\chunkref{constants}>
2039 3 | =<\chunkref{mode-definitions}>
2040 4 | =<\chunkref{default-options}>
2042 6 | =<\chunkref{read-options}>
2044 |________________________________________________________________________
2047 And have a simple help function:
2049 61e <help()[1](), lang=> ≡
2050 ________________________________________________________________________
2051 1 | function help() {
2053 3 | print " fangle [-L] -R<rootname> [source.tex ...]"
2054 4 | print " fangle -r [source.tex ...]"
2055 5 | print " If the filename, source.tex is not specified then stdin is used"
2057 7 | print "-L causes the C statement: #line <lineno> \"filename\" to be issued"
2058 8 | print "-R causes the named root to be written to stdout"
2059 9 | print "-r lists all roots in the file (even those used elsewhere)"
2062 |________________________________________________________________________
2065 Chapter 12 Generating the Output
2066 We generate output by calling output_chunk, or listing the chunk names.
2068 63a <generate-output[1](), lang=> ≡
2069 ________________________________________________________________________
2070 1 | if (length(root)) output_chunk(root);
2071 2 | else output_chunk_names();
2072 |________________________________________________________________________
2075 We also have some other output debugging:
2077 63b <debug-output[1](), lang=> ≡
2078 ________________________________________________________________________
2080 2 | print "------ chunk names "
2081 3 | output_chunk_names();
2082 4 | print "====== chunks"
2083 5 | output_chunks();
2084 6 | print "++++++ debug"
2085 7 | for (a in chunks) {
2086 8 | print a "=" chunks[a];
2089 |________________________________________________________________________
2092 We do both of these at the end. We also set ORS="" because each chunklet is not necessarily a complete line, and we already added ORS to each input line in section 10.3.
2094 63c <end[1](), lang=> ≡
2095 ________________________________________________________________________
2097 2 | =<\chunkref{debug-output}>
2099 4 | =<\chunkref{generate-output}>
2101 |________________________________________________________________________
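The effect of an empty ORS can be seen in a minimal sketch: successive prints join into one output line until an explicit newline is printed, which is why partial chunklets can be emitted with plain print.

```shell
awk 'BEGIN {
  ORS = "";
  print "int x";
  print " = 1;";
  print "\n";
}'
# int x = 1;
```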
2104 We write chunk names like this. If we seem to be running in notangle compatibility mode, then we enclose the name in <<name>>, the same way notangle does:
2106 63d <output_chunk_names()[1](), lang=> ≡
2107 ________________________________________________________________________
2108 1 | function output_chunk_names( c, prefix, suffix)
2110 3 | if (notangle_mode) {
2114 7 | for (c in chunk_names) {
2115 8 | print prefix c suffix "\n";
2118 |________________________________________________________________________
2121 This function would write out all chunks:
2123 63e <output_chunks()[1](), lang=> ≡
2124 ________________________________________________________________________
2125 1 | function output_chunks( a)
2127 3 | for (a in chunk_names) {
2128 4 | output_chunk(a);
2132 8 | function output_chunk(chunk) {
2134 10 | lineno_needed = linenos;
2136 12 | write_chunk(chunk);
2139 |________________________________________________________________________
2142 12.1 Assembling the Chunks
2143 chunk_path holds a string consisting of the names of all the chunks that resulted in this chunk being output. It should probably also contain the source line numbers at which each inclusion occurred.
2144 We first initialize the mode tracker for this chunk.
2146 64a <write_chunk()[1](), lang=> ≡ 64b▿
2147 ________________________________________________________________________
2148 1 | function write_chunk(chunk_name) {
2149 2 | =<\chunkref{awk-delete-array}(context)>
2150 3 | return write_chunk_r(chunk_name, context);
2153 6 | function write_chunk_r(chunk_name, context, indent, tail,
2155 8 | chunk_path, chunk_args,
2156 9 | s, r, src, new_src,
2158 11 | chunk_params, part, max_part, part_line, frag, max_frag, text,
2159 12 | chunklet, only_part, call_chunk_args, new_context)
2161 14 | if (debug) debug_log("write_chunk_r(", chunk_name, ")");
2162 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2164 As mentioned in section ?, a chunk name may contain a part specifier in square brackets, limiting the parts that should be emitted.
2166 64b <write_chunk()[2]() ⇑64a, lang=> +≡ ▵64a 64c▿
2167 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2168 15 | if (match(chunk_name, "^(.*)\\[([0-9]*)\\]$", chunk_name_parts)) {
2169 16 | chunk_name = chunk_name_parts[1];
2170 17 | only_part = chunk_name_parts[2];
2172 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2173 We then create a mode tracker:
2175 64c <write_chunk()[3]() ⇑64a, lang=> +≡ ▵64b 65a⊳
2176 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2177 19 | =<\chunkref{new-mode-tracker}(context, chunks[chunk_name, "language"], "")>
2178 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2179 We extract into chunk_params the names of the parameters that this chunk accepts, whose values were (optionally) passed in chunk_args.
2181 65a <write_chunk()[4]() ⇑64a, lang=> +≡ ⊲64c 65b▿
2182 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2183 20 | split(chunks[chunk_name, "params"], chunk_params, " *; *");
2184 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2185 To assemble a chunk, we write out each part.
2187 65b <write_chunk()[5]() ⇑64a, lang=> +≡ ▵65a
2188 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2189 21 | if (! (chunk_name in chunk_names)) {
2190 22 | error(sprintf(_"The root module <<%s>> was not defined.\nUsed by: %s",\
2191 23 | chunk_name, chunk_path));
2194 26 | max_part = chunks[chunk_name, "part"];
2195 27 | for(part = 1; part <= max_part; part++) {
2196 28 | if (! only_part || part == only_part) {
2197 29 | =<\chunkref{write-part}>
2200 32 | if (! finalize_mode_tracker(context)) {
2201 33 | dump_mode_tracker(context);
2202 34 | error(sprintf(_"Module %s did not close context properly.\nUsed by: %s\n", chunk_name, chunk_path));
2205 |________________________________________________________________________
2208 A part can either be a chunklet of lines, or an include of another chunk.
2209 Chunks may also have parameters, specified in LaTeX style with braces after the chunk name --- looking like this in the document: chunkname{param1, param2}. Arguments are passed in square brackets: \chunkref{chunkname}[arg1, arg2].
2210 Before we process each part, we check that the source position hasn't changed unexpectedly, so that we can know if we need to output a new file-line directive.
2212 65c <write-part[1](), lang=> ≡
2213 ________________________________________________________________________
2214 1 | =<\chunkref{check-source-jump}>
2216 3 | chunklet = chunks[chunk_name, "part", part];
2217 4 | if (chunks[chunk_name, "part", part, "type"] == part_type_chunk) {
2218 5 | =<\chunkref{write-included-chunk}>
2219 6 | } else if (chunklet SUBSEP "line" in chunks) {
2220 7 | =<\chunkref{write-chunklets}>
2222 9 | # empty last chunklet
2224 |________________________________________________________________________
2227 To write an included chunk, we must detect any optional chunk arguments in parentheses. Then we recurse, calling write_chunk().
2229 65d <write-included-chunk[1](), lang=> ≡
2230 ________________________________________________________________________
2231 1 | if (match(chunklet, "^([^\\[\\(]*)\\((.*)\\)$", chunklet_parts)) {
2232 2 | chunklet = chunklet_parts[1];
2233 3 | parse_chunk_args("c-like", chunklet_parts[2], call_chunk_args, "(");
2234 4 | for (c in call_chunk_args) {
2235 5 | call_chunk_args[c] = expand_chunk_args(call_chunk_args[c], chunk_params, chunk_args);
2238 8 | split("", call_chunk_args);
2240 10 | # update the transforms arrays
2241 11 | new_src = mode_escaper(context, s, r, src);
2242 12 | =<\chunkref{awk-delete-array}(new_context)>
2243 13 | write_chunk_r(chunklet, new_context,
2244 14 | chunks[chunk_name, "part", part, "indent"] indent,
2245 15 | chunks[chunk_name, "part", part, "tail"],
2246 16 | chunk_path "\n " chunk_name,
2247 17 | call_chunk_args,
2248 18 | s, r, new_src);
2249 |________________________________________________________________________
2252 Before we output a chunklet of lines, we first emit the file and line number if we have one, and if it is safe to do so.
2253 Chunklets are generally broken up by includes, so the start of a chunklet is a good place to do this. Then we output each line of the chunklet.
2254 When it is not safe, such as in the middle of a multi-line macro definition, lineno_suppressed is set to true, and in such a case we note that we want to emit the line statement when it is next safe.
2256 66a <write-chunklets[1](), lang=> ≡ 66b▿
2257 ________________________________________________________________________
2258 1 | max_frag = chunks[chunklet, "line"];
2259 2 | for(frag = 1; frag <= max_frag; frag++) {
2260 3 | =<\chunkref{write-file-line}>
2261 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2262 We then extract the chunklet text and expand any arguments.
2264 66b <write-chunklets[2]() ⇑66a, lang=> +≡ ▵66a 66c▿
2265 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2267 5 | text = chunks[chunklet, frag];
2269 7 | /* check params */
2270 8 | text = expand_chunk_args(text, chunk_params, chunk_args);
2271 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2272 If the text is a single newline (which we keep separate - see 5) then we increment the line number. In the case where this is the last line of a chunk and it is not a top-level chunk we replace the newline with an empty string --- because the chunk that included this chunk will have the newline at the end of the line that included this chunk.
2273 We also note by newline = 1 that we have started a new line, so that indentation can be managed with the following piece of text.
2275 66c <write-chunklets[3]() ⇑66a, lang=> +≡ ▵66b 66d▿
2276 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2278 10 | if (text == "\n") {
2280 12 | if (part == max_part && frag == max_frag && length(chunk_path)) {
2286 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2287 If this text does not represent a newline, but we see that we are the first piece of text on a newline, then we prefix our text with the current indent.
2288 Note 1. newline is a global output-state variable, but the indent is not.
2290 66d <write-chunklets[4]() ⇑66a, lang=> +≡ ▵66c 67a⊳
2291 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2292 18 | } else if (length(text) || length(tail)) {
2293 19 | if (newline) text = indent text;
2297 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2298 Tail will soon no longer be relevant once mode-detection is in place.
2300 67a <write-chunklets[5]() ⇑66a, lang=> +≡ ⊲66d 67b▿
2301 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2302 23 | text = text tail;
2303 24 | mode_tracker(context, text);
2304 25 | print untab(transform_escape(s, r, text, src));
2305 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2306 If a line ends in a backslash --- suggesting continuation --- then we suppress outputting file-line as it would probably break the continued lines.
2308 67b <write-chunklets[6]() ⇑66a, lang=> +≡ ▵67a
2309 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2311 27 | lineno_suppressed = substr(lastline, length(lastline)) == "\\";
2314 |________________________________________________________________________
2317 Of course there is no point in actually outputting the source filename and line number (file-line) if they don't say anything new! We only need to emit them if they aren't what is expected, or if we were not able to emit one when they last changed.
2319 67c <write-file-line[1](), lang=> ≡
2320 ________________________________________________________________________
2321 1 | if (newline && lineno_needed && ! lineno_suppressed) {
2322 2 | filename = a_filename;
2323 3 | lineno = a_lineno;
2324 4 | print "#line " lineno " \"" filename "\"\n"
2325 5 | lineno_needed = 0;
2327 |________________________________________________________________________
2330 We check if a new file-line is needed by checking if the source line matches what we (or a compiler) would expect.
2332 67d <check-source-jump[1](), lang=> ≡
2333 ________________________________________________________________________
2334 1 | if (linenos && (chunk_name SUBSEP "part" SUBSEP part SUBSEP "FILENAME" in chunks)) {
2335 2 | a_filename = chunks[chunk_name, "part", part, "FILENAME"];
2336 3 | a_lineno = chunks[chunk_name, "part", part, "LINENO"];
2337 4 | if (a_filename != filename || a_lineno != lineno) {
2338 5 | lineno_needed++;
2341 |________________________________________________________________________
2344 Chapter 13 Storing Chunks
2345 Awk has pretty limited data structures, so we will use two main hashes. Uninterrupted sequences of a chunk will be stored in chunklets and the chunklets used in a chunk will be stored in chunks.
2347 69a <constants[1](), lang=> ≡
2348 ________________________________________________________________________
2349 1 | part_type_chunk=1;
2351 |________________________________________________________________________
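Awk's multi-subscript arrays are really single-key hashes: the subscripts are joined into one string with SUBSEP. A small sketch of the storage scheme (the chunk name main and value are made up):

```shell
awk 'BEGIN {
  chunks["main", "part"] = 2;
  chunks["main", "part", 1] = "body";
  if (("main", "part", 1) in chunks) print "found: " chunks["main", "part", 1];
  # the same key built by hand, as fangle does with chunk_name SUBSEP ...
  key = "main" SUBSEP "part" SUBSEP 1;
  if (key in chunks) print "same key";
}'
# found: body
# same key
```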
2354 The params mentioned are not chunk parameters for parameterized chunks, as mentioned in 8.2, but the lstlistings style parameters used in the \Chunk command1. The params parameter is used to hold the parameters for parameterized chunks ^1.
2356 69b <chunk-storage-functions[1](), lang=> ≡ 69c▿
2357 ________________________________________________________________________
2358 1 | function new_chunk(chunk_name, params,
2362 5 | # HACK WHILE WE CHANGE TO ( ) for PARAM CHUNKS
2363 6 | gsub("\\(\\)$", "", chunk_name);
2364 7 | if (! (chunk_name in chunk_names)) {
2365 8 | if (debug) print "New chunk " chunk_name;
2366 9 | chunk_names[chunk_name];
2367 10 | for (p in params) {
2368 11 | chunks[chunk_name, p] = params[p];
2369 12 | if (debug) print "chunks[" chunk_name "," p "] = " params[p];
2371 14 | if ("append" in params) {
2372 15 | append=params["append"];
2373 16 | if (! (append in chunk_names)) {
2374 17 | warning("Chunk " chunk_name " is appended to chunk " append " which is not defined yet");
2375 18 | new_chunk(append);
2377 20 | chunk_include(append, chunk_name);
2378 21 | chunk_line(append, ORS);
2381 24 | active_chunk = chunk_name;
2382 25 | prime_chunk(chunk_name);
2384 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2386 69c <chunk-storage-functions[2]() ⇑69b, lang=> +≡ ▵69b 70a⊳
2387 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2389 28 | function prime_chunk(chunk_name)
2391 30 | chunks[chunk_name, "part", ++chunks[chunk_name, "part"] ] = \
2392 31 | chunk_name SUBSEP "chunklet" SUBSEP "" ++chunks[chunk_name, "chunklet"];
2393 32 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "FILENAME"] = FILENAME;
2394 33 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "LINENO"] = FNR + 1;
2397 36 | function chunk_line(chunk_name, line){
2398 37 | chunks[chunk_name, "chunklet", chunks[chunk_name, "chunklet"],
2399 38 | ++chunks[chunk_name, "chunklet", chunks[chunk_name, "chunklet"], "line"] ] = line;
2402 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2403 A chunk include represents a chunkref statement, and stores the requirement to include another chunk. The parameter indent represents the quantity of literal text characters that preceded this chunkref statement, and therefore by how much additional lines of the included chunk should be indented.
2405 70a <chunk-storage-functions[3]() ⇑69b, lang=> +≡ ⊲69c 70b▿
2406 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2407 41 | function chunk_include(chunk_name, chunk_ref, indent, tail)
2409 43 | chunks[chunk_name, "part", ++chunks[chunk_name, "part"] ] = chunk_ref;
2410 44 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "type" ] = part_type_chunk;
2411 45 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "indent" ] = indent_string(indent);
2412 46 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "tail" ] = tail;
2413 47 | prime_chunk(chunk_name);
2416 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2417 The indent is calculated by indent_string, which may in future convert some spaces into tab characters. This function works by generating a printf padded format string, like %22s for an indent of 22, and then printing an empty string using that format.
2419 70b <chunk-storage-functions[4]() ⇑69b, lang=> +≡ ▵70a
2420 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2421 50 | function indent_string(indent) {
2422 51 | return sprintf("%" indent "s", "");
2424 |________________________________________________________________________
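For example (a sketch, not part of fangle), an indent of 6 builds the format string %6s, and printing the empty string through it yields six spaces:

```shell
awk 'BEGIN {
  indent = 6;
  pad = sprintf("%" indent "s", "");
  print "[" pad "]";
}'
# [      ]
```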
2428 I use Arnold Robbins' public domain getopt (1993 revision). This is probably the same one that is covered in chapter 12 of "Edition 3 of GAWK: Effective AWK Programming: A User's Guide for GNU Awk" but as that is licensed under the GNU Free Documentation License, Version 1.3, which conflicts with the GPL3, I can't use it from there (or its accompanying explanations), so I do my best to explain how it works here.
2429 The getopt.awk header is:
2431 71a <getopt.awk-header[1](), lang=> ≡
2432 ________________________________________________________________________
2433 1 | # getopt.awk --- do C library getopt(3) function in awk
2435 3 | # Arnold Robbins, arnold@skeeve.com, Public Domain
2437 5 | # Initial version: March, 1991
2438 6 | # Revised: May, 1993
2440 |________________________________________________________________________
2443 The provided explanation is:
2445 71b <getopt.awk-notes[1](), lang=> ≡
2446 ________________________________________________________________________
2447 1 | # External variables:
2448 2 | # Optind -- index in ARGV of first nonoption argument
2449 3 | # Optarg -- string value of argument to current option
2450 4 | # Opterr -- if nonzero, print our own diagnostic
2451 5 | # Optopt -- current option letter
2454 8 | # -1 at end of options
2455 9 | # ? for unrecognized option
2456 10 | # <c> a character representing the current option
2458 12 | # Private Data:
2459 13 | # _opti -- index in multi-flag option, e.g., -abc
2461 |________________________________________________________________________
2464 The function follows. The final two parameters, thisopt and i, are local variables and not parameters --- as indicated by the multiple spaces preceding them. Awk doesn't care; the multiple spaces are a convention to help us humans.
2466 71c <getopt.awk-getopt()[1](), lang=> ≡ 72a⊳
2467 ________________________________________________________________________
2468 1 | function getopt(argc, argv, options, thisopt, i)
2470 3 | if (length(options) == 0) # no options given
2472 5 | if (argv[Optind] == "--") { # all done
2476 9 | } else if (argv[Optind] !~ /^-[^: \t\n\f\r\v\b]/) {
2480 13 | if (_opti == 0)
2482 15 | thisopt = substr(argv[Optind], _opti, 1)
2483 16 | Optopt = thisopt
2484 17 | i = index(options, thisopt)
2487 20 | printf("%c -- invalid option\n",
2488 21 | thisopt) > "/dev/stderr"
2489 22 | if (_opti >= length(argv[Optind])) {
2496 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2497 At this point, the option has been found and we need to know if it takes any arguments.
2499 72a <getopt.awk-getopt()[2]() ⇑71c, lang=> +≡ ⊲71c
2500 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2501 29 | if (substr(options, i + 1, 1) == ":") {
2502 30 | # get option argument
2503 31 | if (length(substr(argv[Optind], _opti + 1)) > 0)
2504 32 | Optarg = substr(argv[Optind], _opti + 1)
2506 34 | Optarg = argv[++Optind]
2510 38 | if (_opti == 0 || _opti >= length(argv[Optind])) {
2517 |________________________________________________________________________
2520 A test program is built in, too:
2522 72b <getopt.awk-begin[1](), lang=> ≡
2523 ________________________________________________________________________
2525 2 | Opterr = 1 # default is to diagnose
2526 3 | Optind = 1 # skip ARGV[0]
2528 5 | if (_getopt_test) {
2529 6 | while ((_go_c = getopt(ARGC, ARGV, "ab:cd")) != -1)
2530 7 | printf("c = <%c>, optarg = <%s>\n",
2532 9 | printf("non-option arguments:\n")
2533 10 | for (; Optind < ARGC; Optind++)
2534 11 | printf("\tARGV[%d] = <%s>\n",
2535 12 | Optind, ARGV[Optind])
2538 |________________________________________________________________________
2541 The entire getopt.awk is made from these chunks, in this order:
2543 72c <getopt.awk[1](), lang=> ≡
2544 ________________________________________________________________________
2545 1 | =<\chunkref{getopt.awk-header}>
2547 3 | =<\chunkref{getopt.awk-notes}>
2548 4 | =<\chunkref{getopt.awk-getopt()}>
2549 5 | =<\chunkref{getopt.awk-begin}>
2550 |________________________________________________________________________
2553 Although we only want the header and function:
2555 73a <getopt[1](), lang=> ≡
2556 ________________________________________________________________________
2557 1 | # try: locate getopt.awk for the full original file
2558 2 | # as part of your standard awk installation
2559 3 | =<\chunkref{getopt.awk-header}>
2561 5 | =<\chunkref{getopt.awk-getopt()}>
2562 |________________________________________________________________________
2565 Chapter 15 Fangle LaTeX source code
2567 Here we define a LyX .module file that makes it convenient to use LyX for writing such literate programs.
2568 This file ./fangle.module can be installed in your personal .lyx/layouts folder. You will need to run Tools->Reconfigure so that LyX notices it. It adds a new format Chunk, which should precede every listing and contain the chunk name.
2570 75a <./fangle.module[1](), lang=lyx-module> ≡
2571 ________________________________________________________________________
2572 1 | #\DeclareLyXModule{Fangle Literate Listings}
2573 2 | #DescriptionBegin
2574 3 | # Fangle literate listings allow one to write
2575 4 | # literate programs after the fashion of noweb, but without having
2576 5 | # to use noweave to generate the documentation. Instead the listings
2577 6 | # package is extended in conjunction with the noweb package to implement
2578 7 | # the code formatting directly as latex.
2579 8 | # The fangle awk script
2582 11 | =<\chunkref{gpl3-copyright.hashed}>
2587 16 | =<\chunkref{./fangle.sty}>
2590 19 | =<\chunkref{chunkstyle}>
2592 21 | =<\chunkref{chunkref}>
2593 |________________________________________________________________________
2596 Because LyX modules are not yet a language supported by fangle or lstlistings, we resort to the fake awk chunk below in order to have each line of the GPL3 license commence with a #.
2598 75b <gpl3-copyright.hashed[1](), lang=awk> ≡
2599 ________________________________________________________________________
2600 1 | #=<\chunkref{gpl3-copyright}>
2602 |________________________________________________________________________
2605 15.1.1 The Chunk style
2606 The purpose of the chunk style is to make it easier for LyX users to provide the name to lstlistings. Normally this requires right-clicking on the listing, choosing settings, advanced, and then typing name=chunk-name. This has the further disadvantage that the name (and other options) are not generally visible during document editing.
2607 The chunk style is defined as a LaTeX command, so that all text on the same line is passed to the LaTeX command Chunk. This makes it easy to parse using fangle, and easy to pass these options on to the listings package. The first word in a chunk section should be the chunk name, and will have name= prepended to it. Any other words are passed on as arguments to lstset.
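That convention can be sketched as follows (an illustrative one-liner, not fangle's actual parser): prepend name= to the first word and pass the remaining words through as lstset options.

```shell
# Hypothetical sketch: turn "hello.c language=C ..." into lstset options.
echo 'hello.c language=C caption=Example' | \
awk '{ out = "name=" $1
       for (i = 2; i <= NF; i++) out = out ", " $i
       print out }'
# prints: name=hello.c, language=C, caption=Example
```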
2608 We set PassThru to 1 because the user is actually entering raw LaTeX.
2610 76a <chunkstyle[1](), lang=> ≡ 76b▿
2611 ________________________________________________________________________
2613 2 | LatexType Command
2615 4 | Margin First_Dynamic
2616 5 | LeftMargin Chunk:xxx
2618 7 | LabelType Static
2619 8 | LabelString "Chunk:"
2623 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2624 To make the label very visible we choose a larger font coloured red.
2626 76b <chunkstyle[2]() ⇑76a, lang=> +≡ ▵76a
2627 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2636 |________________________________________________________________________
2639 15.1.2 The chunkref style
2640 We also define the Chunkref style which can be used to express cross references to chunks.
2642 76c <chunkref[1](), lang=> ≡
2643 ________________________________________________________________________
2644 1 | InsetLayout Chunkref
2645 2 | LyxType charstyle
2646 3 | LatexType Command
2647 4 | LatexName chunkref
2654 |________________________________________________________________________
2658 We require the listings, noweb and xargs packages. As noweb defines its own \code environment, we re-define here the one that the LyX logical markup module expects.
2660 76d <./fangle.sty[1](), lang=tex> ≡ 77a⊳
2661 ________________________________________________________________________
2662 1 | \usepackage{listings}%
2663 2 | \usepackage{noweb}%
2664 3 | \usepackage{xargs}%
2665 4 | \renewcommand{\code}[1]{\texttt{#1}}%
2666 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2667 We also define a CChunk macro, for use as \begin{CChunk}, which will need renaming to \begin{Chunk} when I can do this without clashing with \Chunk.
2669 77a <./fangle.sty[2]() ⇑76d, lang=> +≡ ⊲76d 77b▿
2670 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2671 5 | \lstnewenvironment{Chunk}{\relax}{\relax}%
2672 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2673 We also define an \lstset of parameters that suit the literate programming style, after the fashion of noweave.
2675 77b <./fangle.sty[3]() ⇑76d, lang=> +≡ ▵77a 77c▿
2676 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2677 6 | \lstset{numbers=left, stepnumber=5, numbersep=5pt,
2678 7 | breaklines=false,basicstyle=\ttfamily,
2679 8 | numberstyle=\tiny, language=C}%
2680 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2681 We also define a notangle-like mechanism for escaping to LaTeX from a listing, by which we can refer to other listings. We declare the =<...> sequence to contain LaTeX code, so that we can include another chunk like this: =<\chunkref{chunkname}>. However, because =<...> is already defined to contain LaTeX code for this document --- this is a fangle document, after all --- the code fragment below effectively contains the LaTeX code: }{. To avoid problems with document generation, I had to declare an lstlistings property escapeinside={} for this listing only; in LyX this was done by right-clicking the listings inset and choosing settings->advanced. Thus =< isn't interpreted literally here, in a listing where the escape sequence is already defined as shown... we need to somehow escape this representation...
2683 77c <./fangle.sty[4]() ⇑76d, lang=> +≡ ▵77b 77d▿
2684 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2685 9 | \lstset{escapeinside={=<}{>}}%
2686 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2687 Although our macros will contain the @ symbol, they will be included in a \makeatletter section by LyX; however we keep the commented-out \makeatletter as a reminder. The listings package likes to centre the titles, but noweb titles are specially formatted and must be left aligned. The simplest way to do this turned out to be by removing the definition of \lst@maketitle. This may interact badly if other listings want a regular title or caption. We remember the old maketitle in case we need it.
2689 77d <./fangle.sty[5]() ⇑76d, lang=> +≡ ▵77c 77e▿
2690 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2692 11 | %somehow re-defining maketitle gives us a left-aligned title
2693 12 | %which is exactly what our specially formatted title needs!
2694 13 | \global\let\fangle@lst@maketitle\lst@maketitle%
2695 14 | \global\def\lst@maketitle{}%
2696 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2697 15.2.1 The chunk command
2698 Our chunk command accepts one argument and calls \lstset. Although \lstset will note the name, this is erased when the next \lstlisting starts, so we make a note of it in \lst@chunkname and restore it in the lstlistings Init hook.
2700 77e <./fangle.sty[6]() ⇑76d, lang=> +≡ ▵77d 78a⊳
2701 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2703 16 | \lstset{title={\fanglecaption},name=#1}%
2704 17 | \global\edef\lst@chunkname{\lst@intname}%
2706 19 | \def\lst@chunkname{\empty}%
2707 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2708 15.2.1.1 Chunk parameters
2709 Fangle permits parameterized chunks, and requires the parameters to be specified as listings options. The fangle script uses these, and although we don't do anything with them in the LaTeX code right now, we need to stop the listings package complaining.
2711 78a <./fangle.sty[7]() ⇑76d, lang=> +≡ ⊲77e 78b▿
2712 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2713 20 | \lst@Key{params}\relax{\def\fangle@chunk@params{#1}}%
2714 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2715 As it is common to define a chunk which then needs appending to another chunk, and annoying to have to declare a single-line chunk to manage the include, we support an append= option.
2717 78b <./fangle.sty[8]() ⇑76d, lang=> +≡ ▵78a 78c▿
2718 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2719 21 | \lst@Key{append}\relax{\def\fangle@chunk@append{#1}}%
2720 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2721 15.2.2 The noweb styled caption
2722 We define a public macro \fanglecaption which can be set as a regular title. By means of \protect, it expands to \fangle@caption at the appropriate time, when the caption is emitted.
2724 78c <./fangle.sty[9]() ⇑76d, lang=> +≡ ▵78b 78d▿
2725 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2726 \def\fanglecaption{\protect\fangle@caption}%
2727 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2728 22c ⟨some-chunk 19b⟩≡+ ⊲22b 24d⊳
2730 In this example, the current chunk is 22c, and therefore the third chunk on page 22.
2731 Its name is some-chunk.
2732 The first chunk with this name (19b) occurs as the second chunk on page 19.
2733 The previous chunk (22b) with the same name is the second chunk on page 22.
2734 The next chunk (24d) is the fourth chunk on page 24.
2736 Figure 1. Noweb Heading
2738 The general noweb output format compactly identifies the current chunk, and references to the first chunk, and the previous and next chunks that have the same name.
2739 This means that we need to keep a counter for each chunk-name, that we use to count chunks of the same name.
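The counting the macros must perform can be sketched in awk (a hypothetical illustration, not the LaTeX implementation): each chunk name gets its own tally, yielding per-name suffixes such as some-chunk-1 and some-chunk-2.

```shell
# Hypothetical sketch: number chunks independently per name.
printf '%s\n' some-chunk other-chunk some-chunk | \
awk '{ print $0 "-" ++count[$0] }'
# prints:
# some-chunk-1
# other-chunk-1
# some-chunk-2
```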
2740 15.2.3 The chunk counter
2741 It would be natural to have a counter for each chunk name, but TeX would soon run out of counters^1 (...and it soon did run out of counters, so I had to re-write the LaTeX macros to share a counter as described here), so we have one counter which we save at the end of a chunk and restore at the beginning of a chunk.
2743 78d <./fangle.sty[10]() ⇑76d, lang=> +≡ ▵78c 79c⊳
2744 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2745 22 | \newcounter{fangle@chunkcounter}%
2746 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2747 We construct the name of the variable that stores the counter by prefixing the text lst-chunk- onto the chunk's own name, and store that name in \chunkcount.
2748 We save the counter like this:
2750 79a <save-counter[1](), lang=> ≡
2751 ________________________________________________________________________
2752 \global\expandafter\edef\csname \chunkcount\endcsname{\arabic{fangle@chunkcounter}}%
2753 |________________________________________________________________________
2756 and restore the counter like this:
2758 79b <restore-counter[1](), lang=> ≡
2759 ________________________________________________________________________
2760 \setcounter{fangle@chunkcounter}{\csname \chunkcount\endcsname}%
2761 |________________________________________________________________________
2764 If there does not already exist a variable whose name is stored in \chunkcount, then we know we are the first chunk with this name, and then define a counter.
2765 Although chunks of the same name share a common counter, they must still be distinguished. We use the internal name of the listing, suffixed by the counter value. So the first chunk might be something-1 and the second chunk something-2, etc.
2766 We also calculate the name of the previous chunk if we can (before we increment the chunk counter). If this is the first chunk of that name, then \prevchunkname is set to \relax which the noweb package will interpret as not existing.
2768 79c <./fangle.sty[11]() ⇑76d, lang=> +≡ ⊲78d 79d▿
2769 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2770 23 | \def\fangle@caption{%
2771 24 | \edef\chunkcount{lst-chunk-\lst@intname}%
2772 25 | \@ifundefined{\chunkcount}{%
2773 26 | \expandafter\gdef\csname \chunkcount\endcsname{0}%
2774 27 | \setcounter{fangle@chunkcounter}{\csname \chunkcount\endcsname}%
2775 28 | \let\prevchunkname\relax%
2777 30 | \setcounter{fangle@chunkcounter}{\csname \chunkcount\endcsname}%
2778 31 | \edef\prevchunkname{\lst@intname-\arabic{fangle@chunkcounter}}%
2780 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2781 After incrementing the chunk counter, we then define the name of this chunk, as well as the name of the first chunk.
2783 79d <./fangle.sty[12]() ⇑76d, lang=> +≡ ▵79c 79e▿
2784 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2785 33 | \addtocounter{fangle@chunkcounter}{1}%
2786 34 | \global\expandafter\edef\csname \chunkcount\endcsname{\arabic{fangle@chunkcounter}}%
2787 35 | \edef\chunkname{\lst@intname-\arabic{fangle@chunkcounter}}%
2788 36 | \edef\firstchunkname{\lst@intname-1}%
2789 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2790 We now need to calculate the name of the next chunk. We do this by temporarily stepping the counter on by one; however there may not actually be another chunk with this name! We detect this by also defining a label for each chunk based on the chunk name. If there is a next chunk then it will have defined a label with that name. As labels are persistent, we can at least tell the second time LaTeX is run. If we don't find such a defined label then we define \nextchunkname to \relax.
2792 79e <./fangle.sty[13]() ⇑76d, lang=> +≡ ▵79d 80a⊳
2793 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2794 37 | \addtocounter{fangle@chunkcounter}{1}%
2795 38 | \edef\nextchunkname{\lst@intname-\arabic{fangle@chunkcounter}}%
2796 39 | \@ifundefined{r@label-\nextchunkname}{\let\nextchunkname\relax}{}%
2797 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2798 The noweb package requires that we define a \sublabel for every chunk, with a unique name, which is then used to print out its navigation hints.
2799 We also define a regular label for this chunk, as mentioned above when we calculated \nextchunkname. This requires LaTeX to be run at least twice after new chunk sections are added --- but noweb required that anyway.
2801 80a <./fangle.sty[14]() ⇑76d, lang=> +≡ ⊲79e 80b▿
2802 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2803 40 | \sublabel{\chunkname}%
2804 41 | % define this label for every chunk instance, so we
2805 42 | % can tell when we are the last chunk of this name
2806 43 | \label{label-\chunkname}%
2807 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2808 We also try to add the chunk to the list of listings, but I'm afraid we don't do very well. We want each chunk name listed once, with all of its references.
2810 80b <./fangle.sty[15]() ⇑76d, lang=> +≡ ▵80a 80c▿
2811 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2812 44 | \addcontentsline{lol}{lstlisting}{\lst@name~[\protect\subpageref{\chunkname}]}%
2813 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2814 We then call the noweb output macros in the same way that noweave generates them, except that we don't need to call \nwstartdeflinemarkup or \nwenddeflinemarkup — and if we do, it messes up the output somewhat.
2816 80c <./fangle.sty[16]() ⇑76d, lang=> +≡ ▵80b 80d▿
2817 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2821 48 | \subpageref{\chunkname}%
2828 55 | \nwtagstyle{}\/%
2829 56 | \@ifundefined{fangle@chunk@params}{}{%
2830 57 | (\fangle@chunk@params)%
2832 59 | [\csname \chunkcount\endcsname]~%
2833 60 | \subpageref{\firstchunkname}%
2835 62 | \@ifundefined{fangle@chunk@append}{}{%
2836 63 | \ifx{}\fangle@chunk@append{x}\else%
2837 64 | ,~add~to~\fangle@chunk@append%
2840 67 | \global\def\fangle@chunk@append{}%
2841 68 | \lstset{append=x}%
2844 71 | \ifx\relax\prevchunkname\endmoddef\else\plusendmoddef\fi%
2845 72 | % \nwstartdeflinemarkup%
2846 73 | \nwprevnextdefs{\prevchunkname}{\nextchunkname}%
2847 74 | % \nwenddeflinemarkup%
2849 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2850 Originally this was developed as a listings aspect, in the Init hook, but it was found easier to affect the title without using a hook — \lst@AddToHookExe{PreSet} is still required to set the listings name to the name passed to the \Chunk command, though.
2852 80d <./fangle.sty[17]() ⇑76d, lang=> +≡ ▵80c 81a⊳
2853 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2854 76 | %\lst@BeginAspect{fangle}
2855 77 | %\lst@Key{fangle}{true}[t]{\lstKV@SetIf{#1}{true}}
2856 78 | \lst@AddToHookExe{PreSet}{\global\let\lst@intname\lst@chunkname}
2857 79 | \lst@AddToHook{Init}{}%\fangle@caption}
2858 80 | %\lst@EndAspect
2859 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2860 15.2.4 Cross references
2861 We define the \chunkref command which makes it easy to generate visual references to different code chunks, e.g.
2864 \chunkref[3]{preamble}
2865 \chunkref{preamble}[arg1, arg2]
2867 Chunkref can also be used within a code chunk to include another code chunk. The third optional parameter to chunkref is a comma-separated list of arguments, which will replace defined parameters in the chunkref.
2868 Note 1. Darn it, if I have: =<\chunkref{new-mode-tracker}[{chunks[chunk_name, "language"]},{mode}]> the inner braces (inside [ ]) cause _ to signify subscript even though we have lst@ReplaceIn
2870 81a <./fangle.sty[18]() ⇑76d, lang=> +≡ ⊲80d 82a⊳
2871 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2872 81 | \def\chunkref@args#1,{%
2874 83 | \lst@ReplaceIn\arg\lst@filenamerpl%
2876 85 | \@ifnextchar){\relax}{, \chunkref@args}%
2878 87 | \newcommand\chunkref[2][0]{%
2879 88 | \@ifnextchar({\chunkref@i{#1}{#2}}{\chunkref@i{#1}{#2}()}%
2881 90 | \def\chunkref@i#1#2(#3){%
2883 92 | \def\chunk{#2}%
2884 93 | \def\chunkno{#1}%
2885 94 | \def\chunkargs{#3}%
2886 95 | \ifx\chunkno\zero%
2887 96 | \def\chunkname{#2-1}%
2889 98 | \def\chunkname{#2-\chunkno}%
2891 100 | \let\lst@arg\chunk%
2892 101 | \lst@ReplaceIn\chunk\lst@filenamerpl%
2893 102 | \LA{%\moddef{%
2896 105 | \nwtagstyle{}\/%
2897 106 | \ifx\chunkno\zero%
2901 110 | \ifx\chunkargs\empty%
2903 112 | (\chunkref@args #3,)%
2905 114 | ~\subpageref{\chunkname}%
2908 117 | \RA%\endmoddef%
2910 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2913 82a <./fangle.sty[19]() ⇑76d, lang=> +≡ ⊲81a
2914 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2917 |________________________________________________________________________
2920 Chapter 16 Extracting fangle
2921 16.1 Extracting from LyX
2922 To extract from LyX, you will need to configure LyX as explained in section ?.
2923 And this lyx-build scrap will extract fangle for me.
2925 83a <lyx-build[2]() ⇑20a, lang=sh> +≡ ⊲20a
2926 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2930 14 | =<\chunkref{lyx-build-helper}>
2931 15 | cd $PROJECT_DIR || exit 1
2933 17 | /usr/local/bin/fangle -R./fangle $TEX_SRC > ./fangle
2934 18 | /usr/local/bin/fangle -R./fangle.module $TEX_SRC > ./fangle.module
2936 20 | =<\chunkref{test:helpers}>
2937 21 | export FANGLE=./fangle
2938 22 | export TMP=${TMP:-/tmp}
2939 23 | =<\chunkref{test:run-tests}>
2940 24 | # Now check that we can extract a fangle that also passes the tests!
2941 25 | $FANGLE -R./fangle $TEX_SRC > ./new-fangle
2942 26 | export FANGLE=./new-fangle
2943 27 | =<\chunkref{test:run-tests}>
2944 |________________________________________________________________________
2948 83b <test:run-tests[1](), lang=sh> ≡
2949 ________________________________________________________________________
2951 2 | $FANGLE -Rpca-test.awk $TEX_SRC | awk -f - || exit 1
2952 3 | =<\chunkref{test:cromulence}>
2953 4 | =<\chunkref{test:escapes}>
2954 5 | =<\chunkref{test:chunk-params}>
2955 |________________________________________________________________________
2958 With a lyx-build-helper
2960 83c <lyx-build-helper[2]() ⇑19b, lang=sh> +≡ ⊲19b
2961 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2962 5 | PROJECT_DIR="$LYX_r"
2963 6 | LYX_SRC="$PROJECT_DIR/${LYX_i%.tex}.lyx"
2964 7 | TEX_DIR="$LYX_p"
2965 8 | TEX_SRC="$TEX_DIR/$LYX_i"
2966 |________________________________________________________________________
2969 16.2 Extracting documentation
2971 83d <./gen-www[1](), lang=> ≡
2972 ________________________________________________________________________
2973 1 | #python -m elyxer --css lyx.css $LYX_SRC | \
2974 2 | # iconv -c -f utf-8 -t ISO-8859-1//TRANSLIT | \
2975 3 | # sed 's/UTF-8"\(.\)>/ISO-8859-1"\1>/' > www/docs/fangle.html
2977 5 | python -m elyxer --css lyx.css --iso885915 --html --destdirectory www/docs/fangle.e \
2978 6 | fangle.lyx > www/docs/fangle.e/fangle.html
2980 8 | ( mkdir -p www/docs/fangle && cd www/docs/fangle && \
2981 9 | lyx -e latex ../../../fangle.lyx && \
2982 10 | htlatex ../../../fangle.tex "xhtml,fn-in" && \
2983 11 | sed -i -e 's/<!--l\. [0-9][0-9]* *-->//g' fangle.html
2986 14 | ( mkdir -p www/docs/literate && cd www/docs/literate && \
2987 15 | lyx -e latex ../../../literate.lyx && \
2988 16 | htlatex ../../../literate.tex "xhtml,fn-in" && \
2989 17 | sed -i -e 's/<!--l\. [0-9][0-9]* *-->$//g' literate.html
2991 |________________________________________________________________________
2994 16.3 Extracting from the command line
2995 First you will need the TeX output; then you can extract:
2997 84a <lyx-build-manual[1](), lang=sh> ≡
2998 ________________________________________________________________________
2999 1 | lyx -e latex fangle.lyx
3000 2 | fangle -R./fangle fangle.tex > ./fangle
3001 3 | fangle -R./fangle.module fangle.tex > ./fangle.module
3002 |________________________________________________________________________
3007 84b <test:helpers[1](), lang=> ≡
3008 ________________________________________________________________________
3011 3 | then echo "Passed"
3012 4 | else echo "Failed"
3019 11 | then echo "Passed"
3020 12 | else echo "Failed"
3024 |________________________________________________________________________
3028 Chapter 17 Chunk Parameters
3030 87a <test:chunk-params:sub[1](THING, colour), lang=> ≡
3031 ________________________________________________________________________
3032 1 | I see a ${THING},
3033 2 | a ${THING} of colour ${colour},
3034 3 | and looking closer =<\chunkref{test:chunk-params:sub:sub}(${colour})>
3035 |________________________________________________________________________
3039 87b <test:chunk-params:sub:sub[1](colour), lang=> ≡
3040 ________________________________________________________________________
3041 1 | a funny shade of ${colour}
3042 |________________________________________________________________________
3046 87c <test:chunk-params:text[1](), lang=> ≡
3047 ________________________________________________________________________
3048 1 | What do you see? "=<\chunkref{test:chunk-params:sub}(joe, red)>"
3050 |________________________________________________________________________
3053 Should generate output:
3055 87d <test:chunk-params:result[1](), lang=> ≡
3056 ________________________________________________________________________
3057 1 | What do you see? "I see a joe,
3058 2 | a joe of colour red,
3059 3 | and looking closer a funny shade of red"
3061 |________________________________________________________________________
3064 And this chunk will perform the test:
3066 87e <test:chunk-params[1](), lang=> ≡
3067 ________________________________________________________________________
3068 1 | $FANGLE -Rtest:chunk-params:result $TEX_SRC > $TMP/answer || exit 1
3069 2 | $FANGLE -Rtest:chunk-params:text $TEX_SRC > $TMP/result || exit 1
3070 3 | passtest diff $TMP/answer $TMP/result || (echo test:chunk-params:text failed ; exit 1)
3071 |________________________________________________________________________
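The substitution being tested above can be sketched as a one-liner (a hypothetical illustration, not fangle's implementation): arguments supplied at the reference site replace the ${...} parameters in the chunk body.

```shell
# Hypothetical sketch of ${THING}/${colour} parameter substitution.
echo 'I see a ${THING} of colour ${colour}' | \
awk -v THING=joe -v colour=red \
  '{ gsub(/\$[{]THING[}]/, THING); gsub(/\$[{]colour[}]/, colour); print }'
# prints: I see a joe of colour red
```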
3074 Chapter 18 Compile-log-lyx
3076 89a <Chunk:./compile-log-lyx[1](), lang=sh> ≡
3077 ________________________________________________________________________
3079 2 | # can't use gtkdialog -i, cos it uses the "source" command which ubuntu sh doesn't have
3082 5 | errors="/tmp/compile.log.$$"
3083 6 | # if grep '^[^ ]*:\( In \|[0-9][0-9]*: [^ ]*:\)' > $errors
3084 7 | if grep '^[^ ]*(\([0-9][0-9]*\)) *: *\(error\|warning\)' > $errors
3086 9 | sed -i -e 's/^[^ ]*[/\\]\([^/\\]*\)(\([ 0-9][ 0-9]*\)) *: */\1:\2|\2|/' $errors
3087 10 | COMPILE_DIALOG='
3090 13 | <label>Compiler errors:</label>
3092 15 | <tree exported_column="0">
3093 16 | <variable>LINE</variable>
3094 17 | <height>400</height><width>800</width>
3095 18 | <label>File | Line | Message</label>
3096 19 | <action>'". $SELF ; "'lyxgoto $LINE</action>
3097 20 | <input>'"cat $errors"'</input>
3100 23 | <button><label>Build</label>
3101 24 | <action>lyxclient -c "LYXCMD:build-program" &</action>
3103 26 | <button ok></button>
3107 30 | export COMPILE_DIALOG
3108 31 | ( gtkdialog --program=COMPILE_DIALOG ; rm $errors ) &
3115 38 | file="${LINE%:*}"
3116 39 | line="${LINE##*:}"
3117 40 | extraline=`cat $file | head -n $line | tac | sed '/^\\\\begin{lstlisting}/q' | wc -l`
3118 41 | extraline=`expr $extraline - 1`
3119 42 | lyxclient -c "LYXCMD:command-sequence server-goto-file-row $file $line ; char-forward ; repeat $extraline paragraph-down ; paragraph-up-select"
3123 46 | if test -z "$COMPILE_DIALOG"
3126 |________________________________________________________________________