2008-09-27 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Todo: find all todos together and make a plan what is missing for a

- verifytree or something like that. fsck maybe.

- rersyncrecent, the script itself

- a way to only mirror the recentfiles without mirroring the whole
remote system such that people can decide to mirror only partially see

- long living objects that mirror again and again. Inject something
into ta, see how it goes over to tb.

- how do we continue filling up the DONE system when we use an object
for the second time? "fully covered" and "uptodate" or new terminology.

- idea: have a new flag on recentfiles with the meaning: if this
changes, you're required to run a full rsync over all the files. The
reason why we set it would probably be: something foul happened. We injected
files in arbitrary places or didn't inject them although they changed.
The content of the flag? Timestamp? The relation between the
recentfiles would have to be inheritance from the principal, because any
out of band changes would sooner or later propagate to the next recentfile.

By upping the flag often one can easily ruin the slaves.

last out of band change? dirtymark?

Anyway, this implies that we read a potentially existing recentfile

And it implies that we have an eventloop that keeps us busy in 2-3
cycles, one for current stuff (tight loop) and one for the recentfiles
(cascade when principal has changed), one for the old stuff after a

- overview called on the wrong file should be understandable

- See that the long running process really only updates the principal
file unless it has missed a timespan during which something happened. If
nothing happened, it must notice even when it misses the timespan.

- fill up recentfiles with fake (historic) entries; fill up with
individual corrections; algorithm maybe to be done with bigfloat so that
we can always place something in the middle between two entries
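
The midpoint idea above can be sketched with arbitrary precision
arithmetic (a hedged sketch: Python's Decimal stands in for whatever
bigfloat library the implementation would actually pick):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # plenty of digits so midpoints never collide

def epoch_between(lo, hi):
    """Return a timestamp string strictly between two epoch strings."""
    a, b = Decimal(lo), Decimal(hi)
    assert a < b, "interval must be non-empty"
    return str((a + b) / 2)

# squeeze a fake historic entry between two existing ones
mid = epoch_between("1222322541.7963", "1222324012.8474")
```

Because precision is fixed but generous, repeated bisection between two
neighbouring entries keeps producing distinct timestamps for a long time.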

- the meta data field that must change when we fake something up so that
the downstream people know they have to re-fetch everything.

- how tolerant are we against missing files upstream? how do we keep
track? There are legitimate cases where we read the upstream index right
before a file got deleted there and then find that file as new and want
it. There are other cases that are not self healing and must be tracked

- how, exactly, do we have to deal with deletes? With rsync errors?

rsync: link_stat "/id/K/KA/KARMAN/Rose-HTMLx-Form-Related-0.07.meta" (in
authors) failed: No such file or directory (2)

The file above is a delete in the 1h file and a new in the 1M file, and the
delete in the locally running rmirror did not get propagated to the 1M
object. Bug. And the consequence is a standstill.

It seems that a slave that works with a file below the principal needs
to merge things all the way up to get rid of later deletes. Or keep
track of all deletes and skip them later. So we need a trackdeletes.pm
similar to the done.pm?

see also 2008-08-20 about spurious deletes that really have no add
counterpart and yet they are not wrong.

- consider the effect when resyncing the recentfile takes longer than
the time per loop. Then we never rsync any file. We need to diagnose
that and force an increase of that loop time. But when we are fast
enough again later because the net has recovered, then we need to
switch back to the original parameters.
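
That adaptive loop timing could look like this (a minimal sketch; the
function name and the factors are assumptions, not anything from the
codebase): grow the interval while a sync overruns it, and creep back
down once syncs fit comfortably again:

```python
def adjust_loop_seconds(current, sync_duration, minimum=30.0):
    """Grow the loop interval when a sync overran it; decay back otherwise."""
    if sync_duration >= current:
        return sync_duration * 1.5   # back off: give the next sync headroom
    if sync_duration < current / 4:
        return max(minimum, current * 0.75)  # net recovered: creep back down
    return current                   # comfortable: leave the interval alone
```

Calling this once per loop iteration with the measured sync duration
gives the diagnose-and-recover behaviour the note asks for.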

- remember to verify that no temp files are left lying around and the

- we must throw away the pathdb when we have reached the end of Z. From
that moment we can have a very small pathdb because the only reason for
a pathdb is that we know to ignore old records in old files. We won't
need this pathdb again before the next full pass over the data is
necessary and then we will rebuild it as we go along.

- status file for not long running jobs that want to track upstream with

- revisit all XXX _float areas and study Sub::Exporter

- persistent DB even though we just said we do not need it. Just for
extended capabilities and time savings when, for example, upstream
announces a reset and we get new recentfiles and could then limit
ourselves to a subset of files (those that have a changed epoch) in a
first pass and would only then do the loop to verify the rest. Or

- lookup by epoch and by path and use this ability on PAUSE to never
again register a file twice that doesn't need it.

* Todo: aggregate files should know their feed and finding the principal
should be done stepwise. (?)

* Todo: DESTROY thing that unlocks. Today when I left the debugger I
left locks around. DONE

2008-09-26 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* maybe extend the _overview so that it always says if and where the
last file is in the next file and where the next event in the next rf
would lie. No, don't like this anymore. REJECT

* take the two new redundant tests out again, only the third must

* Todo: add a sanity check if the merged structure is really pointing to
a different rf and that this different rf is larger. DONE

2008-09-25 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* now test, if they are overlapping. And test if there is a file in the
next rf that would fit into this rf's interval.

rf   high epoch       low epoch        span (h)
1h   1222324012.8474  1222322541.7963    0.4086
6h   1222320411.2760  1222304207.6931    4.5010  missing overlap/gap!
1d   1222320411.2760  1222238750.5071   22.6835  large overlap
1W   1222313218.3626  1221708477.5829  167.9835

I suspect that somebody writes a merged timestamp without having merged
and then somebody else relies on it.

If aggregate is running, the intervals must not be overstepped; if it
is not running, there must not be bounds. The total number of events in
the system must be counted and must be controlled throughout the tests.
That the test required the additional update was probably nonsense,
because aggregate can cut pieces too. FIXED & DONE
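
The overlap test behind the table above can be sketched like this (a
hedged sketch; the tuple layout is an assumption, and the data is
copied straight from the table):

```python
def check_coverage(recentfiles):
    """For each adjacent pair of recentfiles, report overlap or gap.

    recentfiles: list of (name, low, high) epoch tuples, principal first.
    Each file's low epoch should fall inside the next (larger) file's
    range; otherwise events can fall through the crack.
    """
    reports = []
    for (name_a, low_a, _), (_, _, high_b) in zip(recentfiles, recentfiles[1:]):
        if high_b < low_a:
            reports.append((name_a, "gap"))       # events could be lost here
        else:
            reports.append((name_a, "overlap"))   # safe: the intervals touch
    return reports
```

Run against the four rows of the table, this flags exactly the 1h/6h
boundary that the note marks "missing overlap/gap!".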

2008-09-23 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* rrr-aggregate seems to rewrite the RECENT file even if nothing has

2008-09-21 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Most apparent bug at the moment is that the recentfiles are fetched
too often. Only the principal should be fetched and if it has not
changed, the others should not be refetched. ATM I must admit that I'm
happy that we refetch more often than needed because I can more easily
fix bugs while the thing is running.

* Let's say, 1220474966.19501 is a timestamp of a file that is already
done but the done system does not know about it. The reason for the
failure is not known and we never reach the status uptodate because of
this. We must get over it.

Later it turns out that the origin server had a bug somewhere.
1220474966.19042 came after 1220474966.19501. Or better: it was in the
array of the recentfile one position above. The bug was my own.

2008-09-20 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* There is the race condition where the server does a delete and the
slave does not yet know and then tries to download it because it sees
the new entry. So for this time window we must be more tolerant against
failure. If we cannot download a file, we should just skip it and should
not retry immediately. The whole system should discover the lost thing
later. Keeping track with the DONE system should really be a no brainer.

But there is something more: the whole filesystem is a database and the
recentfiles are one possible representation of it. It's a pretty useful
representation, I think; that's why I have implemented something around
it. But for strictly local operation it has little value. For local
operation we would much rather have a database. So we would enter every
recentfile reading and every rsync operation and for every file the last
state change and what it leads to. Then we would always ignore older
records without the efforts involved with recentfiles.

The database would have: path,recentepoch,rsyncedon,deletedon

Oh well, not yet clear where this leads to.
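
That four-column idea could be sketched with sqlite (a hedged sketch:
the column names come straight from the note above, the table name and
the helper are assumptions for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a real slave would use a file on disk
conn.execute("""
    CREATE TABLE filestate (
        path        TEXT PRIMARY KEY,
        recentepoch TEXT,   -- epoch string as published, never re-stringified
        rsyncedon   REAL,   -- when we last fetched the file
        deletedon   REAL    -- when upstream deleted it, NULL if alive
    )""")

def note_event(path, epoch, deleted=None):
    """Record the newest known state for a path, ignoring older records."""
    row = conn.execute(
        "SELECT recentepoch FROM filestate WHERE path = ?", (path,)).fetchone()
    if row and float(row[0]) >= float(epoch):
        return  # older record: ignore, exactly as the note suggests
    conn.execute(
        "INSERT OR REPLACE INTO filestate VALUES (?, ?, NULL, ?)",
        (path, epoch, deleted))
```

The "always ignore older records" rule becomes a single comparison
against the stored epoch instead of a walk over several recentfiles.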

2008-09-19 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Bug: the bigloop ran into a funny endless loop after EWILHELM uploaded
Module-Build. It *only* rsynced the "1h" recentfile from that moment on.

* statusfile, maybe only on demand, alone to have a sharp debugging
tool. It is locked and all recentfiles dump themselves into it and we
can build a viewer that lets us know where we stand and what's inside.

* remember: only the principal recentfile needs expiration, all others
shall be expired by principal if it discovers that something has move

2008-09-18 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Always check if we stringify to a higher value than in the entry

* And in covered make an additional check if we would be able to see a
numerical difference between the two numbers and if we can't then switch
to a different, more expensive algorithm. Do not want to be caught by
floating surprises. DONE

2008-09-17 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* caching has several aspects here: we can cache the interval of the
recentfile, which only will change when the mtime of the file changes. We
must re-mirror the recentfile when its ttl has expired. Does have_read
tell you anything? It counts nothing at all. Only the mtime is
interesting: the ntuple mtime, low-epoch, high-epoch. And as a separate
thing the have_mirrored, because it is unrelated to the mtime.
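
An mtime-keyed cache for that ntuple could be sketched like this (a
hedged sketch; the names are illustrative, not the module's API):

```python
import os

_interval_cache = {}  # path -> (mtime, low_epoch, high_epoch)

def cached_interval(path, read_interval):
    """Return (low, high) for a recentfile, re-reading only when mtime moved."""
    mtime = os.stat(path).st_mtime
    hit = _interval_cache.get(path)
    if hit and hit[0] == mtime:
        return hit[1], hit[2]          # file untouched: trust the cache
    low, high = read_interval(path)    # caller-supplied parser
    _interval_cache[path] = (mtime, low, high)
    return low, high
```

The ttl question stays separate: expiry decides when to re-mirror the
file, while the mtime alone decides when to re-parse it.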

* Robustness of floating point calculations! I always thought that the
string calculated by the origin server for the floating representation
of the epoch time is just a string. When we convert it to a number and
later back to a string, the other computer might come to a different
conclusion. This must not happen, we want to preserve it under any
circumstances. I will have to write tests with overlong sequences that
get lost in arithmetic and must see if all still works well. DONE

But one fragile point remains: if one host considers a>b and the other
one considers them == but not eq. To prevent this, we must probably do
some extra homework. DONE
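
The hazard is easy to demonstrate (a sketch; the overlong epoch string
is made up): round-tripping through a native float loses digits, so the
published string must be carried around verbatim and compared
numerically only for ordering:

```python
original = "1220474966.19501000000000001"  # overlong epoch string (made up)

# round-tripping through the platform float silently changes the text
round_tripped = repr(float(original))
assert round_tripped != original

# the safe policy: compare numerically for ordering, but never replace
# the stored string with a re-stringified float
import decimal
a = decimal.Decimal("1220474966.19042")
b = decimal.Decimal("1220474966.19501")
assert a < b                          # numeric comparison still works
assert str(b) == "1220474966.19501"   # and the text survives untouched
```

This also shows the `==` but not `eq` trap: two strings can collapse to
the same float while remaining distinct as text.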

2008-09-16 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* the concept of tracking DONE needs an object per recentfile that has
something like these methods:

do_we_have(xxx), we_have(xxx), do_we_have_all(xxx,yyy), reset()

covered() register() covered()

The unclear thing is how we translate points in time into intervals. We
could pass a reference to the current recent_events array when running
we_have(xxx) and let the DONE object iterate over it such that it only
has to store a list of intervals that can melt into each other. Ah, even
passing the list together with a list of indexes seems feasible.

Or maybe ask for the inverted list?

Whenever the complete array is covered by the interval we say we are
fully covered and if the recentfile is not expired, we are uptodate.
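
The "intervals that can melt into each other" idea can be sketched as
follows (hedged; the method names mirror the note, everything else is
an assumption):

```python
class Done:
    """Track covered epoch intervals, merging any that touch or overlap."""
    def __init__(self):
        self.intervals = []  # disjoint, sorted (low, high) pairs

    def register(self, low, high):
        """Add a covered interval, melting it into any neighbours."""
        merged = [(low, high)]
        for lo, hi in self.intervals:
            if hi < low or lo > high:
                merged.append((lo, hi))           # disjoint: keep as is
            else:                                 # touching: melt together
                low, high = min(lo, low), max(hi, high)
                merged[0] = (low, high)
        self.intervals = sorted(merged)

    def covered(self, low, high):
        """True if one stored interval spans the whole query interval."""
        return any(lo <= low and high <= hi for lo, hi in self.intervals)
```

"Fully covered" then means covered() over the whole span of the
recentfile's recent_events array, and uptodate adds the not-expired
check on top.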

2008-09-07 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

2008-09-05 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* need a way to "return" the next entry after the end of a list. When
the caller says "before" or "after" we would like to know if he could
cover that interval/threshold or not because this influences the effect
of a newer timestamp of that recentfile. DONE with $opt{info}.

2008-09-04 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* one of the next things to tackle: the equivalent of csync2 -TIXU.

loop implies tixu (?). Nope, something like --statefile decides. Per

T test, I init, X including removals, U nodirtymark

So we have no concept of dirtymarks, we only trust that since we are
running we have observed everything steadily. But people will not let
this program run forever so we must consider both startup penalty and
book keeping for later runs. We keep this for later. For now we write a
long running mirror that merges several intervals.

2008-09-02 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* need to speed up the 02 test, it's not clever to sleep so much. Reduce

* rersyncrecent, the script: default to one week. The name of the switch
is --after. Other switches? --loop!

2008-08-30 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* need a switch --skip-deletes (?)

* need a switch --enduser that tells us that the whole tempfile
discipline is not needed when there is no downstream user. (?)

Without this switch we cannot have a reasonable recent.pl that just
displays the recent additions. Either we accept downloading everything.
Or we download temporary files without the typical rsync protocol

Or maybe the switch is --tmpdir? If --tmpdir would mean: do not use
File::Temp::tempdir, this might be a win.

2008-08-29 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* apropos missing: we have no push, we never know the downstream
servers. People who know their downstream hosts and want to ascertain
something will want additional methods we have never thought about, like
update or delete a certain file.

2008-08-26 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* tempted to refactor rmirror into resolve_symlink, localize, etc.
Curious if rsync_options=links equal 0 vs. 1 will make the expected

* rsync options: it's a bit of a pain that we usually need several rsync
options, like compress, links, times, checksum and that there is no
reasonable default except the original rsync default. I think we can
safely assume that the rsync options are shared between all recentfile
instances within one recent tree.

2008-08-20 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* deletes: if a delete follows an add quickly enough it may happen that
a downstream mirror did not see the add at all! It seems this needs to
be mentioned somewhere. The point here is that even if the downstream is
never missing the principal timeframe it may encounter a "delete" that
has no complementary "add" anywhere.

2008-08-19 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* I suspect the treatment of metadata is incorrect during read or something.
The bug that I am watching is that between 06:08 and 06:09 the 6h file
contained more than 6 hours worth of data. At 06:08 we merged into the
1d file. We need to take snapshots of the 6h file over the course of an
hour or maybe only between XX:08 and XX:09? Nope, the latter is not

Much worse: watching the 1h file: right at the moment (at 06:35) it
covers 1218867584-1219120397 which is 70 hours.

Something terribly broken. BTW, 1218867584 corresponds to Sat Aug 16
08:19:44 2008, that is when I checked out last time, so it seems to be
aggregating and never truncating?

No, correct is: it is never truncating; but wrong is: it is aggregating.
It does receive a lot of events from time to time from a larger file.
Somehow a large file gets merged into the small one and because the
"meta/merged" attribute is missing, nobody is paying attention. I
believe that I can fix this by making sure that metadata are honoured
during read. DONE and test adjusted.

2008-08-17 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* grand renaming plan

remotebase          => remoteroot  to fit well with localroot         DONE
local_path()        => localroot   seems to me should already work    DONE
recentfile_basename => rfilename   no need to stress it has no slash  DONE

filenameroot??? Doesn't seem too bad to me today. Maybe something like
kern? It would anyway need a deprecation cycle because it is an
important constructor.

* I like the portability that Data::Serializer brings us but the price
is that some day we might find out that it is slowing us a bit. We'll

2008-08-16 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* should we not enter the interval of the principal (or the interval of
the merging file?) in every aggregated/merged file?

* we should aim at a first release and give up on thinking about
sanitizing stuff and zloop. Let's just admit that a full traditional
rsync is the only available sanitizer ATM. Otherwise it's complicated
stuff: sanitizing on the origin server, sanitizing on the slaves,
sanitizing forgotten files, broken timestamps, etc. Let's delay it and
get the basics out before this becomes a major cause for mess.

2008-08-13 Andreas Koenig <k@andreas-koenigs-computer.local>

* On OSes not supporting symlinks we expect that RECENT.recent contains
the contents of the principal recentfile. Actually this is identical on
systems supporting symlinks. Simple. What follows from that is that we
need to keep the serializer in the metadata because we cannot read it
from the filename, don't we? Of course not. It's a chicken and egg
problem. This leaves us with the problem to actually parse the
serialized data to find out in which format it is. So who can do the 4
or 5 magics we wanted to support? File::LibMagic?

2008-08-09 Andreas Koenig <k@andreas-koenigs-computer.local>

* remotebase and recentfile_basename are ugly names. Now that we need a
word for the shortest/principal/driving recentfile too we should do

localroot is good. rfile is good. local_path() is bad, local_path($path)
is medium, filenameroot() is bad, remotebase is bad, recentfile is

Up to now remotebase was the string that described the remote root
directory in rsync notation, like pause.perl.org::authors. And
recentfile_basename was "RECENT-1h.yaml".

2008-08-08 Andreas Koenig <k@andreas-koenigs-computer.local>

* The test that was added in today's checkin is a good start for a test
of rmirror. We should have more methods in Recent.pm: verify,
addmissingfiles. We should verify the current tree, then rmirror it and
then verifytree the copy. We could then add some arbitrary file and let
it be discovered by addmissingfiles, then rmirror again and then
verifytree the copy again.

Then we could start stealing from the csync2 sqlite database [no port to
OSX!] and fill a local DB. And methods to compare the database with the
recentfiles. Our strength is that in principle we could maintain state
with a single float. We have synced up to 1234567890.123456. If the Z
file does not add new files all we have to do is mirror the new ones and

This makes it clear that we should extend the current protocol and declare
that we cheat when we add files too late, just to help the other end
keeping track. Ah yes, that's what was meant when zloop was mentioned

Maybe need to revisit File::Mirror to help me with this task.

2008-08-07 Andreas Koenig <k@andreas-koenigs-computer.local>

* There must be an allow-me-to-truncate flag in every recentfile.
Without it one could construct a sequence of updates winning the locking
battle against the aggregator. Only if an aggregator has managed to
merge data over to the next level can truncating be allowed. DONE with

2008-08-06 Andreas Koenig <k@andreas-koenigs-computer.local>

* We should probably guarantee that no duplicates enter the aggregator

2008-08-02 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* To get the merge operation faster we would need a good benchmark test. What
02 spits out isn't reliable enough and is dominated by many other

commit 10176bf6b79865d4fe9f46e3857a3b8669fa7961
Author: Andreas J. Koenig <k@k75.(none)>
Date: Sat Aug 2 07:58:04 2008 +0200

commit 3243120a0c120aaddcd9b1f4db6689ff12ed2523
Author: Andreas J. Koenig <k@k75.(none)>
Date: Sat Aug 2 11:40:29 2008 +0200

there was a lot of trying but the effect is hardly measurable with

* overhead of connecting seems high. When setting
max_files_per_connection to 1 we see that.

2008-08-01 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* 1217622571.0889 - 1217597432.86734 = 25138.2215600014

25138.2215600014/3600 = 6.98283932222261

It jumps out that this is ~7 hours, not ~6, so there seems to
be a bug in the aggregator. FIXED
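
The arithmetic above is easy to re-check (a sketch; the point is that a
6h recentfile should never span much more than 6 hours):

```python
high, low = 1217622571.0889, 1217597432.86734
span_hours = (high - low) / 3600

# the 6h file spanning ~7 hours is the smoking gun for the aggregator bug
assert span_hours > 6.9
```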

2008-07-27 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* e.g. id/Y/YE/YEWENBIN/Emacs-PDE-0.2.16.tar.gz: Do we have it, should
we have it, can we mirror it, mirror it!

I fear this needs a new class which might be called
File::Rsync::Mirror::Recent. It would collect all recentfiles of a kind
and treat them as an entity. I realize that a single recentfile may be
sufficient for certain tasks and that it is handy for the low level
programmer but it is not nice to use. If there is a delete in the 1h
file then the 6h file still contains it. Seekers of the best information
need to combine at least some of the recentfiles most of the time.

There is the place for the Z loop!

But the combination is something to collect in a database, isn't it? Did
csync2 just harrumph?

2008-07-26 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* it just occurred to me that hosts in the same mirroring pool could
help out each other even without rewriting the recentfile. Just fetch
the stuff to mirror from several places, bingo. But that's something
that should rather live in a separate package or in rsync directly.

* cronjobs are unsuited because with ntp they would all come at the full
minute and disturb each other. Besides that I'd hate to have a backbone
with more than a few seconds latency.

2008-07-25 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* a second rsync server with access control for PAUSE. Port? 873 is the
standard port, let's take 8873.

* if there were a filesystem based on this, it would have slow access
to nonexistent files. It would probably provide a wrong readdir (only based
on current content) or also a slow one (based on a recentfile written
after the call). But it would provide fast access to existing files. Or
one would deliberately allow slightly blurred answers based on some
sqlite reflection of the recentfiles.

* todo: write a variant of mirror() that combines two or more
recentfiles and treats them like one

* todo: signal handler to remove the tempfile

2008-07-24 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* now that we have the symlink I forgot how it should be used in

* the z loop: add missing files to the Z file. Just append them (instead of
prepending). So one guy prepends something from the Y file from time to
time and another guy appends something rather frequently. Collecting
pond. When Y merges into Z, things get an epoch and the collecting pond
gets smaller. What exactly are "missing files"?

take note of current epoch of the alpha file, let's call it the

find all files on disk

remove all files registered in the recentworld up to recent-ts

remove all files that have been deleted after recent-ts according to
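
Those steps can be sketched like this (hedged; everything here, from
the function name to the shape of the recentworld event records, is an
assumption for illustration, following the note's steps literally):

```python
def missing_files(on_disk, recent_events, recent_ts):
    """Start from all files on disk, drop what the recentworld already
    registered up to recent-ts, and drop files whose delete happened
    after recent-ts. Whatever remains is a candidate for the Z file."""
    candidates = set(on_disk)
    for event in recent_events:           # each: {"path", "type", "epoch"}
        if event["type"] == "new" and event["epoch"] <= recent_ts:
            candidates.discard(event["path"])
        elif event["type"] == "delete" and event["epoch"] > recent_ts:
            candidates.discard(event["path"])
    return sorted(candidates)
```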

2008-07-23 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* rersyncrecent might be a cronjob with a (locked) state file which
contains things like after and maybe last z sync or such?

rrr-mirror might be an alternative name but how would we justify the
three Rs when there is no Re-Rsync-Recent?

With the --loop parameter it is an endless loop, without it there is no
loop. At least this is simple.

* todo: new accessor z-interval specifies how often the Z file is updated
against the filesystem. We probably want no epoch stamp on these
entries. And we want to be able to filter the entries (e.g. no
by-modules and by-category tree)

2008-07-20 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Fill the Z file. gc or fsck or both. Somehow we must get the old files
into Z. We do not need the other files filled up with filesystem

* need interface to query for a file in order to NOT call update on
PAUSE a second time within a short time.

2008-07-19 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* recommended update interval? Makes no sense, is different for

change-log-default-name: "Todo"