2009-03-21 Andreas J. Koenig <andk@cpan.org>

* Bug?: should it be harder than it is atm to set the timestamp to the

* bug with native integers:

  path: id/D/DE/DELTA/Crypt-Rijndael_PP-0.03.readme

  path: id/L/LG/LGODDARD/Tk-Wizard-2.124.readme

  Native integer broke when native math was turned off. FIXED
* Bug: something between id/P/PH/PHISH/CGI-XMLApplication-1.1.2.readme
and id/C/CH/CHOGAN/HTML-WWWTheme-1.06.readme. Mirroring the Z file loops

Yes, records out of order:

  447004 epoch: 995885533
  447005 path: id/P/PH/PHISH/CGI-XMLApplication_0.9.3.readme

  447008 epoch: 995890358
  447009 path: id/H/HD/HDIAS/Mail-Cclient-1.3.readme

  447012 epoch: 995892221
  447013 path: id/H/HD/HDIAS/Mail-Cclient-1.3.tar.gz

FIXED with sanity check and later with the integer fix.
* Bug: want the index files in a .recent directory

* Bug: lots of dot files are not deleted in time

* possible test case: can a delete change the timestamp? This would
probably break the order of events.
2009-03-20 Andreas J. Koenig <andk@cpan.org>

* 1233701831.34486 what's so special about this number/string? It
id/G/GR/GRODITI/MooseX-Emulate-Class-Accessor-Fast-0.00800.tar.gz and
atm lives in Y, Q, and Z.

It is the first entry after id/--skip-locking; it has timestamp
1234164228.11325 and represents a file that doesn't exist anymore.
  Sync 1237531537 (31547/33111/Z) id/J/JH/JHI/String-Approx-2.7.tar.gz ...
  _bigfloatcmp called with l[1237505213.21133]r[UNDEF]: but both must be defined at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recentfile/FakeBigFloat.pm line 76
   File::Rsync::Mirror::Recentfile::FakeBigFloat::_bigfloatcmp(1237505213.21133, undef) called at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recentfile/FakeBigFloat.pm line 131
   File::Rsync::Mirror::Recentfile::FakeBigFloat::_bigfloatlt(1237505213.21133, undef) called at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recentfile/Done.pm line 110
   File::Rsync::Mirror::Recentfile::Done::covered('File::Rsync::Mirror::Recentfile::Done=HASH(0x8857fb4)', 1237505213.21133, 0.123456789) called at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recentfile.pm line 2041
   File::Rsync::Mirror::Recentfile::uptodate('File::Rsync::Mirror::Recentfile=HASH(0x8533a2c)') called at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recent.pm line 536
   File::Rsync::Mirror::Recent::rmirror('File::Rsync::Mirror::Recent=HASH(0x82ef3d0)', 'skip-deletes', 1) called at /home/k/sources/CPAN/GIT/trunk/bin/testing-rmirror.pl line 27
  at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recentfile/FakeBigFloat.pm line 76
   File::Rsync::Mirror::Recentfile::FakeBigFloat::_bigfloatcmp(1237505213.21133, undef) called at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recentfile/FakeBigFloat.pm line 131
   File::Rsync::Mirror::Recentfile::FakeBigFloat::_bigfloatlt(1237505213.21133, undef) called at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recentfile/Done.pm line 110
   File::Rsync::Mirror::Recentfile::Done::covered('File::Rsync::Mirror::Recentfile::Done=HASH(0x8857fb4)', 1237505213.21133, 0.123456789) called at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recentfile.pm line 2041
   File::Rsync::Mirror::Recentfile::uptodate('File::Rsync::Mirror::Recentfile=HASH(0x8533a2c)') called at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recent.pm line 536
   File::Rsync::Mirror::Recent::rmirror('File::Rsync::Mirror::Recent=HASH(0x82ef3d0)', 'skip-deletes', 1) called at /home/k/sources/CPAN/GIT/trunk/bin/testing-rmirror.pl line 27

FIXED, it was the "--skip-locking" file, where manual intervention had been involved
* bug on the mirroring slave: when the dirtymark gets increased we
probably do not reset the done intervals. The mirrorer stays within
tight bounds where it tries to sync with upstream and never seems to
finish. In the debugging state file I see lots of identical intervals
that do not get collapsed. When I restart the mirrorer it dies with:
  Sync 1237507989 (227/33111/Z) id/X/XI/XINMING/Catalyst-Plugin-Compress.tar.gz ...
  _bigfloatcmp called with l[1237400817.94363]r[UNDEF]: but both must be defined at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recentfile/FakeBigFloat.pm line 76
   File::Rsync::Mirror::Recentfile::FakeBigFloat::_bigfloatcmp(1237400817.94363, undef) called at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recentfile/FakeBigFloat.pm line 101
   File::Rsync::Mirror::Recentfile::FakeBigFloat::_bigfloatge(1237400817.94363, undef) called at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recentfile/Done.pm line 226
   File::Rsync::Mirror::Recentfile::Done::_register_one('File::Rsync::Mirror::Recentfile::Done=HASH(0x84c6af8)', 'HASH(0xb693f2dc)') called at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recentfile/Done.pm line 200
   File::Rsync::Mirror::Recentfile::Done::register('File::Rsync::Mirror::Recentfile::Done=HASH(0x84c6af8)', 'ARRAY(0x8c54bfc)', 'ARRAY(0xb67e618c)') called at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recentfile.pm line 1044
   File::Rsync::Mirror::Recentfile::_mirror_item('File::Rsync::Mirror::Recentfile=HASH(0x84abcb0)', 227, 'ARRAY(0x8c54bfc)', 33110, 'File::Rsync::Mirror::Recentfile::Done=HASH(0x84c6af8)', 'HASH(0x84abd8c)', 'ARRAY(0x839cb64)', 'HASH(0x839c95c)', 'HASH(0xb6a30f0c)', ...) called at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recentfile.pm line 992
   File::Rsync::Mirror::Recentfile::mirror('File::Rsync::Mirror::Recentfile=HASH(0x84abcb0)', 'piecemeal', 1, 'skip-deletes', 1) called at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recent.pm line 564
   File::Rsync::Mirror::Recent::_rmirror_mirror('File::Rsync::Mirror::Recent=HASH(0x84ab6d4)', 7, 'HASH(0x8499488)') called at /home/k/sources/rersyncrecent/lib/File/Rsync/Mirror/Recent.pm line 532
   File::Rsync::Mirror::Recent::rmirror('File::Rsync::Mirror::Recent=HASH(0x84ab6d4)', 'skip-deletes', 1) called at /home/k/sources/CPAN/GIT/trunk/bin/testing-rmirror.pl line 27
and debugging stands at

  epoch: 1237400802.5789
  path: id/J/JO/JOHND/CHECKSUMS

  epoch: 1237400807.97514
  path: id/M/MI/MIYAGAWA/CHECKSUMS

  epoch: 1237400817.94363
  path: id/X/XI/XINMING/Catalyst-Plugin-Compress.tar.gz

and it is reproducible.
Why does the mirrorer not fetch a newer Z file? It is 18 hours old while
pause has a fresh one.

FIXED, it was the third ANDed term in each of the ifs in the IV block in
_register_one: with that we make sure that we do not stamp on valuable
2009-03-17 Andreas J. Koenig <andk@cpan.org>

* done: verified the existence of the floating point bug in bleadperl
and verified that switching from YAML::Syck to YAML::XS does not resolve

BTW, the switch was doable with

  perl -i~ -pe 's/Syck/XS/g' lib/**/*.pm t/*.t

and should be considered a separate TODO
* todo: integrate a dirty update with two aggregate calls before
unlocking for frictionless dirtying

* todo: start the second rsync daemon on pause

* todo: move index files to .recent: this cannot simply be done by
setting filenameroot to .recent/RECENT. Other parts of the modules rely
on the fact that dirname(recentfile) is the root of the mirrored tree.
2009-03-16 Andreas J. Koenig <andk@cpan.org>

* What was the resolution of the mirror.pl delete hook bug? Do we call
the delete hook when pause removes a file from MUIR?

* Today on pause: Updating 2a13fba..29f284d and installing it for
/usr/local/perl-5.10.0{,-RC2}

TURUGINA/Set-Intersection-0.01.tar.gz was the last upload before this
action and G/GW/GWILLIAMS/RDF-Query-2.100_01.tar.gz the first after it
2009-03-15 Andreas J. Koenig <andk@cpan.org>

* currently recent_events has the side effect of setting dirtymark
because it forces a read on the file. That should be transparent, so
that the dirtymark call always forces a cache-able(?) read.

* The bug below is -- after a lot of trying -- not reproducible on a
small script, only in the large test script. The closest to the output
  my $x = "01237123229.8814";

  ($l,$r) = ($1,$2) if $x =~ /(.)(.+)/;

  $l = "1237123231.22458";
  $r = "1237123231.22458";

  Devel::Peek::Dump $l;
  Devel::Peek::Dump $r;
  Devel::Peek::Dump $x = $l <=> $r;
The checked in state at c404a85 fails the test with my
/usr/local/perl-5.10-uld/bin/perl on 64bit but curiously not with
/usr/local/perl-5.10-g/bin/perl. So it seems the behaviour is not
consistent even within the test script.

* Todo: write a test that inserts a second dirty file with an already
existing timestamp. DONE
* Bug in perl 5.10 on my 64bit box:

  DB<98> Devel::Peek::Dump $l
  SV = PVMG(0x19e0450) at 0x142a550
    FLAGS = (PADMY,NOK,POK,pNOK,pPOK)
    NV = 1237123231.22458
    PV = 0x194ce70 "1237123231.22458"\0

  DB<99> Devel::Peek::Dump $r
  SV = PVMG(0x19e0240) at 0x142a3e8
    FLAGS = (PADMY,POK,pPOK)
    PV = 0x19ff900 "1237123231.22458"\0

  DB<100> Devel::Peek::Dump $l <=> $r
  SV = IV(0x19ea6e8) at 0x19ea6f0
    FLAGS = (PADTMP,IOK,pIOK)

  DB<101> Devel::Peek::Dump $l
  SV = PVMG(0x19e0450) at 0x142a550
    FLAGS = (PADMY,NOK,POK,pNOK,pPOK)
    NV = 1237123231.22458
    PV = 0x194ce70 "1237123231.22458"\0

  DB<102> Devel::Peek::Dump $r
  SV = PVMG(0x19e0240) at 0x142a3e8
    FLAGS = (PADMY,NOK,POK,pIOK,pNOK,pPOK)
    NV = 1237123231.22458
    PV = 0x19ff900 "1237123231.22458"\0

Retry with uselongdouble gives same effect. Not reproducible on 32bit box (k75).
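The direction FakeBigFloat takes is to compare such timestamps as decimal strings instead of trusting native float comparison. A minimal sketch of that idea (illustrative only, not the module's actual _bigfloatcmp; it assumes non-negative decimal-string inputs):

```python
def bigfloat_cmp(l, r):
    """Compare two non-negative decimal-string timestamps without
    converting to native floats (sketch of the FakeBigFloat idea;
    not the module's actual code)."""
    if l is None or r is None:
        raise ValueError("both operands must be defined")
    li, _, lf = l.partition(".")
    ri, _, rf = r.partition(".")
    # integer parts: strip leading zeros, then a longer one is bigger,
    # equal lengths compare lexically
    li, ri = li.lstrip("0") or "0", ri.lstrip("0") or "0"
    if len(li) != len(ri):
        return -1 if len(li) < len(ri) else 1
    if li != ri:
        return -1 if li < ri else 1
    # fractional parts: right-pad with zeros, then compare lexically
    width = max(len(lf), len(rf))
    lf, rf = lf.ljust(width, "0"), rf.ljust(width, "0")
    return (lf > rf) - (lf < rf)

print(bigfloat_cmp("1237123231.22458", "1237123231.22458"))  # 0
print(bigfloat_cmp("1237123231.3", "1237123231.22458"))      # 1
```

This never loses precision, whatever the platform's float width, which is exactly what the string/NV discrepancy above calls for.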
* Todo: reset "done" or "covered" and "minmax" after a dirty operation?

2009-03-11 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* $obj->merge($other) needs to learn about equal epochs, which may
happen since dirty_epoch was introduced.

* Wontfix anytime soon: I think we currently do not support mkdir. Only
2009-01-01 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Todo: continue working on update(...,$dirty_epoch). It must be
followed by a fast_aggregate!

2008-12-26 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* maybe we need a closest_entry or fitting_interval or something like
that. We want to merge an event into the middle of some recentfile.
First we do not know which file, then we do not know where to lock,
where to enter the new item, when and where to correct the dirtymark.

So my thought is we should first find which file.

Another part of my brain answers: what would happen if we entered
the new file into the smallest file just like an ordinary new event,
just as an old event?

(1) we would write a duplicate timestamp? No, this would be easy to

(2) we would make the file large quickly? Yes, but so what? We are
changing the dirtymark, so are willing to disturb the downstream hosts.
2008-11-22 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* 10705 root 17 0 725m 710m 1712 S 0.0 46.8 834:56.05 /home/src/perl/repoperls/installed-perls/perl/pVNtS9N/perl-5.8.0@32642/bin/perl -Ilib /home/k/sources/CPAN/GIT/trunk/bin/testing-rmirror.pl

https://rt.cpan.org/Ticket/Display.html?id=41199

* bzcat uploads.csv.bz2 | perl -F, -nale '$Seen{$F[-1]}++ and print'

Strangest output being HAKANARDO who managed to upload

Here is a better oneliner that also includes the first line of each

  bzcat uploads.csv.bz2 | perl -MYAML::Syck -F, -nale '$F[-1]=~s/\s+\z//; push @{$Seen{$F[-1]}||=[]},$_; END {for my $k (keys %Seen){ delete $Seen{$k} if @{$Seen{$k}}==1; } print YAML::Syck::Dump(\%Seen)}'
2008-10-31 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* memory leak in the syncher? It currently weighs 100M.

  root 10705 1.0 4.9 80192 76596 pts/32 S+ Nov02 24:05 /home/src/perl/repoperls/installed-perls/perl/pVNtS9N/perl-5.8.0@32642/bin/perl -Ilib /home/k/sources/CPAN/GIT/trunk/bin/testing-rmirror.pl
2008-10-29 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* lookup by epoch and by path and use this ability on the pause to never
again register a file twice that doesn't need it. Let's call it

* after the dirtymark is done: fill up recentfiles with fake (historic)
entries; fill up with individual corrections; algorithm maybe to be done
with bigfloat so that we can always place something in the middle
between two entries. Before we must switch to bigfloat we could try to
use Data::Float::nextup to get the.

* Inotify2 on an arbitrary tree and then play with that instead of PAUSE

* dirtymark now lives in Recentfile, needs to be used in rmirror.

* find out why the downloader died after a couple of hours without a net
connection. Write a test that survives the non-existence of the other
2008-10-15 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* reconsider the HTTP epoch only. Not the whole thing over HTTP because
it makes less sense with tight coupling for secondary files. But asking
the server what the current epoch is might be cheaper on HTTP than on
rsync. (Needs to be evaluated)

* remove the 0.00 from the verbose overview in the Merged column in the

* write tests that expose the problems of the last few days: cascading
client/server roles, tight coupling for secondary RFs, deletes after

* Some day we might want to have policy options for the slave:
tight/loose/no coupling with upstream for secondary RFs. tight is what
we have now. loose would wait until a gap occurs that can be closed.
2008-10-14 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* revisit all $rfs->[$i+1] places to check whether they still make sense

2008-10-11 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* another bug is the fact that the mirror command deletes files before
it unhides the index file, thus confusing downstream slaves. We must not
delete before unhiding; deletes must happen after unhiding. FIXED.
* new complication about the slave that is playing a server role.
Currently we mirror from newest to oldest with a hidden temporary file
as index. And when one file is finished, we unhide the index file.
Imagine the cascading server/slave is dead for a day. It then starts
mirroring again with the freshest thing and unhides the freshest index
file when it has worked through it. In that moment it exposes a time
hole. Because it now works on the second recentfile which is still

We currently do nothing special to converge after such a drop out. At
least not intentionally, robustly, and thought through.

The algorithm we use to seed the next file needs quite a lot more
robustness than it currently has. Something to do with looking at the
merged element of the next rf and when it has dropped off, we seed
immediately. And if it remains dropped off, we seed again, of course.
Nope, looking from smaller to larger RFs we look at the merged element
of this RF and at the minmax/max element of the next RF. If
$rf[next]->{minmax}{max} >= $rf[this]->{merged}{epoch}, then we can stop

And we need a public accessor seed and unseed or seeded. But not the mix
of public and private stuff that is then used behind our back.

And then the secondary* stuff must go.

And we must understand what the impact is on the DONE system. Can it go
unnoticed that there was a hole? And could the DONE system have decided
the hole is covered? This should be testable with three directories where
the middle one stops working for a while. Done->merge is suspicious; we must
stop it from merging non-conflatable neighbors due to broken continuity.
2008-10-10 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Slaven suggests having the current epoch or the whole current
recentfile available from the HTTP server and taking it away with
keepalive. This direction takes the granularity down to subseconds.

We might want to rewrite everything to factor out transport and allow
the whole thing to run via HTTP.
2008-10-09 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* are we sure we do NOT LEAVE DOT FILES around? Especially on the

* smoker on k81 fetching from k75 to verify cascading works. See
2008-07-17 in upgradexxx and rsync-over-recentfile-3.pl.

* maybe the loop should wait for the CHECKSUMS file after every upload. And
CPAN.pm needs to deal with timestamps in the future.

* do not forget the dirtymark!
Text: have a new flag on recentfiles with the meaning: if this
changes, you're required to run a full rsync over all the files. The
reason why we set it would probably be: something foul happened. We injected
files in arbitrary places or didn't inject them although they changed.
The content of the flag? Timestamp? The relation between the
recentfiles would have to be inheritance from the principal, because any
out of band changes would sooner or later propagate to the next recentfile.

By upping the flag often one can easily ruin the slaves.

last out of band change? dirtymark?

Anyway, this implies that we read a potentially existing recentfile

And it implies that we have an eventloop that keeps us busy in 2-3
cycles, one for current stuff (tight loop) and one for the recentfiles
(cascade when principal has changed), one for the old stuff after a

And it implies that the out-of-band change in any of the recentfiles
must have a lock on the principal file and there is the place to set the
* start a FAQ, especially quick start guide questions. Also to aid those
problematic areas where we have no good solution, like the "links"

* wish feedback when we are slow.

* Remove a few DEBUG statements.

* The multiple-rrr way of doing things needs a new option to rmirror,
like piecemeal or so. Not urgent because after the first pass through,
things run smoothly. It's only ugly during the first pass.

* I have the suspicion that the code is broken that decides if the
neighboring RF needs to be seeded. I fear when too much time has passed
between two calls (in our case more than one hour), it would not seed
the neighbor. Of course this will never be noticed, so we need a good
* local/localroot confusion: I currently pass both options but one must

* accounts for early birds on PAUSE rsync daemon.

* hardcoded 20 seconds

* who mirrors the index? DOING now.

* which CPAN mirrors offer rsync?

* visit all XXX, visit all _float places

* rename the pathdb stuff, it's too confusing. No idea how.

* rrr-inotify, backpan, rrr-register
2008-10-08 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* current bugs: the pathdb seems to get no reset, the seeding of the
secondaryttl stuff seems not to have an effect. Have helped myself with
a rand(10), need to fix this back. So not checked in. Does the rand

The rand thing helps. The secondaryttl stuff was in the wrong line,

The pathdb stuff was because I called either _pathdb or __pathdb on the
wrong object. FIXED now.

* It's not so beautiful if we never fetch the recentfiles that are not
the principal, even if this is correct behaviour. We really do not need
them after we have fetched the whole content.

OK, we want a switch for that: secondaryttl DONE
2008-10-07 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* bug: rrr-news --max does not count correctly. With "35" it shows me 35
lines, but with 36 it shows 110: first it repeats the 35, giving 70, and
then lets 40 more follow. FIXED

* See that the long running process really only updates the principal
file unless it has missed a timespan during which something happened. If
nothing happened, it must notice even when it misses the timespan. DONE

* we must throw away the pathdb when we have reached the end of Z. From
that moment we can have a very small pathdb because the only reason for
a pathdb is that we know to ignore old records in old files. We won't
need this pathdb again before the next full pass over the data is
necessary and then we will rebuild it as we go along. DONE
2008-10-06 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* I think Done::register_one is doing wrong in that it does not
conflate neighboring pieces. The covered() method cannot do this because
it has no recent_events array at hand. But register_one has it and could
do it and for some reason fails to do it (sometimes).

This means that the three tests I just wrote can probably not survive
because they test with an already broken Done structure.

The art now is to detect how it happens, then to reproduce, then write a

So from the logfile this is what happens: we have a good interval with
the newest file being F1 at T1. Now remotely F1 gets a change and F2 goes on
top of it. Locally we now mirror F2 and open a new done interval for it.
Then we mirror F1 but this time with the timestamp T1b. And when we then
try to close the gap, we do not find T1 but instead something older. We
should gladly accept this older piece and that would fix this bug.
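The conflation that register_one should guarantee can be modeled like this (an illustrative sketch of the invariant, not the module's code):

```python
def conflate(intervals):
    """Merge touching or overlapping [lo, hi] covered intervals into
    the smallest equivalent set -- the property Done::register_one
    should maintain (illustrative model, not the module's code)."""
    merged = []
    for lo, hi in sorted(intervals):
        if merged and lo <= merged[-1][1]:   # touches or overlaps the previous piece
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    return merged

# Neighboring pieces collapse into one covered span:
print(conflate([[10, 20], [20, 30], [40, 50]]))  # [[10, 30], [40, 50]]
```

The identical intervals piling up in the debugging state file are exactly what this invariant, if enforced, would collapse.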
* bug to fix: when the 1h file changes while rmirror is running, we do
correctly sync the new files but never switch to the 6h file but rather
stay in a rather quick loop that fetches the 1h file again and again.

Is it possible that we initialize a new object? Or does
get_remote_recentfile_as_tempfile overwrite something in myself?

Want a new option: _runstatusfile => $file which frequently dumps the
state of all recentfiles to a file.
2008-10-04 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Todo: now teach update to verify the timestamp it is about to write
against the previous one and use _increase_a_bit if it doesn't comply with
strict monotony. DONE

* The problem of rounding. So far perl's default precision was
sufficient. One day it won't be. FakeFloat has an easy job when it is
only reading and other machines have written correctly. But when we want
to write a floating point number that is a bit larger than the other
one, then we need our own idea of precision.

Slaven said: just append a "1". This might be going towards the end of
usability too quickly. I'd like something that actually uses the decimal
system. Well, appending a 1 also does this but...
E.g. we have 1.0. nextup on this architecture starts with
1.0000000000000004. So there is a gap to fill: 1,2,3. Now I have
taken the 1.0000000000000003 and the next user comes and the time tells
him 1.0 again. He has to beat my number without stepping over the
nextup. This is much less space than I had when I chose 1,2,3.

What is also irritating is that nextup is architecture dependent. The
128 bit guy must choose very long numbers to fit in between whereas the
other one with 16 bit uses larger steps. But then the algorithm is the
same for both, so that would be a nice thing.

I see two situations where we need this. One is when Time::HiRes returns
us a value that is <= the last entry in our recentfile. In this case
(let's call it the end-case) we must fill the region between that number
and the next higher native floating point number. The other is when we
inject an old file into an old recentfile (we would then also set a new
dirtymark). We find the integer value already taken and need a slightly
different one (let's call it the middle-case). The difference between
the two situations is that the next user will want to find something
higher than my number in the end-case and something lower than my number
So I suggest we give the function both a value and an upper bound and it
calculates us a primitive middle. The upper bound in the middle-case is
the next integer. The upper bound in the end-case is the nextup floating
point number. But the latter poses another problem: if we have occupied
the middle m between x and nextup(x), then the nextup(m) will probably
not be the same as nextup(x) because some rounding will take place
before the nextup is calculated and when the rounding reaches the
nextup(x), we will end up at nextup(nextup(x)).

So we really need to consider the nextup and the nextdown from there and
then the middle and that's the number we may approach asymptotically.
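The two cases can be sketched like this; Python's math.nextafter (3.9+) stands in for Data::Float::nextup, and the code is illustrative only, not the module's:

```python
import math

def primitive_middle(value, upper):
    """Return a native float strictly between value and upper, or None
    when rounding collapses the midpoint onto a bound (a sketch of the
    'primitive middle' idea above; not the module's code)."""
    mid = (value + upper) / 2
    return mid if value < mid < upper else None

# middle-case: the upper bound is the next integer, plenty of room
m = primitive_middle(1237400817.94363, 1237400818.0)
print(1237400817.94363 < m < 1237400818.0)  # True

# end-case: the upper bound is nextup(value); between two adjacent
# native floats no midpoint exists, which is why a bigfloat/decimal
# representation becomes necessary here
x = 1.0
print(primitive_middle(x, math.nextafter(x, math.inf)))  # None
```

The None in the end-case demonstrates the limit discussed above: native floats run out of room exactly where the fill-in is needed.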
2008-10-03 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* consider deprecating the use of RECENT.recent as a symlink. It turns
out to need extra hoops with the rsync options and just isn't worth it.
Or maybe these extra hoops are needed anyway for the rest of the tree?
Nope, can't be the case because not all filesystems support symlinks.

But before doing the large step, I'll deprecate the call of
get_remote_recentfile_as_tempfile with an argument. Remember this was
only introduced to resolve RECENT.recent and complicates the routine far
beyond what it deserves.

DONE. Won't deprecate RECENT.recent, just moved its handling to the
2008-10-02 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* I think it's a bug that the rsync_option links must be set to true in
order to support RECENT.recent and that nobody cares to set it
automatically. Similar for ignore_link_stat_errors.
2008-09-27 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Todo: gather all todos and make a plan what is missing for a

- verifytree or something like that. fsck maybe.

- rersyncrecent, the script itself? What does it do?

- a way to only mirror the recentfiles without mirroring the whole
remote system such that people can decide to mirror only partially; see
also 2008-08-30. .shadow-xxx directory? This is also needed for a
filesystem that is still incomplete and might need the mirrorfiles for

- long living objects that mirror again and again. Inject something
into ta, see how it goes over to tb.

- how do we continue filling up the DONE system when we use an object
for the second time? "fully covered" and "uptodate" or new terminology.

- overview called on the wrong file should be understandable
- the meta data field that must change when we fake something up so that
the downstream people know they have to re-fetch everything.

- how tolerant are we against missing files upstream? how do we keep
track? there are legitimate cases where we read the upstream index right
before a file got deleted there and then find that file as new and want
it. There are other cases that are not self healing and must be tracked

- how, exactly, do we have to deal with deletes? With rsync errors?

  rsync: link_stat "/id/K/KA/KARMAN/Rose-HTMLx-Form-Related-0.07.meta" (in
  authors) failed: No such file or directory (2)
The file above is a delete in the 1h file and a new in the 1M file, and the
delete in the locally running rmirror did not get propagated to the 1M
object. Bug. And the consequence is a standstill.

It seems that a slave that works with a file below the principal needs
to merge things all the way up to get rid of later deletes. Or keep
track of all deletes and skip them later. So we need a trackdeletes.pm
similar to the done.pm?

see also 2008-08-20 about spurious deletes that really have no add
counterpart and yet are not wrong.
- consider the effect when resyncing the recentfile takes longer than
the time per loop. Then we never rsync any file. We need to diagnose
that and force an increase of that loop time. But when we later are fast
enough again because the net has recovered, then we need to switch back
to the original parameters. Erm, no, it's enough to keep syncing at least
one file before refetching an index file.

- remember to verify that no temp files are left lying around and the

- status file for not long running jobs that want to track upstream with

- revisit all XXX _float areas and study Sub::Exporter DONE

- persistent DB even though we just said we do not need it. Just for
extended capabilities and time savings when, for example, upstream
announces a reset and we get new recentfiles and could then limit
ourselves to a subset of files (those that have a changed epoch) in a
first pass and would only then do the loop to verify the rest. Or
* Todo: aggregate files should know their feed and finding the principal
should be done stepwise. (?)

* Todo: DESTROY thing that unlocks. Today when I left the debugger I
left locks around. DONE
2008-09-26 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* maybe extend the _overview so that it always says if and where the
last file is in the next file and where the next event in the next rf
would lie. No, don't like this anymore. REJECT

* take the two new redundant tests out again, only the third must

* Todo: add a sanity check if the merged structure is really pointing to
a different rf and that this different rf is larger. DONE
2008-09-25 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* now test if they are overlapping. And test if there is a file in the
next rf that would fit into this rf's interval.

  1h 1222324012.8474 1222322541.7963   0.4086
  6h 1222320411.2760 1222304207.6931   4.5010 missing overlap/gap!
  1d 1222320411.2760 1222238750.5071  22.6835 large overlap
  1W 1222313218.3626 1221708477.5829 167.9835

I suspect that somebody writes a merged timestamp without having merged
and then somebody else relies on it.

If aggregate is running, the intervals must not be extravagated; if it
is not running, there must not be bounds. The total number of events in
the system must be counted and must be controlled throughout the tests.
That the test required the additional update was probably nonsense,
because aggregate can cut pieces too. FIXED & DONE
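The overlap check behind the table above can be sketched like this (illustrative only; the columns are assumed to be max epoch, min epoch, and span in hours):

```python
def check_chain(intervals):
    """Given per-recentfile (max_epoch, min_epoch) pairs ordered from
    the smallest file (1h) to the largest, report for each boundary
    whether the next file overlaps the previous one or leaves a gap.
    Illustrative sketch; column meanings assumed from the table above."""
    reports = []
    for (hi_max, hi_min), (lo_max, lo_min) in zip(intervals, intervals[1:]):
        # the larger file must reach at least up to where the smaller one begins
        reports.append("overlap" if lo_max >= hi_min else "gap")
    return reports

table = [
    (1222324012.8474, 1222322541.7963),  # 1h
    (1222320411.2760, 1222304207.6931),  # 6h
    (1222320411.2760, 1222238750.5071),  # 1d
    (1222313218.3626, 1221708477.5829),  # 1W
]
print(check_chain(table))  # ['gap', 'overlap', 'overlap']
```

Applied to the numbers in the table, only the 1h/6h boundary fails, matching the "missing overlap/gap!" annotation.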
2008-09-23 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* rrr-aggregate seems to rewrite the RECENT file even if nothing has

2008-09-21 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Most apparent bug at the moment is that the recentfiles are fetched
too often. Only the principal should be fetched and if it has not
changed, the others should not be refetched. ATM I must admit that I'm
happy that we refetch more often than needed because I can more easily
fix bugs while the thing is running.

* Let's say 1220474966.19501 is a timestamp of a file that is already
done but the done system does not know about it. The reason for the
failure is not known and we never reach the status uptodate because of
this. We must get over it.

Later it turned out that the origin server had a bug somewhere.
1220474966.19042 came after 1220474966.19501. Or better: it was in the
array of the recentfile one position above. The bug was my own.
741 2008-09-20 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>
743 * There is the race condition where the server does a delete and the
744 slave does not yet know and then tries to download it because he sees
745 the new. So for this time window we must be more tolerant against
746 failure. If we cannot download a file, we should just skip it and should
747 not retry immediately. The whole system should discover the lost thing
748 later. Keeping track with the DONE system should really be a no brainer.
But there is something more: the whole filesystem is a database and the
recentfiles are one possible representation of it. It's a pretty useful
representation, I think; that's why I have implemented something around
it. But for strictly local operation it has little value. For local
operation we would much rather have a database. So we would record every
recentfile read and every rsync operation and, for every file, the last
state change and what it leads to. Then we would always ignore older
records without the efforts involved with recentfiles.
The database would have: path,recentepoch,rsyncedon,deletedon

Oh well, not yet clear where this leads.
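Sketched as SQLite (the column names are taken from the note above; everything else, including the helper, is an assumption):

```python
import sqlite3

# Hypothetical local state DB: for every file we keep the epoch from the
# recentfile, when we rsynced it, and when we saw its delete. Newer
# records simply supersede older ones per path.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE filestate (
        path        TEXT PRIMARY KEY,
        recentepoch TEXT NOT NULL,   -- epoch string, kept verbatim
        rsyncedon   REAL,            -- when we fetched it, NULL if never
        deletedon   REAL             -- when we saw the delete, NULL if alive
    )
""")

def record(path, recentepoch, rsyncedon=None, deletedon=None):
    """Insert or supersede the state for one path."""
    conn.execute(
        "INSERT OR REPLACE INTO filestate VALUES (?, ?, ?, ?)",
        (path, recentepoch, rsyncedon, deletedon),
    )
```

A later record for the same path replaces the earlier one, which is exactly the "always ignore older records" behaviour described above.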
2008-09-19 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Bug: the bigloop ran into a funny endless loop after EWILHELM uploaded
Module-Build. It *only* rsynced the "1h" recentfile from that moment on.
* statusfile, maybe only on demand, if only to have a sharp debugging
tool. It is locked and all recentfiles dump themselves into it and we
can build a viewer that lets us know where we stand and what's inside.
* remember: only the principal recentfile needs expiration, all others
shall be expired by the principal if it discovers that something has
moved.
2008-09-18 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Always check if we stringify to a higher value than in the entry

* And in covered make an additional check whether we would be able to see
a numerical difference between the two numbers, and if we can't, switch
to a different, more expensive algorithm. We do not want to be caught by
floating-point surprises. DONE
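The idea behind the more expensive algorithm (what _bigfloatcmp in File::Rsync::Mirror::Recentfile::FakeBigFloat does) can be sketched in Python; the function below is my own illustration, not the module's code:

```python
def bigfloatcmp(l, r):
    """Compare two non-negative decimal epoch strings without float math.

    Pad the fractional parts to equal length and compare digit strings,
    so timestamps that a native double would collapse into one value
    stay distinguishable.
    """
    li, _, lf = l.partition(".")
    ri, _, rf = r.partition(".")
    if int(li) != int(ri):
        return -1 if int(li) < int(ri) else 1
    width = max(len(lf), len(rf))
    lf, rf = lf.ljust(width, "0"), rf.ljust(width, "0")
    return (lf > rf) - (lf < rf)  # -1, 0, or 1, like Perl's cmp
```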
2008-09-17 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* caching has several aspects here: we can cache the interval of the
recentfile, which only will change when the mtime of the file changes. We
must re-mirror the recentfile when its ttl has expired. Does have_read
tell you anything? It counts for nothing at all. Only the mtime is
interesting. The ntuple mtime, low-epoch, high-epoch. And as a separate
thing the have_mirrored, because it is unrelated to the mtime.
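The mtime-keyed part of that ntuple could look like this (Python sketch; class and method names are hypothetical, not the distribution's accessors):

```python
import os

class IntervalCache:
    """Cache (low_epoch, high_epoch) per recentfile, keyed on mtime.

    The cached interval is invalidated as soon as the file's mtime
    changes; have_mirrored would live elsewhere since it is unrelated.
    """
    def __init__(self):
        self._cache = {}  # path -> (mtime, low, high)

    def get(self, path, compute):
        """Return (low, high) for path, recomputing when mtime changed."""
        mtime = os.stat(path).st_mtime
        hit = self._cache.get(path)
        if hit is None or hit[0] != mtime:
            low, high = compute(path)  # expensive: parse the recentfile
            self._cache[path] = (mtime, low, high)
        return self._cache[path][1:]
```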
* Robustness of floating point calculations! I always thought that the
string calculated by the origin server for the floating representation
of the epoch time is just a string. When we convert it to a number and
later back to a string, the other computer might come to a different
conclusion. This must not happen; we want to preserve it under any
circumstances. I will have to write tests with overlong sequences that
get lost in arithmetic and must see if all still works well. DONE

But one fragile point remains: if one host considers a>b and the other
one considers them == but not eq. To prevent this, we must probably do
some extra homework. DONE
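The failure mode in miniature (Python, but native doubles behave the same in Perl; the epoch string is a made-up example):

```python
# An overlong epoch string does not survive a round trip through native
# floating point, so two hosts that stringify independently can
# disagree. Keeping the original string is the only safe representation.
original = "1217622571.088900000000000031"
roundtrip = repr(float(original))
assert roundtrip != original  # precision was silently lost
```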
2008-09-16 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* the concept of tracking DONE needs an object per recentfile that has
something like these methods:

do_we_have(xxx), we_have(xxx), do_we_have_all(xxx,yyy), reset()

covered() register() covered()

The unclear thing is how we translate points in time into intervals. We
could pass a reference to the current recent_events array when running
we_have(xxx) and let the DONE object iterate over it such that it only
has to store a list of intervals that can melt into each other. Ah, even
passing the list together with a list of indexes seems feasible.

Or maybe ask for the inverted list?

Whenever the complete array is covered by the interval we say we are
fully covered and if the recentfile is not expired, we are uptodate.
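The interval bookkeeping sketched above, with melting, might look like this (Python; floats for brevity where the real thing would compare epoch strings, and the names are illustrative, not the module's API):

```python
class DoneTracker:
    """Sketch of the DONE tracker: a list of merged [low, high] intervals."""

    def __init__(self):
        self.intervals = []  # disjoint, sorted [low, high] pairs

    def register(self, low, high):
        """Add an interval, melting it into any overlapping neighbours."""
        merged = [low, high]
        keep = []
        for lo, hi in self.intervals:
            if hi < merged[0] or lo > merged[1]:
                keep.append([lo, hi])  # disjoint: keep as-is
            else:                      # overlap or touch: melt together
                merged = [min(lo, merged[0]), max(hi, merged[1])]
        keep.append(merged)
        self.intervals = sorted(keep)

    def covered(self, low, high):
        """Is the whole timespan [low, high] inside one merged interval?"""
        return any(lo <= low and high <= hi for lo, hi in self.intervals)
```

When covered() holds for the complete recent_events span and the recentfile has not expired, that is the "fully covered"/uptodate condition above.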
2008-09-07 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

2008-09-05 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>
* need a way to "return" the next entry after the end of a list. When
the caller says "before" or "after" we would like to know whether it could
cover that interval/threshold or not, because this influences the effect
of a newer timestamp of that recentfile. DONE with $opt{info}.
2008-09-04 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* one of the next things to tackle: the equivalent of csync2 -TIXU.

loop implies tixu (?). Nope, something like --statefile decides. Per

T test, I init, X including removals, U nodirtymark

So we have no concept of dirtymarks, we only trust that since we are
running we have observed everything steadily. But people will not let
this program run forever, so we must consider both startup penalty and
bookkeeping for later runs. We keep this for later. For now we write a
long-running mirror that merges several intervals.
2008-09-02 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* need to speed up the 02 test; it's not clever to sleep so much. Reduce

* rersyncrecent, the script: default to one week. The name of the switch
is --after. Other switches? --loop!
2008-08-30 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* need a switch --skip-deletes (?)

* need a switch --enduser that tells us that the whole tempfile
discipline is not needed when there is no downstream user. (?)

Without this switch we cannot have a reasonable recent.pl that just
displays the recent additions. Either we accept downloading everything.
Or we download temporary files without the typical rsync protocol

Or maybe the switch is --tmpdir? If --tmpdir would mean: do not use
File::Temp::tempdir, this might be a win.
2008-08-29 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* apropos missing: we have no push, we never know the downstream
servers. People who know their downstream hosts and want to ascertain
something will want additional methods we have never thought about, like
update or delete a certain file.
2008-08-26 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* tempted to refactor rmirror into resolve_symlink, localize, etc.
Curious if rsync_options=links equal 0 vs. 1 will make the expected

* rsync options: it's a bit of a pain that we usually need several rsync
options, like compress, links, times, checksum, and that there is no
reasonable default except the original rsync default. I think we can
safely assume that the rsync options are shared between all recentfile
instances within one recent tree.
2008-08-20 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* deletes: if a delete follows an add quickly enough, it may happen that
a downstream mirror did not see the add at all! It seems this needs to
be mentioned somewhere. The point here is that even if the downstream
never misses the principal timeframe, it may encounter a "delete" that
has no complementary "add" anywhere.
2008-08-19 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* I suspect the treatment of metadata is incorrect during read or something.
The bug that I am watching is that between 06:08 and 06:09 the 6h file
contained more than 6 hours worth of data. At 06:08 we merged into the
1d file. We need to take snapshots of the 6h file over the course of an
hour or maybe only between XX:08 and XX:09? Nope, the latter is not

Much worse: watching the 1h file: right at the moment (at 06:35) it
covers 1218867584-1219120397, which is 70 hours.

Something is terribly broken. BTW, 1218867584 corresponds to Sat Aug 16
08:19:44 2008, that is when I checked out last time, so it seems to be
aggregating and never truncating?

No, correct is: it is never truncating; but wrong is: it is aggregating.
It does receive a lot of events from time to time from a larger file.
Somehow a large file gets merged into the small one and because the
"meta/merged" attribute is missing, nobody is paying attention. I
believe that I can fix this by making sure that metadata are honoured
during read. DONE and test adjusted.
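What the missing truncation amounts to can be sketched (Python, illustrative; function name and tuple shape are mine): a "1h" file read back from disk must never cover more than its interval below its newest event, no matter what was merged into it.

```python
def truncate_events(events, interval_secs):
    """Keep only events within interval_secs of the newest event.

    Events are (epoch, path) tuples, newest first. Floats suffice for
    this sketch; the real comparison would use epoch strings.
    """
    if not events:
        return []
    cutoff = float(events[0][0]) - interval_secs
    return [(e, p) for e, p in events if float(e) >= cutoff]
```

With the two epochs from the entry above (1218867584 and 1219120397, about 70 hours apart) and a 3600-second interval, the old event is dropped.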
2008-08-17 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* grand renaming plan

remotebase => remoteroot to fit well with localroot DONE
local_path() => localroot seems to me should already work DONE
recentfile_basename => rfilename no need to stress it has no slash DONE

filenameroot??? Doesn't seem too bad to me today. Maybe something like
kern? It would anyway need a deprecation cycle because it is an
important constructor.
* I like the portability that Data::Serializer brings us but the price
is that some day we might find out that it is slowing us a bit. We'll
see.
2008-08-16 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* should we not enter the interval of the principal (or the interval of
the merging file?) in every aggregated/merged file?

* we should aim at a first release and give up on thinking about
sanitizing stuff and zloop. Let's just admit that a full traditional
rsync is the only available sanitizer ATM. Otherwise it's complicated
stuff: sanitizing on the origin server, sanitizing on the slaves,
sanitizing forgotten files, broken timestamps, etc. Let's delay it and
get the basics out before this becomes a major source of mess.
2008-08-13 Andreas Koenig <k@andreas-koenigs-computer.local>

* On OSes not supporting symlinks we expect that RECENT.recent contains
the contents of the principal recentfile. Actually this is identical on
systems supporting symlinks. Simple. What follows from that is that we
need to keep the serializer in the metadata because we cannot read it
from the filename, can we? Of course not. It's a chicken and egg
problem. This leaves us with the problem of actually parsing the
serialized data to find out which format it is in. So who can do the 4
or 5 magics we wanted to support? File::LibMagic?
2008-08-09 Andreas Koenig <k@andreas-koenigs-computer.local>

* remotebase and recentfile_basename are ugly names. Now that we need a
word for the shortest/principal/driving recentfile too we should do

localroot is good. rfile is good. local_path() is bad, local_path($path)
is medium, filenameroot() is bad, remotebase is bad, recentfile is

Up to now remotebase was the string that described the remote root
directory in rsync notation, like pause.perl.org::authors. And
recentfile_basename was "RECENT-1h.yaml".
2008-08-08 Andreas Koenig <k@andreas-koenigs-computer.local>

* The test that was added in today's checkin is a good start for a test
of rmirror. We should have more methods in Recent.pm: verify,
addmissingfiles. We should verify the current tree, then rmirror it and
then verifytree the copy. We could then add some arbitrary file and let
it be discovered by addmissingfiles, then rmirror again and then
verifytree the copy again.

Then we could start stealing from the csync2 sqlite database [no port to
OSX!] and fill a local DB. And methods to compare the database with the
recentfiles. Our strength is that in principle we could maintain state
with a single float. We have synced up to 1234567890.123456. If the Z
file does not add new files, all we have to do is mirror the new ones and

This makes it clear that we should extend the current protocol and declare
that we cheat when we add files too late, just to help the other end
keep track. Ah yes, that's what was meant when zloop was mentioned

Maybe I need to revisit File::Mirror to help me with this task.
2008-08-07 Andreas Koenig <k@andreas-koenigs-computer.local>

* There must be an allow-me-to-truncate flag in every recentfile.
Without it one could construct a sequence of updates winning the locking
battle against the aggregator. Only if an aggregator has managed to
merge data over to the next level can truncating be allowed. DONE with
2008-08-06 Andreas Koenig <k@andreas-koenigs-computer.local>

* We should probably guarantee that no duplicates enter the aggregator
2008-08-02 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Getting the merge operation faster would need a good benchmark test.
What 02 spits out isn't reliable enough and is dominated by many other

commit 10176bf6b79865d4fe9f46e3857a3b8669fa7961
Author: Andreas J. Koenig <k@k75.(none)>
Date: Sat Aug 2 07:58:04 2008 +0200

commit 3243120a0c120aaddcd9b1f4db6689ff12ed2523
Author: Andreas J. Koenig <k@k75.(none)>
Date: Sat Aug 2 11:40:29 2008 +0200

there was a lot of trying but the effect is hardly measurable with

* overhead of connecting seems high. When setting
max_files_per_connection to 1 we see that.
2008-08-01 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* 1217622571.0889 - 1217597432.86734 = 25138.2215600014

25138.2215600014/3600 = 6.98283932222261

It is immediately obvious that this is ~ 7 hours, not ~ 6, so there
seems to be a bug in the aggregator. FIXED
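The arithmetic above checks out (a quick Python confirmation of the same double-precision computation):

```python
# The 6h recentfile spanned almost exactly 7 hours, which is how the
# aggregator bug showed up.
span = 1217622571.0889 - 1217597432.86734
hours = span / 3600
assert abs(span - 25138.2215600014) < 1e-6
assert abs(hours - 6.98283932222261) < 1e-9
```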
2008-07-27 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* e.g. id/Y/YE/YEWENBIN/Emacs-PDE-0.2.16.tar.gz: Do we have it, should
we have it, can we mirror it? Mirror it!

I fear this needs a new class which might be called
File::Rsync::Mirror::Recent. It would collect all recentfiles of a kind
and treat them as an entity. I realize that a single recentfile may be
sufficient for certain tasks and that it is handy for the low-level
programmer, but it is not nice to use. If there is a delete in the 1h
file then the 6h file still contains it. Seekers of the best information
need to combine at least some of the recentfiles most of the time.

There is the place for the Z loop!

But the combination is something to collect in a database, isn't it? Did
csync2 just harrumph?
2008-07-26 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* it just occurred to me that hosts in the same mirroring pool could
help each other out even without rewriting the recentfile. Just fetch
the stuff to mirror from several places, bingo. But that's something
that should rather live in a separate package or in rsync directly.

* cronjobs are unsuited because with ntp they would all come at the full
minute and disturb each other. Besides that, I'd hate to have a backbone
with more than a few seconds latency.
2008-07-25 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* a second rsync server with access control for PAUSE. Port? 873 is the
standard port, let's take 8873.

* if there were a filesystem based on this, it would have slow access
to nonexistent files. It would probably provide wrong readdir (only based
on current content) or also a slow one (based on a recentfile written
after the call). But it would provide fast access to existing files. Or
one would deliberately allow slightly blurred answers based on some
sqlite reflection of the recentfiles.

* todo: write a variant of mirror() that combines two or more
recentfiles and treats them like one

* todo: signal handler to remove the tempfile
2008-07-24 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* now that we have the symlink I forgot how it should be used in

* the z loop: add missing files to the Z file. Just append them (instead of
prepending). So one guy prepends something from the Y file from time to
time and another guy appends something rather frequently. Collecting
pond. When Y merges into Z, things get an epoch and the collecting pond
gets smaller. What exactly are "missing files"?

take note of the current epoch of the alpha file, let's call it the
recent-ts

find all files on disk

remove all files registered in the recentworld up to recent-ts

remove all files that have been deleted after recent-ts according to
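Those steps reduce to a set computation, sketched here in Python (the function name is mine; path sets stand in for the filesystem and the recentworld):

```python
def missing_files(on_disk, registered_up_to_ts, deleted_after_ts):
    """Compute the "missing files" for the Z-file collecting pond.

    Start from everything on disk, drop what the recentworld already
    registered up to recent-ts, and drop what it deleted after
    recent-ts; whatever remains was never announced and belongs in Z.
    """
    return set(on_disk) - set(registered_up_to_ts) - set(deleted_after_ts)
```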
2008-07-23 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* rersyncrecent might be a cronjob with a (locked) state file which
contains things like after and maybe last z sync or such?

rrr-mirror might be an alternative name, but how would we justify the
three Rs when there is no Re-Rsync-Recent?

With the --loop parameter it is an endless loop; without it, there is no
loop. At least this is simple.

* todo: new accessor z-interval specifies how often the Z file is updated
against the filesystem. We probably want no epoch stamp on these
entries. And we want to be able to filter the entries (e.g. no
by-modules and by-category tree)
2008-07-20 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Fill the Z file. gc or fsck or both. Somehow we must get the old files
into Z. We do not need the other files filled up with filesystem

* need an interface to query for a file in order to NOT call update on
PAUSE a second time within a short time.
2008-07-19 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* recommended update interval? Makes no sense, is different for

change-log-default-name: "Todo"