2008-10-08 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* current bugs: the pathdb does not seem to get reset, and the seeding of the secondaryttl stuff seems to have no effect. Have helped myself with a rand(10), need to fix this back. So not checked in. Does the rand

The rand thing helps. The secondaryttl stuff was in the wrong line,

The pathdb stuff was because I called either _pathdb or __pathdb on the wrong object. Fixed now.

* do not forget the dirtymark!

* It's not so beautiful if we never fetch the recentfiles that are not the principal, even if this is correct behaviour. We really do not need them after we have fetched the whole content.

OK, we want a switch for that: secondaryttl

* Inotify2 on an arbitrary tree and then play with that instead of PAUSE
2008-10-07 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* bug: rrr-news --max does not count correctly. With "35" it shows me 35 lines, but with 36 it shows 110: first it repeats the 35, giving 70, and then lets 40 more follow. FIXED
* idea: have a new flag on recentfiles with the meaning: if this changes, you're required to run a full rsync over all the files. The reason why we would set it would probably be: some foul play happened; we injected files in arbitrary places or didn't inject them although they changed. The content of the flag? A timestamp? The relation between the recentfiles would have to be inheritance from the principal, because any out-of-band changes would sooner or later propagate to the next recentfile.

By upping the flag often one can easily ruin the slaves.

last out of band change? dirtymark?
Anyway, this implies that we read a potentially existing recentfile

And it implies that we have an eventloop that keeps us busy in two or three cycles: one for current stuff (tight loop), one for the recentfiles (cascade when the principal has changed), and one for the old stuff after a

* See that the long-running process really only updates the principal file unless it has missed a timespan during which something happened. If nothing happened, it must notice that even when it misses the timespan.
* fill up recentfiles with fake (historic) entries; fill up with individual corrections; the algorithm maybe to be done with bigfloat so that we can always place something in the middle between two entries. Before we must switch to bigfloat we could try to use Data::Float::nextup to
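A quick way to see the gap that has to be subdivided is to look at the next representable double above a timestamp. The entry mentions Data::Float::nextup; the sketch below uses core POSIX::nextafter instead (an assumption: it behaves the same and requires Perl >= 5.22).

```perl
use strict;
use warnings;
use POSIX qw(nextafter);

# Next representable double above $x, like Data::Float::nextup.
sub nextup { my ($x) = @_; return nextafter($x, 9**9**9) }  # 9**9**9 is +Inf

my $t  = 1.0;
my $up = nextup($t);   # smallest double > 1.0
# The interval ($t, $up) is the space a bigfloat (or string-based) scheme
# would have to subdivide once native doubles run out of room.
printf "t=%.17g nextup=%.17g gap=%.17g\n", $t, $up, $up - $t;
```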
* we must throw away the pathdb when we have reached the end of Z. From that moment on we can have a very small pathdb, because the only reason for a pathdb is that we know to ignore old records in old files. We won't need this pathdb again before the next full pass over the data is necessary, and then we will rebuild it as we go along.

* lookup by epoch and by path, and use this ability on PAUSE to never again register a file twice when it doesn't need it.
2008-10-06 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* I think Done::register_one is doing something wrong in that it does not conflate neighboring pieces. The covered() method cannot do this because it has no recent_events array at hand. But register_one has it, could do it, and for some reason fails to do it (sometimes).

This means that the three tests I just wrote can probably not survive, because they test with an already broken Done structure.

The art now is to detect how it happens, then to reproduce, then write a

So from the logfile this is what happens: we have a good interval with the newest file being F1 at T1. Now remotely F1 gets a change and F2 goes on top of it. Locally we now mirror F2 and open a new done interval for it. Then we mirror F1, but this time with the timestamp T1b. And when we then try to close the gap, we do not find T1 but instead something older. We should gladly accept this older piece, and this would fix this bug.
* bug to fix: when the 1h file changes while rmirror is running, we do correctly sync the new files but never switch to the 6h file; rather we stay in a rather quick loop that fetches the 1h file again and again.

Is it possible that we initialize a new object? Or does get_remote_recentfile_as_tempfile overwrite something in myself?

Want a new option: _runstatusfile => $file, which frequently dumps the state of all recentfiles to a file.
2008-10-04 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Todo: now teach update to verify the timestamp it is about to write against the previous one, and use _increase_a_bit if it doesn't comply with strict monotonicity. DONE

* The problem of rounding. So far perl's default precision was sufficient. One day it won't be. FakeFloat has an easy job when it is only reading and other machines have written correctly. But when we want to write a floating point number that is a bit larger than the other one, then we need our own idea of precision.

Slaven said: just append a "1". This might be heading towards the end of usability too quickly. I'd like something that actually uses the decimal system. Well, appending a 1 also does this, but...
E.g. we have 1.0. nextup on this architecture starts with 1.0000000000000004. So there is a gap to fill: 1, 2, 3. Now say I have taken the 1.0000000000000003 and the next user comes and the time tells him 1.0 again. He has to beat my number without stepping over the nextup. This is much less space than I had when I chose among 1, 2, 3.

What is also irritating is that nextup is architecture dependent: the 128-bit guy must choose very long numbers to fit in between, whereas the other one with 16 bits uses larger steps. But then the algorithm is the same for both, so that would be a nice thing.

I see two situations where we need this. One is when Time::HiRes returns us a value that is <= the last entry in our recentfile. In this case (let's call it the end-case) we must fill the region between that number and the next higher native floating point number. The other is when we inject an old file into an old recentfile (we would then also set a new dirtymark). We find the integer value already taken and need a slightly different one (let's call it the middle-case). The difference between the two situations is that the next user will want to find something higher than my number in the end-case and something lower than my number in the middle-case.
So I suggest we give the function both a value and an upper bound, and it calculates us a primitive middle. The upper bound in the middle-case is the next integer. The upper bound in the end-case is the nextup floating point number. But the latter poses another problem: if we have occupied the middle m between x and nextup(x), then nextup(m) will probably not be the same as nextup(x), because some rounding will take place before the nextup is calculated, and when the rounding reaches nextup(x), we will end up at nextup(nextup(x)).

So we really need to consider the nextup and the nextdown from there, and then the middle, and that's the number we may approach asymptotically.
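A minimal sketch of the proposed helper, under stated assumptions: the name primitive_middle is made up here, and core POSIX::nextafter (Perl >= 5.22) stands in for nextup/nextdown. It clamps the bound to nextdown($upper) first, exactly to avoid the nextup(nextup(x)) trap described above, and reports failure when no representable double fits between the two.

```perl
use strict;
use warnings;
use POSIX qw(nextafter);

# Hypothetical sketch: given the taken value $x and an exclusive upper
# bound $upper, return a double strictly between them, or undef when no
# representable double fits (the moment to fall back to bigfloats).
sub primitive_middle {
    my ($x, $upper) = @_;
    # clamp to nextdown($upper) so rounding cannot push the result to
    # $upper or beyond
    my $hi = nextafter($upper, $x);
    return undef if $hi <= $x;
    my $mid = $x / 2 + $hi / 2;   # halve first to avoid overflow
    return undef if $mid <= $x || $mid >= $upper;
    return $mid;
}

my $m = primitive_middle(1.0, 2.0);   # somewhere near 1.5
```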
2008-10-03 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* consider deprecating the use of RECENT.recent as a symlink. It turns out to need extra hoops with the rsync options and just isn't worth it. Or maybe these extra hoops are needed anyway for the rest of the tree? Nope, that can't be the case, because not all filesystems support symlinks.

But before taking that large step, I'll deprecate the call of get_remote_recentfile_as_tempfile with an argument. Remember, this was only introduced to resolve RECENT.recent and complicates the routine far beyond what it deserves.

DONE. Won't deprecate RECENT.recent, just moved its handling to the
2008-10-02 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* I think it's a bug that the rsync_option links must be set to true in order to support RECENT.recent and that nobody cares to set it automatically. Similar for ignore_link_stat_errors.
2008-09-27 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Todo: collect all todos together and make a plan for what is missing for a

- verifytree or something like that. fsck maybe.

- rersyncrecent, the script itself

- a way to only mirror the recentfiles without mirroring the whole remote system, such that people can decide to mirror only partially; see also 2008-08-30. .shadow-xxx directory? This is also needed for a filesystem that is still incomplete and might need the mirrorfiles for

- long-living objects that mirror again and again. Inject something into ta, see how it goes over to tb.

- how do we continue filling up the DONE system when we use an object for the second time? "fully covered" and "uptodate" or new terminology.

- overview called on the wrong file should be understandable

- the metadata field that must change when we fake something up, so that the downstream people know they have to re-fetch everything.

- how tolerant are we against missing files upstream? How do we keep track? There are legitimate cases where we read the upstream index right before a file got deleted there, and then find that file as new and want it. There are other cases that are not self-healing and must be tracked
- how, exactly, do we have to deal with deletes? With rsync errors?

rsync: link_stat "/id/K/KA/KARMAN/Rose-HTMLx-Form-Related-0.07.meta" (in authors) failed: No such file or directory (2)

The file above is a delete in the 1h file and a new in the 1M file, and the delete in the locally running rmirror did not get propagated to the 1M object. Bug. And the consequence is a standstill.

It seems that a slave that works with a file below the principal needs to merge things all the way up to get rid of later deletes. Or keep track of all deletes and skip them later. So we need a trackdeletes.pm similar to the done.pm?

see also 2008-08-20 about spurious deletes that really have no add counterpart and yet are not wrong.
- consider the effect when resyncing the recentfile takes longer than the time per loop. Then we never rsync any file. We need to diagnose that and force an increase of that loop time. But when we later are fast enough again because the net has recovered, then we need to switch back to the original parameters. Erm, no, it's enough to keep syncing at least one file before refetching an index file.

- remember to verify that no temp files are left lying around and the

- status file for not-long-running jobs that want to track upstream with

- revisit all XXX _float areas and study Sub::Exporter DONE

- persistent DB even though we just said we do not need it. Just for extended capabilities and time savings when, for example, upstream announces a reset and we get new recentfiles and could then limit ourselves to a subset of files (those that have a changed epoch) in a first pass, and would only then do the loop to verify the rest. Or
* Todo: aggregate files should know their feed, and finding the principal should be done stepwise. (?)

* Todo: DESTROY thing that unlocks. Today when I left the debugger I left locks around. DONE
2008-09-26 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* maybe extend the _overview so that it always says if and where the last file is in the next file and where the next event in the next rf would lie. No, don't like this anymore. REJECT

* take the two new redundant tests out again, only the third must

* Todo: add a sanity check that the merged structure is really pointing to a different rf and that this different rf is larger. DONE
2008-09-25 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* now test if they are overlapping. And test if there is a file in the next rf that would fit into this rf's interval.

  1h 1222324012.8474 1222322541.7963   0.4086
  6h 1222320411.2760 1222304207.6931   4.5010 missing overlap/gap!
  1d 1222320411.2760 1222238750.5071  22.6835 large overlap
  1W 1222313218.3626 1221708477.5829 167.9835

I suspect that somebody writes a merged timestamp without having merged, and then somebody else relies on it.

If aggregate is running, the intervals must not be overstepped; if it is not running, there must be no bounds. The total number of events in the system must be counted and must be controlled throughout the tests. That the test required the additional update was probably nonsense, because aggregate can cut pieces too. FIXED & DONE
2008-09-23 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* rrr-aggregate seems to rewrite the RECENT file even if nothing has changed
2008-09-21 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* The most apparent bug at the moment is that the recentfiles are fetched too often. Only the principal should be fetched, and if it has not changed, the others should not be refetched. ATM I must admit that I'm happy that we refetch more often than needed, because I can more easily fix bugs while the thing is running.

* Let's say 1220474966.19501 is the timestamp of a file that is already done but the done system does not know about it. The reason for the failure is not known, and we never reach the status uptodate because of this. We must get over it.

Later it turns out that the origin server had a bug somewhere. 1220474966.19042 came after 1220474966.19501. Or better: it was in the array of the recentfile one position above. The bug was my own.
2008-09-20 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* There is the race condition where the server does a delete and the slave does not yet know it and then tries to download the file because it sees the new entry. So for this time window we must be more tolerant of failure. If we cannot download a file, we should just skip it and not retry immediately. The whole system should discover the lost thing later. Keeping track with the DONE system should really be a no-brainer.

But there is something more: the whole filesystem is a database, and the recentfiles are one possible representation of it. It's a pretty useful representation, I think; that's why I have implemented something around it. But for strictly local operation it has little value. For local operation we would much rather have a database. So we would record every recentfile read and every rsync operation, and for every file the last state change and what it leads to. Then we would always ignore older records without the effort involved with recentfiles.

The database would have: path,recentepoch,rsyncedon,deletedon

Oh well, not yet clear where this leads to.
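The four columns above could be sketched as a table; everything beyond the four column names mentioned in the entry (types, the table name, which column is the key) is a guess here.

```sql
-- hypothetical sketch of the local state table described above
CREATE TABLE filestate (
  path        TEXT PRIMARY KEY,  -- file path relative to localroot
  recentepoch TEXT,              -- epoch string as found in the recentfile
  rsyncedon   REAL,              -- when we last rsynced this file
  deletedon   REAL               -- when we saw its delete event; NULL if alive
);
```

Keeping recentepoch as TEXT would sidestep the float round-trip worries discussed elsewhere in this file.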
2008-09-19 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Bug: the bigloop ran into a funny endless loop after EWILHELM uploaded Module-Build. It *only* rsynced the "1h" recentfile from that moment on.

* statusfile, maybe only on demand, if only to have a sharp debugging tool. It is locked, and all recentfiles dump themselves into it, and we can build a viewer that lets us know where we stand and what's inside.

* remember: only the principal recentfile needs expiration; all others shall be expired by the principal if it discovers that something has moved
2008-09-18 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Always check if we stringify to a higher value than in the entry

* And in covered make an additional check whether we would be able to see a numerical difference between the two numbers, and if we can't, then switch to a different, more expensive algorithm. Do not want to be caught by floating point surprises. DONE
2008-09-17 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* caching has several aspects here: we can cache the interval of the recentfile, which will only change when the mtime of the file changes. We must re-mirror the recentfile when its ttl has expired. Does have_read tell you anything? It counts for nothing at all. Only the mtime is interesting. The ntuple mtime, low-epoch, high-epoch. And as a separate thing the have_mirrored, because it is unrelated to the mtime.

* Robustness of floating point calculations! I always thought that the string calculated by the origin server for the floating representation of the epoch time is just a string. When we convert it to a number and later back to a string, the other computer might come to a different conclusion. This must not happen; we want to preserve it under all circumstances. I will have to write tests with overlong sequences that get lost in arithmetic and must see if all still works well. DONE

But one fragile point remains: if one host considers a>b and the other one considers them == but not eq. To prevent this, we must probably do some extra homework. DONE
2008-09-16 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* the concept of tracking DONE needs an object per recentfile that has something like these methods:

do_we_have(xxx), we_have(xxx), do_we_have_all(xxx,yyy), reset()

covered() register() covered()

The unclear thing is how we translate points in time into intervals. We could pass a reference to the current recent_events array when running we_have(xxx) and let the DONE object iterate over it such that it only has to store a list of intervals that can melt into each other. Ah, even passing the list together with a list of indexes seems feasible.

Or maybe ask for the inverted list?

Whenever the complete array is covered by the interval, we say we are fully covered, and if the recentfile is not expired, we are uptodate.
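The interval-melting idea above can be sketched in a few lines. This is not the real DONE implementation; the class name and method names are illustrative, borrowing register/covered from the entry.

```perl
use strict;
use warnings;

# Sketch of a per-recentfile DONE tracker whose covered intervals melt
# into each other whenever they touch or overlap.
package Done::Sketch;
sub new { bless { intervals => [] }, shift }
sub register {                    # note a covered [lo,hi] epoch span
    my ($self, $lo, $hi) = @_;
    my @iv = sort { $a->[0] <=> $b->[0] }
             @{ $self->{intervals} }, [$lo, $hi];
    my @out;
    for my $i (@iv) {
        if (@out && $i->[0] <= $out[-1][1]) {     # melts into previous span
            $out[-1][1] = $i->[1] if $i->[1] > $out[-1][1];
        } else {
            push @out, $i;
        }
    }
    $self->{intervals} = \@out;
}
sub covered {                     # do_we_have(): is epoch $t inside a span?
    my ($self, $t) = @_;
    for my $i (@{ $self->{intervals} }) {
        return 1 if $t >= $i->[0] && $t <= $i->[1];
    }
    return 0;
}
package main;

my $done = Done::Sketch->new;
$done->register(10, 20);
$done->register(20, 30);          # melts with the previous span
```

"Fully covered" would then mean: one single interval spans the whole recent_events array.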
2008-09-07 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

2008-09-05 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* need a way to "return" the next entry after the end of a list. When the caller says "before" or "after", we would like to know if he could cover that interval/threshold or not, because this influences the effect of a newer timestamp of that recentfile. DONE with $opt{info}.
2008-09-04 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* one of the next things to tackle: the equivalent of csync2 -TIXU.

loop implies tixu (?). Nope, something like --statefile decides. Per

T test, I init, X including removals, U nodirtymark

So we have no concept of dirtymarks; we only trust that since we are running we have observed everything steadily. But people will not let this program run forever, so we must consider both the startup penalty and bookkeeping for later runs. We keep this for later. For now we write a long-running mirror that merges several intervals.
2008-09-02 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* need to speed up the 02 test, it's not clever to sleep so much. Reduce

* rersyncrecent, the script: default to one week. The name of the switch is --after. Other switches? --loop!
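The switch handling proposed above could look like this with core Getopt::Long. The defaults (one week for --after, no loop) come from the entry; the subroutine name and the choice of seconds as the unit are assumptions.

```perl
use strict;
use warnings;
use Getopt::Long qw(GetOptionsFromArray);

# Sketch of the rersyncrecent option parsing proposed above:
# --after defaults to one week (in seconds), --loop makes it endless.
sub parse_rrr_options {
    my (@argv) = @_;
    my %opt = ( after => 7 * 86400, loop => 0 );
    GetOptionsFromArray(\@argv, \%opt, "after=i", "loop!")
        or die "usage: rersyncrecent [--after SECONDS] [--loop]\n";
    return \%opt;
}

my $opt = parse_rrr_options("--after", 3600, "--loop");
```

"loop!" also gives us --noloop for free, which fits the "without it there is no loop" simplicity.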
2008-08-30 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* need a switch --skip-deletes (?)

* need a switch --enduser that tells us that the whole tempfile discipline is not needed when there is no downstream user. (?)

Without this switch we cannot have a reasonable recent.pl that just displays the recent additions. Either we accept to download everything. Or we download temporary files without the typical rsync protocol

Or maybe the switch is --tmpdir? If --tmpdir would mean: do not use File::Temp::tempdir, this might be a win.
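A sketch of what the --tmpdir distinction could mean in code, assuming these semantics: without the switch we stage downloads in a self-cleaning File::Temp directory; with it, the caller's directory is used as-is. The helper name staging_dir is made up.

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);
use File::Spec;

# Hypothetical helper: where do tempfiles for downloads go?
sub staging_dir {
    my (%opt) = @_;
    # user-chosen directory, persistent, no File::Temp involved
    return $opt{tmpdir} if defined $opt{tmpdir};
    # default: private directory that is removed at program exit
    return tempdir("rersyncrecent-XXXXXX", TMPDIR => 1, CLEANUP => 1);
}

my $dir  = staging_dir();
my $file = File::Spec->catfile($dir, "RECENT-1h.yaml.tmp");
```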
2008-08-29 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* apropos missing: we have no push, we never know the downstream servers. People who know their downstream hosts and want to ascertain something will want additional methods we have never thought about, like updating or deleting a certain file.

2008-08-26 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* tempted to refactor rmirror into resolve_symlink, localize, etc. Curious if rsync_options=links equal 0 vs. 1 will make the expected

* rsync options: it's a bit of a pain that we usually need several rsync options, like compress, links, times, checksum, and that there is no reasonable default except the original rsync default. I think we can safely assume that the rsync options are shared between all recentfile instances within one recent tree.
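Sharing one option set per tree could be as simple as one hash that every recentfile instance reuses. The option names below are the four the entry lists; that they feed File::Rsync->new directly is an assumption.

```perl
use strict;
use warnings;

# One shared rsync option set per recent tree, as argued above.
my %rsync_options = (
    compress => 1,
    links    => 1,   # needed when RECENT.recent is a symlink
    times    => 1,
    checksum => 1,
);

# Every recentfile instance of the tree would reuse the same hash,
# e.g. (assumption): my $rsync = File::Rsync->new(%rsync_options);
```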
2008-08-20 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* deletes: if a delete follows an add quickly enough, it may happen that a downstream mirror did not see the add at all! It seems this needs to be mentioned somewhere. The point here is that even if the downstream is never missing the principal timeframe, it may encounter a "delete" that has no complementary "add" anywhere.
2008-08-19 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* I suspect the treatment of metadata is incorrect during read or something. The bug that I am watching is that between 06:08 and 06:09 the 6h file contained more than 6 hours worth of data. At 06:08 we merged into the 1d file. We need to take snapshots of the 6h file over the course of an hour, or maybe only between XX:08 and XX:09? Nope, the latter is not

Much worse: watching the 1h file: right at the moment (at 06:35) it covers 1218867584-1219120397, which is 70 hours.

Something is terribly broken. BTW, 1218867584 corresponds to Sat Aug 16 08:19:44 2008, that is when I checked out last time, so it seems to be aggregating and never truncating?

No, correct is: it is never truncating; but wrong is: it is aggregating. It does receive a lot of events from time to time from a larger file. Somehow a large file gets merged into the small one, and because the "meta/merged" attribute is missing, nobody is paying attention. I believe that I can fix this by making sure that metadata are honoured during read. DONE and test adjusted.
2008-08-17 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* grand renaming plan

remotebase => remoteroot to fit well with localroot DONE
local_path() => localroot seems to me should already work DONE
recentfile_basename => rfilename no need to stress it has no slash DONE

filenameroot??? Doesn't seem too bad to me today. Maybe something like kern? It would anyway need a deprecation cycle because it is an important constructor.

* I like the portability that Data::Serializer brings us, but the price is that some day we might find out that it is slowing us a bit. We'll
2008-08-16 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* should we not enter the interval of the principal (or the interval of the merging file?) in every aggregated/merged file?

* we should aim at a first release and give up on thinking about sanitizing stuff and zloop. Let's just admit that a full traditional rsync is the only available sanitizer ATM. Otherwise it's complicated stuff: sanitizing on the origin server, sanitizing on the slaves, sanitizing forgotten files, broken timestamps, etc. Let's delay it and get the basics out before this becomes a major cause for mess.
2008-08-13 Andreas Koenig <k@andreas-koenigs-computer.local>

* On OSes not supporting symlinks we expect that RECENT.recent contains the contents of the principal recentfile. Actually this is identical on systems supporting symlinks. Simple. What follows from that is that we need to keep the serializer in the metadata because we cannot read it from the filename, don't we? Of course not. It's a chicken and egg problem. This leaves us with the problem of actually parsing the serialized data to find out in which format it is. So who can do the 4 or 5 magics we wanted to support? File::LibMagic?
2008-08-09 Andreas Koenig <k@andreas-koenigs-computer.local>

* remotebase and recentfile_basename are ugly names. Now that we need a word for the shortest/principal/driving recentfile too, we should do

localroot is good. rfile is good. local_path() is bad, local_path($path) is medium, filenameroot() is bad, remotebase is bad, recentfile is

Up to now remotebase was the string that described the remote root directory in rsync notation, like pause.perl.org::authors. And recentfile_basename was "RECENT-1h.yaml".
2008-08-08 Andreas Koenig <k@andreas-koenigs-computer.local>

* The test that was added in today's checkin is a good start for a test of rmirror. We should have more methods in Recent.pm: verify, addmissingfiles. We should verify the current tree, then rmirror it, and then verifytree the copy. We could then add some arbitrary file, let it be discovered by addmissingfiles, then rmirror again, and then verifytree the copy again.

Then we could start stealing from the csync2 sqlite database [no port to OSX!] and fill a local DB. And add methods to compare the database with the recentfiles. Our strength is that in principle we could maintain state with a single float. We have synced up to 1234567890.123456. If the Z file does not add new files, all we have to do is mirror the new ones and

This makes it clear that we should extend the current protocol and declare that we cheat when we add files too late, just to help the other end keep track. Ah yes, that's what was meant when zloop was mentioned

Maybe I need to revisit File::Mirror to help me with this task.
2008-08-07 Andreas Koenig <k@andreas-koenigs-computer.local>

* There must be an allow-me-to-truncate flag in every recentfile. Without it one could construct a sequence of updates winning the locking battle against the aggregator. Only if an aggregator has managed to merge data over to the next level can truncating be allowed. DONE with

2008-08-06 Andreas Koenig <k@andreas-koenigs-computer.local>

* We should probably guarantee that no duplicates enter the aggregator
2008-08-02 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Getting the merge operation faster would need a good benchmark test. What 02 spits out isn't reliable enough and is dominated by many other

commit 10176bf6b79865d4fe9f46e3857a3b8669fa7961
Author: Andreas J. Koenig <k@k75.(none)>
Date: Sat Aug 2 07:58:04 2008 +0200

commit 3243120a0c120aaddcd9b1f4db6689ff12ed2523
Author: Andreas J. Koenig <k@k75.(none)>
Date: Sat Aug 2 11:40:29 2008 +0200

there was a lot of trying, but the effect is hardly measurable with

* overhead of connecting seems high. When setting max_files_per_connection to 1 we see that.
2008-08-01 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* 1217622571.0889 - 1217597432.86734 = 25138.2215600014

25138.2215600014/3600 = 6.98283932222261

It immediately catches the eye that this is ~7 hours, not ~6, so there seems to be a bug in the aggregator. FIXED
2008-07-27 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* e.g. id/Y/YE/YEWENBIN/Emacs-PDE-0.2.16.tar.gz: Do we have it, should we have it, can we mirror it? Mirror it!

I fear this needs a new class which might be called File::Rsync::Mirror::Recent. It would collect all recentfiles of a kind and treat them as an entity. I realize that a single recentfile may be sufficient for certain tasks and that it is handy for the low-level programmer, but it is not nice to use. If there is a delete in the 1h file, then the 6h file still contains it. Seekers of the best information need to combine at least some of the recentfiles most of the time.

There is the place for the Z loop!

But the combination is something to collect in a database, isn't it? Did csync2 just harrumph?
2008-07-26 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* it just occurred to me that hosts in the same mirroring pool could help each other out even without rewriting the recentfile. Just fetch the stuff to mirror from several places, bingo. But that's something that should rather live in a separate package or in rsync directly.

* cronjobs are unsuited because with ntp they would all come at the full minute and disturb each other. Besides that, I'd hate to have a backbone with more than a few seconds of latency.
2008-07-25 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* a second rsync server with access control for PAUSE. Port? 873 is the standard port, let's take 8873.
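A second daemon on 8873 would only need its own small config. A minimal sketch of such an rsyncd.conf; the config path, module name, and filesystem path are guesses here, only the port comes from the entry.

```ini
# hypothetical config for the second, access-controlled daemon
port = 8873

[authors]
    path = /home/ftp/pub/PAUSE/authors
    read only = yes
```

It would be started separately from the public daemon on 873, e.g. with rsync --daemon --config pointing at this file.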
* if there were a filesystem based on this, it would have slow access to nonexistent files. It would probably provide a wrong readdir (only based on current content) or also a slow one (based on a recentfile written after the call). But it would provide fast access to existing files. Or one would deliberately allow slightly blurred answers based on some sqlite reflection of the recentfiles.

* todo: write a variant of mirror() that combines two or more recentfiles and treats them like one

* todo: signal handler to remove the tempfile
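The tempfile-removal todo could be sketched like this: one cleanup closure, installed both as an INT/TERM handler and in an END block so the file also disappears on normal exit. The filename template is illustrative.

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Sketch: make sure the download tempfile disappears on INT/TERM as
# well as on normal exit.
my ($fh, $tempfile) = tempfile("RECENT-XXXXXX", TMPDIR => 1);

my $cleanup = sub {
    unlink $tempfile if defined $tempfile && -e $tempfile;
};
$SIG{INT} = $SIG{TERM} = sub { $cleanup->(); exit 1 };
END { $cleanup->() }
```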
2008-07-24 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* now that we have the symlink I forgot how it should be used in

* the z loop: add missing files to the Z file. Just append them (instead of prepending). So one guy prepends something from the Y file from time to time, and another guy appends something rather frequently. A collecting pond. When Y merges into Z, things get an epoch and the collecting pond gets smaller. What exactly are "missing files"?

take note of the current epoch of the alpha file, let's call it recent-ts

find all files on disk

remove all files registered in the recentworld up to recent-ts

remove all files that have been deleted after recent-ts according to
2008-07-23 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* rersyncrecent might be a cronjob with a (locked) state file which contains things like after and maybe the last z sync or such?

rrr-mirror might be an alternative name, but how would we justify the three Rs when there is no Re-Rsync-Recent?

With the --loop parameter it is an endless loop, without it there is no loop. At least this is simple.

* todo: a new accessor z-interval specifies how often the Z file is updated against the filesystem. We probably want no epoch stamp on these entries. And we want to be able to filter the entries (e.g. no by-modules and by-category tree)
2008-07-20 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* Fill the Z file. gc or fsck or both. Somehow we must get the old files into Z. We do not need the other files filled up with filesystem

* need an interface to query for a file in order to NOT call update on PAUSE a second time within a short time.

2008-07-19 Andreas J. Koenig <andreas.koenig.7os6VVqR@franz.ak.mind.de>

* recommended update interval? Makes no sense, it is different for

change-log-default-name: "Todo"