----------------------------------------------------------------------
-- This is a list of what to do before a release is ready to be made.
-- Please keep the style. Can be read with emacs org-mode.
----------------------------------------------------------------------

* Implement graceful stopping of the etorrent application [Milestone: 1.0]
  The provisioning is there in OTP applications. We just need to
  make use of it.
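
  OTP already provides the hook: the application callback's
  prep_stop/1 runs before the supervision tree is taken down. A
  minimal sketch, assuming a callback module named etorrent_app and a
  hypothetical announce_stopped/0 function (neither name is confirmed
  by the code base):

  #+BEGIN_SRC erlang
  -module(etorrent_app).
  -behaviour(application).
  -export([start/2, prep_stop/1, stop/1]).

  start(_Type, _Args) ->
      etorrent_sup:start_link().

  %% Called while the supervision tree is still running, so processes
  %% can still be asked to finish up (e.g. tell trackers we stopped).
  prep_stop(State) ->
      etorrent_tracker_communication:announce_stopped(),  %% hypothetical
      State.

  stop(_State) ->
      ok.
  #+END_SRC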
* Make shutdown strategies right [Milestone: 1.0]
  Currently the shutdown strategies and cleanup strategies are wrong;
  for an explanation, see

  http://www.erlang.org/doc/design_principles/gen_server_concepts.html#2.6

  This should be fixed. Also note that t_peer_recv and t_peer_send
  might be living in an unhealthy relationship (send is a start_link
  on recv) and that this ought to be cleaned up, eventually by
  hooking both to a common ancestor supervisor (the overhead of doing
  so is not measurable).
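
  The common-ancestor idea could be sketched as a small one_for_all
  supervisor owning both processes; the module and child names below
  are assumptions, not the actual tree:

  #+BEGIN_SRC erlang
  -module(etorrent_t_peer_sup).
  -behaviour(supervisor).
  -export([start_link/1, init/1]).

  start_link(Socket) ->
      supervisor:start_link(?MODULE, [Socket]).

  init([Socket]) ->
      %% one_for_all: if either the sender or the receiver dies, the
      %% other is shut down too, rather than send linking directly to
      %% recv.
      RecvSpec = {recv, {etorrent_t_peer_recv, start_link, [Socket]},
                  transient, 5000, worker, [etorrent_t_peer_recv]},
      SendSpec = {send, {etorrent_t_peer_send, start_link, [Socket]},
                  transient, 5000, worker, [etorrent_t_peer_send]},
      {ok, {{one_for_all, 0, 1}, [RecvSpec, SendSpec]}}.
  #+END_SRC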
* Who cleans up what? [Milestone: 1.0]
  Go through each table and decide who "owns" the right to clean it
  up. It should definitely be checked.

* Disconnect from seeders [Milestone: 1.0]
  Disconnect from other seeders once we are seeding ourselves; there
  is no point in two seeders staying connected to each other.

  While here, also check that we remember to report that we have all
  of the pieces.
* Eradicate XXX and TODO [Milestone: 1.0]
  Go through the code and remove all XXX and TODO markers remaining
  there. A few were added whenever I found something disgusting in
  the code base.

* What should we do about bad peers? [Milestone: 1.0]
  Currently bad peers are those which are already connected. We ought
  to keep a list of baddies and blacklist them for some time if we
  don't like them anymore.

* etorrent_t_sup: Just start everything right away? [Milestone: 1.1]
  Rather than having the code start things one at a time from the
  Control, it is much more robust just to have etorrent_t_sup start
  everything right away.
* It takes too much space to use #chunk_data [Milestone: 1.1]
  To fix this we can move store_chunk down and make it into an FS
  operation. That way we should be able to do it much more simply
  than now. We also get rid of the store_piece call in
  etorrent_piece. All in all, the code will be much simpler that way
  and will not have to keep as much data in memory.

* Profile and minimize the critical path [Milestone: 1.1]
  It looks like the critical path takes a wee bit too many clock
  cycles. It might be possible to cut it down considerably by
  profiling and optimizing that path.

  I want to do something about it, but not right now. Hence it is in
  milestone 1.1.
* Use passive sockets [Milestone: 1.1]
  We need to use passive sockets at some point. The reason is that
  active sockets have no flow control, and the granularity of whole
  packets is bad from a choke/unchoke perspective. The code that
  needs changing is rather contained, luckily, and can be kept in one
  place.

  An even more sinister idea: change to active sockets when the rate
  of the peer exceeds a certain set amount, to cut down on the amount
  of processing needed. We *do* have some flow control, as a peer will
  only send things we requested, so a peer can't overflow us by more
  than what we have outstanding requests for.
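
  A sketch of the passive-style receive loop this note is after,
  using {active, once}; the module and handler names are made up.
  With {active, once} the socket delivers one message and then goes
  quiet until re-armed, so a choked peer's data backs up in the
  kernel buffers instead of flooding our mailbox:

  #+BEGIN_SRC erlang
  -module(peer_recv_sketch).
  -export([recv_loop/1]).

  recv_loop(Socket) ->
      %% Re-arm the socket for exactly one message: this is the flow
      %% control knob.
      ok = inet:setopts(Socket, [{active, once}]),
      receive
          {tcp, Socket, Packet} ->
              handle_packet(Packet),
              recv_loop(Socket);
          {tcp_closed, Socket} ->
              ok
      end.

  handle_packet(_Packet) ->
      ok.  %% hypothetical: decode and dispatch the wire message
  #+END_SRC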
  - Pick functions at random, and document what they are doing.
    It is /especially/ important to document library calls and
    non-standard internal functions in OTP modules.

* TorrentPeerMaster [Milestone: not decided]
  - Figure out a better choking/unchoking algorithm.
    The current algorithm is the original one. We should look for a
    better algorithm and implement that. Suggestions for digging:
  - Decide what to do if we connect multiple times to the same IP.
* Temporary IP-ban on errors [Milestone: 1.1]
  If we find an error on a given peer, ban him temporarily for some
  time.
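
  A minimal sketch of such a temporary ban list, using an ETS table
  keyed by IP with an expiry timestamp; the module name, table name
  and ban duration are made up for illustration:

  #+BEGIN_SRC erlang
  -module(peer_ban).
  -export([init/0, ban/1, is_banned/1]).

  -define(BAN_TABLE, peer_bans).
  -define(BAN_SECONDS, 600).

  init() ->
      ets:new(?BAN_TABLE, [named_table, public, set]).

  ban(IP) ->
      %% Record when the ban expires rather than when it was issued,
      %% so lookups need no extra state.
      Expiry = erlang:monotonic_time(second) + ?BAN_SECONDS,
      ets:insert(?BAN_TABLE, {IP, Expiry}),
      ok.

  is_banned(IP) ->
      case ets:lookup(?BAN_TABLE, IP) of
          [{IP, Expiry}] -> erlang:monotonic_time(second) < Expiry;
          []             -> false
      end.
  #+END_SRC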
* ROBUSTNESS [Milestone: 1.2]
  - In general, robustness is not really taken care of. We ought to
    make the system more robust by not relying so much on Pids etc.
  - What happens if process X dies?
    Go through all processes, and think about what happens if each
    dies. Ensure that the system is robust.
  - List of processes to check for it:
    etorrent_acceptor.erl            etorrent_sup.erl
    etorrent_acceptor_sup.erl        etorrent_t_control.erl
    etorrent_bcoding.erl             etorrent_t_manager.erl
    etorrent_chunk.erl               etorrent_torrent.erl
    etorrent_dirwatcher.erl          etorrent_t_peer_group.erl
    etorrent_dirwatcher_sup.erl      etorrent_t_peer_pool_sup.erl
    etorrent_event.erl               etorrent_t_peer_send.erl
    etorrent_fs_checker.erl          etorrent_t_pool_sup.erl
    etorrent_fs.erl                  etorrent_tracker_communication.erl
    etorrent_fs_pool_sup.erl         etorrent_tracking_map.erl
    etorrent_listener.erl            etorrent_utils.erl
    etorrent_metainfo.erl            etorrent_version.hrl
    etorrent_mnesia_init.erl         http_gzip.erl
    etorrent_mnesia_table.hrl
    etorrent_peer_communication.erl
    etorrent_peer.erl                tr.erl

* Add another state for #piece records [Milestone: 1.2]
  The state 'chunked_no_left' should indicate that the piece has been
  chunked, but there are no chunks left to pick from it. The state is
  introduced when we empty the #chunk record with not_fetched for the
  #piece, and it is reintroduced in putback_chunks so we may again
  pick from it. Also, in the endgame, we should pick off from this
  state as well.

  It turns out to be an optimization, so put it into 1.2 for now. It
  is not obvious that it will give anything.