Tor network discovery protocol
This document proposes a way of doing more distributed network discovery
while maintaining some amount of admission control. We don't recommend
you implement this as-is; it needs more discussion.
- Client: The Tor component that chooses paths.
- Server: A relay node that passes traffic along.
We want more decentralized discovery for network topology and status.
1a. We want to let clients learn about new servers from anywhere
    and build circuits through them if they wish. This means that
    Tor nodes need to be able to Extend to nodes they don't already
    know about.
1b. We want to let servers limit the addresses and ports they're
    willing to extend to. This is necessary e.g. for middleman nodes
    who have jerks trying to extend from them to badmafia.com:80 all
    day long and it's drawing attention.
1b'. While we're at it, we also want to handle servers that *can't*
    extend to some addresses/ports, e.g. because they're behind NAT or
    otherwise firewalled. (See section 5 below.)
1c. We want to provide a robust (available) and not-too-centralized
    mechanism for tracking network status (which nodes are up and working)
    and admission (which nodes are "recommended" for certain uses).
2a. People get the code from us, and they trust us (or our gpg keys, or
    something down the trust chain that's equivalent).
2b. Even if the software allows humans to change the client configuration,
    most of them will use the default that's provided, so we should
    provide one that is the right balance of robust and safe. That is,
    we need to hard-code enough "first introduction" locations that new
    clients will always have an available way to get connected.
2c. Assume that the current "ask them to email us and see if it seems
    suspiciously related to previous emails" approach will not catch
    the strong Sybil attackers. Therefore, assume the Sybil attackers
    we do want to defend against can produce only a limited number of
    not-obviously-on-the-same-subnet nodes.
2d. Roger has only a limited amount of time for approving nodes; shouldn't
    be the time bottleneck anyway; and is doing a poor job at keeping
    track.
2e. Some people would be willing to offer servers but will be put off
    by the need to send us mail and identify themselves.
2e'. Some evil people will avoid doing evil things based on the perception
    (however true or false) that there are humans monitoring the network
    and discouraging evil behavior.
2e''. Some people will trust the network, and the code, more if they
    have the perception that there are trustworthy humans guiding the
    network.
2f. We can trust servers to accurately report their characteristics
    (uptime, capacity, exit policies, etc), as long as we have some
    mechanism for notifying clients when we notice that they're lying.
2g. There exists a "main" core Internet in which most locations can access
    most locations. We'll focus on it (first).
3. Some notes on how to achieve this.
Piece one:

We ship with N (e.g. 20) directory server locations and fingerprints.

Directory servers serve signed network-status pages, listing their
opinions of network status and which routers are good (see 4a below).

Dirservers collect and provide server descriptors as well. These don't
need to be signed by the dirservers, since they're self-certifying.

(In theory the dirservers don't need to be the ones serving the
descriptors, but in practice the dirservers would need to point people
at the place that does, so for simplicity let's assume that they do.)
Clients then get network-status pages from a threshold of dirservers,
fetch enough of the corresponding server descriptors to make them happy,
and build circuits from there.
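The threshold rule could be sketched as follows. The fetching and
signature checking are elided, and all names are illustrative rather
than taken from the Tor source:

```python
# Sketch of the client-side threshold rule: a router is treated as
# recommended only if at least `threshold` of the reachable dirservers
# list it in their signed network-status pages. (Illustrative only.)
from collections import Counter

def recommended_routers(statuses, threshold):
    """statuses: one set of router fingerprints per dirserver."""
    counts = Counter()
    for listed in statuses:
        counts.update(listed)
    return {fp for fp, n in counts.items() if n >= threshold}

# Three dirservers, require agreement from at least two:
statuses = [{"A", "B", "C"}, {"A", "C"}, {"B", "C", "D"}]
print(sorted(recommended_routers(statuses, 2)))  # ['A', 'B', 'C']
```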
Piece two:

We ship with S (e.g. 3) seed keys (trust anchors), and ship with
signed timestamped certs for each dirserver. Dirservers also serve a
list of certs, maybe including a "publish all certs since time foo"
functionality. If at least two seeds agree about something, then it
is believed.
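A minimal sketch of the two-of-S agreement rule, with a toy stand-in
for real signature verification (all names here are hypothetical):

```python
# Toy model of the seed-key rule: a dirserver cert is accepted only if
# at least two distinct shipped seed keys vouch for it. verify() stands
# in for a real public-key signature check.

SEED_KEYS = {"seed1", "seed2", "seed3"}  # the S trust anchors we ship

def verify(claim, signature, seed):
    # Hypothetical check; real code would verify a digital signature.
    return signature == seed + ":" + claim

def accepted(claim, signatures):
    """signatures: iterable of (seed_id, signature) pairs."""
    vouching = {seed for seed, sig in signatures
                if seed in SEED_KEYS and verify(claim, sig, seed)}
    return len(vouching) >= 2
```

Note that using a set of vouching seeds means the same seed signing
twice still counts only once.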
Now dirservers can be added, and revoked, without requiring users to
upgrade to a new version. If we only ship with dirserver locations
and not fingerprints, it also means that dirservers can rotate their
signing keys transparently.
But keeping track of the seed keys becomes a critical security issue.
And rotating them in a backward-compatible way adds complexity. Also,
dirserver locations must be at least somewhat static, since each lost
dirserver degrades reachability for old clients. So as the dirserver
list rolls over we have no choice but to put out new versions.
Piece three: (optional)
Notice that this doesn't preclude other approaches to discovering
different concurrent Tor networks. For example, a Tor network inside
China could ship Tor with a different torrc and poof, they're using
a different set of dirservers. Some smarter clients could be made to
learn about both networks, and be told which nodes bridge the networks.
4. Unresolved issues.
4a. How do the dirservers decide whether to recommend a server? We
    could have them do it based on contact from the human, but by
    assumptions 2c and 2d above, that's going to be less effective, and
    more of a hassle, as we scale up. Thus I propose that they simply
    do some basic automatic measuring themselves, starting with the
    current "are they connected to me" measurement, and that's all
    for now.
    We could blacklist as we notice evil servers, but then we're in
    the same boat all the irc networks are in. We could whitelist as we
    notice new servers, and stop whitelisting (maybe rolling back a bit)
    once an attack is in progress. If we assume humans aren't particularly
    good at this anyway, we could just do automated delayed whitelisting,
    and have a "you're under attack" switch the human can enable for a
    while to start acting more conservatively.
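The delayed-whitelisting idea with an operator switch might look like
this sketch; the grace period and all names are made up for
illustration:

```python
# Sketch of automated delayed whitelisting: a new server is only
# whitelisted after it has been observed for a grace period, and a
# human-operated "under attack" switch raises the bar temporarily.

GRACE_PERIOD = 7 * 24 * 3600   # e.g. one week; purely illustrative
under_attack = False           # flipped by a human during an attack
first_seen = {}                # fingerprint -> time first observed

def observe(fingerprint, now):
    first_seen.setdefault(fingerprint, now)

def is_whitelisted(fingerprint, now):
    seen = first_seen.get(fingerprint)
    if seen is None:
        return False
    # Under attack, act more conservatively: require a longer history.
    required = GRACE_PERIOD * (4 if under_attack else 1)
    return now - seen >= required
```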
    Once upon a time we collected contact info for servers, which was
    mainly used to remind people that their servers are down and could
    they please restart. Now that we have a critical mass of servers,
    I've stopped doing that reminding. So contact info is less important.
4b. What do we do about recommended-versions? Do we need a threshold of
    dirservers to claim that your version is obsolete before you believe
    them? Or do we make it have less effect -- e.g. print a warning but
    never actually quit? Coordinating all the humans to upgrade their
    recommended-version strings at once seems bad. Maybe if we have
    seeds, the seeds can sign a recommended-version and upload it to
    the dirservers.
4c. What does it mean to bind a nickname to a key? What if each dirserver
    does it differently, so one nickname corresponds to several keys?
    Maybe the solution is that nickname<=>key bindings should be
    individually configured by clients in their torrc (if they want to
    refer to nicknames in their torrc), and we stop thinking of nicknames
    as globally unique.
4d. What new features need to be added to server descriptors so they
    remain compact yet support new functionality? Section 5 is a start
    of discussion of one answer to this.
5. Regarding "Blossom: an unstructured overlay network for end-to-end
   connectivity."
SECTION 5A: Blossom Architecture
Define "transport domain" as a set of nodes who can all mutually name each
other directly, using transport-layer (e.g. HOST:PORT) naming.

Define "clique" as a set of nodes who can all mutually contact each other
directly, using transport-layer (e.g. HOST:PORT) naming.
Neither transport domains nor cliques form a partition of the set of all
nodes. Just as cliques may overlap in theoretical graphs, transport domains
and cliques may overlap in the context of Blossom.
In this section we address possible solutions to the problem of how to allow
Tor routers in different transport domains to communicate.
First, we presume that for every interface between transport domains A and B,
one Tor router T_A exists in transport domain A, one Tor router T_B exists in
transport domain B, and (without loss of generality) T_A can open a persistent
connection to T_B. Any Tor traffic between the two routers will occur over
this connection, which effectively renders the routers equal partners in
bridging between the two transport domains. We refer to the established link
between two transport domains as a "bridge" (we use this term because there is
no serious possibility of confusion with the notion of a layer 2 bridge).
Next, suppose that the universe consists of transport domains connected by
persistent connections in this manner. An individual router can open multiple
connections to routers within the same foreign transport domain, and it can
establish separate connections to routers within multiple foreign transport
domains.
As in regular Tor, each Blossom router pushes its descriptor to directory
servers. These directory servers can be within the same transport domain, but
they need not be. The trick is that if a directory server is in another
transport domain, then that directory server must know through which Tor
routers to send messages destined for the Tor router in question.
Blossom routers can advertise themselves to other transport domains in two
ways:

(1) Directly push the descriptor to a directory server in the other transport
domain. This probably works particularly well if the other transport domain is
"the Internet", or if there are hard-coded directory servers in "the Internet".
The router has the responsibility to inform the directory server about which
routers can be used to reach it.
(2) Push the descriptor to a directory server in the same transport domain.
This is the easiest solution for the router, but it relies upon the existence
of a directory server in the same transport domain that is capable of
communicating with directory servers in the remote transport domain. In order
for this to work, some individual Tor routers must have published their
descriptors in remote transport domains (i.e. followed the first option) in
order to provide a link by which directory servers can communicate with each
other.
If all directory servers are within the same transport domain, then approach
(1) is sufficient: routers can exist within multiple transport domains, and as
long as the network of transport domains is fully connected by bridges, any
router will be able to access any other router in a foreign transport domain
simply by extending along the path specified by the directory server. However,
we want the system to be truly decentralized, which means not electing any
particular transport domain to be the master domain in which entries are
published.
This is the explanation for (2): in order for a directory server to share
information with a directory server in a foreign transport domain to which it
cannot speak directly, it must use Tor, which means referring to the other
directory server by using a router in the foreign transport domain. However,
in order to use Tor, it must be able to reach that router, which means that a
descriptor for that router must exist in its table, along with a means of
reaching it. Therefore, in order for a mutual exchange of information between
routers in transport domain A and those in transport domain B to be possible,
when routers in transport domain A cannot establish direct connections with
routers in transport domain B, some router in transport domain B must have
pushed its descriptor to a directory server in transport domain A, so that the
directory server in transport domain A can use that router to reach the
directory server in transport domain B.
Descriptors for Blossom routers are read-only, as for regular Tor routers, so
directory servers cannot modify them. However, Tor directory servers also
publish a "network-status" page that provides information about which nodes are
up and which are not. Directory servers could provide an additional field for
Blossom nodes: for each Blossom node, the directory server specifies a set of
paths (possibly only one) through the overlay (i.e. an ordered list of router
names/IDs) to a router in a foreign transport domain.
A new router publishing to a directory server in a foreign transport domain
should include a list of routers. This list should be either:

a. ...a list of routers to which the router has persistent connections, or, if
the new router does not have any persistent connections,

b. ...a (not necessarily exhaustive) list of fellow routers that are in the
same transport domain.
The directory server will be able to use this information to derive a path to
the new router, as follows. If the new router used approach (a), then the
directory server will define the set of paths to the new router as the union
of the set of paths to the routers on the list, with the name of the last hop
appended to each path. If the new router used approach (b), then the directory
server will define the paths to the new router as the union of the set of
paths to the routers specified in the list. The directory server will then
insert the newly defined paths into the network-status field for the router.
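The derivation rule just described can be sketched as below. We read a
"path" as the ordered list of routers to traverse before reaching the
target (so in case (a) the listed router itself becomes the last hop);
the data shapes and names are illustrative:

```python
# Sketch of the directory server's path derivation for a new router.
# paths_to maps each known router to its set of paths, where a path is
# a tuple of router names traversed before reaching that router.

def derive_paths(paths_to, listed_routers, persistent):
    """persistent=True  -> case (a): the listed routers hold persistent
                           connections to the new router; append them
                           as the last hop of each path.
       persistent=False -> case (b): the listed routers merely share
                           the new router's transport domain; reuse
                           their paths unchanged."""
    derived = set()
    for r in listed_routers:
        for path in paths_to.get(r, set()):
            derived.add(path + (r,) if persistent else path)
    return derived

paths_to = {"R1": {("X", "Y")}, "R2": {("Z",)}}
print(sorted(derive_paths(paths_to, ["R1", "R2"], persistent=True)))
# [('X', 'Y', 'R1'), ('Z', 'R2')]
```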
When confronted with the choice of multiple different paths to reach the same
router, the Blossom nodes may use a route selection protocol similar in design
to that used by BGP (perhaps a simple distance-vector procedure that only
takes path length into account, or something more complex that avoids loops,
caches results, etc.) in order to choose the best one.
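As a minimal instance of such a selection procedure, one might take
path length as the only metric and discard paths with loops (the
function name is ours, not Tor's or Blossom's):

```python
# Minimal distance-vector-flavored selection: drop looping paths, then
# prefer the shortest remaining path, breaking ties deterministically.

def select_path(paths):
    """paths: iterable of tuples of router names; returns best or None."""
    loop_free = [p for p in paths if len(set(p)) == len(p)]
    if not loop_free:
        return None
    return min(loop_free, key=lambda p: (len(p), p))

print(select_path([("A", "B", "C"), ("D", "E"), ("D", "E", "D")]))
# ('D', 'E')
```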
If a .exit name is not provided, then a path will be chosen whose nodes are all
among the set of nodes provided by the directory server that are believed to be
in the same transport domain (i.e. no explicit path). Thus, there should be no
surprises to the client. All routers should define their exit policies
carefully, with the knowledge that clients from potentially any transport
domain could access that which is not explicitly restricted.
SECTION 5B: Tor+Blossom desiderata
The interests of Blossom would be best served by implementing the following
modifications to Tor:
Objectives: Ultimately, we want Blossom requests to be indistinguishable in
format from non-Blossom .exit requests, i.e. hostname.forwarder.exit.
Proposal: Blossom is a process that manipulates Tor, so it should be
implemented as a Tor controller, extending control-spec.txt. For each request,
Tor uses the control protocol to ask the Blossom process whether it (the
Blossom process) wants to build or assign a particular circuit to service the
request. Blossom chooses one of the following responses:
a. (Blossom exit node, circuit cached) "use this circuit" -- provides a circuit
ID.

b. (Blossom exit node, circuit not cached) "I will build one" -- provides a
list of routers, gets a circuit ID.

c. (Regular (non-Blossom) exit node) "No, do it yourself" -- provides nothing.
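The three-way decision could be modeled as a simple dispatch; the
request and cache shapes here are assumptions, and the real exchange
would ride on the Tor control protocol:

```python
# Sketch of the Blossom controller's three responses to Tor's query
# about a stream request. All data shapes are illustrative.

def blossom_decide(exit_name, blossom_exits, circuit_cache, routes):
    if exit_name not in blossom_exits:
        return ("do_it_yourself", None)       # case (c): regular exit
    if exit_name in circuit_cache:
        return ("use_this_circuit", circuit_cache[exit_name])  # case (a)
    return ("will_build", routes[exit_name])  # case (b): router list

# Example: one cached Blossom circuit, one that must be built.
blossom_exits = {"fwd1", "fwd2"}
cache = {"fwd1": "circ-42"}
routes = {"fwd2": ["r1", "r2", "fwd2"]}
print(blossom_decide("fwd1", blossom_exits, cache, routes))
# ('use_this_circuit', 'circ-42')
```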
Objectives: Blossom routers are like regular Tor routers, except that Blossom
routers need these features as well:

a. the ability to open persistent connections,

b. the ability to know whether they should use a persistent connection to
reach another router,

c. the ability to define a set of routers to which to establish persistent
connections, as readable from a configuration file, and

d. the ability to tell a directory server that (1) it is Blossom-enabled, and
(2) it can be reached by some set of routers to which it explicitly establishes
persistent connections.
Proposal: Address the aforementioned points as follows.

a. Routers need the ability to open a specified number of persistent
connections. This can be accomplished by implementing a generic
should_i_close_this_conn() and
which_conns_should_i_try_to_open_even_when_i_dont_need_them().
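The two hooks named above might look like the following sketch; the
function names come from the text, but the connection and config
shapes are assumptions:

```python
# Sketch of the two generic connection-policy hooks. A conn is modeled
# as a dict with the peer's name and an idle flag; persistent_peers is
# the configured set of peers to keep connections open to.

def should_i_close_this_conn(conn, persistent_peers):
    # Never close a connection we are configured to keep persistent.
    return conn["idle"] and conn["peer"] not in persistent_peers

def which_conns_should_i_try_to_open_even_when_i_dont_need_them(
        open_peers, persistent_peers):
    # (Re-)establish any configured persistent connection that is down.
    return set(persistent_peers) - set(open_peers)
```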
b. The Tor design already supports this, but we must be sure to establish the
persistent connections explicitly, re-establish them when they are lost, and
not close them unnecessarily.
c. We must modify Tor to add a new configuration option, allowing either (a)
explicit specification of the set of routers to which to establish persistent
connections, or (b) a random choice of some nodes to which to establish
persistent connections, chosen from the set of nodes local to the transport
domain of the specified directory server (for example).
Objective: Blossom directory servers may provide extra
fields in their network-status pages. Blossom directory servers may
communicate with Blossom clients/routers in nonstandard ways in addition to
the standard ones.

Proposal: Geoff should be able to implement a directory server according to the
Tor specification (dir-spec.txt).