\documentclass[times,10pt,twocolumn]{article}
\usepackage{latex8}
\usepackage{times}
\usepackage{url}
\usepackage{graphics}
\usepackage{amsmath}

\pagestyle{empty}

\renewcommand\url{\begingroup \def\UrlLeft{<}\def\UrlRight{>}\urlstyle{tt}\Url}
\newcommand\emailaddr{\begingroup \def\UrlLeft{<}\def\UrlRight{>}\urlstyle{tt}\Url}

% If an URL ends up with '%'s in it, that's because the line *in the .bib/.tex
% file* is too long, so break it there (it doesn't matter if the next line is
% indented with spaces). -DH

%\newif\ifpdf
%\ifx\pdfoutput\undefined
% \pdffalse
%\else
% \pdfoutput=1
% \pdftrue
%\fi

\newenvironment{tightlist}{\begin{list}{$\bullet$}{
\setlength{\itemsep}{0mm}
\setlength{\parsep}{0mm}
% \setlength{\labelsep}{0mm}
% \setlength{\labelwidth}{0mm}
% \setlength{\topsep}{0mm}
}}{\end{list}}

\begin{document}

%% Use dvipdfm instead. --DH
%\ifpdf
% \pdfcompresslevel=9
% \pdfpagewidth=\the\paperwidth
% \pdfpageheight=\the\paperheight
%\fi

\title{Tor: Design of a Second-Generation Onion Router}

%\author{Roger Dingledine \\ The Free Haven Project \\ arma@freehaven.net \and
%Nick Mathewson \\ The Free Haven Project \\ nickm@freehaven.net \and
%Paul Syverson \\ Naval Research Lab \\ syverson@itd.nrl.navy.mil}

\maketitle
\thispagestyle{empty}

\begin{abstract}
We present Tor, a connection-based low-latency anonymous communication
system. Tor is the successor to Onion Routing
and addresses many limitations in the original Onion Routing design.
Tor works in a real-world Internet environment,
% it's user-space too
requires little synchronization or coordination between nodes, and
protects against known anonymity-breaking attacks as well
as or better than other systems with similar design parameters.
% and we present a big list of open problems at the end
% and we present a new practical design for rendezvous points
\end{abstract}

%\begin{center}
%\textbf{Keywords:} anonymity, peer-to-peer, remailer, nymserver, reply block
%\end{center}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\Section{Overview}
\label{sec:intro}

Onion Routing is a distributed overlay network designed to anonymize
low-latency TCP-based applications such as web browsing, secure shell,
and instant messaging. Clients choose a path through the network and
build a \emph{virtual circuit}, in which each node (or ``onion router'')
in the path knows its
predecessor and successor, but no others. Traffic flowing down the circuit
is sent in fixed-size \emph{cells}, which are unwrapped by a symmetric key
at each node (like the layers of an onion) and relayed downstream. The
original Onion Routing project published several design and analysis
papers
\cite{or-jsac98,or-discex00,or-ih96,or-pet00}. While there was briefly
a wide area Onion Routing network,
% how long is briefly? a day, a month? -RD
the only long-running and publicly accessible
implementation was a fragile proof-of-concept that ran on a single
machine (which nonetheless processed several tens of thousands of connections
daily from thousands of global users).
Many critical design and deployment issues were never resolved,
and the design has not been updated in several years.
Here we describe Tor, a protocol for asynchronous, loosely
federated onion routers that provides the following improvements over
the old Onion Routing design, and over other low-latency anonymity systems:

\begin{tightlist}

\item \textbf{Perfect forward secrecy:} The original Onion Routing
design was vulnerable to a single hostile node recording traffic and later
compromising successive nodes in the circuit and forcing them to
decrypt it.
Rather than using a single onion to lay each circuit,
Tor now uses an incremental or \emph{telescoping}
path-building design, where the initiator negotiates session keys with
each successive hop in the circuit. Once these keys are deleted,
subsequently compromised nodes cannot decrypt old traffic.
As a side benefit, onion replay detection is no longer
necessary, and the process of building circuits is more reliable, since
the initiator knows when a hop fails and can then try extending to a new node.

% Perhaps mention that not all of these are things that we invented. -NM

\item \textbf{Separation of protocol cleaning from anonymity:}
The original Onion Routing design required a separate ``application
proxy'' for each
supported application protocol---most
of which were never written, so many applications were never supported.
Tor uses the standard and near-ubiquitous SOCKS
\cite{socks4,socks5} proxy interface, allowing us to support most TCP-based
programs without modification. This design change allows Tor to
use the filtering features of privacy-enhancing
application-level proxies such as Privoxy \cite{privoxy} without having to
incorporate those features itself.

\item \textbf{Many TCP streams can share one circuit:} The original
Onion Routing design built a separate circuit for each application-level
request.
This hurt performance by requiring multiple public key operations for
every request, and also presented
a threat to anonymity (see Section~\ref{sec:maintaining-anonymity}).%
\footnote{The first Onion Routing design \cite{or-ih96} protected against
this threat to some
extent by requiring users to hide network access behind an onion
router/firewall that was also forwarding traffic from other nodes.
However, it is desirable for users to
benefit from Onion Routing even when they can't run their own
onion routers.}
%Such users, especially if they engage in certain unusual
%communication behaviors, may be identifiable \cite{wright03}.

%complicate the possibility of such attacks Tor multiplexes many
%streams down each circuit, but still rotates the circuit
%periodically to avoid too much linkability from requests on a single
%circuit.

% [This digression probably belongs in maintaining-anonymity. -NM]

The current Tor design multiplexes multiple TCP streams along each virtual
circuit, in order to improve efficiency and anonymity.

\item \textbf{No mixing, padding, or traffic shaping:} The original
Onion Routing design called for mixing of data from each circuit,
plus full link padding both between onion routers and between onion
proxies (that is, users) and onion routers \cite{or-jsac98}. The
later analysis paper \cite{or-pet00} suggested \emph{traffic shaping}
to provide similar protection but use less bandwidth, but did not go
into detail. However, recent research \cite{econymics} and deployment
experience \cite{freedom21-security} suggest that this level of resource
use is not practical or economical; and even full link padding is still
vulnerable \cite{defensive-dropping}. Thus, until we have a proven and
convenient design for traffic shaping or low-latency mixing that will help
anonymity against a realistic adversary, we leave these strategies out.

\item \textbf{Leaky-pipe circuit topology:} Through in-band
signalling within the
circuit, Tor initiators can direct traffic to nodes partway down the
circuit. This not only allows for long-range padding to frustrate traffic
shape and volume attacks at the initiator \cite{defensive-dropping},
but because circuits are used by more than one application, it also
allows traffic to exit the circuit from the middle---thus
frustrating traffic shape and volume attacks based on observing exit
points.

\item \textbf{Congestion control:} Earlier anonymity designs do not
address traffic bottlenecks. Unfortunately, typical approaches to load
balancing and flow control in overlay networks involve inter-node control
communication and global views of traffic. Tor's decentralized ack-based
congestion control maintains reasonable anonymity while allowing nodes
at the edges of the network to detect congestion or flooding attacks
and send less data until the congestion subsides.

\item \textbf{Directory servers:} The original Onion Routing design
planned to flood link-state information through the network---an
approach which can be unreliable and
open to partitioning attacks or outright deception. Tor takes a simplified
view towards distributing link-state information. Certain more trusted
onion routers also serve as directory servers; they provide signed
\emph{directories} describing all routers they know about, and which
are currently up. Users periodically download these directories via HTTP.

\item \textbf{End-to-end integrity checking:} Without integrity checking
on traffic going through the network, any onion router on the path
can change the contents of cells as they pass by---for example, to redirect a
connection on the fly so it connects to a different webserver, or to
tag encrypted traffic and look for the tagged traffic at the network
edges \cite{minion-design}. Tor hampers these attacks by checking data
integrity before it leaves the network.

\item \textbf{Robustness to failed nodes:} A failed node in a traditional
mix network means lost messages, but thanks to Tor's step-by-step
circuit building, users can notice failed
nodes while building circuits and route around them. Additionally,
liveness information from directories allows users to avoid
unreliable nodes in the first place.
%We further provide a
%simple mechanism that allows connections to be established despite recent
%node failure or slightly dated information from a directory server. Tor
%permits onion routers to have \emph{router twins} --- nodes that share
%the same private decryption key. Note that because connections now have
%perfect forward secrecy, an onion router still cannot read the traffic
%on a connection established through its twin even while that connection
%is active. Also, which nodes are twins can change dynamically depending
%on current circumstances, and twins may or may not be under the same
%administrative authority.

%[Commented out; Router twins provide no real increase in robustness
%to failed nodes. If a non-twinned node goes down, the
%circuit-builder notices this and routes around it. Circuit-building
%is offline, so there shouldn't even be a latency hit. -NM]

\item \textbf{Variable exit policies:} Tor provides a consistent
mechanism for
each node to specify and advertise a policy describing the hosts and
ports to which it will connect. These exit policies
are critical in a volunteer-based distributed infrastructure, because
each operator is comfortable with allowing different types of traffic
to exit the Tor network from his node.

\item \textbf{Implementable in user-space:} Because it only attempts to
anonymize TCP streams, Tor differs from other anonymity systems like
Freedom \cite{freedom} in that it does not require patches to an operating
system's network stack in order to operate. Although this approach is less
flexible, it has proven valuable to Tor's portability and deployability.

\item \textbf{Rendezvous points and location-protected servers:} Tor
provides an integrated mechanism for responder anonymity via
location-protected servers. Previous Onion Routing designs included
long-lived ``reply onions'' which could be used to build virtual
circuits to a hidden server, but this approach is
brittle because a reply onion becomes useless if any node in the
path goes down or rotates its keys, and it's also
%vulnerable to flooding attacks,
% no it isn't. no more than our rendezvous point approach at least -RD
incompatible with forward security. In Tor's
current design, clients use {\it introduction points} to negotiate {\it
rendezvous points} to connect with hidden servers; and reply onions
are no longer required.
\end{tightlist}

[XXX carefully mention implementation, emphasizing that experience
deploying isn't there yet, and not all features are implemented.
Mention that it runs, is kinda alpha, kinda deployed, runs on win32.]

We review previous work in Section~\ref{sec:background}, describe
our goals and assumptions in Section~\ref{sec:assumptions},
and then address the above list of improvements in Sections
\ref{sec:design}--\ref{sec:maintaining-anonymity}. We then summarize
how our design stands up to known attacks, and conclude with a list of
open problems.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\Section{Background and threat model}
\label{sec:background}

\SubSection{Related work}
\label{sec:related-work}
Modern anonymity designs date to Chaum's Mix-Net~\cite{chaum-mix} design of
1981. Chaum proposed hiding sender-recipient connections by wrapping
messages in several layers of public key cryptography, and relaying them
through a path composed of Mix servers. Mix servers in turn decrypt, delay,
and re-order messages, before relaying them along the path towards their
destinations.

Subsequent relay-based anonymity designs have diverged in two
principal directions. Some have attempted to maximize anonymity at
the cost of introducing comparatively large and variable latencies,
for example, Babel~\cite{babel}, Mixmaster~\cite{mixmaster-spec}, and
Mixminion~\cite{minion-design}. Because of this
trade-off, such \emph{high-latency} networks are well-suited for anonymous
email, but introduce too much lag for interactive tasks such as web browsing,
internet chat, or SSH connections.

Tor belongs to the second category: \emph{low-latency} designs that attempt
to anonymize interactive network traffic. Because these protocols typically
involve a large number of packets that must be delivered quickly, it is
difficult for them to prevent an attacker who can eavesdrop on both ends of
the communication from correlating the timing and volume
of traffic entering the anonymity network with traffic leaving it. These
protocols are also vulnerable to certain active attacks in which an
adversary introduces timing patterns into traffic entering the network, and
looks
for correlated patterns among exiting traffic.
Although some work has been done to frustrate
these attacks,\footnote{
The most common approach is to pad and limit communication to a constant
rate, or to limit
the variation in traffic shape. Doing so can have prohibitive bandwidth
costs and/or performance limitations.
%One can also use a cascade (fixed
%shared route) with a relatively fixed set of users. This assumes a
%significant degree of agreement and provides an easier target for an active
%attacker since the endpoints are generally known.
} most designs protect primarily against traffic analysis rather than traffic
confirmation \cite{or-jsac98}---that is, they assume that the attacker is
attempting to learn who is talking to whom, not to confirm a prior suspicion
about who is talking to whom.

The simplest low-latency designs are single-hop proxies such as the
Anonymizer \cite{anonymizer}, wherein a single trusted server removes
users' identifying data before relaying it. These designs are easy to
analyze, but require end-users to trust the anonymizing proxy. Furthermore,
concentrating the traffic to a single point makes traffic analysis easier: an
adversary need only eavesdrop on the proxy in order to become a global
observer against the entire anonymity network.

More complex are distributed-trust, channel-based anonymizing systems. In
these designs, a user establishes one or more medium-term bidirectional
end-to-end tunnels to exit servers, and uses those tunnels to deliver a
number of low-latency packets to and from one or more destinations per
tunnel. Establishing tunnels is comparatively expensive and typically
requires public-key cryptography, whereas relaying packets along a tunnel is
comparatively inexpensive. Because a tunnel crosses several servers, no
single server can learn the user's communication partners.

In some distributed-trust systems, such as the Java Anon Proxy (also known as
JAP or WebMIXes), users
build their tunnels along a fixed shared route or
``cascade.'' Like a single-hop proxy, a single cascade increases anonymity
sets by concentrating concurrent traffic into a single communication pipe.
Concentrating traffic, however, can become a liability: as with a single-hop
proxy, an attacker only needs to observe a limited number of servers (in this
case, both ends of the cascade) in order
to bridge all the system's traffic.
The Java Anon Proxy's design seeks to prevent this by padding
between end users and the head of the cascade \cite{web-mix}. However, the
current implementation does no padding and thus remains vulnerable
to both active and passive bridging.

Systems such as earlier versions of Freedom and the original Onion Routing
build the anonymous channel all at once, using a layered ``onion'' of
public-key encrypted messages, each layer of which provides a set of session
keys and the address of the next server in the channel. Tor as described
herein, later designs of Freedom, and AnonNet \cite{anonnet} build the
channel in stages, extending it one hop at a time. This approach
makes perfect forward secrecy feasible.

Distributed-trust anonymizing systems differ in how they prevent attackers
from controlling too many servers and thus compromising too many user paths.
Some protocols rely on a centrally maintained set of well-known anonymizing
servers. The current Tor design falls into this category.
Others (such as Tarzan and MorphMix) allow unknown users to run
servers, while using a limited resource (DHT space for Tarzan; IP space for
MorphMix) to prevent an attacker from owning too much of the network.
Crowds uses a centralized ``blender'' to enforce Crowd membership
policy. For small crowds it is suggested that familiarity with all
members is adequate. For large diverse crowds, limiting accounts in
control of any one party is more complex:
``(e.g., the blender administrator sets up an account for a user only
after receiving a written, notarized request from that user) and each
account to one jondo, and by monitoring and limiting the number of
jondos on any one network (using IP address), the attacker would be
forced to launch jondos using many different identities and on many
different networks to succeed'' \cite{crowds-tissec}.

Another low-latency design that was proposed independently and at
about the same time as the original Onion Routing was PipeNet
\cite{pipenet}. It provided anonymity protections that were stronger
than Onion Routing's, but at the cost of allowing a single user to
shut down the network simply by not sending. It was also never
implemented or formally published. Low-latency anonymous communication
has also been designed for other types of systems, including
ISDN \cite{isdn-mixes}, and mobile applications such as telephones and
active badging systems \cite{federrath-ih96,reed-protocols97}.

Some systems, such as Crowds \cite{crowds-tissec}, do not rely on changing the
appearance of packets to hide the path; rather they try to prevent an
intermediary from knowing whether it is talking to an initiator
or just another intermediary. Crowds uses no public-key
encryption, but the responder and all data are visible to all
nodes on the path; so anonymity of the connection initiator depends on
filtering all identifying information from the data stream. Crowds only
supports HTTP traffic.

Hordes \cite{hordes-jcs} is based on Crowds but also uses multicast
responses to hide the initiator. Herbivore \cite{herbivore} and
P5 \cite{p5} go even further, requiring broadcast.
Each uses broadcast in different ways, making trade-offs to
render broadcast more practical. Both Herbivore and P5 are designed primarily
for communication among participating peers, although Herbivore
permits external connections by requesting a peer to serve as a proxy.
Allowing easy connections to nonparticipating responders or recipients
is a practical requirement for many users, e.g., to visit
nonparticipating Web sites or to exchange mail with nonparticipating
recipients.

Tor is not primarily designed for censorship resistance but rather
for anonymous communication. However, Tor's rendezvous points, which
enable connections between mutually anonymous entities, also
facilitate connections to hidden servers. These building blocks for
censorship resistance and other capabilities are described in
Section~\ref{sec:rendezvous}. Location-hidden servers are an
essential component of anonymous publishing systems such as
Publius~\cite{publius}, Free Haven~\cite{freehaven-berk}, and
Tangler~\cite{tangler}.

STILL NOT MENTIONED:
real-time mixes\\
rewebbers\\
cebolla\\

Rewebber was mentioned in an earlier version along with Eternity,
which *must* be mentioned if we cite anything at all
in censorship resistance.

[XXX Close by mentioning where Tor fits.]

\Section{Design goals and assumptions}
\label{sec:assumptions}

\subsection{Goals}
Like other low-latency anonymity designs, Tor seeks to frustrate
attackers from linking communication partners, or from linking
multiple communications to or from a single point. Within this
main goal, however, several design considerations have directed
Tor's evolution.

\begin{description}
\item[Deployability:] The design must be one which can be implemented,
deployed, and used in the real world. This requirement precludes designs
that are expensive to run (for example, by requiring more bandwidth than
volunteers are willing to provide); designs that place a heavy liability
burden on operators (for example, by allowing attackers to implicate onion
routers in illegal activities); and designs that are difficult or expensive
to implement (for example, by requiring kernel patches, or separate proxies
for every protocol). This requirement also precludes systems in which
users who do not benefit from anonymity are required to run special
software in order to communicate with anonymous parties.
% XXX Our rendezvous points require clients to use our software to get to
% the location-hidden servers.
% Or at least, they require somebody near the client-side running our
% software. We haven't worked out the details of keeping it transparent
% for Alice if she's using some other http proxy somewhere. I guess the
% external http proxy should route through a Tor client, which automatically
% translates the foo.onion address? -RD
\item[Usability:] A hard-to-use system has fewer users---and because
anonymity systems hide users among users, a system with fewer users
provides less anonymity. Usability is not only a convenience for Tor:
it is a security requirement \cite{econymics,back01}. Tor
should work with most of a user's unmodified applications; shouldn't
introduce prohibitive delays; and should require the user to make as few
configuration decisions as possible.
\item[Flexibility:] The protocol must be flexible and
well-specified, so that it can serve as a test-bed for future research in
low-latency anonymity systems. Many of the open problems in low-latency
anonymity networks (such as generating dummy traffic, or preventing
pseudospoofing attacks) may be solvable independently from the issues
solved by Tor; it would be beneficial if future systems were not forced to
reinvent Tor's design decisions. (But note that while a flexible design
benefits researchers, there is a danger that differing choices of
extensions will render users distinguishable. Thus, implementations should
not permit different protocol extensions to coexist in a single deployed
network.)
\item[Conservative design:] The protocol's design and security parameters
must be conservative. Because additional features impose implementation
and complexity costs, Tor should include as few speculative features as
possible. (We do not oppose speculative designs in general; however, it is
our goal with Tor to embody a solution to the problems in low-latency
anonymity that we can solve today before we plunge into the problems of
tomorrow.)
% This last bit sounds completely cheesy. Somebody should tone it down. -NM
\end{description}

\subsection{Non-goals}
In favoring conservative, deployable designs, we have explicitly deferred
a number of goals. Many of these goals are desirable in anonymity systems,
but we choose to defer them either because they are solved elsewhere,
or because they present an area of active research lacking a generally
accepted solution.

\begin{description}
\item[Not peer-to-peer:] Tarzan and MorphMix aim to
scale to completely decentralized peer-to-peer environments with thousands
of short-lived servers, many of which may be controlled by an adversary.
Because of the many open problems in this approach, Tor uses a more
conservative design.
\item[Not secure against end-to-end attacks:] Tor does not claim to provide a
definitive solution to end-to-end timing or intersection attacks. Some
approaches, such as running an onion router, may help; see Section
\ref{sec:analysis} for more discussion.
\item[No protocol normalization:] Tor does not provide \emph{protocol
normalization} like Privoxy or the Anonymizer. In order to make clients
indistinguishable when they use complex and variable protocols such as HTTP,
Tor must be layered with a filtering proxy such as Privoxy to hide
differences between clients, expunge protocol features that leak identity,
and so on. Similarly, Tor does not currently integrate tunneling for
non-stream-based protocols like UDP; this too must be provided by
an external service.
% Actually, tunneling udp over tcp is probably horrible for some apps.
% Should this get its own non-goal bulletpoint? The motivation for
% non-goal-ness would be burden on clients / portability.
\item[Not steganographic:] Tor does not try to conceal which users are
sending or receiving communications; it only tries to conceal whom they are
communicating with.
\end{description}

\SubSection{Adversary Model}
\label{subsec:adversary-model}

A global passive adversary is the most commonly assumed threat when
analyzing theoretical anonymity designs. But like all practical low-latency
systems, Tor is not secure against this adversary. Instead, we assume an
adversary that is weaker than global with respect to distribution, but that
is not merely passive. Our threat model expands on that from
\cite{or-pet00}.

%%%% This is really keen analytical stuff, but it isn't our threat model:
%%%% we just go ahead and assume a fraction of hostile nodes for
%%%% convenience. -NM

%% The basic adversary components we consider are:
%% \begin{description}
%% \item[Observer:] can observe a connection (e.g., a sniffer on an
%% Internet router), but cannot initiate connections. Observations may
%% include timing and/or volume of packets as well as appearance of
%% individual packets (including headers and content).
%% \item[Disrupter:] can delay (indefinitely) or corrupt traffic on a
%% link. Can change all those things that an observer can observe up to
%% the limits of computational ability (e.g., cannot forge signatures
%% unless a key is compromised).
%% \item[Hostile initiator:] can initiate (or destroy) connections with
%% specific routes as well as vary the timing and content of traffic
%% on the connections it creates. A special case of the disrupter with
%% additional abilities appropriate to its role in forming connections.
%% \item[Hostile responder:] can vary the traffic on the connections made
%% to it including refusing them entirely, intentionally modifying what
%% it sends and at what rate, and selectively closing them. Also a
%% special case of the disrupter.
%% \item[Key breaker:] can break the key used to encrypt connection
%% initiation requests sent to a Tor-node.
%% % Er, there are no long-term private decryption keys. They have
%% % long-term private signing keys, and medium-term onion (decryption)
%% % keys. Plus short-term link keys. Should we lump them together or
%% % separate them out? -RD
%% %
%% % Hmmm, I was talking about the keys used to encrypt the onion skin
%% % that contains the public DH key from the initiator. Is that what you
%% % mean by medium-term onion key? (``Onion key'' used to mean the
%% % session keys distributed in the onion, back when there were onions.)
%% % Also, why are link keys short-term? By link keys I assume you mean
%% % keys that neighbor nodes use to superencrypt all the stuff they send
%% % to each other on a link. Did you mean the session keys? I had been
%% % calling session keys short-term and everything else long-term. I
%% % know I was being sloppy. (I _have_ written papers formalizing
%% % concepts of relative freshness.) But, there's some questions lurking
%% % here. First up, I don't see why the onion-skin encryption key should
%% % be any shorter term than the signature key in terms of threat
%% % resistance. I understand that how we update onion-skin encryption
%% % keys makes them depend on the signature keys. But, this is not the
%% % basis on which we should be deciding about key rotation. Another
%% % question is whether we want to bother with someone who breaks a
%% % signature key as a particular adversary. He should be able to do
%% % nearly the same as a compromised tor-node, although they're not the
%% % same. I reworded above, I'm thinking we should leave other concerns
%% % for later. -PS
%% \item[Hostile Tor node:] can arbitrarily manipulate the
%% connections under its control, as well as creating new connections
%% (that pass through itself).
%% \end{description}

%% All feasible adversaries can be composed out of these basic
%% adversaries. This includes combinations such as one or more
%% compromised Tor-nodes cooperating with disrupters of links on which
%% those nodes are not adjacent, or such as combinations of hostile
%% outsiders and link observers (who watch links between adjacent
%% Tor-nodes). Note that one type of observer might be a Tor-node. This
%% is sometimes called an honest-but-curious adversary. While an observer
%% Tor-node will perform only correct protocol interactions, it might
%% share information about connections and cannot be assumed to destroy
%% session keys at end of a session. Note that a compromised Tor-node is
%% stronger than any other adversary component in the sense that
%% replacing a component of any adversary with a compromised Tor-node
%% results in a stronger overall adversary (assuming that the compromised
%% Tor-node retains the same signature keys and other private
%% state-information as the component it replaces).

First, we assume that a threshold of directory servers are honest,
reliable, accurate, and trustworthy.
%% the rest of this isn't needed, if dirservers do threshold consensus dirs
% To augment this, users can periodically cross-check
%directories from each directory server (trust, but verify).
%, and that they always have access to at least one directory server that they trust.

Second, we assume that somewhere between ten percent and twenty
percent\footnote{In some circumstances---for example, if the Tor network is
running on a hardened network where all operators have had background
checks---the number of compromised nodes could be much lower.}
of the Tor nodes accepted by the directory servers are compromised, hostile,
and collaborating in an off-line clique. These compromised nodes can
arbitrarily manipulate the connections that pass through them, as well as
create new connections that pass through themselves. They can observe
traffic, and record it for later analysis. Honest participants do not know
which servers these are.

(In reality, many realistic adversaries might have ``bad'' servers that are not
fully compromised but simply under observation, or that have had their keys
compromised. But for the sake of analysis, we ignore this possibility,
since the threat model we assume is strictly stronger.)

% This next paragraph is also more about analysis than it is about our
% threat model. Perhaps we can say, ``users can connect to the network and
% use it in any way; we consider abusive attacks separately.'' ? -NM
Third, we constrain the impact of hostile users. Users are assumed to vary
widely in both the duration and number of times they are connected to the Tor
network. They can also be assumed to vary widely in the volume and shape of
the traffic they send and receive. Hostile users are, by definition, limited
to creating and varying their own connections into or through a Tor
network. They may attack their own connections to try to learn the identity
of the responder in a rendezvous connection. They can also try to
attack sites through the Onion Routing network; however, we will consider this
abuse rather than an attack per se (see
Section~\ref{subsec:exitpolicies}). Other than abuse, a hostile user's
motivation to attack his own connections is limited to the network effects of
such actions, such as denial of service (DoS) attacks. Thus, in this case,
we can view a hostile user as simply an extreme case of the ordinary user;
although ordinary users are not likely to engage in, e.g., IP spoofing to
gain their objectives.

In general, we are more focused on traffic analysis attacks than
traffic confirmation attacks.
%A user who runs a Tor proxy on his own
%machine, connects to some remote Tor-node and makes a connection to an
%open Internet site, such as a public web server, is vulnerable to
%traffic confirmation.
That is, an active attacker who suspects that
a particular client is communicating with a particular server can
confirm this if she can modify and observe both the
connection between the Tor network and the client and that between the
Tor network and the server. Even a purely passive attacker can
confirm traffic if the timing and volume properties of the traffic on
the connection are unique enough. (This is not to say that Tor offers
no resistance to traffic confirmation; it does. We defer discussion
of this point and of particular attacks until Section~\ref{sec:attacks},
after we have described Tor in more detail.)

\SubSection{Known attacks against low-latency anonymity systems}
\label{subsec:known-attacks}
% Should be merged into ``Threat model'' and reiterated in Attacks. -NM

We discuss each of these attacks in more detail below, along with the
aspects of the Tor design that provide defense. We provide a summary
of the attacks and our defenses against them in Section~\ref{sec:attacks}.

Passive attacks:
simple observation,
timing correlation,
size correlation,
option distinguishability.

Active attacks:
key compromise,
iterated subpoena,
run recipient,
run a hostile node,
compromise entire path,
selectively DoS servers,
introduce timing into messages,
directory attacks,
tagging attacks,
smear attacks,
entrapment attacks.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\Section{The Tor Design}
\label{sec:design}

The Tor network is an overlay network; each node is called an onion router
(OR). Onion routers run on normal computers without needing any special
privileges. Each OR maintains a long-term TLS connection to every other
OR (although we look at ways to relax this clique-topology assumption in
Section~\ref{subsec:restricted-routes}). A subset of the ORs also act as
directory servers, tracking which routers are currently in the network;
see Section~\ref{subsec:dirservers} for directory server details. Users
run local software called an onion proxy (OP) that fetches directories,
establishes paths (called \emph{virtual circuits}) over the network,
and handles connections from user applications. Onion proxies accept
TCP streams and multiplex them across the virtual circuit. The onion
router on the other side
% I don't mean other side, I mean wherever it is on the circuit. But
% don't want to introduce complexity this early? Hm. -RD
of the circuit connects to the destinations of
the TCP streams and relays data.

Onion routers have three types of keys. The first key is the identity
(signing) key. An OR uses this key to sign TLS certificates, to sign its
router descriptor (a summary of its keys, address, bandwidth, exit policy,
etc.), and to sign directories if it is a directory server. Changing the
identity key of a router is considered equivalent to creating a new
router. The second key is the onion (decryption) key, which is used
for decrypting requests from users to set up a circuit and negotiate
ephemeral keys. Thirdly, each OR shares link keys (generated by TLS)
with the other ORs it's connected to. We discuss rotating these keys in
Section~\ref{subsec:rotating-keys}.

Section~\ref{subsec:cells} discusses the structure of the fixed-size
\emph{cells} that are the unit of communication in Tor. We describe
in Section~\ref{subsec:circuits} how circuits work, and how they are
built, extended, truncated, and destroyed. Section~\ref{subsec:tcp}
discusses the process of opening TCP streams through Tor, and finally
Section~\ref{subsec:congestion} talks about congestion control and
fairness issues.

\SubSection{Cells}
\label{subsec:cells}

Traffic passes from node to node in fixed-size cells. Each cell is 256
bytes, and consists of a header and a payload. The header includes the
circuit identifier (ACI), which specifies which circuit the cell refers to
(many circuits can be multiplexed over the single TCP connection between
ORs or between an OP and an OR), and a command to describe what to do
with the cell's payload. Cells are either control cells, meaning they are
intended to be interpreted by the node that receives them, or relay cells,
meaning they carry end-to-end stream data. Control cells can be one of:
\emph{padding} (currently used for keepalive, but can be used for link
padding), \emph{create} or \emph{created} (to set up a new circuit),
or \emph{destroy} (to tear down a circuit).

Relay cells have an additional header (the relay header) after the
cell header, which specifies the stream identifier (many streams can
be multiplexed over a circuit), an end-to-end checksum for integrity
checking, the length of the relay payload, and a relay command. Relay
commands can be one of: \emph{relay
data} (for data flowing down the stream), \emph{relay begin} (to open a
stream), \emph{relay end} (to close a stream), \emph{relay connected}
(to notify the OP that a relay begin has succeeded), \emph{relay
extend} and \emph{relay extended} (to extend the circuit by a hop,
and to acknowledge), \emph{relay truncate} and \emph{relay truncated}
(to tear down only part of the circuit, and to acknowledge), \emph{relay
sendme} (used for congestion control), and \emph{relay drop} (used to
implement long-range dummies).

We will talk more about each of these cell types below.

% Nick: should there have been a table here? -RD
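
As a concrete illustration, here is a minimal sketch of how such a cell
might be packed and parsed. The field widths (a two-byte ACI and a
one-byte command) are assumptions made purely for exposition; the design
above fixes only the overall cell size and the set of header fields, not
their exact encoding.

\begin{verbatim}
import struct

CELL_SIZE = 256
# Assumed widths: 2-byte ACI, 1-byte command; the rest is payload.
HEADER_FMT = ">HB"
HEADER_LEN = struct.calcsize(HEADER_FMT)      # 3 bytes
PAYLOAD_LEN = CELL_SIZE - HEADER_LEN

def pack_cell(aci, command, payload=b""):
    """Build one fixed-size cell, zero-padding the payload."""
    assert len(payload) <= PAYLOAD_LEN
    return struct.pack(HEADER_FMT, aci, command) + \
           payload.ljust(PAYLOAD_LEN, b"\x00")

def unpack_cell(cell):
    """Split a 256-byte cell into (aci, command, payload)."""
    assert len(cell) == CELL_SIZE
    aci, command = struct.unpack(HEADER_FMT, cell[:HEADER_LEN])
    return aci, command, cell[HEADER_LEN:]
\end{verbatim}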

\SubSection{Circuits and streams}
\label{subsec:circuits}

While the original Onion Routing design built one circuit for each stream,
Tor circuits can be used by many streams. Because circuits can
take several tenths of a second to construct, due to crypto and network
latency, users construct circuits preemptively. Users build a new circuit
periodically (currently every minute) if the previous one has been used,
and expire old circuits that are no longer in use. Thus even very
active users spend a negligible amount of time and CPU in building
circuits, but only a limited number of requests can be linked to each
other by a given exit node.

Users set up circuits incrementally, negotiating a symmetric key with
each hop one at a time. To create a new circuit, the user (call her
Alice) sends a \emph{create} cell to the first node in her chosen
path. The payload is the first half of the Diffie-Hellman handshake,
encrypted to the onion key of the OR (call him Bob). Bob responds with a
\emph{created} cell with the second half of the DH handshake, along with
a hash of $K=g^{xy}$. The goal is to get unilateral entity authentication
(Alice knows she's handshaking with Bob, Bob doesn't care who it is---
recall that Alice has no key and is trying to remain anonymous) and
unilateral key authentication (Alice and Bob agree on a key, and Alice
knows Bob is the only other person who could know it---if he is
honest, etc.). We also want perfect forward secrecy, key freshness, etc.

\begin{equation}
\begin{aligned}
\mathrm{Alice} \rightarrow \mathrm{Bob}&: E_{PK_{Bob}}(g^x) \\
\mathrm{Bob} \rightarrow \mathrm{Alice}&: g^y, H(K | \mathrm{``handshake''}) \\
\end{aligned}
\end{equation}

The second step shows both that it was Bob
who received $g^x$, and that it was Bob who came up with $y$. We use
PK encryption in the first step (rather than, e.g., using the first two
steps of STS, which has a signature in the second step) because we
don't have enough room in a single cell for a public key and also a
signature. Preliminary analysis with the NRL protocol analyzer shows
the above protocol to be secure (including providing PFS) under the
traditional Dolev-Yao model.
% cite Cathy? -RD
% did I use the buzzwords correctly? -RD
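
To make the handshake's mechanics concrete, the following sketch runs the
exchange end-to-end. It is illustrative only: the group parameters are a
toy stand-in for a proper large Diffie-Hellman group, and the public-key
encryption of $g^x$ under Bob's onion key is omitted.

\begin{verbatim}
import hashlib, secrets

# Toy modulus for illustration only; a real deployment needs a
# standardized group of adequate size.
P = 2**127 - 1    # a Mersenne prime, far too small for real security
G = 3

def dh_keypair():
    x = secrets.randbelow(P - 2) + 1      # private exponent
    return x, pow(G, x, P)                # (x, g^x mod p)

# Alice -> Bob: g^x (in Tor, encrypted to Bob's onion key)
x, gx = dh_keypair()
# Bob -> Alice: g^y plus a hash binding the shared key K = g^xy
y, gy = dh_keypair()
K_bob = pow(gx, y, P)
auth = hashlib.sha1(K_bob.to_bytes(16, "big") + b"handshake").digest()
# Alice derives the same K and checks Bob's hash
K_alice = pow(gy, x, P)
tag = hashlib.sha1(K_alice.to_bytes(16, "big") + b"handshake").digest()
assert tag == auth
\end{verbatim}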

To extend a circuit past the first hop, Alice sends a \emph{relay extend}
cell to the last node in the circuit, specifying the address of the new
OR and an encrypted $g^x$ for it. That node copies the half-handshake
into a \emph{create} cell, and passes it to the new OR to extend the
circuit. When the new OR responds with a \emph{created} cell, the
penultimate OR copies the payload into a \emph{relay extended} cell and
passes it back to Alice.

Once Alice has established the circuit (so she shares a key with each
OR on the circuit), she can send relay cells. The stream ID in the relay
header indicates to which stream the cell belongs.
Alice can address each relay cell to any of the ORs on the circuit. To
construct a relay cell destined for a given OR, she iteratively
encrypts the cell payload (that is, the relay header and payload)
with the symmetric key of each hop up to that node. Then, at each hop
down the circuit, the OR decrypts the cell payload and checks whether
it recognizes the stream ID. A stream ID is recognized either if it
is an already open stream at that OR, or if it is equal to zero. The
zero stream ID is treated specially, and is used for control messages,
e.g.\ starting a new stream. If the stream ID is unrecognized, the OR
passes the relay cell downstream. This \emph{leaky pipe} circuit design
allows Alice's streams to exit at different ORs, for example to tolerate
different exit policies, or to keep the ORs from knowing that two streams
originate from the same person.
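
The following sketch illustrates this layered encryption; a SHA-1-based
XOR keystream stands in for the counter-mode stream cipher that a real
implementation would use, purely so the example is self-contained.

\begin{verbatim}
import hashlib

def keystream(key, length):
    """Illustrative SHA-1 counter-mode keystream (not the real cipher)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha1(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(key, data):
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Alice onion-encrypts a relay payload destined for hop 3:
hop_keys = [b"key-hop1", b"key-hop2", b"key-hop3"]
payload = b"relay header and payload for hop 3"
cell = payload
for k in reversed(hop_keys):   # hop 3's layer innermost, hop 1's outermost
    cell = xor_layer(k, cell)
# Each OR strips one layer as the cell moves down the circuit:
for k in hop_keys:
    cell = xor_layer(k, cell)
assert cell == payload         # hop 3 sees the plaintext relay header
\end{verbatim}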

To tear down a circuit, Alice sends a \emph{destroy} control cell. Each OR
in the circuit receives the destroy cell, closes all open streams on
that circuit, and passes a new destroy cell forward. But since circuits
can be built incrementally, they can also be torn down incrementally:
Alice can send a \emph{relay truncate} cell to a node along the circuit. That
node will send a destroy cell forward, and reply with an acknowledgement
(\emph{relay truncated}). Alice might truncate her circuit so she can extend it
to different nodes without signaling to the first few nodes (or somebody
observing them) that she is changing her circuit. That is, nodes in the
middle are not even aware that the circuit was truncated, because the
relay cells are encrypted. Similarly, if a node on the circuit goes down,
the adjacent node can send a \emph{relay truncated} cell back to Alice. Thus
the ``break a node and see which circuits go down'' attack is weakened.

\SubSection{Opening and closing streams}
\label{subsec:tcp}

When Alice's application wants to open a TCP connection to a given
address and port, it asks the OP (via SOCKS) to make the connection. The
OP chooses the newest open circuit (or creates one if none is available),
chooses a suitable OR on that circuit to be the exit node (usually the
last node, but maybe others due to exit policy conflicts; see
Section~\ref{subsec:exitpolicies}), chooses a new random stream ID for this
stream,
and delivers a \emph{relay begin} cell to that exit node. It uses a stream ID
of zero for the begin cell (so the OR will recognize it), and the relay
payload lists the new stream ID and the destination address and port.
Once the exit node completes the connection to the remote host, it
responds with a \emph{relay connected} cell through the circuit. Upon receipt,
the OP notifies the application that it can begin talking.

There's a catch to using SOCKS, though---some applications hand the
alphanumeric address to the proxy, while others resolve it into an IP
address first and then hand the IP to the proxy. When the application
does the DNS resolution first, Alice thereby reveals her destination to
the DNS server. Common applications like Mozilla and ssh have this flaw.

In the case of Mozilla, we're fine: the filtering web proxy called Privoxy
does the SOCKS call safely, and Mozilla talks to Privoxy safely. But a
portable general solution, such as for ssh, is an open problem. We could
modify the local nameserver, but this approach is invasive, brittle, and
not portable. We could encourage the resolver library to do resolution
via TCP rather than UDP, but this approach is hard to do right, and also
has portability problems. Our current answer is to encourage the use of
privacy-aware proxies like Privoxy wherever possible, and also to provide
a tool similar to \emph{dig} that can do a private lookup through the
Tor network.

Ending a Tor stream is analogous to ending a TCP stream: it uses a
two-step handshake for normal operation, or a one-step handshake for
errors. If one side of the stream closes abnormally, that node simply
sends a \emph{relay teardown} cell, and tears down the stream. If one side
% Nick: mention relay teardown in 'cell' subsec? good enough name? -RD
of the stream closes the connection normally, that node sends a \emph{relay
end} cell down the circuit. When the other side has sent back its own
relay end, the stream can be torn down. This two-step handshake allows
for TCP-based applications that, for example, close a socket for writing
but are still willing to read.

\SubSection{Integrity checking on streams}

In the old Onion Routing design, traffic was vulnerable to a malleability
attack: without integrity checking, an adversary could
guess some of the plaintext of a cell, xor it out, and xor in his own
plaintext. Even an external adversary could do this despite the link
encryption!

For example, an adversary could change a create cell to a
destroy cell; change the destination address in a relay begin cell
to the adversary's webserver; or change a user on an ftp connection
from typing ``dir'' to typing ``delete *''. Any node or observer along
the path can introduce such corruption in a stream.

Tor solves this malleability attack with respect to external adversaries
simply by using TLS. Addressing the insider malleability attack is more
complex.

Rather than doing integrity checking of the relay cells at each hop
(like Mixminion \cite{minion-design}), which would increase packet size
by a function of path length\footnote{This is also the argument against
using recent cipher modes like EAX \cite{eax}---we don't want the added
message-expansion overhead at each hop, and we don't want to leak the path
length (or pad to some max path length).}, we choose to accept passive
timing attacks, and do integrity
checking only at the edges of the circuit. When Alice negotiates a key
with each hop, they both start a SHA-1 with some derivative of that key,
thus starting out with randomness that only the two of them know. From
then on they each incrementally add all the data bytes flowing across
the stream to the SHA-1, and each relay cell includes the first 4 bytes
of the current value of the hash.

The attacker must be able to guess all previous bytes between Alice
and Bob on that circuit (including the pseudorandomness from the key
negotiation), plus the bytes in the current cell, to remove or modify the
cell. The computational overhead isn't so bad, compared to doing an AES
% XXX We never say we use AES. Say it somewhere above?
crypt at each hop in the circuit. We use only four bytes per cell to
minimize overhead; the chance that an adversary will correctly guess a
valid hash, plus the payload of the current cell, is acceptably low, given
that Alice or Bob tears down the circuit if they receive a bad hash.
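
A minimal sketch of this running digest, assuming both edges seed SHA-1
with the same key-derived material (the exact derivation is not specified
here):

\begin{verbatim}
import hashlib

class StreamDigest:
    """Running end-to-end integrity check: both edges seed a SHA-1
    with shared secret material, then fold in every data byte."""
    def __init__(self, key_material):
        self.h = hashlib.sha1(key_material)
    def tag(self, cell_payload):
        self.h.update(cell_payload)
        return self.h.digest()[:4]   # first 4 bytes travel in the cell

# Alice and the exit OR derive the same seed during key negotiation.
alice = StreamDigest(b"derived-from-K")
exit_or = StreamDigest(b"derived-from-K")
cell = b"GET / HTTP/1.0\r\n"
assert exit_or.tag(cell) == alice.tag(cell)   # verified before exiting
\end{verbatim}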

%% probably don't need to even mention this, because the randomness
%% covers it:
%The fun SHA1 attack where the bad guy can incrementally add to a hash
%to get a new valid hash doesn't apply to us, because we never show any
%hashes to anybody.

\SubSection{Website fingerprinting attacks}

% this subsection probably wants to move to analysis -RD
Old Onion Routing is vulnerable to website fingerprinting attacks like
David Martin's from USENIX Security and Drew's from PET 2002. So is Tor.
We need to send some padding or something, including long-range padding
(to foil the first hop), to solve this. Let's hope somebody writes
a followup to \cite{defensive-dropping} that tells us what, exactly,
to do, and why, exactly, it helps.

\SubSection{Rate limiting and fairness}

Nodes use a token bucket approach \cite{foo} to limit the number of
bytes they receive. Tokens are added to the bucket each second (when
the bucket is full, new tokens are discarded). Each token represents
permission to receive one byte from the network---to receive a byte,
the connection must remove a token from the bucket. Thus if the bucket
is empty, that connection must wait until more tokens arrive. The number
of tokens we add enforces a long-term average rate of incoming bytes, yet
we still permit short-term bursts above the allowed bandwidth. Currently
bucket sizes are set to ten seconds' worth of traffic.
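
A minimal token bucket in this style might look as follows; the
ten-second burst size follows the text above.

\begin{verbatim}
import time

class TokenBucket:
    """Byte-rate limiter: 'rate' tokens (bytes) are added per second,
    capped at ten seconds' worth of traffic."""
    def __init__(self, rate):
        self.rate = rate
        self.burst = 10 * rate
        self.tokens = self.burst
        self.last = time.monotonic()
    def refill(self):
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + self.rate * (now - self.last))
        self.last = now
    def consume(self, nbytes):
        """Spend tokens for nbytes read; False means the caller waits."""
        self.refill()
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False
\end{verbatim}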

Further, we want to avoid starving any Tor streams. Entire circuits
could starve if we read greedily from connections and one connection
uses all the remaining bandwidth. We solve this by dividing the number
of tokens in the bucket by the number of connections that want to read,
and reading at most that number of bytes from each connection. We iterate
this procedure until the number of tokens in the bucket is under some
threshold (e.g.\ 10KB), at which point we greedily read from connections.
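
The division procedure might look like the sketch below, where each
connection is represented simply by the number of bytes it has waiting
to be read; the 10KB threshold is the one mentioned above.

\begin{verbatim}
def fair_read(tokens, pending, threshold=10 * 1024):
    """pending maps connection -> bytes waiting to be read.
    Returns (bytes to read per connection, tokens left over)."""
    read = {c: 0 for c in pending}
    while tokens > threshold:
        want = [c for c in pending if pending[c] - read[c] > 0]
        if not want:
            return read, tokens
        share = max(1, tokens // len(want))
        for c in want:
            n = min(share, pending[c] - read[c], tokens)
            read[c] += n
            tokens -= n
    # Bucket is nearly empty: read greedily from whoever is left.
    for c in pending:
        n = min(pending[c] - read[c], tokens)
        read[c] += n
        tokens -= n
    return read, tokens
\end{verbatim}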

Because the number of bytes going out of a node is roughly the same
as the number of bytes that have come in, doing rate limiting only on
incoming bytes should be sufficient.

Further, inspired by Rennhard et al.'s design in \cite{anonnet}, the edges
of the circuit can automatically distinguish interactive streams from
bulk streams---interactive streams supply cells only rarely. We can
get good latency for these streams by giving them preferential service,
while still getting good overall throughput for the bulk streams. Such
preferential treatment can have an impact on anonymity, but an adversary
who can observe the stream can already learn this information through
timing attacks.

\SubSection{Congestion control}
\label{subsec:congestion}

Even with bandwidth rate limiting, we still need to worry about
congestion, either accidental or intentional. If enough users choose
the same OR-to-OR connection for their circuits, that connection
will become saturated. For example, an adversary can make a `put'
request through the onion routing network to a webserver he runs,
and then refuse to read any of the bytes at the webserver end of the
circuit. Without some congestion control mechanism, these bottlenecks
can propagate back through the entire network.

\subsubsection{Circuit-level}

To control a circuit's bandwidth usage, each OR keeps track of two
windows. The package window tracks how many relay data cells the OR is
allowed to package (from outside streams) for transmission back to the OP,
and the delivery window tracks how many relay data cells it is willing
to deliver to streams outside the network. Each window is initialized
(say, to 1000 data cells). When a data cell is packaged or delivered,
the appropriate window is decremented. When an OR has received enough
data cells (currently 100), it sends a relay sendme cell towards the OP,
with stream ID zero. When an OR receives a relay sendme cell with stream
ID zero, it increments its package window. Either of these cells
increments the corresponding window by 100. If the package window
reaches 0, the OR stops reading from TCP connections for all streams
on the corresponding circuit, and sends no more relay data cells until
receiving a relay sendme cell.

The OP behaves identically, except that it must track a package window
and a delivery window for every OR in the circuit. If a package window
reaches 0, it stops reading from streams destined for that OR.
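
A sketch of this bookkeeping, using the constants above (1000-cell
windows, sendme increments of 100):

\begin{verbatim}
CIRCWINDOW_START = 1000
CIRCWINDOW_INCREMENT = 100

class CircuitWindows:
    def __init__(self):
        self.package_window = CIRCWINDOW_START
        self.deliver_window = CIRCWINDOW_START
    def package_data_cell(self):
        """Called before packaging stream data into a relay data cell;
        False means stop reading from TCP until a sendme arrives."""
        if self.package_window <= 0:
            return False
        self.package_window -= 1
        return True
    def deliver_data_cell(self):
        """Called when delivering a relay data cell to a stream;
        returns True when it is time to send a relay sendme."""
        self.deliver_window -= 1
        if self.deliver_window <= CIRCWINDOW_START - CIRCWINDOW_INCREMENT:
            self.deliver_window += CIRCWINDOW_INCREMENT
            return True
        return False
    def sendme_received(self):
        self.package_window += CIRCWINDOW_INCREMENT
\end{verbatim}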

\subsubsection{Stream-level}

The stream-level congestion control mechanism is similar to the
circuit-level mechanism above. ORs and OPs use relay sendme cells
to implement end-to-end flow control for individual streams across
circuits. Each stream begins with a package window (e.g.\ 500 cells),
and increments the window by a fixed value (50) upon receiving a relay
sendme cell. Rather than always returning a relay sendme cell as soon
as enough cells have arrived, the stream-level congestion control also
has to check whether data has been successfully flushed onto the TCP
stream; it sends a relay sendme cell only when the number of bytes pending
to be flushed is under some threshold (currently 10 cells' worth).
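
The flush-threshold check might be expressed as follows; the constants
come from the text, with the 256-byte cell size from
Section~\ref{subsec:cells}.

\begin{verbatim}
STREAMWINDOW_START = 500
STREAMWINDOW_INCREMENT = 50
CELL_SIZE = 256
FLUSH_THRESHOLD = 10 * CELL_SIZE   # bytes still queued on the TCP stream

def should_send_stream_sendme(deliver_window, bytes_pending_flush):
    """Send a stream-level sendme only once 50 cells have been
    delivered *and* the data has mostly been flushed onward."""
    return (deliver_window <= STREAMWINDOW_START - STREAMWINDOW_INCREMENT
            and bytes_pending_flush < FLUSH_THRESHOLD)
\end{verbatim}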

Currently, non-data relay cells do not affect the windows. Thus we
avoid potential deadlock issues, such as a stream being unable to send a
relay sendme cell because its package window is empty.

\subsubsection{Needs more research}

We don't need to reimplement full TCP windows (with sequence numbers,
the ability to drop cells when we're full and retransmit later, etc.),
because the TCP streams already guarantee in-order delivery of each
cell. But we need to investigate further the effects of the current
parameters on throughput and latency, while also keeping privacy in mind;
see Section~\ref{sec:maintaining-anonymity} for more discussion.
1028 \Section{Other design decisions}
1030 \SubSection{Resource management and DoS prevention}
1031 \label{subsec:dos}
Describe DoS prevention: cookies before TLS begins, rate limiting of
create cells, link-to-link rate limiting, etc.
Mention twins: what they do, and what they cannot do.
How we should do sequencing and acking like TCP, so that we can better
tolerate lost data cells.
Mention that designers have to choose what to send across a
circuit: wrapped IP packets, wrapped stream data, etc. [Dispel the
TCP-over-TCP misconception.]
Mention that OR-to-OR connections should be highly reliable; if
they are not, everything can stall.
1044 \SubSection{Exit policies and abuse}
1045 \label{subsec:exitpolicies}
1047 Exit abuse is a serious barrier to wide-scale Tor deployment --- we
1048 must block or limit attacks and other abuse that users can do through
1049 the Tor network.
1051 Each onion router's \emph{exit policy} describes to which external
1052 addresses and ports the router will permit stream connections. On one end
1053 of the spectrum are \emph{open exit} nodes that will connect anywhere;
1054 on the other end are \emph{middleman} nodes that only relay traffic to
1055 other Tor nodes, and \emph{private exit} nodes that only connect locally
1056 or to addresses internal to that node's organization.
1057 This private exit
1058 node configuration is more secure for clients --- the adversary cannot
1059 see plaintext traffic leaving the network (e.g. to a webserver), so he
1060 is less sure of Alice's destination. More generally, nodes can require
1061 a variety of forms of traffic authentication \cite{onion-discex00}.
Most onion routers will function as \emph{limited exits} that permit
1063 connections to the world at large, but restrict access to certain abuse-prone
1064 addresses and services.
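
As an illustration of how such a policy might be evaluated, the sketch
below checks an ordered first-match list of accept/reject rules; the
rule format is our own, not the syntax used by the software.
\begin{verbatim}
# Illustrative exit-policy matcher: an ordered list of
# (action, network, port range) rules; first match wins.
from ipaddress import ip_address, ip_network

class ExitPolicy:
    def __init__(self, rules):
        self.rules = rules

    def allows(self, addr, port):
        a = ip_address(addr)
        for action, net, (lo, hi) in self.rules:
            if a in ip_network(net) and lo <= port <= hi:
                return action == "accept"
        return False   # default-reject anything unmatched

# A "limited exit": most ports allowed, abuse-prone SMTP blocked.
policy = ExitPolicy([
    ("reject", "0.0.0.0/0", (25, 25)),    # no open mail relaying
    ("accept", "0.0.0.0/0", (1, 65535)),
])
assert policy.allows("192.0.2.1", 80)
assert not policy.allows("192.0.2.1", 25)
\end{verbatim}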
1066 Tor offers more reliability than the high-latency fire-and-forget
1067 anonymous email networks, because the sender opens a TCP stream
1068 with the remote mail server and receives an explicit confirmation of
acceptance. But ironically, the private exit node model works poorly for
email when Tor nodes are run on volunteer machines that also do other
things, because it is quite hard to configure mail transport agents so
that normal users can send mail normally while the Tor process can only
deliver mail locally. Further, most organizations have specific hosts that
will deliver mail on behalf of certain IP ranges; Tor operators must be
aware of these hosts and consider putting them in the Tor exit policy.
The abuse issues on closed (e.g.\ military) networks are different
from the abuse on open networks like the Internet. While IP-based
access controls remain commonplace on the Internet, on closed networks
nearly all participants will be honest, and end-to-end authentication
can be assumed for anything important.
Handling exit abuse is harder for Tor than for Mixminion, because a
TCP stream does not carry an abuse-contact address. An exit node could
reach inside an HTTP stream and rewrite, say, the user-agent header,
but that addresses only one specific case and probably would not help
much anyway. Further, volunteer exit nodes do not resolve to telltale
names like anonymizer.mit.edu, so it may never even occur to the
recipient that the abusive traffic did not come from the node's owner.
Preventing abuse of open exit nodes is an unsolved problem. Princeton's
CoDeeN project \cite{darkside} gives us a glimpse of what we are in for.
% This is more speculative than a description of our design.
But their solutions, which mainly involve rate limiting and blacklisting
nodes that do bad things, do not translate directly to Tor. Rate limiting
still works well, but Tor intentionally separates sender from recipient,
so it is hard to know which sender did the bad thing without making
the whole network wide open.

Even limiting most nodes to allow only HTTP, SSH, and AIM to exit, and
rejecting everything else, is risky, because plenty of abuse can happen
over port 80. But it is a surprisingly good start, because it blocks
most things, and because people are more used to the concept of port 80
abuse not coming from the machine's owner.
We could also run intrusion detection system (IDS) modules at each Tor
node, to dynamically monitor traffic streams for attack signatures; a
node could even react when it sees a signature by closing the stream. But
IDSs do not actually work most of the time, and besides, how does one
write a signature for ``is sending a mean mail''?
We could run a Squid proxy at each exit node, to provide anonymity
comparable to private exit nodes for cache hits, to speed everything up,
and to provide a buffer for suspicious content coming out of port 80. We
could similarly run other exit proxies for other protocols, such as a
mail proxy that checks delivered mail for spam.
1118 [XXX Um, I'm uncomfortable with this for several reasons.
1119 It's not good for keeping honest nodes honest about discarding
1120 state after it's no longer needed. Granted it keeps an external
1121 observer from noticing how often sites are visited, but it also
1122 allows fishing expeditions. ``We noticed you went to this prohibited
1123 site an hour ago. Kindly turn over your caches to the authorities.''
1124 I previously elsewhere suggested bulk transfer proxies to carve
1125 up big things so that they could be downloaded in less noticeable
1126 pieces over several normal looking connections. We could suggest
1127 similarly one or a handful of squid nodes that might serve up
1128 some of the more sensitive but common material, especially if
1129 the relevant sites didn't want to or couldn't run their own OR.
1130 This would be better than having everyone run a squid which would
1131 just help identify after the fact the different history of that
1132 node's activity. All this kind of speculation needs to move to
1133 future work section I guess. -PS]
1135 A mixture of open and restricted exit nodes will allow the most
1136 flexibility for volunteers running servers. But while a large number
1137 of middleman nodes is useful to provide a large and robust network,
1138 a small number of exit nodes still simplifies traffic analysis because
1139 there are fewer nodes the adversary needs to monitor, and also puts a
1140 greater burden on the exit nodes.
The JAP cascade model is attractive because only one node per cascade
needs to take the heat. On the other hand, a hydra scheme could work
better (it is still hard to watch all the clients).
1145 Discuss importance of public perception, and how abuse affects it.
1146 ``Usability is a security parameter''. ``Public Perception is also a
1147 security parameter.''
1149 Discuss smear attacks.
1151 \SubSection{Directory Servers}
1152 \label{subsec:dirservers}
1154 First-generation Onion Routing designs \cite{or-jsac98,freedom2-arch} did
1155 % is or-jsac98 the right cite here? what's our stock OR cite? -RD
1156 in-band network status updates: each router flooded a signed statement
1157 to its neighbors, which propagated it onward. But anonymizing networks
1158 have different security goals than typical link-state routing protocols.
1159 For example, we worry more about delays (accidental or intentional)
1160 that can cause different parts of the network to have different pictures
1161 of link-state and topology. We also worry about attacks to deceive a
1162 client about the router membership list, topology, or current network
1163 state. Such \emph{partitioning attacks} on client knowledge help an
1164 adversary with limited resources to efficiently deploy those resources
1165 when attacking a target.
1167 Instead, Tor uses a small group of redundant directory servers to
1168 track network topology and node state such as current keys and exit
1169 policies. The directory servers are normal onion routers, but there are
1170 only a few of them and they are more trusted. They listen on a separate
1171 port as an HTTP server, both so participants can fetch current network
1172 state and router lists (a \emph{directory}), and so other onion routers
1173 can upload their router descriptors.
[[Mention that descriptors are signed with long-term keys; ORs publish
regularly to dirservers; policies for generating directories; key
rotation (link, onion, identity); everybody already knows the directory
keys; how to approve new nodes (advogato, sybil, captcha (RTT));
policy for handling connections with unknown ORs; diff-based
retrieval; diff-based consensus; separate liveness from the descriptor
list]]
1183 Of course, a variety of attacks remain. An adversary who controls a
1184 directory server can track certain clients by providing different
1185 information --- perhaps by listing only nodes under its control
1186 as working, or by informing only certain clients about a given
1187 node. Moreover, an adversary without control of a directory server can
1188 still exploit differences among client knowledge. If Eve knows that
1189 node $M$ is listed on server $D_1$ but not on $D_2$, she can use this
1190 knowledge to link traffic through $M$ to clients who have queried $D_1$.
1192 Thus these directory servers must be synchronized and redundant. The
1193 software is distributed with the signature public key of each directory
1194 server, and directories must be signed by a threshold of these keys.
1196 The directory servers in Tor are modeled after those in Mixminion
1197 \cite{minion-design}, but our situation is easier. Firstly, we make the
1198 simplifying assumption that all participants agree on who the directory
1199 servers are. Secondly, Mixminion needs to predict node behavior ---
1200 that is, build a reputation system for guessing future performance of
1201 nodes based on past performance, and then figure out a way to build
1202 a threshold consensus of these predictions. Tor just needs to get a
1203 threshold consensus of the current state of the network.
1205 The threshold consensus can be reached with standard Byzantine agreement
1206 techniques \cite{castro-liskov}.
1207 % Should I just stop the section here? Is the rest crap? -RD
1208 But this library, while more efficient than previous Byzantine agreement
1209 systems, is still complex and heavyweight for our purposes: we only need
1210 to compute a single algorithm, and we do not require strict in-order
computation steps. Indeed, the complexity of Byzantine agreement protocols
threatens our security, because users cannot easily understand these
protocols and thus have less trust in the directory servers. The Tor
directory servers
1214 build a consensus directory
1215 through a simple four-round broadcast protocol. First, each server signs
1216 and broadcasts its current opinion to the other directory servers; each
1217 server then rebroadcasts all the signed opinions it has received. At this
point all directory servers check to see whether anybody is cheating. If so,
1219 directory service stops, the humans are notified, and that directory
1220 server is permanently removed from the network. Assuming no cheating,
1221 each directory server then computes a local algorithm on the set of
1222 opinions, resulting in a uniform shared directory. Then the servers sign
1223 this directory and broadcast it; and finally all servers rebroadcast
1224 the directory and all the signatures.
1226 The rebroadcast steps ensure that a directory server is heard by either
1227 all of the other servers or none of them (some of the links between
1228 directory servers may be down). Broadcasts are feasible because there
1229 are so few directory servers (currently 3, but we expect to use as many
1230 as 9 as the network scales). The actual local algorithm for computing
1231 the shared directory is straightforward, and is described in the Tor
1232 specification \cite{tor-spec}.
1233 % we should, uh, add this to the spec. oh, and write it. -RD
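
To illustrate rounds three and four, the sketch below detects
equivocation among the rebroadcast opinions and computes a shared
directory. The majority rule used as the local algorithm here is an
assumed placeholder, not the algorithm from the specification.
\begin{verbatim}
# Sketch of the cheating check and the local directory algorithm.
# Signatures and transport are abstracted away; the majority rule
# is an assumed placeholder, not the algorithm from the spec.
def detect_cheaters(rebroadcasts):
    # rebroadcasts: {server: set of (author, opinion) pairs seen}.
    # An author cheats by signing two different opinions.
    by_author = {}
    for seen in rebroadcasts.values():
        for author, opinion in seen:
            by_author.setdefault(author, set()).add(opinion)
    return sorted(a for a, ops in by_author.items() if len(ops) > 1)

def shared_directory(opinions):
    # opinions: list of frozensets of router descriptors.
    counts = {}
    for op in opinions:
        for router in op:
            counts[router] = counts.get(router, 0) + 1
    quorum = len(opinions) // 2 + 1
    return frozenset(r for r, c in counts.items() if c >= quorum)

ops = [frozenset({"A", "B"}), frozenset({"A"}), frozenset({"A", "C"})]
assert shared_directory(ops) == frozenset({"A"})
\end{verbatim}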
Using directory servers rather than flooding approaches provides
simplicity and flexibility. For example, they do not complicate
the analysis when we start experimenting with non-clique network
topologies. And because the directories are signed, they can be cached at
all the other onion routers (or even elsewhere). Thus directory servers
are not a performance bottleneck when we have many users, nor do they
aid traffic analysis by forcing clients to periodically announce their
existence to any central point.
1243 % Mention Hydra as an example of non-clique topologies. -NM, from RD
1245 % also find some place to integrate that dirservers have to actually
1246 % lay test circuits and use them, otherwise routers could connect to
1247 % the dirservers but discard all other traffic.
1248 % in some sense they're like reputation servers in \cite{mix-acc} -RD
1250 \Section{Rendezvous points: location privacy}
1251 \label{sec:rendezvous}
Rendezvous points are a building block for \emph{location-hidden services}
(also known as responder anonymity) in the Tor network. A location-hidden
service allows Bob to offer a TCP service, such as a webserver, without
revealing the IP address of that service. One motivation for location
privacy is protection against DDoS attacks: attackers are forced to
attack the onion routing network as a whole rather than just Bob's IP
address.
We provide this censorship resistance for Bob by allowing him to
advertise several onion routers (his \emph{Introduction Points}) as his
public location. Alice, the client, chooses a node for her \emph{Rendezvous
Point}. She connects to one of Bob's introduction points, informs him
about her rendezvous point, and then waits for him to connect to the
rendezvous
point. This extra level of indirection means Bob's introduction points
do not open themselves up to abuse by serving files directly, e.g.\ if Bob
chooses a node in France to serve material distasteful to the French,
% We need a more legitimate-sounding reason here.
or if Bob's service tends to get DDoS'ed by script kiddies.
The extra level of indirection also allows Bob to respond to some requests
and ignore others.
We provide the necessary glue so that Alice can view webpages from Bob's
location-hidden webserver with minimally invasive changes. Both Alice and
Bob must run local onion proxies.
1280 The steps of a rendezvous:
1281 \begin{tightlist}
1282 \item Bob chooses some Introduction Points, and advertises them on a
1283 Distributed Hash Table (DHT).
1284 \item Bob establishes onion routing connections to each of his
1285 Introduction Points, and waits.
1286 \item Alice learns about Bob's service out of band (perhaps Bob told her,
1287 or she found it on a website). She looks up the details of Bob's
1288 service from the DHT.
1289 \item Alice chooses and establishes a Rendezvous Point (RP) for this
1290 transaction.
1291 \item Alice goes to one of Bob's Introduction Points, and gives it a blob
1292 (encrypted for Bob) which tells him about herself, the RP
1293 she chose, and the first half of an ephemeral key handshake. The
1294 Introduction Point sends the blob to Bob.
\item Bob chooses whether to ignore the blob, or to onion route to RP.
      Let us assume the latter.
\item RP plugs together Alice and Bob. Note that RP cannot recognize Alice,
      Bob, or the data they transmit (they share a session key).
1299 \item Alice sends a Begin cell along the circuit. It arrives at Bob's
1300 onion proxy. Bob's onion proxy connects to Bob's webserver.
1301 \item Data goes back and forth as usual.
1302 \end{tightlist}
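
The rendezvous point's role is narrow enough to sketch directly: it
splices together the two circuits that present the same rendezvous
cookie, and learns nothing about either party. The classes below are our
illustration, not the implementation.
\begin{verbatim}
# Sketch of the rendezvous point's role: splice the circuits
# that present the same cookie; learn nothing else.
class Circuit:
    def __init__(self, name):
        self.name, self.peer = name, None
    def splice(self, other):
        self.peer, other.peer = other, self

class RendezvousPoint:
    def __init__(self):
        self.waiting = {}   # cookie -> Alice's waiting circuit

    def establish(self, cookie, alice_circuit):
        self.waiting[cookie] = alice_circuit

    def join(self, cookie, bob_circuit):
        alice_circuit = self.waiting.pop(cookie, None)
        if alice_circuit is None:
            return False    # unknown or already-used cookie
        alice_circuit.splice(bob_circuit)
        return True

rp = RendezvousPoint()
rp.establish(b"cookie", Circuit("alice-side"))
assert rp.join(b"cookie", Circuit("bob-side"))
assert not rp.join(b"cookie", Circuit("mallory"))  # one use only
\end{verbatim}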
1304 When establishing an introduction point, Bob provides the onion router
1305 with a public ``introduction'' key. The hash of this public key
1306 identifies a unique service, and (since Bob is required to sign his
1307 messages) prevents anybody else from usurping Bob's introduction point
1308 in the future. Bob uses the same public key when establishing the other
1309 introduction points for that service.
1311 The blob that Alice gives the introduction point includes a hash of Bob's
1312 public key to identify the service, an optional initial authentication
token (the introduction point can do prescreening, e.g.\ to block replays),
1314 and (encrypted to Bob's public key) the location of the rendezvous point,
1315 a rendezvous cookie Bob should tell RP so he gets connected to
1316 Alice, an optional authentication token so Bob can choose whether to respond,
1317 and the first half of a DH key exchange. When Bob connects to RP
1318 and gets connected to Alice's pipe, his first cell contains the
1319 other half of the DH key exchange.
The authentication tokens can be used to provide selective access to users,
proportional to how important it is that they maintain uninterrupted access
to the service. During normal situations, Bob's service might simply be
1324 offered directly from mirrors; Bob also gives out authentication cookies
1325 to special users. When those mirrors are knocked down by DDoS attacks,
1326 those special users can switch to accessing Bob's service via the Tor
1327 rendezvous system.
1329 \subsection{Integration with user applications}
For each service Bob offers, he configures his local onion proxy to know
the local IP address and port of the server, a strategy for authorizing
Alices, and a public key. We assume the existence of a robust,
decentralized, and efficient lookup system that allows authenticated
updates, e.g.\
1335 \cite{cfs:sosp01}. (Each onion router could run a node in this lookup
1336 system; also note that as a stopgap measure, we can just run a simple
1337 lookup system on the directory servers.) Bob publishes into the DHT
1338 (indexed by the hash of the public key) the public key, an expiration
1339 time (``not valid after''), and the current introduction points for that
1340 service. Note that Bob's webserver is unmodified, and doesn't even know
1341 that it's hidden behind the Tor network.
1343 As far as Alice's experience goes, we require that her client interface
1344 remain a SOCKS proxy, and we require that she shouldn't have to modify
1345 her applications. Thus we encode all of the necessary information into
the hostname (more correctly, fully qualified domain name) that Alice
uses, e.g.\ when clicking on a URL in her browser. Location-hidden services
use the special top-level domain `.onion': thus hostnames take the
form x.y.onion, where x encodes the hash of Bob's public key and y is the
authentication cookie. Alice's onion proxy examines hostnames and
recognizes when they are destined for a hidden server. If so, it decodes
the hash of the public key and starts the rendezvous as described in the
list above.
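
The following sketch illustrates the proxy-side decoding; the base32
encoding of the key hash is an assumption for illustration, since the
text does not fix an encoding.
\begin{verbatim}
# Sketch of decoding x.y.onion: x encodes the hash of Bob's
# public key, y is the authentication cookie. Base32 is an
# assumed encoding; the text does not specify one.
import base64

def parse_onion_hostname(hostname):
    labels = hostname.lower().rstrip(".").split(".")
    if len(labels) != 3 or labels[2] != "onion":
        return None                    # not a hidden-service name
    x, y = labels[0], labels[1]
    pk_hash = base64.b32decode(x.upper().encode())
    return pk_hash, y                  # y is opaque to the proxy

x = base64.b32encode(b"hash-of-bobs-key-msg").decode().lower()
assert parse_onion_hostname(x + ".cookie123.onion") is not None
assert parse_onion_hostname("www.example.com") is None
\end{verbatim}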
1354 \subsection{Previous rendezvous work}
1356 Ian Goldberg developed a similar notion of rendezvous points for
1357 low-latency anonymity systems \cite{ian-thesis}. His ``service tag''
1358 is the same concept as our ``hash of service's public key''. We make it
1359 a hash of the public key so it can be self-authenticating, and so the
client can recognize the same service with confidence later on. His
design differs from ours in several ways, though. Firstly, Ian
suggests that the client should manually hunt down a current location of
1363 the service via Gnutella; whereas our use of the DHT makes lookup faster,
1364 more robust, and transparent to the user. Secondly, in Tor the client
1365 and server can share ephemeral DH keys, so at no point in the path is
1366 the plaintext
1367 exposed. Thirdly, our design is much more practical for deployment in a
1368 volunteer network, in terms of getting volunteers to offer introduction
1369 and rendezvous point services. The introduction points do not output any
bytes to the clients, and the rendezvous points do not know the client,
the server, or the data being transmitted. The indirection scheme
is also designed with authentication and authorization in mind: if the
client does not include the right cookie with its request for service,
the server does not even acknowledge its existence.
1376 \Section{Analysis}
How well do we resist our chosen adversary?

How well do we meet our stated goals?

Mention jurisdictional arbitrage.

Pull attacks and defenses into the analysis as a subsection.
1386 \Section{Maintaining anonymity in Tor}
1387 \label{sec:maintaining-anonymity}
[Loops will appear only on routes of at least five hops, which should
be rare given the path-length distribution. This reproduces some of the
reasoning that led to a default of five hops in the original Onion
Routing design, though under somewhat different assumptions. Note that
enclave-level protections really change these assumptions: if most
circuits are just two hops, then a single link observer will be able to
tell with high probability that two enclaves are communicating. So it
would seem that enclaves should have a four-node minimum circuit, to
prevent trivial identification of the whole circuit by a circuit
insider, and a three-hop minimum for circuits from an enclave to a
non-enclave responder. But then we would have to make everyone obey
these rules, or a node that inferred through timing that it was on a
four-hop circuit would know that it was probably carrying
enclave-to-enclave traffic. And if there were even a moderate number of
bad nodes in the network, that would make it advantageous to break the
connection and conduct a reformation intersection attack. -PS]
[Put as much of this as possible into the open issues section.]

[What is an anonymity set?]
Packet counting attacks work well against initiators, so we need some
level of obfuscation: standard link padding against passive link
observers, and long-range padding against adversaries who own the first
hop. Are we simply defenseless against adversaries who insert timing
signatures into our traffic?

Even regardless of link padding from Alice to the cloud, there will be
times when Alice is simply not online. Link padding, at the edges or
inside the cloud, does not help with this.
How often should we pull down directories? How often should we send
updated server descriptors?

When we start the client, should we build a circuit immediately,
or should the default be to build a circuit only on demand? Should we
fetch a directory immediately?

Would we benefit from greater synchronization, to blend with the other
users? Would the reduced speed hurt us more?

Does the ``you can't see when I'm starting or ending a stream because
you can't tell what sort of relay cell it is'' idea work, or is it just
a distraction?

Does running a server actually get you better protection, because traffic
coming from your node could plausibly have come from elsewhere? How
much mixing do you need before this is actually plausible, or is it
immediately beneficial because most adversaries cannot see your node?

Do different exit policies at different exit nodes trash anonymity sets,
or not affect them much?

Do we get better protection against a realistic adversary by having as
many nodes as possible, so he probably cannot see the whole network,
or by having a small number of nodes that mix traffic well? Is a
cascade topology a more realistic way to get defenses against traffic
confirmation? Does the hydra topology (many inputs, few outputs) work
better? Are we going to get a hydra anyway, because most nodes will be
middleman nodes?

Using a circuit many times is good because it requires less CPU work,
and good because frequent path rebuilding exposes you to predecessor
attacks. It is bad because predecessor attacks become more likely to
link you with a previous circuit, since you are so verbose, and bad
because each thing you do on a circuit is linked to the other things
you do on that circuit. How often should we rotate circuits? How do we
decide when to exit from the middle of a circuit? When should we
truncate and re-extend, versus starting a new circuit?
1469 Because Tor runs over TCP, when one of the servers goes down it seems
1470 that all the circuits (and thus streams) going over that server must
1471 break. This reduces anonymity because everybody needs to reconnect
1472 right then (does it? how much?) and because exit connections all break
1473 at the same time, and it also reduces usability. It seems the problem
is even worse in a peer-to-peer environment, because so far such systems
do not really provide an incentive for nodes to stay connected when they
are done browsing, so we would expect a much higher churn rate than for
1477 onion routing. Are there ways of allowing streams to survive the loss
1478 of a node in the path?
Discuss topologies. Cite George's non-freeroutes paper. Maybe this
paragraph goes elsewhere.

Discuss attracting users; incentives; usability.
1485 Choosing paths and path lengths.
1487 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1489 \Section{Attacks and Defenses}
1490 \label{sec:attacks}
1492 Below we summarize a variety of attacks and how well our design withstands
1493 them.
1495 \begin{enumerate}
1496 \item \textbf{Passive attacks}
1497 \begin{itemize}
1498 \item \emph{Simple observation.}
1499 \item \emph{Timing correlation.}
1500 \item \emph{Size correlation.}
1501 \item \emph{Option distinguishability.}
1502 \end{itemize}
1504 \item \textbf{Active attacks}
1505 \begin{itemize}
1506 \item \emph{Key compromise.}
1507 \item \emph{Iterated subpoena.}
1508 \item \emph{Run recipient.}
1509 \item \emph{Run a hostile node.}
1510 \item \emph{Compromise entire path.}
1511 \item \emph{Selectively DoS servers.}
1512 \item \emph{Introduce timing into messages.}
1513 \item \emph{Tagging attacks.}
The exit node can change the content you are getting to try to
trick you. Similarly, when it rejects you due to its exit policy,
it could give you a bad IP address that sends you somewhere else.
1517 \end{itemize}
We rely on DNS being globally consistent. If people in Africa resolve
IP addresses differently, then asking to extend a circuit to a certain
IP address can give away your origin.
1523 \item \textbf{Directory attacks}
1524 \begin{itemize}
1525 \item knock out a dirserver
1526 \item knock out half the dirservers
1527 \item trick user into using different software (with different dirserver
1528 keys)
1529 \item OR connects to the dirservers but nowhere else
1530 \item foo
1531 \end{itemize}
1533 \item \textbf{Attacks against rendezvous points}
1534 \begin{itemize}
1535 \item foo
1536 \end{itemize}
1538 \end{enumerate}
1540 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1542 \Section{Future Directions and Open Problems}
1543 \label{sec:conclusion}
1545 % Mention that we need to do TCP over tor for reliability.
1547 Tor brings together many innovations into
1548 a unified deployable system. But there are still several attacks that
1549 work quite well, as well as a number of sustainability and run-time
1550 issues remaining to be ironed out. In particular:
1552 \begin{itemize}
1553 \item \emph{Scalability:} Since Tor's emphasis currently is on simplicity
of design and deployment, the current design will not easily handle more
1555 than a few hundred servers, because of its clique topology. Restricted
1556 route topologies \cite{danezis-pets03} promise comparable anonymity
1557 with much better scaling properties, but we must solve problems like
how to randomly form the network without introducing new attacks.
1559 % [cascades are a restricted route topology too. we must mention
1560 % earlier why we're not satisfied with the cascade approach.]-RD
1561 % [We do. At least
\item \emph{Cover traffic:} Currently we avoid cover traffic because
it introduces clear performance and bandwidth costs, and because its
security properties are not well understood. With more research
1565 \cite{SS03,defensive-dropping}, the price/value ratio may change, both for
1566 link-level cover traffic and also long-range cover traffic. In particular,
1567 we expect restricted route topologies to reduce the cost of cover traffic
1568 because there are fewer links to cover.
\item \emph{Better directory distribution:} Even with the threshold
directory agreement algorithm described in Section~\ref{subsec:dirservers},
1571 the directory servers are still trust bottlenecks. We must find more
1572 decentralized yet practical ways to distribute up-to-date snapshots of
1573 network status without introducing new attacks.
1574 \item \emph{Implementing location-hidden servers:} While Section
1575 \ref{sec:rendezvous} provides a design for rendezvous points and
1576 location-hidden servers, this feature has not yet been implemented.
1577 We will likely encounter additional issues, both in terms of usability
1578 and anonymity, that must be resolved.
1579 \item \emph{Wider-scale deployment:} The original goal of Tor was to
1580 gain experience in deploying an anonymizing overlay network, and learn
from having actual users. We are now at the point where we can start
deploying a wider network, and we expect the deployment itself to teach
us our next set of problems.
1584 \item \emph{Further specification review:} Foo.
1585 \end{itemize}
1587 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1589 %\Section{Acknowledgments}
1590 %% commented out for anonymous submission
1592 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1594 \bibliographystyle{latex8}
1595 \bibliography{tor-design}
1597 \end{document}
1599 % Style guide:
1600 % U.S. spelling
1601 % avoid contractions (it's, can't, etc.)
1602 % 'mix', 'mixes' (as noun)
1603 % 'mix-net'
1604 % 'mix', 'mixing' (as verb)
1605 % 'middleman' [Not with a hyphen; the hyphen has been optional
1606 % since Middle English.]
1607 % 'nymserver'
1608 % 'Cypherpunk', 'Cypherpunks', 'Cypherpunk remailer'
1609 % 'Onion Routing design', 'onion router' [note capitalization]
1610 % 'SOCKS'
1613 % 'Substitute ``Damn'' every time you're inclined to write ``very;'' your
1614 % editor will delete it and the writing will be just as it should be.'
1615 % -- Mark Twain