(As of 8 October 2003, this was all accurate. If you're reading this in
the distant future, stuff may have changed.)

0. Intro and required reading

Onion Routing is still very much in the development stage. This document
aims to get you started in the right direction if you want to understand
the code, add features, fix bugs, etc.

Read the README file first, so you can get familiar with the basics of
installing and running an onion router.

Then, skim some of the introductory materials in tor-design.pdf,
tor-spec.txt, and the Tor FAQ to learn more about how the Tor protocol
is supposed to work. This document will assume you know about Cells,
Circuits, Streams, Connections, Onion Routers, and Onion Proxies.

The code is divided into two directories: ./src/common and ./src/or.
The "common" directory contains general-purpose utility functions not
specific to onion routing. The "or" directory implements all
onion-routing and onion-proxy specific functionality.

Files in ./src/common:

aes.[ch] -- Implements the AES cipher (with 128-bit keys and blocks),
and a counter-mode stream cipher on top of AES. This code is
taken from the main Rijndael distribution. (We include this
because many people are running older versions of OpenSSL without
counter-mode support.)

crypto.[ch] -- Wrapper functions to present a consistent interface to
public-key and symmetric cryptography operations from OpenSSL.

fakepoll.[ch] -- Used on systems that don't have a poll() system call;
reimplements poll() using the select() system call.

log.[ch] -- Tor's logging subsystem.

test.h -- Macros used by unit tests.

torint.h -- Provides missing [u]int*_t types for environments that
don't have them.

tortls.[ch] -- Wrapper functions to present a consistent interface to
TLS, SSL, and X.509 functions from OpenSSL.

util.[ch] -- Miscellaneous portability and convenience functions.

Files in ./src/or:

[General-purpose modules]

or.h -- Common header file: include everything, define everything.

buffers.c -- Implements a generic buffer interface. Buffers are
fairly opaque string holders that can read from or flush to
memory, file descriptors, or TLS connections.

Also implements parsing functions to read HTTP and SOCKS commands
from buffers.

tree.h -- A splay tree implementation by Niels Provos. Used by
dns.c for dns caching at exits, and by connection_edge.c for dns
caching at clients.

config.c -- Code to parse and validate the configuration file.

[Background processing modules]

cpuworker.c -- Implements a farm of 'CPU worker' processes to perform
CPU-intensive tasks in the background, so as not to interrupt the
onion router. (OR only)

dns.c -- Implements a farm of 'DNS worker' processes to perform DNS
lookups for onion routers and cache the results. [This needs to
be done in the background because of the lack of a good,
ubiquitous asynchronous DNS implementation.] (OR only)

[Directory-related functionality.]

directory.c -- Code to send and fetch directories and router
descriptors via HTTP. Directories use dirserv.c to generate the
results; clients use routers.c to parse them.

dirserv.c -- Code to manage directory contents and generate
directories. [Directory server only]

routers.c -- Code to parse directories and router descriptors; and to
generate a router descriptor corresponding to this OR's
capabilities. Also presents some high-level interfaces for
managing an OR or OP's view of the directory.

[Circuit-related modules.]

circuit.c -- Code to create circuits, manage circuits, and route
relay cells along circuits.

onion.c -- Code to generate and respond to "onion skins".

[Core protocol implementation.]

connection.c -- Code used in common by all connection types. See
1.2. below for more general information about connections.

connection_edge.c -- Code used only by edge connections.

command.c -- Code to handle specific cell types.

connection_or.c -- Code to implement cell-speaking connections.

main.c -- Toplevel module. Initializes keys, handles signals,
multiplexes between connections, implements main loop, and drives
scheduled events.

tor_main.c -- Stub module containing a main() function. Allows the unit
test binary to link against main.c.

test.c -- Contains unit tests for many pieces of the lower level Tor
modules.

1.2. All about connections

All sockets in Tor are handled as different types of nonblocking
'connections'. (What the Tor spec calls a "Connection", the code refers
to as a "Cell-speaking" or "OR" connection.)

Connections are implemented by the connection_t struct, defined in or.h.
Not every kind of connection uses all the fields in connection_t; see
the comments in or.h and the assertions in assert_connection_ok() for
more information.

Every connection has a type and a state. Connections never change their
type, but can go through many state changes in their lifetime.

The connection types break down as follows:

[Cell-speaking connections]
CONN_TYPE_OR -- A bidirectional TLS connection transmitting a
sequence of cells. May be from an OR to an OR, or from an OP to
an OR.

[Edge connections]
CONN_TYPE_EXIT -- A TCP connection from an onion router to a
Stream's destination. [OR only]
CONN_TYPE_AP -- A SOCKS proxy connection from the end user
application to the onion proxy. [OP only]

[Listeners]
CONN_TYPE_OR_LISTENER [OR only]
CONN_TYPE_AP_LISTENER [OP only]
CONN_TYPE_DIR_LISTENER [Directory server only]
-- Bound network sockets, waiting for incoming connections.

[Internal connections]
CONN_TYPE_DNSWORKER -- Connection from the main process to a DNS
worker process. [OR only]

CONN_TYPE_CPUWORKER -- Connection from the main process to a CPU
worker process. [OR only]

Connection states are documented in or.h.

Every connection has an associated input buffer and output buffer.
Listeners don't use them. For non-listener connections, incoming
data is appended to conn->inbuf, and outgoing data is taken from the
front of conn->outbuf. Connections differ primarily in the functions
called to fill and drain these buffers.
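
To make the idea concrete, here is a minimal sketch of the fill/drain
pattern, using a hypothetical fixed-size buf_t. It is not the real
buffers.c interface (which grows its buffers dynamically and can also
read from TLS connections); it only shows the shape of the calls.

   /* Hypothetical sketch of the inbuf/outbuf pattern. */
   #include <unistd.h>
   #include <string.h>

   typedef struct buf_t {
     char mem[16384];   /* storage */
     size_t datalen;    /* bytes currently held */
   } buf_t;

   /* Append whatever the kernel has ready on 'fd' to the buffer. */
   static int buf_read_from_fd(buf_t *buf, int fd) {
     ssize_t n = read(fd, buf->mem + buf->datalen,
                      sizeof(buf->mem) - buf->datalen);
     if (n <= 0)
       return -1;       /* error or EOF; real code also checks EAGAIN */
     buf->datalen += (size_t)n;
     return 0;
   }

   /* Write as much buffered data as the kernel will take right now. */
   static int buf_flush_to_fd(buf_t *buf, int fd) {
     ssize_t n = write(fd, buf->mem, buf->datalen);
     if (n < 0)
       return -1;       /* real code also checks EAGAIN */
     memmove(buf->mem, buf->mem + n, buf->datalen - (size_t)n);
     buf->datalen -= (size_t)n;
     return 0;
   }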

1.3. All about circuits.

A circuit_t structure fills two roles. First, a circuit_t links two
connections together: either an edge connection and an OR connection,
or two OR connections. (When joined to an OR connection, a circuit_t
affects only cells sent to a particular circID on that connection. When
joined to an edge connection, a circuit_t affects all data.)

Second, a circuit_t holds the cipher keys and state for sending data
along a given circuit. At the OP, it has a sequence of ciphers, each
of which is shared with a single OR along the circuit. Separate
ciphers are used for data going "forward" (away from the OP) and
"backward" (towards the OP). At the OR, a circuit has only two stream
ciphers: one for data going forward, and one for data going backward.
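
The layering itself is easy to picture. The sketch below is purely
illustrative: the placeholder "cipher" is a single XOR byte so that the
example stays self-contained, whereas the real code uses the counter-mode
AES wrappers from crypto.[ch], one cipher per hop and direction.

   #include <stddef.h>

   typedef struct hop_cipher_t { unsigned char key; } hop_cipher_t;

   /* A stream cipher in counter mode encrypts and decrypts with the
    * same operation; here that operation is just XOR. */
   static void hop_crypt(const hop_cipher_t *c, unsigned char *p, size_t len) {
     size_t i;
     for (i = 0; i < len; ++i)
       p[i] ^= c->key;
   }

   /* Forward (away from the OP): add one layer per hop, the last hop's
    * layer innermost, so each OR along the path peels off exactly one. */
   static void op_crypt_forward(const hop_cipher_t *hops, int n_hops,
                                unsigned char *payload, size_t len) {
     int i;
     for (i = n_hops - 1; i >= 0; --i)
       hop_crypt(&hops[i], payload, len);
   }

   /* Backward (towards the OP): each OR added its own layer on the way
    * back, so the OP peels them off in hop order. */
   static void op_crypt_backward(const hop_cipher_t *hops, int n_hops,
                                 unsigned char *payload, size_t len) {
     int i;
     for (i = 0; i < n_hops; ++i)
       hop_crypt(&hops[i], payload, len);
   }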

1.4. Asynchronous IO and the main loop.

Tor uses the poll(2) system call (or it wraps select(2) to act like
poll, if poll is not available) to handle nonblocking (asynchronous)
IO. If you're not familiar with nonblocking IO, check out the links
at the end of this document.

All asynchronous logic is handled in main.c. The functions
'connection_add', 'connection_set_poll_socket', and 'connection_remove'
manage an array of connection_t*, and keep it in sync with the array of
struct pollfd required by poll(2). (This array of connection_t* is
accessible via get_connection_array, but users should generally call
one of the 'connection_get_by_*' functions in connection.c to look up
individual connections.)

To trap read and write events, connections call the functions
'connection_{is|stop|start}_{reading|writing}'. If you want
to completely reset the events you're watching for, use
'connection_watch_events'.

Every time poll() finishes, main.c calls conn_read and conn_write on
every connection. These functions dispatch events that have something
to read to connection_handle_read, and events that have something to
write to connection_handle_write, respectively.
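
In outline, that dispatch step looks something like the sketch below.
The array names and the conn_read/conn_write signatures are assumptions
made for illustration; the real loop also computes timeouts, handles
signals, and sweeps connections marked for close.

   #include <poll.h>

   extern struct pollfd poll_array[];   /* kept in sync by connection_add() */
   extern int n_conns;
   extern void conn_read(int i);        /* -> connection_handle_read() */
   extern void conn_write(int i);       /* -> connection_handle_write() */

   void poll_once_and_dispatch(int timeout_ms) {
     int i;
     if (poll(poll_array, n_conns, timeout_ms) < 0)
       return;                          /* real code would check for EINTR */
     for (i = 0; i < n_conns; ++i) {
       if (poll_array[i].revents & POLLIN)
         conn_read(i);
       if (poll_array[i].revents & POLLOUT)
         conn_write(i);
     }
   }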

When connections need to be closed, they can respond in two ways. Most
simply, they can make connection_handle_* return an error (-1),
which will make conn_{read|write} close them. But if it's not
convenient to return -1 (for example, processing one connection causes
you to realize that a second one should close), then you can also
mark a connection to close by setting conn->marked_for_close. Marked
connections will be closed at the end of the current iteration of
the main loop.
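
The deferred-close pattern boils down to something like the following
sketch; the helper name and the struct shown are illustrative, not the
exact or.h declarations.

   typedef struct connection_t {
     int marked_for_close;
     /* ...many more fields in the real struct... */
   } connection_t;

   extern connection_t *connection_array[];
   extern int n_conns;
   extern void connection_remove_and_free(connection_t *conn);  /* assumed */

   /* Run once at the end of every main-loop iteration. */
   void close_marked_connections(void) {
     int i;
     for (i = n_conns - 1; i >= 0; --i)     /* backwards: removal may reorder */
       if (connection_array[i]->marked_for_close)
         connection_remove_and_free(connection_array[i]);
   }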

The main loop handles several other operations: First, it checks
whether any signals have been received that require a response (HUP,
KILL, USR1, CHLD). Second, it calls prepare_for_poll to handle recurring
tasks and compute the necessary poll timeout. These recurring tasks
include periodically fetching the directory, timing out unused
circuits, incrementing flow control windows and re-enabling connections
that were blocking for more bandwidth, and maintaining statistics.

A word about TLS: Using TLS on OR connections complicates matters in
two ways. First, a TLS stream has its own read buffer independent of the
connection's read buffer. (TLS needs to read an entire frame from
the network before it can decrypt any data. Thus, trying to read 1
byte from TLS can require that several KB be read from the network
and decrypted. The extra data is stored in TLS's decrypt buffer.)
Because the data hasn't been read by Tor (it's still inside the TLS
object), this means that sometimes a connection "has stuff to read" even
when poll() didn't return POLLIN. The tor_tls_get_pending_bytes function
is used in main.c to detect TLS objects with non-empty internal buffers.
Second, the TLS stream's events do not correspond directly to network
events: sometimes, before a TLS stream can read, the network must be
ready to write -- or vice versa.
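
In raw OpenSSL terms, which tortls.[ch] hides behind the tor_tls_*
wrappers, the two complications look roughly like this:

   #include <openssl/ssl.h>

   /* Even if poll() reported no POLLIN, the TLS object may already hold
    * decrypted bytes left over from an earlier record: */
   int tls_has_buffered_data(SSL *ssl) {
     return SSL_pending(ssl) > 0;
   }

   /* And a read can "fail" not because the peer sent nothing, but because
    * the TLS layer first needs the network to become readable or writable
    * (e.g. in the middle of a handshake): */
   int tls_read_some(SSL *ssl, char *out, int len) {
     int r = SSL_read(ssl, out, len);
     if (r > 0)
       return r;                              /* got decrypted bytes */
     switch (SSL_get_error(ssl, r)) {
       case SSL_ERROR_WANT_READ:  return 0;   /* wait for POLLIN */
       case SSL_ERROR_WANT_WRITE: return 0;   /* wait for POLLOUT first */
       default:                   return -1;  /* real error or clean close */
     }
   }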

1.5. How data flows (An illustration.)

Suppose an OR receives 256 bytes along an OR connection. These 256
bytes turn out to be a data relay cell, which gets decrypted and
delivered to an edge connection. Here we give a possible call sequence
for the delivery of this data.

(This may become outdated quickly.)

do_main_loop -- Calls poll(2), receives a POLLIN event on a struct
pollfd, so it calls:
conn_read -- Looks up the corresponding connection_t, and calls:
connection_handle_read -- Calls:
connection_read_to_buf -- Notices that it has an OR connection so:
read_to_buf_tls -- Pulls data from the TLS stream onto conn->inbuf.
connection_process_inbuf -- Notices that it has an OR connection so:
connection_or_process_inbuf -- Checks whether conn is open, and calls:
connection_process_cell_from_inbuf -- Notices it has enough data for
a complete cell, so it calls:
connection_fetch_from_buf -- Pulls the cell from the buffer.
cell_unpack -- Decodes the raw cell into a cell_t.
command_process_cell -- Notices it is a relay cell, so calls:
command_process_relay_cell -- Looks up the circuit for the cell,
makes sure the circuit is live, then passes the cell to:
circuit_deliver_relay_cell -- Passes the cell to each of:
relay_crypt -- Strips a layer of encryption from the cell and
notices that the cell is for local delivery.
connection_edge_process_relay_cell -- Extracts the cell's
relay command, and makes sure the edge connection is
open. Since it has a DATA cell and an open connection,
it calls:
circuit_consider_sending_sendme -- Checks if the total number
of cells received by all streams on this circuit is
enough that we should send back an acknowledgement
(requesting that more cells be sent to any stream).
connection_write_to_buf -- To place the data on the outgoing
buffer of the correct edge connection, by calling:
connection_start_writing -- To tell the main poll loop about
the pending data.
write_to_buf -- To actually place the outgoing data on the
connection's outbuf.
connection_consider_sending_sendme -- If the outbuf waiting
to flush to the exit connection is not too full, checks
if the total number of cells received on this stream
is enough that we should send back an acknowledgement
(requesting that more cells be sent to this stream).

In a subsequent iteration, main notices that the edge connection is
ready for writing:

do_main_loop -- Calls poll(2), receives a POLLOUT event on a struct
pollfd, so it calls:
conn_write -- Looks up the corresponding connection_t, and calls:
connection_handle_write -- This isn't a TLS connection, so calls:
flush_buf -- Delivers data from the edge connection's outbuf to the
network.
connection_wants_to_flush -- Reports that all data has been flushed.
connection_finished_flushing -- Notices the connection is an exit,
so it calls:
connection_edge_finished_flushing -- The connection is open, so it
calls:
connection_stop_writing -- Tells the main poll loop that this
connection has no more data to write.
connection_consider_sending_sendme -- Now that the outbuf
is empty, checks again if the total number of cells
received on this stream is enough that we should send
back an acknowledgement (requesting that more cells be
sent to this stream).

1.6. Routers, descriptors, and directories

All Tor processes need to keep track of a list of onion routers, for
several reasons:
- OPs need to establish connections and circuits to ORs.
- ORs need to establish connections to other ORs.
- OPs and ORs need to fetch directories from a directory server.
- ORs need to upload their descriptors to directory servers.
- Directory servers need to know which ORs are allowed onto the
network, what the descriptors are for those ORs, and which of
those ORs are currently live.

Thus, every Tor process keeps track of a list of all the ORs it knows
in a static variable 'directory' in the routers.c module. This
variable contains a routerinfo_t object for each known OR. On startup,
the directory is initialized to a list of known directory servers (via
router_get_list_from_file()). Later, the directory is updated via
router_get_dir_from_string(). (OPs and ORs retrieve fresh directories
from directory servers; directory servers generate their own.)

Every OR must periodically regenerate a router descriptor for itself.
The descriptor and the corresponding routerinfo_t are stored in the
'desc_routerinfo' and 'descriptor' static variables in routers.c.

Additionally, a directory server keeps track of a list of the
router descriptors it knows in a separate list in dirserv.c. It
uses this list, checking which OR connections are open, to build
directories.

2. Coding conventions

Use tor_malloc, tor_strdup, and tor_gettimeofday instead of their
generic equivalents. (They always succeed or exit.)

Use INLINE instead of 'inline', so that we work properly on Windows.
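
The idea behind those wrappers is simply that an operation either
succeeds or the process exits, so callers never need to check for
failure. A simplified sketch of the malloc case (the real versions,
declared in util.[ch] or nearby, use the logging subsystem rather than
stderr):

   #include <stdlib.h>
   #include <stdio.h>

   void *tor_malloc_sketch(size_t size) {
     void *result = malloc(size);
     if (!result) {
       fprintf(stderr, "Out of memory. Dying.\n");
       exit(1);
     }
     return result;
   }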

2.2. Calling and naming conventions

Whenever possible, functions should return -1 on error and 0 on
success.

For multi-word identifiers, use lowercase words combined with
underscores. (e.g., "multi_word_identifier"). Use ALL_CAPS for macros and
constants.

Typenames should end with "_t".

Function names should be prefixed with a module name or object name. (In
general, code to manipulate an object should be a module with the same
name as the object, so it's hard to tell which convention is used.)

Functions that do things should have imperative-verb names
(e.g. buffer_clear, buffer_resize); functions that return booleans should
have predicate names (e.g. buffer_is_empty, buffer_needs_resizing).

2.3. What To Optimize

Don't optimize anything if it's not in the critical path. Right now,
the critical path seems to be AES, logging, and the network itself.
Feel free to do your own profiling to determine otherwise.

Log convention: use only these five log severities.

ERR is if something fatal just happened.
WARN if something bad happened, but we're still running. The
bad thing is either a bug in the code, an attack, or a buggy
protocol/implementation on the part of the remote peer. The operator
should examine the bad thing and try to correct it.
NOTICE if it's something the operator will want to know about.
(No error or warning messages should be expected during normal OR or OP
operation. I expect most people to run on -l notice eventually. If a
library function is currently called such that failure always means
ERR, then the library function should log WARN and let the caller
decide.)
INFO means something happened (maybe bad, maybe ok), but there's nothing
you need to (or can) do about it.
DEBUG is for everything louder than INFO.
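
For illustration only, here is roughly how the severities might be used.
The call and constant names assume a printf-style logging interface; the
real names are the ones declared in log.[ch].

   log_fn(LOG_ERR,   "Could not bind to port %d; exiting.", port);
   log_fn(LOG_WARN,  "Router '%s' sent a malformed cell; closing the "
                     "connection.", nickname);
   log_fn(LOG_NOTICE,"We have successfully built our first circuit.");
   log_fn(LOG_INFO,  "Unused circuit timed out; launching a replacement.");
   log_fn(LOG_DEBUG, "Read %d bytes onto conn->inbuf.", n_read);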

[XXX Proposed convention: every message of severity INFO or higher should
either (A) be intelligible to end-users who don't know the Tor source; or
(B) somehow inform the end-users that they aren't expected to understand
the message (perhaps with a string like "internal error"). Option (A) is
to be preferred to option (B). -NM]

We use the 'doxygen' utility to generate documentation from our source code.
Here's how to use it:

1. Begin every file that should be documented with

     /**
      * \file filename.c
      * \brief Short description of the file.
      **/

   (Doxygen will recognize any comment beginning with /** as special.)

2. Before any function, structure, #define, or variable you want to
   document, add a comment of the form:

     /** Describe the function's actions in imperative sentences.
      *
      * Use blank lines for paragraph breaks.
      *
      * Write <b>argument_names</b> in boldface.
      *
      * \code
      *    place_example_code();
      *    between_code_and_endcode_commands();
      * \endcode
      */

3. Make sure to escape the characters "<", ">", "\", "%" and "#" as "\<",
   "\>", "\\", "\%", and "\#".

4. To document structure members, you can use two forms:

     struct foo {
       /** You can put the comment before an element; */
       int a;
       int b; /**< Or use the less-than symbol to put the comment after the element. */
     };

5. See the Doxygen manual for more information; this summary just scratches
   the surface.

See http://tor.eff.org/
    http://tor.eff.org/cvs/doc/tor-spec.txt
    http://tor.eff.org/cvs/doc/tor-design.tex
    http://tor.eff.org/cvs/doc/FAQ

See http://freehaven.net/anonbib/

[XXX insert references]

# ======================================================================
# Old HACKING document; merge into the above, move into tor-design.tex,
# ======================================================================

Routers. Onion routers, as far as the 'tor' program is concerned,
are a bunch of data items that are loaded into the router_array when
the program starts. Periodically it downloads a new set of routers
from a directory server, and updates the router_array. When a new OR
connection is started (see below), the relevant information is copied
from the router struct to the connection struct.

Connections. A connection is a long-standing tcp socket between
nodes. A connection is named based on what it's connected to -- an "OR
connection" has an onion router on the other end, an "OP connection" has
an onion proxy on the other end, an "exit connection" has a website or
other server on the other end, and an "AP connection" has an application
proxy (and thus a user) on the other end.

Circuits. A circuit is a path over the onion routing
network. Applications can connect to one end of the circuit, and can
create exit connections at the other end of the circuit. AP and exit
connections have only one circuit associated with them (and thus these
connection types are closed when the circuit is closed), whereas OP and
OR connections multiplex many circuits at once, and stay standing even
when there are no circuits running over them.

Streams. Streams are specific conversations between an AP and an exit.
Streams are multiplexed over circuits.

Cells. Some connections, specifically OR and OP connections, speak
"cells". This means that data over that connection is bundled into
256-byte packets (8 bytes of header and 248 bytes of payload). Each cell
has a type, or "command", which indicates what it's for.
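
One way to picture that layout (illustrative only; or.h and tor-spec.txt
give the authoritative header fields and their widths):

   #include <stdint.h>

   #define CELL_LEN         256
   #define CELL_HEADER_LEN  8
   #define CELL_PAYLOAD_LEN (CELL_LEN - CELL_HEADER_LEN)   /* 248 */

   typedef struct cell_t {
     uint16_t circ_id;                 /* which circuit the cell belongs to */
     uint8_t  command;                 /* what the cell is for */
     /* ...the remaining header bytes; see tor-spec.txt... */
     uint8_t  payload[CELL_PAYLOAD_LEN];
   } cell_t;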

[XXX no longer up to date]
Bandwidth throttling. Each cell-speaking connection has a maximum
bandwidth it can use, as specified in the routers.or file. Bandwidth
throttling can occur on both the sender side and the receiving side. If
the LinkPadding option is on, the sending side sends cells at regularly
spaced intervals (e.g., a connection with a bandwidth of 25600B/s would
queue a cell every 10ms). The receiving side protects against misbehaving
servers that send cells more frequently, by using a simple token bucket:

Each connection has a token bucket with a specified capacity. Tokens are
added to the bucket each second (when the bucket is full, new tokens
are discarded). Each token represents permission to receive one byte
from the network --- to receive a byte, the connection must remove a
token from the bucket. Thus if the bucket is empty, that connection must
wait until more tokens arrive. The number of tokens we add enforces a
long-term average rate of incoming bytes, yet we still permit short-term
bursts above the allowed bandwidth. Currently bucket sizes are set to
ten seconds' worth of traffic.
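
A minimal version of that bucket, as a sketch (the real fields live on
the connection struct, and the refill happens from the main loop's
once-per-second housekeeping):

   typedef struct token_bucket_t {
     int capacity;   /* bucket size: about ten seconds' worth of traffic */
     int tokens;     /* current tokens; one token == one byte */
     int rate;       /* tokens added per second (long-term average rate) */
   } token_bucket_t;

   /* Called once per second. */
   void bucket_refill(token_bucket_t *b) {
     b->tokens += b->rate;
     if (b->tokens > b->capacity)
       b->tokens = b->capacity;          /* overflowing tokens are discarded */
   }

   /* How many bytes may we read from this connection right now? */
   int bucket_bytes_allowed(const token_bucket_t *b, int wanted) {
     return (wanted < b->tokens) ? wanted : b->tokens;
   }

   /* Account for bytes actually read; at zero we stop reading until the
    * next refill. */
   void bucket_spend(token_bucket_t *b, int n_read) {
     b->tokens -= n_read;
   }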

The bandwidth throttling uses TCP to push back when we stop reading.
We extend it with token buckets to allow more flexibility for traffic
bursts.

Data congestion control. Even with the above bandwidth throttling,
we still need to worry about congestion, either accidental or intentional.
If a lot of people make circuits into the same node, and they all come out
through the same connection, then that connection may become saturated
(be unable to send out data cells as quickly as it wants to). An adversary
can make a 'put' request through the onion routing network to a webserver
he owns, and then refuse to read any of the bytes at the webserver end
of the circuit. These bottlenecks can propagate back through the entire
network, mucking up everything.

(See the tor-spec.txt document for details of how congestion control
works.)

In practice, all the nodes in the circuit maintain a receive window
close to maximum except the exit node, which stays around 0, periodically
receiving a sendme and reading more data cells from the webserver.
In this way we can use pretty much all of the available bandwidth for
data, but gracefully back off when faced with multiple circuits (a new
sendme arrives only after some cells have traversed the entire network),
stalled network connections, or attacks.

We don't need to reimplement full tcp windows, with sequence numbers,
the ability to drop cells when we're full, etc., because the tcp streams
already guarantee in-order delivery of each cell. Rather than trying
to build some sort of tcp-on-tcp scheme, we implement this minimal data
congestion control; so far it's enough.
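
The bookkeeping for this minimal scheme amounts to a per-circuit (and
per-stream) window plus a sendme acknowledgement, roughly as sketched
below. The constants and names here are illustrative; tor-spec.txt gives
the real window sizes and increments.

   #define WINDOW_START     1000   /* cells we may receive before stalling */
   #define WINDOW_INCREMENT  100   /* cells acknowledged by one sendme     */

   typedef struct window_t {
     int deliver_window;            /* decremented per data cell received */
   } window_t;

   extern void send_sendme_cell(void);       /* assumed helper */

   /* Called for every relay data cell that arrives on this circuit. */
   void note_data_cell_received(window_t *w) {
     if (--w->deliver_window <= WINDOW_START - WINDOW_INCREMENT) {
       /* We've consumed a full increment of the window: tell the other
        * side it may send that many more cells. */
       send_sendme_cell();
       w->deliver_window += WINDOW_INCREMENT;
     }
   }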

Router twins. In many cases when we ask for a router with a given
address and port, we really mean a router that knows a given key. Router
twins are two or more routers that share the same private key. We thus
give routers extra flexibility in choosing the next hop in the circuit: if
some of the twins are down or slow, a router can choose the more available
ones.

Currently the code tries for the primary router first, and if it's down,
chooses the first available twin.