Filename: 111-local-traffic-priority.txt
Title: Prioritizing local traffic over relayed traffic
Author: Roger Dingledine
Implemented-In: 0.2.0.x

Overview:

  We describe some ways to let Tor users operate as a relay and enforce
  rate limiting for relayed traffic without impacting their locally
  initiated traffic.

Motivation:

  Right now we encourage people who use Tor as a client to configure it
  as a relay too ("just click the button in Vidalia"). Most of these users
  are on asymmetric links, meaning they have a lot more download capacity
  than upload capacity. But if they enable rate limiting too, suddenly
  they're limited to the same download capacity as upload capacity. And
  they have to enable rate limiting, or their upstream pipe gets filled
  up, starts dropping packets, and now their net connection doesn't work
  even for non-Tor stuff. So they end up turning off the relaying part
  so they can use Tor (and other applications) again.

  So far this hasn't mattered that much: most of our fast relays are
  being operated only in relay mode, so the rate limiting makes sense
  for them. But if we want to be able to attract many more relays in
  the future, we need to let ordinary users act as relays too.

  Further, as we begin to deploy the blocking-resistance design and we
  rely on ordinary users to click the "Tor for Freedom" button, this
  limitation will become a serious stumbling block to getting volunteers
  to act as bridges.

The problem:

  Tor implements its rate limiting on the 'read' side by only reading
  a certain number of bytes from the network in each second. If it has
  emptied its token bucket, it doesn't read any more from the network;
  eventually TCP notices and stalls until we resume reading. But if we
  want to have two classes of service, we can't know what class a given
  incoming cell will be until we look at it, at which point we've already
  read it.
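
  For concreteness, here is a minimal C sketch of read-side token-bucket
  rate limiting; the names (token_bucket_t, bucket_refill, and so on)
  are illustrative, not Tor's actual API:

    #include <stddef.h>
    #include <time.h>

    typedef struct {
      int tokens;         /* bytes we may still read right now */
      int rate;           /* bytes added back per second */
      int burst;          /* maximum bucket size */
      time_t last_refill;
    } token_bucket_t;

    /* Add rate*elapsed tokens, capped at the burst size. */
    static void
    bucket_refill(token_bucket_t *b, time_t now)
    {
      b->tokens += (int)(b->rate * (now - b->last_refill));
      if (b->tokens > b->burst)
        b->tokens = b->burst;
      b->last_refill = now;
    }

    /* How many bytes may we read?  Returning zero stalls the
     * connection until TCP pushes back on the sender. */
    static size_t
    bucket_read_limit(const token_bucket_t *b)
    {
      return b->tokens > 0 ? (size_t)b->tokens : 0;
    }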

Some options:

  Option 1: read when our token bucket is full enough, and if it turns
  out that what we read was local traffic, then add the tokens back into
  the token bucket. This will work when local traffic load alternates
  with relayed traffic load; but it's a poor option in general, because
  when we're receiving both local and relayed traffic, there are plenty
  of cases where we'll end up with an empty token bucket, and then we're
  back where we were before.
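
  In code, option 1 amounts to charging every read against the bucket
  and then refunding reads that turn out to be local, once the cell has
  been parsed (same illustrative token_bucket_t as above):

    /* Charge a read of n bytes against the bucket. */
    static void
    bucket_charge(token_bucket_t *b, size_t n)
    {
      b->tokens -= (int)n;
    }

    /* That read turned out to be local traffic: give the tokens back. */
    static void
    bucket_refund(token_bucket_t *b, size_t n)
    {
      b->tokens += (int)n;
      if (b->tokens > b->burst)
        b->tokens = b->burst;
    }

  The failure mode described above is visible here: once tokens hits
  zero under mixed load, relayed traffic has already exhausted the
  bucket, and we stall before ever reading the local cells we would
  have refunded.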

  More generally, notice that our problem is easy when a given TCP
  connection has either entirely local circuits or entirely relayed
  circuits. In fact, even if both are present, if one class is
  entirely idle (none of its circuits have sent or received in the past
  N seconds), we can ignore that class until it wakes up again. So it
  only gets complex when a single connection contains active circuits
  of both classes.

  Next, notice that local traffic uses only the entry guards, whereas
  relayed traffic likely doesn't. So if we're a bridge handling just
  a few users, the expected number of overlapping connections would be
  almost zero, and even if we're a full relay the number of overlapping
  connections will be quite small.

  Option 2: build separate TCP connections for local traffic and for
  relayed traffic. In practice this will actually only require a few
  extra TCP connections: we would only need redundant TCP connections
  to at most the number of entry guards in use.

  However, this approach has some drawbacks. First, if the remote side
  wants to extend a circuit to you, how does it know which TCP connection
  to send it on? We would need some extra scheme to label some connections
  "client-only" during construction. Perhaps we could do this by seeing
  whether any circuit was made via CREATE_FAST; but this still opens
  up a race condition where the other side sends a create request
  immediately. The only ways I can imagine to avoid the race entirely
  are to specify our preference in the VERSIONS cell, or to add some
  sort of "nope, not this connection, why don't you try another rather
  than failing" response to create cells, or to forbid create cells on
  connections that you didn't initiate and on which you haven't seen
  any circuit creation requests yet -- this last one would lead to a bit
  more connection bloat but doesn't seem so bad. And we already accept
  this race for the case where directory authorities establish new TCP
  connections periodically to check reachability, and then hope to hang
  up on them soon after. (In any case this issue is moot for bridges,
  since each destination will be one-way with respect to extend requests:
  either receiving extend requests from bridge users or sending extend
  requests to the Tor server, never both.)
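
  The last variant reduces to a one-line check when a relay picks a
  connection to carry an outgoing create cell. A sketch, where the
  fields are hypothetical stand-ins (the real or_connection_t has no
  such fields):

    typedef struct {
      int we_initiated;    /* did we open this TCP connection? */
      int n_creates_seen;  /* create cells received on it so far */
    } conn_state_t;

    /* Never send a create on a connection the other side opened
     * unless it has already carried circuit-creation requests; open
     * a fresh connection instead (hence the connection bloat). */
    static int
    conn_usable_for_create(const conn_state_t *c)
    {
      return c->we_initiated || c->n_creates_seen > 0;
    }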

  The second problem with option 2 is that using two TCP connections
  reveals that there are two classes of traffic (and probably quickly
  reveals which is which, based on throughput). Now, it's unclear whether
  this information is already available to the other relay -- he would
  easily be able to tell that some circuits are fast and some are rate
  limited, after all -- but it would be nice to not add even more ways to
  leak that information. Also, it's less clear that an external observer
  already has this information if the circuits are all bundled together,
  and for this case it's worth trying to protect it.

  Option 3: tell the other side about our rate limiting rules. When we
  establish the TCP connection, specify the different policy classes we
  have configured. Each time we extend a circuit, specify which policy
  class that circuit should be part of. Then hope the other side obeys
  our wishes. (If he doesn't, hang up on him.) Besides the design and
  coordination hassles involved in this approach, there's a big problem:
  our rate limiting classes apply to all our connections, not just
  pairwise connections. How does one server we're connected to know how
  much of our bucket has already been spent by another? I could imagine
  a complex and inefficient "ok, now you can send me those two more cells
  that you've got queued" protocol. I'm not sure how else we could do it.
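
  Sketched as a cell payload, that protocol might look like the
  following (entirely hypothetical; nothing like it exists in Tor):

    #include <stdint.h>

    /* "You may send me up to 'cells' more cells in this class."
     * The granting side would have to debit its shared global bucket
     * before issuing each grant, which is exactly the coordination
     * overhead objected to above. */
    typedef struct {
      uint8_t  policy_class;  /* which configured class this covers */
      uint16_t cells;         /* extra cells the peer may now send */
    } bucket_grant_cell_t;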

  (Gosh. How could UDP designs possibly be compatible with rate limiting
  with multiple bucket sizes?)

  Option 4: put both classes of circuits over a single connection, and
  keep track of the last time we read or wrote a high-priority cell. If
  it's been less than N seconds, give the whole connection high priority,
  else give the whole connection low priority.
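
  The bookkeeping for option 4 is tiny. A sketch with illustrative
  names (the real field, client_used, is described under the
  implementation notes below):

    #include <time.h>

    #define HIPRI_IDLE_TIMEOUT 30  /* illustrative value of N */

    /* Call whenever we read or write a cell belonging to a locally
     * originated (high-priority) circuit. */
    static void
    note_hipri_cell(time_t *last_hipri, time_t now)
    {
      *last_hipri = now;
    }

    /* The whole connection is high priority while any local circuit
     * on it has been active within the last N seconds. */
    static int
    conn_is_hipri(time_t last_hipri, time_t now)
    {
      return now - last_hipri < HIPRI_IDLE_TIMEOUT;
    }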

  Option 5: put both classes of circuits over a single connection, and
  play a complex juggling game by periodically telling the remote side
  what rate limits to set for that connection, so you end up giving
  priority to the right connections but still stick to roughly your
  intended BandwidthRate and RelayBandwidthRate.

  Option 6: ...?

Prognosis:

  Nick really didn't like option 2 because of the partitioning questions.

  I've put option 4 into place as of Tor 0.2.0.3-alpha.

  In terms of implementation, it will be easy: just add a time_t to
  or_connection_t that specifies client_used (used by the initiator
  of the connection to rate limit it differently depending on how
  recently the time_t was reset). We currently update client_used
  in three places:
    - command_process_relay_cell() when we receive a relay cell for
      an origin circuit.
    - relay_send_command_from_edge() when we send a relay cell for
      an origin circuit.
    - circuit_deliver_create_cell() when we send a create cell.
  We could probably remove the third case and it would still work,
  but hey.
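
  A sketch of how the read-side limiter can then pick a bucket for each
  connection; only or_connection_t and client_used come from the text
  above, and the rest (we_initiated, CLIENT_IDLE_TIMEOUT, the bucket
  parameters) is illustrative:

    #include <time.h>

    #define CLIENT_IDLE_TIMEOUT 30  /* seconds of lingering priority */

    /* Stand-in for the fields that matter here: client_used is the
     * time_t described above; we_initiated is a hypothetical flag
     * for "we opened this TCP connection ourselves". */
    typedef struct {
      time_t client_used;
      int we_initiated;
    } or_conn_state_t;

    /* Connections we initiated that have recently carried local
     * traffic read from the ordinary global bucket; everything else
     * reads from the (typically smaller) relayed-traffic bucket. */
    static int
    read_bucket_for_conn(const or_conn_state_t *c, time_t now,
                         int global_bucket, int relayed_bucket)
    {
      if (c->we_initiated && now - c->client_used < CLIENT_IDLE_TIMEOUT)
        return global_bucket;
      return relayed_bucket;
    }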