# -*- encoding: binary -*-

require "sleepy_penguin"
# This is an edge-triggered epoll concurrency model with blocking
# accept() in a (hopefully) native thread.  This is comparable to
# ThreadPool and CoolioThreadPool, but is Linux-only and able to exploit
# "wake one" accept() behavior of a blocking accept() call when used
# with native threads.
#
# This supports streaming "rack.input" and allows +:pool_size+ tuning
# independently of +worker_connections+.
#
# This is only supported under Linux 2.6 kernels.
# === Compared to CoolioThreadPool
#
# This does not buffer outgoing responses in userspace at all, meaning
# it can lower response latency to fast clients and also prevent
# starvation of other clients when reading slow disks for responses
# (when combined with native threads).
#
# CoolioThreadPool is likely better for trickling large static files or
# proxying responses to slow clients, but this is likely better for fast
# clients.
#
# Unlike CoolioThreadPool, this supports streaming "rack.input", which
# is useful for reading large uploads from fast clients.
#
# This exposes no special API or extensions on top of Rack.
# === Compared to ThreadPool
#
# This can maintain idle connections without the memory overhead of an
# idle Thread.  The cost of handling/dispatching active connections is
# exactly the same for an equivalent number of active connections
# (but independently tunable).
#
# Since +:pool_size+ and +worker_connections+ are independently tunable,
# it is possible to get into situations where active connections need
# to wait for an idle thread in the thread pool before being processed.
module Rainbows::XEpollThreadPool
  extend Rainbows::PoolSize

  include Rainbows::Base

  def init_worker_process(worker)
    super
    require "rainbows/xepoll_thread_pool/client"
    Rainbows::Client.__send__ :include, Client
  end
  def worker_loop(worker) # :nodoc:
    init_worker_process(worker)
    Client.loop
  end
end
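The "wake one" accept() behavior described in the comments above can be observed with plain Ruby and nothing else: several native threads block in accept(2) on the same listen socket, and the kernel wakes exactly one of them per incoming connection, avoiding a thundering herd. The sketch below is illustrative only and is not part of Rainbows!; every name in it is local to the example.

```ruby
require "socket"

# Listen on an ephemeral port on the loopback interface.
srv = TCPServer.new("127.0.0.1", 0)
port = srv.addr[1]

# Three native threads all block in accept(2) on the same socket.
accepted_by = Queue.new
pool = 3.times.map do |i|
  Thread.new do
    client = srv.accept       # blocking accept in a native thread
    client.close
    accepted_by << i          # record which thread the kernel woke
  end
end

# A single incoming connection wakes exactly one of the three threads;
# the other two stay blocked in accept(2).
TCPSocket.new("127.0.0.1", port).close
winner = accepted_by.pop
```

In a Rainbows! configuration file this model would typically be selected with something like `use :XEpollThreadPool, :pool_size => 50` alongside `worker_connections 100` (example values, not recommendations), which is how +:pool_size+ is tuned independently of +worker_connections+ as noted above.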