\Unicorn performance is generally as good as a (mostly) Ruby web server
can provide. Most often the performance bottleneck is in the web
application running on Unicorn rather than Unicorn itself.

== \Unicorn Configuration

See Unicorn::Configurator for details on the config file format.
+worker_processes+ is the most-commonly needed tuning parameter.
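
The config file is plain Ruby evaluated by Unicorn::Configurator; a
minimal sketch looks like the following (the values are illustrative
only, not recommendations):

  # unicorn.conf.rb -- loaded with: unicorn -c unicorn.conf.rb
  worker_processes 4              # see the next section for how to size this
  listen "127.0.0.1:8080"         # see the listen options below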

=== Unicorn::Configurator#worker_processes

* worker_processes should be scaled to the number of processes your
  backend system(s) can support. DO NOT scale it to the number of
  external network clients your application expects to be serving.
  \Unicorn is NOT for serving slow clients; that is the job of nginx.

* worker_processes should be *at* *least* the number of CPU cores on
  a dedicated server. If your application has occasionally slow
  responses that are /not/ CPU-intensive, you may increase this to
  work around those inefficiencies (see the sketch after this list).

* worker_processes may be increased for Unicorn::OobGC users to provide
  more consistent response times.

* Never, ever, increase worker_processes to the point where the system
  runs out of physical memory and hits swap. Production servers should
  never see heavy swap activity.
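
A minimal sketch of sizing workers from the CPU count; Etc.nprocessors
(Ruby 2.2+) and the 1.5x head room are assumptions to adapt, not part
of \Unicorn itself:

  require "etc"

  # Etc.nprocessors needs Ruby 2.2+; fall back to a fixed count otherwise
  cores = Etc.respond_to?(:nprocessors) ? Etc.nprocessors : 4

  # at least one worker per core on a dedicated server; the extra head
  # room is an assumed allowance for occasionally slow responses that
  # are not CPU-intensive (e.g. waiting on external services)
  worker_processes (cores * 1.5).ceil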

=== Unicorn::Configurator#listen Options

* Setting a very low value for the :backlog parameter in "listen"
  directives can allow failover to happen more quickly if your
  cluster is configured for it.

* If you're doing extremely simple benchmarks and getting connection
  errors under high request rates, increasing your :backlog parameter
  above the already-generous default of 1024 can help avoid connection
  errors. Keep in mind this is not recommended for real traffic if
  you have another machine to fail over to (see above).

* The :rcvbuf and :sndbuf parameters generally do not need to be set for
  TCP listeners under Linux 2.6 because auto-tuning is enabled. UNIX
  domain sockets do not have auto-tuning buffer sizes, so increasing
  those can save syscalls and task switches for larger requests and
  responses. If your app only generates small responses or expects
  small requests, you may shrink the buffer sizes to save memory, too
  (see the sketch after this list).

* Having socket buffers too large can also be detrimental or have
  little effect. Huge buffers can put more pressure on the allocator
  and may also thrash CPU caches, cancelling out performance gains
  one would normally expect.

* UNIX domain sockets are slightly faster than TCP sockets, but only
  work if nginx is on the same machine.
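
A sketch of the listen directives discussed above; the socket path,
address, and sizes are illustrative assumptions, not recommendations:

  # UNIX domain socket for an nginx running on the same machine;
  # a small :backlog lets a cluster fail over to another host sooner,
  # and explicit buffer sizes matter here since there is no auto-tuning
  listen "/path/to/unicorn.sock", :backlog => 64,
         :rcvbuf => 65536, :sndbuf => 65536

  # TCP listener: Linux 2.6 auto-tunes the buffers; a large :backlog is
  # only for simple benchmarking, not for setups with failover
  listen "127.0.0.1:8080", :backlog => 2048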

== Other \Unicorn settings

* Setting "preload_app true" can allow copy-on-write-friendly GC to
  be used to save memory. It will probably not work out of the box with
  applications that open sockets or perform random I/O on files.
  Databases like TokyoCabinet use concurrency-safe pread()/pwrite()
  functions for safe sharing of database file descriptors across
  processes (the usual fork hooks are sketched after this list).

* On POSIX-compliant filesystems, it is safe for multiple threads or
  processes to append to one log file as long as all of them write
  unbuffered (File#sync = true) or record(line)-buffer their writes
  in userspace before writing to the file.
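
A minimal sketch of a preload_app setup; the ActiveRecord reconnect
hooks and the log file path are assumptions for a typical Rails-style
app, not something \Unicorn requires:

  require "logger"

  preload_app true

  before_fork do |server, worker|
    # the master's database connection must not be shared with workers
    defined?(ActiveRecord::Base) and
      ActiveRecord::Base.connection.disconnect!
  end

  after_fork do |server, worker|
    # each worker re-establishes its own connection after forking
    defined?(ActiveRecord::Base) and
      ActiveRecord::Base.establish_connection
  end

  # unbuffered (File#sync = true) appends from many workers to one log
  # file are safe on POSIX-compliant filesystems; the path is illustrative
  log = File.open("/path/to/unicorn.log", "ab")
  log.sync = true
  logger Logger.new(log)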

== Kernel Parameters (Linux sysctl)

WARNING: Do not change system parameters unless you know what you're doing!

* net.core.rmem_max and net.core.wmem_max can increase the allowed
  size of :rcvbuf and :sndbuf respectively. This is mostly only useful
  for UNIX domain sockets which do not have auto-tuning buffer sizes
  (a combined example appears at the end of this section).

* For load testing/benchmarking with UNIX domain sockets, you should
  consider increasing net.core.somaxconn or else nginx will start
  failing to connect under heavy load. You may also consider setting
  a higher :backlog to listen on as noted earlier.

* If you're running out of local ports, consider lowering
  net.ipv4.tcp_fin_timeout to 20-30 (default: 60 seconds). Also
  consider widening the usable port range by changing
  net.ipv4.ip_local_port_range.

* Setting net.ipv4.tcp_timestamps=1 will also allow setting
  net.ipv4.tcp_tw_reuse=1 and net.ipv4.tcp_tw_recycle=1, which along
  with the above settings can slow down port exhaustion. Not all
  networks are compatible with these settings; check with your friendly
  network administrator before changing these.

* Increasing the MTU size can reduce framing overhead for larger
  transfers. One often-overlooked detail is that the loopback
  device (usually "lo") can have its MTU increased, too.
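
Pulling the above together, an illustrative sysctl drop-in might look
like the following; every value here is an assumption to be tuned for
your own workload (and checked with your network administrator), not a
recommendation:

  # net.core: larger allowed socket buffers and listen backlog, mainly
  # relevant for UNIX domain sockets and for benchmarking
  net.core.rmem_max = 1048576
  net.core.wmem_max = 1048576
  net.core.somaxconn = 2048

  # net.ipv4: free up local ports faster and widen the usable range
  net.ipv4.tcp_fin_timeout = 30
  net.ipv4.ip_local_port_range = 10000 65535
  net.ipv4.tcp_timestamps = 1
  net.ipv4.tcp_tw_reuse = 1
  net.ipv4.tcp_tw_recycle = 1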