Does not relay proxify after a while #99
I have a similar issue on Amazon Linux (RHEL-derived) that pops up anywhere from 1 to 10 days after launching.
Would you mind trying https://github.com/semigodking/redsocks ? I would like to take this opportunity to see whether my fork has the same issue. Thank you!
Maybe that's some bug in the code that is triggered on connection pressure... Also, `kill -USR1 $(pidof redsocks)` next time it gets stuck; it will dump some information to the log.
@darkk Thanks for such a super-fast response - it's much appreciated. @semigodking Looks interesting, I'll also have a play around with that.
On Tue, 07 Mar 2017 00:15:54 -0800 Leonid Evdokimov ***@***.***> wrote:
Also, `kill -USR1 $(pidof redsocks)` next time it gets stuck, it will
dump some information to the log.
```
1488909880.098407 notice redsocks.c:1286 redsocks_debug_dump_instance(...) Dumping client list for socks5 at 0.0.0.0:9050:
1488909880.098433 notice redsocks.c:1337 redsocks_debug_dump_instance(...) End of client list. 0 clients.
```
It would be interesting to correlate it with `netstat -tan` and `ss
-tein` output.
```
netstat -tan | grep 9050
tcp 0 0 0.0.0.0:9050 0.0.0.0:* LISTEN
tcp 0 0 10.6.0.1:9050 10.6.104.62:42631 SYN_RECV
tcp 0 0 10.6.0.1:9050 10.6.238.107:53330 SYN_RECV
tcp 0 0 10.4.0.1:9050 10.4.149.116:47635 SYN_RECV
tcp 0 0 10.9.0.1:9050 10.9.204.105:54486 SYN_RECV
tcp 0 0 10.3.0.1:9050 10.3.1.96:49050 SYN_RECV
tcp 0 0 10.10.0.1:9050 10.10.134.36:52647 SYN_RECV
tcp 0 0 10.3.0.1:9050 10.3.1.96:40438 SYN_RECV
tcp 0 0 10.9.0.1:9050 10.9.19.20:40556 SYN_RECV
tcp 0 0 10.10.0.1:9050 10.10.134.36:52645 SYN_RECV
tcp 0 0 10.9.0.1:9050 10.9.204.105:46936 SYN_RECV
tcp 194 0 10.2.0.1:9050 10.2.44.22:41134 CLOSE_WAIT
tcp 136 0 10.9.0.1:9050 10.9.199.195:58053 CLOSE_WAIT
tcp 185 0 10.8.0.1:9050 10.8.156.46:60907 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:40167 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:42527 CLOSE_WAIT
tcp 253 0 10.0.3.1:9050 10.0.3.207:54584 CLOSE_WAIT
tcp 541 0 10.0.3.1:9050 10.0.3.207:36448 CLOSE_WAIT
tcp 253 0 10.0.3.1:9050 10.0.3.207:46275 CLOSE_WAIT
tcp 253 0 10.0.3.1:9050 10.0.3.207:47422 CLOSE_WAIT
tcp 253 0 10.0.3.1:9050 10.0.3.207:35482 CLOSE_WAIT
tcp 253 0 10.0.3.1:9050 10.0.3.207:36901 CLOSE_WAIT
tcp 181 0 10.2.0.1:9050 10.2.44.22:56834 CLOSE_WAIT
tcp 185 0 10.8.0.1:9050 10.8.156.46:35116 CLOSE_WAIT
tcp 193 0 10.9.0.1:9050 10.9.199.195:37957 CLOSE_WAIT
tcp 518 0 10.0.3.1:9050 10.0.3.207:33816 CLOSE_WAIT
tcp 253 0 10.0.3.1:9050 10.0.3.207:39627 CLOSE_WAIT
tcp 185 0 10.8.0.1:9050 10.8.156.46:60946 CLOSE_WAIT
tcp 253 0 10.0.3.1:9050 10.0.3.207:45513 CLOSE_WAIT
tcp 223 0 10.0.3.1:9050 10.0.3.207:42687 CLOSE_WAIT
tcp 181 0 10.2.0.1:9050 10.2.44.22:57216 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:54978 CLOSE_WAIT
tcp 685 0 10.0.3.1:9050 10.0.3.207:42850 CLOSE_WAIT
tcp 198 0 10.2.0.1:9050 10.2.69.145:37347 CLOSE_WAIT
tcp 184 0 10.2.0.1:9050 10.2.69.145:36885 CLOSE_WAIT
tcp 181 0 10.2.0.1:9050 10.2.44.22:41125 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:56918 CLOSE_WAIT
tcp 221 0 10.0.3.1:9050 10.0.3.207:43091 CLOSE_WAIT
tcp 472 0 10.0.3.1:9050 10.0.3.207:56625 CLOSE_WAIT
tcp 181 0 10.2.0.1:9050 10.2.44.22:44356 CLOSE_WAIT
tcp 254 0 10.8.0.1:9050 10.8.12.208:41405 CLOSE_WAIT
tcp 253 0 10.0.3.1:9050 10.0.3.207:35051 CLOSE_WAIT
tcp 541 0 10.0.3.1:9050 10.0.3.207:32784 CLOSE_WAIT
tcp 184 0 10.2.0.1:9050 10.2.44.22:43055 CLOSE_WAIT
tcp 87 0 10.9.0.1:9050 10.9.199.195:58221 CLOSE_WAIT
tcp 541 0 10.0.3.1:9050 10.0.3.207:43153 CLOSE_WAIT
tcp 185 0 10.8.0.1:9050 10.8.156.46:60947 CLOSE_WAIT
tcp 541 0 10.0.3.1:9050 10.0.3.207:37858 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:35756 CLOSE_WAIT
tcp 197 0 10.9.0.1:9050 10.9.199.195:58366 CLOSE_WAIT
tcp 185 0 10.8.0.1:9050 10.8.156.46:35115 CLOSE_WAIT
tcp 181 0 10.2.0.1:9050 10.2.44.22:37110 CLOSE_WAIT
tcp 187 0 10.8.0.1:9050 10.8.156.46:43098 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:51281 CLOSE_WAIT
tcp 87 0 10.9.0.1:9050 10.9.199.195:47494 CLOSE_WAIT
tcp 541 0 10.0.3.1:9050 10.0.3.207:41410 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:46302 CLOSE_WAIT
tcp 518 0 10.0.3.1:9050 10.0.3.207:54629 CLOSE_WAIT
tcp 253 0 10.0.3.1:9050 10.0.3.207:48509 CLOSE_WAIT
tcp 185 0 10.8.0.1:9050 10.8.156.46:60948 CLOSE_WAIT
tcp 518 0 10.0.3.1:9050 10.0.3.207:41163 CLOSE_WAIT
tcp 253 0 10.0.3.1:9050 10.0.3.207:53894 CLOSE_WAIT
tcp 253 0 10.0.3.1:9050 10.0.3.207:42860 CLOSE_WAIT
tcp 478 0 10.0.3.1:9050 10.0.3.207:46958 CLOSE_WAIT
tcp 184 0 10.2.0.1:9050 10.2.69.145:32839 CLOSE_WAIT
tcp 185 0 10.8.0.1:9050 10.8.156.46:60906 CLOSE_WAIT
tcp 1 0 10.9.0.1:9050 10.9.199.195:33843 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:45494 CLOSE_WAIT
tcp 219 0 10.2.0.1:9050 10.2.44.22:40015 CLOSE_WAIT
tcp 113 0 10.9.0.1:9050 10.9.199.195:58106 CLOSE_WAIT
tcp 181 0 10.8.0.1:9050 10.8.45.63:44445 CLOSE_WAIT
tcp 182 0 10.8.0.1:9050 10.8.156.46:39649 CLOSE_WAIT
tcp 174 0 10.2.0.1:9050 10.2.44.22:50751 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:51452 CLOSE_WAIT
tcp 181 0 10.2.0.1:9050 10.2.44.22:58637 CLOSE_WAIT
tcp 3963 0 10.8.0.1:9050 10.8.12.208:49158 CLOSE_WAIT
tcp 219 0 10.2.0.1:9050 10.2.44.22:56557 CLOSE_WAIT
tcp 185 0 10.8.0.1:9050 10.8.156.46:35114 CLOSE_WAIT
tcp 185 0 10.8.0.1:9050 10.8.156.46:60908 CLOSE_WAIT
tcp 219 0 10.2.0.1:9050 10.2.44.22:58471 CLOSE_WAIT
tcp 182 0 10.8.0.1:9050 10.8.156.46:39651 CLOSE_WAIT
tcp 345 0 10.2.0.1:9050 10.2.44.22:42823 CLOSE_WAIT
tcp 854 0 10.2.0.1:9050 10.2.44.22:56216 CLOSE_WAIT
tcp 192 0 10.9.0.1:9050 10.9.199.195:46441 CLOSE_WAIT
tcp 235 0 10.0.3.1:9050 10.0.3.207:44272 CLOSE_WAIT
tcp 187 0 10.8.0.1:9050 10.8.156.46:43080 CLOSE_WAIT
tcp 685 0 10.0.3.1:9050 10.0.3.207:58629 CLOSE_WAIT
tcp 185 0 10.8.0.1:9050 10.8.156.46:60905 CLOSE_WAIT
tcp 253 0 10.0.3.1:9050 10.0.3.207:40658 CLOSE_WAIT
tcp 253 0 10.0.3.1:9050 10.0.3.207:47197 CLOSE_WAIT
tcp 181 0 10.2.0.1:9050 10.2.44.22:45836 CLOSE_WAIT
tcp 253 0 10.0.3.1:9050 10.0.3.207:58881 CLOSE_WAIT
tcp 225 0 10.0.3.1:9050 10.0.3.207:48370 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:37272 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:37044 CLOSE_WAIT
tcp 192 0 10.9.0.1:9050 10.9.199.195:34823 CLOSE_WAIT
tcp 207 0 10.2.0.1:9050 10.2.144.233:39030 ESTABLISHED
tcp 219 0 10.2.0.1:9050 10.2.44.22:60253 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:54161 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:47293 CLOSE_WAIT
tcp 522 0 10.0.3.1:9050 10.0.3.207:41302 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:47104 CLOSE_WAIT
tcp 182 0 10.8.0.1:9050 10.8.156.46:39652 CLOSE_WAIT
tcp 221 0 10.0.3.1:9050 10.0.3.207:49130 CLOSE_WAIT
tcp 182 0 10.8.0.1:9050 10.8.156.46:39650 CLOSE_WAIT
tcp 385 0 10.8.0.1:9050 10.8.12.208:41440 CLOSE_WAIT
tcp 207 0 10.2.0.1:9050 10.2.44.22:56702 CLOSE_WAIT
tcp 854 0 10.2.0.1:9050 10.2.44.22:56218 CLOSE_WAIT
tcp 197 0 10.2.0.1:9050 10.2.69.145:44056 CLOSE_WAIT
tcp 253 0 10.0.3.1:9050 10.0.3.207:38558 CLOSE_WAIT
tcp 182 0 10.8.0.1:9050 10.8.156.46:39684 CLOSE_WAIT
tcp 191 0 10.8.0.1:9050 10.8.45.63:47418 CLOSE_WAIT
tcp 477 0 10.6.0.1:9050 10.6.103.145:41246 CLOSE_WAIT
tcp 181 0 10.2.0.1:9050 10.2.44.22:39016 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:36349 CLOSE_WAIT
tcp 518 0 10.0.3.1:9050 10.0.3.207:42187 CLOSE_WAIT
tcp 194 0 10.7.0.1:9050 10.7.238.30:56178 CLOSE_WAIT
tcp 518 0 10.0.3.1:9050 10.0.3.207:56098 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:41980 CLOSE_WAIT
tcp 194 0 10.2.0.1:9050 10.2.44.22:37374 CLOSE_WAIT
```
```
ss -tein | grep 9050
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:41134 ino:0 sk:0000000c -->
CLOSE-WAIT 136 0 10.9.0.1:9050 10.9.199.195:58053 ino:0 sk:00000018 -->
CLOSE-WAIT 185 0 10.8.0.1:9050 10.8.156.46:60907 ino:0 sk:0000001f -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:40167 ino:0 sk:00000020 -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:42527 ino:0 sk:00000021 -->
CLOSE-WAIT 253 0 10.0.3.1:9050 10.0.3.207:54584 ino:0 sk:00000029 -->
CLOSE-WAIT 541 0 10.0.3.1:9050 10.0.3.207:36448 ino:0 sk:0000002a -->
CLOSE-WAIT 253 0 10.0.3.1:9050 10.0.3.207:46275 ino:0 sk:00000037 -->
CLOSE-WAIT 253 0 10.0.3.1:9050 10.0.3.207:47422 ino:0 sk:0000003b -->
CLOSE-WAIT 253 0 10.0.3.1:9050 10.0.3.207:35482 ino:0 sk:0000004f -->
CLOSE-WAIT 253 0 10.0.3.1:9050 10.0.3.207:36901 ino:0 sk:00000050 -->
CLOSE-WAIT 181 0 10.2.0.1:9050 10.2.44.22:56834 ino:0 sk:00000057 -->
CLOSE-WAIT 185 0 10.8.0.1:9050 10.8.156.46:35116 ino:0 sk:0000005d -->
CLOSE-WAIT 193 0 10.9.0.1:9050 10.9.199.195:37957 ino:0 sk:0000006d -->
CLOSE-WAIT 518 0 10.0.3.1:9050 10.0.3.207:33816 ino:0 sk:00000070 -->
CLOSE-WAIT 253 0 10.0.3.1:9050 10.0.3.207:39627 ino:0 sk:00000081 -->
CLOSE-WAIT 185 0 10.8.0.1:9050 10.8.156.46:60946 ino:0 sk:00000082 -->
CLOSE-WAIT 253 0 10.0.3.1:9050 10.0.3.207:45513 ino:0 sk:0000008a -->
CLOSE-WAIT 223 0 10.0.3.1:9050 10.0.3.207:42687 ino:0 sk:0000009b -->
CLOSE-WAIT 181 0 10.2.0.1:9050 10.2.44.22:57216 ino:0 sk:0000009e -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:54978 ino:0 sk:000000a0 -->
CLOSE-WAIT 685 0 10.0.3.1:9050 10.0.3.207:42850 ino:0 sk:000000a1 -->
CLOSE-WAIT 198 0 10.2.0.1:9050 10.2.69.145:37347 ino:0 sk:000000a6 -->
CLOSE-WAIT 184 0 10.2.0.1:9050 10.2.69.145:36885 ino:0 sk:000000a7 -->
CLOSE-WAIT 181 0 10.2.0.1:9050 10.2.44.22:41125 ino:0 sk:000000a9 -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:56918 ino:0 sk:000000b0 -->
CLOSE-WAIT 221 0 10.0.3.1:9050 10.0.3.207:43091 ino:0 sk:000000b5 -->
CLOSE-WAIT 472 0 10.0.3.1:9050 10.0.3.207:56625 ino:0 sk:000000bf -->
CLOSE-WAIT 181 0 10.2.0.1:9050 10.2.44.22:44356 ino:0 sk:000000c2 -->
CLOSE-WAIT 254 0 10.8.0.1:9050 10.8.12.208:41405 ino:0 sk:000000cc -->
CLOSE-WAIT 253 0 10.0.3.1:9050 10.0.3.207:35051 ino:0 sk:000000d2 -->
CLOSE-WAIT 541 0 10.0.3.1:9050 10.0.3.207:32784 ino:0 sk:000000db -->
CLOSE-WAIT 184 0 10.2.0.1:9050 10.2.44.22:43055 ino:0 sk:000000e1 -->
CLOSE-WAIT 87 0 10.9.0.1:9050 10.9.199.195:58221 ino:0 sk:000000e7 -->
CLOSE-WAIT 541 0 10.0.3.1:9050 10.0.3.207:43153 ino:0 sk:000000f1 -->
CLOSE-WAIT 185 0 10.8.0.1:9050 10.8.156.46:60947 ino:0 sk:000000f7 -->
CLOSE-WAIT 541 0 10.0.3.1:9050 10.0.3.207:37858 ino:0 sk:000000ff -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:35756 ino:0 sk:00000101 -->
CLOSE-WAIT 197 0 10.9.0.1:9050 10.9.199.195:58366 ino:0 sk:0000010a -->
CLOSE-WAIT 185 0 10.8.0.1:9050 10.8.156.46:35115 ino:0 sk:0000010d -->
CLOSE-WAIT 181 0 10.2.0.1:9050 10.2.44.22:37110 ino:0 sk:00000110 -->
CLOSE-WAIT 187 0 10.8.0.1:9050 10.8.156.46:43098 ino:0 sk:00000112 -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:51281 ino:0 sk:00000113 -->
CLOSE-WAIT 87 0 10.9.0.1:9050 10.9.199.195:47494 ino:0 sk:00000115 -->
CLOSE-WAIT 541 0 10.0.3.1:9050 10.0.3.207:41410 ino:0 sk:00000119 -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:46302 ino:0 sk:0000011c -->
CLOSE-WAIT 518 0 10.0.3.1:9050 10.0.3.207:54629 ino:0 sk:00000129 -->
CLOSE-WAIT 253 0 10.0.3.1:9050 10.0.3.207:48509 ino:0 sk:0000012c -->
CLOSE-WAIT 185 0 10.8.0.1:9050 10.8.156.46:60948 ino:0 sk:0000012d -->
CLOSE-WAIT 518 0 10.0.3.1:9050 10.0.3.207:41163 ino:0 sk:00000130 -->
CLOSE-WAIT 253 0 10.0.3.1:9050 10.0.3.207:53894 ino:0 sk:00000132 -->
CLOSE-WAIT 253 0 10.0.3.1:9050 10.0.3.207:42860 ino:0 sk:00000134 -->
CLOSE-WAIT 478 0 10.0.3.1:9050 10.0.3.207:46958 ino:0 sk:00000136 -->
CLOSE-WAIT 184 0 10.2.0.1:9050 10.2.69.145:32839 ino:0 sk:00000142 -->
CLOSE-WAIT 185 0 10.8.0.1:9050 10.8.156.46:60906 ino:0 sk:00000145 -->
CLOSE-WAIT 1 0 10.9.0.1:9050 10.9.199.195:33843 ino:0 sk:00000152 -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:45494 ino:0 sk:00000159 -->
CLOSE-WAIT 219 0 10.2.0.1:9050 10.2.44.22:40015 ino:0 sk:0000015d -->
CLOSE-WAIT 113 0 10.9.0.1:9050 10.9.199.195:58106 ino:0 sk:00000161 -->
CLOSE-WAIT 181 0 10.8.0.1:9050 10.8.45.63:44445 ino:0 sk:00000162 -->
CLOSE-WAIT 182 0 10.8.0.1:9050 10.8.156.46:39649 ino:0 sk:00000164 -->
CLOSE-WAIT 174 0 10.2.0.1:9050 10.2.44.22:50751 ino:0 sk:0000016c -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:51452 ino:0 sk:00000182 -->
CLOSE-WAIT 181 0 10.2.0.1:9050 10.2.44.22:58637 ino:0 sk:00000186 -->
CLOSE-WAIT 3963 0 10.8.0.1:9050 10.8.12.208:49158 ino:0 sk:0000018f -->
CLOSE-WAIT 219 0 10.2.0.1:9050 10.2.44.22:56557 ino:0 sk:00000191 -->
CLOSE-WAIT 185 0 10.8.0.1:9050 10.8.156.46:35114 ino:0 sk:00000196 -->
CLOSE-WAIT 185 0 10.8.0.1:9050 10.8.156.46:60908 ino:0 sk:000001aa -->
CLOSE-WAIT 219 0 10.2.0.1:9050 10.2.44.22:58471 ino:0 sk:000001b7 -->
CLOSE-WAIT 182 0 10.8.0.1:9050 10.8.156.46:39651 ino:0 sk:000001ba -->
CLOSE-WAIT 345 0 10.2.0.1:9050 10.2.44.22:42823 ino:0 sk:000001bb -->
CLOSE-WAIT 854 0 10.2.0.1:9050 10.2.44.22:56216 ino:0 sk:000001be -->
CLOSE-WAIT 192 0 10.9.0.1:9050 10.9.199.195:46441 ino:0 sk:000001c8 -->
CLOSE-WAIT 235 0 10.0.3.1:9050 10.0.3.207:44272 ino:0 sk:000001ca -->
CLOSE-WAIT 187 0 10.8.0.1:9050 10.8.156.46:43080 ino:0 sk:000001d5 -->
CLOSE-WAIT 685 0 10.0.3.1:9050 10.0.3.207:58629 ino:0 sk:000001d7 -->
CLOSE-WAIT 185 0 10.8.0.1:9050 10.8.156.46:60905 ino:0 sk:000001df -->
CLOSE-WAIT 253 0 10.0.3.1:9050 10.0.3.207:40658 ino:0 sk:000001e2 -->
CLOSE-WAIT 253 0 10.0.3.1:9050 10.0.3.207:47197 ino:0 sk:000001e7 -->
CLOSE-WAIT 181 0 10.2.0.1:9050 10.2.44.22:45836 ino:0 sk:000001eb -->
CLOSE-WAIT 253 0 10.0.3.1:9050 10.0.3.207:58881 ino:0 sk:000001f1 -->
CLOSE-WAIT 225 0 10.0.3.1:9050 10.0.3.207:48370 ino:0 sk:000001fa -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:37272 ino:0 sk:000001fb -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:37044 ino:0 sk:00000200 -->
CLOSE-WAIT 192 0 10.9.0.1:9050 10.9.199.195:34823 ino:0 sk:00000202 -->
ESTAB 207 0 10.2.0.1:9050 10.2.144.233:39030 ino:0 sk:00000204 <->
CLOSE-WAIT 219 0 10.2.0.1:9050 10.2.44.22:60253 ino:0 sk:0000020b -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:54161 ino:0 sk:00000211 -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:47293 ino:0 sk:00000215 -->
CLOSE-WAIT 522 0 10.0.3.1:9050 10.0.3.207:41302 ino:0 sk:00000216 -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:47104 ino:0 sk:0000021a -->
CLOSE-WAIT 182 0 10.8.0.1:9050 10.8.156.46:39652 ino:0 sk:00000220 -->
CLOSE-WAIT 221 0 10.0.3.1:9050 10.0.3.207:49130 ino:0 sk:00000236 -->
CLOSE-WAIT 182 0 10.8.0.1:9050 10.8.156.46:39650 ino:0 sk:0000023b -->
CLOSE-WAIT 385 0 10.8.0.1:9050 10.8.12.208:41440 ino:0 sk:00000252 -->
CLOSE-WAIT 207 0 10.2.0.1:9050 10.2.44.22:56702 ino:0 sk:00000254 -->
CLOSE-WAIT 854 0 10.2.0.1:9050 10.2.44.22:56218 ino:0 sk:0000025d -->
CLOSE-WAIT 197 0 10.2.0.1:9050 10.2.69.145:44056 ino:0 sk:00000262 -->
CLOSE-WAIT 253 0 10.0.3.1:9050 10.0.3.207:38558 ino:0 sk:00000271 -->
CLOSE-WAIT 182 0 10.8.0.1:9050 10.8.156.46:39684 ino:0 sk:00000272 -->
CLOSE-WAIT 191 0 10.8.0.1:9050 10.8.45.63:47418 ino:0 sk:0000027a -->
CLOSE-WAIT 477 0 10.6.0.1:9050 10.6.103.145:41246 ino:0 sk:0000027c -->
CLOSE-WAIT 181 0 10.2.0.1:9050 10.2.44.22:39016 ino:0 sk:00000297 -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:36349 ino:0 sk:000002a5 -->
CLOSE-WAIT 518 0 10.0.3.1:9050 10.0.3.207:42187 ino:0 sk:000002aa -->
CLOSE-WAIT 194 0 10.7.0.1:9050 10.7.238.30:56178 ino:0 sk:000002ac -->
CLOSE-WAIT 518 0 10.0.3.1:9050 10.0.3.207:56098 ino:0 sk:000002ae -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:41980 ino:0 sk:000002b8 -->
CLOSE-WAIT 194 0 10.2.0.1:9050 10.2.44.22:37374 ino:0 sk:000002bf -->
```
Looks like close() is not called for dead connections.
It's an upstart job, so the max fd limit for redsocks is effectively 1024.
I increased it to see what will happen.
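For reference, raising the descriptor limit for an upstart-managed service is done with a `limit` stanza in the job file. A minimal sketch, assuming the job file lives at `/etc/init/redsocks.conf` (path and values are illustrative, not taken from this thread):

```
# Fragment of /etc/init/redsocks.conf (hypothetical path).
# "limit nofile <soft> <hard>" raises RLIMIT_NOFILE for the job,
# so redsocks computes a larger conn_max at startup.
limit nofile 4096 4096
```

After editing, the job has to be stopped and started (not just restarted) for upstart to apply the new limit.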
@darkk I've got the output of those commands, as well as stderr from 17 hours of runtime before failure, in this gist: https://gist.github.com/aidansteele/3d3f0ffb1126b8644df0eafa7c7fd285 Let me know what else you need!
Since the instances restarted 11 hours ago, I now see that all five instances with a max fd limit of 1024 are stuck, but the one with a limit of 4096 is up and running.
Also I see, for instance with max fd limit of 4096, output of |
Another thing I noticed: for instances with a max fd limit of 1024, redsocks logs conn_max as 128, and for the instance with a limit of 4096 it's 512. If conn_max is the maximum number of clients, why isn't it the max fd limit divided by 2 (one fd for the client-redsocks socket and one for the redsocks-socks_server socket)? My previous netstat logs showed I had more than 128 client connections at some points, and file descriptors were not properly closed. Attached is the log of a failed instance. At 1488953244.393913 I killed redsocks and upstart spawned another one.
@reith I see port 9050 and I assume that you're using `tor` to handle the traffic. If that's true, have you considered using the `TransPort` feature of the tor daemon? It may be more convenient.
I'll comment on the logs a bit later.
Seems my problem was file descriptor exhaustion. The default limit on Amazon Linux is a laughably low 1024, so redsocks calculates a correspondingly low conn_max. Is this expected behaviour? I calculate the open fds from
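One way to watch for this kind of exhaustion is to compare the process's open descriptors against its soft limit via `/proc`. A sketch, assuming a Linux host (the fallback to the current shell and the `/proc` approach are my assumptions, not something stated in this thread):

```shell
#!/bin/sh
# Count open file descriptors of a process and compare with its soft limit.
# Falls back to the current shell ($$) if redsocks is not running.
pid=$(pidof redsocks 2>/dev/null) || pid=$$
open=$(ls /proc/"$pid"/fd | wc -l)
limit=$(awk '/^Max open files/ {print $4}' /proc/"$pid"/limits)
echo "pid=$pid open=$open soft_limit=$limit"
```

Watching `open` creep toward `soft_limit` while connections sit in CLOSE_WAIT would corroborate the leak.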
@aidansteele Occasional decrease may be caused by |
On Wed, 08 Mar 2017 03:54:16 -0800 Leonid Evdokimov ***@***.***> wrote:
@reith I see port 9050 and I assume that you're using `tor` to handle
the traffic. If that's true, have you considered using `TransPort`
feature of the tor daemon? It may be more convenient.
No, I'm not using Tor.
`conn_max` is calculated
[here](https://github.com/darkk/redsocks/blob/27b17889a43e32b0c1162514d00967e6967d41bb/base.c#L436);
the rule of thumb is that redsocks needs to reserve some FDs for logs /
DNS / signal handling and other library needs, and needs six file
descriptors per connection for `splice()` to work in the worst case.
Using splice reduces the number of memory-to-memory copies and cuts
CPU load by a factor of ~30%, which matters on embedded devices. The
number of *actually* used sockets may be lower, but the worst-case
scenario is still 6 file descriptors per connection: one for the
client, one to the server, and two for each direction of the pipe.
Thanks for the clarification. You said it may be because of a bug
triggered by connection pressure. Is this bug addressed somewhere?
On Thu, 09 Mar 2017 16:17:41 -0800 Aidan Steele ***@***.***> wrote:
Seems my problem was file descriptor exhaustion.
This is not happening for me. I built from rev ce85086 and there is no
fd leak here.
I found the bug. There's a situation where the connection pressure is resolved but redsocks doesn't resume listening for new connections. See #100 :) Edit: This of course doesn't solve the socket-leaking problem.
@darkk I've been using redsocks for a long while and found a problem related to the iptables conntrack module. In my iptables logs I have a bunch of entries for FIN ACK and RST packets being dropped because conntrack considered them INVALID. As those packets never reach redsocks, it can't close the connection, which remains in CLOSE_WAIT state. Since redsocks relies on the REDIRECT target in the iptables nat table, there is no iptables workaround to force those packets to be redirected to redsocks. I think the IP_TRANSPARENT socket option and the TPROXY target in the iptables mangle table would be worth trying to avoid that conntrack issue; I know Squid has an option to use that instead of nat REDIRECTion. Another option is to mimic Apache's behaviour of forking processes instead of threads and killing a forked process after it has served N connections, but this could cause unwanted behaviour with persistent connections from apps like WhatsApp. Anyway, redsocks is an excellent proxifier and I thank you all for the effort of bringing it to us. PS: Sorry for my poor English; it is not my first language and I haven't practised it in a long time.
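For context, the TPROXY approach suggested above usually looks something like the following. This is a hedged sketch only (port, mark, and routing-table numbers are illustrative), and it presumes a listener with IP_TRANSPARENT support, which this thread does not establish that redsocks has:

```
# Illustrative TPROXY setup (requires root and the xt_TPROXY module).
# Mark TCP traffic and hand it to a local listener on port 9050 without NAT:
iptables -t mangle -N REDSOCKS
iptables -t mangle -A PREROUTING -p tcp -j REDSOCKS
iptables -t mangle -A REDSOCKS -p tcp -j TPROXY \
    --on-port 9050 --on-ip 127.0.0.1 --tproxy-mark 0x1/0x1
# Route marked packets to the local stack:
ip rule add fwmark 0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```

Because TPROXY avoids nat REDIRECT, conntrack's INVALID verdict on late FIN ACK / RST packets no longer prevents them from reaching the proxy, which is the motivation given above.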
Hi,
The `epoll_wait` call will not finish after a while; it's probably something with libevent. My setup runs the redsocks instance via an upstart script configured as follows:
Here's the last stack information for the blocking call:
Attached are the last 300 lines of the log. As you can see, the instance had not been responding for a long time. The redsocks instance is built from ce85086, so it's up to date ignoring documentation changes. The kernel is Ubuntu's 4.2.0-34-generic and libevent is installed from Ubuntu's libevent-dev=2.0.21-stable-1ubuntu1 package.
redsocks-last-300.txt