Request socket hang up error #269
Comments
@digitaldan Of the different options I mentioned in this issue, I think the third one would be the best. Let me know what you think.
👍 Yep, I agree.
Any objections if I merge #270 and push to the live skill?
Sounds good to me 👍
Thanks. I just pushed this change to the live skill. Let's see if it fixes this issue. 🤞
In the hour this change has been running, no new socket hang up errors have been generated, whereas we would usually have gotten about 5-6 in that time. I will close this one until it happens again.
Due to the change in #265, which added request keep-alive HTTP agent support (`forever: true`), we have sporadically been getting a socket hang up error causing the relevant requests to fail. This seems to be caused by the native Node.js HTTP agent not gracefully handling keep-alive free sockets that have timed out on the server side. Unless overridden in the myopenhab cloud connector nginx configuration, the default server keep-alive timeout is currently 75 seconds, while an AWS Lambda execution context can be inactive for up to 30 minutes between two invocations.
The options to resolve this issue are one of the following:
1. Revert the keep-alive agent support added in #265.
2. Retry a failed request when a socket hang up error occurs on a reused connection.
3. Use a dedicated keep-alive agent module that expires free sockets before the server-side timeout.
I think that, with the first option, we would lose the response time benefit of reusing existing connections during peak times. With the second one, we would add some overhead, and therefore some added response time, by first having the initial connection-reuse request rejected and then creating a new connection. The last one seems to be the most efficient but would rely on an additional module.
Lambda log reference