fix: Make sure the Connection.disconnect gets called before the loop is closed #329

Merged 1 commit on Oct 7, 2022

Conversation

bellini666
Contributor

Fix #328

@carltongibson
Member

carltongibson commented Sep 22, 2022

Ah, a PR, good... (I didn't see this before commenting on the ticket just now 🙂) Sorry, I was reading on my phone and only now realise these are separate, related reports. 🤦

Thanks for the report. Looks ≈right but let me have a play.

How might we test this? 🤔

@bellini666
Contributor Author

Hi @carltongibson ,

I tested it in a live project. Without this patch, every time I instantiated the layer object and called group_send inside a Django view (not a Channels consumer), I would get the error I reported in the issue. That's because the pool gets deleted inside the wrapped loop.close, which schedules .disconnect() to be called by the loop, but since the loop is being closed at that point, the coroutine never gets a chance to run.
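For context, this is roughly the shape of the code that triggered it (a minimal sketch, assuming a standard Channels setup with the PubSub layer configured as the default channel layer; the view and group name are made up for illustration):

from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer
from django.http import HttpResponse


def notify(request):
    # Plain Django view, not a Channels consumer: async_to_sync runs the call
    # on a short-lived event loop, and without the patch .disconnect() only
    # gets scheduled on that loop once it is already being closed.
    channel_layer = get_channel_layer()
    async_to_sync(channel_layer.group_send)(
        "notifications", {"type": "notify.message", "text": "hello"}
    )
    return HttpResponse("sent")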

With that patch the error doesn't happen, because the connection gets closed in that redis.close() call (which is the correct behaviour, since we don't have more than one connection in that pool; it was created specifically for that connection).

If you are looking for a unit test, I can try to write one. I don't know how to properly mock this to ensure that the coroutine was never run, but I can give it a try.
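For example, something along these lines (only a sketch of one possible shape, assuming pytest with its caplog fixture and a Redis reachable on localhost; the test name is made up, and relying on GC timing makes this kind of check a bit fragile):

import gc
import logging

from asgiref.sync import async_to_sync
from channels_redis.pubsub import RedisPubSubChannelLayer


def test_group_send_from_sync_context(caplog):
    layer = RedisPubSubChannelLayer(hosts=["redis://localhost:6379"])
    with caplog.at_level(logging.ERROR, logger="asyncio"):
        # Same pattern as the Django view case: group_send via async_to_sync.
        async_to_sync(layer.group_send)("test-group", {"type": "test.message"})
        # Encourage Task.__del__ to run so the destroyed-task error would surface.
        gc.collect()
    assert "Task was destroyed but it is pending!" not in caplog.text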

Note that this fixes the PubSub layer. The core one has a similar issue (#327), for which I did not find a good solution. The only difference there is that the disconnect does get called, but it is somehow run by a different loop (probably because the loop that created the pool is not the same one that called the close_pools function, but I don't know exactly why).

@bellini666
Contributor Author

Note: I'm running the beta versions of channels and channels_redis, and while this is not merged I monkey-patched it to force the same behaviour:

import functools

# RedisSingleShardConnection is the per-shard connection wrapper used by the
# channels_redis PubSub backend (channels_redis.pubsub in the beta I'm running).
from channels_redis.pubsub import RedisSingleShardConnection

_original_ensure_redis = RedisSingleShardConnection._ensure_redis


def _ensure_redis(self):
    # FIXME: This can be removed once this issue is fixed and released:
    # https://github.com/django/channels_redis/issues/328
    need_patch_redis = self._redis is None
    retval = _original_ensure_redis(self)

    if need_patch_redis:
        # Make redis.close() also tear down its (single-connection) pool, so the
        # connection is closed right away instead of disconnect() being scheduled
        # on a loop that is about to close.
        self._redis.close = functools.partial(
            self._redis.close,
            close_connection_pool=True,
        )

    return retval


RedisSingleShardConnection._ensure_redis = _ensure_redis
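(For completeness, a hypothetical way to make sure the patch above is applied early, assuming it lives in a module such as myproject/patches.py; the module and app names here are made up:)

from django.apps import AppConfig


class MyAppConfig(AppConfig):
    name = "myapp"

    def ready(self):
        # Importing the module applies the monkey-patch as a side effect.
        from myproject import patches  # noqa: F401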

It has been running in production for 3 days now, in a project that relies a lot on the PubSub behaviour, and everything is running fine! :)

@carltongibson
Member

Thanks for the follow-up @bellini666

As I say, it sounds correct. Let me have a play. 👍

Member

@carltongibson left a comment


Hi @bellini666 — thanks for this.

I added a test that triggers the shutdown warning at least.

🎁

Development

Successfully merging this pull request may close these issues.

Connection.disconnect: Task was destroyed but it is pending!