
[Bug]: Deadlock on compaction when stopping datanode #35198

Closed
bigsheeper opened this issue Aug 1, 2024 · 1 comment
Assignees
bigsheeper

Labels
kind/bug Issues or changes related to a bug · stale Indicates no updates for 30 days · triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@bigsheeper (Contributor)

Is there an existing issue for this?

  • I have searched the existing issues

Environment

- Milvus version: 2.3 only
- Deployment mode(standalone or cluster):
- MQ type(rocksmq, pulsar or kafka):    
- SDK version(e.g. pymilvus v2.0.0rc2):
- OS(Ubuntu or CentOS): 
- CPU/Memory: 
- GPU: 
- Others:

Current Behavior

Phenomenon:
Before stopping the old datanode, it received two compaction tasks, A and B, both targeting the same segment, seg0.

Process:

  1. Task A starts compaction and successfully injects seg0 into the flush manager.
  2. Task B starts compaction and attempts to inject seg0 into the flush manager, but it blocks because A's injection of seg0 has not yet been released by injectDone.
  3. The datanode begins stopping.
  4. Task B is stopped first, and its stop process is stuck at step 2.
  5. Because Task B's stop never finishes, Task A is never stopped, which in turn prevents Task A from calling injectDone for seg0; the result is a deadlock between A and B (see the Go sketch after the Q&A below).

Q: Why is this a probabilistic issue?
A: The deadlock does not occur if Task A happens to be stopped before Task B: A's injectDone then releases seg0, so Task B's pending inject can complete.

Q: Why were compaction tasks received for the same segment?
A: The datanode received compaction tasks from both the new and old datacoords simultaneously.
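
The following is a minimal, hypothetical Go sketch of the hang, not Milvus code: it assumes a simplified model in which the flush manager's per-segment injection behaves like a one-slot lock that only injectDone releases (the channel, helpers, and task names are illustrative).

```go
// Hypothetical simplified model of the hang (not Milvus code): the flush
// manager's per-segment injection is modeled as a one-slot channel that
// only injectDone releases.
package main

import (
	"fmt"
	"time"
)

func main() {
	seg0 := make(chan struct{}, 1) // per-segment injection slot for seg0

	inject := func() { seg0 <- struct{}{} } // blocks if seg0 is already injected
	injectDone := func() { <-seg0 }         // releases the injection

	inject() // step 1: Task A injects seg0 successfully

	bStopped := make(chan struct{})
	go func() {
		inject() // step 2: Task B blocks; A's injectDone has not run yet
		close(bStopped)
	}()

	// Steps 4-5: the datanode stops B first and waits for it. B can never
	// finish injecting, and A's injectDone only runs after A is stopped,
	// which never happens while we wait on B. A timeout makes the hang visible.
	select {
	case <-bStopped:
		fmt.Println("B stopped (unreachable in this ordering)")
	case <-time.After(2 * time.Second):
		fmt.Println("deadlock: B is stuck injecting seg0; A's injectDone never runs")
	}

	// The other ordering: if A were stopped first, its injectDone would
	// release seg0 and unblock B, which is why the bug is probabilistic.
	injectDone()
	<-bStopped
	fmt.Println("after injectDone, B proceeds")
}
```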

Solution:
Stop all compaction tasks in parallel to avoid interdependencies and prevent deadlocks.
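
A minimal sketch of this pattern under the same simplified model (the task type and stop method below are illustrative stand-ins, not the actual Milvus patch): launching every stop in its own goroutine and waiting on a sync.WaitGroup means no task's stop waits behind another's injectDone.

```go
// Sketch of stopping all compaction tasks in parallel (illustrative, not
// the actual Milvus implementation).
package main

import (
	"fmt"
	"sync"
)

// task is a hypothetical stand-in for a compaction task.
type task struct{ name string }

// stop stands in for a task's stop routine, which may block on injectDone.
func (t *task) stop() { fmt.Println("stopped", t.name) }

func stopAll(tasks []*task) {
	var wg sync.WaitGroup
	for _, t := range tasks {
		wg.Add(1)
		go func(t *task) { // stop every task concurrently
			defer wg.Done()
			t.stop()
		}(t)
	}
	wg.Wait() // return only after all stops have completed
}

func main() {
	stopAll([]*task{{name: "A"}, {name: "B"}})
}
```

Because A's stop is no longer queued behind B's, A reaches injectDone for seg0, which releases B and lets both stops finish.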

Expected Behavior

No response

Steps To Reproduce

No response

Milvus Log

No response

Anything else?

No response

@bigsheeper bigsheeper added kind/bug Issues or changes related to a bug needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Aug 1, 2024
@bigsheeper bigsheeper self-assigned this Aug 1, 2024
@binbinlv binbinlv added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Aug 2, 2024
sre-ci-robot pushed a commit that referenced this issue Aug 5, 2024
Stop compaction tasks in parallel to avoid interdependencies and prevent
deadlocks.

issue: #35198

Signed-off-by: bigsheeper <[email protected]>

stale bot commented Sep 1, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.

@stale stale bot added the stale Indicates no updates for 30 days label Sep 1, 2024
@stale stale bot closed this as completed Sep 10, 2024