[Bug]: [benchmark] DiskANN index with 100 million rows inserted, querynode disk usage peaks at over 100 GB #25163
Comments
/assign @xige-16
As the error says: disk space is not enough.
@elstic
We do have parameters that limit disk usage in this way, but this case previously passed, and we did not change the case parameters or any other configuration. So I assume the current image needs more disk space.
@xige-16
The load process of Milvus has remained unchanged; you could test whether this is caused by the Knowhere upgrade.
Querynodes are evicted when their disk usage exceeds 100 GB, which was set by
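For reference, a minimal sketch of what such a threshold check amounts to. The data path, polling interval, and enforcement mechanism here are assumptions for illustration only; the thread does not say how the 100 GB limit is actually enforced:

```python
import shutil
import time

DATA_PATH = "/var/lib/milvus"   # hypothetical querynode data mount
LIMIT_BYTES = 100 * 1024**3     # the 100 GB eviction threshold from this thread

peak = 0
while True:
    # disk_usage reports filesystem-level usage, so this assumes the
    # querynode has a dedicated volume mounted at DATA_PATH.
    used = shutil.disk_usage(DATA_PATH).used
    peak = max(peak, used)
    print(f"used {used / 1024**3:.1f} GiB, peak {peak / 1024**3:.1f} GiB")
    if used > LIMIT_BYTES:
        print("over the 100 GiB limit: eviction would trigger here")
    time.sleep(10)
```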
Might this be related to a compaction issue?
There are two phenomena in this issue. Disk usage during the minio and querynode load processes has increased, but disk usage in the final state is unchanged, which indicates that the index size has not changed. The most likely cause is that old segments are not cleaned up in time after compaction; I will check the logs to confirm.
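A sketch of one way to check this hypothesis by measuring the on-disk size of each segment directory on the querynode; unusually large or stale entries would point at segments left behind after compaction. The directory layout under LOCAL_ROOT is an assumption, not confirmed in the thread:

```python
import os

LOCAL_ROOT = "/var/lib/milvus/data"   # hypothetical querynode local cache dir

# Sum file sizes under each top-level entry.
sizes = {}
for entry in os.scandir(LOCAL_ROOT):
    if entry.is_dir():
        total = 0
        for dirpath, _, filenames in os.walk(entry.path):
            for name in filenames:
                total += os.path.getsize(os.path.join(dirpath, name))
        sizes[entry.name] = total

# Largest directories first.
for seg, size in sorted(sizes.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{seg}: {size / 1024**3:.2f} GiB")
```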
The problem of querynode disk usage exceeding 100 GB is also seen on the master branch: with the same case, querynode disk usage reaches 108 GB.
With the recent image '2.2.0-20230814-27fe2a45', inserting 100 million rows and loading completed successfully.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This problem has not occurred recently.
DiskANN index with 100 million rows inserted: querynode disk usage peaks at over 100 GB. image: master-20231023-0c33ddb7
server:
The querynode was evicted for using more than 100 GB of disk. Validated that peak disk usage is less than 140 GB.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@elstic
This issue has not arisen recently, so I will close it.
@nikcoderr: You can't reopen an issue/PR unless you authored it or you are a collaborator.
Is there an existing issue for this?
Environment
Current Behavior
argo task: fouramf-concurrent-n5lrq, id: 2
case: test_concurrent_locust_100m_diskann_ddl_dql_filter_cluster
This is a frequently run test case, which passed in previous versions.
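For illustration, a minimal pymilvus sketch of the kind of workload this case exercises. The host, collection name, dimension, and batch size are hypothetical (the actual fouram case inserts 100 million rows); DISKANN is the index type named in this report:

```python
import numpy as np
from pymilvus import (
    connections, Collection, CollectionSchema, FieldSchema, DataType,
)

connections.connect(host="localhost", port="19530")

fields = [
    FieldSchema(name="id", dtype=DataType.INT64, is_primary=True, auto_id=False),
    FieldSchema(name="vec", dtype=DataType.FLOAT_VECTOR, dim=128),
]
coll = Collection("diskann_sketch", CollectionSchema(fields))

# Insert in batches (a tiny batch here; the real case inserts 100M rows).
batch = 10_000
ids = list(range(batch))
vecs = np.random.random((batch, 128)).tolist()
coll.insert([ids, vecs])
coll.flush()

# Build a DiskANN index, then load; the reported disk-usage peak
# occurs on the querynode during this load phase.
coll.create_index(
    field_name="vec",
    index_params={"index_type": "DISKANN", "metric_type": "L2", "params": {}},
)
coll.load()
```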
server:
client log:
client pod: fouramf-concurrent-n5lrq-1120963268
Expected Behavior
Load succeeds.
Steps To Reproduce
Milvus Log
No response
Anything else?
No response