What is your environment (Kubernetes version, Fluid version, etc.)?
Alluxio
controller image: fluidcloudnative/alluxioruntime-controller:v1.0.1-14eda3b
runtime image: alluxio/alluxio-dev:2.9.0
fuse image: alluxio/alluxio-dev:2.9.0
Describe the bug
We use Alluxio to accelerate data reads, but it does not seem to provide any acceleration.
Setup: 1 worker, SSD as the cache medium, cache capacity set to 10G.
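For reference, a minimal sketch of what an AlluxioRuntime spec with this shape (1 replica, one SSD tier, 10Gi quota) might look like. The dataset name, cache path, and watermark values below are assumptions for illustration only, not the actual configuration used in this report:

cat <<EOF | kubectl apply -f -
apiVersion: data.fluid.io/v1alpha1
kind: AlluxioRuntime
metadata:
  name: pytorch-data          # hypothetical name, matching a Dataset of the same name
spec:
  replicas: 1                 # 1 cache worker, as described above
  tieredstore:
    levels:
      - mediumtype: SSD       # SSD as the cache medium
        path: /mnt/ssd        # assumed host path backing the cache
        quota: 10Gi           # 10G cache capacity
        high: "0.95"          # assumed eviction watermarks
        low: "0.7"
EOF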
1 GiB file
The application pod and the cache worker pod are not on the same node, so the cached data is read remotely.
Reading the file through the community edition (via Alluxio), 3 runs:
22.1MB/s,
23.1MB/s,
25.4MB/s
Mounting zos directly, 1 GiB file
The application pod mounts the underlying zos directly.
Reading the file, 3 runs:
19.8MB/s,
20.2MB/s,
22.5MB/s
Test command
fio -filename=/data/pytorch/1g.txt -direct=1 -iodepth 64 -thread -rw=randread -ioengine=libaio -bs=128K -size=1G -numjobs=10 -runtime=180 -group_reporting -name=rand_1Gread_128K
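To check whether the 1 GiB file is actually served from the Alluxio cache rather than falling back to the underlying storage, one option is to look at the Dataset's cache stats and Alluxio's capacity report. A sketch, assuming the Dataset is named pytorch-data and the Fluid-managed master pod and container follow the usual <name>-master-0 / alluxio-master naming:

# Fluid Dataset status: CACHED / CACHED PERCENTAGE should approach 1GiB / 100% after the first read
kubectl get dataset pytorch-data

# Alluxio's own view of cache usage per worker and tier
kubectl exec -it pytorch-data-master-0 -c alluxio-master -- alluxio fsadmin report capacity

If the cached percentage stays near zero after the fio runs, the reads are not being served from the SSD tier at all.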
@ruan875417 Could you share the AlluxioRuntime configuration? Did you use a local disk as the cache medium, and what is the bandwidth between the remote data source and the Alluxio cluster?