OPENBLAS_NUM_THREADS=1 by default #59
Conversation
This will kill the performance of linear algebra operations on multicore systems. It makes sense for
@jebej Can you try to write constructive feedback? E.g., what do you suggest?
This disables multithreading for matrix multiplication and related operations. I would suggest reverting the change. Try e.g.:
julia> using BenchmarkTools
julia> A = rand(500,500);
julia> B = rand(500,500);
julia> @btime $A*$B # by default on a 6 core CPU
1.190 ms (2 allocations: 1.91 MiB)
julia> @btime $A*$B # with OPENBLAS_NUM_THREADS=1
4.291 ms (2 allocations: 1.91 MiB)
One can still set the number of threads manually.
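For example, a minimal sketch of restoring multithreaded BLAS from within a session (using LinearAlgebra.BLAS.set_num_threads and Sys.CPU_THREADS is an assumption here, not necessarily the exact mechanism the comment had in mind):
julia> using LinearAlgebra
julia> BLAS.set_num_threads(Sys.CPU_THREADS)  # or any other thread count
julia> BLAS.get_num_threads()  # confirm the setting took effect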
Until now, Pluto used Distributed. Are you seeing a performance degradation between Pluto with Distributed and Pluto with Malt.jl?
Yes, absolutely, but there's a reason Julia doesn't set the number of threads to 1 by default.
I did not check for a difference, I was just curious about Pluto/Malt and noticed this PR. |
Yes, and there are reasons why we need to set it to 1 by default.
OK, thanks for clarifying that. There is significant context around this change that can't be summarized in this discussion, but feel free to study it and come up with and propose a solution that circumvents the limitations we currently face. We'll gladly review a PR that addresses this problem and makes Malt.jl even better! Best,
I'm not sure why the OOM happens (it seems linked to threading), but would a different compromise help, like 2 or 4 threads? (I'm looking into, and have suggested, new threading defaults for Julia, so I'd like to know what people consider the best compromise.) I doubt the memory use scales linearly. And was it mostly or only a problem on Julia 1.6? Then drop support for it, at least as soon as there's a new LTS; there's been talk that 1.10 will be it, eventually. It makes no sense for anyone to start using the 1.6 LTS now, with it on its way out and updates to it seemingly stalled/stopped. [@jebej Do you use Linux or Windows? Or are you concerned about non-default threading on both?] Also, I don't know if the OOM problem was only on Windows (which does not overcommit, unlike Linux). Maybe the lowered thread setting could be done only on Windows, to 1, or 2, or whatever compromise works. I don't know if macOS overcommits (or FreeBSD), so maybe change it there too, or only on non-Linux?
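A rough sketch of that kind of Windows-only compromise (the worker_env name, its keyword argument, and the idea of passing an environment to the spawned worker process are illustrative assumptions, not Malt.jl's actual API):
function worker_env(; windows_blas_threads::Int = 1)
    # Copy the current environment for the spawned worker process.
    env = Dict{String,String}(ENV)
    if Sys.iswindows()
        # Only cap OpenBLAS threads where memory overcommit is unavailable;
        # 2 or 4 would be a milder compromise than 1.
        env["OPENBLAS_NUM_THREADS"] = string(windows_blas_threads)
    end
    return env
end

# Usage sketch: run(setenv(`julia worker.jl`, worker_env()))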
FWIW, I ran across this thread on the Julia Discourse today; I'd be surprised if this is the first time this has happened: https://discourse.julialang.org/t/poor-openblas-performance-for-large-matrix-multiply/119354
Just like Distributed: JuliaLang/julia#47803
Seems to avoid OOM errors on Windows: fonsp/Pluto.jl#2240 (comment)