Speed Up with Numba #79
Comments
Hi Marco. Thanks for the suggestion! I must admit this is the first time I've heard of Numba. Do you have any experience using it? At the moment (from what I remember), a lot of the computational time is spent performing element-level calculations and assembling the results into sparse matrices. Do you think this is something Numba would improve significantly? If you are interested, you could put together a small test case to see whether there are any performance gains. Robbie
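For context, a minimal sketch of the kind of element-assembly loop being discussed, using SciPy's triplet (COO) format. The mesh layout and `element_stiffness` callable are hypothetical placeholders, not the project's actual API:

```python
import numpy as np
from scipy.sparse import coo_matrix

def assemble_global(n_nodes, elements, element_stiffness):
    """Assemble a global sparse matrix from per-element contributions.

    elements: (n_el, nodes_per_el) array of node indices (hypothetical layout).
    element_stiffness: callable returning the (nodes_per_el, nodes_per_el)
    stiffness matrix for one element.
    """
    rows, cols, vals = [], [], []
    for conn in elements:
        k_el = element_stiffness(conn)      # element-level calculation
        for i, gi in enumerate(conn):       # scatter into triplet lists
            for j, gj in enumerate(conn):
                rows.append(gi)
                cols.append(gj)
                vals.append(k_el[i, j])
    # duplicate (row, col) entries are summed on conversion
    return coo_matrix((vals, (rows, cols)), shape=(n_nodes, n_nodes)).tocsr()
```

The pure-Python scatter loops here are exactly the kind of hot spot where a JIT or vectorization could pay off.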
Hi @robbievanleeuwen.
Numba could be a good solution to some of the Python for loops (depending on how exactly they are written), as it essentially turns them into machine code using LLVM. Numba is a really cool project. That said, I think it would be even better to fully vectorize the code in NumPy, which is something I wanted to do and one of the reasons I wanted to get some benchmarking code into the project using pytest-benchmark.
Hi @Spectre5! I do like the way this conversation is going.
See all of these Python loops in this file and this file to calculate the results? The code loops over each element and each Gauss point of each element, so it uses all (slow) Python loops. Those might be sped up using Numba, or at least could be with some reorganization. However, you could also set up the analysis using larger NumPy arrays and then perform the computations once using fully vectorized calculations, at the expense of using more memory. This would be even better than Numba. One idea that I've played around with is to make a fully vectorized version (maximum speed with higher memory usage) and also keep the Python loop version (and/or a Numba version), which would be slower but use less memory, just in case some really large or complicated section got big enough to be a memory problem. But we'd need to see how much memory a large section actually uses to know whether that'd be needed.
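To make the speed/memory trade-off concrete, here is a toy sketch of both styles for summing Gauss-point contributions per element. The array shapes and names are hypothetical, not the project's actual data layout:

```python
import numpy as np

def integrate_loops(vals, w_detj):
    # Element-by-element, Gauss-point-by-Gauss-point: pure Python loops,
    # minimal extra memory
    out = np.zeros(len(vals))
    for e in range(vals.shape[0]):
        for g in range(vals.shape[1]):
            out[e] += vals[e, g] * w_detj[e, g]
    return out

def integrate_vectorized(vals, w_detj):
    # One vectorized reduction over the Gauss-point axis; the (n_el, n_gp)
    # temporary array is the memory cost of vectorization
    return (vals * w_detj).sum(axis=1)

rng = np.random.default_rng(0)
n_el, n_gp = 5000, 3
vals = rng.random((n_el, n_gp))    # integrand values at each Gauss point
w_detj = rng.random((n_el, n_gp))  # quadrature weight times |J| per point

assert np.allclose(integrate_loops(vals, w_detj),
                   integrate_vectorized(vals, w_detj))
```

In the real code the per-element data would first have to be gathered into rectangular arrays, which is where the extra memory goes.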
I am no Numba expert, but I have done a fair bit of Numba coding recently, forming structural stiffness matrices (for solution with SciPy functions). The main problem I found was Numba silently falling back to object (Python) mode, with the end result being even slower than pure Python. The two main lessons I took from this were:
More details at: |
Hi Robbie.
Have you ever tried using Numba instead of NumPy?
It seems it can speed up everything :)
Is it worth having a look at?