Scan operation causing significant overhead #23
Instead of Lines 266 to 271 in bf4eca3
@pstjohn?
Nevermind, that is a Cholesky solve using scipy. I don't know if pytensor has Cholesky solvers.
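For reference, a Cholesky-based solve with scipy looks like the following. This is a minimal sketch with made-up data, not the actual emll code:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)

# Build a symmetric positive-definite matrix, as Cholesky requires
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)
b = rng.standard_normal(5)

# Factor once, then solve; cheaper than a general solve when A is reused
c_and_lower = cho_factor(A)
x = cho_solve(c_and_lower, b)

print(np.allclose(A @ x, b))  # True
```

The factor/solve split is the main reason to prefer this over a plain `np.linalg.solve` when the same matrix is solved against many right-hand sides.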
Should we try HMC? It has gotten a lot faster.

Nevermind. You still have the same bottleneck.
Yeah, so for my initial "profiling" I ran the following snippet: `model.profile(model.logp()).summary()`, which is agnostic of any inference and only evaluates the log probability of a state in the model. But the slowdown does follow the model into the inference step, hence the problem.
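Outside of pymc, the same idea can be illustrated with Python's built-in profiler: time a single function evaluation in isolation to find the hotspot before any inference runs. This is a generic sketch with a stand-in function, not the emll model:

```python
import cProfile
import pstats
import numpy as np

def logp_stand_in(A, b):
    # Stand-in for a model log-probability whose cost is dominated
    # by a linear solve, analogous to the scan bottleneck here
    return float(np.linalg.solve(A, b).sum())

rng = np.random.default_rng(0)
M = rng.standard_normal((300, 300))
A = M @ M.T + 300.0 * np.eye(300)
b = rng.standard_normal(300)

profiler = cProfile.Profile()
profiler.enable()
for _ in range(20):
    logp_stand_in(A, b)
profiler.disable()

# Print the five most expensive calls by cumulative time
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```

If the same call dominates here and inside the sampler, the fix belongs in the log-probability graph, not the inference method.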
Looking around the ... The ... The standard ...
I would say it is worth a shot. Peter St John felt that the absence of the Cholesky solver in PyTorch and TensorFlow was a reason not to try porting emll to those frameworks.
I am also trying something else that I noticed. In ...
When applying the linlog model for use in `pymc` inference, we face quite significant inference times (2 to 10 days). This is obviously not ideal for any productive application of the workflow. Utilizing `pymc` built-in profiling methods, we can narrow the issue down to the `scan` operation occurring in `emll`. Specifically, in `emll/src/emll/linlog_model.py`:

Lines 120 to 166 in bf4eca3

This contains the line where `scan` is being used by our code. We need to find a way to optimize this method of determining `xn`, because it may have worked with `theano`, but it is definitely struggling with `pytensor`.
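One common way to remove a scan over experiments is to batch the underlying linear solves. The sketch below uses numpy (`np.linalg.solve` broadcasts over leading dimensions); the array names and shapes are illustrative, not emll's actual variables, and pytensor's solve may not broadcast the same way, so this only shows the shape of the optimization:

```python
import numpy as np

rng = np.random.default_rng(1)
n_exp, n_met = 8, 4  # hypothetical: experiments x metabolites

# One well-conditioned linear system per experiment
A = rng.standard_normal((n_exp, n_met, n_met)) + 4.0 * np.eye(n_met)
b = rng.standard_normal((n_exp, n_met))

# Looped version, analogous to iterating with scan
x_loop = np.stack([np.linalg.solve(A[i], b[i]) for i in range(n_exp)])

# Batched version: solve broadcasts over the leading (experiment) axis
x_batched = np.linalg.solve(A, b[..., None])[..., 0]

print(np.allclose(x_loop, x_batched))  # True
```

If no batched solve is available in the target framework, stacking the per-experiment systems into one block-diagonal system is another way to trade the scan for a single (larger) solve.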