One idea raised for speeding MBAR up without losing precision is to start the solve with minibatching, using Adam or another stochastic gradient method as commonly used in ML. That phase won't be as accurate, but we can switch to Newton-Raphson once it gets close enough.
Looking at various implementations, it seems that using something like scikit-learn would introduce too many dependencies and require squeezing the data into an awkward shape, so reimplementing may be the best bet.
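A rough sketch of what the two-phase scheme could look like, using only NumPy/SciPy. This is an illustration, not pymbar's API: the function names (`mbar_weights`, `solve_adam_then_newton`), the batch size, learning rate, and iteration counts are all hypothetical choices. Phase 1 runs minibatch Adam on the standard MBAR objective; phase 2 polishes with Newton-Raphson on the full data, with the gauge fixed by pinning `f[0] = 0` (the free energies are only defined up to an additive constant, so the Hessian is singular otherwise):

```python
import numpy as np
from scipy.special import logsumexp

def mbar_weights(f, u_kn, N_k, idx=slice(None)):
    """W[i, n] = exp(f_i - u_in) / sum_k N_k exp(f_k - u_kn)."""
    u = u_kn[:, idx]
    log_c = logsumexp(np.log(N_k)[:, None] + f[:, None] - u, axis=0)
    return np.exp(f[:, None] - u - log_c[None, :])

def solve_adam_then_newton(u_kn, N_k, batch=32, adam_steps=300, lr=0.05,
                           newton_iters=50, tol=1e-10, seed=0):
    """Hypothetical two-phase MBAR solver: minibatch Adam, then Newton-Raphson."""
    rng = np.random.default_rng(seed)
    K, N = u_kn.shape
    N_k = np.asarray(N_k, dtype=float)
    f = np.zeros(K)
    m, v = np.zeros(K), np.zeros(K)
    b1, b2, eps = 0.9, 0.999, 1e-8
    # Phase 1: Adam on a minibatch estimate of the MBAR gradient
    # g_i = N_i * (sum_n W_in - 1), with the sample sum estimated from a batch.
    for t in range(1, adam_steps + 1):
        idx = rng.choice(N, size=min(batch, N), replace=False)
        W = mbar_weights(f, u_kn, N_k, idx)
        g = N_k * ((N / len(idx)) * W.sum(axis=1) - 1.0) / N
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        f -= lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
        f -= f[0]  # fix the gauge: f is only defined up to a constant
    # Phase 2: Newton-Raphson on the full data once Adam is close enough.
    # Hessian: H_ij = delta_ij * N_i sum_n W_in - N_i N_j sum_n W_in W_jn.
    for _ in range(newton_iters):
        W = mbar_weights(f, u_kn, N_k)
        g = N_k * (W.sum(axis=1) - 1.0)
        if np.linalg.norm(g) < tol:
            break
        NW = N_k[:, None] * W
        H = np.diag(N_k * W.sum(axis=1)) - NW @ NW.T
        f[1:] -= np.linalg.solve(H[1:, 1:], g[1:])  # f[0] stays pinned at 0
    return f
```

At convergence `sum_n W_in = 1` for every state, which is exactly the MBAR self-consistency condition, so the gradient norm doubles as a convergence check. The switch-over criterion here (a fixed Adam step budget) is a placeholder; a real implementation would probably monitor the gradient norm instead.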