Should Subsampling be Recommended? #545
Comments
I am extremely interested in this, and I think we need to examine it. I suspect block bootstrapping is going to be the best way to do this. I have seen issues at small sample numbers with most analytical estimates, but I would like to investigate the two you list a bit more. I am going to be tied up for the next 10 days or so (and then digging out afterwards), but this is very interesting to me, and I would love to talk more. I'd love to get a quantitative estimate of the additional errors introduced by subsampling. I would say that in MBAR the biggest issue is poor estimation of the correlation time leading to overly aggressive subsampling (another thing we have talked about), but extra variance is introduced even if the autocorrelation time is estimated correctly. We should think of the right experiments to show this (perhaps artificial data generated with an autoregressive model, so that we know the autocorrelation time exactly).
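A minimal sketch of the kind of controlled experiment suggested above, assuming an AR(1) process (all names here are illustrative, not from the thread): for AR(1) the statistical inefficiency and autocorrelation time are known in closed form, so the error introduced by any subsampling scheme can be checked against exact values.

```python
import numpy as np

def ar1_series(n_samples, phi, rng, sigma=1.0):
    """Generate an AR(1) series x_t = phi * x_{t-1} + eps_t with
    stationary variance sigma**2.

    The autocorrelation time is known exactly: the statistical
    inefficiency is g = (1 + phi) / (1 - phi) and the exponential
    autocorrelation time is tau = -1 / log(phi).
    """
    eps_scale = sigma * np.sqrt(1.0 - phi**2)  # innovation std. dev.
    x = np.empty(n_samples)
    x[0] = rng.normal(scale=sigma)  # start in the stationary distribution
    for t in range(1, n_samples):
        x[t] = phi * x[t - 1] + rng.normal(scale=eps_scale)
    return x

rng = np.random.default_rng(0)
phi = 0.9
x = ar1_series(50_000, phi, rng)
g_exact = (1 + phi) / (1 - phi)  # = 19: one independent sample per ~19 steps
```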
Thanks for your interest. I've played with a toy model: 4 harmonic oscillators with the same force constant and equal spacing, so that all free energy changes are 0 for simplicity, and compared the MBAR variance estimates with and without subsampling for varying effective sample sizes.

[results figures and the accompanying equation omitted; Eff denotes an effective sample size]

An increase in the variance by ~30 % isn't a huge price to pay for subsampling, but:

[follow-up caveats omitted]

Code: all code and the conda environment used: gh_subsampling_issue.tar.gz
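For concreteness, here is a self-contained sketch of the toy model described above (my own reconstruction, not the attached code; it assumes the pymbar 4.x API, where 3.x used `getFreeEnergyDifferences`). Samples here are drawn independently; the thread's experiments would additionally introduce autocorrelation, e.g. with an AR(1) generator like the one above.

```python
import numpy as np
from pymbar import MBAR

rng = np.random.default_rng(1)
K = 1.0                                    # common force constant (beta = 1)
offsets = np.array([0.0, 1.0, 2.0, 3.0])   # equally spaced minima
n_per_state = 1_000
N_k = np.full(len(offsets), n_per_state)

# Boltzmann sampling from each harmonic oscillator: Gaussian with
# mean = offset and variance = 1/K.
x_n = np.concatenate(
    [rng.normal(loc=mu, scale=1.0 / np.sqrt(K), size=n_per_state)
     for mu in offsets]
)

# Reduced potential of every sample evaluated in every state: u_kn[k, n].
u_kn = 0.5 * K * (x_n[None, :] - offsets[:, None]) ** 2

mbar = MBAR(u_kn, N_k)
results = mbar.compute_free_energy_differences()
# All exact free energy differences are 0, so Delta_f should be ~0 and
# dDelta_f gives the analytical uncertainty to compare against bootstrapping.
print(results["Delta_f"][0, -1], results["dDelta_f"][0, -1])
```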
My understanding is: subsampling is recommended so that Equation 4.2 of Kong et al., 2003, which is derived for uncorrelated samples, can be used to estimate the variance. However, subsampling itself increases the variance. It seems unintuitive to increase the variance so that it can be better estimated. Would it not be better to minimise the variance by retaining all samples, and to use a variance estimator which directly accounts for autocorrelation?
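For reference, the recommended workflow being questioned looks roughly like this (assuming pymbar 4.x names; 3.x used `statisticalInefficiency` / `subsampleCorrelatedData`): the statistical inefficiency g is estimated from a timeseries, and the data are thinned to roughly one sample per g before being passed to MBAR.

```python
import numpy as np
from pymbar import timeseries

def decorrelate(a_t):
    """Estimate the statistical inefficiency g of a_t and return the
    subsampled (approximately uncorrelated) series along with g."""
    g = timeseries.statistical_inefficiency(a_t)
    indices = timeseries.subsample_correlated_data(a_t, g=g)
    return np.asarray(a_t)[indices], g
```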
The Issue: Subsampling Increases the Variance
Geyer, 1992 (Section 3.6) discusses subsampling. He points out that subsampling a stationary chain can only pay off when the cost of using the samples is significant relative to the cost of generating them; otherwise it simply increases the variance of the estimator.
I'm assuming that the cost of using samples is generally negligible compared to the cost of generating them.
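As an illustrative back-of-the-envelope calculation (mine, not Geyer's): for an AR(1) process with coefficient $\varphi$ and stationary variance $\sigma^2$, the mean of $N$ correlated samples has variance $\approx \sigma^2 g / N$ with $g = (1+\varphi)/(1-\varphi)$, while subsampling every $k$ steps leaves an AR(1) series with coefficient $\varphi^k$ and $N/k$ samples, so

$$\frac{\operatorname{Var}[\bar{x}_\mathrm{sub}]}{\operatorname{Var}[\bar{x}_\mathrm{full}]} \approx \frac{k}{g}\,\frac{1+\varphi^{k}}{1-\varphi^{k}}.$$

For $\varphi = 0.9$ ($g = 19$) and a subsampling interval $k = g$, $\varphi^{k} \approx 0.135$ and the ratio is $\approx 1.31$, i.e. roughly the ~30 % inflation seen in the toy model above.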
The increase in variance caused by subsampling is shown, for example, in Table III of Tan, 2012, where the variance of the MBAR/UWHAM estimates increases after subsampling (the variances without subsampling are calculated using block bootstrapping).
Possible Solutions: Directly Accounting for Autocorrelation in the Variance Estimates
To account for autocorrelation in the variance estimates without subsampling, block bootstrapping could be used, with the block size selected according to the procedure of Politis and White, 2004 (and its correction), for example; a sketch is given below. However, I understand that fast analytical estimates may be preferred, to avoid repeated MBAR evaluations. Could the analytical estimates of Geyer, 1994 / Li et al., 2023 be used?
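A minimal sketch of what such a block bootstrap could look like (my own illustration; the block length would come from the Politis and White procedure, which is not implemented here, and `estimator` would re-run MBAR on the resampled data, which is exactly the cost being discussed):

```python
import numpy as np

def circular_block_bootstrap_std(a_t, estimator, block_length,
                                 n_boot=200, seed=0):
    """Standard error of estimator(a_t) from a circular block bootstrap."""
    rng = np.random.default_rng(seed)
    a_t = np.asarray(a_t)
    n = len(a_t)
    n_blocks = int(np.ceil(n / block_length))
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        starts = rng.integers(0, n, size=n_blocks)
        # Blocks wrap around the end of the series (circular scheme).
        idx = (starts[:, None] + np.arange(block_length)[None, :]) % n
        estimates[i] = estimator(a_t[idx.ravel()][:n])
    return estimates.std(ddof=1)
```

For multi-state MBAR data the resampling would be done per state (per row of `u_kn`), but the idea is the same.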
Why This May Be Irrelevant
I'm biased by the fact that I work with ABFE calculations and regularly feed MBAR very highly correlated data which are aggressively subsampled, sometimes producing unreliable estimates (the estimates are reasonable without subsampling). I understand that for most applications relatively few samples will be discarded, and any increase in uncertainty may be small.
It would be great to hear some thoughts on this, or to be corrected if I am misunderstanding.
Thanks!