
need better support for cross validation / typical fitting workflow #28

Open
bcdaniels opened this issue Jun 13, 2022 · 0 comments
@bcdaniels
Contributor

The original Sir Isaac code performs model selection using a modified Bayesian information criterion. This makes sense in cases with limited data, but in many applications, we have enough data that it makes more sense to use cross validation (fit to a subset of the data and select a model based on its out-of-sample performance).
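For concreteness, here is a minimal sketch of the kind of selection loop this would mean, using plain numpy polynomial fits as stand-ins for candidate models of increasing complexity (the data, function names, and structure here are purely illustrative and not part of the current Sir Isaac API):

```python
import numpy as np

# Illustrative data: noisy samples of an underlying function (stand-in for real data).
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.shape)

# Hold out a validation set; fit only on the training portion.
train = rng.random(x.shape) < 0.7
x_train, y_train = x[train], y[train]
x_val, y_val = x[~train], y[~train]

# Candidate models of increasing complexity (here: polynomial degree).
candidate_degrees = range(1, 12)

def out_of_sample_error(coeffs, x, y):
    """Mean squared error of a fitted polynomial on held-out data."""
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

# Select the model with the best out-of-sample performance,
# rather than using an information criterion.
fits = {d: np.polyfit(x_train, y_train, d) for d in candidate_degrees}
errors = {d: out_of_sample_error(fits[d], x_val, y_val) for d in candidate_degrees}
best_degree = min(errors, key=errors.get)
print(f"selected model complexity: degree {best_degree}")
```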

In particular, this is what we did in applying Sir Isaac to _C. elegans_ behavior data. (Note to self: This was implemented in gatherResults.goodnessOfFit_model.) We should make this easier to do within Sir Isaac.

This could also be an excuse to automate what has become the usual model fitting workflow, in which we first fit to a very small subset of data and then gradually add data, refitting at each step.
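A rough sketch of what that incremental workflow could look like once automated, again with illustrative stand-ins rather than the actual Sir Isaac interfaces:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 1000)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.shape)

# Shuffle once so each subset is a random sample of the full data set.
order = rng.permutation(len(x))
x, y = x[order], y[order]

# Hold out a fixed validation set for monitoring out-of-sample performance.
x_val, y_val = x[-200:], y[-200:]
x_pool, y_pool = x[:-200], y[:-200]

degree = 5  # illustrative fixed model complexity
for n in (10, 30, 100, 300, len(x_pool)):  # start small, gradually add data
    # Refit at each step using all data seen so far.
    fit = np.polyfit(x_pool[:n], y_pool[:n], degree)
    val_error = np.mean((np.polyval(fit, x_val) - y_val) ** 2)
    print(f"n = {n:4d}  validation MSE = {val_error:.4f}")
```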
