Cannot increase number of iterations from where the GLMM models that will not converge left off during likelihood ratio testing #80
Comments
Can you provide a working example? Otherwise it will be difficult for me to provide a fix.
I had the same/similar problem, so here is a reprex.
This is not limited to method = 'LRT', by the way; it also happens with 'PB'.
@SDAMcIntyre I found that update attempts with 'PB' also did not work. Thank you for the working example. I am working with proprietary data and thus cannot provide my own working example, and I have had limited success generating data sets with this issue, mostly due to my current skill level.
Hey Dr. Singmann,
I really like afex. It has been helpful so far for getting overall tests of categorical variables with more than two levels, except when I have a GLMM that needs extra iterations to converge. For example, I usually run the following code on glmer objects that have trouble converging, and may have to run it a second time (I have yet to need a third to reach convergence).
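Roughly, the restart looks like this (a sketch only; `fit` stands in for my actual glmer object, which I can't share):

```r
## Sketch of the restart idiom described above; `fit` is a placeholder
## for a fitted glmer object.
library(lme4)

ss <- getME(fit, c("theta", "fixef"))      # parameter values where the last fit stopped
fit2 <- update(fit,
               start   = ss,               # restart from those values
               control = glmerControl(optCtrl = list(maxfun = 2e5)))
```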
No problem there: the model converges after the second iteration extension from where the last fit left off. However, when I run afex::mixed to get the overall tests for GLMM effects involving categorical variables with three or more levels, I run into convergence problems that I cannot seem to correct.
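The call is roughly of this form (a sketch only; the formula, data, and family below are placeholders, since the real data are proprietary):

```r
## Placeholder version of the afex::mixed call in question.
library(afex)

m1 <- mixed(
  dv ~ condition * group + (condition | subject),
  data   = dat,
  family = binomial,
  method = "LRT"    # likelihood-ratio test for each fixed effect
)
```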
I am running an LRT because I want to use REML, which helps with the unequal cell sizes and the repeated measures not fully completed by all subjects that are intrinsic to the paradigm I am working with. I have tried several things, and this post would get very long if I explained them all fully. Briefly (rough sketches of these attempts follow the list):
allFit did not work when used on the full.model, even with all the optimizers and maxfun = 2e9, or when requested as an argument (allFit = TRUE) within the mixed function itself.
Attempting an update, as shown above, on mixedclassobjectname$full.model did not work.
Including control = glmerControl(optCtrl = list(maxfun = 2e9)) in the call to mixed also did not work.
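Rough sketches of those attempts, again with placeholder names (depending on the afex version, the full model may be stored as $full.model or $full_model):

```r
## Sketches of the three attempts above; `m1`, `dv ~ ...`, and `dat` are the
## placeholders from the earlier sketch, not the actual model.
library(afex)
library(lme4)

## (1) allFit() on the stored full model, trying all available optimizers
af <- allFit(m1$full.model, maxfun = 2e9)

## (2) update() the stored full model with a higher iteration limit
m1_full2 <- update(m1$full.model,
                   control = glmerControl(optCtrl = list(maxfun = 2e9)))

## (3) pass the control argument through mixed() itself
m2 <- mixed(dv ~ condition * group + (condition | subject),
            data = dat, family = binomial, method = "LRT",
            control = glmerControl(optCtrl = list(maxfun = 2e9)))
```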
Is there a way to start from where the previous models left off (theta & fixef), extend the number of iterations for all the models run during the LRT, and rerun the LRT with those updated models? If so, do you have some example code? If not, do you have any suggestions? I am not a newbie, but not a super advanced R user either. Any help is greatly appreciated, as I imagine others will encounter this issue.
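For concreteness, the kind of thing I am imagining would look roughly like this (a sketch only; I do not know whether the restricted models are exposed this way, and the slot names below are assumptions that may differ across afex versions):

```r
## Hypothetical manual workaround: restart the full and restricted models from
## their last theta/fixef values, then redo the nested-model comparisons.
## `m1` is the placeholder mixed object from above; slot names ($full.model,
## $restricted.models) are assumed and may differ by afex version.
library(lme4)

restart <- function(fit, maxfun = 2e7) {
  ss <- getME(fit, c("theta", "fixef"))      # last parameter values
  update(fit, start = ss,
         control = glmerControl(optCtrl = list(maxfun = maxfun)))
}

full2       <- restart(m1$full.model)
restricted2 <- lapply(m1$restricted.models, restart)

## Likelihood-ratio test for each effect: restricted model vs. restarted full model
lrts <- lapply(restricted2, function(r) anova(full2, r))
```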