
Using a Pretrained XGBoost Model #51

Open
BardiaKh opened this issue Mar 3, 2022 · 2 comments · May be fixed by #60
Labels: enhancement (New feature or request), next minor release

Comments


BardiaKh commented Mar 3, 2022

Describe the feature and the current state.

In the XGBSEDebiasedBCE model, there is no way to overwrite the internal model with an already-trained one. This would be useful because XGBSE does not expose some of the low-level APIs of the XGBoost model, for example sample/feature weighting. With this feature, one could pass the data together with an already-trained XGBoost model and then run the logistic regression (LR) step on top of that model.
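For context, per-row sample weighting is available through XGBoost's low-level training API. A minimal sketch of training a survival booster directly with that API (all data and parameter values here are illustrative, and XGBSE's own defaults may differ):

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
time = rng.exponential(scale=10.0, size=500)
event = rng.integers(0, 2, size=500)

# XGBoost's Cox convention: a negative label marks a censored observation.
y = np.where(event == 1, time, -time)

# Per-row sample weights -- the low-level knob that XGBSE's fit() does not expose.
weights = rng.uniform(0.5, 1.5, size=500)
dtrain = xgb.DMatrix(X, label=y, weight=weights)

params = {"objective": "survival:cox", "learning_rate": 0.05, "max_depth": 4}
bst = xgb.train(params, dtrain, num_boost_round=100)
```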

Will this change a current behavior? How?

No. It would only add a method to the XGBSEDebiasedBCE model that replaces `self.bst` and runs the multi-task LR step.
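A sketch of how such a hook might be used, reusing the booster trained above; `fit_lr_with_pretrained_booster` is a hypothetical method name for the requested feature, not part of XGBSE's current API:

```python
from xgbse import XGBSEDebiasedBCE
from xgbse.converters import convert_to_structured

# XGBSE expects a structured survival label built from (time, event).
y_struct = convert_to_structured(time, event)

model = XGBSEDebiasedBCE()
# Hypothetical method (the feature requested here): assign the externally
# trained booster to self.bst, skip the XGBoost training step, and fit only
# the multi-task logistic regressions on top of its leaf embeddings.
model.fit_lr_with_pretrained_booster(bst, X, y_struct)
```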

BardiaKh added the enhancement label on Mar 3, 2022
davivieirab (Contributor) commented Aug 2, 2022

Thanks for the report, @BardiaKh.
We are currently working on a way to replace the first-step XGBoost model in both the XGBSEDebiasedBCE and XGBSEStackedWeibull modules. Both will be part of our next minor release.

BardiaKh (Author) commented Aug 2, 2022

Great!

BardiaKh changed the title from "Using a Pretrained XBBoost Model" to "Using a Pretrained XGBoost Model" on Aug 2, 2022
davivieirab linked pull request #60 on Aug 8, 2022 that will close this issue