
Should we consider adding bayesmark to the benchmarking suite #203

Open

kveerama opened this issue Jul 18, 2020 · 1 comment

Comments

@kveerama (Contributor)
The bayesmark package is another hyperparameter-tuning library that wraps multiple optimizers. We can add it to our benchmarking suite. Per their documentation:

The builtin optimizers are wrappers on the following projects:

    HyperOpt
    Nevergrad
    OpenTuner
    PySOT
    Scikit-optimize

https://github.com/uber/bayesmark/

We already benchmark against HyperOpt. Note that OpenTuner is an older package, developed at MIT in 2014.
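
For context, here is a rough sketch of the wrapper contract bayesmark uses for its builtin optimizers: an optimizer subclasses `AbstractOptimizer` and implements `suggest`/`observe`. The class and entry-point names follow the bayesmark docs, but the details (e.g. the exact shape of `api_config`) are assumptions here, not a drop-in implementation:

```python
import random

from bayesmark.abstract_optimizer import AbstractOptimizer
from bayesmark.experiment import experiment_main


class RandomOptimizer(AbstractOptimizer):
    """Toy random-search wrapper illustrating the suggest/observe contract."""

    # Name of the wrapped package, e.g. "nevergrad"; None since we wrap nothing.
    primary_import = None

    def __init__(self, api_config):
        # api_config describes each hyperparameter (type, space, range).
        AbstractOptimizer.__init__(self, api_config)

    def suggest(self, n_suggestions=1):
        # Return a list of dicts mapping hyperparameter names to values.
        # Assumes every parameter is real-valued with a "range" entry;
        # a real wrapper would delegate this to the underlying optimizer.
        guesses = []
        for _ in range(n_suggestions):
            guess = {}
            for name, conf in self.api_config.items():
                lo, hi = conf["range"]
                guess[name] = random.uniform(lo, hi)
            guesses.append(guess)
        return guesses

    def observe(self, X, y):
        # Random search ignores feedback; a real wrapper would feed the
        # evaluated points X and objective values y back to its model.
        pass


if __name__ == "__main__":
    experiment_main(RandomOptimizer)
```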

@kveerama (Contributor, Author)

We have tried Nevergrad in the past. Alternatively, we could just add Nevergrad, Scikit-optimize, and PySOT individually.
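
To illustrate what benchmarking one of these backends directly might look like, here is a minimal sketch using scikit-optimize's `gp_minimize` on a toy objective (the objective and bounds are placeholders, not part of our suite):

```python
from skopt import gp_minimize


def objective(params):
    # Toy quadratic objective; a real benchmark would evaluate a model's
    # validation loss at these hyperparameter values.
    x, y = params
    return (x - 0.3) ** 2 + (y + 0.7) ** 2


result = gp_minimize(
    objective,
    dimensions=[(-2.0, 2.0), (-2.0, 2.0)],  # search bounds per parameter
    n_calls=30,                             # evaluation budget
    random_state=0,                         # reproducibility
)
print(result.x, result.fun)  # best parameters and best objective value
```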
