| field | value |
|---|---|
| title | Multi-objective Bandits: Optimizing the Generalized Gini Index |
| booktitle | Proceedings of the 34th International Conference on Machine Learning |
| year | 2017 |
| volume | 70 |
| series | Proceedings of Machine Learning Research |
| address | |
| month | 0 |
| publisher | PMLR |
| url | |
| abstract | We study the multi-armed bandit (MAB) problem in which the agent receives vectorial feedback encoding many possibly competing objectives to be optimized. The goal of the agent is to find a policy that optimizes these objectives simultaneously in a fair way. This multi-objective online optimization problem is formalized using the Generalized Gini Index (GGI) aggregation function. We propose an online gradient descent algorithm that exploits the convexity of the GGI aggregation function and carefully controls the exploration, achieving a distribution-free regret |
| layout | inproceedings |
| id | busa-fekete17a |
| tex_title | Multi-objective Bandits: Optimizing the Generalized {G}ini Index |
| bibtex_author | R{\'o}bert Busa-Fekete and Bal{\'a}zs Sz{\"o}r{\'e}nyi and Paul Weng and Shie Mannor |
| firstpage | 625 |
| lastpage | 634 |
| page | 625-634 |
| order | 625 |
| cycles | false |
| editor | |
| author | |
| date | 2017-07-17 |
| container-title | Proceedings of the 34th International Conference on Machine Learning |
| genre | inproceedings |
| issued | |
| extras | |
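The abstract above formalizes the problem with the Generalized Gini Index (GGI) aggregation function: a weighted sum of the sorted objective values in which worse outcomes receive larger weights, which is what makes the aggregation fairness-inducing. A minimal sketch of this idea, assuming cost vectors to be minimized and geometrically decreasing weights `w_i = 2^-(i+1)` (an illustrative choice, not taken from this record):

```python
def ggi(costs, weights=None):
    """Generalized Gini Index of a cost vector.

    Sorts the costs in decreasing order and takes a weighted sum with
    decreasing weights, so the worst objective is weighted the most.
    Lower GGI is better; balanced cost vectors score lower than
    unbalanced ones with the same total cost.
    """
    if weights is None:
        # Assumed default: geometrically decreasing weights 1/2, 1/4, ...
        weights = [2.0 ** -(i + 1) for i in range(len(costs))]
    ranked = sorted(costs, reverse=True)  # worst objective first
    return sum(w * c for w, c in zip(weights, ranked))

# Two cost vectors with the same total cost: the balanced one
# has the lower (better) GGI, illustrating the fairness property.
print(ggi([2.0, 2.0]))  # balanced
print(ggi([4.0, 0.0]))  # unbalanced
```

Because GGI is a maximum over permutations of such weighted sums, it is convex in the cost vector, which is the property the paper's online gradient descent algorithm exploits.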