Thresholded Linear Bandits
Published in International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
This paper introduces thresholded linear bandits, extending linear bandit theory to settings where the learner does not observe the reward itself but only binary feedback indicating whether the reward exceeds a threshold. We develop algorithms and provide regret bounds for this challenging setting.
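To make the feedback model concrete, here is a minimal Python sketch of one interaction round under assumed linear rewards with Gaussian noise. The parameter `theta_star`, threshold `tau`, and noise level are illustrative assumptions, and this is not the paper's algorithm, only the observation model it studies.

```python
import numpy as np

# Sketch of the thresholded feedback model (assumptions, not the paper's code):
# rewards are linear in the chosen arm, but the learner only observes whether
# the noisy reward crossed a fixed threshold.

rng = np.random.default_rng(0)
d = 5                                   # feature dimension (assumed)
theta_star = rng.normal(size=d)         # unknown parameter (assumed)
theta_star /= np.linalg.norm(theta_star)
tau = 0.5                               # threshold (assumed)

def thresholded_feedback(x, noise_std=0.1):
    """Return binary feedback 1{<x, theta_star> + noise >= tau}."""
    reward = x @ theta_star + rng.normal(scale=noise_std)
    return int(reward >= tau)

# Example round: play a unit-norm arm and observe only the binary signal.
x_t = rng.normal(size=d)
x_t /= np.linalg.norm(x_t)
y_t = thresholded_feedback(x_t)
print("binary feedback:", y_t)
```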
Recommended citation: Mehta, N., Komiyama, J., Nguyen, A., Potluru, V., & Grant-Hagen, M. (2023). “Thresholded Linear Bandits.” In International Conference on Artificial Intelligence and Statistics (AISTATS 2023).
Download Paper