Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Posts
Future Blog Post
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false, as in the sketch below.
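A minimal sketch of that setting, assuming a standard Jekyll site such as the academicpages template (where the configuration file is conventionally named _config.yml); the comment reflects Jekyll's documented behavior for the future option:

    # _config.yml -- Jekyll site configuration (illustrative excerpt)
    # When false, posts dated in the future are excluded from the built site
    # until their publication date passes and the site is rebuilt.
    future: false

Conversely, with future: true, future-dated posts such as the one above are built right away, which matches the "will show up by default" behavior described in this entry.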
Blog Post number 4
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Blog Post number 3
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Blog Post number 2
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Blog Post number 1
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
portfolio
Portfolio item number 1
Short description of portfolio item number 1
Portfolio item number 2
Short description of portfolio item number 2
publications
Regret Lower Bound and Optimal Algorithm in Dueling Bandit Problem
Published in Conference on Learning Theory (COLT), 2015
This paper establishes regret lower bounds and develops optimal algorithms for dueling bandit problems, providing fundamental theoretical contributions to preference-based learning.
Recommended citation: Komiyama, J., Honda, J., Kashima, H., & Nakagawa, H. (2015). "Regret Lower Bound and Optimal Algorithm in Dueling Bandit Problem." In Proceedings of the 28th Annual Conference on Learning Theory (COLT 2015), 1141-1154.
Download Paper
Optimal Regret Analysis of Thompson Sampling in Stochastic Multi-armed Bandit Problem with Multiple Plays
Published in International Conference on Machine Learning (ICML), 2015
This paper provides an optimal regret analysis of Thompson sampling in stochastic multi-armed bandit problems with multiple plays, establishing theoretical guarantees for Bayesian bandit algorithms; the standard notion of cumulative regret behind such guarantees is recalled below.
Recommended citation: Komiyama, J., Honda, J., & Nakagawa, H. (2015). "Optimal Regret Analysis of Thompson Sampling in Stochastic Multi-armed Bandit Problem with Multiple Plays." In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), 1152-1161.
Download Paper
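For readers unfamiliar with the terminology used throughout this list, here is the standard definition of cumulative (pseudo-)regret in a K-armed stochastic bandit, stated in generic notation rather than the paper's own:

    $$\mathrm{Reg}(T) \;=\; \sum_{t=1}^{T}\bigl(\mu^{*}-\mu_{a_t}\bigr),
    \qquad \mu^{*}=\max_{1\le k\le K}\mu_k,$$

where \mu_k is the mean reward of arm k and a_t is the arm pulled at round t. In the multiple-play setting studied in this paper, the single best arm is replaced by the L arms with the highest means, and the per-round comparison is between the sum of those means and the sum of the means of the L arms actually played.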
Regret Lower Bound and Optimal Algorithm in Finite Stochastic Partial Monitoring
Published in Advances in Neural Information Processing Systems (NIPS), 2015
This paper establishes regret lower bounds and develops optimal algorithms for finite stochastic partial monitoring, extending bandit theory to partial feedback settings.
Recommended citation: Komiyama, J., Honda, J., & Nakagawa, H. (2015). "Regret Lower Bound and Optimal Algorithm in Finite Stochastic Partial Monitoring." In Advances in Neural Information Processing Systems 28 (NIPS 2015), 1792-1800.
Download Paper
Copeland Dueling Bandit Problem: Regret Lower Bound, Optimal Algorithm, and Computationally Efficient Algorithm
Published in International Conference on Machine Learning (ICML), 2016
This paper provides comprehensive analysis of the Copeland dueling bandit problem, including regret lower bounds, optimal algorithms, and computationally efficient implementations.
Recommended citation: Komiyama, J., Honda, J., & Nakagawa, H. (2016). "Copeland Dueling Bandit Problem: Regret Lower Bound, Optimal Algorithm, and Computationally Efficient Algorithm." In Proceedings of the 33rd International Conference on Machine Learning (ICML 2016), 1235-1244.
Download Paper
Position-based Multiple-play Bandit Problem with Unknown Position Bias
Published in Advances in Neural Information Processing Systems (NIPS), 2017
This paper addresses position-based multiple-play bandit problems with unknown position bias, providing theoretical analysis and practical algorithms.
Recommended citation: Komiyama, J., Honda, J., & Takeda, A. (2017). "Position-based Multiple-play Bandit Problem with Unknown Position Bias." In Advances in Neural Information Processing Systems 30 (NIPS 2017), 5005-5015.
Download Paper
Statistical Emerging Pattern Mining with Multiple Testing Correction
Published in ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2017
This paper develops statistical methods for emerging pattern mining with multiple testing correction, providing rigorous statistical guarantees for pattern discovery.
Recommended citation: Komiyama, J., Ishihata, M., Arimura, H., Nishibayashi, T., & Minato, S. (2017). "Statistical Emerging Pattern Mining with Multiple Testing Correction." In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2017), 897-906.
Download Paper
Nonconvex Optimization for Regression with Fairness Constraints
Published in International Conference on Machine Learning (ICML), 2018
This paper develops nonconvex optimization methods for regression problems with fairness constraints, addressing algorithmic bias in machine learning.
Recommended citation: Komiyama, J., Takeda, A., Honda, J., & Shimao, H. (2018). "Nonconvex Optimization for Regression with Fairness Constraints." In Proceedings of the 35th International Conference on Machine Learning (ICML 2018).
Download Paper
Scaling Multi-Armed Bandit Algorithms
Published in ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2019
This paper addresses scalability challenges in multi-armed bandit algorithms, developing methods for handling large-scale bandit problems efficiently.
Recommended citation: Fouché, E., Komiyama, J., & Böhm, K. (2019). "Scaling Multi-Armed Bandit Algorithms." In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2019).
Download Paper
Bridging Offline and Online Experimentation: Constraint Active Search for Deployed Performance Optimization
Published in Transactions of Machine Learning Research, 2022
This paper develops methods for bridging offline and online experimentation through constraint active search, enabling effective performance optimization in deployed systems.
Recommended citation: Komiyama, J., Malkomes, G., Cheng, B., & McCourt, M. (2022). "Bridging Offline and Online Experimentation: Constraint Active Search for Deployed Performance Optimization." Transactions of Machine Learning Research.
Download Paper
Minimax Optimal Algorithms for Fixed-Budget Best Arm Identification
Published in Advances in Neural Information Processing Systems (NeurIPS), 2022
This paper develops minimax optimal algorithms for fixed-budget best arm identification, providing theoretical guarantees and practical implementations; the standard performance measures for this setting are sketched below.
Recommended citation: Komiyama, J., Tsuchiya, T., & Honda, J. (2022). "Minimax Optimal Algorithms for Fixed-Budget Best Arm Identification." In Advances in Neural Information Processing Systems (NeurIPS 2022).
Download Paper
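As generic background for this and the other best-arm-identification entries in this list (including the Bayesian simple-regret and fixed-confidence papers below), and not notation taken from the papers themselves: in the fixed-budget setting a learner pulls arms for T rounds and then outputs a guess \hat{a}_T for the best arm a^{*}, and performance is typically measured by

    $$e_T \;=\; \Pr\bigl(\hat{a}_T \neq a^{*}\bigr)
    \qquad\text{or}\qquad
    r_T \;=\; \mu^{*} - \mathbb{E}\bigl[\mu_{\hat{a}_T}\bigr],$$

the probability of misidentification and the expected simple regret, respectively. Minimax optimality refers to matching the best guarantee achievable over a worst-case bandit instance.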
Anytime Capacity Expansion in Medical Residency Match by Monte Carlo Tree Search
Published in International Joint Conference on Artificial Intelligence (IJCAI), 2022
This paper applies Monte Carlo tree search for flexible-capacity mechanism design in medical residency matching, addressing NP-Complete optimization problems.
Recommended citation: Abe, K., Komiyama, J., & Iwasaki, A. (2022). "Anytime Capacity Expansion in Medical Residency Match by Monte Carlo Tree Search." In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI 2022).
Download Paper
High-dimensional Contextual Bandit Problem without Sparsity
Published in Advances in Neural Information Processing Systems (NeurIPS), 2023
This paper addresses high-dimensional contextual bandit problems without sparsity assumptions, providing theoretical guarantees and practical algorithms.
Recommended citation: Komiyama, J., & Imaizumi, M. (2023). "High-dimensional Contextual Bandit Problem without Sparsity." In Advances in Neural Information Processing Systems (NeurIPS 2023).
Download Paper
Thresholded Linear Bandits
Published in International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
This paper introduces thresholded linear bandits, extending linear bandit theory to scenarios where only thresholded feedback is available.
Recommended citation: Mehta, N., Komiyama, J., Nguyen, A., Potluru, V., & Grant-Hagen, M. (2023). "Thresholded Linear Bandits." In International Conference on Artificial Intelligence and Statistics (AISTATS 2023).
Download Paper
Posterior Tracking Algorithm for Classification Bandits
Published in International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
This paper develops posterior tracking algorithms for classification bandit problems, providing efficient methods for learning classifiers in sequential decision-making settings.
Recommended citation: Tabata, K., Komiyama, J., Nakamura, A., & Komatsuzaki, T. (2023). "Posterior Tracking Algorithm for Classification Bandits." In International Conference on Artificial Intelligence and Statistics (AISTATS 2023).
Download Paper
On Statistical Discrimination as a Failure of Social Learning: A Multi-Armed Bandit Approach
Published in Management Science, 2024
This paper analyzes statistical discrimination through the lens of multi-armed bandit theory, showing how it emerges as a failure of social learning.
Recommended citation: Komiyama, J., & Noda, S. (2024). "On Statistical Discrimination as a Failure of Social Learning: A Multi-Armed Bandit Approach." Management Science. To appear.
Download Paper
Finite-time Analysis of Globally Nonstationary Multi-Armed Bandits
Published in Journal of Machine Learning Research, 2024
This paper provides finite-time analysis for globally nonstationary multi-armed bandit problems, extending theoretical guarantees to dynamic environments.
Recommended citation: Komiyama, J., Fouché, E., & Honda, J. (2024). "Finite-time Analysis of Globally Nonstationary Multi-Armed Bandits." Journal of Machine Learning Research. Vol. 25 (No. 112), 1-56.
Download Paper
Rate-Optimal Bayesian Simple Regret in Best Arm Identification
Published in Mathematics of Operations Research, 2024
This paper establishes rate-optimal bounds for Bayesian simple regret in best arm identification problems, providing theoretical guarantees for Bayesian bandit algorithms.
Recommended citation: Komiyama, J., Ariu, K., Kato, M., & Qin, C. (2024). "Rate-Optimal Bayesian Simple Regret in Best Arm Identification." Mathematics of Operations Research. Vol. 49 (No. 3), 1629-1646.
Download Paper
Strategic Choices of Migrants and Smugglers in the Central Mediterranean Sea
Published in PLoS ONE, 2024
This paper analyzes the strategic interactions between migrants and smugglers in the Central Mediterranean Sea using game-theoretic and machine learning approaches.
Recommended citation: Pham, K. H., & Komiyama, J. (2024). "Strategic Choices of Migrants and Smugglers in the Central Mediterranean Sea." PLoS ONE. Vol. 19 (No. 4), e0300553.
Download Paper
Learning Fair Division from Bandit Feedback
Published in International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
This paper develops algorithms for learning fair division mechanisms from bandit feedback, combining fairness considerations with sequential decision-making.
Recommended citation: Yamada, H., Komiyama, J., Abe, K., & Iwasaki, A. (2024). "Learning Fair Division from Bandit Feedback." In International Conference on Artificial Intelligence and Statistics (AISTATS 2024).
Download Paper
Fixed Confidence Best Arm Identification in the Bayesian Setting
Published in Advances in Neural Information Processing Systems (NeurIPS), 2024
This paper addresses fixed confidence best arm identification in the Bayesian setting, providing theoretical analysis and practical algorithms for Bayesian bandit problems.
Recommended citation: Jang, K., Komiyama, J., & Yamazaki, K. (2024). "Fixed Confidence Best Arm Identification in the Bayesian Setting." In Advances in Neural Information Processing Systems (NeurIPS 2024).
Download Paper
talks
Talk 1 on Relevant Topic in Your Field
Published:
This is a description of your talk, which is a markdown file that can be all markdown-ified like any other post. Yay markdown!
Conference Proceeding talk 3 on Relevant Topic in Your Field
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
teaching
Teaching experience 1
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Teaching experience 2
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.