Research
I work on developing principled statistical and algorithmic methods for solving practical problems in machine learning and data science. In particular, I have worked on topics including sequential hypothesis testing and change detection, bandits and Bayesian optimization, active learning, and adaptive sampling.
News
Preprints
Publications
For the complete list of publications and collaborators, please see my Google Scholar page.
-
Reducing sequential change detection to sequential estimation.
Shubhanshu Shekhar and Aaditya Ramdas.
International Conference on Machine Learning (ICML), 2024.
-
Deep anytime-valid hypothesis testing.
Teodora Pandeva, Patrick Forré, Aaditya Ramdas, and Shubhanshu Shekhar.
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.
-
A Permutation-free Kernel Independence Test.
Shubhanshu Shekhar, Ilmun Kim, and Aaditya Ramdas.
Journal of Machine Learning Research (JMLR), 2023.
-
Nonparametric Two-Sample Testing by Betting.
Shubhanshu Shekhar and Aaditya Ramdas.
IEEE Transactions on Information Theory, 2023.
-
Sequential change detection via backward confidence sequences.
Shubhanshu Shekhar and Aaditya Ramdas.
International Conference on Machine Learning (ICML), 2023.
-
Risk-limiting financial audits via weighted sampling without replacement.
Shubhanshu Shekhar, Ziyu Xu, Zachary Lipton, Pierre Liang, and Aaditya Ramdas.
Conference on Uncertainty in Artificial Intelligence (UAI), 2023.
-
A Permutation-free Kernel Two-Sample Test.
Shubhanshu Shekhar, Ilmun Kim, and Aaditya Ramdas.
Neural Information Processing Systems (NeurIPS), 2022.
Accepted for oral presentation.
-
Instance-dependent regret analysis of kernelized bandits.
Shubhanshu Shekhar and Tara Javidi.
International Conference on Machine Learning (ICML), 2022.
-
Multi-scale zeroth-order optimization of smooth functions in an RKHS.
Madison Lee, Shubhanshu Shekhar, and Tara Javidi.
International Symposium on Information Theory (ISIT), 2022.
-
Adaptive sampling for minimax fair classification.
Shubhanshu Shekhar, Greg Fields, Mohammad Ghavamzadeh, and Tara Javidi.
Neural Information Processing Systems (NeurIPS), 2021.
-
Active learning for classification with abstention.
Shubhanshu Shekhar, Mohammad Ghavamzadeh, and Tara Javidi.
IEEE Journal on Selected Topics in Information Theory, 2021.
Among six finalists for Jack K. Wolf Student paper award, ISIT 2020.
-
Significance of gradient information in Bayesian optimization.
Shubhanshu Shekhar and Tara Javidi.
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021.
-
Uncertainty-aware safe exploratory planning using Gaussian processes and neural control contraction metric.
Dawei Sun, MJ Khojasteh, Shubhanshu Shekhar, and Chuchu Fan.
Learning for Dynamics and Control (L4DC), 2021.
-
Active model estimation in Markov decision processes.
Jean Tarbouriech, Shubhanshu Shekhar, Matteo Pirotta, Mohammad Ghavamzadeh, and Alessandro Lazaric.
Conference on Uncertainty in Artificial Intelligence (UAI), 2020.
-
Adaptive sampling for learning probability distributions.
Shubhanshu Shekhar, Tara Javidi, and Mohammad Ghavamzadeh.
International Conference on Machine Learning (ICML), 2020.
-
Multiscale Gaussian process level set estimation.
Shubhanshu Shekhar and Tara Javidi.
International Conference on Artificial Intelligence and Statistics (AISTATS), 2019.
-
Species tree estimation using ASTRAL: how many genes are enough?
Shubhanshu Shekhar, Sebastien Roch, and Siavash Mirarab.
IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), 2018.
-
Fully decentralized federated learning.
Anusha Lalitha, Shubhanshu Shekhar, Tara Javidi, and Farinaz Koushanfar.
Workshop on Bayesian Deep Learning (NeurIPS), 2018.
-
Gaussian Process Bandits with Adaptive Discretization.
Shubhanshu Shekhar and Tara Javidi.
Electronic Journal of Statistics, 2018.