Shubhanshu Shekhar

Postdoctoral Researcher

Carnegie Mellon University


I am a postdoctoral researcher in the Department of Statistics and Data Science at CMU, working with Prof. Aaditya Ramdas. My research interests lie in the general areas of machine learning and sequential decision making. I am currently working on sequential nonparametric hypothesis testing and weighted sampling without replacement. In the past, I have worked on Bayesian optimization, active learning, and reinforcement learning.

Here is a link to my CV.


  • Sequential Hypothesis Testing
  • Uncertainty Quantification
  • Gaussian Process Bandits
  • Active Learning


  • PhD in Electrical Engineering

    University of California, San Diego

  • M.E. in Electrical Engineering

    Indian Institute of Science, Bangalore

  • B.E. in Electrical Engineering

    Indian Institute of Technology, Kharagpur

Recent News


[03/12/22] Uploaded a preprint on instance-dependent analysis of kernelized bandits. Update: accepted at ICML 2022.

[12/01/21] Uploaded a preprint on game-theoretic two-sample testing.

[09/28/21] Our paper Adaptive Sampling for Minimax Fair Classification was accepted at NeurIPS 2021.

[06/01/21] Awarded the Dr. Sassan Sheedvash Memorial Award by the ECE Department, UCSD.

Sequential Two-Sample Testing

A game-theoretic approach for sequential nonparametric one- and two-sample testing

Bayesian Optimization

Low complexity algorithms with improved regret bounds

Active Learning in Bandits and MDPs

Optimal scheme for allocating samples to learn $K$ distributions uniformly well.

Kernelized Bandits

Algorithm with uniformly improved regret bounds + a computationally efficient heuristic with better empirical performance

Active Learning for Classification With Abstention

A minimax near-optimal active learning algorithm for classification with abstention.

Sample Complexity of Species Tree Estimation

Precise characterization of the sample complexity of ASTRAL, a popular species tree reconstruction algorithm.



Research Intern

Facebook AI Research

Jun 2019 – Sep 2019 Menlo Park, California
Worked on the design and analysis of an adaptive sampling scheme for estimating probability distributions uniformly well in terms of several commonly used distance measures.

Research Intern

Qualcomm Research

Jun 2018 – Sep 2018 San Diego, California
Worked on the design of algorithms for optimizing the low-level code representation of commonly used tensor operations in deep learning.


Selected list of graduate coursework in ECE and Math departments


  • Matrix Analysis
  • Information Theory
  • Random Processes
  • Parameter Estimation
  • Wave Theory of Information
  • Detection and Estimation Theory
  • Convex Optimization and Applications
  • Stochastic Processes in Dynamical Systems


The number in brackets denotes the number of courses in the series.

  • Real Analysis (3)
  • Applied Algebra (2)
  • Probability Theory (3)
  • Functional Analysis (2)
  • Mathematical Statistics (3)
  • Statistical Learning Theory (1)


I have served as a TA for the following courses.

Recent Posts

[UIT 1] Regret bound for mixture method

Summary: Derivation of an upper bound on the regret of the mixture method (KT scheme) for individual-sequence prediction. Suppose $\mathcal{X} = \{1, \ldots, m\}$ for some integer $m \geq 2$ and let $\Theta = \Delta_m$ denote the $(m-1)$-dimensional probability simplex.
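In this setup, the bound being derived can be sketched as follows (the exact constant depends on the prior; the notation here is a standard presentation, not necessarily the post's): the KT mixture predictor averages the i.i.d. models over $\Delta_m$ under the Dirichlet$(1/2, \ldots, 1/2)$ prior $w$,

$$
q(x^n) = \int_{\Delta_m} \prod_{t=1}^{n} \theta_{x_t}\, w(\theta)\, d\theta,
\qquad
\sup_{x^n \in \mathcal{X}^n} \left[ \log \frac{1}{q(x^n)} - \inf_{\theta \in \Delta_m} \log \frac{1}{\prod_{t=1}^{n} \theta_{x_t}} \right] \;\le\; \frac{m-1}{2} \log n + c_m,
$$

where $c_m$ is a constant depending only on $m$. The $\frac{m-1}{2}\log n$ leading term is the key takeaway: one half-log-$n$ of regret per free parameter of the i.i.d. model class.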

Lower Bound for Active Learning in Bandits via Le Cam's method

Summary: We introduce Le Cam's method for obtaining minimax lower bounds for statistical estimation problems, which proceeds by relating the probability of error of a binary hypothesis testing problem to the total variation distance between the two distributions.
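As a pointer to the statement the post builds toward (notation assumed here, not taken from the post): for any two parameters $\theta_0, \theta_1$ with $d(\theta_0, \theta_1) \geq 2\delta$ and corresponding distributions $P_0, P_1$, Le Cam's two-point inequality gives

$$
\inf_{\hat{\theta}} \; \max_{i \in \{0, 1\}} \; \mathbb{E}_{P_i}\big[ d(\hat{\theta}, \theta_i) \big] \;\geq\; \frac{\delta}{2} \left( 1 - \mathrm{TV}(P_0, P_1) \right),
$$

so any instance pair that is statistically hard to distinguish (small total variation) but well separated in $d$ forces a large minimax risk.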

Recent Publications

Instance-Dependent Regret Analysis of Kernelized Bandits

We study the kernelized bandit problem, which involves designing an adaptive strategy for querying a noisy zeroth-order oracle to …

Game-Theoretic Formulations of Sequential Nonparametric One- and Two-Sample Tests

A general framework for designing sequential tests by repeatedly betting against the null.
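For context, a brief sketch of the betting construction behind such tests (the symbols here are illustrative, not necessarily the paper's exact notation): starting from initial wealth $\mathcal{K}_0 = 1$, at each round the bettor picks a payoff $f_t$ satisfying $\mathbb{E}_{H_0}[f_t(Z_t) \mid \mathcal{F}_{t-1}] = 0$ and a bet size $\lambda_t$ keeping the wealth nonnegative, and updates

$$
\mathcal{K}_t = \mathcal{K}_{t-1} \left( 1 + \lambda_t f_t(Z_t) \right).
$$

Under the null, $(\mathcal{K}_t)_{t \geq 0}$ is a nonnegative martingale, so Ville's inequality gives $\mathbb{P}_{H_0}\left( \exists\, t : \mathcal{K}_t \geq 1/\alpha \right) \leq \alpha$; rejecting the null the first time the wealth crosses $1/\alpha$ therefore yields a level-$\alpha$ sequential test.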

Adaptive Sampling for Minimax Fair Classification

An adaptive scheme for incrementally constructing a training dataset that balances the proportion of inputs from different protected groups to ensure the highest worst-case classification accuracy.

Significance of Gradient Information in Bayesian Optimization

A theoretical analysis of possible improvement in regret when given access to gradient information in Bayesian Optimization.

Multi-Scale Zero-Order Optimization of Smooth Functions in an RKHS

We aim to optimize a black-box function $f : \mathcal{X} \to \mathbb{R}$ under the assumption that $f$ is Hölder smooth and has bounded norm in the Reproducing …