I am a postdoctoral researcher in the Department of Statistics and Data Science at CMU, working with Prof. Aaditya Ramdas. My research interests lie in the general areas of machine learning and sequential decision making. I am currently working on problems in sequential nonparametric hypothesis testing and weighted sampling without replacement. In the past, I have worked on Bayesian Optimization, Active Learning, and Reinforcement Learning.

- Sequential Hypothesis Testing
- Uncertainty Quantification
- Gaussian Process Bandits
- Active Learning

PhD in Electrical Engineering

University of California, San Diego

M.E. in Electrical Engineering

Indian Institute of Science, Bangalore

B.E. in Electrical Engineering

Indian Institute of Technology, Kharagpur

**[03/12/22]** Uploaded a preprint on instance-dependent analysis of kernelized bandits. Update: Accepted @ICML2022.

**[12/01/21]** Uploaded a preprint on game-theoretic two-sample testing.

**[09/28/21]** Our paper *Adaptive Sampling for Minimax Fair Classification* was accepted at NeurIPS 2021.

**[06/01/21]** Awarded the Dr. Sassan Sheedvash Memorial Award by ECE Department, UCSD.

A game-theoretic approach for sequential nonparametric one- and two-sample testing

Optimal scheme for allocating samples to learn $K$ distributions uniformly well.

An algorithm with uniformly improved regret bounds, plus a computationally efficient heuristic with better empirical performance.

A minimax near-optimal active learning algorithm for classification with abstention.

Precise characterization of the sample complexity of ASTRAL, a popular species-tree reconstruction algorithm.

Worked on the design and analysis of an adaptive sampling scheme for estimating
probability distributions uniformly well in terms of several commonly used distance
measures.

Worked on the design of algorithms for optimizing the low-level code representation of commonly used tensor operations in Deep Learning.

Selected list of graduate coursework in ECE and Math departments

- Matrix Analysis
- Information Theory
- Random Processes
- Parameter Estimation
- Wave Theory of Information
- Detection and Estimation Theory
- Convex Optimization and Applications
- Stochastic Processes in Dynamical Systems

The number in parentheses denotes the number of courses in the series.

- Real Analysis (3)
- Applied Algebra (2)
- Probability Theory (3)
- Functional Analysis (2)
- Mathematical Statistics (3)
- Statistical Learning Theory (1)

I have served as a TA for the following courses.

- ECE 101 (Linear System Fundamentals), Spring 2018
- ECE 153 (Probability and Random Processes), Fall 2018
- ECE 158A (Data Networks), Fall 2016
- ECE 257A (Graduate Communication Networks), Winter 2017
- ECE 267 (Network Algorithms and Analysis), Winter 2019
- ECE 272A (Dynamical Systems), Winter 2018

Summary: Derivation of an upper bound on the regret of the mixture method (the KT scheme) for individual sequence prediction.
Suppose $\mathcal{X} = \{1, \ldots, m\}$ for some integer $m \geq 2$ and let $\Theta = \Delta_m$ denote the $m-1$ dimensional probability simplex.
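The KT mixture predictor admits a very short implementation; the sketch below is my own illustration of the setup (function names and the specific regret check are not from the note):

```python
import math

def kt_prob(counts, symbol, m):
    """KT (add-1/2) predictive probability of `symbol` given past counts."""
    n = sum(counts)
    return (counts[symbol] + 0.5) / (n + m / 2)

def kt_log_loss(seq, m):
    """Cumulative log loss (in nats) of the KT mixture on a sequence
    over the alphabet {0, ..., m-1}."""
    counts = [0] * m
    loss = 0.0
    for x in seq:
        loss -= math.log(kt_prob(counts, x, m))
        counts[x] += 1
    return loss

# The regret against the best fixed distribution in hindsight is at most
# ((m-1)/2) * log(n) + O(1) for every individual sequence of length n.
```

Since the KT mixture assigns each sequence a probability that is a weighted average over $\Theta = \Delta_m$, its log loss can never beat the best fixed $\theta$ in hindsight, which is why the regret is always nonnegative.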

Summary: We introduce Le Cam's method for obtaining minimax lower bounds for statistical estimation problems, which proceeds by relating the probability of error of a binary hypothesis testing problem to the total-variation distance between the two distributions.
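In symbols, the two-point form of the bound reads as follows (a standard statement of the method, written here assuming the loss $d$ is a metric; the constant depends on the exact reduction, and $1/4$ is the common version):

```latex
\inf_{\hat{\theta}} \sup_{\theta \in \{\theta_0, \theta_1\}}
  \mathbb{E}_{\theta}\!\left[ d(\hat{\theta}, \theta) \right]
\;\geq\; \frac{d(\theta_0, \theta_1)}{4}
  \left( 1 - \mathrm{TV}\!\left(P_{\theta_0}, P_{\theta_1}\right) \right)
```

The reduction behind it: any estimator induces a test between $\theta_0$ and $\theta_1$, the best test errs with probability $(1 - \mathrm{TV})/2$, and by the triangle inequality an error forces a loss of at least $d(\theta_0, \theta_1)/2$.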

We study the kernelized bandit problem, which involves designing an adaptive strategy for querying a noisy zeroth-order oracle to …

A general framework for designing sequential tests by repeatedly betting against the null.

An adaptive scheme for incrementally constructing a training dataset that balances the proportion of inputs from different protected groups, so as to maximize the worst-case classification accuracy.

A theoretical analysis of possible improvement in regret when given access to gradient information in Bayesian Optimization.

We aim to optimize a black-box function $f: \mathcal{X} \to \mathbb{R}$ under the assumption that $f$ is Hölder smooth and has bounded norm in the Reproducing …