I am a Postdoctoral researcher in the Department of Statistics and Data Science at CMU, working with Prof. Aaditya Ramdas. My research interests lie in the general areas of machine learning and sequential decision making. I am currently working on the problem of sequential nonparametric hypothesis testing. In the past, I have worked on the topics of Bayesian Optimization, Active Learning, and Reinforcement Learning.

- Gaussian Process Bandits
- Active Learning
- Optimization
- Reinforcement Learning

PhD in Electrical Engineering

University of California, San Diego

M.E. in Electrical Engineering

Indian Institute of Science, Bangalore

B.E. in Electrical Engineering

Indian Institute of Technology, Kharagpur

**[01/22/21]** Our paper Significance of Gradient Information in Bayesian Optimization was accepted at AISTATS 2021 (final version in preparation).

**[11/20/20]** Awarded the Shannon Memorial Fellowship by CMRR, UCSD for the academic year 2020/21.

**[06/01/20]** Our paper Adaptive Sampling for Learning Probability Distributions was accepted at ICML 2020.

**[05/14/20]** Our paper Active Model Estimation in Markov Decision Processes was accepted at UAI 2020.

Low complexity algorithms with improved regret bounds

Optimal scheme for allocating samples to learn $K$ distributions uniformly well.

Algorithm with uniformly improved regret bounds + a computationally efficient heuristic with better empirical performance

A minimax near-optimal active learning algorithm for classification with abstention.

Precise characterization of the sample complexity of ASTRAL, a popular species tree reconstruction algorithm.

Worked on the design and analysis of an adaptive sampling scheme for estimating
probability distributions uniformly well in terms of several commonly used distance
measures.

Worked on the design of algorithms for optimizing the low-level code representation of commonly used tensor operations in Deep Learning.

Selected list of graduate coursework in ECE and Math departments

- Matrix Analysis
- Information Theory
- Random Processes
- Parameter Estimation
- Wave Theory of Information
- Detection and Estimation Theory
- Convex Optimization and Applications
- Stochastic Processes in Dynamical Systems

The number in parentheses denotes the number of courses in the series.

- Real Analysis (3)
- Applied Algebra (2)
- Probability Theory (3)
- Functional Analysis (2)
- Mathematical Statistics (3)
- Statistical Learning Theory (1)

I have served as a TA for the following courses.

- ECE 101 (Linear System Fundamentals), Spring 2018
- ECE 153 (Probability and Random Processes), Fall 2018
- ECE 158A (Data Networks), Fall 2016
- ECE 257A (Graduate Communication Networks), Winter 2017
- ECE 267 (Network Algorithms and Analysis), Winter 2019
- ECE 272A (Dynamical Systems), Winter 2018

Summary: Derivation of an upper bound on the regret of the mixture method (KT scheme) for individual sequence prediction.
Suppose $\mathcal{X} = \{1, \ldots, m\}$ for some integer $m \geq 2$ and let $\Theta = \Delta_m$ denote the $m-1$ dimensional probability simplex.
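To make the scheme concrete, the KT (Krichevsky–Trofimov) mixture over $\Theta = \Delta_m$ reduces to a simple add-$1/2$ predictive rule. Below is a minimal sketch of that rule (the function name and the use of exact rational arithmetic are my own choices, not from the notes):

```python
from fractions import Fraction


def kt_sequential_probs(seq, m):
    """Krichevsky-Trofimov add-1/2 predictor over the alphabet {0, ..., m-1}.

    For each symbol x_t in `seq`, returns the predictive probability the KT
    mixture assigns to it before it is revealed:
        P(x_t = j | x^{t-1}) = (n_j + 1/2) / (t - 1 + m/2),
    where n_j is the number of occurrences of j among the first t-1 symbols.
    """
    counts = [0] * m
    probs = []
    for t, x in enumerate(seq):  # t = number of symbols seen so far
        probs.append(Fraction(2 * counts[x] + 1, 2 * t + m))
        counts[x] += 1
    return probs
```

The product of the returned predictive probabilities equals the KT mixture probability of the whole sequence, which is the quantity whose regret against the best fixed $\theta \in \Delta_m$ the summary above bounds.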

Summary: We introduce Le Cam's method for obtaining minimax lower bounds for statistical estimation problems, which proceeds by relating the probability of error of a binary hypothesis testing problem to the total-variation distance between the two distributions.
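In its simplest two-point form, the reduction described above can be stated as follows (a sketch; here $\rho$ is a semi-metric on the parameter space and $P_{\theta_0}, P_{\theta_1}$ are the two candidate distributions):

```latex
% Le Cam's two-point lemma: if \rho(\theta_0, \theta_1) \ge 2\delta, then
\inf_{\hat{\theta}} \; \sup_{\theta \in \{\theta_0, \theta_1\}}
  \mathbb{E}_{\theta}\!\left[ \rho\big(\hat{\theta}, \theta\big) \right]
  \;\ge\; \delta \left( 1 - \mathrm{TV}\!\left(P_{\theta_0}, P_{\theta_1}\right) \right).
```

When the total-variation distance is small, no test can reliably distinguish the two hypotheses, so any estimator must incur error of order $\delta$ on at least one of them.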

We aim to optimize a black-box function $f: \mathcal{X} \to \mathbb{R}$ under the assumption that $f$ is Hölder smooth and has bounded norm in the Reproducing …

We consider the problem of allocating samples to a finite set of discrete distributions in order to learn them uniformly well in terms …

We construct and analyze active learning algorithms for the problem of binary classification with abstention. We consider three …

We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision …

In this paper, the problem of estimating the level set of a black-box function from noisy and expensive evaluation queries is …