# Sequential Two-Sample Testing

Two-sample testing, also known as homogeneity testing, is a fundamental problem in statistics: the goal is to decide whether two independent samples are drawn from the same distribution. Most prior work (with some exceptions) has studied this problem either in the *batch* (fixed-sample-size) setting or in a *sequential* but *parametric* setting. Batch tests risk allocating too many (resp. too few) observations to easier (resp. harder) problem instances, whereas strong parametric assumptions are often violated in practice, limiting the applicability of parametric sequential tests. Our work addresses both issues by proposing a general framework for designing consistent level-$\alpha$ sequential nonparametric two-sample (as well as one-sample) tests.

Our design strategy builds on the principle of *testing-by-betting*, which, in the context of hypothesis testing, establishes an equivalence between gathering evidence against the null and multiplying an initial wealth by a large factor through repeated bets on the observations, with payoff functions bought for their expected value under the null. Within this framework, we show that designing consistent tests reduces to selecting payoff functions that yield a high growth rate of the bettor's wealth in the repeated betting game. We propose a general strategy for selecting these payoff functions as predictable estimates of the *witness function* associated with the variational representations of certain statistical distance measures.
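To make the betting mechanics concrete, here is a minimal illustrative sketch (not the paper's exact construction) of a level-$\alpha$ sequential two-sample test by betting. The bounded witness feature `h`, the sign-based predictable bet `g`, and the fixed betting fraction `lam` are simplifying assumptions made for this example: under the null, each payoff factor has conditional expectation one given the past, so the wealth is a nonnegative martingale and Ville's inequality controls the type-I error at level `alpha`.

```python
import numpy as np

def betting_two_sample_test(xs, ys, alpha=0.05, lam=0.5):
    """Illustrative sequential two-sample test by betting.

    Wealth process: W_t = prod_{s<=t} (1 + lam * g_s * (h(x_s) - h(y_s))),
    where h maps into [0, 1] (so each factor is nonnegative for lam <= 1)
    and g_s is a *predictable* bet computed from past observations only.
    Under H0 (same distribution), E[h(X) - h(Y) | past] = 0, so W_t is a
    nonnegative martingale; rejecting when W_t >= 1/alpha gives a level-alpha
    test by Ville's inequality.
    """
    h = lambda z: 1.0 / (1.0 + np.exp(-z))  # bounded witness feature (assumed)
    wealth, sx, sy, n = 1.0, 0.0, 0.0, 0
    for x, y in zip(xs, ys):
        # predictable witness estimate: sign of the past mean difference
        g = np.sign(sx - sy) if n > 0 else 0.0
        wealth *= 1.0 + lam * g * (h(x) - h(y))
        sx, sy, n = sx + h(x), sy + h(y), n + 1
        if wealth >= 1.0 / alpha:
            return True, n  # reject H0, adaptively early on easy instances
    return False, n  # never crossed the threshold: do not reject
```

On an easy instance (e.g. a clear mean shift) the wealth grows geometrically and the test stops early; under the null it rejects with probability at most `alpha`, no matter how long the stream runs.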

We instantiate the above strategy for several commonly used distance measures to obtain sequential versions of the Kolmogorov-Smirnov (KS) test, the $\chi^2$-test, and the kernel-MMD test. Empirical results demonstrate the ability of the proposed tests to adapt to the unknown hardness of the problem instance.
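As one hedged illustration of such an instantiation, the sketch below bets with a predictable plug-in estimate of the kernel-MMD witness function, computed from past observations only. The Gaussian kernel, the bandwidth `bw`, and the fixed betting fraction `lam` are arbitrary illustrative choices, not the paper's tuned construction.

```python
import numpy as np

def mmd_betting_test(xs, ys, alpha=0.05, lam=0.3, bw=1.0):
    """Illustrative betting test with a plug-in kernel-MMD witness estimate.

    The witness estimate w(z) = mean_k(z, past xs) - mean_k(z, past ys) is
    predictable (uses only past data) and bounded in [-1, 1] for a Gaussian
    kernel, so each wealth factor 1 + lam * (w(x) - w(y)) stays positive for
    lam < 1/2. Under H0 the factors have conditional mean one, so the wealth
    is a nonnegative martingale and the test has level alpha by Ville's
    inequality; under the alternative w tracks the MMD witness and the
    wealth grows at a positive rate.
    """
    k = lambda a, b: np.exp(-(a - b) ** 2 / (2 * bw ** 2))  # Gaussian kernel
    px, py = [], []  # past observations from each sample
    wealth, n = 1.0, 0
    for n, (x, y) in enumerate(zip(xs, ys), start=1):
        if px:
            ax, ay = np.array(px), np.array(py)
            w = lambda z: np.mean(k(z, ax)) - np.mean(k(z, ay))
            wealth *= 1.0 + lam * (w(x) - w(y))
        px.append(x)
        py.append(y)
        if wealth >= 1.0 / alpha:
            return True, n  # reject H0
    return False, n
```

The same template yields sequential KS- or $\chi^2$-style tests by swapping in the witness function of the corresponding distance (e.g. an indicator at an estimated maximizing threshold for KS).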