Eric Price: Tight bounds for learning a mixture of two Gaussians

Wednesday, January 14, 2015 - 4:00pm to 5:00pm
Location: 
32-G575
Speaker: 
Eric Price
Affiliation: 
UT Austin

We consider the problem of identifying the parameters of an unknown mixture of two arbitrary d-dimensional Gaussians from a sequence of independent random samples. Our main results are upper and lower bounds giving a computationally efficient moment-based estimator with an optimal convergence rate, thus resolving a problem introduced by Pearson (1894). Denoting by σ^2 the variance of the unknown mixture, we prove that Θ(σ^12) samples are necessary and sufficient to estimate each parameter up to constant additive error when d=1. Our upper bound extends to arbitrary dimension d>1 up to a (provably necessary) logarithmic loss in d using a novel, yet simple, dimensionality reduction technique. We further identify several interesting special cases where the sample complexity is notably smaller than our optimal worst-case bound. For instance, if the means of the two components are separated by Ω(σ), the sample complexity reduces to O(σ^2), and this is again optimal.
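For readers unfamiliar with the moment-based approach, the one-dimensional version goes back to Pearson's 1894 treatment. The sketch below is only an illustration of that classical idea, not the estimator from the talk: it matches the first five empirical raw moments of the data to the theoretical moments of a two-component mixture using a generic numerical solver. The function names and the choice of scipy's least_squares are my own assumptions.

```python
# Illustrative sketch of the classical method-of-moments fit for a 1D mixture
# of two Gaussians (in the spirit of Pearson, 1894). This is NOT the
# estimator presented in the talk; it simply matches the first five empirical
# raw moments to the theoretical mixture moments with a generic solver.
import numpy as np
from scipy.optimize import least_squares

def gaussian_raw_moments(mu, sigma):
    """First five raw moments E[X^k] of N(mu, sigma^2)."""
    s2 = sigma ** 2
    return np.array([
        mu,
        mu**2 + s2,
        mu**3 + 3 * mu * s2,
        mu**4 + 6 * mu**2 * s2 + 3 * s2**2,
        mu**5 + 10 * mu**3 * s2 + 15 * mu * s2**2,
    ])

def moment_residuals(params, empirical):
    """Mixture moments under `params` minus the empirical moments."""
    w, mu1, sigma1, mu2, sigma2 = params
    model = (w * gaussian_raw_moments(mu1, sigma1)
             + (1 - w) * gaussian_raw_moments(mu2, sigma2))
    return model - empirical

def fit_two_gaussians(samples):
    """Crude moment-based fit; returns (w, mu1, sigma1, mu2, sigma2)."""
    empirical = np.array([np.mean(samples ** k) for k in range(1, 6)])
    m, s = np.mean(samples), np.std(samples)
    x0 = [0.5, m - s, s, m + s, s]
    sol = least_squares(
        moment_residuals, x0, args=(empirical,),
        bounds=([0.01, -np.inf, 1e-6, -np.inf, 1e-6],
                [0.99, np.inf, np.inf, np.inf, np.inf]))
    return sol.x

# Example: recover parameters from synthetic two-component data.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 1.0, 50_000),
                       rng.normal(2.0, 1.5, 50_000)])
print(fit_two_gaussians(data))
```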

Our results also apply to learning each component of the mixture up to small error in total variation distance. Here we show that Õ(d^6) samples are necessary and sufficient in the near-isotropic case, while giving strong improvements in sample complexity over previous work in the general case.