Friday, November 13, 2015 - 1:30pm to 4:30pm

Refreshments:

Pizza at 1:15p

Location:

MIT, Bldg. 66, Room 66-160

Speaker:

Philippe Rigollet

Seminar group:

**Abstract:** Statistical learning developments are driven by two forces: extracting as much information as possible from the data on the one hand, and designing fast algorithms on the other. In some cases these forces push in opposite directions, calling for a tradeoff between statistical accuracy and computational efficiency. While this tradeoff can be exhibited for specific families of methods, it is natural to ask whether it is inherent to the problem itself, in the sense that no method can simultaneously achieve good sample complexity and good computational complexity.

We will study this gap primarily in the context of sparse Principal Component Analysis (PCA) and conclude by describing a problem where a similar gap may exist. The following points will be covered:

1. Optimal sample complexity of sparse PCA: standard information-theoretic tools employed in high-dimensional learning will be reviewed.
2. Computationally efficient methods: SDP, MDP and the diagonal method.
3. Planted clique and computational lower bounds: average-case reduction.
4. Average-case complexity, open problems and random ideas.
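As background for point 2, the "diagonal method" mentioned in the abstract can be sketched as diagonal thresholding (in the style of Johnstone and Lu): select the coordinates with the largest sample variances, then run ordinary PCA on the resulting submatrix. The sketch below is illustrative only, not the speaker's exact algorithm; the sparsity parameter `k` and the function name are assumptions for the example.

```python
import numpy as np

def diagonal_thresholding_sparse_pca(X, k):
    """Illustrative sketch of diagonal thresholding for sparse PCA.

    X : (n, d) data matrix, assumed centered.
    k : assumed sparsity of the principal component (hypothetical
        parameter chosen for this example).
    Returns a unit-norm d-vector supported on at most k coordinates.
    """
    n, d = X.shape
    S = X.T @ X / n                        # sample covariance matrix
    support = np.argsort(np.diag(S))[-k:]  # k largest sample variances
    sub = S[np.ix_(support, support)]      # restrict covariance to support
    eigvals, eigvecs = np.linalg.eigh(sub)
    v = np.zeros(d)
    v[support] = eigvecs[:, -1]            # top eigenvector of the submatrix
    return v / np.linalg.norm(v)
```

Computationally this costs only an eigendecomposition of a k-by-k matrix after a variance scan, which is why it serves as the fast baseline against which the SDP relaxation and information-theoretic limits are compared.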