Russell Impagliazzo: Learning models: connections between boosting, hard-core distributions, dense models, GANs, and regularity

Tuesday, September 12, 2017 - 4:00pm to 5:00pm
Refreshments: 
3:45pm
Location: 
32-G449
Speaker: 
Russell Impagliazzo
Affiliation: 
UCSD

Learning models: connections between boosting, hard-core distributions, dense models, GANs, and regularity

A theme that cuts across many domains of computer science and mathematics is to find simple representations of complex mathematical objects such as graphs, functions, or distributions on data. These representations need to capture how the object interacts with a class of tests, and to approximately determine the outcome of these tests.

For example, a scientist is trying to find a mathematical model explaining observed data about some phenomenon, such as the kinds of butterflies in a forest.  A minimal criterion for success is that the model should accurately predict the results of future observations.  When is this possible?  This general situation arises in many contexts in computer science and mathematics.  In machine learning, the object might be a distribution on data points (high-dimensional real vectors), and the tests might be half-spaces; the goal would be to learn a simple representation of the data that determines the probability of any half-space, or possibly of intersections of half-spaces.  In computational complexity, the object might be a Boolean function or a distribution on strings, and the tests are functions of low circuit complexity.  In graph theory, the object is a large graph, the tests are the cuts in the graph, and the representation should determine approximately the size of any cut.  In additive combinatorics, the object might be a function or distribution over an Abelian group, and the tests might be correlations with linear functions or polynomials.
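
To make the shared template concrete, here is a minimal sketch, not taken from the talk: the object is a distribution over $\mathbb{R}^d$, the tests are random half-spaces, and a candidate model is accepted if its acceptance probability matches the data's on every test up to a tolerance.  The function names, the use of random half-spaces, and the tolerance $0.05$ are all illustrative assumptions.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def halfspace_tests(d, num_tests):
    """Sample a family of half-space tests x -> 1[w.x >= b]."""
    w = rng.normal(size=(num_tests, d))
    b = rng.normal(size=num_tests)
    return w, b

def passes_all_tests(data, model_samples, w, b, eps=0.05):
    """The model is a good representation (for this class of tests) if its
    acceptance probability matches the data's on every half-space, up to eps."""
    data_acc = (data @ w.T >= b).mean(axis=0)            # Pr[test accepts] on data
    model_acc = (model_samples @ w.T >= b).mean(axis=0)  # Pr[test accepts] on model
    return bool(np.all(np.abs(data_acc - model_acc) <= eps))

# Toy usage: a matching Gaussian model passes; a shifted one is distinguished.
d = 5
data = rng.normal(size=(20000, d))
good_model = rng.normal(size=(20000, d))
bad_model = rng.normal(loc=1.0, size=(20000, d))
w, b = halfspace_tests(d, num_tests=50)
print(passes_all_tests(data, good_model, w, b))  # typically True
print(passes_all_tests(data, bad_model, w, b))   # typically False
\end{verbatim}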

This talk illustrates the connections between these various contexts. 

In particular, we'll show direct reductions between:

\begin{description}

\item[Boosting:]  From machine learning, a boosting algorithm converts a weak learner, which finds a hypothesis with a small but noticeable correlation with an unknown target function, into a strong learner, which finds a hypothesis agreeing with the function almost everywhere.  (A minimal sketch of such a loop appears after this list.)

\item[Hard-core distributions:] From complexity theory, the hard-core distribution lemma says that any Boolean function that cannot be computed with success probability more than $1-\delta$ on random inputs has a sub-distribution of density $2\delta$ on which the function is pseudo-random.  (An informal statement appears after this list.)

\item[Dense model theorems:]  Initially from additive combinatorics, dense model theorems say that any distribution either has a simple witness that it is small (has low min-entropy) or is indistinguishable, by simple tests, from a distribution that is large (has high min-entropy): a dense model.

\item[GAN-type algorithms:]   From machine learning, generative adversarial networks (GANs) are algorithms that use a distinguisher, which finds differences between a target distribution and the output of a sampling algorithm, to refine the sampling algorithm.  Algorithmic versions of dense model theorems can be viewed as analogous to GANs in many ways, but can have stronger provable guarantees.  (A sketch of this distinguisher-driven loop appears after this list.)

\item[Regularity lemmas:] Initially from graph theory, a regularity lemma shows that an arbitrary mathematical object has a simply described approximation. 

\end{description}
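
To ground the boosting item above, here is a minimal sketch of a standard AdaBoost-style loop with one-feature threshold stumps as the weak learner.  It illustrates boosting in general rather than the particular boosting algorithms used in the talk's reductions; all names and parameters are assumptions.

\begin{verbatim}
import numpy as np

def weak_learner(X, y, weights):
    """Return the decision stump (feature, threshold, sign) with the lowest
    weighted error on the current reweighting of the examples."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (+1, -1):
                pred = s * np.sign(X[:, j] - t + 1e-12)
                err = weights[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, t, s)
    return best

def boost(X, y, rounds=20):
    """AdaBoost: after each round, up-weight the examples the new stump got
    wrong, forcing the next weak hypothesis to make progress where the
    ensemble so far fails."""
    n = len(y)
    weights = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        err, j, t, s = weak_learner(X, y, weights)
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)           # weight of this hypothesis
        pred = s * np.sign(X[:, j] - t + 1e-12)
        weights = weights * np.exp(-alpha * y * pred)   # mistakes grow, hits shrink
        weights /= weights.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.sign(X[:, j] - t + 1e-12) for a, j, t, s in ensemble)
    return np.sign(score)

# Toy usage: labels from a noisy linear rule; boosted stumps fit it well.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = np.sign(X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=500))
print((predict(boost(X, y, rounds=30), X) == y).mean())  # training accuracy, typically high
\end{verbatim}

Roughly speaking, the intermediate reweightings this loop produces are the distributions the other items reinterpret: they concentrate on the inputs that the simple hypotheses found so far get wrong.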
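
For the hard-core distributions item, one standard informal statement (the constants and the size loss from $s$ to $s'$ depend on the proof and are only indicative here) is: if $f\colon\{0,1\}^n \to \{0,1\}$ is such that every circuit of size $s$ agrees with $f$ on at most a $1-\delta$ fraction of uniformly random inputs, then there is a distribution on $\{0,1\}^n$ of density at least $2\delta$ (relative to uniform) on which every circuit of size $s' = s \cdot \mathrm{poly}(\epsilon,\delta)$ agrees with $f$ on at most a $\tfrac{1}{2}+\epsilon$ fraction of inputs.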
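
And for the GAN-type item, here is a hedged sketch of the distinguisher-driven loop behind algorithmic dense model theorems, over a small finite domain: the "generator" is a reweighting of the uniform distribution, the "distinguisher" is whichever test in a fixed class best separates target from model, and a multiplicative-weights update removes that advantage.  The function name, learning rate, and stopping threshold are illustrative assumptions, not the algorithm from the talk.

\begin{verbatim}
import numpy as np

def fit_dense_model(target, tests, eps=0.02, lr=2.0, rounds=2000):
    """target: probability vector over a finite domain of size n.
    tests:  0/1 matrix of shape (num_tests, n); row k is the test x -> tests[k, x].
    Returns a model that no test in the class distinguishes from target by more
    than eps (assuming the loop terminates within the round budget)."""
    n = len(target)
    model = np.full(n, 1.0 / n)                  # "generator": start from uniform
    for _ in range(rounds):
        gaps = tests @ target - tests @ model    # advantage of each distinguisher
        k = int(np.argmax(np.abs(gaps)))
        if abs(gaps[k]) <= eps:
            return model                         # indistinguishable by this class
        # "generator update": multiplicatively tilt the model toward the region
        # where the strongest distinguisher says it has too little mass
        model = model * np.exp(lr * gaps[k] * tests[k])
        model /= model.sum()
    return model

# Toy usage: the target puts 3/4 of its mass on the first half of the domain;
# the tests are "is x in the first half?" plus a few random subsets.
n = 64
target = np.concatenate([np.full(32, 0.75 / 32), np.full(32, 0.25 / 32)])
rng = np.random.default_rng(1)
tests = np.vstack([np.concatenate([np.ones(32), np.zeros(32)]),
                   (rng.random((10, n)) < 0.5).astype(float)])
model = fit_dense_model(target, tests)
print(np.max(np.abs(tests @ target - tests @ model)))  # small: no test separates them by much
\end{verbatim}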

Many of these connections were pointed out by Trevisan, Tulsiani, and Vadhan, and by Reingold, Trevisan, Tulsiani, and Vadhan.  However, our reductions are more black-box, giving us greater versatility.

While many of our connections are used to prove known results in new ways, we also obtain some novel improvements and generalizations, both by exploiting the generality of these reductions and the wide assortment of boosting algorithms, and by applying these results recursively.

Related work:

Adam R. Klivans, Rocco A. Servedio: Boosting and Hard-Core Sets. FOCS 1999

Omer Reingold, Luca Trevisan, Madhur Tulsiani, Salil P. Vadhan: Dense Subsets of Pseudorandom Sets. FOCS 2008: 76-85

Luca Trevisan, Madhur Tulsiani, Salil P. Vadhan: Regularity, Boosting, and Efficiently Simulating Every High-Entropy Distribution. IEEE Conference on Computational Complexity 2009: 126-136

Russell Impagliazzo: Algorithmic Dense Model Theorems and Weak Regularity

Sita Gakkhar, Russell Impagliazzo, Valentine Kabanets: Hardcore Measures, Dense Models and Low Complexity Approximations

Robert E. Schapire: The Strength of Weak Learnability (Extended Abstract). FOCS 1989: 28-33

Yoav Freund, Robert E. Schapire: A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. J. Comput. Syst. Sci. 55(1): 119-139 (1997)

Yoav Freund, Robert E. Schapire: Game Theory, On-Line Prediction and Boosting. COLT 1996: 325-332

Russell Impagliazzo: Hard-Core Distributions for Somewhat Hard Problems. FOCS 1995: 538-545

Thomas Holenstein: Key agreement from weak bit agreement. STOC 2005: 664-673

Boaz Barak, Ronen Shaltiel, Avi Wigderson: Computational Analogues of Entropy. RANDOM-APPROX 2003: 200-215

Alan M. Frieze, Ravi Kannan: The Regularity Lemma and Approximation Schemes for Dense Problems. FOCS 1996: 12-20

Noga Alon, Amin Coja-Oghlan, Hiêp Hàn, Mihyun Kang, Vojtech Rödl, Mathias Schacht: Quasi-Randomness and Algorithmic Regularity for Graphs with General Degree Distributions. SIAM J. Comput. 39(6): 2336-2362 (2010)

Noga Alon, Assaf Naor: Approximating the Cut-Norm via Grothendieck’s Inequality. SIAM J. Comput. 35(4): 787-803 (2006)

Ben Green, Terence Tao: The Primes Contain Arbitrarily Long Arithmetic Progressions. Annals of Mathematics 167(2): 481-547 (2008)

Terence Tao, Tamar Ziegler: The Primes Contain Arbitrarily Long Polynomial Progressions. Acta Mathematica 201(2): 213-305 (2008)