TOC Seminars Fall 2011

Sep 13

Andrew Lo
MIT, Sloan and CSAIL 

The Origin of Behavior

We propose a single evolutionary explanation for the origin of several behaviors that have been observed in organisms ranging from ants to human subjects, including risk-sensitive foraging, risk aversion, loss aversion, probability matching, randomization, and diversification.

Given an initial population of individuals, each assigned a purely arbitrary behavior with respect to a binary choice problem, and assuming that offspring behave identically to their parents, only those behaviors linked to reproductive success will survive, and less reproductively successful behaviors will disappear at exponential rates.

When the uncertainty in reproductive success is systematic, natural selection yields behaviors that may be individually sub-optimal but are optimal from the population perspective; when reproductive uncertainty is idiosyncratic, the individual and population perspectives coincide.
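To make the distinction concrete, here is a small numerical sketch in Python. The payoff numbers are assumptions for illustration only (a choice yields two offspring when it matches the environment and none otherwise), not the paper's general model.

    import numpy as np

    # Assumed toy payoffs: a choice yields 2 offspring when it matches the
    # environment and 0 otherwise; the environment is state A with probability p.
    # A lineage chooses A with probability f, and offspring inherit f.
    p = 0.7
    f = np.linspace(0.01, 0.99, 99)

    # Systematic risk: all individuals in a generation face the same environment,
    # so the long-run log-growth rate is p*log(2f) + (1-p)*log(2(1-f)),
    # maximized at f = p ("probability matching").
    growth_systematic = p * np.log(2 * f) + (1 - p) * np.log(2 * (1 - f))
    print("systematic optimum f ~", f[np.argmax(growth_systematic)])        # ~0.70

    # Idiosyncratic risk: environments are independent across individuals, so
    # expected offspring 2*(p*f + (1-p)*(1-f)) is linear in f and the optimum
    # sits at the corner f = 1 (always choose the likelier state).
    growth_idiosyncratic = 2 * (p * f + (1 - p) * (1 - f))
    print("idiosyncratic optimum f ~", f[np.argmax(growth_idiosyncratic)])  # ~0.99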

This framework generates a surprisingly rich set of behaviors, and the simplicity and generality of our model suggest that these derived behaviors are primitive and nearly universal within and across species. Our approach also yields an evolutionary foundation for bounded rationality and a simple version of Hawkins' "memory-prediction" model of intelligence.

Sep 20

Oded Goldreich
Weizmann Institute of Science, Israel 

A Property Testing Double-Feature of Short Talks

I will present two overview talks on the subject of property testing, preceded by a brief introduction to the area. Abstracts of the two corresponding works follow.

1. Testing Graph Blow-Up (joint work with Lidor Avigad)

Referring to the query complexity of testing graph properties in the adjacency matrix model, we advance the study of the class of properties that can be tested non-adaptively within complexity that is inversely proportional to the proximity parameter. Arguably, this is the lowest meaningful complexity class in this model, and we show that it contains a very natural class of graph properties. Specifically, for every fixed graph $H$, we consider the set of all graphs that are obtained by a (possibly unbalanced) blow-up of $H$.
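For readers unfamiliar with the term, the following Python sketch illustrates the standard blow-up construction assumed here; the graph H and the cloud sizes are arbitrary illustrative inputs.

    import networkx as nx

    def blow_up(H, sizes):
        """Standard blow-up of H (possibly unbalanced): each vertex v becomes a
        cloud of sizes[v] vertices, and two vertices in different clouds are
        adjacent exactly when the corresponding vertices of H are adjacent."""
        G = nx.Graph()
        cloud = {v: [(v, i) for i in range(sizes[v])] for v in H.nodes()}
        for v in H.nodes():
            G.add_nodes_from(cloud[v])
        for u, v in H.edges():
            G.add_edges_from((a, b) for a in cloud[u] for b in cloud[v])
        return G

    # e.g. an unbalanced blow-up of a triangle:
    # blow_up(nx.complete_graph(3), {0: 1, 1: 2, 2: 4})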

2. Proximity Oblivious Testing and the Role of Invariances (joint work with Tali Kaufman)

We present a general notion of properties that are characterized by local conditions that are invariant under a sufficiently rich class of symmetries. Our framework generalizes two popular models of testing graph properties as well as the algebraic invariances studied by Kaufman and Sudan (STOC'08). Our focus is on the case that the property is characterized by a constant number of local conditions and a rich set of invariances. We show that, in the aforementioned models of testing graph properties, characterization by such invariant local conditions is closely related to proximity oblivious testing (as defined by Goldreich and Ron, STOC'09). In contrast to this relation, we show that, in general, characterization by invariant local conditions is neither necessary nor sufficient for proximity oblivious testing.

Sep 27

Microsoft Research @ 20 Years Event

No seminar
Oct 4

Scott Aaronson
MIT, CSAIL 

The Complexity of Linear Optics

In recent work with Alex Arkhipov, we proposed a rudimentary form of quantum computing, based entirely on linear optics with nonadaptive measurements, and used a connection between linear optics and the permanent function to show that even this limited model could solve sampling and search problems that are intractable for classical computers under plausible complexity assumptions. In this talk, I'll discuss this work in a self-contained way accessible to classical TCS folks, and mention some of its implications for quantum computing experiments in the very near future. I'll also discuss some more general results that emerged from our work. These include: a new equivalence between sampling problems and search problems based on Kolmogorov complexity; a new, linear-optics-based proof of Valiant's famous theorem that the permanent is #P-complete; and a new classical approximation algorithm for the permanent.
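For background on the permanent mentioned above (and not as part of the talk's results), here is a short sketch computing the permanent exactly via Ryser's inclusion-exclusion formula; the exponential running time reflects the fact that exact computation is #P-hard.

    from itertools import combinations
    import numpy as np

    def permanent_ryser(A):
        """Permanent of an n-by-n matrix via Ryser's inclusion-exclusion
        formula, in O(2^n * n^2) time."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        total = 0.0
        for r in range(1, n + 1):
            for cols in combinations(range(n), r):
                row_sums = A[:, list(cols)].sum(axis=1)
                total += (-1) ** (n - r) * np.prod(row_sums)
        return total

    # permanent_ryser([[1, 2], [3, 4]]) == 1*4 + 2*3 == 10.0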

Papers:

The Computational Complexity of Linear Optics
The Equivalence of Sampling and Searching
A Linear-Optical Proof that the Permanent is #P-Hard

Oct 11

Columbus Day

No seminar
Oct 17, 4pm, 32-144

Yury Polyanskiy
LIDS, MIT

Notice the special DAY and LOCATION
Optimal channel codes and concentration of measure

Under the assumption of vanishing probability of error, it is known classically that a capacity-achieving sequence of codes must necessarily have an empirical distribution close to the capacity-achieving one. In this work, the result is extended to the practically more relevant case of non-vanishing error probability. Surprisingly, in this regime the question becomes much more delicate and reveals connections with the concentration-of-measure phenomenon. For example, it is shown that Lipschitz functions of channel outputs concentrate sharply around their expected values, which in turn are independent of the employed code. Thus, despite the fact that only very few capacity-achieving codes are actually known, it is possible to predict exactly what kind of interference such codes will cause to others. For the benefit of a wider audience, at the beginning of the talk I will give a brief overview of the channel coding setup as traditionally studied in information theory.

Oct 25

FOCS 2011

No seminar
Oct 27, 4pm, 32-G449 (Patil/Kiva)

Lap Chi Lau
The Chinese University of Hong Kong

Notice the special DAY and LOCATION 
Graph Connectivity, Network Coding, Expander Graphs, and Matrix Rank

We present a new algebraic formulation for computing edge connectivities in a directed graph, using ideas developed in network coding. This reduces the problem of computing edge connectivities to solving systems of linear equations, thus allowing us to use tools from linear algebra to design new algorithms. Using the algebraic formulation, we obtain faster algorithms for computing single-source edge connectivities and all-pairs edge connectivities; in some settings, the amortized time to compute the edge connectivity for one pair is sublinear. Through this connection, we have also found an interesting use of expanders and superconcentrators in designing fast algorithms for some graph connectivity problems.
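As a heavily simplified illustration of the algebraic viewpoint (assuming an acyclic nx.DiGraph with unit-capacity edges and generic real coefficients, and without any of the efficiency ideas from the talk), the s-t edge connectivity can be read off as the rank of the encoding vectors that random linear network coding places on the edges entering t:

    import numpy as np
    import networkx as nx

    def edge_connectivity_via_coding(G, s, t, seed=0):
        """Toy sketch on a DAG with unit-capacity edges: edges out of s inject
        independent symbols, every other edge carries a random linear combination
        of the symbols on its tail's incoming edges, and (generically) the rank
        of the encoding vectors on the edges entering t equals the s-t edge
        connectivity."""
        rng = np.random.default_rng(seed)
        src_edges = list(G.out_edges(s))
        d = len(src_edges)            # one independent symbol per edge out of s
        f = {}                        # global encoding vector of each edge
        for u in nx.topological_sort(G):
            incoming = list(G.in_edges(u))
            for e in G.out_edges(u):
                if u == s:
                    vec = np.zeros(d)
                    vec[src_edges.index(e)] = 1.0
                else:
                    vec = np.zeros(d)
                    for ein in incoming:
                        vec = vec + rng.random() * f[ein]
                f[e] = vec
        M = np.array([f[e] for e in G.in_edges(t)])
        return int(np.linalg.matrix_rank(M)) if M.size else 0

The algorithms in the talk exploit this rank characterization much more cleverly, for example by sharing linear-algebra work across many source-sink pairs.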

Finally, we describe a recent development that uses the above ideas to design fast algorithms for computing matrix rank.

Joint work with Ho Yee Cheung, Tsz Chiu Kwok, Kai Man Leung.

Nov 1

STOC deadline

No seminar
Nov 3, 32-D463 (Star)

Devavrat Shah
MIT, LIDS 

Notice the special DAY
Rumor in a network: who's the culprit?

Suppose a malicious "worm" (e.g., Stuxnet) has spread in a network, and we wish to find its source (so that any of the following actions can be taken: report the news to the world, give a reward, or take punitive action). Motivated by such scenarios, in which "information" diffuses through the network from an unknown source and the goal is to identify that source from the observed spread, we discuss the problem of finding the source of a rumor in a network. We design a computationally efficient source estimator for the popular Susceptible-Infected (SI) information-spreading model with generic spreading times. In establishing the effectiveness of the estimator, we shall explain the role played by the underlying network structure (a form of expansion property) in determining the ability to detect the source. Connections to the generalized Polya urn, network centrality, the number 1 - ln 2, and searching for tweets will be explained.
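As a concrete but heavily simplified illustration of source estimation on a tree, the sketch below scores each node by its rumor centrality (the number of feasible infection orderings starting at that node) and returns the maximizer; this follows the spirit of the speaker's earlier rumor-centrality estimator with Zaman and is not the estimator for generic spreading times discussed in the talk.

    import math
    import networkx as nx

    def rumor_centrality(T, v):
        """Number of feasible spreading orders of tree T that start at v:
        R(v) = N! / (product over nodes u of the size of the subtree rooted
        at u when T is rooted at v)."""
        sizes = {}
        def subtree_size(u, parent):
            s = 1
            for w in T.neighbors(u):
                if w != parent:
                    s += subtree_size(w, u)
            sizes[u] = s
            return s
        subtree_size(v, None)
        n = T.number_of_nodes()
        return math.factorial(n) // math.prod(sizes.values())

    def estimate_source(T):
        # Return the infected node with maximum rumor centrality.
        return max(T.nodes(), key=lambda v: rumor_centrality(T, v))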

This is based on joint work with Tauhid Zaman.

Nov 8

Yury Makarychev
TTI

The Grothendieck constant is strictly smaller than Krivine's bound. 

I will talk about Grothendieck's Inequality. The inequality was proved by Grothendieck in 1953, and since then it has found numerous applications in Analysis, Quantum Mechanics and Computer Science. From the point of view of combinatorial optimization, the inequality states that the integrality gap of a certain semi-definite program is less than some absolute constant. The optimal value of this constant is called the Grothendieck constant K_G. The Grothendieck constant lies between 1.67 and 1.79; however, its exact value is unknown. The last progress on this problem was made in 1977, when Krivine proved that K_G \leq \pi / (2 \ln(1+\sqrt{2})) and conjectured that his bound is optimal. In this talk, we will disprove this conjecture and show that K_G is strictly less than Krivine's bound. We will show that, for this problem, a new binary rounding scheme, which projects vectors onto a random two-dimensional subspace, performs better than the ubiquitous random hyperplane technique.
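For context, here is a small numpy sketch of the ubiquitous random hyperplane rounding mentioned above, applied to SDP vectors u_i, v_j for the bilinear form sum_{ij} A_{ij} <u_i, v_j>; the new scheme, which projects onto a random two-dimensional subspace before rounding, is more involved and is not reproduced here.

    import numpy as np

    def random_hyperplane_rounding(A, U, V, seed=0):
        """Round SDP vectors to signs: x_i = sign(<g, u_i>), y_j = sign(<g, v_j>)
        for a random Gaussian direction g, and evaluate the rounded bilinear form
        sum_ij A_ij x_i y_j. U and V hold the SDP vectors as rows."""
        rng = np.random.default_rng(seed)
        g = rng.standard_normal(U.shape[1])
        x = np.sign(U @ g)
        y = np.sign(V @ g)
        return x @ A @ y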

Joint work with Mark Braverman (University of Toronto), Konstantin Makarychev (IBM Research), Assaf Naor (Courant Institute).
Nov 15

Santosh Vempala
Georgia Tech

Many Sparse Cuts via Higher Eigenvalues 

Cheeger's fundamental inequality says that any edge-weighted graph has a vertex subset whose weighted expansion (a.k.a. conductance) is bounded by twice the square root of the second smallest eigenvalue of the normalized Laplacian of the graph. This inequality and its algorithmic proof have a myriad of applications. 
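The algorithmic proof referred to above is the spectral sweep cut; a minimal Python sketch, assuming a connected unweighted graph, is:

    import numpy as np
    import networkx as nx

    def cheeger_sweep(G):
        """Sweep cut behind Cheeger's inequality: order vertices by the second
        eigenvector of the normalized Laplacian (degree-normalized) and return
        the vertex prefix with the smallest conductance."""
        nodes = list(G.nodes())
        A = nx.to_numpy_array(G, nodelist=nodes)
        deg = A.sum(axis=1)
        Dis = np.diag(1.0 / np.sqrt(deg))
        L = np.eye(len(nodes)) - Dis @ A @ Dis      # normalized Laplacian
        _, vecs = np.linalg.eigh(L)
        order = np.argsort(Dis @ vecs[:, 1])        # sweep order
        best_phi, best_set = np.inf, None
        mask = np.zeros(len(nodes), dtype=bool)
        for i in order[:-1]:
            mask[i] = True
            cut = A[mask][:, ~mask].sum()
            phi = cut / min(deg[mask].sum(), deg[~mask].sum())
            if phi < best_phi:
                best_phi = phi
                best_set = [nodes[j] for j in np.where(mask)[0]]
        return best_phi, best_set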

We present a generalization: for any integer k less than n, there exist ck disjoint subsets such that the expansion of each one is bounded by C\sqrt{\lambda_k \log k}, where \lambda_k is the k-th smallest eigenvalue of the normalized Laplacian and c, C are absolute constants. Our proof is via a polynomial-time algorithm that finds such subsets using spectral projection and randomized rounding, and it can be seen as an extension of the Cheeger method. As a consequence, we get the same upper bound for the small-set expansion problem: for any k, there is a subset whose weight is an O(1/k) fraction of the total weight and whose expansion is bounded by C\sqrt{\lambda_k \log k}. Both results are the best possible up to constant factors.

The underlying algorithmic problem, namely finding k subsets such that the maximum expansion is minimized, besides extending sparse cuts to more than one subset, appears to be a natural clustering problem in its own right.

This is joint work with Anand Louis and a couple of Prasads.
Nov 22

Aleksander Madry
Microsoft Research New England

Online Algorithms and the k-server Conjecture

Traditionally, in optimization one needs to produce a solution only after the whole input is made available. However, in many real-world scenarios the input is revealed gradually, and one needs to make irrevocable decisions along the way while having only partial information about the whole input. This motivates the development of models that address such scenarios.

In this talk, I will consider one of the most popular approaches to dealing with uncertainty in optimization, the online model with competitive analysis, and focus on a central problem in this area: the k-server problem. This problem captures many online scenarios, in particular the widely studied caching problem, and is considered by many to be the "holy grail" problem of the field. I will present a new randomized algorithm for the k-server problem, the first online algorithm for this problem to achieve polylogarithmic competitiveness.
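As a toy illustration of the setting only (the naive greedy rule below is known not to be competitive in general, and it is certainly not the polylogarithmically competitive algorithm from the talk), here is the k-server problem on the real line:

    def greedy_k_server(start_positions, requests):
        """k-server on the line: k servers sit at points; each request must be
        served immediately and irrevocably by moving some server to it, and the
        cost is the total distance moved. This sketch uses the naive greedy rule
        of always moving the nearest server."""
        servers = list(start_positions)
        cost = 0.0
        for r in requests:
            i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
            cost += abs(servers[i] - r)
            servers[i] = r          # irrevocable online decision
        return cost

Competitive analysis compares the cost of such an online rule to the optimal offline cost computed with full knowledge of the request sequence.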

Based on joint work with Nikhil Bansal, Niv Buchbinder, and Joseph (Seffi) Naor.
Nov 29

Brendan Lucier
Microsoft Research New England

The Limits of Black-Box Transformations in Mechanism Design

A single-parameter mechanism design problem is an optimization problem with a game-theoretic twist: the input values are held by rational agents, who may misrepresent the input for their own gain. To combat this manipulation one might ask for an algorithm that is truthful, meaning that it is in the best interest of each agent to report its value truthfully. In general, however, the impact of the truthfulness requirement on the approximation factors achievable by polynomial-time algorithms for single-parameter problems is unknown.
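A standard textbook example of a truthful single-parameter mechanism (not taken from the paper) is the single-item second-price auction: the allocation is monotone in each reported value and the winner pays its critical value, so truthful reporting is a dominant strategy.

    def second_price_auction(bids):
        """Single-item Vickrey auction: the highest bidder wins and pays the
        second-highest bid; reporting one's true value is a dominant strategy."""
        order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
        winner = order[0]
        price = bids[order[1]] if len(bids) > 1 else 0.0
        return winner, price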

We consider the problem of converting an arbitrary approximation algorithm for a single-parameter optimization problem into a computationally efficient truthful mechanism. We ask for reductions that are "black-box", meaning that they require only oracle access to the given algorithm and in particular do not require explicit knowledge of the problem constraints.

We show that no such general black-box reduction exists, even for single-parameter problems. Specifically, we show that a black-box reduction for the social welfare objective is not possible if the resulting mechanism is required to be truthful in expectation and to preserve the worst-case approximation ratio of the algorithm to within a subpolynomial factor. Further, we prove that for other objectives such as makespan, no black-box reduction is possible even if we only require Bayesian truthfulness and an average-case performance guarantee.

Joint work with Shuchi Chawla and Nicole Immorlica.
Dec 6

Jon Kleinberg
Cornell University 

Network Formation in the Presence of Contagious Risk

There are a number of domains where agents must collectively form a network in the face of the following trade-off: each agent receives benefits from the direct links it forms to others, but these links expose it to the risk of being hit by a cascading failure that might spread over multi-step paths. Financial contagion, epidemic disease, and the exposure of covert organizations to discovery are all settings in which such issues have been articulated.

Here we formulate a stylized model for this problem, and we provide characterizations of the agents' welfare under optimal structures as well as structures arising from strategic behavior. We find that optimal networks are situated very close to the emergence of a giant component in an underlying random graph, and networks arising from strategic behavior lie slightly further beyond this transition, at a point where most of the welfare available to the agents has been lost. Our analysis enables us to explore such issues as the trade-offs between clustered and anonymous market structures, and it exposes a fundamental sense in which very small amounts of "over-linking" in networks with contagious risk can have strong consequences for the welfare of the participants.

Joint work with Larry Blume, David Easley, Bobby Kleinberg, and Eva Tardos.

Dec 13

Huijia (Rachel) Lin
EECS, MIT

Concurrent Security

Cryptographic protocols have been developed for a variety of tasks, including electronic auctions, electronic voting systems, privacy-preserving data mining, and more. The Internet allows for the concurrent execution of cryptographic protocols, and such concurrency severely challenges their security.

In this talk we introduce a novel technique for transforming any "stand-alone" secure protocol (i.e., one whose security is guaranteed only when it is executed in isolation) into one that remains secure under concurrent executions. In contrast to previous results in the literature, this result is established without relying on additional trusted infrastructure or cryptographic hardness assumptions.