Feb 1
Silvio Micali (MIT)
The Second-Knowledge Mechanism
In auctions of a single good, we consider a new revenue benchmark that always lies between the highest and second-highest valuations, prove that no classical mechanism can achieve it (or even slightly approximate it) in any robust way, and provide a new mechanism that perfectly and robustly achieves it.
Our work puts forward a new set-theoretic way of modeling and leveraging the knowledge that the players may have about their opponents.
Joint work with Jing Chen.
Feb 8
No Colloquium due to Killian Lecture
Feb 15
Swastik Kopparty (IAS)
Error-correcting Codes of High Rate with Sublinear Time Decoding
Locally decodable codes are error-correcting codes that admit efficient decoding algorithms; any bit of the original message can be recovered by looking at only a small number of locations of a corrupted codeword. The tradeoff between the rate of a code and the locality/efficiency of its local decoding algorithms has been well studied. A particular setting of potential interest in practice is codes of constant rate. For such codes, decoding algorithms with locality O(k^epsilon) were known only for codes of very low rate (exp(-1/epsilon)), where k is the length of the message. Furthermore, for codes of rate > 1/2, no nontrivial local decodability was known.
In this talk, I will describe a new family of locally decodable codes that have very efficient local decoding algorithms, and at the same time have rate approaching 1. We show that for every epsilon > 0 and alpha > 0, for infinitely many k, there exists a code C which encodes messages of length k with rate 1 - alpha, and is locally decodable from a constant fraction of errors in O(k^epsilon) time. The high rate and local decodability are evident even in concrete settings (and not just in asymptotic behavior), giving hope that local decoding techniques may have practical implications.
These codes, which we call multiplicity codes, are based on evaluating high degree multivariate polynomials and their derivatives. Multiplicity codes extend traditional multivariate polynomial based codes; they inherit the local-decodability of these codes, and at the same time achieve better tradeoffs and flexibility in their rate and distance.
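To make the "polynomial plus derivative" encoding concrete, here is a minimal Python sketch of a univariate multiplicity code: a message is the coefficient vector of a polynomial P of degree < k over F_p, and each codeword symbol is the pair (P(a), P'(a)). This is only an illustration of the encoding idea with toy parameters; the codes in the talk are multivariate, which is what enables rate approaching 1.

    p = 101   # field size (prime); illustrative
    k = 40    # message length = degree bound; rate of this encoding is k/(2p)

    def poly_eval(coeffs, x, mod):
        # Horner's rule: evaluate sum(coeffs[i] * x^i) mod `mod`.
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % mod
        return acc

    def formal_derivative(coeffs, mod):
        # Formal derivative: d/dx sum(c_i x^i) = sum(i * c_i * x^(i-1)).
        return [(i * c) % mod for i, c in enumerate(coeffs)][1:]

    def encode(message):
        # Codeword symbol at each field element a is the pair (P(a), P'(a)).
        deriv = formal_derivative(message, p)
        return [(poly_eval(message, a, p), poly_eval(deriv, a, p))
                for a in range(p)]

    msg = [(7 * i + 3) % p for i in range(k)]   # an arbitrary test message
    codeword = encode(msg)
    print(len(codeword), "symbols; rate =", k / (2 * p))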
Joint work with Shubhangi Saraf (MIT) and Sergey Yekhanin (Microsoft Research).
Feb 22
No Colloquium due to Monday Schedule
March 1
Emanuele Viola (Northeastern)
The Complexity of Distributions
Complexity theory, with some notable exceptions, typically studies the complexity of computing a function h(x) of a *given* input x. We advocate the study of the complexity of generating -- or sampling -- the output distribution h(x) for random x, given random bits.
In particular, we present first-of-their-kind lower bounds for generating distributions in various restricted computational models, and discuss their implications for extractor constructions and lower bounds for succinct data structures.
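A classical toy example illustrates the distinction being studied (this is a folklore observation, not one of the talk's results): computing parity(x) from a given x requires reading all n bits, yet the distribution (x, parity(x)) for uniform x can be sampled by a map in which every output bit depends on only two random bits, via x_i = r_i XOR r_{i+1} and parity = r_1 XOR r_{n+1}. A short Python check:

    import random
    from collections import Counter

    n = 3

    def sample_locally():
        # Each output bit depends on just two of the n+1 random bits.
        r = [random.randint(0, 1) for _ in range(n + 1)]
        x = [r[i] ^ r[i + 1] for i in range(n)]
        return tuple(x) + (r[0] ^ r[n],)   # last bit equals XOR of all x_i

    def sample_directly():
        # The naive sampler: draw x, then compute its parity globally.
        x = [random.randint(0, 1) for _ in range(n)]
        par = 0
        for b in x:
            par ^= b
        return tuple(x) + (par,)

    # Empirically, the two samplers induce the same distribution.
    trials = 20000
    print(Counter(sample_locally() for _ in range(trials)))
    print(Counter(sample_directly() for _ in range(trials)))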
March 8
Venkat Guruswami (CMU)
Bridging Shannon and Hamming: Codes for computationally simple channels
The theory of error-correcting codes has had two divergent schools of thought, dating back to its origins, based on the underlying model of the noisy channel. Shannon's theory modeled the channel as a stochastic process with a known probability law. Hamming's work suggested a worst-case model, where the channel is subject only to a limit on the number of errors it may cause. These two approaches share several common tools; however, in terms of quantitative results, the classical results in the harsher Hamming model are much weaker.
In this talk, we will discuss a line of research aimed at bridging these models. We will begin by surveying some approaches that rely on setup assumptions (such as shared randomness) to construct codes against worst-case errors with information rate similar to what is possible against random errors. We then turn to our results for computationally bounded channels, which can introduce an arbitrary set of errors as long as (a) the total fraction of errors is bounded by a parameter p, and (b) the process which adds the errors is sufficiently "simple" computationally. Such channel models are well motivated, since physical noise processes may be mercurial but are not computationally intensive. Also, as with codes for worst-case errors, codes for such channels can handle errors whose true behavior is unknown or varying over time.
We will describe an explicit construction of a poly-time encodable/decodable code of rate approaching Shannon capacity 1-h(p) that can correct an arbitrary error pattern with total fraction of errors bounded by p, provided the channel's action is oblivious to the codeword. We will hint at an extension to channels limited to online logarithmic space that gives efficient codes with optimal rate that enable recovery of a short list containing the correct message. (A similar claim holds for channels admitting polynomial size circuits, assuming the existence of pseudorandom generators.) Our results do not use any shared randomness or other setup assumptions.
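For concreteness, the Shannon capacity referred to above is 1 - h(p), where h is the binary entropy function

    h(p) = -p*log2(p) - (1-p)*log2(1-p),

so, for example, a channel corrupting a p = 0.1 fraction of the bits admits codes of rate approaching 1 - h(0.1), which is approximately 0.531.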
Based on joint work with Adam Smith.
March 15
Chris Umans (Caltech)
Pseudorandom generators, the BQP vs. PH problem, and beating the hybrid argument
It is a longstanding open problem to devise an oracle relative to which BQP does not lie in the Polynomial-Time Hierarchy (PH). We advance a natural conjecture about the capacity of the Nisan-Wigderson pseudorandom generator [NW94] to fool AC^0, with MAJORITY as its hard function. Our conjecture is essentially that the loss due to the hybrid argument (which is a component of the standard proof from [NW94]) can be avoided in this setting. We then show that our conjecture implies the existence of an oracle relative to which BQP is not in the PH. This entails giving an explicit construction of unitary matrices, realizable by small quantum circuits, whose row-supports are “nearly-disjoint.” Our framework generalizes the setting of Aaronson [Aar09], and remains a viable approach to resolving the BQP vs. PH problem after the recent proof that the Generalized Linial-Nisan Conjecture of [Aar09] is false.
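To make the object of the conjecture concrete, here is a minimal Python sketch of a Nisan-Wigderson-style generator with MAJORITY as the hard function. The combinatorial design below (graphs of low-degree polynomials over F_q) is the standard one; all parameters are toy values chosen for illustration, not the setting of the talk.

    import random
    from itertools import product

    q = 5   # prime; the seed is a q-by-q bit grid (seed length q^2 = 25)

    def design():
        # One set per degree-<3 polynomial over F_q: its "graph" in the grid.
        # Two distinct such polynomials agree on at most 2 points, so the
        # q^3 sets (each of size q) have pairwise intersections of size <= 2.
        return [[(a, (c0 + c1*a + c2*a*a) % q) for a in range(q)]
                for c0, c1, c2 in product(range(q), repeat=3)]

    def majority(bits):
        return 1 if 2 * sum(bits) > len(bits) else 0

    def nw_generator(seed):
        # Map the q^2-bit seed to one MAJORITY evaluation per design set,
        # stretching 25 random bits to q^3 = 125 output bits.
        grid = [seed[i*q:(i+1)*q] for i in range(q)]
        return [majority([grid[a][b] for a, b in s]) for s in design()]

    seed = [random.randint(0, 1) for _ in range(q * q)]
    out = nw_generator(seed)
    print(len(out), "output bits from", q * q, "seed bits")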
As a first step towards proving our conjecture, we show that the loss from the hybrid argument can indeed be avoided for "repeated sampling" generators based on PARITY, MAJORITY, and other functions. This leads to new pseudorandom generators for the low-level circuit classes AC^0[p], which have sublinear stretch, but nevertheless represent the best-known generators for these classes.
Based on joint work with Bill Fefferman, Ronen Shaltiel, and Emanuele Viola.
March 22
No Colloquium due to Spring Break
March 29
No Colloquium
April 5
Daniele Micciancio (UC San Diego)
Solving All Lattice Problems in Deterministic Single Exponential Time
We give deterministic exp(n)-time algorithms to solve all the most important computational problems on point lattices in NP, including the Shortest Vector Problem (SVP), Closest Vector Problem (CVP), and Shortest Independent Vectors Problem (SIVP). This improves the exp(n*log(n)) running time of the best previously known algorithms for CVP (Kannan, 1987) and SIVP (Micciancio, 2008), and gives a deterministic alternative to the randomized SVP algorithm of (Ajtai, Kumar and Sivakumar, 2001). The core of our algorithm is a new method to solve the closest vector problem with preprocessing (CVPP) that uses the Voronoi cell of the lattice (described as an intersection of half-spaces) as the output of the preprocessing.
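A toy 2D rendering of this idea in Python (the actual algorithm computes the exact Voronoi-relevant vectors and organizes the walk to achieve the exp(n) bound; here the vector list and basis are illustrative): preprocess a list of lattice vectors defining the cell's half-spaces, then repeatedly slide the target into the cell.

    import itertools

    B = [(3.0, 1.0), (1.0, 2.0)]   # lattice basis (rows), assumed reduced-ish

    def dot(u, v): return u[0]*v[0] + u[1]*v[1]
    def sub(u, v): return (u[0]-v[0], u[1]-v[1])

    # Preprocessing: half-space normals B*z for z in {-1,0,1}^2 minus 0,
    # which for a reduced 2D basis include all Voronoi-relevant vectors.
    V = [(z0*B[0][0] + z1*B[1][0], z0*B[0][1] + z1*B[1][1])
         for z0, z1 in itertools.product((-1, 0, 1), repeat=2)
         if (z0, z1) != (0, 0)]

    def close_to_cell(t):
        # Slide t by lattice vectors until 2<t,v> <= <v,v> for all v in V.
        # Each subtraction strictly decreases ||t||, so the loop terminates.
        while True:
            viol = [v for v in V if 2*dot(t, v) > dot(v, v)]
            if not viol:
                return t
            t = sub(t, max(viol, key=lambda v: 2*dot(t, v) - dot(v, v)))

    target = (7.3, -4.1)
    t = close_to_cell(target)
    print("closest lattice vector ~", sub(target, t))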
Talk based on joint work with P. Voulgaris.
April 12
No Colloquium
April 26
Boaz Barak (MSR New England and Princeton)
Semidefinite Programming Hierarchies & the Unique Games Conjecture
Semidefinite programs (SDP's) are a form of convex relaxation that has found many uses in algorithms. In the early 1990's several researchers proposed stronger forms of SDP's known as *SDP hierarchies*. In this talk I will present a new way of taking algorithmic advantage of these hierarchies to solve constraint-satisfaction problems for 2-variable constraints such as LABEL-COVER, MAX-CUT, and UNIQUE-GAMES.
Specifically, we show SDP-hierarchy-based algorithms that provide arbitrarily good approximations to all these problems in time poly(n)*exp(r), where r is the number of eigenvalues of the constraint graph larger than some constant threshold (depending on the accuracy parameter and the type of constraint used).
In particular, we get quasipolynomial algorithms for instances whose constraint graph is *hypercontractive*, as is the case for all the canonical "hard instances" for MAX-CUT and UNIQUE-GAMES. This gives more reason to consider relatively low (i.e., polylogarithmic) levels of the SDP hierarchy as candidate algorithms for refuting Khot's Unique Games Conjecture.
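To make the runtime parameter r concrete, here is a small numpy sketch that counts the eigenvalues of a (symmetrically normalized) constraint-graph adjacency matrix exceeding a threshold; the graph and threshold are arbitrary test values.

    import numpy as np

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)

    deg = A.sum(axis=1)
    # Symmetrically normalized adjacency: D^{-1/2} A D^{-1/2}.
    N = A / np.sqrt(np.outer(deg, deg))

    tau = 0.2   # threshold (in the talk, depends on accuracy/constraint type)
    eigs = np.linalg.eigvalsh(N)
    r = int((eigs > tau).sum())
    print("eigenvalues:", np.round(eigs, 3), "-> r =", r)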
Joint work with Prasad Raghavendra and David Steurer.
May 3
Amin Saberi (Stanford)
A Randomized Rounding Approach to the Traveling Salesman Problem
For some positive constant c, we give a (3/2 - c)-approximation algorithm for the following problem: given a graph G = (V, E), find the shortest tour that visits every vertex at least once. This is the special case of the metric traveling salesman problem in which the underlying metric is defined by shortest-path distances in G. The result improves on the 3/2-approximation algorithm due to Christofides.
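The problem setup in miniature (this Python sketch only builds the "graphic" metric on an arbitrary example graph, not the algorithm from the talk): the metric is the shortest-path distance in an unweighted graph, computed here by Floyd-Warshall.

    n = 5
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]

    INF = float("inf")
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v in edges:
        dist[u][v] = dist[v][u] = 1

    # Floyd-Warshall closure: dist becomes the shortest-path metric of G.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]

    for row in dist:
        print(row)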
Joint work with M. Singh and S. Oveis Gharan.
May 10
Zvika Brakerski (Weizmann and MIT)
Efficient Fully Homomorphic Encryption from (Standard) LWE
In fully homomorphic encryption, it is possible to transform an encryption of a message m into an encryption of any (efficient) function of that message, f(m), without knowing the secret key. This property makes it a very useful cryptographic building block.
We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. By known results on LWE, the security of our scheme rests on the worst-case hardness of short vector problems on arbitrary lattices. As icing on the cake, our scheme is quite efficient and has very short ciphertexts.
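Before the improvements listed below, a minimal Python sketch of the basic LWE encryption underlying such schemes and its additive homomorphism; the talk's scheme additionally needs re-linearization (for multiplication) and dimension reduction, neither of which is shown, and all parameters here are toy values far from secure.

    import numpy as np

    rng = np.random.default_rng(0)
    n, q = 20, 4093                     # dimension and odd modulus (toy)
    s = rng.integers(0, q, size=n)      # secret key

    def encrypt(m):                     # m is a single bit
        a = rng.integers(0, q, size=n)
        e = rng.integers(-2, 3)         # small noise
        b = (a @ s + 2 * e + m) % q
        return a, b

    def decrypt(ct):
        a, b = ct
        d = (b - a @ s) % q
        if d > q // 2:                  # center into (-q/2, q/2]
            d -= q
        return d % 2                    # noise 2e drops out mod 2

    def add(c1, c2):                    # homomorphic addition mod 2
        return (c1[0] + c2[0]) % q, (c1[1] + c2[1]) % q

    x, y = 1, 1
    print(decrypt(add(encrypt(x), encrypt(y))), "== (x+y) mod 2 =", (x + y) % 2)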
Our construction improves upon previous works in two aspects:
1. We show that "somewhat homomorphic" encryption can be based on LWE, using a new re-linearization technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings.
2. More importantly, we deviate from the "squashing paradigm" used in all previous works. We introduce a new dimension reduction technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, without introducing additional assumptions. In contrast, all previous works required an additional, very strong assumption (namely, the sparse subset sum assumption).
Since our scheme has very short ciphertexts, we use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is k*polylog(k) + log|DB| bits per single-bit query, which is better than any known scheme. Previously, it was not known how to achieve a communication complexity of even poly(k, log|DB|) based on LWE.
Joint work with Vinod Vaikuntanathan.
May 17
Aaron Roth (MSR New England and UPenn)
Privately Releasing Conjunctions and the Statistical Query Barrier
Suppose we would like to know all answers to a set of statistical queries C on a data set up to small error, but we can only access the data itself using statistical queries. A trivial solution is to exhaustively ask all queries in C: Can we do any better?
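The setup in code, including the trivial baseline the results below improve on (data set and query class are small illustrative examples; the oracle here answers exactly, whereas the SQ model allows a tolerance):

    from itertools import combinations

    data = [(1, 0, 1), (1, 1, 1), (0, 1, 0), (1, 1, 0)]   # binary records
    d = 3

    def stat_query(predicate):
        # A statistical-query oracle: the average of a predicate over the data.
        return sum(predicate(x) for x in data) / len(data)

    def conjunction(subset):
        # The monotone conjunction over the attributes listed in `subset`.
        return lambda x: all(x[i] for i in subset)

    # The trivial solution: one statistical query per conjunction, 2^d - 1 total.
    for k in range(1, d + 1):
        for subset in combinations(range(d), k):
            print(subset, stat_query(conjunction(subset)))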
We show that the number of statistical queries necessary and sufficient for this task is—up to polynomial factors—equal to the agnostic learning complexity of C in Kearns’ statistical query (SQ) model. This gives a complete answer to the question when running time is not a concern.
We then show that the problem can be solved efficiently (allowing arbitrary error on a small fraction of queries) whenever the answers to C can be described by a submodular function. This includes many natural concept classes, such as graph cuts and Boolean disjunctions and conjunctions.
While interesting from a learning-theoretic point of view, our main applications are in privacy-preserving data analysis: here, our second result leads to the first algorithm that efficiently releases differentially private answers to all Boolean conjunctions with 1% average error. This represents significant progress on a key open problem in privacy-preserving data analysis. Our first result, on the other hand, gives unconditional lower bounds on any differentially private algorithm that admits a (potentially non-privacy-preserving) implementation using only statistical queries. Not only our algorithms but also most known private algorithms can be implemented using only statistical queries, and hence are constrained by these lower bounds. Our result therefore isolates the complexity of agnostic learning in the SQ model as a new barrier in the design of differentially private algorithms.
This talk is based on joint work with Anupam Gupta, Moritz Hardt, and Jon Ullman.