TOC Seminar Spring 2013

February 5

Noam Nisan
Hebrew University and Microsoft Research Silicon Valley 

Selling Two Objects (in Three Models)

Abstract: As more economic activity is done *by* computing systems or *for* computing systems, CS-ey points of view on economic questions become needed. This talk will demonstrate this using a very simple type of economic problem: A single seller wants to sell two items to one or two possible buyers. One would think that we would know how to do this optimally for most natural goals, and in most natural scenarios and models. Yet, this talk will present three basic scenarios in which, while economic theory knows essentially everything about selling a single item, it knows very little when it comes to selling two. We will not be able to provide the "answers" in any of these cases either, but will be able to shed some light on the difficulties involved and obtain some interesting approximation results. The talk is based on joint works with Shahar Dobzinski, with Sergiu Hart, and with Avinatan Hassidim, Haim Kaplan and Yishay Mansour.

 

February 12

Ran Canetti
Boston University and Tel Aviv University 

On extractable functions and other beasts

Abstract: A key step in the security analysis of many cryptographic protocols is coming up with an efficient mechanism for extracting useful information (or, "knowledge") that is buried inside the attacker's algorithm. Often this is also the most challenging step. Indeed, the need for efficient extraction is a common hurdle in the way of better protocols and analyses.

The notion of extractable functions (Canetti and Dakdouk, TCC 09) is aimed at distilling the challenge of efficient extraction: Essentially, a function f is extractable if any efficient algorithm A that outputs a value y in the range of f is guaranteed to come with an efficient "knowledge extraction algorithm" that outputs a corresponding preimage x such that f(x)=y. (Here x can be thought of as the knowledge buried inside the code of A). When combined with appropriate hardness properties, extractable functions can be used as "extractable computational gadgets" that simplify proofs of security and enable more efficient and powerful protocols.
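
As a purely illustrative toy (my example, not a construction from the talk), the Python sketch below fixes the extraction interface: the "adversary" derives its output from its random tape, and the "extractor" is given non-black-box access to that tape, so it can return a preimage x with f(x) = y without inverting f. SHA-256 stands in for an arbitrary function f.

    # Illustrative toy of the extraction interface: for an adversary A that
    # outputs y in the range of f, an extractor with access to A's internals
    # (here, its random tape) recovers a preimage x with f(x) = y.
    # This is NOT a candidate extractable function; SHA-256 is only a stand-in.
    import hashlib
    import os

    def f(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def adversary(random_tape: bytes) -> bytes:
        # A toy adversary: it "knows" x because it derives x from its own coins.
        x = random_tape[:16]
        return f(x)

    def extractor(random_tape: bytes) -> bytes:
        # Non-black-box extractor: it inspects the coins A used and recomputes
        # the preimage, rather than trying to invert f.
        return random_tape[:16]

    tape = os.urandom(32)
    y = adversary(tape)
    x = extractor(tape)
    assert f(x) == y  # extraction succeeded: x is a preimage of y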

We will demonstrate the power of extractable functions and related constructs in a number of areas in cryptography, including resettable Zero Knowledge, universally composable computation, program obfuscation and succinct delegation of computation protocols. We will also discuss candidate constructions of extractable functions and the types of hardness assumptions involved. 

The talk is aimed at non-cryptographers. In other words it will attempt to extract useful information from adversarial cryptographers and present it in layman's terms. 

Based on joint works with Bitansky, Chiesa, Dakdouk, Goldwasser, Lin, Paneth, Rubinstein and Tromer, and works of Bitansky and Paneth. 

 

February 19

No Event

 

 

 

February 26

Greg Valiant
Microsoft Research 

Finding Correlations, Learning Juntas and the Closest Pair Problem

Abstract: Consider the following basic problem:  given a set of n Boolean vectors with the promise that they are chosen uniformly at random, with the exception of a single pair of vectors that is slightly correlated, how quickly can one find the correlated pair?  Can one do better than the quadratic-time brute-force algorithm?  Yes. Approaches such as Locality Sensitive Hashing (LSH) can find the correlated pair in time n^(2-O(c)), where c is the correlation of the planted pair, and the constant in the O(c) term has been improved in a series of works.   We present a subquadratic time algorithm for this problem, having runtime O(n^1.6) for any constant c>0.
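
For concreteness, here is a minimal Python sketch of the quadratic brute-force baseline mentioned above: it plants one correlated pair among otherwise uniform Boolean vectors and recovers it by comparing all pairs. The dimensions and the correlation value are illustrative choices, not parameters from the talk.

    # Brute-force (quadratic-time) baseline for the planted correlated pair
    # problem: all vectors are uniform except one pair that agrees on
    # noticeably more than half of its coordinates.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, c = 200, 2000, 0.2           # illustrative sizes; c = planted correlation

    vecs = rng.integers(0, 2, size=(n, d))
    i, j = 3, 117                      # planted pair (unknown to the algorithm)
    flip = rng.random(d) < (1 - c) / 2
    vecs[j] = np.where(flip, 1 - vecs[i], vecs[i])   # agreement ~ (1 + c) / 2

    # Check every pair and report the one with the largest agreement.
    best, best_pair = -1, None
    for a in range(n):
        for b in range(a + 1, n):
            agree = int(np.sum(vecs[a] == vecs[b]))
            if agree > best:
                best, best_pair = agree, (a, b)

    print(best_pair, best / d)         # expected: (3, 117), roughly (1 + c) / 2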

This result is leveraged to yield new algorithmic results for several other problems, including learning parity with noise, learning juntas (given a function over n variables, in which only k<<n are actually relevant, how quickly can one find the set of k relevant variables?), and the eps-approximate closest pair problem (given n points, find a pair whose Euclidean distance is at most a factor of 1+eps larger than that of the closest pair). 

 

 

March 5  

Sebastien Roch
University of Wisconsin-Madison

Probabilistic Techniques in Mathematical Phylogenetics: Relating Combinatorial and Variational Distances on Trees

Abstract: I will describe recent results on a deep connection between a well-studied phase transition in Markov random fields on trees and two important problems in evolutionary biology: the inference of ancestral molecular sequences and the estimation of large phylogenies using maximum likelihood. No biology background will be assumed. Joint work with A. Sly.

 

 

March 12

Siu On Chan
UC Berkeley 

Approximation Resistance from Pairwise Independent Subgroups

Abstract: We show optimal (up to constant factor) NP-hardness for Max-k-CSP over any domain, whenever k is larger than the domain size. This follows from our main result concerning predicates over abelian groups. We show that a predicate is approximation resistant if it contains a subgroup that is balanced pairwise independent. This gives an unconditional analogue of the Austrin–Mossel hardness result, bypassing the Unique-Games Conjecture for predicates with an abelian subgroup structure.
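
As a small concrete example (mine, not from the abstract): in Z_2^3, the subgroup {(x,y,z) : x+y+z = 0 mod 2}, i.e. the support of the 3-XOR predicate, is balanced pairwise independent: under the uniform distribution on the subgroup, every coordinate is uniform and every pair of coordinates is independent. The short Python check below verifies this exhaustively.

    # Exhaustive check that the 3-XOR subgroup of Z_2^3 is balanced pairwise
    # independent: each coordinate is uniform, and each pair of coordinates is
    # uniform over Z_2 x Z_2, under the uniform distribution on the subgroup.
    from itertools import product
    from collections import Counter

    subgroup = [t for t in product(range(2), repeat=3) if sum(t) % 2 == 0]
    assert len(subgroup) == 4

    for i in range(3):
        marginal = Counter(t[i] for t in subgroup)
        assert marginal[0] == marginal[1] == 2          # each coordinate uniform

    for i in range(3):
        for j in range(i + 1, 3):
            pairs = Counter((t[i], t[j]) for t in subgroup)
            assert all(pairs[p] == 1 for p in product(range(2), repeat=2))

    print("balanced pairwise independent:", subgroup)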

Our main ingredient is a new soundness reduction technique inspired by XOR-lemmas. Using this technique, we also improve the NP-hardness of approximating Independent-Set on bounded-degree graphs, Almost-Coloring, Two-Prover-One-Round-Game, and various other problems.

 

 

March 19

Lorenzo Orecchia
MIT 

A Nearly-linear-time Spectral Algorithm for Balanced Graph Partitioning

Abstract: Approximation algorithms for graph partitioning are a fundamental primitive in many applications, from the analysis of social networks to the construction of VLSI layouts. They also play an important role in the design of recursive graph algorithms for a variety of combinatorial and numerical problems, by allowing us to "divide" instances into clusters of roughly equal size, before we "conquer" each cluster separately. The version of the graph-partitioning problem that is most appropriate for this use is the Balanced-Cut problem: on input an undirected graph G with m edges, a constant balance parameter b and a target conductance value y, we are asked to decide whether G has any b-balanced cut of conductance at most y.

In many practical applications, it is of crucial importance for the underlying Balanced-Cut algorithm to run in time as close to linear as possible and to have an efficient implementation. For this reason, researchers and practitioners have focused on spectral methods that guarantee speed and ease of implementation, albeit at the loss of approximation quality on some graphs. The simplest spectral algorithm for the Balanced-Cut problem finds a low-conductance cut by computing the graph's slowest-mixing eigenvector. If this cut does not meet the balance condition, the algorithm removes it and recurses on the rest of the graph. This algorithm achieves an approximation guarantee that is asymptotically optimal for spectral algorithms, but unfortunately, it may need Omega(n) recursive calls and runs in worst-case quadratic time.

In work with Nisheeth Vishnoi and Sushant Sachdeva, we give the first spectral algorithm that achieves the same approximation guarantee and runs in nearly-linear time O(m polylog n), by designing a better recursion that removes unbalanced low-conductance cuts in O(log n) stages. The main idea behind this improvement is the use of heat-kernel random walks as a regularized, more stable analogue of the graph's slowest-mixing eigenvector. The principle of using regularized convex relaxations of the objective function, rather than the standard LP or SDP formulations, is a fundamental technique in the design of fast algorithms, and I will briefly survey its application to graph partitioning. Another novel contribution is a method to compute the required heat-kernel vectors in time O(m polylog n). This combines ideas from Approximation Theory, Lanczos methods, and nearly-linear time Laplacian solvers.
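
As a rough illustration of the basic spectral step described above (a simplified sketch, not the nearly-linear-time algorithm of the talk), the Python code below computes the second eigenvector of the normalized Laplacian and extracts the lowest-conductance sweep cut from it; the recursion on the remaining graph and the heat-kernel regularization are omitted, and the graph is assumed connected with no isolated vertices.

    # Simplified spectral sweep cut: compute the eigenvector of the normalized
    # Laplacian for the second-smallest eigenvalue and sweep over its sorted
    # entries, keeping the prefix of lowest conductance. This is only the basic
    # one-shot step, not the recursive nearly-linear-time algorithm.
    import numpy as np

    def sweep_cut(adj):
        deg = adj.sum(axis=1)
        d_inv_sqrt = 1.0 / np.sqrt(deg)          # assumes no isolated vertices
        lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
        _, vecs = np.linalg.eigh(lap)
        fiedler = d_inv_sqrt * vecs[:, 1]        # back to the random-walk scaling
        order = np.argsort(fiedler)

        vol_total = deg.sum()
        best_cut, best_cond = None, np.inf
        in_set = np.zeros(len(adj), dtype=bool)
        for k in order[:-1]:                     # try every prefix of the sweep
            in_set[k] = True
            vol = deg[in_set].sum()
            boundary = adj[in_set][:, ~in_set].sum()
            cond = boundary / min(vol, vol_total - vol)
            if cond < best_cond:
                best_cond, best_cut = cond, in_set.copy()
        return best_cut, best_cond

    # Example: two 4-cliques joined by a single edge; the cut should separate them.
    A = np.zeros((8, 8))
    A[:4, :4] = 1
    A[4:, 4:] = 1
    np.fill_diagonal(A, 0)
    A[3, 4] = A[4, 3] = 1
    cut, cond = sweep_cut(A)
    print(cut.astype(int), cond)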

 

 

March 26

No Event

 

 

 

April 2

Thomas Rothvoss
MIT

Approximating Bin Packing within O(log OPT * log log OPT) bins

Abstract: For bin packing, the input consists of n items with sizes s_1,...,s_n in [0,1] which have to be assigned to a minimum number of bins of size 1. The seminal Karmarkar-Karp algorithm from '82 produces a solution with at most OPT + O(log^2 OPT) bins.

 

We provide the first improvement in three decades and show that one can find a solution of cost OPT + O(log OPT * log log OPT) in polynomial time.  This is achieved by rounding a fractional solution to the Gilmore-Gomory LP relaxation using the Entropy Method from discrepancy theory. The result is constructive via algorithms of Bansal and Lovett-Meka.
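
For readers new to the problem, here is a short Python sketch of the classic first-fit decreasing heuristic, included only as a simple baseline and unrelated to the LP-rounding algorithm of the talk: items are placed in decreasing order of size into the first bin with enough remaining capacity.

    # First-fit decreasing: a simple baseline heuristic for bin packing,
    # not the Gilmore-Gomory LP rounding approach from the abstract.
    def first_fit_decreasing(sizes):
        bins = []                          # remaining capacity of each open bin
        assignment = []                    # bin index of the i-th largest item
        for size in sorted(sizes, reverse=True):
            for b, cap in enumerate(bins):
                if size <= cap + 1e-12:
                    bins[b] -= size
                    assignment.append(b)
                    break
            else:
                bins.append(1.0 - size)    # open a new bin of capacity 1
                assignment.append(len(bins) - 1)
        return len(bins), assignment

    items = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]
    num_bins, _ = first_fit_decreasing(items)
    print(num_bins)   # items sum to 3.1, so at least 4 bins are needed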

 

 

April 9

Valerie King
University of Victoria 

Dynamic Graph Connectivity in Polylogarithmic Worst Case Time

Abstract: The dynamic graph connectivity problem is the following: given a graph on a fixed set of n nodes which is undergoing a sequence of edge insertions and deletions, answer queries of the form Q(a,b): "Is there a path between nodes a and b?" While data structures for this problem with polylogarithmic amortized time per operation have been known since the mid-1990's, these data structures have Θ(n) worst case time. In fact, no previously known solution has worst case time per operation which is o(√n). In this talk I'll explain a solution with worst case times O(log^4 n) per edge insertion, O(log^5 n) per edge deletion, and O(log n/log log n) per query. The answer to each query is correct if the answer is "yes" and is correct with high probability if the answer is "no". The data structure is based on a simple novel idea which can be used to quickly identify an edge in a cutset. This work is joint with Bruce Kapron and Ben Mountjoy.
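
To make the interface concrete, here is a naive Python sketch of the three operations (edge insertion, edge deletion, connectivity query), answering each query with a fresh BFS; this baseline costs linear time per query and is shown only for contrast with the polylogarithmic worst-case bounds of the talk.

    # Naive dynamic connectivity: maintain the edge set explicitly and answer
    # each query with a BFS. Linear time per query; shown only to fix the
    # interface, not the polylogarithmic-worst-case data structure of the talk.
    from collections import defaultdict, deque

    class NaiveDynamicConnectivity:
        def __init__(self, n):
            self.n = n
            self.adj = defaultdict(set)

        def insert(self, u, v):
            self.adj[u].add(v)
            self.adj[v].add(u)

        def delete(self, u, v):
            self.adj[u].discard(v)
            self.adj[v].discard(u)

        def query(self, a, b):
            seen, frontier = {a}, deque([a])
            while frontier:
                u = frontier.popleft()
                if u == b:
                    return True
                for w in self.adj[u]:
                    if w not in seen:
                        seen.add(w)
                        frontier.append(w)
            return False

    g = NaiveDynamicConnectivity(5)
    g.insert(0, 1)
    g.insert(1, 2)
    print(g.query(0, 2))   # True
    g.delete(1, 2)
    print(g.query(0, 2))   # False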

 

April 16

No Event

 

 

 

April 23

Joshua Grochow
University of Toronto 

Unifying And Generalizing Known Lower Bounds Via Geometric Complexity Theory

Abstract: Geometric Complexity Theory (GCT) is an approach towards P versus NP and other lower bounds using algebraic geometry and representation theory. In this talk, we discuss how many known lower bounds can be viewed as following the part of the GCT approach outlined in [GCT I, II, SIAM J. Comput.]. This includes the best known bounds on permanent versus determinant, lower bounds on constant depth circuits, the partial derivatives technique, the connected components technique, the degree bound, matrix rigidity, and others. In particular, we expose the representation-theoretic content of these proofs, introducing GCT as a natural generalization of known methods. No knowledge of algebraic geometry or representation theory is assumed – rather, we motivate the use and definitions of representation theory via these previous complexity-theoretic proofs.

April 30

NO MEETING
 

 

 

May 7

NO MEETING
 

 

 

May 13

Umesh Vazirani
UC Berkeley 

Quantum Hamiltonian Complexity: through the computational lens

Abstract: The exponential complexity of quantum systems is a double-edged sword: while making quantum computers possible, it is also an enormous obstacle to analyzing and understanding physical systems. Is there any way around this curse of exponentiality? Here are three basic questions that explore this issue:

1. Do 'typical' quantum states that occur in Nature have succinct (polynomial) descriptions?
2. Can quantum systems at room temperature exhibit exponential complexity?
3. Is the scientific method sufficiently powerful to comprehend general quantum systems?

Each of these issues is best studied through the computational lens as a question about computation. The resulting questions lie at the core of computational complexity theory. The first asks about the structure of solutions to the quantum analog of SAT. The second asks whether there is a quantum analog of the PCP theorem. And the third can be formulated as a question about interactive proof systems with quantum polynomial time provers. This is a very active area, with a number of recent breakthroughs and many exciting open questions. In this talk I will try to summarize the state of the art, while keeping the talk widely accessible.

May 14

David Woodruff
IBM 

Low Rank Approximation and Regression in Input Sparsity Time

Abstract: We improve the running times of algorithms for least squares regression and low-rank approximation to account for the sparsity of the input matrix. Namely, if nnz(A) denotes the number of non-zero entries of an input matrix A: - we show how to solve approximate least squares regression given an n x d matrix A in nnz(A) + poly(d log n) time - we show how to find an approximate best rank-k approximation of an n x n matrix in nnz(A) + n*poly(k log n) time All approximations are relative error. Previous algorithms based on fast Johnson-Lindenstrauss transforms took at least ndlog d or nnz(A)*k time. Joint work with Ken Clarkson.