Feb 7 
Virginia Vassilevska Williams 
Multiplying matrices faster than Coppersmith-Winograd
In 1987 Coppersmith and Winograd presented an algorithm to multiply two n by n matrices using O(n^{2.3755}) arithmetic operations. This algorithm has remained the theoretically fastest approach for matrix multiplication for 24 years. We have recently been able to design an algorithm that multiplies n by n matrices and uses at most O(n^{2.3727}) arithmetic operations, thus improving the Coppersmith-Winograd running time. The improvement is based on a recursive application of the original Coppersmith-Winograd construction, together with a general theorem that reduces the analysis of the algorithm's running time to solving a nonlinear constraint program. The final analysis is then done by numerically solving this program. To fully optimize the running time we utilize an idea from independent work by Stothers, who claimed an O(n^{2.3737}) runtime in his Ph.D. thesis. 
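The Coppersmith-Winograd construction itself is far too intricate to sketch here, but the basic mechanism behind all fast matrix multiplication algorithms, recursing on block products while saving multiplications, is already visible in Strassen's earlier O(n^{log2 7}) ~ O(n^{2.807}) scheme. A minimal illustration of that mechanism only, not the algorithm from the talk:

```python
import numpy as np

def strassen(A, B, cutoff=2):
    # Strassen's recursion: 7 half-size block products instead of 8,
    # giving running time O(n^{log2 7}) ~ O(n^{2.807}).
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

(The code assumes n is a power of two; padding handles the general case.)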
Feb 14 
Silvio Micali 
Knightian Auctions
In an auction, a player may not exactly know his own valuation, or its distribution. We thus study single-good auctions whose players know their own valuations only within a multiplicative factor (e.g., 10%). The notions of implementation in dominant and undominated strategies are naturally extended to this setting, but their power is vastly different. Namely, (1) we prove that no dominant-strategy mechanism can guarantee more social welfare than by assigning the good at random; but (2) we prove that a much better performance can be obtained via undominated-strategy mechanisms, and actually provide tight upper and lower bounds for the fraction of the maximum social welfare guaranteeable by such mechanisms, whether deterministic or probabilistic. Joint work with Alessandro Chiesa and Zeyuan Zhu. 
Feb 21 
Thomas Vidick 
Certifiable Quantum Dice
Motivation for the field of pseudorandomness stems from a fundamental observation: since no statistical test can distinguish a uniformly random string from one that has been carefully crafted to pass that specific test, no physical process can be trusted to produce truly random bits; hence the need for pseudorandom bits. While quantum mechanics trivially allows for the generation of randomness, when combined with the no-signaling principle it allows for a much more remarkable effect: a physical device whose output is certifiably random as long as it passes a simple statistical test. The guarantee that the device's outputs are random (that is, any particular n-bit output has probability close to 2^{-n}) is independent of the validity of quantum mechanics, and follows only from the statistical test and the fact that a no-signaling condition is met between two parts of the device (such a condition may follow, for instance, from the speed-of-light limit imposed by special relativity). We describe such a device that stretches an initial (log n)-bit random string into n random bits, thereby realizing an exponential expansion of randomness. The underlying statistical test is based on a simple Bell inequality, the CHSH inequality. In addition, we show that the random bits produced by the device may be used in post-quantum cryptography, by proving that they are random even from the point of view of a quantum adversary who may have prior entanglement with the very device used to generate the random bits. This answers a question left open since the introduction of quantum randomness-generating devices by Colbeck in 2009. Based on joint work with Umesh Vazirani. 
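The gap that the CHSH test certifies can be made concrete with a few lines of linear algebra: no classical (deterministic, no-signaling) strategy wins the CHSH game with probability above 3/4, while measurements on an entangled pair reach cos^2(pi/8) ~ 0.854. A sketch of both bounds using the standard textbook strategies (not the protocol from the talk):

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)        # maximally entangled pair

A = [Z, X]                                        # Alice's observables
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]  # Bob's observables

def corr(x, y):
    # expectation of the +/-1-valued outcome product on inputs (x, y)
    return phi @ np.kron(A[x], B[y]) @ phi

S = corr(0, 0) + corr(0, 1) + corr(1, 0) - corr(1, 1)  # CHSH value: 2*sqrt(2)
p_quantum = 1 / 2 + S / 8                              # winning probability

# best classical strategy: enumerate all 16 deterministic ones
def win(a0, a1, b0, b1):
    return sum(((a0 if x == 0 else a1) ^ (b0 if y == 0 else b1)) == (x & y)
               for x in (0, 1) for y in (0, 1)) / 4

p_classical = max(win(a0, a1, b0, b1)
                  for a0 in (0, 1) for a1 in (0, 1)
                  for b0 in (0, 1) for b1 in (0, 1))
```

Here p_classical comes out to 3/4 and p_quantum to cos^2(pi/8); a device that statistically beats 3/4 cannot be behaving deterministically.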
Feb 28 
Ilias Diakonikolas 
UNUSUAL TIME: 5.15pm 
Reconstructing Boolean Threshold Functions from their Average Satisfying Assignment
In the 2nd FOCS conference (1961), C.K. Chow gave a surprising characterization of Boolean threshold functions (halfspaces): among all Boolean functions, each threshold function f is uniquely determined by the ``center of mass'' of its positive inputs and the number of positive inputs. These n+1 parameters of f (known since as ``Chow parameters'') are equivalent to its degree-0 and degree-1 Fourier coefficients. The proof of Chow's theorem is highly non-constructive. In this talk, we address the algorithmic problem of efficiently reconstructing a threshold function from its Chow parameters. This problem arises in various contexts and has been considered by researchers in electrical engineering, game theory, social choice and learning. I will describe a nearly-optimal algorithm to approximately reconstruct a threshold function from its Chow parameters. The algorithm and its analysis use tools from Fourier analysis, linear algebra, probability and learning. Based on joint work with Anindya De (Berkeley), Vitaly Feldman (IBM Almaden), and Rocco Servedio (Columbia). 
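For intuition, the Chow parameters of a small threshold function can be computed by brute force as its degree-0 and degree-1 Fourier coefficients. The weights and threshold below are arbitrary illustrative choices, not an example from the talk:

```python
from itertools import product

def chow_parameters(f, n):
    # degree-0 and degree-1 Fourier coefficients of f: {-1,1}^n -> {-1,1}:
    # E[f(x)] and E[f(x) * x_i] under the uniform distribution
    pts = list(product((-1, 1), repeat=n))
    N = len(pts)
    c0 = sum(f(x) for x in pts) / N
    cs = [sum(f(x) * x[i] for x in pts) / N for i in range(n)]
    return (c0, *cs)

def halfspace(w, theta):
    # the threshold function sign(<w, x> - theta)
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else -1

f = halfspace([2, 1, 1], 1)   # an arbitrary small threshold function
```

By Chow's theorem, this (n+1)-tuple already determines f uniquely among all Boolean functions; the talk concerns inverting that map efficiently and approximately.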
March 6 
Julia Chuzhoy 
Routing in Undirected Graphs with Constant Congestion
Given an undirected graph G=(V,E), a collection (s_1,t_1),...,(s_k,t_k) of k source-sink pairs, and an integer c, the goal in the Edge-Disjoint Paths with Congestion problem is to connect the maximum possible number of the source-sink pairs by paths, so that the maximum load on any edge (called edge congestion) does not exceed c. We show an efficient randomized algorithm to route (OPT/polylog k) source-sink pairs with congestion at most 14, where OPT is the maximum number of pairs that can be simultaneously routed on edge-disjoint paths. The best previous algorithm that routed (OPT/polylog n) pairs required congestion poly(log log n), and for the setting where the maximum allowed congestion is bounded by a constant c, the best previous algorithms could only guarantee the routing of OPT/n^{O(1/c)} pairs. We also introduce a new type of vertex sparsifiers, which we call integral flow sparsifiers, that approximately preserve both fractional and integral routings, and show an algorithm to construct such sparsifiers. 
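As a small aid to the definitions, the edge congestion of a candidate routing can be checked directly. A utility sketch only, not part of the algorithm in the talk:

```python
from collections import Counter

def edge_congestion(paths):
    # paths: each path is a list of vertices; congestion is the maximum
    # number of paths sharing any single (undirected) edge
    load = Counter()
    for path in paths:
        for u, v in zip(path, path[1:]):
            load[frozenset((u, v))] += 1
    return max(load.values()) if load else 0
```

For example, routing three pairs along [1,2,3], [4,2,3] and [1,2,5] has congestion 2, since edges {1,2} and {2,3} each carry two paths.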
March 13 
Mihalis Yannakakis 
Polynomial Time Algorithms for Multi-Type Branching Processes and Stochastic Context-Free Grammars
We show that one can approximate the least fixed point solution for a multivariate system of monotone probabilistic polynomial equations in time polynomial in both the encoding size of the system of equations and in log(1/epsilon), where epsilon > 0 is the desired additive error bound of the solution. (The model of computation is the standard Turing machine model.) We use this result to resolve several open problems regarding the computational complexity of computing key quantities associated with some classic and heavily studied stochastic processes, including multi-type branching processes and stochastic context-free grammars. (Joint work with Kousha Etessami and Alistair Stewart.) 
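The simplest instance of such a system is the single-equation case: the extinction probability of a one-type branching process is the least fixed point of its offspring generating function. The sketch below finds it by monotone value iteration from 0; note that the talk's polynomial-time guarantee relies on more sophisticated machinery (a variant of Newton's method), since plain iteration can converge very slowly:

```python
def least_fixed_point(f, eps=1e-12):
    # iterate x <- f(x) from 0; for a monotone f this converges
    # upward to the least fixed point
    x = 0.0
    while True:
        nx = f(x)
        if nx - x < eps:
            return nx
        x = nx

# extinction probability of a branching process with offspring
# distribution P(0 children) = 1/4, P(2 children) = 3/4:
q = least_fixed_point(lambda x: 0.25 + 0.75 * x * x)
```

The equation x = 1/4 + (3/4)x^2 has roots 1/3 and 1; the least fixed point, 1/3, is the extinction probability (the root 1 is a fixed point of every offspring generating function).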
March 20  Pseudo-Deterministic Algorithms
We describe a new type of probabilistic search algorithm which we call Bellagio algorithms: randomized algorithms which are guaranteed to run in expected polynomial time and, with high probability, produce a correct and unique solution. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial-time observer with black-box access. We show a necessary and sufficient condition for the existence of a Bellagio algorithm for an NP relation R: R has a Bellagio algorithm if and only if it is deterministically reducible to some decision problem in BPP. Several examples of Bellagio algorithms, for problems in number theory, algebra and graph theory which improve on deterministic solutions, will also be discussed. We note that the characterization scales up. The notion of pseudo-deterministic algorithms (or, more generally, computations) is interesting beyond just sequential polynomial-time algorithms, in other domains where the use of randomization is essential. For example, one can define and obtain pseudo-deterministic fault-tolerant distributed algorithms for tasks which are impossible to solve without randomization. We will discuss several such domains. 
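A toy example of the flavor of pseudo-determinism (chosen for illustration here, not necessarily one of the talk's examples) is computing square roots modulo a prime via Tonelli-Shanks: the algorithm is randomized, since it samples a quadratic non-residue internally, yet by always returning the smaller of the two roots its output is unique, so an observer cannot tell which coins were flipped:

```python
import random

def legendre(a, p):
    # a^((p-1)/2) mod p: 1 for quadratic residues, p-1 for non-residues
    return pow(a, (p - 1) // 2, p)

def sqrt_mod(a, p):
    """Square root of a quadratic residue a modulo an odd prime p
    (Tonelli-Shanks). Internally randomized, but the returned root is
    canonical (always min(r, p-r)), so every run produces the same answer."""
    assert legendre(a, p) == 1
    q, s = p - 1, 0
    while q % 2 == 0:
        q //= 2
        s += 1
    if s == 1:                        # p = 3 (mod 4): closed-form root
        r = pow(a, (p + 1) // 4, p)
        return min(r, p - r)
    z = 2
    while legendre(z, p) != p - 1:    # randomized step: find a non-residue
        z = random.randrange(2, p)
    m, c = s, pow(z, q, p)
    t, r = pow(a, q, p), pow(a, (q + 1) // 2, p)
    while t != 1:
        i, sq = 0, t
        while sq != 1:
            sq = sq * sq % p
            i += 1
        b = pow(c, 1 << (m - i - 1), p)
        m, c, t, r = i, b * b % p, t * b * b % p, r * b % p
    return min(r, p - r)
```

For instance, sqrt_mod(5, 41) returns 13 on every run (13^2 = 169 = 5 mod 41), even though different runs may sample different non-residues internally.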

March 27  No Seminar (Spring Break) 
April 3  No Seminar (FOCS deadline) 
April 10 
Dana Ron 
On sublinear algorithms for approximating graph parameters
When we refer to efficient algorithms, we usually mean polynomial-time algorithms. In particular this is true for graph algorithms, which are considered efficient if they run in time polynomial in the number of vertices and edges of the graph. However, in some cases we may seek even more efficient algorithms whose running time is sublinear in the size of the input graph. Such algorithms do not even read the whole input graph, but rather sample random parts of the graph and compute approximations of various parameters of interest. In this talk I will survey various such algorithms, where the parameters I will discuss are: (1) the average degree and the number of small stars; (2) the weight of a minimum spanning tree; (3) the size of a minimum vertex cover and of a maximum matching; (4) the number of edges that should be added/removed in order to obtain various properties. 
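For the first parameter, the most naive sublinear approach is simply to sample vertices and average their degrees; the algorithms surveyed in the talk need cleverer sampling (e.g. bucketing vertices by degree) to handle skewed degree sequences with few samples. A naive sketch only:

```python
import random

def estimate_avg_degree(adj, samples=2000, rng=random):
    # sample vertices uniformly at random and average their degrees,
    # reading only a sublinear portion of the graph; accurate when the
    # degree sequence is not too skewed
    n = len(adj)
    return sum(len(adj[rng.randrange(n)]) for _ in range(samples)) / samples
```

On a 1000-vertex cycle, every sampled vertex has degree 2, so the estimate is exactly 2 after reading only a few hundred adjacency lists.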
April 17  No Seminar (Patriots' Day) 
April 24 
Tim Roughgarden 
An Extension Theorem for the Price of Anarchy in Games of Incomplete Information
Pure-strategy Nash equilibria, where each player deterministically picks a single action, are often easier to analyze than their more general cousins like mixed-strategy Nash equilibria (where players can randomize) and Bayes-Nash equilibria (where players don't even know with certainty what game they're playing in). For this reason, the price of anarchy of a game (the worst-case ratio between the objective function value of an equilibrium and that of an optimal outcome) is often analyzed, at least initially, only for the game's pure-strategy Nash equilibria. But as much as he or she might want to, the conscientious researcher cannot stop there. For example, the Nash equilibrium concept fundamentally assumes that all players' preferences are common knowledge, which is inappropriate for most auction and mechanism design contexts, where participants have private information. Can we obtain price of anarchy bounds for more general equilibrium concepts without working any harder than we already do to analyze pure-strategy Nash equilibria? Ideal would be an "extension theorem" that could be used in the following ``black-box'' way: (1) prove a bound on the price of anarchy of pure-strategy Nash equilibria of a game; (2) invoke the extension theorem to conclude immediately that the exact same approximation bound applies to some more general equilibrium concept. Such an extension theorem would dodge the obvious deterrents from generalizing price of anarchy bounds beyond pure Nash equilibria: no extra work, and no loss in the approximation guarantee. In this talk, I'll describe a new extension theorem for games of incomplete information, along with some applications. 
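To fix ideas on step (1), the price of anarchy over pure-strategy Nash equilibria of a tiny atomic congestion game can be computed by exhaustive enumeration. An illustrative toy, not an example from the talk:

```python
from itertools import product

def pure_nash_poa(costs, n_players, n_resources):
    # costs[r](load) = per-player cost on resource r at that load;
    # each player picks one resource; social cost = sum of players' costs
    profiles = list(product(range(n_resources), repeat=n_players))

    def social(p):
        load = [p.count(r) for r in range(n_resources)]
        return sum(costs[r](load[r]) for r in p)

    def is_nash(p):
        load = [p.count(r) for r in range(n_resources)]
        for i in range(n_players):
            cur = costs[p[i]](load[p[i]])
            for r in range(n_resources):
                if r != p[i] and costs[r](load[r] + 1) < cur:
                    return False   # player i has a profitable deviation
        return True

    opt = min(social(p) for p in profiles)
    worst_ne = max(social(p) for p in profiles if is_nash(p))
    return worst_ne / opt
```

For two players choosing between a load-dependent link (cost = load) and a constant-cost link (cost = 2): both players crowding the first link is a Nash equilibrium of social cost 4, while the optimal split has social cost 3, giving a price of anarchy of 4/3.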
May 1 
Jake Abernethy 
Minimax Option Pricing Meets Black-Scholes in the Limit
Option contracts are a type of financial derivative that allow investors to hedge risk and speculate on the volatility of an asset's future market price. In short, an option has a particular payout that is based on the market price for an asset on a given date in the future. In 1973, Black and Scholes proposed a valuation model that gives a "fair price" for an option under the assumption that the price fluctuates according to geometric Brownian motion (GBM). Black and Scholes provided a continuous-time trading strategy for an investor, known as a "replication strategy" or "hedging strategy", which allows the investor to buy and sell the asset in order to "replicate" the option's payoff. In this talk we'll consider the design of replication strategies, and hence a pricing mechanism, for options and other derivatives that does not rely on the GBM assumption. Indeed, we shall address the following question: what can an investor achieve when the asset's price path is chosen by... an adversary? What we show is that, even under these worst-case conditions, we ultimately recover the Black-Scholes pricing model, but without the GBM assumption. This talk will dive into topics like finance and repeated game playing, but will be relatively accessible to anyone with an interest in game theory, learning theory, or online algorithms. Joint work with Rafael Frongillo and Andre Wibisono. 
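For reference, the classical baseline that the minimax analysis recovers is the standard Black-Scholes price of a European call under the GBM assumption:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

def black_scholes_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call: spot S, strike K, maturity T
    # (in years), risk-free rate r, volatility sigma (GBM assumption)
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

For example, an at-the-money call (S = K = 100) with one year to maturity, r = 5% and sigma = 20% prices at about 10.45.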
May 8 
Irit Dinur 
Covering CSPs (Delayed to Fall semester due to Simons Talks)
We study the complexity of constraint satisfaction problems (CSPs) from a new "covering" perspective. We define the covering number of a CSP instance C, denoted v(C), to be the smallest number of assignments to the variables, such that each constraint is satisfied by at least one of the assignments. This covering notion describes situations in which we must satisfy all the constraints, and are willing to use more than one assignment to do so. At the same time, we want to minimize the number of assignments. We study the covering CSP problem for different constraint predicates. We also study an interesting relationship between this problem and approximate coloring of graphs. Based on joint work with Gillat Kol. 
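The covering number of a small instance can be computed by brute force directly from the definition. An illustrative utility only, not from the talk:

```python
from itertools import combinations, product

def covering_number(n_vars, constraints):
    # smallest k such that some k assignments together satisfy every
    # constraint; each constraint is a predicate on a full 0/1 assignment
    assignments = list(product((0, 1), repeat=n_vars))
    for k in range(1, len(constraints) + 1):
        for cover in combinations(assignments, k):
            if all(any(c(a) for a in cover) for c in constraints):
                return k
```

For the two constraints x0 != x1 and x0 = x1 = 1 over two Boolean variables, no single assignment satisfies both, but the pair {(0,1), (1,1)} covers them, so the covering number is 2.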
May 15  Efficient Interactive Coding Against Adversarial Noise
We study the fundamental problem of constructing interactive protocols that are robust to noise, a problem that was originally considered in the seminal works of Schulman (FOCS '92, STOC '93), and has recently regained popularity. Robust interactive communication is the interactive analogue of error-correcting codes: given an interactive protocol which is designed to run on an error-free channel, construct a protocol that evaluates the same function (or, more generally, simulates the execution of the original protocol) over a noisy channel. As in (non-interactive) error-correcting codes, the noise can be either stochastic, i.e. drawn from some distribution, or adversarial, i.e. arbitrary subject only to a global bound on the number of errors. We show how to convert any interactive protocol into one that is both efficient and robust against constant-rate adversarial noise, and incurs only a constant blowup in the communication complexity (cc). Our robust protocol is randomized, and it succeeds in simulating the original protocol with probability at least 1 - 2^{-Omega(cc)}. Previous solutions are either inefficient or are resilient only to stochastic noise. This is joint work with Zvika Brakerski. 