**MIT CIS SEMINAR - HISTORY**

*Date*: **Friday, Feb 22, 2013 **

*Time*: **10:00 - 11:30 am **

*Place*: **32-G449, Patil/Kiva Stata Ctr, MIT **

*Title*: **Shielding Circuits with Groups**

*Speaker*: **Eric Miles**

*Abstract*: Traditionally, cryptography models an adversary as having only input/output access to a given algorithm. A recent line of work known as leakage-resilient cryptography additionally gives the adversary the output of a computationally limited leakage function applied to the algorithm's internal state (e.g. to the wires of a circuit implementing the algorithm). A general goal in this area is to compile any circuit into a new "shielded" circuit that remains secure under these attacks.

In this work we give a new such compiler, producing shielded circuits that withstand leakage from virtually any class of functions against which average-case lower bounds are known, recovering and extending previous results. Specifically, our circuits derive their security from the hardness of computing iterated products over the alternating group A_5. We also conjecture that our circuits withstand NC^1 leakage if NC^1 is not equal to L. We build on previous constructions by Ishai et al. [Crypto ’03] and Faust et al. [Eurocrypt ’10], and also use and extend the relationship between group theory and computation first established by Barrington [STOC '86]. In particular we exploit properties of the alternating group beyond what is sufficient for Barrington's theorem.

This is joint work with Emanuele Viola.
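The hard function underlying the construction above is the iterated product of elements of the alternating group A_5. As a toy illustration of the objects involved (this sketch is not the paper's compiler), composing 5-element even permutations and multiplying out a long sequence looks like:

```python
import random
from itertools import permutations

# Toy illustration of iterated products over the alternating group A_5;
# illustrative only, not the paper's circuit compiler.

def compose(p, q):
    """(p . q)(i) = p(q(i)); permutations are tuples over {0..4}."""
    return tuple(p[q[i]] for i in range(5))

def is_even(p):
    # A_5 = the even permutations of 5 elements; parity via inversion count
    inversions = sum(1 for i in range(5) for j in range(i + 1, 5) if p[i] > p[j])
    return inversions % 2 == 0

A5 = [p for p in permutations(range(5)) if is_even(p)]
assert len(A5) == 60  # |A_5| = 5!/2 = 60

# An iterated-product instance: multiply out a long random sequence.
rng = random.Random(0)
seq = [rng.choice(A5) for _ in range(1000)]
prod = (0, 1, 2, 3, 4)  # identity permutation
for p in seq:
    prod = compose(prod, p)
assert prod in A5  # closure: a product of even permutations is even
```

Barrington's theorem (cited in the abstract) is what ties such products to circuits: polynomial-length products over a non-solvable group such as A_5 can encode any NC^1 computation.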

*Date*: **Friday, March 1, 2013 **

*NO MEETING*

*Date*: **Friday, March 8, 2013 **

*Time*: **10:00 - 11:30 am **

*Place*: **32-G449, Patil/Kiva Stata Ctr, MIT **

*Title*: **TBA**

*Speaker*: **Huijia Rachel Lin**

*Abstract*: TBD

*Date*: **Friday, March 15, 2013 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449, Patil/Kiva Stata Ctr, MIT **

*Title*: **Dynamic Proofs of Retrievability via Oblivious RAM**

*Speaker*: **Daniel Wichs**

*Abstract*: Proofs of retrievability allow a client to store her data on a remote server ("in the cloud") and periodically execute an efficient audit protocol to check that all of the data is being maintained correctly and can be recovered from the server. For efficiency, the computation and communication of the server and client during an audit protocol should be significantly smaller than reading/transmitting the data in its entirety. Although the server is only asked to access a few locations of its storage during an audit, it must maintain full knowledge of all client data to be able to pass.

Starting with the work of Juels and Kaliski (CCS '07), all prior solutions to this problem crucially assume that the client data is static and do not allow it to be efficiently updated. Indeed, they all store a redundant encoding of the data on the server, so that the server must delete a large fraction of its storage to "lose" any actual content. Unfortunately, this means that even a single-bit modification to the original data will need to modify a large fraction of the server storage, which makes updates highly inefficient. Overcoming this limitation was left as the main open problem by all prior works.

In this work, we give the first solution providing proofs of retrievability for dynamic storage, where the client can perform arbitrary reads/writes on any location within her data by running an efficient protocol with the server. At any point in time, the client can execute an efficient audit protocol to ensure that the server maintains the latest version of the client data. The computation and communication complexity of the server and client in our protocols is only polylogarithmic in the size of the client's data. The starting point of our solution is to split up the data into small blocks and redundantly encode each block of data individually, so that an update inside any data block only affects a few codeword symbols. The main difficulty is to prevent the server from identifying and deleting too many codeword symbols belonging to any single data block. We do so by hiding where the various codeword symbols for any individual data block are stored on the server and when they are being accessed by the client, using the algorithmic techniques of oblivious RAM.

Joint work with David Cash and Alptekin Kupcu.
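The block-local encoding idea from the abstract can be sketched with a toy repetition code (purely illustrative: the parameters are invented, the real construction uses proper erasure codes, and it hides the access pattern with oblivious RAM, which this sketch omits entirely):

```python
# Toy sketch: split data into small blocks and encode each block
# independently, so a one-symbol write touches only that block's codeword.

BLOCK = 4  # data symbols per block (hypothetical parameter)
REP = 3    # repetition factor standing in for a real erasure code

def encode(data: bytes):
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [bytes(b) * REP for b in blocks]  # each block encoded on its own

def update(codewords, pos, value):
    """Write one symbol; only one small codeword is re-encoded."""
    b, off = divmod(pos, BLOCK)
    block = bytearray(codewords[b][:BLOCK])
    block[off] = value
    codewords[b] = bytes(block) * REP
    return b  # index of the single codeword that changed

def read(codewords, pos):
    b, off = divmod(pos, BLOCK)
    copies = [codewords[b][r * BLOCK + off] for r in range(REP)]
    return max(set(copies), key=copies.count)  # majority vote survives minority loss

codewords = encode(b"abcdefgh")
touched = update(codewords, 5, ord("X"))
assert touched == 1 and read(codewords, 5) == ord("X")
```

Contrast this with the static schemes described above, which encode the data monolithically, so one changed bit forces a rewrite of a large fraction of the server's storage.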

*Date*: **Friday, March 22, 2013 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449, Patil/Kiva Stata Ctr, MIT **

*Title*: TBD

*Speaker*: **Guy Rothblum**

*Abstract*: TBD

*Date*: **Friday, April 5, 2013 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449, Patil/Kiva Stata Ctr, MIT **

*Title*: TBD

*Speaker*: **Mark Zhandry**

*Abstract*: Post-quantum cryptography attempts to build classical cryptosystems that are secure, even against a quantum adversary. The security definitions are left mostly unchanged from the classical setting, keeping the adversary’s interaction with the cryptosystem classical. This models a world where the adversary has a quantum computer, but the honest parties remain classical.

In this talk, we look beyond post-quantum cryptography to a world where all parties have a quantum computer. Now an adversary may interact with the honest parties in a quantum way, necessitating new security definitions that model these interactions. We demonstrate how to construct pseudorandom functions, message authentication codes, signatures, and encryption schemes in the setting where the adversary can issue quantum queries to the parties involved. We develop new security definitions for these settings and new proof techniques for arguing the security of our schemes.

Joint work with Dan Boneh.

*Date*: **Friday, April 12, 2013 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449, Patil/Kiva Stata Ctr, MIT **

*Title*: TBD

*Speaker*: **Eli Ben-Sasson**

*Abstract*: TBD

*Date*: **Friday, April 19, 2013 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449, Patil/Kiva Stata Ctr, MIT **

*Title*: TBD

*Speaker*: **Raluca Ada Popa**

*Abstract*: TBD

*Date*: **Friday, April 26, 2013 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449, Patil/Kiva Stata Ctr, MIT **

*Title*: TBD

*Speaker*: **Alessandro Chiesa**

*Abstract*: TBD

*Date*: **Friday, May 3, 2013 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449, Patil/Kiva Stata Ctr, MIT **

*Title*: TBD

*Speaker*: **Dana Dachman-Soled**

*Abstract*: TBD

*Date*: **Friday, May 10, 2013 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G575, Stata Ctr, MIT **

*Title*: TBD

*Speaker*: **Stefano Tessaro**

*Abstract*: TBD

**PAST SEMINARS**

*Date*: **Friday, Dec 14, 2012 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449, Patil/Kiva Stata Ctr, MIT **

*Title*: **Rational Security and its Query Complexity **

*Speaker*: **Pablo Azar, CSAIL, MIT**

*Abstract*: We study a security model where an expert has private knowledge about a distribution $\mathcal{D}$ of states of the world. We are interested in receiving this information from the expert, but need to trust that she is giving us the correct distribution. Security in this setting means that by telling the truth, the expert will maximize some reward.

This is quite different from the traditional notion of security, but it is well suited to practice when the adversary is neither honest nor malicious, but instead rational. As in traditional security, we prove that randomness is crucial if we want an efficient guarantee of correctness.

More formally, if the support of $\mathcal{D}$ has $2^n$ states, and the reward that you are willing to give the expert is bounded, we prove that (a) any deterministic reward function requires time $\Omega(2^n)$ to compute, and (b) there is a randomized reward function that can be computed in time $poly(n)$.

Joint work with Silvio Micali.
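One classical way to reward truthful reporting of a distribution is a strictly proper scoring rule; the sketch below uses the Brier (quadratic) rule on a sampled outcome purely as an illustration (the paper's actual reward function may differ). Scoring a single sampled state, rather than enumerating the $2^n$-state support, is the kind of randomization that makes the reward cheap to compute:

```python
# Toy reward based on the Brier scoring rule, which is strictly proper:
# the expert's expected reward is maximized exactly when her reported
# distribution equals the true one. Illustrative only.

def brier_reward(report, outcome):
    """report: dict state -> claimed probability; outcome: one sampled state."""
    return 2 * report.get(outcome, 0.0) - sum(p * p for p in report.values())

true_dist = {"a": 0.7, "b": 0.3}
lie       = {"a": 0.5, "b": 0.5}

def expected_reward(report):
    # expectation over outcomes drawn from the TRUE distribution
    return sum(true_dist[s] * brier_reward(report, s) for s in true_dist)

assert expected_reward(true_dist) > expected_reward(lie)  # truth pays more
```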

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Monday, November 19, 2012 **

*Time*: **6:00 pm - 7:30 pm **

*Place*: **32-141, MIT Stata Ctr, 32 Vassar St, Cambridge **

*Title*: **THE GOVERNMENT IS PROFILING YOU **

*Speaker*: **William Binney, Former National Security Agency, Technical Director of the World Geopolitical and Military Analysis Reporting Group, Signals Intelligence Automation Research Center (SARC) **

*Discussant*: **Carol Rose, Executive Director, ACLU of Massachusetts **

*Sponsored by*: **MIT CSAIL CIS Group and The Computers, Freedom and Privacy Conference**

*VIDEO available on*: **http://techtv.mit.edu/collections/cis **

*Abstract*:

While spreading an atmosphere of fear after 9/11, our government has violated our laws, prevented the Congress and courts from doing their Constitutional duty, created a surveillance state and concentrated power in the executive, all in the name of keeping us safe. In an effort to reverse these ongoing unconstitutional activities, William Binney revealed the National Security Agency's massive domestic spying program, Stellar Wind, which intercepts domestic communications without protections for US citizens. Binney disclosed that NSA sought and received access to telecommunications companies' domestic and international billing records. He told the public that, since 9/11, the agency has intercepted between 15 and 20 trillion communications. Binney also revealed that the NSA concealed Stellar Wind under the patriotic-sounding "Terrorist Surveillance Program," in order to give cover to the warrantless surveillance program's violations of Americans' constitutional rights.

Two related Wired magazine articles by James Bamford: http://www.wired.com/threatlevel/2012/03/ff_nsadatacenter/all/ http://www.wired.com/threatlevel/2012/04/shady-companies-nsa/

**MIT CIS SEMINAR**

**Open To The Public **

*Date*: **Friday, Oct 19, 2012 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449, Patil/Kiva Stata Ctr, MIT **

*Title*: **Constructing constant round non-malleable commitments **

*Speaker*: **Vipul Goyal, MSR India**

*Abstract*:

Non-malleable commitments are fundamental tools in cryptography. They are useful on their own to counter man-in-the-middle attacks as well as useful as a building block in designing larger protocols: both in the standalone setting as well as the internet setting.

We provide the first constant round constructions of non-malleable commitments (and zero-knowledge) based only on one-way functions. This improves upon several previous (incomparable) works which required either: (a) super-constant number of rounds, or, (b) non-standard or sub-exponential hardness assumptions, or, (c) non-black-box simulation and collision resistant hash functions. This also allows us to obtain the first constant round multi-party computation protocol relying only on the existence of constant round oblivious transfer protocols.

Our primary technique can be seen as a means of implementing the previous "two-slot non-black-box simulation" idea of Pass (STOC 2004) with only black-box simulation.

We also show how to instantiate the above protocol by making only a black-box use of the underlying one-way functions. Prior to this work, no black-box construction of non-malleable commitments was known (except for relaxed notions of security) in any (polynomial) number of rounds based on any cryptographic assumption. This is done by relying on ideas from the "secure computation in the head" paradigm of Ishai, Kushilevitz, Ostrovsky, and Sahai (STOC 2007).

Parts of this work have appeared in FOCS 2012 (jointly with Lee, Ostrovsky, and, Visconti) and STOC 2011.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, Oct 5, 2012 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449, Patil/Kiva Stata Ctr, MIT **

*Title*: **From the Impossibility of Obfuscation to a New Non-Black-Box Simulation Technique **

*Speaker*: **Omer Paneth, Boston University **

*Abstract*:

The introduction of a non-black-box simulation technique by Barak (FOCS 2001) has been a major landmark in cryptography, breaking the previous barriers of black-box impossibility. Barak's techniques were subsequently extended and have given rise to various powerful applications. We present the first non-black-box simulation technique that does not rely on Barak's technique (or on nonstandard assumptions). Our technique is based on essentially different tools: it does not invoke universal arguments, nor does it rely on collision-resistant hashing. Instead, the main ingredient we use is the impossibility result for general program obfuscation of Barak et al. (CRYPTO 2001).

Using our technique, we construct a new resettably sound zero-knowledge (rsZK) protocol. rsZK protocols remain sound even against cheating provers that can repeatedly reset the verifier to its initial state and random tape. Indeed, for such protocols black-box simulation is impossible. Our rsZK protocol is the first to be based solely on semi-honest oblivious transfer and does not rely on collision-resistant hashing; in addition, our protocol does not use PCP machinery. In the converse direction, we show a generic transformation from any rsZK protocol to a family of functions that cannot be obfuscated.

Joint work with Nir Bitansky.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, July 13, 2012 **

*Time*: **10:30 am - 12:00 pm **

*Note Different Place*: **32-G575, Stata Ctr, MIT **

*Title*: **Functional Encryption with Bounded Collusions via Multi-Party Computation **

*Speaker*: **Hoeteck Wee, George Washington University **

*Abstract*:

We construct functional encryption schemes for polynomial-time computable functions secure against an a-priori bounded polynomial number of collusions. Our constructions require only semantically secure public-key encryption schemes and pseudorandom generators computable by small-depth circuits (known to be implied by most concrete intractability assumptions). For certain special cases such as predicate encryption schemes with public index, the construction requires only semantically secure encryption schemes.

Along the way, we show a "bootstrapping theorem" that builds a q-query functional encryption scheme for arbitrary functions starting from a q-query functional encryption scheme for bounded-degree functions. All our constructions rely heavily on techniques from secure multi-party computation and randomized encodings.

Our constructions are secure under a strong simulation-based definition of functional encryption. Joint work with Sergey Gorbunov and Vinod Vaikuntanathan.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, July 6, 2012 **

*Time*: **10:30 am - 12:00 pm **

*Note Different Place*: **32-G575, Stata Ctr, MIT **

*Title*: **Abstract cryptography, game isomorphism, local simulators, and coercibility **

*Speaker*: **Ueli Maurer, ETH, Zurich **

*Abstract*:

In the spirit of algebraic abstraction, the abstract cryptography (AC) framework advocates the definition and use of higher levels of abstraction in cryptography (and beyond). If contrasted with the standard bottom-up approach in CS to defining models of computation, algorithms, complexity, efficiency, and then security of cryptographic schemes, AC is top-down and axiomatic, where lower abstraction levels inherit the definitions and theorems (e.g. a composition theorem) from the higher level, but the definition or concretization of low levels is not required for proving theorems at the higher levels. The goal is to strive for simpler definitions, higher generality of results, simpler proofs, improved elegance, possibly better didactic suitability, and to derive new insights from the abstract viewpoint.

In particular, based on an isomorphism concept for games, this leads to a general framework for defining and proving that a resource satisfying an (ideal) specification is constructed from some resources satisfying certain (real) specifications. This puts the well-known "ideal-world / real-world" paradigm as well as the simulation paradigm on a new theoretical foundation, applicable in various cryptographic settings. Existing frameworks for proving composable security, such as Canetti's UC framework, reactive simulatability by Pfitzmann and Waidner, or indifferentiability, can be explained as special cases of our framework, with unnecessary technical details eliminated. But, as a strict extension, simulators in AC are local to each party, and this makes it possible to capture settings that cannot be modeled by a central adversary, such as coercibility. Joint work with Renato Renner.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, May 25, 2012 **

*Time*: **10:30 am - 12:00 pm **

*Note Different Place*: **32-G449, Stata Ctr, MIT **

*Title*: **How to Bootstrap a SNARK in Public **

*Speaker*: **Nir Bitansky, TAU and BU **

*Abstract*:

Publicly verifiable succinct non-interactive arguments of knowledge (SNARKs) are concise, easily-checkable proofs for the correct execution of long computations. These proofs require a public and reusable reference string that allows proofs to be created and checked by anyone. Generalizing SNARKs, Proof-carrying data systems (PCDs) provide a powerful tool for verifying correctness of distributed computations. So far, publicly-verifiable SNARKs and PCDs were only known to exist either in the random oracle model or in a "signing oracle" model.

We present recursive composition and bootstrapping techniques for SNARKs and PCDs that: 1. Transform SNARKs that have a long reference string but are otherwise succinct into fully succinct SNARKs. 2. Transform any SNARK into a PCD system for distributed computations over chains of fixed polynomial length.

Applying our transformations to the succinct NIZKs of Groth [ASIACRYPT11], or of Gentry et al. [EPRINT12], whose security is based on a Knowledge of Exponent assumption in bilinear groups, we obtain the first publicly-verifiable SNARKs and PCDs in the plain model. Interestingly, the resulting constructions do not rely on the PCP Theorem and enjoy several efficiency benefits. We also obtain analogous results for privately-verifiable PCDs and SNARKs.

We note that the PCD abstraction plays a central role in our construction and analysis, not only as a goal but also as a main tool. In particular, PCDs allow us to present and prove our constructions in a relatively clean and concise way. Joint work with Ran Canetti, Alessandro Chiesa and Eran Tromer.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, May 11, 2012 **

*Time*: **10:30 am - 12:00 pm **

*Note Different Place*: **32-G882, Hewlett Room, Stata Ctr, MIT **

*Title*: **Rational Proofs **

*Speaker*: **Silvio Micali, CSAIL, MIT **

*Abstract*:

We unify the treatment of asymmetry of information in theoretical computer science and economics.

We put forward a new type of proof system, where an unbounded Prover and a polynomial-time Verifier interact, on inputs a string x and a function f, so that the Verifier may learn f(x). In our setting there no longer are "good" or "malicious" Provers, but only RATIONAL ones. In essence, (1) the Verifier gives the Prover a reward in [0,1] determined by the transcript of their interaction; (2) the Prover wishes to maximize his expected reward; and (3) the reward is maximized only if the verifier correctly learns f(x).

Rational proofs are as powerful as interactive ones, but can be amazingly more efficient in the amount of communication involved: that is, the number of bits exchanged and the number of rounds of interaction. Joint work with Pablo Azar.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, May 4, 2012 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, 32-G449, Patil/Kiva **

*Title*: **Illegitimi non carborundum **

*Speaker*: **Ronald Rivest, CSAIL, MIT **

*Abstract*:

Motivated by the increasing sophistication of recent Advanced Persistent Threats (APTs), we introduce the game of "stealthy takeovers" (aka "FlipIt"). FlipIt is a two-player game (between an attacker and a defender) where either player can move at any given time, but cannot determine the state of the game until he moves himself. This stealthy game models various security situations in which a resource can be "silently corrupted" by an adversary, but then "disinfected" later by the defender. Interesting features of this game include:

* Aggressive play by one player can motivate the other player to drop out (not play at all).
* Optimal play by the defender generally requires tolerance of some amount of corruption by the adversary.

We analyze several variants of the game, and show applications of these ideas in modeling various computer security scenarios, including password reset, cloud audit, and machine patching. (The work discussed here is joint with Marten van Dijk, Ari Juels, and Alina Oprea of RSA Labs.)
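A rough feel for these dynamics can be had from a toy simulation under illustrative assumptions (not the talk's model): both players move at memoryless random times, neither observes the state between its own moves, and whoever moved last controls the resource.

```python
import random

# Toy FlipIt-style simulation; rates and the control metric are
# hypothetical parameters chosen for illustration.

def simulate(attacker_rate, defender_rate, horizon=10_000.0, seed=1):
    rng = random.Random(seed)
    events = []
    for who, rate in (("attacker", attacker_rate), ("defender", defender_rate)):
        t = 0.0
        while True:
            t += rng.expovariate(rate)  # exponential inter-move times
            if t >= horizon:
                break
            events.append((t, who))
    events.sort()
    owner, prev_t, attacker_time = "defender", 0.0, 0.0
    for t, who in events:
        if owner == "attacker":
            attacker_time += t - prev_t
        owner, prev_t = who, t
    if owner == "attacker":
        attacker_time += horizon - prev_t
    return attacker_time / horizon  # fraction of time the attacker controls

# A defender who moves faster squeezes the attacker's control fraction,
# at the cost of paying for more moves.
assert simulate(1.0, 4.0) < simulate(1.0, 1.0)
```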

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, April 27, 2012 **

*Time*: **10:30 am - 12:00 pm **

*Note Different Place*: **32 Vassar Street, 32-D463, Star Conference room**

*Title*: **Fast Reductions from RAMs to Delegatable Succinct Constraint Satisfaction Problems **

*Speaker*: **Alessandro Chiesa, CSAIL, MIT **

*Abstract*:

Succinct arguments for NP are proof systems that allow a weak verifier to retroactively check computation done by a more powerful prover. These protocols prove membership in languages (consisting of succinctly-represented very large constraint satisfaction problems) that, alas, are unnatural in the sense that the problems that arise in practice are not in such form.

For general computation tasks, the most natural and efficient representation is typically as random-access machine (RAM) algorithms, because such a representation can be obtained very efficiently by applying a compiler to code written in a high-level programming language. We thus study efficient reductions from RAM to other problem representations for which succinct arguments are known. Specifically, we construct reductions from the correctness of computation of a $T$-step non-deterministic random-access machine to: 1. (succinct) circuit satisfiability with $O(\log T)$ overhead, and 2. (succinct) algebraic constraint satisfaction with $O(\log^{2} T)$ overhead.

On the latter problem representation, the best known Probabilistically Checkable Proofs can be directly invoked. Our constructions are explicit and do not hide large constants.

To attain these, we develop a set of tools (both unconditional and leveraging computational assumptions) for generically and efficiently structuring and arithmetizing the computation of random-access machines.

http://eprint.iacr.org/2012/071

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, April 20, 2012 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva**

*Title*: **Multiparty Computation Secure Against Continual Memory Leakage **

*Speaker*: **Elette Boyle, CSAIL, MIT **

*Abstract*:

We consider multiparty computation (MPC) within a setting where a malicious adversary may corrupt some parties and *leak* information about the secret state of all honest parties. Leakage queries are chosen adaptively, throughout the protocol execution. We show two main results.

1) We construct an MPC protocol secure against continual memory leakage, with a (necessary) one-time leak-free preprocessing phase in which the players' inputs are shared. More specifically, we prove that any malicious adversary who corrupts a (1-\epsilon) fraction of all parties (for any constant \epsilon > 0) and can continuously leak information about the secret state of honest parties for an unbounded number of executions of the MPC protocol learns nothing beyond the evaluated function values, under the classical simulation-based security definitions. Our construction is based on a number of new primitives, which can be instantiated under DDH and the existence of FHE. This result is joint with Goldwasser, Jain, and Kalai.

2) We construct a UC-secure protocol for computing any efficient function f, without any leak-free stage, with the weaker guarantee that an adversary who corrupts any subset of parties and leaks L bits on the secret states of honest parties learns nothing beyond the function output and L bits of information about the inputs. The construction relies on the linear assumption over bilinear groups. This result is joint with Goldwasser, Kalai, Jain, Garg and Sahai.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, December 16, 2011 **

*Time*: **9:30 am - 11:00 am **

*Place*: **32 Vassar Street, G449 Patil/Kiva**

*Title*: **Secret, Incoercible and Deniable Bidding Countering Collusion in Auctions **

*Speaker*: **Michael Rabin, SEAS, Harvard University**

*Abstract*:

Recent, highly efficient zero-knowledge proofs have demonstrated that an auctioneer can receive bids, determine winners, and later prove correctness of announced auction results, without revealing any bid values. New work, done with Silvio Micali, allows bidders to submit bids in an incoercible and deniable manner. In certain auctions this combined shield of complete secrecy frustrates collusion of bidders against the auctioneer/seller.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, December 9, 2011 **

*Time*: **9:30 am - 11:00 am **

*Place*: **32 Vassar Street, G449 Patil/Kiva**

*Title*: **Semantic Security for the Wiretap Channel **

*Speaker*: **Stefano Tessaro, UC San Diego**

*Abstract*:

The wiretap channel is a setting where one aims to get information-theoretic privacy based only on the assumption that the channel from sender to receiver is noisier than the one from sender to adversary. Papers on this topic over the past thirty years have sought only weak security, yet still failed to reach the ultimate goal of a polynomial-time scheme of optimal rate with proven privacy over the adversary's channel and proven decodability over the receiver's channel.

We not only achieve this goal but do so with strong security. We define and prove equivalent two strong privacy metrics, one based on semantic security and the other on entropy. We then provide a new scheme that achieves these strong security goals in polynomial time with optimal rate, thereby not only solving the central open problem in this area but delivering even more in terms of security.

Joint work with Mihir Bellare and Alexander Vardy.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Thursday, December 8, 2011 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva**

*Title*: **The Equivalence of the Random Oracle Model and the Ideal Cipher Model, Revisited **

*Speaker*: **Stefano Tessaro, UC San Diego**

*Abstract*:

We consider the cryptographic problem of constructing an invertible random permutation from a public random function (i.e., which can be evaluated by the adversary). This goal is formalized by the notion of indifferentiability of Maurer et al. (TCC 2004). This is the natural extension to the public setting of the well-studied problem of building random permutations from random functions, which was first solved by Luby and Rackoff using the Feistel construction. The most important implication of such a construction is the equivalence of the random oracle model and the ideal cipher model.

Coron et al. (CRYPTO 2008) gave a rather involved proof that the six-round Feistel construction with independent random round functions is indifferentiable from an invertible random permutation. Also, it is known that fewer than six rounds do not suffice for indifferentiability. The first contribution (and starting point) of our paper is a concrete distinguishing attack which shows that the indifferentiability proof of Coron et al. is not correct. In addition, we provide supporting evidence that an indifferentiability proof for the six-round Feistel construction may be very hard to find.

To overcome this gap, our main contribution is a proof that the Feistel construction with fourteen rounds is indifferentiable from an invertible random permutation. Joint work with Thomas Holenstein and Robin Kuenzler (STOC 2011).
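For reference, the Feistel construction at issue builds an invertible permutation out of non-invertible round functions. A minimal sketch, with SHA-256 standing in for the independent public random round functions (an illustrative assumption only) and the round count echoing the fourteen-round result above:

```python
import hashlib

ROUNDS = 14  # echoes the indifferentiability bound discussed above

def round_fn(i: int, half: bytes) -> bytes:
    # stand-in for an independent public random function per round
    return hashlib.sha256(bytes([i]) + half).digest()[:len(half)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel(block: bytes) -> bytes:
    L, R = block[:len(block) // 2], block[len(block) // 2:]
    for i in range(ROUNDS):
        L, R = R, xor(L, round_fn(i, R))  # one round: (L, R) -> (R, L xor F_i(R))
    return L + R

def feistel_inv(block: bytes) -> bytes:
    L, R = block[:len(block) // 2], block[len(block) // 2:]
    for i in reversed(range(ROUNDS)):
        L, R = xor(R, round_fn(i, L)), L  # undo one round
    return L + R

msg = b"0123456789abcdef"
assert feistel_inv(feistel(msg)) == msg  # invertible, as a permutation must be
```

Invertibility holds regardless of the round functions, which is exactly why Feistel is the natural route from a random function to a random permutation; the subtle part, addressed in the talk, is how many rounds are needed for indifferentiability.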

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Thursday, November 17, 2011 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G575, 32 Vassar Street**

*Title*: **Efficient Chosen-Ciphertext Security via Extractable Hash Proofs **

*Speaker*: **Hoeteck Wee, George Washington University**

*Abstract*:

I will present a (not-so-recent) framework of extractable hash proofs, and how it yields efficient CCA-secure encryption schemes. If time permits, I will also outline a new connection between KDM-security and smooth projective hashing.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, November 4, 2011 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva**

*Title*: **How to compute group statistics while preserving individual privacy **

*Speaker*: **Eleanor Rieffel, Senior Research Scientist, FXPAL **

*Abstract*:

As sensors become ever more prevalent, more and more information will be collected about each of us. A long-term research question is how best to support beneficial uses while preserving individual privacy. Aggregate statistics are sufficient for many beneficial uses in public health and medical research, smart metering, user analysis, and other areas.

We introduce "aggregator oblivious" protocols for time series data in which an individual has full access to her own data, a third party processes the data without learning anything about the data values, and analysts learn only statistical information over the whole group at each time step. We describe two families of non-interactive aggregator oblivious protocols. We also show how these schemes can be combined with differential privacy mechanisms to obtain stronger privacy guarantees, and how encryption makes it possible to obtain a given level of differential privacy with less noise.

Joint work with: Jacob Biehl, Hubert Chan, Richard Chow, Adam J. Lee, Elaine Shi, Dawn Song, William van Melle
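A standard toy realization of aggregator obliviousness, shown here only to illustrate the interface (the talk's non-interactive protocols are more sophisticated, derive fresh per-time-step masks, and add differential-privacy noise), gives users additive masks that sum to zero so the aggregator can recover only the group sum:

```python
import random

P = 2**61 - 1  # a prime modulus (toy parameter)

def deal_masks(n, seed=42):
    """Trusted setup: masks s_1..s_n with sum(s_i) = 0 (mod P)."""
    rng = random.Random(seed)
    masks = [rng.randrange(P) for _ in range(n - 1)]
    masks.append((-sum(masks)) % P)
    return masks

def blind(x, s):
    return (x + s) % P  # each user publishes only her blinded value

def aggregate(blinded):
    return sum(blinded) % P  # masks cancel, leaving the group sum

values = [5, 17, 3, 100]
masks = deal_masks(len(values))
assert sum(masks) % P == 0
assert aggregate([blind(x, s) for x, s in zip(values, masks)]) == sum(values)
```

Each individual blinded value is uniformly distributed on its own, so the aggregator learns a statistic over the whole group at each time step but nothing about any single user's value.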

**MIT TOC/CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, October 14, 2011 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva**

*Title*: **Limits of Provable Security from Standard Assumptions**

*Speaker*: **Rafael Pass, Cornell University **

*Abstract*:

We show that the security of some well-known cryptographic protocols, primitives and assumptions (e.g., the Schnorr identification scheme, commitments secure under adaptive selective decommitment, the one-more discrete logarithm assumption) cannot be based on any standard intractability assumption using a Turing (i.e., black-box) reduction. This result holds even if the protocol makes non-black-box use of the underlying assumption.

**MIT TOC/CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, July 22, 2011 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva**

*Title*: **Efficient and Explicit Coding for Interactive Communication**

*Speaker*: **Amit Sahai, UCLA **

*Abstract*:

In this work, we study the fundamental problem of reliable interactive communication over a noisy channel. In a breakthrough sequence of papers published in 1992 and 1993, Schulman gave non-constructive proofs of the existence of general methods to emulate any two-party interactive protocol such that:

(1) the emulation protocol only takes a constant-factor longer than the original protocol, and

(2) if the emulation protocol is executed over any discrete memoryless noisy channel with constant capacity, then the probability that the emulation protocol fails to perfectly emulate the original protocol is exponentially small in the total length of the protocol.

Unfortunately, Schulman's emulation procedures either only work in a nonstandard model with a large amount of shared randomness, or are non-constructive in that they rely on the existence of absolute tree codes. The only known proofs of the existence of absolute tree codes are non-constructive, and finding an explicit construction remains an important open problem. Indeed, randomly generated tree codes are not absolute tree codes with overwhelming probability.

In this work, we revisit the problem of reliable interactive communication, and obtain the first fully explicit (randomized) efficient constant-rate emulation procedure for reliable interactive communication. Our protocol works for any discrete memoryless noisy channel with constant capacity, and our protocol's probability of failure is exponentially small in the total length of the protocol. We accomplish this goal by obtaining the following results:

* We introduce a new notion of goodness for a tree code, and define the notion of a potent tree code. We believe that this notion is of independent interest.

* We prove the correctness of an explicit emulation procedure based on any potent tree code. (This replaces the need for absolute tree codes in the work of Schulman.)

* We show that a randomly generated tree code (with suitable constant alphabet size) is an efficiently decodable potent tree code with overwhelming probability. Furthermore, we are able to partially derandomize this result by means of epsilon-biased distributions using only $O(n)$ random bits, where $n$ is the depth of the tree.

These (derandomized) results allow us to obtain our main result. Our results also extend to the case of interactive multi-party communication.

This talk is based on joint work with Ran Gelles, and on joint work with Ran Gelles and Ankur Moitra.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, July 1, 2011 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva**

*Title*: **New Cryptanalytic Attacks on Secret Key Cryptosystems**

*Speaker*: **Adi Shamir, The Weizmann Institute of Science **

*Abstract*:

Over the last couple of years, many cryptographic schemes which had resisted attacks for many years were finally breached. Examples include the full AES (developed in 1997), the full Russian standard GOST (published in 1994), the full 3GPP cellular telephony scheme Kasumi (developed in the early 2000s), etc. In this talk I will describe some of the new techniques used in these attacks, at a level which does not require any prior knowledge of cryptanalytic techniques.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, May 13, 2011 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva**

*Title*: **Truth or Envy? **

*Speaker*: **Jason Hartline, Northwestern University **

*Abstract*:

We consider (profit maximizing) mechanism design in general settings that include, e.g., position auctions (for selling advertisements on Internet search engines) and single-minded combinatorial auctions. We analyze optimal envy-free pricing in these settings and give economic justification for using optimal envy-free revenue as a benchmark for prior-free mechanism design and analysis. In addition to its economic justification, the envy-free revenue has a very simple structure and a strong connection to incentive compatibility (a.k.a., truthfulness) constraints in mechanism design.

As a first example of the connection between envy-free pricing and incentive compatible mechanism design, because the structures of optimal pricings and optimal mechanisms are similar, we give a mechanism design reduction from structurally rich environments including position auctions (and environments with a matroid structure) to multi-unit auction environments (i.e., auctioning $k$ identical units to $n$ unit-demand agents). For instance, via this reduction we are able to extend all prior-free digital good auctions to position auctions with a factor of two of loss in the approximation factor.

As a second example we extend a variant of the random sampling auction to get constant approximations for general downward closed (i.e., if we can serve a given set of agents, we can serve any subset) settings. Joint work with Qiqi Yan.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, April 29, 2011 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva**

*Title*: **Homomorphic Signatures for Polynomial Functions**

*Speaker*: **David Freeman, Stanford University **

*Abstract*:

We present a homomorphic signature scheme that is capable of evaluating multivariate polynomials on signed data. Given the public key and a signed data set, there is an efficient algorithm to produce a signature on the mean, standard deviation, and other statistics of the signed data. Previous systems for computing on signed data could only handle linear operations. For polynomials of constant degree, the length of a derived signature only depends logarithmically on the size of the data set.

Our system uses ideal lattices in a way that is a "signature analogue" of Gentry's fully homomorphic encryption. Security is based on hard problems on ideal lattices similar to those in Gentry's system. This is joint work with Dan Boneh, to appear in Eurocrypt 2011.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, April 22, 2011 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva**

*Title*: **Leakage-Resilient Zero Knowledge and its Applications**

*Speaker*: **Abhishek Jain, UCLA**

*Abstract*:

We initiate a study of zero knowledge proof systems in the presence of side-channel attacks. Specifically, we consider a setting where a cheating verifier is allowed to obtain arbitrary bounded leakage on the entire state (including the witness and the random coins) of the prover during the entire protocol execution. We formalize a meaningful definition of leakage-resilient zero knowledge (LR-ZK) proof system, that intuitively guarantees that "the protocol does not yield anything beyond the validity of the statement and the leakage obtained by the verifier."

We give a construction of LR-ZK interactive proof system based on general assumptions. To the best of our knowledge, this is the first instance of a cryptographic protocol where the adversary is allowed to perform leakage attacks during the protocol execution (in contrast, prior work only focused on leakage-resilient primitives such as encryption, or leakage-resilient devices, or leakage prior to the protocol execution). Next, we give an LR-NIZK argument system based on standard assumptions.

Finally, we demonstrate the usefulness of our notions by giving two concrete applications: (1) We show how to do UC secure computation in the "leaky token model" (i.e., where an adversary in possession of a token can obtain arbitrary bounded leakage on the entire state of the token) based on standard assumptions. (2) Next, we give a new construction of fully leakage-resilient signatures in the bounded leakage model (as well as the continual leakage model) based on standard assumptions. In contrast to the recent constructions of fully leakage resilient signatures, our scheme is also secure in the noisy leakage model.

Joint work with Sanjam Garg and Amit Sahai.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, April 1, 2011 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **How to Garble Arithmetic Circuits **

*Speaker*: **Yuval Ishai, Technion**

*Abstract*:

Yao's garbled circuit construction transforms a boolean circuit C : {0,1}^n --> {0,1}^m into a "garbled circuit" C' along with n pairs of k-bit keys, one for each input bit, such that C' together with the n keys corresponding to an input x reveal C(x) and no additional information about x. The garbled circuit construction is a central tool for constant-round secure computation and has several other applications.
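As a toy illustration (not from the talk), the per-gate core of Yao's construction can be sketched in a few lines of Python: two random keys per wire, and a four-row table that encrypts the output-wire key under each pair of input-wire keys. All names and the hash-based "encryption" below are illustrative simplifications, not the construction's actual primitives.

```python
import hashlib, os, random

KEY_LEN = 16              # bytes per wire key (an illustrative choice)
TAG = b"\x00" * 16        # redundancy marking a successful decryption

def H(ka, kb):
    # Hash-derived pad standing in for the PRF-based encryption in Yao's scheme.
    return hashlib.sha256(ka + kb).digest()

def garble_gate(gate):
    """Garble one boolean gate: pick two random keys per wire (inputs a, b,
    output c) and encrypt each output key under the matching input-key pair."""
    keys = {w: (os.urandom(KEY_LEN), os.urandom(KEY_LEN)) for w in "abc"}
    table = []
    for x in (0, 1):
        for y in (0, 1):
            msg = keys["c"][gate(x, y)] + TAG
            pad = H(keys["a"][x], keys["b"][y])
            table.append(bytes(p ^ m for p, m in zip(pad, msg)))
    random.shuffle(table)  # hide which row corresponds to which input pair
    return keys, table

def eval_gate(table, ka, kb):
    """With one key per input wire, exactly one row decrypts to a valid tag."""
    pad = H(ka, kb)
    for ct in table:
        pt = bytes(p ^ c for p, c in zip(pad, ct))
        if pt[KEY_LEN:] == TAG:
            return pt[:KEY_LEN]
    raise ValueError("no row decrypted")

keys, table = garble_gate(lambda x, y: x & y)  # an AND gate
out = eval_gate(table, keys["a"][1], keys["b"][1])
assert out == keys["c"][1]  # evaluator learns only the key encoding AND(1,1)
```

Holding only the two keys for one input pair, the evaluator recovers exactly one output-wire key and learns nothing about the other three rows, which is the source of the "no additional information about x" guarantee.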

Motivated by these applications, we present the first arithmetic analogue of Yao's construction. Our construction transforms an arithmetic circuit C : Z^n --> Z^m over integers from a bounded (but possibly exponential) range into a garbled circuit C' along with n affine functions L_i : Z --> Z^k such that C' together with the n integer vectors L_i(x_i) reveal C(x) and no additional information about x. The security of our construction relies on the intractability of the decisional variant of the learning with errors (LWE) problem.

Joint work with Benny Applebaum and Eyal Kushilevitz

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Thursday, March 31, 2011 **

*Time*: **4:00 - 5:00 pm **

*Place*: **32-G575 **

*Title*: **Homomorphic Encryption: from Private-Key to Public-Key **

*Speaker*: **Ron Rothblum, Weizmann Institute**

*Abstract*:

We show how to transform any additively homomorphic private-key encryption scheme that is compact into a public-key encryption scheme. By compact we mean that the length of a homomorphically generated encryption is independent of the number of ciphertexts from which it was created. We place no other requirements on the distribution of homomorphically generated encryptions (in particular, we do not require them to be distributed like real ciphertexts).

Our resulting public-key scheme is homomorphic in the following sense. If the private-key scheme is $(i+1)$-hop homomorphic with respect to some set of operations, then the public-key scheme we construct is $i$-hop homomorphic with respect to the same set of operations.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, January 21, 2011 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **Privacy, incentives, and truthfulness **

*Speaker*: **David Xiao, LRI Université Paris-Sud and LIAFA, Université Paris **

*Abstract*:

Privacy has become an ever more pressing concern as we conduct more and more of our lives in public forums such as the Internet. One privacy question that has received much study is how a database curator may output "sanitized" data that does not reveal too much information about any particular individual. This criterion has been formalized as differential privacy, proposed originally by Dwork et al. (TCC '06 and ICALP '06), which captures the idea that "the presence or absence of any individual's data does not change the distribution of the sanitized data by much". This guarantee has been interpreted to mean that individuals should be comfortable revealing their information, since their participation barely changes the output.
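As a concrete illustration of this guarantee (not taken from the abstract), the classical Laplace mechanism answers a counting query in an epsilon-differentially private way; the function and parameter names below are illustrative:

```python
import math, random

def laplace_count(db, predicate, epsilon):
    """Answer a counting query with Laplace(1/epsilon) noise.  Adding or
    removing one individual changes the true count by at most 1, so the
    output density at any point shifts by a factor of at most e^epsilon."""
    true_count = sum(1 for row in db if predicate(row))
    u = random.random() - 0.5                  # uniform on [-0.5, 0.5)
    scale = 1.0 / epsilon
    # Standard inverse-CDF sampling of the Laplace distribution.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Example: a noisy count of database rows below a threshold.
db = list(range(100))
noisy = laplace_count(db, lambda r: r < 40, epsilon=1.0)
```

Because any one individual changes the true count by at most 1, no single participant's presence or absence moves the output distribution by more than an e^epsilon factor, which is exactly the informal guarantee quoted above.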

In this talk, we argue that the interpretation is incomplete because unless participation in the database somehow explicitly benefits the individuals, individuals will always refuse to participate truthfully, regardless of whether the sanitization mechanism is differentially private or not.

We therefore advocate studying differential privacy in conjunction with the notion of truthfulness from game theory, which says that a mechanism should be designed so that it is in the individuals' own interest to give their true information. We show that there exist games for which differentially private mechanisms, in particular the exponential mechanism of McSherry and Talwar (FOCS '07), do not motivate the individuals to participate truthfully. On the positive side, we show that a wide class of games do admit differentially private, truthful, and efficient mechanisms.

Finally, we explore the possibility of tradeoffs between utility and privacy. This is because individuals may be willing to give up some privacy if they receive enough utility from a game, and vice versa. We show that, under a natural measure of information cost, even the release of a differentially private histogram may reveal so much information that individuals would rather suffer the consequences of lying than have their information published.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, December 17, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **Separating Succinct Non-Interactive Arguments From All Falsifiable Assumptions **

*Speaker*: **Daniel Wichs, NYU **

*Abstract*:

In this paper, we study succinct computationally sound proofs (arguments) for NP, whose communication complexity is polylogarithmic in the instance and witness sizes. The seminal works of Kilian '92 and Micali '94 show that such arguments can be constructed under standard cryptographic hardness assumptions with four rounds of interaction, and that they can be made non-interactive in the random-oracle model. The latter construction also gives us some evidence that succinct non-interactive arguments (SNARGs) may exist in the standard model with a common reference string (CRS), by replacing the oracle with a sufficiently complicated hash function whose description goes in the CRS. However, we currently do not know of any construction of SNARGs with a formal proof of security under any simple cryptographic assumption.

In this work, we give a broad black-box separation result, showing that black-box reductions cannot be used to prove the security of any SNARG construction based on any falsifiable cryptographic assumption. This includes essentially all common assumptions used in cryptography (one-way functions, trapdoor permutations, DDH, RSA, LWE etc.). More generally, we say that an assumption is falsifiable if it can be modeled as an interactive game between an adversary and an efficient challenger that can efficiently decide if the adversary won the game. This is similar, in spirit, to the notion of falsifiability of Naor '03, and captures the fact that we can efficiently check if an adversarial strategy breaks the assumption.

Our separation result also extends to designated verifier SNARGs, where the verifier needs a trapdoor associated with the CRS to verify arguments, and slightly succinct SNARGs, whose size is only required to be sublinear in the statement and witness size.

Joint work with Craig Gentry. See: http://eprint.iacr.org/2010/610

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, December 10, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **Efficient Block-Wise PKE with Key Dependent Message Security under Flexible SLP-Queries **

*Speaker*: **Tal Malkin, Columbia University **

*Abstract*:

Key Dependent Message (KDM) secure encryption is a new research area of cryptography that has been extensively studied in the past few years. A KDM-secure scheme w.r.t. a function set F provides security even if one encrypts a key-dependent message f(sk) for any f in F. The main result of this work is the construction of an efficient public-key encryption scheme which is KDM secure with respect to quite a large function set F. Proposing such a scheme is significant because, although KDM security has practical applications, all previous works in the standard model are either feasibility results or for a small set of functions such as linear functions.

Efficiency of our scheme is dramatically improved compared to the previous feasibility results, and our function set is significantly larger than the set of linear functions. Our function set contains all rational functions whose denominator and numerator have polynomially bounded degree and which can be computed by a Straight Line Program of polynomial length. Joint work with Isamu Teranishi and Moti Yung.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, December 3, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **How to Leak on Key Updates **

*Speaker*: **Allison Lewko, University of Texas at Austin**

*Abstract*:

In the continual memory leakage model, security against attackers who can repeatedly obtain leakage is achieved by periodically updating the secret key. This is an appealing model which captures a wide class of side-channel attacks, but all previous constructions in this model provide only a very minimal amount of leakage tolerance during secret key updates. Since key updates may happen frequently, improving security guarantees against attackers who obtain leakage during these updates is an important problem.

In this talk, we will discuss the particular challenges faced in tolerating leakage during updates and introduce new techniques for overcoming these obstacles. We present the first cryptographic primitives which are secure against a super-logarithmic amount of leakage during secret key updates: we provide signature and public key encryption schemes in the standard model which can tolerate a constant fraction of the secret key and update randomness to be leaked during updates.

Our signature scheme also allows us to leak a constant fraction of the entire secret state during signing. Before this work, it was unknown how to tolerate super-logarithmic leakage during updates even in the random oracle model. We rely on subgroup decision assumptions in composite order bilinear groups.

Joint work with Mark Lewko and Brent Waters.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, November 19, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **Black-Box, Round-Efficient Secure Computation via Non-Malleability Amplification **

*Speaker*: **Hoeteck Wee, Queens College, CUNY **

*Abstract*:

We present round-efficient protocols for secure multi-party computation with a dishonest majority that rely on *black-box* access to the underlying primitives. Our main contributions are:

-- an O(log^* n)-round protocol that relies on black-box access to dense cryptosystems, homomorphic encryption schemes, or lossy encryption schemes. This improves upon the recent O(1)^{log^* n}-round protocol of Lin, Pass and Venkitasubramaniam (STOC 2009) that relies on non-black-box access to a smaller class of primitives.

-- an O(1)-round protocol requiring, in addition, black-box access to a one-way function with sub-exponential hardness, improving upon the recent work of Pass and Wee (Eurocrypt 2010).

These are the first black-box constructions for secure computation with sublinear round complexity. Our constructions build on and improve upon the work of Lin and Pass (STOC 2009) on non-malleability amplification, as well as that of Ishai et al. (STOC 2006) on black-box secure computation.
**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, November 12, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **Security Problems in India's Electronic Voting System**

*Speakers*: **Hari K. Prasad and J. Alex Halderman, UMich **

*Abstract*:

India uses paperless electronic voting machines (EVMs) nationwide. These machines use a simple embedded system architecture that is considerably different from the complex voting machines typically used in the U.S. and Europe, where almost all prior research has focused. Despite suspicions of fraud, Indian authorities have never permitted a serious, independent review of the machines' security.

Hyderabad-based engineer Hari Prasad spent a year trying to convince election officials to complete such a review, but they insisted that the government-made machines were "perfect," "infallible," and "tamperproof." Then, in February of this year, an anonymous source offered him access to one of the machines to study. He assembled an international research team, including J. Alex Halderman from the University of Michigan and Rop Gonggrijp from the Netherlands. Together, they discovered that, far from being tamper-proof, the machines suffer from serious weaknesses that could be exploited to alter national election results.

Months of hot debate about these findings have produced a growing consensus that India's electronic voting machines should be scrapped. There have also been more disturbing developments: Prasad was arrested and jailed in August by authorities demanding to know the identity of the anonymous source. He has since been released on bail, and he is visiting the U.S. to accept the Electronic Frontier Foundation's Pioneer Award for his work.

In this talk, Prasad and Halderman will describe the design and motivations behind India's electronic voting system, the technical problems their study demonstrated, the political circumstances behind Prasad's arrest, and the implications of the machines' security weaknesses for voting technology in India and beyond. They'll also discuss some of the formidable practical challenges that India and many other democracies face in conducting elections. Designing voting systems that provide transparency and security under these constraints presents many open problems.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, November 5, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **A Framework for Efficient Signatures, Ring Signatures and Identity Based Encryption in the Standard Model **

*Speaker*: **Zvika Brakerski, Weizmann Institute **

*Abstract*:

In this work, we present a generic framework for constructing efficient signature schemes, ring signature schemes, and identity based encryption schemes, all in the standard model (without relying on random oracles). We start by abstracting the recent work of Hohenberger and Waters (Crypto 2009), and specifically their "prefix method". We show a transformation taking a signature scheme with a very weak security guarantee (a notion that we call a-priori-message unforgeability under static chosen message attack) and producing a fully secure signature scheme (i.e., existentially unforgeable under adaptive chosen message attack).

Our transformation uses the notion of chameleon hash functions, defined by Krawczyk and Rabin (NDSS 2000), and the "prefix method". Constructing such weakly secure schemes seems to be significantly easier than constructing fully secure ones, and we present *simple* constructions based on the RSA assumption, the *short integer solution* (SIS) assumption, and the *computational Diffie-Hellman* (CDH) assumption over bilinear groups.

Next, we observe that this general transformation also applies to the regime of ring signatures. Using this observation, we construct new (provably secure) ring signature schemes: one is based on the *short integer solution* (SIS) assumption, and the other is based on the CDH assumption over bilinear groups.

As a building block for these constructions, we define a primitive that we call *ring trapdoor functions*. We show that ring trapdoor functions imply ring signatures under a weak definition, which enables us to apply our transformation to achieve full security.

Finally, we show a connection between ring signature schemes and identity based encryption (IBE) schemes. Using this connection, and using our new constructions of ring signature schemes, we obtain two IBE schemes: The first is based on the *learning with errors* (LWE) assumption, and is similar to the recently introduced IBE scheme of Cash-Hofheinz-Kiltz-Peikert; The second is based on the $d$-linear assumption over bilinear groups. Joint work with Yael Tauman Kalai.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, October 29, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **A Fully Transparent Optical Scan Voting System **

*Speaker*: **Hovav Shacham, UCSD **

*Abstract*:

Existing optical scan voting systems depend on the integrity of the scanner. If a compromised -- or merely faulty -- scanner reports incorrect results, there is no ready mechanism for detecting errors. While methods exist for ameliorating these risks, none of them are entirely satisfactory. We propose an alternative: a radically open system in which any observer can simultaneously and independently count the ballots for himself.

Our approach, called OpenScan, combines digital video recordings of ballot sheet feeding with computer vision techniques to allow any observer with a video camera to obtain a series of ballot images that he can then process with ordinary optical scan counting software. Preliminary experimental results indicate that OpenScan produces accurate results at a manageable cost of around $1000 in hardware plus $0.0010 per ballot counted. Joint work with Kai Wang, Eric Rescorla, and Serge Belongie.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, October 22, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **Perfectly Resilient Mechanisms **

*Speaker*: **Jing Chen, CSAIL, MIT **

*Abstract*:

We call a mechanism perfectly resilient if it is totally immune to the problems of collusion, complexity, and privacy. In a setting of complete information and quasi-linear utilities, we construct such a mechanism for generating virtually maximum revenue. Our mechanism has a unique subgame-perfect equilibrium, requires the players to communicate the minimum number of bits, keeps the players' preferences private to the maximum possible extent, and works no matter who colludes with whom (as long as not all players belong to the same collusive set) and no matter how well collusive players may coordinate their actions.

Joint work with Avinatan Hassidim and Silvio Micali.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, October 15, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **NOTE: Star Conference Room, D-463 **

*Title*: **Adaptive Hardness and Concurrent Security in the Plain Model from Standard Assumptions **

*Speaker*: **Rachel Lin, Cornell University**

*Abstract*:

We construct the first general secure computation protocols that require no trusted infrastructure other than authenticated communication, and that satisfy a meaningful notion of security that is preserved under universal composition, assuming only the existence of enhanced trapdoor permutations. The notion of security fits within a generalization of the angel-based framework of Prabhakaran and Sahai (STOC '04) and implies super-polynomial time simulation security. Security notions of this kind are currently known to be realizable only under strong and specific hardness assumptions.

A key element in our construction is a commitment scheme that satisfies a new and strong notion of security. The notion, security against chosen-commitment attacks (CCA security), means that security holds even if the attacker has access to an extraction oracle that gives the adversary decommitment information for commitments of the adversary's choice. This notion is stronger than concurrent non-malleability and is of independent interest. We construct CCA-secure commitments based on standard one-way functions, and with no trusted set-up. To the best of our knowledge, this provides the first construction of a natural cryptographic primitive requiring adaptive hardness from standard hardness assumptions, using no trusted set-up or public keys. This is joint work with Ran Canetti and Rafael Pass.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, October 8, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **Boosting and Differential Privacy **

*Speaker*: **Guy Rothblum, Princeton University **

*Abstract*:

We consider a trusted curator who maintains a database of sensitive information about a population of participants, and wants to release privacy-preserving answers to statistical queries about the population. A successful research program has, in the last few years, formulated the rigorous privacy guarantee of differential privacy [Dwork, McSherry, Nissim and Smith '06] and provided both feasibility results and lower bounds for differentially private data analysis.

We introduce new tools for designing differentially private algorithms:

1. A new boosting methodology that lets us improve the accuracy guarantees of weak differentially private algorithms to obtain stronger, more accurate algorithms.

2. An improved understanding of composition for differentially private analyses. We show that privacy guarantees degrade more slowly under composition than was previously known.

We use these tools to show that, computational complexity aside, differential privacy permits surprisingly rich statistical data analyses. Time permitting, I will also describe computational hardness results and map the boundary between feasibility and infeasibility for privacy-preserving data analysis.

Based on joint work with Cynthia Dwork and Salil Vadhan.

**MIT CIS SEMINAR **

**Open To The Public **

*Date*: **Friday, October 1, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **Fully Leakage-Resilient Signatures **

*Speaker*: **Elette Boyle, MIT **

*Abstract*:

A signature scheme is *fully leakage resilient* (Katz and Vaikuntanathan, ASIACRYPT '09) if it is existentially unforgeable under an adaptive chosen-message attack even in a setting where an adversary may obtain bounded (yet arbitrary) leakage information on *all intermediate values that are used throughout the lifetime of the system*. This is a strong and meaningful notion of security that captures a wide range of side-channel attacks.

One of the main challenges in constructing fully leakage-resilient signature schemes is dealing with leakage that may depend on the random bits used by the signing algorithm, and constructions of such schemes are known only in the random-oracle model. Moreover, even in the random-oracle model, known schemes are only resilient to leakage of less than half the length of their signing key.

In this paper we construct the first fully leakage-resilient signature schemes without random oracles. We present a scheme that is resilient to any leakage of length $(1-o(1))L$ bits, where $L$ is the length of the signing key. Our approach relies on generic cryptographic primitives, and at the same time admits rather efficient instantiations based on specific number-theoretic assumptions. In addition, we show that our approach extends to the continual-leakage model, recently introduced by Dodis, Haralambiev, Lopez-Alt and Wichs (FOCS '10), and by Brakerski, Tauman Kalai, Katz and Vaikuntanathan (FOCS '10). In this model the signing key is allowed to be refreshed, while its corresponding verification key remains fixed, and the amount of leakage is assumed to be bounded only in between any two successive key refreshes.
**JOINT CIS/MICROSOFT TALK at MIT **

**Open To The Public **

*Date*: **Friday, August 13, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **On Strong Simulation and Composable Point Obfuscation**

*Speaker*: **Nir Bitansky, Tel-Aviv University **

*Abstract*:

The Virtual Black Box (VBB) property for program obfuscators provides a strong guarantee: Anything computable by an efficient adversary given the obfuscated program can also be computed by an efficient simulator with only oracle access to the program. However, we know how to achieve this notion only for very restricted classes of programs.

This work studies a simple relaxation of VBB: Allow the simulator unbounded computation time, while still allowing only polynomially many queries to the oracle. We then demonstrate the viability of this relaxed notion, which we call Virtual Grey Box (VGB), in the context of fully composable obfuscators for point programs: It is known that, w.r.t. VBB, if such obfuscators exist then there exist multi-bit point obfuscators (aka "digital lockers") and subsequently also very strong variants of encryption that are resilient to various attacks, such as key leakage and key-dependent messages.

However, no composable VBB-obfuscators for point programs have been shown. We show fully composable *VGB*-obfuscators for point programs under a strong variant of the Decisional Diffie-Hellman assumption. We show they suffice for the above applications and even for extensions to the public-key setting, as well as for encryption schemes with resistance to certain related-key attacks (RKA).

Joint work with Ran Canetti.
**JOINT CIS/MICROSOFT TALK at Microsoft, NE**

**Open To The Public **

*Date*: **Friday, July 23, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **Microsoft NE, Conf Room Nerd 1/1030 Crispus Attucks **

*Title*: **Universal One-Way Hash Functions via Inaccessible Entropy **

*Speaker*: **Hoeteck Wee, Queens College, CUNY **

*Abstract*:

This paper revisits the construction of Universally One-Way Hash Functions (UOWHFs) from any one-way function due to Rompel (STOC 1990). We give a simpler construction of UOWHFs which also obtains better efficiency and security. The construction exploits a strong connection to the recently introduced notion of *inaccessible entropy* (Haitner et al. STOC 2009).

With this perspective, we observe that a small tweak of any one-way function f is already a weak form of a UOWHF: Consider F(x, i) that outputs the i-bit long prefix of f(x). If F were a UOWHF then given a random x and i it would be hard to come up with x' \neq x such that F(x, i) = F(x', i). While this may not be the case, we show (rather easily) that it is hard to sample x' with almost full entropy among all the possible such values of x'. The rest of our construction simply amplifies and exploits this basic property.
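The prefix tweak described above can be made concrete with a small sketch. This is a toy illustration only: SHA-256 stands in (hypothetically) for an abstract one-way function f, and the function names are ours, not the paper's.

```python
import hashlib

def f(x: bytes) -> str:
    """Stand-in one-way function: SHA-256, rendered as a bit string."""
    digest = hashlib.sha256(x).digest()
    return ''.join(format(byte, '08b') for byte in digest)

def F(x: bytes, i: int) -> str:
    """The tweaked function from the abstract: the i-bit prefix of f(x)."""
    return f(x)[:i]

# A second preimage under F(., i) only has to match an i-bit prefix, so one
# always exists once i is short enough -- but sampling x' with almost full
# entropy among all such values is what the paper shows to be hard.
x = b"example input"
prefix = F(x, 16)
```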

With this and other recent works we have that the constructions of three fundamental cryptographic primitives (Pseudorandom Generators, Statistically Hiding Commitments and UOWHFs) out of one-way functions are to a large extent unified. In particular, all three constructions rely on and manipulate computational notions of entropy in similar ways. Pseudorandom Generators rely on the well-established notion of pseudoentropy, whereas Statistically Hiding Commitments and UOWHFs rely on the newer notion of inaccessible entropy. Joint work with Iftach Haitner, Thomas Holenstein, Omer Reingold and Salil Vadhan (Eurocrypt 2010).

**JOINT CIS/MICROSOFT TALK at MIT in 32-G575**

**Open To The Public **

*Date*: **Thursday, June 10, 2010 **

*Time*: **1:30 pm - 3:00 pm **

*Place*: *** 32-G575 Theory space, Stata * **

*Title*: **Multichannel protocols to prevent relay attacks**

*Speaker*: **Frank Stajano, University of Cambridge, UK and UMass Amherst **

*Abstract*:

A number of security systems, from Chip-and-PIN payment cards to contactless subway and train tokens, as well as secure localization systems, are vulnerable to relay attacks. Encrypting the communication between the honest endpoints does not protect against such attacks. The main solution that has been offered to date is distance bounding, in which a tightly timed exchange of challenges and responses persuades the verifier that the prover cannot be further away than a certain distance. This solution, however, still won't say whether the specific endpoint the verifier is talking to is the intended one or not--it will only tell the verifier whether the real prover is "nearby".

Are there any alternatives? We propose a more general paradigm based on multichannel protocols. Our class of protocols, of which distance bounding can be modelled as a special case, allows a precise answer to be given to the question of whether the unknown device in front of the potential victim is a relaying attacker or the device with which the victim intended to communicate. We discuss several instantiations of our solution and point out the extent to which all these countermeasures rely, often implicitly, on the alertness of an honest human taking part in the protocol.

Paper: Frank Stajano, Ford-Long Wong and Bruce Christianson "Multichannel protocols to prevent relay attacks" http://www.cl.cam.ac.uk/~fms27/papers/2009-StajanoWonChr-relay.pdf In Proceedings of Financial Cryptography 2010

**JOINT CIS/MICROSOFT TALK at MIT in 32-G575**

**Open To The Public **

*Date*: **Friday, June 4, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: *** 32-G575 Theory space, Stata * **

*Title*: **Physical Attacks in a Physical World **

*Speaker*: **Christof Paar, University of Bochum (Germany) and UMass Amherst **

*Abstract*:

Over the last few years we were able to break several real-world security systems using various flavors of physical attacks. In this talk we introduce some of the attacks and try to draw some meaningful conclusions.

About two years ago we were able to break KeeLoq, a 64-bit block cipher that is popular for remote keyless entry (RKE) systems. Even though the attack seems almost straightforward in hindsight, there were many practical and theoretical problems to overcome. Eventually, we became able to mount a cloning attack from a single power trace. We will summarize the evolution of the attack, which gives some general insight into physical attacks in practice.

The next case study deals with our attack against a widely used electronic payment system which is based on Mifare Classic cards. Our analysis found that the system contains numerous design flaws that go far beyond the use of an outdated cipher. Various attacks, including card cloning and recharging, are possible contactlessly within a few tens of milliseconds. Perhaps most devastatingly, a new all-you-can-eat attack by an anarchistic-leaning attacker (or blackmailer) could force the complete shutdown of the system.

Most recently we were able to attack a commercial smart card equipped with 3DES using side-channel techniques. Not only is the cipher cryptographically much stronger than the two 1980s ciphers mentioned above, but the card also has SCA countermeasures built in. However, careful analysis allowed us to recover the full 3DES key. In summary, it is safe to conclude that designing embedded hardware tokens that withstand determined attackers, at least under cost constraints, remains a formidable task. Our recommendation is that relying on token security alone is a risky undertaking, and that a robust system design is a necessity.

**JOINT CIS/MICROSOFT TALK at MIT**

**Open To The Public **

*Date*: **Friday, May 14, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G575 Theory space, Stata **

*Title*: **Fully Homomorphic Encryption over the Integers **

*Speaker*: **Vinod Vaikuntanathan, IBM **

*Abstract*:

We construct a simple fully homomorphic encryption scheme, using only elementary modular arithmetic. The security of our scheme relies on the hardness of the approximate integer greatest common divisors (gcd) problem -- namely, given a list of integers that are "near-multiples" of a hidden integer, output that hidden integer.
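The approximate-GCD problem underlying the scheme can be illustrated with a toy instance generator. The parameters below are hypothetical and far smaller than anything a real instantiation would use; the names are ours.

```python
import random

rng = random.Random(0)

# Hypothetical toy parameters; real instantiations use far larger sizes.
p = rng.getrandbits(64) | 1               # the hidden integer
NOISE_BITS = 8

def near_multiple(p: int) -> int:
    """Return q*p + r for random q and small noise r: a "near-multiple" of p."""
    q = rng.getrandbits(32)
    r = rng.getrandbits(NOISE_BITS)
    return q * p + r

samples = [near_multiple(p) for _ in range(5)]
# The approximate-GCD problem: given only `samples`, output `p`.
# (With r = 0 this would be solvable by an ordinary gcd computation.)
```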

Joint work with Marten van Dijk, Craig Gentry, and Shai Halevi.

**JOINT CIS/MICROSOFT TALK at Microsoft, NE**

**Open To The Public **

*Date*: **Friday, May 7, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **Microsoft NE, One Memorial Drive **

*Title*: **Cryptography Resilient to Continual Memory Leakage **

*Speaker*: **Zvika Brakerski, Weizmann **

*Abstract*:

In recent years, there has been a major effort to design cryptographic schemes that remain secure even if part of the secret key is leaked. This is due to a recent proliferation of side channel attacks which, through various physical means, can recover part of the secret key. We explore the possibility of achieving security even with continual leakage, i.e., even if some information is leaked each time the key is used.

We show how to securely update a secret key while information is leaked: We construct schemes that remain secure even if an attacker, {\em at each time period}, can probe the entire memory (containing a secret key) and ``leak'' up to a $1-\epsilon$ fraction of the secret key. The attacker may also probe the memory during the updates, and leak $O(\log k)$ bits, where $k$ is the security parameter (relying on subexponential hardness allows $k^\epsilon$ bits of leakage during each update process). All of the above is achieved without restricting the model as is done in previous works (e.g. by assuming that ``only computation leaks information'' [Micali-Reyzin, TCC04]).

Specifically, under the decisional linear assumption on bilinear groups (which allows for a leakage rate of $1/2-\epsilon$) or the symmetric external Diffie-Hellman assumption (which allows for a leakage rate of $1-\epsilon$), we achieve the above for public key encryption, identity-based encryption, and signature schemes. Prior to this work, it was not known how to construct public-key encryption schemes even in the more restricted model of [MR].

The main contributions of this work are (1) showing how to securely update a secret key while information is leaked (in the more general model) and (2) giving a public key encryption (and IBE) schemes that are resilient to continual leakage. Joint work with Yael Kalai, Jonathan Katz and Vinod Vaikuntanathan.

**JOINT CIS/MICROSOFT TALK at MIT**

**Open To The Public **

*Date*: **Friday, April 30, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **Privacy Amplification with Asymptotically Optimal Entropy Loss **

*Speaker*: **Bhavana Kanukurthi, BU **

*Abstract*:

We study the problem of ``privacy amplification'': key agreement between two parties who both know a weak secret w, such as a password. (Such a setting is ubiquitous on the internet, where passwords are the most commonly used security device.) We assume that the key agreement protocol is taking place in the presence of an active computationally unbounded adversary Eve. The adversary may have partial knowledge about w, so we assume only that w has some entropy from Eve's point of view. Thus, the goal of the protocol is to convert this non-uniform secret w into a uniformly distributed string R that is fully secret from Eve. R may then be used as a key for running symmetric cryptographic protocols (such as encryption, authentication, etc.).

Because we make no computational assumptions, the entropy in R can come only from w. Thus such a protocol must minimize the entropy loss during its execution, so that R is as long as possible. The best previous results have entropy loss of \Theta(\kappa^2), where \kappa is the security parameter, thus requiring the password to be very long even for small values of \kappa. In this work, we present the first protocol for information-theoretic key agreement that has entropy loss linear in the security parameter. The result is optimal up to constant factors. We achieve our improvement through a somewhat surprising application of error-correcting codes for the edit distance. Joint work with Nishanth Chandran, Rafail Ostrovsky and Leonid Reyzin.

**JOINT CIS/MICROSOFT TALK at MIT**

**Open To The Public **

*Date*: **Friday, April 23, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **How to Play Mental Solitaire under Continuous Side-Channels: A Completeness Theorem using Secure Hardware **

*Speaker*: **Guy Rothblum, IAS **

*Abstract*:

We present a general method to compile any cryptographic algorithm into one which resists side channel attacks of the only computation leaks information variety for an unbounded number of executions. Our method uses as a building block a semantically secure bit encryption scheme with the following additional operations: key refreshing, oblivious generation of ciphertexts, ciphertext re-generation, and blinded homomorphic evaluation of a single complete gate (e.g. NAND). Furthermore, the security properties of the encryption scheme should withstand bounded leakage incurred while performing each of the above operations.

We show how to implement such an encryption scheme under the DDH intractability assumption and the existence of a simple secure hardware component. The hardware component is independent of the encryption scheme secret key. The encryption scheme resists leakage attacks computed by polynomial-time functions whose output size is bounded by a constant fraction of the security parameter.

Our technical contributions include showing that the BHHO/Naor-Segev cryptosystem is resilient to leakage on both keys and ciphertexts (each separately), a method for refreshing secret keys and ciphertexts, homomorphic evaluation procedure for NANDs on encrypted values, which can be blinded even in the presence of side-channels (using simple secure hardware), and a framework for composable security under side-channel attacks. Joint work with Shafi Goldwasser.

**JOINT CIS/MICROSOFT TALK at MIT **

**Open To The Public **

*Date*: **Friday, April 16, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar Street, G449 Patil/Kiva **

*Title*: **Cryptography Against Continuous Memory Attacks **

*Speaker*: **Daniel Wichs, NYU **

*Abstract*:

We say that a cryptographic scheme is Continuous Leakage-Resilient (CLR), if it allows users to refresh their secret keys, using only fresh local randomness, such that:

-- The scheme remains functional after any number of key refreshes, although the public key never changes. Thus, the ``outside world'' is neither affected by these key refreshes, nor needs to know about their frequency.

-- The scheme remains secure even if the adversary can continuously leak arbitrary information about the current secret-key of the system, as long as the amount of leaked information is bounded in between any two successive key refreshes. There is no bound on the total amount of information that can be leaked during the lifetime of the system.

In this work, we construct a variety of practical CLR schemes, including CLR one-way relations, CLR signatures, CLR identification schemes, and CLR authenticated key agreement protocols. For each of the above, we give general constructions, and then show how to instantiate them efficiently using a well established assumption on bilinear groups, called the K-Linear assumption.

**JOINT CIS/MICROSOFT TALK**

**Open To The Public **

*Date*: **Friday, April 2, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **Microsoft NE, One Memorial Drive **

*Title*: **i-Hop Homomorphic Encryption Schemes **

*Speaker*: **Shai Halevi, IBM **

*Abstract*:

A homomorphic encryption scheme enables computing on encrypted data by means of a public evaluation procedure on ciphertexts, making it possible for anyone to transform an encryption of $x$ into an encryption of $f(x)$ (for any function $f$). But evaluated ciphertexts may differ from freshly encrypted ones, which brings up the question of whether one can keep computing on evaluated ciphertexts. Namely, is it still possible for anyone to transform the encryption of $f(x)$ into an encryption of $g(f(x))$?

An $i$-hop homomorphic encryption is a scheme where the evaluation procedure can be called on its own output up to $i$ times (while still being able to decrypt the result). A multi-hop homomorphic encryption is a scheme that is $i$-hop for all $i$. In this work we study $i$-hop and multi-hop schemes, in conjunction with the properties of circuit-privacy (i.e., the evaluation procedure hides the function) and compactness (i.e., the output of evaluation is short). We provide appropriate formal definitions for all these properties and show three constructions:

* We show a DDH-based construction, which is multi-hop and circuit-private (but not compact), and where the size of the ciphertext is linear in the size of the composed circuit, but independent of the number of hops.

* More generically, for any constant $i$, an $i$-hop circuit-private homomorphic encryption can be constructed from any two-flow protocol for two-party SFE. (Conversely, a two-flow protocol for two-party SFE can be constructed from any 1-hop circuit-private homomorphic encryption.)

* For any polynomial $i$, an $i$-hop compact and circuit-private homomorphic encryption can be constructed from a 1-hop compact homomorphic encryption and a 1-hop circuit-private homomorphic encryption, where the size of the public key grows linearly with $i$. Moreover, a multi-hop scheme can be constructed by making an additional circular-security assumption.

For the first construction, we describe a *re-randomizable* variant of Yao's garbled-circuits. Namely, given a garbled circuit, anyone can re-garble it in such a way that even the party that generated that circuit (in collusion with the intended recipient) will not be able to recognize it. This construction may be of independent interest. Joint work with Craig Gentry and Vinod Vaikuntanathan

**JOINT CIS/MICROSOFT TALK at MIT**

**Open To The Public **

*Date*: **Friday, March 26, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-449 Kiva/Patil Stata **

*Title*: **How Secure are Secure Internet Routing Protocols?**

*Speaker*: **Sharon Goldberg, Microsoft, NE **

*Abstract*:

A decade of research has been devoted to addressing vulnerabilities in the Internet's interdomain routing system. The result is a plethora of security proposals, each providing different types of security guarantees. To inform decisions about which of these protocols should be deployed in the Internet, we use both theory and simulations to *quantify* the ability of these protocols to blunt a particularly dangerous form of attack, namely, when an attacker manipulates routing protocol messages in order to attract traffic to its network (so that it can eavesdrop, tamper, or drop traffic).

The key implication of our work is that network access control mechanisms (e.g. route filtering) can be as effective as cryptographic routing protocols. Moreover, we present a series of counterintuitive examples, found in the empirical data, to show that the attack strategies considered by most prior work can *underestimate* the severity of these attacks.

Joint work with Michael Schapira, Pete Hummon, and Jennifer Rexford.

Bio:

Sharon Goldberg is a post doc researcher at MSR New England, and will be joining the Computer Science Department at Boston University as an assistant professor in August 2010. She received her Ph.D. in Electrical Engineering from Princeton University in September 2009. Her research leverages cryptography, game theory and algorithms to solve practical problems in network security.

**JOINT CIS/MICROSOFT TALK at Microsoft**

**Open To The Public **

*Date*: **Friday, March 12, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **First Floor Conference Center, Microsoft Research NE **

*Title*: **On the Geometry of Differential Privacy **

*Speaker*: **Moritz Hardt, Princeton University **

*Abstract*:

We consider the noise complexity of differentially private mechanisms in the setting where the user asks d linear queries f: R^n -> R^d, non-adaptively. Here, the database is represented by a vector in R^n, and proximity between databases is measured in the L1-metric.

We show that the problem is characterized by two geometric parameters of a convex body associated with the set of queries. Using this connection we give upper and lower bounds on how much noise a mechanism must add in order to guarantee differential privacy. Our bounds are tight for random linear queries and they are nearly optimal for any set of linear queries assuming the truth of a deep conjecture from convex geometry known as the Hyperplane Conjecture.

Quantitatively, our result shows that it is necessary and sufficient to add total Euclidean error \Theta(min{d\sqrt{\log n},d\sqrt{d}}) to achieve differential privacy. The best previous upper bound (Laplacian mechanism) gives a bound of O(d\sqrt{d}), while the best known lower bound was \Omega(d). In contrast, our lower bound is strong enough to separate differential privacy from the notion of approximate differential privacy where an upper bound of O(d) can be achieved.
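The Laplacian-mechanism baseline mentioned above can be sketched in a few lines. This is a minimal illustration only, assuming the d queries jointly have L1-sensitivity `sensitivity`; the function name and parameter choices are ours, not the paper's.

```python
import math
import random

def laplace_mechanism(true_answers, sensitivity, eps, rng):
    """Baseline eps-differentially-private release: add independent
    Laplace(sensitivity/eps) noise to each of the d query answers."""
    scale = sensitivity / eps
    noisy = []
    for answer in true_answers:
        # Sample Laplace noise by inverse-transform sampling.
        u = rng.random() - 0.5                       # uniform on [-0.5, 0.5)
        noise = -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
        noisy.append(answer + noise)
    return noisy

rng = random.Random(0)
released = laplace_mechanism([10.0, 20.0, 30.0], sensitivity=1.0, eps=0.5, rng=rng)
```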

Joint work with Kunal Talwar.

**JOINT CIS/MICROSOFT TALK at MIT**

**Open To The Public **

*Date*: **Friday, January 15, 2010 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar Street**

*Title*: **Cryptographic Complexity and Computational Intractability**

*Speaker*: **Manoj M Prabhakaran, University of Illinois, Urbana-Champaign**

*Abstract*:

In this talk, I shall describe an ongoing project, and a series of results therein, to understand cryptographic access to information in its various flavors, and its relationship to computational intractability.

Our first set of results relates to classifying 2-party secure computation functionalities based on their "cryptographic complexity." Here complexity is defined using natural notions of reduction that capture cryptographic properties, but do not depend on computational complexity aspects. We show several new reductions (protocols) as well as separations, that help us identify cryptographic complexity classes.

Our second set of results explores the connection between computational intractability and cryptographic complexity. Our results suggest that there are only a few distinct intractability assumptions that are necessary and sufficient for all the infinitely many reductions among functionalities. In deriving these results, again, we provide several new protocols as well as separation results.

Significantly, this approach of defining the universe of intractability requirements in terms of cryptographic functionalities (rather than using specific assumptions formulated for proving the security of specific constructions) gives a possibly finite set of computational complexity assumptions to study, corresponding to a finite set of worlds between Minicrypt and Cryptomania. The main open problem we pose is to identify the set of all intractability assumptions that appear in this way.

These results are based on joint work with Hemanta Maji and Mike Rosulek; if time permits I will mention ongoing work that also involves Mohammad Mahmoody-Ghidary, Pichayoot Ouppaphan, Vinod Prabhakaran and Amit Sahai.

**JOINT CIS/MICROSOFT TALK at Microsoft**

**Open To The Public **

*Date*: **Friday, December 18, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **First Floor Conference Center, Microsoft Research NE **

*Title*: **A Parallel Repetition Theorem for Any Interactive Argument **

*Speaker*: **Iftach Ilan Haitner, Microsoft Research NE **

*Abstract*:

The question of whether parallel repetition reduces the soundness error is a fundamental one in the theory of protocols. While parallel repetition reduces the error at an exponential rate in interactive proofs, and at a weak exponential rate in special cases of interactive arguments (e.g., 3-message protocols - Bellare, Impagliazzo and Naor [FOCS '97], and public-coin protocols - Håstad, Pass, Pietrzak and Wikström [Manuscript '08]), Bellare et al. gave an example of interactive arguments for which parallel repetition does not reduce the soundness error at all.

We show that by slightly modifying any interactive argument, in a way that preserves its completeness and only slightly deteriorates its soundness, we get a protocol for which parallel repetition does reduce the error at a weak exponential rate. In this modified version, the verifier flips at the beginning of each round a (1 - 1/(4m), 1/(4m))-biased coin (i.e., 1 is tossed with probability 1/(4m)), where m is the round complexity of the (original) protocol. If the coin is 1, the verifier halts the interaction and accepts; otherwise it sends the same message that the original verifier would. At the end of the protocol (if it is reached), the verifier accepts if and only if the original verifier would.
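The verifier modification can be sketched as a wrapper around the original verifier's behavior. This is a toy illustration; `original_round` and `original_accepts` are hypothetical stand-ins for the unmodified verifier.

```python
import random

def run_modified_verifier(m, original_round, original_accepts, rng):
    """Before each of the m rounds, flip the (1 - 1/(4m), 1/(4m))-biased
    coin; on 1, halt and accept, otherwise behave as the original verifier."""
    for round_index in range(m):
        if rng.random() < 1.0 / (4 * m):
            return True                      # early halt-and-accept
        original_round(round_index)          # original round behavior
    return original_accepts()                # original accept/reject decision

rng = random.Random(0)
# Even a verifier that originally always rejects now accepts with
# probability 1 - (1 - 1/(4m))^m (about 0.23 for m = 4), which is the
# "slight deterioration" of soundness the abstract mentions.
accepts = sum(
    run_modified_verifier(4, lambda r: None, lambda: False, rng)
    for _ in range(10000)
)
```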

**JOINT CIS/MICROSOFT TALK at MIT**

**Open To The Public **

*Date*: **Friday, December 11, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar Street**

*Title*: **Parallel Repetition Theorems for Interactive Arguments **

*Speaker*: **Kai-Min Chung, Harvard U.**

*Abstract*:

We present (almost) tight parallel repetition theorems for certain classes of interactive arguments. Interactive arguments are interactive proofs with computational soundness, namely, the soundness holds only against efficient malicious provers. Parallel repetition is a natural and desirable way to reduce the soundness error of interactive arguments, as it preserves the round complexity. However, unlike the case of interactive proofs, parallel repetition does not always reduce the soundness error of interactive arguments. Therefore, there has been a line of research to identify restricted classes of protocols where parallel repetition does improve the soundness, and to bound the rate at which the soundness error decreases.

In this talk, I will show a tight parallel repetition theorem for public-coin interactive arguments, namely, n-fold parallel repetition can reduce the soundness error of a public-coin argument from \eps to \eps^n + negligible. This is done by giving a tight analysis of the efficient reduction algorithm of Hastad, Pass, Wikstrom, Pietrzak (TCC 2010). I will also show an almost tight parallel repetition theorem for a more general class of interactive arguments that contains public-coin and three-round (private-coin) arguments. If time permits, I will show how to modify any interactive argument to one in the class above using fully homomorphic encryption schemes. This gives a way to amplify the soundness of any interactive argument without increasing the round complexity.
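The \eps^n rate can be checked numerically with a toy simulation. This sketch (names ours) models only the easy case of a prover playing each copy independently, for which the error is exactly \eps^n; the content of the theorem is that public-coin arguments degrade (almost) this fast even against provers that correlate their answers across copies.

```python
import random

def parallel_repetition_error(eps, n, trials, rng):
    """Monte-Carlo estimate of the probability that a cheating prover,
    winning each copy independently with probability eps, wins all n
    parallel copies.  For independent strategies this equals eps ** n."""
    wins = 0
    for _ in range(trials):
        if all(rng.random() < eps for _ in range(n)):
            wins += 1
    return wins / trials

rng = random.Random(1)
estimate = parallel_repetition_error(0.5, 3, 100000, rng)   # close to 0.5**3 = 0.125
```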

**JOINT CIS/MICROSOFT TALK at Microsoft**

**Open To The Public **

*Date*: **Friday, December 4, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **First Floor Conference Center, Microsoft Research NE **

*Title*: **Cryptographic hardness for learning intersections of halfspaces **

*Speaker*: **Alexander Sherstov, Microsoft Research NE **

*Abstract*:

We give the first representation-independent hardness results for PAC learning intersections of halfspaces, a central concept class in computational learning theory. Our hardness results are derived from two public-key cryptosystems due to Regev, which are based on the worst-case hardness of well-studied lattice problems. Specifically, we prove that a polynomial-time algorithm for PAC learning intersections of n^eps halfspaces (for a constant eps>0) in n dimensions would yield a polynomial-time solution to ~O(n^{1.5})-uSVP (unique shortest vector problem).

We also prove that PAC learning intersections of n^eps low-weight halfspaces would yield a polynomial-time quantum solution to ~O(n^{1.5})-SVP and ~O(n^{1.5})-SIVP (shortest vector problem and shortest independent vector problem, respectively). Our approach also yields the first representation-independent hardness results for learning polynomial-size depth-2 neural networks and polynomial-size depth-3 arithmetic circuits.

Joint work with Adam Klivans (U of Texas at Austin).

Journal version in "Cryptographic hardness for learning intersections of halfspaces", J. Comput. Syst. Sci. 75(1):2-12, 2009.

**JOINT CIS/MICROSOFT TALK at MIT**

**Open To The Public **

*Date*: **Friday, Nov 20, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar Street**

*Title*: **Public key Encryption Schemes with Auxiliary Input**

*Speaker*: **Vinod Vaikuntanathan, IBM TJ Watson **

*Abstract*:

We construct public-key cryptosystems that remain secure even when the adversary is given any computationally uninvertible function of the secret key as auxiliary input (even one that may reveal the secret key information-theoretically). Our schemes are based on the decisional Diffie-Hellman and Learning with Errors problems.

Our technical contributions include:

* a novel extension of the Goldreich-Levin theorem to provide a hard-core (pseudorandom) value over large fields, and

* a proof that the learning with errors assumption holds even in the presence of auxiliary information about the secrets.
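The classic Goldreich-Levin predicate that the first contribution extends can be sketched over GF(2). This toy version (names ours) shows only the original bit-valued predicate; the talk's extension outputs a pseudorandom element of a large field instead.

```python
import random

def gl_hardcore_bit(x_bits, r_bits):
    """Classic Goldreich-Levin hardcore predicate: the inner product
    <x, r> over GF(2).  Given f(x) and r for a one-way f, this bit is
    pseudorandom to any efficient adversary."""
    assert len(x_bits) == len(r_bits)
    return sum(a * b for a, b in zip(x_bits, r_bits)) % 2

rng = random.Random(0)
x = [rng.randrange(2) for _ in range(16)]
r = [rng.randrange(2) for _ in range(16)]
bit = gl_hardcore_bit(x, r)
```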

Joint work with Yevgeniy Dodis, Shafi Goldwasser, Yael Kalai and Chris Peikert.

**JOINT CIS/MICROSOFT TALK at Microsoft**

**Open To The Public **

*Date*: **Friday, November 13, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **First Floor Conference Center, Microsoft Research NE **

*Title*: **Bit Encryption is Complete **

*Speaker*: **Abhi Shelat, University of VA**

*Abstract*:

We show that an encryption scheme that encrypts only 1-bit at a time can be used to construct an encryption scheme that encrypts arbitrarily long messages.

This property is well-known for some weaker notions of secure encryption. However, for chosen-ciphertext secure (CCA2) encryption---i.e., the notion of encryption that is strong enough for the Internet---the question has remained open since the notion of CCA2 security was introduced by Rackoff and Simon in 1992.

The importance of this work is two-fold: we extend our understanding of encryption by establishing that bit-encryption is complete, and we introduce a novel technique that may help resolve remaining open questions about encryption. Joint work with Steven Myers at Indiana.

**JOINT CIS/MICROSOFT TALK at Microsoft**

**Open To The Public **

*Date*: **Friday, November 6, 2009 **

*Time*: **10:30 am - 12:30 pm **

*Place*: **First Floor Conference Center, Microsoft Research NE **

*Title*: **Public-Key Encryption in the Bounded-Retrieval Model **

*Speaker*: **Daniel Wichs, Courant Institute, NYU **

*Abstract*:

We construct the first public-key encryption scheme in the Bounded-Retrieval Model (BRM), providing security against various forms of adversarial "key leakage" attacks. In this model, the adversary is allowed to learn arbitrary information about the decryption key, subject only to the constraint that the overall amount of "leakage" is bounded by at most L bits. The goal of the BRM is to design cryptographic schemes that can flexibly tolerate arbitrary leakage bounds L (a few bits or many gigabytes) by increasing only the size of the secret key proportionally, while keeping all the other parameters -- including the size of the public key, ciphertext, encryption/decryption time, and the number of secret-key bits accessed during decryption -- small and independent of L.

As our main technical tool, we introduce the concept of an Identity-Based Hash Proof System (IB-HPS), which generalizes the notion of hash proof systems of Cramer and Shoup [CS02] to the identity-based setting. We give three different constructions of this primitive based on: (1) bilinear groups, (2) lattices, and (3) quadratic residuosity. As a result of independent interest, we show that an IB-HPS almost immediately yields an Identity-Based Encryption (IBE) scheme which is secure against (small) partial leakage of the target identity's decryption key. As our main result, we use IB-HPS to construct public-key encryption (and IBE) schemes in the Bounded-Retrieval Model.

**JOINT CIS/MICROSOFT TALK at MIT**

**Open To The Public **

*Date*: **Friday, October 16, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **NOTE PLACE CHANGE 32-D463 (Star), Stata Center**

*Title*: **Leakage Resilient Key Proxies **

*Speaker*: **Yevgeniy Vahlis, U of Toronto **

*Abstract*:

A common assumption made by cryptographers is that computers are built using secure hardware, i.e. an adversary can only interact with the device in a black box manner and does not have access to its internals. The reality is less friendly. Indeed, in many cases an adversary can perform side-channel attacks by making physical measurements on and around the device. Such attacks include measuring power consumption, sound emanation, and timing as well as attacks where the adversary gains access to the RAM of the device after it has been powered off (often referred to as "Cold Boot" attacks).

A common approach to protecting against such attacks is to design more secure hardware. However, in recent years cryptographers have been increasingly focused on algorithmic approaches for protecting against side channel attacks, thus trying to minimize the assumptions about the security of the underlying hardware.

In this work we define and construct a Leakage Resilient "Key Proxy" -- a cryptographic primitive that allows a secret key to be stored and computed on repeatedly in an environment susceptible to side-channel attacks. We assume that every time a function is evaluated on the key, a bounded amount of information about the "active" part of memory leaks to the adversary, and that the total number of invocations is unrestricted. Our construction uses the recently achieved Fully Homomorphic Public Key Encryption of Gentry, and makes use of a small memory-less leak free component of fixed complexity. Consequently, our result can be viewed as reducing the task of protecting stateful computation of arbitrary complexity to that of protecting stateless computation of fixed complexity against leakage. Joint work with Ali Juma and Charles Rackoff.

**JOINT CIS/MICROSOFT TALK at MIT**

**Open To The Public **

*Date*: **Friday, October 2, 2009 **

*Time*: **10:30 am - 12:30 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar Street**

*Title*: **Elliptic Curve Cryptography: A Survey **

*Speaker*: **Joe Silverman, MSR New England and Brown University **

*Abstract*:

It's been almost 25 years since Victor Miller and Neal Koblitz proposed using elliptic curves for cryptography. Since that time, ECC has become a primary source of cryptographic primitives in both theory and practice. In this talk I will survey the mathematical theory of elliptic curves and discuss some of the ways in which they are used in cryptography, including classical constructions such as key exchange and digital signatures and more recent innovations that fall under the rubric of pairing-based cryptography. This talk will be expository and should be accessible to graduate students and postdocs.
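As an illustrative aside (not from the talk), the group law underlying ECC can be sketched over a toy curve. The curve y^2 = x^3 + 2x + 2 over F_17 with generator (5,1) is a standard textbook example; real deployments use standardized curves over ~256-bit primes.

```python
# Toy elliptic-curve Diffie-Hellman over a tiny prime field.
# Curve: y^2 = x^3 + 2x + 2 over F_17 (textbook example; far too
# small for real security).
P, A = 17, 2
O = None  # the point at infinity, the group identity

def add(p1, p2):
    """Chord-and-tangent group law in affine coordinates."""
    if p1 is O: return p2
    if p2 is O: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O                                   # inverse points
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = O
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

G = (5, 1)                    # generates a subgroup of order 19
a, b = 3, 11                  # Alice's and Bob's secret scalars
A_pub, B_pub = mul(a, G), mul(b, G)
shared_alice = mul(a, B_pub)  # a * (bG)
shared_bob = mul(b, A_pub)    # b * (aG)
assert shared_alice == shared_bob
```

Key exchange works because both parties compute the same point abG, while an eavesdropper sees only aG and bG.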

**JOINT CIS/MICROSOFT TALK at Microsoft**

**Open To The Public **

*Date*: **Friday, September 25, 2009 **

*Time*: **10:30 am - 12:30 pm **

*Place*: **First Floor Conference Center, Microsoft Research NE **

*Title*: **On the Necessary and Sufficient Assumptions for UC Computation **

*Speaker*: **Claudio Orlandi, University of Aarhus **

*Abstract*:

We study the necessary and sufficient assumptions for universally composable (UC) computation, both in terms of setup and computational assumptions. We look at the common reference string model, the common random string model and the public-key infrastructure (PKI) model, and provide new results for all of them. Perhaps most interestingly, we show that:

- For even the minimal meaningful PKI, where we only assume that the secret key is a value which is hard to compute from the public key, one can UC-securely compute any poly-time functionality if there exists a passively secure oblivious-transfer protocol for the stand-alone model. Since a PKI where the secret keys can be computed from the public keys is useless, and some setup assumption is needed for UC secure computation, this establishes the best we could hope for in the PKI model: any non-trivial PKI is sufficient for UC computation.

- We show that in the PKI model one-way functions are sufficient for UC commitment and UC zero-knowledge. These are the first examples of UC secure protocols for non-trivial tasks which do not assume the existence of public-key primitives. In particular, the protocols show that non-trivial UC computation is possible in Minicrypt.

This is joint work with Ivan Damgaard and Jesper Nielsen.

**JOINT CIS/MICROSOFT TALK at MIT**

**Open To The Public **

*Date*: **Friday, September 18, 2009 **

*Time*: **10:30 am - 12:30 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar Street**

*Title*: **Protecting Circuits from Computationally-Bounded Leakage **

*Speaker*: **Eran Tromer, CSAIL, MIT **

*Abstract*:

Physical computational devices leak side-channel information that may, and often does, reveal secret internal states. We present a general transformation that compiles any circuit into a device that maintains secrecy even in the presence of well-defined classes of side-channel leakage. Our construction requires only a minimal leak-proof component: one that draws random elements from a simple distribution. We thus reduce the problem of shielding arbitrary complex circuits to the problem of shielding a single simple component.

Our approach is based on modeling the adversary as a powerful observer that inspects the device via a "limited" measurement apparatus. We capture the notion of "limited" measurements using computational complexity classes, and our proofs of security rely on the hardness of certain functions for these classes. Thus, for example, AC^0 lower bounds yield a construction that is resilient to any leakage that can be computed by constant-depth circuits. More generally, we give a generic composition theorem that shows how to build provably secure devices of arbitrary complexity out of components that satisfy a simulatability condition. Several applications are shown.

In contrast to previous works, we allow the side-channel leakage to depend on the whole state and on all the wires in the device, and to grow unbounded over time.

Joint work with Sebastian Faust and Leonid Reyzin.

**JOINT CIS/MICROSOFT TALK at Microsoft**

**Open To The Public **

*Date*: **Friday, August 7th, 2009 **

*Time*: **4:00 pm - 5:00 pm **

*Place*: **First Floor Conference Center, Microsoft Research NE **

*Title*: **A New Protocol Compiler**

*Speaker*: **Amit Sahai, UCLA **

*Abstract*:

One of the most fundamental goals in cryptography is to design protocols that remain secure when adversarial participants can engage in arbitrary malicious behavior. In 1987, Goldreich, Micali, and Wigderson presented a powerful paradigm for designing such protocols: their approach reduced the task of designing secure protocols to designing protocols that only guarantee security against "honest-but-curious" participants. By making use of zero-knowledge proofs, the GMW paradigm enforces honest behavior without compromising secrecy. Over the past two decades, this approach has been the dominant paradigm for cryptographic protocol design.

In this talk, we present a new general paradigm/protocol compiler for secure protocol design. Our approach also reduces the task of designing secure protocols to designing protocols that only guarantee security against honest-but-curious participants. However, our approach avoids the use of zero-knowledge proofs, and instead makes use of multi-party protocols in a much simpler setting - where the majority of participants are completely honest (such multi-party protocols can exist without requiring any computational assumptions). Our paradigm yields protocols that rely on Oblivious Transfer (OT) as a building block. This offers a number of advantages in generality and efficiency.

In contrast to the GMW paradigm, by avoiding the use of zero-knowledge proofs, our paradigm is able to treat all of its building blocks as "black boxes". This allows us to improve over previous results in the area of secure computation. In particular, we obtain:

- Conceptually simpler and more efficient ways for basing unconditionally secure cryptography on OT.

- More efficient protocols for generating a large number of OTs using a small number of OTs.

- Secure and efficient protocols which make only black-box use of cryptographic primitives or underlying algebraic structures, in settings where no such protocols were known before.

This talk is based primarily on joint works with Yuval Ishai (Technion and UCLA) and Manoj Prabhakaran (UIUC).

Speaker Bio: Professor Amit Sahai received his Ph.D. in Computer Science from MIT in 2000. From 2000 to 2004, he was a professor at Princeton University; in 2004 he joined UCLA as an Associate Professor of Computer Science, and as Associate Director of the Center for Information and Computation Security. His research interests are in security and cryptography, and theoretical computer science more broadly. He has published more than 75 original technical research papers at venues such as the ACM Symposium on Theory of Computing (STOC), CRYPTO, and the Journal of the ACM. He has given a number of invited talks at institutions such as MIT, Stanford, and Berkeley, including the 2004 Distinguished Cryptographer Lecture Series at NTT Labs, Japan. Professor Sahai is the recipient of numerous honors; he was named an Alfred P. Sloan Foundation Research Fellow in 2002, and received an Okawa Research Award in 2007. His research has been covered by several news agencies including the BBC World Service.

**JOINT CIS/MICROSOFT TALK at MIT**

**Open To The Public **

*Date*: **Friday, May 29, 2009 **

*Time*: **10:30 am - 12:30 pm **

*Place*: **NOTE Different PLACE: 32-124, Stata Ctr, 32 Vassar St**

*Title*: **Efficient and Reliable Tools for Delegating Computation, PhD Thesis Defense **

*Speaker*: **Guy Rothblum **

*Abstract*:

In an emerging computing paradigm, computational capabilities, from processing power to storage capacities, are offered to users over communication networks as a service. This new paradigm holds enormous promise, especially for increasing the utility of computationally weak devices. A natural approach is for weak devices to delegate expensive tasks, such as storing a large file or running a complex computation, to more powerful entities (say servers) connected to the same network. While the delegation approach seems promising, it raises immediate concerns:

When and how can a weak device verify that a computational task was completed correctly? For example, how can such a device verify that a computation server performed a costly computation correctly?

This practically motivated question touches on foundational questions in cryptography and complexity theory. In the talk I will present recent progress and tools for delegating computational tasks reliably and survey applications of these new tools to cryptography and complexity theory.

**JOINT CIS/MICROSOFT TALK at MIT**

**Open To The Public **

*Date*: **Friday, May 22, 2009 **

*Time*: **10:30 am - 12:30 pm **

*Place*: **1st floor, Microsoft Research NE**

*Title*: **Fast Cryptographic Primitives Based on the Hardness of Decoding Random Linear Code**

*Speaker*: **Benny Applebaum, Princeton University**

*Abstract*:

The task of decoding a random linear code (or, equivalently, learning parity with noise) is considered to be a hard problem, and is the basis of several cryptographic schemes. In this talk we further study the cryptographic applications of this intractability assumption and demonstrate its usefulness by obtaining the following results.
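To fix notation (an illustrative sketch, not from the talk): a learning-parity-with-noise instance consists of random parity samples whose answers are flipped with some noise rate. All parameter choices below are toy values.

```python
import random
random.seed(1)

n, m, tau = 16, 64, 0.125     # toy secret length, sample count, noise rate
s = [random.randrange(2) for _ in range(n)]   # hidden parity vector

def lpn_sample():
    """One noisy parity sample (a, <a,s> + e mod 2), Pr[e = 1] = tau."""
    a = [random.randrange(2) for _ in range(n)]
    e = 1 if random.random() < tau else 0
    return a, (sum(ai * si for ai, si in zip(a, s)) + e) % 2

samples = [lpn_sample() for _ in range(m)]
# With tau = 0, Gaussian elimination on n samples recovers s; the
# hardness assumption is that the noisy version is intractable.
flips = sum(b != sum(ai * si for ai, si in zip(a, s)) % 2
            for a, b in samples)               # number of flipped answers
```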

(1) A symmetric encryption scheme whose encryption and decryption algorithms are computable by circuits of quasilinear size (in the message length). The scheme provides security against key-dependent messages and achieves circular security. This provides a highly efficient alternative in the private-key setting to the circular-secure *public-key* encryption scheme of Boneh, Halevi, Hamburg, and Ostrovsky (CRYPTO 2008).

(2) A pseudorandom generator (PRG) that expands an n-bit seed into a 2n-bit pseudorandom sequence and can be computed in quasilinear time. By plugging our PRG into the reductions of Ishai, Kushilevitz, Ostrovsky, and Sahai (STOC 2008) we get commitment and encryption schemes (in both the public-key and symmetric-key settings) with polylogarithmic computational overhead and constant communication overhead.

(3) A simple construction of a weak randomized pseudorandom function -- a relaxation of a standard PRF -- which can be obliviously computed in quasilinear time.

To appear in CRYPTO 2009.

**JOINT CIS/MICROSOFT TALK at MIT**

**Open To The Public **

*Date*: **Friday, May 15, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar St**

*Title*: **Non-malleability Amplification **

*Speaker*: **Rafael Pass, Cornell **

*Abstract*:
We present a technique for amplifying commitment schemes that are non-malleable with respect to identities of length t, into ones that are non-malleable with respect to identities of length \Omega(2^t), while only incurring a constant overhead in round-complexity. As a result we obtain a construction of O(1)^{log* n}-round (i.e., ``essentially'' constant-round) non-malleable commitments from any one-way function, and using a black-box proof of security.
Joint work with Huijia Lin.

**JOINT CIS/MICROSOFT TALK- AT MICROSOFT **

**Open To The Public **

*Date*: **Friday, April 24, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **14th Floor Tea Lounge, Microsoft Research NE**

*Title*: **Rational Secret Sharing **

*Speaker*: **Jonathan Katz, University of Maryland **

*Abstract*:

Designers of cryptographic protocols traditionally view all parties as either arbitrarily malicious (i.e., acting unpredictably) or completely honest (i.e., slavishly following the protocol). A more realistic perspective might be that everyone is simply *rational*: i.e., willing to deviate from the protocol, but only so long as they accrue some benefit from doing so. How should protocols be designed in such a setting?

We examine this question in the specific case of secret sharing. We begin by reviewing work from 2006 showing a protocol for rational secret sharing under the (somewhat unrealistic) assumption of simultaneous broadcast. We then discuss more recent results giving rational secret sharing protocols for asynchronous, point-to-point networks that achieve a stronger notion of equilibrium (namely, (computational) strict Nash equilibrium).
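For readers unfamiliar with the underlying primitive, here is a minimal (t, n) Shamir secret sharing sketch. This is the standard construction only, not the rational protocols of the talk, and the field size is an arbitrary illustrative choice.

```python
import random
random.seed(0)

P = 2**61 - 1    # a Mersenne prime; the field must exceed the secret

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):  # random degree-(t-1) polynomial with f(0) = secret
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[2:]) == 123456789
```

Fewer than t shares reveal nothing about the secret information-theoretically; the rational-player question is how to get self-interested parties to actually pool their shares.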

Bio: Jonathan Katz is an associate professor of computer science at the University of Maryland, and a co-author of the textbook "Introduction to Modern Cryptography". He received his bachelors degrees in chemistry and mathematics from MIT in 1996, and a PhD in computer science from Columbia University in 2002. He enjoys rock climbing in his free time.

**JOINT CIS/MICROSOFT TALK- AT MIT **

**Open To The Public **

*Date*: **Friday, April 17, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar St**

*Title*: **Fully Homomorphic Encryption Using Ideal Lattices **

*Speaker*: **Craig Gentry, IBM **

*Abstract*:

We propose a fully homomorphic encryption scheme - i.e., a scheme that allows one to evaluate circuits over encrypted data without being able to decrypt. Our solution comes in three steps. First, we provide a general result - that, to construct an encryption scheme that permits evaluation of arbitrary circuits, it suffices to construct an encryption scheme that can evaluate (slightly augmented versions of) its own decryption circuit; we call a scheme that can evaluate its (augmented) decryption circuit bootstrappable.

Next, we describe a public key encryption scheme using ideal lattices that is almost bootstrappable. Lattice-based cryptosystems typically have decryption algorithms with low circuit complexity, often dominated by an inner product computation that is in NC1. Also, ideal lattices provide both additive and multiplicative homomorphisms (modulo a public-key ideal in a polynomial ring that is represented as a lattice), as needed to evaluate general circuits.

Unfortunately, our initial scheme is not quite bootstrappable - i.e., the depth that the scheme can correctly evaluate can be logarithmic in the lattice dimension, just like the depth of the decryption circuit, but the latter is greater than the former. In the final step, we show how to modify the scheme to reduce the depth of the decryption circuit, and thereby obtain a bootstrappable encryption scheme, without reducing the depth that the scheme can evaluate. Abstractly, we accomplish this by enabling the encrypter to start the decryption process, leaving less work for the decrypter, much like the server leaves less work for the decrypter in a server-aided cryptosystem.
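Gentry's actual construction uses ideal lattices; purely as an illustration of what "computing on ciphertexts with growing noise" means, here is a toy symmetric somewhat-homomorphic scheme over the integers (in the style of later integer-based work, not the scheme of this talk). It supports XOR and AND on encrypted bits until the noise term approaches p.

```python
import random
random.seed(7)

p = random.randrange(2**30, 2**31) | 1    # odd secret key

def enc(m):
    """Encrypt a bit m as p*q + 2*r + m; the noise term is 2*r + m."""
    q = random.randrange(2**20)
    r = random.randrange(100)             # small noise
    return p * q + 2 * r + m

def dec(c):
    """Reduce mod p to strip p*q, then mod 2 to strip the noise."""
    return (c % p) % 2

c0, c1 = enc(0), enc(1)
assert dec(c0 + c1) == 1    # ciphertext addition = XOR of plaintext bits
assert dec(c0 * c1) == 0    # ciphertext multiplication = AND
assert dec(c1 * enc(1)) == 1
```

Each operation grows the noise (addition adds it, multiplication multiplies it); decryption fails once the noise exceeds p, which is exactly the bounded-depth limitation that bootstrapping is designed to overcome.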
**JOINT CIS/MICROSOFT TALK- AT MIT **

**Open To The Public **

*Date*: **Friday, April 10, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar St**

*Title*: **Order-preserving symmetric encryption **

*Speaker*: **Alexandra Boldyreva, GeorgiaTech **

*Abstract*:
We initiate the cryptographic study of order-preserving symmetric encryption (OPE), a primitive suggested in the database community for allowing efficient range queries on encrypted data. We first show that a straightforward relaxation of standard security notions for encryption such as indistinguishability against chosen-plaintext attack is unachievable by a practical OPE scheme. Instead, we propose a security notion in the spirit of pseudorandom functions and related primitives asking that an OPE scheme look ``as-random-as-possible'' subject to the order-preserving constraint.

We then design an efficient OPE scheme and prove its security under our notion based on pseudorandomness of an underlying blockcipher. Our construction is based on a natural relation we uncover between a random order-preserving function and the hypergeometric probability distribution.

In particular, it makes black-box use of an efficient sampling algorithm for the latter. This is joint work with Nate Chenette, Adam O'Neill and Younho Lee.
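The hypergeometric connection has a simple intuition (an illustrative sketch with toy parameters): a uniformly random order-preserving function from [M] into [N] is exactly a uniformly random M-element subset of [N] listed in increasing order. The scheme of the talk samples this table lazily via the hypergeometric distribution; the sketch below simply materializes the whole table.

```python
import random
random.seed(42)

M, N = 16, 1000   # toy plaintext space [1..M] and ciphertext space [1..N]

# A random M-subset of [N], sorted, IS a uniformly random
# order-preserving function from [M] to [N].
table = sorted(random.sample(range(1, N + 1), M))

def ope_enc(m):
    """Map plaintext m (1-indexed) to the m-th smallest sampled value."""
    return table[m - 1]

cts = [ope_enc(m) for m in range(1, M + 1)]
assert cts == sorted(cts)        # m1 < m2 implies enc(m1) < enc(m2)
assert len(set(cts)) == M        # the map is injective
```

Order preservation is what makes encrypted range queries work: the server can compare ciphertexts directly.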

**JOINT CIS/MICROSOFT TALK- AT MIT **

**Open To The Public **

*Date*: **Friday, April 3, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar St**

*Title*: **Non-malleable Extractors & Symmetric Key Cryptography from Weak Secrets **

*Speaker*: **Yevgeniy Dodis, NYU **

*Abstract*:

We study the question of information-theoretically secure authenticated key agreement from weak secrets. In this setting, Alice and Bob share an n-bit secret W, which might *not* be uniformly random, but about which the adversary has at least k bits of uncertainty (formalized using conditional min-entropy). Alice and Bob wish to use W to agree on a nearly uniform secret key R, over a public channel controlled by an *active* adversary Eve. We show that non-interactive (single-message) protocols do not work when k <= n/2, and achieve only poor parameters even when n/2 < k << n.

On the other hand, for arbitrary values of k, we design a communication-efficient two-message protocol extracting nearly k random bits. This dramatically improves on the only previously known protocol, by Renner and Wolf [RW03], which required O(s) rounds where s is the security parameter. Our solution takes a new approach by studying and constructing *non-malleable* seeded randomness extractors --- if an attacker sees a random seed X and comes up with an arbitrarily related seed X', then we bound the relationship between R = Ext(W;X) and R' = Ext(W;X').
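For intuition about the object Ext itself, here is a hedged sketch of a plain universal-hash seeded extractor in the spirit of the leftover hash lemma. It is emphatically not non-malleable; non-malleability is precisely the new property the talk constructs.

```python
import random
random.seed(3)

P = 2**127 - 1      # Mersenne prime; a large field for the hash

def ext(w: int, x: int, m: int = 64) -> int:
    """Ext(W; X) = (W * X mod P) truncated to m bits. For a nonzero
    seed x, h_x(w) = w*x mod P is a universal hash family, so by the
    leftover hash lemma the output is close to uniform whenever W has
    enough min-entropy."""
    return (w * x % P) & ((1 << m) - 1)

w = random.getrandbits(126)   # weak secret shared by Alice and Bob
x = random.getrandbits(126)   # public random seed sent over the channel
r = ext(w, x)                 # both parties derive the same key R
```

Against an *active* Eve who can replace the seed X with a related X', this plain extractor gives no guarantee about Ext(W;X') -- hence the need for the non-malleable variant.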

We also extend our one-round key agreement protocol to the "fuzzy" setting, where Alice and Bob share "close'' (but not equal) secrets W_A and W_B, and to the Bounded Retrieval Model (BRM) where the size of the secret W is huge.

Joint work with Daniel Wichs from STOC 2009. Paper available at http://eprint.iacr.org/2008/503
**JOINT CIS/MICROSOFT TALK- AT MICROSOFT **

**Open To The Public **

*Date*: **Friday, March 6, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **14th Floor Tea Lounge, Microsoft Research NE**

*Title*: **Cryptography against Memory Attacks **

*Speaker*: **Vinod Vaikuntanathan, IBM, TJ Watson **

*Abstract*:
The absolute privacy of the secret keys associated with cryptographic algorithms has been the cornerstone of modern cryptography. Still, there is ample evidence in practice that keys do get compromised at times, by various means. In a particularly devastating side-channel attack proposed recently, termed the ``memory attack'', a significant fraction of the bits of the secret key can be measured if the secret key is *ever stored* in a part of memory which can be accessed. Such an attack has been shown to completely compromise the security of various crypto-systems in use, including RSA and AES.

We show two *public-key encryption schemes* secure against memory attacks that leak up to a (1-\epsilon) fraction of the bits of the secret key (for any constant \epsilon > 0). The first of these is the lattice-based encryption scheme of Regev, and the second is a DDH-based encryption scheme recently proposed by Boneh, Halevi, Hamburg and Ostrovsky. This is done without increasing the size of the secret key, and without complicating the natural encryption and decryption routines. Based on joint work with Adi Akavia, Yael Kalai, Chris Peikert and Shafi Goldwasser.

**JOINT CIS/MICROSOFT TALK- AT MIT **

**Open To The Public **

*Date*: **Friday, Feb 27, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar Street**

*Title*: **On the Complexity of Differentially Private Data Release: Efficient Algorithms and Hardness Results **

*Speaker*: **Guy Rothblum, CSAIL, MIT**

*Abstract*:

We consider private data analysis in the setting in which a trusted and trustworthy curator, having obtained a data set containing sensitive information, releases to the public a ``sanitization'' of the data set. The goal is for the sanitization to both protect the privacy of the individual contributors of data and offer aggregate statistical utility to a data analyst. We focus on the case where the process is non-interactive; once the sanitization has been released the original data and the curator play no further role. Blum et al. [STOC '08] showed a remarkable result: for any collection of counting predicate queries, the exponential mechanism of McSherry and Talwar [FOCS '07] can be used to (inefficiently) generate a synthetic dataset that maintains approximately correct fractional counts for all of the queries, while ensuring a strong privacy guarantee, known as differential privacy.

In this work we investigate the computational complexity of such non-interactive privacy mechanisms, mapping the boundary between feasibility and infeasibility. We show:

1. For any query set C and data universe X there is an efficient sanitizer that outputs a synthetic dataset and runs in time poly(|C|,|X|). The sanitizer works even when the size of the original dataset is sub-polynomial in |C|.

2. Under standard cryptographic assumptions, there is no sanitizer that outputs synthetic datasets and works for general classes of counting queries and runs in time sub-polynomial in |C| or |X|. In particular, this is a case where the exponential mechanism cannot, in general, be implemented efficiently.

3. Turning to the potentially easier problem of privately generating an arbitrary data structure (not necessarily synthetic data), there is a tight connection between hardness of sanitization and the existence of traitor tracing schemes.

Joint work with Cynthia Dwork, Moni Naor, Omer Reingold and Salil Vadhan.
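As background on the privacy guarantee itself (a standard building block, not the non-interactive sanitizers this talk is about), the Laplace mechanism answers a single counting query with epsilon-differential privacy:

```python
import math
import random
random.seed(0)

def laplace_count(data, predicate, epsilon):
    """A counting query has sensitivity 1 (one individual changes the
    count by at most 1), so adding Laplace noise of scale 1/epsilon
    makes this single released answer epsilon-differentially private."""
    true_count = sum(1 for row in data if predicate(row))
    u = random.random() - 0.5                      # uniform on (-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution with scale 1/eps.
    noise = -(1 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

data = list(range(100))
answer = laplace_count(data, lambda r: r % 2 == 0, epsilon=1.0)
```

The hardness results above concern the much more demanding task of releasing one noisy object that answers *all* queries in a class at once.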

**JOINT CIS/MICROSOFT TALK- AT MIT **

**Open To The Public **

*Date*: **Friday, Feb 20, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar Street**

*Title*: **How Voters Can Be Certain That Their Votes Are Counted Properly **

*Speaker*: **Josh Benaloh, Microsoft **

*Abstract*:

Three decades of research have produced a variety of very effective cryptographic protocols to enable verifiable secret-ballot elections. The only problem is that these protocols, while easily employed by machines, do not account for the limitations of their end-users: human voters. Recent work has focused on election "ceremonies" in which humans can effectively participate - often by engaging in a form of "human-verifiable interactive proof".

This talk will briefly review the state of cryptographic election protocols and then delve into recent work on how to bring human voters into the verification process.

**JOINT CIS/MICROSOFT TALK- AT MIT **

**Open To The Public **

*Date*: **Friday, Feb 13, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar Street**

*Title*: **Building our own Trojan Horse: Communications Surveillance and Creating Communications (In)Security **

*Speaker*: **Susan Landau, Sun Microsystems **

*Abstract*:

By requiring that surveillance capabilities be built into Internet voice communications systems, and by expanding warrantless wiretapping to any communications where one end is ``reasonably believed'' to be located outside the U.S., the U.S. government is slowly but steadily extending wiretapping capabilities to the Internet. This effort is in the name of national security. But building architected security breaches into a communications network carries real risks. In a world that has both al-Qaeda and Hurricane Katrina, does this increased wiretapping capability make us safer? We will examine what real security needs are in a post-9/11 world.

**JOINT CIS/MICROSOFT TALK- AT MIT **

**Open To The Public **

*Date*: **Friday, Feb 6, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar Street**

*Title*: **Key Agreement from Close Secrets over Unsecured Channels **

*Speaker*: **Bhavana Kanukurthi, Boston University **

*Abstract*:

We consider information-theoretic key agreement between two parties sharing somewhat different versions of a secret w that has relatively little entropy. This setting arises, for example, when a trusted server stores the biometric of a user, and the user subsequently uses his fresh biometric reading to authenticate himself to the server.

Such key agreement, also known as information reconciliation and privacy amplification over unsecured channels, was shown to be theoretically feasible by Renner and Wolf (Eurocrypt 2004), although no protocol that runs in polynomial time was described. We propose a protocol that is not only polynomial-time, but actually practical, requiring only a few seconds on consumer-grade computers.

Our protocol can be seen as an interactive version of robust fuzzy extractors (Dodis et al., Crypto 2006). While robust fuzzy extractors, due to their noninteractive nature, require w to have entropy at least half its length, we have no such constraint. In fact, unlike in prior solutions, in our solution the entropy loss is essentially unrelated to the length or the entropy of w, and depends only on the security parameter.
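The basic non-robust building block here is the code-offset secure sketch of Dodis et al.; a minimal sketch with a repetition code (toy parameters, illustration only) looks like this:

```python
import random
random.seed(5)

REP = 5   # repetition factor: corrects < REP/2 bit flips per block

def sketch(w):
    """Code-offset: publish s = w XOR (random repetition codeword)."""
    c = [random.randrange(2) for _ in range(len(w) // REP)]
    codeword = [bit for bit in c for _ in range(REP)]
    return [wi ^ ci for wi, ci in zip(w, codeword)]

def recover(w_noisy, s):
    """Decode (w' XOR s) to the nearest codeword, then XOR back."""
    delta = [wi ^ si for wi, si in zip(w_noisy, s)]
    out = []
    for i in range(0, len(delta), REP):
        block = delta[i:i + REP]
        bit = 1 if sum(block) > REP // 2 else 0   # majority decode
        out.extend(si ^ bit for si in s[i:i + REP])
    return out

w = [random.randrange(2) for _ in range(20)]          # Alice's reading
s = sketch(w)                                         # public helper data
w_noisy = list(w); w_noisy[3] ^= 1; w_noisy[11] ^= 1  # Bob's noisy reading
assert recover(w_noisy, s) == w
```

The robustness and low entropy loss discussed in the abstract are about protecting this kind of helper data against active modification on the channel.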

The paper appears in Eurocrypt 2009 and is available at http://eprint.iacr.org/2008/494. This is joint work with Leonid Reyzin.

**JOINT CIS/MICROSOFT TALK- at MIT **

**Open To The Public **

*Date*: **Friday, January 30, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar Street**

*Title*: **Accessible Entropy **

*Speaker*: **Iftach Haitner, Microsoft Research New England **

*Abstract*:

We put forth a new computational notion of entropy, which measures the (in)feasibility of sampling high-entropy strings that are consistent with a given protocol. Specifically, we say that the i'th round of a protocol (A,B) has *accessible entropy* at most k if no polynomial-time strategy A* can generate messages for A whose entropy in the i'th round is greater than k when conditioned both on prior messages of the protocol and on prior coin tosses of A*.

As applications of this notion, we: - Give a much simpler and more efficient construction of statistically hiding commitment schemes from arbitrary one-way functions, and - Prove that constant-round statistically hiding commitments are necessary for constructing constant-round zero-knowledge proof systems for NP that remain secure under parallel composition (assuming the existence of one-way functions).

Joint work with Omer Reingold, Salil Vadhan and Hoeteck Wee.

**JOINT CIS/MICROSOFT TALK- at MIT **

**Open To The Public **

*Date*: **Friday, January 9, 2009 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar Street**

*Title*: **Merkle Puzzles are Optimal --- an O(n^2)-query attack on key exchange from a random oracle **

*Speaker*: **Boaz Barak, Princeton University **

*Abstract*:

We prove that every key exchange protocol in the random oracle model in which the honest users make at most n queries to the oracle can be broken by an adversary making O(n^2) queries to the oracle. This improves on the previous O~(n^6)-query attack given by Impagliazzo and Rudich (STOC '89), and answers an open question posed by them. Our bound is optimal up to a constant factor, since Merkle (CACM '78) gave an n-query key exchange protocol in this model that cannot be broken by an adversary making o(n^2) queries.
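For reference, Merkle's protocol itself can be sketched as follows (a toy instantiation with a hash-derived weak cipher; all parameters are illustrative):

```python
import hashlib
import random
random.seed(9)

N = 64    # number of puzzles ~ the honest parties' work

def weak_encrypt(weak_key: int, payload: bytes) -> bytes:
    """XOR with a pad derived from a key in a space of only N values,
    so each puzzle takes ~N guesses to brute-force. Self-inverse."""
    pad = hashlib.sha256(b"pad" + weak_key.to_bytes(4, "big")).digest()
    return bytes(a ^ b for a, b in zip(payload, pad))

# Alice: N puzzles, each hiding a random (id, key) pair.
ids = [random.getrandbits(32).to_bytes(4, "big") for _ in range(N)]
keys = [random.getrandbits(32).to_bytes(4, "big") for _ in range(N)]
puzzles = [weak_encrypt(random.randrange(N), b"MERK" + ids[i] + keys[i])
           for i in range(N)]

# Bob: pick ONE random puzzle and solve it by brute force (O(N) work).
target = random.choice(puzzles)
for guess in range(N):
    pt = weak_encrypt(guess, target)
    if pt.startswith(b"MERK"):            # redundancy flags the right key
        bob_id, bob_key = pt[4:8], pt[8:12]
        break

# Bob announces bob_id in the clear; Alice looks up the matching key.
alice_key = dict(zip(ids, keys))[bob_id]
assert alice_key == bob_key
# Eve, seeing only bob_id, expects to solve ~N/2 puzzles at ~N work
# each before finding that id: Omega(N^2) work total.
```

The talk's result says this quadratic gap between honest work and attacker work is the best possible for any protocol built from a random oracle alone.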

Joint work with Mohammad Mahmoody-Ghidary.

**JOINT CIS/MICROSOFT TALK- AT MICROSOFT **

**Open To The Public **

*Date*: **Friday, Dec 12, 2008 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **14th Floor Tea Lounge, Microsoft Research NE**

*Title*: **Internet path-quality monitoring in the presence of adversaries **

*Speaker*: **Sharon Goldberg, Princeton University **

*Abstract*:

The Internet is an indispensable part of our society, and yet its basic foundations remain vulnerable to simple attacks. In recent years, a variety of proposals for securing Internet routing have been considered; path-quality monitoring (PQM) protocols that monitor packet loss and corruption are an essential component of many of these proposals.

In this talk, I give a formal treatment of the path-quality monitoring problem, and discuss new secure protocols that allow a sender to raise an alarm when packet loss and corruption exceeds some threshold, even when an adversary tries to bias monitoring results. Our protocols are designed to be lightweight enough to be deployed in the resource-constrained environment of high-speed routers.

I will present our ``secure sketch'' protocol, which combines second-moment estimation techniques with ideas from cryptography, and requires O(log(T)) storage in order to monitor T packets sent on an Internet data path; e.g., monitoring billions of packets requires 200-600 bytes of storage and a single IP packet of extra communication. Time permitting, I will present another protocol that scales well with the number of routers in the network, and discuss connections between the path-quality monitoring problem and the adversarial sketch model of Mironov, Naor, and Segev. This is joint work with David Xiao, Eran Tromer, Boaz Barak, and Jennifer Rexford.

**JOINT CIS/MICROSOFT TALK- at MIT **

**Open To The Public **

*Date*: **Friday, December 5, 2008 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar Street**

*Title*: **Very Local Self Correcting of Homomorphism and MPC Codes **

*Speaker*: **Adi Akavia, IAS/DIMACS **

*Abstract*:

Blum-Luby-Rubinfeld gave a local self correcting algorithm for homomorphism codes Hom(G,H) for any finite abelian groups G and H. The BLR algorithm, given an entry location s and oracle access to a corrupted codeword w close to a codeword C, outputs the value of C on entry s, while reading only a constant number of entries in w. Although the number of *entries* read by the BLR algorithm is constant, the number of read *bits* is O(log|H|), as each alphabet symbol is represented by log|H| bits. This number of read bits may be quite large with respect to the information rate; in particular, for Hom(Z_N,Z_N) the number of read bits is larger than the information rate.
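The BLR self-corrector is simple enough to state in a few lines; here is an illustrative sketch for Hom(Z_N, Z_N), i.e. maps of the form x -> a*x mod N, with a corruption rate of 10% (toy parameters, not the new algorithm of the talk):

```python
import random
random.seed(2)

N = 1009    # modulus of the toy group Z_N
a = 345     # the true homomorphism is h(x) = a*x mod N

table = [(a * x) % N for x in range(N)]       # start from h itself...
for x in random.sample(range(N), N // 10):    # ...then corrupt 10% of it
    table[x] = random.randrange(N)

def self_correct(s, trials=101):
    """BLR: h(s) = h(s + r) - h(r) for a random r. Each trial is correct
    unless one of the two queried entries is corrupted (prob <= 2*0.1),
    so a majority vote over many trials recovers h(s) w.h.p."""
    votes = {}
    for _ in range(trials):
        r = random.randrange(N)
        v = (table[(s + r) % N] - table[r]) % N
        votes[v] = votes.get(v, 0) + 1
    return max(votes, key=votes.get)

corrected = {s: self_correct(s) for s in (0, 1, 17, 500)}
assert all(corrected[s] == (a * s) % N for s in corrected)
```

Note that each entry read here is a full log N-bit group element; reducing the *bits* read, rather than the entries, is the point of the new algorithm.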

In this work we present a local self correcting algorithm for homomorphism codes Hom(Z_N^k,Z_N) that reads O(k) *bits* to output a single bit of the value of the closest codeword on the given entry (in a non-standard binary representation); the algorithm is restricted to codewords C and entries s corresponding to invertible group elements, and extends to other codes Hom(G,H). Our algorithm improves over prior works as long as k = o(log N). In particular, for Hom(Z_N,Z_N) our algorithm reduces the number of read bits from O(log N) to O(1).

In addition, we use our algorithm to obtain the first local self correcting algorithm for the MPC codes of Akavia-Goldwasser-Safra FOCS'03, and for their generalizations defined here.

In terms of techniques, we develop a new approach for local self correcting in which we first reduce the self correcting problem into a property testing problem concerning the location of significant Fourier transform coefficients (aka, SFT-testing), and then give an efficient algorithm solving SFT-testing with O(k) read bits. The SFT-testing algorithm may be of independent interest.

**JOINT CIS/MICROSOFT TALK- at MIT **

**Open To The Public **

*Date*: **Friday, Nov 21, 2008 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar Street**

*Title*: **Public-Key Cryptosystems from the Worst-Case Shortest Vector Problem **

*Speaker*: **Chris Peikert, SRI International **

*Abstract*:

We construct public-key cryptosystems that are secure assuming the *worst-case* hardness of approximating the shortest vector problem on lattices. Prior cryptosystems with worst-case connections (e.g., the Ajtai-Dwork system) were based either on a *special case* of the shortest vector problem, or on the conjectured hardness of lattice problems for *quantum* algorithms.

Our main technical innovation is a reduction from certain variants of the shortest vector problem to corresponding versions of the "learning with errors" (LWE) problem; previously, only a quantum reduction of this kind was known. In addition, we construct new cryptosystems based on LWE, including a very natural chosen-ciphertext-secure system that has a much simpler description and tighter underlying worst-case approximation factor than prior constructions.
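For reference, an LWE instance consists of pairs (a, &lt;a,s&gt; + e mod q) where a is uniform and e is a small random error. A toy sampler (all parameter choices here are illustrative, not from the talk):

```python
import random

def lwe_sample(s, q, noise_width=2):
    """One toy 'learning with errors' sample: a uniform vector a together
    with b = <a, s> + e mod q, where e is a small random error.

    s: the secret vector over Z_q; noise_width bounds |e|.
    Distinguishing many such (a, b) pairs from uniform is the LWE problem.
    """
    n = len(s)
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-noise_width, noise_width)
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b
```

Real cryptosystems use much larger dimensions and a discrete Gaussian error distribution; the point here is only the shape of an LWE sample.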

**JOINT CIS/MICROSOFT TALK- AT Microsoft **

**Open To The Public **

*Date*: **Friday, Nov 7, 2008 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **14th Floor Tea Lounge, Microsoft Research NE**

*Title*: **Public Key Cryptography from Different Assumptions **

*Speaker*: **Benny Applebaum, Princeton University **

*Abstract*:

We construct a new public key encryption based on two assumptions:

- It is hard to learn parity with noise given a small number of sparse examples (e.g., n^4 vectors of weight 10).

- It is hard to distinguish between a random unbalanced bipartite graph of small degree (e.g., 10), and a similar graph with a random planted non-expanding set of vertices.
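The first assumption can be made concrete with a toy sampler: each example is the parity of the secret restricted to a small random support, flipped with some noise probability (function names and defaults are illustrative):

```python
import random

def sparse_lpn_sample(s, weight=10, noise_rate=0.1):
    """One 'learning parity with noise' example with a sparse vector (sketch).

    s: the secret bit vector. The example is given by `weight` random
    coordinates (a vector of Hamming weight `weight`), labeled with the
    parity of s on those coordinates and flipped with probability
    noise_rate. The assumption is that n^4 such examples do not suffice
    to recover s efficiently.
    """
    n = len(s)
    support = random.sample(range(n), weight)
    label = sum(s[i] for i in support) % 2
    if random.random() < noise_rate:
        label ^= 1  # noisy flip
    return support, label
```

With noise_rate = 0 the label is exactly the parity; the hardness assumption concerns the noisy case with a bounded number of samples.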

The validity and strength of the assumptions raise interesting new algorithmic and pseudorandomness questions, and we explore their relation to the current state-of-art. Joint work with Boaz Barak and Avi Wigderson.

**JOINT CIS/MICROSOFT TALK- AT MIT **

**Open To The Public **

*Date*: **Friday, Oct 31, 2008 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar St, 32-G449, Patil/Kiva, MIT**

*Title*: **Implementing Secure Multi-Party Computation **

*Speaker*: **Benny Pinkas, University of Haifa**

*Abstract*:

Secure computation is one of the great achievements of modern cryptography, enabling a set of untrusting parties to compute any function of their private inputs while revealing nothing but the result of the function. Advances in modern cryptography coupled with rapid growth in processing and communication speeds make secure computation a realistic paradigm. This was demonstrated by the Fairplay system, which is a generic system for secure two-party computation that supports high-level specification of the computation.

We will describe in this talk two recent advances in implementing secure computation. The first is a system for secure two-party computation which has fully-simulatable security against malicious adversaries. Experiments with this system reveal interesting results about the overhead of different parts of the computation, and about the efficiency of using components which are secure in the standard model.

We also present FairplayMP (for "Fairplay Multi-Party"), a system for multi-party computation secure against semi-honest adversaries. The underlying protocol of FairplayMP is the Beaver-Micali-Rogaway (BMR) protocol, which is modified in order to improve its efficiency. This protocol was chosen since it runs in a constant number of communication rounds. We also report on different experiments which measure the effect of different parameters on the performance of the system and demonstrate its scalability.

Based on joint work with Yehuda Lindell and Nigel Smart, and with Assaf Ben-David and Noam Nisan.

**JOINT CIS/MICROSOFT TALK- AT Microsoft **

**Open To The Public **

*Date*: **Friday, Oct 17, 2008 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **14th Floor Tea Lounge, Microsoft Research NE**

*Title*: **David and Goliath Commitments: UC Computation for Asymmetric Parties Using Tamper-Proof Hardware **

*Speaker*: **Tal Moran, Weizmann Institute of Science **

*Abstract*:

I will talk about constructing universally-composable (UC) commitment protocols for asymmetric parties: a weak ``David'' vs. a powerful ``Goliath''. The protocols are based on tamper-proof hardware tokens --- a physical assumption introduced by Katz [Eurocrypt '07] as an alternative to a trusted setup stage (UC computation for most interesting tasks is provably impossible in the plain model, so some extra assumptions are needed).

In Katz's original protocol, instead of trusting a third party to correctly generate setup information, each party creates its own hardware token, which it sends to the other party. Each party only needs to trust that its own token is tamper-proof.

Our protocols only require a single party (Goliath) to be capable of generating tokens. We construct a version of the protocol that is secure for computationally unbounded parties, and a more efficient version that makes computational assumptions only about David (we require only the existence of a one-way function). Our protocols are simple enough to be performed by hand on David's side.

These properties may allow such protocols to be used in situations which are inherently asymmetric in real-life, especially those involving individuals versus large organizations. Classic examples include voting protocols (voters versus ``the government'') and protocols involving private medical data (patients versus insurance-agencies or hospitals). This is joint work with Gil Segev.

**JOINT CIS/MICROSOFT TALK- AT MIT **

**Open To The Public **

*Date*: **Friday, Oct 10, 2008 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar St, 32-G449, Patil/Kiva, MIT**

*Title*: **Barriers to Proving Hardness of Improper Learning from Worst-case Assumptions **

*Speaker*: **David Xiao, Princeton University**

*Abstract*:

Computational learning theory, and in particular PAC learning, was introduced in 1984 by Valiant and has since become a major area of research in theoretical and applied computer science. One natural question that was posed at the very inception of the field is whether there are classes of functions that are hard to learn. Here it is important to make a distinction between proper and improper learning: in proper learning one is required to output a hypothesis that is comparable to the function being learned (e.g. if we are trying to learn k-DNF then the hypothesis should also be a k-DNF), while in improper learning the hypothesis can be more complex (e.g. if we are trying to learn k-DNF then the hypothesis could be a circuit of size n^c).

Computational limitations to proper and improper learning have been extensively studied, starting with the seminal works of Pitt-Valiant (JACM '88) and Kearns-Valiant (JACM '94). However, while the hardness of proper learning can be based on worst case assumptions on the power of NP (e.g. NP != RP), all known limitations on improper learning are based on cryptographic assumptions (e.g., the existence of one-way functions). It is natural to ask whether this gap is inherent, specifically: is it possible to base hardness of improper learning on worst case assumptions such as NP != RP?

Unfortunately this problem has remained open and seems difficult to solve. In this work we show that it is related to questions of average-case vs. worst-case complexity and weak cryptographic primitives. In particular, we prove the following: if there exists a black-box reduction R from deciding a language L to learning small circuits, then there exists a reduction R' from deciding L to inverting a family of auxiliary-input one-way functions (as defined by Ostrovsky-Wigderson). Furthermore, if R is non-adaptive, then in fact we can adapt the results of Akavia et al. to show that this implies L is contained in coAM. As a corollary, if L is NP-complete then PH collapses and so such a reduction is unlikely to exist. Our results hold even in the stronger model of agnostic learning. This is joint work with Benny Applebaum and Boaz Barak.

**JOINT CIS/MICROSOFT TALK - AT MIT **

*Date*: **Friday, Oct 3, 2008 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **14th Floor Tea Lounge, Microsoft Research NE**

*Title*: **Saving Private Randomness in One-Way Functions & Pseudorandom Generators**

*Speaker*: **Leonid Reyzin, Boston University**

*Abstract*:

Can a one-way function f on n input bits be used with fewer than n bits while retaining comparable hardness of inversion? We show that the answer to this fundamental question is negative, if one is limited to black-box reductions.

Instead, we ask whether one can save on the (expensive) secret random bits at the cost of adding (inexpensive) public random bits. Our first main result shows that if the number of output elements of f is at most 2^k, then a simple construction using pairwise-independent hash functions results in a new one-way function that uses only k secret bits (or even just a secret of entropy k). We also demonstrate that this is optimal, in a sense, for black-box reductions.
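The pairwise-independent hashing ingredient is standard. The sketch below shows the classic family h_{a,b}(x) = ax + b mod p together with the general shape of composing it with f, where (a, b) play the role of the inexpensive public randomness and only the short input stays secret; this illustrates the ingredients, not the paper's exact construction:

```python
# Pairwise-independent hash family h_{a,b}(x) = a*x + b mod p:
# for a, b uniform in Z_p, any two distinct inputs are mapped to a
# uniformly random *pair* of outputs.
P = (1 << 61) - 1  # a Mersenne prime modulus

def hash_ab(a, b, x):
    """Evaluate h_{a,b}(x) over Z_p."""
    return (a * x + b) % P

def short_secret_owf(f, a, b, secret):
    """Shape of the composition (illustrative, not the paper's details):
    (a, b) are public random values; only `secret` (k bits, or a secret
    of entropy k) is private. The new function applies f to the hashed
    secret, so the secret input can be much shorter than f's input."""
    return f(hash_ab(a, b, secret))
```

The hedge matters: the paper's actual construction and its security proof involve more than this composition, but the code shows where the k secret bits and the public hash coefficients enter.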

Our second main result is an application of the public-randomness approach: we show a construction of a pseudorandom generator based on any regular one-way function with output range of known size 2^k. The construction requires a seed of only 2n+O(k log k) bits (as opposed to O(n log n) in previous constructions); the savings come from the reusability of public randomness. The secret part of the seed is of length only k (as opposed to n in previous constructions), less than the length of the one-way function input.

**JOINT CIS/MICROSOFT TALK **

**Open To The Public **

*Date*: **Friday, September 12, 2008 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32 Vassar St, 32-G449, Patil/Kiva, MIT**

*Title*: **The MD6 hash function **

*Speaker*: **Ron Rivest, CSAIL, MIT**

*Abstract*:

Recent developments in cryptanalytic techniques (most notably the differential collision-finding attacks) have led to surprisingly effective breaks or near-breaks of many existing hash functions, such as MD5 and SHA-1.

This has led NIST to propose a "hash function contest" to design what may be selected as a new national hash function standard, called SHA-3. Proposals for this contest are due in October of this year.

This talk describes the MD6 hash function, which will be submitted to this contest. MD6 has a high degree of parallelizability (it is tree-based), and provable resistance to standard differential collision-finding attacks. We discuss the design principles and security of MD6.

This work is joint with quite a few colleagues and students, whose names will be listed in the talk.

**JOINT CIS/MICROSOFT TALK- NOTE CHANGE OF PLACE **

**Open To The Public **

*Date*: **Friday, Aug 29, 2008 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **14th Floor Tea Lounge, Microsoft Research NE**

*Title*: **Finding Collisions in Interactive Protocols -- A Tight Lower Bound on the Round Complexity of Statistically-Hiding Commitments **

*Speaker*: **Iftach Haitner, Microsoft**

*Abstract*:

We study the round complexity of various cryptographic protocols. Our main result is a tight lower bound on the round complexity of any fully-black-box construction of a statistically-hiding commitment scheme from one-way permutations, and even from trapdoor permutations. This lower bound matches the round complexity of the statistically-hiding commitment scheme due to Naor, Ostrovsky, Venkatesan and Yung (CRYPTO '92). As a corollary, we derive similar tight lower bounds for several other cryptographic protocols, such as single-server private information retrieval, interactive hashing, and oblivious transfer that guarantees statistical security for one of the parties.

Our techniques extend the collision-finding oracle due to Simon (EUROCRYPT '98) to the setting of interactive protocols (our extension also implies an alternative proof for the main property of the original oracle). In addition, we substantially extend the reconstruction paradigm of Gennaro and Trevisan (FOCS '00). In both cases, our extensions are quite delicate and were found useful in proving additional black-box separation results. Joint work with Jonathan J. Hoch, Gil Segev and Omer Reingold.
**JOINT CIS/MICROSOFT TALK **
**Open To The Public **
*Date*: **Friday, Aug 15, 2008 **
*Time*: **10:30 am - 12:00 pm **
*Place*: **32-G449, Kiva, Stata Center, 32 Vassar St. **
*Title*: **"Probabilistically Checkable Arguments"**
*Speaker*: **Yael Kalai, Microsoft **
*Abstract*:
We give two results. The first is a general reduction converting any public-coin interactive proof into a one-round (two-message) argument. It is based on a method of Aiello et al., using a Private-Information-Retrieval (PIR) scheme to collapse rounds in interactive proofs. For example, the reduction implies that for any security parameter t, membership in any language in PSPACE can be proved by a one-round (two-message) argument of size poly(n,t), which is sound for malicious provers of size 2^t.

(Note that the honest prover in this construction runs in exponential time, since she has to prove membership in PSPACE, but we can choose t such that 2^t is significantly larger than the running time of the honest prover).

We then define the notion of a *probabilistically checkable argument* (PCA). This is a relaxation of the notion of probabilistically checkable proof (PCP). It is defined analogously to PCP, except that the soundness property is required to hold only computationally.

We consider the model where each verifier is associated with a public key, and each PCA is verifier-dependent, i.e., it depends on the verifier's public key. (The key does not need to be certified, and we can assume that the verifier simply publishes it on his web-page). We show that every NP language, verifiable by a poly-size formula, has a PCA (with efficient honest prover) of size polynomial in the size of the *witness*. This compares to the best PCPs that are of size polynomial in the size of the *instance* (that may be significantly larger). The number of queries to these PCAs is poly-logarithmic.

This result is proved via a general reduction that converts any *interactive-PCP* (with certain properties) into a PCA, together with a very recent interactive-PCP construction of Goldwasser et al.

The soundness property, in all our results, relies on exponential hardness assumptions.

This is joint work with Ran Raz.
**JOINT CIS/MICROSOFT TALK **
**Open To The Public **
*Date*: **Friday, Aug 08, 2008 **
*Time*: **10:30 am - 12:00 pm **
*Place*: **Microsoft, Charles Rm, 1st flr, 1 Mem Drive **
*Title*: **"A Framework for Efficient and Composable Oblivious Transfer"**
*Speaker*: **Vinod Vaikuntanathan, CSAIL, MIT **
*Host*: **Yael Kalai **
*Abstract*:
We propose a simple and general framework for constructing *oblivious transfer* (OT) protocols that are efficient, universally composable, and generally realizable from a variety of cryptographic assumptions, such as the decisional Diffie-Hellman assumption, the Quadratic Residuosity assumption and worst-case complexity assumptions relating to *lattices*. Our OT protocols are round-optimal (one message each way) and efficient in the parties' communication and local computation, and use only one reference string for an unbounded number of executions.

One of our key technical contributions is a unified view of several encryption schemes in the literature that have what we call *message-lossy* public keys, whose defining property is that a cipher-text produced under such a key carries *no information* (even statistically) about the encrypted message.

Joint work with Chris Peikert (SRI) and Brent Waters (SRI).

BIO: Vinod Vaikuntanathan is a Ph.D. candidate in the Theory of Computation group at MIT. His research interests lie in cryptography, distributed algorithms and complexity theory. *Upon arrival at One Memorial Drive, kindly approach the Lobby Security Desk and show a picture ID and sign the Building Visitor Log.

**Open To The Public **

*Date*: **Friday, Aug 1, 2008 **

*Time*: **10:30 am - 12:00 pm **

*Place*: **32-G449, Kiva, Stata Center, 32 Vassar St. **

*Title*: **"Chosen Ciphertext Security via Correlated Products"**

*Speaker*: **Alon Rosen, IDC Herzliya **

*Abstract*:

In this talk I will present a new notion of security, called one-wayness under correlated products. The question we are interested in is: under what conditions on a function f and a distribution over inputs (x1,...,xk) is the function (f(x1),...,f(xk)) one-way? The main motivation of this study is the construction of public-key encryption schemes that are secure against chosen-ciphertext attacks (CCA). We show that any collection of injective trapdoor functions that is secure under very natural correlated products can be used to construct a CCA-secure public-key encryption scheme. The construction is simple, black-box, and admits a direct proof of security.

We provide evidence that security under correlated products is achievable by demonstrating that any collection of lossy trapdoor functions, a powerful primitive introduced by Peikert and Waters (STOC '08), yields a collection of injective trapdoor functions that is secure under the above-mentioned natural correlated products. Although we eventually base security under correlated products on lossy trapdoor functions, we argue that the former notion is potentially weaker as a general assumption. Specifically, there is no fully-black-box construction of lossy trapdoor functions from trapdoor functions that are secure under correlated products. Joint work with Gil Segev.

**Open To The Public **

*Date*: **Friday, July 18, 2008 **

*Time*: **11:00 am - 12:30 pm **

*Place*: **32-G449, Kiva, Stata Center, 32 Vassar St. **

*Title*: **Efficient Protocols in the Presence of Covert and Malicious Adversaries**

*Speaker*: **Carmit Hazay, Bar Ilan University **

*Abstract*:

In this talk we present efficient secure protocols for a variety of tasks, including oblivious transfer, set intersection and pattern matching. Our protocols for securely computing the set intersection functionality are based on secure pseudorandom function evaluations, in contrast to previous protocols that used secure polynomial evaluation. We also use secure pseudorandom function evaluation in order to achieve secure pattern matching.

In this case, we utilize specific properties of the Naor-Reingold pseudorandom function in order to achieve high efficiency. Finally, we show that using standard smartcards it is possible to construct truly practical secure protocols, and demonstrate this on the problem of set intersection.

We consider a variety of adversary models and definitions of security in our results. Some of our protocols are secure in the presence of malicious adversaries with full simulation (via the ideal/real paradigm), and some provide only privacy. We also present protocols that are secure in the presence of covert adversaries. Loosely speaking, this means that a malicious adversary can cheat, but will then be caught with good probability.

**Open To The Public **

*Date*: **Friday, May 16, 2008 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Kiva, Stata Center, 32 Vassar St. **

*Title*: **Non-Malleable Obfuscation **

*Speaker*: **Mayank Varia, Math Dept., MIT **

*Abstract*:

Existing definitions of program obfuscation do not rule out malleability attacks, where an adversary that sees an obfuscated program is able to generate another (potentially obfuscated) program that is related to the original one in some way.

We formulate two quite different flavors of non-malleability requirements for program obfuscation, and construct non-malleable obfuscators of both flavors for some program families of interest. Some of our constructions are in the Random Oracle model, whereas another one is in the common reference string model. We also define the notion of verifiable obfuscation which is of independent interest. This is joint work with Ran Canetti.

**Open To The Public **

*Date*: **Friday, May 9, 2008 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Outsourced Storage and Compact Proofs of Retrievability **

*Speaker*: **Hovav Shacham, UCSD **

**(Joint work with Brent Waters, SRI International) **

*Abstract*:

In a proof-of-retrievability system, a data storage center must prove to a verifier that he is actually storing all of a client's data. The central challenge is to build systems that are both efficient and provably secure -- that is, it should be possible to extract the client's data from any prover that passes a verification check. All previous provably secure solutions require that a prover send *O(l)* authenticator values (i.e., MACs or signatures) to verify a file, for a total of *O(l ^{2})* bits of communication, where l is the security parameter.

We create the first compact and provably secure proof of retrievability systems. Our solutions allow for compact proofs with just one authenticator value -- in practice this can lead to proofs with as little as 40 bytes of communication. We present two solutions with similar structure. The first one is privately verifiable and builds elegantly on pseudorandom functions (PRFs); the second allows for publicly verifiable proofs and is built from the signature scheme of Boneh, Lynn, and Shacham in bilinear groups. Both solutions rely on homomorphic properties to aggregate a proof into one small authenticator value.
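The aggregation idea behind the privately verifiable solution can be sketched with a toy homomorphic authenticator of the form sigma_i = f_key(i) + alpha * m_i over a prime field: linearity lets the prover compress any challenged subset into a single (mu, sigma) pair. The SHA-256-based stand-in PRF and all parameter choices below are illustrative, not the paper's instantiation:

```python
import hashlib

P = (1 << 127) - 1  # toy prime modulus (2^127 - 1 is a Mersenne prime)

def prf(key, i):
    """Stand-in PRF f_key(i); a real scheme needs a proper keyed PRF."""
    digest = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % P

def authenticate(key, alpha, blocks):
    """One authenticator per block: sigma_i = f_key(i) + alpha * m_i mod p."""
    return [(prf(key, i) + alpha * m) % P for i, m in enumerate(blocks)]

def prove(blocks, sigmas, challenge):
    """Prover aggregates the challenged blocks (i, nu_i) into one pair:
    mu = sum nu_i * m_i, sigma = sum nu_i * sigma_i (both mod p)."""
    mu = sum(nu * blocks[i] for i, nu in challenge) % P
    sigma = sum(nu * sigmas[i] for i, nu in challenge) % P
    return mu, sigma

def verify(key, alpha, challenge, mu, sigma):
    """By linearity, sigma must equal sum nu_i * f_key(i) + alpha * mu."""
    expected = (sum(nu * prf(key, i) for i, nu in challenge) + alpha * mu) % P
    return sigma == expected
```

However large the challenged subset, the response is a single field-element pair, which is the "compact proof" property described above.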

**Open To The Public **

*Date*: **Friday, April 18, 2008 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Online Untransferable Signatures **

*Speaker*: **Moses Liskov, The College of William and Mary **

*Abstract*:

Non-transferability of digital signatures is an important security concern, traditionally achieved via interactive verification protocols. Such protocols, however, are vulnerable to "online transfer attacks" -- i.e. attacks mounted during the protocols' executions.

In this paper, we show how to guarantee online untransferability of signatures, via a reasonable public-key infrastructure and general assumptions, without random oracles. Our untransferable signatures are comparable in efficiency to the best prior schemes that provide provably weaker forms of security; in particular, we make no use of general zero-knowledge proofs.

Joint work with Silvio Micali; paper published at PKC 2008.

Bio: Moses Liskov received his PhD from MIT in 2004 and is now an assistant professor in the computer science department of the College of William and Mary in Williamsburg, Virginia.
**Open To The Public **
*Date*: **Friday, April 4, 2008 **
*Time*: **10:30 am - 12 pm **
*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **
*Title*: **Isolated Proofs of Knowledge and Isolated Zero Knowledge **
*Speaker*: **Daniel Wichs, NYU **
Authors: **Ivan Damgaard and Jesper Buus Nielsen and Daniel Wichs**
**http://eprint.iacr.org/2007/331**
*Abstract*:
We introduce a new notion called $\ell$-isolated proofs of knowledge ($\ell$-IPoK). These are proofs of knowledge where a cheating prover is allowed to exchange up to $\ell$ bits of communication with some external adversarial environment during the run of the proof.

Without any additional setup assumptions, no witness hiding protocol can be an $\ell$-IPoK for \emph{unbounded} values of $\ell$. However, for any \emph{pre-defined} threshold $\ell$ and any relation in NP, we construct an $\ell$-IPoK protocol for that relation. The resulting protocols are zero knowledge (ZK) in the standard sense, i.e., w.r.t. a verifier that communicates only with the prover during the proof. The cost of having a large threshold $\ell$ is a large communication complexity of the constructed protocol. We analyze these costs and present a solution that is asymptotically optimal.

If a cheating verifier is allowed to communicate arbitrarily with an external environment, it is not possible to construct an $\ell$-IPoK that is also ZK with respect to such a verifier. As another new notion, we define $\ell$-isolated zero knowledge ($\ell$-IZK) where the verifier is $\ell$-isolated. For every relation in NP and every $\ell$, we construct an $\ell$-IPoK protocol that is also $\ell$-IZK.

We describe several applications of $\ell$-IPoK protocols under the physical assumption that one can $\ell$-isolate a prover for the duration of the proof phase. Firstly, we can use a witness indistinguishable (WI) $\ell$-IPoK to prevent ``man-in-the-middle'' attacks on identification schemes. Prior results for this scenario required all verifiers to register keys under a PKI, or the ability to fully isolate the prover. Secondly, a partially isolated prover can register a public key and use a WI $\ell$-IPoK to prove knowledge of the corresponding secret key to another party acting as a verifier. This allows us to set up a PKI where the key registrant does not need to trust the Certificate Authority. The PKI is not perfect since the proof is only witness indistinguishable and not zero knowledge. In a companion paper, we show how to set up such a PKI and use it to implement arbitrary multiparty computation securely in the UC framework without relying on any trusted third parties.

**Open To The Public **

*Date*: **Friday, March 28, 2008 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*:

*Speaker*: **Dominique Unruh **

*Abstract*:

In a coercion attack, an attacker forces a basically honest party to deviate from the protocol; the protocol is only considered secure if the coerced party is able to follow its original protocol while making the attacker believe that the party followed the attacker's instructions.

We present an extension of the Universal Composability framework that allows one to model the security of cryptographic protocols against coercion. The framework allows general protocol tasks to be modeled as ideal functionalities and guarantees secure composition.

Various degrees of incoercibility can be expressed, ranging from deniability (where the honest party only has to deny after the protocol execution that it performed some actions) to full incoercibility (where the attacker may force the party to deviate from the protocol to produce proofs of its actions).

We compare the framework with the Global UC framework by Canetti, Dodis, Pass and Walfish which also models deniability. We argue that the Global UC framework models deniability against an outside adversary, while our framework models deniability or incoercibility against an inside adversary partaking in the protocol.

Finally, we investigate composable incoercible commitments. (Joint work with Jorn Muller-Quade.)

**Open To The Public **

*Date*: **Friday, March 14, 2008 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **One-Time Programs **

*Speaker*: **Guy Rothblum, CSAIL, MIT **

*Abstract*:

In this work we introduce one-time programs, a new computational paradigm geared towards security applications. A one-time program is a program that can be executed on a single input, specified by the user at run time. Nothing is leaked about the program's code or functionality except its output on this single input. Hence, a one-time program is like a ``black box'' function that may be evaluated once and then ``self destructs''. This concept easily extends to k-time programs, which are like black box functions that can be evaluated k times before they self destruct.

One-time programs serve many of the same purposes of program obfuscation, the obvious one being software protection. However, the applications of one-time programs go beyond those of obfuscation. One-time programs only allow a user to learn the program's output on a bounded number of inputs, whereas obfuscated programs can be run an arbitrary number of times. For example, one-time programs lead naturally to electronic cash or token schemes: coins are generated by a program that can only be run once, and thus cannot be double spent.

Moreover, the new paradigm of one-time computing opens new avenues for conceptual research. We explore one such avenue, presenting the new concept of ``one-time proofs'', proofs that can only be verified once and then become useless and unconvincing.

All the above tasks are impossible using software alone, as any piece of software can be copied and run again, enabling the user to execute the program on more than one input. Our solutions employ a very simple and cheap secure hardware device. We show that any standard program (Turing Machine) can be efficiently compiled into a functionally equivalent one-time program that employs this simple hardware device. We can similarly transform a classic NP witness for a statement into a one-time proof for the statement; this one-time proof employs the same hardware device.

Joint work with Shafi Goldwasser and Yael Kalai.

**Open To The Public **

*Date*: **Friday, March 7, 2008 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Complete Fairness in Secure Two-Party Computation **

*Speaker*: **Jonathan Katz, University of Maryland **

*Abstract*:

A well known result of Cleve shows that completely-fair secure two-party computation is impossible. His result, however, only implies that complete fairness is impossible *in general*. In this work, we ask whether there are *any* interesting examples of functions that can be computed with complete fairness in the two-party setting, and give a partial answer to this question.

This is joint work with Carmit Hazay, Dov Gordon, and Yehuda Lindell.

**Open To The Public **

*Date*: **Friday, Feb 29, 2008 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **What Can We Learn Privately? **

*Speaker*: **Adam Smith, Penn State University **

*Abstract*:

A recent line of work investigates the possibility of publishing "anonymized" statistical summaries that reveal global properties of a database of sensitive information (think of data from a census, social network, or medical study), without revealing any information specific to the individuals in the database.

In this talk, we ask what types of *learning* problems can be solved (and their results, published) while preserving rigorous notions of privacy. Learning problems form an important category of computational tasks that includes many of the computations applied to large, real-life datasets.

Roughly, our study corresponds to asking: what learning problems can be solved by algorithms that do not depend heavily on any single input?

Some highlights: a) Ignoring computation time, but retaining the restrictions of polynomial sample complexity and a polynomially-sized output, the class of solvable classification problems (roughly, Valiant's "PAC" model) is the same with and without the constraint of preserving privacy. b) There are polynomial-time *private* learning algorithms for the PARITY problem. Together with earlier results on "statistical query" (SQ) learning, this implies that all commonly studied learning classes have efficient private learners. c) The power of "local" privacy-preserving publication mechanisms is equivalent to SQ. Local mechanisms, in which each contributing individual anonymizes his/her own data, generalize "randomized response" and other techniques. The data mining community has studied local mechanisms extensively. Our results imply that these popular mechanisms are strictly less powerful than the more general model used above for learning parity.
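The "randomized response" technique mentioned in (c) is easy to make concrete: each individual reports their true bit with probability 3/4 and its flip otherwise, and the aggregator debiases the reported mean. A toy sketch (the 3/4 parameter is an illustrative choice):

```python
import random

def randomized_response(bit):
    """Warner-style randomized response (sketch): report the true bit
    with probability 3/4, its flip with probability 1/4. Each report
    individually reveals little about the true bit."""
    return bit if random.random() < 0.75 else 1 - bit

def estimate_mean(reports):
    """Unbiased estimate of the true mean from noisy reports:
    E[report] = 0.75*p + 0.25*(1 - p) = 0.25 + p/2, so invert that."""
    mean_report = sum(reports) / len(reports)
    return 2 * (mean_report - 0.25)
```

This is the "local" model: the perturbation happens at each individual before any data is collected, so no trusted aggregator is needed.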

Based on joint work with Shiva Kasiviswanathan, Homin Lee, Kobbi Nissim and Sofya Raskhodnikova. *The talk will be self-contained.* However, attendees may also be interested in a talk to be given by Sofya Raskhodnikova (on different, but related, results) on Thursday, 2/28 in the algorithms seminar.

**Open To The Public **

*PLEASE NOTE*: **CHANGE of TIME and PLACE **

*Date*: **Friday, Feb 22, 2008 **

*Time*: **3:30 pm - 5:00 pm **

*Place*: **32-D463, Star Conf. rm, Stata Center, 32 Vassar St. **

*Title*: **Black-Box Complexity of Non-Malleable Encryption **

*Speaker*: **Hoeteck Wee, Columbia University **

*Abstract*:

A non-malleable encryption scheme is one wherein, given an encryption of a message, it is infeasible to generate an encryption of a related message. We exhibit a black-box construction of a non-malleable encryption scheme from any semantically secure one, which improves upon the previous non-black-box construction of MIT students/alumni Pass, Shelat and Vaikuntanathan (Crypto '06). Our result implies that non-malleable and semantically secure encryption schemes are in fact equivalent with respect to black-box constructions.

Our construction departs from the oft-used paradigm of re-encrypting the same message with different keys and then proving consistency of encryptions in zero-knowledge. Instead, we encrypt an encoding of the message with certain locally testable and self-correcting properties. In this talk, I will focus on how we exploit properties of low-degree polynomials to compute such an encoding.

This is joint work with MIT alumni Tal Malkin and her students Seung Geol Choi and Dana Dachman-Soled.
**Open To The Public **
*Date*: **Friday, Feb 8, 2008 **
*Time*: **10:30 am - 12 pm **
*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **
*Title*: **Towards the Bare Bones of Trust: A Tour of Trust Assumptions for Obtaining Universally Composable Security **
*Speaker*: **Ran Canetti, IBM, CSAIL **
*Abstract*:
A desirable goal for cryptographic protocols is to guarantee security when the protocol is composed with other protocol instances. Universally Composable (UC) security provides this guarantee in a strong sense: A UC-secure protocol maintains its security properties even when composed concurrently with an unbounded number of instances of arbitrary protocols. However, many interesting cryptographic tasks are provably impossible to realize with UC security, unless some trusted set-up is assumed.

We review the basic impossibility results and examine a number of quite different trust models that were recently demonstrated to suffice for constructing UC-secure protocols that realize practically any cryptographic task. (This is a re-run of a talk given at Asiacrypt 2007.)

**Open To The Public **

*Date*: **Friday, Jan 18, 2008 **

*Time*: **10:30 am - 12 pm **

*Place*: **G449, Patil/Kiva, 32 Vassar St.**

*Title*: **Lossy Trapdoor Functions and Their Applications **

*Speaker*: **Brent Waters, Stanford Research Institute **

*Authors*: **Chris Peikert and Brent Waters**

*Abstract*:

In this talk I'll introduce a new general primitive called lossy trapdoor functions (lossy TDFs). Using lossy TDFs, I'll show new approaches for constructing many important cryptographic primitives, including standard trapdoor functions, CCA-secure cryptosystems, collision-resistant hash functions, and more. All of our constructions are simple, efficient, and black-box.

In addition, I'll describe how to realize lossy TDFs under number theoretic assumptions, including hardness of the decisional Diffie-Hellman (DDH) problem and the worst-case hardness of standard lattice problems. Taken all together, these results resolve some long-standing open problems in cryptography. They give the first known (injective) trapdoor functions based on problems not directly related to integer factorization, and provide the first known CCA-secure cryptosystem based solely on worst-case lattice assumptions.

**Open To The Public **

*Date*: **Friday, Jan 11, 2008 **

*Time*: **10:30 am - 12 pm **

*Place*: **G449, Patil/Kiva, 32 Vassar St.**

*Title*: **A Framework for Efficient and Composable Oblivious Transfer **

*Speaker*: **Vinod Vaikuntanathan, CSAIL, MIT **

*Abstract*:

We propose a simple and general framework for constructing *oblivious transfer* (OT) protocols that are efficient, universally composable, and generally realizable from a variety of cryptographic assumptions, such as the decisional Diffie-Hellman assumption, the Quadratic Residuosity assumption and worst-case complexity assumptions relating to *lattices*. Our OT protocols are round-optimal (one message each way) and efficient in the parties' communication and local computation, and use only one reference string for an unbounded number of executions.

One of our key technical contributions is a unified view of several encryption schemes in the literature that have what we call "message-lossy" public keys, whose defining property is that a ciphertext produced under such a key carries *no information* (even statistically) about the encrypted message.

Joint work with Chris Peikert and Brent Waters.
**CANCELLED - To Be Rescheduled in the Spring**
*Date*: **Friday, December 14, 2007 **
*Time*: **10:30 am - 12 pm **
*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **
*Title*: **Sampling, Learning and Data Privacy **
*Speaker*: **Adam Smith, PSU **

**Open To The Public **

*Date*: **Friday, December 14, 2007 **

*Time*: **2:30 pm - 4:00 pm **

*Place*: **32-G463, Star conference room, 32 Vassar St. **

*Title*: **Daoli--Grid security via Trusted Computing protected virtualization **

*Speaker*: **Wenbo Mao, Director of EMC Research China **

*Abstract*:

A grid builds, for a resource-scarce user, a virtual organization (VO) of unbounded computational and/or storage capacity by pooling heterogeneous resources from real organizations. A VO is a multitenancy environment in two senses: (i) a user is a tenant of multiple resource providers (lessors) for one batch of jobs, and (ii) each lessor can host multiple tenants. Ideally, commercial organizations, in particular resource-under-utilized financial institutions, should "go for grid" and become lessors. However, such multitenancy grids have not yet seen commercial adoption. Inadequate grid security is a main hurdle keeping commercial organizations from becoming lessors. A missing security service is behavior conformity: a tenant mustn't cause damage to the lessor or other tenants, and conversely, the lessor mustn't compromise the proprietary code/data of a tenant.

Project Daoli attempts to strengthen grid security by adding behavior conformity. We will apply Trusted Computing Group (TCG) technology as our means to behavior conformity, and we do so by working on virtualization at two layers of the software stack. In the OS layer, a highly-privileged hypervisor for memory arbitration will be measured by a Trusted Platform Module (TPM) to achieve isolation between processes of different tenants. Above the OSes, a grid middleware will achieve virtualization of hardware platforms and commodity OSes so that a tenant's unique VO code for remote policy enforcement can run across a heterogeneous environment. The VO code and/or data that need confidentiality and/or integrity protection are secured by cryptographic credentials. By calling the standard credential-migration function of TCG, a VO's credentials can be migrated from one TPM to another along the leased platforms.

**Open To The Public **

*Date*: **Friday, December 7, 2007 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Non-Interactive Anonymous Credentials **

*Speaker*: **Mira Belenkiy, Brown University **

*Authors*: **Mira Belenkiy, Melissa Chase, Markulf Kohlweiss and Anna Lysyanskaya **

*Abstract*:

We give the first construction of a practical non-interactive anonymous credential system whose security does not rely on the random oracle model. We introduce P-signatures (Signatures with efficient Protocols) as our main building block. A P-signature scheme consists of a signature scheme, a commitment scheme, and (1) an interactive protocol for obtaining a signature on a committed value; (2) a non-interactive proof system for proving that the contents of a commitment have been signed; (3) a non-interactive proof system for proving that a pair of commitments are commitments to the same value. P-signatures are a powerful tool that may serve as a useful building block for other privacy-preserving authentication mechanisms.

Our construction makes extensive use of the recently developed Groth-Sahai proof system. We present some new notation that makes it easier to use Groth-Sahai proofs as a building block, which may be of independent interest.
**Open To The Public **
*Date*: **Friday, November 9, 2007 **
*Time*: **10:30 am - 12 pm **
*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **
*Title*: **Trapdoors for Hard Lattices, with Cryptographic Applications **
*Speaker*: **Chris Peikert, SRI International **
*Abstract*:
Cryptographic schemes based on *lattices* are attractive for several reasons: their computations are simple and of low complexity, they have thus far resisted quantum attacks, and their security can be based on the *worst-case* hardness of lattice problems. To date, constructions of such primitives have unfortunately been limited mainly to collision-resistant hashing and public-key encryption.

This talk will describe how to securely generate and exploit "trapdoors" in lattices. The cryptographic applications include various kinds of trapdoor functions, simple "hash-and-sign" digital signature schemes, and identity-based encryption, all based on worst-case lattice assumptions. The talk will be self-contained. Joint work with Craig Gentry and Vinod Vaikuntanathan.

**Open To The Public **

*Date*: **Friday, November 2, 2007 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **On UC Security, Deniability and Global Setup**

*Speaker*: **Yevgeniy Dodis, New York University **

*Abstract*:

In this talk we extend the Universal Composability (UC) framework to properly model global setup, such as the Common Reference String (CRS), Public-Key Infrastructure (PKI) and Random Oracle (RO) models. The new, Generalized UC (GUC) framework has many advantages over the traditional UC framework.

1) It guarantees deniability.
2) It allows one to use the same setup with *arbitrary* protocols, as opposed to specially designed protocols (as in ordinary UC).
3) It is more natural than UC; for example, it removes artificial restrictions of the latter, resulting in shorter definitions.
4) One can still model UC setup in GUC, but the resulting setup assumptions become very hard to realize. This explains several existing criticisms of the UC modeling of the CRS/PKI models.

We also show that the global CRS model is insufficient to realize most useful tasks in GUC. However, we introduce a slight strengthening of the CRS setup --- augmented CRS (ACRS) --- which allows one to GUC-realize any cryptographic task. This novel setup (necessarily) introduces a semi-passive trusted party into the CRS model, but has the following win-win guarantee: if the trusted party is present, we get (very strong) GUC security; if it is not, we get a functional equivalent of the CRS model in the UC framework, but with provably stronger security guarantees! The paper is available at http://eprint.iacr.org/2006/432

**Open To The Public **

*Date*: **Friday, October 19, 2007 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Space-Efficient Identity Based Encryption Without Pairings **

*Speaker*: **Dan Boneh, Stanford University **

*Abstract*:

Identity Based Encryption (IBE) systems are often constructed using pairings on elliptic curves. One exception is an elegant system due to Cocks which builds an IBE based on the quadratic residuosity problem modulo an RSA composite N. The Cocks system, however, produces long ciphertexts. Since the introduction of the Cocks system in 2001 it has been an open problem to construct a space efficient IBE system without pairings.

We present an IBE system in which ciphertext size is short: an encryption of an s-bit message consists of a single element in Z_N plus s+1 additional bits. Security, as in the Cocks system, relies on the quadratic residuosity problem. The system is based on the theory of ternary quadratic forms.

**Open To The Public **

*Date*: **Friday, October 19, 2007 **

*Time*: **1:30 pm - 3:00 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Code-Based Game-Playing Proofs and the Security of Triple Encryption **

*Speaker*: **Phillip Rogaway, UC Davis & Chiang Mai University **

*Abstract*:

The game-playing technique is a powerful tool for analyzing cryptographic constructions. We illustrate this by using games as the central tool for proving security of three-key triple-encryption, a long-standing open problem. Our result, which is in the ideal-cipher model, demonstrates that for DES parameters (56-bit keys and 64-bit plaintexts) an adversary's maximal advantage is small until it asks about 2^{78} queries. Beyond this application, we begin to develop the foundations for game playing, formalizing a general framework for game-playing proofs and discussing techniques used within such proofs.

Joint work with Mihir Bellare. Paper appeared in EUROCRYPT '06.

**Open To The Public **

*Date*: **Friday, October 12, 2007 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Cryptography from sunspots: How to use an imperfect reference string**

*Speaker*: **Abhi Shelat, University of Virginia **

*Abstract*:

The Common Reference String (CRS) model enables otherwise-impossible cryptographic goals such as removing interaction from protocols and guaranteeing composable security. However, the CRS model requires the reference string to be sampled from a fixed and known distribution (e.g., the uniform distribution); security analyses of all current protocols fail when the actual distribution of the reference string differs from the specified one even by a small amount.

This fact rules out a large class of potential implementations of the CRS model such as measurements of physical phenomena (like sunspots), or alternatively using random sources that might be adversarially influenced.

We study the possibility of obtaining universally composable (UC) security in a relaxed variant of the CRS model in which the reference string is sampled from an *arbitrary, adversarially specified* distribution that is unknown to the protocol. On the positive side, we demonstrate that UC general secure computation is obtainable in this model as long as: (a) this distribution has some minimal min-entropy, (b) it is efficiently samplable, (c) the sampling algorithm has not too long a description, and (d) the sampling algorithm is known to the adversary (and simulator). On the negative side, we show that if any one of these four conditions is removed then general UC secure computation becomes impossible.

**Open To The Public **

*Date*: **Friday, October 5, 2007 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Precise Cryptography **

*Speaker*: **Rafael Pass, Cornell University **

*Abstract*:

The seminal work of Goldwasser, Micali and Rackoff put forward a computational approach to knowledge in interactive systems, providing the foundation of modern cryptography. Their notion, and its refinement by Goldreich, Micali and Wigderson, bounds the knowledge of a player in terms of his *potential* computational power, technically defined as its worst-case running-time. We put forward a stronger notion that precisely bounds the knowledge gained by a player in an interaction in terms of the *actual* computation it has performed in that *particular* interaction.

Our approach---which is exemplified in the contexts of zero-knowledge proofs, proofs of knowledge, encryption and secure computation---not only remains valid even if P = NP, but is most meaningful when modeling knowledge of computationally easy properties.

**Open To The Public **

*Date*: **Friday, September 21, 2007 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Highly Efficient Zero Knowledge Proofs of Correctness of Computations, and Applications **

*Speaker*: **Michael Rabin, Harvard University **

*Abstract*:

We consider a new model of an Evaluator-Prover who receives input values from parties P_1, ..., P_n, performs a computation on these values, and publishes the result together with a ZKP of its correctness. The efficiency is achieved by working directly with input numbers rather than at the bit/circuit level, yielding a hundred-fold efficiency improvement over methods employing homomorphic encryptions. Classical ZKPs arise as the special case n = 0. Applications include practical secure and secrecy-preserving auctions. The presentation will be self-contained.

Joint work with Rocco Servedio and Chris Thorpe.

**Open To The Public **

*Date*: **Friday, May 4, 2007 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Unconditional Relationships within Zero Knowledge **

*Speaker*: **Shien Jin Ong, Harvard U. **

*Abstract*:

We establish complexity-theoretic characterizations of the classes of languages in NP having zero-knowledge argument systems. Using these characterizations, we show that for languages in NP:

- Instance-dependent commitment schemes are necessary and sufficient for zero-knowledge protocols. Instance-dependent commitment schemes for a given language are commitment schemes that can depend on the instance of the language, and where the hiding and binding properties are required to hold only on the YES and NO instances of the language, respectively.

- Computational zero knowledge and computational soundness (a property held by argument systems) are symmetric properties. Namely, we show that the class of languages in NP intersect co-NP having zero-knowledge arguments is closed under complement, and that a language in NP has a statistical zero-knowledge **argument** system if and only if its complement has a **computational** zero-knowledge proof system.

- A method of transforming any zero-knowledge protocol that is secure only against an honest verifier that follows the prescribed protocol into one that is secure against malicious verifiers. In addition, our transformation gives us protocols with desirable properties like having public coins, being black-box simulatable, and having an efficient prover.

The novelty of our results above is that they are **unconditional**, meaning that they do not rely on any unproven complexity assumptions such as the existence of one-way functions.

Joint work with Iftach Haitner, Minh-Huyen Nguyen, Omer Reingold, and Salil Vadhan.

**Open To The Public **

*Date*: **Friday, April 27, 2007 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **SFE and MPC separate **

*Speaker*: **Ueli Maurer, ETH-Zurich **

*Abstract*:

Secure function evaluation (SFE) allows a set of players to compute an arbitrary agreed function of their private inputs, even if an adversary may corrupt some of the players. Secure multi-party computation (MPC) is a generalization that allows an arbitrary on-going (also called reactive or stateful) computation, during which players can receive outputs and provide new inputs at intermediate stages. In both cases, the functionality is described by an algebraic circuit over a finite field.

Three types of corruptions are usually considered: active, passive, and fail corruptions. The adversary's corruption power is characterized by a constraint on which players he can potentially corrupt, and in which way. One is interested in the exact condition for which SFE and MPC are possible. So far, only restricted settings were considered where either the condition is a threshold condition or where not all three corruption types were considered at the same time. For all these models, the exact conditions are identical for perfectly secure SFE and MPC.

We present a very simple approach to secure computation and, based on it, establish the exact conditions for perfectly secure SFE and MPC for the natural general adversary model allowing active, passive, and fail corruptions of players. Surprisingly, these two conditions are distinct, i.e., there exist adversaries against which secure SFE is possible, but secure MPC is not possible.

This is joint work with Zuzana Beerliova-Trubiniova, Matthias Fitzi, Martin Hirt, Vassilis Zikas.
**Open To The Public **
*Date*: **Friday, April 20, 2007 **
*Time*: **10:30 am - 12 pm **
*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **
*Title*: **"Order Computations in Generic Groups: Beating the Birthday Paradox" **
*Speaker*: **Andrew Sutherland, MIT **
*Abstract*:
Computing the order of an element in a generic group is a provably hard problem (assuming no information about the group is available) requiring exponential time. The two standard methods used to solve this problem, Pollard's rho method and Shanks' baby-steps giant-steps algorithm, both have complexity \Theta(\sqrt{N}). An \Omega(\sqrt{N}) lower bound exists for the discrete log problem and has been conjectured for order computation.

We show this conjecture to be false by presenting a new algorithm with complexity o(\sqrt{N}). The constant factors involved are small, and in practice the new algorithm is substantially faster than existing methods.
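For context on the Θ(√N) baseline that the talk's new algorithm beats, here is a minimal Python sketch of Shanks' baby-steps giant-steps method, specialized to discrete logarithms in Z_p^* (the function name and interface are illustrative, not taken from the talk; a generic-group version would replace the modular arithmetic with black-box group operations):

```python
import math

def bsgs_dlog(g, h, N, p):
    """Baby-steps giant-steps: find x in [0, N) with g^x = h (mod p),
    using O(sqrt(N)) group operations and O(sqrt(N)) storage.
    Returns None if no such x exists."""
    m = math.isqrt(N - 1) + 1          # m = ceil(sqrt(N))
    # Baby steps: tabulate g^j for j = 0..m-1
    table = {}
    gj = 1
    for j in range(m):
        table.setdefault(gj, j)
        gj = gj * g % p
    # Giant steps: walk h * (g^-m)^i and look for a collision with the table
    g_inv_m = pow(g, -m, p)            # modular inverse of g^m (Python 3.8+)
    y = h % p
    for i in range(m):
        if y in table:
            return i * m + table[y]    # x = i*m + j
        y = y * g_inv_m % p
    return None
```

For example, with p = 101 and generator g = 2, `bsgs_dlog(2, 14, 100, 101)` recovers x = 10 since 2^10 mod 101 = 14. The √N space/time trade-off of this collision search is exactly the "birthday"-style barrier the abstract refers to.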
**Open To The Public **
*Date*: **Friday, April 13, 2007 **
*Time*: **10:30 am - 12 pm **
*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **
*Title*: **A Cryptographic Study of Secure Internet Measurement **
*Speaker*: **David Xiao, Princeton U. **
*Abstract*:
The Internet is an indispensable part of our information society, yet its basic foundations remain vulnerable to simple attacks; routing is one area that remains especially susceptible. There have been increasing efforts in the networking community to incorporate security into current routing protocols, and secure Internet measurement is necessary to inform any routing protocol. In this talk we will show how to use theoretical tools to give a rigorous treatment of this security problem. We believe our work shows that rigorous techniques, and even tools for negative results such as reductions to one-way functions or black-box separations, can have an immediate impact on the study of security problems of real-world importance.

We describe two definitions for this problem: fault detection (FD) where the honest parties only want to know if the packets they sent were dropped or modified or not, and fault localization (FL) where the honest parties want to know in addition where exactly their packets were modified or dropped. Besides traditional per-packet definitions where we want to know the fate of every packet, we also propose *statistical* definitions that reduce the communication and storage overhead of protocols yet retain useful security properties. We will sketch constructions of schemes that satisfy our security definitions and have desirable practical properties.

Next, we show the negative results implied by our definitions. In particular, we show the necessity of keys, cryptography, and storage in any secure FD or FL scheme. We will describe in detail the proof of our result that any secure black-box construction of an FL protocol requires cryptography to be performed at each node. This result uses a novel application of the black-box separation technique of Impagliazzo-Rudich and the learning algorithm of Naor-Rothblum.

This is joint work with Sharon Goldberg, Boaz Barak, and Jennifer Rexford.

**Open To The Public **

*Date*: **Friday, April 6, 2007 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Does Privacy Require True Randomness? **

*Speaker*: **Yevgeniy Dodis, NYU **

*Abstract*:

All known techniques for achieving privacy seem to fundamentally require (nearly) perfect randomness. We ask whether this is just a coincidence, or whether privacy inherently requires true randomness.

We resolve this question for the case of (information-theoretic) private-key encryption, where parties wish to encrypt a b-bit value using an n-bit shared secret key sampled from some imperfect source of randomness S. Our main result shows that if such n-bit source S allows for a secure encryption of b bits, where b > log n, then one can deterministically extract nearly b almost perfect random bits from S. Moreover, the restriction that b > log n is nearly tight.

Hence, to a large extent, true randomness is inherent for encryption: either the key length must be exponential in the message length b, or one can deterministically extract nearly b almost unbiased random bits from the key. In particular, *the one-time pad scheme is essentially "universal"*.
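For concreteness, the "universal" scheme in question is the classical one-time pad; a minimal sketch (the function name is mine, not from the talk). With a uniformly random key as long as the message, the ciphertext is statistically independent of the message, which is the perfect-randomness requirement the result shows to be essentially unavoidable:

```python
def otp_encrypt(key: bytes, msg: bytes) -> bytes:
    """One-time pad: XOR the message with a uniformly random key of the
    same length. Applying the same function to the ciphertext decrypts."""
    assert len(key) == len(msg), "the pad must be as long as the message"
    return bytes(k ^ m for k, m in zip(key, msg))
```

Decryption is simply `otp_encrypt(key, ciphertext)`, since XOR is its own inverse.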

Our technique also extends to related *computational* primitives which are "perfectly-binding", such as perfectly-binding commitment and computationally secure private- or public-key encryption, showing the necessity to *efficiently* extract almost b *pseudorandom* bits.
Joint work with Carl Bosley.
**Open To The Public **
*Date*: **Friday, March 16, 2007 **
*Time*: **10:30 am - 12 pm **
*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **
*Title*: **Robust Fuzzy Extractors and Authenticated Key Agreement from Close Secrets **
*Speaker*: **Leonid Reyzin, Boston University **
(Joint work with Yevgeniy Dodis, Jonathan Katz and Adam Smith)
*Abstract*:
Consider the problem of deriving high-quality keys from noisy data in the presence of an active adversary: two parties (or the same party at two different times), holding correlated secret inputs W and W', wish to agree on a uniformly distributed secret key R by sending a single message over an insecure channel.

We study both the keyless case, where the parties share no additional secret information, and the keyed case, where the parties share a long-term secret SK that they can use to generate a sequence of session keys R_j using multiple pairs (W_j, W'_j). The former has applications to, e.g., biometric authentication, while the latter arises in, e.g., the bounded storage model with errors.

Our results improve upon previous work in several respects:

- The best previous solution for the keyless case with no errors (i.e., W = W') required the min-entropy of W to exceed 2n/3, where n is the bit-length of W. Our solution applies whenever the min-entropy of W exceeds the minimal possible threshold n/2, and yields a longer key.

- Previous solutions for the keyless case in the presence of errors (i.e., W close, but not equal, to W') required random oracles. We give the first constructions in the standard model.

- Previous solutions for the keyed case were stateful, thus requiring synchronization between the parties. We give the first stateless solution.

**Open To The Public **

*Date*: **Friday, Feb 16, 2007 **

*Time*: **4:00 pm **

*Place*: **de Rothchild room, E-15 283A **

*Title*: **AttackDog - A Threat Modeling Tool for Security Research and Analysis on Voting Technology and Other Vulnerable Systems **

*Speaker*: **Eric Lazarus **

*Abstract*:

This talk will describe AttackDog, a multi-user shared environment for evaluating the security of systems by developing threat models and evaluating countermeasures to attacks. It is currently being used to explore the security capabilities of voting technologies, including the effort to understand the impact of deploying End-to-End Cryptographic Election technology. This threat modeling is being done in partnership with NIST and involves a team of more than 15 people from many institutions, including Harvard, Yale, Stanford, UC Davis, and Microsoft Research.

The talk will explore how the tool and threat modeling in general can be used.

**Open To The Public **

*Date*: **Friday, December 1, 2006 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Advances in Verifiable Voting Schemes **

*Speaker*: **Peter Ryan, University of Newcastle **

*Abstract*:

For centuries, the democratic process has largely been taken for granted and implicit trust has been placed in the paper ballot approach to casting and counting votes. In reality, the democratic process is one of considerable fragility, as the recent US presidential elections demonstrate.

For over a century, the US has been using technological approaches to recording and counting votes: lever machines, punch cards, optical readers, touch screen machines, prompted largely by widespread corruption with paper ballots. All have been prey to various scams and forms of corruption, or just plain malfunctions. Reports from Johns Hopkins and Princeton have demonstrated that many of the touch screen devices currently deployed in the US are wide open to virtually undetectable corruption.

In this talk I will discuss recent advances in cryptographically based, voter-verifiable schemes. These strive to provide high assurance of accuracy while preserving ballot secrecy and requiring minimal trust in hardware, software, officials, etc. Voters are able to confirm that their vote is included in the tally while having no way to prove to a third party how they voted, thus ensuring coercion-resistance.

In particular, I will outline the Prêt à Voter scheme and describe a number of vulnerabilities identified with the 2005 version. I will then describe enhancements designed to counter these vulnerabilities. These include the distributed generation of Prêt à Voter ballot forms in encrypted form, on demand decryption and printing of forms, re-encryption mixes for tabulation, and a variant of Adida/Rivest off-line auditing.

**Open To The Public **

*Date*: **Friday, November 17, 2006 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **The Geometry of Innocent Flesh on the Bone **

*Speaker*: **Hovav Shacham, Stanford University **

*Abstract*:

We present new techniques that allow a return-into-libc attack, one that calls no functions at all, to be mounted on x86 executables. Our attack combines a large number of short instruction sequences to build "gadgets" that allow arbitrary computation. We show how to discover such instruction sequences by means of static analysis. We make use, in an essential way, of the properties of the x86 instruction set.

*Date*: **Friday, October 27, 2006 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Constructing an Ideal Hash Function from Weak Ideal Compression Functions **

*Speaker*: **Moses Liskov, The College of William and Mary **

*Abstract*:

We introduce the notion of a weak ideal compression function, which is vulnerable to strong forms of attack, but is otherwise random. We show that such weak ideal compression functions can be used to create secure hash functions, thereby giving a design that can be used to eliminate attacks caused by undesirable properties of compression functions. We prove that the construction we give, which we call the "zipper hash," is ideal in the sense that the overall hash function is indistinguishable from a random oracle when implemented with these weak ideal building blocks.

The zipper hash function is relatively simple, requiring two compression function evaluations per block of input, but it is not streamable. We also show how to create an ideal (strong) compression function from ideal weak compression functions, which can be used in the standard iterated way to make a streamable hash function.
**Open To The Public **
*Date*: **Friday, October 13, 2006 **
*Time*: **10:30 am - 12 pm **
*Place*: **32-D463, Star conf. rm, Stata Center, 32 Vassar St. **
*Title*: **Mitigating Dictionary Attacks on Password-Protected Local Storage **
*Authors*: **Ran Canetti, Shai Halevi, Michael Steiner **
*Speaker*: **Shai Halevi, IBM TJ Watson **
*Abstract*:
We address the issue of encrypting data in local storage using a key that is derived from the user's password. The typical solution in use today is to derive the key from the password using a cryptographic hash function. This solution provides relatively weak protection, since an attacker that gets hold of the encrypted data can mount an off-line dictionary attack on the user's password, thereby recovering the key and decrypting the stored data.

We propose an approach for limiting off-line dictionary attacks in this setting without relying on secret storage or secure hardware. In our proposal, the process of deriving a key from the password requires the user to solve a puzzle that is presumed to be solvable only by humans (e.g., a CAPTCHA). We describe a simple protocol using this approach: many different puzzles are stored on the disk, the user's password is used to specify which of them need to be solved, and the encryption key is derived from the password and the solutions of the specified puzzles. Completely specifying and analyzing this simple protocol, however, raises a host of modeling and technical issues, such as new properties of human-solvable puzzles and some seemingly hard combinatorial problems. Here we analyze this protocol in some interesting special cases.
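The shape of the protocol can be sketched as follows. All names and parameters here are illustrative, not the paper's: the password deterministically selects which stored puzzles must be solved, and the key is derived from the password together with the human-provided solutions, so each off-line password guess would also require solving puzzles:

```python
import hashlib

def select_puzzles(password: bytes, num_stored: int, num_needed: int) -> list:
    # Derive, from the password alone, which stored puzzles the user must solve.
    indices = []
    counter = 0
    while len(indices) < num_needed:
        digest = hashlib.sha256(password + counter.to_bytes(4, "big")).digest()
        idx = int.from_bytes(digest[:4], "big") % num_stored
        if idx not in indices:
            indices.append(idx)
        counter += 1
    return indices

def derive_key(password: bytes, solutions: list) -> bytes:
    # The key depends on both the password and the puzzle solutions, so a
    # dictionary attacker cannot test guesses without human puzzle-solving.
    # Salt and iteration count are illustrative choices.
    material = password + b"".join(solutions)
    return hashlib.pbkdf2_hmac("sha256", material, b"illustrative-salt", 100_000)
```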

**Open To The Public **

*** Joint talk with TDS * **

*Date*: **Friday, September 22, 2006 **

*Time*: **10:30 am - 12 pm **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Sound cryptographic abstractions for automated proofs **

*Speaker*: **Birgit Pfitzmann, IBM Zurich **

*Abstract*:

Security-critical systems are an important application area for formal methods. However, such systems often contain cryptographic subsystems. The natural definitions of these subsystems are probabilistic and in most cases computational. Hence it is not obvious how one can treat cryptographic subsystems in a sound way within formal methods, in particular if one does not want to encumber the proof of an overall system by probabilities and computational restrictions due only to its cryptographic subsystems.

We survey our progress on integrating cryptography into formal models, in particular our work on reactive simulatability (RSIM), a refinement notion suitable for cryptography. We also present the underlying system model which unifies a computational and a more abstract presentation and allows generic distributed scheduling. We show the relation of RSIM and other types of specifications.

In particular, we present soundness results as well as impossibility results for the classical abstractions of cryptography used in the formal-methods community, the abstraction by initial models of term algebras, also called Dolev-Yao models or symbolic cryptography.

Homepage: http://www.zurich.ibm.com/~bpf

**Open To The Public **

**PhD Thesis Defense **

*Date*: **THURS, July 22, 2006 **

*Time*: **9:00 am **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Verifying Secret-Ballot Elections with Cryptography **

*Speaker*: **Ben Adida, CSAIL, MIT **

*Abstract*:

In the US, the secret ballot is 115 years old: the first 23 Presidents were elected using public polling. Introduced to stem voter coercion, the secret ballot carries, to this day, a significant auditability and transparency cost: how can voters be given direct assurance of their vote without enabling coercion? Cryptography often solves problems with conflicting requirements: in this case, cryptography can fully reconcile ballot secrecy and election auditability.

This talk presents an overview of cryptographic voting techniques developed over the last 20 years and introduces two new ideas: (1) Scratch & Vote, a practical, paper-based, cryptographic voting system that is particularly useful in illustrating the capabilities of cryptographic voting, and (2) Public Mixing, a new theoretical definition and construction that achieves anonymization (of votes, for example) through public computation. This work asks "if crypto voting is so good, why aren't we using it yet?" and offers some tentative answers.

Thesis Committee: Ronald L. Rivest, Srini Devadas, Shafi Goldwasser

http://ben.adida.net/thesis-defense/

**Open To The Public **

**Please Note Different Day, Time and Place **

*Date*: **WEDS, June 21, 2006 **

*Time*: **4:00 p.m.- 5:30 p.m. **

*Place*: **32-G575, Theory Lounge, Stata Center, 32 Vassar St. **

*Title*: **New Techniques for Zero Knowledge **

*Speaker*: **Amit Sahai, UCLA **

*Abstract*:

Non-interactive zero-knowledge (NIZK) protocols are fundamental cryptographic primitives used in many constructions. In this talk, we will discuss new techniques for constructing NIZK protocols using computational assumptions over elliptic curve groups and associated bilinear maps.

Using these new techniques, we resolve a number of open questions about NIZK, such as: * We show how to construct Perfect NIZK Arguments for any language in NP. * We show how to construct Non-interactive Witness-Indistinguishable Proofs for any language in NP, without a common reference string (such a result was only known previously under a non-standard assumption). * We show how to construct NIZK Proofs and Arguments for Circuit Satisfiability, where the length of the Common Reference String is O(k) and the length of the proof is O(k|C|), where |C| is the size of the circuit.

(Joint work with Jens Groth and Rafail Ostrovsky, appearing in Eurocrypt 2006 and Crypto 2006)

**Open To The Public **

*Date*: **Friday, June 2, 2006 **

*Time*: **2:00 p.m.- 3:30 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Building Secure Systems with Attestation **

*Speaker*: **Adrian Perrig, CMU **

*Abstract*:

Attestation is a promising approach for building secure systems. The recent development of a Trusted Platform Module (TPM) by the Trusted Computing Group (TCG) that is starting to be deployed in common laptop and desktop platforms is fueling research in attestation mechanisms. In this talk, I will present approaches on how to build secure systems with advanced TPM architectures. In particular, we have designed an approach for fine-grained attestation that enables the design of efficient secure distributed systems. We demonstrate this approach by designing a secure version of the BGP routing protocol.

We have also developed software-based attestation mechanisms for legacy systems without relying on trusted hardware. Our approach enables a verifier to obtain the property of untampered code execution on a legacy Pentium IV workstation.

Adrian Perrig is an Assistant Professor in Electrical and Computer Engineering, Engineering and Public Policy, and Computer Science at Carnegie Mellon University. He earned his Ph.D. degree in Computer Science from Carnegie Mellon University, and spent three years during his Ph.D. at the University of California at Berkeley. He received his B.Sc. degree in Computer Engineering from the Swiss Federal Institute of Technology in Lausanne (EPFL). Adrian's research interests revolve around building secure systems and include Internet security and security for sensor networks and mobile applications. Adrian is a recipient of the NSF CAREER award in 2004, the IBM faculty fellowship in 2004 and 2005, and the Sloan research fellowship in 2006.

**Open To The Public **

*Date*: **Friday, May 19, 2006 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Statistical Zero-Knowledge Arguments for NP from Any One-Way Function**

*Speaker*: **Shien Jin Ong, Harvard University**

(Joint work with Minh-Huyen Nguyen and Salil Vadhan)

*Abstract*:

We show that every language in NP has a statistical zero-knowledge argument system under the (minimal) complexity assumption that nonuniformly-secure one-way functions exist. In such protocols, even a computationally unbounded verifier cannot learn anything other than the fact that the assertion being proven is true, whereas a polynomial-time prover cannot convince the verifier to accept a false assertion except with negligible probability.

This resolves an open question posed by Naor, Ostrovsky, Venkatesan, and Yung (Crypto '92), who gave a construction based on one-way permutations. Constructions were also known under collision-resistant hashing and "approximable preimage size" one-way functions. The existence of one-way functions is implied by all of these assumptions, and is essentially the minimal assumption needed for nontrivial zero knowledge. A key tool in our construction is the notion of 1-out-of-2-binding commitments, recently introduced by Nguyen and Vadhan (STOC '06).

**Open To The Public **

*Date*: **Friday, May 12, 2006 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **The Rivest-Sherman Ciphertext-Only Attacks on Enigma-Like Machines**

*Speaker*: **Alan Sherman, University of Md, Baltimore County**

(Joint work with William Byrd, now at Indiana University)

*Abstract*:

We present, mathematically analyze, and experimentally evaluate ciphertext-only attacks on Enigma-like cryptographs that search for the rotors one at a time. Devised by Rivest and Sherman in 1980, but previously unpublished, the full attack is the first ciphertext-only attack on Enigma proposed in the open literature. In its basic one-stage version, the attack rank-orders all possible choices for the fastest-moving rotor and its initial position, given a basket of rotors with known rotor wirings. The full attack repeatedly applies the one-stage attack to guide a heuristic search for the key (all rotors in the scrambler and their initial positions), examining fewer candidate keys on average than does exhaustive search. In comparison with exhaustive search, the full attack is more complicated but finds the solution more quickly on average, for sufficiently large baskets (for a three-rotor machine, a basket of six rotors suffices). Throughout, we work with a simplified model of Enigma, similar to the Railway Enigma, that has no ring settings or plugboard connections.

The one-stage Rivest-Sherman (RS) Attack evaluates each candidate (rotor, initial position)-pair by computing a statistic equivalent to the Index of Coincidence (IC) on the character string that is generated by passing ciphertext through the candidate rotor starting in its candidate initial position. The attack computes this statistic separately for each block of 26 characters, during which only the fastest-moving rotor spins. The correct pair generates a string whose distribution is equivalent (up to permutation of the letter frequencies within each block) to that created by passing plaintext through the correct rotor. By contrast, incorrect rotors, and the correct rotor in any wrong initial position, generate strings formed by passing approximately uniform text through the candidate rotor. The full attack heuristically searches for the (rotor, initial position)-pair for each rotor in the scrambler, one rotor at a time, performing an iteratively-broadening tree search. For each rotor to be determined, the full attack orders all candidate pairs by their statistical scores. This search tests complete candidate keys by decrypting the ciphertext and checking for valid plaintext.
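The block-wise scoring statistic can be sketched with a short Index of Coincidence computation; this is an illustrative version of the standard IC formula, not the authors' code:

```python
from collections import Counter

def index_of_coincidence(text: str) -> float:
    # IC = sum over letters of f_i*(f_i - 1) / (N*(N - 1)).
    # English-like text scores near 0.066; uniform text near 1/26 ~ 0.0385.
    counts = Counter(c for c in text.upper() if c.isalpha())
    n = sum(counts.values())
    if n < 2:
        return 0.0
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

def blockwise_ic(stream: str, block: int = 26) -> float:
    # Score a candidate (rotor, initial position) by averaging the IC over
    # 26-character blocks, during which only the fastest rotor moves.
    chunks = [stream[i:i + block] for i in range(0, len(stream), block)]
    scores = [index_of_coincidence(c) for c in chunks if len(c) == block]
    return sum(scores) / len(scores) if scores else 0.0
```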

We evaluate the effectiveness of the RS-Attacks by determining relationships among the following parameters: ciphertext length, running time, and probability of success at finding the correct key. Contrary to predictions by Rivest and Sherman, the correct (rotor, initial-position)-pair tends to have a low statistical score rather than a high score. The reason stems from the fact that, within each block, the statistic is a mixed-alphabet IC rather than a matching-alphabet IC. We extend and revise Sherman's 1981 theoretical analysis to explain all of our experimental findings.

Keywords: ciphertext-only attacks, cryptanalysis, cryptography, cryptology, Enigma cryptograph, iteratively-broadening tree search, Railway Enigma, Rivest-Sherman Attacks, RS-Attacks, rotor machines, statistical cryptanalysis, wired-wheel machines.

Alan T. Sherman is associate professor of computer science at UMBC, director of the UMBC Center for Information Security and Assurance (CISA), editor of Cryptologia, and a cryptologic consultant.

**Open To The Public **

*Date*: **Thursday, May 11, 2006 **

*Time*: **3:00 p.m.- 4:30 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **A Study of Vote Verification Technologies **

*Speaker*: **Dr. Alan Sherman, University of Md, Baltimore County**

(Joint work with Aryya Gangopadhyay, Stephen H. Holden, George Karabatis, A. Gunes Koru, Chris M. Law, Donald F. Norris, John Pinkston, Andrew Sears, and Dongsong Zhang)

*Abstract*:

We describe our findings and experiences from our technical review of vote verification systems for the Maryland State Board of Elections (SBE). The review included the following four systems for possible use together with Maryland's existing Diebold AccuVote-TS (touch screen) voting system: VoteHere Sentinel; SCYTL Pnyx.DRE; MIT-Selker audio system; Diebold voter verified paper audit trail. As a baseline, we also examined the SBE's procedures for "parallel testing" of its Diebold system. For each system, we examined how it enables voters who use touch screens to verify that their votes are cast as intended, recorded as cast, and reported as recorded. We also examined how well it permits post-election auditing. To this end, we considered implementation, impact on current state voting processes and procedures, impact on voting, functional completeness, security against fraud, attack and failure, reliability, accessibility, and voter privacy.

Our principal findings are, first, that each system we examined may at some point provide a degree of vote verification beyond what is available through the Diebold system as currently implemented, provided the system were fully developed, fully integrated with the Diebold system, and effectively implemented. Second, none of the systems is yet a fully developed, commercially ready product. This interdisciplinary study--the first of its kind--is of interest for the way in which it evaluates the systems, for the technical questions it raises about standard interfaces, and as a snapshot of the state of vote verification technologies and their commercial development.

Keywords: Diebold AccuvoteTS, Diebold VVPAT, Direct Recording Equipment (DRE), computer system security, electronic voting systems, information assurance, Maryland State Board of Elections, MIT Selker VVAATT, parallel testing, Scytl Pnyx.DRE, VoteHere Sentinel, vote verification technology.

Support for this research was provided in part by the Maryland State Board of Elections. For our full report, see Norris, et al. (www.umbc.edu/mipar). This work was carried out at the National Center for the Study of Elections of the Maryland Institute for Policy Analysis and Research at UMBC. Alan T. Sherman is associate professor of computer science at UMBC and director of the UMBC Center for Information Security and Assurance (CISA).

**Open To The Public **

*Date*: **Friday, April 21, 2006 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **On the Difficulty of Perfect NIZK with Adaptive Security**

*Speaker*: **Masa Abe, NTT **

*Abstract*:

The notion of non-interactive zero-knowledge (NIZK), introduced by Blum, Feldman and Micali, is of fundamental importance in cryptography. Despite the vast attention the concept of NIZK has attracted since its introduction in the late 80's, one question remained open until recently: is it possible to construct NIZK schemes for any NP-language with *statistical* (or even *perfect*) ZK? Groth, Ostrovsky and Sahai recently proposed two elegant constructions of perfect NIZK arguments for any NP-language L. However, their schemes either achieve only non-adaptive security, meaning security against a dishonest prover that chooses the target instance x* not in L (for which he wants to fake a proof) *independently* of the common reference string (CRS), or need a strong non-standard assumption to achieve adaptive security. But is such a strong assumption inevitable in general?

In this talk, we show that using "standard techniques" it is not possible to construct a statistical NIZK argument for an NP-complete language with adaptive security unless NP is contained in P/poly. This is a joint work with Serge Fehr (CWI).

**Open To The Public **

*Date*: **Friday, April 7, 2006 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Fully Collusion-Resistant Broadcast Encryption and Traitor Tracing Systems **

*Speaker*: **Brent Waters, SRI **

*Abstract*:

A broadcast encryption cryptosystem allows a sender to encrypt a message to some target set of users. In a secure system, users in the target set can decrypt the ciphertext, and no collusion of users outside the target set can learn anything about the message. Broadcast encryption systems have a variety of applications. For example, we could build a shared encrypted filesystem from broadcast encryption, where a user broadcast-encrypts a file to the set of users he wants to share it with. Broadcast encryption is also useful for large-scale content distribution; a content distributor such as DirecTV or XM Radio will encrypt its digital media content to the devices of all paying subscribers.

The primary challenge with broadcast encryption is to design secure systems with small ciphertext size. For example, we could achieve a broadcast encryption scheme with ciphertexts linear in the number of receivers by simply encrypting a message (or symmetric encryption key) separately to each user in the target set. However, this approach is inefficient and becomes infeasible in large systems where there could be many users in a target set.
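The trivial linear-size scheme just described can be sketched in a few lines. The `toy_encrypt` primitive below is a hypothetical, deliberately insecure stand-in for a real per-user encryption; only the structure (one header entry per target user) is the point:

```python
import hashlib

def toy_encrypt(user_key: bytes, message: bytes) -> bytes:
    # Insecure illustrative stand-in: XOR the message with a pad derived from
    # the user's key. XORing again with the same key decrypts.
    pad = hashlib.sha256(user_key).digest()
    return bytes(m ^ p for m, p in zip(message, pad))

def trivial_broadcast(user_keys: dict, target_set: set, session_key: bytes) -> dict:
    # Ciphertext size grows linearly with |target_set|:
    # one encryption of the session key per targeted user.
    return {uid: toy_encrypt(user_keys[uid], session_key) for uid in target_set}
```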

In this talk I will present two recent developments in broadcast encryption. First, I will discuss my work with Dan Boneh and Craig Gentry on a broadcast encryption scheme that has constant ciphertext size and constant-size private keys. Our scheme can be used to encrypt to arbitrary sets of users and is secure against an arbitrary number of colluding attackers. Somewhat surprisingly, the only previous fully collusion-resistant scheme is the trivial one, in which we encrypt to each user separately.

Additionally, I will present some very recent work with Dan Boneh and Amit Sahai on a related problem known as "Tracing Traitors". Our traitor-tracing construction allows us to trace the creator of a "pirate box". Our solution achieves O(sqrt(n))-size ciphertexts and is secure against an arbitrary number of colluders.

Links: http://eprint.iacr.org/2006/045.pdf http://eprint.iacr.org/2005/018.pdf

**Open To The Public **

*PLEASE Note Different Day, Time and Place*

*Date*: **Wednesday, April 5, 2006 **

*Time*: **4:00 p.m.- 5:30 p.m. **

*Place*: **32-G575, Theory Lounge, Stata Center, 32 Vassar St. **

*Title*: **Random Selection with an Adversarial Majority **

*Speaker*: **Ronen Gradwohl, Weizmann Institute **

*Abstract*:

We consider the problem of random selection, where $p$ players follow a protocol to jointly select a random element of a universe of size $n$. However, some of the players may be adversarial and collude to force the output to lie in a small subset of the universe. We describe essentially the first protocols that solve this problem in the presence of a dishonest majority in the full-information model (where the adversary is computationally unbounded and all communication is via broadcast).

Our protocols are nearly optimal in several parameters, including the round complexity (as a function of $n$), the randomness complexity, the communication complexity, and the tradeoffs between the fraction of honest players, the probability that the output lies in a small subset of the universe, and the density of this subset. Joint work with Salil Vadhan and David Zuckerman.

**Open To The Public **

*Date*: **Friday, March 24, 2006 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Calibrating Noise to Sensitivity in Private Data Analysis **

*Speaker*: **Adam Smith, Weizmann Inst. and MIT **

*Abstract*:

Consider a trusted server that holds a database of sensitive information. The server would like to reveal global statistics about the population in the database, and yet hide information specific to individuals. I will discuss a recent line of work exploring the tradeoff between these conflicting goals -- first, how the goals can be formulated precisely and second, to what extent they can both be satisfied. The technical results I will mention come mostly from a paper in TCC 2006.

The main contribution is a new, simple framework for "output perturbation". Given a query function F mapping databases to reals, we say the "true" query answer is the result of applying F to the database. To protect privacy, the server perturbs the true answer by adding random noise and sends the result as the query answer. In this setting, a strong notion of privacy can be guaranteed by calibrating the standard deviation of the noise to the "sensitivity" of the function F.
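A minimal sketch of output perturbation under the assumption of Laplace-distributed noise with scale calibrated to sensitivity/epsilon (the published mechanism uses Laplace noise; the `epsilon` privacy parameter and the sampling trick of differencing two exponentials are implementation choices here):

```python
import random

def perturbed_answer(database, f, sensitivity: float, epsilon: float) -> float:
    # Compute the true answer, then add noise whose scale is calibrated to
    # the sensitivity of f: scale = sensitivity / epsilon.
    true_answer = f(database)
    rate = epsilon / sensitivity
    # The difference of two i.i.d. Exponential(rate) samples is
    # Laplace-distributed with scale 1/rate.
    noise = random.expovariate(rate) - random.expovariate(rate)
    return true_answer + noise
```

For a counting query ("how many rows satisfy P?") the sensitivity is 1, since adding or removing one individual changes the count by at most one.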

The second contribution is a series of bounds on the power of special classes of privacy mechanisms. These show that various kinds of interaction add significantly to the "utility" of the mechanism from the users' point of view. Based on joint work with Cynthia Dwork, Frank McSherry, Kobbi Nissim and Enav Weinreb.

**Open To The Public **

*Date*: **FRIDAY, MARCH 17, 2006 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Information-Theoretically Secure Protocols and Security Under Composition**

*Speaker*: **Tal Rabin, IBM **

*Abstract*:

We investigate the question of whether security of protocols in the information-theoretic setting (where the adversary is computationally unbounded) implies the security of these protocols under concurrent composition. This question is motivated by the folklore that all known protocols that are secure in the information-theoretic setting are indeed secure under concurrent composition. We provide answers to this question for a number of different settings (i.e., considering perfect versus statistical security, and concurrent composition with adaptive versus fixed inputs).

Our results enhance the understanding of what is necessary for obtaining security under composition, as well as providing tools (i.e., composition theorems) that can be used for proving the security of protocols under composition while considering only the standard stand-alone definitions of security. Joint work with Eyal Kushilevitz and Yehuda Lindell.

**Open To The Public **

*Date*: **FRIDAY, MARCH 10, 2006 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **On Extractors, Error-Correction and Hiding All Partial Information **

*Speaker*: **Yevgeniy Dodis, New York University**

*Abstract*:

Randomness extractors allow one to obtain nearly perfect randomness from highly imperfect sources of randomness, which are only known to contain "scattered" entropy. Not surprisingly, such extractors have found numerous applications in many areas of computer science, including cryptography. Aside from extracting randomness, a less known usage of extractors comes from the fact that they hide all deterministic functions of their (high-entropy) input: in other words, extractors provide a certain level of privacy for the imperfect source that they use. In the latter kind of application, one typically needs extra properties of extractors, such as invertibility, collision-resistance, error-correction or unforgeability.

In this talk we survey some of such usages of extractors, concentrating on several recent results by the speaker. The primitives we will survey include several flavors of randomness extractors (including "fuzzy extractors" and "extractor-macs"), entropically secure encryption and perfect one-way hash functions. The main technical tools will include several variants of the leftover hash lemma, error correcting codes, and the connection between randomness extraction and hiding all partial information.

**Open To The Public **

*Date*: **FRIDAY, MARCH 3, 2006 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **"Mercurial Commitments with Applications to Zero-Knowledge Sets" **

*Speaker*: **Leonid Reyzin, Boston University**

*Abstract*:

We introduce a new flavor of commitment schemes, which we call mercurial commitments. Informally, mercurial commitments allow for soft decommitments, which, on the one hand, are not binding, but, on the other hand, cannot conflict with true decommitments.

We then demonstrate that a particular instantiation of mercurial commitments has been implicitly used by Micali, Rabin and Kilian [MRK] for the first ever construction of zero-knowledge sets. (A zero-knowledge set scheme allows a Prover to (1) commit to a set S in a way that reveals nothing about S; and (2) prove to a Verifier, in zero knowledge, statements of the form "x in S" and "x not in S.") The rather complicated construction of [MRK] becomes easy to understand when viewed as a more general construction with mercurial commitments as an underlying building block.

By providing mercurial commitments based on various assumptions, we obtain several new zero-knowledge set constructions. This is joint work with Melissa Chase (Brown), Alex Healy (Harvard), Anna Lysyanskaya (Brown) and Tal Malkin (Columbia).

**Open To The Public **

*Date*: **FRIDAY, FEBRUARY 17, 2006 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **The Complexity of Online Memory Checking **

*Speaker*: **Guy Rothblum, CSAIL, MIT **

*Abstract*:

Suppose you want to store a large file on a remote and unreliable server. You would like to verify that your file has not been corrupted, so you store a small private (randomized) "fingerprint" of the file on your own computer. This is the setting for the well-studied authentication problem, and the size of the required private fingerprint is well understood.
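The classical authentication setting can be sketched with a small keyed fingerprint. HMAC is an illustrative choice of fingerprint here, not the construction studied in the talk; note that verification reads the entire file, which is exactly what the sub-linear problem below avoids:

```python
import hashlib
import hmac

def fingerprint(file_bytes: bytes, secret: bytes) -> bytes:
    # A small private fingerprint: a keyed MAC over the whole file.
    return hmac.new(secret, file_bytes, hashlib.sha256).digest()

def verify(file_bytes: bytes, secret: bytes, tag: bytes) -> bool:
    # Recompute the fingerprint over the *entire* file and compare.
    return hmac.compare_digest(fingerprint(file_bytes, secret), tag)
```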

We study the problem of sub-linear authentication: suppose you would like to encode and store your file in a way that allows you to verify that it has not been corrupted, but without reading all of it. If you only want to read t bits of the file, how large does the size s of the fingerprint need to be? We define this problem formally, and show a tight lower bound on the relationship between s and t when the adversary is not computationally bounded, namely s x t = Omega(n), where n is the file size. This is an easier case of the online memory checking problem, introduced by Blum, Evans, Gemmell, Kannan and Naor in 1991, and hence the same (tight) lower bound applies also to this problem.

It was previously shown that when the adversary is computationally bounded, under the assumption that one-way functions exist, it is possible to construct much better online memory checkers and sub-linear authentication schemes. We show that the existence of one-way functions is also a necessary condition: even slightly breaking the s x t = Omega(n) lower bound in a computational setting implies the existence of one-way functions. Joint work with Moni Naor.

**Open To The Public **

*Date*: **FRIDAY, FEBRUARY 10, 2006 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **On the (Im)Possibility of Basing One-Way Functions on NP-Hardness **

*Speaker*: **Adi Akavia, CSAIL, MIT **

*Abstract*:

We consider the question of whether it is possible to base the existence of one-way functions on NP-hardness. That is, we study the possibility of reductions from a worst-case NP-hard decision problem to the task of inverting a polynomial-time computable function. We prove two negative results:

1. For any polynomial-time computable function f: the existence of a randomized *non-adaptive* reduction of worst-case NP problems to the task of average-case inverting f implies that coNP is contained in AM. It is widely believed that coNP is not contained in AM. Thus, this result may be regarded as showing that such reductions cannot exist (unless coNP is contained in AM). This result improves previous negative results that placed coNP in *non-uniform* AM.

2. For any polynomial-time computable function f for which it is possible to efficiently compute pre-image sizes (i.e., |f^{-1}(y)| for a given y): the existence of a randomized reduction of worst-case NP problems to the task of inverting f implies that coNP is contained in AM. Moreover, this is also true for functions for which it is possible to verify (via an AM protocol) the approximate pre-image sizes. These results hold for *any* reduction, including *adaptive* ones.

The previously known negative results regarding worst-case to average-case reductions were confined to *non-adaptive* reductions. In the course of proving the above results, two new AM protocols emerge for proving *upper bounds* on the sizes of NP sets. Whereas the known *lower*-bound protocol on set sizes by [Goldwasser-Sipser] works for any NP set, the known *upper*-bound protocol on set sizes by [Aiello-Hastad] works in a setting where the verifier knows a random secret element (unknown to the prover) in the NP set. The new protocols we develop here each work under different requirements from those of [Aiello-Hastad], enlarging the settings in which it is possible to prove upper bounds on NP set sizes.

Joint work with Oded Goldreich, Shafi Goldwasser and Dana Moshkovitz

**Open To The Public **

*Date*: **FRIDAY, JANUARY 20, 2006 **

*Time*: **1:30 p.m. - 3:00 p.m.**

*Place*: **32-G575, Stata Center, 32 Vassar St. **

*Title*: **Chosen Ciphertext Security 20% Faster **

*Speaker*: **Rosario Gennaro, IBM T.J. Watson **

*Abstract*:

In a 1998 breakthrough result, Cramer and Shoup showed how to build an efficient encryption scheme that resists chosen-ciphertext attacks, the ultimate notion of security for encryption. Indeed, previous proposals for chosen-ciphertext secure encryption schemes were so complex as to be completely infeasible in practice, in spite of their theoretical polynomial complexity.

Though reasonably efficient, the Cramer-Shoup scheme is still substantially more expensive than basic semantically secure schemes: for example, compared to the basic ElGamal scheme it is almost 3 times less efficient in both computation and ciphertext length. For this reason practitioners are using encryption schemes which are more efficient than Cramer-Shoup, but for which chosen-ciphertext security is not proven in the standard computational model but rather in idealized ones such as the random oracle model. The central question then remains to improve as much as possible the efficiency of provably secure schemes.

In this talk we present a simple modification to the Cramer-Shoup scheme which improves its efficiency by 20% and discuss various interesting technical issues that arise in its security proof. We also present a theoretical framework that provides an insightful explanation for why this new more efficient scheme works and may produce further improvements. In particular, using this framework, we are able to construct the first efficient hybrid threshold encryption scheme which is chosen-ciphertext secure.

The new encryption scheme is joint work with Yvo Desmedt, Kaoru Kurosawa and Victor Shoup, while the theoretical framework is joint work with Masayuki Abe and Kaoru Kurosawa.

**Open To The Public **

*Date*: **FRIDAY, DECEMBER 16, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **A simple and provably good code for SHA Message Expansion**

*Speaker*: **Charanjit Jutla, IBM TJ Watson **

*Abstract*:

We develop a new computer-assisted technique for lower bounding the minimum distance of codes similar to those used in SHA-1 message expansion. Using this technique, we prove that a modified SHA-1-like code has minimum distance at least 80, and indeed achieves this in just the last 64 of the 80 expanded words. We propose a new compression function which is identical to SHA-1 except for the modified message expansion code.

We also show that if a collision is obtained (as in the Wang et al. attack) by putting together local collisions, then the disturbance vector must be an XOR of a non-zero message expansion codeword and a cipher codeword. We conclude that the modified SHA-1 is resistant to recent differential attacks and their natural extensions. This is joint work with Anindya Patthak, UT Austin.

**Open To The Public **

*Date*: **FRIDAY, DECEMBER 9, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **The HMAC Construction: A Decade Later **

*Speaker*: **Ran Canetti, IBM TJ Watson **

*Abstract*:

Message authentication codes (MACs) provide a way for parties in a network to authenticate each other's messages. As such, they are arguably the most commonly used cryptographic constructs. HMAC is a MAC construction based on cryptographic hash functions that was designed for the IPSec protocol in 1995. It is now incorporated in protocols such as IPSec, TLS, SSH and SHTTP, standardized by NIST and ANSI, and shipped with every major operating system and web browser.

HMAC combines simple design with sound cryptographic analysis that makes relatively mild security assumptions on the underlying hash function. In particular, this analysis shows that HMAC remains unaffected by the recent advancements in finding collisions in hash functions.
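The construction itself is simple enough to state in a few lines: HMAC(K, m) = H((K xor opad) || H((K xor ipad) || m)), with the key padded to the hash's block size. A minimal sketch, checked against Python's standard `hmac` module:

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    block = 64  # SHA-256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()  # long keys are hashed first
    key = key.ljust(block, b"\x00")         # then padded to the block size
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

key, msg = b"secret key", b"attack at dawn"
assert hmac_sha256(key, msg) == hmac.new(key, msg, hashlib.sha256).digest()
```

The nested structure is what decouples HMAC's security from plain collision resistance of the underlying hash.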

This talk will review the design of HMAC with some historic perspective, in an attempt to draw lessons on the impact of theoretical analysis on practical security and acceptability. We will also survey other common uses of HMAC today, such as for key derivation and randomness extraction in VPN protocols. Finally, we will use these lessons to propose requirements for a new cryptographic hash function, thus contributing to the current specification effort led by NIST.

HMAC is joint work with Mihir Bellare and Hugo Krawczyk.

**Open To The Public **

*Date*: **FRIDAY, December 2, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Efficient Collision-Resistant Hashing from Worst-Case Assumptions on Cyclic Lattices **

*Speaker*: **Chris Peikert, CIS, CSAIL, MIT **

*Abstract*:

Collision-resistant hashing is one of the most widely-employed cryptographic primitives. Its applications include integrity checking, user and message authentication, commitment protocols, and more.

In this talk, we will present a very efficient hash function for which finding collisions given a *random* key is at least as hard as approximating the shortest vector (SVP) in a cyclic lattice *in the worst case*. Each evaluation of our function requires only a small number (e.g., 3) of Fast Fourier Transforms. Thus, assuming that SVP is indeed hard for the special case of cyclic lattices, our function is efficient enough to be considered a candidate for practical usage.

Our worst-case to average-case reduction is non-standard, exploiting an intimate connection between the ring algebra of integer polynomials and the linear algebra of cyclic lattices. These new techniques enable us to provide a more efficient and conceptually simpler security reduction than previous ones, and to show tight connections among certain worst-case problems on cyclic lattices. This is joint work with Alon Rosen.

**Open To The Public **

*Date*: **WEDNESDAY, November 23, 2005 **

*Time*: **1:15 p.m.- 3:00 p.m. **

*Place*: **NOTE LOCATION: 32-G575, Stata Center, 32 Vassar St. **

*Title*: **Efficient Cryptographic Constructions for Privacy-preserving Applications**

*Speaker*: **Dawn Song, CMU**

*Abstract*:

Many applications require privacy-preserving solutions that enable the functionality of the applications without compromising users' privacy. In this talk, I will describe our new efficient cryptographic constructions for several privacy-preserving applications.

I will first talk about our new constructions for privacy-preserving set operations (CRYPTO 05, joint work with Lea Kissner). We propose efficient techniques for privacy-preserving operations on multisets. By building a framework of set operations using polynomial representations and employing the mathematical properties of polynomials, we design efficient methods to enable privacy-preserving computation of the union, intersection, and element reduction operations.

I will also demonstrate how these operations can be arbitrarily composed, and thus used in a wide range of applications.
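In the clear (ignoring the encryption layer), the polynomial representation can be illustrated as follows: a multiset maps to the polynomial whose roots are its elements, and multiset union corresponds to polynomial multiplication. This toy sketch omits the homomorphic encryption the actual protocol relies on:

```python
def mul_linear(p, r):
    # Multiply polynomial p (coefficients, lowest degree first) by (x - r).
    out = [0] * (len(p) + 1)
    for i, c in enumerate(p):
        out[i + 1] += c       # contribution of x * c*x^i
        out[i] -= r * c       # contribution of -r * c*x^i
    return out

def poly_from_multiset(s):
    # Represent multiset s as the product of (x - a) over its elements.
    p = [1]
    for a in s:
        p = mul_linear(p, a)
    return p

def poly_mul(a, b):
    # Ordinary polynomial multiplication (convolution of coefficients).
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] += ca * cb
    return out

A, B = [1, 2, 2], [2, 3]
# Multiset union corresponds to multiplying the two polynomials.
assert poly_mul(poly_from_multiset(A), poly_from_multiset(B)) == poly_from_multiset(A + B)
```

Multiplicities are preserved automatically, since a repeated element simply contributes a repeated root.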

I will then describe more recent results on some other cryptographic constructions for privacy-preserving applications.

Speaker bio: http://www.ece.cmu.edu/~dawnsong/bio.html

**Open To The Public **

*Date*: **FRIDAY, November 18, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G575, Stata Center, 32 Vassar St. **

*Title*: **Security analysis of network protocols**

*Speaker*: **Anupam Datta, Stanford University**

*Abstract*:

In this talk, I will present PCL - a logic for proving security properties of network protocols. Two central results for this logic are a "composition theorem" and a "computational soundness theorem". The composition theorem allows proofs of complex protocols to be built up from proofs of their constituent sub-protocols. It is formulated and proved by adapting ideas from the assume-guarantee paradigm for reasoning about distributed systems. The computational soundness theorem guarantees that, for a class of security properties and protocols, axiomatic proofs in PCL carry the same meaning as hand-proofs done by cryptographers.

The soundness proof uses standard proof techniques from cryptography, in particular, complexity-theoretic reductions. PCL has been successfully applied to a number of internet, wireless and mobile network security protocols developed by the IEEE and IETF Working Groups, in several cases identifying serious security vulnerabilities.

**Open To The Public **

*Date*: **FRIDAY, November 4, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Updatable Zero Knowledge Databases**

*Speaker*: **Moses Liskov, The College of William and Mary**

*Abstract*:

Micali, Rabin, and Kilian recently introduced zero-knowledge sets and databases, in which a prover sets up a database by publishing a commitment and then gives proofs about particular values. While an elegant and useful primitive, zero-knowledge databases do not offer any good way to perform updates. We explore the issue of updating zero-knowledge databases. We define and discuss transparent updates, which (1) allow holders of proofs that are still valid to update their proofs, but (2) otherwise maintain secrecy about the update.

We give rigorous definitions for transparently updatable zero-knowledge databases, and give a practical construction based on the Chase et al. construction, assuming that verifiable random functions and mercurial commitments exist, in the random oracle model. We also investigate the idea of updatable commitments, an attempt to make simple commitments transparently updatable. We define this new primitive and give a simple secure construction.

**Open To The Public **

*Date*: **FRIDAY, OCTOBER 28, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **On the Impossibility of Obfuscation with Auxiliary Input**

*Speaker*: **Yael Tauman Kalai, CIS, CSAIL, MIT **

*Abstract*:

Informally, program obfuscation aims at making a program "unintelligible" while preserving its functionality. Whereas practitioners have been attempting to obfuscate programs for many years, it has only recently received attention in the theoretical community.

Barak et al. formalized the notion of obfuscation, and showed that there exist (contrived) classes of functions that cannot be obfuscated. In contrast, Canetti and Wee showed, under various complexity assumptions, how to obfuscate a particular class of simple functions, called point functions, that output 1 on a single point (and output 0 everywhere else). Thus, it seemed entirely possible that most functions of interest can be obfuscated, even though in principle general-purpose obfuscators do not exist.

In this talk, we show that this is unlikely to be the case. In particular, we consider the notion of obfuscation w.r.t. auxiliary input, which corresponds to the setting where the adversary, who is given the obfuscated circuit, may have some additional a priori information. We first argue that any useful positive result about the possibility of obfuscation must satisfy this extended definition. We then prove that there exist many natural classes of functions that cannot be obfuscated w.r.t. auxiliary input, both when the auxiliary input depends on the function being obfuscated and even when it is independent of that function.

This is joint work with Shafi Goldwasser.

**Open To The Public **

*Date*: **FRIDAY, October 7, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Concurrent Zero Knowledge without Complexity Assumptions**

*Speaker*: **Shien Jin Ong, Harvard U.**

*Paper by*: **Daniele Micciancio, Shien Jin Ong, Amit Sahai and Salil Vadhan**

*Abstract*:

We provide unconditional constructions of concurrent statistical zero-knowledge proofs for a variety of non-trivial problems (not known to have probabilistic polynomial-time algorithms). The problems include Graph Isomorphism, Graph Nonisomorphism, Quadratic Residuosity, Quadratic Nonresiduosity, a restricted version of Statistical Difference, and approximate versions of the (coNP forms of the) Shortest Vector Problem and Closest Vector Problem in lattices. For some of the problems, such as Graph Isomorphism and Quadratic Residuosity, the proof systems have provers that can be implemented in polynomial time (given an NP witness) and have \tilde{O}(log n) rounds, which is known to be essentially optimal for black-box simulation.

To the best of our knowledge, these are the first constructions of concurrent zero-knowledge protocols in the asynchronous model (without timing assumptions) that do not require complexity assumptions (such as the existence of one-way functions).

The paper is available at http://eprint.iacr.org/2005/286, and is joint work with Daniele Micciancio, Amit Sahai and Salil Vadhan.

**Open To The Public **

*Date*: **FRIDAY, September 30, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Probabilistic I/O Automata: a promising framework for the analysis of security protocols?**

*Speaker*: **Olivier Pereira, Universite Catholique de Louvain **

*Abstract*:

We demonstrate the use of Probabilistic I/O automata (PIOAs) for analyzing a classical Oblivious Transfer protocol.

Our analysis involves (among other aspects):

- - Recasting computational indistinguishability and hardness assumptions within the PIOA framework,

- - Developing new simulation relations, allowing us to prove indistinguishability using inductive techniques instead of classical "distinguisher" arguments.

Potential benefits of this approach include:

- - A more rigorous specification of cryptographic protocols, as opposed to typical semi-formal pseudocode,

- - More modular and systematic proofs: we can decompose the proofs into different parts, completely separating the concurrency issues from the computational ones.

We hope that using PIOAs for the analysis of security protocols will allow for proofs which are less error-prone and more amenable to mechanization and eventual automation.

This is joint work with Ran Canetti, Ling Cheung, Dilsun Kaynar, Moses Liskov, Nancy Lynch and Roberto Segala.

**Open To The Public **

*Date*: **FRIDAY, September 23, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Compact E-Cash**

*Speaker*: **Susan Hohenberger, CIS, CSAIL, MIT **

*Abstract*:

This talk presents efficient off-line anonymous e-cash schemes where a user can withdraw a wallet containing 2^l coins each of which she can spend unlinkably. Our first result is a scheme, secure under the strong RSA and the y-DDHI assumptions, where the complexity of the withdrawal and spend operations is O(l+k) and the user's wallet can be stored using O(l+k) bits, where k is a security parameter.

The best previously known schemes require at least one of these complexities to be O(2^l k). In fact, compared to previous e-cash schemes, our whole wallet of 2^l coins has about the same size as one coin in these schemes. Our scheme also offers exculpability of users, that is, the bank can prove to third parties that a user has double-spent.

We then extend our scheme to our second result, the first e-cash scheme that provides traceable coins without a trusted third party. That is, once a user has double spent one of the 2^l coins in her wallet, all her spendings of these coins can be traced. We present two alternate constructions. One construction shares the same complexities with our first result but requires a strong bilinear map assumption that is only conjectured to hold on MNT curves.

The second construction works on more general types of elliptic curves, but the price for this is that the complexity of the spending and of the withdrawal protocols becomes O(lk) and O(lk + k^2) bits, respectively, and wallets take O(lk) bits of storage. All our schemes are secure in the random oracle model.

Joint work with Jan Camenisch and Anna Lysyanskaya from Eurocrypt 2005.

*Date*: **THURSDAY, September 22, 2005 **

**Please Note different DAY and PLACE **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G575, Stata Center, 32 Vassar St. **

*Title*: **Expanders and the Elliptic Curve DLOG problem **

*Speaker*: **Stephen Miller, Hebrew U and Rutgers **

*Abstract*:

I will talk about a family of graphs which originally arose in cryptography, in studying the difficulty of the discrete logarithm problem on elliptic curves.

These graphs can be shown to be expanders, assuming the generalized Riemann hypothesis (GRH) for Hecke's grossencharacter L-functions.

From this one can deduce a random reducibility result for the discrete logarithm problem on the types of elliptic curves that are used in cryptographic applications. The rapid mixing of the random walk on these graphs had previously been assumed in a number of recent attacks on elliptic curve cryptosystems. One application of our estimates of the graph eigenvalues is to give useful, explicit bounds for these mixing times. The graphs themselves represent a new (conditional) construction of expanders, which can be used for other applications.

(Joint work with Ramarathnam Venkatesan and David Jao, Microsoft Research Cryptography and Anti-Piracy Group.)

Host: Adi Akavia, MIT

**Open To The Public **

**PLEASE NOTE Different Time And Place **

*Date*: **WEDNESDAY, September 14, 2005 **

*Time*: **4:00 p.m.- 5:30 p.m. **

*Place*: **32-G575, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Special-Purpose Hardware for Integer Factoring**

*Speaker*: **Eran Tromer, Weizmann Institute**

*Abstract*:

Factoring of large integers is of considerable interest in cryptography and algorithmic number theory. In the quest to factor larger integers, the present bottleneck lies in the sieving and matrix steps of the Number Field Sieve algorithm.

In a series of works, several special-purpose hardware architectures for these steps were proposed and evaluated. The use of custom hardware, as opposed to the traditional RAM model, offers major benefits beyond a plain reduction of overheads: the possibility of vast fine-grained parallelism, and the chance to identify and exploit technological tradeoffs at the algorithmic level. Taken together, these works have reduced the cost of factoring by many orders of magnitude, making it feasible, for example, to factor 1024-bit integers within one year at a cost of about US$1M (as opposed to the trillions of US$ forecast previously).

This talk will survey these results, emphasizing the underlying general ideas. Joint works with Adi Shamir, Arjen Lenstra, Willi Geiselmann, Rainer Steinwandt, Hubert Köpfer, Jim Tomlinson, Wil Kortsmit, Bruce Dodson, James Hughes and Paul Leyland.

**Open To The Public **

*Date*: **FRIDAY, May 20, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Towards Physically-Secure Cryptographic Devices**

*Speaker*: **Francois-Xavier Standaert, Universite Catholique de Louvain**

*Abstract*:

Traditionally, cryptographic algorithms provide security against an adversary who has only black box access to cryptographic devices. That is, the only thing the adversary can do is query the cryptographic algorithm on inputs of his choice and analyze the responses, which are always computed according to the correct original secret information. However, such a model does not always correspond to the realities of physical implementations, and in fact it very rarely does.

During the last decade, significant attention has been paid to the physical security evaluation of cryptographic implementations. In particular, it has been demonstrated that actual attackers may be much more powerful than what can be captured by the black box model.

Examples of extended capabilities include the physical tampering of cryptographic devices, the induction of faults during computations or the use of side-channel leakages such as time, power consumption and electromagnetic measurements. As a straightforward consequence, various cryptosystems implemented on various platforms have been corrupted by these physically enhanced attackers.

In this talk, we will question the physical security of cryptographic implementations with a focus on side-channel attacks, in which adversaries can exploit physical leakages such as power consumption or electromagnetic radiation. After reviewing state-of-the-art attacks, we will investigate some commonly proposed countermeasures and illustrate that they do not offer any theoretical guarantee of security.

We will finally discuss the possibility of a model of computation in which side-channel leakages (and possibly other physical effects) are formally taken into account. Joint work with Tal G. Malkin and Moti Yung (Columbia University).

**Open To The Public **

*Date*: **FRIDAY, May 13, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **A Formal Treatment of Onion Routing **

*Speaker*: **Anna Lysyanskaya, Brown University**

*Abstract*:

Anonymous channels are necessary for a multitude of privacy-protecting protocols. Onion routing is the best known way to achieve anonymity in practice. However, the cryptographic aspects of onion routing have not been sufficiently explored: no satisfactory definitions of security have been given, and existing constructions have only had ad-hoc security analysis for the most part.

We provide a formal definition of onion routing in the universally composable framework, and also discover a simpler definition (similar to CCA2 security for encryption) that implies security in the UC model. We then exhibit an efficient and easy-to-implement construction of an onion routing scheme satisfying this definition. Joint work with Jan Camenisch.

**Open To The Public **

*Date*: **FRIDAY, May 6, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Lightweight Signatures for Email**

*Speaker*: **Ben Adida, CIS, CSAIL, MIT**

*Joint work with*: **Susan Hohenberger and Ronald Rivest **

*Abstract*:

In 2004, phishing attacks (emails that trick victims into revealing sensitive information) cost corporations and individuals more than $10B in fraud. The problem is becoming much worse with the appearance of customized phishing attacks, where spoofed emails appear to originate from a friend or colleague. This is a problem that clearly calls for some form of email signatures. Unfortunately, most current email authentication techniques require significant effort in key generation, certification, and distribution. Thus, to date, no such method has been widely deployed.

We propose a new, lightweight PKI built specifically for email signatures. By limiting ourselves to this sole goal - in particular, by forgoing encryption and more general-purpose authentication - we are able to design a system that relies solely on already-deployed infrastructure. This system requires minimal end-user work, placing the onus, instead, on email domain administrators. Also, the adoption pattern for lightweight signatures is promising: large institutions and ISPs, particularly web-based email providers, can deploy the solution entirely at the mail server level, while more advanced users can obtain the same functionality solely through a mail client upgrade.

However, the addition of digital signatures suddenly makes emails strongly non-repudiable. While such strong authentication may be useful in certain cases, it is not necessary in most email exchanges, and it may in fact hamper deployment if users are forced to treat every email as a signed document. Privacy advocates, in particular, worry about this development. For this reason, we introduce a new type of repudiable signature usable in the lightweight framework: Separable Identity-Based Ring (SIBR, pronounced "cyber") signatures, which authenticate a message only to the intended recipient.

**Open To The Public **

*Date*: **THURSDAY, May 5, 2005 **

*Time*: **2:00 p.m.- 4:30 p.m. **

*Place*: **32-D463, Star, Stata Center, 32 Vassar St. **

*Title*: **Towards Constant Bandwidth Overhead Integrity Checking of Untrusted Data **

*Speaker*: **Dwaine Clarke, CSAIL, MIT**

*Note*: **PhD Thesis Defense**

*Abstract*:

We present a trace-hash scheme and an adaptive tree-trace scheme to improve the performance of checking the integrity of arbitrarily-large untrusted data, when using only a small fixed-sized trusted state. Currently, hash trees are used to check the data.

In many systems that use hash trees, programs perform many data operations before performing a critical operation that exports a result outside of the program's execution environment. The trace-hash and adaptive tree-trace schemes check sequences of data operations.

For each of the schemes, for all programs, as the average number of times the program accesses data between critical operations increases, the scheme's bandwidth overhead approaches a constant bandwidth overhead.
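For background, the hash trees mentioned above (Merkle trees) authenticate arbitrarily many data blocks with a single trusted root hash. This generic sketch illustrates the baseline the trace-hash schemes improve upon, not the schemes themselves:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the root of a binary hash tree over the given data blocks."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block0", b"block1", b"block2", b"block3"]
root = merkle_root(blocks)  # the only value that must be trusted storage

# Any tampering with untrusted data changes the recomputed root.
tampered = [b"block0", b"blockX", b"block2", b"block3"]
assert merkle_root(tampered) != root
```

Checking each access against the tree costs a logarithmic number of hashes per operation; amortizing checks over a sequence of operations is what the trace-hash approach targets.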

**Open To The Public **

*Date*: **FRIDAY, April 29, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Collision Search Attacks on SHA-1 **

*Speaker*: **Yiqun L Yin, independent security consultant **

**Authors: Xiaoyun Wang, Yiqun Lisa Yin and Hongbo Yu**

*Abstract*:

The hash function SHA-1 was designed by NSA and issued by NIST in 1995 as a federal standard. Since its publication, SHA-1 has been adopted in many government and industry security standards, especially standards for digital signatures in which a collision-resistant hash function is required. Consequently, SHA-1 has been widely implemented in commercial security systems.

In this talk, we present a collision search attack on SHA-1. Our attack is built upon existing attacks on its predecessor SHA-0, as well as new analytical techniques introduced in the recent attacks on the MDx family of hash functions. We estimate that collisions of SHA-1 can be found with a workload equivalent to about 2^{69} hash operations. This is the first attack on the full 80-step SHA-1 with complexity less than the 2^{80} theoretical bound.

Our attack exploits some unexpected weaknesses in the design of SHA-1. First, the message preprocessing step does not have enough avalanche effect, even with the inclusion of the extra shift operation compared with SHA-0. Second, the use of certain Boolean functions together with modular addition can facilitate, rather than prevent, differential-style attacks. We hope that our analysis of SHA-1 can provide useful insight into design criteria for more secure hash functions.

**Open To The Public **

*Date*: **FRIDAY, April 15, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Finding Collisions on a Public Road, or Do Secure Hash Functions Need Secret Coins? **

*Speaker*: **Chun-Yuan Hsiao, Boston University **

*Abstract*:

Many cryptographic primitives begin with parameter generation, which picks a primitive from a family. Such generation can use public coins (e.g., in the discrete-logarithm-based case) or secret coins (e.g., in the factoring-based case). We study the relationship between public-coin and secret-coin collision-resistant hash function families (CRHFs). Specifically, we demonstrate that:

* there is a lack of attention to the distinction between secret-coin and public-coin definitions in the literature, which has led to some problems in the case of CRHFs;

* in some cases, public-coin CRHFs can be built out of secret-coin CRHFs;

* the distinction between the two notions is meaningful, because in general secret-coin CRHFs are unlikely to imply public-coin CRHFs.

The last statement above is our main result, which states that there is no black-box reduction from public-coin CRHFs to secret-coin CRHFs. Our proof for this result, while employing oracle separations, uses a novel approach, which demonstrates that there is no black-box reduction without demonstrating that there is no relativizing reduction.

This is joint work with Leonid Reyzin, and it appears in Crypto 2004.

**Open To The Public **

*Date*: **FRIDAY, March 4, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G449, Patil/Kiva, Stata Center, 32 Vassar St. **

*Title*: **Upper and Lower Bounds on Black-Box Steganography **

*Speaker*: **Nenad Dedic, B.U. **

*Abstract*:

We study the limitations of steganography when the sender is not using any properties of the underlying channel beyond its entropy and the ability to sample from it. On the negative side, we show that the number of samples the sender must obtain from the channel is exponential in the rate of the stegosystem. On the positive side, we present the first secret-key stegosystem that essentially matches this lower bound regardless of the entropy of the underlying channel. Furthermore, for high-entropy channels, we present the first secret-key stegosystem that matches this lower bound statelessly (i.e., without requiring synchronized state between sender and receiver).

Authors: Nenad Dedic, Gene Itkis, Leonid Reyzin, Scott Russell: Boston University

**Open To The Public **

*Date*: **FRIDAY, February 25, 2005 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G575, Stata Center, 32 Vassar St. **

*Title*: **Security Proofs As Sequences of Games **

*Speaker*: **Victor Shoup, NYU **

*Abstract*:

In this talk, I will discuss how one can sometimes organize a security proof as a sequence of games, in order to control the complexity of the proof. This technique has been used for some time now by several authors, in some cases more explicitly than others. I will begin by illustrating the technique on a few "toy" examples that may be interesting mainly from a pedagogic point of view.

Then I will illustrate the real power of the technique with a new analysis of the Kurosawa-Desmedt variant of the Cramer-Shoup cryptosystem, under weaker and more realistic assumptions than those made in the original proof by Kurosawa and Desmedt; this result is joint work with Rosario Gennaro.

**Open To The Public **

*Date*: **WEDNESDAY, February 9, 2005 **

*Time*: **11:00 a.m.- 12:30 p.m. **

*Place*: **32-G575, Stata Center, 32 Vassar St. **

*Title*: **On Obfuscating Point Functions **

*Speaker*: **Hoeteck Wee, UC Berkeley **

*Abstract*:

We continue the study of obfuscation (initiated in [BGI+01]) in the context of point functions. A point function is a Boolean function that assumes the value 1 at exactly one point. Our main results are as follows:

* We provide a simple construction of efficient obfuscators for point functions for a slightly relaxed notion of obfuscation - wherein the size of the simulator has an inverse polynomial dependency on the distinguishing probability - which is nonetheless impossible for general circuits. This is the first known construction of obfuscators for a non-trivial family of functions under general computational assumptions. Our obfuscator is based on a probabilistic hash function constructed from a very strong one-way permutation, and can be realized in the standard model. Our construction also yields an obfuscator for point functions with multi-bit output.

* We also show that such a strong one-way permutation can be realized using a random permutation oracle, and that an assumption like the one we used is necessary for our obfuscator construction.

* Finally, we establish two impossibility results on obfuscating point functions which indicate that the limitations on our construction (in simulating only adversaries with single-bit output and in using non-uniform advice in our simulator) are in some sense inherent.

We stress that prior to this work, the known results on obfuscation were negative results for the general class of circuits [BGI+01] and positive results in the random oracle model [LPS04] or under non-standard number-theoretic assumptions [C97]. This work represents the first effort to bridge the gap between the two for a natural class of functionalities.
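A heuristic toy version of the hash-based idea: store only a hash of the accepted point, so the obfuscated program can still evaluate the point function while the point itself stays hidden behind the hash's one-wayness. This sketch uses SHA-256 as a stand-in for the very strong one-way permutation the construction actually requires, and carries none of the paper's formal guarantees:

```python
import hashlib

def obfuscate_point(x: bytes):
    """Return a point function that outputs 1 only on input x."""
    # Only the digest of the accepted point is retained; x itself is not
    # stored anywhere in the returned program.
    digest = hashlib.sha256(x).digest()

    def point_fn(y: bytes) -> int:
        return 1 if hashlib.sha256(y).digest() == digest else 0

    return point_fn

f = obfuscate_point(b"password123")
assert f(b"password123") == 1
assert f(b"guess") == 0
```

This is essentially how password checking is deployed in practice, which is why point functions are a natural first target for provable obfuscation.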

Preliminary version of the paper available at http://eprint.iacr.org/2005/001

**Open To The Public **

*CIS and The Voting Technology Project*

*Date*: **THURSDAY, January 13, 2005 **

*Time*: **2:00 p.m.- 4:30 p.m. **

*Place*: **32-D463, Star, Stata Center, 32 Vassar St. **

*Title*: **A Practical Voter-verifiable Election Scheme **

*Speaker*: **Professor Peter Ryan, University of Newcastle**

*Abstract*:

I discuss some of the requirements and challenges in designing dependable voting systems. I present a scheme, based on an earlier proposal by David Chaum, that provides voter-verification whilst maintaining ballot secrecy and requiring minimal trust in the system components. This scheme appears to be simpler to understand and to implement than Chaum's original.

**Open To The Public **

*CIS and The Voting Technology Project*

*Date*: **FRIDAY, January 14, 2005 **

*Time*: **2:00 p.m.- 4:30 p.m. **

*Place*: **32-D463, Star, Stata Center, 32 Vassar St. **

*Title*: **Trustworthy Electronic Election Results without Trusted Machines **

*Speaker*: **Andy Neff, VoteHere, Inc. **

*Abstract*:

Electronic devices and systems potentially offer many of the same benefits to the process of conducting elections that they have already delivered to the worlds of business and finance. Unfortunately, the requirement for ballot secrecy, along with the high degree of complexity possible in today's devices, makes it impossible for the general voting population to directly infer that systems tasked with collecting and counting votes are behaving accurately and honestly.

Recently, verifiable mix net and homomorphic tabulation protocols have effectively solved the problem of publicly counting encrypted ballots, thereby eliminating the need to trust special vote counters -- people or machines. Our focus in this talk will be on describing a new protocol, executable by voters while in the poll booth, that eliminates the need for voters to trust the vote casting devices (DREs) as well. By way of a challenge-response scheme, the voting device is prevented from casting an encrypted ballot which is inconsistent with the voter's intent without showing contradictory evidence which the voter can easily detect by simple inspection.

To prevent vote tampering after ballots have been cast, each voter is given a receipt which can be used to audit the public ballot box. However, because part of the voter's proof of ballot correctness is derived from direct observation in the poll booth of the temporal sequence by which the receipt is formed, the receipt is meaningless to someone else -- even when the voter is faced with the threat of coercion. Hence we may obtain for our large scale secret ballot elections the same certainty one obtains when counting a simple show of hands.

**OPEN TO THE PUBLIC **

*Date*: **FRIDAY, December 3, 2004 **

*Time*: **10:30 a.m.- 12:00 p.m. **

*Place*: **32-G575, Stata Center, 32 Vassar St. **

*Title*: **Fully Simulatable Multiparty Computation **

*Speaker*: **Yevgeniy Dodis, NYU **

*Abstract*:

We introduce and realize the notion of fully simulatable multiparty computation. Unlike any of the previous models, our notion simultaneously enjoys the following features:

- Main feature: The simulator does not have any extra power over the adversary. In particular, it cannot program any public parameters, learn secret keys not given to the adversary, or run in super-polynomial time. Thus, our implementation is fully deniable for tasks such as authentication and zero-knowledge.
- Universal composability (in particular, straight-line simulation).
- No PKI (although there exists one 'non-programmable' public key).
- Adaptive security.

We remark that it might seem impossible to realize all of the above features (or even just the main one), even for relatively simple tasks such as zero-knowledge. The way we overcome this apparent contradiction is by introducing a polynomial-time, fully off-line trusted party T to our model. We believe that the addition of a fully off-line T is a minimal and very realistic way to overcome the impossibility results in the standard model.

Joint work with Rafael Pass and Shabsi Walfish.

**OPEN TO THE PUBLIC **

*Date*: **FRIDAY, November 19, 2004 **

*Time*: **4:00 p.m.- 5:30 p.m. **

*Place*: **32-G449 (Kiva), Stata Center, 32 Vassar St. **

*Title*: **Concurrent General Composition of Secure Protocols in the Timing Model**

*Speaker*: **Yael Tauman Kalai, TOC, CSAIL **

*Abstract*:

Broad impossibility results have recently been proven regarding the feasibility of obtaining protocols that remain secure under concurrent composition, unless an honest majority or a trusted setup phase is assumed. These results hold both for the case of general composition (where a secure protocol is run many times concurrently with arbitrary other protocols) and self composition (where a single secure protocol is run many times concurrently).

One approach for bypassing these impossibility results is to consider more limited settings of concurrency that still realistically model real-world networks. In this paper, we investigate the feasibility of obtaining secure multiparty protocols in a network where certain time bounds are assumed. Specifically, the security of our protocols relies on the very reasonable assumption that local clocks do not ``drift'' too much (i.e., proceed at approximately the same rate). We show that under this timing assumption, it is possible to securely compute any multi-party functionality under concurrent general composition (as long as messages from the arbitrary other protocols are delayed for a specified amount of time).

This is joint work with Yehuda Lindell and Manoj Prabhakaran.

*JOINT TOC/CIS SEMINAR*

*Date: TUESDAY, November 9, 2004 *

*Time: 4:00 p.m.- 5:30 p.m. *

*Place: 32-G449 (Kiva), Stata Center, 32 Vassar St. *

*Title: Toward Privacy in Public Databases *

*Speaker: Cynthia Dwork, Microsoft Research, SVC *

*Abstract: *

*We will describe definitions and algorithmic results for the ``census problem''. Informally, in a census individual respondents give private information to a trusted (and trustworthy) party, who publishes a sanitized version of the data. There are two fundamentally conflicting requirements: privacy for the respondents and utility of the sanitized data. Unlike in the study of secure function evaluation, in which privacy is preserved to the extent possible given a specific functionality goal, in the census problem privacy is paramount; intuitively, things that cannot be learned ``safely'' should not be learned at all.*

*The definition of privacy and the requirements for a safe sanitization are important contributions of this work. Our definition of privacy formalizes the notion of protection from being brought to the attention of others -- one's privacy is maintained to the extent that one blends in with the crowd. Our definition of a safe sanitization emulates the definition of semantic security for a cryptosystem and says, roughly, that an adversary given access to the sanitized data is not much more able to compromise privacy than an adversary who is not given access to the sanitization.*

*Joint work with Shuchi Chawla, Frank McSherry, Adam Smith, and Hoeteck Wee. *

*Date: MONDAY, November 1, 2004 *

*Time: 3:30 p.m.- 5:00 p.m. *

*Place: 32-G575, Stata Center, 32 Vassar St. *

*Title: Multi-Trapdoor Commitments and their Applications to Non-Malleable Protocols *

*Speaker: Rosario Gennaro, IBM TJ Watson Research Ctr. *

*Abstract:*


*Date: FRIDAY, October 29, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: 32-G575, Stata Center, 32 Vassar St. *

*Title: On the (Im)possibility of Cryptography with Imperfect Randomness *

*Speaker: Shien Jin Ong, Harvard U. *

*Abstract:*

*We investigate the feasibility of a variety of cryptographic tasks with imperfect randomness. In particular, we concentrate on entropy sources, such as those considered by Santha and Vazirani, Chor and Goldreich, and Zuckerman. We show the following.*

*Certain cryptographic tasks like bit commitment, encryption, secret sharing, zero-knowledge, non-interactive zero-knowledge, and secure two-party computation for any non-trivial function are impossible to realize if parties have access to entropy sources with slightly less-than-perfect entropy, i.e., sources with imperfect randomness. These results are unconditional and do not rely on any unproven assumption.*

*On the other hand, based on stronger variants of standard assumptions, secure signature schemes are possible with imperfect entropy sources. As another positive result, we show (without any unproven assumption) that interactive proofs can be made sound with respect to imperfect entropy sources.*

*Joint work with Yevgeniy Dodis, Manoj Prabhakaran and Amit Sahai. *

*Date: FRIDAY, October 22, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: 32-G449 (Kiva), Stata Center, 32 Vassar St. *

*Title: Private Approximations with Polylogarithmic Communication *

*Speaker: David Woodruff, CSAIL, MIT *

*Abstract:*

*A private approximation of a function f is defined to be another function F that approximates f in the usual sense, but does not reveal any information about x other than what can be deduced from f(x). We give the first secure two-party private approximation of the L_2 distance with polylogarithmic communication in the semi-honest model. In particular, we obtain a polylogarithmic private approximation of the Hamming distance, resolving the main open question of Feigenbaum et al [FIMNSW00].*

*We then look at the private near neighbor and private all-nearest neighbors problem: Alice has x1, ..., xn in {0,1}^d, Bob has y1, ... , yn in {0,1}^d, and they should privately learn all pairs xi, yj for which f(xi,yj) < t for some distance function f and threshold t. We give more efficient protocols for a general class of functions f, resolving several open questions. Even for n=1, we obtain protocols more communication-efficient than generic secure function evaluation. We then relax this problem by defining the private approximate near neighbor problem, introduce a formal security model for the problem, and give solutions with sublinear communication.*

*Joint work with Piotr Indyk. *
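*As a concrete (and deliberately non-private) illustration of the approximation side of this result, the L_2 distance between long vectors can be estimated from short sketches computed locally with shared randomness. The cryptographic machinery that makes such a protocol private is not modeled here, and all names below are illustrative:*

```python
import math
import random

def l2_distance(x, y):
    """Exact Euclidean (L_2) distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def sketch(v, k, seed):
    """Project v onto k random +/-1 directions; a shared seed models shared randomness."""
    rng = random.Random(seed)
    rows = [[rng.choice((-1, 1)) for _ in v] for _ in range(k)]
    return [sum(s * a for s, a in zip(row, v)) / math.sqrt(k) for row in rows]

# Alice and Bob sketch their long vectors locally; the distance between the
# short sketches approximates the true L_2 distance in expectation.
x = [1.0, 0.0, 3.0, 2.0] * 50   # dimension 200
y = [0.0, 1.0, 3.0, 4.0] * 50
approx = l2_distance(sketch(x, 64, seed=7), sketch(y, 64, seed=7))
exact = l2_distance(x, y)
```

*The point of the protocol in the talk is that such an approximation can be computed so that neither party learns anything beyond the (approximate) distance itself; the sketch above shows only the accuracy side of that trade-off.*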

*Date: FRIDAY, October 15, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: 32-G449 (Kiva), Stata Center, 32 Vassar St. *

*Title: Optimal Error Correction Against Computationally-Bounded Noise *

*Speaker: Chris Peikert, CSAIL, MIT *

*Abstract: TBA *

Cryptography and Information Security Group: Seminars and Talks

*Here are some of the recent (or forthcoming) seminars and talks sponsored by the Cryptography and Information Security Group of MIT's Laboratory for Computer Science. (See also the talks in the Applied Security Seminar.) Please contact Be Blackburn, imbe@mit.edu, if you wish to be added to the mailing list for these seminars. *

*Date: FRIDAY, October 8, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: 32-G575, Stata Center, 32 Vassar St. *

*Title: Worst-case to Average-case Reductions based on Gaussian Measures *

*Speaker: Daniele Micciancio, UC San Diego *

*Abstract: *

*We show that solving modular linear equations on the average is at least as hard as approximating several lattice problems (e.g. SVP, SIVP, CRP) in the worst case within a factor almost linear in the rank of the lattice. This greatly improves on all previous work on the subject starting from Ajtai's seminal paper (STOC, 1996), up to the strongest previously known results by Micciancio (STOC, 2002). Our results also bring us closer to the limit where the problems are no longer known to be in the intersection of NP and coNP.*

*Our main tools are Gaussian measures on lattices, the high dimensional Fourier transform, and a new lattice parameter which determines the amount of Gaussian noise that one has to add to a lattice in order to get close to a uniform distribution. In addition to yielding quantitatively much stronger results, the use of this parameter allows us to simplify many of the complications in previous work. *
*Joint work with Oded Regev (FOCS 2004). *

*Date: FRIDAY, October 1, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: RSA Lounge, G-5th floor, Stata Center, 32 Vassar St. *

*Title: ``Fair Zero Knowledge'' *

*Speaker: Matt Lepinski, CSAIL, MIT *

*Abstract: *

*A traditional zero-knowledge proof occurs between a Prover and a single Verifier.*

*Quite paradoxically, the verifier is convinced that a theorem is true without learning WHY. (That is, he learns nothing except that the statement in question holds.)*

*We put forward a new type of zero-knowledge proof with many Verifiers satisfying the following additional property: not only is each verifier convinced that a theorem is true without knowing why, but he is also guaranteed that the "proof" is zero-knowledge for all other Verifiers (even those maliciously colluding with a malicious Prover). *
*Joint Work with Silvio Micali and Abhi Shelat. *

*Date: Special Talk: Thursday, September 30, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: NOTE: 32-D463 (STAR), Stata Center, 32 Vassar Street *

*Title: Secret-Ballot Receipts: True Voter-Verifiable Elections *

*Speaker: David Chaum *

*Abstract: *

*A new kind of receipt sets a far higher standard of security by letting voters verify correctness of the election outcome -- even if all election computers and records were to be compromised.*

*In the voting booth, the voter can see his or her choices clearly indicated on the receipt. After taking it out of the booth, the voter can use it to ensure that the votes it contains are included correctly in the final tally. But, because the choices are safely encrypted before the receipt can be removed from the booth, it cannot be used to show others how the voter voted.*

*Several ways to realize such receipts will be described. Some are well suited to be added on to current touch-screen voting machines. A very simple type will be introduced that is similar in use to current optical scan voting systems. *

*Date: FRIDAY, September 24, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: NOTE CHANGE: 32-G449 (Kiva), Stata Center, 32 Vassar St. *

*Title: "Cryptographic Fairness and Its Game-Theoretic Power" *

*Speaker: Abhi Shelat, CSAIL, MIT *

*Abstract:*

*We show a powerful connection between Cryptography and Game Theory. Specifically, *
*1. FAIR SECURE COMPUTATION: We exhibit the first protocol for jointly evaluating a function, on privately held individual inputs, that is totally FAIR: whatever any number of bad players might do, either each player gets his correct and private output, or no player learns any information whatsoever. *
*2. SAFE CHEAP-TALK: We use fair secure computation to achieve equilibria more powerful than Nash as safely as possible: without providing any extra power to any coalition of players.*

*Joint Work with Matt Lepinski, Silvio Micali and Chris Peikert. *

*Date: FRIDAY, September 10, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: RSA Lounge, Stata Center, 32 Vassar St. *

*Title: An Unconditional Study of Computational Zero Knowledge *

*Speaker: Salil Vadhan, Harvard University *

*Abstract: *

*We prove a number of general theorems about CZK, the class of problems possessing computational zero-knowledge proofs. Our results are *unconditional*, in contrast to most previous works on CZK which rely on the assumption that one-way functions exist.*

*We establish several new characterizations of CZK, and use these characterizations to prove results such as: *
*-- Honest-verifier CZK equals general CZK. *
*-- Public-coin CZK equals private-coin CZK. *
*-- CZK is closed under union (and more generally, "monotone formula closure"). *
*-- CZK with imperfect completeness equals CZK with perfect completeness. *
*-- Any problem in [CZK intersect NP] can be proven in computational zero knowledge by a BPP^NP prover. *
*-- CZK with black-box simulators equals CZK with general, non-black-box simulators. *
*The above equalities refer to the resulting *class* of problems (and do not necessarily preserve other efficiency measures such as round complexity).*

*Our approach is to combine the conditional techniques previously used in the study of CZK with the unconditional techniques developed in the study of SZK, the class of problems possessing statistical zero-knowledge proofs. To enable this combination, we prove that every problem in CZK can be decomposed into a problem in SZK together with a set of instances from which a one-way function can be constructed. *

*Date: FRIDAY, June 11, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: G631, Stata Center, 32 Vassar St. *

*Title: Dolev-Yao-type Abstraction of Modular Exponentiation - the Cliques Case Study *

*Speaker: Olivier Pereira, University of Louvain, Belgium *

*Abstract: *

*Guided by the case-study of the Cliques authenticated group key agreement protocols (proposed by Ateniese et al. in 1998), we designed a Dolev-Yao-type abstraction of modular exponentiation. Reasoning in our model allowed us to: *

*- prove that several published protocols were flawed, *

*- design protocols which are secure with respect to our model, *

*- prove that all Cliques-type authenticated group key agreement protocols become insecure as soon as they are executed by at least four parties. *

*Date: FRIDAY, May 21, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: G-5th floor RSA lounge, Stata Center, 32 Vassar St. *

*Title: COLLABORATION WITHOUT COLLUSION (New Security for Protocols and Games) *

*Speaker: Abhi Shelat, CSAIL, MIT *

*Abstract: *

*In a protocol, collaboration (among honest parties) appears inseparable from collusion (among dishonest ones). Traditionally, secure protocols just minimize the damage inflictable by malicious colluding players, but do not prevent it. *

*We show how to design secure protocols which prevent deviating parties from colluding during run time. *

*A key ingredient of our solution is a new type of secure computation which makes any type of steganography provably impossible. *

*Joint work with Silvio Micali and Matt Lepinski. *

*Date: FRIDAY, April 23, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: 32-G631, Gates Tower, Stata Center, 32 Vassar Street *

*Title: "Round-Optimal Secure Two-Party Computation" *

*Speaker: Jonathan Katz, University of Maryland *

*Abstract:*

*We consider the central cryptographic task of performing a two-party computation secure against malicious adversarial behavior, in a setting where both parties have private inputs and they wish to compute some agreed-upon function in a way that reveals nothing except the respective output(s). Despite extensive research in this area, the exact round-complexity of this fundamental problem was not previously known.*

*By showing matching lower- and upper-bounds, we establish the exact round complexity of secure two-party computation (with respect to black-box proofs of security). In particular, we show a lower bound establishing that four rounds are not sufficient to securely compute the coin-tossing functionality for a super-logarithmic number of coins; the proof extends to rule out 4-round protocols for other functionalities as well.*

*On the other hand, we construct a new protocol based on trapdoor permutations that computes an arbitrary poly-time (randomized) functionality using only five rounds. Under a slightly stronger assumption, a modification of the protocol --- still requiring only five rounds and without assuming erasures --- tolerates an adaptive adversary who corrupts at most one of the players. Thus, five rounds are necessary and sufficient for secure two-party computation. *
*Joint work with Rafail Ostrovsky*

*Date: FRIDAY, April 16, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: 5th floor lounge (the RSA lounge), Gates Tower, Stata Center, 32 Vassar Street *

*Title: "FairPlay -- A Secure Two-Party Computation System" *

*Speaker: Yaron Sella, Hebrew University, Jerusalem, Israel *

*Abstract:*

*Advances in modern cryptography coupled with rapid growth in processing and communication speeds make secure two-party computation a realistic paradigm. Yet, thus far, interest in this paradigm has remained mostly theoretical. *
*The talk introduces Fairplay, a full-fledged system that implements generic secure function evaluation (SFE). Fairplay comprises a high-level procedural definition language called SFDL tailored to the SFE paradigm; a compiler of SFDL into a one-pass Boolean circuit presented in a language called SHDL; and Bob/Alice programs that evaluate the SHDL circuit in the manner suggested by Yao in \cite{YAO86}. *
*This system enables us to present the first evaluation of an overall SFE in real settings, as well as examining its components and identifying potential bottlenecks. It provides a test-bed of ideas and enhancements concerning SFE, whether by replacing parts of it, or by integrating with it. We exemplify its utility by examining several alternative implementations of {\em oblivious transfer\/} within the system, and reporting on their effect on overall performance. *

*Date: FRIDAY, April 2, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: 32-G631, Gates Tower, Stata Center, 32 Vassar Street *

*Title: "Efficient and Secure Multi-Party Computation with Faulty Majority and Complete Fairness" *

*Speaker: Ke Yang, Carnegie Mellon University *

*Abstract:*

*We study the problem of constructing secure multi-party computation (MPC) protocols that are {\em completely fair} --- meaning that either all the parties learn the output of the function, or nobody does --- even when a majority of the parties are corrupted.*

*We first propose a framework for fair multi-party computation, within which we formulate a definition of secure and fair protocols. The definition follows the standard simulation paradigm, but is modified to allow the protocol to depend on the running time of the adversary. In this way, we avoid a well-known impossibility result for fair MPC with corrupted majority; in particular, our definition admits constructions that tolerate up to $(n-1)$ corruptions, where $n$ is the total number of parties.*

*Next, we define a ``commit-prove-fair-open'' functionality and construct an efficient protocol that realizes it, using a new variant of a cryptographic primitive known as ``time-lines.'' With this functionality, we show that some of the existing secure MPC protocols can be easily transformed into fair protocols while preserving their security.*

*Putting these results together, we construct efficient, secure MPC protocols that are completely fair even in the presence of corrupted majorities. Furthermore, these protocols remain secure when arbitrarily composed with any protocols, which means, in particular, that they are concurrently-composable and non-malleable. Finally, as an example to our results, we show a very efficient protocol that fairly and securely solves the socialist millionaires' problem. *
*This is joint work with Juan Garay and Philip MacKenzie at Bell Labs. *
*The paper is available at eprint report 2004/009. *

*Date: FRIDAY, March 19, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: NE43-308, 200 Tech Square *

*Title: "Randomness Extraction via Common Pseudorandom Functions and Its Application to the Hashed Diffie-Hellman Transform" *

*Speaker: Hugo Krawczyk, Technion *

*Abstract:*

*It is generally accepted that for securing common applications of Diffie-Hellman (DH), such as key exchange and encryption, one needs to ensure two elements: that the group over which the DH is computed satisfies the Decisional Diffie-Hellman (DDH) assumption, and that the DH output is hashed (typically, via a pairwise independent hash family) into a shorter pseudorandom output. Motivated by practical considerations, we study two relaxations of these basic premises.*

*First we show that strict pairwise independence of the hash family can be relaxed by extending the well-known Leftover Hash Lemma to the case of "imperfect" hash functions. On this basis we study the suitability as extractors of practical pseudorandom families (CBC, Cascade and HMAC), and prove their extraction properties under several (idealized and other) assumptions on the underlying primitives (block ciphers and compression functions). The twist here is the unconventional use of these pseudorandom families with random BUT KNOWN keys.*

*Second, we show that the requirement to work over DDH groups (that is, groups that satisfy the DDH assumption) can be relaxed to the sole assumption that the group over which DH is computed contains a large enough DDH subgroup. This justifies the common use in practice of non-DDH groups such as Zp*. Moreover, we show that one can work directly over Zp* without requiring any knowledge of the prime factorization of p-1 and without even having to find a generator of Zp*. These results are obtained via a general characterization of DDH groups in terms of their DDH subgroups, and a relaxation of the DDH assumption (called t-DDH) via computational entropy. In addition, we show that, under the short-exponent dlog assumption, DH with short exponents is as secure as DH with full exponents.*

*The talk is based on work by the speaker in the context of the IPsec standards, and more academic work with R. Gennaro, J. Hastad, and T. Rabin. *

*Date: FRIDAY, March 12, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: NE43-308, 200 Tech Square *

*Title: ``Confronting the Composition Conundrum: Universal Composability without Set-up Assumptions'' *

*Speaker: Manoj Prabhakaran, Princeton University *

*Abstract:*

*We propose a modification to the framework of Universally Composable (UC) security [Canetti'01]. Our new notion involves comparing the protocol executions with an ideal execution involving ideal functionalities (just as in UC-security), but allowing the environment and adversary access to some super-polynomial computational power. We argue the meaningfulness of the new notion, which in particular subsumes many of the traditional notions of security.*

*We generalize the Universal Composition theorem of [Canetti] to the new setting. Then under new computational assumptions, we realize secure multi-party computation (for static adversaries), without a common reference string or any other set-up assumptions, in the new framework. This is known to be impossible under the UC framework.*

*Joint work with Amit Sahai. *

*Date: FRIDAY, February 27, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: NE43-308, 200 Tech Square *

*Title: "A Framework for Password-Based Authenticated Key Exchange" *

*Speaker: Rosario Gennaro, IBM, T.J. Watson *

*Abstract:*

*In this paper we present a general framework for password-based authenticated key exchange protocols, in the common reference string model. Our protocol is actually an abstraction of the key exchange protocol of Katz et al. and is based on the recently introduced notion of smooth projective hashing by Cramer and Shoup. We gain a number of benefits from this abstraction.*

*First, we obtain a modular protocol that can be described using just three high-level cryptographic tools. This allows a simple and intuitive understanding of its security. Second, our proof of security is significantly simpler and more modular. Third, we are able to derive analogues to the Katz et al. protocol under additional cryptographic assumptions. Specifically, in addition to the DDH assumption used by Katz et al., we obtain protocols under both the Quadratic and N-Residuosity assumptions. In order to achieve this, we construct new smooth projective hash functions.*

*Joint work with Yehuda Lindell. *

*Date: FRIDAY, February 13, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: NE43-308, 200 Tech Square *

*Title: "Simpler Session-Key Generation from Short Random Passwords" *

*Speaker: Minh-Huyen Nguyen, Harvard University *

*Abstract:*

*What is the minimal amount of information that two parties must share in order to perform nontrivial cryptography? Both private-key and public-key cryptography require the amount of shared information to be quite large, e.g. a private key is a long random string. Hence, a natural question is whether we can do non-trivial cryptography when the parties share only low-entropy keys. For example, if Alice and Bob share only a low-entropy password, can we build a protocol enabling them to generate a long random key in the presence of an active adversary?*

*Goldreich and Lindell (CRYPTO '01) presented the first protocol for this problem of password-authenticated key exchange in the standard model. Their protocol is remarkable since the only setup assumption required is the shared password (which is chosen at random from an arbitrary dictionary); there is no common reference string or public parameter. However, their protocol uses several heavy tools and has a complicated analysis.*

*We present a simplification of the Goldreich-Lindell protocol and analysis for the special case where the password is a short random string (like an ATM PIN). Our protocol can be converted into one for arbitrary dictionaries using a common reference string of logarithmic length. *

*Date: FRIDAY, February 6, 2004 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: NE43-308, 200 Tech Square *

*Title: TOC/CIS Planning Meeting *

*Speaker: Internal MIT meeting for planning and discussion *

*Note: Not a public meeting. *

*Date: FRIDAY, DECEMBER 5, 2003 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: NE43-308, 200 Tech Square *

*Title: "CCA-secure encryption (and some other goodies) from the Bilinear DDH assumption" *

*Speaker: Ran Canetti, IBM, Hawthorne *

*Abstract: *

*I'll present a new encryption scheme that is secure against chosen ciphertext attacks, based on the Bilinear Decisional Diffie-Hellman assumption. The scheme does not use non-interactive zero-knowledge proofs (which underlie, either explicitly or implicitly, all known CCA-secure schemes in the standard model). *

*The scheme is constructed in two steps: First we define a weak version of identity-based encryption (WIBE), and show how to construct it based on the BDDH assumption in the standard model. (In contrast, the standard version of IBE is only known to be realizable in the random oracle model.) Next, we show how to obtain CCA security from any WIBE scheme. *

*Time permitting, I'll also mention extensions of WIBE to the hierarchical case and applications to forward-secure encryption. *

*The talk will cover material from reports 2003/083 and 2003/182 on the eprint archive. Both works are joint with Shai Halevi and Jonathan Katz. *

*Date: FRIDAY, NOVEMBER 21, 2003 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: NE43-308, 200 Tech Square *

*Title: "Private analysis of data sets" *

*Speaker: Benny Pinkas, HP Research *

*Abstract:*

*Consider a scenario of two or more parties holding large private data sets, whose goal is to perform some simple analysis of the data while preserving privacy. In other words, given data sets X and Y, the parties' goal is to compute F(X,Y), for some function F, while hiding any other information about X and Y.*

*It is well known that generic constructions can perform this secure computation with polynomial overhead for any polynomial-time F(), but our goal is to design privacy preserving constructions with linear or sublinear overhead, that can be applied to very large data sets. We describe such constructions, secure against both semi-honest and malicious adversaries, for two types of functions: (1) Computing the intersection of two sets, and (2) computing the k-ranked item (e.g. the median) of the union of the sets. *

*Date: TUESDAY, NOVEMBER 11, 2003 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: NE43-518, 200 Tech Square *

*Title: "Secure Two-Party Computation Under Concurrent Self Composition" *

*Speaker: Yehuda Lindell, Weizmann Institute *

*Abstract: *

*In the stand-alone setting for secure computation, a single protocol execution takes place. Until recently, most research on secure computation considered this stand-alone setting. However, security in this setting does not imply security in the setting of concurrent composition, where many protocol executions take place simultaneously. Therefore, known feasibility results for the stand-alone setting do not imply feasibility under composition. *

*In this talk, we consider the setting of concurrent {\em self} composition, where a single protocol is run many times concurrently. (This is the setting considered for concurrent zero-knowledge.) We present severe impossibility results for the case of unbounded concurrency (where the protocol must remain secure for any polynomial number of executions), and lower bounds for the case of bounded concurrency (where an a-priori bound on the number of executions is known). We also show that under bounded-concurrency, it is possible to securely compute any functionality. Unfortunately, however, due to our lower bounds, such positive results must be due to rather inefficient protocols. We note that our results are for the plain model, where no trusted setup phase is assumed (and so, for example, a common reference string is not allowed). *

*Date: FRIDAY, OCTOBER 31, 2003 *

*Time: 10:30 a.m.- 12:00 p.m. *

*Place: NE43-308, 200 Tech Square *

*Title: "Extractors via Low-Degree Polynomials" *

*Speaker: Muli Safra, Tel Aviv University *

*Abstract: *

*Randomness has proven extremely useful in various aspects of computation, including Algorithm Design, Cryptography, Game Theory and Hardness of Approximation. Derandomization, the process of reducing the number of random bits an algorithm uses, possibly eliminating randomness altogether, has repeatedly proven even more productive. *

*One of the main tools in derandomization techniques is the extractor. An extractor takes a faulty source of randomness, which contains some entropy but is not uniformly distributed among all strings, and extracts from it as much entropy as possible. *

*The talk will describe two constructions of extractors that differ from previous constructions by utilizing low-degree polynomials. They basically interpret a given string of the faulty source as a polynomial of low degree, and extract random bits by picking a predetermined sequence of points on which to evaluate that polynomial (out of a small set of potential sequences). *

*The talk is based on a joint paper with Amnon Ta-Shma and David Zuckerman, and a consequent improved result due to Ronen Shaltiel and Chris Umans.*

*Date: FRIDAY, October 24, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: "Parallelizable Authentication Trees" *

*Speaker: Charanjit S. Jutla, IBM T.J. Watson Research Center (written with Eric Hall) *

*Abstract: *

*We define a new authentication tree in the "symmetric key" setting, which has the same computational, storage and security parameters as the well known Merkle Authentication Tree, but which unlike the latter, allows for all the cryptographic operations required for an update to be performed in parallel. The cryptographic operations required for verification can also be parallelized. We also prove the security of an optimized scheme which combines IAPM (Integrity Aware Parallelizable Mode) with the new parallelizable authentication tree. *

*The paper preprint can be found at http://eprint.iacr.org/2002/190/ *
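*As background, the classic Merkle authentication tree that the abstract compares against can be sketched in a few lines of Python. This toy version (SHA-256, power-of-two number of leaves) is purely illustrative and is not the parallelizable scheme from the talk; note how each level's hash depends on the level below it, so an update must recompute hashes sequentially up to the root --- the bottleneck the new construction removes. *

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 as the tree's hash function (an illustrative choice)."""
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return the tree as a list of levels: leaf hashes first, root last."""
    assert leaves and (len(leaves) & (len(leaves) - 1)) == 0, "need 2^k leaves"
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def auth_path(levels, index):
    """Sibling hashes from a leaf position up to (but not including) the root."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])  # sibling at this level
        index //= 2
    return path

def verify(leaf, index, path, root):
    """Recompute the root from a leaf and its authentication path."""
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [b"block-0", b"block-1", b"block-2", b"block-3"]
levels = build_tree(leaves)
root = levels[-1][0]
```

*Verifying leaf 2, for example, is `verify(b"block-2", 2, auth_path(levels, 2), root)`. *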

*Date: MONDAY, October 20, 2003 *

*Time: 4:00 p.m. *

*Place: NE43-308, 200 Tech Square *

*Title: "How far are we from a complete cryptography? An incomplete view" *

*Speaker: Jean-Jacques Quisquater, University of Louvain *

*Abstract: *

*In this informal talk, Jean-Jacques Quisquater will discuss his research group at UCL, describe what they are doing, and then speak about many important issues now facing cryptography --- including both implementation and social aspects. *

*Short examples will be given of recent results in flexible cryptography, watermarking, and how to verify remotely the integrity of (large) files. *

*Date: FRIDAY, October 17, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: "Bounded-Concurrent Secure Two-Party Computation in a Constant Number of Rounds" *

*Speaker: Rafael Pass, Royal Institute of Technology, Sweden *

*Abstract: *

*The original setting in which secure two-party protocols were investigated allowed the execution of a single instance of the protocol at a time. A more realistic setting, however, is one which allows the concurrent execution of protocols. In the concurrent setting, many two-party protocols are executed at the same time, involving many parties which may be talking with the same (or many) other parties simultaneously. This setting presents the new risk of a coordinated attack in which an adversary controls many parties, interleaving the executions of the protocols and choosing messages based on other partial executions of the protocol. *

*In this work we consider the problem of constructing a general protocol for secure two-party computation in a way that preserves security under concurrent composition. In our treatment, we focus on the case where an a-priori bound on the number of concurrent sessions is specified before the protocol is constructed (a.k.a. bounded concurrency). We make no set-up assumptions. *

*Lindell (STOC 2003) has shown that any protocol for bounded-concurrent secure two-party computation, whose security is established via black-box simulation, must have round complexity that is strictly larger than the bound on the number of concurrent sessions. In this talk I will show how to construct a (non black-box) protocol for realizing bounded-concurrent secure two-party computation in a constant number of rounds. The only previously known protocol for realizing the above task required more rounds than the pre-specified bound on the number of sessions (despite usage of non black-box simulation techniques). *

*Joint work with Alon Rosen. *

***********************************************************************

*Date: FRIDAY, October 10, 2003 *

*Time: 2:00 p.m.- 4:00 p.m. *

*Place: NE43-518, 200 Tech Square *

*Title: "Private Circuits: Securing Hardware against Probing Attacks" *

*Speaker: Amit Sahai, Princeton U *

*Abstract:*

*Can you guarantee secrecy even if an adversary can eavesdrop on your brain? We consider the problem of protecting privacy in circuits, when faced with an adversary that can access a bounded number of wires in the circuit (we call such an attack a "probing attack"). This question is motivated by side channel attacks, which allow an adversary to gain partial access to the inner workings of hardware, breaking the critical "black-box" assumption which underlies most cryptographic constructions.*

*In this talk, we give several efficient techniques for building private circuits resisting probing attacks. We initiate a systematic study of the complexity of such private circuits. *
***********************************************************************

*Date: FRIDAY, October 10, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: Lower Bounds for Non-Black-Box Zero-Knowledge *

*Speaker: Boaz Barak, Institute for Advanced Study, Princeton, NJ *

*Abstract: *

*Even after 20 years of very fruitful research, there are still some fascinating and important open questions about zero-knowledge proof systems. In particular, there are still relatively few lower bounds and impossibility results on the types of zero-knowledge systems that can exist for proving non-trivial statements. Moreover, many of the known lower bounds hold only for *black-box* zero-knowledge, and recent results show that such lower bounds do *not* imply corresponding lower bounds for the general (i.e., *non-black-box*) case. *

*In this talk I will discuss the open problems, and show new lower bounds and impossibility results for general (i.e., non-black-box) zero-knowledge proofs and arguments. The results are that, under reasonable complexity assumptions: *

*1. There does not exist a constant-round zero-knowledge *strong* proof (or argument) of knowledge (as defined by Goldreich (2001)) for a nontrivial language. *

*2. There does not exist a two-round zero-knowledge *proof* system with perfect completeness for an NP-complete language. *

*The previous impossibility result for two-round zero knowledge, by Goldreich and Oren (J. Cryptology, 1994) was only for the case of *auxiliary-input* zero-knowledge proofs and arguments. *

*3. There does not exist a constant-round public-coin *proof* system for a nontrivial language that is *resettable* zero knowledge. This result also extends to *bounded* resettable zero knowledge. *

*In contrast, we show that under reasonable assumptions, there does exist such a (computationally sound) *argument* system that is bounded-resettable zero knowledge. *

*The complexity assumptions we use are not commonly used in cryptography. However, in all cases, we show that assumptions like ours are necessary for the above results. *

*Joint work with Yehuda Lindell (IBM T.J. Watson) and Salil Vadhan (Harvard). *

*Date: FRIDAY, October 3, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: "Zero-Knowledge Sets" *

*Speaker: Silvio Micali, CSAIL, MIT *

*Abstract:*

*Despite the centrality of sets in Mathematics, no zero-knowledge treatment of sets existed to date.*

*We fill this gap by showing that a polynomial-time Prover can commit to an arbitrary set S and then, for any sequence of potential elements X, prove "X is in S" or "X is not in S", whichever the case may be, without revealing any more than is implied by these mere assertions.*

*Our implementation, based on the discrete logarithm problem, is non-interactive and extremely efficient.*

*Joint Work with Michael Rabin and Joe Kilian. *

*Date: FRIDAY, September 26, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: "Proving Hard-Core Predicates Using List Decoding" *

*Speaker: Adi Akavia, CSAIL, MIT *

*Abstract:*

*We introduce a unifying framework for proving that predicate $P$ is hard-core for a one-way function $f$, and apply it to a broad family of functions and predicates, reproving old results in an entirely different way as well as showing new hard-core predicates for well known one-way function candidates.*

*Our framework extends the list-decoding method of Goldreich and Levin for showing hard-core predicates. Namely, a predicate will correspond to some error correcting code, predicting a predicate will correspond to access to a corrupted codeword, and the task of inverting one-way functions will correspond to the task of list decoding a corrupted codeword.*

*A characteristic of the error correcting codes which emerge and are addressed by our framework, is that codewords can be approximated by a small number of heavy coefficients in their Fourier representation. Moreover, as long as corrupted words are close enough to legal codewords, they will share a heavy Fourier coefficient. We list decode such codes, by devising a learning algorithm applied to corrupted codewords for learning heavy Fourier coefficients.*

*For codes defined over the domain $\{0,1\}^n$, a learning algorithm by Kushilevitz and Mansour already exists. For codes defined over $Z_N$, which are the codes that emerge for predicates based on number-theoretic one-way functions such as RSA and exponentiation modulo primes, we develop a new learning algorithm. This latter algorithm may be of independent interest outside the realm of hard-core predicates. *
*Joint work with Shafi Goldwasser and Muli Safra *

*Date: FRIDAY, September 19, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: "Making Data Disappear" *

*Speaker: Radia Perlman, Sun Microsystems *

*Abstract:*

*It's not so easy to reliably delete data. It can still exist on backup media, in swap areas, etc. In theory, the solution is to encrypt all data, so that deletion just requires throwing away the decryption key. If, for instance, email is supposed to last only 2 weeks, then each user should generate new public keys every day, with 2-week expiration dates, after which the user reliably throws away the private key. However, it is unrealistic to expect users to be able to manage keys.*

*The alternative is an "ephemerizer" which is a third party whose job it is to create and delete keys. The cost of doing this can be amortized across many users. We want a system that allows Alice to send a message to Bob that will disappear after some finite time (say 2 weeks). The system requires the cooperation of Bob (so that he does not save copies of the unencrypted message). Bob will only be able to read messages with the aid of the ephemerizer.*

*The system must have the property that only someone that can prove they are Bob (i.e., knows Bob's private key) before the ephemeral key is discarded can get the ephemerizer to decrypt the message. It also should be designed with minimal trust in the ephemerizer. *

*Date: FRIDAY, September 5, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: "Encryption of High-Entropy Sources: How Best to Fool an Unbounded Adversary with a Short Key" *

*Speaker: Adam Smith, MIT *

*Abstract:*

*Russell and Wang (RW02) recently introduced an elegant, information-theoretic notion called "entropic security of encryption": they require a condition similar to semantic security (GM84), but only when the distribution on messages has high min-entropy. They show that this notion of security can be achieved with very short keys for sufficiently entropic message spaces.*

*We show that entropic security is equivalent to *indistinguishability* of encryptions of messages from high min-entropy sources -- an equivalence inspired by the conventional notion of indistinguishability (GM84). This equivalence allows us to give a variety of constructions which use key length n - t + 2 log(1/eps) for message spaces of entropy at least t with error parameter eps. The constructions can be viewed as special randomness extractors with an additional property to allow decryption.*

*The same equivalence also allows us to prove lower bounds on entropic security, either by direct argument or by reduction to lower bounds on extractors. *
*Joint work with Yevgeniy Dodis (NYU). *

*Date: FRIDAY, June 6, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: Note Room Change: NE43-518, 200 Tech Square *

*Title: ``Derandomization in Cryptography'' *

*Speaker: Shien Jin Ong, MIT *

*Abstract:*

*We give two applications of Nisan--Wigderson-type (`non-cryptographic') pseudorandom generators in cryptography. Specifically, assuming the existence of an appropriate NW-type generator, we construct:*

*1. A one-message witness-indistinguishable proof system for every language in NP, based on any trapdoor permutation. This proof system does not assume a shared random string or any setup assumption, so it is actually an `NP proof system.' *
*2. A noninteractive bit commitment scheme based on any one-way function.*

*The specific NW-type generator we need is a hitting set generator fooling *nondeterministic* distinguishers. It is known how to construct such a generator if E = DTIME(2^{O(n)}) has a function of nondeterministic circuit complexity 2^{Omega(n)} (Miltersen and Vinodchandran, FOCS `99).*

*Our witness-indistinguishable proofs are obtained by using the NW-type generator to derandomize the ZAPs of Dwork and Naor (FOCS `00). To our knowledge, this is the first construction of an NP proof system achieving a secrecy property.*

*Our commitment scheme is obtained by derandomizing the interactive commitment scheme of Naor (J. Cryptology, 1991). Previous constructions of noninteractive commitment schemes were only known under incomparable assumptions.*

*Joint work with Boaz Barak (Weizmann Institute) and Salil Vadhan (Harvard). *

*Date: FRIDAY, May 30, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: On the (In)security of the Fiat-Shamir Paradigm *

*Speaker: Yael Tauman, MIT *

*Abstract:*

*In 1986, Fiat and Shamir introduced a general method for transforming secure 3-round public-coin identification schemes into digital signature schemes, which are efficient and hopefully secure against chosen message attacks. The method is significant, because all other known constructions which guarantee such security are substantially more inefficient and complicated in design.*

*In 1996, Pointcheval and Stern proved that signature schemes obtained by the Fiat-Shamir transformation are secure in the so-called "Random Oracle Model". The question is: does the security of the Fiat-Shamir transformation in the Random Oracle Model imply its security in the "real world"?*

*In this talk, we will answer this question negatively. Namely, we will show that there exist secure 3-round public-coin identification schemes for which the Fiat-Shamir methodology produces insecure digital signature schemes for any implementation of the Random Oracle Model by any function ensemble. *
*This is joint work with Shafi Goldwasser. *

*Date: FRIDAY, May 23, 2003 *

*NOTE: NO CIS SEMINAR TODAY *

*REASON: 100th EECS celebration *

*Date: FRIDAY, May 16, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: ``A zero knowledge protocol for anonymously proving that a key is on a trusted hardware device'' *

*Speaker: Ernie Brickell, Intel *

*Abstract: *

*We will present an efficient protocol for demonstrating that a public signature verification key corresponds to a private signature generation key that is contained in a certified hardware device, without identifying which certified hardware device contains the private signature generation key.*

*Each hardware device contains a unique certified key, but neither the public nor the private unique keys are ever known outside of the hardware device. The protocol provides a verifier with the ability to revoke a hardware device. The cryptographic assumptions required are an interval RSA assumption and a Bounded Decision Diffie-Hellman (DDH) assumption. The proofs use the random oracle model.*

*We will discuss the privacy issues with cryptographic keys associated with trusted computing, and motivate the privacy requirements that were met in this protocol. We will also discuss potential performance and capability enhancements to the protocol. *

*Date: FRIDAY, May 9, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, NOTE ROOM CHANGE *

*Title: Physically Observable Cryptography, Part Two *

*Speaker: Leo Reyzin, BU *

*Abstract: Part II: The Theorems*

*Complexity-theoretic Cryptography has enjoyed tremendous success but has ``abstracted away'' computation: an adversary may attack a cryptographic algorithm essentially only by exchanging messages with it. Consequently, this theory fails to take into account the PHYSICAL nature of actual computation, and cannot protect against PHYSICAL attacks cleverly exploiting the information leakage inherent to the PHYSICAL execution of any cryptographic algorithm. Such ``physical observation attacks'' can successfully break mathematically impregnable systems.*

*We respond to the present crisis by eliminating the mathematically convenient but physically unrealistic separation between the adversary and cryptographic computations. *
*Specifically, *
*(1) We put forward a powerful, comprehensive, and precise model for delivering cryptographic security when an adversary has full (and indeed adaptive) access to any information leaked from the physical execution of cryptographic algorithms; *
*(2) We show that some of the basic theorems and intuitions of traditional cryptography no longer hold in a physically observable setting; and *
*(3) We construct schemes (such as pseudorandom generators and digital signatures) that are provably secure against ALL physical-observation attacks. *

*Date: FRIDAY, May 2, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-518, NOTE ROOM CHANGE *

*Title: Physically Observable Cryptography, Part One *

*Speaker: Silvio Micali, MIT *

*Abstract: Part I: The Model*

*Complexity-theoretic Cryptography has enjoyed tremendous success but has ``abstracted away'' computation: an adversary may attack a cryptographic algorithm essentially only by exchanging messages with it. Consequently, this theory fails to take into account the PHYSICAL nature of actual computation, and cannot protect against PHYSICAL attacks cleverly exploiting the information leakage inherent to the PHYSICAL execution of any cryptographic algorithm. Such ``physical observation attacks'' can successfully break mathematically impregnable systems.*

*We respond to the present crisis by eliminating the mathematically convenient but physically unrealistic separation between the adversary and cryptographic computations. *
*Specifically, *
*(1) We put forward a powerful, comprehensive, and precise model for delivering cryptographic security when an adversary has full (and indeed adaptive) access to any information leaked from the physical execution of cryptographic algorithms; *
*(2) We show that some of the basic theorems and intuitions of traditional cryptography no longer hold in a physically observable setting; and *
*(3) We construct schemes (such as pseudorandom generators and digital signatures) that are provably secure against ALL physical-observation attacks. *

*CANCELLED *

*Date: FRIDAY, April 25, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-518, 200 Tech Square *

*Title: "Computer Virus-Antivirus Co-evolution: A history of malicious code and detection technologies" *

*Speaker: Carey Nachenberg, Symantec Research Labs *

*Abstract: *

*Over the past twenty years, malicious computer code and anti-virus software have engaged in a co-evolution. This cat-and-mouse game, driven by virus authors' desire to avoid detection and an intensely competitive antivirus industry, has led to a number of fascinating innovations.*

*The latest threats are self-mutating, can have trillions of trillions of different possible appearances, and burrow themselves deep into their hosts to avoid detection. Just as amazing, modern antivirus software can scan computers for over 63 thousand of these threats, detecting any infection in a fraction of a second. This talk will chronicle this computer virus-antivirus co-evolution, exploring the most interesting innovations in both malcode and antivirus detection technologies over the past two decades. *
*Bio:*

*Carey Nachenberg, Chief Architect of Symantec Research Labs has been an innovator at Symantec Corporation for the past twelve years. Mr. Nachenberg designs and develops anti-virus, intrusion detection, firewall and vulnerability assessment technologies for Symantec's award-winning line of products.*

*Nachenberg has contributed to four books including Internet Security Professional Reference and Windows NT Server 4: Security, Troubleshooting and Optimization. Adept at conveying complex information in an accessible fashion, Nachenberg is a regular contributor to such computer security journals as Virus Bulletin, Secure Computing, and Communications of the ACM. He holds BS and MS degrees in Computer Science and Engineering from the University of California at Los Angeles. His Master's thesis covers the topic of polymorphic computer virus detection. *

*Date: FRIDAY, April 11, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-518, 200 Tech Square *

*Title: Fair Audience Inference or Proving that Business is Bad *

*Speaker: Jessica Staddon, PARC *

*Abstract:*

*Internet content distributors have an incentive to inflate their audience counts since by doing so they can command higher advertising rates. The problem of measuring audience size in a manner that resists such inflation attempts is well-studied. With the recent efforts of the RIAA to charge Internet microbroadcasters on a per-song basis, and the increased possibility of limited-distribution licenses and screen-scraping, there is now a strong incentive for content distributors to deflate their audience sizes; however, little work has been done in this area.*

*We give a brief overview of existing techniques for audience counting and present two new audience counting protocols that provide protection against deflation attempts. Our protocols work over an anonymous network and trade-off the amount of additional information the content distributors must provide to facilitate audience counting with the amount of infrastructure required.*

*This is joint work with Rob Johnson.*

*Time permitting, I'll wrap up with a slide or two on other research at PARC. We expect to be recruiting over the next year in many areas of computer science and I'd be happy to meet with any interested students after the talk. *

*Date: FRIDAY, March 21, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: "Signcryption: Whats, Whys and Hows" *

*Speaker: Yevgeniy Dodis, NYU *

*Abstract: I will survey several of my recent papers in the area of public-key authenticated encryption, also known as *signcryption*. Overall, these results put signcryption on the formal theoretical ground of provable security, and simultaneously provide the most general, flexible and efficient signcryption schemes to date.*

*In joint work with Jee Hea An and Tal Rabin (EC'02), we gave the first formal definitions for signcryption, and constructed several general and provably secure signcryption schemes. One such general method, called "commit-then-encrypt-and-sign" (CtE&S), puts forward the concept of **parallel signcryption**, where the encrypting/signing operations are performed in parallel.*

*In the upcoming joint work with Michael Freedman and Shabsi Walfish, we significantly improve the efficiency of CtE&S by introducing a new method for parallel signcryption called **Padding-based Parallel Signcryption** (PbPS). PbPS can be based on any family of trapdoor permutations (such as RSA), enjoys flexible key management, short keys, tight security reductions, associated data support, and is fully compatible with the PKCS#1 standard. In particular, it can be used with any of the classical padding schemes with message recovery, such as PSS-R, OAEP, OAEP+, and a new **Probabilistic Signature Encryption Padding** (PSEP) we introduce. PSEP generalizes PSS-R and OAEP, but allows one to achieve optimal message bandwidth for signcryption using PbPS.*

*Finally, in joint work with Jee Hea An (EC'03), we give a simple and general method to utilize a secure signcryption SC of short messages to efficiently signcrypt arbitrarily long messages. The method first splits m into a pair (b,h), where |b|<<|m|, and sets SC'(m) = (SC(b), h). The required properties on the pair (b,h) introduce an elegant new primitive we call **concealment**. We study theoretical properties of concealments, their applications, compare them with existing primitives such as commitment, and finally construct simple and optimal concealment schemes.*

*Relevant papers (in order of presentation): *
*http://eprint.iacr.org/2002/046/ *
*http://eprint.iacr.org/2003/043/ *
*http://eprint.iacr.org/2003/050/ *

*Date: FRIDAY, Feb 28, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: Cryptographic Tamper Evidence *

*Speaker: Gene Itkis, Boston University *

*Abstract: *

*We propose a new notion of cryptographic tamper evidence. A tamper-evident signature scheme provides an additional procedure Div which detects tampering: given two signatures, Div can determine whether one of them was generated by the forger. Surprisingly, this is possible even after the adversary has inconspicuously learned some --- or even all --- the secrets in the system. In this case, it might be impossible to tell which signature is generated by the legitimate signer and which by the forger. But at least the fact of the tampering will be made evident.*

*We define several variants of tamper-evidence, differing in their power to detect tampering. In all of these, we assume an equally powerful adversary: she adaptively controls all the inputs to the legitimate signer (i.e., all messages to be signed and their timing), and observes all his outputs; she can also adaptively expose all the secrets at arbitrary times. We provide tamper-evident schemes for all the variants and prove their optimality.*

*We stress that our mechanisms are purely cryptographic: the tamper-detection algorithm Div is stateless and takes no inputs except the two signatures (in particular, it keeps no logs), we use no infrastructure (or other ways to conceal additional secrets), and we use no hardware properties (except those implied by the standard cryptographic assumptions, such as random number generators).*

*Our constructions are based on arbitrary ordinary signature schemes and do not require random oracles. *

*Date: FRIDAY, Feb 21, 2003 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: Efficient Universal Padding Techniques for Multiplicative Trapdoor One-way Permutation *

*Speaker: Kazuo Ohta, U. of Electro-Communications, Tokyo *

*Abstract:*

*Coron et al. proposed the ES-based scheme PSS-ES which realizes an encryption scheme and a signature scheme with a unique padding technique and key pair. The security of PSS-ES as an encryption scheme is based on the {\em partial-domain one-wayness} of the encryption permutation.*

*In this paper, we propose new ES schemes OAEP-ES, OAEP++-ES, and REACT-ES, and prove their security under the assumption of only the {\em one-wayness} of encryption permutation. OAEP-ES, OAEP++-ES, and REACT-ES suit practical implementation because they use the same padding technique for encryption and for signature, and their security proof guarantees that we can prepare one key pair to realize encryption and signature in the same way as PSS-ES. Since {\em one-wayness} is a weaker assumption than {\em partial-domain one-wayness}, the proposed schemes offer tighter security than PSS-ES. Hence, we conclude that OAEP-ES, OAEP++-ES, and REACT-ES are more effective than PSS-ES. REACT-ES is the most practical approach in terms of the tightness of security and communication efficiency. *
*This is joint work with Yuichi Komano. *

*Date: FRIDAY, Dec 13, 2002 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: Round Efficiency in Multi-Party Computations with a Faulty Majority *

*Speaker: Adam Smith, CIS, TOC, MIT *

*Abstract: *

*Multi-party computing allows a group of n players in a network, each with a private input x_i, to compute the value of some function f(x_1,...,x_n) without leaking any (other) information about the inputs. Examples of things that fit into this broad framework include voting, e-commerce, threshold signatures, auctions, and games.*

*In this talk, I'll explain what is known about the case when there is a majority of dishonest participants. I'll recap earlier work, and show how some of it can be combined with recent insights to get new results: we obtain an O(log n)-round protocol with a black-box simulation, and a constant-round protocol using a non-black-box coin-flipping protocol due to Barak.*

*Joint work with Jon Katz and Rafi Ostrovsky. *

*Date: FRIDAY, Dec 6, 2002 *

*NOTE: No CIS Talk This Week *

*Date: FRIDAY, Nov 22, 2002 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: Cryptographic Constructions Using the r'th Power Residue Symbol *

*Speaker: Dan Boneh, Stanford University *

*Abstract:*

*We describe three cryptographic applications of the r'th power residue symbol. First, we describe an additively homomorphic public-key system where decryption is done using any r'th root of unity in Z_n. This lets us give partial decryption keys that enable the recipient to decrypt only a fragment of a given ciphertext, and it also yields a new mechanism for signcryption. As a second application, we show how the symbol can be used to speed up the factorization of integers of the form N = pq^r for small r. As a third application, we show how to use the r'th symbol directly for encryption.*

*The talk will be self contained and no previous knowledge of the r'th power symbol will be assumed. This is joint work with Jeremy Horwitz. *
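*To make "additively homomorphic" concrete: in such a system, multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The Python toy below uses the Paillier cryptosystem as a stand-in --- it is not the r'th-power-residue scheme from the talk, and the tiny primes are wildly insecure --- purely to demonstrate the property E(a) * E(b) mod n^2 = E(a + b). *

```python
import math
import random

p, q = 17, 19          # toy primes; a real deployment needs large random primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)   # valid decryption constant because we take g = n + 1

def encrypt(m: int) -> int:
    """Paillier encryption: c = (n+1)^m * r^n mod n^2, r random coprime to n."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    """Paillier decryption: L(c^lam mod n^2) * mu mod n, where L(u) = (u-1)/n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

a, b = 20, 35
c_sum = encrypt(a) * encrypt(b) % n2   # multiply ciphertexts...
assert decrypt(c_sum) == a + b         # ...to add the underlying plaintexts
```

*A "partial decryption key" in the talk's scheme goes further, revealing only a fragment of the plaintext; this toy shows only the homomorphism itself. *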

*Date: FRIDAY, Nov 15, 2002 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: Concurrent Zero-Knowledge with Logarithmic Round Complexity *

*Speaker: Alon Rosen, The Weizmann Institute *

*Abstract:*

*We show that every language in NP has a (black-box) concurrent zero-knowledge proof system using ~O(log n) rounds of interaction. The number of rounds in our protocol is optimal, in the sense that any language outside BPP requires at least ~Omega(log n) rounds of interaction in order to be proved in black-box concurrent zero-knowledge. The zero-knowledge property of our main protocol is proved under the assumption that there exists a collection of claw-free functions. Assuming only the existence of one-way functions, we show the existence of ~O(log n)-round concurrent zero-knowledge arguments for all languages in NP.*

*Joint work with Manoj Prabhakaran and Amit Sahai. *

*Date: FRIDAY, Nov 8, 2002 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NOTE: NE43-518, 200 Tech Square *

*Title: ``Generalized compact knapsacks, cyclic lattices, and efficient one-way functions from worst-case complexity assumptions'' *

*Speaker: Daniele Micciancio, UC San Diego *

*Abstract: *

*We study a generalization of the compact knapsack problem for arbitrary rings: given m = O(log n) ring elements a_1,...,a_m in R and a target value b in R, find coefficients x_1,...,x_m in X (where X is a subset of R of size 2^n) such that sum(a_i x_i) = b. The computational complexity of this problem depends on the choice of the ring R and set of coefficients X. This problem is known to be solvable in quasi-polynomial time when R is the ring of integers and X is the set of small integers {0,...,2^n - 1}. We show that if R is an appropriately chosen ring of modular polynomials and X is the subset of polynomials with small coefficients, then the compact knapsack problem is as hard to solve on average as the worst-case instance of approximating the covering radius (or the length of the shortest vector, or various other well-known lattice problems) of any cyclic lattice within a polynomial factor. Our proof adapts, to the cyclic lattice setting, techniques initially developed by Ajtai for the case of general lattices. *
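*For readers unfamiliar with the ring setting, the following toy sketch (not the paper's construction; the parameters p and n are hypothetical and far too small for security) illustrates how the knapsack function sum(a_i x_i) is evaluated in a ring of modular polynomials Z_p[x]/(x^n - 1), where multiplication is cyclic convolution:*

```python
# Toy sketch of evaluating the generalized compact knapsack function
# f(x_1, ..., x_m) = sum(a_i * x_i) over the ring R = Z_p[x]/(x^n - 1).
# Parameters p and n are hypothetical and far too small for security.

p, n = 257, 8

def ring_mul(a, b):
    """Multiply two ring elements (length-n coefficient lists):
    polynomial product reduced mod x^n - 1 (cyclic convolution), mod p."""
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[(i + j) % n] = (out[(i + j) % n] + ai * bj) % p
    return out

def ring_add(a, b):
    return [(x + y) % p for x, y in zip(a, b)]

def knapsack(a_list, x_list):
    """Evaluate sum(a_i * x_i) in R."""
    acc = [0] * n
    for a, x in zip(a_list, x_list):
        acc = ring_add(acc, ring_mul(a, x))
    return acc
```

*In the actual result the x_i are restricted to polynomials with small coefficients; the sketch only illustrates the ring arithmetic.*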

*Date: FRIDAY, Nov 1, 2002 *

*Time: There Will Be No Meeting This Week *

*Date: FRIDAY, Oct 25, 2002 Open to the Public *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: "Computational Soundness of Formal Adversaries" *

*Speaker: Jonathan Herzog, MIT *

*Abstract:*

*The Dolev-Yao model is a high-level framework with which to analyze cryptographic protocols. Like the more common computational model, the Dolev-Yao model can be used to determine if a protocol meets secrecy or authentication goals even when used in the presence of a powerful adversary. However, -unlike- the computational model, the Dolev-Yao model makes two strong assumptions: *
*1) Instead of specific cryptographic algorithms, protocols are assumed to be using abstracted operations, and *
*2) Both the honest participants and the adversary are assumed to be state machines.*

*This approach has significant advantages. Most importantly, a protocol expressed in this model can often be analyzed fully automatically, using the tools and techniques from formal methods. Furthermore, any flaw found in the protocol using this method will exist in any model of the protocol-- including the computational model. *
*However, things are less clear when no flaw is found. If a protocol is "proven" to be correct in this model, will it still be secure in the computational one?*

*In this talk, we will discuss a partial answer to this question. In particular, we focus on the adversary: a formal adversary is modeled as a state machine while a computational adversary is modeled as a Turing machine. At first glance, it would seem that the computational adversary is far more powerful than the formal one. However, this need not be the case. Our result is two-fold: *
*-- First, we define a security condition for a computational encryption algorithm that will limit the computational adversary to the operations available to the formal adversary. *
*-- Second, we present an encryption algorithm that actually meets the above definition. *
*No previous knowledge of the Dolev-Yao model or formal methods in general will be assumed. *

*Date: FRIDAY, Oct 18, 2002 Open to the Public *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NOTE: NE43-518, 200 Tech Square *

*Title: Palladium *

*Speaker: Brian LaMacchia, Microsoft Corp. *

*Abstract:*

*This talk will present a technical overview of the Microsoft "Palladium" Initiative. The "Palladium" code name refers to a set of hardware and software security features currently under development for a future version of the Windows operating system. "Palladium" adds four categories of security services to today's PCs:*

*a.. Curtained memory. The ability to wall off and hide pages of main memory so that each "Palladium" application can be assured that it is not modified or observed by any other application or even the operating system.*

*b.. Attestation. The ability for a piece of code to digitally sign or otherwise attest to a piece of data and further assure the signature recipient that the data was constructed by an unforgeable, cryptographically identified software stack.*

*c.. Sealed storage. The ability to securely store information so that a "Palladium" application or module can mandate that the information be accessible only to itself or to a set of other trusted components that can be identified in a cryptographically secure manner.*

*d.. Secure input and output. A secure path from the keyboard and mouse to "Palladium" applications, and a secure path from "Palladium" applications to an identifiable region of the screen.*

*Together, these features provide a parallel execution environment to the "traditional" kernel- and user-mode stacks. The goal of "Palladium" is to help protect software from software; that is, to provide a set of features and services that a software application can use to defend against malicious software also running on the machine (viruses running in the main operating system, keyboard sniffers, frame grabbers, etc). "Palladium" is not designed to provide defenses against hardware-based attacks that originate from someone in control of the local machine. *
*Hosted by: Ron Rivest and Hal Abelson *

*Date: FRIDAY, Oct 11, 2002 Open to the Public *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: An Improved Pseudorandom Generator Based on Hardness of Factoring *

*Speaker: Nenad Dedic, Boston University *

*Abstract:*

*We present a simple to implement and efficient pseudorandom generator based on the factoring assumption. It outputs more than pn/2 pseudorandom bits per p exponentiations, each with the same base and an exponent shorter than n/2 bits. Our generator is based on results by Hastad, Schrift and Shamir [HSS93], but unlike their generator and its improvement by Goldreich and Rosen [GR00], it does not use hashing or extractors, and is thus simpler and somewhat more efficient.*

*In addition, we present a general technique that can be used to speed up pseudorandom generators based on iterating one-way permutations. We construct our generator by applying this technique to results of [HSS93]. We also show how the generator given by Gennaro [Gen00] can be simply derived from results of Patel and Sundaram [PS98] using our technique.*

*Joint work with Leonid Reyzin and Salil Vadhan. *

*Date: FRIDAY, Oct 4, 2002 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*Title: Unique Signatures and Verifiable Random Functions from the DH-DDH Separation *

*By: Anna Lysyanskaya, Brown University *

*Abstract:*

*A unique signature scheme has the property that a signature $\sigma_{\pk}(m)$ is a (hard-to-compute) function of the public key $\pk$ and message $m$, for all, even adversarially chosen, $\pk$. Unique signatures, first considered by Goldwasser and Ostrovsky, have been shown to be a building block for constructing verifiable random functions. Another useful property of unique signatures is that they are stateless: the signer does not need to update his secret key after an invocation.*

*The only previously known construction of a unique signature in the plain model, due to Micali, Rabin, and Vadhan, was based on the RSA assumption. The only other previously known provably secure constructions of stateless signatures were based on the Strong RSA assumption. Here, we give a construction of a unique signature scheme based on a generalization of the Diffie-Hellman assumption in groups where decisional Diffie-Hellman is easy. Several recent results suggest plausibility of such groups.*

*We also give a few related constructions of verifiable random functions (VRFs). VRFs, introduced by Micali, Rabin, and Vadhan, are objects that combine the properties of pseudorandom functions (i.e. indistinguishability from random even after querying) with the verifiability property. Prior to our work, VRFs were only known to exist under the RSA assumption.*

*(This work appeared in Crypto 2002.) *

*Date: FRIDAY, Sept 27, 2002 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*TITLE: ``Fuzzy Commitments'' *

*By: Ari Juels, RSA *

*Abstract:*

*Cryptographic protocols are conventionally predicated on exact knowledge. The RSA cryptosystem, for example, derives its security essentially from the presumption that a legitimate user with public key (N,e) possesses a corresponding, uniquely specifiable private key (N,d).*

*There are situations, however, in which human factors and other sources of error undermine the possibility of exactness in a security system. In biometric systems where users identify themselves by means of fingerprints, for example, variability in user interaction is such that fingerprint features are rarely read exactly the same way twice.*

*This talk will focus on what might be called _fuzzy cryptography_, namely the combination of error correcting codes and cryptography to tolerate the use of inexact keys. Among its other uses, fuzzy cryptography is applicable to biometric authentication and to password recovery based on users providing error-prone answers to personal questions. (E.g., "What was the name of your first pet?", "What was the color of your first car?") The talk draws on work from three papers, published in ACM CCS '99, ACM CCS '01, and ISIT '02, and joint work with Martin Wattenberg (IBM), Niklas Frykholm (now a novelist, formerly of RSA), and Madhu Sudan (MIT) respectively. *

*Date: FRIDAY, Sept 20, 2002 *

*Time: 10:30 a.m.- 12:00 noon *

*Place: NE43-308, 200 Tech Square *

*TITLE: A presentation on primality testing in P *

*By: Adi Akavia (Weizmann Institute of Science, Israel) *

*Date: FRIDAY, May 17, 2002 *

*Time: 10:30 a.m. *

*Place: NE43-308, 200 Tech Square *

*Title: Signature Schemes and Applications to Cryptographic Protocol Design *

*By: Anna Lysyanskaya, CIS, LCS, MIT *

*(Thesis Defense Open to the public) *

*Abstract:*

*A signature scheme is a cryptographic primitive of central importance, both as a standalone application, and integrated into the design of other cryptographic systems. From the most classical applications, such as Byzantine agreement, to the very recent ones, such as anonymous credential systems, signature schemes are a central building block in the design of a protocol. They also play a role in constructing other important primitives, such as verifiable random functions. In my thesis I study signature schemes that are applicable in all the contexts mentioned. For this defense, I will focus on anonymous credential systems.*

*In order to obtain access to a resource, a credential is usually required. For example, one needs a driver's license to rent a car or a library card to borrow a book. As paper-based transactions are being replaced by electronic ones, credentials are also taking electronic form.*

*Hand-in-hand with the convenience of electronic credentials, comes the danger that all transactions can be easily recorded and analyzed. Such recording happens in the paper-based world, as well; however, electronic records make searching and aggregating information practical on a large scale. Thus, electronic credentials make it ever so easy to collect too much personal information. For example, by observing a person's use of a driver's license, one can trace this person's itinerary.*

*I will present an anonymous credential system designed to solve this problem. In this system, a user (credential owner) can prove possession of a credential without revealing more than this single bit of information, and can obtain a credential without revealing more than the information that is required by the issuing authority. As a result, the user's identity remains hidden, and, moreover, transactions carried out by the same user cannot be linked. Our system thus guarantees privacy of users.*

*Ours is the first anonymous credential system suitable for practical use: it requires no involvement of trusted third parties, and all the protocols are efficient. In addition to ensuring privacy of users, our system enables credentials that have expiration dates and other attributes and ensures that they are non-transferable and revocable. Moreover, a user's anonymity can be revoked in case of emergency.*

*The crucial building block for constructing our system is a signature scheme with an efficient protocol for proving knowledge of a signature. *

*Date: MONDAY, May 13, 2002 *

*Time: 2:30 p.m. *

*Place: NE43-308, 200 Tech Square *

*Title: Secure Intrusion-tolerant Replication on the Internet *

*By: Christian Cachin *

*IBM Zurich Research Laboratory*

*(Open to the public) *

*Abstract:*

*This talk describes a secure intrusion-tolerant replication architecture (SINTRA) for coordination in asynchronous networks subject to Byzantine faults. SINTRA supplies a number of group communication primitives, such as binary and multi-valued Byzantine agreement, reliable and consistent broadcast, and an atomic broadcast channel. Atomic broadcast immediately provides secure state-machine replication.*

*The protocols are designed for an asynchronous wide-area network, such as the Internet, where messages may be delayed indefinitely, the servers do not have access to a common clock, and up to one third of the servers may fail in potentially malicious ways. Security is achieved through the use of threshold public-key cryptography, in particular through a cryptographic common coin based on the Diffie-Hellman problem that underlies the randomized protocols in SINTRA.*

*A prototype of SINTRA has been realized in Java, using standard TCP/IP protocols for point-to-point communication. We have measured its performance on a test-bed of servers distributed over three continents. The results show that extensive use of public-key cryptography does not impose a large overhead for secure coordination in wide-area networks. *

*Date: FRIDAY, May 10, 2002 *

*Time: 10:30 a.m. *

*Place: NE43-308, 200 Tech Square *

*Title: Intrusion-Resilient Signatures, or Towards Obsoletion of Certificate Revocation *

*By: Leo Reyzin and Gene Itkis, BU *

*(Open to the public) *

*Abstract: to follow *

*Date: FRIDAY, May 3, 2002 *

*Time: 10:30 a.m. *

*Place: NE43-308, 200 Tech Square *

*Title: Tweakable Block Ciphers *

*By: Moses Liskov, CIS, LCS, MIT *

*(Open to the public) *

*Authors: Moses Liskov and Ronald Rivest *

*Abstract: *

*We propose a new cryptographic primitive, the ``tweakable block cipher.'' Such a cipher has not only the usual inputs---message and cryptographic key---but also a third input, the ``tweak.'' The tweak serves much the same purpose that an initialization vector does for CBC mode or that a nonce does for OCB mode. Our proposal thus brings this feature down to the primitive block-cipher level, instead of incorporating it only at the higher modes-of-operation levels. We suggest that: *

*(1) tweakable block ciphers are easy to design, *

*(2) the extra cost of making a block cipher ``tweakable'' is small, and *

*(3) it is easier to design and prove modes of operation based on tweakable block ciphers. *

*Moreover, we can prove tighter security bounds for certain modes of operation based on tweakable block ciphers than are known for the corresponding modes based on standard block ciphers (e.g. for OCB mode). *
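*As a toy illustration of the generic idea (not a secure implementation), the sketch below builds a tweakable cipher of the form E~_K(T, M) = E_K(T XOR E_K(M)), one of the simple constructions of the kind considered in the paper, on top of a stand-in four-round Feistel "block cipher":*

```python
# Toy sketch of a tweakable block cipher built generically from an
# ordinary block cipher, in the spirit of the construction
# E~_K(T, M) = E_K(T XOR E_K(M)) considered in the paper.
# The underlying "block cipher" is a 4-round Feistel toy (NOT secure).
import hashlib

def _round(key, half, r):
    # Toy round function: derive 32 bits from key, half-block, round index.
    h = hashlib.sha256(key + half.to_bytes(4, "big") + bytes([r])).digest()
    return int.from_bytes(h[:4], "big")

def E(key, m):
    """Toy 64-bit block-cipher permutation (4-round Feistel)."""
    L, R = m >> 32, m & 0xFFFFFFFF
    for r in range(4):
        L, R = R, L ^ _round(key, R, r)
    return (L << 32) | R

def D(key, c):
    """Inverse permutation of E."""
    L, R = c >> 32, c & 0xFFFFFFFF
    for r in reversed(range(4)):
        L, R = R ^ _round(key, L, r), L
    return (L << 32) | R

def tweak_encrypt(key, tweak, m):
    # E~_K(T, M) = E_K(T XOR E_K(M))
    return E(key, tweak ^ E(key, m))

def tweak_decrypt(key, tweak, c):
    # Inverse: M = D_K(T XOR D_K(C))
    return D(key, tweak ^ D(key, c))
```

*Changing the tweak re-keys the permutation cheaply, without rerunning any key schedule; that is the efficiency point (2) in the list above.*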

*Date: FRIDAY, April 26, 2002 *

*Time: 10:30 a.m. *

*Place: NE43-308, 200 Tech Square *

*Title: Copyright, Return to First Principles in Policy *

*By: Jean Camp *

*Kennedy School of Government, Harvard U.*

*(Open to the public) *

*Abstract:*

*Digital Rights Management standards are being developed for a global ubiquitous market for digital content. Yet the question remains, what problems are the systems trying to solve? Most systems are explicitly modelled on copyright and use the metaphors of piracy and author's moral rights to define the design goals. In this work I begin with a return to copyright and consider the functions fulfilled by copyright. I argue that the brilliance of copyright is that it provides attribution, access, and a definition of a fungible right and thereby enables epistemological surety, literacy, and a functioning information market.*

*Can the fundamental design factors of copyright offer guidance to design of a system for DRM? I offer a sketch of a system designed on the principles of copyright.*

*Professor L. Jean Camp's research focuses on human values and technical design. It was this interest that led Prof. Camp from graduate electrical engineering research in North Carolina to the Department of Engineering and Public Policy at Carnegie Mellon, and it remains her core research interest at Harvard's Kennedy School. *

*Date: FRIDAY, April 19, 2002 *

*Time: 10:30 a.m. *

*Place: NE43-308, 200 Tech Square *

*Title: Informal brainstorming meeting *

*YES, it's OPEN to the Public *

*Abstract:*

*We *will* meet tomorrow, from 10:30 to noon, in room 308 of 200 Technology Square (usual time and place). *

*We will not have a speaker, but will have a brainstorming meeting. The nominal topic will be crypto protocols for RFID tags, extending the discussion following the recent CIS seminar by Sanjay Sarma. Sanjay Sarma will be there. *

*(If there is time available, we may talk about other topics as well; bring along one!)*

*Hosted by: Ron Rivest *

*Date: FRIDAY, April 5, 2002 *

*Time: 10:30 a.m. *

*Place: NE43-308, 200 Tech Square *

*Title: ``Radio Frequency Identification Systems'' *

*By: Professor Sanjay Sarma *

*MIT Mech. Eng. Department*

*(Open to the public) *

*Abstract:*

*At the Auto-ID Center, we are developing low-cost radio-frequency identification (RFID) systems for large scale deployments as next generation bar-codes. I will describe RFID technology in my talk, summarize our approach and our research, and most importantly, describe the research opportunities in RFID for experts in cryptography and information security.*

*The RFID tags we are developing are very simple, and contain only a unique, read-only serial number. Furthermore, they have the capability of being permanently deactivated by the user. We therefore provide the user with the facility of opting out of any privacy risks.*

*As technology improves in coming years, it may be possible to place more computational power and memory on the tag. Users may want to keep tags active, write information onto the tags, personalize them, and perhaps use them at home. What are the risks involved? How can they be mitigated? The challenges then will revolve around how to preserve privacy and security for the user without adding too much complexity to the tag.*

*I will discuss the issues we are aware of and, hopefully, initiate a discussion on future developments in security and privacy in RFID tags. *

*Date: TUESDAY, April 2, 2002 *

*Time: 1:00 p.m. *

*Place: NE43-308, 200 Tech Square *

*Title: ``From Minicrypt to Cryptomania: Relationships Among the Fundamental Cryptographic Primitives'' *

*By: Tal Malkin, AT&T *

*(Open to the public) *

*Abstract:*

*The cryptographic research of the last few decades has proposed solutions to numerous cryptographic tasks, proving the correctness and security of these solutions under a growing number of unproven computational assumptions. Indeed, since the security of most cryptographic tasks implies that P is not equal to NP, unconditional proofs of security seem well beyond the reach of complexity theory today. A major goal of cryptography is thus studying the minimal assumptions necessary for securely implementing cryptographic protocols, and establishing the relationships between different primitives.*

*We study the relationships among some of the most fundamental primitives and protocols in cryptography: public-key encryption, oblivious transfer (which is equivalent to general secure multi-party computation), key agreement, trapdoor functions, and trapdoor permutations. Despite the importance of these primitives, little was known about the relationships among them. We resolve all the missing relationships with respect to black-box reductions. For example, we show that public-key encryption and oblivious transfer are incomparable under black-box reductions. Furthermore, relative to black-box reductions, neither oblivious transfer nor 1:1 trapdoor functions imply trapdoor permutations. Finally, we show that there is no black-box reduction from trapdoor functions to public-key encryption (two primitives whose relationship is of particular interest). Our separation results employ the oracle separation paradigm introduced by Impagliazzo and Rudich, as well as a new model of separation.*

*Our results demonstrate that, borrowing from the terminology of Impagliazzo, there are many possible worlds that lie between Minicrypt (the world that contains only one-way functions and primitives implied by them) and Cryptomania (the world that contains trapdoor permutations and the rich cryptographic possibilities that they offer).*

*The talk is based on joint works with Y. Gertner, S. Kannan, O. Reingold, and M. Viswanathan (FOCS 00), and with Y. Gertner and O. Reingold (FOCS 01). *

*Date: Friday, March 22, 2002 *

*Time: 10:30 a.m. *

*Place: NE43-308, 200 Tech Square *

*Title: ``Key-Insulated Public-Key Cryptosystems'' *

*By: Yevgeniy Dodis, Courant Inst., NYU *

*(Open to the public) *

*Abstract:*

*Protecting private keys is perhaps the most basic requirement of any cryptographic scheme. On the other hand, cryptographic computations (decryption, signature generation, etc.) are often performed on a relatively insecure device (e.g., a mobile device or an Internet-connected host) which cannot be trusted to maintain secrecy of the private key. We propose and investigate the notion of \emph{key-insulated security} whose goal is to minimize the damage caused by secret-key exposure. In our model, the secret key(s) stored on the insecure device are refreshed at discrete time periods via interaction with a physically-secure but computationally-limited device which stores a ``master key''.*

*All cryptographic computations are still done on the insecure device, and the public key remains unchanged. In a $(t, N)$-key-insulated scheme, an adversary who compromises the insecure device and obtains secret keys for up to $t$ periods of his choice is unable to violate the security of the cryptosystem for \emph{any} of the remaining $N-t$ periods. Furthermore, the scheme remains secure (for \emph{all} time periods) against an adversary who compromises \emph{only} the physically-secure device.*

*Key-insulated schemes significantly improve the security guarantee of forward-secure schemes~\cite{A97, BM99}, in which exposure of the secret key at even a single time period (necessarily) compromises the security of the system for all future time periods. This improvement is achieved at minimal cost: infrequent key updates with a (possibly untrusted) secure device.*

*We focus primarily on key-insulated public-key encryption. We construct a $(t, N)$-key-insulated encryption scheme based on any (standard) public-key encryption scheme, and also give a more efficient construction based on the DDH assumption. Our second construction can be improved to achieve chosen-ciphertext security as well. We also briefly consider our new work on key-insulated signature and identification schemes.*

*Joint work with J. Katz, S. Xu, M. Yung. *

*Date: Friday, March 15, 2002 *

*Time: 10:30 a.m. *

*Place: NE43-308, 200 Tech Square *

*Title: TBA *

*By: Ross Anderson, Cambridge University *

*(Open to the public) *

*Abstract:*

*In this talk, I will summarise work done over the last eighteen months at Cambridge on the security of cryptoprocessors. We have developed two new classes of attacks. The first exploits hitherto unknown vulnerabilities in the APIs with which these devices communicate with the outside world, and led to the recent, highly publicised attack on the IBM 4758. The second involves a novel optical probing technique, which enables even a low budget attacker to induce faults selectively in the operation of processors and memory.*

*We have also been working on a new defensive technology. We have built a prototype CPU, plus the accessories needed for smartcard operation such as a Montgomery multiplier and a UART, using self-timed dual-rail logic. This design technique has been extended in various ways to make it very much harder to mount attacks using power analysis and selective fault induction. *

*Date: Friday, March 8, 2002 *

*Time: 10:30 a.m. *

*Place: NE43-308, 200 Tech Square *

*Title: NOVOMODO PKI: Digital Certs Made Easy (and More Useful) *

*By: Silvio Micali, LCS, MIT *

*(Open to the public) *

*Abstract:*

*We present a NEW WAY to realize Public Key Infrastructures that makes them eminently scalable, cost effective, efficient, very easy to manage, and applicable to totally new settings. *

*Date: Friday, March 1, 2002 *

*Time: 10:30 a.m. *

*Place: NE43-308, 200 Tech Square *

*Title: Physical Security Protocols for Ubicomp Systems *

*By: Tim Kindberg, Hewlett-Packard Labs, Palo Alto *

*(Open to the public) *

*Abstract:*

*Ubiquitous systems tend to have two predominant characteristics: first, the integration of computing components and the physical world; second, spontaneous (ad hoc) interoperation between components. Each of these characteristics has ramifications for security. My talk will cover some of the implications of physical integration by considering notions of "constrained channels" and "context authentication"; it will cover protocols for authenticating a principal's location and consider the issues raised by other contextual parameters. If there is time, the talk will go on to an overview of how physical relationships can be used for validating spontaneous associations between devices.*

*Bio: *
*Tim Kindberg (http://champignon.net/TimKindberg) is a senior researcher at HP Labs, Palo Alto, where he works on nomadic computing systems as part of the Cooltown program. He was formerly a senior lecturer in Computer Science at the University of London. He holds a PhD in Computer Science from the University of Westminster and a BA in Mathematics from the University of Cambridge. His research interests include ubiquitous computing systems, distributed systems and human factors. He is co-author of the textbook 'Distributed Systems -- Concepts & Design'. *
*NOTE: No Seminar Fri, Feb 22, 2002 *

*Date: Thursday, February 14, 2002 *

*Time: 1:00 p.m. *

*Place: NE43-628, 200 Tech Square *

*Title: On the Composition of Authenticated Byzantine Agreement *

*By: Yehuda Lindell, Weizmann Institute *

*(Open to the public) *

*Abstract:*

*A fundamental problem of distributed computing is that of simulating a (secure) broadcast channel within the setting of a point-to-point network. Beyond being an important task within itself, secure broadcast is used at the core of many secure multiparty protocols. This problem is known as "Byzantine Agreement" and has been the focus of much research.*

*Lamport et al. showed that in order to obtain secure broadcast, more than 2/3 of the participating parties must be honest. They further showed that if one augments the network with a public-key infrastructure (specifically, of digital signature keys), then it is possible to achieve secure broadcast when any number of the parties are dishonest. This augmented problem is called ``authenticated Byzantine Agreement.''*

*Previously, the security of authenticated Byzantine Agreement protocols was considered when the protocol was run once and in isolation. However, clearly it should be proven for multiple invocations, in order for it to be of real use.*

*We present surprising impossibility results showing that: *
*1. Authenticated Byzantine Agreement cannot be composed in parallel (and thus concurrently) if less than 2/3 of the parties are honest. *
*2. Deterministic authenticated Byzantine Agreement protocols with r rounds can be composed sequentially at most 2r times. *
*In contrast, we present some positive results by giving randomized protocols that do compose sequentially. *
*This is joint work with Anna Lysyanskaya and Tal Rabin. *

*Date: Friday, February 8, 2002 *

*Time: 10:30 a.m.- probably *

*Place: NE43-308, 200 Tech Square *

*Title: Universally Composable Security: A New Paradigm for Cryptographic Protocols *

*By: Dr. Ran Canetti, IBM, TJ Watson *

*(Open to the public) *

*Abstract:*

*We propose a new paradigm for defining security of cryptographic protocols, called Universally Composable Security. The salient property of universally composable definitions of security is that they guarantee security even when a secure protocol is composed with an arbitrary set of protocols, or more generally when the protocol is used as a component of an arbitrary system.*

*This is an essential property for maintaining security of cryptographic protocols in complex and unpredictable environments such as the Internet. In particular, universally composable definitions guarantee security even when an unbounded number of protocol instances are executed concurrently in an adversarially controlled manner, they guarantee non-malleability with respect to arbitrary protocols, and more.*

*We show how to formulate universally composable definitions of security for practically any cryptographic task. Furthermore, we demonstrate that practically any such definition can be realized using known general techniques, as long as only a minority of the participants are corrupted. We then proceed to formulate universally composable definitions of a wide array of cryptographic tasks, including authenticated and secure communication, key-exchange, public-key encryption, signature, commitment, oblivious transfer, zero-knowledge, and more. We will also survey on-going work on the realizability of the proposed definitions. *

*Date: Wednesday, December 19, 2001 *

*Time: 2:00 p.m. *

*Place: NE43-308 200 Technology Square (formerly 545 Technology Sq.) *

*Title: Secure Execution via Program Shepherding *

*By: Professor Saman Amarasinghe, MIT LCS *

*(Open to the public) *

*Abstract:*

*In this talk I will describe a system that executes arbitrary program binaries in a secure manner. The system guarantees that only code that was present in the application and library images on disk will ever be executed. By preventing dynamically generated or modified code from being executed, frequent security problems such as buffer overflow attacks can be thwarted. The system is able to provide this guarantee with minimal or no performance penalties. Furthermore, sandboxing checks can be added to any type of program operation with a guarantee that the checks will never be bypassed. This system requires no hardware or operating system support and provides protection to existing applications on IA-32 machines (both Windows NT and Linux).*

*This is joint work with Vladimir Kiriansky and Derek Bruening. *

*Date: Friday, Dec 14, 2001 *

*Time: 9:00 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*NOTE: Open to CIS students and faculty, only*

*Title: Student presentations *

*By: (Not open to public) *

*Date: Friday, Dec 10, 2001 *

*Time: DIFFERENT TIME: 6:00 p.m. - 7:00 p.m. *

*Place: NE43-518 *

*Note: SPECIAL TALK * Open to public *

*Title: ``Cyberspace Security: The Threat and Response'' *

*By: Richard Clarke, Special Advisor to the President for Cyberspace Security and chair of the President's Critical Infrastructure Protection Board *

*Date: LAST OPEN Friday CIS meeting of 2001, FRIDAY, DEC 7 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*Title: Fractal Hash Sequence Representation and Traversal *

*By: Markus Jakobsson, RSA Laboratories *

*Abstract:*

*We introduce a novel amortization technique for computation of consecutive preimages of hash chains from a common seed. While all previously known techniques have a memory-times-computational complexity of O(n) per chain element, the complexity of our technique can be upper bounded at O(log^2 n), making it a useful primitive for low-cost applications such as authentication, signatures and micro-payments.*

*Our technique uses a logarithmic number of ''helper values'' associated with points on the hash chain. The locations of these helper values are modified over time. Like fractals, where images can be found within images, the helper values move within intervals and sub-intervals according to a highly symmetric pattern. *
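As a concrete point of reference, the following minimal Python sketch shows the baseline the abstract improves on: a hash chain derived from a seed, traversed naively at O(n) hash evaluations per released element. SHA-256 and all names here are illustrative choices, not from the paper, and Jakobsson's fractal placement of the helper values is not reproduced.

```python
import hashlib

def h(x: bytes) -> bytes:
    """One hash-chain step (SHA-256 here; the scheme is hash-agnostic)."""
    return hashlib.sha256(x).digest()

def build_chain(seed: bytes, n: int) -> list[bytes]:
    """Chain v_0 = seed, v_{i+1} = h(v_i).  Elements are *released* in
    reverse order (v_n first), so each release is a preimage of the
    previously released value."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

def naive_next(seed: bytes, n: int, i: int) -> bytes:
    """Naive traversal: recompute v_{n-i} from the seed on each release.
    This costs O(n) hash evaluations per element -- the cost that the
    fractal helper-value technique amortizes down to O(log^2 n)."""
    v = seed
    for _ in range(n - i):
        v = h(v)
    return v

chain = build_chain(b"seed", 16)
assert naive_next(b"seed", 16, 3) == chain[13]   # i-th release is v_{n-i}
assert h(chain[12]) == chain[13]                 # verifies its predecessor
```

The verification step at the end is what makes hash chains useful for authentication and micro-payments: each newly released value is checked by hashing it once and comparing with the previous release.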

*Date: Friday, Nov 30, 2001 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*Title: ``Analysis of IKE: The key exchange protocol for IPsec '' *

*By: Radia Perlman, Boston Center for Networking *

*Abstract:*

*Much has been written about the complexity of IPsec, but mostly concentrating on the packet formats, e.g., whether it is necessary to have AH. IKE has had less scrutiny because of its complexity, not only of the protocol itself, but of the way it was documented in three documents (ISAKMP, IKE, and DOI) that mostly referenced each other, contradicted each other in places, used different terminology, and left holes. This talk describes the properties of the protocol as specified, and explains ways of simplifying and improving it. *

*Date: Friday, Nov 16, 2001 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*Title: ``Zero Knowledge and Chosen-Ciphertext Security'' *

*By: Amit Sahai, Princeton University *

*Abstract:*

*Zero-knowledge proofs, introduced by Goldwasser, Micali, and Rackoff, are fascinating constructs in which one party (the "prover") convinces another party (the "verifier") that some assertion is true, without revealing anything else to the verifier.*

*In this talk, we present a connection between the theory of zero-knowledge proofs and one of the classical notions in cryptography, encryption. In particular, we introduce the notion of simulation-sound zero knowledge, and show how the non-interactive form of this notion can be used to achieve the strongest definition of security for encryption, namely adaptive chosen-ciphertext security, in a simple manner. We also present constructions of simulation-sound non-interactive zero-knowledge proofs for all NP languages, and discuss other applications of this notion. *

*Date: Friday, Nov 9, 2001 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*Title: OCB: A Block-Cipher Mode of Operation for Efficient Authenticated Encryption *

*By: Phillip Rogaway, UC Davis *

*See: ``www.cs.ucdavis.edu/~rogaway'' for more info. *

*Abstract: *

*We describe a parallelizable block-cipher mode of operation that simultaneously provides privacy and authenticity. OCB (offset codebook) encrypts-and-authenticates a nonempty message $M\in\{0,1\}^*$ using $\lceil |M|/n\rceil + 2$ block cipher invocations, where $n$ is the block length of the underlying block cipher. Additional overhead is small. OCB refines a scheme, IAPM, suggested by Jutla.*

*Desirable new properties of OCB include: very cheap offset calculations; operating on an arbitrary message $M\in\{0,1\}^*$; producing ciphertexts of minimal length; using a single underlying cryptographic key; making a nearly optimal number of block-cipher calls; avoiding the need for a random IV; and rendering it infeasible for an adversary to find ``pretag collisions''.*

*OCB is intended for next-generation cryptographic standards, a domain where simplicity, key length, key-setup costs, ciphertext length, and computational efficiency (serial and parallel, in hardware and in software) are all important factors. The mode is included in the draft standard IEEE 802.11 (for wireless LANs) and is under consideration by NIST. We prove OCB secure, quantifying the adversary's ability to violate privacy or authenticity in terms of the quality of the block cipher as a pseudorandom permutation (PRP) or as a strong PRP, respectively. *
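The abstract's call count is easy to check numerically. The sketch below encodes the stated formula $\lceil |M|/n\rceil + 2$; the comparison against a two-pass encrypt-then-MAC baseline is our own illustrative assumption, not a claim from the talk.

```python
from math import ceil

def ocb_calls(msg_bits: int, n: int = 128) -> int:
    """Block-cipher invocations OCB uses for a nonempty message M:
    ceil(|M|/n) + 2, where n is the cipher's block length (the
    abstract's formula)."""
    return ceil(msg_bits / n) + 2

def generic_calls(msg_bits: int, n: int = 128) -> int:
    """Illustrative baseline: generic encrypt-then-MAC with a CBC-MAC
    costs roughly two full passes over the message."""
    return 2 * ceil(msg_bits / n)

assert ocb_calls(1024) == 10      # 8 message blocks + 2 overhead calls
assert generic_calls(1024) == 16  # two passes over 8 blocks
```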

*Date: Friday, November 2, 2001 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*Title: Cryptography in non-classical computation models *

*By: David Woodruff, MIT *

*Abstract:*

*We investigate the possibility of cryptographic primitives over non-classical computational models. We first replace the traditional finite field (F_n)* with the infinite field of rational numbers and we give all parties unbounded computational power. We also give parties the ability to sample random real numbers. In this model we determine that secure signature schemes and secure encryption schemes do not exist. We then expand the model by giving all parties the ability to extract square roots. Even in this larger model, we prove that in many cases secure signature schemes are still impossible, and we conjecture that they are impossible in all cases. *

*Date: Friday, Oct 26, 2001 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*Title: An Identity Escrow Scheme with Appointed Verifiers *

*By: Anna Lysyanskaya, MIT *

*Abstract:*

*An identity escrow scheme allows a member of a group to prove membership in this group without revealing any extra information. At the same time, in case of abuse, his identity can still be discovered. We put forward the notion of an identity escrow scheme with appointed verifiers. Such a scheme allows the user to convince only an appointed verifier (or several appointed verifiers) of his membership; no unauthorized verifier can verify a user's group membership even if the user fully cooperates, unless the user is completely under that verifier's control. We then give an efficient construction of an identity escrow scheme with appointed verifiers provably secure under common number-theoretic assumptions in the public-key model.*

*This is joint work with Jan Camenisch, IBM Zurich. Appeared in Crypto2001. *

*Date: Friday, Oct 19, 2001 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*Title: The Electronic Voting Enigma: Hard Problems in Computer Science *

*By: Rebecca Mercuri, Bryn Mawr, Notable Software *

*Abstract:*

*Although it might appear that modern technology should be able to provide secure, auditable, anonymous elections, this turns out to be a difficult problem for computer scientists. Vote collection and tabulation involves processes for system security, program provability, user authentication, and product reliability, all of which harbor inherent flaws. These matters are further compounded by sociological and legal technicalities -- such as the prevention of vote-selling and protection from denial-of-service attacks. This talk will address these subjects from a computer science standpoint, focusing on those which are considered to be "hard" (the CS word for "presently unsolvable"). The discussion will include cryptographic-related issues. Although these computer systems cannot achieve all desired election goals, suggestions will be made regarding design enhancements which, if implemented, could improve these devices to the point where they are almost as good as mechanical lever machines and hand-counted paper ballots.*

*Rebecca Mercuri is the President of Notable Software, Inc., a consulting firm specializing in computer expert testimony and forensic analysis. She is also an Assistant Professor of Computer Science at Bryn Mawr College. For the last decade, she has written extensively, and advised cities and counties in many States, on the subject of voting systems. Her Ph.D. thesis from the University of Pennsylvania's School of Engineering (defended two weeks before the November 2000 election) is titled "Electronic Vote Tabulation Checks & Balances." Dr. Mercuri provided testimony for the Florida recount to the 11th Circuit Court of Appeals and was referenced in one of the briefs to the U.S. Supreme Court.*

*During 2001 she was requested to testify at a hearing for the U.S. House of Representatives Science Committee regarding the need for stronger standards in elections, and has advised the U.S. General Accounting Office and the Federal Election Commission on this matter. Since the election, she has been quoted heavily by the media (Associated Press, Newhouse News Service, U.S. News and World Report, Wall Street Journal, Philadelphia Inquirer, Los Angeles Times, NPR, WJR, WBAI, KYW and WHYY radio, to name but a few) on her opposition to fully electronic and Internet balloting systems. Her electronic voting website at www.notablesoftware.com is considered by many to be a primary resource on electronic voting. *

*Date: Friday, Oct 12, 2001 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*Title: *

*By: Meeting Cancelled *

*Date: Friday, Oct 5, 2001 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*Title: TBA *

*By: Burt Kaliski, RSA *

*Abstract:*

*The security of a signature scheme based on a hash function depends not only on the strength of the hash function, but also on the verifier's assurance that the correct hash function is applied during signature verification. In several schemes, this assurance is provided by a hash function ``firewall'', where a hash function identifier is included along with the hash value in the input to the signature primitive. The identifier is intended to prevent an opponent from causing a verifier to process an existing signature with a weak hash function, in an attempt to obtain a signature forgery.*

*Despite this protection for existing signatures, we show that in several schemes it is possible for an opponent to exploit a weak hash function to forge new signatures. In particular, we show how to break the hash function firewalls in the ISO/IEC 9796-2 signature scheme, the ISO/IEC 14888-2 scheme based on Guillou-Quisquater signatures, a variant of DSA with a hash function identifier, and a version of the Bellare-Rogaway Probabilistic Signature Scheme (PSS). We also show how the version of PSS in the current draft of IEEE P1363a resists this kind of attack, without having an explicit hash function identifier. *
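To make the firewall mechanism concrete, here is a schematic Python sketch of the idea the abstract describes: the input to the signature primitive carries a hash function identifier alongside the hash value, and the verifier rejects unexpected identifiers. The byte tags and function names are invented for illustration; real schemes use standardized identifiers (e.g. ASN.1 OIDs), and the attacks in the talk work despite this encoding.

```python
import hashlib

# Invented one-byte tags for illustration only; SHA-1 plays the role
# of the "weak" hash function here.
HASH_IDS = {b"\x01": hashlib.sha256, b"\x02": hashlib.sha1}

def firewalled_digest(hash_id: bytes, msg: bytes) -> bytes:
    """Input to the signature primitive: identifier || hash value.
    The identifier is intended to stop a verifier from interpreting an
    existing signature under a different (weaker) hash function."""
    return hash_id + HASH_IDS[hash_id](msg).digest()

def verify_encoding(encoded: bytes, msg: bytes, expected_id: bytes) -> bool:
    """Verifier side: reject a mismatched identifier, then recompute
    the hash under the identified function."""
    hash_id, digest = encoded[:1], encoded[1:]
    if hash_id != expected_id:
        return False
    return digest == HASH_IDS[hash_id](msg).digest()

enc = firewalled_digest(b"\x01", b"message")
assert verify_encoding(enc, b"message", b"\x01")
assert not verify_encoding(b"\x02" + enc[1:], b"message", b"\x01")
```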

*Date: Friday, September 28, 2001 *

*Time: 10:30 - 12:00 pm *

*Place: NE43-308 *

*To Discuss: Work with Silvio on 3-round zero-knowledge proofs *

*Matt Lepinski presenting *

*Abstract:*

*A Zero-Knowledge Proof System is a protocol by which a computationally unbounded prover can convince a probabilistic polynomial time verifier that a theorem is true without providing any additional information about the theorem. The first zero-knowledge proof system for an NP-complete language required a number of rounds (messages) that grew polynomially in the length of the theorem. Later work provided zero-knowledge proof systems for NP which used only a constant number of rounds. Additionally, it has been shown that 2-round zero-knowledge proof systems for NP do not exist (unless NP=BPP) and any 3-round zero-knowledge proof system that exists is not black-box simulatable (a property which almost all known zero-knowledge proof systems possess).*

*In this talk we present a non-black-box simulatable 3-round zero-knowledge proof system for NP. We require non-standard assumptions (similar to those used by Hada and Tanaka to prove the existence of a 3-round zero-knowledge argument for NP) in order to prove our protocol correct. Additionally, we provide a proof of knowledge framework in which to view this type of non-standard assumption.*

*Much of this talk will be spent discussing background material related to our result. No prior knowledge of interactive proofs, zero-knowledge, proofs of knowledge or black-box simulation is assumed. *

*Date: Friday, September 21, 2001 *

*Time: 10:30 - 12:00 pm *

*Place: NE43-308 *

*Title: Visit with Sarah Flannery *

*Date: Friday, September 14, 2001 *

*Time: 10:30 - 12:00 pm *

*Place: NE43-308 *

*Title: On the (Im)possibility of Obfuscating Programs *

*By: Boaz Barak, Oded Goldreich, Russel Impagliazzo, Steven Rudich, Amit Sahai, Salil Vadhan, and Ke Yang *

*Moses Liskov presenting *

*Abstract: *

*Informally, an {\em obfuscator} {\Cal O} is an (efficient, probabilistic) "compiler" that takes as input a program (or circuit) P and produces a new program {\Cal O}(P) that has the same functionality as P yet is "unintelligible" in some sense. Obfuscators, if they exist, would have a wide variety of cryptographic and complexity-theoretic applications, ranging from software protection to homomorphic encryption to complexity-theoretic analogues of Rice's theorem. Most of these applications are based on an interpretation of the "unintelligibility" condition in obfuscation as meaning that {\Cal O}(P) is a "virtual black box," in the sense that anything one can efficiently compute given {\Cal O}(P), one could also efficiently compute given oracle access to P.*

*In this work, the authors initiate a theoretical investigation of obfuscation. Their main result is that, even under very weak formalizations of the above intuition, obfuscation is impossible. They prove this by constructing a family of functions {\Cal F} that are {\em unobfuscatable} in the following sense: there is a property \pi: {\Cal F} \rightarrow \{0,1\} such that (a) given {\em any program} that computes a function f \in {\Cal F}, the value \pi (f) can be efficiently computed, yet (b) given {\em oracle access} to a (randomly selected) function f \in {\Cal F}, no efficient algorithm can compute \pi (f) much better than random guessing.*

*The authors extend their impossibility result in a number of ways, including even obfuscators that (a) are not necessarily computable in polynomial time, (b) only {\em approximately} preserve the functionality, and (c) only need to work for very restricted models of computation (TC_0). They also rule out several potential applications of obfuscators, by constructing "unobfuscatable" signature schemes, encryption schemes, and pseudorandom function families.*

*Appeared in Proceedings of CRYPTO 2001, pp. 1 -- 18. Also available online at http://eprint.iacr.org/ (paper number 2001/069) *

*Date: Friday, September 7, 2001 *

*Time: 10:30 - 12:00 pm *

*Place: NE43-308 *

*Title: Open discussion, Meeting New members, etc. *

*Closed to MIT CIS members or potential student members *

*Ron Rivest hosting*

*Date: Friday, July 13, 2001 *

*Time: 11:00 a.m. *

*Place: NE43-308 *

*Title: ``Session-Key Generation using Human Passwords Only'' *

*By: Yehuda Lindell, The Weizmann Institute *

*Abstract: *

*We present session-key generation protocols in a model where the legitimate parties share {\em only} a human-memorizable password. The security guarantee holds with respect to probabilistic polynomial-time adversaries that control the communication channel (between the parties), and may omit, insert and modify messages at their choice.*

*Loosely speaking, the effect of such an adversary that attacks an execution of our protocol is comparable to an attack in which an adversary is only allowed to make a constant number of queries of the form ``is $w$ the password of Party A''. We stress that the result holds also in case the passwords are selected at random from a small dictionary so that it is feasible (for the adversary) to scan the entire dictionary.*

*We note that prior to our result, it was not clear whether or not such protocols were attainable without the use of random oracles or additional setup assumptions.*

*Joint work with Oded Goldreich. *

*Date: Friday, April 27, 2001 *

*APPLIED SECURITY READING GROUP SEMINAR *

*Time: 3:00 p.m. - 4:30 p.m. *

*Place: NE43-518 *

*Title: So, Where's All The Financial Cryptography? *

*By: Win Treese *

*Abstract:*

*The technology of financial cryptography has promised many changes for the way that individuals and organizations do business, yet little progress has been made in real systems. In part, this is because the technology proposals--the crypto, the protocols, and occasionally code--are usually presented with little or no context for the total system in which they play. This talk will look at some of the systems issues, both technical and non-technical, that are critical for successful implementations of financial cryptography.*

*For a slightly out-of-date bio (he no longer works for Open Market), see http://www.treese.org/Commerce/author_information.htm *

*Date: Friday, April 13, 2001 *

*Time: NOTE LATER TIME: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*Title: Mutually Independent Commitment *

*By: Moses Liskov, TOC, LCS, MIT *

*Abstract:*

*We describe a new kind of commitment scheme in which two parties commit to values in a commitment stage, at the end of which we are assured that the values they have committed to cannot be correlated to one another. We call this new primitive ``mutually independent commitments.'' We present three mutually independent commitment schemes which handle single bit commitments, and which are computationally hiding and perfectly binding.*

*This is joint work with Anna Lysyanskaya, Silvio Micali, Leo Reyzin, and Adam Smith. *

*Date: Friday, April 6, 2001 *

*Time: 10:00 a.m. - 11:30 a.m. *

*Place: NE43-308 *

*Title: STRANDS & HONESTY: Proving Protocols Correct *

*By: Jonathan Herzog, TOC, LCS, MIT *

*Abstract: *

*The Strand Space method (Thayer, Herzog, Guttman) is a new technique for the analysis of cryptographic protocols. Previous attempts at analyzing protocols at this level of abstraction have focused on the use of automated tools (such as theorem provers and model checkers) or idealized the protocol even further (so as to use specialized logics). The Strand Space method is a pencil-and-paper technique that operates directly upon the messages of the protocol, allowing one to derive and prove the security conditions that it satisfies.*

*In this talk, we will cover the basic definitions and results of the technique. We will begin by showing how the technique models and formalizes the standard assumptions of formal cryptography. We will then show how to model the interactions between honest participants and the adversary in a graph theoretic way, and then demonstrate how simple graph-theoretic results can be used to prove specific security conditions. We will finish by proving the security conditions satisfied by an example protocol.*

*A familiarity with the area will be useful, but not required. *

*Date: Friday, March 23, 2001 *

*Time: 10:00 a.m. - 11:30 a.m. *

*Place: NE43-308 *

*Title: Forward-Secure Signatures with Optimal Signing and Verifying *

*By: Gene Itkis (BU) and Leonid Reyzin (MIT) *

*Abstract: *

*Ordinary digital signatures have an inherent weakness: if the secret key is leaked, then all signatures, even the ones generated before the leak, are no longer trustworthy. Forward-secure digital signatures were proposed recently by Anderson and formalized by Bellare and Miner to address this weakness: they ensure that past signatures remain secure even if the current secret key is leaked.*

*A few forward-secure signature schemes have been recently put forward. All are significantly less efficient than ordinary signature schemes.*

*We propose the first forward-secure signature scheme for which both signing and verifying are as efficient as for some of the most efficient ordinary signature schemes, each requiring just two modular exponentiations with a short exponent. Moreover, this is achieved with only minimal increases to the sizes of keys and signatures and without any additional public storage. *

*Date: Friday, March 9, 2001 *

*Time: 10:00 a.m. - 11:30 a.m. *

*Place: NE43-308 *

*Title: Dispatches From The World Of Red Cryptography*

*(Cryptographic protocols and formal methods) *

*By: Jonathan Herzog, TOC, LCS, MIT *

*Abstract: *

*Those who attended the talk by Rogaway last semester will recall that he described two "worlds" of cryptographic protocol analysis: the "blue" (computational) approach, and the "red" (formal methods) approach. The two worlds are complementary. While the blue world is able to prove cryptographic security for low-level primitives, it often cannot deal with high levels of abstraction or complexity. The red approach, on the other hand, disregards the actual choices of primitives to focus on high-level, abstract specifications of a protocol. By doing so, it concentrates its attention upon protocol flaws typically found at high levels of complexity, flaws that would exist regardless of the properties of the cryptographic primitives. *

*In this talk, we will introduce the world-view of the red cryptographer: the concerns, assumptions, and results of this camp. In particular, we will discuss the advantages and disadvantages of the three most popular approaches: *

** The use of model checkers, *

** Belief logics, and *

** Inductive reasoning. *

*If time permits, we may discuss in more depth the Strand Space method, an approach due to Guttman, Thayer, and myself. *

*No special background is assumed. *

*Date: Friday, December 8, 2000 *

*Time: NOTE Earlier Time: 9:30 a.m. - 11:00 a.m. *

*Place: NE43-308 *

*Title: Practical and Provably Secure Password-Authenticated Key Exchange *

*By: Jonathan Katz, Columbia University and Telcordia Technologies *

*Abstract: *

*There has been much recent interest in password-authenticated key exchange protocols which remain secure even when users choose passwords from a very limited space (say, a dictionary of English words). Under this realistic assumption, one must be careful to design protocols which cannot be broken using off-line dictionary attacks in which an adversary enumerates all possible passwords in an attempt to determine the correct one. Many heuristic protocols have been proposed to solve this problem. Only recently have formal validations of security (namely, proofs in the idealized random oracle and ideal cipher models) been given for specific constructions \cite{BPR00, BMP00, MPS00}. *

*More recently, a construction based on general assumptions, secure in the standard model, has been proposed \cite{GL00}. Although this protocol requires no public parameters, it requires techniques from general multi-party computation and generic zero-knowledge proofs which render it impractical. *

*In this work, we propose a practical, 3-round, password-authenticated key exchange protocol which is provably secure under the Decisional Diffie-Hellman assumption, yet requires only (roughly) 8 times more computation than ``standard'' Diffie-Hellman key exchange \cite{DH76} (which provides no authentication at all). We stress that we work in the standard model only, and do not require any "random oracle" assumption. *

*Joint work with Rafail Ostrovsky and Moti Yung. *

*Date: Friday, December 1, 2000 *

*Time: 10:00 a.m. - 11:30 a.m. *

*Place: NE43-308 *

*Title: Everlasting Security *

*By: Michael Rabin, Harvard University and *

*Hebrew University *

*Abstract: *

*Currently used encryption schemes, such as Public-Key encryption, Rijndael or DES, depend on unproven assumptions for their security. DES and related schemes employ a secret key shared by the Sender and Receiver. A result due to C. Shannon essentially states that information-theoretic security can be achieved only if the key's length exceeds the length of the message to be encrypted.*

*Even if factoring integers is someday proven to be hard (CS is eons away from this goal), in practice progressively larger integers are being factored. Thus an important encrypted message may be intercepted and stored today by an adversary who may be able to decrypt it a year from now. By employing a new setting for encryption, we surmount the above limitations on encryption and provably achieve, for the first time, the goal mentioned in the title. We also present a method for rendering our encryption non-malleable. (Joint work with Y.Z. Ding.)*
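The Shannon bound mentioned above is worth seeing concretely: the one-time pad achieves information-theoretic security precisely by spending one key byte per message byte. The sketch below (names and example data our own) shows the mechanism and the key-length requirement.

```python
import secrets

def otp_xor(key: bytes, data: bytes) -> bytes:
    """One-time pad: XOR with a uniformly random key that is at least
    as long as the data.  Information-theoretically secure -- but, per
    Shannon's result, only because |key| >= |message| and the key is
    never reused.  The same function encrypts and decrypts."""
    assert len(key) >= len(data), "Shannon bound: key must cover message"
    return bytes(k ^ d for k, d in zip(key, data))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))     # fresh key, as long as the message
ct = otp_xor(key, msg)
assert otp_xor(key, ct) == msg          # XOR again recovers the plaintext
```

Schemes like DES sidestep this bound only by settling for computational, assumption-based security, which is exactly the limitation the talk's new setting aims to overcome.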

*Michael Rabin is the T.J. Watson Sr. Professor of CS at Harvard University. His fields of interest include randomized algorithms, DNA computing, parallel and distributed computing, cryptography and computer security. *

*Date: Friday, November 17, 2000 *

*Time: 10:00 a.m. - 11:30 a.m. *

*Place: NE43-308 *

*Title: A New Forward-Secure Digital Signature Scheme *

*By: Leonid Reyzin, Joint work with Michel Abdalla *

*Abstract: *

*In 1997, Anderson proposed the notion of forward-secure signature schemes. In such schemes, even if the signer's secret key is compromised, signatures produced before the compromise remain secure. In 1999, Bellare and Miner formalized Anderson's notion and provided the first efficient implementation (in the random oracle model).*

*We improve the Bellare-Miner construction. Our scheme has significantly shorter keys and is, therefore, more practical. By using a direct proof technique not used for forward-secure schemes before, we are able to provide better security bounds for the original construction as well as for our scheme.*

*Bellare and Miner also presented a method for constructing such schemes without the use of the random oracle. We will discuss possible improvements to that proposal, as well. *

*Date: FRIDAY, October 27, 2000 *

*Time: 10:00 a.m. - 11:30 a.m. *

*Place: NE43-308 *

*Title: Some Signature Schemes *

*By: Ron Rivest *

*Abstract: *

*I'll describe some recent work (in progress) on signature and authentication schemes: *

*-- "transitive" signature schemes for signing the edges of a (transitively closed) graph (joint work with Silvio Micali) *

*-- "prefix aggregation" schemes for signing subsets of an IP address space (joint work with Tal Rabin and Suresh Chari) *

*-- "beeper-based" signature schemes (joint work with Anna Lysyanskaya) *

*Many open problems will be presented too... *

*See you there! *

*** Combined CIS/TOC SEMINAR ***

*Date: THURSDAY, October 19, 2000 *

*Refreshments: 4:00 p.m. *

*Talk: 4:15 p.m. *

*Place: NE43-518 *

*Title: Reconciling Two Views Of Cryptography (The Computational Soundness Of Formal Encryption) *

*By: Phillip Rogaway, University of California, Davis *

*Abstract: *

*Two distinct, rigorous views of cryptography have developed over the years, in two mostly separate communities. One of the views relies on a simple but effective formal approach; the other, on a detailed computational model that considers issues of complexity and probability. There is an uncomfortable and interesting gap between these two approaches to cryptography. This paper starts to bridge the gap, by providing a computational justification for a formal treatment of encryption. *

*Joint work with Martin Abadi *

*Date: FRIDAY, September 29, 2000 *

*Time: 10:00 a.m. - 11:30 a.m. *

*Place: NE43-308 *

*Title: Amortized E-Cash *

*By: Moses Liskov and Silvio Micali, CIS group, LCS *

*Abstract: *

*We present an e-cash scheme which provides a trade-off between anonymity and efficiency. In many e-cash schemes, the provable anonymity of an e-coin is achieved by means of a zero-knowledge protocol, while the authenticity of an e-coin is achieved by a digital signature. Therefore, the computation involved in generating each e-coin is quite expensive. We show how to amortize the cost of a zero-knowledge protocol and a signature by using them to generate an arbitrary number of e-coins. *

*Date: FRIDAY, September 22, 2000 *

*Time: 10:00 a.m. - 11:30 a.m. *

*Place: NE43-308 *

*Title: Bit Security of the Diffie-Hellman and other related schemes *

*By: Igor Shparlinski, Macquarie U, AU *

*Abstract: *

*In the first part of the talk I'll outline an approach proposed by Boneh and Venkatesan to proving that the last (\log p)^{1/2} bits of the Diffie-Hellman key g^{xy} mod p are as secure as the whole key. Unfortunately their paper has an error. I'll show how to correct this error and outline some other applications (such as the Shamir message passing scheme). Joint work with Isabel Gonzales Vasco *

*In the second part I'll show that the same approach can be used to design an attack on DSA. *

*Joint work with Phong Nguyen. *

*Date: FRIDAY, September 15, 2000 *

*Time: 10:00 a.m. - 11:30 a.m. *

*Place: NE43-308 *

*Title: On the Round Security of Symmetric-Key Cryptographic Primitives *

*By: Leonid Reyzin, CIS group, LCS *

*Abstract: *

*We put forward a new model for understanding the security of symmetric-key primitives, such as block ciphers. The model captures the fact that many such primitives often consist of iterating simpler constructs for a number of rounds, and may provide insight into the security of such designs. *

*We completely characterize the security of four-round Luby-Rackoff ciphers in our model, and show that the ciphers remain secure even if the adversary is given black-box access to the middle two round functions. A similar result can be obtained for message authentication codes based on universal hash functions. *
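For readers unfamiliar with the construction, a Luby-Rackoff cipher is a Feistel network whose round functions are pseudorandom. The sketch below is a four-round toy version; using HMAC-SHA256 as a stand-in PRF and 32-byte blocks are our own illustrative choices, not details from the talk, whose results concern idealized round functions.

```python
import hmac
import hashlib

def feistel_encrypt(keys: list[bytes], block: bytes) -> bytes:
    """Feistel network on a 32-byte block split into halves (L, R).
    Each round: (L, R) -> (R, L XOR F_k(R)), with F = HMAC-SHA256
    truncated to 16 bytes as a stand-in pseudorandom round function."""
    L, R = block[:16], block[16:]
    for k in keys:
        f = hmac.new(k, R, hashlib.sha256).digest()[:16]
        L, R = R, bytes(a ^ b for a, b in zip(L, f))
    return L + R

def feistel_decrypt(keys: list[bytes], block: bytes) -> bytes:
    """Inverse: run the rounds backwards.  Feistel networks are
    invertible even though the round function F need not be."""
    L, R = block[:16], block[16:]
    for k in reversed(keys):
        f = hmac.new(k, L, hashlib.sha256).digest()[:16]
        L, R = bytes(a ^ b for a, b in zip(R, f)), L
    return L + R

keys = [bytes([i]) * 16 for i in range(4)]   # four independent round keys
pt = b"0123456789abcdefFEDCBA9876543210"
assert feistel_decrypt(keys, feistel_encrypt(keys, pt)) == pt
```

The round-security question the talk addresses is, roughly: which of the four round functions can be exposed to the adversary as black boxes before security breaks down?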

*Joint work with Zulfikar Ramzan, presented at Crypto 2000.*

*NOTE: The talk will be shorter than usual, and will probably be followed by a meeting at which we discuss the seminar for the rest of the semester. *

** LAST SEMESTER **

*Date: FRIDAY, June 2, 2000 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Meeting: Cancelled *

*Title: For MIT's GRADUATION *

*Date: FRIDAY, May 5, 2000 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: ``The Love Virus'' *

*By: Discussion lead by Ron Rivest *

*Handouts: Three will be provided *

*Date: FRIDAY, April 28, 2000 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: A Cryptographic Solution to a Game Theoretic Problem *

*By: Yevgeniy Dodis, MIT *

*Abstract: *

*Although Game Theory and Cryptography seem to have some similar scenarios in common, it is very rare to find instances where tools from one area are applied in the other. In this work we use cryptography to solve a game theoretic problem. The problem that we discuss arises naturally in the game theory area of two-party strategic games. In these games there are two players. Each player decides on a ``move'' (according to some strategy), and then the players execute the game, i.e. the two players make their moves simultaneously. Once these moves are played each player receives a payoff, which depends on both moves. Each player cares only about maximizing its own payoff.*

*In the game theory literature it was shown that higher payoffs can be achieved by the players if they use correlated strategies. This is enabled through the introduction of a trusted third party (a ``mediator''), who assists the players in choosing their move. However, the game then loses the property of being a {\em two player} game. It is natural to ask whether a game can exist which would be a two player game yet maintain the high payoffs which the mediator aided strategy offered.*

*We answer this question affirmatively. We extend the game by adding an initial step in which the two players communicate and then they proceed to execute the game as usual. For this extended game we can prove (informally speaking) the following: any correlated strategy for 2-player games can be achieved, provided that the players are computationally bounded and can communicate before playing the game.*

*We obtain an efficient solution to the above game-theoretic problem, by providing a cryptographic protocol for the following {\em Correlated Element Selection} problem. Both Alice and Bob know a list of pairs $(a_1,b_1)\ldots (a_n,b_n)$ (possibly with repetitions), and they want to pick a {\em random} index $i$ such that Alice learns only $a_i$ and Bob learns only $b_i$. We believe that this problem has other applications, beyond our application to game theory. Our solution is quite efficient: it has a constant number of rounds, negligible error probability, and uses only very simple zero-knowledge proofs. *
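The Correlated Element Selection task is easiest to grasp via the ideal functionality the protocol emulates. In the sketch below a trusted mediator plays that role: it samples a uniformly random index and reveals to each party only its half of the chosen pair. All names and example data are our own; the paper's contribution is a cryptographic two-party protocol achieving this *without* the mediator.

```python
import secrets

def mediator_select(pairs: list[tuple[str, str]]) -> tuple[str, str]:
    """Ideal 'Correlated Element Selection' functionality, played by a
    trusted mediator: pick a uniformly random index i and hand Alice
    only a_i and Bob only b_i.  Neither party learns i or the other's
    element beyond what its own element implies."""
    i = secrets.randbelow(len(pairs))
    a_i, b_i = pairs[i]
    return a_i, b_i          # a_i goes to Alice, b_i goes to Bob

# Example correlated strategy list (repetitions allowed).
pairs = [("rock", "paper"), ("paper", "scissors"), ("scissors", "rock")]
alice_view, bob_view = mediator_select(pairs)
assert (alice_view, bob_view) in pairs
```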

*Date: FRIDAY, April 21, 2000 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: Internet Voting in Public Elections *

*By: David Jefferson, Compaq Systems Research Center *

*Abstract:*

*In recent months interest has been growing in systems for online voting, online voter registration, and online petition signing. The Secretary of State of California recently appointed a task force to study the issues, and its report is available at http://www.ss.ca.gov/executive/ivote. This talk will summarize the technical issues addressed in that report: security, privacy, failure tolerance, and availability.*

*One major security conclusion is that Internet voting systems must be divided into two fundamental classes: *
*(a) those in which the election officials control the voting infrastructure on the client side, including the client hardware, software, and the LANs they are connected to; and *
*(b) those in which the voter or a 3rd party controls the client environment, e.g. voting from PCs at home, office, university, hotel, etc.*

*Systems of type (a) are technically manageable today, and may appear in California as soon as November, 2000, at least on a trial basis. On the other hand, systems of type (b), such as the one used in the Arizona Democratic primary in March, are vulnerable to Trojan horse attacks for which there are today no good technical solutions that are both effective and convenient enough for voters. Such systems should not be fielded until there is progress on the fundamental problem of managing malicious code. *
*David Jefferson's Bio:*

*David Jefferson is a Senior Member of the Research Staff at Compaq Systems Research Center in Palo Alto, CA, where he has been doing research on the use of the Internet in public elections for over five years. Recently he served as the chair of the technical committee for the California Secretary of State's Internet Voting Task Force.*

*He is also a Director, and former Chairman of the Board, of the California Voter Foundation, a nonprofit, nonpartisan organization dedicated to a more informed and engaged California electorate, especially through use of the Internet.*

*Prior to joining Compaq he was a professor of computer science for many years, first at the University of Southern California and then at UCLA, where he conducted research in parallel computation and numerous other fields. *

*Date: FRIDAY, April 14, 2000 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: OPEN *

*By: TBA *

*Abstract: *

*Date: FRIDAY, April 7, 2000 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: Real Time Cryptanalysis of A5/1 on a PC *

*By: Adi Shamir, Weizmann Institute, Faculty of Mathematics *

*Abstract: *

*A5/1 is the strong version of the encryption algorithm used by about 130 million GSM customers in Europe to protect the over-the-air privacy of their cellular voice and data communication. The best published attacks against it require between $2^{40}$ and $2^{45}$ steps. This level of security makes it vulnerable to hardware-based attacks by large organizations, but not to software-based attacks on multiple targets by hackers. *

*In this paper we describe new attacks on A5/1, which are based on subtle flaws in the tap structure of the registers, their noninvertible clocking mechanism, and their frequent resets. After a $2^{48}$ parallelizable data preparation stage (which has to be carried out only once), the actual attacks can be carried out in real time on a single PC. *

*The first attack requires the output of the A5/1 algorithm during the first two minutes of the conversation, and computes the key in about one second. The second attack requires the output of the A5/1 algorithm during about two seconds of the conversation, and computes the key in several minutes. The two attacks are related, but use different types of time-memory tradeoffs. The attacks were verified with actual implementations, except for the preprocessing stage which was extensively sampled rather than completely executed. *

*Joint work with Alex Biryukov and David Wagner. *

*Date: FRIDAY, March 31, 2000 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: Database Nation -- Why Cryptography is Not Enough *

*By: Simson Garfinkel *

*Abstract: *

*DATABASE NATION: Simson L. Garfinkel will talk about his new book, Data Protection, the Code of Fair Information Practices, the OECD privacy guidelines of 1980, and why encryption alone won't protect our privacy in the 21st century. *

*Date: FRIDAY, March 24, 2000 *

*NOTE: Spring Break - NO MEETING *

*Date: FRIDAY, March 17, 2000 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: Provable-Subgroup Signatures *

*By: Leonid Reyzin (joint work with Kazuo Ohta and Silvio Micali) *

*Abstract:*

*We put forward the notion of Provable-Subgroup Signatures (PS signatures). In essence, PS signatures enable any subgroup, S, of a given group, G, of potential signers, to sign efficiently a message M so that the signature provably reveals the identities of the signers in S to any verifier. Thus, PS signatures keep the actual signers accountable for what they sign, which is desirable in several applications.*

*We also provide an efficient implementation of PS signatures in the random-oracle model based on the Discrete Log problem by transforming the Schnorr signature scheme. The proposed implementation is quite practical:*

*- Only three rounds of communication are required per signature. *
*- The signing time per party is the same as for the single-signer Schnorr scheme, regardless of the number of parties. *
*- The verification time is only slightly greater than that for the single-signer Schnorr scheme. *
*- The signature length is the same as for the single-signer Schnorr scheme, regardless of the number of parties. *

*Date: FRIDAY, March 10, 2000 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: Talk on LCS's ``Oxygen'' system *

*Meeting open to MIT community only, sorry *

*By: Anant Agarwal *

*Abstract: *

*Professor Anant Agarwal will give a brief overview of the Lab's research on the ``Oxygen'' system (as featured in Scientific American!), and then lead an informal discussion/brainstorming on the security issues in Oxygen. Oxygen has an interesting structure, with many anonymous nodes, and security and authentication are non-trivial. Oxygen may turn out to be a good source of significant research problems and projects at all levels. Come with your thinking cap on! *

*See you there! *

*Date: FRIDAY, March 3, 2000 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: discussion of ``Capability-Based Financial Instruments'' *

*Written by: Mark Miller, Chip Morningstar, and Bill Frantz *

*Abstract: *

*Every novel cooperative arrangement of mutually suspicious parties interacting electronically---every smart contract---effectively requires a new cryptographic protocol. However, if every new contract required a new cryptographic protocol design, our dreams of cryptographically enabled electronic commerce would be unreachable. Cryptographic protocol design is too hard and expensive, given our unlimited need for new contracts. *

*Just as the digital logic gate abstraction allows digital circuit designers to create large analog circuits without doing analog circuit design, we present cryptographic capabilities as an abstraction allowing a similar economy of engineering effort in creating smart contracts. We explain the E system, which embodies these principles, and show a covered-call-option as a smart contract written in a simple security formalism independent of cryptography, but automatically implemented as a cryptographic protocol coordinating five mutually suspicious parties. *

*Hosted By: Ron Rivest *

*Date: FRIDAY, Feb 25, 2000 *

*Title: No CIS meeting this week *

*Date: FRIDAY, Feb 18, 2000 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: Organizational Meeting for Student Crypto Seminars *

*By: Yevgeniy Dodis *

*Date: FRIDAY, FEBRUARY 11, 2000 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: Single Database Private Information Retrieval: A Survey *

*By: Rafail Ostrovsky *

*Telcordia Technologies *

*http://www.telcordia.com/research/whoweare/staff/rafail.html *

*Abstract: *

*We consider a setting where a user wishes to retrieve an item from a database, without letting the database administrator know which item is being retrieved. Of course, a trivial (but expensive) solution is for the user to request the contents of the entire database, thus concealing from the database administrator which item is of interest to the user. Can this be done with less communication? In 1997, I showed with Kushilevitz that this is indeed possible with total communication complexity strictly less than the size of the database, under some standard cryptographic assumptions. Since then much progress has been achieved on this problem. I will survey several recent results and state some open problems in this area. *

*The talk will be self-contained. *
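To make the trivial solution concrete (an illustrative Python sketch; `trivial_pir` and the toy database are ours, not from the talk): the user fetches all n items, so privacy is perfect but communication is linear in the database size -- the benchmark that single-database PIR protocols must beat.

```python
def trivial_pir(database, i):
    """Trivial single-database PIR: download everything, select locally.
    The server learns nothing about i; communication cost is n items."""
    transcript = list(database)        # the server sends the whole database
    return transcript[i], len(transcript)

db = ["rec0", "rec1", "rec2", "rec3"]
item, cost = trivial_pir(db, 2)        # privately retrieve item 2
```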

*Date: FRIDAY, DECEMBER 10, 1999 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: Efficient Threshold and Proactive Cryptography Secure Against The Adaptive Adversary (previously scheduled for Oct 8th) *

*By: Anna Lysyanskaya *

*Abstract: *

*A threshold cryptosystem or signature scheme is a system with n participants where an honest majority can successfully decrypt a message or issue a signature, but where the security and functionality properties of the system are retained even as the adversary corrupts up to t players. The natural adversary one can imagine in this setting is the adaptive adversary, i.e. one that chooses which player to corrupt at which step based on all the information available to it at that step. *

*Recently, Canetti et al.~\cite{CGJKR99} showed how to implement threshold DSS and RSA secure against such an adversary. We extend their contribution in two main directions: (1) for the first time in threshold cryptography, we propose practical distributed cryptographic systems that are secure against the adaptive adversary in the concurrent setting; and (2) we propose simple and clean methods for achieving security against the adaptive adversary. *

*Our new techniques allow us to implement the threshold version of the Cramer-Shoup cryptosystem such that it withstands active attacks from the adaptive adversary. This is the most secure known practical threshold cryptosystem, since the underlying Cramer-Shoup cryptosystem is secure against adaptive chosen ciphertext attack. We note that our techniques apply to transforming virtually any discrete-logarithm-based cryptosystem into its threshold counterpart secure against the adaptive adversary. *

*This research was carried out while the author was visiting IBM Zurich Research Lab, Switzerland. *

*Date: FRIDAY, DECEMBER 3, 1999 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: How To Share A Quantum Secret (part 2) *

*By: Adam Smith, TOC, LCS, MIT *

*Abstract: *

*(I will give the second part of a talk which began on Nov. 5.) *

*Quantum cryptography has, until recently, concentrated on key distribution and two-party protocols such as bit commitment and oblivious transfer. In January, Cleve, Gottesman and Lo took a step away from these settings by giving a simple quantum secret sharing scheme. I will discuss some basic properties of quantum information and erasure-correcting codes. I will also present their scheme and possibly some extensions. *

*Date: Friday, November 19, 1999 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: TBA *

*By: Moses Liskov, TOC, LCS, MIT *

*Abstract: *

*We are concerned with the following problem: A number of parties wish to jointly generate an RSA key for shared use, so that none of them can perform private key operations without the help of others. This is a survey of recent results, including papers by Cocks, Boneh & Franklin, and Gilboa. We discuss the details of the schemes and their applications and possible related questions. *

*References: *

*N. Gilboa. Two Party RSA Key Generation. In Proceedings of Crypto '99, pages 116-129, Springer-Verlag, 1999. *

*C. Cocks. Split Knowledge Generation of RSA Parameters. In M. Darnell, ed., Cryptography and Coding, 6th IMA international conference, pages 89-95, December 1997. http://www.cesg.gov.uk/downlds/math/rsa.pdf *

*D. Boneh and M. Franklin. Efficient Generation of Shared RSA Keys. In Proc. of Crypto 97, pages 425-439. Springer-Verlag, 1997. The full version appears on the web at http://theory.stanford.edu/~dabo/pubs.html *

*Date: Friday, November 12, 1999 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: The Potential and the Limitations of Unconditional Key Agreement *

*By: Stefan Wolf, ETH Zurich *

*Abstract: *

*One of the basic problems in cryptography is the generation of a mutual secret key between two parties willing to establish secure communication. This talk deals with unconditional key agreement, whose security can be proven without any computational assumption. Starting from Shannon's pessimistic impracticality theorem, we expose Wyner's, Csiszar and Koerner's, and Maurer's models of secret-key agreement based on a limitation of the adversary's knowledge due to noise in communication channels. The secrecy capacity and the secret-key rate, respectively, quantify the legitimate partners' ability to generate a mutual secure string in the different settings. We describe lower and upper bounds on these quantities and sketch the corresponding key-agreement protocols in various scenarios.*

*The important special case of privacy amplification, i.e., transforming a mutual, partially insecure string into a highly secret one by public communication, is discussed. It is shown that, besides the previously used universal hashing, extractor functions are a useful tool for privacy amplification. Recent advances in the theory of such derandomization techniques have important consequences in the context of information-theoretic security. *
*First, key agreement can be made secure even against active adversaries. *
*Second, we demonstrate that the previous privacy requirements on the resulting keys were unsatisfactorily weak and can be strengthened without affecting the achievable key-generation rates in all mentioned scenarios. This result solves a well-known open problem and makes all the previous achievements in unconditional key agreement, which were theoretically interesting but unsatisfactory for direct cryptographic applications due to the weakness of the keys, practically much more relevant. *

*Date: Friday, November 5, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*Title: ``How to Share a Quantum Secret'' *

*By: Adam Smith, LCS, MIT *

*Abstract: *

*Quantum cryptography has, until recently, concentrated on key distribution and two-party protocols such as bit commitment and oblivious transfer. In January, Cleve, Gottesman and Lo took a step away from these settings by giving a simple quantum secret sharing scheme. I will discuss their scheme (and some recent extensions) and some relations between quantum error-correction, fault-tolerance, secret-sharing and multiparty computing. I don't assume any specific knowledge about quantum computing, but people who have absolutely no background may want to read a simple survey such as the SIAM News article which will soon be in the boxes next to Be's office (NE43-322). You can also look at the following links:*

*(Some surveys) *
*http://www.CS.McGill.CA/~crepeau/qcomp.html *
*(Secret Sharing) *
*http://xxx.lanl.gov/abs/quant-ph/9901025 *
*http://xxx.lanl.gov/abs/quant-ph/9910067 *
*http://theory.lcs.mit.edu/~asmith/PS/messagetoDG.ps *
*(Incomplete, but contains some results I will discuss) *

*Date: Friday, October 29, 1999 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: Luby-Rackoff Ciphers over Finite Algebraic Structures or *

*Why XOR is not so Exclusive *

*By: Zulfikar Ramzan, LCS, MIT *

*Abstract: *

*Luby and Rackoff showed how to construct pseudo-random permutations from pseudo-random functions; their paper formalized the concept of a secure block cipher. The technique is based on composing several Feistel permutations. The traditional definition of a Feistel permutation involves applying a so-called round function to the right half of the input and taking the XOR with the left half of the input. We consider the question of what happens when operations other than the XOR are applied. In particular, we initiate a study of Luby-Rackoff ciphers when the operation in the underlying Feistel network is addition over an arbitrary finite algebraic structure. We obtain the following results: *

** We construct a cipher which can be broken in a constant number of queries when XOR is used, but is completely secure against adaptive chosen plaintext and ciphertext attacks when addition over finite groups of characteristic greater than 2 is used instead. This cipher has better time/space complexity and uses fewer random bits than all previously considered Luby-Rackoff ciphers. *

** We show that our construction is tight when operations are performed over a finite field, and a minor relaxation in one of the requirements results in it being completely insecure through a non-obvious attack. *

** We examine various other Luby-Rackoff ciphers known to be insecure under XOR. In some cases, we can break these ciphers over arbitrary Abelian groups --- though we have to employ new, more complex techniques. In other cases, however, the security remains an open problem. *
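To fix notation, the Feistel construction that the abstract varies can be sketched as follows (a toy Python sketch with made-up placeholder round functions -- NOT the pseudo-random functions that Luby-Rackoff security requires -- showing XOR versus addition modulo 2^16 as the combining operation):

```python
def feistel_encrypt(left, right, round_fns, combine):
    # One round maps (L, R) -> (R, combine(L, f(R))); iterating gives the cipher.
    for f in round_fns:
        left, right = right, combine(left, f(right))
    return left, right

def feistel_decrypt(left, right, round_fns, uncombine):
    # Invert each round: the previous state is (uncombine(R', f(L')), L').
    for f in reversed(round_fns):
        left, right = uncombine(right, f(left)), left
    return left, right

MASK = 0xFFFF  # 16-bit halves (toy size)
# Placeholder round functions -- deterministic, for illustration only.
rounds = [lambda x, k=k: (x * 40503 + k) & MASK for k in (1, 2, 3)]

# XOR-based Feistel (the classical Luby-Rackoff setting) ...
xor_ct = feistel_encrypt(0x1234, 0xABCD, rounds, lambda l, fr: l ^ fr)
# ... versus addition over Z_{2^16}, an example of a non-XOR group operation.
add_ct = feistel_encrypt(0x1234, 0xABCD, rounds,
                         lambda l, fr: (l + fr) & MASK)
```

Both variants remain permutations: XOR is its own inverse, and modular addition is undone by modular subtraction.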

*This is joint work with Sarvar Patel and Ganesh Sundaram of Lucent Technologies/Bell Labs. *

*Date: Friday, October 22, 1999 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: A New Group Theoretic Protocol for Key Exchange *

*By: Dorian Goldfeld, Arithmetica, Inc. - Dallas, Texas *

*Abstract: *

*We introduce a new protocol for asymmetric key exchange whose security is based on the difficulty of solving systems of simultaneous equations in finitely generated infinite non-abelian groups. A particular implementation based on the braid group will be presented which offers significant performance advantages. *

*This talk is based on joint work with Iris Anshel and Michael Anshel. *

*Date: Friday, October 15, 1999 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: A New Secure GQ Protocol Dedicated To Smartcards and Local Networks *

*By: Louis Guillou - CNET, France Telecom, and*

*Jean-Jacques Quisquater - UCL, Louvain-la-Neuve, Belgium *

*Abstract: *

*The former GQ protocol was based on the factorization hypothesis and on the RSA hypothesis. We will show *

*- how to remove the RSA hypothesis *

*- how to optimize this protocol (for signatures and identification) so that it runs significantly faster (about a factor of 10 for signatures and identification). New interesting questions arise, and we will show how to construct the parameters of the protocol (we call it GQ2). *

*The new GQ protocol is very well suited for smart cards without coprocessors and for local networks. Let us remark that the GQ protocol is already in use, e.g. for the security of Novell systems (NDS and Netware, about 100,000,000 clients and 5,000,000 servers). *

*Date: Friday, October 8, 1999 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: First Talk: ``Adaptive Security for Threshold Cryptosystems'' *

*By: Stanislaw Jarecki (TOC, LCS, MIT) *

*Abstract: *

*We present adaptively-secure efficient solutions to several central problems in the area of threshold cryptography. We prove these solutions to withstand adaptive attackers that choose parties for corruption at any time during the run of the protocol. In contrast, all previously known efficient protocols for these problems were proven secure only against less realistic static adversaries, i.e. adversaries that choose and fix the subset of corrupted parties before the start of the protocol run.*

*Specifically, we provide adaptively-secure solutions for: *
*- distributed key generation in discrete-log based cryptosystems *
*- for the problem of distributed generation of DSS signatures (i.e. "threshold DSS") *
*- threshold RSA *
*- proactive versions of both threshold DSS and threshold RSA*

*We introduce several techniques for the design and analysis of adaptively-secure protocols that may well find further applications.*

*This is joint work with Ran Canetti, Rosario Gennaro, Hugo Krawczyk and Tal Rabin. The paper was presented at Crypto '99. An extended version is available at http://theory.lsc.mit.edu/~stasio/ *
*Title: Postponed Second Talk: ``Construction Of Distributed Cryptographic Applications Secure Against Adaptive Adversaries'' *

*To be given at a future date *

*By: Anna Lysyanskaya (TOC, LCS, MIT) *

*Abstract: *

*We generalize the techniques and results [presented by Stas Jarecki in the first half of the seminar] in two main directions: we examine the more fault-tolerant distributed scenario that allows some of the players to be unreachable while still considered honest; and we show how to construct discrete-logarithm-based threshold cryptosystems and signature schemes that remain secure against adaptive adversaries under concurrent composition. We illustrate how one might use our techniques by generalizing the Canetti-Goldwasser construction of the threshold Cramer-Shoup cryptosystem, such that it can withstand a linear number of corruptions by an adaptive adversary. This is the most secure known practical threshold cryptosystem, since not only is the underlying cryptosystem secure against adaptive chosen ciphertext attack (under the Decisional Diffie-Hellman assumption), but it remains secure against an adaptive adversary in the threshold setting.*

*This research was carried out while the author was visiting the IBM Research Lab in Zurich, Switzerland. *

*Date: Friday, October 1, 1999 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: ``Verifiable Random Functions'' *

*By: Salil Vadhan (LCS, MIT) *

*Abstract: *

*In this talk, I will describe recent joint work with Silvio Micali and Michael Rabin, in which we extend the Goldreich-Goldwasser-Micali notion of pseudorandom functions to incorporate "verifiability". Specifically, we introduce the notion of a "verifiable random function", which is a family of pseudorandom functions f_s in which knowledge of the seed s enables one not only to evaluate f_s at any point x, but also provide an NP-proof that the value f_s(x) is correct without compromising the pseudorandomness of f_s at any other point for which no such proof was provided. We also give a construction of such a family of functions based on a variant of the RSA assumption. *

*Papers available outside Be's office (322)- red stacking file.*

*Date: Friday, September 24, 1999 *

*Time: 10:30 a.m. - 12:00 noon *

*Place: NE43-308 *

*Title: Square Hash: Fast Message Authentication via Optimized Universal Hash Functions *

*By: Zully Ramzan (LCS, MIT) *

*Abstract: *

*In this talk, we introduce two new ideas in the construction of fast universal hash functions geared towards the task of message authentication. First, we describe a simple but novel family of universal hash functions that is more efficient than many standard constructions. We compare our hash functions to the MMH family studied by Halevi and Krawczyk. All the main techniques used to optimize MMH work on our hash functions as well. Second, we introduce additional techniques for speeding up our constructions; these techniques apply to MMH and may apply to other hash functions. The techniques involve ignoring certain parts of the computation, while still retaining the necessary statistical properties for secure message authentication. Finally, we give implementation results on an ARM processor. Our constructions are general and can be used in any setting where universal hash functions are needed; therefore they may be of independent interest. *
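As a point of reference, the square-hash family has a strikingly compact form; the sketch below assumes the shape h_k(m) = (m + k)^2 mod p (an illustrative Python sketch with toy parameters of our choosing; the final masking step that turns the hash into a MAC is omitted, and the exact family and analysis are in the paper):

```python
P = 2**31 - 1  # a Mersenne prime; reduction modulo 2^k - 1 is especially cheap

def square_hash(key, message):
    """Square-hash-style universal hash: h_k(m) = (m + k)^2 mod p."""
    return ((message + key) ** 2) % P

# MAC-style use: hash the message under a shared key; a real MAC would
# additionally mask the result (e.g. with a one-time pad), omitted here.
tag = square_hash(123456789, 987654321)
```

Squaring replaces the multiplication of two independent values used in MMH-style constructions, which is the source of the speedup discussed in the talk.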

*(A new postscript version of the paper will be on Zully's web page (http://theory.lcs.mit.edu/~zulfikar/MyResearch/homepage.html) some time during the day on Thursday.) *

*[NOTE: The talk that was originally scheduled for last week by Salil Vadhan on Verifiable Random Functions, but not held due to the weather, will be rescheduled for a later date...]*

*Date: Friday, September 17, 1999 *

*Time: CANCELLED- *

*Place: would have been NE43-308, 545 Technology Square *

*Subject: CANCELLED due to weather *

*By:*

*Date: Friday, September 10, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*Subject: Closed Meeting for planning purposes *

*Hosted By: Ron Rivest*

*Date: Friday, June 25, 1999 *

*Time: 11:00 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*Subject: TBA *

*By: Shafi Goldwasser *

*Abstract: TBA*

*Date: Friday, June 18, 1999 *

*Time: 11:00 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*Title: Fast approximate PCPs *

*By: Daniele Micciancio *

*Abstract: Presentation of the paper by Ergun, Kumar and Rubinfeld from STOC '99.*

*Date: Friday, June 4, 1999 *

*Time: Cancelled *

*Subject: CANCELLED because it's COMMENCEMENT*

*Date: Friday, May 28, 1999 *

*Time: NOTE CHANGE to 3:30 p.m. - 4:30 p.m. *

*Place: NE43-308 *

*Subject: The SFS Secure File System *

*By: David Mazieres (LCS, MIT) *

*Abstract: *

*No secure global file system the size of the Internet exists today. We believe the primary reason to be the lack of any good way to manage the necessary encryption keys. We propose self-certifying pathnames---file names that specify both a file server and its public key. Clients need no additional information beyond a self-certifying pathname to establish a secure connection to a file server. As a result, key management can be removed from the file system and left to users who can choose arbitrary key-management mechanisms. We demonstrate the benefits of self-certifying pathnames in SFS, a secure, global file system. Through self-certifying pathnames, SFS provides secure file sharing over untrusted networks in almost any setting one might want.*
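A self-certifying pathname might be sketched as follows (illustrative Python; the concrete HostID encoding in SFS differs, and `sfs.example.org` and the key bytes are made up):

```python
import hashlib

def self_certifying_path(location, pubkey):
    """The pathname embeds the server's location and a hash of its public
    key (the 'HostID'), so the name itself certifies which key the server
    must present -- no external key management is needed."""
    host_id = hashlib.sha1(location.encode() + b":" + pubkey).hexdigest()
    return f"/sfs/{location}:{host_id}"

def key_matches_path(path, location, presented_key):
    # A client recomputes the HostID from the key the server presents.
    return path == self_certifying_path(location, presented_key)

path = self_certifying_path("sfs.example.org", b"-----server public key-----")
```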

*(Joint work with Michael Kaminsky, Frans Kaashoek, and Emmett Witchel.) *
*See http://pdos.lcs.mit.edu/~dm/sfs.ps for a current version of the paper. *
*(Copies will be available outside Be's office.)*

*Date: Friday, MAY 21, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-518 *

*Title: Linear Secret Sharing Implies General Secure Multi-Party Computation -- or: A Unified Approach To Secure MPC *

*By: Ueli Maurer *

*Abstract:*

*We show that verifiable secret sharing (VSS) and secure multi-party computation (MPC) among a set of n players can be based on any linear secret sharing scheme (SSS) among the players, if the access structure satisfies a certain necessary and sufficient condition. SSS is implied by VSS and MPC (as an invariant of the computation guaranteed for all intermediate results) but up to now VSS and MPC appeared to be a much more specialized primitive than SSS, requiring more mathematical structure (e.g. polynomials, field size > n, use of an n-th root of unity, etc.).*

*Our new reduction gives rise to simpler and more efficient MPC protocols. They allow one to work over any field (in particular the binary field), and they yield MPC protocols secure against arbitrary (non-threshold) adversary structures, specified by a family of subsets of players, one of which may be corrupted by an adversary. This generalization of the classical threshold-type results, in which the tolerated subsets are specified by their cardinality (less than n/2 or less than n/3), was proposed by Hirt and Maurer, who showed that in the information-theoretic model, a passive (active) adversary structure can be tolerated if and only if no two (no three) of the potentially corrupted subsets cover the full player set. These results follow naturally from our approach, as does the generalization to the cryptographic setting.*

*Our approach to secure MPC is generic and applies to both the cryptographic and the information-theoretic setting. It is based on some new subprotocols for computing with committed values and appears well-suited for teaching MPC in a classroom. The construction is based on: *
*1) a formalization of the special property of a linear SSS that is needed to perform a multiplication of two shared values, *
*2) an efficient generic construction to obtain from any linear SSS another linear SSS for the same access structure Gamma that has this property (whenever this is achievable at all for Gamma), and *
*3) an efficient generic construction to build verifiability into every linear SSS (whenever a VSS for the given access structure is achievable at all).*
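As a concrete reference point, Shamir's scheme is the canonical linear SSS over a field (an illustrative Python sketch; the reduction in the talk applies to any linear SSS and any admissible access structure, not just this threshold one). Its linearity -- the coordinate-wise sum of two sharings is a sharing of the sum -- is exactly what makes addition of shared values free in MPC.

```python
import random

P = 2**61 - 1  # a prime; shares live in the field Z_P

def share(secret, n, t):
    """Split `secret` into n shares so that any t+1 of them reconstruct it
    and any t of them reveal nothing (Shamir's linear SSS)."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```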

*The talk is based on joint work with Ronald Cramer and Ivan Damgaard.*

*Date: Friday, May 14, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-204 *

*Subject: TWO TOPICS *

*By: Discussions Lead by Ben Adida and Ron Rivest *

*Abstract: We will cover two topics (approx 45 minutes each): *

*1) Ben Adida will present a description of the research in his Master's thesis, entitled ``Self-Describing Cryptography'' *

*2) Ron will lead a discussion of Adi Shamir's new sieving gadget that (when built) could lead to a substantial speed-up in the sieving phase of factoring algorithms such as the quadratic sieve or the number-field sieve.*

*Date: Friday, May 7, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-204 *

*Subject: OPEN *

*By: OPEN *

*Abstract: OPEN*

*Date: Friday, April 30, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-518 *

*Title: Oblivious transfer and polynomial evaluation *

*By: Benny Pinkas, Weizmann Institute *

*Abstract: *

*An oblivious evaluation protocol for a function f of two variables allows two parties, Alice, who knows x, and Bob, who knows y, to jointly compute the value of f(x,y) in a way that does not reveal to either side more information than can be deduced from f(x,y). We describe efficient constructions for two oblivious two-party computation problems: (i) 1-out-of-N Oblivious Transfer (and k-out-of-N OT); (ii) Oblivious Polynomial Evaluation.*

*A direct corollary of the 1-out-of-N OT protocol is an efficient transformation of any Private Information Retrieval (PIR) protocol to a Symmetric PIR (SPIR) protocol without increasing the number of databases.*

*The efficiency of the new oblivious evaluation protocols makes them useful for a variety of applications. These include oblivious sampling which can be used to securely compare the sizes of web search engines, and protocols for privately solving the list intersection problem.*

*Joint work with Moni Naor.*

*Date: Friday, April 23, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-204 *

*Title: Multi-Signature Schemes Secure Against Active Insider Attacks *

*By: Kazuo Ohta (NTT) *

*Abstract: *

*This paper proposes the first provably secure multi-signature schemes in the random oracle model. We prove the security of our schemes in the sense of {\it concrete security}. The proposed schemes are efficient if the random oracle is replaced by practical hash functions. The essential techniques in our proof of security are the {\it optimal reduction} from breaking the corresponding identification scheme to breaking signatures (the {\it ID Reduction Technique}), and the {\it hierarchical heavy row lemmas} used in the concrete reduction from solving the primitive problem to breaking the identification scheme.*

*Joint work with Tatsuaki Okamoto*

*Date: Friday, April 16, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-204 *

*Subject: OPEN *

*By: OPEN *

*Abstract: OPEN*

*Date: Friday, April 9, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-518 *

*Title: Verifiable Random Functions *

*By: Silvio Micali *

*Abstract: *

*GGM showed that, if one-way functions exist, one can, from a secret seed, efficiently construct pseudo-random oracles that are totally unpredictable. In their construction, however, it was not possible to prove that a given value is the correct value of the pseudo-random oracle at a given point.*

*In this talk we introduce the notion of a pseudo-random function Fs, easily constructed from a secret seed s, that is both unpredictable and verifiable.*

*Namely, knowledge of s not only enables one to evaluate Fs efficiently at any point x, but also to prove efficiently that such values Fs(x) are indeed correct, without compromising the unpredictability of Fs at any point for which no such proof was provided. We show the existence of such "verifiable random functions" based on a variant of the RSA assumption.*

*Joint work with Michael Rabin*
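
*The GGM construction mentioned above is easy to sketch: a length-doubling generator G = (G_0, G_1) is walked down a binary tree along the bits of the input. Below, SHA-256 stands in for a provably pseudorandom generator (an assumption made for illustration); the sketch gives only the unpredictable function, not the proofs that make it verifiable:*

```python
import hashlib

def prg(seed: bytes):
    """Length-doubling generator G(s) = (G0(s), G1(s)); SHA-256 is a
    stand-in here, not a proven pseudorandom generator."""
    return (hashlib.sha256(b"\x00" + seed).digest(),
            hashlib.sha256(b"\x01" + seed).digest())

def ggm_prf(seed: bytes, x: str) -> bytes:
    """GGM: F_s(x) walks the tree, taking the G0 or G1 branch per bit of x."""
    v = seed
    for bit in x:
        v = prg(v)[int(bit)]
    return v

tag = ggm_prf(b"secret seed", "01101")
```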

*Date: Friday, April 2, 1999 - Joint TOC/CIS Seminar *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-204 *

*Title: The All-Or-Nothing Nature of Two-Party Secure Computation *

*By: Amos Beimel (Harvard) *

*Abstract: *

*A function f is computationally securely computable if two computationally-bounded parties Alice, having a secret input x, and Bob, having a secret input y, can talk back and forth so that (even if one of them is malicious) (1) Bob learns essentially only f(x,y) while (2) Alice learns essentially nothing.*

*We prove that, if any non-trivial function can be so computed, then so can every function. Consequently, the complexity assumptions sufficient and/or required for computationally securely computing f are the same for every non-trivial function f.*

*Based on joint work with Tal Malkin (MIT) and Silvio Micali (MIT).*

*Date: Friday, March 26, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-518 *

*Title: Pseudo-Random Functions, Permutations and Synthesizers *

*By: Omer Reingold (Weizmann Institute) *

*Abstract: *

*Pseudo-random functions and permutations model the key components of private-key cryptography. They allow parties who share a common key to send secret messages to each other, identify themselves and authenticate their messages. In addition, these functions have many other applications, essentially in any setting that calls for a random function that is provided as a black-box.*

*In recent years we have presented several constructions of pseudo-random functions and permutations. The main objective of this research was to obtain efficient constructions based on standard intractability assumptions (where efficiency refers both to the sequential and parallel time-complexity of the computation). Significant progress towards this goal was achieved by introducing a new cryptographic primitive called a pseudo-random synthesizer. Among the additional results of this research: i) A simple Zero-Knowledge proof of the value of our functions. ii) A very efficient pseudo-random mode of operation for block-ciphers. Furthermore, our constructions have implications for computational learning theory and lower bounds for circuit complexity.*

*This talk will survey some of the applications of pseudo-random functions and permutations, and some of our results. In addition, we will discuss several open questions of the field.*

*Most of this talk is based on joint research with Moni Naor.*
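
*One concrete outcome of this line of work is the Naor-Reingold function, which is computable in low parallel complexity and pseudorandom under the DDH assumption. A toy sketch (illustrative parameter sizes only):*

```python
import secrets

p, q, g = 1019, 509, 4      # toy safe-prime group (p = 2q + 1); NOT secure sizes

def keygen(n: int):
    """Key: n + 1 random nonzero exponents a_0, ..., a_n mod q."""
    return [secrets.randbelow(q - 1) + 1 for _ in range(n + 1)]

def nr_prf(a, x: str) -> int:
    """Naor-Reingold: f_a(x) = g^(a_0 * prod_{i : x_i = 1} a_i) mod p."""
    e = a[0]
    for i, bit in enumerate(x, start=1):
        if bit == "1":
            e = (e * a[i]) % q      # exponents are reduced mod the group order
    return pow(g, e, p)

key = keygen(8)
value = nr_prf(key, "10110001")
```

*Each output is a product of at most n + 1 exponents followed by one exponentiation, which is what makes the function parallelizable.*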

*Date: Friday, March 19, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-204 *

*Title: ``Decision oracles are equivalent to Matching oracles'' *

*By: Yiannis Tsiounis *

*Paper with Helena Handschuh and Moti Yung *

*Abstract: *

*One of the key directions in complexity theory, which has also filtered through to cryptographic research, is the effort to classify related but seemingly distinct notions. Separation or reduction arguments are the basic means for this classification.*

*Continuing this direction we identify a class of problems, called ``matching problems,'' which are related to the class of ``decision problems.'' In many cases, these classes are neither trivially equivalent nor distinct. Briefly, a ``decision'' problem consists of one instance and a supposedly related image of this instance; the problem is to decide whether the instance and the image indeed satisfy the given predicate. In a ``matching'' problem two such pairs of instances-images are given, and the problem is to ``match'' or ``distinguish'' which image corresponds to which instance. Clearly the decision problem is more difficult, since given a ``decision'' oracle one can simply test each of the two images to be matched against an instance and solve the matching problem. Here we show that the opposite direction also holds, presuming that randomization of the input is possible, and that the matching oracle is successful in all but a negligible part of its input set.*

*We first apply our techniques to show equivalence between the matching Diffie-Hellman and the decision Diffie-Hellman problems which were both applied recently quite extensively. This is a constructive step towards examining the strength of the Diffie-Hellman related problems. Then we show that in cryptosystems which can be uniformly randomized, non-semantic security implies that there is an oracle that decides whether a given plaintext corresponds to a given ciphertext. In the process we provide a new characteristic of encryption functions, which we call ``universal malleability.''*

*Keywords: Diffie-Hellman variants, randomized reductions, uniform reductions, public-key encryption, homomorphic encryption functions (ElGamal, Goldwasser-Micali, Okamoto-Uchiyama, Naccache-Stern), random self-reducibility, decision problems, matching problems, universal malleability.*

*Date: Friday, March 12, 1999 *

*Time: 12:00 p.m. - 1:00 p.m. *

*Place: NE43-204 *

*BYOL: Bring Your Own Lunch To Hear Joe Reagle: *

*Speak On: XML Digital Signatures*

*Date: Friday, March 12, 1999 -- Joint Seminar with TOC *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-518 *

*Title: Statistical Zero-Knowledge: An Introduction and A Survey of Recent Developments *

*By: Salil Vadhan (MIT) *

*Abstract: Zero-knowledge proofs, introduced by Goldwasser, Micali, and Rackoff in 1985, are fascinating constructs which enable one party to convince another of an assertion without revealing anything other than the validity of the assertion. *Statistical* zero-knowledge proofs are a particular type of such proofs in which the condition that the verifier learns nothing is interpreted in a strong statistical sense. In this talk, we survey a number of recent results which have given us a much more refined understanding of statistical zero-knowledge proofs and the class SZK of languages ("assertions") which possess such proofs.*

*Particular items of focus in this survey are:*

- *The role of Okamoto's theorem (1996) that any SZK proof can be converted into a "public coin" one in facilitating these recent improvements in our understanding of SZK.*
- *The use of "complete problems" to obtain new characterizations of SZK and to reduce the study of the class to a single problem (as first seen in [Sahai, Vadhan, 1997]).*

*We illustrate the benefits of these two tools, by surveying some of the results that have been obtained:*

- *Strong boolean closure properties of SZK [Sahai, Vadhan, 1997].*
- *Converting honest-verifier SZK proofs to any-verifier SZK proofs [Goldreich, Sahai, Vadhan, 1998].*
- *Extending the theory to "noninteractive" SZK proofs [De Santis, Di Crescenzo, Persiano, Yung, 1998] and using this to relate SZK to "noninteractive" SZK [Goldreich, Sahai, Vadhan, 1998].*

*Date: Friday, March 5, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-204 *

*Title: Improving the Exact Security of Digital Signature Schemes *

*By: Leonid Reyzin (MIT) *

*Abstract: We provide two contributions to exact security analysis of digital signatures:*

- *We put forward a new method of constructing Fiat-Shamir-like signature schemes that yields better "exact security" than the original Fiat-Shamir method; and*
- *We extend exact security analysis to exact cost-security analysis by showing that digital signature schemes with "loose security" may be preferable for reasonable measures of cost.*

*Joint work with Silvio Micali. *
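
*The Fiat-Shamir method referred to above turns a three-move identification scheme into a signature by replacing the verifier's challenge with a hash of the commitment and the message. A minimal Schnorr-style instance (toy parameters, Python 3.8+; not the specific schemes whose exact security is analyzed in the talk):*

```python
import hashlib
import secrets

p, q, g = 1019, 509, 4      # toy safe-prime group (p = 2q + 1); NOT secure sizes

def H(*parts) -> int:
    """Challenge = hash of the commitment and the message, reduced mod q."""
    data = "|".join(str(t) for t in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1        # secret key
    return x, pow(g, x, p)                  # (sk, pk = g^x)

def sign(x: int, msg: str):
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)                        # prover's commitment
    e = H(r, msg)                           # Fiat-Shamir: hash as challenge
    s = (k + x * e) % q                     # response
    return e, s

def verify(y: int, msg: str, sig) -> bool:
    e, s = sig
    r = (pow(g, s, p) * pow(y, -e, p)) % p  # recompute the commitment
    return e == H(r, msg)

sk, pk = keygen()
assert verify(pk, "hello", sign(sk, "hello"))
```

*Verification recovers r = g^s · y^(-e) = g^(k+xe) · g^(-xe) = g^k, so the hash check succeeds exactly when the response was computed with the secret key.*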

*Date: Friday, February 26, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-204 *

*Title: Efficient Communication-Storage Tradeoffs for Multicast Encryption *

*By: Tal Malkin (MIT) *

*Abstract: Multicast communication is an attractive method for delivery of data to multiple recipients, minimizing consumption of both sender and network resources. Some applications that benefit from multicast include real-time information update, multi-party conferences, and pay TV. Securing multicast communication poses several important challenges. We focus on providing access control, namely ensuring that only legitimate members of the multicast group have access to the group communication. This is done by maintaining a secret session key known to all legitimate members, and encrypting all group communication using this key. We consider a dynamic group, where users join or leave the group in an arbitrary fashion, and a center who is in charge of performing the re-keying associated with group updates. We require strong security, where the session key is secure against any coalition of nonmembers. *

*There is a variety of different scenarios using multicast, presenting a range of efficiency requirements with respect to several parameters. We give an upper bound on the tradeoff between storage and communication parameters. In particular, we suggest an improvement of the schemes by Wallner et al. and Wong et al. with sub-linear center storage, without a significant loss in other parameters. Correctly selecting the parameters of our scheme we can efficiently accommodate a wide range of scenarios. This is demonstrated by applying the protocol to some known benchmark scenarios. *

*We also show lower bounds on the tradeoff between communication and user storage, and show that our scheme is almost optimal with respect to these lower bounds. *

*Joint work with Ran Canetti and Kobbi Nissim.*
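
*The schemes of Wallner et al. and Wong et al. place the members at the leaves of a binary key tree; each member stores the keys on its path to the root, and a membership change refreshes only that path. A minimal sketch of the path bookkeeping (the talk's storage/communication tradeoff itself is not reproduced):*

```python
def keys_to_refresh(leaf: int, n_leaves: int):
    """Heap-indexed balanced binary key tree: node 1 holds the group key
    and the n_leaves members sit at nodes n_leaves .. 2*n_leaves - 1.
    When the member at `leaf` leaves, every key on its root path must be
    replaced -- about log2(n_leaves) + 1 keys."""
    node, path = n_leaves + leaf, []
    while node >= 1:
        path.append(node)
        node //= 2
    return path

print(keys_to_refresh(leaf=5, n_leaves=8))   # [13, 6, 3, 1]
```

*Each member thus stores O(log n) keys; the center in the basic scheme stores all of the roughly 2n tree keys, and it is this center storage that the talk trades against communication.*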

*Date: Friday, February 19, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-204 *

*Title: Lower Bounds for Oblivious Transfer Reductions *

*By: Yevgeniy Dodis (MIT) *

*Abstract: I'll try to introduce information theory (IT) as a useful tool for making precise cryptographic definitions and proving lower bounds in cryptography. I'll demonstrate the use of IT by the following application that is to appear in Eurocrypt'99. No prior knowledge in information theory will be assumed. *

*We prove the first general and non-trivial lower bound for the number of times a 1-out-of-n Oblivious Transfer of strings of length l should be invoked so as to obtain, by an information-theoretically secure reduction, a 1-out-of-N Oblivious Transfer of strings of length L. Our bound is tight in many significant cases. *

*Joint work with Silvio Micali. Copies of the paper "Lower Bounds for Oblivious Transfer Reductions" can be obtained outside of Be's office and will also be handed out. (Or see http://theory.lcs.mit.edu/~yevgen/ps/obliv.ps.gz)*

*Date: Friday, February 5, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-308 *

*Title: Anonymous Authentication of Membership in Dynamic Groups *

*By: Stuart Schechter (Harvard), Todd Parnell (MIT), Alexander Hartemink (MIT) *

*Abstract: We present a protocol for authenticating an individual's membership in a group without revealing that individual's identity and without restricting how the membership of the group may be changed. Existing protocols that authenticate membership by identifying individuals do not provide anonymity. Those in which members share a common key require a new key to be distributed whenever an individual leaves the group. To overcome these limitations we introduce the _verifiably common secret encoding_ to construct anonymous authentication protocols. These protocols both authenticate membership without identifying the member and enable a trusted third party to add and remove members of the group instantly, in a single message to the authenticator. Applications in electronic commerce and communication can now provide anonymous authentication while accommodating frequent changes in membership.*

*Date: Friday, January 29, 1999 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-204 *

*Title: On the Security Properties of OAEP as an All-or-nothing Transform *

*By: Victor Boyko *

*Abstract: This work studies All-or-Nothing Transforms (AONTs), which have been proposed by Rivest as a mode of operation for block ciphers. An AONT is an unkeyed, invertible, randomized transformation with the property that it is hard to invert unless all of the output is known. We give several formal definitions of security for AONTs. We then prove that Optimal Asymmetric Encryption Padding (OAEP), which was originally introduced by Bellare and Rogaway as a tool for constructing secure encryption schemes, satisfies these definitions (in the random oracle model). The adversary's advantage in getting information about the input of the OAEP is shown to be inversely exponential in the number of bits removed from the output. This bound is nearly optimal. We conclude with several open questions and problems in the area.*
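
*OAEP itself is a two-round Feistel network built from hash functions G and H: s = m XOR G(r), t = r XOR H(s). Recovering m requires all of (s, t), which is the all-or-nothing intuition. A sketch with SHA-256 standing in for the random oracles:*

```python
import hashlib
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def G(r: bytes, n: int) -> bytes:
    """Expand seed r to n bytes (counter-mode SHA-256 as 'oracle' G)."""
    out = b""
    for i in range((n + 31) // 32):
        out += hashlib.sha256(r + i.to_bytes(4, "big")).digest()
    return out[:n]

def Hash(s: bytes) -> bytes:
    return hashlib.sha256(s).digest()[:16]          # 'oracle' H

def aont_encode(m: bytes) -> bytes:
    r = secrets.token_bytes(16)                     # fresh randomness
    s = xor(m, G(r, len(m)))                        # first Feistel round
    t = xor(r, Hash(s))                             # second Feistel round
    return s + t

def aont_decode(c: bytes) -> bytes:
    s, t = c[:-16], c[-16:]
    r = xor(t, Hash(s))                             # recover the seed
    return xor(s, G(r, len(s)))                     # then the message

msg = b"all or nothing"
assert aont_decode(aont_encode(msg)) == msg
```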

*Date: Friday, December 11, 1998 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-518 *

*Title: Concurrent Zero-Knowledge: An Overview *

*By: Amit Sahai *

*Abstract: Zero-Knowledge protocols are fascinating constructs which allow one party (a prover) to convince another (a verifier) of the validity of an assertion, without revealing anything more. However, in a distributed setting, the standard definitions fail to guarantee adequate security. In this talk, we define and consider a notion where the Zero-Knowledge requirement is guaranteed even in the presence of many verifiers interacting concurrently with one or more provers. We survey a number of results, including:*

- *A Timing Model in which constant-round Concurrent Zero-Knowledge protocols exist for NP, due to [Dwork, Naor, and Sahai 98].*
- *The impossibility of 4-round Concurrent Zero-Knowledge protocols in the standard model, due to [Kilian, Petrank, and Rackoff 98].*

*We conclude with several open questions and problems in the area. *

*Date: Friday, November 20, 1998 *

*Time: 10:00 a.m. - 11:30 a.m. *

*Place: NE43-518 *

*Title: A Practical Secure Physical Random Bit Generator *

*By: Markus Jakobsson *

*Abstract: We suggest a practical and economical way to generate random bits using a computer disk drive as a source of randomness. It requires no additional hardware (given a system with a disk), and no user involvement. As a concrete example of performance, on a Sun Ultra-1 with a Seagate Cheetah disk, it generates bits at a rate of either 5 bits per minute or 577 bits per minute depending on the physical phenomena that we use as a source of randomness. The generated bits are random by a theoretical argument, and also pass a severe battery of statistical tests. *
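
*Physical generators of this kind post-process raw measurements to remove bias; one classic option, shown below on a simulated biased source standing in for disk-timing bits, is the von Neumann corrector (the talk's actual extraction and testing procedure may differ):*

```python
import random

def von_neumann(bits):
    """Von Neumann corrector: map bit pairs 01 -> 0 and 10 -> 1, and
    discard 00 and 11.  For independent, identically biased input bits
    the output is exactly unbiased, at the cost of throughput."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

# A simulated biased source stands in for raw disk-timing bits here.
random.seed(0)
raw = [1 if random.random() < 0.7 else 0 for _ in range(10_000)]
clean = von_neumann(raw)
print(sum(raw) / len(raw), sum(clean) / len(clean))   # roughly 0.7 vs 0.5
```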

*Date: Friday, November 13, 1998 *

*Time: 10:30 a.m. *

*Place: NE43-204 *

*Title: Secure Signatures, Without Trees or Random Oracles *

*Speaker: Shai Halevi, IBM, TJ Watson Research Center *

*Abstract: We present a new signature scheme which is existentially unforgeable under chosen message attack assuming some variant of the RSA conjecture.*

*This scheme is not based on "signature trees"; instead it uses a so-called "cryptographic hash function". It is unique in that the assumptions made on this hash function are well defined and reasonable. In particular, we *do not* model this function as a random oracle.*

*We construct our proof of security in steps. First we give and prove a construction which operates in the random oracle model and assumes a variant of the RSA conjecture. Then we show that the random oracle in this construction can be replaced by a hash function which satisfies some strong (but well defined!) computational assumptions. Finally, we demonstrate that these assumptions are reasonable, by proving that a function satisfying them exists assuming that factoring is hard. *
*(Joint work with Rosario Gennaro and Tal Rabin.) *

*Date: Friday, November 6, 1998 *

*Time: 10:30 a.m. - 12:00 p.m. *

*Place: NE43-204 *

*Title: One-way Functions are Essential for Single Database Private Information Retrieval *

*Speaker: Amos Beimel, Harvard University *

*Abstract: Private information retrieval protocols allow a user to read information from a database without the server storing the database knowing what information the user read. Kushilevitz and Ostrovsky construct, based on the quadratic residuosity assumption, a single server protocol with small communication complexity. Cachin, Micali and Stadler present a protocol with smaller communication complexity, based on the (new) phi-hiding assumption.*

*A major question addressed in this work is the minimal assumption necessary for constructing single-server private information retrieval protocols with small communication complexity. We prove that if there is a (0-error) protocol in which the server sends fewer than n bits, then one-way functions exist (where n is the number of bits in the database). That is, even saving one bit compared to the naive protocol, in which the entire database is sent, already requires one-way functions.*

*A similar result holds (but requires more work) even if we allow the retrieval to fail with some small probability. More specifically, we prove that if there is a private information retrieval protocol with error O(1/n) and communication complexity less than n bits, then one-way functions exist. The same result holds if there is a protocol with error probability 1/4 and communication complexity n/10 bits.*

*Joint work with Eyal Kushilevitz and Tal Malkin. *

*Date: Friday, October 30, 1998 *

*Title: "Access with Pseudonyms" by Lidong (Lily) Chen *

*Speaker: Anna Lysyanskaya*

*Proceedings of Cryptography: Policy and Algorithms *
* Ed Dawson, Jovan Golic (eds) *
* Brisbane, Queensland, Australia, July 1995 *
* pages 232-243*

*This work focuses on a model in which there are a number of individuals and a number of organizations. Each organization has some set of credentials that it is supposed to issue to individuals. For example, the Department of Motor Vehicles can issue Driver's Licences and a judge can issue marriage certificates. Each organization identifies each individual by his organization-specific pseudonym. The user is able to transfer the credential issued on one pseudonym to another pseudonym untraceably. For example, the Department of Motor Vehicles will be able to obtain the marital status of an individual identified by a pseudonym, but won't find out anything else even if the judge fully cooperates.*

*The security of the system presented in this paper depends on several assumptions about discrete logarithms, as well as involvement of an off-line trusted third party at set-up stages of the system. *

*Date: Friday, October 16, 1998 *

*Title: An Efficient Threshold Public Key Cryptosystem Secure Against Adaptive Chosen Ciphertext Attack *

*Speaker: Professor Shafi Goldwasser, MIT EECS *

*Abstract: We propose a new simple {\it threshold} Public-Key Cryptosystem which is provably secure against adaptive chosen ciphertext attack, under the DDH intractability assumption. We will present several variants of the scheme.*

* Previously, it was shown how to design a non-interactive threshold PKC secure under chosen ciphertext attack, assuming both that a random oracle exists and the DDH intractability assumption [Gennaro-Shoup].*

* The random-oracle assumption was used both in the proof of security and to eliminate interaction. General completeness results on multi-party computations [BGW, GMW] enable in principle converting any single-server PKC secure against CCA [Naor-Yung, Cramer-Shoup] into a threshold PKC secure against CCA, but the conversions are inefficient and require much interaction among the servers for each ciphertext decrypted.*

* The recent work by [Cramer-Shoup] on single-server PKC secure against CCA has been particularly inspiring for our new proposal.*

* This is joint work with Ran Canetti.*

*Date: Friday, October 9, 1998 *

*Title: Computationally Private Information Retrieval With Polylogarithmic Communication *

*Speaker: Professor Silvio Micali, MIT EECS *

*Abstract: We present a single-database computational private information retrieval system with polylogarithmic communication complexity. Our construction is based on a new, but reasonable assumption: essentially, the difficulty of deciding whether a small prime p divides \phi(m), where m is a composite integer of unknown factorization. (Joint work with Christian Cachin and Markus Stadler)*

*Date: Friday, October 2, 1998 *

*Title: Proof of Plaintext Knowledge and Other Protocols, Without Tears *

*Speaker: Michael O. Rabin, Harvard University and Hebrew University *

*Abstract: We present novel efficient algorithms for the following important problems: Proof of Plaintext Knowledge (PPTK), foiling Chosen Ciphertext Attacks, Non-Malleable Encryption, and Deniable Authentication. A salient feature of these protocols/algorithms is that they do not employ traditional Zero Knowledge Proofs. As a consequence, they are far more efficient than previous solutions. (Joint work with Y. Aumann.)*

*Date: Monday, April 14, 1997 *

*Title: Efficient Anonymous Multicast and Reception *

*Speaker: Shlomi Dolev, Department of Math and Computer Science, Ben-Gurion University *

*Abstract: We examine the problem of efficient anonymous broadcast and reception in general communication networks. We show how anonymous broadcast/reception can be achieved with O(1) amortized communication complexity per bit sent and low computational complexity. In contrast, all previous solutions required polynomial (in the size of the network and security parameter) communication complexity. Joint work with Rafail Ostrovsky.*

*Date: Thursday, April 3, 1997 *

*Title: Is Linear Hashing Good? *

*Speaker: Erez Petrank, DIMACS *

*Abstract: We discuss the behavior of linear transformations as a universal family of hash functions. An interesting measure of universal hashing is the expected size of the largest bucket (i.e., the largest number of elements being hashed to the same point in the range). The mere fact that a family of hash functions is universal does not guarantee good behavior in this sense. We first show that the family of linear transformations over GF[2] behaves surprisingly well in this respect: its behavior is (almost) optimal. In contrast, we show that the family of linear transformations over large fields behaves extremely badly in this respect: it behaves (almost) as badly as possible for a universal family. Joint work with Noga Alon, Martin Dietzfelbinger, Peter Bro Miltersen, and Gabor Tardos.*
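
*A linear hash over GF[2] is simply multiplication of the key's bit vector by a random 0/1 matrix; the quantity studied in the talk is the size of the largest bucket this produces. A small illustrative experiment (it does not reproduce the paper's bounds):*

```python
import random
from collections import Counter

def random_linear_hash(in_bits: int, out_bits: int):
    """A random GF(2)-linear map: one random row mask per output bit;
    output bit i is the parity of (x AND rows[i])."""
    rows = [random.getrandbits(in_bits) for _ in range(out_bits)]
    def h(x: int) -> int:
        return sum((bin(x & row).count("1") & 1) << i
                   for i, row in enumerate(rows))
    return h

random.seed(0)
h = random_linear_hash(in_bits=20, out_bits=8)
keys = random.sample(range(1 << 20), 1024)      # 1024 keys into 256 buckets
largest = max(Counter(h(x) for x in keys).values())
print(largest)   # average load is 4; the talk bounds how far above this it gets
```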

*Date: Monday, March 31, 1997 *

*Title: Towards Realizing Random Oracles *

*Speaker: Ran Canetti, MIT Laboratory for Computer Science *

*Abstract: The {\em random oracle model} is a very convenient setting for designing cryptographic protocols. In this idealized model all parties have access to a common, public random function called a {\em random oracle.} Protocols in this model are often very simple and efficient; also the analysis is often clearer. However, we do not know how to construct a mechanism that realizes a random oracle. In fact, we do not even know how to meaningfully {\em specify} the properties required from such a mechanism. Instead, it is a common practice to simply replace the random oracle with a `cryptographic hash function' (e.g., MD5 or SHA). Consequently, the resulting protocols have no proof of security. We initiate an effort to improve this situation, by proposing a new primitive that realizes a specific aspect of random oracles. This primitive, called {\em oracle hashing,} is a hash function that, like random oracles, `hides all partial information on its input'. A salient property of oracle hashing is that it is probabilistic: different applications to the same input result in different hash values. Still, we maintain the ability to {\em verify} whether a given hash value was generated from a given input. Demonstrating the applicability of the new primitive, we show how oracle hashing successfully replaces the use of random oracles in several natural problems and constructions where the use of random oracles seemed inherent. Several constructions of oracle hashing are presented. One is based on a strong variant of the Diffie-Hellman assumption. Others are based on `cryptographic hash functions' such as MD5 or SHA.*
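
*One of the Diffie-Hellman-based constructions can be sketched as hashing x to the pair (g^r, (g^r)^x) for fresh random r: the output is probabilistic, yet anyone holding x can verify it. Toy parameters below; the hiding property requires the strong DH variant and realistic sizes:*

```python
import secrets

p, q, g = 1019, 509, 4      # toy safe-prime group (p = 2q + 1); NOT secure sizes

def ohash(x: int):
    """Probabilistic 'oracle hash': (g^r, (g^r)^x) for fresh random r.
    Different calls on the same x almost always give different values."""
    r = secrets.randbelow(q - 1) + 1
    a = pow(g, r, p)
    return a, pow(a, x % q, p)

def overify(x: int, h) -> bool:
    """Anyone who knows x can check that h was generated from x."""
    a, b = h
    return pow(a, x % q, p) == b

h1, h2 = ohash(77), ohash(77)
assert overify(77, h1) and overify(77, h2)   # both verify, yet h1 != h2 in general
```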

*Date: Friday, December 6, 1996 *

*Title: Asymmetric and Anonymous Fingerprinting *

*Speaker: Birgit Pfitzmann, University of Hildesheim *

*Abstract: Fingerprinting is one type of measures for the copyright protection of digital data. The merchant sells a slightly different 'copy' to each buyer so that he can later identify the original buyer of an illegally redistributed copy. All previous fingerprinting schemes are symmetric in the following sense: Both the buyer and the merchant know the fingerprinted copy. Thus, when the merchant finds this copy somewhere, there is no proof that it was the buyer who put it there, and not the merchant. We introduce and construct asymmetric fingerprinting schemes, where only the buyer knows the fingerprinted copy and the merchant, upon finding it somewhere, can find out and prove to third parties whose copy it was. Another problem with fingerprinting arises in the context of electronic marketplaces where untraceable electronic cash offers buyers privacy similar to that when buying books or music in normal shops with normal cash. Now buyers would have to identify themselves for fingerprinting. To remedy this, we introduce and construct anonymous asymmetric fingerprinting schemes, where buyers can buy information anonymously, but can nevertheless be identified if they redistribute this information illegally.*

*Date: Friday, October 4, 1996 *

*Title: On The Knowledge Complexity of NP *

*Speaker: Erez Petrank, DIMACS *

*Abstract: Knowledge complexity measures how much a party gains from an interaction with another (possibly stronger) party. Making sure that a party learns nothing or very little from an interaction is a fundamental issue in cryptography, hence the extensive study of the special case of zero knowledge complexity. We consider Knowledge Complexity to be also a fundamental issue in complexity, where we would like to understand not only the computational abilities of machines working on their own, but also their computational abilities following an interaction with the rest of the world. In this work, we study "what can be done" in low knowledge complexity. In particular, we ask which languages can be proven in low knowledge complexity. We show that all languages which have interactive proofs with logarithmic knowledge complexity are in the class co-AM. This class does not contain NP unless the Polynomial Time Hierarchy (PH) collapses. Thus, if the Polynomial Time Hierarchy does not collapse, then complete NP languages cannot be proven without yielding to the verifier super-logarithmic bits of knowledge. Additional results will be discussed in the talk. This work is a joint work with Gabor Tardos.*

*Date: Wednesday, September 11, 1996 *

*Title: The Policy Crisis in Cryptography *

*Speaker: Dr. Herb Lin, NRC *

*Abstract: Cryptography, the work of creating and deciphering coded information using mathematical formulas, long has been the sphere of spies and the military. But in the past 10 years private-sector use of cryptography has exploded as a result of advances in electronic communications and information technologies. Decisions about national cryptography policy now have important implications not only for national security, but also for U.S. economic competitiveness, law enforcement interests, and the protection of the privacy and other rights of individuals. The Computer Science and Telecommunications Board of the National Research Council recently completed a congressionally mandated study (Cryptography's Role in Securing the Information Society) to examine the issues and conflicting interests involved in cryptography and made recommendations on national policy in this highly controversial area.*

*Date: Friday, May 10, 1996 *

*Title: Incoercible Multiparty Computation *

*Speaker: Ran Canetti, IBM T.J. Watson Research Center *

*Abstract: Current secure multiparty protocols have the following deficiency. The public transcript of the communication can be used as an involuntary {\em commitment} of the parties to their inputs and outputs. Thus parties can be later coerced by some authority to reveal their private data. Previous work that has pointed this interesting problem out contained only partial treatment. In this work we present the first general and rigorous treatment of the coercion problem in secure computation. First we present a general definition of protocols that provide resilience to coercion. Our definition constitutes a natural extension of the general paradigm used for defining secure multiparty protocols. Next we show that if trapdoor permutations exist then any function can be incoercibly computed (i.e., computed by a protocol that provides resilience to coercion) in the presence of computationally bounded adversaries and only public communication channels. This holds as long as less than half the parties are coerced (or corrupted). In particular, ours are the first incoercible protocols without physical assumptions. Also, our protocols constitute an alternative solution to the recently solved {\sf adaptive security} problem. Our techniques include non-standard use of {\sf deniable encryptions.} Joint work with Rosario Gennaro.*

*Date: March 8, 1996 *

*Title: Practical and Provably-Secure Commitment Schemes from Collision-Free Hashing *

*Speaker: Shai Halevi, MIT Laboratory for Computer Science *

*Abstract: We present a very practical string-commitment scheme which is provably secure based solely on collision-free hashing. Further, our scheme enables a computationally bounded party to commit strings to an unbounded one, and is optimal (within a small constant factor) in terms of interaction, communication, and computation. Our result also proves that statistical zero-knowledge arguments and constant-round (computational) zero-knowledge proofs for NP exist based on the existence of collision-free hash functions. This is joint work with Silvio Micali.*
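The Halevi-Micali construction itself is more refined than this, but the basic idea of committing to a string with a collision-free hash can be sketched as follows. This is a folklore sketch, not the scheme from the talk, and SHA-256 merely stands in for an arbitrary collision-free hash:

```python
import hashlib
import secrets

def commit(message: bytes) -> tuple[bytes, bytes]:
    # Commit by hashing a fresh random nonce together with the message.
    # The digest is published; the nonce is kept secret until opening.
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + message).digest()
    return digest, nonce

def verify(digest: bytes, nonce: bytes, message: bytes) -> bool:
    # Opening reveals (nonce, message); anyone can recompute the hash.
    return hashlib.sha256(nonce + message).digest() == digest

c, r = commit(b"my bid: 42")
assert verify(c, r, b"my bid: 42")       # correct opening accepted
assert not verify(c, r, b"my bid: 43")   # changed message rejected
```

Binding follows from collision-freeness (opening to two different messages yields a hash collision); hiding in this naive sketch is only computational, whereas the scheme in the talk achieves stronger guarantees against an unbounded receiver.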

*Date: March 1, 1996 *

*Title: Robust and Efficient Sharing of RSA Functions *

*Speaker: Rosario Gennaro, MIT Laboratory for Computer Science *

*Abstract: The explosive growth in the number of users and applications run on the Internet has raised several security issues and created the need for new and more sophisticated cryptographic tools. One example is robust shared signature schemes, which we will survey in this talk. A (T,N)-threshold signature scheme is a protocol for a group of N players which, on input a message M, allows any subset of T or more players to generate a signature for M. No coalition of fewer than T players can produce a valid signature. Such a scheme is robust if it performs correctly even in the presence of up to T arbitrarily malicious players. T is called the fault-tolerance parameter. Threshold signature schemes have very important applications in cryptography as they allow a distributed implementation of signing capabilities. This provides increased security against attacks aimed at the recovery of the secret key. The robustness property also guarantees availability of the service in the presence of an active adversary. The principal application lies in the implementation of secure public-key certification authorities, where leaking the secret key and/or a denial-of-service attack could have disastrous consequences. During the talk we will present a robust and efficient threshold signature scheme based on RSA with optimal fault-tolerance (T = N/2). Solutions for the case of RSA are especially important because of its widespread use (a de facto standard). In addition, solutions to shared RSA signatures usually lead to efficient key escrow schemes for RSA. The scheme is based on interesting extensions that we devised for the Information Checking Protocol of T. Rabin and Ben-Or, and on work on undeniable signatures by Chaum. These extensions are of independent interest. Joint work with Stas Jarecki (MIT), Hugo Krawczyk (IBM), Tal Rabin (MIT).*
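The RSA-specific protocol in the talk is considerably more involved, but the underlying (T,N)-threshold idea can be illustrated with plain Shamir secret sharing over a prime field: any T shares reconstruct the secret, while fewer reveal nothing. This is an illustrative sketch only; the modulus and function names are hypothetical choices, not part of the scheme from the talk:

```python
import secrets

P = 2**127 - 1  # a Mersenne prime serving as the field modulus

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    # Pick a random degree-(t-1) polynomial with constant term = secret,
    # and hand player x the evaluation (x, f(x)).
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = share(12345, t=3, n=5)
assert reconstruct(shares[:3]) == 12345   # any 3 of 5 shares suffice
assert reconstruct(shares[2:5]) == 12345
```

In a threshold signature scheme the signing key is shared this way (or a variant of it), and the players combine partial signatures instead of ever reconstructing the key in one place; the robustness machinery of the talk additionally lets honest players detect and discard bad partial contributions.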

*Date: Friday, February 16, 1996 *

*Title: Java Security: HotJava to Netscape and Beyond *

*Speaker: Drew Dean and Dan Wallach, Computer Science Department, Princeton University *

*Abstract: The introduction of Java applets has taken the World Wide Web by storm. Information servers can customize the presentation of their content with server-supplied code which executes inside the Web browser. We examine the Java language and both the HotJava and Netscape browsers which support it. We demonstrate several attacks which compromise the browsers' security. Our attacks are made possible through browser code which fails to consistently enforce its security policy. We also discuss the underlying tension between the openness desired by Web application writers and the security needs of their users. We suggest how both can be accommodated.*

*Date: Friday, February 12, 1996 *

*Title: Classic Cryptanalysis: The Folger Cipher *

*Speaker: Dr. S. Brent Morris, NSA*

*Date: Friday, December 15, 1995 *

*Title: Micropayment Schemes, Light-Weight Signatures, and Public-Key Certification *

*Speaker: Mark Manasse, Ron Rivest, Adi Shamir, Silvio Micali *

*Abstract: We present four short talks on related subject matter: (1) Mark Manasse will present his micropayment scheme "Millicent". (2) Ron Rivest will present the "PayWord" micropayment scheme. (3) Adi Shamir will present the "MicroMint" micropayment scheme. (4) Silvio Micali will present an enhanced certificate revocation system.*

*Date: Friday, December 8, 1995 *

*Title: An Efficient and Secure Digital Signature Scheme *

*Speaker: Silvio Micali *

*Abstract: We present an efficient digital signature algorithm which is provably secure based on the existence of random-oracle hash functions and the computational difficulty of factoring. The new scheme improves on those of Goldwasser, Micali, and Rivest (as improved by Goldreich) and of Fiat and Shamir.*

*Date: Friday, December 1, 1995 *

*Title: Maintaining Authenticated Communication In the Presence of Break-Ins *

*Speaker: Shai Halevi *

*Abstract: Cryptography provides authenticated communication over insecure channels, as long as the secret keys are not exposed. However, attacks by hackers and insiders often expose secret keys. Such attackers often control the systems only for a limited time, and therefore security may be regained following exposure, provided new keys can be selected and installed securely. This proactive recovery operation must be invoked periodically, since the exposure would often not be detected. We consider a highly adversarial scenario where the adversary has complete control over the network, and can also occasionally break into parties and expose and modify their keys. In this setting we present a proactive secure communication mechanism for maintaining authenticated communication. Using our scheme as a stepping stone, general "higher level" applications for secure multi-party computation, distributed databases, secret sharing, etc., can be carried out in this scenario. Joint work with Ran Canetti and Amir Herzberg.*

*Date: Thursday, November 2, 1995 *

*Title: Quantum Cryptanalysis of Hidden Linear Forms *

*Speaker: Dan Boneh, Princeton University, Joint work with Richard Lipton*

*Date: Thursday, October 19, 1995 *

*Title: Pseudo-Random Synthesizers *

*Speaker: Omer Reingold, Weizmann Institute, Israel*

*Date: Monday, October 2, 1995 *

*Title: Secret Sharing with Public Reconstruction *

*Speaker: Amos Beimel, Technion, Israel *