Silas Richelson: On the Hardness of Learning with Rounding over Small Modulus

Friday, September 18, 2015 - 10:30am to 12:00pm
Location: Hewlett G882
Speaker: Silas Richelson

Abstract: We show the following reductions from the learning with errors problem (LWE) to the learning with rounding problem (LWR): (1) learning the secret and (2) distinguishing samples from random strings are each at least as hard for LWR as they are for LWE for efficient algorithms, provided the LWR modulus q is at least Bpm, where m is the number of samples, p is the rounding modulus, and the noise is sampled from any distribution supported on the set {-B,...,B}.
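
For orientation, here is a minimal sketch of the two sample distributions being compared (the standard definitions, not restated in the abstract itself; s \in Z_q^n is the secret and a \in Z_q^n is uniform):

  LWE sample:  (a, \langle a, s \rangle + e mod q),  with the noise e supported on {-B,...,B};
  LWR sample:  (a, \lfloor \langle a, s \rangle \rceil_p),  where \lfloor x \rceil_p = \lfloor (p/q) \cdot x \rceil mod p deterministically rounds Z_q down to Z_p.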

Our second result generalizes and gives an alternate proof of a theorem of Alwen, Krenn, Pietrzak and Wichs (CRYPTO 2013).  Moreover, unlike Alwen et al., we do not impose any number theoretic restrictions on the modulus q. The first result also extends to variants of LWR and LWE over polynomial rings.  The above reductions are sample preserving and run in time poly(n, q, m).

As additional results we show that (3) distinguishing any number of LWR samples from random strings is equivalent in hardness to LWE whose noise distribution is uniform over the integers in the range [-q/2p,...,q/2p), provided q is a multiple of p, and (4) the "noise flooding" technique for converting faulty LWE noise to a discrete Gaussian distribution can be applied whenever q = \Omega(B\sqrt{m}).
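
The equivalence in (3) reflects a simple observation about rounding when p divides q (a sketch of the intuition, not stated in the abstract): for any x \in Z_q,

  \lfloor (p/q) \cdot x \rceil = (p/q) \cdot (x + e_x)  for some integer e_x with |e_x| \le q/2p,

so an LWR sample carries exactly the information of an LWE sample whose noise is the rounding offset; result (3) shows that, for the purpose of distinguishing, this offset may be replaced by noise drawn uniformly from [-q/2p,...,q/2p).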

This is joint work with Andrej Bogdanov, Siyao Guo, Daniel Masny and Alon Rosen.