Nancy Lynch: An Algorithmic Theory of Brain Networks

Friday, May 18, 2018 - 1:00pm to 2:30pm
Location: 
32-G631
Speaker: 
Nancy Lynch
Affiliation: 
MIT

This talk will describe recent work with Cameron Musco and Merav Parter on studying neural networks from the perspective of distributed algorithms. In this project, we aim both to obtain interesting, elegant theoretical results and to draw relevant biological conclusions.

We base our work on simple stochastic Spiking Neural Network (SNN) models, in which probabilistic neural components are organized into weighted directed graphs and execute in a synchronized fashion. Our model captures the spiking behavior observed in real neural networks and reflects the widely accepted notion that spike responses, and neural computation in general, are inherently stochastic. In most of our work so far, we have considered static networks, but the model would also allow us to consider learning by means of weight adjustments.
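A model of this kind can be illustrated with a minimal sketch: neurons are nodes of a weighted directed graph, and in each synchronous round every neuron fires with a probability determined by its weighted input. The `step` function, the sigmoid firing probability, and the toy weights below are illustrative assumptions, not the authors' exact model.

```python
import math
import random

def sigmoid(x):
    """Map a membrane potential to a firing probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def step(weights, bias, spikes, rng=random):
    """One synchronous round of a toy stochastic SNN.

    weights[i][j] is the weight of the edge from neuron i to neuron j
    (positive = excitatory, negative = inhibitory); spikes is the 0/1
    firing vector from the previous round.  Each neuron fires
    independently with probability sigmoid(weighted input + bias).
    """
    n = len(spikes)
    new_spikes = []
    for j in range(n):
        potential = bias[j] + sum(weights[i][j] * spikes[i] for i in range(n))
        new_spikes.append(1 if rng.random() < sigmoid(potential) else 0)
    return new_spikes

# Tiny 3-neuron example: neuron 0 excites neuron 1, which inhibits neuron 2.
W = [[0.0,  2.0,  0.0],
     [0.0,  0.0, -2.0],
     [0.0,  0.0,  0.0]]
b = [0.0, -1.0, 0.0]
state = [1, 0, 0]
for _ in range(5):
    state = step(W, b, state)
```

The key point the sketch reflects is that the network's evolution is inherently probabilistic: rerunning the same rounds from the same initial spike vector can yield different trajectories.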

We consider the implementation of various algorithmic primitives using stochastic SNNs. We first consider a basic symmetry-breaking task that has been well studied in the computational neuroscience community: the Winner-Take-All (WTA) problem. We also consider the use of stochastic behavior in neural algorithms for Similarity Testing (ST). At the heart of our solution to ST is the design of a compact and fast-converging neural Random Access Memory (neuro-RAM) mechanism.
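The symmetry-breaking flavor of WTA can be conveyed with a toy simulation: all competing neurons start active, each round every active neuron fires with some probability, and lateral inhibition silences the non-firers until a single winner remains. This is a generic stochastic symmetry-breaking sketch under assumed parameters (`p`, `seed`), not the specific WTA construction from the talk.

```python
import random

def winner_take_all(n, p=0.5, seed=0, max_rounds=10000):
    """Toy stochastic WTA among n competing neurons.

    Each round, every still-active neuron fires independently with
    probability p.  If at least one neuron fires, (assumed) lateral
    inhibition drops all non-firing neurons from the competition.
    Repeat until exactly one active neuron remains -- the winner.
    """
    rng = random.Random(seed)
    active = set(range(n))
    for _ in range(max_rounds):
        if len(active) == 1:
            return active.pop()
        fired = {i for i in active if rng.random() < p}
        if fired:  # inhibition suppresses the neurons that stayed silent
            active = fired
    return None  # did not converge within max_rounds (unlikely)
```

Randomness is what breaks the symmetry here: with identical deterministic neurons, no round could ever distinguish one competitor from another, whereas independent coin flips shrink the active set quickly in expectation.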