In recent years, the amount of data on which computers are expected to operate has increased by several orders of magnitude, and it is no longer rare to encounter data sets many terabytes in size. (Consider, for example, how one would use a computer to answer questions about the Internet, the human genome, or the sales logs of Wal-Mart.) Furthermore, we are asking computers to perform ever more intricate analyses of this data: calculating the three-dimensional shape of a protein made up of many thousands of atoms, finding the most relevant web page for a query out of a pool of billions, or figuring out how best to allocate scarce resources among thousands of entities given only error-prone probabilistic information about the consequences of each decision. As the scope, difficulty, and importance of the problems we pose to computers grow, it is crucial that we devise new mathematical tools to attack them. The Algorithms group at MIT has long been at the forefront of this effort, with faculty ranking among the world's experts in optimization, network algorithms, computational geometry, distributed computing, algorithms for massive data sets, parallel computing, computational biology, and scientific computing.
The group includes faculty, students, and visitors from both the Department of Electrical Engineering and Computer Science and the Department of Mathematics.