In this talk, I will introduce an algorithmic framework for analyzing paging in multi-core shared-memory systems. While the classical paging problem has been extensively studied and well understood in the sequential setting for decades, its parallel counterpart has remained largely unresolved for over twenty years. In the parallel paging scenario, p processors share a limited, fast-access cache of size k, and the challenge is to dynamically allocate cache space among the processors to minimize metrics such as average or maximum completion time. I will present matching upper and lower bounds of Θ(log p) on the competitive ratio, achievable with only constant-factor resource augmentation.
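To make the model concrete, here is a toy simulation of the parallel paging setting: p processors take turns issuing page requests against one shared fast cache of size k. This sketch uses plain global LRU eviction and a hypothetical uniform miss penalty; it is an illustration of the cost model only, not the allocation algorithm presented in the talk.

```python
from collections import OrderedDict

def simulate_shared_lru(streams, k, miss_cost=10):
    """Toy parallel-paging model: p processors share a cache of size k.

    streams: one page-request sequence per processor (pages are
    treated as private to each processor here, a simplification).
    Processors proceed in round-robin order, one request per round;
    a hit costs 1 time step, a miss costs `miss_cost` (an assumed
    penalty standing in for the fast/slow memory gap). Eviction is
    global LRU over the shared cache -- unlike the talk's scheme,
    no cache share is reserved per processor.
    Returns the completion time of each processor.
    """
    cache = OrderedDict()            # (proc, page) -> None, in LRU order
    times = [0] * len(streams)
    pos = [0] * len(streams)         # next request index per processor
    done = 0
    while done < len(streams):
        done = 0
        for i, s in enumerate(streams):
            if pos[i] >= len(s):     # this processor has finished
                done += 1
                continue
            key = (i, s[pos[i]])
            if key in cache:
                cache.move_to_end(key)   # refresh LRU position
                times[i] += 1            # hit: one time step
            else:
                if len(cache) >= k:
                    cache.popitem(last=False)  # evict global LRU page
                cache[key] = None
                times[i] += miss_cost        # miss: fetch from slow memory
            pos[i] += 1
    return times
```

With a large cache, each processor pays one miss and then hits, while a too-small shared cache lets the processors evict each other's pages every round; the average and maximum of the returned times correspond to the two objectives mentioned above.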
Following this, I will highlight recent theoretical progress in paging for emerging memory architectures, particularly systems equipped with high-bandwidth memory (HBM).
Location: https://mit.zoom.us/j/94523742291