Protein language models (PLMs) have emerged as transformative tools for understanding and interpreting protein sequences, enabling advances in structure prediction, functional annotation, and variant effect assessment from sequence alone. Yet realizing their full potential requires both algorithmic innovation and a deeper understanding of their capabilities and limitations. In this talk, I will present several recent developments that advance PLM-based protein sequence analysis along both dimensions. First, I will introduce Bag-of-Mer (BoM) pooling, a biologically inspired strategy for aggregating amino acid embeddings that captures both local motifs and long-range interactions, improving performance on diverse tasks such as protein activity prediction, remote homology detection, and peptide–protein interaction prediction. Next, I will describe ARIES, a highly scalable multiple-sequence alignment algorithm that leverages PLM embeddings to achieve superior accuracy even in low-identity regions where traditional methods struggle. Finally, time permitting, I will discuss insights into PLM performance, including the roles of training data, sequence fit, and model architecture. Together, this work illustrates how PLMs can both power and reshape core computational biology tasks, while providing guidance for more effective and biologically grounded model development.