Darwinian evolution proceeds by natural selection acting on random variation. I will argue that, although mutations are random, the novel phenotypes they produce can be strongly biased towards simpler, more compressible forms. This bias is so strong that it can dramatically shape the spectrum of adaptive outcomes. The basic intuition follows from an algorithmic twist on the infinite monkey theorem, inspired by the fact that natural selection does not act directly on mutations, but rather on the phenotypes generated by developmental programmes. If monkeys type at random in a computer language, they are much more likely to generate outputs derived from shorter algorithms. This intuition can be formalised with the coding theorem of algorithmic information theory, which predicts that random mutations are exponentially more likely to produce simpler, more compressible phenotypes with low descriptional (Kolmogorov) complexity. Evidence for this evolutionary Occam’s razor can be found in the symmetry of protein complexes [1] and in the simplicity of RNA secondary structures [2], gene regulatory networks, leaf shape, and Richard Dawkins’s biomorphs model of development [3]. This principle may also extend to machine learning, offering insight into why deep neural networks generalise well on typical datasets [4].
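To make the prediction concrete: the coding theorem states that the probability P(x) that a universal prefix Turing machine fed uniformly random bits outputs x satisfies P(x) = 2^(−K(x) + O(1)), where K(x) is the prefix Kolmogorov complexity of x, so an output whose shortest program is n bits shorter is roughly 2^n times more likely to be produced. The toy simulation below is a minimal illustrative sketch of this bias, not the model used in any of the cited papers: random bit-string genotypes are "developed" into phenotypes by an elementary cellular automaton (a stand-in developmental programme chosen here for illustration), and the reader can check whether the high-frequency phenotypes are also the compressible ones, using zlib-compressed length as a crude, computable proxy for K(x), which is itself uncomputable.

    import random
    import zlib
    from collections import Counter

    WIDTH, STEPS, SAMPLES = 16, 16, 50_000

    def develop(genotype: int) -> str:
        # "Development": the low 8 bits of the genotype select an elementary
        # cellular automaton rule; the next WIDTH bits give the initial row.
        # The phenotype is the row after STEPS updates (periodic boundaries).
        rule = genotype & 0xFF
        row = [(genotype >> (8 + i)) & 1 for i in range(WIDTH)]
        for _ in range(STEPS):
            row = [(rule >> (4 * row[i - 1] + 2 * row[i] + row[(i + 1) % WIDTH])) & 1
                   for i in range(WIDTH)]
        return "".join(map(str, row))

    def complexity(phenotype: str) -> int:
        # zlib-compressed length in bytes: a rough proxy for Kolmogorov complexity.
        return len(zlib.compress(phenotype.encode()))

    random.seed(0)
    counts = Counter(develop(random.getrandbits(8 + WIDTH)) for _ in range(SAMPLES))

    # Coding-theorem-style prediction: log2 P(phenotype) falls roughly linearly
    # with complexity, so simple phenotypes appear exponentially more often.
    for pheno, n in counts.most_common(5):
        print(f"P = {n / SAMPLES:.4f}   compressed bytes = {complexity(pheno)}   {pheno}")

The genotype-phenotype map here (random ECA rule plus random initial condition) is an arbitrary choice made for the sketch; the papers cited below use biologically grounded maps such as RNA sequence-to-secondary-structure folding and the biomorphs development model.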
[1] Symmetry and simplicity spontaneously emerge from the algorithmic nature of evolution, I. G. Johnston et al., PNAS 119 (11), e2113883119 (2022).
[2] Phenotype bias determines how RNA structures occupy the morphospace of all possible shapes, K. Dingle, F. Ghaddar, P. Sulc, and A. A. Louis, Molecular Biology and Evolution 39, msab280 (2021).
[3] Bias in the arrival of variation can dominate over natural selection in Richard Dawkins’s biomorphs, N. S. Martin, C. Q. Camargo, and A. A. Louis, PLOS Computational Biology 20 (3), e1011893 (2024).
[4] Do deep neural networks have an inbuilt Occam’s razor?, C. Mingard, H. Rees, G. Valle-Pérez, and A. A. Louis, arXiv:2304.06670 (2023).