Virchow2: Scaling Self-Supervised Mixed Magnification Models in Pathology

Published in arXiv preprint, 2024

Abstract: We investigate key factors for developing foundation models in computational pathology. We introduce three vision transformer models—Virchow2 (632M parameters), Virchow2G (1.9B parameters), and Virchow2G Mini (22M parameters, distilled)—trained on 3.1 million histopathology whole slide images spanning diverse tissues, institutions, and stains. The models achieve state-of-the-art performance on 12 tile-level tasks, demonstrating that data diversity and domain-specific training methods can outperform scaling model size alone.

Recommended citation: E. Zimmermann et al. (2024). "Virchow2: Scaling Self-Supervised Mixed Magnification Models in Pathology." arXiv:2408.00738. https://arxiv.org/abs/2408.00738