

Poster

Snuffy: Efficient Whole Slide Image Classifier

Hossein Jafarinia · Alireza Alipanah · Saeed Razavi · Nahal Mirzaie · Mohammad Rohban

# 136
Strong Double Blind: This paper was not made available on public preprint services during the review process.
Wed 2 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Whole Slide Image (WSI) classification has recently gained much attention in digital pathology, presenting researchers with unique challenges. We tackle two main challenges in existing WSI classification approaches. First, these methods invest considerable time and computational resources in pre-training a vision backbone on domain-specific datasets for embedding generation. This limits the development of such models to groups and institutions with large computational budgets, slowing overall progress. Second, they typically employ architectures with limited approximation capabilities for Multiple Instance Learning (MIL), which are inadequate for the intricate characteristics of WSIs and result in sub-optimal accuracy. Our research proposes novel solutions to these issues, balancing efficiency and performance. First, we present the novel approach of continual self-supervised pretraining of ImageNet-1K Vision Transformers (ViTs) equipped with Adapters on pathology-domain datasets, achieving efficiency orders of magnitude better than prior techniques. Second, we introduce an innovative Sparse Transformer architecture and theoretically prove its universal approximability, featuring a new upper bound on the layer count. We additionally evaluate our method on both pathology and MIL datasets, showing superior image- and patch-level accuracy compared to previous methods. Our code is available at https://github.com/jafarinia/snuffy.
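The abstract describes two components without code: adapter-equipped ViTs for continual pretraining, and a sparse attention mechanism for MIL pooling. The PyTorch sketch below illustrates both ideas in minimal form. It is not the authors' Snuffy implementation: `Adapter` is a generic bottleneck adapter of the kind commonly inserted into frozen ViT blocks, `SparseMILHead` is a toy top-k sparse-attention pooling over patch embeddings, and all class names, hyperparameters, and the top-k sparsification scheme are illustrative assumptions.

```python
# Minimal illustrative sketch (NOT the authors' code) of the two ideas in the
# abstract: (1) a bottleneck Adapter that could be inserted into a frozen
# ImageNet-pretrained ViT block for continual self-supervised pretraining, and
# (2) a toy sparse-attention MIL head that pools patch embeddings into a
# slide-level prediction. Names and hyperparameters are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen backbone's features intact
        # while the small adapter learns the pathology-domain shift.
        return x + self.up(F.gelu(self.down(x)))


class SparseMILHead(nn.Module):
    """Toy sparse-attention pooling over a bag of patch embeddings.

    Keeps only the top-k attention logits per slide (a crude stand-in for
    a sparse transformer) before softmax pooling and classification.
    """

    def __init__(self, dim: int, k: int = 32, num_classes: int = 2):
        super().__init__()
        self.score = nn.Linear(dim, 1)
        self.classifier = nn.Linear(dim, num_classes)
        self.k = k

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (num_patches, dim) -- one bag = one whole slide image
        logits = self.score(patches).squeeze(-1)        # (num_patches,)
        k = min(self.k, logits.numel())
        topk, idx = logits.topk(k)                      # sparsify: keep top-k patches
        weights = topk.softmax(dim=0)                   # attention over survivors
        slide_emb = (weights.unsqueeze(-1) * patches[idx]).sum(dim=0)
        return self.classifier(slide_emb)               # slide-level logits


if __name__ == "__main__":
    dim = 384                              # e.g. ViT-S embedding size (assumption)
    bag = torch.randn(1000, dim)           # 1000 patch embeddings from one WSI
    adapted = Adapter(dim)(bag)            # adapter pass over patch features
    print(SparseMILHead(dim)(adapted).shape)  # torch.Size([2])
```

One sentence on the design choice this sketch mirrors: attention-based MIL pooling keeps the slide-level decision differentiable while exposing per-patch weights, which is what makes patch-level accuracy (mentioned in the abstract) measurable at all; the sparsification step restricts that attention to a small subset of patches.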
