Artificial intelligence (AI) models pretrained on vast data sets outperformed a standard baseline model in identifying nonmelanoma skin cancers from digital images of tissue samples, according to findings presented during the 2025 American Association for Cancer Research (AACR) Annual Meeting1 and simultaneously published in Cancer Epidemiology, Biomarkers & Prevention.2 The study authors believe these pretrained machine learning models may expand the reach of AI-based cancer diagnosis to resource-limited settings such as Bangladesh. The work was conducted in collaboration with the Institute for Population and Precision Health at The University of Chicago.

“While our study suggests foundation models as resource-efficient tools for aiding in nonmelanoma skin cancer diagnosis, we acknowledge that we are still far from having a direct impact on patient care and that further work is needed to address practical considerations, such as the availability of digital pathology infrastructure, Internet connectivity, integration into clinical workflows, and user training,” stated Steven Song, MS, BS, MD/PhD Candidate, Pritzker School of Medicine, The University of Chicago.
Study Details and Key Results
In this study, the researchers evaluated three contemporary pathology foundation models—PRISM (from Paige AI), UNI (from the Mahmood Lab at Brigham and Women’s Hospital), and Prov-GigaPath (from Microsoft)—using fivefold cross-validation to identify nonmelanoma skin cancer from digital pathology images of suspected cancerous skin lesions.
The accuracy of these models in diagnosing nonmelanoma skin cancers was evaluated on 2,130 hematoxylin and eosin–stained whole-slide images, representing 553 biopsy samples from 455 Bangladeshi individuals enrolled in the Bangladesh Vitamin E and Selenium Trial. High levels of exposure to arsenic through contaminated drinking water increase the risk for nonmelanoma skin cancer in this population, providing a relevant real-world context for the study, Mr. Song said. Of the biopsy samples, 41% were benign, 31% were Bowen’s disease (also known as squamous cell carcinoma in situ), 21% were basal cell carcinoma, and 7% were invasive squamous cell carcinoma.
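For illustration, the fold-wise evaluation described above might be set up roughly as follows over precomputed slide-level embeddings; the arrays, embedding dimension, and simple logistic-regression classifier are assumptions for this sketch, not the study's code.

```python
# Sketch of fivefold cross-validation with a macro one-vs-rest AUROC over
# precomputed slide embeddings. All data, dimensions, and the simple
# logistic-regression classifier are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(553, 1024))   # one embedding per biopsy slide (illustrative)
y = rng.integers(0, 4, size=553)   # 0 = benign, 1 = Bowen's, 2 = BCC, 3 = invasive SCC

aurocs = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    probs = clf.predict_proba(X[test_idx])
    # Macro one-vs-rest AUROC across the four diagnostic classes.
    aurocs.append(roc_auc_score(y[test_idx], probs, multi_class="ovr", average="macro"))

print(f"Mean AUROC across folds: {np.mean(aurocs):.3f}")
```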
“We take a whole-slide image, tile it into smaller image patches, encode those tiles using tile encoders, aggregate the tiles back into slide-level representation, and then do final classification over the slide-level embedding,” Mr. Song explained. For slide-level aggregation, he added, attention-based deep multiple instance learning was used in the best-performing model configurations for UNI and Prov-GigaPath, and a multilayer perceptron was used for final classification with all three models.
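In code, the aggregation-and-classification step Mr. Song describes could look roughly like the sketch below (gated attention-based multiple instance learning pooling over frozen tile embeddings, followed by a small multilayer perceptron); the dimensions, layer sizes, and four-class head are assumptions, not the study's implementation.

```python
# Minimal sketch of the slide-level pipeline: frozen tile embeddings ->
# attention-based multiple instance learning (MIL) pooling -> MLP classifier.
# Dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Gated-attention MIL pooling over tile embeddings, with an MLP head."""
    def __init__(self, embed_dim: int = 1024, attn_dim: int = 256, n_classes: int = 4):
        super().__init__()
        self.attn_V = nn.Linear(embed_dim, attn_dim)
        self.attn_U = nn.Linear(embed_dim, attn_dim)
        self.attn_w = nn.Linear(attn_dim, 1)
        # Final classifier over the slide-level embedding (a small MLP).
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim, 512), nn.ReLU(), nn.Linear(512, n_classes)
        )

    def forward(self, tiles: torch.Tensor) -> torch.Tensor:
        # tiles: (n_tiles, embed_dim) -- one bag of frozen tile embeddings per slide
        scores = self.attn_w(torch.tanh(self.attn_V(tiles)) * torch.sigmoid(self.attn_U(tiles)))
        weights = torch.softmax(scores, dim=0)          # (n_tiles, 1) attention weights
        slide_embedding = (weights * tiles).sum(dim=0)  # (embed_dim,) slide representation
        return self.classifier(slide_embedding)         # logits over the 4 classes

# Example: 2,000 tiles from one slide, each already encoded by a frozen tile encoder.
tile_embeddings = torch.randn(2000, 1024)
logits = AttentionMIL()(tile_embeddings)
print(logits.shape)  # torch.Size([4])
```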
The accuracy of the three foundation models was compared with that of ResNet18, an older architecture for image recognition. “ResNet architectures have been used as a starting point for training vision models for nearly a decade and serve as a meaningful baseline comparison for evaluating the performance gains of newer pretrained foundation models,” Mr. Song said.
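As an illustration of such a baseline, an ImageNet-pretrained ResNet18 can be used as a frozen tile encoder; the weight choice, tile size, and feature handling below are assumptions for the sketch, not the study's setup.

```python
# Illustrative baseline: a frozen, ImageNet-pretrained ResNet18 used as a tile
# encoder, analogous to comparing an older architecture against pathology
# foundation models. Weights and tile size are illustrative assumptions.
import torch
from torchvision import models

resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
resnet.fc = torch.nn.Identity()   # drop the classification head; keep 512-dim features
resnet.eval()

with torch.no_grad():
    tiles = torch.randn(32, 3, 224, 224)   # a batch of tile images (illustrative)
    tile_embeddings = resnet(tiles)        # (32, 512) embeddings, one per tile
print(tile_embeddings.shape)
```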
According to the study authors, all three foundation models significantly outperformed ResNet18. With no additional training of the foundation models themselves, the derived classifiers correctly diagnosed subtypes of nonmelanoma skin cancer, with an area under the receiver operating characteristic curve (AUROC) of 0.925 for PRISM, 0.913 for UNI, and 0.908 for Prov-GigaPath, compared with 0.805 for ResNet18 (P < .001 for each comparison).
“Across the board, basal cell carcinoma is relatively easier for each of these foundation models,” Mr. Song stated. “Bowen’s disease is relatively harder across the three models. Furthermore, UNI and Prov-GigaPath struggle in distinguishing invasive squamous cell carcinoma. PRISM, however, actually recovers some of that performance in distinguishing between invasive squamous cell carcinoma and Bowen’s disease.”
Mr. Song offered a possible explanation for PRISM’s better performance in this task. “PRISM is trained over a lot more whole-slide skin images and seems to have pan-slide attention that is key to distinguishing between invasive and in situ squamous cell carcinomas.”
Use in Resource-Limited Settings
Mr. Song and colleagues consider these foundation models to be “strong feature extractors that can be easily adapted to the task of accurately diagnosing nonmelanoma skin cancer...and represent a potential tool to assist pathologists working in resource-limited settings.” To make them more amenable to use in such settings, they developed and tested simplified versions of each model. These simplified models, which require less-extensive computational analysis of pathology image data, still outperformed ResNet18, with AUROCs of 0.882 (PRISM), 0.865 (UNI), and 0.855 (Prov-GigaPath).
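One way such a simplification could look in practice is to replace attention-based aggregation with simple mean pooling of tile embeddings followed by a linear classifier; this sketch illustrates the general idea of a lighter-weight model, not the authors' exact simplification.

```python
# Illustration of a lighter-weight slide classifier: mean-pool the frozen tile
# embeddings into one vector per slide and fit a linear classifier, avoiding
# attention-based aggregation entirely. Data and dimensions are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def slide_embedding(tile_embeddings: np.ndarray) -> np.ndarray:
    """Collapse (n_tiles, embed_dim) tile embeddings into one slide vector."""
    return tile_embeddings.mean(axis=0)

rng = np.random.default_rng(0)
slides = [rng.normal(size=(rng.integers(500, 3000), 1024)) for _ in range(20)]  # 20 slides
X = np.stack([slide_embedding(s) for s in slides])
y = rng.integers(0, 4, size=20)   # illustrative diagnostic labels

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:3]))
```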
Additionally, they developed and evaluated an automatic slide annotation tool, which requires no model training and may offer further utility in a resource-limited triage setting. “Broadly, there is agreement between our model-derived annotation of these slides and the manual annotation done by an expert pathologist,” Mr. Song noted.
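As a rough illustration of annotation without model training, each tile's embedding could be matched to the nearest class "prototype" embedding by cosine similarity and labeled accordingly, with the labels then mapped back onto the tile grid; this is a conceptual sketch, not the authors' published tool.

```python
# Hedged sketch of training-free tile annotation: label each tile with the
# class whose prototype embedding (e.g., a mean of reference-tile embeddings)
# is most similar by cosine similarity. Illustrative only.
import numpy as np

def annotate_tiles(tile_embeddings: np.ndarray, prototypes: dict[str, np.ndarray]) -> list[str]:
    """Assign each tile the label of its most similar class prototype."""
    names = list(prototypes)
    proto = np.stack([prototypes[n] for n in names])                # (n_classes, dim)
    proto = proto / np.linalg.norm(proto, axis=1, keepdims=True)
    tiles = tile_embeddings / np.linalg.norm(tile_embeddings, axis=1, keepdims=True)
    sims = tiles @ proto.T                                          # cosine similarities
    return [names[i] for i in sims.argmax(axis=1)]

rng = np.random.default_rng(0)
prototypes = {c: rng.normal(size=256) for c in ["benign", "bowens", "bcc", "invasive_scc"]}
labels = annotate_tiles(rng.normal(size=(100, 256)), prototypes)    # 100 tiles of one slide
print(labels[:5])
```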
The investigators acknowledged a few study limitations. First, the models were evaluated on a single cohort of patients from Bangladesh, which may limit the generalizability of the findings to other populations. Second, although the study approached model design from the perspective of resource-limited settings, it did not examine the practical details of deploying the pretrained machine learning models in such settings.
DISCLOSURE: The study was supported by the National Institutes of Health. Mr. Song reported no conflicts of interest.
REFERENCES
1. Ellis S, et al: 2025 AACR Annual Meeting. Abstract 1141. Presented April 27, 2025.
2. Ellis S, et al: Cancer Epidemiol Biomarkers Prev. April 27, 2025 (early release online).