Browse through all publications from the Institute of Global Health Innovation, which our Patient Safety Research Collaboration is part of. This feed includes reports and research papers from our Centre. 

Citation

BibTeX format

@article{Fang:2026:10.1111/bju.70273,
author = {Fang, L and Mayor, N and Light, A and Silvanto, A and Haider, A and Ng, C and Gopalakrishnan, A and Boaz, RJ and Tanaka, MB and Khoubehi, B and Hellawell, G and Almeida-Magana, R and Mendes, L and Dinneen, E and Shaw, G and Challacombe, B and Cathcart, P and Connor, MJ and Shah, TT and Ahmed, HU and Fiorentino, F and Giannarou, S and Winkler, M},
doi = {10.1111/bju.70273},
journal = {BJU Int},
title = {Deep learning for fluorescence confocal microscopy image interpretation in radical prostatectomy.},
url = {http://dx.doi.org/10.1111/bju.70273},
year = {2026}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - OBJECTIVE: To develop and validate a deep learning model for interpretation of fluorescence confocal microscopy (FCM) images for intraoperative surgical margin assessment during radical prostatectomy (RP). PATIENTS AND METHODS: Fluorescence confocal microscopy images from the multicentre Imperial Prostate 8-Fluorescence Confocal Microscopy for Rapid Evaluation of Surgical Cancer Excision (IP8-FLUORESCE) study were used to train and test a convolutional neural network model. The modified model incorporated focal loss with label smoothing, dropout regularisation, adaptive class weighting, and weighted sampling to address pronounced class imbalance. Images were pre-processed by extracting regions of interest at a defined digital zoom level and normalised to 896 × 896 pixels. The reference standard was surgical margin status on conventional histopathology assessed by an expert histopathologist. Diagnostic performance was assessed using sensitivity, specificity, positive and negative predictive value, area under the receiver-operating-characteristic curve (AUC), and calibration via Brier scores. External validation was conducted using an independent dataset from the LaserSAFE feasibility trial. Model explainability was evaluated using Gradient-weighted Class Activation Mapping (Grad-CAM) and a custom graphical user interface (GUI) was developed to support real-time deployment. RESULTS: A total of 275 images (37 tumour and 238 benign from 24 patients) were included for model development and internal testing. On the internal test set (n = 57), the model achieved a sensitivity of 87.5%, specificity of 97.9%, and an AUC of 0.93, with good calibration (Brier score 0.16). External validation using 46 independent images yielded a sensitivity of 91.3%, specificity of 73.9%, and an AUC of 0.83, with acceptable calibration (Brier score 0.20). Grad-CAM visualisations aligned with malignant structures on FCM images, and the GUI enabled rapid, interp
AU - Fang,L
AU - Mayor,N
AU - Light,A
AU - Silvanto,A
AU - Haider,A
AU - Ng,C
AU - Gopalakrishnan,A
AU - Boaz,RJ
AU - Tanaka,MB
AU - Khoubehi,B
AU - Hellawell,G
AU - Almeida-Magana,R
AU - Mendes,L
AU - Dinneen,E
AU - Shaw,G
AU - Challacombe,B
AU - Cathcart,P
AU - Connor,MJ
AU - Shah,TT
AU - Ahmed,HU
AU - Fiorentino,F
AU - Giannarou,S
AU - Winkler,M
DO - 10.1111/bju.70273
PY - 2026///
TI - Deep learning for fluorescence confocal microscopy image interpretation in radical prostatectomy.
T2 - BJU Int
UR - http://dx.doi.org/10.1111/bju.70273
UR - https://www.ncbi.nlm.nih.gov/pubmed/42001901
ER -
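The abstract above notes that the model combined focal loss with label smoothing to handle the pronounced class imbalance (37 tumour vs. 238 benign images). As a minimal illustrative sketch only, not the authors' implementation: for binary classification, label smoothing softens the hard 0/1 target, and the focal term down-weights easy examples so the rare tumour class contributes more to training. All function names, parameter defaults, and the binary formulation below are assumptions for illustration.

```python
import math

def focal_loss_with_smoothing(p, y, gamma=2.0, alpha=0.25, eps=0.1):
    """Illustrative binary focal loss with label smoothing (not the paper's code).

    p     : predicted probability of the positive (tumour) class, 0 < p < 1
    y     : true label, 0 (benign) or 1 (tumour)
    gamma : focusing parameter; larger values down-weight easy examples
    alpha : class-balance weight given to the positive class
    eps   : label-smoothing factor; soft target = y*(1-eps) + eps/2
    """
    y_soft = y * (1 - eps) + eps / 2  # smoothed target in (0, 1)
    # Focal terms for each class, modulated by (1 - p_t)^gamma
    pos_term = -alpha * ((1 - p) ** gamma) * math.log(p)
    neg_term = -(1 - alpha) * (p ** gamma) * math.log(1 - p)
    # Soft target mixes the two terms instead of selecting one
    return y_soft * pos_term + (1 - y_soft) * neg_term
```

A confidently classified tumour image (high p, y = 1) incurs a much smaller loss than a misclassified one, which is the intended effect of the focal term.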
