
Multi-Modal Deep Learning Approaches for Oncological Therapeutic Validation

6th September 2021

The following study was conducted by scientists from the Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA, USA; the Department of Colorectal Surgery, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China; the Guangdong Institute of Gastroenterology, Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Guangzhou, China; the Department of Colorectal Surgery, Sun Yat-sen University Cancer Center, Guangzhou, China; Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China; and the Center for Network Information, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China. The study is published in Nature Communications, as detailed below.

Nature Communications; Volume 12, Article Number: 1851 (2021)

Predicting Treatment Response from Longitudinal Images Using Multi-Task Deep Learning

Abstract

Radiographic imaging is routinely used to evaluate treatment response in solid tumors. Current imaging response metrics do not reliably predict the underlying biological response. Here, we present a multi-task deep learning approach that allows simultaneous tumor segmentation and response prediction. We design two Siamese subnetworks that are joined at multiple layers, which enables integration of multi-scale feature representations and in-depth comparison of pre-treatment and post-treatment images. The network is trained using 2568 magnetic resonance imaging scans of 321 rectal cancer patients for predicting pathologic complete response after neoadjuvant chemoradiotherapy. In multi-institution validation, the imaging-based model achieves an AUC of 0.95 (95% confidence interval: 0.91–0.98) and 0.92 (0.87–0.96) in two independent cohorts of 160 and 141 patients, respectively. When combined with blood-based tumor markers, the integrated model further improves prediction accuracy, with an AUC of 0.97 (0.93–0.99). Our approach to capturing dynamic information in longitudinal images may be broadly used for screening, treatment response evaluation, disease monitoring, and surveillance.
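To make the architecture described above concrete, the sketch below illustrates the general idea of two Siamese (shared-weight) subnetworks, joined at multiple layers, feeding two task heads. This is a minimal toy in NumPy, not the authors' implementation: the encoder structure, layer sizes, feature dimensions, and head shapes are all illustrative assumptions, and the real model operates on MRI volumes with convolutional layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, weights):
    """Toy encoder: returns the feature representation at every layer,
    standing in for the multi-scale features of a real CNN backbone."""
    feats = []
    h = x
    for W in weights:
        h = np.maximum(h @ W, 0.0)  # linear layer + ReLU
        feats.append(h)
    return feats

# One shared weight set -> the two branches are Siamese (identical parameters).
layer_dims, d_in = [64, 32, 16], 128
weights = []
for d_out in layer_dims:
    weights.append(rng.standard_normal((d_in, d_out)) * 0.05)
    d_in = d_out

# Stand-ins for pre- and post-treatment image inputs (one sample each).
pre = rng.standard_normal((1, 128))
post = rng.standard_normal((1, 128))

f_pre = encoder(pre, weights)
f_post = encoder(post, weights)

# Join the branches at every layer: concatenate the paired features so
# downstream heads can compare pre- vs post-treatment representations.
joint = np.concatenate(
    [np.concatenate([a, b], axis=1) for a, b in zip(f_pre, f_post)], axis=1
)

# Two task heads on the joint features (multi-task output):
# a segmentation proxy and a treatment-response probability.
W_seg = rng.standard_normal((joint.shape[1], 10)) * 0.05
W_rsp = rng.standard_normal((joint.shape[1], 1)) * 0.05
seg_logits = joint @ W_seg
response_prob = 1.0 / (1.0 + np.exp(-(joint @ W_rsp)))  # sigmoid
```

The point of the sketch is the wiring, not the numbers: sharing one weight set makes the branches Siamese, concatenating features layer-by-layer is one simple way to "join at multiple layers," and the two heads reflect the joint segmentation-plus-response objective.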

Source:

Nature Communications

URL: https://www.nature.com/articles/s41467-021-22188-y

Citation:

Jin, C., Yu, H., Ke, J. et al. Predicting treatment response from longitudinal images using multi-task deep learning. Nat Commun 12, 1851 (2021). https://doi.org/10.1038/s41467-021-22188-y