SurvRNC: Learning Ordered Representations for Survival Prediction using Rank-N-Contrast

Numan Saeed1, Muhammad Ridzuan1, Fadillah Adamsyah Maani1, Hussain Alasmawi1, Karthik Nandakumar1, Mohammad Yaqub1
1Mohamed bin Zayed University of Artificial Intelligence
Accepted at MICCAI 2024

Abstract

Predicting the likelihood of survival is of paramount importance for individuals diagnosed with cancer, as it provides invaluable information about prognosis at an early stage. This knowledge enables the formulation of effective treatment plans that lead to improved patient outcomes. In the past few years, deep learning models have provided a feasible solution for assessing medical images, electronic health records, and genomic data to estimate cancer risk scores. However, these models often fall short of their potential because they struggle to learn regression-aware feature representations. In this study, we propose the Survival Rank-N-Contrast (SurvRNC) method, which introduces a loss function as a regularizer to obtain an ordered representation based on survival times. This function can handle censored data and can be incorporated into any survival model to ensure that the learned representation is ordinal. The model was extensively evaluated on the HEad & NeCK TumOR (HECKTOR) segmentation and outcome-prediction dataset. We demonstrate that training with SurvRNC improves the performance of different deep survival models. Additionally, it outperforms state-of-the-art methods by 3.6% in concordance index.
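To make the idea concrete, below is a minimal PyTorch sketch of a Rank-N-Contrast-style objective adapted to censored survival data: for each anchor, a comparable sample acts as a positive whose negatives are all samples at least as far away in survival time. The function name, the temperature value, and the simple comparability rule (the earlier time of a pair must correspond to an observed event) are illustrative assumptions; the paper's exact SurvRNC formulation (e.g., its treatment of uncertain censored pairs) differs, so treat this as a sketch rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def surv_rnc_loss(feats, times, events, temperature=0.5):
    """Sketch of a Rank-N-Contrast-style loss for censored survival data.

    feats:  (B, D) embeddings from any survival backbone
    times:  (B,)   observed times (event or censoring)
    events: (B,)   1 = event observed, 0 = censored
    """
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / temperature            # (B, B) similarity logits
    tdiff = (times[:, None] - times[None, :]).abs()  # pairwise label distances

    # A pair (i, j) is treated as comparable when its time ordering is
    # certain: the earlier of the two must have an observed event.
    earlier = torch.where(times[:, None] <= times[None, :],
                          events[:, None], events[None, :]).bool()

    B = feats.size(0)
    eye = torch.eye(B, dtype=torch.bool, device=feats.device)
    loss, n_terms = feats.new_zeros(()), 0
    for i in range(B):
        for j in range(B):
            if i == j or not earlier[i, j]:
                continue
            # Negatives for positive j: all samples at least as far from
            # anchor i in time as j is (the set includes j itself).
            neg_mask = (tdiff[i] >= tdiff[i, j]) & ~eye[i]
            denom = torch.logsumexp(sim[i][neg_mask], dim=0)
            loss = loss + (denom - sim[i, j])
            n_terms += 1
    return loss / max(n_terms, 1)
```

Because the term operates only on embeddings and labels, it can be added with a weighting coefficient to any survival model's training objective, which is the regularizer role the abstract describes.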

Main Architecture

Project Architecture Diagram

HuLP: Human-in-the-Loop for Prognosis

Muhammad Ridzuan1, Mai Kassem1, Numan Saeed1, Ikboljon Sobirov1, Mohammad Yaqub1
1Mohamed bin Zayed University of Artificial Intelligence
Accepted at MICCAI 2024

Abstract

This paper introduces HuLP, a Human-in-the-Loop for Prognosis model designed to enhance the reliability and interpretability of prognostic models in clinical contexts, especially when faced with the complexities of missing covariates and outcomes. HuLP enables human expert intervention, empowering clinicians to interact with and correct models’ predictions, thus fostering collaboration between humans and AI models to produce more accurate prognoses. Additionally, HuLP handles missing data with a tailored neural-network methodology: whereas traditional imputation methods often struggle to capture the nuanced variations within patient populations, leading to compromised prognostic predictions, HuLP imputes missing covariates based on imaging features, aligning more closely with clinician workflows and enhancing reliability. We conduct experiments on two real-world, publicly available medical datasets to demonstrate the superiority and competitiveness of HuLP.
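A minimal sketch of the human-in-the-loop mechanism follows, under the assumption that it behaves like a concept-bottleneck-style model: covariates are predicted from imaging features, and any covariate a clinician observes or corrects overrides the model's imputation before the prognosis head. All class names, layer sizes, and the time-bin output here are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HuLPSketch(nn.Module):
    """Illustrative human-in-the-loop prognosis model: covariates are
    imputed from imaging features, and clinician-supplied values override
    the imputations before the prognosis head. Dimensions are hypothetical."""

    def __init__(self, img_dim=512, n_covariates=8, n_time_bins=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU())
        self.cov_head = nn.Linear(256, n_covariates)          # imputation
        self.prog_head = nn.Linear(256 + n_covariates, n_time_bins)

    def forward(self, img_feats, covariates=None, observed_mask=None):
        h = self.encoder(img_feats)
        cov_pred = torch.sigmoid(self.cov_head(h))
        if covariates is not None and observed_mask is not None:
            # Human-in-the-loop step: keep clinician-supplied (or corrected)
            # values where observed, fall back to imputation where missing.
            cov_pred = torch.where(observed_mask.bool(), covariates, cov_pred)
        return self.prog_head(torch.cat([h, cov_pred], dim=1)), cov_pred
```

The same override path serves both purposes the abstract names: it fills missing covariates from imaging features at inference, and it lets a clinician correct a wrong covariate and immediately see the revised prognosis.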

Main Architecture

Project Architecture Diagram

Deep learning apparatus and method for segmentation and survival prediction for head and neck tumors

Numan Saeed1, Ikboljon Sobirov1, Roba Al Majzoub1, Mohammad Yaqub1
1Mohamed bin Zayed University of Artificial Intelligence
US Patent App. 17/849,943

Abstract

A system, computer-readable storage medium, and method for the prognosis of head and neck cancer. The system includes an input for receiving electronic health records (EHR) of a patient, an input for receiving multimodal images of the head and neck area of the patient, a feature extraction module for converting the electronic health records and multimodal images into at least one feature vector, and a hybrid machine learning architecture comprising a multi-task logistic regression (MTLR) model and a multi-layer artificial neural network. The hybrid architecture takes the at least one feature vector as input and outputs a final risk score of the prognosis of head and neck cancer for the patient.
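The claimed pipeline can be sketched as follows. All dimensions, the bin count, and the simplified MTLR-style survival-curve computation are illustrative assumptions, not the patented implementation:

```python
import torch
import torch.nn as nn

class HybridMTLRSketch(nn.Module):
    """Hypothetical sketch of the claimed pipeline: fuse image and EHR
    feature vectors with an MLP, then an MTLR-style layer scores K time
    bins; a risk score is derived from the predicted survival curve."""

    def __init__(self, img_dim=512, ehr_dim=32, hidden=128, n_bins=14):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_dim + ehr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.mtlr = nn.Linear(hidden, n_bins)   # one logit per time interval

    def forward(self, img_feats, ehr_feats):
        z = self.mlp(torch.cat([img_feats, ehr_feats], dim=1))
        logits = self.mtlr(z)                               # (B, K)
        # Simplification: treat the softmax over bins as an event-time
        # density; its complementary cumulative sum approximates the
        # survival function S(t) at each bin boundary.
        density = torch.softmax(logits, dim=1)
        survival = 1.0 - torch.cumsum(density, dim=1)
        risk = -survival.sum(dim=1)             # higher = worse prognosis
        return risk, survival
```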

System Architecture

Project Architecture Diagram

TMSS: An End-to-End Transformer-based Multimodal Network for Segmentation and Survival Prediction

Numan Saeed1, Ikboljon Sobirov1, Roba Al Majzoub1, Mohammad Yaqub1
1Mohamed bin Zayed University of Artificial Intelligence
Accepted at MICCAI 2022

Abstract

When oncologists estimate cancer patient survival, they rely on multimodal data. Even though some multimodal deep learning methods have been proposed in the literature, the majority rely on two or more independent networks that share knowledge at a later stage in the overall model. Oncologists, by contrast, do not work this way; they fuse information from multiple sources, such as medical images and patient history, in their analysis. This work proposes a deep learning method that mimics oncologists’ analytical behavior when quantifying cancer and estimating patient survival. We propose TMSS, an end-to-end Transformer-based Multimodal network for Segmentation and Survival prediction that leverages the strength of transformers in handling different modalities. The model was trained and validated for segmentation and prognosis tasks on the training dataset from the HEad & NeCK TumOR segmentation and outcome prediction in PET/CT images challenge (HECKTOR). We show that the proposed prognostic model significantly outperforms state-of-the-art methods with a concordance index of 0.763 ± 0.14, while achieving a Dice score of 0.772 ± 0.030, comparable to a standalone segmentation model.
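The early-fusion idea can be illustrated with a short, hypothetical PyTorch sketch: image patch tokens and a projected EHR token are encoded jointly by a single transformer, so attention mixes the modalities from the first layer rather than fusing two independent networks late. Token counts, embedding sizes, and head designs below are assumptions, and the linear segmentation head stands in for the paper's actual decoder.

```python
import torch
import torch.nn as nn

class TMSSSketch(nn.Module):
    """Illustrative single-encoder multimodal fusion in the spirit of TMSS:
    image patches and EHR are encoded together, then split into
    segmentation and prognosis branches. Sizes are hypothetical."""

    def __init__(self, patch_dim=768, ehr_dim=16, embed=768):
        super().__init__()
        self.patch_proj = nn.Linear(patch_dim, embed)
        self.ehr_proj = nn.Linear(ehr_dim, embed)   # EHR as one extra token
        enc_layer = nn.TransformerEncoderLayer(embed, nhead=8,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=6)
        self.seg_head = nn.Linear(embed, patch_dim)  # stand-in for a decoder
        self.risk_head = nn.Linear(embed, 1)

    def forward(self, patches, ehr):
        # patches: (B, N, patch_dim) flattened PET/CT patches; ehr: (B, ehr_dim)
        tokens = torch.cat(
            [self.ehr_proj(ehr).unsqueeze(1), self.patch_proj(patches)], dim=1)
        encoded = self.encoder(tokens)        # joint attention over modalities
        seg = self.seg_head(encoded[:, 1:])   # patch tokens -> mask logits
        risk = self.risk_head(encoded[:, 0])  # EHR/global token -> risk score
        return seg, risk
```

Training both heads from the shared encoding is what makes the model end-to-end: the segmentation supervision shapes the same representation the prognosis head reads.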

Main Architecture

Project Architecture Diagram

An Ensemble Approach for Patient Prognosis of Head and Neck Tumor Using Multimodal Data

Numan Saeed1, Roba Al Majzoub1, Ikboljon Sobirov1, Mohammad Yaqub1
1Mohamed bin Zayed University of Artificial Intelligence
Accepted at MICCAI 2022

Abstract

Accurate prognosis of a tumor can help doctors provide a proper course of treatment and, therefore, save the lives of many. Traditional machine learning algorithms have been eminently useful in crafting prognostic models over the last few decades. Recently, deep learning algorithms have shown significant improvement in developing diagnosis and prognosis solutions for different healthcare problems. However, most of these solutions rely solely on either imaging or clinical data. Utilizing patient tabular data, such as demographics and medical history, alongside imaging data in a multimodal approach has recently started to gain more interest and has the potential to create more accurate solutions. The main issue when using clinical and imaging data to train a deep learning model is deciding how to combine the information from these sources. We propose a multimodal network that ensembles deep multi-task logistic regression (MTLR), Cox proportional hazard (CoxPH), and CNN models to predict prognostic outcomes for patients with head and neck tumors using the patients’ clinical and imaging (CT and PET) data. Features from CT and PET scans are fused and then combined with the patients’ electronic health records for the prediction. The proposed model is trained and tested on 224 and 101 patient records, respectively. Experimental results show that our ensemble solution achieves a C-index of 0.72 on the HECKTOR test set, which earned us first place in the prognosis task of the HECKTOR challenge.
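Below is a minimal sketch of the ensembling step and of the reported evaluation metric, assuming a simple z-score-then-average combination of the three models' risk scores (the challenge submission's exact rule may differ) and Harrell's concordance index over comparable pairs:

```python
import numpy as np

def ensemble_risk(risk_mtlr, risk_cox, risk_cnn, weights=(1/3, 1/3, 1/3)):
    """Hypothetical combination rule: z-score each model's risk so the
    scales are comparable, then take a weighted average."""
    def z(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / (x.std() + 1e-8)
    return sum(w * z(r)
               for w, r in zip(weights, (risk_mtlr, risk_cox, risk_cnn)))

def c_index(risk, time, event):
    """Harrell's concordance index (simplified, ignoring tied times).
    A pair (i, j) is comparable when i's event is observed before j's
    time; it is concordant when i also received the higher risk."""
    num, den = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:
                den += 1
                num += (risk[i] > risk[j]) + 0.5 * (risk[i] == risk[j])
    return num / den if den else float("nan")
```

For example, `c_index(ensemble_risk(r1, r2, r3), times, events)` would reproduce the style of evaluation behind the reported 0.72, given each model's per-patient risk scores.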

Main Architecture

Project Architecture Diagram

Team Members

Numan Saeed

Postdoctoral Fellow

Muhammad Ridzuan

PhD Student

Mohammad Yaqub

Associate Professor