December 30th, 2023

Started serving as an Associate Editor for Data Intelligence.

December 18th, 2023

Started serving as an Associate Editor for IEEE Transactions on Artificial Intelligence.

December 13th, 2023

Our work on the development of a cyclic image-to/from-text model, "AdaMatch-Cyclic", is now available on arXiv. In this collaborative work with Dr. Yixuan Yuan at the Chinese University of Hong Kong (CUHK), we explored the fine-grained mapping between chest X-ray images and their corresponding radiology reports and developed generative models for the image and text modalities accordingly. Preliminary results show that the cyclic generation model can accurately capture word-patch relationships and perform effective text generation (report writing) and image generation.

November 10th, 2023

Another work from us on evaluating GPT-4V for its potential in medicine, "Multimodal ChatGPT for Medical Applications: an Experimental Study of GPT-4V", is now available on arXiv. In this work, we performed a large-scale evaluation probing GPT-4V's capabilities and limitations on specialties including radiology, oncology, ophthalmology, pathology, and more. Tasks include modality recognition, anatomy localization, disease diagnosis, report generation, and lesion detection. We found that GPT-4V is proficient in modality and anatomy recognition but has difficulty with disease diagnosis and localization. GPT-4V also excels at diagnostic report generation, indicating strong image captioning skills. While promising for biomedical imaging AI, GPT-4V requires further enhancement and validation before clinical deployment.

November 4th, 2023

Our work on generative multi-view learning, "Structure Mapping Generative Adversarial Network for Multi-view Information Mapping Pattern Mining", was accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence. In this collaborative work with Dr. Xia-An Bi from Hunan Normal University, we proposed a Structure Mapping Generative Adversarial Network (SM-GAN) framework, which utilizes the consistency and complementarity of multi-view data. Compared with a regular GAN, we added a structural information mapping module to model the structural information mapping from the micro-view to the macro-view. Its preprint on PubMed can be found here.

November 3rd, 2023

Started serving as an Associate Editor for BMC Biomedical Engineering.

October 29th, 2023

Our work on evaluating the medical image understanding capability of GPT-4V, "Multimodal ChatGPT for Medical Applications: an Experimental Study of GPT-4V", is now available on arXiv. In this work, we found that the current version of GPT-4V is not recommended for real-world diagnostics due to its unreliable and suboptimal accuracy in responding to diagnostic medical questions. In addition, we delineate seven unique facets of GPT-4V's behavior in medical VQA tasks, highlighting its constraints within this complex arena. The paper has been recommended by Hugging Face here.

October 20th, 2023

Our work on graph transformer representation of dynamic brain imaging data, "Large-scale Graph Representation Learning of Dynamic Brain Connectome with Transformers", was accepted by the NeurIPS 2023 Temporal Graph Learning Workshop. In this collaborative work with Dr. Byung-Hoon Kim from Yonsei University College of Medicine, we developed a representation learning framework for modeling dynamic functional connectivity with graph transformers. Through the novel "connectome embedding" concept developed in this work, we can characterize the position, structure, and time information of the functional connectivity graph within an integrated embedding. Experimental results from multiple fMRI datasets show state-of-the-art performance of the proposed framework in gender classification and age regression tasks. Its OpenReview link can be found here.

October 13th, 2023

Our work on knowledge graph discovery from unstructured text data, "Coarse-to-fine Knowledge Graph Domain Adaptation based on Distantly-supervised Iterative Training", was accepted by the IEEE International Conference on Bioinformatics and Biomedicine 2023 (BIBM 2023). In this collaborative work with the South China University of Technology, we developed an integrated framework for adapting and re-learning knowledge graphs from a coarse domain (biomedical) to a finer-grained domain (oncology). Its arXiv link can be found here.

September 27th, 2023

Our recent paper, "MedEdit: Model Editing for Medical Question Answering with External Knowledge Bases", is now on arXiv. In this collaborative work with Prof. Ninghao Liu from the University of Georgia, we developed a comprehensive retrieval strategy to extract medical facts from an external knowledge base, and then incorporated them into the query prompt for the LLM. This work demonstrates the potential of model editing to enhance LLM performance, offering a practical approach to mitigate the challenges of black-box LLMs.

September 24th, 2023

Our recent paper, "MediViSTA-SAM: Zero-shot Medical Video Analysis with Spatio-temporal SAM Adaptation", is now on arXiv. As the Segment Anything Model (SAM) has become prominent for its generalization abilities, we developed MediViSTA-SAM for adapting SAM to medical video data. The model can effectively capture both long- and short-range temporal dependency structures through its cross-frame attention mechanism. Furthermore, MediViSTA-SAM utilizes a novel U-shaped encoder and an adapted mask decoder to handle objects of various dimensions. MediViSTA-SAM was tested on echocardiography datasets from multiple vendors/institutions and achieved state-of-the-art performance in segmenting the left ventricle and left atrium from echocardiogram video data.

September 22nd, 2023

Our research on fine-tuning an LLM for radiation oncology, "RadOnc-GPT: A Large Language Model for Radiation Oncology", is now on arXiv. RadOnc-GPT is fine-tuned using an extensive dataset encompassing radiation oncology patient records and clinical observations sourced from Mayo Clinic Arizona. The clinical implications of RadOnc-GPT are investigated on three tasks: generating radiotherapy treatment plans, determining optimal radiation modalities, and providing diagnostic descriptions/ICD codes based on patient diagnostic details. When benchmarked against general-domain large language models, RadOnc-GPT shows superior performance, characterized by enhanced clarity, specificity, and clinical relevance. This work underscores the revolutionary potential of domain-centric language models for healthcare practice that demands specialized expertise.

September 22nd, 2023

Started working as an Affiliate Faculty Member at the Kempner Institute for Natural and Artificial Intelligence of Harvard University.

September 16th, 2023

Our recent paper, "MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation", is now on arXiv. While the Segment Anything Model (SAM) has showcased stellar results in general-domain image segmentation, it needs further adaptation to work on medical images due to its limitation to 2D inputs. Recognizing the importance of the third dimension, either volumetric or temporal, we developed MA-SAM, a modality-agnostic framework tailored for a wide variety of 3D medical imaging modalities, equipped with our efficient parameter fine-tuning strategy and 3D adapter design. We tested MA-SAM on ten diverse datasets spanning 3D CT, 3D MRI, and surgical videos. The model achieves superior performance over major medical segmentation algorithms, including nnU-Net, by notable margins. The code of MA-SAM is accessible on GitHub.

September 10th, 2023

Our paper on analyzing ChatGPT-generated language data and its comparison to human-written language, "Differentiate ChatGPT-generated and Human-written Medical Texts", has been accepted by JMIR Medical Education (JME). In this collaborative work between us, the University of Georgia, and the South China University of Technology, we analyzed the linguistic features of ChatGPT-generated text versus human-written text to uncover differences in vocabulary, part-of-speech, dependency, sentiment, perplexity, etc. We also developed a BERT-based model to detect medical texts generated by ChatGPT with high accuracy. Its arXiv version can be found here.

August 29th, 2023

Our work on fine-tuning LLaMA2 for radiology, "Radiology-Llama2: Best-in-Class Large Language Model for Radiology", is now on arXiv. Radiology-Llama2 is a large language model specialized for radiology, built by fine-tuning on a large dataset of radiology reports and aimed at generating coherent and clinically useful impressions from radiological findings. Quantitative evaluations using ROUGE metrics on the MIMIC-CXR and OpenI datasets demonstrate that Radiology-Llama2 achieves state-of-the-art performance compared to other large language models. Additional assessments by radiology experts highlight the model's strengths in understandability, coherence, relevance, conciseness, and clinical utility. The work illustrates the potential of domain-specific language models designed and tuned for specialized fields in healthcare. When properly evaluated and deployed, such models can transform fields like radiology by automating tasks and providing clinical decision-making support.

August 18th, 2023

Our paper on the fine-tuning and adaptation of LLMs on radiology report data to develop foundation models for radiology, entitled "Tailoring Large Language Models to Radiology: A preliminary approach to LLM adaptation for a highly specialized domain", is accepted by Machine Learning in Medical Imaging (MLMI 2023) for an oral presentation. In this work, we developed a domain-specific large language model (LLM) for radiology by fine-tuning LLaMA on the findings-impression pairs extracted from the radiology reports in the MIMIC dataset. The work demonstrated promising performance and shows potential applications in radiological diagnosis, research, and communication, suggesting a promising direction for tailored medical language models with conversational abilities. Its arXiv version can be found here.

August 18th, 2023

Our paper on view classification for echocardiogram ultrasound imaging, entitled "Multi-task Learning for Hierarchically-Structured Images: Study on Echocardiogram View Classification", has been accepted by the MICCAI Workshop of Special Interest Group on Medical Ultrasound. In this collaborative work with cardiologists and sonographers at Massachusetts General Hospital and Brigham and Women's Hospital, we developed a Multi-task Residual Neural Network (MTRNN) with a hierarchically structured output for echocardiogram view classification, showcasing superior performance. The design of this work can be extended to other data classification scenarios with hierarchical data labels as well.

August 8th, 2023

Our abstract on predicting the early discharge of patients after receiving Transcatheter Aortic Valve Replacement (TAVR), entitled "Machine Learning Model for the Prediction of Early Discharge of Patient Underwent Transcatheter Aortic Valve Replacement Using Electronic Medical Record", has been accepted to be presented at the American Heart Association's annual Scientific Sessions 2023 (AHA 2023). In this collaborative work with cardiologists at Massachusetts General Hospital and Brigham and Women's Hospital, we collected electronic health record (EHR) data from patients who underwent TAVR and trained a machine learning model to predict the early discharge (within two days) of patients, which is critical for accelerating patient recovery and saving costs.
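As a rough illustration of this type of tabular EHR modeling (a toy sketch on synthetic stand-in features, not the actual hospital data, feature set, or model from the study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000
# Synthetic stand-ins for EHR features (e.g., age, labs, comorbidity counts)
X = rng.normal(size=(n, 5))
# Hypothetical rule generating the label: 1 = discharged within two days
logits = -0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)
y = (logits > 0).astype(int)

# Standard supervised setup: hold out a test set, fit, evaluate by AUC
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(round(auc, 2))
```

The real study of course involves far richer clinical features and careful validation; this only shows the general shape of the supervised prediction setup.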

July 7th, 2023

Our abstract on differentiating people with Alzheimer's disease from normal aging by their speech audio and transcripts, entitled "Alzheimer's Disease Prediction through Patients Speech Transcript Using Pre-trained Language Models", has been accepted to be presented at the American Medical Informatics Association (AMIA) 2023 Annual Symposium. This study explores detection methods combining pre-trained language models, Graph Neural Networks (GNNs), text data augmentation with ChatGPT, and contrastive learning for text-audio data fusion. Its arXiv version can be found here.

June 1st, 2023

Promoted to Assistant Professor at the Department of Radiology, Harvard Medical School and Massachusetts General Hospital.

May 6th, 2023

https://mmmi2023.github.io/
This year we will continue organizing the MICCAI workshop on Multiscale Multimodal Medical Imaging (MMMI 2023). If you work on related areas, we look forward to your paper submission. The submission deadline has been extended from July 14th to July 31st, 2023.

March 17th, 2023

Our paper on the development of a novel fiber segmentation method and its application to autism diagnosis, "Accurate corresponding fiber tract segmentation via FiberGeoMap learner with application to autism", is accepted by Cerebral Cortex and available here.

March 14th, 2023

Our paper on the open community bench-testing platform for research on neuron tracing, "BigNeuron: a resource to benchmark and predict performance of algorithms for automated tracing of neurons in light microscopy datasets", is accepted by Nature Methods and available here. This work was initiated and developed by the team led by Dr. Hanchuan Peng, who is a member of my Ph.D. committee.

February 28th, 2023

Our abstract on predicting the clinical outcomes (length of stay and readmission) of aortic stenosis patients after receiving Surgical Aortic Valve Replacement (SAVR) or Transcatheter Aortic Valve Replacement (TAVR), entitled "Machine Learning Model for Aortic Stenosis Patient Outcome Prediction", has been accepted to be presented at the AMIA Clinical Informatics Conference 2023. Risk stratification and patient outcome prediction are helpful for physicians in guiding clinical decision-making and the hospital's resource allocation and patient management. In this work on preoperative aortic stenosis (AS) patient outcome prediction, we showed that by applying machine learning methods to a comprehensive list of patient electronic health record (EHR) data, superior prediction performance could be achieved compared with current regression-based risk score systems.

December 16th, 2022

Our first paper on language processing in the medical domain, "ClinicalRadioBERT: Knowledge-Infused Few Shot Learning for Clinical Notes Named Entity Recognition", is accepted by the International Workshop on Machine Learning in Medical Imaging (MLMI 2022) and available here. In this paper, we proposed the knowledge-infused few-shot learning (KI-FSL) approach to develop the ClinicalRadioBERT model for the task of radiotherapy clinical notes named entity recognition (NER).

October 13th, 2022

The proceedings of MMMI 2022, part of the MICCAI 2022 conference proceedings published in the Lecture Notes in Computer Science (LNCS) book series, are now available for free access via the MMMI 2022 official website here. Note that the full proceedings are only accessible directly from that website and will be available for 4 weeks. The cover of the proceedings can be downloaded here.

July 1st, 2022

We have been awarded a 2-year grant by the National Institutes of Health for our project, "Identification of Multi-modal Imaging Biomarkers for Early Prediction of MCI-AD Conversion via Multigraph Representation" (1R03AG078625-01). Alzheimer's disease (AD) results in cognitive decline and dementia and is a leading cause of mortality in the growing elderly population. As a progressive disease, AD typically has an insidious onset, with clinical features that overlap with the transitional state of Mild Cognitive Impairment (MCI). Analyzing the relationship between MCI and AD with a focus on the converting factors, by co-modeling a wide array of imaging methods, can help us develop a deeper understanding of the disease mechanism, leading to more accurate early diagnosis and the identification of better intervention techniques.

April 5th, 2022

https://mmmi2022.github.io/
This year we will continue organizing the MICCAI workshop on Multiscale Multimodal Medical Imaging (MMMI 2022). If you work on related areas, we look forward to your paper submission. The submission deadline is July 22nd, 2022.

March 15th, 2022

We have been awarded a 4-year grant by the National Institutes of Health for our project, "Deep Learning Based Phenotyping and Treatment Optimization of Heart Failure with Preserved Ejection Fraction" (1R01HL159183-01A1). Heart failure with preserved ejection fraction (HFpEF) is a major public health problem that is rising in prevalence with the aging population. By performing deep phenotyping of patients from their cardiac magnetic resonance images and electronic health records simultaneously, we aim to provide phenotype-specific, individualized treatment optimization based on the current massive amount of clinical data using deep learning.

February 8th, 2022

Our paper on structural connectivity-based parcellation of the cortical surface has been accepted by BME Frontiers. In collaboration with the Laboratory of Mathematics in Imaging at Brigham and Women's Hospital, we developed a Spatial-graph Convolution Parcellation (SGCP) model, utilizing contrastive learning on graphs to parcellate the cortical surface into sub-regions. The preprint version of the paper can be found here.

February 4th, 2022

I have received the 2021 MGH Thrall Innovation Grants Award to fund my research on lung cancer screening using transformed chest X-ray imaging. The project, "Chest Radiographs-based Lung Cancer Screening by the DeepProjection Technique", builds on the DeepProjection technique previously developed by us, which can generate near-real volumetric 3D CT images from a single 2D chest radiograph (CXR). By replacing CT scans with CXR-generated pseudo-CTs for lung cancer screening, we can reduce the radiation dose, scan time, and cost of screening, as well as improve availability for remote healthcare sites. News of this award can be found here, and coverage by the Rad Times is over here.

December 1st, 2021

Our paper "Artificial Intelligence and Machine Learning in Radiology: Opportunities, Challenges, Pitfalls, and Criteria for Success" was recognized among the "Most Cited Articles" of the Journal of the American College of Radiology.

September 27th, 2021

We have the pleasure of hosting the research topic "Multi-Dimensional Characterization of Neuropsychiatric Disorders" in Frontiers in Neuroscience - Brain Imaging Methods. The link to this research topic can be found here. All research related to the identification of multi-modal biomarkers for psychiatric disorders and multi-modal fusion methodologies is welcome! The deadline for the intention-to-submit is December 23rd, 2021.

August 13th, 2021

Our work on multi-hospital federated learning for the combined analysis of chest X-ray images and electronic health record data to predict COVID-19 patients' risk in the emergency department has been accepted by Nature Medicine. Its link can be found here (open access). This work, led by Mass General Brigham and NVIDIA, shows how federated learning enables the creation of robust AI models for healthcare and other industries constrained by confidential and sparse data. Coverage of this work by NVIDIA can be found here.

June 7th, 2021

Our work on the co-modeling of brain iron deposition and gene expression patterns in Alzheimer's disease patients has been accepted by Frontiers in Human Neuroscience. Its link can be found here (open access).

In this work, by combining AD progression-related regions identified from SWI imaging with gene expression data from the Allen Brain Atlas, we observed considerable overlap between the two modalities. Further, we identified a new potential AD-related gene (MEF2C) closely related to the interaction between iron deposition and AD progression in the brain.

March 23rd, 2021

Our work on severe outcome prediction and a clinical risk score (CO-RISK score) system for COVID-19 patient triage at the emergency department is now online at arXiv.

In this study we constructed the "MGB Cohort", a database covering all patients suspected of COVID-19 who presented to the emergency departments at the four hospital sites of the MGB system. A total of 11,060 patients were used in model development and validation, according to our inclusion/exclusion criteria. A deep learning system based on the Deep & Cross Network architecture was developed to predict each patient's outcome within 24/72 hours based on the EHR and imaging (CXR) data up to the initial presentation to the emergency department.

February 1st, 2021

Our work on analyzing chest radiograph of COVID-19 patients using deep metric learning, "Deep metric learning-based image retrieval system for chest radiograph and its clinical applications in COVID-19", has been accepted by Medical Image Analysis. Its link can be found here, its pdf can be downloaded here.

In this work, Aoxiao Zhong, a PhD student at Harvard SEAS, and I developed a deep metric learning model based on a contrastive learning scheme with an attention module, in order to perform chest radiograph image retrieval given a query image from an incoming patient. By doing so, we can diagnose a chest radiograph based on the labels of the retrieved images, and enable physicians to visually compare the query and retrieved images. The whole framework is currently in the preliminary stage of deployment into the clinical workflow of the MGB system.
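To illustrate just the retrieval step of this kind of system (a toy sketch, not the paper's actual attention-based model): once images have been mapped into a learned metric space, retrieval reduces to a nearest-neighbor search by cosine similarity over the embeddings.

```python
import numpy as np

def retrieve_similar(query_emb, gallery_embs, k=3):
    """Return indices of the k gallery embeddings closest to the query
    by cosine similarity (the retrieval step after metric learning)."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                  # cosine similarity to each gallery image
    return np.argsort(-sims)[:k]  # indices of the top-k most similar items

# Toy example: 4 hypothetical gallery embeddings and a query near items 0 and 2
gallery = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]])
query = np.array([1.0, 0.05])
print(retrieve_similar(query, gallery, k=2))  # → [0 2]
```

In the actual system the embeddings come from the trained network, and the labels of the retrieved images (rather than their indices) drive the diagnosis.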

October 30th, 2020

Our work on modeling multi-label objects for image analysis, "Multi-label Detection and Classification of Red Blood Cells in Microscopic Images", has been accepted by The 7th Big Data Analytic Technology For Bioinformatics and Health Informatics Workshop (KDDBHI 2020), its link can be found here, its pdf can be downloaded here.

This work was developed by my mentored trainees, Wei Qiu and Jiaming Guo, who were visiting undergraduate students at MGH. The model aims to solve the multi-label problem (e.g., multiple types of deformed cells) in medical image semantic segmentation.

October 5th, 2020

Our work on using federated learning to stratify COVID-19 patients' risk at the emergency department across multiple hospitals, the EMR-CXR-AI-Model (EXAM) study, was covered by NVIDIA here.

June 23rd, 2020

Our work on deformable U-Net, validated on the RBC segmentation task, "Automated Semantic Segmentation of Red Blood Cells for Sickle Cell Disease", has been accepted by IEEE Journal of Biomedical and Health Informatics, its link can be found here, its pdf can be downloaded here.

This work is an extension of our previous conference presentation at MICCAI 2018, "RBC Semantic Segmentation For Sickle Cell Disease Based on Deformable U-Net".

June 22nd, 2020

Two papers accepted by MICCAI 2020:

"Discovering Functional Brain Networks with 3D Residual Autoencoder (ResAE)", its link can be found here, its pdf can be downloaded here.

and

"Spatiotemporal Attention Autoencoder (STAAE) for ADHD Classification", its link can be found here, its pdf can be downloaded here.

Both works are in collaboration with Dr. Qinglin Dong, a PhD student of Prof. Tianming Liu at the University of Georgia.

April 7th, 2020

Our paper, "ASCNet: Adaptive-Scale Convolutional Neural Networks for Multi-Scale Feature Learning", was awarded 2nd place for the Best Paper Award at ISBI 2020.

Congratulations to Mo Zhang, my mentored PhD student and the first author of this paper!

The announcement video can be found here.

The link to this paper can be found here, its pdf can be downloaded here.

January 17th, 2020

Our work on fMRI signal denoising using a dictionary learning method, "Dictionary Learning and Sparse Coding-based Denoising for High-Resolution Brain Activation and Functional Connectivity Modeling: A Task fMRI Study", has been accepted by IEEE Access.

The link can be found here (open access).

This work is an extension of our previous conference presentation at MLMI 2017, "Dictionary Learning and Sparse Coding-based Denoising for High-Resolution Task Functional Connectivity MRI Analysis". In this collaborative work with Dr. Seongah Jeong, a research fellow at Harvard SEAS, we developed a dictionary learning-based method to denoise fMRI signals, revealing enhanced activation patterns and functional connectivity from the denoised data.
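As a toy sketch of the general dictionary learning/sparse coding idea behind this line of work (synthetic 1-D signals standing in for fMRI time series, not the paper's exact formulation), using scikit-learn:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 64)
# Three hypothetical underlying waveforms (stand-ins for task components)
atoms = np.stack([np.sin(2 * np.pi * f * t) for f in (1, 2, 3)])

# 200 noisy signals, each a random mixture of the atoms plus Gaussian noise
weights = rng.normal(size=(200, 3))
clean = weights @ atoms
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# Learn a small dictionary, then reconstruct each signal from a sparse code
dl = MiniBatchDictionaryLearning(n_components=5, transform_algorithm="omp",
                                 transform_n_nonzero_coefs=3, random_state=0)
codes = dl.fit(noisy).transform(noisy)
denoised = codes @ dl.components_

# Sparse reconstruction suppresses the noise that the dictionary cannot express
mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(mse_denoised < mse_noisy)
```

The real method operates on high-resolution fMRI data and feeds the denoised signals into activation and connectivity analyses; the sketch only shows the denoise-by-sparse-reconstruction principle.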

January 7th, 2020

Our work on adaptive scale deep learning method has been accepted by ISBI 2020:

"ASCNet: Adaptive-Scale Convolutional Neural Networks for Multi-Scale Feature Learning".

Its link can be found here, its pdf can be downloaded here.

December 24th, 2019

Our review paper on functional neuroimaging, "Functional Neuroimaging in the New Era of Big Data", is accepted by Genomics, Proteomics and Bioinformatics. The link can be found here, its pdf can be downloaded here.

This work also summarizes my major perspective on the future trend of functional neuroimaging, proposed in my PhD thesis.

November 5th, 2019

Our paper on PET image analysis using graph convolution network (GCN), entitled "Predicting Alzheimer’s Disease by Hierarchical Graph Convolution from Positron Emission Tomography Imaging", has been accepted by the IEEE BigData 2019 Workshop of Deep Graph Learning (DGLMA'19). Its link can be found here, its pdf can be downloaded here.

This work was developed by my mentored trainees, Wei Qiu and Jiaming Guo, who were visiting undergraduate students at MGH. It is among the first works leveraging graph representation for analyzing medical imaging data.

October 7th, 2019

Our pneumothorax screening paper, "Deep Learning-Enabled System for Rapid Pneumothorax Screening on Chest CT", is covered by The Imaging Wire; the link can be found here.

September 24th, 2019

Our paper on pneumothorax screening based on CT images, "Deep Learning-Enabled System for Rapid Pneumothorax Screening on Chest CT", has been accepted by European Journal of Radiology. Its link can be found here, its pdf can be downloaded here.

In this work we aimed to speed up and re-prioritize the radiology workflow using deep learning, taking advantage of its speed and high rate of positive detection in analyzing medical images such as CT for pneumothorax. We further conducted a large-scale inter-observer study evaluating radiologist performance in pneumothorax diagnosis, and investigated how AI and humans performed differently.

August 20th, 2019

Two of our abstracts have been accepted by American Heart Association 2019 annual meeting:

"Personalized Treatment for Heart Failure With Preserved Ejection Fraction Using Deep Reinforcement Learning" is accepted as an oral presentation, using reinforcement learning to optimize the treatment plans of HFpEF patients. The abstract as it appeared in the Circulation journal can be found here

and "Recurrent Neural Network Enhance Phenotyping in Heart Failure With Preserved Ejection Fraction Using Electronic Health Record" is accepted as a poster presentation, using a recurrent neural network to discover HFpEF phenotypes from patients' longitudinal EHR data. The abstract as it appeared in the Circulation journal can be found here

August 1st, 2019

Promoted to Instructor at Department of Radiology, Harvard Medical School and Massachusetts General Hospital.

June 10th, 2019

https://mmmi2019.github.io/


This year Quanzheng Li and I from MGH/HMS, Richard Leahy from USC, and Bin Dong from PKU are organizing the first MICCAI workshop on Multiscale Multimodal Medical Imaging (MMMI 2019). If you work on related areas, we look forward to your paper submission.
The proceedings of MMMI 2019, part of the MICCAI 2019 conference proceedings published in the Lecture Notes in Computer Science (LNCS) book series, are now available online at Springer. The cover of the proceedings can be downloaded here.

May 27th, 2019

Our paper on pulmonary lymph node metastasis modeling using network inference, entitled "Transition Patterns between N1 and N2 Stations Discovered from Data-driven Lymphatic Metastasis Study in Non-Small Cell Lung Cancer", is accepted by the 2019 World Conference on Lung Cancer for oral presentation.

This work is based on our own network inference modeling method, "PathInf", which is online at arXiv.

The PathInf program developed in this work can infer the most probable transition paths among variables, using only independently observed instances of a transition process with large amounts of missing values. The program was tested on simulation data as well as lung cancer metastasis data on lung lymph nodes. The performance of PathInf was compared with the commonly applied GES (Greedy Equivalence Search) method, showing better recovery accuracy, especially in its capability to reduce the false-positive edges caused by missing values.

May 11th, 2019

Our paper on spatio-temporal modeling of fMRI data, "4D Modeling of fMRI Data via Spatio-Temporal Convolutional Neural Networks (ST-CNN)", is accepted by IEEE Transactions on Cognitive and Developmental Systems. The link can be found here, its pdf can be downloaded here.

This work is an extension of our previous conference presentation at MICCAI 2018, "Modeling 4D fMRI Data via Spatio-Temporal Convolutional Neural Networks (ST-CNN)".

January 2nd, 2019

Our paper on multi-modal image fusing in a deep learning context, "Deep Learning-based Image Segmentation on Multi-modal Medical Imaging", is accepted by IEEE Transactions on Radiation and Plasma Medical Sciences. The link can be found here, its pdf can be downloaded here.

This work is an extension of our previous conference presentation at ISBI 2018, "Medical Image Segmentation Based on Multi-Modal Convolutional Neural Network: Study on Image Fusion Schemes".

December 18th, 2018

Four papers accepted by ISBI 2019:

"Multi-Size Computer-Aided Diagnosis of Positron Emission Tomography Images Using Graph Convolutional Networks"

This was developed by my mentored trainee, Xuandong Zhao, who was a visiting undergraduate student at MGH. It is our first attempt at using graph representations for medical image analysis, trying to tackle the challenge of varying image sizes in deep learning inputs. We found that graph convolutional neural networks, which are based on the concept of non-Euclidean convolution operations, achieve superior performance compared with traditional 3D CNNs. Its link can be found here, its pdf can be downloaded here.

"3D Regional Shape Analysis of Left Ventricle Using MR Images: Abnormal Myocardium Detection and Classification"

This work summarizes our preliminary results in performing 3D shape analysis of cardiac MR images, showing that shape spectral analysis using multilinear principal component analysis can obtain group-wise shape-based image markers for classifying heart failure versus normal controls, as well as different heart failure subtypes. Its link can be found here, its pdf can be downloaded here.

"Novel Radiomic Features Based on Graph Theory for PET Image Analysis"

In this work we developed new graph-based radiomics features, which better capture intratumoral heterogeneity in PET images compared with traditional, image-derived texture features. Its link can be found here, its pdf can be downloaded here.

"Automated Segmentation of Cervical Nuclei in Pap Smear Images using Deformable Multi-path Ensemble Model"

In this collaborative work with PKU, we developed the Deformable Multipath Ensemble Model (D-MEM) based on an ensembling scheme. It achieved state-of-the-art accuracy for nuclei segmentation in Pap smear images for cervical cancer screening. Its link can be found here, its pdf can be downloaded here. Presentation slides can be found here.


August 1st, 2018

Our left ventricle quantification paper is accepted by STACOM workshop of MICCAI 2018:

"Multi-Estimator Full Left Ventricle Quantification through Ensemble Learning", its link can be found here, its pdf can be downloaded here. This work is later included in the paper "Left Ventricle Quantification Challenge: A Comprehensive Comparison and Evaluation of Segmentation and Regression for Mid-ventricular Short-axis Cardiac MR Data" (co-authored), its link can be found here.


June 15th, 2018

Our NeuroImage paper "Spatio-temporal modeling of connectome-scale brain network interactions via time-evolving graphs" is covered by an editorial in Dialogues in Clinical Neuroscience.

The article, "New ways of understanding brain neurocircuitry", can be found here.


May 25th, 2018

Two papers accepted by MICCAI 2018:

"RBC Semantic Segmentation For Sickle Cell Disease Based on Deformable U-Net", its link can be found here. The journal version of this paper can be found here, pdf of the journal paper can be downloaded here.

In this collaborative work with Mo Zhang at Peking University and Mengjia Xu at Northeastern University, we developed a fully automatic framework for simultaneous cell segmentation and classification (semantic segmentation) on microscopic images of red blood cells.

and:

"Modeling 4D fMRI Data via Spatio-Temporal Convolutional Neural Networks (ST-CNN)", a collaborative work with Dr. Yu Zhao, a PhD student of Prof. Tianming Liu at the University of Georgia. Its link can be found here. The journal version of this paper can be found here, pdf of the journal paper can be downloaded here.


May 8th, 2018

Our presentation on pneumothorax detection at the American Roentgen Ray Society (ARRS) 2018 Annual Meeting was covered by auntminnie; the report can be found here.


March 30th, 2018

Our project on pneumothorax detection is among the four finalists of the 2018 NVIDIA Global Impact Award; detailed information can be found here.


February 25th, 2018

Our paper on distributed analytics for fMRI based on rank-1 decomposition and cloud computing, titled "A Distributed Computing Platform for fMRI Big Data Analytics", has been accepted by IEEE Transactions on Big Data; the link can be found here, its pdf can be downloaded here.


February 21st, 2018

Two reviews on our JACR paper can be found at Radiology Business and auntminnie.


February 5th, 2018

Our perspective paper on the impact of artificial intelligence on radiology is now online at the Journal of the American College of Radiology; the link can be found here, its pdf can be downloaded here.

It has been selected as the Continuing Medical Education (CME) material of the month for ACR credentials.


December 22nd, 2017

Our work on multi-modal image fusion analysis has been accepted by ISBI 2018:

"Medical Image Segmentation Based on Multi-Modal Convolutional Neural Network: Study on Image Fusion Schemes".

We discussed three different fusion schemes for performing supervised learning on medical image analysis. This is a preliminary study from us on how to utilize images from different modalities together to make more accurate and robust image-based decisions.

The link to its journal version can be found here, its pdf can be downloaded here.


November 20th, 2017

Our work on using deep learning for pneumothorax detection on chest CT images has been accepted by the American Roentgen Ray Society (ARRS) 2018 Annual Meeting as an oral presentation:

"Deep Learning Algorithm for rapid automatic detection of pneumothorax on chest CT"


November 14th, 2017

Another paper on functional brain dynamics, "Spatio-temporal modeling of connectome-scale brain network interactions via time-evolving graphs", has been accepted by NeuroImage; its link can be found here, pdf can be downloaded here.

In this work we modeled the fMRI signal using dynamic functional networks based on a sliding time window approach, then further analyzed the patterns of the networks using time-evolving graphs.
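For readers unfamiliar with the sliding-window idea, here is a minimal sketch on synthetic data (the window length, step size, and use of Pearson correlation are illustrative assumptions, not the exact settings of the paper):

```python
import numpy as np

def sliding_window_networks(signals, win_len=30, step=5):
    """Compute a sequence of functional connectivity matrices from
    fMRI time series (shape: time x regions) via sliding windows.

    Each window yields a regions-x-regions correlation matrix; the
    resulting sequence can then be studied as a time-evolving graph.
    """
    t, _ = signals.shape
    networks = []
    for start in range(0, t - win_len + 1, step):
        window = signals[start:start + win_len]
        # Pearson correlation between region time courses in this window
        networks.append(np.corrcoef(window, rowvar=False))
    return networks

# Toy example: 200 time points, 10 regions of random data
rng = np.random.default_rng(0)
nets = sliding_window_networks(rng.standard_normal((200, 10)))
print(len(nets), nets[0].shape)  # 35 windows, each a 10 x 10 matrix
```

In practice one would replace the correlation step with whatever network-estimation method the analysis calls for; the sliding-window loop itself stays the same.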


July 18th, 2017

Two papers accepted by MLMI 2017:

"Self-paced Convolutional Neural Network for Computer Aided Detection in Medical Imaging Analysis", its link can be found here, its pdf can be downloaded here.

In this work we proposed a self-paced learning method to generate virtual training samples for supervised learning, particularly to overcome sample size limitations in medical image analysis.

and:

"Dictionary Learning and Sparse Coding-based Denoising for High-Resolution Task Functional Connectivity MRI Analysis", its link can be found here. The journal version of this work can be found here, its pdf can be downloaded here.


May 29th, 2017

Our work on tensor decomposition for sparse and low-rank data is now online at arXiv.

This collaborative work with Songting Shi at Peking University offers an integrated framework for performing PARAFAC tensor decomposition on large-scale data that is both sparse and low-rank.


February 28th, 2017

Our work on using dictionary learning for fMRI signal de-noising has been accepted by OHBM 2017:

"FMRI Signal Denoising by Dictionary Learning for High-Resolution Functional Connectivity Inference", its link can be found here.


January 8th, 2017

Two papers accepted by ISBI 2017:

"Template-guided Functional Network Identification via Supervised Dictionary Learning". Its link can be found here, its pdf can be downloaded here.

This work is based on the r1DL model previously proposed in my KDD 2016 paper. It has shown fast and accurate performance for identifying target functional networks from fMRI signals, given a set of pre-defined templates.

and:

"Exploring Human Brain Activation via Nested Sparse Coding and Functional Operators". Its link can be found here, its pdf can be downloaded here.

This collaborative work with Shu Zhang, a PhD student of Prof. Tianming Liu, proposed a novel framework that characterizes functional networks as the results of "functional operators" inside the brain, offering a brand-new perspective on brain functional decoding.


October 13th, 2016

Invited talk at the International Workshop on Big Data Neuroimaging Analytics for Brain and Mental Health at the 2016 International Conference on Brain Informatics and Health.


September 14th, 2016

Joined Harvard Medical School and Massachusetts General Hospital, as well as the MGH & BWH Center for Clinical Data Science (CCDS), as a postdoctoral research fellow under the mentorship of James H. Thrall, MD and Dr. Quanzheng Li.


June 2nd, 2016

Two papers accepted by MICCAI 2016:

"Modeling Functional Dynamics of Cortical Gyri and Sulci". Its link can be found here, its pdf can be downloaded here.

This work is our preliminary attempt at investigating the differences in functional dynamics between gyral and sulcal areas, which are believed to play different roles within the brain's functional architecture.

"Discover Mouse Gene Coexpression Landscape Using Dictionary Learning and Sparse Coding"

This work by Yujie Li applied a sparse coding method to mouse brain gene expression data and obtained a surprisingly clear-edged map of mouse brain regions. Its link can be found here. The journal version of this work can be found here, pdf can be downloaded here.


May 12th, 2016

Awarded the Outstanding Graduate Dissertation/Thesis award! Thanks to the Computer Science Department for the recognition and to my advisor, Prof. Tianming Liu, for the mentorship!


May 11th, 2016

Paper accepted by ACM SigKDD 2016:

"Scalable Fast Rank-1 Dictionary Learning for fMRI Big Data Analysis". Its link can be found here, pdf can be downloaded here.

This collaborative work across our group, Dr. Shannon Quinn, and Dr. Jieping Ye features our solution for fMRI big data analysis, leveraging the distributed computing power of Spark in Python.


March 17th, 2016

Talk at the SIAM-SEAS session "Parallel and distributed computing for biomedical imaging". The presentation slides can be found here.


December 23rd, 2015

Three papers accepted by ISBI 2016:

"Modeling Functional Network Dynamics Via Multi-Scale Dictionary Learning and Network Continuums" (with Dr. Jieping Ye)

This work marks the development of the "network continuum", a novel concept characterizing the continuous/disruptive dynamics of functional networks. The model reveals the ever-changing spatial patterns of the same networks over time.

"Multiple-Demand System Identification and Characterization Via Sparse Representations of fMRI Data"

This work investigates the well-known multiple-demand system (MDS) through a network decomposition approach. It is in line with our previous Human Brain Mapping paper "Sparse representation of HCP grayordinate data reveals novel functional architecture of cerebral cortex", which focused on the MDS on grayordinates (i.e., the cortical surface).

"Identifying Group-Wise Consistent Sub-Networks Via Spatial Sparse Representation of Natural Stimulus fMRI Data" (collaboration work with Cheng Lyu)

This work offers a useful tool for analyzing brain networks based on their spatial distributions, including sub-network identification and calculating similarities among different networks.


October 14th, 2015

"Sparse representation of HCP grayordinate data reveals novel functional architecture of cerebral cortex" (collaborative work with Xi Jiang) has been accepted by Human Brain Mapping.


July 20th, 2015

Started a visit at Prof. Jieping Ye's group at the University of Michigan.


April 1st, 2015

Our article, "Holistic Atlases of Functional Networks and Interactions Reveal Reciprocal Organizational Architecture of Cortical Function", has been selected as the feature story and cover of IEEE Transactions on Biomedical Engineering.


March 17th, 2015

Guest lecture for UGA CSCI 6900: Mining Massive Datasets (at Dr. Shannon Quinn's invitation); the presentation slides can be found here.


February 13th, 2015

Thanks to the Franklin Foundation for providing the travel award supporting my attendance at ISBI 2015!


February 5th, 2015

Two papers accepted by ISBI 2015:
"Interactive Exemplar-Based Segmentation Toolkit for Biomedical Image Analysis" (with Dr. Hanchuan Peng)
and
"Characterizing and Differentiating Task-Based and Resting State FMRI Signals Via Two-Stage Dictionary Learning"


February 3rd, 2015

"Characterizing and Differentiating Task-based and Resting State FMRI Signals via Two-stage Sparse Representations" (collaborative work with Shu Zhang) has been accepted by Brain Imaging and Behavior.


September 29th, 2014

Started a three-month visit at the Allen Institute for Brain Science, with Zhi Zhou, Brain Long, Hanbo Chen (also from UGA), and Hanchuan Peng.


September 29th, 2014

"Characterizing and Differentiating Brain State Dynamics via Hidden Markov Models" (collaborative work with Dr. Jinli Ou and Prof. Leo Xie at ZJU) has been accepted by Brain Topography.


May 13th, 2014

Oral presentation at ISBI 2014, on the topics of Bayesian network-based change point detection and sparse representation of functional networks using DICCCOL.


April 15th, 2013

Nominated for the best student paper award at ISBI 2013 for my work "Discovering Common Functional Connectomics Signatures". The oral presentation video can be found here.

I am an Assistant Professor at Massachusetts General Hospital and Harvard Medical School, Department of Radiology, and an Affiliate Faculty Member at the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University. I received my postdoctoral training at Massachusetts General Hospital under the co-mentorship of Associate Prof. Quanzheng Li and Distinguished Professor James H. Thrall, a member of the National Academy of Medicine. I received my PhD in Computer Science from the University of Georgia in 2016, supervised by Distinguished Professor Tianming Liu, an AIMBE Fellow. My primary research interest is medical data analysis, with a focus on clinical data streamlining, big data frameworks, multi-modal multi-scale image fusion, and foundation models.

Services (selected):

  • Associate Editor: Frontiers in Oncology, Frontiers in Radiology, Frontiers in Neuroscience, Frontiers in Cardiovascular Medicine, Meta-Radiology, BMC Biomedical Engineering, IEEE Transactions on Artificial Intelligence
  • Committee Service: Founding Chair of the International Workshop on Multiscale Multimodal Medical Imaging (MMMI), Area Chair of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Program Committee of ACM SIGKDD Workshop on Mining and Learning from Time Series (MILETS), International Workshop on Medical Image Learning with Noisy and Limited Data (MILLanD), International Conference on Brain Informatics (BI), and Machine Learning in Medical Imaging (MLMI)
  • Publications: 100+ papers with h-index of 33, see my Google Scholar page for more information.

    Reach me at:

    25 New Chardon St, 449A, Boston, MA, 02114
    xiangli.shaun{at}gmail.com

    My CV

    My MGH Researcher page

    My Google Scholar page

    My Scopus author page

    My dblp page

    My arXiv author id page

    My ORCID page

orcid.org/0000-0002-9851-6376

Homepage of the Center for Advanced Medical Computing and Analysis (CAMCA) at HMS/MGH, where I currently work.

Homepage of the Cortical Architecture Imaging and Discovery (CAID) lab, which I previously joined at the UGA Computer Science Department.

Homepage of the 3D visualization platform I previously worked on, the Vaa3D system by Dr. Hanchuan Peng.