Chenxin Li

I am a Ph.D. student at The Chinese University of Hong Kong (CUHK), advised by Prof. Yixuan Yuan. I received my M.Eng. degree from Xiamen University, advised by Prof. Xinghao Ding and Prof. Yue Huang. Prior to that, I earned my B.Eng. degree from the Department of Information Engineering, Xiamen University.

The information in the real world is far richer than what a few images can depict. I am dedicated to developing accurate, efficient, and reliable CV/ML algorithms for multi-modal and 3D vision. My long-term research goal is to build real-world intelligent machines and to contribute to scientific innovation in areas such as biomedicine.

Email  /  CV  /  Google Scholar  /  Github

profile photo
Latest News

  • [11/2023] One paper accepted to 3DV 2024.
  • [07/2023] Invited talk at AIxMed Seminar, Massachusetts General Hospital and Harvard Medical School.
  • [07/2023] One paper accepted to ICCV 2023.
Featured Publications
StegaNeRF: Embedding Invisible Information within Neural Radiance Fields
Chenxin Li*, Brandon Y. Feng*, Zhiwen Fan*, Panwang Pan, Zhangyang Wang (* Equal Contribution)
International Conference on Computer Vision (ICCV), 2023
[Project] [ArXiv] [Video] [Code]

NeRF with multi-modal IP information instillation

Knowledge Condensation Distillation
Chenxin Li, Mingbao Lin, Zhiyuan Ding, Nie Lin, Yihong Zhuang, Xinghao Ding, Yue Huang, Liujuan Cao
European Conference on Computer Vision (ECCV), 2022
[PDF] [Supp] [ArXiv] [Code]

Co-design of dataset and model distillation

Generator Versus Segmentor: Pseudo-healthy Synthesis
Yunlong Zhang*, Chenxin Li*, Xin Lin, Liyan Sun, Yihong Zhuang, Xinghao Ding, Yue Huang, Yizhou Yu (* Equal Contribution)
International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2021
[PDF] [ArXiv] [Code]

Generative AI for lesion-centric synthesis


Other Publications
Projects

    Embedding Information within Neural Radiance Fields                     
        Fig. 1: Rendered views · Fig. 2: Residual error (×5) · Fig. 3: Residual error (×25) · Fig. 4: Recovered customized images

    Recent advances in Neural Radiance Fields (NeRF) suggest a future in which visual data are widely distributed by sharing NeRF model weights. StegaNeRF is an initial exploration of the novel problem of instilling customizable, imperceptible, and recoverable information into NeRF renderings, with minimal impact on the rendered images. We hope this work draws attention to intellectual-property concerns around INR/NeRF.
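The core idea above can be sketched as a two-term objective: keep renderings faithful to the scene while a detector recovers the hidden image from them. This is only an illustrative toy, not the StegaNeRF implementation; the weights and the `recovered` tensor (assumed to come from a hypothetical detector network) are assumptions.

```python
import numpy as np

def stega_loss(rendered, target, recovered, hidden, w_render=1.0, w_hide=0.1):
    """Toy sketch of a joint steganographic-NeRF objective:
    balance rendering fidelity against recovery of the hidden image."""
    render_term = np.mean((rendered - target) ** 2)   # keep renderings faithful
    hide_term = np.mean((recovered - hidden) ** 2)    # recover hidden information
    return w_render * render_term + w_hide * hide_term
```

In practice the two terms pull against each other, which is why the residual errors in the figures above only become visible under large amplification.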


    Efficient Knowledge Distillation Algorithms

      Fig. 1: Knowledge Condensation Distillation · Fig. 2: Relation of condensed knowledge · Fig. 3: Hint-Dynamic Distillation

    Knowledge distillation (KD) plays a key role in developing lightweight deep networks by transferring the dark knowledge of a high-capacity teacher network to strengthen a smaller student. In KCD (ECCV'22), we explore an efficient distillation framework that co-designs model distillation and knowledge condensation, dynamically identifying and summarizing the informative knowledge points into a compact knowledge set throughout knowledge transfer.
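The condensation step above can be illustrated by ranking the teacher's knowledge points by an informativeness score and keeping only the top fraction. Using predictive entropy as the score is an assumption for illustration, not the paper's exact criterion.

```python
import numpy as np

def condense_knowledge(teacher_logits, keep_ratio=0.5):
    """Toy sketch of knowledge condensation: score each teacher output by
    its entropy (illustrative choice) and keep the most informative fraction
    as the condensed knowledge set."""
    # softmax over classes, numerically stabilized
    p = np.exp(teacher_logits - teacher_logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    k = max(1, int(keep_ratio * len(entropy)))
    keep = np.argsort(-entropy)[:k]          # indices of most informative points
    return np.sort(keep)
```

The student then distills only from this compact set, which is where the efficiency gain comes from.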

    In HKD, we investigate how the guidance provided by the teacher's knowledge varies across instances and learning stages. Existing methods handle these knowledge hints with a fixed learning scheme; in contrast, we leverage meta-learning to customize a specific distillation scheme for each instance adaptively and dynamically.
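The per-instance idea can be sketched as scaling each sample's distillation term by its own weight, in place of one global coefficient. The weights here stand in for the meta-learned values; how HKD actually produces them is not shown.

```python
import numpy as np

def weighted_distill_loss(student_logits, teacher_logits, weights):
    """Toy sketch of instance-adaptive distillation: a per-sample weight
    (hypothetically meta-learned) scales each KL term."""
    def softmax(x):
        e = np.exp(x - x.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)
    p_t, p_s = softmax(teacher_logits), softmax(student_logits)
    # per-instance KL divergence from teacher to student
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=1)
    return float((weights * kl).mean())
```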


    Data-Efficient Learning for Medical Imaging Analysis

        Fig. 1: GVS for pseudo-healthy synthesis · Fig. 2: Uncertainty-aware self-training · Fig. 3: Enhanced features via GCN-DE

    Pseudo-Healthy Synthesis: As a variant of style transfer, synthesizing a healthy counterpart of lesion regions is an important problem in clinical practice. In GVS (MICCAI'21), we achieve more accurate lesion attribution by constructing an adversarial learning framework between the pseudo-healthy generator and a lesion segmentor.
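The adversarial game above can be sketched with two loss terms: the segmentor learns to find lesions in the synthesized image, while the generator tries to leave none for it to find and to preserve healthy regions. Loss forms and weights here are illustrative assumptions, not the GVS objective verbatim.

```python
import numpy as np

def gvs_losses(seg_pred_on_synth, lesion_mask, synth, image):
    """Toy sketch of the generator-versus-segmentor game.
    seg_pred_on_synth: segmentor's lesion probabilities on the synthesized image."""
    eps = 1e-12
    # segmentor objective: binary cross-entropy against the true lesion mask
    seg_loss = -np.mean(lesion_mask * np.log(seg_pred_on_synth + eps)
                        + (1 - lesion_mask) * np.log(1 - seg_pred_on_synth + eps))
    # generator objective: make the segmentor predict "healthy" everywhere,
    # while reconstructing the input outside the lesion mask
    adv_loss = -np.mean(np.log(1 - seg_pred_on_synth + eps))
    rec_loss = np.mean(((synth - image) * (1 - lesion_mask)) ** 2)
    return seg_loss, adv_loss + rec_loss
```

Alternating updates on these two objectives drive the generator toward removing exactly the lesion regions, which is the "more accurate lesion attribution" referred to above.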

    Domain Adaptation/Generalization: Generalizing deep models trained on one data source to other datasets is an essential issue in practical medical imaging analysis. We present a domain-adaptive approach built on a self-supervised strategy called Vessel-Mixing (ICIP'21), driven by the geometric characteristics of retinal vessels. We also address domain generalization in medical imaging via Task-Aug (CBM'21): we investigate the neglected issue of task over-fitting, in which the meta-learning framework over-fits to the simulated meta-tasks, and present a task augmentation strategy.

    Semi-Supervised Learning: Existing semi-supervised methods mainly exploit unlabeled data via a self-labeling strategy. In UAST (NCA'21), we decouple the unreliable coupling between decision-boundary learning and pseudo-label evaluation, and instead adopt an uncertainty-aware self-training paradigm that models the accuracy of pseudo-labels via uncertainty estimation.
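The uncertainty-aware idea can be sketched as follows: average several stochastic forward passes (e.g. MC dropout, assumed here) into a pseudo-label, and down-weight samples whose predictions disagree across passes instead of trusting every pseudo-label equally. This is a generic sketch, not the exact UAST formulation.

```python
import numpy as np

def uncertainty_weighted_pseudo_loss(probs_mc):
    """Toy sketch of uncertainty-aware self-training.
    probs_mc: (K, N, C) class probabilities from K stochastic passes."""
    mean_p = probs_mc.mean(axis=0)               # (N, C) averaged prediction
    var = probs_mc.var(axis=0).mean(axis=1)      # (N,) per-sample uncertainty
    pseudo = mean_p.argmax(axis=1)               # pseudo-labels
    nll = -np.log(mean_p[np.arange(len(pseudo)), pseudo] + 1e-12)
    weights = 1.0 / (1.0 + var)                  # low-uncertainty samples count more
    return float((weights * nll).mean()), pseudo
```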

    Few-shot Learning: Existing few-shot segmentation methods tend to fail on incongruous foreground regions between support and query images. We present GCN-DE (CBM'21), a few-shot method that leverages global correlation capture and discriminative embedding to address this issue.

Professional Activities

Conference Reviewer

    ICLR'24, AAAI'24 (LLM Special Track), NeurIPS'23, CVPR'23/24, ICCV'23, ACM MM'23, MICCAI'23

Journal Reviewer

    Pattern Recognition (PR), Transactions on Neural Networks and Learning Systems (TNNLS), Neural Computing and Applications (NCAA)


Modified from Jon Barron