I am a Ph.D. student at
The Chinese University of Hong Kong (CUHK), advised by Prof.
Yixuan Yuan.
I received my M.Eng. degree from Xiamen University, advised by Prof. Xinghao Ding and Prof. Yue Huang.
Prior to that, I earned my B.Eng. degree from the Department of Information Engineering, Xiamen University.
Evidently, the information in the real world is worth far more than what a few images can depict.
I am dedicated to developing accurate, efficient, and reliable CV/ML algorithms for multi-modal and 3D vision.
My long-term research goal is to foster real-world intelligent machines and to contribute to scientific innovation in fields such as biomedicine.
Recent advances in Neural Radiance Fields (NeRF) suggest a future in which visual data is widely distributed by sharing NeRF model weights.
In StegaNeRF,
we present an initial exploration of the novel problem of embedding customizable, imperceptible, and recoverable information into NeRF renderings, with minimal impact on the rendered images.
We sincerely hope this work raises awareness of the intellectual property concerns around INR/NeRF models.
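For intuition, the core idea can be viewed as a two-term objective: keep the rendering close to the original while a decoder network recovers the embedded bits. The following PyTorch snippet is only a minimal sketch of this trade-off; the decoder architecture, tensor names, and loss weight are illustrative assumptions, not the actual StegaNeRF implementation.

```python
import torch
import torch.nn as nn

class StegaDecoder(nn.Module):
    """Toy CNN that tries to recover an embedded bit string from a rendering."""
    def __init__(self, num_bits=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_bits),
        )

    def forward(self, image):
        return self.net(image)  # logits over the hidden bits

decoder = StegaDecoder()
rendered = torch.rand(1, 3, 128, 128, requires_grad=True)  # stands in for a NeRF rendering
target = torch.rand(1, 3, 128, 128)                        # the "clean" rendering to stay close to
message = torch.randint(0, 2, (1, 64)).float()             # hidden bits to embed

# Fidelity term keeps the stego rendering visually unchanged;
# recovery term makes the hidden bits decodable from the rendering.
fidelity = nn.functional.mse_loss(rendered, target)
recovery = nn.functional.binary_cross_entropy_with_logits(decoder(rendered), message)
loss = fidelity + 0.1 * recovery  # the trade-off weight is an assumption
loss.backward()
```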
Knowledge distillation (KD) plays a key role in developing lightweight deep networks by transferring dark knowledge from a high-capacity teacher network to strengthen a smaller student network.
In
KCD (ECCV'22),
we explore an efficient knowledge distillation framework by co-designing model distillation and knowledge condensation,
which dynamically identifies and summarizes informative knowledge points into a compact knowledge set throughout the knowledge transfer.
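As a rough illustration of knowledge condensation (not the official KCD code), one can score the teacher's outputs by an informativeness measure and distill only on a compact subset; the entropy-based score and the keep ratio below are assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F

def condense_knowledge(teacher_logits, keep_ratio=0.5, temperature=4.0):
    """Return indices of the most informative teacher outputs (entropy-ranked)."""
    probs = F.softmax(teacher_logits / temperature, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    k = max(1, int(keep_ratio * teacher_logits.size(0)))
    return entropy.topk(k).indices

def distill_loss(student_logits, teacher_logits, keep_idx, temperature=4.0):
    """KL distillation computed only on the condensed knowledge set."""
    s = F.log_softmax(student_logits[keep_idx] / temperature, dim=1)
    t = F.softmax(teacher_logits[keep_idx] / temperature, dim=1)
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

teacher_logits = torch.randn(32, 10)                      # teacher outputs for a batch
student_logits = torch.randn(32, 10, requires_grad=True)  # student outputs
keep_idx = condense_knowledge(teacher_logits)
loss = distill_loss(student_logits, teacher_logits, keep_idx)
loss.backward()
```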
In HKD,
we investigate how the guidance provided by the teacher's knowledge varies across instances and learning stages.
The existing literature handles these knowledge hints with a fixed learning scheme.
In contrast, we leverage the merits of meta-learning to customize the distillation scheme for each instance adaptively and dynamically.
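A minimal sketch of instance-adaptive distillation is given below; the small weighting network and the way it consumes teacher probabilities are hypothetical stand-ins for HKD's meta-learned components, and in practice such a network would be updated with a meta objective rather than jointly with the task loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceWeightNet(nn.Module):
    """Maps per-instance signals (here, teacher probabilities) to a KD weight."""
    def __init__(self, in_dim=10):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, teacher_probs):
        return torch.sigmoid(self.fc(teacher_probs)).squeeze(-1)  # weight in (0, 1)

weight_net = InstanceWeightNet()
teacher_logits = torch.randn(32, 10)
student_logits = torch.randn(32, 10, requires_grad=True)
T = 4.0

t = F.softmax(teacher_logits / T, dim=1)
s = F.log_softmax(student_logits / T, dim=1)
per_instance_kd = F.kl_div(s, t, reduction="none").sum(dim=1)  # KD loss per sample
weights = weight_net(t)                                        # learned, instance-specific weights
loss = (weights * per_instance_kd).mean() * T ** 2
loss.backward()
```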
Data-Efficient Learning for Medical Imaging Analysis
Pseudo-Healthy Synthesis: As a variant of the style-transfer task, synthesizing a healthy counterpart from lesion regions is an important problem in clinical practice.
In GVS (MICCAI'21), we achieve more accurate lesion attribution by constructing an adversarial learning framework between the pseudo-healthy generator and a lesion segmentor.
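The adversarial interplay can be sketched roughly as follows: the segmentor learns to localize lesions, while the generator tries to synthesize an image in which the segmentor finds no lesion yet the healthy regions stay unchanged. The tiny module definitions and loss weights below are placeholders, not the actual GVS losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Pseudo-healthy generator and lesion segmentor (tiny stand-in networks).
generator = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1))
segmentor = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1))

image = torch.rand(2, 1, 64, 64)                        # pathological input
lesion_mask = (torch.rand(2, 1, 64, 64) > 0.9).float()  # ground-truth lesion map

# Segmentor step: learn accurate lesion attribution on the real image.
seg_loss = F.binary_cross_entropy_with_logits(segmentor(image), lesion_mask)

# Generator step: the synthesized image should contain no detectable lesion
# (adversarial term) while healthy regions stay faithful to the input.
fake = generator(image)
adv_loss = F.binary_cross_entropy_with_logits(segmentor(fake), torch.zeros_like(lesion_mask))
recon_loss = (F.l1_loss(fake, image, reduction="none") * (1 - lesion_mask)).mean()
gen_loss = recon_loss + 0.1 * adv_loss  # the weighting is an assumption
```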
Domain Adaptation/Generalization:
Generalizing deep models trained on one data source to other datasets is an essential issue in practical medical image analysis.
We present a domain-adaptive approach that leverages a self-supervised strategy called Vessel-Mixing (ICIP'21),
which is driven by the geometric characteristics of retinal vessels.
We also attempt to address the domain generalization problem in medical imaging via Task-Aug (CBM'21). We investigate a neglected issue, task over-fitting, in which the meta-learning framework over-fits to the simulated meta-tasks, and present a task augmentation strategy to alleviate it.
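To give a flavour of mixing-based self-supervision for domain adaptation, the snippet below blends source- and target-domain images with a binary mask and enforces prediction consistency; the random mask and the consistency loss are generic assumptions for illustration, whereas Vessel-Mixing derives its mixing from retinal vessel geometry.

```python
import torch
import torch.nn.functional as F

def mix_images(src, tgt, mask):
    """Combine a source and a target image region-wise with a binary mask."""
    return mask * src + (1 - mask) * tgt

src = torch.rand(2, 3, 64, 64)                    # labeled source-domain images
tgt = torch.rand(2, 3, 64, 64)                    # unlabeled target-domain images
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()   # placeholder mask; Vessel-Mixing uses vessel structure

mixed = mix_images(src, tgt, mask)
model = torch.nn.Conv2d(3, 2, 3, padding=1)       # stand-in segmentation network

# Assumed consistency objective: predictions on the mixed image should agree
# with the mask-blended predictions on the original images.
with torch.no_grad():
    blended_target = mix_images(model(src), model(tgt), mask)
consistency = F.mse_loss(model(mixed), blended_target)
```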
Semi-Supervised Learning:
Existing semi-supervised methods mainly exploit unlabeled data via a self-labeling strategy.
In UAST (NCA'21),
we propose to decouple the unreliable coupling between decision-boundary learning and pseudo-label evaluation.
We instead leverage an uncertainty-aware self-training paradigm that estimates the reliability of pseudo-labels via uncertainty modeling.
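A simplified sketch of the uncertainty-aware self-training step is shown below; the entropy-based uncertainty estimate and the exponential weighting are assumptions used for illustration rather than the exact formulation in UAST.

```python
import torch
import torch.nn.functional as F

def pseudo_labels_with_uncertainty(logits):
    """Produce hard pseudo-labels and a per-sample uncertainty score (entropy)."""
    probs = F.softmax(logits, dim=1)
    labels = probs.argmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    return labels, entropy

unlabeled_logits = torch.randn(16, 5)              # current predictions on unlabeled data
labels, uncertainty = pseudo_labels_with_uncertainty(unlabeled_logits)

# Convert uncertainty to a per-sample weight: confident pseudo-labels count more.
weights = torch.exp(-uncertainty)
student_logits = torch.randn(16, 5, requires_grad=True)
per_sample_ce = F.cross_entropy(student_logits, labels, reduction="none")
loss = (weights * per_sample_ce).mean()
loss.backward()
```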
Few-shot Learning:
Existing few-shot segmentation methods tend to fail when the foreground regions of the support and query images are incongruous.
We present a few-shot learning method, GCN-DE (CBM'21), which leverages global correlation capture and a discriminative embedding to address this issue.