Cross-Modality Image Synthesis via Weakly-Coupled and Geometry Co-Regularized Joint Dictionary Learning
Multi-modality medical imaging is increasingly used for the comprehensive assessment of complex diseases, both in diagnostic examinations and in medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always feasible due to adverse factors such as patient discomfort, increased cost, prolonged scanning time, and scanner unavailability. In addition, incomplete records are not uncommon in large imaging studies, owing to image artifacts, data corruption, or data loss, which compromises the potential of multi-modal acquisitions. In this paper, we propose a Weakly-coupled And Geometry co-regularized (WAG) joint dictionary learning method to address the problem of cross-modality synthesis, while accounting for the fact that collecting large amounts of training data is often impractical. Our learning stage requires only a few registered multi-modality image pairs as training data. To exploit both the paired images and a large set of unpaired data, a cross-modality image matching criterion is proposed. We then propose a unified model that integrates this criterion into joint dictionary learning together with the observed common feature space, so as to associate cross-modality data for the purpose of synthesis. Furthermore, two regularization terms are added to construct robust sparse representations. Our experimental results demonstrate the superior performance of the proposed model over state-of-the-art methods.
Dictionary Learning, Sparse Representation, Image Synthesis, Domain Adaptation, Manifold Learning, MRI.
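To make the abstract's ingredients concrete, a generic objective of this kind can be sketched as follows. This is a hedged illustration, not the paper's exact formulation: the symbols $X_s, X_t$ (source/target patch matrices), $D_s, D_t$ (per-modality dictionaries), $A_s, A_t$ (sparse codes), the paired-column restriction $A^{(p)}$, the graph Laplacians $L_s, L_t$, and the weights $\lambda, \gamma, \eta$ are all assumed notation.

```latex
% Sketch of a weakly-coupled, geometry co-regularized joint dictionary
% learning objective (illustrative; notation assumed, not from the paper):
\min_{D_s, D_t, A_s, A_t}\;
  \underbrace{\|X_s - D_s A_s\|_F^2 + \|X_t - D_t A_t\|_F^2}_{\text{per-modality reconstruction}}
  + \lambda \bigl(\|A_s\|_1 + \|A_t\|_1\bigr)
  % weak coupling: only the few registered pairs must share codes
  + \gamma \,\bigl\|A_s^{(p)} - A_t^{(p)}\bigr\|_F^2
  % geometry co-regularization: codes respect each modality's data manifold
  + \eta \,\Bigl(\operatorname{Tr}\!\bigl(A_s L_s A_s^{\top}\bigr)
               + \operatorname{Tr}\!\bigl(A_t L_t A_t^{\top}\bigr)\Bigr)
```

The weak coupling term constrains only the paired columns $A^{(p)}$, which is what lets a large pool of unpaired data contribute through the reconstruction and manifold terms alone; at synthesis time, a source patch is sparsely coded over $D_s$ and the code is decoded with $D_t$.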