CN113554728B - Semi-supervised-based multi-sequence 3T to 7T magnetic resonance image generation method - Google Patents

Semi-supervised-based multi-sequence 3T to 7T magnetic resonance image generation method

Info

Publication number
CN113554728B
CN113554728B (application CN202110687898.4A)
Authority
CN
China
Prior art keywords
magnetic resonance
image
sequence
resonance image
input
Prior art date
Legal status
Active
Application number
CN202110687898.4A
Other languages
Chinese (zh)
Other versions
CN113554728A (en)
Inventor
Xiahai Zhuang (庄吓海)
Shangqi Gao (高尚奇)
Hangqi Zhou (周杭琪)
Jianhua Jin (靳建华)
Current Assignee
Fudan University
Original Assignee
Fudan University
Priority date
Filing date
Publication date
Application filed by Fudan University
Priority to CN202110687898.4A
Publication of CN113554728A
Application granted
Publication of CN113554728B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155 Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30016 Brain

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a semi-supervised method for generating single-sequence 7T magnetic resonance images from multi-sequence 3T magnetic resonance images. It includes: acquiring 3T and 7T magnetic resonance images to generate training samples; a generator G1 built from a neural network generates multi-sequence 3T magnetic resonance images from the 7T magnetic resonance image; a feature fusion module built from a neural network extracts and fuses features of the 3T magnetic resonance images to obtain a 7T guide image carrying structure information and high-frequency details; a generator G2 built from a neural network takes a 3T magnetic resonance image and the obtained guide image as input and generates a 7T magnetic resonance image of the same sequence; the network is trained with an optimizer on a composite loss function; a single-sequence 7T image is then generated quickly by the trained generator G2. By exploiting multi-sequence 3T magnetic resonance images, the invention synthesizes the high-frequency details of the corresponding 7T magnetic resonance image more faithfully, is highly robust, generalizes well, and is easy to train and implement.

Description

Semi-supervised-based multi-sequence 3T to 7T magnetic resonance image generation method
Technical Field
The invention belongs to the technical field of medical imaging, and particularly relates to a 7T magnetic resonance image reconstruction method.
Background
Compared with conventional 3T magnetic resonance images, images produced by a 7T magnetic resonance scanner have a higher signal-to-noise ratio and finer anatomical detail, which helps improve medical diagnosis and prognosis. However, 7T magnetic resonance scanners are expensive and cannot be deployed as widely in medical institutions as 3T scanners. Generating a corresponding 7T magnetic resonance image from a 3T magnetic resonance image is therefore of great significance for clinical research and application. To learn the mapping from 3T images to 7T images, traditional non-deep-learning methods use random forests or sparse learning to generate images satisfying regularization conditions, while deep-learning-based methods use a convolutional neural network to extract 3T image features, fit a nonlinear mapping into the 7T image space, and finally generate the corresponding 7T magnetic resonance image from those features. Because acquiring a large number of paired 3T-7T magnetic resonance images for training is very difficult, some methods adopt a cycle-consistent adversarial training framework (CycleGAN) to fully exploit unpaired 3T-7T data. Besides mapping and constraints in the image domain, constraints and transforms in the frequency domain are also common in deep-learning-based 7T generation methods, such as minimizing the wavelet-component distance between the generated 7T image and the real 7T image as a training loss. These improvements increase the generative quality of deep-learning approaches.
In practical applications, current deep-learning-based 7T image generation methods still face two challenges:
(1) 7T images contain abundant anatomical detail, and it is difficult to capture enough detail from a single sequence of 3T images alone to produce a 7T magnetic resonance image of high visual quality.
(2) Results produced by adversarial training are prone to introducing confounding details that do not match the diagnostic information carried by the input 3T image.
A survey of the existing literature shows that 3T magnetic resonance images of different sequences contain different tissue-structure details. Since multi-sequence 3T magnetic resonance images are easy to obtain, using them to generate a single-sequence 7T magnetic resonance image captures the anatomical details of the 7T image more fully and effectively improves the visual quality of the generated image; at the same time, the multi-sequence 3T magnetic resonance images provide a more comprehensive structural reference for the generated 7T magnetic resonance image.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a multi-sequence 3T to 7T magnetic resonance image generation method based on semi-supervision.
The invention provides a method for generating a multi-sequence 3T to 7T magnetic resonance image based on semi-supervision, which comprises the following specific steps:
(1) acquiring partial paired multi-sequence 3T magnetic resonance images and single-sequence 7T magnetic resonance images to generate training samples;
(2) a generator G1 constructed from a convolutional neural network generates a multi-sequence 3T magnetic resonance image from the single-sequence 7T magnetic resonance image. G1 is a cascade of convolution residual blocks containing spatial feature transform layers (SFT layers) and deconvolution layers for upsampling; its input is the 7T magnetic resonance image at multiple scales. The smallest-scale input produces the target image through the convolution residual blocks, while the images at the other scales pass through the spatial transform layers to produce scale and shift parameters that correct the output of the preceding convolution residual block;
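The scale-and-shift correction that the spatial feature transform layers apply to the residual-block features can be sketched in a few lines. This is an illustrative pure-Python toy, not the patent's implementation; the function name, the 2x2 feature map, and the parameter values are all assumptions.

```python
def sft_modulate(features, scale, shift):
    """Spatial feature transform: element-wise scale-and-shift of a
    feature map by parameters derived from the conditioning branch."""
    return [[f * g + b for f, g, b in zip(frow, grow, brow)]
            for frow, grow, brow in zip(features, scale, shift)]

# Toy 2x2 "feature map"; in the method above, scale and shift come from
# the other-scale inputs passed through the spatial transform layers.
feat  = [[1.0, 2.0], [3.0, 4.0]]
gamma = [[0.5, 0.5], [2.0, 2.0]]
beta  = [[0.1, 0.1], [0.0, 0.0]]
out = sft_modulate(feat, gamma, beta)  # each entry becomes f*gamma + beta
```

In a real generator the same modulation is applied per feature channel inside each residual block, with gamma and beta predicted by small convolutions.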
(3) features of the multi-sequence 3T magnetic resonance images are extracted and fused by a neural network to obtain a 7T guide image carrying structure information and high-frequency details; the feature fusion module performing the fusion may be assembled from any number of convolutional layers.
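As a minimal illustration of the fusion idea, the 1x1-convolution special case reduces co-registered per-sequence maps to a single guide map by a pixel-wise weighted sum. The function name, weights, and toy inputs below are assumptions; the text above only requires that the module be built from some number of convolutional layers.

```python
def fuse_sequences(seq_maps, weights, bias=0.0):
    """Fuse co-registered per-sequence maps (e.g. T1, T2, PD) into one
    guide map via a pixel-wise weighted sum -- the 1x1-convolution
    special case of the convolutional fusion described above."""
    h, w = len(seq_maps[0]), len(seq_maps[0][0])
    return [[sum(wt * m[i][j] for wt, m in zip(weights, seq_maps)) + bias
             for j in range(w)] for i in range(h)]

# Three toy 2x2 "sequences" fused with fixed weights (learned in practice).
t1 = [[1.0, 0.0], [0.0, 1.0]]
t2 = [[0.0, 2.0], [2.0, 0.0]]
pd = [[1.0, 1.0], [1.0, 1.0]]
guide = fuse_sequences([t1, t2, pd], weights=[0.5, 0.25, 0.25])
```

A trained module would stack several such convolutions with nonlinearities, but the channel-mixing step shown here is the core of the fusion.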
(4) a condition generator G2 constructed from a convolutional neural network takes a 3T magnetic resonance image of an arbitrary sequence and the guide image obtained in step (3) as input and generates a 7T magnetic resonance image of the same sequence. The structure of G2 is similar to G1, but the input of its spatial transform layers is the guide image.
(5) multiple loss terms are computed from the generated images and the 3T/7T images that serve as generator input, and the composite loss is minimized with an optimizer to train the network; wherein:
the loss function includes three terms:
the first term: a discriminator distinguishes the distribution of the generated 3T/7T magnetic resonance images from the real distribution, yielding a discrimination loss; the discriminator is a multi-scale multilayer network whose input is a real or generated image and whose output is a label measuring how realistic the input image is; any type of discrimination loss may be chosen, depending on the actual reconstruction effect.
the second term: the generated 3T/7T magnetic resonance image is mapped back into a 7T/3T magnetic resonance image, and a cycle-consistency loss minimizing the L1 distance to the initially input 7T/3T training image is computed;
the third term: for paired training samples, the L1 distance between the generated image and the real image is minimized.
Training the network on the composite loss with an optimizer includes training the generators, the feature fusion module and the discriminators on the partially paired training samples; the specific network structures are briefly described above.
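A scalar sketch of how the three terms might be combined for one generated sample is given below. The least-squares (LSGAN) form of the adversarial term and the weights of 10.0 are common choices but are assumptions here; the text above deliberately leaves the discrimination loss type open.

```python
def l1(a, b):
    """Mean absolute difference between two flat image vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def composite_loss(fake_score, cyc_pair, sup_pair=None,
                   w_cyc=10.0, w_sup=10.0):
    """Composite training loss for one sample: a least-squares
    adversarial term (generator side), a cycle-consistency L1 term,
    and -- only when a paired sample exists -- a supervised L1 term."""
    adv = (fake_score - 1.0) ** 2              # push D's score toward "real"
    cyc = w_cyc * l1(*cyc_pair)                # remapped image vs. original input
    sup = w_sup * l1(*sup_pair) if sup_pair else 0.0
    return adv + cyc + sup

# Unpaired sample: only adversarial + cycle terms contribute.
loss_unpaired = composite_loss(0.9, ([0.2, 0.4], [0.2, 0.4]))
# Paired sample: the supervised L1 term is added.
loss_paired = composite_loss(1.0, ([0.0], [0.0]), sup_pair=([0.5], [0.0]))
```

Omitting the supervised term for unpaired samples is exactly what makes the scheme semi-supervised: every sample contributes to the adversarial and cycle terms, but only the paired subset contributes to the direct reconstruction term.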
(6) with the multi-sequence 3T magnetic resonance images as input, the corresponding single-sequence 7T image is generated quickly by the trained generator G2.
Further:
In step (2), the generator G1 constructed from a convolutional neural network generates multi-sequence 3T magnetic resonance images from single-sequence 7T magnetic resonance images, learning the mapping from the 7T image data space to the 3T image data space; the 3T magnetic resonance images generated in step (2) can also be used as input to steps (3) and (4) to augment the training samples.
In step (3), the feature fusion module constructed from a convolutional neural network extracts features of the multi-sequence 3T magnetic resonance images and fuses them into a guide image that assists the subsequent generation of the 7T magnetic resonance image.
In step (4), the condition generator G2 constructed from a convolutional neural network takes as input a 3T magnetic resonance image of the same sequence as the 7T magnetic resonance image together with the guide image obtained in step (3), and generates a single-sequence 7T magnetic resonance image; in addition, the 7T magnetic resonance image generated in step (4) can also be used as input to step (2) in subsequent training.
Compared with the prior art, the invention has the following advantages:
(1) by generating a single-sequence 7T magnetic resonance image from multi-sequence 3T magnetic resonance images, the invention fully exploits the detail information and structural reference contained in easily obtained multi-sequence 3T images, producing 7T magnetic resonance images of higher visual quality and greater stability;
(2) the method is fully automatic, computationally fast, and easy to implement.
Drawings
Fig. 1 is a block flow diagram of a method of generating a multi-sequence 3T to 7T magnetic resonance image based on semi-supervision according to the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. Note that the following description of the embodiments is merely a substantive example; the invention is not limited to the applications or uses described, nor to the following embodiments.
In an embodiment, as shown in fig. 1, a method for generating a multi-sequence 3T to 7T magnetic resonance image based on semi-supervision specifically includes the following steps:
Step 1, acquire partially paired multi-sequence 3T brain magnetic resonance images (such as T1, T2 and PD) and single-sequence (such as T1) 7T brain magnetic resonance images to generate training samples. Specifically, voxels outside the brain, such as the skull, are first removed from the magnetic resonance images, and the intensity values are normalized to [-1, 1]; unpaired images are randomly shuffled; for paired images, the 3T magnetic resonance image is linearly registered with the 7T magnetic resonance image as reference; in addition, data augmentation is applied to generate more training samples. These prepared samples are used for network training in the subsequent steps.
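The intensity-normalization part of this preprocessing step can be sketched as follows, assuming skull-stripping has already removed non-brain voxels; the function name and the flat list standing in for a 3-D volume are illustrative.

```python
def normalize_to_unit_range(volume, lo=-1.0, hi=1.0):
    """Linearly rescale voxel intensities to [lo, hi], as in the
    preprocessing of step 1 (a flat list stands in for a 3-D volume)."""
    vmin, vmax = min(volume), max(volume)
    if vmax == vmin:                        # constant image: map everything to lo
        return [lo for _ in volume]
    scale = (hi - lo) / (vmax - vmin)
    return [lo + (v - vmin) * scale for v in volume]

intensities = [0.0, 250.0, 500.0, 1000.0]
normalized = normalize_to_unit_range(intensities)  # spans -1.0 .. 1.0
```

In practice the minimum and maximum would be taken over brain voxels only, so that residual background values do not skew the scaling.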
Step 2, since the data are only partially paired, the neural network is trained with semi-supervised cycle-consistent adversarial training. A multi-sequence 3T magnetic resonance image is therefore first generated from a single-sequence 7T magnetic resonance image by the generator G1 built from a neural network. These generated 3T magnetic resonance images are used to compute a loss function so that G1 learns the mapping from the 7T space to the multi-sequence 3T space; they can also serve as input for subsequent training, achieving data augmentation.
Step 3, because mapping a 3T magnetic resonance image to a 7T magnetic resonance image is difficult, and considering that multi-sequence 3T magnetic resonance images contain similar as well as complementary structural information, a feature fusion module built from a neural network extracts and fuses the features of the multi-sequence 3T magnetic resonance images to obtain a guide image carrying the structure information and high-frequency details of the 7T magnetic resonance image.
Step 4, a generator G2 built from a convolutional neural network takes as input a 3T magnetic resonance image of the same sequence as the 7T magnetic resonance image together with the guide image obtained in step 3, and generates a single-sequence 7T magnetic resonance image; similarly, the generated images can serve as input for subsequent training, achieving data augmentation.
Step 5, two discriminators D1 and D2, each built from a convolutional neural network, judge the difference between the distributions of the generated and real images; a composite loss function is computed and the network is trained:
L = L_adv + λ_cyc · L_cyc + λ_pair · L_pair
(the composite loss, where L_adv, L_cyc and L_pair are the three terms described below and λ_cyc, λ_pair are weighting coefficients)
the first term: the discriminators D1 and D2 distinguish the distributions of the generated 3T/7T magnetic resonance images from the real distributions, and a discrimination loss is computed;
the second term: the generated 3T/7T magnetic resonance image is mapped back into a 7T/3T magnetic resonance image, and a cycle-consistency loss minimizing the L1 distance to the initially input 7T/3T training data is computed;
the third term: for paired training samples, the L1 distance between the generated image and the real image is minimized.
Step 6, with the multi-sequence 3T magnetic resonance images as input, the feature fusion module produces a guide image; the single-sequence 3T magnetic resonance image and the guide image are then fed to the trained generator G2, which quickly generates a 7T image of the same sequence.
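The inference path of step 6 reduces to a two-stage composition, sketched below with stand-in callables in place of the trained fusion module and generator G2 (all names and the toy "networks" are assumptions, used only so the pipeline runs end to end):

```python
def generate_7t(seq_3t_images, fusion_module, generator_g2, target_idx=0):
    """Inference path of step 6: fuse the multi-sequence 3T inputs into a
    guide image, then condition G2 on the guide together with the 3T image
    of the target sequence to synthesize the 7T image."""
    guide = fusion_module(seq_3t_images)
    return generator_g2(seq_3t_images[target_idx], guide)

# Illustrative stand-ins for the trained networks (flat lists as images):
mean_fuse = lambda imgs: [sum(px) / len(px) for px in zip(*imgs)]
g2_stub   = lambda img, guide: [0.5 * a + 0.5 * b for a, b in zip(img, guide)]

fake_7t = generate_7t([[1.0, 3.0], [3.0, 1.0]], mean_fuse, g2_stub)
```

Only this forward pass is needed at deployment time; G1 and the discriminators exist solely to shape training.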
The above embodiments are merely examples and do not limit the scope of the invention. They may be implemented in various other ways, and various omissions, substitutions and changes may be made without departing from the technical spirit of the invention.

Claims (4)

1. A method for generating a multi-sequence 3T to 7T magnetic resonance image based on semi-supervision is characterized by comprising the following specific steps:
(1) acquiring partial paired multi-sequence 3T magnetic resonance images and single-sequence 7T magnetic resonance images to generate training samples;
(2) generating a multi-sequence 3T magnetic resonance image from the single-sequence 7T magnetic resonance image through a generator G1 constructed by a convolutional neural network; the generator G1 adopts a cascade architecture composed of a plurality of convolution residual blocks containing spatial transform layers and deconvolution layers for upsampling; the input of the generator is the 7T magnetic resonance image at multiple scales; except that the smallest-scale input generates the target image through the convolution residual blocks, the images at other scales generate scale and shift parameters through the spatial transform layers, correcting the output of the preceding convolution residual block;
(3) extracting and fusing the characteristics of the multi-sequence 3T magnetic resonance images through a neural network to obtain a 7T guide image for guiding structure information and high-frequency details; the characteristic fusion module for fusing the characteristics is formed by combining convolution layers with any number of layers;
(4) generating a 7T magnetic resonance image of the same sequence by taking a 3T magnetic resonance image of a random sequence and the guide image obtained in the step (3) as input through a condition generator G2 constructed by a convolutional neural network; the structure of the condition generator G2 is similar to that of the generator G1, but the input of the spatial transform layer is a guide image;
(5) calculating a plurality of types of loss functions using the generated image and the 3T/7T image as an input of the generator; the synthetic loss function uses an optimizer to train the network; wherein:
the loss function includes three terms:
the first item: distinguishing the difference between the distribution of the generated 3T/7T magnetic resonance image and the real distribution by using a discriminator, and calculating a distinguishing loss function; the discriminator is a multi-scale multilayer network structure, the input of the discriminator is a real or generated image, and the output of the discriminator is a label for measuring the real degree of the input image;
the second term is: remapping the generated 3T/7T magnetic resonance image into a 7T/3T magnetic resonance image, and calculating a cyclic loss function with initially input 7T/3T training data to minimize the L1 distance of the cyclic loss function;
the third item: for pairs of training samples, the L1 distance of the generated image from the real image is minimized;
training the network on the composite loss function comprises training the generators, the feature fusion module and the discriminators using the partially paired training samples, the networks being convolutional neural networks;
(6) with the multi-sequence 3T magnetic resonance images as input, a corresponding single sequence 7T image is generated quickly from the trained generator G2.
2. The method for generating a multi-sequence 3T to 7T magnetic resonance image based on semi-supervision as claimed in claim 1, wherein in step (2), the generator G1 constructed by a convolutional neural network generates a multi-sequence 3T magnetic resonance image from a single-sequence 7T magnetic resonance image, learning the mapping from the 7T image data space to the 3T image data space; the 3T magnetic resonance image generated in step (2) can also be used as an input to step (3) and step (4) to augment the training samples.
3. The method for generating a multi-sequence 3T to 7T magnetic resonance image based on semi-supervision as claimed in claim 1, wherein in step (3), the feature fusion module constructed by using the convolutional neural network is used to extract the features of the multi-sequence 3T magnetic resonance image and fuse the features into a guide image for assisting the generation of the subsequent 7T magnetic resonance image.
4. The semi-supervised-based multi-sequence 3T to 7T magnetic resonance image generation method as claimed in claim 1, wherein in step (4), a condition generator G2 constructed by a convolutional neural network takes as input a 3T magnetic resonance image of the same sequence as the 7T magnetic resonance image together with the guide image obtained in step (3), and generates a single-sequence 7T magnetic resonance image; in addition, the 7T magnetic resonance image generated in step (4) can also be used as an input to step (2) in subsequent training.
CN202110687898.4A 2021-06-21 2021-06-21 Semi-supervised-based multi-sequence 3T to 7T magnetic resonance image generation method Active CN113554728B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110687898.4A CN113554728B (en) 2021-06-21 2021-06-21 Semi-supervised-based multi-sequence 3T to 7T magnetic resonance image generation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110687898.4A CN113554728B (en) 2021-06-21 2021-06-21 Semi-supervised-based multi-sequence 3T to 7T magnetic resonance image generation method

Publications (2)

Publication Number Publication Date
CN113554728A 2021-10-26
CN113554728B 2022-04-12

Family

ID=78102232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110687898.4A Active CN113554728B (en) 2021-06-21 2021-06-21 Semi-supervised-based multi-sequence 3T to 7T magnetic resonance image generation method

Country Status (1)

Country Link
CN (1) CN113554728B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114881848A (en) * 2022-07-01 2022-08-09 浙江柏视医疗科技有限公司 Method for converting multi-sequence MR into CT
CN115240032B (en) * 2022-07-20 2023-06-23 中国人民解放军总医院第一医学中心 Method for generating 7T magnetic resonance image based on 3T magnetic resonance image of deep learning

Citations (5)

Publication number Priority date Publication date Assignee Title
CN109377455A (en) * 2018-09-27 2019-02-22 浙江工业大学 The improved multisequencing magnetic resonance image method for registering based on self-similarity
CN110270015A (en) * 2019-05-08 2019-09-24 中国科学技术大学 A kind of sCT generation method based on multisequencing MRI
CN110619635A (en) * 2019-07-25 2019-12-27 深圳大学 Hepatocellular carcinoma magnetic resonance image segmentation system and method based on deep learning
CN112734770A (en) * 2021-01-06 2021-04-30 中国人民解放军陆军军医大学第二附属医院 Multi-sequence fusion segmentation method for cardiac nuclear magnetic images based on multilayer cascade
CN112802046A (en) * 2021-01-28 2021-05-14 华南理工大学 Image generation system for generating pseudo CT from multi-sequence MR based on deep learning

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US10753997B2 (en) * 2017-08-10 2020-08-25 Siemens Healthcare Gmbh Image standardization using generative adversarial networks
US20210012486A1 (en) * 2019-07-09 2021-01-14 Shenzhen Malong Technologies Co., Ltd. Image synthesis with generative adversarial network

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN109377455A (en) * 2018-09-27 2019-02-22 浙江工业大学 The improved multisequencing magnetic resonance image method for registering based on self-similarity
CN110270015A (en) * 2019-05-08 2019-09-24 中国科学技术大学 A kind of sCT generation method based on multisequencing MRI
CN110619635A (en) * 2019-07-25 2019-12-27 深圳大学 Hepatocellular carcinoma magnetic resonance image segmentation system and method based on deep learning
CN112734770A (en) * 2021-01-06 2021-04-30 中国人民解放军陆军军医大学第二附属医院 Multi-sequence fusion segmentation method for cardiac nuclear magnetic images based on multilayer cascade
CN112802046A (en) * 2021-01-28 2021-05-14 华南理工大学 Image generation system for generating pseudo CT from multi-sequence MR based on deep learning

Non-Patent Citations (4)

Title
CT Super-Resolution GAN Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE); Chenyu You et al.; IEEE Transactions on Medical Imaging; 2020-01-31; vol. 39, no. 1; full text *
Multi-sequence MR image-based synthetic CT generation using a generative adversarial network for head and neck MRI-only radiotherapy; Mengke Qi et al.; Medical Physics; 2020-04-30; vol. 47, no. 4; full text *
Reconstruction of 7T-Like Images From 3T MRI; Khosro Bahrami et al.; IEEE Transactions on Medical Imaging; 2016-09-30; vol. 35, no. 9; full text *
Medical image super-resolution reconstruction and image dataset augmentation based on cascaded GAN networks; Gong Mingjie; China Masters' Theses Full-text Database; 2021-02-15; no. 02; thesis chapter 3 *

Also Published As

Publication number Publication date
CN113554728A (en) 2021-10-26

Similar Documents

Publication Publication Date Title
Hu et al. Brain MR to PET synthesis via bidirectional generative adversarial network
Güngör et al. TranSMS: Transformers for super-resolution calibration in magnetic particle imaging
Lam et al. Constrained magnetic resonance spectroscopic imaging by learning nonlinear low-dimensional models
CN113554728B (en) Semi-supervised-based multi-sequence 3T to 7T magnetic resonance image generation method
Zhou et al. Deep learning methods for medical image fusion: A review
CN112488976B (en) Multi-modal medical image fusion method based on DARTS network
Benou et al. De-noising of contrast-enhanced MRI sequences by an ensemble of expert deep neural networks
Kang et al. Fusion of brain PET and MRI images using tissue-aware conditional generative adversarial network with joint loss
Qiang et al. deep variational autoencoder for modeling functional brain networks and ADHD identification
Saleh et al. A brief analysis of multimodal medical image fusion techniques
Wang et al. MSE-Fusion: Weakly supervised medical image fusion with modal synthesis and enhancement
CN117333750A (en) Spatial registration and local global multi-scale multi-modal medical image fusion method
Wang et al. Variable augmented network for invertible modality synthesis and fusion
CN115984257A (en) Multi-modal medical image fusion method based on multi-scale transform
CN114298979B (en) Method for generating hepatonuclear magnetic image sequence guided by description of focal lesion symptom
Wu et al. Hierarchical and symmetric infant image registration by robust longitudinal‐example‐guided correspondence detection
Zhang et al. SS-SSAN: a self-supervised subspace attentional network for multi-modal medical image fusion
Shi et al. An unsupervised region of interest extraction model for tau PET images and its application in the diagnosis of Alzheimer's disease
Kalluvila Super-Resolution of Brain MRI via U-Net Architecture
Lee et al. A Novel Knowledge Keeper Network for 7T-Free but 7T-Guided Brain Tissue Segmentation
Chen et al. TractGeoNet: A geometric deep learning framework for pointwise analysis of tract microstructure to predict language assessment performance
Mirza et al. Skip connections for medical image synthesis with generative adversarial networks
CN112258457B (en) Multi-dimensional feature extraction method of full-volume three-dimensional ultrasonic image
Chen et al. CNS: CycleGAN-Assisted Neonatal Segmentation Model for Cross-Datasets
Thakur et al. Medical Image Fusion Using Discrete Wavelet Transform: In view of Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant