CN110827232A - Cross-modal MRI (magnetic resonance imaging) synthesis method based on morphological feature GAN (generative adversarial network) - Google Patents


Info

Publication number
CN110827232A
Authority
CN
China
Prior art keywords
modality
representative
target
mode
information
Prior art date
Legal status
Granted
Application number
CN201911113248.8A
Other languages
Chinese (zh)
Other versions
CN110827232B (en)
Inventor
王艳
李志昂
吴锡
周激流
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN201911113248.8A priority Critical patent/CN110827232B/en
Publication of CN110827232A publication Critical patent/CN110827232A/en
Application granted granted Critical
Publication of CN110827232B publication Critical patent/CN110827232B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10072: Tomographic images
    • G06T2207/10088: Magnetic resonance imaging [MRI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a cross-modal MRI synthesis method based on a morphological-feature GAN, which comprises the steps of establishing an MRFE-GAN model, wherein the MRFE-GAN model comprises a residual network module and a modality representative feature extraction module; the source modality acquires a pseudo target modality through the residual network module; representative features of the pseudo target modality are extracted through the modality representative feature extraction module, combined with the basic information of the source modality, and fused to generate a synthetic target modality. The invention can obtain a more realistic and effective target modality; it can effectively overcome the information difference between cross-domain modalities and effectively extract features of different levels; and it effectively reduces the difference between the synthetic modality and the real target modality, so that the synthesized image is more realistic and reliable.

Description

Cross-modal MRI (magnetic resonance imaging) synthesis method based on morphological feature GAN (generative adversarial network)
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a cross-modality MRI synthesis method based on a morphological-feature GAN.
Background
Medical imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) are important components of modern healthcare. MRI, which can capture contrast differences in soft tissue, has become the primary imaging modality for studying neuroanatomy. By applying different pulse sequences and parameters, a wide variety of tissue contrasts can be generated when imaging the same anatomy, thereby obtaining images of different contrasts, i.e., MRI modalities. For example, by selecting pulse sequences such as magnetization-prepared rapid gradient echo (MPRAGE) and spoiled gradient recalled (SPGR), T1-weighted (T1) images can be generated that clearly depict gray- and white-matter tissue. In contrast, T2-weighted (T2) images, generated by applying a pulse sequence such as dual spin echo (DSE), delineate fluid from cortical tissue. Additionally, fluid-attenuated inversion recovery (FLAIR) is a T2-weighted pulse sequence employed to enhance the image contrast of white-matter lesions. Different contrast images of the same patient may provide different diagnostic information, and the same anatomy imaged with different tissue contrasts increases the diversity of MRI information. However, obtaining multiple different contrast images (or modalities) for the same subject is very time-consuming and expensive. In practice, therefore, the number of contrast modalities acquired for the same patient is always limited by factors such as limited scan time and high cost.
Although some methods perform modality synthesis through deep networks, the feature distributions of the source modality and the target modality differ greatly. Even though existing methods can learn the mapping between different modalities to some extent, a gap may still remain between the input image and the generated image that they cannot effectively reduce, resulting in a large difference between the synthesized image and the real image.
Disclosure of Invention
In order to solve these problems, the invention provides a cross-modal MRI synthesis method based on a morphological-feature GAN, which can obtain a more realistic and effective target modality; effectively overcome the information difference between cross-domain modalities and effectively extract features of different levels; and effectively reduce the difference between the synthetic modality and the real target modality, so that the synthesized image is more realistic and reliable.
In order to achieve this purpose, the invention adopts the following technical scheme. A cross-modality MRI synthesis method based on a morphological-feature GAN comprises the following steps:
establishing an MRFE-GAN model, which comprises a residual network module and a modality representative feature extraction module;
acquiring a pseudo target modality from the source modality through the residual network module;
and extracting representative features of the pseudo target modality through the modality representative feature extraction module, combining the representative features with the basic information of the source modality, and fusing them to generate a synthetic target modality.
Further, the residual network module takes the source modality as input, and a pseudo target modality is formed by establishing an intermediate modality in the residual network module to simulate the target modality. In the image synthesis task, the feature information of the input and the output is completely different, which means that a deeper network is required to improve synthesis performance; but as the network deepens, it may become difficult to optimize, thereby reducing performance. The residual network can effectively overcome this problem.
Further, the residual network module comprises 3 downsampling blocks, 12 residual blocks and 3 deconvolution layers run in sequence; the downsampling blocks increase the number of feature maps from 1 to 256, and each downsampling block comprises sequentially run convolution, instance normalization and ReLU layers; each residual block comprises padding, convolution, instance normalization and ReLU layers; each deconvolution layer comprises deconvolution, instance normalization, a ReLU layer and an activation function. The independence of each modality instance after each convolutional layer can be effectively maintained through instance normalization.
Further, the modality representative feature extraction module in the MRFE-GAN model comprises a base encoder, a representative encoder and a decoder; the source modality is input into the base encoder, which extracts basic information from it; the pseudo target modality is input into the representative encoder, which extracts representative information from it; the base encoder and the representative encoder are connected in parallel to the decoder, and the decoder fuses the basic information and the representative information to generate the synthetic target modality.
Further, in the modality representative feature extraction module, a source modality x and a pseudo target modality y are input, with modality distributions P(x) and P(y), respectively; each modality is decomposed into two different distributions in its own space: P(x1|x) and P(x2|x), and P(y1|y) and P(y2|y); where P(x1|x) and P(y1|y) are the basic structural distributions of the respective modalities, P(x2|x) and P(y2|y) are the representative distributions, and the difference between the two modalities is embodied by P(x2|x) and P(y2|y), which are the representative features of each modality itself;
representative features of the y modality based on the x-modality structure are obtained by fusing P(x1|x) and P(y2|y) in the modality representative feature extraction module; alternatively, representative features of the x modality based on the y-modality structure are obtained by fusing P(x2|x) and P(y1|y).
Further, the base encoder comprises 3 downsampling blocks and 4 residual blocks run in sequence, wherein the downsampling blocks comprise sequentially run convolution, instance normalization and ReLU layers, and the residual blocks comprise sequentially run padding, convolution, instance normalization and ReLU layers;
the representative encoder comprises 5 sets of combined convolution and ReLU layers, a global average pooling layer, and 3 sets of linear layers; all instance normalization layers before each ReLU layer of the representative encoder are discarded, each two-dimensional feature channel is converted into a real number by global average pooling, and the correlation among channels is modeled in the three sets of linear layers to obtain the mean and standard deviation of the representative information; this improves synthesis accuracy while avoiding the problem that instance normalization would delete the original feature mean and standard deviation carrying important representative information;
and the decoder uses the standard deviation and mean obtained by the representative encoder as the scale and shift parameters of its adaptive instance normalization layer, so that the basic information and the representative information are fused to generate the synthetic target modality.
Further, the step of the decoder fusing the basic information and the representative information to generate the synthetic target modality includes:
first, normalization is performed by the adaptive instance normalization layer:
α' = (α - Ξ(α)) / Δ(α);
where α is the representative-information input of the adaptive instance normalization layer, and Δ(α) and Ξ(α) are the standard deviation and mean of α, respectively;
then the normalized α' is multiplied by the scale parameter and the shift parameter is added, completing the fusion of the two kinds of information, reconstructing the modality, and generating the synthetic target modality:
α" = Δ' * α' + Ξ';
where the scale parameter Δ' is the standard deviation of the representative information, and the shift parameter Ξ' is the mean of the representative information.
The system further comprises a first discrimination module and a second discrimination module; the real target modality and the pseudo target modality are input into the first discrimination module for loss calculation; the synthetic target modality and the real target modality are input into the second discrimination module for loss calculation; the loss calculations of the first and second discrimination modules are combined to form the total loss function of the MRFE-GAN model, which is minimized by optimization training, and the pseudo target modality is refined by adjusting parameters so as to improve the realism of the synthetic result.
Further, a pseudo target modality G(x) is generated in the residual network module from the source modality x, and G(x) is used as an input of the modality representative feature extraction module to extract representative feature information. The specific process is as follows:
the pseudo target modality G(x) is put into the first discrimination module D1 for the first loss calculation:
L_RESNET(G, D1) = E_y[log D1(y)] + E_x[log(1 - D1(G(x)))];
where y is the target modality and E denotes the expected value;
an L1 reconstruction loss helps the network capture the overall appearance and relatively coarse features of the target modality in the synthetic modality:
L_RESNET-L1(G) = E_{x,y}[||y - G(x)||_1];
the modality representative feature extraction module takes x and G(x) as input and outputs the synthetic target modality M(x, G(x)), which is put into the second discrimination module D2 for loss calculation:
L_MRFE(M, D2) = E_y[log D2(y)] + E_{x,G(x)}[log(1 - D2(M(x, G(x))))];
the L1 loss is used to calculate the difference between the synthetic target modality and the real target modality:
L_MRFE-L1(M) = E_{x,G(x),y}[||y - M(x, G(x))||_1];
the total loss function of the MRFE-GAN model is:
L_total = λ1·L_RESNET(G, D1) + λ2·L_RESNET-L1(G) + λ3·L_MRFE(M, D2) + λ4·L_MRFE-L1(M);
during training, the loss function is minimized while D1 and D2 are maximized adversarially, with the weights set to λ1 = λ3 = 1 and λ2 = λ4 = 100; this enhances network performance and yields more realistic outputs.
The beneficial effects of the technical scheme are as follows:
the invention provides a novel framework for cross-modal MRI synthesis, a countermeasure network (GAN) is generated based on conditions, and a Modal Representative Feature Extraction (MRFE) strategy is introduced into the network to form the MRFE-GAN model provided by the invention. And extracting representative characteristics of the target mode through the MRFE-GAN model, combining the representative characteristics with basic information of the source mode, and accurately synthesizing the target mode from the source mode. The proposed method is superior to the latest image synthesis methods in both qualitative and quantitative aspects, and achieves better performance.
The present invention divides a modality into two different levels of information, called basic information and representative information, and extracts them using two encoders with completely different structures. Because the target modality is unavailable at test time, a RESNET module is provided before the MRFE module to generate an intermediate modality as a pseudo target modality; that is, in consideration of the unavailability of the target modality during testing, a residual network is used to synthesize an intermediate result as the source of representative information. In addition, in the decoding stage, instead of directly fusing the extracted basic and representative information, an AdaIN layer is added to the decoder, and the representative information is used as the shifting and scaling parameters of the AdaIN layer to complete the fusion of the two different levels of information. A more realistic and effective target modality is thus obtained; the information difference between cross-domain modalities is effectively overcome, features of different levels are effectively extracted, and the difference between the synthetic modality and the real target modality is effectively reduced, so that the synthesized image is more realistic and reliable.
The MRFE-GAN model of the present invention consists of a residual network and an MRFE network that includes two encoders (a base encoder and a representative encoder) and a decoder. Compared with a conventional deep neural network, its advantages are: (1) unlike a conventional GAN, in which the source modality is directly used as input to generate a synthetic target modality, the MRFE architecture considers both the basic feature information of the source modality and the representative feature information of the target modality; the two encoders extract information of different levels, so that on the one hand a better fusion effect is obtained, and on the other hand image fusion is more flexible, as different fused images can be obtained by fusing the basic information and representative information of different modalities; (2) in order to extract representative information from the target modality, a residual network (RESNET) is adopted to generate an intermediate modality as a pseudo target modality so that the representative encoder can encode it; (3) instead of directly fusing the two kinds of information, the standard deviation and mean of the representative information are used as the scaling and shifting parameters of the adaptive instance normalization layer in the decoder, completing the fusion in feature space; this transformation of distributions in feature space accelerates fusion without additional computational cost; (4) after each convolutional layer, batch normalization (BN) is not used; instead, instance normalization (IN) is used to maintain the differences between different MRI modality instances.
Drawings
FIG. 1 is a schematic flow chart of a cross-modality MRI synthesis method based on morphological feature GAN according to the present invention;
FIG. 2 is a schematic topological structure diagram of a cross-modality MRI synthesis method based on morphological feature GAN in an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a residual error network module according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a basic encoder according to an embodiment of the present invention;
FIG. 5 is a block diagram of a representative encoder in an embodiment of the present invention;
FIG. 6 is a comparison of the results of synthesizing T2 from T1 by the present method and by prior methods in an embodiment of the present invention;
FIG. 7 is a comparison of the results of synthesizing FLAIR from T2 by the present method and by prior methods in an embodiment of the present invention;
FIG. 8 is a comparison of the results of synthesizing T2 from PD by the present method and by prior methods in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described with reference to the accompanying drawings.
In this embodiment, referring to fig. 1 and fig. 2, the present invention provides a cross-modality MRI synthesis method based on a morphological-feature GAN, comprising the following steps:
establishing an MRFE-GAN model, which comprises a residual network module and a modality representative feature extraction module;
acquiring a pseudo target modality from the source modality through the residual network module;
and extracting representative features of the pseudo target modality through the modality representative feature extraction module, combining the representative features with the basic information of the source modality, and fusing them to generate a synthetic target modality.
As an optimization of the above embodiment, the residual network module takes the source modality as input, and a pseudo target modality is formed by establishing an intermediate modality in the residual network module to simulate the target modality. In the image synthesis task, the feature information of the input and the output is completely different, which means that a deeper network is needed to improve synthesis performance; but as the network deepens it becomes difficult to optimize, reducing performance, so generating the intermediate modality in the residual network module effectively overcomes this problem.
As shown in fig. 3, the residual network module includes 3 downsampling blocks, 12 residual blocks (ResBlock), and 3 deconvolution layers, which are run in sequence; the downsampling blocks increase the number of feature maps from 1 to 256, and each downsampling block comprises sequentially run convolution (Conv), instance normalization (InstanceNorm) and ReLU layers; each residual block comprises sequentially run padding (Padding), convolution (Conv), instance normalization (InstanceNorm) and ReLU layers; each deconvolution layer comprises sequentially run deconvolution (ConvT), instance normalization (InstanceNorm), ReLU, and an activation function (Tanh). The independence of each modality instance after each convolutional layer can be effectively maintained through instance normalization.
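The block structure described above can be sketched as follows. This is a minimal NumPy illustration, not the patent's actual implementation: the naive 3x3 convolution, the layer widths, and the helper names are assumptions. It shows the two points the text emphasizes, namely instance normalization computed per channel of each instance (no batch statistics) and the residual skip connection that eases optimization of deep networks:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # x: (C, H, W); each channel of each instance is normalized independently,
    # which maintains the independence of each modality instance.
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    return (x - mean) / (std + eps)

def conv3x3(x, w):
    # Naive "same" 3x3 convolution; x: (C_in, H, W), w: (C_out, C_in, 3, 3).
    c_out, h, wd = w.shape[0], x.shape[1], x.shape[2]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # padding step of the block
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(x.shape[0]):
            for dy in range(3):
                for dx in range(3):
                    out[o] += w[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + wd]
    return out

def residual_block(x, w1, w2):
    # padding -> conv -> instance norm -> ReLU, then a second conv branch,
    # followed by the residual addition x + F(x).
    y = np.maximum(instance_norm(conv3x3(x, w1)), 0.0)
    y = instance_norm(conv3x3(y, w2))
    return x + y
```

With zero convolution weights the block reduces to the identity mapping, which is exactly why residual blocks remain easy to optimize as the network deepens.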
As an optimization of the above embodiment, the modality representative feature extraction module in the MRFE-GAN model includes a base encoder, a representative encoder, and a decoder; the source modality is input into the base encoder, which extracts basic information from it; the pseudo target modality is input into the representative encoder, which extracts representative information from it; the base encoder and the representative encoder are connected in parallel to the decoder, and the decoder fuses the basic information and the representative information to generate the synthetic target modality.
In the modality representative feature extraction module, a source modality x and a pseudo target modality y' are input, with modality distributions P(x) and P(y'), respectively; each modality is decomposed into two different distributions in its own space: P(x1|x) and P(x2|x), and P(y1|y') and P(y2|y'); where P(x1|x) and P(y1|y') are the basic structural distributions of the respective modalities, and P(x2|x) and P(y2|y') are the representative distributions, which embody the difference between the two modalities and constitute the representative features of each modality itself;
representative features of the y' modality based on the x-modality structure are obtained by fusing P(x1|x) and P(y2|y') in the modality representative feature extraction module; alternatively, representative features of the x modality based on the y'-modality structure are obtained by fusing P(x2|x) and P(y1|y').
As shown in fig. 4, the base encoder includes 3 downsample blocks and 4 residual blocks, which are sequentially run, the downsample blocks including sequentially run convolution (Conv), instance normalization (instanceNorm), and ReLU layers, and the residual blocks including sequentially run padding (padding), convolution (Conv), instance normalization (instanceNorm), and ReLU layers;
as shown in fig. 5, the representative encoder includes 5 sets of convolutional and ReLU layer combinations, a global average pooling layer (adaptive avgpool), and 3 sets of Linear layers (Linear); discarding all instance normalization layers before each ReLU layer of the representative encoder, converting each two-dimensional feature channel into a real number by utilizing global average pooling, and modeling correlation among the channels in three groups of linear layers to obtain a mean value and a standard deviation of representative information; on the basis of improving the synthesis identification precision, the problem that the original characteristic mean value and the standard deviation of important representative information are deleted due to example normalization is solved;
and the decoder uses the standard deviation and mean obtained by the representative encoder as the scale and shift parameters of its adaptive instance normalization layer, so that the basic information and the representative information are fused to generate the synthetic target modality.
The step in which the decoder fuses the basic information and the representative information to generate the synthetic target modality comprises:
first, normalization is performed by the adaptive instance normalization layer:
α' = (α - Ξ(α)) / Δ(α);
where α is the representative-information input of the adaptive instance normalization layer, and Δ(α) and Ξ(α) are the standard deviation and mean of α, respectively;
then, the normalized α' is multiplied by the scale parameter and the shift parameter is added, completing the fusion of the two kinds of information, reconstructing the modality, and generating the synthetic target modality:
α" = Δ' * α' + Ξ';
where the scale parameter Δ' is the standard deviation of the representative information, and the shift parameter Ξ' is the mean of the representative information.
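The two-step fusion above (normalize, then re-scale and re-shift with the representative statistics) is adaptive instance normalization, and can be sketched in NumPy as follows; the per-channel treatment of the scale and shift parameters is an assumption:

```python
import numpy as np

def adain(content, rep_std, rep_mean, eps=1e-5):
    # content: (C, H, W) features carrying basic information.
    # rep_std / rep_mean: per-channel standard deviation and mean produced by
    # the representative encoder (the scale parameter Delta' and shift parameter Xi').
    mean = content.mean(axis=(1, 2), keepdims=True)  # Xi(alpha)
    std = content.std(axis=(1, 2), keepdims=True)    # Delta(alpha)
    normalized = (content - mean) / (std + eps)      # alpha'
    # alpha'' = Delta' * alpha' + Xi'
    return rep_std[:, None, None] * normalized + rep_mean[:, None, None]
```

After this operation each output channel has (approximately) the mean and standard deviation supplied by the representative information, while the spatial structure of the basic information is preserved, which is exactly the fusion the decoder performs.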
As an optimization of the above embodiment, the system further includes a first discrimination module and a second discrimination module; the real target modality and the pseudo target modality are input into the first discrimination module for loss calculation; the synthetic target modality and the real target modality are input into the second discrimination module for loss calculation; the loss calculations of the first and second discrimination modules are combined to form the total loss function of the MRFE-GAN model, which is minimized by optimization training, and the pseudo target modality is refined by adjusting parameters so as to improve the realism of the synthetic result.
A pseudo target modality G(x) is generated in the residual network module from the source modality x, and G(x) is used as one input of the modality representative feature extraction module to extract representative feature information. The specific process is as follows:
the pseudo target modality G(x) is put into the first discrimination module D1 for the first loss calculation:
L_RESNET(G, D1) = E_y[log D1(y)] + E_x[log(1 - D1(G(x)))];
where y is the target modality and E denotes the expected value;
an L1 reconstruction loss helps the network capture the overall appearance and relatively coarse features of the target modality in the synthetic modality:
L_RESNET-L1(G) = E_{x,y}[||y - G(x)||_1];
the modality representative feature extraction module takes x and G(x) as input and outputs the synthetic target modality M(x, G(x)), which is put into the second discrimination module D2 for loss calculation:
L_MRFE(M, D2) = E_y[log D2(y)] + E_{x,G(x)}[log(1 - D2(M(x, G(x))))];
the L1 loss is used to calculate the difference between the synthetic target modality and the real target modality:
L_MRFE-L1(M) = E_{x,G(x),y}[||y - M(x, G(x))||_1];
the total loss function of the MRFE-GAN model is:
L_total = λ1·L_RESNET(G, D1) + λ2·L_RESNET-L1(G) + λ3·L_MRFE(M, D2) + λ4·L_MRFE-L1(M);
during training, the loss function is minimized while D1 and D2 are maximized adversarially, with λ1 = λ3 = 1 and λ2 = λ4 = 100; this enhances network performance and yields more realistic outputs.
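The total loss above can be sketched numerically as follows. This assumes sigmoid discriminator outputs in (0, 1); the function names and the small `eps` stabilizer are illustrative, not from the patent:

```python
import numpy as np

def l1_loss(pred, target):
    # Reconstruction loss: mean absolute difference (L1 norm).
    return np.abs(pred - target).mean()

def gan_loss(d_real, d_fake, eps=1e-8):
    # E_y[log D(y)] + E_x[log(1 - D(G(x)))] for discriminator outputs in (0, 1).
    return np.log(d_real + eps).mean() + np.log(1.0 - d_fake + eps).mean()

def total_loss(d1_real, d1_fake, pseudo, d2_real, d2_fake, synth, target,
               lam1=1.0, lam2=100.0, lam3=1.0, lam4=100.0):
    # L_total = l1*L_RESNET + l2*L_RESNET-L1 + l3*L_MRFE + l4*L_MRFE-L1,
    # with lambda1 = lambda3 = 1 and lambda2 = lambda4 = 100 as stated above.
    return (lam1 * gan_loss(d1_real, d1_fake)
            + lam2 * l1_loss(pseudo, target)
            + lam3 * gan_loss(d2_real, d2_fake)
            + lam4 * l1_loss(synth, target))
```

The heavy 100:1 weighting of the L1 terms reflects the usual conditional-GAN practice of letting the reconstruction loss dominate pixel accuracy while the adversarial terms sharpen realism.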
The advantages of the present method were verified through extensive experiments, performed mainly on the multimodal brain tumor segmentation (BRATS) dataset and the non-skull-stripped IXI dataset, in order to evaluate the proposed model and compare it with two recent methods. Specifically, the advantage of the representative encoder in the MRFE block was investigated first, since the main contribution of the proposed model is to introduce representative information from the target modality into the synthesis. The performance of the model in synthesizing T2 from the T1 modality and FLAIR from the T2 modality was then studied using the BRATS dataset. Finally, the synthetic target modality is shown on images from the IXI dataset in which no skull stripping has been performed.
To verify the advantage of using the representative encoder in the MRFE-GAN model, experiments compared the MRFE-GAN model with the representative encoder against the MRFE-GAN model without it (MRFE-GAN w/o RE). Quantitative comparison results on the BRATS dataset (T1 to T2) and the IXI dataset (PD to T2) are given in Table 1 and Table 2, with standard deviations in parentheses. As shown in Tables 1 and 2, compared with the structure without the representative encoder, incorporating the representative information increases the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) and decreases the normalized root-mean-square error (NRMSE) to some extent on both datasets. This means that the representative information provided by the target modality helps improve the performance of modality synthesis, and that extraction of representative information by the representative encoder is necessary to improve synthesis performance in the single-modality synthesis task.
TABLE 1
(Table 1: quantitative comparison on the BRATS dataset, T1 to T2; provided as an image in the original document and not reproduced in text)
TABLE 2
(Table 2: quantitative comparison on the IXI dataset, PD to T2; provided as an image in the original document and not reproduced in text)
The performance of the model in synthesizing T2 from T1 was verified with the proposed method on the BRATS dataset. Fig. 6 gives examples of target modalities synthesized by different methods; the proposed model (MRFE-GAN) yields higher visual quality in structure and detail (as indicated by the arrows) than the existing MILR and p-GAN models, and better preserves the detailed information in the synthesized T2 image. For quantitative comparison, the average metrics of the synthesized target modality obtained by the different methods are listed in Table 3; all three indices obtained by the method of the invention are significantly improved compared with the other two methods, particularly the PSNR value.
TABLE 3
(Table 3: quantitative comparison of different methods on the BRATS dataset; provided as an image in the original document and not reproduced in text)
The performance of the model in synthesizing T2 from PD was verified with the proposed method on the IXI dataset. A dark skull region surrounded by bright skin or fat regions causes intensity inhomogeneity in an MRI image, so it is difficult to synthesize images that have not been skull-stripped. In this experiment, T2 images were synthesized from PD images on the non-skull-stripped IXI dataset. As Fig. 8 shows, the model of the invention better preserves the detailed information in the synthesized T2 image, with minimal differences, compared with the two recent methods MILR and p-GAN. Table 5 shows the quantitative comparison of the different methods; all three indicators of the method of the invention are significantly improved compared with MILR and p-GAN.
TABLE 5
(Table 5: quantitative comparison of different methods on the IXI dataset; provided as an image in the original document and not reproduced in text)
The foregoing shows and describes the general principles, principal features, and advantages of the present invention. Those skilled in the art will understand that the invention is not limited to the embodiments described above, which are set forth in the specification only to illustrate its principle; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (9)

1. A cross-modality MRI synthesis method based on a morphological feature GAN, characterized by comprising the following steps:
establishing an MRFE-GAN model comprising a residual network module and a modality representative feature extraction module;
obtaining a pseudo-target modality from the source modality through the residual network module; and
extracting representative features of the pseudo-target modality through the modality representative feature extraction module, combining the representative features with the basic information of the source modality, and fusing them to generate the target modality.
2. The modality-feature-GAN-based cross-modality MRI synthesis method according to claim 1, wherein the residual network module takes the source modality as input, and a pseudo-target modality is constructed in the residual network module by establishing an intermediate modality that simulates the target modality.
3. The modality-feature-GAN-based cross-modality MRI synthesis method of claim 2, wherein the residual network module comprises 3 downsampling blocks, 12 residual blocks and 3 deconvolution layers run in sequence; the downsampling blocks increase the number of feature maps from 1 to 256, and each downsampling block comprises convolution, instance normalization and ReLU layers run in sequence; each residual block comprises a padding layer, a convolution layer, an instance normalization layer and a ReLU layer run in sequence; and each deconvolution layer runs deconvolution, instance normalization, a ReLU layer and an activation function in sequence.
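Under the layer counts of claim 3, the residual network module can be sketched in PyTorch as follows. Kernel sizes, strides, the channel progression and the final Tanh activation are assumptions not fixed by the claim; the class and function names are illustrative only.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReflectionPad2d(1),             # the "padding layer" of the claim
            nn.Conv2d(ch, ch, kernel_size=3),
            nn.InstanceNorm2d(ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return x + self.body(x)               # residual (skip) connection

def down_block(in_ch, out_ch, stride):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

def up_block(in_ch, out_ch):
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, 3, stride=2, padding=1, output_padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class ResNetGenerator(nn.Module):
    """Source modality in, pseudo-target modality out (claim 3 layer counts)."""
    def __init__(self):
        super().__init__()
        # 3 downsampling blocks grow the feature maps 1 -> 64 -> 128 -> 256
        self.encode = nn.Sequential(
            down_block(1, 64, 1), down_block(64, 128, 2), down_block(128, 256, 2))
        self.resblocks = nn.Sequential(*[ResidualBlock(256) for _ in range(12)])
        self.decode = nn.Sequential(
            up_block(256, 128), up_block(128, 64),
            nn.ConvTranspose2d(64, 1, 3, stride=1, padding=1),
            nn.Tanh())                        # final activation function (assumed)

    def forward(self, x):
        return self.decode(self.resblocks(self.encode(x)))

gen = ResNetGenerator()
pseudo_target = gen(torch.randn(1, 1, 128, 128))   # e.g. a T1 slice -> pseudo T2
```

The two stride-2 downsampling blocks and two stride-2 deconvolutions cancel, so the pseudo-target modality keeps the source's spatial size.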
4. The modality-feature-GAN-based cross-modality MRI synthesis method according to claim 3, wherein the modality representative feature extraction module in the MRFE-GAN model comprises a base encoder, a representative encoder and a decoder; inputting a source mode into the basic encoder, and extracting basic information from the source mode by the basic encoder; inputting a pseudo-target modality into the representative encoder, and extracting representative information from the pseudo-target modality by the representative encoder; the basic encoder and the representative encoder are connected to a decoder in parallel, and the decoder fuses the basic information and the representative information to generate a synthetic target modality.
5. The cross-modality MRI synthesis method based on morphological feature GAN as claimed in claim 4, wherein the modality representative feature extraction module receives a source modality x and a pseudo-target modality y', whose modality distributions are P(x) and P(y), respectively; each modality is decomposed into two different distributions in its own space, namely P(x1|x) and P(x2|x) for x, and P(y1|y) and P(y2|y) for y, where P(x1|x) and P(y1|y) are the basic distributions of the respective modality structures, and P(x2|x) and P(y2|y) are the representative distributions of the respective modality structures; the difference between the two modalities is captured by P(x2|x) and P(y2|y), which can be regarded as representative features of the modalities themselves;
obtaining the representative features of modality y' on the structure of modality x by fusing P(x1|x) and P(y2|y) in the modality representative feature extraction module; or obtaining the representative features of modality x on the structure of modality y by fusing P(x2|x) and P(y1|y) in the modality representative feature extraction module.
6. The modality-feature-GAN-based cross-modality MRI synthesis method according to claim 5, wherein the base encoder includes 3 downsampling blocks and 4 residual blocks which are run in sequence, the downsampling blocks include convolution, instance normalization and ReLU layers which are run in sequence, and the residual blocks include padding, convolution, instance normalization and ReLU layers which are run in sequence;
the representative encoder comprises 5 sets of convolution and ReLU layer combinations, a global average pooling layer, and 3 sets of linear layers; all instance normalization layers before the ReLU layers of the representative encoder are discarded, each two-dimensional feature channel is converted into a single real number by global average pooling, and the correlations among the channels are modeled in the three sets of linear layers to obtain the mean and standard deviation of the representative information;
and the decoder uses the standard deviation and mean obtained by the representative encoder as the scale and shift parameters of its adaptive instance normalization layer, thereby fusing the basic information and the representative information to generate the synthetic target modality.
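A hedged PyTorch sketch of the representative encoder of claim 6: five convolution+ReLU pairs without instance normalization, global average pooling, and three linear stages emitting the mean and standard deviation of the representative information. Channel widths, strides and the style dimension are assumptions not stated in the claim.

```python
import torch
import torch.nn as nn

class RepresentativeEncoder(nn.Module):
    def __init__(self, style_dim=64):
        super().__init__()
        chans = [1, 64, 128, 256, 256, 256]            # widths are assumptions
        layers = []
        for cin, cout in zip(chans, chans[1:]):        # 5 conv+ReLU combinations
            layers += [nn.Conv2d(cin, cout, 4, stride=2, padding=1),
                       nn.ReLU(inplace=True)]          # no InstanceNorm: it would
                                                       # erase the style statistics
        self.conv = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)            # one real number per channel
        self.fc = nn.Sequential(                       # 3 linear stages model the
            nn.Linear(256, 256), nn.ReLU(inplace=True),  # inter-channel correlations
            nn.Linear(256, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 2 * style_dim),             # -> [mean, std] of the code
        )

    def forward(self, y_pseudo):
        h = self.pool(self.conv(y_pseudo)).flatten(1)
        mean, std = self.fc(h).chunk(2, dim=1)
        return mean, std

enc = RepresentativeEncoder()
mean, std = enc(torch.randn(1, 1, 128, 128))           # pseudo-target modality G(x)
```

The returned mean and std are exactly the statistics that claim 6 says the decoder consumes as shift and scale parameters.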
7. The modality-feature-GAN-based cross-modality MRI synthesis method according to claim 6, wherein the step of fusing the basic information and the representative information by the decoder to generate the synthesis target modality comprises:
first, the representative information α input to the adaptive instance normalization layer is normalized:
α′=(α-Ξ(α))/Δ(α);
wherein α is the representative information input to the adaptive instance normalization layer, and Δ(α) and Ξ(α) are the standard deviation and mean of the representative information α, respectively;
then, the normalized α′ is multiplied by the scale parameter and the shift parameter is added, which completes the fusion of the two kinds of information, reconstructs the modality, and generates the synthetic target modality:
α″=Δ′*α′+Ξ′;
wherein the scale parameter Δ′ is the standard deviation of the representative information extracted by the representative encoder, and the shift parameter Ξ′ is the mean of the representative information.
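The adaptive-instance-normalization fusion of claim 7 can be sketched as follows: the base feature map is normalized per channel, then rescaled by the standard deviation (scale) and shifted by the mean (shift) from the representative encoder. This is a minimal illustration, not the patented implementation; the epsilon term is an assumption for numerical stability.

```python
import torch

def adain(content, style_std, style_mean, eps=1e-5):
    """content: (N, C, H, W) basic information; style_std/style_mean: (N, C)."""
    mean = content.mean(dim=(2, 3), keepdim=True)        # Xi(alpha)
    std = content.std(dim=(2, 3), keepdim=True) + eps    # Delta(alpha)
    normalized = (content - mean) / std                  # alpha'
    # alpha'' = Delta' * alpha' + Xi'
    return style_std[:, :, None, None] * normalized + style_mean[:, :, None, None]

base = torch.randn(2, 64, 32, 32)      # basic information from the source modality
s_std = torch.rand(2, 64) + 0.5        # scale parameter Delta' (representative std)
s_mean = torch.randn(2, 64)            # shift parameter Xi' (representative mean)
fused = adain(base, s_std, s_mean)
```

After fusion, each channel of the output carries the first- and second-order statistics of the representative information while keeping the spatial structure of the base features.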
8. The modality-feature-GAN-based cross-modality MRI synthesis method according to any one of claims 1 to 7, further comprising two discriminator modules: a real target modality and the pseudo-target modality are input into a first discriminator module for loss calculation; the synthetic target modality and the real target modality are input into a second discriminator module for loss calculation; the losses of the first and second discriminator modules are combined to form the total loss function of the MRFE-GAN model, which is minimized by optimization training, and the synthesized target modality is optimized by adjusting the parameters.
9. The cross-modality MRI synthesis method according to claim 8, wherein the residual network module generates a pseudo-target modality G(x) from the source modality x, and the pseudo-target modality G(x) is taken as the input of the modality representative feature extraction module to extract the representative feature information, comprising the following steps:
the pseudo-target modality G(x) is put into the first discriminator module D1 for the first loss calculation:
LRESNET(G,D1)=Ey[logD1(y)]+Ex[log(1-D1(G(x)))];
where y is the real target modality, and E denotes the expectation over the indicated inputs;
a reconstruction loss using the L1 norm helps the network capture the overall appearance and relatively coarse features of the target modality in the synthesized modality:
LRESNET-L1(G)=Ex,y[||y-G(x)||1];
the modality represents the feature extraction module, which takes x and G (x) as input and outputs the synthesis target modality M (x, G (x)), and the synthesis target modality M (x, G (x)) is put into the identification module D2 for loss calculation: l isMRFE(M,D2)=Ey[logD2(y)]+Ex,G(x)[log(1-D2(M(x,G(x))))];
The L1 penalty is used to calculate the difference between the synthetic target modality and the real target modality:
LMRFE-L1(M)=Ex,G(x),y[||y-M(x,G(x))||1];
the total loss function of the MRFE-GAN model is as follows:
Ltotal=λ1LRESNET(G,D1)+λ2LRESNET-L1(G)+λ3LMRFE(M,D2)+λ4LMRFE-L1(M);
during training, the total loss function is minimized while the discriminators D1 and D2 are maximized in an adversarial manner, with the weights set to λ1=λ3=1 and λ2=λ4=100.
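Putting the four terms of claim 9 together, the generator-side objective can be sketched as below. The function name and the interpretation of the discriminator outputs as probabilities are illustrative assumptions; the weights follow the claim (λ1=λ3=1, λ2=λ4=100).

```python
import torch
import torch.nn.functional as F

def total_loss(d1_fake, d2_fake, pseudo, synth, target,
               lam1=1.0, lam2=100.0, lam3=1.0, lam4=100.0):
    """d1_fake = D1(G(x)) and d2_fake = D2(M(x, G(x))): discriminator
    probabilities for the generated images; the generators minimize the
    log(1 - D(fake)) adversarial terms plus the weighted L1 terms."""
    adv1 = torch.log(1.0 - d1_fake + 1e-8).mean()  # generator part of L_RESNET(G, D1)
    adv2 = torch.log(1.0 - d2_fake + 1e-8).mean()  # generator part of L_MRFE(M, D2)
    rec1 = F.l1_loss(pseudo, target)               # L_RESNET-L1(G)
    rec2 = F.l1_loss(synth, target)                # L_MRFE-L1(M)
    return lam1 * adv1 + lam2 * rec1 + lam3 * adv2 + lam4 * rec2

target = torch.rand(1, 1, 64, 64)   # real target modality y
pseudo = torch.rand(1, 1, 64, 64)   # pseudo-target modality G(x)
synth = torch.rand(1, 1, 64, 64)    # synthetic target modality M(x, G(x))
loss = total_loss(torch.rand(1), torch.rand(1), pseudo, synth, target)
```

Because λ2 and λ4 are two orders of magnitude larger than λ1 and λ3, the L1 reconstruction terms dominate early training, with the adversarial terms sharpening the result.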
CN201911113248.8A 2019-11-14 2019-11-14 Cross-modality MRI (magnetic resonance imaging) synthesis method based on morphological characteristics GAN (gamma GAN) Active CN110827232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911113248.8A CN110827232B (en) 2019-11-14 2019-11-14 Cross-modality MRI (magnetic resonance imaging) synthesis method based on morphological characteristics GAN (gamma GAN)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911113248.8A CN110827232B (en) 2019-11-14 2019-11-14 Cross-modality MRI (magnetic resonance imaging) synthesis method based on morphological characteristics GAN (gamma GAN)

Publications (2)

Publication Number Publication Date
CN110827232A true CN110827232A (en) 2020-02-21
CN110827232B CN110827232B (en) 2022-07-15

Family

ID=69555297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911113248.8A Active CN110827232B (en) 2019-11-14 2019-11-14 Cross-modality MRI (magnetic resonance imaging) synthesis method based on morphological characteristics GAN (gamma GAN)

Country Status (1)

Country Link
CN (1) CN110827232B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111524147A (en) * 2020-04-14 2020-08-11 杭州健培科技有限公司 Brain tumor segmentation method based on generative adversarial network
CN111862261A (en) * 2020-08-03 2020-10-30 北京航空航天大学 FLAIR modal magnetic resonance image generation method and system
CN113012086A (en) * 2021-03-22 2021-06-22 上海应用技术大学 Cross-modal image synthesis method
EP3872754A1 (en) * 2020-02-28 2021-09-01 Siemens Healthcare GmbH Method and system for automated processing of images when using a contrast agent in mri

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002076489A1 (en) * 2001-03-09 2002-10-03 Dyax Corp. Serum albumin binding moieties
US20030083652A1 (en) * 2001-10-31 2003-05-01 Oratec Interventions, Inc Method for treating tissue in arthroscopic environment
CN109213876A (en) * 2018-08-02 2019-01-15 宁夏大学 Cross-modal retrieval method based on generative adversarial network
CN110047056A (en) * 2018-01-16 2019-07-23 西门子保健有限责任公司 With the cross-domain image analysis and synthesis of depth image to image network and confrontation network
CN110210422A (en) * 2019-06-05 2019-09-06 哈尔滨工业大学 Ship ISAR image recognition method aided by optical imaging

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002076489A1 (en) * 2001-03-09 2002-10-03 Dyax Corp. Serum albumin binding moieties
US20030083652A1 (en) * 2001-10-31 2003-05-01 Oratec Interventions, Inc Method for treating tissue in arthroscopic environment
CN110047056A (en) * 2018-01-16 2019-07-23 西门子保健有限责任公司 With the cross-domain image analysis and synthesis of depth image to image network and confrontation network
CN109213876A (en) * 2018-08-02 2019-01-15 宁夏大学 Cross-modal retrieval method based on generative adversarial network
CN110210422A (en) * 2019-06-05 2019-09-06 哈尔滨工业大学 Ship ISAR image recognition method aided by optical imaging

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
EVELIEN DE SCHEPPER等: "Prevalence of spinal pathology in patients presenting for lumbar MRI as referred from general practice", 《FAMILY PRACTICE》 *
YAN WANG 等: "Statistical iterative reconstruction using adaptive fractional order regularization", 《BIOMEDICAL OPTICS EXPRESS》 *
YU Yang et al.: "Preliminary study on MRI and pathological features of breast cancers of different molecular subtypes", Chinese Journal of Radiology *
LENG Heying et al.: "Similarity-based dual-search multi-target recognition algorithm", Infrared and Laser Engineering *
PAN Peike et al.: "Fully automatic segmentation of nasopharyngeal tumor MR images based on the U-net model", Journal of Computer Applications *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3872754A1 (en) * 2020-02-28 2021-09-01 Siemens Healthcare GmbH Method and system for automated processing of images when using a contrast agent in mri
CN111524147A (en) * 2020-04-14 2020-08-11 杭州健培科技有限公司 Brain tumor segmentation method based on generative adversarial network
CN111524147B (en) * 2020-04-14 2022-07-12 杭州健培科技有限公司 Brain tumor segmentation method based on generative adversarial network
CN111862261A (en) * 2020-08-03 2020-10-30 北京航空航天大学 FLAIR modal magnetic resonance image generation method and system
CN111862261B (en) * 2020-08-03 2022-03-29 北京航空航天大学 FLAIR modal magnetic resonance image generation method and system
CN113012086A (en) * 2021-03-22 2021-06-22 上海应用技术大学 Cross-modal image synthesis method
CN113012086B (en) * 2021-03-22 2024-04-16 上海应用技术大学 Cross-modal image synthesis method

Also Published As

Publication number Publication date
CN110827232B (en) 2022-07-15

Similar Documents

Publication Publication Date Title
CN110827232B (en) Cross-modality MRI (magnetic resonance imaging) synthesis method based on morphological characteristics GAN (gamma GAN)
Liu et al. Multimodal MR image synthesis using gradient prior and adversarial learning
Zhan et al. Multi-modal MRI image synthesis via GAN with multi-scale gate mergence
CN110288609B (en) Multi-modal whole-heart image segmentation method guided by attention mechanism
CN114266939B (en) Brain extraction method based on ResTLU-Net model
CN112634265B (en) Method and system for constructing and segmenting fully-automatic pancreas segmentation model based on DNN (deep neural network)
CN113674330A (en) Pseudo CT image generation system based on generation countermeasure network
CN114170244A (en) Brain glioma segmentation method based on cascade neural network structure
CN116188452A (en) Medical image interlayer interpolation and three-dimensional reconstruction method
CN117437420A (en) Cross-modal medical image segmentation method and system
CN112785540B (en) Diffusion weighted image generation system and method
CN109741439B (en) Three-dimensional reconstruction method of two-dimensional MRI fetal image
CN116563402A (en) Cross-modal MRI-CT image synthesis method, system, equipment and medium
CN115861464A (en) Pseudo CT (computed tomography) synthesis method based on multimode MRI (magnetic resonance imaging) synchronous generation
CN115908451A (en) Heart CT image segmentation method combining multi-view geometry and transfer learning
CN115496732A (en) Semi-supervised heart semantic segmentation algorithm
CN114332271A (en) Dynamic parameter image synthesis method and system based on static PET image
Hu Multi-texture GAN: exploring the multi-scale texture translation for brain MR images
Larroza et al. Deep learning for MRI-based CT synthesis: A comparison of MRI sequences and neural network architectures
Szűcs et al. Self-supervised segmentation of myocardial perfusion imaging SPECT left ventricles
CN117974832B (en) Multi-modal liver medical image expansion algorithm based on generation countermeasure network
CN113538451B (en) Method and device for segmenting magnetic resonance image of deep vein thrombosis, electronic equipment and storage medium
CN117649422B (en) Training method of multi-modal image segmentation model and multi-modal image segmentation method
CN117315065B (en) Nuclear magnetic resonance imaging accurate acceleration reconstruction method and system
Wang et al. Spatial Attention-based Implicit Neural Representation for Arbitrary Reduction of MRI Slice Spacing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant