CN112489154A - MRI motion artifact correction method based on a locally optimized generative adversarial network - Google Patents

MRI motion artifact correction method based on a locally optimized generative adversarial network

Info

Publication number
CN112489154A
CN112489154A (application CN202011414346.8A; granted publication CN112489154B)
Authority
CN
China
Prior art keywords
image
mri
discriminator
motion artifact
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011414346.8A
Other languages
Chinese (zh)
Other versions
CN112489154B (en)
Inventor
曾宪华 (Zeng Xianhua)
纪聪辉 (Ji Conghui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Dayu Chuangfu Technology Co., Ltd.
Original Assignee
Chongqing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Posts and Telecommunications
Priority to CN202011414346.8A
Publication of CN112489154A
Application granted
Publication of CN112489154B
Legal status: Active


Classifications

    • G06T 11/008 — Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06N 3/045 — Combinations of networks
    • G06N 3/08 — Learning methods
    • G06T 5/73 — Deblurring; sharpening
    • G06T 2207/10088 — Magnetic resonance imaging [MRI]
    • G06T 2207/20056 — Discrete and fast Fourier transform [DFT, FFT]
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30004 — Biomedical image processing
    • G06T 2210/41 — Medical


Abstract

The application discloses an MRI motion artifact correction method based on a locally optimized generative adversarial network, in which skip-layer connections are added between the up-sampling module and the down-sampling module, so that the resulting correction model better preserves image features while removing motion artifacts.

Description

MRI motion artifact correction method based on a locally optimized generative adversarial network
Technical Field
The invention relates to the technical field of medical image processing, and in particular to an MRI motion artifact correction method based on a locally optimized generative adversarial network.
Background
Medical imaging is widely used in modern medicine. Because magnetic resonance imaging (MRI) exposes the patient to no ionizing radiation and examines tumors well, it is widely applied in clinical examination. However, MRI acquisition takes a long time (a whole-body examination usually takes about half an hour), so compared with other types of medical images MRI is more susceptible to body movement. Because the acquisition process is extremely sensitive to patient motion, motion artifacts often appear in the final image. Motion artifacts produced in clinical practice can adversely affect the physician's diagnosis, for example through loss or blurring of pathological information, which increases the risk of misdiagnosis; it is therefore desirable to avoid motion artifacts in the clinical application of MRI. Although a clean image can be acquired by re-scanning, the time and economic costs incurred by repeated MR acquisition are very high.
Disclosure of Invention
To solve this technical problem, the invention provides an MRI motion artifact correction method based on a locally optimized generative adversarial network.
The technical scheme adopted by the invention is as follows: an MRI motion artifact correction method based on a locally optimized generative adversarial network, comprising:
S1: obtaining a plurality of original sample images I_O; converting each original sample image I_O into K-space data by fast Fourier transform, applying a random phase shift to the K-space data, and applying an inverse fast Fourier transform to the modified K-space data to obtain an image I_MA with motion artifacts;
S2: constructing a generative adversarial network model comprising a generator and a discriminator, the generator comprising a down-sampling module, a residual module, and an up-sampling module connected to the down-sampling module by skip-layer connections;
S3: inputting the image I_MA into the generative adversarial network model; the down-sampling module extracts image features of I_MA, the up-sampling module fuses the corresponding-level image features output by the down-sampling module with the image features output by the residual module, and the model outputs a corrected image I_F with the motion artifacts removed;
S4: using the original sample images I_O and the images I_MA with motion artifacts as training data, training the discriminator of the generative adversarial network model t times and then training the generator k times, alternating this iterative optimization until the total number of training rounds is reached, to obtain the target generative adversarial network model; in each training pass the discriminator is optimized with an adversarial loss function constructed in advance between image I_F and image I_O based on the Wasserstein distance, together with a gradient penalty loss function constructed in advance between I_F and I_O, and the generator is optimized with a content loss function constructed in advance between I_F and I_O and a local optimization loss function constructed in advance between I_F and I_O; k > t, and t and k are integers greater than or equal to 1;
S5: acquiring an original MRI image from which motion artifacts are to be removed;
S6: inputting the original MRI image into the target generative adversarial network model to obtain the corrected MRI target image.
Further, in step S1 the image I_MA with motion artifacts is generated by

    I_MA = IFFT( FFT(I_O) · e^(-i·2π·(k_x·m(k_y) + k_y·n(k_y))) )

where FFT denotes the fast Fourier transform, IFFT denotes the inverse fast Fourier transform, k_x and k_y respectively denote the coordinates of image I_O in K-space in the frequency-encoding and phase-encoding directions, and m(k_y) and n(k_y) respectively denote the phase-shift (displacement) functions of image I_O in the k_x and k_y directions.
Further, in step S3 the up-sampling module generates and outputs the corrected image I_F with motion artifacts removed via

    upsample_out(q) = σ( concat( upsample_out(q-1), downsample_out(s-q+1) ) )

where upsample_out(1) = σ( concat( Res, downsample_out(s) ) ); there are s down-sampling modules and s up-sampling modules; upsample_out(q) denotes the output of the q-th up-sampling module; downsample_out(s-q+1) denotes the output of the (s-q+1)-th down-sampling module; concat denotes concatenation along the last dimension of the feature maps; σ denotes an activation function; the residual module is connected between the s-th down-sampling module and the 1st up-sampling module; and Res denotes the output of the residual module.
Further, the adversarial loss function constructed in advance in step S4 is:

    L_adv = E_{I_F ~ p(I_F)}[ D(I_F) ] - E_{I_O ~ p(I_O)}[ D(I_O) ]

where G denotes the generator that performs the motion artifact removal task, D denotes the discriminator, p(I_F) and p(I_O) respectively denote the image distributions of the corrected images I_F and the original sample images I_O, D(I_F) denotes the discriminator output when its input is image I_F, D(I_O) denotes the discriminator output when its input is image I_O, and L_adv estimates the Wasserstein distance between the images I_F generated by the generator from I_MA and the label images I_O.
Further, the gradient penalty loss function constructed in advance in step S4 is:

    L_GP = λ · E_{Î ~ p(Î)}[ ( ||∇_Î D(Î)||_2 - 1 )² ],  with  Î = ε·I_O + (1-ε)·I_F

where ε denotes a random number sampled for interpolation, Î denotes the intermediate sample used in computing the gradient penalty, λ denotes a hyperparameter weighting the losses against one another, and ||∇_Î D(Î)||_2 denotes the two-norm of the discriminator gradient when the discriminator input is Î.
Further, the total loss function of the discriminator in each training pass is

    L_D = L_adv + L_GP
Further, the content loss function constructed in advance in step S4 is:

    L_Content = L_MSE + α·L_Perceptual

with

    L_MSE = (1/N) · Σ_{n=1..N} || I_O^(n) - G(I_MA^(n)) ||²
    L_Perceptual = (1/N) · Σ_{n=1..N} (1/(W_j·H_j)) · || φ_j(I_O^(n)) - φ_j(G(I_MA^(n))) ||²

where L_MSE denotes the pixel mean-square-error loss, L_Perceptual denotes the perceptual loss, α denotes a hyperparameter controlling the weight of the perceptual loss, N denotes the total number of sample images, I_O^(n) denotes the n-th original sample image, I_MA^(n) denotes the image with motion artifacts corresponding to the n-th original sample image, G(I_MA^(n)) denotes the generator output when its input is image I_MA^(n), j denotes the index of the pooling layer in the pre-trained VGG network, φ_j denotes the output, for a given input image, of all network layers before the j-th pooling layer of the VGG network, and W_j and H_j denote the width and height of the feature map output by φ_j.
Further, the local optimization loss function L_Local constructed in advance in step S4 convolves a predefined identity matrix K over the corrected image I_F and the label image I_O and accumulates the resulting per-region differences, so that a loss is computed for every local region, where K denotes the predefined identity matrix and ⊗ denotes a convolution operation.
Further, the total loss function of the generator in each training pass is

    L_G = L_Content + β·L_Local - γ·E_{I_F ~ p(I_F)}[ D(I_F) ]

where β denotes a hyperparameter controlling the weight of the local optimization loss, γ denotes a hyperparameter controlling the weight of the adversarial loss, and D(I_F) denotes the discriminator output when its input is I_F.
Further, t is 1 and k is 2.
According to the MRI motion artifact correction method based on a locally optimized generative adversarial network, skip-layer connections are added between the up-sampling and down-sampling modules, so that the constructed model better preserves image features while removing motion artifacts. Based on the local optimization loss, a local loss is computed for each group of images, ensuring that the output image not only minimizes the global loss but is also optimal within each local region; this preserves the local consistency of the output image without requiring any additional components.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
fig. 1 is a flowchart of the MRI motion artifact correction method based on a locally optimized generative adversarial network provided by this embodiment;
fig. 2 is a block diagram of the MRI motion artifact correction method based on a locally optimized generative adversarial network provided by this embodiment;
fig. 3 is a graph of experimental data provided by this embodiment.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments, it being understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
This embodiment provides an MRI motion artifact correction method based on a locally optimized generative adversarial network; its flowchart is shown in fig. 1 and its block diagram in fig. 2. The method comprises the following steps:
S1: obtaining a plurality of original sample images I_O; converting each original sample image I_O into K-space data by fast Fourier transform, applying a random phase shift to the K-space data, and applying an inverse fast Fourier transform to the modified K-space data to obtain an image I_MA with motion artifacts.
Specifically, step S1 may generate the image I_MA with motion artifacts by

    I_MA = IFFT( FFT(I_O) · e^(-i·2π·(k_x·m(k_y) + k_y·n(k_y))) )

where FFT denotes the fast Fourier transform, IFFT denotes the inverse fast Fourier transform, k_x and k_y respectively denote the coordinates of image I_O in K-space in the frequency-encoding and phase-encoding directions, and m(k_y) and n(k_y) respectively denote the displacement functions of image I_O in the k_x and k_y directions.
From the Fourier transform, the K-space data of a picture with motion artifacts can be expressed as:

    s'(k_x, k_y) = ∫∫ f(x - m(k_y), y - n(k_y)) · e^(-i·2π·(k_x·x + k_y·y)) dx dy

Let x' = x - m(k_y) and y' = y - n(k_y); then

    s'(k_x, k_y) = ∫∫ f(x', y') · e^(-i·2π·(k_x·(x' + m(k_y)) + k_y·(y' + n(k_y)))) dx' dy'

and finally:

    s'(k_x, k_y) = e^(-i·2π·(k_x·m(k_y) + k_y·n(k_y))) · s(k_x, k_y)

where 2π·(k_x·m(k_y) + k_y·n(k_y)) is the phase offset added to introduce motion artifacts, s is the K-space data of the picture without motion artifacts, and s' is the K-space data after the motion artifacts are added.
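For illustration, the following is a minimal NumPy sketch of the step-S1 simulation: a random subset of phase-encoding lines receives the phase ramp derived above. The function name, the per-line uniform displacement model, and all parameter values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def add_motion_artifact(image, max_shift=4.0, corrupt_fraction=0.3, seed=None):
    """Simulate MRI motion artifacts by random phase shifts in K-space (step S1)."""
    rng = np.random.default_rng(seed)
    H, W = image.shape
    k = np.fft.fftshift(np.fft.fft2(image))           # K-space of the clean image I_O

    # Normalized coordinates: k_x (frequency encoding), k_y (phase encoding)
    kx = np.fft.fftshift(np.fft.fftfreq(W))[None, :]  # shape (1, W)
    ky = np.fft.fftshift(np.fft.fftfreq(H))[:, None]  # shape (H, 1)

    # Random displacement functions m(k_y), n(k_y): one shift per phase-encoding line
    moved = rng.random((H, 1)) < corrupt_fraction
    m = np.where(moved, rng.uniform(-max_shift, max_shift, (H, 1)), 0.0)
    n = np.where(moved, rng.uniform(-max_shift, max_shift, (H, 1)), 0.0)

    # s'(k_x, k_y) = s(k_x, k_y) * exp(-i*2*pi*(k_x*m(k_y) + k_y*n(k_y)))
    s_prime = k * np.exp(-1j * 2 * np.pi * (kx * m + ky * n))

    # Inverse FFT back to image space yields I_MA
    return np.abs(np.fft.ifft2(np.fft.ifftshift(s_prime)))
```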
S2: constructing a generative adversarial network model comprising a generator and a discriminator, the generator comprising a down-sampling module, a residual module, and an up-sampling module connected to the down-sampling module by skip-layer connections.
S3: inputting the image I_MA into the generative adversarial network model; the down-sampling module extracts image features of I_MA, the up-sampling module fuses the corresponding-level image features output by the down-sampling module with the image features output by the residual module, and the model outputs a corrected image I_F with the motion artifacts removed.
In this embodiment the output of each up-sampling module fuses the output of the down-sampling module at the corresponding level. Specifically, in step S3 the up-sampling module may generate and output the corrected image I_F with motion artifacts removed by the following formula:

    upsample_out(q) = σ( concat( upsample_out(q-1), downsample_out(s-q+1) ) )

where upsample_out(1) = σ( concat( Res, downsample_out(s) ) ); there are s down-sampling modules and s up-sampling modules; upsample_out(q) denotes the output of the q-th up-sampling module; downsample_out(s-q+1) denotes the output of the (s-q+1)-th down-sampling module; concat denotes concatenation along the last dimension of the feature maps; σ denotes an activation function; the residual module, which extracts the deep-level features of the image, is connected between the s-th down-sampling module and the 1st up-sampling module; and Res denotes the output of the residual module.
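A compact PyTorch sketch of such a generator follows. The channel widths, kernel sizes, single-channel input, and module counts are assumptions for illustration; only the skip-connection pattern, concatenating upsample_out(q-1) with downsample_out(s-q+1), is taken from the formula above. The patent concatenates along the last dimension of the feature map, which in PyTorch's NCHW layout corresponds to the channel axis used here.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """s downsampling modules, a residual module, and s upsampling modules
    joined by skip-layer connections (steps S2/S3)."""
    def __init__(self, s=3, base=64, n_res=4):
        super().__init__()
        self.s = s
        chs = [base * 2 ** i for i in range(s + 1)]   # e.g. 64, 128, 256, 512
        self.inc = nn.Conv2d(1, chs[0], 3, padding=1)
        self.down = nn.ModuleList(
            nn.Sequential(nn.Conv2d(chs[i], chs[i + 1], 4, stride=2, padding=1),
                          nn.ReLU(inplace=True))
            for i in range(s))
        self.res = nn.Sequential(*[ResBlock(chs[s]) for _ in range(n_res)])
        # Upsampling module q consumes concat(upsample_out(q-1), downsample_out(s-q+1)),
        # hence the doubled input channel count.
        self.up = nn.ModuleList(
            nn.Sequential(nn.ConvTranspose2d(2 * chs[s - i], chs[s - i - 1], 4,
                                             stride=2, padding=1),
                          nn.ReLU(inplace=True))
            for i in range(s))
        self.outc = nn.Conv2d(chs[0], 1, 3, padding=1)

    def forward(self, x):
        h = self.inc(x)
        downs = []
        for d in self.down:                  # downsample_out(1) ... downsample_out(s)
            h = d(h)
            downs.append(h)
        h = self.res(h)                      # Res: deep-level feature extraction
        for q, u in enumerate(self.up, start=1):
            skip = downs[self.s - q]         # downsample_out(s - q + 1)
            h = u(torch.cat([h, skip], dim=1))   # concat along the channel axis
        return self.outc(h)                  # corrected image I_F
```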
S4: using the original sample images I_O and the images I_MA with motion artifacts as training data, training the discriminator of the generative adversarial network model t times and then training the generator k times, alternating this iterative optimization until the total number of training rounds is reached, to obtain the target generative adversarial network model. In each training pass the discriminator is optimized with an adversarial loss function constructed in advance between image I_F and image I_O based on the Wasserstein (earth mover's) distance, together with a gradient penalty loss function constructed in advance between I_F and I_O; the generator is optimized with a content loss function constructed in advance between I_F and I_O and a local optimization loss function constructed in advance between I_F and I_O; k > t, and t and k are integers greater than or equal to 1.
It should be noted that, before the generative adversarial network model is trained, an adversarial loss function is constructed between image I_F and image I_O to make the output image more realistic and to strengthen generalization; a gradient penalty loss function is constructed between I_F and I_O to constrain the optimization of the discriminator within the sampling space and avoid gradient vanishing and gradient explosion; a content loss function is constructed between I_F and I_O so that the output image is sharper and its texture more realistic; and a local optimization loss function is constructed between I_F and I_O to guarantee the local consistency of image I_F.
The adversarial loss function constructed in advance in step S4 is:

    L_adv = E_{I_F ~ p(I_F)}[ D(I_F) ] - E_{I_O ~ p(I_O)}[ D(I_O) ]

where G denotes the generator that performs the motion artifact removal task, D denotes the discriminator, p(I_F) and p(I_O) respectively denote the image distributions of the corrected images I_F and the original sample images I_O, and L_adv estimates the Wasserstein distance between the images I_F generated by the generator from I_MA and the label images I_O.
The gradient penalty loss function constructed in advance in step S4 is:

    L_GP = λ · E_{Î ~ p(Î)}[ ( ||∇_Î D(Î)||_2 - 1 )² ],  with  Î = ε·I_O + (1-ε)·I_F

where ε denotes a random number sampled for interpolation, Î denotes the intermediate sample used in computing the gradient penalty, λ denotes a hyperparameter weighting the losses against one another, and ||∇_Î D(Î)||_2 denotes the two-norm of the discriminator gradient when the discriminator input is Î.
The total loss function of the discriminator in each training pass is

    L_D = L_adv + L_GP

and each parameter of the discriminator is optimized with this function as the discriminator's objective.
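As a sketch, the discriminator objective L_D = L_adv + L_GP can be computed as below in PyTorch, following the standard WGAN gradient-penalty formulation that the text describes; the function name and the λ = 10 default are assumptions.

```python
import torch

def discriminator_loss(D, I_O, I_F, lam=10.0):
    """L_D = L_adv + L_GP for one batch of real images I_O and corrected images I_F."""
    # L_adv = E[D(I_F)] - E[D(I_O)]  (estimate of the negative Wasserstein distance)
    l_adv = D(I_F.detach()).mean() - D(I_O).mean()

    # Interpolated sample I_hat = eps * I_O + (1 - eps) * I_F, eps ~ U(0, 1)
    eps = torch.rand(I_O.size(0), 1, 1, 1, device=I_O.device)
    I_hat = (eps * I_O + (1 - eps) * I_F.detach()).requires_grad_(True)

    # Two-norm of the discriminator gradient at I_hat, penalized toward 1
    grads = torch.autograd.grad(outputs=D(I_hat).sum(), inputs=I_hat,
                                create_graph=True)[0]
    l_gp = lam * ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

    return l_adv + l_gp
```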
The content loss function constructed in advance in step S4 is:

    L_Content = L_MSE + α·L_Perceptual

with

    L_MSE = (1/N) · Σ_{n=1..N} || I_O^(n) - G(I_MA^(n)) ||²
    L_Perceptual = (1/N) · Σ_{n=1..N} (1/(W_j·H_j)) · || φ_j(I_O^(n)) - φ_j(G(I_MA^(n))) ||²

where L_MSE denotes the pixel mean-square-error loss, L_Perceptual denotes the perceptual loss, α denotes a hyperparameter controlling the weight of the perceptual loss, N denotes the total number of sample images, I_O^(n) denotes the n-th original sample image, I_MA^(n) denotes the image with motion artifacts corresponding to the n-th original sample image, G(I_MA^(n)) denotes the generator output when its input is image I_MA^(n), j denotes the index of the pooling layer in the pre-trained VGG network, φ_j denotes the output, for a given input image, of all network layers before the j-th pooling layer of the VGG network, and W_j and H_j denote the width and height of the feature map output by φ_j.
The pre-trained network in this embodiment is a network for extracting image features, obtained by training on a public MRI dataset; preferably, the VGG network in this embodiment may be a VGG-19 network.
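A sketch of L_Content in PyTorch follows. It substitutes torchvision's ImageNet-pretrained VGG-19 for the patent's MRI-trained feature extractor, and the α and j defaults are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class ContentLoss(nn.Module):
    """L_Content = L_MSE + alpha * L_Perceptual (step S4)."""
    def __init__(self, alpha=0.1, j=3):
        super().__init__()
        # Indices of the pooling layers in torchvision's vgg19().features;
        # phi_j is everything before the j-th pooling layer.
        pool_idx = [4, 9, 18, 27, 36]
        self.phi = vgg19(weights="IMAGENET1K_V1").features[:pool_idx[j - 1]].eval()
        for p in self.phi.parameters():
            p.requires_grad_(False)
        self.alpha, self.mse = alpha, nn.MSELoss()

    def forward(self, I_F, I_O):
        l_mse = self.mse(I_F, I_O)               # pixel mean-square-error loss
        # Grayscale MRI -> 3 channels expected by VGG (input normalization
        # is omitted here for brevity)
        f, o = I_F.repeat(1, 3, 1, 1), I_O.repeat(1, 3, 1, 1)
        # The 1/(W_j*H_j) factor is implicit in the mean over the feature map
        l_perc = self.mse(self.phi(f), self.phi(o))
        return l_mse + self.alpha * l_perc
```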
The local optimization loss function L_Local constructed in advance in step S4 convolves a predefined identity matrix K over the corrected image I_F and the label image I_O and accumulates the resulting per-region differences, so that a loss is computed for every local region, where K denotes the predefined identity matrix and ⊗ denotes a convolution operation.
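Because the published formula for L_Local is only available as an image in the original document, the following sketch encodes one plausible reading consistent with the definitions above: the squared difference between I_F and I_O is convolved with a predefined kernel K so that a loss is produced for every local region, then averaged. The all-ones kernel and the window size are assumptions, not taken from the patent text.

```python
import torch
import torch.nn.functional as F

def local_optimization_loss(I_F, I_O, k=8):
    """One plausible form of L_Local: per-region squared error via convolution."""
    # Predefined kernel K, here an averaging window (an assumption)
    K = torch.ones(1, 1, k, k, device=I_F.device) / (k * k)
    sq_diff = (I_F - I_O) ** 2
    patch_loss = F.conv2d(sq_diff, K)   # the (x)-convolution over local regions
    return patch_loss.mean()
```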
Further, the total loss function of the generator in each training pass is

    L_G = L_Content + β·L_Local - γ·E_{I_F ~ p(I_F)}[ D(I_F) ]

and each parameter of the generator is optimized with this function as the generator's objective, where β denotes a hyperparameter controlling the weight of the local optimization loss, γ denotes a hyperparameter controlling the weight of the adversarial loss, and D(I_F) denotes the discriminator output when its input is I_F.
S5: acquiring an original MRI image from which motion artifacts are to be removed.
S6: inputting the original MRI image into the target generative adversarial network model to obtain the corrected MRI target image.
In the natural image domain, when training a generative adversarial network model the discriminator is typically trained 5 times and then the generator 1 time, with a natural image dataset as the training data. In this embodiment, however, because the semantic information of medical images is relatively simple, the discriminator converges faster than the generator, which must perform the motion artifact removal task; hence k > t in this embodiment. After many experiments, preferably t = 1 and k = 2. Fig. 3 shows images from an experiment: the leftmost is the simulated motion-artifact MRI, the middle is the original label MRI, and the rightmost is the MRI restored and corrected by the target generative adversarial network model.
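The following sketch wires the loss sketches above into the step-S4 alternating schedule, with t = 1 discriminator update followed by k = 2 generator updates per group of batches; the optimizer choice, learning rate, and the β and γ defaults are illustrative assumptions.

```python
import itertools
import torch

def train(G, D, loader, content_loss, epochs=100, t=1, k=2,
          beta=1.0, gamma=0.01, lr=1e-4):
    """Step S4: alternately train the discriminator t times, then the
    generator k times, until the total number of training rounds is reached."""
    opt_G = torch.optim.Adam(G.parameters(), lr=lr)
    opt_D = torch.optim.Adam(D.parameters(), lr=lr)

    for _ in range(epochs):
        batches = iter(loader)  # loader yields (I_MA, I_O) pairs
        # Consume batches in groups of t + k: t for the discriminator, k for the generator
        for group in iter(lambda: list(itertools.islice(batches, t + k)), []):
            for I_MA, I_O in group[:t]:               # t discriminator updates
                loss_D = discriminator_loss(D, I_O, G(I_MA))
                opt_D.zero_grad(); loss_D.backward(); opt_D.step()
            for I_MA, I_O in group[t:]:               # k generator updates (k > t)
                I_F = G(I_MA)
                loss_G = (content_loss(I_F, I_O)                      # L_Content
                          + beta * local_optimization_loss(I_F, I_O)  # beta * L_Local
                          - gamma * D(I_F).mean())                    # adversarial term
                opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```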
The MRI motion artifact correction method based on a generative adversarial network provided by this embodiment removes motion artifacts from MRI, and compared with other methods its restored results are more realistic. Specifically, motion artifacts are simulated by a random phase-shift method; the model is constrained and optimized by combining the adversarial loss and the content loss, so that its output images carry realistic texture and structure information; and the local optimization loss is introduced to guarantee the local consistency of the MRI, which is achieved without adding extra components.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An MRI motion artifact correction method based on a locally optimized generative adversarial network, characterized by comprising:
S1: obtaining a plurality of original sample images I_O; converting each original sample image I_O into K-space data by fast Fourier transform, applying a random phase shift to the K-space data, and applying an inverse fast Fourier transform to the modified K-space data to obtain an image I_MA with motion artifacts;
S2: constructing a generative adversarial network model comprising a generator and a discriminator, the generator comprising a down-sampling module, a residual module, and an up-sampling module connected to the down-sampling module by skip-layer connections;
S3: inputting the image I_MA into the generative adversarial network model; the down-sampling module extracts image features of I_MA, the up-sampling module fuses the corresponding-level image features output by the down-sampling module with the image features output by the residual module, and the model outputs a corrected image I_F with the motion artifacts removed;
S4: using the original sample images I_O and the images I_MA with motion artifacts as training data, training the discriminator of the generative adversarial network model t times and then training the generator k times, alternating this iterative optimization until the total number of training rounds is reached, to obtain a target generative adversarial network model; in each training pass the discriminator is optimized with an adversarial loss function constructed in advance between image I_F and image I_O based on the Wasserstein distance, together with a gradient penalty loss function constructed in advance between I_F and I_O, and the generator is optimized with a content loss function constructed in advance between I_F and I_O and a local optimization loss function constructed in advance between I_F and I_O; k > t, and t and k are integers greater than or equal to 1;
S5: acquiring an original MRI image from which motion artifacts are to be removed;
S6: inputting the original MRI image into the target generative adversarial network model to obtain a corrected MRI target image.
2. The MRI motion artifact correction method based on a locally optimized generative adversarial network according to claim 1, characterized in that in step S1 the image I_MA with motion artifacts is generated by

    I_MA = IFFT( FFT(I_O) · e^(-i·2π·(k_x·m(k_y) + k_y·n(k_y))) )

where FFT denotes the fast Fourier transform, IFFT denotes the inverse fast Fourier transform, k_x and k_y respectively denote the coordinates of image I_O in K-space in the frequency-encoding and phase-encoding directions, and m(k_y) and n(k_y) respectively denote the phase-shift (displacement) functions of image I_O in the k_x and k_y directions.
3. The MRI motion artifact correction method based on a locally optimized generative adversarial network according to claim 1, characterized in that in step S3 the up-sampling module generates and outputs the corrected image I_F with motion artifacts removed via

    upsample_out(q) = σ( concat( upsample_out(q-1), downsample_out(s-q+1) ) )

where upsample_out(1) = σ( concat( Res, downsample_out(s) ) ); there are s down-sampling modules and s up-sampling modules; upsample_out(q) denotes the output of the q-th up-sampling module; downsample_out(s-q+1) denotes the output of the (s-q+1)-th down-sampling module; concat denotes concatenation along the last dimension of the feature maps; σ denotes an activation function; the residual module is connected between the s-th down-sampling module and the 1st up-sampling module; and Res denotes the output of the residual module.
4. The MRI motion artifact correction method based on a locally optimized generative adversarial network according to claim 1, characterized in that the adversarial loss function constructed in advance in step S4 is:

    L_adv = E_{I_F ~ p(I_F)}[ D(I_F) ] - E_{I_O ~ p(I_O)}[ D(I_O) ]

where G denotes the generator that performs the motion artifact removal task, D denotes the discriminator, p(I_F) and p(I_O) respectively denote the image distributions of the corrected images I_F and the original sample images I_O, D(I_F) denotes the discriminator output when its input is image I_F, D(I_O) denotes the discriminator output when its input is image I_O, and L_adv estimates the Wasserstein distance between the images I_F generated by the generator from I_MA and the label images I_O.
5. The MRI motion artifact correction method based on a locally optimized generative adversarial network according to claim 4, characterized in that the gradient penalty loss function constructed in advance in step S4 is:

    L_GP = λ · E_{Î ~ p(Î)}[ ( ||∇_Î D(Î)||_2 - 1 )² ],  with  Î = ε·I_O + (1-ε)·I_F

where ε denotes a random number sampled for interpolation, Î denotes the intermediate sample used in computing the gradient penalty, λ denotes a hyperparameter weighting the losses against one another, and ||∇_Î D(Î)||_2 denotes the two-norm of the discriminator gradient when the discriminator input is Î.
6. The MRI motion artifact correction method based on a locally optimized generative adversarial network according to claim 5, characterized in that the total loss function of the discriminator in each training pass is

    L_D = L_adv + L_GP
7. The MRI motion artifact correction method based on a locally optimized generative adversarial network according to claim 1, characterized in that the content loss function constructed in advance in step S4 is:

    L_Content = L_MSE + α·L_Perceptual

with

    L_MSE = (1/N) · Σ_{n=1..N} || I_O^(n) - G(I_MA^(n)) ||²
    L_Perceptual = (1/N) · Σ_{n=1..N} (1/(W_j·H_j)) · || φ_j(I_O^(n)) - φ_j(G(I_MA^(n))) ||²

where L_MSE denotes the pixel mean-square-error loss, L_Perceptual denotes the perceptual loss, α denotes a hyperparameter controlling the weight of the perceptual loss, N denotes the total number of sample images, I_O^(n) denotes the n-th original sample image, I_MA^(n) denotes the image with motion artifacts corresponding to the n-th original sample image, G(I_MA^(n)) denotes the generator output when its input is image I_MA^(n), j denotes the index of the pooling layer in the pre-trained VGG network, φ_j denotes the output, for a given input image, of all network layers before the j-th pooling layer of the VGG network, and W_j and H_j denote the width and height of the feature map output by φ_j.
8. The MRI motion artifact correction method based on a locally optimized generative adversarial network according to claim 7, characterized in that the local optimization loss function L_Local constructed in advance in step S4 convolves a predefined identity matrix K over the corrected image I_F and the label image I_O and accumulates the resulting per-region differences, where K denotes the predefined identity matrix and ⊗ denotes a convolution operation.
9. The MRI motion artifact correction method based on a locally optimized generative adversarial network according to claim 8, characterized in that the total loss function of the generator in each training pass is

    L_G = L_Content + β·L_Local - γ·E_{I_F ~ p(I_F)}[ D(I_F) ]

where β denotes a hyperparameter controlling the weight of the local optimization loss, γ denotes a hyperparameter controlling the weight of the adversarial loss, and D(I_F) denotes the discriminator output when its input is I_F.
10. The MRI motion artifact correction method based on a locally optimized generative adversarial network according to any one of claims 1 to 9, characterized in that t = 1 and k = 2.
CN202011414346.8A 2020-12-07 2020-12-07 MRI motion artifact correction method based on a locally optimized generative adversarial network Active CN112489154B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011414346.8A 2020-12-07 2020-12-07 MRI motion artifact correction method based on a locally optimized generative adversarial network

Publications (2)

Publication Number Publication Date
CN112489154A true CN112489154A (en) 2021-03-12
CN112489154B CN112489154B (en) 2022-06-03

Family

ID=74940301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011414346.8A Active CN112489154B (en) MRI motion artifact correction method based on a locally optimized generative adversarial network

Country Status (1)

Country Link
CN (1) CN112489154B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113081001A (en) * 2021-04-12 2021-07-09 杭州电子科技大学 Method for removing BCG artifact of synchronous EEG-fMRI (electroencephalogram-based magnetic resonance imaging)
CN113177078A (en) * 2021-04-30 2021-07-27 哈尔滨工业大学(威海) Efficient approximate query processing algorithm based on condition generation model
CN113284059A (en) * 2021-04-29 2021-08-20 Oppo广东移动通信有限公司 Model training method, image enhancement method, device, electronic device and medium
CN113662524A (en) * 2021-08-23 2021-11-19 合肥工业大学 Method for removing motion artifacts of PPG (photoplethysmography) signals
CN115359144A (en) * 2022-10-19 2022-11-18 之江实验室 Magnetic resonance plane echo imaging artifact simulation method and system
CN115860113A (en) * 2023-03-03 2023-03-28 深圳精智达技术股份有限公司 Training method and related device for self-antagonistic neural network model
WO2023123361A1 (en) * 2021-12-31 2023-07-06 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for motion correction for a medical image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190369191A1 (en) * 2018-05-31 2019-12-05 The Board Of Trustees Of The Leland Stanford Junior University MRI reconstruction using deep learning, generative adversarial network and acquisition signal model
CN110675461A (en) * 2019-09-03 2020-01-10 天津大学 CT image recovery method based on unsupervised learning
CN111028306A (en) * 2019-11-06 2020-04-17 杭州电子科技大学 AR2U-Net neural network-based rapid magnetic resonance imaging method
CN111443318A (en) * 2019-01-16 2020-07-24 上海联影医疗科技有限公司 Magnetic resonance image processing method, magnetic resonance image processing device, storage medium and magnetic resonance imaging system
CN111815692A (en) * 2020-07-15 2020-10-23 大连东软教育科技集团有限公司 Method, system and storage medium for generating artifact-free data and artifact-containing data

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190369191A1 (en) * 2018-05-31 2019-12-05 The Board Of Trustees Of The Leland Stanford Junior University MRI reconstruction using deep learning, generative adversarial network and acquisition signal model
CN111443318A (en) * 2019-01-16 2020-07-24 上海联影医疗科技有限公司 Magnetic resonance image processing method, magnetic resonance image processing device, storage medium and magnetic resonance imaging system
CN110675461A (en) * 2019-09-03 2020-01-10 天津大学 CT image recovery method based on unsupervised learning
CN111028306A (en) * 2019-11-06 2020-04-17 杭州电子科技大学 AR2U-Net neural network-based rapid magnetic resonance imaging method
CN111815692A (en) * 2020-07-15 2020-10-23 大连东软教育科技集团有限公司 Method, system and storage medium for generating artifact-free data and artifact-containing data

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GUANHUA WANG.ET AL.: "Accelerated MRI Reconstruction with Dual-Domain Generative Adversarial Network", 《INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR MEDICAL IMAGE RECONSTRUCTION》 *
LIMING XU.ET AL.: "Low-dose chest X-ray image super-resolution using generative adversarial nets with spectral normalization", 《BIOMEDICAL SIGNAL PROCESSING AND CONTROL》 *
陈眺 (Chen Tiao): "Multi-weighted magnetic resonance imaging method based on generative adversarial neural networks" (基于生成对抗神经网络的核磁共振多加权成像方法), China Master's Theses Full-Text Database, Medicine & Health Sciences series (in Chinese) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113081001A (en) * 2021-04-12 2021-07-09 杭州电子科技大学 Method for removing BCG artifact of synchronous EEG-fMRI (electroencephalogram-based magnetic resonance imaging)
CN113081001B (en) * 2021-04-12 2022-04-01 杭州电子科技大学 Method for removing BCG artifact of synchronous EEG-fMRI (electroencephalogram-based magnetic resonance imaging)
CN113284059A (en) * 2021-04-29 2021-08-20 Oppo广东移动通信有限公司 Model training method, image enhancement method, device, electronic device and medium
CN113177078A (en) * 2021-04-30 2021-07-27 哈尔滨工业大学(威海) Efficient approximate query processing algorithm based on condition generation model
CN113177078B (en) * 2021-04-30 2022-06-17 哈尔滨工业大学(威海) Approximate query processing algorithm based on condition generation model
CN113662524A (en) * 2021-08-23 2021-11-19 合肥工业大学 Method for removing motion artifacts of PPG (photoplethysmography) signals
CN113662524B (en) * 2021-08-23 2024-04-30 合肥工业大学 Method for removing PPG signal motion artifact
WO2023123361A1 (en) * 2021-12-31 2023-07-06 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for motion correction for a medical image
CN115359144A (en) * 2022-10-19 2022-11-18 之江实验室 Magnetic resonance plane echo imaging artifact simulation method and system
CN115359144B (en) * 2022-10-19 2023-03-03 之江实验室 Magnetic resonance plane echo imaging artifact simulation method and system
CN115860113A (en) * 2023-03-03 2023-03-28 深圳精智达技术股份有限公司 Training method and related device for self-antagonistic neural network model

Also Published As

Publication number Publication date
CN112489154B (en) 2022-06-03

Similar Documents

Publication Publication Date Title
CN112489154B (en) MRI motion artifact correction method based on a locally optimized generative adversarial network
Wu et al. Self-attention convolutional neural network for improved MR image reconstruction
CN107610193B (en) Image correction using depth-generated machine learning models
Shi et al. LRTV: MR image super-resolution with low-rank and total variation regularizations
CN110914866A (en) System and method for anatomical segmentation in image analysis
Kudo et al. Virtual thin slice: 3D conditional GAN-based super-resolution for CT slice interval
CN107527359A (en) A kind of PET image reconstruction method and PET imaging devices
CN112602099A (en) Deep learning based registration
CN111161269B (en) Image segmentation method, computer device, and readable storage medium
Du et al. Accelerated super-resolution MR image reconstruction via a 3D densely connected deep convolutional neural network
CN110827369B (en) Undersampling model generation method, image reconstruction method, apparatus and storage medium
JP7466141B2 (en) High-speed magnetic resonance image reconstruction method and magnetic resonance imaging device
KR102428725B1 (en) Method and program for imaging quality improving
WO2021102644A1 (en) Image enhancement method and apparatus, and terminal device
CN111047512B (en) Image enhancement method and device and terminal equipment
Huang et al. TransMRSR: transformer-based self-distilled generative prior for brain MRI super-resolution
US20230079353A1 (en) Image correction using an invertable network
CN117036162B (en) Residual feature attention fusion method for super-resolution of lightweight chest CT image
WO2024021796A1 (en) Image processing method and apparatus, electronic device, storage medium, and program product
Bazrafkan et al. Deep neural network assisted iterative reconstruction method for low dose ct
Gulamhussene et al. Transfer-learning is a key ingredient to fast deep learning-based 4D liver MRI reconstruction
CN111161330A (en) Non-rigid image registration method, device, system, electronic equipment and storage medium
US20220292673A1 (en) On-Site training of a machine-learning algorithm for generating synthetic imaging data
CN111462041A (en) Image processing method, device, equipment and storage medium
CN111812571B (en) Magnetic resonance imaging method, device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 2024-01-29
Address after: Room 801, 85 Kefeng Road, Huangpu District, Guangzhou City, Guangdong Province
Patentee after: Guangzhou Dayu Chuangfu Technology Co., Ltd. (China)
Address before: 400065 Chongwen Road, Nanshan Street, Nan'an District, Chongqing
Patentee before: Chongqing University of Posts and Telecommunications (China)