CN112446873A - Method for removing image artifacts - Google Patents
- Publication number
- CN112446873A (application number CN202011440467.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- loss function
- network model
- discriminator
- convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/0012—Biomedical image inspection
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06T5/80—Geometric correction
- G06T7/248—Analysis of motion using feature-based methods involving reference images or patches
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20104—Interactive definition of region of interest [ROI]
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Abstract
The invention relates to the technical field of medical image processing, and in particular to a method for removing image artifacts, comprising the following steps: acquiring a training data set; constructing a bidirectional cycle generative adversarial network (GAN) model; designing a multiple loss function for the bidirectional cycle GAN model; training the bidirectional cycle GAN model; and inputting an image with motion artifacts into the trained bidirectional cycle GAN model to obtain an image without motion artifacts. The image artifact removal method provided by the invention solves the problems of weak generalization ability and unstable reconstruction quality in existing methods for removing artifacts from CTA coronary images.
Description
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a method for removing image artifacts.
Background
CTA (computed tomography angiography) is a noninvasive coronary artery imaging technique and an important direction in the development of current CT technology. It offers advantages that conventional CT examination lacks, such as being noninvasive and simple to operate, and is therefore widely used. In recent years, the rapid development of multi-slice spiral CT has improved the temporal resolution of CT scanning and solved the problem of patient breath-holding. However, when the heart rate is too fast, the scan timing is difficult to control and motion artifacts can form, so that blood vessels are imaged unclearly, which hinders clinical observation of the vessels and, in turn, medical diagnosis. Research on removing artifacts from CTA coronary images therefore has profound scientific significance and application prospects.
In 2019, Hongfeng Ma et al. published "Evaluation of Motion Artifact Metrics for Coronary CT Imaging", which lists how motion artifacts arise, analyzes several different forms of artifact occurrence, constructs a motion artifact matrix from the different artifact conditions within an ROI, comprehensively evaluates the artifacts using low-intensity-region scores, and reconstructs and repairs motion-artifact images through matrix transformation.
Shipeng Xie et al. published "Artifact Removal using Improved GoogLeNet for Sparse-view CT Reconstruction", which uses GoogLeNet for artifact removal and reconstruction; the overall network combines 3 convolution layers with 23 Inception modules to form a 26-layer GoogLeNet network for image reconstruction.
The existing artifact removal methods have the following problems:
(1) The prior art can restore known artifact forms by reconstructing the artifact image, but its generalization ability is weak and its reconstruction quality is unstable.
(2) Because normal CT imaging requires a large amount of data to be acquired, image reconstruction is slow.
(3) Traditional methods require the ROI to be cropped out first, making the reconstruction process complex.
Disclosure of Invention
The invention provides an image artifact removal method that aims to solve the problems of weak generalization ability and unstable reconstruction quality in existing methods for removing motion artifacts from CTA coronary images.
The technical scheme for solving these problems is as follows. The image artifact removal method comprises the following steps:
S1: acquire a training data set, wherein
the training data set includes an image training data set with motion artifacts and an image training data set without motion artifacts;
S2: construct a bidirectional cycle generative adversarial network (GAN) model;
S3: design a multiple loss function for the bidirectional cycle GAN model;
S4: train the bidirectional cycle GAN model:
input the training data set into the bidirectional cycle GAN model and train it based on the multiple loss function, obtaining a trained bidirectional cycle GAN model;
S5: input an image with motion artifacts into the trained bidirectional cycle GAN model to obtain an image without motion artifacts.
Preferably, the bidirectional cycle GAN model includes a first discriminator, a second discriminator, a first multi-scale generator and a second multi-scale generator, wherein:
the first multi-scale generator generates an image without motion artifacts from an image with motion artifacts;
the second multi-scale generator generates an image with motion artifacts from an image without motion artifacts;
the first discriminator distinguishes between input images and generated images with motion artifacts;
the second discriminator distinguishes between input images and generated images without motion artifacts.
Preferably, the multiple loss function is composed of a discriminator loss function, a generator joint loss function, a cycle loss function, and a self-consistency loss function, wherein:
the discriminator loss function is expressed as:
L(D) = E[D(G(x))] − E[D(y)] + λ1 * E[(||∇D(x̂)||2 − 1)^2]
the generator joint loss function is expressed as:
L*(G) = λg * L(G) + λssim * L_SSIM + λpsnr * L_PSNR
the cycle loss function is expressed as:
L_cyc = E[||F(G(x)) − x||1]
the self-consistency loss function is expressed as:
L_idt = E[||G(G(x)) − G(x)||1]
wherein E(·) denotes expectation, x ~ (·) denotes that the input follows a given distribution, D(·) denotes the output of the discriminator, and λ1 is the weight of the WGAN-GP gradient penalty;
λg, λssim and λpsnr denote the weights of the generator loss, the structural similarity constraint and the peak signal-to-noise ratio constraint, respectively, and L(G) denotes the generator loss;
F denotes the reverse generator, which maps a generated fake image back toward the reference input, and || · ||1 denotes the first-order (L1) norm.
Preferably, the first and second discriminators have the same structure, which includes convolution structures, a global average pooling layer, and a fully connected layer, wherein the number of convolution structures is greater than 1.
Preferably, each convolution structure is composed of a convolution layer, spectral normalization, BN (batch normalization) and LeakyReLU, wherein the convolution layer has a kernel size of 3 × 3 and a stride of 2.
Preferably, the first and second multi-scale generators have the same structure, which comprises an encoding part, a central convolution part and a decoding part, wherein:
the encoding part comprises a plurality of downsampling network layers, the decoding part comprises a plurality of upsampling network layers, and the encoding and decoding parts are cascaded with each other so as to fuse the features of the encoding part with those of the decoding part.
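The feature fusion between the cascaded encoding and decoding parts can be sketched as a channel-wise concatenation, as in U-Net-style skip connections. The sketch below is a minimal NumPy illustration; the channel-first shapes, sizes, and function name are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def skip_concat(encoder_feat, decoder_feat):
    """Fuse encoder and decoder features by channel-wise concatenation.

    Both arrays are assumed channel-first (channels, height, width);
    spatial sizes must match, as in a cascaded encoder/decoder.
    """
    assert encoder_feat.shape[1:] == decoder_feat.shape[1:]
    return np.concatenate([encoder_feat, decoder_feat], axis=0)

enc = np.zeros((64, 32, 32))   # shallow encoder features (positional info)
dec = np.zeros((64, 32, 32))   # deep decoder features (semantic info)
fused = skip_concat(enc, dec)  # shape (128, 32, 32): both kinds fused
```

A following convolution layer would then mix the concatenated channels.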
Preferably, the downsampling network layer consists of a convolution layer, spectral normalization, BN and ReLU;
the upsampling network layer consists of bilinear interpolation, a convolution layer and BN;
the central convolution part consists of a convolution layer, BN and ReLU.
Preferably, the initial learning rate of the first and second discriminators is 1e-3 and the initial learning rate of the first and second multi-scale generators is 1e-4.
The invention also provides an image artifact removal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to carry out the image artifact removal method.
The invention also provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, carries out the image artifact removal method.
Compared with the prior art, the invention has the following beneficial effects:
1) The invention adopts multi-scale generators to improve the reconstruction of coronary images with motion artifacts; by combining semantic information from multiple scales and depths, the network's reconstruction quality improves, and motion artifacts are removed to a great extent.
2) When designing the multiple loss function, the structural similarity and peak signal-to-noise ratio of the ROI are used as constraints and combined with the earth mover's (Wasserstein) distance, improving the recovery of key regions so that the generated CTA image is free of motion artifacts.
Drawings
FIG. 1 is a diagram of the bidirectional cycle generative adversarial network of the present invention;
FIG. 2 is a schematic structural diagram of the first discriminator of the present invention;
FIG. 3 is a schematic structural diagram of the first multi-scale generator of the present invention;
FIG. 4 is a comparison of restoration results on images with motion artifacts in Embodiment 1 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings of the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention.
Example 1: a method for removing CTA cardiac coronary artifacts, comprising the steps of:
step 1: acquiring an image training data set with a motion artifact and an image training data set without the motion artifact, wherein the image training data set with the motion artifact is input, and the image training data set without the motion artifact is a target;
step 2: construction of bidirectional loop generation confrontation network model
21: designing a first discriminator and a second discriminator
As shown in fig. 2, the first discriminator and the second discriminator have the same structure, and are used for distinguishing the correct image from the image to be restored. The first discriminator comprises 5 convolutional layers, one global average pooling layer and one full-link layer, the size of the convolutional core is 3 x 3, the step size is 2, and the size of the full zero padding is 1. In order to make the fitting ability of the discriminator exceed the generator model, 5 convolutional layers are used for the first and second discriminators.
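With a 3 × 3 kernel, stride 2 and zero padding 1, each of the five convolution layers halves the spatial size of its input, which can be checked with the standard output-size formula. The sketch below assumes an illustrative 512 × 512 input; the patent does not state the actual input resolution.

```python
def conv_out(n, kernel=3, stride=2, padding=1):
    """Spatial output size of one convolution layer."""
    return (n + 2 * padding - kernel) // stride + 1

size = 512        # assumed input resolution, for illustration only
sizes = []
for _ in range(5):              # the five stride-2 convolution layers
    size = conv_out(size)
    sizes.append(size)
# each layer halves the spatial size: [256, 128, 64, 32, 16]
```

Global average pooling then collapses the final feature map regardless of its remaining spatial size, so the fully connected layer's input width depends only on the channel count.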
22: designing a first multi-scale generator and a second multi-scale generator
As shown in fig. 3, the first and second multi-scale generators are identical in structure. The first multi-scale generator comprises a four-time sampling network layer, a central convolution layer and a four-time sampling network layer; the four times of down sampling network layer is the coding part, and the four times of sampling network layer is the decoding part. In the process of encoding and decoding, the generated feature maps are cascaded, and in such a way, deep semantic information and shallow position information can be fused to extract more comprehensive image features.
The structure of the downsampling network layer comprises a convolution layer, a spectrum normalization process, a BN process and ReLu, and the ReLu can be used for enabling the generated image not to generate chessboard artifacts. The structure of the up-sampling network layer comprises bilinear difference processing, a convolutional layer and BN processing.
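The bilinear interpolation used in the upsampling layers can be illustrated in one dimension with linear interpolation (the 1-D analogue of bilinear). A minimal NumPy sketch; the upsampling factor and sample signal are purely illustrative.

```python
import numpy as np

def upsample_linear_1d(signal, factor=2):
    """Linearly interpolate a 1-D signal to roughly `factor` times its
    length; the 2-D bilinear upsampling in the decoder works analogously
    along both axes before the following convolution layer."""
    n = len(signal)
    old_x = np.arange(n)
    new_x = np.linspace(0, n - 1, factor * n - (factor - 1))
    return np.interp(new_x, old_x, signal)

up = upsample_linear_1d(np.array([0.0, 2.0, 4.0]))
# inserted samples fall exactly between their neighbours: [0, 1, 2, 3, 4]
```

Upsampling by interpolation and then convolving, rather than using a transposed convolution, is a common way to avoid checkerboard patterns in generated images.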
23: designing bidirectional loop generation countermeasure network model
As shown in FIG. 1, adopt GabAnd GbaTwo sets of generators (i.e., a first multi-scale generator and a second multi-scale generator) and DaAnd DbTwo sets of discriminators (i.e. a first discriminator and a second discriminator), where GabAnd DaTo generate images without motion artifacts from images with motion artifacts, and GbaAnd DbIt is desirable to generate a map with motion artifacts from an image without motion artifactsLike, therefore, using GabThe process of removing the artifact of the artifact image can be realized, and G is usedbaThe simulation process of the generation of the sharp image artifact can be realized. The bidirectional circulation generation countermeasure network is designed to control the direction of image generation reconstruction, so that the image generation reconstruction is converted into a controllable direction, and the converted image and the original image are prevented from being in completely irrelevant fields.
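The two-generator wiring above can be sketched with plain callables. In the toy example below the "artifact" is a constant offset and the generators are simple arithmetic stand-ins; this only illustrates the direction control of the cycle, not the real multi-scale networks.

```python
ARTIFACT = 0.5   # toy stand-in for a motion artifact (illustrative)

def g_ab(x):
    """Toy G_ab: artifact image -> clean image."""
    return x - ARTIFACT

def g_ba(x):
    """Toy G_ba: clean image -> artifact image."""
    return x + ARTIFACT

a = 1.5                            # a one-pixel "image with artifact"
fake_clean = g_ab(a)               # forward direction: remove the artifact
reconstructed = g_ba(fake_clean)   # backward direction: simulate it again
# the cycle returns to the starting image, which the cycle loss enforces
```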
3: designing multiple loss functions for bi-directional loop generation of countermeasure network model
In the design process of generating the whole confrontation network model in a bidirectional cycle mode, the loss function specifically comprises the following parts:
1. The network is built on a generative adversarial network, so the core loss is the adversarial loss. Because the original adversarial loss easily destabilizes generator training and causes problems such as vanishing gradients, the earth mover's (Wasserstein) distance replaces the original adversarial loss. The generator loss L(G) and discriminator loss L(D) are expressed as:
L(G) = −E[D(G(x))]
L(D) = E[D(G(x))] − E[D(y)] + λ1 * E[(||∇D(x̂)||2 − 1)^2]
where E (-) represents expectation, x (-) represents that the input follows a certain kind of distribution, D (-) represents the output obtained after passing through the discriminator, and λ1The weighting parameter for gradient panel for wgan-gp was set to 1 in the experiment. In order to ensure that more detailed information of coronary artery vessels is recovered in the training process, the ROI of the vessel is selected, and Structural Similarity (SSIM) and peak signal-to-noise ratio (PSNR) constraints are performed in the ROI region, and the formula of the peak signal-to-noise ratio and the structural similarity is as follows:
in the structural similarity formula, μ and σ represent the mean and standard deviation of the image, C, respectively1=(k1L)2And C2=(k2L)2Are two smaller constant terms, avoiding a denominator of 0, where L represents the maximum pixel value of the image.
The final generator loss is expressed as:
L*(G) = λg * L(G) + λssim * L_SSIM + λpsnr * L_PSNR
where L*(G) is the final generator joint loss, and λg, λssim and λpsnr are the weights of the generator loss, the structural similarity constraint and the peak signal-to-noise ratio constraint, set to 1.0, −1.0 and −10.0, respectively, in the experiments.
2. Because a bidirectional cycle adversarial network is used, bidirectional network constraints are adopted, consisting of a cycle loss and a self-consistency loss. The cycle loss function is expressed as:
L_cyc = E[||F(G(x)) − x||1]
where F denotes the reverse generator, which maps a generated fake image back toward the reference input, and || · ||1 denotes the first-order (L1) norm. The self-consistency loss is expressed as:
L_idt = E[||G(G(x)) − G(x)||1]
The self-consistency loss means that after the generator produces a converted image, feeding that converted image back into the generator should leave it unchanged; this shows the generator has the ability to generate the target image type, i.e., it is self-consistent. The overall loss function combines the partial constraints above.
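Both bidirectional constraints reduce to first-order-norm penalties on generator outputs. A small NumPy sketch with generators passed in as callables (names illustrative); mutually inverse or idempotent toy generators drive both losses to zero, as the constraints intend.

```python
import numpy as np

def l1(a, b):
    """First-order (L1) norm used by both bidirectional constraints."""
    return np.abs(a - b).mean()

def cycle_loss(x, g_forward, g_backward):
    """|| G_ba(G_ab(x)) - x ||_1 : a full cycle should return to x."""
    return l1(g_backward(g_forward(x)), x)

def self_consistency_loss(x, g_forward):
    """Feeding an already-converted image back through the same generator
    should change nothing: || G(G(x)) - G(x) ||_1."""
    converted = g_forward(x)
    return l1(g_forward(converted), converted)

img = np.array([0.25, 0.5, 1.5])
cyc = cycle_loss(img, lambda v: v + 1.0, lambda v: v - 1.0)       # 0.0
idt = self_consistency_loss(img, lambda v: np.clip(v, 0.0, 1.0))  # 0.0
```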
31: all the networks are optimized by using ADAM, the initial learning rates of the first discriminator and the first discriminator are 1e-3, the initial learning rates of the first multi-scale generator and the first multi-scale generator are 1e-4, and the convergence effect is improved by using a cosine annealing mechanism for training.
Step 4: input the images with motion artifacts and the images without motion artifacts as input images and target images, respectively, into the bidirectional cycle GAN model for training, obtaining the trained bidirectional cycle GAN model.
Step 5: input an image with motion artifacts into the trained bidirectional cycle GAN model to obtain an image without motion artifacts.
Example 2: an image artifact removing device comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the image artifact removing method of embodiment 1.
Example 3: a computer-readable storage medium on which a computer program is stored, which when executed by a processor, performs the image artifact removal method of embodiment 1.
The above description is only an embodiment of the present invention and is not intended to limit its scope; all equivalent structures or equivalent process transformations made using the contents of the specification and drawings, or applied directly or indirectly in other related systems, are included within the scope of the present invention.
Claims (10)
1. A method for removing image artifacts, characterized by comprising the following steps:
S1: acquiring a training data set, wherein
the training data set includes an image training data set with motion artifacts and an image training data set without motion artifacts;
S2: constructing a bidirectional cycle generative adversarial network model;
S3: designing a multiple loss function for the bidirectional cycle generative adversarial network model;
S4: training the bidirectional cycle generative adversarial network model:
inputting the training data set into the bidirectional cycle generative adversarial network model and training it based on the multiple loss function to obtain a trained bidirectional cycle generative adversarial network model;
S5: inputting an image with motion artifacts into the trained bidirectional cycle generative adversarial network model to obtain an image without motion artifacts.
2. The method for removing image artifacts according to claim 1, characterized in that the bidirectional cycle generative adversarial network model includes a first discriminator, a second discriminator, a first multi-scale generator and a second multi-scale generator, wherein:
the first multi-scale generator is used to generate an image without motion artifacts from an image with motion artifacts;
the second multi-scale generator is used to generate an image with motion artifacts from an image without motion artifacts;
the first discriminator is used to distinguish between input images and generated images with motion artifacts;
the second discriminator is used to distinguish between input images and generated images without motion artifacts.
3. The method for removing image artifacts according to claim 2, characterized in that the multiple loss function is composed of a discriminator loss function, a generator joint loss function, a cycle loss function, and a self-consistency loss function, wherein:
the discriminator loss function is expressed as:
L(D) = E[D(G(x))] − E[D(y)] + λ1 * E[(||∇D(x̂)||2 − 1)^2]
the generator joint loss function is expressed as:
L*(G) = λg * L(G) + λssim * L_SSIM + λpsnr * L_PSNR
the cycle loss function is expressed as:
L_cyc = E[||F(G(x)) − x||1]
the self-consistency loss function is expressed as:
L_idt = E[||G(G(x)) − G(x)||1]
wherein E(·) denotes expectation, x ~ (·) denotes that the input follows a given distribution, D(·) denotes the output of the discriminator, and λ1 is the weight of the WGAN-GP gradient penalty;
λg, λssim and λpsnr denote the weights of the generator loss, the structural similarity constraint and the peak signal-to-noise ratio constraint, respectively, and L(G) denotes the generator loss;
F denotes the reverse generator, which maps a generated fake image back toward the reference input, and || · ||1 denotes the first-order (L1) norm.
4. The method for removing image artifacts according to claim 2 or 3, characterized in that the first and second discriminators have the same structure, which includes convolution structures, a global average pooling layer, and a fully connected layer, wherein the number of convolution structures is greater than 1.
5. The method for removing image artifacts according to claim 4, characterized in that each convolution structure is composed of a convolution layer with a kernel size of 3 × 3 and a stride of 2, spectral normalization, BN and LeakyReLU.
6. The method for removing image artifacts according to claim 2 or 3, characterized in that the first and second multi-scale generators have the same structure, which comprises an encoding part, a central convolution part and a decoding part, wherein:
the encoding part comprises a plurality of downsampling network layers, the decoding part comprises a plurality of upsampling network layers, and the encoding and decoding parts are cascaded with each other so as to fuse the features of the encoding part with those of the decoding part.
7. The method for removing image artifacts according to claim 6, characterized in that:
the downsampling network layer consists of a convolution layer, spectral normalization, BN and ReLU;
the upsampling network layer consists of bilinear interpolation, a convolution layer and BN;
the central convolution part consists of a convolution layer, BN and ReLU.
8. The method for removing image artifacts according to claim 2 or 3, characterized in that the initial learning rate of the first and second discriminators is 1e-3 and the initial learning rate of the first and second multi-scale generators is 1e-4.
9. An image artifact removal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor executes the program to implement the method for removing image artifacts according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, carries out the method for removing image artifacts according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011440467.XA CN112446873B (en) | 2020-12-11 | 2020-12-11 | Method for removing image artifacts |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011440467.XA CN112446873B (en) | 2020-12-11 | 2020-12-11 | Method for removing image artifacts |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112446873A true CN112446873A (en) | 2021-03-05 |
CN112446873B CN112446873B (en) | 2024-09-24 |
Family
ID=74740253
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011440467.XA Active CN112446873B (en) | 2020-12-11 | 2020-12-11 | Method for removing image artifacts |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112446873B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200111194A1 (en) * | 2018-10-08 | 2020-04-09 | Rensselaer Polytechnic Institute | Ct super-resolution gan constrained by the identical, residual and cycle learning ensemble (gan-circle) |
CN109815893A (en) * | 2019-01-23 | 2019-05-28 | Sun Yat-sen University (中山大学) | Illumination-domain normalization method for color face images based on a cycle-consistent generative adversarial network |
CN110135366A (en) * | 2019-05-20 | 2019-08-16 | Xiamen University (厦门大学) | Occluded pedestrian re-identification method based on a multi-scale generative adversarial network |
CN110197517A (en) * | 2019-06-11 | 2019-09-03 | Changshu Institute of Technology (常熟理工学院) | SAR image colorization method based on a multi-domain cycle-consistent generative adversarial network |
Non-Patent Citations (2)
Title |
---|
Joost van Amersfoort: "Frame interpolation with multi-scale deep loss functions and generative adversarial networks", Computer Vision and Pattern Recognition, 26 February 2019 (2019-02-26) *
Peng Yanfei et al. (彭晏飞等): "Image style transfer based on cycle generative adversarial networks" (基于循环生成对抗网络的图像风格迁移), Computer Engineering & Science (计算机工程与科学), 30 April 2020 (2020-04-30), pages 2-3 *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113139917A (en) * | 2021-04-23 | 2021-07-20 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司) | Image processing method, image processing device, electronic equipment and storage medium |
CN113129237A (en) * | 2021-04-26 | 2021-07-16 | Guangxi Normal University (广西师范大学) | Depth image deblurring method based on multi-scale fusion coding network |
CN113129237B (en) * | 2021-04-26 | 2022-10-28 | Guangxi Normal University (广西师范大学) | Depth image deblurring method based on multi-scale fusion coding network |
WO2023123361A1 (en) * | 2021-12-31 | 2023-07-06 | Shanghai United Imaging Healthcare Co., Ltd. | Systems and methods for motion correction for a medical image |
CN114445515A (en) * | 2022-02-14 | 2022-05-06 | Shenzhen Saihe Medical Technology Co., Ltd. (深圳市赛禾医疗技术有限公司) | Image artifact removing method and device, electronic equipment and storage medium |
CN114943786A (en) * | 2022-06-20 | 2022-08-26 | Nankai University (南开大学) | Image restoration method for optical coherence tomography image saturation artifact |
CN115170424A (en) * | 2022-07-07 | 2022-10-11 | Beijing Ande Yizhi Technology Co., Ltd. (北京安德医智科技有限公司) | Cardiac ultrasound image artifact removal method and device |
CN115170424B (en) * | 2022-07-07 | 2023-04-07 | Beijing Ande Yizhi Technology Co., Ltd. (北京安德医智科技有限公司) | Cardiac ultrasound image artifact removal method and device |
WO2024032098A1 (en) * | 2022-08-09 | 2024-02-15 | Tencent Technology (Shenzhen) Co., Ltd. (腾讯科技(深圳)有限公司) | Training method and apparatus for artifact removal model, and device, medium and program product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112446873A (en) | Method for removing image artifacts | |
Mardani et al. | Deep generative adversarial neural networks for compressive sensing MRI | |
CN109993809B (en) | Rapid magnetic resonance imaging method based on residual U-net convolutional neural network | |
Luo et al. | Bayesian MRI reconstruction with joint uncertainty estimation using diffusion models | |
Bhagat et al. | Data augmentation using generative adversarial networks for pneumonia classification in chest xrays | |
Arai et al. | Significant dimension reduction of 3D brain MRI using 3D convolutional autoencoders | |
Xia et al. | Physics-/model-based and data-driven methods for low-dose computed tomography: A survey | |
WO2022121100A1 (en) | Darts network-based multi-modal medical image fusion method | |
CN111899165A (en) | Multi-task image reconstruction convolution network model based on functional module | |
Huang et al. | TransMRSR: transformer-based self-distilled generative prior for brain MRI super-resolution | |
CN115953494B (en) | Multi-task high-quality CT image reconstruction method based on low dose and super resolution | |
CN111325695B (en) | Low-dose image enhancement method and system based on multi-dose grade and storage medium | |
US20230024401A1 (en) | Implicit Neural Representation Learning with Prior Embedding for Sparsely Sampled Image Reconstruction and Other Inverse Problems | |
CN112819914A (en) | PET image processing method | |
Wahid et al. | Multi-layer basis pursuit for compressed sensing MR image reconstruction | |
Trung et al. | Dilated residual convolutional neural networks for low-dose CT image denoising | |
CN114049334B (en) | Super-resolution MR imaging method taking CT image as input | |
US20220292641A1 (en) | Dynamic imaging and motion artifact reduction through deep learning | |
Sander et al. | Autoencoding low-resolution MRI for semantically smooth interpolation of anisotropic MRI | |
Liu et al. | DL‐MRI: A Unified Framework of Deep Learning‐Based MRI Super Resolution | |
CN110335327A (en) | A medical image reconstruction method that directly solves the inverse problem | |
CN112991220A (en) | Method for correcting image artifacts by convolutional neural network based on multiple constraints | |
CN115330600A (en) | Lung CT image super-resolution method based on improved SRGAN | |
CN116888624A (en) | Contrast enhancement through machine learning | |
Zhou et al. | AR-UNet: A Deformable Image Registration Network with Cyclic Training |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||