CN116894783A - Metal artifact removal method for countermeasure generation network model based on time-varying constraint - Google Patents
- Publication number: CN116894783A
- Application number: CN202310878651.XA
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06N3/045 — Combinations of networks
- G06N3/0455 — Auto-encoder networks; encoder-decoder networks
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/0475 — Generative networks
- G06N3/09 — Supervised learning
- G06N3/094 — Adversarial learning
- G06T2207/10081 — Computed x-ray tomography [CT]
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
Abstract
The application discloses a metal artifact removal method based on a generative adversarial network (GAN) model with time-varying constraints. It constructs a GAN-based metal artifact removal model with time-varying constraints (MARGANVAC), which applies adaptive fidelity constraints to each part of the full image by introducing time-varying constraint terms, so that the generator is trained more effectively to produce images with better detail fidelity and better metal artifact removal. Compared with current mainstream models, the method achieves better artifact removal and a wider range of application scenarios.
Description
Technical Field
The application relates to a metal artifact removal method, in particular to a metal artifact removal method based on a generative adversarial network model with time-varying constraints, and belongs to the technical field of improving CT image quality.
Background
The presence of metal implants can cause severe metal artifacts in CT images, seriously degrading image quality. Over the last decades, many methods for reducing metal artifacts have been proposed. Although these conventional methods have had some effect on metal artifact removal, their performance remains far from what practical applications demand. In recent years, with the rise of deep learning, deep learning models have gradually been applied to metal artifact removal research, with good results. In references [1,2], Park et al. use U-Net to learn the mapping from the geometry of the metal-trace area in the projection map to the corresponding beam-hardening factor, in order to correct inaccurate projection data. The CNNMAR method combines an uncorrected image, an LI image, and a BHC image into a three-channel image, inputs it into a convolutional neural network to generate a prior image, and then uses the forward-projection result to guide the interpolation process to generate an artifact-removed image. Based on the idea of conditional generative adversarial networks (cGAN), Wang et al. in reference [3] learn the mapping from artifact-affected images to artifact-free images and use PatchGAN as the discriminator. The above methods are all supervised, requiring well-paired data as training sets. Many researchers have therefore devoted themselves to unsupervised learning, which relieves the need for paired data in the metal artifact removal problem. Based on the concept of CycleGAN, Lee et al. proposed Attention-guided β-CycleGAN (see reference [4]), which uses an attention mechanism to focus on the unique features of metal artifacts in the spatial and channel domains; trained in an unsupervised manner, it achieves good robustness.
In reference [5], Liao et al. creatively propose the ADN network using the concept of a latent space: one encoder maps an artifact-affected image into a content space containing only image-content features, while another encoder maps it into an artifact space containing only artifact features, enabling the decoupling of artifacts from tissue detail. Unsupervised methods eliminate the need for paired data, but they do not cope well with the complex artifacts and tissue details that occur under clinical conditions. Although current mainstream supervised methods outperform unsupervised ones, their metal artifact removal performance and image-detail fidelity still cannot meet the requirements of practical applications.
References:
[1] Park H S, Chung Y E, Lee S M, et al. Sinogram-consistency learning in CT for metal artifact reduction[J]. arXiv preprint arXiv:00607, 2017, 1.
[2] Park H S, Lee S M, Kim H P, et al. CT sinogram-consistency learning for metal-induced beam hardening correction[J]. Medical Physics, 2018, 45(12): 5376-84.
[3] Wang J, Zhao Y, Noble J H, et al. Conditional generative adversarial networks for metal artifact reduction in CT images of the ear[C]. International Conference on Medical Image Computing and Computer-Assisted Intervention. 2018: 3-11.
[4] Lee J, Gu J, Ye J C. Unsupervised CT metal artifact learning using attention-guided β-CycleGAN[J]. IEEE Transactions on Medical Imaging, 2021, 40(12): 3932-44.
[5] Liao H, Lin W-A, Zhou S K, et al. ADN: artifact disentanglement network for unsupervised metal artifact reduction[J]. IEEE Transactions on Medical Imaging, 2019, 39(3): 634-43.
Disclosure of Invention
The application aims to: in view of the problems and shortcomings in the prior art, the application provides a metal artifact removal method based on a generative adversarial network model with time-varying constraints. A GAN-based metal artifact removal model with time-varying constraints (MARGANVAC) is constructed, which applies adaptive fidelity constraints to each part of the full image by introducing time-varying constraint terms, thereby training the generator more effectively to produce images with better detail fidelity and better metal artifact removal. Compared with current mainstream models, the method achieves better artifact removal and a wider range of application scenarios.
The technical scheme is as follows: a metal artifact removal method based on a generative adversarial network model with time-varying constraints. A GAN-based metal artifact removal model with time-varying constraints (MARGANVAC) is constructed; on top of the GAN, the model applies adaptive fidelity constraints to each part of the full image by introducing time-varying constraint terms.
The model comprises three modules: a generator G, a discriminator D, and a registration network R. The CT image to be processed is input into the generator G after a random affine transformation, and the generator output is fed to the registration network R and the discriminator D. The registration network R performs random sampling, and the neighborhood of the sampled pixels gradually shrinks as the number of iterations increases; its parameters can be adjusted adaptively without human intervention, thereby enabling the generator to produce more realistic images.
Further, the training process of the metal artifact removal model with time-varying constraints is as follows:
First, each input image x_a affected by metal artifacts undergoes a random affine transformation, and the corresponding artifact-free reference image x undergoes its own random affine transformation, yielding the transformed images x̃_a and x_T respectively. When x̃_a passes through the generator G, an artifact-removed image x̂ is obtained, and x̂ together with x_T is input into the registration network R. The physical meaning of the registration network can be expressed with the function φ(G(x_a; θ_G)_j, σ_t): the first part, G(x_a; θ_G), is the output of the generator sub-network, with θ_G denoting the generator parameters and j the pixel index; the second part, φ, denotes an abstract sampling function that samples pixels in the neighborhood of pixel x_j, and σ_t is a parameter controlling the neighborhood size, which gradually converges to zero as training iterations proceed. At the beginning of training, the registration network performs poorly, so its output x̃ shows some positional deviation from the affine-transformed ground-truth image x_T. These deviations are variable, i.e., each pixel of each input image has an independent deviation at every iteration, which helps model the random sampling process of φ(x_j, σ_t). As training progresses, the registration performance improves, i.e., the parameter σ_t decreases; if the whole model converges well, σ_t drops to zero.
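The small random affine transformations applied to the input and reference images can be sketched as follows. This is a minimal numpy illustration, not the application's actual implementation; the function name and parameters (`max_angle_deg`, `max_shift`) are assumptions chosen only to show the idea of a small, controllable rotation-plus-translation.

```python
import numpy as np

def random_affine(img, rng, max_angle_deg=3.0, max_shift=2.0):
    """Small random affine transform (rotation about the image center plus a
    translation), applied by inverse mapping with nearest-neighbor resampling."""
    h, w = img.shape
    ang = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg))
    tx, ty = rng.uniform(-max_shift, max_shift, size=2)
    c, s = np.cos(ang), np.sin(ang)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Inverse map: undo the translation, then rotate by -ang about the center.
    x0 = xs - tx - cx
    y0 = ys - ty - cy
    x_src = c * x0 + s * y0 + cx
    y_src = -s * x0 + c * y0 + cy
    xi = np.clip(np.rint(x_src), 0, w - 1).astype(int)
    yi = np.clip(np.rint(y_src), 0, h - 1).astype(int)
    return img[yi, xi]
```

With zero amplitude the transform reduces to the identity, which is why the deviations it introduces stay within a range the registration network can correct.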
A metal-artifact-removed image is then generated from a metal-artifact-affected CT image using the trained model with time-varying constraints.
Further, residual learning is introduced in the generator G, and self-reconstruction is introduced as a constraint to regularize the generator. The generator comprises an encoder and a decoder: the encoder maps the image sample from the image domain to a latent space in which the content information and the metal-artifact features are separated, and the decoder reconstructs the separated content information into an artifact-free image. A deep sub-network of 21 Inception-ResNet modules is introduced between the encoder and the decoder to improve the separation of artifact features and content information in the latent space.
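The data flow of this generator can be sketched at the shape level. This is a heavily simplified numpy mock-up under stated assumptions: average pooling stands in for the encoder, nearest-neighbor upsampling for the decoder, and untrained identity blocks for the 21 Inception-ResNet modules; it shows only the channel concatenation and the residual-learning output, not real learned behavior.

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling (stand-in for the encoder's downsampling)."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """2x nearest-neighbor upsampling (stand-in for the decoder)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def toy_generator(x_a, x_li, n_blocks=21):
    x_cat = np.stack([x_a, x_li])       # channel-wise concatenation [x_a, x_LI]
    z = avg_pool2(x_cat.mean(axis=0))   # "encoder": map into a lower-resolution hidden space
    for _ in range(n_blocks):           # placeholder for the 21 Inception-ResNet modules
        z = z + 0.0 * np.tanh(z)        # identity residual block (untrained weights)
    artifact = upsample2(z)             # "decoder": back to image resolution
    return x_a - artifact               # residual learning: subtract the predicted artifact
```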
Further, the training process introduces self-reconstruction as a constraint to regularize the generator, so that it better distinguishes metal artifacts from content information. In the self-reconstruction process, the artifact-free image y is concatenated into [y, y] and then input into the generator. Before concatenation, a random affine transformation is applied to y, giving A_3(y), which is concatenated into [A_3(y), A_3(y)]; accordingly, in self-reconstruction the generator output becomes G([A_3(y), A_3(y)]; θ_G).
Further, for the image x_a, an estimate of the metal-trace part of the sinogram-domain projection is obtained by the linear interpolation (LI) method, and an LI-corrected reconstructed image x_[LI]a is obtained by FBP or FDK reconstruction. The LI-corrected image x_[LI]a and the artifact-affected image x_a each undergo a random affine transformation, giving A_1(x_[LI]a) and A_1(x_a), which are concatenated along the channel dimension as the input of generator G.
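The LI step above (replacing the metal trace in the sinogram by interpolation before reconstruction) can be sketched as follows. This is a minimal numpy illustration of the standard LI idea, assuming the sinogram is given as a 2-D array of projection views by detector bins with a boolean metal-trace mask; it is not the application's own code.

```python
import numpy as np

def li_correct(sino, metal_trace):
    """LI correction sketch: in each projection view (row), replace the detector
    bins covered by the metal trace with linear interpolation from the
    surrounding unaffected bins. FBP/FDK reconstruction would follow."""
    out = sino.astype(float).copy()
    cols = np.arange(sino.shape[1])
    for i in range(sino.shape[0]):
        bad = metal_trace[i].astype(bool)
        if bad.any() and not bad.all():
            out[i, bad] = np.interp(cols[bad], cols[~bad], out[i, ~bad])
    return out
```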
Further, residual learning is introduced in the generator G, so the image after metal artifact removal is expressed as x̂ = A_1(x_a) − G([A_1(x_a), A_1(x_[LI]a)]; θ_G), where [m, n] denotes the channel-wise concatenation of m and n. Under self-reconstruction, the generator output correspondingly becomes ŷ = A_3(y) − G([A_3(y), A_3(y)]; θ_G). Then x̂ and A_2(x) are taken as the input of the registration network R, where A_2(x) denotes a random affine transformation of the reference image x. The output of the registration network R is a deformation vector field T_x = R(x̂, A_2(x); θ_R), with θ_R the registration network parameters. After obtaining T_x, applying T_x to x̂ yields the resampled image x̃. Following the same procedure as for T_x, a deformation vector field T_y = R(ŷ, A_4(y); θ_R) is obtained, where A_4(y) denotes a random affine transformation of the reference image y; in the self-reconstruction case, the corresponding resampled image is ỹ. The generator output x̂ and A_2(x) serve as the input of the discriminator D; likewise, in the self-reconstruction case, ŷ and A_4(y) serve as the input of the discriminator D.
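Applying a deformation vector field T to an image by resampling can be sketched as follows. This is an illustrative numpy implementation of dense warping with bilinear interpolation, under the assumption that the field stores a per-pixel (dy, dx) displacement; the application's registration network would produce such a field, but this code is not its actual implementation.

```python
import numpy as np

def warp(img, dvf):
    """Resample img with a deformation vector field: out[p] = img[p + T(p)],
    using bilinear interpolation. dvf has shape (H, W, 2) storing (dy, dx)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    sy = np.clip(ys + dvf[..., 0], 0, h - 1)
    sx = np.clip(xs + dvf[..., 1], 0, w - 1)
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy, wx = sy - y0, sx - x0
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

A zero field leaves the image unchanged, and a unit displacement shifts the sampling location by one pixel, which is the sense in which T_x resamples x̂ toward A_2(x).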
Further, the metal artifact removal model with time-varying constraints includes four loss functions: the artifact correction loss L_corr, the self-reconstruction loss L_rec, the adversarial loss L_adv, and a smoothing loss L_smooth that constrains the deformation vector fields. The total loss is expressed as the weighted sum L_total = λ_corr·L_corr + λ_rec·L_rec + λ_adv·L_adv + λ_smooth·L_smooth, where each λ denotes the weight coefficient of the corresponding loss.
Correction loss L_corr: the registration network R and the generator G are trained simultaneously, and the correction loss is designed as:
L_corr = E_{x_a∈X_a}[ ||x̃ − A_2(x)||_1 ]
Using the L1 norm allows the generator to retain more detail while reducing metal artifacts. E denotes expectation, and x_a∈X_a indicates that x_a is drawn from the domain X_a of metal-artifact-affected CT images.
Self-reconstruction loss L_rec: the training goal is to make the generator learn to remove metal artifacts while introducing more constraints so that it retains more content information, i.e., metal artifacts are minimized when metal and artifacts are present, and all image-content information is preserved when no metal artifacts are present:
L_rec = E_{y∈X}[ ||ỹ − A_4(y)||_1 ]
where E denotes expectation and y∈X indicates that y is drawn from the domain X of artifact-free images.
Adversarial loss L_adv: adversarial learning drives the generator G to produce more realistic artifact-free images. To achieve this, the generator should be able to distinguish artifact information from content information in the latent space. To enhance this capability, two adversarial learning strategies are introduced: one increases the latent space's ability to identify metal artifacts, and the other increases its ability to retain content information. The input data of the two strategies are an image affected by metal artifacts and an image free of metal artifacts, respectively. The two strategies are carried out simultaneously in adversarial learning, and their respective losses can be written as:
L_adv^x = E_{x∈X}[log D(A_2(x))] + E_{x_a∈X_a}[log(1 − D(x̂))]
L_adv^y = E_{y∈X}[log D(A_4(y))] + E_{y∈X}[log(1 − D(ŷ))]
where the base of the log is 2 and D(·) denotes the output of the discriminator D. The total adversarial loss is therefore:
L_adv = L_adv^x + L_adv^y
smoothing lossThe resulting deformation vector field may become unsmooth and physically unrealistic while minimizing the combined loss of correction and self-reconstruction losses. To solve this problem, diffusion regularization operators in the gradient direction of the deformation vector field are introduced in the original registration network to constrain the deformation vector field to smooth it. Because the generator has two outputs and />There are two corresponding deformation vector field regularization terms. Thus, the total smoothing loss can be expressed as:
wherein Is the gradient of the deformation vector field T. In practice, the differences between pixels are used to approximate the gradient of each pixel in the deformation vector field.
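The four losses and their weighted combination can be sketched numerically as follows. This is an illustrative numpy sketch under assumptions: the resampled images, discriminator outputs, and deformation fields are passed in as plain arrays/scalars, the base-2 logarithm follows the text, and the weight values in `lams` are placeholders, not the application's tuned coefficients.

```python
import numpy as np

def l1(a, b):
    return float(np.mean(np.abs(a - b)))

def smooth_loss(dvf):
    # Diffusion regularizer: squared finite differences approximate ||grad T||^2.
    dy = np.diff(dvf, axis=0)
    dx = np.diff(dvf, axis=1)
    return float(np.mean(dy ** 2) + np.mean(dx ** 2))

def total_loss(x_res, x_ref, y_res, y_ref,
               d_real_x, d_fake_x, d_real_y, d_fake_y,
               t_x, t_y, lams=(1.0, 1.0, 0.1, 0.1)):
    l_corr = l1(x_res, x_ref)                           # artifact correction loss (L1)
    l_rec = l1(y_res, y_ref)                            # self-reconstruction loss (L1)
    l_adv = (np.log2(d_real_x) + np.log2(1 - d_fake_x)  # base-2 logs, as in the text
             + np.log2(d_real_y) + np.log2(1 - d_fake_y))
    l_smooth = smooth_loss(t_x) + smooth_loss(t_y)
    return (lams[0] * l_corr + lams[1] * l_rec
            + lams[2] * l_adv + lams[3] * l_smooth)
```

A constant deformation field incurs zero smoothing loss, and perfectly registered images incur zero correction and reconstruction losses, leaving only the adversarial term.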
A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the metal artifact removal method of the generative adversarial network model based on time-varying constraints described above when executing the computer program.
A computer-readable storage medium storing a computer program for performing the metal artifact removal method of the generative adversarial network model based on time-varying constraints described above.
The beneficial effects are as follows: compared with the prior art, the application constructs a GAN-based metal artifact removal model with time-varying constraints (MARGANVAC), which applies adaptive fidelity constraints to each part of the whole image by introducing time-varying constraint terms, so that the generator is trained more effectively to produce images with better detail fidelity and better metal artifact removal. Compared with current mainstream models, the method achieves better artifact removal and a wider range of application scenarios.
Drawings
FIG. 1 is a network schematic diagram of a conventional GAN model;
FIG. 2 is a diagram showing the results of a MAR method based on a conventional GAN architecture;
FIG. 3 is a schematic diagram of a GAN network incorporating a registration network as a sampling function;
FIG. 4 is a schematic view of affine transformation, wherein: (a) is an original image, (b) is the image obtained by applying an affine transformation to (a), and (c) is the difference image between (a) and (b);
FIG. 5 is a schematic diagram of the overall architecture of a MARGANVAC model according to an embodiment of the application;
FIG. 6 is a schematic diagram of a generator;
FIG. 7 is a schematic diagram of the basic building blocks of the generator;
FIG. 8 is a schematic diagram of the discriminator;
FIG. 9 is a schematic diagram of a qualitative comparison of different methods on the DeepLesion dataset;
FIG. 10 is a schematic diagram of a qualitative comparison of different methods on a Micro CT synthetic artifact dataset;
FIG. 11 is a schematic representation of the results of qualitative comparisons of different methods on cone-beam Micro CT datasets containing real metal artifacts.
Detailed Description
The present application is further illustrated below in conjunction with specific embodiments. It should be understood that these embodiments are intended to illustrate the application rather than limit its scope; after reading the application, modifications of equivalent forms by those skilled in the art fall within the scope of the application defined by the appended claims.
Let the domain of CT images affected by metal artifacts be X_a and the domain of artifact-free images be X. The goal of a metal artifact removal network is to find a mapping function f(x_a) = x, where x_a ∈ X_a and x ∈ X. Given a paired dataset {(x_a^i, x^i)}, a supervised network can be trained on the paired data to learn f.
GAN networks have better fitting ability than common convolutional neural networks (CNNs) and can generate images with more anatomical detail. Therefore, the GAN model is a very good base network model for MAR. A disadvantage of the conventional GAN model is that the generated details may not be consistent with the ground truth. A conventional GAN network, shown in FIG. 1, consists of three parts: the first is the generator sub-network G(x_a; θ_G); the second is the discriminator sub-network D(x_a‖x; θ_D); the third is a fidelity loss based on the L1 or L2 norm, or on high-level features extracted by some specific network. Because the characteristics of metal artifacts differ greatly from those of the image content, it is in general quite straightforward for a powerful generator G(x_a; θ_G) and discriminator D(x_a‖x; θ_D) to remove metal artifacts well. FIG. 2 shows an artifact-removed image obtained by a MAR model based on a conventional GAN network. The texture features of metal artifacts (such as streaks and shadows) are eliminated, and the generator produces an image that appears free of metal artifacts. In other words, a powerful generator successfully converts an image x_a in domain X_a into an image in domain X. However, the generated image obviously does not agree with the ground-truth image x in content. From this preliminary experiment it can be seen that the generative adversarial network is trained well enough to convert between images of different domains, but the fidelity loss is insufficient to guide the generator toward producing an image very similar to the ground truth. A common fidelity loss is
L_p = ||G(x_a; θ_G) − x||_p, (1.1)
where p is 1 or 2, denoting the L1 or L2 norm. Equation (1.1) is a pixel-level fidelity constraint, so it is a strong constraint. Many previous studies, such as the well-known ResNet model, show that it is more efficient to encode a residual vector than the original vector, because the residual vector carries less information to encode, thereby reducing the learning burden on the network. Such residual coding may be called spatial coding. The present application seeks to extend the idea of residual coding to the time domain, i.e., letting the network learn the content-information features step by step. More specifically, the network first learns the low-frequency features of the image (the relatively smooth parts of the CT image) and then captures the high-frequency features (edge information in the image, etc.). The learning process is thus progressive and the network can evolve gradually. To achieve this, the application requires the design of a time-varying loss function instead of a time-invariant one:
L_var = F_t(G(x_a; θ_G), x) (1.2)
where F_t is a time-varying function of the generated image G(x_a; θ_G) and the ground-truth image x. An image consists of a low-frequency component and a high-frequency component, which means the image x can be represented as their combination. Because the low-frequency component is relatively smooth, it can be regarded as piecewise constant, so a pixel x_j has a high probability of being similar to its neighboring pixels. If the GAN model first learns to generate the low-frequency part, the pixel fidelity constraint can be relaxed by introducing a neighborhood similarity constraint as follows:
L_φ = Σ_j ||G(x_a; θ_G)_j − φ(x_j, σ)||_p
where j is the pixel index, φ is a sampling function used to sample pixels in the neighborhood of pixel x_j, and σ is a parameter controlling the size of the neighborhood. The physical meaning of this loss function is that the generated pixel is similar to the neighbors of x_j, so G(x_a; θ_G) learns the low-frequency component of x. If the parameter σ is changed dynamically so as to become a time-varying function, the loss function L_var is obtained:
L_var = Σ_j ||G(x_a; θ_G)_j − φ(x_j, σ_t)||_p
The function φ(x_j, σ_t) is a sampling function in the vicinity of pixel x_j; as iteration proceeds, the parameter σ_t gradually converges to zero, i.e., the loss function eventually degenerates to:
L_p = ||G(x_a; θ_G) − x||_p
This is a common fidelity loss, meaning that the constraint now encourages the GAN to learn the high-frequency components. Next, a function φ(x_j, σ_t) must be constructed to satisfy the above conditions, i.e., it performs sampling and its parameter σ decreases with time. In theory there are many ways to construct such a function, and similar function extensions are all good solutions; the deformable registration network of the present application is one example, with the advantage that the network parameters can be adjusted adaptively without human intervention, so that the generator produces more realistic de-artifacted images. FIG. 3 is a schematic diagram of a GAN network incorporating a registration network. First, each input image x_a undergoes a random affine transformation, and the reference image x undergoes a random affine transformation, giving the transformed images x̃_a and x_T. When x̃_a passes through the generator, an artifact-removed image x̂ is obtained. Because x̂ and x_T do not correspond at the pixel level, in order to compare the differences between them, x̂ and x_T are input into the registration network to partially correct the earlier deformation caused by the random affine transformation. As shown in FIG. 4, the amplitude of the random affine transformations is small and within a controllable range, so the registration network can correct these random deviations.
At the beginning of training, the registration network performs poorly, so there are variable positional deviations between its output and the affine-transformed ground-truth image x_T; that is, in every epoch there is an independent deviation for each pixel of each input image. This property lets the network model realize the random sampling process of the function φ(x_j, σ_t). As training progresses, the registration performance of the registration network improves, i.e., the parameter σ_t decreases; if the whole model converges well, σ_t drops to zero.
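As a concrete illustration of the time-varying constraint described above, the sketch below mimics it in NumPy: each predicted pixel is compared against a pixel randomly drawn from a σ_t-neighborhood of the target, and σ_t decays toward zero over training, at which point the loss degenerates to the plain fidelity loss. The function names and the linear decay schedule are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def time_varying_fidelity(pred, target, sigma_t, rng):
    """L1 loss where each target pixel is replaced by a randomly sampled
    pixel from its sigma_t-neighborhood (the role of phi in the text).
    When sigma_t == 0 this degenerates to the plain L1 fidelity loss."""
    h, w = target.shape
    r = int(round(sigma_t))
    if r == 0:
        return np.abs(pred - target).mean()
    # independent random per-pixel offsets within the neighborhood
    dy = rng.integers(-r, r + 1, size=(h, w))
    dx = rng.integers(-r, r + 1, size=(h, w))
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ys = np.clip(yy + dy, 0, h - 1)
    xs = np.clip(xx + dx, 0, w - 1)
    return np.abs(pred - target[ys, xs]).mean()

def sigma_schedule(epoch, total_epochs, sigma0=4.0):
    """Assumed schedule: sigma_t decays linearly to zero as training proceeds."""
    return sigma0 * max(0.0, 1.0 - epoch / total_epochs)
```

In the patent the decay of σ_t is not a fixed schedule but emerges from the improving registration network; the explicit schedule here only illustrates the degeneration to the fidelity loss.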
The overall architecture of the proposed model MARGANVAC is shown in fig. 5. MARGANVAC comprises three modules: generator G, discriminator D, and registration network R. In addition to the novel training mechanism with the time-domain variable constraint described above, some effective techniques are introduced to let the generator train more effectively. First, for an input image x_a, an estimate of the metal-trace region in the sinogram-domain projection is obtained by linear interpolation, and the LI-corrected reconstructed image x_[LI]a is obtained by filtered back projection (FBP) or the Feldkamp-Davis-Kress (FDK) method. The LI-corrected image x_[LI]a and the artifact-affected image x_a are concatenated along the channel dimension as the input to generator G. Next, to further ease the learning burden of the generator, residual learning is introduced in generator G. The image after metal-artifact removal is then expressed as:
where [m, n] represents the concatenation of images m and n. To better distinguish metal artifacts from content information, self-reconstruction is introduced as a constraint to regularize the generator. In the self-reconstruction process, the artifact-free image y is concatenated into [y, y] and then input into the generator, so the self-reconstructed image is expressed as:
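The channel concatenation, residual connection, and self-reconstruction branch described above can be sketched as follows (NumPy; `G` is any callable standing in for the generator, and adding the residual onto the LI-corrected image is an assumed form, since the patent's formula is not reproduced in this text):

```python
import numpy as np

def generator_forward(x_a, x_li, G):
    """Artifact-removal branch: concatenate the artifact image x_a and
    the LI-corrected image x_li along the channel axis, then add the
    generator's residual prediction back onto x_li (assumed form)."""
    inp = np.concatenate([x_a, x_li], axis=0)  # shape [2C, H, W]
    return x_li + G(inp)

def self_reconstruction(y, G):
    """Self-reconstruction branch: feed the duplicated pair [y, y] so an
    artifact-free image should pass through (nearly) unchanged."""
    return G(np.concatenate([y, y], axis=0))
```

With a generator that predicts a zero residual, the artifact-removal branch simply returns the LI-corrected image, which illustrates how residual learning eases the generator's burden.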
For the registration network R to work properly, a random affine transformation is applied to each input artifact image x_a and to the ground-truth image x. Let the corresponding affine-transformed images be A_1(x_a) and A_2(x). The image with metal artifacts removed is then represented as:
Accordingly, in self-reconstruction, the output becomes:
The generator output and A_2(x) are then taken as input to the registration network R, whose output is the deformation vector field T_x,
After the deformation vector field T_x is obtained, T_x is applied to the generator output image to obtain the resampled image:
Following the same procedure used to generate T_x, the deformation vector field T_y is obtained,
and, in the self-reconstruction branch, the corresponding resampled image is obtained in the same way.
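The resampling step that applies a deformation vector field to an image can be sketched as a nearest-neighbour warp. This is a simplified NumPy stand-in for the (typically bilinear) resampling a registration network such as R would use:

```python
import numpy as np

def warp(image, field):
    """Apply a dense deformation vector field to a 2-D image.
    field[0] holds per-pixel row displacements, field[1] column
    displacements; out-of-range coordinates are clamped to the border."""
    h, w = image.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ys = np.clip(np.rint(yy + field[0]).astype(int), 0, h - 1)
    xs = np.clip(np.rint(xx + field[1]).astype(int), 0, w - 1)
    return image[ys, xs]
```

A zero field leaves the image unchanged; a constant field shifts it, which is the kind of residual affine deviation the registration network is meant to absorb.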
To enable the generator to separate content information from metal artifacts well, a specially designed generator is introduced, as shown in fig. 6. The generator consists of an encoder and a decoder. The encoder maps image samples from the image domain to a hidden space in which the content information and the features of the metal artifacts are separated. The decoder reconstructs the separated content information into an artifact-free image. A deep sub-network of 21 Inception-ResNet modules is introduced between the encoder and decoder to improve the separation of artifact features and content information in the hidden space. The Inception structure proposed in GoogLeNet makes full use of convolution kernels of different sizes and therefore performs better at extracting features. The Inception-ResNet module introduces a residual network structure into the Inception structure, so a deep network consisting of many Inception-ResNet modules can be built to strengthen the separation of content information and artifact features without worrying about whether the network will converge. The structure of the Inception-ResNet module used here is shown in fig. 7(c), where a 1×1 convolution reduces the dimension of the input feature map and thereby the computational cost. In the encoder, the core module is a down-sampling module, shown in fig. 7(b), consisting of a convolution layer, an instance normalization (Instance Normalization) layer, and a ReLU activation function. Instance normalization is chosen over batch normalization (Batch Normalization) because it has proven to perform better for small-batch image generation tasks. In the decoder, the core module is an up-sampling module, shown in fig. 7(e), which mirrors the down-sampling module, the only modification being that a transposed convolution replaces the convolution operation. The discriminator D is built on the PatchGAN discriminator; its construction is shown in fig. 8. The registration network R is based on the overall structure of U-Net, but is deeper than the conventional U-Net; in this very deep U-Net, features at all levels can be exploited to effectively facilitate the registration process.
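Instance normalization, the normalization layer of the down-sampling module described above, can be sketched in a few lines (NumPy, NCHW layout assumed); the full module would wrap this between a strided convolution and a ReLU:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each channel of each sample over its own spatial
    dimensions, as Instance Normalization does (x: [N, C, H, W])."""
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)
```

Unlike batch normalization, the statistics never mix samples across the batch, which is why it behaves better for the batch sizes of 1-2 used in the experiments below.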
Loss function design
The learning process of the model is a process that encourages the generator to reduce metal artifacts while preserving content information. In each adversarial learning iteration, generator G outputs an image with reduced metal artifacts; as the performance of both G and D improves, the output image looks more like an artifact-free image. The purpose of the introduced registration network is to help generator G learn the content features step by step, which requires the network weights θ_R and θ_G to be updated simultaneously during training. As can be seen from fig. 5, MARGANVAC contains four forms of loss: the artifact-correction loss L_Corr, the self-reconstruction loss L_Rec, the adversarial loss L_Adv, and a smoothing loss L_Smooth that constrains the deformation vector field. The total loss is expressed as a weighted sum of these losses, as follows:
where lambda is a hyper-parameter that balances the importance of each loss in the training process.
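With the weights reported later in the training details (λ_Smooth = 10, λ_Adv = 1, λ_Corr = 20, λ_Rec = 20), the weighted sum can be sketched as:

```python
def total_loss(l_corr, l_rec, l_adv, l_smooth,
               lam_corr=20.0, lam_rec=20.0, lam_adv=1.0, lam_smooth=10.0):
    """Weighted sum of the four MARGANVAC losses; default weights are
    the ones used in the experiments."""
    return (lam_corr * l_corr + lam_rec * l_rec
            + lam_adv * l_adv + lam_smooth * l_smooth)
```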
Correction loss L_Corr: since the whole model is based on supervised learning, the most direct and effective constraint is the correction loss, which minimizes the difference between the artifact-removed image and the ground-truth image. However, after the input image undergoes the random affine transformation, there is no longer a strictly pixel-aligned ground-truth image. To solve this problem, the registration network R and generator G are trained simultaneously, and the correction loss is designed as:
where the resampled image is obtained by formula (1.11), and A_2(x), the affine-transformed ground-truth image, by formula (1.10). In many image-to-image translation tasks the L1-norm loss has proven more effective at restoring image detail, so the L1 norm is used here to make the generator retain more detail while reducing metal artifacts.
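As a minimal sketch (NumPy), the L1 correction loss compares the registration-resampled generator output against the affine-transformed ground truth; the function name is illustrative:

```python
import numpy as np

def correction_loss(resampled_output, affine_gt):
    """Mean absolute error (L1) between the resampled artifact-removed
    image and the affine-transformed ground-truth image A_2(x)."""
    return np.abs(resampled_output - affine_gt).mean()
```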
Self-reconstruction loss L_Rec: the training goal is for the generator to learn to remove metal artifacts. At the same time, additional constraints must be introduced to encourage the generator to keep the existing content information unchanged. The core goal is for the network to distinguish metal artifacts from content information, i.e., to reduce metal artifacts when metal and artifacts are present, and to preserve all image content information when no metal artifacts are present. The self-reconstruction loss is therefore introduced as follows:
where the resampled image is obtained by formula (1.13), and A_4(y) is the affine transformation of the artifact-free image y.
Adversarial loss L_Adv: adversarial learning encourages generator G to generate more realistic artifact-free images. To achieve this, the generator should be able to distinguish artifact information from content information in the hidden space. To enhance this capability, two adversarial learning strategies are introduced: one improves the ability of the hidden space to identify metal artifacts, and the other improves its ability to preserve content information. The input data of the two strategies are, respectively, an image affected by metal artifacts and an image free of metal artifacts. The two strategies are carried out simultaneously during adversarial learning, and their respective losses can be written as:
The total adversarial loss is therefore:
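A hedged sketch of the two adversarial terms (NumPy): the patent's exact formulas are not reproduced in this text, so the standard cross-entropy GAN form with the base-2 logarithm stated in claim 7 is assumed here.

```python
import numpy as np

def gan_loss(d_real, d_fake, eps=1e-8):
    """Cross-entropy GAN loss with base-2 logarithm: the discriminator
    should score real (artifact-free) images near 1 and generator
    outputs near 0. Scores are assumed to lie in (0, 1)."""
    return -(np.log2(d_real + eps) + np.log2(1.0 - d_fake + eps))

def total_adversarial_loss(d_real, d_fake_artifact, d_fake_recon):
    """Sum of the artifact-removal and self-reconstruction strategies,
    matching the two-strategy scheme in the text."""
    return gan_loss(d_real, d_fake_artifact) + gan_loss(d_real, d_fake_recon)
```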
Smoothing loss L_Smooth: while minimizing the combined correction and self-reconstruction losses, the resulting deformation vector field may become unsmooth and physically unrealistic. To solve this problem, a diffusion regularizer on the gradient of the deformation vector field is introduced into the original registration network to constrain the field to be smooth. Because the generator has two outputs, there are two corresponding deformation-vector-field regularization terms. The total smoothing loss can therefore be expressed as:
where ∇T is the gradient of the deformation vector field T. In practice, differences between adjacent pixels are used to approximate the gradient at each pixel of the deformation vector field.
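The finite-difference approximation of the smoothing term can be sketched as follows (NumPy; a squared-gradient diffusion regularizer is assumed):

```python
import numpy as np

def smoothness_loss(field):
    """Diffusion regularizer on a deformation vector field of shape
    [2, H, W]: mean squared forward difference along each spatial axis
    approximates the gradient magnitude of the field."""
    dy = np.diff(field, axis=1)  # row-direction differences
    dx = np.diff(field, axis=2)  # column-direction differences
    return (dy ** 2).mean() + (dx ** 2).mean()
```

A constant (e.g. purely translational) field incurs zero penalty, while rapidly varying, physically implausible fields are penalized.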
It will be apparent to those skilled in the art that the steps of the above-described metal artifact removal method for the countermeasure generation network model based on time-varying constraints may be implemented on a general-purpose computing device; they may be centralized on a single computing device or distributed over a network of computing devices. Alternatively, they may be implemented as program code executable by a computing device, stored in a memory device and executed by that device, and in some cases the steps shown or described may be executed in a different order than described herein. They may also be fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, embodiments of the application are not limited to any specific combination of hardware and software.
Experimental preparation
(1) Data set construction
Data set 1: DeepLesion simulated metal-artifact dataset
4000 artifact-free CT images are selected from the DeepLesion dataset, and 90 metal masks are selected from the 100 binarized metal masks of different shapes and sizes provided by CNNMAR to synthesize metal artifacts, giving a total of 360000 paired CT images as the training dataset. Furthermore, 200 additional artifact-free CT images and the remaining 10 binarized metal masks were selected from the DeepLesion dataset to generate a test dataset containing 2000 paired CT images. Reconstruction uses FBP with fan-beam projection, and the image size is 256×256.
Data set 2: real artifact dataset
The real-artifact dataset is a Micro CT dataset: a collection of real CT projections of bone samples with and without metal implants. The paired training dataset is synthesized with a synthesis method based on the real metal artifacts. After the synthetic projection data with the inserted metal traces are obtained, the corresponding reconstructed images are produced by the FDK method. The training set contains 3868 images and the test set contains 537 images. The image size is 364×364.
(2) Implementation and training details
The network is built on the PyTorch deep learning framework and runs on a computer equipped with an Nvidia 2080Ti GPU. During training, an Adam optimizer (with parameters (β1, β2) = (0.5, 0.999)) is used to optimize the loss function. For the DeepLesion dataset, the batch size was set to 2, the learning rate to 0.0001, and the number of training epochs to 5. For the cone-beam Micro CT dataset, the batch size was set to 1, the learning rate to 0.0001, and a total of 70 epochs were trained. The loss-function weights are λ_Smooth = 10, λ_Adv = 1, λ_Corr = 20, λ_Rec = 20.
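The training hyper-parameters above can be collected into a configuration sketch (the dictionary layout and key names are illustrative, not taken from the patent's code):

```python
train_config = {
    "optimizer": "Adam",
    "adam_betas": (0.5, 0.999),
    "datasets": {
        "DeepLesion": {"batch_size": 2, "lr": 1e-4, "epochs": 5},
        "MicroCT":    {"batch_size": 1, "lr": 1e-4, "epochs": 70},
    },
    "loss_weights": {"smooth": 10.0, "adv": 1.0, "corr": 20.0, "rec": 20.0},
}
```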
(3) Evaluation index
On the paired synthetic-artifact dataset, the performance of all MAR methods, including the method proposed in the present application and other classical MAR methods, was quantitatively assessed using structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and their standard deviations. Higher SSIM and PSNR values indicate better metal-artifact removal and better preservation of content information. The standard deviation is introduced to evaluate the methods along the dimension of result stability, because besides SSIM and PSNR, a model must also produce stable results when encountering different data. On the real cone-beam Micro CT dataset with real metal artifacts, only a qualitative, visual assessment of the images was performed, owing to the lack of paired artifact-free CT images.
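PSNR, one of the two quantitative metrics, can be computed directly (NumPy sketch; SSIM is more involved and is typically taken from an image-processing library such as scikit-image):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher values indicate better
    artifact removal and content preservation."""
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```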
MARGANVAC model validity verification
To demonstrate the performance of the proposed method in metal artifact removal, several representative MAR methods were tested for comparison: the traditional LI and FSMAR algorithms; the supervised algorithms CNNMAR, cGANMAR, and U-Net; the dual-domain method InDuDoNet; and the unsupervised ADN, CycleGAN, and attention-guided β-CycleGAN algorithms. All methods were evaluated on the public DeepLesion dataset and the private Micro CT dataset. Apart from the model presented in the present application, all methods are based on publicly released code or on the models presented in the corresponding published papers.
Because projection and reconstruction of CT images are involved in training the dual-domain network, and FDK reconstruction for cone-beam CT consumes a large amount of computational resources, previous dual-domain training used fan-beam CT. Since the Micro CT data are cone-beam CT, the dual-domain network experiment was carried out only on the DeepLesion dataset and not on the Micro CT dataset.
Evaluation on the DeepLesion dataset
Table 1.1 Comparison of PSNR (dB) and SSIM of different methods on the DeepLesion dataset
In this experiment, qualitative and quantitative analyses were performed for each MAR method on the paired DeepLesion dataset. The quantitative results were obtained from the 2000 test-set images. As shown in Table 1.1, the deep learning methods are generally superior to the traditional methods. Among all existing supervised approaches, the dual-domain model InDuDoNet performs best. The PSNR and SSIM scores of the model proposed in the present application are second only to InDuDoNet; however, InDuDoNet is applicable only to fan-beam CT, whereas the proposed model is equally applicable to fan-beam and cone-beam CT.
Fig. 9 shows a chest-slice sample: the artifact-affected image, its corresponding artifact-free image, and the results of the various MAR methods after artifact removal. In the metal-artifact-affected image in fig. 9, almost no tissue or organ structure can be seen in the region adjacent to the metal implant; streak artifacts extend through the entire image, and the image content near the implant is severely damaged. From the artifact-removed images, most of the streak artifacts far from the metal are removed, and the deep learning methods outperform the traditional LI and FSMAR methods in this respect. After processing by the traditional LI and FSMAR methods, the tissue structure is still blurred and secondary artifacts remain. The supervised methods CNNMAR, cGANMAR, InDuDoNet, and U-Net perform better than the traditional methods but do not recover the missing content information well. In contrast, the unsupervised methods tend to generate rich details in the image automatically, but many of these details are inconsistent with the actual missing structures; consequently, structures resembling tissues, organs, or lesions appear that are not actually present. β-CycleGAN and ADN improve on this, but cannot avoid it completely, because the unsupervised approaches lack a fidelity constraint. Analyzing the artifact-removed image produced by MARGANVAC, the method of the present application, it can be clearly seen that several vessel lumens are restored and the vertebral-body boundaries are intact; these sharp details are not visible in the other images. These results indicate that the proposed method performs better at reducing metal artifacts while preserving content information.
Although the InDuDoNet method is slightly better than the proposed method on the SSIM and PSNR indices, fig. 9 shows that the image texture recovered by the proposed method is closer to a real image than that recovered by InDuDoNet.
The stability of the different methods can be compared via the standard deviation (Std) of SSIM and PSNR across the MAR methods. As seen in Table 1.1, the proposed method ranks third in the Std of the PSNR index and first in the Std of the SSIM index. Thus, compared with the other MAR methods, the proposed method yields more stable results while maintaining good image quality.
Evaluation on cone beam Micro CT artifact migration dataset
To verify the performance of the proposed MAR method in real-world applications, a dataset acquired from cone-beam Micro CT was prepared. Paired data were generated by migrating artifact-affected metal traces into projections without metal implants; the quantitative results for all MAR methods are shown in Table 1.2. In terms of SSIM and PSNR, the quality of the original images contaminated with metal artifacts is much worse than in the simulation experiments, which indicates that synthesizing artifact-affected images from real metal artifacts leads to more severe image degradation. From the quantitative results in Table 1.2, the performance of all MAR methods decreases, and the proposed method outperforms the other MAR methods; the lower values are likely because the quality of the original artifact-affected images is lower than in the simulation experiments. A qualitative comparison of the different MAR methods is shown in FIG. 10, which presents a bone cross-sectional image affected by the artifact. In fig. 10, a metal implant inserted into the bone causes serious image degradation and loss of content information. The shading artifacts are stronger than in the simulation experiments, and some bone-tissue detail is completely lost: part of the trabeculae disappears, and the intact cortical bone is split into several portions. In the results of the unsupervised methods CycleGAN and ADN, the gaps between the segmented portions of cortical bone are not well filled, nor are the missing trabeculae well recovered; β-CycleGAN performs slightly better than CycleGAN and ADN in both respects.
Compared with the unsupervised methods, the supervised methods, in particular CNNMAR, show a major improvement in both respects, whereas the U-Net method tends to produce smoothly blurred images. Comparing the images obtained by the proposed method and by CNNMAR, the cortical-bone boundary recovered by the proposed method is clearer and closer to the ground-truth image. Soft-tissue areas near metal implants are more easily contaminated by metal artifacts because their CT values are much smaller than those of bone tissue; in clinical applications, however, proper soft-tissue imaging is critical to diagnosis, so soft-tissue restoration should be a key indicator of metal-artifact reduction. Comparing the soft tissue, the proposed method removes metal artifacts while offering better soft-tissue recovery. In summary, all the deep-learning-based MAR methods can remove most metal artifacts, but they differ considerably in content-information retention: the supervised methods are better than the unsupervised ones, and the proposed method performs best.
Table 1.2 Comparison of PSNR (dB) and SSIM of different methods on the Micro CT synthetic-artifact dataset
Performance evaluation on cone beam Micro CT real artifact image
Experiments were further conducted on the Micro CT dataset affected by real metal artifacts to test the method in practical application. Since no ground-truth images exist, performance can only be evaluated qualitatively. All MAR models were first trained on the Micro CT paired training dataset generated by the artifact-migration method and then tested on real artifact images from the cone-beam Micro CT. Fig. 11 shows a real artifact-affected image and the corresponding artifact-removed images obtained by the different MAR methods. In the real artifact image, the artifact is visually very similar to the synthesized artifact in fig. 10. Comparing partially enlarged regions of the results of the different MAR methods, the supervised methods show better artifact-removal performance than the unsupervised ones, indicating that training of the supervised methods was successful; in other words, the paired dataset synthesized by the artifact-migration method successfully extends the supervised methods to practical application. Among all the compared methods, CNNMAR and the proposed method clearly perform better at removing metal artifacts and recovering image content. Further comparing the details in the enlarged views, the trabecular anatomy obtained by the proposed method is clearer, the soft-tissue area around the metal has few residual artifacts, and the soft-tissue boundary is smoother. This result is consistent with the simulation experiments described above.
Claims (9)
1. A metal artifact removal method for a countermeasure generation network model based on time-varying constraints, characterized in that a GAN-based demetallization-artifact model with a time-domain variable constraint, abbreviated MARGANVAC, is constructed; on the basis of the GAN network, the model applies an adaptive fidelity constraint to each part of the full image by introducing a time-varying constraint term;
the GAN network demetallization-artifact model with the time-domain variable constraint comprises three modules: a generator G, a discriminator D, and a registration network R; the CT image to be processed is input into generator G after a random affine transformation, and the output of generator G is passed to the registration network and the discriminator D; the registration network R performs random sampling, and the neighborhood of each sampled pixel gradually shrinks as the number of iterations increases;
and generating an image with metal artifacts removed from a CT image with metal artifacts by using the trained GAN network demetallization-artifact model with the time-domain variable constraint.
2. The method for removing metal artifacts for a countermeasure generation network model based on time-varying constraints according to claim 1, wherein the training process of the GAN network demetallization-artifact model with the time-domain variable constraint is as follows:
first, a random affine transformation is applied to each input image x_a with metal artifacts, and a random affine transformation is applied to the reference image x unaffected by artifacts, yielding transformed images of x_a and of x, the latter denoted x_T; when the transformed image of x_a passes through generator G, an artifact-free image is obtained, and this image together with x_T is input to the registration network R; the physical meaning of the registration network is expressed by a function whose first part G(x_a; θ_G) is the output of the generator sub-network, where θ_G denotes the generator sub-network parameters and j is the pixel index, and whose second part φ is a sampling function that samples pixels in the neighborhood of pixel x_j, with σ_t a parameter controlling the size of the neighborhood; as iteration proceeds during training, the parameter σ_t gradually converges to zero.
3. The method for removing metal artifacts based on a time-varying constraint countermeasure generation network model according to claim 1, characterized in that residual learning is introduced in generator G; self-reconstruction is introduced as a constraint to regularize the generator; the generator comprises an encoder and a decoder; the encoder maps image samples from the image domain to a hidden space in which the content information and the features of the metal artifacts are separated; the decoder reconstructs the separated content information into an artifact-free image; and between the encoder and decoder, a deep sub-network consisting of 21 Inception-ResNet modules is introduced.
4. The method for removing metal artifacts based on a time-varying constraint countermeasure generation network model according to claim 1, wherein the training process of the GAN network demetallization-artifact model with the time-domain variable constraint introduces self-reconstruction as a constraint to regularize the generator; during self-reconstruction, the artifact-free image y is concatenated into [y, y] and then input into the generator; before concatenation, a random affine transformation is applied to the artifact-free image y to obtain the result A_3(y), and concatenation yields [A_3(y), A_3(y)]; accordingly, in self-reconstruction, the generator output becomes G([A_3(y), A_3(y)]; θ_G), where the artifact-free image y corresponds to the artifact-free reference image y.
5. The metal artifact removal method for a countermeasure generation network model based on time-varying constraints of claim 1, wherein for the image x_a, an estimate of the metal-trace region in the sinogram-domain projection is obtained by linear interpolation, and the LI-corrected reconstructed image x_[LI]a is obtained by filtered back projection (FBP) or the Feldkamp-Davis-Kress (FDK) method; random affine transformations are applied to the LI-corrected image x_[LI]a and the artifact-affected image x_a respectively, yielding the affine-transformed results A_1(x_[LI]a) and A_1(x_a), which are concatenated along the channel dimension as the input to generator G.
6. The method for removing metal artifacts based on a time-varying constraint countermeasure generation network model according to claim 1, characterized in that residual learning is introduced in generator G, and the image after metal-artifact removal is expressed as:
where [x_a, x_[LI]a] denotes the concatenation of x_a and x_[LI]a; in self-reconstruction, the generator output becomes the self-reconstructed image; the generator output and A_2(x) are then taken as input to the registration network R, where A_2(x) denotes the random affine transformation of the reference image x; the output of the registration network R is the deformation vector field T_x, with θ_R the registration-network parameters; after the deformation vector field T_x is obtained, T_x is applied to the generator output to obtain a resampled image; following the same procedure used to generate T_x, the deformation vector field T_y is obtained, where A_4(y) denotes the random affine transformation of the reference image y, and in self-reconstruction the corresponding resampled image is obtained in the same way; the generator output image and A_2(x) serve as input to the discriminator D, and likewise, in self-reconstruction, the generator output and A_4(y) serve as input to the discriminator D.
7. The method of claim 1, wherein the GAN network demetallization-artifact model with the time-domain variable constraint comprises four loss functions: the artifact-correction loss L_Corr, the self-reconstruction loss L_Rec, the adversarial loss L_Adv, and the smoothing loss L_Smooth for constraining the deformation vector field; the total loss is expressed as a weighted sum of these losses,
Each λ represents a weight coefficient of the corresponding loss;
the correction loss L_Corr: the registration network R and the generator G are trained simultaneously, and the correction loss is designed as:
using the L1 norm allows the generator to retain more detail while reducing metal artifacts; E[·] denotes expectation, and x is drawn from the dataset whose domain is the set of CT images affected by metal artifacts;
the self-reconstruction loss L_Rec: the training goal is for the generator to learn to remove metal artifacts, while additional constraints are introduced so that the generator retains more content information, i.e., reduces metal artifacts in the presence of metal and artifacts, and retains all image content information in the absence of metal artifacts;
where E[·] denotes expectation, and y is drawn from the dataset whose domain is the set of images unaffected by artifacts;
the adversarial loss L_Adv: adversarial learning encourages generator G to generate more realistic artifact-free images; to achieve this goal, the generator should be able to distinguish artifact information from content information in the hidden space; to enhance this capability, two adversarial learning strategies are introduced: one improves the ability of the hidden space to identify metal artifacts, and the other improves its ability to preserve content information; the input data of the two strategies are, respectively, an image affected by metal artifacts and an image free of metal artifacts; the two strategies are carried out simultaneously during adversarial learning, and their respective losses are:
the base of the logarithm is 2, and D(·) denotes the output of the discriminator D; the total adversarial loss is therefore:
the smoothing loss L_Smooth is:
where ∇T is the gradient of the deformation vector field T.
8. A computer device, characterized in that: the computer device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the time-varying-constraint-based metal artifact removal method for the countermeasure generation network model of any one of claims 1 to 7 when executing the computer program.
9. A computer-readable storage medium, characterized by: the computer readable storage medium stores a computer program for performing the time-varying constraint based metal artifact removal method of the countermeasure generation network model of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310878651.XA CN116894783A (en) | 2023-07-18 | 2023-07-18 | Metal artifact removal method for countermeasure generation network model based on time-varying constraint |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116894783A true CN116894783A (en) | 2023-10-17 |
Family
ID=88314602
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310878651.XA Pending CN116894783A (en) | 2023-07-18 | 2023-07-18 | Metal artifact removal method for countermeasure generation network model based on time-varying constraint |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116894783A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN117914656A (en) * | 2024-03-13 | 2024-04-19 | 北京航空航天大学 | End-to-end communication system design method based on neural network
CN117914656B (en) * | 2024-03-13 | 2024-05-10 | 北京航空航天大学 | End-to-end communication system design method based on neural network
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110827216B (en) | Multi-generator generation countermeasure network learning method for image denoising | |
CN109754403A (en) | Tumour automatic division method and system in a kind of CT image | |
CN108492269A (en) | Low-dose CT image de-noising method based on gradient canonical convolutional neural networks | |
CN110675461A (en) | CT image recovery method based on unsupervised learning | |
CN110930416A (en) | MRI image prostate segmentation method based on U-shaped network | |
Li et al. | Low-dose CT streak artifacts removal using deep residual neural network | |
CN112837244B (en) | Low-dose CT image denoising and artifact removing method based on progressive generation confrontation network | |
WO2023202265A1 (en) | Image processing method and apparatus for artifact removal, and device, product and medium | |
CN115953494B (en) | Multi-task high-quality CT image reconstruction method based on low dose and super resolution | |
CN112017131B (en) | CT image metal artifact removing method and device and computer readable storage medium | |
WO2022246677A1 (en) | Method for reconstructing enhanced ct image | |
Wang et al. | Adaptive convolutional dictionary network for CT metal artifact reduction | |
CN113222852B (en) | Reconstruction method for enhanced CT image | |
Niu et al. | Low-dimensional manifold-constrained disentanglement network for metal artifact reduction | |
CN110047054A (en) | A kind of GAN medical image denoising method for extracting feature based on VGG-19 | |
CN116894783A (en) | Metal artifact removal method for countermeasure generation network model based on time-varying constraint | |
CN108038840B (en) | Image processing method and device, image processing equipment and storage medium | |
CN116645283A (en) | Low-dose CT image denoising method based on self-supervision perceptual loss multi-scale convolutional neural network | |
CN114897726A (en) | Chest CT image artifact removing method and system based on three-dimensional generation countermeasure network | |
Chan et al. | An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction | |
Du et al. | Deep-learning-based metal artefact reduction with unsupervised domain adaptation regularization for practical CT images | |
Li et al. | Quad-Net: Quad-domain network for CT metal artifact reduction | |
CN116934721A (en) | Kidney tumor segmentation method based on multi-scale feature extraction | |
Sureau et al. | Convergent admm plug and play pet image reconstruction | |
Zhu et al. | CT metal artifact correction assisted by the deep learning-based metal segmentation on the projection domain |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||