CN110728729B - Attention mechanism-based unsupervised CT projection domain data recovery method - Google Patents


Info

Publication number
CN110728729B
CN110728729B
Authority
CN
China
Prior art keywords
sinogram
feature matrix
attention
target
encoder
Prior art date
Legal status
Active
Application number
CN201910931302.3A
Other languages
Chinese (zh)
Other versions
CN110728729A (en)
Inventor
史再峰
王仲琦
罗韬
曹清洁
程明
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201910931302.3A priority Critical patent/CN110728729B/en
Publication of CN110728729A publication Critical patent/CN110728729A/en
Application granted granted Critical
Publication of CN110728729B publication Critical patent/CN110728729B/en
Status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • G06T9/00: Image coding
    • G06T9/002: Image coding using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an unsupervised CT projection domain data recovery method based on an attention mechanism. Its main technical features are: preparing an input dataset comprising a source sinogram and a target sinogram; constructing a generator module to obtain a target-domain sinogram; constructing a discriminator module to perform the discrimination processing; constructing the loss function of the GAN network; iteratively training the GAN network at scale on the complete CT sinogram dataset until the change in the network's loss function between iterations does not exceed a preset threshold; and using CT sinograms affected by metal traces as the test dataset to obtain sinograms free of metal traces. The invention is reasonably designed: by supplementing incomplete CT projection data affected by metal implants, it recovers that data at high quality and eliminates the beam-hardening and metal artifacts in the CT image, so it can be readily applied in practice and promotes the further development of precision medicine.

Description

Attention mechanism-based unsupervised CT projection domain data recovery method
Technical Field
The invention belongs to the technical field of computer tomography, and particularly relates to an unsupervised CT projection domain data recovery method based on an attention mechanism.
Background
X-ray computed tomography (Computed Tomography, CT) has been widely used in industrial inspection and medical diagnosis, but CT examinations have long been plagued by artifacts, which create great difficulties for clinical examination and diagnosis.
Metal artifacts are among the common artifacts in Computed Tomography (CT) images; they are introduced by metal implants during imaging and reconstruction, and their formation involves several mechanisms such as beam hardening, scattering, noise and nonlinear partial volume effects. During clinical examination, a metal implant absorbs a large number of the X-ray photons incident during the CT scan, so that projection-domain data in those regions are missing (CT projection data is also called a sinogram); that is, the projection data are incomplete. The projection data of the metal implant appear as bright metal traces in the sinogram (the more numerous and the wider the metal traces, the more severely the projection data are affected by the implant and the more data are missing). After reconstruction, star-shaped or radial artifacts appear around the implant, seriously degrading the clarity of the tissue structures surrounding the metal implant in the image and making it very difficult to assess the implant and the surrounding tissue. This problem has plagued clinical examination for many years.
Removing such artifacts by conventional modelling is very difficult. Among the many image reconstruction algorithms, the most commonly used is the filtered back projection algorithm (FBP), but tomographic images obtained directly with this algorithm exhibit noticeable artifacts.
Because CT imaging reconstructs from projection-data space, incomplete projection data considerably affects subsequent image reconstruction and diagnosis. If the missing projection data are fully supplemented, the influence of metal artifacts can be effectively eliminated in the subsequent image reconstruction.
In recent years, deep learning has been increasingly applied to medical image processing. An important advantage of the deep convolutional neural network is that it extracts information from raw data into abstract semantic concepts layer by layer, giving it outstanding strengths in global feature extraction and data recovery. In such a network, the feature matrix of the input image is extracted by convolution operations and then passed through an activation function, which improves the nonlinear expressive capacity of the model and lets it learn complex linear-to-nonlinear mappings. Common activation functions include the logistic function (sigmoid) and the hyperbolic tangent (tanh); the activation function also prepares the convolutional neural network for back propagation.
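The two activation functions named above can be sketched in a few lines of numpy (an illustrative sketch only; the patent itself specifies no code):

```python
import numpy as np

def sigmoid(z):
    # Logistic activation: squashes values into (0, 1), smooth everywhere,
    # so gradients for back propagation always exist.
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Hyperbolic tangent: squashes values into (-1, 1) and is zero-centred.
    return np.tanh(z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))  # monotonically increasing, sigmoid(0) = 0.5
print(tanh(z))     # monotonically increasing, tanh(0) = 0.0
```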
Current Computed Tomography (CT) Metal Artifact Reduction (MAR) methods based on deep neural networks are mostly performed in the CT image domain and are supervised methods requiring paired CT images that are identical in anatomical detail, one with metal artifacts and the other without metal artifacts, which rely heavily on simulation data for training. However, the supervised approach is often not well generalized to clinical applications, as the simulation data may not perfectly mimic the underlying physical mechanisms of CT imaging.
In deep learning, the more challenging but also more practical unsupervised methods have received growing attention and research: input data that are not completely paired can be used for training. Data recovery in the CT projection domain can then be seen as a form of data-to-data conversion, from incomplete projection data affected by the metal implant (the source sinogram) to complete projection data unaffected by it (the target sinogram). Unsupervised data conversion can be implemented with a generative adversarial network (GAN), which unifies two basic models of traditional deep learning, the generative model and the discriminative model, in a single framework. In a GAN the two models play a game against each other and are trained in alternating iterations; in this process the GAN framework preserves the details of the input data well and learns its complex data distribution. A good data recovery model should: (i) supplement the incomplete projection data caused by the metal implant as completely as possible; and (ii) preserve the anatomical content of the input CT data. CT sinogram conversion is implemented end to end, based on a GAN network and an unsupervised method that combines an attention mechanism with an adaptive layer-instance normalization function (AdaLIN). The attention mechanism directs the conversion model to focus on the important regions of the source and target sinograms: an auxiliary classifier (CAM), effectively a 0-1 binary classifier, distinguishes feature matrices of the source sinogram from those of the target sinogram and, in the process, learns the importance weights of the feature matrices, helping the model know where the intensive conversion should occur.
The adaptive layer-instance normalization function comprises two parts: instance normalization (IN) and layer normalization (LN). Instance normalization normalizes over image pixels, computing the mean μ_I and variance σ_I² of the feature matrix accordingly; layer normalization normalizes along the channel direction, computing the mean μ_L and variance σ_L². Their combination, controlled by learned parameters, flexibly governs the variation of shape and texture in the sinogram, enhancing the robustness of the recovery model so that its data recovery capability is not affected by the severity of the metal traces in the input sinogram.
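The difference between the two normalizations can be sketched as follows (an illustrative numpy sketch assuming feature maps of shape (C, H, W); the function names are ours, not the patent's):

```python
import numpy as np

def instance_stats(f):
    # Instance normalisation (IN) statistics: taken over the image pixels of
    # each feature matrix separately, giving one (mu_I, sigma_I^2) per channel.
    return f.mean(axis=(1, 2)), f.var(axis=(1, 2))

def layer_stats(f):
    # Layer normalisation (LN) statistics: taken along the channel direction,
    # i.e. over all feature matrices jointly, giving a single (mu_L, sigma_L^2).
    return f.mean(), f.var()
```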
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an unsupervised CT projection domain data recovery method based on an attention mechanism, which fully reduces the metal artifacts in CT images by supplementing CT projection data. Based on a GAN network, and with unmatched input sinograms, it converts sinograms affected by a metal implant into sinograms unaffected by it, corrects the metal traces, completely supplements the CT projection data, and finally eliminates the severe metal artifacts in the reconstructed CT image, yielding a high-quality CT image.
The invention solves the technical problems by adopting the following technical scheme:
an unsupervised CT projection domain data recovery method based on an attention mechanism comprises the following steps:
step 1, preparing an input data set comprising a source sinogram affected by a metal implant and a target sinogram unaffected by the metal implant;
step 2, constructing a generator module, and obtaining a target domain sinogram through calculation, normalization processing and up-sampling of an encoder A, an auxiliary classifier B and an attention feature matrix;
step 3, constructing a discriminator module in which an encoder C, an auxiliary classifier D and a global classifier F_T perform the discrimination processing;
step 4, constructing a loss function of the GAN network;
step 5, performing large-scale iterative training on the GAN network by using the complete CT sinogram data set, and continuously adjusting the super parameter value, the learning rate and the network iteration number in the step 4 in the training process until the change amplitude of the loss function of the whole network does not exceed a preset threshold value during each iteration;
step 6, using the CT sinograms affected by metal traces as the test data set, obtaining recovered sinograms G_{S→T}(x_S) free of metal traces.
Further, the source sinogram and the target sinogram in the step 1 are not completely paired.
Further, the specific implementation method of the step 2 includes the following steps:
step one, constructing an encoder A consisting of two convolution layers; the input is down-sampled by the convolution operations of encoder A for feature extraction, and the output feature matrices are obtained;
secondly, inputting the feature matrixes into an auxiliary classifier B to perform classification judgment on the feature matrixes of the source sinogram and the target sinogram, and finding out the most important area for judging whether one feature matrix is from the source sinogram or the target sinogram in each feature matrix;
calculating an attention feature matrix by using the importance weight;
step four, vectorizing the attention feature matrices and inputting them to a multi-layer perceptron (MLP) function; the network computes the weight factor γ and bias β of each attention feature matrix through gradient updates and back-propagation iterations, performs normalization on the attention feature matrices, and then computes the normalized feature matrix by weighted summation;
and step five, passing the normalized feature matrix of step four through an activation function to generate the target-domain sinogram.
Further, the attention feature matrix a_S(x) of the above step is computed as:

$$a_S(x) = \{ w_S^k \, C_S^k(x) \mid 1 \le k \le n \}$$

where $C_S^k(x)$ is the kth feature matrix after the convolution operation, $w_S^k$ is the importance weight of the kth feature matrix learned by the auxiliary classifier, and n is the number of feature matrices output by the encoder.

The normalized feature matrix a is computed by weighted summation as:

$$\hat{a}_I = \frac{a_S(x) - \mu_I}{\sqrt{\sigma_I^2 + \epsilon}}, \qquad \hat{a}_L = \frac{a_S(x) - \mu_L}{\sqrt{\sigma_L^2 + \epsilon}}$$

$$a = \gamma \left( \rho \, \hat{a}_I + (1 - \rho) \, \hat{a}_L \right) + \beta$$

where ρ is a weight updated by back propagation, μ_I is the mean of instance normalization, μ_L is the mean of layer normalization, σ_I and σ_L are the corresponding standard deviations, and ε is a deviation coefficient.
Further, the specific implementation method of the step 3 is as follows:
Let T and G_{S→T}(x_S) denote samples from the target sinogram and from the converted source sinogram; together they form a set y that serves as the input of the discriminator module.
The discriminator module comprises an encoder C, an auxiliary classifier D and a global classifier F_T connected in sequence.
The encoder C has the same structure as the encoder A of the generator module; after y is input, the convolution operations of its convolution layers output the feature matrix C_T(y).
C_T(y) is input into the auxiliary classifier D, whose output is the attention feature matrix a_T(y).
The attention feature matrix a_T(y) is then input to the global classifier F_T, which judges whether the generated conversion map is close to the target sinogram distribution.
After the activation function of the global classifier F_T, the discriminator module outputs the probability value D_T(y).
Further, the loss function of the GAN network constructed in step 4 is:

$$\min_G \max_D L = \min_G \max_D \left( \lambda_1 L_1 + \lambda_2 L_2 + \lambda_3 L_3 + \lambda_4 L_4 \right)$$

where L_1, L_2, L_3, L_4 are the adversarial loss, the identity loss, the auxiliary classifier B loss and the auxiliary classifier D loss, respectively, and each λ is a hyper-parameter controlling the corresponding loss function.
The invention has the advantages and positive effects that:
the invention utilizes incompletely paired CT sinogram pairs which are affected by the metal implant and are not affected by the metal implant, converts the source sinogram into the target sinogram through a GAN network, restores CT projection data affected by the metal implant under an unsupervised condition, and completely eliminates beam hardening artifact and metal artifact existing in a CT image through supplementing the CT projection data, thereby realizing high-quality restoration of the CT projection data which are affected by the metal implant but are incomplete. Meanwhile, the invention does not need a training data set which is strictly matched, so that the invention can be better applied in practice and promotes the further development of accurate medical treatment.
Drawings
FIG. 1 is a schematic diagram of an unsupervised CT projection domain data recovery method of the present invention.
Detailed Description
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
An unsupervised CT projection domain data recovery method based on an attention mechanism, as shown in fig. 1, comprises the following steps:
Step 1, dataset preparation: the source sinograms affected by the metal implant and the target sinograms unaffected by it form the input dataset x ∈ {S, T}, where S denotes the source sinogram and T the target sinogram; the two are not perfectly paired.
Step 2, building the generator module G_{S→T}; the specific implementation method comprises the following steps:
(1) Encoder A design: the encoder consists of two convolution layers; the input x is down-sampled by the encoder's convolution operations for feature extraction, and the output is the feature matrix C_S^k(x), where k indexes the kth feature matrix after the convolution operation.
(2) The feature matrices are input into the auxiliary classifier (CAM) B for classification, which judges whether each feature matrix belongs to the source or the target sinogram (this process is unsupervised). w_S^k is the importance weight of the kth feature matrix learned by the auxiliary classifier; these weights are updated through gradient updates and back propagation during the network's iterative training and determine the importance of the corresponding features. Based on them, the auxiliary classifier B can find, within each feature matrix, the region most important for judging whether that matrix comes from the source or the target sinogram, thereby implementing the attention mechanism; the classifier's output η_S(x) is computed from the weighted, spatially pooled feature matrices.
During each iterative training pass of the network, if the value of η_S(x) is 0, the feature-matrix data comes from the source sinogram; if the value of η_S(x) is 1, it comes from the target sinogram.
(3) Using the importance weights, the attention feature matrix is calculated:

$$a_S(x) = \{ w_S^k \, C_S^k(x) \mid 1 \le k \le n \}$$

where a_S(x) denotes the attention feature matrix produced by the attention mechanism and n is the number of feature matrices output by the encoder.
(4) The attention feature matrix a_S(x) is vectorized and input to an MLP (multi-layer perceptron) function; the network computes two parameter vectors γ and β through gradient updates and back-propagation iterations, with γ serving as the weight factor of each attention feature matrix and β as the bias:

$$\gamma, \beta = \mathrm{MLP}(a_S(x))$$
Proceeding to the decoder, the attention feature matrix is first normalized, as shown below:

$$\hat{a}_I = \frac{a_S(x) - \mu_I}{\sqrt{\sigma_I^2 + \epsilon}}, \qquad \hat{a}_L = \frac{a_S(x) - \mu_L}{\sqrt{\sigma_L^2 + \epsilon}}$$

where μ_I is the mean of instance normalization, μ_L is the mean of layer normalization, σ_I and σ_L are the corresponding standard deviations, and ε is a deviation coefficient.

The weighted summation then yields the normalized feature matrix a:

$$a = \gamma \left( \rho \, \hat{a}_I + (1 - \rho) \, \hat{a}_L \right) + \beta, \qquad \rho \leftarrow \mathrm{clip}_{[0,1]}(\rho - \tau \, \Delta\rho)$$

where ρ is a learning weight, Δρ is the amount by which back propagation updates ρ, the value of ρ is constrained to the range [0, 1] and adjusted during the generator's iterative training, and τ is the learning rate of the ρ gradient update.
(5) The normalized feature matrix a of step (4) passes through the up-sampling operation of the decoder (one convolution layer) and, after an activation function, gives the output G_{S→T}(x_S), i.e. the target-domain sinogram generated by the generator module.
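Steps (3) and (4) above can be sketched numerically as follows. This is a simplified numpy illustration of the attention weighting and the AdaLIN combination; the function names and array shapes are our assumptions, and a real implementation operates on learned tensors inside the network:

```python
import numpy as np

def attention_features(feats, w):
    # feats: (n, H, W), the n feature matrices C_S^k(x) from encoder A.
    # w: (n,), importance weights w_S^k learned by auxiliary classifier B.
    # Returns a_S(x): each feature matrix scaled by its importance weight.
    return w[:, None, None] * feats

def adalin(a, rho, gamma, beta, eps=1e-5):
    # a: (n, H, W) attention feature matrix. rho in [0, 1] balances
    # instance normalisation (per-matrix spatial statistics) against
    # layer normalisation (statistics over all matrices jointly).
    a_I = (a - a.mean(axis=(1, 2), keepdims=True)) / np.sqrt(
        a.var(axis=(1, 2), keepdims=True) + eps)
    a_L = (a - a.mean()) / np.sqrt(a.var() + eps)
    return gamma * (rho * a_I + (1.0 - rho) * a_L) + beta

def update_rho(rho, delta_rho, tau):
    # Gradient step on rho with learning rate tau, clipped back into [0, 1].
    return float(np.clip(rho - tau * delta_rho, 0.0, 1.0))
```

With rho = 1 the result is pure instance normalisation; with rho = 0 it is pure layer normalisation, and training picks the mixture in between.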
Step 3, constructing the discriminator module D_T; the specific method is as follows:
Let T and G_{S→T}(x_S) denote samples from the target sinogram and from the converted source sinogram; together they form a set y that serves as the input of the discriminator module.
The discriminator module is composed, in order, of the encoder C, the auxiliary classifier D and the global classifier F_T.
The encoder C is identical in structure to the encoder A of the generator module. After y is input to C, the convolution operations of its convolution layers output the feature matrix C_T(y).
C_T(y) is input into the auxiliary classifier D (which, like B in the generator module, learns the importance weights of the feature matrices), and the output is the attention feature matrix a_T(y).
The attention feature matrix a_T(y) is then input to the global classifier F_T to determine whether the generated conversion map is sufficiently close to the target sinogram distribution, i.e. whether the source sinogram with severe metal traces has indeed been converted into a target sinogram.
After the activation function of the global classifier F_T, the whole discriminator module outputs the probability value D_T(y).
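The discriminator pipeline just described can be sketched schematically. This is a toy numpy version in which single dense layers stand in for the convolutional encoder C, the auxiliary classifier D and the global classifier F_T; all weight names (enc_w, cam_w, fc_w) are illustrative, not from the patent:

```python
import numpy as np

def discriminator(y, enc_w, cam_w, fc_w):
    # y: input vector standing in for a sinogram sample from the set y.
    c = np.maximum(enc_w @ y, 0.0)       # encoder C: feature matrix C_T(y)
    a = cam_w * c                        # auxiliary classifier D: weighted a_T(y)
    score = fc_w @ a                     # global classifier F_T: scalar score
    return 1.0 / (1.0 + np.exp(-score))  # activation: probability D_T(y) in (0, 1)
```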
Step 4, designing a loss function of the GAN network, wherein the specific method is as follows:
(1) Adversarial loss: the distribution of the converted sinogram is matched to that of the target sinogram using the adversarial loss:

$$L_1 = \mathbb{E}_{y \sim T}\!\left[ (D_T(y))^2 \right] + \mathbb{E}_{x \sim S}\!\left[ \left( 1 - D_T(G_{S \to T}(x_S)) \right)^2 \right]$$

where $\mathbb{E}$ denotes the expected value.
(2) Identity loss: to ensure that the gray-scale distributions of the input and output sinograms are similar, an identity-consistency constraint is applied to the generator. Given a sinogram x ∈ T, it should not change after conversion with G_{S→T}:

$$L_2 = \mathbb{E}_{x \sim T}\!\left[ \left| x - G_{S \to T}(x_T) \right|_1 \right]$$
(3) CAM loss: using the information from the auxiliary classifiers B and D, given an image x ∈ {S, T}, G_{S→T} and D_T need to know where the intensive conversion takes place, i.e. where the difference between the two domains is greatest in the current state:

$$L_3 = -\left( \mathbb{E}_{x \sim S}\!\left[ \log(\eta_S(x)) \right] + \mathbb{E}_{x \sim T}\!\left[ \log(1 - \eta_S(x)) \right] \right)$$

$$L_4 = \mathbb{E}_{x \sim T}\!\left[ (\eta_T(x))^2 \right] + \mathbb{E}_{x \sim S}\!\left[ \left( 1 - \eta_T(G_{S \to T}(x)) \right)^2 \right]$$
total objective function:
finally, the multiple loss functions are weighted and combined to obtain a loss function minmaxL of the final GAN network, which is:
min maxL=min max(λ 1 L 12 L 23 L 34 L 4 )
where lambda is the hyper-parameter controlling each loss function.
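The loss terms above can be sketched as plain numpy functions. This is an illustrative sketch only: the CAM terms L3 and L4 are assumed precomputed and passed in, and the default weights in `lam` are example values rather than prescribed ones:

```python
import numpy as np

def adversarial_loss(d_real, d_fake):
    # L1: least-squares adversarial loss over discriminator outputs in [0, 1].
    return np.mean(d_real**2) + np.mean((1.0 - d_fake)**2)

def identity_loss(x_t, g_x_t):
    # L2: L1-norm identity constraint; a target sinogram passed through the
    # generator should come back unchanged.
    return np.mean(np.abs(x_t - g_x_t))

def total_loss(l1, l2, l3, l4, lam=(1.0, 10.0, 10.0, 10.0)):
    # Weighted combination lambda_1*L1 + ... + lambda_4*L4 of the GAN objective.
    return lam[0] * l1 + lam[1] * l2 + lam[2] * l3 + lam[3] * l4
```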
Step 5, performing large-scale iterative training of the GAN network on the complete CT sinogram dataset, continuously adjusting the hyper-parameter values, the learning rate and the number of network iterations of step 4 during training, until the change of the loss function min max L between iterations does not exceed a preset threshold. At that point the CT projection-domain data affected by the metal implant has been complemented.
Step 6, after model training is completed, the CT sinograms affected by metal traces are used as the test data set X, as shown in the end section of FIG. 1, thereby obtaining recovered sinograms G_{S→T}(x_S) free of metal traces. According to the test results, the hyper-parameters of the previous steps are then adjusted, continuously strengthening the image-restoration capability of the model.
The invention is tested by one specific example as follows:
the recovery model of this example is based on a GAN network implementation. A vertebral-site dataset is selected. CT images from this dataset are split into two groups, one with artifacts and the other without artifacts. First, we determine a region with a CT value greater than 2500 as a metal region. Then, a CT image whose maximum connected metal region has more than 400 pixels is selected as an artifact-affected image. A CT image having a maximum CT value of less than 2000 is selected as an artifact-free image, and these images are projected in MATLAB to obtain projection data (sinogram) corresponding thereto. After this selection, the group affected by the metal implant contained 6000 source sinograms and the group not affected by the metal implant contained 20000 target sinograms. We subtracted 200 parts of the source sinogram from the group affected by the metal implant for testing.
The method is implemented in a deep-learning framework; the deviation coefficient in the normalization process is ε = 1×10⁻⁵. An optimization algorithm with learning rate τ = 1×10⁻⁴ is used to minimize the objective function. The generator's encoder consists of two convolution layers with stride 2; the generator's decoder consists of four residual operation blocks and two up-sampling convolution layers with stride 1; all convolution kernels are 3×3. The important hyper-parameters of the objective function are set to λ₁ = 1, λ₂ = 10, λ₃ = λ₄ = 10. All weight parameters are initialized from a normal distribution, and the whole network is trained for 50000 iterations.
The training set is input into the generative adversarial network for training, observing whether the objective function converges to a minimum; if not, the learning rate of the network is changed and training is repeated until the objective function converges, at which point training stops. Finally, the attention-based unsupervised CT projection-domain data recovery model is tested with the test set (200 sinograms affected by metal traces), yielding complete CT sinograms with the metal traces removed, and ultimately reconstructing high-quality CT images with the metal artifacts removed and details well preserved.
Matters not described in detail in the present invention belong to the prior art.
It should be emphasized that the embodiments described herein are illustrative rather than limiting; the invention therefore includes, but is not limited to, the examples given in the detailed description, and other embodiments derived by a person skilled in the art from the technical solutions of the invention likewise fall within the scope of protection of the invention.

Claims (3)

1. An unsupervised CT projection domain data recovery method based on an attention mechanism is characterized by comprising the following steps:
step 1, preparing an input data set comprising a source sinogram affected by a metal implant and a target sinogram unaffected by the metal implant;
step 2, constructing a generator module, and obtaining a target domain sinogram through calculation, normalization processing and up-sampling of an encoder A, an auxiliary classifier B and an attention feature matrix;
step 3, constructing a discriminator module in which an encoder C, an auxiliary classifier D and a global classifier F_T perform the discrimination processing;
step 4, constructing a loss function of the GAN network;
step 5, performing large-scale iterative training on the GAN network by using the complete CT sinogram data set, and continuously adjusting the super parameter value, the learning rate and the network iteration number in the step 4 in the training process until the change amplitude of the loss function of the whole network does not exceed a preset threshold value during each iteration;
step 6, using the CT sinograms affected by metal traces as the test data set, obtaining recovered sinograms G_{S→T}(x_S) free of metal traces;
The specific implementation method of the step 2 comprises the following steps:
step one, constructing an encoder A consisting of two convolution layers; the input is down-sampled by the convolution operations of encoder A for feature extraction, and the output feature matrices are obtained;
secondly, inputting the feature matrixes into an auxiliary classifier B to perform classification judgment on the feature matrixes of the source sinogram and the target sinogram, and finding out the most important area for judging whether one feature matrix is from the source sinogram or the target sinogram in each feature matrix;
calculating an attention feature matrix by using the importance weight;
step four, vectorizing the attention feature matrices and inputting them to a multi-layer perceptron (MLP) function; the network computes the weight factor γ and bias β of each attention feature matrix through gradient updates and back-propagation iterations, performs normalization on the attention feature matrices, and then computes the normalized feature matrix by weighted summation;
step five, passing the normalized feature matrix of step four through an activation function to generate the target-domain sinogram;
the step of focusing attention on the feature matrix a S (x) The method of (1) is as follows:
Figure FDA0004116059550000011
wherein ,
Figure FDA0004116059550000012
is the kth feature matrix after convolution operation, < >>
Figure FDA0004116059550000013
Is the importance weight of the kth feature matrix learned by the auxiliary classifier; n is the number of feature matrices output by the encoder;
the method for calculating the normalized feature matrix alpha by weighted summation comprises the following steps:
Figure FDA0004116059550000014
wherein ,
Figure FDA0004116059550000015
where ρ is the back propagation update, μ I Is the mean, mu, of the normalization of the examples L Is the mean value of layer normalization, sigma I and σL Standard deviation, epsilon is a deviation coefficient;
the specific implementation method of the step 3 is as follows:
let T and G_{S→T}(x_S) denote samples from the target sinogram and from the converted source sinogram; together they form a set y that serves as the input of the discriminator module;
the discriminator module comprises an encoder C, an auxiliary classifier D and a global classifier F_T connected in sequence;
the encoder C has the same structure as the encoder A of the generator module; after y is input, the convolution operations of its convolution layers output the feature matrix C_T(y);
C_T(y) is input into the auxiliary classifier D, whose output is the attention feature matrix a_T(y);
the attention feature matrix a_T(y) is then input to the global classifier F_T, which judges whether the generated conversion map is close to the target sinogram distribution;
after the activation function of the global classifier F_T, the discriminator module outputs the probability value D_T(y).
2. An attention-based method for unsupervised CT projection domain data recovery as claimed in claim 1, wherein: the source sinogram and the target sinogram in the step 1 are not completely paired.
3. An attention-based method for unsupervised CT projection domain data recovery as claimed in claim 1, wherein: the loss function of the GAN network constructed in step 4 is:

$$\min_G \max_D L = \min_G \max_D \left( \lambda_1 L_1 + \lambda_2 L_2 + \lambda_3 L_3 + \lambda_4 L_4 \right)$$

where L_1, L_2, L_3, L_4 are the adversarial loss, the identity loss, the auxiliary classifier B loss and the auxiliary classifier D loss, respectively, and each λ is a hyper-parameter controlling the corresponding loss function.
CN201910931302.3A 2019-09-29 2019-09-29 Attention mechanism-based unsupervised CT projection domain data recovery method Active CN110728729B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910931302.3A CN110728729B (en) 2019-09-29 2019-09-29 Attention mechanism-based unsupervised CT projection domain data recovery method


Publications (2)

Publication Number Publication Date
CN110728729A CN110728729A (en) 2020-01-24
CN110728729B true CN110728729B (en) 2023-05-26

Family

ID=69219608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910931302.3A Active CN110728729B (en) 2019-09-29 2019-09-29 Attention mechanism-based unsupervised CT projection domain data recovery method

Country Status (1)

Country Link
CN (1) CN110728729B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111400754B (en) * 2020-03-11 2021-10-01 支付宝(杭州)信息技术有限公司 Construction method and device of user classification system for protecting user privacy
WO2021184389A1 (en) * 2020-03-20 2021-09-23 深圳先进技术研究院 Image reconstruction method, image processing device, and device with storage function
CN111862258A (en) * 2020-07-23 2020-10-30 深圳高性能医疗器械国家研究院有限公司 Image metal artifact suppression method
CN111915522A (en) * 2020-07-31 2020-11-10 天津中科智能识别产业技术研究院有限公司 Image restoration method based on attention mechanism
WO2022032445A1 (en) * 2020-08-10 2022-02-17 深圳高性能医疗器械国家研究院有限公司 Reconstructed neural network and application thereof
CN112508808B (en) * 2020-11-26 2023-08-01 中国人民解放军战略支援部队信息工程大学 CT double-domain combined metal artifact correction method based on generation countermeasure network
CN112614205B (en) * 2020-12-28 2021-09-28 推想医疗科技股份有限公司 Image reconstruction method and device
CN112907691A (en) * 2021-03-26 2021-06-04 深圳安科高技术股份有限公司 Neural network-based CT image reconstruction method, device, equipment and storage medium
CN113592968B (en) * 2021-07-09 2022-10-18 清华大学 Method and device for reducing metal artifacts in tomographic images
CN113744356B (en) * 2021-08-17 2024-05-07 中山大学 Low-dose SPECT chord graph recovery and scattering correction method
CN113936143B (en) * 2021-09-10 2022-07-01 北京建筑大学 Image identification generalization method based on attention mechanism and generation countermeasure network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146988A (en) * 2018-06-27 2019-01-04 南京邮电大学 Non-fully projection CT image rebuilding method based on VAEGAN
CN110110745A (en) * 2019-03-29 2019-08-09 上海海事大学 Based on the semi-supervised x-ray image automatic marking for generating confrontation network
CN110288671A (en) * 2019-06-25 2019-09-27 南京邮电大学 The low dosage CBCT image rebuilding method of network is generated based on three-dimensional antagonism

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180018757A1 (en) * 2016-07-13 2018-01-18 Kenji Suzuki Transforming projection data in tomography by means of machine learning
WO2019128660A1 (en) * 2017-12-29 2019-07-04 清华大学 Method and device for training neural network, image processing method and device and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146988A (en) * 2018-06-27 2019-01-04 南京邮电大学 Non-fully projection CT image rebuilding method based on VAEGAN
CN110110745A (en) * 2019-03-29 2019-08-09 上海海事大学 Based on the semi-supervised x-ray image automatic marking for generating confrontation network
CN110288671A (en) * 2019-06-25 2019-09-27 南京邮电大学 The low dosage CBCT image rebuilding method of network is generated based on three-dimensional antagonism

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zaifeng Shi. A spatial information incorporation method for irregular sampling CT based on deep learning. ResearchGate, 2019, full text. *
Man Chenlong; Shi Zaifeng; Xu Jiangtao; Yao Suying. Fast random spray Retinex method based on region segmentation. Journal of Nankai University (Natural Science Edition), 2017, No. 2, full text. *

Also Published As

Publication number Publication date
CN110728729A (en) 2020-01-24

Similar Documents

Publication Publication Date Title
CN110728729B (en) Attention mechanism-based unsupervised CT projection domain data recovery method
US11727569B2 (en) Training a CNN with pseudo ground truth for CT artifact reduction
CN109146988B (en) Incomplete projection CT image reconstruction method based on VAEGAN
Shaw et al. MRI k-space motion artefact augmentation: model robustness and task-specific uncertainty
JP2020168352A (en) Medical apparatus and program
CN110827216A (en) Multi-generator generation countermeasure network learning method for image denoising
CN110675461A (en) CT image recovery method based on unsupervised learning
Yuan et al. SIPID: A deep learning framework for sinogram interpolation and image denoising in low-dose CT reconstruction
JP2021013736A (en) X-ray diagnostic system, image processing apparatus, and program
Fu et al. A deep learning reconstruction framework for differential phase-contrast computed tomography with incomplete data
CN107958471A (en) CT imaging methods, device, CT equipment and storage medium based on lack sampling data
WO2023202265A1 (en) Image processing method and apparatus for artifact removal, and device, product and medium
Zhou et al. DuDoUFNet: dual-domain under-to-fully-complete progressive restoration network for simultaneous metal artifact reduction and low-dose CT reconstruction
Zhu et al. Metal artifact reduction for X-ray computed tomography using U-net in image domain
Niu et al. Low-dimensional manifold-constrained disentanglement network for metal artifact reduction
CN110599530B (en) MVCT image texture enhancement method based on double regular constraints
CN115272511A (en) System, method, terminal and medium for removing metal artifacts in CBCT image based on dual decoders
Chan et al. An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction
CN116894783A (en) Metal artifact removal method for countermeasure generation network model based on time-varying constraint
JP2021065707A (en) Medical image processing device, learned model and medical image processing method
Ikuta et al. A deep recurrent neural network with FISTA optimization for ct metal artifact reduction
CN116645283A (en) Low-dose CT image denoising method based on self-supervision perceptual loss multi-scale convolutional neural network
CN116152373A (en) Low-dose CT image reconstruction method combining neural network and convolutional dictionary learning
Kim et al. CNN-based CT denoising with an accurate image domain noise insertion technique
CN115100045A (en) Method and device for converting modality of image data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant