CN110728729A - Unsupervised CT projection domain data recovery method based on attention mechanism - Google Patents


Info

Publication number
CN110728729A
CN110728729A (application CN201910931302.3A)
Authority
CN
China
Prior art keywords
sinogram
feature matrix
target
encoder
attention
Prior art date
Legal status
Granted
Application number
CN201910931302.3A
Other languages
Chinese (zh)
Other versions
CN110728729B (en)
Inventor
史再峰
王仲琦
罗韬
曹清洁
程明
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201910931302.3A priority Critical patent/CN110728729B/en
Publication of CN110728729A publication Critical patent/CN110728729A/en
Application granted
Publication of CN110728729B publication Critical patent/CN110728729B/en
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00: Image coding
    • G06T9/002: Image coding using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to an attention mechanism-based unsupervised CT projection domain data recovery method, whose main technical features comprise the following steps: preparing an input data set comprising a source sinogram and a target sinogram; constructing a generator module to obtain a target domain sinogram; constructing a discriminator module for discrimination processing; constructing a loss function for the GAN network; performing large-scale iterative training of the GAN network on a complete CT sinogram data set until the variation amplitude of the loss function of the whole network does not exceed a preset threshold in each iteration; and using CT sinograms affected by metal traces as a test data set to obtain sinograms free of metal traces. The method is reasonably designed: by supplementing the CT projection data it eliminates beam hardening artifacts and metal artifacts in the CT image, realizes high-quality recovery of incomplete CT projection data affected by metal implants, can be applied well in practice, and promotes the further development of precision medicine.

Description

Unsupervised CT projection domain data recovery method based on attention mechanism
Technical Field
The invention belongs to the technical field of computed tomography, and particularly relates to an attention mechanism-based unsupervised CT projection domain data recovery method.
Background
X-ray computed tomography (CT) has been widely used in industrial inspection and medical diagnosis, but CT examinations have long been troubled by artifacts, which greatly complicate clinical examination and diagnosis.
Metal artifacts are among the most common artifacts in computed tomography (CT) images; they are introduced by metal implants during imaging and reconstruction. Their formation involves a variety of mechanisms, such as beam hardening, scattering, noise, and nonlinear partial volume effects. During clinical examination, a metal implant absorbs a large number of the X-ray photons incident during the CT scan, so that projection-domain data for these regions is missing (CT projection data is also called a sinogram); that is, the projection data is incomplete. The projection data of the metal implant appears as bright metal traces in the sinogram: the more numerous and wider the metal traces, the more the projection data is affected by the implant and the more data is missing. After reconstruction, star-shaped or radial artifacts appear around the implant, severely degrading the clarity of the tissue structure surrounding the metal implant in the image and making it very difficult to assess the implant and the surrounding tissue, a problem that has plagued clinical examination for many years.
Removing these artifacts by modeling with conventional methods is very difficult. Among the many image reconstruction algorithms, the most commonly used is filtered back projection (FBP), but a tomographic image obtained directly with this algorithm still shows noticeable artifacts.
Because CT imaging reconstructs images from projection-data space, defects in the projection data considerably affect subsequent image reconstruction and diagnosis. If the missing projection data is fully supplemented, the influence of metal artifacts can be effectively eliminated in the subsequent image reconstruction process.
In recent years, deep learning has been increasingly applied in medical image processing. An important advantage of the deep convolutional neural network is that it extracts information from raw data into abstract semantic concepts layer by layer, which gives it outstanding strengths in extracting global features of data and in recovering data. A feature matrix of the input image is extracted by convolution operations in the network and then passed through an activation function, which improves the nonlinear expressive capability of the model, allows it to learn complex linear-to-nonlinear mappings, and prepares the network for back propagation. Commonly used activation functions include the logistic activation function (sigmoid) and the hyperbolic tangent function (tanh).
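For illustration only (this snippet is not part of the patented method), the two activation functions named above can be sketched in Python:

```python
import math

def sigmoid(z):
    # Logistic activation: maps any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):
    # Hyperbolic tangent: maps any real input into (-1, 1), zero-centered.
    return math.tanh(z)

# Both are smooth and differentiable, which back propagation requires.
print(sigmoid(0.0))  # 0.5
print(tanh(0.0))     # 0.0
```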
Current Computed Tomography (CT) Metal Artifact Reduction (MAR) methods based on deep neural networks are mostly performed in the CT image domain and are supervised methods, requiring pairs of CT images that are identical in anatomical detail, one with metal artifacts and one without metal artifacts, which rely heavily on simulation data for training. However, since the simulation data may not perfectly mimic the underlying physical mechanisms of CT imaging, the supervised approach is often not well-suited for clinical applications.
In deep learning, the more challenging but also more practical unsupervised approaches, in which no fully paired input data is available for training, have received increasing attention and research. Data recovery in the CT projection data domain can be viewed as a form of data-to-data conversion from incomplete projection data (the source sinogram) affected by a metal implant to complete projection data (the target sinogram) unaffected by the metal implant. Unsupervised data conversion can be implemented with a generative adversarial network (GAN), which unifies two basic models of traditional deep learning, the generative model and the discriminative model, in the same framework. In a GAN the two models play a game against each other and are trained alternately and iteratively; in this process the GAN framework preserves the details of the input data well and learns its complex data distribution. A good data recovery model should: (i) supplement and complete, as far as possible, the projection data left incomplete by the metal implant; (ii) preserve the anatomical content of the input CT data. CT sinogram conversion is implemented in an end-to-end fashion based on a GAN network and an unsupervised method that combines an attention mechanism with an adaptive layer-instance normalization function (AdaLIN). The attention mechanism guides the data transformation model to focus on the important regions of the source and target sinograms: an auxiliary classifier (CAM), which is in fact a 0-1 binary classifier, distinguishes feature matrices from the source sinogram from those of the target sinogram, and in the process learns importance weights for the feature matrices, helping the model know where intensive transformation should be carried out.
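A non-authoritative sketch of the CAM-style attention weighting described above (function names, shapes, and the example weights are illustrative assumptions, not the patent's notation):

```python
import numpy as np

def cam_attention(features, weights):
    # Scale each of the n encoder feature matrices E^k(x) by its learned
    # importance weight eta^k, yielding the attention feature maps
    # a(x) = {eta^k * E^k(x) | 1 <= k <= n}.
    return weights[:, None, None] * features

def cam_logit(features, weights):
    # Auxiliary-classifier score: a sigmoid over the weighted global sums,
    # used as the 0-1 source-vs-target domain decision.
    s = float(np.sum(weights * features.sum(axis=(1, 2))))
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(0)
E = rng.standard_normal((4, 8, 8))   # n = 4 feature matrices from the encoder
w = np.array([0.1, 0.5, 0.2, 0.9])   # hypothetical importance weights eta^k
a = cam_attention(E, w)
print(a.shape)  # (4, 8, 8)
```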
The adaptive layer-instance normalization function comprises two parts: instance normalization (IN), which normalizes over the image pixels and computes the mean μ_I and variance σ_I of the feature matrix, and layer normalization (LN), which normalizes along the channel direction and computes the mean μ_L and variance σ_L. Their combination flexibly controls, through learned parameters, the amount of shape and texture variation in the sinogram, thereby enhancing the robustness of the recovery model so that its data recovery capability is not affected by the severity of the metal traces in the input sinogram.
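A minimal sketch of how the IN and LN statistics might be blended in an AdaLIN-style function, assuming a single (C, H, W) sample and treating γ, β, ρ as given; all names are illustrative:

```python
import numpy as np

def adalin(a, gamma, beta, rho, eps=1e-5):
    # Instance Normalization: statistics over each channel's pixels.
    mu_i = a.mean(axis=(1, 2), keepdims=True)
    var_i = a.var(axis=(1, 2), keepdims=True)
    a_i = (a - mu_i) / np.sqrt(var_i + eps)
    # Layer Normalization: statistics over all channels and pixels.
    mu_l = a.mean()
    var_l = a.var()
    a_l = (a - mu_l) / np.sqrt(var_l + eps)
    # rho in [0, 1] blends the two; gamma/beta scale and shift per channel.
    mixed = rho * a_i + (1.0 - rho) * a_l
    return gamma[:, None, None] * mixed + beta[:, None, None]

rng = np.random.default_rng(1)
x = rng.standard_normal((3, 4, 4))
y = adalin(x, gamma=np.ones(3), beta=np.zeros(3), rho=1.0)
# With rho = 1 this reduces to plain instance normalization:
print(np.allclose(y.mean(axis=(1, 2)), 0.0, atol=1e-6))  # True
```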
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an unsupervised CT projection domain data recovery method based on an attention mechanism, which reduces metal artifacts in CT images by supplementing and completing CT projection data, realizes the conversion from a sinogram influenced by a metal implant to a sinogram not influenced by the metal implant on the basis of a GAN network under the condition that input sinograms are not matched, corrects metal traces, supplements and completes CT projection data, and finally eliminates the serious metal artifacts in CT reconstructed images to obtain high-quality CT images.
The technical problem to be solved by the invention is realized by adopting the following technical scheme:
an attention mechanism-based unsupervised CT projection domain data recovery method comprises the following steps:
step 1, preparing an input data set comprising a source sinogram affected by a metal implant and a target sinogram not affected by the metal implant;
step 2, constructing a generator module, and obtaining a target domain sinogram through an encoder A, an auxiliary classifier B, attention feature matrix calculation, normalization processing and upsampling;
step 3, constructing a discriminator module, and performing discrimination processing through an encoder C, an auxiliary classifier D and a global classifier F_T;
step 4, constructing a loss function of the GAN network;
step 5, carrying out large-scale iterative training of the GAN network on a complete CT sinogram data set, continuously adjusting the hyper-parameter values, the learning rate and the number of network iterations in step 4 during training, until the change amplitude of the loss function of the whole network does not exceed a preset threshold value in each iteration;
step 6, using the CT sinogram affected by metal traces as a test data set to obtain a sinogram free of metal traces.
Further, the source sinogram and the target sinogram in step 1 are not completely paired.
Further, the specific implementation method of step 2 includes the following steps:
⑴, constructing an encoder A consisting of two convolution layers; the input passes through the downsampling convolution operations of encoder A to extract features and output a feature matrix;
⑵ inputting the feature matrix into an auxiliary classifier B to perform classification judgment of the feature matrices of the source sinogram and the target sinogram, and finding out the most important region in each feature matrix for judging whether one feature matrix is from the source sinogram or the target sinogram;
⑶ calculating an attention feature matrix using the importance weights;
⑷, vectorizing the attention feature matrix, inputting the vectorized attention feature matrix into a multi-layer perceptron MLP function, iteratively calculating the weight factor gamma and the bias beta of each attention feature matrix through gradient updating and back propagation by a network, carrying out normalization calculation on the attention feature matrix, and then carrying out weighted summation to calculate the normalized feature matrix;
⑸ the normalized feature matrix in step ⑷ generates a target domain sinogram via an activation function.
Further, the attention feature matrix a_S(x) in step ⑶ is computed as:
a_S(x) = {η_S^k · E_S^k(x) | 1 ≤ k ≤ n}
wherein E_S^k(x) is the k-th feature matrix after the convolution operation, η_S^k is the importance weight of the k-th feature matrix learned by the auxiliary classifier, and n is the number of feature matrices output by the encoder.
The normalized feature matrix α in step ⑷ is computed by weighted summation:
â_I = (a_S(x) − μ_I) / √(σ_I² + ε), â_L = (a_S(x) − μ_L) / √(σ_L² + ε)
α = γ · (ρ · â_I + (1 − ρ) · â_L) + β
where ρ is a weight updated by back propagation, μ_I is the instance normalization mean, μ_L is the layer normalization mean, σ_I and σ_L are the corresponding standard deviations, and ε is the deviation coefficient.
Further, the specific implementation method of step 3 is as follows:
Let T and G_S→T(x_S) represent samples from the target sinogram and the converted source sinogram; they form a set y used as the input of the discriminator module;
the discriminator module comprises an encoder C, an auxiliary classifier D and a global classifier F_T which are connected in sequence;
the encoder C has the same structure as the encoder A of the generator module; after y is input into the encoder C and subjected to the convolution operations in its convolution layers, the feature matrix C_T(y) is output;
after C_T(y) is input to the auxiliary classifier D, the output is the attention feature matrix a_T(y);
the attention feature matrix a_T(y) continues to the classifier F_T to determine whether the generated conversion map is sufficiently close to the target sinogram distribution;
after the activation function of the global classifier F_T, the discriminator module outputs the probability value D_T(y).
Further, the loss function of constructing the GAN network in step 4 is:
min max L = min max(λ_1·L_1 + λ_2·L_2 + λ_3·L_3 + λ_4·L_4)
where L_1, L_2, L_3, L_4 are the adversarial loss, the identity loss, the auxiliary classifier B loss and the auxiliary classifier D loss, respectively, and each λ is a hyper-parameter weighting the corresponding loss function.
The invention has the advantages and positive effects that:
the method utilizes the incomplete pairing CT sinogram pair which is influenced by the metal implant but not influenced by the metal implant to convert the source sinogram into the target sinogram through the GAN network, recovers the CT projection data influenced by the metal implant under the unsupervised condition, and supplements and completely eliminates the beam hardening artifact and the metal artifact existing in the CT image through the CT projection data, thereby realizing the high-quality recovery of the incomplete CT projection data influenced by the metal implant. Meanwhile, the invention does not need a strictly matched training data set, so that the method can be better applied in practice and promotes the further development of accurate medical treatment.
Drawings
Fig. 1 is a structural diagram of an unsupervised CT projection domain data recovery method of the present invention.
Detailed Description
The embodiments of the present invention will be described in detail with reference to the accompanying drawings.
An unsupervised CT projection domain data recovery method based on attention mechanism, as shown in fig. 1, includes the following steps:
Step 1, data set preparation: the source sinogram affected by the metal implant and the target sinogram not affected by the metal implant form an input data set x ∈ {S, T}, wherein S represents the source sinogram and T represents the target sinogram; the source and target sinograms are not completely paired.
Step 2, building a generator module G_S→T. The specific implementation method comprises the following steps:
(1) Encoder A design: the encoder consists of two convolution layers; the input x passes through the downsampling convolution operations of the encoder, which extract features and output the feature matrix E_S^k(x), where k refers to the k-th feature matrix after the convolution operation.
(2) The feature matrix is input into the auxiliary classifier (CAM) B, which makes a classification judgment between the feature matrices of the source sinogram and those of the target sinogram (this process is unsupervised). η_S^k is the importance weight of the k-th feature matrix learned by the auxiliary classifier; it is updated through gradient updates and back propagation during iterative network training. This weight determines the importance of the feature corresponding to the feature matrix; based on it, the auxiliary classifier B can find the most important region in each feature matrix for judging whether a feature matrix comes from the source sinogram or the target sinogram, thereby realizing the attention mechanism.
During each iteration of network training, if the output η_S(x) is 0, the feature matrix data comes from the source sinogram; if η_S(x) is 1, the feature matrix data comes from the target sinogram.
(3) Using the importance weights, the attention feature matrix is calculated:
a_S(x) = {η_S^k · E_S^k(x) | 1 ≤ k ≤ n}
wherein a_S(x) represents the attention feature matrix after the attention mechanism is applied, and n is the number of feature matrices output by the encoder.
(4) The attention feature matrix a_S(x) is vectorized and input into an MLP (multi-layer perceptron) function; the network iteratively computes two parameter vectors γ and β through gradient updates and back propagation, where γ is the weight factor of each attention feature matrix and β is an offset:
γ, β = MLP(a_S(x))
Continuing into the decoder, the attention feature matrix is first normalized:
â_I = (a_S(x) − μ_I) / √(σ_I² + ε), â_L = (a_S(x) − μ_L) / √(σ_L² + ε)
wherein μ_I is the instance normalization mean, μ_L is the layer normalization mean, σ_I and σ_L are the corresponding standard deviations, and ε is the deviation coefficient.
The normalized feature matrix a is then computed by weighted summation:
a = γ · (ρ · â_I + (1 − ρ) · â_L) + β
where ρ is a learned weight constrained to the range [0, 1] and adjusted during the iterative training of the generator: with Δρ denoting the update of ρ obtained by back propagation and τ the learning rate of the ρ gradient update, ρ ← clip_[0,1](ρ − τ·Δρ).
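The clipped update of ρ described above can be sketched as follows (an illustrative helper; the τ value and gradients in the example are assumptions):

```python
def update_rho(rho, grad, tau=1e-4):
    # One gradient step on the IN/LN blending weight, clipped to [0, 1]:
    # rho <- clip_[0,1](rho - tau * delta_rho).
    rho = rho - tau * grad
    return min(max(rho, 0.0), 1.0)

print(round(update_rho(0.5, 1000.0), 6))  # 0.4  (pulled down by a large gradient)
print(update_rho(0.0, 500.0))             # 0.0  (stays at the lower bound)
print(update_rho(1.0, -500.0))            # 1.0  (stays at the upper bound)
```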
(5) The feature matrix a normalized in step (4) is passed through the upsampling operation of the decoder, namely one layer of convolution, and after an activation function the output G_S→T(x_S) is obtained, i.e. the target domain sinogram generated by the generator module.
Step 3, building a discriminator module D_T. The specific method comprises the following steps:
Let T and G_S→T(x_S) represent samples from the target sinogram and from the converted source sinogram; they form a set y, which is used as the input of the discriminator module.
Composition of the discriminator module (in order): encoder C, auxiliary classifier D, global classifier F_T.
The encoder C is structurally identical to the encoder A of the generator module. y is input into C and, through the convolution operations in the convolution layers, the feature matrix C_T(y) is output.
C_T(y) is input into the auxiliary classifier D (D is likewise similar to B of the generator module); D learns the importance weights of the feature matrices and outputs the attention feature matrix a_T(y).
The attention feature matrix a_T(y) continues to the classifier F_T to determine whether the generated conversion map is close enough to the target sinogram distribution, i.e., whether the source sinogram containing severe metal traces has been converted by the generator into a target sinogram.
After the activation function of the global classifier F_T, the whole discriminator module outputs the probability value D_T(y).
Step 4, designing the loss function of the GAN network. The specific method comprises the following steps:
(1) Adversarial loss: the adversarial loss is used to match the distribution of the converted sinogram to that of the target sinogram:
L_1 = E_(y~T)[(D_T(y))²] + E_(x~S)[(1 − D_T(G_S→T(x_S)))²]
where E is the expected value.
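A hedged sketch of evaluating this least-squares adversarial loss on arrays of discriminator outputs (shapes and names are illustrative assumptions):

```python
import numpy as np

def adversarial_loss(d_real, d_fake):
    # L_1 as written above:
    # E_{y~T}[(D_T(y))^2] + E_{x~S}[(1 - D_T(G(x)))^2]
    # d_real: discriminator outputs on target sinograms y ~ T
    # d_fake: discriminator outputs on converted source sinograms G(x), x ~ S
    return float(np.mean(d_real ** 2) + np.mean((1.0 - d_fake) ** 2))

# If the discriminator scores real data at 0 and the generator's outputs at 1,
# this particular expression reaches its minimum of 0.
print(adversarial_loss(np.zeros(4), np.ones(4)))  # 0.0
```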
(2) Identity loss: to ensure that the intensity distributions of the input and output sinograms are similar, an identity consistency constraint is applied to the generator. Given a sinogram x ∈ T, the sinogram should not change after converting x with G_S→T:
L_2 = E_(x~T)[|x − G_S→T(x_T)|_1]
(3) CAM loss: using information from the auxiliary classifiers B and D, given an image x ∈ {S, T}, G_S→T and D_T need to determine where the intensive transformation occurs, or where, in the current state, the difference between the two domains is greatest:
L_3 = −(E_(x~S)[log(η_S(x))] + E_(x~T)[log(1 − η_S(x))])
L_4 = E_(y~T)[(η_T(y))²] + E_(x~S)[(1 − η_T(G_S→T(x)))²]
the overall objective function:
finally, weighting and combining a plurality of loss functions to obtain a final loss function minmaxL of the GAN network as follows:
min maxL=min max(λ1L12L23L34L4)
where λ is the hyper-parameter that controls each loss function.
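A minimal sketch of the weighted combination; the λ values shown mirror the hyper-parameter settings given later in the example (λ_1 = 1, the others 10) and are assumptions here:

```python
def total_loss(l1, l2, l3, l4, lambdas=(1.0, 10.0, 10.0, 10.0)):
    # Weighted combination of the four loss terms:
    # L = lambda1*L1 + lambda2*L2 + lambda3*L3 + lambda4*L4
    return sum(lam * l for lam, l in zip(lambdas, (l1, l2, l3, l4)))

print(round(total_loss(0.5, 0.1, 0.2, 0.3), 6))  # 6.5
```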
Step 5, carry out large-scale iterative training of the GAN network on a complete CT sinogram data set, continuously adjusting the hyper-parameter values, learning rate, number of network iterations and so on from step 4 during training, until the change amplitude of the loss function min max L of the whole network does not exceed a preset threshold in each iteration. The CT projection domain data affected by the metal implant is thereby completed.
Step 6, after model training is finished, the CT sinogram affected by metal traces is used as the test data set X, as shown in the end segment of figure 1, to obtain a sinogram free of metal traces.
The hyper-parameters in the previous steps are then adjusted according to the test results to continuously enhance the image recovery capability of the model.
The invention was tested by the following specific example:
the recovery model of this example is based on a GAN network implementation. A vertebral location data set is selected. The CT images from this data set are divided into two groups, one with artifacts and the other without artifacts. First, we determined regions with CT values greater than 2500 as the metal regions. Then, a CT image whose largest connected metal region has more than 400 pixels is selected as an artifact-affected image. CT images with a maximum CT value of less than 2000 are selected as artifact-free images and these images are projected in MATLAB, resulting in projection data (sinogram) corresponding thereto. After this selection, the set affected by the metal implant contains 6000 source sinograms, and the set unaffected by the metal implant contains 20000 target sinograms. We subtracted 200 source sinograms from the group affected by the metal implant for testing.
The method is implemented under a deep learning framework. The deviation coefficient ε in the normalization process is 1×10⁻⁵, and a learning-rate optimization algorithm with τ = 1×10⁻⁴ is used to minimize the objective function. The encoder of the generator consists of two convolution layers with a convolution stride of 2; the decoder of the generator consists of four residual blocks and two upsampling convolution layers with a convolution stride of 1, and the convolution kernel sizes are all 3×3. The important hyper-parameters in the objective function are set as: λ_1 = 1, λ_2 = 10, λ_3 = λ_4 = 10. All weight parameters are initialized with a normal distribution, and the whole network is trained iteratively for 50000 iterations.
The training set is input into the generative adversarial network for training, observing whether the objective function converges to a minimum; if not, the learning rate in the network is changed and training is restarted until the objective function converges. Finally, the attention-based unsupervised CT projection domain data recovery model is tested with the test set (200 sinograms affected by metal traces) to obtain fully recovered CT sinograms with the metal traces removed, from which a high-quality CT image with metal artifacts eliminated and good detail retention is finally reconstructed.
Nothing in this specification is said to apply to the prior art.
It should be emphasized that the embodiments described herein are illustrative rather than restrictive, and thus the present invention is not limited to the embodiments described in the detailed description, but also includes other embodiments that can be derived from the technical solutions of the present invention by those skilled in the art.

Claims (6)

1. An attention mechanism-based unsupervised CT projection domain data recovery method is characterized by comprising the following steps:
step 1, preparing an input data set comprising a source sinogram affected by a metal implant and a target sinogram not affected by the metal implant;
step 2, constructing a generator module, and obtaining a target domain sinogram through an encoder A, an auxiliary classifier B, attention feature matrix calculation, normalization processing and upsampling;
step 3, constructing a discriminator module, and performing discrimination processing through an encoder C, an auxiliary classifier D and a global classifier F_T;
step 4, constructing a loss function of the GAN network;
step 5, carrying out large-scale iterative training of the GAN network on a complete CT sinogram data set, continuously adjusting the hyper-parameter values, the learning rate and the number of network iterations in step 4 during training, until the change amplitude of the loss function of the whole network does not exceed a preset threshold value in each iteration;
step 6, using the CT sinogram affected by metal traces as a test data set to obtain a sinogram free of metal traces.
2. The unsupervised CT projection domain data recovery method based on attention mechanism as claimed in claim 1, wherein: and the source sinogram and the target sinogram in the step 1 are not completely paired.
3. The unsupervised CT projection domain data recovery method based on attention mechanism as claimed in claim 1, wherein: the specific implementation method of the step 2 comprises the following steps:
⑴, constructing an encoder A consisting of two convolution layers; the input passes through the downsampling convolution operations of encoder A to extract features and output a feature matrix;
⑵ inputting the feature matrix into an auxiliary classifier B to perform classification judgment of the feature matrices of the source sinogram and the target sinogram, and finding out the most important region in each feature matrix for judging whether one feature matrix is from the source sinogram or the target sinogram;
⑶ calculating an attention feature matrix using the importance weights;
⑷, vectorizing the attention feature matrix, inputting the vectorized attention feature matrix into a multi-layer perceptron MLP function, iteratively calculating the weight factor gamma and the bias beta of each attention feature matrix through gradient updating and back propagation by a network, carrying out normalization calculation on the attention feature matrix, and then carrying out weighted summation to calculate the normalized feature matrix;
⑸ the normalized feature matrix in step ⑷ generates a target domain sinogram via an activation function.
4. The unsupervised CT projection domain data recovery method based on attention mechanism as claimed in claim 3, wherein the attention feature matrix a_S(x) in step ⑶ is computed as:
a_S(x) = {η_S^k · E_S^k(x) | 1 ≤ k ≤ n}
wherein E_S^k(x) is the k-th feature matrix after the convolution operation, η_S^k is the importance weight of the k-th feature matrix learned by the auxiliary classifier, and n is the number of feature matrices output by the encoder;
the normalized feature matrix α in step ⑷ is computed by weighted summation:
â_I = (a_S(x) − μ_I) / √(σ_I² + ε), â_L = (a_S(x) − μ_L) / √(σ_L² + ε)
α = γ · (ρ · â_I + (1 − ρ) · â_L) + β
where ρ is a weight updated by back propagation, μ_I is the instance normalization mean, μ_L is the layer normalization mean, σ_I and σ_L are the corresponding standard deviations, and ε is the deviation coefficient.
5. The unsupervised CT projection domain data recovery method based on attention mechanism as claimed in claim 1, wherein: the specific implementation method of the step 3 is as follows:
Let T and G_S→T(x_S) represent samples from the target sinogram and the converted source sinogram; they form a set y used as the input of the discriminator module;
the discriminator module comprises an encoder C, an auxiliary classifier D and a global classifier F_T which are connected in sequence;
the encoder C has the same structure as the encoder A of the generator module; after y is input into the encoder C and subjected to the convolution operations in its convolution layers, the feature matrix C_T(y) is output;
after C_T(y) is input to the auxiliary classifier D, the output is the attention feature matrix a_T(y);
the attention feature matrix a_T(y) continues to the classifier F_T to determine whether the generated conversion map is sufficiently close to the target sinogram distribution;
after the activation function of the global classifier F_T, the discriminator module outputs the probability value D_T(y).
6. The unsupervised CT projection domain data recovery method based on attention mechanism as claimed in claim 1, wherein: the loss function of constructing the GAN network in the step 4 is as follows:
min max L = min max(λ_1·L_1 + λ_2·L_2 + λ_3·L_3 + λ_4·L_4)
where L_1, L_2, L_3, L_4 are the adversarial loss, the identity loss, the auxiliary classifier B loss and the auxiliary classifier D loss, respectively, and each λ is a hyper-parameter weighting the corresponding loss function.
CN201910931302.3A 2019-09-29 2019-09-29 Attention mechanism-based unsupervised CT projection domain data recovery method Active CN110728729B (en)

Publications (2)

Publication Number Publication Date
CN110728729A true CN110728729A (en) 2020-01-24
CN110728729B CN110728729B (en) 2023-05-26

Family

ID=69219608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910931302.3A Active CN110728729B (en) 2019-09-29 2019-09-29 Attention mechanism-based unsupervised CT projection domain data recovery method

Country Status (1)

Country Link
CN (1) CN110728729B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111400754A (en) * 2020-03-11 2020-07-10 支付宝(杭州)信息技术有限公司 Construction method and device of user classification system for protecting user privacy
CN111862258A (en) * 2020-07-23 2020-10-30 深圳高性能医疗器械国家研究院有限公司 Image metal artifact suppression method
CN111915522A (en) * 2020-07-31 2020-11-10 天津中科智能识别产业技术研究院有限公司 Image restoration method based on attention mechanism
CN112508808A (en) * 2020-11-26 2021-03-16 中国人民解放军战略支援部队信息工程大学 CT (computed tomography) dual-domain joint metal artifact correction method based on generation countermeasure network
CN112614205A (en) * 2020-12-28 2021-04-06 推想医疗科技股份有限公司 Image reconstruction method and device
CN112907691A (en) * 2021-03-26 2021-06-04 深圳安科高技术股份有限公司 Neural network-based CT image reconstruction method, device, equipment and storage medium
WO2021184389A1 (en) * 2020-03-20 2021-09-23 深圳先进技术研究院 Image reconstruction method, image processing device, and device with storage function
CN113592968A (en) * 2021-07-09 2021-11-02 清华大学 Method and device for reducing metal artifacts in tomographic images
CN113744356A (en) * 2021-08-17 2021-12-03 中山大学 Low-dose SPECT (single photon emission computed tomography) chord map recovery and scatter correction method
CN113936143A (en) * 2021-09-10 2022-01-14 北京建筑大学 Image identification generalization method based on attention mechanism and generation countermeasure network
WO2022032445A1 (en) * 2020-08-10 2022-02-17 深圳高性能医疗器械国家研究院有限公司 Reconstructed neural network and application thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180018757A1 (en) * 2016-07-13 2018-01-18 Kenji Suzuki Transforming projection data in tomography by means of machine learning
CN109146988A (en) * 2018-06-27 2019-01-04 南京邮电大学 Incomplete-projection CT image reconstruction method based on VAEGAN
US20190206095A1 (en) * 2017-12-29 2019-07-04 Tsinghua University Image processing method, image processing device and storage medium
CN110110745A (en) * 2019-03-29 2019-08-09 上海海事大学 Semi-supervised X-ray image automatic annotation based on generative adversarial networks
CN110288671A (en) * 2019-06-25 2019-09-27 南京邮电大学 Low-dose CBCT image reconstruction method based on three-dimensional generative adversarial networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZAIFENG SHI: "A spatial information incorporation method for irregular sampling CT based on deep learning" *
MAN Chenlong; SHI Zaifeng; XU Jiangtao; YAO Suying: "Fast random-spray Retinex method based on region segmentation" *

Similar Documents

Publication Publication Date Title
CN110728729A (en) Unsupervised CT projection domain data recovery method based on attention mechanism
CN109146988B (en) Incomplete projection CT image reconstruction method based on VAEGAN
CN107610194B (en) Magnetic resonance image super-resolution reconstruction method based on multi-scale fusion CNN
CN109584254A (en) Left-ventricle segmentation method based on deep fully convolutional neural networks
CN110675461A (en) CT image recovery method based on unsupervised learning
CN109949235A (en) Chest X-ray film denoising method based on deep convolutional neural networks
Popescu et al. Retinal blood vessel segmentation using pix2pix gan
WO2023202265A1 (en) Image processing method and apparatus for artifact removal, and device, product and medium
CN111242865A (en) Fundus image enhancement method based on generation type countermeasure network
Zhu et al. Metal artifact reduction for X-ray computed tomography using U-net in image domain
CN110473150B (en) CNN medical CT image denoising method based on multi-feature extraction
CN109389171A (en) Medical image classification method based on more granularity convolution noise reduction autocoder technologies
CN112215339B (en) Medical data expansion method based on generation countermeasure network
CN114283088A (en) Low-dose CT image noise reduction method and device
CN116664429A (en) Semi-supervised method for removing metal artifacts in multi-energy spectrum CT image
Abdi et al. GAN-enhanced conditional echocardiogram generation
CN117876519A (en) Chord graph recovery method and system based on diffusion model
CN116503506B (en) Image reconstruction method, system, device and storage medium
Li et al. Vision transformer for cell tumor image classification
CN117333751A (en) Medical image fusion method
CN116894783A (en) Metal artifact removal method for countermeasure generation network model based on time-varying constraint
CN116152373A (en) Low-dose CT image reconstruction method combining neural network and convolutional dictionary learning
US12045958B2 (en) Motion artifact correction using artificial neural networks
Mahmoud et al. Variant Wasserstein Generative Adversarial Network Applied on Low Dose CT Image Denoising.
CN113034473A (en) Lung inflammation image target detection method based on Tiny-YOLOv3

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant