CN111739114B - Low-dose CT reconstruction method based on twin feedback network - Google Patents

Low-dose CT reconstruction method based on twin feedback network

Info

Publication number
CN111739114B
Authority
CN
China
Prior art keywords
dose
loss function
image
training
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010541512.4A
Other languages
Chinese (zh)
Other versions
CN111739114A (en)
Inventor
叶昕辰
徐禹尧
徐睿
樊鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN202010541512.4A priority Critical patent/CN111739114B/en
Publication of CN111739114A publication Critical patent/CN111739114A/en
Application granted granted Critical
Publication of CN111739114B publication Critical patent/CN111739114B/en
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to a low-dose CT reconstruction method based on a twin feedback network, belonging to the fields of medical image processing and computer vision. The method takes a low-dose CT image as input, obtains a preliminary reconstruction result through the reconstruction branch of a twin feedback network and a boundary map through the prior branch, and continuously optimizes the network parameters through a detail-preservation loss to obtain the final reconstruction result. The method is simple to implement and obtains high-resolution reconstruction results in an end-to-end manner; the detail-preservation loss function designed by the invention protects CT image details comprehensively from both the local and the global perspective.

Description

Low-dose CT reconstruction method based on twin feedback network
Technical Field
The invention belongs to the fields of medical image processing and computer vision, and particularly relates to a low-dose CT reconstruction method that achieves detail preservation based on a twin feedback network.
Background
At present, computed tomography (CT, Computed Tomography) has become a popular imaging method for assisting diagnosis. However, obtaining sufficiently clear CT images requires high-dose radiation, whose risk to the human body is not insignificant. To reduce the risk of cancer or genetic damage that high radiation doses may cause, low-dose CT has been developed, and how to cope with the stronger noise and additional artifacts caused by reducing the radiation dose has become a challenging problem. Existing methods can be summarized into the following three categories. (1) Filtered reconstruction methods on the CT sinogram: these methods design filters for the sinogram to achieve reconstruction; the most widely applied such method is FBP (Filtered Back Projection) (A. C. Kak, "Digital image processing techniques," Digital Image Processing Techniques, Academic Press, Orlando, pp. 111-169, 1984.), which is computationally simple, but its reconstruction resolution is low and it introduces many artifacts. (2) Iterative reconstruction methods: these methods convert the reconstruction problem into an iterative optimization process, which is computationally intensive and inefficient (Marcel Beister, Daniel Kolditz, and Willi A. Kalender, "Iterative reconstruction methods in x-ray CT," Physica Medica, vol. 28, no. 2, pp. 94-108, 2012.). (3) Reconstruction methods based on CNNs (convolutional neural networks): in recent years many CNN-based reconstruction methods have emerged; unlike the first two categories, which process sinograms that are difficult to obtain, a CNN uses its strong feature-expression power to learn the mapping from low-dose CT images to normal-dose CT images directly, such as the RED-CNN method (Hu Chen, Yi Zhang, Mannudeep K. Kalra, Feng Lin, Yang Chen, Peixi Liao, Jiliu Zhou, and Ge Wang, "Low-dose CT with a residual encoder-decoder convolutional neural network," IEEE Transactions on Medical Imaging, vol. 36, no. 12, pp. 2524-2535, 2017.). However, the existing methods do not handle CT details and organ geometry information well. On this basis, the invention designs a low-dose CT reconstruction method that achieves detail preservation based on a deep neural network, which removes noise while protecting detailed information such as organ lesions from being damaged.
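As an illustration of the classical FBP baseline mentioned in category (1), the following minimal sketch (not part of the invention) reconstructs a simulated parallel-beam scan with the scikit-image radon/iradon routines; the phantom, noise level, and geometry are assumptions made for demonstration only.

```python
# Minimal FBP baseline sketch (illustration only, not the patented method).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                        # simulated ground-truth slice
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=theta)                 # forward projection
noisy_sinogram = sinogram + np.random.normal(0, 1.0, sinogram.shape)  # crude low-dose noise
reconstruction = iradon(noisy_sinogram, theta=theta) # filtered back projection (FBP)
```

As the background notes, such a filtered reconstruction is fast but leaves noticeable noise and streak artifacts at reduced dose, which motivates the learned approach described below.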
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a low-dose CT reconstruction method based on a twin feedback deep neural network. The drawback of existing methods is that CT details are not well protected during reconstruction, so that organ lesion regions are reconstructed at low resolution or artifacts are introduced. To address this, a twin feedback network is designed and a detail-preservation loss function is constructed to obtain higher-quality reconstruction results.
The technical solution of the invention is a low-dose CT reconstruction method that achieves detail preservation based on a twin feedback network, comprising the following steps:
firstly, preparing training data;
the training data set comprises a low dose CT image and a corresponding normal dose CT image;
secondly, constructing a twin feedback network;
firstly, shallow features are extracted from the input low-dose CT image by N convolution layers, N > 2; the features then enter the twin feedback network, whose two branches consist of different numbers of mapping modules, each mapping module consisting of a feedforward module and a feedback module; one branch is the reconstruction branch and consists of M mapping modules, the outputs of all its modules being concatenated and passed through a convolution layer to obtain the reconstruction result R; the other branch is the prior branch and consists of K mapping modules, the outputs of all its modules being concatenated, passed through a convolution layer and then an activation function to obtain the boundary map S;
thirdly, constructing a loss function and training a network;
the loss function guarantees detail preservation from two aspects, local detail preservation and global structure maintenance, and comprises a pixel-level detail loss function L_P1, a task loss function L_T, and a global loss function L_H; the discriminator consists, in order, of I convolution layers, an activation function, and J fully connected layers, where I > 5 and J > 1; the network constructed in the second step is then trained;
fourth, testing and evaluating;
the network is trained to convergence on each of the two training sets, and then tested and evaluated on the corresponding test sets.
Further, in the second step, a twin feedback network is built, which specifically includes the following steps:
2-1) The output of the (n-1)-th mapping module serves as the input H_n of the feedforward module; after M convolution layers, M ≥ 1, the result is L_n, i.e. L_n = Conv.(H_n), where Conv.(·) denotes performing the M convolution operations;
2-2) L_n passes through a deconvolution layer to give H_{n+1}, i.e. H_{n+1} = Deconv.(L_n), where Deconv.(·) denotes performing M deconvolution operations;
2-3) H_n and H_{n+1} are subtracted to obtain h, i.e. h = H_n - H_{n+1};
2-4) h passes through the same convolution layer as in 2-1) to obtain the result l, i.e. l = Conv.(h);
2-5) L_n and l are added to obtain L_{n+1}, i.e. L_{n+1} = L_n + l;
2-6) L_{n+1} serves as the input of the feedback module; after a deconvolution layer identical to that in 2-2), the result is H_{n+2}, i.e. H_{n+2} = Deconv.(L_{n+1});
2-7) H_{n+2} passes through a convolution layer identical to that in 2-1) to give L_{n+2}, i.e. L_{n+2} = Conv.(H_{n+2});
2-8) L_{n+1} and L_{n+2} are subtracted to obtain L, i.e. L = L_{n+1} - L_{n+2};
2-9) L passes through the same deconvolution layer as above to give the result H, i.e. H = Deconv.(L);
2-10) H_{n+2} and H are added to obtain H_{n+3}, i.e. H_{n+3} = H_{n+2} + H; H_{n+3} serves as the output of the feedback module and is fed to the feedforward module of the next mapping module, and operations 2-1) to 2-10) are performed iteratively.
Further, the third step of constructing the loss function and training the network specifically comprises the following steps:
3-1) Detail loss function L_P1: CT images typically contain many smooth regions that are distorted by discontinuities, and the gradients of CT images follow a heavy-tailed distribution. Combining TGV (Total Generalized Variation) filtering with the second-order gradients of the image in the x and y directions gives the following edge-preservation loss L_p:
where N denotes the total number of training samples and p_t the pixel at position t; R and S are the reconstruction result from the reconstruction branch and the boundary map from the prior branch, respectively, and R(p_t) and S(p_t) are the corresponding pixel values; the second-order derivatives of the image are taken in the x and y directions, and γ is a weight factor. To keep S stable during training, an additional constraint L_r is introduced:
where S_t denotes the boundary-map pixel at position t. The detail loss function L_P1 is then:
L_P1 = α_1 L_p + α_2 L_r
where α_1 and α_2 are weight factors;
3-2) Global loss function L_H: the L_P1 in 3-1) preserves details at the pixel level; besides this local consideration, global structural knowledge is also indispensable, so the global loss function L_H is designed:
where E[·] denotes the expectation, D is the discriminator, I is the input low-dose CT image, and I_T is the corresponding normal-dose CT image; I_T ~ P(I_T) denotes the expectation over I_T obeying the probability distribution P(I_T), and R ~ P(R) denotes the expectation over R obeying the probability distribution P(R);
3-3) Task loss function L_T;
3-4) Total loss function L_Total:
L_Total = L_P1 + α_3 L_T + α_4 L_H
where α_3 and α_4 are weight factors;
3-5) The network is trained with the total loss function and optimized using the Adam optimizer.
The beneficial effects of the invention are as follows: based on deep learning, the invention takes a low-dose CT image as input, obtains a preliminary reconstruction result through the reconstruction branch of the twin feedback network and a boundary map through the prior branch, and continuously optimizes the network parameters through the detail-preservation loss to obtain the final reconstruction result. The invention has the following characteristics:
1. The network framework is easy to construct, and high-resolution reconstruction results are obtained in an end-to-end manner.
2. Supervised by the detail-preservation loss function designed by the invention, detail preservation is achieved comprehensively from both the local and the global perspective.
3. Through the iterative feedforward and feedback operations of the mapping modules in the twin feedback network, the mapping between low-dose and normal-dose CT images is learned better, yielding better reconstruction results.
4. The data are readily available: the method takes the low-dose CT image directly as input and does not require raw CT sinogram data, which are difficult to obtain.
Drawings
Fig. 1 is a network configuration diagram.
Fig. 2 shows reconstruction results on the simulated ellipse data and a comparison with other methods, where (a) is the result of the FBP method and (b) the result of the RED-CNN method; (c) is the result of WGAN-VGG (Qingsong Yang, Pingkun Yan, Yanbo Zhang, Hengyong Yu, Yongyi Shi, Xuanqin Mou, Mannudeep K. Kalra, Yi Zhang, Ling Sun, and Ge Wang, "Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss," IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1348-1357, 2018.); (d) is the result of MAP-NN (Hongming Shan, Atul Padole, Fatemeh Homayounieh, Uwe Kruger, Ruhani Doda Khera, Chayanin Nitiwarangkul, Mannudeep K. Kalra, and Ge Wang, "Competitive performance of a modularized deep neural network compared to commercial algorithms for low-dose CT image reconstruction," Nature Machine Intelligence, vol. 1, no. 6, pp. 269, 2019.); (e) is the result of the present method; (f) is the normal-dose CT image.
Fig. 3 shows results on the Mayo data and a comparison with other methods, where (a) is the result of the FBP method, (b) the result of the RED-CNN method, (c) the result of the WGAN-VGG method, (d) the result of the MAP-NN method, (e) the result of the present method, and (f) the normal-dose CT image.
Detailed Description
The low-dose CT reconstruction method based on the twin feedback network of the present invention is described in detail below with reference to the examples and the accompanying drawings.
A low-dose CT reconstruction method based on a twin feedback network, the specific network structure is shown in figure 1, and the implementation of the method comprises the following steps:
firstly, preparing training data;
the training data comprises two parts of simulation ellipse data and Mayo data. Both data sets contain low dose images and corresponding normal dose images.
1-1) simulated ellipse data: we made this dataset using the ODL library (Operator Discretization Library) in Python language, the dataset divided into a training set and a test set, the image sizes being 128X128 pixels, the training set containing 5000 pairs of normal dose elliptical images and corresponding simulated low dose elliptical images. Referring to the Mayo data, our simulated low dose data was also simulated with the addition of noise. The test set includes 500 of the above image pairs.
1-2) Mayo data: this dataset was published from the 2016NIH-AAPM-Mayo Clinic Low Dose CT Grand Challenge game. The dataset contains true normal dose CT images of ten patients and corresponding low dose CT images. The image sizes are 512X512 pixels. The low-dose CT image is formed by simulating the normal-dose CT image with noise, and is equivalent to 0.25 times of the normal-dose CT image. The training set consisted of all image pairs of 9 patients (L067, L096, L109, L143, L192, L286, L291, L310, L333) and the test set was all 211 image pairs of one patient (L506).
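The patent does not specify the noise model used for this simulation; the following minimal sketch uses an assumed Poisson-plus-Gaussian model with an assumed photon budget, only to illustrate how quarter-dose images could be synthesized from normal-dose slices.

```python
# Hypothetical noise-injection sketch for building (low-dose, normal-dose) pairs.
# Noise model and photon counts are assumptions, not taken from the patent.
import numpy as np

def simulate_low_dose(normal_dose, dose_factor=0.25, photons_full_dose=1e5):
    """Add dose-dependent noise to a normal-dose CT slice with values in [0, 1]."""
    photons = photons_full_dose * dose_factor                        # quarter-dose photon budget
    noisy = np.random.poisson(normal_dose * photons) / photons       # Poisson counting noise
    noisy = noisy + np.random.normal(0.0, 0.01, normal_dose.shape)   # small electronic noise
    return np.clip(noisy, 0.0, 1.0)
```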
Secondly, the twin feedback network is constructed, as shown in Fig. 1. Shallow features are first extracted from the input low-dose CT image by three convolution layers; the kernel size of the first two convolution layers is 3×3 and that of the last is 1×1. The features then enter the twin feedback network, whose two branches consist of different numbers of mapping modules, each module consisting of a feedforward module and a feedback module. The upper branch is the reconstruction branch and consists of ten mapping modules; the outputs of all its modules are concatenated and passed through a convolution layer to obtain the reconstruction result R. The lower branch is the prior branch and consists of five mapping modules; the outputs of all its modules are concatenated and passed through a convolution layer and a sigmoid activation function to obtain the boundary map S. The specific structure and data flow of the feedforward and feedback modules are shown in the lower part of Fig. 1:
2-1) As illustrated, the output of the previous mapping module serves as the input H_1 of the feedforward module; it first passes through a convolution layer to give L_1, i.e. L_1 = Conv.(H_1), where Conv.(·) denotes a convolution operation.
2-2) L_1 passes through a deconvolution layer to give H_2, i.e. H_2 = Deconv.(L_1), where Deconv.(·) denotes a deconvolution operation.
2-3) H_1 and H_2 are subtracted to obtain h, i.e. h = H_1 - H_2.
2-4) h passes through the same convolution layer as in 2-1) to obtain the result l, i.e. l = Conv.(h).
2-5) L_1 and l are added to obtain L_2, i.e. L_2 = L_1 + l.
2-6) L_2 serves as the input of the feedback module; after a deconvolution layer identical to that in 2-2), the result is H_3, i.e. H_3 = Deconv.(L_2).
2-7) H_3 passes through a convolution layer identical to that above to give L_3, i.e. L_3 = Conv.(H_3).
2-8) L_2 and L_3 are subtracted to obtain L, i.e. L = L_2 - L_3.
2-9) L passes through the same deconvolution layer as above to give the result H, i.e. H = Deconv.(L).
2-10) H_3 and H are added to obtain H_4, i.e. H_4 = H_3 + H; H_4 serves as the output of the feedback module and is fed to the feedforward module of the next mapping module, and operations 2-1) to 2-10) are performed iteratively. The kernel size of both the convolution and the deconvolution layers is 8×8. A code sketch of one mapping module is given below.
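The following PyTorch sketch mirrors steps 2-1) to 2-10) for one mapping module. The patent fixes only the 8×8 kernel size; the channel width, stride, padding, and the choice of framework are assumptions made for illustration, and nonlinear activations (not specified in the steps) are omitted.

```python
# Sketch of one mapping module (feedforward + feedback projection), following
# steps 2-1) to 2-10). Channel width, stride, and padding are assumptions; the
# patent only fixes the 8x8 kernel size of these layers.
import torch
import torch.nn as nn

class MappingModule(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # 8x8 kernels as stated; stride/padding chosen so that Conv downsamples
        # and Deconv restores the original resolution.
        self.conv = nn.Conv2d(channels, channels, kernel_size=8, stride=4, padding=2)
        self.deconv = nn.ConvTranspose2d(channels, channels, kernel_size=8, stride=4, padding=2)

    def forward(self, H1: torch.Tensor) -> torch.Tensor:
        # Feedforward module (steps 2-1 to 2-5)
        L1 = self.conv(H1)        # L_1 = Conv(H_1)
        H2 = self.deconv(L1)      # H_2 = Deconv(L_1)
        h = H1 - H2               # residual in the image-resolution space
        l = self.conv(h)          # l = Conv(h), same convolution layer reused
        L2 = L1 + l               # L_2 = L_1 + l
        # Feedback module (steps 2-6 to 2-10)
        H3 = self.deconv(L2)      # H_3 = Deconv(L_2)
        L3 = self.conv(H3)        # L_3 = Conv(H_3)
        L = L2 - L3               # residual in the downsampled space
        H = self.deconv(L)        # H = Deconv(L)
        return H3 + H             # H_4, passed to the next mapping module
```

In the reconstruction branch, ten such modules would be chained and their outputs concatenated before a final convolution; the prior branch would chain five modules and append a sigmoid, as described above.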
Thirdly, constructing a loss function and training a network;
the loss function guarantees the effect of detail protection from two aspects, namely local detail protection and global structure maintenance. Comprising pixel level detail loss function L P Task loss function L T Global loss function L implemented by WGAN H . The arbiter consists of six convolution and ReLU activation functions, two fully connected layers of 1024 and 1 dimensions, respectively.
3-1) Detail loss function L_P1: from observation, CT images typically contain many smooth regions that are distorted by discontinuities, and the gradients of CT images follow a heavy-tailed distribution. The following edge-preservation loss L_p is designed by combining TGV (Total Generalized Variation) filtering with the second-order gradients of the image in the x and y directions:
where N denotes the total number of training samples and p_t the pixel at position t; R and S are the reconstruction result from the reconstruction branch and the boundary map from the prior branch, respectively, and R(p_t) and S(p_t) are the corresponding pixel values; the second-order derivatives of the image are taken in the x and y directions, and γ is a weight factor set to 10. To keep S stable during training, an additional constraint term L_r is designed:
where S_t denotes the boundary-map pixel value at position t. The detail loss function L_P1 is then:
L_P1 = α_1 L_p + α_2 L_r
where the weight factors α_1 and α_2 are both set to 1.
3-2) Global loss function L_H: the L_P1 in 3-1) preserves details at the pixel level; besides this local consideration, global structural knowledge is also indispensable, so the global loss function L_H is designed:
where E[·] denotes the expectation, D is the discriminator, I is the input low-dose CT image, and I_T is the corresponding normal-dose CT image; I_T ~ P(I_T) denotes the expectation over I_T obeying the probability distribution P(I_T), and R ~ P(R) denotes the expectation over R obeying the probability distribution P(R).
3-3) Task loss function L_T.
3-4) Total loss function L_Total:
L_Total = L_P1 + α_3 L_T + α_4 L_H
where the weight factors α_3 and α_4 are set to 100 and 1, respectively. A sketch of assembling this total loss is given below.
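The exact expressions of L_p, L_r, L_T, and L_H appear as equations in the patent figures and are not reproduced in this text, so the sketch below only shows how the weighted terms would be combined. The placeholders detail_term and boundary_term stand in for the edge-preservation and constraint terms, and the MSE task loss and Wasserstein-style adversarial term are assumptions, not the patent's formulas.

```python
# Sketch of assembling L_Total = L_P1 + a3 * L_T + a4 * L_H with the stated
# weights (a1 = a2 = 1, a3 = 100, a4 = 1). detail_term, boundary_term, and
# critic are placeholders supplied by the caller.
import torch
import torch.nn.functional as F

def total_loss(R, S, I_T, critic, detail_term, boundary_term,
               a1=1.0, a2=1.0, a3=100.0, a4=1.0):
    L_p1 = a1 * detail_term(R, S, I_T) + a2 * boundary_term(S)  # pixel-level detail loss
    L_t = F.mse_loss(R, I_T)      # task loss, assumed here to be an MSE term
    L_h = -critic(R).mean()       # adversarial term, assumed WGAN-style generator loss
    return L_p1 + a3 * L_t + a4 * L_h
```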
3-5) The network is trained with the total loss function and optimized with the Adam optimizer, whose parameters are: momentum = 0.9, β_1 = 0.9, β_2 = 0.99, ε = 1e-8. The initial learning rate is 1e-5 and is multiplied by 0.5 every 20 training epochs; a corresponding optimizer and scheduler sketch follows.
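The following sketch sets up these optimizer and learning-rate settings in PyTorch; the stand-in model and the dummy epoch loop are placeholders, since the real twin feedback network and training step are described elsewhere in this document.

```python
# Adam optimizer with the stated hyperparameters and a step decay that halves
# the learning rate every 20 epochs.
import torch
import torch.nn as nn

model = nn.Conv2d(1, 1, 3, padding=1)   # stand-in for the twin feedback network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5,
                             betas=(0.9, 0.99), eps=1e-8)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

for epoch in range(60):
    # ... one training epoch minimizing the total loss would run here ...
    optimizer.step()                     # placeholder step so the schedule can advance
    scheduler.step()                     # halves the learning rate every 20 epochs
    if (epoch + 1) % 20 == 0:
        print(f"epoch {epoch + 1}: lr = {scheduler.get_last_lr()[0]:.2e}")
```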
Fourth, testing and evaluating;
and training the converged networks on the two data set training sets respectively, and testing and evaluating in the corresponding testing sets.
A comparison of the present method with other methods is shown in Fig. 3, where (a) is the result of the FBP method, (b) the result of the RED-CNN method, (c) the result of the WGAN-VGG method, (d) the result of the MAP-NN method, (e) the result of the present method, and (f) the normal-dose CT image. The results show that the method removes noise while preserving the detailed structures in the image well, and achieves the best results among the compared methods.

Claims (3)

1. The low-dose CT reconstruction method for realizing detail protection based on the twin feedback network is characterized by comprising the following steps of:
firstly, preparing training data;
the training data set comprises a low dose CT image and a corresponding normal dose CT image;
secondly, constructing a twin feedback network;
firstly, shallow features are extracted from the input low-dose CT image by N convolution layers, N > 2; the features then enter the twin feedback network, whose two branches consist of different numbers of mapping modules, each mapping module consisting of a feedforward module and a feedback module; one branch is the reconstruction branch and consists of M mapping modules, the outputs of all its modules being concatenated and passed through a convolution layer to obtain the reconstruction result R; the other branch is the prior branch and consists of K mapping modules, the outputs of all its modules being concatenated, passed through a convolution layer and then an activation function to obtain the boundary map S;
thirdly, constructing a loss function and training a network;
the loss function guarantees detail preservation from two aspects, local detail preservation and global structure maintenance, and comprises a pixel-level detail loss function L_P1, a task loss function L_T, and a global loss function L_H; the discriminator consists, in order, of I convolution layers, an activation function, and J fully connected layers, where I > 5 and J > 1; the network constructed in the second step is then trained;
fourth, testing and evaluating;
the network is trained to convergence on each of the two training sets, and then tested and evaluated on the corresponding test sets.
2. The low-dose CT reconstruction method for realizing detail protection based on a twin feedback network according to claim 1, wherein the second step is to build the twin feedback network, and the method specifically comprises the following steps:
2-1) The output of the (n-1)-th mapping module serves as the input H_n of the feedforward module; after M convolution layers, M ≥ 1, the result is L_n, i.e. L_n = Conv.(H_n), where Conv.(·) denotes performing the M convolution operations;
2-2) L_n passes through a deconvolution layer to give H_{n+1}, i.e. H_{n+1} = Deconv.(L_n), where Deconv.(·) denotes performing M deconvolution operations;
2-3) H_n and H_{n+1} are subtracted to obtain h, i.e. h = H_n - H_{n+1};
2-4) h passes through the same convolution layer as in 2-1) to obtain the result l, i.e. l = Conv.(h);
2-5) L_n and l are added to obtain L_{n+1}, i.e. L_{n+1} = L_n + l;
2-6) L_{n+1} serves as the input of the feedback module; after a deconvolution layer identical to that in 2-2), the result is H_{n+2}, i.e. H_{n+2} = Deconv.(L_{n+1});
2-7) H_{n+2} passes through a convolution layer identical to that in 2-1) to give L_{n+2}, i.e. L_{n+2} = Conv.(H_{n+2});
2-8) L_{n+1} and L_{n+2} are subtracted to obtain L, i.e. L = L_{n+1} - L_{n+2};
2-9) L passes through the same deconvolution layer as above to give the result H, i.e. H = Deconv.(L);
2-10) H_{n+2} and H are added to obtain H_{n+3}, i.e. H_{n+3} = H_{n+2} + H; H_{n+3} serves as the output of the feedback module and is fed to the feedforward module of the next mapping module, and operations 2-1) to 2-10) are performed iteratively.
3. The low-dose CT reconstruction method for implementing detail protection based on a twin feedback network according to claim 1, wherein the third step of constructing a loss function and training the network comprises the following steps:
3-1) Detail loss function L_P1: combining TGV filtering with the second-order gradients of the image in the x and y directions gives the following edge-preservation loss L_p:
where N denotes the total number of training samples and p_t the pixel at position t; R and S are the reconstruction result from the reconstruction branch and the boundary map from the prior branch, respectively, and R(p_t) and S(p_t) are the corresponding pixel values; the second-order derivatives of the image are taken in the x and y directions, and γ is a weight factor; to keep S stable during training, an additional constraint L_r is introduced:
where S_t denotes the pixel at position t; the detail loss function L_P1 is:
L_P1 = α_1 L_p + α_2 L_r
where α_1 and α_2 are weight factors;
3-2) Global loss function L_H:
where E[·] denotes the expectation, D is the discriminator, I is the input low-dose CT image, and I_T is the corresponding normal-dose CT image; I_T ~ P(I_T) denotes the expectation over I_T obeying the probability distribution P(I_T), and R ~ P(R) denotes the expectation over R obeying the probability distribution P(R);
3-3) Task loss function L_T;
3-4) Total loss function L_Total:
L_Total = L_P1 + α_3 L_T + α_4 L_H
where α_3 and α_4 are weight factors;
3-5) the network is trained with the total loss function and optimized using the Adam optimizer.
CN202010541512.4A 2020-06-15 2020-06-15 Low-dose CT reconstruction method based on twin feedback network Active CN111739114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010541512.4A CN111739114B (en) 2020-06-15 2020-06-15 Low-dose CT reconstruction method based on twin feedback network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010541512.4A CN111739114B (en) 2020-06-15 2020-06-15 Low-dose CT reconstruction method based on twin feedback network

Publications (2)

Publication Number Publication Date
CN111739114A CN111739114A (en) 2020-10-02
CN111739114B true CN111739114B (en) 2023-12-15

Family

ID=72649155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010541512.4A Active CN111739114B (en) 2020-06-15 2020-06-15 Low-dose CT reconstruction method based on twin feedback network

Country Status (1)

Country Link
CN (1) CN111739114B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330575B (en) * 2020-12-03 2022-10-14 华北理工大学 Convolution neural network medical CT image denoising method


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110121749A (en) * 2016-11-23 2019-08-13 通用电气公司 Deep learning medical system and method for Image Acquisition
CN110223255A (en) * 2019-06-11 2019-09-10 太原科技大学 A kind of shallow-layer residual error encoding and decoding Recursive Networks for low-dose CT image denoising

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hongming Shan et al., "Competitive performance of a modularized deep neural network compared to commercial algorithms for low-dose CT image reconstruction," Nature Machine Intelligence. Full text *
Muhammad Haris et al., "Deep Back-Projection Networks for Super-Resolution," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Full text *
Jelmer M. Wolterink et al., "Generative Adversarial Networks for Noise Reduction in Low-Dose CT," IEEE Transactions on Medical Imaging. Full text *

Also Published As

Publication number Publication date
CN111739114A (en) 2020-10-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant