CN114140442A - Deep learning sparse angle CT reconstruction method based on frequency domain and image domain degradation perception - Google Patents


Info

Publication number: CN114140442A
Application number: CN202111472842.3A
Authority: CN (China)
Prior art keywords: image, domain, frequency domain, module, reconstruction
Inventors: 孙畅 (Sun Chang), 刘奕彤 (Liu Yitong), 杨鸿文 (Yang Hongwen)
Current Assignee: Beijing University of Posts and Telecommunications
Original Assignee: Beijing University of Posts and Telecommunications
Filing date: 2021-12-01
Priority date: 2021-12-01
Publication date: 2022-03-04
Other languages: Chinese (zh)
Legal status: Pending


Classifications

    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06N 3/045 — Neural networks; architectures; combinations of networks
    • G06N 3/08 — Neural networks; learning methods
    • G06T 11/006 — Reconstruction from projections, e.g. tomography; inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G06T 3/4038 — Scaling of whole images or parts thereof; image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 2200/32 — Indexing scheme for image data processing or generation, involving image mosaicing
    • G06T 2207/10081 — Image acquisition modality: computed x-ray tomography [CT]
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06T 2207/20084 — Special algorithmic details: artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep learning sparse angle CT reconstruction method based on frequency domain and image domain degradation perception, belonging to the technical field of image processing. First, a frequency domain model is constructed: a frequency domain attention module is designed to explicitly learn the distinct frequency-domain characteristics of sparse angle CT at different degradation levels and to output weighted frequency features, which are then fed to a frequency domain reconstruction module to generate a frequency domain reconstructed image. Second, an image domain model is constructed: an image domain attention module is designed to learn, with the help of the frequency domain reconstructed image, the edge-pixel reconstruction characteristics of sparse angle CT at different degradation levels in the image domain and to output an image domain attention predicted image, which is finally fed to an image domain reconstruction module to produce the final reconstruction result. Through supervised training on a CT data set containing multiple degradation levels, the method overcomes the poor generalization and lack of scalability of existing deep learning methods designed for a single degradation level, effectively improves overall reconstruction accuracy, and suppresses noise and artifacts while preserving detailed texture features.

Description

Deep learning sparse angle CT reconstruction method based on frequency domain and image domain degradation perception
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a deep learning sparse angle CT reconstruction method based on frequency domain and image domain degradation perception.
Background
CT reconstruction is the process of reconstructing projection data acquired by a CT scan into a CT image by means of an algorithm. Over the past fifty years, CT images have been widely used in clinical diagnosis, non-destructive testing and biological research owing to their high resolution and high sensitivity. With the continuous development of medical CT technology, there is growing demand for faster, safer and more accurate CT. However, high radiation doses can cause headaches and, in severe cases, even cancer and leukemia; long scan times and high scan frequencies further increase the hazard. Sparse angle CT and limited angle CT reduce the number of measurements by sparse projection and by restricting the projection angle range, respectively. However, owing to insufficient data acquisition, the reconstructed CT images inevitably exhibit severe streak and directional artifacts. The study of low-dose CT reconstruction has therefore received considerable attention from researchers.
CT reconstruction methods fall into three major categories: projection domain reconstruction, iterative reconstruction and image domain reconstruction. Among projection domain methods, traditional filtering algorithms have the advantages of low computational cost and fast reconstruction, but they cannot achieve satisfactory performance when the raw data are severely incomplete. Dictionary-based and deep-learning-based approaches, in turn, often introduce undesirable artifacts or over-smooth the reconstruction. With the help of prior knowledge, iterative methods can achieve better reconstruction quality, but at the cost of substantial computing resources.
In recent years, deep learning has become particularly prominent in image domain CT reconstruction. Many convolutional-neural-network-based models far outperform iterative algorithms at a specific degradation level. However, current deep learning reconstruction methods typically apply supervised learning to data of a single degradation level (for example, 120-angle sparse projection data) and therefore fail to achieve good reconstruction performance on data of other degradation levels (for example, 60-angle sparse projection data). A straightforward remedy is to train a separate set of parameters for each degradation level, but the resulting growth in training computation and parameter storage makes deployment a huge challenge in practice; moreover, as the range of degradation levels expands, training cost and parameter storage grow linearly, so this approach does not scale in practical applications. Another remedy is to build the training set by mixing data of multiple degradation levels, which alleviates the problem but leads the model to predict compromised reconstructions, leaving room for improvement in performance.
Disclosure of Invention
The invention aims to overcome the defects of existing deep-learning-based CT reconstruction methods and provides a degradation-aware deep learning CT reconstruction algorithm. Degradation perception modules are designed in the frequency domain and the image domain respectively to learn the characteristics of data at different degradation levels and to apply different operations to such data accordingly. In addition, supervised training on a CT data set containing multiple degradation levels improves the prediction quality and stability of the model.
In order to achieve the above purpose, the present invention provides a deep learning sparse angle CT reconstruction method based on frequency domain and image domain degradation perception, which includes the following steps:
(1) Constructing a data set containing multiple degradation levels
Acquiring full angle CT images of different parts of a patient; forward-projecting the CT images to obtain complete projection data; sparsely sampling the projection data at different levels (for example, 60, 120 and 240 angles) and adding noise; reconstructing the sparse projection data into sparse angle CT images using an iterative algorithm; and forming a training pair from each sparse angle CT image and its full angle CT image;
(2) Constructing a frequency domain reconstruction model
(2.1) Constructing a discrete cosine transform module, converting the blocked discrete cosine transform into a convolutional layer. The block size is N×N and the convolutional layer consists of N² convolution kernels of size N×N whose weights are initialized to the discrete cosine basis functions; the input of the discrete cosine transform module is a sparse angle CT image and the output is frequency features;
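As an illustration of this module and of the inverse module of step (2.4), the following sketch assumes PyTorch; the class and function names are our own, not from the patent. A stride-N convolution whose N² kernels are initialized with the 2-D DCT-II basis functions computes a non-overlapping block DCT, and a transposed convolution sharing the same weights computes its inverse:

```python
import math
import torch
import torch.nn as nn

def dct_basis(n: int) -> torch.Tensor:
    """Return the (n*n, 1, n, n) tensor of 2-D DCT-II basis functions."""
    k = torch.arange(n).float()
    # 1-D orthonormal DCT-II matrix: C[u, x] = a(u) * cos((2x + 1) * u * pi / (2n))
    c = torch.cos(math.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] *= 1 / math.sqrt(2)
    c *= math.sqrt(2 / n)
    # Outer products of row/column bases give the n*n two-dimensional kernels.
    return torch.einsum('ux,vy->uvxy', c, c).reshape(n * n, 1, n, n)

class BlockDCT(nn.Module):
    """Non-overlapping N x N block DCT as a stride-N convolution (step 2.1)."""
    def __init__(self, n: int = 8):
        super().__init__()
        self.conv = nn.Conv2d(1, n * n, kernel_size=n, stride=n, bias=False)
        self.conv.weight.data.copy_(dct_basis(n))

    def forward(self, x):        # x: (B, 1, H, W) sparse angle CT image, H and W divisible by n
        return self.conv(x)      # (B, N*N, H/N, W/N) frequency features

class BlockIDCT(nn.Module):
    """Inverse block DCT as a stride-N transposed convolution (step 2.4)."""
    def __init__(self, n: int = 8):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(n * n, 1, kernel_size=n, stride=n, bias=False)
        self.deconv.weight.data.copy_(dct_basis(n))

    def forward(self, f):        # f: frequency prediction result
        return self.deconv(f)    # frequency domain reconstructed image
```

Because the basis is orthonormal and the blocks do not overlap, BlockIDCT(BlockDCT(x)) recovers x up to floating-point error; the weights stay trainable, matching the "initialization" wording above.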
(2.2) Constructing a frequency domain attention module. The input of the frequency domain attention module is the frequency features and the output is weighted frequency features. The network structure between the input layer and the output layer is as follows (see the sketch after this list):
The first layer is a global average pooling layer;
The second layer consists of a fully connected layer and a rectified linear unit (ReLU);
The third layer consists of a fully connected layer and a Sigmoid (logistic) function;
The fourth layer is a pixel-by-pixel multiplication: the input of the first layer is multiplied pixel by pixel with the output of the third layer;
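The four layers above follow the squeeze-and-excitation pattern. A minimal sketch, assuming PyTorch and the channel sizes of the embodiment below (64 frequency channels, 32 hidden neurons); the class name is illustrative:

```python
import torch
import torch.nn as nn

class FrequencyAttention(nn.Module):
    """Frequency domain attention: reweight DCT channels by learned scalars."""
    def __init__(self, channels: int = 64, hidden: int = 32):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)             # layer 1: global average pooling
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(),     # layer 2: FC + ReLU
            nn.Linear(hidden, channels), nn.Sigmoid(),  # layer 3: FC + Sigmoid
        )

    def forward(self, f):                 # f: (B, C, H, W) frequency features
        b, c, _, _ = f.shape
        w = self.pool(f).view(b, c)       # per-channel degradation statistics
        w = self.fc(w).view(b, c, 1, 1)   # per-channel weights in (0, 1)
        return f * w                      # layer 4: multiply with the input
```

The per-channel weights broadcast over the spatial dimensions, so the fourth-layer multiplication is applied pixel by pixel.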
(2.3) Constructing a frequency domain reconstruction module. The frequency domain reconstruction module can be designed as any neural network model; its input is the weighted frequency features and its output is a frequency prediction result;
(2.4) Constructing an inverse discrete cosine transform module, converting the blocked inverse discrete cosine transform into a transposed convolutional layer. The block size is N×N and the layer consists of N² convolution kernels of size N×N whose weights are initialized to the discrete cosine basis functions; the input of the inverse discrete cosine transform module is the frequency prediction result and the output is a frequency domain reconstructed image;
(3) Constructing an image domain reconstruction model
(3.1) Constructing an image domain attention module. The image domain attention module can be designed as any neural network model; its input is the channel-wise concatenation of the sparse angle CT image and the frequency domain reconstructed image, and its output is an image domain attention predicted image;
(3.2) Constructing an image domain reconstruction module. The image domain reconstruction module can be designed as any neural network model; its input is the concatenation of the frequency domain reconstructed image and the image domain attention predicted image, and its output is the final reconstruction result;
(4) Training the frequency domain model
Training the weight parameters of the frequency domain model using training pairs (sparse angle CT image; full angle CT image);
(5) Constructing a training data set for the image domain attention module
(5.1) Extracting the edges of the full angle CT image using an edge detection algorithm to obtain an edge detection image;
(5.2) Using the weight parameters trained in step (4), reconstructing the sparse angle CT image into a frequency domain reconstructed image with the frequency domain model; computing the difference between the frequency domain reconstructed image and the full angle CT image and taking its absolute value; setting pixels whose difference is greater than or equal to r to 1 and pixels whose difference is less than r to 0, obtaining a reconstructed difference image;
(5.3) Taking the intersection of the edge detection image and the reconstructed difference image to obtain the image domain attention ideal image; each group of images (concatenation of the sparse angle CT image and the frequency domain reconstructed image; image domain attention ideal image) forms a training sample for the image domain attention module;
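A compact sketch of steps (5.1)-(5.3), assuming NumPy and scikit-image, images normalized to [0, 1], Canny as the edge detector and r = 0.01 as in the embodiment below; the function name is illustrative:

```python
import numpy as np
from skimage.feature import canny

def attention_label(full_angle: np.ndarray, freq_recon: np.ndarray,
                    r: float = 0.01) -> np.ndarray:
    """Ideal image domain attention map: edge map AND reconstruction error map."""
    edges = canny(full_angle)                    # (5.1) edge detection image
    diff = np.abs(freq_recon - full_angle) >= r  # (5.2) reconstructed difference image
    return (edges & diff).astype(np.float32)     # (5.3) intersection
```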
(6) Training the image domain attention module
Training the weight parameters of the image domain attention module using training pairs (concatenation of the sparse angle CT image and the frequency domain reconstructed image; image domain attention ideal image), with binary cross entropy as the loss function;
(7) Training the image domain reconstruction module
Training the weight parameters of the image domain reconstruction module using training pairs (concatenation of the frequency domain reconstructed image and the image domain attention predicted image; full angle CT image);
(8) Overall training
Freezing the weight parameters of the image domain attention module, and training the frequency domain model and the image domain reconstruction module end to end using training pairs (sparse angle CT image; full angle CT image).
Drawings
FIG. 1 is a general schematic diagram of a network architecture;
FIG. 2 is a flowchart of the steps of the deep learning sparse angle CT reconstruction method based on frequency domain and image domain degradation perception according to the present invention;
FIG. 3 is a schematic diagram of a network structure of a discrete cosine transform module and a frequency domain attention module according to an embodiment;
FIG. 4 is a schematic diagram of a network structure of a frequency domain reconstruction module in an embodiment;
FIG. 5 is a schematic diagram of a network structure of an image domain attention module in an embodiment;
FIG. 6 is a schematic diagram of a network structure of an image domain reconstruction module in an embodiment;
FIG. 7 is a schematic diagram of a training data set of an image domain attention module in an embodiment;
FIG. 8 shows the image comparison results of the different processing methods in the embodiment.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Embodiment:
In this embodiment, the deep learning sparse angle CT reconstruction method based on frequency domain and image domain degradation perception comprises the following steps:
(1) Constructing a data set containing multiple degradation levels
Full angle CT images of different parts of patients are acquired and down-sampled to 512 × 512; full angle projection data over 360 projection angles are obtained by forward-projecting each CT image with a simulated fan-beam geometry. Gaussian and Poisson noise is added to the projection data, which are then sparsely sampled, randomly and uniformly, to 60, 120 and 240 projection angles. Finally, the sparse projection data are reconstructed into sparse angle CT images using the filtered back projection algorithm. The training, validation and test sets contain 9203, 300 and 1000 training pairs respectively;
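A rough sketch of this pipeline, assuming NumPy and scikit-image. One substitution is worth flagging: skimage's radon/iradon implement parallel-beam rather than the fan-beam geometry of the embodiment, and the noise level here is an illustrative stand-in for the Gaussian and Poisson noise described above:

```python
import numpy as np
from skimage.transform import radon, iradon

def make_training_pair(ct: np.ndarray, n_views: int,
                       rng: np.random.Generator = np.random.default_rng()):
    """Build one (sparse angle CT, full angle CT) pair from a 512 x 512 slice."""
    full_angles = np.linspace(0.0, 360.0, 360, endpoint=False)
    sino = radon(ct, theta=full_angles)             # full angle projection data
    sino = sino + rng.normal(0.0, 0.5, sino.shape)  # illustrative noise level
    keep = np.sort(rng.choice(360, n_views, replace=False))  # random uniform sparse sampling
    sparse_ct = iradon(sino[:, keep], theta=full_angles[keep],
                       filter_name='ramp')          # filtered back projection
    return sparse_ct, ct

# Usage: pairs at the three degradation levels of the embodiment.
# for n_views in (60, 120, 240):
#     sparse_ct, full_ct = make_training_pair(ct_slice, n_views)
```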
(2) Constructing a frequency domain reconstruction model
(2.1) Constructing a discrete cosine transform module, as shown in fig. 3, with a block size of 8 × 8; the convolutional layer consists of 64 convolution kernels of size 8 × 8, the weights are initialized to the discrete cosine basis functions, and the convolution stride is 8 so that image blocks do not overlap;
(2.2) Constructing a frequency domain attention module. As shown in fig. 3, the network structure between the input layer and the output layer is as follows:
The first layer is a global average pooling layer;
The second layer consists of a fully connected layer (32 neurons) and a rectified linear unit (ReLU);
The third layer consists of a fully connected layer (64 neurons) and a Sigmoid (logistic) function;
The fourth layer is a pixel-by-pixel multiplication: the input of the first layer is multiplied pixel by pixel with the output of the third layer;
(2.3) Constructing a frequency domain reconstruction module. As shown in fig. 4, the frequency domain reconstruction module is designed as a U-Net structure comprising convolution, max pooling and max unpooling operations (a sketch follows);
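A minimal sketch of such a U-Net, assuming PyTorch; only the operation types (convolution, max pooling, max unpooling, skip connection) follow the embodiment, while depth and channel widths are illustrative:

```python
import torch
import torch.nn as nn

def conv_block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class FreqUNet(nn.Module):
    """One-scale U-Net over the 64-channel frequency features."""
    def __init__(self, ch: int = 64, width: int = 128):
        super().__init__()
        self.enc = conv_block(ch, width)
        self.pool = nn.MaxPool2d(2, return_indices=True)  # max pooling
        self.mid = conv_block(width, width)
        self.unpool = nn.MaxUnpool2d(2)                   # max unpooling
        self.dec = conv_block(2 * width, width)
        self.out = nn.Conv2d(width, ch, 3, padding=1)

    def forward(self, x):                        # x: weighted frequency features
        e = self.enc(x)
        p, idx = self.pool(e)
        u = self.unpool(self.mid(p), idx)        # indices restore max locations
        d = self.dec(torch.cat([u, e], dim=1))   # skip connection
        return self.out(d)                       # frequency prediction result
```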
(2.4) Constructing an inverse discrete cosine transform module, converting the blocked inverse discrete cosine transform into a transposed convolutional layer. The block size is 8 × 8, the transposed convolutional layer consists of 64 convolution kernels of size 8 × 8, and the weights are initialized to the discrete cosine basis functions;
(3) Constructing an image domain reconstruction model
(3.1) Constructing an image domain attention module. As shown in fig. 5, the image domain attention module is designed as a U-Net structure comprising convolution, down-sampling and up-sampling operations;
(3.2) Constructing an image domain reconstruction module. As shown in fig. 6, the image domain reconstruction module is designed as a residual network comprising 6 residual blocks, built from convolution and ReLU operations (a sketch follows);
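A minimal sketch of this residual network, assuming PyTorch; the embodiment fixes the count of 6 residual blocks, while channel width and kernel size here are illustrative:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)              # residual connection

class ImageDomainReconstructor(nn.Module):
    """Maps the 2-channel concatenated input to the final reconstruction."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.head = nn.Conv2d(2, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(6)])
        self.tail = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, x):                    # x: (B, 2, H, W)
        return self.tail(self.blocks(self.head(x)))
```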
(4) Training the frequency domain model
The weight parameters of the frequency domain model are trained using training pairs (sparse angle CT image; full angle CT image) with mean squared error as the loss function, a batch size of 32 and the ADAM optimizer for gradient updates; the initial learning rate is 10⁻⁴ and is halved every 10⁵ iterations; training stops when the validation loss no longer decreases; the training process runs on a GPU (GeForce GTX 1080 Ti);
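A sketch of this training schedule, assuming PyTorch; the data loaders and the patience criterion used to decide that the validation loss "no longer decreases" are illustrative placeholders:

```python
import torch

def train_frequency_model(model, train_loader, val_loader, device='cuda'):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)       # ADAM, initial LR 1e-4
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=100_000, gamma=0.5)
    loss_fn = torch.nn.MSELoss()                              # mean squared error
    best_val, patience = float('inf'), 0
    while patience < 3:                  # illustrative early-stopping criterion
        model.train()
        for sparse_ct, full_ct in train_loader:               # batch size 32
            opt.zero_grad()
            loss = loss_fn(model(sparse_ct.to(device)), full_ct.to(device))
            loss.backward()
            opt.step()
            sched.step()                 # halve the LR every 1e5 iterations
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(s.to(device)), f.to(device)).item()
                      for s, f in val_loader) / len(val_loader)
        patience = 0 if val < best_val else patience + 1
        best_val = min(best_val, val)
```

Steps (6)-(8) below follow the same pattern with the loss function and batch size swapped accordingly.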
(5) Constructing a training data set for the image domain attention module
(5.1) As shown in fig. 7, the edges of the full angle CT image are extracted with the Canny edge detection operator to obtain an edge detection image;
(5.2) Using the weight parameters trained in step (4), the sparse angle CT image is reconstructed into a frequency domain reconstructed image with the frequency domain model; the difference between the frequency domain reconstructed image and the full angle CT image is computed and its absolute value taken (normalized to [0, 1]); pixels whose difference is greater than or equal to 0.01 are set to 1 and pixels whose difference is less than 0.01 are set to 0, giving a reconstructed difference image;
(5.3) The intersection of the edge detection image and the reconstructed difference image is taken to obtain the image domain attention ideal image;
(6) Training the image domain attention module
The weight parameters of the image domain attention module are trained using training pairs (concatenation of the sparse angle CT image and the frequency domain reconstructed image; image domain attention ideal image) with binary cross entropy as the loss function, a batch size of 4 and the ADAM optimizer for gradient updates; the initial learning rate is 10⁻⁴ and is halved every 10⁵ iterations; training stops when the validation loss no longer decreases; the training process runs on a GPU (GeForce GTX 1080 Ti);
(7) Training the image domain reconstruction module
The weight parameters of the image domain reconstruction module are trained using training pairs (concatenation of the frequency domain reconstructed image and the image domain attention predicted image; full angle CT image) with the L1 loss as the loss function, a batch size of 8 and the ADAM optimizer for gradient updates; the initial learning rate is 10⁻⁴ and is halved every 10⁵ iterations; training stops when the validation loss no longer decreases; the training process runs on a GPU (GeForce GTX 1080 Ti);
(8) Overall training
The weight parameters of the image domain attention module are frozen, and the frequency domain model and the image domain reconstruction module are trained end to end using training pairs (sparse angle CT image; full angle CT image) with the L1 loss as the loss function, a batch size of 1 and the ADAM optimizer for gradient updates; the initial learning rate is 10⁻⁴ and is halved every 10⁵ iterations; training stops when the validation loss no longer decreases; the training process runs on a GPU (GeForce GTX 1080 Ti).
This embodiment is evaluated on the test set with PSNR and SSIM as evaluation metrics, computed as follows:
$$\mathrm{MSE}=\frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\bigl[I(i,j)-K(i,j)\bigr]^2$$

$$\mathrm{PSNR}=10\cdot\log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right)$$

$$\mathrm{SSIM}=\frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)}$$

where $I(i,j)$ and $K(i,j)$ denote the final reconstruction result and the full angle CT image respectively, both of size $m \times n$; $\mathrm{MAX}$ is the maximum pixel value of the image; $\mu_x$ and $\mu_y$ are the means of $I(i,j)$ and $K(i,j)$; $\sigma_x^2$, $\sigma_y^2$ and $\sigma_{xy}$ are the variances of $I(i,j)$ and $K(i,j)$ and their covariance; $c_1=(0.01L)^2$, $c_2=(0.03L)^2$; and $L$ is the range of pixel values.
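For reference, a direct NumPy implementation of these formulas; note that common library SSIM implementations average over local windows, whereas the definition above uses whole-image statistics:

```python
import numpy as np

def psnr(I: np.ndarray, K: np.ndarray, max_val: float = 1.0) -> float:
    mse = np.mean((I - K) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(I: np.ndarray, K: np.ndarray, L: float = 1.0) -> float:
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_x, mu_y = I.mean(), K.mean()
    var_x, var_y = I.var(), K.var()
    cov = ((I - mu_x) * (K - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)
            / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```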
For convenience of description, the relevant terms are explained:
FBP: filtered back projection reconstruction algorithm
T-U-Net: a deep-learning-based reconstruction algorithm: Han Y, Ye J C. Framing U-Net via deep convolutional framelets: Application to sparse-view CT [J]. IEEE Transactions on Medical Imaging, 2018, 37(6): 1418-1429.
The reconstruction results are shown in Table 1:
Table 1 Comparison of reconstruction results
[Table 1 appears as an image in the original publication; its PSNR/SSIM values are not recoverable from the text.]
Fig. 8 shows the reconstruction results of the different methods at 120 projection angles; the results show that the method of this embodiment effectively suppresses noise and artifacts, reconstructs detailed structures and textures, and achieves a better reconstruction effect.

Claims (1)

1. A deep learning sparse angle CT reconstruction method based on frequency domain and image domain degradation perception comprises the following steps:
(1) Constructing a data set containing multiple degradation levels
Acquiring full angle CT images of different parts of a patient; forward-projecting the CT images to obtain complete projection data; sparsely sampling the projection data at different levels (for example, 60, 120 and 240 angles) and adding noise; reconstructing the sparse projection data into sparse angle CT images using an iterative algorithm; and forming a training pair from each sparse angle CT image and its full angle CT image;
(2) Constructing a frequency domain reconstruction model
(2.1) Constructing a discrete cosine transform module, converting the blocked discrete cosine transform into a convolutional layer. The block size is N×N and the convolutional layer consists of N² convolution kernels of size N×N whose weights are initialized to the discrete cosine basis functions; the input of the discrete cosine transform module is a sparse angle CT image and the output is frequency features;
(2.2) Constructing a frequency domain attention module. The input of the frequency domain attention module is the frequency features and the output is weighted frequency features. The network structure between the input layer and the output layer is as follows:
The first layer is a global average pooling layer;
The second layer consists of a fully connected layer and a rectified linear unit (ReLU);
The third layer consists of a fully connected layer and a Sigmoid (logistic) function;
The fourth layer is a pixel-by-pixel multiplication: the input of the first layer is multiplied pixel by pixel with the output of the third layer;
(2.3) Constructing a frequency domain reconstruction module. The frequency domain reconstruction module can be designed as any neural network model; its input is the weighted frequency features and its output is a frequency prediction result;
(2.4) Constructing an inverse discrete cosine transform module, converting the blocked inverse discrete cosine transform into a transposed convolutional layer. The block size is N×N and the layer consists of N² convolution kernels of size N×N whose weights are initialized to the discrete cosine basis functions; the input of the inverse discrete cosine transform module is the frequency prediction result and the output is a frequency domain reconstructed image;
(3) Constructing an image domain reconstruction model
(3.1) Constructing an image domain attention module. The image domain attention module can be designed as any neural network model; its input is the channel-wise concatenation of the sparse angle CT image and the frequency domain reconstructed image, and its output is an image domain attention predicted image;
(3.2) Constructing an image domain reconstruction module. The image domain reconstruction module can be designed as any neural network model; its input is the concatenation of the sparse angle CT image and the image domain attention predicted image, and its output is the final reconstruction result;
(4) Training the frequency domain model
Training the weight parameters of the frequency domain model using training pairs (sparse angle CT image; full angle CT image);
(5) Constructing a training data set for the image domain attention module
(5.1) Extracting the edges of the full angle CT image using an edge detection algorithm to obtain an edge detection image;
(5.2) Using the weight parameters trained in step (4), reconstructing the sparse angle CT image into a frequency domain reconstructed image with the frequency domain model; computing the difference between the frequency domain reconstructed image and the full angle CT image and taking its absolute value; setting pixels whose difference is greater than or equal to r to 1 and pixels whose difference is less than r to 0, obtaining a reconstructed difference image;
(5.3) Taking the intersection of the edge detection image and the reconstructed difference image to obtain the image domain attention ideal image; each group of images (concatenation of the sparse angle CT image and the frequency domain reconstructed image; image domain attention ideal image) forms a training sample for the image domain attention module;
(6) Training the image domain attention module
Training the weight parameters of the image domain attention module using training pairs (concatenation of the sparse angle CT image and the frequency domain reconstructed image; image domain attention ideal image), with binary cross entropy as the loss function;
(7) Training the image domain reconstruction module
Training the weight parameters of the image domain reconstruction module using training pairs (concatenation of the sparse angle CT image and the image domain attention predicted image; full angle CT image);
(8) Overall training
Freezing the weight parameters of the image domain attention module, and training the frequency domain model and the image domain reconstruction module end to end using training pairs (sparse angle CT image; full angle CT image).
CN202111472842.3A 2021-12-01 2021-12-01 Deep learning sparse angle CT reconstruction method based on frequency domain and image domain degradation perception Pending CN114140442A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111472842.3A CN114140442A (en) 2021-12-01 2021-12-01 Deep learning sparse angle CT reconstruction method based on frequency domain and image domain degradation perception

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111472842.3A CN114140442A (en) 2021-12-01 2021-12-01 Deep learning sparse angle CT reconstruction method based on frequency domain and image domain degradation perception

Publications (1)

Publication Number Publication Date
CN114140442A true CN114140442A (en) 2022-03-04

Family

ID=80388118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111472842.3A Pending CN114140442A (en) 2021-12-01 2021-12-01 Deep learning sparse angle CT reconstruction method based on frequency domain and image domain degradation perception

Country Status (1)

Country Link
CN (1) CN114140442A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114723842A (en) * 2022-05-24 2022-07-08 之江实验室 Sparse visual angle CT imaging method and device based on depth fusion neural network
CN114723842B (en) * 2022-05-24 2022-08-23 之江实验室 Sparse visual angle CT imaging method and device based on depth fusion neural network
CN115019183A (en) * 2022-07-28 2022-09-06 北京卫星信息工程研究所 Remote sensing image model migration method based on knowledge distillation and image reconstruction
CN116228903A (en) * 2023-01-18 2023-06-06 北京长木谷医疗科技有限公司 High-definition CT image reconstruction method based on CSA module and deep learning model
CN116228903B (en) * 2023-01-18 2024-02-09 北京长木谷医疗科技股份有限公司 High-definition CT image reconstruction method based on CSA module and deep learning model

Similar Documents

Publication Publication Date Title
Shi et al. Image compressed sensing using convolutional neural network
CN109035142B (en) Satellite image super-resolution method combining countermeasure network with aerial image prior
CN114140442A (en) Deep learning sparse angle CT reconstruction method based on frequency domain and image domain degradation perception
Yang et al. Single-image super-resolution reconstruction via learned geometric dictionaries and clustered sparse coding
Hawe et al. Analysis operator learning and its application to image reconstruction
Shen et al. Image denoising using a tight frame
Zhang et al. Image super-resolution reconstruction based on sparse representation and deep learning
CN110490832A (en) A kind of MR image reconstruction method based on regularization depth image transcendental method
CN110533591B (en) Super-resolution image reconstruction method based on codec structure
CN113674172B (en) Image processing method, system, device and storage medium
Zhao et al. CREAM: CNN-REgularized ADMM framework for compressive-sensed image reconstruction
Deeba et al. Sparse representation based computed tomography images reconstruction by coupled dictionary learning algorithm
He et al. Remote sensing image super-resolution using deep–shallow cascaded convolutional neural networks
CN113538246A (en) Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
Liu et al. Learning cascaded convolutional networks for blind single image super-resolution
Wen et al. Learning flipping and rotation invariant sparsifying transforms
CN115880158A (en) Blind image super-resolution reconstruction method and system based on variational self-coding
Jiang et al. A new nonlocal means based framework for mixed noise removal
Liu et al. Texture image prior for SAR image super resolution based on total variation regularization using split Bregman iteration
CN111243047B (en) Image compression sensing method based on self-adaptive nonlinear network and related product
CN116612009A (en) Multi-scale connection generation countermeasure network medical image super-resolution reconstruction method
CN107767342B (en) Wavelet transform super-resolution image reconstruction method based on integral adjustment model
CN114998137A (en) Ground penetrating radar image clutter suppression method based on generation countermeasure network
Li et al. A Decoupled method for image inpainting with patch-based low rank regulariztion
CN114757826A (en) POCS image super-resolution reconstruction method based on multiple features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination