CN114862982A - Hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network - Google Patents

Hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network

Info

Publication number
CN114862982A
CN114862982A (application CN202210493873.5A)
Authority
CN
China
Prior art keywords
image
feature map
generator
inputting
adversarial network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210493873.5A
Other languages
Chinese (zh)
Inventor
张荣繁
倪培君
谢恩
李雄兵
左欣
付康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Weapon Science Academy Ningbo Branch
Original Assignee
China Weapon Science Academy Ningbo Branch
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Weapon Science Academy Ningbo Branch
Priority to CN202210493873.5A
Publication of CN114862982A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network, comprising the following steps: step 1, constructing a CT reconstruction model, where the constructed model comprises a first generative adversarial network for completing the missing projection data and a second generative adversarial network for removing artifacts; step 2, independently training the first and second generative adversarial networks to obtain the trained networks and thus the trained CT reconstruction model; and step 3, inputting any test sample from the test set into the trained CT reconstruction model to obtain the reconstructed CT image. Advantages: the method further reduces the usable limited-angle range while producing images of better quality.

Description

Hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network
Technical Field
The invention relates to the technical field of image processing, and in particular to a hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network.
Background
Computed tomography (CT) is a cross-sectional imaging technique: by performing X-ray imaging of a scanned object from different angles to generate a sinogram and then applying a reconstruction algorithm, it can accurately recover the internal structure of the object. In many application scenarios, however, only projection data over a limited angular range (less than 180°) can be acquired, due to the shape of the inspected object, limits on X-ray dose, or the pursuit of scanning speed; examples include C-arm CT imaging, digital breast imaging and dental imaging. This setting is referred to as limited-angle CT. Such data are incomplete and do not satisfy the Nyquist sampling criterion, so images reconstructed by the filtered back-projection (FBP) algorithm contain severe artifacts, which can compromise industrial defect detection and medical diagnosis. Limited-angle reconstruction is a typical mathematical inverse problem whose solution may be unstable, non-unique or non-existent; it is a challenging yet widespread problem.
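As a concrete illustration of the artifact problem just described, the following sketch simulates limited-angle acquisition and FBP reconstruction with scikit-image. The phantom, the 1° sampling step and the [0°, 120°) range are illustrative assumptions, not parameters taken from this patent.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                      # 400x400 test object
full_angles = np.arange(0.0, 180.0, 1.0)           # complete half-scan
limited_angles = np.arange(0.0, 120.0, 1.0)        # only [0°, 120°) available

sino_full = radon(image, theta=full_angles)        # complete sinogram
sino_limited = radon(image, theta=limited_angles)  # limited-angle sinogram

rec_full = iradon(sino_full, theta=full_angles, filter_name='ramp')
rec_limited = iradon(sino_limited, theta=limited_angles, filter_name='ramp')
# rec_limited exhibits the directional streak/shading artifacts described above
```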
With the improvement of computer performance and the development of machine learning in recent years, many researchers have applied supervised convolutional neural network methods to CT image reconstruction. For example, the streak artifacts produced when a limited-angle sinogram is reconstructed by the FBP algorithm share similar characteristics, so an image-domain neural-network fitting framework was proposed to learn the mapping between artifact-contaminated and artifact-free images. Inspired by image-restoration research in machine vision, completing the missing information in the projection domain has also proved effective: one method completes the limited-angle sinogram with a Context Encoder (CE) and then reconstructs with an iterative algorithm, and the results show that context-encoder-based sinogram completion effectively improves the efficiency and quality of iterative reconstruction. These methods all operate in a single domain, either the image domain or the projection domain; a hybrid-domain convolutional neural network method was therefore proposed that combines projection-domain completion with image-domain artifact suppression, with results clearly superior to the single-domain FBPConvNet method.
When training data are plentiful and the network is well trained, supervised deep learning can achieve good reconstruction results. In limited-angle CT reconstruction, however, it is difficult to acquire large amounts of paired data for supervised training, so some researchers proposed an unsupervised sinogram inpainting network (SIN). The network consists of two parts: the first is a U-net-based generative adversarial network that completes the sinogram, and the second is a generative adversarial network with the same U-net structure that learns the mapping from artifact images to artifact-free images; the two parts are connected by differentiable Radon and inverse Radon transform networks. It outperforms the regularized simultaneous algebraic reconstruction technique (SART-TV) both qualitatively and quantitatively, but it can only generate images of size 256 × 256, and the limited angular range can be reduced only as far as [0°, 120°]. Another unsupervised tomographic reconstruction adversarial network based on hybrid-domain optimization was subsequently proposed; building on the sinogram inpainting network, it expands the reconstructed image size to 512 × 512 and significantly reduces reconstruction time. However, it learns the mapping from simulated to real sinograms in the projection domain without adding effective projection-domain information, so its effect on reconstruction quality is limited, and its minimum supported scan range is likewise [0°, 120°]; it cannot be used when the scan angle is reduced further. Further improvement is therefore needed.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network that further reduces the usable angular range while improving image quality.
The technical scheme adopted by the invention to solve the above problem is as follows: a hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network, characterized by comprising the following steps:
step 1, constructing a CT reconstruction model;
the constructed CT reconstruction model comprises a first generative adversarial network for completing the missing projection data and a second generative adversarial network for removing artifacts; the first generative adversarial network comprises a first generator G and a first discriminator D, where the first generator G consists of an encoder module a, an attention module b, a decoder module c and a multi-scale decoding module d; the second generative adversarial network comprises a second generator Gx2y, a second discriminator D_y, a third generator Gy2x and a third discriminator D_x;
step 2, independently training the first and second generative adversarial networks to obtain the trained networks and thus the trained CT reconstruction model;
the training process of the first generative adversarial network is as follows:
step 2-1, converting original CT images into a number of sinograms with angular range [0°, q°], where 180 < q ≤ 360, and deleting the projection data over [s°, t°] (with 0 ≤ s < t ≤ q) from each [0°, q°] sinogram to obtain projection-data-missing sinograms in one-to-one correspondence with the [0°, q°] sinograms; a training set and a test set are then constructed from them;
each training sample in the training set and each test sample in the test set comprises a sinogram with angular range [0°, q°] and its corresponding projection-data-missing sinogram;
step 2-2, inputting one or more training samples from the training set into the initialized first generative adversarial network;
the training process for any projection-data-missing sinogram input into the initialized first generative adversarial network is:
step 2-21, inputting the projection-data-missing sinogram into encoder module a, which comprises N sequentially connected downsampling layers, thereby obtaining the 1st, 2nd, …, N-th downsampling feature maps in turn; N is a positive integer;
step 2-22, inputting the downsampling feature map output by each downsampling layer into attention module b, which comprises N-1 ATN networks;
specifically: the N-th and (N-1)-th downsampling feature maps are input into the (N-1)-th ATN network to obtain the (N-1)-th repaired feature map; the i-th repaired feature map and the (i-1)-th downsampling feature map are input into the (i-1)-th ATN to obtain the (i-1)-th repaired feature map, with i taking the values N-1, N-2, …, 2 in turn, thereby yielding the (N-2)-th, (N-3)-th, …, 1st repaired feature maps;
step 2-23, inputting the downsampling feature maps output by encoder module a and the repaired feature maps output by attention module b into decoder module c through cross-layer connections; decoder module c comprises N sequentially connected upsampling layers followed by a convolutional layer with 1 output channel;
specifically: the N-th downsampling feature map is input into the 1st upsampling layer to obtain the 1st upsampling feature map; the j-th upsampling feature map is stacked with the (N-j)-th repaired feature map and input into the (j+1)-th upsampling layer to obtain the (j+1)-th upsampling feature map, with j taking the values 1, 2, …, N-1 in turn, thereby yielding the 2nd, 3rd, …, N-th upsampling feature maps; the N-th upsampling feature map then passes through the 1-channel convolutional layer to obtain the final complete sinogram;
step 2-24, inputting the final complete sinogram output by decoder module c into the first discriminator D, which comprises M sequentially connected convolutional layers, the last of which has 1 channel and outputs a probability-map image block;
in addition, the upsampling feature maps output by the first N-1 upsampling layers of decoder module c are input into multi-scale decoding module d, which comprises N-1 feature extraction layers, each a 1 × 1 convolution with 1 output channel;
specifically: the 1st, 2nd, …, (N-1)-th upsampling feature maps are each input into one of these 1 × 1, 1-channel feature extraction layers;
step 2-25, calculating the total loss function L_SI and, according to L_SI, updating the parameters of the initialized first generative adversarial network to obtain a first generative adversarial network after one round of training;
step 2-3, repeatedly selecting training samples from the training set and training the first generative adversarial network by the same method as in step 2-2, to obtain the trained first generative adversarial network;
the training process of the second generative adversarial network is as follows:
step 2-a, in the first generative adversarial network, splicing the image output by decoder module c with the projection-data-missing sinogram input into encoder module a, as the output of the first generative adversarial network; extracting the [0°, 180°] sinogram from this output and performing FBP reconstruction on it to obtain the generated-domain CT image r_x (see the sketch after step 2-f);
step 2-b, taking the generated-domain CT image r_x and the collected original CT image r_y as an unpaired data set;
step 2-c, inputting the generated-domain CT image r_x into the initialized second generator Gx2y to obtain a first image f_y, and inputting f_y into the second discriminator D_y;
step 2-d, inputting the original CT image r_y into the initialized third generator Gy2x to obtain a second image f_x, and inputting f_x into the third discriminator D_x;
step 2-e, calculating the total loss function L_AR over the generated-domain CT image r_x and the original CT image r_y and, according to L_AR, updating the network parameters of the second generator Gx2y and the third generator Gy2x to obtain a second generator after one round of training;
step 2-f, training the second generator Gx2y multiple times by the same method as in steps 2-a to 2-e, to obtain the trained second generator Gx2y;
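The following is a minimal sketch, under assumed array shapes, of how steps 2-a and 2-b hand data to the second network: the generator output is spliced into the missing views, the [0°, 180°) part is extracted, and FBP gives the generated-domain image r_x. The one-view-per-degree sampling and the NumPy/scikit-image implementation are assumptions; the patent does not fix these details.

```python
import numpy as np
from skimage.transform import iradon

def generated_domain_image(pred, masked_sino, mask, step=1.0):
    """pred, masked_sino: (detectors, n_views) arrays over [0°, q°);
    mask: (n_views,), 1 where projections were deleted (hypothetical layout)."""
    # step 2-a: keep the measured views, fill the deleted views from the generator
    spliced = masked_sino * (1.0 - mask)[None, :] + pred * mask[None, :]
    n180 = int(round(180.0 / step))           # columns covering [0°, 180°)
    theta = np.arange(n180) * step
    return iradon(spliced[:, :n180], theta=theta, filter_name='ramp')  # r_x
```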
step 3, inputting any test sample from the test set into the trained CT reconstruction model to obtain the reconstructed CT image;
the specific process is as follows:
step 3-1, inputting any projection-data-missing sinogram from the test set into the trained encoder module a and obtaining the first complete sinogram output by the trained decoder module c by the same method as in steps 2-21 to 2-23; splicing this first complete sinogram with the projection-data-missing sinogram input into the trained encoder module a to form the final full-angle sinogram;
step 3-2, extracting the [0°, 180°] sinogram from the final full-angle sinogram and performing FBP reconstruction on it to obtain the first generated-domain CT image;
step 3-3, inputting the first generated-domain CT image into the trained second generator Gx2y to obtain the reconstructed CT image.
Preferably, the encoder module a in step 2-21 consists of 6 sequentially connected downsampling layers, each a convolutional layer with a 3 × 3 kernel; the channel numbers are, in order, 32, 64, 128, 256, 512 and 512. LeakyReLU with slope 0.2 is used as the activation function for the first 5 convolutional layers; the sixth convolutional layer uses ReLU.
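A minimal PyTorch sketch of such an encoder follows. The stride-2 downsampling and the single-channel input are assumptions (the patent specifies only the kernel size, channel counts and activations); this is not the authors' implementation.

```python
import torch.nn as nn

class Encoder(nn.Module):
    """Sketch of encoder module a: six 3x3 convolutions with channel counts
    32-64-128-256-512-512; stride-2 downsampling and 1-channel input assumed."""
    def __init__(self):
        super().__init__()
        chans = [1, 32, 64, 128, 256, 512, 512]
        acts = [nn.LeakyReLU(0.2)] * 5 + [nn.ReLU()]   # ReLU keeps the last map positive
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1),
                          acts[i])
            for i in range(6)])

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)        # all six maps feed the attention/skip paths
        return feats
```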
Preferably, the decoder module c in step 2-23 comprises 6 upsampling layers, each formed by "upsampling + convolution" with a 3 × 3 kernel; the channel numbers of these 6 layers are, in order, 512, 256, 128, 64, 32 and 32, with ReLU activations. The final complete sinogram is output by a convolutional layer with 1 channel whose activation function is Tanh.
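A matching decoder sketch, assuming the concatenation pairing implied by step 2-23 (the j-th upsampled map is stacked with the (N-j)-th repaired map, so input channel counts double where a skip arrives); the nearest-neighbour upsampling mode is an assumption.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Sketch of decoder module c: "upsampling + convolution" blocks with
    channel counts 512-256-128-64-32-32 and a final 1-channel Tanh layer."""
    def __init__(self):
        super().__init__()
        # input channels double where a repaired feature map is concatenated
        in_ch = [512, 512 + 512, 256 + 256, 128 + 128, 64 + 64, 32 + 32]
        out_ch = [512, 256, 128, 64, 32, 32]
        self.ups = nn.ModuleList([
            nn.Sequential(nn.Upsample(scale_factor=2, mode='nearest'),
                          nn.Conv2d(i, o, 3, padding=1), nn.ReLU())
            for i, o in zip(in_ch, out_ch)])
        self.to_sino = nn.Sequential(nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())

    def forward(self, feats, repaired):
        # feats[-1]: deepest encoder map; repaired[k]: (k+1)-th repaired map
        x = self.ups[0](feats[-1])
        for j in range(1, 6):      # stack j-th up map with (N-j)-th repaired map
            x = self.ups[j](torch.cat([x, repaired[5 - j]], dim=1))
        return self.to_sino(x)
```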
Preferably, in step 2-24 the first discriminator D consists of 5 convolutional layers: the first 4 layers use 5 × 5 kernels with stride 2 and channel numbers 64, 128, 256 and 512, with LeakyReLU (slope 0.2) activations; the final layer is a convolution with 1 channel, a 5 × 5 kernel and stride 1.
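A sketch of this patch discriminator in PyTorch; the sigmoid output (so that each patch value is a probability) and the padding are assumptions.

```python
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Sketch of the first discriminator D: four stride-2 5x5 convolutions
    (64-128-256-512 channels, LeakyReLU 0.2) and a final stride-1 1-channel layer."""
    def __init__(self):
        super().__init__()
        layers, c_in = [], 1
        for c_out in [64, 128, 256, 512]:
            layers += [nn.Conv2d(c_in, c_out, 5, stride=2, padding=2),
                       nn.LeakyReLU(0.2)]
            c_in = c_out
        layers += [nn.Conv2d(512, 1, 5, stride=1, padding=2), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)   # patch map: per-receptive-field real/fake probability
```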
In the present invention, the total loss function L_SI in step 2-25 is calculated as:

L_SI = λ1·L_GAN(G, D) + λ2·L_hole(G) + λ3·L_ms

where λ1, λ2 and λ3 are the weights of L_GAN(G, D), L_hole(G) and L_ms respectively; L_GAN(G, D) is the generative adversarial loss,

L_GAN(G, D) = E_{y~pdata(y)}[log D(y)] + E_{x~pdata(x)}[log(1 - D(G(x)))]

where D(y) denotes the output probability of the first discriminator D for the [0°, q°] sinogram y of any training sample; G(x) denotes the image generated when the projection-data-missing sinogram x corresponding to y is input into the first generator G; D(G(x)) denotes the output probability of the first discriminator D for G(x); E denotes the mathematical expectation; pdata(y) denotes the data distribution of the sinogram y, and pdata(x) the data distribution of the projection-data-missing sinogram x. L_hole(G) is the hole loss,

L_hole(G) = E_{x~pdata(x)}[ ||m ⊙ (G(x) - y)||_1 ]

where m is a mask whose value is 1 over the missing projection region and 0 elsewhere; ⊙ denotes the element-wise matrix product and || · ||_1 the L1 norm. L_ms is the multi-scale L1 loss,

L_ms = Σ_{p=1}^{N-1} ||f(φ_p) - y_p||_1

where φ_p is the upsampling feature map output by the p-th layer of decoder module c, f(φ_p) denotes encoding φ_p into a 1-channel gray image of the same size, and y_p is the original image resized to the same size as φ_p.
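A hedged sketch of L_SI in PyTorch, using the non-saturating binary-cross-entropy form of the adversarial term and the embodiment's weights λ1 = 0.05, λ2 = 6, λ3 = 0.5; the exact GAN formulation used by the authors is not specified beyond the minimax expression above.

```python
import torch
import torch.nn.functional as F

def loss_SI(D, y, g_x, m, ms_outs, ms_targets, lam=(0.05, 6.0, 0.5)):
    """y: complete sinogram; g_x: generator output G(x); m: mask (1 = missing);
    ms_outs/ms_targets: per-scale decoded images and resized originals."""
    real, fake = D(y), D(g_x)
    l_gan = F.binary_cross_entropy(real, torch.ones_like(real)) + \
            F.binary_cross_entropy(fake, torch.zeros_like(fake))
    l_hole = (m * (g_x - y)).abs().mean()          # L1 on the missing region only
    l_ms = sum(F.l1_loss(o, t) for o, t in zip(ms_outs, ms_targets))
    return lam[0] * l_gan + lam[1] * l_hole + lam[2] * l_ms
```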
In the present invention, the total loss function L_AR in step 2-e is calculated as:

L_AR = α1·L_GAN + α2·L_cycle + α3·L_ide + α4·L_err

where α1, α2, α3 and α4 are the coefficients of L_GAN, L_cycle, L_ide and L_err respectively;

L_GAN = L_GAN(Gx2y) + L_GAN(Gy2x), where L_GAN(Gx2y) is the loss function of the second generator Gx2y,

L_GAN(Gx2y) = E_{y'~Pdata(r_y)}[log D_y(y')] + E_{x'~Pdata(r_x)}[log(1 - D_y(Gx2y(x')))]

here y'~Pdata(r_y) denotes that the image y' obeys the data distribution of the original CT image r_y; D_y(y') denotes the output probability of the second discriminator D_y for y'; x'~Pdata(r_x) denotes that the image x' obeys the data distribution of the generated-domain CT image r_x; Gx2y(x') denotes the image obtained by inputting x' into the second generator Gx2y, and D_y(Gx2y(x')) denotes the output probability of the second discriminator D_y for Gx2y(x'). L_GAN(Gy2x) is the loss function of the third generator Gy2x,

L_GAN(Gy2x) = E_{x'~Pdata(r_x)}[log D_x(x')] + E_{y'~Pdata(r_y)}[log(1 - D_x(Gy2x(y')))]

where D_x(x') denotes the output probability of the third discriminator D_x for x'; Gy2x(y') denotes the image obtained by inputting y' into the third generator Gy2x, and D_x(Gy2x(y')) denotes the output probability of the third discriminator D_x for Gy2x(y'). L_cycle is the cycle-consistency loss,

L_cycle = E_{x'~Pdata(r_x)}[ ||Gy2x(Gx2y(x')) - x'||_1 ] + E_{y'~Pdata(r_y)}[ ||Gx2y(Gy2x(y')) - y'||_1 ]

where Gy2x(Gx2y(x')) denotes the image output when Gx2y(x') is input into the third generator Gy2x, and Gx2y(Gy2x(y')) denotes the image output when Gy2x(y') is input into the second generator Gx2y. L_ide is the identity loss,

L_ide = E_{x'~Pdata(r_x)}[ ||Gy2x(x') - x'||_1 ] + E_{y'~Pdata(r_y)}[ ||Gx2y(y') - y'||_1 ]

where Gy2x(x') denotes the output when x' is input into the third generator Gy2x, and Gx2y(y') denotes the output when y' is input into the second generator Gx2y.
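A corresponding sketch of L_AR, mirroring the CycleGAN-style terms above with the embodiment's coefficients α = (1, 8, 4, 2). The L_err term is passed in externally because its formula is given only as an image in the original document and could not be recovered here.

```python
import torch
import torch.nn.functional as F

def loss_AR(Gx2y, Gy2x, D_y, D_x, r_x, r_y, alpha=(1.0, 8.0, 4.0, 2.0), l_err=None):
    def gan(D, real, fake):
        # BCE form of the adversarial objective for one mapping direction
        pr, pf = D(real), D(fake)
        return F.binary_cross_entropy(pr, torch.ones_like(pr)) + \
               F.binary_cross_entropy(pf, torch.zeros_like(pf))
    f_y, f_x = Gx2y(r_x), Gy2x(r_y)                  # steps 2-c and 2-d
    l_gan = gan(D_y, r_y, f_y) + gan(D_x, r_x, f_x)
    # cycle consistency: x -> y -> x and y -> x -> y should return the input
    l_cycle = F.l1_loss(Gy2x(f_y), r_x) + F.l1_loss(Gx2y(f_x), r_y)
    # identity: a generator fed its own target domain should change nothing
    l_ide = F.l1_loss(Gy2x(r_x), r_x) + F.l1_loss(Gx2y(r_y), r_y)
    total = alpha[0] * l_gan + alpha[1] * l_cycle + alpha[2] * l_ide
    return total if l_err is None else total + alpha[3] * l_err
```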
compared with the prior art, the invention has the advantages that: the first generation pairing-reactance network can carry out effective training under the condition of no pairing sine-reconstruction image pair, and generates a reconstruction result equivalent to full-scan reconstruction by using sine data missing from limited-angle projection data on the basis of ensuring the reconstruction size through an unsupervised method. Compared with the existing unsupervised deep learning mixed domain processing method, the method further reduces the limited angle range, and meanwhile, the generated image has better performance in quality.
Drawings
FIG. 1 is a schematic diagram of the first generative adversarial network in an embodiment of the invention;
FIG. 2 is a schematic diagram of the attention module of FIG. 1;
FIG. 3 is a schematic diagram of the first generative adversarial network as used for testing in an embodiment of the invention;
FIG. 4 is a schematic diagram of the second generative adversarial network in an embodiment of the invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
In this embodiment, a hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network comprises the following steps:
step 1, constructing a CT reconstruction model;
the constructed CT reconstruction model comprises a first generative adversarial network for completing the missing projection data and a second generative adversarial network for removing artifacts; the first generative adversarial network comprises a first generator G and a first discriminator D, where the first generator G consists of an encoder module a, an attention module b, a decoder module c and a multi-scale decoding module d; the second generative adversarial network comprises a second generator Gx2y, a second discriminator D_y, a third generator Gy2x and a third discriminator D_x;
step 2, independently training the first and second generative adversarial networks to obtain the trained networks and thus the trained CT reconstruction model;
the training process of the first generative adversarial network is as follows:
step 2-1, converting original CT images into a number of sinograms with angular range [0°, q°], where 180 < q ≤ 360, and deleting the projection data over [s°, t°] (with 0 ≤ s < t ≤ q) from each [0°, q°] sinogram to obtain projection-data-missing sinograms in one-to-one correspondence with the [0°, q°] sinograms; a training set and a test set are then constructed from them;
each training sample in the training set and each test sample in the test set comprises a sinogram with angular range [0°, q°] and its corresponding projection-data-missing sinogram;
in this embodiment, q = 192, s = 90 and t = 180;
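A minimal sketch of the data preparation in step 2-1 with the embodiment's values q = 192, s = 90, t = 180, assuming one projection per degree (the patent does not state the angular sampling step).

```python
import numpy as np
from skimage.transform import radon

def make_training_pair(ct_image, q=192, s=90, t=180):
    """Build one (complete, masked) sinogram pair; 1 view per degree assumed."""
    angles = np.arange(0.0, q, 1.0)
    sino = radon(ct_image, theta=angles)                       # [0°, q°) sinogram
    mask = ((angles >= s) & (angles < t)).astype(np.float32)   # 1 = deleted view
    masked = sino * (1.0 - mask)[None, :]    # projection-data-missing sinogram
    return sino, masked, mask
```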
step 2-2, inputting one or more training samples from the training set into the initialized first generative adversarial network;
the training process for any projection-data-missing sinogram input into the initialized first generative adversarial network is:
step 2-21, inputting the projection-data-missing sinogram into encoder module a, which comprises N sequentially connected downsampling layers, thereby obtaining the 1st, 2nd, …, N-th downsampling feature maps in turn; N is a positive integer;
step 2-22, inputting the downsampling feature map output by each downsampling layer into attention module b, which comprises N-1 ATN networks;
specifically: the N-th and (N-1)-th downsampling feature maps are input into the (N-1)-th ATN network to obtain the (N-1)-th repaired feature map; the i-th repaired feature map and the (i-1)-th downsampling feature map are input into the (i-1)-th ATN to obtain the (i-1)-th repaired feature map, with i taking the values N-1, N-2, …, 2 in turn, thereby yielding the (N-2)-th, (N-3)-th, …, 1st repaired feature maps;
step 2-23, inputting the downsampling feature maps output by encoder module a and the repaired feature maps output by attention module b into decoder module c through cross-layer connections; decoder module c comprises N sequentially connected upsampling layers followed by a convolutional layer with 1 output channel;
specifically: the N-th downsampling feature map is input into the 1st upsampling layer to obtain the 1st upsampling feature map; the j-th upsampling feature map is stacked with the (N-j)-th repaired feature map and input into the (j+1)-th upsampling layer to obtain the (j+1)-th upsampling feature map, with j taking the values 1, 2, …, N-1 in turn, thereby yielding the 2nd, 3rd, …, N-th upsampling feature maps; the N-th upsampling feature map then passes through the 1-channel convolutional layer to obtain the final complete sinogram;
step 2-24, inputting the final complete sinogram output by decoder module c into the first discriminator D, which comprises M sequentially connected convolutional layers, the last of which has 1 channel and outputs a probability-map image block;
in addition, the upsampling feature maps output by the first N-1 upsampling layers of decoder module c are input into multi-scale decoding module d, which comprises N-1 feature extraction layers, each a 1 × 1 convolution with 1 output channel;
specifically: the 1st, 2nd, …, (N-1)-th upsampling feature maps are each input into one of these 1 × 1, 1-channel feature extraction layers;
step 2-25, calculating the total loss function L_SI and, according to L_SI, updating the parameters of the initialized first generative adversarial network to obtain a first generative adversarial network after one round of training;
the total loss function L_SI is calculated as:

L_SI = λ1·L_GAN(G, D) + λ2·L_hole(G) + λ3·L_ms

where λ1, λ2 and λ3 are the weights of L_GAN(G, D), L_hole(G) and L_ms respectively; L_GAN(G, D) is the generative adversarial loss,

L_GAN(G, D) = E_{y~pdata(y)}[log D(y)] + E_{x~pdata(x)}[log(1 - D(G(x)))]

where D(y) denotes the output probability of the first discriminator D for the [0°, q°] sinogram y of any training sample; G(x) denotes the image generated when the projection-data-missing sinogram x corresponding to y is input into the first generator G, i.e. the image output by decoder module c in the first generator G; D(G(x)) denotes the output probability of the first discriminator D for G(x); E denotes the mathematical expectation; pdata(y) denotes the data distribution of the sinogram y, and y~pdata(y) denotes any sinogram (or batch of sinograms) y drawn from pdata(y); pdata(x) denotes the data distribution of the projection-data-missing sinogram x, and x~pdata(x) denotes any projection-data-missing sinogram (or batch) x drawn from pdata(x). L_hole(G) is the hole loss,

L_hole(G) = E_{x~pdata(x)}[ ||m ⊙ (G(x) - y)||_1 ]

where m is a mask of the same size as the sinograms y and G(x), taking the value 1 over the missing projection region and 0 elsewhere; ⊙ denotes the element-wise matrix product and || · ||_1 the L1 norm. L_ms is the multi-scale L1 loss,

L_ms = Σ_{p=1}^{N-1} ||f(φ_p) - y_p||_1

where φ_p is the upsampling feature map output by the p-th layer of decoder module c, f(φ_p) denotes encoding φ_p into a 1-channel gray image of the same size, and y_p is the original image resized to the same size as φ_p;
in this embodiment, λ1, λ2 and λ3 are set to 0.05, 6 and 0.5 respectively;
the adversarial loss L_GAN(G, D) tries to learn the mapping G(x) → y, driving the generator to produce from x a complete sinogram that follows the distribution of y; to make the generator produce realistic data in the missing region, this embodiment introduces the hole loss L_hole(G) to constrain the optimization result; in addition, the multi-scale L1 loss constrains the output of each decoder layer to resemble the original image at the same scale, and these constraints at different scales further improve the model;
step 2-3, repeatedly selecting training samples from the training set and training the first generative adversarial network by the same method as in step 2-2, to obtain the trained first generative adversarial network;
each training sample of the first generative adversarial network comprises a [0°, q°] sinogram and its corresponding projection-data-missing sinogram, but during actual training only the projection-data-missing sinogram is input into the first generative adversarial network to obtain the completed sinogram; the [0°, q°] sinogram serves as a constraint in the loss computation;
as shown in FIG. 4, the training process of the second generative adversarial network is:
step 2-a, in the first generative adversarial network, splicing the image output by decoder module c with the projection-data-missing sinogram input into encoder module a, as the output of the first generative adversarial network; extracting the [0°, 180°] sinogram from this output and performing FBP reconstruction on it to obtain the generated-domain CT image r_x;
step 2-b, taking the generated-domain CT image r_x and the collected original CT image r_y as an unpaired data set;
step 2-c, inputting the generated-domain CT image r_x into the initialized second generator Gx2y to obtain a first image f_y, and inputting f_y into the second discriminator D_y;
step 2-d, inputting the original CT image r_y into the initialized third generator Gy2x to obtain a second image f_x, and inputting f_x into the third discriminator D_x;
step 2-e, calculating the total loss function L_AR over the generated-domain CT image r_x and the original CT image r_y and, according to L_AR, updating the network parameters of the second generator Gx2y and the third generator Gy2x to obtain a second generator after one round of training;
the total loss function L_AR is calculated as:

L_AR = α1·L_GAN + α2·L_cycle + α3·L_ide + α4·L_err

where α1, α2, α3 and α4 are the coefficients of L_GAN, L_cycle, L_ide and L_err respectively;

L_GAN = L_GAN(Gx2y) + L_GAN(Gy2x), where L_GAN(Gx2y) is the loss function of the second generator Gx2y,

L_GAN(Gx2y) = E_{y'~Pdata(r_y)}[log D_y(y')] + E_{x'~Pdata(r_x)}[log(1 - D_y(Gx2y(x')))]

here y'~Pdata(r_y) denotes that the image y' obeys the data distribution of the original CT image r_y; D_y(y') denotes the output probability of the second discriminator D_y for y'; x'~Pdata(r_x) denotes that the image x' obeys the data distribution of the generated-domain CT image r_x; Gx2y(x') denotes the image obtained by inputting x' into the second generator Gx2y, and D_y(Gx2y(x')) denotes the output probability of the second discriminator D_y for Gx2y(x'). L_GAN(Gy2x) is the loss function of the third generator Gy2x,

L_GAN(Gy2x) = E_{x'~Pdata(r_x)}[log D_x(x')] + E_{y'~Pdata(r_y)}[log(1 - D_x(Gy2x(y')))]

where D_x(x') denotes the output probability of the third discriminator D_x for x'; Gy2x(y') denotes the image obtained by inputting y' into the third generator Gy2x, and D_x(Gy2x(y')) denotes the output probability of the third discriminator D_x for Gy2x(y'). L_cycle is the cycle-consistency loss,

L_cycle = E_{x'~Pdata(r_x)}[ ||Gy2x(Gx2y(x')) - x'||_1 ] + E_{y'~Pdata(r_y)}[ ||Gx2y(Gy2x(y')) - y'||_1 ]

where Gy2x(Gx2y(x')) denotes the image output when Gx2y(x') is input into the third generator Gy2x, and Gx2y(Gy2x(y')) denotes the image output when Gy2x(y') is input into the second generator Gx2y. L_ide is the identity loss,

L_ide = E_{x'~Pdata(r_x)}[ ||Gy2x(x') - x'||_1 ] + E_{y'~Pdata(r_y)}[ ||Gx2y(y') - y'||_1 ]

where Gy2x(x') denotes the output when x' is input into the third generator Gy2x, and Gx2y(y') denotes the output when y' is input into the second generator Gx2y;
in this embodiment, α1, α2, α3 and α4 are set to 1, 8, 4 and 2 respectively;
step 2-f, training the second generator Gx2y multiple times by the same method as in steps 2-a to 2-e, to obtain the trained second generator Gx2y;
step 3, inputting any test sample from the test set into the trained CT reconstruction model to obtain the reconstructed CT image;
the specific process is as follows:
step 3-1, inputting any projection-data-missing sinogram from the test set into the trained encoder module a and obtaining the first complete sinogram output by the trained decoder module c by the same method as in steps 2-21 to 2-23; splicing this first complete sinogram with the projection-data-missing sinogram input into the trained encoder module a to form the final full-angle sinogram;
step 3-2, extracting the [0°, 180°] sinogram from the final full-angle sinogram and performing FBP reconstruction on it to obtain the first generated-domain CT image;
step 3-3, inputting the first generated-domain CT image into the trained second generator Gx2y to obtain the reconstructed CT image.
The overall framework of the first generative adversarial network is shown in FIG. 1, and the ATN network of attention module b is shown in FIG. 2. During testing, as shown in FIG. 3, only the three trained modules (encoder module a, attention module b and decoder module c) are used to predict the missing part of the real limited-angle sinogram, which is spliced with the input limited-angle sinogram to form the final full-angle sinogram.
The encoder module a in the first generative adversarial network mainly extracts feature information from the limited-angle sinogram, downsampling the low-level mask and image layer by layer to learn compact high-level latent features. In this embodiment, encoder module a consists of 6 sequentially connected downsampling layers, each a convolutional layer with a 3 × 3 kernel; the channel numbers are, in order, 32, 64, 128, 256, 512 and 512. LeakyReLU with slope 0.2 is used for the first 5 convolutional layers; so that all values of the encoder's output feature map are positive, the sixth convolutional layer uses ReLU.
To overcome the limitation that layer-by-layer convolution prevents encoder module a from effectively observing distant image features, the invention adds attention module b: an ATN network is used repeatedly between adjacent encoder layers, learning the content of the higher-level semantic feature layer to fill the missing region of the lower-level feature layer. This cross-layer attention filling mechanism maps high-level semantic features to low-level features, achieves higher resolution, and keeps the content of the missing region consistent. The specific procedure of the ATN network is described in the paper: Yu, J., et al., "Generative Image Inpainting with Contextual Attention", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 5505-5514.
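The full ATN follows the contextual-attention construction of Yu et al.; the sketch below is a deliberately simplified stand-in that only conveys the idea of cross-layer attention filling (similarities computed on the high-level map decide how the lower-level map is re-assembled). It is not the patent's ATN: real contextual attention matches background patches to the hole region and preserves the low-level map's own resolution.

```python
import torch
import torch.nn.functional as F

def attention_fill(high, low):
    """Simplified cross-layer attention: high-level similarity re-assembles
    the low-level feature map (illustrative assumption, not the real ATN)."""
    b, c_h, h, w = high.shape
    hi = F.normalize(high.flatten(2), dim=1)         # (b, c_h, h*w), cosine space
    scores = torch.einsum('bcn,bcm->bnm', hi, hi)    # position-to-position similarity
    attn = scores.softmax(dim=-1)
    lo = F.interpolate(low, size=(h, w)).flatten(2)  # align low map to high grid
    filled = torch.einsum('bnm,bcm->bcn', attn, lo)  # attention-weighted re-assembly
    return filled.view(b, low.shape[1], h, w)
```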
Decoder module c receives feature maps from both encoder module a and attention module b through cross-layer connections: the output feature map of the previous decoder layer is stacked with the output of the corresponding attention-module layer and fed into the next decoder layer, recursively, completing the missing holes to form a complete sinogram. To avoid the checkerboard artifacts of transposed convolution, this module replaces transposed convolutional layers with "upsampling + convolution". Each layer uses a 3 × 3 kernel, with channel numbers 512, 256, 128, 64, 32 and 32 for the first 6 layers and ReLU activations; the final complete sinogram is output by a convolutional layer with 1 channel whose activation function is Tanh.
The multi-scale decoding module d convolves feature maps from different layers of decoder module c to generate images at different scales, establishing a stricter multi-scale loss so that the model achieves better results. All of its convolutional layers use a 1 × 1 kernel with 1 output channel and Tanh as the activation function.
The first discriminator D in this embodiment outputs an M × N image block rather than a single value; each pixel value in the block represents the probability that the image within the corresponding receptive field of the original image is real. This region-aware discriminator improves the textural and semantic similarity between the generated and original images, giving the generated images higher resolution and finer detail. The first discriminator D here consists of 5 convolutional layers: the first 4 use 5 × 5 kernels with stride 2 and channel numbers 64, 128, 256 and 512, with LeakyReLU (slope 0.2) activations; the final layer is a convolution with 1 channel, a 5 × 5 kernel and stride 1, outputting a probability image block of size 59 × 19.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the technical principle of the invention, and such modifications and improvements shall also fall within the protection scope of the invention.

Claims (6)

1. A hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network, characterized by comprising the following steps:
step 1, constructing a CT reconstruction model;
the constructed CT reconstruction model comprises a first generative adversarial network for completing the missing projection data and a second generative adversarial network for removing artifacts;
the first generative adversarial network comprises a first generator G and a first discriminator D, where the first generator G consists of an encoder module a, an attention module b, a decoder module c and a multi-scale decoding module d; the second generative adversarial network comprises a second generator Gx2y, a second discriminator D_y, a third generator Gy2x and a third discriminator D_x;
step 2, independently training the first and second generative adversarial networks to obtain the trained networks and thus the trained CT reconstruction model;
the training process of the first generative adversarial network is as follows:
step 2-1, converting original CT images into a number of sinograms with angular range [0°, q°], where 180 < q ≤ 360, and deleting the projection data over [s°, t°] (with 0 ≤ s < t ≤ q) from each [0°, q°] sinogram to obtain projection-data-missing sinograms in one-to-one correspondence with the [0°, q°] sinograms; a training set and a test set are then constructed from them;
each training sample in the training set and each test sample in the test set comprises a sinogram with angular range [0°, q°] and its corresponding projection-data-missing sinogram;
step 2-2, inputting one or more training samples from the training set into the initialized first generative adversarial network;
the training process for any projection-data-missing sinogram input into the initialized first generative adversarial network is:
step 2-21, inputting the projection-data-missing sinogram into encoder module a, which comprises N sequentially connected downsampling layers, thereby obtaining the 1st, 2nd, …, N-th downsampling feature maps in turn; N is a positive integer;
step 2-22, inputting the downsampling feature map output by each downsampling layer into attention module b, which comprises N-1 ATN networks;
specifically: the N-th and (N-1)-th downsampling feature maps are input into the (N-1)-th ATN network to obtain the (N-1)-th repaired feature map; the i-th repaired feature map and the (i-1)-th downsampling feature map are input into the (i-1)-th ATN to obtain the (i-1)-th repaired feature map, with i taking the values N-1, N-2, …, 2 in turn, thereby yielding the (N-2)-th, (N-3)-th, …, 1st repaired feature maps;
step 2-23, inputting the downsampling feature maps output by encoder module a and the repaired feature maps output by attention module b into decoder module c through cross-layer connections; decoder module c comprises N sequentially connected upsampling layers followed by a convolutional layer with 1 output channel;
specifically: the N-th downsampling feature map is input into the 1st upsampling layer to obtain the 1st upsampling feature map; the j-th upsampling feature map is stacked with the (N-j)-th repaired feature map and input into the (j+1)-th upsampling layer to obtain the (j+1)-th upsampling feature map, with j taking the values 1, 2, …, N-1 in turn, thereby yielding the 2nd, 3rd, …, N-th upsampling feature maps; the N-th upsampling feature map then passes through the 1-channel convolutional layer to obtain the final complete sinogram;
step 2-24, inputting the final complete sinogram output by decoder module c into the first discriminator D, which comprises M sequentially connected convolutional layers, the last of which has 1 channel and outputs a probability-map image block;
in addition, the upsampling feature maps output by the first N-1 upsampling layers of decoder module c are input into multi-scale decoding module d, which comprises N-1 feature extraction layers, each a 1 × 1 convolution with 1 output channel;
specifically: the 1st, 2nd, …, (N-1)-th upsampling feature maps are each input into one of these 1 × 1, 1-channel feature extraction layers;
step 2-25, calculating the total loss function L_SI and, according to L_SI, updating the parameters of the initialized first generative adversarial network to obtain a first generative adversarial network after one round of training;
step 2-3, repeatedly selecting training samples from the training set and training the first generative adversarial network by the same method as in step 2-2, to obtain the trained first generative adversarial network;
the training process of the second generative adversarial network is as follows:
step 2-a, in the first generative adversarial network, splicing the image output by decoder module c with the projection-data-missing sinogram input into encoder module a, as the output of the first generative adversarial network; extracting the [0°, 180°] sinogram from this output and performing FBP reconstruction on it to obtain the generated-domain CT image r_x;
step 2-b, taking the generated-domain CT image r_x and the collected original CT image r_y as an unpaired data set;
step 2-c, inputting the generated-domain CT image r_x into the initialized second generator Gx2y to obtain a first image f_y, and inputting f_y into the second discriminator D_y;
step 2-d, inputting the original CT image r_y into the initialized third generator Gy2x to obtain a second image f_x, and inputting f_x into the third discriminator D_x;
step 2-e, calculating the total loss function L_AR over the generated-domain CT image r_x and the original CT image r_y and, according to L_AR, updating the network parameters of the second generator Gx2y and the third generator Gy2x to obtain a second generator Gx2y after one round of training;
step 2-f, training the second generator Gx2y multiple times by the same method as in steps 2-a to 2-e, to obtain the trained second generator Gx2y;
step 3, inputting any test sample from the test set into the trained CT reconstruction model to obtain the reconstructed CT image;
the specific process is as follows:
step 3-1, inputting any projection-data-missing sinogram from the test set into the trained encoder module a and obtaining the first complete sinogram output by the trained decoder module c by the same method as in steps 2-21 to 2-23; splicing this first complete sinogram with the projection-data-missing sinogram input into the trained encoder module a to form the final full-angle sinogram;
step 3-2, extracting the [0°, 180°] sinogram from the final full-angle sinogram and performing FBP reconstruction on it to obtain the first generated-domain CT image;
step 3-3, inputting the first generated-domain CT image into the trained second generator Gx2y to obtain the reconstructed CT image.
2. The hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network of claim 1, wherein the encoder module a in step 2-21 consists of 6 sequentially connected downsampling layers, each a convolutional layer with a 3 × 3 kernel; the channel numbers are, in order, 32, 64, 128, 256, 512 and 512; LeakyReLU with slope 0.2 is used as the activation function for the first 5 convolutional layers, and the sixth convolutional layer uses ReLU.
3. The hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network of claim 1, wherein the decoder module c in step 2-23 comprises 6 upsampling layers, each formed by "upsampling + convolution" with a 3 × 3 kernel; the channel numbers of these 6 layers are, in order, 512, 256, 128, 64, 32 and 32, with ReLU activations; the final complete sinogram is output by a convolutional layer with 1 channel whose activation function is Tanh.
4. The hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network of claim 1, wherein in step 2-24 the first discriminator D consists of 5 convolutional layers: the first 4 layers use 5 × 5 kernels with stride 2 and channel numbers 64, 128, 256 and 512, with LeakyReLU (slope 0.2) activations; the final layer is a convolution with 1 channel, a 5 × 5 kernel and stride 1.
5. The hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network of claim 1, wherein the total loss function L_SI in step 2-25 is calculated as:

L_SI = λ1·L_GAN(G, D) + λ2·L_hole(G) + λ3·L_ms

where λ1, λ2 and λ3 are the weights of L_GAN(G, D), L_hole(G) and L_ms respectively; L_GAN(G, D) is the generative adversarial loss,

L_GAN(G, D) = E_{y~pdata(y)}[log D(y)] + E_{x~pdata(x)}[log(1 - D(G(x)))]

where D(y) denotes the output probability of the first discriminator D for the [0°, q°] sinogram y of any training sample; G(x) denotes the image generated when the projection-data-missing sinogram x corresponding to y is input into the first generator G; D(G(x)) denotes the output probability of the first discriminator D for G(x); E denotes the mathematical expectation; pdata(y) denotes the data distribution of the sinogram y; pdata(x) denotes the data distribution of the projection-data-missing sinogram x; L_hole(G) is the hole loss,

L_hole(G) = E_{x~pdata(x)}[ ||m ⊙ (G(x) - y)||_1 ]

where m is a mask whose value is 1 over the missing projection region and 0 elsewhere; ⊙ denotes the element-wise matrix product and || · ||_1 the L1 norm; L_ms is the multi-scale L1 loss,

L_ms = Σ_{p=1}^{N-1} ||f(φ_p) - y_p||_1

where φ_p is the upsampling feature map output by the p-th layer of decoder module c, f(φ_p) denotes encoding φ_p into a 1-channel gray image of the same size, and y_p is the original image resized to the same size as φ_p.
6. The hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network of claim 1, wherein the total loss function L_AR in step 2-e is calculated as:

L_AR = α1·L_GAN + α2·L_cycle + α3·L_ide + α4·L_err

where α1, α2, α3 and α4 are the coefficients of L_GAN, L_cycle, L_ide and L_err respectively;

L_GAN = L_GAN(Gx2y) + L_GAN(Gy2x), where L_GAN(Gx2y) is the loss function of the second generator Gx2y,

L_GAN(Gx2y) = E_{y'~Pdata(r_y)}[log D_y(y')] + E_{x'~Pdata(r_x)}[log(1 - D_y(Gx2y(x')))]

here y'~Pdata(r_y) denotes that the image y' obeys the data distribution of the original CT image r_y; D_y(y') denotes the output probability of the second discriminator D_y for y'; x'~Pdata(r_x) denotes that the image x' obeys the data distribution of the generated-domain CT image r_x; Gx2y(x') denotes the image obtained by inputting x' into the second generator Gx2y, and D_y(Gx2y(x')) denotes the output probability of the second discriminator D_y for Gx2y(x'); L_GAN(Gy2x) is the loss function of the third generator Gy2x,

L_GAN(Gy2x) = E_{x'~Pdata(r_x)}[log D_x(x')] + E_{y'~Pdata(r_y)}[log(1 - D_x(Gy2x(y')))]

where D_x(x') denotes the output probability of the third discriminator D_x for x'; Gy2x(y') denotes the image obtained by inputting y' into the third generator Gy2x, and D_x(Gy2x(y')) denotes the output probability of the third discriminator D_x for Gy2x(y'); L_cycle is the cycle-consistency loss,

L_cycle = E_{x'~Pdata(r_x)}[ ||Gy2x(Gx2y(x')) - x'||_1 ] + E_{y'~Pdata(r_y)}[ ||Gx2y(Gy2x(y')) - y'||_1 ]

where Gy2x(Gx2y(x')) denotes the image output when Gx2y(x') is input into the third generator Gy2x, and Gx2y(Gy2x(y')) denotes the image output when Gy2x(y') is input into the second generator Gx2y; L_ide is the identity loss,

L_ide = E_{x'~Pdata(r_x)}[ ||Gy2x(x') - x'||_1 ] + E_{y'~Pdata(r_y)}[ ||Gx2y(y') - y'||_1 ]

where Gy2x(x') denotes the output when x' is input into the third generator Gy2x, and Gx2y(y') denotes the output when y' is input into the second generator Gx2y.
CN202210493873.5A 2022-04-28 2022-04-28 Hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network Pending CN114862982A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210493873.5A CN114862982A (en) Hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210493873.5A CN114862982A (en) Hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network

Publications (1)

Publication Number Publication Date
CN114862982A 2022-08-05

Family

ID=82634886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210493873.5A Pending CN114862982A (en) Hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network

Country Status (1)

Country Link
CN (1) CN114862982A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115512182A (en) * 2022-09-26 2022-12-23 中国人民解放军总医院第一医学中心 CT angiography intelligent imaging method based on focused learning


Similar Documents

Publication Publication Date Title
WO2022047625A1 (en) Image processing method and system, and computer storage medium
CN108898642B (en) Sparse angle CT imaging method based on convolutional neural network
CN110728729B (en) Attention mechanism-based unsupervised CT projection domain data recovery method
CN112085677A (en) Image processing method, system and computer storage medium
CN110942424A (en) Composite network single image super-resolution reconstruction method based on deep learning
CN115953494B (en) Multi-task high-quality CT image reconstruction method based on low dose and super resolution
CN113538246B (en) Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
CN113762147B (en) Facial expression migration method and device, electronic equipment and storage medium
CN113160380B (en) Three-dimensional magnetic resonance image super-resolution reconstruction method, electronic equipment and storage medium
CN111667407B (en) Image super-resolution method guided by depth information
CN115330615A (en) Method, apparatus, device, medium, and program product for training artifact removal model
CN113592715A (en) Super-resolution image reconstruction method for small sample image set
CN116645283A (en) Low-dose CT image denoising method based on self-supervision perceptual loss multi-scale convolutional neural network
CN115631107A (en) Edge-guided single image noise removal
CN114862982A (en) Hybrid-domain unsupervised limited-angle CT reconstruction method based on a generative adversarial network
US20230079353A1 (en) Image correction using an invertable network
Liu et al. Tomogan: Low-dose x-ray tomography with generative adversarial networks
Chan et al. An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction
CN117036162B (en) Residual feature attention fusion method for super-resolution of lightweight chest CT image
CN116503506B (en) Image reconstruction method, system, device and storage medium
CN117422619A (en) Training method of image reconstruction model, image reconstruction method, device and equipment
CN116188615A (en) Sparse angle CT reconstruction method based on sine domain and image domain
CN115239836A (en) Extreme sparse view angle CT reconstruction method based on end-to-end neural network
CN113902912A (en) CBCT image processing method, neural network system creation method, and device
US20230019733A1 (en) Motion artifact correction using artificial neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination