CN112348936A - Low-dose cone-beam CT image reconstruction method based on deep learning - Google Patents


Info

    • Publication number: CN112348936A (granted as CN112348936B)
    • Application number: CN202011371624.6A
    • Authority: CN (China)
    • Application filed by: Huazhong University of Science and Technology
    • Original and current assignee: Huazhong University of Science and Technology
    • Inventors: 李强, 晁联盈, 王燕丽
    • Other languages: Chinese (zh)
    • Prior art keywords: image, projection, neural network, cone, domain
    • Legal status: Granted; Active

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00: Computing arrangements based on biological models
                    • G06N3/02: Neural networks
                        • G06N3/04: Architecture, e.g. interconnection topology
                            • G06N3/045: Combinations of networks
                        • G06N3/08: Learning methods
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T15/00: 3D [Three Dimensional] image rendering
                • G06T5/00: Image enhancement or restoration
                    • G06T5/70: Denoising; Smoothing
                • G06T2207/00: Indexing scheme for image analysis or image enhancement
                    • G06T2207/10: Image acquisition modality
                        • G06T2207/10072: Tomographic images
                            • G06T2207/10081: Computed x-ray tomography [CT]
                    • G06T2207/20: Special algorithmic details
                        • G06T2207/20081: Training; Learning


Abstract

The invention discloses a deep-learning-based low-dose cone-beam CT image reconstruction method in the field of medical imaging, comprising the following steps: transform the low-dose cone-beam CT raw projection data into a set of projection images; feed each projection image into a trained projection-domain deep convolutional neural network, which predicts the noise distribution in the projection image, subtracts the predicted noise from the image, and outputs a corresponding high-quality projection image; perform three-dimensional reconstruction on the high-quality projection images to obtain a cone-beam CT image. The method further comprises: feeding the reconstructed cone-beam CT image into a trained image-domain deep convolutional neural network, which removes residual noise and artifacts and outputs a high-quality cone-beam CT image as the final reconstruction result. The invention reduces the X-ray dose of cone-beam CT while improving the quality of the reconstructed cone-beam CT image.

Description

Low-dose cone-beam CT image reconstruction method based on deep learning
Technical Field
The invention belongs to the field of medical imaging and particularly relates to a deep-learning-based low-dose cone-beam CT image reconstruction method.
Background
Cone-beam CT (CBCT) is widely used in image-guided surgery, image-guided radiotherapy, three-dimensional imaging of the oral cavity, and related fields. Compared with conventional fan-beam CT, cone-beam CT offers higher X-ray utilization and faster scanning. However, as in fan-beam CT, the X-ray radiation in cone-beam CT affects patient health, and studies have shown that excessive X-ray exposure can cause genetic mutations, cancer, and other diseases. Low-dose cone-beam CT is therefore of growing practical importance. Low-dose cone-beam CT is realized in two main ways: sparsely acquiring projection images, which leaves streak artifacts in FDK-reconstructed images; or reducing the X-ray intensity, which makes the projection images noisy so that the reconstructed images contain noise and artifacts. In both cases, the artifacts and noise introduced into the CBCT reconstruction can mix with tissue information and interfere with the physician's identification and localization of abnormal tissue.
At the same low dose, cone-beam CT suffers from more severe artifacts and noise than conventional fan-beam CT. When a CT image contains only a small amount of noise, image-domain techniques can yield good image quality. For CT images with severe noise, however, where the noise may completely cover the structural information, image-domain techniques sacrifice image structure while removing the noise.
Current mainstream methods for high-quality low-dose cone-beam CT reconstruction fall into projection-domain filtering methods, iterative methods, and image-domain methods. Projection-domain filtering designs a filter from known noise-distribution information and improves projection-image quality in the projection domain so as to improve the reconstructed image, as in the FDK algorithm; it tends to lose structures in the projection images, so the reconstructed image also loses structural information. Iterative methods design an objective function from the statistical properties of the projection data and prior information about the reconstructed image, and obtain a high-quality reconstruction by optimizing this objective during iteration, as in the simultaneous algebraic reconstruction technique; iterative methods are often time-consuming, and their hyperparameters must be tuned empirically. Image-domain methods divide into traditional methods, such as Block-Matching and 3D filtering (BM3D) and dictionary learning, and image-domain deep learning techniques, such as residual encoder-decoder convolutional neural networks. However, image-domain methods offer little potential for dose reduction while preserving image quality, and neither traditional nor deep-learning image-domain methods can recover image details already lost in the reconstructed image.
Disclosure of Invention
Aiming at the defects and improvement requirements of the prior art, the invention provides a deep-learning-based low-dose cone-beam CT image reconstruction method, which aims to improve the quality of the reconstructed cone-beam CT image while reducing the X-ray dose of cone-beam CT.
To achieve the above object, according to one aspect of the present invention, there is provided a low dose cone-beam CT image reconstruction method based on deep learning, including:
a projection transformation step: transforming the low-dose cone-beam CT original projection data into a plurality of projection images;
a projection-image denoising step: inputting the projection image to be processed into a trained projection-domain deep convolutional neural network; the network predicts the noise distribution in the projection image, subtracts the predicted noise from the image, and outputs a high-quality projection image;
a three-dimensional reconstruction step: performing the projection-image denoising step on each of the projection images obtained in the projection transformation step to obtain the corresponding high-quality projection images, and then performing three-dimensional reconstruction on these high-quality projection images to obtain a cone-beam CT image.
After the low-dose cone-beam CT raw projection data are transformed into projection images, the projection-domain deep convolutional neural network predicts the noise distribution in each projection image, and this noise is subtracted from the image. Noise and artifacts caused by the low dose are thus removed while the original structure of the projection image is preserved, effectively improving projection-image quality. Because the three-dimensional reconstruction is then performed on the denoised projection images, the quality of the reconstructed cone-beam CT image is also effectively improved.
Because no time-consuming iterative process is required during image reconstruction, the reconstruction speed is high.
Further, the projection-domain deep convolutional neural network comprises one or more first residual blocks connected in sequence, all with the same structure;
each first residual block comprises one or more first units connected in sequence, where a first unit consists of a convolutional layer, a normalization layer, and an activation function layer connected in sequence; the input of the first residual block is added, via a skip connection, to the output of the last activation function layer to form the output of the block, giving a residual network.
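As an illustration of this structure, the following sketch implements a first residual block in plain NumPy. The 3x3 kernel, single-channel images, and per-image normalization standing in for batch normalization are illustrative assumptions, not the patent's exact configuration:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2D convolution (illustrative, single channel)."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def first_unit(x, k, eps=1e-5):
    """Conv -> normalization -> ReLU, as in the patent's 'first unit'."""
    y = conv2d_same(x, k)
    y = (y - y.mean()) / np.sqrt(y.var() + eps)   # stand-in for batch norm
    return np.maximum(y, 0.0)                     # ReLU

def first_residual_block(x, kernels):
    """Sequence of first units; the block input is added to the last
    unit's output via a skip connection, forming the residual structure."""
    y = x
    for k in kernels:
        y = first_unit(y, k)
    return x + y
```

With all-zero kernels the branch contributes nothing and the block reduces to the identity, which is one reason residual blocks remain trainable at depth.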
Further, during training the projection-domain deep convolutional neural network serves as the generator of a generative adversarial network.
The generator of the generative adversarial network predicts the noise distribution of the input image and subtracts it from the input image to obtain a high-quality image; the discriminator measures the difference between the high-quality image output by the generator and the corresponding gold standard and feeds this difference back to update the generator.
The invention completes the training of the projection-domain deep convolutional neural network by means of a generative adversarial network, with the projection-domain network as the generator; the discriminator's judgment thus ensures the training quality of the network, so that noise in the projection images is removed effectively and the quality of the intermediate images in cone-beam CT reconstruction is improved.
Further, the training method of the projection-domain deep convolutional neural network comprises:
randomly selecting a portion of the projection images from the original projection-image data set according to a preset ratio; adding noise to the selected projection images, taking each noise-added projection image as a low-dose projection image and the corresponding original projection image as its gold standard; each low-dose projection image and its gold standard form a training sample, and all training samples form the projection-domain training data set;
establishing a generative adversarial network and training it with the projection-domain training data set;
and after training is finished, extracting the generator of the generative adversarial network as the projection-domain deep convolutional neural network.
The method randomly selects part of the projection images from the original projection-image data set, simulates low-dose projection images by adding noise, and uses the original noise-free projection images as the gold standard, thereby constructing the training data set for the projection-domain deep convolutional neural network.
Further, the low dose cone-beam CT image reconstruction method based on deep learning provided by the present invention further includes:
and (3) denoising the reconstructed image: inputting the cone-beam CT image obtained in the three-dimensional reconstruction step into a trained image-domain deep convolutional neural network, which removes noise and artifacts from the cone-beam CT image and outputs a high-quality cone-beam CT image as the final reconstruction result.
On the basis of the cone-beam CT image obtained by three-dimensional reconstruction, the invention uses the image-domain deep convolutional neural network to remove noise and artifacts from the cone-beam CT image, which can further improve its quality.
Further, the image-domain deep convolutional neural network comprises an encoder, one or more second residual blocks, and a decoder, connected in sequence;
the encoder comprises one or more second units connected in sequence, and each second residual block likewise comprises one or more second units connected in sequence; a second unit consists of a convolutional layer, a normalization layer, and an activation function layer connected in sequence;
the decoder comprises one or more third units connected in sequence; a third unit consists of a deconvolution layer, a normalization layer, and an activation function layer connected in sequence;
the encoder and decoder are symmetric: the inputs of the first, third, and fifth convolutional layers of the encoder are added, via skip connections, to the outputs of the corresponding deconvolution layers of the decoder to form residual structures; within a second residual block, the input of the first convolutional layer is added, via a skip connection, to the normalized output of the second convolutional layer to form a residual structure.
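The skip-connected encoder/decoder layout described above can be sketched with simple stand-ins: stride-2 average pooling in place of the encoder convolutions and nearest-neighbour upsampling in place of the deconvolutions. The depth and the exact layers are illustrative assumptions, not the patent's configuration:

```python
import numpy as np

def down(x):
    """Stride-2 average pooling, a stand-in for an encoder conv layer."""
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def up(x):
    """Nearest-neighbour upsampling, a stand-in for a decoder deconv layer."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def encoder_decoder(x, depth=3):
    """Symmetric encoder/decoder; each encoder layer's input is added back
    to the matching decoder output via a skip connection."""
    skips, y = [], x
    for _ in range(depth):
        skips.append(y)          # remember the input of this encoder layer
        y = down(y)
    # (the patent places the second residual blocks here, at the bottleneck)
    for _ in range(depth):
        y = up(y) + skips.pop()  # skip link: add the matching encoder input
    return y
```

The bookkeeping shows why the symmetry matters: each `up` restores exactly the spatial size that the matching `down` removed, so the skip addition is shape-compatible at every level.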
In the invention, the image-domain deep convolutional neural network has a residual structure, which allows a deeper convolutional network, improving output image quality and accelerating training.
Further, the training method of the image-domain deep convolutional neural network comprises:
constructing an image-domain training data set, in which each sample consists of two images: a cone-beam CT image obtained by the three-dimensional reconstruction step, used as the input of the image-domain deep convolutional neural network, and a normal-dose cone-beam CT image, used as the reference for the network's output;
and establishing the image-domain deep convolutional neural network and training it with the image-domain training data set, the trained network being obtained when training is finished.
When the image-domain deep convolutional neural network is trained, the training data set pairs cone-beam CT images reconstructed from projection images denoised by the projection-domain deep convolutional neural network with normal-dose cone-beam CT images, so that the trained image-domain network can effectively remove the noise and artifacts that remain after projection-domain denoising.
Further, in the three-dimensional reconstruction step, the obtained high-quality projection images are reconstructed in three dimensions using the FDK algorithm.
As an analytical algorithm, the FDK (Feldkamp-Davis-Kress) algorithm reconstructs quickly and usually gives good image quality at normal dose, but at low dose its reconstructed images suffer from severe noise and artifacts. Because the invention uses the projection-domain deep convolutional neural network to remove noise and artifacts from the projection images obtained from the low-dose cone-beam CT projection data, yielding high-quality projection images, reconstructing these images with the FDK algorithm preserves reconstruction quality while effectively improving reconstruction speed.
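The core filtering step of FDK-style filtered backprojection can be sketched as follows: a Ram-Lak (|frequency|) filter applied to each detector row in the frequency domain. The cone-angle pre-weighting and the backprojection of the full FDK algorithm are omitted, so this is an illustration rather than the patent's implementation:

```python
import numpy as np

def ramp_filter_rows(projection):
    """Apply a Ram-Lak (|frequency|) filter to each detector row.

    Only the filtering step of filtered backprojection is shown; the
    cosine weighting and backprojection of full FDK are omitted.
    """
    n = projection.shape[-1]
    freqs = np.fft.fftfreq(n)     # cycles per sample
    ramp = np.abs(freqs)          # Ram-Lak response, zero at DC
    spectrum = np.fft.fft(projection, axis=-1)
    return np.real(np.fft.ifft(spectrum * ramp, axis=-1))
```

Because the ramp response is zero at DC, a constant row filters to (numerically) zero, which is the behaviour that removes the low-frequency blur of plain backprojection.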
According to another aspect of the present invention, there is provided a computer readable storage medium comprising a stored computer program;
when the computer program is executed by a processor, the device on which the computer-readable storage medium resides is controlled to execute the deep-learning-based low-dose cone-beam CT image reconstruction method provided by the present invention.
Generally, by the above technical solution conceived by the present invention, the following beneficial effects can be obtained:
(1) After the low-dose cone-beam CT raw projection data are transformed into projection images, the projection-domain deep convolutional neural network predicts the noise distribution in each projection image, and this noise is subtracted from the image; noise and artifacts caused by the low dose are thus removed while the original structure of the projection image is preserved, effectively improving projection-image quality. Because the three-dimensional reconstruction is performed on the denoised projection images, the quality of the reconstructed cone-beam CT image is also effectively improved.
(2) Because no time-consuming iterative process is required during image reconstruction, the reconstruction speed is high.
(3) On the basis of the cone-beam CT image obtained by three-dimensional reconstruction, the invention uses the image-domain deep convolutional neural network to remove noise and artifacts from the cone-beam CT image, further improving its quality.
Drawings
FIG. 1 is a flowchart of a method for reconstructing a low-dose cone-beam CT image based on deep learning according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a low dose cone-beam CT image reconstruction method based on deep learning according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the generative adversarial network provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of an image domain deep convolutional neural network structure provided in an embodiment of the present invention;
fig. 5 is a schematic diagram of the reconstruction effects of different reconstruction methods according to an embodiment of the present invention: (a) cone-beam CT image reconstructed from normal-dose cone-beam CT projection data; (b) cone-beam CT image reconstructed from low-dose cone-beam CT projection data; (c) reconstruction with SIRT; (d) reconstruction with CGLS; (e) reconstruction with RED-CNN; (f) reconstruction with MAP-NN; (g) reconstruction with embodiment 2 of the invention; (h) reconstruction with embodiment 1 of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
In the present application, the terms "first," "second," and the like (if any) in the description and the drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Aiming at the technical problem that existing low-dose cone-beam CT image reconstruction methods cannot remove noise while preserving image structure, so that reconstruction quality remains low, the invention provides a deep-learning-based low-dose cone-beam CT image reconstruction method. Its overall idea is as follows: after the low-dose cone-beam CT raw projection data are transformed into projection images, a projection-domain deep convolutional neural network predicts the noise distribution in each projection image, and this noise is subtracted from the image to improve its quality; three-dimensional reconstruction is then performed on the high-quality projection images to obtain a cone-beam CT image. In this way, noise and artifacts in the projection images are removed while their original structure is preserved, improving the quality of the reconstructed cone-beam CT image. On this basis, an image-domain deep convolutional neural network further removes noise and artifacts from the reconstructed cone-beam CT image, further improving reconstruction quality. The following are examples:
example 1:
a low dose cone-beam CT image reconstruction method based on deep learning, as shown in fig. 1 and 2, comprising:
a projection transformation step:
transforming the low-dose cone-beam CT original projection data into a plurality of projection images;
in this embodiment, the low-dose cone-beam CT raw projection data to be processed may be obtained by sparse acquisition or by reducing the X-ray intensity; the raw projection data are transformed into projection images using the Beer-Lambert law;
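The Beer-Lambert transform mentioned here can be sketched as follows. Normalizing the detected photon counts by a flat-field (unattenuated) intensity I0 is a common convention assumed for illustration, not a detail quoted from the patent:

```python
import numpy as np

def counts_to_projection(counts, flat_field):
    """Beer-Lambert transform: line integral P = -ln(I / I0).

    `counts` are detected photon numbers (the raw projection data) and
    `flat_field` is the unattenuated reference intensity I0.
    """
    counts = np.clip(counts, 1, None)   # guard against log(0) at dead pixels
    return -np.log(counts / flat_field)
```

Pixels that detected the full flat-field intensity map to a line integral of zero, and stronger attenuation maps to larger projection values, matching the "beam intensity loss image" terminology used later in the text.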
a projected image denoising step:
inputting the projection image to be processed into a trained projection-domain deep convolutional neural network; the network predicts the noise distribution in the projection image, subtracts the predicted noise from the image, and outputs a high-quality projection image;
three-dimensional reconstruction:
performing the projection-image denoising step on each of the projection images obtained in the projection transformation step to obtain the corresponding high-quality projection images, and then performing three-dimensional reconstruction on these high-quality projection images to obtain a cone-beam CT image.
As an optional implementation, in this embodiment the projection-domain deep convolutional neural network is used during training as the generator of a generative adversarial network (GAN);
the generator of the generative adversarial network predicts the noise distribution of the input image and subtracts it from the input image to obtain a high-quality image; the discriminator measures the difference between the high-quality image output by the generator and the corresponding gold standard and feeds this difference back to the generator to update it;
the structure of the generative adversarial network is shown in fig. 3, in which the Generator and Discriminator are marked; the generator comprises first residual blocks connected in sequence, all with the same structure;
each first residual block comprises one or more first units connected in sequence, where a first unit consists of a convolutional layer (Conv), a normalization layer (BN), and an activation function layer (ReLU) connected in sequence; the input of the first residual block is added, via a skip connection, to the output of the last activation function layer to form the block's output, giving a residual network;
as shown in fig. 3, the discriminator of the generative adversarial network comprises 6 units, each consisting of a convolutional layer, a normalization layer, and an activation function layer connected in sequence; the activation function of the last unit is a sigmoid, while the activation function layers of all other units use LeakyReLU;
based on the generative adversarial network of fig. 3, real X-ray projection images of 10 walnuts were selected to construct the training data set for the projection-domain deep convolutional neural network. The walnut X-ray raw projection data, stored in TIFF (Tag Image File Format) files in which each pixel value represents the photon count detected by the corresponding detector element, were first read into memory with the python open-source toolkit NumPy. The raw projection data were then converted into beam-intensity-loss images I, also referred to as projection images, using the Beer-Lambert law. For each walnut, 200 projection images were randomly selected, giving 2000 projection images in total, which form the original projection-image data set;
based on this original projection-image data set, the training method of the projection-domain deep convolutional neural network in this embodiment comprises:
randomly selecting a portion of the projection images from the original projection-image data set according to a preset ratio; adding noise to the selected projection images, taking each noise-added projection image as a low-dose projection image and the corresponding original projection image as its gold standard; each low-dose projection image and its gold standard form a training sample, and all training samples form the projection-domain training data set;
specifically, the low-dose projection image is simulated by adding Poisson noise to the projection image; the relevant expressions are:

I_ld,sim = Poisson(f · e^(-P))

P_ld,sim = -log(I_ld,sim / f)

where P represents a normal-dose cone-beam CT projection image, f is the low-dose flat-field image (the normal-dose flat field scaled by β/α), α is the X-ray tube current at the normal dose, β is the tube current at the low dose, I_ld,sim is the simulated low-dose incident flux, and P_ld,sim is the simulated low-dose projection image; optionally, in this embodiment, β/α = 1/8, i.e. projection images at one eighth of the dose are simulated;
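Under the same flux model as the expressions above, the low-dose simulation can be sketched as follows. The function name and the use of `flat_field` for the normal-dose flat field are illustrative assumptions:

```python
import numpy as np

def simulate_low_dose(P, flat_field, dose_ratio=1.0 / 8.0, rng=None):
    """Simulate a low-dose projection from a normal-dose one.

    Scale the expected incident flux by the tube-current ratio beta/alpha
    (here 1/8 as in the embodiment), draw Poisson counts, and log-transform
    back to the projection domain.
    """
    rng = np.random.default_rng() if rng is None else rng
    low_dose_flat = dose_ratio * flat_field        # low-dose flat field f
    expected_flux = low_dose_flat * np.exp(-P)     # mean photon count
    I_ld = rng.poisson(expected_flux).astype(float)
    I_ld = np.clip(I_ld, 1, None)                  # guard against log(0)
    return -np.log(I_ld / low_dose_flat)           # simulated P_ld
```

At high flux the Poisson noise is relatively small and the simulated projection stays close to the input; lowering `dose_ratio` or `flat_field` increases the relative noise, which is exactly the low-dose degradation the network is trained to remove.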
after the projection-domain training data set is established, the generative adversarial network of fig. 3 is built and trained on this data set. During training, a low-dose projection image is input to the generator, which predicts its noise distribution; the predicted noise is subtracted from the low-dose projection image to obtain a denoised projection image. The discriminator judges the difference between the generated image and the normal-dose projection image and feeds it back to the generator, updating the generator so that it produces higher-quality projection images; the discriminator is updated correspondingly. Generator and discriminator are thus updated alternately until high-quality projection images are generated. In view of limited computing power, this embodiment optionally sets the batch_size during training to 1;
in this embodiment, the difference between the image generated by the generative adversarial network and the real projection image is measured by a supervised loss function:

L(Θ) = λ1·L_adv(Θ) + λ2·L_PSNR(Θ) + λ3·L_SSIM(Θ)

L_adv represents the adversarial loss function, specifically the Wasserstein objective:

L_adv = min_G max_D E[D(P_nd)] - E[D(G(P_ld))]

L_PSNR represents the peak signal-to-noise-ratio loss function, which penalizes a low PSNR:

L_PSNR = -10·log10(max_y² / MSE)

MSE represents the mean square error function, specifically:

MSE = (1/(M·N)) · Σ_{i=1..M} Σ_{j=1..N} (P_g(i,j) - P_nd(i,j))²

L_SSIM represents the structural similarity loss function, specifically:

L_SSIM = 1 - ((2·μ1·μ2 + C1)·(2·σ_{1,2} + C2)) / ((μ1² + μ2² + C1)·(σ1² + σ2² + C2))

where Θ denotes the weights of the generative adversarial network; λ1, λ2 and λ3 are balance constants; G and D denote the generator and discriminator of the generative adversarial network; P_ld and P_nd denote a low-dose projection image and the corresponding normal-dose projection image; E denotes expectation, used here to express the Wasserstein distance; max_y denotes the maximum pixel value of the normal-dose image; P_g denotes the image generated by the generator; M and N denote the height and width of the image, and (i, j) the pixel coordinates; μ1 and μ2 denote the pixel means of the generated image and the reference image; σ1, σ2 and σ_{1,2} denote their standard deviations and the covariance of their pixel values; C1 and C2 are hyperparameters;
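The pixel-fidelity terms of the supervised loss can be sketched in NumPy as follows. A global single-window SSIM is used for brevity (the usual local-window averaging is omitted), and the constants `c1`/`c2` are illustrative values for the hyperparameters C1 and C2:

```python
import numpy as np

def mse(x, y):
    """Mean square error between two images."""
    return float(np.mean((x - y) ** 2))

def psnr(x, y, max_y):
    """Peak signal-to-noise ratio in dB; the loss penalizes low PSNR."""
    return 10.0 * np.log10(max_y ** 2 / mse(x, y))

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window structural similarity; loss would be 1 - SSIM."""
    mu1, mu2 = x.mean(), y.mean()
    s1, s2 = x.var(), y.var()
    s12 = ((x - mu1) * (y - mu2)).mean()
    return ((2 * mu1 * mu2 + c1) * (2 * s12 + c2)) / \
           ((mu1 ** 2 + mu2 ** 2 + c1) * (s1 + s2 + c2))
```

An identical image pair gives SSIM of 1 (so a loss of 0), while PSNR grows as the MSE between generated and reference images shrinks; the adversarial term, by contrast, requires the discriminator and is not reproduced here.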
in order to train the generative adversarial network effectively, this embodiment adopts RMSprop (root mean square propagation) as the optimizer, with a learning rate of 0.0002 and 30 training epochs; the whole training process runs on the TensorFlow framework;
after training is finished, the model of the last iteration is stored, and the generator of the generative adversarial network is extracted as the projection domain deep convolutional neural network;
it should be noted that the structure of the generator and the discriminator in the generative adversarial network is only an optional embodiment and should not be construed as the only limitation of the present invention; in other embodiments of the present invention, the corresponding structure may be adjusted according to the actual situation; similarly, completing the training of the projection domain deep convolutional neural network by means of a generative adversarial network is also only an optional embodiment of the present invention; in some other embodiments, the projection domain deep convolutional neural network may be trained without a generative adversarial network, as long as the trained network removes noise from the projection images with an accuracy that meets the requirements of the practical application.
In order to further improve the quality of the reconstructed cone-beam CT image, the present embodiment further includes:
a reconstructed image denoising step: inputting the cone-beam CT image obtained in the three-dimensional reconstruction step into a trained image domain deep convolutional neural network, which eliminates noise and artifacts in the cone-beam CT image and outputs a high-quality cone-beam CT image, taken as the final reconstruction result;
optionally, in this embodiment, the structure of the image domain deep convolutional neural network is shown in fig. 4 and includes: an encoder, six second residual blocks and a decoder connected in sequence; the encoder extracts feature information from the input image and passes it to the six second residual blocks, which further eliminate the noise and artifacts of the input image; the decoder recovers a high-quality cone-beam CT image using the feature information provided by the second residual blocks;
as shown in fig. 4, the encoder includes 5 second units connected in sequence, and each second residual block includes 2 second units connected in sequence; the second unit comprises a convolution layer (Conv), a normalization layer (BN) and an activation function layer (ReLU) connected in sequence;
the decoder comprises one or more third units which are connected in sequence; the third unit comprises a deconvolution layer (Deconv), a normalization layer (BN) and an activation function layer (ReLU) which are connected in sequence;
the encoder and the decoder adopt a symmetrical structure; the inputs of the first, third and fifth convolutional layers of the encoder are added, through skip connections, to the outputs of the corresponding deconvolution layers in the decoder to form residual structures; the input of the first convolutional layer of the second residual block is added, through a skip connection, to the normalized output of the second convolutional layer to form a residual structure;
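The symmetric skip-connection pattern above can be sketched with simple pooling and upsampling stand-ins for the convolution and deconvolution layers; the layer count and the operations here are illustrative simplifications, not the patent's exact architecture:

```python
import numpy as np

def avg_pool(x):
    # 2x2 average pooling, standing in for a strided encoder convolution.
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # Nearest-neighbour upsampling, standing in for a deconvolution layer.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def encoder_decoder(x):
    # Symmetric encoder/decoder: the input of each encoder stage is added,
    # via a skip connection, to the output of the matching decoder stage,
    # forming residual structures around the bottleneck.
    skip1 = x
    e1 = avg_pool(x)
    skip2 = e1
    e2 = avg_pool(e1)            # bottleneck (residual blocks would act here)
    d2 = upsample(e2) + skip2    # skip connection -> residual structure
    d1 = upsample(d2) + skip1
    return d1
```

The skip additions let the decoder recover fine detail that pooling discards, which is the stated purpose of the residual structure in this embodiment.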
in this embodiment, the image domain deep convolutional neural network adopts a residual structure, which allows the convolutional neural network to be deeper, thereby improving the quality of the output image and accelerating the training of the convolutional neural network;
in this embodiment, the training method for the image domain deep convolutional neural network includes:
constructing an image domain training data set; in the image domain training data set, each sample consists of two images: one is a cone-beam CT image obtained by the three-dimensional reconstruction step, i.e. a cone-beam CT image reconstructed after the quality of the projection images has been improved by the projection domain deep convolutional neural network, serving as the input of the image domain deep convolutional neural network; the other is a normal-dose cone-beam CT image, serving as the reference image for the output of the image domain deep convolutional neural network;
establishing an image domain deep convolutional neural network, and training the image domain deep convolutional neural network by using an image domain training data set, so that a trained image domain deep convolutional neural network is obtained after training is finished;
optionally, in this embodiment, in the process of training the image domain deep convolutional neural network, the batch_size is set to 2, and a traditional mean square error (MSE) loss function describes the difference between the output image and the normal-dose cone-beam image, with the expression:

$$L(\Omega) = \frac{1}{M}\sum_{m=1}^{M}\big\| F(I_{m,1};\Omega) - I_{m,2} \big\|_2^2$$

where Ω represents the weights of the image domain deep convolutional neural network, η is the learning rate, F is the mapping function of the image domain deep convolutional neural network, I_{m,1} represents the cone-beam CT image generated with the aid of the generative adversarial network, I_{m,2} is the normal-dose cone-beam CT image, ‖·‖₂ denotes the 2-norm, and M is the batch size.
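The MSE training objective above can be sketched with a toy gradient-descent loop; the linear map standing in for F(·; Ω), the batch data and the learning rate are all illustrative assumptions:

```python
import numpy as np

def mse_loss(pred, target):
    # (1/M) * sum over the batch of squared 2-norms, as in the formula above.
    return np.mean(np.sum((pred - target) ** 2, axis=1))

# Toy stand-in for F(.; Omega): a scalar weight playing the role of the
# network parameters Omega, fitted to a batch of M = 4 flattened images.
rng = np.random.default_rng(1)
x = rng.random((4, 16))         # batch of "reconstructed" images, flattened
target = 2.0 * x                # hypothetical normal-dose counterparts
omega = 0.5
eta = 0.01                      # learning rate (eta in the text)
for _ in range(200):
    pred = omega * x
    grad = np.mean(np.sum(2.0 * (pred - target) * x, axis=1))  # dL/dOmega
    omega -= eta * grad         # gradient-descent update of the weights
```

In the real method the scalar update is replaced by backpropagation through the image-domain network, but the loss and update rule have the same shape.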
In order to ensure the quality of the reconstructed image and improve the reconstruction speed, as a preferred embodiment, in the three-dimensional reconstruction step of this embodiment, the three-dimensional reconstruction is performed on the obtained high-quality projection image by using the FDK algorithm;
the FDK (Feldkamp-Davis-Kress) algorithm, as an analytical algorithm, has the advantage of high reconstruction speed and usually yields good image quality at normal dose, but its reconstructed images suffer from severe noise and artifacts at low dose; in this embodiment, the projection domain deep convolutional neural network removes the noise and artifacts from the projection images obtained by transforming the low-dose cone-beam CT projection data, yielding high-quality projection images, so reconstructing these projection images with the FDK algorithm effectively improves the reconstruction speed while ensuring the reconstruction quality; it should be noted that the FDK algorithm is only a preferred embodiment of the present invention, and other algorithms capable of reconstructing cone-beam CT images from projection images are also applicable to the present invention.
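Two of the FDK algorithm's building blocks can be sketched in isolation: the cosine pre-weighting of the cone-beam detector and the ramp filtering of each detector row (the full backprojection step is omitted; geometry parameters here are illustrative assumptions):

```python
import numpy as np

def ramp_filter(projection_row):
    # Frequency-domain ramp (|f|) filtering of one detector row -- the
    # filtering step shared by FBP/FDK-type analytical algorithms.
    n = projection_row.shape[-1]
    freqs = np.fft.fftfreq(n)
    return np.real(np.fft.ifft(np.fft.fft(projection_row) * np.abs(freqs)))

def fdk_weight(n_u, n_v, d_sd, du=1.0, dv=1.0):
    # FDK cosine pre-weighting D / sqrt(D^2 + u^2 + v^2), with D the
    # source-to-detector distance and (u, v) detector coordinates
    # measured from the detector centre.
    u = (np.arange(n_u) - n_u / 2 + 0.5) * du
    v = (np.arange(n_v) - n_v / 2 + 0.5) * dv
    uu, vv = np.meshgrid(u, v, indexing="ij")
    return d_sd / np.sqrt(d_sd ** 2 + uu ** 2 + vv ** 2)
```

Each weighted, filtered projection is then backprojected along its cone-beam rays; because every projection is processed independently, the method's projection-domain denoising slots in naturally before this step.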
Example 2:
a low-dose cone-beam CT image reconstruction method based on deep learning, similar to embodiment 1, differing only in that this embodiment does not include the reconstructed image denoising step; the cone-beam CT image obtained in the three-dimensional reconstruction step is directly used as the reconstruction result;
the specific implementation of this embodiment can refer to the description of embodiment 1, and will not be repeated here.
Example 3:
a computer readable storage medium comprising a stored computer program;
when the computer program is executed by the processor, the apparatus in which the computer readable storage medium is located is controlled to execute the method for reconstructing a low-dose cone-beam CT image based on deep learning provided in embodiment 1 or embodiment 2.
To further illustrate the effectiveness and reliability of the present invention, embodiments 1 and 2 above are compared with existing methods for improving low-dose cone-beam CT images, including: the CGLS (Conjugate Gradient Least Squares) algorithm, SIRT (Simultaneous Iterative Reconstruction Technique), RED-CNN (Residual Encoder-Decoder Convolutional Neural Network) and MAP-NN (Modularized Adaptive Processing Neural Network). CGLS is a traditional analytical algorithm that solves the least squares problem of an underdetermined system by the conjugate gradient method; SIRT is a classical iterative algorithm that obtains the optimal solution by iteratively minimizing an objective function; RED-CNN and MAP-NN apply deep learning in the CT image domain and currently achieve good results on quarter-dose fan-beam CT images.
The reconstruction results of the various methods on cone-beam CT projection data of the same object are shown in (a) to (h) in fig. 5. As can be seen from fig. 5, the image reconstructed by the FDK algorithm contains severe noise and artifacts that make the structural information difficult to distinguish; SIRT eliminates noise and artifacts to some extent but causes image distortion; the CGLS algorithm suppresses part of the noise and artifacts, but the residue still adheres to the image structure, so part of the structural information is lost; RED-CNN and MAP-NN, as deep learning post-processing techniques, effectively eliminate the noise and artifacts of the reconstructed image, but at a considerable sacrifice of image structure. Embodiment 2 improves the quality of the projection images with the generative adversarial network in the projection domain, and the corresponding reconstructed image largely retains the structural information; embodiment 1 further adopts an image domain deep learning post-processing technique, which further eliminates the noise and artifacts of the image of embodiment 2 and improves the contrast of the reconstructed image. In conclusion, the method of embodiment 1 achieves better results than the other algorithms in terms of both noise suppression and structure retention.
In order to quantitatively compare the performance of the various methods in improving low-dose cone-beam CT image quality, three standards are adopted: PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity) and RMSE (Root Mean Square Error). The SSIM value measures how well the reconstructed image preserves structure relative to the normal-dose reconstruction: the larger the SSIM, the better the structure is retained. PSNR and RMSE measure the visual difference between the reconstructed image and the normal-dose image: the larger the PSNR and the smaller the RMSE, the smaller the visual difference between the two images. Specifically, the quantization values of 200 projection images are counted, and the quantitative performance comparison of the different methods on the low-dose cone-beam CT task is shown in table one.
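PSNR and RMSE as used in this comparison can be computed as follows (SSIM follows the standard windowed formula and is omitted for brevity); `data_range`, standing in for the maximum pixel value, is an assumed parameter name:

```python
import numpy as np

def rmse(x, y):
    # Root mean square error: smaller means the images are closer.
    return np.sqrt(np.mean((x - y) ** 2))

def psnr(x, y, data_range):
    # Peak signal-to-noise ratio in dB: larger PSNR (and smaller RMSE)
    # means the reconstruction is closer to the normal-dose reference.
    return 20.0 * np.log10(data_range / rmse(x, y))
```

For example, a uniform error of 0.1 on images with a data range of 1.0 gives an RMSE of 0.1 and a PSNR of 20 dB.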
Table one: performance comparison of different methods on the low-dose cone-beam CT image reconstruction task
(Table one is rendered as an image in the original publication; it lists the PSNR, SSIM and RMSE values of each compared method.)
As can be seen from table one, the image reconstructed by the method of this embodiment obtains the highest peak signal-to-noise ratio (PSNR) and the highest structural similarity (SSIM), indicating that the low-dose reconstructed image of this embodiment is visually closest to the normal-dose reconstruction and retains the structural information to the greatest extent. Meanwhile, the RMSE of the low-dose reconstructed image of this embodiment relative to the normal-dose reconstruction is the smallest, i.e. the average pixel difference is the smallest, indicating that the reconstruction accuracy of the method of this embodiment is the highest.
In general, the method transforms the low-dose cone-beam CT raw projection data into projection images, predicts the noise distribution in the projection images with a projection domain deep convolutional neural network, and subtracts the predicted noise to obtain high-quality projection images; three-dimensional reconstruction based on these high-quality projection images yields the cone-beam CT image. The method removes the noise and artifacts caused by the low dose while retaining the original structure of the projection images, effectively improving the quality of the cone-beam CT image. On this basis, eliminating the noise and artifacts remaining in the cone-beam CT image with the image domain deep convolutional neural network further improves the quality of the cone-beam CT image.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (9)

1. A low-dose cone-beam CT image reconstruction method based on deep learning is characterized by comprising the following steps:
a projection transformation step: transforming the low-dose cone-beam CT original projection data into a plurality of projection images;
a projection image denoising step: inputting a projection image to be processed into a trained projection domain deep convolutional neural network, the projection domain deep convolutional neural network predicting the noise distribution in the projection image to be processed, subtracting the predicted noise distribution from the projection image to be processed, and outputting a high-quality projection image;
a three-dimensional reconstruction step: executing the projection image denoising step on each of the plurality of projection images obtained in the projection transformation step to obtain corresponding high-quality projection images, and then performing three-dimensional reconstruction on the obtained high-quality projection images to obtain a cone-beam CT image.
2. The deep learning-based low-dose cone-beam CT image reconstruction method according to claim 1, wherein the projection domain deep convolutional neural network comprises one or more first residual blocks connected in sequence, and the first residual blocks have the same structure;
the first residual block comprises one or more first units connected in sequence, each first unit comprising a convolution layer, a normalization layer and an activation function layer connected in sequence; the input of the first residual block is summed with the output of the activation function layer through a skip connection to serve as the output of the first residual block, thereby forming a residual network.
3. The deep learning-based low-dose cone-beam CT image reconstruction method as claimed in claim 1 or 2, characterized in that the projection domain deep convolution neural network is used as a generator for generating a countermeasure network in the training process;
the generator in the generation countermeasure network is used for predicting the noise distribution of the input image and subtracting the noise distribution from the input image to obtain a high-quality image; and the discriminator in the generation countermeasure network is used for discriminating the difference between the high-quality image output by the generator and the corresponding gold standard and feeding back the high-quality image to the generator so as to update the generator.
4. The deep learning-based low-dose cone-beam CT image reconstruction method according to claim 3, wherein the training method of the projection domain deep convolutional neural network comprises the following steps:
randomly selecting a part of projection images from an original projection image data set according to a preset proportion; adding noise to the selected projection image, taking the projection image after noise addition as a low-dose projection image, taking the projection image before noise addition as a corresponding gold standard, forming a training sample by the low-dose projection image and the corresponding gold standard, and forming a projection domain training data set by all the training samples;
establishing the generation countermeasure network, and training the generation countermeasure network by using the projection domain training data set;
and after training is finished, extracting a generator in the generation countermeasure network as the projection domain depth convolution neural network.
5. The deep learning-based low-dose cone-beam CT image reconstruction method of claim 1, further comprising:
a reconstructed image denoising step: inputting the cone-beam CT image obtained in the three-dimensional reconstruction step into a trained image domain deep convolutional neural network, the image domain deep convolutional neural network eliminating noise and artifacts in the cone-beam CT image and outputting a high-quality cone-beam CT image, which is taken as the final reconstruction result.
6. The method of deep learning based low dose cone-beam CT image reconstruction according to claim 5, wherein the image domain depth convolution neural network comprises: an encoder, one or more second residual blocks, and a decoder connected in sequence;
the encoder comprises one or more second units which are connected in sequence, and the second residual block comprises one or more second units which are connected in sequence; the second unit comprises a convolution layer, a normalization layer and an activation function layer which are connected in sequence;
the decoder comprises one or more third units which are connected in sequence; the third unit comprises a deconvolution layer, a normalization layer and an activation function layer which are connected in sequence;
the encoder and the decoder adopt a symmetrical structure, and the inputs of the first, third and fifth convolutional layers of the encoder are added, through skip connections, to the outputs of the corresponding deconvolution layers in the decoder to form residual structures; the input of the first convolutional layer of the second residual block is added, through a skip connection, to the normalized output of the second convolutional layer to form a residual structure.
7. The deep learning-based low-dose cone-beam CT image reconstruction method according to claim 5 or 6, wherein the training method of the image domain deep convolutional neural network comprises the following steps:
constructing an image domain training data set; in the image domain training data set, each sample consists of two images, wherein one image is a cone beam CT image obtained by the three-dimensional reconstruction step and is used as the input of the image domain depth convolution neural network, and the other image is a cone beam CT image with normal dosage and is used as a reference image of the image domain depth convolution neural output;
and establishing the image domain deep convolutional neural network, and training the image domain deep convolutional neural network by using the image domain training data set, so that the trained image domain deep convolutional neural network is obtained after training is finished.
8. The method for reconstructing a low-dose cone-beam CT image based on deep learning of claim 1, wherein in the step of three-dimensional reconstruction, the three-dimensional reconstruction is performed on the obtained high-quality projection image by using FDK algorithm.
9. A computer-readable storage medium comprising a stored computer program;
the computer program, when executed by a processor, controls an apparatus in which the computer-readable storage medium is located to perform the method for reconstructing a low-dose cone-beam CT image based on deep learning according to any one of claims 1 to 8.
CN202011371624.6A 2020-11-30 2020-11-30 Low-dose cone-beam CT image reconstruction method based on deep learning Active CN112348936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011371624.6A CN112348936B (en) 2020-11-30 2020-11-30 Low-dose cone-beam CT image reconstruction method based on deep learning


Publications (2)

Publication Number Publication Date
CN112348936A true CN112348936A (en) 2021-02-09
CN112348936B CN112348936B (en) 2023-04-18

Family

ID=74365107

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011371624.6A Active CN112348936B (en) 2020-11-30 2020-11-30 Low-dose cone-beam CT image reconstruction method based on deep learning

Country Status (1)

Country Link
CN (1) CN112348936B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052935A (en) * 2021-03-23 2021-06-29 大连理工大学 Single-view CT reconstruction method for progressive learning
CN113052936A (en) * 2021-03-30 2021-06-29 大连理工大学 Single-view CT reconstruction method integrating FDK and deep learning
CN113436112A (en) * 2021-07-21 2021-09-24 杭州海康威视数字技术股份有限公司 Image enhancement method, device and equipment
CN114241074A (en) * 2021-12-20 2022-03-25 四川大学 CBCT image reconstruction method for deep learning and electronic noise simulation
CN114757928A (en) * 2022-04-25 2022-07-15 东南大学 One-step dual-energy finite angle CT reconstruction method based on deep training network
CN114998466A (en) * 2022-05-31 2022-09-02 华中科技大学 Low-dose cone-beam CT reconstruction method based on attention mechanism and deep learning
CN115049753A (en) * 2022-05-13 2022-09-13 沈阳铸造研究所有限公司 Cone beam CT artifact correction method based on unsupervised deep learning
CN115187470A (en) * 2022-06-10 2022-10-14 成都飞机工业(集团)有限责任公司 Double-domain iterative noise reduction method based on 3D printing inner cavity
CN117152365A (en) * 2023-10-31 2023-12-01 中日友好医院(中日友好临床医学研究所) Method, system and device for oral cavity CBCT ultra-low dose imaging
CN117409100A (en) * 2023-12-15 2024-01-16 山东师范大学 CBCT image artifact correction system and method based on convolutional neural network
CN117830456A (en) * 2024-03-04 2024-04-05 中国科学技术大学 Method and device for correcting image metal artifact and electronic equipment

Citations (10)

Publication number Priority date Publication date Assignee Title
CN105023282A (en) * 2014-04-30 2015-11-04 华中科技大学 Sparse projection ultrasonic CT image reconstruction method based on CS
WO2017223560A1 (en) * 2016-06-24 2017-12-28 Rensselaer Polytechnic Institute Tomographic image reconstruction via machine learning
CN108961237A (en) * 2018-06-28 2018-12-07 安徽工程大学 A kind of low-dose CT picture breakdown method based on convolutional neural networks
CN109559359A (en) * 2018-09-27 2019-04-02 东南大学 Artifact minimizing technology based on the sparse angular data reconstruction image that deep learning is realized
CN110660123A (en) * 2018-06-29 2020-01-07 清华大学 Three-dimensional CT image reconstruction method and device based on neural network and storage medium
US20200118306A1 (en) * 2018-10-12 2020-04-16 Korea Advanced Institute Of Science And Technology Method for processing unmatched low-dose x-ray computed tomography image using neural network and apparatus therefor
CN111047524A (en) * 2019-11-13 2020-04-21 浙江工业大学 Low-dose CT lung image denoising method based on deep convolutional neural network
CN111696166A (en) * 2020-06-10 2020-09-22 浙江大学 FDK (finite Difference K) type preprocessing matrix-based circumferential cone beam CT (computed tomography) fast iterative reconstruction method
US20200311914A1 (en) * 2017-04-25 2020-10-01 The Board Of Trustees Of Leland Stanford University Dose reduction for medical imaging using deep convolutional neural networks
CN111899188A (en) * 2020-07-08 2020-11-06 西北工业大学 Neural network learning cone beam CT noise estimation and suppression method


Non-Patent Citations (3)

Title
HENRI DER SARKISSIAN: "A cone-beam X-ray computed tomography data collection designed for machine learning", 《SCIENTIFIC DATA》 *
JELMER M.WOLTERRINK ET AL.: "Generative Adversarial Networks for Noise Reduction in Low-Dose CT", 《IEEE TRANSACTIONS ON MEDICAL IMAGING》 *
FANG Wei et al.: "Recent advances in the application of neural networks to CT reconstruction", Chinese Journal of Stereology and Image Analysis *

Cited By (17)

Publication number Priority date Publication date Assignee Title
CN113052935A (en) * 2021-03-23 2021-06-29 大连理工大学 Single-view CT reconstruction method for progressive learning
CN113052936A (en) * 2021-03-30 2021-06-29 大连理工大学 Single-view CT reconstruction method integrating FDK and deep learning
CN113436112A (en) * 2021-07-21 2021-09-24 杭州海康威视数字技术股份有限公司 Image enhancement method, device and equipment
CN113436112B (en) * 2021-07-21 2022-08-26 杭州海康威视数字技术股份有限公司 Image enhancement method, device and equipment
CN114241074A (en) * 2021-12-20 2022-03-25 四川大学 CBCT image reconstruction method for deep learning and electronic noise simulation
CN114241074B (en) * 2021-12-20 2023-04-21 四川大学 CBCT image reconstruction method for deep learning and electronic noise simulation
CN114757928A (en) * 2022-04-25 2022-07-15 东南大学 One-step dual-energy finite angle CT reconstruction method based on deep training network
CN115049753B (en) * 2022-05-13 2024-05-10 沈阳铸造研究所有限公司 Cone beam CT artifact correction method based on unsupervised deep learning
CN115049753A (en) * 2022-05-13 2022-09-13 沈阳铸造研究所有限公司 Cone beam CT artifact correction method based on unsupervised deep learning
CN114998466A (en) * 2022-05-31 2022-09-02 华中科技大学 Low-dose cone-beam CT reconstruction method based on attention mechanism and deep learning
CN114998466B (en) * 2022-05-31 2024-07-05 华中科技大学 Low-dose cone beam CT reconstruction method based on attention mechanism and deep learning
CN115187470A (en) * 2022-06-10 2022-10-14 成都飞机工业(集团)有限责任公司 Double-domain iterative noise reduction method based on 3D printing inner cavity
CN117152365B (en) * 2023-10-31 2024-02-02 中日友好医院(中日友好临床医学研究所) Method, system and device for oral cavity CBCT ultra-low dose imaging
CN117152365A (en) * 2023-10-31 2023-12-01 中日友好医院(中日友好临床医学研究所) Method, system and device for oral cavity CBCT ultra-low dose imaging
CN117409100A (en) * 2023-12-15 2024-01-16 山东师范大学 CBCT image artifact correction system and method based on convolutional neural network
CN117830456A (en) * 2024-03-04 2024-04-05 中国科学技术大学 Method and device for correcting image metal artifact and electronic equipment
CN117830456B (en) * 2024-03-04 2024-05-28 中国科学技术大学 Method and device for correcting image metal artifact and electronic equipment

Also Published As

Publication number Publication date
CN112348936B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN112348936B (en) Low-dose cone-beam CT image reconstruction method based on deep learning
CN109146988B (en) Incomplete projection CT image reconstruction method based on VAEGAN
CN112396672B (en) Sparse angle cone-beam CT image reconstruction method based on deep learning
CN110570492B (en) CT artifact suppression method, device and medium based on neural network
CN111325686B (en) Low-dose PET three-dimensional reconstruction method based on deep learning
CN108961237B (en) Low-dose CT image decomposition method based on convolutional neural network
CN108898642A (en) A kind of sparse angular CT imaging method based on convolutional neural networks
CN112258415A (en) Chest X-ray film super-resolution and denoising method based on generation countermeasure network
WO2020118830A1 (en) Dictionary training and image super-resolution reconstruction method, system and device, and storage medium
CN116645283A (en) Low-dose CT image denoising method based on self-supervision perceptual loss multi-scale convolutional neural network
CN102013108A (en) Regional spatial-temporal prior-based dynamic PET reconstruction method
Chan et al. An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction
CN117152365B (en) Method, system and device for oral cavity CBCT ultra-low dose imaging
CN117876261A (en) CBCT scattering correction imaging method based on deep learning
Xue et al. PET Synthesis via Self-supervised Adaptive Residual Estimation Generative Adversarial Network
CN111080736B (en) Low-dose CT image reconstruction method based on sparse transformation
CN116245969A (en) Low-dose PET image reconstruction method based on deep neural network
Kim et al. CNN-based CT denoising with an accurate image domain noise insertion technique
CN110176045A (en) A method of dual-energy CT image is generated by single energy CT image
CN115100045A (en) Method and device for converting modality of image data
Liang et al. A model-based deep learning reconstruction for X-Ray CT
CN117726705B (en) Deep learning method for simultaneous low-dose CT reconstruction and metal artifact correction
CN113744149B (en) Deep learning post-processing method for solving problem of over-smoothing of low-dose CT image
CN117409100B (en) CBCT image artifact correction system and method based on convolutional neural network
Zhao et al. Sparse angle CT reconstruction based on neural radial field

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant