CN112348936A - Low-dose cone-beam CT image reconstruction method based on deep learning - Google Patents


Info

Publication number
CN112348936A
Authority
CN
China
Prior art keywords
image
projection
neural network
domain
dose
Prior art date
Legal status
Granted
Application number
CN202011371624.6A
Other languages
Chinese (zh)
Other versions
CN112348936B (en)
Inventor
李强
晁联盈
王燕丽
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN202011371624.6A
Publication of CN112348936A
Application granted
Publication of CN112348936B
Expired - Fee Related
Anticipated expiration

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a deep-learning-based low-dose cone-beam CT image reconstruction method in the field of medical imaging, comprising: transforming low-dose cone-beam CT raw projection data into multiple projection images; inputting each projection image into a trained projection-domain deep convolutional neural network, which predicts the noise distribution in the projection image and subtracts it, outputting a corresponding high-quality projection image; and performing three-dimensional reconstruction on the resulting high-quality projection images to obtain a cone-beam CT image. The method further comprises inputting the reconstructed cone-beam CT image into a trained image-domain deep convolutional neural network, which eliminates noise and artifacts in the cone-beam CT image and outputs a high-quality cone-beam CT image as the final reconstruction result. The invention can improve the quality of the reconstructed cone-beam CT image while reducing the X-ray dose of cone-beam CT.


Description

Low-dose cone-beam CT image reconstruction method based on deep learning
Technical Field
The invention belongs to the field of medical images, and particularly relates to a low-dose cone-beam CT image reconstruction method based on deep learning.
Background
Cone Beam CT (CBCT) is currently widely used in image-guided surgery, image-guided radiotherapy, three-dimensional dental imaging, and similar fields. Compared with traditional fan-beam CT, cone-beam CT offers higher X-ray utilization and faster scanning. However, in cone-beam CT, as in fan-beam CT, X-ray radiation can harm the patient's health, and studies have shown that excessive X-ray exposure can cause genetic mutation, cancer and other diseases. Low-dose cone-beam CT is therefore of increasing commercial importance. Low-dose cone-beam CT is realized mainly in two ways: first, projection images are acquired sparsely, so that FDK-reconstructed images exhibit streak artifacts; second, the X-ray intensity is reduced, so that the projection images are noisy and the reconstructed images contain noise and artifacts. In both cases, the artifacts and noise introduced into the CBCT reconstruction can mix with tissue information and interfere with the physician's identification and localization of abnormal tissue.
At the same low dose, low-dose cone-beam CT suffers more severe artifacts and noise than conventional fan-beam CT. When a CT image contains only a small amount of noise, image-domain techniques can yield good image quality. However, for CT images with severe noise, where noise may even obscure structural information completely, image-domain techniques tend to sacrifice image structure while removing the noise.
At present, the mainstream methods for high-quality reconstruction of low-dose cone-beam CT fall into projection-domain filtering methods, iterative methods and image-domain methods. Projection-domain filtering methods mainly use known noise distribution information to design a corresponding filter function, improving the quality of the projection images in the projection domain so as to improve the quality of the reconstructed image, such as the filtering step of the FDK algorithm; however, projection-domain filtering tends to lose some structures in the projection image, so the reconstructed image also loses structural information. Iterative methods mainly use the statistical characteristics of the projection data and prior information about the reconstructed image to design a corresponding optimization function, obtaining a high-quality reconstructed image by optimizing this objective iteratively, such as the simultaneous algebraic reconstruction technique; iterative methods are often time consuming and their hyper-parameters need to be tuned empirically. Image-domain methods fall into two categories: traditional methods, such as Block-Matching and 3D filtering (BM3D) and dictionary learning; and image-domain deep learning techniques, such as residual encoder-decoder convolutional neural networks. However, image-domain methods offer little potential for reducing the X-ray dose while still guaranteeing image quality, and neither traditional image-domain methods nor image-domain deep learning can recover image details already lost in the reconstructed image.
Disclosure of Invention
Aiming at the defects and the improvement requirements of the prior art, the invention provides a low-dose cone-beam CT image reconstruction method based on deep learning, and aims to improve the quality of a reconstructed cone-beam CT image while reducing the X-ray dose of cone-beam CT.
To achieve the above object, according to one aspect of the present invention, there is provided a low dose cone-beam CT image reconstruction method based on deep learning, including:
a projection transformation step: transforming the low-dose cone-beam CT original projection data into a plurality of projection images;
a projected image denoising step: inputting a projection image to be processed into a trained projection-domain deep convolutional neural network, which predicts the noise distribution in the projection image and subtracts the predicted noise distribution from the projection image, outputting a high-quality projection image;
three-dimensional reconstruction: and respectively executing a projection image denoising step on the plurality of projection images obtained in the projection transformation step to obtain high-quality projection images corresponding to the projection images, and then performing three-dimensional reconstruction on the obtained high-quality projection images to obtain cone beam CT images.
After the low-dose cone beam CT original projection data are converted into projection images, the noise distribution in the projection images is predicted by using a projection domain depth convolution neural network, and the noise in the projection images is subtracted, so that the noise and artifacts caused by low dose in the projection images can be removed while the original structure of the projection images is kept, and the quality of the projection images is effectively improved; the method carries out three-dimensional reconstruction based on the projected image after denoising, and can effectively improve the quality of the cone beam CT image obtained by reconstruction.
In the process of image reconstruction, the time-consuming iterative process is not required to be executed, so that the reconstruction speed is high.
Further, the projection domain depth convolution neural network comprises one or more first residual blocks which are connected in sequence, and the structures of the first residual blocks are the same;
the first residual block comprises one or more first units connected in sequence; each first unit comprises a convolutional layer, a normalization layer and an activation function layer connected in sequence, and the input of the first residual block is summed with the output of the last activation function layer through a skip connection to form the output of the first residual block, thereby forming a residual network.
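A minimal single-channel NumPy sketch of the first unit and first residual block described above. The 3x3 kernel, single channel and inference-style normalization are assumptions for illustration; `conv3x3`, `first_unit` and `first_residual_block` are illustrative names, not the patent's.

```python
import numpy as np

def conv3x3(x, w):
    # 'same' 3x3 convolution via zero padding (single channel, sketch only)
    H, W = x.shape
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * w)
    return out

def first_unit(x, w, gamma=1.0, beta=0.0, eps=1e-5):
    # Conv -> normalization (whole-map, inference-style) -> ReLU
    y = conv3x3(x, w)
    y = gamma * (y - y.mean()) / np.sqrt(y.var() + eps) + beta
    return np.maximum(y, 0.0)

def first_residual_block(x, weights):
    # chain of first units; the block input is added to the last unit's
    # output through a skip connection
    y = x
    for w in weights:
        y = first_unit(y, w)
    return x + y
```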
Further, the projection domain deep convolutional neural network is used, during training, as the generator of a generative adversarial network;
the generator of the generative adversarial network predicts the noise distribution of the input image and subtracts it from the input image to obtain a high-quality image; the discriminator of the generative adversarial network discriminates the difference between the generator's high-quality output and the corresponding gold standard and feeds it back to the generator to update the generator.
The invention completes the training of the projection domain deep convolutional neural network by means of a generative adversarial network, with the projection domain network as its generator, so that the discriminator's feedback guarantees the training effect, effectively removing noise from the projection images and improving the quality of the intermediate images used to reconstruct the cone beam CT image.
Further, the training method of the projection domain depth convolution neural network comprises the following steps:
randomly selecting a part of projection images from an original projection image data set according to a preset proportion; adding noise to the selected projection image, taking the projection image after noise addition as a low-dose projection image, taking the projection image before noise addition as a corresponding gold standard, forming a training sample by the low-dose projection image and the corresponding gold standard, and forming a projection domain training data set by all the training samples;
establishing a generative adversarial network, and training it with the projection domain training data set;
after training is finished, extracting the generator of the generative adversarial network as the projection domain deep convolutional neural network.
The method randomly selects part of the projection images from the original projection image data set, simulates the low-dose projection images in a noise adding mode, and uses the original projection images without noise as a gold standard, so as to construct a training data set of the projection domain depth convolution neural network.
Further, the low dose cone-beam CT image reconstruction method based on deep learning provided by the present invention further includes:
and (3) denoising a reconstructed image: and inputting the cone beam CT image obtained in the three-dimensional reconstruction step into a trained image domain depth convolution neural network, eliminating noise and artifacts in the cone beam CT image by the image domain depth convolution neural network, outputting a high-quality cone beam CT image, and taking the high-quality cone beam CT image as a final reconstruction result.
On the basis of obtaining the cone beam CT image through three-dimensional reconstruction, the invention eliminates noise and artifacts in the cone beam CT image by using the image domain depth convolution neural network, and can further improve the quality of the cone beam CT image.
Further, the image domain deep convolutional neural network comprises: an encoder, one or more second residual blocks, and a decoder connected in sequence;
the encoder comprises one or more second units which are connected in sequence, and the second residual block comprises one or more second units which are connected in sequence; the second unit comprises a convolution layer, a normalization layer and an activation function layer which are connected in sequence;
the decoder comprises one or more third units connected in sequence; each third unit comprises a deconvolution layer, a normalization layer and an activation function layer connected in sequence;
the encoder and the decoder adopt a symmetrical structure; the inputs of the first, third and fifth convolutional layers of the encoder are each added, through skip connections, to the output of the corresponding deconvolution layer in the decoder, forming residual structures; the input of the first convolutional layer of the second residual block is added, through a skip connection, to the normalized output of its second convolutional layer, forming a residual structure.
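The symmetric encoder/decoder wiring can be sketched abstractly, with each unit represented by a callable. The fixed count of six units per side and the pairing of encoder inputs 1, 3, 5 with the mirrored decoder outputs follow the description above; everything else (and the function name) is an assumption for illustration.

```python
import numpy as np

def image_domain_net(x, enc_units, res_blocks, dec_units):
    assert len(enc_units) == len(dec_units) == 6
    # encoder: inputs of the 1st, 3rd and 5th units are saved for
    # skip connections (0-based indices 0, 2, 4)
    skips, y = [], x
    for k, f in enumerate(enc_units):
        if k % 2 == 0:
            skips.append(y)
        y = f(y)
    for g in res_blocks:
        y = y + g(y)                  # residual block: skip add around block
    # decoder mirrors the encoder: the output of decoder unit k pairs
    # with the input saved at encoder position 5 - k
    for k, h in enumerate(dec_units):
        y = h(y)
        if (5 - k) % 2 == 0:
            y = y + skips.pop()
    return y
```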
In the invention, the image domain deep convolutional neural network is a residual error structure, which can deepen the depth of the convolutional neural network, thereby improving the quality of an output image and accelerating the training of the convolutional neural network.
Further, the training method of the image domain deep convolutional neural network comprises the following steps:
constructing an image domain training data set; in the image domain training data set, each sample consists of two images: one is a cone beam CT image obtained by the three-dimensional reconstruction step, used as the input of the image-domain deep convolutional neural network, and the other is a normal-dose cone beam CT image, used as the reference for the network's output;
and establishing an image domain deep convolutional neural network, and training the image domain deep convolutional neural network by using an image domain training data set, so that the trained image domain deep convolutional neural network is obtained after training is finished.
When the image domain depth convolution neural network is trained, the cone beam CT image reconstructed by the projection image denoised by the projection domain depth convolution neural network and the cone beam CT image with normal dose are used for constructing a training data set, so that the image domain depth convolution neural network obtained by training can effectively remove the noise and the artifact still existing in the projection image.
Further, in the three-dimensional reconstruction step, the obtained high-quality projection image is subjected to three-dimensional reconstruction by using an FDK algorithm.
The FDK algorithm (Feldkamp-Davis-Kress algorithm) has the advantage of high reconstruction speed as an analytical algorithm, and usually has good image quality under the condition of normal dose, but the reconstructed image has more serious noise and artifacts under the condition of low dose; because the invention utilizes the projection domain depth convolution neural network to remove the noise and the artifacts in the projection image obtained by the transformation of the low-dose cone beam CT projection data and obtain the high-quality projection image, the FDK algorithm is utilized to reconstruct the projection image, and the reconstruction quality can be ensured and the reconstruction speed can be effectively improved.
According to another aspect of the present invention, there is provided a computer readable storage medium comprising a stored computer program;
when the computer program is executed by the processor, the computer readable storage medium is controlled to execute the method for reconstructing a low-dose cone-beam CT image based on deep learning according to the present invention.
Generally, by the above technical solution conceived by the present invention, the following beneficial effects can be obtained:
(1) after the low-dose cone beam CT original projection data are converted into projection images, the noise distribution in the projection images is predicted by using a projection domain depth convolution neural network, and the noise in the projection images is subtracted, so that the noise and artifacts caused by low dose in the projection images can be removed while the original structure of the projection images is kept, and the quality of the projection images is effectively improved; the method carries out three-dimensional reconstruction based on the projected image after denoising, and can effectively improve the quality of the cone beam CT image obtained by reconstruction.
(2) In the process of image reconstruction, the time-consuming iterative process is not required to be executed, so that the reconstruction speed is high.
(3) On the basis of obtaining the cone beam CT image through three-dimensional reconstruction, the invention eliminates noise and artifacts in the cone beam CT image by using the image domain depth convolution neural network, and can further improve the quality of the cone beam CT image.
Drawings
FIG. 1 is a flowchart of a method for reconstructing a low-dose cone-beam CT image based on deep learning according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a low dose cone-beam CT image reconstruction method based on deep learning according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a generative countermeasure network provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of an image domain deep convolutional neural network structure provided in an embodiment of the present invention;
fig. 5 is a schematic diagram of the reconstruction effects of different reconstruction methods according to an embodiment of the present invention, in which: (a) a cone-beam CT image reconstructed from normal-dose cone-beam CT projection data; (b) a cone-beam CT image reconstructed from low-dose cone-beam CT projection data; (c) a cone-beam CT image reconstructed by SIRT; (d) a cone-beam CT image reconstructed by CGLS; (e) a cone-beam CT image reconstructed by RED-CNN; (f) a cone-beam CT image reconstructed by MAP-NN; (g) a cone-beam CT image reconstructed by embodiment 2 of the invention; (h) a cone-beam CT image reconstructed by embodiment 1 of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
In the present application, the terms "first," "second," and the like (if any) in the description and the drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Aiming at the technical problem that the quality of a reconstructed image is not high because the conventional low-dose cone-beam CT image reconstruction method cannot eliminate noise and simultaneously reserve an image structure, the invention provides a low-dose cone-beam CT image reconstruction method based on deep learning, which has the overall thought that: after low-dose cone beam CT original projection data are converted into projection images, noise distribution in each projection image is predicted by using a projection domain depth convolution neural network, the noise distribution is subtracted from the projection images so as to improve the quality of the projection images, then three-dimensional reconstruction is carried out on the basis of the high-quality projection images so as to obtain cone beam CT images, therefore, the original structure of the projection images is kept while the noise and artifacts in the projection images are eliminated, and the quality of the reconstructed cone beam CT images is improved; on the basis, the noise and the artifact in the reconstructed cone beam CT image are further eliminated by utilizing the image domain depth convolution neural network, and the quality of the reconstructed image is further improved. The following are examples:
example 1:
a low dose cone-beam CT image reconstruction method based on deep learning, as shown in fig. 1 and 2, comprising:
a projection transformation step:
transforming the low-dose cone-beam CT original projection data into a plurality of projection images;
in this embodiment, the processed low-dose cone-beam CT original projection data may be obtained by sparse acquisition, or by reducing the X-ray intensity; when converting the low-dose cone-beam CT original projection data into projection images, the Beer-Lambert law is specifically adopted for the transformation;
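A sketch of this transformation under the Beer-Lambert law; the guard against zero counts is an added safeguard for illustration, not part of the patent, and the function name is illustrative.

```python
import numpy as np

def counts_to_projection(counts, flat_field):
    # Beer-Lambert law: I = I0 * exp(-P), so the line-integral
    # (projection) image is P = -ln(I / I0)
    counts = np.maximum(counts, 1.0)   # guard against log(0) in dark pixels
    return -np.log(counts / flat_field)
```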
a projected image denoising step:
inputting a projection image to be processed into a trained projection-domain deep convolutional neural network, which predicts the noise distribution in the projection image and subtracts the predicted noise distribution from the projection image, outputting a high-quality projection image;
three-dimensional reconstruction:
and respectively executing a projection image denoising step on the plurality of projection images obtained in the projection transformation step to obtain high-quality projection images corresponding to the projection images, and then performing three-dimensional reconstruction on the obtained high-quality projection images to obtain cone beam CT images.
As an optional implementation, in this embodiment the projection domain deep convolutional neural network is used, during training, as the generator of a generative adversarial network (GAN);
the generator of the GAN predicts the noise distribution of the input image and subtracts it from the input image to obtain a high-quality image; the discriminator of the GAN discriminates the difference between the generator's high-quality output and the corresponding gold standard and feeds it back to the generator for updating;
the structure of the generative adversarial network is shown in fig. 3, where the Generator and the Discriminator are labeled; the Generator specifically comprises one or more first residual blocks connected in sequence, each first residual block having the same structure;
the first residual block comprises one or more first units connected in sequence; each first unit comprises a convolutional layer (Conv), a normalization layer (BN) and an activation function layer (ReLU) connected in sequence, and the input of the first residual block is summed with the output of the last activation function layer through a skip connection to form the output of the first residual block, thereby forming a residual network;
as shown in fig. 3, in the generative adversarial network, the discriminator specifically comprises 6 units, each unit comprising a convolutional layer, a normalization layer and an activation function layer connected in sequence; the activation function of the last unit is a sigmoid, while the activation function layers of all other units use LReLU (leaky ReLU);
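A structural sketch of the discriminator's activation scheme described above; the six convolutional units themselves are abstracted as callables, the leaky-ReLU slope is an assumption, and the names are illustrative.

```python
import numpy as np

def lrelu(x, alpha=0.2):
    # leaky ReLU used by the first five units (slope is an assumption)
    return np.where(x > 0, x, alpha * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator(x, unit_fns):
    # six Conv-BN-activation units; every activation is LReLU except
    # the last, which is a sigmoid producing a realness score
    y = x
    for f in unit_fns[:-1]:
        y = lrelu(f(y))
    return sigmoid(unit_fns[-1](y))
```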
based on the generative adversarial network structure shown in fig. 3, when the projection-domain deep convolutional neural network is trained, the real X-ray projection images of 10 walnuts are specifically selected to construct the training data set; first, the obtained walnut X-ray raw projection data in TIFF (Tag Image File Format) format are read into memory using the Python open-source toolkit NumPy, where a pixel value in the TIFF image file represents the photon count detected by the corresponding detector element; the raw projection data are converted into beam-intensity-loss images I, also referred to as projection images, using the Beer-Lambert law; for each walnut, 200 projection images are randomly selected, giving 2000 projection images in total, which form the original projection image data set;
based on the original projection image data set, in this embodiment, the training method of the projection domain depth convolution neural network includes:
randomly selecting a part of projection images from an original projection image data set according to a preset proportion; adding noise to the selected projection image, taking the projection image after noise addition as a low-dose projection image, taking the projection image before noise addition as a corresponding gold standard, forming a training sample by the low-dose projection image and the corresponding gold standard, and forming a projection domain training data set by all the training samples;
specifically, the low-dose projection image is simulated by adding poisson noise to the projection image, and the relevant calculation expression is as follows:
I_ld,sim = Poisson((β/α)·f·e^(−P))

P_ld,sim = log(I_ld,sim)

wherein P represents a normal-dose cone-beam CT projection image, f is a low-dose flat-field image, α is the X-ray tube current corresponding to the normal dose, β is the tube current corresponding to the low dose, I_ld,sim is the simulated low-dose incident flux, and P_ld,sim is the simulated low-dose projection image; optionally, in this embodiment, β/α is 1/8, i.e., projection images at one-eighth of the normal dose are simulated;
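A sketch of the low-dose simulation described above. The exact placement of the tube-current ratio β/α relative to the flat field, the sign convention of the final log transform, and the function name are all assumptions for illustration.

```python
import numpy as np

def simulate_low_dose_projection(P, flat_field, alpha, beta, seed=0):
    # expected low-dose incident flux: the tube-current ratio beta/alpha
    # scales the flat field, attenuated by the normal-dose projection P
    # (the exact role of the flat field here is an assumption)
    rng = np.random.default_rng(seed)
    mean_flux = (beta / alpha) * flat_field * np.exp(-P)
    I_ld = np.maximum(rng.poisson(mean_flux), 1)   # Poisson photon noise
    # log-transform the noisy flux back to a projection image
    return -np.log(I_ld / ((beta / alpha) * flat_field))
```

At high flux the Poisson noise is relatively small and the simulated projection stays close to the normal-dose one; lowering β/α increases the relative noise, which is exactly the low-dose degradation the network is trained to remove.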
after the projection domain training data set is established, the generative adversarial network shown in fig. 3 is built and trained with this data set; during training, a low-dose projection image is input to the generator, which predicts its noise distribution and then subtracts the corresponding noise distribution from the low-dose projection image to obtain a noise-free projection image; meanwhile, the discriminator judges the difference between the generated image and the normal-dose projection image and feeds it back to the generator, updating the generator so that it produces higher-quality projection images; the discriminator is updated correspondingly; the generator and discriminator are thus updated alternately, yielding high-quality projection images. In view of limited computing power, this embodiment optionally sets batch_size to 1 during training;
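The alternating update scheme with batch_size = 1 can be sketched as a plain training loop; `generator_step` and `discriminator_step` are hypothetical callables that each perform one optimizer update and return a scalar loss.

```python
def train_gan(generator_step, discriminator_step, dataset, epochs=30):
    # alternate updates with batch_size = 1: for each training pair the
    # discriminator is updated first, then the generator
    history = []
    for _ in range(epochs):
        for low_dose, gold_standard in dataset:
            d_loss = discriminator_step(low_dose, gold_standard)
            g_loss = generator_step(low_dose, gold_standard)
            history.append((d_loss, g_loss))
    return history
```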
in this embodiment, the difference between the image generated by the generative adversarial network and the real projection image is measured by a supervised loss function, whose expression is:

L(Θ) = L_adv + λ1·L_PSNR + λ2·L_MSE + λ3·L_SSIM

L_adv represents the adversarial loss function, specifically:

L_adv = E[D(P_nd)] − E[D(G(P_ld))]

L_PSNR represents the peak signal-to-noise ratio loss function, specifically:

L_PSNR = −10·log10(max_y² / MSE)

MSE represents the mean-square-error function, specifically:

MSE = (1/(M·N)) Σ_(i=1..M) Σ_(j=1..N) (P_g(i,j) − P_nd(i,j))²

L_SSIM represents the structural similarity loss function, specifically:

L_SSIM = 1 − ((2·μ1·μ2 + C1)·(2·σ1,2 + C2)) / ((μ1² + μ2² + C1)·(σ1² + σ2² + C2))

where Θ represents the weights of the generative adversarial network and λ1, λ2 and λ3 are balance constants; G and D represent the generator and the discriminator of the generative adversarial network respectively; P_ld and P_nd represent a low-dose projection image and the corresponding normal-dose projection image respectively, and E denotes expectation under the Wasserstein distance; max_y denotes the maximum pixel value of the normal-dose image; P_g represents the image generated by the generator, M and N represent the height and width of the image respectively, and (i, j) denotes pixel coordinates in the image; μ1 and μ2 represent the mean pixel values of the generated image and the reference image respectively; σ1, σ2 and σ1,2 represent the standard deviations of the pixel values of the generated image and the reference image and their covariance, respectively; C1 and C2 are hyper-parameters;
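The individual loss terms can be sketched in NumPy as follows. The combination weights and the exact forms of the PSNR and adversarial terms are reconstructions from the surrounding definitions, not the patent's verbatim formulas; `adv_term` is assumed to be computed separately by the discriminator/critic.

```python
import numpy as np

def mse(p_g, p_nd):
    return float(np.mean((p_g - p_nd) ** 2))

def psnr_loss(p_g, p_nd):
    # negative PSNR, so that minimizing the loss maximizes PSNR
    max_y = float(p_nd.max())
    return -10.0 * np.log10(max_y ** 2 / mse(p_g, p_nd))

def ssim_loss(p_g, p_nd, c1=1e-4, c2=9e-4):
    # 1 - SSIM computed globally over the image (sketch; C1, C2 assumed)
    mu1, mu2 = p_g.mean(), p_nd.mean()
    v1, v2 = p_g.var(), p_nd.var()
    cov = np.mean((p_g - mu1) * (p_nd - mu2))
    ssim = ((2 * mu1 * mu2 + c1) * (2 * cov + c2)) / \
           ((mu1 ** 2 + mu2 ** 2 + c1) * (v1 + v2 + c2))
    return 1.0 - ssim

def supervised_loss(p_g, p_nd, adv_term, lam1=1.0, lam2=1.0, lam3=1.0):
    # adv_term: Wasserstein critic term, supplied externally
    return (adv_term + lam1 * psnr_loss(p_g, p_nd)
            + lam2 * mse(p_g, p_nd) + lam3 * ssim_loss(p_g, p_nd))
```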
to train the generative adversarial network effectively, this embodiment adopts RMSprop (root mean square propagation) as the optimizer, with a learning rate of 0.0002 and 30 training epochs; the whole training process runs on the TensorFlow framework;

after training is finished, the model of the last iteration is stored, and the generator of the generative adversarial network is extracted as the projection-domain deep convolutional neural network;
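For reference, RMSprop keeps a running mean of squared gradients and normalizes each step by its square root; a self-contained sketch of the update rule follows, using the learning rate of this embodiment (in TensorFlow the equivalent choice would be the built-in `tf.keras.optimizers.RMSprop` with `learning_rate=0.0002`).

```python
import numpy as np

def rmsprop_step(theta, grad, state, lr=0.0002, rho=0.9, eps=1e-8):
    """One RMSprop (root mean square propagation) update step."""
    state = rho * state + (1.0 - rho) * grad ** 2       # running mean of grad^2
    theta = theta - lr * grad / (np.sqrt(state) + eps)  # normalized gradient step
    return theta, state

# Demo: minimise f(theta) = (theta - 1)^2 with the learning rate of this embodiment.
theta, state = 0.0, 0.0
for _ in range(20000):
    grad = 2.0 * (theta - 1.0)
    theta, state = rmsprop_step(theta, grad, state)
```

Because the gradient is divided by its running root-mean-square, the effective step size stays near the learning rate regardless of gradient scale, which is why a small rate such as 0.0002 still makes steady progress.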
it should be noted that the structures of the generator and the discriminator in the generative adversarial network are only an optional embodiment and should not be construed as the only limitation of the present invention; in other embodiments of the present invention, the corresponding structures may be adjusted according to the actual situation; similarly, completing the training of the projection-domain deep convolutional neural network by means of a generative adversarial network is also only an optional embodiment of the present invention; in some other embodiments of the present invention, the projection-domain deep convolutional neural network may be trained without a generative adversarial network, as long as the final network removes noise from the projection images well enough to meet the precision requirements of the practical application.
In order to further improve the quality of the reconstructed cone-beam CT image, the present embodiment further includes:
(3) reconstructed image denoising step: the cone-beam CT image obtained in the three-dimensional reconstruction step is input into the trained image-domain deep convolutional neural network, which eliminates the noise and artifacts in the cone-beam CT image and outputs a high-quality cone-beam CT image that is taken as the final reconstruction result;

optionally, in this embodiment the structure of the image-domain deep convolutional neural network is shown in fig. 4 and includes an encoder, six second residual blocks and a decoder connected in sequence; the encoder extracts the feature information of the input image and passes it to the six second residual blocks, which further eliminate the noise and artifacts of the input image; the decoder then recovers the high-quality cone-beam CT image from the feature information provided by the second residual blocks;

as shown in fig. 4, the encoder includes 5 second units connected in sequence, and each second residual block includes 2 second units connected in sequence; a second unit comprises a convolution layer (Conv), a normalization layer (BN) and an activation function layer (ReLU) connected in sequence;

the decoder comprises one or more third units connected in sequence; a third unit comprises a deconvolution layer (Deconv), a normalization layer (BN) and an activation function layer (ReLU) connected in sequence;

the encoder and the decoder adopt a symmetric structure: the inputs of the first, third and fifth convolution layers of the encoder are added in turn, through skip links, to the outputs of the corresponding deconvolution layers of the decoder, forming residual structures; the input of the first convolution layer of each second residual block is added, through a skip link, to the normalized output of its second convolution layer, forming a residual structure;

in this embodiment the image-domain deep convolutional neural network is built from residual structures, which allows the convolutional neural network to be made deeper, improving the quality of the output image and accelerating its training;
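The topology just described (five encoder units, six residual blocks, a mirrored decoder, and skip links pairing the first, third and fifth convolution layers with their decoder counterparts) can be sketched with shape-preserving stand-ins for the Conv/Deconv units; the placeholder arithmetic below is illustrative only and is not a trained network:

```python
import numpy as np

def unit(x):
    """Stand-in for a Conv+BN+ReLU (or Deconv+BN+ReLU) unit; shape-preserving."""
    return np.maximum(0.9 * x + 0.01, 0.0)

def residual_block(x):
    # Second residual block: two units, with the block input added to the
    # normalized output through a skip link.
    return x + unit(unit(x))

def image_domain_net(x):
    skips = {}
    for i in range(1, 6):                 # encoder: five second units
        if i in (1, 3, 5):
            skips[i] = x                  # keep the inputs of conv layers 1, 3, 5
        x = unit(x)
    for _ in range(6):                    # six second residual blocks
        x = residual_block(x)
    for j in range(1, 6):                 # symmetric decoder: five third units
        x = unit(x)
        mirror = 6 - j                    # deconv j mirrors encoder conv (6 - j)
        if mirror in skips:
            x = x + skips[mirror]         # skip link forming a residual structure
    return x

out = image_domain_net(np.ones((16, 16)))
```

The dictionary of stored encoder inputs makes the pairing explicit: the last deconvolution receives the input of the first convolution, so low-level image detail bypasses the whole bottleneck.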
in this embodiment, the training method for the image domain deep convolutional neural network includes:
constructing an image-domain training data set: each sample in it consists of two images, one being the cone-beam CT image obtained by the three-dimensional reconstruction step (that is, the cone-beam CT image reconstructed in three dimensions after the quality of the projection images has been improved by the projection-domain deep convolutional neural network), used as the input of the image-domain deep convolutional neural network, and the other being a normal-dose cone-beam CT image, used as the reference image for the output of the image-domain deep convolutional neural network;

establishing the image-domain deep convolutional neural network and training it with the image-domain training data set, so that the trained image-domain deep convolutional neural network is obtained when training finishes;
optionally, in this embodiment the batch_size is set to 2 when training the image-domain deep convolutional neural network, and a conventional mean square error (MSE) loss function describes the difference between the output image and the normal-dose cone-beam image; its expression is:

L(Ω) = (1/M) · Σ_{m=1}^{M} ‖F(I_{m,1}; Ω) − I_{m,2}‖₂²

wherein Ω represents the weights of the image-domain deep convolutional neural network, which are updated with learning rate η; F is the mapping function of the image-domain deep convolutional neural network; I_{m,1} represents a cone-beam CT image produced with the help of the generative adversarial network; I_{m,2} is the corresponding normal-dose cone-beam CT image; ‖·‖₂ denotes the 2-norm; and M is the batch size.
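The batch MSE loss described here can be transcribed directly; toy arrays stand in for the network outputs and the normal-dose references:

```python
import numpy as np

def batch_mse_loss(outputs, references):
    # L(Omega) = (1/M) * sum_m || F(I_{m,1}) - I_{m,2} ||_2^2, M = batch size
    m = len(outputs)
    return sum(float(np.sum((o - r) ** 2)) for o, r in zip(outputs, references)) / m

# batch_size = 2, as in this embodiment
outs = [np.zeros((2, 2)), np.ones((2, 2))]
refs = [np.zeros((2, 2)), np.zeros((2, 2))]
loss = batch_mse_loss(outs, refs)
```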
To ensure the quality of the reconstructed image while improving reconstruction speed, as a preferred embodiment, the three-dimensional reconstruction step of this embodiment reconstructs the obtained high-quality projection images with the FDK algorithm;

as an analytical algorithm, the FDK algorithm (Feldkamp-Davis-Kress algorithm) has the advantage of high reconstruction speed and usually yields good image quality at normal dose, but at low dose its reconstructed images suffer from severe noise and artifacts; in this embodiment, the projection-domain deep convolutional neural network removes the noise and artifacts from the projection images obtained by transforming the low-dose cone-beam CT projection data, yielding high-quality projection images, so reconstructing these with the FDK algorithm effectively improves reconstruction speed while preserving reconstruction quality; it should be noted that the FDK algorithm is only a preferred embodiment of the present invention, and any other algorithm that can reconstruct a cone-beam CT image from projection images is applicable to the present invention.
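The per-projection core of the FDK algorithm is a cosine pre-weighting followed by row-wise ramp filtering; weighted backprojection over all view angles then accumulates the volume. A sketch of the pre-weighting and filtering step follows; the geometry parameters `d_so`, `du` and `dv` are illustrative assumptions, not values from the patent:

```python
import numpy as np

def fdk_preprocess(proj, d_so, du, dv):
    """Cosine-weight and ramp-filter one cone-beam projection (FDK steps 1-2)."""
    nv, nu = proj.shape
    u = (np.arange(nu) - (nu - 1) / 2.0) * du        # detector column coordinates
    v = (np.arange(nv) - (nv - 1) / 2.0) * dv        # detector row coordinates
    uu, vv = np.meshgrid(u, v)
    # FDK cosine weight: source-to-origin distance over ray length to each pixel.
    weighted = proj * d_so / np.sqrt(d_so ** 2 + uu ** 2 + vv ** 2)
    # Ram-Lak (ramp) filter applied along detector rows in the Fourier domain.
    ramp = np.abs(np.fft.fftfreq(nu, d=du))
    return np.real(np.fft.ifft(np.fft.fft(weighted, axis=1) * ramp, axis=1))

filtered = fdk_preprocess(np.ones((8, 16)), d_so=500.0, du=1.0, dv=1.0)
```

Because each projection is processed independently before backprojection, this pipeline slots naturally after the projection-domain denoising network of the embodiment.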
Example 2:
A deep learning-based low-dose cone-beam CT image reconstruction method, similar to embodiment 1, differing only in that this embodiment does not include the reconstructed image denoising step: the cone-beam CT image obtained in the three-dimensional reconstruction step is used directly as the reconstruction result;

for the specific implementation of this embodiment, reference may be made to the description of embodiment 1, which is not repeated here.
Example 3:
a computer readable storage medium comprising a stored computer program;
when the computer program is executed by a processor, the device on which the computer-readable storage medium resides is controlled to execute the deep learning-based low-dose cone-beam CT image reconstruction method provided in embodiment 1 or embodiment 2.
To further illustrate the effectiveness and reliability of the present invention, embodiments 1 and 2 above are compared with existing methods for improving low-dose cone-beam CT images, including: the CGLS (conjugate gradient least squares) algorithm, SIRT (simultaneous iterative reconstruction technique), RED-CNN (residual encoder-decoder convolutional neural network) and MAP-NN (modularized adaptive processing neural network). CGLS is a classical algorithm that solves the least-squares problem of an underdetermined system by the conjugate gradient method; SIRT is a classical iterative algorithm that obtains an optimal solution by minimizing an objective function during iteration; RED-CNN and MAP-NN apply deep learning in the CT image domain and currently achieve good results on quarter-dose fan-beam CT images.
The reconstruction results of the various methods on cone-beam CT projection data of the same object are shown in (a) to (h) of fig. 5. As fig. 5 shows, the image reconstructed by the plain FDK algorithm contains severe noise and artifacts that make the structural information of the image hard to distinguish; SIRT eliminates noise and artifacts to some extent but distorts the image; the CGLS algorithm suppresses part of the noise and artifacts, but the residue still clings to the image structure, so part of the structural information is lost; RED-CNN and MAP-NN, as deep-learning post-processing techniques, effectively eliminate the noise and artifacts of the reconstructed image, but largely at the expense of image structure. Embodiment 2, which improves the quality of the projection images with a generative adversarial network in the projection domain, largely preserves the structural information of the image in its reconstruction; embodiment 1 further adopts an image-domain deep-learning post-processing technique, which further eliminates the noise and artifacts remaining in the images of embodiment 2 and improves the contrast of the reconstructed image. In conclusion, the method of this embodiment achieves better results than the other algorithms in both noise suppression and structure preservation.
To quantitatively compare how well the various methods improve the quality of low-dose cone-beam CT images, three metrics are adopted: PSNR (peak signal-to-noise ratio), SSIM (structural similarity) and RMSE (root mean square error). SSIM measures how well the reconstructed image preserves structure relative to the normal-dose reconstruction, the closer to 1 the better; PSNR and RMSE measure the visual difference between the reconstructed image and the normal-dose image, and the larger the PSNR and the smaller the RMSE, the smaller the visual difference between the two images. Specifically, the metrics are computed over 200 projection images, and the quantitative performance comparison of the different methods on the low-dose cone-beam CT reconstruction task is shown in Table 1.
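Averaging a metric over a set of image pairs, as done here for 200 projection images, can be sketched as follows; RMSE is shown, and the PSNR and SSIM averages are computed the same way from the corresponding per-image formulas:

```python
import numpy as np

def rmse(x, y):
    # Root mean square error between a reconstruction and its normal-dose reference.
    return float(np.sqrt(np.mean((x - y) ** 2)))

def average_metric(pairs, metric):
    # Mean of a per-image metric over all evaluated image pairs.
    vals = [metric(x, y) for x, y in pairs]
    return sum(vals) / len(vals)

pairs = [(np.zeros((2, 2)), np.ones((2, 2))),
         (np.ones((2, 2)), np.ones((2, 2)))]
avg = average_metric(pairs, rmse)
```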
Table 1: quantitative performance comparison of different methods on the low-dose cone-beam CT reconstruction task
[Table 1 appears as an image in the original document; its numerical values are not reproduced in this text.]
As Table 1 shows, the image reconstructed by the method of this embodiment obtains the highest peak signal-to-noise ratio (PSNR) and the highest structural similarity (SSIM), indicating that the low-dose reconstruction of this embodiment is visually closest to the normal-dose reconstruction and preserves structural information to the greatest extent. Meanwhile, the RMSE of the low-dose reconstructed image of this embodiment relative to the normal-dose reconstruction is the smallest, i.e., the average pixel-value difference is the smallest, indicating that the method of this embodiment has the highest reconstruction accuracy.
In general, the method transforms the raw low-dose cone-beam CT projection data into projection images, predicts the noise distribution in each projection image with a projection-domain deep convolutional neural network, and subtracts that noise to obtain high-quality projection images, from which the cone-beam CT image is reconstructed in three dimensions. This removes the noise and artifacts caused by the low dose while preserving the original structure of the projection images, effectively improving the quality of the cone-beam CT image. On this basis, eliminating the remaining noise and artifacts in the cone-beam CT image with an image-domain deep convolutional neural network can further improve the quality of the cone-beam CT image.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (9)

1. A deep learning-based low-dose cone-beam CT image reconstruction method, characterized in that it comprises:
a projection transformation step: transforming raw low-dose cone-beam CT projection data into a plurality of projection images;
a projection image denoising step: inputting a projection image to be processed into a trained projection-domain deep convolutional neural network, which predicts the noise distribution in the projection image to be processed, subtracts the noise distribution from it, and outputs a high-quality projection image;
a three-dimensional reconstruction step: performing the projection image denoising step on each of the projection images obtained in the projection transformation step, and, after obtaining the high-quality projection image corresponding to each projection image, performing three-dimensional reconstruction on the obtained high-quality projection images to obtain a cone-beam CT image.
2. The deep learning-based low-dose cone-beam CT image reconstruction method according to claim 1, characterized in that the projection-domain deep convolutional neural network comprises one or more first residual blocks connected in sequence, each first residual block having the same structure;
the first residual block comprises one or more first units connected in sequence, each first unit comprising a convolution layer, a normalization layer and an activation function layer connected in sequence, and the input of the first residual block is summed with the output of the activation function layer through a skip link to form the output of the first residual block, thereby constituting a residual network.
3. The deep learning-based low-dose cone-beam CT image reconstruction method according to claim 1 or 2, characterized in that, during training, the projection-domain deep convolutional neural network serves as the generator of a generative adversarial network;
the generator of the generative adversarial network predicts the noise distribution of an input image and subtracts that noise distribution from the input image to obtain a high-quality image; the discriminator of the generative adversarial network judges the difference between the high-quality image output by the generator and the corresponding gold standard, and feeds this difference back to the generator to update the generator.
4. The deep learning-based low-dose cone-beam CT image reconstruction method according to claim 3, characterized in that the training method of the projection-domain deep convolutional neural network comprises:
randomly selecting part of the projection images from the original projection image data set according to a preset ratio; adding noise to the selected projection images, using each noise-added projection image as a low-dose projection image and the projection image before noise addition as the corresponding gold standard; each low-dose projection image and its gold standard constitute one training sample, and all training samples constitute a projection-domain training data set;
establishing the generative adversarial network and training it with the projection-domain training data set;
after training, extracting the generator of the generative adversarial network as the projection-domain deep convolutional neural network.
5. The deep learning-based low-dose cone-beam CT image reconstruction method according to claim 1, characterized in that it further comprises:
a reconstructed image denoising step: inputting the cone-beam CT image obtained in the three-dimensional reconstruction step into a trained image-domain deep convolutional neural network, which eliminates the noise and artifacts in the cone-beam CT image and outputs a high-quality cone-beam CT image as the final reconstruction result.
6. The deep learning-based low-dose cone-beam CT image reconstruction method according to claim 5, characterized in that the image-domain deep convolutional neural network comprises an encoder, one or more second residual blocks, and a decoder connected in sequence;
the encoder comprises one or more second units connected in sequence, and the second residual block comprises one or more second units connected in sequence; each second unit comprises a convolution layer, a normalization layer and an activation function layer connected in sequence;
the decoder comprises one or more third units connected in sequence; each third unit comprises a deconvolution layer, a normalization layer and an activation function layer connected in sequence;
the encoder and the decoder adopt a symmetric structure, the inputs of the first, third and fifth convolution layers of the encoder being added in turn, through skip links, to the outputs of the corresponding deconvolution layers of the decoder to constitute residual structures; the input of the first convolution layer of the second residual block is added, through a skip link, to the normalized output of the second convolution layer to constitute a residual structure.
7. The deep learning-based low-dose cone-beam CT image reconstruction method according to claim 5 or 6, characterized in that the training method of the image-domain deep convolutional neural network comprises:
constructing an image-domain training data set in which each sample consists of two images, one being the cone-beam CT image obtained by the three-dimensional reconstruction step, used as the input of the image-domain deep convolutional neural network, and the other being a normal-dose cone-beam CT image, used as the reference image for the output of the image-domain deep convolutional neural network;
establishing the image-domain deep convolutional neural network and training it with the image-domain training data set, so that the trained image-domain deep convolutional neural network is obtained when training finishes.
8. The deep learning-based low-dose cone-beam CT image reconstruction method according to claim 1, characterized in that, in the three-dimensional reconstruction step, the FDK algorithm is used to perform three-dimensional reconstruction on the obtained high-quality projection images.
9. A computer-readable storage medium, characterized in that it comprises a stored computer program;
when the computer program is executed by a processor, the device on which the computer-readable storage medium resides is controlled to execute the deep learning-based low-dose cone-beam CT image reconstruction method according to any one of claims 1-8.
CN202011371624.6A 2020-11-30 2020-11-30 Low-dose cone-beam CT image reconstruction method based on deep learning Expired - Fee Related CN112348936B (en)


Publications (2)

Publication Number Publication Date
CN112348936A true CN112348936A (en) 2021-02-09
CN112348936B CN112348936B (en) 2023-04-18

Family

ID=74365107







Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20230418