CN115049753A - Cone beam CT artifact correction method based on unsupervised deep learning - Google Patents
- Publication number: CN115049753A
- Application number: CN202210521271.6A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/008—Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
A cone beam CT artifact correction method based on unsupervised deep learning belongs to the field of industrial nondestructive testing. Multiple projection frames acquired at the same angle are randomly superposed to construct image pairs that contain identical object information at different noise levels. A lightweight fully convolutional neural network model then learns the mapping between images at different noise levels, removes projection-domain image noise under unsupervised conditions, and reduces artifacts in the reconstructed-domain image.
Description
Technical Field
The invention relates to the field of industrial nondestructive testing, in particular to a cone beam CT artifact correction method based on unsupervised deep learning.
Background
Cone beam CT is an imaging detection technique in which a cone beam radiation source and an area-array detector collect a series of projection images of a measured object at different angles, and continuous sequential slices are reconstructed with a reconstruction algorithm. Unlike a conventional CT system, the X-rays received by the area-array detector are converted into an image signal, so axial multi-slice data of the measured object can be acquired in a single scan. With advantages such as high scanning speed and high ray utilization, cone beam CT is an ideal nondestructive testing means for quantitatively characterizing the internal structure size, position, and density of an object. In practice, however, factors such as the random distribution of X-ray photons absorbed by the detector, photon escape from the conversion screen, and coupling efficiency cause the detector signal to deviate from the true signal, so the projection image inevitably contains noise, and the tomographic image reconstructed from it exhibits artifacts. These artifacts significantly increase the gray-level non-uniformity of the reconstructed-domain image, reduce contrast, interfere with subsequent edge detection and segmentation, and degrade the dimensional measurement precision and defect identification accuracy of a cone beam CT system in industrial nondestructive testing applications.
At present, industry widely adopts a multi-frame integration strategy to reduce projection-domain image noise. Although simple and effective, acquiring multiple frames lengthens the scanning time and noticeably lowers detection efficiency. Filtering-based denoising methods such as Gaussian filtering, bilateral filtering, and non-local means filtering inevitably remove part of the useful signal along with the noise, leaving the processed image over-smoothed with obvious loss of detail. In recent years, deep-learning-based image denoising has been studied intensively; although deep learning can effectively avoid detail loss, it requires a large number of noise-free images as label images for network training, which limits its industrial application. The method of patent CN111899188A assumes that the noise in CT projection images follows a Poisson distribution and obtains virtual noise-free images by simulation, which partly solves the problem that noise-free images cannot be acquired. However, because a domain gap exists between simulated and real images, and the noise distribution in projection images does not fully follow a Poisson distribution, its processing effect is not ideal.
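The noise penalty of the multi-frame integration strategy above can be quantified with a toy simulation (all numbers are illustrative, not from the patent): averaging n statistically independent noisy frames of the same scene reduces the noise standard deviation by a factor of √n, at the cost of n times the acquisition time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat projection region plus independent Gaussian noise per frame.
clean = np.full((64, 64), 100.0)
frames = clean + rng.normal(0.0, 10.0, size=(16, 64, 64))  # 16 noisy frames

single_std = frames[0].std()              # noise level of one frame (~10)
averaged_std = frames.mean(axis=0).std()  # noise of the 16-frame mean (~10/4)
print(single_std, averaged_std)
```

With 16 frames the noise drops roughly fourfold, which is exactly the trade-off the patent seeks to avoid: the same reduction requires sixteen times the scanning time.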
Disclosure of Invention
In view of the above defects of existing methods, the invention provides a cone beam CT artifact correction method based on unsupervised deep learning, which effectively removes projection-domain image noise and reduces reconstructed-domain image artifacts without requiring any noise-free image.
The technical scheme of the invention is as follows:
a cone beam CT artifact correction method based on unsupervised deep learning is characterized by comprising the following specific steps:
step 1, construction of an image data set:
step 1.1: collecting multiple frame images
Place a metal part on the cone beam CT turntable and collect n frames of projection images {P_1, P_2, …, P_n} at the same angle, where n is a positive integer greater than or equal to 3 and P_k is the k-th frame projection image.
Step 1.2: randomly extracting partial frames
Randomly scramble the order of the acquired n frames of projection images, and extract the first a frames and the last b frames of the scrambled image sequence, where a and b satisfy a ≥ 1, b ≥ 1, and a + b ≤ n (so the two subsets are disjoint).
step 1.3: integration superposition
Integrate and superpose each extracted frame subset to obtain images I_a and I_b. I_a and I_b contain exactly the same object spatial information but different noise levels. I_a and I_b serve as the input and the output (target) of the network, respectively.
Step 1.4: forming an image dataset
Change the part and the acquisition angle and acquire images according to steps 1.1-1.3 to obtain more image samples, forming a data set N containing m image pairs. The data set N is randomly divided into a training set, a verification set, and a test set in proportions of 70%, 10%, and 20%, respectively.
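Steps 1.2-1.3 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name is invented, and using the mean of the frames as the "integral superposition" is an assumption (a sum differs only by a scale factor).

```python
import numpy as np

def make_training_pair(frames, a, b, rng):
    """Build one (input, target) image pair from n same-angle frames.

    Shuffle the frame order, take the first `a` and the last `b` frames
    of the shuffled sequence (disjoint subsets when a + b <= n), and
    integrate each subset into one image. Both images show the same
    object but carry independent noise realizations.
    """
    n = frames.shape[0]
    assert n >= 3 and a >= 1 and b >= 1 and a + b <= n
    order = rng.permutation(n)                       # step 1.2: random scramble
    img_in = frames[order[:a]].mean(axis=0)          # step 1.3: first a frames
    img_target = frames[order[n - b:]].mean(axis=0)  # last b frames
    return img_in, img_target

rng = np.random.default_rng(1)
frames = 100.0 + rng.normal(0.0, 5.0, size=(8, 32, 32))  # 8 noisy frames
x, y = make_training_pair(frames, a=3, b=3, rng=rng)
```

Because the noise in x and y is independent while the underlying signal is identical, a network trained to map x to y cannot predict the noise and converges toward the clean signal, in the spirit of noise2noise-style training.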
Step 2, constructing a lightweight full convolution neural network
To reduce the number of network parameters and increase processing speed, the invention uses a fully convolutional neural network F built by stacking 7 depthwise separable convolution layers as the training network. For the first 6 convolution layers, the number of channels is 32, the kernel size is 3 × 3, the stride is 1, and the activation function is ReLU. The output features of the third layer are concatenated with those of the fourth layer along the channel dimension to form the input of the fifth layer; the output features of the second layer are concatenated with those of the fifth layer to form the input of the sixth layer; and the output features of the first layer are concatenated with those of the sixth layer to form the input of the seventh layer. For the seventh convolution layer, the number of channels is 1, the kernel size is 3 × 3, the stride is 1, and the activation function is Tanh. Convolutional network models of other structures, such as U-Net and FCN, may also be used as the training network. The loss function is the sum of the L1 loss and the L2 loss.
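A quick parameter count shows why the depthwise separable layers keep the network light. The numbers below use the 32-channel, 3 × 3 configuration stated above; the helper functions are illustrative and ignore biases.

```python
import numpy as np

def conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution layer (biases ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k):
    """Weights of a depthwise separable convolution: one k x k filter per
    input channel (depthwise), then a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# Hidden layers of the network: 32 -> 32 channels, 3 x 3 kernels.
standard = conv_params(32, 32, 3)             # 9 * 32 * 32   = 9216 weights
separable = separable_conv_params(32, 32, 3)  # 288 + 1024    = 1312 weights
print(standard, separable, standard / separable)
```

For this layer size the separable variant uses roughly 7× fewer weights than a standard convolution, which is the "lightweight" property the text refers to.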
Step 3, network training
After the model is built, train it with the training data set from step 1: input a fixed number of images (a mini-batch) each time, obtain the loss function value by forward propagation, and optimize the parameters in each convolution layer of the model with the backpropagation algorithm. Repeat until the loss function value on the verification set no longer decreases; the model has then converged and the convolution-layer parameter values are fixed.
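The loss used in training, stated above as the sum of the L1 loss and the L2 loss, can be sketched as follows. The unweighted sum of the two mean terms is an assumption, since the text does not give a weighting.

```python
import numpy as np

def l1_l2_loss(pred, target):
    """Sum of the mean L1 and mean L2 (MSE) terms.

    The L2 term dominates for large residuals, while the L1 term keeps a
    non-vanishing gradient for small residuals, so the combination
    penalizes both coarse errors and residual low-level noise.
    """
    diff = pred - target
    return np.abs(diff).mean() + (diff ** 2).mean()

pred = np.array([1.0, 2.0, 3.0])
target = np.array([1.0, 2.0, 5.0])
# residuals: 0, 0, -2  ->  L1 mean = 2/3, L2 mean = 4/3, total = 2.0
loss = l1_l2_loss(pred, target)
```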
Step 4, network application
After training is finished, any projection image input to the network model yields a denoised projection image as output. Reconstructing the set of denoised projection images with the FDK reconstruction algorithm then produces tomographic images with significantly reduced artifacts.
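The core per-row operation inside FDK reconstruction is ramp filtering of each detector row in the frequency domain; a minimal sketch is given below. This omits FDK's cosine weighting, apodization window, and the 3-D weighted backprojection, and the function name is illustrative.

```python
import numpy as np

def ramp_filter_row(row):
    """Frequency-domain ramp filtering of one detector row, the filtering
    step shared by FDK and classical filtered backprojection."""
    n = row.size
    freqs = np.fft.fftfreq(n)   # frequencies in cycles/sample
    ramp = np.abs(freqs)        # |w| ramp response (zero at DC)
    return np.real(np.fft.ifft(np.fft.fft(row) * ramp))

# A uniform row has only a DC component, which the ramp filter removes,
# so the filtered output is (numerically) zero everywhere.
row = np.ones(64)
filtered = ramp_filter_row(row)
```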
Compared with the prior art, the invention has the advantages that:
the random multi-frame superposition strategy is provided, image data meeting the requirement of neural network training can be constructed, the defect that a large number of noise-free images are needed in the existing deep learning method is overcome, meanwhile, the method is simple to implement, good in denoising effect and capable of effectively reducing artifacts caused by noise in cone beam CT projection images, and excessive smoothness of the images is avoided.
Drawings
FIG. 1 is a schematic representation of the process of the present invention.
Detailed Description
Examples
As shown in fig. 1, a cone beam CT artifact correction method based on unsupervised deep learning specifically includes the steps of:
step 1, constructing an image data set:
step 1.1: collecting multiple frame images
Place the metal part on the cone beam CT turntable and collect n (n ≥ 3) frames of projection images {P_1, P_2, …, P_n} at the same angle, where P_k is the k-th frame projection image.
Step 1.2: randomly extracting partial frames
Randomly scramble the order of the acquired n frames of projection images, and extract the first a frames and the last b frames of the scrambled image sequence, where a and b satisfy a ≥ 1, b ≥ 1, and a + b ≤ n.
step 1.3: integration superposition
Integrate and superpose each extracted frame subset to obtain images I_a and I_b. I_a and I_b contain exactly the same object spatial information but different noise levels. I_a and I_b serve as the input and the output (target) of the network, respectively.
Step 1.4: forming an image dataset
Change the part and the acquisition angle and acquire images according to steps 1.1-1.3 to obtain more image samples, forming a data set N containing m image pairs. The data set N is randomly divided into a training set, a verification set, and a test set in proportions of 70%, 10%, and 20%, respectively.
Step 2, constructing a lightweight full convolution neural network
To reduce the number of network parameters and increase processing speed, the invention uses a fully convolutional neural network F built by stacking 7 depthwise separable convolution layers as the training network. For the first 6 convolution layers, the number of channels is 32, the kernel size is 3 × 3, the stride is 1, and the activation function is ReLU. The output features of the third layer are concatenated with those of the fourth layer along the channel dimension to form the input of the fifth layer; the output features of the second layer are concatenated with those of the fifth layer to form the input of the sixth layer; and the output features of the first layer are concatenated with those of the sixth layer to form the input of the seventh layer. For the seventh convolution layer, the number of channels is 1, the kernel size is 3 × 3, the stride is 1, and the activation function is Tanh. Convolutional network models of other structures, such as U-Net and FCN, may also be used as the training network. The loss function is the sum of the L1 loss and the L2 loss.
Step 3, network training
After the model is built, train it with the training data set from step 1: input a fixed number of images each time, obtain the loss function value by forward propagation, and optimize the parameters in each convolution layer of the model with the backpropagation algorithm. Repeat until the loss function value on the verification set no longer decreases; the model has then converged and the convolution-layer parameter values are fixed.
Step 4, network application
After training is finished, any projection image input to the network model yields a denoised projection image as output. Reconstructing the set of denoised projection images with the FDK reconstruction algorithm then produces tomographic images with significantly reduced artifacts.
The method constructs image data meeting the requirements of neural network training, overcomes the need of existing deep learning methods for large numbers of noise-free images, is simple to implement, denoises well without over-smoothing the image, and effectively reduces the artifacts caused by noise in cone beam CT projection images.
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and the purpose thereof is to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All equivalent changes and modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.
Moreover, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
Claims (4)
1. A cone beam CT artifact correction method based on unsupervised deep learning is characterized by comprising the following specific steps:
step 1, construction of an image data set:
step 1.1: collecting multiple frame images
placing the metal part on a cone beam CT turntable, and collecting n frames of projection images {P_1, P_2, …, P_n} at the same angle, wherein P_k is the k-th frame projection image;
step 1.2: randomly extracting partial frames
randomly scrambling the order of the acquired n frames of projection images, and extracting the first a frames and the last b frames of the scrambled image sequence, wherein a and b satisfy a ≥ 1, b ≥ 1, and a + b ≤ n;
step 1.3: integration superposition
performing integral superposition on each extracted frame subset to obtain images I_a and I_b; I_a and I_b contain exactly the same object spatial information but different noise levels; I_a and I_b serve as the input and the output of the network, respectively;
step 1.4: forming an image dataset
changing the part and the acquisition angle, and acquiring images according to steps 1.1-1.3 to obtain more image samples, forming a data set N containing m image pairs; randomly dividing the obtained image data set N into a training set, a verification set and a test set;
step 2, constructing a lightweight full convolution neural network
using a fully convolutional neural network F formed by stacking 7 depthwise separable convolution layers as the training network; for the first 6 convolutional layers, the number of channels is 32, the kernel size is 3 × 3, the stride is 1, and the activation function is ReLU; the output features of the third layer and the fourth layer are concatenated along the channel dimension as the input of the fifth layer; the output features of the second layer and the fifth layer are concatenated along the channel dimension as the input of the sixth layer; the output features of the first layer and the sixth layer are concatenated along the channel dimension as the input of the seventh layer; for the seventh convolutional layer, the number of channels is 1, the kernel size is 3 × 3, the stride is 1, and the activation function is Tanh; the loss function is the sum of the L1 loss and the L2 loss;
step 3, network training
after the model is built, training with the training data set from step 1: inputting a fixed number of images each time, obtaining a loss function value through forward propagation, and optimizing the parameters in each convolution layer of the model using the backpropagation algorithm; repeating until the loss function value on the verification set no longer decreases, whereupon the model has converged and the convolution-layer parameter values are fixed;
step 4, network application
after training is finished, inputting any projection image into the network model, the output of the network being the denoised projection image; and reconstructing the plurality of projection images using the FDK reconstruction algorithm to obtain tomographic images with significantly reduced artifacts.
2. The unsupervised deep learning-based cone-beam CT artifact correction method as claimed in claim 1, wherein: in the step 1.1, n is a positive integer greater than or equal to 3.
3. The unsupervised deep learning-based cone-beam CT artifact correction method as claimed in claim 1, wherein: in step 1.4, the obtained image data set N is randomly divided into a training set, a verification set and a test set, wherein the proportion of the training set, the verification set and the test set is 70%, 10% and 20% respectively.
4. The unsupervised deep learning-based cone-beam CT artifact correction method as claimed in claim 1, wherein: in step 2, convolutional network models of other structures can be used as training networks in the method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210521271.6A CN115049753B (en) | 2022-05-13 | 2022-05-13 | Cone beam CT artifact correction method based on unsupervised deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115049753A true CN115049753A (en) | 2022-09-13 |
CN115049753B CN115049753B (en) | 2024-05-10 |
Family
ID=83157077
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210521271.6A Active CN115049753B (en) | 2022-05-13 | 2022-05-13 | Cone beam CT artifact correction method based on unsupervised deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115049753B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116012478A (en) * | 2022-12-27 | 2023-04-25 | 哈尔滨工业大学 | CT metal artifact removal method based on convergence type diffusion model |
CN117726706A (en) * | 2023-12-19 | 2024-03-19 | 燕山大学 | CT metal artifact correction and super-resolution method for unsupervised deep dictionary learning |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109559359A (en) * | 2018-09-27 | 2019-04-02 | 东南大学 | Artifact minimizing technology based on the sparse angular data reconstruction image that deep learning is realized |
CN111882624A (en) * | 2020-06-19 | 2020-11-03 | 中国人民解放军战略支援部队信息工程大学 | Nano CT image motion artifact correction method and device based on multiple acquisition sequences |
CN111899188A (en) * | 2020-07-08 | 2020-11-06 | 西北工业大学 | Neural network learning cone beam CT noise estimation and suppression method |
US20210012543A1 (en) * | 2019-07-11 | 2021-01-14 | Canon Medical Systems Corporation | Apparatus and method for artifact detection and correction using deep learning |
CN112348936A (en) * | 2020-11-30 | 2021-02-09 | 华中科技大学 | Low-dose cone-beam CT image reconstruction method based on deep learning |
KR20220021368A (en) * | 2020-08-13 | 2022-02-22 | 한국과학기술원 | Tomography image processing method using neural network based on unsupervised learning to remove missing cone artifacts and apparatus therefor |
Also Published As
Publication number | Publication date |
---|---|
CN115049753B (en) | 2024-05-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||