CN115049753B - Cone beam CT artifact correction method based on unsupervised deep learning - Google Patents
- Publication number: CN115049753B (CN202210521271.6A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation; G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/008—Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
- G—PHYSICS; G06—COMPUTING; G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks
- G06N3/08—Learning methods; G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
A cone beam CT artifact correction method based on unsupervised deep learning, belonging to the field of industrial nondestructive testing. First, multiple projection frames acquired at the same position are randomly and integrally superposed to construct training image pairs containing different noise levels. A lightweight fully convolutional neural network then learns the mapping between images at the different noise levels, thereby removing projection-domain image noise and reducing reconstruction-domain image artifacts under unsupervised conditions.
Description
Technical Field
The invention relates to the field of industrial nondestructive testing, in particular to a cone beam CT artifact correction method based on unsupervised deep learning.
Background
Cone beam CT is an imaging and inspection technique that uses a cone-beam source and an area-array (flat-panel) detector to acquire a series of projection images of a measured object at different angles, and reconstructs a continuous slice sequence with a reconstruction algorithm. Compared with conventional CT systems, the area-array detector converts the received X-rays into image signals so that multi-slice axial data of the object are obtained in a single scan. This gives advantages such as high scanning speed and high ray-utilization efficiency, making cone beam CT an ideal nondestructive testing means for quantitatively characterizing the internal structure size, position, and density of an object. In practical cone beam CT inspection, however, the detector signal deviates from the true signal to some extent, owing to the random distribution of the X-ray photons absorbed by the detector and to the photon escape and coupling efficiency of the conversion screen, among other causes. Projection images are therefore inevitably contaminated by image noise, and the tomographic images reconstructed from them contain artifacts. These artifacts markedly increase the gray-level non-uniformity of reconstruction-domain images, reduce contrast, interfere with subsequent edge detection and segmentation, and degrade the dimensional-measurement precision and defect-identification accuracy of cone beam CT systems in industrial nondestructive testing applications.
At present, industry widely adopts an integral (multi-frame averaging) noise-reduction strategy to denoise projection-domain images. The method is simple and effective, but acquiring multiple frames lengthens the scan time and markedly reduces inspection efficiency. Filtering methods such as Gaussian filtering, bilateral filtering, and non-local means filtering inevitably remove part of the useful signal along with the noise, so the processed images are over-smoothed and detail loss is obvious. In recent years, deep-learning-based image denoising research has flourished; although deep learning can effectively avoid detail loss, it requires a large number of noise-free images as label images for network training, which limits its industrial application. The method of patent CN 111899188A assumes that CT projection noise follows a Poisson distribution and obtains virtual noise-free images by simulation, which partly solves the problem that noise-free images cannot be acquired; however, because a domain gap exists between simulated and real images and the noise in projection images does not strictly follow a Poisson distribution, its processing results are not ideal.
Disclosure of Invention
In view of the above defects of existing methods, the invention provides a cone beam CT artifact correction method based on unsupervised deep learning, which effectively removes projection-domain image noise and reduces reconstruction-domain image artifacts without requiring any noise-free images.
The technical scheme of the invention is as follows:
A cone beam CT artifact correction method based on unsupervised deep learning is characterized by comprising the following specific steps:
Step 1, constructing an image data set:
step 1.1: acquiring multiple frames of images
A metal part is placed on the cone beam CT turntable, and n projection frames f1, f2, ..., fk, ..., fn (n a positive integer, n ≥ 3) are collected at the same angle, where fk is the k-th projection frame.
Step 1.2: randomly decimating a portion of a frame
Randomly scramble the order of the n acquired projection frames, and extract the first a frames and the following b frames of the scrambled sequence, where a and b satisfy the following condition:
Step 1.3: integral superposition
Integrally superpose the extracted frames to obtain two images Ia and Ib. Ia and Ib contain exactly the same object spatial information but differ in noise level. Ia and Ib serve as the input and the output (label) of the network, respectively.
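Steps 1.2 and 1.3 can be sketched as follows. This is a minimal NumPy sketch under two assumptions of ours: "integral superposition" is taken to be frame averaging, and the constraint a + b ≤ n (so the two images come from disjoint frames) stands in for the patent's condition formula, which survives only as an image in the original.

```python
import numpy as np

def make_noisy_pair(frames, a, b, rng=None):
    """Build one training pair (Ia, Ib) from n same-angle projection frames.

    frames : (n, H, W) array of projection images of one part at one angle.
    a, b   : number of shuffled frames averaged into each image; we assume
             a + b <= n so the two averages use disjoint frames.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = frames.shape[0]
    assert a >= 1 and b >= 1 and a + b <= n
    order = rng.permutation(n)                  # randomly scramble frame order
    i_a = frames[order[:a]].mean(axis=0)        # average of the first a frames
    i_b = frames[order[a:a + b]].mean(axis=0)   # average of the next b frames
    return i_a, i_b   # same object signal, different residual noise levels
```

Because both images average the same underlying signal, their expected content is identical while their noise variance differs with a and b, which is exactly what the unsupervised pairing relies on.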
Step 1.4: forming image data sets
Changing parts and angles, collect further images according to steps 1.1–1.3 to obtain more samples, forming a data set N containing m image pairs. Randomly divide N into a training set, a validation set, and a test set in proportions of 70%, 10%, and 20%, respectively.
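The 70/10/20 split in step 1.4 can be sketched directly (the function name and fixed seed are illustrative; the patent does not specify the splitting mechanics beyond the proportions):

```python
import random

def split_dataset(pairs, seed=0):
    """Randomly split m image pairs into 70% train / 10% validation / 20% test."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)   # random assignment of pairs to sets
    m = len(pairs)
    n_train = round(0.7 * m)
    n_val = round(0.1 * m)
    return (pairs[:n_train],
            pairs[n_train:n_train + n_val],
            pairs[n_train + n_val:])
```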
Step 2, constructing a light-weight full convolution neural network
To reduce the number of network parameters and increase processing speed, a fully convolutional neural network formed by stacking 7 depthwise-separable convolutional layers is used as the training network F. For the first 6 convolutional layers, the number of channels is 32, the kernel size is 3×3, the stride is 1, and the activation function is ReLU. The output features of the third and fourth layers are concatenated along the channel dimension and used as the input of the fifth layer; the outputs of the second and fifth layers are concatenated as the input of the sixth layer; and the outputs of the first and sixth layers are concatenated as the input of the seventh layer. For the seventh convolutional layer, the number of channels is 1, the kernel size is 3×3, the stride is 1, and the activation function is Tanh. Convolutional network models with other structures, such as U-Net or FCN, may also be used as the training network. The loss function is the sum of the L1 loss and the L2 loss.
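The seven-layer network of step 2 can be sketched as below. PyTorch is our choice of framework (the patent names none), the class names are illustrative, and the channel counts after each concatenation (64 = 32 + 32) follow our reading of the text; the seventh layer is shown as a plain 3×3 separable convolution per the stated kernel size.

```python
import torch
import torch.nn as nn

class SepConv(nn.Module):
    """Depthwise-separable 3x3 convolution: depthwise 3x3 then pointwise 1x1."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=1, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class DenoiseNet(nn.Module):
    """7 stacked depthwise-separable conv layers with the channel-concat
    skips described in step 2: (3,4)->5, (2,5)->6, (1,6)->7."""
    def __init__(self):
        super().__init__()
        self.c1 = SepConv(1, 32)
        self.c2 = SepConv(32, 32)
        self.c3 = SepConv(32, 32)
        self.c4 = SepConv(32, 32)
        self.c5 = SepConv(64, 32)   # input: concat(out3, out4)
        self.c6 = SepConv(64, 32)   # input: concat(out2, out5)
        self.c7 = SepConv(64, 1)    # input: concat(out1, out6)
        self.act = nn.ReLU()
    def forward(self, x):
        o1 = self.act(self.c1(x))
        o2 = self.act(self.c2(o1))
        o3 = self.act(self.c3(o2))
        o4 = self.act(self.c4(o3))
        o5 = self.act(self.c5(torch.cat([o3, o4], dim=1)))
        o6 = self.act(self.c6(torch.cat([o2, o5], dim=1)))
        return torch.tanh(self.c7(torch.cat([o1, o6], dim=1)))
```

With 3×3/stride-1 padding everywhere, the output has the same spatial size as the input projection image, as a fully convolutional denoiser requires.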
Step 3, network training
After the model is built, it is trained with the training set from step 1. After each fixed-size batch of images is input, the loss value is computed by forward propagation, and the parameters of each convolutional layer are optimized by the backpropagation algorithm. This is repeated until the validation-set loss no longer decreases; the model has then converged, and the convolutional-layer parameter values are fixed.
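The training step can be sketched as a single epoch (the function name, batch handling, and choice of optimizer are illustrative assumptions; the patent specifies only the L1 + L2 loss and backpropagation):

```python
import torch
import torch.nn as nn

def train_epoch(model, loader, optimizer):
    """One training pass. loss = L1 + L2 (MSE), as stated in step 2,
    optimized by backpropagation. `loader` yields (input, target) batches."""
    l1, l2 = nn.L1Loss(), nn.MSELoss()
    model.train()
    total = 0.0
    for x, y in loader:
        optimizer.zero_grad()
        pred = model(x)
        loss = l1(pred, y) + l2(pred, y)   # sum of L1 and L2 losses
        loss.backward()                    # backpropagation
        optimizer.step()
        total += float(loss)
    return total
```

Early stopping as described in the text then amounts to calling this per epoch, evaluating the same loss on the validation set, and stopping when that value stops decreasing.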
Step 4. Network application
After training, any projection image input to the network model is output with its noise removed. The denoised projection images are then reconstructed with the FDK reconstruction algorithm to obtain tomographic images with significantly reduced artifacts.
Compared with the prior art, the invention has the advantages that:
A random multi-frame superposition strategy is proposed that constructs image data meeting neural-network training requirements, overcoming the existing deep learning methods' need for large numbers of noise-free images. The method is simple to implement, denoises well without over-smoothing the images, and effectively reduces the artifacts caused by noise in cone beam CT projection images.
Drawings
FIG. 1 is a schematic diagram of the method of the present invention.
Detailed Description
Examples
As shown in fig. 1, the cone beam CT artifact correction method based on the unsupervised deep learning specifically includes the following steps:
Step 1, constructing an image data set:
step 1.1: acquiring multiple frames of images
The metal part is placed on the cone beam CT turntable, and n projection frames f1, f2, ..., fk, ..., fn (n ≥ 3) are collected at the same angle, where fk is the k-th projection frame.
Step 1.2: randomly decimating a portion of a frame
Randomly scramble the order of the n acquired projection frames, and extract the first a frames and the following b frames of the scrambled sequence, where a and b satisfy the following condition:
Step 1.3: integral superposition
Integrally superpose the extracted frames to obtain two images Ia and Ib. Ia and Ib contain exactly the same object spatial information but differ in noise level. Ia and Ib serve as the input and the output (label) of the network, respectively.
Step 1.4: forming image data sets
Changing parts and angles, collect further images according to steps 1.1–1.3 to obtain more samples, forming a data set N containing m image pairs. Randomly divide N into a training set, a validation set, and a test set in proportions of 70%, 10%, and 20%, respectively.
Step 2, constructing a light-weight full convolution neural network
To reduce the number of network parameters and increase processing speed, a fully convolutional neural network formed by stacking 7 depthwise-separable convolutional layers is used as the training network F. For the first 6 convolutional layers, the number of channels is 32, the kernel size is 3×3, the stride is 1, and the activation function is ReLU. The output features of the third and fourth layers are concatenated along the channel dimension and used as the input of the fifth layer; the outputs of the second and fifth layers are concatenated as the input of the sixth layer; and the outputs of the first and sixth layers are concatenated as the input of the seventh layer. For the seventh convolutional layer, the number of channels is 1, the kernel size is 3×3, the stride is 1, and the activation function is Tanh. Convolutional network models with other structures, such as U-Net or FCN, may also be used as the training network. The loss function is the sum of the L1 loss and the L2 loss.
Step 3, network training
After the model is built, it is trained with the training set from step 1. After each fixed-size batch of images is input, the loss value is computed by forward propagation, and the parameters of each convolutional layer are optimized by the backpropagation algorithm. This is repeated until the validation-set loss no longer decreases; the model has then converged, and the convolutional-layer parameter values are fixed.
Step 4. Network application
After training, any projection image input to the network model is output with its noise removed. The denoised projection images are then reconstructed with the FDK reconstruction algorithm to obtain tomographic images with significantly reduced artifacts.
The method constructs image data meeting neural-network training requirements, overcomes the existing deep learning methods' need for large numbers of noise-free images, is simple to implement, denoises well without over-smoothing the images, and effectively reduces the artifacts caused by noise in cone beam CT projection images.
The above embodiments are provided to illustrate the technical concept and features of the present invention and are intended to enable those skilled in the art to understand the content of the present invention and implement the same, and are not intended to limit the scope of the present invention. All equivalent changes or modifications made in accordance with the spirit of the present invention should be construed to be included in the scope of the present invention.
Furthermore, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the present invention.
Claims (3)
1. A cone beam CT artifact correction method based on unsupervised deep learning is characterized by comprising the following specific steps:
Step 1, constructing an image data set:
step 1.1: acquiring multiple frames of images
Placing a metal part on a cone beam CT turntable, and collecting n projection frames f1, f2, ..., fk, ..., fn at the same angle, wherein fk is the k-th projection frame;
Step 1.2: randomly decimating a portion of a frame
randomly scrambling the order of the acquired n projection frames, and extracting the first a frames and the following b frames of the scrambled sequence, wherein a and b satisfy the following condition:
Step 1.3: integral superposition
integrally superposing the extracted frames to obtain images Ia and Ib; Ia and Ib contain completely consistent object space information but different noise levels; Ia and Ib are used as the input and the output of the network, respectively;
Step 1.4: forming image data sets
changing parts and angles, and collecting images according to steps 1.1–1.3 to obtain more image samples, forming a data set N containing m image pairs; randomly dividing the data set N into a training set, a verification set, and a test set;
step 2, constructing a light-weight full convolution neural network
using a fully convolutional neural network formed by stacking 7 depthwise-separable convolutional layers as the training network F; for the first 6 convolutional layers, the number of channels is 32, the kernel size is 3×3, the stride is 1, and the activation function is ReLU; the output features of the third and fourth layers are concatenated along the channel dimension and used as the input of the fifth layer; the outputs of the second and fifth layers are concatenated as the input of the sixth layer; the outputs of the first and sixth layers are concatenated as the input of the seventh layer; for the seventh convolutional layer, the number of channels is 1, the kernel size is 3×3, the stride is 1, and the activation function is Tanh; the loss function is the sum of the L1 loss and the L2 loss;
Step 3, network training
After the model is built, training is carried out by using the training data set in the step 1, after a fixed number of images are input each time, a loss function value is obtained through forward propagation, and parameters in each convolution layer of the model are optimized by using a backward propagation algorithm; repeating the steps until the loss function value of the verification set is not reduced, converging the model, and fixing the parameter value in the convolution layer;
Step 4. Network application
After training, inputting any projection image into a network model, wherein the output of the network is the projection image after noise is removed; and reconstructing the plurality of projection images by using an FDK reconstruction algorithm to obtain a tomographic image with obviously reduced artifacts.
2. The method for cone beam CT artifact correction based on unsupervised deep learning according to claim 1, wherein: in the step 1.1, n is a positive integer greater than or equal to 3.
3. The method for cone beam CT artifact correction based on unsupervised deep learning according to claim 1, wherein: in step 1.4, the obtained image dataset N is randomly divided into a training set, a validation set and a test set, the proportions of which are 70%,10% and 20%, respectively.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210521271.6A | 2022-05-13 | 2022-05-13 | Cone beam CT artifact correction method based on unsupervised deep learning |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115049753A | 2022-09-13 |
| CN115049753B | 2024-05-10 |
Family
ID=83157077
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210521271.6A | Cone beam CT artifact correction method based on unsupervised deep learning | 2022-05-13 | 2022-05-13 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN115049753B |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116012478B | 2022-12-27 | 2023-08-18 | Harbin Institute of Technology | CT metal artifact removal method based on a convergent diffusion model |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109559359A | 2018-09-27 | 2019-04-02 | Southeast University | Artifact removal method for images reconstructed from sparse-angle data, realized by deep learning |
| CN111882624A | 2020-06-19 | 2020-11-03 | PLA Strategic Support Force Information Engineering University | Nano-CT image motion artifact correction method and device based on multiple acquisition sequences |
| CN111899188A | 2020-07-08 | 2020-11-06 | Northwestern Polytechnical University | Neural network learning cone beam CT noise estimation and suppression method |
| CN112348936A | 2020-11-30 | 2021-02-09 | Huazhong University of Science and Technology | Low-dose cone-beam CT image reconstruction method based on deep learning |
| KR20220021368A | 2020-08-13 | 2022-02-22 | KAIST | Tomography image processing method using a neural network based on unsupervised learning to remove missing-cone artifacts, and apparatus therefor |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11100684B2 | 2019-07-11 | 2021-08-24 | Canon Medical Systems Corporation | Apparatus and method for artifact detection and correction using deep learning |
Also Published As
| Publication number | Publication date |
|---|---|
| CN115049753A | 2022-09-13 |
Legal Events
| Date | Code | Title |
|---|---|---|
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |