CN115049753B - Cone beam CT artifact correction method based on unsupervised deep learning - Google Patents


Info

Publication number
CN115049753B
CN115049753B (application CN202210521271.6A)
Authority
CN
China
Prior art keywords
network
layer
image
images
convolution
Prior art date
Legal status
Active
Application number
CN202210521271.6A
Other languages
Chinese (zh)
Other versions
CN115049753A (en)
Inventor
李兴捷
于涵
李新越
侯春雨
Current Assignee
Shenyang Research Institute of Foundry Co Ltd
Original Assignee
Shenyang Research Institute of Foundry Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenyang Research Institute of Foundry Co Ltd filed Critical Shenyang Research Institute of Foundry Co Ltd
Priority to CN202210521271.6A
Publication of CN115049753A
Application granted
Publication of CN115049753B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

A cone beam CT artifact correction method based on unsupervised deep learning, belonging to the field of industrial nondestructive testing. First, multiple frames of projection images acquired at the same position are randomly integrated and superposed to construct training image pairs with different noise levels. A lightweight fully convolutional neural network model then learns the mapping between images at different noise levels, so that projection-domain image noise is removed and reconstruction-domain image artifacts are reduced under unsupervised conditions.

Description

Cone beam CT artifact correction method based on unsupervised deep learning
Technical Field
The invention relates to the field of industrial nondestructive testing, in particular to a cone beam CT artifact correction method based on unsupervised deep learning.
Background
Cone beam CT is an imaging detection technique that uses a cone beam source and an area-array (flat-panel) detector to acquire a series of projection images of a measured object at different angles and reconstructs a continuous slice sequence with a reconstruction algorithm. Compared with conventional CT systems, the area-array detector converts the received X-rays into image signals so that multi-slice axial data of the measured object can be obtained in a single scan; this gives cone beam CT advantages such as high scanning speed and high ray utilization, making it an ideal nondestructive testing means for quantitatively characterizing the internal structure size, position and density of an object. However, in actual cone beam CT detection, the random distribution of X-ray photons absorbed by the detector, photon escape, the coupling efficiency of the conversion screen and other factors cause the detector signal to deviate from the true signal to some extent, so the projection images are inevitably contaminated with noise, and the tomographic images reconstructed from them contain artifacts. These artifacts noticeably increase the gray-level non-uniformity of reconstructed images, reduce contrast, interfere with subsequent image edge detection and segmentation, and degrade the dimensional measurement precision and defect identification accuracy of cone beam CT systems in industrial nondestructive testing applications.
At present, industry widely adopts an integral (frame-averaging) noise-reduction strategy on projection-domain images. The method is simple and effective, but acquiring multiple frames increases scanning time and noticeably reduces detection efficiency. Filtering methods such as Gaussian filtering, bilateral filtering and non-local means filtering inevitably remove part of the useful signal along with the noise, so the processed images are over-smoothed and detail loss is obvious. In recent years, deep-learning-based image denoising research has developed rapidly; although deep learning can effectively avoid detail loss, it requires a large number of noise-free images as label images for network training, which limits its industrial application. The method of patent CN111899188A assumes that CT projection noise follows a Poisson distribution and obtains virtual noise-free images by simulation, which alleviates the unavailability of noise-free images to some extent; however, because there is a domain gap between simulated and real images and the noise in projection images does not fully follow a Poisson distribution, the processing effect is not ideal.
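The integral noise-reduction strategy mentioned above works because averaging n frames of independent zero-mean noise lowers the noise standard deviation by roughly a factor of sqrt(n), at the cost of n-fold acquisition time. A minimal NumPy sketch with synthetic data (all values here are illustrative, not taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic noise-free projection and n noisy acquisitions of it.
clean = np.full((64, 64), 100.0)
n = 16
frames = clean + rng.normal(0.0, 5.0, size=(n, 64, 64))  # additive noise, std = 5

single_std = (frames[0] - clean).std()              # noise std of one frame (~5)
averaged_std = (frames.mean(axis=0) - clean).std()  # noise std after integration

# Averaging n independent frames reduces the noise std by about sqrt(n).
ratio = single_std / averaged_std
print(round(ratio, 1))  # close to sqrt(16) = 4
```

This is exactly the trade-off the text describes: a 4x noise reduction here requires 16 frames, i.e. 16x the scanning time.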
Disclosure of Invention
In view of the defects of the above methods, the invention provides a cone beam CT artifact correction method based on unsupervised deep learning, which effectively removes projection-domain image noise and reduces reconstruction-domain image artifacts without requiring any noise-free images.
The technical scheme of the invention is as follows:
A cone beam CT artifact correction method based on unsupervised deep learning is characterized by comprising the following specific steps:
Step 1, constructing an image data set:
Step 1.1: Acquiring multiple frames of images
The metal part is placed on a cone beam CT turntable, and n (a positive integer, n ≥ 3) frames of projection images f1, f2, ..., fk, ..., fn are collected at the same angle, wherein fk is the kth frame projection image.
Step 1.2: randomly decimating a portion of a frame
The acquired n frames of projection images are randomly shuffled, and the first a frames and the next b frames of the shuffled image sequence are extracted, wherein a and b meet the following conditions:
Step 1.3: integral superposition
The extracted image frames are integrated and superposed to obtain images Ia and Ib. Ia and Ib contain completely consistent object space information but different noise levels. Ia and Ib serve as the input and output of the network, respectively.
Step 1.4: forming image data sets
Parts and angles are changed, and images are collected according to steps 1.1-1.3 to obtain more image samples, forming a data set N containing m pairs of images. The obtained image data set N is randomly divided into a training set, a validation set and a test set in proportions of 70%, 10% and 20%, respectively.
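Steps 1.1 to 1.3 can be sketched as follows in NumPy. This is a hypothetical illustration: the helper name `make_training_pair` is invented, and the split rule a = n // 2, b = n - a is an assumption, since the patent's exact constraint on a and b is not reproduced in this text.

```python
import numpy as np

def make_training_pair(frames, rng):
    """Build one unsupervised training pair from n frames taken at one angle.

    Shuffle the frame order, take a leading block of a frames and the next b
    frames, and average each block (integral superposition).  The two averages
    Ia and Ib show the same object at different noise levels.
    """
    n = len(frames)
    order = rng.permutation(n)
    a = n // 2          # assumed split; the patent's condition on a, b is not shown
    b = n - a
    Ia = frames[order[:a]].mean(axis=0)
    Ib = frames[order[a:a + b]].mean(axis=0)
    return Ia, Ib

rng = np.random.default_rng(1)
# 8 synthetic frames of one projection angle (stand-ins for real acquisitions).
frames = 100.0 + rng.normal(0.0, 5.0, size=(8, 32, 32))
Ia, Ib = make_training_pair(frames, rng)
print(Ia.shape, Ib.shape)  # (32, 32) (32, 32)
```

Because the two blocks are disjoint after shuffling, the noise in Ia and Ib is independent while the object content is identical, which is what lets the pair serve as network input and target without a clean label.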
Step 2, constructing a light-weight full convolution neural network
In order to reduce the number of network parameters and increase the processing speed, a fully convolutional neural network formed by stacking 7 depthwise separable convolution layers is used as the training network F. For the first 6 convolution layers, the number of channels is 32, the convolution kernel size is 3×3, the stride is 1, and the activation function is ReLU. The output features of the third and fourth layers are concatenated along the channel dimension and used as the input of the fifth layer. The output features of the second and fifth layers are concatenated along the channel dimension and used as the input of the sixth layer. The output features of the first and sixth layers are concatenated along the channel dimension and used as the input of the seventh layer. For the seventh convolution layer, the number of channels is 1, the kernel size is 3×3, the stride is 1, and the activation function is Tanh. Convolutional network models with other structures, such as U-Net or FCN, may also be used as the training network of the invention. The loss function of the invention is the sum of the L1 loss and the L2 loss.
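A sketch of the 7-layer network in PyTorch (the framework choice is ours; the patent does not name one). The channel counts, 3×3 kernels, stride 1, ReLU/Tanh activations and the three channel-concatenation skips follow the description above; the class names `SepConv` and `DenoiseNet` and the exact ordering of convolution and activation are assumptions.

```python
import torch
import torch.nn as nn

class SepConv(nn.Module):
    """Depthwise-separable 3x3 convolution: depthwise 3x3 then pointwise 1x1."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, stride=1, padding=1, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class DenoiseNet(nn.Module):
    """7-layer fully convolutional network with concatenation skips as described."""
    def __init__(self):
        super().__init__()
        self.l1 = SepConv(1, 32)
        self.l2 = SepConv(32, 32)
        self.l3 = SepConv(32, 32)
        self.l4 = SepConv(32, 32)
        self.l5 = SepConv(64, 32)   # input: cat(layer-3 out, layer-4 out)
        self.l6 = SepConv(64, 32)   # input: cat(layer-2 out, layer-5 out)
        self.l7 = SepConv(64, 1)    # input: cat(layer-1 out, layer-6 out)
        self.relu = nn.ReLU()

    def forward(self, x):
        f1 = self.relu(self.l1(x))
        f2 = self.relu(self.l2(f1))
        f3 = self.relu(self.l3(f2))
        f4 = self.relu(self.l4(f3))
        f5 = self.relu(self.l5(torch.cat([f3, f4], dim=1)))
        f6 = self.relu(self.l6(torch.cat([f2, f5], dim=1)))
        return torch.tanh(self.l7(torch.cat([f1, f6], dim=1)))

net = DenoiseNet()
out = net(torch.zeros(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 1, 64, 64])
```

With padding 1 and stride 1 every layer preserves spatial size, so the network maps a projection image to a same-sized denoised image, and the Tanh output stays in [-1, 1].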
Step 3, network training
After the model is built, it is trained with the training data set from step 1. Each time a fixed number of images (a batch) is input, a loss function value is obtained through forward propagation, and the parameters in each convolution layer of the model are optimized with the backpropagation algorithm. This is repeated until the loss function value on the validation set no longer decreases; the model has then converged, and the parameter values in the convolution layers are fixed.
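The loss minimized in this step, the sum of the L1 and L2 losses named in step 2, can be written out directly. A small NumPy sketch with a hand-checkable example; reducing each term by the mean over pixels is an assumption, since the text does not specify sum versus mean.

```python
import numpy as np

def l1_l2_loss(pred, target):
    """Training loss: L1 (mean absolute error) plus L2 (mean squared error)."""
    diff = pred - target
    return np.abs(diff).mean() + (diff ** 2).mean()

pred = np.array([[0.0, 1.0], [2.0, 3.0]])
target = np.zeros((2, 2))
loss = l1_l2_loss(pred, target)
print(loss)  # L1 = 6/4 = 1.5, L2 = 14/4 = 3.5, sum = 5.0
```

Combining the two terms is a common compromise: the L2 part gives smooth gradients near the optimum, while the L1 part is less sensitive to outlier pixels.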
Step 4. Network application
After training, any projection image input into the network model yields as output the projection image with noise removed. The denoised projection images are then reconstructed with the FDK reconstruction algorithm to obtain tomographic images with obviously reduced artifacts.
Compared with the prior art, the invention has the advantages that:
A random multi-frame superposition strategy is proposed that can construct image data meeting neural-network training requirements, overcoming the existing deep-learning methods' dependence on a large number of noise-free images. The method is simple to implement, achieves a good denoising effect without over-smoothing the images, and effectively reduces the artifacts caused by noise in cone beam CT projection images.
Drawings
FIG. 1 is a schematic diagram of the method of the present invention.
Detailed Description
Examples
As shown in fig. 1, the cone beam CT artifact correction method based on the unsupervised deep learning specifically includes the following steps:
Step 1, constructing an image data set:
Step 1.1: Acquiring multiple frames of images
The metal part is placed on a cone beam CT turntable, and n (n ≥ 3) frames of projection images f1, f2, ..., fk, ..., fn are collected at the same angle, wherein fk is the kth frame projection image.
Step 1.2: randomly decimating a portion of a frame
The acquired n frames of projection images are randomly shuffled, and the first a frames and the next b frames of the shuffled image sequence are extracted, wherein a and b meet the following conditions:
Step 1.3: integral superposition
The extracted image frames are integrated and superposed to obtain images Ia and Ib. Ia and Ib contain completely consistent object space information but different noise levels. Ia and Ib serve as the input and output of the network, respectively.
Step 1.4: forming image data sets
Parts and angles are changed, and images are collected according to steps 1.1-1.3 to obtain more image samples, forming a data set N containing m pairs of images. The obtained image data set N is randomly divided into a training set, a validation set and a test set in proportions of 70%, 10% and 20%, respectively.
Step 2, constructing a light-weight full convolution neural network
In order to reduce the number of network parameters and increase the processing speed, a fully convolutional neural network formed by stacking 7 depthwise separable convolution layers is used as the training network F. For the first 6 convolution layers, the number of channels is 32, the convolution kernel size is 3×3, the stride is 1, and the activation function is ReLU. The output features of the third and fourth layers are concatenated along the channel dimension and used as the input of the fifth layer. The output features of the second and fifth layers are concatenated along the channel dimension and used as the input of the sixth layer. The output features of the first and sixth layers are concatenated along the channel dimension and used as the input of the seventh layer. For the seventh convolution layer, the number of channels is 1, the kernel size is 3×3, the stride is 1, and the activation function is Tanh. Convolutional network models with other structures, such as U-Net or FCN, may also be used as the training network of the invention. The loss function of the invention is the sum of the L1 loss and the L2 loss.
Step 3, network training
After the model is built, it is trained with the training data set from step 1. Each time a fixed number of images (a batch) is input, a loss function value is obtained through forward propagation, and the parameters in each convolution layer of the model are optimized with the backpropagation algorithm. This is repeated until the loss function value on the validation set no longer decreases; the model has then converged, and the parameter values in the convolution layers are fixed.
Step 4. Network application
After training, any projection image input into the network model yields as output the projection image with noise removed. The denoised projection images are then reconstructed with the FDK reconstruction algorithm to obtain tomographic images with obviously reduced artifacts.
The method can construct image data meeting neural-network training requirements, overcoming the existing deep-learning methods' dependence on a large number of noise-free images. It is simple to implement, achieves a good denoising effect without over-smoothing the images, and effectively reduces the artifacts caused by noise in cone beam CT projection images.
The above embodiments are provided to illustrate the technical concept and features of the present invention and are intended to enable those skilled in the art to understand the content of the present invention and implement the same, and are not intended to limit the scope of the present invention. All equivalent changes or modifications made in accordance with the spirit of the present invention should be construed to be included in the scope of the present invention.
Furthermore, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the present invention.

Claims (3)

1. A cone beam CT artifact correction method based on unsupervised deep learning is characterized by comprising the following specific steps:
Step 1, constructing an image data set:
step 1.1: acquiring multiple frames of images
Placing a metal part on a cone beam CT turntable, and collecting n frames of projection images f1, f2, ..., fk, ..., fn under the same angle, wherein fk is the kth frame projection image;
Step 1.2: randomly decimating a portion of a frame
Randomly scrambling the acquired n frames of projection images, and extracting the first a frames and the next b frames of the scrambled image sequence, wherein a and b meet the following conditions:
Step 1.3: integral superposition
Integrating and superposing the extracted image frames to obtain images Ia, Ib; Ia and Ib contain completely consistent object space information but different noise levels; Ia and Ib are respectively used as the input and output of the network;
Step 1.4: forming image data sets
Changing parts and angles, collecting images according to steps 1.1-1.3 to obtain more image samples to form a data set N containing m pairs of images; randomly dividing the obtained image data set N into a training set, a verification set and a test set;
step 2, constructing a light-weight full convolution neural network
Using a fully convolutional neural network formed by stacking 7 depthwise separable convolution layers as a training network F; for the first 6 convolution layers, the number of channels of the convolution layers is 32, the convolution kernel size is 3×3, the stride is 1, and the activation function of the convolution layers is ReLU; the output characteristics of the third layer network and the output characteristics of the fourth layer network are spliced on the channel and then used as the input of the fifth layer; the output characteristics of the second layer network and the output characteristics of the fifth layer network are spliced on the channel and then used as the input of the sixth layer; the output characteristics of the first layer network and the output characteristics of the sixth layer network are spliced on the channel and then used as the input of the seventh layer; for the seventh convolution layer, the number of channels of the convolution layer is 1, the convolution kernel size is 3×3, the stride is 1, and the activation function of the convolution layer is Tanh; the loss function is the sum of the L1 loss and the L2 loss;
Step 3, network training
After the model is built, training is carried out by using the training data set in the step 1, after a fixed number of images are input each time, a loss function value is obtained through forward propagation, and parameters in each convolution layer of the model are optimized by using a backward propagation algorithm; repeating the steps until the loss function value of the verification set is not reduced, converging the model, and fixing the parameter value in the convolution layer;
Step 4. Network application
After training, inputting any projection image into a network model, wherein the output of the network is the projection image after noise is removed; and reconstructing the plurality of projection images by using an FDK reconstruction algorithm to obtain a tomographic image with obviously reduced artifacts.
2. The method for cone beam CT artifact correction based on unsupervised deep learning according to claim 1, wherein: in the step 1.1, n is a positive integer greater than or equal to 3.
3. The method for cone beam CT artifact correction based on unsupervised deep learning according to claim 1, wherein: in step 1.4, the obtained image dataset N is randomly divided into a training set, a validation set and a test set, the proportions of which are 70%,10% and 20%, respectively.
CN202210521271.6A 2022-05-13 2022-05-13 Cone beam CT artifact correction method based on unsupervised deep learning Active CN115049753B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210521271.6A CN115049753B (en) 2022-05-13 2022-05-13 Cone beam CT artifact correction method based on unsupervised deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210521271.6A CN115049753B (en) 2022-05-13 2022-05-13 Cone beam CT artifact correction method based on unsupervised deep learning

Publications (2)

Publication Number Publication Date
CN115049753A (en) 2022-09-13
CN115049753B (en) 2024-05-10

Family

ID=83157077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210521271.6A Active CN115049753B (en) 2022-05-13 2022-05-13 Cone beam CT artifact correction method based on unsupervised deep learning

Country Status (1)

Country Link
CN (1) CN115049753B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116012478B (en) * 2022-12-27 2023-08-18 哈尔滨工业大学 CT metal artifact removal method based on convergence type diffusion model

Citations (5)

Publication number Priority date Publication date Assignee Title
CN109559359A (en) * 2018-09-27 2019-04-02 东南大学 Artifact minimizing technology based on the sparse angular data reconstruction image that deep learning is realized
CN111882624A (en) * 2020-06-19 2020-11-03 中国人民解放军战略支援部队信息工程大学 Nano CT image motion artifact correction method and device based on multiple acquisition sequences
CN111899188A (en) * 2020-07-08 2020-11-06 西北工业大学 Neural network learning cone beam CT noise estimation and suppression method
CN112348936A (en) * 2020-11-30 2021-02-09 华中科技大学 Low-dose cone-beam CT image reconstruction method based on deep learning
KR20220021368A (en) * 2020-08-13 2022-02-22 한국과학기술원 Tomography image processing method using neural network based on unsupervised learning to remove missing cone artifacts and apparatus therefor

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US11100684B2 (en) * 2019-07-11 2021-08-24 Canon Medical Systems Corporation Apparatus and method for artifact detection and correction using deep learning

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN109559359A (en) * 2018-09-27 2019-04-02 东南大学 Artifact minimizing technology based on the sparse angular data reconstruction image that deep learning is realized
CN111882624A (en) * 2020-06-19 2020-11-03 中国人民解放军战略支援部队信息工程大学 Nano CT image motion artifact correction method and device based on multiple acquisition sequences
CN111899188A (en) * 2020-07-08 2020-11-06 西北工业大学 Neural network learning cone beam CT noise estimation and suppression method
KR20220021368A (en) * 2020-08-13 2022-02-22 한국과학기술원 Tomography image processing method using neural network based on unsupervised learning to remove missing cone artifacts and apparatus therefor
CN112348936A (en) * 2020-11-30 2021-02-09 华中科技大学 Low-dose cone-beam CT image reconstruction method based on deep learning

Also Published As

Publication number Publication date
CN115049753A (en) 2022-09-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant