CN114897785B - Single-view CT reconstruction method combining global and local in defect detection - Google Patents
- Publication number
- CN114897785B (application CN202210388137.3A)
- Authority
- CN
- China
- Prior art keywords
- network
- image
- feature
- local
- global
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/005—Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10116—X-ray image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention belongs to the field of computed tomography (CT) image reconstruction and relates to a single-view CT reconstruction method combining global and local information for defect detection: using deep learning, the three-dimensional global appearance of a workpiece and its internal local defect structures are reconstructed simultaneously from a single X-ray image. The CT reconstruction network designed by the invention can, on the one hand, effectively accelerate CT imaging and improve the efficiency of defect detection for large numbers of workpieces in industrial nondestructive testing; on the other hand, it provides the 3D external structure and internal defect information of the workpiece simultaneously, thereby improving the accuracy of defect detection and analysis.
Description
Technical Field
The invention belongs to the field of computed tomography (CT) reconstruction and relates to a single-view CT reconstruction method combining global and local reconstruction for defect detection: using deep learning, the three-dimensional global appearance and the internal local structure of a target are reconstructed simultaneously from a single X-ray image of a workpiece.
Background
In industrial nondestructive testing, rapid X-ray defect detection has become an integral part of product quality control and process optimization in many industries, such as the automotive, aviation, and foundry industries. However, inspecting small defects with two-dimensional X-ray digital radiography (DR imaging) can lead to missed and false detections, because DR imaging compresses and superimposes the three-dimensional structure of the target onto a two-dimensional image, so internal defects can be obscured by other structures. The loss of defect depth information in the 2D image likewise prevents defects from being accurately located and identified. Computed tomography (CT) is therefore the best way to inspect and analyze defects inside a workpiece: the 3D structural information obtained by CT reconstruction can be used to locate defects accurately and perform quantitative measurement and analysis. However, conventional CT scanning is inefficient because a large number of projection images must be acquired, so real-time CT imaging and detection cannot be achieved, and conventional CT inspection cannot meet the demands of high-volume detection.
To improve the efficiency of CT detection, some research has sought to reduce the number of acquired X-ray images while still reconstructing CT images. For example, Chinese patent CN112381904A, "A limited-angle CT image reconstruction method based on a DTw-SART-TV iterative process", acquires X-ray images over a limited angle for reconstruction, and Chinese patent CN103136773A, "A sparse-angle X-ray CT imaging method", proposes a sparse-angle CT reconstruction method. CN112381904A combines SART and IRS steps and iterates continuously through two stages, reconstructing CT images from X-ray projections acquired over a limited angle. CN103136773A acquires X-ray images by full-angle sparse sampling, performs an initial CT reconstruction, and then iteratively solves for the final CT image with an optimization model. Both types of CT reconstruction algorithm must be combined with a prior model and reconstruct images iteratively, so neither can reconstruct high-resolution CT images quickly. Moreover, in scenes where the scanning angle is very limited, the limited-angle method cannot iteratively reconstruct high-quality images because of severe data loss. One object of the present invention is to solve CT reconstruction from very few angles and thereby accelerate CT reconstruction.
In recent years, the rapid development of deep learning has opened new directions for traditional CT technology; limited-angle and sparse-angle CT reconstruction methods based on deep learning continue to appear. The reconstruction efficiency of learning-based models exceeds that of traditional CT reconstruction models, giving the approach broad prospects. Moreover, the strength of deep learning in extracting image information and building nonlinear models has led some researchers to combine it with CT image reconstruction so that CT reconstruction is achieved with the scanning range and the number of X-ray images both reduced to the minimum, i.e., a CT image is reconstructed from a single X-ray image (single-view CT). This greatly reduces the data that must be acquired and processed and accelerates CT imaging. However, single-view CT reconstruction is a very challenging, ill-posed problem. Current deep-learning single-view methods can reconstruct a global appearance close to the real CT result from a single X-ray image, but the quality and structural detail of the CT image still need improvement; in particular, local tiny defect structures inside a workpiece cannot be reconstructed accurately. CT reconstruction that focuses on internal detail structures is therefore another object of the present invention.
Disclosure of Invention
To realize rapid CT detection of workpiece defects, the invention designs a deep-learning network that combines global and local single-view CT reconstruction, reconstructing the three-dimensional (3D) global CT structure and the internal local micro-defects of a target from a single two-dimensional (2D) X-ray image of the workpiece, thereby improving the efficiency and accuracy of defect detection. In addition, the invention designs a new focusing function so that the network attends to reconstructing the micro-structures inside the workpiece. The invention also uses the X-ray gradient image as a network input, which promotes the generation of CT images and improves the quality of the reconstructed images.
In order to achieve the above object, the present invention has the following technical scheme:
the single view CT reconstruction method combining global and local in defect detection comprises the following steps:
Step 1: establishing and preprocessing a data set.
The dataset contains multiple paired X-ray images of workpieces and the corresponding CT volume images. For each pair of images, the mean μ and variance σ are first computed separately, and the data are then normalized as (X − μ)/σ, where X represents the X-ray image or CT volume image to be normalized. The normalized dataset is divided into a training set and a test set, used respectively for network training and testing.
The preprocessing mainly applies a two-dimensional gradient operation to the X-ray image to extract a gradient image containing target edge information, which is used as an input for network training.
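For concreteness, the following is a minimal preprocessing sketch (Python with NumPy/SciPy). The text specifies only "a two-dimensional gradient operation", so the Sobel operator used here is one plausible choice, not the patent's prescribed operator:

```python
import numpy as np
from scipy import ndimage

def normalize(x: np.ndarray) -> np.ndarray:
    """Standardize an X-ray or CT volume image as (X - mu) / sigma."""
    mu, sigma = x.mean(), x.std()
    return (x - mu) / sigma

def gradient_image(xray: np.ndarray) -> np.ndarray:
    """Two-dimensional gradient magnitude; extracts target edge information."""
    gx = ndimage.sobel(xray, axis=0)
    gy = ndimage.sobel(xray, axis=1)
    return np.hypot(gx, gy)

# Each training sample pairs the normalized X-ray, its gradient image,
# and the normalized ground-truth CT volume.
```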
Step 2: constructing a CT reconstruction network that reconstructs a CT volume image from a single X-ray image.
The CT reconstruction network comprises three modules: a global CT generation module, a local CT generation module and an image optimization module.
The global CT generation module focuses mainly on generating the large geometric structures and contours of the workpiece. The module comprises a feature encoding network, a feature decoding network, a feature conversion network and a feature fusion structure. The feature encoding network uses multiple convolution layers to extract features from the input single X-ray image and its gradient image; the extracted feature information is encoded into a hidden-layer space through repeated image downsampling and convolution operations. The feature decoding network performs convolutional upsampling on the encoded information in the hidden-layer space using multiple convolution modules and finally recovers the 3D CT volume image. The feature conversion network connects same-level features of the encoding and decoding networks, compensating for information lost during encoding; it comprises two parts, feature selection and feature conversion. The feature selection part assigns different weights to the encoded features so that higher-weighted features receive more attention. The feature conversion part transforms the weighted features so that they are closer to the features of a CT volume image. The feature fusion structure fuses the decoded features with the output features of the feature conversion network, improving the quality of the reconstructed CT image. The fusion comprises three steps: first, compute the mean and variance of the decoded features; second, compute the mean and variance of the output features of the feature conversion network; third, fuse the decoded features with the conversion output according to the two sets of statistics.
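As a structural illustration only, the forward pass of such a module might be organized as in the following PyTorch skeleton; every submodule (encoder stages, decoder stages, per-level converters, the fusion function, and the output head) is a hypothetical stand-in, since the text fixes the data flow rather than the layer internals:

```python
import torch
import torch.nn as nn

class GlobalCTModule(nn.Module):
    """Skeleton of the global CT generation module; all submodules are
    hypothetical stand-ins for the networks described in the text."""
    def __init__(self, enc_stages, dec_stages, converters, fuse, head):
        super().__init__()
        self.enc = nn.ModuleList(enc_stages)   # feature encoding network
        self.dec = nn.ModuleList(dec_stages)   # feature decoding network (upsampling)
        self.conv = nn.ModuleList(converters)  # feature conversion network per level
        self.fuse = fuse                       # feature fusion structure (see step (3))
        self.head = head                       # final layer emitting the CT volume

    def forward(self, xray, grad):
        h = torch.cat([xray, grad], dim=1)     # single X-ray image plus gradient image
        feats = []
        for stage in self.enc:                 # encode into the hidden-layer space
            h = stage(h)
            feats.append(h)
        skips = feats[-2::-1]                  # same-level features, deepest first
        for stage, f, c in zip(self.dec, skips, self.conv):
            h = self.fuse(c(f), stage(h))      # converted skip fused into decoded features
        return self.head(h)                    # recovered 3D CT volume image
```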
The local CT generation module focuses mainly on generating the tiny defects inside the workpiece. It adopts the same network structure as the global CT generation module. During feature encoding, its feature encoding network subtracts the encoded features at the corresponding positions of the global module's feature encoding network, so that the local CT generation network concentrates on learning the small-scale defect structures; this structural information finally passes through the feature decoding network to generate a CT volume image containing only the tiny defects.
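The feature-subtraction step can be sketched as follows; `local_encoder` is assumed to be an iterable of encoder stages and `global_feats` the per-level features saved from the global module's encoder (both names hypothetical):

```python
def local_encode(local_encoder, global_feats, x):
    """Subtract the global module's encoded features at each level so the
    local branch is left with residual, defect-related information."""
    feats = []
    h = x
    for level, stage in enumerate(local_encoder):
        h = stage(h)
        h = h - global_feats[level]  # remove the shared global structure
        feats.append(h)
    return feats
```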
The image optimization module optimizes the combined result of the global and local CT generation modules so that the final CT result is closer to the ground truth. It comprises an encoding network and a decoding network: the input 3D CT volume obtained by combining the outputs of the global and local modules is encoded, then decoded through the decoding network to generate the optimized CT volume image. The encoding network consists of several convolution layers and downsampling layers; the decoding network consists of several convolution layers and upsampling layers. Skip connections pass same-level encoded information to the decoding network, compensating for information lost during encoding and promoting the generation and optimization of the CT result.
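A minimal sketch of such a 3D encoder-decoder with skip connections is given below; the channel widths and depth are illustrative assumptions rather than the patent's values:

```python
import torch
import torch.nn as nn

class RefineNet3D(nn.Module):
    """Sketch of the image optimization module: 3D encoder-decoder with
    skip connections; input and output are CT volumes of the same size."""
    def __init__(self, ch=(16, 32, 64)):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(1, ch[0], 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv3d(ch[0], ch[1], 3, stride=2, padding=1), nn.ReLU())
        self.bott = nn.Sequential(nn.Conv3d(ch[1], ch[2], 3, stride=2, padding=1), nn.ReLU())
        self.up2 = nn.ConvTranspose3d(ch[2], ch[1], 2, stride=2)
        self.dec2 = nn.Sequential(nn.Conv3d(2 * ch[1], ch[1], 3, padding=1), nn.ReLU())
        self.up1 = nn.ConvTranspose3d(ch[1], ch[0], 2, stride=2)
        self.dec1 = nn.Conv3d(2 * ch[0], 1, 3, padding=1)

    def forward(self, v):  # v: combined global + local CT volume, N x 1 x D x H x W
        e1 = self.enc1(v)
        e2 = self.enc2(e1)
        b = self.bott(e2)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        return self.dec1(torch.cat([self.up1(d2), e1], dim=1))
```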
Step 3: training the CT reconstruction network of step 2.
Network training makes the network learn the mapping from a single 2D X-ray image to a 3D CT volume on the dataset established in step 1. Training proceeds in three steps. First, the network of the global CT generation module is trained; its inputs are a single-view 2D X-ray image and the corresponding gradient image, and its output is a 3D CT volume image containing the global structure. Second, the network of the local CT generation module is trained; its inputs are the same, and its output is a 3D CT volume image containing the local defects. To make this network pay more attention to defects, the invention designs a focusing function as the loss function for training it: during training, the focusing function partitions the whole image into blocks, assigns large weights to blocks containing defects and small weights to blocks without defects, thereby guiding the network to concentrate on generating the defect regions. Third, the image optimization network is trained to optimize the combined result of the global and local CT generation networks; its input is the combined 3D CT volume image result and its output is the optimized 3D CT volume image result.
The definition of the focusing function is as follows:

L_focus(y, ŷ) = (1/M) · Σ_{j=1..M} w_j · (1 − SSIM_patch(y_j, ŷ_j))   (1)

w_j = W(ȳ_j),   W(x) = tan(x^a · π/2)^b   (2)

In formula (1), y is the real CT image, ŷ is the predicted CT image, and M is the number of blocks into which the whole image is partitioned; for the j-th image block y_j in the real image y, the corresponding predicted image block is ŷ_j. Formula (2) is the image-block weight allocation function, in which ȳ_j denotes the mean value of the image block, and a and b are hyperparameters.
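A sketch of this focusing loss in PyTorch follows. The non-overlapping block grid (block size 11, giving M = 121 blocks on a 128×128 image as in the embodiment below), the external `ssim_patch` routine, and block means lying in [0, 1] are all assumptions:

```python
import torch

def block_weight(block_mean, a=0.1, b=1.0):
    """Formula (2): W(x) = tan(x**a * pi/2) ** b.
    Assumes block means lie in [0, 1]; the weight grows rapidly with the mean."""
    return torch.tan(block_mean ** a * torch.pi / 2) ** b

def focus_loss(y, y_hat, ssim_patch, block=11):
    """Formula (1): block-weighted SSIM loss. `ssim_patch(p, q)` is an assumed
    external SSIM routine for two blocks; block=11 yields an 11x11 grid of
    blocks (M = 121) on a 128x128 image, matching the embodiment."""
    terms = []
    for i in range(0, y.shape[-2] - block + 1, block):
        for j in range(0, y.shape[-1] - block + 1, block):
            yj = y[..., i:i + block, j:j + block]
            pj = y_hat[..., i:i + block, j:j + block]
            w = block_weight(yj.mean())
            terms.append(w * (1 - ssim_patch(yj, pj)))
    return torch.stack(terms).mean()   # (1/M) * sum over the M blocks
```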
Step 4: testing on the test data set using the CT reconstruction network trained in step 3. A CT volume image of the workpiece is reconstructed by inputting a single X-ray image into the network.
The beneficial effects of the invention are as follows:
The invention designs a single-view CT reconstruction network that combines global and local reconstruction for workpiece defect detection. With this network, the corresponding 3D external structure and internal micro-defect information can be obtained rapidly from a single X-ray image of a defective workpiece. On the one hand, the invention effectively accelerates CT imaging and improves the efficiency of defect detection for large numbers of workpieces in industrial nondestructive testing; on the other hand, it provides the 3D external structure and internal defect information of the workpiece simultaneously, improving the accuracy of defect detection and analysis.
Drawings
FIG. 1 is a flow chart of the method steps of the present invention.
FIG. 2 is a general block diagram of the single-view CT reconstruction network combining global and local according to the present invention.
FIG. 3 is a diagram of the network structure used in the CT reconstruction network of the present invention.
FIGS. 4(a) to 4(e) show CT reconstruction results of the method of the present invention, wherein FIG. 4(a) is an input X-ray image, FIG. 4(b) a reconstructed 3D CT volume, FIG. 4(c) the corresponding transparent CT volume, FIG. 4(d) the volume after 3D rotation and sectioning, and FIG. 4(e) the corresponding transparent volume.
Detailed Description
The following describes the embodiments of the present invention further with reference to the drawings and technical schemes.
As shown in fig. 1, a single view CT reconstruction method combining global and local comprises the following steps:
Step 1: establishing and preprocessing a data set. In this embodiment, 2D X-ray images and the corresponding 3D CT volume images of automobile pistons are obtained by CT scanning. To augment the data, the 3D CT volumes are rotated and translated, and the corresponding X-ray images are obtained by simulated projection, thereby creating a dataset of paired X-ray images and CT volume images. The dataset is divided into a training set and a test set at a ratio of 7:3 for network training and testing. To train the defect CT generation network, CT volumes containing only defects are required as network supervision, so a 3D defect dataset is established: defects are first extracted from the 3D CT volume images by manual segmentation and then augmented by rotation and translation, yielding a three-dimensional defect dataset. Each pair of data in the dataset is normalized; for example, for an X-ray image, the mean μ and variance σ are first computed and the image is then normalized as (X − μ)/σ, where X represents the X-ray image to be normalized. The 3D CT volume images are processed in the same way.
Step 2: constructing the single-view CT reconstruction network combining global and local. The overall structure of the CT reconstruction network is shown in FIG. 2 and comprises three modules: a global CT generation module, a local CT generation module and an image optimization module. The global and local CT generation modules share the same network structure, shown in FIG. 3, consisting of a feature encoding network, a feature decoding network, a feature conversion network and a feature fusion structure. The encoding network fully extracts the important features of the input single X-ray image, and the decoding network generates the CT volume from them. The feature conversion network compensates for information lost during feature extraction, and the converted features are fused into the output CT features through the feature fusion structure, improving the quality of the reconstructed CT image. The network architecture of the image optimization module is similar to that of the global CT generation module but includes neither a feature conversion network nor a feature fusion structure, because that network mainly performs a 3D-to-3D data conversion rather than generating a 3D CT from a 2D image. Generating a 3D CT from a 2D image comprises the following steps:
(1) The feature encoding network encodes the input 128×128 X-ray image and its gradient image through convolution layers and downsampling residual blocks, obtaining hidden-layer encoded features. During encoding, the number of feature channels gradually increases from 1 at the input to 840, while the feature resolution is downsampled from 128×128 to 8×8. During decoding, the hidden-layer encoded features are decoded by upsampling residual blocks and convolution layers: the number of channels gradually decreases from 840 to 128 while the feature resolution is upsampled back to 128×128, yielding the output CT volume image (128×128×128, with the 128 channels serving as the third dimension).
(2) In the encoding network, the feature downsampling causes information loss because the resolution continually decreases. The feature conversion network therefore converts the features before each downsampling and connects the converted features to the same-level upsampling result, compensating for the loss to a certain extent. The structure of the feature conversion network is shown in FIG. 3; the dimension of the input feature is C×W×H, where C is the number of channels and W and H are, respectively, the width and height of the feature map. Feature selection assigns different weights to the different channel features; the weights are learned automatically during network training, so that the network attends to important features. Feature selection can be implemented in different ways; this embodiment adopts a channel attention mechanism comprising three stages, channel compression, weight activation, and a weighting operation, realized respectively by a 1×1 convolution, a fully connected layer, and a dot product. The selected features are then converted into CT-like features by the feature transformation. The implementation of the feature transformation is likewise not unique; in this embodiment it comprises a 2×2 convolution downsampling, a two-dimensional convolution, and a 2×2 convolution upsampling.
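A sketch of one such feature-conversion block follows; beyond the operations the text names (1×1 convolution, fully connected layer, dot product, and 2×2 down/up-sampling convolutions), the global-average channel statistic and the sigmoid activation are assumptions:

```python
import torch
import torch.nn as nn

class FeatureConversion(nn.Module):
    """Sketch of feature selection (channel attention) plus feature
    transformation, as described for the feature conversion network."""
    def __init__(self, c):
        super().__init__()
        self.squeeze = nn.Conv2d(c, c, 1)           # 1x1 conv channel compression
        self.fc = nn.Linear(c, c)                   # weight activation
        self.transform = nn.Sequential(             # feature transformation
            nn.Conv2d(c, c, 2, stride=2),           # 2x2 convolution downsampling
            nn.Conv2d(c, c, 3, padding=1),          # two-dimensional convolution
            nn.ConvTranspose2d(c, c, 2, stride=2))  # 2x2 convolution upsampling

    def forward(self, x):                           # x: N x C x W x H features
        s = self.squeeze(x).mean(dim=(2, 3))        # per-channel statistic
        w = torch.sigmoid(self.fc(s))               # learned channel weights
        x = x * w[:, :, None, None]                 # weighting (dot product)
        return self.transform(x)                    # CT-like output features
```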
(3) The feature fusion structure fuses the result x of the feature conversion network with the upsampling result y. In this embodiment, the common adaptive instance normalization (AdaIN) method is used for fusion:

F(x, y) = σ(x) · (y − μ(y)) / σ(y) + μ(x)

where μ(·) denotes the mean and σ(·) the standard deviation.
Step 3: training the network of step 2 on the training set. Training comprises three steps. First, the global CT generation network is trained; its inputs are a single-view 2D X-ray image and the corresponding gradient image, and its output is a 3D CT volume containing the global structure. Second, the local CT generation network is trained; its inputs are the same, and its output is a 3D CT volume containing the local defects. Third, the combined result of the global and local CT generation networks is optimized; the input is the combined 3D CT result and the output is the optimized 3D CT result. In this embodiment, the resolutions of the paired X-ray images and CT volume images in the training set are 128×128 and 128×128×128, respectively. The X-ray images serve as network input, and the CT volume images supervise the results generated by the network. To make the local CT generation network pay more attention to the defects, the invention designs a focusing function as the loss function for network training. The definition of the focusing function is as follows:
L_focus(y, ŷ) = (1/M) · Σ_{j=1..M} w_j · (1 − SSIM_patch(y_j, ŷ_j))   (1)

w_j = W(ȳ_j),   W(x) = tan(x^a · π/2)^b   (2)
SSIM_patch is the common structural similarity (SSIM) function evaluated on an image block. Plain SSIM assigns the same weight to every local image block when measuring structural similarity, whereas the local CT generation network should be trained to pay more attention to defect generation. The proposed focusing function therefore uses W(x) to give greater weight to image blocks containing defects, promoting their generation. a and b are hyperparameters, set to a = 0.1 and b = 1; M is the number of blocks of the full 128×128 image, M = 121. For the j-th image block y_j in the real image y, ȳ_j denotes the mean of the block.
Step 4: after the network of step 2 is trained, the trained model parameters are obtained. For testing, an X-ray image that was not used in training is input to the trained model, and the corresponding CT volume is obtained. FIGS. 4(a)-4(e) show some reconstructed CT results. In the input X-ray image, the defects marked by boxes are difficult to recognize by eye, but they are easily identified in the reconstructed CT image. The CT volume can also be rotated and sectioned, so that defects are observed, measured and analyzed more easily. Table 1 gives the quantitative evaluation of the two strategies used in the network, the focusing loss function and the gradient-image input, and verifies their effectiveness in improving the single-view CT reconstruction performance of the invention. The reconstructed CT results perform best on two commonly used image quality assessment metrics, peak signal-to-noise ratio (PSNR) and feature similarity (FSIM), when both strategies are applied. Moreover, in practical deployment the trained network model needs only a single X-ray image of the piston to reconstruct the corresponding 3D CT volume rapidly; one reconstruction takes about 0.3 seconds.
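As a usage illustration, deployment-time inference might look like the following sketch; the checkpoint name and the loader are hypothetical, and `normalize` and `gradient_image` refer to the step-1 preprocessing sketch above:

```python
import torch

# Hypothetical checkpoint bundling the trained global, local and
# image-optimization modules behind a single forward pass.
model = torch.load("ct_reconstructor.pt", map_location="cpu")
model.eval()

xray = normalize(load_xray("piston_0001.tif"))  # load_xray is a hypothetical loader
grad = gradient_image(xray)                     # step-1 gradient preprocessing

with torch.no_grad():
    volume = model(torch.as_tensor(xray, dtype=torch.float32)[None, None],
                   torch.as_tensor(grad, dtype=torch.float32)[None, None])
# volume: 1 x 1 x 128 x 128 x 128 reconstructed CT of the workpiece
# (about 0.3 s per reconstruction, per the embodiment above)
```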
Table 1 quantitative evaluation results
Claims (1)
1. The single view CT reconstruction method combining global and local in defect detection is characterized by comprising the following steps:
Step 1, establishing and preprocessing a data set;
The dataset comprises paired X-ray images of the workpiece and corresponding CT volume images; for each pair of image data, the mean μ and variance σ are first computed, and the data are then normalized as (X − μ)/σ, where X represents the X-ray image or CT volume image to be normalized; the normalized dataset is divided into a training set and a test set, used respectively for network training and testing;
The preprocessing process adopts two-dimensional gradient operation to process the X-ray image, extracts a gradient image containing target edge information, and uses the gradient image as input of network training;
step 2, constructing a CT reconstruction network for reconstructing CT volume images from a single X-ray image;
the CT reconstruction network comprises three modules: a global CT generation module, a local CT generation module and an image optimization module;
The global CT generation module focuses on generating the large geometric structures and contours in the workpiece; the module comprises a feature encoding network, a feature decoding network, a feature conversion network and a feature fusion structure; the feature encoding network uses multiple convolution layers to extract features from the input single X-ray image and its gradient image, and the extracted feature information is encoded into a hidden-layer space through repeated image downsampling and convolution operations; the feature decoding network performs convolutional upsampling on the encoded information in the hidden-layer space using multiple convolution modules and finally recovers the 3D CT volume image; the feature conversion network connects same-level features of the encoding and decoding networks, compensating for information lost during encoding; the feature conversion network comprises two parts, feature selection and feature conversion; the feature selection part assigns different weights to the encoded features, giving more attention to higher-weighted features; the feature conversion part converts the weighted features so that they are closer to the features of the CT volume image; the feature fusion structure fuses the decoded features with the output features of the feature conversion network, improving the quality of the reconstructed CT image; the feature fusion comprises three steps: first, computing the mean and variance of the decoded features; second, computing the mean and variance of the output features of the feature conversion network; third, fusing the decoded features with the output features of the feature conversion network according to the means and variances from the first and second steps;
The local CT generation module focuses on generating the tiny defects inside the workpiece; the module adopts the same network structure as the global CT generation module; during feature encoding, its feature encoding network subtracts the encoded features at the corresponding positions of the global CT generation module's feature encoding network, so that the local CT generation network concentrates on learning the small-scale defect structures; the resulting structural information finally passes through the feature decoding network to generate a CT volume image containing only the tiny defects;
The image optimization module optimizes the combined result of the global CT generation module and the local CT generation module so that the final CT result is closer to the ground truth; the image optimization module comprises an encoding network and a decoding network: the input 3D CT volume image obtained by combining the outputs of the global and local CT generation modules is encoded, then decoded through the decoding network to generate the optimized CT volume image; the encoding network consists of several convolution layers and downsampling layers; the decoding network consists of several convolution layers and upsampling layers; skip connections pass same-level encoded information from the encoding network to the decoding network;
Step 3, training the CT reconstruction network of step 2;
The network training makes the network learn the mapping from a single 2D X-ray image to a 3D CT volume on the dataset established in step 1; the training comprises three steps: first, training the network of the global CT generation module, whose inputs are a single-view 2D X-ray image and the corresponding gradient image and whose output is a 3D CT volume image containing the global structure; second, training the network of the local CT generation module, whose inputs are a single-view 2D X-ray image and the corresponding gradient image and whose output is a 3D CT volume image containing the local defects; to make the network of the local CT generation module pay more attention to defects, a focusing function is designed as the loss function for training it; during training, the focusing function partitions the whole image into blocks, assigns large weights to image blocks containing defects and small weights to image blocks without defects, thereby guiding the network to concentrate on generating the defect regions; third, training the image optimization network to optimize the combined result of the global and local CT generation networks, whose input is the combined 3D CT volume image result and whose output is the optimized 3D CT volume image result;
The definition of the focusing function is as follows:

L_focus(y, ŷ) = (1/M) · Σ_{j=1..M} w_j · (1 − SSIM_patch(y_j, ŷ_j))   (1)

w_j = W(ȳ_j),   W(x) = tan(x^a · π/2)^b   (2)

in formula (1), y is the real CT image, ŷ is the predicted CT image, and M is the number of blocks of the whole image; for the j-th image block y_j in the real image y, the corresponding predicted image block is ŷ_j; formula (2) is the image-block weight allocation function, wherein ȳ_j denotes the mean value of the image block, a and b are hyperparameters, SSIM_patch is a structural similarity function computed on an image block, and w_j is a weight;
Step 4, testing on the test data set by using the CT reconstruction network trained in step 3; a CT volume image of the workpiece is reconstructed by inputting a single X-ray image into the network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210388137.3A CN114897785B (en) | 2022-04-14 | 2022-04-14 | Single-view CT reconstruction method combining global and local in defect detection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210388137.3A CN114897785B (en) | 2022-04-14 | 2022-04-14 | Single-view CT reconstruction method combining global and local in defect detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114897785A CN114897785A (en) | 2022-08-12 |
CN114897785B true CN114897785B (en) | 2024-07-26 |
Family
ID=82716705
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210388137.3A Active CN114897785B (en) | 2022-04-14 | 2022-04-14 | Single-view CT reconstruction method combining global and local in defect detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114897785B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112465790A (en) * | 2020-12-03 | 2021-03-09 | 天津大学 | Surface defect detection method based on multi-scale convolution and trilinear global attention |
CN113052935A (en) * | 2021-03-23 | 2021-06-29 | 大连理工大学 | Single-view CT reconstruction method for progressive learning |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017165672A1 (en) * | 2016-03-23 | 2017-09-28 | University Of Iowa Research Foundation | Devices, systems and methods utilizing framelet-based iterative maximum-likelihood reconstruction algorithms in spectral ct |
CN109745062B (en) * | 2019-01-30 | 2020-01-10 | 腾讯科技(深圳)有限公司 | CT image generation method, device, equipment and storage medium |
CN113052936A (en) * | 2021-03-30 | 2021-06-29 | 大连理工大学 | Single-view CT reconstruction method integrating FDK and deep learning |
- 2022-04-14: CN202210388137.3A filed in China; granted as CN114897785B (active)
Also Published As
Publication number | Publication date |
---|---|
CN114897785A (en) | 2022-08-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |