CN116091636A - Incomplete data reconstruction method for X-ray differential phase contrast imaging based on dual-domain enhancement - Google Patents
Incomplete data reconstruction method for X-ray differential phase contrast imaging based on dual-domain enhancement
- Publication number
- CN116091636A (application number CN202310019258.5A)
- Authority
- CN
- China
- Prior art keywords
- projection
- domain
- reconstruction
- phase contrast
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/006—Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/008—Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
Abstract
The invention discloses a dual-domain enhancement-based incomplete data reconstruction method for X-ray differential phase contrast imaging, which comprises the following steps: obtaining an initial tomographic reconstructed image from incomplete data via a filtered back projection algorithm; performing differential forward projection on the reconstructed image to obtain an artifact-degraded projection sequence; and performing enhanced reconstruction on the degraded projection sequence with a deep learning technique based on dual-domain (projection domain and reconstructed image domain) enhancement, thereby obtaining a high-quality, artifact-free tomographic reconstructed image. Compared with existing X-ray differential phase contrast imaging techniques, the embodiment of the invention reduces the imaging radiation dose while keeping imaging quality as close as possible to that of complete-data imaging; the method fully exploits the domain conversions in the differential phase contrast reconstruction process, using the strong image processing capability of convolutional neural networks to output a reconstructed image directly from an input projection sequence and performing reconstruction enhancement in both domains along the way.
Description
Technical Field
The invention relates to the technical fields of deep learning and X-ray differential phase contrast imaging, in particular to a dual-domain enhancement-based incomplete data reconstruction method for X-ray differential phase contrast imaging.
Background
Compared with traditional absorption contrast imaging, X-ray differential phase contrast imaging achieves higher contrast when imaging light-element materials such as biological tissue. In an existing X-ray grating differential phase contrast imaging device, X-rays from a conventional source are made coherent by a source grating G0, pass through a phase grating G1, propagate freely for a certain distance, pass through an absorption grating G2, and are finally received by a detector behind G2. Because the detector cannot directly measure the phase change of the X-rays, the absorption grating G2 generally must be stepped laterally by micron-sized distances several times (typically 4-8 steps); the acquired two-dimensional projection images are then analyzed to extract the differential phase contrast signal, which is subsequently reconstructed. This process significantly lengthens imaging time, increases the radiation dose, and greatly reduces imaging efficiency. Moreover, stepping the absorption grating requires micron-level motion precision, placing high demands on the control equipment.
Incomplete angle sampling is a general method for reducing the radiation dose of CT imaging, and it effectively reduces dose when applied to phase contrast CT. It is simple to carry out and requires no changes to existing phase contrast CT imaging devices. Common incomplete angle sampling schemes are sparse angle sampling and limited angle sampling; however, directly reconstructing incomplete data with FBP introduces severe artifacts into the reconstructed image and degrades imaging quality.
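The two incomplete sampling schemes can be sketched as follows; the function names and default parameters are illustrative assumptions, but the counts match the sampling intervals used later in this document (0.5° for complete sampling, 4° for sparse sampling).

```python
# Illustrative sketch of the two common incomplete-angle sampling schemes.

def sparse_angles(sparse_interval_deg=4.0, total_deg=360.0):
    """Sparse angle sampling: same angular range, larger angular step."""
    n = int(total_deg / sparse_interval_deg)
    return [i * sparse_interval_deg for i in range(n)]

def limited_angles(interval_deg=0.5, max_deg=120.0):
    """Limited angle sampling: full angular density, restricted range."""
    n = int(max_deg / interval_deg)
    return [i * interval_deg for i in range(n)]

full = int(360.0 / 0.5)       # complete sampling: 720 views at 0.5 degrees
sparse = sparse_angles()      # 90 views at 4 degrees
limited = limited_angles()    # 240 views covering only [0, 120) degrees
```

Both schemes acquire fewer views than the 720 of complete sampling, which is exactly why direct FBP reconstruction of either produces artifacts.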
Disclosure of Invention
The invention solves the following technical problem: it overcomes the defects of existing X-ray phase contrast incomplete data reconstruction techniques. Incomplete angle sampling greatly reduces the radiation dose and imaging time of the CT scan, and a dual-domain-enhanced deep learning technique synchronously improves the final reconstructed image in both the projection domain and the image domain. The radiation dose is thus reduced and imaging efficiency improved while high phase contrast image quality is maintained, forming an efficient phase contrast CT imaging technique.
The technical solution of the invention is as follows: a dual-domain enhancement-based incomplete data reconstruction method for X-ray differential phase contrast imaging comprises the following steps:
step 1, performing an incomplete sampling scan of the sample to obtain an incomplete-data phase contrast projection sequence of the sample; incomplete sampling refers to sparse angle sampling or limited angle sampling, which reduces the radiation dose by enlarging the sampling angle interval or by sampling only within a limited angular range;
step 2, performing phase contrast filtered back projection, i.e. phase contrast FBP reconstruction, on the incomplete-data phase contrast projection sequence to obtain an initial tomographic reconstructed image of the sample; phase contrast FBP reconstruction adopts a Hilbert filter as the reconstruction filter on top of classical FBP reconstruction;
step 3, applying a differential forward projection operator based on the three-point difference to the initial tomographic reconstructed image to obtain a degraded complete projection sequence; differential forward projection produces a phase contrast projection sequence, which corresponds to the derivative of the forward projection arising in the analysis process, where forward projection refers to the accumulation of pixel values along the transmission path of the rays through the sample; "degraded complete projection sequence" means a projection sequence whose size is consistent with the complete projection sequence, but which is degraded by the image artifacts caused by the missing data;
step 4, performing enhanced reconstruction on the degraded complete-data projection sequence with a dual-domain-enhanced convolutional neural network, eliminating the image artifacts produced by incomplete data reconstruction and obtaining a tomographic reconstructed image with complete features; dual-domain enhancement refers to applying convolutional neural network techniques to the projection domain and the reconstructed image domain for synchronous enhancement, embedding the deep learning technique into the whole phase contrast CT reconstruction process;
further, in step 2, the phase-contrast FBP reconstruction algorithm shown in formulas (1) and (2) is used to reconstruct the incomplete data phase-contrast projection sequence, and the obtained phase-contrast image contains incomplete data reconstruction artifacts:
wherein δ(x, y) is the tomographic reconstructed image, U is a geometric weighting factor, P is the phase contrast projection sequence, θ is the rotation angle, ν is the corresponding frequency-domain variable, and h(ν) is the Hilbert transform filter used in phase contrast FBP reconstruction.
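Formulas (1) and (2) are rendered as images in the source and do not survive extraction. A plausible reconstruction, assuming the standard phase contrast FBP form with a frequency-domain Hilbert filter (the exact form of the weighting U may differ from the original), is:

```latex
\delta(x,y)=\int_{0}^{2\pi} U\,\big[P(\theta,\cdot)*h\big]\big(x\cos\theta+y\sin\theta\big)\,\mathrm{d}\theta \tag{1}
```

```latex
\hat{h}(\nu)=\frac{1}{2\pi i}\,\operatorname{sgn}(\nu) \tag{2}
```

The sign-function filter replaces the ramp filter of absorption-contrast FBP because the differential phase projection already carries one spatial derivative.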
Further, in step 3, the three-point-difference-based differential forward projection operator shown in formulas (3) and (4) is used to process the degraded reconstructed image, yielding a degraded complete phase contrast projection sequence:
α(s,θ)=∫δ(x,y)dl (3)
wherein α is the forward projection sequence, δ(x, y) is the tomographic reconstructed image, s is the projection sequence number, θ is the rotation angle, l is the transmission path of the rays through the sample, and P is the phase contrast projection sequence.
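Formula (4), the three-point difference applied to formula (3), is elided in the source; the central-difference form used in the sketch below is an assumption, and the forward projector is a toy stand-in (a column sum, i.e. the θ = 0 case of formula (3) without rotation):

```python
# Toy sketch of differential forward projection via three-point difference.

def forward_projection(image):
    """Stand-in for formula (3) at angle 0: sum pixel values along each column."""
    return [sum(col) for col in zip(*image)]

def differential_projection(alpha):
    """Assumed formula (4): central difference along the detector coordinate s."""
    return [(alpha[s + 1] - alpha[s - 1]) / 2.0 for s in range(1, len(alpha) - 1)]

image = [[1, 2, 4, 7],
         [1, 2, 4, 7]]
alpha = forward_projection(image)     # [2, 4, 8, 14]
p = differential_projection(alpha)    # [(8-2)/2, (14-4)/2] = [3.0, 5.0]
```

The differential projection is two samples shorter than α; a real implementation would pad or use one-sided differences at the detector edges.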
Further, in step 4, a phase contrast CT reconstruction enhancement architecture based on dual-domain enhancement and convolutional neural networks is employed. The architecture consists of the differential forward projection depicted in Fig. 3 and the convolutional neural network in the dashed box below in that figure.
As can be seen, the dual-domain enhancement network consists of three sub-networks: a projection domain enhancement network, a phase contrast reconstruction module supporting gradient backpropagation, and a reconstructed image domain enhancement network. Note that the three sub-networks form a closed loop and must be trained jointly rather than separately, ultimately forming a complete reconstruction network.
(I) Projection domain enhancement network: 1) Feature extraction: the input first passes through 2 convolutional layers of size 3×3, stride 1, and 32 channels. It is then downsampled at 4 different rates; each branch contains a residual block followed by a corresponding upsampling convolutional layer, and convolutional layers of size 3×3, stride 1, and 1 channel produce 4 multi-scale output feature layers of size H×W×1, where H and W are the height and width of the input image. The residual blocks suppress, to some extent, the degradation that occurs as the network grows deeper and improve the accuracy of the finally trained model. 2) Channel aggregation: after the first-stage feature extraction, each branch's output has already undergone preliminary noise reduction and structure recovery; channel aggregation then combines the branch outputs at the channel level into an aggregated feature layer of size H×W×1.
(II) Phase contrast reconstruction module: the phase contrast reconstruction module performs phase contrast CT reconstruction and serves as the connection between the projection domain and the image domain. Moreover, the module must allow gradients to propagate backward; otherwise the network is disconnected and the enhancement effect is confined to the image domain alone. The module is realized by optimizing the back projection step of the FBP reconstruction algorithm, as shown in formulas (5) and (6), wherein X and Y denote the filtered projection sequence and the reconstructed image respectively, row and col denote rows and columns of the reconstructed image, θ denotes the rotation angle, the subscript i indexes the different sampling angles, and t_i = row·cos(θ_i) + col·sin(θ_i). In formula (6), ⌈·⌉ and ⌊·⌋ denote the two rounding operators, rounding up and rounding down respectively.
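The back projection step described above can be sketched in numpy as follows. Formulas (5) and (6) are elided in the source, so the centering convention and interpolation weights here are illustrative assumptions; the key point is that every step is an array operation, so the same recipe written with a deep learning framework's tensor ops lets gradients flow from the image domain back into the projection domain.

```python
import numpy as np

def backproject(filtered_proj, thetas_deg):
    """Back projection with floor/ceil (linear) detector interpolation.

    filtered_proj: (n_angles, n_det) array of filtered projections.
    Returns an (n_det, n_det) reconstructed image.
    """
    n_det = filtered_proj.shape[1]
    rows, cols = np.mgrid[0:n_det, 0:n_det]
    center = (n_det - 1) / 2.0
    image = np.zeros((n_det, n_det))
    for i, th in enumerate(np.deg2rad(thetas_deg)):
        # t_i = row*cos(theta_i) + col*sin(theta_i), shifted to array indices
        t = (rows - center) * np.cos(th) + (cols - center) * np.sin(th) + center
        lo = np.clip(np.floor(t).astype(int), 0, n_det - 1)   # the "round down" operator
        hi = np.clip(np.ceil(t).astype(int), 0, n_det - 1)    # the "round up" operator
        w = t - np.floor(t)
        image += (1 - w) * filtered_proj[i, lo] + w * filtered_proj[i, hi]
    return image / len(thetas_deg)
```

For a single view at θ = 0°, each image row receives the corresponding detector value unchanged, which is a quick sanity check on the geometry.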
(III) Reconstructed image domain enhancement network: located at the back end of the network, this sub-network uses U-Net, the most widely used image restoration network, with parameter settings consistent with the conventional U-Net. Its function is to further enhance the reconstruction in the CT image domain.
In step 4, a convolution neural network based on dual-domain enhancement shown in formulas (7) and (8) is adopted to enhance the degraded complete phase contrast projection sequence, so as to obtain a reconstructed image without artifacts, which is specifically as follows:
wherein formulas (7) and (8) are multidimensional convolution operations acting on the projection domain and the reconstructed image domain, respectively; X_0 ∈ R^{H×W} is the input phase contrast projection sequence, H and W being the length and width of the projection sequence; a series of multidimensional convolution operations yields the projection domain network output X_n ∈ R^{H×W}, the enhanced phase contrast projection image; K_1 and b_1 are the convolution kernels and biases of the convolution layers of the projection domain network; Y_0 ∈ R^{W×W} is the input of the image domain network, and a series of multidimensional convolution operations yields the image domain network output Y_n ∈ R^{W×W}, the enhanced reconstructed image; K_2 and b_2 are the convolution kernels and biases of the convolution layers of the reconstructed image domain network, and n denotes the n-th convolution layer.
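The layered form described by formulas (7) and (8) can be sketched as a chain of single-channel convolutions; the ReLU nonlinearity and the identity kernels below are illustrative assumptions (the formulas themselves are elided in the source):

```python
import numpy as np

def conv2d(x, k, b):
    """'Same'-size 2D cross-correlation with zero padding, single channel."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k) + b
    return out

def enhance(x, layers):
    """Apply the stack of (kernel K_n, bias b_n) pairs: X_0 -> X_1 -> ... -> X_n."""
    for k, b in layers:
        x = np.maximum(conv2d(x, k, b), 0.0)   # ReLU between layers (assumed)
    return x

x0 = np.ones((5, 5))                 # stand-in for a projection patch
identity = np.zeros((3, 3))
identity[1, 1] = 1.0                 # identity kernel: output equals input
xn = enhance(x0, [(identity, 0.0), (identity, 0.0)])
```

With identity kernels and zero bias the stack reproduces its input, confirming that the "same" padding keeps the H×W size constant from X_0 through X_n, as the formulas require.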
Compared with existing X-ray grating differential phase contrast imaging techniques, the method effectively reduces the radiation dose delivered to the sample and improves the application potential of phase contrast imaging in practical inspection scenarios. Meanwhile, it exploits the image-data transformations throughout the imaging process: the dual-domain-enhanced convolutional neural network updates its parameters synchronously in the projection domain and the reconstructed image domain, removing artifacts from the phase contrast reconstructed image and improving imaging quality. The incomplete angle sampling adopted by the invention is a general dose-reduction method that can be realized simply by changing the sampling parameters of the imaging software, without modifying the imaging device; it is simple to operate and easy to extend. In addition, unlike approaches that train the projection domain and reconstructed image domain networks separately, the dual-domain enhancement method avoids compounding the defects of two independently enhanced domains and markedly improves the practical value of incomplete data reconstruction. The invention reduces radiation dose and improves imaging efficiency while maintaining high phase contrast image quality, forming an efficient phase contrast CT imaging technique that meets the demands of high-quality, high-efficiency industrial and medical nondestructive testing.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a comparison of incomplete and complete data, for example, sparse angular sampling;
FIG. 3 is a schematic diagram of differential forward projection employed in an embodiment of the present invention;
FIG. 4 is a convolutional neural network architecture employed in an embodiment of the present invention;
fig. 5 is a phase contrast imaging incomplete data reconstruction optimization result of experimental data under sparse angle sampling.
Detailed Description
The present invention will be described in detail with reference to the accompanying drawings and examples.
As shown in fig. 1, the method of the present invention comprises the following specific steps:
s101, incomplete data acquisition: and acquiring a projection sequence of the sample under an incomplete sampling angle by using an X-ray grating differential phase contrast imaging device based on the Talbot-Lau effect.
The incomplete sampling angle generally refers to sparse angle sampling or finite angle sampling, and a great amount of artifacts exist in a reconstructed image obtained by directly performing filtered back projection reconstruction on the obtained incomplete data.
S102, incomplete data phase contrast reconstruction: performing filtered back projection (FBP) reconstruction on the incomplete data projection sequence to obtain an initial tomographic reconstructed image of the sample.
Reconstructing the incomplete data phase-contrast projection sequence by using a phase-contrast FBP reconstruction algorithm described in formulas (1) and (2), wherein the obtained phase-contrast image comprises incomplete data reconstruction artifacts:
wherein δ(x, y) is the tomographic reconstructed image, U is a geometric weighting factor, P is the phase contrast projection sequence, θ is the rotation angle, ν is the corresponding frequency-domain variable, and h(ν) is the Hilbert transform filter used in phase contrast reconstruction. The FBP reconstruction algorithm is a spatial-domain processing technique based on Fourier transform theory and is currently the most widely applied reconstruction algorithm in CT imaging. However, FBP requires the data to satisfy the Nyquist condition, i.e., the projection data must be complete. Under the incomplete angle sampling conditions faced by the embodiment of the invention, direct FBP reconstruction introduces severe artifacts and noise.
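The Nyquist condition mentioned above can be made concrete with a common rule of thumb (an assumption here, not a formula from the source): artifact-free FBP needs roughly (π/2)·n_detector views, which shows how far a 4° sparse scan falls short.

```python
import math

def required_views(n_detector):
    """Rule-of-thumb angular Nyquist count: about (pi/2) * detector elements."""
    return math.ceil(math.pi / 2 * n_detector)

n_det = 512
needed = required_views(n_det)   # about 805 views for a 512-element detector
sparse_views = int(360 / 4)      # the 4-degree sparse scan acquires 90 views
undersampling = needed / sparse_views
```

An undersampling factor near 9 explains the severe streak artifacts seen in the sparse-angle FBP reconstructions discussed below.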
S103, performing differential forward projection on the degraded image to serve as the input of the subsequent convolutional neural network: the artifact-degraded complete projection sequence is obtained by directly applying differential forward projection to the degraded image, so that the structural information of the tomographic image and the incomplete data reconstruction artifacts are together fed back into a projection sequence whose size is consistent with the complete projection sequence.
And (3) processing the degraded reconstructed image by using the differential forward projection operator based on the three-point difference as shown in formulas (3) and (4) to obtain a degraded complete phase-contrast projection sequence containing artifacts:
α(s,θ)=∫δ(x,y)dl (3)
wherein α is the forward projection sequence, δ(x, y) is the tomographic reconstructed image, s is the projection sequence number, θ is the rotation angle, l is the transmission path of the rays through the sample, and P is the differential phase contrast projection sequence. Compared with the conventional interpolation used in prior projection-domain enhancement methods to pad incomplete data to full size, the forward projection adopted by the embodiment of the invention reflects the true distribution of the sinogram; furthermore, interpolation cannot be applied to limited angle data at all.
S104, performing enhanced reconstruction based on dual-domain enhancement: processing the degraded complete-data projection sequence with a dual-domain-enhanced convolutional neural network to obtain an enhanced tomographic reconstructed image. The dual-domain-enhanced convolutional neural network model enhances and optimizes the reconstruction process in the projection domain and the image domain simultaneously, greatly improving reconstruction quality.
A phase contrast CT reconstruction enhancement architecture based on dual-domain enhancement and convolutional neural networks is adopted. The architecture consists of the differential forward projection depicted in Fig. 3 and the convolutional neural network in the dashed box below in that figure.
As can be seen, the dual-domain enhancement network consists of three sub-networks: a projection domain enhancement network, a phase contrast reconstruction module supporting gradient backpropagation, and a reconstructed image domain enhancement network. Note that the three sub-networks form a closed loop and must be trained jointly rather than separately, ultimately forming a complete reconstruction network.
(I) Projection domain enhancement network: 1) Feature extraction: the input first passes through 2 convolutional layers of size 3×3, stride 1, and 32 channels. It is then downsampled at 4 different rates; each branch contains a residual block followed by a corresponding upsampling convolutional layer, and convolutional layers of size 3×3, stride 1, and 1 channel produce 4 multi-scale output feature layers of size H×W×1, where H and W are the height and width of the input image. The residual blocks suppress, to some extent, the degradation that occurs as the network grows deeper and improve the accuracy of the finally trained model. 2) Channel aggregation: after the first-stage feature extraction, each branch's output has already undergone preliminary noise reduction and structure recovery; channel aggregation then combines the branch outputs at the channel level into an aggregated feature layer of size H×W×1.
(II) Phase contrast reconstruction module: the phase contrast reconstruction module performs phase contrast CT reconstruction and serves as the connection between the projection domain and the image domain. Moreover, the module must allow gradients to propagate backward; otherwise the network is disconnected and the enhancement effect is confined to the image domain alone. The module is realized by optimizing the back projection step of the FBP reconstruction algorithm, as shown in formulas (5) and (6), wherein X and Y denote the filtered projection sequence and the reconstructed image respectively, row and col denote rows and columns of the reconstructed image, θ denotes the rotation angle, the subscript i indexes the different sampling angles, and t_i = row·cos(θ_i) + col·sin(θ_i). In formula (6), ⌈·⌉ and ⌊·⌋ denote the two rounding operators, rounding up and rounding down respectively.
(III) Reconstructed image domain enhancement network: located at the back end of the network, this sub-network uses U-Net, the most widely used image restoration network, with parameter settings consistent with the conventional U-Net. Its function is to further enhance the reconstruction in the CT image domain.
After determining the network structure, the convolution neural network based on the double-domain enhancement shown in formulas (7) - (8) is used for enhancing the degraded complete phase contrast projection sequence, so as to obtain a reconstructed image without artifacts, which is specifically as follows:
wherein formulas (7) and (8) are the multidimensional convolution operations acting on the projection domain and the reconstructed image domain respectively, and formulas (5) and (6) constitute the phase contrast Radon reconstruction layer, which allows gradients to propagate backward and thereby realizes the connection between the two domains and the image transformation; X_0 ∈ R^{H×W} is the input phase contrast projection sequence, H and W being the length and width of the projection sequence; a series of multidimensional convolution operations yields the projection domain network output X_n ∈ R^{H×W}, the enhanced phase contrast projection image; K_1 and b_1 are the convolution kernels and biases of the corresponding convolution layers; θ is the rotation angle in the reconstruction process and t is a function of θ and the pixel coordinates; Y_0 ∈ R^{W×W} is the input of the reconstructed image domain network, and a series of multidimensional convolution operations yields the image domain network output Y_n ∈ R^{W×W}, the enhanced reconstructed image; K_2 and b_2 are the convolution kernels and biases of the corresponding convolution layers, and n denotes the n-th convolution layer.
Training the dual-domain-enhanced convolutional neural network determines the network parameters and yields the dual-domain enhancement model. The degraded complete phase contrast projection sequence is then input into the trained model to obtain an artifact-free phase contrast reconstructed image.
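The closed-loop joint training described above can be illustrated with a deliberately tiny stand-in: a single loss combines a projection domain term and an image domain term, and both "sub-network" parameters receive gradients through the shared pipeline. The linear networks, labels, and analytic gradients below are toy assumptions standing in for real CNNs and backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
proj = rng.normal(size=8)            # degraded projection sequence
proj_label = proj + 1.0              # projection domain label (true bias is 1)
img_label = 2.0 * proj_label         # image domain label (true gain is 2)

a, b = 0.0, 1.0                      # projection-net bias, image-net gain
lr = 0.05
for _ in range(500):
    p_hat = proj + a                 # projection domain "network"
    i_hat = b * p_hat                # reconstruction + image domain "network"
    # Dual-domain loss: mean (p_hat - label)^2 + mean (i_hat - label)^2.
    # The image domain term also backpropagates into a through the chain.
    grad_a = np.mean(2 * (p_hat - proj_label)) + np.mean(2 * (i_hat - img_label) * b)
    grad_b = np.mean(2 * (i_hat - img_label) * p_hat)
    a -= lr * grad_a
    b -= lr * grad_b
```

Joint descent drives a toward 1 and b toward 2 simultaneously; training the two parameters separately on their own losses would not let the image domain error correct the projection domain parameter, which is the defect-compounding the dual-domain scheme avoids.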
Fig. 2 compares projection sequences and their FBP reconstructed images under complete and incomplete sampling conditions, taking sparse angle sampling as an example. The sampling intervals for complete and sparse angle sampling are 0.5° and 4° respectively, corresponding, for fan-beam CT, to projection sequences of 720 and 90 view angles. As can be seen, the FBP reconstructed image of the sparsely sampled projection sequence contains obvious streak artifacts, which seriously hinder the extraction and interpretation of image detail features and are highly detrimental to the feature extraction accuracy for lesions in medical CT or part cracks in industrial CT.
Fig. 3 is a schematic diagram of the differential forward projection employed in an embodiment of the invention. A lung CT image from a medical image database serves as the original image. Using the differential forward projection operator described by formulas (3) and (4), sampling at intervals of 0.5° and 4° yields a complete phase contrast projection sequence and a sparsely sampled phase contrast projection sequence respectively, the former serving as the label of the projection domain sub-network. CT reconstruction of these projection sequences with the FBP algorithm of formulas (1) and (2) yields a high-quality phase contrast reconstructed image and a degraded reconstructed image, the former serving as the label of the image domain sub-network. Finally, differential forward projection with a sampling interval of 0.5° is applied to the degraded reconstructed image, producing a degraded complete projection sequence with a complete angular range but incomplete image information, which serves as the input of the network proposed in this embodiment. As can be seen, this projection sequence has the same size as the complete projection sequence but carries visibly less information.
FIG. 4 is a schematic diagram of the dual-domain-enhanced convolutional neural network employed in an embodiment of the invention. The input of the dual-domain neural network is the degraded complete phase contrast projection sequence, i.e., the differential forward projection of the initial phase contrast FBP reconstruction of the incomplete phase contrast projection sequence; the output is a reconstructed image with clear features and no image artifacts, as would be obtained by reconstructing a complete phase contrast projection sequence; the network has two labels, the complete projection sequence and the phase contrast reconstructed image. The dual-domain neural network consists of three parts: (I) projection domain enhancement, which extracts multi-scale features from the projection sequence; since multi-scale feature extraction can cause degradation as the network deepens, residual networks (ResNet) are adopted as an effective remedy that also accelerates convergence; (II) the phase contrast reconstruction module, shown as the Phase Contrast Radon Inverse Layer (PCRIL) box in Fig. 4, which rewrites the phase contrast FBP reconstruction algorithm, including Hilbert filtering and back projection reconstruction, as a series of two-dimensional matrix operations; PCRIL replaces the loops of the original reconstruction algorithm, enabling gradient backpropagation through the module and connecting the projection domain with the reconstructed image domain; (III) reconstructed image domain enhancement, which aims to eliminate image artifacts while preserving the image structure as much as possible.
A U-Net network is introduced in the reconstructed image domain as an enhancement network to further improve image quality.
In order to demonstrate the effect of the above examples, the following experiments were performed in the examples of the present invention, and the experimental procedures are as follows:
(1) Acquiring a projection sequence of a sample under an incomplete sampling angle by using an X-ray grating differential phase contrast imaging device based on the Talbot-Lau effect;
(2) Performing filtered back projection (FBP) reconstruction on the incomplete data projection sequence to obtain an initial reconstructed image of the sample;
(3) Performing differential forward projection on the degraded image so that the degraded projection sequence has the same size as the complete projection sequence, serving as the input of the subsequent convolutional neural network;
(4) Training a convolutional neural network based on double-domain enhancement, and after training, processing the degraded complete data projection sequence by the network with determined parameters to obtain an enhanced tomographic reconstruction image.
Fig. 5 shows the incomplete data reconstruction optimization results for experimental phase contrast imaging data under sparse angle sampling. The three images, from left to right, are the complete-sampling reconstruction, the sparse-sampling reconstruction, and the dual-domain-enhanced reconstruction under sparse sampling. The image details in the dashed boxes are magnified to aid visual comparison. As can be seen, the sparsely sampled reconstruction is severely degraded by artifacts, with substantial loss of detail features; after dual-domain enhancement, the artifacts are essentially eliminated and, moreover, the detailed structure of the image is well restored.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solution of the invention. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described therein can still be modified, or some or all of their technical features replaced by equivalents, without such modifications and substitutions departing from the spirit of the invention.
Claims (4)
1. A method for reconstructing incomplete data of X-ray differential phase contrast imaging based on dual-domain enhancement, characterized by comprising the following steps:
step 1, performing incomplete sampling scanning on a sample to obtain an incomplete-data phase contrast projection sequence of the sample; the incomplete sampling refers to sparse-angle sampling or limited-angle sampling, in which the radiation dose is reduced by increasing the sampling-angle interval or by sampling only within a limited angular range;
step 2, performing phase-contrast filtered back projection, i.e. phase-contrast FBP reconstruction, on the incomplete-data phase contrast projection sequence to obtain an initial tomographic reconstructed image of the sample; the phase-contrast FBP reconstruction adopts a Hilbert filter as the reconstruction filter on the basis of classical FBP reconstruction;
step 3, performing differential forward projection based on a three-point difference on the initial tomographic reconstructed image by using a differential forward projection operator to obtain a degraded complete projection sequence; the differential forward projection reflects that, in the analytical derivation, the phase contrast projection sequence is the derivative of the forward projection, the forward projection being the accumulation of pixel values along the transmission path of the rays through the sample; the degraded complete projection sequence means that the projection sequence has the same size as the complete projection sequence but is degraded by image artifacts caused by the missing data;
step 4, performing enhanced reconstruction on the degraded complete-data projection sequence by using a convolutional neural network based on dual-domain enhancement, eliminating the image artifacts produced by incomplete-data reconstruction, and obtaining a tomographic reconstructed image with complete features; the dual-domain enhancement refers to applying convolutional neural network techniques to the projection domain and the reconstructed-image domain for synchronous enhancement, embedding the deep learning technique into the entire phase contrast CT reconstruction process.
2. The method for reconstructing incomplete data of X-ray differential phase contrast imaging based on dual-domain enhancement as set forth in claim 1, wherein in step 2 the phase-contrast FBP reconstruction is performed by using formulas (1) and (2), and the initial tomographic reconstructed image of the sample is obtained as follows:
wherein δ(x, y) is the initial tomographic reconstructed image, U is a geometric weighting factor, P is the phase contrast projection sequence, θ is the rotation angle, v is the corresponding spatial-domain (detector) coordinate, and h(v) is the Hilbert filter in the phase-contrast FBP reconstruction.
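The Hilbert filtering at the heart of phase-contrast FBP can be sketched as a frequency-domain multiplication; the discretization H(ω) = −i·sgn(ω) used below is one common choice and an assumption, not necessarily the patent's exact h(v):

```python
import numpy as np

def hilbert_filter(proj_row):
    """Apply a Hilbert filter to one detector row in the frequency domain.

    H(w) = -i * sign(w) is one standard realization of the 1-D Hilbert
    kernel applied along the detector axis (an assumption here).
    """
    n = proj_row.shape[-1]
    w = np.fft.fftfreq(n)
    return np.real(np.fft.ifft(np.fft.fft(proj_row) * (-1j) * np.sign(w)))

# Sanity check: the Hilbert transform maps cos to sin.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
print(np.allclose(hilbert_filter(np.cos(t)), np.sin(t), atol=1e-6))   # True
```

In the full reconstruction, each filtered row would then be weighted by U and back-projected over all rotation angles θ.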
3. The method for reconstructing incomplete data of X-ray differential phase contrast imaging based on dual-domain enhancement as set forth in claim 1, wherein in step 3 the differential forward projection operator shown in formulas (3) and (4) is used to perform differential forward projection based on a three-point difference on the initial reconstructed image to obtain a degraded complete projection sequence containing artifacts:
α(s,θ)=∫δ(x,y)dl (3)
wherein α is the forward projection sequence, x and y are the abscissa and ordinate of the image, δ(x, y) is the tomographic reconstructed image, s is the detector-coordinate index within a projection, θ is the rotation angle, l is the transmission path of the rays through the sample, and P is the phase contrast projection sequence.
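One plausible reading of the three-point difference in formula (4) is a central difference of α along the detector coordinate s; the sketch below (with one-sided differences at the edges, an added assumption) illustrates it:

```python
import numpy as np

def differential_projection(alpha):
    """Three-point central difference along the detector coordinate s:
        P(s, theta) = (alpha(s+1, theta) - alpha(s-1, theta)) / 2
    `alpha` has shape (n_angles, n_detectors); edge bins fall back to
    one-sided differences (an assumption, not stated in the claim).
    """
    P = np.empty_like(alpha, dtype=float)
    P[:, 1:-1] = (alpha[:, 2:] - alpha[:, :-2]) / 2.0
    P[:, 0] = alpha[:, 1] - alpha[:, 0]        # forward difference at the edge
    P[:, -1] = alpha[:, -1] - alpha[:, -2]     # backward difference at the edge
    return P

# For alpha(s) = s**2 the central difference recovers 2*s exactly
# in the interior detector bins.
s = np.arange(8, dtype=float)
alpha = np.tile(s ** 2, (3, 1))                # 3 views, 8 detector bins
print(differential_projection(alpha)[0])       # interior entries equal 2*s
```

Formula (3) itself (the line integral of δ along l) would be supplied by any standard forward projector; only the differencing step is shown here.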
4. The method for reconstructing incomplete data of X-ray differential phase contrast imaging based on dual-domain enhancement as set forth in claim 1, wherein in step 4 the convolutional neural network based on dual-domain enhancement is used to perform enhanced reconstruction on the degraded complete-data projection sequence and to eliminate the image artifacts produced by incomplete-data reconstruction, obtaining a tomographic reconstructed image with complete features, specifically as follows:
the convolutional neural network based on dual-domain enhancement comprises three sub-networks: a projection-domain enhancement network, a phase contrast reconstruction module supporting gradient back-propagation, and a reconstructed-image-domain enhancement network; the three sub-networks are connected in series, each taking the output of the previous stage as its input, forming an end-to-end reconstruction network; in addition, although the three sub-networks act in sequence during reconstruction, there is no ordering of the enhancement itself, i.e. the two domains are enhanced synchronously;
(1) Projection-domain enhancement network: 1) Feature extraction: the input first passes through 2 convolution layers of size 3×3, stride 1 and 32 channels; it is then downsampled along 4 branches with different downsampling factors, each branch containing a residual block followed by a corresponding upsampling convolution layer; finally, convolution layers of size 3×3, stride 1 and 1 channel produce 4 multi-scale output feature layers of size H×W×1, where H and W are the length and width of the input image; the residual blocks suppress the degradation that occurs as the network becomes deeper and improve the accuracy of the finally trained model; 2) Channel aggregation: after the feature extraction of each branch in the first stage, each branch output has completed preliminary noise reduction and structure recovery; channel aggregation then combines the branch outputs at the channel level to obtain an aggregated feature layer of size H×W×1;
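As a structure-only sketch (not the patent's network), the four-branch downsample/upsample-and-aggregate pattern can be illustrated in NumPy, with the 3×3 convolutions and residual blocks replaced by identity mappings and a mean standing in for the learned channel aggregation:

```python
import numpy as np

def avg_pool(x, k):
    """k x k average pooling (assumes H and W divisible by k)."""
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def upsample(x, k):
    """Nearest-neighbour upsampling by factor k."""
    return np.repeat(np.repeat(x, k, axis=0), k, axis=1)

def multiscale_aggregate(x, factors=(1, 2, 4, 8)):
    """Four branches at different scales, then channel-level aggregation.
    The convolutions and residual blocks of the patent are omitted here;
    only the multi-scale branch/aggregate topology is shown.
    """
    branches = [upsample(avg_pool(x, k), k) for k in factors]   # 4 H x W maps
    stacked = np.stack(branches, axis=-1)                       # H x W x 4
    return stacked.mean(axis=-1)                                # H x W x 1 analogue

x = np.random.rand(32, 32)
y = multiscale_aggregate(x)
print(y.shape)   # (32, 32)
```

Because every branch preserves the global mean of the input, so does the aggregate; in the real network the learned convolutions would instead denoise and restore structure per scale.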
(2) Phase contrast reconstruction module supporting gradient back-propagation: the phase contrast reconstruction module performs phase contrast CT reconstruction and serves as the connection between the projection domain and the image domain; in addition, the module allows gradients to propagate backwards through it, without which the whole network would not be connected and the enhancement effect would be limited to the image domain; the module is realized by optimizing the back-projection step of the FBP reconstruction algorithm, as shown in formulas (5) and (6), wherein X and Y denote the filtered projection sequence and the reconstructed image respectively, row and col denote rows and columns of the reconstructed image, θ denotes the rotation angle, the subscript i indexes the different sampling angles, and t_i = row·cos(θ_i) + col·sin(θ_i); in formula (6), ⌈·⌉ and ⌊·⌋ are the two rounding operators, rounding up and rounding down respectively;
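A minimal NumPy sketch of such a pixel-driven back-projection with floor/ceil interpolation follows; the geometry conventions (centred detector offset, angle units) are assumptions here, and the autodiff machinery that makes it gradient-capable in a deep learning framework is omitted:

```python
import numpy as np

def backproject(filtered, angles_deg, size):
    """Pixel-driven back-projection, interpolating linearly between the
    floor and ceil detector positions (the two rounding operators of
    formula (6)).  `filtered` has shape (n_angles, n_det).
    """
    n_det = filtered.shape[1]
    c = (size - 1) / 2.0
    rows, cols = np.mgrid[0:size, 0:size]
    recon = np.zeros((size, size))
    for i, ang in enumerate(np.deg2rad(angles_deg)):
        # t_i = row*cos(theta_i) + col*sin(theta_i), shifted to detector bins
        t = (rows - c) * np.cos(ang) + (cols - c) * np.sin(ang) + (n_det - 1) / 2.0
        lo = np.clip(np.floor(t).astype(int), 0, n_det - 1)   # floor operator
        hi = np.clip(np.ceil(t).astype(int), 0, n_det - 1)    # ceil operator
        w = t - np.floor(t)
        recon += (1 - w) * filtered[i, lo] + w * filtered[i, hi]
    return recon / len(angles_deg)

# Back-projecting an all-ones sinogram yields a constant image.
recon = backproject(np.ones((4, 16)), [0, 45, 90, 135], 8)
print(recon.shape)   # (8, 8)
```

Because the interpolation weights are differentiable in the filtered projection values, the same formula supports the gradient back-propagation the module requires when written with autograd tensors.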
(3) Image-domain enhancement network: the image-domain enhancement network sits at the back end of the network and uses U-Net, the most widely used image restoration network, with parameter settings consistent with the conventional U-Net; the function of this sub-network is to further enhance the reconstruction in the CT image domain;
the training process of the convolutional neural network based on dual-domain enhancement is as follows:
the dual-domain enhancement network is trained with the convolutional neural network structure shown in formulas (7) and (8) to obtain a determined network model that realizes phase contrast CT enhancement;
wherein formulas (7) and (8) are the multidimensional convolution operations acting on the projection domain and the reconstructed-image domain, respectively; X_0 ∈ R^(H×W) is the input phase contrast projection sequence, with H and W the length and width of the projection sequence; through a series of multidimensional convolution operations, the projection-domain convolutional neural network produces the output X_n ∈ R^(H×W), the enhanced phase contrast projection image; K_1 and b_1 are the convolution kernels and biases of the convolution layers of the projection-domain network; Y_0 ∈ R^(W×W) is the input to the image-domain network; through a series of multidimensional convolution operations, the image-domain convolutional neural network produces the output Y_n ∈ R^(W×W), the enhanced tomographic reconstructed image; K_2 and b_2 are the convolution kernels and biases of the convolution layers of the reconstructed-image-domain network; n denotes the n-th convolution layer.
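The layer recursion that formulas (7) and (8) express, X_n = activation(K_n ∗ X_{n−1} + b_n), can be sketched in NumPy; the single-channel kernels, zero padding, and ReLU activation below are simplifying assumptions, not the patent's exact layer configuration:

```python
import numpy as np

def conv2d_same(x, k, b):
    """2-D convolution (cross-correlation form, as in deep learning) with
    zero padding so the output keeps the input size."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out + b

def stacked_conv(x0, kernels, biases):
    """X_n = ReLU(K_n * X_{n-1} + b_n): the per-layer recursion behind
    formulas (7) and (8) for either domain network."""
    x = x0
    for k, b in zip(kernels, biases):
        x = np.maximum(conv2d_same(x, k, b), 0.0)
    return x

x0 = np.random.rand(16, 16)
ks = [np.full((3, 3), 1 / 9.0) for _ in range(3)]   # three averaging layers
bs = [0.0] * 3
print(stacked_conv(x0, ks, bs).shape)   # (16, 16)
```

In practice K_1/b_1 and K_2/b_2 are learned by back-propagating a reconstruction loss through both domain networks and the reconstruction module simultaneously.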
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310019258.5A CN116091636A (en) | 2023-01-06 | 2023-01-06 | Incomplete data reconstruction method for X-ray differential phase contrast imaging based on dual-domain enhancement |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116091636A true CN116091636A (en) | 2023-05-09 |
Family
ID=86202022
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310019258.5A Pending CN116091636A (en) | 2023-01-06 | 2023-01-06 | Incomplete data reconstruction method for X-ray differential phase contrast imaging based on dual-domain enhancement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116091636A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116977473A (en) * | 2023-09-21 | 2023-10-31 | 北京理工大学 | Sparse angle CT reconstruction method and device based on projection domain and image domain |
CN116977473B (en) * | 2023-09-21 | 2024-01-26 | 北京理工大学 | Sparse angle CT reconstruction method and device based on projection domain and image domain |
CN117115577A (en) * | 2023-10-23 | 2023-11-24 | 南京安科医疗科技有限公司 | Cardiac CT projection domain optimal phase identification method, equipment and medium |
CN117115577B (en) * | 2023-10-23 | 2023-12-26 | 南京安科医疗科技有限公司 | Cardiac CT projection domain optimal phase identification method, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116091636A (en) | Incomplete data reconstruction method for X-ray differential phase contrast imaging based on dual-domain enhancement | |
CN111081354B (en) | System and method for denoising medical images through deep learning network | |
CN111429379B (en) | Low-dose CT image denoising method and system based on self-supervision learning | |
Yuan et al. | SIPID: A deep learning framework for sinogram interpolation and image denoising in low-dose CT reconstruction | |
CN112396672B (en) | Sparse angle cone-beam CT image reconstruction method based on deep learning | |
Onishi et al. | Anatomical-guided attention enhances unsupervised PET image denoising performance | |
CN111091575B (en) | Medical image segmentation method based on reinforcement learning method | |
Xue et al. | LCPR-Net: low-count PET image reconstruction using the domain transform and cycle-consistent generative adversarial networks | |
CN111882503A (en) | Image noise reduction method and application thereof | |
Zhu et al. | Metal artifact reduction for X-ray computed tomography using U-net in image domain | |
CN110070510A (en) | A kind of CNN medical image denoising method for extracting feature based on VGG-19 | |
CN116452423A (en) | Simultaneous sparse angle CT reconstruction and metal artifact high-precision correction method | |
Chan et al. | An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction | |
CN109816747A (en) | A kind of metal artifacts reduction method of Cranial Computed Tomography image | |
WO2022027216A1 (en) | Image denoising method and application thereof | |
CN116503506B (en) | Image reconstruction method, system, device and storage medium | |
CN112070856A (en) | Limited angle C-arm CT image reconstruction method based on non-subsampled contourlet transform | |
Sun et al. | A lightweight dual-domain attention framework for sparse-view CT reconstruction | |
CN114137002B (en) | Low-dose X-ray differential phase contrast imaging method based on contrast enhancement | |
CN115049753B (en) | Cone beam CT artifact correction method based on unsupervised deep learning | |
Wu et al. | Deep learning-based low-dose tomography reconstruction with hybrid-dose measurements | |
Gao et al. | Attention-based dual-branch deep network for sparse-view computed tomography image reconstruction | |
CN116485925A (en) | CT image ring artifact suppression method, device, equipment and storage medium | |
Bai et al. | Dual-domain unsupervised network for removing motion artifact related to Gadoxetic acid-enhanced MRI | |
CN114494498B (en) | Metal artifact removing method based on double-domain Fourier neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||