CN116843779A - Linear scanning detector differential BPF reconstructed image sparse artifact correction method - Google Patents
- Publication number: CN116843779A (application CN202310666199.0A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06T11/003 — Reconstruction from projections, e.g. tomography
- G06T11/006 — Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
- G06T11/008 — Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/0475 — Generative networks
- G06N3/094 — Adversarial learning
- G06T2207/10081 — Computed x-ray tomography [CT]
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
Abstract
A method for correcting sparse artifacts in linear-scanning-detector differential BPF reconstructed images, belonging to the technical fields of image processing and CT imaging, comprises the following steps: initializing parameters, including the number of linear scanning segments; acquiring a CT image under sparse-angle projection using the linear-scanning-detector differential BPF operator; acquiring a real label image with a linear-scanning filtered back-projection algorithm, and converting the artifact-bearing image using a deep learning network; and performing Gaussian blind denoising on the converted image with an image post-processing network, finally outputting a high-quality CT image. This sparse artifact correction method, oriented to the linear CT scanning trajectory, effectively avoids the artifacts and noise caused by sparse-angle scanning, can reconstruct high-quality images from fewer cone-beam projections, and improves scanning efficiency and imaging quality.
Description
Technical Field
The invention belongs to the technical fields of image processing and CT imaging, and particularly relates to a method for correcting sparse artifacts in differential BPF reconstructed images from a linear-scanning detector.
Background
Micro-focus computed tomography (micro-CT) is an advanced technique that images the internal structure of an object at high resolution and is widely used across many fields. It is well known that in CT systems a higher spatial resolution can be achieved by setting a larger geometric magnification ratio. However, a larger geometric magnification generally results in a smaller field of view (FOV); that is, high spatial resolution and a large FOV are difficult to obtain simultaneously. To solve this problem, a number of new CT scanning structures and corresponding image reconstruction algorithms have been proposed. Among them, Liu Fenglin et al. at Chongqing University proposed a new CT scanning mode with a simple structure and low cost, namely the source linear scanning mode [1,2]. In this mode the flat-panel detector is fixed, the imaged object is placed close to the source, and high-resolution imaging of the object is achieved by translating the source along a linear trajectory parallel to the detector. In this case, however, the beam cannot fully cover the measured object, i.e., every projection view obtained is truncated [3]. If a filtered back-projection (FBP) algorithm is used, the reconstructed image will contain serious truncation artifacts. This is because the global ramp-filter operator in the FBP algorithm must process the entire projection: if the projection is truncated, the ramp filter contaminates the whole projection, and the subsequent back-projection propagates the error across the entire image. In this situation, not even a portion of the true image can be recovered.
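The contrast between the global ramp filter and a local operator can be sketched numerically. The following is a minimal illustration (not part of the patent): an FFT-based ramp filter spreads the effect of lateral truncation over the entire filtered projection, while a finite-difference derivative is affected only near the truncated edge.

```python
import numpy as np

def ramp_filter(row):
    """Global |omega| (ramp) filtering via FFT, as used in FBP-type algorithms."""
    freqs = np.fft.fftfreq(row.shape[-1])
    return np.real(np.fft.ifft(np.fft.fft(row) * np.abs(freqs)))

def derivative(row):
    """Local differential operator, as used in BPF-type algorithms."""
    return np.gradient(row)

# A smooth projection row and a laterally truncated copy of it.
u = np.linspace(-3.0, 3.0, 256)
full = np.exp(-u ** 2)
trunc = full.copy()
trunc[:40] = 0.0  # simulate lateral truncation of the projection

ramp_err = np.abs(ramp_filter(full) - ramp_filter(trunc))
diff_err = np.abs(derivative(full) - derivative(trunc))

# Far from the cut (index 200): the global ramp filter still shows an error,
# while the local derivative is completely unaffected there.
print(ramp_err[200] > 1e-9, diff_err[200] == 0.0)
```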
To overcome this limitation, existing research designed an iterative reconstruction algorithm [4] for the source linear-scanning CT structure. Although this algorithm can reconstruct images free of truncation artifacts, it places heavy demands on the computing hardware platform, and its reconstruction efficiency is unsatisfactory. A virtual-projection-geometry method can resolve the truncation artifacts caused by the FBP algorithm, but because the projection geometry is exchanged, the ramp filtering is applied along the source trajectory, so a huge number of source sampling points (i.e., projection views) is required for high-quality image reconstruction. For this reason, a back-projection filtration (BPF) analytic reconstruction algorithm was proposed for this scan geometry [5], consisting mainly of two steps: differential back projection (DBP) and a finite inverse Hilbert transform. Unlike the ramp filtering in FBP-type algorithms, the BPF algorithm can exactly reconstruct from truncated projections because both the differentiation and the finite inverse Hilbert transform are local operations, i.e., the result at a point depends only on data in its neighborhood. If the projection data are truncated, the derivative can still be computed on the remaining portion of the projection, and the intermediate back-projected result remains accurate inside the FOV. However, to reconstruct the complete image, the source linear scan usually must be combined with several rotations of the measured object so that complete projections are obtained for reconstruction.
In this case, the Hilbert images produced by the DBP step of the BPF algorithm must be inverse-transformed along different reconstruction directions, so the subsequent finite inverse Hilbert transform must support customizable directions to complete the reconstruction.
However, this method must reconstruct over the complete scanning angle, which leads to slow scanning and low reconstruction efficiency. If a sparse scanning mode is adopted to improve the scanning and reconstruction speed, data at certain projection angles are missing, introducing severe streak artifacts and seriously degrading image quality. References:
[1] Liu Fenglin, Yu Haijun, Li Lei, Tan Chuandong. A novel large field-of-view linear scanning CT system and image reconstruction method [P]. Chongqing: CN111839568A, 2020-10-30.
[2] H. Yu, L. Li, C. Tan, F. Liu, R. Zhou. X-ray source translation based computed tomography (STCT). Optics Express, 29 (2021) 19743-19758; R. Clackdoyle, F. Noo. A large class of inversion formulae for the 2D Radon transform of functions of compact support. Inverse Problems, 20 (2004) 1281.
[3] Wen Jie, Yu Haijun, Chen Jie, et al. Source linear-scan computed tomography analytical reconstruction based on derivative-Hilbert-transform-backprojection [J]. Optics, 2022, 42(11): 292-303.
[4] Li Lei, Yu Haijun, Tan Chuandong, Duan Xiaojiao, Liu Fenglin. Analytical reconstruction algorithm for radiation source translation scanning CT [J]. Chinese Journal of Scientific Instrument, 2022, 43(02): 187-195. DOI: 10.19650/j.cnki.cjsi.J2108157.
[5] Yu H, Ni S, Chen J, et al. Analytical reconstruction algorithm for multiple source-translation computed tomography (mSTCT) [J]. Applied Mathematical Modelling, 2023, 117: 251-266.
Disclosure of Invention
In view of the slow scanning speed and low reconstruction efficiency of the above algorithm, and the streak artifacts introduced when a sparse projection mode is adopted, the invention provides an artifact removal method oriented to sparse-angle source linear-scanning CT, mainly aimed at circular trajectories and two-dimensional planar CT scanning modes.
The method for correcting sparse artifacts in linear-scanning-detector differential BPF reconstructed images mainly comprises the following steps:
S1, initializing the parameters: the number of linear scanning segments T and the rotation angle interval Δθ, and forming the zero space of the image to be reconstructed;
S2, acquiring an artifact-bearing CT image under sparse-angle projection using the linear-scanning-detector differential BPF algorithm;
S3, acquiring a label image with the linear-scanning filtered back-projection algorithm to serve as label data for the first deep learning network;
S4, removing artifacts from the CT image obtained in step S2 using a first deep learning network, which comprises a generator and a discriminator, where the generator consists of an encoder formed by several two-dimensional convolution modules and a self-attention module and a decoder formed by several up-sampling layers and convolution layers, and the discriminator comprises multi-stage fully convolutional layers responsible for discriminating between the generated image and the label image;
S5, performing Gaussian blind denoising on the artifact-removed CT image obtained in step S4 using a second deep learning network, which comprises several cascaded structures formed by two-dimensional convolution, batch normalization, and leaky ReLU activation functions, and outputting the corrected high-quality CT image.
Preferably, step S4 mainly comprises:
S41, the encoding module in the generator of the first deep learning network encodes the two-dimensionally convolved feature map into an input sequence with global context;
S42, the decoder up-samples and convolves the resulting feature sequence to generate the artifact-removed CT image;
S43, the discriminator passes the artifact-removed CT image obtained in step S42 and the label image through the same convolution layers, normalization layers, and activation functions to obtain the probability that the generated image is real or fake.
In step S4, the discriminator in the first deep learning network is a multi-scale discriminator; the number and sizes of its downsampling stages can be chosen according to the characteristics of the processed image. For sparse-angle CT images, a smaller downsampling size means more attention to the texture details of the image, while a larger downsampling size means more attention to its global consistency.
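A multi-scale discriminator operates on an image pyramid: the same discriminator architecture is applied to the image at several resolutions. A minimal sketch of building such a pyramid (illustrative, not the patent's implementation) by 2× average pooling:

```python
import numpy as np

def downsample2x(img):
    """2x average pooling: one coarser scale for a multi-scale discriminator."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]  # crop to even dimensions
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def image_pyramid(img, n_scales=3):
    """Scales fed to the discriminator copies: fine scales keep texture detail,
    coarse scales emphasize global consistency."""
    scales = [img]
    for _ in range(n_scales - 1):
        scales.append(downsample2x(scales[-1]))
    return scales

pyr = image_pyramid(np.ones((256, 256)), n_scales=3)
print([p.shape for p in pyr])  # [(256, 256), (128, 128), (64, 64)]
```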
Preferably, the loss function of the first deep learning network consists of three parts, calculated as follows:
where n denotes the number of downsampling sizes selected in the discriminator; G and D_k denote the losses of the network's generator and discriminator, respectively; L_FM denotes the downsampled feature-map loss of the first deep learning network; L_c denotes the content loss, responsible for ensuring consistency of the image content; and L_GAN is the global adversarial loss, responsible for learning the details of the generated image.
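The patent's exact formula and weights are not reproduced in this text, but a common shape for such a three-part objective (adversarial term + per-scale feature-matching term + content term) can be sketched as follows. The weights `lam_fm` and `lam_c` and the non-saturating adversarial term are assumptions for illustration:

```python
import numpy as np

def l1(a, b):
    return float(np.mean(np.abs(a - b)))

def composite_generator_loss(fake_feats, real_feats, fake_img, real_img,
                             disc_scores, lam_fm=10.0, lam_c=10.0):
    """Illustrative three-part generator objective: L_GAN + lam_fm*L_FM + lam_c*L_c."""
    # L_GAN: non-saturating generator loss on the discriminator's scalar outputs
    l_gan = float(np.mean([-np.log(s + 1e-8) for s in disc_scores]))
    # L_FM: match discriminator feature maps at each downsampling scale
    l_fm = sum(l1(f, r) for f, r in zip(fake_feats, real_feats)) / len(fake_feats)
    # L_c: pixel-space content consistency between generated and label image
    l_c = l1(fake_img, real_img)
    return l_gan + lam_fm * l_fm + lam_c * l_c

# Identical inputs and a perfectly fooled discriminator give a near-zero loss.
img = np.zeros((4, 4))
feats = [np.zeros((2, 2))]
loss = composite_generator_loss(feats, feats, img, img, disc_scores=[1.0])
print(abs(loss) < 1e-6)
```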
Preferably, in step S5, when training the denoising model, an MSE loss function is used to constrain the output error map against the true error map, and over the training data the loss function is defined as:
where θ denotes the network parameters in which the information of the training set is stored.
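The MSE constraint between the predicted and true error (noise) maps can be written out directly; a minimal sketch, with the residual-learning framing assumed:

```python
import numpy as np

def mse_loss(predicted_error_map, true_error_map):
    """Mean squared error between the network's predicted error (noise) map
    and the true error map, as used in residual-learning denoisers."""
    return float(np.mean((predicted_error_map - true_error_map) ** 2))

pred = np.zeros((8, 8))
target = np.full((8, 8), 0.5)
print(mse_loss(pred, target))  # 0.25
```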
Preferably, in step S2, the artifact-bearing CT image under sparse-angle projection is acquired with the linear-scanning-detector differential BPF algorithm, in the following steps:
(1) generating projection sinogram data by forward projection of a simulated geometry or a real image;
(2) acquiring the T-segment linear-scanning image with the line-scanning-detector differential BPF algorithm, whose calculation formula is as follows:
where (x, y) denotes a point to be reconstructed; s is half of the linear translation length of the ray source; η_i denotes the direction angle along which the finite inverse Hilbert transform is applied to the i-th segment Hilbert image; and 1/L² is the back-projection weighting factor. The back-projection operation is performed in a matrix larger than the image to be reconstructed, i.e., p_0 = 0.5 × I zeros are padded outside each edge of the matrix space of the image to be reconstructed, where I is the number of rows or columns of the image to be reconstructed. The differential projection is obtained by differentiating the projection along the detector direction u, with the calculation formula
where ∂/∂u denotes the differential operator, implemented by finite-difference operations: central differences are used for data inside the projection, and one-sided differences for data at the projection boundary; u* denotes the detector coordinate of the ray passing through the point to be reconstructed,
where L denotes the perpendicular distance from the point to be reconstructed to the source trajectory, L = −x sin θ_i + y cos θ_i + l, and H denotes the perpendicular distance from the point to be reconstructed to the detector, H = x sin θ_i − y cos θ_i + h;
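The finite-difference scheme described above (central differences in the interior, one-sided differences at the projection boundary) is exactly what `numpy.gradient` implements, so the differentiation step can be sketched as:

```python
import numpy as np

def projection_derivative(proj, du=1.0):
    """Derivative along the detector coordinate u: central differences in the
    interior, one-sided differences at the two boundary samples."""
    return np.gradient(proj, du, axis=-1)

# One projection row sampling u^2 at u = 0..4: interior derivatives are the
# central-difference values, boundaries use one-sided differences.
p = np.array([[0.0, 1.0, 4.0, 9.0, 16.0]])
print(projection_derivative(p))  # [[1. 2. 4. 6. 7.]]
```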
(3) applying the finite inverse Hilbert transform to the i-th segment Hilbert image along the direction of the i-th segment's linear source-translation trajectory, which reconstructs the i-th segment limited-angle image; the calculation formula is
where y_1 denotes the one-dimensional Hilbert transform direction, y_1 ∈ [L_y + ε_y, U_y − ε_y], with [L_y, U_y] the finite interval of the Hilbert transform and ε_y a small positive number, together with an unknown constant to be calculated when reconstructing the i-th segment limited-angle image;
(4) repeating the above steps T times, where T is the number of linear scanning segments, and superposing the reconstruction results of all segments to obtain the complete reconstructed image.
The input to the first deep learning network, i.e., the artifact-bearing image dataset needed for training, is thus obtained.
The beneficial effects of the invention are as follows:
The method combines the advantages of the high reconstruction accuracy of deep learning with the linear-scanning-detector differential BPF algorithm, and can realize accurate imaging of a target region of interest (ROI), thereby effectively avoiding the problem of lateral projection truncation. In addition, the training and testing of the first and second deep learning networks can be performed on a GPU: every operation in the networks is simple and local, and well suited to GPU-based parallelization; for example, the convolutions in the networks can be driven by GPU computing supported by the MatConvNet toolbox or cuDNN (NVIDIA).
Description of the drawings:
FIG. 1 is a flow chart of the artifact removal method for sparse-angle source linear-scanning CT of the present invention;
FIG. 2 shows the input and output results of the training and testing stages of the first deep learning model;
FIG. 3 is a flow chart of the training process of the second deep learning network employed by the invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings and examples.
The method is described in detail below with reference to the accompanying drawings and specific embodiments, as shown in Fig. 1. For clarity of explanation, an implementation example is given with corresponding explanations.
Aiming at the problems that the existing BPF image reconstruction process in source linear-scanning CT suffers from slow scanning speed and low reconstruction efficiency, and that a sparse projection mode introduces streak artifacts, the invention provides a two-dimensional planar parallel-beam CT image reconstruction method based on an image-translation deep learning network. The overall idea is as follows: a deep learning model learns the conversion from an image with streak artifacts to a clean image, thereby reconstructing a higher-quality image; on this basis, a second deep learning network performs Gaussian blind denoising on the reconstructed image to further improve its quality. It should be noted that the illustrations provided in the following embodiments merely illustrate the basic idea of the invention, and the following embodiments and their features may be combined with each other as long as they do not conflict. An implementation example follows.
Implementation example:
A flow diagram and an implementation-process diagram of the artifact removal method for sparse-angle source linear-scanning CT are shown in Fig. 1.
S1, initializing the parameters: the number of linear scanning segments T and the rotation angle interval Δθ, and forming the zero space of the image to be reconstructed;
S2, acquiring an artifact-bearing CT image under sparse-angle projection using the linear-scanning-detector differential BPF algorithm;
S3, acquiring a label image with the linear-scanning filtered back-projection algorithm to serve as label data for the first deep learning network;
S4, removing artifacts from the CT image obtained in step S2 using a first deep learning network, which comprises a generator and a discriminator, where the generator consists of an encoder formed by several two-dimensional convolution modules and a self-attention module and a decoder formed by several up-sampling layers and convolution layers, and the discriminator comprises multi-stage fully convolutional layers responsible for discriminating between the generated image and the label image;
S5, performing Gaussian blind denoising on the artifact-removed CT image obtained in step S4 using a second deep learning network, which comprises several cascaded structures formed by two-dimensional convolution, batch normalization, and leaky ReLU activation functions, and outputting the corrected high-quality CT image.
Preferably, step S4 mainly comprises:
S41, the encoding module in the generator of the first deep learning network encodes the two-dimensionally convolved feature map into an input sequence with global context;
S42, the decoder up-samples and convolves the resulting feature sequence to generate the artifact-removed CT image;
S43, the discriminator passes the artifact-removed CT image obtained in step S42 and the label image through the same convolution layers, normalization layers, and activation functions to obtain the probability that the generated image is real or fake.
Preferably, in step S4, the discriminator in the first deep learning network is a multi-scale discriminator; the number and sizes of its downsampling stages can be chosen according to the characteristics of the processed image. For sparse-angle CT images, a smaller downsampling size means more attention to the texture details of the image, while a larger downsampling size means more attention to its global consistency.
Preferably, the loss function of the first deep learning network consists of three parts, calculated as follows:
where n denotes the number of downsampling sizes selected in the discriminator; G and D_k denote the losses of the network's generator and discriminator, respectively; L_FM denotes the downsampled feature-map loss of the first deep learning network; L_c denotes the content loss, responsible for ensuring consistency of the image content; and L_GAN is the global adversarial loss, responsible for learning the details of the generated image.
Preferably, in step S5, when training the denoising model, an MSE loss function is used to constrain the output error map against the true error map, and over the training data the loss function is defined as:
where θ denotes the network parameters in which the information of the training set is stored.
To further illustrate a specific embodiment of the invention: in step S2, the artifact-bearing CT image under sparse-angle projection is acquired with the linear-scanning-detector differential BPF algorithm, in the following steps:
(1) generating projection sinogram data by forward projection of a simulated geometry or a real image;
(2) acquiring the T-segment linear-scanning image with the line-scanning-detector differential BPF algorithm, whose calculation formula is as follows:
where (x, y) denotes a point to be reconstructed; s is half of the linear translation length of the ray source; η_i denotes the direction angle along which the finite inverse Hilbert transform is applied to the i-th segment Hilbert image; and 1/L² is the back-projection weighting factor. The back-projection operation is performed in a matrix larger than the image to be reconstructed, i.e., p_0 = 0.5 × I zeros are padded outside each edge of the matrix space of the image to be reconstructed, where I is the number of rows or columns of the image to be reconstructed. The differential projection is obtained by differentiating the projection along the detector direction u, with the calculation formula
where ∂/∂u denotes the differential operator, implemented by finite-difference operations: central differences are used for data inside the projection, and one-sided differences for data at the projection boundary; u* denotes the detector coordinate of the ray passing through the point to be reconstructed,
where L denotes the perpendicular distance from the point to be reconstructed to the source trajectory, L = −x sin θ_i + y cos θ_i + l, and H denotes the perpendicular distance from the point to be reconstructed to the detector, H = x sin θ_i − y cos θ_i + h;
(3) applying the finite inverse Hilbert transform to the i-th segment Hilbert image along the direction of the i-th segment's linear source-translation trajectory, which reconstructs the i-th segment limited-angle image; the calculation formula is
where y_1 denotes the one-dimensional Hilbert transform direction, y_1 ∈ [L_y + ε_y, U_y − ε_y], with [L_y, U_y] the finite interval of the Hilbert transform and ε_y a small positive number, together with an unknown constant to be calculated when reconstructing the i-th segment limited-angle image;
(4) repeating the above steps T times, where T is the number of linear scanning segments, and superposing the reconstruction results of all segments to obtain the complete reconstructed image.
The input to the first deep learning network, i.e., the artifact-bearing image dataset needed for training, is thus obtained.
The process of training and then using the first deep learning network of this embodiment is shown in Fig. 2, and comprises:
step (1): first, acquiring images under the sparse-angle condition using the linear-scanning-detector differential BPF algorithm, and acquiring label images under the standard scanning-angle condition;
step (2): inputting the paired images and label images into the first deep learning network model for training;
step (3): inputting a sparse-angle CT image obtained with the linear-scanning-detector differential BPF algorithm into the trained network model and outputting a high-quality CT image.
Fig. 3 shows the training flow of the second deep learning network of this embodiment: the output images of the first deep learning network model serve as input for model training, and high-quality denoised CT images are output.
The training parameter settings of the first deep learning network in this embodiment are shown in table 1 below.
TABLE 1 simulation scan parameters and deep learning network parameters table
(Note: Batch is the batch input to the network per training step; Batch size is the number of training samples in each batch.)
The invention is not limited to the specific embodiments described above, which are intended to be illustrative rather than restrictive; those skilled in the art, in light of this disclosure, may derive numerous variants without departing from the spirit of the invention and the scope of the appended claims.
Claims (5)
1. A method for correcting sparse artifacts in linear-scanning-detector differential BPF reconstructed images, characterized by comprising the following steps:
S1, initializing the parameters: the number of linear scanning segments T and the rotation angle interval Δθ, and forming the zero space of the image to be reconstructed;
S2, acquiring an artifact-bearing CT image under sparse-angle projection using the linear-scanning-detector differential BPF algorithm;
S3, acquiring a label image with the linear-scanning filtered back-projection algorithm to serve as label data for a first deep learning network;
S4, removing artifacts from the CT image obtained in step S2 using the first deep learning network, which comprises a generator and a discriminator, where the generator consists of an encoder formed by several two-dimensional convolution modules and a self-attention module and a decoder formed by several up-sampling layers and convolution layers, and the discriminator comprises multi-stage fully convolutional layers responsible for discriminating between the generated image and the label image;
S5, performing Gaussian blind denoising on the artifact-removed CT image obtained in step S4 using a second deep learning network, which comprises several cascaded structures formed by two-dimensional convolution, batch normalization, and leaky ReLU activation functions, and outputting the corrected high-quality CT image.
2. The method for correcting sparse artifacts in a differential BPF reconstructed image from a linear scanning detector according to claim 1, wherein step S4 comprises:
S41, an encoding module in the generator of the first deep learning network encodes the feature map produced by the two-dimensional convolutions into an input sequence carrying the global context;
S42, the decoder performs up-sampling and convolution on the obtained feature sequence to generate the artifact-removed CT image;
S43, the discriminator passes the artifact-removed CT image obtained in step S42 and the label image through the same convolution layers, normalization layers and activation functions to obtain the probability that the generated image is true or false.
3. The method according to claim 2, wherein in step S4 the discriminator in the first deep learning network is a multi-scale discriminator, and the number and sizes of the downsampled discriminators can be selected according to the characteristics of the processed images; for sparse-angle CT images a smaller downsampling size is used, meaning more attention is paid to the texture details of the image, while a larger downsampling size means more attention is paid to the global consistency of the image.
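The multi-scale input implied by claim 3 is commonly built as an image pyramid, with one discriminator copy per scale. A minimal sketch (function names hypothetical; 2x average pooling is an assumed downsampling choice, not stated in the patent):

```python
# Build the downsampled copies a multi-scale discriminator would see.
def avg_pool_2x(img):
    """2x2 average pooling on a 2-D image given as nested lists."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*i][2*j] + img[2*i][2*j+1] +
              img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 4.0
             for j in range(w)] for i in range(h)]

def image_pyramid(img, num_scales):
    """Full-resolution image plus (num_scales - 1) downsampled copies."""
    pyramid = [img]
    for _ in range(num_scales - 1):
        pyramid.append(avg_pool_2x(pyramid[-1]))
    return pyramid

img = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
pyr = image_pyramid(img, num_scales=3)
assert [len(p) for p in pyr] == [8, 4, 2]   # one resolution per discriminator
```

The full-resolution copy lets its discriminator judge texture detail, while the coarsest copy emphasizes global consistency, matching the trade-off described in the claim.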
4. The method for correcting sparse artifacts in a differential BPF reconstructed image from a linear scanning detector according to claim 2, wherein the loss function of the first deep learning network is composed of three parts, calculated as follows:
wherein n represents the number of downsampling sizes selected in the discriminator, G and D_k represent the generator and the k-th discriminator of the network, respectively, L_FM represents the downsampled feature-map loss of the first deep learning network, L_c represents the content-consistency loss, responsible for ensuring consistency of the image content, and L_GAN represents the global adversarial loss, responsible for learning the details of the generated image.
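A typical way to combine the three loss terms named in claim 4 is a weighted sum over the n discriminator scales; the sketch below is a hedged illustration (the weights lambda_fm and lambda_c, and the function name, are illustrative assumptions, not values given in the patent):

```python
# Combine adversarial, feature-matching, and content losses into one
# generator objective, summing the per-scale terms over k = 1..n.
def total_generator_loss(adv_losses, fm_losses, content_loss,
                         lambda_fm=10.0, lambda_c=1.0):
    """adv_losses[k], fm_losses[k]: losses from the k-th discriminator scale."""
    assert len(adv_losses) == len(fm_losses)   # one pair per scale
    return (sum(adv_losses)                    # L_GAN terms
            + lambda_fm * sum(fm_losses)       # L_FM terms
            + lambda_c * content_loss)         # L_c term

loss = total_generator_loss([0.5, 0.7], [0.02, 0.03], content_loss=0.1)
assert abs(loss - 1.8) < 1e-9   # 1.2 + 10*0.05 + 0.1
```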
5. The method for correcting sparse artifacts in a differential BPF reconstructed image from a linear scanning detector according to claim 1, wherein in step S5 an MSE loss function is used to constrain the output error map against the true error map during the training of the noise-reduction model; the loss function over the training data is defined as:
where θ denotes the network parameters in which the information of the training set is stored.
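The MSE constraint in claim 5 can be sketched without a framework: the network's predicted error (noise) map is compared against the true error map over N training pairs. Flat pixel lists and the function name are illustrative stand-ins, not the patent's notation:

```python
# Mean squared error between predicted and true error maps over N pairs.
def mse_loss(predicted_maps, true_error_maps):
    """Average per-pixel squared error, averaged again over the N pairs."""
    n = len(predicted_maps)
    total = 0.0
    for pred, true in zip(predicted_maps, true_error_maps):
        total += sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred)
    return total / n

loss = mse_loss([[0.1, 0.2], [0.0, 0.4]],   # network's predicted error maps
                [[0.1, 0.0], [0.0, 0.0]])   # true error maps
assert abs(loss - 0.05) < 1e-12   # (0.04/2 + 0.16/2) / 2
```

During training, θ would be updated by gradient descent on this loss, which is what stores the information of the training set in the network parameters.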
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202310302823 | 2023-03-24 | | |
| CN2023103028239 | 2023-03-24 | | |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN116843779A | 2023-10-03 |
Family
ID=88173449
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202310666199.0A (Pending) | Linear scanning detector differential BPF reconstructed image sparse artifact correction method | 2023-03-24 | 2023-06-06 |
Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN116843779A |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN117876279A | 2024-03-11 | 2024-04-12 | 浙江荷湖科技有限公司 | Method and system for removing motion artifact based on scanned light field sequence image |
| CN117876279B | 2024-03-11 | 2024-05-28 | 浙江荷湖科技有限公司 | Method and system for removing motion artifact based on scanned light field sequence image |
Similar Documents
| Publication | Title |
| --- | --- |
| EP3506209B1 | Image processing method, image processing device and storage medium |
| Zhang et al. | Image prediction for limited-angle tomography via deep learning with convolutional neural network |
| CN110660123B | Three-dimensional CT image reconstruction method and device based on neural network and storage medium |
| CN110544282B | Three-dimensional multi-energy spectrum CT reconstruction method and equipment based on neural network and storage medium |
| US9801591B2 | Fast iterative algorithm for superresolving computed tomography with missing data |
| US9824468B2 | Dictionary learning based image reconstruction |
| US7929746B2 | System and method for processing imaging data |
| CN112102428B | CT cone beam scanning image reconstruction method, scanning system and storage medium |
| Xie et al. | Dual network architecture for few-view CT-trained on ImageNet data and transferred for medical imaging |
| CN106228601A | Multiple dimensioned pyramidal CT image quick three-dimensional reconstructing method based on wavelet transformation |
| CN111739113B | CT image reconstruction method and device for linear distributed light source and detector |
| Okamoto et al. | Artifact reduction for sparse-view CT using deep learning with band patch |
| CN116630460A | Detector line differential high-quality image reconstruction method for source linear scanning track |
| CN116843779A | Linear scanning detector differential BPF reconstructed image sparse artifact correction method |
| CN116188615A | Sparse angle CT reconstruction method based on sine domain and image domain |
| CN113554729B | CT image reconstruction method and system |
| Guo et al. | Iterative image reconstruction for limited-angle CT using optimized initial image |
| Okamoto et al. | Patch-based artifact reduction for three-dimensional volume projection data of sparse-view micro-computed tomography |
| Yang et al. | Iterative excitation with noise rejection techniques for X-ray computed tomography of hollow turbine blades |
| Mu et al. | Sparse filtered SIRT for electron tomography |
| Wu et al. | Deep learning-based low-dose tomography reconstruction with hybrid-dose measurements |
| CN116645441A | Multi-model deep learning Hilbert inverse transformation reconstruction method for source linear scanning CT |
| Thaler et al. | Volumetric reconstruction from a limited number of digitally reconstructed radiographs using CNNs |
| Valat et al. | Sinogram inpainting with generative adversarial networks and shape priors (Tomography 2023, 9, 1137-1152) |
| CN116977473B | Sparse angle CT reconstruction method and device based on projection domain and image domain |
Legal Events
| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |