CN116843779A - Linear scanning detector differential BPF reconstructed image sparse artifact correction method - Google Patents


Info

Publication number
CN116843779A
CN116843779A (application CN202310666199.0A)
Authority
CN
China
Prior art keywords
image
deep learning
sparse
bpf
linear scanning
Prior art date
Legal status (assumed, not a legal conclusion; Google has not performed a legal analysis)
Pending
Application number
CN202310666199.0A
Other languages
Chinese (zh)
Inventor
汪志胜
崔俊宁
王顺利
赵亚敏
边星元
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology
Publication of CN116843779A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0475 Generative networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/094 Adversarial learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 11/006 Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed X-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]


Abstract

A method for correcting sparse artifacts in linear-scanning detector-differential BPF reconstructed images belongs to the technical fields of image processing and CT imaging and comprises the following steps: initializing parameters, including the number of linear scanning segments; acquiring a CT image under sparse-angle projection using the linear-scanning detector-differential BPF operator; acquiring real label images with a linear-scanning filtered back-projection algorithm and converting the artifact-bearing image with a deep learning network; and performing blind Gaussian denoising on the converted image with an image post-processing network, finally outputting a high-quality CT image. This sparse-artifact correction method is designed for linear CT scanning trajectories; it effectively avoids the artifacts and noise caused by sparse-angle scanning, reconstructs high-quality images from fewer cone-beam projections, and improves scanning efficiency and imaging quality.

Description

Linear scanning detector differential BPF reconstructed image sparse artifact correction method
Technical Field
The invention belongs to the technical fields of image processing and CT imaging, and particularly relates to a method for correcting sparse artifacts of a differential BPF reconstructed image of a linear scanning detector.
Background
Micro-focus computed tomography (micro-CT) is an advanced technique for imaging the internal structure of an object at high resolution and is widely used across many fields. In a CT system, higher spatial resolution can be obtained by setting a larger geometric magnification ratio. However, a larger geometric magnification generally results in a smaller field of view (FOV); that is, high spatial resolution and a large FOV are difficult to achieve simultaneously. To address this problem, a number of new CT scanning structures and corresponding image reconstruction algorithms have been proposed. Among them, Liu Fenglin et al. of Chongqing University proposed a structurally simple, low-cost scanning mode, namely the source linear-scanning mode [1,2]. In this scanning mode, the flat-panel detector is fixed, the imaged object is placed close to the source, and high-resolution imaging is achieved by translating the source along a linear trajectory parallel to the detector. In this configuration, however, the beam cannot cover the measured object completely, i.e., every projection view is truncated [3]. If a filtered back-projection (FBP) algorithm is used, the reconstructed image suffers severe truncation artifacts. This is because the global ramp-filter operator in the FBP algorithm must process the entire projection; if the projection is truncated, the ramp filter contaminates the whole projection, and the subsequent back-projection propagates the error across the entire image. In this case, not even part of the true image can be recovered.
To overcome this limitation, an iterative reconstruction algorithm has been designed for the source linear-scanning CT structure [4]. Although it can reconstruct images free of truncation artifacts, it places heavy demands on the computing hardware platform, and its reconstruction efficiency is unsatisfactory. The virtual-projection-geometry approach can resolve the truncation artifacts caused by the FBP algorithm, but because the projection geometry is exchanged, ramp filtering is performed along the source trajectory, so an enormous number of source sampling points (i.e., projection views) is needed for high-quality image reconstruction. For this scan geometry, a back-projection filtration (BPF) analytic reconstruction algorithm has therefore been proposed [5]; it mainly comprises two steps, differentiated back-projection (DBP) and the finite Hilbert inverse transform. Unlike the ramp filtering of FBP-type algorithms, the BPF algorithm can reconstruct exactly from truncated projections, because both the differentiation and the finite Hilbert inverse transform are local operations, i.e., the result at a point depends only on neighboring data. Even when the projection data are truncated, derivatives can still be computed over the remaining projections, and the intermediate back-projected result remains accurate inside the FOV. However, to reconstruct the complete image, the source linear scan usually needs to be combined with several segment rotations of the measured object to obtain complete projections. In this case, the Hilbert images produced by the DBP step of the BPF algorithm must be inverse-transformed along different reconstruction directions, which requires a finite Hilbert inverse transform whose direction can be customized for each segment.
However, this method requires reconstruction over the complete scanning angle, which leads to slow scanning and low reconstruction efficiency. If a sparse scanning mode is adopted to speed up scanning and reconstruction, projection data at certain angles are missing, introducing severe streak artifacts and seriously degrading image quality. References:
[1] Liu Fenglin, Yu Haijun, Li Lei, Tan Chuandong. A novel large field-of-view linear scanning CT system and image reconstruction method [P]. Chongqing: CN111839568A, 2020-10-30.
[2] H. Yu, L. Li, C. Tan, F. Liu, R. Zhou. X-ray source translation based computed tomography (STCT). Optics Express, 29 (2021): 19743-19758; R. Clackdoyle, F. Noo. A large class of inversion formulae for the 2D Radon transform of functions of compact support. Inverse Problems, 20 (2004): 1281.
[3] Mian Wenjie, Yu Haijun, Chen Jie, et al. Source linear-scan computed tomography analytical reconstruction based on derivative-Hilbert transform-backprojection [J]. Acta Optica Sinica, 2022, 42(11): 292-303.
[4] Li Lei, Yu Haijun, Tan Chuandong, Duan Xiaojiao, Liu Fenglin. Analytical reconstruction algorithm for radiation source translation scanning CT [J]. Chinese Journal of Scientific Instrument, 2022, 43(02): 187-195. DOI: 10.19650/j.cnki.cjsi.J2108157.
[5] Yu H, Ni S, Chen J, et al. Analytical reconstruction algorithm for multiple source-translation computed tomography (mSTCT) [J]. Applied Mathematical Modelling, 2023, 117: 251-266.
Disclosure of Invention
In view of the slow scanning speed and low reconstruction efficiency of the above algorithm, and of the streak artifacts introduced by sparse projection, the invention provides an artifact-removal method for sparse-angle source linear-scanning CT, aimed mainly at circular trajectories and two-dimensional planar CT scanning modes.
The method for correcting sparse artifacts in linear-scanning detector-differential BPF reconstructed images mainly comprises the following steps:
s1, initializing the number T of parameter linear scanning segments and the rotation angle interval delta theta, wherein the zero space of an image to be reconstructed is formed
S2, acquiring a CT image with artifacts under sparse angle projection by utilizing a linear scanning detector differential BPF algorithm;
s3, acquiring a tag image by using a linear scanning filtering back projection algorithm to serve as tag data of a first deep learning network;
s4, performing artifact removal on the CT image obtained in the step S2 by using a first deep learning network, wherein the network sequentially comprises a generator and a discriminator, the generator comprises an encoder formed by a plurality of two-dimensional convolution modules and a self-attention module and a decoder formed by a plurality of up-sampling layers and convolution layers, and the discriminator comprises a multi-stage full convolution layer and is responsible for discriminating the authenticity of the generated image and the label image;
s5, gaussian blind noise reduction is carried out on the artifact-removed CT image obtained in the step S4 by using a second deep learning network, wherein the second deep learning network comprises a plurality of cascade structures formed by two-dimensional convolution, batch normalization and leakage relu activation functions, and high-quality CT images after correction are output through the second deep learning network.
Preferably, step S4 mainly comprises:
S41, the encoder in the first deep learning network's generator encodes the two-dimensionally convolved feature maps into an input sequence carrying global context;
S42, the decoder up-samples and convolves the resulting feature sequence to generate the artifact-removed CT image;
S43, the discriminator passes the artifact-removed CT image obtained in step S42 and the label image through the same convolution layers, normalization layers, and activation functions, and outputs the probability that the generated image is real.
In step S4, the discriminator in the first deep learning network is a multi-scale discriminator; the number and sizes of its down-sampling scales can be chosen according to the characteristics of the processed images. For sparse-angle CT images, a smaller down-sampling size means more attention to the texture details of the image, while a larger down-sampling size means more attention to the global consistency of the image.
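A minimal sketch of such a multi-scale discriminator, assuming pix2pixHD-style average pooling between scales; the channel sizes and the three-scale default are illustrative assumptions, not parameters of the invention:

```python
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """One fully convolutional discriminator stage (channel sizes are illustrative)."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # patch-wise real/fake logits
        )

    def forward(self, x):
        return self.net(x)

class MultiScaleDiscriminator(nn.Module):
    """The same fully convolutional stage applied at n progressively coarser scales."""
    def __init__(self, n_scales: int = 3):
        super().__init__()
        self.stages = nn.ModuleList([PatchDiscriminator() for _ in range(n_scales)])
        self.down = nn.AvgPool2d(3, stride=2, padding=1)  # halves resolution per scale

    def forward(self, x):
        outputs = []
        for stage in self.stages:
            outputs.append(stage(x))  # the full-resolution pass judges texture detail,
            x = self.down(x)          # the pooled (coarser) passes judge global consistency
        return outputs
```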
Preferably, the loss function of the first deep learning network is composed of three parts and is calculated as

L = Σ_{k=1}^{n} L_GAN(G, D_k) + λ_1 Σ_{k=1}^{n} L_FM(G, D_k) + λ_2 L_c(G)

where n denotes the number of down-sampling scales selected in the discriminator; G and D_k denote the generator and the k-th discriminator of the network, respectively; L_FM denotes the down-sampled feature-map (feature matching) loss of the first deep learning network; L_c denotes the content loss, responsible for ensuring the consistency of image content; and L_GAN is the global adversarial loss, responsible for learning the details of the generated image.
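Under the same notation, a compact sketch of such a three-part generator objective; the least-squares adversarial term, the L1 feature-matching and content terms, and the weights lam_fm and lam_c are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def generator_loss(disc_outputs, feats_fake, feats_real, fake, real,
                   lam_fm: float = 10.0, lam_c: float = 10.0):
    """Three-part objective: adversarial (L_GAN) + feature matching (L_FM) + content (L_c)."""
    # L_GAN: every scale's patch logits for the generated image should look real.
    l_gan = sum(F.mse_loss(o, torch.ones_like(o)) for o in disc_outputs)
    # L_FM: match intermediate discriminator features between generated and label images.
    l_fm = sum(F.l1_loss(ff, fr.detach()) for ff, fr in zip(feats_fake, feats_real))
    # L_c: keep the content of the generated image consistent with the label image.
    l_c = F.l1_loss(fake, real)
    return l_gan + lam_fm * l_fm + lam_c * l_c
```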
Preferably, in step S5, when training the denoising model, an MSE loss function constrains the output error map against the true error map. For N training pairs {(y_j, x_j)}_{j=1}^{N} of noisy and clean images, the loss function is defined as

ℓ(θ) = (1/2N) Σ_{j=1}^{N} ‖R(y_j; θ) − (y_j − x_j)‖²_F

where R(y_j; θ) is the error map output by the network and θ denotes the network parameters, in which the information of the training set is stored.
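A minimal sketch of this residual-learning constraint, assuming a DnCNN-style cascade as described in step S5; the depth and channel counts are illustrative:

```python
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    """Cascade of Conv2d -> BatchNorm2d -> LeakyReLU blocks predicting the noise (error) map."""
    def __init__(self, depth: int = 8, ch: int = 64):
        super().__init__()
        layers = [nn.Conv2d(1, ch, 3, padding=1), nn.LeakyReLU(0.2)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(ch, ch, 3, padding=1),
                       nn.BatchNorm2d(ch), nn.LeakyReLU(0.2)]
        layers += [nn.Conv2d(ch, 1, 3, padding=1)]  # output: predicted error map R(y; theta)
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return self.body(noisy)

def denoise_loss(model, noisy, clean):
    """MSE between the predicted error map and the true error map (noisy - clean)."""
    return 0.5 * torch.mean((model(noisy) - (noisy - clean)) ** 2)
```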
Preferably, in step S2, the artifact-bearing CT image under sparse-angle projection is acquired with the linear-scanning detector-differential BPF algorithm by the following steps:
(1) Generate projection sinogram data from a simulated phantom geometry or from a real image by forward projection;
(2) Acquire the T-segment linear-scanning image using the detector-differential BPF algorithm. The i-th segment Hilbert image b_i is obtained by differentiated back-projection of the form

b_i(x, y) = (1/2π) ∫_{−s}^{s} (1/L²) [∂p_i(λ, u)/∂u]|_{u=u*} dλ

where (x, y) denotes the point to be reconstructed; s is half the linear translation length of the ray source; η_i denotes the direction angle along which the finite Hilbert inverse transform of the i-th segment Hilbert image b_i is performed; and 1/L² is the back-projection weighting factor. The back-projection is carried out in a matrix larger than the image to be reconstructed, i.e., p_0 = 0.5 × I zeros are padded outside each edge of the matrix space of the image to be reconstructed, where I is the number of rows (or columns) of the image to be reconstructed. The differential projection ∂p_i/∂u is obtained by differentiating the projection along the detector direction u, computed as
∂p_i(λ, u)/∂u ≈ [p_i(λ, u + Δu) − p_i(λ, u − Δu)] / (2Δu)

where ∂/∂u denotes the differential operator, realized by finite differences: central differences are used for data inside the projection and one-sided differences for data at the projection boundary. u* denotes the detector coordinate of the ray passing through the point to be reconstructed; by similar triangles, u* = λ + (l + h)(x̃ − λ)/L with x̃ = x cos θ_i + y sin θ_i, where L denotes the perpendicular distance from the point to be reconstructed to the source trajectory, L = −x sin θ_i + y cos θ_i + l, and H denotes the perpendicular distance from the point to be reconstructed to the detector, H = x sin θ_i − y cos θ_i + h;
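The finite-difference rule just described can be sketched in NumPy as follows (central differences in the interior, one-sided differences at the detector boundaries); the function name is illustrative:

```python
import numpy as np

def differentiate_projection(p: np.ndarray, du: float) -> np.ndarray:
    """Differentiate a sinogram p[view, u] along the detector axis u.

    Central differences are used inside the projection and one-sided
    differences at the two detector boundaries, as described above.
    """
    dp = np.empty_like(p)
    dp[:, 1:-1] = (p[:, 2:] - p[:, :-2]) / (2.0 * du)  # central difference (interior)
    dp[:, 0] = (p[:, 1] - p[:, 0]) / du                # one-sided at left boundary
    dp[:, -1] = (p[:, -1] - p[:, -2]) / du             # one-sided at right boundary
    return dp
```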
(3) Apply the finite Hilbert inverse transform to the i-th segment Hilbert image b_i along the direction of the i-th segment's linear source-translation trajectory, reconstructing the i-th segment limited-angle image f_i via the finite inversion formula

f_i(y_1) = 1 / (2π √((U_y − y_1)(y_1 − L_y))) · [ 2 ∫_{L_y}^{U_y} √((U_y − y)(y − L_y)) b_i(y) / (y_1 − y) dy + 2π C_i ]

where y_1 denotes the one-dimensional Hilbert transform direction, y_1 ∈ [L_y + ε_y, U_y − ε_y], [L_y, U_y] is the finite interval of the Hilbert transform, and ε_y is a small positive number; C_i is the unknown constant to be determined when reconstructing the i-th segment limited-angle image f_i;
(4) Repeat the above steps T times, where T is the number of linear scanning segments, and superpose the per-segment reconstruction results f_i to obtain the complete reconstructed image, f(x, y) = Σ_{i=1}^{T} f_i(x, y).
The inputs to the first deep learning network, i.e., the artifact-bearing image dataset to be trained on, are thereby obtained.
The invention has the following beneficial effects:
The method combines the advantages of deep learning with the high reconstruction accuracy of the linear-scanning detector-differential BPF algorithm and enables an accurate imaging strategy for a target region of interest (ROI), so the problem of laterally truncated projections is effectively avoided. In addition, the training and testing of the first and second deep learning networks can be performed on a GPU: every operation in the networks is simple and local and is well suited to GPU-based parallelization; for example, the convolutions in the networks can be driven by GPU computing supported by the MatConvNet toolbox or by cuDNN (NVIDIA).
Description of the drawings:
FIG. 1 is a flow chart of the artifact-removal method for sparse-angle source linear-scanning CT of the present invention;
FIG. 2 shows the inputs and outputs of the first deep learning model during training and testing;
FIG. 3 is a flow chart of the training process of the second deep learning network employed by the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings and examples.
The sparse-artifact correction method for linear-scanning detector-differential BPF reconstructed images, shown in FIG. 1, is described in detail below with reference to the accompanying drawings. For clarity, implementation examples are given together with corresponding explanations.
Aiming at the problems that the existing BPF image reconstruction process for source linear-scanning CT scans slowly and reconstructs inefficiently, and that a sparse projection mode introduces streak artifacts, the invention provides a two-dimensional planar parallel-beam CT image reconstruction method based on an image-translation deep learning network. The overall idea is as follows: the conversion from the streak-artifact image to a clean image is learned with deep learning, so that a higher-quality image is reconstructed; on this basis, a second deep learning network performs blind Gaussian denoising on the reconstructed image, further improving its quality. It should be noted that the illustrations provided in the following embodiments merely illustrate the basic idea of the invention, and the embodiments and their features may be combined with each other provided they do not conflict. An implementation example follows.
Implementation example:
a flow diagram and an implementation process diagram of the sparse angle source linear scanning CT-oriented artifact removal method are shown in fig. 1.
S1, initialize the parameters: the number of linear scanning segments T and the rotation-angle interval Δθ, and initialize the image space to be reconstructed to zero;
S2, acquire an artifact-bearing CT image under sparse-angle projection using the linear-scanning detector-differential BPF algorithm;
S3, acquire label images with the linear-scanning filtered back-projection algorithm, to serve as the label data of the first deep learning network;
S4, remove artifacts from the CT image obtained in step S2 using the first deep learning network, which consists of a generator and a discriminator: the generator comprises an encoder built from several two-dimensional convolution modules and a self-attention module, and a decoder built from several up-sampling layers and convolution layers; the discriminator comprises multi-stage fully convolutional layers and is responsible for discriminating between the generated image and the label image;
S5, perform blind Gaussian denoising on the artifact-removed CT image obtained in step S4 using a second deep learning network, which comprises several cascaded blocks of two-dimensional convolution, batch normalization, and leaky ReLU activation; the corrected high-quality CT image is output by the second deep learning network.
Preferably, step S4 mainly comprises:
S41, the encoder in the first deep learning network's generator encodes the two-dimensionally convolved feature maps into an input sequence carrying global context;
S42, the decoder up-samples and convolves the resulting feature sequence to generate the artifact-removed CT image;
S43, the discriminator passes the artifact-removed CT image obtained in step S42 and the label image through the same convolution layers, normalization layers, and activation functions, and outputs the probability that the generated image is real.
Preferably, in step S4, the discriminator in the first deep learning network is a multi-scale discriminator; the number and sizes of its down-sampling scales can be chosen according to the characteristics of the processed images. For sparse-angle CT images, a smaller down-sampling size means more attention to the texture details of the image, while a larger down-sampling size means more attention to the global consistency of the image.
Preferably, the loss function of the first deep learning network is composed of three parts and is calculated as

L = Σ_{k=1}^{n} L_GAN(G, D_k) + λ_1 Σ_{k=1}^{n} L_FM(G, D_k) + λ_2 L_c(G)

where n denotes the number of down-sampling scales selected in the discriminator; G and D_k denote the generator and the k-th discriminator of the network, respectively; L_FM denotes the down-sampled feature-map (feature matching) loss of the first deep learning network; L_c denotes the content loss, responsible for ensuring the consistency of image content; and L_GAN is the global adversarial loss, responsible for learning the details of the generated image.
Preferably, in step S5, when training the denoising model, an MSE loss function constrains the output error map against the true error map. For N training pairs {(y_j, x_j)}_{j=1}^{N} of noisy and clean images, the loss function is defined as

ℓ(θ) = (1/2N) Σ_{j=1}^{N} ‖R(y_j; θ) − (y_j − x_j)‖²_F

where R(y_j; θ) is the error map output by the network and θ denotes the network parameters, in which the information of the training set is stored.
To further illustrate a specific embodiment of the present invention: in step S2, the artifact-bearing CT image under sparse-angle projection is acquired with the linear-scanning detector-differential BPF algorithm by the following steps:
(1) Generate projection sinogram data from a simulated phantom geometry or from a real image by forward projection;
(2) Acquire the T-segment linear-scanning image using the detector-differential BPF algorithm. The i-th segment Hilbert image b_i is obtained by differentiated back-projection of the form

b_i(x, y) = (1/2π) ∫_{−s}^{s} (1/L²) [∂p_i(λ, u)/∂u]|_{u=u*} dλ

where (x, y) denotes the point to be reconstructed; s is half the linear translation length of the ray source; η_i denotes the direction angle along which the finite Hilbert inverse transform of the i-th segment Hilbert image b_i is performed; and 1/L² is the back-projection weighting factor. The back-projection is carried out in a matrix larger than the image to be reconstructed, i.e., p_0 = 0.5 × I zeros are padded outside each edge of the matrix space of the image to be reconstructed, where I is the number of rows (or columns) of the image to be reconstructed. The differential projection ∂p_i/∂u is obtained by differentiating the projection along the detector direction u, computed as
∂p_i(λ, u)/∂u ≈ [p_i(λ, u + Δu) − p_i(λ, u − Δu)] / (2Δu)

where ∂/∂u denotes the differential operator, realized by finite differences: central differences are used for data inside the projection and one-sided differences for data at the projection boundary. u* denotes the detector coordinate of the ray passing through the point to be reconstructed; by similar triangles, u* = λ + (l + h)(x̃ − λ)/L with x̃ = x cos θ_i + y sin θ_i, where L denotes the perpendicular distance from the point to be reconstructed to the source trajectory, L = −x sin θ_i + y cos θ_i + l, and H denotes the perpendicular distance from the point to be reconstructed to the detector, H = x sin θ_i − y cos θ_i + h;
(3) Apply the finite Hilbert inverse transform to the i-th segment Hilbert image b_i along the direction of the i-th segment's linear source-translation trajectory, reconstructing the i-th segment limited-angle image f_i via the finite inversion formula

f_i(y_1) = 1 / (2π √((U_y − y_1)(y_1 − L_y))) · [ 2 ∫_{L_y}^{U_y} √((U_y − y)(y − L_y)) b_i(y) / (y_1 − y) dy + 2π C_i ]

where y_1 denotes the one-dimensional Hilbert transform direction, y_1 ∈ [L_y + ε_y, U_y − ε_y], [L_y, U_y] is the finite interval of the Hilbert transform, and ε_y is a small positive number; C_i is the unknown constant to be determined when reconstructing the i-th segment limited-angle image f_i;
(4) Repeat the above steps T times, where T is the number of linear scanning segments, and superpose the per-segment reconstruction results f_i to obtain the complete reconstructed image, f(x, y) = Σ_{i=1}^{T} f_i(x, y).
The inputs to the first deep learning network, i.e., the artifact-bearing image dataset to be trained on, are thereby obtained.
The training and testing process of the first deep learning network in this embodiment, shown in FIG. 2, comprises:
Step (1): first, acquire images under the sparse-angle condition using the linear-scanning detector-differential BPF algorithm, and acquire label images under the standard scanning-angle condition;
Step (2): input the paired images and label images into the first deep learning network model for training;
Step (3): input a sparse-angle CT image obtained with the linear-scanning detector-differential BPF algorithm into the trained network model, and output a high-quality CT image.
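A condensed sketch of this training cycle; the data loader and the callables d_loss_fn and g_loss_fn (wrapping the discriminator and generator losses described earlier) are illustrative stand-ins, as are the optimizer settings:

```python
import torch

def train_first_network(generator, discriminator, loader, d_loss_fn, g_loss_fn,
                        epochs: int = 100, lr: float = 2e-4):
    """Adversarial training on paired (sparse-angle BPF image, label image) data."""
    opt_g = torch.optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.999))
    for _ in range(epochs):
        for sparse_img, label_img in loader:  # step (2): paired training data
            fake = generator(sparse_img)
            # Discriminator update: label image is real, generated image is fake.
            opt_d.zero_grad()
            d_loss_fn(discriminator, fake.detach(), label_img).backward()
            opt_d.step()
            # Generator update: adversarial + feature-matching + content terms.
            opt_g.zero_grad()
            g_loss_fn(discriminator, fake, label_img).backward()
            opt_g.step()
    return generator  # step (3): maps sparse-angle images to high-quality images
```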
FIG. 3 shows the training flow of the second deep learning network of this embodiment: the output images of the first deep learning network model are used as inputs for model training, and a high-quality denoised CT image is output.
The training parameter settings of the first deep learning network in this embodiment are shown in Table 1 below.
TABLE 1. Simulation scan parameters and deep learning network parameters
(Note: Batch is the batch fed to the network per training step; Batch size is the number of training samples in each batch.)
The invention is not limited to the specific embodiments described above, which are intended to be illustrative rather than restrictive; those skilled in the art, in light of this disclosure, may devise numerous specific forms without departing from the spirit of the invention and the scope of the appended claims.

Claims (5)

1. The method for correcting sparse artifacts in linear-scanning detector-differential BPF reconstructed images, characterized by comprising the following steps:
S1, initialize the parameters: the number of linear scanning segments T and the rotation-angle interval Δθ, and initialize the image space to be reconstructed to zero;
S2, acquire an artifact-bearing CT image under sparse-angle projection using the linear-scanning detector-differential BPF algorithm;
S3, acquire label images with the linear-scanning filtered back-projection algorithm, to serve as the label data of the first deep learning network;
S4, remove artifacts from the CT image obtained in step S2 using the first deep learning network, which consists of a generator and a discriminator: the generator comprises an encoder built from several two-dimensional convolution modules and a self-attention module, and a decoder built from several up-sampling layers and convolution layers; the discriminator comprises multi-stage fully convolutional layers and is responsible for discriminating between the generated image and the label image;
S5, perform blind Gaussian denoising on the artifact-removed CT image obtained in step S4 using a second deep learning network, which comprises several cascaded blocks of two-dimensional convolution, batch normalization, and leaky ReLU activation; the corrected high-quality CT image is output by the second deep learning network.
2. The method for correcting sparse artifacts in linear-scanning detector-differential BPF reconstructed images according to claim 1, characterized in that step S4 comprises:
S41, the encoder in the first deep learning network's generator encodes the two-dimensionally convolved feature maps into an input sequence carrying global context;
S42, the decoder up-samples and convolves the resulting feature sequence to generate the artifact-removed CT image;
S43, the discriminator passes the artifact-removed CT image obtained in step S42 and the label image through the same convolution layers, normalization layers, and activation functions, and outputs the probability that the generated image is real.
3. The method according to claim 2, characterized in that in step S4 the discriminator in the first deep learning network is a multi-scale discriminator; the number and sizes of its down-sampling scales can be chosen according to the characteristics of the processed images, and for sparse-angle CT images a smaller down-sampling size means more attention to the texture details of the image, while a larger down-sampling size means more attention to the global consistency of the image.
4. The method for correcting sparse artifacts in linear-scanning detector-differential BPF reconstructed images according to claim 2, characterized in that the loss function of the first deep learning network is composed of three parts and is calculated as

L = Σ_{k=1}^{n} L_GAN(G, D_k) + λ_1 Σ_{k=1}^{n} L_FM(G, D_k) + λ_2 L_c(G)

where n denotes the number of down-sampling scales selected in the discriminator; G and D_k denote the generator and the k-th discriminator of the network, respectively; L_FM denotes the down-sampled feature-map (feature matching) loss of the first deep learning network; L_c denotes the content loss, responsible for ensuring the consistency of image content; and L_GAN is the global adversarial loss, responsible for learning the details of the generated image.
5. The method for correcting sparse artifacts in linear-scanning detector-differential BPF reconstructed images according to claim 1, characterized in that in step S5, when training the denoising model, an MSE loss function constrains the output error map against the true error map; for N training pairs {(y_j, x_j)}_{j=1}^{N} of noisy and clean images, the loss function is defined as

ℓ(θ) = (1/2N) Σ_{j=1}^{N} ‖R(y_j; θ) − (y_j − x_j)‖²_F

where R(y_j; θ) is the error map output by the network and θ denotes the network parameters, in which the information of the training set is stored.
CN202310666199.0A 2023-03-24 2023-06-06 Linear scanning detector differential BPF reconstructed image sparse artifact correction method Pending CN116843779A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310302823 2023-03-24
CN2023103028239 2023-03-24

Publications (1)

Publication Number Publication Date
CN116843779A true CN116843779A (en) 2023-10-03

Family

ID=88173449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310666199.0A Pending CN116843779A (en) 2023-03-24 2023-06-06 Linear scanning detector differential BPF reconstructed image sparse artifact correction method

Country Status (1)

Country Link
CN (1) CN116843779A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117876279A (en) * 2024-03-11 2024-04-12 浙江荷湖科技有限公司 Method and system for removing motion artifact based on scanned light field sequence image
CN117876279B (en) * 2024-03-11 2024-05-28 浙江荷湖科技有限公司 Method and system for removing motion artifact based on scanned light field sequence image



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination