CN116188615A - Sparse angle CT reconstruction method based on sine domain and image domain - Google Patents


Info

Publication number
CN116188615A
Authority
CN
China
Prior art keywords
image
sinogram
angle
reconstruction
sparse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310187803.1A
Other languages
Chinese (zh)
Inventor
伍佳
李章勇
钟丽莎
张政
王芸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202310187803.1A priority Critical patent/CN116188615A/en
Publication of CN116188615A publication Critical patent/CN116188615A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems


Abstract

The invention belongs to the technical field of X-ray computed tomography, and particularly relates to a sparse angle CT reconstruction method based on the sine domain and the image domain. A sparse angle sinogram training data set is acquired, and a sparse angle CT reconstruction network model based on the sine domain and the image domain is trained; the sparse angle sinogram is input into the trained sparse angle CT reconstruction network model to obtain a repaired full-angle sinogram and a reconstructed CT image; a CT image reconstruction target equation is constructed, the repaired full-angle sinogram and the reconstructed CT image are taken as prior regular constraints of the equation, and the equation is iteratively optimized and solved by the least square method to obtain a high-precision reconstructed CT image. Aiming at the generalization, robustness and data-consistency problems of network models, the invention integrates the network outputs as deep priors into the iterative reconstruction, further improving the quality of the reconstructed CT images.

Description

Sparse angle CT reconstruction method based on sine domain and image domain
Technical Field
The invention belongs to the technical field of X-ray computed tomography (Computed Tomography, CT), and particularly relates to a sparse angle CT reconstruction method based on a sine domain and an image domain.
Background
CT images are widely used in clinical diagnosis, nondestructive testing and biological research due to their high resolution and high sensitivity. With the continuous development of medical CT technology, faster, safer and more accurate CT techniques are in demand. However, the high radiation dose of CT is potentially harmful to the human body, and long-term, high-frequency scanning further increases the harm. At present, the radiation dose can be reduced by lowering the tube current (low dose CT) or by reducing the number of sampling angles (sparse angle CT), but this destroys the completeness of the projection data, and the quality of images directly reconstructed by conventional reconstruction algorithms is seriously degraded. Therefore, how to reduce the radiation dose while ensuring the quality of the reconstructed image has become a hot spot of CT research in recent years.
In recent years, researchers have proposed data-driven sparse projection CT reconstruction techniques that use data-driven or adaptive models (dictionary learning, sparse transforms, tensor transforms) instead of mathematical statistical models, reconstructing CT images by training model parameters on paired true and undersampled data. These techniques divide into learning-based adaptive model methods and deep learning methods. Sparse projection CT reconstruction based on deep learning first uses the filtered back projection algorithm (Filtered Back Projection, FBP) to generate a noisy image from the sparse sinogram, and then compares it with the noise-free label image to learn the artifact distribution. The image science and technology laboratory of Southeast University proposed a sparse projection reconstruction hybrid neural network that uses deep learning to simultaneously denoise the projection sinogram data and the reconstructed image, finally obtaining the reconstructed image. Although this method fully considers the streak artifacts of the image domain, it considers only image-domain information and ignores the projection sine-domain information, so a satisfactory reconstruction result is difficult to obtain when the noise is high or the projection data are severely missing.
Aiming at this problem, the computer and information laboratory of Anhui University of Technology proposed a hybrid dual-domain reconstruction method for cone beam CT sparse projection, which considers sine-domain and image-domain information simultaneously; however, the method does not consider the consistency of the sine domain and the image domain, so secondary artifacts can be generated during reconstruction. Furthermore, the lack of constraints on deep neural network feature generation may cause structural features outside the target CT image to appear in the reconstructed image, which can lead to extremely serious problems. In addition, the robustness and generalization of the method remain to be verified.
Disclosure of Invention
In order to solve the above problems, the invention provides a sparse angle CT reconstruction method based on the sine domain and the image domain; by introducing sine-domain and image-domain priors into an iterative model, the method obtains stable, high-quality CT reconstruction results and has better robustness and generalization.
The specific scheme of the invention is as follows:
S1, acquiring a sparse angle sinogram training data set, wherein the sparse angle sinogram training data set comprises a sparse angle sinogram set and a label data set corresponding to the sparse angle sinogram set;
S2, constructing a sparse angle CT reconstruction network model based on a sine domain and an image domain, and training the sparse angle CT reconstruction network model through the sparse angle sinogram training data set until the model converges;
S3, inputting the sparse angle sinogram into the trained sparse angle CT reconstruction network model to obtain a repaired full-angle sinogram and a reconstructed CT image;
S4, constructing a CT image reconstruction target equation, taking the repaired full-angle sinogram and the reconstructed CT image as prior regular constraints of the CT image reconstruction target equation, iteratively optimizing the CT image reconstruction target equation, and solving by the least square method to obtain a high-precision reconstructed CT image.
Further, the process of acquiring the sparse angle sinogram training data set in step S1 includes:
S11, for the measured object, fan beam geometric CT is adopted to rotate half a revolution starting from 0 and from two further initial angles, and 23800 first sinograms are acquired, each first sinogram containing 180 projections;
S12, intercepting each first sinogram to obtain a corresponding sparse angle sinogram, wherein each sparse angle sinogram comprises 60 projections;
S13, taking the first sinograms as label data, obtaining the sparse angle sinogram training data set by one-to-one correspondence between the label data and the sparse angle sinograms, and dividing the sparse angle sinogram training data set in a ratio of 7:2:1 to obtain a training set, a verification set and a test set.
Further, in step S2, the sparse angle CT reconstruction network model comprises a projection recovery network, a filtered back projection reconstruction layer and an image enhancement network, and the training process of the sparse angle CT reconstruction network model based on the sine domain and the image domain includes the following steps:
S21, interpolation processing is carried out on the sparse angle sinogram S by a linear interpolation method, and the pixels at the interpolated positions are assigned from the corresponding label data S_R to obtain an interpolation-completed full-angle sinogram S_L;
S22, the full-angle sinogram S_L is input into the projection recovery network to obtain a repaired full-angle sinogram S_out;
S23, a first loss function is constructed to calculate the repair loss between the label data S_R and the repaired full-angle sinogram S_out;
S24, the repaired full-angle sinogram S_out is taken as input, and an intermediate reconstructed CT image X_out is obtained through the filtered back projection reconstruction layer; the label data S_R is back-projected by the FBP algorithm to obtain a label CT image X_label;
S25, a second loss function is constructed to calculate the filtered back projection reconstruction loss between the intermediate reconstructed CT image X_out and the label CT image X_label;
S26, the intermediate reconstructed CT image X_out and the label CT image X_label are input together into the image enhancement network for adversarial enhancement, obtaining an enhanced reconstructed CT image X;
S27, a third loss function is constructed to calculate the generation loss and the adversarial loss between the enhanced reconstructed CT image X and the label CT image X_label.
Further, the projection recovery network in step S22 comprises a self-attention mechanism and residual convolution blocks; each residual convolution block comprises two convolution layers with a convolution kernel size of 3x3, a residual convolution layer with a kernel size of 1x1, two group normalization layers with 8 groups, and an addition layer; the activation function adopted by the residual convolution block is swish.
Further, step S23 constructs the first loss function from the sinogram label and the corresponding repaired full-angle sinogram, expressed as:
L_E = ||S_out - S_R||_1 = ||E_ω(S_L) - S_R||_1
wherein E_ω(·) represents the projection recovery network.
Further, in step S24, the process of obtaining the intermediate reconstructed CT image X_out through the filtered back projection reconstruction layer includes:
S51, according to the repaired full-angle sinogram S_out from step S22, its fan beam geometry projection is converted into a parallel beam projection to obtain a first repaired full-angle sinogram S_out1;
S52, two-dimensional Fourier transform is performed on the first repaired full-angle sinogram S_out1 to obtain a frequency-domain image S_out2, and the frequency-domain image S_out2 is filtered by a Ram-Lak filter;
S53, the filtered frequency-domain image S_out2 is processed by two-dimensional inverse Fourier transform to obtain a filtered sinogram S_out3;
S54, parallel beam back projection is performed on the filtered sinogram S_out3 to obtain the intermediate reconstructed CT image X_out.
Further, the image enhancement network eliminates the streak artifacts of the intermediate reconstructed CT image X_out based on adversarial learning; the network structure of the generator of the image enhancement network is the same as that of the projection recovery network; the discriminator of the image enhancement network adopts a deep convolution structure and consists of 10 convolution layers, 10 group normalization layers and 10 swish activation function layers, wherein the convolution kernel sizes of the 10 convolution layers increase layer by layer.
Further, step S27 constructs the third loss function from the enhanced reconstructed CT image X and the label CT image X_label, expressed as:
L_G = L_Gan + α·L_pMSE
L_Gan = -E[log D_η(G_θ(X_out))]
L_pMSE = (1/(W·H)) · Σ_{x=1}^{W} Σ_{y=1}^{H} (X(x,y) - X_label(x,y))^2
wherein L_Gan represents the adversarial loss, L_pMSE represents the pixel mean-square loss, α represents the weight coefficient, G_θ represents the generator, D_η represents the discriminator, W and H represent the width and height of the image respectively, X(x,y) represents pixel (x,y) of the enhanced reconstructed CT image X, and X_label(x,y) represents pixel (x,y) of the label CT image X_label.
Further, the CT image reconstruction target equation constructed in step S3 is expressed as:
x_{i+1} = argmin_x { ||S - A(x)||_2^2 + λ1·||x - G_θ(f_R(E_ω(S)))||_2^2 + λ2·||A(x) - E_ω(S)||_2^2 }
wherein ||S - A(x)||_2^2 represents the data fidelity term, S represents the sparse angle sinogram, A represents the forward projection matrix, x_i represents the target CT image obtained by the i-th iterative reconstruction, ||x - G_θ(f_R(E_ω(S)))||_2^2 and ||A(x) - E_ω(S)||_2^2 represent the image-domain and sine-domain regular terms, G_θ, f_R, E_ω respectively represent the trained image enhancement network, the filtered back projection reconstruction layer and the projection recovery network, and λ1, λ2 are hyper-parameters balancing the fidelity term and the regular terms;
solving the iteratively optimized CT image reconstruction target equation by the least square method gives the update:
x_{i+1} = x_i + η·[ A^T·(S - A(x_i)) + λ1·(G_θ(f_R(E_ω(S))) - x_i) + λ2·A^T·(E_ω(S) - A(x_i)) ]
wherein A^T represents the transpose of the forward projection matrix, η represents the step size, and i = 0, 1, 2, … represents the number of iterations.
The invention has the beneficial effects that:
1. The invention provides a sparse angle CT reconstruction method based on the sine domain and the image domain, which takes the sine-domain and image-domain outputs of the sparse angle sinogram from the sparse angle CT reconstruction network model as deep priors and integrates them into the optimization iteration process of the CT image reconstruction target equation.
2. The invention designs a filtered back projection reconstruction layer connecting the sine domain and the image domain; the layer allows gradient back-propagation, and supervised training with an introduced Radon consistency loss can effectively suppress the secondary artifacts generated in the reconstruction process, improving the reconstruction precision. The image enhancement network introduces label CT images and intermediate reconstructed CT images for adversarial training, further improving image quality.
3. By introducing a self-attention mechanism, the invention can dynamically adjust the degree of attention paid to different areas of the image, thereby capturing high-level image features more accurately; meanwhile, the method uses a downsampling-upsampling network structure based on residual blocks, which can skip some convolution layers to reduce network complexity, thereby reducing model overfitting.
Drawings
FIG. 1 is a flow chart of a sparse angle CT reconstruction method based on a sinusoidal domain and an image domain of the present invention;
FIG. 2 is a training flow chart of a sparse angle CT reconstruction network model based on a sine domain and an image domain;
FIG. 3 is a block diagram of a sparse angle CT reconstruction model based on a sinusoidal domain and an image domain of the present invention;
FIG. 4 is a graph showing the comparison of the effect of the present invention and the conventional reconstruction method.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to realize high-precision CT reconstruction from a sparse angle sinogram, the invention provides a sparse angle CT reconstruction method based on the sine domain and the image domain, as shown in figure 1, comprising the following steps:
S1, acquiring a sparse angle sinogram training data set, wherein the sparse angle sinogram training data set comprises a sparse angle sinogram set and a label data set corresponding to the sparse angle sinogram set;
Specifically, fan beam geometric CT is adopted to sample the measured object at equal intervals, with the following settings: the scanning voltage range is [100, 120] kV, the scanning current range is [200, 500] mA, the distance from the X-ray source to the projection center is 950 cm, the distance from the detector to the projection center is 200 cm, the detector unit area is 1.2 × 1.09 mm^2, and the slice thickness is 0.1 mm.
The step S1 of acquiring the sparse angle sinogram training data set comprises the following steps:
S11, for the measured object, fan beam geometric CT is adopted to rotate half a revolution starting from 0 and from two further initial angles, and 23800 first sinograms containing 180 angles are acquired, each first sinogram containing 180 projections;
S12, 60 angles are intercepted from each first sinogram to obtain a corresponding sparse angle sinogram, wherein each sparse angle sinogram comprises 60 projections;
S13, taking the first sinograms as label data, obtaining the sparse angle sinogram training data set by one-to-one correspondence between the label data and the sparse angle sinograms, and dividing the sparse angle sinogram training data set in a ratio of 7:2:1 to obtain a training set, a verification set and a test set.
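As a rough illustration of the 7:2:1 split described in step S13, the following sketch shuffles sample indices and partitions them with integer arithmetic; the function name and the fixed seed are illustrative assumptions, not part of the patent.

```python
import numpy as np

def split_dataset(num_samples, seed=0):
    """Shuffle sample indices and split them 7:2:1 into train/val/test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_samples)
    n_train = num_samples * 7 // 10   # 70% training
    n_val = num_samples * 2 // 10     # 20% verification
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# 23800 sinogram pairs, as in step S11
train_idx, val_idx, test_idx = split_dataset(23800)
print(len(train_idx), len(val_idx), len(test_idx))  # 16660 4760 2380
```

Integer floor division avoids the off-by-one errors that floating-point ratios (e.g. `int(n * 0.7)`) can introduce.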
S2, constructing a sparse angle CT reconstruction network model based on the sine domain and the image domain, and training the sparse angle CT reconstruction network model through the sparse angle sinogram training data set until the model converges;
S3, inputting the sparse angle sinogram into the trained sparse angle CT reconstruction network model to obtain a repaired full-angle sinogram and a reconstructed CT image;
S4, constructing a CT image reconstruction target equation, taking the repaired full-angle sinogram and the reconstructed CT image as prior regular constraints of the CT image reconstruction target equation, iteratively optimizing the CT image reconstruction target equation, and solving by the least square method to obtain a high-precision reconstructed CT image.
In one embodiment, as shown in fig. 3, the sparse angle CT reconstruction network model includes a projection recovery network (Sinogram Enhancement network), a filtered back projection reconstruction layer (Radon Inversion module), and an image enhancement network (Image Generative network and Image Discriminative network); the training process of the sparse angle CT reconstruction network model based on the sine domain and the image domain, as shown in fig. 2, includes the following steps:
S21, interpolation processing is performed on the sparse angle sinogram S by a linear interpolation method, and the pixels at the interpolated positions are assigned from the corresponding label data S_R, obtaining an interpolation-completed full-angle sinogram S_L with an image resolution of 180 × 624;
S22, the full-angle sinogram S_L is input into the projection recovery network to obtain a repaired full-angle sinogram S_out.
In particular, as shown in fig. 3, the projection recovery network includes a self-attention mechanism and residual convolution blocks; each residual convolution block comprises two convolution layers with a convolution kernel size of 3x3, a residual convolution layer with a kernel size of 1x1, two group normalization layers with 8 groups, and an addition layer; the activation function adopted by the residual convolution blocks is swish. The residual convolution block strengthens the input features through convolution and residual connections to ensure that the characteristics of the input image are not lost. The self-attention mechanism computes attention weights from the inner product of the Query and the Key, applies a Softmax function, and assigns a different weight to each pixel of the input image; this controls the degree of attention the network model pays to each image region, better retains useful information in the image, and reduces errors in the image recovery process.
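The attention-weight computation described above can be sketched as plain scaled dot-product self-attention over flattened feature positions; the projection matrices and sizes here are illustrative assumptions, not the patent's exact layer.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """x: (n_positions, d) flattened feature map; wq/wk/wv: (d, d) projections.
    Attention weights come from the Query-Key inner product, then Softmax."""
    q, k, v = x @ wq, x @ wk, x @ wv
    weights = softmax(q @ k.T / np.sqrt(x.shape[1]), axis=-1)
    return weights @ v, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = self_attention(x, wq, wk, wv)
print(out.shape)  # (16, 8)
```

Each row of `weights` sums to 1, so every output position is a convex combination of the value vectors, weighted by learned relevance.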
The projection recovery network has input and output tensors of shape (10, 180, 624, 1) and can be divided into a downsampling stage and an upsampling stage. Features of the input image are first extracted by residual convolution blocks in the downsampling stage and weighted by the self-attention mechanism; deconvolution operations are then performed in the upsampling stage, finally yielding the repaired sinogram.
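As a toy stand-in for the residual convolution block (hypothetical shapes, matrix products in place of real 3x3 convolutions, group normalization omitted), the swish activation and the skip-plus-add structure look like:

```python
import numpy as np

def swish(x):
    """swish(x) = x * sigmoid(x), the activation used in the residual blocks."""
    return x / (1.0 + np.exp(-x))

def residual_block(x, w1, w2, w_skip):
    """Two 'conv' stages plus a 1x1-style skip projection and an addition layer."""
    h = swish(x @ w1)
    h = swish(h @ w2)
    return h + x @ w_skip  # residual connection preserves input features

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 16))
out = residual_block(x,
                     0.1 * rng.normal(size=(16, 16)),
                     0.1 * rng.normal(size=(16, 16)),
                     np.eye(16))  # identity skip
print(out.shape)  # (5, 16)
```

The identity skip makes the block easy to optimize: if the two transform stages contribute little, the block still passes its input through unchanged.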
S23, a first loss function is constructed to calculate the repair loss between the label data S_R and the repaired full-angle sinogram S_out.
Specifically, the first loss function L_E is expressed as:
L_E = ||S_out - S_R||_1 = ||E_ω(S_L) - S_R||_1
wherein E_ω(·) represents the projection recovery network.
S24, the repaired full-angle sinogram S_out is taken as input, and an intermediate reconstructed CT image X_out is obtained through the filtered back projection reconstruction layer; the label data S_R is back-projected by the FBP algorithm to obtain a label CT image X_label.
Specifically, the process of obtaining the intermediate reconstructed CT image X_out through the filtered back projection reconstruction layer includes:
s51, repairing the full-angle sinogram S according to the step S22 out Converting its fan beam geometry projection into parallel beam projection to obtain a first repaired full angle sinogram S out1 The method comprises the steps of carrying out a first treatment on the surface of the The physical geometrical information comprises an all-angle sinogram S out The number of detectors, the pixel size, the distance from the X-ray source to the projection center, the distance from the detector to the projection center, and the detector size; the conversion process is expressed as:
Figure BDA0004105005330000111
where (γ, β) denotes fan beam geometry projection coordinates, (t, θ) denotes parallel beam projection coordinates, and d denotes projection center to detector distance.
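Under the standard fan-to-parallel rebinning relations t = d·sin(γ) and θ = β + γ (an assumption here, since the original formula image is not reproduced), the coordinate conversion can be sketched as:

```python
import numpy as np

def fan_to_parallel_coords(gamma, beta, d):
    """Map fan-beam coordinates (gamma, beta) to parallel-beam (t, theta),
    with d the relevant center-to-detector distance from the scan geometry."""
    t = d * np.sin(gamma)     # detector offset of the equivalent parallel ray
    theta = beta + gamma      # parallel projection angle
    return t, theta

t, theta = fan_to_parallel_coords(gamma=0.0, beta=np.pi / 4, d=950.0)
print(t, theta)  # 0.0 0.7853981633974483
```

The central ray (γ = 0) maps to the parallel ray through the rotation center, as expected.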
S52, two-dimensional Fourier transform is performed on the first repaired full-angle sinogram S_out1 to obtain a frequency-domain image S_out2, and the frequency-domain image S_out2 is filtered by a Ram-Lak filter;
S53, the filtered frequency-domain image S_out2 is processed by two-dimensional inverse Fourier transform to obtain a filtered sinogram S_out3;
S54, parallel beam back projection is performed on the filtered sinogram S_out3 to obtain the intermediate reconstructed CT image X_out.
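Steps S52 to S54 amount to ramp filtering followed by back projection. A minimal NumPy sketch (filtering each projection with a 1-D Ram-Lak ramp in the frequency domain rather than the 2-D transform described above, and using nearest-neighbour back projection) might look like:

```python
import numpy as np

def fbp_parallel(sinogram, angles):
    """sinogram: (n_angles, n_det) parallel-beam data; angles in radians."""
    n_angles, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))          # Ram-Lak (ramp) filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for a, theta in enumerate(angles):            # accumulate back projections
        t = np.clip(np.round(X * np.cos(theta) + Y * np.sin(theta)).astype(int) + mid,
                    0, n_det - 1)
        recon += filtered[a][t]
    return recon * np.pi / n_angles

sino = np.zeros((180, 64))
sino[:, 32] = 1.0                                 # a point object at the center
img = fbp_parallel(sino, np.deg2rad(np.arange(180.0)))
print(img.shape)  # (64, 64)
```

A point at the center of every projection back-projects to a sharp peak at the image center, which is a quick sanity check on the filter-then-backproject order.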
Specifically, for the intermediate reconstructed CT image X_out, the gradient with respect to the parallel beam projection operator F is calculated and back-propagated, linking the image-domain intermediate reconstructed CT image X_out with the sine-domain repaired full-angle sinogram S_out.
Preferably, according to the repaired full-angle sinogram S_out, the reconstructed image can be expressed as:
S_out3(t, θ) = F^{-1}{ F{S_para(t, θ)}·ω }
X_out(x, y) = ∫_0^π S_out3(x·cosθ + y·sinθ, θ) dθ
wherein F represents the two-dimensional Fourier transform, F^{-1} represents the two-dimensional inverse Fourier transform, S_para(t, θ) represents the repaired projection sinogram converted into a parallel beam projection, ω represents the filter, and θ represents the projection angle.
S25, a second loss function is constructed to calculate the filtered back projection reconstruction loss between the intermediate reconstructed CT image X_out and the label CT image X_label.
Specifically, the second loss function L_R is expressed as:
L_R = ||X_out - X_label||_1 = ||f_R(S_out) - X_label||_1
wherein f_R(·) represents the filtered back projection reconstruction layer.
S26, the intermediate reconstructed CT image X_out and the label CT image X_label are input together into the image enhancement network for adversarial enhancement, obtaining an enhanced reconstructed CT image X.
Specifically, the image enhancement network eliminates the streak artifacts of the intermediate reconstructed CT image based on adversarial learning, enhances detailed texture features, and obtains the enhanced reconstructed CT image. The network structure of the generator of the image enhancement network is the same as that of the projection recovery network, as shown in fig. 3; meanwhile, the discriminator of the image enhancement network adopts a deep convolution structure consisting of 10 convolution layers, 10 group normalization layers and 10 swish activation function layers, wherein the convolution kernel sizes of the 10 convolution layers increase layer by layer, realizing layer-by-layer enhancement of the CT image feature maps. Finally, a global average pooling layer is connected to a linear layer to discriminate whether the reconstructed CT image is real or fake.
S27, a third loss function is constructed to calculate the generation loss and the adversarial loss between the enhanced reconstructed CT image X and the label CT image X_label. Specifically, the third loss function is constructed from the enhanced reconstructed CT image X and the label CT image X_label, expressed as:
L_G = L_Gan + α·L_pMSE
L_Gan = -E[log D_η(G_θ(X_out))]
L_pMSE = (1/(W·H)) · Σ_{x=1}^{W} Σ_{y=1}^{H} (X(x,y) - X_label(x,y))^2
wherein L_Gan represents the adversarial loss, L_pMSE represents the pixel mean-square loss, α represents the weight coefficient, G_θ represents the generator, D_η represents the discriminator, W and H represent the width and height of the image respectively, X(x,y) represents pixel (x,y) of the enhanced reconstructed CT image X, and X_label(x,y) represents pixel (x,y) of the label CT image X_label.
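A small numerical sketch of the pixel mean-square term and the combined generator objective; the adversarial term is written in the common non-saturating -log D form, which is an assumption about the unreproduced formula.

```python
import numpy as np

def pixel_mse(x, x_label):
    """L_pMSE: mean squared difference over all W*H pixels."""
    return float(np.mean((x - x_label) ** 2))

def generator_loss(d_fake, l_pmse, alpha=1.0, eps=1e-12):
    """L_G = L_Gan + alpha * L_pMSE, with L_Gan = -log D(G(.))."""
    return -np.log(d_fake + eps) + alpha * l_pmse

x = np.full((4, 4), 0.5)
x_label = np.zeros((4, 4))
l_pmse = pixel_mse(x, x_label)
print(l_pmse)  # 0.25
print(round(generator_loss(d_fake=1.0, l_pmse=l_pmse, alpha=2.0), 6))  # 0.5
```

When the discriminator is fully fooled (D near 1) the adversarial term vanishes and only the weighted pixel term remains, which is why α controls the fidelity/realism trade-off.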
Specifically, the total loss is expressed as:
L = L_G + λ·L_R + γ·L_E
wherein L_G represents the third loss function, L_R represents the second loss function, L_E represents the first loss function, and λ and γ represent weight coefficients.
In one embodiment, the CT image reconstruction target equation is expressed as:
x_{i+1} = argmin_x { ||S - A(x)||_2^2 + λ1·||x - G_θ(f_R(E_ω(S)))||_2^2 + λ2·||A(x) - E_ω(S)||_2^2 }
wherein ||S - A(x)||_2^2 represents the data fidelity term, S represents the sparse angle sinogram, A represents the forward projection matrix, x_i represents the target CT image obtained by the i-th iterative reconstruction, ||x - G_θ(f_R(E_ω(S)))||_2^2 and ||A(x) - E_ω(S)||_2^2 represent the image-domain and sine-domain regular terms, G_θ, f_R, E_ω respectively represent the trained image enhancement network, the filtered back projection reconstruction layer and the projection recovery network, and λ1, λ2 are hyper-parameters balancing the fidelity term and the regular terms;
solving the iteratively optimized CT image reconstruction target equation by the least square method gives the update:
x_{i+1} = x_i + η·[ A^T·(S - A(x_i)) + λ1·(G_θ(f_R(E_ω(S))) - x_i) + λ2·A^T·(E_ω(S) - A(x_i)) ]
wherein A^T represents the transpose of the forward projection matrix, η represents the step size, and i = 0, 1, 2, … represents the number of iterations.
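The iterative solve can be sketched as plain gradient descent on a fidelity-plus-prior objective, with a small dense matrix standing in for the forward projection A and a fixed vector standing in for the frozen network prior G_θ(f_R(E_ω(S))); the step size, iteration count, and single-prior simplification are illustrative assumptions.

```python
import numpy as np

def iterative_reconstruction(A, s, x_net, lam=0.1, eta=1e-3, n_iter=200):
    """Minimize ||s - A x||^2 + lam * ||x - x_net||^2 by gradient descent,
    starting from the network prior x_net."""
    x = x_net.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - s) + lam * (x - x_net)  # fidelity + prior gradients
        x -= eta * grad
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(80, 40))                 # stand-in forward projection matrix
x_true = rng.normal(size=40)
s = A @ x_true                                # noiseless measurements
x_net = x_true + 0.1 * rng.normal(size=40)    # imperfect network output as prior
x_rec = iterative_reconstruction(A, s, x_net)
print(np.linalg.norm(A @ x_rec - s) < np.linalg.norm(A @ x_net - s))  # True
```

Because the objective is convex and the iteration starts at the prior, each step trades a little distance from the network output for better agreement with the measured sinogram, mirroring the data-consistency role of the fidelity term.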
In one embodiment, to measure model performance, the reconstruction method of the present invention is compared with the conventional CT reconstruction method (FBP). Fig. 4 shows the real CT image, the CT image reconstructed by FBP from 60 projection angles, the CT image reconstructed by FBP from 180 projection angles, and the CT image reconstructed by the method of the present invention from 60 projection angles. It can be observed that with 60 projection angles, FBP can reconstruct a CT image, but the reconstruction is poor, with a large number of artifacts and unclear image details. Compared with the FBP method, the method of the invention gives a better reconstruction result: edges and structures are clear, fine-grained features and details are visible, and there is less noise and distortion.
In the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "configured," "connected," "secured," "rotated," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; either directly or indirectly through intermediaries, or in communication with each other or in interaction with each other, unless explicitly defined otherwise, the meaning of the terms described above in this application will be understood by those of ordinary skill in the art in view of the specific circumstances.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (8)

1. A sparse angle CT reconstruction method based on a sine domain and an image domain is characterized by comprising the following steps:
s1, acquiring a sparse angle sinogram training data set, wherein the sparse angle sinogram training data set comprises a sparse angle sinogram set and a label data set corresponding to the sparse angle sinogram set;
s2, constructing a sparse angle CT reconstruction network model based on a sine domain and an image domain, and training the sparse angle CT reconstruction network model through a sparse angle sinogram training data set until the model converges;
s3, inputting the sparse angle sinogram into a trained sparse angle CT reconstruction network model to obtain a repaired full-angle sinogram and a reconstructed CT image;
S4, constructing a CT image reconstruction target equation, taking the repaired full-angle sinogram and the reconstructed CT image as prior regularization constraints of the CT image reconstruction target equation, iteratively optimizing the CT image reconstruction target equation, and solving it by the least squares method to obtain a high-precision reconstructed CT image.
2. The sparse angle CT reconstruction method based on the sine domain and image domain of claim 1, wherein acquiring the sparse angle sinogram training data set in step S1 comprises:
S11, for the measured object, rotating a fan-beam geometry CT half a revolution from each of a set of initial angles (0 and additional initial angles specified by the formulas of the original document) to acquire 23,800 first sinograms, each first sinogram containing 180 projections;
S12, truncating each first sinogram to obtain a corresponding sparse angle sinogram, each sparse angle sinogram containing 60 projections;
S13, taking the first sinograms as label data, pairing the label data one-to-one with the sparse angle sinograms to obtain the sparse angle sinogram training data set, and dividing the sparse angle sinogram training data set into a training set, a validation set, and a test set in a 7:2:1 ratio.
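The 7:2:1 split of step S13 can be sketched as follows; the shuffling, the fixed seed, and the rounding of the split sizes are illustrative assumptions, not specified by the patent.

```python
import numpy as np

def split_dataset(n_samples, ratios=(0.7, 0.2, 0.1), seed=0):
    """Shuffle sample indices and split them 7:2:1 into train/val/test (step S13)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = round(n_samples * ratios[0])
    n_val = round(n_samples * ratios[1])
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# 23,800 paired (sparse sinogram, label sinogram) samples, as in step S11
train_idx, val_idx, test_idx = split_dataset(23800)
```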
3. The sparse angle CT reconstruction method based on the sine domain and image domain of claim 1, wherein in step S2 the sparse angle CT reconstruction network model comprises a projection recovery network, a filtered back projection reconstruction layer, and an image enhancement network, and the training process of the sparse angle CT reconstruction network model comprises the steps of:
S21, interpolating the sparse angle sinogram S by linear interpolation, using the corresponding label data S_R to assign values to the pixels at the interpolated positions, and obtaining the interpolation-completed full-angle sinogram S_L;
S22, inputting the full-angle sinogram S_L into the projection recovery network to obtain the repaired full-angle sinogram S_out;
S23, constructing a first loss function and computing the repair loss between the label data S_R and the repaired full-angle sinogram S_out;
S24, taking the repaired full-angle sinogram S_out as input to the filtered back projection reconstruction layer to obtain the intermediate reconstructed CT image X_out, and back-projecting the label data S_R with the FBP algorithm to obtain the label CT image X_label;
S25, constructing a second loss function and computing the filtered back projection reconstruction loss between the intermediate reconstructed CT image X_out and the label CT image X_label;
S26, inputting the intermediate reconstructed CT image X_out and the label CT image X_label together into the image enhancement network for adversarial enhancement to obtain the enhanced reconstructed CT image X;
S27, constructing a third loss function and computing the generation loss and adversarial loss between the enhanced reconstructed CT image X and the label CT image X_label.
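The angle-axis linear interpolation of step S21, which completes a 60-view sinogram to 180 views before the label-assisted pixel assignment, can be sketched as below. The sinogram layout (angles along rows, detector bins along columns) is an assumption.

```python
import numpy as np

def interpolate_sinogram(sparse, n_full=180):
    """Linearly interpolate a sparse-angle sinogram along the angle axis (step S21)."""
    n_sparse, n_det = sparse.shape
    src = np.linspace(0.0, 1.0, n_sparse)   # normalized positions of the measured angles
    dst = np.linspace(0.0, 1.0, n_full)     # normalized positions of the target angles
    full = np.empty((n_full, n_det))
    for j in range(n_det):                  # interpolate each detector column independently
        full[:, j] = np.interp(dst, src, sparse[:, j])
    return full

# a constant sinogram stays constant under linear interpolation
sino_full = interpolate_sinogram(np.ones((60, 128)))
```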
4. The sparse angle CT reconstruction method based on the sine domain and image domain of claim 3, wherein the projection recovery network of step S22 comprises a self-attention mechanism and residual convolution blocks; each residual convolution block comprises two convolution layers with 3x3 kernels, a residual convolution layer with a 1x1 kernel, two group normalization layers with 8 groups, and an addition layer, and the activation function adopted by the residual convolution block is swish.
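A PyTorch sketch of the residual convolution block described in this claim (two 3x3 convolutions, a 1x1 residual convolution, two 8-group normalization layers, an addition layer, swish activation via `nn.SiLU`); the channel counts and the exact placement of the activations are assumptions.

```python
import torch
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    """Residual block of claim 4: 3x3 conv x2, GroupNorm(8) x2, 1x1 skip conv, addition, swish."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.gn1 = nn.GroupNorm(8, out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.gn2 = nn.GroupNorm(8, out_ch)
        self.skip = nn.Conv2d(in_ch, out_ch, 1)  # 1x1 residual convolution
        self.act = nn.SiLU()                     # swish activation

    def forward(self, x):
        h = self.act(self.gn1(self.conv1(x)))
        h = self.gn2(self.conv2(h))
        return self.act(h + self.skip(x))        # addition layer

y = ResidualConvBlock(8, 16)(torch.randn(2, 8, 16, 16))
```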
5. The sparse angle CT reconstruction method of claim 3, wherein in step S24, obtaining the intermediate reconstructed CT image X_out through the filtered back projection reconstruction layer comprises:
S51, converting the fan-beam geometry projections of the repaired full-angle sinogram S_out of step S22 into parallel-beam projections to obtain a first repaired full-angle sinogram S_out1;
S52, applying a two-dimensional Fourier transform to the first repaired full-angle sinogram S_out1 to obtain a frequency domain image S_out2, and filtering the frequency domain image S_out2 with a Ram-Lak filter;
S53, applying a two-dimensional inverse Fourier transform to the filtered frequency domain image S_out2 to obtain a filtered sinogram S_out3;
S54, performing parallel-beam back projection on the filtered sinogram S_out3 to obtain the intermediate reconstructed CT image X_out.
6. The sparse angle CT reconstruction method based on the sine domain and image domain of claim 3, wherein the image enhancement network eliminates the streak artifacts of the intermediate reconstructed CT image X_out based on adversarial learning; the generator of the image enhancement network has the same network structure as the projection recovery network; the discriminator of the image enhancement network adopts a deep convolutional structure consisting of 10 convolution layers, 10 group normalization layers, and 10 swish activation function layers, wherein the kernel sizes of the 10 convolution layers increase layer by layer.
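A plausible PyTorch sketch of this discriminator (10 convolution layers with layer-wise increasing kernel sizes, each followed by 8-group normalization and swish); the specific kernel-size schedule, the channel width, and the final pooling-plus-sigmoid head are assumptions not stated in the claim.

```python
import torch
import torch.nn as nn

def make_discriminator(in_ch=1, base_ch=16,
                       kernel_sizes=(3, 3, 5, 5, 7, 7, 9, 9, 11, 11)):
    """10 x (conv + GroupNorm(8) + SiLU) with growing kernels, then a scalar real/fake score."""
    layers, ch = [], in_ch
    for k in kernel_sizes:
        layers += [nn.Conv2d(ch, base_ch, k, padding=k // 2),
                   nn.GroupNorm(8, base_ch),
                   nn.SiLU()]
        ch = base_ch
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, 1), nn.Sigmoid()]
    return nn.Sequential(*layers)

score = make_discriminator()(torch.randn(2, 1, 32, 32))
```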
7. The sparse angle CT reconstruction method based on the sine domain and image domain of claim 3, wherein step S23 constructs the first loss function L_E from the label data S_R and the repaired full-angle sinogram S_out, expressed as:

L_E = ||S_out - S_R||_1 = ||E_ω(S_L) - S_R||_1

where E_ω(·) denotes the projection recovery network;
step S25 constructs the second loss function L_R from the intermediate reconstructed CT image X_out and the corresponding label CT image X_label, expressed as:

L_R = ||X_out - X_label||_1 = ||f_R(S_out) - X_label||_1

where f_R(·) denotes the filtered back projection reconstruction layer;
step S27 constructs the third loss function from the enhanced reconstructed CT image X and the label CT image X_label, expressed as:

L_G = L_Gan + α·L_pMSE
L_Gan = -log D_φ(X) = -log D_φ(G_θ(X_out))
L_pMSE = (1/(W·H)) Σ_{x=1}^{W} Σ_{y=1}^{H} (X(x,y) - X_label(x,y))^2

where L_Gan denotes the adversarial loss, L_pMSE the pixel-wise mean-square loss, α the weighting coefficient, G_θ the generator, D_φ the discriminator, W and H the width and height of the image, X(x,y) the pixel (x,y) of the enhanced reconstructed CT image X, and X_label(x,y) the pixel (x,y) of the label CT image X_label.
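A minimal numerical sketch of the third loss of claim 7, combining the adversarial term with the pixel-wise MSE; the discriminator output is stubbed as a constant and the weight α = 0.1 is an assumption for illustration.

```python
import numpy as np

def third_loss(enhanced, label, d_out, alpha=0.1):
    """L_G = L_Gan + alpha * L_pMSE with adversarial term -log D(X) on the discriminator score."""
    l_gan = -np.log(d_out)                      # adversarial loss
    l_pmse = np.mean((enhanced - label) ** 2)   # pixel-wise mean-square loss over W x H
    return l_gan + alpha * l_pmse

# all-zero enhanced image vs. all-one label: pMSE = 1; D(X) = 0.5: L_Gan = -log 0.5
loss = third_loss(np.zeros((4, 4)), np.ones((4, 4)), d_out=0.5)
```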
8. The sparse angle CT reconstruction method of claim 1, wherein the CT image reconstruction target equation constructed in step S4 is expressed as:

min_x ||s - A(x)||_2^2 + λ_1·||x - G_θ(S)||_2^2 + λ_2·||S - E_ω(S)||_2^2

where ||s - A(x_i)||^2 denotes the data fidelity term, s the sparse angle sinogram, A the forward projection matrix, x_i the target CT image obtained at the i-th iteration, ||x_i - G_θ(S)||^2 and ||S - E_ω(S)||^2 the regularization terms, G_θ, f_R, E_ω respectively the trained image enhancement network, filtered back projection reconstruction layer, and projection recovery network, and λ_1, λ_2 the hyper-parameters balancing the fidelity term against the regularization terms;

the iteratively optimized CT image reconstruction target equation is solved by the least squares method with the gradient update

x_{i+1} = x_i - η·[A^T·(A(x_i) - s) + λ_1·(x_i - G_θ(S))]

where A^T denotes the transpose of the forward projection matrix, η the step size, and i = 0, 1, 2, … the iteration index.
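The iterative solution of claim 8 can be sketched with a toy dense matrix standing in for the forward projection operator A and a fixed image standing in for the network prior G_θ(S); the problem sizes, the step size η, and the plain gradient-descent form are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 20)) * 0.1           # toy forward projection matrix
x_true = rng.standard_normal(20)
s = A @ x_true                                    # sparse-angle measurements
x_prior = x_true + 0.1 * rng.standard_normal(20)  # stands in for G_theta(S)

lam1, eta = 0.5, 0.5

def objective(x):
    """Data fidelity ||s - A x||^2 plus the image-prior regularizer lam1*||x - x_prior||^2."""
    return np.sum((s - A @ x) ** 2) + lam1 * np.sum((x - x_prior) ** 2)

x = np.zeros(20)
obj0 = objective(x)
for i in range(200):
    # gradient of the fidelity and prior terms: A^T (A x - s) + lam1 (x - x_prior)
    grad = A.T @ (A @ x - s) + lam1 * (x - x_prior)
    x = x - eta * grad
obj1 = objective(x)
```

The objective decreases monotonically for a small enough step size, which is the behavior the claim's iteration relies on.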
CN202310187803.1A 2023-03-01 2023-03-01 Sparse angle CT reconstruction method based on sine domain and image domain Pending CN116188615A (en)

Publications (1)

Publication Number Publication Date
CN116188615A true CN116188615A (en) 2023-05-30

Family

ID=86434352


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116977473A (en) * 2023-09-21 2023-10-31 北京理工大学 Sparse angle CT reconstruction method and device based on projection domain and image domain
CN116977473B (en) * 2023-09-21 2024-01-26 北京理工大学 Sparse angle CT reconstruction method and device based on projection domain and image domain

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination