CN113034641B - Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding - Google Patents

Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding

Info

Publication number
CN113034641B
Authority
CN
China
Prior art keywords
image
scale
reconstructed
convolution
reconstruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110331020.7A
Other languages
Chinese (zh)
Other versions
CN113034641A (en)
Inventor
刘进
亢艳芹
强俊
王勇
夏振宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Polytechnic University
Original Assignee
Anhui Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Polytechnic University
Priority to CN202110331020.7A
Publication of CN113034641A
Application granted
Publication of CN113034641B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/003 - Reconstruction from projections, e.g. tomography
    • G06T11/008 - Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06T11/006 - Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G06T2211/00 - Image generation
    • G06T2211/40 - Computed tomography
    • G06T2211/421 - Filtered back projection [FBP]
    • G06T2211/424 - Iterative
    • G06T2211/432 - Truncation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding, belonging to the technical field of computed tomography. The method first applies a wavelet transform to high-quality CT sample images to obtain high-frequency coefficient images, performs multi-scale convolutional feature learning on the high-frequency coefficients, and constructs a multi-scale filter dictionary. The learned multi-scale filter dictionary is then introduced to establish a sparse angle CT reconstruction model constrained by wavelet multi-scale convolution feature coding. The reconstruction model is split by variable decomposition into a convolutional feature learning update objective and a reconstructed-image update objective. Finally, the reconstructed image and the multi-scale filter dictionary are updated through an alternating iteration strategy to obtain the final reconstructed image. The invention effectively suppresses the streak artifacts and detail loss of sparse angle CT reconstruction, improves the contrast of the reconstructed image, and promotes the use of sparse angle CT scanning in clinical diagnosis and treatment.

Description

Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding
Technical Field
The invention relates to the technical field of computed tomography, in particular to a sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding.
Background
Computed tomography (CT) is an imaging technique that exploits differences in X-ray attenuation among object components and uses a reconstruction algorithm to present accurate structural information, enabling non-invasive examination. CT imaging offers high spatial resolution, low scanning cost and short scan times; it is clinically complementary to magnetic resonance imaging, ultrasound imaging, positron emission tomography and other modalities, provides an imaging basis for disease screening, diagnosis and treatment, and is currently one of the indispensable medical devices in hospitals at all levels. However, excessive X-ray exposure can damage tissue cells and increase the risk of disease. Studies have shown that in a conventional helical CT scan the examinee may receive a radiation dose of 1.5-10 mSv, far higher than the 0.2-0.5 mSv of a conventional chest examination. Radiation also has a cumulative effect as the number of examinations increases, and some special populations (such as children, pregnant women and the elderly) are more susceptible to harm. Therefore, the X-ray dose should be reduced as much as possible without compromising diagnostic image quality.
Sparse angle scanning, which reduces the number of projection angles, is an effective way to lower X-ray exposure. However, reducing the sampling causes loss of acquired signal and thus degrades the reconstructed image: tissue details are lost and streak artifacts increase, which may lead to missed diagnoses and misdiagnoses during film reading. To improve sparse angle CT imaging, researchers have, on the one hand, worked from the image domain, designing dedicated image restoration and processing algorithms to suppress artifacts and enhance details; however, the artifact characteristics of CT images vary greatly across scanning devices, scanning modes and reconstruction methods, which limits the generalization ability of such methods. On the other hand, working from the projection domain, the raw data or the log-transformed projection data can be restored and pre-processed to improve data consistency and thereby improve reconstruction; however, because projection data are highly sensitive, under-correction, over-correction and reduced data consistency easily occur during processing. In addition, improved reconstruction algorithms are another main route to better imaging: a large number of iterative reconstruction algorithms have been proposed in recent years and achieve excellent performance, especially statistical iterative reconstruction constrained by prior information. The main problems facing such algorithms are that they involve many hyper-parameters that are difficult to optimize adaptively, that they are computationally complex and require repeated iterations, and that the prior information is unstable so that prior terms cannot be obtained under a unified framework; as a result, iterative reconstruction has difficulty realizing its full value in clinical application scenarios. Although many problems remain in sparse-scan imaging, it will be an important topic in future CT research and a main direction for the development of X-ray imaging.
Sparse feature learning, used as a prior model to form a constraint term, has been widely applied to sparse angle CT reconstruction. Sparse feature learning methods show excellent performance and have greatly advanced the practicality of sparse angle CT imaging algorithms. Such methods mainly construct a dictionary by training on samples and use the dictionary to sparsely encode signals; they have received wide attention in feature recognition, classification, image restoration and related fields. However, conventional sparse feature learning has limited capability for extracting prior information. How to expand and enhance the feature learning capability, and how to design multi-scale feature coding forms that give full play to its advantages and better serve low-dose CT imaging, is a key problem in the development of clinical CT imaging.
The information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
1. Technical problems to be solved by the invention
The invention aims to solve the problems of existing sparse angle CT reconstruction methods, such as low image quality, residual artifacts, loss of tissue detail and low contrast, and provides a sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding, referred to as Wavelet Multi-scale convolution feature Coding constrained Reconstruction (WMCR). Without changing existing CT hardware, the method improves the ability to perceive, encode and decode feature information through convolutional feature learning at multiple scales, obtains rich prior knowledge, and serves high-quality sparse angle CT reconstruction. By suppressing the image artifacts and detail loss caused by missing scanning angles, the invention improves the quality of sparse angle CT reconstructed images, ultimately reducing extra radiation for the patient and increasing diagnostic and therapeutic benefit.
2. Technical scheme
In order to achieve the above purpose, the technical solution provided by the invention is as follows:
The invention discloses a sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding, comprising the following steps:
step 1, acquiring the initial multi-scale filter dictionary atoms;
for a given high-quality CT sample image x_s, a wavelet transform is performed to obtain high-frequency and low-frequency wavelet coefficients, where the high-frequency part is expressed as F_0 = (f_h^0, f_v^0, f_d^0), with f_h^0, f_v^0 and f_d^0 the sub-band signals in the horizontal, vertical and diagonal directions, respectively; multi-scale convolutional feature learning is performed on F_0 to obtain the initial multi-scale filter dictionary atoms {d_{n,k}^0}; the learning model is represented as

    \min_{\{d_{n,k}\},\{M_{n,k}\}} \frac{1}{2} \left\| F_0 - \sum_{k=1}^{K} \sum_{n=1}^{N} d_{n,k} * M_{n,k} \right\|_2^2 + \beta \sum_{k=1}^{K} \sum_{n=1}^{N} \left\| M_{n,k} \right\|_1    (1)

where * is the convolution operator, K is the number of convolution-kernel scales, N is the number of convolution kernels at a single scale, M_{n,k} is the feature map of the corresponding atom, and β is a regularization parameter;
step 2, constructing a sparse angle CT reconstruction model constrained by wavelet multi-scale convolution feature coding;
step 3, decomposing the reconstruction model: applying fixed-variable decomposition to the reconstruction model to obtain a convolutional feature learning update objective and an image-update objective;
step 4, solving the convolutional feature learning update objective and the image-update objective in an alternating manner to obtain the final reconstruction result.
Further, the reconstruction model constructed in step 2 is represented as

    \min_{x,\{d_{n,k}\},\{M_{n,k}\}} \frac{1}{2} \left\| A x - p \right\|_2^2 + \lambda \left( \frac{1}{2} \left\| W x - \sum_{k=1}^{K} \sum_{n=1}^{N} d_{n,k} * M_{n,k} \right\|_2^2 + \beta \sum_{k=1}^{K} \sum_{n=1}^{N} \left\| M_{n,k} \right\|_1 \right)    (2)

where * is the convolution operator, K is the number of convolution-kernel scales, N is the number of convolution kernels at a single scale, A is the projection matrix of the CT system, x is the image to be reconstructed, p is the sparse projection data, W is the wavelet-transform high-frequency coefficient extraction operator, λ and β are regularization parameters, d_{n,k} are the multi-scale filter dictionary atoms, and M_{n,k} are the corresponding feature maps.
Furthermore, the convolutional feature learning update objective and the image-update objective obtained by the decomposition in step 3 are respectively expressed as

    \min_{\{d_{n,k}\},\{M_{n,k}\}} \frac{1}{2} \left\| W x^t - \sum_{k=1}^{K} \sum_{n=1}^{N} d_{n,k} * M_{n,k} \right\|_2^2 + \beta \sum_{k=1}^{K} \sum_{n=1}^{N} \left\| M_{n,k} \right\|_1    (3)

    \min_{x} \frac{1}{2} \left\| A x - p \right\|_2^2 + \frac{\lambda}{2} \left\| W x - \sum_{k=1}^{K} \sum_{n=1}^{N} d_{n,k}^{t+1} * M_{n,k}^{t+1} \right\|_2^2    (4)

where * is the convolution operator, K is the number of convolution-kernel scales, N is the number of convolution kernels at a single scale, A is the projection matrix of the CT system, x is the image to be reconstructed, p is the sparse projection data, W is the wavelet-transform high-frequency coefficient extraction operator, λ and β are regularization parameters, x^t is the image to be reconstructed after the t-th update (t ≥ 0), d_{n,k}^{t} are the multi-scale filter dictionary atoms after the t-th update, and M_{n,k}^{t} are the feature maps of the corresponding atoms after the t-th update.
Furthermore, the wavelet transform in step 1 is a 1-level two-dimensional stationary wavelet transform with the Haar wavelet basis.
Further, the multi-scale filter parameters in step 1 are: 2 ≤ K ≤ 5, 32 ≤ N ≤ 64 at a single scale, and the convolution kernel size can be selected from 6×3 to 14×3.
Furthermore, the learning model of formula (1) in step 1 is solved with the alternating direction method of multipliers to obtain the initial multi-scale filter dictionary atoms.
Furthermore, the wavelet-transform high-frequency coefficient extraction operator W in step 2 operates as follows: first, a 1-level two-dimensional stationary wavelet transform with the Haar wavelet basis is applied to the image; then the horizontal, vertical and diagonal sub-band signals of the high-frequency part are selected; finally, the sub-band signals in the three directions are stacked in order along a third dimension.
Further, when t = 0 (the initial value) in step 3, the initial multi-scale filter dictionary atoms are those obtained in step 1, and the initial image to be reconstructed x^0 is obtained by filtered back-projection reconstruction with a ramp filter.
Furthermore, in step 4, the convolutional feature learning update objective of formula (3) is solved with the alternating direction method of multipliers, and the image-update objective of formula (4) is solved with a paraboloid surrogate algorithm.
Furthermore, in step 4, the iteration stops when the images before and after an update satisfy RMSE(x^{t+1} - x^t) ≤ 30, and the final reconstruction result is output, where RMSE(·) is the root-mean-square-error operator.
3. Advantageous effects
Compared with the prior art, the technical scheme provided by the invention has the following beneficial effects:
the invention discloses a sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding, which comprises the steps of firstly, carrying out wavelet transformation on a high-quality CT sample image to obtain a high-frequency coefficient image, carrying out multi-scale convolution feature learning on the high-frequency coefficient image, and constructing a multi-scale filter dictionary; then, establishing a wavelet domain high-frequency coefficient convolution characteristic coding constrained reconstruction model by taking the constructed multi-scale filter dictionary as an initial value; then, carrying out variable decomposition on the reconstruction model, and dividing the variable decomposition into reconstruction image updating and multi-scale filter dictionary updating; and finally, updating the reconstructed image and the multi-scale filter dictionary through an alternate iteration strategy to obtain a final reconstructed image. By introducing the wavelet domain multi-scale convolution characteristic coding prior into reconstruction, the problems of strip artifacts and detail loss in sparse scanning angle reconstruction of the conventional reconstruction method can be effectively solved. Experimental results prove that under various Sparse angle scanning data, compared with a traditional Wavelet domain convolution Sparse Coding reconstruction method (WCSC for short), the method (WMCR) can effectively inhibit the problems of strip artifacts and detail loss caused by projection angle loss in a reconstructed image, and the reconstructed image has better visual effect and contrast. The method is expected to provide an advanced and practical sparse angle reconstruction frame for image departments and CT manufacturers of domestic hospitals, reduces additional radiation for patients, increases diagnosis and treatment benefits, and has high application and popularization prospects.
Drawings
FIG. 1 is a schematic flow chart of a sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding in the present invention;
FIG. 2 is a reconstructed image of 180 scan angle projection data of an abdomen in an embodiment (a: a reference image; b: FBP reconstructed image; c: WCSC reconstructed image; d: WMCR reconstructed image);
FIG. 3 is a reconstructed image of 120 scan angle projection data of an abdomen in an embodiment (a: a reference image; b: FBP reconstructed image; c: WCSC reconstructed image; d: WMCR reconstructed image);
FIG. 4 is a diagram of a filter dictionary set after abdominal reconstruction in an embodiment (a: 180 angular scanning experiment; b:120 angular scanning experiment);
FIG. 5 is a reconstructed image of 180 scan angle projection data of the breast in an embodiment (a: reference image; b: FBP reconstructed image; c: WCSC reconstructed image; d: WMCR reconstructed image);
FIG. 6 is a reconstructed image of 120 scan angle projection data of the breast in an embodiment (a: reference image; b: FBP reconstructed image; c: WCSC reconstructed image; d: WMCR reconstructed image);
fig. 7 is a Profile curve (a: 180 scan angles; b:120 scan angles) of a reconstructed map of the chest projection data in an embodiment.
Detailed Description
For a further understanding of the invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplification of description, but do not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The present invention will be further described with reference to the following examples.
Example 1
A flowchart of a sparse angle CT reconstruction method based on wavelet multi-scale convolutional feature coding according to this embodiment is shown in fig. 1, and the specific steps are as follows:
step 1, obtaining the initial multi-scale filter dictionary atoms;
for a given high-quality CT sample image x_s, a wavelet transform is performed to obtain the high-frequency and low-frequency wavelet coefficients, where the high-frequency part can be expressed as
F_0 = (f_h^0, f_v^0, f_d^0), with f_h^0, f_v^0 and f_d^0 the sub-band signals in the horizontal, vertical and diagonal directions, respectively. Multi-scale convolutional feature learning is performed on F_0 to obtain the initial multi-scale filter dictionary atoms {d_{n,k}^0}. The learning model can be expressed as

    \min_{\{d_{n,k}\},\{M_{n,k}\}} \frac{1}{2} \left\| F_0 - \sum_{k=1}^{K} \sum_{n=1}^{N} d_{n,k} * M_{n,k} \right\|_2^2 + \beta \sum_{k=1}^{K} \sum_{n=1}^{N} \left\| M_{n,k} \right\|_1    (1)

where * is the convolution operator, K is the number of convolution-kernel scales, N is the number of convolution kernels at a single scale, M_{n,k} is the feature map of the corresponding atom, and β is a regularization parameter.
Specifically, a 1-level two-dimensional stationary wavelet transform with the Haar wavelet basis is applied to the given high-quality CT sample image x_s. The multi-scale filter parameters are: 2 ≤ K ≤ 5, 32 ≤ N ≤ 64 at a single scale, and the convolution kernel size can be selected from 6×3 to 14×3; the specific size is chosen manually according to factors such as the scanning angle, the available storage space, and the quality of the image to be reconstructed. β is a regularization parameter that is tuned manually for the specific data. Solving formula (1) with the alternating direction method of multipliers yields the initial multi-scale filter dictionary atoms {d_{n,k}^0}.
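As an illustration of the data preparation in this step, the following Python sketch builds the high-frequency stack F_0 with a 1-level stationary Haar wavelet transform (PyWavelets) and initialises a bank of multi-scale filter atoms. The choice of K = 3 scales with 32 kernels per scale, the kernel supports of 8, 10 and 12 pixels, and the kernels spanning the three sub-band channels are illustrative assumptions of this sketch, and the ADMM solution of formula (1) itself is not reproduced here.

```python
import numpy as np
import pywt

def highfreq_stack(image):
    """F_0: 1-level 2-D stationary Haar transform, horizontal/vertical/
    diagonal sub-bands stacked along a third axis -> shape (H, W, 3).
    Image sides must be even for the stationary transform."""
    (cA, (cH, cV, cD)), = pywt.swt2(image, "haar", level=1)
    return np.stack([cH, cV, cD], axis=-1)

def init_multiscale_atoms(supports=(8, 10, 12), n_per_scale=32, seed=0):
    """Random zero-mean, unit-norm initial atoms d_{n,k}^0 at K scales.
    The supports, count per scale and (s, s, 3) kernel shape are
    assumptions made for this sketch only."""
    rng = np.random.default_rng(seed)
    atoms = []
    for s in supports:                               # one bank per scale k
        d = rng.standard_normal((n_per_scale, s, s, 3))
        d -= d.mean(axis=(1, 2, 3), keepdims=True)
        norms = np.linalg.norm(d.reshape(n_per_scale, -1), axis=1)
        atoms.append(d / norms[:, None, None, None])
    return atoms

# Example on a toy stand-in for a high-quality CT sample image
x_s = np.random.rand(512, 512)
F0 = highfreq_stack(x_s)        # (512, 512, 3)
D0 = init_multiscale_atoms()    # list of K = 3 kernel banks
```

A convolutional dictionary learning solver for formula (1) would then refine these random atoms into the initial dictionary used by the reconstruction.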
Step 2, constructing a sparse angle CT reconstruction model constrained by wavelet multi-scale convolution feature coding;
the reconstructed model constructed can be expressed as:
Figure GDA0003856912530000062
wherein, X is convolution operator, K is convolution kernel scale number, N is convolution kernel number under single scale, A is projection matrix of CT system, x is image to be reconstructed, p is sparse projection data, W is wavelet transform high-frequency coefficient extraction operator, lambda and beta are regularization parameters, d is regularization parameter n,k For multi-scale filter dictionary atoms, M n,k Is a characteristic diagram of the corresponding atom.
Specifically, the operation steps of the wavelet transform high-frequency coefficient extraction operator W in the sparse angle CT reconstruction model are as follows: firstly, performing 1-layer two-dimensional stationary wavelet transform on an image, and selecting a Haar wavelet base; then selecting subband signals of a high-frequency coefficient part in the horizontal direction, the vertical direction and the diagonal direction; and finally, sequentially superposing and combining the subband signals in the three directions according to a third dimension. And obtaining three-dimensional data after the operator W is operated, wherein the size of the first dimension and the second dimension is equal to that of the image to be reconstructed, and the size of the third dimension is 3. The regularization parameter λ is empirically selected based on the specific data.
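To make the operators of formula (2) concrete, the sketch below implements W exactly as described above, together with an approximate W^T built from the inverse stationary transform with a zeroed approximation band, and uses scikit-image's parallel-beam `radon`/`iradon` as rough stand-ins for the CT system matrix A and its back-projection A^T. The zero-approximation inverse and the parallel-beam geometry are assumptions of this sketch, not statements about the patent's system model.

```python
import numpy as np
import pywt
from skimage.transform import radon, iradon

def W(image):
    """High-frequency extraction operator: (H, W) -> (H, W, 3)."""
    (cA, (cH, cV, cD)), = pywt.swt2(image, "haar", level=1)
    return np.stack([cH, cV, cD], axis=-1)

def Wt(coeffs):
    """Approximate W^T: inverse stationary Haar transform with the
    approximation band set to zero (an assumption of this sketch)."""
    cH, cV, cD = coeffs[..., 0], coeffs[..., 1], coeffs[..., 2]
    cA = np.zeros_like(cH)
    return pywt.iswt2([(cA, (cH, cV, cD))], "haar")

def A(image, theta):
    """Stand-in forward projector (parallel-beam Radon transform)."""
    return radon(image, theta=theta, circle=True)

def At(sino, theta, out_size):
    """Stand-in back-projection (unfiltered iradon)."""
    return iradon(sino, theta=theta, filter_name=None,
                  circle=True, output_size=out_size)

# Quick shape check on a toy image and a 60-angle sparse scan
theta = np.linspace(0.0, 180.0, 60, endpoint=False)
x = np.zeros((256, 256)); x[96:160, 96:160] = 1.0
p = A(x, theta)                 # sinogram with 60 columns
F = W(x)                        # (256, 256, 3) high-frequency stack
x_back = Wt(F)                  # image-sized array
b = At(p, theta, 256)           # image-sized back-projection
```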
Step 3, decomposition of a reconstruction model:
Fixed-variable decomposition is applied to the reconstruction model of formula (2) to obtain a convolutional feature learning update objective and an image-update objective, which are respectively expressed as
Figure GDA0003856912530000063
Figure GDA0003856912530000064
wherein, X is convolution operator, K is convolution kernel scale degree, N is scale convolution kernel number, A is projection matrix, p is projection data, W is wavelet transform high-frequency coefficient extraction operator, lambda and beta are regularization parameters, and x is t Is the image to be reconstructed after the t (0 is less than or equal to t) time of updating,
Figure GDA0003856912530000065
for the multi-scale filter dictionary atom after the t-th update,
Figure GDA0003856912530000066
is the characteristic diagram of the corresponding atom after the t-th updating.
Specifically, in the convolution feature learning updating target function and the image to be reconstructed updating target function, when t =0, the initial multi-scale filter dictionary atom and the feature map are obtained in step 1, and the initial image x to be reconstructed is obtained 0 By filteringAnd (3) reconstructing by a Back Projection (FBP) algorithm.
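A minimal sketch of the x^0 initialisation mentioned here, again using scikit-image's parallel-beam `iradon` with a ramp filter as a stand-in for the scanner-specific FBP:

```python
from skimage.transform import iradon

def fbp_initial_image(sinogram, theta, output_size):
    """x^0: filtered back-projection with a ramp filter (stand-in FBP;
    requires scikit-image >= 0.19 for the filter_name argument)."""
    return iradon(sinogram, theta=theta, filter_name="ramp",
                  circle=True, output_size=output_size)

# Usage with a sparse-angle sinogram p and angle set theta produced by
# the same radon geometry, e.g. x0 = fbp_initial_image(p, theta, 256)
```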
Step 4, solving the convolutional feature learning update objective and the image-update objective in an alternating manner to obtain the final reconstructed image.
Specifically, the convolutional feature learning update objective of formula (3) is solved with the alternating direction method of multipliers. Let D = (d_{1,1}, d_{1,2}, ..., d_{N,K}) be the vectorized filter dictionary and M = (M_{1,1}, M_{1,2}, ..., M_{N,K}) the vectorized feature-map set; the synthesis \sum_{k=1}^{K} \sum_{n=1}^{N} d_{n,k} * M_{n,k} can then be written compactly as the matrix product DM (where the operations between matrix elements are still convolutions). Introducing auxiliary variables C and F for the vectorized feature-map set M and the filter dictionary D, the solution of formula (3) consists of the following sub-problems:

    M^{t+1} = \arg\min_{M} \frac{1}{2} \left\| W x^t - D^t M \right\|_2^2 + \frac{\rho_1}{2} \left\| M - C^t + u^t \right\|_2^2    (3-1)

    C^{t+1} = \arg\min_{C} \beta \left\| C \right\|_1 + \frac{\rho_1}{2} \left\| M^{t+1} - C + u^t \right\|_2^2    (3-2)

    u^{t+1} = u^t + M^{t+1} - C^{t+1}    (3-3)

    D^{t+1} = \arg\min_{D} \frac{1}{2} \left\| W x^t - D M^{t+1} \right\|_2^2 + \frac{\rho_2}{2} \left\| D - F^t + h^t \right\|_2^2    (3-4)

    F^{t+1} = \mathrm{Proj}\left( D^{t+1} + h^t \right)    (3-5)

    h^{t+1} = h^t + D^{t+1} - F^{t+1}    (3-6)

where u and h are the scaled dual auxiliary variables of the solution, ρ_1 and ρ_2 are the Lagrange-multiplier (penalty) parameters, which can be set to ρ_1 = 50β + 1 and ρ_2 = 1, and Proj(·) is the projection truncation operation, which truncates the filters so that the size of the coding is the same as the size of the image data to be reconstructed; the initialization is F^0 = D^0, C^0 = M^0, h^0 = 0 and u^0 = 0. Formula (3-1) is the feature-map update and formula (3-4) is the filter-dictionary update; their solutions can be obtained in closed form via the three-dimensional Fourier transform. Formula (3-2) is solved by the soft-threshold shrinkage algorithm, and formula (3-5) is solved via the three-dimensional Fourier transform and projection truncation. The image-update objective of formula (4) is solved with a paraboloid surrogate algorithm, which can be expressed specifically as

    x^{t+1} = x^t - \left[ A^T (A x^t - p) + \lambda W^T (DM - W x^t) \right] / \left[ A^T A I + \lambda \right]    (4-1)

where A^T is the back-projection operator of the CT system, W^T is the inverse wavelet high-frequency coefficient transform, and I is an all-ones vector. Finally, formulas (3-1), (3-2), (3-3), (3-4), (3-5), (3-6) and (4-1) are solved alternately in sequence, and the iteration is repeated until the images before and after an update satisfy RMSE(x^{t+1} - x^t) ≤ 30; the iteration then stops and the final reconstruction result is output, where RMSE(·) is the root-mean-square-error operator.
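To make the alternating solution concrete, the sketch below implements the pieces that the description pins down: the soft-threshold shrinkage of formula (3-2) with its scaled dual update (3-3) (the dictionary-side pair (3-5)/(3-6) would use the filter projection instead of the shrinkage), a gradient-type image step for objective (4), and the RMSE stopping rule. The Fourier-domain sub-problems (3-1) and (3-4) are not reproduced, and scikit-image's `radon`/`iradon` again stand in for A and A^T, so this is a structural sketch under those assumptions rather than the patent's exact solver.

```python
import numpy as np
from skimage.transform import radon, iradon

def soft_threshold(v, tau):
    """Formula (3-2): element-wise soft-threshold shrinkage."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def sparsify_dual_step(M, C, u, beta, rho1):
    """One pass of formulas (3-2) and (3-3) for the feature-map block."""
    C_new = soft_threshold(M + u, beta / rho1)   # (3-2)
    u_new = u + M - C_new                        # (3-3)
    return C_new, u_new

def image_update(x, p, DM, theta, lam, W, Wt):
    """Surrogate/gradient step for the image objective (4).

    DM is the current synthesis sum_{n,k} d_{n,k} * M_{n,k}, shaped like
    W(x); W and Wt are the operators sketched earlier; p must come from
    the same radon geometry.  x is assumed zero outside the scan circle."""
    back = lambda s: iradon(s, theta=theta, filter_name=None,
                            circle=True, output_size=x.shape[0])
    Ax = radon(x, theta=theta, circle=True)
    grad = back(Ax - p) + lam * Wt(W(x) - DM)    # gradient of objective (4)
    # per-pixel curvature A^T A 1 (ones restricted to the scan circle)
    yy, xx = np.indices(x.shape)
    c = (x.shape[0] - 1) / 2.0
    disk = ((yy - c) ** 2 + (xx - c) ** 2 <= c ** 2).astype(float)
    denom = back(radon(disk, theta=theta, circle=True)) + lam
    return x - grad / np.maximum(denom, 1e-8)

def rmse(a, b):
    """RMSE(.) used in the stopping rule RMSE(x^{t+1} - x^t) <= 30."""
    return np.sqrt(np.mean((a - b) ** 2))
```

In a full solver, these steps would be alternated with the Fourier-domain updates of (3-1), (3-4) and (3-5) until rmse(x_new, x) falls to 30 or below, as stated above.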
Criteria for evaluating effects
In the experiment, high-quality abdominal and chest image data were used for simulation to obtain simulated projection data at different scanning angles, which were then reconstructed. The parameters of the simulated scan were: 960 detector units with a unit size of 0.78 mm; distances from the ray source to the object center and to the detector center of 50 cm and 100 cm, respectively; 180 and 120 projections acquired per scan; other parameters at default values. In the reconstruction of the 180-view abdominal projection data the regularization parameters λ and β were 0.02 and 0.016, and for the 120-view abdominal data 0.025 and 0.018; for the 180-view chest data λ and β were 0.022 and 0.018, and for the 120-view chest data 0.026 and 0.021. In the experiment the number of convolution kernels at a single scale was 32, the number of scales was 3, and the convolution kernel sizes were 8×3, 10×3 and 12×3, respectively.
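For reference, sparse-view projection data such as the 180- and 120-view sets described here can be simulated as in the sketch below; the parallel-beam geometry and the 0-180 degree angular coverage are assumptions of this sketch, whereas the source-to-center and source-to-detector distances quoted above imply a fan-beam setup.

```python
import numpy as np
from skimage.transform import radon

def simulate_sparse_views(phantom, n_views):
    """Simulate a sinogram with n_views evenly spaced angles over
    0-180 degrees (parallel-beam assumption of this sketch)."""
    theta = np.linspace(0.0, 180.0, n_views, endpoint=False)
    return radon(phantom, theta=theta, circle=True), theta

# 180-view and 120-view data from a toy phantom
phantom = np.zeros((256, 256))
phantom[80:176, 80:176] = 1.0
p180, theta180 = simulate_sparse_views(phantom, 180)
p120, theta120 = simulate_sparse_views(phantom, 120)
```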
In the figures, all reconstructed CT images are displayed with a window width of 400 HU (Hounsfield units) and a window level of 50 HU. Both subjective and objective evaluation methods are used in the experiment to verify the effectiveness of the algorithm. Subjective evaluation: the reconstruction effect of the present invention is assessed by comparing the FBP, WCSC and WMCR reconstructions of the sparse-angle abdominal and chest data (see FIG. 2, FIG. 3, FIG. 5 and FIG. 6); by selecting regions of interest and drawing profile curves over a fixed region of the reconstructions (such as the regions marked by white line segments in FIG. 4 and FIG. 7), deviations in reconstructed tissue detail can be inspected closely. Objective evaluation: the experimental results are compared quantitatively using reference-based evaluation indices such as the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM).
Subjective evaluation
By observing and comparing the CT reconstructions in FIG. 2, FIG. 3, FIG. 5 and FIG. 6 in terms of streak-artifact intensity, artifact distribution, tissue detail, contrast between different tissues, and reconstructed image texture, the higher-quality reconstruction can be identified. The reconstruction results also show that, as the number of scanning angles decreases, the FBP reconstruction suffers severe noise interference and tissue details cannot be distinguished, while noise and artifacts in the WCSC and WMCR reconstructions are clearly suppressed; compared with the WCSC method, the WMCR reconstruction better preserves image detail and improves contrast. As the number of angles decreases further, artifacts increase and the reconstruction quality gradually degrades, but the WMCR method still yields better reconstructions and is clearly superior to the WCSC algorithm.
Objective evaluation
While the effectiveness of the method for sparse-angle CT reconstruction is evaluated subjectively, the experiment further uses two quantitative indices, PSNR and SSIM, to evaluate the reconstructed images and confirm the method's effectiveness quantitatively. PSNR and SSIM are computed as

    \mathrm{PSNR} = 10 \log_{10} \left( \frac{N H_{max}^2}{\| x_T - x_r \|_2^2} \right)

    \mathrm{SSIM} = \frac{(2 \mu_{x_T} \mu_{x_r} + C_1)(2 \sigma_{x_T x_r} + C_2)}{(\mu_{x_T}^2 + \mu_{x_r}^2 + C_1)(\sigma_{x_T}^2 + \sigma_{x_r}^2 + C_2)}

where x_T is the image to be reconstructed after the last update, x_r is the high-quality reference image used for the simulation, N is the total number of image pixels, H_max is the maximum value of x_r, μ_{x_T} and μ_{x_r} are the mean CT values over all pixels of x_T and x_r respectively, σ_{x_T} and σ_{x_r} are the corresponding standard deviations, σ_{x_T x_r} is the covariance of x_T and x_r, and the constants are C_1 = (0.01 × H_max)^2 and C_2 = (0.03 × H_max)^2. Using the high-quality images from the simulation as the reference images, the PSNR and SSIM values of the reconstructions of the different data were calculated; the results are shown in Table 1.
TABLE 1
(Table 1 is reproduced as an image in the original publication; it lists the PSNR and SSIM values of the FBP, WCSC and WMCR reconstructions of the abdominal and chest data at 180 and 120 scan angles.)
It can be seen from Table 1 that, in the reconstruction of the simulated sparse-angle abdominal and chest data, the quantitative indices of the FBP reconstruction are the worst and the WCSC results are improved to a certain extent, whereas the WMCR method of the present invention achieves higher SSIM and PSNR values (compared with the WCSC results, PSNR is higher by about 1.4 dB and SSIM by about 0.01 in the 180-view experiments, and PSNR is higher by about 0.9 dB and SSIM by about 0.01 in the 120-view experiments). As can be seen from FIG. 4 and FIG. 7, within the selected pixels (the regions marked by white line segments in FIG. 4(a) and FIG. 7(a), with Ref denoting the reference image), the jumps in pixel value at tissue boundaries are more pronounced in the WMCR reconstruction, the tissue boundaries are sharper, and the profile curves follow the reference image more closely. These experiments show that, under the same sparse-angle scanning conditions, the method can obtain CT reconstructions with fewer artifacts, is highly stable, and has promising application prospects.
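For reference, a minimal NumPy sketch of the PSNR and SSIM computations as defined above; note that this is the single-window (global) SSIM of the formula given in the text, not the usual locally windowed SSIM.

```python
import numpy as np

def psnr(x_rec, x_ref):
    """PSNR with H_max taken as the maximum of the reference image."""
    h_max = x_ref.max()
    mse = np.mean((x_rec - x_ref) ** 2)
    return 10.0 * np.log10(h_max ** 2 / mse)

def ssim_global(x_rec, x_ref):
    """Single-window SSIM with C1 = (0.01*H_max)^2, C2 = (0.03*H_max)^2."""
    h_max = x_ref.max()
    c1, c2 = (0.01 * h_max) ** 2, (0.03 * h_max) ** 2
    mu_t, mu_r = x_rec.mean(), x_ref.mean()
    var_t, var_r = x_rec.var(), x_ref.var()
    cov = np.mean((x_rec - mu_t) * (x_ref - mu_r))
    return ((2 * mu_t * mu_r + c1) * (2 * cov + c2)) / \
           ((mu_t ** 2 + mu_r ** 2 + c1) * (var_t + var_r + c2))

# Example: print(psnr(x_wmcr, x_ref), ssim_global(x_wmcr, x_ref))
```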
The present invention and its embodiments have been described above schematically and without limitation; what is shown in the drawings is only one embodiment of the present invention, and the actual structure is not limited thereto. Therefore, a person of ordinary skill in the art should understand that, without departing from the spirit of the present invention, the invention is not limited to the described embodiments, and that similar structural modes devised without creative design also fall within its scope.

Claims (1)

1. A sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding is characterized by comprising the following steps:
step 1, obtaining the initial multi-scale filter dictionary atoms;
for a given high-quality CT sample image x_s, a wavelet transform is performed to obtain the high-frequency and low-frequency wavelet coefficients, where the wavelet transform is a 1-level two-dimensional stationary wavelet transform with the Haar wavelet basis; the high-frequency part is expressed as
F_0 = (f_h^0, f_v^0, f_d^0), with f_h^0, f_v^0 and f_d^0 the sub-band signals in the horizontal, vertical and diagonal directions, respectively; multi-scale convolutional feature learning is performed on F_0 to obtain the initial multi-scale filter dictionary atoms {d_{n,k}^0}; the learning model is represented as

    \min_{\{d_{n,k}\},\{M_{n,k}\}} \frac{1}{2} \left\| F_0 - \sum_{k=1}^{K} \sum_{n=1}^{N} d_{n,k} * M_{n,k} \right\|_2^2 + \beta \sum_{k=1}^{K} \sum_{n=1}^{N} \left\| M_{n,k} \right\|_1    (1)

where * is the convolution operator, K is the number of convolution-kernel scales, N is the number of convolution kernels at a single scale, M_{n,k} is the feature map of the corresponding atom, and β is a regularization parameter; the learning model of formula (1) is solved with the alternating direction method of multipliers to obtain the initial multi-scale filter dictionary atoms; the multi-scale filter parameters are: 2 ≤ K ≤ 5, 32 ≤ N ≤ 64 at a single scale, and the convolution kernel size can be selected from 6×3 to 14×3;
step 2, constructing a sparse angle CT reconstruction model constrained by wavelet multi-scale convolution feature coding; the constructed reconstruction model is represented as
    \min_{x,\{d_{n,k}\},\{M_{n,k}\}} \frac{1}{2} \left\| A x - p \right\|_2^2 + \lambda \left( \frac{1}{2} \left\| W x - \sum_{k=1}^{K} \sum_{n=1}^{N} d_{n,k} * M_{n,k} \right\|_2^2 + \beta \sum_{k=1}^{K} \sum_{n=1}^{N} \left\| M_{n,k} \right\|_1 \right)    (2)

where * is the convolution operator, K is the number of convolution-kernel scales, N is the number of convolution kernels at a single scale, A is the projection matrix of the CT system, x is the image to be reconstructed, p is the sparse projection data, W is the wavelet-transform high-frequency coefficient extraction operator, λ and β are regularization parameters, d_{n,k} are the multi-scale filter dictionary atoms, and M_{n,k} are the corresponding feature maps; the wavelet-transform high-frequency coefficient extraction operator W operates as follows: first, a 1-level two-dimensional stationary wavelet transform with the Haar wavelet basis is applied to the image; then the horizontal, vertical and diagonal sub-band signals of the high-frequency part are selected; finally, the sub-band signals in the three directions are stacked in order along a third dimension; the result of applying the operator W is three-dimensional data whose first and second dimensions equal the size of the image to be reconstructed and whose third dimension has size 3;
step 3, decomposition of the reconstruction model: fixed-variable decomposition is applied to the reconstruction model to obtain a convolutional feature learning update objective and an image-update objective, respectively expressed as
    \min_{\{d_{n,k}\},\{M_{n,k}\}} \frac{1}{2} \left\| W x^t - \sum_{k=1}^{K} \sum_{n=1}^{N} d_{n,k} * M_{n,k} \right\|_2^2 + \beta \sum_{k=1}^{K} \sum_{n=1}^{N} \left\| M_{n,k} \right\|_1    (3)

    \min_{x} \frac{1}{2} \left\| A x - p \right\|_2^2 + \frac{\lambda}{2} \left\| W x - \sum_{k=1}^{K} \sum_{n=1}^{N} d_{n,k}^{t+1} * M_{n,k}^{t+1} \right\|_2^2    (4)

where * is the convolution operator, K is the number of convolution-kernel scales, N is the number of convolution kernels at a single scale, A is the projection matrix of the CT system, x is the image to be reconstructed, p is the sparse projection data, W is the wavelet-transform high-frequency coefficient extraction operator, λ and β are regularization parameters, x^t is the image to be reconstructed after the t-th update (t ≥ 0), d_{n,k}^{t} are the multi-scale filter dictionary atoms after the t-th update, and M_{n,k}^{t} are the feature maps of the corresponding atoms after the t-th update; when t = 0 (the initial value), the initial multi-scale filter dictionary atoms are those obtained in step 1, and the initial image to be reconstructed x^0 is obtained by filtered back-projection reconstruction with a ramp filter;
step 4, solving the convolutional feature learning update objective and the image-update objective in an alternating manner to obtain the final reconstruction result; the convolutional feature learning update objective of formula (3) is solved with the alternating direction method of multipliers, and the image-update objective of formula (4) is solved with a paraboloid surrogate algorithm; the iteration stops when the images before and after an update satisfy RMSE(x^{t+1} - x^t) ≤ 30, and the final reconstruction result is output, where RMSE(·) is the root-mean-square-error operator.
CN202110331020.7A 2021-03-29 2021-03-29 Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding Active CN113034641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110331020.7A CN113034641B (en) 2021-03-29 2021-03-29 Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110331020.7A CN113034641B (en) 2021-03-29 2021-03-29 Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding

Publications (2)

Publication Number Publication Date
CN113034641A CN113034641A (en) 2021-06-25
CN113034641B true CN113034641B (en) 2022-11-08

Family

ID=76473376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110331020.7A Active CN113034641B (en) 2021-03-29 2021-03-29 Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding

Country Status (1)

Country Link
CN (1) CN113034641B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379868A (en) * 2021-07-08 2021-09-10 安徽工程大学 Low-dose CT image noise artifact decomposition method based on convolution sparse coding network
CN113436118B (en) * 2021-08-10 2022-09-27 安徽工程大学 Low-dose CT image restoration method based on multi-scale convolutional coding network
CN114723842B (en) * 2022-05-24 2022-08-23 之江实验室 Sparse visual angle CT imaging method and device based on depth fusion neural network
CN115115551B (en) * 2022-07-26 2024-03-29 北京计算机技术及应用研究所 Parallax map restoration method based on convolution dictionary

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507962A (en) * 2020-12-22 2021-03-16 哈尔滨工业大学 Hyperspectral image multi-scale feature extraction method based on convolution sparse decomposition

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780342A (en) * 2016-12-28 2017-05-31 深圳市华星光电技术有限公司 Single-frame image super-resolution reconstruction method and device based on the reconstruct of sparse domain
CN107871332A (en) * 2017-11-09 2018-04-03 南京邮电大学 A kind of CT based on residual error study is sparse to rebuild artifact correction method and system
CN108898642B (en) * 2018-06-01 2022-11-11 安徽工程大学 Sparse angle CT imaging method based on convolutional neural network

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507962A (en) * 2020-12-22 2021-03-16 哈尔滨工业大学 Hyperspectral image multi-scale feature extraction method based on convolution sparse decomposition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CT image reconstruction algorithm for incomplete projections based on a dictionary learning method; Zhao Ke et al.; Mathematics in Practice and Theory; 2014-01-23 (No. 02); pp. 143-149 *

Also Published As

Publication number Publication date
CN113034641A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN113034641B (en) Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding
Sagheer et al. A review on medical image denoising algorithms
CN108961237B (en) Low-dose CT image decomposition method based on convolutional neural network
Liu et al. 3D feature constrained reconstruction for low-dose CT imaging
Chen et al. Artifact suppressed dictionary learning for low-dose CT image processing
US9558570B2 (en) Iterative reconstruction for X-ray computed tomography using prior-image induced nonlocal regularization
US11562469B2 (en) System and method for image processing
CN104077791B (en) A kind of several dynamic contrast enhancement magnetic resonance images joint method for reconstructing
CA3067078C (en) System and method for image processing
Zhang et al. Statistical image reconstruction for low-dose CT using nonlocal means-based regularization. Part II: An adaptive approach
Bai et al. Z-index parameterization for volumetric CT image reconstruction via 3-D dictionary learning
CN109598680B (en) Shear wave transformation medical CT image denoising method based on rapid non-local mean value and TV-L1 model
Chen et al. Breast volume denoising and noise characterization by 3D wavelet transform
CN115984394A (en) Low-dose CT reconstruction method combining prior image and convolution sparse network
Zhang et al. Adaptive non‐local means on local principle neighborhood for noise/artifacts reduction in low‐dose CT images
Chen et al. Low-dose CT image denoising model based on sparse representation by stationarily classified sub-dictionaries
Nagare et al. A bias-reducing loss function for CT image denoising
Li et al. Unpaired low‐dose computed tomography image denoising using a progressive cyclical convolutional neural network
Du et al. X-ray CT image denoising with MINF: A modularized iterative network framework for data from multiple dose levels
CN115731158A (en) Low-dose CT reconstruction method based on residual error domain iterative optimization network
CN116167929A (en) Low-dose CT image denoising network based on residual error multi-scale feature extraction
Bao et al. Denoising human cardiac diffusion tensor magnetic resonance images using sparse representation combined with segmentation
Xiong et al. Re-UNet: a novel multi-scale reverse U-shape network architecture for low-dose CT image reconstruction
CN113379868A (en) Low-dose CT image noise artifact decomposition method based on convolution sparse coding network
Chen et al. Dual-domain modulation for high-performance multi-geometry low-dose CT image reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant