CN116485925A - CT image ring artifact suppression method, device, equipment and storage medium - Google Patents

CT image ring artifact suppression method, device, equipment and storage medium

Info

Publication number
CN116485925A
Authority
CN
China
Prior art keywords
image, fused, ras, network model, images
Prior art date
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number
CN202310297455.3A
Other languages
Chinese (zh)
Inventor
杨民
谭大龙
海潮
吴雅朋
刘海鹏
Current Assignee: Beihang University (the listed assignee may be inaccurate)
Original Assignee
Beihang University
Priority date (the priority date is an assumption and is not a legal conclusion)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202310297455.3A
Publication of CN116485925A
Legal status: Pending


Classifications

    • G06T 11/003 — Reconstruction from projections, e.g. tomography
    • G06T 11/008 — Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06N 3/0464 — Convolutional networks [CNN, ConvNet]
    • G06T 5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 2207/10081 — Computed x-ray tomography [CT]
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/20221 — Image fusion; Image merging

Abstract

The invention provides a method, device, equipment and storage medium for suppressing ring artifacts in CT images. An initial CT image is acquired and a data set is produced from it; an RAS_UNet network model is constructed and a loss function is designed for it. The data set of the initial CT image is fed into the RAS_UNet network model, which suppresses artifacts in the initial CT image and outputs a first CT image and a second CT image after artifact suppression processing, thereby removing ring artifacts. Finally, the effective information in the first and second CT images is fused and a fused final CT image is output, whose details and structures are effectively recovered and whose signal-to-noise ratio is significantly improved.

Description

CT image ring artifact suppression method, device, equipment and storage medium
Technical Field
The present invention relates to the field of computer imaging technologies, and in particular, to a method, an apparatus, a device, and a storage medium for suppressing CT image ring artifacts.
Background
As a common nondestructive testing technology in the medical and industrial fields, X-ray computed tomography can reconstruct the internal information of an object at high resolution, making it much easier for relevant personnel to interpret the internal structure, material, density and defect information of a target object. However, due to hardware manufacturing problems in practical imaging systems, concentric rings of varying brightness centered on a fixed point, i.e., ring artifacts, often appear in the reconstructed tomographic image. Ring artifacts have no single cause: they can arise from damaged detector pixels, inconsistent response across detector channels, faults in the data acquisition system, insufficient X-ray photons, and so on. Effectively suppressing ring artifacts and improving the quality of X-ray tomographic images is therefore of important theoretical and practical significance.
In the related art, research on ring artifact suppression falls into two categories: artifact suppression based on the projection domain and artifact suppression based on the image domain. For example, early on, Raven designed a numerical filter based on the Fourier transform in the projection image domain, effectively weakening the influence of ring artifacts on image quality, although the filtered image still showed visible artifact traces. Titarenko corrected the sinogram data using compressed sensing theory by minimizing a Tikhonov functional; the method suppresses artifacts markedly, ring artifacts are essentially invisible in the reconstructed image, and the image quality is greatly improved, but the algorithm is complex and computationally heavy, and some of its parameters must be tuned to the characteristics of the image. Sun et al. preserved image detail while suppressing ring artifacts by applying interpolation-reconstruction-filtering-reconstruction to the sinogram; the proposed algorithm corrects well the severe ring artifacts caused by large-area detector damage, but the interpolation and filtering steps can alter the original image information and reduce the image's signal-to-noise ratio.
In summary, for the problem of suppressing ring artifacts in CT images, conventional algorithms suffer from incomplete artifact removal, reduced image signal-to-noise ratio, and similar shortcomings.
Disclosure of Invention
The invention provides a method, device, equipment and storage medium for suppressing ring artifacts in CT images, which address the defects of the prior art such as incomplete artifact removal and low image signal-to-noise ratio, suppressing ring artifacts to the greatest extent while protecting the structural information of the CT image and significantly improving its signal-to-noise ratio.
The invention provides a CT image ring artifact inhibition method, which comprises the following steps:
acquiring a data set of an initial CT image;
constructing an RAS_UNet network model;
designing a loss function for the RAS_UNet network model;
inputting the data set into the RAS_UNet network model, and outputting a first CT image and a second CT image after artifact suppression processing;
and fusing the effective information in the first CT image and the second CT image, and outputting a fused final CT image.
According to the CT image ring artifact suppression method provided by the invention, the loss functions comprise a loss function in an image domain and a loss function in a projection domain; the first CT image is a CT image after artifact suppression in the image domain, and the second CT image is a CT image after artifact suppression in the projection domain.
According to the CT image ring artifact suppression method provided by the invention, the RAS_UNet network model adopts a UNet network as the model framework; its downsampling part comprises 1 double convolution module and 4 Inception modules for extracting artifact features at different scales, and its upsampling part comprises 4 upsampling modules and 3 residual modules for identifying and processing artifacts, together with 1 convolution kernel.
According to the CT image ring artifact suppression method provided by the invention, designing a loss function for the RAS_UNet network model specifically comprises the following steps:
calculating the gradient of the initial CT image using both the forward difference and the backward difference, and taking the maximum of the two as the gradient of the initial CT image:
where I denotes the initial CT image, (i, j) the coordinates of a pixel in the image, G1 the image gradient calculated from the forward difference, G2 the image gradient calculated from the backward difference, max the maximum value, and Gsin the final image gradient in the sinogram;
first averaging Gsin over rows to obtain Gmean, and then using Gmean as a weight to weight the gradient Gsin in the sinogram domain:
where WGsin(i, j) denotes the sinogram image gradient weighting coefficient;
the gradients are weighted in the slice map domain:
where ε is a positive real number that prevents the denominator from being 0, R(p) denotes a local window centered on pixel p in the initial CT image, q is a pixel within the local window, g(p, q) is a weight coefficient from a Gaussian distribution centered on p, Gimg is the image gradient in the slice, WGimg is the weighted image gradient coefficient, and ∂I/∂q denotes the partial derivative of I along the column direction of the image;
and calculating the loss functions in the image domain and the projection domain by taking the weighted gradient as a regular term and the structural similarity coefficient as a fidelity term:
where Loss_sin is the loss function of the projection domain and Loss_img the loss function of the image domain; SSIM is the fidelity term, representing the structural similarity coefficient; and λ1, λ2 ∈ [0, 1].
According to the method for suppressing the ring artifact of the CT image provided by the invention, effective information in the first CT image and effective information in the second CT image are fused, and a fused final CT image is output, which specifically comprises the following steps:
decomposing the first CT image and the second CT image by using a non-downsampling contourlet transformation algorithm to obtain a plurality of sub-images to be fused of the first CT image and a plurality of sub-images to be fused of the second CT image;
calculating the residual of the sub-images to be fused; when the residual exceeds a set residual threshold, taking the pixel value of the sub-image to be fused with the smaller contrast as the fusion result and outputting the fused sub-image; when the residual does not exceed the set residual threshold, taking the pixel value of the sub-image to be fused with the larger contrast as the fusion result and outputting the fused sub-image;
and combining the plurality of fused sub-images to output a final CT image.
According to the CT image ring artifact suppression method provided by the invention, the sub-images to be fused comprise a low-frequency sub-band image and a high-frequency sub-band image.
According to the CT image ring artifact suppression method provided by the invention, the residual is the absolute value of the difference between the gray values of the two sub-images to be fused, and the residual threshold is the product of a ratio value and the sum of the gray values of the two sub-images to be fused.
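As a concrete illustration of this fusion rule — not the patent's own code — the following NumPy sketch assumes that "contrast" means the standard deviation of a subband and treats the ratio value as an unstated parameter (0.1 here is an arbitrary choice):

```python
import numpy as np

def fuse_subbands(A, B, ratio=0.1):
    """Fuse two subband images A, B per the stated rule (illustrative only):
    residual = |A - B|; threshold = ratio * (|A| + |B|);
    where residual > threshold, keep the lower-contrast subband's pixel,
    otherwise keep the higher-contrast subband's pixel."""
    residual = np.abs(A - B)
    threshold = ratio * (np.abs(A) + np.abs(B))
    # "Contrast" is taken here as the global standard deviation of the subband.
    low, high = (A, B) if A.std() <= B.std() else (B, A)
    return np.where(residual > threshold, low, high)

A = np.array([[1.0, 5.0], [1.0, 1.0]])   # higher-contrast subband
B = np.array([[1.0, 1.2], [1.0, 1.0]])   # lower-contrast subband
F = fuse_subbands(A, B)
assert F[0, 1] == 1.2   # strong disagreement -> lower-contrast value kept
assert F[0, 0] == 1.0   # agreement -> higher-contrast value kept
```

The rule suppresses isolated outliers (where the two domain results disagree strongly, a large difference suggests a residual artifact in the higher-contrast result) while otherwise keeping the sharper of the two subbands.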
The invention also provides a CT image ring artifact suppression device, which comprises:
the data collection module is used for acquiring a data set of the initial CT image;
the model building module is used for building an RAS_UNet network model;
a function design module for designing a loss function for the RAS_UNet network model;
the artifact processing module is used for inputting the data set into the RAS_UNet network model and outputting a first CT image and a second CT image after artifact suppression processing; and
the image fusion module is used for fusing the effective information in the first CT image and the second CT image and outputting a fused final CT image.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the method for suppressing the ring artifact of the CT image when executing the program.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a CT image ring artifact suppression method as described in any of the above.
In summary, the invention provides a method, device, equipment and storage medium for suppressing ring artifacts in CT images. An initial CT image is acquired and a data set is produced from it; an RAS_UNet network model is constructed and a loss function is designed for it. The data set of the initial CT image is fed into the RAS_UNet network model, which suppresses artifacts in the initial CT image and outputs a first CT image and a second CT image after artifact suppression processing, thereby removing ring artifacts. Finally, the effective information in the first and second CT images is fused and a fused final CT image is output, whose details and structures are effectively recovered and whose signal-to-noise ratio is significantly improved.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show some embodiments of the invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a CT image ring artifact suppression method provided by the invention;
FIG. 2 is a general flow diagram of a CT image ring artifact suppression method provided by the present invention;
fig. 3 is a schematic general structural diagram of a ras_unet network model provided by the present invention;
FIG. 4 is a schematic diagram of the structure of an Inception module in an embodiment of the invention;
FIG. 5 is a schematic diagram of the architecture of the Residual module in an embodiment of the invention;
FIG. 6 is a sinogram with streak artifacts in an embodiment of the present invention;
FIG. 7 is a gradient image in an embodiment of the invention;
FIG. 8 is a weighted gradient image in an embodiment of the invention;
FIG. 9 is a schematic representation of the morphology of ring artifacts in an embodiment of the present invention;
FIG. 10 is a graph of ringing and edge curves in accordance with one embodiment of the present invention;
FIG. 11 is a graph illustrating the weighting effect of ring artifacts in an embodiment of the present invention;
FIG. 12 is a multi-scale exploded view of NSCT in an embodiment of the invention;
FIG. 13 is a schematic diagram of a non-downsampling pyramid principle in an embodiment of the invention;
FIG. 14 is a schematic diagram of NSDFB first-order decomposition reconstruction in an embodiment of the present invention;
fig. 15 is a schematic diagram of NSDFB four-channel direction splitting and band splitting in an embodiment of the present invention;
FIG. 16 is a flow chart of an image fusion algorithm in an embodiment of the invention;
FIG. 17 is a schematic view of an image fusion effect in an embodiment of the present invention;
FIG. 18 is a three-dimensional effect diagram of a test piece in an embodiment of the present invention;
fig. 19 is a schematic diagram showing the suppression effect of the RAS_UNet model on streak artifacts in the sinogram in the embodiment of the present invention;
fig. 20 is a schematic view showing the effect of the ras_unet model on suppressing ring artifacts in tomographic images in the embodiment of the present invention;
FIG. 21 is a graph showing the effect of NSCT image fusion in an embodiment of the present invention;
FIG. 22 is a graph comparing ring artifact suppression effects of different algorithms in an embodiment of the present invention;
FIG. 23 is a schematic diagram of a CT image ring artifact suppression device according to the present invention;
fig. 24 is a schematic structural diagram of an electronic device provided by the present invention.
Reference numerals:
231: a data collection module; 232: a model building module; 233: a function design module; 234: an artifact processing module; 235: and an image fusion module.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 2 is a general flow chart of the CT image ring artifact suppression method according to an embodiment of the present invention, where Radon denotes the Radon transform and IRadon the inverse Radon transform. Limited data has always been an important factor restricting the application of deep learning in the CT field. In order to train a ring artifact suppression network model with excellent performance under limited data, the invention selects the UNet model as the network basis and combines it with an Inception network structure to extract deeper image features. Meanwhile, to prevent model training from falling into a locally optimal solution as the gradient return weakens with increasing network depth, the invention introduces residual connections into the network. Targeting the geometric characteristics of ring artifact images, the invention designs dedicated loss functions for the projection domain and the image domain respectively, so that the network assigns more weight to artifact features. Finally, dedicated image fusion rules for ring artifact images are designed based on the non-subsampled contourlet transform (Nonsubsampled Contourlet Transform, NSCT) method, and the output images of the projection-domain and image-domain networks are fused.
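To see why the Radon-domain view matters, the following sketch (an illustration with arbitrary phantom and gain values, not part of the patent) shows how a detector channel whose gain deviates multiplies one sinogram column by a constant, producing exactly the full-height stripe that reconstructs into a ring:

```python
import numpy as np

def disk_sinogram(n_angles=180, n_dets=129, radius=0.6):
    """Analytic sinogram of a centered disk: each parallel-beam projection is
    2*sqrt(r^2 - s^2), independent of the view angle."""
    s = np.linspace(-1.0, 1.0, n_dets)            # detector coordinate
    row = 2.0 * np.sqrt(np.clip(radius**2 - s**2, 0.0, None))
    return np.tile(row, (n_angles, 1))            # rows = angles, cols = channels

sino = disk_sinogram()
bad = sino.copy()
bad[:, 64] *= 1.2          # channel 64 over-responds by 20% in every view

# The defect is a single-column stripe spanning all projection angles --
# the projection-domain signature of a ring artifact.
stripe = np.abs(bad - sino)
assert np.count_nonzero(stripe[:, 64]) == sino.shape[0]
assert np.count_nonzero(stripe[:, :64]) == 0
```

Because the faulty channel is hit in every view, the error back-projects onto a circle of fixed radius, which is the ring seen in the image domain.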
Specifically, an embodiment of the present invention provides a method for suppressing ring artifacts in a CT image, as shown in fig. 1, where the method includes:
s1, acquiring a data set of an initial CT image.
In this step, as an alternative embodiment of the present invention, CT scan experiments may be performed on a self-developed 3D-μCT system to obtain a data set of real CT images. The hardware of the system mainly comprises an X-ray source, a flat-panel detector and a stage. The X-ray source is an XWT-series source produced by X-RAY WorX GmbH, with the main parameters shown in table 1; the detector is a Perkin Elmer XRD0822, with the main technical parameters shown in table 2.
Maximum tube voltage: 225 kV
Maximum tube current: 3.0 mA
Maximum power: 350 W
Minimum focal spot size: 3 μm
TABLE 1
Conversion screen: CsI (cesium iodide)
Pixel area (total): 204.8 × 204.8 mm²
Pixel matrix (total): 1024 × 1024
Pixel size: 0.2 mm
Communication interface: RJ45 Gigabit Ethernet
A/D conversion depth: 16 bit
TABLE 2
S2, inputting the data set of the initial CT image into the RAS_UNet network model, and outputting the first CT image and the second CT image after artifact suppression processing.
The invention constructs a RAS_UNet network model, as shown in fig. 3, which comprises the following structures:
the RAS_UNet network model adopts a UNet network as a model framework, a downsampling part of the RAS_UNet network model uses an acceptance module to replace convolution, and an upsampling part introduces a Residual mechanism to finally form the RAS_UNet for inhibiting the ring artifact of the X-ray tomographic image. The downsampling part of the ras_unet network model comprises 1 double convolution module (Double convolution module) and 4 acceptance modules for extracting artifact features of different scales, and the upsampling part comprises 4 upsampling modules (Upsampling module) and 3 Residual modules (Residual modules) for identifying and processing artifacts, and further comprises 1 convolution kernel (Convolution kernel). The RAS_UNet network model combines two parts of image information by using a Skip connection mode, and the original image information is protected from being damaged to the greatest extent, wherein the two parts of image information refer to information of a CT image in a downsampling process and information of the CT image in an upsampling process when a data set of the CT image is input into the RAS_UNet network model for training. The two pieces of image information are information of the same CT image in different processing stages, and part of low-frequency information is lost when the CT image is downsampled, so that the part of lost low-frequency information is added back by using 'jump connection' when the CT image is upsampled.
UNet is a symmetric, end-to-end convolutional neural network model originally proposed for medical image segmentation tasks; since 2015 it has achieved a great many results in medical image segmentation and related fields and has gradually become a reference model for medical image segmentation. Because the available image data are limited, the invention builds on the UNet network structure and exploits the skip-connection mechanism between symmetric network layers, which is expected to accelerate the convergence of the training curve while preserving the image signal-to-noise ratio. Ring artifacts in X-ray CT images have a simple structure: they generally appear as concentric rings in the image domain and as straight lines of single-pixel width in the projection domain. UNet is a lightweight model with a relatively shallow network; experiments show that, using the UNet structure directly, the network cannot accurately identify the artifact features and introduces additional random artifacts while suppressing the ring artifacts, causing image distortion. To extract deeper artifact features, the invention introduces the Inception module shown in fig. 4 in place of the convolution module in downsampling. By combining convolution kernels of several sizes, the Inception module has a sparse structure that can produce denser data, improving network performance while ensuring efficient use of resources.
Introducing the Inception module deepens the network, giving it a stronger ability to extract deep feature structures, but it also raises the risk of vanishing gradients, making the network prone to falling into local optima during training. To alleviate this problem, the invention introduces the residual module shown in fig. 5 in the deep layers of the network, widening the network while improving gradient backpropagation.
When the RAS_UNet network model is trained, the loss function can reflect the accuracy degree of the model prediction result, quantitatively describe the difference between the prediction value and the theoretical value, and the proper loss function can accelerate the training of the network model and improve the performance of the model. According to the characteristics of the ring artifact in different domains, the invention designs a highly-targeted loss function for the RAS_UNet network model, namely a loss function in an image domain and a loss function in a projection domain.
Specifically, as shown in fig. 6, in the projection domain, the ring artifact appears as a stripe artifact with different brightness, and the width of the line is one pixel. The sinogram with artifacts differs from the ideal sinogram only in the image structure at the straight line, and is identical elsewhere in the image, so the invention expects the network to focus as much attention as possible at the straight line in training the network model.
To highlight the streak artifact, the present invention transfers the image to the gradient domain. Since the width of the streak artifact is one pixel, in order to ensure that the structure of the streak artifact in the gradient domain is unchanged (the width is still a single pixel), as shown in formula (1), the gradient of the initial CT image is calculated by using the forward difference and the backward difference at the same time, and as shown in fig. 7, the maximum value of the forward difference and the backward difference is finally taken as the gradient of the image.
In equation (1), I denotes the initial CT image; G1 denotes the image gradient calculated from the forward difference; G2 denotes the image gradient calculated from the backward difference; max denotes the maximum value; and Gsin is the final image gradient in the sinogram.
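Since equation (1) itself is not reproduced in the text, the following NumPy sketch shows one plausible reading of it, assuming the difference is taken along the detector-channel (column) direction and that the maximum is taken over the magnitudes of the forward and backward differences at each pixel:

```python
import numpy as np

def sinogram_gradient(I):
    """Plausible reading of equation (1): G1 = forward difference,
    G2 = backward difference, Gsin = elementwise max of their magnitudes."""
    G1 = np.zeros_like(I, dtype=float)
    G2 = np.zeros_like(I, dtype=float)
    G1[:, :-1] = I[:, 1:] - I[:, :-1]   # forward difference
    G2[:, 1:]  = I[:, 1:] - I[:, :-1]   # backward difference
    return np.maximum(np.abs(G1), np.abs(G2))

# A one-pixel-wide stripe keeps its full amplitude in the gradient image.
I = np.zeros((4, 7))
I[:, 3] = 5.0
G = sinogram_gradient(I)
assert G[0, 3] == 5.0   # the stripe pixel itself carries the full gradient
```

Taking the max of the two one-sided differences ensures the stripe pixel itself, not only its neighbors, receives a large gradient value, which matches the text's stated goal of preserving the single-pixel structure.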
Equation (1) raises the contrast of the streak artifacts and highlights the structure and details of the image. However, if the loss error were computed directly in the gradient domain, the network would blur the image's structure and details while removing the streak artifacts and would introduce new artifacts. To protect the original image information, the invention therefore weights the image gradient by equation (2) on the basis of equation (1).
where WGsin(i, j) denotes the sinogram gradient weighting coefficient.
As can be seen from fig. 7, the gradient of the streak artifacts is regular compared with the structure and detail of the image: the straight-line regions where streak artifacts lie all have relatively large gradient values. Exploiting this feature, equation (2) first averages Gsin over rows to obtain Gmean, and then uses Gmean as a weight to weight Gsin in the sinogram domain, highlighting the streak artifacts while suppressing the structure and detail of the image. The weighted result is shown in fig. 8.
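The row-averaged weighting can be sketched in NumPy. Because the exact normalization of equation (2) is not shown, the form below — WGsin(i, j) = Gmean(j) · Gsin(i, j), with the average taken over the projection-angle axis — is an assumption:

```python
import numpy as np

def weight_sinogram_gradient(G_sin):
    """A stripe occupies the same detector column in every view, so the
    per-column mean gradient G_mean is large only at stripe columns;
    multiplying by it amplifies stripes and suppresses isolated detail."""
    G_mean = G_sin.mean(axis=0, keepdims=True)   # average over angles (rows)
    return G_mean * G_sin                        # WG_sin

G = np.zeros((100, 5))
G[:, 2] = 1.0        # stripe artifact: present in all 100 views
G[50, 4] = 1.0       # isolated image detail: present in one view only
WG = weight_sinogram_gradient(G)
assert WG[50, 2] == 1.0    # stripe: weight 1.0, amplitude kept
assert WG[50, 4] == 0.01   # detail: weight 1/100, strongly attenuated
```

The contrast between the two columns (1.0 vs 0.01) illustrates how the weighting makes the loss focus on stripe positions.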
The features of the ring artifact in the image domain are more complex than in the projection domain, as shown in fig. 9, which appears as a ring of multiple pixels in width.
Analyzing the gray-scale profile of the image, as shown in fig. 10, the gray-scale curve of a ring artifact appears as a ridge, while the gray-scale curve of details and edge contours in the image appears as a step. The difference also exists in the gradient domain, where the pixel values of a ring artifact alternate between positive and negative. In fig. 10, (a) is a roof edge and (b) is a step edge.
In order to highlight the features of the ringing, the gradients are weighted in the slice map domain by the idea of the relative variation according to equation (3):
in the formula (3), epsilon takes a small positive real number so as not to make denominator 0; r (p) represents a local area window centered on a pixel p in the image; q is the pixel point in the local window; g p,q G is a weight coefficient of a Gaussian distribution centered on p img WG for image gradient in slice img Is the weighting coefficient of the gradient of the image,representing partial differentiation of image I along the q-direction (q representing the column direction in the image).
When calculating the weighted gradient, the computation of ∂I/∂q deserves attention: to exploit the alternation of positive and negative gray values, the straight line along the differencing direction must pass through the center of the ring artifact when the image gradient is calculated.
In an alternative embodiment of the invention, to verify the effect of the designed weighted gradient, a simple program was written using Matlab R2018b software and tested using real X-ray tomograms. The test results are shown in fig. 11, wherein a is an image containing ring artifacts, and b is a gradient image; c is the weighted gradient image. It can be seen that the characteristics of the ring artifact are more pronounced after the gradient weighting process.
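The Matlab check above can be mirrored in NumPy. Since equation (3) is not reproduced in the text, the sketch below assumes a relative-variation score in the spirit described: the Gaussian-weighted signed gradients of a ring artifact (alternating signs) largely cancel inside the window R(p), while those of a step edge accumulate. A low score therefore flags ring-like pixels; a final weight could, for example, emphasize them. All names and parameters here are illustrative:

```python
import numpy as np

def relative_variation_score(dI, radius=2, sigma=1.0, eps=1e-6):
    """dI: signed partial derivative along the q (column) direction, 1-D here.
    Returns |windowed sum of signed gradients| / (windowed sum of magnitudes),
    which is near 0 for +/- alternation (ring) and near 1 for a step edge."""
    offsets = np.arange(-radius, radius + 1)
    g = np.exp(-offsets**2 / (2 * sigma**2))        # Gaussian weights g(p, q)
    num = np.abs(np.convolve(dI, g, mode="same"))   # signed gradients cancel
    den = np.convolve(np.abs(dI), g, mode="same")   # magnitudes accumulate
    return num / (den + eps)

ridge = np.array([0.0, 1.0, -1.0, 0.0, 0.0])   # ring artifact: +/- alternation
step  = np.array([0.0, 1.0,  1.0, 0.0, 0.0])   # step edge: same-sign gradients
r = relative_variation_score(ridge)
s = relative_variation_score(step)
assert r[1] < s[1]   # ring pixels score lower than edge pixels
```

This separates the ridge-shaped ring profile from genuine step edges, consistent with the gray-curve analysis of fig. 10.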
In order to remove the ring artifact and protect the image information from damage as much as possible, the weighted gradient designed by the invention is used as a regularization term, meanwhile, the structural similarity coefficient is used as a fidelity term, and the final loss function is shown in a formula (4):
in equation (4), loss sin And Loss of img Loss functions of a projection domain and an image domain respectively; SSIM is a fidelity term, representing a structural similarity coefficient; λ1, λ2∈[0,1]For adjusting weights between regular and fidelity terms.
In step S2, the data set of the initial CT image is input into the ras_unet network model, which is capable of performing artifact suppression processing on the initial CT image in the image domain and the projection domain, respectively, and outputting a first CT image after artifact suppression in the image domain and a second CT image after artifact suppression in the projection domain.
S3, fusing the effective information in the first CT image and the second CT image, and outputting a fused final CT image.
In order to suppress the ring artifact to the greatest extent while protecting the image information as much as possible, the effective information of the artifact suppression results in the image domain and the projection domain must be fused into one image. Detail quality is the primary goal pursued in an image, and multi-scale decomposition provides a way to process images in different scale spaces. The invention decomposes the dual-domain images (i.e., the first CT image and the second CT image) into different image scales and fuses the information at each scale according to the designed fusion rules, thereby enriching image detail and improving image contrast. The invention adopts an image fusion method based on non-subsampled contourlet transform (Nonsubsampled Contourlet Transform, NSCT) decomposition.
Specifically, step S3 includes the steps of:
S31, decomposing the first CT image and the second CT image with the nonsubsampled contourlet transform to obtain a plurality of sub-images to be fused for each of the two images, where the sub-images to be fused comprise a low-frequency subband image and high-frequency subband images.
Specifically, as shown in fig. 12, the nonsubsampled pyramid (Nonsubsampled Pyramid, NSP) performs the multi-resolution decomposition of the image, and the nonsubsampled directional filter bank (Nonsubsampled Directional Filter Bank, NSDFB) performs the multi-directional decomposition. Each NSP decomposition stage produces one high-frequency subband and one low-frequency subband. The NSDFB then decomposes the high-frequency subband of each NSP stage along multiple directions, merging singular points distributed along the same direction into a single coefficient, thereby obtaining subband images at different scales and directions.
FIG. 13 (a) is a schematic diagram of the NSP first-order decomposition and reconstruction, where H0(z) and H1(z) are the low-pass and high-pass decomposition filters, and G0(z) and G1(z) are the low-pass and high-pass reconstruction filters, respectively. The iterated decomposition structure of the NSP is shown in fig. 13 (b); the filter bank applies no downsampling to the image. Instead, the nonsubsampled filter bank (Nonsubsampled Filter Bank, NSFB) of the next stage is obtained by upsampling the filters of the previous stage by a factor of 2 in each dimension.
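No common Python library ships an NSCT, but the nonsubsampled-pyramid idea — low-pass filtering with an "à trous" (filter-upsampled) kernel instead of downsampling the image, plus a high-pass residual, with exact reconstruction by addition — can be sketched as follows. The B3-spline-like kernel and the additive reconstruction are our illustrative choices, not the patent's filters.

```python
import numpy as np
from scipy.ndimage import convolve

def nsp_decompose(img, levels=2):
    """Simple nonsubsampled pyramid: each level splits the current low-pass
    band into (low, high) without any downsampling, so every subband keeps
    the input's size. Filters of deeper levels are dilated ("à trous")
    rather than the image being decimated."""
    kernel1d = np.array([1., 4., 6., 4., 1.]) / 16.0   # B3-spline-like low-pass
    low = img.astype(float)
    highs = []
    for j in range(levels):
        # upsample the kernel by 2**j instead of downsampling the image
        k = np.zeros(1 + (len(kernel1d) - 1) * 2**j)
        k[:: 2**j] = kernel1d
        smoothed = convolve(low, np.outer(k, k), mode='nearest')
        highs.append(low - smoothed)    # high-frequency subband at scale j
        low = smoothed
    return low, highs

def nsp_reconstruct(low, highs):
    """Perfect reconstruction: the final low band plus all high bands."""
    return low + sum(highs)
```

All subbands have the same size as the input, which is what makes the transform shift-invariant and the per-pixel fusion rule of step S32 straightforward.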
Fig. 14 illustrates the NSDFB first-order decomposition and reconstruction. U0(z) and U1(z) are the decomposition filters, and V0(z) and V1(z) are the synthesis filters. The basic module of the NSDFB is a two-channel nonsubsampled filter bank; through the fan filter bank and the quadrant filter bank in its structure, the image can be divided into four directional subbands. This process avoids resampling the image and is therefore shift-invariant. Fig. 15 shows the general procedure of NSDFB four-channel directional decomposition and band splitting; the outputs y_k correspond to the directions k (k = 0, 1, 2, 3) in fig. 15. If the subband image at a given scale undergoes k levels of directional decomposition, 2^k directional subband images of the same size as the original input are obtained. A J-level NSCT decomposition therefore yields 1 low-frequency subband image and Σ_{j=1}^{J} 2^{k_j} band-pass directional subband images, where k_j is the number of directional decomposition levels at scale j.
S32, calculating residual errors of the sub-images to be fused, wherein the pixel values of the sub-images to be fused with smaller contrast are used as fusion results under the condition that the residual errors exceed a set residual error threshold value, and outputting the fused sub-images; and under the condition that the residual error does not exceed the set residual error threshold value, taking the pixel value of the sub-image to be fused with larger contrast as a fusion result, and outputting the fused sub-image.
The ring artifacts in the first CT image and the second CT image processed by the RAS_UNet model are suppressed, but the details and structure of the images are inevitably smoothed in the process, reducing image contrast. To improve the contrast and sharpen the contour information, the fusion of the effective information in the first CT image and the second CT image combines the information of both domain images, using the local residual as the fusion rule.
The algorithm in this step is implemented as shown in fig. 16, where G1 and G2 denote the local contrast of the two images, (i, j) the coordinates of a pixel, Th the gray threshold (i.e., the residual threshold described above), and Res the gray difference of the images (i.e., the residual described above). Ratio is a proportionality value chosen empirically according to the degree of gray-level fluctuation of the image region; the recommended range is 0.04–0.125.
From the above analysis, the edges and details in the image have a wider profile than the ring artifacts. The purpose of image fusion is to improve the contrast of the image while suppressing the ring artifact, so that during fusion, we will compare the local information of the two corresponding sub-images to be fused, i.e. the gray values of the two corresponding sub-images to be fused. If the difference in local information (i.e., the image gray level difference) exceeds a set threshold (i.e., the gray level threshold), then artifacts are considered to be present here. When the images are fused, directly taking the pixel value of the sub-image to be fused with smaller contrast as a fusion result, and outputting the fused sub-image so as to inhibit the artifact; if the difference value of the local information (namely, the gray level difference of the image) does not exceed the set threshold value (namely, the gray level threshold value), taking the pixel value of the sub-image to be fused with larger contrast as a fusion result, and outputting the fused sub-image, thereby improving the contrast of the image.
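Under two assumptions of ours — that "local contrast" can be approximated by a windowed standard deviation, and that the threshold follows claim 7's form (the ratio value times the sum of gray values) — the fusion rule described above could be sketched as:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img, size=3):
    """Local contrast proxy: windowed standard deviation (our assumption)."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img * img, size)
    return np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))

def fuse_subbands(a, b, ratio=0.08):
    """Residual-thresholded fusion of two corresponding subband images.
    Where |a - b| exceeds th = ratio * (a + b), an artifact is assumed, so
    the pixel from the lower-contrast image is kept (suppression);
    otherwise the higher-contrast pixel is kept (contrast enhancement)."""
    res = np.abs(a - b)                       # residual (gray difference)
    th = ratio * (a + b)                      # residual threshold
    ga, gb = local_contrast(a), local_contrast(b)
    take_a_as_smaller = ga <= gb              # mask: a has the smaller contrast
    smaller = np.where(take_a_as_smaller, a, b)
    larger = np.where(take_a_as_smaller, b, a)
    return np.where(res > th, smaller, larger)
```

With a single bright artifact pixel in one input, the rule falls back to the other input at that pixel, which is exactly the suppression behavior described above.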
S33, combining the plurality of fused sub-images and outputting a final CT image.
For example, decomposing the first CT image and the second CT image each yields a high-frequency sub-image and a low-frequency sub-image. The two high-frequency sub-images are fused, and the two low-frequency sub-images are fused, giving two fused sub-images; these are finally combined into the final CT image.
To verify the validity of the designed fusion rule, a tomographic image with ring artifacts and a tomographic image without ring artifacts were taken as inputs for an image fusion experiment. The result is shown in fig. 17, where (a) is the original image containing ring artifacts, (b) is the standard artifact-free image, and (c) is the fusion of (a) and (b). The similarity between the fusion result and the standard image was objectively evaluated with the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and mean squared error (MSE) indexes; the PSNR, SSIM, and MSE between the fused image and the standard image reach 33.48, 0.963, and 8.9×10⁻⁴, respectively.
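For reference, the PSNR and MSE used in this evaluation can be computed as below. This is a sketch assuming images scaled to [0, 1]; the patent does not specify its normalization, and the function names are ours.

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((x - y) ** 2))

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, data_range]."""
    m = mse(x, y)
    return float('inf') if m == 0 else 10.0 * np.log10(data_range ** 2 / m)
```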
To train and test the performance of the RAS_UNet network model, three groups of tomographic images were acquired with the 3D-μCT system. The visual structures of the three test pieces are shown in figure 18, where (a) is a rock-soil sample, (b) is a frozen-soil sample, and (c) is a shut-off valve sample.
To train a network model for ring artifact suppression, random ring artifacts were added, according to the CT reconstruction principle, to CT tomograms passed through the forward and inverse Radon transforms (implemented in Matlab 2018). Three groups of training and test sets were generated in this way, with the original data serving as the standard images for network training. The main parameters of the dataset are shown in table 3.
TABLE 3
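The patent generates its training data in Matlab; as a sketch of the same idea in Python, random ring artifacts can be simulated by perturbing the gains of a few detector bins in a sinogram, since a constant error in one detector bin is a vertical stripe in the sinogram and becomes a ring in the reconstructed tomogram. The Radon forward/inverse transforms themselves are omitted here, and the array layout and parameter names are our assumptions.

```python
import numpy as np

def add_ring_artifacts(sinogram, n_bad=5, strength=0.05, seed=0):
    """Simulate ring artifacts by perturbing the gain of a few random
    detector bins. Axis convention assumed: rows = projection angles,
    columns = detector bins, so a bad bin is a vertical stripe in the
    sinogram and a ring in the tomogram."""
    rng = np.random.default_rng(seed)
    sino = sinogram.astype(float).copy()
    bins = rng.choice(sino.shape[1], size=n_bad, replace=False)
    gains = 1.0 + strength * rng.standard_normal(n_bad)
    sino[:, bins] *= gains            # constant multiplicative error per bin
    return sino, bins
```

Reconstructing the perturbed sinogram then yields the artifact-bearing tomogram, while reconstruction of the clean sinogram yields the standard image.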
As shown in Table 3, the ratio of training set to test set was 4:1 when preparing the data set. In addition, random geometric transformations were applied to the images when loading the data set, to improve the performance of network training. The best training parameters for the network, determined through numerous experiments, are shown in table 4.
TABLE 4
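The "random geometric transformation" augmentation mentioned above is not specified further in this text; one plausible, structure-preserving reading (random flips plus 90° rotations) could look like the following sketch, with names and choices that are ours.

```python
import numpy as np

def random_geometric_transform(img, rng):
    """Random flips and 90-degree rotations: exact (interpolation-free)
    transforms that preserve the artifact/ground-truth correspondence."""
    if rng.random() < 0.5:
        img = np.fliplr(img)          # horizontal flip
    if rng.random() < 0.5:
        img = np.flipud(img)          # vertical flip
    return np.rot90(img, k=rng.integers(4))   # 0/90/180/270 degrees
```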
According to the parameters shown in table 4, network models were trained for the sinogram and the tomographic image separately. Fig. 19 shows the suppression effect of the RAS_UNet model on the stripe artifacts in the sinogram, and fig. 20 shows the suppression effect of the RAS_UNet model on the ring artifacts in the tomographic image.
As can be seen from fig. 19 and 20, after processing by the RAS_UNet network model, the artifacts are effectively removed in both the projection domain and the image domain. Finally, the dual-domain images are fused with the NSCT-based fusion algorithm; the result is shown in figure 21. In addition, the invention compares the peak signal-to-noise ratio (Peak Signal to Noise Ratio, PSNR), structural similarity index (Structural Similarity Index Metric, SSIM), and mean squared error (Mean Squared Error, MSE) of the images before and after fusion; after fusion, PSNR and SSIM improve and MSE decreases markedly, showing that dual-domain image fusion effectively protects the details and structure of the images.
TABLE 5
In addition, in an alternative embodiment of the present invention, to highlight the superiority of the proposed CT image ring artifact suppression method, test piece 1 (rock-soil) was used as the test object and three methods were used for comparative verification: two algorithms that suppress ring artifacts from the projection domain and the image domain respectively (denoted Algo1 and Algo2), and a deep-learning method with UNet as the network structure. FIG. 22 illustrates the artifact suppression effect of the four methods (the three above plus the proposed method), where (a) is the input original image; (b) is the processing result of algorithm Algo1; (c) is the processing result of algorithm Algo2; (d) is the processing result of the UNet network model; (e) is the processing result of the proposed CT image ring artifact suppression method; and (f) is the standard ring-artifact-free image.
As can be seen from the above processing results, the CT image ring artifact suppression method provided by the invention overcomes the defects of incomplete artifact removal and low image signal-to-noise ratio in the related art, suppressing the ring artifact to the greatest extent while protecting the structural information of the CT image and markedly improving its signal-to-noise ratio.
The CT image ring artifact suppression device provided by the present invention is described below, and the CT image ring artifact suppression device described below and the CT image ring artifact suppression method described above may be referred to correspondingly.
As shown in fig. 23, the CT image ring artifact suppression device provided by the present invention includes:
a data collection module 231 for acquiring a dataset of an initial CT image;
a model building module 232 for building a ras_unet network model;
a function design module 233 for designing a loss function for the ras_unet network model;
the artifact processing module 234 is configured to input the data set into the ras_unet network model, and output the first CT image and the second CT image after artifact suppression processing.
And the image fusion module 235 is configured to fuse the effective information in the first CT image and the second CT image, and output a fused final CT image.
The CT image ring artifact suppression device can be used to execute the CT image ring artifact suppression method described above, thereby overcoming the defects of incomplete artifact removal and low image signal-to-noise ratio in the related art: the ring artifact is suppressed to the greatest extent while the structural information of the CT image is protected and its signal-to-noise ratio is markedly improved.
Fig. 24 illustrates a physical structure diagram of an electronic device, as shown in fig. 24, which may include: processor 2410, communication interface (Communications Interface) 2420, memory (memory) 2430 and communication bus 2440, wherein processor 2410, communication interface 2420 and memory 2430 communicate with each other via communication bus 2440. Processor 2410 may invoke logic instructions in memory 2430 to perform a CT image ring artifact suppression method comprising:
acquiring a data set of an initial CT image;
constructing an RAS_UNet network model;
designing a loss function for the RAS_UNet network model;
inputting the data set into the RAS_UNet network model, and outputting a first CT image and a second CT image after artifact suppression processing;
and fusing the effective information in the first CT image and the second CT image, and outputting a fused final CT image.
Further, the logic instructions in memory 2430 can be implemented in the form of software functional units and stored on a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product, where the computer program product includes a computer program, where the computer program can be stored on a non-transitory computer readable storage medium, and when the computer program is executed by a processor, the computer can execute a CT image ring artifact suppression method provided by the above methods, and the method includes:
acquiring a data set of an initial CT image;
constructing an RAS_UNet network model;
designing a loss function for the RAS_UNet network model;
inputting the data set into the RAS_UNet network model, and outputting a first CT image and a second CT image after artifact suppression processing;
and fusing the effective information in the first CT image and the second CT image, and outputting a fused final CT image.
In yet another aspect, the present invention further provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform a method for suppressing CT image ringing provided by the above methods, the method comprising:
acquiring a data set of an initial CT image;
constructing an RAS_UNet network model;
designing a loss function for the RAS_UNet network model;
inputting the data set into the RAS_UNet network model, and outputting a first CT image and a second CT image after artifact suppression processing;
and fusing the effective information in the first CT image and the second CT image, and outputting a fused final CT image.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for suppressing ring artifacts in CT images, comprising:
acquiring a data set of an initial CT image;
constructing an RAS_UNet network model;
designing a loss function for the RAS_UNet network model;
inputting the data set into the RAS_UNet network model, and outputting a first CT image and a second CT image after artifact suppression processing;
and fusing the effective information in the first CT image and the second CT image, and outputting a fused final CT image.
2. The CT image ringing suppression method of claim 1, wherein the loss functions include a loss function in an image domain and a loss function in a projection domain; the first CT image is a CT image after artifact suppression in the image domain, and the second CT image is a CT image after artifact suppression in the projection domain.
3. The CT image ring artifact suppression method according to claim 1, wherein the RAS_UNet network model uses the UNet network as its model framework; the downsampling part of the RAS_UNet network model includes 1 double-convolution module and 4 Inception modules for extracting artifact features of different scales, and the upsampling part includes 4 upsampling modules and 3 residual modules for identifying and processing artifacts, and further includes 1 convolution kernel.
4. The CT image ring artifact suppression method as recited in claim 2, wherein designing a loss function for the RAS_UNet network model comprises:
calculating the gradient of the initial CT image using the forward difference and the backward difference simultaneously, and taking the maximum of the two as the gradient of the initial CT image:
wherein I represents the initial CT image, (i, j) the coordinates of a pixel in the image, G1 the image gradient calculated from the forward difference, G2 the image gradient calculated from the backward difference, max the maximum value, and G_sin the final image gradient in the sinogram;
firstly averaging G_sin by rows to obtain G_mean, and then weighting the gradient G_sin in the sinogram domain with G_mean as the weight:
wherein WG_sin(i, j) represents the sinogram gradient weighting factor;
weighting the gradient in the slice-image domain:
wherein ε is a positive real number that prevents the denominator from being 0, R(p) represents a local window centered on pixel p in the initial CT image, q is a pixel in the local window, g_{p,q} is a weight coefficient of a Gaussian distribution centered on p, G_img is the image gradient in the slice, WG_img is the weighting coefficient of the image gradient, and ∂I/∂y represents the partial derivative of I along the column direction of the image;
and calculating the loss functions in the image domain and the projection domain by taking the weighted gradient as the regularization term and the structural similarity coefficient as the fidelity term:
wherein Loss_sin is the loss function of the projection domain and Loss_img is the loss function of the image domain; SSIM is the fidelity term, representing the structural similarity coefficient; and λ1, λ2 ∈ [0,1].
5. The method of claim 1, wherein fusing the effective information in the first CT image and the second CT image, and outputting a fused final CT image, specifically comprising:
decomposing the first CT image and the second CT image by using a non-downsampling contourlet transformation algorithm to obtain a plurality of sub-images to be fused of the first CT image and a plurality of sub-images to be fused of the second CT image;
calculating residual errors of the sub-images to be fused, taking pixel values of the sub-images to be fused with smaller contrast as fusion results under the condition that the residual errors exceed a set residual error threshold value, and outputting the fused sub-images; under the condition that the residual error does not exceed a set residual error threshold value, taking the pixel value of the sub-image to be fused with larger contrast as a fusion result, and outputting the fused sub-image;
and combining the plurality of fused sub-images to output a final CT image.
6. The CT image ring artifact suppression method of claim 5, wherein the sub-images to be fused comprise a low-frequency subband image and a high-frequency subband image.
7. The method according to claim 5, wherein the residual is an absolute value of a difference between gray values of the two sub-images to be fused; the residual error threshold value is the product of the sum of gray values and the ratio value of the two sub-images to be fused.
8. A CT image ring artifact suppression apparatus, comprising:
the data collection module is used for acquiring a data set of the initial CT image;
the model building module is used for building an RAS_UNet network model;
a function design module for designing a loss function for the RAS_UNet network model;
the artifact processing module is used for inputting the data set into the RAS_UNet network model and outputting a first CT image and a second CT image which are subjected to artifact inhibition processing;
and the image fusion module is used for fusing the effective information in the first CT image and the second CT image and outputting a fused final CT image.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the CT image ring artifact suppression method of any of claims 1 to 7 when the program is executed by the processor.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the CT image ring artifact suppression method according to any one of claims 1 to 7.
CN202310297455.3A 2023-03-24 2023-03-24 CT image ring artifact suppression method, device, equipment and storage medium Pending CN116485925A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310297455.3A CN116485925A (en) 2023-03-24 2023-03-24 CT image ring artifact suppression method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116485925A true CN116485925A (en) 2023-07-25

Family

ID=87211011

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310297455.3A Pending CN116485925A (en) 2023-03-24 2023-03-24 CT image ring artifact suppression method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116485925A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117078791A (en) * 2023-10-13 2023-11-17 俐玛精密测量技术(苏州)有限公司 CT ring artifact correction method and device, electronic equipment and storage medium
CN117078791B (en) * 2023-10-13 2024-01-12 俐玛精密测量技术(苏州)有限公司 CT ring artifact correction method and device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination