CN116309910A - Method for removing Gibbs artifacts of magnetic resonance images

Method for removing Gibbs artifacts of magnetic resonance images

Info

Publication number
CN116309910A
Authority
CN
China
Prior art keywords
image
artifact
network
magnetic resonance
self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310230968.2A
Other languages
Chinese (zh)
Inventor
施俊 (Shi Jun)
刘阳 (Liu Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN202310230968.2A priority Critical patent/CN116309910A/en
Publication of CN116309910A publication Critical patent/CN116309910A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • G06T11/008: Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30: Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for removing Gibbs artifacts from magnetic resonance images, comprising the steps of: acquiring a magnetic resonance image to obtain an image-domain image containing Gibbs artifacts; preprocessing the artifact images to obtain the artifact training and test sets; feeding the artifact-containing images into the network as input images, where they are processed through the network's encoding and decoding; training the training set with the de-artifacting codec network and saving the trained model parameters once training is finished; in use, loading the trained model parameters directly, evaluating the result images output by the test, and outputting artifact-free images. The method improves on the inherent limitation of the conventional convolution-based UNet: the sliding-window self-attention mechanism specific to the Swin Transformer enlarges the receptive field and captures more global features, while the wavelet transform provides lossless downsampling and improves the detail of the reconstructed images.

Description

Method for removing Gibbs artifacts of magnetic resonance images
Technical Field
The invention relates to the field of image processing, in particular to a method for removing Gibbs artifacts of magnetic resonance images.
Background
Gibbs artifact is a type of ringing artifact common in magnetic resonance imaging, caused mainly by the lack of high-frequency samples in the signal. Gibbs artifacts can alter the intensity, shape and anatomical detail of tissue structures, affecting a physician's diagnosis of disease. Removing Gibbs artifacts with conventional filtering techniques often requires complex parameter tuning and preprocessing, so deep learning methods, being simple and fast, have also been applied in this field. Existing deep learning methods remove Gibbs artifacts and noise from images flexibly by constructing a nonlinear model that processes artifact-contaminated magnetic resonance images efficiently. They mainly fall into two categories: stacked convolutional neural networks, which extract the artifact contour from the artifact-contaminated magnetic resonance image and remove it from the original image to obtain an artifact-free image; and self-attention-connected UNets, which improve the traditional skip connection structure of UNet with Transformer modules and train the network with a self-distillation technique, yielding an end-to-end Gibbs artifact removal algorithm for magnetic resonance images.
However, in these methods for removing Gibbs artifacts from magnetic resonance images, a single-scale CNN tends to produce overly smooth output during artifact removal, losing some texture details; the inherent limitations of convolution operations restrict the expansion of the receptive field; and the repeated downsampling of the encoder in the U-shaped network structure discards part of the features, affecting reconstruction accuracy.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention aims to solve the technical problems that existing Gibbs artifact removal methods tend to produce overly smooth output and lose texture details, that the inherent limitation of convolution operations restricts the expansion of the receptive field, and that part of the features are lost, affecting reconstruction accuracy. The invention provides a method for removing Gibbs artifacts from magnetic resonance images that uses the wavelet transform and its inverse to help the network reconstruct the image while preserving the original image information. It improves on the inherent limitation of the conventional convolution-based UNet: the sliding-window self-attention mechanism specific to the Swin Transformer enlarges the receptive field and captures more global features; an added self-attention module makes the network focus more on the region of interest during training, raising the reconstruction quality; and the wavelet transform provides lossless downsampling, improving image reconstruction detail.
To achieve the above object, the present invention provides a method for removing Gibbs artifacts from magnetic resonance images, comprising the steps of:
acquiring a magnetic resonance image to obtain an image-domain image containing Gibbs artifacts;
preprocessing the artifact images to obtain the artifact training and test sets; the artifact-containing images serve as the network's input images and are processed through the network's encoding and decoding;
training the training set with the de-artifacting codec network, and saving the trained model parameters once training is finished;
in use, loading the trained model parameters directly and feeding the test-set data into the trained model for testing; the result images output by the test are evaluated with two metrics, peak signal-to-noise ratio and structural similarity, and artifact-free images are output.
Further, the de-artifacting codec network comprises an encoding stage, in which downsampling is performed by the wavelet transform, and a decoding stage, in which upsampling is performed by the inverse wavelet transform.
Further, the de-artifacting codec network specifically comprises wavelet transform layers, Transformer modules, inverse wavelet transform layers, skip connection modules and a fully connected layer; the wavelet transform layer applies the wavelet transform to the input image or features to obtain decomposed features at different frequencies, realizing downsampling; the Transformer module performs feature extraction, establishing long-range connections between features through sliding windows and encoding or decoding the image features; the inverse wavelet transform layer applies the inverse wavelet transform to the features of the different frequency components, realizing upsampling; the skip connections fuse feature maps at the same level on the encoding and decoding sides, helping the decoder reconstruct the image; the fully connected layer maps the feature map produced by the decoder through a dimension mapping to generate an artifact-free image, completing the image de-artifacting task.
Further, each wavelet transform decomposes its input into one low-frequency subband and three high-frequency subbands; each subsequent wavelet transform is applied to the previous low-frequency subband, and this is repeated in turn until the i-level wavelet transform of the image is completed.
Further, the Transformer module first partitions the features into blocks, then applies a linear transformation to the partitioned sub-block features and embeds position vector information; the processed sub-blocks are then fed into two consecutive Swin Transformer modules for self-attention operations; the two Swin Transformer blocks compute regular-window self-attention and shifted-window self-attention respectively, restricting the self-attention computation to each window; finally the sub-blocks are merged by sub-block fusion and upsampled to restore their resolution to that of the input features.
Further, the sub-block nesting in the Transformer module performs non-overlapping sub-block partitioning and position encoding on the feature map output by the encoder.
Further, the Swin Transformer module comprises a layer normalization layer, a multi-head self-attention layer, a residual connection and a multi-layer perceptron module, wherein the multi-layer perceptron module contains two GELU-activated fully connected layers.
Further, the two Swin Transformer modules adopt different self-attention mechanisms: the first module adopts window multi-head self-attention, dividing the input features into non-overlapping windows and computing the self-attention score within each window; the second module adopts shifted-window multi-head self-attention, where the shifted window captures long-range dependencies between feature maps after each shift, taking both local and global self-attention into account; the self-attention score is defined as:

Attention(Q, K, V) = SoftMax(QK^T/√d + B)V

where Q, K and V denote the query, key and value matrices respectively, M denotes the number of sub-blocks in a window, d denotes the dimension of the query and key matrices, and B denotes the bias matrix.
Further, training the training set with the de-artifacting codec network specifically includes:
encoding first and then decoding: the training-set images are fed into the network in turn by an iterator for training, the network output images are obtained, and the loss function between the corresponding output images and labels is computed; the loss function is optimized iteratively by the Adam optimizer, back-propagation further reduces the gap between the output images and the labels, and the model parameters are saved once network training is complete.
Further, the loss function of the reconstruction network is the loss between the predicted artifact-free image and the real label, as shown in the following formula:

L_res = (1/N) Σ_{i=1}^{N} ||y_i - ŷ_i||²

where L_res denotes the reconstruction loss; y_i denotes the real label, i.e. the artifact-free image; ŷ_i is the image predicted by the network; and N is the number of training samples.
Technical effects
The method for removing Gibbs artifacts from magnetic resonance images provided by the invention solves the detail loss caused by pooling-based downsampling in the UNet encoding process, using the wavelet transform and its inverse to help the network reconstruct the image while preserving the original image information; it improves on the inherent limitation of the conventional convolution-based UNet, enlarging the receptive field through the Swin Transformer's sliding-window self-attention mechanism to capture more global features, and adds a self-attention module so that the network focuses more on the region of interest during training, raising the reconstruction quality.
The method uses the lossless-downsampling property of the wavelet transform to replace the downsampling and upsampling processes of the codec network structure, preserving image detail during training; Transformer modules replace convolution operations for encoding and decoding, enlarging the receptive field so that the network can locate artifact information accurately; and the trained network performs end-to-end image de-artifacting with no extra steps, quickly and with good results.
The conception, specific structure and technical effects of the present invention are further described below with reference to the accompanying drawings, so that the objects, features and effects of the invention can be fully understood.
Drawings
FIG. 1 is the de-artifacting codec network architecture diagram of a method for removing Gibbs artifacts of a magnetic resonance image according to a preferred embodiment of the present invention;
FIG. 2 is a wavelet transform diagram of a method for removing Gibbs artifacts of a magnetic resonance image according to a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of the inverse wavelet transform of a method for removing Gibbs artifacts of a magnetic resonance image according to a preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of the Transformer module of a method for removing Gibbs artifacts of a magnetic resonance image according to a preferred embodiment of the present invention;
FIG. 5 is a network flow diagram of a method for removing Gibbs artifacts of magnetic resonance images according to a preferred embodiment of the present invention;
FIG. 6 is a schematic illustration of experimental results of a method for removing Gibbs artifacts of a magnetic resonance image according to a preferred embodiment of the present invention;
FIG. 7 is a schematic illustration of Gibbs artifact generation in a method for removing Gibbs artifacts of a magnetic resonance image according to a preferred embodiment of the present invention.
Detailed Description
In order to make the technical problems to be solved, the technical solutions and the beneficial effects clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are for purposes of illustration only and are not intended to limit the scope of the invention.
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular internal procedures, techniques, etc. in order to provide a thorough understanding of embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In the Gibbs artifact removal task, conventional convolutional neural networks have the following problems: conventional convolution operations have a small receptive field and are limited in capturing global information, and downsampling may cause partial loss of detail. An embodiment of the present invention therefore provides a method for removing Gibbs artifacts from magnetic resonance images, built on an image de-artifacting codec network of wavelet transform and Transformer modules whose basic structure is a U-shaped network. A traditional U-shaped network adopts an encoding-decoding structure: in the encoding stage, the input image is encoded by convolution layers and downsampling (pooling), continuously enlarging the receptive field of the image and obtaining high-level feature information; in the decoding stage, convolution layers and upsampling operations decode the high-level features, and finally the artifact-free image is output.
A method for removing gibbs artifacts of a magnetic resonance image according to an embodiment of the present invention includes the steps of:
step 1, a data set is produced, as shown in FIG. 7, specifically comprising
Step 100: magnetic resonance images of the CC359 dataset were acquired, with image size 160 x 256, fourier transformed to k-space, where the k-space image was the high frequency center. And (4) carrying out frequency shift on the k space to obtain a low-frequency center.
Step 200: mask m is defined as the zero matrix of the image size. The coordinates of the center point of the acquired image are (x 1 ,y 1 ),The mask width was set to 2×w (w=40 in this experiment), and the center area of the mask m was set to 1, as follows:
m[x 1 -w:x 1 +w,y 1 -w:y 1 +w]=1
wherein m represents a matrix of the mask; 2 xw is the window size, i.e. the size of the reserved low frequency part; (x) 1 ,y 1 ) Is the center point coordinates of the low frequency center image.
The low-frequency-centered spectrogram is then multiplied by the mask, retaining the low-frequency data in the low-frequency-centered image and removing the high-frequency data at the edges:

F_m = F × m

where F is the low-frequency-centered k-space image and F_m is the k-space image with part of the high-frequency information removed.
Step 300: will F m After Fourier inverse transformation, obtaining an image domain image containing Gibbs artifacts;
and after the data set is subjected to artifact processing, a pair of artifact-free label image and an artifact-containing image are obtained.
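The k-space truncation of steps 100 to 300 can be reproduced with a short NumPy sketch. This is a minimal illustration, assuming numpy.fft and the function name make_gibbs_pair; only the window half-width w = 40 comes from the text above.

```python
import numpy as np

def make_gibbs_pair(img: np.ndarray, w: int = 40):
    """Simulate a Gibbs-artifact image by truncating high k-space frequencies."""
    # Step 100: Fourier transform to k-space, then shift DC to the center.
    F = np.fft.fftshift(np.fft.fft2(img))            # low-frequency center
    # Step 200: zero mask whose 2w x 2w central window is set to 1.
    m = np.zeros(img.shape)
    x1, y1 = img.shape[0] // 2, img.shape[1] // 2    # center-point coordinates
    m[x1 - w:x1 + w, y1 - w:y1 + w] = 1
    F_m = F * m                                      # keep only low frequencies
    # Step 300: inverse transform back to the image domain.
    artifact = np.abs(np.fft.ifft2(np.fft.ifftshift(F_m)))
    return img, artifact                             # (label, artifact) pair

# Usage: label, gibbs = make_gibbs_pair(slice_160x256, w=40)
```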
Step 400: all images are preprocessed in normalization, dimension expansion and the like, and the images are divided according to the proportion of five-fold cross validation, so that a training set and a testing set of artifacts are obtained. The image containing the artifact is processed as an input image of the network by encoding and decoding of the network.
Step 2: Training the network
Step 210: Encoding stage
Assuming the input image size is (1×1×H×W), four sub-images are obtained after the first wavelet transform layer, and splicing them along the channel dimension gives a feature map of size (1×4×H/2×W/2). The feature map is then fed into the Transformer module for self-attention computation.
The image is divided into smaller sub-blocks by the sub-block nesting operation, which takes out the value at the same position of each window and splices these values into new sub-blocks; although the feature resolution is reduced, no information is lost, and the downsampling factor is set to 2. After the sub-block nesting operation, the feature size becomes (1×8×H/4×W/4). The features are then fed into the Swin Transformer module, with the self-attention mechanism operating within each window:
Attention(Q, K, V) = SoftMax(QK^T/√d + B)V

where Q, K and V denote the query, key and value matrices respectively, M denotes the number of sub-blocks in a window, d denotes the dimension of the query and key matrices, and B denotes the bias matrix.
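The sub-block nesting described above can be sketched as strided slicing followed by a channel projection. This is a minimal sketch; the 1×1 convolution mapping the 16 gathered channels down to the 8 channels quoted above is an assumption about how the stated sizes are reached.

```python
import torch
from torch import nn

# Sub-block nesting: the value at the same position of each 2x2 window is
# taken out and spliced into a new sub-block, halving resolution losslessly.
feat = torch.randn(1, 4, 80, 128)                    # (1 x 4 x H/2 x W/2)
nested = torch.cat([feat[:, :, i::2, j::2]
                    for i in (0, 1) for j in (0, 1)], dim=1)  # (1, 16, 40, 64)
# Assumed linear projection to the 8 channels stated in the text.
proj = nn.Conv2d(16, 8, kernel_size=1)
out = proj(nested)                                   # (1 x 8 x H/4 x W/4)
```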
Regular-window self-attention is computed within many small windows, which saves considerable computation compared with the shifted window and focuses on local regions. Shifted-window self-attention exploits global information: by cyclically shifting the windows it integrates information across different windows, computes self-attention between them, and finds the regions of interest in the whole image, so the processed features contain not only local information but also some global information. After the features undergo the two window self-attention computations, the feature resolution is restored by sub-block fusion. After encoding by the first Transformer module, the feature size becomes (1×16×H/2×W/2). Self-attention here always refers to self-attention computed over the limited pixels of a window; regular-window and shifted-window self-attention differ only in how the windows are obtained (the regular version partitions windows by position, while the shifted version obtains them through a sliding offset), and the subsequent self-attention computation is identical.
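The cyclic shift underlying shifted-window self-attention can be illustrated with torch.roll, following the public Swin Transformer reference implementation; the window size 8 and shift 4 below are illustrative assumptions, as the text does not state concrete values.

```python
import torch

win, shift = 8, 4
x = torch.randn(1, 64, 64, 16)                      # (B, H, W, C) feature map

# Cyclic shift so that pixels from neighbouring windows share a window.
x_shift = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))

# Partition into non-overlapping win x win windows for per-window attention.
B, H, W, C = x_shift.shape
windows = (x_shift.view(B, H // win, win, W // win, win, C)
                  .permute(0, 1, 3, 2, 4, 5)
                  .reshape(-1, win * win, C))       # (num_windows*B, M, C)
# ... per-window self-attention runs here ...

# Reverse: merge the windows and roll back to undo the shift.
x_merged = (windows.view(B, H // win, W // win, win, win, C)
                   .permute(0, 1, 3, 2, 4, 5)
                   .reshape(B, H, W, C))
x_out = torch.roll(x_merged, shifts=(shift, shift), dims=(1, 2))
```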
Subsequently, the features pass through the second wavelet transform layer, after which the feature size becomes (1×64×H/4×W/4), and then through the second Transformer module, after which it becomes (1×128×H/4×W/4).
Continuing in this way, after the input image has undergone four wavelet transforms and Transformer encoding operations, the whole image-feature encoding process ends with a feature size of (1×512×H/16×W/16).
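The lossless downsampling used by the encoder can be sketched as a one-level Haar wavelet transform implemented with strided slicing. This is a minimal sketch assuming the standard orthonormal Haar filters, not the patent's exact layer; subband sign conventions vary between implementations.

```python
import torch

def haar_dwt(x: torch.Tensor) -> torch.Tensor:
    """One-level Haar transform: (B, C, H, W) -> (B, 4C, H/2, W/2), lossless."""
    a = x[:, :, 0::2, 0::2]    # even rows, even columns
    b = x[:, :, 1::2, 0::2]    # odd rows,  even columns
    c = x[:, :, 0::2, 1::2]    # even rows, odd columns
    d = x[:, :, 1::2, 1::2]    # odd rows,  odd columns
    ll = (a + b + c + d) / 2   # low-frequency subband (LL)
    lh = (-a - b + c + d) / 2  # row low / column high (LH)
    hl = (-a + b - c + d) / 2  # row high / column low (HL)
    hh = (a - b - c + d) / 2   # diagonal subband (HH)
    return torch.cat([ll, lh, hl, hh], dim=1)

x = torch.randn(1, 1, 160, 256)    # input image (1 x 1 x H x W)
f = haar_dwt(x)                    # -> (1, 4, 80, 128): channels x4, spatial /2
```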
A convolution layer switches the feature processing from the encoding stage to the decoding stage; the output size here is (1×256×H/16×W/16), and this convolution operation changes only the number of channels.
Step 220: Decoding stage
In the decoding stage, the fourth skip connection unit merges same-size features from the encoder and decoder along the channel dimension. The skip connection unit combines the output features of the fourth wavelet transform layer with the output features of the convolution layer; the resulting feature map has size (1×512×H/16×W/16).
The decoding stage is a process of feature recovery and image reconstruction. The features are sent into the fifth Transformer module for decoding, with an output feature size of (1×128×H/8×W/8); the Transformer operations in the decoding process do not change the feature size but only perform global and local self-attention computation, retaining more features for the recovery of the image. The features then undergo an inverse wavelet transform, giving upsampled features of size (1×128×H/8×W/8).
After the inverse wavelet transform operation, the features are skip-connected again, combining encoder-side and decoder-side features; merging the different semantic information completes the recovery of detail features. Likewise, the features repeat the skip connection, inverse wavelet transform and Transformer operations; these repeated operations are performed four times at the decoding end, and the reconstructed network output image (1×1×H×W) is finally obtained after the fourth inverse wavelet transform layer.
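The upsampling step can be sketched as the exact inverse of the haar_dwt example above; the subband order and filter signs are the assumptions of that sketch.

```python
import torch

def haar_idwt(f: torch.Tensor) -> torch.Tensor:
    """Inverse one-level Haar transform: (B, 4C, H/2, W/2) -> (B, C, H, W)."""
    c4 = f.shape[1] // 4
    ll, lh, hl, hh = f[:, :c4], f[:, c4:2*c4], f[:, 2*c4:3*c4], f[:, 3*c4:]
    B, C, h, w = ll.shape
    x = torch.zeros(B, C, 2 * h, 2 * w, dtype=f.dtype, device=f.device)
    x[:, :, 0::2, 0::2] = (ll - lh - hl + hh) / 2    # even rows, even columns
    x[:, :, 1::2, 0::2] = (ll - lh + hl - hh) / 2    # odd rows,  even columns
    x[:, :, 0::2, 1::2] = (ll + lh - hl - hh) / 2    # even rows, odd columns
    x[:, :, 1::2, 1::2] = (ll + lh + hl + hh) / 2    # odd rows,  odd columns
    return x

# Round trip: haar_idwt(haar_dwt(x)) reproduces x, i.e. the sampling is lossless.
```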
Performing downsampling and upsampling by the wavelet transform and its inverse avoids the loss of detail features caused by the pooling operations of a traditional U-shaped network. In addition, the receptive field of the convolution operations in a traditional U-shaped network is small and cannot capture global information in the image. The Transformer module computes image self-attention through sliding windows, capturing global information in the image and strengthening the correlation between image features. Adopting Transformer modules therefore helps the network acquire accurate artifact positions and learn the artifacts' feature information, realizing image de-artifacting.
Step 230: Computing the loss function
The training-set images are fed into the network in turn by an iterator for training; after the network output images are obtained, the loss function is computed from the corresponding output images and labels. The mean squared error loss measures the difference between the network output image and the artifact-free label image. The mean squared error loss is computed as follows:

MSE = (1/n) Σ_{i=1}^{n} (y_i - ŷ_i)²

where y is the artifact-free label image, ŷ is the result image output by the network, and n is the number of pixels. The MSE computes the mean of the squared distances between the target values and the predicted values.
Step 240: Iterative optimization and saving of network parameters
The loss function is optimized iteratively by the Adam optimizer; back-propagation further reduces the gap between the output images and the labels, and the model parameters are saved once network training is complete.
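Steps 230 and 240 amount to a standard supervised training loop. The sketch below is a minimal illustration: WaveletSwinUNet, train_loader and num_epochs are hypothetical stand-ins for the patent's network, iterator and schedule, while the MSE loss, Adam optimizer and learning rate 1e-4 follow the description.

```python
import torch
from torch import nn, optim

model = WaveletSwinUNet()          # hypothetical de-artifacting codec network
criterion = nn.MSELoss()           # mean squared error between output and label
optimizer = optim.Adam(model.parameters(), lr=1e-4)
num_epochs = 100                   # illustrative assumption

for epoch in range(num_epochs):
    for artifact_img, label_img in train_loader:   # iterator over image pairs
        output = model(artifact_img)               # network output image
        loss = criterion(output, label_img)        # gap to artifact-free label
        optimizer.zero_grad()
        loss.backward()                            # back-propagation
        optimizer.step()                           # Adam iteration

torch.save(model.state_dict(), "deartifact_model.pth")  # save model parameters
```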
Step 3: Testing the network
During testing, the network directly loads the saved training parameters; the test-set images are input into the network, and artifact-free images are generated after network processing. The de-artifacting effect is evaluated by computing the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM), specifically:

PSNR = 10 · log10(MAX² / MSE)

SSIM(y, ŷ) = ((2·μ_y·μ_ŷ + c1)(2·σ_yŷ + c2)) / ((μ_y² + μ_ŷ² + c1)(σ_y² + σ_ŷ² + c2))

where MAX is the maximum possible pixel value and MSE is the mean squared error between the label and the output; μ_y is the mean of the label y, μ_ŷ is the mean of the network output ŷ, σ_y² and σ_ŷ² are the variances of y and ŷ, σ_yŷ is their covariance, and c1 and c2 are constants used to maintain stability.
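For reference, both metrics are available in scikit-image; this evaluation sketch assumes that library and images normalized to [0, 1].

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(label: np.ndarray, output: np.ndarray):
    """Return (PSNR, SSIM) for one label / network-output pair."""
    psnr = peak_signal_noise_ratio(label, output, data_range=1.0)
    ssim = structural_similarity(label, output, data_range=1.0)
    return psnr, ssim
```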
The initial learning rate of the experiment is 0.0001, the Adam optimizer is used, and the public CC359 dataset is adopted; FIG. 6 shows the result of one experiment in this embodiment. Compared with the prior art, the invention obtains excellent results: the mean ± standard deviation of PSNR and SSIM are 30.14 ± 0.19 and 0.9171 ± 0.0043 respectively.
The method for removing Gibbs artifacts of magnetic resonance images provided by this embodiment of the invention comprises an image de-artifacting reconstruction network based on a U-shaped network of wavelet transform and Transformer modules, as shown in FIG. 1, with the following structure:
the system comprises a plurality of wavelet transformation layers, a plurality of converter modules, a plurality of wavelet inverse transformation layers, a plurality of jump connection modules and a full connection layer. The wavelet transformation layer performs wavelet transformation on the input image or characteristic to obtain decomposition characteristics under different frequencies, so as to realize downsampling; the converter module performs feature extraction, long-distance connection between features is established through a sliding window, and image features are encoded or decoded; the wavelet inverse transformation layer performs inverse wavelet transformation on the characteristics of different frequency components to realize up-sampling. And jumping-connection fusion of characteristic images with the same level at two ends of the coding and decoding, and assisting the decoding end in image reconstruction. The full-connection layer maps the feature map generated by the decoding end to generate an artifact-free image through dimension mapping, and the task of removing artifacts of the image is achieved.
FIGS. 2 and 3 show the wavelet transform and inverse wavelet transform operations respectively. This embodiment employs the classic Haar wavelet transform. Each wavelet transform decomposes the input into one low-frequency subband (LL: row low frequency, column low frequency) and three high-frequency subbands (vertical subband LH: row low frequency, column high frequency; horizontal subband HL: row high frequency, column low frequency; diagonal subband HH: row high frequency, column high frequency). The low-frequency component reflects the coarse-grained basic object structure, while the high-frequency components retain the fine-grained object texture details. In this way, different levels of image detail are preserved without information loss in the lower-resolution subbands. Each subsequent wavelet transform operates on the previous level's low-frequency subband LL, and repeating this in turn completes the i-level wavelet transform of the image, where i = 1, 2, ..., I. Each wavelet transform can be regarded as interleaved sampling along the horizontal direction of the rows and the vertical direction of the columns, halving the spatial resolution each time. Hence after the i-th level wavelet transform, the subband spatial resolution is 1/2^i of the original image. Image features passing through one wavelet transform layer generate four feature components, each 1/4 the size of the original feature.
The Transformer module consists of sub-block nesting, two Swin Transformer blocks and sub-block fusion; the main structure is shown in FIG. 4. The Transformer module first performs sub-block nesting on the input features: the features are partitioned, the partitioned sub-block features undergo a linear transformation, and position vector information is embedded. The processed sub-blocks are then fed into two successive Swin Transformer blocks for self-attention operations. The two Swin Transformer blocks compute regular-window self-attention and shifted-window self-attention respectively, restricting the self-attention computation to each window. The shifted-window operation lets features in different windows exchange information, balancing global features against computational complexity. Finally the sub-blocks are merged by sub-block fusion and upsampled to restore their resolution to that of the input features.
Sub-block nesting performs non-overlapping sub-block partitioning and position encoding on the feature map output by the encoder. The Swin Transformer module consists of a layer normalization (LN) layer, a multi-head self-attention (MSA) layer, a residual connection and a multi-layer perceptron (MLP) module, where the MLP module contains two GELU-activated fully connected layers. The two Swin Transformer modules employ different self-attention mechanisms: the first adopts window multi-head self-attention (W-MSA), dividing the input features into non-overlapping windows and computing the self-attention score within each window; the second adopts shifted-window multi-head self-attention (SW-MSA), where the shifted windows capture long-range dependencies between feature maps after each shift, taking both local and global self-attention into account. The self-attention score is defined as:

Attention(Q, K, V) = SoftMax(QK^T/√d + B)V

where Q, K and V denote the query, key and value matrices respectively, M denotes the number of sub-blocks in a window, d denotes the dimension of the query and key matrices, and B denotes the bias matrix.
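The per-window attention score above can be sketched in a single-head form; the window contents, projection weights and dimensions below are illustrative stand-ins.

```python
import torch
import torch.nn.functional as F

def window_attention(x: torch.Tensor, B_bias: torch.Tensor) -> torch.Tensor:
    """Attention(Q, K, V) = SoftMax(QK^T / sqrt(d) + B) V within each window.

    x: (num_windows, M, d) with M sub-blocks per window;
    B_bias: (M, M) relative position bias matrix.
    """
    d = x.shape[-1]
    Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))    # stand-in projections
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.transpose(-2, -1) / d ** 0.5 + B_bias  # (nW, M, M)
    return F.softmax(scores, dim=-1) @ V                  # (nW, M, d)

# Example: 4 windows, M = 49 sub-blocks per window, dimension d = 32.
out = window_attention(torch.randn(4, 49, 32), torch.zeros(49, 49))
```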
The sub-block fusion operation uses deconvolution to upsample the feature map that was downsampled in the above nesting operation, restoring the original resolution.
The inverse wavelet transform layer combines the U-shaped network with the skip connection units. During encoding, only the low-frequency subband LL is downsampled further; the other three frequency-component features are fused with the same-scale features of the decoder through skip connections and then inverse wavelet transformed, recovering the high-frequency information of the features to the greatest extent. The inverse wavelet transform is the inverse process of the wavelet transform: applying it to the four feature components realizes image recovery and upsampling. The features output by the codec pass through a fully connected layer to complete the reconstruction from features to an artifact-free image; this fully connected layer consists of a convolution layer with 1×1 kernels. The specific modules and parameters are shown in Table 1.
Table 1: Structure and parameters of the modules in the network
[Table 1 appears only as images in the original publication; its contents are not recoverable here.]
The goal of the MRI de-artifacting task is to learn, through the codec network, the mapping between the Gibbs-artifact image and the artifact-free original; the network training process is shown in FIG. 5. The loss functions involved in the invention are all L2 Euclidean distance losses, measuring the gap between the predicted artifact-free image and the real label. Back-propagation guides the training process of the image reconstruction model.
During training, the loss function of the reconstruction network is the loss between the predicted artifact-free image and the real label, as shown in the following formula:

L_res = (1/N) Σ_{i=1}^{N} ||y_i - ŷ_i||²

where L_res denotes the reconstruction loss; y_i denotes the real label, i.e. the artifact-free image; ŷ_i is the image predicted by the network; and N is the number of training samples.
The loss function is used when training the network and represents the gap between the network output image and the label image. When an image is fed into the codec network, a network output image is obtained; the loss function is computed between the output image and the label image, back-propagation continually reduces it, and the relevant network parameters are trained, so that the iterative process is optimized and the network output finally approaches the label; that is, the network learns the mapping from artifact-containing input to artifact-free output. By saving the trained network parameters, the test set can be tested; since the test-set and training-set images are mutually independent, the evaluation of the network output is carried out on the test set, and the Gibbs artifact removal task can be assessed by computing the peak signal-to-noise ratio and the structural similarity.
The present invention verifies the validity of the proposed algorithm on the CC359 database. The CC359 database consists of single-channel coil data acquired on a clinical MRI scanner (Discovery MR750, General Electric (GE) Healthcare, Waukesha, WI) and includes 35 fully sampled T1-weighted MRI volumes. The dataset is a set of complex images that can be inverse transformed to generate complex k-space samples. When undersampled in both directions (slice encoding and phase encoding), the dataset may be used as a 2D dataset.
Through five-fold cross-validation the data are divided into a training set containing 28 cases, with 45 slices per case, 1440 images in total, and a test set of 7 cases, 315 images in total. The present invention crops the input images to 160×256 pixels, removing non-tissue areas. Experiments were performed in the PyTorch framework, with back-propagation via the Adam optimizer. FIG. 6 shows the result of one experiment in this embodiment. Compared with the prior art, the invention obtains excellent results: the mean ± standard deviation of PSNR and SSIM are 30.14 ± 0.19 and 0.9171 ± 0.0043 respectively.
As shown in FIG. 6, an original test-set image containing Gibbs artifacts is selected and input into the trained network model; end-to-end artifact-free image reconstruction is performed, and the network's output image is evaluated against the test-set label image to quantify the effect of network training.
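At test time the procedure reduces to loading the saved parameters and running the model; the sketch below reuses the hypothetical names from the training sketch above.

```python
import torch

model = WaveletSwinUNet()                        # hypothetical model class
model.load_state_dict(torch.load("deartifact_model.pth"))
model.eval()

with torch.no_grad():
    for artifact_img, label_img in test_loader:  # assumed test-set iterator
        output = model(artifact_img)             # end-to-end artifact removal
        # PSNR / SSIM against label_img as in the evaluate() sketch above
```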
The foregoing describes the preferred embodiments of the present invention in detail. It should be understood that a person of ordinary skill in the art can make numerous modifications and variations according to the concept of the invention without creative effort. Therefore, all technical solutions that a person skilled in the art can obtain through logical analysis, reasoning or limited experimentation based on the prior art and the inventive concept shall fall within the scope of protection defined by the claims.

Claims (10)

1. A method for removing Gibbs artifacts from a magnetic resonance image, comprising the steps of: acquiring a magnetic resonance image to obtain an image-domain image containing Gibbs artifacts;
preprocessing the artifact images to obtain the artifact training and test sets; the artifact-containing images serve as the network's input images and are processed through the network's encoding and decoding;
training the training set with the de-artifacting codec network, and saving the trained model parameters once training is finished;
in use, loading the trained model parameters directly and feeding the test-set data into the trained model for testing; the result images output by the test are evaluated with two metrics, peak signal-to-noise ratio and structural similarity, and artifact-free images are output.
2. The method for removing Gibbs artifacts of magnetic resonance images according to claim 1, characterized in that the de-artifacting codec network comprises an encoding stage, in which downsampling is performed by the wavelet transform, and a decoding stage, in which upsampling is performed by the inverse wavelet transform.
3. The method for removing Gibbs artifacts of magnetic resonance images according to claim 2, characterized in that the de-artifacting codec network specifically comprises a wavelet transform layer, a Transformer module, an inverse wavelet transform layer, a skip connection module and a fully connected layer; the wavelet transform layer applies the wavelet transform to an input image or features to obtain decomposed features at different frequencies, realizing downsampling; the Transformer module performs feature extraction, establishing long-range connections between features through sliding windows and encoding or decoding the image features; the inverse wavelet transform layer applies the inverse wavelet transform to the features of the different frequency components, realizing upsampling; the skip connections fuse feature maps at the same level on the encoding and decoding sides, helping the decoder reconstruct the image; the fully connected layer maps the feature map produced by the decoder through a dimension mapping to generate an artifact-free image, completing the image de-artifacting task.
4. The method for removing Gibbs artifacts of a magnetic resonance image according to claim 3, characterized in that each wavelet transform decomposes its input into one low-frequency subband and three high-frequency subbands; each subsequent wavelet transform is applied to the previous low-frequency subband, repeated in turn until the i-level wavelet transform of the image is completed.
5. The method for removing Gibbs artifacts of a magnetic resonance image according to claim 3, characterized in that the Transformer module partitions the features into blocks, then applies a linear transformation to the partitioned sub-block features and embeds position vector information; the processed sub-blocks are then fed into two consecutive Swin Transformer modules for self-attention operations; the two Swin Transformer blocks compute regular-window self-attention and shifted-window self-attention respectively, restricting the self-attention computation to each window; finally the sub-blocks are merged by sub-block fusion and upsampled to restore their resolution to that of the input features.
6. The method for removing Gibbs artifacts of magnetic resonance images according to claim 5, characterized in that the sub-block nesting in the Transformer module performs non-overlapping sub-block partitioning and position encoding on the feature map output by the encoder.
7. The method for removing Gibbs artifacts from a magnetic resonance image according to claim 5, characterized in that the Swin Transformer module comprises a layer normalization layer, a multi-head self-attention layer, a residual connection and a multi-layer perceptron module, wherein the multi-layer perceptron module contains two GELU-activated fully connected layers.
8. The method for removing Gibbs artifacts from magnetic resonance images according to claim 7, characterized in that the two Swin Transformer modules employ different self-attention mechanisms, wherein the first module employs window multi-head self-attention, dividing the input features into non-overlapping windows and computing the self-attention score within each window; the second module employs shifted-window multi-head self-attention, where the shifted window captures long-range dependencies between feature maps after each shift, taking both local and global self-attention into account; the self-attention score is defined as:

Attention(Q, K, V) = SoftMax(QK^T/√d + B)V

where Q, K and V denote the query, key and value matrices respectively, M denotes the number of sub-blocks in a window, d denotes the dimension of the query and key matrices, and B denotes the bias matrix.
9. The method for removing Gibbs artifacts of magnetic resonance images according to claim 1, characterized in that training the training set with the de-artifacting codec network specifically includes:
encoding first and then decoding: the training-set images are fed into the network in turn by an iterator for training, the network output images are obtained, and the loss function between the corresponding output images and labels is computed; the loss function is optimized iteratively by the Adam optimizer, back-propagation further reduces the gap between the output images and the labels, and the model parameters are saved once network training is complete.
10. The method for removing Gibbs artifacts of magnetic resonance images according to claim 7, characterized in that the loss function of the reconstruction network is the loss between the predicted artifact-free image and the real label, as shown in the following formula:

L_res = (1/N) Σ_{i=1}^{N} ||y_i - ŷ_i||²

where L_res denotes the reconstruction loss; y_i denotes the real label, i.e. the artifact-free image; ŷ_i is the image predicted by the network; and N is the number of training samples.
CN202310230968.2A 2023-03-12 2023-03-12 Method for removing Gibbs artifacts of magnetic resonance images Pending CN116309910A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310230968.2A CN116309910A (en) 2023-03-12 2023-03-12 Method for removing Gibbs artifacts of magnetic resonance images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310230968.2A CN116309910A (en) 2023-03-12 2023-03-12 Method for removing Gibbs artifacts of magnetic resonance images

Publications (1)

Publication Number Publication Date
CN116309910A true CN116309910A (en) 2023-06-23

Family

ID=86818133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310230968.2A Pending CN116309910A (en) 2023-03-12 2023-03-12 Method for removing Gibbs artifacts of magnetic resonance images

Country Status (1)

Country Link
CN (1) CN116309910A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117173525A (en) * 2023-09-05 2023-12-05 北京交通大学 Universal multi-mode image fusion method and device
CN117541673A (en) * 2023-11-13 2024-02-09 烟台大学 Multi-mode magnetic resonance image conversion method
CN117541673B (en) * 2023-11-13 2024-04-26 烟台大学 Multi-mode magnetic resonance image conversion method
CN117690331A (en) * 2024-02-04 2024-03-12 西南医科大学附属医院 Prostate puncture operation training system and method
CN117690331B (en) * 2024-02-04 2024-05-14 西南医科大学附属医院 Prostate puncture operation training system and method

Similar Documents

Publication Publication Date Title
CN116309910A (en) Method for removing Gibbs artifacts of magnetic resonance images
Souza et al. A hybrid, dual domain, cascade of convolutional neural networks for magnetic resonance image reconstruction
KR101664913B1 (en) Method and system for determining a quality measure for an image using multi-level decomposition of images
CN107123091A (en) A kind of near-infrared face image super-resolution reconstruction method based on deep learning
CN109214989A (en) Single image super resolution ratio reconstruction method based on Orientation Features prediction priori
CN109345473B (en) Image processing method based on self-adaptive fast iterative shrinkage threshold algorithm
CN115578427A (en) Unsupervised single-mode medical image registration method based on deep learning
CN116029902A (en) Knowledge distillation-based unsupervised real world image super-resolution method
CN116416156A (en) Swin transducer-based medical image denoising method
CN112037304A (en) Two-stage edge enhancement QSM reconstruction method based on SWI phase image
Tong et al. HIWDNet: a hybrid image-wavelet domain network for fast magnetic resonance image reconstruction
Xiao et al. SR-Net: a sequence offset fusion net and refine net for undersampled multislice MR image reconstruction
CN113096207B (en) Rapid magnetic resonance imaging method and system based on deep learning and edge assistance
CN117576240A (en) Magnetic resonance image reconstruction method based on double-domain transducer
Mahapatra et al. MR image super resolution by combining feature disentanglement CNNs and vision transformers
CN117557476A (en) Image reconstruction method and system based on FCTFT
CN116957940A (en) Multi-scale image super-resolution reconstruction method based on contour wave knowledge guided network
CN105260992A (en) Traffic image denoising algorithm based on robust principal component decomposition and feature space reconstruction
CN112669400B (en) Dynamic MR reconstruction method based on deep learning prediction and residual error framework
Indira et al. Pixel based medical image fusion techniques using discrete wavelet transform and stationary wavelet transform
CN114549361A (en) Improved U-Net model-based image motion blur removing method
Zhao et al. K-space transformer for undersampled MRI reconstruction
Zhang et al. Image Super-Resolution Using a Wavelet-based Generative Adversarial Network
Yuan et al. ARCNet: An Asymmetric Residual Wavelet Column Correction Network for Infrared Image Destriping
KR20200052422A (en) Method and apparatus for processing MR angiography image using neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination