CN117333571B - Reconstruction method, system, equipment and medium of magnetic resonance image - Google Patents

Reconstruction method, system, equipment and medium of magnetic resonance image

Info

Publication number
CN117333571B
CN117333571B (application CN202311394491.8A)
Authority
CN
China
Prior art keywords
image
magnetic resonance
layer
reconstruction
resonance image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311394491.8A
Other languages
Chinese (zh)
Other versions
CN117333571A (en)
Inventor
吕骏
赵训康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yantai University
Original Assignee
Yantai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yantai University filed Critical Yantai University
Priority to CN202311394491.8A priority Critical patent/CN117333571B/en
Publication of CN117333571A publication Critical patent/CN117333571A/en
Application granted granted Critical
Publication of CN117333571B publication Critical patent/CN117333571B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T 11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06T 11/006 Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G06N 3/045 Combinations of networks
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/08 Learning methods
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Abstract

The invention discloses a reconstruction method, system, device and medium for magnetic resonance images. The method comprises the following steps: constructing a training set, which comprises a fully sampled magnetic resonance image of a target modality and an undersampled magnetic resonance image of the target modality, both with a first contrast, and a fully sampled magnetic resonance image of a reference modality with a second contrast; training an image reconstruction model with the training set, taking the undersampled magnetic resonance image of the target modality and the fully sampled magnetic resonance image of the reference modality as model inputs and the fully sampled magnetic resonance image of the target modality as the model output, to obtain a trained image reconstruction model; and acquiring a magnetic resonance image to be reconstructed, inputting it into the trained image reconstruction model, and outputting the reconstructed image.

Description

Reconstruction method, system, equipment and medium of magnetic resonance image
Technical Field
The present invention relates to the field of image reconstruction technologies, and in particular, to a method, a system, an apparatus, and a medium for reconstructing a magnetic resonance image.
Background
The statements in this section merely relate to the background of the present disclosure and may not necessarily constitute prior art.
Magnetic resonance imaging (MRI) is one of the most commonly used imaging techniques in disease diagnosis and treatment planning. As a non-invasive, non-radiative, in vivo imaging modality, MRI provides better soft-tissue contrast than many other imaging techniques and enables accurate measurement of anatomical and functional signals. However, because the complete k-space must be sampled, especially in protocols requiring long echo times (TE) and repetition times (TR), the long acquisition can introduce significant artifacts into the reconstructed image due to patient or physiological motion during the scan, such as cardiac and respiratory motion. For this reason, there is an urgent practical need to reconstruct high-quality MRI images from limited measurement data in order to reduce scan time. However, existing fast magnetic resonance imaging methods have several shortcomings:
1. Traditional compressed sensing methods offer only a small imaging acceleration rate, with the acceleration factor limited to 2 to 3; increasing the undersampling rate causes loss of detail and unwanted artifacts in the reconstruction.
2. Since MRI images are generated by sampling k-space and converting to the image domain, some information is lost in this process. Most existing methods focus on recovering MRI images in the image domain without using k-space information, so aliasing in the reconstructed images is inevitable.
3. When undersampled image reconstruction is performed on a protocol with a long sampling time, more aliasing and artifacts appear as the acceleration factor increases, and the reconstruction quality is unsatisfactory.
Disclosure of Invention
To remedy the defects of the prior art, the invention provides a method, system, device and medium for reconstructing magnetic resonance images, realizing reconstruction of undersampled MRI under different sampling patterns through an alternating optimization strategy between the image domain and the k-space domain.
In one aspect, a method for reconstructing a magnetic resonance image is provided, comprising:
constructing a training set; wherein the training set comprises: a fully sampled magnetic resonance image of a target modality and an undersampled magnetic resonance image of the target modality, both with a first contrast, and a fully sampled magnetic resonance image of a reference modality with a second contrast;
training an image reconstruction model with the training set, taking the undersampled magnetic resonance image of the target modality and the fully sampled magnetic resonance image of the reference modality as model inputs and the fully sampled magnetic resonance image of the target modality as the model output, to obtain a trained image reconstruction model;
and acquiring a magnetic resonance image to be reconstructed, inputting it into the trained image reconstruction model, and outputting the reconstructed image.
In another aspect, a reconstruction system for magnetic resonance images is provided, comprising:
a training set construction module configured to construct a training set; wherein the training set comprises: a fully sampled magnetic resonance image of a target modality and an undersampled magnetic resonance image of the target modality, both with a first contrast, and a fully sampled magnetic resonance image of a reference modality with a second contrast;
a training module configured to train an image reconstruction model with the training set, taking the undersampled magnetic resonance image of the target modality and the fully sampled magnetic resonance image of the reference modality as model inputs and the fully sampled magnetic resonance image of the target modality as the model output, to obtain a trained image reconstruction model;
a reconstruction module configured to acquire a magnetic resonance image to be reconstructed, input it into the trained image reconstruction model, and output the reconstructed image.
In still another aspect, there is provided an electronic device including:
a memory for non-transitory storage of computer readable instructions; and
a processor for executing the computer-readable instructions,
wherein the computer readable instructions, when executed by the processor, perform the method of the first aspect described above.
In yet another aspect, there is also provided a non-transitory storage medium storing computer readable instructions, wherein the method of the first aspect is performed when the instructions are executed by a computer.
In a further aspect, there is also provided a computer program product comprising a computer program for implementing the method of the first aspect described above when run on one or more processors.
One of the above technical solutions has the following advantages or beneficial effects:
1. Addressing the loss of reconstruction detail as the acceleration factor increases: the invention achieves fast MRI imaging with a deep learning method that uses attention mechanisms and convolution in parallel to model pixel information over different ranges of the image, recovering more image detail at higher acceleration factors and under multiple sampling patterns.
2. Overcoming the limitation of restoring the image only in the image domain: the invention alternately restores images in the image domain and the k-space domain over several cycles, and sets loss functions for both image-domain and k-space reconstruction.
3. Higher acceleration factors cause severe aliasing and artifacts in the input image, which increases the difficulty of reconstruction. Considering that multi-modality MRI images of the same subject share similar structural information, the invention uses the modality with the shorter sampling time as a reference image to assist reconstruction of the modality with the longer sampling time.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
FIG. 1 is a diagram of an undersampling operation in accordance with a first embodiment of the present invention;
FIG. 2 is a block diagram of a model according to a first embodiment of the present invention;
FIG. 3 is a block diagram of an image domain reconstruction module according to a first embodiment of the present invention;
FIG. 4 is a block diagram of a multi-scale modeling according to a first embodiment of the present invention;
FIGS. 5 (a) -5 (h) are visualization results of 10-fold acceleration of radial sampling in accordance with the first embodiment of the present invention;
FIGS. 6 (a) -6 (h) are visualization results of 20-fold acceleration of radial sampling in accordance with the first embodiment of the present invention;
FIGS. 7 (a) -7 (h) are visualization results of 8-fold acceleration of random sampling in accordance with the first embodiment of the present invention;
FIG. 8 is a data consistency layer in image domain reconstruction in accordance with the present invention;
FIG. 9 is a data consistency layer in k-space domain reconstruction in accordance with the present invention;
FIGS. 10 (a) and 10 (b) are modes of action of the anchor attention mechanism;
FIGS. 11 (a) and 11 (b) are modes of action of a moving window self-attention mechanism;
FIG. 12 is a calculation process of the anchor attention mechanism;
fig. 13 is a calculation process of the moving window self-attention mechanism.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
Example 1
The embodiment provides a reconstruction method of a magnetic resonance image;
as shown in fig. 1, a method for reconstructing a magnetic resonance image includes:
s101: constructing a training set; wherein the training set comprises: a fully sampled magnetic resonance image of a target modality and an undersampled magnetic resonance image of the target modality, both with a first contrast, and a fully sampled magnetic resonance image of a reference modality with a second contrast;
s102: training an image reconstruction model with the training set, taking the undersampled magnetic resonance image of the target modality and the fully sampled magnetic resonance image of the reference modality as model inputs and the fully sampled magnetic resonance image of the target modality as the model output, to obtain a trained image reconstruction model;
s103: acquiring a magnetic resonance image to be reconstructed, inputting it into the trained image reconstruction model, and outputting the reconstructed image.
It should be understood that the target modality refers to the first contrast, and the reference modality refers to the second contrast.
Further, the constructing the training set includes:
performing a Fourier transform on the fully sampled magnetic resonance image of the target modality to obtain k-space data; multiplying the k-space data element-wise with a mask to obtain a product, adding noise to the product to obtain an intermediate value, and performing an inverse Fourier transform on the intermediate value to obtain the undersampled magnetic resonance image of the target modality.
It should be appreciated that k-space data is a concept from medical imaging and magnetic resonance imaging (MRI); it may also be called frequency-domain data, and is the result of Fourier-transforming the image-domain data. The mask is multiplied with the k-space of the fully sampled magnetic resonance image to obtain an undersampled image, which simulates an undersampled image in a real acquisition.
It will be appreciated that, in order to simulate the real situation in which the model reconstructs an MRI image from a smaller number of samples, the invention first requires an undersampled image. In the undersampling operation, a Fourier transform F is applied to the image, the resulting k-space data are multiplied element-wise by a mask in k-space, and an inverse Fourier transform F^-1 is then applied to return to the image domain, yielding an undersampled image under a given sampling pattern:

X_u = F^-1(M ⊙ F(Y_mri) + ε),

where Y_mri represents the fully sampled magnetic resonance image, F represents the Fourier transform, F^-1 the inverse Fourier transform, ⊙ pixel-wise multiplication, M the mask, and ε the noise. Fig. 1 shows the undersampling process.
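The undersampling operation above can be sketched in NumPy as follows; the function name and the centred-FFT (fftshift) convention are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def undersample(y_full, mask, noise_std=0.0, rng=None):
    """Simulate an undersampled acquisition: X_u = F^-1(M * F(Y) + eps)."""
    rng = rng or np.random.default_rng(0)
    k = np.fft.fftshift(np.fft.fft2(y_full))      # Fourier transform to k-space
    k_us = k * mask                               # element-wise mask M
    if noise_std > 0:                             # optional acquisition noise eps
        k_us = k_us + noise_std * (rng.standard_normal(k.shape)
                                   + 1j * rng.standard_normal(k.shape))
    return np.fft.ifft2(np.fft.ifftshift(k_us))   # back to the image domain
```

With an all-ones mask and no noise this returns the original image, which is a quick sanity check of the transform pair.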
Further, the step S102 of training the image reconstruction model with the training set specifically comprises:
S102-1: concatenating the undersampled magnetic resonance image tensor of the target modality and the fully sampled magnetic resonance image tensor of the reference modality along the channel dimension, and using the concatenated result as the model input;
S102-2: feeding the input into the image domain reconstruction module to obtain the image-domain reconstruction result;
S102-3: feeding the image-domain reconstruction result into the Fourier transform layer to obtain frequency-domain data;
S102-4: feeding the frequency-domain data and the mask into the first data consistency layer to maintain data consistency;
S102-5: feeding the output of the first data consistency layer, together with the frequency-domain representation of the fully sampled magnetic resonance image of the reference modality, into the K-space reconstruction module to obtain the frequency-domain reconstruction result; the frequency-domain representation is obtained by Fourier-transforming the fully sampled magnetic resonance image of the reference modality;
S102-6: inputting the frequency-domain reconstruction result into the second data consistency layer, and feeding the output of the second data consistency layer into the inverse Fourier transform layer to obtain the reconstructed image;
S102-7: taking the reconstructed image as the input value, repeating steps S102-2 to S102-6 twice more, and outputting the final reconstructed image as the result.
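The cycle S102-2 through S102-7 can be sketched as below. Here `image_net` and `kspace_net` are hypothetical callables standing in for the learned modules, and the data consistency rule (1 - M) * prediction + M * measurement follows the DC-layer description later in the text:

```python
import numpy as np

def dc(k_pred, k_meas, mask):
    """Data consistency: keep the measured k-space samples,
    use the network prediction only at unsampled locations."""
    return k_pred * (1 - mask) + k_meas * mask

def recon_cycle(x_in, x_ref, k_meas, mask, image_net, kspace_net):
    """One cascaded block (steps S102-2 to S102-6)."""
    x_img = image_net(x_in, x_ref)                   # image-domain reconstruction
    k_pred = np.fft.fft2(x_img)                      # Fourier transform layer
    k_pred = dc(k_pred, k_meas, mask)                # first data consistency layer
    k_rec = kspace_net(k_pred, np.fft.fft2(x_ref))   # k-space reconstruction
    k_rec = dc(k_rec, k_meas, mask)                  # second data consistency layer
    return np.fft.ifft2(k_rec)                       # inverse Fourier transform layer

def reconstruct(x_us, x_ref, k_meas, mask, image_net, kspace_net, n_iter=3):
    """Repeat the cycle three times (S102-7)."""
    x = x_us
    for _ in range(n_iter):
        x = recon_cycle(x, x_ref, k_meas, mask, image_net, kspace_net)
    return x
```

With identity stand-ins for the two networks and a full mask, the cascade leaves a fully sampled image unchanged, which verifies the plumbing.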
Further, in step S102, the image reconstruction model is trained with the training set using an L1 loss function during the training process.
A multiscale, two-domain network is trained with paired undersampled and fully sampled images of the target modality for undersampled MRI reconstruction. The network frame diagram is shown in fig. 2: the image module denotes the image-domain reconstruction module, the FT and IFT modules denote the Fourier transform layer and the inverse Fourier transform layer, the DC layer denotes the data consistency layer, and k-space denotes the MRI reconstruction module in the frequency domain.
detailed steps of training:
(201) Tensor of undersampled images of a target modalityAnd the full sampled image tensor of the reference modality>After the channel dimensions are connected, the channel dimensions are sent to an image domain reconstruction module to obtain a reconstruction result of the image domain;
(202) Sending the image domain reconstruction result into a Fourier data transformation layer to be converted into frequency domain data;
(203) Maintaining data consistency of the frequency domain data and the mask through a data consistency layer (DC layer);
(204) Representation of the output of the DC layer and the full sampled image of the reference modality in the frequency domainSending the MRI signals to a k-space reconstruction module to reconstruct an MRI frequency domain;
(205) The reconstruction result of the frequency domain sequentially passes through DC (Data Consistency Layer) layers and an inverse Fourier transform layer to obtain a reconstructed image; the loop (201) to (205) is executed 3 times to obtain the final reconstruction result. The training adopts an L1 loss function, and the whole network is realized by a Pytorch framework.
Further, as shown in fig. 2, the image reconstruction model includes:
the first reconstruction module, the second reconstruction module and the third reconstruction module are sequentially connected;
the internal structures of the first reconstruction module, the second reconstruction module and the third reconstruction module are the same;
the first reconstruction module comprises: an image domain reconstruction module, a Fourier transform layer, a first data consistency layer, a K-space reconstruction module, a second data consistency layer, and an inverse Fourier transform layer, connected in sequence;
the inputs of the first data consistency layer and the second data consistency layer also receive the mask;
the input of the image domain reconstruction module also receives the feature representation of the fully sampled magnetic resonance image of the reference modality;
the input of the K-space reconstruction module also receives the frequency-domain representation of the fully sampled magnetic resonance image of the reference modality.
Further, as shown in fig. 8, the first data consistency layer comprises:
subtracting the mask from the all-ones matrix to obtain the processed first mask;
applying a Fourier transform to the output of the image domain reconstruction module to obtain a first intermediate result;
multiplying the first intermediate result by the processed first mask to obtain a second intermediate result;
multiplying the fully sampled K-space representation of the reference modality by the mask to obtain the processed second mask;
adding the processed second mask to the second intermediate result to obtain a third intermediate result;
and applying an inverse Fourier transform to the third intermediate result to obtain the first data consistency result.
Further, as shown in fig. 9, the second data consistency layer comprises:
subtracting the mask from the all-ones matrix to obtain the processed third mask;
taking the output of the K-space reconstruction module as a fourth intermediate result;
multiplying the fourth intermediate result by the processed third mask to obtain a fifth intermediate result;
multiplying the fully sampled K-space representation of the reference modality by the mask to obtain the processed fourth mask;
adding the processed fourth mask to the fifth intermediate result to obtain a sixth intermediate result;
and applying an inverse Fourier transform to the sixth intermediate result to obtain the second data consistency result.
It should be appreciated that the first data consistency layer and the second data consistency layer are both used to ensure that images generated from the acquired K-space data are consistent with the actual measurement data. The data consistency layer is a key component in the image reconstruction algorithm, and the main objective is to minimize the difference between the image and the original data, thereby improving the image quality and accuracy.
The first data consistency layer and the second data consistency layer differ slightly: the first must Fourier-transform the image to obtain the corresponding k-space data, whereas the second needs no Fourier transform because its input reconstruction result is already k-space data.
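A minimal sketch of the two layers, assuming the "identity matrix" in the description denotes the all-ones matrix, so the processed mask is (1 - M); `k_sampled` is a placeholder name for the fully sampled k-space representation fed to the layer:

```python
import numpy as np

def dc_image_domain(x_recon, k_sampled, mask):
    """First DC layer: the image-domain output is Fourier-transformed
    before consistency with the sampled k-space data is enforced."""
    k_pred = np.fft.fft2(x_recon)                  # image -> k-space
    k_dc = k_pred * (1 - mask) + k_sampled * mask  # keep sampled entries
    return np.fft.ifft2(k_dc)                      # first consistency result

def dc_kspace_domain(k_recon, k_sampled, mask):
    """Second DC layer: the input is already k-space data,
    so no Fourier transform is needed before the consistency step."""
    k_dc = k_recon * (1 - mask) + k_sampled * mask
    return np.fft.ifft2(k_dc)                      # second consistency result
```

With a full mask, both layers simply return the fully sampled image regardless of the prediction, which is the intended limiting behaviour of data consistency.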
Further, as shown in fig. 3, the internal structures of the image domain reconstruction module and the K-space reconstruction module are the same, and the image domain reconstruction module includes:
the first convolution layer, the first modeling stage, the second modeling stage, the third modeling stage, the fourth modeling stage, the fifth modeling stage and the sixth modeling stage, the first adder and the second convolution layer are sequentially connected;
the output end of the first convolution layer is connected with the input end of the first adder.
Wherein the internal structures of the first modeling stage, the second modeling stage, the third modeling stage, the fourth modeling stage, the fifth modeling stage and the sixth modeling stage are the same; the first modeling stage includes:
the first multi-scale modeling layer, the second multi-scale modeling layer, the third multi-scale modeling layer, the fourth multi-scale modeling layer and the second adder are sequentially connected;
the input end of the second adder is connected with the input end of the first multi-scale modeling layer.
Wherein, as shown in fig. 4, the first, second, third and fourth multi-scale modeling layers have the same internal structure, and the first multi-scale modeling layer comprises:
an input port connected to the input of the channel split module; the outputs of the channel split module are connected to the input of the anchor attention layer and the input of the moving-window self-attention layer, respectively; the outputs of both attention layers are connected to the input of the channel concatenation module, and the output of the channel concatenation module is connected to the input of a 3x3 convolution layer;
the input port is also connected to the input of the channel attention module, whose output is connected to the input of the 3x3 convolution layer;
the input port is also connected directly to the input of the 3x3 convolution layer; the output of the 3x3 convolution layer is connected to the input of a fully connected layer, and the output of the fully connected layer serves as the output of the first multi-scale modeling layer.
It should be appreciated that the anchor attention layer, the moving-window self-attention layer and the channel attention module each model a different range of the image, together forming multi-scale modeling. Channel attention has the smallest (local) modeling range, the moving-window self-attention mechanism models a slightly larger range, and the anchor attention mechanism models image features over a larger range. This multi-scale modeling of the image yields features of different granularity.
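One plausible reading of the wiring in Fig. 4 can be sketched as below; the exact fusion of the three branches is ambiguous in the translation, so the additive fusion and the callable stand-ins (`anchor_attn`, `window_attn`, `channel_attn`, `conv3x3`, `fc`) are assumptions:

```python
import numpy as np

def multiscale_layer(x, anchor_attn, window_attn, channel_attn, conv3x3, fc):
    """Hypothetical wiring of one multi-scale modeling layer: the input
    channels are split in half, the halves pass through anchor attention
    and moving-window self-attention in parallel, the results are
    concatenated and fused additively with the channel-attention branch,
    then passed through a 3x3 convolution and a fully connected layer.
    x: (C, H, W); the five callables stand in for learned sub-modules."""
    c = x.shape[0] // 2
    a = anchor_attn(x[:c])                    # long-range branch
    w = window_attn(x[c:])                    # window-local branch
    fused = np.concatenate([a, w], axis=0)    # channel concatenation
    out = conv3x3(fused + channel_attn(x))    # assumed additive fusion
    return fc(out)
```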
Further, the channel split module equally divides the channels of the input features into two subsets for parallel processing.
Further, as shown in fig. 12, the anchor attention layer (Anchor-Attention) performs long-range modeling of image features over a larger range of the image. Scope of action of the anchor attention: the image is divided into several horizontal rectangular stripes, and the anchor attention mechanism is applied within each rectangle, as shown in figs. 10 (a) and 10 (b). The image is divided into horizontal stripes because image content is similar along the horizontal direction; applying anchor attention within horizontal rectangles extracts the image features.
The processing of the anchor attention layer (Anchor-Attention) can be expressed by the following formulas:

A = W_A · X,
Z = M_d · V,
Y = M_e · Z = M_e · (M_d · V),

where A denotes the anchor matrix, and A^T, the transpose of the anchor matrix, serves as a bridge linking Q and K^T; W_A denotes the projection matrix that linearly maps the feature matrix X to A; M_d and M_e are weight score maps, intermediate results of the calculation.
The embedded feature Q (query) is obtained by linear mapping with the matrix W_Q and represents the information the model attends to or queries. The embedded feature K represents the information the model compares against, and K^T denotes the transpose of the embedded feature matrix K. The embedded feature V stores the actual information or features.
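A sketch of anchored attention consistent with Y = M_e(M_d · V). The patent does not specify how the weight score maps M_d and M_e are computed, so the softmax normalization and the token-dimension anchor projection `W_a` (mapping N tokens to M << N anchors) are assumptions:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def anchor_attention(X, W_q, W_k, W_v, W_a):
    """X: (N, d) tokens of one horizontal stripe; W_a: (M, N) projects the
    N tokens to M << N anchors, so the N x N attention map is replaced by
    an (N, M) and an (M, N) map, with A^T bridging Q and K^T."""
    d = X.shape[1]
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    A = W_a @ X                               # anchor matrix
    M_d = softmax(A @ K.T / np.sqrt(d))       # anchors attend to keys
    M_e = softmax(Q @ A.T / np.sqrt(d))       # queries attend to anchors
    return M_e @ (M_d @ V)                    # Y = M_e (M_d V)
```

The point of the anchors is complexity: two N x M products replace one N x N product, which matters when the stripe contains many tokens.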
Further, as shown in fig. 13, the moving-window self-attention layer (SW-MSA, Shifted Window based Self-Attention) also uses an attention mechanism for long-range modeling of image features, but its scope of action is not the whole image: the image is divided into several windows, the attention mechanism is applied within each window, attention weights are computed within and between windows, and image features are extracted. As shown in figs. 11 (a) and 11 (b), the moving-window self-attention mechanism extracts features within the windows of the image.
The self-attention mechanism of moving windows will divide the input image into uniform windows, each of which can be seen as a sequence. A self-attention mechanism is then used within each window to determine the dependencies between the different locations within the window. This may help the model better capture contextual information in the image, especially when the image is very large or contains complex structures.
By introducing moving window attention, computational complexity can be significantly reduced because the model only needs to consider correlations between locations within each window, rather than global correlations. This helps to improve the efficiency of the image reconstruction model, especially when processing high resolution images or large images.
Further, the processing of the moving-window self-attention layer (SW-Attention) can be expressed by the following formulas:

Q = W_Q · X,
K = W_K · X,
V = W_V · X,
Y = Softmax(Q · K^T) · V,

where X denotes the input feature, Y denotes the feature processed by the moving-window attention branch, and W_Q, W_K and W_V denote the respective projection matrices.
The embedded feature Q (query) is obtained by linear mapping with the matrix W_Q and represents the information the model attends to or queries. In self-attention, the query vector of each position measures the relevance or similarity between other positions and the current position; each query vector is multiplied by the key vectors of other positions to obtain an attention distribution that determines the importance of the different positions.
The embedded feature K (key) is obtained by linear mapping with the matrix W_K and represents the information the model compares against. In self-attention, the key vector of each position measures the association between other positions and the current position; each key vector is multiplied by the query vectors of other positions to obtain the attention distribution.
The embedded feature V (value) is obtained by linear mapping with the matrix W_V and stores the actual information or features. In self-attention, the value vector of each position contains information related to that position; once the attention distributions are determined, the value vector of each position is weighted by them.
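The windowed attention described above can be sketched on a 1-D token sequence as below; the 2-D window partition and the half-window shift between successive layers are omitted for brevity, and the function name is an assumption:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def window_self_attention(X, W_q, W_k, W_v, win=4):
    """Partition the (N, d) token sequence into non-overlapping windows of
    size `win` and run scaled dot-product self-attention inside each window,
    so only intra-window correlations are computed (not global N x N)."""
    d = X.shape[1]
    out = np.empty_like(X)
    for s in range(0, X.shape[0], win):
        Xw = X[s:s + win]                        # one window as a sequence
        Q, K, V = Xw @ W_q, Xw @ W_k, Xw @ W_v
        out[s:s + win] = softmax(Q @ K.T / np.sqrt(d)) @ V
    return out
```

Restricting attention to windows is what gives the complexity reduction mentioned above: cost scales with the window size rather than the full image size.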
Further, the channel connection module concatenates the parallel processing results along the channel dimension;
further, the channel attention module (Channel-Attention) adjusts the importance of the feature maps of different channels through a convolution layer, increasing the model's attention to specific features and thereby improving feature extraction and model performance.
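A minimal sketch of per-channel reweighting, for illustration only: the patent specifies that channel importance is adjusted through a convolution layer, so the squeeze-and-excite shape used here (global average pooling followed by two small linear maps and a sigmoid gate) is an assumed, common realization, not the claimed one:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, W1, W2):
    """Reweight the channels of a (C, H, W) feature map.
    Squeeze: average each channel to one descriptor; excite: two
    linear layers with a ReLU; scale: per-channel sigmoid gate."""
    s = feat.mean(axis=(1, 2))                  # (C,) channel descriptors
    g = sigmoid(W2 @ np.maximum(W1 @ s, 0.0))   # (C,) gates in (0, 1)
    return feat * g[:, None, None]              # rescale each channel map

rng = np.random.default_rng(1)
C, H, W = 8, 4, 4                               # hypothetical sizes
feat = rng.standard_normal((C, H, W))
W1 = rng.standard_normal((C // 2, C))           # channel reduction
W2 = rng.standard_normal((C, C // 2))           # channel expansion
out = channel_attention(feat, W1, W2)
print(out.shape)  # (8, 4, 4)
```

Since every gate lies in (0, 1), each output channel is a damped copy of its input channel, which is exactly the "importance adjustment" effect described above.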
Further, step S103, acquiring a magnetic resonance image to be reconstructed, inputting it into the trained image reconstruction model, and outputting the reconstructed image, specifically comprises the following steps:
s103-1: taking a magnetic resonance image to be reconstructed and a fully sampled magnetic resonance image of a reference mode as input values;
s103-2: sending the magnetic resonance image to be reconstructed and the full-sampling magnetic resonance image of the reference mode into an image domain reconstruction module to obtain a reconstruction result of an image domain;
s103-3: sending the reconstruction result of the image domain into a Fourier transform layer to obtain frequency domain data;
s103-4: the frequency domain data and the mask are simultaneously sent into a first data consistency layer, so that the consistency of the data is maintained;
s103-5: sending the output value of the first data consistency layer and the frequency domain representation of the full-sampling magnetic resonance image of the reference mode into a K space reconstruction module to obtain a frequency domain reconstruction result;
s103-6: inputting the frequency domain reconstruction result into a second data consistency layer, and inputting the output result of the second data consistency layer into an inverse Fourier transform layer to obtain a reconstructed image;
s103-7: taking the reconstructed image as the input value, repeating steps S103-2 to S103-6 twice, and outputting the obtained reconstructed image as the result.
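The steps above can be sketched end to end as follows. This is an illustrative NumPy rendering only: the learned image-domain and K-space reconstruction modules are replaced by identity placeholders, the sampling mask and image size are hypothetical, and the data consistency layer is assumed to keep the measured k-space samples while taking the prediction elsewhere (consistent with the mask arithmetic described later for the data consistency layers):

```python
import numpy as np

def data_consistency(k_pred, k_sampled, mask):
    """Keep measured k-space samples, use the prediction elsewhere.
    mask is 1 at sampled locations and 0 at unsampled ones."""
    return mask * k_sampled + (1 - mask) * k_pred

def reconstruct(und_img, k_sampled, mask, img_net, ksp_net, n_iters=3):
    """Sketch of the cascaded image-domain / K-space reconstruction.
    img_net and ksp_net stand in for the learned modules."""
    x = und_img
    for _ in range(n_iters):                        # S103-7: repeat the cascade
        x = img_net(x)                              # S103-2: image-domain module
        k = np.fft.fft2(x)                          # S103-3: Fourier transform layer
        k = data_consistency(k, k_sampled, mask)    # S103-4: first data consistency layer
        k = ksp_net(k)                              # S103-5: K-space module
        k = data_consistency(k, k_sampled, mask)    # S103-6: second data consistency layer
        x = np.fft.ifft2(k)                         #          inverse Fourier transform layer
    return x

rng = np.random.default_rng(2)
full = rng.standard_normal((32, 32))                # stand-in fully sampled image
mask = (rng.random((32, 32)) < 0.3).astype(float)   # hypothetical sampling mask
k_sampled = mask * np.fft.fft2(full)                # measured k-space data
und_img = np.fft.ifft2(k_sampled)                   # undersampled (aliased) input image
identity = lambda t: t                              # placeholder "networks"
rec = reconstruct(und_img, k_sampled, mask, identity, identity)
print(rec.shape)  # (32, 32)
```

With identity placeholders the output still satisfies data consistency exactly: at every sampled k-space location the reconstruction reproduces the measured value, which is the invariant both data consistency layers are designed to enforce.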
An image domain reconstruction module and a K-space reconstruction module: the two modules are identical in structure and differ only in the type of data processed, one handling spatial-domain data and the other frequency-domain data. Taking the image domain reconstruction module as an example, as shown in fig. 3, the data first passes through a 3x3 convolution layer to extract features, then through six Modeling Stages for deep feature modeling, and finally a convolution layer completes the reconstruction. Each Modeling Stage is internally composed of four Multi-Scale Modeling (MSM) modules, whose structure is shown in fig. 4. Within the multi-scale modeling module, the features are modeled by three parallel attention branches, after which a 3x3 convolution layer and a fully connected layer at the tail of the module produce the multi-scale modeling result.
To evaluate the performance of the scheme, several common evaluation metrics are used. These include the peak signal-to-noise ratio (PSNR), which indicates how faithfully the reconstructed MRI image matches the reference, and the structural similarity index (SSIM), which measures the structural similarity between images. Together, these metrics allow a comprehensive evaluation of the method's performance. Figs. 5(a) to 5(h), 6(a) to 6(h) and 7(a) to 7(h) respectively show the reconstruction results at 10x acceleration in the radial sampling mode, at 20x acceleration in the radial sampling mode, and at 8x acceleration in the random sampling mode, together with the corresponding error maps. The darker the error map, the smaller the error between the predicted image and the gold-standard image, and thus the closer the reconstructed image of the invention is to the gold standard.
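For reference, a minimal sketch of the PSNR metric and the absolute-error map used in the figures above; the image sizes and noise level are hypothetical, and in practice SSIM is usually computed with an existing library implementation rather than by hand:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means the
    reconstruction is closer to the gold-standard image."""
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(data_range ** 2 / mse)

def error_map(ref, test):
    """Absolute error map: darker (smaller) values mean smaller error."""
    return np.abs(ref - test)

rng = np.random.default_rng(3)
gold = rng.random((64, 64))                                   # stand-in gold standard
recon = np.clip(gold + 0.01 * rng.standard_normal((64, 64)), 0.0, 1.0)
print(psnr(gold, recon) > 30.0)   # small noise gives a high PSNR
print(error_map(gold, recon).shape)
```

The error map is what the figures visualize: identical images give an all-zero (fully dark) map, and PSNR summarizes the same residual as a single number in decibels.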
Table 1 presents the performance metrics of the reconstruction results of the present invention and other advanced methods under various sampling modes and acceleration factors. The invention clearly achieves the best evaluation results in all scenarios.
Table 1 evaluation index of different methods in different sampling modes
The present invention describes a specific network architecture that combines Transformers with local convolutions to achieve fast MRI; other deep learning architectures, such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs), may also be tried.
The invention uses the L1 function as the loss function; other loss functions, such as the mean squared error (MSE, i.e. L2) loss, may also be considered to suit different reconstruction requirements.
The invention can also be applied to other multi-modal imaging tasks, such as PET-MRI or CT-MRI, in addition to multi-contrast MRI. The method can be adjusted and optimized according to the specific application scenario.
The invention can explore other alternative optimization strategies including different iteration times, different data consistency constraint modes and the like so as to improve the image reconstruction performance.
The invention may attempt different multi-scale strategies, including different scale numbers and resolution settings, to optimize the reconstruction results of the image.
Example two
The embodiment provides a reconstruction system of magnetic resonance images;
a reconstruction system for magnetic resonance images, comprising:
a training set construction module configured to: constructing a training set; wherein the training set comprises: the contrast of the full-sampling magnetic resonance image of the target mode and the undersampled magnetic resonance image of the target mode are both the first contrast, and the contrast of the full-sampling magnetic resonance image of the reference mode is the second contrast;
a training module configured to: training the image reconstruction model by adopting a training set, taking both the undersampled magnetic resonance image of the target mode and the fully sampled magnetic resonance image of the reference mode as input values of the model, and taking the fully sampled magnetic resonance image of the target mode as output values of the model to obtain a trained image reconstruction model;
a reconstruction module configured to: and acquiring a magnetic resonance image to be reconstructed, inputting the magnetic resonance image to be reconstructed into a trained image reconstruction model, and outputting the reconstructed image.
It should be noted that the training set construction module, the training module and the reconstruction module correspond to steps S101 to S103 of the first embodiment; the modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the disclosure of the first embodiment. It should also be noted that the above modules may be implemented as part of a system in a computer system, for example as a set of computer-executable instructions.
The foregoing embodiments are described in a progressive manner; for details of one embodiment, reference may be made to the related description of another embodiment.
The proposed system may be implemented in other ways. For example, the system embodiments described above are merely illustrative: the division into the above modules is merely a division by logical function, and other divisions are possible in actual implementation; for instance, multiple modules may be combined or integrated into another system, or some features may be omitted or not performed.
Example III
The embodiment also provides an electronic device, including: one or more processors, one or more memories, and one or more computer programs; wherein the processor is coupled to the memory, the one or more computer programs being stored in the memory, the processor executing the one or more computer programs stored in the memory when the electronic device is running, to cause the electronic device to perform the method of the first embodiment.
It should be understood that in this embodiment the processor may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may include read only memory and random access memory and provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory. For example, the memory may also store information of the device type.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software.
The method of the first embodiment may be embodied directly as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software modules may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory; the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method. A detailed description is omitted here to avoid repetition.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Example IV
The present embodiment also provides a computer-readable storage medium storing computer instructions that, when executed by a processor, perform the method of embodiment one.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. The reconstruction method of the magnetic resonance image is characterized by comprising the following steps:
constructing a training set; wherein the training set comprises: the contrast of the full-sampling magnetic resonance image of the target mode and the undersampled magnetic resonance image of the target mode are both the first contrast, and the contrast of the full-sampling magnetic resonance image of the reference mode is the second contrast;
training the image reconstruction model by adopting a training set, taking both the undersampled magnetic resonance image of the target mode and the fully sampled magnetic resonance image of the reference mode as input values of the model, and taking the fully sampled magnetic resonance image of the target mode as output values of the model to obtain a trained image reconstruction model;
training the image reconstruction model by adopting a training set, which specifically comprises the following steps:
2-1: the undersampled magnetic resonance image tensor of the target mode and the fully sampled magnetic resonance image tensor of the reference mode are connected in series in the channel dimension, and the serial result is used as an input value of the model;
2-2: sending the input value into an image domain reconstruction module to obtain a reconstruction result of the image domain;
2-3: sending the reconstruction result of the image domain into a Fourier transform layer to obtain frequency domain data;
2-4: the frequency domain data and the mask are simultaneously sent into a first data consistency layer, so that the consistency of the data is maintained;
2-5: sending the output value of the data consistency layer, together with the frequency domain representation of the full-sampling magnetic resonance image of the reference mode, into the K-space reconstruction module to obtain a frequency domain reconstruction result; the frequency domain representation is obtained by performing a Fourier transform on the full-sampling magnetic resonance image of the reference mode;
2-6: inputting the frequency domain reconstruction result into a second data consistency layer, and inputting the output result of the second data consistency layer into an inverse Fourier transform layer to obtain a reconstructed image;
2-7: taking the reconstructed image as the input value, repeatedly executing steps 2-2 to 2-6 twice, and outputting the obtained reconstructed image as the result;
and acquiring a magnetic resonance image to be reconstructed, inputting the magnetic resonance image to be reconstructed into a trained image reconstruction model, and outputting the reconstructed image.
2. The method for reconstructing a magnetic resonance image according to claim 1, wherein said constructing a training set comprises:
performing Fourier transform operation on the fully sampled magnetic resonance image of the target mode to obtain K space data; multiplying the K space data with the mask element by element to obtain a product, adding noise to the product to obtain an intermediate value, and performing inverse Fourier transformation on the intermediate value to obtain an undersampled magnetic resonance image of the target mode.
3. The method of reconstructing a magnetic resonance image as set forth in claim 1, wherein the image reconstruction model includes:
the first reconstruction module, the second reconstruction module and the third reconstruction module are sequentially connected;
the internal structures of the first reconstruction module, the second reconstruction module and the third reconstruction module are the same;
the first reconstruction module includes: the system comprises an image domain reconstruction module, a Fourier transform layer, a first data consistency layer, a K space reconstruction module, a second data consistency layer and an inverse Fourier transform layer which are connected in sequence;
the input ends of the first data consistency layer and the second data consistency layer are also used for inputting masks;
the input end of the image domain reconstruction module is also used for inputting the characteristic representation of the fully sampled magnetic resonance image of the reference mode;
the input of the K-space reconstruction module is also for inputting a frequency domain representation of the fully sampled magnetic resonance image of the reference modality.
4. A method of reconstructing a magnetic resonance image as set forth in claim 3, wherein the first data consistency layer comprises:
subtracting the identity matrix from the mask to obtain a processed first mask;
carrying out Fourier transform on the output value of the image domain reconstruction module to obtain a first intermediate result;
multiplying the first intermediate result by the processed first mask to obtain a second intermediate result;
multiplying the K space representation of the full sample of the reference modality with a mask to obtain a processed second mask;
adding the processed second mask to the second intermediate result to obtain a third intermediate result;
performing inverse Fourier transform on the third intermediate result to obtain a first data consistency result;
or,
the second data consistency layer comprises:
subtracting the identity matrix from the mask to obtain a processed third mask;
taking the output value of the K space reconstruction module as a fourth intermediate result;
multiplying the fourth intermediate result by the processed third mask to obtain a fifth intermediate result;
multiplying the K space representation of the full sample of the reference mode with the mask to obtain a processed fourth mask;
adding the processed fourth mask and the fifth intermediate result to obtain a sixth intermediate result;
and carrying out inverse Fourier transform on the sixth intermediate result to obtain a second data consistency result.
5. A method of reconstructing a magnetic resonance image as set forth in claim 3, wherein the internal structures of the image domain reconstruction module and the K-space reconstruction module are identical, the image domain reconstruction module comprising:
the first convolution layer, the first modeling stage, the second modeling stage, the third modeling stage, the fourth modeling stage, the fifth modeling stage and the sixth modeling stage, the first adder and the second convolution layer are sequentially connected;
the output end of the first convolution layer is connected with the input end of the first adder;
wherein the internal structures of the first modeling stage, the second modeling stage, the third modeling stage, the fourth modeling stage, the fifth modeling stage and the sixth modeling stage are the same; the first modeling stage includes:
the first multi-scale modeling layer, the second multi-scale modeling layer, the third multi-scale modeling layer, the fourth multi-scale modeling layer and the second adder are sequentially connected;
the input end of the second adder is connected with the input end of the first multi-scale modeling layer;
the internal structures of the first multi-scale modeling layer, the second multi-scale modeling layer, the third multi-scale modeling layer and the fourth multi-scale modeling layer are the same, and the first multi-scale modeling layer comprises:
the input port is connected with the input end of the channel equipartition module; the output end of the channel equipartition module is respectively connected with the input end of the anchor attention mechanism layer and the input end of the self-attention mechanism layer based on the moving window, the output end of the anchor attention mechanism layer and the output end of the self-attention mechanism layer based on the moving window are both connected with the input end of the channel connection module, and the output end of the channel connection module is connected with the input end of the convolution layer of 3x 3;
the input port is also connected with the input end of the channel attention module, and the output end of the channel attention module is connected with the input end of the 3x3 convolution layer;
the input port is also connected with the input end of the convolution layer of 3x 3; the output end of the 3x3 convolution layer is connected with the input end of the full connection layer; the output of the fully connected layer serves as the output of the first multi-scale modeling layer.
6. The method for reconstructing a magnetic resonance image as set forth in claim 5, wherein the anchor attention mechanism layer divides the image into a plurality of horizontal stripes and applies an attention mechanism within each rectangular region; the self-attention mechanism layer based on the moving window divides the image into a plurality of windows and applies an attention mechanism inside each window; and the channel attention module adjusts the feature map weights of different channels through a convolution layer.
7. A reconstruction system for magnetic resonance images for implementing the method according to any one of claims 1-6, comprising:
a training set construction module configured to: constructing a training set; wherein the training set comprises: the contrast of the full-sampling magnetic resonance image of the target mode and the undersampled magnetic resonance image of the target mode are both the first contrast, and the contrast of the full-sampling magnetic resonance image of the reference mode is the second contrast;
a training module configured to: training the image reconstruction model by adopting a training set, taking both the undersampled magnetic resonance image of the target mode and the fully sampled magnetic resonance image of the reference mode as input values of the model, and taking the fully sampled magnetic resonance image of the target mode as output values of the model to obtain a trained image reconstruction model;
a reconstruction module configured to: and acquiring a magnetic resonance image to be reconstructed, inputting the magnetic resonance image to be reconstructed into a trained image reconstruction model, and outputting the reconstructed image.
8. An electronic device, comprising:
a memory for non-transitory storage of computer readable instructions; and
a processor for executing the computer-readable instructions,
wherein the computer readable instructions, when executed by the processor, perform the method of any of the preceding claims 1-6.
9. A storage medium, characterized by non-transitory storage of computer readable instructions, wherein the instructions of the method of any of claims 1-6 are performed when the non-transitory computer readable instructions are executed by a computer.
CN202311394491.8A 2023-10-25 2023-10-25 Reconstruction method, system, equipment and medium of magnetic resonance image Active CN117333571B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311394491.8A CN117333571B (en) 2023-10-25 2023-10-25 Reconstruction method, system, equipment and medium of magnetic resonance image


Publications (2)

Publication Number Publication Date
CN117333571A CN117333571A (en) 2024-01-02
CN117333571B true CN117333571B (en) 2024-03-26

Family

ID=89295271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311394491.8A Active CN117333571B (en) 2023-10-25 2023-10-25 Reconstruction method, system, equipment and medium of magnetic resonance image

Country Status (1)

Country Link
CN (1) CN117333571B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110246200A (en) * 2019-05-27 2019-09-17 深圳先进技术研究院 Mr cardiac film imaging method, device and magnetic resonance scanner
CN113269849A (en) * 2020-07-23 2021-08-17 上海联影智能医疗科技有限公司 Method and apparatus for reconstructing magnetic resonance
CN113795764A (en) * 2018-07-30 2021-12-14 海珀菲纳股份有限公司 Deep learning technique for magnetic resonance image reconstruction
CN114693823A (en) * 2022-03-09 2022-07-01 天津大学 Magnetic resonance image reconstruction method based on space-frequency double-domain parallel reconstruction
CN114998458A (en) * 2021-11-29 2022-09-02 厦门理工学院 Undersampled magnetic resonance image reconstruction method based on reference image and data correction
CN115375785A (en) * 2022-07-29 2022-11-22 上海康达卡勒幅医疗科技有限公司 Magnetic resonance image reconstruction method and device based on artificial neural network
CN116725515A (en) * 2023-08-14 2023-09-12 山东奥新医疗科技有限公司 Magnetic resonance rapid imaging method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11327137B2 (en) * 2017-06-06 2022-05-10 Shenzhen Institutes Of Advanced Technology One-dimensional partial Fourier parallel magnetic resonance imaging method based on deep convolutional network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杜年茂. Research on Fast Magnetic Resonance Imaging Algorithms Based on Deep Learning (《基于深度学习的快速核磁成像算法研究》). Master's thesis, electronic journal publication, 2022, pp. 1-64. *

Also Published As

Publication number Publication date
CN117333571A (en) 2024-01-02

Similar Documents

Publication Publication Date Title
Tezcan et al. MR image reconstruction using deep density priors
CN109325985B (en) Magnetic resonance image reconstruction method, apparatus and computer readable storage medium
Pal et al. A review and experimental evaluation of deep learning methods for MRI reconstruction
US11170543B2 (en) MRI image reconstruction from undersampled data using adversarially trained generative neural network
Wang et al. Denoising auto-encoding priors in undecimated wavelet domain for MR image reconstruction
Upadhyay et al. Uncertainty-aware gan with adaptive loss for robust mri image enhancement
CN114299185A (en) Magnetic resonance image generation method, magnetic resonance image generation device, computer equipment and storage medium
KR20220082302A (en) MAGNETIC RESONANCE IMAGE PROCESSING APPARATUS AND METHOD USING ARTIFICIAL NEURAL NETWORK AND RESCAlING
Vasudeva et al. Compressed sensing mri reconstruction with co-vegan: Complex-valued generative adversarial network
Vasudeva et al. Co-VeGAN: Complex-valued generative adversarial network for compressive sensing MR image reconstruction
Xu et al. STRESS: Super-resolution for dynamic fetal MRI using self-supervised learning
Cui et al. Motion artifact reduction for magnetic resonance imaging with deep learning and k-space analysis
US11941732B2 (en) Multi-slice MRI data processing using deep learning techniques
CN117333571B (en) Reconstruction method, system, equipment and medium of magnetic resonance image
CN116626570A (en) Multi-contrast MRI sampling and image reconstruction
Gan et al. SS-JIRCS: Self-supervised joint image reconstruction and coil sensitivity calibration in parallel MRI without ground truth
CN113838105B (en) Diffusion microcirculation model driving parameter estimation method, device and medium based on deep learning
WO2023038910A1 (en) Dual-domain self-supervised learning for accelerated non-cartesian magnetic resonance imaging reconstruction
Ke et al. CRDN: cascaded residual dense networks for dynamic MR imaging with edge-enhanced loss constraint
CN114494014A (en) Magnetic resonance image super-resolution reconstruction method and device
Cao et al. CS-GAN for high-quality diffusion tensor imaging
Yang et al. Generative Adversarial Network Powered Fast Magnetic Resonance Imaging—Comparative Study and New Perspectives
US11967004B2 (en) Deep learning based image reconstruction
Xu A Robust and Efficient Framework for Slice-to-Volume Reconstruction: Application to Fetal MRI
Su et al. Realistic Restorer: artifact-free flow restorer (AF2R) for MRI motion artifact removal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant