CN113096207A - Rapid magnetic resonance imaging method and system based on deep learning and edge assistance - Google Patents

Rapid magnetic resonance imaging method and system based on deep learning and edge assistance

Info

Publication number
CN113096207A
CN113096207A
Authority
CN
China
Prior art keywords
edge
image
branch
magnetic resonance
reconstruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110278962.3A
Other languages
Chinese (zh)
Other versions
CN113096207B (en)
Inventor
庞彦伟
王岳泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202110278962.3A priority Critical patent/CN113096207B/en
Publication of CN113096207A publication Critical patent/CN113096207A/en
Application granted granted Critical
Publication of CN113096207B publication Critical patent/CN113096207B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 11/005 Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/10 Image enhancement or restoration by non-spatial domain filtering
    • G06T 2200/32 Indexing scheme for image data processing or generation, in general, involving image mosaicing
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/20056 Discrete and fast Fourier transform [DFT, FFT]
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Abstract

The invention relates to a fast magnetic resonance imaging method and system based on deep learning and edge assistance, comprising the following steps: step 1, collecting, organizing and processing a data set; step 2, constructing a backbone image reconstruction branch based on a cascaded encoder-decoder convolutional neural network architecture; step 3, constructing an auxiliary edge reconstruction branch based on a progressively refined convolutional neural network architecture; step 4, training the deep convolutional reconstruction network, in which the backbone image branch and the auxiliary edge branch interact, on the data set prepared in step 1; and step 5, generating a reconstructed image. The invention improves the reconstruction quality of undersampled magnetic resonance images.

Description

Rapid magnetic resonance imaging method and system based on deep learning and edge assistance
Technical Field
The invention belongs to the technical field of image processing and pattern recognition, relates to a magnetic resonance imaging method and system, and particularly relates to a rapid magnetic resonance imaging method and system based on deep learning and edge assistance.
Background
In recent years, healthcare has attracted increasing attention, and the "AI + medicine" paradigm has developed rapidly. Within it, deep learning based magnetic resonance imaging (MRI) has become one of the research focuses.
Unlike conventional imaging techniques that acquire signals directly in the image domain, MRI acquires its measurements in the frequency domain (K-space) and reconstructs the final image from them. To obtain a sharp image, the sampling process must satisfy the Nyquist criterion. With full sampling, and ignoring factors such as equipment noise, a clear image can be obtained simply by applying an inverse Fourier transform to the raw K-space data.
However, MRI suffers from long acquisition times and is susceptible to motion artifacts, which limits its use in some clinical scenarios. How to shorten the MR acquisition time and improve imaging efficiency while preserving image quality has therefore become a central research question.
Undersampled image reconstruction is one such acceleration approach. Because the image information in the frequency domain is concentrated in the low-frequency region, with only weak high-frequency details, the sampling rate can be reduced by acquiring only part of the K-space data, typically keeping more of the low-frequency information and less of the high-frequency information.
A search of the prior art identified the following patent documents:
1. An enhanced residual cascade network model for undersampled magnetic resonance imaging (CN111487573A), which reconstructs undersampled magnetic resonance images with a deep network composed of locally dense connections and globally densely connected recursive units.
2. An undersampled nuclear magnetic resonance image reconstruction method based on deep learning (CN110570486A), which builds a deep neural network consisting of small network units, compressed small network units and an output module to reconstruct the magnetic resonance image.
However, the prior art above is largely limited to fitting a direct mapping from the undersampled, aliased image to the sharp target image. This makes reconstruction difficult, leads to a performance bottleneck, and fails to sufficiently extract and exploit effective supervision information to guide the reconstruction.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a rapid magnetic resonance imaging method and system based on deep learning and edge assistance.
The invention adopts the following technical scheme to solve this problem:
a fast magnetic resonance imaging method based on deep learning and edge assistance comprises the following steps:
step 1, collecting, organizing and processing a data set;
step 2, constructing a backbone image reconstruction branch based on a cascaded encoder-decoder convolutional neural network architecture;
step 3, constructing an auxiliary edge reconstruction branch based on a progressively refined convolutional neural network architecture;
step 4, training the deep convolutional reconstruction network, in which the backbone image branch and the auxiliary edge branch interact, on the data set prepared in step 1;
and step 5, generating a reconstructed image.
Moreover, the data set collected, organized and processed in step 1 comprises: sagittal, coronal and transverse magnetic resonance images, acquired both with and without proton-density-weighted fat suppression; for each sample and its label, complex-valued fully sampled image-domain data and K-space data; and a real-valued data pair obtained by taking the modulus of the complex-valued fully sampled image.
In addition, in the backbone image reconstruction branch based on a cascaded encoder-decoder convolutional neural network architecture constructed in step 2, each cascade stage adopts an encoder-decoder architecture comprising an encoder part and a decoder part. The encoder part consists of 4 convolution blocks; each block halves the size of the incoming feature map and doubles the number of channels, so that the feature representation is built up, the internal regularities and hierarchical representation of the samples are learned, and the magnetic resonance image features are encoded. The decoder part upsamples the feature map by a factor of 2 with each of 4 successive transposed-convolution blocks, while skip connections bring in the encoder feature maps of matching size and concatenate them channel-wise, progressively recovering a reconstructed image that fuses spatial resolution with semantic information.
In addition, in the auxiliary edge reconstruction branch based on the progressively refined convolutional neural network architecture constructed in step 3, each cascade stage adopts a progressively refined, resolution-preserving network whose main body is 2 edge extraction and enhancement modules, structured as follows: first, a head unit consisting of a 3 × 3 convolution, a PReLU and a 3 × 3 convolution performs preliminary feature extraction on the feature map; second, the feature map passes through a channel attention module that models, for each channel, a weight reflecting how relevant that channel is to the representation and to the key information; in parallel, the feature map passes through a residual block formed by 4 3 × 3 convolutional layers, is concatenated with the attention-enhanced feature map, and a 1 × 1 convolution then halves the number of channels; third, the previous step is repeated; finally, a tail unit consisting of a 3 × 3 convolution, a PReLU and a 3 × 3 convolution fuses the effective edge features.
Moreover, the specific method of step 4 is as follows:
(1) coupling and interaction design of the backbone image branch and the auxiliary edge branch to form the reconstruction network architecture:
For the first cascade: first, the shallowest output feature map of the encoder part of the backbone image branch is fed into the auxiliary edge branch. Second, an edge map extracted from the training data with a Sobel operator is concatenated with the feature map from the backbone-branch encoder and fed into the 1st edge extraction and enhancement module; its output is concatenated with the shallowest output feature map of the decoder part of the backbone image branch and fed into the 2nd edge extraction and enhancement module. Third, a skip connection is led from the 1st edge extraction and enhancement module to the 2nd one, strengthening signal flow and improving training stability. Finally, the edge output by the auxiliary edge branch and the image restored by the backbone image branch are added element-wise.
For the subsequent cascades, the edge and the image output by the previous cascade are fed into the auxiliary edge branch and the backbone image branch of the next cascade, respectively. Between cascades, a hard data consistency operation is applied: the image is Fourier transformed to K-space, the acquired points of the undersampled input data directly overwrite the corresponding K-space entries, and an inverse Fourier transform maps the result back to the image domain, so that the acquired, known-correct data hard-constrain the reconstruction result.
(2) training the reconstruction network architecture formed in (1) to obtain a trained deep convolutional reconstruction network in which the backbone image branch and the auxiliary edge branch interact.
Moreover, the specific method of sub-step (2) of step 4 is as follows:
During training, the backbone image branch uses the processed images of the data set as reconstruction targets with an SSIM loss function. For the auxiliary edge branch, an existing edge detection operator is applied to the target image of the backbone image branch to obtain an edge label, which serves as the supervision signal that constrains edge recovery, with an L1 loss function. The multitask loss formed by the two is optimized with an Adam optimizer until the training and validation losses become stable, converged and properly fitted, and the trained weight parameters of the deep convolutional reconstruction network, in which the backbone image branch and the auxiliary edge branch interact, are saved.
Moreover, the specific method of step 5 is as follows:
The trained model is tested and evaluated both qualitatively and quantitatively: each image in the test set is passed through the network using the model weights saved after training, generating a reconstructed image.
A fast magnetic resonance image reconstruction system based on deep learning and edge assistance comprises a power supply, a host computer, a controller, a pulse sequence, an examination bed and a magnet system. The magnet system comprises a magnet, a uniform magnetic field coil, a transmit/receive coil and a gradient coil. The power supply provides the electric energy required for overall operation of the system and is connected to each electric device by wire; the host computer acquires, stores, reconstructs and displays the magnetic resonance images and interacts with the other components through optical fibers; the controller, as the external hardware control device, is responsible for starting, regulating and stopping each part of the system and is connected to the controlled components through optical fibers; the pulse sequence is a pulse program consisting of radio-frequency pulses and gradient pulses and sets the magnetic resonance acquisition mode; the examination bed serves as the fixed support for the subject while the magnetic resonance data are acquired; and the magnet system provides the electromagnetic environment in which the magnetic resonance signals of the subject are generated and received.
Advantages and beneficial effects of the invention:
1. The invention provides a fast magnetic resonance imaging method and system based on deep learning and edge assistance. The reconstruction network of step 4 consists of a backbone image reconstruction branch based on a cascaded encoder-decoder convolutional neural network and an auxiliary edge reconstruction branch based on a progressively refined convolutional neural network. With the edge extraction and enhancement module at its core, the auxiliary edge branch effectively extracts, enhances and exploits the edge information of the image through a channel attention mechanism and dense residual blocks, guiding and constraining the reconstruction result and thereby improving the reconstruction performance for undersampled magnetic resonance images.
2. The auxiliary edge branch and the backbone image branch designed by the invention are not confined to their respective reconstruction targets; they interact continuously throughout the cascades. The feature maps fed from the backbone image branch into the auxiliary edge branch in step 4 help enhance the recovery of the edge map, while the edge maps fed back from the auxiliary edge branch to the backbone image branch help improve the reconstruction of the magnetic resonance image, so that the reconstruction result is effectively improved by a divide-and-conquer strategy.
3. The invention provides a fast magnetic resonance imaging method and system based on deep learning and edge assistance that achieve effective reconstruction of undersampled magnetic resonance images with a deep reconstruction network formed by a backbone image reconstruction branch based on a cascaded encoder-decoder convolutional neural network and an auxiliary edge reconstruction branch based on a progressively refined convolutional neural network, the two branches interacting with each other. The method extracts image features automatically with a deep convolutional neural network trained in a supervised manner, avoiding the low efficiency, strong subjectivity and heavy computation of hand-crafted feature extraction in traditional methods.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present invention;
FIG. 2(a) is an example of fully sampled K-space data; FIG. 2(b) is an example of a 4× Cartesian random pixel-level mask; FIG. 2(c) is an example of a fat-suppressed coronal proton-density-weighted knee image; FIG. 2(d) is an example of a coronal proton-density-weighted knee image without fat suppression;
FIG. 3 is a schematic diagram of the overall network architecture of the present invention (taking 2 cascades as an example);
FIG. 4 is a schematic structural component diagram of an edge extraction and enhancement module, which is a key component of an auxiliary edge branch in the network architecture of the present invention;
FIG. 5(a) is a test image reconstruction edge example; FIG. 5(b) is an example of a test image reconstruction result; FIG. 5(c) is an example of a test image reconstruction target;
FIG. 6 is a schematic diagram of the overall system architecture of the present invention;
Detailed Description
The embodiments of the invention will be described in further detail below with reference to the accompanying drawings:
A fast magnetic resonance imaging method based on deep learning and edge assistance, as shown in FIG. 1, comprises the following steps:
step 1, collecting, organizing and processing a data set;
the collected, organized and processed data set of step 1 comprises: various sagittal planes, coronal planes, transverse planes and magnetic resonance images with and without proton density weighted fat suppression; each sample and its label have complex value full sampling image domain data and K space data; a real-valued data pair obtained by modulo the complex-valued full-sampling image.
In the present embodiment, the collected, organized and processed data are split into a training set, a validation set and a test set. To improve the generality and robustness of the model, the data set contains more than ten thousand slices in total. On the one hand, it contains sagittal, coronal and transverse magnetic resonance images, with and without proton-density-weighted fat suppression. On the other hand, each sample and its label are available both as complex-valued fully sampled image-domain data and K-space data and as real-valued data pairs obtained by taking the modulus of the complex-valued fully sampled image, which facilitates dual-domain (image and frequency) operations and training steps such as data consistency.
In this embodiment, step 1 randomly partitions the slice images into the training, validation and test sets. Examples of the label images are shown in FIG. 2(c) and (d). Each slice is available as clean, fully sampled complex-valued K-space data, shown in modulus form in FIG. 2(a). The corresponding 4× accelerated undersampled K-space data are obtained by applying a 4× Cartesian random pixel-level mask, an example of which is shown in FIG. 2(b); the corresponding image-domain label is obtained by a discrete inverse Fourier transform followed by taking the modulus. In this way, each sample pair, consisting of input data and a label, is obtained.
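To make the data preparation concrete, the following is a minimal NumPy sketch of how one undersampled input / fully sampled label pair could be generated. It is illustrative only: the plain random pixel-level mask with no separately preserved low-frequency band, and the function and variable names, are assumptions rather than details taken from the patent.

```python
import numpy as np

def make_training_pair(kspace_full: np.ndarray, accel: int = 4, seed: int = 0):
    """Build one (zero-filled input image, fully sampled label image) pair.

    `kspace_full` is the centered, complex-valued fully sampled K-space of one slice.
    A random pixel-level mask keeps roughly 1/accel of the K-space points (4x acceleration
    by default); unsampled points are zero-filled before the inverse transform.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(kspace_full.shape) < 1.0 / accel      # ~25% of points kept at 4x
    kspace_under = np.where(mask, kspace_full, 0.0)

    def to_image(k):
        # centered 2-D inverse Fourier transform followed by taking the modulus
        return np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k))))

    return to_image(kspace_under), to_image(kspace_full), mask
```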
Step 2, constructing a backbone image reconstruction branch based on a cascaded encoder-decoder convolutional neural network architecture;
in the step 2, in the trunk image reconstruction branch of the construction of the cascade-based coding and decoding convolutional neural network architecture, each level of network adopts a coding and decoding architecture, and comprises an encoder part and a decoder part. The encoder part consists of 4 layers of convolution blocks, the size of a fed-in characteristic diagram is gradually reduced to one half, and the number of channels is multiplied, so that the reconstruction of characteristic representation and the learning of sample internal rules and hierarchical representation are realized, and the magnetic resonance image characteristics are encoded; the decoder part samples the characteristic graph twice by 4 layers of reverse convolution blocks in sequence, and simultaneously leads out the characteristic graph of the encoder with the same size through jumping connection and splices the characteristic graph with the characteristic graph according to a channel, thereby continuously recovering the reconstructed image with the spatial resolution and the semantic information fusion.
In this embodiment, step 2 constructs a backbone image reconstruction branch based on a cascaded encoder-decoder convolutional neural network architecture, shown in FIG. 3. In the backbone image reconstruction branch of FIG. 3, each cascade stage adopts an encoder-decoder architecture.
On the one hand, the encoder is composed of 4 convolution blocks; each block halves the size of the incoming feature map and doubles the number of channels, so that the feature representation is built up, the internal regularities and hierarchical representation of the samples are learned, and the magnetic resonance image features are encoded. More specifically, the encoder part contains 4 convolution blocks with pooling layers; each convolution block consists of 2 convolution groups, each group containing, in order, a 3 × 3 convolutional layer with stride 1, an instance normalization layer, and a LeakyReLU activation layer with negative slope 0.2. As the feature map shrinks through the encoder, its channel count doubles, the numbers of convolution kernels in the successive convolution blocks being 16, 32, 64 and 128. The pooling layer after each convolution block is a 2 × 2 average pooling with stride 2. A final convolution block has 256 channels and is not followed by pooling.
On the other hand, the decoder upsamples the feature map by a factor of 2 with each of 4 successive transposed-convolution blocks, while skip connections bring in the encoder feature maps of matching size and concatenate them channel-wise, progressively recovering a reconstructed image that fuses spatial resolution with semantic information. More specifically, the decoder part contains 4 transposed-convolution blocks, each consisting, in order, of a 2 × 2 transposed convolutional layer with stride 2, an instance normalization layer, a LeakyReLU activation layer with negative slope 0.2, and a reflection padding layer. As the feature map grows through the decoder, its channel count is halved, the numbers of convolution kernels in the successive transposed-convolution blocks being 128, 64, 32 and 16. The reflection padding layer mirror-pads the feature map about its right and bottom edges so that, after the transposed convolution and upsampling, its size matches that of the skip-connected encoder feature map exactly, enabling the channel-wise concatenation.
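The encoder-decoder stage described above can be sketched in PyTorch roughly as follows. This is a minimal illustration, not the patent's implementation: the reflection padding layer is replaced by the assumption that the input height and width are divisible by 16, and the final 1 × 1 output convolution (`head`) is an added assumption, since the text does not say how the last 16-channel feature map is mapped back to an image.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two (3x3 conv stride 1, InstanceNorm, LeakyReLU 0.2) groups, as described in the text."""
    layers = []
    for i in range(2):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, stride=1, padding=1),
                   nn.InstanceNorm2d(out_ch),
                   nn.LeakyReLU(0.2, inplace=True)]
    return nn.Sequential(*layers)

def up_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Decoder block: 2x2 transposed conv (stride 2), InstanceNorm, LeakyReLU 0.2."""
    return nn.Sequential(nn.ConvTranspose2d(in_ch, out_ch, 2, stride=2),
                         nn.InstanceNorm2d(out_ch),
                         nn.LeakyReLU(0.2, inplace=True))

class CodecBranch(nn.Module):
    """Single cascade stage of the backbone image branch (a sketch; widths follow the text)."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        widths = [16, 32, 64, 128]
        self.encoders = nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.encoders.append(conv_block(ch, w))
            ch = w
        self.pool = nn.AvgPool2d(2, stride=2)
        self.bottleneck = conv_block(128, 256)
        self.ups = nn.ModuleList([up_block(c_in, c_out) for c_in, c_out
                                  in [(256, 128), (128, 64), (64, 32), (32, 16)]])
        # after concatenating the skip feature map the channel count doubles again
        self.decoders = nn.ModuleList([conv_block(2 * w, w) for w in reversed(widths)])
        self.head = nn.Conv2d(16, 1, 1)   # assumed mapping back to a one-channel image

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)               # kept for the channel-wise concatenation below
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.ups, self.decoders, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))   # skip connection, concatenated by channel
        return self.head(x)
```

A single stage would be instantiated as `CodecBranch(in_ch=1)` and applied to the zero-filled input image; the cascaded network repeats such stages with data consistency in between, as described later in step 4.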
Step 3, constructing an auxiliary edge reconstruction branch based on a progressively refined convolutional neural network architecture;
in the auxiliary edge reconstruction branch based on the gradual refinement convolutional neural network architecture constructed in the step 3, each segment adopts a gradual refinement equal-size network, a network main body is 2 edge extraction and enhancement modules, and the structure is as follows: firstly, preliminarily extracting features of a feature map by a 3 × 3 convolution head unit, a PReLU head unit and a 3 × 3 convolution head unit in sequence; secondly, the feature map passes through a channel attention module to model the weight of the relevance degree of each channel for representing and key information; meanwhile, the feature map passes through a residual block formed by 4 3 × 3 convolutional layers, and is spliced with the feature map enhanced by the attention of the channel, and then 1 × 1 convolution is used for reducing the number of half channels; thirdly, repeating the previous step; finally, the fusion of the effective edge features is realized through a tail unit consisting of 3 × 3 convolution, PReLU and 3 × 3 convolution.
In this embodiment, step 3 constructs an auxiliary edge reconstruction branch based on a progressively refined convolutional neural network architecture, shown in FIG. 3. The branch uses a progressively refined, resolution-preserving network whose main body is 2 edge extraction and enhancement modules; their structure is shown in FIG. 4. In the edge extraction and enhancement module: first, a head unit consisting of a 3 × 3 convolution, a PReLU and a 3 × 3 convolution performs initial feature extraction on the feature map. Second, the feature map passes through a basic channel attention module composed, in order, of average pooling, 1 × 1 convolution, PReLU, 1 × 1 convolution, Sigmoid and an element-wise, residual-style multiplication, which models, for each channel, a weight reflecting how relevant that channel is to the representation and to the key information. In parallel, the feature map passes through a dense residual block formed by 4 3 × 3 convolutional layers with local element-wise residual additions, and is then concatenated channel-wise with the attention-enhanced feature map, after which a 1 × 1 convolution halves the number of channels of the concatenated feature map. Third, the previous step is repeated. Finally, a tail unit consisting of a 3 × 3 convolution, a PReLU and a 3 × 3 convolution effectively fuses the edge features.
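A PyTorch sketch of the edge extraction and enhancement module as read from the description above; the channel reduction ratio inside the attention gate, the exact placement of the PReLU inside the dense residual block, and the default width of 16 channels are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Average pool -> 1x1 conv, PReLU, 1x1 conv, Sigmoid, then per-channel rescaling of the input."""
    def __init__(self, ch: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(ch, ch // reduction, 1), nn.PReLU(),
                                  nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.gate(x)           # element-wise, residual-style multiplication

class DenseResidualBlock(nn.Module):
    """Four 3x3 convolutions with local element-wise residual additions."""
    def __init__(self, ch: int):
        super().__init__()
        self.convs = nn.ModuleList(nn.Conv2d(ch, ch, 3, padding=1) for _ in range(4))
        self.act = nn.PReLU()
    def forward(self, x):
        out = x
        for conv in self.convs:
            out = out + self.act(conv(out))   # local residual connection
        return out

class EdgeExtractEnhance(nn.Module):
    """Edge extraction and enhancement module (a sketch of the structure described above)."""
    def __init__(self, ch: int = 16):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1))
        self.stages = nn.ModuleList()
        for _ in range(2):                     # "the previous step is repeated"
            self.stages.append(nn.ModuleDict({
                "attn": ChannelAttention(ch),
                "dense": DenseResidualBlock(ch),
                "fuse": nn.Conv2d(2 * ch, ch, 1)   # 1x1 conv halving the concatenated channels
            }))
        self.tail = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        x = self.head(x)
        for s in self.stages:
            x = s["fuse"](torch.cat([s["attn"](x), s["dense"](x)], dim=1))
        return self.tail(x)
```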
Step 4, training the deep convolutional reconstruction network, in which the backbone image branch and the auxiliary edge branch interact, on the data set prepared in step 1;
the specific method of the step 4 comprises the following steps:
(1) and (3) carrying out coupling interactive design on the main image branches and the auxiliary edge branches to form a reconstruction network architecture:
for the first cascade, firstly, the shallowest layer output characteristic graph of the main image branch encoder part is fed into the auxiliary edge branch; secondly, extracting an edge graph from the training data by adopting a Sobel operator, splicing the edge graph with a feature graph from an encoder of a main image branch, feeding the edge graph into a 1 st edge extraction and enhancement module for output, splicing the edge graph with a shallowest layer output feature graph of a decoder part of the main image branch, and feeding the edge graph into a 2 nd edge extraction and enhancement module; thirdly, after the 1 st edge extraction and enhancement module is led out and connected to the 2 nd edge extraction and enhancement module in a jumping mode, signal flowing is enhanced, and training stability is improved; finally, the edge output by the auxiliary edge branch and the image restored by the main image branch are added according to elements;
and for subsequent multiple cascades, adopting the edge and the image output by the last cascade and respectively inputting the auxiliary edge branch and the main image branch of the next cascade. Between cascades, hard data coherency operations are employed, namely: and Fourier transformation is carried out on the image to K space, the acquired pixel points of the undersampled input data are used for directly covering and filling the K space, and then inverse Fourier transformation is carried out to the image domain, so that the acquired correct data are used for forcibly constraining and reconstructing the result.
(2) Training the reconstruction network architecture formed in the step (1) to obtain a trained deep convolution reconstruction network with interaction of the main image branches and the auxiliary edge branches;
the specific method in the step (2) of the step 4 comprises the following steps:
during training, for the trunk image branches, images with processed data sets are used as reconstruction targets, and an SSIM loss function is used; for the auxiliary edge branch, the existing edge detection operator is used for extracting the target image of the main image branch to obtain an edge label, the edge label is used as a supervision signal of the auxiliary edge branch to restrain the recovery of the edge, and an L1 loss function is adopted. And the multitask loss formed by the two is optimized by adopting an Adam optimizer until the training loss and the verification loss tend to be stable, converged and normally fitted, and the trained depth convolution reconstruction network weight parameters interacted by the main image branches and the auxiliary edge branches are reserved.
In this embodiment, regarding the network details, the backbone image branch and the auxiliary edge branch are coupled and made to interact so as to form the complete reconstruction network; the overall architecture with 2 cascades is shown in FIG. 3. For the first cascade: first, the shallowest output feature map of the encoder part of the backbone image branch is fed into the auxiliary edge branch. Second, an edge map extracted from the training data with a Sobel operator is concatenated with the feature map from the backbone-branch encoder and fed into the 1st edge extraction and enhancement module; its output is concatenated with the shallowest output feature map of the decoder part of the backbone image branch and fed into the 2nd edge extraction and enhancement module. Third, a skip connection is led from the 1st edge extraction and enhancement module to the 2nd one, strengthening signal flow and improving training stability. Finally, the edge output by the auxiliary edge branch and the image restored by the backbone image branch are added element-wise. When the network is expanded with further, deeper cascades, the edge and the image output by the previous cascade are fed into the auxiliary edge branch and the backbone image branch of the next cascade, respectively. Between cascades, a hard data consistency operation is applied: the image is Fourier transformed to K-space, the acquired points of the undersampled input data directly overwrite the corresponding K-space entries, and an inverse Fourier transform maps the result back to the image domain, so that the acquired, known-correct data hard-constrain the reconstruction result.
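The hard data consistency step between cascades could be written, for a single real-valued image and a centered sampling mask, roughly as follows (a sketch; how the actual network handles complex data and FFT normalization is not specified in the text).

```python
import torch

def hard_data_consistency(image: torch.Tensor, kspace_sampled: torch.Tensor,
                          mask: torch.Tensor) -> torch.Tensor:
    """Hard data consistency between cascades.

    `image` is the intermediate real-valued reconstruction, `kspace_sampled` the measured
    (centered, zero-filled) undersampled K-space and `mask` the binary sampling pattern.
    The acquired points overwrite the network's K-space estimate before transforming
    back to the image domain.
    """
    k = torch.fft.fftshift(torch.fft.fft2(image), dim=(-2, -1))   # to centered K-space
    k = torch.where(mask.bool(), kspace_sampled, k)               # force acquired points
    return torch.fft.ifft2(torch.fft.ifftshift(k, dim=(-2, -1))).abs()
```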
Regarding the training details: the backbone image branch uses the processed images of the data set as reconstruction targets with an SSIM loss function; for the auxiliary edge branch, a Sobel operator is applied to the target image of the backbone image branch to obtain an edge label, which serves as the supervision signal that constrains edge recovery, with an L1 loss function. The multitask loss formed by the two is optimized with an Adam optimizer (lr = 0.0003) together with a StepLR learning rate decay schedule (step_size = 40, gamma = 0.5), with batch_size set to 1, until the training and validation losses become stable, converged and properly fitted; the full weight parameters of the trained network are then saved.
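Putting the quoted hyperparameters together, one training step might look roughly like the sketch below. The model interface (returning an image and an edge map), the use of pytorch_msssim for the SSIM loss, and the equal weighting of the two loss terms are assumptions, not details taken from the patent.

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim   # assumed SSIM implementation; any differentiable SSIM works

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Edge label: Sobel gradient magnitude of a (N, 1, H, W) image tensor."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device, dtype=img.dtype)
    gx = F.conv2d(img, kx.view(1, 1, 3, 3), padding=1)
    gy = F.conv2d(img, kx.t().contiguous().view(1, 1, 3, 3), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

def training_step(model, optimizer, under_img, target_img, edge_weight: float = 1.0):
    """One optimization step of the multitask (SSIM + L1) loss described above."""
    optimizer.zero_grad()
    recon_img, recon_edge = model(under_img)                       # assumed model interface
    loss_img = 1.0 - ssim(recon_img, target_img, data_range=1.0)   # SSIM loss, image branch
    loss_edge = F.l1_loss(recon_edge, sobel_edges(target_img))     # L1 loss, edge branch
    (loss_img + edge_weight * loss_edge).backward()
    optimizer.step()
    return float(loss_img), float(loss_edge)

# Hyperparameters quoted in the text:
# optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
# scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.5)
# batch_size = 1
```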
Step 5, generating a reconstructed image.
The specific method of step 5 is as follows:
The trained model is tested and evaluated both qualitatively and quantitatively: each image in the test set is passed through the network using the model weights saved after training, generating a reconstructed image.
In this embodiment, to verify the effectiveness of the fast magnetic resonance image reconstruction method and system based on deep learning and edge assistance, quantitative performance is evaluated with three metrics: normalized mean squared error (NMSE), peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
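For reference, NMSE and PSNR in their standard forms can be computed as below; SSIM is normally taken from an existing implementation such as skimage.metrics.structural_similarity. The exact definitions used in the evaluation (for example the peak value in PSNR) are not stated, so these are assumptions.

```python
import numpy as np

def nmse(gt: np.ndarray, pred: np.ndarray) -> float:
    """Normalized mean squared error: ||gt - pred||^2 / ||gt||^2."""
    return float(np.linalg.norm(gt - pred) ** 2 / np.linalg.norm(gt) ** 2)

def psnr(gt: np.ndarray, pred: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB, using the ground-truth maximum as the peak."""
    mse = np.mean((gt - pred) ** 2)
    return float(10.0 * np.log10(gt.max() ** 2 / mse))
```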
The method of the invention is trained on the training set and then evaluated on the test set, achieving good results; test examples are shown in FIG. 5(a), (b) and (c). FIG. 5(a) shows an example of the reconstructed edges of a test image. The reconstructed edges are smooth and clear, which not only helps preserve and recover the overall structure of the image but also eases the learning burden of the backbone image reconstruction branch. FIG. 5(b) is an example of a test image reconstruction result and FIG. 5(c) the corresponding reconstruction target. The comparison shows that the reconstructed image has sharp edges and the magnetic resonance image is well recovered. Beyond this qualitative analysis, the quantitative NMSE, PSNR and SSIM results are listed in Table 1. Compared with the baseline U-Net, the proposed method improves all of NMSE, PSNR and SSIM: with 2 cascades, NMSE improves by 0.42%, PSNR by 0.81% and SSIM by 0.85%; with 3 cascades, NMSE improves by 0.47%, PSNR by 0.92% and SSIM by 1.04%. This demonstrates both the effectiveness and the scalability of the invention, since deeper cascading can be explored for further performance gains.
In this example, the test results are shown in Table 1:
TABLE 1. Quantitative results on the test set
A fast magnetic resonance image reconstruction system based on deep learning and edge assistance, as shown in FIG. 6, comprises a power supply, a host computer, a controller, a pulse sequence, an examination bed and a magnet system. The magnet system comprises a magnet, a uniform magnetic field coil, a transmit/receive coil and a gradient coil. The power supply provides the electric energy required for overall operation of the system and is connected to each electric device by wire; the host computer acquires, stores, reconstructs and displays the magnetic resonance images and interacts with the other components through optical fibers; the controller, as the external hardware control device, is responsible for starting, regulating and stopping each part of the system and is connected to the controlled components through optical fibers; the pulse sequence is a pulse program consisting of radio-frequency pulses and gradient pulses and sets the magnetic resonance acquisition mode; the examination bed serves as the fixed support for the subject while the magnetic resonance data are acquired; and the magnet system provides the electromagnetic environment in which the magnetic resonance signals of the subject are generated and received.
In the present embodiment, within the magnet system, the magnet supplies the main magnetic field; the uniform magnetic field coil provides a homogeneous field that determines the resonance frequency and the static magnetic moment; the transmit/receive coil transmits the radio-frequency excitation pulses and receives their echoes; and the gradient coil switches the gradient magnetic fields to realize spatial encoding and related functions. Under the regulation of the controller, the magnet system performs digital-to-analog conversion, gradient control and amplification, signal transmission and reception, and encoding, exchanging signals among the subsystems mainly in the form of gradient currents. On the software side, the fast magnetic resonance imaging model based on deep learning and edge assistance, trained and tested offline as in steps 1 to 5, is deployed with a forward inference framework, so that the complete fast magnetic resonance imaging system is formed.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

Claims (8)

1. A fast magnetic resonance imaging method based on deep learning and edge assistance, characterized in that it comprises the following steps:
step 1, collecting, organizing and processing a data set;
step 2, constructing a backbone image reconstruction branch based on a cascaded encoder-decoder convolutional neural network architecture;
step 3, constructing an auxiliary edge reconstruction branch based on a progressively refined convolutional neural network architecture;
step 4, training the deep convolutional reconstruction network, in which the backbone image branch and the auxiliary edge branch interact, on the data set prepared in step 1;
and step 5, generating a reconstructed image.
2. The fast magnetic resonance imaging method based on deep learning and edge assistance as claimed in claim 1, characterized in that: the data set collected, organized and processed in step 1 comprises: sagittal, coronal and transverse magnetic resonance images, acquired both with and without proton-density-weighted fat suppression; for each sample and its label, complex-valued fully sampled image-domain data and K-space data; and a real-valued data pair obtained by taking the modulus of the complex-valued fully sampled image.
3. The fast magnetic resonance imaging method based on deep learning and edge assistance as claimed in claim 1, characterized in that: in the backbone image reconstruction branch based on a cascaded encoder-decoder convolutional neural network architecture constructed in step 2, each cascade stage adopts an encoder-decoder architecture comprising an encoder part and a decoder part; the encoder part consists of 4 convolution blocks, each of which halves the size of the incoming feature map and doubles the number of channels, so that the feature representation is built up, the internal regularities and hierarchical representation of the samples are learned, and the magnetic resonance image features are encoded; the decoder part upsamples the feature map by a factor of 2 with each of 4 successive transposed-convolution blocks, while skip connections bring in the encoder feature maps of matching size and concatenate them channel-wise, progressively recovering a reconstructed image that fuses spatial resolution with semantic information.
4. The fast magnetic resonance imaging method based on deep learning and edge assistance as claimed in claim 1, characterized in that: in the auxiliary edge reconstruction branch based on the progressively refined convolutional neural network architecture constructed in step 3, each cascade stage adopts a progressively refined, resolution-preserving network whose main body is 2 edge extraction and enhancement modules, structured as follows: first, a head unit consisting of a 3 × 3 convolution, a PReLU and a 3 × 3 convolution performs preliminary feature extraction on the feature map; second, the feature map passes through a channel attention module that models, for each channel, a weight reflecting how relevant that channel is to the representation and to the key information; in parallel, the feature map passes through a residual block formed by 4 3 × 3 convolutional layers, is concatenated with the attention-enhanced feature map, and a 1 × 1 convolution then halves the number of channels; third, the previous step is repeated; finally, a tail unit consisting of a 3 × 3 convolution, a PReLU and a 3 × 3 convolution fuses the effective edge features.
5. The fast magnetic resonance imaging method based on deep learning and edge assistance as claimed in claim 1, characterized in that: the specific method of step 4 is as follows:
(1) coupling and interaction design of the backbone image branch and the auxiliary edge branch to form the reconstruction network architecture:
For the first cascade: first, the shallowest output feature map of the encoder part of the backbone image branch is fed into the auxiliary edge branch. Second, an edge map extracted from the training data with a Sobel operator is concatenated with the feature map from the backbone-branch encoder and fed into the 1st edge extraction and enhancement module; its output is concatenated with the shallowest output feature map of the decoder part of the backbone image branch and fed into the 2nd edge extraction and enhancement module. Third, a skip connection is led from the 1st edge extraction and enhancement module to the 2nd one, strengthening signal flow and improving training stability. Finally, the edge output by the auxiliary edge branch and the image restored by the backbone image branch are added element-wise.
For the subsequent cascades, the edge and the image output by the previous cascade are fed into the auxiliary edge branch and the backbone image branch of the next cascade, respectively. Between cascades, a hard data consistency operation is applied: the image is Fourier transformed to K-space, the acquired points of the undersampled input data directly overwrite the corresponding K-space entries, and an inverse Fourier transform maps the result back to the image domain, so that the acquired, known-correct data hard-constrain the reconstruction result.
(2) training the reconstruction network architecture formed in (1) to obtain a trained deep convolutional reconstruction network in which the backbone image branch and the auxiliary edge branch interact.
6. The fast magnetic resonance imaging method based on deep learning and edge assistance as claimed in claim 5, characterized in that: the specific method of sub-step (2) of step 4 is as follows:
during training, the backbone image branch uses the processed images of the data set as reconstruction targets with an SSIM loss function; for the auxiliary edge branch, an existing edge detection operator is applied to the target image of the backbone image branch to obtain an edge label, which serves as the supervision signal that constrains edge recovery, with an L1 loss function; the multitask loss formed by the two is optimized with an Adam optimizer until the training and validation losses become stable, converged and properly fitted, and the trained weight parameters of the deep convolutional reconstruction network, in which the backbone image branch and the auxiliary edge branch interact, are saved.
7. The fast magnetic resonance imaging method based on deep learning and edge assistance as claimed in claim 1, characterized in that: the specific method of step 5 is as follows:
the trained model is tested and evaluated both qualitatively and quantitatively: each image in the test set is passed through the network using the model weights saved after training, generating a reconstructed image.
8. A fast magnetic resonance image reconstruction system based on deep learning and edge assistance, characterized in that it comprises a power supply, a host computer, a controller, a pulse sequence, an examination bed and a magnet system; the magnet system comprises a magnet, a uniform magnetic field coil, a transmit/receive coil and a gradient coil; the power supply provides the electric energy required for overall operation of the system and is connected to each electric device by wire; the host computer acquires, stores, reconstructs and displays the magnetic resonance images and interacts with the other components through optical fibers;
the controller, as the external hardware control device, is responsible for starting, regulating and stopping each part of the system and is connected to the controlled components through optical fibers; the pulse sequence is a pulse program consisting of radio-frequency pulses and gradient pulses and sets the magnetic resonance acquisition mode; the examination bed serves as the fixed support for the subject while the magnetic resonance data are acquired; and the magnet system provides the electromagnetic environment in which the magnetic resonance signals of the subject are generated and received.
CN202110278962.3A 2021-03-16 2021-03-16 Rapid magnetic resonance imaging method and system based on deep learning and edge assistance Active CN113096207B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110278962.3A CN113096207B (en) 2021-03-16 2021-03-16 Rapid magnetic resonance imaging method and system based on deep learning and edge assistance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110278962.3A CN113096207B (en) 2021-03-16 2021-03-16 Rapid magnetic resonance imaging method and system based on deep learning and edge assistance

Publications (2)

Publication Number Publication Date
CN113096207A true CN113096207A (en) 2021-07-09
CN113096207B CN113096207B (en) 2023-01-13

Family

ID=76667435

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110278962.3A Active CN113096207B (en) 2021-03-16 2021-03-16 Rapid magnetic resonance imaging method and system based on deep learning and edge assistance

Country Status (1)

Country Link
CN (1) CN113096207B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469199A (en) * 2021-07-15 2021-10-01 中国人民解放军国防科技大学 Rapid and efficient image edge detection method based on deep learning
CN116260969A (en) * 2023-05-15 2023-06-13 鹏城实验室 Self-adaptive channel progressive coding and decoding method, device, terminal and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011033422A1 (en) * 2009-09-17 2011-03-24 Koninklijke Philips Electronics N.V. Mr imaging system comprising physiological sensors
US20200034948A1 (en) * 2018-07-27 2020-01-30 Washington University Ml-based methods for pseudo-ct and hr mr image estimation
CN111754403A (en) * 2020-06-15 2020-10-09 南京邮电大学 Image super-resolution reconstruction method based on residual learning
CN112037304A (en) * 2020-09-02 2020-12-04 上海大学 Two-stage edge enhancement QSM reconstruction method based on SWI phase image
US20200410675A1 (en) * 2018-12-13 2020-12-31 Shenzhen Institutes Of Advanced Technology Method and apparatus for magnetic resonance imaging and plaque recognition
CN112164067A (en) * 2020-10-12 2021-01-01 西南科技大学 Medical image segmentation method and device based on multi-mode subspace clustering

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011033422A1 (en) * 2009-09-17 2011-03-24 Koninklijke Philips Electronics N.V. Mr imaging system comprising physiological sensors
US20200034948A1 (en) * 2018-07-27 2020-01-30 Washington University Ml-based methods for pseudo-ct and hr mr image estimation
US20200410675A1 (en) * 2018-12-13 2020-12-31 Shenzhen Institutes Of Advanced Technology Method and apparatus for magnetic resonance imaging and plaque recognition
CN111754403A (en) * 2020-06-15 2020-10-09 南京邮电大学 Image super-resolution reconstruction method based on residual learning
CN112037304A (en) * 2020-09-02 2020-12-04 上海大学 Two-stage edge enhancement QSM reconstruction method based on SWI phase image
CN112164067A (en) * 2020-10-12 2021-01-01 西南科技大学 Medical image segmentation method and device based on multi-mode subspace clustering

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PANG YANWEI et al.: "Lane line semantic segmentation neural network based on edge feature fusion and cross connections", Journal of Tianjin University (Science and Technology) *
HUANG MIN et al.: "Deep magnetic resonance image reconstruction based on K-space data", Journal of Biomedical Engineering Research *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469199A (en) * 2021-07-15 2021-10-01 中国人民解放军国防科技大学 Rapid and efficient image edge detection method based on deep learning
CN116260969A (en) * 2023-05-15 2023-06-13 鹏城实验室 Self-adaptive channel progressive coding and decoding method, device, terminal and medium
CN116260969B (en) * 2023-05-15 2023-08-18 鹏城实验室 Self-adaptive channel progressive coding and decoding method, device, terminal and medium

Also Published As

Publication number Publication date
CN113096207B (en) 2023-01-13

Similar Documents

Publication Publication Date Title
CN108460726B (en) Magnetic resonance image super-resolution reconstruction method based on enhanced recursive residual network
CN109557489B (en) Magnetic resonance imaging method and device
Souza et al. A hybrid, dual domain, cascade of convolutional neural networks for magnetic resonance image reconstruction
CN113096208B (en) Reconstruction method of neural network magnetic resonance image based on double-domain alternating convolution
CN113096207B (en) Rapid magnetic resonance imaging method and system based on deep learning and edge assistance
CN112150568A (en) Magnetic resonance fingerprint imaging reconstruction method based on Transformer model
CN109597012B (en) Single-scanning space-time coding imaging reconstruction method based on residual error network
CN109214989A (en) Single image super resolution ratio reconstruction method based on Orientation Features prediction priori
CN110992440B (en) Weak supervision magnetic resonance rapid imaging method and device
CN111951344A (en) Magnetic resonance image reconstruction method based on cascade parallel convolution network
CN112946545B (en) PCU-Net network-based fast multi-channel magnetic resonance imaging method
CN111353935A (en) Magnetic resonance imaging optimization method and device based on deep learning
CN113971706A (en) Rapid magnetic resonance intelligent imaging method
CN111784792A (en) Rapid magnetic resonance reconstruction system based on double-domain convolution neural network and training method and application thereof
CN114167334A (en) Magnetic resonance image reconstruction method and device and electronic equipment
CN112037304A (en) Two-stage edge enhancement QSM reconstruction method based on SWI phase image
CN116309910A (en) Method for removing Gibbs artifacts of magnetic resonance images
CN115375785A (en) Magnetic resonance image reconstruction method and device based on artificial neural network
Lv et al. Parallel imaging with a combination of sensitivity encoding and generative adversarial networks
CN113538616B (en) Magnetic resonance image reconstruction method combining PUGAN with improved U-net
CN116863024A (en) Magnetic resonance image reconstruction method, system, electronic equipment and storage medium
KR101580532B1 (en) Apparatus and method for magnetic resonance image processing
CN106137199A (en) Broad sense sphere in diffusion magnetic resonance imaging deconvolutes
CN116188610A (en) Multitasking phase preprocessing method and system based on two-way learning three-dimensional network
CN114693823A (en) Magnetic resonance image reconstruction method based on space-frequency double-domain parallel reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant