WO2020134826A1 - Parallel magnetic resonance imaging method and related equipment - Google Patents

Parallel magnetic resonance imaging method and related equipment

Info

Publication number
WO2020134826A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
neural network
channel
undersampled
magnetic resonance
Application number
PCT/CN2019/121508
Other languages
French (fr)
Chinese (zh)
Inventor
梁栋
王珊珊
程慧涛
刘新
郑海荣
Original Assignee
深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology)
Application filed by 深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology)
Publication of WO2020134826A1


Classifications

    • G06T 5/00 — Image enhancement or restoration
    • G01R 33/56 — Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
    • G01R 33/5608 — Data processing and visualization specially adapted for MR, e.g. for feature analysis and pattern recognition on the basis of measured MR data, segmentation of measured MR data, edge contour detection on the basis of measured MR data, for enhancing measured MR data in terms of signal-to-noise ratio by means of noise filtering or apodization, for enhancing measured MR data in terms of resolution by means for deblurring, windowing, zero filling, or generation of gray-scaled images, colour-coded images or images displaying vectors instead of pixels
    • G06N 3/04 — Neural networks; Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G06N 3/08 — Neural networks; Learning methods
    • G06N 3/084 — Backpropagation, e.g. using gradient descent
    • G06T 2207/10088 — Tomographic images; Magnetic resonance imaging [MRI]

Definitions

  • the invention relates to the technical field of magnetic resonance image processing, and more specifically, to a magnetic resonance parallel imaging method and related equipment.
  • The magnetic resonance imaging system uses static magnetic fields and radio-frequency magnetic fields to image human tissue. It provides rich tissue contrast without causing harmful side effects to the human body, and has therefore become a common tool for clinical medical diagnosis.
  • the magnetic resonance imaging system mainly includes two devices: an image acquisition device and a parallel imaging device.
  • the image acquisition device includes multiple parallel channels. Different channels correspond to different acquisition coils. The different acquisition coils are located at different acquisition positions and are used to acquire image data of human tissue from different directions.
  • the parallel imaging device uses the parallel imaging (PI) algorithm, that is, the image data collected by each parallel channel is reconstructed to obtain image data with better display effect.
  • Traditional parallel imaging algorithms fall mainly into two categories. One is based on K-space reconstruction, such as simultaneous acquisition of spatial harmonics (SMASH), generalized autocalibrating partially parallel acquisitions (GRAPPA), and iterative self-consistent parallel imaging reconstruction (SPIRiT).
  • The other is based on image-domain reconstruction, such as sensitivity encoding (SENSE).
  • In addition, the emergence of compressed sensing has considerably improved parallel imaging reconstruction; typical reconstruction algorithms of this kind include L1-SPIRiT.
  • In recent years, deep learning has been applied successfully to fast magnetic resonance reconstruction. At present there are two main deep-learning approaches to parallel imaging: one uses a multi-layer perceptron (MLP), and the other is the variational network (VN), which casts a traditional iterative algorithm as a network.
  • However, the detail of images reconstructed by existing parallel imaging algorithms still needs to be improved. In view of this, the present invention provides a parallel magnetic resonance imaging method to further improve the detail of the reconstructed image.
  • In addition, the present invention also provides related parallel magnetic resonance imaging equipment to ensure the practical application and implementation of the method.
  • the present application provides a parallel magnetic resonance imaging method, including:
  • obtaining undersampled image data acquired in K-space by the magnetic resonance coils of multiple parallel channels;
  • obtaining a pre-built neural network integrated model, wherein the neural network integrated model includes an interconnected neural network and a K-space consistent layer;
  • inputting the undersampled image data of each channel into the neural network integrated model, so that the neural network integrated model performs the following steps:
  • the neural network reconstructs each undersampled image data according to the correlations among the undersampled image data, to obtain the preliminary reconstructed image data corresponding to each channel;
  • the K-space consistent layer performs back-substitution processing on the preliminary reconstructed image data of each channel according to the undersampled image data of that channel, to obtain the target reconstructed image data corresponding to each channel;
  • the target reconstructed image data of all channels are merged to obtain single-channel image data.
  • a parallel magnetic resonance imaging device including:
  • An image data acquisition module for acquiring undersampled image data acquired in parallel by K-space magnetic resonance coils of multiple channels
  • a comprehensive model obtaining module used to obtain a pre-built neural network comprehensive model; wherein the neural network comprehensive model includes interconnected neural networks and a K-space consistent layer;
  • the image data reconstruction module is used to input the undersampled image data of each channel into the neural network integrated model, so that the neural network integrated model performs the following steps:
  • the neural network reconstructs each undersampled image data according to the correlations among the undersampled image data, to obtain the preliminary reconstructed image data corresponding to each channel;
  • the K-space consistent layer performs back-substitution processing on the preliminary reconstructed image data of each channel according to the undersampled image data of that channel, to obtain the target reconstructed image data corresponding to each channel;
  • the image data merging module is used to merge the target reconstruction image data of all channels to obtain single channel image data.
  • the present application provides a parallel magnetic resonance imaging device, including:
  • a receiver, used to receive the undersampled image data acquired in K-space by the magnetic resonance coils of multiple parallel channels;
  • a processor, used to obtain a pre-built neural network integrated model, wherein the neural network integrated model includes an interconnected neural network and a K-space consistent layer, and to input the undersampled image data of each channel into the neural network integrated model, so that the neural network integrated model performs the following steps: the neural network reconstructs each undersampled image data according to the correlations among the undersampled image data, to obtain the preliminary reconstructed image data corresponding to each channel; the K-space consistent layer performs back-substitution processing on the preliminary reconstructed image data of each channel according to the undersampled image data of that channel, to obtain the target reconstructed image data corresponding to each channel; and the target reconstructed image data of all channels are merged to obtain single-channel image data;
  • a display is used to display the single-channel image data.
  • the present application provides a readable storage medium on which a computer program is stored, which is characterized in that, when the computer program is executed by a processor, the above-mentioned parallel magnetic resonance imaging method is realized.
  • The present invention provides a parallel magnetic resonance imaging method which obtains undersampled image data acquired in K-space by the coils of multiple parallel channels and inputs the undersampled image data of each channel into a pre-built neural network integrated model.
  • In addition to the neural network, the model includes a K-space consistent layer.
  • The neural network reconstructs the undersampled image data.
  • The K-space consistent layer performs back-substitution processing on the image data reconstructed by the neural network using the undersampled image data, yielding the target reconstructed image data; finally, the target reconstructed image data of all channels are merged to obtain a single-channel image.
  • Compared with the prior art, the present invention adds a K-space consistent layer to the neural network model; this layer uses the undersampled image data acquired before reconstruction to perform back-substitution processing on the reconstructed image data, thereby retaining more image detail and improving the image reconstruction effect.
  • FIG. 1 is a schematic flowchart of a parallel magnetic resonance imaging method provided by the present invention
  • FIG. 2 is a schematic structural diagram of a neural network integrated module provided by the present invention.
  • FIG. 3 is a schematic flowchart of a training convolutional neural network provided by the present invention.
  • FIG. 4 is a comparison diagram of the imaging effect of the magnetic resonance parallel imaging method provided by the present invention and the traditional parallel imaging algorithm for reconstruction;
  • FIG. 5 is a schematic structural diagram of a magnetic resonance parallel imaging device provided by the present invention.
  • FIG. 6 is another schematic structural diagram of a magnetic resonance parallel imaging device provided by the present invention.
  • FIG. 7 is a schematic diagram of a computer architecture of a magnetic resonance parallel imaging device provided by the present invention.
  • the magnetic resonance imaging system is a system commonly used in the medical field for imaging human tissue.
  • the system mainly includes two devices, an image acquisition device and a parallel imaging device.
  • the image acquisition device includes multiple parallel channels, and each channel uses different positions of magnetic resonance coils to acquire image data of human tissue from different directions.
  • Slow imaging speed has always been a major bottleneck restricting the application of this system, so it is particularly important to increase the scanning speed and reduce the scanning time under the premise that the imaging quality is clinically acceptable.
  • magnetic resonance imaging systems generally use parallel imaging technology, in which the image collected by the image acquisition device is undersampled image data.
  • In the image acquisition device, multiple coils acquire image data simultaneously during one scan, and each coil acquires only the image data of a certain local position of the object; these image data are undersampled image data.
  • the parallel imaging device uses a parallel imaging algorithm to reconstruct the undersampled image data collected in each channel. It can be seen that parallel imaging mainly uses the sensitivity information collected by the coils at different positions to reconstruct the image. It can achieve the purpose of accelerating scanning through a certain degree of undersampling, which is one of the important methods in fast magnetic resonance imaging.
  • the invention provides a parallel magnetic resonance imaging method, which is used in a parallel imaging device and used to improve the detail processing effect of a reconstructed image. See FIG. 1, which shows a schematic flow diagram of a parallel magnetic resonance imaging method, which specifically includes steps S101 to S104.
  • S101: Obtain undersampled image data acquired in K-space by the magnetic resonance coils of multiple parallel channels.
  • In practical applications, when human tissue such as the head needs to be examined, the image data of the human tissue is first collected by the image acquisition device.
  • the image acquisition device includes multiple parallel channels, and the magnetic resonance coil of each channel uses the under-sampling method to collect local information of the human tissue, so that each channel will obtain an under-sampled image data.
  • the number of channels is related to the actual image acquisition device, such as 6 channels, 8 channels, 12 channels, 32 channels, etc.
  • K-space is a domain that represents images and is also the image data collection domain, that is, image data is collected in K-space.
  • Undersampling image data is data collected by undersampling in the image K-space.
  • the image data in K-space is obtained by Fourier transform.
  • the image data in the K-space domain is not an image that can be recognized by the human eye. It needs to undergo inverse Fourier transform to obtain an image that can be recognized by the human eye.
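  • As a minimal, hedged illustration (not part of the patent text) of the relationship between K-space data and a human-readable image, the NumPy sketch below applies the centred inverse and forward 2D Fourier transforms; the function names are illustrative only.

```python
import numpy as np

def kspace_to_image(kspace):
    """Centred inverse 2D FFT: K-space data -> complex image (take the magnitude for display)."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))

def image_to_kspace(image):
    """Centred forward 2D FFT: complex image -> K-space data."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))
```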
  • S102 Obtain a pre-built neural network integrated model; where the neural network integrated model includes interconnected neural networks and a K-space consistent layer.
  • The neural network integrated model is pre-built. Unlike existing neural network models, the neural network integrated model constructed by the present invention includes two parts: one is the neural network, and the other is the K-space consistent layer (data consistency, DC).
  • the neural network needs to be pre-trained by the neural network training algorithm.
  • The role of the K-space consistent layer is to make full use of the K-space data information already acquired in the image data to correct the image reconstructed by the neural network, so that the reconstructed image is more accurate and retains more image detail.
  • The inventors found through research that traditional parallel imaging algorithms are often strongly affected by the number and arrangement of coils, their reconstruction results amplify noise, and the achievable acceleration factor is limited. Compressed-sensing-based parallel imaging (CS-PI) reconstructs well only for incoherent sampling patterns, handles the aliasing artifacts of one-dimensional undersampling poorly, its objective-function weight parameters are difficult to tune, and its iterative solution is relatively time-consuming.
  • The multi-layer perceptron is a fully connected neural network that requires a large number of parameters; moreover, this method reconstructs the image line by line and focuses on removing the aliasing artifacts caused by one-dimensional undersampling,
  • so it places certain restrictions on the sampling pattern.
  • The VN network is one way of casting a traditional iterative algorithm as a network. It needs the sensitivity information of the coil array to be calculated in advance, so its reconstruction result depends on the accuracy of the coil sensitivity estimation.
  • the neural network selected in the present invention is specifically a convolutional neural network (Convolutional Neural Network, CNN).
  • convolutional neural network CNN has advantages in image processing. It is suitable for both one-dimensional and two-dimensional undersampling, and is not affected by the number of traditional auto-calibration signal (ACS) lines.
  • the invention uses the convolutional neural network CNN for parallel magnetic resonance imaging, so that the invention has no special limitation on the sampling mode and is not sensitive to the ACS line.
  • the connection between the neural network and the K-space consistent layer can be a multi-layer cascade.
  • a specific structure of a comprehensive model of a neural network is shown in Figure 2.
  • the comprehensive model of the neural network includes N sub-modules Block.
  • The N sub-modules are connected in sequence: the image reconstruction result output by the previous sub-module is input to the next sub-module, which continues the reconstruction process.
  • N is a known value obtained in advance based on training experience.
  • the sub-module may be referred to as a residual module.
  • Each submodule includes a convolutional neural network CNN and a K-space consistent layer DC connected to each other.
  • The K-space consistent layer is connected after the convolutional neural network CNN to make the image reconstruction result of the convolutional neural network CNN more accurate.
  • the structure of cascading neural networks connected to the K-space consistent layer can make the reconstructed image closer to the label image (that is, fully sampled image data) during the forward propagation process.
  • A specific structure of the convolutional neural network includes M stacked complex convolutional layers, where M is a known value obtained in advance according to training experience. A hedged sketch of this cascaded structure is given below.
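  • The following PyTorch-style sketch shows one possible reading of the cascaded structure: N residual sub-modules, each an M-layer CNN followed by a K-space consistency step (here a simple hard substitution of the acquired samples; a weighted variant is sketched after the DC formula below). All layer sizes, names, and the real/imaginary channel layout are illustrative assumptions, not the patent's reference implementation.

```python
import torch
import torch.nn as nn

def hard_dc(x_img, k0, mask):
    """Replace the K-space samples of the reconstruction with the acquired samples where they exist."""
    k = torch.fft.fft2(x_img)                # image domain -> K-space, per channel
    k = torch.where(mask.bool(), k0, k)      # keep acquired values at sampled positions
    return torch.fft.ifft2(k)                # back to the image domain

class Block(nn.Module):
    """One sub-module: an M-layer CNN followed by the K-space consistency step."""
    def __init__(self, coils, m_layers=5, features=64):
        super().__init__()
        self.coils = coils
        layers, c_in = [], 2 * coils         # real and imaginary parts stacked as channels
        for _ in range(m_layers - 1):
            layers += [nn.Conv2d(c_in, features, 3, padding=1), nn.ReLU(inplace=True)]
            c_in = features
        layers.append(nn.Conv2d(c_in, 2 * coils, 3, padding=1))
        self.cnn = nn.Sequential(*layers)

    def forward(self, xc, k0, mask):
        x = torch.cat([xc.real, xc.imag], dim=1)            # complex -> stacked real channels
        x = x + self.cnn(x)                                  # residual CNN reconstruction step
        xc = torch.complex(x[:, :self.coils], x[:, self.coils:])
        return hard_dc(xc, k0, mask)                         # enforce consistency with acquired K-space

class CascadeNet(nn.Module):
    """N cascaded sub-modules, in the spirit of the structure of Fig. 2."""
    def __init__(self, coils, n_blocks=5):
        super().__init__()
        self.blocks = nn.ModuleList(Block(coils) for _ in range(n_blocks))

    def forward(self, xc, k0, mask):
        for block in self.blocks:
            xc = block(xc, k0, mask)
        return xc
```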
  • S103: Input the undersampled image data of each channel into the neural network integrated model, so that the neural network performs preliminary reconstruction on the undersampled image data and the K-space consistent layer performs back-substitution processing on the preliminary reconstruction results to obtain the target reconstructed image data.
  • the undersampled image data of each channel is input into the neural network integrated model, so that the neural network integrated model executes steps A1 and A2.
  • A1 The neural network reconstructs each under-sampled image data separately according to the correlation between each under-sampled image data to obtain the preliminary reconstructed image data corresponding to each channel;
  • A2: The K-space consistent layer performs back-substitution processing on the preliminary reconstructed image data of each channel according to the undersampled image data of that channel, to obtain the target reconstructed image data corresponding to each channel.
  • the input data of the comprehensive model of the neural network is undersampled image data of multiple parallel channels obtained in step S101.
  • The processing of the neural network integrated model is divided into two parts. In step A1, the neural network first performs a preliminary reconstruction of the undersampled image data; during this reconstruction the correlation of the image data among the multiple coils is fully exploited, and the resulting images are called the preliminary reconstructed image data. In step A2, the K-space consistent layer uses the undersampled image data input to the neural network integrated model to perform back-substitution processing on the preliminary reconstructed image data, obtaining the target reconstructed image data.
  • The back-substitution processing is as follows: for each pixel position in the preliminary reconstructed image data, if the pixel value at that position does not exist in the corresponding undersampled image data (i.e., the position was not sampled), the reconstructed pixel value is retained; if the pixel value at that position does exist in the corresponding undersampled image data, the pixel value in the preliminary reconstructed image data is replaced with the pixel value from the undersampled image data.
  • Expressed as a specific formula, one form of the K-space consistent layer is

$$f_{\mathrm{DC},j}(k) = \begin{cases} f_{l,j}(k), & k \notin S \\[4pt] \dfrac{f_{l,j}(k) + \lambda f_{0,j}(k)}{1 + \lambda}, & k \in S \end{cases}$$

  • where S is the set of all sampled pixel positions of the undersampled image data of the j-th channel; k is the position of any pixel in the preliminary reconstructed image data output by the neural network integrated model; f_{l,j}(k) is the K-space value of the preliminary reconstructed image data of the j-th channel at position k; λ is a preset weighting factor; and f_{0,j}(k) is the K-space value of the input undersampled image data of the j-th channel at position k.
  • When the acquired data are treated as noise-free (λ → ∞), the K-space consistent layer reduces to direct substitution of the acquired samples:

$$f_{\mathrm{DC},j}(k) = \begin{cases} f_{l,j}(k), & k \notin S \\[2pt] f_{0,j}(k), & k \in S \end{cases}$$
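  • A hedged NumPy sketch of this K-space consistency operation (covering both the weighted form and the direct-substitution form) might look as follows; the variable names are illustrative, not taken from the patent.

```python
import numpy as np

def k_space_consistency(f_rec, f_acq, mask, lam=None):
    """Apply the K-space consistent layer to one channel.

    f_rec : K-space of the preliminary reconstructed image data, f_{l,j}(k)
    f_acq : K-space of the acquired undersampled image data, f_{0,j}(k)
    mask  : boolean array, True at the sampled positions k in S
    lam   : weighting factor; None means the acquired data are trusted completely
    """
    out = f_rec.copy()
    if lam is None:                      # hard substitution (noise-free assumption)
        out[mask] = f_acq[mask]
    else:                                # weighted combination of reconstruction and acquisition
        out[mask] = (f_rec[mask] + lam * f_acq[mask]) / (1.0 + lam)
    return out
```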
  • The reconstruction of the undersampled image data by the neural network alone may not be good enough, for two reasons: first, the trained neural network may not be perfectly accurate; second, pixels that were actually acquired in the undersampled image data may become distorted after being reconstructed by the neural network.
  • By back-substituting the acquired data, the reconstruction effect of the preliminary reconstructed image data can therefore be improved.
  • the image data processed by the above K-space consistent layer are all image data in K-space.
  • this step is to process the undersampled image data of each channel separately to obtain the target reconstructed image data corresponding to each channel respectively.
  • S104 Combine target reconstruction image data of all channels to obtain single-channel image data.
  • The target reconstructed image data of all channels can be combined, for example through adaptive coil combination or sum-of-squares combination, to obtain the single-channel image data; a sum-of-squares sketch is given below.
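  • For example, a sum-of-squares combination of the per-channel target reconstructed images can be sketched as follows (a common convention, shown here only as an assumption about the merging step, not as the patent's prescribed method):

```python
import numpy as np

def sum_of_squares_combine(channel_images):
    """Merge per-channel complex images (shape: coils x H x W) into one magnitude image."""
    return np.sqrt(np.sum(np.abs(channel_images) ** 2, axis=0))
```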
  • The present invention provides a parallel magnetic resonance imaging method which obtains undersampled image data acquired in K-space by the coils of multiple parallel channels and inputs the undersampled image data of each channel into a pre-built neural network integrated model.
  • In addition to the neural network, the model includes a K-space consistent layer.
  • The neural network reconstructs the undersampled image data.
  • The K-space consistent layer performs back-substitution processing on the image data reconstructed by the neural network using the undersampled image data, yielding the target reconstructed image data; finally, the target reconstructed image data of all channels are merged to obtain a single-channel image.
  • Compared with the prior art, the present invention adds a K-space consistent layer to the neural network model; this layer uses the undersampled image data acquired before reconstruction to perform back-substitution processing on the reconstructed image data, thereby retaining more image detail and improving the image reconstruction effect.
  • the present invention designs a novel cascaded multi-channel network based on Convolutional Neural Network (CNN) for parallel magnetic resonance imaging.
  • the learning of the image domain and the consistency of the collected K-space data are considered in the network structure, which can progressively improve the image quality in the cascade network without losing the information of the collected data.
  • the present invention adopts a complex convolution operation for rationally processing real and imaginary parts of complex magnetic resonance data.
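  • One common way to realize a complex convolution with real-valued kernels is sketched below; this is an assumption about how the real and imaginary parts might be combined, not the patent's exact operator. For a complex input x = a + ib and a complex kernel W = U + iV, the product is W * x = (U*a − V*b) + i(U*b + V*a).

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution from two real convolutions: (U + iV) * (a + ib) = (Ua - Vb) + i(Ub + Va)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)  # U
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)  # V

    def forward(self, x):
        a, b = x.real, x.imag
        real = self.conv_r(a) - self.conv_i(b)
        imag = self.conv_r(b) + self.conv_i(a)
        return torch.complex(real, imag)
```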
  • the following specifically describes the training process of the neural network in the neural network synthesis model.
  • the neural network is specifically a convolutional neural network.
  • a specific training process of the convolutional neural network includes the following steps S301 to S304.
  • S301: Obtain the fully sampled image data of each channel. During training, the fully sampled image data are used to produce the undersampled image data, so the fully sampled image data are obtained first; the fully sampled data serve as the label data in the training process of the convolutional neural network.
  • In one example, more than 3000 two-dimensional magnetic resonance brain images were acquired as fully sampled image data with a 12-channel head coil on a 3T magnetic resonance scanner (SIEMENS MAGNETOM Trio Tim).
  • Each two-dimensional magnetic resonance brain image contains the fully sampled image data acquired by each of the 12 channels, i.e., more than 3000 × 12 pieces of fully sampled image data are obtained.
  • S302 Undersampling the fully sampled image data of each channel to obtain the undersampled image data samples of each channel.
  • This step can be referred to as the production step of undersampled image data.
  • Undersampling processing is the process of simulating the acquisition of undersampling image data.
  • the full sampling image data of each channel must be subjected to the above undersampling processing to obtain the undersampling image data corresponding to each full sampling image data. Since the obtained under-sampled image data is used as a training sample for training the convolutional neural network, it can be called an under-sampled image data sample.
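  • A hedged sketch of this retrospective undersampling step is given below, using a one-dimensional random pattern with a fully sampled ACS region (the 1D random, roughly 3x pattern mentioned for the experiments is only one possibility; all parameter values are assumptions):

```python
import numpy as np

def make_1d_random_mask(h, w, accel=3, acs_lines=24, seed=0):
    """Build a phase-encoding undersampling mask: random lines plus a fully sampled ACS centre."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((h, w), dtype=bool)
    keep = rng.choice(h, size=max(h // accel, acs_lines), replace=False)
    mask[keep, :] = True
    centre = slice(h // 2 - acs_lines // 2, h // 2 + acs_lines // 2)
    mask[centre, :] = True                      # auto-calibration signal (ACS) region
    return mask

def undersample(full_kspace, mask):
    """Simulate acquisition: zero out the K-space samples that were not collected."""
    return full_kspace * mask
```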
  • In addition, each undersampled image data sample may be normalized in modulus according to the formula

$$X_k = \frac{I_k}{\max\{\mathrm{abs}(X)\}},$$

  • where X_k denotes the modulus of the pixel value at position k after normalization of the undersampled image data;
  • I_k denotes the modulus of the pixel value at position k of the undersampled image data;
  • max{abs(X)} denotes the maximum modulus over all pixel values of the undersampled image data X.
  • the under-sampled image data collected by the magnetic resonance coil is a complex image, that is, the pixel values of pixels in the image are of a complex type.
  • Each pixel in the undersampled image data is subjected to the normalization of the modulus value to normalize the modulus value of each pixel in the undersampled image data to (0,1).
  • the pixel value of the pixels may be relatively large, and the normalized processing of the modulus value can unify the magnitude of the pixel value of each under-sampled image data.
  • the normalization of the modulus value can ensure the effectiveness and convergence of training, which is a very important step in data processing.
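  • A corresponding modulus-normalization sketch (dividing every complex pixel by the largest pixel modulus, so that all magnitudes fall into (0, 1]) is shown below as an assumption about the preprocessing:

```python
import numpy as np

def normalize_modulus(x):
    """Scale a complex undersampled image so that its largest pixel modulus becomes 1."""
    return x / np.max(np.abs(x))
```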
  • In step S303, training is then performed on the undersampled image data samples that have undergone modulus normalization.
  • S303 Use a convolutional neural network training algorithm to train the undersampled image data samples to obtain the weight parameter with the smallest loss value.
  • the convolutional neural network training algorithm is an existing training algorithm, and the present invention will not repeat them in detail.
  • the convolutional neural network training process is the process of calculating the most suitable weight parameters in the convolutional neural network.
  • the training process includes multiple forward propagation steps and multiple back propagation (BP) steps.
  • the forward propagation step is to use the known weight parameters to calculate the output value of the convolutional neural network in the forward direction (the output value is the reconstruction data).
  • a loss function is used to compare the reconstructed data with the label data, and the weight parameters in the convolutional neural network are updated through the comparison result.
  • Backpropagation constantly updates the weight parameters by calculating the gradient of the loss function.
  • the result of the update is that the difference between the output value of the convolutional neural network calculated by the loss function and the label data is the minimum value.
  • the function of the loss function is to compare the difference between the processing result of the undersampled image data of the convolutional neural network and the fully sampled image data corresponding to the undersampled image data. The value is the loss value in this step. The larger the loss value, the greater the gap between the two image data, which means that the reconstruction result of the convolutional neural network is not good enough, so the reconstruction result needs to be fed back to the convolutional neural network. If the loss value reaches the minimum value, the final value of the weight parameter can be determined, and the training process ends.
  • The back-propagation process can be expressed as $\hat{\Theta} = \arg\min_{\Theta} J(x, y)$, where $\hat{\Theta}$ denotes the weight parameters obtained by training, J(x, y) is the loss function, x is the undersampled image data input to the convolutional neural network, and y is the fully sampled image data corresponding to the undersampled image data. It should be noted that the convolutional neural network has multiple layers and each layer has its own weight parameters, so the finally obtained weight parameters $\hat{\Theta}$ include multiple groups of values.
  • An L-layer convolutional neural network can be expressed as the recursion

$$C_0 = X_u, \qquad C_l = \sigma\left(\Omega_l * C_{l-1} + b_l\right), \quad l = 1, 2, \ldots, L,$$

  • where σ denotes the nonlinear activation applied in each layer, and:
  • C_0 is the image data input to the first layer of the convolutional neural network, i.e., the undersampled image data sample X_u;
  • X_u is the undersampled image data sample;
  • C_L is the image data reconstructed by the L-th layer of the convolutional neural network;
  • Ω_L is the convolution kernel of dimension FW_L × FH_L × K_{L-1} × K_L;
  • FW_L × FH_L is the size of the convolution kernel of the L-th convolutional layer;
  • K_{L-1} is the number of feature maps of the (L-1)-th convolutional layer;
  • K_L is the number of feature maps of the L-th convolutional layer;
  • C_{L-1} is the image data reconstructed by the (L-1)-th convolutional layer;
  • b_L is the bias of dimension K_L.
  • Training the convolutional neural network is the process of training the weight parameters Θ, where Θ contains L groups of values of Ω and b, namely Θ = {Ω_1, …, Ω_L, b_1, …, b_L}.
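  • Read literally, the recursion above can be sketched with explicit per-layer kernels Ω_l and biases b_l (PyTorch functional form; the ReLU nonlinearity and tensor shapes are assumptions, and in practice the activation on the final layer is often dropped):

```python
import torch
import torch.nn.functional as F

def cnn_forward(x_u, weights, biases):
    """Evaluate C_L with C_0 = X_u and C_l = sigma(Omega_l * C_{l-1} + b_l) for l = 1..L."""
    c = x_u                                               # C_0: the undersampled image data sample
    for omega, b in zip(weights, biases):                 # omega: (K_L, K_{L-1}, FH_L, FW_L), b: (K_L,)
        c = torch.relu(F.conv2d(c, omega, b, padding=omega.shape[-1] // 2))
    return c                                              # C_L: the reconstructed image data
```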
  • the specific loss function used in the present invention will be described below.
  • the loss function will supervise the network training process, and different loss functions have different effects on the network training.
  • The loss value is calculated by a loss function; the loss function used in the present invention is the mean absolute error function

$$J(x, y) = \frac{1}{M} \sum_{m=1}^{M} \left| C(x_m; \Theta) - y_m \right|,$$

  • where M is the number of undersampled image data of the same channel input to the convolutional neural network in one batch;
  • m is the index of an undersampled image data of the same channel input to the convolutional neural network in one batch;
  • C(x_m; Θ) is the image data reconstructed by the convolutional neural network for the undersampled image data with index m;
  • y_m is the fully sampled image data corresponding to the undersampled image data with index m.
  • The training samples may be input in batches, i.e., each input contains multiple training samples for each channel. For example, if the number of channels is 12 and a batch contains 4 training samples per channel, then 12 × 4 undersampled image data are input each time.
  • The loss function J(x, y) used in the present invention can specifically be the mean absolute error (MAE) function. Compared with a mean-squared-error loss, MAE not only ensures that the training result suppresses noise well, but also maintains image detail slightly better. A training-loop sketch using this loss is given below.
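  • A minimal training-loop sketch under these assumptions is shown below: MAE (L1) loss on the stacked real/imaginary channels, with a gradient-based optimizer such as Adam standing in for the unspecified update rule; the model and the data loader are placeholders, not the patent's implementation.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=50, lr=1e-4, device="cuda"):
    """Train a reconstruction network with the mean absolute error (L1) loss."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    mae = nn.L1Loss()
    for _ in range(epochs):
        for x_u, y_full in loader:                 # undersampled input and fully sampled label
            x_u, y_full = x_u.to(device), y_full.to(device)
            y_rec = model(x_u)                     # forward propagation: C(x_m; theta)
            loss = mae(y_rec, y_full)              # J = (1/M) * sum_m |C(x_m; theta) - y_m|
            optimizer.zero_grad()
            loss.backward()                        # back-propagation of the loss gradient
            optimizer.step()                       # update the weight parameters
    return model
```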
  • S304: After training, the weight parameters in the convolutional neural network are set to the finally trained values $\hat{\Theta}$, which yields the constructed convolutional neural network $C(x; \hat{\Theta})$, where x is the undersampled image data to be input and $\hat{\Theta}$ are the trained weight parameters of the convolutional neural network.
  • the trained convolutional neural network can reconstruct any input undersampled image data in practical applications.
  • The inventors carried out a comparative experiment. Specifically, for the same experimental sample data obtained with a one-dimensional random (1D random) 3× undersampling pattern, reconstruction was performed with traditional parallel imaging algorithms and with the algorithm provided by the present invention; the reconstruction results are shown in FIG. 4.
  • The images in FIG. 4 are, in order: the original image and the sampling pattern; the SPIRiT reconstruction result and its corresponding error map; the L1-SPIRiT reconstruction result and its corresponding error map; and the reconstruction result obtained with the algorithm of the present invention and its corresponding error map.
  • the parallel magnetic resonance imaging method provided by the present invention is relatively thorough in removing aliasing artifacts caused by one-dimensional undersampling, and has the best noise suppression effect, which is closest to the original image. It can be seen that, compared with the traditional parallel imaging algorithm, the magnetic resonance parallel imaging method proposed by the present invention has obvious advantages.
  • the magnetic resonance parallel imaging apparatus may include: an image data obtaining module 501, an integrated model obtaining module 502, an image data reconstruction module 503, and an image data merging module 504.
  • the image data obtaining module 501 is used to obtain the under-sampled image data acquired by the parallel multiple channel magnetic resonance coils in the K space;
  • the integrated model obtaining module 502 is used to obtain a pre-built neural network integrated model; wherein the neural network integrated model includes interconnected neural networks and a K-space consistent layer;
  • the image data reconstruction module 503 is used to input the undersampled image data of each channel into the neural network integrated model, so that the neural network integrated model performs the following steps:
  • the neural network reconstructs each of the under-sampled image data according to the correlation between each of the under-sampled image data to obtain preliminary reconstructed image data corresponding to each channel;
  • the K-space consistent layer performs back-substitution processing on the preliminary reconstructed image data of each channel according to the undersampled image data of that channel, to obtain the target reconstructed image data corresponding to each channel;
  • the image data merging module 504 is used to merge the target reconstructed image data of all channels to obtain single channel image data.
  • the comprehensive model of the neural network includes a plurality of cascaded residual modules, and each residual module includes a connected neural network and a K-space consistent layer.
  • the neural network specifically includes a convolutional neural network.
  • the present invention also provides another structure of a parallel magnetic resonance imaging device.
  • the magnetic resonance parallel imaging device may further include a neural network training module 505 based on the structure shown in FIG. 5.
  • the neural network training module 505 is used to train the convolutional neural network in the integrated model of the neural network.
  • the neural network training module 505 includes:
  • Full sampling data acquisition sub-module used to obtain the full sampling image data of each channel
  • Undersampling data acquisition sub-module used to undersampling the fully sampled image data of each channel to obtain the undersampled image data samples of each channel;
  • a weight parameter training sub-module used to train the undersampled image data samples using a convolutional neural network training algorithm to obtain weight parameters with the smallest loss value
  • a convolutional network determination sub-module, for constructing the convolutional neural network $C(x; \hat{\Theta})$ using the trained weight parameters,
  • where x is the undersampled image data to be input and $\hat{\Theta}$ are the trained weight parameters of the convolutional neural network.
  • Specifically, the convolutional neural network is expressed as $C_0 = X_u$, $C_l = \sigma(\Omega_l * C_{l-1} + b_l)$, $l = 1, 2, \ldots, L$, where:
  • C_0 is the image data input to the first layer of the convolutional neural network, i.e., the undersampled image data sample X_u;
  • X_u is the undersampled image data sample;
  • C_L is the image data reconstructed by the L-th layer of the convolutional neural network;
  • Ω_L is the convolution kernel of dimension FW_L × FH_L × K_{L-1} × K_L;
  • FW_L × FH_L is the size of the convolution kernel of the L-th convolutional layer;
  • K_{L-1} is the number of feature maps of the (L-1)-th convolutional layer;
  • K_L is the number of feature maps of the L-th convolutional layer;
  • C_{L-1} is the image data reconstructed by the (L-1)-th convolutional layer;
  • b_L is the bias of dimension K_L.
  • The loss value is calculated by a loss function; the loss function is the mean absolute error function

$$J(x, y) = \frac{1}{M} \sum_{m=1}^{M} \left| C(x_m; \Theta) - y_m \right|,$$

  • where M is the number of undersampled image data of the same channel input to the convolutional neural network in one batch;
  • m is the index of an undersampled image data of the same channel input to the convolutional neural network in one batch;
  • C(x_m; Θ) is the image data reconstructed by the convolutional neural network for the undersampled image data with index m;
  • y_m is the fully sampled image data corresponding to the undersampled image data with index m.
  • The K-space consistent layer is specifically

$$f_{\mathrm{DC},j}(k) = \begin{cases} f_{l,j}(k), & k \notin S \\[4pt] \dfrac{f_{l,j}(k) + \lambda f_{0,j}(k)}{1 + \lambda}, & k \in S \end{cases}$$

  • where S is the set of all sampled pixel positions of the undersampled image data of the j-th channel; k is the position of any pixel in the preliminary reconstructed image data output by the neural network integrated model; f_{l,j}(k) is the K-space value of the preliminary reconstructed image data of the j-th channel at position k; λ is the preset weighting factor; and f_{0,j}(k) is the K-space value of the input undersampled image data of the j-th channel at position k.
  • The neural network training module 505 further includes a modulus normalization processing sub-module.
  • The modulus normalization processing sub-module is used to normalize the modulus of each undersampled image data sample, before the convolutional neural network training algorithm is used to train the undersampled image data samples, according to the formula

$$X_k = \frac{I_k}{\max\{\mathrm{abs}(X)\}},$$

  • where X_k denotes the modulus of the pixel value at position k after normalization of the undersampled image data;
  • I_k denotes the modulus of the pixel value at position k of the undersampled image data;
  • max{abs(X)} denotes the maximum modulus over all pixel values of the undersampled image data X.
  • FIG. 7 shows a magnetic resonance parallel imaging device provided by the present application, which specifically includes: a memory 701, a receiver 702, a processor 703, a display 704, and a communication bus 705.
  • the memory 701, the receiver 702, the processor 703, and the display 704 communicate with each other through the communication bus 705.
  • The memory 701 is used to store programs; the memory 701 may include a high-speed RAM memory, and may further include a non-volatile memory, for example at least one magnetic disk memory.
  • the receiver 702 is used to connect with the image acquisition device, and is used to receive the under-sampled image data acquired in the K-space by the magnetic resonance coils of multiple parallel channels in the image acquisition device.
  • the processor 703 is configured to execute a program.
  • the program may include a program code, and the program code includes an operation instruction of the processor.
  • the program can be specifically used for:
  • obtaining a pre-built neural network integrated model, wherein the neural network integrated model includes an interconnected neural network and a K-space consistent layer; inputting the undersampled image data of each channel into the neural network integrated model, so that the neural network integrated model performs the following steps: the neural network reconstructs each undersampled image data according to the correlations among the undersampled image data, to obtain the preliminary reconstructed image data corresponding to each channel; the K-space consistent layer performs back-substitution processing on the preliminary reconstructed image data of each channel according to the undersampled image data of that channel, to obtain the target reconstructed image data corresponding to each channel; and the target reconstructed image data of all channels are merged to obtain single-channel image data.
  • The processor 703 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
  • processor may execute various steps related to the above-mentioned parallel magnetic resonance imaging method, which is not repeated here.
  • the display 704 is used to display the single-channel image data.
  • The present application also provides a readable storage medium on which a computer program is stored; the computer program can be executed by a processor to implement the steps of the above-mentioned parallel magnetic resonance imaging method.

Abstract

A parallel magnetic resonance imaging method, comprising: obtaining undersampled image data acquired in K-space by the magnetic resonance coils of multiple parallel channels (S101); obtaining a pre-built neural network integrated model, wherein the neural network integrated model comprises an interconnected neural network and a K-space consistent layer (S102); inputting the undersampled image data of each channel into the neural network integrated model, so that the neural network initially reconstructs the undersampled image data and the K-space consistent layer performs back-substitution processing on the initial reconstruction result to obtain target reconstructed image data (S103); and merging the target reconstructed image data of all channels to obtain single-channel image data (S104). The method adds a K-space consistent layer to the neural network model; this layer uses the undersampled image data acquired before reconstruction to perform back-substitution processing on the reconstructed image data, thereby retaining more image detail and improving the image reconstruction effect. Related equipment for parallel magnetic resonance imaging is also provided.

Description

Parallel magnetic resonance imaging method and related equipment
Technical field
The present invention relates to the technical field of magnetic resonance image processing, and more specifically to a parallel magnetic resonance imaging method and related equipment.
Background
The magnetic resonance imaging system uses static magnetic fields and radio-frequency magnetic fields to image human tissue. It provides rich tissue contrast without causing harmful side effects to the human body, and has therefore become a common tool for clinical medical diagnosis.
The magnetic resonance imaging system mainly includes two devices: an image acquisition device and a parallel imaging device. The image acquisition device includes multiple parallel channels; different channels correspond to different acquisition coils, and the different acquisition coils are located at different acquisition positions and acquire image data of the human tissue from different directions. The parallel imaging device applies a parallel imaging (PI) algorithm, i.e., the image data collected by the parallel channels are reconstructed to obtain image data of better quality.
Traditional parallel imaging algorithms fall mainly into two categories. One is based on K-space reconstruction, such as simultaneous acquisition of spatial harmonics (SMASH), generalized autocalibrating partially parallel acquisitions (GRAPPA), and iterative self-consistent parallel imaging reconstruction (SPIRiT). The other is based on image-domain reconstruction, such as sensitivity encoding (SENSE). In addition, the emergence of compressed sensing has considerably improved parallel imaging reconstruction; typical reconstruction algorithms of this kind include L1-SPIRiT. In recent years, deep learning has been applied successfully to fast magnetic resonance reconstruction. At present there are two main deep-learning approaches to parallel imaging: one uses a multi-layer perceptron (MLP), and the other is the variational network (VN), which casts a traditional iterative algorithm as a network.
However, for images reconstructed by existing parallel imaging algorithms, the processing of image detail still needs to be improved.
发明内容Summary of the invention
有鉴于此,本发明提供了一种磁共振并行成像方法,用以进一步提高重建图像的细节处理效果。另外,本发明还提供了磁共振并行成像相关设备, 用以保证所述方法在实际中的应用及实现。In view of this, the present invention provides a parallel magnetic resonance imaging method to further improve the detail processing effect of the reconstructed image. In addition, the present invention also provides magnetic resonance parallel imaging related equipment to ensure the practical application and implementation of the method.
为实现所述目的,本发明提供的技术方案如下:To achieve the stated objective, the technical solutions provided by the present invention are as follows:
第一方面,本申请提供了一种磁共振并行成像方法,包括:In the first aspect, the present application provides a parallel magnetic resonance imaging method, including:
获得并行的多个通道的磁共振线圈在K空间上采集的欠采样图像数据;Obtain under-sampled image data acquired in parallel by K-space for multiple channels of magnetic resonance coils;
获得预先构建的神经网络综合模型;其中所述神经网络综合模型包括相互连接的神经网络以及K空间一致层;Obtain a pre-built neural network integrated model; wherein the neural network integrated model includes interconnected neural networks and a K-space consistent layer;
将各个通道的欠采样图像数据输入至所述神经网络综合模型中,以使所述神经网络综合模型执行下述步骤:Input the undersampled image data of each channel into the neural network integrated model, so that the neural network integrated model performs the following steps:
神经网络依据各个所述欠采样图像数据之间的相关性,分别对每个所述欠采样图像数据进行重建,得到每个通道对应的初步重建图像数据;The neural network reconstructs each of the under-sampled image data according to the correlation between each of the under-sampled image data to obtain preliminary reconstructed image data corresponding to each channel;
K空间一致层分别依据每个通道的欠采样图像数据,对每个通道的初步重建图像数据进行回代处理,得到每个通道对应的目标重建图像数据;The K-space consistent layer performs the generation processing on the preliminary reconstructed image data of each channel according to the undersampled image data of each channel, to obtain the target reconstructed image data corresponding to each channel;
合并所有通道的目标重建图像数据,以得到单通道的图像数据。The target reconstructed image data of all channels is merged to obtain single-channel image data.
In a second aspect, the present application provides a parallel magnetic resonance imaging apparatus, including:
an image data obtaining module, used to obtain undersampled image data acquired in K-space by the magnetic resonance coils of multiple parallel channels;
an integrated model obtaining module, used to obtain a pre-built neural network integrated model, wherein the neural network integrated model includes an interconnected neural network and a K-space consistent layer;
an image data reconstruction module, used to input the undersampled image data of each channel into the neural network integrated model, so that the neural network integrated model performs the following steps:
the neural network reconstructs each undersampled image data according to the correlations among the undersampled image data, to obtain the preliminary reconstructed image data corresponding to each channel;
the K-space consistent layer performs back-substitution processing on the preliminary reconstructed image data of each channel according to the undersampled image data of that channel, to obtain the target reconstructed image data corresponding to each channel;
an image data merging module, used to merge the target reconstructed image data of all channels to obtain single-channel image data.
In a third aspect, the present application provides a parallel magnetic resonance imaging device, including:
a receiver, used to receive the undersampled image data acquired in K-space by the magnetic resonance coils of multiple parallel channels;
a processor, used to obtain a pre-built neural network integrated model, wherein the neural network integrated model includes an interconnected neural network and a K-space consistent layer, and to input the undersampled image data of each channel into the neural network integrated model, so that the neural network integrated model performs the following steps: the neural network reconstructs each undersampled image data according to the correlations among the undersampled image data, to obtain the preliminary reconstructed image data corresponding to each channel; the K-space consistent layer performs back-substitution processing on the preliminary reconstructed image data of each channel according to the undersampled image data of that channel, to obtain the target reconstructed image data corresponding to each channel; and the target reconstructed image data of all channels are merged to obtain single-channel image data;
a display, used to display the single-channel image data.
In a fourth aspect, the present application provides a readable storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the above parallel magnetic resonance imaging method is realized.
It can be seen from the above technical solutions that the present invention provides a parallel magnetic resonance imaging method which obtains undersampled image data acquired in K-space by the coils of multiple parallel channels and inputs the undersampled image data of each channel into a pre-built neural network integrated model. In addition to the neural network, the model includes a K-space consistent layer. The neural network reconstructs the undersampled image data, and the K-space consistent layer performs back-substitution processing on the image data reconstructed by the neural network using the undersampled image data, yielding the target reconstructed image data; finally, the target reconstructed image data of all channels are merged to obtain a single-channel image. Compared with the prior art, the present invention adds a K-space consistent layer to the neural network model; this layer uses the undersampled image data acquired before reconstruction to perform back-substitution processing on the reconstructed image data, thereby retaining more image detail and improving the image reconstruction effect.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from the provided drawings without creative effort.
FIG. 1 is a schematic flowchart of a parallel magnetic resonance imaging method provided by the present invention;
FIG. 2 is a schematic structural diagram of a neural network integrated model provided by the present invention;
FIG. 3 is a schematic flowchart of training a convolutional neural network provided by the present invention;
FIG. 4 is a comparison of the imaging effect of the parallel magnetic resonance imaging method provided by the present invention and traditional parallel imaging algorithms;
FIG. 5 is a schematic structural diagram of a parallel magnetic resonance imaging apparatus provided by the present invention;
FIG. 6 is another schematic structural diagram of a parallel magnetic resonance imaging apparatus provided by the present invention;
FIG. 7 is a schematic diagram of a computer architecture of a parallel magnetic resonance imaging device provided by the present invention.
具体实施方式detailed description
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。The technical solutions in the embodiments of the present invention will be described clearly and completely in conjunction with the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, but not all of the embodiments. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present invention.
磁共振成像系统是医学领域中常用的一种用于对人体组织成像的系统。该系统主要包括两个装置,图像采集装置及并行成像装置。The magnetic resonance imaging system is a system commonly used in the medical field for imaging human tissue. The system mainly includes two devices, an image acquisition device and a parallel imaging device.
图像采集装置包括多个并行的通道,每个通道使用不同位置的磁共振线圈从不同方向采集人体组织的图像数据。成像速度慢一直是制约该系统应用的一大瓶颈,因此如何在成像质量为临床可接受的前提下,提高扫描速度从而减少扫描时间尤为重要。The image acquisition device includes multiple parallel channels, and each channel uses different positions of magnetic resonance coils to acquire image data of human tissue from different directions. Slow imaging speed has always been a major bottleneck restricting the application of this system, so it is particularly important to increase the scanning speed and reduce the scanning time under the premise that the imaging quality is clinically acceptable.
为了实现这个目的,磁共振成像系统普遍采用并行成像技术,在该项技术中图像采集装置所采集的图像为欠采样图像数据。具体地,图像采集装置在一次扫描过程中,多个线圈同时采集图像数据,每个线圈只采集对象某个局部位置的图像数据,这些图像数据为欠采样图像数据。同时,并行成像装置使用并行成像算法,对每个通道采集到的欠采样图像数据进行重建。可见,并行成像主要利用不同位置的线圈采集的敏感度信息进行图像重建,可以通过一定程度的欠采样达到加速扫描的目的,是快速磁共振成像中的重要方法之一。To achieve this, magnetic resonance imaging systems generally use parallel imaging technology, in which the image collected by the image acquisition device is undersampled image data. Specifically, in an image acquisition device, multiple coils simultaneously acquire image data during one scan, and each coil only acquires image data of a certain local position of the object. These image data are undersampled image data. At the same time, the parallel imaging device uses a parallel imaging algorithm to reconstruct the undersampled image data collected in each channel. It can be seen that parallel imaging mainly uses the sensitivity information collected by the coils at different positions to reconstruct the image. It can achieve the purpose of accelerating scanning through a certain degree of undersampling, which is one of the important methods in fast magnetic resonance imaging.
本发明提供了一种磁共振并行成像方法,该方法用于并行成像装置,用于提高重建图像的细节处理效果。见图1,其示出了磁共振并行成像方法的一个流程示意,具体包括步骤S101~S104。The invention provides a parallel magnetic resonance imaging method, which is used in a parallel imaging device and used to improve the detail processing effect of a reconstructed image. See FIG. 1, which shows a schematic flow diagram of a parallel magnetic resonance imaging method, which specifically includes steps S101 to S104.
S101:获得并行的多个通道的磁共振线圈在K空间上采集的欠采样图像数据。S101: Obtain under-sampled image data acquired in parallel by the magnetic resonance coils of multiple channels in K-space.
在实际应用中,当需要检测人体组织如头部情况时,首先由图像采集装置采集人体组织的图像数据。前已述及,图像采集装置包括多个并行的通道,每个通道的磁共振线圈都会使用欠采样方式采集该人体组织的局部信息,从而每个通道都会得到一张欠采样图像数据。通道个数与实际应用的图像采集装置有关,如可以是6路、8路、12路、32路等等。In practical applications, when it is necessary to detect the condition of human tissues such as the head, the image data of the human tissues is collected by the image acquisition device first. As mentioned above, the image acquisition device includes multiple parallel channels, and the magnetic resonance coil of each channel uses the under-sampling method to collect local information of the human tissue, so that each channel will obtain an under-sampled image data. The number of channels is related to the actual image acquisition device, such as 6 channels, 8 channels, 12 channels, 32 channels, etc.
需要说明的是,K空间是表示图像的一个域,同时也是图像数据的采集域,即在K空间上采集图像数据。欠采样图像数据是在图像K空间上通过欠采样方式采集到的数据。K空间上的图像数据是经过傅里叶变换得到的,K空间这个域表示的图像数据并非人肉眼能识别的图像,需要经过傅里叶逆变换,才能得到人肉眼可以识别的图像。It should be noted that K-space is a domain that represents images and is also the image data collection domain, that is, image data is collected in K-space. Undersampling image data is data collected by undersampling in the image K-space. The image data in K-space is obtained by Fourier transform. The image data in the K-space domain is not an image that can be recognized by the human eye. It needs to undergo inverse Fourier transform to obtain an image that can be recognized by the human eye.
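For illustration of this relationship, a minimal sketch (assuming 2D K-space data stored as a complex NumPy array; the function name is ours and not part of the disclosure) of recovering a viewable image from K-space could be:

```python
import numpy as np

def kspace_to_image(k_data):
    # K-space data are not directly viewable; an inverse Fourier transform is needed.
    # The magnitude is taken because the transform result is complex-valued.
    return np.abs(np.fft.ifft2(k_data))
```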
S102:获得预先构建的神经网络综合模型;其中神经网络综合模型包括相互连接的神经网络以及K空间一致层。S102: Obtain a pre-built neural network integrated model; where the neural network integrated model includes interconnected neural networks and a K-space consistent layer.
The neural network synthesis model is built in advance. Unlike existing neural network models, the model constructed by the present invention includes two parts: a neural network and a K-space data-consistency layer (Data Consistency, DC). The neural network is trained beforehand by a neural network training algorithm. The role of the K-space consistency layer is to make full use of the already acquired K-space data to correct the image reconstructed by the neural network, so that the reconstructed image is more accurate and retains more image detail.
Through research, the inventors found that traditional parallel imaging algorithms are often strongly affected by the number and arrangement of coils, their reconstruction results amplify noise, and the attainable acceleration factor is limited. Parallel imaging algorithms based on compressed sensing (CS-PI) only reconstruct well for incoherent sampling patterns, perform poorly on the aliasing artifacts caused by one-dimensional undersampling, have weight parameters in the objective function that are difficult to tune, and involve a somewhat lengthy iterative solution process.
Among deep-learning-based parallel imaging methods, the multilayer perceptron is a fully connected neural network that requires a large number of parameters; it reconstructs the image line by line and focuses on the aliasing artifacts caused by one-dimensional undersampling, so it places restrictions on the sampling pattern. The VN network is one way of turning a traditional algorithm into a network; it requires the sensitivity information of the coil array to be computed in advance, so its reconstruction result depends on the accuracy of the coil sensitivity estimation.
因此,可选地,本发明选用的神经网络具体为卷积神经网络(Convolutional Neural Network,CNN)。相较于多层感知机等其他神经网络,卷积神经网络具有局部连接和权值共享的特点,在计算机视觉领域有更独特的优势。更具体地来讲,卷积神经网络CNN在图像处理方面具有优势,对于一维和二维欠采样都适用,且不受传统自动校准信号(Auto-calibration signal,ACS)线多少的影响,因此本发明将卷积神经网络CNN用于磁共振并行成像,使得本发明对采样方式没有特殊限制,且对ACS线不敏感。Therefore, optionally, the neural network selected in the present invention is specifically a convolutional neural network (Convolutional Neural Network, CNN). Compared with other neural networks such as multilayer perceptrons, convolutional neural networks have the characteristics of local connection and weight sharing, and have more unique advantages in the field of computer vision. More specifically, convolutional neural network CNN has advantages in image processing. It is suitable for both one-dimensional and two-dimensional undersampling, and is not affected by the number of traditional auto-calibration signal (ACS) lines. The invention uses the convolutional neural network CNN for parallel magnetic resonance imaging, so that the invention has no special limitation on the sampling mode and is not sensitive to the ACS line.
神经网络综合模型中,神经网络与K空间一致层之间的连接方式可以是多层级联。例如,神经网络综合模型的一种具体结构见图2。如图2所示,神经网络综合模型包括N个子模块Block,该N个子模块前后连接,前一个子模块输出的图像重建结果输入至后一个子模块中继续进行重建处理。需要说明的是,N为预先根据训练经验得到的已知值。另外,子模块可以称为残差模块。In the comprehensive model of neural network, the connection between the neural network and the K-space consistent layer can be a multi-layer cascade. For example, a specific structure of a comprehensive model of a neural network is shown in Figure 2. As shown in FIG. 2, the comprehensive model of the neural network includes N sub-modules Block. The N sub-modules are connected back and forth, and the image reconstruction result output by the previous sub-module is input to the latter sub-module to continue the reconstruction process. It should be noted that N is a known value obtained in advance based on training experience. In addition, the sub-module may be referred to as a residual module.
Each submodule includes a convolutional neural network CNN and a K-space data-consistency layer DC connected to each other; the K-space consistency layer follows the convolutional neural network CNN and serves to make the CNN's reconstruction result more accurate. The training process shows that this structure of cascaded neural networks, each followed by a K-space consistency layer, allows the reconstructed image to get closer and closer to the label image (i.e., the fully sampled image data) during forward propagation.
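For illustration only, a minimal PyTorch-style sketch of such a cascade (class and argument names are ours; the CNN and the data-consistency step are left as placeholders and are not taken from the disclosure) might be:

```python
import torch.nn as nn

class CascadedRecon(nn.Module):
    """Sketch of the cascaded structure: N submodules, each a CNN followed by a
    K-space data-consistency (DC) step. N and the CNN/DC details are placeholders."""
    def __init__(self, make_cnn, dc_step, n_blocks):
        super().__init__()
        self.cnns = nn.ModuleList([make_cnn() for _ in range(n_blocks)])
        self.dc = dc_step  # callable enforcing consistency with acquired K-space samples

    def forward(self, x, k0, mask):
        # x: current image estimate; k0: acquired K-space data; mask: sampling positions
        for cnn in self.cnns:
            x = cnn(x)                 # preliminary reconstruction by this block's CNN
            x = self.dc(x, k0, mask)   # back-substitute the acquired K-space data
        return x
```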
In one specific structure, the convolutional neural network includes M stacked complex convolutional layers, where M is a known value determined in advance from training experience.
S103: Input the undersampled image data of each channel into the neural network synthesis model, so that the neural network performs a preliminary reconstruction of the undersampled image data and the K-space consistency layer performs back-substitution on the preliminary reconstruction results, yielding the target reconstructed image data.
具体地,将各个通道的欠采样图像数据输入至神经网络综合模型中,以使神经网络综合模型执行步骤A1及A2。Specifically, the undersampled image data of each channel is input into the neural network integrated model, so that the neural network integrated model executes steps A1 and A2.
A1: Based on the correlation among the undersampled image data, the neural network reconstructs each channel's undersampled image data separately to obtain the preliminary reconstructed image data corresponding to each channel. A2: Based on each channel's undersampled image data, the K-space consistency layer performs back-substitution on each channel's preliminary reconstructed image data to obtain the target reconstructed image data corresponding to each channel.
Specifically, the input to the neural network synthesis model is the undersampled image data of the multiple parallel channels obtained in step S101. The processing of the model is divided into two parts. In step A1, the neural network first performs a preliminary reconstruction of the undersampled image data, making full use of the correlation of the image data among the multiple coils; the resulting image is called the preliminary reconstructed image data. In step A2, the K-space consistency layer uses the undersampled image data input to the model to perform back-substitution on the preliminary reconstructed image data; for ease of description, the preliminary reconstructed image data after back-substitution is referred to as the target reconstructed image data.
The back-substitution process is as follows: for each pixel position in the preliminary reconstructed image data, if the pixel value at that position does not exist in the corresponding undersampled image data, the reconstructed pixel value is kept; if the pixel value at that position does exist in the corresponding undersampled image data, the pixel value from the undersampled image data is substituted back for the pixel value in the preliminary reconstructed image data.
More specifically, one back-substitution mode weights the pixel value from the undersampled image data together with the pixel value from the preliminary reconstructed image data and replaces the pixel value in the preliminary reconstructed image data with the weighted result; the other mode directly replaces the pixel value in the preliminary reconstructed image data with the pixel value from the undersampled image data.
Expressed as a formula, the K-space consistency layer for the first back-substitution mode is:

$$\hat{f}_{l,j}(k)=\begin{cases}\dfrac{f_{l,j}(k)+\lambda f_{0,j}(k)}{1+\lambda}, & k\in S\\ f_{l,j}(k), & k\notin S\end{cases}$$

where S is the set of all sampled pixel positions of the j-th channel's undersampled image data; k is the position of any pixel in the preliminary reconstructed image data output by the neural network synthesis model; $f_{l,j}(k)$ is the K-space value at position k of the j-th channel's preliminary reconstructed image data; $\lambda$ is the preset weighting; and $f_{0,j}(k)$ is the K-space value at position k of the undersampled image data input for the j-th channel.
Expressed as a formula, the K-space consistency layer for the second back-substitution mode is:

$$\hat{f}_{l,j}(k)=\begin{cases}f_{0,j}(k), & k\in S\\ f_{l,j}(k), & k\notin S\end{cases}$$
The reason the above back-substitution is needed is that the neural network's reconstruction of the undersampled image data is not good enough on its own, for two reasons: first, the trained neural network may not be perfectly accurate; second, the pixel values that were actually acquired in the undersampled image data can be distorted once they pass through the neural network. Performing the above back-substitution on the preliminary reconstructed image data, using the undersampled image data that has not gone through reconstruction, therefore improves the reconstruction result.
需要说明的是,上述K空间一致层所处理的图像数据都是K空间上的图像数据。另外,本步骤是分别对每个通道的欠采样图像数据进行处理,分别得到每个通道各自对应的目标重建图像数据。It should be noted that the image data processed by the above K-space consistent layer are all image data in K-space. In addition, this step is to process the undersampled image data of each channel separately to obtain the target reconstructed image data corresponding to each channel respectively.
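As a minimal NumPy sketch (function and argument names are illustrative, not from the disclosure), the two back-substitution modes for one channel's K-space data could look like this; the weighted branch follows the first formula above:

```python
import numpy as np

def dc_layer(k_recon, k0, mask, lam=None):
    """K-space data-consistency step for one channel.
    k_recon: K-space of the CNN's preliminary reconstruction (complex array)
    k0:      acquired undersampled K-space data (zero at unsampled positions)
    mask:    boolean array, True at sampled positions (the set S)
    lam:     weighting lambda; if None, sampled positions are replaced directly."""
    out = k_recon.copy()
    if lam is None:
        out[mask] = k0[mask]                                       # second mode
    else:
        out[mask] = (k_recon[mask] + lam * k0[mask]) / (1.0 + lam) # weighted mode
    return out
```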
S104:合并所有通道的目标重建图像数据,以得到单通道的图像数据。S104: Combine target reconstruction image data of all channels to obtain single-channel image data.
The target reconstructed image data of all channels can be combined, for example by adaptive channel combination or by sum-of-squares combination, to obtain single-channel image data.
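For example, a sum-of-squares combination of the per-channel results could be sketched as below (a minimal illustration; an adaptive combination would instead weight each channel by estimated coil sensitivities):

```python
import numpy as np

def sos_combine(channel_images):
    """Sum-of-squares combination of per-channel reconstructions into one image.
    channel_images: complex array of shape (num_channels, height, width)."""
    return np.sqrt(np.sum(np.abs(channel_images) ** 2, axis=0))
```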
It can be seen from the above technical solutions that the present invention provides a parallel magnetic resonance imaging method. The method obtains the undersampled image data acquired in K-space by the coils of multiple parallel channels and inputs the undersampled image data of each channel into a pre-built neural network synthesis model. Besides the neural network, the model also includes a K-space data-consistency layer: the neural network reconstructs the undersampled image data, and the K-space consistency layer uses the undersampled image data to perform back-substitution on the image data reconstructed by the neural network, yielding the target reconstructed image data. Finally, the target reconstructed image data of all channels are combined to obtain a single-channel image. Compared with the prior art, the present invention adds a K-space consistency layer to the neural network model; this layer uses the undersampled image data acquired before reconstruction to perform back-substitution on the reconstructed image data, thereby retaining more image detail and improving the image reconstruction result.
更进一步地,本发明设计了一种新型的基于卷积神经网络(Convolutional Neural Network,CNN)的级联的多通道网络,用于磁共振并行成像。在网络结构中考虑了图像域的学习和已采集K空间数据的一致性,能够在级联网络中递进地改善图像质量,且不丢失已采集数据的信息。同时,本发明采用了一种复数卷积操作,用于合理地处理磁共振复数数据的实部和虚部。Furthermore, the present invention designs a novel cascaded multi-channel network based on Convolutional Neural Network (CNN) for parallel magnetic resonance imaging. The learning of the image domain and the consistency of the collected K-space data are considered in the network structure, which can progressively improve the image quality in the cascade network without losing the information of the collected data. At the same time, the present invention adopts a complex convolution operation for rationally processing real and imaginary parts of complex magnetic resonance data.
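The disclosure does not spell out the complex convolution operation; a common formulation, shown here as an assumption, convolves the real and imaginary parts separately and recombines them as $(x_r + ix_i) * (w_r + iw_i) = (x_r * w_r - x_i * w_i) + i(x_r * w_i + x_i * w_r)$. A minimal PyTorch-style sketch could be:

```python
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Sketch of a complex convolution built from two real-valued convolutions.
    Whether this exact form matches the patented operation is an assumption."""
    def __init__(self, in_ch, out_ch, kernel=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel, padding=padding)  # real kernel
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel, padding=padding)  # imaginary kernel

    def forward(self, x_r, x_i):
        real = self.conv_r(x_r) - self.conv_i(x_i)
        imag = self.conv_i(x_r) + self.conv_r(x_i)
        return real, imag
```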
以下具体说明神经网络综合模型中的神经网络的训练过程。The following specifically describes the training process of the neural network in the neural network synthesis model.
其中,神经网络具体为卷积神经网络。见图3,卷积神经网络的一种具体训练过程包括如下步骤S301~S304。Among them, the neural network is specifically a convolutional neural network. Referring to FIG. 3, a specific training process of the convolutional neural network includes the following steps S301 to S304.
S301:获得每个通道的全采样图像数据。S301: Obtain the fully sampled image data of each channel.
其中,训练时使用全采样图像数据来制作欠采样图像数据,因此首先获得全采样图像数据,且全采样数据作为卷积神经网络训练过程中的标签数据。Among them, the full-sampled image data is used to make the under-sampled image data during training, so the full-sampled image data is obtained first, and the full-sampled data is used as the label data in the training process of the convolutional neural network.
For example, a 12-channel head coil on a 3T magnetic resonance scanner (SIEMENS MAGNETOM Trio Tim) can be used to acquire more than 3000 two-dimensional magnetic resonance brain images, which may include sagittal, coronal, and transverse images and may mix T1-weighted, T2-weighted, and PD-weighted images. These 3000-plus two-dimensional magnetic resonance brain images are the fully sampled image data. It should be noted that one two-dimensional magnetic resonance brain image contains the fully sampled image data acquired by each of the 12 channels; that is, 3000*12 pieces of fully sampled image data are obtained.
需要说明的是,大数据对于深度学习来说至关重要,因此想得到较好的训练结果,必须保证一定的数据量。在原始数据量不够大的情况下,可以通过进行数据增强来扩充数据。本发明中,对于每一张原始图像,通过旋转90°、180°、270°和沿x轴、y轴镜像翻转的方法,得到了8倍于原始数据的数据量。也就是说,对每个通道的全采样数据分别执行上述扩充数据的操作,以得到3000*12*8张全采样图像数据,其中每个通道对应有3000*8张全采样图像数据。当然,数据扩充方式还可以是其他,并不局限于上述方式。It should be noted that big data is very important for deep learning, so if you want to get better training results, you must ensure a certain amount of data. When the amount of original data is not large enough, data can be augmented by data augmentation. In the present invention, for each original image, by rotating 90°, 180°, 270°, and mirroring and flipping along the x-axis and y-axis, the data amount is 8 times that of the original data. In other words, the above expanded data operation is performed on the full-sampled data of each channel to obtain 3000*12*8 full-sampled image data, where each channel corresponds to 3000*8 full-sampled image data. Of course, there may be other data expansion methods, which are not limited to the above methods.
需要说明的是,扩充得到的全采样图像数据同样作为标签数据。It should be noted that the expanded fully sampled image data is also used as tag data.
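A minimal sketch of such an 8-fold augmentation follows; the exact combination of transforms is an assumption, since the text states only rotations by 90°, 180°, 270° and mirror flips along the x and y axes yielding 8 times the original data volume:

```python
import numpy as np

def augment_8x(image):
    """Produce 8 variants of a 2D image: the original, its 90/180/270 degree rotations,
    and mirror flips (including flips of a rotated copy), i.e. the 8 symmetries of a square."""
    rots = [np.rot90(image, k) for k in range(4)]            # 0, 90, 180, 270 degrees
    flips = [np.flipud(image), np.fliplr(image),
             np.flipud(np.rot90(image)), np.fliplr(np.rot90(image))]
    return rots + flips
```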
S302:对每个通道的全采样图像数据进行欠采样处理,得到每个通道的欠采样图像数据样本。S302: Undersampling the fully sampled image data of each channel to obtain the undersampled image data samples of each channel.
其中本步骤可以称为欠采样图像数据的制作步骤。This step can be referred to as the production step of undersampled image data.
Specifically, the undersampling processing is retrospective undersampling, which can be expressed as $X_u = F^H P^H P F X$, where $X_u$ is the undersampled image data sample; F is the Fourier encoding matrix (after the Fourier encoding transform, the image data is taken into K-space); P is the circulant diagonal matrix of the undersampling pattern used by the channel; the superscript H on $F^H$ and $P^H$ denotes the Hermitian transpose; and X is the label data.
欠采样处理即模拟采集欠采样图像数据的过程,每个通道的全采样图像数据都要进行上述欠采样处理,以得到每个全采样图像数据所对应的欠采样图像数据。由于得到的欠采样图像数据作为训练样本用于训练卷积神经网络,因此可以称为欠采样图像数据样本。Undersampling processing is the process of simulating the acquisition of undersampling image data. The full sampling image data of each channel must be subjected to the above undersampling processing to obtain the undersampling image data corresponding to each full sampling image data. Since the obtained under-sampled image data is used as a training sample for training the convolutional neural network, it can be called an under-sampled image data sample.
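A minimal NumPy sketch of this retrospective undersampling for one channel (assuming a 2D image and a binary mask with 1 at sampled K-space positions; the FFT convention is an assumption) could be:

```python
import numpy as np

def retrospective_undersample(x_full, mask):
    """Simulate acquisition of undersampled data: X_u = F^H P^H P F X."""
    k_full = np.fft.fft2(x_full)    # F X: transform the fully sampled image to K-space
    k_under = k_full * mask         # P^H P F X: keep only the sampled positions
    return np.fft.ifft2(k_under)    # F^H P^H P F X: back to the image domain
```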
It should be noted that before step S303 trains the undersampled image data samples with the convolutional neural network training algorithm, each undersampled image data may also be modulus-normalized according to the formula

$$X_k = \frac{I_k}{\max\{\mathrm{abs}(X)\}}$$

where $X_k$ is the modulus of the pixel value at position k of the undersampled image data after normalization; $I_k$ is the modulus of the pixel value at position k of the undersampled image data; and $\max\{\mathrm{abs}(X)\}$ is the maximum modulus over all pixel values of the undersampled image data.
Specifically, the undersampled image data acquired by the magnetic resonance coils is a complex image, i.e., the pixel values are complex numbers. Modulus normalization is applied to every pixel of the undersampled image data so that the modulus of each pixel is normalized into (0, 1].
The pixel values of different undersampled image data may differ considerably in magnitude; modulus normalization brings the pixel values of all undersampled image data to a common order of magnitude. For a deep-learning neural network synthesis model, modulus normalization helps ensure effective and convergent training, and is a very important step of data processing. In step S303, training is performed on the undersampled image data after modulus normalization.
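A one-line NumPy sketch of this normalization (the function name is ours) could be:

```python
import numpy as np

def normalize_modulus(x):
    """Divide a complex undersampled image by its maximum pixel modulus,
    so that the modulus of every pixel falls in (0, 1]."""
    return x / np.max(np.abs(x))
```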
S303:使用卷积神经网络训练算法,对欠采样图像数据样本进行训练,以得到损失值最小的权重参数。S303: Use a convolutional neural network training algorithm to train the undersampled image data samples to obtain the weight parameter with the smallest loss value.
其中,卷积神经网络训练算法为现有的一种训练算法,本发明并不做具体赘述。卷积神经网络训练过程是计算卷积神经网络中最为合适的权重参数的过程。Among them, the convolutional neural network training algorithm is an existing training algorithm, and the present invention will not repeat them in detail. The convolutional neural network training process is the process of calculating the most suitable weight parameters in the convolutional neural network.
Specifically, the training process includes multiple forward-propagation steps and multiple backpropagation (BP) steps. The weight parameters need to be initialized before training. The forward-propagation step uses the current weight parameters to compute the output of the convolutional neural network (the output being the reconstructed data); as shown in FIG. 2, the backpropagation step uses a loss function to compare the reconstructed data with the label data and updates the weight parameters of the convolutional neural network according to the comparison result.
反向传播通过计算损失函数的梯度来不断更新权重参数,更新的结果是,损失函数所计算的卷积神经网络的输出值与标签数据之间的差值为最小值。可见,反向传播过程需要使用损失函数,损失函数的作用是比对卷积神经网络对欠采样图像数据的处理结果与该欠采样图像数据对应的全采样图像数据之间的差值,该差值即本步骤中的损失值。损失值越大,则说明两个图像数据之间的差距越大,也即表示卷积神经网络的重建结果不够好,因此需要将重建结果反馈到卷积神经网络中。如果损失值达到最小值,便可以确定出权重参数的最终值,训练过程结束。Backpropagation constantly updates the weight parameters by calculating the gradient of the loss function. The result of the update is that the difference between the output value of the convolutional neural network calculated by the loss function and the label data is the minimum value. It can be seen that the back propagation process needs to use a loss function. The function of the loss function is to compare the difference between the processing result of the undersampled image data of the convolutional neural network and the fully sampled image data corresponding to the undersampled image data. The value is the loss value in this step. The larger the loss value, the greater the gap between the two image data, which means that the reconstruction result of the convolutional neural network is not good enough, so the reconstruction result needs to be fed back to the convolutional neural network. If the loss value reaches the minimum value, the final value of the weight parameter can be determined, and the training process ends.
The backpropagation process can be expressed as

$$\hat{\theta} = \arg\min_{\theta} J(x, y)$$

where $\hat{\theta}$ denotes the weight parameters obtained by training; J(x, y) is the loss function; x is the undersampled image data input to the convolutional neural network; and y is the fully sampled image data corresponding to that undersampled image data. It should be noted that the convolutional neural network has multiple layers, each with its own weight parameters, so the finally obtained weight parameters $\hat{\theta}$ comprise multiple sets of values.
An L-layer convolutional neural network $C(X_u;\theta)$ can be expressed in the following form:

$$C_0 = X_u$$
$$C_l = \sigma_l\left(\Omega_l * C_{l-1} + b_l\right),\quad l = 1,\dots,L-1$$
$$C_L = \Omega_L * C_{L-1} + b_L$$
where $C_0$ is the image data of the first layer of the convolutional neural network and $X_u$ is the undersampled image data sample;

$C_l$ is the image data reconstructed by the l-th convolutional layer; $\sigma_l$ is the nonlinear activation function; $\Omega_l$ is a convolution kernel of dimension $FW_l \times FH_l \times K_{l-1} \times K_l$, where $FW_l \times FH_l$ is the kernel size of the l-th layer, $K_{l-1}$ is the number of feature maps of layer l-1, and $K_l$ is the number of feature maps of layer l; $C_{l-1}$ is the image data reconstructed by layer l-1; $b_l$ is a $K_l$-dimensional bias; L is the number of layers of the convolutional neural network;

$C_L$ is the image data reconstructed by the L-th layer; $\Omega_L$ is a convolution kernel of dimension $FW_L \times FH_L \times K_{L-1} \times K_L$, where $FW_L \times FH_L$ is the kernel size of the L-th layer, $K_{L-1}$ is the number of feature maps of layer L-1, and $K_L$ is the number of feature maps of layer L; $C_{L-1}$ is the image data reconstructed by layer L-1; and $b_L$ is a $K_L$-dimensional bias.
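For illustration, a minimal PyTorch-style sketch of such an L-layer network follows; the layer count, feature-map counts, and kernel size are assumptions, and the complex convolutional layers described above are approximated here by stacking real and imaginary parts as ordinary channels:

```python
import torch.nn as nn

def make_cnn(n_layers=5, channels_in=24, features=64, kernel=3):
    """L-layer CNN of the form C_l = sigma_l(Omega_l * C_{l-1} + b_l): convolution plus
    nonlinear activation for layers 1..L-1, and a final convolution without activation.
    channels_in=24 assumes 12 coil channels with real and imaginary parts stacked."""
    layers = [nn.Conv2d(channels_in, features, kernel, padding=kernel // 2), nn.ReLU()]
    for _ in range(n_layers - 2):
        layers += [nn.Conv2d(features, features, kernel, padding=kernel // 2), nn.ReLU()]
    layers += [nn.Conv2d(features, channels_in, kernel, padding=kernel // 2)]  # C_L: no activation
    return nn.Sequential(*layers)
```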
Training the convolutional neural network $C(X_u;\theta)$ is the process of training the weight parameters $\theta$, where $\theta$ comprises L sets of values of $\Omega$ and b, i.e., $\theta = \{\Omega_1, b_1, \dots, \Omega_L, b_L\}$.
以下对本发明所使用的具体损失函数进行说明。损失函数会监督网络训练过程,不同的损失函数对网络训练的影响不同。The specific loss function used in the present invention will be described below. The loss function will supervise the network training process, and different loss functions have different effects on the network training.
Specifically, the loss value is computed by a loss function; the loss function used in the present invention is the mean absolute error function

$$J(x,y)=\frac{1}{M}\sum_{m=1}^{M}\left\lVert C(x_m;\theta)-y_m\right\rVert_1$$

where M is the number of undersampled image data of the same channel input to the convolutional neural network in one batch; m is the index of an undersampled image data of the same channel within the batch; $C(x_m;\theta)$ is the image data reconstructed by the convolutional neural network from the undersampled image data with index m; and $y_m$ is the fully sampled image data corresponding to the undersampled image data with index m.
It should be noted that during training, the training samples may be input in batches, i.e., each input consists of multiple training samples for each channel. For example, if there are 12 channels and a batch is preset to contain 4 training samples per channel, then each input consists of 12*4 pieces of undersampled image data.
The loss function J(x, y) used in the present invention can specifically be the mean absolute error (MAE) function. Compared with the mean squared error loss, MAE not only ensures that the training result suppresses noise well, but also preserves image detail slightly better.
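Putting the forward pass, the MAE loss, and backpropagation together, a minimal PyTorch-style training loop could look like this; the optimizer choice, learning rate, and epoch count are illustrative assumptions, not values from the disclosure:

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=50, lr=1e-3):
    """Sketch of the training loop: forward pass, MAE (L1) loss against the fully
    sampled label, backpropagation, and weight update."""
    criterion = nn.L1Loss()                          # mean absolute error
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x_under, y_full in loader:               # undersampled input, full label
            y_pred = model(x_under)                  # forward propagation
            loss = criterion(y_pred, y_full)         # compare reconstruction with label
            optimizer.zero_grad()
            loss.backward()                          # backpropagate the loss gradient
            optimizer.step()                         # update the weight parameters
    return model
```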
S304:使用权重参数构建卷积神经网络。S304: Construct a convolutional neural network using weight parameters.
By setting the weight parameters of the convolutional neural network to the finally trained $\hat{\theta}$, the constructed convolutional neural network $C(x;\hat{\theta})$ is obtained, where x is the undersampled image data to be input and $\hat{\theta}$ are the weight parameters of the convolutional neural network. The trained convolutional neural network can then reconstruct any input undersampled image data in practical applications.
To verify the feasibility and effectiveness of the neural network synthesis model constructed by the present invention, the inventors carried out a comparative experiment. Specifically, for the same experimental sample data obtained with a one-dimensional random (1D random) 3-fold undersampling pattern, the reconstruction was performed both with traditional parallel imaging algorithms and with the algorithm provided by the present invention; the results are shown in FIG. 4.
如图4所示,从左到右来看,上下两幅图依次为:原图和采样模式、SPIRiT重建结果及对应的误差图、L1-SPIRiT重建结果及对应误差图、使用本发明算法得到的重建结果及对应的误差图。As shown in Fig. 4, from left to right, the top and bottom images are: original image and sampling mode, SPIRiT reconstruction result and corresponding error map, L1-SPIRiT reconstruction result and corresponding error map, obtained using the algorithm of the present invention Reconstruction results and corresponding error graphs.
可以很明显地看出,本发明提供的磁共振并行成像方法对于一维欠采样造成的混叠伪影去除较为彻底,且噪声抑制效果也最好,与原图最为接近。可见,与传统并行成像算法相比,本发明提出的磁共振并行成像方法有明显的优势。It can be clearly seen that the parallel magnetic resonance imaging method provided by the present invention is relatively thorough in removing aliasing artifacts caused by one-dimensional undersampling, and has the best noise suppression effect, which is closest to the original image. It can be seen that, compared with the traditional parallel imaging algorithm, the magnetic resonance parallel imaging method proposed by the present invention has obvious advantages.
见图5,其示出了本发明提供的磁共振并行成像装置的一种结构。如图5所示,磁共振并行成像装置可以包括:图像数据获得模块501、综合模型获得模块502、图像数据重建模块503及图像数据合并模块504。See FIG. 5, which shows a structure of a magnetic resonance parallel imaging device provided by the present invention. As shown in FIG. 5, the magnetic resonance parallel imaging apparatus may include: an image data obtaining module 501, an integrated model obtaining module 502, an image data reconstruction module 503, and an image data merging module 504.
图像数据获得模块501,用于获得并行的多个通道的磁共振线圈在K空间上采集的欠采样图像数据;The image data obtaining module 501 is used to obtain the under-sampled image data acquired by the parallel multiple channel magnetic resonance coils in the K space;
综合模型获得模块502,用于获得预先构建的神经网络综合模型;其中所述神经网络综合模型包括相互连接的神经网络以及K空间一致层;The integrated model obtaining module 502 is used to obtain a pre-built neural network integrated model; wherein the neural network integrated model includes interconnected neural networks and a K-space consistent layer;
图像数据重建模块503,用于将各个通道的欠采样图像数据输入至所述神经网络综合模型中,以使所述神经网络综合模型执行下述步骤:The image data reconstruction module 503 is used to input the undersampled image data of each channel into the neural network integrated model, so that the neural network integrated model performs the following steps:
神经网络依据各个所述欠采样图像数据之间的相关性,分别对每个所述欠采样图像数据进行重建,得到每个通道对应的初步重建图像数据;The neural network reconstructs each of the under-sampled image data according to the correlation between each of the under-sampled image data to obtain preliminary reconstructed image data corresponding to each channel;
the K-space consistency layer performs back-substitution on each channel's preliminary reconstructed image data according to that channel's undersampled image data, to obtain the target reconstructed image data corresponding to each channel;
图像数据合并模块504,用于合并所有通道的目标重建图像数据,以得到单通道的图像数据。The image data merging module 504 is used to merge the target reconstructed image data of all channels to obtain single channel image data.
在一种实现方式中,所述神经网络综合模型包括级联的多个残差模块,每个残差模块均包括相互连接的神经网络以及K空间一致层。In one implementation, the comprehensive model of the neural network includes a plurality of cascaded residual modules, and each residual module includes a connected neural network and a K-space consistent layer.
在一种实现方式中,所述神经网络具体包括卷积神经网络。In one implementation, the neural network specifically includes a convolutional neural network.
见图6,本发明还提供了磁共振并行成像装置的另一种结构。如图6所示,磁共振并行成像装置在图5所示的结构基础上,还可以包括神经网络训练模块505。神经网络训练模块505用于训练所述神经网络综合模型中的卷积神经网络。具体地,神经网络训练模块505包括:Referring to FIG. 6, the present invention also provides another structure of a parallel magnetic resonance imaging device. As shown in FIG. 6, the magnetic resonance parallel imaging device may further include a neural network training module 505 based on the structure shown in FIG. 5. The neural network training module 505 is used to train the convolutional neural network in the integrated model of the neural network. Specifically, the neural network training module 505 includes:
全采样数据获得子模块,用于获得每个通道的全采样图像数据;Full sampling data acquisition sub-module, used to obtain the full sampling image data of each channel;
欠采样数据获得子模块,用于对每个通道的全采样图像数据进行欠采样处理,得到每个通道的欠采样图像数据样本;Undersampling data acquisition sub-module, used to undersampling the fully sampled image data of each channel to obtain the undersampled image data samples of each channel;
权重参数训练子模块,用于使用卷积神经网络训练算法,对所述欠采样图像数据样本进行训练,以得到损失值最小的权重参数;A weight parameter training sub-module, used to train the undersampled image data samples using a convolutional neural network training algorithm to obtain weight parameters with the smallest loss value;
a convolutional network determination sub-module, used to construct the convolutional neural network $C(x;\hat{\theta})$ using the weight parameters, where x is the undersampled image data to be input and $\hat{\theta}$ are the weight parameters of the convolutional neural network.
In one implementation, the convolutional neural network $C(X_u;\theta)$ is specifically:

$$C_0 = X_u$$
$$C_l = \sigma_l\left(\Omega_l * C_{l-1} + b_l\right),\quad l = 1,\dots,L-1$$
$$C_L = \Omega_L * C_{L-1} + b_L$$
where $C_0$ is the image data of the first layer of the convolutional neural network and $X_u$ is the undersampled image data sample;

$C_l$ is the image data reconstructed by the l-th convolutional layer; $\sigma_l$ is the nonlinear activation function; $\Omega_l$ is a convolution kernel of dimension $FW_l \times FH_l \times K_{l-1} \times K_l$, where $FW_l \times FH_l$ is the kernel size of the l-th layer, $K_{l-1}$ is the number of feature maps of layer l-1, and $K_l$ is the number of feature maps of layer l; $C_{l-1}$ is the image data reconstructed by layer l-1; $b_l$ is a $K_l$-dimensional bias; L is the number of layers of the convolutional neural network;

$C_L$ is the image data reconstructed by the L-th layer; $\Omega_L$ is a convolution kernel of dimension $FW_L \times FH_L \times K_{L-1} \times K_L$, where $FW_L \times FH_L$ is the kernel size of the L-th layer, $K_{L-1}$ is the number of feature maps of layer L-1, and $K_L$ is the number of feature maps of layer L; $C_{L-1}$ is the image data reconstructed by layer L-1; and $b_L$ is a $K_L$-dimensional bias.
In one implementation, the loss value is computed by a loss function, the loss function being the mean absolute error function

$$J(x,y)=\frac{1}{M}\sum_{m=1}^{M}\left\lVert C(x_m;\theta)-y_m\right\rVert_1$$

where M is the number of undersampled image data of the same channel input to the convolutional neural network in one batch; m is the index of an undersampled image data of the same channel within the batch; $C(x_m;\theta)$ is the image data reconstructed by the convolutional neural network from the undersampled image data with index m; and $y_m$ is the fully sampled image data corresponding to the undersampled image data with index m.
In one implementation, the K-space consistency layer is specifically:

$$\hat{f}_{l,j}(k)=\begin{cases}\dfrac{f_{l,j}(k)+\lambda f_{0,j}(k)}{1+\lambda}, & k\in S\\ f_{l,j}(k), & k\notin S\end{cases}$$

where S is the set of all sampled pixel positions of the j-th channel's undersampled image data; k is the position of any pixel in the preliminary reconstructed image data output by the neural network synthesis model; $f_{l,j}(k)$ is the K-space value at position k of the j-th channel's preliminary reconstructed image data; $\lambda$ is the preset weighting; and $f_{0,j}(k)$ is the K-space value at position k of the undersampled image data input for the j-th channel.
In one implementation, the neural network training module 505 further includes a modulus normalization sub-module.
a modulus normalization sub-module, used to modulus-normalize each undersampled image data according to the following formula before the undersampled image data samples are trained with the convolutional neural network training algorithm:

$$X_k = \frac{I_k}{\max\{\mathrm{abs}(X)\}}$$

where $X_k$ is the modulus of the pixel value at position k of the undersampled image data after normalization; $I_k$ is the modulus of the pixel value at position k of the undersampled image data; and $\max\{\mathrm{abs}(X)\}$ is the maximum modulus over all pixel values of the undersampled image data.
见图7,其示出了本申请提供的一种磁共振并行成像设备,具体包括:存储器701、接收器702、处理器703、显示器704及通信总线705。See FIG. 7, which shows a magnetic resonance parallel imaging device provided by the present application, which specifically includes: a memory 701, a receiver 702, a processor 703, a display 704, and a communication bus 705.
其中,存储器701、接收器702、处理器703、显示器704通过通信总线705完成相互间的通信。Among them, the memory 701, the receiver 702, the processor 703, and the display 704 communicate with each other through the communication bus 705.
The memory 701 is used to store programs; the memory 701 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one disk memory.
接收器702,用于与图像采集装置相连,用于接收图像采集装置中并行的多个通道的磁共振线圈在K空间上采集的欠采样图像数据。The receiver 702 is used to connect with the image acquisition device, and is used to receive the under-sampled image data acquired in the K-space by the magnetic resonance coils of multiple parallel channels in the image acquisition device.
处理器703,用于执行程序,程序可以包括程序代码,所述程序代码包括处理器的操作指令。其中,程序可具体用于:The processor 703 is configured to execute a program. The program may include a program code, and the program code includes an operation instruction of the processor. Among them, the program can be specifically used for:
obtaining a pre-built neural network synthesis model, wherein the neural network synthesis model includes an interconnected neural network and a K-space data-consistency layer; inputting the undersampled image data of each channel into the neural network synthesis model, so that the neural network synthesis model performs the following steps: the neural network reconstructs each channel's undersampled image data separately according to the correlation among the undersampled image data, to obtain the preliminary reconstructed image data corresponding to each channel; the K-space consistency layer performs back-substitution on each channel's preliminary reconstructed image data according to that channel's undersampled image data, to obtain the target reconstructed image data corresponding to each channel; and combining the target reconstructed image data of all channels to obtain single-channel image data.
处理器703可能是一个中央处理器CPU,或者是特定集成电路ASIC(Application Specific Integrated Circuit),或者是被配置成实施本申请实施例的一个或多个集成电路。The processor 703 may be a central processing unit CPU, or a specific integrated circuit ASIC (Application Specific Integrated Circuit), or one or more integrated circuits configured to implement the embodiments of the present application.
需要说明的是,所述处理器可以执行与上述磁共振并行成像方法相关的各个步骤,此处并不赘述。It should be noted that the processor may execute various steps related to the above-mentioned parallel magnetic resonance imaging method, which is not repeated here.
显示器704,用于显示所述单通道的图像数据。The display 704 is used to display the single-channel image data.
The present application also provides a readable storage medium on which a computer program is stored; the computer program can be executed by a processor to implement the steps of the above parallel magnetic resonance imaging method.
需要说明的是,本说明书中的各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似的部分互相参见即可。It should be noted that the embodiments in this specification are described in a progressive manner. Each embodiment focuses on the differences from other embodiments. The same and similar parts between the embodiments refer to each other. can.
还需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括上述要素的过程、方法、物品或者设备中还存在另外的相同要素。It should also be noted that in this article, relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply these entities or operations There is any such actual relationship or order. Moreover, the terms "include", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements, but also those not explicitly listed Or other elements that are inherent to this process, method, article, or equipment. Without more restrictions, the elements defined by the sentence "include one..." do not exclude that there are other identical elements in the process, method, article or equipment that includes the above elements.
对所公开的实施例的上述说明,使本领域专业技术人员能够实现或使用本发明。对这些实施例的多种修改对本领域的专业技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本发明的精神或范围的情况下,在其它实施例中实现。因此,本发明将不会被限制于本文所示的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein can be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention will not be limited to the embodiments shown herein, but should conform to the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

  1. 一种磁共振并行成像方法,其特征在于,包括:A parallel magnetic resonance imaging method, characterized in that it includes:
    获得并行的多个通道的磁共振线圈在K空间上采集的欠采样图像数据;Obtain under-sampled image data acquired in parallel by K-space for multiple channels of magnetic resonance coils;
    获得预先构建的神经网络综合模型;其中所述神经网络综合模型包括相互连接的神经网络以及K空间一致层;Obtain a pre-built neural network integrated model; wherein the neural network integrated model includes interconnected neural networks and a K-space consistent layer;
    将各个通道的欠采样图像数据输入至所述神经网络综合模型中,以使所述神经网络综合模型执行下述步骤:Input the undersampled image data of each channel into the neural network integrated model, so that the neural network integrated model performs the following steps:
    神经网络依据各个所述欠采样图像数据之间的相关性,分别对每个所述欠采样图像数据进行重建,得到每个通道对应的初步重建图像数据;The neural network reconstructs each of the under-sampled image data according to the correlation between each of the under-sampled image data to obtain preliminary reconstructed image data corresponding to each channel;
    the K-space consistency layer performs back-substitution on each channel's preliminary reconstructed image data according to that channel's undersampled image data, to obtain the target reconstructed image data corresponding to each channel;
    合并所有通道的目标重建图像数据,以得到单通道的图像数据。The target reconstructed image data of all channels is merged to obtain single-channel image data.
  2. 根据权利要求1所述的磁共振并行成像方法,其特征在于,所述神经网络综合模型包括级联的多个残差模块,每个残差模块均包括相互连接的神经网络以及K空间一致层。The magnetic resonance parallel imaging method according to claim 1, wherein the neural network synthesis model includes a plurality of cascaded residual modules, each residual module includes a connected neural network and a K-space consistent layer .
  3. 根据权利要求1所述的磁共振并行成像方法,其特征在于,所述神经网络具体包括卷积神经网络。The parallel magnetic resonance imaging method of claim 1, wherein the neural network specifically comprises a convolutional neural network.
  4. 根据权利要求3所述的磁共振并行成像方法,其特征在于,所述神经网络综合模型中神经网络的训练过程包括:The parallel magnetic resonance imaging method of claim 3, wherein the neural network training process in the neural network synthesis model includes:
    获得每个通道的全采样图像数据;Obtain the fully sampled image data of each channel;
    对每个通道的全采样图像数据进行欠采样处理,得到每个通道的欠采样图像数据样本;Undersampling the fully sampled image data of each channel to obtain the undersampled image data samples of each channel;
    使用卷积神经网络训练算法,对所述欠采样图像数据样本进行训练,以得到损失值最小的权重参数;Use a convolutional neural network training algorithm to train the undersampled image data samples to obtain the weight parameter with the smallest loss value;
    constructing a convolutional neural network $C(x;\hat{\theta})$ using the weight parameters, where x is the undersampled image data to be input and $\hat{\theta}$ are the weight parameters of the convolutional neural network.
  5. 根据权利要求4所述的磁共振并行成像方法,其特征在于,The parallel magnetic resonance imaging method of claim 4, wherein:
    the convolutional neural network $C(X_u;\theta)$ is specifically:

$$C_0 = X_u$$
$$C_l = \sigma_l\left(\Omega_l * C_{l-1} + b_l\right),\quad l = 1,\dots,L-1$$
$$C_L = \Omega_L * C_{L-1} + b_L$$

    where $C_0$ is the image data of the first layer of the convolutional neural network and $X_u$ is the undersampled image data sample;

    $C_l$ is the image data reconstructed by the l-th convolutional layer; $\sigma_l$ is the nonlinear activation function; $\Omega_l$ is a convolution kernel of dimension $FW_l \times FH_l \times K_{l-1} \times K_l$, where $FW_l \times FH_l$ is the kernel size of the l-th layer, $K_{l-1}$ is the number of feature maps of layer l-1, and $K_l$ is the number of feature maps of layer l; $C_{l-1}$ is the image data reconstructed by layer l-1; $b_l$ is a $K_l$-dimensional bias; L is the number of layers of the convolutional neural network;

    $C_L$ is the image data reconstructed by the L-th layer; $\Omega_L$ is a convolution kernel of dimension $FW_L \times FH_L \times K_{L-1} \times K_L$, where $FW_L \times FH_L$ is the kernel size of the L-th layer, $K_{L-1}$ is the number of feature maps of layer L-1, and $K_L$ is the number of feature maps of layer L; $C_{L-1}$ is the image data reconstructed by layer L-1; and $b_L$ is a $K_L$-dimensional bias.
  6. 根据权利要求4所述的磁共振并行成像方法,其特征在于,The parallel magnetic resonance imaging method of claim 4, wherein:
    the loss value is computed by a loss function, the loss function being the mean absolute error function

$$J(x,y)=\frac{1}{M}\sum_{m=1}^{M}\left\lVert C(x_m;\theta)-y_m\right\rVert_1$$

    where M is the number of undersampled image data of the same channel input to the convolutional neural network in one batch; m is the index of an undersampled image data of the same channel within the batch; $C(x_m;\theta)$ is the image data reconstructed by the convolutional neural network from the undersampled image data with index m; and $y_m$ is the fully sampled image data corresponding to the undersampled image data with index m.
  7. 根据权利要求1所述的磁共振并行成像方法,其特征在于,所述K空间一致层具体为:The parallel magnetic resonance imaging method of claim 1, wherein the K-space consistent layer is specifically:
$$\hat{f}_{l,j}(k)=\begin{cases}\dfrac{f_{l,j}(k)+\lambda f_{0,j}(k)}{1+\lambda}, & k\in S\\ f_{l,j}(k), & k\notin S\end{cases}$$

    where S is the set of all sampled pixel positions of the j-th channel's undersampled image data; k is the position of any pixel in the preliminary reconstructed image data output by the neural network synthesis model; $f_{l,j}(k)$ is the K-space value at position k of the j-th channel's preliminary reconstructed image data; $\lambda$ is the preset weighting; and $f_{0,j}(k)$ is the K-space value at position k of the undersampled image data input for the j-th channel.
  8. 根据权利要求4所述的磁共振并行成像方法,其特征在于,在使用卷积神经网络训练算法,对所述欠采样图像数据样本进行训练之前,还包括:The parallel magnetic resonance imaging method according to claim 4, wherein, before using the convolutional neural network training algorithm to train the undersampled image data samples, the method further comprises:
    将各个所述欠采样图像数据按照下述公式进行模值归一处理:Normalize each of the under-sampled image data according to the following formula:
$$X_k = \frac{I_k}{\max\{\mathrm{abs}(X)\}}$$

    where $X_k$ is the modulus of the pixel value at position k of the undersampled image data after normalization; $I_k$ is the modulus of the pixel value at position k of the undersampled image data; and $\max\{\mathrm{abs}(X)\}$ is the maximum modulus over all pixel values of the undersampled image data.
  9. 一种磁共振并行成像装置,其特征在于,包括:A magnetic resonance parallel imaging device is characterized by comprising:
    图像数据获得模块,用于获得并行的多个通道的磁共振线圈在K空间上采集的欠采样图像数据;An image data acquisition module for acquiring undersampled image data acquired in parallel by K-space magnetic resonance coils of multiple channels;
    综合模型获得模块,用于获得预先构建的神经网络综合模型;其中所述神经网络综合模型包括相互连接的神经网络以及K空间一致层;A comprehensive model obtaining module, used to obtain a pre-built neural network comprehensive model; wherein the neural network comprehensive model includes interconnected neural networks and a K-space consistent layer;
    图像数据重建模块,用于将各个通道的欠采样图像数据输入至所述神经网络综合模型中,以使所述神经网络综合模型执行下述步骤:The image data reconstruction module is used to input the undersampled image data of each channel into the neural network integrated model, so that the neural network integrated model performs the following steps:
    神经网络依据各个所述欠采样图像数据之间的相关性,分别对每个所述欠采样图像数据进行重建,得到每个通道对应的初步重建图像数据;The neural network reconstructs each of the under-sampled image data according to the correlation between each of the under-sampled image data to obtain preliminary reconstructed image data corresponding to each channel;
    K空间一致层分别依据每个通道的欠采样图像数据,对每个通道的初步重建图像数据进行回代处理,得到每个通道对应的目标重建图像数据;The K-space consistent layer performs the generation processing on the preliminary reconstructed image data of each channel according to the undersampled image data of each channel, to obtain the target reconstructed image data corresponding to each channel;
    图像数据合并模块,用于合并所有通道的目标重建图像数据,以得到单通道的图像数据。The image data merging module is used to merge the target reconstruction image data of all channels to obtain single channel image data.
  10. 一种磁共振并行成像设备,其特征在于,包括:A parallel magnetic resonance imaging device, characterized in that it includes:
    接收器,用于接收并行的多个通道的磁共振线圈在K空间上采集的欠采样图像数据;The receiver is used to receive the under-sampled image data collected by the parallel multiple channel magnetic resonance coils in the K space;
    a processor, used to obtain a pre-built neural network synthesis model, wherein the neural network synthesis model includes an interconnected neural network and a K-space data-consistency layer; and to input the undersampled image data of each channel into the neural network synthesis model, so that the neural network synthesis model performs the following steps: the neural network reconstructs each channel's undersampled image data separately according to the correlation among the undersampled image data, to obtain the preliminary reconstructed image data corresponding to each channel; the K-space consistency layer performs back-substitution on each channel's preliminary reconstructed image data according to that channel's undersampled image data, to obtain the target reconstructed image data corresponding to each channel; and the target reconstructed image data of all channels are combined to obtain single-channel image data;
    显示器,用于显示所述单通道的图像数据。A display is used to display the single-channel image data.
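  A toy invocation of the parallel_reconstruct() sketch given after claim 9, standing in for the receiver-to-processor-to-display flow recited in claim 10; the channel count, mask density, and identity placeholder network are arbitrary choices for the example, not values from the application.

      import numpy as np

      # Synthetic stand-ins for data arriving from the receiver.
      C, H, W = 8, 256, 256                                  # 8 coil channels (assumed)
      mask = np.random.rand(H, W) < 0.33                     # ~3x undersampling (assumed)
      measured_kspaces = [mask * np.fft.fft2(np.random.randn(H, W)) for _ in range(C)]
      undersampled_imgs = [np.fft.ifft2(k) for k in measured_kspaces]

      identity_net = lambda x: x                             # placeholder for the trained model
      single_channel = parallel_reconstruct(undersampled_imgs, measured_kspaces,
                                            mask, identity_net)
      print(single_channel.shape)                            # (256, 256) image for the display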
  11. A readable storage medium having a computer program stored thereon, characterized in that, when the computer program is executed by a processor, the magnetic resonance parallel imaging method according to any one of claims 1 to 8 is implemented.
PCT/CN2019/121508 2018-12-24 2019-11-28 Parallel magnetic resonance imaging method and related equipment WO2020134826A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811581817.7 2018-12-24
CN201811581817.7A CN111353947A (en) 2018-12-24 2018-12-24 Magnetic resonance parallel imaging method and related equipment

Publications (1)

Publication Number Publication Date
WO2020134826A1 true WO2020134826A1 (en) 2020-07-02

Family

ID=71127466

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/121508 WO2020134826A1 (en) 2018-12-24 2019-11-28 Parallel magnetic resonance imaging method and related equipment

Country Status (2)

Country Link
CN (1) CN111353947A (en)
WO (1) WO2020134826A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709511A (en) * 2016-12-08 2017-05-24 华中师范大学 Urban rail transit panoramic monitoring video fault detection method based on depth learning
CN107341518A (en) * 2017-07-07 2017-11-10 东华理工大学 A kind of image classification method based on convolutional neural networks
CN107909095A (en) * 2017-11-07 2018-04-13 江苏大学 A kind of image-recognizing method based on deep learning
CN108535675B (en) * 2018-04-08 2020-12-04 朱高杰 Magnetic resonance multi-channel reconstruction method based on deep learning and data self-consistency

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103646410A (en) * 2013-11-27 2014-03-19 中国科学院深圳先进技术研究院 Magnetic resonance rapid parameter imaging method and system
CN107182216A (en) * 2015-12-30 2017-09-19 中国科学院深圳先进技术研究院 A kind of rapid magnetic resonance imaging method and device based on depth convolutional neural networks
EP3382417A2 (en) * 2017-03-28 2018-10-03 Siemens Healthcare GmbH Magnetic resonance image reconstruction system and method
CN108717717A (en) * 2018-04-23 2018-10-30 东南大学 The method rebuild based on the sparse MRI that convolutional neural networks and alternative manner are combined
CN108828481A (en) * 2018-04-24 2018-11-16 朱高杰 A kind of magnetic resonance reconstruction method based on deep learning and data consistency

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269849A (en) * 2020-07-23 2021-08-17 上海联影智能医疗科技有限公司 Method and apparatus for reconstructing magnetic resonance
CN113592973A (en) * 2021-07-30 2021-11-02 哈尔滨工业大学(深圳) Magnetic resonance image reconstruction method and device based on multi-frequency complex convolution
CN113592972A (en) * 2021-07-30 2021-11-02 哈尔滨工业大学(深圳) Magnetic resonance image reconstruction method and device based on multi-modal aggregation
CN113592972B (en) * 2021-07-30 2023-11-14 哈尔滨工业大学(深圳) Magnetic resonance image reconstruction method and device based on multi-mode aggregation
CN113842134A (en) * 2021-11-09 2021-12-28 清华大学 Double-sequence accelerated nuclear magnetic imaging optimization method based on double-path artificial neural network
CN113842134B (en) * 2021-11-09 2024-04-12 清华大学 Double-sequence acceleration nuclear magnetic imaging optimization method based on double-path artificial neural network
CN115272510B (en) * 2022-08-08 2023-09-22 中国科学院精密测量科学与技术创新研究院 Pulmonary gas MRI reconstruction method based on coding enhancement complex value network
CN115272510A (en) * 2022-08-08 2022-11-01 中国科学院精密测量科学与技术创新研究院 Lung gas MRI reconstruction method based on coding enhanced complex value network
CN115717893A (en) * 2022-11-29 2023-02-28 泉州装备制造研究所 Deep learning positioning method and device based on pixilated magnetic nail information
CN116109524A (en) * 2023-04-11 2023-05-12 中国医学科学院北京协和医院 Magnetic resonance image channel merging method, device, electronic equipment and storage medium
CN116228913B (en) * 2023-05-06 2023-08-22 杭州师范大学 Processing method and device for magnetic resonance image data and storage medium
CN116228913A (en) * 2023-05-06 2023-06-06 杭州师范大学 Processing method and device for magnetic resonance image data and storage medium
CN116740218B (en) * 2023-08-11 2023-10-27 南京安科医疗科技有限公司 Heart CT imaging image quality optimization method, device and medium
CN116740218A (en) * 2023-08-11 2023-09-12 南京安科医疗科技有限公司 Heart CT imaging image quality optimization method, device and medium
CN116993852A (en) * 2023-09-26 2023-11-03 阿尔玻科技有限公司 Training method of image reconstruction model, main control equipment and image reconstruction method
CN116993852B (en) * 2023-09-26 2024-01-30 阿尔玻科技有限公司 Training method of image reconstruction model, main control equipment and image reconstruction method
CN117710513A (en) * 2024-02-06 2024-03-15 中国科学院深圳先进技术研究院 Quantum convolution neural network-based magnetic resonance imaging method and device

Also Published As

Publication number Publication date
CN111353947A (en) 2020-06-30

Similar Documents

Publication Publication Date Title
WO2020134826A1 (en) Parallel magnetic resonance imaging method and related equipment
CN110378980B (en) Multichannel magnetic resonance image reconstruction method based on deep learning
CN108090871A (en) A kind of more contrast MR image reconstruction methods based on convolutional neural networks
CN111383741B (en) Method, device and equipment for establishing medical imaging model and storage medium
CN109523584A (en) Image processing method, device, multi-mode imaging system, storage medium and equipment
WO2020114329A1 (en) Fast magnetic resonance parametric imaging and device
Pal et al. A review and experimental evaluation of deep learning methods for MRI reconstruction
Wang et al. High-quality image compressed sensing and reconstruction with multi-scale dilated convolutional neural network
Feng et al. DONet: dual-octave network for fast MR image reconstruction
Kelkar et al. Prior image-constrained reconstruction using style-based generative models
Singh et al. Joint frequency and image space learning for MRI reconstruction and analysis
WO2021102644A1 (en) Image enhancement method and apparatus, and terminal device
CN114863225A (en) Image processing model training method, image processing model generation device, image processing equipment and image processing medium
CN111784792A (en) Rapid magnetic resonance reconstruction system based on double-domain convolution neural network and training method and application thereof
Lu et al. A novel 3D medical image super-resolution method based on densely connected network
Shi et al. Affirm: Affinity fusion-based framework for iteratively random motion correction of multi-slice fetal brain MRI
Xiao et al. SR-Net: a sequence offset fusion net and refine net for undersampled multislice MR image reconstruction
CN110942496A (en) Propeller sampling and neural network-based magnetic resonance image reconstruction method and system
CN111047512B (en) Image enhancement method and device and terminal equipment
WO2024021796A1 (en) Image processing method and apparatus, electronic device, storage medium, and program product
US11941732B2 (en) Multi-slice MRI data processing using deep learning techniques
Jiang et al. GA-HQS: MRI reconstruction via a generically accelerated unfolding approach
Liu et al. High-Fidelity MRI Reconstruction Using Adaptive Spatial Attention Selection and Deep Data Consistency Prior
CN112669400B (en) Dynamic MR reconstruction method based on deep learning prediction and residual error framework
WO2021129235A1 (en) Rapid three-dimensional magnetic resonance parameter imaging method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19906233

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10/11/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19906233

Country of ref document: EP

Kind code of ref document: A1