CN111063000A - Magnetic resonance rapid imaging method and device based on neural network structure search - Google Patents

Magnetic resonance rapid imaging method and device based on neural network structure search

Info

Publication number
CN111063000A
CN111063000A (application CN201911287973.7A; granted as CN111063000B)
Authority
CN
China
Prior art keywords
network model
magnetic resonance
data
neural network
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911287973.7A
Other languages
Chinese (zh)
Other versions
CN111063000B (en
Inventor
肖韬辉
王珊珊
李程
郑海荣
刘新
梁栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201911287973.7A priority Critical patent/CN111063000B/en
Publication of CN111063000A publication Critical patent/CN111063000A/en
Application granted granted Critical
Publication of CN111063000B publication Critical patent/CN111063000B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B 5/055 Detecting, measuring or recording for diagnosis involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 33/00 Arrangements or instruments for measuring magnetic variables
    • G01R 33/20 Arrangements or instruments for measuring magnetic variables involving magnetic resonance
    • G01R 33/44 Arrangements involving magnetic resonance using nuclear magnetic resonance [NMR]
    • G01R 33/48 NMR imaging systems
    • G01R 33/54 Signal processing systems, e.g. using pulse sequences; Generation or control of pulse sequences; Operator console
    • G01R 33/56 Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods


Abstract

The invention automatically generates a network by neural architecture search, iterating in a reinforcement-learning loop to reach an optimal result. A controller obtains a network structure, namely a sub-network, from a search space; the sub-network is trained on a prepared data set and tested on a validation set to obtain an error; the error is passed back to the controller, which optimizes further to produce another network structure. These steps repeat until an optimal reconstruction result is obtained.

Description

Magnetic resonance rapid imaging method and device based on neural network structure search
Technical Field
The invention relates to the field of image processing, in particular to a method for reconstructing a magnetic resonance image by using a neural network algorithm.
Background
Magnetic resonance imaging (MRI), a multi-parameter, multi-contrast imaging technique, is one of the main modalities in modern medical imaging. It can reflect tissue characteristics such as T1, T2, and proton density, providing information for the detection and diagnosis of disease. Its basic working principle is to excite hydrogen protons in the human body using the magnetic resonance phenomenon and radio-frequency excitation, perform position encoding with gradient fields, receive electromagnetic signals carrying the position information with receiving coils, and finally reconstruct the image via the Fourier transform.
Limited by the Fourier encoding scheme and the Nyquist sampling theorem, magnetic resonance imaging requires long scan times, which not only cause discomfort to the patient but also make motion artifacts likely in the reconstructed image. The long scan time likewise limits MRI of moving objects such as blood flow and the heart. Accelerating acquisition by improving hardware, such as gradient switching rate and magnetic field strength, is constrained by what human nerves can tolerate under rapidly switching magnetic fields and leaves little room for further improvement. Recently, deep learning has achieved remarkable results in image recognition, segmentation, and related tasks, and deep neural networks (DNNs) have been applied to accelerate magnetic resonance scanning and so address its slow imaging speed.
In conventional neural network algorithms, however, the structure of the network must be fixed in advance and then trained with labeled data to obtain a model usable for image processing. In this traditional framework, determining the network structure relies solely on the experience of algorithm designers, who tune parameters and test in search of a better structure. Parameter tuning is extremely difficult for deep models: the many hyper-parameters and structural parameters combine explosively, so traditional methods struggle to find the optimum. In early work, evolutionary approaches, with the genetic algorithm as a representative, were used to optimize the hyper-parameters and weights of neural networks; but networks then had only a few layers with dozens of neurons each, no complex architecture, and very few parameters, so direct optimization was feasible. Deep learning models, by contrast, have complex network structures and weight counts typically in the millions to billions, which evolutionary algorithms cannot optimize at all.
Disclosure of Invention
In view of at least one of the above technical problems, the invention provides a new method for determining a neural network model structure and a method for reconstructing highly undersampled magnetic resonance images: an optimal structure is found in a search space of neural network structures through reinforcement learning, and the neural network model corresponding to that optimal structure reconstructs highly undersampled magnetic resonance data into a magnetic resonance image, improving both the optimization efficiency of the neural network and the reconstruction quality of undersampled magnetic resonance images.
In view of this, an embodiment of the first aspect of the present invention provides a method for determining a neural network model for fast magnetic resonance image reconstruction, including:
S1: acquiring sample data and label data for model training;
S2: constructing a search space based on network topology parameters related to the topological structure of a neural network model, and establishing a corresponding first network model according to the network topology parameters in the search space;
S3: training the first network model with the sample data and label data to obtain a trained first network model;
S4: testing the trained first network model with test data to obtain an error result;
S5: finding the optimal solution of the network topology parameters in the search space using a reinforcement learning algorithm and the error result, and determining the trained first network model corresponding to the optimal solution as the neural network model for fast magnetic resonance image reconstruction.
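The loop formed by steps S1 to S5 can be sketched in outline. The controller, model builder, and training and evaluation routines below are toy placeholders invented purely for illustration, not the patent's implementation:

```python
# Sketch of steps S1-S5 as a single search loop. `GridController`,
# `build_model`, `train`, and `evaluate` are illustrative stand-ins;
# the patent's controller is a reinforcement-learning model.
def determine_network(controller, build_model, train, evaluate, rounds=10):
    """Return the best trained model found while walking the search space."""
    best_model, best_error = None, float("inf")
    params = controller.initial_params()          # a point in the search space
    for _ in range(rounds):
        model = build_model(params)               # S2: build first network model
        trained = train(model)                    # S3: train it
        error = evaluate(trained)                 # S4: test for an error result
        if error < best_error:
            best_model, best_error = trained, error
        params = controller.update(params, error)  # S5: feed the error back
    return best_model, best_error

class GridController:
    """Toy controller that simply walks a 1-D parameter grid."""
    def initial_params(self):
        return 0
    def update(self, params, error):
        return params + 1

# Pretend the "error" of architecture p is |p - 3|, minimised at p == 3.
model, err = determine_network(GridController(), lambda p: p,
                               lambda m: m, lambda m: abs(m - 3))
```

A real controller would update its proposal from the error signal rather than sweep a grid; the loop structure is the same.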
Preferably, the reinforcement learning algorithm is based on a recurrent neural network model. In another preferred embodiment, the recurrent neural network model is a long short-term memory network (LSTM).
In this embodiment, the first network model is a convolutional neural network (CNN), which may comprise different computing units, such as convolution structures, rectified linear units (ReLU), batch normalization, and skip connections, which constitute the necessary units and extension structures of a deep learning network. Correspondingly, the search space contains network topology parameters representing these computing units and the connection relationships between them. The units and topology of a neural network can thus be expressed through network topology parameters, i.e., a neural network model of corresponding structure can be constructed from a set of network topology parameters in the search space.
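As an illustration only, a discrete search space over such computing units might be encoded as sequences of unit names. The unit catalogue and layer count below are assumptions for the sketch, not the patent's actual space:

```python
import itertools

# Hypothetical catalogue of computing units, following the kinds named in
# the text (convolutions, ReLU, batch normalization, skip connections).
UNIT_TYPES = ["conv3x3", "conv5x5", "relu", "batch_norm", "skip_connection"]

def enumerate_architectures(num_layers, unit_types=UNIT_TYPES):
    """List every sequence of `num_layers` units from the search space.

    Each architecture is a tuple of unit names; a real NAS system samples
    from this space with a controller rather than enumerating it, since
    realistic spaces are far too large to enumerate.
    """
    return list(itertools.product(unit_types, repeat=num_layers))

space = enumerate_architectures(num_layers=3)   # 5**3 == 125 candidates
```

Even this toy space grows exponentially with depth, which is why the patent turns to reinforcement learning rather than exhaustive search.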
In this embodiment, the step S1 of obtaining sample data and label data for model training further includes:
s11: acquiring full-sampling magnetic resonance data for model training;
s12: reconstructing the fully sampled magnetic resonance data to obtain a magnetic resonance image as label data used for training;
s13: and resampling the fully sampled magnetic resonance data to obtain undersampled data which is used as sample data for training.
In the resampling step, different sampling templates and different undersampling rates can be used to generate multiple groups of undersampled training data. This improves the robustness of the resulting model and lets it adapt more broadly to different sampled data, so that data acquired with different sampling rates and methods reconstruct well.
Conversely, to improve the final reconstruction quality, resampling may use one specific sampling template and rate, i.e., training data are generated with a specific sampling method and rate. The trained model is then specifically adapted to that undersampling scheme, which can improve reconstruction accuracy. This approach suits fast reconstruction of magnetic resonance data sampled in a known, fixed way: if the sampling method and rate are already determined when the patient is examined, the model can be trained and used accordingly.
An embodiment of another aspect of the present invention provides a method for fast reconstruction of a magnetic resonance image, including:
acquiring undersampled magnetic resonance data of a target object;
the undersampled magnetic resonance data is reconstructed using the neural network model determined in the previous embodiment to obtain a magnetic resonance image of the target object.
In the invention, to obtain a better reconstruction, the sampling scheme used to acquire the target object's magnetic resonance data can be the same scheme used to generate the undersampled data in the training set.
In still another aspect, a further embodiment of the present invention provides a neural network model determining apparatus, including:
the training data acquisition module is used for acquiring sample data and label data for model training;
the search space module is used for constructing a search space based on network topology parameters related to a topological structure of the neural network model; establishing a corresponding first network model according to the network topology parameters in the search space;
the sub-network training module is used for training the first network model by using the sample data and the label data to obtain a trained first network model;
the error calculation module is used for testing the trained first network model by using test data to obtain an error result;
and the controller module is used for finding the optimal solution of the network topology parameters in the search space by using a reinforcement learning algorithm and the error result, and determining the trained first network model corresponding to the optimal solution as the neural network model for the rapid magnetic resonance image reconstruction.
In yet another aspect, a further embodiment of the present invention provides a computer storage medium storing one or more first instructions adapted to be loaded by a processor and to perform the model training method of the preceding embodiment; alternatively, the computer storage medium stores one or more second instructions adapted to be loaded by the processor and to perform the image processing method in the foregoing embodiments.
With this technical scheme, a network can be generated automatically by neural architecture search, iterating in a reinforcement-learning loop to reach an optimal result. A controller obtains a network structure, namely a sub-network, from the search space; the sub-network is trained on the prepared data set and tested on a validation set to obtain an error; the error is passed to the controller, which optimizes further to produce another network structure; and these steps repeat until the optimal reconstruction result is obtained.
Drawings
Fig. 1 shows a schematic diagram of a neural network model determination method according to a first embodiment of the invention;
Fig. 2 shows a schematic diagram of a training data acquisition method according to the first embodiment of the invention;
Fig. 3 shows a schematic diagram of a neural network structure search method according to the first embodiment of the invention;
Fig. 4 shows a schematic diagram of a magnetic resonance image reconstruction method according to a second embodiment of the invention;
Fig. 5 shows a schematic block diagram of a neural network model determination apparatus according to a third embodiment of the invention;
Fig. 6 shows a schematic diagram of a magnetic resonance apparatus according to a fourth embodiment of the invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced otherwise than as specifically described herein, and thus the scope of the present invention is not limited by the specific embodiments disclosed below.
It will be understood that when a module or unit is referred to as being "on," "connected to," or "coupled to" another module or unit, it can be directly on, connected, or coupled to the other module or unit, or intervening modules or units may be present. In contrast, when a module or unit is referred to as being "directly on," "directly connected to," or "directly coupled to" another module or unit, there are no intervening modules or units present. In this application, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for describing particular example embodiments only and is not intended to limit the scope of the application. As used herein, the singular forms "a", "an", and "the" may include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The present application relates generally to magnetic resonance imaging (MRI), and more particularly to systems and methods for fast imaging in MRI. MRI images can be generated by manipulating a virtual space called k-space. The term "k-space" as used herein may refer to a digital array (matrix) representing spatial frequencies in an MR image. In some embodiments, k-space may be the 2D or 3D Fourier transform of an MR image. The way k-space is manipulated, called k-space sampling, affects the acquisition time (TA). As used herein, the term "acquisition time" may refer to the time over which the signal of the entire pulse sequence is acquired, for example the time from the start of filling k-space to the acquisition of the entire k-space data set. Traditionally, two k-space sampling methods, Cartesian and non-Cartesian, are used to manipulate k-space. In Cartesian sampling the k-space trajectory is a set of straight lines, whereas in non-Cartesian sampling, e.g. radial or spiral sampling, the trajectory is not a straight line and may be longer than in Cartesian sampling.
Example one
FIG. 1 shows a schematic block diagram of a neural network model determination method according to one embodiment of the present invention.
As shown in fig. 1, according to a neural network model determining method of an embodiment of the present invention, the present embodiment includes the following steps:
s1: acquiring sample data and label data for model training;
the neural network is used for reconstructing a magnetic resonance image, the input of the neural network is undersampled magnetic resonance data, and the output of the neural network is a reconstructed magnetic resonance image. An important prerequisite for neural network applications is the need for a training set, the output samples in which are typically high quality noise-free magnetic resonance images. The high quality noise-free magnetic resonance image is typically reconstructed from fully or ultra-fully sampled k-space data. The acquisition of such fully or ultra-fully sampled k-space data consumes a relatively long acquisition time. In order to enable the model to be used for fast magnetic resonance imaging, input data of model training should be undersampled K-space data which is the same as that of fast imaging, and the undersampled K-space data can be obtained by performing secondary sampling on fully sampled K-space data.
As shown in fig. 2, in this embodiment, the step S1 of acquiring sample data and label data for model training further includes:
s11: acquiring full-sampling magnetic resonance data for model training;
s12: reconstructing the fully sampled magnetic resonance data to obtain a magnetic resonance image as label data used for training;
s13: and resampling the fully sampled magnetic resonance data to obtain undersampled data which is used as sample data for training.
In the resampling step, different sampling templates and different undersampling rates can be used to generate multiple groups of undersampled training data. This improves the robustness of the resulting model and lets it adapt more broadly to different sampled data, so that data acquired with different sampling rates and methods reconstruct well.
Conversely, to improve the final reconstruction quality, resampling may use one specific sampling template and rate, i.e., training data are generated with a specific sampling method and rate. The trained model is then specifically adapted to that undersampling scheme, which can improve reconstruction accuracy. This approach suits fast reconstruction of magnetic resonance data sampled in a known, fixed way: if the sampling method and rate are already determined when the patient is examined, the model can be trained and used accordingly.
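Steps S11 to S13 can be illustrated with a retrospective undersampling sketch. The random Cartesian column mask, centre fraction, and acceleration rate below are illustrative assumptions, and a constant image stands in for a real fully sampled scan:

```python
import numpy as np

def undersample_kspace(image, acceleration=4, center_fraction=0.08, seed=0):
    """Retrospectively undersample fully sampled data, as in steps S11-S13.

    `image` stands in for an image reconstructed from fully sampled data
    (the label); its 2D FFT plays the role of fully sampled k-space. A
    random Cartesian column mask keeps a fully sampled low-frequency band
    plus roughly 1/acceleration of the remaining columns.
    """
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))       # "full sampling"
    ny, nx = kspace.shape
    mask = rng.random(nx) < 1.0 / acceleration         # random columns
    low = int(nx * center_fraction / 2)
    mask[nx // 2 - low: nx // 2 + low] = True          # keep k-space centre
    undersampled = kspace * mask[None, :]              # zero unsampled columns
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
    return zero_filled, mask

label = np.ones((32, 32))              # stand-in fully sampled image (label)
sample, mask = undersample_kspace(label)   # training input (sample data)
```

Varying the seed, template, and acceleration yields the multiple training groups described above; fixing them yields the scheme-specific training data.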
Traditional algorithms often perform poorly when reconstructing highly undersampled k-space data. Existing deep neural network models for reconstructing highly undersampled magnetic resonance images, such as MoDL, ADMM-Net, AUTOMAP, U-Net, and VN-Net, are all networks with topologies fixed in advance; they generally carry specific requirements on the body parts imaged, the acceleration rate, and the sampling scheme, and their adaptability and extensibility are poor.
In addition, the fully sampled data used for training can be screened in advance, filtering out magnetic resonance images whose signal-to-noise ratio is below a preset threshold, to obtain a better result.
Before training, the images reconstructed from the fully sampled data can be normalized; the undersampled data obtained by resampling can likewise be normalized as a preprocessing step.
S2: constructing a search space based on network topology parameters related to a topological structure of a neural network model; establishing a corresponding first network model according to the network topology parameters in the search space;
the invention uses a neural structure search method to construct a neural network model for image reconstruction, and a great deal of structural engineering and technical knowledge is usually required for designing a neural network structure aiming at different tasks. Therefore, Neural Network Architecture Search (NAS) has come to its end, and its main task is to automate the process of artificially designing a Neural network Architecture.
Neural Architecture Search (NAS) has three main components:
1. The search space. The search space describes the set of candidate neural network architectures. It is designed per application: typically a space of convolutional networks for computer vision tasks, or of recurrent neural networks for language modeling tasks. NAS is therefore not fully automated, since the design of these search spaces fundamentally relies on manually designed architectures as a starting point. Even so, many architectural parameters remain to be decided; in practice, the number of candidate architectures in these search spaces is often more than 10^10.
For the present invention, the neural network serves fast magnetic resonance image reconstruction, so the architectures in the search space are limited to convolutional neural networks. Accordingly, the search space contains the different computing units of a convolutional neural network, such as convolution structures, rectified linear units (ReLU), batch normalization, and skip connections, which constitute the necessary units and extension structures of a deep learning network.
2. The optimization method. The optimization method determines how to traverse the search space to find a good architecture. The most basic method is random search, while various adaptive methods have been introduced: reinforcement learning, evolutionary search, gradient-based optimization, and Bayesian optimization. Although these adaptive methods differ in how they choose which architectures to evaluate, they all try to steer the search toward architectures more likely to perform well. All of these methods have counterparts in traditional hyper-parameter optimization.
The invention searches with reinforcement learning; specifically, a recurrent neural network model executes the optimization method. This component is called the controller in the invention.
3. The evaluation method. This component measures the performance of each architecture considered by the optimization method. The simplest, but most computationally expensive, option is to fully train each network.
In this step, the neural network type is fixed as a convolutional neural network, and a convolutional network with an initial structure can be constructed from initial parameters, according to the computing units in the search space and the connection relationships between them.
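Purely as a sketch of building a network from a set of topology parameters, the snippet below maps assumed unit names to simple numpy operations and composes them in sequence; a real implementation would of course use a deep learning framework with learnable weights:

```python
import numpy as np

def conv3x3(x):
    """Stand-in 3x3 convolution: a fixed mean filter with zero padding."""
    k = np.ones((3, 3)) / 9.0
    out = np.zeros_like(x)
    p = np.pad(x, 1)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

# Map assumed unit names to operations; the names echo the computing units
# the text lists but the implementations are illustrative only.
UNITS = {
    "conv3x3": conv3x3,
    "relu": lambda x: np.maximum(x, 0.0),
    "identity": lambda x: x,
}

def build_network(topology):
    """Compose the units named in `topology` into one forward function."""
    def forward(x):
        for name in topology:
            x = UNITS[name](x)
        return x
    return forward

net = build_network(["conv3x3", "relu", "conv3x3"])   # one point in the space
y = net(np.random.default_rng(0).normal(size=(8, 8)))
```

Changing the `topology` list is the sketch's analogue of the controller proposing a different set of network topology parameters.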
S3: training the first network model by using the sample data and the label data to obtain a trained first network model;
This step trains the neural network model in the search space using the training data acquired in step S1. A neural network structure search usually requires an initial structure, which is then trained on the training data until convergence.
The first network model is trained to convergence by minimizing the reconstruction loss

$$\hat{\Theta} = \arg\min_{\Theta} \sum_{m,n} \left\| F(x_{m,n}; \Theta) - y_{m,n} \right\|_2^2$$

in the formula: F denotes the end-to-end mapping of the sub-network, and F(x_{m,n}; Θ) is the output of the network; Θ represents the parameters to be learned; x_{m,n} is the input of the network; y_{m,n} is the output label of the network.
And for each searched neural network in the neural network structure search, training by using training data is required to obtain a trained first network model.
S4: testing the trained first network model by using test data to obtain an error result;
This step evaluates the different neural network structures in the search space. Different evaluation parameters can be used for different applications: some scenarios emphasize the computational performance of the model and may be evaluated with latency parameters; others emphasize the accuracy of its results, so error parameters are chosen. Magnetic resonance image reconstruction is chiefly concerned with reconstruction accuracy, so only error parameters are used here to evaluate the different network structures.
Error evaluation uses test data distinct from the training set, generated in the same way: fully sampled k-space magnetic resonance data are reconstructed into accurate magnetic resonance images, and undersampled data obtained by subsampling serve as the test inputs. The undersampled data are fed into the trained first network model, and the output image is compared with the accurate image reconstructed from the fully sampled data to obtain the reconstruction error. The error may be the mean squared error or any other error measure known in the art. The test set likewise comprises multiple different magnetic resonance images, from which the final test error result is obtained.
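For instance, the mean squared error between a reconstruction and its fully sampled reference can serve as the error result fed back to the controller; this helper is a sketch, not the patent's code:

```python
import numpy as np

def reconstruction_error(predicted, reference):
    """Mean squared error between a reconstructed image and the accurate
    image from fully sampled data; averaged over a test set, this is the
    error result passed back to the controller."""
    return float(np.mean((predicted - reference) ** 2))

reference = np.zeros((4, 4))          # stand-in fully sampled reconstruction
predicted = np.full((4, 4), 0.5)      # stand-in network output
err = reconstruction_error(predicted, reference)
```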
S5: and finding the optimal solution of the network topological parameters in the search space by using a reinforcement learning algorithm and the error result, and determining the trained first network model corresponding to the optimal solution as the neural network model for the rapid reconstruction of the magnetic resonance image.
As shown in fig. 3, the component performing reinforcement learning is generally called the controller. The controller generates a sub-network based on the search space; the sub-network is trained on the prepared training samples until convergence and then tested on the validation set to obtain an error result; the result is fed back to the controller, which adjusts accordingly and generates a new sub-network; training, testing, and feedback then repeat until the best result is obtained. The controller's adjustment process is itself trained with reinforcement learning. Moreover, if conditions permit distributed parallel training, multiple controllers can be run simultaneously, each generating multiple sub-networks, greatly improving the efficiency and performance of the neural structure search.
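The controller loop can be sketched with a deliberately simplified stand-in. The patent's controller is a recurrent network trained by reinforcement learning; below, a running-average bandit plays that role, and a made-up validation-error table replaces sub-network training and testing. All names and numbers are illustrative:

```python
import numpy as np

# Toy stand-in for the controller loop of Fig. 3.
ARCHITECTURES = ["net_a", "net_b", "net_c"]
VALIDATION_ERROR = {"net_a": 0.9, "net_b": 0.2, "net_c": 0.5}  # pretend errors

def search(num_rounds=30, explore_prob=0.3, seed=0):
    rng = np.random.default_rng(seed)
    scores, counts = {}, {}
    for arch in ARCHITECTURES:                 # evaluate each sub-network once
        scores[arch] = 1.0 - VALIDATION_ERROR[arch]   # low error, high reward
        counts[arch] = 1
    for _ in range(num_rounds):
        if rng.random() < explore_prob:        # explore a random sub-network
            arch = ARCHITECTURES[int(rng.integers(len(ARCHITECTURES)))]
        else:                                  # exploit the best found so far
            arch = max(scores, key=scores.get)
        reward = 1.0 - VALIDATION_ERROR[arch]  # feedback to the "controller"
        counts[arch] += 1
        scores[arch] += (reward - scores[arch]) / counts[arch]
    return max(scores, key=scores.get)

best = search()
```

The explore/exploit trade-off here mirrors, in miniature, how the reinforcement-learned controller balances trying new structures against refining promising ones.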
Example two
Fig. 4 shows a schematic view of another embodiment of the invention.
As shown in fig. 4, a second embodiment of the present invention provides a magnetic resonance image reconstruction method using a neural network model, which specifically includes:
acquiring undersampled magnetic resonance data of a target object;
and reconstructing the undersampled magnetic resonance data by using the neural network model determined in the previous embodiment to obtain a magnetic resonance image of the target object.
In the step of acquiring undersampled magnetic resonance data of a target object, an undersampled magnetic resonance scan of the human body is performed by a magnetic resonance device. Undersampling trajectories commonly used for fast magnetic resonance imaging include radial and spiral trajectories. In the present invention, a higher acceleration factor can be used for undersampling so as to achieve a higher scan speed.
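As a simple illustration of acceleration-factor undersampling (the patent mentions radial and spiral trajectories; for brevity this hypothetical sketch uses a Cartesian mask, whose sampling rate is easy to read off):

```python
import numpy as np

def cartesian_mask(shape, acceleration, center_lines=8):
    """Keep every `acceleration`-th phase-encoding line plus a fully sampled
    low-frequency band at the k-space center (a common practical choice)."""
    ny, nx = shape
    mask = np.zeros(shape, dtype=bool)
    mask[::acceleration, :] = True                                # regular undersampling
    c = ny // 2
    mask[c - center_lines // 2:c + center_lines // 2, :] = True  # center band
    return mask

# A 4x acceleration mask samples roughly a quarter of k-space,
# so the scan takes roughly a quarter of the time.
mask = cartesian_mask((256, 256), acceleration=4)
sampling_rate = mask.mean()
```

A higher `acceleration` value shrinks the sampled fraction further, trading scan time against the aliasing artifacts the reconstruction network must remove.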
During image reconstruction, the neural network model used is the optimal, fully trained network found by the neural architecture search, so it can be applied directly to reconstruct and output the magnetic resonance image.
EXAMPLE III
As shown in fig. 5, a third embodiment of the present invention provides a neural network model determination apparatus, which may be a computer program (including program code) running in a terminal. The apparatus can execute the model determination method of the first embodiment and specifically includes:
the training data acquisition module is used for acquiring sample data and label data for model training;
the search space module is used for constructing a search space based on network topology parameters related to a topological structure of the neural network model; establishing a corresponding first network model according to the network topology parameters in the search space;
the sub-network training module is used for training the first network model by using the sample data and the label data to obtain a trained first network model;
the error calculation module is used for testing the trained first network model by using test data to obtain an error result;
and the controller module is used for finding the optimal solution of the network topology parameters in the search space by using a reinforcement learning algorithm and the error result, and determining the trained first network model corresponding to the optimal solution as the neural network model for the rapid magnetic resonance image reconstruction.
The units of the model training apparatus may be combined, in part or in whole, into one or several other units, or some unit(s) may be further split into multiple functionally smaller units; either arrangement can achieve the same operations without affecting the technical effect of the embodiments of the present invention. The units are divided based on logical functions; in practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present invention, the model training apparatus may also include other units, and these functions may likewise be realized with the assistance of, or through the cooperation of, multiple other units.
According to another embodiment of the present invention, the model training apparatus shown in fig. 5 may be constructed, and the neural network model determination method of the embodiment of the present invention implemented, by running a computer program (including program code) capable of executing the steps of the corresponding method of the first embodiment on a general-purpose computing device, such as a computer comprising a central processing unit (CPU), random access memory (RAM), read-only memory (ROM) and other storage elements. The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and executed by the above computing device via that medium.
Example four
An embodiment of the present invention provides a magnetic resonance apparatus including:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the fast magnetic resonance image reconstruction method described in the second embodiment.
As shown in fig. 6, the apparatus comprises a processor 201, a memory 202, an input device 203 and an output device 204. The number of processors 201 in the device may be one or more; one processor 201 is taken as an example in fig. 6. The processor 201, the memory 202, the input device 203 and the output device 204 in the apparatus may be connected by a bus or by other means; connection by a bus is taken as an example in fig. 6.
The memory 202 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the neural network model determining method in the first embodiment of the present invention, or program instructions/modules corresponding to the magnetic resonance image reconstruction algorithm in the second embodiment of the present invention. The processor 201 executes various functional applications of the apparatus and data processing by executing software programs, instructions and modules stored in the memory 202, i.e. implements the magnetic resonance image reconstruction method described above.
The memory 202 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 202 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 202 may further include memory located remotely from the processor 201, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 203 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function controls of the apparatus.
The output device 204 may include a display device, such as the display screen of a user terminal.
EXAMPLE five
A fifth embodiment of the present invention provides a computer storage medium storing one or more first instructions adapted to be loaded by a processor to execute the model determination method of the foregoing embodiment; alternatively, the computer storage medium stores one or more second instructions adapted to be loaded by the processor to execute the image reconstruction method of the foregoing embodiments.
The steps in the method of each embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device of each embodiment of the invention can be merged, divided and deleted according to actual needs.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing the relevant hardware, the program being stored in a computer-readable storage medium, where the storage medium includes read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical storage, magnetic disk, magnetic tape, or any other medium which can be used to carry or store data and which can be read by a computer.
The technical solutions of the present invention have been described in detail with reference to the accompanying drawings. The above description covers only preferred embodiments of the present invention and is not intended to limit it; various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (10)

1. A method for determining a neural network model for fast magnetic resonance image reconstruction, comprising:
s1: acquiring sample data and label data for model training;
s2: constructing a search space based on network topology parameters related to a topological structure of a neural network model; establishing a corresponding first network model according to the network topology parameters in the search space;
s3: training the first network model by using the sample data and the label data to obtain a trained first network model;
s4: testing the trained first network model by using test data to obtain an error result;
s5: and finding the optimal solution of the network topological parameters in the search space by using a reinforcement learning algorithm and the error result, and determining the trained first network model corresponding to the optimal solution as the neural network model for the rapid reconstruction of the magnetic resonance image.
2. The method of claim 1, wherein:
the reinforcement learning algorithm is an algorithm based on a recurrent neural network model.
3. The method of claim 1, wherein the first network model is a convolutional neural network (CNN) model.
4. The method of claim 3, wherein the computing units in the network topology parameters comprise: convolution structures, rectified linear units (ReLU), batch normalization, and skip connections.
5. The method of claim 1, wherein the step S1 further comprises:
s11: acquiring full-sampling magnetic resonance data for model training;
s12: reconstructing the fully sampled magnetic resonance data to obtain a magnetic resonance image as label data used for training;
s13: and resampling the fully sampled magnetic resonance data to obtain undersampled data which is used as sample data for training.
6. The method of claim 2, wherein the recurrent neural network model is a long short-term memory (LSTM) network model.
7. A fast magnetic resonance image reconstruction method, characterized by comprising the following steps:
acquiring undersampled magnetic resonance data of a target object;
reconstructing the undersampled magnetic resonance data using the neural network model determined according to any one of claims 1 to 6, to obtain a magnetic resonance image.
8. An apparatus for determining a neural network model, comprising:
the training data acquisition module is used for acquiring sample data and label data for model training;
the search space module is used for constructing a search space based on network topology parameters related to a topological structure of the neural network model; establishing a corresponding first network model according to the network topology parameters in the search space;
the sub-network training module is used for training the first network model by using the sample data and the label data to obtain a trained first network model;
the error calculation module is used for testing the trained first network model by using test data to obtain an error result;
and the controller module is used for finding the optimal solution of the network topology parameters in the search space by using a reinforcement learning algorithm and the error result, and determining the trained first network model corresponding to the optimal solution as the neural network model for the rapid magnetic resonance image reconstruction.
9. A fast magnetic resonance image reconstruction apparatus, characterized by comprising:
the data acquisition module is used for acquiring undersampled magnetic resonance data of a target object;
an image reconstruction module for reconstructing the undersampled magnetic resonance data using the neural network model determined according to any one of claims 1 to 6, to obtain a magnetic resonance image.
10. A storage medium containing computer-executable instructions for performing the method of any one of claims 1-7 when executed by a computer processor.
CN201911287973.7A 2019-12-15 2019-12-15 Magnetic resonance rapid imaging method and device based on neural network structure search Active CN111063000B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911287973.7A CN111063000B (en) 2019-12-15 2019-12-15 Magnetic resonance rapid imaging method and device based on neural network structure search

Publications (2)

Publication Number Publication Date
CN111063000A true CN111063000A (en) 2020-04-24
CN111063000B CN111063000B (en) 2023-12-26

Family

ID=70301801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911287973.7A Active CN111063000B (en) 2019-12-15 2019-12-15 Magnetic resonance rapid imaging method and device based on neural network structure search

Country Status (1)

Country Link
CN (1) CN111063000B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022037039A1 (en) * 2020-08-18 2022-02-24 中国银联股份有限公司 Neural network architecture search method and apparatus
WO2023102782A1 (en) * 2021-12-08 2023-06-15 深圳先进技术研究院 Reconstruction method and system for quantitative imaging of magnetic resonance parameters

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680657A (en) * 2016-08-01 2018-02-09 西门子保健有限责任公司 Medical scanners learn by oneself optimization clinical protocol and IMAQ
CN108647741A (en) * 2018-05-18 2018-10-12 湖北工业大学 A kind of image classification method and system based on transfer learning
CN108805877A (en) * 2017-05-03 2018-11-13 西门子保健有限责任公司 For the multiple dimensioned deeply machine learning of the N-dimensional segmentation in medical imaging
US20190258713A1 (en) * 2018-02-22 2019-08-22 Google Llc Processing text using neural networks
CN110333466A (en) * 2019-06-19 2019-10-15 东软医疗系统股份有限公司 A kind of MR imaging method neural network based and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIAO Taohui; GUO Jian; ZHAO Tao; WANG Shanshan; LIANG Dong: "Fast magnetic resonance imaging and undersampled trajectory design based on deep learning", Journal of Image and Graphics, no. 02, pages 194 - 208 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant