WO2021119875A1 - Method and apparatus for fast magnetic resonance imaging based on neural architecture search - Google Patents
Method and apparatus for fast magnetic resonance imaging based on neural architecture search
- Publication number: WO2021119875A1 (application PCT/CN2019/125460)
- Authority: WIPO (PCT)
Classifications
- G06N3/04 — Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology
- G06N3/08 — Computing arrangements based on biological models; neural networks; learning methods
- G06T11/00 — Image data processing or generation; 2D [two-dimensional] image generation
Definitions
- the present invention relates to the field of image processing, in particular to a method for reconstructing a magnetic resonance image using a neural network algorithm.
- Magnetic resonance imaging (MRI), as a multi-parameter, multi-contrast imaging technology, is one of the main imaging methods in modern medical imaging. It can reflect tissue characteristics such as T1, T2, and proton density, providing information for the detection and diagnosis of diseases.
- the basic working principle of magnetic resonance imaging is to exploit the magnetic resonance phenomenon: radio-frequency pulses excite hydrogen protons in the human body, gradient fields perform position encoding, a receiving coil receives the electromagnetic signals carrying position information, and finally a Fourier transform reconstructs the image information.
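The Fourier-transform reconstruction described above can be sketched with NumPy; a synthetic phantom stands in for real scanner data, so the "acquired" k-space here is hypothetical:

```python
import numpy as np

# Hypothetical "acquired" k-space: the 2D FFT of a simple synthetic phantom.
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0                      # a square "tissue" region
kspace = np.fft.fftshift(np.fft.fft2(phantom))   # centered k-space data

# Reconstruction: inverse 2D Fourier transform of the k-space samples.
image = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

print(np.allclose(image, phantom, atol=1e-8))    # round trip recovers the phantom
```

With fully sampled k-space the inverse transform recovers the object exactly (up to floating-point error); the rest of the document concerns what happens when only a fraction of k-space is acquired.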
- MRI requires a long scan time, which not only causes discomfort to the patient but also makes the reconstructed images prone to motion artifacts.
- the long scan time limits MRI imaging of moving objects, such as blood flow and the heart.
- methods that accelerate acquisition are restricted by the tolerance of human nerves to rapid magnetic field switching, leaving little room for further improvement.
- deep learning methods have achieved significant results in image recognition and segmentation.
- a traditional neural network algorithm requires the structure of the neural network to be set in advance; pre-labeled data are then used to train the network, yielding a neural network model that can be used for image processing.
- determining the network structure relies solely on the experience of the algorithm designer, who adjusts and tests parameters to find a better network structure.
- parameter tuning is very difficult for deep models: the numerous hyperparameters and network structure parameters produce a combinatorial explosion, and traditional methods struggle to find the optimal solution.
- much early work used evolutionary algorithms, represented by genetic algorithms, to optimize the hyperparameters and weights of neural networks, because the neural networks of that time had only a few layers with a dozen or so neurons each and no complicated architecture; the parameters were very limited and could be optimized directly.
- the deep learning model has a complex network structure.
- the weight parameters usually number in the hundreds of millions to billions, which evolutionary algorithms cannot optimize at all.
- the present invention is based on at least one of the above technical problems, and proposes a new method for determining the structure of a neural network model and a method for reconstructing a highly under-sampled magnetic resonance image.
- by searching the space of neural network structures for an optimal structure, and using the neural network model corresponding to that optimal structure to reconstruct highly under-sampled magnetic resonance data, a reconstructed magnetic resonance image is obtained; this improves both the optimization efficiency of the neural network and the quality of under-sampled magnetic resonance image reconstruction.
- the embodiment of the first aspect of the present invention proposes a method for determining a neural network model for rapid reconstruction of magnetic resonance images, including:
- S2: construct a search space based on network topology parameters related to the topology of the neural network model, and establish a corresponding first network model according to the network topology parameters in the search space;
- the reinforcement learning algorithm is an algorithm based on a recurrent neural network model.
- the recurrent neural network model is a long short-term memory network model (LSTM).
- the first network model is a convolutional neural network model (CNN)
- the convolutional neural network model can include different computing units, such as convolutional architectures, rectified linear units (ReLU), batch normalization, and skip connections, which form the necessary units and extended structures of deep learning networks.
- the search space includes network topology parameters that characterize these computing units and their connection relationships. Through the network topology parameters, the units and topology of the neural network can be characterized; that is, from a set of network topology parameters in the search space, a neural network model with the corresponding structure can be constructed.
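As a minimal illustration (not the patent's actual encoding), a search space of computing units and connection flags can be parameterized as discrete per-layer choices; one sampled set of network topology parameters then fully specifies a candidate model:

```python
import random

# Hypothetical search space: per-layer unit type and a skip-connection flag.
SEARCH_SPACE = {
    "unit": ["conv3x3", "conv5x5", "conv3x3+bn", "conv3x3+relu"],
    "skip": [False, True],
}
NUM_LAYERS = 3

def sample_topology(rng):
    """Draw one set of network topology parameters from the search space."""
    return [
        {"unit": rng.choice(SEARCH_SPACE["unit"]),
         "skip": rng.choice(SEARCH_SPACE["skip"])}
        for _ in range(NUM_LAYERS)
    ]

rng = random.Random(0)
topology = sample_topology(rng)
print(topology)

# The space grows multiplicatively: (4 unit types x 2 skip options) ^ 3 layers.
size = (len(SEARCH_SPACE["unit"]) * len(SEARCH_SPACE["skip"])) ** NUM_LAYERS
print(size)  # 512 candidate architectures
```

Even this toy space contains 512 architectures; realistic NAS spaces, as noted later in the document, can exceed 10^10 candidates.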
- Another embodiment of the present invention provides a method for rapid reconstruction of magnetic resonance images, including:
- the neural network model determined in the foregoing embodiment is used to reconstruct the under-sampled magnetic resonance data to obtain a magnetic resonance image of the target object.
- the sampling method of the under-sampled data in the training data can be the same as the sampling method used when the magnetic resonance data are sampled from the target object.
- another embodiment of the present invention provides a computer storage medium storing one or more first instructions suitable for being loaded by a processor to execute the model training method in the foregoing embodiment; or the computer storage medium stores one or more second instructions suitable for being loaded by the processor to execute the image processing method in the foregoing embodiment.
- the neural architecture search method can be used to generate the network automatically, and the reinforcement learning method iterates continuously in a loop to obtain the optimal result.
- Fig. 1 shows a method for determining a neural network model according to the first embodiment of the present invention
- Fig. 2 shows a schematic diagram of a training data acquisition method according to the first embodiment of the present invention
- Fig. 3 shows a schematic diagram of a neural network structure search method according to the first embodiment of the present invention
- Fig. 4 shows a schematic diagram of a magnetic resonance image reconstruction method according to the second embodiment of the present invention
- Fig. 5 shows a schematic block diagram of an apparatus for determining a neural network model according to a third embodiment of the present invention
- Fig. 6 shows a schematic diagram of a magnetic resonance device according to the fourth embodiment of the present invention.
- when a module or unit is referred to as being "on," "connected to," or "coupled to" another module or unit, it can be directly on, connected to, or coupled to the other module or unit, or intervening modules or units may be present. In contrast, when a module or unit is referred to as being "directly on," "directly connected to," or "directly coupled to" another module or unit, no intervening modules or units are present.
- the term “and/or” can include any and all combinations of one or more of the related listed items.
- the MRI image can be generated by manipulating a virtual space called k-space.
- k-space used herein may refer to a digital array (matrix) representing the spatial frequency in the MR image.
- the k-space may be a 2D or 3D Fourier transform of the MR image.
- the way of manipulating k-space, called k-space sampling, can affect the acquisition time (TA).
- acquisition time can refer to the time to collect the signal of the entire pulse sequence.
- the term "acquisition time” may refer to the time from the beginning of filling k-space to collecting the entire k-space data set.
- two k-space sampling methods, Cartesian sampling and non-Cartesian sampling, are provided to manipulate k-space.
- in Cartesian sampling, the k-space trajectory is a straight line.
- in non-Cartesian sampling, such as radial sampling or spiral sampling, the k-space trajectory can be longer than the k-space trajectory in Cartesian sampling.
- Fig. 1 shows a schematic block diagram of a method for determining a neural network model according to an embodiment of the present invention.
- this embodiment includes the following steps:
- the neural network of the present invention is used for the reconstruction of magnetic resonance images; its input is under-sampled magnetic resonance data, and its output is a reconstructed magnetic resonance image.
- An important prerequisite for the application of neural networks is the need for a training set.
- the output samples in the training set are generally high-quality, noise-free magnetic resonance images.
- the high-quality, noise-free magnetic resonance images are generally reconstructed from fully sampled or over-sampled k-space data.
- acquiring such fully sampled or over-sampled k-space data requires a long acquisition time.
- the input data for model training should likewise be under-sampled k-space data, as in fast imaging.
- the under-sampled k-space data can be obtained by sub-sampling the fully sampled k-space data.
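A sketch of generating an under-sampled training input by sub-sampling fully sampled k-space, using a hypothetical random Cartesian line mask at 4x under-sampling (the patent does not prescribe this particular mask):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fully sampled k-space of a synthetic image (stand-in for real scan data).
full_image = rng.standard_normal((64, 64))
full_kspace = np.fft.fft2(full_image)

# Random Cartesian mask: keep 16 of 64 phase-encoding lines (4x under-sampling).
keep = rng.choice(64, size=16, replace=False)
mask = np.zeros((64, 1))
mask[keep] = 1.0

under_kspace = full_kspace * mask                 # sub-sample the full k-space
zero_filled = np.abs(np.fft.ifft2(under_kspace))  # aliased zero-filled input image

print(mask.sum())  # 16.0 lines kept
```

The (zero_filled, full_image) pair then serves as one (input, label) training example; drawing the mask differently per example corresponds to the multi-template strategy described above.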
- the step S1 of obtaining sample data and label data for model training further includes:
- S12: reconstruct the fully sampled magnetic resonance data to obtain a magnetic resonance image, which is used as label data for training;
- different sampling templates and different under-sampling factors can be used to generate multiple sets of under-sampled data for training, so as to improve the robustness of the model and adapt it more widely to differently sampled data; during image reconstruction it then has good adaptability to data from different sampling methods and different under-sampling factors.
- alternatively, re-sampling can use a specific sampling template and sampling rate; that is, a specific sampling method and rate are used to generate the training data, so that the trained model is suited specifically to that under-sampled data, but the reconstruction accuracy can be improved.
- this approach is suitable for rapid reconstruction of magnetic resonance data sampled in a specific way: if the sampling method and under-sampling factor have already been determined for the patient's magnetic resonance examination, the model can be trained and the reconstruction performed in this way.
- the fully sampled data used for training can be filtered in advance to remove magnetic resonance images with a signal-to-noise ratio below a preset threshold, in order to obtain better results.
- the images reconstructed from the fully sampled data can be normalized.
- normalization preprocessing can also be performed before training.
- S2: construct a search space based on network topology parameters related to the topology of the neural network model, and establish a corresponding first network model according to the network topology parameters in the search space;
- the present invention uses a neural structure search method to construct a neural network model for image reconstruction.
- designing a neural network structure usually requires substantial structural engineering and technical knowledge. Neural Architecture Search (NAS) therefore emerged to meet this need; its main task is to automate the process of designing artificial neural network structures.
- the search space describes the set of potentially possible neural network architectures.
- the search space is designed specifically for the application, such as a convolutional network space for computer vision tasks or a recurrent neural network space for language modeling tasks. The NAS method is therefore not completely automated, because the design of these search spaces fundamentally relies on a human-designed architecture as a starting point. Even so, many architectural parameters still need to be decided; in fact, the number of potential architectures to be considered in these search spaces often exceeds 10^10.
- the neural network is used for fast magnetic resonance image reconstruction. Therefore, the neural network architecture in the search space is limited to the convolutional neural network.
- the search space includes the different computing units of the convolutional neural network, such as convolutional architectures, rectified linear units (ReLU), batch normalization, and skip connections, which form the necessary units and extended structures of deep learning networks.
- the optimization method determines how to traverse the search space in order to find a good architecture.
- the most basic method is random search, and various adaptive methods have also been introduced, such as reinforcement learning, evolutionary search, gradient-based optimization, and Bayesian optimization. Although these adaptive methods differ slightly in choosing which architectures to evaluate, they all try to steer the search toward network architectures that are more likely to perform well. All of these methods have counterparts in traditional hyperparameter optimization.
- the present invention uses a reinforcement learning method to search. Specifically, a recurrent neural network model performs the optimization; in the present invention, this component is called the controller.
- the evaluation method measures the performance of each structure considered by the optimization method. The simplest, but most computationally intensive, option is to train a network completely.
- the type of neural network is limited to a convolutional neural network, and a convolutional neural network with an initial structure can be constructed from initial parameters based on the computing units in the search space and the connection relationships between them.
- This step uses the training data obtained in step S1 to train the neural network model in the search space.
- an initial structure is often required, and then the initial structure is trained using training data until convergence.
- the convergence objective for training the first network model is as follows:
- $\min_{\theta} \sum_{m,n} \left\| F(x_{m,n}; \theta) - y_{m,n} \right\|_2^2$
- where F represents the end-to-end mapping of the sub-network and F(x_{m,n}; θ) is the output of the network; θ represents the parameters that need to be learned; x_{m,n} represents the input of the network; and y_{m,n} represents the output label of the network.
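A minimal sketch of training to convergence by minimizing the squared error between the network output F(x; θ) and the labels y. All data are synthetic, and a single linear map stands in for the convolutional sub-network, so this illustrates the objective rather than the patent's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training pairs (x, y): y is an exact linear function of x.
true_w = np.array([2.0, -1.0])
X = rng.standard_normal((100, 2))
Y = X @ true_w

theta = np.zeros(2)          # parameters theta to be learned
lr = 0.1
for _ in range(200):         # gradient descent until the loss stops decreasing
    pred = X @ theta                     # F(x; theta)
    grad = X.T @ (pred - Y) / len(X)     # gradient of the mean squared error
    theta -= lr * grad

loss = np.mean((X @ theta - Y) ** 2)
print(loss < 1e-6)           # converged close to the true mapping
```

In the patent's setting, X would be zero-filled under-sampled reconstructions, Y the fully sampled label images, and the gradient step would be carried out by backpropagation through the convolutional network.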
- for each neural network searched during the neural network structure search, the training data are used for training to obtain the trained first network model.
- This step is used to evaluate different neural network structures in the search space.
- different evaluation parameters can be used. For example, some application scenarios care more about the computational performance of the neural network model, so a computation-latency parameter can be used for evaluation; others care more about the accuracy of the model's results, so an error parameter can be selected for evaluation.
- the reconstruction of magnetic resonance images pays more attention to the accuracy of the reconstruction results. Therefore, only error parameters are selected to evaluate different network structures.
- the error evaluation uses test data that is different from the training data set.
- the test data also use fully sampled k-space magnetic resonance data: the accurate magnetic resonance image is reconstructed from the fully sampled data, and the under-sampled data obtained by sub-sampling serve as the input data of the test set.
- the error calculation can use the mean square error, or other error calculation methods known in the prior art.
- the test data set also includes a large number of different magnetic resonance images, and finally the test error results are obtained from multiple magnetic resonance images.
- the part that performs reinforcement learning is generally called the controller.
- the controller generates a sub-network based on the search space, trains it on the prepared training samples until convergence, and then tests it on the verification set to obtain the corresponding error result. The result is fed back to the controller, which makes corresponding adjustments and generates a new sub-network; training, testing, and feedback are repeated until the best result is obtained.
- the adjustment process of the controller here uses reinforcement learning to train.
- multiple controllers can be set up at the same time and multiple sub-networks can be generated, which can greatly improve the efficiency and performance of neural structure search.
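The generate-train-test-feedback loop above can be sketched as follows. Plain random search stands in for the patent's LSTM-based reinforcement-learning controller, and the validation error is a synthetic stand-in for actually training a sub-network, so this is a simplified illustration of the loop's structure only:

```python
import random

def build_and_train(topology):
    """Stand-in for training a sub-network to convergence and testing it.
    Purely for illustration, deeper topologies score a lower error here."""
    return 1.0 / (1 + sum(topology)) + random.random() * 0.01

random.seed(0)
search_space = [1, 2, 3, 4]          # hypothetical per-slot layer counts
best_topology, best_error = None, float("inf")

for _ in range(20):                  # controller loop: propose, evaluate, feed back
    topology = [random.choice(search_space) for _ in range(3)]
    error = build_and_train(topology)    # train sub-network, test on validation set
    if error < best_error:               # feedback: keep the best architecture so far
        best_topology, best_error = topology, error

print(best_topology, round(best_error, 3))
```

A reinforcement-learning controller replaces the random proposals with samples from a learned policy that is updated using the error signal as (negative) reward, which is what lets the search concentrate on promising regions of the space.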
- Fig. 4 shows a schematic diagram of another embodiment of the present invention.
- the second embodiment of the present invention provides a method for reconstructing a magnetic resonance image using a neural network model, which specifically includes:
- the neural network model determined in the foregoing embodiment is used to reconstruct the under-sampled magnetic resonance data to obtain a magnetic resonance image of the target object.
- the step of obtaining the under-sampled magnetic resonance data of the target object is performed by an under-sampled magnetic resonance scan of the human body using a magnetic resonance device.
- Commonly used under-sampling methods for rapid magnetic resonance imaging generally include radial trajectories and spiral trajectories.
- a higher acceleration rate can be used for under-sampling to obtain a faster scan.
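The acceleration rate mentioned above is commonly defined as the ratio of total k-space samples to acquired samples; a quick sketch under that assumption, with a hypothetical line mask:

```python
import numpy as np

mask = np.zeros(256)       # hypothetical phase-encoding line mask
mask[::4] = 1              # acquire every 4th line only

acceleration = mask.size / mask.sum()
print(acceleration)        # 4.0 — the scan is 4x faster than full sampling
```

Since scan time scales with the number of acquired phase-encoding lines, a 4x acceleration roughly quarters the acquisition time, at the cost of the aliasing the neural network must remove.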
- the neural network model used is the optimal network found and trained during the neural network structure search; this network can be used directly for image reconstruction to obtain the output magnetic resonance image.
- the third embodiment of the present invention provides a neural network model determination device.
- the model determination device may be a computer program (including program code) running in a terminal.
- the model training device can execute the model determination method in the first embodiment, which specifically includes:
- the training data acquisition module is used to acquire sample data and label data for model training
- the search space module is used to construct a search space based on network topology parameters related to the topology of the neural network model; establish a corresponding first network model according to the network topology parameters in the search space;
- a sub-network training module configured to use the sample data and the label data to train the first network model to obtain a trained first network model
- An error calculation module configured to use test data to test the trained first network model to obtain an error result
- the controller module is configured to use the reinforcement learning algorithm and the error result to find the optimal solution of the network topology parameters in the search space, and to determine the trained first network model corresponding to the optimal solution as the neural network model used for rapid reconstruction of magnetic resonance images.
- each unit in the model training device can be separately or completely combined into one or several other units, or some units can be further divided into functionally smaller sub-units. This can achieve the same operation without affecting the technical effects of the embodiments of the present invention.
- the above-mentioned units are divided based on logical functions. In practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present invention, the model training device may also include other units; in practical applications, these functions may be implemented with the assistance of other units and by multiple units in cooperation.
- a general-purpose computing device, such as a computer, including a central processing unit (CPU), random access memory (RAM), read-only memory (ROM), and other processing and storage elements
- the computer program may be recorded on, for example, a computer-readable recording medium, and loaded into the above-mentioned computing device through the computer-readable recording medium, and run in it.
- the fourth embodiment of the present invention provides a magnetic resonance device, including:
- one or more processors;
- a storage device for storing one or more programs;
- when the one or more programs are executed by the one or more processors, the one or more processors implement the method for rapid magnetic resonance image reconstruction described in the second embodiment.
- the device includes a processor 201, a memory 202, an input device 203, and an output device 204; the number of processors 201 in the device can be one or more, and one processor 201 is taken as an example.
- the processor 201, the memory 202, the input device 203, and the output device 204 may be connected by a bus or in other ways; in Fig. 6, connection by a bus is taken as an example.
- the memory 202 can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the neural network model determination method in the first embodiment of the present invention or to the image reconstruction method in the embodiments of the present invention.
- the processor 201 executes various functional applications and data processing of the device by running the software programs, instructions, and modules stored in the memory 202, that is, realizes the above-mentioned magnetic resonance image reconstruction method.
- the memory 202 may mainly include a program storage area and a data storage area.
- the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the terminal.
- the memory 202 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
- the memory 202 may further include memory located remotely from the processor 201; these remote memories may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
- the input device 203 can be used to receive input digital or character information, and generate key signal input related to user settings and function control of the device.
- the output device 204 may include a display device such as a display screen, for example, a display screen of a user terminal.
- the fifth embodiment of the present invention provides a computer storage medium storing one or more first instructions suitable for being loaded by a processor to execute the model training method in the foregoing embodiments; or the computer storage medium stores one or more second instructions suitable for being loaded by the processor to execute the neural network determination method or the image reconstruction method in the foregoing embodiments.
- the program can be stored in a computer-readable storage medium.
- the storage medium includes read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electronically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage, tape storage, or any other computer-readable medium that can be used to carry or store data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/125460 WO2021119875A1 (fr) | 2019-12-15 | 2019-12-15 | Procédé et appareil d'imagerie par résonance magnétique rapide basés sur une recherche d'architecture neuronale |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021119875A1 (fr) | 2021-06-24 |
Family
ID=76476509
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/125460 WO2021119875A1 (fr) | 2019-12-15 | 2019-12-15 | Procédé et appareil d'imagerie par résonance magnétique rapide basés sur une recherche d'architecture neuronale |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2021119875A1 (fr) |
- 2019-12-15 WO PCT/CN2019/125460 patent/WO2021119875A1/fr active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120099774A1 (en) * | 2010-10-21 | 2012-04-26 | Mehmet Akcakaya | Method For Image Reconstruction Using Low-Dimensional-Structure Self-Learning and Thresholding |
CN109325985A (zh) * | 2018-09-18 | 2019-02-12 | 上海联影智能医疗科技有限公司 | Magnetic resonance image reconstruction method, apparatus, and computer-readable storage medium |
CN109712208A (zh) * | 2018-12-13 | 2019-05-03 | 深圳先进技术研究院 | Large-field-of-view magnetic resonance scan image reconstruction method and apparatus based on deep learning |
CN110570486A (zh) * | 2019-08-23 | 2019-12-13 | 清华大学深圳研究生院 | Undersampled magnetic resonance image reconstruction method based on deep learning |
Non-Patent Citations (2)
Title |
---|
XIAO TAOHUI; GUO JIAN; ZHAO TAO; WANG SHANSHAN; LIANG DONG: "Fast magnetic resonance imaging with deep learning and design of undersampling trajectory", JOURNAL OF IMAGE AND GRAPHICS, ZHONGGUO TUXIANG TUXING XUEHUI, CN, vol. 23, no. 2, 1 January 2018 (2018-01-01), CN, pages 194-208, XP055822195, ISSN: 1006-8961, DOI: 10.11834/jig.170274 * |
XIAO TAOHUI: "Research on Fast Magnetic Resonance Imaging Based on Deep Learning", BASIC SCIENCES, CHINA MASTER’S THESES FULL-TEXT DATABASE, 1 June 2018 (2018-06-01), XP055822194 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113722902A (zh) * | 2021-08-23 | 2021-11-30 | 西安电子科技大学 | Method for estimating best-fit parameters of a shaped reflector antenna based on neural networks |
CN115760777A (zh) * | 2022-11-21 | 2023-03-07 | 脉得智能科技(无锡)有限公司 | Hashimoto's thyroiditis diagnosis system based on neural architecture search |
CN115760777B (zh) * | 2022-11-21 | 2024-04-30 | 脉得智能科技(无锡)有限公司 | Hashimoto's thyroiditis diagnosis system based on neural architecture search |
CN116628457A (zh) * | 2023-07-26 | 2023-08-22 | 武汉华康世纪医疗股份有限公司 | Method and apparatus for detecting harmful gases during operation of magnetic resonance equipment |
CN116628457B (zh) * | 2023-07-26 | 2023-09-29 | 武汉华康世纪医疗股份有限公司 | Method and apparatus for detecting harmful gases during operation of magnetic resonance equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021119875A1 (fr) | Procédé et appareil d'imagerie par résonance magnétique rapide basés sur une recherche d'architecture neuronale | |
CN107610194B (zh) | 基于多尺度融合cnn的磁共振图像超分辨率重建方法 | |
CN107680657B (zh) | 医学扫描仪自学优化临床协议和图像采集 | |
EP3785231A1 (fr) | Amélioration d'image à l'aide de réseaux adverses génératifs | |
JP2021520882A (ja) | 深層学習を使用して磁気共鳴撮像を向上させるためのシステムおよび方法 | |
US11250543B2 (en) | Medical imaging using neural networks | |
US10878311B2 (en) | Image quality-guided magnetic resonance imaging configuration | |
CN110163260A (zh) | 基于残差网络的图像识别方法、装置、设备及存储介质 | |
US20200026967A1 (en) | Sparse mri data collection and classification using machine learning | |
US10627470B2 (en) | System and method for learning based magnetic resonance fingerprinting | |
CN111870245B (zh) | 一种跨对比度引导的超快速核磁共振成像深度学习方法 | |
CN114450599B (zh) | 麦克斯韦并行成像 | |
US10871535B2 (en) | Magnetic resonance fingerprinting optimization in magnetic resonance imaging | |
CN112767504A (zh) | 用于图像重建的系统和方法 | |
CN110992440B (zh) | 弱监督磁共振快速成像方法和装置 | |
US20200327379A1 (en) | Fastestimator healthcare ai framework | |
CN111063000B (zh) | 基于神经网络结构搜索的磁共振快速成像方法和装置 | |
WO2021184195A1 (fr) | Procédé de reconstruction d'image médicale, et procédé et appareil d'apprentissage de réseau de reconstruction d'image médicale | |
US10527695B2 (en) | Systems and methods for efficient magnetic resonance fingerprinting scheduling | |
CN110189302A (zh) | 脑图像分析方法、计算机设备和可读存储介质 | |
CN115330669A (zh) | 预测解剖结构的疾病量化参数的计算机实现的方法、系统及存储介质 | |
Huang et al. | Enhanced MRI reconstruction network using neural architecture search | |
CN117011673A (zh) | 基于噪声扩散学习的电阻抗层析成像图像重建方法和装置 | |
CN111815558A (zh) | 一种医学图像处理系统、方法及计算机存储介质 | |
US20200400769A1 (en) | Contrast and/or system independent motion detection for magnetic resonance imaging |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19956891 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19956891 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20.01.2023) |