WO2021097594A1 - Quick imaging model training method and apparatus, and server - Google Patents
- Publication number: WO2021097594A1 (PCT/CN2019/119097)
- Authority: WIPO (PCT)
- Prior art keywords: under, data, training, sampling, mask
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
Definitions
- This application relates to the technical field of magnetic resonance scanning imaging, in particular to a training method, device and server for a fast imaging model.
- Magnetic resonance imaging can provide a wealth of anatomical and functional information, which has made it widely used in the medical field.
- In order to perform magnetic resonance imaging, the patient needs to undergo a clinical magnetic resonance scan. During the scan, the patient needs to maintain a single posture for a long time, resulting in a poor patient experience. Therefore, the speed of magnetic resonance imaging needs to be accelerated.
- In a real scenario, the magnetic resonance scanner needs to sample the data at the Nyquist sampling frequency to ensure that the data can be restored without distortion.
- In the related art, rapid magnetic resonance imaging methods are mainly constructed based on deep learning.
- The main steps of imaging with such a method are: collect only part of the data by sampling in a manner that does not satisfy the Nyquist sampling theorem (high-factor retrospective under-sampling); perform a zero-filling operation on the obtained under-sampled data to obtain a zero-filled image; and input the zero-filled image into a deep learning network, which outputs the restored high-definition image after processing.
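The zero-filling baseline described above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the claimed method; the function name `zero_filled_reconstruction` and the 2D single-coil setting are assumptions:

```python
import numpy as np

def zero_filled_reconstruction(kspace_full, mask):
    # Retrospective under-sampling: keep only the K-space samples selected
    # by the binary mask; unsampled positions remain zero ("zero-filling").
    kspace_zero_filled = kspace_full * mask
    # The zero-filled image is the inverse Fourier transform of the
    # zero-filled K-space; it would then be fed to the deep learning network.
    return np.fft.ifft2(kspace_zero_filled)
```

With a mask of all ones (full sampling) the function simply inverts the Fourier transform; with a sparse mask the result shows the aliasing artifacts that the network is trained to remove.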
- Because the fast imaging method constructed based on deep learning cannot learn the under-sampling mask used for data sampling, the under-sampling mask cannot be optimized; moreover, such a method considers only channel attention, so its imaging effect is not good.
- One of the objectives of the embodiments of the present application is to provide a fast imaging model training method, device, and server, aiming to solve the problems that the under-sampling mask cannot be optimized and the imaging time is long.
- a training method for a fast imaging model including:
- inputting the training data into a fast imaging model, performing feature extraction on the training data according to the multi-scale information of the image and the attention mechanism through N multi-granularity attention modules, and fusing the feature maps extracted by each of the multi-granularity attention modules; N ≥ 1;
- the updated parameters and the updated under-sampling mask are used for forward calculation to output the next imaging data.
- the step of inputting the training data into the fast imaging model, performing feature extraction on the training data according to the multi-scale information of the image and the attention mechanism through the N multi-granularity attention modules, and fusing the feature maps extracted by each of the multi-granularity attention modules includes:
- for each of the multi-granularity attention modules, performing feature extraction on the initialization feature data according to several preset image scales, and fusing the extracted feature maps;
- the step of calculating the gradient inversely according to the imaging data and the target label to update the parameters of the fast imaging model and the under-sampling mask through the gradient includes:
- the step of calculating the gradient inversely according to the imaging data and the target label to update the parameters of the fast imaging model and the under-sampling mask through the gradient further includes:
- the fast imaging model includes a convolutional layer that learns the under-sampling mask, and the convolution kernel and parameters of the convolutional layer are set correspondingly according to the under-sampling mask; the initial value of the under-sampling mask includes a preset number of low-frequency sampling strips and randomly sampled high-frequency sampling strips.
- the rule for binarizing the updated continuous mask is: any element in the updated continuous mask that is greater than a preset threshold is set to 1, and any element that is less than the preset threshold is set to 0; the preset threshold is a preset percentage of the maximum value in the updated continuous mask, and the preset percentage is set according to the imaging acceleration factor.
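As a hedged illustration of this rule (the helper name `binarize_mask` and the use of 1/acceleration as the preset percentage are assumptions, based on the correspondence given later in the description):

```python
import numpy as np

def binarize_mask(cont_mask, acceleration):
    # The preset threshold is a preset percentage of the maximum value of the
    # updated continuous mask; here the percentage is taken as 1/acceleration
    # (e.g. a 4x acceleration factor gives 25%).
    threshold = cont_mask.max() * (1.0 / acceleration)
    return (cont_mask > threshold).astype(np.uint8)
```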
- the step of under-sampling the image scanned by magnetic resonance according to the under-sampling mask to obtain the training data includes:
- performing inverse Fourier transform on the image scanned by magnetic resonance to obtain full-sampling image domain data, which is used as the target label.
- a training device for a fast imaging model including:
- the training data generation module is used for under-sampling the image scanned by MRI according to the under-sampling mask during each iteration of the model training to obtain training data;
- the model training module is used to input the training data into the fast imaging model for model training to obtain imaging data;
- the feature extraction module is used to input the training data into the fast imaging model, perform feature extraction on the training data according to the multi-scale information and attention mechanism of the image through N multi-granularity attention modules, and fuse the feature maps extracted by each of the multi-granularity attention modules; N ≥ 1;
- the image fusion module is used to perform image reconstruction on the fused feature map and output the imaging data;
- the forward calculation module is used for forward calculation using the updated parameters and the updated under-sampling mask to output the next imaging data.
- the feature extraction module includes:
- the feature extraction unit is used to extract the initialization feature data of the training data.
- each of the multi-granularity attention modules includes:
- the multi-scale densely connected feature fusion unit is used to perform feature extraction on the initial feature data according to several preset image scales, and fuse several extracted feature maps;
- the feature refinement unit based on the multi-granularity attention mechanism is used to segment the fused feature map into several regional images with different attention weights through the multi-granularity attention mechanism;
- the fusion image unit is used to fuse all the region images to obtain a feature map after feature refinement.
- the parameter and under-sampling mask update module includes:
- a gradient calculation unit configured to reversely calculate a gradient according to the imaging data and the target label to obtain a gradient matrix;
- the model parameter update unit is configured to update the attention weight given to the plurality of regional images by the multi-granularity attention mechanism according to the gradient matrix.
- the parameter and under-sampling mask update module further includes:
- an under-sampling mask updating unit configured to generate a continuous mask according to the under-sampling mask, and add the continuous mask and the gradient matrix to obtain an updated continuous mask;
- the mask binarization unit is used to binarize the updated continuous mask to obtain an updated under-sampling mask.
- a server including: a memory, a processor, and a computer program stored in the memory and capable of being run on the processor.
- when the processor executes the computer program, the training method of the fast imaging model in the first aspect is implemented.
- the training method, device, and server of a fast imaging model provided by the embodiments of the application have the following beneficial effects: the image scanned by magnetic resonance is under-sampled according to the under-sampling mask during each iteration of the model training to obtain training data; the training data is input into the fast imaging model, feature extraction is performed on the training data according to the multi-scale information and attention mechanism of the image through N multi-granularity attention modules, and the feature maps extracted by each of the multi-granularity attention modules are fused;
- the fast imaging model includes a neural network layer that learns the under-sampling mask; the gradient is calculated backwards according to the imaging data and the target label to update the parameters of the fast imaging model and the under-sampling mask through the gradient; and the updated parameters and the updated under-sampling mask are used for forward calculation to output the next imaging data.
- By embedding the neural network that learns the under-sampling mask into the fast imaging model and training them iteratively together, the under-sampling mask and the model parameters are optimized according to the gradient calculated inversely from the imaging data and the target label, thereby improving the imaging rate of the fast imaging model.
- The fast imaging model includes N multi-granularity attention modules that extract features of the training data according to the multi-scale information and attention mechanism of the image, making full use of the multi-granularity information and regional attention of the image to enhance the representation of features in the imaging data, thereby improving the imaging effect.
- FIG. 1 is a schematic flowchart of a method for training a fast imaging model provided in Embodiment 1 of the present application;
- FIG. 2 is a schematic structural diagram of a fast imaging model provided by Embodiment 1 of the present application;
- FIG. 3 is a schematic structural diagram of a multi-granularity attention module provided in Embodiment 1 of the present application;
- FIG. 4 is a schematic structural diagram of a feature refinement part based on a multi-granularity attention mechanism provided by Embodiment 1 of the present application;
- FIG. 5 is a schematic flowchart of a training method for a fast imaging model provided in Embodiment 2 of the present application;
- FIG. 6 is a schematic diagram of an embodiment of a magnetic resonance scanning imaging process provided in Embodiment 2 of the present application.
- FIG. 7 is a schematic structural diagram of a training device for a fast imaging model provided in Embodiment 3 of the present application.
- FIG. 8 is a schematic structural diagram of a server provided in Embodiment 4 of the present application.
- it cannot be construed as indicating or implying that the element must have a specific orientation or be constructed and operated in a specific orientation, and therefore cannot be construed as a limitation of the present application.
- the specific meaning of the above terms can be understood according to specific conditions.
- the terms “first” and “second” are only used for ease of description, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features.
- the meaning of "plurality" means two or more than two, unless otherwise specifically defined.
- Referring to FIG. 1, it is a schematic flowchart of a training method for a fast imaging model provided in Embodiment 1 of the present application.
- This embodiment can be applied to the application scenario of magnetic resonance scanning imaging.
- the method can be executed by a training device of a fast imaging model.
- the device can be a server, a smart terminal, a tablet, or a PC, etc.; in this embodiment of the application, the training device of the fast imaging model is taken as the execution subject for explanation, and the method specifically includes the following steps:
- the scan data, that is, the full-sampling K-space data, is obtained by magnetic resonance scanning.
- the magnetic resonance scanner needs to sample the scanned data at the Nyquist sampling frequency to generate images to ensure that the data can be restored without distortion.
- the process of sampling the scanned data at the Nyquist sampling frequency is slow, resulting in a long imaging time.
- it may be considered to collect only part of the scanned data, and sample the data at a sampling rate lower than the Nyquist sampling frequency, that is, under-sampling.
- There are many under-sampling methods in the related art.
- A commonly used method is 1D random (one-dimensional random) sampling, which uses an under-sampling matrix whose number of columns is consistent with the length of the phase encoding direction of the K-space image (scan data), that is, the under-sampling mask; the mask is multiplied with the scanned data to obtain an under-sampled image. Therefore, the data required for imaging can be obtained by under-sampling the scan data according to the preset under-sampling mask.
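A minimal sketch of 1D random under-sampling, assuming a 2D K-space array with phase encoding along the columns (the function name and signature are illustrative):

```python
import numpy as np

def apply_1d_random_mask(kspace, num_sampled, seed=None):
    # Build a 1D mask with as many entries as there are phase-encoding
    # columns, set randomly chosen columns to 1, and multiply it with the
    # scan data; broadcasting zeroes out the unsampled columns.
    rng = np.random.default_rng(seed)
    n_cols = kspace.shape[1]
    mask = np.zeros(n_cols)
    mask[rng.choice(n_cols, size=num_sampled, replace=False)] = 1
    return kspace * mask, mask
```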
- If this iterative training process is the first model iterative training process, the image scanned by magnetic resonance is under-sampled according to the preset initial value of the under-sampling mask to obtain the training data; if this iterative training process is not the first model iterative training process, the image scanned by magnetic resonance is under-sampled according to the under-sampling mask updated after the previous iterative training to obtain the training data.
- Referring to FIG. 2, it is a schematic structural diagram of the fast imaging model.
- the fast imaging model can be a multi-granularity attention network, which mainly includes two parts: a feature extraction part 21 and a reconstruction part 22.
- the initial feature data of the training data can be extracted through a convolutional layer in the feature extraction part 21 first.
- the feature extraction part of the fast imaging model also includes N multi-granularity attention modules 23, where N ≥ 1; the parameters in each multi-granularity attention module 23 are different, so as to add more nonlinear operations and make the result more optimized.
- N can be 5.
- the initial feature data extracted through a convolutional layer in the feature extraction part 21 is input into the first multi-granularity attention module 23, which performs feature extraction on the initial feature data according to the multi-scale information of the image and the attention mechanism to obtain a feature image; the feature image is then input into the next multi-granularity attention module 23, until all N multi-granularity attention modules 23 have been traversed.
- the feature extraction part of the fast imaging model also includes a connection layer, through which the feature maps extracted by each of the multi-granularity attention modules are fused together. Since fusing the feature maps extracted by each of the multi-granularity attention modules produces a feature map with many channels, a convolutional layer in the feature extraction part needs to modify the number of channels of the feature map. The feature map with the modified number of channels also needs to undergo a global residual calculation, to prevent the gradient from disappearing when the model network layers are too deep and the parameters become difficult to train. The calculated feature map is then input into the reconstruction part 22 of the fast imaging model to generate an image, which enhances the representation of the features in the generated image.
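The skeleton of this feature extraction part can be sketched in PyTorch. This is a hypothetical sketch: plain 3×3 convolutions stand in for the multi-granularity attention modules, and the channel count and module count are arbitrary:

```python
import torch
import torch.nn as nn

class FeatureExtractionSkeleton(nn.Module):
    # Placeholder convolutions stand in for the N multi-granularity
    # attention modules described in the text.
    def __init__(self, channels=64, n_modules=5):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)  # initial feature extraction
        self.blocks = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(n_modules)
        )
        # "connection layer": the N module outputs are concatenated, then a
        # 1x1 convolution modifies the channel count back down
        self.reduce = nn.Conv2d(channels * n_modules, channels, 1)

    def forward(self, x):
        f0 = self.head(x)
        feats, f = [], f0
        for block in self.blocks:
            f = block(f)
            feats.append(f)
        fused = self.reduce(torch.cat(feats, dim=1))
        return fused + f0  # global residual connection against gradient vanishing
```

The output of this skeleton would then be passed to the reconstruction part.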
- for each multi-granularity attention module, the process of performing feature extraction on the initial feature data according to the multi-scale information of the image and the attention mechanism is: perform feature extraction on the initial feature data according to several preset image scales and fuse the extracted feature maps; divide the fused feature map into several regional images with different attention weights through the multi-granularity attention mechanism; and fuse all the regional images to obtain the feature map after feature refinement.
- each multi-granularity attention module may include two parts: a feature fusion part based on multi-scale dense connection and a feature refinement part based on a multi-granularity attention mechanism; in addition, there is a local residual connection in each multi-granularity attention module.
- Referring to Figure 3, it is a schematic diagram of the structure of the multi-granularity attention module. Since visual information of different scales is helpful for imaging, the feature fusion part based on multi-scale dense connection performs feature extraction on the initial feature data according to several preset image scales, and fuses the several extracted feature maps.
- Each unit has two paths, and each path is equipped with a convolutional layer; the convolutional layer parameters of each unit are set according to the several preset image scales.
- For example, there are 3 units in the feature fusion part based on multi-scale dense connection, and in each unit a convolutional layer with a 3×3 convolution kernel and a convolutional layer with a 5×5 convolution kernel may be used respectively.
- The initial feature data is input into the feature fusion part based on multi-scale dense connection. The initial feature data is convolved through the two convolutional layers in a unit, and the outputs of the two convolutional layers are then fused together through the connection layer, so that the feature maps containing visual information of different scales are integrated together; the fused feature image is input into the next unit in a densely connected manner to continue the convolution calculation, until the 3 units in the feature fusion part based on multi-scale dense connection have been traversed.
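A hedged PyTorch sketch of this feature fusion part, assuming the 3-unit, 3×3/5×5 example above (class names and channel counts are illustrative):

```python
import torch
import torch.nn as nn

class MultiScaleUnit(nn.Module):
    # Two parallel paths with 3x3 and 5x5 kernels capture different scales;
    # their outputs are concatenated by the "connection layer".
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.path3 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.path5 = nn.Conv2d(in_ch, out_ch, 5, padding=2)

    def forward(self, x):
        return torch.cat([self.path3(x), self.path5(x)], dim=1)

class DenseFusion(nn.Module):
    # Dense connection: each unit receives the concatenation of the input
    # and all previous unit outputs.
    def __init__(self, ch=32, n_units=3):
        super().__init__()
        self.units = nn.ModuleList()
        in_ch = ch
        for _ in range(n_units):
            self.units.append(MultiScaleUnit(in_ch, ch))
            in_ch += 2 * ch  # each unit adds 2*ch channels
        # 1x1 convolution before the refinement part restores the channel count
        self.bottleneck = nn.Conv2d(in_ch, ch, 1)

    def forward(self, x):
        feats = [x]
        for unit in self.units:
            feats.append(unit(torch.cat(feats, dim=1)))
        return self.bottleneck(torch.cat(feats, dim=1))
```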
- The several fused feature maps are input into the feature refinement part based on the multi-granularity attention mechanism after convolution with a 1×1 convolution kernel.
- the feature refinement part based on the multi-granularity attention mechanism may include two parts: the squeeze excitation operation and the multi-granularity attention mechanism.
- Referring to Figure 4, it is a schematic diagram of the structure of the feature refinement part based on the multi-granularity attention mechanism.
- the multi-granularity attention mechanism divides the input feature maps in a variety of preset ways, and each division way forms a corresponding number of regional feature maps.
- each regional image with an attention weight also needs to go through the squeeze excitation operation, that is, through the corresponding global pooling and then through two convolutional layers with 1×1 convolution kernels, to obtain the learned channel weights W1 and W2; it also needs to go through an activation function calculation and a dot product operation to obtain the final attention weight value.
- the activation function may be a Sigmoid activation function. All the regional images with their final attention weight values are fused to obtain the feature map after feature refinement.
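A minimal PyTorch sketch of the squeeze excitation operation, assuming each regional image is given as a channel tensor. The reduction ratio and the ReLU between the two 1×1 convolutions are common choices and are assumptions here; the text only specifies two 1×1 convolutional layers, a Sigmoid activation, and a dot product:

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    # Squeeze-excitation sketch: global pooling, two 1x1 convs producing the
    # learned channel weights (W1, W2 in the text), a sigmoid, and a
    # channel-wise product with the input regional feature map.
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # "squeeze": global pooling
        self.w1 = nn.Conv2d(ch, ch // reduction, 1)
        self.w2 = nn.Conv2d(ch // reduction, ch, 1)
        self.act = nn.ReLU(inplace=True)
        self.gate = nn.Sigmoid()             # activation producing attention weights

    def forward(self, x):
        w = self.gate(self.w2(self.act(self.w1(self.pool(x)))))
        return x * w                         # dot product: channel-wise re-weighting
```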
- S130 Perform image reconstruction on the fused feature map, and output the imaging data
- the reconstruction part may be composed of an up-sampling layer and a convolutional layer. Since the feature map after feature refinement has only half the dimensions of the final real image (target label), the feature map after feature refinement is up-sampled through the up-sampling layer to restore it to the same size as the real image (target label).
- the up-sampled feature map is then convolved by the convolutional layer to obtain imaging data.
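A sketch of this reconstruction part under stated assumptions (bilinear up-sampling and a single output channel are illustrative choices; the text only specifies an up-sampling layer followed by a convolutional layer):

```python
import torch
import torch.nn as nn

class Reconstruction(nn.Module):
    # The up-sampling layer doubles the spatial size (the refined feature map
    # has half the target dimensions), and a convolutional layer maps the
    # up-sampled features to the output imaging data.
    def __init__(self, ch=32, out_ch=1):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Conv2d(ch, out_ch, 3, padding=1)

    def forward(self, x):
        return self.conv(self.up(x))
```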
- the parameters and under-sampling masks of the fast imaging model are determined based on the imaging data output from this model training.
- the gradient can be calculated inversely based on the imaging data output from this model training and the preset target label, so as to update the parameters of the fast imaging model and the under-sampling mask according to the calculated gradient.
- the process of updating the parameters of the fast imaging model according to the gradient calculated inversely between the imaging data and the target label may be: calculating the gradient inversely according to the imaging data and the target label to obtain a gradient matrix; and updating, according to the gradient matrix, the attention weights given to the several regional images by the multi-granularity attention mechanism.
- the parameters of the fast imaging model and the under-sampling mask are updated according to the gradient calculated inversely from the output imaging data and the target label, and then this round of the model iterative training process is completed.
- the updated parameters and the under-sampling mask are used for forward calculation to perform the next round of model iterative training.
- In summary, the training method of a fast imaging model obtains training data by under-sampling the image scanned by magnetic resonance according to the under-sampling mask during each iteration of the model training, and inputs the training data into the fast imaging model; the fast imaging model uses N multi-granularity attention modules to perform feature extraction on the training data according to the multi-scale information and attention mechanism of the image, and fuses the feature maps extracted by each of the multi-granularity attention modules, N ≥ 1; image reconstruction is performed on the fused feature map and the imaging data is output;
- the fast imaging model includes a neural network layer that learns the under-sampling mask; the gradient is calculated inversely according to the imaging data and the target label to update the parameters of the fast imaging model and the under-sampling mask through the gradient; the updated parameters and the updated under-sampling mask are used for forward calculation to output the next imaging data.
- By embedding the neural network that learns the under-sampling mask into the fast imaging model and training them iteratively together, the under-sampling mask and the model parameters are optimized according to the gradient calculated inversely from the imaging data and the target label, thereby improving the imaging rate of the fast imaging model.
- The fast imaging model includes N multi-granularity attention modules that extract features of the training data according to the multi-scale information and attention mechanism of the image, making full use of the multi-granularity information and regional attention of the image to enhance the representation of features in the imaging data, thereby improving the imaging effect.
- FIG. 5 is a schematic flowchart of the training method of the fast imaging model provided in the second embodiment of the present application.
- this embodiment also provides a method of implementing the learning of the under-sampling mask by embedding the neural network that learns the under-sampling mask into the fast imaging model and training them iteratively together.
- the method specifically includes:
- A fast imaging model constructed through deep learning generates images based on data obtained by under-sampling. If the imaging effect is not good, the parameters of the fast imaging model can be optimized through multiple rounds of iterative training. However, no matter how the fast imaging model is optimized, the under-sampled data input to the model is always obtained by under-sampling according to the initial under-sampling mask, and the under-sampling mask cannot be simultaneously optimized according to the imaging effect. Since the under-sampling mask is related to the imaging speed of the fast imaging model, the inability to update and optimize the under-sampling mask makes the imaging time long.
- the neural network layer that learns the under-sampling mask is embedded in the fast imaging model.
- the fast imaging model is iteratively trained, the image scanned by magnetic resonance is under-sampled according to the under-sampling mask to obtain training data.
- the under-sampling mask can be iteratively trained with the fast imaging model to generate a learnable under-sampling mask.
- the fast imaging model includes a convolutional layer that learns an under-sampling mask, and the convolution kernel and parameters of the convolutional layer are correspondingly set according to the elements contained in the under-sampling mask;
- the initial value of the under-sampling mask includes a preset number of low-frequency sampling strips and randomly sampled high-frequency sampling strips.
- the initial value corresponding to the preset under-sampling mask includes a preset number of low-frequency sampling strips and randomly sampled high-frequency sampling strips.
- the under-sampling mask is a binarized mask (that is, it contains only the two values 0 and 1); the elements corresponding to the sampling strips in the under-sampling mask are 1, and the remaining elements are 0.
- The specific process of under-sampling the image scanned by magnetic resonance according to the under-sampling mask to obtain the training data may be: under-sampling the image scanned by magnetic resonance (the full-sampled K-space data) according to the under-sampling mask to obtain under-sampled K-space data; and performing inverse Fourier transform on the under-sampled K-space data to obtain under-sampled image domain data, which is used as the training data.
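The training-pair generation just described can be sketched as follows (the function name and 2D single-coil layout are assumptions); the target label is the full-sampling image domain data mentioned elsewhere in the description:

```python
import numpy as np

def make_training_pair(kspace_full, mask):
    # kspace_full: full-sampled K-space data, shape (H, W), complex
    # mask:        binary under-sampling mask broadcastable to (H, W)
    kspace_under = kspace_full * mask      # under-sample the K-space data
    x = np.fft.ifft2(kspace_under)         # under-sampled image domain -> training data
    y = np.fft.ifft2(kspace_full)          # full-sampled image domain -> target label
    return x, y
```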
- S220 Input the training data into a fast imaging model for model training, to obtain imaging data;
- FIG. 6 is a schematic diagram of an embodiment of the magnetic resonance scanning imaging process.
- After the image scanned by magnetic resonance is under-sampled according to the under-sampling mask, the training data is obtained. The training data is input into the fast imaging model for model training to obtain imaging data; that is, imaging calculations are performed according to the training data to generate a reconstructed image.
- the parameters and under-sampling masks of the fast imaging model are determined based on the imaging data output from this model training.
- the gradient can be calculated inversely based on the imaging data output from this model training and the preset target label, so as to update the parameters of the fast imaging model and the under-sampling mask according to the calculated gradient.
- The process of calculating the gradient backwards according to the imaging data output by this model training and the preset target label to update the under-sampling mask may be: calculating the gradient backwards according to the imaging data and the target label to obtain a gradient matrix; generating a continuous mask according to the under-sampling mask, and adding the continuous mask and the gradient matrix to obtain an updated continuous mask; and binarizing the updated continuous mask to obtain the updated under-sampling mask.
- the full-sampling image domain data can be obtained by performing inverse Fourier transform on the image scanned by magnetic resonance, which can be used as a preset target label.
- After the gradient is calculated inversely to obtain the gradient matrix, a continuous mask needs to be generated according to the current under-sampling mask before the current under-sampling mask is updated according to the gradient matrix.
- If this model training process is the first model iterative training process, the current under-sampling mask is the initial value, and the sampling strip positions of the continuous mask generated according to the current under-sampling mask can be preset to be consistent with those of the current under-sampling mask.
- the initial value at the sampling strip positions is drawn from a uniform distribution U(0.5, 1), and the initial value at the non-sampling strip positions is drawn from another uniform distribution U(0, 0.5).
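This initialization can be sketched directly (the helper name is illustrative); note that binarizing the result at 0.5 would reproduce the original binary under-sampling mask:

```python
import numpy as np

def init_continuous_mask(binary_mask, seed=None):
    # Positions on sampling strips (mask == 1) are drawn from U(0.5, 1),
    # the remaining positions from U(0, 0.5).
    rng = np.random.default_rng(seed)
    cont = rng.uniform(0.0, 0.5, size=binary_mask.shape)
    cont[binary_mask == 1] = rng.uniform(0.5, 1.0, size=int(binary_mask.sum()))
    return cont
```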
- the generated continuous mask and the calculated gradient matrix are equal in size, and each element in the gradient matrix is the gradient by which the corresponding element in the continuous mask needs to be updated.
- The generated continuous mask and the gradient matrix are added to obtain the updated continuous mask; the updated continuous mask is binarized to obtain the updated under-sampling mask, so that the updated under-sampling mask can be used in the next round of the fast imaging model training process.
- the rule for binarizing the updated continuous mask is: any element in the updated continuous mask that is greater than a preset threshold is set to 1, and any element that is less than the preset threshold is set to 0; the preset threshold is a preset percentage of the maximum value in the updated continuous mask, and the preset percentage is set according to the imaging acceleration factor.
- The updated continuous mask is binarized according to the above rule: the value of any element in the updated continuous mask greater than the preset threshold is set to 1, and the value of any element smaller than the preset threshold is set to 0.
- The preset threshold is a preset percentage of the maximum value in the updated continuous mask, and the preset percentage is set according to the imaging acceleration factor.
- The corresponding relationship between the imaging acceleration factor and the percentage may be: an acceleration factor of 4 corresponds to 25%, 8 corresponds to 12.5%, 12 corresponds to 8.3%, and 16 corresponds to 6.25% (that is, the percentage is approximately 100% divided by the acceleration factor), so that the imaging speed is improved by updating and optimizing the under-sampling mask.
- S240 Perform forward calculation using the updated parameters and the updated under-sampling mask to output the next imaging data.
- the parameters of the fast imaging model and the under-sampling mask are updated according to the gradient calculated inversely from the output imaging data and the target label, and then this round of the model iterative training process is completed.
- the updated parameters and the under-sampling mask are used for forward calculation to perform the next round of model iterative training.
- FIG. 7 is a schematic structural diagram of a training device for a fast imaging model provided in Embodiment 3 of the present application.
- a training device 7 for a fast imaging model, which includes:
- the training data generation module 701 is used for under-sampling the image scanned by magnetic resonance according to the under-sampling mask during each iteration of the model training to obtain training data;
- the training data generation module 701 includes:
- a data processing unit configured to perform inverse Fourier transform on the under-sampled K-space data to obtain under-sampled image domain data as the training data;
- the target label generating unit is configured to perform inverse Fourier transform on the image scanned by magnetic resonance to obtain full-sampled image domain data, which is used as the target label.
- the feature extraction module 702 is used to input the training data into the fast imaging model, perform feature extraction on the training data according to the multi-scale information and attention mechanism of the image through N multi-granularity attention modules, and fuse the feature maps extracted by each of the multi-granularity attention modules;
- the image fusion module 703 is configured to perform image reconstruction on the fused feature map, and output the imaging data.
- the feature extraction module 702 includes:
- the feature extraction unit is used to extract the initialization feature data of the training data.
- each multi-granularity attention module includes:
- the multi-scale densely connected feature fusion unit is used to perform feature extraction on the initial feature data according to several preset image scales, and fuse several extracted feature maps;
- the feature refinement unit based on the multi-granularity attention mechanism is used to segment the fused feature map into several regional images with different attention weights through the multi-granularity attention mechanism;
- the image fusion unit is used to fuse all the regional images to obtain a feature map after feature refinement.
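The region-splitting and fusion step performed by the two units above can be sketched as follows. The uniform grid split and the per-region scalar weights are our simplifying assumptions for illustration; the application does not specify how regions are delimited or how the weights are parameterized.

```python
import numpy as np

def refine_with_region_attention(feat, n_rows=2, n_cols=2, weights=None):
    """Split a fused feature map into regions, scale each region by its
    attention weight, and fuse (re-assemble) the regions."""
    h, w = feat.shape
    if weights is None:
        weights = np.ones((n_rows, n_cols))  # default: uniform attention
    out = np.empty_like(feat)
    rh, rw = h // n_rows, w // n_cols        # region height and width
    for i in range(n_rows):
        for j in range(n_cols):
            r, c = i * rh, j * rw
            out[r:r + rh, c:c + rw] = weights[i, j] * feat[r:r + rh, c:c + rw]
    return out
```

In training, the weights would be learnable parameters updated by the gradient, as the update module below describes.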
- the parameter and under-sampling mask update module 704 is configured to calculate a gradient backward according to the imaging data and the target label, so as to update the parameters of the fast imaging model and the under-sampling mask through the gradient;
- the parameter and under-sampling mask update module 704 includes:
- a gradient calculation unit configured to calculate a gradient backward according to the imaging data and the target label to obtain a gradient matrix;
- a model parameter update unit configured to update, according to the gradient matrix, the attention weights given to the several regional images by the multi-granularity attention mechanism;
- an under-sampling mask updating unit configured to generate a continuous mask according to the under-sampling mask, and add the continuous mask and the gradient matrix to obtain an updated continuous mask;
- a mask binarization unit configured to binarize the updated continuous mask to obtain an updated under-sampling mask.
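The mask updating and binarization units above can be sketched as follows, assuming the thresholding rule stated in claim 6: elements above a threshold set as a preset percentage of the updated mask's maximum become 1, the rest become 0. The function name and the `percent` parameter are ours; in practice the percentage would be chosen according to the imaging acceleration multiple.

```python
import numpy as np

def update_mask(continuous_mask, grad_matrix, percent=0.5):
    """Add the gradient matrix to the continuous mask, then binarize it
    with a threshold equal to `percent` of the updated mask's maximum."""
    updated = continuous_mask + grad_matrix          # gradient step on the continuous mask
    threshold = percent * updated.max()              # preset percentage of the maximum
    binary = (updated > threshold).astype(np.uint8)  # 1 where sampled, 0 where skipped
    return binary, updated
```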
- the forward calculation module 705 is configured to use the updated parameter and the updated under-sampling mask to perform forward calculation to output the next imaging data.
- the training device for a fast imaging model obtains training data by under-sampling the images scanned by magnetic resonance according to an under-sampling mask during each iteration of model training; the training data is input into the fast imaging model, which uses N multi-granularity attention modules (N ≥ 1) to perform feature extraction on the training data according to the multi-scale information and attention mechanism of the image and fuses the feature maps extracted by each of the multi-granularity attention modules; image reconstruction is performed on the fused feature map and imaging data is output;
- the fast imaging model includes a neural network layer that learns the under-sampling mask; the gradient is calculated backward according to the imaging data and the target label so as to update the parameters of the fast imaging model and the under-sampling mask through the gradient; the updated parameters and the updated under-sampling mask are then used for forward calculation to output the next imaging data.
- by embedding the neural network that learns the under-sampling mask into the fast imaging model and training them iteratively together, the under-sampling mask and the model parameters are optimized according to the gradient calculated backward from the imaging data and the target label, thereby improving the imaging rate of the fast imaging model.
- the fast imaging model includes N multi-granularity attention modules that extract features of the training data according to the multi-scale information and attention mechanism of the image, making full use of the multi-granularity information and regional attention of the image to enhance the representation of features in the imaging data, thereby improving the imaging effect.
- FIG. 8 is a schematic structural diagram of a server provided in Embodiment 4 of the present application.
- the server includes: a processor 1, a memory 2, and a computer program 3 stored in the memory 2 and running on the processor 1, such as a program for a training method of a fast imaging model.
- when the processor 1 executes the computer program 3, the steps in the above-mentioned fast imaging model training method embodiments are implemented, for example, steps S110 to S150 shown in FIG. 1.
- the computer program 3 may be divided into one or more modules, and the one or more modules are stored in the memory 2 and executed by the processor 1 to complete the application.
- the one or more modules may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 3 in the server.
- the computer program 3 can be divided into a training data generation module, a feature extraction module, an image fusion module, a parameter and under-sampling mask update module, and a forward calculation module.
- the specific functions of each module are as follows:
- the training data generation module is used for under-sampling the image scanned by MRI according to the under-sampling mask during each iteration of the model training to obtain training data;
- the feature extraction module is used to input the training data into the fast imaging model, perform feature extraction on the training data according to the multi-scale information and attention mechanism of the image through N multi-granularity attention modules, and fuse the feature maps extracted by each of the multi-granularity attention modules; N ≥ 1;
- the image fusion module is used to reconstruct the image from the fused feature map and output the imaging data;
- the parameter and under-sampling mask update module is used to calculate a gradient backward according to the imaging data and the target label, so as to update the parameters of the fast imaging model and the under-sampling mask through the gradient;
- the forward calculation module is used for forward calculation using the updated parameters and the updated under-sampling mask to output the next imaging data.
- the server may include, but is not limited to, a processor 1, a memory 2, and a computer program 3 stored in the memory 2.
- FIG. 8 is only an example of a server and does not constitute a limitation on the server; it may include more or fewer components than those shown in the figure, a combination of certain components, or different components. For example, the server may also include input and output devices, network access devices, buses, and so on.
- the processor 1 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
- the memory 2 may be an internal storage unit of the server, such as a hard disk or memory of the server.
- the memory 2 may also be an external storage device, such as a plug-in hard disk equipped on the server, a smart media card (SMC), a secure digital (SD) card, a flash card, etc. Further, the memory 2 may also include both an internal storage unit of the server and an external storage device.
- the memory 2 is used to store the computer program and other programs and data required by the fast imaging model training method.
- the memory 2 can also be used to temporarily store data that has been output or will be output.
- the disclosed device/terminal device and method may be implemented in other ways.
- the device/terminal device embodiments described above are merely illustrative.
- the division of the modules or units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
- if the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
- all or part of the processes in the methods of the above-mentioned embodiments of the present application may also be completed by a computer program instructing relevant hardware.
- the computer program can be stored in a computer-readable storage medium; when the program is executed by the processor, the steps of the foregoing method embodiments can be implemented.
- the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
- the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc.
- the content contained in the computer-readable medium can be appropriately added or deleted according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
Abstract
Description
Claims (14)
- A training method for a fast imaging model, characterized in that it comprises: during each iteration of model training, under-sampling the image scanned by magnetic resonance according to an under-sampling mask to obtain training data; inputting the training data into the fast imaging model, performing feature extraction on the training data through N multi-granularity attention modules according to the multi-scale information of the image and the attention mechanism, and fusing the feature map extracted by each of the multi-granularity attention modules, where N ≥ 1; performing image reconstruction on the fused feature map and outputting imaging data; backward-calculating a gradient according to the imaging data and a target label, so as to update the parameters of the fast imaging model and the under-sampling mask through the gradient; and performing forward calculation using the updated parameters and the updated under-sampling mask to output the next imaging data.
- The training method for a fast imaging model according to claim 1, characterized in that inputting the training data into the fast imaging model, performing feature extraction on the training data through N multi-granularity attention modules according to the multi-scale information of the image and the attention mechanism, and fusing the feature map extracted by each of the multi-granularity attention modules comprises: extracting initialization feature data of the training data; for each of the multi-granularity attention modules, performing feature extraction on the initialization feature data according to several preset image scales, and fusing the several extracted feature maps; segmenting the fused feature map into several regional images with different attention weights through a multi-granularity attention mechanism; and fusing all the regional images to obtain a feature-refined feature map.
- The training method for a fast imaging model according to claim 2, characterized in that backward-calculating a gradient according to the imaging data and the target label, so as to update the parameters of the fast imaging model and the under-sampling mask through the gradient, comprises: backward-calculating the gradient according to the imaging data and the target label to obtain a gradient matrix; and updating, according to the gradient matrix, the attention weights given to the several regional images by the multi-granularity attention mechanism.
- The training method for a fast imaging model according to claim 3, characterized in that backward-calculating a gradient according to the imaging data and the target label, so as to update the parameters of the fast imaging model and the under-sampling mask through the gradient, further comprises: generating a continuous mask according to the under-sampling mask, and adding the continuous mask and the gradient matrix to obtain an updated continuous mask; and binarizing the updated continuous mask to obtain an updated under-sampling mask.
- The training method for a fast imaging model according to claim 4, characterized in that the fast imaging model comprises a convolutional layer that learns the under-sampling mask, and the convolution kernel and parameters of the convolutional layer are set correspondingly according to the under-sampling mask; the initial value of the under-sampling mask comprises a preset number of low-frequency sampling strips and randomly sampled high-frequency sampling strips.
- The training method for a fast imaging model according to claim 4, characterized in that the rule for binarizing the updated continuous mask is: any element in the updated continuous mask is set to 1 when it is greater than a preset threshold, and set to 0 when it is less than the preset threshold; wherein the preset threshold is a preset percentage of the maximum value in the updated continuous mask, and the preset percentage is set according to the imaging acceleration multiple.
- The training method for a fast imaging model according to any one of claims 1 to 6, characterized in that under-sampling the image scanned by magnetic resonance according to the under-sampling mask to obtain training data comprises: under-sampling the image scanned by magnetic resonance according to the under-sampling mask to obtain under-sampled K-space data; and performing an inverse Fourier transform on the under-sampled K-space data to obtain under-sampled image domain data as the training data.
- The training method for a fast imaging model according to claim 7, characterized in that an inverse Fourier transform is performed on the image scanned by magnetic resonance to obtain fully sampled image domain data as the target label.
- A training device for a fast imaging model, characterized in that it comprises: a training data generation module, configured to under-sample the image scanned by magnetic resonance according to an under-sampling mask during each iteration of model training to obtain training data; a model training module, configured to input the training data into a fast imaging model for model training to obtain imaging data; a feature extraction module, configured to input the training data into the fast imaging model, perform feature extraction on the training data through N multi-granularity attention modules according to the multi-scale information of the image and the attention mechanism, and fuse the feature map extracted by each of the multi-granularity attention modules, where N ≥ 1; an image fusion module, configured to perform image reconstruction on the fused feature map and output imaging data; and a forward calculation module, configured to perform forward calculation using the updated parameters and the updated under-sampling mask to output the next imaging data.
- The training device for a fast imaging model according to claim 9, characterized in that the feature extraction module comprises: a feature extraction unit, configured to extract initialization feature data of the training data.
- The training device for a fast imaging model according to claim 10, characterized in that each of the multi-granularity attention modules comprises: a multi-scale densely connected feature fusion unit, configured to perform feature extraction on the initialization feature data according to several preset image scales and fuse the several extracted feature maps; a feature refinement unit based on the multi-granularity attention mechanism, configured to segment the fused feature map into several regional images with different attention weights through the multi-granularity attention mechanism; and an image fusion unit, configured to fuse all the regional images to obtain a feature-refined feature map.
- The training device for a fast imaging model according to claim 11, characterized in that the parameter and under-sampling mask update module comprises: a gradient calculation unit, configured to backward-calculate a gradient according to the imaging data and the target label to obtain a gradient matrix; and a model parameter update unit, configured to update, according to the gradient matrix, the attention weights given to the several regional images by the multi-granularity attention mechanism.
- The training device for a fast imaging model according to claim 12, characterized in that the parameter and under-sampling mask update module further comprises: an under-sampling mask updating unit, configured to generate a continuous mask according to the under-sampling mask and add the continuous mask and the gradient matrix to obtain an updated continuous mask; and a mask binarization unit, configured to binarize the updated continuous mask to obtain an updated under-sampling mask.
- A server, characterized in that it comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the steps of the training method for a fast imaging model according to any one of claims 1 to 8 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/119097 WO2021097594A1 (en) | 2019-11-18 | 2019-11-18 | Quick imaging model training method and apparatus, and server |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021097594A1 true WO2021097594A1 (en) | 2021-05-27 |
Family
ID=75979955
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/119097 WO2021097594A1 (en) | 2019-11-18 | 2019-11-18 | Quick imaging model training method and apparatus, and server |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2021097594A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107182216A (en) * | 2015-12-30 | 2017-09-19 | 中国科学院深圳先进技术研究院 | A kind of rapid magnetic resonance imaging method and device based on depth convolutional neural networks |
CN109584248A (en) * | 2018-11-20 | 2019-04-05 | 西安电子科技大学 | Infrared surface object instance dividing method based on Fusion Features and dense connection network |
CN109636721A (en) * | 2018-11-29 | 2019-04-16 | 武汉大学 | Video super-resolution method based on confrontation study and attention mechanism |
CN110415815A (en) * | 2019-07-19 | 2019-11-05 | 银丰基因科技有限公司 | The hereditary disease assistant diagnosis system of deep learning and face biological information |
- 2019-11-18: PCT/CN2019/119097 filed as WO2021097594A1 (active, Application Filing)
Non-Patent Citations (2)
- WANG SHANSHAN; TAN SHA; GAO YUAN; LIU QIEGEN; YING LESLIE; XIAO TAOHUI; LIU YUANYUAN; LIU XIN; ZHENG HAIRONG; LIANG DONG: "Learning Joint-Sparse Codes for Calibration-Free Parallel MR Imaging", IEEE Transactions on Medical Imaging, vol. 37, no. 1, 1 January 2018, pp. 251-261, ISSN 0278-0062, DOI: 10.1109/TMI.2017.2746086 *
- ZHANG CHAOXIA: "Research on Optimal Angiographic Viewing Angles for the Segment of Interest and Arterial Motion Estimation based on MSCT", China Doctoral Dissertations Full-text Database, Medicine and Health Sciences, 1 December 2011 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19953566; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 19953566; Country of ref document: EP; Kind code of ref document: A1 |
 | 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20.01.2023) |