WO2020118616A1 - Head and neck joint imaging method and device based on deep prior learning (一种基于深度先验学习的头颈联合成像方法和装置) - Google Patents
- Publication number
- WO2020118616A1 (application PCT/CN2018/120882)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- complex
- image
- head
- neural network
- convolutional neural
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
Definitions
- the present application belongs to the technical field of image processing, and particularly relates to a method and device for joint imaging of head and neck based on deep prior learning.
- intracranial imaging is generally a two-dimensional imaging technique.
- two-dimensional imaging can only observe cross-sectional images of a given segment; the slice thickness is usually too large and the resolution is not isotropic, so it cannot meet practical application requirements.
- intracranial three-dimensional vessel wall imaging can acquire blood-flow and bleeding signals simultaneously, which facilitates quantitative detection of plaque bleeding, but it suffers from low spatial resolution, long imaging time, and insufficient contrast between the vessel wall and cerebrospinal fluid.
- current combined head and neck imaging generally uses a T1-weighted three-dimensional fast spin echo technique.
- the technique uses integrated head and neck imaging with a maximum field of view of 250 mm.
- a flip-down preparation pulse is used to uniformly suppress the cerebrospinal fluid signal, and a DANTE module is used to effectively suppress the blood-flow signal.
- this gives good contrast and a 0.5 mm isotropic whole-brain resolution.
- however, the enlarged field of view makes the imaging time longer, and if a carotid artery examination is added the time becomes even longer, which cannot meet practical application requirements.
- the purpose of this application is to provide a head and neck joint imaging method and device based on deep prior learning to improve the imaging accuracy and shorten the imaging time of head and neck joint imaging.
- This application provides a head and neck joint imaging method and device based on deep prior learning, implemented as follows:
- a head and neck joint imaging method based on deep prior learning includes: acquiring a head-neck joint magnetic resonance image to be reconstructed; inputting the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model in which complex residual blocks are provided; and
- reconstructing the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.
- the complex convolutional neural network model includes: a first complex convolutional layer, a plurality of complex residual blocks, and a second complex convolutional layer, wherein each complex residual block includes two complex convolutional layers.
- the complex convolution operation in the complex convolution layer is expressed as: w*c = (c_real + i·c_imgi)*(w_real + i·w_imgi) = (w_real*c_real - w_imgi*c_imgi) + i·(w_real*c_real + w_imgi*c_imgi)
- w represents the input complex image
- c represents the complex convolution kernel
- c_real and c_imgi represent the real and imaginary parts of the input complex image
- w_real and w_imgi represent the real and imaginary parts of the complex convolution kernel
- the complex convolutional neural network model is established in the following manner: acquiring a fully sampled sample image, wherein the fully sampled sample image is a head and neck combined magnetic resonance image obtained from a magnetic resonance apparatus; performing undersampling on the fully sampled sample image to obtain an undersampled sample image; and training a pre-established complex convolutional neural network with the undersampled sample image as a training sample and the fully sampled sample image as a label, to obtain the complex convolutional neural network model.
- using the undersampled sample image as a training sample and the fully sampled sample image as a label to train the pre-established complex convolutional neural network includes: training the network with the following objective function,
- where x_m represents the multi-channel complex input image, y_m is the fully sampled original image, C(x_m; θ) represents the predicted output of the network, θ = {(Ω_1, b_1), ..., (Ω_l, b_l), ..., (Ω_L, b_L)} are the parameters to be updated during training, Ω represents the weights and b the biases, the weight and bias values at which the error between the network output and the label is minimal are taken as the trained parameters (i.e. the θ corresponding to the minimum error between the network output and the label), M represents the total number of training samples, and m represents the index of the current training sample.
- the head-neck joint magnetic resonance image to be reconstructed is an undersampled image containing artifacts.
- a head and neck joint imaging device based on deep prior learning including:
- the acquisition module is used to acquire the head and neck joint magnetic resonance image to be reconstructed
- An input module for inputting the head-neck combined magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model, wherein the complex convolutional neural network model is provided with a complex residual block;
- the reconstruction module is used to reconstruct the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.
- the complex convolutional neural network model includes: a first complex convolutional layer, a plurality of complex residual blocks, and a second complex convolutional layer, wherein each complex residual block includes two complex convolutional layers.
- the complex convolution operation in the complex convolution layer is expressed as: w*c = (c_real + i·c_imgi)*(w_real + i·w_imgi) = (w_real*c_real - w_imgi*c_imgi) + i·(w_real*c_real + w_imgi*c_imgi)
- w represents the input complex image
- c represents the complex convolution kernel
- c_real and c_imgi represent the real and imaginary parts of the input complex image
- w_real and w_imgi represent the real and imaginary parts of the complex convolution kernel
- the complex convolutional neural network model is established in the following manner: acquiring a fully sampled sample image, wherein the fully sampled sample image is a head and neck combined magnetic resonance image obtained from a magnetic resonance apparatus; performing undersampling on the fully sampled sample image to obtain an undersampled sample image; and training a pre-established complex convolutional neural network with the undersampled sample image as a training sample and the fully sampled sample image as a label, to obtain the complex convolutional neural network model.
- using the undersampled sample image as a training sample and the fully sampled sample image as a label to train the pre-established complex convolutional neural network includes: training the network with the following objective function,
- where x_m represents the multi-channel complex input image, y_m is the fully sampled original image, C(x_m; θ) represents the predicted output of the network, θ = {(Ω_1, b_1), ..., (Ω_l, b_l), ..., (Ω_L, b_L)} are the parameters to be updated during training, Ω represents the weights and b the biases, the weight and bias values at which the error between the network output and the label is minimal are taken as the trained parameters (i.e. the θ corresponding to the minimum error between the network output and the label), M represents the total number of training samples, and m represents the index of the current training sample.
- the head-neck joint magnetic resonance image to be reconstructed is an undersampled image containing artifacts.
- a terminal device includes a processor and a memory for storing processor-executable instructions.
- when the processor executes the instructions, the following method steps are implemented: acquiring a head-neck joint magnetic resonance image to be reconstructed; inputting the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model in which complex residual blocks are provided; and
- reconstructing the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.
- a computer-readable storage medium having computer instructions stored thereon; when the instructions are executed, the following method steps are implemented: acquiring a head-neck joint magnetic resonance image to be reconstructed; inputting the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model in which complex residual blocks are provided; and
- reconstructing the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.
- the head and neck joint imaging method and device based on deep prior learning provided by this application reconstruct the head-neck joint magnetic resonance image to be reconstructed through a pre-established complex convolutional neural network model, thereby obtaining an artifact-free high-resolution head-neck joint image. Because the image to be reconstructed is only an undersampled image and the complex convolutional neural network has a strong reconstruction capability, a high-precision, artifact-free, high-resolution head-neck joint image can still be obtained. This solves the problem that existing joint head and neck imaging cannot guarantee imaging accuracy and imaging time at the same time, and achieves the technical effect of effectively shortening the imaging time while ensuring imaging accuracy.
- FIG. 1 is a method flowchart of an embodiment of a head-neck joint imaging method based on deep prior learning provided by this application;
- FIG. 2 is a schematic diagram of a model of a complex convolution network provided by this application.
- FIG. 3 is a schematic diagram of a model of a complex residual block provided by this application;
- FIG. 4 is a data-flow diagram of image reconstruction based on a complex convolution network provided by this application;
- FIG. 5 is a schematic diagram of a module for image reconstruction based on a complex convolution network provided by this application;
- FIG. 6 is an architectural diagram of a terminal device provided by this application.
- FIG. 7 is a schematic structural diagram of an embodiment of a head-neck joint imaging module based on deep prior learning provided by this application.
- a complex convolutional neural network model can be generated that converts undersampled images containing artifacts into images without artifacts, so that only a head and neck joint magnetic resonance image containing artifacts needs to be provided in order to obtain a high-resolution, artifact-free head and neck joint magnetic resonance image; in this way the joint head and neck image can meet the accuracy requirements while the scanning time is reduced.
- FIG. 1 is a method flowchart of an embodiment of a head-neck joint imaging method based on deep prior learning described in this application.
- although this application provides method operation steps or device structures as shown in the following embodiments or drawings, more or fewer operation steps or module units may be included in the method or device based on routine work or without creative effort.
- for steps or structures with no logically necessary causal relationship, the execution order of these steps or the module structure of the device is not limited to the execution order or module structure described in the embodiments of this application and shown in the drawings.
- when applied in an actual device or terminal product, the method or module structure shown in the embodiments or drawings can be executed sequentially or in parallel (for example, in a parallel-processor or multi-threaded processing environment, or even a distributed processing environment).
- the joint imaging method based on deep prior learning can include the following steps:
- Step 101 Acquire a head-neck joint magnetic resonance image to be reconstructed
- the magnetic resonance image to be reconstructed may be an image obtained by undersampled joint imaging of the head and neck of a target object with a magnetic resonance scanner; for example, an undersampled scan of the head and neck of the target object may be performed by the magnetic resonance scanner,
- and the resulting image is an image containing artifacts.
- artifacts refer to image features of various forms that appear in the image although they do not exist in the scanned object. Artifacts are roughly divided into two categories, related to the patient and related to the machine.
- an artifact in a magnetic resonance image is an abnormal change in density of the image that does not match the actual anatomical structure; it can involve items such as CT machine component failures, insufficient calibration, and algorithm inaccuracies or even errors.
- Step 102 Input the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model, wherein a complex residual block is set in the complex convolutional neural network model;
- the above complex convolutional neural network model can be established in the following manner:
- S1: Acquire a fully sampled sample image, wherein the fully sampled sample image is a head and neck combined magnetic resonance image obtained from a magnetic resonance apparatus; S2: perform undersampling on the fully sampled sample image to obtain an undersampled sample image; S3: train the pre-established complex convolutional neural network with the undersampled sample image as a training sample and the fully sampled sample image as a label, to obtain the complex convolutional neural network model.
- the fully sampled sample image is fully sampled original image data, i.e. image data without artifacts.
- the fully sampled images on which the training samples are based may be images acquired from a magnetic resonance scanner with a low undersampling factor; the acquired images are then preprocessed, where the preprocessing may include, but is not limited to, at least one of image selection and normalization, and the preprocessed images are used as the fully sampled images.
- image selection removes images that are of low quality or contain little usable information.
- normalization makes the data suitable for the unified input of the network and eliminates the adverse effects caused by singular sample data, so that the resulting image data are suitable for training the complex convolutional neural network model.
- the above-mentioned under-sampled sample image is an image obtained by performing under-sampling processing on the above-mentioned fully-sampled sample image according to a preset under-sampling ratio, and the under-sampled image is an image containing artifacts.
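The undersampling and normalization described above can be simulated directly from fully sampled complex images. The sketch below is a minimal Python/NumPy illustration under stated assumptions: the sampling pattern (random phase-encode lines plus a fully sampled k-space centre) and the magnitude-based normalization are illustrative choices, since the text only specifies a preset undersampling ratio and a normalization step; the function names are hypothetical.

```python
import numpy as np

def undersample_with_mask(full_image, acceleration=4, center_fraction=0.08, seed=0):
    """Simulate an undersampled, artifact-containing input from a fully sampled complex image.

    full_image: 2D complex array (a fully sampled head-neck MR image).
    Returns the aliased image and the sampling mask.
    """
    rng = np.random.default_rng(seed)
    ny, nx = full_image.shape
    kspace = np.fft.fftshift(np.fft.fft2(full_image))        # go to k-space

    mask = rng.random(nx) < (1.0 / acceleration)              # random phase-encode lines
    n_center = int(center_fraction * nx)
    c0 = nx // 2 - n_center // 2
    mask[c0:c0 + n_center] = True                             # always keep the low frequencies
    mask2d = np.broadcast_to(mask, (ny, nx))

    undersampled = np.fft.ifft2(np.fft.ifftshift(kspace * mask2d))  # artifact-containing image
    return undersampled, mask2d

def normalize_complex(img):
    """Scale a complex image by its maximum magnitude (one possible normalization)."""
    return img / (np.abs(img).max() + 1e-12)
```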
- in the above example, a head-neck joint imaging method based on deep prior learning is provided.
- the pre-established complex convolutional neural network model is used to reconstruct the head-neck joint magnetic resonance image to be reconstructed, thereby obtaining an artifact-free high-resolution head-neck joint image. Because the image to be reconstructed is only an undersampled image and the complex convolutional neural network has a strong reconstruction capability, a high-precision, artifact-free, high-resolution head-neck joint image can still be obtained. This solves the problem that existing joint head and neck imaging cannot guarantee imaging accuracy and imaging time at the same time, and achieves the technical effect of effectively shortening the imaging time while ensuring imaging accuracy.
- Step 103 Use the complex convolutional neural network model to reconstruct the head-neck joint magnetic resonance image to be reconstructed to obtain an artifact-free high-resolution head-neck joint image.
- the high-resolution image is an image close to a fully sampled image, and these high-resolution images can meet actual application requirements.
- the above-mentioned complex convolutional neural network model may be as shown in FIG. 2, and includes, in order: a first complex convolutional layer, a plurality of complex residual blocks, and a second complex convolutional layer, where each complex residual block includes two complex convolutional layers.
- the complex convolution operation in the complex convolution layer can be expressed as: w*c = (c_real + i·c_imgi)*(w_real + i·w_imgi) = (w_real*c_real - w_imgi*c_imgi) + i·(w_real*c_real + w_imgi*c_imgi)
- w represents the input complex image
- c represents the complex convolution kernel
- c_real and c_imgi represent the real and imaginary parts of the input complex image
- w_real and w_imgi represent the real and imaginary parts of the complex convolution kernel
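A complex convolution can be implemented with two ordinary real-valued convolutions, one holding the real part of the kernel and one the imaginary part. The PyTorch sketch below is a minimal illustration of this idea, not the patent's implementation: channel counts and kernel size are assumptions, and the forward pass uses the standard complex product, whose imaginary part (w_real*c_imgi + w_imgi*c_real) differs from the imaginary part printed in the formula above.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution built from two real-valued convolutions.

    The complex input is represented as two real tensors (real part, imaginary part).
    Using the standard complex product,
        (w_r + i*w_i) * (c_r + i*c_i) = (w_r*c_r - w_i*c_i) + i*(w_r*c_i + w_i*c_r),
    where * denotes 2D convolution.
    """
    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        self.conv_real = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)
        self.conv_imag = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)

    def forward(self, x_real, x_imag):
        out_real = self.conv_real(x_real) - self.conv_imag(x_imag)
        out_imag = self.conv_real(x_imag) + self.conv_imag(x_real)
        return out_real, out_imag
```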
- in the process of training the pre-established complex convolutional neural network with the undersampled sample images as training samples and the fully sampled sample images as labels, the following function can be used as the objective function:
- where x_m represents the multi-channel complex input image, y_m is the fully sampled original image, C(x_m; θ) represents the predicted output of the network, θ = {(Ω_1, b_1), ..., (Ω_l, b_l), ..., (Ω_L, b_L)} are the parameters to be updated during training, Ω represents the weights and b the biases, the weight and bias values at which the error between the network output and the label is minimal are taken as the trained parameters (i.e. the θ corresponding to the minimum error between the network output and the label), M represents the total number of training samples, and m represents the index of the current training sample.
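The objective function itself did not survive extraction here; only the symbol definitions did. Given those definitions, one plausible reconstruction, assuming a squared-error data term averaged over the M training pairs, is

$$\hat{\theta} = \arg\min_{\theta}\; \frac{1}{M}\sum_{m=1}^{M} \bigl\| C(x_m;\theta) - y_m \bigr\|_2^2, \qquad \theta = \{(\Omega_1,b_1),\ldots,(\Omega_l,b_l),\ldots,(\Omega_L,b_L)\}.$$

The choice of the squared L2 norm and the 1/M averaging are assumptions; the text states only that the θ minimizing the error between the network output C(x_m; θ) and the label y_m is taken as the trained parameter value.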
- Residual network: when the number of layers of a neural network reaches a certain point, adding more layers makes the performance on the training set worse, because the deeper the network, the harder it is to train and to optimize; an overly deep neural network suffers from the degradation problem and performs worse than a relatively shallow one.
- the residual network was proposed to solve this problem: the deeper the residual network, the better its performance on the training set.
- a residual network constructs identity-mapping layers on top of several convolutional layers, that is, layers whose output equals their input, so that a deeper network can be built. Specifically, by adding shortcut connections, the neural network becomes easier to optimize.
- Residual block: as shown in FIG. 3, several layers of a network that contain one shortcut connection are called a residual block.
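A complex residual block of the kind just described can be sketched by stacking two of the ComplexConv2d layers from the previous sketch and adding the shortcut connection. This is a hedged illustration only: the activation placement and the identical channel width are assumptions, since the text specifies only "two complex convolutional layers" per block plus a shortcut.

```python
import torch.nn as nn  # ComplexConv2d is the sketch defined above

class ComplexResidualBlock(nn.Module):
    """Two complex convolution layers plus a shortcut (identity) connection,
    so the stacked layers only learn a correction to their input."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.conv1 = ComplexConv2d(channels, channels, kernel_size, padding=kernel_size // 2)
        self.conv2 = ComplexConv2d(channels, channels, kernel_size, padding=kernel_size // 2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x_real, x_imag):
        r, i = self.conv1(x_real, x_imag)
        r, i = self.relu(r), self.relu(i)
        r, i = self.conv2(r, i)
        return x_real + r, x_imag + i      # shortcut connection
```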
- the method mainly includes the following aspects: construction of a large-sample multi-channel joint head and neck magnetic resonance database, research on deep prior learning models for multi-channel high-dimensional big data, and research on online high-dimensional reconstruction models that integrate the deep prior.
- the multi-channel image refers to images of the same scene taken by multiple cameras or images of the same scene taken by a camera at different times.
- multiple channels are used to encode the image.
- Multi-channel images are often used in the field of artificial intelligence.
- an image is made up of individual pixels; all the pixels of different colors form a complete image, and a computer stores pictures in binary. The number of bits a computer uses to store a single pixel is generally called the depth of the image.
- the channels of an image are related to its encoding: if the image is decomposed into the three RGB components, it has three channels; if the image is a grayscale image, it has one channel. Multi-channel images are images with three or more channels.
- in general, only the real part of the image is used in image reconstruction; the imaginary part of the image is usually not used.
- however, the imaginary part of the image often contains the phase information of the image. If the imaginary part can be used effectively, the accuracy of multi-channel image reconstruction can be improved.
- according to the complex-valued nature of vessel wall magnetic resonance images, a corresponding complex convolutional neural network is designed, and residual blocks are used to learn from the multi-channel integrated head and neck magnetic resonance images and extract key feature information, so as to achieve online reconstruction of vessel wall magnetic resonance images.
- as shown in FIG. 4, the input of the network is the artifact-containing image obtained by undersampling a fully sampled integrated head and neck magnetic resonance image, and the output label is the fully sampled original image data.
- the intermediate network is composed of two complex convolution layers and three complex residual blocks.
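Putting the two sketches above together, the architecture just described (first complex convolution layer, three complex residual blocks, second complex convolution layer, cf. FIG. 2 and FIG. 4) can be assembled as follows. The feature width of 64 and the single complex input/output channel are assumptions; the patent does not specify these sizes.

```python
import torch.nn as nn  # ComplexConv2d and ComplexResidualBlock are the sketches defined above

class ComplexReconNet(nn.Module):
    """First complex conv layer -> three complex residual blocks -> second complex conv layer."""
    def __init__(self, in_channels=1, features=64, num_blocks=3):
        super().__init__()
        self.head = ComplexConv2d(in_channels, features)
        self.blocks = nn.ModuleList([ComplexResidualBlock(features) for _ in range(num_blocks)])
        self.tail = ComplexConv2d(features, in_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x_real, x_imag):
        r, i = self.head(x_real, x_imag)
        r, i = self.relu(r), self.relu(i)
        for block in self.blocks:
            r, i = block(r, i)
        return self.tail(r, i)             # real and imaginary parts of the artifact-suppressed image
```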
- the complex convolution layer performs the convolution operation directly on the input complex image, that is, the convolution kernel used is a complex convolution kernel. Expressed mathematically, the input complex image is: c = c_real + i·c_imgi
- the complex convolution kernel is: w = w_real + i·w_imgi
- the complex convolution operation formula is as follows: w*c = (w_real*c_real - w_imgi*c_imgi) + i·(w_real*c_real + w_imgi*c_imgi)
- after the convolution, a ReLU activation operation follows.
- the objective function used in the complex convolutional network can be expressed as:
- where x_m represents the multi-channel complex input image, y_m is the fully sampled original image, C(x_m; θ) represents the predicted output of the network, θ = {(Ω_1, b_1), ..., (Ω_l, b_l), ..., (Ω_L, b_L)} are the parameters to be updated during training, Ω represents the weights and b the biases, the weight and bias values at which the error between the network output and the label is minimal are taken as the trained parameters (i.e. the θ corresponding to the minimum error between the network output and the label), M represents the total number of training samples, and m represents the index of the current training sample.
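A training step matching this objective, minimizing the error between the network output C(x_m; θ) and the fully sampled label y_m over all M training pairs, is sketched below. The optimizer (Adam), learning rate, epoch count and the mean-squared-error criterion are assumptions; the patent only defines the objective function in terms of the network output, the labels and the parameters θ.

```python
import torch
import torch.nn as nn
import torch.optim as optim

def train(model, loader, epochs=50, lr=1e-4, device="cuda"):
    """loader yields (x_real, x_imag, y_real, y_imag) batches of undersampled inputs
    and fully sampled labels; model is e.g. the ComplexReconNet sketch above."""
    model = model.to(device)
    criterion = nn.MSELoss()
    optimizer = optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        running = 0.0
        for x_real, x_imag, y_real, y_imag in loader:
            x_real, x_imag = x_real.to(device), x_imag.to(device)
            y_real, y_imag = y_real.to(device), y_imag.to(device)
            out_real, out_imag = model(x_real, x_imag)
            loss = criterion(out_real, y_real) + criterion(out_imag, y_imag)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            running += loss.item()
        print(f"epoch {epoch + 1}: loss {running / max(len(loader), 1):.6f}")
```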
- the above-mentioned deep neural network can be used for online reconstruction of the head and neck position after training, and finally a high-quality head and neck position vascular wall magnetic resonance image can be obtained in a short time.
- as shown in FIG. 5, it may include: a data processing module, a model acquisition module, a model testing module, and a model application module, among which:
- the data processing module is used to perform preprocessing operations such as normalization on the collected image data and to prepare the input and output samples for network training;
- the model acquisition module is used to train and optimize the designed complex convolutional network;
- the model testing module is used to perform online reconstruction tests on integrated head and neck undersampled images that did not participate in training, to verify that the trained network model can reconstruct high-quality images;
- the model application module is used to apply the deep convolutional reconstruction algorithm to actual application scenarios after verifying that the model has sufficiently good generalization ability.
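The model testing step, online reconstruction of an undersampled image that did not participate in training followed by a quality check against the fully sampled reference, could look like the sketch below. Using PSNR as the quality metric is an assumption; the text only requires verifying that the trained network reconstructs high-quality images.

```python
import torch

def reconstruct_and_evaluate(model, x_real, x_imag, y_full=None, device="cuda"):
    """Reconstruct one undersampled head-neck image with a trained model and,
    if a fully sampled complex reference y_full is given, report PSNR."""
    model.eval()
    with torch.no_grad():
        out_real, out_imag = model(x_real.to(device), x_imag.to(device))
    recon = torch.complex(out_real, out_imag).abs().cpu()
    if y_full is not None:
        mse = torch.mean((recon - y_full.abs()) ** 2)
        psnr = 10 * torch.log10(y_full.abs().max() ** 2 / mse)
        print(f"PSNR: {psnr.item():.2f} dB")
    return recon
```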
- in the above example, based on medical magnetic resonance image data, deep learning techniques and the designed complex convolutional neural network are used to improve the accuracy of vessel wall magnetic resonance imaging and to shorten the imaging time, thereby achieving fast, high-precision reconstruction of joint head and neck images.
- that is, the deep learning method is used to perform integrated, fast and high-precision imaging of the head and neck.
- specifically, a complex convolution operation for integrated head and neck magnetic resonance image data is proposed, and a complex convolutional network is used, built on the traditional convolutional network,
- with complex residual blocks added, which improves the accuracy of integrated head and neck magnetic resonance vessel wall imaging and shortens the imaging time.
- FIG. 6 is a hardware block diagram of a terminal device for the head-neck joint imaging method based on deep prior learning according to an embodiment of the present invention.
- the terminal device 10 may include one or more processors 102 (only one is shown in the figure; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions.
- the structure shown in FIG. 6 is merely an illustration, which does not limit the structure of the foregoing electronic device.
- the terminal device 10 may further include more or fewer components than those shown in FIG. 6, or have a configuration different from that shown in FIG.
- the memory 104 can be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the head-neck joint imaging method based on deep prior learning in the embodiment of the present invention; the processor 102 executes various functional applications and data processing, that is, implements the above head-neck joint imaging method based on deep prior learning, by running the software programs and modules stored in the memory 104.
- the memory 104 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
- the memory 104 may further include memories remotely provided with respect to the processor 102, and these remote memories may be connected to the computer terminal 10 through a network.
- Examples of the above network include but are not limited to the Internet, intranet, local area network, mobile communication network, and combinations thereof.
- the transmission module 106 is used to receive or send data via a network.
- the above specific example of the network may include a wireless network provided by a communication provider of the computer terminal 10.
- the transmission module 106 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices through the base station to communicate with the Internet.
- the transmission module 106 may be a radio frequency (Radio Frequency, RF) module, which is used to communicate with the Internet wirelessly.
- the above-mentioned deep prior learning-based head and neck joint imaging device may be as shown in FIG. 7 and includes:
- the obtaining module 701 is used to obtain a head-neck joint magnetic resonance image to be reconstructed
- the input module 702 is configured to input the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model, wherein the complex convolutional neural network model is provided with a complex residual block;
- the reconstruction module 703 is configured to reconstruct the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.
- the complex convolutional neural network model may in turn include: a first complex convolutional layer, a plurality of complex residual blocks, and a second complex convolutional layer, where each complex residual block includes two complex convolutional layers.
- the complex convolution operation in the complex convolution layer can be expressed as: w*c = (c_real + i·c_imgi)*(w_real + i·w_imgi) = (w_real*c_real - w_imgi*c_imgi) + i·(w_real*c_real + w_imgi*c_imgi)
- w represents the input complex image
- c represents the complex convolution kernel
- c_real and c_imgi represent the real and imaginary parts of the input complex image
- w_real and w_imgi represent the real and imaginary parts of the complex convolution kernel
- the complex convolutional neural network model may be established in the following manner:
- S1: Acquire a fully sampled sample image, wherein the fully sampled sample image is a head and neck combined magnetic resonance image obtained from a magnetic resonance apparatus; S2: perform undersampling on the fully sampled sample image to obtain an undersampled sample image; S3: train the pre-established complex convolutional neural network with the undersampled sample image as a training sample and the fully sampled sample image as a label, to obtain the complex convolutional neural network model.
- the following function can be used as the objective function to train the pre-established complex convolutional neural network:
- where x_m represents the multi-channel complex input image, y_m is the fully sampled original image, C(x_m; θ) represents the predicted output of the network, θ = {(Ω_1, b_1), ..., (Ω_l, b_l), ..., (Ω_L, b_L)} are the parameters to be updated during training, Ω represents the weights and b the biases, the weight and bias values at which the error between the network output and the label is minimal are taken as the trained parameters (i.e. the θ corresponding to the minimum error between the network output and the label), M represents the total number of training samples, and m represents the index of the current training sample.
- the head-neck joint magnetic resonance image to be reconstructed may be an under-sampled image containing artifacts.
- Embodiments of the present application also provide a specific implementation of an electronic device that can implement all steps in the head-neck joint imaging method based on deep prior learning in the foregoing embodiment, and the electronic device specifically includes the following content:
- a processor (processor), a memory (memory), a communications interface (Communications Interface) 603, and a bus 604;
- the processor 601, the memory 602, and the communication interface 603 communicate with each other through the bus 604; the processor 601 is used to call a computer program in the memory 602, and when the processor executes the computer program,
- it implements all the steps of the head-neck joint imaging method based on deep prior learning in the above embodiment. For example, when the processor executes the computer program, the following steps are realized:
- Step 1 Acquire the magnetic resonance image of the head and neck joint to be reconstructed
- Step 2 Input the magnetic resonance image of the head-neck joint to be reconstructed into a pre-established complex convolutional neural network model, wherein a complex residual block is set in the complex convolutional neural network model;
- Step 3 Reconstruct the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.
- the head and neck joint imaging method and device based on deep prior learning reconstruct the head-neck joint magnetic resonance image to be reconstructed through the pre-established complex convolutional neural network model, thereby obtaining an artifact-free high-resolution head-neck joint image. Because the image to be reconstructed is only an undersampled image and the complex convolutional neural network has a strong reconstruction capability, a high-precision, artifact-free, high-resolution head-neck joint image can still be obtained. This solves the problem that existing joint head and neck imaging cannot guarantee imaging accuracy and imaging time at the same time, and achieves the technical effect of effectively shortening the imaging time while ensuring imaging accuracy.
- An embodiment of the present application also provides a computer-readable storage medium capable of implementing all steps in the head-neck joint imaging method based on deep a priori learning in the foregoing embodiment, and the computer-readable storage medium stores a computer program on it.
- when the computer program is executed by the processor, all steps of the head-neck joint imaging method based on deep prior learning in the above embodiments are implemented. For example, when the processor executes the computer program, the following steps are realized:
- Step 1 Acquire the magnetic resonance image of the head and neck joint to be reconstructed
- Step 2 Input the magnetic resonance image of the head-neck joint to be reconstructed into a pre-established complex convolutional neural network model, wherein a complex residual block is set in the complex convolutional neural network model;
- Step 3 Reconstruct the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.
- the head and neck joint imaging method and device based on deep prior learning reconstruct the head-neck joint magnetic resonance image to be reconstructed through the pre-established complex convolutional neural network model, thereby obtaining an artifact-free high-resolution head-neck joint image. Because the image to be reconstructed is only an undersampled image and the complex convolutional neural network has a strong reconstruction capability, a high-precision, artifact-free, high-resolution head-neck joint image can still be obtained. This solves the problem that existing joint head and neck imaging cannot guarantee imaging accuracy and imaging time at the same time, and achieves the technical effect of effectively shortening the imaging time while ensuring imaging accuracy.
- the system, device, module or unit explained in the above embodiments may be specifically implemented by a computer chip or entity, or implemented by a product with a certain function.
- a typical implementation device is a computer.
- the computer may be, for example, a personal computer, a laptop computer, an on-board human-machine interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet A computer, a wearable device, or any combination of these devices.
- for convenience of description, the above devices are described with the functions divided into various modules.
- the functions of each module may be implemented in one or more software and/or hardware, or the modules that implement the same function may be implemented by a combination of multiple submodules or subunits.
- the device embodiments described above are only schematic.
- the division of the unit is only a logical function division.
- in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features can be ignored, or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- controller: in addition to implementing the controller in the form of pure computer-readable program code, it is entirely possible to logically program the method steps so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, such a controller can be regarded as a hardware component, and the devices included in it for implementing various functions can also be regarded as structures within the hardware component. Or even, the devices for realizing various functions can be regarded both as software modules of an implementation method and as structures within the hardware component.
- each flow and/or block in the flowchart and/or block diagram and a combination of the flow and/or block in the flowchart and/or block diagram may be implemented by computer program instructions.
- These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
- These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
- These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operating steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
- the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
- the memory may include non-permanent memory, random access memory (RAM) and/or non-volatile memory in computer-readable media, such as read only memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
- Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology.
- the information may be computer readable instructions, data structures, modules of programs, or other data.
- Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, read-only compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission media can be used to store information that can be accessed by computing devices.
- computer-readable media does not include temporary computer-readable media (transitory media), such as modulated data signals and carrier waves.
- the embodiments of the present specification may be provided as methods, systems, or computer program products. Therefore, the embodiments of the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present specification may take the form of computer program products implemented on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer usable program code.
- Embodiments of this specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules.
- program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
- the embodiments of the present specification may also be practiced in distributed computing environments in which tasks are performed by remote processing devices connected through a communication network.
- program modules may be located in local and remote computer storage media including storage devices.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
Abstract
A head and neck joint imaging method and device based on deep prior learning, wherein the method comprises: acquiring a head-neck joint magnetic resonance image to be reconstructed (101); inputting the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model, wherein complex residual blocks are provided in the complex convolutional neural network model (102); and reconstructing the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image (103). The above solution solves the problem in existing joint head and neck imaging that the requirements on imaging accuracy and imaging time cannot be guaranteed at the same time, and achieves the technical effect of effectively shortening the imaging time while ensuring imaging accuracy.
Description
The present application belongs to the technical field of image processing, and in particular relates to a head and neck joint imaging method and device based on deep prior learning.

Fast imaging has long been a research focus in magnetic resonance imaging, and magnetic resonance scanning of the head and neck is a very important aspect of this field. The difficulty of joint head-neck magnetic resonance vessel wall imaging lies mainly in the intracranial part. Intracranial imaging is generally a two-dimensional imaging technique, which can only observe cross-sectional images of a given segment; the slice thickness is usually too large and the resolution is not isotropic, so it cannot meet practical application requirements. Intracranial three-dimensional vessel wall imaging can acquire blood-flow and bleeding signals simultaneously, which facilitates quantitative detection of plaque bleeding, but it suffers from low spatial resolution, long imaging time and insufficient contrast between the vessel wall and cerebrospinal fluid.

Current joint head and neck imaging generally uses a T1-weighted three-dimensional fast spin echo technique. The technique uses integrated head and neck imaging with a maximum field of view of 250 mm; a flip-down preparation pulse uniformly suppresses the cerebrospinal fluid signal, and a DANTE module effectively suppresses the blood-flow signal, giving good contrast and a 0.5 mm isotropic whole-brain resolution. However, the enlarged field of view makes the imaging time longer, and if a carotid artery examination is added the time becomes even longer, so the technique cannot meet practical application requirements.

No effective solution has yet been proposed for the problem that existing joint head and neck imaging cannot satisfy the requirements on imaging accuracy and imaging time at the same time.
Summary of the Invention

The purpose of this application is to provide a head and neck joint imaging method and device based on deep prior learning, so as to improve the imaging accuracy of joint head and neck imaging and shorten the imaging time.

The head and neck joint imaging method and device based on deep prior learning provided by this application are implemented as follows:

A head and neck joint imaging method based on deep prior learning, the method comprising:

acquiring a head-neck joint magnetic resonance image to be reconstructed;

inputting the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model, wherein complex residual blocks are provided in the complex convolutional neural network model; and

reconstructing the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.
In one embodiment, the complex convolutional neural network model comprises, in order: a first complex convolution layer, a plurality of complex residual blocks, and a second complex convolution layer, wherein each complex residual block includes two complex convolution layers.

In one embodiment, the complex convolution operation in a complex convolution layer is expressed as:

w*c = (c_real + i·c_imgi)*(w_real + i·w_imgi) = (w_real*c_real - w_imgi*c_imgi) + i·(w_real*c_real + w_imgi*c_imgi)

where w denotes the input complex image, c denotes the complex convolution kernel, c_real denotes the real part of the input complex image, c_imgi denotes the imaginary part of the input complex image, w_real denotes the real part of the complex convolution kernel, and w_imgi denotes the imaginary part of the complex convolution kernel.
In one embodiment, the complex convolutional neural network model is established as follows:

acquiring fully sampled sample images, wherein the fully sampled sample images are head-neck joint magnetic resonance images obtained from a magnetic resonance apparatus;

performing undersampling on the fully sampled sample images to obtain undersampled sample images; and

training a pre-established complex convolutional neural network with the undersampled sample images as training samples and the fully sampled sample images as labels, to obtain the complex convolutional neural network model.

In one embodiment, training the pre-established complex convolutional neural network with the undersampled sample images as training samples and the fully sampled sample images as labels comprises: training the pre-established complex convolutional neural network with the following function as the objective function:

where x_m denotes the multi-channel complex input image, y_m is the fully sampled original image, C(x_m; θ) denotes the predicted output of the network, θ = {(Ω_1, b_1), ..., (Ω_l, b_l), ..., (Ω_L, b_L)} are the parameters to be updated during training, where Ω denotes the weights and b denotes the biases, the weight and bias values at which the error between the network output and the label is minimal are taken as the trained parameters (that is, the θ corresponding to the minimum error between the network output and the label), M denotes the total number of training samples, and m denotes the index of the current training sample.
In one embodiment, the head-neck joint magnetic resonance image to be reconstructed is an undersampled image containing artifacts.

A head and neck joint imaging device based on deep prior learning, comprising:

an acquisition module, used to acquire a head-neck joint magnetic resonance image to be reconstructed;

an input module, used to input the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model, wherein complex residual blocks are provided in the complex convolutional neural network model; and

a reconstruction module, used to reconstruct the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.
In one embodiment, the complex convolutional neural network model comprises, in order: a first complex convolution layer, a plurality of complex residual blocks, and a second complex convolution layer, wherein each complex residual block includes two complex convolution layers.

In one embodiment, the complex convolution operation in a complex convolution layer is expressed as:

w*c = (c_real + i·c_imgi)*(w_real + i·w_imgi) = (w_real*c_real - w_imgi*c_imgi) + i·(w_real*c_real + w_imgi*c_imgi)

where w denotes the input complex image, c denotes the complex convolution kernel, c_real denotes the real part of the input complex image, c_imgi denotes the imaginary part of the input complex image, w_real denotes the real part of the complex convolution kernel, and w_imgi denotes the imaginary part of the complex convolution kernel.
In one embodiment, the complex convolutional neural network model is established as follows:

acquiring fully sampled sample images, wherein the fully sampled sample images are head-neck joint magnetic resonance images obtained from a magnetic resonance apparatus;

performing undersampling on the fully sampled sample images to obtain undersampled sample images; and

training a pre-established complex convolutional neural network with the undersampled sample images as training samples and the fully sampled sample images as labels, to obtain the complex convolutional neural network model.

In one embodiment, training the pre-established complex convolutional neural network with the undersampled sample images as training samples and the fully sampled sample images as labels comprises: training the pre-established complex convolutional neural network with the following function as the objective function:

where x_m denotes the multi-channel complex input image, y_m is the fully sampled original image, C(x_m; θ) denotes the predicted output of the network, θ = {(Ω_1, b_1), ..., (Ω_l, b_l), ..., (Ω_L, b_L)} are the parameters to be updated during training, where Ω denotes the weights and b denotes the biases, the weight and bias values at which the error between the network output and the label is minimal are taken as the trained parameters (that is, the θ corresponding to the minimum error between the network output and the label), M denotes the total number of training samples, and m denotes the index of the current training sample.
In one embodiment, the head-neck joint magnetic resonance image to be reconstructed is an undersampled image containing artifacts.

A terminal device, comprising a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements the steps of the following method: acquiring a head-neck joint magnetic resonance image to be reconstructed; inputting the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model, wherein complex residual blocks are provided in the complex convolutional neural network model; and reconstructing the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.

A computer-readable storage medium having computer instructions stored thereon, wherein the instructions, when executed, implement the steps of the following method: acquiring a head-neck joint magnetic resonance image to be reconstructed; inputting the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model, wherein complex residual blocks are provided in the complex convolutional neural network model; and reconstructing the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.
The head and neck joint imaging method and device based on deep prior learning provided by this application reconstruct the head-neck joint magnetic resonance image to be reconstructed through a pre-established complex convolutional neural network model, thereby obtaining an artifact-free high-resolution head-neck joint image. Because the image to be reconstructed is only an undersampled image and the complex convolutional neural network has a strong image reconstruction capability, a high-precision, artifact-free, high-resolution head-neck joint image can still be obtained. The above solution solves the problem that existing joint head and neck imaging cannot guarantee imaging accuracy and imaging time at the same time, and achieves the technical effect of effectively shortening the imaging time while ensuring imaging accuracy.
In order to explain the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in this application, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a flowchart of an embodiment of the head-neck joint imaging method based on deep prior learning provided by this application;

FIG. 2 is a schematic diagram of the complex convolutional network model provided by this application;

FIG. 3 is a schematic diagram of the complex residual block model provided by this application;

FIG. 4 is a data-flow diagram of image reconstruction based on the complex convolutional network provided by this application;

FIG. 5 is a schematic diagram of the modules for image reconstruction based on the complex convolutional network provided by this application;

FIG. 6 is an architectural diagram of the terminal device provided by this application;

FIG. 7 is a schematic structural diagram of the modules of an embodiment of the head-neck joint imaging device based on deep prior learning provided by this application.
In order to enable those skilled in the art to better understand the technical solutions in this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some, not all, of the embodiments of this application. Based on the embodiments in this application, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the scope of protection of this application.

Aiming at the problems of slow speed and limited accuracy in existing joint head-neck magnetic resonance scanning, this example considers that a complex convolutional neural network model can be used to generate a network model that converts undersampled images containing artifacts into images without artifacts. In this way, only a head-neck joint magnetic resonance image containing artifacts needs to be provided in order to obtain a high-resolution head-neck joint magnetic resonance image without artifacts, so that the joint head and neck image can meet the accuracy requirements while the scanning time is reduced.

FIG. 1 is a flowchart of an embodiment of the head-neck joint imaging method based on deep prior learning described in this application. Although this application provides the method operation steps or device structures shown in the following embodiments or drawings, more or fewer operation steps or module units may be included in the method or device based on routine work or without creative effort. For steps or structures with no logically necessary causal relationship, the execution order of these steps or the module structure of the device is not limited to the execution order or module structure described in the embodiments of this application and shown in the drawings. When the method or module structure is applied in an actual device or terminal product, it may be executed sequentially or in parallel according to the method or module structure shown in the embodiments or drawings (for example, in a parallel-processor or multi-threaded processing environment, or even a distributed processing environment).
As shown in FIG. 1, the head-neck joint imaging method based on deep prior learning may include the following steps:

Step 101: acquire a head-neck joint magnetic resonance image to be reconstructed.

The magnetic resonance image to be reconstructed may be an image obtained by undersampled joint imaging of the head and neck of a target object with a magnetic resonance scanner; for example, it may be obtained by performing an undersampled scan of the head and neck of the target object with the magnetic resonance scanner, and the resulting image contains artifacts.

Artifacts refer to image features of various forms that appear in the image although they do not exist in the scanned object. Artifacts are roughly divided into two categories: those related to the patient and those related to the machine. Artifacts in magnetic resonance images refer to abnormal density variations in the image that do not match the actual anatomical structure; they can involve items such as CT machine component failures, insufficient calibration, and algorithm inaccuracies or even errors.
Step 102: input the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model, wherein complex residual blocks are provided in the complex convolutional neural network model.

The above complex convolutional neural network model may be established as follows:

S1: acquire fully sampled sample images, wherein the fully sampled sample images are head-neck joint magnetic resonance images obtained from a magnetic resonance apparatus.

The fully sampled sample images are fully sampled original image data, i.e. image data without artifacts.

S2: perform undersampling on the fully sampled sample images to obtain undersampled sample images.

S3: train a pre-established complex convolutional neural network with the undersampled sample images as training samples and the fully sampled sample images as labels, to obtain the complex convolutional neural network model.

The fully sampled images on which the training samples are based may be images acquired from a magnetic resonance scanner with a low undersampling factor; the acquired images are then preprocessed, where the preprocessing may include, but is not limited to, at least one of image selection and normalization, and the preprocessed images are used as the fully sampled images. Image selection removes images that are of low quality or contain little usable information; normalization makes the data suitable for the unified input of the network and eliminates the adverse effects of singular sample data, so that the resulting image data are suitable for training the complex convolutional neural network model.

The above undersampled sample images are images obtained by undersampling the above fully sampled sample images at a preset undersampling ratio; the undersampled images contain artifacts.

In the above example, the provided head-neck joint imaging method based on deep prior learning reconstructs the head-neck joint magnetic resonance image to be reconstructed through a pre-established complex convolutional neural network model, thereby obtaining an artifact-free high-resolution head-neck joint image. Because the image to be reconstructed is only an undersampled image and the complex convolutional neural network has a strong image reconstruction capability, a high-precision, artifact-free, high-resolution head-neck joint image can still be obtained. The above solution solves the problem that existing joint head and neck imaging cannot guarantee imaging accuracy and imaging time at the same time, and achieves the technical effect of effectively shortening the imaging time while ensuring imaging accuracy.
Step 103: reconstruct the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.

A high-resolution image here is an image close to the fully sampled image; such high-resolution images can meet practical application requirements.

In actual implementation, the above complex convolutional neural network model may be as shown in FIG. 2 and comprises, in order: a first complex convolution layer, a plurality of complex residual blocks, and a second complex convolution layer, where each complex residual block includes two complex convolution layers.
The complex convolution operation in a complex convolution layer can be expressed as:

w*c = (c_real + i·c_imgi)*(w_real + i·w_imgi) = (w_real*c_real - w_imgi*c_imgi) + i·(w_real*c_real + w_imgi*c_imgi)

where w denotes the input complex image, c denotes the complex convolution kernel, c_real denotes the real part of the input complex image, c_imgi denotes the imaginary part of the input complex image, w_real denotes the real part of the complex convolution kernel, and w_imgi denotes the imaginary part of the complex convolution kernel.
In the process of training the pre-established complex convolutional neural network with the undersampled sample images as training samples and the fully sampled sample images as labels, the following function can be used as the objective function:

where x_m denotes the multi-channel complex input image, y_m is the fully sampled original image, C(x_m; θ) denotes the predicted output of the network, θ = {(Ω_1, b_1), ..., (Ω_l, b_l), ..., (Ω_L, b_L)} are the parameters to be updated during training, where Ω denotes the weights and b denotes the biases, the weight and bias values at which the error between the network output and the label is minimal are taken as the trained parameters (that is, the θ corresponding to the minimum error between the network output and the label), M denotes the total number of training samples, and m denotes the index of the current training sample.
To better understand this application, the residual, the residual network and the residual block are explained as follows:

Residual: in mathematical statistics, the residual is the difference between an actual observed value and an estimated (fitted) value. Suppose we need to find an x such that f(x) = b; given an estimate x0 of x, the residual is b - f(x0), while the error is x - x0. In this way the residual can be computed even if the true value of x is unknown.

Residual network: when the number of layers of a neural network reaches a certain point, the performance on the training set becomes worse as more layers are added, because the deeper the network, the harder it is to train and the harder it is to optimize; an overly deep neural network suffers from the degradation problem and performs worse than a relatively shallow one. The residual network was proposed to solve this problem: the deeper the residual network, the better its performance on the training set. A residual network constructs an identity-mapping layer on top of several convolutional layers, that is, a layer whose output equals its input, so that a deeper network can be built. Specifically, by adding shortcut connections, the neural network becomes easier to optimize.

Residual block: as shown in FIG. 3, several layers of a network that contain one shortcut connection are called a residual block.
The above method is described below with reference to a specific embodiment. It is worth noting, however, that this specific embodiment is only intended to better illustrate this application and does not constitute an undue limitation of it.

In existing joint imaging, prior knowledge is generally absorbed while considering only a small amount of low-dimensional sample information, or a purely high-dimensional iterative reconstruction approach is used. To address these problems, in this example sufficient prior knowledge is obtained from existing large-sample, high-dimensional, parallel joint head-neck magnetic resonance images to achieve fast, high-precision joint head-neck imaging; that is, a head-neck joint fast imaging method based on deep prior learning is proposed to improve the accuracy of joint head and neck imaging and to shorten the imaging time.

Specifically, this example builds on a theory and method of fast imaging based on deep prior learning, so that high-resolution joint head-neck magnetic resonance vessel wall images can be acquired within a short scan time. The method mainly includes the following aspects: construction of a large-sample multi-channel joint head-neck magnetic resonance database, research on deep prior learning models for multi-channel high-dimensional big data, and research on online high-dimensional reconstruction models that integrate the deep prior.

A multi-channel image refers to images of the same scene taken by multiple cameras, or images of the same scene taken by one camera at different times. When representing an image, multiple channels are used to encode it; multi-channel images are widely used in the field of artificial intelligence. An image is made up of individual pixels; all the pixels of different colors form a complete image, and a computer stores pictures in binary form. The number of bits a computer uses to store a single pixel is generally called the depth of the image. The channels of an image are related to its encoding: if the image is decomposed into the three RGB components, it has three channels; if it is a grayscale image, it has one channel. Multi-channel images are images with three or more channels.

In general, only the real part of the image is used in image reconstruction, and the imaginary part is not used. However, the imaginary part of the image often contains its phase information; if the imaginary part can be used effectively, the accuracy of multi-channel image reconstruction can be improved.
In this example, according to the complex-valued nature of vessel wall magnetic resonance images, a corresponding complex convolutional neural network is designed, and residual blocks are combined to learn from multi-channel integrated head-neck magnetic resonance images and extract key feature information, so as to achieve online reconstruction of vessel wall magnetic resonance images. Specifically, as shown in FIG. 4, the input of the network is the artifact-containing image obtained by undersampling a fully sampled integrated head-neck magnetic resonance image, and the output label is the fully sampled original image data. The intermediate network is composed of two complex convolution layers and three complex residual blocks. A complex convolution layer performs the convolution operation directly on the input complex image, that is, the convolution kernel used is a complex convolution kernel. Expressed mathematically, the input complex image is:

c = c_real + i·c_imgi

the complex convolution kernel is:

w = w_real + i·w_imgi

and the complex convolution operation is:

w*c = (w_real*c_real - w_imgi*c_imgi) + i·(w_real*c_real + w_imgi*c_imgi)

After the convolution, a ReLU activation operation follows.
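The text does not say how the ReLU activation treats complex-valued data. One common convention (sometimes called CReLU) applies ReLU separately to the real and imaginary parts, as in the hedged sketch below; this is an assumption rather than the description's specified behaviour.

```python
import torch

def complex_relu(x_real, x_imag):
    """Apply ReLU independently to the real and imaginary parts of a complex feature map."""
    return torch.relu(x_real), torch.relu(x_imag)
```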
The objective function used in the complex convolutional network can be expressed as:

where x_m denotes the multi-channel complex input image, y_m is the fully sampled original image, C(x_m; θ) denotes the predicted output of the network, θ = {(Ω_1, b_1), ..., (Ω_l, b_l), ..., (Ω_L, b_L)} are the parameters to be updated during training, where Ω denotes the weights and b denotes the biases, the weight and bias values at which the error between the network output and the label is minimal are taken as the trained parameters (that is, the θ corresponding to the minimum error between the network output and the label), M denotes the total number of training samples, and m denotes the index of the current training sample.
After training, the above deep neural network can be used for online reconstruction of the head and neck region, and high-quality vessel wall magnetic resonance images of the head and neck can finally be obtained within a short time. Specifically, as shown in FIG. 5, the system may include a data processing module, a model acquisition module, a model testing module and a model application module, where:

1) the data processing module is used to perform preprocessing operations such as normalization on the collected image data and to prepare the input and output samples for network training;

2) the model acquisition module is used to train and optimize the designed complex convolutional network;

3) the model testing module is used to perform online reconstruction tests on integrated head-neck undersampled images that did not participate in training, to verify that the trained network model can reconstruct high-quality images;

4) the model application module is used to apply the deep convolutional reconstruction algorithm to actual application scenarios after verifying that the model has sufficiently good generalization ability.
In the above example, based on medical magnetic resonance image data and using deep learning techniques, the designed complex convolutional neural network improves the accuracy of vessel wall magnetic resonance imaging and shortens the imaging time, thereby achieving fast, high-precision reconstruction of joint head and neck images. That is, a deep learning method is used for integrated, fast, high-precision imaging of the head and neck; specifically, a complex convolution operation for integrated head-neck magnetic resonance image data is proposed, a complex convolutional network is used, and complex residual blocks are added on top of the traditional convolutional network, which improves the accuracy of integrated head-neck magnetic resonance vessel wall imaging and shortens the imaging time.
The method embodiments provided in the above embodiments of this application can be executed in a terminal device, a computer terminal or a similar computing device. Taking execution on a terminal device as an example, FIG. 6 is a hardware block diagram of a terminal device for the head-neck joint imaging method based on deep prior learning according to an embodiment of the present invention. As shown in FIG. 6, the terminal device 10 may include one or more processors 102 (only one is shown in the figure; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions. A person of ordinary skill in the art can understand that the structure shown in FIG. 6 is only schematic and does not limit the structure of the above electronic device. For example, the terminal device 10 may also include more or fewer components than shown in FIG. 6, or have a different configuration from that shown in FIG. 6.

The memory 104 can be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the head-neck joint imaging method based on deep prior learning in the embodiment of the present invention. By running the software programs and modules stored in the memory 104, the processor 102 executes various functional applications and data processing, that is, implements the above head-neck joint imaging method based on deep prior learning. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory or other non-volatile solid-state memory. In some examples, the memory 104 may further include memories arranged remotely with respect to the processor 102, and these remote memories may be connected to the computer terminal 10 through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.

The transmission module 106 is used to receive or send data via a network. Specific examples of the above network may include a wireless network provided by the communication provider of the computer terminal 10. In one example, the transmission module 106 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In one example, the transmission module 106 may be a radio frequency (Radio Frequency, RF) module, which is used to communicate with the Internet wirelessly.
At the software level, the above head-neck joint imaging device based on deep prior learning may be as shown in FIG. 7 and comprises:

an acquisition module 701, used to acquire a head-neck joint magnetic resonance image to be reconstructed;

an input module 702, used to input the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model, wherein complex residual blocks are provided in the complex convolutional neural network model; and

a reconstruction module 703, used to reconstruct the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.
In one embodiment, the complex convolutional neural network model may comprise, in order: a first complex convolution layer, a plurality of complex residual blocks, and a second complex convolution layer, where each complex residual block includes two complex convolution layers.

In one embodiment, the complex convolution operation in a complex convolution layer can be expressed as:

w*c = (c_real + i·c_imgi)*(w_real + i·w_imgi) = (w_real*c_real - w_imgi*c_imgi) + i·(w_real*c_real + w_imgi*c_imgi)

where w denotes the input complex image, c denotes the complex convolution kernel, c_real denotes the real part of the input complex image, c_imgi denotes the imaginary part of the input complex image, w_real denotes the real part of the complex convolution kernel, and w_imgi denotes the imaginary part of the complex convolution kernel.

In one embodiment, the complex convolutional neural network model may be established as follows:

S1: acquire fully sampled sample images, wherein the fully sampled sample images are head-neck joint magnetic resonance images obtained from a magnetic resonance apparatus;

S2: perform undersampling on the fully sampled sample images to obtain undersampled sample images;

S3: train a pre-established complex convolutional neural network with the undersampled sample images as training samples and the fully sampled sample images as labels, to obtain the complex convolutional neural network model.

In one embodiment, in actual implementation, the following function can be used as the objective function to train the pre-established complex convolutional neural network:

where x_m denotes the multi-channel complex input image, y_m is the fully sampled original image, C(x_m; θ) denotes the predicted output of the network, θ = {(Ω_1, b_1), ..., (Ω_l, b_l), ..., (Ω_L, b_L)} are the parameters to be updated during training, where Ω denotes the weights and b denotes the biases, the weight and bias values at which the error between the network output and the label is minimal are taken as the trained parameters (that is, the θ corresponding to the minimum error between the network output and the label), M denotes the total number of training samples, and m denotes the index of the current training sample.

In one embodiment, the head-neck joint magnetic resonance image to be reconstructed may be an undersampled image containing artifacts.
The embodiments of this application also provide a specific implementation of an electronic device capable of implementing all the steps of the head-neck joint imaging method based on deep prior learning in the above embodiments. The electronic device specifically includes the following:

a processor (processor), a memory (memory), a communications interface (Communications Interface) 603, and a bus 604;

wherein the processor 601, the memory 602 and the communications interface 603 communicate with each other through the bus 604; the processor 601 is used to call a computer program in the memory 602, and when the processor executes the computer program it implements all the steps of the head-neck joint imaging method based on deep prior learning in the above embodiments; for example, when the processor executes the computer program the following steps are implemented:

Step 1: acquire a head-neck joint magnetic resonance image to be reconstructed;

Step 2: input the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model, wherein complex residual blocks are provided in the complex convolutional neural network model;

Step 3: reconstruct the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.
From the above description it can be seen that the head-neck joint imaging method and device based on deep prior learning reconstruct the head-neck joint magnetic resonance image to be reconstructed through a pre-established complex convolutional neural network model, thereby obtaining an artifact-free high-resolution head-neck joint image. Because the image to be reconstructed is only an undersampled image and the complex convolutional neural network has a strong image reconstruction capability, a high-precision, artifact-free, high-resolution head-neck joint image can still be obtained. The above solution solves the problem that existing joint head and neck imaging cannot guarantee imaging accuracy and imaging time at the same time, and achieves the technical effect of effectively shortening the imaging time while ensuring imaging accuracy.
The embodiments of this application also provide a computer-readable storage medium capable of implementing all the steps of the head-neck joint imaging method based on deep prior learning in the above embodiments. The computer-readable storage medium stores a computer program, and when the computer program is executed by a processor it implements all the steps of the head-neck joint imaging method based on deep prior learning in the above embodiments; for example, when the processor executes the computer program the following steps are implemented:

Step 1: acquire a head-neck joint magnetic resonance image to be reconstructed;

Step 2: input the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model, wherein complex residual blocks are provided in the complex convolutional neural network model;

Step 3: reconstruct the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.

From the above description it can be seen that the head-neck joint imaging method and device based on deep prior learning reconstruct the head-neck joint magnetic resonance image to be reconstructed through a pre-established complex convolutional neural network model, thereby obtaining an artifact-free high-resolution head-neck joint image. Because the image to be reconstructed is only an undersampled image and the complex convolutional neural network has a strong image reconstruction capability, a high-precision, artifact-free, high-resolution head-neck joint image can still be obtained. The above solution solves the problem that existing joint head and neck imaging cannot guarantee imaging accuracy and imaging time at the same time, and achieves the technical effect of effectively shortening the imaging time while ensuring imaging accuracy.
The embodiments in this specification are described in a progressive manner; for the same or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, for the hardware-plus-program embodiments, since they are basically similar to the method embodiments, the description is relatively simple, and reference may be made to the relevant parts of the description of the method embodiments.

Specific embodiments of this specification have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.

Although this application provides the method operation steps as described in the embodiments or flowcharts, more or fewer operation steps may be included based on routine or non-creative work. The order of steps listed in the embodiments is only one of many possible execution orders and does not represent the only one. When an actual device or client product is executed, the steps may be executed sequentially or in parallel according to the methods shown in the embodiments or drawings (for example, in a parallel-processor or multi-threaded processing environment).

The systems, devices, modules or units described in the above embodiments may be specifically implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, an in-vehicle human-machine interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.

Although the embodiments of this specification provide the method operation steps as described in the embodiments or flowcharts, more or fewer operation steps may be included based on routine or non-creative means. The order of steps listed in the embodiments is only one of many possible execution orders and does not represent the only one. When an actual device or terminal product is executed, the steps may be executed sequentially or in parallel according to the methods shown in the embodiments or drawings (for example, in a parallel-processor or multi-threaded processing environment, or even a distributed data processing environment). The terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, product or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, product or device. Without further limitation, the presence of other identical or equivalent elements in the process, method, product or device including the stated element is not excluded.

For convenience of description, the above devices are described with the functions divided into various modules. Of course, when implementing the embodiments of this specification, the functions of the modules may be implemented in one or more pieces of software and/or hardware, or the modules implementing the same function may be implemented by a combination of multiple sub-modules or sub-units. The device embodiments described above are only schematic; for example, the division of the units is only a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.

Those skilled in the art also know that, in addition to implementing the controller in the form of purely computer-readable program code, it is entirely possible to logically program the method steps so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, such a controller can be regarded as a hardware component, and the devices included in it for implementing various functions can also be regarded as structures within the hardware component. Or even, the devices for implementing various functions can be regarded both as software modules implementing the method and as structures within the hardware component.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

In a typical configuration, the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces and memory.

The memory may include non-permanent memory, random access memory (RAM) and/or non-volatile memory among computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.

Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible to a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.

Those skilled in the art should understand that the embodiments of this specification may be provided as a method, a system or a computer program product. Therefore, the embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.

The embodiments of this specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures and the like that perform specific tasks or implement specific abstract data types. The embodiments of this specification may also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in local and remote computer storage media including storage devices.

The embodiments in this specification are described in a progressive manner; for the same or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, for the system embodiments, since they are basically similar to the method embodiments, the description is relatively simple, and reference may be made to the relevant parts of the description of the method embodiments. In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" mean that the specific features, structures, materials or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of this specification. In this specification, the schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art can combine the different embodiments or examples described in this specification and the features of the different embodiments or examples, provided that they do not contradict each other.

The above descriptions are only embodiments of this specification and are not intended to limit the embodiments of this specification. Various modifications and changes may be made to the embodiments of this specification by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the embodiments of this specification shall be included within the scope of the claims of the embodiments of this specification.
Claims (14)
- A head and neck joint imaging method based on deep prior learning, characterized in that the method comprises: acquiring a head-neck joint magnetic resonance image to be reconstructed; inputting the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model, wherein complex residual blocks are provided in the complex convolutional neural network model; and reconstructing the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.
- The method according to claim 1, characterized in that the complex convolutional neural network model comprises, in order: a first complex convolution layer, a plurality of complex residual blocks, and a second complex convolution layer, wherein each complex residual block includes two complex convolution layers.
- The method according to claim 2, characterized in that the complex convolution operation in a complex convolution layer is expressed as: w*c = (c_real + i·c_imgi)*(w_real + i·w_imgi) = (w_real*c_real - w_imgi*c_imgi) + i·(w_real*c_real + w_imgi*c_imgi), where w denotes the input complex image, c denotes the complex convolution kernel, c_real denotes the real part of the input complex image, c_imgi denotes the imaginary part of the input complex image, w_real denotes the real part of the complex convolution kernel, and w_imgi denotes the imaginary part of the complex convolution kernel.
- The method according to claim 1, characterized in that the complex convolutional neural network model is established as follows: acquiring fully sampled sample images, wherein the fully sampled sample images are head-neck joint magnetic resonance images obtained from a magnetic resonance apparatus; performing undersampling on the fully sampled sample images to obtain undersampled sample images; and training a pre-established complex convolutional neural network with the undersampled sample images as training samples and the fully sampled sample images as labels, to obtain the complex convolutional neural network model.
- The method according to claim 4, characterized in that training the pre-established complex convolutional neural network with the undersampled sample images as training samples and the fully sampled sample images as labels comprises: training the pre-established complex convolutional neural network with the following function as the objective function:
- The method according to any one of claims 1 to 5, characterized in that the head-neck joint magnetic resonance image to be reconstructed is an undersampled image containing artifacts.
- A head and neck joint imaging device based on deep prior learning, characterized by comprising: an acquisition module, used to acquire a head-neck joint magnetic resonance image to be reconstructed; an input module, used to input the head-neck joint magnetic resonance image to be reconstructed into a pre-established complex convolutional neural network model, wherein complex residual blocks are provided in the complex convolutional neural network model; and a reconstruction module, used to reconstruct the head-neck joint magnetic resonance image to be reconstructed through the complex convolutional neural network model to obtain an artifact-free high-resolution head-neck joint image.
- The device according to claim 7, characterized in that the complex convolutional neural network model comprises, in order: a first complex convolution layer, a plurality of complex residual blocks, and a second complex convolution layer, wherein each complex residual block includes two complex convolution layers.
- The device according to claim 8, characterized in that the complex convolution operation in a complex convolution layer is expressed as: w*c = (c_real + i·c_imgi)*(w_real + i·w_imgi) = (w_real*c_real - w_imgi*c_imgi) + i·(w_real*c_real + w_imgi*c_imgi), where w denotes the input complex image, c denotes the complex convolution kernel, c_real denotes the real part of the input complex image, c_imgi denotes the imaginary part of the input complex image, w_real denotes the real part of the complex convolution kernel, and w_imgi denotes the imaginary part of the complex convolution kernel.
- The device according to claim 7, characterized in that the complex convolutional neural network model is established as follows: acquiring fully sampled sample images, wherein the fully sampled sample images are head-neck joint magnetic resonance images obtained from a magnetic resonance apparatus; performing undersampling on the fully sampled sample images to obtain undersampled sample images; and training a pre-established complex convolutional neural network with the undersampled sample images as training samples and the fully sampled sample images as labels, to obtain the complex convolutional neural network model.
- The device according to claim 10, characterized in that training the pre-established complex convolutional neural network with the undersampled sample images as training samples and the fully sampled sample images as labels comprises: training the pre-established complex convolutional neural network with the following function as the objective function:
- The device according to any one of claims 7 to 11, characterized in that the head-neck joint magnetic resonance image to be reconstructed is an undersampled image containing artifacts.
- A terminal device, comprising a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements the steps of the method according to any one of claims 1 to 6.
- A computer-readable storage medium having computer instructions stored thereon, wherein the instructions, when executed, implement the steps of the method according to any one of claims 1 to 6.
Priority Application (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
PCT/CN2018/120882 | 2018-12-13 | 2018-12-13 | 一种基于深度先验学习的头颈联合成像方法和装置 (Head and neck joint imaging method and device based on deep prior learning)

Publication (1)

Publication Number | Publication Date
---|---
WO2020118616A1 | 2020-06-18

Family ID: 71075271

Country Status (1)

Country | Document
---|---
WO | WO2020118616A1 (zh)
Patent Citations (4)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN107182216A | 2015-12-30 | 2017-09-19 | 中国科学院深圳先进技术研究院 | 一种基于深度卷积神经网络的快速磁共振成像方法及装置 (Fast magnetic resonance imaging method and device based on a deep convolutional neural network)
CN106250812A | 2016-07-15 | 2016-12-21 | 汤平 | 一种基于快速R-CNN深度神经网络的车型识别方法 (Vehicle type recognition method based on a Fast R-CNN deep neural network)
CN108010100A | 2017-12-07 | 2018-05-08 | 厦门大学 | 一种基于残差网络的单扫描磁共振定量T2成像重建方法 (Single-scan quantitative T2 magnetic resonance imaging reconstruction method based on a residual network)
CN108828481A | 2018-04-24 | 2018-11-16 | 朱高杰 | 一种基于深度学习和数据一致性的磁共振重建方法 (Magnetic resonance reconstruction method based on deep learning and data consistency)

Cited By (1)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN112082915A (granted as CN112082915B, 2024-05-03) | 2020-08-28 | 2020-12-15 | 西安科技大学 | 一种即插即用型大气颗粒物浓度检测装置及检测方法 (Plug-and-play atmospheric particulate matter concentration detection device and detection method)
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | EP designated | The EPO has been informed by WIPO that EP was designated in this application (Ref document number: 18942846; Country: EP; Kind code: A1)
| NENP | Non-entry into the national phase | Ref country code: DE
| 32PN | Public notification in the EP bulletin | Noting of loss of rights pursuant to Rule 112(1) EPC (EPO Form 1205A dated 03.11.2021), as the address of the addressee cannot be established
| 122 | Non-entry into the European phase | PCT application non-entry into the European phase (Ref document number: 18942846; Country: EP; Kind code: A1)