WO2021184195A1 - Medical image reconstruction method, and medical image reconstruction network training method and device - Google Patents

Medical image reconstruction method, and medical image reconstruction network training method and device

Info

Publication number
WO2021184195A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
network
vector
image reconstruction
real
Prior art date
Application number
PCT/CN2020/079678
Other languages
English (en)
French (fr)
Inventor
胡圣烨
王书强
陈卓
申妍燕
张炽堂
Original Assignee
中国科学院深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院 (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
Priority to US 17/791,099 (published as US20230032472A1)
Priority to PCT/CN2020/079678 (published as WO2021184195A1)
Publication of WO2021184195A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • G06T11/005Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • G06T11/008Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/776Validation; Performance evaluation
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01RMEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R33/00Arrangements or instruments for measuring magnetic variables
    • G01R33/20Arrangements or instruments for measuring magnetic variables involving magnetic resonance
    • G01R33/44Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
    • G01R33/48NMR imaging systems
    • G01R33/4806Functional imaging of brain activation
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01RMEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R33/00Arrangements or instruments for measuring magnetic variables
    • G01R33/20Arrangements or instruments for measuring magnetic variables involving magnetic resonance
    • G01R33/44Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
    • G01R33/48NMR imaging systems
    • G01R33/54Signal processing systems, e.g. using pulse sequences ; Generation or control of pulse sequences; Operator console
    • G01R33/56Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
    • G01R33/5608Data processing and visualization specially adapted for MR, e.g. for feature analysis and pattern recognition on the basis of measured MR data, segmentation of measured MR data, edge contour detection on the basis of measured MR data, for enhancing measured MR data in terms of signal-to-noise ratio by means of noise filtering or apodization, for enhancing measured MR data in terms of resolution by means for deblurring, windowing, zero filling, or generation of gray-scaled images, colour-coded images or images displaying vectors instead of pixels
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/41Medical
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Definitions

  • the embodiments of the present application belong to the field of image processing technology, and in particular relate to a medical image reconstruction method, a medical image reconstruction network training method and device.
  • Functional magnetic resonance imaging (fMRI) is an emerging neuroimaging method. Its principle is to use magnetic resonance imaging to measure the hemodynamic changes caused by neuronal activity. As a non-invasive technique, it can accurately localize specific active cortical regions of the brain and capture the blood-oxygen changes that reflect neuronal activity.
  • in certain application scenarios, the number of images that can be acquired is often limited, which greatly limits the application of artificial intelligence methods that rely on large amounts of data, such as deep learning, in the field of medical image analysis.
  • a promising solution is to use existing artificial intelligence methods with limited real image samples to learn to reconstruct the corresponding medical images from Gaussian hidden layer vectors, thereby enlarging the sample size and supporting subsequent image analysis tasks.
  • the generative adversarial network is currently one of the best-performing generative models; it has gradually become a research hotspot of deep learning and has begun to be applied in the field of medical imaging.
  • the traditional generative adversarial network can generate diverse new images by learning the distribution of real data, but the network is difficult to train and does not easily reach optimal convergence.
  • One aspect of the embodiments of the present application provides a medical image reconstruction network training method, which includes:
  • an image reconstruction network performing image reconstruction based on the feature code vector to obtain a first image, and performing image reconstruction based on the first hidden layer vector of the real image sample to obtain a second image;
  • the performing feature encoding extraction on a real image sample to obtain a feature encoding vector of the real image sample includes:
  • the feature extraction of the real image sample based on the image coding network to obtain the feature coding vector of the real image sample includes:
  • the extracted features are processed through a linear function to obtain the feature encoding vector of the real image sample.
  • the linear function is a piecewise linear function.
  • the piecewise linear function is a ReLU function.
  • the method further includes:
  • the image coding network is optimized based on the vector discrimination result.
  • the optimizing the image coding network based on the vector discrimination result includes:
  • the performing confrontation training on the image coding network based on the vector discrimination result includes:
  • the first loss function is:
  • L C is the first loss function
  • z e is the feature coding vector
  • z r is the first hidden layer vector
  • C represents the image coding network
  • E is the mathematical expectation.
  • the optimizing the image reconstruction network according to the image discrimination result includes:
  • the performing confrontation training on the image reconstruction network according to the image discrimination result includes:
  • according to the image discrimination result, the structural similarity measurement loss function and the perceptual measurement loss function, the second loss function of the image reconstruction network is determined, and the network parameters of the image reconstruction network are updated by the gradient descent method to train the image reconstruction network;
  • the second loss function is:
  • L G is the second loss function
  • z e is the feature coding vector
  • z r is the first hidden layer vector
  • C represents the image coding network
  • D is the image discrimination network
  • G represents the image reconstruction network
  • E is the mathematical expectation
  • L SSIM is the structural similarity measurement loss function
  • L perceptual is the perceptual measurement loss function
  • X real represents the real image
  • ⁇ 1 and ⁇ 2 are the weight coefficients
  • Φ is the Gram matrix
  • L D is the loss function of the image discrimination network.
  • the performing image reconstruction based on the feature coding vector to obtain the first image through the image generation network, and performing image reconstruction based on the first hidden layer vector of the real image sample to obtain the second image includes:
  • the feature encoding vector and the first hidden layer vector are input to the image reconstruction network to obtain the first image and the second image; wherein the convolutional layers of the image generation network are nearest-neighbor-upsampled three-dimensional separable convolutional layers.
  • a second aspect of the embodiments of the present application provides a medical image reconstruction method, which includes:
  • image reconstruction is performed on the image to be reconstructed based on the second hidden layer vector.
  • the third aspect of the embodiments of the present application provides a medical image reconstruction network training device, which includes:
  • the feature encoding extraction module is used to perform feature encoding extraction on real image samples to obtain feature encoding vectors of the real image samples;
  • the first image reconstruction module is configured to perform image reconstruction based on the feature code vector through an image reconstruction network to obtain a first image, and perform image reconstruction based on the first hidden layer vector of the real image sample to obtain a second image;
  • the first optimization module is configured to perform image discrimination on the real image sample, the first image and the second image through an image discrimination network, and optimize the image generation network according to the result of the image discrimination.
  • a fourth aspect of the embodiments of the present application provides a medical image reconstruction device, which includes:
  • Hidden layer vector acquisition module for acquiring the second hidden layer vector of the image to be reconstructed
  • the second image reconstruction module is configured to perform image reconstruction on the image to be reconstructed based on the second hidden layer vector through the trained image reconstruction network.
  • a fifth aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and computer-readable instructions stored in the memory and running on the processor, and the processor executes the computer-readable instructions. The following steps are implemented when ordering:
  • an image reconstruction network performing image reconstruction based on the feature code vector to obtain a first image, and performing image reconstruction based on the first hidden layer vector of the real image sample to obtain a second image;
  • the performing feature encoding extraction on a real image sample to obtain a feature encoding vector of the real image sample includes:
  • the processor further implements the following steps when executing the computer-readable instructions:
  • the image coding network is optimized based on the vector discrimination result.
  • a sixth aspect of the embodiments of the present application provides a terminal device.
  • the terminal device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; when the processor executes the computer-readable instructions, the following steps are implemented:
  • image reconstruction is performed on the image to be reconstructed based on the second hidden layer vector.
  • a seventh aspect of the embodiments of the present application provides a computer storage medium, the computer-readable storage medium stores computer-readable instructions, wherein the computer-readable instructions are executed by a processor to implement the following steps:
  • an image reconstruction network performing image reconstruction based on the feature code vector to obtain a first image, and performing image reconstruction based on the first hidden layer vector of the real image sample to obtain a second image;
  • the eighth aspect of the embodiments of the present application provides a computer storage medium, the computer-readable storage medium stores computer-readable instructions, wherein the computer-readable instructions are executed by a processor to implement the following steps:
  • image reconstruction is performed on the image to be reconstructed based on the second hidden layer vector.
  • the ninth aspect of the embodiments of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to execute any one of the methods of the first aspect above, or causes the terminal device to execute any one of the methods of the second aspect above.
  • feature encoding extraction is performed on real image samples to obtain the feature encoding vectors of the real image samples; through the image reconstruction network, image reconstruction is performed based on the feature encoding vector to obtain the first image, and image reconstruction is performed based on the hidden layer vector of the real image sample to obtain the second image; at the same time, the real image sample, the first image and the second image are discriminated through the image discrimination network, and the image reconstruction network is optimized according to the image discrimination result. The optimized image reconstruction network is used for image reconstruction, and prior knowledge guidance from real images is introduced for the generative adversarial network, which stabilizes the training of the image reconstruction network and makes it easier to reach optimal convergence, thereby solving the problem that generative adversarial networks are difficult to train.
  • Fig. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a medical image reconstruction network training method provided by an embodiment of the present application
  • FIG. 3 is a schematic flowchart of a medical image reconstruction network training method provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a medical image reconstruction network training method provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a medical image reconstruction method provided by an embodiment of the present application.
  • Fig. 6 is a schematic diagram of a medical image reconstruction process provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a medical image reconstruction network training device provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a medical image reconstruction device provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • the term "if" can be construed, depending on the context, as "when", "once", "in response to determining" or "in response to detecting".
  • similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" can be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
  • Functional magnetic resonance imaging (fMRI) is an emerging neuroimaging method. Its principle is to use magnetic resonance imaging to measure the hemodynamic changes caused by neuronal activity. As a non-invasive technique, it can accurately localize specific active cortical regions of the brain and capture the blood-oxygen changes that reflect neuronal activity.
  • however, fMRI image acquisition is costly, scanning times are long, and some patients cannot be scanned (for example, those with metal objects in the body), so in certain application scenarios the number of images that can be acquired is often limited, which greatly limits the application of artificial intelligence methods that rely on large amounts of data, such as deep learning, in the field of medical image analysis.
  • a promising solution is to use existing artificial intelligence methods with limited real image samples to learn to reconstruct the corresponding medical images from Gaussian hidden layer vectors, thereby enlarging the sample size and supporting subsequent image analysis tasks.
  • the generative adversarial network is currently a generative model with strong performance. It was first proposed by Ian Goodfellow et al. in 2014, and can capture the latent distribution of real data through the generator, so as to generate samples of the real data distribution from the hidden layer space. Since then, generative adversarial networks have gradually become a research hotspot of deep learning and have begun to be applied in various fields.
  • the generative adversarial network can generate diverse new images by learning the real data distribution, but its biggest problem is that the network is difficult to train and does not easily reach optimal convergence.
  • the purpose of the generative adversarial network is to make the data distribution fitted by the generator approach the real data distribution.
  • the inventors of the present application found in their research that a generation network without any prior knowledge introduced knows nothing about the real data distribution, and can only probe again and again based on the true/false feedback of the discriminator.
  • the variational autoencoder, another powerful generative model, does not have this problem: it can first extract the encoding feature vector of the real image, perform variational inference through resampling, and then decode and generate from the hidden vector according to the variational result.
  • the embodiment of the present application introduces the encoding feature vector of the variational autoencoder into the training of the generative adversarial network as feature prior knowledge about the real image, giving the generation network a relatively clear optimization direction and thereby solving the problems that its training is difficult, time-consuming and prone to collapse.
  • moreover, simply piecing a variational autoencoder and a generative adversarial network together does not work, because there is an optimization conflict between the objective functions of variational inference and of the generative adversarial network; this application therefore further introduces a separate encoding discriminator, so that the optimization process of the variational autoencoder is also incorporated into the "generation-adversarial" framework, resolving the optimization conflict between variational inference and the objective function of the generative adversarial network.
  • the embodiments of the present application can be applied to the exemplary scenario shown in FIG. 1.
  • the terminal 10 and the server 20 constitute application scenarios of the above-mentioned medical image reconstruction network training method and medical image reconstruction method.
  • the terminal 10 is used to obtain a real image sample of the subject and send the real image sample to the server 20;
  • the server 20 is used to perform feature encoding extraction on the real image sample to obtain its feature encoding vector; through the image reconstruction network, perform image reconstruction based on the feature encoding vector to obtain a first image, and perform image reconstruction based on the hidden layer vector of the real image sample to obtain a second image; discriminate the real image sample, the first image and the second image through the image discrimination network, and optimize the image reconstruction network according to the image discrimination result. The optimized image generation network is used for image reconstruction, and prior knowledge guidance from real images is introduced for the generative adversarial network, which stabilizes the training of the image reconstruction network, makes it easier to reach optimal convergence, and thereby solves the problem that generative adversarial networks are difficult to train.
  • FIG. 2 is a schematic flowchart of a medical image reconstruction network training method provided by an embodiment of the present application. Referring to FIG. 2, the medical image reconstruction network training method is detailed as follows:
  • step 101 feature encoding extraction is performed on a real image sample to obtain a feature encoding vector of the real image sample.
  • step 101 feature extraction may be performed on the above-mentioned real image sample through an image coding network to obtain the feature coding vector of the above-mentioned real image sample.
  • the feature extraction of the real image sample through the image coding network to obtain the feature encoding vector of the real image sample may specifically include:
  • step 1011 hierarchical feature extraction is performed on the real image sample through the multiple three-dimensional convolutional layers of the image coding network.
  • step 1012 the extracted features are processed through a linear function to obtain the feature encoding vector of the real image sample.
  • in one example scenario, a real image sample can be unfolded into a time series of three-dimensional volumes, which are input to the image coding network in turn; hierarchical feature extraction is performed on the three-dimensional volumes using the multiple three-dimensional convolutional layers of the image coding network, and the linear and non-linear features of the three-dimensional volumes are combined through a linear function to obtain the feature encoding representation vector of the real image sample.
  • the above-mentioned linear function is a piecewise linear function. Specifically, the linear and non-linear features of the three-dimensional image are processed by the piecewise linear function to obtain the feature encoding representation vector of the real image sample.
  • the piecewise linear function may be a ReLU function. Specifically, the linear feature and the non-linear feature of the three-dimensional image are processed through the ReLU function to obtain the feature encoding representation vector of the real image sample.
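  • A minimal sketch of such an image coding network is given below, assuming a PyTorch implementation; the number of layers, channel counts and latent dimension are illustrative assumptions, not values taken from the patent.

```python
# Hypothetical sketch of the image coding network: stacked 3D convolutional layers
# for hierarchical feature extraction, ReLU as the piecewise linear activation, and
# a final linear layer producing the feature encoding vector z_e.
import torch
import torch.nn as nn

class ImageCodingNetwork(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1),   # hierarchical feature extraction
            nn.ReLU(inplace=True),                                    # piecewise linear function
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_vector = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),    # collapse the spatial dimensions
            nn.Flatten(),
            nn.Linear(64, latent_dim),  # feature encoding vector z_e
        )

    def forward(self, x):               # x: (batch, 1, D, H, W) single 3D volume
        return self.to_vector(self.features(x))
```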
  • step 102 through an image reconstruction network, image reconstruction is performed based on the feature code vector to obtain a first image, and image reconstruction is performed based on the first hidden layer vector of the real image sample to obtain a second image.
  • the feature encoding vector and the first hidden layer vector may be input to the image reconstruction network to obtain the first image and the second image; in this embodiment of the application, the convolutional layers of the image generation network are three-dimensional separable convolutional layers with nearest-neighbor upsampling.
  • the feature encoding vector extracted from the real image sample and the first hidden layer vector sampled from the Gaussian distribution may both be used as inputs to the image reconstruction network, and the first image and the second image are reconstructed step by step from the feature encoding vector and the first hidden layer vector, respectively.
  • a three-dimensional separable convolutional layer with neighbor upsampling is used to replace the deconvolutional layer in the traditional image reconstruction network, which can reduce the number of learnable parameters and improve the quality of the generated fMRI image, so that the reconstructed image has fewer artifacts and a clearer structure; a sketch of one such block follows.
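  • The block below is a hedged sketch of one such decoder stage, assuming PyTorch; a depthwise 3D convolution followed by a pointwise convolution stands in for the "three-dimensional separable convolution", and channel counts are illustrative assumptions.

```python
# Hypothetical decoder block: nearest-neighbor upsampling followed by a 3D
# depthwise-separable convolution, used in place of a transposed-convolution
# (deconvolution) layer to reduce learnable parameters.
import torch.nn as nn

class UpsampleSeparableConv3d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),           # nearest-neighbor upsampling
            nn.Conv3d(in_ch, in_ch, kernel_size=3, padding=1,
                      groups=in_ch),                                # depthwise 3D convolution
            nn.Conv3d(in_ch, out_ch, kernel_size=1),                # pointwise convolution
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)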
  • step 103 image discrimination is performed on the real image sample, the first image, and the second image through an image discrimination network, and the image reconstruction network is optimized according to the result of the image discrimination.
  • the real image sample, the first image and the second image can all be used as inputs to the image discrimination network, the image reconstruction network can be optimized according to the discrimination results of the image discrimination network, and "generation-adversarial" training can thereby be constructed; the optimized image reconstruction network is then used as the image reconstruction network in step 102 to reconstruct the next first and second images, after which step 103 is executed again, and so on in turn (a minimal sketch of this alternating loop follows).
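  • The sketch below shows the assumed structure of this alternating optimization in PyTorch; the encoder, networks G and D, the loss helpers and the optimizers are placeholders supplied by the caller, and the update schedule (one discriminator step per generator step) is an assumption rather than a detail stated in the patent.

```python
# Minimal sketch of one alternating update over steps 101-103 (assumed structure).
import torch

def training_step(encoder, G, D, loss_D, loss_G, opt_D, opt_G, x_real):
    z_e = encoder(x_real)                          # feature encoding vector (step 101)
    z_r = torch.randn_like(z_e)                    # first hidden-layer (Gaussian) vector
    x_rec, x_rand = G(z_e), G(z_r)                 # first and second images (step 102)

    # Step 103: image discrimination, then update the image discrimination network.
    d_loss = loss_D(D, x_real, x_rec.detach(), x_rand.detach())
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Optimize the image reconstruction network against the updated discriminator.
    g_loss = loss_G(D, x_real, G(z_e), G(z_r))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```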
  • in the above medical image reconstruction network training method, feature encoding extraction is performed on real image samples to obtain the feature encoding vectors of the real image samples; image reconstruction is performed based on the feature encoding vector to obtain the first image, and image reconstruction is performed based on the hidden layer vector of the real image sample to obtain the second image; at the same time, the real image sample, the first image and the second image are discriminated through the image discrimination network, and the image reconstruction network is optimized according to the image discrimination result. The optimized image reconstruction network is used for image reconstruction, and prior knowledge guidance from real images is introduced for the generative adversarial network, which stabilizes the training of the image reconstruction network and makes it easier to reach optimal convergence, thereby solving the problem that generative adversarial networks are difficult to train.
  • FIG. 4 is a schematic flowchart of a medical image reconstruction network training method provided by an embodiment of this application.
  • the medical image reconstruction network training method may further include:
  • step 104 vector discrimination is performed on the feature coding vector and the first hidden layer vector through the coding feature discrimination network.
  • step 105 the image coding network is optimized based on the vector discrimination result.
  • the feature encoding vector and the first hidden layer vector of the real image sample can be used to optimize the image coding network through steps 104 and 105, and the optimized image coding network can then be used as the image coding network in step 101 to perform step 101 again; this loop is repeated to optimize the image coding network.
  • the image coding network may be subjected to confrontation training based on the vector discrimination result, so as to optimize the above-mentioned image coding network.
  • a coding feature discrimination network with the same structure as the image discrimination network can be constructed, and the feature coding vector obtained from real image samples and the first hidden layer vector sampled from the Gaussian distribution are used as the input of the coding feature discrimination network.
  • the coding feature discrimination network and the image coding network thus also form a "generation-adversarial" training relationship that replaces variational inference, resolving the training conflict between variational inference and the generative adversarial objective function (a sketch of such a vector discriminator follows).
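  • The following is a hedged sketch of such a coding feature discrimination network as a small multilayer perceptron over latent vectors, assuming PyTorch; the hidden sizes and the unbounded critic output (consistent with the Wasserstein distance mentioned below) are assumptions.

```python
# Hypothetical coding feature discrimination network: scores either the feature
# encoding vector z_e or a Gaussian-sampled vector z_r, so that the image coding
# network and this discriminator form a "generation-adversarial" pair that
# replaces variational inference.
import torch.nn as nn

class CodeDiscriminator(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),    # unbounded critic score (no sigmoid)
        )

    def forward(self, z):
        return self.net(z)
```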
  • the above-mentioned adversarial training of the image coding network based on the vector discrimination result may specifically include: calculating the voxel-by-voxel difference between the second image and the real image sample, and updating the network parameters of the image coding network by the gradient descent method until the voxel-by-voxel difference is less than or equal to a preset threshold, thereby training the image coding network, wherein the voxel-by-voxel difference serves as the first loss function of the image coding network.
  • the coding feature discrimination network is introduced to replace the original variational reasoning process.
  • in the training process of the image coding network, the voxel-by-voxel difference between the reconstructed fMRI image and the real fMRI image is first calculated, and the network parameters of the image coding network are then updated by the gradient descent method until the voxel-by-voxel difference is less than or equal to the preset threshold; in addition, the Wasserstein distance is chosen in the first loss function as the measure between the real image distribution and the reconstructed image distribution, and a gradient penalty term is introduced to clip the discriminator network gradient and further stabilize network training.
  • the first loss function may be:
  • L C is the first loss function
  • z e is the feature coding vector
  • z r is the first hidden layer vector
  • C represents the image coding network
  • E is the mathematical expectation.
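  • The exact expression for L C is published only as an embedded formula image, so the sketch below merely illustrates the ingredients named in the text, assuming PyTorch: a voxel-by-voxel difference term and a WGAN-style gradient penalty over the latent vectors; the choice of an L1 voxel difference and the standard interpolation-based penalty form are assumptions.

```python
# Hedged sketch of the ingredients of the first loss: a voxel-by-voxel difference
# between a reconstructed volume and the real volume, plus a gradient-penalty term
# used to regularize the (code) discriminator's gradient.
import torch

def voxelwise_difference(x_rec, x_real):
    # mean absolute per-voxel difference; the patent does not specify L1 vs L2
    return (x_rec - x_real).abs().mean()

def gradient_penalty(critic, z_real, z_fake):
    # WGAN-GP style penalty on interpolated latent vectors (assumed form)
    alpha = torch.rand(z_real.size(0), 1, device=z_real.device)
    z_hat = (alpha * z_real + (1 - alpha) * z_fake).requires_grad_(True)
    score = critic(z_hat).sum()
    grads = torch.autograd.grad(score, z_hat, create_graph=True)[0]
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()
```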
  • the optimization of the image reconstruction network according to the image discrimination result in step 103 may specifically be: performing confrontation training on the image reconstruction network according to the image discrimination result.
  • performing adversarial training on the image reconstruction network according to the image discrimination result may include: determining the second loss function of the image reconstruction network according to the image discrimination result, the structural similarity measurement loss function and the perceptual measurement loss function, and updating the network parameters of the image reconstruction network by the gradient descent method to train the image reconstruction network.
  • specifically, when the image reconstruction network is adversarially trained according to the image discrimination result: if the discrimination result of the image discrimination network is closer to the real image, the image reconstruction network only needs to update its network parameters by a first preset amplitude using the gradient descent method, or not update them at all; if the discrimination result of the image discrimination network is closer to the reconstructed image, the image reconstruction network needs to update its network parameters by a second preset amplitude, where the second preset amplitude is greater than the first preset amplitude.
  • the structural similarity measurement loss and the perceptual measurement loss are also introduced to ensure that the characteristics of the reconstructed image are more in line with the real image.
  • the second loss function may be:
  • L G is the second loss function
  • z e is the feature coding vector
  • z r is the first hidden layer vector
  • C represents the image coding network
  • D is the image discrimination network
  • G represents the image reconstruction network
  • E is the mathematical expectation
  • L SSIM is the structural similarity measurement loss function
  • L perceptual is the perceptual measurement loss function
  • X real represents the real image
  • ⁇ 1 and ⁇ 2 are the weight coefficients
  • Φ is the Gram matrix
  • L D is the loss function of the image discrimination network.
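  • The exact expression for L G is likewise published only as a formula image; the sketch below, assuming PyTorch, simply combines the named ingredients: an adversarial term from the image discrimination network D, a structural similarity term, and a Gram-matrix perceptual term weighted by λ 1 and λ 2. The ssim helper, feature_extractor and the default weights are assumed placeholders.

```python
# Hedged sketch of the second loss: adversarial term + SSIM term + perceptual
# (Gram-matrix) term, following the ingredient list above.
import torch

def gram_matrix(feat):
    # feat: (batch, channels, D, H, W) -> per-sample channel correlation matrix
    b, c = feat.shape[:2]
    f = feat.reshape(b, c, -1)
    return torch.bmm(f, f.transpose(1, 2)) / f.shape[-1]

def reconstruction_loss(D, x_rec, x_real, feature_extractor, ssim, lam1=1.0, lam2=1.0):
    adv = -D(x_rec).mean()                                    # fool the image discriminator
    l_ssim = 1.0 - ssim(x_rec, x_real)                        # structural similarity loss
    g_rec = gram_matrix(feature_extractor(x_rec))
    g_real = gram_matrix(feature_extractor(x_real))
    l_perc = (g_rec - g_real).pow(2).mean()                   # perceptual (Gram) loss
    return adv + lam1 * l_ssim + lam2 * l_perc
```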
  • the image overlap ratio (SOR) technical indicator can be used to evaluate the closeness of the reconstructed image reconstructed by the image reconstruction network to the real image.
  • the trained image reconstruction network can reconstruct high-quality medical image samples from the Gaussian hidden layer vector, which can enhance the image sample size and facilitate subsequent analysis.
  • FIG. 5 is a schematic flowchart of a medical image reconstruction method provided by an embodiment of the present application. Referring to FIG. 5, the medical image reconstruction method is described in detail as follows:
  • step 201 the second hidden layer vector of the image to be reconstructed is obtained.
  • step 202 image reconstruction is performed on the image to be reconstructed based on the second hidden layer vector through the trained image reconstruction network.
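  • A minimal sketch of this reconstruction stage is given below, assuming PyTorch and reusing the trained network G and latent dimension from the training sketches above; sampling the second hidden layer vector from a standard Gaussian follows the Gaussian hidden-layer description elsewhere in the text and is otherwise an assumption.

```python
# Minimal sketch of steps 201-202: obtain a second hidden-layer vector and pass it
# through the trained image reconstruction network G to obtain a new image sample.
import torch

latent_dim = 128                          # assumed to match the training sketches
with torch.no_grad():
    z2 = torch.randn(1, latent_dim)       # second hidden-layer vector (Gaussian sample)
    x_new = G(z2)                         # reconstructed medical image volume
```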
  • in the medical image reconstruction method described above, feature encoding extraction is performed on real image samples to obtain the feature encoding vector of the real image sample; image reconstruction is performed based on the feature encoding vector to obtain the first image, and image reconstruction is performed based on the hidden layer vector of the real image sample to obtain the second image; the real image sample, the first image and the second image are discriminated through the image discrimination network, and the image reconstruction network is trained and optimized according to the image discrimination result. The trained and optimized image reconstruction network then reconstructs the image to be reconstructed based on the second hidden layer vector. Prior knowledge guidance from real images is thereby introduced for the generative adversarial network, which stabilizes the training of the image reconstruction network and makes it easier to reach optimal convergence, solving the problem that generative adversarial networks are difficult to train and yielding reconstructed images that are closer to the real image.
  • the process of medical image reconstruction may include the following steps:
  • step 301 feature extraction is performed on the real image sample based on the image coding network to obtain the feature coding vector of the real image sample.
  • step 302 through the image reconstruction network, the first image is obtained by image reconstruction based on the feature code vector, and the second image is obtained by image reconstruction based on the first hidden layer vector of the real image sample.
  • step 303 the real image sample, the first image, and the second image are discriminated by the image discriminating network, and the image reconstruction network is trained and optimized according to the image discriminating result.
  • the image reconstruction network after training and optimization is used as the image reconstruction network in step 302 to perform the next image reconstruction.
  • step 304 the feature encoding vector in step 301 and the first hidden layer vector of the real image sample are subjected to vector discrimination through the encoding feature discrimination network.
  • step 305 based on the vector discrimination result, the image coding network is optimized, and the optimized image coding network is used as the image coding network in step 301 to perform feature extraction on the next real image sample.
  • step 306 after the image reconstruction network training and optimization are completed through the real image samples, the second hidden layer vector of the image to be reconstructed is acquired.
  • step 307 image reconstruction is performed on the image to be reconstructed based on the second hidden layer vector through the trained image reconstruction network.
  • in one example, the real fMRI image x real of a rat brain region is unfolded into a time series of three-dimensional volumes and input to the image coding network in turn; hierarchical feature extraction is performed on the three-dimensional volumes using the multiple three-dimensional convolutional layers of the image coding network, the linear and non-linear features are combined through the ReLU function, and the feature encoding vector z e of the real fMRI image is output.
  • the feature encoding vector z e extracted from the real fMRI image and the hidden layer vector z r sampled from the Gaussian distribution are both used as inputs to the image reconstruction network, which reconstructs the fMRI images x rec and x rand step by step from z e and z r respectively.
  • the convolutional layers of the image reconstruction network are three-dimensional separable convolutional layers with nearest-neighbor upsampling.
  • using a three-dimensional separable convolution operation with nearest-neighbor upsampling instead of the traditional deconvolution layer can reduce the number of learnable parameters and improve the quality of the reconstructed fMRI image, so that the reconstructed image has fewer artifacts and the brain structure is clearer.
  • the real fMRI image x real , image x rec and image x rand are all used as the input of the image discrimination network, and the image reconstructor is optimized according to the discrimination result of the image discrimination network to construct "generation-adversarial" training .
  • a coding feature discrimination network with the same structure as the image discrimination network is constructed, and the feature representation vector z e encoded from the real fMRI image x real and the hidden layer vector z r sampled from the Gaussian distribution are used as its inputs, so that the coding feature discrimination network and the image coding network also form a "generation-adversarial" training relationship that replaces variational inference and resolves the training conflict between variational inference and the generative adversarial objective function.
  • the fourth step is to select the optimal loss function to train and optimize the network.
  • this embodiment introduces the coding feature discrimination network to replace the traditional variational inference process, so that only the voxel difference between the reconstructed fMRI image and the real fMRI image needs to be minimized; moreover, this application selects the Wasserstein distance in the loss function as the measure between the real image distribution and the reconstructed image distribution, and introduces a gradient penalty term to clip the discriminator network gradient and further stabilize the training of the image coding network.
  • this application also introduces a structural similarity measurement loss and a perceptual measurement loss to ensure that the characteristics of the reconstructed image in key regions such as the rat ventral tegmental area (VTA) and prefrontal cortex (PFC) correspond to those of the real image.
  • the loss function formula of each network is as follows:
  • the loss function of the image coding network is:
  • the loss function of the image discrimination network is:
  • the loss function of the image reconstruction network is:
  • L SSIM is the structural similarity measurement loss function
  • L perceptual represents the perceptual measurement loss function
  • this scheme evaluates the closeness of the reconstructed image to the real image through the image overlap ratio (SOR) technical indicator.
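  • The patent names the image overlap ratio (SOR) but does not give its formula; the sketch below shows one plausible, assumed formulation that thresholds both volumes and measures the fraction of suprathreshold real voxels that are also suprathreshold in the reconstruction.

```python
# Assumed sketch of a spatial overlap ratio; the threshold and the exact
# definition are illustrative, not taken from the patent.
import torch

def spatial_overlap_ratio(x_rec, x_real, threshold=0.5):
    rec_mask = x_rec > threshold
    real_mask = x_real > threshold
    overlap = (rec_mask & real_mask).sum().float()
    return (overlap / real_mask.sum().clamp(min=1)).item()
```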
  • the trained image reconstruction network reconstructs high-quality medical image samples from the Gaussian hidden layer vector of the image to be reconstructed, which enhances the image sample size and facilitates subsequent analysis.
  • the embodiment of this application proposes a medical image reconstruction network training method that fuses a variational autoencoder and a generative adversarial network; compared with the traditional generative adversarial network, this application introduces prior knowledge guidance from real images through the fused variational autoencoder, thereby solving the problem that generative adversarial networks are difficult to train.
  • in addition, a separate coding discrimination network is added between the variational autoencoder and the generative adversarial network; its goal is to replace the function of variational inference, so that the encoding feature vector of the variational encoder approximates the original Gaussian hidden layer vector by means of adversarial training, thereby resolving the conflict between variational inference and the objective function of the generative adversarial network.
  • FIG. 7 shows a structural block diagram of the medical image reconstruction network training device provided by an embodiment of the present application; for ease of description, only the parts related to the embodiment of the present application are shown.
  • the medical image reconstruction network training device in the embodiment of the present application may include a feature code extraction module 401, a first image reconstruction module 402 and an optimization module 403.
  • the feature encoding extraction module 401 is configured to perform feature encoding extraction on real image samples to obtain feature encoding vectors of the real image samples;
  • the first image reconstruction module 402 is configured to perform image reconstruction based on the feature code vector through an image reconstruction network to obtain a first image, and perform image reconstruction based on the first hidden layer vector of the real image sample to obtain a second image;
  • the first optimization module 403 is configured to perform image discrimination on the real image sample, the first image and the second image through an image discrimination network, and optimize the image reconstruction network according to the result of the image discrimination.
  • the feature coding extraction module 401 may be used to: perform feature extraction on the real image sample based on an image coding network to obtain a feature coding vector of the real image sample.
  • the feature encoding extraction module 401 may be specifically used for:
  • the extracted features are processed through a linear function to obtain the feature encoding vector of the real image sample.
  • the linear function is a piecewise linear function.
  • the piecewise linear function is a ReLU function.
  • the medical image reconstruction network training device may further include a second optimization module; the second optimization module is used to:
  • the image coding network is optimized based on the vector discrimination result.
  • the optimizing the image coding network based on the vector discrimination result includes:
  • the performing confrontation training on the image coding network based on the vector discrimination result includes:
  • the voxel-by-voxel difference is the first loss function of the image coding network, and the first loss function is:
  • L C is the first loss function
  • z e is the feature coding vector
  • z r is the first hidden layer vector
  • C represents the image coding network
  • E is the mathematical expectation.
  • the first optimization module 403 may be used for:
  • the performing confrontation training on the image reconstruction network according to the image discrimination result may include:
  • according to the image discrimination result, the structural similarity measurement loss function and the perceptual measurement loss function, the second loss function of the image reconstruction network is determined, and the network parameters of the image reconstruction network are updated by the gradient descent method to train the image reconstruction network;
  • the second loss function is:
  • L G is the second loss function
  • z e is the feature coding vector
  • z r is the first hidden layer vector
  • C represents the image coding network
  • D is the image discrimination network
  • G represents the image reconstruction network
  • E is the mathematical expectation
  • L SSIM is the structural similarity measurement loss function
  • L perceptual is the perceptual measurement loss function
  • X real represents the real image
  • ⁇ 1 and ⁇ 2 are the weight coefficients
  • Φ is the Gram matrix
  • L D is the loss function of the image discrimination network.
  • the first image reconstruction module 402 may be specifically used for:
  • the feature encoding vector and the first hidden layer vector are input to the image reconstruction network to obtain the first image and the second image; wherein the convolutional layers of the image generation network are nearest-neighbor-upsampled three-dimensional separable convolutional layers.
  • FIG. 8 shows a structural block diagram of the medical image reconstruction device provided by an embodiment of the present application. For ease of description, only the parts related to the embodiment of the present application are shown.
  • the medical image reconstruction device in the embodiment of the present application may include a hidden layer vector acquisition module 501 and a second image reconstruction module 502.
  • the hidden layer vector obtaining module 501 is used to obtain the second hidden layer vector of the image to be reconstructed;
  • the second image reconstruction module 502 is configured to perform image reconstruction on the image to be reconstructed based on the second hidden layer vector through the trained image reconstruction network.
  • the terminal device 600 may include at least one processor 610, a memory 620, and a computer program stored in the memory 620 and executable on the at least one processor 610; when the processor 610 executes the computer program, the steps in any of the foregoing method embodiments are implemented, such as steps 101 to 103 in the embodiment shown in FIG. 2, or steps 201 to 202 in the embodiment shown in FIG. 5.
  • alternatively, when the processor 610 executes the computer program, the functions of the modules/units in the foregoing device embodiments are implemented, for example, the functions of modules 401 to 403 shown in FIG. 7 or the functions of modules 501 to 502 shown in FIG. 8.
  • the computer program may be divided into one or more modules/units, and one or more modules/units are stored in the memory 620 and executed by the processor 610 to complete the application.
  • the one or more modules/units may be a series of computer program segments capable of completing specific functions, and the program segments are used to describe the execution process of the computer program in the terminal device 600.
  • FIG. 9 is only an example of a terminal device and does not constitute a limitation on the terminal device; it may include more or fewer components than shown in the figure, combine certain components, or have different components, such as input and output devices, network access devices, a bus, and so on.
  • the processor 610 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 620 may be an internal storage unit of the terminal device, or an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, and so on.
  • the memory 620 is used to store the computer program and other programs and data required by the terminal device.
  • the memory 620 may also be used to temporarily store data that has been output or will be output.
  • the bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus can be divided into address bus, data bus, control bus and so on.
  • the buses in the drawings of this application are not limited to only one bus or one type of bus.
  • the embodiments of the present application also provide a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each embodiment of the above-mentioned medical image reconstruction network training method are implemented , Or implement the steps in each embodiment of the above-mentioned medical image reconstruction method.
  • the embodiments of the present application also provide a computer program product; when the computer program product runs on a mobile terminal, the mobile terminal executes the steps in each embodiment of the above medical image reconstruction network training method, or the steps in each embodiment of the above medical image reconstruction method.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the computer program can be stored in a computer-readable storage medium. When executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • the computer-readable medium may at least include any entity or device capable of carrying the computer program code to the photographing device/terminal device, a recording medium, computer memory, read-only memory (ROM), random access memory (RAM), an electric carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a floppy disk, a CD-ROM, and so on.
  • in some jurisdictions, according to legislation and patent practice, computer-readable media cannot include electric carrier signals and telecommunications signals.
  • the disclosed apparatus/network equipment and method may be implemented in other ways.
  • the device/network device embodiments described above are only illustrative; for example, the division of the modules or units is only a logical functional division, and there may be other divisions in actual implementation, for instance multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

Abstract

The present application provides a medical image reconstruction method, and a medical image reconstruction network training method and device. The medical image reconstruction network training method comprises: performing feature encoding extraction on a real image sample to obtain a feature encoding vector of the real image sample; through an image reconstruction network, performing image reconstruction based on the feature encoding vector to obtain a first image, and performing image reconstruction based on a first hidden layer vector of the real image sample to obtain a second image; performing image discrimination on the real image sample, the first image and the second image through an image discrimination network, and optimizing the image reconstruction network according to the image discrimination result.

Description

Medical image reconstruction method, and medical image reconstruction network training method and device
Technical Field
The embodiments of the present application belong to the technical field of image processing, and in particular relate to a medical image reconstruction method, and a medical image reconstruction network training method and device.
Background
Functional magnetic resonance imaging (fMRI) is an emerging neuroimaging method. Its principle is to use magnetic resonance imaging to measure the hemodynamic changes caused by neuronal activity. As a non-invasive technique, it can accurately localize specific active cortical regions of the brain and capture the blood-oxygen changes that reflect neuronal activity. However, fMRI image acquisition is costly, scanning times are long, and some patients cannot be scanned (for example, those with metal objects in the body); in certain application scenarios, the number of images that can be acquired is therefore often limited, which greatly restricts the application of artificial intelligence methods that rely on large amounts of data, such as deep learning, in the field of medical image analysis.
A promising solution is to use existing artificial intelligence methods with limited real image samples to learn to reconstruct the corresponding medical images from Gaussian hidden layer vectors, thereby enlarging the sample size and supporting subsequent image analysis tasks. The generative adversarial network is currently a generative model with strong performance; it has gradually become a research hotspot of deep learning and has begun to be applied in the field of medical imaging.
Technical Problem
The traditional generative adversarial network can generate diverse new images by learning the distribution of real data, but the network is difficult to train and does not easily reach optimal convergence.
Technical Solution
One aspect of the embodiments of the present application provides a medical image reconstruction network training method, which includes:
performing feature encoding extraction on a real image sample to obtain a feature encoding vector of the real image sample;
through an image reconstruction network, performing image reconstruction based on the feature encoding vector to obtain a first image, and performing image reconstruction based on a first hidden layer vector of the real image sample to obtain a second image;
performing image discrimination on the real image sample, the first image and the second image through an image discrimination network, and optimizing the image reconstruction network according to the image discrimination result.
In one embodiment, performing feature encoding extraction on the real image sample to obtain the feature encoding vector of the real image sample includes:
performing feature extraction on the real image sample based on an image coding network to obtain the feature encoding vector of the real image sample.
In one embodiment, performing feature extraction on the real image sample based on the image coding network to obtain the feature encoding vector of the real image sample includes:
performing hierarchical feature extraction on the real image sample through multiple three-dimensional convolutional layers of the image coding network;
processing the extracted features through a linear function to obtain the feature encoding vector of the real image sample.
In one embodiment, the linear function is a piecewise linear function.
In one embodiment, the piecewise linear function is a ReLU function.
In one embodiment, the method further includes:
performing vector discrimination on the feature encoding vector and the first hidden layer vector through a coding feature discrimination network;
optimizing the image coding network based on the vector discrimination result.
In one embodiment, optimizing the image coding network based on the vector discrimination result includes:
performing adversarial training on the image coding network based on the vector discrimination result.
In one embodiment, performing adversarial training on the image coding network based on the vector discrimination result includes:
calculating the voxel-by-voxel difference between the second image and the real image sample, and updating the network parameters of the image coding network by the gradient descent method until the voxel-by-voxel difference is less than or equal to a preset threshold, thereby training the image coding network, wherein the voxel-by-voxel difference is the first loss function of the image coding network;
wherein the first loss function is:
Figure PCTCN2020079678-appb-000001
where L C is the first loss function, z e is the feature encoding vector, z r is the first hidden layer vector, C represents the image coding network, and E is the mathematical expectation.
In one embodiment, optimizing the image reconstruction network according to the image discrimination result includes:
performing adversarial training on the image reconstruction network according to the image discrimination result.
In one embodiment, performing adversarial training on the image reconstruction network according to the image discrimination result includes:
determining a second loss function of the image reconstruction network according to the image discrimination result, a structural similarity measurement loss function and a perceptual measurement loss function, and updating the network parameters of the image reconstruction network by the gradient descent method to train the image reconstruction network;
wherein the second loss function is:
Figure PCTCN2020079678-appb-000002
Figure PCTCN2020079678-appb-000003
Figure PCTCN2020079678-appb-000004
Figure PCTCN2020079678-appb-000005
其中,L G为所述第二损失函数,z e为所述特征编码向量,z r为所述第一隐层向量,C表征所述图像编码网络,D为所述图像判别网络,G为所述图像重建网络,E为数学期望,L SSIM为结构相似性度量损失函数,L perceptual代表感知度量损失函数,X real表征所述真实图像,λ 1和λ 2为权重系数,Φ为Gram矩阵,L D为图像判别网络的损失函数。
在一个实施例中,所述通过图像生成网络,基于所述特征编码向量进行图像重建得到第一图像,基于所述真实图像样本的第一隐层向量进行图像重建得到第二图像,包括:
将所述特征编码向量和所述第一隐层向量输入所述图像重建网络,得到所述第一图像和所述第二图像;其中,所述图像生成网络的卷积层为近邻上采样的三维可分离卷积层。
本申请实施例第二方面提供一种医学图像重建方法,其包括:
获取待重建图像的第二隐层向量;
通过训练后的图像重建网络,基于所述第二隐层向量对所述待重建图像进行图像重建。
本申请实施例第三方面提供一种医学图像重建网络训练装置,其包括:
特征编码提取模块,用于对真实图像样本进行特征编码提取,得到所述真实图像样本的特征编码向量;
第一图像重建模块,用于通过图像重建网络,基于所述特征编码向量进行图像重建得到第一图像,基于所述真实图像样本的第一隐层向量进行图像重建得到第二图像;
第一优化模块,用于通过图像判别网络对所述真实图像样本、所述第一图像和所述第二图像进行图像判别,并根据图像判别结果对所述图像生成网络进行优化。
本申请实施例第四方面提供一种医学图像重建装置,其包括:
隐层向量获取模块,用于获取待重建图像的第二隐层向量;
第二图像重建模块,用于通过训练后的图像重建网络,基于所述第二隐层向量对所述带重建图像进行图像重建。
本申请实施例第五方面提供一种终端设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,所述处理器执行所述计算机可读指令时实现如下步骤:
对真实图像样本进行特征编码提取,得到所述真实图像样本的特征编码向量;
通过图像重建网络,基于所述特征编码向量进行图像重建得到第一图像,基于所述真实图像样本的第一隐层向量进行图像重建得到第二图像;
通过图像判别网络对所述真实图像样本、所述第一图像和所述第二图像进行图像判别,并根据图像判别结果对所述图像生成网络进行优化。
在一个实施例中,所述对真实图像样本进行特征编码提取,得到所述真实图像样本的特征编码向量,包括:
基于图像编码网络对所述真实图像样本进行特征提取,得到所述真实图像样本的特征编码向量。
在一个实施例中,所述处理器执行所述计算机可读指令时还实现如下步骤:
通过编码特征判别网络对所述特征编码向量和所述第一隐层向量进行向量判别;
基于向量判别结果对所述图像编码网络进行优化。
本申请实施例第六方面提供一种终端设备,所述终端设备包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,所述处理器执行所述计算机可读指令时实现如下步骤:
获取待重建图像的第二隐层向量;
通过训练后的图像重建网络,基于所述第二隐层向量对所述待重建图像进行图像重建。
本申请实施例第七方面提供一种计算机存储介质,所述计算机可读存储介质存储有计算机可读指令,其中,所述计算机可读指令被处理器执行时实现如下步骤:
对真实图像样本进行特征编码提取,得到所述真实图像样本的特征编码向量;
通过图像重建网络,基于所述特征编码向量进行图像重建得到第一图像,基于所述真实图像样本的第一隐层向量进行图像重建得到第二图像;
通过图像判别网络对所述真实图像样本、所述第一图像和所述第二图像进行图像判别,并根据图像判别结果对所述图像重建网络进行优化。
本申请实施例第八方面提供一种计算机存储介质,所述计算机可读存储介质存储有计算机可读指令,其中,所述计算机可读指令被处理器执行时实现如下步骤:
获取待重建图像的第二隐层向量;
通过训练后的图像重建网络,基于所述第二隐层向量对所述待重建图像进行图像重建。
本申请实施例第九方面提供了一种计算机程序产品,当计算机程序产品在终端设备上运行时,使得终端设备执行上述第一方面中任一项的方法,或使得终端设备执行上述第二方面中任一项的方法。
有益效果
本申请实施例,对真实图像样本进行特征编码提取,得到上述真实图像样本的特征编码向量,通过图像重建网络,基于上述特征编码向量进行图像重建得到第一图像,基于上述真实图像样本的隐层向量进行图像重建得到第二图像,同时通过图像判别网络对上述真实图像样本、上述第一图像和上述第二图像进行图像判别,并根据图像判别结果对上述图像重建网络进行优化,将优化后的图像重建网络用于图像重建工作,为生成对抗网络引入来自于真实图像中的先验知识引导,从而稳定对图像重建网络的训练,易于达到最优收敛,从而解决生成对抗网络训练困难的问题。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请一实施例提供的应用场景示意图;
图2是本申请一实施例提供的医学图像重建网络训练方法的流程示意图;
图3是本申请一实施例提供的医学图像重建网络训练方法的流程示意图;
图4是本申请一实施例提供的医学图像重建网络训练方法的流程示意图;
图5是本申请一实施例提供的医学图像重建方法的流程示意图;
图6是本申请一实施例提供的医学图像重建的流程示意图;
图7是本申请一实施例提供的医学图像重建网络训练装置的结构示意图;
图8是本申请一实施例提供的医学图像重建装置的结构示意图;
图9是本申请一实施例提供的终端设备的结构示意图。
本发明的实施方式
以下描述中,为了说明而不是为了限定,提出了诸如特定系统结构、技术之类的具体细节,以便透彻理解本申请实施例。然而,本领域的技术人员应当清楚,在没有这些具体细节的其它实施例中也可以实现本申请。在其它情况中,省略对众所周知的系统、装置、电路以及方法的详细说明,以免不必要的细节妨碍本申请的描述。
应当理解,当在本申请说明书和所附权利要求书中使用时,术语“包括”指示所描述特征、整体、步骤、操作、元素和/或组件的存在,但并不排除一个或多个其它特征、整体、步骤、操作、元素、组件和/或其集合的存在或添加。
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
如在本申请说明书和所附权利要求书中所使用的那样,术语“如果”可以依据上下文被解释为“当...时”或“一旦”或“响应于确定”或“响应于检测到”。类似地,短语“如果确定”或“如果检测到[所描述条件或事件]”可以依据上下文被解释为意指“一旦确定”或“响应于确定”或“一旦检测到[所描述条件或事件]”或“响应于检测到[所描述条件或事件]”。
另外,在本申请说明书和所附权利要求书的描述中,术语“第一”、“第二”、“第三”等仅用于区分描述,而不能理解为指示或暗示相对重要性。
在本申请说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
功能性磁共振成像fMRI是一种新兴的神经影像学方式,其原理是利用磁振造影来测量神经元活动所引发之血液动力的改变。它作为一种非介入的技术,能够对特定的大脑活动皮层区域进行准确定位,并捕获能够反映神经元活动的血氧变化。但由于fMRI图像采集成本高、扫描时间长,且一些特殊患者无法进行(比如体内金属物品者不能接受扫描),在特定的应用场景下,能获取的影像数量往往是有限的,这极大地限制了深度学习等依赖于大量数据的人工智能方法在医学影像分析领域的应用。
一个极具前景的解决方法是通过现有的人工智能方法利用有限的真实图像样本,学习从高斯隐层向量中重建相应医学影像,从而达到增强样本量,支撑后续图像分析任务的目的。而生成对抗网络是当前性能较佳的生成模型,它最早由Ian Goodfellow等人于2014年提出,可以通过生成器捕捉真实数据的潜在分布,从而达到从隐层空间中生成真实数据分布样本的目的。此后,生成对抗网络逐渐成为了深度学习的研究热点,并开始应用到各个领域中。除了从隐层向量中重建原始图像,另一种解决思路是从一个模态的医学图像合成另一个模态的医学图像,如从同一患者的CT图像合成对应的PET图像。有许多学者在此方面进行了大量工作,然而跨模态合成的解决思路需要大量另一模态的图像数据对模型进行训练,并且其合成样本的多样性有限。因此,最受关注的思路仍然是如何从隐层向量中稳定重建相应医学图像。
尽管在这个方法中,生成对抗网络可以通过学习真实数据分布,生成带有多样性的新图像,但它最大的问题在于网络训练困难,不易达到最优收敛。生成对抗网络的目的是使生成器拟合的数据分布接近真实的数据分布,本申请发明人在研究中发现没有任何先验知识引入的生成网络根本不知道真实数据分布的情况,只能根据判别器的真假反馈一次次地试探。而作为另一种性能强大的生成模型,变分自编码网络则不存在这个问题,它可以先抽取真实图像的编码特征向量,同时通过重采样进行变分推理,根据变分结果对隐向量进行解码生成。
基于变分自编码器的作用机理启发,本申请实施例将变分自编码器的编码特征向量作为一种关于真实图像的特征先验知识引入到生成对抗网络的训练中,给予生成网络一个较明确的优化方向,从而解决其训练难、耗时长且易崩溃的问题。并且我们发现简单地拼凑组合变分自编码器和生成对抗网络是不行的,这是由于变分推理与生成对抗网络的目标函数之间存在优化冲突,二者无法同时达到最优收敛。为了解决这个问题,本申请进一步引入了单独的编码判别器,从而将变分自编码器的优化过程也纳入到了“生成-对抗”的体系下,解决变分推理与生成对抗网络的目标函数之间存在的优化冲突。
举例说明,本申请实施例可以应用到如图1所示的示例性场景中。其中,终端10和服务器20构成上述医学图像重建网络训练方法、医学图像重建方法的应用场景。
具体的,终端10用于获取被检体的真实图像样本,并将该真实图像样本发送给服务器20;服务器20用于对真实图像样本进行特征编码提取,得到真实图像样本的特征编码向量,通过图像重建网络,基于特征编码向量进行图像重建得到第一图像,基于真实图像样本的第一隐层向量进行图像重建得到第二图像,通过图像判别网络对真实图像样本、第一图像和第二图像进行图像判别,并根据图像判别结果对图像重建网络进行优化,将优化后的图像重建网络用于图像重建工作,为生成对抗网络引入来自于真实图像中的先验知识引导,从而稳定对图像重建网络的训练,易于达到最优收敛,从而解决生成对抗网络训练困难的问题。
以下结合图1对本申请的医学图像重建网络训练方法进行详细说明。
图2是本申请一实施例提供的医学图像重建网络训练方法的示意性流程图,参照图2,该医学图像重建网络训练方法的详述如下:
在步骤101中,对真实图像样本进行特征编码提取,得到所述真实图像样本的特征编码向量。
一个实施例中,步骤101中可以通过图像编码网络对上述真实图像样本进行特征提取,得到上述真实图像样本的特征编码向量。
示例性的,参见图3,上述通过图像编码网络对上述真实图像样本进行特征提取,得到上述真实图像样本的特征编码向量,具体可以包括:
在步骤1011中,通过上述图像编码网络的多个三维卷积层对上述真实图像样本进行分层特征提取。
在步骤1012中,通过线性函数对提取到的特征进行处理,得到所述真实图像样本的特征编码向量。
一个示例场景中,可以将真实图像样本展成时间序列上的三维影像,并将该三维影像依次输入到图像编码网络中,利用图像编码网络的多个三维卷积层对该三维影像进行分层特征抽取,并通过线性函数综合三维影像的线性特征和非线性特征,得到真实图像样本的特征编码表示向量。
其中,上述线性函数为分段线性函数。具体的,通过分段线性函数对三维影像的线性特征和非线性特征进行处理,得到真实图像样本的特征编码表示向量。
例如,分段线性函数可以为ReLU函数。具体的,通过ReLU函数对三维影像的线性特征和非线性特征进行处理,得到真实图像样本的特征编码表示向量。
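作为上述图像编码网络的一个示意性草稿(PyTorch),以下代码演示"多个三维卷积层分层提取特征、再经ReLU与线性层输出特征编码向量"的基本写法;其中的层数、通道数、输入尺寸与隐向量维度均为便于说明而假设的取值,并非本申请限定的具体网络结构。

```python
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """示意性的图像编码网络:多个三维卷积层 + ReLU,输出特征编码向量 z_e。"""
    def __init__(self, in_channels: int = 1, latent_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool3d(1)   # 将任意空间尺寸汇聚为 1x1x1
        self.fc = nn.Linear(64, latent_dim)   # 线性层输出特征编码向量

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)            # 分层特征提取
        h = self.pool(h).flatten(1)     # (N, 64)
        return self.fc(h)               # (N, latent_dim),即 z_e

# 用法示意:一帧 64x64x32 的三维影像(尺寸为假设)
encoder = ImageEncoder()
z_e = encoder(torch.randn(2, 1, 64, 64, 32))
print(z_e.shape)   # torch.Size([2, 128])
```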
在步骤102中,通过图像重建网络,基于所述特征编码向量进行图像重建得到第一图像,基于所述真实图像样本的第一隐层向量进行图像重建得到第二图像。
一个实施例中,可以将上述特征编码向量和上述第一隐层向量输入到上述图像重建网络,得到上述第一图像和上述第二图像;其中,本申请实施例中的图像重建网络的卷积层为近邻上采样的三维可分离卷积层。
示例性的,可以将从真实图像样本抽取得到的特征编码向量与从真实图像样本的高斯分布中采样得到的第一隐层向量作为图像重建网络的输入,分别从特征编码向量和第一隐层向量中逐级重建得到第一图像与第二图像。本实施例中,利用带有近邻上采样的三维可分离卷积层代替传统图像重建网络中的反卷积层,能够降低可学习参数的数量,并能够提高生成的fMRI图像的质量,使得重建图像的伪影更少、结构更加清晰等。
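以下给出"近邻上采样+三维可分离卷积"重建模块的一个示意性草稿(PyTorch),用以说明以其代替传统反卷积层的思路;通道数、上采样级数与输出尺寸均为假设值,仅供参考。可分离卷积把三维卷积拆分为逐通道卷积与1×1×1逐点卷积,这也是可学习参数数量得以降低的原因。

```python
import torch
import torch.nn as nn

class UpSeparableConv3d(nn.Module):
    """近邻上采样 + 三维可分离卷积(逐通道卷积 + 逐点卷积),用于代替反卷积层。"""
    def __init__(self, in_ch: int, out_ch: int, scale: int = 2):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode="nearest")       # 近邻上采样
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch)              # 逐通道卷积
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)         # 1x1x1 逐点卷积
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(self.up(x))))

class Reconstructor(nn.Module):
    """示意性的图像重建网络:把隐向量映射为小尺寸特征体,再逐级上采样重建三维图像。"""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 4 * 4 * 4)
        self.blocks = nn.Sequential(
            UpSeparableConv3d(64, 32),
            UpSeparableConv3d(32, 16),
            UpSeparableConv3d(16, 8),
        )
        self.out = nn.Conv3d(8, 1, kernel_size=3, padding=1)

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 4, 4, 4)
        return torch.tanh(self.out(self.blocks(h)))    # 输出 32x32x32 的重建图像

G = Reconstructor()
x_rec = G(torch.randn(2, 128))
print(x_rec.shape)   # torch.Size([2, 1, 32, 32, 32])
```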
在步骤103中,通过图像判别网络对所述真实图像样本、所述第一图像和所述第二图像进行图像判别,并根据图像判别结果对所述图像重建网络进行优化。
具体的,可以将真实图像样本、第一图像与第二图像三者均作为图像判别网络的输入,并根据图像判别网络的判别结果对图像重建网络进行优化,构建“生成-对抗”训练,并将经过优化训练后的图像重建网络用于图像重建。
其中,在步骤103中对所述图像重建网络进行优化后,将该图像重建网络继续用于步骤102中所述的图像重建得到第一图像和第二图像,并在得到第一图像和第二图像后再次执行步骤103,依次循环执行。
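下面用一段可运行的极简代码示意上述"先判别、后重建"的交替循环优化流程;其中的各网络均以最简单的全连接层占位,并省略了梯度惩罚等细节,仅用于说明训练循环的组织方式,并非本申请的具体实现。

```python
import torch
import torch.nn as nn

# 极简占位网络:仅为说明"生成-对抗"交替训练的组织方式,实际结构见正文描述
C = nn.Sequential(nn.Flatten(), nn.Linear(32 ** 3, 128))   # 图像编码网络(假设)
G = nn.Sequential(nn.Linear(128, 32 ** 3), nn.Tanh())      # 图像重建网络(假设)
D = nn.Sequential(nn.Flatten(), nn.Linear(32 ** 3, 1))     # 图像判别网络(假设)

opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)

for step in range(3):                                      # 实际训练需迭代大量批次
    x_real = torch.randn(4, 1, 32, 32, 32)                 # 真实图像样本(此处以随机数占位)
    z_e = C(x_real)                                         # 特征编码向量
    z_r = torch.randn(4, 128)                               # 从高斯分布采样的第一隐层向量
    x_rec = G(z_e).view(-1, 1, 32, 32, 32)                  # 第一图像
    x_rand = G(z_r).view(-1, 1, 32, 32, 32)                 # 第二图像

    # 先更新图像判别网络:拉开真实图像与两类重建图像的评分(Wasserstein 风格,省略梯度惩罚)
    loss_D = D(x_rec.detach()).mean() + D(x_rand.detach()).mean() - 2 * D(x_real).mean()
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # 再更新图像重建网络:使两类重建图像的判别评分尽量高
    loss_G = -(D(x_rec).mean() + D(x_rand).mean())
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```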
上述医学图像重建网络训练方法,对真实图像样本进行特征编码提取,得到上述真实图像样本的特征编码向量,通过图像重建网络,基于上述特征编码向量进行图像重建得到第一图像,基于上述真实图像样本的隐层向量进行图像重建得到第二图像,同时通过图像判别网络对上述真实图像样本、上述第一图像和上述第二图像进行图像判别,并根据图像判别结果对上述图像重建网络进行优化,将优化后的图像重建网络用于图像重建工作,为生成对抗网络引入来自于真实图像中的先验知识引导,从而稳定对图像重建网络的训练,易于达到最优收敛,从而解决生成对抗网络训练困难的问题。
图4为本申请一实施例提供的医学图像重建网络训练方法的示意性流程图,参照图4,基于图2所示的实施例,该医学图像重建网络训练方法还可以包括:
在步骤104中,通过编码特征判别网络对所述特征编码向量和所述第一隐层向量进行向量判别。
在步骤105中,基于向量判别结果对所述图像编码网络进行优化。
其中,在步骤101中得到特征编码向量后,可以将该特征编码向量和真实图像样本的第一隐层向量作为输入,通过步骤104和步骤105对图像编码网络进行优化,再将优化后的图像编码网络作为步骤101中的图像编码网络,用于再次执行步骤101;如此循环往复,对图像编码网络进行迭代优化。
一个实施例中,可以基于所述向量判别结果对所述图像编码网络进行对抗训练,从而对上述图像编码网络进行优化。
具体的,可以构建与图像判别网络结构相同的编码特征判别网络,将从真实图像样本中编码得到的特征编码向量与从高斯分布中采样得到的第一隐层向量作为编码特征判别网络的输入,使编码特征判别网络与图像编码网络也构成“生成-对抗”的训练关系,以代替变分推理,解决变分推理与生成对抗目标函数的训练冲突问题。
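以下是编码特征判别网络及其对抗损失的一个示意性草稿(PyTorch):网络宽度、层数与损失的符号约定均为假设取值;说明书中该判别网络与图像判别网络结构相同,此处为便于演示简化为多层感知机。

```python
import torch
import torch.nn as nn

class CodeDiscriminator(nn.Module):
    """编码特征判别网络:判别输入向量来自编码器(z_e)还是高斯先验(z_r)。"""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),        # Wasserstein 风格:输出实数评分,不加 sigmoid
        )

    def forward(self, z):
        return self.net(z)

D_code = CodeDiscriminator()
z_e = torch.randn(4, 128, requires_grad=True)   # 假设来自图像编码网络的特征编码向量
z_r = torch.randn(4, 128)                       # 从高斯分布采样的第一隐层向量

# 判别网络希望拉开两类向量的评分差;编码网络则希望 z_e 的评分逼近 z_r,
# 以对抗方式促使编码分布逼近高斯先验,从而代替变分推理
loss_code_D = D_code(z_e.detach()).mean() - D_code(z_r).mean()
loss_code_C = -D_code(z_e).mean()
print(loss_code_D.item(), loss_code_C.item())
```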
一个实施例中,上述基于所述向量判别结果对所述图像编码网络进行对抗训练,具体可以包括:计算所述第二图像与所述真实图像样本之间的逐体素差异,并通过梯度下降法更新所述图像编码网络的网络参数,直至所述逐体素差异小于或等于预设阈值,实现对所述图像编码网络的训练,其中所述逐体素差异为所述图像编码网络的第一损失函数。
示例性的,对于图像编码网络的训练优化,引入编码特征判别网络代替原本的变分推理过程。在对上述图像编码网络的训练过程中,首先计算重建fMRI图像与真实fMRI图像的逐体素差异,随后通过梯度下降法更新上述图像编码网络的网络参数,使该逐体素差异小于或等于第一预设阈值即可;其次,选取Wasserstein距离作为第一损失函数中真实图像分布与重建图像分布的度量工具,同时引入梯度惩罚项裁剪判别器网络梯度,进一步稳定网络训练。
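对于上文提到的"引入梯度惩罚项裁剪判别器网络梯度",下面给出常见的WGAN-GP风格梯度惩罚的一个示意性实现(PyTorch);插值方式与惩罚系数取通常值,并非本申请限定的具体公式。图像编码网络的训练还会同时最小化重建图像与真实图像的逐体素差异,例如取二者之差的绝对值均值。

```python
import torch

def gradient_penalty(discriminator, real, fake, lambda_gp: float = 10.0):
    """在真实样本与重建样本的随机插值点上,约束判别器梯度范数接近 1。"""
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    inter = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    score = discriminator(inter)
    grad = torch.autograd.grad(outputs=score.sum(), inputs=inter,
                               create_graph=True)[0]
    grad_norm = grad.flatten(1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()

# 用法示意:critic 可以是图像判别网络或编码特征判别网络(此处以占位网络代替)
critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 ** 3, 1))
x_real = torch.randn(4, 1, 32, 32, 32)
x_fake = torch.randn(4, 1, 32, 32, 32)
gp = gradient_penalty(critic, x_real, x_fake)
loss_voxel = (x_fake - x_real).abs().mean()   # 逐体素差异(L1)的一种示意写法
print(gp.item(), loss_voxel.item())
```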
示例性的,第一损失函数可以为:
Figure PCTCN2020079678-appb-000006
其中,L_C为所述第一损失函数,z_e为所述特征编码向量,z_r为所述第一隐层向量,C表征所述图像编码网络,E为数学期望。
一个实施例中,步骤103中所述的根据图像判别结果对所述图像重建网络进行优化,具体可以为:根据所述图像判别结果对所述图像重建网络进行对抗训练。
其中,上述根据所述图像判别结果对所述图像重建网络进行对抗训练,可以包括:根据所述图像判别结果、结构相似性度量损失函数和感知度量损失函数,确定所述图像重建网络的第二损失函数,并通过梯度下降法更新所述图像重建网络的网络参数,对所述图像重建网络进行训练。
示例性的,根据图像判别结果对图像重建网络进行对抗训练,具体而言,如果图像判别网络的判别结果越接近于真实图像,则只需对图像重建网络利用梯度下降法对网络参数进行第一预设幅度更新或不更新;如若图像判别网络的判别结果越接近于重建图像,则需对图像重建网络对网络参数进行第二预设幅度更新,且第二预设幅度大于第一预设幅度。此外,除了选取Wasserstein距离作为第二损失函数中真实图像分布与重建图像分布的度量工具,还引入结构相似性度量损失与感知度量损失,可以确保重建图像的特征更加符合真实图像。
示例性的,第二损失函数可以为:
Figure PCTCN2020079678-appb-000007
Figure PCTCN2020079678-appb-000008
Figure PCTCN2020079678-appb-000009
Figure PCTCN2020079678-appb-000010
其中,L_G为所述第二损失函数,z_e为所述特征编码向量,z_r为所述第一隐层向量,C表征所述图像编码网络,D为所述图像判别网络,G为所述图像重建网络,E为数学期望,L_SSIM为结构相似性度量损失函数,L_perceptual代表感知度量损失函数,X_real表征所述真实图像,λ_1和λ_2为权重系数,Φ为Gram矩阵,L_D为图像判别网络的损失函数。
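以下草稿示意如何把Wasserstein项、结构相似性度量损失与感知度量损失组合成第二损失函数(PyTorch);其中SSIM采用不做滑窗的全局简化形式,感知损失以特征图的Gram矩阵之差近似,特征来源、权重系数λ_1、λ_2等均为假设取值,仅供参考,并非本申请公式的逐项复现。

```python
import torch

def ssim_loss(x, y, c1: float = 0.01 ** 2, c2: float = 0.03 ** 2):
    """简化的全局 SSIM(不做滑窗),返回 1 - SSIM 作为结构相似性度量损失。"""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1 - ssim

def gram(feat):
    """Gram 矩阵:feat 形状 (N, C, ...),先展平空间维。"""
    n, c = feat.shape[:2]
    f = feat.reshape(n, c, -1)
    return f @ f.transpose(1, 2) / f.shape[-1]

def perceptual_loss(feat_real, feat_rec):
    """感知度量损失:以特征图 Gram 矩阵之差衡量,特征可取判别网络中间层输出。"""
    return (gram(feat_real) - gram(feat_rec)).pow(2).mean()

# 组合第二损失函数的示意(d_score 假设为图像判别网络对重建图像的输出)
x_real = torch.randn(2, 1, 32, 32, 32)
x_rec = torch.randn(2, 1, 32, 32, 32)
feat_real = torch.randn(2, 8, 16, 16, 16)    # 假设的中间层特征
feat_rec = torch.randn(2, 8, 16, 16, 16)
d_score = torch.randn(2, 1)
lambda1, lambda2 = 1.0, 1.0                  # 权重系数,取值为假设
L_G = -d_score.mean() + lambda1 * ssim_loss(x_real, x_rec) \
      + lambda2 * perceptual_loss(feat_real, feat_rec)
print(L_G.item())
```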
本实施例中,可以通过图像重叠率(SOR)技术指标来评估图像重建网络重建的重建图像与真实图像的接近程度。在对图像重建网络训练优化完成后,通过训练后的图像重建网络可从高斯隐层向量中重建出高质量的医学图像样本,起到增强图像样本量的作用,便于后续的分析工作。
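正文未给出图像重叠率(SOR)的具体计算式,下面是一个假设性的实现草稿:将重建图像与真实图像按阈值二值化后,计算Dice风格的体素重叠比,仅作示意。

```python
import torch

def sor(x_rec, x_real, thr: float = 0.5, eps: float = 1e-8):
    """图像重叠率(SOR)的一个假设实现:二值化后计算 Dice 风格的体素重叠比。"""
    a = (x_rec > thr).float()
    b = (x_real > thr).float()
    inter = (a * b).sum()
    return (2 * inter / (a.sum() + b.sum() + eps)).item()

print(sor(torch.rand(1, 1, 32, 32, 32), torch.rand(1, 1, 32, 32, 32)))
```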
以下结合图1对本申请的医学图像重建方法进行详细说明。
图5是本申请一实施例提供的医学图像重建方法的示意性流程图,参照图5,该医学图像重建方法的详述如下:
在步骤201中,获取待重建图像的第二隐层向量。
在步骤202中,通过训练后的图像重建网络,基于所述第二隐层向量对所述待重建图像进行图像重建。
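训练完成后的推理过程可以示意如下:从高斯分布采样第二隐层向量,送入训练后的图像重建网络即可得到新的重建样本;代码中的trained_G以随机初始化的占位网络代替,向量维度与输出尺寸均为假设值。

```python
import torch
import torch.nn as nn

# 假设 trained_G 为训练完成的图像重建网络(此处以随机初始化网络占位)
trained_G = nn.Sequential(nn.Linear(128, 32 ** 3), nn.Tanh())

z2 = torch.randn(8, 128)                        # 第二隐层向量:从高斯分布采样
with torch.no_grad():                           # 推理阶段无需梯度
    x_new = trained_G(z2).view(-1, 1, 32, 32, 32)
print(x_new.shape)                              # 8 个重建出的三维图像样本
```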
上述医学图像重建方法,对真实图像样本进行特征编码提取,得到上述真实图像样本的特征编码向量,通过图像重建网络,基于上述特征编码向量进行图像重建得到第一图像,基于上述真实图像样本的隐层向量进行图像重建得到第二图像,同时通过图像判别网络对上述真实图像样本、上述第一图像和上述第二图像进行图像判别,并根据图像判别结果对上述图像重建网络进行训练优化,并通过训练优化后的图像重建网络基于第二隐层向量对待重建图像进行图像重建,为生成对抗网络引入来自于真实图像中的先验知识引导,从而稳定对图像重建网络的训练,易于达到最优收敛,从而解决生成对抗网络训练困难的问题,且重建出的图像更接近于真实图像。
参见图6,在本实施例中,医学图像重建的过程可以包括以下步骤:
在步骤301中,基于图像编码网络对真实图像样本进行特征提取,得到真实图像样本的特征编码向量。
在步骤302中,通过图像重建网络,基于特征编码向量进行图像重建得到第一图像,基于真实图像样本的第一隐层向量进行图像重建得到第二图像。
在步骤303中,通过图像判别网络对真实图像样本、第一图像和第二图像进行图像判别,并根据图像判别结果对图像重建网络进行训练优化。其中,将训练优化后的图像重建网络作为步骤302中的图像重建网络进行下一次图像重建。
在步骤304中,通过编码特征判别网络对步骤301中的特征编码向量和真实图像样本的第一隐层向量进行向量判别。
在步骤305中,基于向量判别结果,对图像编码网络进行优化,并将优化后的图像编码网络作为步骤301中的图像编码网络对下一真实图像样本进行特征提取。
在步骤306中,在通过真实图像样本对图像重建网络训练优化完成后,获取待重建图像的第二隐层向量。
在步骤307中,通过训练后的图像重建网络,基于第二隐层向量对待重建图像进行图像重建。
以下以大鼠脑区的真实fMRI图像为例对本申请实施例进行说明,但并不以此为限。
第一步,将大鼠脑区的真实fMRI图像x_real展成时间序列上的三维影像,并依次输入到图像编码网络中,利用图像编码网络的多个三维卷积层对三维影像进行分层特征抽取,并通过ReLU函数综合线性和非线性特征,输出真实fMRI图像的特征编码向量z_e。
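对于"展成时间序列上的三维影像并依次输入图像编码网络"这一步,下面给出一个简短的用法示意;fMRI的时间点数与空间尺寸均为假设值,编码网络以占位模块代替(实际应为前文所述的多层三维卷积网络)。

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 16, 128))   # 占位的图像编码网络

fmri_4d = torch.randn(1, 20, 32, 32, 16)        # 假设:20 个时间点的三维脑影像
z_e_list = [encoder(fmri_4d[:, t].unsqueeze(1)) for t in range(fmri_4d.shape[1])]
z_e_seq = torch.stack(z_e_list, dim=1)           # (1, 20, 128):逐帧的特征编码向量
print(z_e_seq.shape)
```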
第二步,将真实fMRI图像抽取得到的特征编码向量z_e与从高斯分布中采样得到的隐层向量z_r二者均作为图像重建网络的输入,分别从z_e和z_r中逐级重建fMRI图像x_rec与x_rand。图像重建网络的卷积层为带有近邻上采样的三维可分离卷积层,利用带有近邻上采样的三维可分离卷积操作代替传统的反卷积层,能够降低可学习参数的数量,并提高重建得到的fMRI图像质量,使重建图像的伪影更少、脑区结构更加清晰等。
第三步,将真实fMRI图像x_real、图像x_rec与图像x_rand三者均作为图像判别网络的输入,并根据图像判别网络的判别结果对图像重建网络进行优化,构建"生成-对抗"训练。同时,构建与图像判别网络结构相同的编码特征判别网络,将从真实fMRI图像x_real中编码得到的特征表示向量z_e与从高斯分布中采样得到的隐层向量z_r作为其输入,使编码特征判别网络与图像编码网络也构成"生成-对抗"的训练关系,以代替变分推理,解决变分推理与生成对抗目标函数的训练冲突问题。
第四步,选取最优的损失函数进行网络的训练与优化。对于图像编码网络的训练优化,本实施例巧妙地引入编码特征判别网络代替了传统的变分推理过程,只需最小化重建fMRI图像与真实fMRI图像的逐体素差异即可;而且,本申请选取Wasserstein距离作为损失函数中真实图像分布与重建图像分布的度量工具,同时引入梯度惩罚项裁剪判别器网络梯度,进一步稳定图像编码网络训练。对于图像重建网络的训练,本申请除了选取Wasserstein距离之外,还引入结构相似性度量损失与感知度量损失,确保重建图像在大鼠腹侧被盖区(VTA)、前额皮质(PFC)等关键区域的特征符合真实图像。各个网络的损失函数公式如下:
图像编码网络的损失函数为:
Figure PCTCN2020079678-appb-000011
图像判别网络的损失函数为:
Figure PCTCN2020079678-appb-000012
图像重建网络的损失函数为:
Figure PCTCN2020079678-appb-000013
其中,L_SSIM为结构相似性度量损失函数,L_perceptual代表感知度量损失函数,分别为:
Figure PCTCN2020079678-appb-000014
Figure PCTCN2020079678-appb-000015
最后本方案拟通过图像重叠率(SOR)技术指标来评估重建图像与真实图像的接近程度。在图像重建网络训练优化完成后,通过训练后的图像重建网络从待重建图像的高斯隐层向量中重建出高质量的医学图像样本,起到增强图像样本量的作用,便于后续的分析工作。
本申请实施例,提出了一种融合变分自编码器和生成对抗网络的医学图像重建网络训练方法,与传统的生成对抗网络相比,本申请通过融合变分自编码器引入了来自于真实图像的先验知识引导,从而解决生成对抗网络训练困难的难题。
本申请实施例在变分自编码器和生成对抗网络之间新增了一个单独的编码判别网络,其目标在于代替变分推理的功能,使变分编码器的编码特征向量以一种对抗训练的方式逼近原始高斯隐层向量,从而解决变分推理与生成对抗网络的目标函数之间的冲突。
应理解,上述实施例中各步骤的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
对应于上文实施例中的医学图像重建网络训练方法,图7示出了本申请实施例提供的医学图像重建网络训练装置的结构框图,为了便于说明,仅示出了与本申请实施例相关的部分。
参见图7,本申请实施例中的医学图像重建网络训练装置可以包括特征编码提取模块401、第一图像重建模块402和第一优化模块403。
其中,特征编码提取模块401,用于对真实图像样本进行特征编码提取,得到所述真实图像样本的特征编码向量;
第一图像重建模块402,用于通过图像重建网络,基于所述特征编码向量进行图像重建得到第一图像,基于所述真实图像样本的第一隐层向量进行图像重建得到第二图像;
第一优化模块403,用于通过图像判别网络对所述真实图像样本、所述第一图像和所述第二图像进行图像判别,并根据图像判别结果对所述图像重建网络进行优化。
可选的,特征编码提取模块401可以用于:基于图像编码网络对所述真实图像样本进行特征提取,得到所述真实图像样本的特征编码向量。
可选的,特征编码提取模块401可以具体用于:
通过所述图像编码网络的多个三维卷积层对所述真实图像样本进行分层特征提取;
通过线性函数对提取到的特征进行处理,得到所述真实图像样本的特征编码向量。
可选的,所述线性函数为分段线性函数。
可选的,所述分段线性函数为ReLU函数。
可选的,所述医学图像重建网络训练装置还可以包括第二优化模块;所述第二优化模块用于:
通过编码特征判别网络对所述特征编码向量和所述第一隐层向量进行向量判别;
基于向量判别结果对所述图像编码网络进行优化。
可选的,所述基于向量判别结果对所述图像编码网络进行优化,包括:
基于所述向量判别结果对所述图像编码网络进行对抗训练。
可选的,所述基于所述向量判别结果对所述图像编码网络进行对抗训练,包括:
计算所述第二图像与所述真实图像样本之间的逐体素差异,并通过梯度下降法更新所述图像编码网络的网络参数,直至所述逐体素差异小于或等于预设阈值;
其中,所述逐体素差异为所述图像编码网络的第一损失函数,所述第一损失函数为:
Figure PCTCN2020079678-appb-000016
其中,L_C为所述第一损失函数,z_e为所述特征编码向量,z_r为所述第一隐层向量,C表征所述图像编码网络,E为数学期望。
可选的,所述第一优化模块403可以用于:
根据所述图像判别结果对所述图像重建网络进行对抗训练。
可选的,所述根据所述图像判别结果对所述图像重建网络进行对抗训练,可以包括:
根据所述图像判别结果、结构相似性度量损失函数和感知度量损失函数,确定所述图像重建网络的第二损失函数,并通过梯度下降法更新所述图像重建网络的网络参数,对所述图像重建网络进行训练;
其中,所述第二损失函数为:
Figure PCTCN2020079678-appb-000017
Figure PCTCN2020079678-appb-000018
Figure PCTCN2020079678-appb-000019
Figure PCTCN2020079678-appb-000020
其中,L_G为所述第二损失函数,z_e为所述特征编码向量,z_r为所述第一隐层向量,C表征所述图像编码网络,D为所述图像判别网络,G为所述图像重建网络,E为数学期望,L_SSIM为结构相似性度量损失函数,L_perceptual代表感知度量损失函数,X_real表征所述真实图像,λ_1和λ_2为权重系数,Φ为Gram矩阵,L_D为图像判别网络的损失函数。
可选的,所述第一图像重建模块402具体可以用于:
将所述特征编码向量和所述第一隐层向量输入所述图像重建网络,得到所述第一图像和所述第二图像;其中,所述图像重建网络的卷积层为近邻上采样的三维可分离卷积层。
对应于上文实施例中的医学图像重建方法,图8示出了本申请实施例提供的医学图像重建装置的结构框图,为了便于说明,仅示出了与本申请实施例相关的部分。
参见图8,本申请实施例中的医学图像重建装置可以包括隐层向量获取模块501和第二图像重建模块502。
其中,隐层向量获取模块501,用于获取待重建图像的第二隐层向量;
第二图像重建模块502,用于通过训练后的图像重建网络,基于所述第二隐层向量对所述待重建图像进行图像重建。
需要说明的是,上述装置/单元之间的信息交互、执行过程等内容,由于与本申请方法实施例基于同一构思,其具体功能及带来的技术效果,具体可参见方法实施例部分,此处不再赘述。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将所述装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。实施例中的各功能单元、模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中,上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。另外,各功能单元、模块的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。上述系统中单元、模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
本申请实施例还提供了一种终端设备,参见图9,该终端设备600可以包括:至少一个处理器610、存储器620以及存储在所述存储器620中并可在所述至少一个处理器610上运行的计算机程序,所述处理器610执行所述计算机程序时实现上述任意各个方法实施例中的步骤,例如图2所示实施例中的步骤101至步骤103,或如图5所示实施例中的步骤201至步骤202。或者,处理器610执行所述计算机程序时实现上述各装置实施例中各模块/单元的功能,例如图7所示模块401至403的功能,或如图8所示模块501至502的功能。
示例性的,计算机程序可以被分割成一个或多个模块/单元,一个或者多个模块/单元被存储在存储器620中,并由处理器610执行,以完成本申请。所述一个或多个模块/单元可以是能够完成特定功能的一系列计算机程序段,该程序段用于描述计算机程序在终端设备600中的执行过程。
本领域技术人员可以理解,图9仅仅是终端设备的示例,并不构成对终端设备的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如输入输出设备、网络接入设备、总线等。
处理器610可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
存储器620可以是终端设备的内部存储单元,也可以是终端设备的外部存储设备,例如插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。所述存储器620用于存储所述计算机程序以及终端设备所需的其他程序和数据。所述存储器620还可以用于暂时地存储已经输出或者将要输出的数据。
总线可以是工业标准体系结构(Industry Standard Architecture,ISA)总线、外部设备互连(Peripheral Component,PCI)总线或扩展工业标准体系结构(Extended Industry Standard Architecture,EISA)总线等。总线可以分为地址总线、数据总线、控制总线等。为便于表示,本申请附图中的总线并不限定仅有一根总线或一种类型的总线。
本申请实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现上述医学图像重建网络训练方法各个实施例中的步骤,或实现上述医学图像重建方法各个实施例中的步骤。
本申请实施例提供了一种计算机程序产品,当计算机程序产品在移动终端上运行时,使得移动终端执行时实现上述医学图像重建网络训练方法各个实施例中的步骤,或实现上述医学图像重建方法各个实施例中的步骤。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实现上述实施例方法中的全部或部分流程,可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一计算机可读存储介质中,该计算机程序在被处理器执行时,可实现上述各个方法实施例的步骤。其中,所述计算机程序包括计算机程序代码,所述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读介质至少可以包括:能够将计算机程序代码携带到拍照装置/终端设备的任何实体或装置、记录介质、计算机存储器、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、电载波信号、电信信号以及软件分发介质。例如U盘、移动硬盘、磁碟或者光盘等。在某些司法管辖区,根据立法和专利实践,计算机可读介质不可以是电载波信号和电信信号。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
在本申请所提供的实施例中,应该理解到,所揭露的装置/网络设备和方法,可以通过其它的方式实现。例如,以上所描述的装置/网络设备实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通讯连接可以是通过一些接口,装置或单元的间接耦合或通讯连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
以上所述实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含在本申请的保护范围之内。

Claims (20)

  1. 一种医学图像重建网络训练方法,其中,所述医学图像重建网络训练方法包括:
    对真实图像样本进行特征编码提取,得到所述真实图像样本的特征编码向量;
    通过图像重建网络,基于所述特征编码向量进行图像重建得到第一图像,基于所述真实图像样本的第一隐层向量进行图像重建得到第二图像;
    通过图像判别网络对所述真实图像样本、所述第一图像和所述第二图像进行图像判别,并根据图像判别结果对所述图像重建网络进行优化。
  2. 如权利要求1所述的医学图像重建网络训练方法,其中,所述对真实图像样本进行特征编码提取,得到所述真实图像样本的特征编码向量,包括:
    基于图像编码网络对所述真实图像样本进行特征提取,得到所述真实图像样本的特征编码向量。
  3. 如权利要求2所述的医学图像重建网络训练方法,其中,所述基于图像编码网络对所述真实图像样本进行特征提取,得到所述真实图像样本的特征编码向量,包括:
    通过所述图像编码网络的多个三维卷积层对所述真实图像样本进行分层特征提取;
    通过线性函数对提取到的特征进行处理,得到所述真实图像样本的特征编码向量。
  4. 如权利要求3所述的医学图像重建网络训练方法,其中,所述线性函数为分段线性函数。
  5. 如权利要求4所述的医学图像重建网络训练方法,其中,所述分段线性函数为ReLU函数。
  6. 如权利要求2所述的医学图像重建网络训练方法,其中,所述方法还包括:
    通过编码特征判别网络对所述特征编码向量和所述第一隐层向量进行向量判别;
    基于向量判别结果对所述图像编码网络进行优化。
  7. 如权利要求6所述的医学图像重建网络训练方法,其中,所述基于向量判别结果对所述图像编码网络进行优化,包括:
    基于所述向量判别结果对所述图像编码网络进行对抗训练。
  8. 如权利要求7所述的医学图像重建网络训练方法,其中,所述基于所述向量判别结果对所述图像编码网络进行对抗训练,包括:
    计算所述第二图像与所述真实图像样本之间的逐体素差异,并通过梯度下降法更新所述图像编码网络的网络参数,直至所述逐体素差异小于或等于预设阈值;
    其中,所述逐体素差异为所述图像编码网络的第一损失函数,所述第一损失函数为:
    Figure PCTCN2020079678-appb-100001
    L_C为所述第一损失函数,z_e为所述特征编码向量,z_r为所述第一隐层向量,C表征所述图像编码网络,E为数学期望。
  9. 如权利要求1所述的医学图像重建网络训练方法,其中,所述根据图像判别结果对所述图像重建网络进行优化,包括:
    根据所述图像判别结果对所述图像重建网络进行对抗训练。
  10. 如权利要求9所述的医学图像重建网络训练方法,其中,所述根据所述图像判别结果对所述图像重建网络进行对抗训练,包括:
    根据所述图像判别结果、结构相似性度量损失函数和感知度量损失函数,确定所述图像重建网络的第二损失函数,并通过梯度下降法更新所述图像重建网络的网络参数,对所述图像重建网络进行训练;
    其中,所述第二损失函数为:
    Figure PCTCN2020079678-appb-100002
    Figure PCTCN2020079678-appb-100003
    Figure PCTCN2020079678-appb-100004
    Figure PCTCN2020079678-appb-100005
    L_G为所述第二损失函数,z_e为所述特征编码向量,z_r为所述第一隐层向量,C表征所述图像编码网络,D为所述图像判别网络,G为所述图像重建网络,E为数学期望,L_SSIM为结构相似性度量损失函数,L_perceptual代表感知度量损失函数,X_real表征所述真实图像,λ_1和λ_2为权重系数,Φ为Gram矩阵,L_D为图像判别网络的损失函数。
  11. 如权利要求1所述的医学图像重建网络训练方法,其中,所述通过图像重建网络,基于所述特征编码向量进行图像重建得到第一图像,基于所述真实图像样本的第一隐层向量进行图像重建得到第二图像,包括:
    将所述特征编码向量和所述第一隐层向量输入所述图像重建网络,得到所述第一图像和所述第二图像;其中,所述图像重建网络的卷积层为近邻上采样的三维可分离卷积层。
  12. 一种医学图像重建方法,其中,所述医学图像重建方法包括:
    获取待重建图像的第二隐层向量;
    通过训练后的图像重建网络,基于所述第二隐层向量对所述待重建图像进行图像重建。
  13. 一种医学图像重建网络训练装置,其中,所述医学图像重建网络训练装置包括:
    特征编码提取模块,用于对真实图像样本进行特征编码提取,得到所述真实图像样本的特征编码向量;
    第一图像重建模块,用于通过图像重建网络,基于所述特征编码向量进行图像重建得到第一图像,基于所述真实图像样本的第一隐层向量进行图像重建得到第二图像;
    第一优化模块,用于通过图像判别网络对所述真实图像样本、所述第一图像和所述第二图像进行图像判别,并根据图像判别结果对所述图像重建网络进行优化。
  14. 一种医学图像重建装置,其中,所述医学图像重建装置包括:
    隐层向量获取模块,用于获取待重建图像的第二隐层向量;
    第二图像重建模块,用于通过训练后的图像重建网络,基于所述第二隐层向量对所述待重建图像进行图像重建。
  15. 一种终端设备,其中,所述终端设备包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,所述处理器执行所述计算机可读指令时实现如下步骤:
    对真实图像样本进行特征编码提取,得到所述真实图像样本的特征编码向量;
    通过图像重建网络,基于所述特征编码向量进行图像重建得到第一图像,基于所述真实图像样本的第一隐层向量进行图像重建得到第二图像;
    通过图像判别网络对所述真实图像样本、所述第一图像和所述第二图像进行图像判别,并根据图像判别结果对所述图像重建网络进行优化。
  16. 如权利要求15所述的终端设备,其中,所述对真实图像样本进行特征编码提取,得到所述真实图像样本的特征编码向量,包括:
    基于图像编码网络对所述真实图像样本进行特征提取,得到所述真实图像样本的特征编码向量。
  17. 如权利要求16所述的终端设备,其中,所述处理器执行所述计算机可读指令时还实现如下步骤:
    通过编码特征判别网络对所述特征编码向量和所述第一隐层向量进行向量判别;
    基于向量判别结果对所述图像编码网络进行优化。
  18. 一种终端设备,其中,所述终端设备包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,所述处理器执行所述计算机可读指令时实现如下步骤:
    获取待重建图像的第二隐层向量;
    通过训练后的图像重建网络,基于所述第二隐层向量对所述待重建图像进行图像重建。
  19. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机可读指令,其中,所述计算机可读指令被处理器执行时实现如下步骤:
    对真实图像样本进行特征编码提取,得到所述真实图像样本的特征编码向量;
    通过图像重建网络,基于所述特征编码向量进行图像重建得到第一图像,基于所述真实图像样本的第一隐层向量进行图像重建得到第二图像;
    通过图像判别网络对所述真实图像样本、所述第一图像和所述第二图像进行图像判别,并根据图像判别结果对所述图像重建网络进行优化。
  20. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机可读指令,其中,所述计算机可读指令被处理器执行时实现如下步骤:
    获取待重建图像的第二隐层向量;
    通过训练后的图像重建网络,基于所述第二隐层向量对所述待重建图像进行图像重建。
PCT/CN2020/079678 2020-03-17 2020-03-17 医学图像重建方法、医学图像重建网络训练方法和装置 WO2021184195A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/791,099 US20230032472A1 (en) 2020-03-17 2020-03-17 Method and apparatus for reconstructing medical image
PCT/CN2020/079678 WO2021184195A1 (zh) 2020-03-17 2020-03-17 医学图像重建方法、医学图像重建网络训练方法和装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/079678 WO2021184195A1 (zh) 2020-03-17 2020-03-17 医学图像重建方法、医学图像重建网络训练方法和装置

Publications (1)

Publication Number Publication Date
WO2021184195A1 true WO2021184195A1 (zh) 2021-09-23

Family

ID=77772180

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/079678 WO2021184195A1 (zh) 2020-03-17 2020-03-17 医学图像重建方法、医学图像重建网络训练方法和装置

Country Status (2)

Country Link
US (1) US20230032472A1 (zh)
WO (1) WO2021184195A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393534A (zh) * 2022-10-31 2022-11-25 深圳市宝润科技有限公司 基于深度学习的锥束三维dr重建的方法及系统

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116385330B (zh) * 2023-06-06 2023-09-15 之江实验室 一种利用图知识引导的多模态医学影像生成方法和装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108537743A (zh) * 2018-03-13 2018-09-14 杭州电子科技大学 一种基于生成对抗网络的面部图像增强方法
CN109559358A (zh) * 2018-10-22 2019-04-02 天津大学 一种基于卷积自编码的图像样本升采样方法
CN109685863A (zh) * 2018-12-11 2019-04-26 帝工(杭州)科技产业有限公司 一种重建医学乳房图像的方法
CN110148194A (zh) * 2019-05-07 2019-08-20 北京航空航天大学 图像重建方法和装置
WO2019169594A1 (en) * 2018-03-08 2019-09-12 Intel Corporation Methods and apparatus to generate three-dimensional (3d) model for 3d scene reconstruction
CN110490807A (zh) * 2019-08-27 2019-11-22 中国人民公安大学 图像重建方法、装置及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019169594A1 (en) * 2018-03-08 2019-09-12 Intel Corporation Methods and apparatus to generate three-dimensional (3d) model for 3d scene reconstruction
CN108537743A (zh) * 2018-03-13 2018-09-14 杭州电子科技大学 一种基于生成对抗网络的面部图像增强方法
CN109559358A (zh) * 2018-10-22 2019-04-02 天津大学 一种基于卷积自编码的图像样本升采样方法
CN109685863A (zh) * 2018-12-11 2019-04-26 帝工(杭州)科技产业有限公司 一种重建医学乳房图像的方法
CN110148194A (zh) * 2019-05-07 2019-08-20 北京航空航天大学 图像重建方法和装置
CN110490807A (zh) * 2019-08-27 2019-11-22 中国人民公安大学 图像重建方法、装置及存储介质

Also Published As

Publication number Publication date
US20230032472A1 (en) 2023-02-02

Similar Documents

Publication Publication Date Title
CN111462264B (zh) 医学图像重建方法、医学图像重建网络训练方法和装置
Biffi et al. Explainable anatomical shape analysis through deep hierarchical generative models
US20230342918A1 (en) Image-driven brain atlas construction method, apparatus, device and storage medium
WO2021186592A1 (ja) 診断支援装置及びモデル生成装置
CN112435341B (zh) 三维重建网络的训练方法及装置、三维重建方法及装置
Rutherford et al. Automated brain masking of fetal functional MRI with open data
WO2021184195A1 (zh) 医学图像重建方法、医学图像重建网络训练方法和装置
Zhan et al. LR-cGAN: Latent representation based conditional generative adversarial network for multi-modality MRI synthesis
CN112949654A (zh) 图像检测方法及相关装置、设备
CN115272295A (zh) 基于时域-空域联合状态的动态脑功能网络分析方法及系统
CN115496720A (zh) 基于ViT机制模型的胃肠癌病理图像分割方法及相关设备
CN117274599A (zh) 一种基于组合双任务自编码器的脑磁共振分割方法及系统
Yerukalareddy et al. Brain tumor classification based on mr images using GAN as a pre-trained model
Tiago et al. A domain translation framework with an adversarial denoising diffusion model to generate synthetic datasets of echocardiography images
CN113724185A (zh) 用于图像分类的模型处理方法、装置及存储介质
CN111383217B (zh) 大脑成瘾性状评估的可视化方法、装置及介质
CN114463320B (zh) 一种磁共振成像脑胶质瘤idh基因预测方法及系统
CN115965785A (zh) 图像分割方法、装置、设备、程序产品及介质
KR102400568B1 (ko) 인코더를 이용한 이미지의 특이 영역 분석 방법 및 장치
CN114155232A (zh) 颅内出血区域检测方法、装置、计算机设备及存储介质
CN113850796A (zh) 基于ct数据的肺部疾病识别方法及装置、介质和电子设备
CN115115900A (zh) 图像重建模型的训练方法、装置、设备、介质及程序产品
CN114266738A (zh) 轻度脑损伤磁共振影像数据的纵向分析方法及系统
CN113223104B (zh) 一种基于因果关系的心脏mr图像插补方法及系统
CN116597041B (zh) 一种脑血管病核磁影像清晰度优化方法、系统及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20926334

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20926334

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04-07-2023)