CN111462264B - Medical image reconstruction method, medical image reconstruction network training method and device - Google Patents


Publication number
CN111462264B
Authority
CN
China
Prior art keywords
image
network
image reconstruction
vector
real
Prior art date
Legal status
Active
Application number
CN202010186019.5A
Other languages
Chinese (zh)
Other versions
CN111462264A
Inventor
胡圣烨
王书强
陈卓
申妍燕
张炽堂
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202010186019.5A priority Critical patent/CN111462264B/en
Publication of CN111462264A publication Critical patent/CN111462264A/en
Application granted granted Critical
Publication of CN111462264B publication Critical patent/CN111462264B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/003: Reconstruction from projections, e.g. tomography
    • G06T 9/00: Image coding
    • G06T 9/002: Image coding using neural networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application is applicable to the technical field of image processing and provides a medical image reconstruction method, a medical image reconstruction network training method and a corresponding device. The medical image reconstruction network training method comprises the following steps: performing feature-encoding extraction on a real image sample to obtain a feature encoding vector of the real image sample; through an image reconstruction network, performing image reconstruction based on the feature encoding vector to obtain a first image, and performing image reconstruction based on a first hidden layer vector of the real image sample to obtain a second image; and performing image discrimination on the real image sample, the first image and the second image through an image discrimination network, and optimizing the image reconstruction network according to the image discrimination result. The method introduces prior-knowledge guidance from the real image, which stabilizes training of the image reconstruction network and makes optimal convergence easier to reach, thereby alleviating the difficulty of training a generative adversarial network.

Description

Medical image reconstruction method, medical image reconstruction network training method and device
Technical Field
The embodiment of the application belongs to the technical field of image processing, and particularly relates to a medical image reconstruction method, a medical image reconstruction network training method and a medical image reconstruction network training device.
Background
Functional magnetic resonance imaging (fMRI) is an emerging neuroimaging modality whose principle is to use magnetic resonance imaging to measure the hemodynamic changes induced by neuronal activity. As a non-invasive technique, it can accurately locate the cortical areas involved in specific brain activity and capture blood-oxygen changes that reflect neuronal activity. However, fMRI acquisition is expensive, scanning takes a long time, and some patients cannot be scanned at all (for example, patients with metal implants), so the number of images that can be acquired is often limited in a given application scenario. This greatly limits the application of data-hungry artificial intelligence methods, such as deep learning, in the field of medical image analysis.
One promising solution is to use existing artificial intelligence methods to learn, from a limited number of real image samples, to reconstruct corresponding medical images from Gaussian hidden layer vectors, thereby enlarging the sample size and supporting subsequent image analysis tasks. The generative adversarial network (GAN) is one of the best-performing generative models at present; it has gradually become a research hotspot in deep learning and has begun to be applied in the medical imaging field.
A conventional generative adversarial network can generate diverse new images by learning the real data distribution, but it is difficult to train and does not easily reach optimal convergence.
Disclosure of Invention
In order to overcome the problems in the related art, the embodiment of the application provides a medical image reconstruction method, a medical image reconstruction network training method and a medical image reconstruction network training device.
The application is realized by the following technical scheme:
in a first aspect, embodiments of the present application provide a medical image reconstruction network training method, including:
extracting feature codes of the real image samples to obtain feature code vectors of the real image samples;
performing image reconstruction based on the feature coding vector to obtain a first image through an image reconstruction network, and performing image reconstruction based on a first hidden layer vector of the real image sample to obtain a second image;
and carrying out image discrimination on the real image sample, the first image and the second image through an image discrimination network, and optimizing the image reconstruction network according to an image discrimination result.
In a first possible implementation manner of the first aspect, the extracting feature codes of the real image samples to obtain feature code vectors of the real image samples includes:
Carrying out layered feature extraction on the real image sample through a plurality of three-dimensional convolution layers of an image coding network;
and processing the extracted features through a linear function to obtain feature coding vectors of the real image samples.
In a second possible implementation manner of the first aspect, the method further includes:
carrying out vector discrimination on the feature coding vector and the first hidden layer vector through a coding feature discrimination network;
and optimizing the image coding network based on the vector discrimination result.
In a third possible implementation manner of the first aspect, the optimizing the image coding network based on the vector discrimination result includes:
calculating the voxel-by-voxel difference between the second image and the real image sample, and updating network parameters of the image coding network by a gradient descent method until the voxel-by-voxel difference is smaller than or equal to a preset threshold value;
wherein the voxel-by-voxel difference is a first loss function of the image encoding network, the first loss function being:
(The first loss function L_C is rendered as an image in the original publication.)
wherein L_C is the first loss function, z_e is the feature encoding vector, z_r is the first hidden layer vector, C characterizes the image encoding network, and E denotes the mathematical expectation.
In a fourth possible implementation manner of the first aspect, the optimizing the image reconstruction network according to the image discrimination result includes:
determining a second loss function of the image reconstruction network according to the image discrimination result, the structural similarity measurement loss function and the perception measurement loss function, updating network parameters of the image reconstruction network through a gradient descent method, and training the image reconstruction network;
wherein the second loss function is:
(The second loss function L_G, the structural similarity metric loss L_SSIM, the perceptual metric loss L_perceptual and the discriminator loss L_D are rendered as images in the original publication.)
wherein L_G is the second loss function, z_e is the feature encoding vector, z_r is the first hidden layer vector, C characterizes the image encoding network, D is the image discrimination network, G is the image reconstruction network, E denotes the mathematical expectation, L_SSIM is the structural similarity metric loss function, L_perceptual represents the perceptual metric loss function, X_real characterizes the real image, λ_1 and λ_2 are weight coefficients, φ is the Gram matrix, and L_D is the loss function of the image discrimination network.
In a second aspect, embodiments of the present application provide a medical image reconstruction method, including:
acquiring a second hidden layer vector of the image to be reconstructed;
and carrying out image reconstruction on the image to be reconstructed based on the second hidden layer vector through the trained image reconstruction network.
In a third aspect, embodiments of the present application provide a medical image reconstruction network training apparatus, including:
the feature code extraction module is used for extracting feature codes of the real image samples to obtain feature code vectors of the real image samples;
the first image reconstruction module is used for carrying out image reconstruction based on the feature coding vector to obtain a first image through an image reconstruction network, and carrying out image reconstruction based on a first hidden layer vector of the real image sample to obtain a second image;
the first optimizing module is used for performing image discrimination on the real image sample, the first image and the second image through an image discrimination network, and optimizing the image reconstruction network according to the image discrimination result.
In a fourth aspect, embodiments of the present application provide a medical image reconstruction apparatus, including:
the hidden layer vector acquisition module is used for acquiring a second hidden layer vector of the image to be reconstructed;
and the second image reconstruction module is used for performing image reconstruction on the image to be reconstructed through the trained image reconstruction network based on the second hidden layer vector.
In a fifth aspect, embodiments of the present application provide a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the medical image reconstruction network training method according to the first aspect or implements the medical image reconstruction method according to the second aspect when the processor executes the computer program.
In a sixth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the medical image reconstruction network training method as described in the first aspect, or implements the medical image reconstruction method as described in the second aspect.
In a seventh aspect, embodiments of the present application provide a computer program product, which when run on a terminal device, causes the terminal device to perform the medical image reconstruction network training method as described in the first aspect, or to implement the medical image reconstruction method as described in the second aspect.
Compared with the prior art, the embodiment of the application has the beneficial effects that:
according to the embodiment of the application, feature coding extraction is carried out on a real image sample to obtain a feature coding vector of the real image sample, image reconstruction is carried out through an image reconstruction network based on the feature coding vector to obtain a first image, image reconstruction is carried out based on a hidden layer vector of the real image sample to obtain a second image, meanwhile, image discrimination is carried out on the real image sample, the first image and the second image through an image discrimination network, the image reconstruction network is optimized according to an image discrimination result, the optimized image reconstruction network is used for image reconstruction work to generate a priori knowledge guide from the real image, so that training of the image reconstruction network is stabilized, optimal convergence is easy to achieve, and the problem that training of an countermeasure network is difficult to generate is solved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
FIG. 2 is a flow chart of a medical image reconstruction network training method according to an embodiment of the present application;
FIG. 3 is a flow chart of a medical image reconstruction network training method according to an embodiment of the present application;
FIG. 4 is a flow chart of a medical image reconstruction network training method according to an embodiment of the present application;
FIG. 5 is a flow chart of a medical image reconstruction method according to an embodiment of the present application;
FIG. 6 is a flow chart of medical image reconstruction provided in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a medical image reconstruction network training apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural view of a medical image reconstruction apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting the [described condition or event]" or "in response to detecting the [described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Functional magnetic resonance imaging (fMRI) is an emerging neuroimaging modality whose principle is to use magnetic resonance imaging to measure the hemodynamic changes induced by neuronal activity. As a non-invasive technique, it can accurately locate the cortical areas involved in specific brain activity and capture blood-oxygen changes that reflect neuronal activity. However, fMRI acquisition is expensive, scanning takes a long time, and some patients cannot be scanned at all (for example, patients with metal implants), so the number of images that can be acquired is often limited in a given application scenario. This greatly limits the application of data-hungry artificial intelligence methods, such as deep learning, in the field of medical image analysis.
One promising solution is to use existing artificial intelligence methods to learn, from a limited number of real image samples, to reconstruct corresponding medical images from Gaussian hidden layer vectors, thereby enlarging the sample size and supporting subsequent image analysis tasks. The generative adversarial network, first proposed by Ian Goodfellow et al. in 2014, is one of the best-performing generative models at present; by capturing the latent distribution of real data with a generator, it can generate samples of the real data distribution from a hidden layer space. Since then, the GAN has gradually become a research hotspot in deep learning and has begun to be applied in various fields. Besides reconstructing the original image from hidden layer vectors, another approach is to synthesize a medical image of one modality from a medical image of another modality, for example synthesizing a corresponding PET image from a CT image of the same patient. Many researchers have done much work in this direction; however, cross-modality synthesis requires a large amount of image data of the other modality to train the model, and the diversity of its synthetic samples is limited. Therefore, the most attractive idea is still how to stably reconstruct corresponding medical images from hidden layer vectors.
Although in this approach the generative adversarial network can generate diverse new images by learning the real data distribution, its biggest problem is that training is difficult and optimal convergence is not easily reached. The goal of a GAN is to make the data distribution fitted by the generator approximate the real data distribution, and the inventors found in their research that a generation network without any prior knowledge knows nothing about the real data distribution and can only probe blindly according to the real/fake feedback of the discriminator. The variational autoencoder, another strong generative model, does not have this problem: it first extracts the encoded feature vector of the real image and then, through resampling for variational inference, decodes the hidden vector according to the variational result.
Inspired by this mechanism of the variational autoencoder, the present application introduces its encoded feature vector into GAN training as feature prior knowledge about the real image, giving the generation network a relatively clear optimization direction and thereby addressing the problems of difficult training, long training time and easy collapse. We also found that simply stitching a variational autoencoder onto a generative adversarial network is not feasible, because there is an optimization conflict between variational inference and the GAN objective function: the two cannot reach optimal convergence at the same time. To solve this problem, the present application further introduces a separate encoding discriminator, so that the optimization of the variational autoencoder is also brought under the "generation-adversarial" framework, resolving the optimization conflict between variational inference and the GAN objective function.
By way of example, the embodiments of the present application may be applied to an exemplary scenario as shown in fig. 1. The terminal 10 and the server 20 form application scenes of the medical image reconstruction network training method and the medical image reconstruction method.
Specifically, the terminal 10 is configured to acquire a real image sample of the subject and send it to the server 20. The server 20 is configured to: perform feature-encoding extraction on the real image sample to obtain its feature encoding vector; through an image reconstruction network, perform image reconstruction based on the feature encoding vector to obtain a first image, and based on a hidden layer vector of the real image sample to obtain a second image; perform image discrimination on the real image sample, the first image and the second image through an image discrimination network; optimize the image reconstruction network according to the image discrimination result; and use the optimized image reconstruction network for image reconstruction. The generation network thus introduces prior-knowledge guidance from the real image, which stabilizes training of the image reconstruction network, facilitates optimal convergence, and alleviates the difficulty of training a generative adversarial network.
The medical image reconstruction network training method of the present application is described in detail below with reference to fig. 1.
Fig. 2 is a schematic flowchart of a medical image reconstruction network training method according to an embodiment of the present application, and referring to fig. 2, the medical image reconstruction network training method is described in detail as follows:
in step 101, feature encoding extraction is performed on a real image sample, so as to obtain a feature encoding vector of the real image sample.
In one embodiment, in step 101, feature extraction may be performed on the real image sample through an image encoding network, so as to obtain a feature encoding vector of the real image sample.
For example, referring to fig. 3, the feature extraction of the real image sample by the image coding network to obtain a feature coding vector of the real image sample may specifically include:
in step 1011, hierarchical feature extraction is performed on the real image samples through a plurality of three-dimensional convolution layers of the image coding network.
In step 1012, the extracted features are processed by a linear function to obtain feature encoding vectors of the real image samples.
In one example scenario, the real image sample may be rendered as a sequence of three-dimensional images over time; the three-dimensional images are input into the image encoding network in order, hierarchical feature extraction is performed on them with the network's multiple three-dimensional convolution layers, and the linear and nonlinear features of the three-dimensional images are combined through a linear function to obtain the feature encoding representation vector of the real image sample.
Wherein the linear function is a piecewise linear function. Specifically, the linear features and the nonlinear features of the three-dimensional image are processed through a piecewise linear function, so that feature coding representation vectors of the real image samples are obtained.
For example, the piecewise linear function may be a ReLU function. Specifically, linear features and nonlinear features of the three-dimensional image are processed through a ReLU function, and feature coding representation vectors of real image samples are obtained.
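As a minimal illustration of this step, the sketch below flattens a toy 3-D feature map and applies a linear projection followed by the ReLU piecewise linear function to produce an encoding vector. The shapes and weights here are hypothetical, not the patent's actual network:

```python
import numpy as np

def relu(x):
    # Piecewise linear activation: max(0, x), applied element-wise
    return np.maximum(0.0, x)

def encode_features(feature_map, weight, bias):
    """Flatten a (D, H, W) feature map and project it to an
    encoding vector with a linear layer followed by ReLU."""
    flat = feature_map.reshape(-1)
    return relu(weight @ flat + bias)

rng = np.random.default_rng(0)
fmap = rng.standard_normal((4, 4, 4))     # toy 3-D feature map
W = rng.standard_normal((8, fmap.size))   # toy projection to an 8-dim code
b = np.zeros(8)
z_e = encode_features(fmap, W, b)
print(z_e.shape)          # (8,)
print((z_e >= 0).all())   # True: ReLU output is non-negative
```

In the actual network the projection weights are learned; the sketch only shows how linear and nonlinear processing combine to yield the feature encoding vector.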
In step 102, through an image reconstruction network, performing image reconstruction based on the feature encoding vector to obtain a first image, and performing image reconstruction based on the first hidden layer vector of the real image sample to obtain a second image.
In one embodiment, the feature code vector and the first hidden layer vector may be input to the image reconstruction network to obtain the first image and the second image; the convolution layer of the image generation network in the embodiment of the application is a three-dimensional separable convolution layer with neighbor upsampling.
For example, a feature code vector extracted from a real image sample and a first hidden layer vector sampled from a gaussian distribution of the real image sample may be used as inputs of an image reconstruction network, and a first image and a second image may be reconstructed step by step from the feature code vector and the first hidden layer vector, respectively. In this embodiment, the three-dimensional separable convolution layer with neighbor upsampling is used to replace the deconvolution layer in the traditional image reconstruction network, so that the number of the learnable parameters can be reduced, the quality of the generated fMRI image can be improved, and the reconstructed image has fewer artifacts, a clearer structure and the like.
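To make this design choice concrete, the following sketch (illustrative only; the patent does not disclose exact layer shapes) shows nearest-neighbour upsampling of a 3-D volume and compares the learnable-parameter count of a full 3-D convolution with a depthwise-separable one:

```python
import numpy as np

def nn_upsample3d(vol, factor=2):
    # Nearest-neighbour upsampling: repeat each voxel along every axis
    return vol.repeat(factor, axis=0).repeat(factor, axis=1).repeat(factor, axis=2)

vol = np.arange(8, dtype=float).reshape(2, 2, 2)
up = nn_upsample3d(vol)
print(up.shape)  # (4, 4, 4)

# Parameter counts for one layer, C_in -> C_out channels, k x k x k kernel
# (hypothetical channel sizes, chosen only for the comparison):
C_in, C_out, k = 64, 32, 3
standard = C_in * C_out * k**3          # full 3-D convolution
separable = C_in * k**3 + C_in * C_out  # depthwise conv + 1x1x1 pointwise conv
print(standard, separable)  # the separable layer has far fewer parameters
```

The fewer learnable parameters of the separable layer are what the text above credits for easier training and cleaner reconstructions.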
In step 103, image discrimination is performed on the real image sample, the first image and the second image through an image discrimination network, and the image reconstruction network is optimized according to the image discrimination result.
Specifically, the real image sample, the first image and the second image can be used as the input of the image discrimination network, the image reconstruction network is optimized according to the discrimination result of the image discrimination network, the generation-countermeasure training is constructed, and the image reconstruction network after the optimization training is used for image reconstruction.
After the image reconstruction network is optimized in step 103, it continues to be used in step 102 to reconstruct a first image and a second image; step 103 is then executed again after the first image and the second image are obtained, and the two steps are repeated in a loop.
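The alternation between reconstruction (step 102) and discrimination-driven optimization (step 103) can be sketched with a deliberately tiny scalar stand-in. This is a hypothetical simplification showing only the loop structure, not the patent's networks: the "generator" is a single mean parameter and the "critic" a leaky scalar score.

```python
# Toy alternating "generation-adversarial" loop: the generator parameter g
# tries to match the real data mean (3.0); the critic score d pushes it there.
target_mean = 3.0
g, d = 0.0, 0.0
lr_d, lr_g, leak = 0.1, 0.5, 0.9
for _ in range(500):
    # critic step: score how far the generated mean is from the real one
    d = leak * d + lr_d * (target_mean - g)
    # generator step: follow the critic's feedback
    g = g + lr_g * d
print(round(g, 3))  # converges toward 3.0, the real distribution's mean
```

The point of the sketch is that neither player is trained to completion; each update uses the other's most recent state, exactly as steps 102 and 103 are executed in a loop above.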
According to the medical image reconstruction network training method, feature-encoding extraction is performed on a real image sample to obtain its feature encoding vector; through an image reconstruction network, image reconstruction is performed based on the feature encoding vector to obtain a first image, and based on a hidden layer vector of the real image sample to obtain a second image; meanwhile, image discrimination is performed on the real image sample, the first image and the second image through an image discrimination network, and the image reconstruction network is optimized according to the image discrimination result. Using the optimized image reconstruction network for image reconstruction introduces prior-knowledge guidance from the real image, stabilizes training of the image reconstruction network, makes optimal convergence easier to reach, and alleviates the difficulty of training a generative adversarial network.
Fig. 4 is a schematic flowchart of a medical image reconstruction network training method according to an embodiment of the present application, and referring to fig. 4, based on the embodiment shown in fig. 2, the medical image reconstruction network training method may further include:
in step 104, vector discrimination is performed on the feature code vector and the first hidden layer vector through a code feature discrimination network.
In step 105, the image coding network is optimized based on the vector discrimination result.
After the feature coding vector is obtained in the step 101, the feature coding vector and the first hidden layer vector of the real image sample may be optimized through the step 104 and the step 105, and the optimized image coding network is used as the image coding network in the step 101, and may be used to execute the step 101 again; the image coding network is optimized by repeating the steps.
In one embodiment, the image coding network may be subjected to countermeasure training based on the vector discrimination result, so as to optimize the image coding network.
Specifically, an encoding feature discrimination network with the same structure as the image discrimination network can be constructed, and the feature encoding vector obtained by encoding the real image sample and the first hidden layer vector sampled from a Gaussian distribution are used as its inputs, so that the encoding feature discrimination network and the image encoding network also form a "generation-adversarial" training relationship. This replaces variational inference and resolves the training conflict between variational inference and the GAN objective function.
In one embodiment, training the image coding network based on the vector discrimination result may specifically include: calculating the voxel-by-voxel difference between the second image and the real image sample, and updating the network parameters of the image coding network by gradient descent until the voxel-by-voxel difference is smaller than or equal to a preset threshold, thereby training the image coding network. The voxel-by-voxel difference serves as the first loss function of the image coding network.
For training optimization of the image coding network, a coding feature discrimination network is introduced to replace the original variational inference process. During training of the image coding network, the voxel-by-voxel difference between the reconstructed fMRI image and the real fMRI image is first calculated, and the network parameters of the image coding network are updated by gradient descent until the voxel-by-voxel difference is smaller than or equal to a first preset threshold. In addition, the Wasserstein distance is selected in the first loss function as the measure between the real image distribution and the reconstructed image distribution, and a gradient penalty term is introduced to clip the discriminator's network gradient, further stabilizing network training.
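As an illustrative sketch (not part of the patent text), the voxel-by-voxel difference and its threshold-based stopping criterion can be expressed as follows, treating volumes as flat lists of voxel intensities; the function names and the threshold value are assumptions:

```python
# Sketch of the voxel-by-voxel difference check described above.
# Volumes are flat lists of voxel intensities; names are illustrative.

def voxelwise_difference(reconstructed, real):
    """Mean absolute voxel-by-voxel difference between two volumes."""
    assert len(reconstructed) == len(real)
    return sum(abs(a - b) for a, b in zip(reconstructed, real)) / len(real)

def encoder_converged(reconstructed, real, threshold=0.1):
    """Stop criterion: difference at or below the preset threshold."""
    return voxelwise_difference(reconstructed, real) <= threshold

real = [0.2, 0.5, 0.9, 0.1]
recon = [0.25, 0.45, 0.85, 0.15]
print(voxelwise_difference(recon, real))
print(encoder_converged(recon, real))  # True: mean difference is about 0.05
```

In a real training loop this check would run after each gradient-descent parameter update rather than once.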
Illustratively, the first loss function may be:
L_C = E[C(z_e)] − E[C(z_r)]
where L_C is the first loss function, z_e is the feature coding vector, z_r is the first hidden layer vector, C characterizes the image coding network, and E denotes mathematical expectation.
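A minimal sketch of how a score gap of the form E[C(z_e)] − E[C(z_r)] could be estimated from samples; the toy linear critic and all numeric values are illustrative assumptions standing in for the coding feature discrimination network:

```python
# Empirical estimate of E[C(z_e)] - E[C(z_r)] over two small batches of
# 2-D code vectors. The critic is a toy linear function, not the network
# from the patent.

def mean(xs):
    return sum(xs) / len(xs)

def critic(z, w=(0.5, -0.2)):
    """Toy linear critic score for a 2-D code vector (illustrative only)."""
    return w[0] * z[0] + w[1] * z[1]

def code_loss(encoded_codes, gaussian_codes):
    """Empirical estimate of E[C(z_e)] - E[C(z_r)]."""
    return (mean([critic(z) for z in encoded_codes])
            - mean([critic(z) for z in gaussian_codes]))

z_e = [(1.0, 0.0), (0.8, 0.2)]   # codes from the encoder (assumed)
z_r = [(0.0, 1.0), (0.2, 0.8)]   # codes sampled from a Gaussian (assumed)
print(code_loss(z_e, z_r))
```

Driving this gap toward zero is what makes the encoded codes indistinguishable from Gaussian samples under the critic.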
In one embodiment, the optimizing of the image reconstruction network according to the image discrimination result in step 103 may specifically be: performing adversarial training on the image reconstruction network according to the image discrimination result.
Performing adversarial training on the image reconstruction network according to the image discrimination result may include: determining a second loss function of the image reconstruction network according to the image discrimination result, the structural similarity metric loss function and the perceptual metric loss function, and updating the network parameters of the image reconstruction network by gradient descent to train the image reconstruction network.
For example, adversarial training may be performed on the image reconstruction network according to the image discrimination result as follows: if the discrimination result of the image discrimination network judges the reconstructed image to be close to the real image, the network parameters of the image reconstruction network are updated by gradient descent with only a first preset amplitude, or not updated at all; if the discrimination result clearly distinguishes the reconstructed image from the real image, the image reconstruction network updates its network parameters with a second preset amplitude, where the second preset amplitude is larger than the first preset amplitude. In addition to selecting the Wasserstein distance in the second loss function as the measure between the real image distribution and the reconstructed image distribution, a structural similarity metric loss and a perceptual metric loss are introduced to ensure that the characteristics of the reconstructed image are more consistent with the real image.
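The asymmetric update rule above can be sketched as follows; the amplitude values and the way discriminator scores are compared are illustrative assumptions:

```python
# Sketch of the asymmetric update amplitude: when the discriminator already
# scores the reconstruction at least as highly as the real image, the
# reconstruction network gets only a small update; otherwise a larger one.
# Amplitudes are illustrative, not from the patent.

def generator_step_size(disc_score_real, disc_score_recon,
                        small=0.0001, large=0.001):
    """Pick the gradient-descent amplitude from the discrimination result."""
    if disc_score_recon >= disc_score_real:
        return small   # reconstruction already judged close to real
    return large       # reconstruction clearly detected: larger update

print(generator_step_size(0.9, 0.95))  # reconstruction fools discriminator
print(generator_step_size(0.9, 0.30))  # reconstruction clearly detected
```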
Illustratively, the second loss function may be:
L_G = −E[D(G(z_e))] − E[D(G(z_r))] + λ_1·L_SSIM + λ_2·L_perceptual

L_SSIM = 1 − SSIM(G(z_e), X_real)

L_perceptual = ||φ(G(z_e)) − φ(X_real)||_2^2

L_D = E[D(G(z_e))] + E[D(G(z_r))] − 2·E[D(X_real)] + gradient penalty term
where L_G is the second loss function, z_e is the feature coding vector, z_r is the first hidden layer vector, C characterizes the image coding network, D is the image discrimination network, G is the image reconstruction network, E denotes mathematical expectation, L_SSIM is the structural similarity metric loss function, L_perceptual is the perceptual metric loss function, X_real characterizes the real image, λ_1 and λ_2 are weight coefficients, φ is the Gram matrix, and L_D is the loss function of the image discrimination network.
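A sketch of the Gram-matrix-based perceptual term: assuming φ yields a small channels-by-positions feature map, the loss compares the Gram matrices of the real and reconstructed features. Plain lists are used and all shapes are illustrative:

```python
# Gram-matrix perceptual loss sketch. A feature map is a list of channels,
# each channel a list of activations at spatial positions. phi and the
# feature values are stand-ins for the extractor in the patent.

def gram(features):
    """Gram matrix G[i][j] = inner product of channel i and channel j."""
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

def perceptual_loss(feat_real, feat_recon):
    """Squared Frobenius distance between the two Gram matrices."""
    g1, g2 = gram(feat_real), gram(feat_recon)
    return sum((g1[i][j] - g2[i][j]) ** 2
               for i in range(len(g1)) for j in range(len(g1)))

feat_real = [[1.0, 0.0], [0.0, 1.0]]   # 2 channels, 2 positions (assumed)
feat_recon = [[1.0, 0.0], [0.0, 0.0]]
print(perceptual_loss(feat_real, feat_recon))  # 1.0: only channel 2 differs
```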
In this embodiment, the closeness of the image reconstructed by the image reconstruction network to the real image may be evaluated by the image overlap ratio (SOR) technical index. After training and optimization of the image reconstruction network are completed, high-quality medical image samples can be reconstructed from Gaussian hidden layer vectors through the trained image reconstruction network, thereby enlarging the effective image sample size and facilitating subsequent analysis work.
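The overlap evaluation can be sketched as an intersection-over-union on binarized volumes; the patent does not give the exact SOR formula, so this form is an assumption:

```python
# Overlap-ratio sketch on binarized volumes: fraction of voxels active in
# both volumes over voxels active in either. The exact SOR definition in
# the patent may differ.

def overlap_ratio(mask_a, mask_b):
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0

real_mask = [1, 1, 0, 1, 0]
recon_mask = [1, 0, 0, 1, 1]
print(overlap_ratio(real_mask, recon_mask))  # 2 shared of 4 active -> 0.5
```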
The medical image reconstruction method of the present application is described in detail below with reference to fig. 1.
Fig. 5 is a schematic flowchart of a medical image reconstruction method according to an embodiment of the present application, and referring to fig. 5, the medical image reconstruction method is described in detail as follows:
In step 201, a second hidden layer vector of the image to be reconstructed is acquired.
In step 202, through the trained image reconstruction network, image reconstruction is performed on the image to be reconstructed based on the second hidden layer vector.
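Obtaining a hidden layer vector for reconstruction can be sketched as sampling from a standard Gaussian distribution; the dimension and seed below are illustrative assumptions:

```python
# Sketch of drawing a hidden layer vector from a standard Gaussian, the
# kind of input the trained reconstruction network expects.

import random

def sample_hidden_vector(dim, seed=None):
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

z = sample_hidden_vector(8, seed=42)  # dimension 8 is illustrative
print(len(z))  # 8
```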
According to the medical image reconstruction method, feature coding extraction is performed on a real image sample to obtain its feature coding vector; an image reconstruction network reconstructs a first image from the feature coding vector and a second image from a hidden layer vector of the real image sample; an image discrimination network performs image discrimination on the real image sample, the first image and the second image, and the image reconstruction network is trained and optimized according to the image discrimination result; the image to be reconstructed is then reconstructed, based on the second hidden layer vector, through the image reconstruction network after training optimization. Introducing prior knowledge guidance from the real image into the generative adversarial network stabilizes the training of the image reconstruction network, makes optimal convergence easier to reach, alleviates the difficulty of generative adversarial network training, and brings the reconstructed image closer to the real image.
Referring to fig. 6, in the present embodiment, the process of medical image reconstruction may include the steps of:
in step 301, feature extraction is performed on a real image sample based on an image coding network, so as to obtain a feature coding vector of the real image sample.
In step 302, through an image reconstruction network, performing image reconstruction based on the feature encoding vector to obtain a first image, and performing image reconstruction based on a first hidden layer vector of a real image sample to obtain a second image.
In step 303, image discrimination is performed on the real image sample, the first image and the second image through the image discrimination network, and training optimization is performed on the image reconstruction network according to the image discrimination result. Wherein, the image reconstruction network after training optimization is used as the image reconstruction network in step 302 to perform the next image reconstruction.
In step 304, vector discrimination is performed on the feature code vector in step 301 and the first hidden layer vector of the real image sample through the code feature discrimination network.
In step 305, the image coding network is optimized based on the vector discrimination result, and the optimized image coding network is used as the image coding network in step 301 to perform feature extraction on the next real image sample.
In step 306, after the optimization of the image reconstruction network training by the real image samples is completed, a second hidden layer vector of the image to be reconstructed is obtained.
In step 307, the image to be reconstructed is reconstructed based on the second hidden layer vector through the trained image reconstruction network.
The following describes embodiments of the present application by taking a real fMRI image of a brain region of a rat as an example, but not limited thereto.
First, a real fMRI image x_real of the rat brain region is obtained as a sequence of three-dimensional images over time. These are input in turn into the image coding network; hierarchical features are extracted by several three-dimensional convolution layers of the image coding network, linear and nonlinear features are combined by the ReLU function, and the feature coding vector z_e of the real fMRI image is output.
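The ReLU function mentioned above keeps positive activations and zeroes the rest; a one-line sketch with illustrative values:

```python
# ReLU nonlinearity used in the encoder: identity for positive
# activations, zero otherwise.

def relu(x):
    return x if x > 0.0 else 0.0

activations = [-1.5, 0.0, 0.7, 2.0]
print([relu(a) for a in activations])  # [0.0, 0.0, 0.7, 2.0]
```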
Second, the feature coding vector z_e extracted from the real fMRI image and the hidden layer vector z_r sampled from a Gaussian distribution are both used as inputs of the image reconstruction network, which progressively reconstructs the fMRI images x_rec and x_rand from z_e and z_r respectively. The convolution layers of the image reconstruction network are three-dimensional separable convolution layers with neighbor upsampling; replacing the traditional deconvolution layer with this three-dimensional separable convolution operation with neighbor upsampling reduces the number of learnable parameters, improves the quality of the reconstructed fMRI image, reduces artifacts in the reconstructed image, and makes the brain region structure clearer.
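The parameter saving of the separable convolution can be checked by counting weights. A depthwise k×k×k filter per input channel followed by a 1×1×1 pointwise mixing layer is assumed here, with biases ignored; the channel counts are illustrative:

```python
# Parameter-count sketch: why a depthwise-separable 3-D convolution has
# fewer learnable weights than a standard 3-D convolution.

def standard_conv3d_params(c_in, c_out, k):
    # every output channel has a full k*k*k filter over all input channels
    return k ** 3 * c_in * c_out

def separable_conv3d_params(c_in, c_out, k):
    # one k*k*k depthwise filter per input channel + 1x1x1 pointwise mixing
    return k ** 3 * c_in + c_in * c_out

c_in, c_out, k = 32, 64, 3   # illustrative layer sizes
print(standard_conv3d_params(c_in, c_out, k))   # 55296
print(separable_conv3d_params(c_in, c_out, k))  # 2912
```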
Third, the real fMRI image x_real and the images x_rec and x_rand are all used as inputs of the image discrimination network, and the image reconstructor is optimized according to the discrimination result of the image discrimination network, thereby constructing "generation-adversarial" training. Meanwhile, a coding feature discrimination network with the same structure as the image discrimination network is constructed, taking as inputs the feature coding vector z_e encoded from the real fMRI image x_real and the hidden layer vector z_r sampled from a Gaussian distribution, so that the coding feature discrimination network and the image coding network also form a "generation-adversarial" training relationship, replacing variational inference and resolving the conflict between the variational inference objective and the generative adversarial objective function.
Fourth, suitable loss functions are selected to train and optimize each network. For training optimization of the image coding network, this embodiment introduces a coding feature discrimination network to replace the traditional variational inference process, so that only the voxel-by-voxel difference between the reconstructed fMRI image and the real fMRI image needs to be minimized. Moreover, the Wasserstein distance is selected in the loss function as the measure between the real image distribution and the reconstructed image distribution, and a gradient penalty term is introduced to clip the discriminator's network gradient, further stabilizing image coding network training. For training the image reconstructor network, the structural similarity metric loss and the perceptual metric loss are introduced in addition to the Wasserstein distance, ensuring that the characteristics of the reconstructed image in key regions such as the rat ventral tegmental area (VTA) and prefrontal cortex (PFC) match the real image. The loss function of each network is given below:
The loss function of the image coding network is:
L_C = E[C(z_e)] − E[C(z_r)]
the loss function of the image discrimination network is:
L_D = E[D(x_rec)] + E[D(x_rand)] − 2·E[D(x_real)] + gradient penalty term
the loss function of the image reconstruction network is:
L_G = −E[D(x_rec)] − E[D(x_rand)] + λ_1·L_SSIM + λ_2·L_perceptual
where L_SSIM is the structural similarity metric loss function and L_perceptual is the perceptual metric loss function, respectively:
L_SSIM = 1 − SSIM(x_rec, x_real)

L_perceptual = ||φ(x_rec) − φ(x_real)||_2^2
finally, the scheme is to evaluate the proximity degree of the reconstructed image and the real image through an image overlap ratio (SOR) technical index. After the training and optimizing of the image reconstruction network are completed, high-quality medical image samples are reconstructed from Gaussian hidden layer vectors of the image to be reconstructed through the trained image reconstruction network, so that the effect of enhancing the image sample size is achieved, and subsequent analysis work is facilitated.
Compared with a traditional generative adversarial network, prior knowledge guidance from the real image is introduced through the fused variational autoencoder, thereby alleviating the difficulty of generative adversarial network training.
The embodiments of the present application add an independent coding discrimination network between the variational autoencoder and the generative adversarial network, with the aim of replacing variational inference, so that the feature coding vector of the variational encoder approximates the original Gaussian hidden layer vector through adversarial training, thereby resolving the conflict between variational inference and the objective function of the generative adversarial network.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the medical image reconstruction network training method of the above embodiments, fig. 7 shows a block diagram of the medical image reconstruction network training apparatus provided in the embodiments of the present application; for convenience of explanation, only the portions relevant to the embodiments of the present application are shown.
Referring to fig. 7, the medical image reconstruction network training apparatus in an embodiment of the present application may include a feature code extraction module 401, a first image reconstruction module 402, and an optimization module 403.
The feature code extraction module 401 is configured to perform feature code extraction on a real image sample, so as to obtain a feature code vector of the real image sample;
a first image reconstruction module 402, configured to perform image reconstruction based on the feature encoding vector to obtain a first image, and perform image reconstruction based on a first hidden layer vector of the real image sample to obtain a second image through an image reconstruction network;
the optimizing module 403 is configured to perform image discrimination on the real image sample, the first image, and the second image through an image discrimination network, and optimize the image reconstruction network according to an image discrimination result.
Alternatively, the feature code extraction module 401 may be configured to: and extracting the characteristics of the real image sample based on an image coding network to obtain the characteristic coding vector of the real image sample.
Alternatively, the feature code extraction module 401 may be specifically configured to:
carrying out layered feature extraction on the real image sample through a plurality of three-dimensional convolution layers of the image coding network;
and processing the extracted features through a linear function to obtain feature coding vectors of the real image samples.
Optionally, the linear function is a piecewise linear function.
Optionally, the piecewise linear function is a ReLU function.
Optionally, the medical image reconstruction network training apparatus may further include a second optimization module; the second optimizing module is used for:
carrying out vector discrimination on the feature coding vector and the first hidden layer vector through a coding feature discrimination network;
and optimizing the image coding network based on the vector discrimination result.
Optionally, the optimizing of the image coding network based on the vector discrimination result includes:
performing adversarial training on the image coding network based on the vector discrimination result.
Optionally, the performing of adversarial training on the image coding network based on the vector discrimination result includes:
calculating the voxel-by-voxel difference between the second image and the real image sample, and updating network parameters of the image coding network by a gradient descent method until the voxel-by-voxel difference is smaller than or equal to a preset threshold value;
wherein the voxel-by-voxel difference is a first loss function of the image encoding network, the first loss function being:
L_C = E[C(z_e)] − E[C(z_r)]
where L_C is the first loss function, z_e is the feature coding vector, z_r is the first hidden layer vector, C characterizes the image coding network, and E denotes mathematical expectation.
Alternatively, the optimizing module 403 may be configured to:
and performing countermeasure training on the image reconstruction network according to the image discrimination result.
Optionally, the performing of adversarial training on the image reconstruction network according to the image discrimination result may include:
determining a second loss function of the image reconstruction network according to the image discrimination result, the structural similarity metric loss function and the perceptual metric loss function, and updating the network parameters of the image reconstruction network by gradient descent to train the image reconstruction network;
Wherein the second loss function is:
L_G = −E[D(G(z_e))] − E[D(G(z_r))] + λ_1·L_SSIM + λ_2·L_perceptual

L_SSIM = 1 − SSIM(G(z_e), X_real)

L_perceptual = ||φ(G(z_e)) − φ(X_real)||_2^2

L_D = E[D(G(z_e))] + E[D(G(z_r))] − 2·E[D(X_real)] + gradient penalty term
L G z as the second loss function e Encoding a vector for the feature, z r For the first hidden layer vector, C characterizes the image coding network, D is the image discrimination network, G is the image reconstruction network, E is a mathematical expectation, L SSIM To measure the loss function for structural similarity, L perceptual Representing a perceptual metric loss function, X real Characterizing the real image lambda 1 And lambda (lambda) 2 As the weight coefficient, phi is Gram matrix, L D A loss function of the network is determined for the image.
Optionally, the first image reconstruction module 402 may specifically be configured to:
inputting the feature coding vector and the first hidden layer vector into the image reconstruction network to obtain the first image and the second image; the convolution layers of the image reconstruction network are three-dimensional separable convolution layers with neighbor upsampling.
Corresponding to the medical image reconstruction method of the above embodiments, fig. 8 shows a block diagram of the medical image reconstruction apparatus provided in the embodiments of the present application; for convenience of explanation, only the portions relevant to the embodiments of the present application are shown.
Referring to fig. 8, a medical image reconstruction apparatus in an embodiment of the present application may include a hidden layer vector acquisition module 501 and a second image reconstruction module 502.
The hidden layer vector obtaining module 501 is configured to obtain a second hidden layer vector of the image to be reconstructed;
and the second image reconstruction module 502 is configured to reconstruct an image of the image to be reconstructed based on the second hidden layer vector through the trained image reconstruction network.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The embodiment of the present application further provides a terminal device, referring to fig. 9, the terminal device 600 may include: at least one processor 610, a memory 620, and a computer program stored in the memory 620 and executable on the at least one processor 610, the processor 610, when executing the computer program, performing steps in any of the various method embodiments described above, such as steps 101 to 103 in the embodiment shown in fig. 2, or steps 201 to 202 in the embodiment shown in fig. 5. Alternatively, the processor 610, when executing the computer program, implements the functions of the modules/units in the above-described apparatus embodiments, for example, the functions of the modules 401 to 403 shown in fig. 7, or the functions of the modules 501 to 502 shown in fig. 8.
By way of example, the computer program may be partitioned into one or more modules/units that are stored in the memory 620 and executed by the processor 610 to complete the present application. The one or more modules/units may be a series of computer program segments capable of performing specific functions, the segments being used to describe the execution of the computer program in the terminal device 600.
It will be appreciated by those skilled in the art that fig. 9 is merely an example of a terminal device and is not limiting of the terminal device, and may include more or fewer components than shown, or may combine certain components, or different components, such as input-output devices, network access devices, buses, etc.
The processor 610 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 620 may be an internal storage unit of the terminal device, or may be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card), or the like. The memory 620 is used to store the computer program and other programs and data required for the terminal device. The memory 620 may also be used to temporarily store data that has been output or is to be output.
The bus may be an industry standard architecture (Industry Standard Architecture, ISA) bus, an external device interconnect (Peripheral Component, PCI) bus, or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, among others. The buses may be divided into address buses, data buses, control buses, etc. For ease of illustration, the buses in the drawings of the present application are not limited to only one bus or one type of bus.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above-described embodiments of the medical image reconstruction network training method, or implements the steps of the above-described embodiments of the medical image reconstruction method.
Embodiments of the present application further provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform the steps of the embodiments of the medical image reconstruction network training method or of the medical image reconstruction method described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application implements all or part of the flow of the method of the above embodiments, and may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, where the computer program, when executed by a processor, may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing device/terminal apparatus, recording medium, computer Memory, read-Only Memory (ROM), random access Memory (RAM, random Access Memory), electrical carrier signals, telecommunications signals, and software distribution media. Such as a U-disk, removable hard disk, magnetic or optical disk, etc. In some jurisdictions, computer readable media may not be electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The descriptions of the foregoing embodiments each have their own emphasis; for parts not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (8)

1. A medical image reconstruction network training method, comprising:
extracting feature codes of the real image samples to obtain feature code vectors of the real image samples;
performing image reconstruction based on the feature coding vector to obtain a first image through an image reconstruction network, and performing image reconstruction based on a first hidden layer vector of the real image sample to obtain a second image;
Carrying out image discrimination on the real image sample, the first image and the second image through an image discrimination network, and optimizing the image reconstruction network according to an image discrimination result; the convolution of the image reconstruction network is a three-dimensional separable convolution layer with neighbor upsampling;
the extracting the feature codes of the real image samples to obtain feature code vectors of the real image samples comprises the following steps:
carrying out layered feature extraction on the real image sample through a plurality of three-dimensional convolution layers of an image coding network;
processing the extracted features through a linear function to obtain feature coding vectors of the real image samples;
the method further comprises the steps of:
carrying out vector discrimination on the feature coding vector and the first hidden layer vector through a coding feature discrimination network;
and optimizing the image coding network based on the vector discrimination result.
2. The medical image reconstruction network training method of claim 1, wherein said optimizing said image coding network based on vector discrimination results comprises:
calculating the voxel-by-voxel difference between the second image and the real image sample, and updating network parameters of the image coding network by a gradient descent method until the voxel-by-voxel difference is smaller than or equal to a preset threshold value;
Wherein the voxel-by-voxel difference is a first loss function of the image encoding network, the first loss function being:
L_C = E[C(z_e)] − E[C(z_r)]
L C z as the first loss function e Encoding a vector for the feature, z r For the first hidden layer vector, C characterizes the image encoding network, E is a mathematical expectation.
3. The medical image reconstruction network training method of claim 1, wherein optimizing the image reconstruction network according to the image discrimination result comprises:
determining a second loss function of the image reconstruction network according to the image discrimination result, the structural similarity measurement loss function and the perception measurement loss function, updating network parameters of the image reconstruction network through a gradient descent method, and training the image reconstruction network;
wherein the second loss function is:
[formula images FDA0004214669230000022 to FDA0004214669230000025]
L_G is the second loss function, z_e is the feature coding vector, z_r is the first hidden layer vector, C characterizes the image coding network, D is the image discrimination network, G is the image reconstruction network, E is the mathematical expectation, L_SSIM is the structural similarity metric loss function, L_perceptual represents the perceptual metric loss function, X_real characterizes the real image, λ_1 and λ_2 are the weight coefficients, Φ is the Gram matrix, and L_D is the loss function of the image discrimination network.
4. A method of medical image reconstruction, comprising:
acquiring a second hidden layer vector of the image to be reconstructed;
and performing image reconstruction on the image to be reconstructed based on the second hidden layer vector through a trained image reconstruction network, wherein the trained image reconstruction network is obtained through the medical image reconstruction network training method according to any one of claims 1-3.
5. A medical image reconstruction network training apparatus, comprising:
the feature code extraction module is used for extracting feature codes of the real image samples to obtain feature code vectors of the real image samples;
the first image reconstruction module is used for performing, through an image reconstruction network, image reconstruction based on the feature coding vector to obtain a first image, and performing image reconstruction based on a first hidden layer vector of the real image sample to obtain a second image;
the first optimization module is used for performing image discrimination on the real image sample, the first image and the second image through an image discrimination network, and optimizing the image reconstruction network according to an image discrimination result; the convolution layers of the image reconstruction network are three-dimensional separable convolution layers with neighbor upsampling;
The feature code extraction module is specifically configured to:
carrying out layered feature extraction on the real image sample through a plurality of three-dimensional convolution layers of an image coding network;
processing the extracted features through a linear function to obtain feature coding vectors of the real image samples;
the medical image reconstruction network training apparatus further comprises a second optimization module for:
carrying out vector discrimination on the feature coding vector and the first hidden layer vector through a coding feature discrimination network;
and optimizing the image coding network based on the vector discrimination result.
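The three-dimensional separable convolution layer with neighbor upsampling named in the claims above can be approximated as nearest-neighbor repetition followed by a depthwise spatial filter and a 1x1x1 pointwise channel mix. This NumPy sketch uses assumed shapes and is an illustration of the factorization, not the patented layer itself:

```python
import numpy as np

def nn_upsample3d(vol, factor=2):
    """Nearest-neighbor upsampling of a (C, D, H, W) volume."""
    for axis in (1, 2, 3):
        vol = np.repeat(vol, factor, axis=axis)
    return vol

def separable_conv3d(vol, depthwise, pointwise):
    """Depthwise 3D filtering (one spatial kernel per channel) followed by
    a 1x1x1 pointwise channel mix: the separable-convolution factorization."""
    C = vol.shape[0]
    d = depthwise.shape[1]
    oD, oH, oW = (s - d + 1 for s in vol.shape[1:])
    mid = np.zeros((C, oD, oH, oW))
    for c in range(C):
        for i in range(oD):
            for j in range(oH):
                for k in range(oW):
                    mid[c, i, j, k] = np.sum(
                        vol[c, i:i+d, j:j+d, k:k+d] * depthwise[c])
    return np.einsum('oc,cdhw->odhw', pointwise, mid)  # 1x1x1 pointwise mix

rng = np.random.default_rng(2)
x = rng.standard_normal((2, 4, 4, 4))
up = nn_upsample3d(x)                                  # (2, 8, 8, 8)
y = separable_conv3d(up,
                     rng.standard_normal((2, 3, 3, 3)),  # depthwise kernels
                     rng.standard_normal((3, 2)))        # pointwise weights
print(up.shape, y.shape)  # (2, 8, 8, 8) (3, 6, 6, 6)
```

Compared with a full 3D convolution, this factorization reduces the parameter count per layer, which is one common motivation for separable convolutions in volumetric generators.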
6. A medical image reconstruction apparatus, comprising:
the hidden layer vector acquisition module is used for acquiring a second hidden layer vector of the image to be reconstructed;
a second image reconstruction module, configured to reconstruct an image of the image to be reconstructed based on the second hidden layer vector through a trained image reconstruction network, where the trained image reconstruction network is obtained by the medical image reconstruction network training method according to any one of claims 1-3.
7. A terminal device, comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer readable instructions, implements the method of any one of claims 1 to 4.
8. A computer readable storage medium storing computer readable instructions which, when executed by a processor, implement the method of any one of claims 1 to 4.
CN202010186019.5A 2020-03-17 2020-03-17 Medical image reconstruction method, medical image reconstruction network training method and device Active CN111462264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010186019.5A CN111462264B (en) 2020-03-17 2020-03-17 Medical image reconstruction method, medical image reconstruction network training method and device


Publications (2)

Publication Number Publication Date
CN111462264A CN111462264A (en) 2020-07-28
CN111462264B true CN111462264B (en) 2023-06-06

Family

ID=71680771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010186019.5A Active CN111462264B (en) 2020-03-17 2020-03-17 Medical image reconstruction method, medical image reconstruction network training method and device

Country Status (1)

Country Link
CN (1) CN111462264B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037299B (en) * 2020-08-20 2024-04-19 上海壁仞智能科技有限公司 Image reconstruction method and device, electronic equipment and storage medium
CN112419303B (en) * 2020-12-09 2023-08-15 上海联影医疗科技股份有限公司 Neural network training method, system, readable storage medium and device
CN112598790A (en) * 2021-01-08 2021-04-02 中国科学院深圳先进技术研究院 Brain structure three-dimensional reconstruction method and device and terminal equipment
CN112802072B (en) * 2021-02-23 2022-10-11 临沂大学 Medical image registration method and system based on counterstudy
CN113569928B (en) * 2021-07-13 2024-01-30 湖南工业大学 Train running state detection data missing processing model and reconstruction method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108776959A (en) * 2018-07-10 2018-11-09 Oppo(重庆)智能科技有限公司 Image processing method, device and terminal device
CN110298898A (en) * 2019-05-30 2019-10-01 北京百度网讯科技有限公司 Change the method and its algorithm structure of automobile image body color

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10636141B2 (en) * 2017-02-09 2020-04-28 Siemens Healthcare Gmbh Adversarial and dual inverse deep learning networks for medical image analysis


Also Published As

Publication number Publication date
CN111462264A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN111462264B (en) Medical image reconstruction method, medical image reconstruction network training method and device
CN110559009B (en) Method for converting multi-modal low-dose CT into high-dose CT based on GAN
CN111784706B (en) Automatic identification method and system for primary tumor image of nasopharyngeal carcinoma
CN110363760B (en) Computer system for recognizing medical images
Leeds et al. Comparing visual representations across human fMRI and computational vision
CN110246137A (en) A kind of imaging method, device and storage medium
CN112435341A (en) Training method and device for three-dimensional reconstruction network, and three-dimensional reconstruction method and device
Zhan et al. LR-cGAN: Latent representation based conditional generative adversarial network for multi-modality MRI synthesis
CN116823625B (en) Cross-contrast magnetic resonance super-resolution method and system based on variational self-encoder
CN111353935A (en) Magnetic resonance imaging optimization method and device based on deep learning
CN110874855B (en) Collaborative imaging method and device, storage medium and collaborative imaging equipment
CN112949654A (en) Image detection method and related device and equipment
CN112037146A (en) Medical image artifact automatic correction method and device and computer equipment
CN111340903A (en) Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image
WO2021184195A1 (en) Medical image reconstruction method, and medical image reconstruction network training method and apparatus
CN115272295A (en) Dynamic brain function network analysis method and system based on time domain-space domain combined state
Hu et al. Domain-adaptive 3D medical image synthesis: An efficient unsupervised approach
Zuo et al. HACA3: A unified approach for multi-site MR image harmonization
Sander et al. Autoencoding low-resolution MRI for semantically smooth interpolation of anisotropic MRI
Yang et al. Hierarchical progressive network for multimodal medical image fusion in healthcare systems
CN115965785A (en) Image segmentation method, device, equipment, program product and medium
CN113592972B (en) Magnetic resonance image reconstruction method and device based on multi-mode aggregation
CN115115900A (en) Training method, device, equipment, medium and program product of image reconstruction model
CN113643263A (en) Identification method and system for upper limb bone positioning and forearm bone fusion deformity
CN113327221A (en) Image synthesis method and device fusing ROI (region of interest), electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant