CN110866909A - Training method of image generation network, image prediction method and computer equipment

Info

Publication number
CN110866909A
CN110866909A (application CN201911106039.0A; granted as CN110866909B)
Authority
CN
China
Prior art keywords
image
initial
network
generation network
training
Prior art date
Legal status (assumed, not a legal conclusion)
Granted
Application number
CN201911106039.0A
Other languages
Chinese (zh)
Other versions
CN110866909B (en)
Inventor
李青峰
石峰
Current Assignee (may be inaccurate)
Wuhan Zhongke Medical Technology Industrial Technology Research Institute Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date (assumed, not a legal conclusion)
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd filed Critical Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN201911106039.0A priority Critical patent/CN110866909B/en
Publication of CN110866909A publication Critical patent/CN110866909A/en
Application granted granted Critical
Publication of CN110866909B publication Critical patent/CN110866909B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/0012 — Image analysis; inspection of images; biomedical image inspection
    • G06N 3/045 — Neural networks; architecture; combinations of networks
    • G06N 3/08 — Neural networks; learning methods
    • G16H 30/20 — Healthcare informatics; handling or processing of medical images (e.g. DICOM, HL7 or PACS)
    • G06T 2207/30016 — Indexing scheme for image analysis; biomedical image processing; brain
    • G06T 2207/30204 — Indexing scheme for image analysis; subject of image; marker


Abstract

The application relates to a training method for an image generation network, an image prediction method and computer equipment. The training method comprises the following steps: acquiring a training sample image; inputting the training sample image into an initial image generation network to obtain an initial predicted image, where the initial predicted image carries a preset simulation mark and represents the predicted change of the subject after a preset time interval; inputting the initial predicted image and a first real sample image into an initial discrimination network to obtain a discrimination result for the initial predicted image, where the first real sample image is a reference image for the initial predicted image; calculating the loss between the discrimination result and the simulation mark, and training the initial discrimination network and the initial image generation network according to the loss; when the loss converges, the training of the initial image generation network is completed and the image generation network is obtained. The method improves the precision of the trained image generation network as well as the accuracy and intuitiveness of the resulting predicted images.

Description

Training method of image generation network, image prediction method and computer equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a training method for an image generation network, an image prediction method, and a computer device.
Background
In the medical field, it is often necessary to predict the future state of a patient, for example whether a current hematoma will expand or whether current lung nodules will change in morphology; such prediction tasks may be collectively referred to as longitudinal change prediction. Clinically, high-precision longitudinal change prediction on scan data can provide doctors with intuitive and accurate longitudinal information, and it is also widely used in drug-effect tracking and patient follow-up, so it plays an important role in computer-aided diagnosis and clinical diagnosis. Because it is non-radioactive and images brain structure with high quality, magnetic resonance imaging is widely used in the diagnosis of brain diseases, and computer-aided diagnosis (CAD) can effectively screen patients with brain diseases from magnetic resonance images, greatly reducing the workload of doctors and improving detection accuracy.
Longitudinal change prediction in the conventional technology generally analyzes signs in a magnetic resonance image; for cerebral hemorrhage, for example, the size and density of spots appearing beside the hematoma are analyzed to predict the likelihood of future hemorrhage expansion. However, such signs do not match the patient's subsequent condition well and do not give an intuitive picture of the patient's future state.
Therefore, the longitudinal change prediction methods in the conventional technology have low accuracy and poor intuitiveness.
Disclosure of Invention
Based on this, it is necessary to provide a training method for an image generation network, an image prediction method, and a computer device, to solve the problems of low accuracy and poor intuitiveness of the longitudinal change prediction methods in the conventional technology.
In a first aspect, an embodiment of the present application provides a training method for an image generation network, including:
acquiring a training sample image;
inputting the training sample image into an initial image generation network to obtain an initial predicted image, where the initial predicted image includes a preset simulation mark and represents the predicted change of the subject after a preset time interval;
inputting the initial predicted image and a first real sample image into an initial discrimination network to obtain a discrimination result for the initial predicted image, where the first real sample image is a reference image for the initial predicted image;
calculating the loss between the discrimination result and the simulation mark, and training the initial discrimination network and the initial image generation network according to the loss; when the loss converges, the training of the initial image generation network is completed and the image generation network is obtained.
In a second aspect, an embodiment of the present application provides an image prediction method, including:
acquiring an image to be predicted;
inputting the image to be predicted into an image generation network to obtain a predicted image; the image generation network is trained as follows:
acquiring a training sample image;
inputting the training sample image into an initial image generation network to obtain an initial predicted image, where the initial predicted image includes a preset simulation mark and represents the predicted change of the subject after a preset time interval;
inputting the initial predicted image and a first real sample image into an initial discrimination network to obtain a discrimination result for the initial predicted image;
calculating the loss between the discrimination result and the simulation mark, and training the initial discrimination network and the initial image generation network according to the loss; when the loss converges, the training of the initial image generation network is completed and the image generation network is obtained.
In a third aspect, an embodiment of the present application provides a training apparatus for an image generation network, including:
a first acquisition module, used for acquiring a training sample image;
a generation module, used for inputting the training sample image into an initial image generation network to obtain an initial predicted image, where the initial predicted image includes a preset simulation mark and represents the predicted change of the subject after a preset time interval;
a discrimination module, used for inputting the initial predicted image and a first real sample image into an initial discrimination network to obtain a discrimination result for the initial predicted image, where the first real sample image is a reference image for the initial predicted image;
a training module, used for calculating the loss between the discrimination result and the simulation mark and training the initial discrimination network and the initial image generation network according to the loss; when the loss converges, the training of the initial image generation network is completed and the image generation network is obtained.
In a fourth aspect, an embodiment of the present application provides an image prediction apparatus, including:
a second acquisition module, used for acquiring an image to be predicted;
a prediction module, used for inputting the image to be predicted into an image generation network to obtain a predicted image; the image generation network is trained by the process performed by the above training apparatus for an image generation network.
In a fifth aspect, an embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring a training sample image;
inputting the training sample image into an initial image generation network to obtain an initial predicted image, where the initial predicted image includes a preset simulation mark and represents the predicted change of the subject after a preset time interval;
inputting the initial predicted image and a first real sample image into an initial discrimination network to obtain a discrimination result for the initial predicted image, where the first real sample image is a reference image for the initial predicted image;
calculating the loss between the discrimination result and the simulation mark, and training the initial discrimination network and the initial image generation network according to the loss; when the loss converges, the training of the initial image generation network is completed and the image generation network is obtained.
In a sixth aspect, an embodiment of the present application further provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring an image to be predicted;
inputting the image to be predicted into an image generation network to obtain a predicted image; the image generation network is trained as follows:
acquiring a training sample image;
inputting the training sample image into an initial image generation network to obtain an initial predicted image, where the initial predicted image includes a preset simulation mark and represents the predicted change of the subject after a preset time interval;
inputting the initial predicted image and a first real sample image into an initial discrimination network to obtain a discrimination result for the initial predicted image;
calculating the loss between the discrimination result and the simulation mark, and training the initial discrimination network and the initial image generation network according to the loss; when the loss converges, the training of the initial image generation network is completed and the image generation network is obtained.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps:
acquiring a training sample image;
inputting the training sample image into an initial image generation network to obtain an initial predicted image, where the initial predicted image includes a preset simulation mark and represents the predicted change of the subject after a preset time interval;
inputting the initial predicted image and a first real sample image into an initial discrimination network to obtain a discrimination result for the initial predicted image, where the first real sample image is a reference image for the initial predicted image;
calculating the loss between the discrimination result and the simulation mark, and training the initial discrimination network and the initial image generation network according to the loss; when the loss converges, the training of the initial image generation network is completed and the image generation network is obtained.
In an eighth aspect, an embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps:
acquiring an image to be predicted;
inputting the image to be predicted into an image generation network to obtain a predicted image; the image generation network is trained as follows:
acquiring a training sample image;
inputting the training sample image into an initial image generation network to obtain an initial predicted image, where the initial predicted image includes a preset simulation mark and represents the predicted change of the subject after a preset time interval;
inputting the initial predicted image and a first real sample image into an initial discrimination network to obtain a discrimination result for the initial predicted image;
calculating the loss between the discrimination result and the simulation mark, and training the initial discrimination network and the initial image generation network according to the loss; when the loss converges, the training of the initial image generation network is completed and the image generation network is obtained.
With the training method for the image generation network, the image prediction method, the apparatuses, the computer device and the storage media described above, a training sample image is first input into an initial image generation network to obtain an initial predicted image, and the initial predicted image and a first real sample image are input into an initial discrimination network to obtain a discrimination result for the initial predicted image; the loss between the discrimination result and the simulation mark is then calculated, and the initial discrimination network and the initial image generation network are trained according to the loss; when the loss converges, the training of the initial image generation network is completed and the image generation network is obtained. In this method, the simulation mark of the training sample image is set by the computer device, so no manual annotation is needed, which improves the network training efficiency, greatly increases the amount of training sample data, and thereby improves the precision of the trained image generation network; training the initial image generation network and the initial discrimination network with a joint adversarial idea further improves the precision of the resulting image generation network, and therefore the accuracy of image prediction performed with it. Moreover, the predicted image obtained by this method resembles a real medical image, so a doctor can review it visually, which greatly improves its intuitiveness.
Drawings
FIG. 1 is a schematic flow chart of a training method for an image generation network according to an embodiment;
FIG. 2 is a schematic flowchart of a training method for an image generation network according to another embodiment;
FIG. 2a is a schematic diagram of a training process for an initial image generation network according to an embodiment;
FIG. 3 is a schematic flowchart of a training method for an image generation network according to another embodiment;
FIG. 3a is a diagram illustrating an initial difference image and an initial predicted image according to one embodiment;
FIG. 4 is a flowchart illustrating an image prediction method according to an embodiment;
FIG. 5 is a schematic structural diagram of a training apparatus of an image generation network according to an embodiment;
FIG. 6 is a schematic structural diagram of an image prediction apparatus according to an embodiment;
FIG. 7 is a schematic internal structural diagram of a computer device according to an embodiment.
Detailed Description
The training method for an image generation network provided in the embodiments of the application is applicable to the training of network models that predict medical images. The medical image may be a Magnetic Resonance Imaging (MRI) image, a Positron Emission Tomography (PET) image, a Computed Tomography (CT) image, or the like, and predictions made on medical images, such as whether a current hematoma will expand, whether a current lung nodule will change in morphology, or whether brain volume will atrophy, may be referred to as longitudinal change prediction. Taking atrophy of brain regions as an example, brain atrophy is a common anatomical change in the aging of the brain, mainly caused by the degeneration of cortical neurons. Brain atrophy occurs as every person ages, but different diseases affect it differently, and regions related to a particular disease may atrophy more markedly; for example, in Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), brain structures related to memory (such as the hippocampus, temporal lobe and entorhinal cortex) atrophy more markedly than in normal aging. From the magnetic resonance image of the current year, the brain atrophy of the patient several years later can be predicted.
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application are further described in detail by the following embodiments in conjunction with the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that the execution subject of the method embodiments described below may be a training apparatus of an image generation network, and the apparatus may be implemented as part of or all of a computer device by software, hardware, or a combination of software and hardware. The following method embodiments take the execution subject as a computer device for example, where the computer device may be a terminal, may also be a server, may be a separate computing device, or may be integrated on a medical imaging device, as long as the training of an image generation network can be completed, and this embodiment is not limited to this.
Fig. 1 is a schematic flowchart of a training method for an image generation network according to an embodiment. The embodiment relates to a specific process of training an initial image generation network by using an acquired training sample image by a computer device. As shown in fig. 1, the method includes:
and S101, acquiring a training sample image.
Specifically, the computer device first obtains a plurality of training sample images, which may be CT images, PET images, MRI images, etc., and may be brain images, chest images, abdominal images, etc.; note that the training sample images in the same training batch must be of the same type. For example, when predicting brain atrophy in AD patients, a batch of brain training images is acquired to train the network. Optionally, the computer device may obtain the training sample images directly from its own memory, or from a Picture Archiving and Communication System (PACS), which is not limited in this embodiment.
S102, inputting a training sample image into an initial image generation network to obtain an initial prediction image; the initial prediction image comprises a preset simulation mark, and the initial prediction image represents the predicted change condition of the tested object after a preset time interval.
Specifically, the computer device inputs the training sample image into an initial image generation network to obtain an initial predicted image. Optionally, the initial image generation network may be a newly built network, and may be a neural network, a deep learning network, or a machine learning network. The initial predicted image represents the predicted change of the subject after a preset time interval; for example, if a training sample image is the brain image of a 60-year-old AD patient showing a certain degree of brain atrophy, inputting it into the initial image generation network can predict the brain image of the patient at age 62, that is, the brain atrophy of the patient at age 62. Because the initial image generation network is used at this stage, the accuracy of the obtained initial predicted image is relatively low; and because the initial predicted image is produced by the initial image generation network rather than being the real change of the subject after the preset time interval, it further carries a preset simulation mark (for example, 0). Optionally, the simulation mark may be added automatically when the initial image generation network generates the initial predicted image, or may be added afterwards by the computer device; no manual marking is needed.
Optionally, the computer device may further obtain index information of the object to be tested corresponding to the training sample image, such as age and gender, and input the index information as an input parameter into the initial image generation network, and the initial image generation network may output an initial prediction image corresponding to the index information of the object to be tested according to the index information of the object to be tested, so that the prediction accuracy of the initial image generation network may be improved. Optionally, a specified prediction time interval (e.g., 1 year later, 2 years later) may also be input to obtain an initial predicted image after the corresponding time interval; for example, the initial image generation network a may generate a predicted image after 1 year, the initial image generation network B may generate a predicted image after 2 years, and the initial image generation network C may generate a predicted image after 3 years. When the computer device receives an input time interval, the corresponding initial image generation network may be invoked.
Optionally, the initial image generation network may be a V-Net, whose structure mainly includes a down-sampling segment and an up-sampling segment. The down-sampling segment uses 3×3 convolution kernels; as the number of layers increases, more abstract image features are extracted, while pooling operations gradually lower the image resolution, so that the features extracted by the convolution kernels become more global with depth. The up-sampling segment uses 3×3 convolution kernels for deconvolution, raising the resolution of the feature maps while establishing the correspondence between the input image and the output image. The whole V-Net adopts the inter-layer (residual) connection design of residual networks, which overcomes the vanishing-gradient problem of deep networks and makes the updating of the network parameters more sensitive to gradient changes. Inter-layer connections are also built between the down-sampling and up-sampling segments at positions where the feature-map resolutions correspond, which, in addition to the advantages of inter-layer connections, retains information from the input image and avoids the loss of useful information during pooling in the down-sampling segment, further improving the robustness of the whole network. Optionally, the initial image generation network may also be a network combining a deep-learning registration network with a spatial transformation network.
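As an illustration of the down-sampling/up-sampling structure with inter-layer (skip) connections described above, the following is a minimal PyTorch-style sketch; the module names, channel counts and number of stages are illustrative assumptions, not the exact architecture used in the patent.

```python
import torch
import torch.nn as nn

class ConvBlock3d(nn.Module):
    """Two 3x3x3 convolutions with a residual (inter-layer) connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.PReLU(),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        )
        self.skip = nn.Conv3d(in_ch, out_ch, kernel_size=1)  # match channels for the residual
        self.act = nn.PReLU()

    def forward(self, x):
        return self.act(self.conv(x) + self.skip(x))

class VNetLikeGenerator(nn.Module):
    """Sketch of a V-Net-like generator: a down-sampling segment, an up-sampling segment,
    and skip connections between feature maps of matching resolution."""
    def __init__(self, in_ch=1, base_ch=16):
        super().__init__()
        self.enc1 = ConvBlock3d(in_ch, base_ch)
        self.enc2 = ConvBlock3d(base_ch, base_ch * 2)
        self.enc3 = ConvBlock3d(base_ch * 2, base_ch * 4)
        self.pool = nn.MaxPool3d(2)  # pooling lowers the resolution
        self.up2 = nn.ConvTranspose3d(base_ch * 4, base_ch * 2, kernel_size=2, stride=2)
        self.dec2 = ConvBlock3d(base_ch * 4, base_ch * 2)
        self.up1 = nn.ConvTranspose3d(base_ch * 2, base_ch, kernel_size=2, stride=2)
        self.dec1 = ConvBlock3d(base_ch * 2, base_ch)
        self.out = nn.Conv3d(base_ch, in_ch, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)              # full resolution
        e2 = self.enc2(self.pool(e1))  # 1/2 resolution
        e3 = self.enc3(self.pool(e2))  # 1/4 resolution, most abstract features
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.tanh(self.out(d1))  # output in (-1, 1), matching the normalized gray range

# x = torch.randn(1, 1, 64, 64, 64); y = VNetLikeGenerator()(x)  # y.shape == x.shape
```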
Optionally, the training sample images may be preprocessed before the computer device inputs them into the initial image generation network. Taking training sample images that are T1- or T2-weighted MRI images of the brain as an example, the computer device may record a label for each image, such as AD, MCI or Parkinson's Disease (PD). All training sample images are then subjected to rotation, resampling, resizing, skull stripping, image non-uniformity correction, histogram matching, gray-level normalization and similar operations, so that they become standard images whose size is 256×256×256 mm³, whose orientation follows the standard Cartesian LPI coordinate system, and whose gray values lie in the (-1, 1) interval. Optionally, the size of the preprocessed images may also be 48×48×48 mm³, 64×64×64 mm³, 128×128×128 mm³ or another size, which is not limited in this embodiment. Inputting the preprocessed training sample images into the initial image generation network improves the accuracy of the processing results.
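A minimal sketch of the last two preprocessing steps described above (resampling to a fixed size and normalizing gray values to (-1, 1)), assuming the image is already loaded as a NumPy array; skull stripping, non-uniformity correction and histogram matching normally rely on dedicated neuroimaging tools and are omitted here.

```python
import numpy as np
import torch
import torch.nn.functional as F

def preprocess_volume(volume: np.ndarray, target_size=(128, 128, 128)) -> torch.Tensor:
    """Resample a 3D volume to a fixed size and normalize its gray values to (-1, 1)."""
    t = torch.from_numpy(volume.astype(np.float32))[None, None]  # shape (1, 1, D, H, W)
    t = F.interpolate(t, size=target_size, mode="trilinear", align_corners=False)
    t = (t - t.min()) / (t.max() - t.min() + 1e-8)               # scale to [0, 1]
    return t * 2.0 - 1.0                                         # scale to (-1, 1)
```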
S103, inputting the initial predicted image and a first real sample image into an initial discrimination network to obtain a discrimination result for the initial predicted image; the first real sample image is a reference image for the initial predicted image.
Specifically, the computer device may input the initial predicted image and the first real sample image into the initial discrimination network and determine whether the initial predicted image satisfies the data distribution of the first real sample image, to obtain a discrimination result for the initial predicted image. The first real sample image is a reference image for the initial predicted image, namely a real medical image of a reference subject who has the same age, the same sex and a similar disease as the subject at the time point of the initial predicted image. For example, if the training sample image is the brain image of a 60-year-old female AD patient and the obtained initial predicted image is her predicted brain image at age 62, the first real sample images are brain images of a batch of 62-year-old female AD patients, and the initial discrimination network obtains the discrimination result for the initial predicted image from the initial predicted image and the first real sample images.
Optionally, the discrimination result may be the probability that the initial predicted image is a simulated image and the probability that it is a real image. Optionally, the initial discrimination network may also be a newly built network, and may be a neural network, a deep learning network, or a machine learning network. Optionally, the initial discrimination network may be a DenseNet, whose main body consists of several dense blocks. Before the 3×3×3 convolution of each dense block there is a 1×1×1 convolution, called the bottleneck layer, whose purpose is to compress the number of input feature maps and reduce the amount of computation while fusing the features of each channel; the output of the bottleneck layer is used as the input of the 3×3×3 convolution. In the DenseNet structure, the output of each layer is concatenated channel-wise with the outputs of all previous layers and used as the input of the next layer, so the number of output channels of each dense block becomes very large; to reduce memory usage and fuse the features of the output channels, a group of 1×1×1 convolutions, called a transition layer, is placed between every two dense blocks to reduce the number of output feature maps. Optionally, the initial discrimination network adds to each dense block a dilation block composed of dilated (hole) convolution modules to enlarge the receptive field of the convolution kernels; a compression-activation module is added after the 3×3×3 convolution of each dense block to obtain the weights of the different channels of the feature maps; meanwhile, a bypass formed by a residual attention module is added to each dense block to obtain the weights of the different voxels of the feature maps; optionally, a feature weighting module (SE Block) may also be incorporated as a sub-network structure. All of this can improve the accuracy of the discrimination result. Optionally, the initial discrimination network may also be a ResNet.
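To make the bottleneck and transition layers described above concrete, here is a short PyTorch-style sketch of one dense layer and one transition layer; the channel counts and normalization choices are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DenseLayer3d(nn.Module):
    """One layer of a dense block: a 1x1x1 bottleneck followed by a 3x3x3 convolution."""
    def __init__(self, in_ch, growth=16, bottleneck=64):
        super().__init__()
        self.bottleneck = nn.Sequential(
            nn.BatchNorm3d(in_ch), nn.ReLU(inplace=True),
            nn.Conv3d(in_ch, bottleneck, kernel_size=1),  # compress the input feature maps
        )
        self.conv = nn.Sequential(
            nn.BatchNorm3d(bottleneck), nn.ReLU(inplace=True),
            nn.Conv3d(bottleneck, growth, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # dense connection: concatenate the new features with all previous ones by channel
        return torch.cat([x, self.conv(self.bottleneck(x))], dim=1)

class TransitionLayer3d(nn.Module):
    """1x1x1 convolution between dense blocks to reduce the number of feature maps."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.reduce = nn.Sequential(nn.Conv3d(in_ch, out_ch, kernel_size=1), nn.AvgPool3d(2))

    def forward(self, x):
        return self.reduce(x)
```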
S104, calculating the loss between the discrimination result and the simulation mark, and training the initial discrimination network and the initial image generation network according to the loss; and when the loss reaches convergence, finishing training of the initial image generation network to obtain the image generation network.
Specifically, the computer device may calculate the loss between the discrimination result and the simulation mark using a loss function, where the simulation mark is a preset mark (e.g., 0) set by the computer device, and the discrimination result is the probability that the initial predicted image is a simulated image and the probability that it is a real image, so the loss between the two can be calculated. Optionally, the loss function may be a cross-entropy loss function, a Focal Loss function, or another type of loss function.
The computer device can then train the initial discrimination network and the initial image generation network according to the loss. The goal of training the image generation network is to produce predicted images that are closer to real images, so that the discrimination network cannot easily tell whether a predicted image is a simulated image or a real image; the goal of training the discrimination network is to distinguish predicted images from real images as well as possible. The training of the discrimination network and the image generation network is therefore a zero-sum game, that is, an adversarial generation process.
During the continued training of the initial discrimination network and the initial image generation network, the value of the loss function keeps changing: the initial discrimination network aims to make the value of the loss function smaller and smaller, while the initial image generation network aims to make it larger and larger. When the value of the loss function converges, the two networks are considered trained, and a converged image generation network and discrimination network are obtained. Optionally, the discrimination result output by the initial discrimination network may be the probability that the initial predicted image is a simulated image and the probability that it is a real image; when both probabilities are close to 0.5, the value of the loss function can be considered converged.
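The following is a minimal sketch of one adversarial training step as described in S102–S104, assuming PyTorch, a generator G and a discriminator D whose output is a probability in (0, 1), binary cross-entropy as the loss, a simulation mark of 0 and a real mark of 1; the hyper-parameters and data handling are illustrative, not the patent's exact implementation.

```python
import torch
import torch.nn as nn

def train_step(G, D, sample, real_ref, opt_g, opt_d, device="cpu"):
    """One adversarial step: the discrimination network tries to tell initial predicted
    images (simulation mark 0) from real reference images (mark 1); the image generation
    network tries to fool it."""
    bce = nn.BCELoss()
    sample, real_ref = sample.to(device), real_ref.to(device)
    fake_mark = torch.zeros(sample.size(0), 1, device=device)  # preset simulation mark
    real_mark = torch.ones(sample.size(0), 1, device=device)

    # --- adjust the discrimination network (generator parameters effectively fixed) ---
    pred = G(sample).detach()  # initial predicted image, no gradient flows back into G
    loss_d = bce(D(pred), fake_mark) + bce(D(real_ref), real_mark)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- adjust the image generation network (discriminator parameters fixed) ---
    pred = G(sample)
    loss_g = bce(D(pred), real_mark)  # the generator wants D to output "real"
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Training would repeat this step over the training sample images until the discriminator's outputs for predicted and real images both approach 0.5, i.e., until the loss converges.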
In the training method for the image generation network provided in this embodiment, the computer device inputs a training sample image into an initial image generation network to obtain an initial predicted image, and inputs the initial predicted image and a first real sample image into an initial discrimination network to obtain a discrimination result for the initial predicted image; it then calculates the loss between the discrimination result and the simulation mark and trains the initial discrimination network and the initial image generation network according to the loss; when the loss converges, the training of the initial image generation network is completed and the image generation network is obtained. In this method, the simulation mark of the training sample image is set by the computer device, so no manual annotation is needed, which improves the network training efficiency, greatly increases the amount of training sample data, and thereby improves the precision of the trained image generation network; training the initial image generation network and the initial discrimination network with a joint adversarial idea further improves the precision of the resulting image generation network, and therefore the accuracy of image prediction performed with it. Moreover, the predicted image obtained by this method resembles a real medical image, so a doctor can review it visually, which greatly improves its intuitiveness.
In the above embodiment, the first real sample image is a medical image of a reference subject who has the same age, the same sex and a similar disease as the subject at the time point of the initial predicted image, but it is not a medical image of the subject itself; the training process is therefore a weakly supervised one, and errors may occur when the initial predicted image is discriminated with the first real sample image as the reference. The computer device may also jointly train the initial image generation network with a real medical image of the subject acquired after the preset time interval, that is, a supervised training process; note that when the amount of such real medical image data is small, the supervised training process assists the weakly supervised training process in training the initial image generation network.
Fig. 2 is a schematic flowchart of a training method for an image generation network according to another embodiment. The embodiment relates to a specific process of training an initial image generation network by a computer device according to an initial prediction image and a second real sample image. On the basis of the foregoing embodiment, optionally, as shown in fig. 2, the foregoing method further includes:
S201, acquiring a second real sample image, where the second real sample image is a real medical image obtained by scanning the subject after a time interval.
Specifically, the second real sample image is a real medical image obtained by scanning the subject after a time interval. For example, if the training sample image is the brain image of an AD patient at age 60 and the patient is re-examined two years later at age 62, a brain image of the patient at age 62 is acquired, and the computer device may retrieve this image from the PACS system according to the identity of the subject.
S202, calculating the difference between the initial prediction image and the second real sample image, and training the initial image generation network according to the difference.
Specifically, the computer device may calculate a difference between the initial prediction image and the second real sample image, that is, a difference between the predicted brain image of the subject aged 62 and the real brain image of the subject aged 62, and then perform joint training on the initial image generation network according to the difference. Optionally, the network parameters of the initial image generation network may be adjusted in a reverse gradient propagation manner by using the difference, and when the difference is smaller than or equal to a preset threshold, training of the initial image generation network is completed. The training process for the entire initial image generation network can be seen in fig. 2 a.
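A sketch of the supervised part described in S201–S202, assuming PyTorch and an L1 measure of the difference between the initial predicted image and the second real sample image; the choice of L1 and the threshold value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def supervised_step(G, sample, real_followup, opt_g, device="cpu", threshold=0.05):
    """Train the generator against the real follow-up scan of the same subject."""
    sample, real_followup = sample.to(device), real_followup.to(device)
    pred = G(sample)
    diff = F.l1_loss(pred, real_followup)               # difference between prediction and real image
    opt_g.zero_grad(); diff.backward(); opt_g.step()    # reverse gradient propagation
    return diff.item() <= threshold                     # True once the difference is small enough
```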
In the training method for the image generation network provided by this embodiment, the computer device obtains the second real medical image obtained by scanning the object after the time interval, calculates the difference between the second real medical image and the initial predicted image, and performs the joint training on the initial image generation network by using the difference. Because the second real medical image is the real medical image of the tested object, the training data is more accurate, so that the precision of the generated image generation network can be further improved, and the accuracy of the predicted image generated by the image generation network is further improved.
Optionally, in some embodiments, the method further includes: constructing a network model optimization function according to the loss function, the difference, the initial predicted image, the mathematical expectation of the value of the loss function and the mathematical expectation of the value of the difference; and when the value of the network model optimization function is smaller than or equal to a preset threshold value, representing that the loss reaches convergence, namely that the image generation network reaches convergence.
Wherein the loss function is the function used to calculate the loss between the discrimination result and the simulation mark, and the difference is the difference between the initial predicted image and the second real sample image. Optionally, the network model optimization function includes an image generation network optimization function and a discrimination network optimization function, and the loss is considered converged when the sum of the value of the image generation network optimization function and the value of the discrimination network optimization function is less than or equal to a preset threshold. Optionally, the image generation network optimization function is built from the loss function D(x), the initial predicted image G(x), the second real sample image x_T2, the difference between G(x) and x_T2, and the mathematical expectation at the current time point; the discrimination network optimization function additionally uses the mathematical expectation after the preset time interval (the exact formulas are published as equation images and are not reproduced here). Each of the two optimization functions is to be made as small as possible, and when their sum reaches the preset threshold, the loss is considered converged. Optionally, the value range of the loss function D(x) may be (0, 1); the larger the value, the higher the probability that the discrimination network predicts the input image to be a real image.
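Since the original formulas are only available as equation images, the following is a plausible reconstruction, assuming a standard conditional-GAN objective with an L1 consistency term; the symbols follow the definitions given in the text, and the weighting factor λ is an assumption, not the patent's exact formulation.

```latex
% Hypothetical reconstruction, not the patent's exact formulas.
\begin{aligned}
\mathcal{L}_{G} &= \mathbb{E}_{x}\bigl[\log\bigl(1 - D(G(x))\bigr)\bigr]
                 + \lambda\,\mathbb{E}_{x}\bigl[\lVert G(x) - x_{T2}\rVert_{1}\bigr],\\
\mathcal{L}_{D} &= -\,\mathbb{E}_{x_{T2}}\bigl[\log D(x_{T2})\bigr]
                 - \mathbb{E}_{x}\bigl[\log\bigl(1 - D(G(x))\bigr)\bigr],
\end{aligned}
```

where each network minimizes its own objective: minimizing L_G drives D(G(x)) toward 1 while keeping G(x) close to the real follow-up image x_T2, and minimizing L_D drives the discriminator to score real images near 1 and predicted images near 0.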
Optionally, in some embodiments, because the training process of the initial discrimination network and the initial image generation network is a game relationship, the training of the initial discrimination network and the initial image generation network according to the loss includes: adjusting network parameters of the initial discrimination network according to the loss, wherein the adjusted initial discrimination network reduces the loss value; and adjusting network parameters of the initial image generation network according to the loss, wherein the adjusted initial image generation network increases the value of the loss. Then a convergence state is reached when the value of the loss tends to stabilize.
Optionally, adjusting the network parameters of the initial image generation network according to the loss includes: after the network parameters of the initial discrimination network have been adjusted, adjusting the network parameters of the initial image generation network according to the loss. Specifically, during network training, after the corresponding loss is obtained for each input training sample image, the computer device may first fix the network parameters of the initial image generation network and adjust the network parameters of the initial discrimination network by back-propagating the gradient of the loss, and then fix the network parameters of the initial discrimination network and adjust the network parameters of the initial image generation network. Optionally, the computer device may also adjust the network parameters of the initial image generation network and the initial discrimination network at the same time, or adjust them alternately, which is not limited in this embodiment.
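A short sketch of the "fix one network, adjust the other" scheme described above, assuming PyTorch; freezing parameters via requires_grad is one possible implementation, not necessarily the one used in the patent.

```python
def set_trainable(module, flag: bool):
    """Fix (flag=False) or release (flag=True) the parameters of a network."""
    for p in module.parameters():
        p.requires_grad_(flag)

# Hypothetical order for one training sample:
# set_trainable(G, False); set_trainable(D, True)   # adjust the discrimination network first
# ... compute the loss and update D ...
# set_trainable(D, False); set_trainable(G, True)   # then adjust the image generation network
# ... compute the loss and update G ...
```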
Fig. 3 is a flowchart illustrating a training method for an image generation network according to another embodiment. The embodiment relates to a specific process that computer equipment inputs a training sample image into an initial image generation network to obtain an initial prediction image. On the basis of the above embodiment, optionally, as shown in fig. 3, S102 may include:
S301, inputting the training sample image into an initial image generation network to obtain an initial difference image.
S302, fusing the initial difference image with the training sample image to obtain an initial prediction image; wherein the initial difference image characterizes a difference between the training sample image and the initial predicted image.
Specifically, after the computer device inputs the training sample image into the initial image generation network, an initial difference image is obtained, which represents the difference between the training sample image and the initial predicted image, that is, the region that changes between the current time point and the prediction time point, such as the circled region in FIG. 3a (the figure is only an example of a difference image; the embodiment is not limited to it, and the changed region may also be marked with different colors, etc.). The computer device then fuses the initial difference image with the training sample image to obtain the initial predicted image. In this way the user can visually see not only the predicted image after the preset time interval but also the change over that interval, which further improves the intuitiveness of the generated predicted image.
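A minimal sketch of the fusion in S301–S302, assuming the generator outputs an initial difference image that is added to the training sample image; the additive fusion and the clamping to the normalized gray range are assumptions for illustration.

```python
import torch

def fuse_difference(sample: torch.Tensor, difference: torch.Tensor) -> torch.Tensor:
    """Fuse the initial difference image with the training sample image to obtain the
    initial predicted image, keeping gray values in the normalized (-1, 1) range."""
    return torch.clamp(sample + difference, -1.0, 1.0)
```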
After the training of the image generation network is completed, prediction of a predicted image can be performed by using the network, and fig. 4 is a flowchart of an image prediction method provided by an embodiment, where the method includes:
s401, acquiring a to-be-predicted image.
S402, inputting an image to be predicted into a network to obtain a predicted image; the training mode of the image generation network comprises the following steps: acquiring a training sample image; inputting a training sample image into an initial image generation network to obtain an initial prediction image; the initial prediction image comprises a preset simulation mark and represents the change condition of the predicted measured object after a preset time interval; inputting the initial prediction image and the first real sample image into an initial judgment network to obtain a judgment result of the initial prediction image; calculating the loss between the discrimination result and the simulation mark, and training the initial discrimination network and the initial image generation network according to the loss; and when the loss reaches convergence, finishing training of the initial image generation network to obtain the image generation network.
Specifically, after the computer device obtains the image to be predicted, the computer device may perform preprocessing according to the image preprocessing method to obtain a standard image to be predicted, and then input the image to be predicted into the trained image generation network to obtain a predicted image after a preset time interval. Optionally, the index information of the measured object corresponding to the image to be predicted may also be input to the image generation network at the same time, so as to improve the accuracy of the predicted image. Optionally, the corresponding image generation network may be invoked according to the input time interval to generate a predicted image corresponding to the time interval; for example, the call image generation network a can generate a prediction image after 1 year, the call image generation network B can generate a prediction image after 2 years, and the call image generation network C can generate a prediction image after 3 years.
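A minimal inference sketch for S401–S402, assuming a trained generator saved with torch.save and the preprocessing helper sketched earlier; the file name and class name are placeholders.

```python
import torch

@torch.no_grad()
def predict(generator, volume_to_predict: torch.Tensor) -> torch.Tensor:
    """Feed a preprocessed image to be predicted into the image generation network."""
    generator.eval()
    return generator(volume_to_predict)

# Hypothetical usage:
# G = VNetLikeGenerator()
# G.load_state_dict(torch.load("image_generation_network.pt", map_location="cpu"))
# predicted = predict(G, preprocess_volume(volume))
```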
In the image prediction method provided in this embodiment, the computer device obtains an image to be predicted and inputs it into the image generation network to obtain a predicted image. Because the image generation network is trained with a joint adversarial idea and the simulation mark requires no manual operation, the network training efficiency is improved, the amount of training sample data is greatly increased, the precision of the image generation network is further improved, and the accuracy of the predicted image is improved. Moreover, the obtained predicted image resembles a real medical image, so a doctor can review it visually, which greatly improves its intuitiveness.
It should be understood that although the steps in the flowcharts of FIGS. 1-4 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict order limitation on these steps, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 1-4 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and the order of the sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
FIG. 5 is a schematic structural diagram of a training apparatus for an image generation network according to an embodiment. As shown in FIG. 5, the apparatus includes a first acquisition module 11, a generation module 12, a discrimination module 13 and a training module 14.
Specifically, the first acquisition module 11 is configured to acquire a training sample image.
The generation module 12 is configured to input the training sample image into an initial image generation network to obtain an initial predicted image; the initial predicted image includes a preset simulation mark and represents the predicted change of the subject after a preset time interval.
The discrimination module 13 is configured to input the initial predicted image and a first real sample image into an initial discrimination network to obtain a discrimination result for the initial predicted image; the first real sample image is a reference image for the initial predicted image.
The training module 14 is configured to calculate the loss between the discrimination result and the simulation mark and to train the initial discrimination network and the initial image generation network according to the loss; when the loss converges, the training of the initial image generation network is completed and the image generation network is obtained.
The training apparatus for an image generation network provided in this embodiment may implement the method embodiments described above, and the implementation principle and the technical effect are similar, which are not described herein again.
In one embodiment, the first acquisition module 11 is further configured to acquire a second real sample image, where the second real sample image is a real medical image obtained by scanning the subject after a time interval; the training module 14 is further configured to calculate the difference between the initial predicted image and the second real sample image, and to train the initial image generation network according to the difference.
In one embodiment, the apparatus further comprises a construction module for constructing a network model optimization function based on the loss function, the difference, the initial predicted image, the mathematical expectation of the value of the loss function and the mathematical expectation of the value of the difference; when the value of the network model optimization function is less than or equal to a preset threshold, the loss is considered converged.
In one embodiment, the network model optimization function comprises an image generation network optimization function and a discrimination network optimization function; the condition that the value of the network model optimization function is less than or equal to the preset threshold specifically means that the sum of the value of the image generation network optimization function and the value of the discrimination network optimization function is less than or equal to the preset threshold.
In one embodiment, the training module 14 is specifically configured to adjust the network parameters of the initial discrimination network according to the loss, the adjusted initial discrimination network reducing the value of the loss, and to adjust the network parameters of the initial image generation network according to the loss, the adjusted initial image generation network increasing the value of the loss.
In one embodiment, the training module 14 is specifically configured to adjust the network parameters of the initial image generation network according to the loss after the network parameters of the initial discrimination network are adjusted.
In one embodiment, the generating module 12 is specifically configured to input a training sample image into an initial image generation network to obtain an initial difference image; fusing the initial difference image and the training sample image to obtain an initial prediction image; wherein the initial difference image characterizes a difference between the training sample image and the initial predicted image.
In one embodiment, the training sample image is a magnetic resonance image of the brain of the subject.
FIG. 6 is a schematic structural diagram of an image prediction apparatus according to an embodiment. As shown in FIG. 6, the apparatus includes a second acquisition module 15 and a prediction module 16.
Specifically, the second acquisition module 15 is configured to acquire the image to be predicted.
The prediction module 16 is configured to input the image to be predicted into the image generation network to obtain a predicted image; for the training process of the image generation network, reference may be made to the above embodiment of the training apparatus for an image generation network.
The image prediction apparatus provided in this embodiment may implement the method embodiments described above, and the implementation principle and the technical effect are similar, which are not described herein again.
For specific limitations of the training apparatus and the image prediction apparatus of the image generation network, reference may be made to the above limitations of the training method and the image prediction method of the image generation network, and details are not repeated here. The modules in the training device and the image prediction device of the image generation network can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a training method and an image prediction method for an image generation network. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer device to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a training sample image;
inputting a training sample image into an initial image generation network to obtain an initial prediction image; the initial prediction image comprises a preset simulation mark and represents the predicted change of the subject after a preset time interval;
inputting the initial prediction image and the first real sample image into an initial discrimination network to obtain a discrimination result of the initial prediction image; the first real sample image is a reference image of the initial prediction image;
calculating the loss between the discrimination result and the simulation mark, and training the initial discrimination network and the initial image generation network according to the loss; and when the loss reaches convergence, finishing training of the initial image generation network to obtain the image generation network (a minimal training sketch follows below).
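The adversarial step described above can be illustrated with a minimal PyTorch sketch. Everything in it — the tiny convolutional generator and discriminator, the tensor sizes, the learning rates, and the choice of 0 as the simulation mark for generated images and 1 for real images — is an assumption made only for illustration; this embodiment does not prescribe a particular architecture, label encoding, or optimizer.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the initial image generation network and the initial discrimination network.
G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1))
D = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Flatten(), nn.Linear(8 * 16 * 16, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

sample_img = torch.randn(4, 1, 32, 32)   # stands in for the training sample images
real_ref = torch.randn(4, 1, 32, 32)     # stands in for the first real sample images

for step in range(100):
    fake = G(sample_img)                 # initial prediction image
    sim_mark = torch.zeros(4, 1)         # preset simulation mark: 0 for generated images

    # Discrimination-network step: adjust D so the loss between its output and the labels decreases.
    opt_d.zero_grad()
    loss_d = bce(D(fake.detach()), sim_mark) + bce(D(real_ref), torch.ones(4, 1))
    loss_d.backward()
    opt_d.step()

    # Generation-network step, taken after D has been updated: adjust G so the loss between
    # D's output and the simulation mark increases, i.e. D can no longer recognize the
    # prediction as generated.
    opt_g.zero_grad()
    loss_g = -bce(D(fake), sim_mark)
    loss_g.backward()
    opt_g.step()
```

The order of the two updates in the loop mirrors the arrangement described later, where the generation network is adjusted only after the discrimination network has been adjusted.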
The implementation principle and technical effect of the computer device provided in this embodiment are similar to those of the method embodiments described above, and are not described herein again.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a second real sample image, wherein the second real sample image is a real medical image obtained by scanning the subject after the preset time interval;
and calculating the difference between the initial prediction image and the second real sample image, and training the initial image generation network according to the difference (a sketch of this additional constraint follows below).
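Continuing the sketch above (and reusing G, sample_img, and opt_g from it), the additional constraint can be written as a simple reconstruction term. The use of an L1 norm as the "difference" is an assumption for illustration; this embodiment does not fix the distance metric.

```python
import torch
import torch.nn.functional as F

second_real = torch.randn(4, 1, 32, 32)    # stands in for the real scans acquired after the time interval

fake = G(sample_img)                       # initial prediction image
difference = F.l1_loss(fake, second_real)  # difference between the prediction and the real follow-up scan

opt_g.zero_grad()
difference.backward()                      # train the generation network to shrink this difference
opt_g.step()
```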
In one embodiment, the processor, when executing the computer program, further performs the steps of:
constructing a network model optimization function according to the loss function, the difference, the initial prediction image, the mathematical expectation of the value of the loss function, and the mathematical expectation of the value of the difference;
and when the value of the network model optimization function is less than or equal to a preset threshold value, this represents that the loss has reached convergence (one hedged form of such an objective is sketched below).
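One common way to write such an optimization function — offered here only as a hedged illustration, since this embodiment does not disclose its exact form — is a conditional adversarial term plus a weighted expectation of the difference term, where x denotes the training sample image, y the real sample image, G the image generation network, D the discrimination network, and λ an assumed weighting constant:

```latex
% Assumed form of the network model optimization function (not taken from the patent):
\mathcal{L}_{\mathrm{adv}}(G, D) =
  \mathbb{E}_{y}\bigl[\log D(y)\bigr] +
  \mathbb{E}_{x}\bigl[\log\bigl(1 - D(G(x))\bigr)\bigr]

G^{*} = \arg\min_{G}\max_{D}\;
  \mathcal{L}_{\mathrm{adv}}(G, D) +
  \lambda\,\mathbb{E}_{x,y}\bigl[\lVert G(x) - y \rVert_{1}\bigr]
```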
In one embodiment, the network model optimization function includes an image generation network optimization function and a discrimination network optimization function;
the value of the network model optimization function being less than or equal to the preset threshold value includes:
the sum of the value of the image generation network optimization function and the value of the discrimination network optimization function being less than or equal to the preset threshold value (a sketch of this convergence test follows below).
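Under the same illustrative assumptions as the training sketches above (reusing loss_g, loss_d, and difference from them), the convergence test might look as follows; the weighting factor and the threshold value are placeholders, since this embodiment leaves both to be chosen in practice, and a single batch is used here as a stand-in for the mathematical expectations.

```python
lambda_diff = 10.0   # assumed weight of the difference term
threshold = 0.05     # assumed preset threshold value

# Image generation network optimization function and discrimination network optimization
# function, evaluated on the current batch.
g_objective = loss_g.item() + lambda_diff * difference.item()
d_objective = loss_d.item()

if g_objective + d_objective <= threshold:
    print("loss has converged; the trained image generation network is obtained")
```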
In one embodiment, the processor, when executing the computer program, further performs the steps of:
adjusting network parameters of the initial discrimination network according to the loss, wherein the adjusted initial discrimination network reduces the loss value;
and adjusting network parameters of the initial image generation network according to the loss, wherein the adjusted initial image generation network increases the value of the loss.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and after the network parameters of the initial discrimination network are adjusted, adjusting the network parameters of the initial image generation network according to the loss.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
inputting a training sample image into an initial image generation network to obtain an initial difference image;
fusing the initial difference image and the training sample image to obtain an initial prediction image; wherein the initial difference image characterizes a difference between the training sample image and the initial prediction image (a sketch of this difference-image formulation follows below).
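A minimal sketch of this difference-image formulation is given below. The specific layers, the single-channel input, and the use of a voxel-wise addition as the "fusion" are illustrative assumptions; this embodiment does not restrict the fusion operation to addition.

```python
import torch
import torch.nn as nn

class ResidualGenerator(nn.Module):
    """Generation network that predicts a change map and fuses it with its input."""
    def __init__(self):
        super().__init__()
        self.delta_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                       nn.Conv2d(8, 1, 3, padding=1))

    def forward(self, sample_img):
        initial_difference_image = self.delta_net(sample_img)   # predicted change only
        return sample_img + initial_difference_image            # fused initial prediction image

initial_prediction = ResidualGenerator()(torch.randn(1, 1, 32, 32))
```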
In one embodiment, the training sample image is a magnetic resonance image of the brain of the subject.
In one embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the following steps when executing the computer program:
acquiring an image to be predicted;
inputting an image to be predicted into an image generation network to obtain a predicted image; the training mode of the image generation network comprises the following steps:
acquiring a training sample image;
inputting a training sample image into an initial image generation network to obtain an initial prediction image; the initial prediction image comprises a preset simulation mark and represents the predicted change of the subject after a preset time interval;
inputting the initial prediction image and the first real sample image into an initial discrimination network to obtain a discrimination result of the initial prediction image;
calculating the loss between the discrimination result and the simulation mark, and training the initial discrimination network and the initial image generation network according to the loss; and when the loss reaches convergence, finishing training of the initial image generation network to obtain the image generation network.
The implementation principle and technical effect of the computer device provided in this embodiment are similar to those of the method embodiments described above, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a training sample image;
inputting a training sample image into an initial image generation network to obtain an initial prediction image; the initial prediction image comprises a preset simulation mark and represents the predicted change of the subject after a preset time interval;
inputting the initial prediction image and the first real sample image into an initial discrimination network to obtain a discrimination result of the initial prediction image; the first real sample image is a reference image of the initial prediction image;
calculating the loss between the discrimination result and the simulation mark, and training the initial discrimination network and the initial image generation network according to the loss; and when the loss reaches convergence, finishing training of the initial image generation network to obtain the image generation network.
The implementation principle and technical effect of the computer-readable storage medium provided by this embodiment are similar to those of the above-described method embodiment, and are not described herein again.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a second real sample image, wherein the second real sample image is a real medical image obtained by scanning the subject after the preset time interval;
and calculating the difference between the initial prediction image and the second real sample image, and training the initial image generation network according to the difference.
In one embodiment, the computer program when executed by the processor further performs the steps of:
constructing a network model optimization function according to the loss function, the difference, the initial prediction image, the mathematical expectation of the value of the loss function, and the mathematical expectation of the value of the difference;
and when the value of the network model optimization function is less than or equal to a preset threshold value, this represents that the loss has reached convergence.
In one embodiment, the network model optimization function includes an image generation network optimization function and a discrimination network optimization function;
the value of the network model optimization function being less than or equal to the preset threshold value includes:
the sum of the value of the image generation network optimization function and the value of the discrimination network optimization function being less than or equal to the preset threshold value.
In one embodiment, the computer program when executed by the processor further performs the steps of:
adjusting network parameters of the initial discrimination network according to the loss, wherein the adjusted initial discrimination network reduces the loss value;
and adjusting network parameters of the initial image generation network according to the loss, wherein the adjusted initial image generation network increases the value of the loss.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and after the network parameters of the initial discrimination network are adjusted, adjusting the network parameters of the initial image generation network according to the loss.
In one embodiment, the computer program when executed by the processor further performs the steps of:
inputting a training sample image into an initial image generation network to obtain an initial difference image;
fusing the initial difference image and the training sample image to obtain an initial prediction image; wherein the initial difference image characterizes a difference between the training sample image and the initial prediction image.
In one embodiment, the training sample image is a magnetic resonance image of the brain of the subject.
In one embodiment, there is also provided a computer readable storage medium having a computer program stored thereon, the computer program when executed by a processor implementing the steps of:
acquiring an image to be predicted;
inputting an image to be predicted into an image generation network to obtain a predicted image; the training mode of the image generation network comprises the following steps:
acquiring a training sample image;
inputting a training sample image into an initial image generation network to obtain an initial prediction image; the initial prediction image comprises a preset simulation mark and represents the predicted change of the subject after a preset time interval;
inputting the initial prediction image and the first real sample image into an initial discrimination network to obtain a discrimination result of the initial prediction image;
calculating the loss between the discrimination result and the simulation mark, and training the initial discrimination network and the initial image generation network according to the loss; and when the loss reaches convergence, finishing training of the initial image generation network to obtain the image generation network.
The implementation principle and technical effect of the computer-readable storage medium provided by this embodiment are similar to those of the above-described method embodiment, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination of these technical features has been described; nevertheless, as long as a combination of technical features contains no contradiction, it should be regarded as falling within the scope of this specification.
The above embodiments merely express several implementations of the present application, and although they are described in relative detail, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and all of these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for training an image generation network, comprising:
acquiring a training sample image;
inputting the training sample image into an initial image generation network to obtain an initial prediction image; the initial prediction image comprises a preset simulation mark and represents the predicted change of the subject after a preset time interval;
inputting the initial prediction image and a first real sample image into an initial discrimination network to obtain a discrimination result of the initial prediction image; the first real sample image is a reference image of the initial prediction image;
calculating the loss between the discrimination result and the simulation mark, and training the initial discrimination network and the initial image generation network according to the loss; and when the loss reaches convergence, finishing training of the initial image generation network to obtain an image generation network.
2. The method of claim 1, further comprising:
acquiring a second real sample image, wherein the second real sample image is a real medical image obtained by scanning the subject after the preset time interval;
and calculating the difference between the initial prediction image and the second real sample image, and training the initial image generation network according to the difference.
3. The method of claim 2, further comprising:
constructing a network model optimization function according to the loss function, the difference, the initial prediction image, the mathematical expectation of the value of the loss function, and the mathematical expectation of the value of the difference;
and when the value of the network model optimization function is smaller than or equal to a preset threshold value, representing that the loss reaches convergence.
4. The method of claim 3, wherein the network model optimization function comprises an image generation network optimization function and a discrimination network optimization function;
the value of the network model optimization function being less than or equal to the preset threshold value comprises:
the sum of the value of the image generation network optimization function and the value of the discrimination network optimization function being less than or equal to the preset threshold value.
5. The method of claim 1, wherein training the initial discriminative network and the initial image generation network based on the loss comprises:
adjusting network parameters of the initial discrimination network according to the loss, wherein the adjusted initial discrimination network reduces the value of the loss;
and adjusting the network parameters of the initial image generation network according to the loss, wherein the adjusted initial image generation network increases the value of the loss.
6. The method of claim 5, wherein adjusting network parameters of the initial image generation network based on the loss comprises:
and after the network parameters of the initial discrimination network are adjusted, adjusting the network parameters of the initial image generation network according to the loss.
7. The method according to any of claims 1-6, wherein inputting the training sample images into an initial image generation network to obtain an initial predictive image comprises:
inputting the training sample image into an initial image generation network to obtain an initial difference image;
fusing the initial difference image and the training sample image to obtain the initial prediction image; wherein the initial difference image characterizes a difference between the training sample image and the initial prediction image.
8. The method of any one of claims 1-6, wherein the training sample image is a magnetic resonance image of the brain of the subject.
9. An image prediction method, comprising:
acquiring an image to be predicted;
inputting the image to be predicted into an image generation network to obtain a predicted image; the training mode of the image generation network comprises the following steps:
acquiring a training sample image;
inputting the training sample image into an initial image generation network to obtain an initial prediction image; the initial prediction image comprises a preset simulation mark and represents the predicted change of the subject after a preset time interval;
inputting the initial prediction image and a first real sample image into an initial discrimination network to obtain a discrimination result of the initial prediction image;
calculating the loss between the discrimination result and the simulation mark, and training the initial discrimination network and the initial image generation network according to the loss; and when the loss reaches convergence, finishing training of the initial image generation network to obtain an image generation network.
10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of claim 9 when executing the computer program.
CN201911106039.0A 2019-11-13 2019-11-13 Training method of image generation network, image prediction method and computer equipment Active CN110866909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911106039.0A CN110866909B (en) 2019-11-13 2019-11-13 Training method of image generation network, image prediction method and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911106039.0A CN110866909B (en) 2019-11-13 2019-11-13 Training method of image generation network, image prediction method and computer equipment

Publications (2)

Publication Number Publication Date
CN110866909A true CN110866909A (en) 2020-03-06
CN110866909B CN110866909B (en) 2022-09-27

Family

ID=69653432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911106039.0A Active CN110866909B (en) 2019-11-13 2019-11-13 Training method of image generation network, image prediction method and computer equipment

Country Status (1)

Country Link
CN (1) CN110866909B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111383221A (en) * 2020-03-12 2020-07-07 南方科技大学 Method for generating scoliosis detection model and computer equipment
CN111388000A (en) * 2020-03-27 2020-07-10 上海杏脉信息科技有限公司 Virtual lung air retention image prediction method and system, storage medium and terminal
CN111582254A (en) * 2020-06-19 2020-08-25 上海眼控科技股份有限公司 Video prediction method, device, computer equipment and readable storage medium
CN111709446A (en) * 2020-05-14 2020-09-25 天津大学 X-ray chest radiography classification device based on improved dense connection network
CN111754520A (en) * 2020-06-09 2020-10-09 江苏师范大学 Deep learning-based cerebral hematoma segmentation method and system
CN112102294A (en) * 2020-09-16 2020-12-18 推想医疗科技股份有限公司 Training method and device for generating countermeasure network, and image registration method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108682009A (en) * 2018-05-08 2018-10-19 深圳市铱硙医疗科技有限公司 A kind of Alzheimer's disease prediction technique, device, equipment and medium
CN109308450A (en) * 2018-08-08 2019-02-05 杰创智能科技股份有限公司 A kind of face's variation prediction method based on generation confrontation network
CN109754447A (en) * 2018-12-28 2019-05-14 上海联影智能医疗科技有限公司 Image generating method, device, equipment and storage medium
US20190163959A1 (en) * 2017-11-24 2019-05-30 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for recognizing face
CN109829892A (en) * 2019-01-03 2019-05-31 众安信息技术服务有限公司 A kind of training method of prediction model, prediction technique and device using the model
CN110276736A (en) * 2019-04-01 2019-09-24 厦门大学 A kind of magnetic resonance image fusion method based on weight prediction network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190163959A1 (en) * 2017-11-24 2019-05-30 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for recognizing face
CN108682009A (en) * 2018-05-08 2018-10-19 深圳市铱硙医疗科技有限公司 A kind of Alzheimer's disease prediction technique, device, equipment and medium
CN109308450A (en) * 2018-08-08 2019-02-05 杰创智能科技股份有限公司 A kind of face's variation prediction method based on generation confrontation network
CN109754447A (en) * 2018-12-28 2019-05-14 上海联影智能医疗科技有限公司 Image generating method, device, equipment and storage medium
CN109829892A (en) * 2019-01-03 2019-05-31 众安信息技术服务有限公司 A kind of training method of prediction model, prediction technique and device using the model
CN110276736A (en) * 2019-04-01 2019-09-24 厦门大学 A kind of magnetic resonance image fusion method based on weight prediction network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHANGHEE HAN et al.: "GAN-based synthetic brain MR image generation", 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018) *
GRIGORY ANTIPOV et al.: "Face Aging with Conditional Generative Adversarial Networks", arXiv:1702.01983 [cs.CV] *
YUAN Shuai et al.: "Road-condition video frame prediction model using a residual generative adversarial network", Journal of Xi'an Jiaotong University *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111383221A (en) * 2020-03-12 2020-07-07 南方科技大学 Method for generating scoliosis detection model and computer equipment
CN111383221B (en) * 2020-03-12 2023-04-28 南方科技大学 Scoliosis detection model generation method and computer equipment
CN111388000A (en) * 2020-03-27 2020-07-10 上海杏脉信息科技有限公司 Virtual lung air retention image prediction method and system, storage medium and terminal
CN111388000B (en) * 2020-03-27 2023-08-25 上海杏脉信息科技有限公司 Virtual lung air retention image prediction method and system, storage medium and terminal
CN111709446A (en) * 2020-05-14 2020-09-25 天津大学 X-ray chest radiography classification device based on improved dense connection network
CN111754520A (en) * 2020-06-09 2020-10-09 江苏师范大学 Deep learning-based cerebral hematoma segmentation method and system
CN111754520B (en) * 2020-06-09 2023-09-15 江苏师范大学 Deep learning-based cerebral hematoma segmentation method and system
CN111582254A (en) * 2020-06-19 2020-08-25 上海眼控科技股份有限公司 Video prediction method, device, computer equipment and readable storage medium
CN112102294A (en) * 2020-09-16 2020-12-18 推想医疗科技股份有限公司 Training method and device for generating countermeasure network, and image registration method and device
CN112102294B (en) * 2020-09-16 2024-03-01 推想医疗科技股份有限公司 Training method and device for generating countermeasure network, and image registration method and device

Also Published As

Publication number Publication date
CN110866909B (en) 2022-09-27

Similar Documents

Publication Publication Date Title
CN110866909B (en) Training method of image generation network, image prediction method and computer equipment
CN110321920B (en) Image classification method and device, computer readable storage medium and computer equipment
CN111862066B (en) Brain tumor image segmentation method, device, equipment and medium based on deep learning
CN110751187B (en) Training method of abnormal area image generation network and related product
CN109767461B (en) Medical image registration method and device, computer equipment and storage medium
EP4005498A1 (en) Information processing device, program, learned model, diagnostic assistance device, learning device, and method for generating prediction model
CN110717905B (en) Brain image detection method, computer device, and storage medium
CN110599526A (en) Image registration method, computer device, and storage medium
CN114298234B (en) Brain medical image classification method and device, computer equipment and storage medium
CN110210543B (en) Image classification system, method, apparatus and storage medium
Hammouda et al. A new framework for performing cardiac strain analysis from cine MRI imaging in mice
US20220036575A1 (en) Method for measuring volume of organ by using artificial neural network, and apparatus therefor
Corrado et al. Quantifying atrial anatomy uncertainty from clinical data and its impact on electro-physiology simulation predictions
CN115409879A (en) Data processing method and device for image registration, storage medium and electronic equipment
CN111223158A (en) Artifact correction method for heart coronary image and readable storage medium
Zhang et al. Empowering cortical thickness measures in clinical diagnosis of Alzheimer's disease with spherical sparse coding
CN111160441B (en) Classification method, computer device, and storage medium
CN113192031A (en) Blood vessel analysis method, blood vessel analysis device, computer equipment and storage medium
CN111275059B (en) Image processing method and device and computer readable storage medium
CN110766653A (en) Image segmentation method and device, computer equipment and storage medium
CN115375787A (en) Artifact correction method, computer device and readable storage medium
CN111091504B (en) Image offset field correction method, computer device, and storage medium
CN111210414B (en) Medical image analysis method, computer device, and readable storage medium
CN115393377A (en) Training method of image segmentation model, image segmentation method and device
CN111080733A (en) Medical scanning image acquisition method and device, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220118

Address after: 430206 22 / F, building C3, future science and technology building, 999 Gaoxin Avenue, Donghu New Technology Development Zone, Wuhan, Hubei Province

Applicant after: Wuhan Zhongke Medical Technology Industrial Technology Research Institute Co.,Ltd.

Address before: Room 3674, 3 / F, 2879 Longteng Avenue, Xuhui District, Shanghai, 200232

Applicant before: SHANGHAI UNITED IMAGING INTELLIGENT MEDICAL TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant