CN113486925A - Model training method, fundus image generation method, model evaluation method and device - Google Patents

Model training method, fundus image generation method, model evaluation method and device

Info

Publication number
CN113486925A
CN113486925A (application CN202110633931.5A)
Authority
CN
China
Prior art keywords
image
training set
disease
fundus
probability distribution
Prior art date
Legal status
Pending
Application number
CN202110633931.5A
Other languages
Chinese (zh)
Inventor
刘从新
王斌
赵昕
和超
张大磊
Current Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Beijing Airdoc Technology Co Ltd
Original Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Beijing Airdoc Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Eaglevision Medical Technology Co Ltd, Beijing Airdoc Technology Co Ltd filed Critical Shanghai Eaglevision Medical Technology Co Ltd
Priority to CN202110633931.5A
Publication of CN113486925A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Abstract

The invention discloses a model training method, a fundus image generation method, a model evaluation method, and a model evaluation device. The model training method comprises: acquiring a first training set, wherein the first training set comprises a plurality of sample fundus images; generating a blood vessel segmentation map of each sample fundus image, wherein the blood vessel segmentation map is a binary map; for each sample fundus image, combining the RGB three-channel pixel values of each pixel in the sample fundus image with the pixel values of the pixels at the corresponding positions in the blood vessel segmentation map to obtain a four-channel image; and inputting the four-channel images into a pre-constructed initial network model and performing model training to obtain a target network model, wherein the network hyper-parameters of the initial network model at least comprise a guiding strength, the guiding strength is used to guide the network model, based on the blood vessel information in the four-channel images, to generate the blood vessels in new fundus images, and the target network model is used to generate new fundus images.

Description

Model training method, fundus image generation method, model evaluation method and device
Technical Field
The invention relates to the field of medical image processing, and in particular to a model training method, a fundus image generation method, a model evaluation method, and a device.
Background
At present, common machine learning and deep learning models can be divided into discriminative models and generative models. A discriminative model gives a discriminative prediction for input data based on learning from supervision signals, for example classification, detection, and segmentation in computer vision; a generative model directly learns the data distribution of the training set without depending on a specific supervision signal, and can then generate unlimited amounts of data conforming to that distribution.
If the generated data has high fidelity, many valuable applications become possible. Take medical images as an example: medical images, particularly those of rare diseases, are naturally scarce. If a generative model could produce large numbers of case images, it would undoubtedly benefit medical image recognition and understanding, for example by enriching training sets through data augmentation, by training with or displaying generated images to avoid disclosing patient privacy, and by analyzing the semantic structure of the model's latent space to study or prognosticate disease evolution.
In the prior art, although the details of fundus images generated by generative models look real, the vascular structure is often disordered and inconsistent with basic medical knowledge: for example, some arteries or veins are disconnected, vessels appear from nowhere or break off abruptly, and arteries and veins are confused. How to generate high-quality fundus images is therefore a technical problem that those skilled in the art urgently need to solve.
Disclosure of Invention
The embodiment of the invention provides a model training method, a fundus image generation method, a model evaluation method and a model evaluation device, and aims to solve the technical problem that the image quality of a generated fundus image is low in the prior art.
According to a first aspect of the invention, a method of model training is disclosed, the method comprising:
acquiring a first training set, wherein the first training set comprises a plurality of sample fundus images;
generating a blood vessel segmentation map of each sample fundus image, wherein the blood vessel segmentation map is a binary map;
for each sample fundus image, combining the RGB three-channel pixel values of each pixel in the sample fundus image with the pixel values of the pixels at the corresponding positions in the blood vessel segmentation map to obtain a four-channel image;
inputting the four-channel images into a pre-constructed initial network model and performing model training to obtain a target network model, wherein the network hyper-parameters of the initial network model at least comprise a guiding strength, the guiding strength is used to guide the network model, based on the blood vessel information in the four-channel images, to generate the blood vessels in new fundus images, and the target network model is used to generate new fundus images.
Optionally, as an embodiment, the initial network model is a network model constructed based on a generative adversarial network (GAN).
Optionally, as an embodiment, a pixel value of each pixel point in the blood vessel segmentation map is 0 or 1;
the pixel value of each pixel in the four-channel image is [R, G, B, m×s×U], wherein R, G, and B are the pixel values of the R, G, and B channels of that pixel in the sample fundus image, m is the pixel value of the pixel at the corresponding position in the blood vessel segmentation map, s is the guiding strength, and U is 255.
According to a second aspect of the present invention, there is disclosed a fundus image generating method for generating a new fundus image based on the target network model in the first aspect, the method comprising:
receiving an original fundus image;
generating a vessel segmentation map of the original fundus image;
combining the RGB three-channel pixel values of each pixel in the original fundus image with the pixel values of the pixels at the corresponding positions in the blood vessel segmentation map to obtain a four-channel first image;
inputting the first image into a target network model for processing to obtain a second image of four channels;
and generating a new fundus image based on the RGB channel information of the pixel points in the second image.
According to a third aspect of the present invention, a model evaluation method is disclosed for evaluating the performance of the target network model in the first aspect, the method comprising:
acquiring a second training set and a synthetic image set, wherein the second training set comprises a plurality of sample fundus images, the synthetic image set comprises a plurality of fundus images, and the fundus images in the synthetic image set are new fundus images generated based on the second training set and the target network model;
inputting each sample fundus image in the second training set into a pre-trained multi-disease classification model for processing to obtain the probability distribution of each sample fundus image over the disease types; inputting each fundus image in the synthetic image set into the multi-disease classification model for processing to obtain the probability distribution of each fundus image over the disease types;
calculating the similarity of the second training set and the synthetic image set in the disease probability distribution dimension according to the probability distribution over disease types of each sample fundus image in the second training set and of each fundus image in the synthetic image set;
and evaluating the performance of the target network model according to the similarity.
Optionally, as an embodiment, the calculating, according to the probability distribution over disease types of each sample fundus image in the second training set and the probability distribution over disease types of each fundus image in the synthetic image set, a similarity of the second training set and the synthetic image set in the disease probability distribution dimension includes:
calculating JS divergence values of the second training set and the synthetic image set in each disease category dimension according to the probability distribution over disease types of each sample fundus image in the second training set and of each fundus image in the synthetic image set;
and calculating the similarity of the second training set and the synthetic image set in the disease probability distribution dimension according to the JS divergence values of the second training set and the synthetic image set in each disease category dimension.
Optionally, as an embodiment, the calculating, according to the JS divergence values of the second training set and the synthetic image set in each disease category dimension, a similarity of the second training set and the synthetic image set in the disease probability distribution dimension includes:
performing a mean operation on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain a JS divergence mean, wherein the smaller the JS divergence mean, the higher the similarity of the second training set and the synthetic image set in the disease probability distribution dimension; or,
performing a weighted summation operation on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain a JS divergence weighted sum, wherein the smaller the JS divergence weighted sum, the higher the similarity of the second training set and the synthetic image set in the disease probability distribution dimension; or,
performing a square-root operation on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain JS distance values of the second training set and the synthetic image set in each disease category dimension, and performing a mean operation on those JS distance values to obtain a JS distance mean, wherein the smaller the JS distance mean, the higher the similarity of the second training set and the synthetic image set in the disease probability distribution dimension; or,
performing a weighted summation operation on the JS distance values of the second training set and the synthetic image set in each disease category dimension to obtain a JS distance weighted sum, wherein the smaller the JS distance weighted sum, the higher the similarity of the second training set and the synthetic image set in the disease probability distribution dimension.
According to a fourth aspect of the present invention, there is disclosed a model training apparatus, the apparatus comprising:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a first training set, and the first training set comprises a plurality of sample fundus images;
the first generation module is used for generating a blood vessel segmentation map of each sample fundus image, wherein the blood vessel segmentation map is a binary map;
the first merging module is used for, for each sample fundus image, combining the RGB three-channel pixel values of each pixel in the sample fundus image with the pixel values of the pixels at the corresponding positions in its blood vessel segmentation map to obtain a four-channel image;
the training module is used for inputting the four-channel images into a pre-constructed initial network model and performing model training to obtain a target network model, wherein the network hyper-parameters of the initial network model at least comprise a guiding strength, the guiding strength is used to guide the network model, based on the blood vessel information in the four-channel images, to generate the blood vessels in new fundus images, and the target network model is used to generate new fundus images.
Optionally, as an embodiment, the initial network model is a network model constructed based on a generative adversarial network (GAN).
Optionally, as an embodiment, a pixel value of each pixel point in the blood vessel segmentation map is 0 or 1;
the pixel value of each pixel in the four-channel image is [R, G, B, m×s×U], wherein R, G, and B are the pixel values of the R, G, and B channels of that pixel in the sample fundus image, m is the pixel value of the pixel at the corresponding position in the blood vessel segmentation map, s is the guiding strength, and U is 255.
According to a fifth aspect of the present invention, there is disclosed a fundus image generating apparatus for generating a new fundus image based on the target network model in the fourth aspect, the apparatus comprising:
a receiving module for receiving an original fundus image;
a second generation module for generating a blood vessel segmentation map of the original fundus image;
the second merging module is used for combining the RGB three-channel pixel values of each pixel in the original fundus image with the pixel values of the pixels at the corresponding positions in the blood vessel segmentation map to obtain a four-channel first image;
the first processing module is used for inputting the first image into a target network model for processing to obtain a second image of four channels;
and the third generation module is used for generating a new fundus image based on the RGB channel information of the pixel points in the second image.
According to a sixth aspect of the present invention, there is disclosed a model evaluation apparatus for evaluating performance of a target network model in the fourth aspect, the apparatus comprising:
a second obtaining module, configured to acquire a second training set and a synthetic image set, wherein the second training set comprises a plurality of sample fundus images, the synthetic image set comprises a plurality of fundus images, and the fundus images in the synthetic image set are new fundus images generated based on the second training set and the target network model;
the second processing module is used for inputting each sample fundus image in the second training set into a pre-trained multi-disease classification model for processing to obtain the probability distribution of each sample fundus image over the disease types, and for inputting each fundus image in the synthetic image set into the multi-disease classification model for processing to obtain the probability distribution of each fundus image over the disease types;
the calculation module is used for calculating the similarity of the second training set and the synthetic image set in the disease probability distribution dimension according to the probability distribution over disease types of each sample fundus image in the second training set and of each fundus image in the synthetic image set;
and the evaluation module is used for evaluating the performance of the target network model according to the similarity.
Optionally, as an embodiment, the calculation module includes:
the first calculation submodule is used for calculating JS divergence values of the second training set and the synthetic image set in each disease category dimension according to the probability distribution over disease types of each sample fundus image in the second training set and of each fundus image in the synthetic image set;
and the second calculation submodule is used for calculating the similarity of the second training set and the synthetic image set in the disease probability distribution dimension according to the JS divergence values of the second training set and the synthetic image set in each disease category dimension.
Optionally, as an embodiment, the second computing submodule includes:
the first calculating unit is used for performing a mean operation on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain a JS divergence mean, wherein the smaller the JS divergence mean, the higher the similarity of the second training set and the synthetic image set in the disease probability distribution dimension; or,
the second calculating unit is used for performing a weighted summation operation on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain a JS divergence weighted sum, wherein the smaller the JS divergence weighted sum, the higher the similarity of the second training set and the synthetic image set in the disease probability distribution dimension; or,
the third calculating unit is used for performing a square-root operation on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain JS distance values, and performing a mean operation on those JS distance values to obtain a JS distance mean, wherein the smaller the JS distance mean, the higher the similarity of the second training set and the synthetic image set in the disease probability distribution dimension; or,
a fourth calculating unit is used for performing a weighted summation operation on the JS distance values of the second training set and the synthetic image set in each disease category dimension to obtain a JS distance weighted sum, wherein the smaller the JS distance weighted sum, the higher the similarity of the second training set and the synthetic image set in the disease probability distribution dimension.
According to a seventh aspect of the present invention, there is disclosed an electronic apparatus comprising: a processor, a memory, and a program stored on the memory and executable on the processor, the program, when executed by the processor, implementing the steps of the model training method of the first aspect.
According to an eighth aspect of the present invention, there is disclosed an electronic apparatus comprising: a processor, a memory, and a program stored on the memory and executable on the processor, the program, when executed by the processor, implementing the steps of the fundus image generation method of the second aspect.
According to a ninth aspect of the present invention, there is disclosed an electronic apparatus comprising: a processor, a memory, and a program stored on the memory and executable on the processor, the program, when executed by the processor, implementing the steps of the model evaluation method of the third aspect.
According to a tenth aspect of the present invention, there is disclosed a computer-readable storage medium having a program stored thereon which, when executed by a processor, implements the steps of the model training method of the first aspect.
According to an eleventh aspect of the present invention, there is disclosed a computer-readable storage medium having a program stored thereon which, when executed by a processor, implements the steps of the fundus image generation method of the second aspect.
According to a twelfth aspect of the present invention, there is disclosed a computer-readable storage medium having a program stored thereon which, when executed by a processor, implements the steps of the model evaluation method of the third aspect.
In the embodiments of the invention, when the initial network model is constructed, a guiding-strength network hyper-parameter is added to it. During training, in addition to the RGB channel information of each sample fundus image, auxiliary information characterizing the actual blood vessel structure in the sample fundus image is added to the training data; through this training data and the guiding strength, the training of the initial network model is steered by the natural law of blood vessel distribution in fundus images, yielding the target network model. Compared with the prior art, the target network model is obtained by making the network learn the real distribution of vascular structure in fundus images, so the vascular structure in fundus images generated by the target network model accords with medical common sense, and the image quality of the generated fundus images is higher.
In the embodiments of the invention, a multi-disease classification model can be used to run inference on the training set and on the synthetic image set that the model generated from the training set, obtaining their respective probability distributions; the trained generative network is then evaluated by comparing the probability distributions of the training set and the synthetic image set. The evaluation process is simple, convenient, and efficient.
Drawings
FIG. 1 is a flow diagram of a model training method of one embodiment of the present invention;
FIG. 2 is an exemplary diagram of a model training process of one embodiment of the present invention;
fig. 3 is a flowchart of a fundus image generation method of an embodiment of the present invention;
FIG. 4 is a flow diagram of a model evaluation method of one embodiment of the invention;
FIG. 5 is an exemplary diagram of the probability distributions of disease types for sample fundus images according to an embodiment of the present invention;
FIG. 6 is a first exemplary diagram of the probability distributions of disease types for synthesized fundus images according to an embodiment of the present invention;
FIG. 7 is a second exemplary diagram of the probability distributions of disease types for synthesized fundus images according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a model training apparatus according to an embodiment of the present invention;
fig. 9 is a schematic configuration diagram of a fundus image generating apparatus according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a model evaluation apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and that not every act involved is necessarily required by the present invention.
At present, the mainstream generative networks include Variational Auto-Encoders (VAE), autoregressive models, flow-based methods, and Generative Adversarial Networks (GAN). In terms of the sharpness and realism of the generated images, GANs are superior and are currently the mainstream method. Since fundus images require a high resolution (1024×1024 or more), PGGAN and its improved version StyleGAN give the best results.
Although the details of fundus images generated by a GAN look real, the vascular structure is often disordered and inconsistent with medical common sense: some arteries or veins are disconnected, vessels appear from nowhere or end abruptly, and arteries and veins are confused. The reason is that blood vessels are fine structures with long-range correlations. A CNN is good at learning global patterns, which ensures the normality of anatomical structures and locations such as the optic disc and macular region; but it tolerates diversity in details, and a difference of a few pixels can make a vessel appear ruptured or malformed. How to generate realistic blood vessels is therefore the key to generating high-quality fundus images.
Some solutions have been proposed in the prior art, for example generating a vessel segmentation map first and then style-transferring it into a whole fundus image. This looks plausible from a purely visual standpoint but lacks medical grounding: blood vessels grow together with the rest of the fundus, not on top of a pre-existing background. In addition, such methods can only generate images at a resolution of 512×512 and cannot meet the 1024×1024 high-definition requirement. The image quality of fundus images generated by the prior art is therefore low.
In order to solve this technical problem, embodiments of the present invention provide a model training method, a fundus image generation method, a model evaluation method, and corresponding devices.
First, a model training method provided by the embodiment of the present invention is described below.
It should be noted that the methods provided by the embodiments of the present invention are applicable to an electronic device. In practical applications, the electronic device may include computer devices such as servers, notebook computers, and desktop computers, and may also include mobile terminals such as smartphones, tablet computers, and personal digital assistants; the embodiments of the present invention are not limited in this respect.
FIG. 1 is a flow chart of a model training method according to an embodiment of the present invention, which, as shown in FIG. 1, may include the steps of: step 101, step 102, step 103 and step 104, wherein,
in step 101, a first training set is acquired, wherein the first training set includes a plurality of sample fundus images.
In the embodiment of the invention, the sample fundus image can be a real fundus image shot by a fundus camera or a fundus image synthesized in the later period.
In the embodiment of the present invention, in order to reduce the variation of the training data in image size and resolution and thereby facilitate model training, the generation process of a sample fundus image may include: cutting off the black borders of the original fundus image, retaining the square region where the fundus is located, and then resizing to a specified resolution to obtain the sample fundus image.
In practical applications, when the resolution requirement of the fundus image generated by the generation network is 1024 × 1024, the above-described specified resolution is 1024 × 1024.
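For illustration only, the following is a minimal Python sketch of such preprocessing, assuming OpenCV and NumPy; the function name and the near-black threshold are illustrative assumptions and not specified by the patent.

```python
import cv2
import numpy as np

def crop_and_resize_fundus(image_path: str, size: int = 1024) -> np.ndarray:
    """Crop the black borders around the fundus and resize to size x size."""
    img = cv2.imread(image_path)                      # BGR, uint8
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    mask = gray > 10                                  # assumed threshold for "not black"
    ys, xs = np.where(mask)
    cropped = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Pad to a square so the fundus disc is not distorted by resizing.
    h, w = cropped.shape[:2]
    side = max(h, w)
    square = np.zeros((side, side, 3), dtype=cropped.dtype)
    square[(side - h) // 2:(side - h) // 2 + h,
           (side - w) // 2:(side - w) // 2 + w] = cropped
    return cv2.resize(square, (size, size), interpolation=cv2.INTER_AREA)
```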
In the embodiment of the present invention, considering that the larger the number of samples, the more accurate the generation results of the trained model, the first training set preferably includes a large number of sample fundus images.
In step 102, a blood vessel segmentation map of each sample fundus image is generated, wherein the blood vessel segmentation map is a binary map.
In the embodiment of the invention, a blood vessel segmentation model can be adopted, and the sample fundus image is input into the blood vessel segmentation model to be processed to obtain the blood vessel segmentation image of the sample fundus image.
In the embodiment of the invention, each pixel in the blood vessel segmentation map has only one channel, whose value is 0 or 1: 0 (pure black) means no vessel is present, and 1 (pure white) means a vessel is present.
The blood vessel segmentation model used in the embodiment of the present invention may be any blood vessel segmentation model in the related art; specifically, it may be a model that segments only the coarse vessel morphology or a model that segments the fine vessel morphology, which is not limited in the embodiment of the present invention.
In step 103, for each sample fundus image, the RGB three-channel pixel values of each pixel point in the sample fundus image and the pixel values of the pixel points at the corresponding positions in the blood vessel segmentation map are combined to obtain a four-channel image.
In the embodiment of the invention, when the pixel value of each pixel in the blood vessel segmentation map is 0 or 1, the pixel value of each pixel in the four-channel image is [R, G, B, m×s×U], wherein R, G, and B are the pixel values of the R, G, and B channels of that pixel in the sample fundus image, m is the pixel value of the pixel at the corresponding position in the blood vessel segmentation map, s is the guiding strength (a network hyper-parameter of the initial network model used for model training, with 0 < s ≤ 1), and U is 255.
In the embodiment of the present invention, for the pixel value [R, G, B, m×s×U] of each pixel in the four-channel image: when m = 0, m×s×U = 0, indicating that no vessel is present at that pixel; when m = 1, m×s×U = s×255, and since 0 < s ≤ 1, we have 0 < s×255 ≤ 255, indicating that a vessel is present at that pixel.
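To make the channel merge concrete, here is a minimal NumPy sketch, assuming the vessel mask is a binary H×W array; the function name and array layout are illustrative assumptions, not part of the patent.

```python
import numpy as np

def make_four_channel(rgb: np.ndarray, vessel_mask: np.ndarray,
                      s: float) -> np.ndarray:
    """Stack [R, G, B, m*s*255] into an HxWx4 image.

    rgb:         HxWx3 uint8 fundus image
    vessel_mask: HxW binary map, 1 where a vessel is present
    s:           guiding strength, 0 < s <= 1
    """
    assert 0.0 < s <= 1.0
    guide = vessel_mask.astype(np.float32) * s * 255.0   # 0 where no vessel
    return np.concatenate([rgb.astype(np.float32),
                           guide[..., None]], axis=-1)    # HxWx4
```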
In step 104, the four-channel images are input into the pre-constructed initial network model and model training is performed to obtain the target network model, wherein the network hyper-parameters of the initial network model at least comprise the guiding strength, the guiding strength is used to guide the network model, based on the blood vessel information in the four-channel images, to generate the blood vessels in new fundus images, and the target network model is used to generate new fundus images.
In the embodiment of the invention, each sample fundus image in the first training set corresponds to one four-channel image; the four-channel images are input into the initial network model in turn and model training is performed until the model converges, yielding the target network model. The target network model can then generate new fundus images from Gaussian-distributed random variables.
In view of the sharpness and realism of image generation, GAN networks are more advantageous; in the embodiment of the present invention, the initial network model may therefore be a network model constructed based on a generative adversarial network (GAN).
In the embodiment of the invention, the guiding strength can be searched progressively over 1, 1/2, 1/3, 1/4, …, 1/254, 1/255; the empirical optimum is 1/12 to 1/16. For example, when constructing the GAN-based initial network model, the guiding strength is set to 1, and through the training process of step 104 a target network model with guiding strength 1 is obtained; setting the guiding strength to 1/2 and repeating step 104 yields a target network model with guiding strength 1/2; and so on, so that multiple target network models with different guiding strengths can be trained.
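A hypothetical sketch of this search follows; `train_gan` and `js_score` stand in for the training procedure of step 104 and the JS-based evaluation described later, and are illustrative names rather than APIs from any library.

```python
# Sweep the guiding strength s over 1, 1/2, ..., 1/255 and keep the best model.
candidate_strengths = [1.0 / k for k in range(1, 256)]

results = {}
for s in candidate_strengths:
    model = train_gan(training_set, guiding_strength=s)   # step 104 (assumed helper)
    results[s] = js_score(model, training_set)            # lower is better (assumed helper)
best_s = min(results, key=results.get)                     # empirically ~1/12 to 1/16
```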
For ease of understanding, the model training process will be described by taking the training of GAN network as an example, in conjunction with the example shown in fig. 2.
As shown in fig. 2, a sample fundus image is input into the blood vessel segmentation model, which outputs a blood vessel segmentation map; the RGB three-channel pixel values of each pixel in the sample fundus image are combined with the pixel values of the pixels at the corresponding positions in the blood vessel segmentation map to obtain a four-channel image, and the four-channel image is input into the initial model constructed based on the GAN network for training.
The objective of the adversarial generative network is:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$
The adversarial generative network consists of two parts: a generator (also called the generation network) and a discriminator (also called the discrimination network). The generator is the network that produces images: it receives a random noise vector z and generates an image G(z) from it. The discriminator is the network that judges whether an image is "real": its input is an image x, and its output D(x) is the probability that x is real — 1 means the image is certainly real, and 0 means it cannot be real.
Adversarial training improves the generator through the contest between the generator and the discriminator. During training, real fundus images and fundus images produced by the generator (i.e., virtual fundus images) are alternately fed to the discriminator, which judges whether each input image is a generator-produced virtual fundus image or a real fundus image and outputs a true/false value. In each training round a loss function is calculated and the discriminator's parameters are adjusted according to its feedback, so that the discriminator identifies generator-produced images more and more accurately. The generator's goal is to produce images that deceive the discriminator; its loss function is likewise calculated in each round and its parameters are adjusted accordingly, so that as training continues it produces outputs that look more and more like real images, improving image quality. The generator obtained at the end of training is taken as the target network model.
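The following is a minimal PyTorch-style sketch of this alternating training. G and D stand for an arbitrary generator/discriminator pair (the text above suggests a PGGAN/StyleGAN-class network); their architectures, the noise dimension, and the sigmoid-output discriminator are assumptions here, not the patent's exact design.

```python
import torch
import torch.nn.functional as F

def train_step(G, D, real4ch, opt_g, opt_d, z_dim=512):
    """real4ch: a batch of four-channel training images, shape (B, 4, H, W)."""
    b = real4ch.size(0)
    device = real4ch.device
    ones = torch.ones(b, 1, device=device)
    zeros = torch.zeros(b, 1, device=device)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    z = torch.randn(b, z_dim, device=device)
    fake4ch = G(z).detach()                      # block gradients into G
    d_loss = (F.binary_cross_entropy(D(real4ch), ones) +
              F.binary_cross_entropy(D(fake4ch), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: push D(G(z)) toward 1, i.e. fool the discriminator.
    z = torch.randn(b, z_dim, device=device)
    g_loss = F.binary_cross_entropy(D(G(z)), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```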
It can be seen from the above embodiments that, in this embodiment, a guiding-strength network hyper-parameter is added to the initial network model when it is constructed; during training, the training data includes, in addition to the RGB channel information of each sample fundus image, auxiliary information characterizing the actual blood vessel structure in the sample fundus image, and through this training data and the guiding strength the training of the initial network model is steered by the natural law of blood vessel distribution in fundus images, yielding the target network model. Compared with the prior art, the target network model is obtained by making the network learn the real distribution of vascular structure in fundus images, so the vascular structure in fundus images generated by the target network model accords with medical common sense, and the image quality of the generated fundus images is higher.
After the description of the model training process is completed, how to use the trained model is described next.
Fig. 3 is a flowchart of a fundus image generation method according to an embodiment of the present invention, which may include the steps of, as shown in fig. 3: step 301, step 302, step 303, step 304 and step 305, wherein,
in step 301, an original fundus image is received.
In the embodiment of the invention, when the trained target network model is used, the input image of the target network model is an original fundus image, and the original fundus image can be from a fundus camera or can be a synthesized image.
In step 302, a blood vessel segmentation map of the original fundus image is generated.
In the embodiment of the present invention, the generating process of step 302 is similar to the generating process of step 102 in the embodiment shown in fig. 1, and is not described herein again.
In step 303, the pixel values of the RGB three channels of each pixel point in the original fundus image and the pixel values of the pixel points at the corresponding positions in the blood vessel segmentation map are combined to obtain a first image of the four channels.
In the embodiment of the invention, for fundus images with special lesions for which a blood vessel segmentation map cannot be provided, the pixel values of the pixels at the corresponding positions in the blood vessel segmentation map can be set to zero.
In step 304, the first image is input into the target network model for processing, and a second image of four channels is obtained.
In the embodiment of the invention, the second image is a newly generated image of four channels.
In step 305, a new fundus image is generated based on the RGB channel information of the pixel points in the second image.
In the embodiment of the invention, a fundus image (i.e., the new fundus image generated by the target network model) can be reconstructed from the RGB channel information of each pixel in the second image, and a blood vessel segmentation map of the new fundus image can be reconstructed from the fourth-channel pixel values of each pixel in the second image.
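A sketch of this split, under the assumption that the model's output is an H×W×4 array; the function name is illustrative.

```python
import numpy as np

def split_generated(second_image: np.ndarray):
    """second_image: HxWx4 array produced by the target network model."""
    new_fundus = np.clip(second_image[..., :3], 0, 255).astype(np.uint8)
    # A non-zero fourth channel marks a vessel (it encodes m*s*255).
    new_vessel_mask = (second_image[..., 3] > 0).astype(np.uint8)
    return new_fundus, new_vessel_mask
```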
It can be seen from the above embodiments that, because the target network model was trained with a guiding-strength hyper-parameter and with auxiliary information characterizing the actual blood vessel structure in each sample fundus image, the training of the network was steered by the natural law of blood vessel distribution in fundus images. Compared with the prior art, the vascular structure in fundus images generated by the target network model therefore accords with medical common sense, and the image quality of the generated fundus images is higher.
After completing the description of the model training process and the model using process, how to evaluate the trained model is described next.
Currently, generative networks are generally evaluated with Inception-v3, a classification model pre-trained on the natural-image ImageNet dataset. However, fundus images differ substantially from natural images, so a more appropriate method is needed to assess the realism of generated fundus images and the similarity between the generated images and the training-set distribution, and to compare the effects of different models. Because the fundus exhibits many disease types, the embodiment of the invention provides an evaluation method based on a pre-trained fundus multi-disease classification model.
FIG. 4 is a flow chart of a model evaluation method of one embodiment of the present invention, which, as shown in FIG. 4, may include the steps of: step 401, step 402, step 403 and step 404, wherein,
in step 401, a second training set including a plurality of sample fundus images and a synthetic image set including a plurality of fundus images are acquired, where the fundus images in the synthetic image set are new fundus images generated based on the second training set and the target network model.
In the embodiment of the present invention, the second training set and the first training set may be the same training set, or the second training set and the first training set may be different training sets.
In the embodiment of the present invention, the process of generating the fundus images in the synthetic image set is similar to that of the embodiment shown in fig. 3 and will not be repeated here.
In the embodiment of the present invention, the number of sample fundus images in the second training set is the same as the number of fundus images in the synthetic image set; for example, the second training set contains 10,000 fundus images and the synthetic image set also contains 10,000 fundus images.
In step 402, each sample fundus image in the second training set is input into a pre-trained multi-disease classification model for processing to obtain the probability distribution of each sample fundus image over the disease types; each fundus image in the synthetic image set is likewise input into the multi-disease classification model for processing to obtain the probability distribution of each fundus image over the disease types.
In the embodiment of the invention, for the multi-disease classification model, the input is an image x and the output is the probabilities $C(x)_1, C(x)_2, \ldots, C(x)_L$ for the L disease types.
In the embodiment of the invention, the multi-disease classification model may be a classification model using the publicly disclosed Inception-v3 network structure, pre-trained on the public natural-image dataset ImageNet.
Considering that fundus images usually present multiple disease characteristics, in the embodiment of the invention the pre-trained multi-disease classification model can be used to run inference on the second training set and on the synthetic image set generated by the model, obtaining a probability distribution for each. The more realistic the generated images, the more similar the probability distributions of the two datasets.
In the embodiment of the present invention, for the second training set, the probability distributions of the L disease types are those of

$$C(x)_1, C(x)_2, \ldots, C(x)_L, \quad x \sim p_{\mathrm{data}}(x);$$

for the synthetic image set, the probability distributions of the L disease types are those of

$$C(G(z))_1, C(G(z))_2, \ldots, C(G(z))_L, \quad z \sim p_z(z).$$
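A small sketch of collecting these per-disease probabilities, under the assumption that `classifier` maps one image to a length-L probability vector; the name and framework are illustrative.

```python
import numpy as np

def disease_probabilities(classifier, images) -> np.ndarray:
    """Return an (N, L) array: row i holds C(x_i)_1 .. C(x_i)_L."""
    return np.stack([classifier(img) for img in images])

# P_train[:, k] and P_synth[:, k] are then the two samples whose
# distributions are compared per disease type k in step 403.
```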
in step 403, the similarity of the second training set and the synthetic image set in the disease probability distribution dimension is calculated according to the probability distribution of each sample fundus image in the second training set belonging to each disease and the probability distribution of each fundus image in the synthetic image set belonging to each disease.
The common Kullback-Leibler (KL) divergence has drawbacks as a measure of similarity between probability distributions: it is asymmetric, its score is not normalized to [0, 1] for easy comparison, and it becomes singular when the two distributions do not overlap.
In view of these problems, the embodiment of the present invention can use the Jensen-Shannon (JS) divergence to obtain the similarity of the second training set and the synthetic image set in the disease probability distribution dimension, where the JS divergence is calculated as:

$$\mathrm{JS}(P \| Q) = \frac{1}{2}\,\mathrm{KL}\!\left(P \,\Big\|\, \frac{P+Q}{2}\right) + \frac{1}{2}\,\mathrm{KL}\!\left(Q \,\Big\|\, \frac{P+Q}{2}\right)$$
The KL divergence is calculated as:

$$\mathrm{KL}(P \| Q) = \sum_{x \in \chi} P(x) \log \frac{P(x)}{Q(x)}$$
it should be noted that x is a random variable in the probability space χ, and for a continuous variable, the summation operation in the formula is changed to an integral operation. If the base of the logarithm log in the KL divergence calculation is 2, the JS divergence value range is [0,1 ]. The smaller the JS divergence is, the more similar the two probability distributions are; at 0, the two probability distributions are completely similar.
Accordingly, in an embodiment provided by the present invention, step 403 may specifically include the following steps (not shown in the figure): step 4031 and step 4032, wherein,
in step 4031, the JS divergence values of the second training set and the synthetic image set in each disease category dimension are calculated according to the probability distribution over disease types of each sample fundus image in the second training set and of each fundus image in the synthetic image set.
For ease of understanding, take computing the JS divergence from histograms as an example. In one example, the second training set and the synthetic image set each contain 10,000 fundus images, and the total number of disease types L is 6, denoted did1, did110, did118, did161, did163, and did166. After computing the probability that each of the 10,000 fundus images in the second training set belongs to each of the 6 disease types, and likewise for the 10,000 fundus images in the synthetic image set,
as shown in fig. 5, a probability distribution histogram "Did 1" of 1 ten thousand fundus images in the second training set under the disease category did1 is generated, a probability distribution histogram "Did 110" of 1 ten thousand fundus images in the second training set under the disease category did110 is generated, a probability distribution histogram "Did 118" of 1 ten thousand fundus images in the second training set under the disease category did118 is generated, a probability distribution histogram "Did 161" of 1 ten thousand fundus images in the second training set under the disease category did161 is generated, a probability distribution histogram "Did 163" of 1 ten thousand fundus images in the second training set under the disease category did163 is generated, and a probability distribution histogram "Did 166" of 1 ten thousand fundus images in the second training set under the disease category did166 is generated.
As shown in fig. 6, the corresponding probability distribution histograms "Did1" through "Did166" are generated for the 10,000 fundus images in the synthetic image set under the same six disease types.
According to the probability distribution histogram "Did 1" in FIG. 5 and the probability distribution histogram "Did 1" in FIG. 6, the JS divergence value A1 under the disease species did1 of the second training set and the synthetic image set can be calculated, wherein A1 is more than or equal to 0 and less than or equal to 1;
according to the probability distribution histogram "Did 110" in FIG. 5 and the probability distribution histogram "Did 110" in FIG. 6, the JS divergence value A2 of the second training set and the synthetic image set under the disease category did110 can be calculated, wherein A2 is greater than or equal to 0 and less than or equal to 1;
according to the probability distribution histogram "Did 118" in FIG. 5 and the probability distribution histogram "Did 118" in FIG. 6, the JS divergence value A3 of the second training set and the synthetic image set under the disease category did118 can be calculated, 0 ≦ A3 ≦ 1;
according to the probability distribution histogram "Did 161" in FIG. 5 and the probability distribution histogram "Did 161" in FIG. 6, the JS divergence value A4 of the second training set and the synthetic image set under the disease category did161 can be calculated, wherein A4 is greater than or equal to 0 and less than or equal to 1;
according to the probability distribution histogram "Did 163" in FIG. 5 and the probability distribution histogram "Did 163" in FIG. 6, the JS divergence value A5 of the second training set and the synthetic image set under the disease category did163 can be calculated, 0 ≦ A5 ≦ 1;
based on the probability distribution histogram "Did 166" in FIG. 5 and the probability distribution histogram "Did 166" in FIG. 6, the JS divergence value A6 of the second training set and the synthetic image set under the disease category did166 can be calculated, 0 ≦ A6 ≦ 1.
It should be noted that computing the JS divergence from histograms is only an example for ease of understanding; for accuracy, in practical applications the probability range can be divided into smaller intervals and the JS divergence computed by integration.
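Following the histogram example, the per-disease values A1..A6 could be computed as below, reusing the `js_divergence` helper from the earlier sketch; the bin count is an assumption, and finer bins approach the integral form mentioned above.

```python
import numpy as np

def per_disease_js(P_train: np.ndarray, P_synth: np.ndarray,
                   bins: int = 50) -> np.ndarray:
    """P_train, P_synth: (N, L) arrays of per-image disease probabilities."""
    L = P_train.shape[1]
    A = np.zeros(L)
    edges = np.linspace(0.0, 1.0, bins + 1)
    for k in range(L):
        h1, _ = np.histogram(P_train[:, k], bins=edges)
        h2, _ = np.histogram(P_synth[:, k], bins=edges)
        A[k] = js_divergence(h1.astype(float), h2.astype(float))
    return A   # A[0]..A[L-1] correspond to A1..A6 in the example
```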
In step 4032, according to the JS divergence values of the second training set and the synthetic image set in each disease category dimension, the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension is calculated.
In the embodiment of the invention, the average value of the JS divergence values of the second training set and the synthetic image set in each disease category dimension can be calculated to obtain the JS divergence average value, wherein the smaller the numerical value of the JS divergence average value is, the higher the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension is.
In one example, following the example in step 4031, the JS divergence mean S1 = (A1 + A2 + A3 + A4 + A5 + A6)/6, with 0 ≤ S1 ≤ 1; the smaller S1, the higher the similarity.
In the embodiment of the invention, according to the severity of the disease category, weighted summation operation can be performed on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain the JS divergence weighted summation value, wherein the smaller the JS divergence weighted summation value is, the higher the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension is.
In one example, following the example in step 4031, the JS divergence weighted sum S2 = a1·A1 + a2·A2 + a3·A3 + a4·A4 + a5·A5 + a6·A6, with 0 ≤ S2 ≤ 1; the smaller S2, the higher the similarity, where a1 to a6 are weight coefficients.
The square root of the JS divergence, i.e., the JS distance, has the good properties of a distance metric. In the embodiment of the present invention, a square-root operation can therefore be performed on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain the corresponding JS distance values, and a mean operation on these JS distance values yields the JS distance mean; the smaller the JS distance mean, the higher the similarity of the second training set and the synthetic image set in the disease probability distribution dimension.
In one example, following the example in step 4031, the JS distance mean is

$$S3 = \frac{\sqrt{A_1} + \sqrt{A_2} + \sqrt{A_3} + \sqrt{A_4} + \sqrt{A_5} + \sqrt{A_6}}{6},$$

with 0 ≤ S3 ≤ 1; the smaller S3, the higher the similarity.
In the embodiment of the invention, a weighted summation may likewise be performed, according to the severity of each disease category, on the JS distance values of the second training set and the synthetic image set in each disease category dimension to obtain a JS distance weighted sum; the smaller the JS distance weighted sum, the higher the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension.
In one example, following the example in step 4031, the JS distance weighted sum S4 = b1·√A1 + b2·√A2 + b3·√A3 + b4·√A4 + b5·√A5 + b6·√A6, where b1 to b6 are weight coefficients and 0 ≤ S4 ≤ 1; the smaller S4, the higher the similarity.
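The four scores can be computed in a few lines. The sketch below is a minimal illustration assuming placeholder JS divergence values A1 to A6 and made-up severity weights normalized to sum to 1, which keeps S2 and S4 in [0, 1].

import numpy as np

A = np.array([0.12, 0.05, 0.30, 0.08, 0.22, 0.15])  # placeholder values for A1..A6
w = np.array([3.0, 1.0, 2.0, 1.0, 2.0, 1.0])        # placeholder severity weights
w = w / w.sum()                                      # normalize so the sums stay in [0, 1]

S1 = float(A.mean())        # JS divergence mean
S2 = float(w @ A)           # JS divergence weighted sum: a1*A1 + ... + a6*A6
D = np.sqrt(A)              # JS distances, i.e. square roots of the divergences
S3 = float(D.mean())        # JS distance mean
S4 = float(w @ D)           # JS distance weighted sum: b1*sqrt(A1) + ... + b6*sqrt(A6)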
In step 404, the performance of the target network model is evaluated based on the similarity.
In the embodiment of the invention, the higher the similarity, the better the performance of the target network model.
In an embodiment of the present invention, the methods in steps 401 to 403 may be used to evaluate the performance of target network models constructed with different guiding strengths. Each target network model obtains a JS-based score; the lower the score, the higher the similarity between the second training set and the synthetic image set in the disease probability distribution dimension, and the better the target network model is at generating fundus images.
In one example, FIG. 5 is the disease probability distribution histogram of the second training set, FIG. 6 is the disease probability distribution histogram of the synthetic image set when the guiding strength is 1/4, and FIG. 7 is the disease probability distribution histogram of the synthetic image set when the guiding strength is 1/12. Comparing the probability distribution histogram "did1" in FIG. 5 with that in FIG. 6 shows a large difference between the two under the disease category did1; as shown in FIG. 7, when the guiding strength is 1/12, the difference between the synthetic image set and the second training set under the disease category did1 shrinks.
As can be seen from the above embodiment, the training set and the synthetic image set generated by the model from that training set can each be run through the multi-disease classification model to obtain their respective probability distributions, and the trained generative network is evaluated by comparing the two probability distributions, which makes the evaluation process simple, convenient and efficient.
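Putting steps 401 to 403 together, a compact evaluation routine might look as follows; the classifier callable returning an (n_images, n_diseases) probability matrix is an assumed interface, not an API defined in the patent, and js_divergence is the helper sketched earlier.

import numpy as np

def evaluate_generator(classifier, train_images, synth_images, n_bins=50):
    # Lower score => closer disease probability distributions => better generator.
    p_train = classifier(train_images)          # (n_train, n_diseases) probabilities
    p_synth = classifier(synth_images)          # (n_synth, n_diseases) probabilities
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    per_disease = []
    for d in range(p_train.shape[1]):           # one JS divergence per disease category
        h_t, _ = np.histogram(p_train[:, d], bins=bins)
        h_s, _ = np.histogram(p_synth[:, d], bins=bins)
        per_disease.append(js_divergence(h_t, h_s))
    return float(np.mean(per_disease))          # the JS divergence mean S1

# Hypothetical selection among models trained with different guiding strengths:
# scores = {s: evaluate_generator(clf, train_set, model_for[s].generate(train_set))
#           for s in (1/4, 1/12)}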
Fig. 8 is a schematic structural diagram of a model training apparatus according to an embodiment of the present invention, and as shown in fig. 8, the model training apparatus 800 may include: a first acquisition module 801, a first generation module 802, a first merging module 803, and a training module 804, wherein,
a first acquisition module 801, configured to acquire a first training set, where the first training set includes a plurality of sample fundus images;
a first generating module 802, configured to generate a blood vessel segmentation map of each sample fundus image, where the blood vessel segmentation map is a binary map;
a first merging module 803, configured to, for each sample fundus image, merge the pixel values of the RGB three channels of each pixel point in the sample fundus image with the pixel values of the pixel points at corresponding positions in its blood vessel segmentation map to obtain a four-channel image;
a training module 804, configured to input the four-channel images into a pre-constructed initial network model and perform model training to obtain a target network model, where the network hyper-parameters of the initial network model at least include a guiding strength, the guiding strength is used to guide the network model to generate the blood vessels in a new fundus image based on the blood vessel information in the four-channel images, and the target network model is used to generate new fundus images.
As can be seen from the above embodiments, when the initial network model is constructed, a guiding-strength network hyper-parameter is added to it. When the model is trained, the training data contain not only the RGB channel information of the sample fundus images but also auxiliary information characterizing the actual blood vessel structure in each sample fundus image; during training, the training data and the guiding strength steer the initial network model toward the natural law of blood vessel distribution in fundus images, yielding the target network model. Compared with the prior art, in the embodiment of the invention the network model is trained to learn the real distribution of blood vessel structures in fundus images, so the blood vessel structure in a fundus image generated by the target network model accords with medical common sense and the image quality of the generated fundus image is higher.
Alternatively, as an embodiment, the initial network model may be a network model constructed based on a generative adversarial network (GAN).
Optionally, as an embodiment, a pixel value of each pixel point in the blood vessel segmentation map is 0 or 1;
the pixel value of each pixel point in the four-channel image may be [R, G, B, m × s × U], where R, G and B are the pixel values of the R, G and B channels of the pixel point in the sample fundus image, m is the pixel value of the pixel point at the corresponding position in the blood vessel segmentation map, s is the guiding strength, and U = 255.
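A minimal sketch of this merging step, assuming an (H, W, 3) uint8 fundus image and an (H, W) binary vessel segmentation map; the function and variable names are illustrative only.

import numpy as np

def merge_four_channels(fundus_rgb, vessel_mask, s, U=255):
    # Stack [R, G, B, m * s * U]: the fourth channel carries the guided vessel map.
    guide = vessel_mask.astype(np.float32) * s * U   # m in {0, 1}, s = guiding strength
    return np.concatenate(
        [fundus_rgb.astype(np.float32), guide[..., None]], axis=-1
    )                                                # shape (H, W, 4)

# With guiding strength s = 1/4, vessel pixels get 255 * 0.25 = 63.75 in channel 4.
four_channel = merge_four_channels(
    np.zeros((512, 512, 3), dtype=np.uint8),         # placeholder fundus image
    np.ones((512, 512), dtype=np.uint8),             # placeholder vessel segmentation map
    s=0.25,
)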
Fig. 9 is a schematic configuration diagram of a fundus image generating apparatus according to an embodiment of the present invention, and as shown in fig. 9, a fundus image generating apparatus 900 may include: a receiving module 901, a second generating module 902, a second combining module 903, a first processing module 904, and a third generating module 905, wherein,
a receiving module 901, configured to receive an original fundus image;
a second generating module 902, configured to generate a blood vessel segmentation map of the original fundus image;
a second merging module 903, configured to merge pixel values of RGB three channels of each pixel point in the original fundus image with pixel values of pixel points at corresponding positions in the blood vessel segmentation map of the original fundus image, so as to obtain a first image of four channels;
a first processing module 904, configured to input the first image into a target network model for processing, so as to obtain a second image of four channels;
a third generating module 905, configured to generate a new fundus image based on the RGB channel information of the pixel points in the second image.
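Chained together, modules 901 to 905 amount to the following sketch, which reuses merge_four_channels from the earlier example; segment_vessels and target_model are assumed callables rather than interfaces defined in the patent.

def generate_fundus_image(original_rgb, segment_vessels, target_model, s):
    # Segment vessels, merge to four channels, run the model, keep the RGB output.
    vessel_mask = segment_vessels(original_rgb)           # (H, W) binary vessel map
    first_image = merge_four_channels(original_rgb, vessel_mask, s)
    second_image = target_model(first_image)              # four-channel second image
    return second_image[..., :3]                          # new fundus image (RGB only)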
As can be seen from the above embodiments, when the initial network model is constructed, a guiding-strength network hyper-parameter is added to it. When the model is trained, the training data contain not only the RGB channel information of the sample fundus images but also auxiliary information characterizing the actual blood vessel structure in each sample fundus image; during training, the training data and the guiding strength steer the initial network model toward the natural law of blood vessel distribution in fundus images, yielding the target network model. Compared with the prior art, in the embodiment of the invention the network model is trained to learn the real distribution of blood vessel structures in fundus images, so the blood vessel structure in a fundus image generated by the target network model accords with medical common sense and the image quality of the generated fundus image is higher.
Fig. 10 is a schematic structural diagram of a model evaluation apparatus according to an embodiment of the present invention, and as shown in fig. 10, the model evaluation apparatus 1000 may include: a second obtaining module 1001, a second processing module 1002, a calculating module 1003 and an evaluating module 1004, wherein,
a second obtaining module 1001, configured to acquire a second training set and a synthetic image set, where the second training set includes a plurality of sample fundus images, the synthetic image set includes a plurality of fundus images, and a fundus image in the synthetic image set is a new fundus image generated based on the second training set and a target network model;
the second processing module 1002 is configured to input each sample fundus image in the second training set into a pre-trained multi-disease classification model for processing, to obtain the probability distribution of each sample fundus image belonging to each disease category, and to input each fundus image in the synthetic image set into the multi-disease classification model for processing, to obtain the probability distribution of each fundus image belonging to each disease category;
a calculating module 1003, configured to calculate the similarity between the second training set and the synthetic image set in the disease probability distribution dimension according to the probability distributions that the fundus images in the second training set belong to the respective disease categories and the probability distributions that the fundus images in the synthetic image set belong to the respective disease categories;
an evaluation module 1004, configured to evaluate performance of the target network model according to the similarity.
As can be seen from the above embodiment, the training set and the synthetic image set generated by the model from that training set can each be run through the multi-disease classification model to obtain their respective probability distributions, and the trained generative network is evaluated by comparing the two probability distributions, which makes the evaluation process simple, convenient and efficient.
Optionally, as an embodiment, the calculating module 1003 may include:
a first calculation submodule, configured to calculate the JS divergence values of the second training set and the synthetic image set in each disease category dimension according to the probability distribution that each sample fundus image in the second training set belongs to each disease category and the probability distribution that each fundus image in the synthetic image set belongs to each disease category;
a second calculation submodule, configured to calculate the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension according to their JS divergence values in each disease category dimension.
Optionally, as an embodiment, the second calculation submodule may include:
a first calculating unit, configured to perform a mean operation on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain a JS divergence mean value, where the smaller the JS divergence mean value, the higher the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension; or,
a second calculating unit, configured to perform a weighted summation operation on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain a JS divergence weighted sum, where the smaller the JS divergence weighted sum, the higher the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension; or,
a third calculating unit, configured to perform a square root operation on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain the JS distance values of the two sets in each disease category dimension, and to perform a mean operation on these JS distance values to obtain a JS distance mean value, where the smaller the JS distance mean value, the higher the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension; or,
a fourth calculating unit, configured to perform a weighted summation operation on the JS distance values of the second training set and the synthetic image set in each disease category dimension to obtain a JS distance weighted sum, where the smaller the JS distance weighted sum, the higher the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension.
According to still another embodiment of the present invention, there is also provided an electronic apparatus including a processor, a memory, and a program stored on the memory and executable on the processor, where the program, when executed by the processor, implements the steps in the model training method according to any of the embodiments described above.
According to still another embodiment of the present invention, there is also provided an electronic apparatus including a processor, a memory, and a program stored on the memory and executable on the processor, where the program, when executed by the processor, implements the steps in the fundus image generation method according to any of the embodiments described above.
According to still another embodiment of the present invention, there is also provided an electronic apparatus including a processor, a memory, and a program stored on the memory and executable on the processor, where the program, when executed by the processor, implements the steps in the model evaluation method according to any of the embodiments described above.
According to still another embodiment of the present invention, there is also provided a computer-readable storage medium having a program stored thereon, the program, when executed by a processor, implementing the steps in the model training method according to any one of the above embodiments.
According to still another embodiment of the present invention, there is also provided a computer-readable storage medium having stored thereon a program which, when executed by a processor, realizes the steps in the fundus image generation method according to any one of the embodiments described above.
According to still another embodiment of the present invention, there is also provided a computer-readable storage medium having a program stored thereon, the program implementing the steps of the model evaluation method according to any one of the above-described embodiments when executed by a processor.
The embodiments of the present invention are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The model training method, the fundus image generation method, the model evaluation method and the apparatus provided by the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the embodiments is only intended to help in understanding the method and its core idea. Meanwhile, for a person skilled in the art, variations may be made in the specific implementation and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (14)

1. A method of model training, the method comprising:
acquiring a first training set, wherein the first training set comprises a plurality of sample fundus images;
generating a blood vessel segmentation map of each sample fundus image, wherein the blood vessel segmentation map is a binary map;
for each sample fundus image, combining the pixel values of the RGB three channels of each pixel point in the sample fundus image with the pixel values of the pixel points at the corresponding positions in the blood vessel segmentation map to obtain a four-channel image;
inputting the four-channel images into a pre-constructed initial network model and performing model training to obtain a target network model, wherein the network hyper-parameters of the initial network model at least comprise a guiding strength, the guiding strength is used for guiding the network model to generate the blood vessels in a new fundus image based on blood vessel information in the four-channel images, and the target network model is used for generating a new fundus image.
2. The method of claim 1, wherein the initial network model is a network model constructed based on a generative adversarial network (GAN).
3. The method according to claim 1 or 2, wherein the pixel value of each pixel point in the vessel segmentation map is 0 or 1;
the pixel value of each pixel point in the four-channel image is [R, G, B, m × s × U], wherein R is the pixel value of the R channel of the pixel point in the sample fundus image, G is the pixel value of the G channel, B is the pixel value of the B channel, m is the pixel value of the pixel point at the corresponding position in the blood vessel segmentation map, s is the guiding strength, and U = 255.
4. A fundus image generating method for generating a new fundus image based on the target network model according to any one of claims 1 to 3, characterized by comprising:
receiving an original fundus image;
generating a vessel segmentation map of the original fundus image;
combining the RGB three-channel pixel values of each pixel point in the original fundus image with the pixel values of the pixel points at the corresponding positions in the blood vessel segmentation map to obtain a four-channel first image;
inputting the first image into a target network model for processing to obtain a second image of four channels;
and generating a new fundus image based on the RGB channel information of the pixel points in the second image.
5. A model evaluation method for evaluating the performance of the target network model of any one of claims 1 to 3, the method comprising:
acquiring a second training set and a synthetic image set, wherein the second training set comprises a plurality of sample fundus images, the synthetic image set comprises a plurality of fundus images, and the fundus images in the synthetic image set are new fundus images generated based on the second training set and the target network model;
inputting each sample fundus image in the second training set into a pre-trained multi-disease classification model for processing to obtain the probability distribution of each sample fundus image belonging to each disease category; inputting each fundus image in the synthetic image set into the multi-disease classification model for processing to obtain the probability distribution of each fundus image belonging to each disease category;
calculating the similarity of the second training set and the synthetic image set in the disease probability distribution dimension according to the probability distribution of each sample fundus image in the second training set belonging to each disease category and the probability distribution of each fundus image in the synthetic image set belonging to each disease category;
and evaluating the performance of the target network model according to the similarity.
6. The method of claim 5, wherein calculating the similarity of the second training set and the synthetic image set in the disease probability distribution dimension based on the probability distribution of each sample fundus image in the second training set belonging to each disease category and the probability distribution of each fundus image in the synthetic image set belonging to each disease category comprises:
calculating the JS divergence values of the second training set and the synthetic image set in each disease category dimension according to the probability distribution that each sample fundus image in the second training set belongs to each disease category and the probability distribution that each fundus image in the synthetic image set belongs to each disease category;
and calculating the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension according to the JS divergence values of the second training set and the synthetic image set in each disease category dimension.
7. The method of claim 6, wherein calculating the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension according to the JS divergence values of the second training set and the synthetic image set in each disease category dimension comprises:
performing a mean operation on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain a JS divergence mean value, wherein the smaller the JS divergence mean value, the higher the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension; or,
performing a weighted summation operation on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain a JS divergence weighted sum, wherein the smaller the JS divergence weighted sum, the higher the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension; or,
performing a square root operation on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain the JS distance values of the two sets in each disease category dimension, and performing a mean operation on these JS distance values to obtain a JS distance mean value, wherein the smaller the JS distance mean value, the higher the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension; or,
performing a weighted summation operation on the JS distance values of the second training set and the synthetic image set in each disease category dimension to obtain a JS distance weighted sum, wherein the smaller the JS distance weighted sum, the higher the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension.
8. A model training apparatus, the apparatus comprising:
a first acquisition module, used for acquiring a first training set, wherein the first training set comprises a plurality of sample fundus images;
the first generation module is used for generating a blood vessel segmentation map of each sample fundus image, wherein the blood vessel segmentation map is a binary map;
the first merging module is used for merging, for each sample fundus image, the pixel values of the RGB three channels of each pixel point in the sample fundus image with the pixel values of the pixel points at the corresponding positions in its blood vessel segmentation map to obtain a four-channel image;
the training module is used for inputting the four-channel images into a pre-constructed initial network model and performing model training to obtain a target network model, wherein the network hyper-parameters of the initial network model at least comprise a guiding strength, the guiding strength is used for guiding the network model to generate the blood vessels in a new fundus image based on blood vessel information in the four-channel images, and the target network model is used for generating a new fundus image.
9. The apparatus of claim 8, wherein the initial network model is a network model constructed based on a generative adversarial network (GAN).
10. The apparatus according to claim 8 or 9, wherein the pixel value of each pixel point in the vessel segmentation map is 0 or 1;
the pixel value of each pixel point in the four-channel image is [R, G, B, m × s × U], wherein R is the pixel value of the R channel of the pixel point in the sample fundus image, G is the pixel value of the G channel, B is the pixel value of the B channel, m is the pixel value of the pixel point at the corresponding position in the blood vessel segmentation map, s is the guiding strength, and U = 255.
11. A fundus image generating apparatus for generating a new fundus image based on the target network model according to any one of claims 8 to 10, the apparatus comprising:
a receiving module for receiving an original fundus image;
a second generation module for generating a blood vessel segmentation map of the original fundus image;
the second merging module is used for merging the pixel values of the RGB three channels of each pixel point in the original fundus image with the pixel values of the pixel points at the corresponding positions in the blood vessel segmentation map to obtain a four-channel first image;
the first processing module is used for inputting the first image into a target network model for processing to obtain a second image of four channels;
and the third generation module is used for generating a new fundus image based on the RGB channel information of the pixel points in the second image.
12. A model evaluation apparatus for evaluating the performance of the target network model of any one of claims 8 to 10, the apparatus comprising:
a second obtaining module, configured to acquire a second training set and a synthetic image set, wherein the second training set comprises a plurality of sample fundus images, the synthetic image set comprises a plurality of fundus images, and the fundus images in the synthetic image set are new fundus images generated based on the second training set and a target network model;
the second processing module is used for inputting each sample fundus image in the second training set into a pre-trained multi-disease classification model for processing to obtain the probability distribution of each sample fundus image belonging to each disease category, and for inputting each fundus image in the synthetic image set into the multi-disease classification model for processing to obtain the probability distribution of each fundus image belonging to each disease category;
the calculation module is used for calculating the similarity of the second training set and the synthetic image set in the disease probability distribution dimension according to the probability distribution of each sample fundus image in the second training set belonging to each disease category and the probability distribution of each fundus image in the synthetic image set belonging to each disease category;
and the evaluation module is used for evaluating the performance of the target network model according to the similarity.
13. The apparatus of claim 12, wherein the calculation module comprises:
a first calculation submodule, configured to calculate the JS divergence values of the second training set and the synthetic image set in each disease category dimension according to the probability distribution that each sample fundus image in the second training set belongs to each disease category and the probability distribution that each fundus image in the synthetic image set belongs to each disease category;
a second calculation submodule, configured to calculate the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension according to their JS divergence values in each disease category dimension.
14. The apparatus of claim 13, wherein the second calculation submodule comprises:
a first calculating unit, configured to perform a mean operation on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain a JS divergence mean value, wherein the smaller the JS divergence mean value, the higher the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension; or,
a second calculating unit, configured to perform a weighted summation operation on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain a JS divergence weighted sum, wherein the smaller the JS divergence weighted sum, the higher the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension; or,
a third calculating unit, configured to perform a square root operation on the JS divergence values of the second training set and the synthetic image set in each disease category dimension to obtain the JS distance values of the two sets in each disease category dimension, and to perform a mean operation on these JS distance values to obtain a JS distance mean value, wherein the smaller the JS distance mean value, the higher the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension; or,
a fourth calculating unit, configured to perform a weighted summation operation on the JS distance values of the second training set and the synthetic image set in each disease category dimension to obtain a JS distance weighted sum, wherein the smaller the JS distance weighted sum, the higher the similarity of the second training set and the synthetic image set in the disease category probability distribution dimension.