CN112992304B - High-resolution red eye case data generation method, device and storage medium - Google Patents
- Publication number
- CN112992304B CN112992304B CN202010854745.XA CN202010854745A CN112992304B CN 112992304 B CN112992304 B CN 112992304B CN 202010854745 A CN202010854745 A CN 202010854745A CN 112992304 B CN112992304 B CN 112992304B
- Authority
- CN
- China
- Prior art keywords
- image data
- generator
- pinkeye
- discriminator
- resolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/60—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Epidemiology (AREA)
- Medical Informatics (AREA)
- Primary Health Care (AREA)
- Public Health (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a method, a device, and a storage medium for generating high-resolution red eye case data. A CycleGAN network and an ESRGAN network are fused to construct a high-resolution red eye case data generation model, and the model is trained to obtain a trained high-resolution red eye case data generation model. Eye image data for which red eye case data are to be generated are input into the trained model to obtain the red eye case data. By fusing a CycleGAN network and an ESRGAN network into a new pinkeye case data generation model, the invention solves the problem of the lack of high-resolution pinkeye data.
Description
Technical Field
The invention relates to the field of deep learning, in particular to a method for generating high-resolution red eye case data.
Background
At present, pinkeye is diagnosed by a doctor face to face, and no existing technology can reliably detect whether a person has pinkeye. Deep learning learns the intrinsic regularities and representation hierarchies of sample data, which greatly helps the interpretation of image data, so deep learning can be used to solve this problem.
However, diagnosing pinkeye with deep learning requires a large amount of high-resolution pinkeye case data to train the model; if the case data are too few or their resolution is too low, feature extraction from the case data is impaired, and the diagnostic performance of the model suffers accordingly.
At present, because red eye case data, especially high-resolution red eye case data, are so scarce, it is difficult to collect enough qualifying data for analysis and processing, which greatly hinders the development of red eye detection technology. Existing methods can only generate low-resolution red eye case data and cannot meet the requirements deep learning places on such data.
Disclosure of Invention
Aiming at the defects of the prior art, and in order to promote the use of deep learning for red eye diagnosis, the invention provides a method, a device, and a storage medium for generating high-resolution red eye case data.
In order to achieve the technical purpose, the invention adopts the following specific technical scheme:
the method for generating the high-resolution red eye case data comprises the following steps:
Acquire a human eye image data set, wherein the eye image data in the human eye image data set comprise a large amount of normal eye image data and pinkeye image data; all normal eye image data in the human eye image data set form normal eye image data set A, and all pinkeye image data form pinkeye image data set B.
A CycleGAN network and an ESRGAN network are fused to construct a high-resolution red eye case data generation model, and the model is trained to obtain a trained high-resolution red eye case data generation model.
The eye image data for which red eye case data are to be generated are input into the trained high-resolution red eye case data generation model to obtain the red eye case data.
According to the invention, a frontal face image data set is formed by collecting a large amount of frontal face image data. The eye regions in the frontal face images are manually labeled, and the labeled images are used to train a YOLOv4 model, yielding a trained human eye image extraction model.
The trained human eye image extraction model is used to extract eye image data, and all extracted eye image data form the pinkeye case data set to be generated. All eye image data in this set are classified as pinkeye or normal eyes, and all normal eye image data form normal eye image data set A. The pinkeye image data obtained this way are relatively few and do not meet the CycleGAN network's requirement for training data, so the collected pinkeye image data are subjected to data enhancement such as flipping, contrast change, and brightness change until enough pinkeye image data are obtained as pinkeye image data set B.
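As an illustration of the enhancement step, the flipping, brightness, and contrast transforms can be sketched in a few lines of NumPy; the factor values and the helper name are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def augment(img: np.ndarray, brightness: float = 1.2, contrast: float = 1.3):
    """Return augmented variants of an eye image (H x W x C, floats in [0, 1])."""
    flipped = img[:, ::-1, :]                                   # horizontal flip
    bright = np.clip(img * brightness, 0.0, 1.0)                # brightness change
    mean = img.mean()
    contr = np.clip((img - mean) * contrast + mean, 0.0, 1.0)   # contrast change
    return flipped, bright, contr

# Each source image yields three extra variants for pinkeye image data set B.
img = np.random.default_rng(0).random((4, 4, 3))
variants = augment(img)
assert all(v.shape == img.shape for v in variants)
```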
In the invention, a high-resolution red eye case data generation model is constructed, which comprises the following steps:
(1) The pinkeye data generator G_A2B is constructed based on the CycleGAN network; generator G_A2B is the generating function in the CycleGAN network that converts normal eye image data in the normal eye image data set into pinkeye image data. At the same time, a discriminator D_B is introduced for generator G_A2B; its role is to discriminate whether the image data generated by G_A2B are original image data from pinkeye image data set B.
The network structure of generator G_A2B is as follows: a convolution layer first extracts features from the normal eye image data in set A; the data then pass through several G_A2B basic modules, undergo up-sampling, and finally pass through one more convolution layer to generate image data similar to the pinkeye image data in set B. Inside a G_A2B basic module, the data pass through a convolution layer, enter a normalization layer for data normalization, are made nonlinear with a SIREN activation function, and then pass through a convolution layer and a normalization layer once more.
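The generator structure just described might be sketched in PyTorch as follows; the channel widths, kernel sizes, block count, and up-sampling factor are not specified in the patent and are assumed here for illustration:

```python
import torch
import torch.nn as nn

class Sine(nn.Module):
    """SIREN-style sine activation named in the description."""
    def forward(self, x):
        return torch.sin(x)

class A2BBasicBlock(nn.Module):
    """conv -> normalization -> sine activation -> conv -> normalization."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
            Sine(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return self.body(x)

class GeneratorA2B(nn.Module):
    """Head conv -> several basic modules -> up-sampling -> tail conv."""
    def __init__(self, ch=64, n_blocks=6):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 7, padding=3)
        self.blocks = nn.Sequential(*[A2BBasicBlock(ch) for _ in range(n_blocks)])
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.tail = nn.Conv2d(ch, 3, 7, padding=3)
    def forward(self, x):
        return self.tail(self.up(self.blocks(self.head(x))))
```

Generator G_B2A, described below, would share this structure with the data sets swapped.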
The network structure of discriminator D_B is as follows: a convolution layer first extracts features from the image data generated by G_A2B; a Leaky ReLU activation function then makes the data nonlinear; the data are sent through several D_B basic modules for convolution, nonlinearity, and normalization, followed by another round of convolution and nonlinearity; finally a sigmoid regression function produces the result, which indicates whether the image data generated by G_A2B belong to the original image data in pinkeye image data set B.
(2) The normal eye data generator G_B2A is constructed based on the CycleGAN network; generator G_B2A is the generating function in the CycleGAN network that converts pinkeye image data in the pinkeye image data set into normal eye image data. At the same time, a discriminator D_A is introduced for generator G_B2A; it is used to discriminate whether the image data generated by G_B2A are original image data from normal eye image data set A.
The network structure of generator G_B2A is as follows: a convolution layer first extracts features from the pinkeye image data in set B; the data then pass through several G_B2A basic modules, undergo up-sampling, and finally pass through one more convolution layer to generate image data similar to the normal eye image data in set A. Inside a G_B2A basic module, the data pass through a convolution layer, enter a normalization layer for data normalization, are made nonlinear with a SIREN activation function, and then pass through a convolution layer and a normalization layer once more.
The network structure of discriminator D_A is as follows: a convolution layer first extracts features from the image data generated by G_B2A; a Leaky ReLU activation function makes the data nonlinear; the data are sent through several D_A basic modules for convolution, nonlinearity, and normalization, followed by another round of convolution and nonlinearity; finally a sigmoid regression function produces the result, which indicates whether the image data generated by G_B2A belong to the original image data in normal eye image data set A.
(3) The high-resolution image data generator G_E is constructed based on the ESRGAN network; G_E is the generator in the ESRGAN network that raises the resolution of image data. At the same time, a discriminator D_E is introduced for G_E; its role is to discriminate whether the image data generated by G_E are high-resolution image data.
The network structure of the high-resolution image data generator G_E is as follows: a convolution layer first extracts features from the input image data; several residual-in-residual dense blocks (RRDB) then process the data, followed by up-sampling; finally the processed image data are obtained after one more convolution layer. Data entering an RRDB are output after multiple convolution layers and nonlinear processing: a convolution layer is followed by SIREN-activation nonlinear processing, and this pair is repeated three times to complete one RRDB pass.
The network structure of discriminator D_E is as follows: a convolution layer first extracts features from the image data generated by G_E; a Leaky ReLU activation function makes the data nonlinear; the data are then sent through basic modules for convolution, nonlinearity, and normalization. After several basic modules, the data are sent to a dense connection module (Dense) for feature processing, made nonlinear, and finally processed with sigmoid; the result indicates whether the image data generated by G_E belong to high-resolution image data.
Preferably, the high-resolution red eye case data generation model is trained as follows:
While keeping generator G_A2B and generator G_B2A unchanged, the original normal eye image data in normal eye image data set A and the normal eye image data generated by generator G_B2A are input into discriminator D_A for training. Meanwhile, the pinkeye image data in pinkeye image data set B and the pinkeye image data generated by G_A2B are input into discriminator D_B for training.
The loss function of discriminator D_A is:

$$L_{D_A} = \mathbb{E}_{a \sim p_A(a)}[\log D_A(a)] + \mathbb{E}_{b \sim p_B(b)}[\log(1 - D_A(G_{B2A}(b)))]$$

wherein $G_{B2A}$ denotes generator G_B2A, $D_A$ denotes discriminator D_A, and $G_{B2A}(b)$ denotes the image data generated by generator G_B2A; $p_B(b)$ is the data distribution over data set B and $p_A(a)$ is the data distribution over data set A; $D_A(a)$ denotes the result of discriminator D_A on an image $a$ in data set A, and $D_A(G_{B2A}(b))$ denotes its result on the image $G_{B2A}(b)$ generated by generator G_B2A.

When $L_{D_A}$ reaches its maximum, discriminator D_A can accurately distinguish the original normal eye image data in normal eye image data set A from the normal eye image data generated by generator G_B2A. Discriminator D_A is then kept unchanged, and the original pinkeye image data in pinkeye image data set B are input into generator G_B2A for training.
The loss function of discriminator D_B is:

$$L_{D_B} = \mathbb{E}_{b \sim p_B(b)}[\log D_B(b)] + \mathbb{E}_{a \sim p_A(a)}[\log(1 - D_B(G_{A2B}(a)))]$$

wherein $G_{A2B}$ denotes generator G_A2B, $D_B$ denotes discriminator D_B, and $G_{A2B}(a)$ denotes the image data generated by generator G_A2B; $D_B(b)$ denotes the result of discriminator D_B on an image $b$ in data set B, and $D_B(G_{A2B}(a))$ denotes its result on the image $G_{A2B}(a)$ generated by generator G_A2B.
When the loss function of discriminator D_B reaches its maximum, discriminator D_B can accurately distinguish the original pinkeye image data in pinkeye image data set B from the pinkeye image data generated by generator G_A2B. Discriminator D_B is then kept unchanged, and the original normal eye image data in normal eye image data set A are input into generator G_A2B for training.
The loss function of generator G_A2B is:

$$L_{G_{A2B}} = \mathbb{E}_{a \sim p_A(a)}[\log(1 - D_B(G_{A2B}(a)))]$$

The loss function of generator G_B2A is:

$$L_{G_{B2A}} = \mathbb{E}_{b \sim p_B(b)}[\log(1 - D_A(G_{B2A}(b)))]$$

When the loss functions of generator G_A2B and generator G_B2A reach their minimum, the pinkeye image data generated by G_A2B and the normal eye image data generated by G_B2A are highly similar to the original pinkeye and normal eye image data, respectively. At this point discriminator D_A can no longer distinguish the original normal eye image data in set A from the data generated by G_B2A, discriminator D_B can no longer distinguish the original pinkeye image data in set B from the data generated by G_A2B, and the effects of generators G_B2A and G_A2B are optimal. The pinkeye image data generated by G_A2B are then input into the high-resolution image data generator G_E for training, and discriminator D_E is used to judge the quality of the data generated by G_E. When the overall loss reaches its minimum, the high-resolution red eye case data generation model is obtained. The overall loss is:
$$L = L_{cyc} + L_{percep}$$

where the cycle-consistency loss is

$$L_{cyc} = \mathbb{E}_{a \sim p_A(a)}[\lVert G_{B2A}(G_{A2B}(a)) - a \rVert_1] + \mathbb{E}_{b \sim p_B(b)}[\lVert G_{A2B}(G_{B2A}(b)) - b \rVert_1]$$

with $a$ an image in the normal eye image data set A and $b$ an image in the pinkeye image data set B, and the perceptual loss is

$$L_{percep} = \frac{1}{C_j H_j W_j} \lVert \phi_j(b) - \phi_j(\hat{b}) \rVert_2^2$$

where $\phi$ is the loss-function network, $C_j H_j W_j$ denotes the size of the $j$-th layer feature map, $\phi_j(b)$ is the feature term of image $b$ in pinkeye image data set B, and $\phi_j(\hat{b})$ is that of the generated image $\hat{b}$.
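The adversarial and cycle-consistency terms above can be exercised numerically. A minimal NumPy sketch, assuming the standard log-loss and L1 forms of these terms:

```python
import numpy as np

def d_loss(d_real: np.ndarray, d_fake: np.ndarray, eps: float = 1e-8) -> float:
    """Discriminator objective E[log D(real)] + E[log(1 - D(fake))].
    Training drives this toward its maximum of 0."""
    return float(np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps)))

def cycle_loss(a, a_rec, b, b_rec) -> float:
    """L1 cycle consistency: ||G_B2A(G_A2B(a)) - a|| + ||G_A2B(G_B2A(b)) - b||."""
    return float(np.mean(np.abs(a_rec - a)) + np.mean(np.abs(b_rec - b)))

# A perfect discriminator scores real data near 1 and fakes near 0,
# so its objective approaches 0 from below.
print(d_loss(np.full(4, 0.99), np.full(4, 0.01)))
```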
The present invention also provides an apparatus comprising a memory storing a computer program and a processor executing the steps of the above-described high resolution red eye case data generating method.
The present invention still further provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the above-described high-resolution red-eye case data generation method.
The beneficial effects of the invention are as follows:
the invention builds a new pinkeye case data generation model by fusing a cyclogan network and an esrgan network, and solves the problem of lack of high-resolution pinkeye data.
Drawings
To more clearly illustrate the embodiments of the present invention and the technical solutions of the prior art, the drawings required by the embodiments are briefly described below. The drawings described below show only some embodiments of the invention; a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a network configuration diagram of generators G_A2B and G_B2A;
FIG. 2 is a network configuration diagram of discriminators D_A and D_B;
fig. 3 is a network configuration diagram of the generator g_e;
fig. 4 is a network configuration diagram of the discriminator d_e;
fig. 5 is a network configuration diagram of the high-resolution red eye disease case data generation model.
Detailed Description
In order to make the technical scheme and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Example 1:
referring to fig. 1 to 5, the present embodiment provides a high-resolution red eye case data generating method, including:
(S1) Collect a large amount of frontal face image data and label the eye regions in the face images to obtain eye image data, forming a human eye image data set. Classify all eye image data in the human eye image data set as pinkeye or normal eyes; all normal eye image data constitute normal eye image data set A. Since the collected pinkeye image data are relatively few, they are subjected to data enhancement such as flipping, contrast change, and brightness change to obtain the final pinkeye image data set B.
(S2) Build a human eye image extraction model based on YOLOv4 and train it with the human eye image data set.
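Once the trained detector returns eye bounding boxes, cropping the detected regions yields the eye image data. A minimal sketch — the (x1, y1, x2, y2) box format and the helper name are assumptions for illustration, not the actual YOLOv4 output format:

```python
import numpy as np

def crop_eyes(face_img: np.ndarray, boxes):
    """Crop eye regions from a face image (H x W x C) given a list of
    (x1, y1, x2, y2) pixel boxes. The box format is a hypothetical stand-in
    for whatever the trained eye detector actually emits."""
    return [face_img[y1:y2, x1:x2] for (x1, y1, x2, y2) in boxes]

# Two hypothetical eye detections on a 100x100 face image
face = np.zeros((100, 100, 3))
eyes = crop_eyes(face, [(10, 30, 40, 50), (60, 30, 90, 50)])
assert [e.shape[:2] for e in eyes] == [(20, 30), (20, 30)]
```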
(S3) Fuse a CycleGAN network and an ESRGAN network to construct a high-resolution red eye case data generation model, and train the model to obtain a trained high-resolution red eye case data generation model.
(S3.1) Construct the pinkeye data generator G_A2B based on the CycleGAN network; generator G_A2B is the generating function in the CycleGAN network that converts normal eye image data in the normal eye image data set into pinkeye image data. At the same time, a discriminator D_B is introduced for generator G_A2B; its role is to discriminate whether the image data generated by G_A2B are original image data from pinkeye image data set B.
Referring to fig. 1, the network structure of generator G_A2B is as follows: a convolution layer (Conv) first extracts features from the normal eye image data in set A; the data then pass through several G_A2B basic modules (Basic blocks) and finally through one more convolution layer to generate image data similar to the pinkeye image data in set B. Inside a G_A2B basic module, the data pass through a convolution layer, enter a normalization layer (BN layer) for data normalization, are made nonlinear with a SIREN activation function, and then pass through a convolution layer and a normalization layer (BN layer) once more.
Referring to fig. 2, the network structure of discriminator D_B is as follows: a convolution layer first extracts features from the image data generated by G_A2B; a Leaky ReLU activation function makes the data nonlinear; the data are then sent through several D_B basic modules (Basic blocks) for convolution, nonlinearity, and normalization, followed by another round of convolution and nonlinearity; finally a sigmoid regression function produces the result, which indicates whether the image data generated by G_A2B belong to the original image data in pinkeye image data set B.
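The discriminator structure just described might be sketched in PyTorch as follows; strides, channel widths, block count, and the final pooling used to obtain one probability per image are assumptions for illustration:

```python
import torch
import torch.nn as nn

class DBasicBlock(nn.Module):
    """Convolution -> Leaky ReLU nonlinearity -> normalization, as described."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.BatchNorm2d(out_ch),
        )
    def forward(self, x):
        return self.body(x)

class DiscriminatorB(nn.Module):
    """Head conv + Leaky ReLU -> several basic modules -> conv + Leaky ReLU
    -> sigmoid regression to a real/fake probability."""
    def __init__(self, ch=64, n_blocks=3):
        super().__init__()
        layers = [nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2)]
        for _ in range(n_blocks):
            layers.append(DBasicBlock(ch, ch * 2))
            ch *= 2
        layers += [nn.Conv2d(ch, 1, 3, padding=1), nn.LeakyReLU(0.2)]
        self.features = nn.Sequential(*layers)
    def forward(self, x):
        # mean-pool the score map, then sigmoid: probability x is real data from set B
        return torch.sigmoid(self.features(x).mean(dim=(1, 2, 3)))
```

Discriminator D_A, described below, shares this structure with data set A as the real class.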
(S3.2) Construct the normal eye data generator G_B2A based on the CycleGAN network; generator G_B2A is the generating function in the CycleGAN network that converts pinkeye image data in the pinkeye image data set into normal eye image data. At the same time, a discriminator D_A is introduced for generator G_B2A; it is used to discriminate whether the image data generated by G_B2A are original image data from normal eye image data set A. The network structure of generator G_B2A is the same as that of generator G_A2B, and the network structure of discriminator D_A is the same as that of discriminator D_B.
Referring to fig. 1, the network structure of generator G_B2A is as follows: a convolution layer first extracts features from the pinkeye image data in set B; the data then pass through several G_B2A basic modules, undergo up-sampling, and finally pass through one more convolution layer to generate image data similar to the normal eye image data in set A. Inside a G_B2A basic module, the data pass through a convolution layer, enter a normalization layer for data normalization, are made nonlinear with a SIREN activation function, and then pass through a convolution layer and a normalization layer once more.
Referring to fig. 2, the network structure of discriminator D_A is as follows: a convolution layer first extracts features from the image data generated by G_B2A; a Leaky ReLU activation function makes the data nonlinear; the data are then sent through several D_A basic modules for convolution, nonlinearity, and normalization, followed by another round of convolution and nonlinearity; finally a sigmoid regression function produces the result, which indicates whether the image data generated by G_B2A belong to the original image data in normal eye image data set A.
(S3.3) Construct the high-resolution image data generator G_E based on the ESRGAN network; G_E is the generator in the ESRGAN network that raises the resolution of image data. At the same time, a discriminator D_E is introduced for G_E; its role is to discriminate whether the image data generated by G_E are high-resolution image data.
Referring to fig. 3, the network structure of the high-resolution image data generator G_E is as follows: a convolution layer first extracts features from the input image data; several residual-in-residual dense blocks (RRDB) then process the data, followed by convolution and up-sampling; finally the processed image data are obtained after one more convolution layer. Data entering an RRDB are output after multiple convolution layers and nonlinear processing: a convolution layer is followed by SIREN-activation nonlinear processing, and this pair is repeated three times to complete one RRDB pass.
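The G_E structure just described might be sketched in PyTorch; channel widths, block count, and the x4 up-sampling factor are assumptions, and the residual skip inside the RRDB follows standard ESRGAN practice rather than an explicit statement in the patent:

```python
import torch
import torch.nn as nn

class Sine(nn.Module):
    """SIREN-style sine activation named in the description."""
    def forward(self, x):
        return torch.sin(x)

class RRDB(nn.Module):
    """Residual dense block as described: (conv -> sine) repeated three times,
    plus a residual skip back to the block input (assumed, per ESRGAN)."""
    def __init__(self, ch):
        super().__init__()
        layers = []
        for _ in range(3):
            layers += [nn.Conv2d(ch, ch, 3, padding=1), Sine()]
        self.body = nn.Sequential(*layers)
    def forward(self, x):
        return x + self.body(x)

class GeneratorE(nn.Module):
    """Head conv -> several RRDBs -> conv -> up-sampling -> tail conv."""
    def __init__(self, ch=64, n_rrdb=4, scale=4):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.body = nn.Sequential(*[RRDB(ch) for _ in range(n_rrdb)],
                                  nn.Conv2d(ch, ch, 3, padding=1))
        self.up = nn.Upsample(scale_factor=scale, mode="nearest")
        self.tail = nn.Conv2d(ch, 3, 3, padding=1)
    def forward(self, x):
        return self.tail(self.up(self.body(self.head(x))))
```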
Referring to fig. 4, the network structure of discriminator D_E is as follows: a convolution layer first extracts features from the image data generated by G_E; a Leaky ReLU activation function makes the data nonlinear; the data are then sent through basic modules for convolution, activation-function nonlinearity, and normalization. After several basic modules, the data are sent to a dense connection module (Dense) for feature processing, made nonlinear by an activation function, sent to a next dense connection module (Dense) for further feature processing, and finally processed with sigmoid; the result indicates whether the image data generated by G_E belong to high-resolution image data.
(S3.4) training a high-resolution red eye case data generation model;
As shown in fig. 5, generator G_A2B and generator G_B2A are first kept unchanged, and the original normal eye image data in normal eye image data set A and the normal eye image data generated by generator G_B2A are input into discriminator D_A for training. Meanwhile, the pinkeye image data in pinkeye image data set B and the pinkeye image data generated by G_A2B are input into discriminator D_B for training.
The loss function of discriminator D_A is:

$$L_{D_A} = \mathbb{E}_{a \sim p_A(a)}[\log D_A(a)] + \mathbb{E}_{b \sim p_B(b)}[\log(1 - D_A(G_{B2A}(b)))]$$

wherein $G_{B2A}$ denotes generator G_B2A, $D_A$ denotes discriminator D_A, and $G_{B2A}(b)$ denotes the image data generated by generator G_B2A; $p_B(b)$ is the data distribution over the pinkeye image data set B and $p_A(a)$ is the data distribution over the normal eye image data set A; $D_A(a)$ denotes the result of discriminator D_A on an image $a$ in the normal eye image data set A, and $D_A(G_{B2A}(b))$ denotes its result on the image $G_{B2A}(b)$ generated by generator G_B2A.
When the loss function of discriminator D_A reaches its maximum, discriminator D_A can accurately distinguish the original normal eye image data in normal eye image data set A from the normal eye image data generated by generator G_B2A. Discriminator D_A is then kept unchanged, and the original pinkeye image data in pinkeye image data set B are input into generator G_B2A for training.
The loss function of discriminator D_B is:

$$L_{D_B} = \mathbb{E}_{b \sim p_B(b)}[\log D_B(b)] + \mathbb{E}_{a \sim p_A(a)}[\log(1 - D_B(G_{A2B}(a)))]$$

wherein $G_{A2B}$ denotes generator G_A2B, $D_B$ denotes discriminator D_B, and $G_{A2B}(a)$ denotes the image data generated by generator G_A2B; $D_B(b)$ denotes the result of discriminator D_B on an image $b$ in the pinkeye image data set B, and $D_B(G_{A2B}(a))$ denotes its result on the image $G_{A2B}(a)$ generated by generator G_A2B.
When the loss function of discriminator D_B reaches its maximum, discriminator D_B can accurately distinguish the original pinkeye image data in pinkeye image data set B from the pinkeye image data generated by generator G_A2B. Discriminator D_B is then kept unchanged, and the original normal eye image data in normal eye image data set A are input into generator G_A2B for training.
The loss function of generator G_A2B is:

$$L_{G_{A2B}} = \mathbb{E}_{a \sim p_A(a)}[\log(1 - D_B(G_{A2B}(a)))]$$

The loss function of generator G_B2A is:

$$L_{G_{B2A}} = \mathbb{E}_{b \sim p_B(b)}[\log(1 - D_A(G_{B2A}(b)))]$$

When the loss functions of generator G_A2B and generator G_B2A reach their minimum, the pinkeye image data generated by G_A2B and the normal eye image data generated by G_B2A are highly similar to the original pinkeye and normal eye image data, respectively. At this point discriminator D_A can no longer distinguish the original normal eye image data in set A from the data generated by G_B2A, discriminator D_B can no longer distinguish the original pinkeye image data in set B from the data generated by G_A2B, and the effects of generators G_B2A and G_A2B are optimal. The pinkeye image data generated by G_A2B are then input into the high-resolution image data generator G_E for training, and discriminator D_E is used to judge the quality of the data generated by G_E. When the overall loss reaches its minimum, the trained high-resolution red eye case data generation model is obtained. The overall loss is as follows:
$$L = L_{cyc} + L_{percep}$$

where the cycle-consistency loss is

$$L_{cyc} = \mathbb{E}_{a \sim p_A(a)}[\lVert G_{B2A}(G_{A2B}(a)) - a \rVert_1] + \mathbb{E}_{b \sim p_B(b)}[\lVert G_{A2B}(G_{B2A}(b)) - b \rVert_1]$$

with $a$ a certain image in the normal eye image data set A and $b$ a certain image in the pinkeye image data set B, and the perceptual loss is

$$L_{percep} = \frac{1}{C_j H_j W_j} \lVert \phi_j(b) - \phi_j(\hat{b}) \rVert_2^2$$

where $\phi$ is the loss-function network, $C_j H_j W_j$ denotes the size of the $j$-th layer feature map, $\phi_j(b)$ is the feature term of image $b$ in pinkeye image data set B, and $\phi_j(\hat{b})$ is that of the generated image $\hat{b}$.
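The alternating procedure above (fix the generators to train the discriminators, then fix the discriminators to train the generators with a cycle-consistency term) can be sketched with toy stand-in networks. All layer choices, the BCE formulation, and the weight of 10 on the cycle term are illustrative assumptions, not values from the patent:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

def make_d():
    # toy discriminator: one real/fake probability per image
    return nn.Sequential(nn.Conv2d(3, 1, 3, padding=1),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Sigmoid())

# toy stand-ins for the real generator/discriminator architectures
G_A2B, G_B2A = nn.Conv2d(3, 3, 3, padding=1), nn.Conv2d(3, 3, 3, padding=1)
D_A, D_B = make_d(), make_d()
opt_d = torch.optim.Adam(list(D_A.parameters()) + list(D_B.parameters()), lr=1e-3)
opt_g = torch.optim.Adam(list(G_A2B.parameters()) + list(G_B2A.parameters()), lr=1e-3)

a = torch.rand(2, 3, 16, 16)   # batch from normal eye set A
b = torch.rand(2, 3, 16, 16)   # batch from pinkeye set B

# Step 1: generators fixed, train discriminators to separate real from generated.
with torch.no_grad():
    fake_b, fake_a = G_A2B(a), G_B2A(b)
d_loss = (bce(D_B(b), torch.ones(2, 1)) + bce(D_B(fake_b), torch.zeros(2, 1)) +
          bce(D_A(a), torch.ones(2, 1)) + bce(D_A(fake_a), torch.zeros(2, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Step 2: discriminators fixed (only opt_g updates), train generators to fool
# them, plus the L1 cycle-consistency term.
fake_b, fake_a = G_A2B(a), G_B2A(b)
cyc = (G_B2A(fake_b) - a).abs().mean() + (G_A2B(fake_a) - b).abs().mean()
g_loss = (bce(D_B(fake_b), torch.ones(2, 1)) +
          bce(D_A(fake_a), torch.ones(2, 1)) + 10.0 * cyc)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In the full method this alternation would be iterated over the whole data sets, after which the G_A2B outputs feed the ESRGAN stage (G_E and D_E).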
(S4) A face front image of the red eye case data to be generated is acquired and input into the human eye image extraction model trained in step (S2), and the eye part data of the corresponding red eye case data to be generated are extracted. The extracted eye part image data are then input into the high-resolution pinkeye case data generation model trained in step (S3) to obtain high-resolution pinkeye image data.
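The inference flow of step (S4) — extract the eye regions with the trained human eye image extraction model, translate them to pinkeye images with G_A2B, then raise their resolution with G_E — can be sketched as follows. This is an illustrative sketch only: the function names are placeholders, and the patent does not prescribe an implementation.

```python
def generate_red_eye_cases(face_image, eye_extractor, g_a2b, g_e):
    """Sketch of step (S4): extract eye regions from a face front image,
    translate each region to a pinkeye image, then super-resolve it.
    `eye_extractor`, `g_a2b`, and `g_e` stand in for the trained models."""
    eye_regions = eye_extractor(face_image)        # trained YOLOv4-based extractor
    return [g_e(g_a2b(region)) for region in eye_regions]
```

With stub callables in place of the trained networks, the flow can be exercised end to end before the real models are plugged in.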
Example 2
A computer device comprising a memory and a processor, the memory storing a computer program; when the processor executes the computer program, the steps of the high-resolution red eye case data generation method provided in embodiment 1 above are implemented.
Example 3
A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the high resolution red eye case data generating method provided in the above embodiment 1.
The foregoing embodiments are presented in simplified form for clarity; it will be evident to those skilled in the art that the invention is not limited to the particular embodiments disclosed herein.
Claims (8)
1. A method for generating high-resolution red eye case data, characterized by comprising the following steps:
acquiring a human eye image data set, wherein the eye image data in the human eye image data set comprises normal eye image data and pinkeye image data, all normal eye image data in the human eye image data set form a normal eye image data set A, and all pinkeye image data in the human eye image data set form a pinkeye image data set B;
the CycleGAN network and the ESRGAN network are fused, and a high-resolution red eye case data generation model is built and trained to obtain a trained high-resolution red eye case data generation model, wherein building the high-resolution red eye case data generation model comprises the following steps:
constructing a pinkeye data generator G_A2B based on the CycleGAN network, wherein the generator G_A2B is the generating function in the CycleGAN network that converts normal eye image data in the normal eye image data set into pinkeye image data; meanwhile, a discriminator D_B is introduced for the generator G_A2B, whose role is to discriminate whether the image data generated by the generator G_A2B are original image data in the pinkeye image data set B;
constructing a normal eye data generator G_B2A based on the CycleGAN network, wherein the generator G_B2A is the generating function in the CycleGAN network that converts pinkeye image data in the pinkeye image data set into normal eye image data; meanwhile, a discriminator D_A is introduced for the generator G_B2A, whose role is to discriminate whether the image data generated by the generator G_B2A are original image data in the normal eye image data set A;
constructing a high-resolution image data generator G_E based on the ESRGAN network, wherein the high-resolution image data generator G_E is the generator in the ESRGAN network that improves the resolution of the image data; meanwhile, a discriminator D_E is introduced for the high-resolution image data generator G_E, whose role is to discriminate whether the image data generated by the high-resolution image data generator G_E are high-resolution image data;
the training method of the high-resolution red eye case data generation model comprises the following steps:
first, the generator G_A2B and the generator G_B2A are kept unchanged, and the original normal eye image data in the normal eye image data set A and the normal eye image data generated by the generator G_B2A are input into the discriminator D_A for training; meanwhile, the pinkeye image data in the pinkeye image data set B and the pinkeye image data generated by the generator G_A2B are input into the discriminator D_B for training;
the loss function of the discriminator D_A is:
L_D_A = E_{a~p_A(a)}[ log D_A(a) ] + E_{b~p_B(b)}[ log(1 - D_A(G_B2A(b))) ]
wherein G_B2A denotes the generator G_B2A, D_A denotes the discriminator D_A, and G_B2A(b) denotes the image data generated by the generator G_B2A; E_{b~p_B(b)} denotes the expectation over the distribution p_B(b) of the pinkeye image data set B, and E_{a~p_A(a)} denotes the expectation over the distribution p_A(a) of the normal eye image data set A; D_A(a) denotes the result of the discriminator D_A discriminating an image a in the normal eye image data set A, and D_A(G_B2A(b)) denotes the result of the discriminator D_A discriminating the image data G_B2A(b) generated by the generator G_B2A;
when the loss function of the discriminator D_A reaches its maximum, the discriminator D_A can accurately distinguish the original normal eye image data in the normal eye image data set A from the normal eye image data generated by the generator G_B2A; the discriminator D_A is kept unchanged, and the original pinkeye image data in the pinkeye image data set B are input into the generator G_B2A for training;
the loss function of the discriminator D_B is:
L_D_B = E_{b~p_B(b)}[ log D_B(b) ] + E_{a~p_A(a)}[ log(1 - D_B(G_A2B(a))) ]
wherein G_A2B denotes the generator G_A2B, D_B denotes the discriminator D_B, and G_A2B(a) denotes the image data generated by the generator G_A2B; D_B(b) denotes the result of the discriminator D_B discriminating an image b in the pinkeye image data set B, and D_B(G_A2B(a)) denotes the result of the discriminator D_B discriminating the image data G_A2B(a) generated by the generator G_A2B;
when the loss function of the discriminator D_B reaches its maximum, the discriminator D_B can accurately distinguish the original pinkeye image data in the pinkeye image data set B from the pinkeye image data generated by the generator G_A2B; the discriminator D_B is kept unchanged, and the original normal eye image data in the normal eye image data set A are input into the generator G_A2B for training;
the loss function of the generator G_A2B is:
L_G_A2B = E_{a~p_A(a)}[ log(1 - D_B(G_A2B(a))) ]
the loss function of the generator G_B2A is:
L_G_B2A = E_{b~p_B(b)}[ log(1 - D_A(G_B2A(b))) ]
when the loss function of the generator G_A2B and the loss function of the generator G_B2A are minimized, the pinkeye image data generated by the generator G_A2B and the normal eye image data generated by the generator G_B2A are highly similar to the original pinkeye image data and the original normal eye image data, respectively; at this point, the discriminator D_A cannot distinguish the original normal eye image data in the normal eye image data set A from the normal eye image data generated by the generator G_B2A, the discriminator D_B cannot distinguish the original pinkeye image data in the pinkeye image data set B from the pinkeye image data generated by the generator G_A2B, and the effects of the generator G_B2A and the generator G_A2B are optimal; the pinkeye image data generated by the generator G_A2B at this point are input into the high-resolution image data generator G_E for training, and the effect of the data generated by the high-resolution image data generator G_E is judged by the discriminator D_E; when the overall loss is minimal, the high-resolution pinkeye case data generation model is obtained, wherein the overall loss is:
L = L_G_A2B + L_G_B2A + λ·L_cyc + L_percep
wherein L_cyc = E_{a~p_A(a)}[ ||G_B2A(G_A2B(a)) - a||_1 ] + E_{b~p_B(b)}[ ||G_A2B(G_B2A(b)) - b||_1 ] is the cycle-consistency loss, a is an image in the normal eye image data set A, and b is an image in the pinkeye image data set B; L_percep = (1 / (C_j H_j W_j)) · ||φ_j(b) - φ_j(b̂)||²₂ is the perceptual loss, wherein φ is the loss function network, C_j H_j W_j denotes the size of the j-th layer feature map, φ_j(b) denotes the features of the image b in the pinkeye image data set B, φ_j(b̂) denotes the features of the generated image b̂, and λ > 0;
and inputting the eye image data of the red eye case data to be generated into a trained high-resolution red eye case data generation model to obtain the red eye case data.
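The adversarial, cycle-consistency, and perceptual terms described in claim 1 can be sketched numerically. This is an illustrative NumPy reconstruction of the standard loss forms on toy arrays, not the patented training code; discriminator outputs are assumed to be probabilities in (0, 1).

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # L_D = E[log D(real)] + E[log(1 - D(fake))]; larger when the
    # discriminator separates real from generated data well.
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # L_G = E[log(1 - D(G(x)))]; smaller when the generator fools D.
    return np.mean(np.log(1.0 - d_fake))

def cycle_consistency_loss(a, a_roundtrip, b, b_roundtrip):
    # L1 distance between each image and its A->B->A (or B->A->B) round trip.
    return np.mean(np.abs(a_roundtrip - a)) + np.mean(np.abs(b_roundtrip - b))

def perceptual_loss(feat_real, feat_fake):
    # ||phi_j(b) - phi_j(b_hat)||^2 / (C_j * H_j * W_j), computed on
    # precomputed feature maps of the loss function network phi.
    return np.sum((feat_real - feat_fake) ** 2) / feat_real.size
```

A quick sanity check: a confident discriminator yields a higher discriminator loss, a convincing generator yields a lower generator loss, and both reconstruction-style losses vanish when the images (or features) match exactly.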
2. The method for generating high-resolution red eye case data according to claim 1, characterized in that a face image data set is formed by collecting face image data of persons; the eye parts in the face image data set are manually annotated, and the annotated face image data are used to train a yolov4 model to obtain a trained human eye image extraction model;
extracting eye image data by adopting a trained eye image extraction model, and taking all the obtained eye image data as a pinkeye case data set to be generated;
and classifying all eye image data in the to-be-generated pinkeye case data set into pinkeye and normal eyes, wherein all normal eye image data form a normal eye image data set A; data enhancement processing of flipping, contrast change and brightness change is performed on the pinkeye image data, and the resulting pinkeye image data form a pinkeye image data set B.
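The data enhancement of claim 2 (flip, contrast change, brightness change) can be sketched with NumPy. The gain and offset values below are illustrative assumptions, not values specified by the patent.

```python
import numpy as np

def enhance(img, contrast=1.2, brightness=10.0):
    """Return flipped, contrast-adjusted, and brightness-adjusted copies of an
    eye image with pixel values in [0, 255]; the factors are illustrative."""
    flipped = img[:, ::-1]                                        # horizontal flip
    contrasted = np.clip((img - 127.5) * contrast + 127.5, 0, 255)  # contrast change
    brightened = np.clip(img + brightness, 0, 255)                  # brightness change
    return flipped, contrasted, brightened
```

Each transform triples the effective number of pinkeye samples, which is the point of the enhancement step: red eye cases are scarce relative to normal eyes.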
3. The high-resolution red eye case data generating method according to claim 1, wherein the network structure of the generator G_A2B is: first, a convolution layer extracts features from the normal eye image data in the normal eye image data set A; the data are then processed by a plurality of G_A2B basic modules and up-sampled; finally, after a further convolution layer, image data similar to the pinkeye image data in the pinkeye image data set B are generated; the G_A2B basic module first passes the data through a convolution layer, then through a normalization layer for data normalization, then applies a SIREN activation function to introduce nonlinearity, and then passes the data through a convolution layer and a normalization layer again;
the network structure of the generator G_B2A is: first, a convolution layer extracts features from the pinkeye image data in the pinkeye image data set B; the data are then processed by a plurality of G_B2A basic modules and up-sampled; finally, after a further convolution layer, image data similar to the normal eye image data in the normal eye image data set A are generated; the G_B2A basic module first passes the data through a convolution layer, then through a normalization layer for data normalization, then applies a SIREN activation function to introduce nonlinearity, and then passes the data through a convolution layer and a normalization layer again.
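The basic module of claim 3 (convolution → normalization → SIREN → convolution → normalization) can be sketched as follows. The convolutions are passed in as callables, and the SIREN frequency w0 = 30 follows the SIREN paper's default; neither is specified by the patent.

```python
import numpy as np

def siren(x, w0=30.0):
    # SIREN activation: a periodic sine nonlinearity, sin(w0 * x).
    return np.sin(w0 * x)

def normalize(x, eps=1e-5):
    # Normalization layer: zero mean, unit variance over the feature map.
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def g_basic_module(x, conv1, conv2):
    # conv -> norm -> SIREN -> conv -> norm, as described in claim 3.
    h = siren(normalize(conv1(x)))
    return normalize(conv2(h))
```

The sine activation keeps intermediate values bounded in [-1, 1], and the final normalization leaves the module output centered, which is what makes stacking "a plurality of" such modules stable.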
4. The high-resolution red eye case data generating method according to claim 3, wherein the network structure of the discriminator D_B is: first, a convolution layer extracts features from the image data generated by G_A2B; a Leaky ReLU activation function then introduces nonlinearity; the data are then sent to a D_B basic module for convolution, nonlinearity and normalization; after processing by a plurality of D_B basic modules, convolution and nonlinearity are applied again, and finally a regression function performs regression, the obtained result indicating whether the image data generated by G_A2B belong to the original image data in the pinkeye image data set B;
the network structure of the discriminator D_A is: first, a convolution layer extracts features from the image data generated by G_B2A; a Leaky ReLU activation function then introduces nonlinearity; the data are then sent to a D_A basic module for convolution, nonlinearity and normalization; after processing by a plurality of D_A basic modules, convolution and nonlinearity are applied again, and finally a regression function performs regression, the obtained result indicating whether the image data generated by G_B2A belong to the original image data in the normal eye image data set A.
5. The high-resolution red eye case data generating method according to claim 1, wherein the network structure of the high-resolution image data generator G_E is: first, a convolution layer extracts features from the input image data; the data are then processed by a plurality of residual dense modules and up-sampled; finally, the processed image data are obtained after a further convolution layer; within the residual dense module, a convolution layer is followed by nonlinear processing with a SIREN activation function, and this convolution-plus-nonlinearity step is repeated three times to complete one pass through the residual dense module.
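A minimal sketch of the residual dense module of claim 5: three convolution-plus-SIREN passes with a residual connection back to the input. The dense concatenation of ESRGAN-style blocks is simplified here to a running feature chain, and the residual scaling β = 0.2 is the common ESRGAN choice, not a value given in the patent.

```python
import numpy as np

def residual_dense_module(x, convs, w0=30.0, beta=0.2):
    """Three (conv -> SIREN) stages, each fed the accumulated features
    (a simplification of dense concatenation), plus a scaled residual."""
    h = x
    for conv in convs:            # the claim repeats conv + SIREN three times
        h = h + np.sin(w0 * conv(h))
    return x + beta * (h - x)     # residual connection with scaling beta
```

The residual path means the module can only refine its input, never destroy it, which is what lets many such modules be stacked in the super-resolution generator.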
6. The high-resolution red eye case data generating method according to claim 5, wherein the network structure of the discriminator D_E is: first, a convolution layer extracts features from the image data generated by the high-resolution image data generator G_E; a Leaky ReLU activation function then introduces nonlinearity, and the data are sent to a basic module for convolution, nonlinearity and normalization; after processing by a plurality of basic modules, the data are sent to a dense connection module for feature processing, then nonlinearized and processed with a sigmoid function, the result indicating whether the image data generated by G_E belong to high-resolution image data.
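The sigmoid head that ends the discriminator D_E in claim 6 maps the network's final score to a probability-like judgment; a one-line sketch:

```python
import numpy as np

def sigmoid(x):
    # Maps a real-valued score to (0, 1); values near 1 indicate the input
    # is judged to be high-resolution image data, values near 0 the opposite.
    return 1.0 / (1.0 + np.exp(-x))
```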
7. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the high resolution red eye case data generating method of any one of claims 1 to 6 when the computer program is executed.
8. A storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the steps of the high resolution red eye case data generating method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010854745.XA CN112992304B (en) | 2020-08-24 | 2020-08-24 | High-resolution red eye case data generation method, device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010854745.XA CN112992304B (en) | 2020-08-24 | 2020-08-24 | High-resolution red eye case data generation method, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112992304A CN112992304A (en) | 2021-06-18 |
CN112992304B true CN112992304B (en) | 2023-10-13 |
Family
ID=76344280
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010854745.XA Active CN112992304B (en) | 2020-08-24 | 2020-08-24 | High-resolution red eye case data generation method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112992304B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113569655A (en) * | 2021-07-02 | 2021-10-29 | 广州大学 | Red eye patient identification system based on eye color monitoring |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107220929A (en) * | 2017-06-23 | 2017-09-29 | Shenzhen Weiteshi Technology Co., Ltd. | Unpaired image transformation method using a cycle-consistent adversarial network
CN108269245A (en) * | 2018-01-26 | 2018-07-10 | Shenzhen Weiteshi Technology Co., Ltd. | Eye image restoration method based on a novel generative adversarial network
CN109993072A (en) * | 2019-03-14 | 2019-07-09 | Sun Yat-sen University | Low-resolution pedestrian re-identification system and method based on super-resolution image generation
CN110363068A (en) * | 2019-05-28 | 2019-10-22 | China University of Mining and Technology | High-resolution pedestrian image generation method based on a multi-scale cycle generative adversarial network
CN111028146A (en) * | 2019-11-06 | 2020-04-17 | 武汉理工大学 | Image super-resolution method for generating countermeasure network based on double discriminators |
CN111275647A (en) * | 2020-01-21 | 2020-06-12 | 南京信息工程大学 | Underwater image restoration method based on cyclic generation countermeasure network |
CN111340173A (en) * | 2019-12-24 | 2020-06-26 | 中国科学院深圳先进技术研究院 | Method and system for training generation countermeasure network for high-dimensional data and electronic equipment |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11232541B2 (en) * | 2018-10-08 | 2022-01-25 | Rensselaer Polytechnic Institute | CT super-resolution GAN constrained by the identical, residual and cycle learning ensemble (GAN-circle) |
WO2020097731A1 (en) * | 2018-11-15 | 2020-05-22 | Elmoznino Eric | System and method for augmented reality using conditional cycle-consistent generative image-to-image translation models |
- 2020-08-24 CN CN202010854745.XA patent/CN112992304B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107220929A (en) * | 2017-06-23 | 2017-09-29 | Shenzhen Weiteshi Technology Co., Ltd. | Unpaired image transformation method using a cycle-consistent adversarial network
CN108269245A (en) * | 2018-01-26 | 2018-07-10 | Shenzhen Weiteshi Technology Co., Ltd. | Eye image restoration method based on a novel generative adversarial network
CN109993072A (en) * | 2019-03-14 | 2019-07-09 | Sun Yat-sen University | Low-resolution pedestrian re-identification system and method based on super-resolution image generation
CN110363068A (en) * | 2019-05-28 | 2019-10-22 | China University of Mining and Technology | High-resolution pedestrian image generation method based on a multi-scale cycle generative adversarial network
CN111028146A (en) * | 2019-11-06 | 2020-04-17 | 武汉理工大学 | Image super-resolution method for generating countermeasure network based on double discriminators |
CN111340173A (en) * | 2019-12-24 | 2020-06-26 | 中国科学院深圳先进技术研究院 | Method and system for training generation countermeasure network for high-dimensional data and electronic equipment |
CN111275647A (en) * | 2020-01-21 | 2020-06-12 | 南京信息工程大学 | Underwater image restoration method based on cyclic generation countermeasure network |
Non-Patent Citations (5)
Title |
---|
An Improved Technique for Face Age Progression and Enhanced Super-Resolution with Generative Adversarial Networks; Neha Sharma et al.; Wireless Personal Communications; full text *
CT Super-Resolution GAN Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE); Chenyu You et al.; IEEE Transactions on Medical Imaging; Vol. 39, No. 01; full text *
Unsupervised Real-World Super Resolution with Cycle Generative Adversarial Network and Domain Discriminator; Gwantae Kim et al.; 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); full text *
Research on image super-resolution algorithms based on generative adversarial networks; Lian Shuailong; China Masters' Theses Full-text Database, Information Science and Technology (Monthly), No. 02; full text *
A survey of applications of generative adversarial networks; Ye Chen et al.; Journal of Tongji University (Natural Science), No. 04; full text *
Also Published As
Publication number | Publication date |
---|---|
CN112992304A (en) | 2021-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108446730B (en) | CT pulmonary nodule detection device based on deep learning | |
CN110823574B (en) | Fault diagnosis method based on semi-supervised learning deep countermeasure network | |
CN113724880A (en) | Abnormal brain connection prediction system, method and device and readable storage medium | |
CN109493342B (en) | Skin disease picture lesion type classification method based on deep learning | |
CN113191390B (en) | Image classification model construction method, image classification method and storage medium | |
CN111985538A (en) | Small sample picture classification model and method based on semantic auxiliary attention mechanism | |
CN102880855A (en) | Cloud-model-based facial expression recognition method | |
Bushra et al. | Crime investigation using DCGAN by Forensic Sketch-to-Face Transformation (STF)-A review | |
CN115482595B (en) | Specific character visual sense counterfeiting detection and identification method based on semantic segmentation | |
CN112949469A (en) | Image recognition method, system and equipment for face tampered image characteristic distribution | |
CN113592769A (en) | Abnormal image detection method, abnormal image model training method, abnormal image detection device, abnormal image model training device and abnormal image model training medium | |
CN112992304B (en) | High-resolution red eye case data generation method, device and storage medium | |
Han et al. | Learning generative models of tissue organization with supervised GANs | |
CN115358337A (en) | Small sample fault diagnosis method and device and storage medium | |
Fahad et al. | Skinnet-8: An efficient cnn architecture for classifying skin cancer on an imbalanced dataset | |
CN117726872A (en) | Lung CT image classification method based on multi-view multi-task feature learning | |
Bie et al. | XCoOp: Explainable Prompt Learning for Computer-Aided Diagnosis via Concept-guided Context Optimization | |
Shao et al. | Two-stream coupling network with bidirectional interaction between structure and texture for image inpainting | |
Alam et al. | Effect of Different Modalities of Facial Images on ASD Diagnosis Using Deep Learning-Based Neural Network | |
Guttikonda et al. | Integrating Convolutional Neural Networks (CNN) and Machine Learning for Accurate Identification of Autism Spectrum Disorder Using Facial Biomarkers | |
Malik et al. | Exploring dermoscopic structures for melanoma lesions' classification | |
Zhang et al. | Integrating clinical knowledge in a thyroid nodule classification model based on | |
Vijayalakshmi et al. | Comparative Analysis of Self-Supervised and Supervised Deep Learning Models for Ocular Disease Recognition | |
CN116503858B (en) | Immunofluorescence image classification method and system based on generation model | |
CN116402682B (en) | Image reconstruction method and system based on differential value dense residual super-resolution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||