CN110826470A - Fundus image left and right eye identification method based on deep active learning - Google Patents

Fundus image left and right eye identification method based on deep active learning

Info

Publication number
CN110826470A
CN110826470A
Authority
CN
China
Prior art keywords
right eye
fundus image
model
fundus
main
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911060368.6A
Other languages
Chinese (zh)
Inventor
侯君临
杜姗姗
冯瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN201911060368.6A priority Critical patent/CN110826470A/en
Publication of CN110826470A publication Critical patent/CN110826470A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/18 — Eye characteristics, e.g. of the iris
    • G06V40/193 — Preprocessing; Feature extraction
    • G06V40/197 — Matching; Classification

Abstract

The invention provides a fundus image left and right eye identification method based on deep active learning, which can accurately identify left and right eyes in both main-view fundus images and the harder-to-identify non-main-view fundus images. The method is characterized by comprising the following steps: step S1, preprocessing the image to be identified; step S2, inputting the preprocessed image into a fundus image left and right eye identification model to obtain the left and right eye classification result of the image, the model being trained as follows: step T1, constructing an initial fundus image recognition model; step T2, inputting the training set into the initial fundus image recognition model for training to obtain a main-view fundus image recognition model; step T3, using the main-view fundus image recognition model to select non-main-view fundus images from a pool of fundus images as difficult samples; and step T4, training the main-view fundus image recognition model with the difficult samples as an added training set to obtain the fundus image left and right eye identification model.

Description

Fundus image left and right eye identification method based on deep active learning
Technical Field
The invention belongs to the fields of computer vision and medical imaging, relates to a method for identifying the left and right eyes of a fundus image, and particularly relates to a fundus image left and right eye identification method based on deep active learning.
Background
Fundus photography is an important method for examining diseases of the vitreous body, retina, choroid and optic nerve, and many systemic diseases such as hypertension and diabetes cause fundus lesions, so fundus images are important diagnostic data. Distinguishing the left eye from the right eye in a fundus image is a necessary basis for a large number of subsequent tasks, and is mainly done according to the relative positions of the optic disc and the macula and the bending direction of the central retinal artery. If the positions of the optic disc and the macula can be clearly recognized in the image, the fovea is at the center of the shooting field, the imaging covers at least a 45-degree retinal area, the optic disc lies to the left of the macula, and the central retinal artery is convex to the left, then the image is a left-eye main-view fundus image; conversely, with the optic disc to the right of the macula and the central retinal artery convex to the right, it is a right-eye main-view fundus image.
In recent years, with the continuous development of deep learning, especially the excellent performance of a convolutional neural network on pattern classification, more and more image classification tasks can be efficiently automated. Some studies have applied convolutional neural networks to the task of left and right eye identification of fundus images.
However, current left and right eye identification models depend strongly on the data and only identify reliably on main-view fundus images of good quality. In practice, owing to the shooting instrument, illumination, technique, angle and so on, non-main-view fundus images of the left and right eyes may appear, for example images in which the optic disc lies at the center of the image and the position of the macula is hard to identify, and it is difficult for a general left and right eye identification model to generalize to such difficult samples with high accuracy.
Disclosure of Invention
In order to solve the above problems, the invention provides a fundus image left and right eye identification method that can accurately identify left and right eyes even in non-main-view fundus images, which are relatively difficult to identify, and adopts the following technical scheme:
The invention provides a fundus image left and right eye identification method based on deep active learning, characterized by comprising the following steps: step S1, preprocessing the image to be identified to obtain a preprocessed image; step S2, inputting the preprocessed image into a fundus image left and right eye identification model, which performs left and right eye identification on at least main-view and non-main-view fundus images, to obtain the left and right eye classification result of the image to be identified, wherein the fundus image left and right eye identification model is obtained by the following training: step T1, constructing an initial fundus image recognition model; step T2, inputting labeled main-view fundus images as the training set into the constructed initial fundus image recognition model for model training, obtaining a main-view fundus image recognition model; step T3, adopting an active learning method and using the main-view fundus image recognition model to select non-main-view fundus images from a pool of unlabeled fundus images as difficult samples; and step T4, after manually labeling the difficult samples, using them as an added training set to train the main-view fundus image recognition model further, obtaining the fundus image left and right eye identification model.
The fundus image left and right eye identification method based on deep active learning provided by the invention may also have the technical feature that the training part of step T2 comprises the following steps: step T2-1, sequentially inputting each training image in the training set into the constructed model for one iteration; step T2-2, calculating the loss error from the output of the last layer and back-propagating it so as to update the model parameters; and step T2-3, repeating steps T2-1 to T2-2 until the training completion condition is reached, obtaining the trained initial fundus image left and right eye identification model.
The fundus image left and right eye identification method based on deep active learning provided by the invention may also have the technical feature that step T4 comprises the following sub-steps: step T4-1, taking the main-view fundus image recognition model trained in step T2 as the starting model; step T4-2, sequentially taking a batch of images from the added training set and inputting them into the model for one iteration; step T4-3, calculating the loss error from the output of the last layer and back-propagating it so as to update the model parameters; step T4-4, repeating steps T4-2 to T4-3 until the training completion condition is reached, obtaining a trained fundus image left and right eye identification model; and step T4-5, testing the trained model on the verification set; if the required classification accuracy is reached, the final fundus image left and right eye identification model is obtained, otherwise steps T3 to T4 are repeated until the classification accuracy is reached.
The fundus image left and right eye identification method based on deep active learning provided by the invention may also have the technical feature that the classification accuracy criterion is that the ratio of the number of correctly classified fundus images to the total number of fundus images exceeds 95%.
The fundus image left and right eye identification method based on deep active learning provided by the invention may further have the technical feature that the output of the fundus image left and right eye identification model is a pair of probability scores indicating that the image is identified as the left eye and the right eye respectively; the classification result is the left eye if the left-eye score is greater than the right-eye score, and the right eye if the right-eye score is greater than the left-eye score.
The fundus image left and right eye identification method based on deep active learning provided by the invention may also have the technical feature that the fundus image left and right eye identification model comprises, arranged in sequence, an input layer, a convolutional layer, a maximum pooling layer, 3 residual modules C1, 4 residual modules C2, 6 residual modules C3, 3 residual modules C4, an average pooling layer, a fully connected layer, and a Softmax normalization layer.
Action and Effect of the Invention
According to the fundus image left and right eye identification method based on deep active learning of the invention, the initial model is trained on labeled main-view fundus images, so from the start it recognizes main-view images accurately. An active learning method then applies the trained main-view recognition model to unlabeled fundus images, so that the most valuable and representative non-main-view fundus images can be conveniently and accurately collected. After manual labeling, these non-main-view images serve as an added training set on which the main-view recognition model is trained again, which effectively improves its recognition of difficult samples such as non-main-view fundus images. The result is a fundus image recognition model that recognizes both main-view and non-main-view fundus images with high accuracy. Through this model, fundus images captured with different instruments, illumination, techniques and angles can be identified efficiently, with strong generalization and high precision.
Drawings
FIG. 1 is a schematic view of a main-view fundus image and a non-main-view fundus image in an embodiment of the present invention;
FIG. 2 is a flowchart of the fundus image left and right eye identification method based on deep active learning according to an embodiment of the present invention;
FIG. 3 is a flowchart of a model training procedure of a fundus image left and right eye identification model in an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a left and right eye identification model of a fundus image according to an embodiment of the present invention; and
fig. 5 is a residual block diagram of the left and right eye identification models of the fundus image according to the embodiment of the present invention.
Detailed Description
In order to make the technical means, creative features, objectives and effects of the present invention easy to understand, the fundus image left and right eye identification method based on deep active learning is described below with reference to the embodiments and the accompanying drawings.
< example >
In this embodiment, the fundus image left and right eye identification method based on deep active learning is executed by a computer; the computer requires a graphics card for GPU acceleration to complete the model training process, and the trained model and the image identification procedure are stored on the computer in the form of executable code.
In this embodiment, the data set used was obtained by two specialist doctors labeling fundus images as left or right eye, and comprises a training set and a verification set. The training set comprises 12,000 left-eye images and 14,000 right-eye images, 26,000 training images in total, for initial model training; the verification set comprises 80 main-view left-eye images, 80 main-view right-eye images, 68 non-main-view left-eye images and 72 non-main-view right-eye images, 300 verification images in total, for verifying the model. In addition, this embodiment also employs an unlabeled fundus image data set. Examples of a main-view fundus image and a non-main-view fundus image are shown in fig. 1.
Fig. 2 is a flowchart of the fundus image left and right eye identification method based on deep active learning in the embodiment of the present invention.

As shown in fig. 2, the fundus image left and right eye identification method 100 based on deep active learning includes the following steps:
and step S1, preprocessing the image to be detected to obtain a preprocessed image.
In this embodiment, the preprocessing includes resizing, center cropping, and normalization operations.
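The three operations can be sketched as follows. This is a minimal NumPy sketch, assuming nearest-neighbour resizing and placeholder normalization constants; the patent does not specify its interpolation method or mean/std values.

```python
import numpy as np

def preprocess(img, out_size=512, mean=0.5, std=0.25):
    """Resize, center-crop, and normalize a fundus image (H x W x 3, uint8).

    mean/std are illustrative placeholders, not the patent's constants.
    """
    h, w, _ = img.shape
    # Resize the shorter side to out_size (nearest-neighbour), keeping aspect ratio.
    scale = out_size / min(h, w)
    rows = (np.arange(round(h * scale)) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(round(w * scale)) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    # Center-crop to out_size x out_size.
    top = (resized.shape[0] - out_size) // 2
    left = (resized.shape[1] - out_size) // 2
    crop = resized[top:top + out_size, left:left + out_size]
    # Normalize to roughly zero mean, unit variance.
    return (crop.astype(np.float32) / 255.0 - mean) / std
```

The output matches the 512 × 512 × 3 normalized input described for the model's input layer below.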
Step S2, inputting the preprocessed image obtained in step S1 into a fundus image left and right eye identification model trained in advance, to obtain the left and right eye classification result of the image to be identified.
in this embodiment, the left and right eye identification models of the fundus images are obtained by training in the model training step in advance and stored in the computer, and the computer can call the models through the executable code and simultaneously process a plurality of fundus images in batch, so as to obtain and output the left and right eye classification results of each fundus image.
In this embodiment, the output of the fundus image left and right eye identification model is two-dimensional, representing the probability scores of the image being identified as the left eye and the right eye respectively; if the left-eye score is greater than the right-eye score, the classification result of the fundus image is determined to be the left eye, and otherwise the right eye.
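Turning the two-dimensional output into a classification result is a straightforward comparison of the two scores; a small sketch (the left/right index ordering is an assumption for illustration):

```python
import numpy as np

def classify(logits):
    """Convert the model's 2-dim output into ('left'|'right', score).

    Assumes index 0 = left eye and index 1 = right eye; this ordering
    is illustrative, not stated in the patent.
    """
    z = np.exp(logits - np.max(logits))  # numerically stable softmax
    probs = z / z.sum()
    return ("left", probs[0]) if probs[0] > probs[1] else ("right", probs[1])
```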
Fig. 3 is a flowchart of a model training procedure of the fundus image left and right eye identification model in the embodiment of the present invention.
As shown in fig. 3, the model training step specifically includes the following steps:
at step T1, an initial fundus image recognition model is constructed.
Each layer of the initial fundus image left and right eye identification model of this embodiment has its own model parameters, which are randomly initialized when the model is constructed.
In this embodiment, the initial fundus image model is constructed with the residual network ResNet34 as the backbone, using the existing deep learning framework PyTorch. The model uses residual modules to deepen the network and can express high-level features in the image, so it performs excellently on image classification tasks. The model of this embodiment is composed of residual modules (convolutional layers), pooling layers, activation layers, and batch normalization layers; the specific structure is described in detail later.
And step T2, inputting the marked main-view fundus images as training sets into the constructed initial fundus image recognition model for model training and obtaining the main-view fundus image recognition model.
In step T2 of this embodiment, the images in the training set enter the network model in batches for training; the batch size is 32, and training runs for 20 epochs. Step T2 includes the following sub-steps:
step T2-1, sequentially inputting each training image in the training set into the constructed left and right eye identification models of the fundus images and carrying out one iteration;
step T2-2, after iteration, calculating loss errors by using the model parameters of the last layer, and then reversely propagating the calculated loss errors so as to update the model parameters;
and step T2-3, repeating steps T2-1 to T2-2 until the training completion condition is reached, obtaining the trained initial fundus image left and right eye identification model.
In the model training process, after each iteration (i.e., after a batch of training images passes through the model), the loss error (cross-entropy loss) is calculated from the output of the last layer and then back-propagated; parameter optimization uses stochastic gradient descent with a learning rate of 0.001, so that the model parameters are updated. The training completion condition is the same as for a conventional convolutional neural network model, namely training is complete once the parameters of every layer have converged.
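The per-iteration logic of steps T2-1 and T2-2, i.e. forward pass, cross-entropy loss at the last layer, back-propagation, and an SGD update with learning rate 0.001, can be illustrated with a bare softmax classifier standing in for the full network (an illustrative simplification, not the patent's model):

```python
import numpy as np

def sgd_step(W, x, y, lr=0.001):
    """One training iteration: forward pass, cross-entropy loss,
    analytic backward pass, SGD update with learning rate 0.001.

    W is the weight matrix of a bare softmax classifier standing in
    for the full network.  Returns (updated W, loss).
    """
    logits = W @ x                               # forward pass
    z = np.exp(logits - logits.max())
    p = z / z.sum()                              # softmax probabilities
    loss = -np.log(p[y])                         # cross-entropy for true class y
    grad = np.outer(p - np.eye(len(p))[y], x)    # dL/dW = (p - onehot(y)) x^T
    return W - lr * grad, loss                   # SGD update
```

Repeating the step on the same example decreases the loss, which is the convergence behaviour the training completion condition refers to.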
In this embodiment, after training in step T2 is complete, the obtained main-view fundus image recognition model is verified on the verification set. The experimental results show that for the 160 main-view fundus images, its recognition accuracy reaches 100%, i.e. the initial model recognizes main-view fundus images perfectly; however, for the 140 non-main-view fundus images, its recognition accuracy is only 48.57%.
In step T3, a non-main-view fundus image is selected as a difficult sample from the plurality of unlabeled fundus images by the main-view fundus image recognition model using the active learning method.
In this embodiment, in step T3, left and right eye recognition is performed on another fundus image data set containing 20,000 unlabeled fundus images using the main-view fundus image recognition model obtained in step T2; a left and right eye probability score is obtained for each image, and the corresponding non-main-view fundus images are selected from these scores.
For example, considering only the left-eye probability score, with a score of 1 indicating the left eye and 0 indicating the right eye, fundus images with a left-eye score between 0.4 and 0.6, i.e. images for which the initial model's recognition confidence is low, are selected; these are the non-main-view fundus images. In this embodiment 140 such fundus images were selected, and manual examination shows that they are mainly non-main-view images in which the optic disc lies at the center of the image and the position of the macula is hard to identify.
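The selection rule described above reduces to simple uncertainty sampling on the left-eye score; a minimal sketch:

```python
import numpy as np

def select_hard_samples(left_scores, low=0.4, high=0.6):
    """Uncertainty sampling as in step T3: keep the images whose left-eye
    probability lies in [low, high], i.e. where the main-view model is
    least confident.  Returns the indices of the selected images."""
    left_scores = np.asarray(left_scores)
    return np.where((left_scores >= low) & (left_scores <= high))[0]
```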
And step T4, manually labeling the difficult samples to obtain an added training set, and training the main-view fundus image recognition model on the added training set to obtain the fundus image left and right eye identification model.
In this embodiment, each layer of the main-view fundus image recognition model trained in step T4 likewise has its own model parameters, namely the per-layer parameters obtained when training completed in step T2. The images in the added training set enter the network model in batches for training; the batch size is 32, and training runs for 20 epochs. Step T4 includes the following sub-steps:
step T4-1, taking the main-view fundus image recognition model trained in step T2 as the starting model;
step T4-2, sequentially acquiring a batch of added training images in the training set and inputting the training images into the left and right eye identification models of the main-view fundus image for one iteration;
step T4-3, after iteration, calculating loss errors by using the model parameters of the last layer, and then reversely propagating the calculated loss errors so as to update the model parameters;
step T4-4, repeating the step T4-2 to the step T4-3 until the training completion condition is reached, and obtaining a trained eye fundus image left and right eye identification model;
and T4-5, testing the model effect of the trained eye fundus image left and right eye identification model by using the verification set, obtaining the final eye fundus image left and right eye identification model if the classification accuracy reaches the standard, and repeating the steps T3 to T4 until the classification accuracy reaches the standard.
In step T4-5 of this embodiment, the classification accuracy criterion is that the ratio of correctly classified fundus images to the total number is 95% or more. If the classification accuracy does not reach this standard, more non-main-view fundus images are screened again through step T3 and training is repeated through steps T4-1 to T4-4, until the classification accuracy finally reaches the standard.
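The stopping criterion of step T4-5 is a simple ratio test; a minimal sketch:

```python
def accuracy_reached(n_correct, n_total, threshold=0.95):
    """Step T4-5's stopping criterion: the ratio of correctly classified
    fundus images to the total number must be at least 95%.
    If False, steps T3 to T4-4 are repeated to add more hard samples."""
    return n_correct / n_total >= threshold
```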
in the model training process, after each iteration (namely, the added training set image passes through the model), the Loss error (cross entropy Loss) is respectively calculated by the model parameters of the last layer, and then the calculated Loss error (cross entropy Loss) is propagated reversely, so that the model parameters are updated. In addition, the training completion conditions of the model training are the same as those of the conventional convolutional neural network model, namely, the training is completed after the model parameters of each layer are converged.
After this iterative training, with loss calculation and back-propagation in each iteration, the trained fundus image left and right eye identification model is obtained.
The finally obtained fundus image left and right eye identification model is verified on the verification set. The experimental results show that for the 160 main-view fundus images, the recognition accuracy after active learning training reaches 100%, i.e. the model recognizes main-view fundus images perfectly; for the 140 non-main-view fundus images, the recognition accuracy reaches 95.71%, an improvement of 47.14 percentage points over the initial model. That is, the active learning training strategy enables the fundus image left and right eye identification model to recognize non-main-view fundus images with much higher accuracy. The final model recognizes both main-view and non-main-view fundus images with high accuracy, and this embodiment performs the fundus image left and right eye identification task with it.
In this embodiment, the fundus image left and right eye identification model learns the bending direction of the central retinal artery in the fundus image and identifies left and right eyes based on this feature.
Fig. 4 is a schematic structural diagram of a left and right eye identification model of a fundus image in an embodiment of the present invention.
As shown in fig. 4, the detailed structure of the fundus image left and right eye identification model comprises, arranged in sequence, an input layer I, a convolutional layer C0, a maximum pooling layer, 3 residual modules C1, 4 residual modules C2, 6 residual modules C3, 3 residual modules C4, an average pooling layer, a fully connected layer, and a Softmax normalization layer. The specific structure is as follows:
(1) the input layer I inputs the original fundus image and, through image preprocessing operations such as resizing, center cropping and normalization, obtains a normalized image of size 512 × 512 × 3;
(2) convolutional layer C0, convolution kernel size 7 × 7, sliding step size 2, output 256 × 256 × 64;
(3) the maximum pooling layer, pooling size 3 × 3, sliding step 2, output 128 × 128 × 64;
(4) a plurality of residual modules including 3 residual modules C1 (convolution kernel size 3 × 3, sliding step size 2, output 128 × 128 × 64), 4 residual modules C2 (convolution kernel size 3 × 3, sliding step size 2, output 64 × 64 × 128), 6 residual modules C3 (convolution kernel size 3 × 3, sliding step size 2, output 32 × 32 × 256), 3 residual modules C4 (convolution kernel size 3 × 3, sliding step size 2, output 16 × 16 × 512);
(5) the average pooling layer, which averages over the spatial dimensions; output 1 × 1 × 512;
(6) the fully connected layer, which performs a matrix transformation; output 1 × 2;
(7) the Softmax normalization layer, which uses the Softmax function to normalize the output values to the range [0, 1]; these can be viewed as the probability scores of the image being identified as the left eye and the right eye.
In the residual module of the fundus image left and right eye identification model, as shown in fig. 5, batch normalization is performed after each convolutional layer.
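The residual module of fig. 5 can be sketched in PyTorch as follows; stride and channel changes at the stage transitions are omitted for brevity, so this is an illustrative sketch rather than the patent's exact module:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual module as in fig. 5: two 3x3 convolutions, each
    followed by batch normalization, with an identity skip connection.
    Downsampling at stage transitions is omitted for brevity."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)  # batch norm after each conv
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)            # identity skip connection
```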
Action and Effect of the Embodiment
According to the fundus image left and right eye identification method based on deep active learning provided by this embodiment, the initial model is trained on labeled main-view fundus images, so from the start it recognizes main-view images accurately. An active learning method then applies the trained main-view recognition model to unlabeled fundus images, so that the most valuable and representative non-main-view fundus images can be conveniently and accurately collected. After manual labeling, these non-main-view images serve as an added training set on which the main-view recognition model is trained again, which effectively improves its recognition of difficult samples such as non-main-view fundus images. The result is a fundus image recognition model that recognizes both main-view and non-main-view fundus images with high accuracy. Through this model, fundus images captured with different instruments, illumination, techniques and angles can be identified efficiently, with strong generalization and high precision.
In addition, since the fundus image left and right eye identification model of this embodiment is based on the deep convolutional neural network ResNet34, it can express high-level features of the image, which benefits the image classification task. At the same time, because the model is based only on ResNet34, its structure is simple: high-precision recognition of main-view and non-main-view fundus images is achieved without model ensembling, multi-task training, metric learning or similar methods. The model of this embodiment is therefore quick and convenient to build, can be trained without excessive training data, completes training quickly, and consumes few computing resources.
The above-described embodiments are merely illustrative of specific embodiments of the present invention, and the present invention is not limited to the description of the above-described embodiments.

Claims (6)

1. A fundus image left and right eye identification method based on deep active learning, used for accurately identifying left and right eyes in main-view fundus images and non-main-view fundus images, characterized by comprising the following steps:
step S1, preprocessing the image to be identified to obtain a preprocessed image;
step S2, inputting the preprocessed image into a fundus image left and right eye identification model, which performs left and right eye identification on at least main-view and non-main-view fundus images, to obtain the left and right eye classification result of the image to be identified,
the left eye identification model and the right eye identification model of the fundus image are obtained by training through the following method:
step T1, constructing an initial fundus image recognition model;
step T2, inputting labeled main-view fundus images as a training set into the constructed initial fundus image recognition model for model training, thereby obtaining a main-view fundus image recognition model;
step T3, using an active learning method together with the main-view fundus image recognition model to select non-main-view fundus images from a plurality of unlabeled fundus images as difficult samples;
step T4, using the manually labeled difficult samples as an enlarged training set to further train the main-view fundus image recognition model, thereby obtaining the fundus image left-right eye identification model.
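The selection of difficult samples in step T3 can be illustrated with a small uncertainty-sampling sketch. The function name, the `(image_id, left_prob)` input format, and the margin criterion are illustrative assumptions, not details taken from the patent text; they show one common way an active learning method picks the unlabeled images the current model is least sure about.

```python
# Hypothetical sketch of step T3: uncertainty-based active learning.
# The helper name and input format are assumptions for illustration.

def select_difficult_samples(predictions, k):
    """Pick the k unlabeled images the model is least sure about.

    predictions: list of (image_id, left_prob) pairs, where left_prob
    is the model's probability that the image is a left eye. Images
    whose probability is closest to 0.5 (smallest margin between the
    two classes) are treated as the most informative hard samples,
    e.g. non-main-view fundus images the main-view model cannot place.
    """
    # margin = |P(left) - P(right)| = |2 * left_prob - 1|
    ranked = sorted(predictions, key=lambda p: abs(2 * p[1] - 1))
    return [image_id for image_id, _ in ranked[:k]]

# Toy example: img_b and img_d carry the most ambiguous predictions.
preds = [("img_a", 0.98), ("img_b", 0.52), ("img_c", 0.05), ("img_d", 0.45)]
hard = select_difficult_samples(preds, 2)
print(hard)  # → ['img_b', 'img_d']
```

The selected images would then be sent for manual labeling, as in step T4.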
2. The fundus image left-right eye identification method based on deep active learning according to claim 1, wherein:
wherein the training in step T2 comprises the following steps:
step T2-1, sequentially inputting each training image in the training set into the constructed fundus image left-right eye identification model and performing one iteration;
step T2-2, calculating the loss error from the output of the last layer and back-propagating it to update the model parameters;
step T2-3, repeating steps T2-1 to T2-2 until the training completion condition is met, thereby obtaining the trained initial fundus image left-right eye identification model.
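The loop of steps T2-1 to T2-3 can be made concrete with a minimal sketch: repeated forward passes, a cross-entropy loss computed from the last layer's output, and a gradient update of the parameters. A one-weight logistic "network" stands in for ResNet34 here purely as an assumption to keep the example self-contained; the control flow is what matches the claim.

```python
import math

# Minimal illustration of steps T2-1 to T2-3. The tiny logistic model
# is a stand-in assumption, not the patent's ResNet34 network.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, lr=0.5, epochs=50):
    w, b = 0.0, 0.0
    for _ in range(epochs):                      # T2-3: repeat until done
        for x, y in samples:                     # T2-1: one image at a time
            p = sigmoid(w * x + b)               # forward pass (last layer)
            grad = p - y                         # T2-2: cross-entropy gradient
            w -= lr * grad * x                   # back-propagate, update params
            b -= lr * grad
    return w, b

samples = [(-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1)]
w, b = train(samples)
# After training, the model separates the two toy classes.
print(all((sigmoid(w * x + b) > 0.5) == bool(y) for x, y in samples))  # → True
```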
3. The fundus image left-right eye identification method based on deep active learning according to claim 1, wherein:
wherein step T4 comprises the following sub-steps:
step T4-1, taking the main-view fundus image recognition model trained in step T2;
step T4-2, sequentially taking a batch of training images from the enlarged training set and inputting them into the main-view fundus image recognition model for one iteration;
step T4-3, calculating the loss error from the output of the last layer and back-propagating it to update the model parameters;
step T4-4, repeating steps T4-2 to T4-3 until the training completion condition is met, thereby obtaining a trained fundus image left-right eye identification model;
step T4-5, testing the trained fundus image left-right eye identification model on a validation set: if the classification accuracy reaches the required standard, the final fundus image left-right eye identification model is obtained; otherwise, steps T3 to T4 are repeated until the classification accuracy reaches the standard.
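The control flow of step T4-5 amounts to an outer loop: retrain on newly labeled hard samples, validate, and stop once the accuracy standard is reached. The sketch below is an assumption-laden illustration in which the per-round validation accuracies are supplied directly instead of coming from a real model.

```python
# Sketch of step T4-5's control flow. The accuracy list simulates the
# validation result after each select-label-retrain round (T3-T4);
# the function name and max_rounds cap are illustrative assumptions.

def train_until_accurate(accuracy_per_round, threshold=0.95, max_rounds=10):
    """Repeat steps T3-T4 until validation accuracy >= threshold."""
    for round_idx, acc in enumerate(accuracy_per_round[:max_rounds], start=1):
        # ... select hard samples (T3), label them, retrain (T4) ...
        if acc >= threshold:          # T4-5: test on the validation set
            return round_idx, acc     # final model reached the standard
    raise RuntimeError("accuracy standard not reached")

# Simulated validation accuracies after each retraining round.
rounds, final_acc = train_until_accurate([0.88, 0.93, 0.96])
print(rounds, final_acc)  # → 3 0.96
```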
4. The fundus image left-right eye identification method based on deep active learning according to claim 3, wherein:
wherein the classification accuracy is judged to reach the standard when the ratio of the number of correctly classified fundus images to the total number of fundus images is 95% or more.
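Written out as code, claim 4's standard is a single ratio check (the function name is an assumption for illustration):

```python
# Claim 4's 95% standard: correctly classified images / total images.

def meets_standard(num_correct, num_total, threshold=0.95):
    return num_correct / num_total >= threshold

print(meets_standard(96, 100))  # → True
print(meets_standard(94, 100))  # → False
```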
5. The fundus image left-right eye identification method based on deep active learning according to claim 1, wherein:
the output of the fundus image left-right eye identification model is a pair of probability scores representing that the image is identified as a left eye and as a right eye, respectively; if the left-eye score is greater than the right-eye score, the classification result is left eye, and if the right-eye score is greater than the left-eye score, the classification result is right eye.
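The decision rule of claim 5 can be sketched as follows. Producing the two probability scores via a softmax over two raw logits is an assumption (consistent with the Softmax layer of claim 6); the comparison of the scores is what the claim specifies.

```python
import math

# Claim 5's decision rule: two probability scores, larger one wins.
# The softmax over two logits is an illustrative assumption.

def classify(left_logit, right_logit):
    exp_l, exp_r = math.exp(left_logit), math.exp(right_logit)
    total = exp_l + exp_r
    left_score, right_score = exp_l / total, exp_r / total
    return "left eye" if left_score > right_score else "right eye"

print(classify(2.1, -0.3))  # → left eye
print(classify(-1.0, 0.5))  # → right eye
```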
6. The fundus image left-right eye identification method based on deep active learning according to claim 1, wherein:
the fundus image left-right eye identification model comprises, arranged in sequence, an input layer, a convolutional layer, a max-pooling layer, 3 residual blocks C1, 4 residual blocks C2, 6 residual blocks C3, 3 residual blocks C4, an average pooling layer, a fully connected layer, and a Softmax normalization layer.
CN201911060368.6A 2019-11-01 2019-11-01 Eye fundus image left and right eye identification method based on depth active learning Pending CN110826470A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911060368.6A CN110826470A (en) 2019-11-01 2019-11-01 Eye fundus image left and right eye identification method based on depth active learning


Publications (1)

Publication Number Publication Date
CN110826470A true CN110826470A (en) 2020-02-21

Family

ID=69551949

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911060368.6A Pending CN110826470A (en) 2019-11-01 2019-11-01 Eye fundus image left and right eye identification method based on depth active learning

Country Status (1)

Country Link
CN (1) CN110826470A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100060720A1 (en) * 2008-09-09 2010-03-11 Yasutaka Hirasawa Apparatus, method, and computer program for analyzing image data
CN105608450A (en) * 2016-03-01 2016-05-25 天津中科智能识别产业技术研究院有限公司 Heterogeneous face identification method based on deep convolutional neural network
US20170112372A1 (en) * 2015-10-23 2017-04-27 International Business Machines Corporation Automatically detecting eye type in retinal fundus images
KR20170090764A (en) * 2016-01-29 2017-08-08 한국전자통신연구원 Apparatus for classifying left eye and right eye in the image
CN108734102A (en) * 2018-04-18 2018-11-02 佛山市顺德区中山大学研究院 A kind of right and left eyes recognizer based on deep learning
CN109635838A (en) * 2018-11-12 2019-04-16 平安科技(深圳)有限公司 Face samples pictures mask method, device, computer equipment and storage medium
CN109635669A (en) * 2018-11-19 2019-04-16 北京致远慧图科技有限公司 Image classification method, the training method of device and disaggregated model, device
US20190221313A1 (en) * 2017-08-25 2019-07-18 Medi Whale Inc. Diagnosis assistance system and control method thereof
CN110223294A (en) * 2019-06-21 2019-09-10 北京万里红科技股份有限公司 A kind of human body left/right eye image decision method based on multilayer convolutional neural networks
CN110348428A (en) * 2017-11-01 2019-10-18 腾讯科技(深圳)有限公司 Eye fundus image classification method, device and computer readable storage medium
CN110400288A (en) * 2019-06-18 2019-11-01 中南民族大学 A kind of sugar of fusion eyes feature nets sick recognition methods and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhong Zhiquan: "Left and right eye recognition based on convolutional neural networks", pages 1667-1673 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101438A (en) * 2020-09-08 2020-12-18 南方科技大学 Left and right eye classification method, device, server and storage medium
CN112101438B (en) * 2020-09-08 2024-04-16 南方科技大学 Left-right eye classification method, device, server and storage medium
WO2022180227A1 (en) * 2021-02-26 2022-09-01 Carl Zeiss Meditec, Inc. Semi-supervised fundus image quality assessment method using ir tracking
CN115170503A (en) * 2022-07-01 2022-10-11 上海市第一人民医院 Eye fundus image visual field classification method and device based on decision rule and deep neural network
CN115170503B (en) * 2022-07-01 2023-12-19 上海市第一人民医院 Fundus image visual field classification method and device based on decision rule and deep neural network

Similar Documents

Publication Publication Date Title
CN110197493B (en) Fundus image blood vessel segmentation method
Shan et al. A deep learning method for microaneurysm detection in fundus images
Li et al. Deep learning-based automated detection of glaucomatous optic neuropathy on color fundus photographs
CN110826470A (en) Eye fundus image left and right eye identification method based on depth active learning
CN110807762B (en) Intelligent retinal blood vessel image segmentation method based on GAN
CN113177916B (en) Slight hypertension fundus identification model based on few-sample learning method
CN112101424B (en) Method, device and equipment for generating retinopathy identification model
CN102567734B (en) Specific value based retina thin blood vessel segmentation method
KR20230104083A (en) Diagnostic auxiliary image providing device based on eye image
CN111833334A (en) Fundus image feature processing and analyzing method based on twin network architecture
CN114821189A (en) Focus image classification and identification method based on fundus images
CN113012163A (en) Retina blood vessel segmentation method, equipment and storage medium based on multi-scale attention network
CN115641340A (en) Retina blood vessel image segmentation method based on multi-scale attention gating network
CN112869697A (en) Judgment method for simultaneously identifying stage and pathological change characteristics of diabetic retinopathy
CN111160431A (en) Method and device for identifying keratoconus based on multi-dimensional feature fusion
CN113610842B (en) OCT image retina detachment and splitting automatic segmentation method based on CAS-Net
CN111047590A (en) Hypertension classification method and device based on fundus images
CN113222975B (en) High-precision retinal vessel segmentation method based on improved U-net
Sharma et al. Harnessing the Strength of ResNet50 to Improve the Ocular Disease Recognition
CN110610480A (en) MCASPP neural network eyeground image optic cup optic disc segmentation model based on Attention mechanism
KR20200114837A (en) Method and apparatus for generating feature data for glaucoma diagnosis, method and apparatus for diagnosing glaucoma
CN111369546B (en) Cervical lymph node image classification and identification device and method
Kabir et al. Multi-classification based Alzheimer's disease detection with comparative analysis from brain MRI scans using deep learning
CN112741651A (en) Method and system for processing ultrasonic image of endoscope
CN116758038A (en) Infant retina disease information identification method and system based on training network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination