CN110276333B - Fundus identity recognition model training method, fundus identity recognition method and device - Google Patents


Info

Publication number
CN110276333B
CN110276333B (application CN201910578321.2A)
Authority
CN
China
Prior art keywords
fundus
image
images
feature image
identity recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910578321.2A
Other languages
Chinese (zh)
Other versions
CN110276333A (en)
Inventor
熊健皓 (Xiong Jianhao)
和宗尧 (He Zongyao)
赵昕 (Zhao Xin)
和超 (He Chao)
张大磊 (Zhang Dalei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eaglevision Medical Technology Co Ltd
Original Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eaglevision Medical Technology Co Ltd filed Critical Shanghai Eaglevision Medical Technology Co Ltd
Priority to CN201910578321.2A priority Critical patent/CN110276333B/en
Publication of CN110276333A publication Critical patent/CN110276333A/en
Application granted granted Critical
Publication of CN110276333B publication Critical patent/CN110276333B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Abstract

The invention provides a fundus identity recognition model training method, a fundus identity recognition method and equipment. The fundus identity recognition model training method comprises the following steps: performing feature extraction on fundus images to obtain training data, wherein the training data comprises a first fundus feature image, a second fundus feature image and a third fundus feature image; the second fundus feature image and the first fundus feature image are fundus images of the same eye, while the third fundus feature image and the first fundus feature image are fundus images of different eyes; identifying the first fundus feature image, the second fundus feature image and the third fundus feature image by using a fundus identity recognition model to obtain a loss value; and adjusting parameters of the fundus identity recognition model according to the loss value.

Description

Fundus identity recognition model training method, fundus identity recognition method and device
Technical Field
The invention relates to the technical field of medical image recognition, in particular to a fundus identity recognition model training method, a fundus identity recognition method and fundus identity recognition equipment.
Background
At present, fundus images are generally captured with dedicated photographic equipment. By observing a fundus image, a doctor can judge whether the examinee may suffer from a certain fundus disease, and then recommend whether further examination or medical treatment is needed.
The condition of a fundus disease may develop continuously. During a patient's follow-up visits, the doctor needs to compare the fundus images taken at previous visits in order to track the disease and give better treatment advice, which means the doctor must pick out, from many fundus images, those that come from the same eye. Although a doctor with many years of experience can select the images belonging to the same eye by experience, fundus photography involves many uncertain influencing factors, such as the brightness of the image, image rotation and translation. These factors make identifying fundus images very difficult and can prevent the doctor from accurately distinguishing images from the same eye, so that accurate tracking of fundus diseases is hard to achieve.
Disclosure of Invention
In view of the above, the present invention provides a fundus identity recognition model training method, including: acquiring training data, wherein the training data comprises a first fundus feature image, a second fundus feature image and a third fundus feature image obtained by feature extraction from fundus images; the second fundus feature image and the first fundus feature image are fundus images of the same eye, while the third fundus feature image and the first fundus feature image are fundus images of different eyes; identifying the first fundus feature image, the second fundus feature image and the third fundus feature image by using a fundus identity recognition model to obtain a loss value; and adjusting parameters of the fundus identity recognition model according to the loss value.
Optionally, the fundus feature comprises at least one of the optic disc, macula, blood vessels and retina.
Optionally, the fundus feature comprises: the fundus features include abstract features related to vessel morphology.
Optionally, performing feature extraction on the fundus image to obtain the training data includes: extracting the fundus feature from the fundus image by using a segmentation neural network to obtain a probability map containing the confidence of the fundus feature, or a binary image.
Optionally, the training data comprises fundus feature images of n eyes, wherein each eye corresponds to m fundus images; wherein n and m are integers greater than 1.
Optionally, inputting the first fundus feature image, the second fundus feature image, and the third fundus feature image into the fundus identity recognition model to obtain the loss value includes: calculating a first distance between the second fundus feature image and the first fundus feature image; calculating a second distance between the third fundus feature image and the first fundus feature image; and obtaining a loss value according to the first distance and the second distance.
Optionally, adjusting the parameters of the fundus identification model using the loss values includes: feeding back the loss value to the fundus identity recognition model; and adjusting the parameter according to the loss value to reduce the first distance and increase the second distance until the first distance is smaller than the second distance by a preset value.
Optionally, before fundus feature extraction is performed on the samples with a computer vision algorithm or a machine learning algorithm, the method includes: cropping the training data and/or performing data enhancement on the training data.
According to a second aspect, an embodiment of the present invention provides a fundus identity recognition method, including: acquiring at least two fundus images to be identified; identifying the at least two fundus images to be identified by using a fundus identity recognition model obtained by the fundus identity recognition model training method of any implementation of the first aspect, to obtain the similarity between the fundus images to be identified; and determining, according to the similarity, whether the fundus images to be identified belong to the same eye.
Optionally, determining according to the similarity whether the fundus images to be identified belong to the same eye includes: judging whether the similarity is greater than a preset threshold value, the preset threshold value being a distance-based threshold between fundus images to be identified; when the similarity is greater than the preset threshold value, confirming that the fundus images to be identified belong to the same eye; and when the similarity is not greater than the preset threshold value, confirming that the fundus images to be identified belong to different eyes.
According to a third aspect, an embodiment of the present invention provides a fundus identity recognition apparatus, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to cause it to perform the fundus identity recognition model training method of any implementation of the first aspect and/or the fundus identity recognition method of the second aspect.
Feature extraction is performed on fundus images to obtain training data. One of the fundus feature images is selected as the first fundus feature image and serves as the reference sample; a second fundus feature image from the same eye as the reference sample serves as the positive sample, differing from the first fundus feature image because of shooting conditions; and a third fundus feature image from a different eye serves as the negative sample. The fundus identity recognition model identifies the three samples and a loss value is calculated, and the model parameters are adjusted by back-propagating the loss value so as to optimize the model. The training process thus takes full account of the many uncertain influencing factors in fundus photography and trains by comparison against fundus images of different eyes, so that these uncertainties no longer hinder recognition. In addition, by extracting fundus features, interfering image information irrelevant to identity recognition is largely eliminated, which markedly improves recognition performance. Images from the same eye can therefore be distinguished accurately, providing a reliable basis for tracking the condition of patients with eye diseases.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a fundus identity recognition model training method according to an embodiment of the present invention;
FIG. 2 is a fundus image in an embodiment of the present invention;
FIG. 3 is an image block in the fundus image shown in FIG. 2;
FIG. 4 is a segmentation result of the segmentation model for the image block shown in FIG. 3;
FIG. 5 is a fundus blood vessel image segmented and stitched for the image shown in FIG. 2;
FIG. 6 is a flowchart of a fundus identity recognition method in an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a fundus identity recognition model training device in an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention provides a fundus identity recognition model training method, which can be used to train a neural network model for fundus identity recognition and can be executed by an electronic device such as a computer or a server. As shown in fig. 1, the method comprises the following steps:
s11, training data are obtained, wherein each training data respectively comprises a first fundus feature image, a second fundus feature image and a third fundus feature image, and the three fundus feature images are feature images obtained by feature extraction based on original fundus images.
Specifically, after the fundus image is acquired, the fundus features may be extracted using a computer vision algorithm or a machine learning algorithm. In a specific embodiment, a segmentation neural network may be used to extract the fundus features from the fundus image, yielding a probability map containing the confidence of the fundus features, or a binary image. As shown in fig. 2, the fundus image may be divided into a number of image blocks whose size is set according to the size of the fundus image; in most cases, the divided image blocks should be significantly smaller than the whole fundus image. For example, with a fundus image of 1000 × 1000 pixels, the divided image blocks may be 100 × 100 pixels.
A preset segmentation model is used to segment the blood vessel image in each image block, producing segmented image blocks. The segmentation model may be a neural network such as FCN, SegNet or DeepLab, and should be trained with sample data before use so that it has sufficient semantic segmentation capability; it may be obtained by training on sample image blocks with manually annotated blood vessel regions.
The segmentation model extracts features of the blood vessel image in each block and forms a segmented image block in which the blood vessels are highlighted. Highlighting can be done in various ways, for example by assigning vessel pixels values clearly distinct from the background.
Inputting the image block shown in fig. 3 into the segmentation model yields the segmented image block shown in fig. 4. In this embodiment the segmentation model outputs a binary image that uses two pixel values to express the background and the blood vessels respectively, visually highlighting the vessel positions; the binary image is also better suited to subsequent measurement of the vessel image. The segmented image blocks are stitched into a fundus blood vessel image such as the one shown in fig. 5, which clearly separates the blood vessels from the background, completing the extraction of the blood vessel features. As an alternative embodiment, the same method may be used to extract other features such as the optic disc, macula and retina. By extracting fundus features, interfering image information irrelevant to fundus identity recognition can be largely eliminated, markedly improving recognition performance.
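As a rough illustration of the block-wise segment-and-stitch procedure above, the following Python sketch splits an image into tiles, applies a per-tile segmentation function, and stitches the binary results back together. The `segment_block` callable stands in for the trained segmentation network (FCN/SegNet/DeepLab), which the patent assumes but does not specify; the thresholding used in the usage example is only a placeholder.

```python
import numpy as np

def split_into_blocks(image, block):
    """Split an H×W fundus image into non-overlapping block×block tiles."""
    h, w = image.shape[:2]
    return [(r, c, image[r:r + block, c:c + block])
            for r in range(0, h, block)
            for c in range(0, w, block)]

def stitch_blocks(blocks, shape):
    """Reassemble segmented tiles into a full-size vessel map."""
    out = np.zeros(shape, dtype=np.uint8)
    for r, c, tile in blocks:
        out[r:r + tile.shape[0], c:c + tile.shape[1]] = tile
    return out

def segment_vessels(image, block, segment_block):
    """Run a per-block segmentation model and stitch the binary results."""
    segmented = [(r, c, segment_block(tile))
                 for r, c, tile in split_into_blocks(image, block)]
    return stitch_blocks(segmented, image.shape[:2])
```

For a real model, `segment_block` would wrap a forward pass through the trained network; here a simple intensity threshold can be passed in to exercise the plumbing.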
In an alternative embodiment, high-level indirect (abstract) features, such as vessel bifurcation positions and directions, vessel crossing positions and directions, or a vessel vector map, may also be used as fundus feature images. After the original fundus image is acquired, these indirect features may be extracted from it as training data.
Every fundus feature image in the training data is labeled with the eye it belongs to. Specifically, labeling may rely on the extracted fundus features, for example relatively conspicuous features such as the optic disc, macula, blood vessels and retina. In this embodiment the same eye may have several fundus images, which form training data belonging to that eye, and the angle, brightness and so on of each image may differ.
The second fundus feature image and the first fundus feature image come from different fundus images of the same eye; the third fundus feature image comes from a fundus image of a different eye than the first. In a specific embodiment, the training data may include fundus feature images of multiple eyes, with several images per eye. The first fundus feature image may be drawn at random from these images and used as the reference sample. The second fundus feature image is a fundus feature image selected from the same eye as the first and serves as the positive sample, i.e. an image of the same eye that differs from the reference sample. The third fundus feature image is selected from a different eye than the first and serves as the negative sample, i.e. a fundus feature image of a different eye from the reference sample.
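The sampling rule above can be sketched in a few lines; the dictionary layout (eye id → list of feature images) is an assumption for illustration, not a format the patent prescribes.

```python
import random

def sample_triplet(dataset, rng=random):
    """Sample a (reference, positive, negative) triplet.

    dataset: dict mapping an eye id to a list of at least two
    fundus feature images of that eye.
    """
    eye_a, eye_n = rng.sample(list(dataset), 2)       # two distinct eyes
    anchor, positive = rng.sample(dataset[eye_a], 2)  # same eye, different shots
    negative = rng.choice(dataset[eye_n])             # image of another eye
    return anchor, positive, negative
```

Passing a seeded `random.Random` instance makes the sampling reproducible during experiments.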
Each training sample is a fundus feature image, and the fundus images may be preprocessed before feature extraction so that the trained fundus identity recognition model recognizes identity more accurately. Specifically, each fundus image may first be cropped: since the captured original contains a large black background, the large areas of black pixels are removed and every fundus image is cropped to the smallest rectangle that can contain the entire circular fundus. In a specific embodiment, all fundus images may be cropped to a uniform format, for example a uniform size of 224 × 224 pixels, so that the input pictures during model training and recognition are uniform 224 × 224-pixel, RGB three-channel fundus images.
To improve the robustness of the fundus identity recognition model, preprocessing may further include data enhancement of the fundus images. Data enhancement may use rotation, translation, scaling and principal component analysis (PCA) color enhancement, and enhancing each fundus image with random enhancement parameters can generate multiple fundus images from it. The enhanced images may likewise use the uniform 224 × 224-pixel, RGB three-channel format. The fundus image may be cropped first and the cropped image enhanced, or enhanced first and then cropped.
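The cropping step described above — removing the black background by finding the smallest rectangle of non-black pixels — can be sketched as below. The `black_threshold` value is an assumed heuristic, not taken from the patent; resizing to 224 × 224 and the rotation/translation/PCA color augmentations would follow this step, e.g. with Pillow or torchvision, though the patent names no library.

```python
import numpy as np

def crop_to_fundus(image, black_threshold=10):
    """Crop an RGB fundus photograph to the smallest rectangle that
    contains the circular fundus, discarding the black background."""
    mask = image.max(axis=2) > black_threshold   # non-black pixels
    rows = np.flatnonzero(mask.any(axis=1))      # rows containing fundus
    cols = np.flatnonzero(mask.any(axis=0))      # columns containing fundus
    return image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```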
As a specific example, the training data may be fundus feature images of n eyes, where each eye corresponds to m fundus images; n and m are integers greater than 1, and the larger n is, the more accurate the recognition after training. The inventors' repeated experiments show that when m is greater than or equal to 8 the recognition accuracy of the trained model improves markedly, so in this embodiment m may be greater than or equal to 8.
S12, identifying the first fundus feature image, the second fundus feature image and the third fundus feature image by using the fundus identity recognition model to obtain a loss value. The fundus identity recognition model may be any neural network model. The first, second and third fundus feature images form a training triplet that is input into the model, and a loss value is calculated with a preset loss function: a first distance between the second fundus feature image and the first fundus feature image is calculated, a second distance between the third fundus feature image and the first fundus feature image is calculated, and the loss value is obtained from the first distance and the second distance.
Specifically, the loss value may be calculated with a triplet loss function. The first fundus feature image randomly selected from the fundus feature images of multiple eyes is called the Anchor; the second fundus feature image, which belongs to the same eye as the first, is called the Positive; and the third fundus feature image, which belongs to a different eye from the first, is called the Negative, forming an (Anchor, Positive, Negative) triplet. The final feature expressions obtained for the three samples of the triplet are respectively:
f(x_i^a), f(x_i^p) and f(x_i^n).

In this embodiment, the first distance between f(x_i^a) and f(x_i^p) and the second distance between f(x_i^a) and f(x_i^n) may be calculated; specifically, both may be measured as Euclidean distances.

The loss value is calculated from the first distance and the second distance, specifically with the following loss function:

L = Σ_i [ ‖f(x_i^a) − f(x_i^p)‖² − ‖f(x_i^a) − f(x_i^n)‖² + α ]₊

where α denotes a preset value, the minimum margin between the first distance and the second distance, and [·]₊ denotes the hinge: when the value inside the brackets is greater than zero it is taken as the loss, and when it is less than or equal to zero the loss is zero.
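The triplet loss described above can be sketched in a few lines of NumPy. The feature arrays are assumed to be the model's embedding outputs, and the default margin value is only an illustrative choice — the patent does not fix a number for α.

```python
import numpy as np

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    """Triplet loss over batches of embeddings.

    f_a, f_p, f_n: arrays of shape (batch, dim) holding the Anchor,
    Positive and Negative embeddings; alpha is the minimum margin.
    """
    d_pos = np.sum((f_a - f_p) ** 2, axis=1)  # squared first distance
    d_neg = np.sum((f_a - f_n) ** 2, axis=1)  # squared second distance
    # hinge: only triplets violating the margin contribute to the loss
    return float(np.sum(np.maximum(d_pos - d_neg + alpha, 0.0)))
```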
S13, adjusting parameters of the fundus identity recognition model according to the loss value. For example, the parameters of the identity recognition model may be updated by back-propagation based on the loss value, so as to optimize the model.
Specifically, the loss value may be fed back to the fundus identity recognition model, and the parameters adjusted according to the loss value so as to reduce the first distance and increase the second distance, until the first distance is smaller than the second distance by the preset value. In a specific embodiment, propagating the triplet loss through the fundus identity recognition model makes the distance between the Anchor and the Positive smaller and the distance between the Anchor and the Negative larger, finally leaving a minimum margin α between the first distance and the second distance, which improves the robustness of the model. In this embodiment, the model is trained with multiple sets of training data until the loss function converges.
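As a toy illustration of adjusting parameters by back-propagating the triplet loss, the sketch below performs one gradient step on a linear embedding matrix W. The real model is a neural network trained by a deep-learning framework, so the linear map, learning rate and margin here are all simplifying assumptions.

```python
import numpy as np

def train_step(W, a, p, n, alpha=0.2, lr=0.01):
    """One gradient-descent step of triplet training on a linear embedding.

    W maps raw feature vectors to embeddings; a, p, n are the anchor,
    positive and negative feature vectors. Returns (updated W, loss).
    """
    e_a, e_p, e_n = W @ a, W @ p, W @ n
    d_pos = np.sum((e_a - e_p) ** 2)   # squared first distance
    d_neg = np.sum((e_a - e_n) ** 2)   # squared second distance
    loss = max(d_pos - d_neg + alpha, 0.0)
    if loss > 0.0:  # hinge active: back-propagate through both distances
        grad = 2 * np.outer(e_a - e_p, a - p) - 2 * np.outer(e_a - e_n, a - n)
        W = W - lr * grad
    return W, loss
```

Iterating this step shrinks the anchor–positive distance and grows the anchor–negative distance until the margin is satisfied and the loss reaches zero.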
In this method, feature extraction is first performed on fundus images to obtain training data. A fundus feature image is randomly selected from the training data as the first fundus feature image, serving as the reference sample; a second fundus feature image from the same eye serves as the positive sample, differing from the first image because of shooting conditions; and a third fundus feature image from a different eye serves as the negative sample. The fundus identity recognition model identifies the three samples to obtain a loss value, and the model parameters are adjusted by back-propagating the loss value to optimize the model. The training process thus takes full account of the many uncertain influencing factors in fundus photography and trains by comparison against images of different eyes, avoiding the influence of these uncertainties. In addition, extracting fundus features largely eliminates interfering image information irrelevant to identity recognition and markedly improves recognition performance, so that images from the same eye can be distinguished accurately, providing a reliable basis for tracking the condition of patients with eye diseases.
The embodiment of the present invention further provides a fundus identity recognition method, as shown in fig. 6, the method may include the following steps:
s21, acquiring at least two fundus images to be identified. In a specific embodiment, after acquiring the fundus image to be recognized, the fundus image may be subjected to data preprocessing, for example, the fundus image to be recognized may be cropped to remove a large number of black pixels in the background, the fundus images may each be cropped to a smallest rectangle capable of containing the entire circular fundus, in this embodiment, all of the fundus images may each be cropped to a uniform format, for example, sizes unified to 224 × 224 pixels, and fundus images of three color channels of RGB.
S22, identifying the at least two fundus images to be identified by using the fundus identity recognition model to obtain the similarity between them. The fundus identity recognition model may be obtained by training with the fundus identity recognition model training method of the above embodiment. Specifically, the fundus identity recognition model may adopt a convolutional neural network structure.
After the fundus identity recognition model has been trained, any two fundus images to be identified can be input, and the model outputs a similarity value between them. The convolutional neural network comprises convolutional layers, pooling layers, activation function layers and fully connected layers, with each layer's neuron parameters determined by training. Using the trained convolutional neural network, the distance between the two fundus images output by the fully connected layer is obtained by forward propagation through the network. Specifically, the fundus identity recognition model can separate any two input fundus images into regions of a high-dimensional space and calculate the distance between them there.
S23, determining, according to the similarity, whether the fundus images to be identified belong to the same eye. Specifically, the smaller the distance between the two fundus images, the greater their similarity; the larger the distance, the smaller their similarity. For example, it may be judged whether the similarity between two fundus images is greater than a threshold: when it is greater, the two images are confirmed to come from the same eye; when it is smaller, they are confirmed to come from different eyes.
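A minimal sketch of this decision step, assuming embeddings compared by Euclidean distance and a distance-to-similarity mapping of 1/(1 + d) — the patent fixes neither the mapping nor a threshold value, so both are illustrative assumptions.

```python
import numpy as np

def same_eye(emb_1, emb_2, threshold=0.5):
    """Decide whether two fundus embeddings come from the same eye.

    The smaller the Euclidean distance, the greater the similarity.
    Returns (decision, similarity).
    """
    distance = float(np.linalg.norm(emb_1 - emb_2))
    similarity = 1.0 / (1.0 + distance)  # assumed mapping: closer -> more similar
    return similarity > threshold, similarity
```

In practice the threshold would be tuned on held-out pairs of fundus images with known eye labels.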
The similarity of the fundus images to be identified is computed by the trained fundus identity recognition model, and whether the images belong to the same eye is confirmed from the similarity. Because the model is trained on a large amount of training data covering many conditions, the difficulty of identifying fundus images caused by the many uncertainties of fundus photography — such as differences in image brightness, image rotation and translation — can be avoided; images from the same eye can be distinguished accurately, providing a reliable basis for tracking the condition of patients with eye diseases.
The embodiment of the present invention further provides a device for training an eye fundus identity recognition model, which includes, as shown in fig. 7:
the feature extraction module 31 is configured to perform feature extraction on fundus images to obtain training data, where the training data includes a first fundus feature image, a second fundus feature image and a third fundus feature image; the second fundus feature image and the first fundus feature image are fundus images of the same eye, while the third fundus feature image and the first fundus feature image are fundus images of different eyes;
the loss value calculation module 32 is configured to identify the first fundus feature image, the second fundus feature image, and the third fundus feature image by using the fundus identity recognition model to obtain a loss value;
and the parameter adjusting module 33 is used for adjusting parameters of the fundus identification model according to the loss value.
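The loss value computed by module 32 from the two pairwise distances (see claims 2 and 3) can be sketched as a standard triplet margin loss. The margin value and the use of Euclidean distance are assumptions for illustration; the parameter adjusting module 33 would then update the model so as to drive this loss toward zero, making the first distance smaller than the second by at least the margin:

```python
import numpy as np

def triplet_loss(emb_anchor: np.ndarray,
                 emb_positive: np.ndarray,
                 emb_negative: np.ndarray,
                 margin: float = 0.2) -> float:
    """Loss value from a (first, second, third) fundus feature triplet.

    emb_anchor / emb_positive: embeddings of the same eye
    emb_negative: embedding of a different eye
    The margin value 0.2 is an illustrative assumption.
    """
    d_pos = float(np.linalg.norm(emb_anchor - emb_positive))  # first distance
    d_neg = float(np.linalg.norm(emb_anchor - emb_negative))  # second distance
    # Zero loss once the first distance is smaller than the
    # second distance by at least the margin.
    return max(d_pos - d_neg + margin, 0.0)
```

Training then minimizes this loss over many triplets, which decreases the first distance and increases the second distance as described in claim 3.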
The embodiment of the invention also provides a fundus identity recognition model training device, which comprises: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to cause the at least one processor to perform the fundus identity recognition model training method in the above embodiments and/or the fundus identity recognition method in the above embodiments.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (6)

1. A fundus identity recognition model training method is characterized by comprising the following steps:
step one, preprocessing the fundus images:
performing edge cropping on each fundus image to remove the large black background pixels, cropping the fundus image to the smallest rectangle that can contain the entire circular fundus, and performing data enhancement on the fundus images by rotation, translation, scaling and principal component transformation color enhancement;
step two, acquiring training data, and extracting fundus features by using a computer vision algorithm or a machine learning algorithm:
the training data comprises a first fundus feature image, a second fundus feature image and a third fundus feature image obtained by feature extraction based on fundus images, wherein the second fundus feature image and the first fundus feature image are fundus images of the same eye, and the third fundus feature image and the first fundus feature image are fundus images of different eyes; the fundus feature image is one of an optic disc image, a macula image and a fundus blood vessel image, or comprises blood vessel bifurcation point positions and directions, blood vessel intersection point positions and directions, and a blood vessel vector diagram;
the fundus feature image acquisition method comprises: dividing a fundus image sample into a plurality of image blocks by using a segmentation neural network with semantic segmentation capability, and obtaining a probability map or a binarized image containing fundus feature confidence by using the segmentation model, wherein the segmentation neural network is an FCN (fully convolutional network), SegNet or DeepLab neural network; two pixel values are adopted to represent the background and the blood vessels respectively, and the binarized image blocks are spliced into a fundus blood vessel image to serve as training data;
or directly extracting the position and the direction of the blood vessel bifurcation, the position and the direction of the blood vessel intersection and the blood vessel vector diagram from the fundus image as training data;
step three, identifying the first fundus feature image, the second fundus feature image and the third fundus feature image by using a fundus identity recognition model to obtain a loss value;
and step four, adjusting parameters of the fundus identity recognition model according to the loss value.
2. The fundus identity recognition model training method according to claim 1, wherein the identifying the first fundus feature image, the second fundus feature image and the third fundus feature image by using the fundus identity recognition model to obtain a loss value comprises:
calculating a first distance between the second fundus feature image and the first fundus feature image;
calculating a second distance between the third fundus feature image and the first fundus feature image;
and obtaining the loss value according to the first distance and the second distance.
3. The fundus identity recognition model training method according to claim 2, wherein the adjusting parameters of the fundus identity recognition model according to the loss value comprises:
feeding the loss value back to the fundus identity recognition model;
adjusting the parameter according to the loss value to decrease the first distance and increase the second distance until the first distance is smaller than the second distance by a preset value.
4. A fundus identity recognition method is characterized by comprising the following steps:
acquiring at least two fundus images to be identified;
identifying the at least two fundus images to be identified by using a fundus identity recognition model obtained by the fundus identity recognition model training method according to any one of claims 1 to 3, to obtain the similarity between the fundus images to be identified; and determining whether the similarity is greater than a preset threshold, wherein the preset threshold is a distance threshold between the fundus images to be identified;
when the similarity is larger than the preset threshold value, confirming that the fundus images to be identified belong to the same eye;
and when the similarity is smaller than the preset threshold, confirming that the fundus images to be identified belong to different eyes.
5. A fundus identity recognition model training device, comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform a fundus identity model training method according to any one of claims 1-3.
6. A fundus identification apparatus, comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus identification method of claim 4.
CN201910578321.2A 2019-06-28 2019-06-28 Eye ground identity recognition model training method, eye ground identity recognition method and equipment Active CN110276333B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910578321.2A CN110276333B (en) 2019-06-28 2019-06-28 Eye ground identity recognition model training method, eye ground identity recognition method and equipment

Publications (2)

Publication Number Publication Date
CN110276333A CN110276333A (en) 2019-09-24
CN110276333B true CN110276333B (en) 2021-10-15

Family

ID=67962605


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580530A (en) * 2020-12-22 2021-03-30 泉州装备制造研究所 Identity recognition method based on fundus images
CN116421140B (en) * 2023-06-12 2023-09-05 杭州目乐医疗科技股份有限公司 Fundus camera control method, fundus camera, and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN106326874A (en) * 2016-08-30 2017-01-11 天津中科智能识别产业技术研究院有限公司 Method and device for recognizing iris in human eye images
CN106408562A (en) * 2016-09-22 2017-02-15 华南理工大学 Fundus image retinal vessel segmentation method and system based on deep learning
CN108985159A (en) * 2018-06-08 2018-12-11 平安科技(深圳)有限公司 Human-eye model training method, eye recognition method, apparatus, equipment and medium
CN109087302A (en) * 2018-08-06 2018-12-25 北京大恒普信医疗技术有限公司 A kind of eye fundus image blood vessel segmentation method and apparatus
CN109390053A (en) * 2017-08-02 2019-02-26 上海市第六人民医院 Method for processing fundus images, device, computer equipment and storage medium
CN109522436A (en) * 2018-11-29 2019-03-26 厦门美图之家科技有限公司 Similar image lookup method and device

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
WO2017031099A1 (en) * 2015-08-20 2017-02-23 Ohio University Devices and methods for classifying diabetic and macular degeneration
CN107657612A (en) * 2017-10-16 2018-02-02 西安交通大学 Suitable for full-automatic the retinal vessel analysis method and system of intelligent and portable equipment
CN109753978B (en) * 2017-11-01 2023-02-17 腾讯科技(深圳)有限公司 Image classification method, device and computer readable storage medium
CN111192285B (en) * 2018-07-25 2022-11-04 腾讯医疗健康(深圳)有限公司 Image segmentation method, image segmentation device, storage medium and computer equipment



Similar Documents

Publication Publication Date Title
CN110263755B (en) Eye ground image recognition model training method, eye ground image recognition method and eye ground image recognition device
CN108717696B (en) Yellow spot image detection method and equipment
CN107292877B (en) Left and right eye identification method based on fundus image characteristics
CN109377474B (en) Macular positioning method based on improved Faster R-CNN
Morales et al. Automatic detection of optic disc based on PCA and mathematical morphology
CN109684981B (en) Identification method and equipment of cyan eye image and screening system
EP2888718B1 (en) Methods and systems for automatic location of optic structures in an image of an eye, and for automatic retina cup-to-disc ratio computation
Fan et al. Optic disk detection in fundus image based on structured learning
CN112017185B (en) Focus segmentation method, device and storage medium
CN112115866A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN109961848B (en) Macular image classification method and device
Zhou et al. Optic disc and cup segmentation in retinal images for glaucoma diagnosis by locally statistical active contour model with structure prior
CN110276333B (en) Eye ground identity recognition model training method, eye ground identity recognition method and equipment
CN113012093B (en) Training method and training system for glaucoma image feature extraction
CN113344894A (en) Method and device for extracting characteristics of eyeground leopard streak spots and determining characteristic index
CN114627067A (en) Wound area measurement and auxiliary diagnosis and treatment method based on image processing
Manchalwar et al. Detection of cataract and conjunctivitis disease using histogram of oriented gradient
CN106960199A (en) A kind of RGB eye is as the complete extraction method in figure white of the eye region
Fondón et al. Automatic optic cup segmentation algorithm for retinal fundus images based on random forest classifier
CN112907581A (en) MRI (magnetic resonance imaging) multi-class spinal cord tumor segmentation method based on deep learning
CN112669273A (en) Method and device for automatically segmenting drusen in fundus image and readable storage medium
CN116030042A (en) Diagnostic device, method, equipment and storage medium for doctor's diagnosis
Kaur et al. Review on: blood vessel extraction and eye retinopathy detection
CN111563910A (en) Fundus image segmentation method and device
Shaikha et al. Optic Disc Detection and Segmentation in Retinal Fundus Image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant