CN110276333A - Fundus identification model training method, fundus identity recognition method and device - Google Patents

Fundus identification model training method, fundus identity recognition method and device

Info

Publication number
CN110276333A
CN110276333A (application number CN201910578321.2A; granted as CN110276333B)
Authority
CN
China
Prior art keywords
fundus
image
feature image
identification model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910578321.2A
Other languages
Chinese (zh)
Other versions
CN110276333B (en)
Inventor
熊健皓
和宗尧
赵昕
和超
张大磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eaglevision Medical Technology Co Ltd
Original Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eaglevision Medical Technology Co Ltd filed Critical Shanghai Eaglevision Medical Technology Co Ltd
Priority to CN201910578321.2A priority Critical patent/CN110276333B/en
Publication of CN110276333A publication Critical patent/CN110276333A/en
Application granted granted Critical
Publication of CN110276333B publication Critical patent/CN110276333B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06T7/0012 Biomedical image inspection
    • G06T7/11 Region-based segmentation
    • G06V40/193 Preprocessing; Feature extraction (eye characteristics, e.g. of the iris)
    • G06T2207/10024 Color image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30041 Eye; Retina; Ophthalmic
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Abstract

The present invention provides a fundus identification model training method, a fundus identity recognition method, and corresponding devices. The training method includes: performing feature extraction on fundus images to obtain training data, the training data comprising a first fundus feature image, a second fundus feature image, and a third fundus feature image, wherein the second fundus feature image and the first fundus feature image come from the same eye, and the third fundus feature image and the first fundus feature image come from different eyes; recognizing the first, second, and third fundus feature images with the fundus identification model to obtain a loss value; and adjusting the parameters of the fundus identification model according to the loss value.

Description

Fundus identification model training method, fundus identity recognition method and device
Technical field
The present invention relates to the technical field of medical image recognition, and in particular to a fundus identification model training method, a fundus identity recognition method, and corresponding devices.
Background technique
At present, fundus diseases are usually examined by photographing fundus images with special imaging equipment; by observing a fundus image, a doctor can judge whether the examinee may suffer from a certain fundus disease and decide whether further examination or medical advice is needed.
The condition of a fundus disease may continue to develop. During a patient's follow-up visits, the doctor needs to compare the current image against earlier fundus images to track the disease and give better treatment recommendations, which requires picking out the fundus images that come from the same eye among many fundus images. Although an experienced doctor can select the images belonging to the same eye, the fundus imaging process involves many uncertain factors, such as image brightness, rotation, and translation. These make fundus image matching very difficult, so that even a doctor may struggle to reliably identify images from the same eye, and accurate tracking of the fundus condition becomes hard to achieve.
Summary of the invention
In view of this, the present invention provides a fundus identification model training method, comprising: obtaining training data, the training data comprising a first fundus feature image, a second fundus feature image, and a third fundus feature image obtained by performing feature extraction on fundus images, wherein the second fundus feature image and the first fundus feature image come from the same eye, and the third fundus feature image and the first fundus feature image come from different eyes; recognizing the first, second, and third fundus feature images with the fundus identification model to obtain a loss value; and adjusting the parameters of the fundus identification model according to the loss value.
Optionally, the fundus features include at least one of the optic disc, the macula, the blood vessels, and the retina.
Optionally, the fundus features include abstract features related to blood-vessel morphology.
Optionally, performing feature extraction on the fundus images to obtain the training data includes: extracting the fundus features from the fundus images with a segmentation neural network to obtain a probability map containing fundus-feature confidences, or a binary image.
Optionally, the training data includes the fundus feature images of n eyes, with each eye corresponding to m fundus images, where n and m are integers greater than 1.
Optionally, inputting the first, second, and third fundus feature images into the fundus identification model to obtain the loss value includes: calculating a first distance between the second fundus feature image and the first fundus feature image; calculating a second distance between the third fundus feature image and the first fundus feature image; and obtaining the loss value from the first distance and the second distance.
Optionally, adjusting the parameters of the fundus identification model with the loss value includes: feeding the loss value back to the fundus identity recognition model; and adjusting the parameters according to the loss value so as to decrease the first distance and increase the second distance, until the first distance is smaller than the second distance by a preset value.
Optionally, before the fundus features are extracted from the samples with a computer-vision or machine-learning algorithm, the method includes: cropping the training data and/or applying data augmentation to the training data.
According to a second aspect, an embodiment of the invention provides a fundus identity recognition method, comprising: obtaining at least two fundus images to be identified; recognizing the at least two fundus images with a fundus identity recognition model obtained by the fundus identification model training method of any one of the above first aspect, to obtain the similarity between the fundus images to be identified; and determining from the similarity whether the fundus images to be identified belong to the same eye.
Optionally, determining from the similarity whether the fundus images to be identified belong to the same eye includes: judging whether the similarity exceeds a preset threshold, the preset threshold being a distance threshold between the fundus images to be identified; when the similarity is greater than the preset threshold, confirming that the fundus images belong to the same eye; and when the similarity is below the preset threshold, confirming that they belong to different eyes.
According to a third aspect, an embodiment of the invention provides a fundus identity recognition device, comprising: at least one processor; and a memory communicatively connected to the at least one processor, the memory storing instructions executable by the processor, the instructions being executed by the at least one processor so that the at least one processor performs the fundus identification model training method of any one of the above first aspect and/or the fundus identity recognition method of the above second aspect.
Feature extraction is performed on fundus images to obtain training data. Among the training data, one fundus feature image is chosen arbitrarily as the first fundus feature image and serves as the reference sample; a second fundus feature image from the same eye as the reference sample serves as the positive sample, the second and first fundus feature images differing because of variations in the imaging process; and a third fundus feature image from a different eye than the reference sample serves as the negative sample. The fundus identification model recognizes the three samples, a loss value is calculated, and the model parameters are adjusted by backpropagating the loss, thereby optimizing the fundus identification model. This training procedure takes full account of the many uncertain factors in fundus imaging and, by contrasting fundus images of different eyes, avoids the recognition difficulty caused by those uncertain factors, such as the differences that image brightness, rotation, and translation introduce into fundus images. In addition, extracting fundus features largely excludes image information irrelevant to identity recognition and clearly improves recognition performance, so that images from the same eye can be distinguished accurately, providing a reliable basis for tracking an eye-disease patient's condition.
Detailed description of the invention
To explain the embodiments of the invention or the prior-art solutions more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a fundus identification model training method in an embodiment of the invention;
Fig. 2 is a fundus image in an embodiment of the invention;
Fig. 3 is an image block of the fundus image shown in Fig. 2;
Fig. 4 is the segmentation result produced by the segmentation model for the image block shown in Fig. 3;
Fig. 5 is the fundus blood-vessel image obtained by segmenting and stitching the image shown in Fig. 2;
Fig. 6 is a flowchart of a fundus identity recognition method in an embodiment of the invention;
Fig. 7 is a schematic structural diagram of a fundus identification model training apparatus in an embodiment of the invention.
Specific embodiment
The technical solutions of the invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the invention.
In addition, the technical features involved in the different embodiments of the invention described below can be combined with each other as long as they do not conflict.
The present invention provides a fundus identification model training method that can be used to train a neural network model for fundus identity recognition; the method can be executed by an electronic device such as a computer or a server. As shown in Fig. 1, the method includes the following steps:
S11. Obtain training data. Each piece of training data includes a first fundus feature image, a second fundus feature image, and a third fundus feature image; the three fundus feature images are feature images obtained by performing feature extraction on original fundus images.
Specifically, after a fundus image is obtained, a computer-vision or machine-learning algorithm can be used to extract the fundus features. In a specific embodiment, the fundus features can be extracted with a segmentation neural network, yielding a probability map containing fundus-feature confidences, or a binary image. As shown in Fig. 2, the fundus image can be divided into multiple image blocks, the block size being set according to the size of the fundus image; in most cases the block should be significantly smaller than the whole fundus image. For example, for a fundus image of 1000*1000 pixels, the image blocks may be 100*100 pixels.
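As a minimal sketch of this block split, and assuming the image side length is an exact multiple of the block size (as in the 1000*1000 / 100*100 example), the division into non-overlapping blocks can be written as:

```python
import numpy as np

def split_into_blocks(image, block_size=100):
    """Split a square fundus image into non-overlapping square blocks,
    returned in row-major order. Assumes the side length is a multiple
    of block_size, as in the 1000x1000 / 100x100 example above."""
    h, w = image.shape[:2]
    blocks = []
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            blocks.append(image[y:y + block_size, x:x + block_size])
    return blocks

# A 1000x1000 single-channel image yields a 10x10 grid of 100 blocks.
image = np.zeros((1000, 1000), dtype=np.uint8)
blocks = split_into_blocks(image)
print(len(blocks))  # 100
```

Each block is then fed to the segmentation model independently, which keeps the segmentation input small regardless of the full image resolution.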
The blood-vessel image in each image block is then segmented with a preset segmentation model to obtain segmented image blocks. The segmentation model can specifically be a neural network such as FCN, SegNet, or DeepLab; before use it should be trained on sample data so that it has a certain semantic-segmentation capability, for example by training on sample image blocks with manually labeled vessel regions.
The segmentation model extracts the features of the vessel image in each block and forms a segmented image block from the extracted features, in which the vessel image is highlighted. There are many ways to highlight it, for example using pixel values that differ markedly from the background to express the positions of the vessels.
Inputting the image block shown in Fig. 3 into the segmentation model produces the segmented image block shown in Fig. 4. The output of the segmentation model used in this embodiment is a binary image, which expresses background and vessels with two pixel values, highlights the vessel positions intuitively, and is more amenable to subsequent measurements and operations on the vessel image. The segmented blocks are stitched into a fundus blood-vessel image such as the one shown in Fig. 5, which clearly expresses the vessels and the background of the fundus image; this completes the extraction of the vessel feature. As optional embodiments, other features such as the optic disc, macula, and retina can be extracted in the same way. Extracting fundus features largely excludes image information irrelevant to fundus identity recognition and clearly improves the model's recognition performance.
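The stitching step is the inverse of the block split. A sketch, assuming the segmented blocks come back in row-major order and a known grid shape:

```python
import numpy as np

def stitch_blocks(blocks, grid, block_size=100):
    """Stitch segmented blocks (row-major order) back into one full
    fundus vessel image, reversing the block split described above."""
    rows, cols = grid
    out = np.zeros((rows * block_size, cols * block_size), dtype=blocks[0].dtype)
    for i, block in enumerate(blocks):
        y = (i // cols) * block_size   # row offset of this block
        x = (i % cols) * block_size    # column offset of this block
        out[y:y + block_size, x:x + block_size] = block
    return out

# 100 binary blocks of 100x100 reassemble into a 1000x1000 image.
blocks = [np.full((100, 100), i % 2, dtype=np.uint8) for i in range(100)]
full = stitch_blocks(blocks, grid=(10, 10))
print(full.shape)  # (1000, 1000)
```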
In an optional embodiment, the fundus feature image may also contain higher-level indirect features (abstract features), such as the positions and directions of vessel bifurcation points, the positions and directions of vessel crossing points, or a vessel vector map. After the original fundus images are obtained, these indirect features can be extracted from them as training data.
The fundus feature images in the training data are labeled with the eye they belong to. Specifically, the labeling can rely on the extracted fundus features, for example on comparatively salient features such as the optic disc, macula, vessels, or retina. In this embodiment one eye can have multiple fundus images, which together form the training data belonging to that eye; the angle, brightness, and so on of individual images may differ.
The second fundus feature image comes from a different fundus image of the same eye as the first fundus feature image; the third fundus feature image comes from a fundus image of a different eye. In a specific embodiment, the training data may include the fundus feature images of multiple eyes, with multiple feature images per eye. The first fundus feature image can be chosen randomly among the fundus images of the eyes and serves as the standard sample. The second fundus feature image is chosen among the fundus images as a different fundus feature image of the same eye as the first, and serves as the positive sample. The third fundus feature image is chosen among the fundus feature images as one from a different eye than the first, and serves as the negative sample, i.e. a fundus feature image of a different eye than the standard sample.
Each piece of training data is a fundus feature image. Before feature extraction, the fundus images can first be preprocessed so that the trained fundus identification model is more accurate at identity recognition. Specifically, each fundus image can first be cropped: since a photographed fundus image has a large black background, the edges are trimmed to remove the large black regions, cutting the image down to the smallest rectangle that contains the whole circular fundus. In a specific embodiment, all fundus images can be cropped to a unified format, for example a unified size of 224*224 pixels, so that the images input during training and recognition are uniformly 224*224-pixel images with three RGB color channels.
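The black-border crop can be sketched as follows; the intensity cutoff `threshold` is an assumed value for deciding what counts as background, and the final resize to 224*224 (e.g. with OpenCV or PIL) is not shown:

```python
import numpy as np

def crop_fundus(image, threshold=10):
    """Crop a fundus image to the smallest rectangle containing the
    circular fundus region, by discarding near-black background rows
    and columns. `threshold` is an assumed intensity cutoff."""
    mask = image.max(axis=2) > threshold if image.ndim == 3 else image > threshold
    rows = np.flatnonzero(mask.any(axis=1))  # rows containing fundus pixels
    cols = np.flatnonzero(mask.any(axis=0))  # columns containing fundus pixels
    return image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

# Synthetic example: a bright square standing in for the fundus circle.
img = np.zeros((200, 300, 3), dtype=np.uint8)
img[50:150, 100:200] = 128
cropped = crop_fundus(img)
print(cropped.shape)  # (100, 100, 3)
```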
To improve the robustness of the fundus identification model, preprocessing can also include data augmentation of the fundus images, using rotation, translation, scaling, and principal component (PCA) color augmentation; through augmentation, each fundus image can generate multiple copies with random augmentation parameters. The augmented fundus images can, for example, keep the unified format of 224*224 pixels with three RGB color channels. The images can be cropped first and then augmented, or augmented first and then cropped; this embodiment places no limitation on the order of the two preprocessing steps.
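A minimal augmentation sketch, using simple numpy stand-ins (90-degree rotation, pixel-shift translation, brightness scaling) for the rotation/translation/PCA-color augmentations described above; the parameter ranges are assumptions:

```python
import numpy as np

def augment(image, rng):
    """Generate one randomly augmented copy of a square fundus image:
    a random 90-degree rotation, a small random translation, and a
    random brightness scale (stand-ins for the augmentations above)."""
    out = np.rot90(image, k=rng.integers(0, 4))       # rotation
    dy, dx = rng.integers(-10, 11, size=2)
    out = np.roll(out, shift=(dy, dx), axis=(0, 1))   # translation
    scale = rng.uniform(0.8, 1.2)                     # brightness
    return np.clip(out.astype(np.float32) * scale, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
copies = [augment(img, rng) for _ in range(5)]  # several random copies
print(len(copies), copies[0].shape)
```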
As a concrete example, the training data can be the fundus feature images of n eyes, with each eye corresponding to m fundus pictures, where n and m are integers greater than 1. In general, the larger n is, the more accurate the trained model. Repeated study by the inventors shows that when m is greater than or equal to 8 the recognition accuracy of the trained model improves markedly, so in this embodiment the value of m can be greater than or equal to 8.
S12. Recognize the first, second, and third fundus feature images with the fundus identification model to obtain a loss value. The fundus identification model can be an arbitrary neural network model. The first, second, and third fundus feature images form one training data group, which is input into the fundus identification model, and the loss value is calculated with a preset loss function. In a specific embodiment, a first distance between the second fundus feature image and the first fundus feature image can be calculated, a second distance between the third fundus feature image and the first fundus feature image can be calculated, and the loss value obtained from the first and second distances.
Specifically, the loss value can be calculated with a triplet loss function. The first fundus feature image, randomly chosen among the fundus feature images of the eyes, can be called the Anchor; the second fundus feature image, belonging to the same eye as the first, can be called the Positive; and the third fundus feature image, belonging to a different eye than the first, can be called the Negative, together constituting an (Anchor, Positive, Negative) triplet. The feature representations finally obtained for the three samples of the triplet are denoted f(x_a), f(x_p), and f(x_n) respectively. In this embodiment, the first distance between f(x_a) and f(x_p) and the second distance between f(x_a) and f(x_n) are calculated; specifically, the first and second distances can be measured with the Euclidean distance.
The loss value is calculated from the first distance and the second distance, specifically with the following loss-function relation:

L = [ ||f(x_a) - f(x_p)||^2 - ||f(x_a) - f(x_n)||^2 + α ]+

where α is a preset value, the minimum margin between the first distance and the second distance, and [·]+ means that when the bracketed value is greater than zero it is taken as the loss value, and when it is negative the loss is zero.
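The relation above can be sketched directly in code; the margin value α = 0.2 is an assumption for illustration:

```python
import numpy as np

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    """Triplet loss L = [ d_p^2 - d_n^2 + alpha ]+ with squared
    Euclidean distances, following the relation above; alpha is
    the preset margin (assumed value)."""
    d_p2 = np.sum((f_a - f_p) ** 2)  # squared anchor-positive distance
    d_n2 = np.sum((f_a - f_n) ** 2)  # squared anchor-negative distance
    return max(d_p2 - d_n2 + alpha, 0.0)

f_a = np.array([0.0, 0.0])
f_p = np.array([0.1, 0.0])  # close to the anchor: same eye
f_n = np.array([1.0, 0.0])  # far from the anchor: different eye
print(triplet_loss(f_a, f_p, f_n))  # 0.0 -- margin already satisfied
```

When the negative sits too close to the anchor, the bracketed value turns positive and the loss drives the embeddings apart during training.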
S13. Adjust the parameters of the fundus identification model according to the loss value, for example by backpropagating from the loss value to update the parameters of the identification model and thereby optimize it.
Specifically, the loss value can be fed back to the fundus identification model, and the parameters adjusted according to it so as to decrease the first distance and increase the second distance, until the first distance is smaller than the second distance by the preset value. In a specific embodiment, the triplet loss can be backpropagated through the fundus identification model, making the Anchor-Positive distance smaller and the Anchor-Negative distance larger, finally leaving a minimum margin α between the first and second distances, which improves the robustness of the model. In this embodiment the model is trained with multiple groups of training data until the loss function converges.
In summary, feature extraction is first performed on fundus images to obtain training data. Among the training data, one fundus feature image is chosen arbitrarily as the first fundus feature image and serves as the reference sample; a second fundus feature image from the same eye serves as the positive sample, the second and first fundus feature images differing because of variations in the imaging process; and a third fundus feature image from a different eye than the reference sample serves as the negative sample. The fundus identification model recognizes the three samples, a loss value is calculated, and the model parameters are adjusted by backpropagation to optimize the fundus identification model. This training procedure takes full account of the many uncertain factors in fundus imaging and, by contrasting fundus images of different eyes, avoids the recognition difficulty those factors cause, such as the differences that image brightness, rotation, and translation introduce into fundus images. In addition, extracting fundus features largely excludes image information irrelevant to identity recognition and clearly improves recognition performance, so that images from the same eye can be distinguished accurately, providing a reliable basis for tracking an eye-disease patient's condition.
An embodiment of the invention also provides a fundus identity recognition method. As shown in Fig. 6, the method may include the following steps:
S21. Obtain at least two fundus images to be identified. In a specific embodiment, after the fundus images are obtained they can be preprocessed: for example, they can be cropped to remove the large black background regions, cutting each image down to the smallest rectangle that contains the whole circular fundus. In this embodiment, all fundus images can be cropped to a unified format, for example a unified size of 224*224 pixels with three RGB color channels.
S22. Recognize the at least two fundus images with the fundus identification model to obtain the similarity between them. The fundus identification model can be trained with the fundus identification model training method of the above embodiment; specifically, its structure can be a convolutional neural network.
Once the fundus identification model is trained, any two fundus images to be identified can be input, and the model outputs the similarity value between them. The convolutional neural network includes convolutional layers, pooling layers, activation layers, and a fully connected layer, the parameters of every neuron in each layer being determined by training. With the trained convolutional neural network, forward propagation yields at the fully connected layer the distance between the two fundus images to be identified. Specifically, the model can partition the two input images in a high-dimensional space and calculate the distance between them.
S23. Determine from the similarity whether the fundus images to be identified belong to the same eye. Specifically, the smaller the distance between the two fundus images, the greater their similarity; the larger the distance, the smaller their similarity. For example, it may be judged whether the similarity between the two fundus images exceeds a threshold: when it does, the two images are determined to come from the same eye; when the similarity is below the threshold, they are determined to come from different eyes.
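The decision step reduces to a threshold comparison on the model's output distance; the threshold value below is an assumed calibration, not one given in the text:

```python
def same_eye(distance, threshold=0.5):
    """Decide whether two fundus images come from the same eye, given
    the embedding distance output by the identification model: smaller
    distance means higher similarity. `threshold` is an assumed value
    that would be calibrated on held-out data."""
    return distance < threshold

print(same_eye(0.12))  # True  -- small distance, likely the same eye
print(same_eye(0.93))  # False -- large distance, likely different eyes
```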
The similarity of the fundus images to be identified is obtained from the trained fundus identification model, and whether they belong to the same eye is confirmed from the similarity. Because the model is trained with a large amount of training data covering many conditions, the recognition difficulty caused by the many uncertain factors of fundus imaging (such as image brightness, rotation, and translation) is avoided, images from the same eye can be distinguished accurately, and a reliable basis is provided for tracking an eye-disease patient's condition.
An embodiment of the invention further provides a fundus identity recognition model training apparatus. As shown in Figure 7, the apparatus comprises:
a feature extraction module 31, configured to perform feature extraction on fundus images to obtain training data, the training data comprising a first fundus feature image, a second fundus feature image and a third fundus feature image, wherein the second fundus feature image and the first fundus feature image are fundus images of the same eye, and the third fundus feature image and the first fundus feature image are fundus images of different eyes;
a loss value computing module 32, configured to recognize the first fundus feature image, the second fundus feature image and the third fundus feature image using the fundus identity recognition model to obtain a loss value; and
a parameter adjustment module 33, configured to adjust the parameters of the fundus identity recognition model according to the loss value.
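The loss value computed by module 32 from a same-eye pair and a different-eye pair can be illustrated with a standard triplet (hinge) loss. The margin value and the two-dimensional feature vectors below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: penalise cases where the same-eye
    (anchor-positive) distance is not smaller than the different-eye
    (anchor-negative) distance by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)  # first distance (same eye)
    d_neg = np.linalg.norm(anchor - negative)  # second distance (different eye)
    return float(max(d_pos - d_neg + margin, 0.0))

a = np.array([0.0, 0.0])   # first fundus feature image (anchor)
p = np.array([0.1, 0.0])   # feature image of the same eye
n = np.array([1.0, 1.0])   # feature image of a different eye
print(triplet_loss(a, p, n))  # 0.0 -> distance constraint already satisfied
```

When the same-eye distance exceeds the different-eye distance minus the margin, the loss becomes positive and drives the parameter adjustment of module 33.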
An embodiment of the invention further provides fundus identity recognition model training equipment, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the processor, and the instructions are executed by the at least one processor so that the at least one processor performs the fundus identity recognition model training method of the above embodiments and/or the fundus image identity recognition method of the above embodiments.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
Obviously, the above embodiments are merely examples given for clarity of description and do not limit the embodiments. For those of ordinary skill in the art, various other changes or modifications can be made on the basis of the above description. It is neither necessary nor possible to exhaust all embodiments here, and obvious changes or modifications derived therefrom still fall within the protection scope of the invention.

Claims (10)

1. A fundus identity recognition model training method, characterized by comprising:
obtaining training data, the training data comprising a first fundus feature image, a second fundus feature image and a third fundus feature image obtained by performing feature extraction on fundus images, wherein the second fundus feature image and the first fundus feature image are fundus images of the same eye, and the third fundus feature image and the first fundus feature image are fundus images of different eyes;
recognizing the first fundus feature image, the second fundus feature image and the third fundus feature image using a fundus identity recognition model to obtain a loss value; and
adjusting parameters of the fundus identity recognition model according to the loss value.
2. The fundus identity recognition model training method according to claim 1, characterized in that the fundus features comprise at least one of an optic disc, a macula, blood vessels and a retina.
3. The fundus identity recognition model training method according to claim 1, characterized in that the fundus features comprise abstract features related to blood vessel morphology.
4. The fundus identity recognition model training method according to claim 1, characterized in that obtaining the training data comprises:
extracting the fundus features from a fundus image using a segmentation neural network, to obtain a probability map of fundus feature confidence or a binary image.
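The probability map and binary image of claim 4 can be related by simple per-pixel thresholding; the following sketch assumes a cutoff of 0.5, which is an illustrative choice rather than a value from the patent:

```python
import numpy as np

def binarize_probability_map(prob_map: np.ndarray, cutoff: float = 0.5) -> np.ndarray:
    """Turn a per-pixel confidence (probability) map produced by a
    segmentation network into a binary feature/background mask."""
    return (prob_map >= cutoff).astype(np.uint8)

# Toy 2x2 confidence map for a fundus feature such as vessels.
prob = np.array([[0.9, 0.2],
                 [0.4, 0.7]])
print(binarize_probability_map(prob).tolist())  # [[1, 0], [0, 1]]
```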
5. The fundus identity recognition model training method according to claim 1, characterized in that inputting the first fundus feature image, the second fundus feature image and the third fundus feature image into the fundus identity recognition model to obtain the loss value comprises:
calculating a first distance between the second fundus feature image and the first fundus feature image;
calculating a second distance between the third fundus feature image and the first fundus feature image; and
obtaining the loss value according to the first distance and the second distance.
6. The fundus identity recognition model training method according to claim 5, characterized in that adjusting the parameters of the fundus identity recognition model using the loss value comprises:
feeding the loss value back to the fundus identity recognition model; and
adjusting the parameters according to the loss value to reduce the first distance and increase the second distance, until the first distance is smaller than the second distance by a preset value.
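The stopping condition of claim 6 can be illustrated with a toy iteration. In the real model it is the network weights, not the feature vectors themselves, that are adjusted through backpropagation; the learning rate, preset margin and two-dimensional vectors below are illustrative assumptions:

```python
import numpy as np

def train_until_margin(anchor, positive, negative, preset=0.5, lr=0.1, steps=200):
    """Toy sketch of claim 6: pull the same-eye embedding toward the anchor
    and push the different-eye embedding away, stopping once the first
    distance is smaller than the second distance by `preset`."""
    p, n = positive.copy(), negative.copy()
    for _ in range(steps):
        d1 = np.linalg.norm(anchor - p)   # first distance (same eye)
        d2 = np.linalg.norm(anchor - n)   # second distance (different eye)
        if d1 + preset < d2:              # first distance smaller by the preset value
            break
        p += lr * (anchor - p)            # reduce the first distance
        n -= lr * (anchor - n)            # increase the second distance
    return p, n

a = np.array([0.0, 0.0])
p, n = train_until_margin(a, np.array([1.0, 0.0]), np.array([1.2, 0.0]))
print(np.linalg.norm(a - p) + 0.5 < np.linalg.norm(a - n))  # True
```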
7. The fundus identity recognition model training method according to claim 1, characterized in that, before performing fundus feature extraction on the samples using a computer vision algorithm or a machine learning algorithm, the method comprises:
cropping the training data and/or performing data enhancement on the training data.
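The cropping and data enhancement of claim 7 might, for example, take the following form; the crop size, flip probability and brightness range are illustrative assumptions chosen to mimic the illumination, rotation and translation variation described earlier:

```python
import numpy as np

def center_crop(img: np.ndarray, size: int) -> np.ndarray:
    """Crop a square region of side `size` from the image centre."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly flip and shift brightness -- simple stand-ins for the
    variation seen across fundus photographs of the same eye."""
    if rng.random() < 0.5:
        img = img[:, ::-1]  # horizontal flip
    return np.clip(img + rng.uniform(-0.1, 0.1), 0.0, 1.0)

rng = np.random.default_rng(42)
image = rng.random((64, 64))               # toy grayscale fundus image
patch = augment(center_crop(image, 48), rng)
print(patch.shape)  # (48, 48)
```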
8. A fundus identity recognition method, characterized by comprising:
obtaining at least two fundus images to be identified;
recognizing the at least two fundus images to be identified using a fundus identity recognition model obtained by the fundus identity recognition model training method according to any one of claims 1-7, to obtain a similarity between the fundus images to be identified; and
identifying, according to the similarity, whether the fundus images to be identified belong to the same eye.
9. The fundus identity recognition method according to claim 8, characterized in that identifying, according to the similarity, whether the fundus images to be identified belong to the same eye comprises:
judging whether the similarity is greater than a preset threshold, the preset threshold being a distance threshold between the fundus images to be identified;
when the similarity is greater than the preset threshold, confirming that the fundus images to be identified belong to the same eye; and
when the similarity is less than the preset threshold, confirming that the fundus images to be identified belong to different eyes.
10. Fundus identity recognition equipment, characterized by comprising:
at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the processor, and the instructions are executed by the at least one processor so that the at least one processor performs the fundus identity recognition model training method according to any one of claims 1-7 and/or the fundus identity recognition method according to claim 8 or 9.
CN201910578321.2A 2019-06-28 2019-06-28 Eye ground identity recognition model training method, eye ground identity recognition method and equipment Active CN110276333B (en)


Publications (2)

Publication Number | Publication Date
CN110276333A (application) | 2019-09-24
CN110276333B (grant) | 2021-10-15

