CN110298850A - Segmentation method and device for fundus images - Google Patents
- Publication number: CN110298850A
- Application number: CN201910590552.5A
- Authority
- CN
- China
- Prior art keywords
- image
- eye fundus
- sample
- fundus image
- optic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Eye Examination Apparatus (AREA)
Abstract
Embodiments of the present disclosure disclose a segmentation method and device for fundus images. One specific embodiment of the method comprises: obtaining a fundus image to be detected; inputting the fundus image to be detected into an image generation model to obtain a cup-disc mask image corresponding to the fundus image to be detected; and, based on the cup-disc mask image corresponding to the fundus image to be detected, fitting a mask image of the optic cup region and a mask image of the optic disc region in the fundus image to be detected. This embodiment can accurately segment the optic disc region and the optic cup region in a fundus image.
Description
Embodiments of the present disclosure relate to the field of computer technology, in particular to the field of image processing, and more particularly to a segmentation method and device for fundus images.
Background Art
At present, with the development of computer technology, various image segmentation techniques continue to emerge. Image segmentation can solve many practical problems. One typical application is the segmentation of medical images: segmentation can locate key regions in an image and thereby assist diagnosis and treatment.
In the processing of fundus images, image segmentation techniques are expected to separate the optic cup region and the optic disc region. Most current fundus image segmentation techniques perform threshold segmentation based on the pixel values of the optic cup and the optic disc in the image.
Summary of the Invention
Embodiments of the present disclosure propose a segmentation method and device for fundus images.
In a first aspect, embodiments of the present disclosure provide a segmentation method for fundus images, the method comprising: obtaining a fundus image to be detected; inputting the fundus image to be detected into an image generation model to obtain a cup-disc mask image corresponding to the fundus image to be detected, where the cup-disc mask image characterizes the difference region between the optic cup and the optic disc in the fundus image to be detected; and, based on the cup-disc mask image corresponding to the fundus image to be detected, fitting a mask image of the optic cup region and a mask image of the optic disc region in the fundus image to be detected.
In some embodiments, fitting the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected based on the corresponding cup-disc mask image comprises: fitting the inner boundary and the outer boundary of the cup-disc mask image corresponding to the fundus image to be detected with an ellipse fitting method, to obtain the mask image of the optic cup region and the mask image of the optic disc region.
In some embodiments, the method further comprises: determining boundary information of the optic cup region and the optic disc region in the fundus image to be detected, based on the mask image of the optic cup region and the mask image of the optic disc region; and controlling a display device to display the fundus image to be detected with the boundary information.
In some embodiments, the image generation model is generated as follows: obtaining a sample set, where each sample in the sample set includes a fundus image and a sample mask image corresponding to the fundus image, the sample mask image characterizing the difference region between the optic cup and the optic disc in the fundus image of the corresponding sample; obtaining an initial generative adversarial network, where the initial generative adversarial network includes a generation network and a discrimination network; and selecting samples from the sample set and executing the following training steps: predicting, with the generation network, the difference region between the optic cup and the optic disc in the fundus image of each selected sample, to obtain a predicted mask image corresponding to the fundus image of the sample; inputting the predicted mask image and the sample mask image of the selected sample into the discrimination network, to obtain category judgment results for the sample mask image and the corresponding predicted mask image; comparing the category judgment results with preset expected category judgment results; determining, according to the comparison results, whether training of the generation network is complete; and, in response to determining that training of the generation network is complete, determining the generation network as the image generation model.
In some embodiments, the difference region between the optic cup and the optic disc in the fundus image of a selected sample is predicted with the generation network as follows: noise is superimposed on the fundus image of the sample, and the result is input into the generation network to predict the difference region between the optic cup and the optic disc in the fundus image of the sample.
In a second aspect, embodiments of the present disclosure provide a segmentation device for fundus images, the device comprising: an obtaining unit configured to obtain a fundus image to be detected; a generation unit configured to input the fundus image to be detected into an image generation model to obtain a cup-disc mask image corresponding to the fundus image to be detected, where the cup-disc mask image characterizes the difference region between the optic cup and the optic disc in the fundus image to be detected; and a fitting unit configured to fit, based on the cup-disc mask image corresponding to the fundus image to be detected, a mask image of the optic cup region and a mask image of the optic disc region in the fundus image to be detected.
In some embodiments, the fitting unit is further configured to fit the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected as follows: the inner boundary and the outer boundary of the cup-disc mask image corresponding to the fundus image to be detected are fitted with an ellipse fitting method, to obtain the mask image of the optic cup region and the mask image of the optic disc region.
In some embodiments, the device further comprises: a determination unit configured to determine boundary information of the optic cup region and the optic disc region in the fundus image to be detected, based on the mask image of the optic cup region and the mask image of the optic disc region; and a display unit configured to control a display device to display the fundus image to be detected with the boundary information.
In some embodiments, the image generation model is generated as follows: obtaining a sample set, where each sample in the sample set includes a fundus image and a sample mask image corresponding to the fundus image, the sample mask image characterizing the difference region between the optic cup and the optic disc in the fundus image of the corresponding sample; obtaining an initial generative adversarial network, where the initial generative adversarial network includes a generation network and a discrimination network; and selecting samples from the sample set and executing the following training steps: predicting, with the generation network, the difference region between the optic cup and the optic disc in the fundus image of each selected sample, to obtain a predicted mask image corresponding to the fundus image of the sample; inputting the predicted mask image and the sample mask image of the selected sample into the discrimination network, to obtain category judgment results for the sample mask image and the corresponding predicted mask image; comparing the category judgment results with preset expected category judgment results; determining, according to the comparison results, whether training of the generation network is complete; and, in response to determining that training of the generation network is complete, determining the generation network as the image generation model.
In some embodiments, the difference region between the optic cup and the optic disc in the fundus image of a selected sample is predicted with the generation network as follows: noise is superimposed on the fundus image of the sample, and the result is input into the generation network to predict the difference region between the optic cup and the optic disc in the fundus image of the sample.
In a third aspect, embodiments of the present disclosure provide an electronic device comprising: one or more processors; and a storage device on which one or more programs are stored; when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored; when the program is executed by a processor, the method described in any implementation of the first aspect is implemented.
The segmentation method and device for fundus images provided by embodiments of the present disclosure obtain a fundus image to be detected, input the fundus image to be detected into an image generation model to obtain the corresponding cup-disc mask image, which characterizes the difference region between the optic cup and the optic disc in the fundus image to be detected, and finally, based on that cup-disc mask image, fit the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected. An image segmentation method is thereby obtained that segments image regions accurately and quickly.
Brief Description of the Drawings
Other features, objects, and advantages of the present disclosure will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure may be applied;
Fig. 2 is a flowchart of one embodiment of the segmentation method for fundus images according to the present disclosure;
Fig. 3 is a schematic diagram of an application scenario of the segmentation method for fundus images according to an embodiment of the present disclosure;
Fig. 4 is a flowchart of one implementation of the generation method of the above image generation model;
Fig. 5a and Fig. 5b are example diagrams of a sample fundus image and the corresponding sample mask image according to an embodiment of the present disclosure;
Fig. 6 is a structural schematic diagram of one embodiment of the segmentation device for fundus images according to an embodiment of the present disclosure;
Fig. 7 is a structural schematic diagram of an electronic device suitable for implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention, not to restrict it. It should also be noted that, for ease of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments in the present disclosure and the features in the embodiments can be combined with each other. The present disclosure is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary architecture 100 to which the segmentation method or the segmentation device for fundus images of the present disclosure can be applied.
As shown in Fig. 1, the system architecture 100 may include terminals 101 and 102, a network 103, a database server 104, and a server 105. The network 103 provides the medium for communication links between the terminals 101 and 102, the database server 104, and the server 105. The network 103 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user 110 can use the terminals 101 and 102 to interact with the server 105 through the network 103, to receive or send messages and the like. Various client applications can be installed on the terminals 101 and 102, such as model training applications, image processing applications, shopping applications, payment applications, web browsers, and instant messaging tools.
The terminals 101 and 102 here can be hardware or software. When the terminals 101 and 102 are hardware, they can be various electronic devices with a display screen, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, laptop portable computers, and desktop computers. When the terminals 101 and 102 are software, they can be installed in the electronic devices listed above. They can be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
The database server 104 can be a database server that provides various services. For example, in some application scenarios, a sample set can be stored in the database server 104. The sample set contains a large number of samples, where a sample may include a fundus image and a sample mask image corresponding to the fundus image. In this way, the user 110 can also select training samples, through the terminals 101 and 102, from the sample set stored in the database server 104.
The server 105 can likewise be a server that provides various services, such as a background server that supports the various applications running on the terminals 101 and 102. The background server can process the received fundus image to be detected and feed the processing result (the cup-disc mask image corresponding to the fundus image to be detected) back to the terminals 101 and 102. In some application scenarios, the background server can also train an initial generative adversarial network with the samples in the sample set sent by the terminals 101 and 102, and can send the training result (for example, the generated image generation model) to the terminals 101 and 102. In this way, a terminal user can use the generated image generation model to obtain a mask image characterizing the difference region between the optic cup and the optic disc in the fundus image to be detected.
The database server 104 and the server 105 here can likewise be hardware or software. When they are hardware, they can be implemented as a distributed server cluster composed of multiple servers, or as a single server. When they are software, they can be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be noted that the segmentation method for fundus images provided by embodiments of the present disclosure is generally executed by the server 105. Correspondingly, the segmentation device for fundus images is generally also arranged in the server 105.
It should be pointed out that, in the case where the server 105 can implement the relevant functions of the database server 104, the database server 104 need not be provided in the system architecture 100.
It should be understood that the numbers of terminals, networks, database servers, and servers in Fig. 1 are merely schematic. Any number of terminals, networks, database servers, and servers can be provided according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the segmentation method for fundus images according to the present disclosure is shown. The segmentation method for fundus images may comprise the following steps:
Step 201: obtain a fundus image to be detected.
In the present embodiment, the executing body of the method (for example, the server 105 shown in Fig. 1) can obtain the fundus image to be detected in various ways. For example, the executing body can obtain a fundus image to be detected stored in a database server (for example, the database server 104 shown in Fig. 1). As another example, the executing body can receive a fundus image to be detected collected by a terminal (for example, the terminals 101 and 102 shown in Fig. 1) or by other eye detection devices.
Here, a fundus image generally refers to an image containing an optic cup region and an optic disc region. It can be a color image (for example an RGB (Red, Green, Blue) photo) or a grayscale image. The format of the image is not limited in this application; formats such as JPG (Joint Photographic Experts Group), BMP (Bitmap), or RAW (RAW Image Format, a lossless format) are all acceptable, as long as the executing body can read and identify them.
Step 202: input the fundus image to be detected into the image generation model, to obtain the cup-disc mask image corresponding to the fundus image to be detected, where the cup-disc mask image characterizes the difference region between the optic cup and the optic disc in the fundus image to be detected.
In the present embodiment, the image generation model can be an artificial neural network. Based on the fundus image to be detected acquired in step 201, the executing body can input the fundus image to be detected into a pre-trained artificial neural network to determine the corresponding cup-disc mask image. Here, the cup-disc mask image corresponding to the fundus image to be detected characterizes the difference region between the optic cup and the optic disc in that image. The optic disc is a clearly bounded, pale red, disc-shaped structure about 1.5 mm in diameter, located on the retina about 3 mm to the nasal side of the macula. The optic cup is the white cup-shaped region at the center of the optic disc. The image generation model can be obtained by training with the difference regions between the optic cup and the optic disc in annotated fundus images. It can be understood that the difference region between the optic cup and the optic disc can be expressed in diverse forms; for example, the boundary coordinate values of the difference region can be used to indicate the position of the difference region on the fundus image.
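As a purely illustrative rendering of the boundary-coordinate representation mentioned above, the sketch below, assuming NumPy and a binary mask where 1 marks the difference region (the 7×7 mask is made up for the example), lists the coordinates of region pixels that touch the background:

```python
import numpy as np

# Hypothetical binary mask of a cup-disc difference region (1 = inside).
mask = np.zeros((7, 7), dtype=int)
mask[2:5, 2:5] = 1

# A pixel is on the boundary if it is inside the region but has at least
# one 4-connected neighbour outside it (padding treats the edge as outside).
padded = np.pad(mask, 1)
neighbour_min = np.minimum.reduce([
    padded[:-2, 1:-1], padded[2:, 1:-1],   # up, down
    padded[1:-1, :-2], padded[1:-1, 2:],   # left, right
])
boundary = (mask == 1) & (neighbour_min == 0)
coords = np.argwhere(boundary)  # (row, col) boundary coordinates
```

For the 3×3 block above, the eight outer pixels are reported as the boundary and the single interior pixel is not.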
In some optional implementations of the present embodiment, the image generation model can be generated by the method described in the implementation of Fig. 4. For the specific generation process, refer to the associated description of that implementation.
Step 203: based on the cup-disc mask image corresponding to the fundus image to be detected, fit the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected.
In the present embodiment, since the boundary of the optic cup and the boundary of the optic disc are usually elliptical, the executing body can use this characteristic to locate the boundaries of the optic cup and the optic disc in the fundus image, and then obtain, through a fitting process on the cup-disc mask image, the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected.
In some optional implementations of the present embodiment, the method further comprises: fitting the inner boundary and the outer boundary of the cup-disc mask image corresponding to the fundus image to be detected with an ellipse fitting method, to obtain the mask image of the optic cup region and the mask image of the optic disc region.
In this optional implementation, having obtained the cup-disc mask image corresponding to the fundus image to be detected, which characterizes the difference region between the optic cup and the optic disc, the executing body can perform a pixel value inversion operation on the cup-disc mask image to obtain the connected component of the optic cup region, and extract the inner boundary of the cup-disc mask image, i.e., the boundary of the optic cup region. Then, the extracted optic cup region and the cup-disc mask image corresponding to the fundus image to be detected are superimposed, and the outer boundary of the cup-disc mask image, i.e., the boundary of the optic disc region, is extracted. Finally, ellipse fitting is applied separately to the extracted boundary of the optic cup region and the extracted boundary of the optic disc region, to obtain the mask image of the optic cup region and the mask image of the optic disc region.
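The extract-then-fit pipeline can be sketched with NumPy alone. For brevity this illustration fits circles (a special case of the ellipses described here) by linear least squares to the inner and outer boundaries of a synthetic ring-shaped cup-disc mask; a production version would use a dedicated ellipse-fitting routine, and all names and sizes below are hypothetical:

```python
import numpy as np

def boundary_pixels(region, against):
    """Pixels of `region` with a 4-connected neighbour in `against`."""
    p = np.pad(against, 1)
    touch = p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]
    return np.argwhere(region & touch)

def fit_circle(points):
    """Least-squares circle fit: solve x^2 + y^2 = 2a*x + 2b*y + c."""
    y, x = points[:, 0].astype(float), points[:, 1].astype(float)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    return (a, b), float(np.sqrt(c + a**2 + b**2))  # centre, radius

# Synthetic cup-disc mask: 1 on the ring between cup (r < 6) and disc (r < 12).
yy, xx = np.mgrid[0:31, 0:31]
r = np.hypot(xx - 15, yy - 15)
ring = (r >= 6) & (r < 12)

hole = r < 6        # in practice: a connected component of the inverted mask
outside = r >= 12   # background surrounding the disc

_, cup_radius = fit_circle(boundary_pixels(ring, hole))      # inner boundary
_, disc_radius = fit_circle(boundary_pixels(ring, outside))  # outer boundary
```

The inner boundary (adjacent to the inverted-mask hole) recovers the cup outline, and the outer boundary (adjacent to the background) recovers the disc outline, mirroring the inversion and superposition steps described above.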
In some optional implementations of the present embodiment, the method further comprises: determining boundary information of the optic cup region and the optic disc region in the fundus image to be detected, based on the mask image of the optic cup region and the mask image of the optic disc region; and controlling a display device to display the fundus image to be detected with the boundary information.
In this optional implementation, the boundary information of the optic cup region and the optic disc region in the fundus image to be detected can be obtained from the boundaries of the optic cup region and the optic disc region extracted from the above mask image of the optic cup region and mask image of the optic disc region. The boundary information here can be the coordinate position information of the boundary of the optic cup region and the boundary of the optic disc region.
In this optional implementation, the display device can be a device in communication connection with the executing body, used for displaying the images sent by the executing body (for example, the terminals 101 and 102 shown in Fig. 1). In practice, the executing body can send a control signal to the display device, and thereby control the display device to display the fundus image to be detected with the boundary information. For example, the pixels at the boundary coordinates of the optic cup and the optic disc can be set to a specified pixel value, so that the boundaries of the optic cup and the optic disc are highlighted in the fundus image.
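Setting the boundary pixels to a specified value, as just described, amounts to an indexed assignment; a minimal NumPy sketch in which the image, the coordinate list, and the highlight value 255 are all hypothetical:

```python
import numpy as np

# Hypothetical grayscale fundus image and boundary coordinates (row, col).
image = np.full((10, 10), 80, dtype=np.uint8)
boundary_coords = np.array([[2, 3], [2, 4], [3, 2], [4, 2], [3, 5], [4, 5]])

highlighted = image.copy()
# Specified pixel value that makes the cup/disc boundary stand out.
highlighted[boundary_coords[:, 0], boundary_coords[:, 1]] = 255
```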
In the present embodiment, the executing body can determine the boundary information of the optic cup region and the optic disc region in the fundus image to be detected based on the mask image of the optic cup region and the mask image of the optic disc region, and control the display device to display the fundus image to be detected with the boundary information. On the one hand, the fundus image to be detected can be displayed directly with the boundary information, so as to determine whether the generated image accurately completes the region segmentation. On the other hand, the generated image completes the segmentation of the optic cup region and the optic disc region in a single pass, after which a simple fitting process yields the optic cup region image and the optic disc region image, which improves the speed and accuracy of image segmentation.
With continued reference to Fig. 3, a schematic diagram of an application scenario of the segmentation method for fundus images according to an embodiment of the present disclosure is shown. In the application scenario of Fig. 3, the fundus image 302 to be detected is obtained from the user's terminal device 301; the server 303, which provides back-office support for the image generation model application, processes the fundus image 302 to obtain the corresponding cup-disc mask image 304, and finally obtains, through the fitting process, the mask image 305 of the optic cup region and the mask image 306 of the optic disc region in the fundus image to be detected.
The segmentation method for fundus images described above first obtains a fundus image to be detected. Then, the fundus image to be detected is input into the image generation model to obtain the corresponding cup-disc mask image. Finally, the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected are obtained through the image fitting process. The method achieves accurate segmentation of the optic cup and optic disc regions in a fundus image.
With continued reference to Fig. 4, a flowchart of one implementation of the generation method of the above image generation model is shown. The flow 400 of the generation method of the image generation model may comprise the following steps:
Step 401: obtain a sample set, where each sample in the sample set includes a fundus image and a sample mask image corresponding to the fundus image, the sample mask image characterizing the difference region between the optic cup and the optic disc in the fundus image of the corresponding sample.
In the present embodiment, the executing body can obtain an existing training sample set stored in a database server (for example, the database server 104 shown in Fig. 1). As another example, a user can collect training samples through a terminal (for example, the terminals 101 and 102 shown in Fig. 1). In this way, the executing body can receive the samples collected by the terminal and store these samples locally to generate the training sample set.
Here, the sample set may include at least one sample, where a sample may include a fundus image and a sample mask image corresponding to the fundus image. The sample mask image here can characterize the difference region between the optic cup and the optic disc in the fundus image of the corresponding sample. It can be understood that the sample mask image here can be obtained in advance by methods such as manual annotation. For example, the executing body can obtain the above mask image by annotating the positions of the optic cup region and the optic disc region in the fundus image. In the field of digital image processing, the executing body can also use a selected image or figure to block the image being processed (in whole or in part), so as to control the region or the course of the image processing. Such a selected image or figure is called a mask. A mask image can be a two-dimensional matrix array.
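The blocking operation just described, with the mask represented as a two-dimensional matrix, reduces to elementwise multiplication; a small NumPy sketch under those assumptions (the 5×5 image and centre-region mask are made up for illustration):

```python
import numpy as np

# Hypothetical 5x5 image and a binary mask selecting its centre 3x3 region.
image = np.arange(25, dtype=float).reshape(5, 5)
mask = np.zeros((5, 5))
mask[1:4, 1:4] = 1        # 1 = process this pixel, 0 = block it

masked = image * mask     # pixels outside the mask are zeroed out
```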
Fig. 5 a and Fig. 5 b are showing for sample eye fundus image according to an embodiment of the present disclosure and corresponding sample mask image
Example diagram.As shown in figure 5 a and 5b, Fig. 5 a is sample eye fundus image, wherein image-region 501 is the optic disk area in eye fundus image
Domain, image-region 502 are the optic cup regions in eye fundus image.Fig. 5 b is the corresponding sample exposure mask figure of above-mentioned sample eye fundus image
Picture, wherein area can be the differential area of optic cup and optic disk in sample eye fundus image in sample mask image.
Step 402: obtain an initial generative adversarial network, where the initial generative adversarial network includes a generation network and a discrimination network.
In the present embodiment, the executing body can obtain an initial generative adversarial network, which may include an initial generation network and an initial discrimination network. The executing body can use the generation network to predict the difference region between the optic cup and the optic disc in the fundus image of a selected sample, obtaining a predicted mask image corresponding to the fundus image of the sample. The discrimination network can be used to determine whether the predicted mask image output by the generation network for the fundus image of a sample is the true mask image corresponding to that fundus image.
The generation network can include, but is not limited to, at least one of the following: a deep neural network model, a Hidden Markov Model (HMM), a naive Bayesian model, or a Gaussian mixture model. The discrimination network can include, but is not limited to, at least one of the following: a linear regression model, linear discriminant analysis, a Support Vector Machine (SVM), or a neural network. It should be appreciated that the initial generative adversarial network can be an untrained generative adversarial network whose parameters have just been initialized, or a generative adversarial network that has been trained in advance.
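The adversarial relationship between the two networks can be illustrated with a deliberately tiny NumPy sketch. The linear "networks" G and D, the 8×8 toy arrays, and the binary cross-entropy losses below are stand-ins chosen for brevity, not the models the disclosure actually trains:

```python
import numpy as np

rng = np.random.default_rng(0)

def G(image, W):          # stand-in generation network: linear map + sigmoid
    return 1.0 / (1.0 + np.exp(-image @ W))

def D(mask, V):           # stand-in discrimination network: score in (0, 1)
    return 1.0 / (1.0 + np.exp(-float(np.mean(mask @ V))))

def bce(p, label):        # binary cross-entropy for a single score
    eps = 1e-7
    return -(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))

image = rng.random((8, 8))                            # toy "fundus image"
real_mask = (rng.random((8, 8)) > 0.5).astype(float)  # toy sample mask
W = rng.standard_normal((8, 8))
V = rng.standard_normal((8, 8))

fake_mask = G(image, W)
# Discrimination loss: real masks labelled 1, generated masks labelled 0.
d_loss = bce(D(real_mask, V), 1.0) + bce(D(fake_mask, V), 0.0)
# Generation loss: fool the discrimination network into labelling fakes real.
g_loss = bce(D(fake_mask, V), 1.0)
```

Minimizing `d_loss` sharpens the discrimination network while minimizing `g_loss` pushes the generation network toward masks the discriminator cannot tell from real ones, which is the adversarial objective the training steps below pursue.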
Step 403: select samples from the sample set and execute the training steps.
In the present embodiment, the executing body can select samples from the sample set obtained in step 401 and execute the training steps of step 4031 to step 4035. The manner of selecting samples and the number selected are not limited in the present disclosure; for example, the executing body can select at least one sample.
More specifically, the training steps include the following:
Step 4031: predict, with the generation network, the difference region between the optic cup and the optic disc in the fundus image of the selected sample, obtaining the predicted mask image corresponding to the fundus image of the sample.
In the present embodiment, above-mentioned executing subject can add default noise in the eye fundus image of the sample of selection, so
The eye fundus image input of the sample for being added to noise is generated into network afterwards, with the optic cup and view in the eye fundus image of forecast sample
Differential area between disk obtains the corresponding prediction mask image of eye fundus image of sample.For example, default noise here can be with
It is salt-pepper noise, Gaussian noise.Here the purpose for adding noise is to improve extensive energy to improve the anti-interference for generating network
Power.
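This noise-injection step can be sketched as follows. The two noise types are those named above; the noise amplitude and impulse density are illustrative assumptions, not values from the text:

```python
import numpy as np

def add_preset_noise(image, kind="gaussian", seed=0):
    """Superimpose preset noise on a fundus image (values assumed in [0, 1])
    before it enters the generation network. Sigma 0.05 and the 1% flip
    density are illustrative choices."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float32).copy()
    if kind == "gaussian":
        noisy += rng.normal(0.0, 0.05, size=image.shape).astype(np.float32)
    elif kind == "salt_pepper":
        flips = rng.random(image.shape)
        noisy[flips < 0.01] = 0.0   # pepper: dark impulses
        noisy[flips > 0.99] = 1.0   # salt: bright impulses
    return np.clip(noisy, 0.0, 1.0)

img = np.full((32, 32), 0.5, dtype=np.float32)
g = add_preset_noise(img, "gaussian")
sp = add_preset_noise(img, "salt_pepper")
```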
Step 4032: inputting the predicted mask image and the selected sample mask image into the discrimination network, to obtain class judgment results for the sample mask image and the corresponding predicted mask image.
In the present embodiment, the execution subject may input the predicted mask image obtained by the generation network in step 4031, together with the sample mask image corresponding to the fundus image of the selected sample, into the discrimination network. The discrimination network may output class judgment results for the sample mask image and the corresponding predicted mask image. In a generative adversarial network, the discrimination network is used to judge whether a synthesized image is consistent in appearance with the real image. If the judgment given by the discrimination network shows that the synthesized image generated by the generation network belongs to the same class as the real image, or if the discrimination network cannot distinguish which of the synthesized image and the real image is real, the synthesized image generated by the generation network may be considered highly similar to the real image. In the present embodiment, the class determined by the discrimination network may be whether the synthesized predicted mask image corresponding to the fundus image of a sample is the true sample mask image corresponding to that fundus image. As an example, the class judgment result here may be expressed based on class labels of the predicted mask image and the sample mask image: assuming that the image label of the sample mask image is 1, the predicted mask image may be judged as 0 or 1. It should be noted that the image labels may also be other preset information, and are not limited to the values 1 and 0. The loss function is then obtained based on the label of the sample mask image and the class judgment result.
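A minimal sketch of such a label-based loss, assuming the common binary cross-entropy form (the text does not fix a specific loss): the sample mask carries label 1, the predicted mask label 0, and `p_real` / `p_pred` are the discrimination network's probability outputs for the two images.

```python
import math

def discriminator_loss(p_pred, p_real, eps=1e-7):
    # Binary cross-entropy over the two labelled images: the true sample
    # mask should score towards 1, the predicted mask towards 0.
    return -(math.log(p_real + eps) + math.log(1.0 - p_pred + eps))
```

A perfectly discriminating network (p_real near 1, p_pred near 0) drives this loss towards zero; a fully confused one (both outputs 0.5) leaves it at 2·ln 2.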
Step 4033: comparing the class judgment result with a preset expected class judgment result.
In the present embodiment, when the class judgment result obtained in step 4032 reaches the preset expected class judgment result, the class judgment result may be considered to approach or approximate the preset expected class judgment result. The preset expected class judgment result may be set in advance by a person skilled in the art based on experience, and may be adjusted by a person skilled in the art.
As an example, the preset expected class judgment result may be that the discrimination network cannot distinguish the class of the predicted mask image from that of the sample mask image; that is, the probability predicted by the discrimination network for the class of the generated mask image is close to 0.5.
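This "close to 0.5" criterion can be sketched as a simple tolerance test (the tolerance value is an assumption; the text leaves it to the practitioner):

```python
def reaches_expected_judgment(p_generated, tolerance=0.05):
    """True when the discrimination network's probability for a generated
    mask image is within `tolerance` of 0.5, i.e. it can no longer tell
    the predicted mask from the sample mask."""
    return abs(p_generated - 0.5) <= tolerance
```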
Step 4034: determining, according to the comparison result, whether training of the generation network is complete.
In the present embodiment, according to the comparison result of step 4033, the execution subject may determine whether training of the generation network is complete. As an example, if multiple samples were selected in step 403 and the class judgment result of each sample reaches the preset expected class judgment result, the execution subject may determine that training of the generation network is complete. As another example, the execution subject may count the proportion of selected samples whose class judgment results reach the preset expected class judgment result; when this proportion reaches a preset training sample ratio (for example, 95%), it may be determined that training of the generation network is complete. If the execution subject determines that training of the generation network is complete, step 4035 may then be executed.
In some optional implementations of the present embodiment, if the execution subject determines that training of the generation network is not complete, the relevant parameters of the initial generative adversarial network may be adjusted, samples may be re-selected from the training sample set, and the above training steps may be re-executed. The parameters may be adjusted, for example, using a back-propagation algorithm. In this way, the initial generative adversarial network is trained in cycles, ensuring that an optimal generative adversarial network is finally obtained after iterative training.
It should be noted that the manner of re-selection here is likewise not limited in the present disclosure. For example, when the training sample set contains a large number of samples, the execution subject may select samples that have not previously been selected.
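The ratio-based completion check described above can be sketched as follows (the 95% threshold is the example given in the text):

```python
def training_complete(per_sample_reached, expected_ratio=0.95):
    """per_sample_reached: booleans, one per selected sample, saying whether
    that sample's class judgment result reached the preset expectation.
    Training counts as complete when the reached fraction meets the ratio."""
    reached = sum(1 for ok in per_sample_reached if ok)
    return reached / len(per_sample_reached) >= expected_ratio
```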
Step 4035: in response to determining that training of the generation network is complete, determining the generation network as the image generation model.
In the present embodiment, if the execution subject determines that training of the generation network is complete, the generation network (i.e., the trained generation network) may be used as the image generation model.
Optionally, the execution subject may store the generated image generation model locally, or may send it to a terminal or a database server.
The above method flow 400 trains a model based on a generative adversarial network and determines the trained generation network as the image generation model, yielding an image generation model capable of producing mask images that can pass for real. Through the continual adversarial interplay between the generation network and the discrimination network during training, the generation network is optimized, and an accurate and reliable image generation model is obtained. Performing fundus image segmentation with this image generation model therefore further improves the precision of segmenting the optic cup and optic disc regions in fundus images.
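The overall adversarial loop of flow 400 can be condensed into a toy sketch: a one-parameter "generator" is nudged until a logistic "discriminator" outputs roughly 0.5 for its generated masks. Both models and the update rule are placeholder assumptions; only the stopping criterion mirrors the text.

```python
import math

def train_until_confused(max_rounds=100):
    g = 0.0        # statistic of generated masks the generator controls
    target = 1.0   # statistic of true sample masks
    p = 0.0
    for round_idx in range(max_rounds):
        p = 1.0 / (1.0 + math.exp(-(g - target)))  # discriminator's belief
        if abs(p - 0.5) < 0.01:    # expected class judgment reached
            return round_idx, p    # training of the generator is complete
        g += 0.1 * (target - g)    # adversarial update of the generator
    return max_rounds, p

rounds, p = train_until_confused()
```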
With continued reference to Fig. 6, as an implementation of the method shown in Fig. 2, the present application provides an embodiment of an apparatus for segmenting fundus images. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied in various electronic devices.
As shown in Fig. 6, the fundus image segmentation apparatus 600 of the present embodiment may include: an acquisition unit 601 configured to acquire a fundus image to be detected; a generation unit 602 configured to input the fundus image to be detected into the image generation model, to obtain a cup-disc mask image corresponding to the fundus image to be detected, the cup-disc mask image characterizing the difference region between the optic cup and the optic disc in the fundus image to be detected; and a fitting unit 603 configured to fit, based on the cup-disc mask image corresponding to the fundus image to be detected, the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected.
In some embodiments, the apparatus 600 may further include: a determination unit (not shown) configured to determine, based on the mask image of the optic cup region and the mask image of the optic disc region, boundary information of the optic cup region and the optic disc region in the fundus image to be detected; and a display unit (not shown) configured to control a display device to display the fundus image to be detected containing the boundary information.
In some optional implementations of the present embodiment, the fitting unit 603 is further configured to fit the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected as follows: fitting the inner boundary and the outer boundary of the cup-disc mask image corresponding to the fundus image to be detected using an ellipse fitting method, to obtain the mask image of the optic cup region and the mask image of the optic disc region.
In some optional implementations of the present embodiment, the image generation model is generated as follows: acquiring a sample set, wherein a sample in the sample set includes a fundus image and a sample mask image corresponding to the fundus image, the sample mask image characterizing the difference region between the optic cup and the optic disc in the fundus image of the corresponding sample; acquiring an initial generative adversarial network, wherein the initial generative adversarial network includes a generation network and a discrimination network; selecting a sample from the sample set, and executing the following training steps: predicting, using the generation network, the difference region between the optic cup and the optic disc in the fundus image of the selected sample, to obtain a predicted mask image corresponding to the fundus image of the sample; inputting the predicted mask image and the selected sample mask image into the discrimination network, to obtain class judgment results of the sample mask image and the corresponding predicted mask image; comparing the class judgment results with a preset expected class judgment result; determining, according to the comparison result, whether training of the generation network is complete; and in response to determining that training of the generation network is complete, determining the generation network as the image generation model.
In some optional implementations of the present embodiment, the difference region between the optic cup and the optic disc in the fundus image of the selected sample is predicted using the generation network as follows: after noise is superimposed on the fundus image of the sample, the image is input into the generation network to predict the difference region between the optic cup and the optic disc in the fundus image of the sample.
It may be understood that the units recorded in the apparatus 600 correspond to the respective steps of the method described with reference to Fig. 2. Accordingly, the operations, features, and beneficial effects described above for the method are equally applicable to the apparatus 600 and the units included therein, and are not repeated here.
Referring now to Fig. 7, it shows a structural schematic diagram of an electronic device (for example, the server shown in Fig. 1) 700 suitable for implementing embodiments of the present disclosure. The server shown in Fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 7, the electronic device 700 may include a processing unit (such as a central processing unit or a graphics processor) 701, which may execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the electronic device 700. The processing unit 701, the ROM 702, and the RAM 703 are connected to one another via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
In general, the following devices may be connected to the I/O interface 705: an input device 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, or gyroscope; an output device 707 including, for example, a liquid crystal display (LCD), speaker, or vibrator; a storage device 708 including, for example, magnetic tape or hard disk; and a communication device 709. The communication device 709 may allow the electronic device 700 to communicate with other devices, wired or wirelessly, to exchange data. Although Fig. 7 shows the electronic device 700 with various devices, it should be understood that it is not required to implement or have all the devices shown; more or fewer devices may alternatively be implemented or provided. Each block shown in Fig. 7 may represent one device or, as needed, multiple devices.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 709, installed from the storage device 708, or installed from the ROM 702. When the computer program is executed by the processing unit 701, the above-described functions defined in the method of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium of embodiments of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In embodiments of the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program, which may be used by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which may send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted with any suitable medium, including but not limited to: electric wire, optical cable, RF (Radio Frequency), or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or it may exist alone without being assembled into the electronic device. The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a fundus image to be detected; input the fundus image to be detected into the image generation model, to obtain a cup-disc mask image corresponding to the fundus image to be detected, the cup-disc mask image characterizing the difference region between the optic cup and the optic disc in the fundus image to be detected; and fit, based on the cup-disc mask image corresponding to the fundus image to be detected, the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected.
The computer program code for executing the operations of embodiments of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to the various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including an acquisition unit, a generation unit, and a fitting unit; or a processor including an image acquisition unit and an image processing unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the acquisition unit may also be described as "a unit that acquires a fundus image to be detected".
The above description is merely a preferred embodiment of the present disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features; it should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the embodiments of the present disclosure.
Claims (12)
1. A method for segmenting a fundus image, comprising:
acquiring a fundus image to be detected;
inputting the fundus image to be detected into an image generation model, to obtain a cup-disc mask image corresponding to the fundus image to be detected, wherein the cup-disc mask image corresponding to the fundus image to be detected characterizes a difference region between an optic cup and an optic disc in the fundus image to be detected; and
fitting, based on the cup-disc mask image corresponding to the fundus image to be detected, a mask image of an optic cup region and a mask image of an optic disc region in the fundus image to be detected.
2. The method according to claim 1, wherein the fitting, based on the cup-disc mask image corresponding to the fundus image to be detected, a mask image of an optic cup region and a mask image of an optic disc region in the fundus image to be detected comprises:
fitting an inner boundary and an outer boundary of the cup-disc mask image corresponding to the fundus image to be detected using an ellipse fitting method, to obtain the mask image of the optic cup region and the mask image of the optic disc region.
3. The method according to claim 2, wherein the method further comprises:
determining, based on the mask image of the optic cup region and the mask image of the optic disc region, boundary information of the optic cup region and the optic disc region in the fundus image to be detected; and
controlling a display device to display the fundus image to be detected containing the boundary information.
4. The method according to claim 1, wherein the image generation model is generated as follows:
acquiring a sample set, wherein a sample in the sample set comprises a fundus image and a sample mask image corresponding to the fundus image, the sample mask image characterizing a difference region between an optic cup and an optic disc in the fundus image of the corresponding sample;
acquiring an initial generative adversarial network, wherein the initial generative adversarial network comprises a generation network and a discrimination network; and
selecting a sample from the sample set, and executing the following training steps: predicting, using the generation network, the difference region between the optic cup and the optic disc in the fundus image of the selected sample, to obtain a predicted mask image corresponding to the fundus image of the sample; inputting the predicted mask image and the selected sample mask image into the discrimination network, to obtain class judgment results of the sample mask image and the corresponding predicted mask image; comparing the class judgment results with a preset expected class judgment result; determining, according to the comparison result, whether training of the generation network is complete; and in response to determining that training of the generation network is complete, determining the generation network as the image generation model.
5. The method according to claim 4, wherein the difference region between the optic cup and the optic disc in the fundus image of the selected sample is predicted using the generation network as follows:
inputting the fundus image of the sample, after noise is superimposed thereon, into the generation network to predict the difference region between the optic cup and the optic disc in the fundus image of the sample.
6. An apparatus for segmenting a fundus image, comprising:
an acquisition unit configured to acquire a fundus image to be detected;
a generation unit configured to input the fundus image to be detected into an image generation model, to obtain a cup-disc mask image corresponding to the fundus image to be detected, wherein the cup-disc mask image corresponding to the fundus image to be detected characterizes a difference region between an optic cup and an optic disc in the fundus image to be detected; and
a fitting unit configured to fit, based on the cup-disc mask image corresponding to the fundus image to be detected, a mask image of an optic cup region and a mask image of an optic disc region in the fundus image to be detected.
7. The apparatus according to claim 6, wherein the fitting unit is further configured to fit the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected as follows:
fitting an inner boundary and an outer boundary of the cup-disc mask image corresponding to the fundus image to be detected using an ellipse fitting method, to obtain the mask image of the optic cup region and the mask image of the optic disc region.
8. The apparatus according to claim 7, wherein the apparatus further comprises:
a determination unit configured to determine, based on the mask image of the optic cup region and the mask image of the optic disc region, boundary information of the optic cup region and the optic disc region in the fundus image to be detected; and
a display unit configured to control a display device to display the fundus image to be detected containing the boundary information.
9. The apparatus according to claim 6, wherein the image generation model is generated as follows:
acquiring a sample set, wherein a sample in the sample set comprises a fundus image and a sample mask image corresponding to the fundus image, the sample mask image characterizing a difference region between an optic cup and an optic disc in the fundus image of the corresponding sample;
acquiring an initial generative adversarial network, wherein the initial generative adversarial network comprises a generation network and a discrimination network; and
selecting a sample from the sample set, and executing the following training steps: predicting, using the generation network, the difference region between the optic cup and the optic disc in the fundus image of the selected sample, to obtain a predicted mask image corresponding to the fundus image of the sample; inputting the predicted mask image and the selected sample mask image into the discrimination network, to obtain class judgment results of the sample mask image and the corresponding predicted mask image; comparing the class judgment results with a preset expected class judgment result; determining, according to the comparison result, whether training of the generation network is complete; and in response to determining that training of the generation network is complete, determining the generation network as the image generation model.
10. The apparatus according to claim 9, wherein the difference region between the optic cup and the optic disc in the fundus image of the selected sample is predicted using the generation network as follows:
inputting the fundus image of the sample, after noise is superimposed thereon, into the generation network to predict the difference region between the optic cup and the optic disc in the fundus image of the sample.
11. An electronic device, comprising:
one or more processors; and
a storage device on which one or more programs are stored,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
12. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910590552.5A CN110298850B (en) | 2019-07-02 | 2019-07-02 | Segmentation method and device for fundus image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110298850A true CN110298850A (en) | 2019-10-01 |
CN110298850B CN110298850B (en) | 2022-03-15 |
Family
ID=68029938
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910590552.5A Active CN110298850B (en) | 2019-07-02 | 2019-07-02 | Segmentation method and device for fundus image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110298850B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108520522A (en) * | 2017-12-31 | 2018-09-11 | 南京航空航天大学 | Retinal fundus images dividing method based on the full convolutional neural networks of depth |
CN109325942A (en) * | 2018-09-07 | 2019-02-12 | 电子科技大学 | Eye fundus image Structural Techniques based on full convolutional neural networks |
CN109658385A (en) * | 2018-11-23 | 2019-04-19 | 上海鹰瞳医疗科技有限公司 | Eye fundus image judgment method and equipment |
CN109684981A (en) * | 2018-12-19 | 2019-04-26 | 上海鹰瞳医疗科技有限公司 | Glaucoma image-recognizing method, equipment and screening system |
CN109829877A (en) * | 2018-09-20 | 2019-05-31 | 中南大学 | A kind of retinal fundus images cup disc ratio automatic evaluation method |
Non-Patent Citations (1)
Title |
---|
YUN JIANG 等: "Optic Disc and Cup Segmentation Based on Deep Convolutional Generative Adversarial Networks", 《IEEE ACCESS》 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110969617A (en) * | 2019-12-17 | 2020-04-07 | Tencent Healthcare (Shenzhen) Co., Ltd. | Optic cup and optic disc image recognition method, apparatus, device, and storage medium |
CN110969617B (en) * | 2019-12-17 | 2024-03-15 | Tencent Healthcare (Shenzhen) Co., Ltd. | Optic cup and optic disc image recognition method, apparatus, device, and storage medium |
WO2021159643A1 (en) * | 2020-02-11 | 2021-08-19 | Ping An Technology (Shenzhen) Co., Ltd. | Eye OCT image-based optic cup and optic disc positioning point detection method and apparatus |
CN112001920B (en) * | 2020-10-28 | 2021-02-05 | Beijing Zhenhealth Technology Co., Ltd. | Fundus image recognition method, device and equipment |
US11620763B2 | 2020-10-28 | 2023-04-04 | Beijing Zhenhealth Technology Co., Ltd. | Method and device for recognizing fundus image, and equipment |
CN113450341A (en) * | 2021-07-16 | 2021-09-28 | Yiwei Technology (Beijing) Co., Ltd. | Image processing method and device, computer-readable storage medium, and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN110298850B (en) | 2022-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109508681A (en) | | Method and apparatus for generating a human body keypoint detection model |
CN110298850A (en) | | Fundus image segmentation method and device |
CN107622240B (en) | | Face detection method and device |
CN108898185A (en) | | Method and apparatus for generating an image recognition model |
CN107644209A (en) | | Human face detection method and device |
CN108197618B (en) | | Method and device for generating a face detection model |
CN108509915A (en) | | Method and device for generating a face recognition model |
CN108446651A (en) | | Face recognition method and device |
CN109086719A (en) | | Method and apparatus for outputting data |
CN110021052A (en) | | Method and apparatus for generating a model for generating fundus images |
CN108280477A (en) | | Method and apparatus for clustering images |
CN109446990A (en) | | Method and apparatus for generating information |
CN109993150A (en) | | Method and apparatus for age recognition |
CN109034069A (en) | | Method and apparatus for generating information |
CN108491823A (en) | | Method and apparatus for generating an eye recognition model |
CN108509921A (en) | | Method and apparatus for generating information |
CN109472264A (en) | | Method and apparatus for generating an object detection model |
CN108491812A (en) | | Method and device for generating a face recognition model |
CN110009626A (en) | | Method and apparatus for generating images |
CN109241934A (en) | | Method and apparatus for generating information |
CN108133197A (en) | | Method and apparatus for generating information |
CN110070076A (en) | | Method and apparatus for selecting training samples |
CN109242043A (en) | | Method and apparatus for generating an information prediction model |
CN108171208A (en) | | Information acquisition method and device |
CN111931628B (en) | | Face recognition model training method and device, and related equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||