CN110517248A - Fundus image processing and training methods, apparatus, and devices - Google Patents

Fundus image processing and training methods, apparatus, and devices

Info

Publication number
CN110517248A
CN110517248A (application CN201910796692.8A)
Authority
CN
China
Prior art keywords
image
fundus
macula
optic disc
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910796692.8A
Other languages
Chinese (zh)
Inventor
孙钦佩
杨叶辉
王磊
许言午
黄艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority application: CN201910796692.8A
Publication: CN110517248A
Legal status: Pending

Classifications

    • G06F 18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06T 7/0012 — Image analysis: biomedical image inspection
    • G06T 7/70 — Image analysis: determining position or orientation of objects or cameras
    • G06V 10/25 — Image preprocessing: determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 40/10 — Recognition of biometric patterns: human or animal bodies; body parts
    • G06V 40/18 — Recognition of biometric patterns: eye characteristics, e.g. of the iris
    • G16H 50/20 — Healthcare informatics: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G06T 2207/30041 — Image analysis indexing scheme: eye; retina; ophthalmic
    • G06T 2207/30101 — Image analysis indexing scheme: blood vessel; artery; vein; vascular
    • G06V 40/14 — Recognition of biometric patterns: vascular patterns

Abstract

This application discloses a fundus image processing method, a training method, and corresponding apparatus and devices, relating to the field of artificial intelligence. In a specific implementation, the processing method is applied to a terminal device connected to an image acquisition unit, and includes: obtaining a to-be-detected fundus image acquired by the image acquisition unit, the image containing key fundus structures that include at least the optic disc and the blood vessels; and inputting the to-be-detected fundus image into a preset generation model so that the model identifies the positions of the optic disc and the blood vessels in the image, the preset generation model having been obtained by training a generative adversarial network.

Description

Fundus image processing and training methods, apparatus, and devices
Technical field
This application relates to the field of image processing technology, in particular to the field of artificial intelligence, and more particularly to fundus image processing and training methods, apparatus, and devices.
Background technique
Computer-aided diagnosis is a common auxiliary means in fundus examination. For example, computer techniques are used to extract information about key ocular structures from a fundus image, so that a doctor can determine, based on the extracted information, whether a patient is at risk. Whether key ocular structure information can be extracted quickly and accurately therefore determines the clinical practicability of computer-aided diagnosis.
Summary of the invention
The embodiments of this application provide fundus image processing and training methods, apparatus, and devices that identify a to-be-detected fundus image using a preset recognition model obtained by training a generative adversarial network, thereby improving the accuracy with which key fundus structure information is extracted.
In a first aspect, an embodiment of this application provides a fundus image processing method applied to a terminal device connected to an image acquisition unit. The method includes: obtaining a to-be-detected fundus image acquired by the image acquisition unit, the image containing key fundus structures that include at least the optic disc and the blood vessels; and inputting the to-be-detected fundus image into a preset generation model so that the model identifies the positions of the optic disc and the blood vessels in the image, the preset generation model having been obtained by training a generative adversarial network.
In this embodiment, the to-be-detected fundus image containing key fundus structures (at least the optic disc and the blood vessels) is obtained from the image acquisition unit and input into the preset generation model, which identifies the positions of the optic disc and the blood vessels in the image. Because the model used to identify the to-be-detected fundus image was obtained by training a generative adversarial network, the accuracy of extracting key fundus structure information is improved.
Optionally, the key fundus structures further include the macular fovea. After the to-be-detected fundus image is input into the preset generation model to identify the positions of the optic disc and the blood vessels, the method further includes: inputting the positions of the optic disc and the blood vessels identified by the preset generation model into a preset regression model, so that the regression model determines the position of the macular fovea in the to-be-detected fundus image, the preset regression model having been obtained by training a regression network.
By inputting the optic disc and blood vessel positions identified by the preset generation model into the preset regression model, which was obtained by training a regression network, the position of the macular fovea is determined. The positions of the optic disc, the blood vessels, and the macular fovea in the to-be-detected fundus image can thus be extracted in a single pass, which increases the speed at which these positions are extracted.
Optionally, after the positions of the optic disc and the blood vessels identified by the preset generation model are input into the preset regression model and the position of the macular fovea is determined, the method further includes: displaying the to-be-detected fundus image with the optic disc, blood vessel, and macular fovea positions identified.
Displaying the to-be-detected fundus image with the identified optic disc, blood vessel, and macular fovea positions provides a basis for computer-aided diagnosis and offers better visibility and intuitiveness.
Optionally, displaying the to-be-detected fundus image with the identified optic disc, blood vessel, and macular fovea positions includes: marking the positions of the identified optic disc, blood vessels, and macular fovea in the to-be-detected fundus image; and displaying the marked to-be-detected fundus image.
By marking the positions of the identified optic disc, blood vessels, and macular fovea in the to-be-detected fundus image and displaying the marked image, a diagnostic basis is provided for the doctor, and a fundus image labeled with the specific locations of the optic disc, blood vessels, and macular fovea is more visible and intuitive.
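The optional marking-and-display step can be sketched as follows — a minimal NumPy illustration, assuming the identified structures have been reduced to center coordinates; the `draw_marker` helper and all coordinate values are hypothetical and stand in for the application's actual display module:

```python
import numpy as np

def draw_marker(image, center, radius, value):
    """Draw a hollow circle of the given pixel value onto a grayscale image."""
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
    ring = np.abs(dist - radius) < 1.0  # pixels lying on the circle outline
    image[ring] = value
    return image

# A blank stand-in for the to-be-detected fundus image.
fundus = np.zeros((64, 64), dtype=np.uint8)

# Hypothetical positions returned by the models, as (row, col).
optic_disc_center = (20, 44)
fovea_center = (32, 20)

fundus = draw_marker(fundus, optic_disc_center, radius=6, value=255)
fundus = draw_marker(fundus, fovea_center, radius=4, value=128)

print(int(fundus.max()), int((fundus == 128).any()))
```

The marked array would then be handed to whatever display surface the terminal device provides.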
In a second aspect, an embodiment of this application provides a fundus image training method, including: obtaining fundus training images with annotation information, the annotation information including at least optic disc position coordinates, blood vessel position coordinates, and the image source; inputting the fundus training images into a pre-built generation network to generate predicted fundus images; discriminating, based on the predicted fundus images, the image source corresponding to each fundus training image; and adjusting the network parameters of the pre-built generation network based at least on the opposition between the annotated image source and the discriminated image source.
Optionally, the annotation information further includes the position coordinates of the macular fovea. After the fundus training images with annotation information are obtained, the method further includes: determining the optic disc and blood vessel position coordinates in each fundus training image; inputting the fundus training images, with their optic disc and blood vessel position coordinates determined, into a pre-built regression network, so that the regression network determines the position coordinates of the macular fovea from the determined optic disc and blood vessel coordinates; and adjusting the network parameters of the pre-built regression network according to the difference between the annotated and the determined macular fovea position coordinates.
Optionally, the annotation information further includes the position coordinates of the macular fovea. After the fundus training images are input into the pre-built generation network to generate predicted fundus images, the method further includes: inputting the predicted fundus images, which contain optic disc and blood vessel position coordinates, into a pre-built regression network so that it determines the position coordinates of the macular fovea; adjusting the network parameters of the pre-built regression network according to the difference between the determined and the annotated macular fovea position coordinates; and repeatedly adjusting the network parameters of the pre-built generation network and of the pre-built regression network to obtain the preset generation model and the preset regression model, respectively.
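The repeated joint adjustment described in this aspect can be summarized structurally — a sketch only, with the actual network updates replaced by counting stubs; the function names and the epoch count are invented for illustration and are not the application's implementation:

```python
# Counters standing in for parameter updates of the three networks.
updates = {"generator": 0, "discriminator": 0, "regressor": 0}

def adversarial_step():
    # In the full method: generate predicted fundus images, discriminate
    # their image source, and adjust the generation network's parameters.
    updates["discriminator"] += 1
    updates["generator"] += 1

def regression_step():
    # In the full method: regress the fovea position from the predicted
    # optic disc / vessel coordinates and adjust the regression network
    # toward the annotated fovea coordinates.
    updates["regressor"] += 1

EPOCHS = 5
for _ in range(EPOCHS):
    adversarial_step()   # adversarial adjustment of the generation network
    regression_step()    # fovea-regression adjustment of the regression network

print(updates)
```

The point of the schedule is that both adjustments repeat until the preset generation model and preset regression model are obtained.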
In a third aspect, an embodiment of this application provides a fundus image processing apparatus connected to an image acquisition unit. The processing apparatus includes: a first obtaining module, configured to obtain a to-be-detected fundus image acquired by the image acquisition unit, the image containing key fundus structures that include at least the optic disc and the blood vessels; and an identification module, configured to input the to-be-detected fundus image into a preset generation model so that the model identifies the positions of the optic disc and the blood vessels in the image, the preset generation model having been obtained by training a generative adversarial network.
Optionally, the key fundus structures further include the macular fovea. The identification module is further configured to input the optic disc and blood vessel positions identified by the preset generation model into a preset regression model, so that the regression model determines the position of the macular fovea in the to-be-detected fundus image, the preset regression model having been obtained by training a regression network.
Optionally, the apparatus further includes a display module configured to display the to-be-detected fundus image with the optic disc, blood vessel, and macular fovea positions identified.
Optionally, when displaying the to-be-detected fundus image with the identified optic disc, blood vessel, and macular fovea positions, the display module is specifically configured to: mark the positions of the identified optic disc, blood vessels, and macular fovea in the to-be-detected fundus image; and display the marked to-be-detected fundus image.
In a fourth aspect, an embodiment of this application provides a fundus image training apparatus, including: a second obtaining module, configured to obtain fundus training images with annotation information, the annotation information including at least optic disc position coordinates, blood vessel position coordinates, and the image source; a prediction module, configured to input the fundus training images into a pre-built generation network to generate predicted fundus images; a discrimination module, configured to discriminate, based on the predicted fundus images, the image source corresponding to each fundus training image; and an adjustment module, configured to adjust the network parameters of the pre-built generation network based at least on the opposition between the annotated image source and the discriminated image source.
Optionally, the annotation information further includes the position coordinates of the macular fovea, and the apparatus further includes a first determination module configured to determine the optic disc and blood vessel position coordinates in each fundus training image, and to input the fundus training images, with those coordinates determined, into a pre-built regression network so that it determines the position coordinates of the macular fovea from them. The adjustment module is further configured to adjust the network parameters of the pre-built regression network according to the difference between the annotated and the determined macular fovea position coordinates.
Optionally, the annotation information further includes the position coordinates of the macular fovea, and the apparatus further includes a second determination module configured to input the predicted fundus images, which contain optic disc and blood vessel position coordinates, into a pre-built regression network so that it determines the position coordinates of the macular fovea. The adjustment module is further configured to adjust the network parameters of the pre-built regression network according to the difference between the determined and the annotated macular fovea position coordinates, and to repeatedly adjust the network parameters of the pre-built generation network and of the pre-built regression network to obtain the preset generation model and the preset regression model, respectively.
In a fifth aspect, an embodiment of this application provides a fundus image processing device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method described in the first aspect.
In a sixth aspect, an embodiment of this application provides a fundus image training device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method described in the second aspect.
In a seventh aspect, an embodiment of this application provides a non-transitory computer-readable storage medium storing computer instructions, the computer instructions being configured to cause a computer to perform the methods described in the first aspect and the second aspect.
In an eighth aspect, an embodiment of this application provides a fundus image processing method applied to a terminal device connected to an image acquisition unit. The method includes: obtaining a to-be-detected fundus image containing key fundus structures acquired by the image acquisition unit; and inputting the to-be-detected fundus image into a preset generation model to identify the positions of the optic disc and the blood vessels in the image, the preset generation model having been obtained by training a generative adversarial network.
Optionally, after the to-be-detected fundus image is input into the preset recognition model to identify the positions of the key fundus structures, the method further includes: displaying the to-be-detected fundus image with the positions of the key fundus structures identified.
Optionally, displaying the to-be-detected fundus image with the positions of the key fundus structures identified includes: marking the positions of the identified key fundus structures in the to-be-detected fundus image; and displaying the marked to-be-detected fundus image.
An embodiment of the above application has the following advantage or beneficial effect: key fundus structure information can be extracted more accurately, and the segmentation speed of key fundus structures is improved. Because the preset recognition model is obtained by training a generative adversarial network, the prior-art problems of high fundus image quality requirements and insufficient extraction accuracy for key fundus structures are overcome, achieving more accurate extraction of key fundus structure information and faster segmentation of key fundus structures.
Other effects of the above optional implementations are described below in conjunction with specific embodiments.
Detailed description of the invention
The accompanying drawings facilitate understanding of this solution and do not limit this application. In the drawings:
Fig. 1 is a diagram of an application scenario in which the embodiments of this application can be implemented;
Fig. 2 is a flowchart of a fundus image processing method according to an embodiment of this application;
Fig. 3 is a fundus image provided by an embodiment of this application;
Fig. 4 is a schematic diagram of a recognition model provided by an embodiment of this application;
Fig. 5 is a flowchart of a fundus image processing method according to an embodiment of this application;
Fig. 6 is a flowchart of a fundus image processing method according to an embodiment of this application;
Fig. 7 is a schematic diagram of a fundus image processing apparatus according to an embodiment of this application;
Fig. 8 is a schematic diagram of a fundus image training apparatus according to an embodiment of this application;
Fig. 9 is a block diagram of an electronic device for implementing the fundus image processing method of an embodiment of this application.
Specific embodiment
Exemplary embodiments of this application are explained below with reference to the accompanying drawings, including various details of the embodiments to aid understanding; these should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of this application. Likewise, for clarity and conciseness, descriptions of well-known functions and structures are omitted from the following description.
The fundus image processing method provided by the embodiments of this application can be applied to devices with image analysis and processing functions, such as terminal devices like computers and iPads. When the solution of this embodiment is applied to such a terminal device, a fundus image can be acquired by an image acquisition unit provided on the terminal device, and the processing method can then be executed by the terminal device's processor. Alternatively, the fundus image can be acquired by an external image acquisition unit and transmitted to the terminal device over a wired or wireless connection, with the processing method executed by the terminal device's processor. The fundus image processing method provided by the embodiments of this application is described in detail below for the application scenario in which the fundus image is acquired by an external image acquisition unit, transmitted to the terminal device over a wired or wireless connection, and processed by the terminal device's processor:
As shown in Fig. 1, the application scenario includes an image acquisition unit 10 and a terminal device 11, which can communicate over a wired or wireless connection. Optionally, the image acquisition unit 10 can be an image sensor such as a fundus detector; it can acquire fundus images and send them to the terminal device 11. The terminal device 11 is a device with a display screen and a built-in processor, such as a computer or an iPad. The display screen can show the acquired fundus image or a fundus image processed by the method of the embodiments of this application, and the processor inside the terminal device can process the fundus images acquired by the image acquisition unit 10.
Fig. 2 is a schematic diagram according to a first embodiment of this application. As shown in Fig. 2, a fundus image processing method includes:
Step 201: obtain a to-be-detected fundus image containing key fundus structures acquired by the image acquisition unit, where the key fundus structures include at least the optic disc and the blood vessels.
The to-be-detected fundus image in this embodiment can be a fundus image captured during a fundus examination of a subject, for example, a fundus image of a person captured during their fundus examination. Specifically, it can be collected by the image acquisition unit 10 shown in Fig. 1 and sent to the terminal device 11.
A fundus image is an image of the tissue at the back of the eyeball, including some key fundus structures such as the optic disc, the optic cup, the blood vessels, and the macular fovea. For the specific locations of the optic disc, the blood vessels, and the macular fovea, refer to Fig. 3. Specifically:
The optic disc (optic disc) is a well-defined, pale-red, disc-shaped structure about 1.5 mm in diameter, located on the retina about 3 mm nasal to the macula. Also called the optic papilla, it is the point where the optic nerve fibers converge on the retina and exit the eyeball, i.e., the beginning of the optic nerve, as at location A circled in Fig. 3.
The macular fovea (fovea) is a small depression at the center of the macula, which lies at the center of the retina, lateral to the optic disc. Because of the high concentration of macular pigment, it appears darker than the surrounding retinal tissue, as at location B circled in Fig. 3.
The blood vessels (blood vessel) are distributed throughout the fundus region, as at location C indicated by the arrow in Fig. 3. It should be understood that the vessel location C indicated by the arrow in Fig. 3 is only part of the vasculature and does not represent all the blood vessels in the fundus.
Optionally, the fundus image can be a color image or a grayscale image.
Step 202: input the to-be-detected fundus image into the preset generation model so that the model identifies the positions of the optic disc and the blood vessels in the to-be-detected fundus image, the preset generation model having been obtained by training a generative adversarial network.
As shown in Fig. 1, after the to-be-detected fundus image is input into the preset generation model, the model can automatically identify, from the input image, that the positions of the optic disc and the blood vessels in the to-be-detected fundus image are A and B.
A generative adversarial network (GAN) includes a generation network and a discrimination network. The generation network generates a corresponding prediction map from the input fundus image; the discrimination network discriminates whether the prediction map produced by the generation network is a real image or a generated image. The discrimination result can be returned to the generation network so that the generation network adjusts its own parameters, until the prediction maps it generates can no longer be distinguished by the discrimination network as real or generated. The preset generation model, which contains the trained generation network, is then obtained.
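The adversarial dynamic described above can be illustrated with a deliberately tiny example — a one-dimensional GAN with a linear generator and a logistic discriminator, trained by manual gradient ascent. This is a toy analogue of the image-to-image setting in this application, not its actual networks; the data distribution, learning rates, and step count are all arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def sample_real(n):
    # Scalars from N(4, 0.5) stand in for "real" images.
    return rng.normal(4.0, 0.5, n)

# Generator G(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr_d, lr_g = 0.05, 0.02

for _ in range(3000):
    z = rng.normal(0.0, 1.0, 128)
    x_real = sample_real(128)
    x_fake = a * z + b

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    s_real = sigmoid(w * x_real + c)
    s_fake = sigmoid(w * x_fake + c)
    w += lr_d * np.mean((1 - s_real) * x_real - s_fake * x_fake)
    c += lr_d * np.mean((1 - s_real) - s_fake)

    # Generator ascent on the non-saturating objective log D(fake).
    s_fake = sigmoid(w * x_fake + c)
    a += lr_g * np.mean((1 - s_fake) * w * z)
    b += lr_g * np.mean((1 - s_fake) * w)

# The generator's output mean (b, since E[z] = 0) drifts from 0 toward
# the real mean 4 as the two networks adjust against each other.
print(round(float(b), 2))
```

The same alternating-update structure, with convolutional networks and prediction maps in place of scalars, underlies the training of the preset generation model.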
In this embodiment, the to-be-detected fundus image containing key fundus structures (at least the optic disc and the blood vessels) is obtained and input into the preset generation model, which identifies the positions of the optic disc and the blood vessels. Because the preset generation model is obtained by training a generative adversarial network, once the fundus image is input into the trained generation network, the network can perform pixel-level segmentation of the blood vessels and the optic disc in the image, achieving a more accurate segmentation result. In addition, the generation network is not particularly demanding about image quality.
Fig. 4 is a schematic diagram of the identification model provided by the embodiments of the present application. Optionally, the eyeground key structure further includes the central fovea of macula. After the eye fundus image to be detected is input into the preset generation model to identify the positions of the optic disk and the blood vessel, the method of the embodiment of the present application may further include the following step: inputting the positions of the optic disk and the blood vessel identified by the preset generation model into a preset regression model to determine the position of the central fovea of macula, wherein the preset regression model is obtained based on regression network training. As shown in Fig. 4, the identification model includes a feature extraction network 41, a first preset generation model 42, a second preset generation model 43 and a preset regression model 44. After the eye fundus image to be detected shown in Fig. 3 is input into the feature extraction network 41, the feature extraction network extracts an optic disk feature map and a blood vessel feature map respectively. After the optic disk feature map and the blood vessel feature map are input into the first preset generation model 42 and the second preset generation model 43 respectively, the first preset generation model 42 and the second preset generation model 43 segment, according to the optic disk feature map and the blood vessel feature map, an optic disk prediction map carrying the optic disk position coordinates and a blood vessel prediction map carrying the vessel position coordinates. The optic disk prediction map and the blood vessel prediction map generated from the optic disk feature map and the blood vessel feature map are then further input into the preset regression model 44. Since there is a certain correlation between the positions of the optic disk and the blood vessel, on the one hand, and the position of the central fovea of macula, on the other hand, the preset regression model 44 can determine the position of the central fovea of macula according to the optic disk position coordinates in the optic disk prediction map and the vessel position coordinates in the blood vessel prediction map. As shown in Fig. 4, after the eye fundus image to be detected is input into the preset generation models, the preset regression model performs regression on the coordinates of the optic disk position A (the white area in the figure) and of the vessel position B (the white area in the figure) output by the preset generation models for the eye fundus image to be detected, and obtains the coordinates of the central fovea position C (the white area in the figure).
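The two-stage inference flow of Fig. 4 described above can be sketched as follows. This is a minimal sketch under stated assumptions: the stub functions (`feature_extractor`, `disk_generator`, `vessel_generator`, `fovea_regressor`) are hypothetical stand-ins that only mimic the shapes and data flow of networks 41–44; they are not the patent's actual CNNs.

```python
import numpy as np

def feature_extractor(fundus_image):
    """Stand-in for network 41: one optic-disk and one vessel feature map."""
    disk_feat = fundus_image.mean(axis=-1)   # placeholder per-pixel feature
    vessel_feat = fundus_image.max(axis=-1)  # placeholder per-pixel feature
    return disk_feat, vessel_feat

def disk_generator(disk_feat):
    """Stand-in for model 42: binary optic-disk prediction map."""
    return (disk_feat > disk_feat.mean()).astype(np.float32)

def vessel_generator(vessel_feat):
    """Stand-in for model 43: binary vessel prediction map."""
    return (vessel_feat > vessel_feat.mean()).astype(np.float32)

def fovea_regressor(disk_map, vessel_map):
    """Stand-in for model 44: regress a fovea position from both maps.
    Placeholder rule: centroid of the combined foreground."""
    ys, xs = np.nonzero(disk_map + vessel_map)
    return float(ys.mean()), float(xs.mean())

def identify(fundus_image):
    disk_feat, vessel_feat = feature_extractor(fundus_image)
    disk_map = disk_generator(disk_feat)
    vessel_map = vessel_generator(vessel_feat)
    fovea_yx = fovea_regressor(disk_map, vessel_map)  # position C
    return disk_map, vessel_map, fovea_yx

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))  # dummy RGB fundus image
disk_map, vessel_map, fovea_yx = identify(image)
print(disk_map.shape, vessel_map.shape, fovea_yx)
```

Note that, as stated for the identification phase, no discrimination network appears in this flow; discriminators are used only during training.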
It should be noted that when the identification model is used to identify the positions of the optic disk, the blood vessel and the central fovea of macula, the discrimination networks are not involved: the eye fundus image passes through the feature extraction network 41, the generation network of the first preset generation model 42, the generation network of the second preset generation model 43 and the preset regression model 44, and the positions of the optic disk, the blood vessel and the central fovea of macula in the eye fundus image to be detected are finally extracted.
In the embodiment of the present application, the eye fundus image to be detected is input into the preset identification model to identify the positions of the optic disk and the blood vessel, after which the positions of the optic disk and the blood vessel identified by the preset generation models are input into the preset regression model to determine the position of the central fovea of macula, the preset regression model being obtained based on regression network training. For example, through the identification model shown in Fig. 4, the positions of the optic disk, the blood vessel and the central fovea of macula can be extracted simultaneously, which improves the timeliness of segmenting the optic disk, the blood vessel and the central fovea of macula.
As shown in Fig. 5, which is a flow chart of the processing method of the eye fundus image according to the embodiment of the present application. In the embodiment of the present application, before the preset generation model and the preset regression model are used to identify the positions of the optic disk, the blood vessel and the central fovea of macula in the eye fundus image to be detected, a neural network first needs to be trained to obtain the preset generation model and the preset regression model. Optionally, before the eye fundus image to be detected is input into the preset generation model to identify the positions of the optic disk and the blood vessel, the preset generation model is obtained by training a neural network, which specifically comprises the following steps:
Step S501, obtaining an eyeground training image with markup information, the markup information including at least optic disk position coordinates, vessel position coordinates and an image source.
Step S502, inputting the eyeground training image into a generation network constructed in advance, so as to generate an eyeground forecast image through the generation network constructed in advance.
Step S503, discriminating, according to the eyeground forecast image, the image source corresponding to the eyeground training image.
Step S504, adjusting the network parameters of the generation network constructed in advance, at least based on the confrontation between the image source in the markup information and the discriminated image source.
In the present embodiment, the eyeground training image may be obtained from a public training data set, and may also be a collected eye fundus image or an eye fundus image generated by a computer. A training image used in the training process carries markup information, which includes but is not limited to: the position coordinates of the optic disk and the blood vessel in the image, and the image source, i.e., whether the image is a real image or a generated image. In the training process, one or a batch of training images carrying markup information can be selected and input into the generation network, and the network parameters of the generation network are then adjusted according to the difference between the output result of the generation network and the markup information. For example, as shown in Fig. 4, the eyeground image is input into the feature extraction network 41 to extract an optic disk feature map and a blood vessel feature map respectively, and the optic disk feature map and the blood vessel feature map are input into the generation networks of the first preset generation model 42 and the second preset generation model 43 respectively to generate an optic disk prediction map and a blood vessel prediction map. The optic disk prediction map together with the real optic disk map is then input into the discrimination network of the first preset generation model 42, and the blood vessel prediction map together with the real blood vessel map is input into the discrimination network of the second preset generation model 43. The discrimination results output by the discrimination networks of the first preset generation model 42 and the second preset generation model 43 are returned to the generation networks of the first preset generation model 42 and the second preset generation model 43, so that each generation network adjusts its own network parameters.
Optionally, the feature extraction network includes N channels, N being an integer greater than or equal to 1. After the eye fundus image to be detected is input into the feature extraction network, the feature extraction network performs feature extraction on each of the N channels, obtaining N optic disk feature maps and N blood vessel feature maps. For example, when the eye fundus image to be detected is a color image, N is 3, the channels representing R, G and B respectively; after one eye fundus image to be detected is input into the feature extraction network, the feature extraction network performs feature extraction on each of the 3 channels, obtaining 3 optic disk feature maps and 3 blood vessel feature maps. Similarly, when the eye fundus image to be detected is a grayscale image, N is 1; after one eye fundus image to be detected is input into the feature extraction network, the feature extraction network performs feature extraction on the single channel, obtaining 1 optic disk feature map and 1 blood vessel feature map.
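The channel handling described above amounts to the following; `channel_count` is a hypothetical helper introduced purely for illustration of the N = 3 (color) versus N = 1 (grayscale) cases.

```python
import numpy as np

def channel_count(image):
    """N = 3 for a color (H, W, 3) image, N = 1 for a grayscale (H, W) image."""
    return image.shape[2] if image.ndim == 3 else 1

color = np.zeros((128, 128, 3))  # R, G, B channels
gray = np.zeros((128, 128))      # single channel

# One feature extraction per channel yields N optic-disk
# feature maps and N vessel feature maps.
n_color = channel_count(color)
n_gray = channel_count(gray)
print(n_color, n_gray)
```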
Optionally, the markup information further includes the position coordinates of the central fovea of macula. After the generation network is trained according to the training images to obtain the preset generation model, the embodiment of the present application may further train the regression network to obtain the preset regression model. The training process of the preset regression model includes the following steps: determining the optic disk position coordinates and the vessel position coordinates in the eyeground training image; inputting the eyeground training image for which the optic disk position coordinates and the vessel position coordinates have been determined into a regression network constructed in advance, so as to determine the position coordinates of the central fovea of macula according to the determined optic disk position coordinates and vessel position coordinates; and adjusting the network parameters of the regression network constructed in advance according to the difference between the annotated position coordinates of the central fovea of macula and the determined position coordinates of the central fovea of macula.
In the embodiment of the present application, the markup information of the training images used in the training process needs to include, in addition to the position coordinates of the optic disk and the blood vessel in the image, the position coordinates of the central fovea of macula. In the training process, one or a batch of training images carrying markup information can be selected and input into the trained generation network, and the output result of the generation network (including the optic disk prediction map and the blood vessel prediction map) is then used as the input of the regression network. The regression network calculates the position coordinates of the central fovea of macula according to the optic disk prediction map and the blood vessel prediction map, and the network parameters of the regression network are then adjusted according to the difference between the annotated position coordinates of the central fovea of macula and the calculated position coordinates of the central fovea of macula.
Optionally, the regression network may also be trained separately, without relying on the output of the trained generation network. For example, training images annotated with the position coordinates of the optic disk, the blood vessel and the central fovea of macula are obtained, prediction maps carrying the optic disk and vessel position coordinates are extracted by other means and input into the regression network, the regression network calculates the position coordinates of the central fovea of macula according to the extracted optic disk and vessel position coordinates, and the network parameters of the regression network are then adjusted according to the difference between the annotated position coordinates of the central fovea of macula and the calculated position coordinates of the central fovea of macula. Optionally, the position coordinates of the optic disk, the blood vessel and the central fovea of macula may be annotated manually.
The above embodiments of the present application introduce training the generation network and the regression network separately. Optionally, according to an embodiment of the present application, the present application further provides a processing method of the eye fundus image in which the generation network and the regression network can be trained simultaneously. For training the generation network and the regression network simultaneously, reference may be made to the introduction of the following embodiment:
As shown in Fig. 6, which is a flow chart of the processing method of the eye fundus image according to the embodiment of the present application, wherein the markup information further includes the position coordinates of the central fovea of macula; then after the eyeground training image is input into the generation network constructed in advance to obtain the eyeground forecast image, the method further includes the following steps:
Step S601, inputting the eyeground forecast image into a regression network constructed in advance, so as to determine the position coordinates of the central fovea of macula through the regression network constructed in advance, the eyeground forecast image including the optic disk position coordinates and the vessel position coordinates.
Optionally, for the acquisition of the eyeground training image, reference may be made to the introduction of the above embodiments, which is not repeated here.
Step S602, adjusting the network parameters of the regression network constructed in advance according to the difference between the determined position coordinates of the central fovea of macula and the annotated position coordinates of the central fovea of macula.
Step S603, repeatedly adjusting the network parameters of the generation network constructed in advance and of the regression network constructed in advance, so as to obtain the preset generation model and the preset regression model respectively.
The difference from the training of the regression network in the above embodiment is that step S602 of the present embodiment is one iteration of the training; therefore, for the specific implementation of step S602 of the present embodiment, reference may be made to the introduction of the single-iteration training process of the regression network in the above embodiment, which is not repeated here.
In the embodiment of the present application, the generation networks and the regression network are trained through multiple iterations until the differences between the output results of the generation networks of the first preset model and the second preset model and of the regression network, on the one hand, and the markup information of the input images, on the other hand, reach or fall below set thresholds, at which point the training ends. For example, the training ends when the difference between the optic disk position coordinates in the optic disk prediction map output by a generation network for a training image and the annotated optic disk position coordinates in the training image, the difference between the vessel position coordinates in the blood vessel prediction map and the annotated vessel position coordinates in the training image, and the difference between the annotated position coordinates of the central fovea of macula and the calculated position coordinates of the central fovea of macula each reach or fall below their respective set thresholds, and the discrimination networks of the first preset model and the second preset model can no longer discriminate whether an input image is a real image or a generated image.
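The stopping criterion just described — each output's deviation from its annotation at or below its own set threshold — can be sketched as below; the function name and threshold values are illustrative assumptions, not taken from the patent.

```python
def training_converged(disk_err, vessel_err, fovea_err,
                       thresholds=(0.01, 0.01, 0.01)):
    """Stop training when the optic-disk, vessel and fovea deviations
    have each reached or fallen below their respective set thresholds.
    Threshold values here are illustrative."""
    errs = (disk_err, vessel_err, fovea_err)
    return all(err <= thr for err, thr in zip(errs, thresholds))

print(training_converged(0.005, 0.009, 0.002))  # all within threshold
print(training_converged(0.005, 0.200, 0.002))  # vessel deviation too large
```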
Optionally, as shown in Fig. 4, the first preset model 42 and the second preset model 43 are each obtained based on training a generative adversarial network. For example, a training image is input into the feature extraction network for feature extraction, obtaining an optic disk feature map and a blood vessel feature map respectively; the optic disk feature map and the blood vessel feature map are then input into the generation networks of the first preset model 42 and the second preset model 43 respectively, and the generation networks of the first preset model 42 and the second preset model 43 output a forecast image carrying the optic disk position coordinates and a forecast image carrying the vessel position coordinates. The optic disk prediction training image and the real optic disk image, as well as the blood vessel prediction training image and the real blood vessel image, are then input into the discrimination networks of the first preset model 42 and the second preset model 43 respectively. In the training process, the optic disk prediction training image, the real optic disk image, the blood vessel prediction training image and the real blood vessel image all carry markup information indicating the image source: the optic disk prediction training image is annotated as an optic disk image generated by the generation network, the real optic disk image is annotated as a real optic disk image, the blood vessel prediction training image is annotated as a blood vessel image generated by the generation network, and the real blood vessel image is annotated as a real blood vessel image, and they are input into the discrimination networks respectively. A discrimination network discriminates whether each of the four types of input images is a real image or a generated image output by the generation network. At the initial training stage, the discrimination network can accurately distinguish real images from generated images; as the training progresses, the generation network adjusts its network parameters according to the discrimination results of the discrimination network, until the discrimination network gradually becomes unable to discriminate whether these four types of images are real images or generated images output by the generation network.
In one embodiment of the present application, the GAN can be trained by defining the loss function of the GAN. The loss function of the GAN refers to the following formula (1):
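The body of formula (1) did not survive this text extraction. Under the assumption that the patent uses the standard conditional GAN objective, and consistent with the term definitions given in the next paragraph (x the input image, y = G(x) the prediction map, D the discrimination network), it would read:

```latex
\mathcal{L}_{GAN}(G, D) =
    \mathbb{E}_{x,y}\big[\log D(x, y)\big]
  + \mathbb{E}_{x}\big[\log\big(1 - D(x, G(x))\big)\big]
\tag{1}
```

This is a reconstruction, not the patent's verbatim formula.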
In formula (1), G represents the generation network, D represents the discrimination network, x represents the input eyeground training image, and y represents the prediction map output by the generation network G, where x and y obey probability distributions, and G(x) represents the output of the generation network when the input is x, i.e., G(x) = y.
In the present embodiment, the training of the generative adversarial network aims to optimize the loss functions of the generation network and of the discrimination network in opposite directions; through such game confrontation, the discrimination network finally becomes gradually unable to discriminate whether an image produced by the generation network is a real image or a generated image. The above game confrontation process can be represented by the following function expression:
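The expression of formula (2) is likewise missing from this extraction; the min-max game described above is conventionally written as follows (a reconstruction, assuming the standard GAN formulation):

```latex
G^{*} = \arg\min_{G} \max_{D} \; \mathcal{L}_{GAN}(G, D)
\tag{2}
```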
In formula (2), G represents the generation network, D represents the discrimination network, x represents the input eyeground training image, and y represents the prediction map output by the generation network G, where x and y obey probability distributions, and G(x) represents the output of the generation network when the input is x, i.e., G(x) = y.
In addition, training of the segmentation ability can also be added to the generation network during the training process. For the segmentation part, the loss function BCE-loss (Binary Cross Entropy) can be used:
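The body of formula (3) is missing from this extraction; the named BCE loss, with the symbols defined in the following paragraph (N pixels indexed by i, labels y_i, predicted probabilities p(y_i)), is conventionally:

```latex
\mathcal{L}_{BCE} = -\frac{1}{N} \sum_{i=1}^{N}
    \Big[\, y_i \log p(y_i) + (1 - y_i) \log\big(1 - p(y_i)\big) \,\Big]
\tag{3}
```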
In formula (3), N is the number of pixels in the optic disk or blood vessel prediction map, i is the index of a pixel in the optic disk or blood vessel prediction map, y_i represents the label of the i-th pixel in the prediction map (for example, the label is 1 if the pixel belongs to the optic disk or blood vessel, and 0 otherwise), and p(y_i) represents the predicted probability for the i-th pixel in the prediction map.
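As a concrete illustration of the pixel-wise binary cross entropy of formula (3), a minimal NumPy sketch follows; the clipping constant `eps` and the example label/probability values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def bce_loss(labels, probs, eps=1e-7):
    """Binary cross entropy averaged over the N pixels of a prediction map.
    labels: 0/1 ground truth per pixel; probs: predicted probabilities."""
    labels = np.asarray(labels, dtype=np.float64)
    probs = np.clip(np.asarray(probs, dtype=np.float64), eps, 1 - eps)
    return float(-np.mean(labels * np.log(probs)
                          + (1 - labels) * np.log(1 - probs)))

# Confident correct predictions give near-zero loss;
# confident wrong predictions are penalized heavily.
good = bce_loss([1, 0, 1, 0], [0.99, 0.01, 0.99, 0.01])
bad = bce_loss([1, 0, 1, 0], [0.01, 0.99, 0.01, 0.99])
print(good, bad)
```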
As for the regression network, its loss function can be defined as:
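The body of formula (4) is likewise lost in this extraction. Given the definitions that follow (z_i the per-pixel fovea label, ẑ_i the corresponding prediction), a pixel-wise binary cross entropy of the same form as formula (3) is a plausible reconstruction, though the patent's exact expression cannot be confirmed from this text:

```latex
\mathcal{L}_{reg} = -\frac{1}{N} \sum_{i=1}^{N}
    \Big[\, z_i \log \hat{z}_i + (1 - z_i) \log\big(1 - \hat{z}_i\big) \,\Big]
\tag{4}
```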
In formula (4), z_i represents the label of whether a pixel belongs to the central fovea of macula (for example, the label is 1 if the pixel belongs to the central fovea of macula, and 0 otherwise), and ẑ_i is the predicted label of whether the pixel belongs to the central fovea of macula (likewise, 1 if it is predicted to belong to the central fovea of macula, and 0 otherwise).
Then, for the entire identification model, the loss function can be defined as:
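Formula (5)'s body is also missing from this extraction; given the description of α, β, γ as coefficients balancing the loss functions of the respective parts, a consistent reconstruction of the combined objective is:

```latex
\mathcal{L} = \alpha\,\mathcal{L}_{GAN} + \beta\,\mathcal{L}_{BCE} + \gamma\,\mathcal{L}_{reg}
\tag{5}
```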
In formula (5), α, β and γ are preset coefficients whose function is to balance the loss functions of the respective parts.
During each iteration of the training, a loss value is calculated by the loss function of formula (5) according to the output result of the generation network, the output result of the discrimination network and the output result of the regression network, and is returned to the generation network, the discrimination network and the regression network, so that the generation network, the discrimination network and the regression network adjust their own network parameters according to the loss value, until the loss reaches or falls below a set loss threshold.
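The per-iteration weighted combination described above can be sketched as below; the coefficient and loss values are illustrative assumptions, chosen only to make the arithmetic easy to check.

```python
def total_loss(l_gan, l_bce, l_reg, alpha=1.0, beta=1.0, gamma=1.0):
    """Weighted sum of the three partial losses of formula (5);
    alpha, beta, gamma are the preset balancing coefficients
    (values here are illustrative, not the patent's)."""
    return alpha * l_gan + beta * l_bce + gamma * l_reg

loss = total_loss(0.5, 0.25, 0.25)
print(loss)  # 1.0 with unit coefficients
```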
In one embodiment of the present application, for the configurations of the generation network, the discrimination network and the regression network obtained through the above training process, reference may be made to the following introduction:
Optionally, the generation network includes 4 groups of sequentially connected convolutional layers (using 3×3 convolution kernels) and pooling layers (using 2×2 kernels), followed by 8 groups of sequentially connected up-sampling layers (using 2×2 kernels) and convolutional layers (using 3×3 convolution kernels). The discrimination network includes 4 groups of sequentially connected convolutional layers (using 3×3 convolution kernels) and pooling layers (using 2×2 kernels), after which a convolutional layer (using 3×3 convolution kernels), a global average pooling layer and a fully connected layer (using 1×2 kernels) are sequentially connected.
Optionally, the regression network includes 4 groups of sequentially connected convolutional layers (using 3×3 convolution kernels) and pooling layers (using 2×2 kernels), after which a convolutional layer (using 3×3 convolution kernels), a global average pooling layer and a fully connected layer (using 1×2 kernels) are sequentially connected.
Wherein, for the generation network and the discrimination network, if a color eyeground image is input, the input is the feature maps under the three channels R, G and B; if a grayscale eyeground image is input, the input is the feature map under a single channel. For the regression network, the final input is the feature map under a single channel.
Optionally, after the positions of the optic disk and the blood vessel identified by the preset generation model are input into the preset regression model to determine the position of the central fovea of macula, the method of the embodiment of the present application further includes the following step: displaying the eye fundus image to be detected in which the optic disk position, the vessel position and the central fovea position have been identified.
Optionally, displaying the eye fundus image to be detected in which the optic disk position, the vessel position and the central fovea position have been identified includes: marking the identified positions of the optic disk, the blood vessel and the central fovea of macula in the eye fundus image to be detected; and displaying the marked eye fundus image to be detected.
As shown in Fig. 1, after the positions of the optic disk, the blood vessel and the central fovea of macula in the eye fundus image to be detected are identified, the positions A, B and C of the optic disk, the blood vessel and the central fovea of macula can be displayed on the display screen of the terminal device 11.
The embodiment of the present application further provides a processing method of the eye fundus image, comprising: obtaining an eye fundus image to be detected including an eyeground key structure, the eyeground key structure including at least an optic disk, a blood vessel and a central fovea of macula; and inputting the eye fundus image to be detected into a preset generation model, so as to identify the positions of the optic disk, the blood vessel and the central fovea of macula in the eye fundus image to be detected through the preset generation model.
Optionally, after the eye fundus image to be detected is input into the preset model to identify the positions of the optic disk, the blood vessel and the central fovea of macula in the eye fundus image to be detected through the preset model, the method further comprises: displaying the eye fundus image to be detected in which the respective positions of the optic disk, the blood vessel and the central fovea of macula have been identified.
Optionally, displaying the eye fundus image to be detected in which the positions of the eyeground key structure have been identified comprises: marking the identified positions of each of the optic disk, the blood vessel and the central fovea of macula in the eye fundus image to be detected; and displaying the marked eye fundus image to be detected.
The eyeground key structure in the embodiment of the present application is the tissue structure at the back of the eyeball, and includes at least the optic disk or optic cup, the blood vessel and the central fovea of macula. The optic cup is delimited by the intersection of the inner boundary with the optic disk and by a set of parallel lines 150 μm above the upper end of the retinal pigment epithelium (retinal pigment epithelium, RPE).
The embodiment of the present application can determine the position of the central fovea of macula based on the position information of the optic disk and the blood vessel, and can also determine the position of the central fovea of macula based on the position information of the optic cup and the blood vessel. When determining the position of the central fovea of macula based on the position information of the optic disk and the blood vessel, or based on the position information of the optic cup and the blood vessel, reference may be made to the introduction of the implementation process of the above embodiments.
According to an embodiment of the present application, the present application further provides a processing apparatus of the eye fundus image, the apparatus being connected to an image acquisition unit. The processing apparatus in the present embodiment may be the terminal device 11 shown in Fig. 1, and the image acquisition unit may be the image acquisition unit 10 shown in Fig. 1. As shown in Fig. 7, a processing apparatus 70 of the eye fundus image provided by the embodiments of the present application includes: a first obtaining module 71, an identification module 72 and a display module 73. The first obtaining module 71 is configured to obtain the eye fundus image to be detected including the eyeground key structure collected by the image acquisition unit, the eyeground key structure including at least the optic disk and the blood vessel. The identification module 72 is configured to input the eye fundus image to be detected into the preset generation model, so as to identify the positions of the optic disk and the blood vessel in the eye fundus image to be detected through the preset generation model, the preset generation model being obtained based on generative adversarial network training.
Optionally, the eyeground key structure further includes the central fovea of macula; the identification module 72 is further configured to input the positions of the optic disk and the blood vessel identified by the preset generation model into the preset regression model, so as to determine the position of the central fovea of macula in the eye fundus image to be detected through the preset regression model, the preset regression model being obtained based on regression network training.
Optionally, the display module 73 is configured to display the eye fundus image to be detected in which the optic disk position, the vessel position and the central fovea position have been identified.
Optionally, when displaying the eye fundus image to be detected in which the optic disk position, the vessel position and the central fovea position have been identified, the display module 73 is specifically configured to: mark the identified positions of the optic disk, the blood vessel and the central fovea of macula in the eye fundus image to be detected; and display the marked eye fundus image to be detected.
The embodiment of the present application obtains the eye fundus image to be detected including the eyeground key structure, the eyeground key structure including at least the optic disk and the blood vessel, and inputs the eye fundus image to be detected into the preset generation model, thereby identifying the positions of the optic disk and the blood vessel, wherein the preset generation model is obtained based on generative adversarial network training. Since the preset generation model is obtained based on generative adversarial network training, after the eye fundus image is input into the trained generation network, the generation network can perform pixel-level segmentation of the blood vessel and the optic disk in the eye fundus image, thereby achieving a more accurate segmentation effect. In addition, the requirement of the generation network on image quality is not particularly strict.
According to an embodiment of the present application, the present application further provides a training apparatus of the eye fundus image. As shown in Fig. 8, a training apparatus 80 of the eye fundus image provided by the embodiments of the present application includes: a second obtaining module 81, a prediction module 82, a discrimination module 83 and an adjustment module 84. The second obtaining module 81 is configured to obtain the eyeground training image with markup information, the markup information including at least the optic disk position coordinates, the vessel position coordinates and the image source. The prediction module 82 is configured to input the eyeground training image into the generation network constructed in advance, so as to generate the eyeground forecast image through the generation network constructed in advance. The discrimination module 83 is configured to discriminate, according to the eyeground forecast image, the image source corresponding to the eyeground training image. The adjustment module 84 is configured to adjust the network parameters of the generation network constructed in advance, at least based on the confrontation between the image source in the markup information and the discriminated image source.
Optionally, the markup information further includes the position coordinates of the central fovea of macula, and the apparatus further includes: a first determination module 85, configured to determine the optic disk position coordinates and the vessel position coordinates in the eyeground training image, and to input the eyeground training image for which the optic disk position coordinates and the vessel position coordinates have been determined into the regression network constructed in advance, so as to determine the position coordinates of the central fovea of macula according to the determined optic disk position coordinates and vessel position coordinates through the regression network constructed in advance. The adjustment module 84 is further configured to adjust the network parameters of the regression network constructed in advance according to the difference between the annotated position coordinates of the central fovea of macula and the determined position coordinates of the central fovea of macula.
Optionally, the markup information further includes the position coordinates of the central fovea of macula, and the apparatus further includes: a second determination module 86, configured to input the eyeground forecast image into the regression network constructed in advance, so as to determine the position coordinates of the central fovea of macula through the regression network constructed in advance, the eyeground forecast image including the optic disk position coordinates and the vessel position coordinates. The adjustment module 84 is further configured to adjust the network parameters of the regression network constructed in advance according to the difference between the determined position coordinates of the central fovea of macula and the annotated position coordinates of the central fovea of macula, and to repeatedly adjust the network parameters of the generation network constructed in advance and of the regression network constructed in advance, so as to obtain the preset generation model and the preset regression model respectively.
According to an embodiment of the present application, the present application further provides an electronic device and a readable storage medium.
As shown in Fig. 9, which is a block diagram of the electronic device for the processing method of the eye fundus image according to the embodiment of the present application. The electronic device may be the processing device of the eye fundus image of the above embodiments or the training device of the eye fundus image. The electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices and other similar computing devices. The components shown herein, their connections and relationships, and their functions are merely exemplary, and are not intended to limit the implementations of the application described and/or claimed herein.
As shown in Fig. 9, the electronic device includes one or more processors 91, a memory 92, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The components are interconnected by different buses and may be mounted on a common mainboard or in other manners as required. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory for displaying graphical information of a GUI on an external input/output device (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Likewise, multiple electronic devices may be connected, with each device providing part of the necessary operations (for example, as a server array, a group of blade servers, or a multiprocessor system). In Fig. 9, one processor 91 is taken as an example.
The memory 92 is the non-transitory computer-readable storage medium provided by the present application. The memory stores instructions executable by at least one processor, so that the at least one processor executes the fundus image processing method or the fundus image training method provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the fundus image processing method or the fundus image training method provided by the present application.
As a non-transitory computer-readable storage medium, the memory 92 may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the fundus image processing method in the embodiments of the present application (for example, the first obtaining module 81, the identification module 82, and the display module 83 shown in Fig. 8). By running the non-transitory software programs, instructions, and modules stored in the memory 92, the processor 91 executes the various functional applications and data processing of the server, that is, implements the fundus image processing method or the fundus image training method of the above method embodiments.
The memory 92 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created according to the use of the electronic device of the fundus image processing method, and the like. In addition, the memory 92 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 92 optionally includes memories located remotely from the processor 91, and these remote memories may be connected through a network to the electronic device of the fundus image processing method. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The electronic device of the fundus image processing method or the fundus image training method may further include an input apparatus 93 and an output apparatus 94. The processor 91, the memory 92, the input apparatus 93, and the output apparatus 94 may be connected through a bus or in other manners; in Fig. 9, connection through a bus is taken as an example.
The input apparatus 93 may receive input digit or character information, and generate key signal inputs related to user settings and function control of the electronic device of the fundus image processing method or the fundus image training method; it may be an input apparatus such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a trackball, or a joystick. The output apparatus 94 may include a display device, an auxiliary lighting apparatus (for example, an LED), a haptic feedback apparatus (for example, a vibration motor), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some embodiments, the display device may be a touch screen.
Various implementations of the systems and techniques described herein may be realized in digital electronic circuitry, integrated circuit systems, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which may receive data and instructions from a storage system, at least one input apparatus, and at least one output apparatus, and transmit data and instructions to the storage system, the at least one input apparatus, and the at least one output apparatus.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions of a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, device, and/or apparatus (for example, a magnetic disk, an optical disk, a memory, or a programmable logic device (PLD)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide interaction with a user, the systems and techniques described herein may be implemented on a computer having a display apparatus for displaying information to the user (for example, a CRT (cathode-ray tube) or LCD (liquid crystal display) monitor), and a keyboard and a pointing apparatus (for example, a mouse or a trackball) through which the user can provide input to the computer. Other kinds of apparatuses may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback, or haptic feedback), and input from the user may be received in any form (including acoustic input, voice input, or tactile input).
The systems and techniques described herein may be implemented in a computing system that includes a back-end component (for example, as a data server), or a computing system that includes a middleware component (for example, an application server), or a computing system that includes a front-end component (for example, a user computer with a graphical user interface or a web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system may be interconnected by digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
A computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The client-server relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solutions of the embodiments of the present application, a to-be-detected fundus image including key fundus structures is obtained, the key fundus structures including at least the optic disc and the blood vessels; the to-be-detected fundus image is then input into a preset generation model to identify the positions of the optic disc and the blood vessels, where the preset generation model is trained based on a generative adversarial network. Because the preset generation model is obtained through generative adversarial training, once the fundus image is input into the trained generation network, the generation network can perform pixel-level segmentation of the blood vessels and the optic disc in the fundus image, achieving a more accurate segmentation result. In addition, the generation network's requirements on image quality are relatively lenient.
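A minimal sketch of the adversarial training this summary refers to, under stated assumptions: the stand-in generator and discriminator architectures, the PyTorch framework, and the binary "image source" labels are illustrative choices, not the application's actual networks. Only the scheme mirrors the description — a generator producing pixel-level optic-disc/vessel maps from a fundus image, and a discriminator judging the image source, trained against each other.

```python
import torch
import torch.nn as nn

# Stand-in generator: maps a 3-channel fundus image to a 2-channel
# fundus prediction image (optic-disc mask, vessel mask), pixel-level.
generator = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 3, padding=1), nn.Sigmoid(),
)
# Stand-in discriminator: judges the "image source" of a prediction
# image (annotated ground truth vs. generated).
discriminator = nn.Sequential(
    nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)
bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

fundus = torch.rand(4, 3, 64, 64)      # fundus training images
true_masks = torch.rand(4, 2, 64, 64)  # annotated disc/vessel masks

# Discriminator step: distinguish annotated masks from generated ones.
d_loss = bce(discriminator(true_masks), torch.ones(4, 1)) + \
         bce(discriminator(generator(fundus).detach()), torch.zeros(4, 1))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: adjust the generation network so the discriminator
# misjudges the image source -- the adversarial signal of the training.
g_loss = bce(discriminator(generator(fundus)), torch.ones(4, 1))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

At inference time only the trained generator is kept: feeding it a to-be-detected fundus image yields the pixel-level optic-disc and vessel segmentation directly.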
In the above embodiments, the fundus image processing apparatus and the fundus image training apparatus may use the same terminal device or different terminal devices, which is not specifically limited in this embodiment.
It should be understood that steps may be reordered, added, or deleted in the various forms of processes shown above. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above specific implementations do not constitute a limitation on the protection scope of the present application. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (20)

1. A fundus image processing method, applied to a terminal device connected to an image acquisition unit, the method comprising:
obtaining a to-be-detected fundus image, acquired by the image acquisition unit, that includes key fundus structures, the key fundus structures including at least an optic disc and blood vessels;
inputting the to-be-detected fundus image into a preset generation model to identify, through the preset generation model, positions of the optic disc and the blood vessels in the to-be-detected fundus image, wherein the preset generation model is trained based on a generative adversarial network.
2. The method according to claim 1, wherein the key fundus structures further include a macular fovea; and
after inputting the to-be-detected fundus image into the preset generation model to identify, through the preset generation model, the positions of the optic disc and the blood vessels in the to-be-detected fundus image, the method further comprises:
inputting the positions of the optic disc and the blood vessels identified by the preset generation model in the to-be-detected fundus image into a preset regression model to determine, through the preset regression model, a position of the macular fovea in the to-be-detected fundus image, wherein the preset regression model is trained based on a regression network.
3. The method according to claim 2, wherein after inputting the positions of the optic disc and the blood vessels identified by the preset generation model into the preset regression model to determine, through the preset regression model, the position of the macular fovea in the to-be-detected fundus image, the method further comprises:
displaying the to-be-detected fundus image with the optic disc position, the vessel positions, and the macular fovea position identified.
4. The method according to claim 3, wherein displaying the to-be-detected fundus image with the optic disc position, the vessel positions, and the macular fovea position identified comprises:
marking, in the to-be-detected fundus image, the identified positions of the optic disc, the blood vessels, and the macular fovea;
displaying the marked to-be-detected fundus image.
5. A fundus image training method, comprising:
obtaining a fundus training image with annotation information, the annotation information including at least optic disc position coordinates, vessel position coordinates, and an image source;
inputting the fundus training image into a pre-built generation network to generate a fundus prediction image through the pre-built generation network;
discriminating, according to the fundus prediction image, the image source corresponding to the fundus training image;
adjusting network parameters of the pre-built generation network based at least on the adversarial difference between the annotated image source and the discriminated image source.
6. The method according to claim 5, wherein the annotation information further includes position coordinates of a macular fovea; and
after obtaining the fundus training image with the annotation information, the method further comprises:
determining the optic disc position coordinates and the vessel position coordinates in the fundus training image;
inputting the fundus training image, for which the optic disc position coordinates and the vessel position coordinates have been determined, into a pre-built regression network to determine, through the pre-built regression network, position coordinates of the macular fovea according to the determined optic disc position coordinates and vessel position coordinates;
adjusting network parameters of the pre-built regression network according to the difference between the annotated position coordinates of the macular fovea and the determined position coordinates of the macular fovea.
7. The method according to claim 5, wherein the annotation information further includes position coordinates of a macular fovea; and
after inputting the fundus training image into the pre-built generation network to generate the fundus prediction image through the pre-built generation network, the method further comprises:
inputting the fundus prediction image into a pre-built regression network to determine position coordinates of the macular fovea through the pre-built regression network, the fundus prediction image including optic disc position coordinates and vessel position coordinates;
adjusting network parameters of the pre-built regression network according to the difference between the determined position coordinates of the macular fovea and the annotated position coordinates of the macular fovea;
repeatedly adjusting the network parameters of the pre-built generation network and the pre-built regression network to obtain a preset generation model and a preset regression model, respectively.
8. A fundus image processing apparatus, connected to an image acquisition unit, the processing apparatus comprising:
a first obtaining module, configured to obtain a to-be-detected fundus image, acquired by the image acquisition unit, that includes key fundus structures, the key fundus structures including at least an optic disc and blood vessels;
an identification module, configured to input the to-be-detected fundus image into a preset generation model to identify, through the preset generation model, positions of the optic disc and the blood vessels in the to-be-detected fundus image, wherein the preset generation model is trained based on a generative adversarial network.
9. The apparatus according to claim 8, wherein the key fundus structures further include a macular fovea; and
the identification module is further configured to input the positions of the optic disc and the blood vessels identified by the preset generation model into a preset regression model to determine, through the preset regression model, a position of the macular fovea in the to-be-detected fundus image, wherein the preset regression model is trained based on a regression network.
10. The apparatus according to claim 8 or 9, further comprising a display module,
wherein the display module is configured to display the to-be-detected fundus image with the optic disc position, the vessel positions, and the macular fovea position identified.
11. The apparatus according to claim 10, wherein when displaying the to-be-detected fundus image with the optic disc position, the vessel positions, and the macular fovea position identified, the display module is specifically configured to:
mark, in the to-be-detected fundus image, the identified positions of the optic disc, the blood vessels, and the macular fovea;
display the marked to-be-detected fundus image.
12. A fundus image training apparatus, comprising:
a second obtaining module, configured to obtain a fundus training image with annotation information, the annotation information including at least optic disc position coordinates, vessel position coordinates, and an image source;
a prediction module, configured to input the fundus training image into a pre-built generation network to generate a fundus prediction image through the pre-built generation network;
a discrimination module, configured to discriminate, according to the fundus prediction image, the image source corresponding to the fundus training image;
an adjusting module, configured to adjust network parameters of the pre-built generation network based at least on the adversarial difference between the annotated image source and the discriminated image source.
13. The apparatus according to claim 12, wherein the annotation information further includes position coordinates of a macular fovea;
the apparatus further comprises:
a first determining module, configured to determine the optic disc position coordinates and the vessel position coordinates in the fundus training image, and to input the fundus training image, for which the optic disc position coordinates and the vessel position coordinates have been determined, into a pre-built regression network to determine, through the pre-built regression network, position coordinates of the macular fovea according to the determined optic disc position coordinates and vessel position coordinates; and
the adjusting module is further configured to adjust network parameters of the pre-built regression network according to the difference between the annotated position coordinates of the macular fovea and the determined position coordinates of the macular fovea.
14. The apparatus according to claim 12, wherein the annotation information further includes position coordinates of a macular fovea;
the apparatus further comprises:
a second determining module, configured to input the fundus prediction image into a pre-built regression network to determine position coordinates of the macular fovea through the pre-built regression network, the fundus prediction image including optic disc position coordinates and vessel position coordinates; and
the adjusting module is further configured to adjust network parameters of the pre-built regression network according to the difference between the determined position coordinates of the macular fovea and the annotated position coordinates of the macular fovea, and to repeatedly adjust the network parameters of the pre-built generation network and the pre-built regression network to obtain a preset generation model and a preset regression model, respectively.
15. A fundus image processing device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1 to 4.
16. A fundus image training device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 5 to 7.
17. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform the method according to any one of claims 1 to 7.
18. A fundus image processing method, comprising:
obtaining a to-be-detected fundus image that includes key fundus structures, the key fundus structures including at least an optic disc, blood vessels, and a macular fovea;
inputting the to-be-detected fundus image into a preset model to identify, through the preset model, positions of the optic disc, the blood vessels, and the macular fovea in the to-be-detected fundus image.
19. The method according to claim 18, wherein after inputting the to-be-detected fundus image into the preset model to identify, through the preset model, the positions of the optic disc, the blood vessels, and the macular fovea in the to-be-detected fundus image, the method further comprises:
displaying the to-be-detected fundus image with the respective positions of the optic disc, the blood vessels, and the macular fovea identified.
20. The method according to claim 19, wherein displaying the to-be-detected fundus image with the positions of the key fundus structures identified comprises:
marking the identified respective positions of the optic disc, the blood vessels, and the macular fovea in the to-be-detected fundus image;
displaying the marked to-be-detected fundus image.
CN201910796692.8A 2019-08-27 2019-08-27 Fundus image processing and training methods, apparatuses, and devices Pending CN110517248A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910796692.8A CN110517248A (en) Fundus image processing and training methods, apparatuses, and devices

Publications (1)

Publication Number Publication Date
CN110517248A true CN110517248A (en) 2019-11-29

Family

ID=68627187

Country Status (1)

Country Link
CN (1) CN110517248A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017187A * 2020-11-02 2020-12-01 平安科技(深圳)有限公司 Method and device for locating the macular center of a fundus image, server, and storage medium
CN113344894A * 2021-06-23 2021-09-03 依未科技(北京)有限公司 Method and device for extracting fundus tessellation (leopard-spot) features and determining feature indexes
WO2022160676A1 * 2021-01-29 2022-08-04 北京百度网讯科技有限公司 Method and apparatus for training a heat map generation model, electronic device, and storage medium
CN113344894B * 2021-06-23 2024-05-14 依未科技(北京)有限公司 Method and device for extracting fundus tessellation (leopard-spot) features and determining feature indexes

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506770A * 2017-08-17 2017-12-22 湖州师范学院 Method for generating standard fundus photography images for diabetic retinopathy
CN108269245A * 2018-01-26 2018-07-10 深圳市唯特视科技有限公司 Eye image inpainting method based on a novel generative adversarial network
CN108537801A * 2018-03-29 2018-09-14 山东大学 Retinal vessel image segmentation method based on a generative adversarial network
CN109166095A * 2018-07-11 2019-01-08 广东技术师范学院 Optic cup and optic disc segmentation method for fundus images based on a generative adversarial mechanism
CN109615632A * 2018-11-09 2019-04-12 广东技术师范学院 Optic disc and optic cup segmentation method for fundus images based on a semi-supervised conditional generative adversarial network
CN109784337A * 2019-03-05 2019-05-21 百度在线网络技术(北京)有限公司 Macular region recognition method, device, and computer-readable storage medium
CN109886955A * 2019-03-05 2019-06-14 百度在线网络技术(北京)有限公司 Method and apparatus for processing fundus images
CN109978796A * 2019-04-04 2019-07-05 北京百度网讯科技有限公司 Fundus blood vessel image generation method, device, and storage medium
CN110097545A * 2019-04-29 2019-08-06 南京星程智能科技有限公司 Fundus image generation method based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ROYCHOWDHURY S et al.: "Blood vessel segmentation of fundus images by major vessel extraction and subimage classification", IEEE Journal of Biomedical and Health Informatics *
JIANG Ping: "Research on fundus image segmentation methods", China Doctoral and Masters' Dissertations Full-text Database, Engineering Science and Technology I *
CHEN Kun et al.: "Applications of generative adversarial networks in medical image processing", Life Science Instruments *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination