CN106408564B - Deep-learning-based fundus image processing method, apparatus and system - Google Patents
- Publication number
- CN106408564B (application number CN201610884671.8A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
Abstract
This application discloses a deep-learning-based fundus image processing method, apparatus and system. The method comprises the following steps: first, the fundus image is pre-processed and regions of the fundus image are segmented and extracted; the region images obtained by segmentation and extraction are then resampled; data augmentation is applied to the resampled region images; and region image features are identified using a deep learning method. The deep-learning-based fundus image processing method, apparatus and system of this application can analyze fundus images automatically; the analysis results are objective and accurate, and labor costs are saved.
Description
Technical field
The present invention relates to the field of fundus image processing, and in particular to a deep-learning-based fundus image processing method and system.
Background art
In the prior art, fundus images are analyzed mainly by hand. Because manual analysis is highly subjective, its accuracy and consistency are difficult to guarantee; at the same time, labor costs keep rising. Automatic analysis technology therefore offers great advantages in objectivity and in saving labor costs.
The concept of deep learning originates from research on artificial neural networks; a multilayer perceptron with multiple hidden layers is one kind of deep learning structure. Deep learning combines low-level features into more abstract high-level representations of attribute categories or features in order to discover distributed feature representations of data. The concept of deep learning was proposed by Hinton et al. in 2006, who introduced a greedy layer-wise unsupervised training algorithm based on deep belief networks (DBN), bringing hope for solving the optimization problems associated with deep structures, and then proposed the multilayer auto-encoder deep structure. In addition, the convolutional neural network proposed by LeCun et al. was the first true multilayer-structure learning algorithm; it exploits spatial correlation to reduce the number of parameters and improve training performance. Deep learning is a new field of machine learning research whose motivation is to build neural networks that simulate the analytical learning of the human brain, imitating the brain's mechanisms to interpret data such as images, sound, and text.
Deep learning technology has been applied in many fields and has also attracted attention in medical imaging. Instead of relying on features designed from manual medical-imaging experience, deep learning can automatically learn abstract features of the image data at every level, better reflecting the essential characteristics of the data and reducing the incompleteness caused by hand-designed features.
Although deep learning can learn features automatically and can achieve good recognition accuracy, it is a data-driven model that requires data of "quite large" magnitude, simulating the visual mechanism of the human brain to learn abstract features of the data level by level so as to better reflect the essence of the data. Image features are generally first abstracted at the pixel level, labeling each pixel with its type, and the final output is the identification of those features. Taking a 2048 × 2048 fundus image as an example, it contains more than 4 million pixels; searching for image features at the pixel level involves an enormous amount of computation, and the time complexity of such algorithms rises sharply. To guarantee real-time performance, some deep learning algorithms shrink the image during pre-processing to improve computing speed. Shrinking the image, however, inevitably merges each pixel with its neighbors, which in turn hurts the accuracy of pixel-level feature selection. If the image is not shrunk, guaranteeing accurate computation requires advanced parallel-programming skill and better, more plentiful hardware support; otherwise computation is very slow. This unacceptable time cost has limited the adoption of deep learning in practical applications.
Summary of the invention
The main problem this application solves is to provide a deep-learning-based fundus image processing method, apparatus and system, addressing the technical issues that in the prior art fundus images are analyzed manually, the analysis results are subjective, and labor costs keep rising.
In order to solve the above technical problems, the invention discloses a deep-learning-based fundus image processing method, characterized by comprising the following steps: first, the fundus image is pre-processed and regions are segmented and extracted from it; the region images obtained by segmentation and extraction are then resampled; data augmentation is applied to the resampled region images; and region image features are identified using a deep learning method.
Preferably, the fundus image pre-processing further includes computing the average field color of the entire fundus image and subtracting this average color from every pixel of the fundus image.
Preferably, the regions include the optic disc region, the optic cup region, the peripapillary atrophy region, and the blood-vessel region.
Preferably, the resampling includes generating small square patches from the region image and cropping the region image.
Preferably, the data augmentation includes region-image scaling, region-image rotation, region-image tilting, and/or region-image contrast adjustment.
Preferably, identifying region image features with a deep learning method includes training on region image features and recognizing region image features.
Preferably, the deep learning method is a convolutional neural network method.
The invention also discloses a deep-learning-based fundus image processing apparatus, characterized by comprising:
a fundus image pre-processing module, for pre-processing the fundus image and segmenting and extracting regions from it;
a resampling module, for resampling the region images obtained by segmentation and extraction;
a data augmentation module, for applying data augmentation to the resampled region images; and
a region image feature recognition module, for identifying region image features using a deep learning method.
The invention also discloses a deep-learning-based fundus image processing system, characterized by including a fundus image input device connected to a fundus image processing apparatus, the fundus image processing apparatus being connected to a result output device.
Compared with the prior art, the deep-learning-based fundus image processing method, apparatus and system described herein achieve the following effects:
(1) The deep-learning-based fundus image processing method, apparatus and system of this application can analyze fundus images automatically, and the analysis results are objective and accurate.
(2) The deep-learning-based fundus image processing method, apparatus and system of this application use a deep learning method to automatically learn abstract features of the image data at every level, better reflecting the essential characteristics of the data and reducing the incompleteness caused by hand-designed features.
(3) In the deep-learning-based fundus image processing method, apparatus and system of this application, segmentation and extraction of the optic disc, optic cup, peripapillary atrophy region, and blood-vessel region are added during image pre-processing, and serve as the data for subsequent deep learning. In this way the image need not be shrunk, the training of the data can still be completed within an acceptable time, and the problem of deep learning's unacceptable time cost is solved.
Brief description of the drawings
The drawings described herein are provided for a further understanding of the present invention and constitute a part of it; the illustrative embodiments of the invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a flowchart of the deep-learning-based fundus image processing method of Embodiment 1;
Fig. 2 is a flowchart of the convolutional neural network training process used by the deep-learning-based fundus image processing method of Embodiment 3;
Fig. 3 is a structural block diagram of the deep-learning-based fundus image processing apparatus of Embodiment 4;
Fig. 4 is a structural block diagram of the deep-learning-based fundus image processing system of Embodiment 5;
Fig. 5 is a flowchart of the fundus image pre-processing method of Embodiment 5;
Fig. 6 shows the parabolic morphology map and the direction map of the fundus blood vessels in the fundus image pre-processing method of Embodiment 5;
Fig. 7 illustrates the procedure for representing the course of the fundus blood vessels in the fundus image pre-processing method of Embodiment 5;
Fig. 8 shows the optic disc under different brightness levels in the fundus image pre-processing method of Embodiment 5;
Fig. 9 shows the optic disc and macular regions determined by the fundus image pre-processing method of Embodiment 5, together with the corresponding blood-vessel distance field map.
Specific embodiments
Certain terms are used throughout the specification and claims to refer to particular components. Those skilled in the art will appreciate that hardware manufacturers may refer to the same component by different names. This specification and the claims do not distinguish components by difference of name but by difference of function. The term "comprising" used throughout the specification and claims is open-ended and should therefore be interpreted as "including but not limited to". "Substantially" means within an acceptable error range, within which a person skilled in the art can solve the stated technical problem and basically achieve the stated technical effect. The following description presents preferred embodiments of the application for the purpose of illustrating its general principles and is not intended to limit the scope of the application; the scope of protection of the application is defined by the appended claims.
The application is described in further detail below with reference to the accompanying drawings, which do not limit the application.

Embodiment 1

As shown in Fig. 1, the deep-learning-based fundus image processing method of this embodiment is characterized by comprising steps S100 to S400:
S100: first, the fundus image is pre-processed and regions are segmented and extracted from it;
S200: the region images obtained by segmentation and extraction are then resampled;
S300: data augmentation is applied to the resampled region images;
S400: region image features are identified using a deep learning method.
Embodiment 2
Building on Embodiment 1, step S100 comprises the following steps. First, to eliminate differences between images caused by differences in illumination conditions and camera resolution, the average field color of the entire fundus image is computed and subtracted from every pixel of the fundus image. Second, the morphology of the fundus blood vessels is described using multi-level parabolas, and the centerlines of the fundus blood vessels are identified. Then the location of the optic disc region (the place where the vessels converge) is obtained from the vessel course, and the optic disc region is obtained by ellipse fitting. Taking the optic disc center as the origin, with a radius equal to the distance from the origin to the farthest optic disc boundary plus 50 pixels, the optic cup region and the peripapillary atrophy (PPA) region are extracted.
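The first operation above, subtracting the average field color of the whole image from every pixel, can be sketched in a few lines of NumPy. The per-channel mean is an implementation assumption; the patent says only that the average color of the entire image is subtracted from every pixel.

```python
import numpy as np

def subtract_field_mean(image: np.ndarray) -> np.ndarray:
    """Subtract the average color of the whole fundus image from every pixel
    (one mean per color channel), reducing illumination/camera differences."""
    mean_color = image.astype(np.float64).mean(axis=(0, 1))  # per-channel mean
    return image.astype(np.float64) - mean_color

# toy image: uniform gray with one brighter pixel
img = np.full((4, 4, 3), 100.0)
img[0, 0] = [130.0, 130.0, 130.0]
out = subtract_field_mean(img)
```

After subtraction every channel of the result is zero-mean, so images taken under different lighting become directly comparable.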
Preferably, step S200 comprises the following steps: multiple square patches of 128 × 128 pixels (other sizes are also possible) are generated from the extracted image, with the centers of adjacent patches 32 pixels apart. In addition, to remove boundary effects, the resampled images are cropped to 90% of their full size.
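The patch resampling and the 90% crop can be sketched as follows, using the preferred values above (128 × 128 patches with centers 32 pixels apart); the centered placement of the crop is an implementation assumption.

```python
import numpy as np

def extract_patches(region: np.ndarray, size: int = 128, stride: int = 32):
    """Cut a region image into size x size patches whose top-left corners
    (and hence centers) are `stride` pixels apart."""
    h, w = region.shape[:2]
    return [region[y:y + size, x:x + size]
            for y in range(0, h - size + 1, stride)
            for x in range(0, w - size + 1, stride)]

def center_crop(patch: np.ndarray, fraction: float = 0.9) -> np.ndarray:
    """Crop a patch to `fraction` of its full size around its center,
    removing boundary effects."""
    h, w = patch.shape[:2]
    ch, cw = int(round(h * fraction)), int(round(w * fraction))
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    return patch[y0:y0 + ch, x0:x0 + cw]

region = np.zeros((256, 256))
patches = [center_crop(p) for p in extract_patches(region)]
```

On a 256 × 256 region this yields a 5 × 5 grid of overlapping patches, each cropped from 128 × 128 down to 115 × 115.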
Preferably, step S300 comprises the following steps: randomly scaling the image by ±10%, randomly rotating the image within the range 0 to 360 degrees, and randomly tilting it by ±0.2 rad (rad, the radian, is a unit of angle); it can also include adjusting the contrast of the region images to increase the amount of training data. By modifying the image parameter values, a large amount of image data is obtained, so that the number of region images increases, thereby achieving the purpose of data augmentation.
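The three random transformations can be sampled in the stated ranges and composed into a single affine matrix; the composition order (scale · rotation · shear) is an illustrative choice that the text does not specify.

```python
import math
import random

def sample_augmentation(rng: random.Random):
    """Sample one augmentation setting in the ranges given above:
    scale within +/-10%, rotation 0-360 degrees, tilt (shear) within +/-0.2 rad.
    Returns the parameters and the 2x2 linear part of the affine transform."""
    scale = rng.uniform(0.9, 1.1)
    angle = rng.uniform(0.0, 2.0 * math.pi)
    shear = rng.uniform(-0.2, 0.2)
    c, s = math.cos(angle), math.sin(angle)
    t = math.tan(shear)
    # scale * R(angle) @ Shear(shear), expanded by hand
    m = [[scale * c, scale * (c * t - s)],
         [scale * s, scale * (s * t + c)]]
    return scale, angle, shear, m

rng = random.Random(42)
params = [sample_augmentation(rng) for _ in range(100)]
```

A useful sanity check on the composition: the determinant of the matrix equals scale², since rotation and shear both preserve area.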
Embodiment 3
As shown in Fig. 2, the flowchart of the convolutional neural network training process: the fundus image is first pre-processed, then resampled and augmented according to the method of Embodiment 2, and the trained convolutional neural network then identifies and analyzes the fundus image.
As shown in Fig. 2, the convolutional neural network architecture comprises 5 convolutional layers and 2 fully connected layers with weights. The input layer is the image generated by resampling in the image pre-processing step, and is followed by the 5 convolutional layers (Convolutional Layers). After the convolutions of the 1st and 2nd convolutional layers, a ReLU (rectified linear unit) function is applied to accelerate the training of the neural network; local response normalization (Local Response Normalization, Formula 1) is then performed to prevent over-fitting, and finally max pooling (MaxPooling) is applied.
Let a^i_{x,y} denote the activity of the neuron at position (x, y) computed by applying convolution kernel i and then the ReLU. The response-normalized activity b^i_{x,y} is:

b^i_{x,y} = a^i_{x,y} / ( k + α · Σ_{j = max(0, i−n/2)}^{min(N−1, i+n/2)} (a^j_{x,y})² )^β    (Formula 1)

where the sum runs over the n convolution kernels adjacent to kernel i at the same spatial position, N is the total number of convolution kernels in the layer, and k, n, α, β are hyper-parameters.
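Formula 1 can be sketched directly in NumPy. The default hyper-parameter values below (k = 2, n = 5, α = 10⁻⁴, β = 0.75) are the values published for AlexNet and are an assumption here, since the text does not fix them.

```python
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """Formula 1 applied across the channel axis: `a` has shape (N, H, W),
    one slice per convolution kernel. Each activity is divided by a term
    summing the squared activities of the n neighbouring kernels at the
    same spatial position."""
    N = a.shape[0]
    b = np.empty_like(a, dtype=np.float64)
    for i in range(N):
        lo, hi = max(0, i - n // 2), min(N - 1, i + n // 2)
        denom = (k + alpha * (a[lo:hi + 1] ** 2).sum(axis=0)) ** beta
        b[i] = a[i] / denom
    return b

acts = np.ones((8, 4, 4))
normed = local_response_norm(acts)
```

Note the edge behavior: kernels near the channel boundaries sum over fewer neighbours, so their normalized activities come out slightly larger.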
The output of the max pooling is connected as input to the 3rd convolutional layer. The 3rd, 4th, and 5th convolutional layers are connected to one another directly, without intervening normalization or max-pooling layers. The convolutional layers are followed by two fully connected layers (Fully Connected Layers): the 1st fully connected layer takes the output of the 5th convolutional layer as input, and the output of the 2nd fully connected layer serves as the input of the softmax layer, whose output is the label classification of the automatically extracted image features. To reduce over-fitting in the fully connected layers, we use the "dropout" method: in each training batch, a certain proportion of the fully connected layer's node values are set to 0, which reduces over-fitting.
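The dropout step can be sketched as follows. The rescaling of the surviving activations by 1/(1 − p) ("inverted dropout") is a common implementation choice and an assumption here; the text only says that a proportion of node values is set to 0.

```python
import numpy as np

def dropout(activations, p=0.5, training=True, rng=None):
    """Zero out a proportion p of fully connected activations during a
    training batch, scaling the survivors by 1/(1-p) so that the expected
    activation is unchanged. At inference time the input passes through."""
    if not training:
        return activations
    rng = rng or np.random.default_rng(0)
    mask = rng.random(activations.shape) >= p  # True = node survives
    return activations * mask / (1.0 - p)

x = np.ones((4, 1000))
y = dropout(x, p=0.5)
```

During training roughly half the node values become 0 and the rest become 2.0; at inference (`training=False`) the activations are returned untouched.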
In this embodiment we use a convolutional neural network as the deep learning method; other neural network methods can also be used, as can other deep learning methods such as those based on auto-encoders or restricted Boltzmann machines, or combinations thereof.
Embodiment 4
As shown in Fig. 3, the structural block diagram of the deep-learning-based fundus image processing apparatus: the fundus image processing apparatus 2 comprises a fundus image pre-processing module 10, a resampling module 20, a data augmentation module 30, and a region image feature recognition module 40, connected in sequence;
the fundus image pre-processing module 10 pre-processes the fundus image and segments and extracts regions from it;
the resampling module 20 resamples the region images obtained by segmentation and extraction;
the data augmentation module 30 applies data augmentation to the resampled region images; and
the region image feature recognition module 40 identifies region image features using a deep learning method.
Embodiment 5

This embodiment builds on Embodiment 4 and provides a deep-learning-based fundus image processing system. As shown in Fig. 4, it includes a fundus image input device 1 connected to a fundus image processing apparatus 2, the fundus image processing apparatus 2 being connected to a result output device 3.

The fundus image input device 1 of this embodiment acquires the fundus image; specifically, the fundus image input device 1 can be a fundus camera, a PACS (abbreviation of Picture Archiving and Communication Systems), an image storage system, or the like. After the fundus image input device 1 acquires the image, it transmits the image to the fundus image processing apparatus 2, to which it is connected through a DICOM or TCP/IP interface; images can be uploaded automatically or manually, and users can also upload fundus images over HTTP through an Internet browser.

The fundus image processing apparatus 2 analyzes the fundus images transmitted by the fundus image input device 1 according to the fundus image analysis methods of Embodiments 1 to 3 of this application and transfers the analysis results to the result output device 3; the result output device 3 outputs the analysis results, specifically in the form of images and/or statistical graphs, through a DICOM browser and/or a web page.
Embodiment 6

This embodiment provides a fundus image pre-processing method used in this application, i.e., a method of segmenting and extracting regions from the fundus image; however, other prior-art methods of processing fundus images can also be used in this application.
As shown in Fig. 5, the flowchart of the fundus image pre-processing method of this embodiment, which comprises the following steps S10 to S60:
S10: identify the blood vessels in the fundus image;
S20: represent the global morphological structure of the fundus blood vessels; represent the course of the fundus blood vessels and obtain the vessel direction map;
S30: coarsely locate the optic disc region according to the global morphological structure of the fundus blood vessels obtained in step S20;
S40: within the coarsely located optic disc region of step S30, determine the locally high-contrast or high-brightness region as the pixel cluster of the optic disc;
S50: determine the optic disc boundary by ellipse fitting on the optic disc pixel cluster obtained in step S40, yielding the optic disc region;
S60: determine the approximate region of the macula in conjunction with the vessel direction map of step S20, compute the vessel distance field, and find the position of the maximum value of the vessel distance field within the approximate macular region, which is the position of the fovea of the macula.
Preferably, the blood vessels of the fundus image in step S10 can be identified using the method described in patent application 201410220540.0, which comprises the following steps: extract the green-channel retinal grayscale image from the RGB retinal fundus image; set multiple comparison scales, and at each comparison scale quantize the comparison of the pixels of the retinal grayscale image from multiple directions to obtain retinal binarized contrast maps, in which the pixels are divided into pixels on retinal vessels and pixels not on retinal vessels; perform difference fusion on the multiple retinal binarized contrast maps obtained, remove the noise and artifacts from the fused image, and obtain the retinal fusion map; determine the central light-reflex regions from the retinal fusion map and fill them, obtaining the retinal vessel distribution map. Other vessel identification methods are also applicable in this application and are not described in detail here.
Preferably, the global morphological structure of the fundus blood vessels in step S20 is represented as follows: multi-level parabolas are used to describe the umbrella-like form of the fundus blood vessels; Fig. 6-A shows the parabolic morphology map of the fundus blood vessels of the fundus image of this embodiment. S30: the optic disc region is coarsely located according to the global morphological structure of the vessels obtained in step S20; as shown in Fig. 6-A, the position of the parabola vertex roughly corresponds to the position of the optic disc region. The course of the fundus blood vessels in step S20 is represented as follows: the vessels are discretized into a series of straight-line segments, the directions of which represent the directions of the vessels, and the course of the vessels is determined by statistics over these directions, yielding the vessel direction map. Fig. 6-B shows the direction map of the fundus blood vessels of the fundus image of this embodiment; highlighted region 2 in the map is the optic disc region and highlighted region 1 is the macular region.
Preferably, a more specific way of representing the course of the fundus blood vessels in step S20 is as follows, as shown in Fig. 7:
Given first the fundus image shown in Fig. 7-A, the retinal vessels are automatically identified, for example with a line-fitting method (other vessel identification methods can also be used in practice), as shown in Fig. 7-B;
the vessels are then divided into segments with a regular grid as shown in Fig. 7-B (other methods of segmenting the vessels can also be used), and the direction of the central axis of the vessel segment in each grid cell is defined as the direction of the line through the two farthest points of the segment.
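The central-axis definition above — the direction of the line through the two farthest points of a segment's pixel set — can be sketched as follows; the pixel coordinates and the toy segment are illustrative.

```python
import math
from itertools import combinations

def segment_direction(pixels):
    """Direction (radians, folded into [0, pi)) of the line through the two
    farthest points of a vessel segment's pixel set, i.e. the segment's
    central-axis direction within one grid cell."""
    (x1, y1), (x2, y2) = max(
        combinations(pixels, 2),
        key=lambda pq: (pq[0][0] - pq[1][0]) ** 2 + (pq[0][1] - pq[1][1]) ** 2)
    return math.atan2(y2 - y1, x2 - x1) % math.pi

# a short diagonal segment with one stray pixel: direction is 45 degrees
seg = [(0, 0), (1, 1), (2, 2), (3, 3), (1, 2)]
angle = segment_direction(seg)
```

Folding into [0, π) makes the direction independent of which endpoint is listed first, which is what an undirected axis requires.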
Given a point P(x, y) in the image, the orientation of the surrounding vessel segments is computed, where Si is the orientation of the vessel segment (Vi) in the area around the given point P, and Ti is the line between the center of the vessel segment around P and the point P. The orientations of the vessel segments in all grid cells are computed with this method, yielding the direction map shown in Fig. 7-C.
To obtain a more stable optic disc region, the original direction map (Fig. 7-C), whose values are taken per pixel position, is converted into a new direction map whose value at each pixel is the maximum orientation value in the pixel's neighborhood. The smoothed new direction map is shown in Fig. 7-D. The direction map shows that the macular region and the optic disc region both have relatively high orientation values; the two regions are distinguished by counting the vessels in each: region 31, with more vessels, is taken as the optic disc region, and region 32 as the macular region.
Preferably, the concrete operation of determining the optic disc pixel cluster in step S40 is as follows. First, the green channel is extracted from the RGB color retinal fundus image. Second, given a fundus image pixel P(x, y), a sliding-window filter computes the relative pixel brightness of each pixel (the difference between the pixel's brightness value and the window's average brightness value; other methods of computing relative pixel brightness can also be used). Applying this filter to the fundus image yields a grayscale contrast image. Although the optic disc boundary can be intuitively distinguished in this contrast image, the discontinuity of the optic disc boundary still makes it difficult to identify, so we step the contrast threshold of the grayscale contrast image to obtain a pixel cluster that accurately represents the optic disc.
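The sliding-window contrast filter and the threshold stepping can be sketched as follows. The edge-padding of the window and the 5% stopping rule for the threshold are illustrative assumptions; the patent states neither.

```python
import numpy as np

def window_contrast(gray, win=15):
    """Relative brightness of each pixel: its value minus the mean of the
    win x win window centred on it (edges handled by replicate padding)."""
    pad = win // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    h, w = gray.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = gray[y, x] - padded[y:y + win, x:x + win].mean()
    return out

def threshold_cluster(contrast, step=0.1):
    """Step the contrast threshold upward until at most 5% of the pixels
    remain above it, approximating the bright optic-disc pixel cluster."""
    t = contrast.min()
    while (contrast > t).mean() > 0.05 and t < contrast.max():
        t += step
    return contrast > t

# toy image: one bright pixel on a dark background
gray = np.zeros((9, 9))
gray[4, 4] = 100.0
contrast = window_contrast(gray, win=3)
cluster = threshold_cluster(contrast)
```

On the toy image the stepping isolates exactly the bright pixel, mimicking how the disc's high-contrast core is separated from its fuzzy boundary.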
Preferably, the concrete operation of determining the optic disc boundary in step S50 is as follows. From the optic disc pixel cluster obtained in S40, a circular optic disc is obtained using the Hough transform (other methods can also be used). On the basis of this circular region, the pixel clusters adjacent to its periphery are determined, the optic disc boundary is then determined using the method of ellipse fitting, and finally the pixels adjacent to the ellipse boundary are also assigned to the optic disc region; the boundary finally obtained delimits the optic disc region. Considering that the brightness and contrast of the optic disc differ greatly between images, we use a multi-scale strategy; Fig. 8 shows the optic disc under different brightness levels, where Threshold is the brightness threshold. The brightness of the optic disc is adjusted step by step and the entire circle detection and ellipse fitting are repeated; each ellipse is scored, the scoring standard jointly considering the ratio of the ellipse's semi-minor to semi-major axis and the ellipse's average brightness, and the highest-scoring ellipse is taken as the optic disc region. Using the same method, the optic cup can be located within the identified optic disc region.
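The ellipse-scoring step can be sketched as follows. The equal weighting of the two criteria and the normalization of brightness to [0, 1] are illustrative assumptions, since the text says only that the axis ratio and the average brightness are considered jointly.

```python
def score_ellipse(semi_minor, semi_major, mean_brightness, weight=0.5):
    """Score an ellipse candidate: axis ratio closer to 1 means rounder,
    higher mean brightness (normalized to [0, 1]) means more disc-like."""
    return weight * (semi_minor / semi_major) + (1.0 - weight) * mean_brightness

# (semi_minor, semi_major, mean_brightness) candidates from repeated fits
candidates = [
    (40.0, 50.0, 0.90),  # bright and fairly round -> good disc candidate
    (20.0, 55.0, 0.95),  # bright but strongly elongated
    (45.0, 50.0, 0.40),  # round but dim
]
best = max(candidates, key=lambda c: score_ellipse(*c))
```

The combined score prefers the bright, fairly round candidate over one that excels on only a single criterion.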
Preferably, the concrete operation of determining the macular region in step S60 is as follows. After the optic disc region has been determined, the macular region is roughly located according to the vessel direction map; the specific method is to examine the maximum map value outside the optic disc region. Afterwards, the vessel distance field is computed and the position of the maximum distance-field value within the approximate macular region is found; this position is the position of the fovea of the macula.
More specifically: given the characteristics of the fundus vessels, the distal ends of the vessels generally point toward the macular region, but compared with the optic disc, the vessels of the macular region are thinner and the image brightness of the macular region is relatively low. To precisely locate the center of the macula, i.e. the fovea, we use the vessel distance field, as shown in Fig. 9, where Fig. 9-A shows the delimited optic disc region 52 and macular region 51, and Fig. 9-B shows the distance field of the fundus blood vessels of the fundus image corresponding to Fig. 9-A; the circled mark 54 is the maximum of the vessel distance field, i.e. the position of the fovea in the macular region.
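The distance-field step can be sketched with a multi-source breadth-first search over the vessel mask; the 4-neighbour grid metric, the region-of-interest representation, and the toy mask are illustrative assumptions.

```python
from collections import deque

def vessel_distance_field(vessel_mask):
    """Multi-source BFS: each cell's value is its 4-neighbour grid distance
    to the nearest vessel pixel. vessel_mask is a list of lists of 0/1."""
    h, w = len(vessel_mask), len(vessel_mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if vessel_mask[y][x]:
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def fovea_position(dist, roi):
    """Position of the maximum distance-field value inside the approximate
    macular region `roi` (a set of (y, x) cells) -- taken as the fovea."""
    return max(roi, key=lambda p: dist[p[0]][p[1]])

# toy example: vessels along the left column and top row of a 6x6 grid
mask = [[1 if (y == 0 or x == 0) else 0 for x in range(6)] for y in range(6)]
dist = vessel_distance_field(mask)
roi = {(y, x) for y in range(1, 6) for x in range(1, 6)}
fovea = fovea_position(dist, roi)
```

The cell farthest from every vessel wins, which matches the observation above that the fovea sits in a locally vessel-free area.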
The automatic identification method for the optic disc region described above applies equally to the automatic identification of the optic cup: in Fig. 9-A, the optic disc region 52 contains the optic cup 53 (the optic disc contains the optic cup; the optic disc is also called the optic papilla, and the optic cup is the depression of the optic papilla).
The fundus image analysis method of this embodiment organically combines multiple morphological features of the fundus blood vessels (e.g., the overall vessel morphology and the vessel course), the optic disc/macular image, and anatomical features (e.g., both lie in regions where the vessels converge). The optic disc and optic cup regions are first determined using the method of ellipse fitting; on the basis of the determined optic disc region, the macular region is finally determined by combining the brightness difference between the optic disc and the macula in the image with the distance-field analysis method. This method makes full use of the forms and image features of the various fundus tissues and combines these features, exploiting the complementary relationships between the forms, so that it does not depend on any single form or feature; this ensures that the method can ultimately adapt to factors such as varying image quality and illumination environments, and that the macula and the optic disc region are obtained accurately and stably.
The image pre-processing method of this application is not limited to the method of this embodiment; other prior-art methods can also be used to pre-process the fundus image so that its regions are segmented and extracted, after which the deep learning method performs further processing on the pre-processed basis. This is not repeated here.
Compared with the prior art, the deep-learning-based fundus image processing method, apparatus and system of the present invention achieve the following effects:
(1) The deep-learning-based fundus image processing method, apparatus and system in this application can analyze fundus images automatically, and the analysis results are objective and accurate.
(2) The deep-learning-based fundus image processing method, apparatus and system in this application use deep learning to automatically learn abstract features of the image data at all levels, so as to better capture the essential characteristics of the data and reduce the incompleteness caused by hand-designed features.
(3) In the deep-learning-based fundus image processing method, apparatus and system in this application, segmentation and extraction of the optic disc, optic cup, peripapillary atrophy region and vessel regions are added during image preprocessing, and these serve as the data for subsequent deep learning. In this way the images need not be downscaled, yet training on the data can still be completed in an acceptable time, solving the problem that deep learning would otherwise take an unacceptably long time.
Since the method embodiments of the present application have already been described in detail, the expanded description of the parts of the system that correspond to the method is omitted here. For details of the system, reference may be made to the contents of the method embodiments, which are not restated here.
The above description shows and describes several preferred embodiments of the present application, but, as stated above, it should be understood that the application is not limited to the forms disclosed herein; these should not be regarded as excluding other embodiments, and the application may be used in various other combinations, modifications and environments, and may be altered, within the scope of the inventive concept described herein, in light of the above teachings or the skill and knowledge of the related art. All changes and modifications made by those skilled in the art that do not depart from the spirit and scope of the application shall fall within the protection scope of the appended claims.
Claims (8)
1. A fundus image processing method based on deep learning, characterized by comprising the following steps:
first, preprocessing the fundus image and performing segmentation and extraction of regions on the fundus image, wherein the fundus image preprocessing comprises calculating the average field color of the entire fundus image and subtracting that average color from every pixel of the fundus image;
then resampling the region images obtained by the segmentation and extraction;
performing data augmentation on the resampled region images; and
identifying region image features using a deep learning method.
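The preprocessing step of claim 1 (subtract the whole-field average color from every pixel) can be sketched as follows. This is a minimal illustration under my own assumptions: the image is a nested list of RGB tuples, and `subtract_mean_color` is a hypothetical name.

```python
def subtract_mean_color(image):
    """Sketch of claim 1's preprocessing: compute the average color over
    the whole fundus image field and subtract it from every pixel, which
    suppresses global illumination differences between photographs."""
    n = sum(len(row) for row in image)          # total pixel count
    channels = len(image[0][0])
    mean = [sum(px[c] for row in image for px in row) / n
            for c in range(channels)]
    return [[tuple(px[c] - mean[c] for c in range(channels))
             for px in row] for row in image]
```

A production version would restrict the average to the circular fundus field rather than the full rectangular frame, since the black surround would otherwise bias the mean.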
2. The fundus image processing method based on deep learning according to claim 1, characterized in that the regions include the optic disc region, the optic cup region, the peripapillary atrophy region and the vessel regions.
3. The fundus image processing method based on deep learning according to claim 1, characterized in that the resampling includes generating thin slice (patch) images from the region image and cropped region images.
4. The fundus image processing method based on deep learning according to claim 1, characterized in that the data augmentation includes region image scaling, region image rotation, region image tilting and/or region image contrast adjustment.
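One of the augmentations listed in claim 4, contrast adjustment, can be illustrated on a grayscale region image. This is a sketch under my own assumptions (the function name and the mean-centred formulation are not taken from the patent):

```python
def adjust_contrast(image, factor):
    """Sketch of claim 4's contrast-adjustment augmentation: move every
    pixel away from (factor > 1) or towards (factor < 1) the image mean
    intensity, raising or lowering contrast respectively."""
    n = sum(len(row) for row in image)
    mean = sum(v for row in image for v in row) / n
    return [[mean + factor * (v - mean) for v in row] for row in image]
```

Scaling, rotation and tilting would similarly each produce additional training variants of the same region image.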
5. The fundus image processing method based on deep learning according to claim 1, characterized in that identifying region image features using a deep learning method includes training on region image features and identification of region image features.
6. The fundus image processing method based on deep learning according to claim 5, characterized in that the deep learning method is a convolutional neural network method.
7. A fundus image processing apparatus based on deep learning, characterized by comprising:
a fundus image preprocessing module, for preprocessing the fundus image and performing segmentation and extraction of regions on the fundus image, wherein the fundus image preprocessing further includes calculating the average field color of the entire fundus image and subtracting that average color from every pixel of the fundus image;
a resampling module, for resampling the region images obtained by the segmentation and extraction;
a data augmentation module, for performing data augmentation on the resampled region images; and
a region image feature identification module, for identifying region image features using a deep learning method.
8. A fundus image processing system based on deep learning, characterized by comprising a fundus image input device, the fundus image input device being connected to the fundus image processing apparatus of claim 7, and the fundus image processing apparatus being connected to a result output device.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610884671.8A CN106408564B (en) | 2016-10-10 | 2016-10-10 | A kind of method for processing fundus images based on deep learning, apparatus and system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN106408564A CN106408564A (en) | 2017-02-15 |
| CN106408564B true CN106408564B (en) | 2019-04-02 |
Family
ID=59228993
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610884671.8A Active CN106408564B (en) | 2016-10-10 | 2016-10-10 | A kind of method for processing fundus images based on deep learning, apparatus and system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN106408564B (en) |
Families Citing this family (29)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106971174B (en) * | 2017-04-24 | 2020-05-22 | 华南理工大学 | CNN model, CNN training method and CNN-based vein identification method |
| CN107358605B (en) * | 2017-05-04 | 2018-09-21 | 深圳硅基仿生科技有限公司 | The deep neural network apparatus and system of diabetic retinopathy for identification |
| CN108172291B (en) * | 2017-05-04 | 2020-01-07 | 深圳硅基智能科技有限公司 | Diabetic retinopathy recognition system based on fundus images |
| CN107527069A (en) | 2017-08-22 | 2017-12-29 | 京东方科技集团股份有限公司 | Image processing method, device, electronic equipment and computer-readable medium |
| CN109427052B (en) * | 2017-08-29 | 2020-10-02 | 中国移动通信有限公司研究院 | Related method and device for processing fundus images based on deep learning |
| CN107644418B (en) * | 2017-09-26 | 2019-11-08 | 山东大学 | Video disc detection method and system based on convolutional neural network |
| CN110399929B (en) * | 2017-11-01 | 2023-04-28 | 腾讯科技(深圳)有限公司 | Fundus image classification method, fundus image classification apparatus, and computer-readable storage medium |
| CN108520522A (en) * | 2017-12-31 | 2018-09-11 | 南京航空航天大学 | Retinal fundus images dividing method based on the full convolutional neural networks of depth |
| CN108230322B (en) * | 2018-01-28 | 2021-11-09 | 浙江大学 | Eye ground characteristic detection device based on weak sample mark |
| CN108230341B (en) * | 2018-03-07 | 2021-12-17 | 汕头大学 | Eyeground image blood vessel segmentation method based on layered matting algorithm |
| CN108492285B (en) * | 2018-03-13 | 2021-11-09 | 北京理工大学 | Radial optic disc atrophy arc segmentation method based on gray scale change |
| CN108596338A (en) * | 2018-05-09 | 2018-09-28 | 四川斐讯信息技术有限公司 | A kind of acquisition methods and its system of neural metwork training collection |
| CN108717696B (en) * | 2018-05-16 | 2022-04-22 | 上海鹰瞳医疗科技有限公司 | Yellow spot image detection method and equipment |
| EP3815599B1 (en) * | 2018-06-29 | 2025-09-17 | Nidek Co., Ltd. | Ophthalmic image processing device, oct device, ophthalmic image processing program, and mathematical model building method |
| CN109166117B (en) * | 2018-08-31 | 2022-04-12 | 福州依影健康科技有限公司 | Automatic eye fundus image analysis and comparison method and storage device |
| CN109325942B (en) * | 2018-09-07 | 2022-03-25 | 电子科技大学 | Fundus image structure segmentation method based on full convolution neural network |
| CN109685770B (en) * | 2018-12-05 | 2020-10-09 | 合肥奥比斯科技有限公司 | Method for determining retinal vascular tortuosity |
| CN109977905B (en) * | 2019-04-04 | 2021-08-06 | 北京百度网讯科技有限公司 | Method and apparatus for processing fundus images |
| CN110101361B (en) * | 2019-04-23 | 2022-07-12 | 深圳市新产业眼科新技术有限公司 | Big data based online intelligent diagnosis platform and operation method and storage medium thereof |
| CN110327013B (en) * | 2019-05-21 | 2022-02-15 | 北京至真互联网技术有限公司 | Fundus image detection method, device and equipment and storage medium |
| CN110400288B (en) * | 2019-06-18 | 2021-08-10 | 中南民族大学 | Sugar network disease identification method and device fusing binocular features |
| CN110489577B (en) * | 2019-08-06 | 2024-01-26 | 腾讯医疗健康(深圳)有限公司 | Medical image management method and device, fundus image processing method and electronic equipment |
| JP7264254B2 (en) * | 2019-08-30 | 2023-04-25 | 株式会社ニコン | Image processing method, image processing apparatus, and program |
| CN110543921A (en) * | 2019-10-14 | 2019-12-06 | 四川大学 | A method for early fault identification of cables based on waveform learning |
| CN110838116B (en) * | 2019-11-14 | 2023-01-03 | 上海联影医疗科技股份有限公司 | Medical image acquisition method, device, equipment and computer-readable storage medium |
| CN116824203B (en) * | 2019-12-04 | 2025-12-23 | 深圳硅基智能科技有限公司 | A glaucoma identification device and method based on neural networks |
| CN112017168B (en) * | 2020-08-25 | 2024-09-06 | 北京致远慧图科技有限公司 | Method and device for detecting defect of optic nerve fiber layer |
| CN112001920B (en) | 2020-10-28 | 2021-02-05 | 北京至真互联网技术有限公司 | Fundus image recognition method, device and equipment |
| CN115018799B (en) * | 2022-06-15 | 2022-11-11 | 潍坊眼科医院有限责任公司 | Fundus image quality optimization method based on deep learning |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1101249A (en) * | 1994-06-24 | 1995-04-12 | 中国科学院上海技术物理研究所 | Real time collecting for eyeground picture and processing method and its apparatus |
| CN104636701A (en) * | 2014-12-12 | 2015-05-20 | 浙江工业大学 | A Laser Two-Dimensional Code Recognition Method Based on Image Restoration |
| CN105069413A (en) * | 2015-07-27 | 2015-11-18 | 电子科技大学 | Human body gesture identification method based on depth convolution neural network |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4336561B2 (en) * | 2003-10-24 | 2009-09-30 | キヤノン株式会社 | Fundus image processing device |
- 2016-10-10: application CN201610884671.8A filed in China; granted as CN106408564B (status: Active)
Non-Patent Citations (1)
| Title |
|---|
| Research on Fundus Image Segmentation Algorithms and Their Applications; Gao Weiwei (高玮玮); China Doctoral Dissertations Full-text Database, Information Science and Technology Series; 2016-01-15 (No. 1); sections 2.2.1, 3.4 and 5.3.1 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN106408564A (en) | 2017-02-15 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN106408564B (en) | A kind of method for processing fundus images based on deep learning, apparatus and system | |
| Tulsani et al. | Automated segmentation of optic disc and optic cup for glaucoma assessment using improved UNET++ architecture | |
| CN106920227B (en) | The Segmentation Method of Retinal Blood Vessels combined based on deep learning with conventional method | |
| Shen et al. | Domain-invariant interpretable fundus image quality assessment | |
| Al-Bander et al. | Multiscale sequential convolutional neural networks for simultaneous detection of fovea and optic disc | |
| CN110197493B (en) | Fundus image blood vessel segmentation method | |
| Uysal et al. | Computer-aided retinal vessel segmentation in retinal images: convolutional neural networks | |
| CN109325942B (en) | Fundus image structure segmentation method based on full convolution neural network | |
| CN112288720A (en) | Deep learning-based color fundus image glaucoma screening method and system | |
| CN108095683A (en) | The method and apparatus of processing eye fundus image based on deep learning | |
| CN113012093B (en) | Training method and training system for glaucoma image feature extraction | |
| CN111028230A (en) | A detection algorithm for optic disc and macular location in fundus images based on YOLO-V3 | |
| CN108416371A (en) | A kind of diabetic retinopathy automatic testing method | |
| CN106960199B (en) | Complete extraction method of white eye region of true color eye picture | |
| CN109658423B (en) | Automatic optic disk cup segmentation method for color fundus picture | |
| CN117557840B (en) | A method for grading fundus lesions based on small sample learning | |
| CN109583331A (en) | Human wrist arteries and veins mouth position precise positioning method based on deep learning | |
| CN109919938A (en) | Method for obtaining optic disc segmentation map of glaucoma | |
| CN106846293A (en) | Image processing method and device | |
| CN110930346A (en) | Automatic detection method and storage device for fundus image microangioma | |
| CN108665474B (en) | A B-COSFIRE-based method for retinal blood vessel segmentation in fundus images | |
| Xue et al. | Optic disk detection and segmentation for retinal images using saliency model based on clustering | |
| Selvathi | Classification of ocular diseases using transfer learning approaches and glaucoma severity grading | |
| Sharma et al. | Automatic glaucoma diagnosis in digital fundus images using convolutional neural network | |
| Purwanithami et al. | Hemorrhage diabetic retinopathy detection based on fundus image using neural network and FCM segmentation |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |