CN109215039B - Method for processing fundus picture based on neural network - Google Patents


Info

Publication number
CN109215039B
CN109215039B
Authority
CN
China
Prior art keywords
neural network
picture
fundus
area
optic
Prior art date
Legal status
Active
Application number
CN201811330427.2A
Other languages
Chinese (zh)
Other versions
CN109215039A (en)
Inventor
覃鹏志
包勇
文耀锋
Current Assignee
Changzhou Industrial Technology Research Institute of Zhejiang University
Original Assignee
Changzhou Industrial Technology Research Institute of Zhejiang University
Priority date
Filing date
Publication date
Application filed by Changzhou Industrial Technology Research Institute of Zhejiang University
Priority to CN201811330427.2A
Publication of CN109215039A
Application granted
Publication of CN109215039B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/13 Edge detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention provides a method for processing fundus pictures based on a neural network, which comprises the following steps: S1, acquiring a plurality of fundus pictures by camera shooting; S2, detecting the region containing the optic disc and the optic cup in each fundus picture, and cutting that region out of the fundus picture to obtain an initial sample picture; S3, outlining the optic disc region and the optic cup region on the initial sample picture, and filling the two regions with different colors to obtain a training sample picture; S4, constructing a neural network; S5, training the neural network on a plurality of training sample pictures and determining its parameters. The method can assist medical workers in judging the cup-to-disc ratio of a fundus picture efficiently and accurately, improving both the efficiency and the accuracy of glaucoma diagnosis.

Description

Method for processing fundus picture based on neural network
Technical Field
The invention relates to a method for processing fundus pictures, and in particular to a neural network, trained on a large number of samples, that automatically identifies and segments the optic cup and the optic disc from a fundus picture.
Background
Glaucoma is one of the major eye diseases causing blindness and was expected to affect about 80 million people by 2020. Unlike ophthalmic diseases such as cataract and myopia, the loss of vision caused by glaucoma is irreversible, so early screening is essential if early treatment is to preserve vision and quality of life. There are three clinical approaches to screening for glaucoma: tonometry, visual field examination, and optic disc assessment. Tonometry carries a certain risk and is not an effective screening tool for the many glaucoma patients with normal intraocular pressure; visual field examination requires specialized equipment that is not available in small hospitals and clinics; optic disc assessment, by contrast, is a convenient way to detect glaucoma early, since only a fundus photograph is needed. A popular optic nerve head (ONH) assessment is therefore based on the cup-to-disc ratio (CDR), the ratio of the vertical optic cup diameter to the optic disc diameter: the larger the ratio, the greater the risk of glaucoma. At present a doctor makes this judgment by observing the fundus picture, but because the boundary between the optic cup region and the optic disc region is unclear, estimating the diameter ratio takes a long time, so diagnosis is both slow and inaccurate.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a neural-network-based method for processing fundus pictures that assists medical staff in diagnosing glaucoma.
The technical scheme adopted by the invention to solve the problem is as follows: a method for processing fundus pictures based on a neural network, comprising the following steps:
S1, acquiring a plurality of fundus pictures by camera shooting;
S2, detecting the region containing the optic disc and the optic cup in the fundus picture, and cutting that region out of the fundus picture to obtain an initial sample picture;
S3, outlining the optic disc region and the optic cup region on the initial sample picture, and filling the two regions with different colors to obtain a training sample picture;
S4, constructing a neural network;
and S5, training the neural network on a plurality of training sample pictures, and determining the parameters of the neural network.
Preferably, in step S4, the neural network includes an input layer, an output layer and m hidden layers; the neurons of the input layer include the optic cup area, the optic disc area and the ratio of the optic cup area to the optic disc area, and the neurons of the output layer include a plurality of disease grades of glaucoma; an activation function and an objective function are determined.
Preferably, the activation function is:
$$a = \sigma(w \cdot x + b) = \frac{1}{1 + e^{-(w \cdot x + b)}}$$
the objective function is:
$$C = -\frac{1}{n}\sum_{x}\left[\,y \ln a + (1 - y)\ln(1 - a)\,\right]$$
wherein x is an input layer vector, w is a weight of the neural network, b is a bias term of the neural network, n is a sample capacity of the training sample picture, y is a vector representing a disease grade of glaucoma corresponding to x, and a is an output layer vector.
According to some other embodiments, in step S4 the neural network uses an FCN network, and its network units use the Inception structure units of the GoogLeNet network.
Further, in step S5:
a training sample picture is input into the neural network and processed layer by layer, the processing comprising convolution, pooling, rectification (ReLU), dropout and deconvolution, and a grayscale picture is output;
the grayscale picture is compared with the training sample picture, and the cross-entropy loss and the IoU are calculated;
the IoU is compared with a threshold;
if the IoU is smaller than the threshold, the parameters of the neural network are adjusted according to the cross-entropy loss;
the above steps are repeated in a loop with different training sample pictures until the IoU is greater than the threshold.
Preferably, in step S2, a circular region containing the optic disc is found by Hough transform circle detection; a square region containing the optic disc and the optic cup is then cut out of the fundus picture, centered on the center of the circular region and with a preset side length, to serve as the initial sample picture.
Preferably, the method further comprises the following steps:
performing image enhancement on the training sample picture with the Laplacian operator;
and performing simulated red-free light processing on the training sample picture.
The method for processing the fundus picture based on the neural network has the advantage that it assists medical workers in judging the cup-to-disc ratio of a fundus picture efficiently and accurately, improving both the efficiency and the accuracy of glaucoma diagnosis.
Drawings
The invention is further illustrated with reference to the following figures and examples.
Fig. 1 is a flowchart of an embodiment 1 of a processing method of a fundus picture based on a neural network.
Fig. 2 is a display view of a fundus picture.
Fig. 3 is a display diagram of a training sample picture.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "axial", "radial", "circumferential", and the like, indicate orientations and positional relationships based on the orientations and positional relationships shown in the drawings, and are used merely for convenience of description and for simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore, should not be considered as limiting the present invention.
Furthermore, the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In the description of the present invention, it is to be noted that, unless otherwise explicitly specified or limited, the term "connected" is to be interpreted broadly: for example, as a fixed, detachable, or integral connection; as a mechanical or electrical connection; as a direct connection or an indirect connection through an intermediary. The specific meanings of the above terms in the present invention can be understood by those skilled in the art in specific cases. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
Fig. 1 shows an embodiment 1 of a processing method of a fundus image based on a neural network, including the steps of:
s1, acquiring a plurality of fundus pictures through shooting of the camera, wherein a fundus picture is shown in figure 2, so that the actual picture shot by the fundus camera is larger, most of images for identifying and judging glaucoma come from a cup optic disk, and the cup optic disk is inconvenient to judge by directly using the fundus picture.
Therefore, in step S2, a circular region containing the optic disc is found by Hough transform circle detection, and a square region containing the optic disc and the optic cup is cut out of the fundus picture, centered on the center of that circle and with a preset side length, as the initial sample picture; in Fig. 2, portion L1 is the circular region and portion L2 is the square region. Cutting out the disc and cup prevents the background from occupying too large a proportion of the picture, which would otherwise tend to make the subsequent neural network training misclassify foreground as background. It also allows the scale of the image to be changed, converting a high-resolution image into a low-resolution one, which reduces the computation of the deep learning network and its GPU memory use, and shortens the network running time.
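The cropping step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: in practice the circle center would come from a Hough circle detector (e.g. OpenCV's HoughCircles), while here it is passed in directly, and the 400x600 picture size, the center coordinates, and the side length of 256 are all hypothetical values.

```python
import numpy as np

def crop_disc_region(fundus, center, side):
    """Cut a square of a preset side length around the detected
    optic-disc circle center, shifting the window so it stays
    inside the picture borders."""
    h, w = fundus.shape[:2]
    cx, cy = center
    half = side // 2
    # Clamp the top-left corner so the full square fits in the image.
    x0 = min(max(cx - half, 0), max(w - side, 0))
    y0 = min(max(cy - half, 0), max(h - side, 0))
    return fundus[y0:y0 + side, x0:x0 + side]

# Hypothetical 400x600 RGB fundus picture with the disc center near the edge.
img = np.zeros((400, 600, 3), dtype=np.uint8)
patch = crop_disc_region(img, center=(500, 120), side=256)
print(patch.shape)  # (256, 256, 3)
```

The clamping matters near the picture border: a naive `cx - half` slice would silently return a smaller patch there, which would break a fixed-size network input.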
S3, outlining the optic disc region and the optic cup region on the initial sample picture and filling the two regions with different colors to obtain a training sample picture. The training sample pictures are then augmented by rotation, flipping, contrast adjustment and the like. Next the image is enhanced with the Laplacian operator; in the enhanced picture the cup and disc regions are clearer, which helps the subsequent segmentation. Finally the red channel is removed to simulate red-free photography: red-free illumination filters out red light and exposes the fundus under blue-green light, so that the surface layer of the retina reflects the blue-green light and interference from deeper tissue is eliminated. Fig. 3 shows a training sample picture after processing.
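The two preprocessing operations named above, Laplacian sharpening and simulated red-free imaging, can be sketched in plain NumPy. This is a hedged illustration under stated assumptions: the 3x3 Laplacian kernel and the R, G, B channel order are conventional choices, not details given by the patent.

```python
import numpy as np

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def laplacian_sharpen(gray):
    """Sharpen by subtracting the Laplacian response (img - lap(img)),
    which emphasizes edges such as the cup/disc boundary."""
    h, w = gray.shape
    padded = np.pad(gray.astype(np.float64), 1, mode="edge")
    lap = sum(LAPLACIAN[i, j] * padded[i:i + h, j:j + w]
              for i in range(3) for j in range(3))
    return np.clip(gray - lap, 0, 255).astype(np.uint8)

def simulate_red_free(rgb):
    """Simulate red-free photography by discarding the red channel,
    keeping only the blue-green light reflected by the retinal surface."""
    out = rgb.copy()
    out[..., 0] = 0          # assumes channel order R, G, B
    return out

rgb = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
sharp = laplacian_sharpen(rgb[..., 1])      # enhance the green channel
red_free = simulate_red_free(rgb)
print(sharp.shape, red_free[..., 0].max())  # (64, 64) 0
```

Sharpening the green channel is a common convention in fundus work because vessel and rim contrast is strongest there, which also matches the red-free idea.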
S4, constructing a neural network: the neural network adopts an FCN (Fully Convolutional Network), and its network units adopt the Inception structure units of the GoogLeNet network.
An FCN converts the fully connected layers of a traditional CNN (convolutional neural network) into convolutional layers one by one. A traditional CNN has convolutional layers in front and a vector at the back whose length corresponds to the number of classes; the FCN converts that vector into a convolutional layer using convolution kernels of width and height 1, and then obtains, by upsampling, a segmentation output map of the same size as the original image. In this embodiment, several convolution kernels are added after the deconvolution of the FCN for feature extraction, adopting the Inception structure unit of the GoogLeNet network; using 1x1 and 3x3 convolution kernels avoids adding too much computational load.
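The Inception-style unit described above can be sketched numerically. This is an illustration, not the patent's network: all shapes and weights are hypothetical, and only the structural idea is shown, namely a 1x1 branch and a 3x3 branch running in parallel with their outputs concatenated along the channel axis.

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution: a per-pixel linear map across channels.
    x: (H, W, C_in), w: (C_in, C_out)."""
    return x @ w

def conv3x3(x, w):
    """'Same'-padded 3x3 convolution. x: (H, W, C_in), w: (3, 3, C_in, C_out)."""
    h, wd, _ = x.shape
    p = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((h, wd, w.shape[-1]))
    for i in range(3):
        for j in range(3):
            out += p[i:i + h, j:j + wd] @ w[i, j]
    return out

def inception_unit(x, w1, w3):
    """Inception-style unit: parallel 1x1 and 3x3 branches concatenated
    along the channel axis. The 1x1 branch adds features at very
    little extra computational cost."""
    return np.concatenate([conv1x1(x, w1), conv3x3(x, w3)], axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))
y = inception_unit(x, rng.standard_normal((16, 8)), rng.standard_normal((3, 3, 16, 8)))
print(y.shape)  # (8, 8, 16)
```

In a real FCN these would be framework layers with learned weights; the point here is only that the spatial size is preserved while the two branches contribute complementary channel groups.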
and S5, training the neural network on a plurality of training sample pictures and determining its parameters. A training sample picture is input into the neural network and processed layer by layer, the processing comprising convolution, pooling, rectification (ReLU), dropout and deconvolution, and a grayscale picture is output;
the grayscale picture is compared with the training sample picture, and the cross-entropy loss and the IoU are calculated;
the IoU is compared with a threshold;
if the IoU is smaller than the threshold, the parameters of the neural network are adjusted according to the cross-entropy loss;
the above steps are repeated in a loop with different training sample pictures until the IoU is greater than the threshold.
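The two quantities driving this training loop, the cross-entropy loss and the IoU, can be sketched for binary masks as follows. This is a minimal sketch under assumptions: the patent does not specify a binarization threshold, so 0.5 is used here, and the tiny 2x2 masks are purely illustrative.

```python
import numpy as np

def cross_entropy(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy between predicted mask
    probabilities and the labelled training mask."""
    p = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def iou(pred, target, thresh=0.5):
    """Intersection over union of the binarized prediction and label."""
    a = pred >= thresh
    b = target >= thresh
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0

pred = np.array([[0.9, 0.8],
                 [0.2, 0.7]])
target = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
print(round(iou(pred, target), 3))  # 0.667
# Training would keep adjusting the network parameters with the
# cross-entropy loss until iou(...) exceeds the chosen threshold.
```

The split of roles is the notable design point: the differentiable cross-entropy drives the parameter updates, while the non-differentiable IoU serves only as the stopping criterion.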
In other embodiments, steps S4 and S5 differ from embodiment 1:
in step S4, the neural network includes an input layer, an output layer and m hidden layers; the neurons of the input layer include the optic cup area, the optic disc area and the ratio of the optic cup area to the optic disc area, and the neurons of the output layer include a plurality of disease grades of glaucoma; an activation function and an objective function are determined.
The activation function is:
$$a = \sigma(w \cdot x + b) = \frac{1}{1 + e^{-(w \cdot x + b)}}$$
the objective function is:
$$C = -\frac{1}{n}\sum_{x}\left[\,y \ln a + (1 - y)\ln(1 - a)\,\right]$$
wherein x is the input layer vector, w the weights of the neural network, b the bias term, n the sample count of the training sample pictures, y the vector representing the glaucoma disease grade corresponding to x, and a the output layer vector. For example, x = [cup area of sample n1, disc area of sample n1, cup-to-disc ratio of sample n1] with corresponding y = [grade1 = 1, grade2 = 0, grade3 = 0, grade4 = 0, grade5 = 0], indicating that the actual disease grade of sample n1 is grade 1; the elements of the output layer vector a have the same meaning as those of y.
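A forward pass of this grading network can be sketched directly from the formulas above. This is an illustration, not the trained model: the weights are initialized to zero and the input values (cup area, disc area, ratio) are hypothetical numbers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w, b):
    """One sigmoid layer: a = sigma(w.x + b). With a 3-element input
    (cup area, disc area, their ratio) and 5 outputs (one per
    disease grade), w is 5x3 and b has length 5."""
    return sigmoid(w @ x + b)

def cost(a, y, n=1):
    """Cross-entropy objective averaged over n samples."""
    return float(-np.sum(y * np.log(a) + (1 - y) * np.log(1 - a)) / n)

x = np.array([0.3, 0.8, 0.375])        # hypothetical cup area, disc area, ratio
w = np.zeros((5, 3))
b = np.zeros(5)
a = forward(x, w, b)                   # all 0.5 before training: sigmoid(0) = 0.5
y = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # one-hot label: actual grade 1
c = cost(a, y)
print(a, round(c, 3))
```

With zero weights every output is 0.5, so the cost equals 5 ln 2; training by gradient descent on this cross-entropy would push the grade-1 output toward 1 and the rest toward 0.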
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, a schematic representation of the term does not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (7)

1. A processing method of fundus pictures based on a neural network is characterized by comprising the following steps:
s1, acquiring a plurality of fundus pictures through shooting of a camera;
s2, detecting a region containing the optic disc and the optic cup in the fundus picture, and cutting the region containing the optic disc and the optic cup from the fundus picture to obtain an initial sample picture;
S3, outlining the optic disc region and the optic cup region on the initial sample picture, and filling the two regions with different colors to obtain a training sample picture;
s4, constructing a neural network;
and S5, training the neural network through a plurality of training sample pictures, and determining parameters of the neural network.
2. The processing method of a fundus picture based on a neural network as set forth in claim 1, wherein: in step S4, the neural network includes an input layer, an output layer and m hidden layers, the neurons of the input layer include an optic cup area, an optic disc area and a ratio of the optic cup area to the optic disc area, and the neurons of the output layer include a plurality of disease levels of glaucoma; an activation function and an objective function are determined.
3. The processing method of a fundus picture based on a neural network as set forth in claim 2, wherein: the activation function is:
$$a = \sigma(w \cdot x + b) = \frac{1}{1 + e^{-(w \cdot x + b)}}$$
the objective function is:
$$C = -\frac{1}{n}\sum_{x}\left[\,y \ln a + (1 - y)\ln(1 - a)\,\right]$$
wherein x is an input layer vector, w is a weight of the neural network, b is a bias term of the neural network, n is a sample capacity of the training sample picture, y is a vector representing a disease grade of glaucoma corresponding to x, and a is an output layer vector.
4. The processing method of a fundus picture based on a neural network as set forth in claim 1, wherein: in step S4, the neural network employs an FCN network, and the network units of the neural network employ the Inception structure units of the GoogLeNet network.
5. The processing method of a fundus picture based on a neural network according to claim 4, wherein in said step S5:
inputting a training sample picture into the neural network and processing it layer by layer, the processing comprising convolution, pooling, rectification (ReLU), dropout and deconvolution, and outputting a grayscale picture;
comparing the grayscale picture with the training sample picture, and calculating the cross-entropy loss and the IoU;
comparing the IoU with a threshold;
if the IoU is smaller than the threshold, adjusting the parameters of the neural network according to the cross-entropy loss;
repeating the above steps in a loop with different training sample pictures until the IoU is greater than the threshold.
6. The method for processing the fundus picture based on the neural network according to any one of claims 1 to 5, wherein: in step S2, a circular region containing the optic disc is found by Hough transform circle detection; and a square region containing the optic disc and the optic cup is cut out of the fundus picture, centered on the center of the circular region and with a preset side length, as the initial sample picture.
7. The method for processing a fundus picture based on a neural network according to claim 6, further comprising the steps of:
performing image enhancement on the training sample picture by using a Laplacian operator;
and carrying out simulated red-free light treatment on the training sample picture.
CN201811330427.2A 2018-11-09 2018-11-09 Method for processing fundus picture based on neural network Active CN109215039B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811330427.2A CN109215039B (en) 2018-11-09 2018-11-09 Method for processing fundus picture based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811330427.2A CN109215039B (en) 2018-11-09 2018-11-09 Method for processing fundus picture based on neural network

Publications (2)

Publication Number Publication Date
CN109215039A CN109215039A (en) 2019-01-15
CN109215039B true CN109215039B (en) 2022-02-01

Family

ID=64995660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811330427.2A Active CN109215039B (en) 2018-11-09 2018-11-09 Method for processing fundus picture based on neural network

Country Status (1)

Country Link
CN (1) CN109215039B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110309778A (en) * 2019-07-01 2019-10-08 江苏医像信息技术有限公司 Eyeground figure personal identification method
CN117788407A (en) * 2019-12-04 2024-03-29 深圳硅基智能科技有限公司 Training method for glaucoma image feature extraction based on artificial neural network
CN111863241B (en) * 2020-07-10 2023-06-30 北京化工大学 Fundus imaging classification system based on integrated deep learning
CN112651921B (en) * 2020-09-11 2022-05-03 浙江大学 Glaucoma visual field data region extraction method based on deep learning

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764342A (en) * 2018-05-29 2018-11-06 广东技术师范学院 Semantic segmentation method for the optic disc and optic cup in fundus images

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009139722A1 (en) * 2008-05-14 2009-11-19 Agency For Science, Technology And Research Automatic cup-to-disc ratio measurement system
CN106650596A (en) * 2016-10-10 2017-05-10 北京新皓然软件技术有限责任公司 Fundus image analysis method, device and system
EP3424406A1 (en) * 2016-11-22 2019-01-09 Delphinium Clinic Ltd. Method and system for classifying optic nerve head
CN108520522A (en) * 2017-12-31 2018-09-11 南京航空航天大学 Retinal fundus images dividing method based on the full convolutional neural networks of depth
CN108665447B (en) * 2018-04-20 2021-07-30 浙江大学 Glaucoma image detection method based on fundus photography deep learning
CN108717868A (en) * 2018-04-26 2018-10-30 博众精工科技股份有限公司 Glaucoma eye fundus image screening method based on deep learning and system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764342A (en) * 2018-05-29 2018-11-06 广东技术师范学院 Semantic segmentation method for the optic disc and optic cup in fundus images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Measurement of optical cup-to-disc ratio in fundus images for glaucoma screening; Hanan Alghmdi et al.; 2015 International Workshop on Computational Intelligence for Multimedia Understanding (IWCIM); 2015-12-07; full text *
Research on automatic recognition and analysis methods for glaucoma fundus images; Gan Nengqiang; China Master's Theses Full-text Database, Information Science and Technology; 2009-06-15; vol. 2009, no. 6; full text *

Also Published As

Publication number Publication date
CN109215039A (en) 2019-01-15

Similar Documents

Publication Publication Date Title
CN109215039B (en) Method for processing fundus picture based on neural network
CN109784337B (en) Method and device for identifying yellow spot area and computer readable storage medium
KR101977645B1 (en) Eye image analysis method
CN111493814B (en) Recognition system for fundus lesions
Cao et al. Hierarchical method for cataract grading based on retinal images using improved Haar wavelet
Gao et al. Automatic feature learning to grade nuclear cataracts based on deep learning
Abràmoff et al. Retinal imaging and image analysis
Niemeijer et al. Automatic detection of red lesions in digital color fundus photographs
JP6469387B2 (en) Fundus analyzer
Abramoff et al. The automatic detection of the optic disc location in retinal images using optic disc location regression
CN113768460B (en) Fundus image analysis system, fundus image analysis method and electronic equipment
CN104102899B (en) Retinal vessel recognition methods and device
CN109671049B (en) Medical image processing method, system, equipment and storage medium
CN113768461A (en) Fundus image analysis method and system and electronic equipment
Dogan et al. Objectifying the conjunctival provocation test: photography-based rating and digital analysis
Giancardo Automated fundus images analysis techniques to screen retinal diseases in diabetic patients
Ramaswamy et al. A study and comparison of automated techniques for exudate detection using digital fundus images of human eye: a review for early identification of diabetic retinopathy
CN113870270A (en) Eyeground image cup and optic disc segmentation method under unified framework
CN111402246A (en) Eye ground image classification method based on combined network
Brancati et al. Segmentation of pigment signs in fundus images for retinitis pigmentosa analysis by using deep learning
Raman et al. The effects of spatial resolution on an automated diabetic retinopathy screening system's performance in detecting microaneurysms for diabetic retinopathy
Xu Simultaneous automatic detection of optic disc and fovea
Hussein et al. Convolutional Neural Network in Classifying Three Stages of Age-Related Macula Degeneration
JP2018198968A (en) Fundus analysis device
Hussein et al. Automatic classification of AMD in retinal images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant