CN112651921B - Glaucoma visual field data region extraction method based on deep learning - Google Patents


Info

Publication number
CN112651921B
CN112651921B (application CN202010953155.2A)
Authority
CN
China
Prior art keywords
visual field
pixel
glaucoma
value
array
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010953155.2A
Other languages
Chinese (zh)
Other versions
CN112651921A (en)
Inventor
叶娟
金凯
龚薇
斯科
黄笑羚
竺家柱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN202010953155.2A
Publication of CN112651921A
Application granted
Publication of CN112651921B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic

Abstract

The invention discloses a deep-learning-based method for extracting regions from glaucoma visual field data. A glaucoma visual field is acquired to obtain a visual field map; a one-dimensional array of total deviation values is obtained from the map and, after processing, converted into a gray-level pixel array. The pixel array is arranged and the image is filled in, converting it into an easily recognized gray-scale image that restores the visual field distribution characteristics. The gray-scale image is input into a pre-trained convolutional neural network, which extracts the effective data region of the glaucoma visual field. The invention makes effective use of the data in the visual field map, converting it into a more intuitive image that contains the real visual field information and emphasizes regional features; the result can be used in many scenarios and facilitates further observation of glaucoma.

Description

Glaucoma visual field data region extraction method based on deep learning
Technical Field
The invention relates to an ophthalmologic image data processing method, in particular to a glaucoma visual field data region extraction method based on deep learning.
Background
Glaucoma is a group of eye diseases characterized by pathological increases in intraocular pressure that threaten and damage the optic nerve and its visual pathways, leading to characteristic impairment of visual function. The World Health Organization indicates that glaucoma is the second leading cause of blindness worldwide. Glaucoma is also a psychosomatic disease: patients have a reduced quality of life and limited daily function (such as driving). Although glaucoma cannot be cured, appropriate treatment can delay the progression of the disease and prevent serious consequences such as blindness. Treatment is complicated by the fact that different regimens are selected depending on the type and severity of the glaucoma.
In the diagnosis and evaluation of glaucoma, the visual field map is a very important index for judging impairment of visual function. Diseases involving the visual pathway are accompanied by loss of visual function, and abnormal results, along with the pathological changes associated with them, can be discovered in time from the visual field data maps produced by visual field examination. In the prior art, however, there are few methods for processing visual field data.
At present, continuous innovation in computer technology has brought artificial intelligence into wide use. With its development in medical technology, artificial intelligence is increasingly involved in auxiliary diagnosis and disease classification. In existing artificial intelligence work on glaucoma, fundus photographs are generally input into a deep learning network for auxiliary diagnosis or grading. Glaucoma, however, is a complex disease, and visual field images should also be considered as a criterion of disease progression in order to fully assess its severity.
Disclosure of Invention
To address the shortcomings of the prior art described above, the invention provides a deep-learning-based method for extracting regions from glaucoma visual field data, which can extract an image that contains the real visual field information and emphasizes visual field characteristics.
Specifically, the technical scheme adopted by the invention is as follows:
S1, collecting the glaucoma visual field to obtain a visual field map, obtaining a one-dimensional array of total deviation values from the visual field map, and, after processing, converting the one-dimensional array into a gray-level pixel array;
(The invention operates on visual field images acquired from eyes already known to have glaucoma; it is not itself a diagnostic process.)
S2, arranging the pixel array and filling the image, converting the pixel array into an easily recognized gray-scale image and restoring the visual field distribution characteristics;
S3, inputting the gray-scale image into a pre-trained convolutional neural network for processing, and extracting the effective data region of the glaucoma visual field.
The one-dimensional array of total deviation values refers to the array formed by the specific values in a total deviation map (Total Deviation) obtained with a Humphrey perimeter, or by the specific values in a comparison map (Comparison) obtained with an Octopus perimeter.
Acquiring the total deviation values from the visual field map comprises reading the specific numerical values in the corresponding images of the visual field map using optical character recognition (OCR).
In S1, converting the one-dimensional array into a gray-level pixel array comprises the following processing:
Unify the sign of each value in the one-dimensional array, so that after processing the values are either all positive (absolute values) or all negative. Threshold the unified values against a preset minimum normal deviation value, assigning any value inside the minimum-normal boundary to that value: if the values are all positive, every value smaller than the minimum normal deviation value is set to it; if the values are all negative, every value larger than the minimum normal deviation value is set to it. Then, according to the preset minimum normal deviation value and maximum deviation value, linearly map each thresholded value to [0, 255], and combine the resulting standard gray levels into a gray-level pixel array whose k-th element is z_k; the maximum value of k is the total number of values in the one-dimensional array.
The maximum deviation value is the deviation value corresponding to a visual field point with zero light sensitivity, i.e. a blind point region. The minimum normal deviation value is the threshold such that, when a visual field deviation is smaller in magnitude than this value, the visual field is considered normal, without any defect.
S2 specifically comprises: construct a blank square picture and establish its inscribed circle, which represents the circular visual field of the perimeter; set the gray levels of all pixel points of the square picture outside the inscribed circle to zero. Assign each value in the gray-level pixel array to the gray level of the pixel point in the inscribed circle corresponding to the same position in the original visual field map. For a pixel point in the inscribed circle that has no corresponding value in the original visual field map, assign it the value z_k of the nearest original pixel point; if two original pixel points are equidistant, take the one with the smaller index (for example, if the distances to z_k and z_{k+1} are equal, take z_k). Assign a gray level of zero (black) to the pixel points of the inscribed circle corresponding to the physiological blind spot position in the original visual field map. Finally, take the square picture as the gray-scale image.
The pixel points in visual field maps obtained with different perimeters are arranged in different ways; the pixel points are placed in the inscribed circle of the square picture according to their original positions, and the remaining pixels are then filled in and assigned values.
The convolutional neural network in S3 comprises residual basic blocks, each of which adds batch normalization (Batch Normalization) processing, and the 1000 nodes output by the last fully connected layer are changed to 5 nodes. The processed gray-scale image is input into the convolutional neural network for training, so that the output of the deep learning network agrees with the labeled region, achieving the purpose of region extraction.
Therefore, the invention obtains an accurate and effective gray-scale feature image by specially processing the result of the visual field acquisition, and then inputs this feature image into a convolutional neural network for processing and judgment to obtain the extraction and recognition result.
The convolutional neural network is mainly formed by sequentially connecting an input layer, an input convolutional layer, a max pooling layer, 16 residual basic blocks, an average pooling layer and a fully connected layer. Each residual basic block is mainly formed by sequentially connecting a first batch-rectification module, a first convolutional layer, a first dropout layer, a second batch-rectification module and a second convolutional layer, where a batch-rectification module consists of batch normalization followed by a linear rectification (ReLU) function. The output of the second convolutional layer and the input of the residual basic block are summed by an addition layer to form the block output; alternatively, the input of the residual basic block first passes through a third convolutional layer and is then summed with the output of the second convolutional layer by the addition layer to form the block output.
The final average pooling layer averages the feature maps output by the 16 residual basic blocks, extracting key features and reducing the dimensionality of the data. The fully connected layer acts as a classifier, mapping the features output by the average pooling layer into the sample space.
By applying this technical scheme, the method acquires the total deviation values from the visual field image and processes and converts them into gray-level pixel points; it then arranges the points and fills in data by a specific method, generating an image that restores and emphasizes the visual field distribution characteristics; finally, it processes the image with a deep learning network to output the region feature extraction result.
The method makes effective use of the data in the visual field image: it preprocesses the data containing the visual field information, then arranges the pixel points and expands the data by the Thiessen polygon method, converting them into a more intuitive image that contains the real visual field information and emphasizes regional features. This image is input into the convolutional neural network, which outputs the regional features of the visual field distribution; the result can be used in many scenarios and facilitates further observation of glaucoma.
Compared with the prior art, the invention has the advantages that:
the invention effectively utilizes the data in the visual field image, converts the data into an image which is more intuitive, contains real visual field information and strengthens the regional characteristics, and is beneficial to artificial intelligence for identification.
The visual field data processing method can be applied universally to the different forms of visual field image produced by two different devices, yielding gray-scale images in which glaucoma features are easier to distinguish; these images can be used in many scenarios, facilitating further observation of glaucoma.
Drawings
FIG. 1 is a general flow diagram of the process of the present invention;
FIG. 2 is a schematic flow chart of the arrangement and data filling in the method of the present invention.
FIG. 3 is a schematic diagram of the residual basic block structure of the convolutional neural network in an embodiment of the present invention, wherein (a) shows the case in which the numbers of input and output channels of the residual basic block are the same, and (b) the case in which they differ.
Detailed Description
In order to more clearly illustrate the technical means and the final benefits achieved by the present invention, the following describes the glaucoma visual field data processing and deep learning based classification method in detail with reference to the embodiments and the accompanying drawings.
It should be particularly noted that the described embodiments are only a few embodiments of the invention, and not all embodiments. All other embodiments obtained by a person skilled in the art without making any inventive step shall fall within the scope of protection of the present invention.
The embodiment and the specific implementation process of the invention are as follows:
step 1: 13231 field data from the U.S. Harvard well-known database were collected in a one-dimensional array format from 7300 patients, measured from 10 months to 4 months of 2012 in 2006. Cases with false negative rates above 20% and loss of fixation rates above 33% in the visual field report have been removed, and only the most recent visual field data for each eye is left. The data were collected using 3 Humphrey perimetry analyzers (HFA-II, Carl Zeiss Meditec AG, Jena, Germany) using the protocol Swedish Interactive threshold Algorithm Standard 24-2. All view data information is converted to "rightEye mode ", that is, the data of the left eye is horizontally flipped along the vertical central axis, and the collected data are specific values in the total deviation map of the visual field map, and the 54 value ranges are [ -38, 38 [ -38 [ ]]Is composed of the following groups, using the vector x ═ x1,x2,...x54]And (4) showing.
Step 2: after the vector x is obtained, the following preprocessing steps are carried out on the vector x:
step 2-1: in this embodiment, the minimum normal deviation value is-4 dB, which is a negative value, so that all values greater than that are converted into the minimum normal deviation value, that is, x is thresholded: y is min (x, -4), and a corresponding vector y is obtained;
step 2-2: the maximum deviation value (negative value, in this embodiment, minus 38dB) preset by the perimeter and the minimum normal deviation value, namely the decibel value [ -38, -4] are linearly mapped to the 8-bit gray level of [0,255 ], so as to obtain the standard gray level corresponding to each decibel value;
the specific values of the minimum normal deviation value and the preset maximum deviation value of the perimeter in step 2-1 and step 2-2 of the present example are obtained by consulting literature or referencing machine parameters of the specific perimeter.
Step 2-3: map each decibel value in the vector y to its standard gray level one by one, giving the gray-level pixel array as the numerical vector z = (y + 38)/(-4 + 38) × 255.
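The thresholding and linear mapping of steps 2-1 to 2-3 can be sketched in a few lines. This is a minimal illustration, assuming the Humphrey 24-2 total deviation values (in dB) are already available as a plain Python list; the constants -4 dB and -38 dB are the ones used in this embodiment.

```python
# Minimal sketch of preprocessing steps 2-1 to 2-3 (assumed inputs: a list
# of total deviation values in dB). Constants follow this embodiment.
MIN_NORMAL = -4   # deviations above this are treated as normal
MAX_DEV = -38     # deviation of a fully blind (zero-sensitivity) point

def to_gray_levels(x):
    """Threshold each total-deviation value and map [-38, -4] dB to [0, 255]."""
    # Step 2-1: thresholding, y = min(x, -4), element-wise
    y = [min(v, MIN_NORMAL) for v in x]
    # Steps 2-2/2-3: linear mapping z = (y + 38) / (-4 + 38) * 255
    return [round((v - MAX_DEV) / (MIN_NORMAL - MAX_DEV) * 255) for v in y]

print(to_gray_levels([-38, -20, -4, 0, 10]))  # [0, 135, 255, 255, 255]
```

A fully blind point (-38 dB) maps to gray level 0 and a normal point (-4 dB or better) to 255, so darker regions of the resulting image correspond to deeper visual field defects.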
Step 3: arrange the obtained gray-level pixels using the Thiessen polygon method and fill in the rest of the image, obtaining an image that contains the real visual field information and emphasizes regional features.
Fig. 2 is a schematic flow chart of the Thiessen polygon method, i.e. of step 3 in this example. The specific steps are as follows:
step 3-1: a blank square picture of size 224 x 224 is constructed, the inscribed circle of the picture representing the circular field of view of the perimeter, the grey levels outside the inscribed circle being set to zero.
Step 3-2: assign each value in the gray-level pixel array to the gray level of the pixel point in the inscribed circle corresponding to the same position in the original visual field map.
Step 3-3: for a pixel with coordinates (i, j) that was not measured in the original visual field test, i.e. does not appear in the visual field report, as for any other pixel in the inscribed circle of the square picture, the gray value g(i, j) depends on the nearest measured pixel point z_k (k = 1, 2, ..., 54). As shown in FIG. 2, the black box marks a pixel point whose nearest data point is the 24th, so its gray value is taken as z_24. When a point is equidistant from, and nearest to, two points of the pixel array z, its gray value takes the z_k with the smaller index in z; that is, if (i, j) is equidistant from and nearest to z_k and z_{k+n} (n = 1, 2, ..., 53), then g(i, j) = z_k.
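The nearest-neighbor rule of step 3-3, under which each unmeasured pixel inherits the value of its closest measured point (partitioning the circle into Thiessen/Voronoi cells), can be sketched on a toy grid. The 8 × 8 size and the anchor positions below are made up for illustration; the embodiment uses a 224 × 224 picture with the 54 measured points at their original visual field positions.

```python
# Toy sketch of the Thiessen-polygon (nearest-neighbor) fill of step 3.
# `anchors` maps a few hypothetical measured positions to gray values z_k.
def fill_field(size, anchors):
    cx = cy = (size - 1) / 2.0      # center of the inscribed circle
    r = size / 2.0                  # its radius
    pts = sorted(anchors)           # iterate anchors in a fixed order
    img = [[0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            if (i - cy) ** 2 + (j - cx) ** 2 > r ** 2:
                continue            # outside the circle: gray level stays zero
            # nearest anchor; min() returns the first minimum, so ties
            # resolve toward the earlier (smaller-index) anchor
            best = min(pts, key=lambda p: (p[0] - i) ** 2 + (p[1] - j) ** 2)
            img[i][j] = anchors[best]
    return img

img = fill_field(8, {(2, 2): 40, (2, 5): 200, (5, 3): 120})
```

Tie-breaking toward the smaller index is obtained here by iterating the anchors in sorted order, since `min` returns the first minimal element.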
Step 4: manually divide the obtained visual field gray-scale images according to their different region patterns. Pictures that could not be identified and extracted were deleted, leaving 12401 pictures.
Step 5: input the manually divided region pattern categories as labels, together with the processed visual field gray-scale images, into the convolutional neural network for training, so that it outputs the expected labeling result. The convolutional neural network comprises a multi-layer neural network structure.
The convolutional neural network is mainly formed by sequentially connecting an input layer, an input convolutional layer, a max pooling layer, 16 residual basic blocks, an average pooling layer and a fully connected layer.
The specific structure of the residual basic block is shown in fig. 3, and the residual basic block is divided into two types.
When the numbers of input and output channels of the residual basic block are the same, its structure is as shown in fig. 3(a). The input passes through batch normalization and a linear rectification function, is then convolved by a 3 × 3 convolution kernel with stride 1, leaving the number of channels C unchanged, and then passes through a dropout layer, after which the batch normalization, linear rectification and convolution steps are repeated; finally, the original input is added to give the output.
When the numbers of input and output channels of the residual basic block differ, its structure is as shown in fig. 3(b). The input first passes through batch normalization and a linear rectification function, then a 3 × 3 convolution with stride 2 that simultaneously doubles the number of channels; after a dropout layer it again passes through batch normalization, a linear rectification function and a convolution with stride 1, keeping the number of channels unchanged. In parallel, the original input undergoes a 3 × 3 convolution with stride 2 that doubles the number of channels, and the two results are added to give the output.
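Under these rules, the spatial size and channel count entering the average pooling layer can be traced layer by layer. The sketch below assumes ResNet-style parameters for the parts the description leaves open (a 7 × 7 stride-2 input convolution, 3 × 3 stride-2 max pooling, 64 initial channels, and a stride-2 channel-doubling block every 4 of the 16 residual blocks); only the in-block kernel sizes and strides are stated in the text.

```python
# Hedged sketch: trace feature-map size and channels through the network.
# Input conv, pooling, and channel schedule are assumptions in the style of
# a standard ResNet; the patent states only the block count and in-block
# kernel sizes/strides.
def conv_out(size, kernel, stride, pad):
    """Standard convolution/pooling output-size formula."""
    return (size + 2 * pad - kernel) // stride + 1

size, channels = 224, 64
size = conv_out(size, 7, 2, 3)          # assumed 7x7 stride-2 input conv
size = conv_out(size, 3, 2, 1)          # assumed 3x3 stride-2 max pooling
for block in range(16):
    if block % 4 == 0 and block > 0:    # down-sampling block, fig. 3(b):
        size = conv_out(size, 3, 2, 1)  # 3x3 conv with stride 2 ...
        channels *= 2                   # ... doubles the channel count
    # identity blocks, fig. 3(a), keep size and channels unchanged
print(size, channels)  # spatial size and channels entering the average pool
```

With these assumptions a 224 × 224 input reaches the average pooling layer as a small multi-channel feature map, which the pooling then collapses before the 5-node fully connected layer.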
Step 6: input the image into the trained convolutional neural network to obtain an output value, and judge the region pattern of the glaucoma visual field image from that value.
The method obtains an image containing real visual field information and enhanced features by processing glaucoma visual field data. The image is useful for distinguishing regional features of the visual field and is utilized by a deep learning network to extract and output regional patterns of glaucomatous visual field. The output can be used for visual field result analysis and evaluation of the severity of the glaucoma visual field defect.
In this embodiment, 12401 visual field images labeled by ophthalmologists were used for training; the classification accuracy of the deep learning network output is 88%, with a Macro F1 of 0.793 and a Micro F1 of 0.879 under the multi-label evaluation index F1-score. The glaucoma visual field data processing and deep-learning region pattern extraction and classification system established by this method can be effectively applied in fields such as data processing and clinical evaluation.
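For reference, Macro F1 averages the per-class F1 scores so every class counts equally, while Micro F1 pools the counts first so frequent classes dominate. A minimal sketch with made-up per-class counts:

```python
# Sketch of Macro vs Micro F1 from per-class counts (tp, fp, fn).
# The counts below are invented purely for illustration.
def f1(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

counts = [(50, 5, 10), (30, 10, 5), (5, 2, 8)]  # one tuple per class

# Macro F1: mean of per-class F1 scores (each class weighs equally)
macro = sum(f1(*c) for c in counts) / len(counts)

# Micro F1: pool all counts first, then compute one global F1
TP = sum(c[0] for c in counts)
FP = sum(c[1] for c in counts)
FN = sum(c[2] for c in counts)
micro = f1(TP, FP, FN)

print(round(macro, 3), round(micro, 3))
```

Because the rare third class scores poorly, Macro F1 comes out well below Micro F1 here, which mirrors why the embodiment reports both numbers.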

Claims (1)

1. A glaucoma visual field data region extraction method based on deep learning is characterized by comprising the following steps:
S1, collecting the glaucoma visual field to obtain a visual field map, obtaining a one-dimensional array of total deviation values from the visual field map, and, after processing, converting the one-dimensional array into a gray-level pixel array;
S2, arranging the pixel array and filling the image, converting the pixel array into an easily recognized gray-scale image and restoring the visual field distribution characteristics;
s3, inputting the gray level image into a convolutional neural network which is trained in advance for processing, and extracting and obtaining an effective data area of the glaucoma visual field;
the one-dimensional array of the overall deviation value is an array formed by all specific values in an overall deviation graph (Total deviation) obtained by a Humphrey perimeter, or an array formed by all specific values in a Comparison graph (Comparison) obtained by an Octopus perimeter;
in S1, converting the one-dimensional array into a gray-level pixel array after processing comprises: unifying the sign of each value in the one-dimensional array; thresholding the unified values against a preset minimum normal deviation value, assigning values inside the minimum-normal boundary to the minimum normal deviation value; then, according to the preset minimum normal deviation value and maximum deviation value, linearly mapping each thresholded value to [0, 255], and combining the resulting standard gray levels into a gray-level pixel array whose k-th element is z_k, the maximum value of k being the total number of values in the one-dimensional array;
the S2 specifically includes: establishing a blank square picture, establishing an inscribed circle in the square picture, wherein the inscribed circle represents the circular visual field of a perimeter, the gray levels of pixel points in other regions of the square picture outside the inscribed circle are all set to be zero, assigning each numerical value in a pixel array with gray levels to the gray levels of the pixel points in the inscribed circle and the pixel points which respectively correspond to the same positions in the original visual field picture, assigning the pixel point to the value Zk of the pixel point which is in the inscribed circle and is closest to the same position of the original visual field picture as the gray level of the pixel point in the inscribed circle, and assigning the gray level of the pixel point which is in the same position of the inscribed circle and corresponds to the physiological blind point position in the original visual field picture to be zero, and finally taking the square picture as a gray level image;
the convolutional neural network in S3 includes residual basic blocks, each of which is added with a Batch Normalization layer (Batch Normalization) process, and changes 1000 nodes output by the last full link layer into 5 nodes.
CN202010953155.2A 2020-09-11 2020-09-11 Glaucoma visual field data region extraction method based on deep learning Active CN112651921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010953155.2A CN112651921B (en) 2020-09-11 2020-09-11 Glaucoma visual field data region extraction method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010953155.2A CN112651921B (en) 2020-09-11 2020-09-11 Glaucoma visual field data region extraction method based on deep learning

Publications (2)

Publication Number Publication Date
CN112651921A (en) 2021-04-13
CN112651921B (en) 2022-05-03

Family

ID=75346408

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010953155.2A Active CN112651921B (en) 2020-09-11 2020-09-11 Glaucoma visual field data region extraction method based on deep learning

Country Status (1)

Country Link
CN (1) CN112651921B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229545A (en) * 2017-12-22 2018-06-29 北京市商汤科技开发有限公司 The method, apparatus and electronic equipment of diagnosis of glaucoma
CN109215039A (en) * 2018-11-09 2019-01-15 浙江大学常州工业技术研究院 A kind of processing method of eyeground picture neural network based
CN110619332A (en) * 2019-08-13 2019-12-27 中国科学院深圳先进技术研究院 Data processing method, device and equipment based on visual field inspection report
CN111179226A (en) * 2019-12-14 2020-05-19 中国科学院深圳先进技术研究院 Visual field map identification method and device and computer storage medium
CN111340778A (en) * 2020-02-25 2020-06-26 中国科学院深圳先进技术研究院 Glaucoma image processing method and equipment
WO2020176039A1 (en) * 2019-02-26 2020-09-03 Ngee Ann Polytechnic System and method for classifying eye images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4057215A1 (en) * 2013-10-22 2022-09-14 Eyenuk, Inc. Systems and methods for automated analysis of retinal images


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Glaucoma Monitoring Using Manifold Learning and Unsupervised Clustering; Siamak Yousefi et al.; 2018 International Conference on Image and Vision Computing New Zealand (IVCNZ); 2019-02-07 *
Human Visual System-Based Fundus Image Quality Assessment of Portable Fundus Camera Photographs; Shaoze Wang et al.; IEEE Transactions on Medical Imaging; 2015-12-08; vol. 35, no. 4 *
Research on Intelligent Diagnosis of Glaucoma Based on Deep Learning; Qin Yunshu; China Master's Theses Full-text Database (Medicine and Health Sciences); 2020-02; no. 2 *

Also Published As

Publication number Publication date
CN112651921A (en) 2021-04-13

Similar Documents

Publication Publication Date Title
CN109493954B (en) SD-OCT image retinopathy detection system based on category distinguishing and positioning
AU2020103938A4 (en) A classification method of diabetic retinopathy grade based on deep learning
Narasimha-Iyer et al. Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy
Khan et al. Cataract detection using convolutional neural network with VGG-19 model
CN108095683A (en) The method and apparatus of processing eye fundus image based on deep learning
US20060257031A1 (en) Automatic detection of red lesions in digital color fundus photographs
CN111986211A (en) Deep learning-based ophthalmic ultrasonic automatic screening method and system
CN112837805B (en) Eyelid topological morphology feature extraction method based on deep learning
CN112446860B (en) Automatic screening method for diabetic macular edema based on transfer learning
CN112233087A (en) Artificial intelligence-based ophthalmic ultrasonic disease diagnosis method and system
Chaudhary et al. Detection of diabetic retinopathy using machine learning algorithm
CN113610842B (en) OCT image retina detachment and splitting automatic segmentation method based on CAS-Net
Sangeethaa Presumptive discerning of the severity level of glaucoma through clinical fundus images using hybrid PolyNet
CN117338234A (en) Diopter and vision joint detection method
CN115937085B (en) Nuclear cataract image processing method based on neural network learning
CN112651921B (en) Glaucoma visual field data region extraction method based on deep learning
CN112634221A (en) Image and depth-based cornea level identification and lesion positioning method and system
CN111402246A (en) Eye ground image classification method based on combined network
CN111291706B (en) Retina image optic disc positioning method
Baharlouei et al. Detection of retinal abnormalities in OCT images using wavelet scattering network
Arvind et al. Deep learning regression-based retinal layer segmentation process for early diagnosis of retinal anamolies and secure data transmission through thingspeak
Kumari et al. Automated process for retinal image segmentation and classification via deep learning based cnn model
Vignesh et al. Detection of Diabetic Retinopathy Image Analysis using Convolution Graph Neural Network
CN112700409A (en) Automatic retinal microaneurysm detection method and imaging method
Doan et al. Implementation of complete glaucoma diagnostic system using machine learning and retinal fundus image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant