CN111861977A - Feature extraction method of anterior segment tomogram based on machine vision - Google Patents
Feature extraction method of anterior segment tomogram based on machine vision
- Publication number
- CN111861977A (application CN202010461475.6A)
- Authority
- CN
- China
- Prior art keywords
- cornea
- anterior segment
- gaussian
- iris
- machine vision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000605 extraction Methods 0.000 title claims abstract description 9
- 210000004087 cornea Anatomy 0.000 claims abstract description 41
- 238000000034 method Methods 0.000 claims abstract description 28
- 210000000695 crystalline len Anatomy 0.000 claims abstract description 19
- 238000005286 illumination Methods 0.000 claims abstract description 17
- 230000008569 process Effects 0.000 claims abstract description 10
- 238000011430 maximum method Methods 0.000 claims abstract description 8
- 238000001914 filtration Methods 0.000 claims abstract description 5
- 241000282414 Homo sapiens Species 0.000 claims description 5
- 230000002708 enhancing effect Effects 0.000 claims description 3
- 230000000717 retained effect Effects 0.000 claims description 2
- 238000003384 imaging method Methods 0.000 abstract description 5
- 230000000694 effects Effects 0.000 abstract description 4
- 238000010586 diagram Methods 0.000 description 5
- 238000012545 processing Methods 0.000 description 3
- 238000001514 detection method Methods 0.000 description 2
- 230000007704 transition Effects 0.000 description 2
- 244000208734 Pisonia aculeata Species 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000009499 grossing Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 210000001747 pupil Anatomy 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 238000011426 transformation method Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Eye Examination Apparatus (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a feature extraction method for machine vision-based anterior segment tomograms, which comprises the following steps: acquiring an anterior segment tomogram under low illumination; enhancing the contrast of the anterior segment tomogram with the Retinex algorithm; applying Gaussian filtering to remove the noise introduced by the enhancement step; finding potential cornea regions through binarization combined with blob shape analysis; coarsely locating the anterior and posterior corneal surface edges in the potential cornea region by the gradient maximum method; determining sub-pixel precision boundaries of the anterior and posterior corneal surfaces by a Gaussian fitting positioning method; and, from the obtained sub-pixel precision boundaries of the anterior and posterior corneal surfaces, finding the initial points of the iris and the crystalline lens and obtaining accurate boundary values of the anterior and posterior corneal surfaces, the anterior lens surface and the anterior iris surface through tracking. The invention has the following advantages and effects: images can be processed with high precision even in the low-illumination imaging mode, which greatly improves patient cooperation and comfort.
Description
Technical Field
The invention relates to the technical field of ophthalmic examination, and in particular to a feature extraction method for machine vision-based anterior segment tomographic images.
Background
When a slit lamp or related equipment is used to acquire anterior segment tomograms, a high-illumination light source is usually directed at the human eye in order to guarantee image quality. Some people who are sensitive to brightness cannot cooperate through the whole examination, an eyelid speculum may be required, and the eye cannot stay concentrated on the fixation point during the test, which reduces measurement precision. Therefore, if the intensity of the illumination light source can be reduced, patient discomfort is reduced; and if the image can still be processed with high precision in the low-illumination imaging mode, patient cooperation and comfort are greatly improved.
Disclosure of Invention
The invention aims to provide a machine vision-based feature extraction method for anterior segment tomographic images that solves the problems described in the background art.
The technical purpose of the invention is achieved by the following technical scheme: a feature extraction method for machine vision-based anterior segment tomographic images comprises the following steps:
step S1, acquiring an anterior segment tomogram under low illumination;
step S2, enhancing the contrast of the anterior segment tomogram using the Retinex algorithm;
step S3, applying Gaussian filtering to remove the noise introduced by step S2;
step S4, finding potential cornea regions through binarization combined with blob shape analysis;
step S5, coarsely locating the anterior and posterior corneal surface edges in the potential cornea region by the gradient maximum method;
step S6, determining sub-pixel precision boundaries of the anterior and posterior corneal surfaces by the Gaussian fitting positioning method;
and step S7, finding the initial points of the iris and the crystalline lens from the obtained sub-pixel precision boundaries of the anterior and posterior corneal surfaces, and obtaining accurate boundary values of the anterior and posterior corneal surfaces, the anterior lens surface and the anterior iris surface through tracking.
Further, the step S2 is specifically:
the attribute information of the anterior segment tomographic image itself is contained in the reflected light component R(x, y); therefore the ambient light component L(x, y), which affects human vision, is removed, and the reflected light component R(x, y) is retained.
Further, the step S4 is specifically:
the anterior segment tomographic image mainly comprises four parts: a dark background, a darker crystalline lens region, a brighter corneal region and the brightest iris region; since the basic composition and the number of classes of the image are known, a K-means clustering algorithm is used to segment the four regions of background, crystalline lens, iris and cornea; Blob shape analysis is then used to determine whether the binarized, segmented region is accurate.
Further, the step S5 is specifically:
based on the segmented potential cornea region, pull lines are drawn point by point across the anterior and posterior corneal surfaces, and the point where the first derivative of the gray level along each pull line reaches its maximum is taken as the corneal boundary point on that line.
Further, the step S6 is specifically:
in the Gaussian fitting positioning method, after the coarse edge positions of the anterior and posterior corneal surfaces are obtained, a sequence of neighboring pixel gray values centered on each coarse edge position is extracted along the direction of the maximum gradient, and a Gaussian function is fitted to the gray-level gradient, refining the pixel-level position to sub-pixel accuracy; because the Gaussian fit suppresses singular points caused by noise, the precision is greatly improved; the fitted Gaussian function is:
f(x) = k · exp(−(x − μ)² / (2σ²))
in the formula: μ is the sub-pixel edge coordinate value, σ is the standard deviation of the Gaussian function, and k is the amplitude; the fitting process uses the least squares method to solve for the values of μ, σ and k.
The invention has the beneficial effects that:
the low-illumination imaging can greatly reduce the patient's non-adaptability, but brings the low-precision problem that the image contrast is low, and is difficult to process or marginal to process. The invention obviously enhances the image contrast and illumination equalization by utilizing the Retinex algorithm, and then ensures that the extraction of the cornea edge reaches the sub-pixel level precision by utilizing the positioning method based on Gaussian fitting, thereby providing a solid foundation for further analysis and processing of the anterior segment of the eye. Therefore, the invention can process the image with high precision under the low illumination imaging mode, and greatly improves the matching degree and comfort of the patient.
Drawings
FIG. 1 is a schematic flow chart of an embodiment;
FIG. 2 is a diagram of an embodiment of the Retinex algorithm;
FIG. 3 is a process of the Retinex algorithm in the embodiment;
FIG. 4 is a schematic diagram of a gradient maxima method in an embodiment;
FIG. 5 is a schematic diagram of determining the pixel-level boundary with corneal pull lines in the embodiment;
FIG. 6 is an example blurred edge model;
fig. 7 is a schematic diagram of finding initial points of the iris and the crystalline lens in the embodiment.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Referring to fig. 1, a feature extraction method for anterior segment tomograms based on machine vision includes the following steps:
step S1, acquiring an anterior segment tomogram under low illumination;
step S2, enhancing the contrast of the anterior segment tomogram using the Retinex algorithm;
step S3, applying Gaussian filtering to remove the noise introduced by step S2;
step S4, finding potential cornea regions through binarization combined with blob shape analysis;
step S5, coarsely locating the anterior and posterior corneal surface edges in the potential cornea region by the gradient maximum method;
step S6, determining sub-pixel precision boundaries of the anterior and posterior corneal surfaces by the Gaussian fitting positioning method;
and step S7, finding the initial points of the iris and the crystalline lens from the obtained sub-pixel precision boundaries of the anterior and posterior corneal surfaces, and obtaining accurate boundary values of the anterior and posterior corneal surfaces, the anterior lens surface and the anterior iris surface through tracking.
In step S1, the anterior segment tomogram can be acquired under low illumination using a slit lamp or related equipment.
In step S2, for low-illumination images, conventional spatial-domain and frequency-domain enhancement algorithms are relatively limited: each can only process certain types of images or enhance certain image characteristics. The Retinex-based image enhancement algorithm, by simulating the visual characteristics of human beings, eliminates the negative effects that illumination changes impose on an image. The basic idea of the Retinex enhancement algorithm is that the image to be enhanced, I(x, y), can be regarded as the pixel-by-pixel product of two parts, i.e. I(x, y) = L(x, y) × R(x, y), where R(x, y) is the reflected light component and L(x, y) is the ambient light component. The reflected light component R(x, y) corresponds to the image after the influence of illumination has been removed, while the ambient light component L(x, y) directly determines the dynamic range that the pixels in the image can reach. The schematic diagram is shown in fig. 2.
The basic idea of Retinex theory is to eliminate the ambient light component L(x, y), which affects human vision, from the original image by various transformations, while retaining the reflected light component R(x, y) as much as possible, because the attribute information of the anterior segment tomogram itself is contained in R(x, y). The typical processing flow of the Retinex algorithm, shown in fig. 3, is to apply a logarithmic transformation to the input image, estimate its illumination component, and obtain the reflection component, i.e. the desired enhanced image, through logarithmic and exponential operations.
In this embodiment, the Retinex algorithm may specifically adopt a single-scale SSR algorithm, a multi-scale MSR algorithm, a multi-scale MSRCR algorithm with color recovery, or a Retinex algorithm based on a variational framework.
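As an illustration only, the single-scale SSR variant mentioned above can be sketched in a few lines of Python with OpenCV and NumPy; the function name, the Gaussian scale sigma and the file path are illustrative assumptions rather than values specified by the patent:

```python
import cv2
import numpy as np

def single_scale_retinex(image, sigma=80.0):
    """Minimal single-scale Retinex (SSR) sketch for a grayscale tomogram.

    log R(x, y) = log I(x, y) - log L(x, y), where the ambient light
    component L(x, y) is estimated by Gaussian-blurring the input image.
    """
    img = image.astype(np.float64) + 1.0                  # avoid log(0)
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)   # estimate of L(x, y)
    log_reflectance = np.log(img) - np.log(illumination)
    # Stretch the reflectance estimate back to the 8-bit display range.
    return cv2.normalize(log_reflectance, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)

# Illustrative usage:
# tomogram = cv2.imread("anterior_segment.png", cv2.IMREAD_GRAYSCALE)
# enhanced = single_scale_retinex(tomogram, sigma=80.0)
```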
In step S3, the Gaussian filter is a linear smoothing filter whose weights follow the shape of the Gaussian function; it is particularly effective at suppressing noise with a normal distribution. For the anterior segment tomogram, the Gaussian filter convolves the image with a two-dimensional Gaussian kernel, slightly blurring the image so that fine detail and noise are filtered out, and yielding an anterior segment tomogram with a higher signal-to-noise ratio. The weights of the Gaussian filter depend mainly on the spatial positions of the pixels in the image: the closer a neighboring pixel is to the target pixel, the larger its weight; the farther away, the smaller its weight.
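A minimal sketch of this denoising step, continuing the Python/OpenCV example above (the kernel size and sigma are assumed, illustrative values):

```python
import cv2

def denoise(enhanced_tomogram):
    # 5x5 Gaussian kernel with sigma = 1.0; the weights fall off with distance
    # from the centre pixel, which suppresses the roughly normally distributed
    # noise left by the Retinex enhancement step.
    return cv2.GaussianBlur(enhanced_tomogram, (5, 5), 1.0)
```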
In step S4, the anterior segment tomographic image mainly comprises four parts: a dark background, a darker crystalline lens region, a lighter corneal region and the brightest iris region. Because the basic composition and the number of classes of the image are known, a K-means clustering algorithm is used to segment the four regions of background, crystalline lens, iris and cornea. The K-means algorithm requires the number of clusters and the initial cluster centers as input: the number of clusters is known to be four, and the initial cluster centers can be chosen by computing the gray-level histogram and taking its four peak values, after which the background, crystalline lens, iris and cornea regions are segmented.
Blob shape analysis is then used to determine whether the binarized, segmented corneal region is accurate. The corneal region has a distinctive shape, namely an elongated arc, and blob shape analysis can identify this elongated region, i.e. the cornea, by computing features of the region such as its area, compactness and various moments.
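A minimal sketch of this segmentation and blob-selection step, assuming the denoised grayscale image from the previous sketches; for brevity, OpenCV's built-in K-means initialisation is used instead of the histogram-peak initial centres described above, and all thresholds are illustrative:

```python
import cv2
import numpy as np

def find_potential_cornea(denoised):
    """Sketch: 4-class K-means on gray levels, then keep the elongated blob."""
    pixels = denoised.reshape(-1, 1).astype(np.float32)

    # Four clusters: background < lens < cornea < iris (by brightness).
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, 4, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(denoised.shape)

    # The cornea is modelled as the second-brightest cluster.
    order = np.argsort(centers.ravel())
    cornea_mask = np.uint8(labels == order[2]) * 255

    # Blob shape analysis: keep the largest elongated, arc-like component.
    contours, _ = cv2.findContours(cornea_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        area = cv2.contourArea(c)
        _, _, w, h = cv2.boundingRect(c)
        elongation = max(w, h) / max(1.0, min(w, h))
        if area > 500 and elongation > 3.0:        # illustrative thresholds
            if best is None or area > cv2.contourArea(best):
                best = c
    return cornea_mask, best
```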
In step S5, starting from the segmented potential corneal region, the corneal edge at pixel-level accuracy is further obtained by the gradient maximum method. In fig. 4, a is an ideal edge transition and b is the graph obtained by taking its first derivative; as can be seen from a and b in fig. 4, the ideal edge point is located at the maximum of the first derivative. In this patent, pull lines are drawn point by point across the anterior and posterior surfaces of the segmented cornea, and the point where the first derivative of the gray level along each pull line reaches its maximum is taken as the corneal boundary point on that line.
Specifically, as shown in fig. 5, the arc is the boundary of the binarized, segmented cornea in the y-axis direction, and the horizontal line is a pull line through a point of the corneal boundary in the x-axis direction; the true pixel-level corneal boundary is determined by finding the maximum of the first derivative along each pull line, point by point. The arrow indicates the moving direction of the pull line, i.e. the pull line moves along the y-axis.
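A minimal sketch of this pull-line search, assuming the denoised image and the binary cornea mask from the previous sketches; only the single strongest gradient per pull line is kept here, which simplifies the separate anterior/posterior-surface search, and the search margin is an assumed value:

```python
import numpy as np

def pixel_level_boundary(denoised, cornea_mask, margin=10):
    """Sketch of the gradient-maximum pull-line method (pixel-level edge)."""
    boundary_points = []
    rows, cols = denoised.shape
    for y in range(rows):                              # pull line steps along y
        xs = np.flatnonzero(cornea_mask[y, :])
        if xs.size == 0:
            continue                                   # line misses the cornea
        lo = max(int(xs.min()) - margin, 0)
        hi = min(int(xs.max()) + margin, cols - 1)
        if hi - lo < 2:
            continue
        profile = denoised[y, lo:hi + 1].astype(np.float64)
        gradient = np.gradient(profile)                # first derivative on the line
        boundary_points.append((lo + int(np.argmax(np.abs(gradient))), y))
    return boundary_points                             # list of (x, y) edge pixels
```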
In step S6, the edge obtained by the first-derivative maximum method in the previous step is only a pixel-level edge, and differential operators are very sensitive to noise and often produce false edges; these problems can be solved well by a sub-pixel positioning algorithm based on fitting a Gaussian curve in the gradient direction. A common edge model is the step model, shown as a in fig. 4; however, since the imaging process itself introduces blurring (illumination, CCD imaging and so on), a real edge resembles the convolution of an ideal step function with a Gaussian function, as shown in fig. 6, so the gray-level gradient across the edge follows a Gaussian model.
In the Gaussian fitting positioning method, after the coarse edge positions of the anterior and posterior corneal surfaces are obtained, a sequence of neighboring pixel gray values centered on each coarse edge position is extracted along the direction of the maximum gradient, and a Gaussian function is fitted to the gray-level gradient, refining the pixel-level position to sub-pixel accuracy; because the Gaussian fit suppresses singular points caused by noise, the precision is greatly improved. The fitted Gaussian function is:
f(x) = k · exp(−(x − μ)² / (2σ²))
In the formula: μ is the sub-pixel edge coordinate value, σ is the standard deviation of the Gaussian function, and k is the amplitude; the fitting process uses the least squares method to solve for the values of μ, σ and k.
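A minimal sketch of the sub-pixel refinement, assuming a gray-level profile along the maximum-gradient direction and the coarse (pixel-level) edge index from the previous step; SciPy's least-squares curve_fit is used, and the window size is an assumed value:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, mu, sigma, k):
    """Fitted model: k * exp(-(x - mu)**2 / (2 * sigma**2))."""
    return k * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

def subpixel_edge(profile, coarse_idx, half_window=4):
    """Sketch: refine a pixel-level edge position to sub-pixel accuracy."""
    gradient = np.abs(np.gradient(profile.astype(np.float64)))
    lo = max(coarse_idx - half_window, 0)
    hi = min(coarse_idx + half_window, len(profile) - 1)
    x = np.arange(lo, hi + 1, dtype=np.float64)
    y = gradient[lo:hi + 1]
    p0 = (float(coarse_idx), 1.0, float(y.max()))   # initial guess for mu, sigma, k
    (mu, sigma, k), _ = curve_fit(gaussian, x, y, p0=p0)
    return mu                                        # sub-pixel edge coordinate
```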
In step S7, once the fine corneal boundary has been found, the corneal edge is divided into four equal parts and three lines are drawn toward the iris and pupil (the directions indicated by the arrows in fig. 7); the gray values falling on each line are processed with the gradient maximum method to determine the initial points of the iris (which is divided into upper and lower parts by the crystalline lens) and of the crystalline lens.
Next, boundary tracking at pixel-level accuracy is performed with the gradient maximum method, taking the initial points of the upper iris, the lower iris and the crystalline lens as the centers. If sub-pixel accuracy is required, the positions can be further refined with Gaussian fitting.
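A minimal sketch of the pixel-level tracking stage, assuming an initial iris or lens point has already been located by the gradient maximum method as described above; the step size, search window and point count are illustrative parameters:

```python
import numpy as np

def track_boundary(denoised, start_xy, step=1, search_half=5, max_points=300):
    """Sketch: follow a boundary column by column from an initial point."""
    x, y = start_xy
    points = [(x, y)]
    rows, cols = denoised.shape
    for _ in range(max_points):
        x += step
        if not 0 <= x < cols:
            break
        lo = max(y - search_half, 0)
        hi = min(y + search_half, rows - 1)
        profile = denoised[lo:hi + 1, x].astype(np.float64)
        if profile.size < 3:
            break
        gradient = np.gradient(profile)
        y = lo + int(np.argmax(np.abs(gradient)))    # follow the strongest edge
        points.append((x, y))
    return points
```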
While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
Claims (5)
1. A feature extraction method of a machine vision-based anterior segment tomographic image is characterized by comprising the following steps:
step S1, acquiring an anterior segment tomogram under low illumination;
step S2, enhancing the contrast of the anterior segment tomogram using the Retinex algorithm;
step S3, applying Gaussian filtering to remove the noise introduced by step S2;
step S4, finding potential cornea regions through binarization combined with blob shape analysis;
step S5, coarsely locating the anterior and posterior corneal surface edges in the potential cornea region by the gradient maximum method;
step S6, determining sub-pixel precision boundaries of the anterior and posterior corneal surfaces by the Gaussian fitting positioning method;
and step S7, finding the initial points of the iris and the crystalline lens from the obtained sub-pixel precision boundaries of the anterior and posterior corneal surfaces, and obtaining accurate boundary values of the anterior and posterior corneal surfaces, the anterior lens surface and the anterior iris surface through tracking.
2. The method for extracting features of a machine vision-based anterior segment tomographic image according to claim 1, wherein the step S2 specifically comprises:
the attribute information of the anterior segment tomographic image itself is contained in the reflected light component R(x, y); therefore the ambient light component L(x, y), which affects human vision, is removed, and the reflected light component R(x, y) is retained.
3. The method for extracting features of a machine vision-based anterior segment tomographic image according to claim 1, wherein the step S4 specifically comprises:
the anterior segment tomographic image mainly comprises four parts: a dark background, a darker crystalline lens region, a brighter corneal region and the brightest iris region; since the basic composition and the number of classes of the image are known, a K-means clustering algorithm is used to segment the four regions of background, crystalline lens, iris and cornea; Blob shape analysis is then used to determine whether the binarized, segmented region is accurate.
4. The method for extracting features of a machine vision-based anterior segment tomographic image according to claim 1, wherein the step S5 specifically comprises:
based on the segmented potential cornea region, pull lines are drawn point by point across the anterior and posterior corneal surfaces, and the point where the first derivative of the gray level along each pull line reaches its maximum is taken as the corneal boundary point on that line.
5. The method for extracting features of a machine vision-based anterior segment tomographic image according to claim 1, wherein the step S6 specifically comprises:
in the Gaussian fitting positioning method, after the coarse edge positions of the anterior and posterior corneal surfaces are obtained, a sequence of neighboring pixel gray values centered on each coarse edge position is extracted along the direction of the maximum gradient, and a Gaussian function is fitted to the gray-level gradient, refining the pixel-level position to sub-pixel accuracy; because the Gaussian fit suppresses singular points caused by noise, the precision is greatly improved; the fitted Gaussian function is:
f(x) = k · exp(−(x − μ)² / (2σ²))
in the formula: μ is the sub-pixel edge coordinate value, σ is the standard deviation of the Gaussian function, and k is the amplitude; the fitting process uses the least squares method to solve for the values of μ, σ and k.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010461475.6A CN111861977A (en) | 2020-05-27 | 2020-05-27 | Feature extraction method of anterior segment tomogram based on machine vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010461475.6A CN111861977A (en) | 2020-05-27 | 2020-05-27 | Feature extraction method of anterior segment tomogram based on machine vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111861977A (en) | 2020-10-30
Family
ID=72985244
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010461475.6A Pending CN111861977A (en) | 2020-05-27 | 2020-05-27 | Feature extraction method of anterior segment tomogram based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111861977A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023092929A1 (en) * | 2021-11-24 | 2023-06-01 | 复旦大学附属眼耳鼻喉科医院 | Method and apparatus for measuring permeation depth of riboflavin in cornea |
CN116309661A (en) * | 2023-05-23 | 2023-06-23 | 广东麦特维逊医学研究发展有限公司 | Method for extracting OCT (optical coherence tomography) image contour of anterior segment of eye |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150161785A1 (en) * | 2012-08-02 | 2015-06-11 | Singapore Health Services Pte Ltd | Methods and systems for characterizing angle closure glaucoma for risk assessment or screening |
CN104013384A (en) * | 2014-06-11 | 2014-09-03 | 温州眼视光发展有限公司 | Anterior ocular segment cross-sectional image feature extraction method |
CN105894521A (en) * | 2016-04-25 | 2016-08-24 | 中国电子科技集团公司第二十八研究所 | Sub-pixel edge detection method based on Gaussian fitting |
CN108470348A (en) * | 2018-02-13 | 2018-08-31 | 温州眼视光发展有限公司 | Slit-lamp anterior ocular segment faultage image feature extracting method |
CN109684915A (en) * | 2018-11-12 | 2019-04-26 | 温州医科大学 | Pupil tracking image processing method |
CN110110761A (en) * | 2019-04-18 | 2019-08-09 | 温州医科大学 | The image characteristic extracting method of anterior ocular segment faultage image based on machine vision |
CN110705468A (en) * | 2019-09-30 | 2020-01-17 | 四川大学 | Eye movement range identification method and system based on image analysis |
Non-Patent Citations (1)
Title |
---|
MARIZUANA M. DAUD et al.: "Automated Corneal Segmentation of Anterior Segment Photographed Images using Centroid-Based Active Contour Model", Procedia Computer Science |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023092929A1 (en) * | 2021-11-24 | 2023-06-01 | 复旦大学附属眼耳鼻喉科医院 | Method and apparatus for measuring permeation depth of riboflavin in cornea |
CN116309661A (en) * | 2023-05-23 | 2023-06-23 | 广东麦特维逊医学研究发展有限公司 | Method for extracting OCT (optical coherence tomography) image contour of anterior segment of eye |
CN116309661B (en) * | 2023-05-23 | 2023-08-08 | 广东麦特维逊医学研究发展有限公司 | Method for extracting OCT (optical coherence tomography) image contour of anterior segment of eye |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109493954B (en) | SD-OCT image retinopathy detection system based on category distinguishing and positioning | |
Dey et al. | FCM based blood vessel segmentation method for retinal images | |
CN102860814A (en) | OCT (Optical Coherence Tomography) synthetic fundus image optic disc center positioning method and equipment | |
Deka et al. | Detection of macula and fovea for disease analysis in color fundus images | |
Li et al. | Vessel recognition of retinal fundus images based on fully convolutional network | |
CN111861977A (en) | Feature extraction method of anterior segment tomogram based on machine vision | |
Poshtyar et al. | Automatic measurement of cup to disc ratio for diagnosis of glaucoma on retinal fundus images | |
Naz et al. | Glaucoma detection in color fundus images using cup to disc ratio | |
Acharya et al. | Swarm intelligence based adaptive gamma corrected (SIAGC) retinal image enhancement technique for early detection of diabetic retinopathy | |
Prageeth et al. | Early detection of retinal nerve fiber layer defects using fundus image processing | |
Uribe-Valencia et al. | Automated Optic Disc region location from fundus images: Using local multi-level thresholding, best channel selection, and an Intensity Profile Model | |
Datta et al. | A new contrast enhancement method of retinal images in diabetic screening system | |
Kumar et al. | Automatic optic disc segmentation using maximum intensity variation | |
Tavakoli et al. | Automated optic nerve head detection based on different retinal vasculature segmentation methods and mathematical morphology | |
Joshi et al. | Review of preprocessing techniques for fundus image analysis | |
CN111292285B (en) | Automatic screening method for diabetes mellitus based on naive Bayes and support vector machine | |
El-Hag et al. | An efficient framework for macula exudates detection in fundus eye medical images | |
Amrutha et al. | An efficient ridge detection method for retinopathy of prematurity severity analysis | |
Zulfahmi et al. | Techniques for exudate detection for diabetic retinopathy | |
Zheng et al. | New simplified fovea and optic disc localization method for retinal images | |
Hamann et al. | At the pulse of time: Machine vision in retinal videos | |
Kumar et al. | Fundus image enhancement using visual transformation and maximum a posterior estimation | |
Mary et al. | Automatic optic nerve head segmentation for glaucomatous detection using hough transform and pyramidal decomposition | |
Kumar et al. | Image Enhancement using NHSI Model Employed in Color Retinal Images | |
Yi et al. | Observation model based retinal fundus image normalization and enhancement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20201030 |