CN115393350A - Iris positioning method - Google Patents
- Publication number: CN115393350A (application CN202211314729.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- iris
- pixel
- area
- color
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration by the use of local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20036—Morphological image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A10/00—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
- Y02A10/40—Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping
Abstract
The invention relates to an iris positioning method, which comprises the following steps: S1, performing graphical processing on an original image to obtain a separated iris area outline image; S2, selecting the circle center position and radius range from the obtained separated iris area outline image, thereby completing the iris positioning of the image. The method can accurately eliminate eye interference factors, keeps the recognition deviation small, and can be applied to all ophthalmic surgeries performed under a microscope.
Description
Technical Field
The invention relates to the technical field of iris recognition, in particular to an iris positioning method.
Background
Iris positioning is an image processing technology for finding the edge of an iris, plays an important role in the field of iris recognition, and is the basis of the accuracy of iris feature extraction. Meanwhile, iris positioning is an essential part in the intelligent process of the ophthalmic surgery.
The traditional iris positioning methods mainly comprise the following: (1) performing circle detection in the iris image and extracting the iris region; the accuracy of this method is strongly affected by image brightness, and its ability to handle specular reflections is weak. (2) Performing edge detection in the iris image and obtaining the circle radius by Hough transform, so as to segment the iris region. (3) Extracting the iris edge information by a least-squares fitting method.
The problem with these methods in iris positioning is that they perform poorly in the presence of influencing factors such as large-area occlusion and eyelash interference. In intraoperative microscope images, interference such as bleeding, incomplete eye images, and non-circular irises caused by iris deformation further impairs iris detection.
Therefore, there is an urgent need for an iris positioning method that can be applied to ophthalmic surgery scenes, correctly handles eye interference factors, is not prone to large recognition deviations, and maintains consistency for subsequent intelligent surgical operations.
Disclosure of Invention
In view of the above, there is a need for an iris positioning method, which can precisely eliminate the existing eye interference factors, has small recognition deviation, and can be applied to all ophthalmic surgeries under a microscope.
The invention provides an iris positioning method, which comprises the following steps: s1, performing graphical processing on an original image to obtain a separated iris area outline image; and S2, selecting the circle center positioning and range of the iris area outline image according to the obtained separated iris area outline image, thereby completing the iris positioning of the image.
Specifically, the step S1 includes:
s11, converting the original image from a three-channel color image into a single-channel gray image, and uniformly segmenting the single-channel gray image through threshold binarization processing to obtain a binarized image; wherein the original image is a microscope image in an ophthalmic surgery;
s12, performing morphological processing on the obtained binary image to obtain a separated eye multi-contour binary image; wherein, the morphological treatment is to alternately use an opening operation and a closing operation;
s13, carrying out multiple flood filling on the obtained separated eye multi-contour binary image to obtain an eye contour binary image without canthus tissue interference;
step S14, extracting the outermost contours of the obtained eye contour binary image, calculating the area of each outermost contour, drawing the contour with the largest area as an image, and scanning and filling this maximum-contour image to obtain an initial iris area extraction image;
and step S15, repeating the steps S12-S14 for the iris area primary extraction image obtained in the step S14, and obtaining a separated iris area outline image.
Specifically, in step S12: eliminating the protrusion and the tiny connection part of the outline in the binary image by using an opening operation; filling the hole in the outline and repairing the tiny recess on the edge by using a closing operation; the opening and closing operation is alternately used, accidental communication of different areas caused by small gray scale change is eliminated, and meanwhile, the defect of the outline area is filled.
Specifically, in step S13: and performing flood filling on four offset corner points of the separated eye multi-contour binary image to eliminate the contour of an interference object.
Specifically, the flood filling is: for an image of size (x*y), flood filling is performed by taking (a, a), (a, y−a), (x−a, a) and (x−a, y−a) in sequence as the flood seed points; wherein x is the image width, y is the image height, and a is the offset, in pixels.
Specifically, the scan fill includes: map for maximum profileScanning in the row direction and the column direction respectively to completely fill the interior of the outline; the scan in the row direction is: the image with the size of (x y) is scanned line by line and pixel by pixel, and if the pixel with the color of foreground exists in the ith (i belongs to [0, x ]) line, all coordinates are satisfiedOf a pixelFilling as foreground color, whereinThe coordinates of the pixel to be filled are indicated,、respectively representing the pixel coordinates of the first color as the foreground color and the pixel coordinates of the last color as the foreground color in the ith row; for the j (j belongs to [0, y)) line, if there is a pixel with the color of foreground color, all coordinates are satisfiedIs formed by a plurality of pixelsFilled in as foreground color, whereinThe coordinates of the pixel to be filled are represented,、respectively representing that the first color of the jth column is the pixel coordinate of the foreground color and the last color is the foreground colorPixel coordinates of the color.
Specifically, the step S2 includes:
s21, carrying out iris area centroid positioning on the separated iris area outline image to obtain a centroid coordinate;
s22, traversing the separated iris area outline images row by row and column by column to obtain the predicted radius r of an iris detection circle;
step S23, using the obtained centroid coordinate as the center of circle, at (r-r) e , r+ r e ) Performing circle matching within the radius range of the base; wherein r is e An offset is searched for the radius.
Specifically, the step S21 includes:
obtaining the centroid coordinates (centre_x, centre_y); the specific calculation formula is as follows:
wherein, gray (x, y) is the pixel value at the point (x, y), and n is the total pixel number of the iris area.
Specifically, the step S22 includes:
the calculation method for obtaining the predicted radius r of the iris detection circle is as follows:
wherein x_min and x_max are respectively the x-coordinate values of the first and last iris region pixels in the row direction of the separated iris region outline image, and y_min and y_max are respectively the y-coordinate values of the first and last iris region pixels in the column direction of the separated iris region outline image.
Specifically, the step S23 includes:
in the matching process, the pixel gradient is used as an index, and the calculation formula is as follows:
wherein r′ denotes a candidate radius in the search set of the iris detection circle, r′ ∈ (r − r_e, r + r_e), and R is the finally selected iris radius.
The invention is based on microscope images in ophthalmic surgery. By performing iris positioning on intraoperative images of this scene, it solves the technical problem that bleeding, incomplete eye images, iris deformation and the like in microscope images interfere with iris detection. It accurately eliminates the existing extraocular interference items, is suitable for most images with eye deformation, and keeps the recognition deviation small, providing reliable and accurate iris detection for intraoperative images. It thereby realizes accurate iris positioning, precisely segments the iris feature region, improves the safety of the surgery, and can be applied to all ophthalmic surgeries performed under a microscope.
Drawings
FIG. 1 is a flow chart of the iris positioning method of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Please refer to fig. 1, which is a flowchart illustrating an iris positioning method according to a preferred embodiment of the present invention.
And S1, performing graphical processing on the original image to obtain a separated iris area outline image. The method specifically comprises the following steps:
and S11, converting a three-channel color image into a single-channel gray image for an original image, namely a microscope image in an ophthalmic surgery, and uniformly segmenting the single-channel gray image through threshold binarization processing to obtain a binarized image.
In step S11, the original image is simplified based on the gradation by threshold binarization processing so as to preliminarily extract contour and region feature information of the eye image.
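As an illustrative sketch only (the patent does not specify the grayscale conversion weights or the threshold value, so the BT.601 luma weights and the fixed threshold of 128 below are assumptions), step S11 can be pictured as:

```python
# Sketch of step S11: grayscale conversion and uniform threshold binarization.
# Images are represented as nested lists; a real pipeline would use an image
# library, but the arithmetic is the same.

def to_gray(rgb_image):
    """Convert rows of (r, g, b) tuples to a single-channel gray image
    (assumed BT.601 luma weights; the patent does not give the weights)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def binarize(gray_image, threshold=128):
    """Uniform threshold binarization: foreground 255, background 0
    (threshold value is an assumption)."""
    return [[255 if p >= threshold else 0 for p in row] for row in gray_image]
```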
And S12, performing morphological processing on the obtained binary image to obtain a separated eye multi-contour binary image. Wherein the morphological treatment is to alternately use an open operation and a close operation.
In step S12, the protrusion and fine junction of the contour in the binarized image are eliminated using an on operation; a closing operation is used to fill the holes in the profile and repair the edge micro-pits. The opening and closing operation is alternately used, accidental communication of different areas caused by small gray scale change is eliminated, and meanwhile, the defect of the outline area is filled. Through the step, the processed image is converted from the binary image with large-area communication into an image with a plurality of mutually separated outlines.
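The alternating opening and closing of step S12 can be sketched as follows. This is illustrative only: the patent does not specify the structuring element, so a 3x3 square neighborhood is assumed here.

```python
# Sketch of step S12: morphological opening and closing on a binary image
# (nested lists with values 0/255), assuming a 3x3 square structuring element.

def _neighborhood(img, i, j):
    """Yield the in-bounds 3x3 neighborhood values of pixel (i, j)."""
    h, w = len(img), len(img[0])
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                yield img[ni][nj]

def erode(img):
    return [[255 if all(p == 255 for p in _neighborhood(img, i, j)) else 0
             for j in range(len(img[0]))] for i in range(len(img))]

def dilate(img):
    return [[255 if any(p == 255 for p in _neighborhood(img, i, j)) else 0
             for j in range(len(img[0]))] for i in range(len(img))]

def opening(img):
    """Erosion then dilation: removes protrusions and tiny connections."""
    return dilate(erode(img))

def closing(img):
    """Dilation then erosion: fills holes and repairs small edge recesses."""
    return erode(dilate(img))
```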
And S13, performing multiple flood filling on the obtained separated eye multi-contour binary image to obtain an eye contour binary image without the interference of the canthus tissues.
Since periocular tissues and instruments such as eyelids, eyelashes and eyeball holders are frequently found in the microscope image, the above objects are easily recognized as iris regions by mistake in the image, which causes serious hindrance to the recognition effect. In step S13, four offset corner points of the separated eye multi-contour binary image are flood-filled to eliminate the contour of the interfering object and eliminate the negative impact thereof.
Specifically, the method comprises the following steps:
the flood filling is as follows: for the image with the size of (x y), flood filling is carried out by taking (a, a), (a, y-a), (x-a, a) and (x-a, y-a) as the flood seed points in sequence. Wherein x is the image width, y is the image height, a is the offset, and the unit is pixel. In the present embodiment, the offset a =5, and the fill color is the background color black.
Step S14: the outermost contours of the obtained eye contour binary image are extracted, the area of each outermost contour is calculated, the contour with the largest area is drawn as an image, and this maximum-contour image is scanned and filled to obtain an initial iris region extraction image.
In the eye image under the microscope, the area proportion of the iris region of the human eye is the highest in most cases, so after the periocular tissue contour with a large area is removed in step S13, in step S14, the contour with the largest area is the approximate contour of the region where the iris is located.
Wherein the scan filling comprises: for the maximum-contour image, scanning is performed in the row direction and the column direction respectively, so that the interior of the contour is completely filled. The scan in the row direction is: the image of size (x*y) is scanned row by row and pixel by pixel; for the i-th row (i ∈ [0, x)), if a pixel of the foreground color exists (in this embodiment the foreground color is white), every pixel (i, j) whose coordinate satisfies x_first(i) ≤ j ≤ x_last(i) is filled with foreground white, where x_first(i) and x_last(i) denote respectively the coordinates of the first and last white pixels in the i-th row. For the j-th column (j ∈ [0, y)), if a pixel of foreground white exists, every pixel (i, j) whose coordinate satisfies y_first(j) ≤ i ≤ y_last(j) is filled with foreground white, where y_first(j) and y_last(j) denote respectively the coordinates of the first and last white pixels in the j-th column.
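The row-then-column scan fill can be sketched as follows (illustrative only; images are nested lists with foreground value 255, and the mutation is done in place):

```python
def scan_fill(img, fg=255):
    """Scan fill of step S14: in every row, then in every column, that
    contains foreground pixels, fill the whole span between the first and
    last foreground pixel with the foreground color."""
    # Row-direction pass.
    for row in img:
        cols = [j for j, p in enumerate(row) if p == fg]
        if cols:
            for j in range(cols[0], cols[-1] + 1):
                row[j] = fg
    # Column-direction pass.
    for j in range(len(img[0])):
        rows = [i for i in range(len(img)) if img[i][j] == fg]
        if rows:
            for i in range(rows[0], rows[-1] + 1):
                img[i][j] = fg
    return img
```

The result of both passes is a convex-looking connected region with no internal holes, which matches the stated purpose of the step.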
The effect of the scanning filling operation performed in step S14 is to obtain an internally hole-free, edge-flattened, recess-free convex communication region.
And step S15, repeating the steps S12-S14 for the primary extracted iris area image obtained in the step S14 to obtain a separated iris area contour image.
Steps S12 to S14 are repeated 3 to 10 times on the initial iris region extraction image obtained in step S14 to obtain the separated iris region contour image.
In the present embodiment, the steps S12-S14 are repeated 4 times with respect to the iris region initial-extracted image obtained in step S14, and a separated iris region contour image is obtained.
And S2, selecting circle center positioning and range according to the obtained separated iris area outline image, thereby completing iris positioning of the image. The method specifically comprises the following steps:
s21, carrying out iris area centroid positioning on the separated iris area outline image, and acquiring a centroid coordinate (centre) x ,centre y ). The specific calculation formula is as follows:
wherein, gray (x, y) is the pixel value at the point (x, y), and n is the total pixel number of the iris area.
In step S21, the centroid is selected as the circle-center reference point. Compared with the conventional center selection method ((x_max + x_min)/2, (y_max + y_min)/2), this step takes the pixel value as the mass index, so the center of the region can be judged more accurately, avoiding the deviation caused by uneven distribution of the region.
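The centroid formula image is not reproduced in this text, so the sketch below is an assumption consistent with the surrounding description: a centroid in which each pixel is weighted by its gray value gray(x, y), rather than a plain average of extreme coordinates.

```python
def iris_centroid(img):
    """Pixel-value-weighted centroid of the iris region image
    (assumed reading of the missing formula; img is [row][col] = gray)."""
    sx = sy = mass = 0
    for y, row in enumerate(img):
        for x, g in enumerate(row):
            sx += x * g
            sy += y * g
            mass += g
    if mass == 0:
        raise ValueError("empty image: no foreground mass")
    return sx / mass, sy / mass
```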
Step S22, traversing the separated iris area outline image row by row and column by column to obtain the predicted radius r of the iris detection circle, wherein the calculation method comprises the following steps:
wherein x_min and x_max are respectively the x-coordinate values of the first and last iris region pixels in the row direction of the separated iris region outline image, and y_min and y_max are respectively the y-coordinate values of the first and last iris region pixels in the column direction of the separated iris region outline image.
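The formula image for the predicted radius is likewise not reproduced here; averaging the half-extents of the region in the two directions, r = ((x_max − x_min) + (y_max − y_min)) / 4, is one plausible reading and is what the sketch below assumes.

```python
def predicted_radius(img, fg=255):
    """Predicted radius r from the row/column extents of the iris region
    (assumed formula: mean of the two half-extents)."""
    xs = [x for row in img for x, g in enumerate(row) if g == fg]
    ys = [y for y, row in enumerate(img) for g in row if g == fg]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return ((x_max - x_min) + (y_max - y_min)) / 4
```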
Step S23: taking the obtained centroid coordinates as the circle center, circle matching is performed within the radius range (r − r_e, r + r_e), wherein r_e is the radius search offset.
In the present embodiment, r_e = 10. In the matching process, the pixel gradient is used as the index; the calculation formula is as follows:
wherein r′ denotes a candidate radius in the search set of the iris detection circle, r′ ∈ (r − r_e, r + r_e), and R is the finally selected iris radius.
In the radius search step, a circle is defined for each candidate radius, gray-level statistics are computed over all points on that circle, and the gradient is then calculated by convolution. After the edge information is obtained, the radius with the maximum gradient is selected as the circle closest to the iris contour, thereby determining the radius. Based on the edge information of the iris region and with the reliable centroid coordinates as the circle center, the optimal circle result can be obtained.
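The radius search can be sketched as follows. This is illustrative only: the patent's gradient formula is not reproduced in this text, so a mean-gray statistic per candidate circle and a finite difference between consecutive circles stand in for the convolution-based gradient.

```python
import math

def circle_mean_gray(img, cx, cy, radius, samples=360):
    """Mean gray value over sampled points of the circle of the given
    radius centered at (cx, cy); img is indexed [row][col]."""
    total = count = 0
    for k in range(samples):
        t = 2 * math.pi * k / samples
        x = int(round(cx + radius * math.cos(t)))
        y = int(round(cy + radius * math.sin(t)))
        if 0 <= y < len(img) and 0 <= x < len(img[0]):
            total += img[y][x]
            count += 1
    return total / count if count else 0.0

def best_radius(img, cx, cy, r, r_e=10):
    """Search (r - r_e, r + r_e) for the radius where the gray statistic
    changes most sharply between consecutive candidate circles; returns
    the lower radius of the steepest pair (a stand-in for the patent's
    convolution gradient)."""
    radii = list(range(max(1, r - r_e + 1), r + r_e))
    means = [circle_mean_gray(img, cx, cy, rr) for rr in radii]
    grads = [abs(means[i + 1] - means[i]) for i in range(len(means) - 1)]
    return radii[grads.index(max(grads))]
```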
Although the present invention has been described with reference to the presently preferred embodiments, it will be understood by those skilled in the art that the foregoing description is illustrative only and is not intended to limit the scope of the invention, as claimed.
Claims (9)
1. An iris positioning method is characterized by comprising the following steps:
s1, performing graphical processing on an original image to obtain a separated iris area outline image;
s2, selecting circle center positioning and range of the iris area outline image according to the obtained separated iris area outline image so as to complete iris positioning of the image;
wherein, the step S1 comprises:
s11, converting the original image from a three-channel color image into a single-channel gray image, and uniformly segmenting the single-channel gray image through threshold binarization processing to obtain a binarized image; wherein the original image is a microscope image in an ophthalmic surgery;
step S12, performing morphological processing on the obtained binary image to obtain a separated eye multi-contour binary image; wherein, the morphological treatment is to alternately use an opening operation and a closing operation;
s13, carrying out multiple flood filling on the obtained separated eye multi-contour binary image to obtain an eye contour binary image without the interference of eye corner tissues;
step S14, extracting the outermost contours of the obtained eye contour binary image, calculating the area of each outermost contour, drawing the contour with the largest area as an image, and scanning and filling this maximum-contour image to obtain an initial iris area extraction image;
and step S15, repeating the steps S12-S14 for the iris area primary extraction image obtained in the step S14, and obtaining a separated iris area outline image.
2. An iris positioning method according to claim 1, wherein in the step S12: eliminating the protrusion and tiny connection part of the outline in the binary image by using an opening operation; filling the hole in the outline and repairing the tiny recess on the edge by using a closing operation; the opening and closing operation is alternately used, accidental communication of different areas caused by small gray scale change is eliminated, and meanwhile, the defect of the outline area is filled.
3. An iris positioning method as claimed in claim 2, wherein in the step S13: and carrying out flood filling on four offset corner points of the separated eye multi-contour binary image so as to eliminate the contour of an interference object.
4. An iris positioning method according to claim 3, wherein the flood filling is: for an image of size (x*y), flood filling is performed by taking (a, a), (a, y−a), (x−a, a) and (x−a, y−a) in sequence as the flood seed points; wherein x is the image width, y is the image height, and a is the offset, in pixels.
5. An iris positioning method according to claim 4, wherein the scan fill comprises: for the maximum-contour image, scanning in the row direction and the column direction respectively, so as to completely fill the interior of the contour; the scan in the row direction is: the image of size (x*y) is scanned row by row and pixel by pixel; for the i-th row (i ∈ [0, x)), if a pixel of the foreground color exists, every pixel (i, j) whose coordinate satisfies x_first(i) ≤ j ≤ x_last(i) is filled with the foreground color, where x_first(i) and x_last(i) denote respectively the coordinates of the first and last foreground-color pixels in the i-th row; for the j-th column (j ∈ [0, y)), if a pixel of the foreground color exists, every pixel (i, j) whose coordinate satisfies y_first(j) ≤ i ≤ y_last(j) is filled with the foreground color, where y_first(j) and y_last(j) denote respectively the coordinates of the first and last foreground-color pixels in the j-th column.
6. An iris positioning method as claimed in claim 1, wherein the step S2 includes:
s21, carrying out iris area centroid positioning on the separated iris area outline image to obtain a centroid coordinate;
s22, traversing the separated iris area outline images row by row and column by column to obtain the predicted radius r of an iris detection circle;
step S23, taking the obtained centroid coordinates as the circle center, performing circle matching within the radius range (r − r_e, r + r_e); wherein r_e is the radius search offset.
7. An iris positioning method according to claim 6, wherein said step S21 comprises:
obtaining the centroid coordinates (centre_x, centre_y); the specific calculation formula is as follows:
wherein, gray (x, y) is the pixel value at the point (x, y), and n is the total number of pixels in the iris area.
8. An iris positioning method as claimed in claim 7, wherein the step S22 includes:
the calculation method for obtaining the predicted radius r of the iris detection circle is as follows:
wherein x_min and x_max are respectively the x-coordinate values of the first and last iris region pixels in the row direction of the separated iris region outline image, and y_min and y_max are respectively the y-coordinate values of the first and last iris region pixels in the column direction of the separated iris region outline image.
9. An iris positioning method as claimed in claim 8, wherein the step S23 includes:
in the matching process, the pixel gradient is used as an index, and the calculation formula is as follows:
wherein r′ denotes a candidate radius in the search set of the iris detection circle, r′ ∈ (r − r_e, r + r_e), and R is the finally selected iris radius.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211314729.7A CN115393350B (en) | 2022-10-26 | 2022-10-26 | Iris positioning method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211314729.7A CN115393350B (en) | 2022-10-26 | 2022-10-26 | Iris positioning method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115393350A true CN115393350A (en) | 2022-11-25 |
CN115393350B CN115393350B (en) | 2023-06-09 |
Family
ID=84129030
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211314729.7A Active CN115393350B (en) | 2022-10-26 | 2022-10-26 | Iris positioning method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115393350B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6095989A (en) * | 1993-07-20 | 2000-08-01 | Hay; Sam H. | Optical recognition methods for locating eyes |
US20070140531A1 (en) * | 2005-01-26 | 2007-06-21 | Honeywell International Inc. | standoff iris recognition system |
US20090284627A1 (en) * | 2008-05-16 | 2009-11-19 | Kaibushiki Kaisha Toshiba | Image processing Method |
US20140347540A1 (en) * | 2013-05-23 | 2014-11-27 | Samsung Electronics Co., Ltd | Image display method, image display apparatus, and recording medium |
US20150105759A1 (en) * | 2013-10-15 | 2015-04-16 | Lensar, Inc. | Iris registration method and system |
CN106575357A (en) * | 2014-07-24 | 2017-04-19 | 微软技术许可有限责任公司 | Pupil detection |
CN107358224A (en) * | 2017-08-18 | 2017-11-17 | 北京工业大学 | A kind of method that iris outline detects in cataract operation |
CN107895157A (en) * | 2017-12-01 | 2018-04-10 | 沈海斌 | A kind of pinpoint method in low-resolution image iris center |
-
2022
- 2022-10-26 CN CN202211314729.7A patent/CN115393350B/en active Active
Non-Patent Citations (1)
Title |
---|
Liu Hui et al.: "Research on Iris Localization Based on an Improved Particle Swarm Algorithm and Partitioned Denoising", Nianjie (Adhesion) *
Also Published As
Publication number | Publication date |
---|---|
CN115393350B (en) | 2023-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Lim et al. | Integrated optic disc and cup segmentation with deep learning | |
US6885766B2 (en) | Automatic color defect correction | |
EP1229493B1 (en) | Multi-mode digital image processing method for detecting eyes | |
CN102426649B (en) | Simple steel seal digital automatic identification method with high accuracy rate | |
US6229905B1 (en) | Animal identification based on irial granule analysis | |
Gangwar et al. | IrisSeg: A fast and robust iris segmentation framework for non-ideal iris images | |
CN107844736B (en) | Iris positioning method and device | |
CN100382751C (en) | Canthus and pupil location method based on VPP and improved SUSAN | |
US20040146187A1 (en) | Iris extraction method | |
CN106355599B (en) | Retinal vessel automatic division method based on non-fluorescence eye fundus image | |
CN107480644A (en) | The positioning of optic disk and dividing method, device and storage medium in eye fundus image | |
CN111507932B (en) | High-specificity diabetic retinopathy characteristic detection method and storage device | |
Salazar-Gonzalez et al. | Optic disc segmentation by incorporating blood vessel compensation | |
Xiao et al. | Retinal hemorrhage detection by rule-based and machine learning approach | |
CN113256580A (en) | Automatic identification method for target colony characteristics | |
Kovacs et al. | Graph based detection of optic disc and fovea in retinal images | |
CN102332098A (en) | Method for pre-processing iris image | |
Zaim | Automatic segmentation of iris images for the purpose of identification | |
Kumar et al. | Automatic optic disc segmentation using maximum intensity variation | |
CN115393350A (en) | Iris positioning method | |
Soares et al. | Exudates dynamic detection in retinal fundus images based on the noise map distribution | |
CN113362346B (en) | Video disc and video cup segmentation method based on machine learning double-region contour evolution model | |
CN114926635A (en) | Method for segmenting target in multi-focus image combined with deep learning method | |
Shanthamalar et al. | A novel approach for glaucoma disease identification through optic nerve head feature extraction and random tree classification | |
Revathy | Revelation of diabetics by inadequate balanced SVM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||