CN110543802A - Method and device for identifying left eye and right eye in fundus image - Google Patents

Method and device for identifying left eye and right eye in fundus image

Info

Publication number
CN110543802A
CN110543802A CN201810534260.5A CN201810534260A
Authority
CN
China
Prior art keywords
image
optic disc
circular
fundus
eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810534260.5A
Other languages
Chinese (zh)
Inventor
赵雷
金蒙
唐轶
王斯凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Daheng Prust Medical Technology Co ltd
Original Assignee
Beijing Daheng Prust Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Daheng Prust Medical Technology Co ltd filed Critical Beijing Daheng Prust Medical Technology Co ltd
Priority to CN201810534260.5A priority Critical patent/CN110543802A/en
Publication of CN110543802A publication Critical patent/CN110543802A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris

Abstract

The invention relates to a method and a device for identifying the left and right eye in a fundus image. The method first detects the optic disc position in the fundus image, and then judges from that position whether the fundus image shows a left eye or a right eye. The optic disc is located by identifying a highlight circular region in the fundus image, in the following steps: down-sampling the image; color-decomposing the down-sampled image to obtain single-channel images on the three RGB components; applying median filtering and maximum filtering to each of the three single-channel images; and detecting a circular highlight region in the three single-channel images, the circular highlight region being the optic disc position. Alternatively, a pattern-recognition-based method or a deep-learning-based method may be employed to detect the optic disc position in the fundus image. The method accurately identifies whether a fundus image is a left-eye or a right-eye image, and can be applied both during image acquisition and during fundus image analysis and processing.

Description

Method and device for identifying left eye and right eye in fundus image
Technical Field
The invention belongs to the technical field of information technology and image processing, and particularly relates to a method and a device for identifying the left and right eye in a fundus image.
Background
In fundus image analysis, the fundus images need to be classified into left-eye and right-eye images to facilitate subsequent analysis and diagnosis. At present this is mainly done by manual classification: after acquisition, an ophthalmologist marks each fundus image as left eye or right eye, and the images marked left eye and those marked right eye are then grouped separately. Another method classifies the images during acquisition, by agreeing on the order in which the fundus images are captured (for example, the right eye is acquired first and the left eye second). There is also a method that uses a deep-learning-trained classifier to distinguish a fundus image as left eye or right eye (Chakravorty R, Garnavi R, Roy P. Automatically detecting eye type in retinal fundus images. US20170112372[P]. 2017): it first extracts the positions of the vascular structures in the fundus image, then locates the optic disc, extracts features from the positions and colors of the vessels and the optic disc, trains a classifier, and uses the classifier to classify the image as left eye or right eye.
The prior art mainly relies on manual classification and on agreeing an acquisition order during imaging. Manual classification requires a person to interpret the content of each fundus image, distinguish the left from the right eye, and mark the category, which costs the ophthalmologist reading effort. Agreeing an acquisition order is inflexible: some of the captured images may be unusable, or one eye may need several images while the other needs none or only one, so a fixed acquisition order is hard to adapt to the various situations that arise during acquisition. The method that trains a classifier by deep learning must extract the blood vessels of the fundus image and classify with a deep neural network, so its computational cost is large and real-time processing is difficult to achieve.
Disclosure of the Invention
The invention provides a method and a device for automatically identifying a fundus image as left eye or right eye, applicable both during image acquisition and during fundus image analysis and processing. The invention accurately identifies whether the fundus image is a left-eye or a right-eye image, and reduces the workload of doctors in the classification process. Because it does not need to extract blood vessel positions from the fundus image, its computational cost is small and real-time processing is achievable.
The technical scheme adopted by the invention is as follows:
A method for identifying a left eye and a right eye in a fundus image, comprising the steps of:
1) detecting the optic disc position in the fundus image;
2) judging whether the fundus image is a left eye or a right eye according to the optic disc position.
Further, step 1) detects the optic disc position by identifying a highlight circular region in the fundus image, comprising the steps of:
1.1) carrying out down-sampling processing on the image;
1.2) carrying out color decomposition on the down-sampled image to obtain a single-channel image on three components of RGB;
1.3) respectively carrying out median filtering and maximum filtering on the single-channel images on the three components;
1.4) detecting a circular highlight area in a single-channel image on the three components, wherein the circular highlight area is the position of the optic disc.
Further, step 1.4) first detects a circular highlight region on the green-component single-channel image, and then detects highlight circular regions on the red- and blue-component single-channel images respectively; the positions and radii of the qualifying circular regions detected on the three color-component single-channel images are recorded, and among the circles that overlap the most other circles, the one with the largest radius is taken as the detection result of the highlight circular region in the fundus image.
Further, the following steps are taken to detect a circular highlight region on a single-channel image:
a) binarizing the image with different gray thresholds, wherein pixels whose gray value is greater than the threshold are set to 1 and pixels below the threshold are set to 0;
b) extracting the contours of the shapes formed by the pixels of value 1 on the binarized image;
c) calculating the area of the region enclosed by each contour and the convexity of the contour;
d) selecting the contours exceeding the set area threshold and convexity threshold, and fitting a circle to each contour curve to obtain its center position and radius;
e) among the circular regions obtained from the images binarized with different gray thresholds, selecting those whose overlap count exceeds a given value, discarding the others, and retaining the circular region with the largest radius.
Further, step 1.4) scales the detected circle position and radius up according to the image down-sampling factor, obtaining the position and radius of the highlight circular region on the original-size image.
Further, if no overlapping circular regions appear in step 1.4), the circular region on the green-component single-channel image is taken as the final detection result.
Further, step 1) may detect the optic disc position in the fundus image using a pattern-recognition-based method: the optic disc positions in the training-set images are marked manually, features of the region where the optic disc is located are extracted, a classifier is trained, and the classifier is then used to locate the optic disc in the test-set images.
Further, step 1) may adopt a deep-learning-based method to detect the optic disc position in the fundus image: the optic disc positions in fundus images are marked manually, local images containing the optic disc are used as positive samples and local images not containing it as negative samples, a deep neural network is trained, and the trained network is used to identify the optic disc position in an image.
A device for identifying the left and right eye in a fundus image, comprising:
an image input module, responsible for loading the fundus image;
an optic disc positioning module, responsible for detecting the optic disc position in the fundus image; and
a result output and display module, responsible for judging whether the fundus image is a left eye or a right eye according to the optic disc position and for outputting and displaying the result.
The invention has the following beneficial effects:
a) Small computational cost, real-time processing: the high-resolution fundus image is first down-sampled to a lower-resolution image, which reduces the subsequent computation without affecting the accuracy of the final result. The invention also does not need to extract blood vessel position information from the fundus image, which further reduces the computation.
b) High accuracy and high specificity: tests on a large number of doctor-labeled normal and pathological fundus images show that the method has very high accuracy and specificity.
Drawings
Fig. 1 is a flowchart of the steps of the method for identifying the left and right eye in a fundus image according to the present invention.
Fig. 2 is a configuration diagram of the device for identifying the left and right eye in a fundus image according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, the present invention is described in further detail below with reference to the detailed description and the accompanying drawings.
Fig. 1 is a flowchart of the steps of the method for identifying the left and right eye in a fundus image according to the present invention. The method comprises three main steps: loading the image, detecting the optic disc position, and identifying the left or right eye from the optic disc position.
1. Loading images
This step reads a fundus image from a storage medium.
2. Optic disc position detection
2.1) Down-sampling
Since the resolution of the fundus image is high (the image height is about 2000 pixels), the image is first reduced. Reducing the image lowers the subsequent computational load and improves detection speed.
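The patent does not specify how the reduction is performed; a minimal sketch of one common choice, block-mean down-sampling with NumPy, is shown below. The factor of 4 and the image size are illustrative only.

```python
import numpy as np

def downsample(img, factor):
    """Block-mean down-sampling: shrink an H x W (or H x W x 3) image by an
    integer factor, averaging each factor x factor block. A sketch only --
    the patent does not fix the interpolation scheme."""
    h = img.shape[0] - img.shape[0] % factor
    w = img.shape[1] - img.shape[1] % factor
    img = img[:h, :w]
    # Reshape so each block gets its own pair of axes, then average them.
    new_shape = (h // factor, factor, w // factor, factor) + img.shape[2:]
    return img.reshape(new_shape).mean(axis=(1, 3))

# Example: a fundus image about 2000 pixels high, reduced 4x to ~500 pixels.
img = np.random.rand(2000, 2400, 3)
small = downsample(img, 4)
print(small.shape)  # (500, 600, 3)
```

Any reduction that preserves the bright disc region would serve; block averaging has the side benefit of suppressing pixel noise before filtering.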
2.2) Color decomposition
The down-sampled image is decomposed into its three RGB channels, giving a single-channel image for each of the three components.
2.3) median and maximum filtering
Median filtering and maximum filtering are applied to each of the three single-channel images, and the dynamic range of each single-channel image is then stretched to 0-255 to facilitate binarization.
Median filtering slides a local window over the single-channel image from left to right and top to bottom; the gray values of the pixels inside the window are sorted in ascending order, the value at the middle position is taken, and the center pixel of the window is replaced by it.
Maximum filtering slides a local window over the single-channel image in the same order, takes the maximum gray value inside the window, and replaces the center pixel of the window by that maximum.
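The two filters and the range stretch described above can be sketched in NumPy as follows. This is an illustrative simplification (the borders are left unfiltered and the window size `k=5` is an assumption, not a value from the patent):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def median_filter(chan, k=5):
    """Median-filter a single-channel image with a k x k window;
    edge pixels are left unfiltered in this simplified sketch."""
    win = sliding_window_view(chan, (k, k))           # (H-k+1, W-k+1, k, k)
    out = chan.copy()
    out[k//2:-(k//2), k//2:-(k//2)] = np.median(win, axis=(-2, -1))
    return out

def maximum_filter(chan, k=5):
    """Maximum filter: replace each pixel by the largest value in its window."""
    win = sliding_window_view(chan, (k, k))
    out = chan.copy()
    out[k//2:-(k//2), k//2:-(k//2)] = win.max(axis=(-2, -1))
    return out

def stretch_to_255(chan):
    """Stretch the dynamic range to 0-255 to ease later binarization."""
    lo, hi = chan.min(), chan.max()
    return (chan - lo) / max(hi - lo, 1e-9) * 255.0

chan = np.random.rand(64, 64)
smoothed = stretch_to_255(maximum_filter(median_filter(chan)))
print(smoothed.min(), smoothed.max())  # 0.0 255.0
```

Median filtering suppresses impulse noise while the maximum filter brightens and consolidates the highlight region, which is what makes the disc stand out at binarization time.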
2.4) Extracting the circular highlight region
A circular highlight region is first detected on the green-component single-channel image, as follows:
a) The image is binarized with several different gray thresholds; pixels whose gray value is greater than the threshold are set to 1, and pixels below the threshold are set to 0.
b) The contours of the shapes formed by the pixels of value 1 are extracted from the binarized image. The contour extraction proceeds as follows: the binarized image is traversed from left to right and top to bottom; on meeting the first non-zero pixel, its coordinate is recorded and the point is marked as the start of the contour. Among the 8 pixels adjacent to the current pixel, starting from the left neighbor and searching in clockwise order, the next non-zero pixel is taken as the next contour point and its coordinate is recorded. This is repeated until the contour start is reached again; the recorded points form the contour. The non-zero pixels inside the extracted contour are then set to 0, and the search for the start of the next contour continues left to right, top to bottom, until the lower-right corner of the image.
c) The area of the region enclosed by the contour and the convexity of the contour are calculated. The area is the number of image pixels enclosed by the contour. The convexity is the mean, taken at different positions along the contour, of the tangent of the central angle subtended by a fixed-length arc, divided by 4.
d) The contours exceeding the set area threshold and convexity threshold are selected, and a circle is fitted to each contour curve to obtain its center position and radius. Circles are extracted in this way from the images binarized with the different gray thresholds, and the center position and radius of each circle are recorded.
e) Among the circular regions obtained from the images binarized with different gray thresholds, those whose overlap count exceeds a given value are selected, the others are discarded, and the circular region with the largest radius is retained.
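The multi-threshold detection and overlap voting of steps a)-e) can be sketched as below. This is a heavy simplification: it assumes a single bright blob per threshold and fits the circle from the centroid and equivalent-area radius, omitting the contour tracing and area/convexity filtering the patent describes. The thresholds and `min_votes` value are assumptions for illustration.

```python
import numpy as np

def detect_circle_at_threshold(chan, t):
    """Steps a)-d), simplified: binarize at threshold t and fit a circle to
    the foreground pixels (centroid + radius of a disc of equal area)."""
    mask = chan > t                      # step a): binarization
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    cx, cy = xs.mean(), ys.mean()        # circle fit: centroid of the blob
    r = np.sqrt(len(xs) / np.pi)         # radius giving the same pixel area
    return (cx, cy, r)

def overlap(c1, c2):
    """Two circles overlap when their centers are closer than the sum of radii."""
    return np.hypot(c1[0] - c2[0], c1[1] - c2[1]) < c1[2] + c2[2]

def select_circle(circles, min_votes=2):
    """Step e): keep circles overlapping at least min_votes others,
    then return the one with the largest radius."""
    voted = [c for c in circles
             if sum(overlap(c, o) for o in circles if o is not c) >= min_votes]
    return max(voted, key=lambda c: c[2]) if voted else None

# Synthetic channel: a bright disc at (40, 30), radius 10, dark background.
yy, xx = np.mgrid[0:96, 0:128]
chan = 255.0 * ((xx - 40) ** 2 + (yy - 30) ** 2 < 10 ** 2)
candidates = [detect_circle_at_threshold(chan, t) for t in (50, 100, 150)]
best = select_circle([c for c in candidates if c is not None])
print(best)
```

The voting step is the robustness mechanism: a genuine optic disc stays bright across many thresholds, so its candidate circles pile up at the same location, while threshold-dependent artifacts do not.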
Highlight circular regions are then detected on the red- and blue-component single-channel images respectively. The positions and radii of the qualifying circular regions detected on the three color-component single-channel images are recorded, and among the circles that overlap the most other circles, the one with the largest radius is taken as the detection result of the highlight circular region in the fundus image. The detected circle position and radius are then scaled up by the image down-sampling factor to obtain the position and radius of the highlight circular region on the original-size image. If no overlapping circular regions appear, the circular region on the green-component single-channel image is taken as the final detection result.
3. Identifying left and right eyes from disc position
After the highlight circular region in the fundus image is obtained, the image is judged to be a left or right eye from the position of the region's center: if the center of the highlight circular region lies to the right of the image center, the image is a right eye; if it lies to the left of the image center, the image is a left eye.
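The final rescaling and decision rule amount to two one-line functions. All numeric values below (disc position, down-sampling factor, image width) are hypothetical:

```python
def rescale_circle(cx, cy, r, scale):
    """Map a circle detected on the down-sampled image back to original
    image coordinates (scale = the down-sampling factor)."""
    return cx * scale, cy * scale, r * scale

def classify_eye(disc_cx, image_width):
    """Right eye if the optic-disc center lies right of the image center,
    left eye otherwise, following the rule stated above."""
    return "right" if disc_cx > image_width / 2 else "left"

# Hypothetical numbers: a disc found at x = 120 on an image down-sampled
# 4x from an original 2400 pixels wide.
cx, cy, r = rescale_circle(120.0, 95.0, 22.0, scale=4)
print(cx, classify_eye(cx, image_width=2400))  # 480.0 left
```

Note the decision only needs the x-coordinate of the disc center, so the rescaling could even be skipped for classification alone; it matters when the disc position is reported on the original image.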
The key points of the method are:
a) scaling, median filtering and maximum filtering of the image;
b) color-component decomposition of the image into single-channel images, with circular highlight region detection performed on the single-channel image of each color component;
c) selection, among the circular highlight regions detected on the single-channel images of the different color components, of the region with the most overlaps as the final detection result.
The method was tested on a large number of doctor-labeled normal and pathological fundus images; the experimental results are shown in Table 1.
TABLE 1 Test results of the method of the invention

                              Doctor-labeled left eye   Doctor-labeled right eye   Accuracy
Software judged left eye      95                        3                          96.9%
Software judged right eye     5                         97                         95.1%
Specificity                   95%                       97%
The core of the present invention is identifying the position of the optic disc in the fundus image; the embodiment above identifies a highlight circular region in the image as the optic disc position. The optic disc position may also be identified by other methods, for example a pattern-recognition-based method: the optic disc positions in the training-set images are marked manually, features such as the color, histogram and gradient of the region where the optic disc is located are extracted, a classifier is trained, and the classifier is then used to locate the optic disc in the test-set images, so that left-eye and right-eye fundus images can be distinguished.
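The patent names color, histogram and gradient features but does not fix their exact form; the sketch below builds one hypothetical feature vector of that kind (per-channel color histograms plus a crude gradient-energy term), which any conventional classifier could then be trained on. The bin count and patch size are assumptions.

```python
import numpy as np

def disc_region_features(patch, bins=8):
    """Hypothetical feature vector for a candidate optic-disc patch
    (H x W x 3, values 0-255): normalized per-channel histograms plus
    mean absolute gradient in x and y as a gradient proxy."""
    feats = []
    for c in range(3):
        hist, _ = np.histogram(patch[..., c], bins=bins, range=(0, 256))
        feats.append(hist / max(patch[..., c].size, 1))   # normalized histogram
    gy, gx = np.gradient(patch.mean(axis=2))              # intensity gradients
    feats.append([np.abs(gx).mean(), np.abs(gy).mean()])  # gradient energy
    return np.concatenate(feats)

patch = np.random.randint(0, 256, (32, 32, 3)).astype(float)
print(disc_region_features(patch).shape)  # (26,)
```

With such fixed-length vectors, training the classifier mentioned in the text reduces to fitting any off-the-shelf model (an SVM, logistic regression, etc.) on labeled disc and non-disc patches.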
The invention may also adopt a deep-learning-based optic disc positioning method: the optic disc position in fundus images is marked manually (for example, by a doctor), local images containing the optic disc are used as positive samples and local images not containing it as negative samples, a deep neural network is trained, and the trained network identifies the optic disc position in an image.
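The network itself is not specified in the patent; what the paragraph does describe concretely is the training-data preparation, sketched below. Patch size, negative-sample count and the exclusion rule are illustrative assumptions.

```python
import numpy as np

def sample_patches(img, disc_xy, patch=64, n_neg=4, rng=None):
    """Cut one positive patch centered on the labeled optic-disc position
    and n_neg random negative patches that avoid it. A sketch of the
    training-data preparation only; the network is not shown."""
    rng = np.random.default_rng(rng)
    h, w = img.shape[:2]
    half = patch // 2
    dx, dy = disc_xy
    pos = img[dy - half:dy + half, dx - half:dx + half]   # positive sample
    negs = []
    while len(negs) < n_neg:
        x = int(rng.integers(half, w - half))
        y = int(rng.integers(half, h - half))
        if abs(x - dx) > patch or abs(y - dy) > patch:    # keep disc out
            negs.append(img[y - half:y + half, x - half:x + half])
    return pos, negs

img = np.zeros((512, 512, 3))
pos, negs = sample_patches(img, disc_xy=(300, 200), rng=0)
print(pos.shape, len(negs))  # (64, 64, 3) 4
```

The positive/negative patches would then feed a standard binary patch classifier; at inference, the network is slid over the image (or applied to region proposals) and the highest-scoring location is taken as the disc position.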
Another embodiment of the present invention provides a device for identifying the left and right eye in fundus images, structured as shown in Fig. 2. Besides basic components such as an input device, an output device, a processor, a RAM and a memory, it comprises an image input module, an optic disc positioning module, and a result output and display module. The image input module is responsible for loading the fundus image; the optic disc positioning module is responsible for detecting the optic disc position in the fundus image by the method described above; and the result output and display module is responsible for judging whether the fundus image is a left eye or a right eye according to the optic disc position and for outputting and displaying the result.
The above embodiments are only intended to illustrate the technical solution of the present invention and not to limit it; a person skilled in the art may modify the technical solution of the present invention or substitute equivalents without departing from its spirit and scope, and the scope of the present invention is determined by the claims.

Claims (10)

1. A method for identifying a left eye and a right eye in a fundus image, comprising the steps of:
1) detecting the optic disc position in the fundus image;
2) judging whether the fundus image is a left eye or a right eye according to the optic disc position.
2. The method according to claim 1, wherein step 1) detects the optic disc position by identifying a highlight circular region in the fundus image, comprising the steps of:
1.1) carrying out down-sampling processing on the image;
1.2) carrying out color decomposition on the down-sampled image to obtain a single-channel image on three components of RGB;
1.3) respectively carrying out median filtering and maximum filtering on the single-channel images on the three components;
1.4) detecting a circular highlight area in a single-channel image on the three components, wherein the circular highlight area is the position of the optic disc.
3. The method according to claim 2, wherein step 1.4) first detects a circular highlight region on the green-component single-channel image, and then detects highlight circular regions on the red- and blue-component single-channel images respectively; the positions and radii of the qualifying circular regions detected on the three color-component single-channel images are recorded, and among the circles that overlap the most other circles, the one with the largest radius is taken as the detection result of the highlight circular region in the fundus image.
4. The method of claim 3, wherein the following steps are used to detect circular highlight regions on a single-channel image:
a) binarizing the image with different gray thresholds, wherein pixels whose gray value is greater than the threshold are set to 1 and pixels below the threshold are set to 0;
b) extracting the contours of the shapes formed by the pixels of value 1 on the binarized image;
c) calculating the area of the region enclosed by each contour and the convexity of the contour;
d) selecting the contours exceeding the set area threshold and convexity threshold, and fitting a circle to each contour curve to obtain its center position and radius;
e) among the circular regions obtained from the images binarized with different gray thresholds, selecting those whose overlap count exceeds a given value, discarding the others, and retaining the circular region with the largest radius.
5. The method of claim 4, wherein the contour extraction in step b) proceeds as follows: the binarized image is traversed from left to right and top to bottom; on meeting the first non-zero pixel, its coordinate is recorded and the point is marked as the start of the contour; among the 8 pixels adjacent to the current pixel, starting from the left neighbor and searching in clockwise order, the next non-zero pixel is taken as the next contour point and its coordinate is recorded; this is repeated until the contour start is reached again, the recorded points forming the contour; the non-zero pixels inside the extracted contour are then set to 0, and the search for the start of the next contour continues left to right, top to bottom, until the lower-right corner of the image.
6. The method according to claim 3, wherein step 1.4) scales the detected circle position and radius up according to the image down-sampling factor to obtain the position and radius of the highlight circular region on the original-size image.
7. The method according to claim 3, wherein in step 1.4), if no overlapping circular regions appear, the circular region on the green-component single-channel image is taken as the final detection result.
8. The method according to claim 1, wherein step 1) detects the optic disc position in the fundus image using a pattern-recognition-based method, in which the optic disc positions in the training-set images are marked manually, features of the region where the optic disc is located are extracted, a classifier is trained, and the classifier is then used to locate the optic disc in the test-set images.
9. The method according to claim 1, wherein step 1) adopts a deep-learning-based method to detect the optic disc position in the fundus image, in which the optic disc positions in fundus images are marked manually, local images containing the optic disc are used as positive samples and local images not containing it as negative samples, a deep neural network is trained, and the trained network is used to identify the optic disc position in an image.
10. A device for identifying the left and right eye in a fundus image, comprising:
an image input module, responsible for loading the fundus image;
an optic disc positioning module, responsible for detecting the optic disc position in the fundus image; and
a result output and display module, responsible for judging whether the fundus image is a left eye or a right eye according to the optic disc position and for outputting and displaying the result.
CN201810534260.5A 2018-05-29 2018-05-29 Method and device for identifying left eye and right eye in fundus image Pending CN110543802A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810534260.5A CN110543802A (en) 2018-05-29 2018-05-29 Method and device for identifying left eye and right eye in fundus image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810534260.5A CN110543802A (en) 2018-05-29 2018-05-29 Method and device for identifying left eye and right eye in fundus image

Publications (1)

Publication Number Publication Date
CN110543802A true CN110543802A (en) 2019-12-06

Family

ID=68701136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810534260.5A Pending CN110543802A (en) 2018-05-29 2018-05-29 Method and device for identifying left eye and right eye in fundus image

Country Status (1)

Country Link
CN (1) CN110543802A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170112372A1 (en) * 2015-10-23 2017-04-27 International Business Machines Corporation Automatically detecting eye type in retinal fundus images
CN106651827A (en) * 2016-09-09 2017-05-10 浙江大学 Fundus image registering method based on SIFT characteristics
CN107292877A (en) * 2017-07-05 2017-10-24 北京至真互联网技术有限公司 A kind of right and left eyes recognition methods based on eye fundus image feature
CN107292835A (en) * 2017-05-31 2017-10-24 瑞达昇科技(大连)有限公司 A kind of method and device of eye fundus image retinal vessel Automatic Vector
TW201740871A (en) * 2016-05-19 2017-12-01 施秉宏 Method for reconstructing fundus image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邹北骥; 张思剑; 朱承璋: "Automatic localization and segmentation of the optic disc in color fundus images", Optics and Precision Engineering (光学精密工程), no. 04, pages 1187-1193 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292296A (en) * 2020-01-20 2020-06-16 京东方科技集团股份有限公司 Training set acquisition method and device based on eye recognition model
CN112101438A (en) * 2020-09-08 2020-12-18 南方科技大学 Left and right eye classification method, device, server and storage medium
CN112101438B (en) * 2020-09-08 2024-04-16 南方科技大学 Left-right eye classification method, device, server and storage medium

Similar Documents

Publication Publication Date Title
CN109816644B (en) Bearing defect automatic detection system based on multi-angle light source image
CN109190690B (en) Method for detecting and identifying cerebral microhemorrhage points based on SWI image of machine learning
Zhu et al. Detection of the optic disc in images of the retina using the Hough transform
WO2016091016A1 (en) Nucleus marker watershed transformation-based method for splitting adhered white blood cells
CN109961426B (en) Method for detecting skin of human face
CN108186051B (en) Image processing method and system for automatically measuring double-apical-diameter length of fetus from ultrasonic image
CN104794721B (en) A kind of quick optic disk localization method based on multiple dimensioned spot detection
CN107248161A (en) Retinal vessel extracting method is supervised in a kind of having for multiple features fusion
NL2024774B1 (en) Blood leukocyte segmentation method based on adaptive histogram thresholding and contour detection
CN108961280B (en) Fundus optic disc fine segmentation method based on SLIC super-pixel segmentation
CN106408566B (en) A kind of fetal ultrasound image quality control method and system
Jaafar et al. Automated detection of red lesions from digital colour fundus photographs
CN108378869B (en) Image processing method and processing system for automatically measuring head circumference length of fetus from ultrasonic image
CN105389581B (en) A kind of rice germ plumule integrity degree intelligent identifying system and its recognition methods
US9898824B2 (en) Method for volume evaluation of penumbra mismatch in acute ischemic stroke and system therefor
CN110866932A (en) Multi-channel tongue edge detection device and method and storage medium
Maji et al. An automated method for counting and characterizing red blood cells using mathematical morphology
WO2022198898A1 (en) Picture classification method and apparatus, and device
Vo et al. Discriminant color texture descriptors for diabetic retinopathy recognition
CN111798408B (en) Endoscope interference image detection and classification system and method
CN110543802A (en) Method and device for identifying left eye and right eye in fundus image
CN111222371A (en) Sublingual vein feature extraction device and method
CN106372593B (en) Optic disk area positioning method based on vascular convergence
CN117197064A (en) Automatic non-contact eye red degree analysis method
CN115937085B (en) Nuclear cataract image processing method based on neural network learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination