WO2020125319A1 - Glaucoma image recognition method, device and screening system - Google Patents

Glaucoma image recognition method, device and screening system

Info

Publication number
WO2020125319A1
WO2020125319A1 (PCT/CN2019/120215; CN2019120215W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
fundus
disc
glaucoma
partial
Prior art date
Application number
PCT/CN2019/120215
Other languages
English (en)
French (fr)
Inventor
赵昕
陈李健
黄烨霖
熊健皓
张大磊
Original Assignee
上海鹰瞳医疗科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海鹰瞳医疗科技有限公司
Priority to EP19898889.1A (EP3901816A4)
Publication of WO2020125319A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/193: Preprocessing; Feature extraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06T7/0012: Biomedical image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30004: Biomedical image processing
    • G06T2207/30041: Eye; Retina; Ophthalmic
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03: Recognition of patterns in medical or anatomical images

Definitions

  • The invention relates to the field of eye examination equipment, and in particular to a glaucoma image recognition method, device, and screening system.
  • Glaucoma is an irreversible blinding fundus disease. In screening or clinical diagnosis, doctors can judge whether the examinee may have glaucoma by observing the fundus image, and make recommendations on whether further examination or medical treatment is needed.
  • In clinical diagnosis, an ophthalmologist can make a judgment by observing the optic cup and optic disc in the fundus image. For example, if the optic cup is too large, so that the cup-to-disc ratio is too large, the examinee is likely to have glaucoma; the cup-to-disc ratio is generally the ratio of the vertical diameters of the optic cup and the optic disc.
  • However, estimating the cup-to-disc ratio or the disc edge shape by eye or with the aid of imaging equipment is highly subjective and lacks objective data, which results in inaccurate judgments and consumes a great deal of time and energy.
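As a hedged illustration of the vertical cup-to-disc ratio mentioned above, the sketch below computes it from hypothetical binary masks of the optic cup and optic disc; the mask representation and function names are assumptions for illustration, not part of the patent:

```python
import numpy as np

def vertical_diameter(mask: np.ndarray) -> int:
    """Vertical extent, in pixels, of the foreground (non-zero) region of a binary mask."""
    rows = np.any(mask > 0, axis=1)
    if not rows.any():
        return 0
    idx = np.flatnonzero(rows)
    return int(idx[-1] - idx[0] + 1)

def vertical_cup_to_disc_ratio(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Ratio of the vertical diameters of the optic cup and the optic disc."""
    disc_d = vertical_diameter(disc_mask)
    return vertical_diameter(cup_mask) / disc_d if disc_d else 0.0
```

A larger ratio (e.g. above roughly 0.6 in clinical practice) is what the text describes as "the optic cup is too large"; the threshold itself is not specified by the patent.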
  • In view of this, the present invention provides a glaucoma image recognition method, including the following steps:
  • obtaining a fundus image; extracting a partial image from the fundus image, the partial image including the optic disc and the fundus background; obtaining an optic disc image and an optic cup image according to the fundus image or the partial image; obtaining a disc edge image according to the optic disc image and the optic cup image; and judging, based on the disc edge image, the partial image, and the fundus image, whether the fundus image is a glaucoma image.
  • Optionally, extracting a partial image from the fundus image includes:
  • identifying a local area from the fundus image using a first machine learning model, the local area including the optic disc and the fundus background; extracting the local area to form the partial image, the partial image having the same colors as the fundus image.
  • Optionally, obtaining the optic disc image and the optic cup image according to the fundus image or the partial image includes:
  • identifying the optic disc image from the partial image using a second machine learning model; identifying the optic cup image from the partial image using a third machine learning model.
  • the optic disc image and the optic cup image are binary images.
  • Optionally, the disc edge image includes the background area outside the optic disc, the optic cup area, and the disc edge area, and they are identified by different gray values.
  • the judging whether the fundus image is a glaucoma image based on the disc edge image, the partial image, and the fundus image includes:
  • a fourth machine learning model is used to identify the disc edge image, the partial image, and the fundus image, and output a glaucoma image judgment result.
  • Optionally, the fourth machine learning model includes a first feature extraction unit, a second feature extraction unit, a third feature extraction unit, a feature fusion unit, and a judgment unit; recognizing the disc edge image, the partial image, and the fundus image using the fourth machine learning model and outputting the glaucoma image judgment result includes:
  • extracting a first feature from the disc edge image using the first feature extraction unit; extracting a second feature from the partial image using the second feature extraction unit; extracting a third feature from the fundus image using the third feature extraction unit; forming a fused feature from the first feature, the second feature, and the third feature using the feature fusion unit;
  • outputting the glaucoma image judgment result according to the fused feature using the judgment unit.
  • Correspondingly, the present invention provides a glaucoma image recognition device, including:
  • an obtaining unit for obtaining a fundus image; a local recognition unit for extracting a partial image from the fundus image, the partial image including the optic disc and the fundus background;
  • an area recognition unit for obtaining an optic disc image and an optic cup image based on the fundus image;
  • a disc edge determination unit for obtaining a disc edge image based on the optic disc image and the optic cup image;
  • a glaucoma recognition unit for judging, based on the disc edge image, the partial image, and the fundus image, whether the fundus image is a glaucoma image.
  • Correspondingly, the present invention also provides a glaucoma image recognition device, comprising: at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to cause the at least one processor to perform the glaucoma image recognition method described above.
  • The invention also provides a glaucoma disease screening system, including:
  • a fundus camera device for taking fundus images; and the above glaucoma image recognition device.
  • According to the glaucoma image recognition method provided by the present invention, a partial image is first obtained from the fundus image, removing most of the fundus background; an optic disc image and an optic cup image are further extracted, and a disc edge image is obtained from these two images. Finally, the disc edge image, the partial image, and the fundus image are recognized, and the global features, local features, and disc edge features are combined to determine whether the fundus image is a glaucoma image.
  • This solution obtains the glaucoma judgment result based on image data and an objective algorithm, saving human resources and effectively assisting doctors or experts in diagnosing glaucoma.
  • FIG. 1 is a flowchart of a glaucoma image recognition method in an embodiment of the present invention
  • FIG. 2 is a flowchart of a specific glaucoma image recognition method in an embodiment of the present invention
  • FIG. 3 is a cropped fundus image in an embodiment of the present invention;
  • FIG. 4 is a partial image containing the optic disc in an embodiment of the present invention;
  • FIG. 5 is a sample image in an embodiment of the present invention;
  • FIG. 6 is a schematic diagram of obtaining a binary image of the optic disc in an embodiment of the present invention;
  • FIG. 7 is a schematic diagram of obtaining a binary image of the optic cup in an embodiment of the present invention;
  • FIG. 8 is a disc edge image in an embodiment of the present invention;
  • FIG. 9 is another disc edge image in an embodiment of the present invention;
  • FIG. 10 is a schematic diagram of obtaining a colored disc edge image in an embodiment of the present invention;
  • FIG. 11 is a schematic structural diagram of a glaucoma image recognition device in an embodiment of the present invention.
  • FIG. 12 is a schematic structural diagram of a machine learning model in an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of a glaucoma disease screening system in an embodiment of the present invention.
  • the present invention provides a glaucoma image recognition method, which can be executed by electronic devices such as computers, servers, or portable terminals. As shown in Figure 1, the method includes the following steps:
  • S1A: obtaining a fundus image. The fundus image is usually a color image, but in embodiments of the present invention a single-channel grayscale image, or even a binary image, can also be used.
  • S2A: extracting a partial image from the fundus image, where the partial image includes the optic disc and the fundus background.
  • the proportion of the optic disc in the partial image is greater than the proportion of the optic disc in the original fundus image.
  • the partial image may include the optic disc and a small portion of the fundus background content.
  • the shape of the image may be a set regular shape, such as a square or circular image. This step can remove most of the fundus background content.
  • S3A: obtaining the optic disc image and the optic cup image according to the fundus image or the partial image.
  • There are various specific extraction methods. For example, based on machine vision principles, the optic disc and optic cup regions can be searched for and extracted according to pixel-value features to form images; alternatively, artificial intelligence algorithms can be used, identifying and extracting these regions with a trained machine learning model to form images.
  • the partial image is a part of the fundus image. Extracting the disc image and the cup image based on the partial image can improve the recognition efficiency, but it is also feasible to recognize the disc image and the cup image based on the global fundus image.
  • S4A: obtaining a disc edge image according to the optic disc image and the optic cup image. The optic cup area lies within the optic disc area.
  • In one embodiment, the optic cup area can be removed from the optic disc area, usually yielding an image of a ring-shaped area.
  • The disc edge area may be presented as an image showing only the disc edge area, for example a ring-shaped area on a single-color background.
  • In another embodiment, the optic disc area and the optic cup area can also be retained in the disc edge image, with the disc edge area marked.
  • S5A: judging, based on the disc edge image, the partial image, and the fundus image, whether the fundus image is a glaucoma image. In one embodiment, based on machine vision principles, the morphological features of the disc edge can be extracted from each of the three images and the features of the three images combined to reach a final conclusion; alternatively, features can be extracted from the three images separately to obtain three conclusions, which are then combined into a final conclusion.
  • an artificial intelligence algorithm may be used to recognize these three images using a trained machine learning model and output the recognition results.
  • Usually, the orientation of the acquired fundus image is consistent with the orientation of the human body: the top and bottom of the image correspond to the top and bottom of the body, and the two sides of the image are the nasal and temporal sides (the left-eye and right-eye images are mirrored). If the acquired image has an unusual angle, the image angle can be adjusted after step S1 to make it consistent with the body orientation.
  • In practical applications, the output of step S5A may be information expressing the likelihood of having glaucoma, such as percentage information; it may also be conclusive information such as negative or positive. This information can serve as a basis for doctors to judge glaucoma disease.
  • According to the glaucoma image recognition method provided by this embodiment of the present invention, a partial image is first obtained from the fundus image, removing most of the fundus background; an optic disc image and an optic cup image are further extracted, and a disc edge image is obtained from these two images. Finally, the disc edge image, the partial image, and the fundus image are recognized, and the global features, local features, and disc edge features are combined to determine whether the fundus image is a glaucoma image.
  • This solution obtains the glaucoma judgment result based on image data and an objective algorithm, saving human resources and effectively assisting doctors or experts in diagnosing glaucoma.
  • An embodiment of the present invention provides a specific glaucoma image recognition method. In this method, machine learning models are used to recognize images.
  • The machine learning models may be various types of neural networks. As shown in FIG. 2, the method includes the following steps:
  • S1B: obtaining a fundus photograph taken by a fundus camera device. It is usually an image with a black background, and the background may also contain some text information.
  • S2B: cropping the fundus photograph to obtain a fundus image whose edges just contain the circular fundus area; as shown in FIG. 3, the four sides of the cropped image each intersect the edge of the fundus area. This cropping operation is an optimization for the subsequent recognition of the image by the machine learning model. In other embodiments, cropping may be omitted, or more content may be cropped away, as long as at least the complete optic disc area is retained.
  • S3B: using the first machine learning model to identify a partial image containing the optic disc from the fundus image. The proportion of the optic disc in the partial image is greater than the proportion of the optic disc in the original fundus image.
  • the partial image may include the optic disc and a small portion of the fundus background content.
  • the shape of the image may be a set regular shape, such as a square or circular image. This step can remove most of the fundus background content and yields a partial image dominated by the optic disc, as shown in FIG. 4.
  • Before a machine learning model is used for recognition, it should be trained with training data. Regarding the training process of the first machine learning model, an embodiment of the present invention provides a preferred model training solution.
  • In the training phase, the effective area including the optic disc is first manually annotated in the fundus images to obtain training data; for example, the dotted-line box shown in FIG. 5 is the annotated content.
  • The form in which this annotation box enters the machine learning model is (x, y, height, width), where x and y are the coordinates of the top-left corner of the annotation box in the image, and height and width are the height and width of the annotation box, respectively.
  • A large number of fundus images together with their annotation boxes are input into the model for training.
  • Through learning, the model can predict the position of the effective area including the optic disc and output the result in the same form as the annotation box.
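Given a predicted box in the (x, y, height, width) form described above, extracting the partial image is a plain array slice. The sketch below assumes, for illustration, that (x, y) are the (column, row) pixel coordinates of the top-left corner; the function name is hypothetical:

```python
import numpy as np

def crop_partial_image(fundus: np.ndarray, box: tuple) -> np.ndarray:
    """Crop the detected optic-disc region out of a fundus image.

    `box` follows the annotation form (x, y, height, width), with (x, y)
    taken here as the (column, row) coordinates of the top-left corner
    (an assumption for illustration).
    """
    x, y, height, width = box
    return fundus[y:y + height, x:x + width]
```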
  • An existing deep detection model, such as SSD, YOLO, or Faster-RCNN, may be used as the first machine learning model, or a deep network model may be custom-designed.
  • In this embodiment, the partial image has the same colors as the original fundus image, this being a process of detecting and cropping the image; in other embodiments, an image with transformed color channels, such as a grayscale image, may also be obtained.
  • S4B: preprocessing the partial image to enhance pixel features.
  • Specifically, the contrast-limited adaptive histogram equalization algorithm (CLAHE) can be used to enhance the partial image.
  • This step makes the features in the image more prominent; after preprocessing, the contours of the optic disc and the optic cup are easier to find when recognizing the image, thereby improving recognition accuracy and efficiency.
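The patent names CLAHE for this enhancement step (available in OpenCV as `cv2.createCLAHE`). As a dependency-free sketch of the underlying remapping idea, the simpler global histogram equalization below can stand in; CLAHE additionally works per local tile and clips the histogram to limit noise amplification:

```python
import numpy as np

def equalize_hist(gray: np.ndarray) -> np.ndarray:
    """Global histogram equalization of an 8-bit grayscale image.

    A simplified stand-in for CLAHE: CLAHE applies the same CDF-based
    remapping per local tile, with histogram clipping.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]              # first non-zero CDF value
    denom = cdf[-1] - cdf_min
    if denom == 0:                         # constant image: nothing to stretch
        return gray.copy()
    lut = np.clip(np.round((cdf - cdf_min) / denom * 255.0), 0, 255).astype(np.uint8)
    return lut[gray]                       # remap every pixel through the LUT
```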
  • S5B: using the second machine learning model to identify the optic disc image from the partial image, and the third machine learning model to identify the optic cup image from the partial image. This step produces a more precise region segmentation result: the optic disc and optic cup contours in the image match the actual contours in the human eye and are usually irregular shapes. Recognition yields the optic disc image shown in FIG. 6 and the optic cup image shown in FIG. 7. Regarding the training process of the second and third machine learning models, an embodiment of the present invention provides a preferred model training solution.
  • Specifically, during training the optic disc is manually and precisely annotated, and a fill mask as shown in FIG. 6 is then generated from the manually annotated contour, in which white is the optic disc area and black is the background.
  • The cropped optic disc area and the corresponding mask are input into the model together for training.
  • Through learning, the model recognizes the optic disc area and segments it out. The annotation and segmentation of the optic cup follow the same steps.
  • Embodiments of the present invention may use existing deep models, such as U-Net, Mask R-CNN, or DeepLabV3, as the second and third machine learning models, or a custom-designed deep segmentation model.
  • In this embodiment, the second machine learning model outputs a binary image of the optic disc, in which the background gray value is 0 and the gray value of the optic disc region is 255; the third machine learning model outputs a binary image of the optic cup, in which the background gray value is 0 and the gray value of the optic cup region is 255.
  • This is a preferred processing method that facilitates the subsequent cropping of the disc edge image.
  • In other embodiments, an image with the same colors as the original fundus image may also be output.
  • S6B: obtaining the disc edge image from the optic disc binary image and the optic cup binary image. In this embodiment, the disc edge image includes the background area outside the optic disc, the optic cup area, and the disc edge area, and they are identified by different gray values. For example, combining the binary images in FIGS. 6 and 7 yields the disc edge image shown in FIG. 8, in which the gray value of the background area is 0, the gray value of the optic cup area is 255, and the disc edge area, i.e. the ring-shaped part of the optic disc area not covered by the optic cup area, may for example be set to a gray value of 128. In actual processing, any three clearly distinguishable gray values can be used to identify these three areas; the values are not limited to those in the above example.
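The combination described in this step can be sketched with numpy; the gray values (0, 128, 255) follow the example above, while the function name is an illustrative assumption:

```python
import numpy as np

def disc_edge_image(disc_mask: np.ndarray, cup_mask: np.ndarray,
                    bg: int = 0, rim: int = 128, cup: int = 255) -> np.ndarray:
    """Combine the optic disc and optic cup binary masks (0/255) into a
    disc edge image with three distinguishable gray values: background,
    disc edge (the part of the disc not covered by the cup), and cup."""
    out = np.full(disc_mask.shape, bg, dtype=np.uint8)
    out[disc_mask > 0] = rim   # paint the whole disc first...
    out[cup_mask > 0] = cup    # ...then overwrite the cup area
    return out
```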
  • In another embodiment, the disc edge image may also contain only the disc edge area and the background area; for example, subtracting the binary image in FIG. 7 from the binary image in FIG. 6 yields the binary image shown in FIG. 9, in which the white ring is the disc edge area.
  • The disc edge image can further be cropped from the fundus image according to the disc edge binary image. The disc edge binary image shown in FIG. 9 provides the cropping position and range, from which the disc edge image shown in FIG. 10 can be cropped out of the original fundus image or the partial image from S3B; this step serves to obtain the colors of the original fundus image.
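The subtract-then-crop operation described here can be sketched as follows; the function name and the black background outside the ring are assumptions for illustration:

```python
import numpy as np

def colored_disc_edge(fundus_rgb: np.ndarray, disc_mask: np.ndarray,
                      cup_mask: np.ndarray) -> np.ndarray:
    """Subtract the cup mask from the disc mask to get the ring-shaped
    disc edge binary mask, then use the ring to cut the correspondingly
    colored pixels out of the original (or partial) fundus image."""
    ring = (disc_mask > 0) & (cup_mask == 0)   # disc minus cup
    out = np.zeros_like(fundus_rgb)
    out[ring] = fundus_rgb[ring]               # keep original colors on the rim only
    return out
```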
  • In another embodiment, the optic disc image and the optic cup image are grayscale or color images, and the two grayscale or color images can also be subtracted to directly obtain a disc edge image in the corresponding colors.
  • S7B: using the fourth machine learning model to recognize the disc edge image, the partial image, and the fundus image, and outputting the glaucoma image judgment result.
  • In this embodiment, the inputs to the model are the partial image (which may be the enhanced partial image on the left side of FIG. 6 or FIG. 7, or the unenhanced image shown in FIG. 4), the global fundus image (the cropped image shown in FIG. 3 or the original fundus photograph), and the disc edge image (as shown in FIG. 8, 9, or 10).
  • An existing deep recognition model can be used as the fourth machine learning model; for example, existing models such as InceptionV3 or ResNet can be used, or a deep recognition model can be custom-designed.
  • the fourth machine learning model in this embodiment includes a first feature extraction unit, a second feature extraction unit, a third feature extraction unit, a feature fusion unit, and a determination unit.
  • step S8 includes the following steps:
  • extracting a first feature from the disc edge image using the first feature extraction unit; extracting a second feature from the partial image using the second feature extraction unit; extracting a third feature from the fundus image using the third feature extraction unit; forming a fused feature from the three features using the feature fusion unit; and outputting the glaucoma image judgment result according to the fused feature using the judgment unit.
  • During training, the global fundus images, optic disc partial images, and disc edge images of glaucoma images and of various non-glaucoma images are input repeatedly, so that the model learns the difference between the two.
  • The output result can be of two types, namely negative or positive (yes or no); it can also be percentage (probability) information, such as the probability of being a glaucoma image or the probability of not being a glaucoma image.
  • According to the glaucoma image recognition method provided by this embodiment, the captured fundus photograph is first cropped to remove interfering content, so that the machine learning model can more accurately segment, from the larger fundus image, a relatively small partial image centered on the optic disc; two machine learning models then recognize the partial image separately, so that the binary image of the optic disc area and the binary image of the optic cup area can be output accurately, and combining the binary images yields the disc edge image efficiently and accurately; in the final recognition process, the machine learning model takes the features of all three images into account to obtain the recognition result, thereby improving the accuracy of the glaucoma image judgment.
  • Correspondingly, an embodiment of the present invention also provides a glaucoma image recognition device. As shown in FIG. 11, the device includes:
  • an obtaining unit 111, used to obtain a fundus image;
  • a partial recognition unit 112, used to extract a partial image from the fundus image, where the partial image includes the optic disc and the fundus background;
  • an area recognition unit 113, used to obtain an optic disc image and an optic cup image based on the fundus image;
  • a disc edge determining unit 114, used to obtain a disc edge image based on the optic disc image and the optic cup image;
  • a glaucoma recognition unit 115, used to determine whether the fundus image is a glaucoma image based on the disc edge image, the partial image, and the fundus image.
  • the partial recognition unit 112 includes:
  • a first machine learning model used to identify a local area from the fundus image, where the local area includes the optic disc and the fundus background;
  • an image cropping unit, used to extract the local area to form the partial image, the partial image having the same colors as the fundus image.
  • the area recognition unit 113 includes:
  • a second machine learning model, used to identify the optic disc image from the partial image;
  • a third machine learning model, used to identify the optic cup image from the partial image.
  • the glaucoma recognition unit 115 includes:
  • a fourth machine learning model is used to identify the disc edge image, the partial image, and the fundus image, and output a glaucoma image judgment result.
  • the fourth machine learning model includes:
  • a first feature extraction unit used to extract a first feature from the disc edge image
  • a second feature extraction unit used to extract a second feature from the partial image
  • a third feature extraction unit configured to extract a third feature from the fundus image
  • a feature fusion unit configured to form a fusion feature according to the first feature, the second feature, and the third feature
  • the judging unit is used to output a glaucoma image judgment result according to the fusion feature.
  • the feature extraction unit may be a convolutional neural network
  • the feature fusion unit may be a neural network with fully connected layers
  • the determination unit may be a neural network or a classifier.
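A minimal numpy sketch of this structure follows, with untrained random projections standing in for the convolutional feature extraction units; all names, dimensions, and seeds are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def extract_features(img: np.ndarray, dim: int = 8, seed: int = 0) -> np.ndarray:
    """Stand-in for a convolutional feature extraction unit: a fixed
    random projection of the flattened image (a real system would use a CNN)."""
    flat = img.ravel().astype(np.float64)
    w = np.random.default_rng(seed).standard_normal((dim, flat.size))
    return (w @ flat) / flat.size

def fuse_and_classify(disc_edge_img, partial_img, fundus_img) -> float:
    """Mirror the fourth model's structure: three feature extraction units,
    a feature fusion unit (concatenation plus a fully connected layer),
    and a judgment unit (sigmoid) producing a glaucoma probability."""
    f1 = extract_features(disc_edge_img, seed=1)
    f2 = extract_features(partial_img, seed=2)
    f3 = extract_features(fundus_img, seed=3)
    fused = np.concatenate([f1, f2, f3])                       # feature fusion
    w = np.random.default_rng(4).standard_normal(fused.size)   # untrained weights
    logit = float(w @ fused)
    return 1.0 / (1.0 + np.exp(-logit))                        # probability in (0, 1)
```

In a trained system the projection and fusion weights would be learned end to end; the structural point is that the three inputs are encoded separately and only then fused for the final decision.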
  • An embodiment of the present invention further provides an electronic device, including: at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor executes the glaucoma image recognition method in the foregoing embodiments.
  • An embodiment of the present invention also provides a glaucoma disease screening system, as shown in FIG. 13, including:
  • a fundus camera 131, used to capture the user's fundus image; and
  • a glaucoma image recognition device 132, used to perform the glaucoma image recognition method in the above embodiments.
  • The embodiments of the present invention may be provided as methods, systems, or computer program products. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • Each flow and/or block in the flowchart and/or block diagram, and combinations of flows and/or blocks in the flowchart and/or block diagram, may be implemented by computer program instructions.
  • These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

A glaucoma image recognition method, device, and screening system. The method includes the following steps: obtaining a fundus image (S1A); extracting a partial image from the fundus image, the partial image including the optic disc and the fundus background (S2A); obtaining an optic disc image and an optic cup image according to the fundus image or the partial image (S3A); obtaining a disc edge image according to the optic disc image and the optic cup image (S4A); and judging, based on the disc edge image, the partial image, and the fundus image, whether the fundus image is a glaucoma image (S5A).

Description

Glaucoma image recognition method, device, and screening system
Technical Field
The present invention relates to the field of eye examination equipment, and in particular to a glaucoma image recognition method, device, and screening system.
Background
Glaucoma is an irreversible blinding fundus disease. In screening or clinical diagnosis, a doctor can judge, by observing a fundus image, whether the examinee may have glaucoma, and make a recommendation on whether further examination or medical treatment is needed.
In clinical diagnosis, an ophthalmologist can make a judgment by observing the optic cup and optic disc in the fundus image. For example, if the optic cup is too large, so that the cup-to-disc ratio is too large, the examinee is likely to have glaucoma; the cup-to-disc ratio is generally the ratio of the vertical diameters of the optic cup and the optic disc.
However, estimating the cup-to-disc ratio or the disc edge shape by eye or with the aid of imaging equipment is highly subjective and lacks objective data, so the results are not accurate enough, and the process consumes a great deal of time and energy.
Summary of the Invention
In view of this, the present invention provides a glaucoma image recognition method, including the following steps:
obtaining a fundus image;
extracting a partial image from the fundus image, the partial image including the optic disc and the fundus background;
obtaining an optic disc image and an optic cup image according to the fundus image or the partial image;
obtaining a disc edge image according to the optic disc image and the optic cup image;
judging, based on the disc edge image, the partial image, and the fundus image, whether the fundus image is a glaucoma image.
Optionally, extracting a partial image from the fundus image includes:
identifying a local area from the fundus image using a first machine learning model, the local area including the optic disc and the fundus background;
extracting the local area to form the partial image, the partial image having the same colors as the fundus image.
Optionally, obtaining an optic disc image and an optic cup image according to the fundus image or the partial image includes:
identifying the optic disc image from the partial image using a second machine learning model;
identifying the optic cup image from the partial image using a third machine learning model.
Optionally, the optic disc image and the optic cup image are both binary images.
Optionally, the disc edge image includes the background area outside the optic disc, the optic cup area, and the disc edge area, and they are identified by different gray values.
Optionally, judging, based on the disc edge image, the partial image, and the fundus image, whether the fundus image is a glaucoma image includes:
recognizing the disc edge image, the partial image, and the fundus image using a fourth machine learning model, and outputting a glaucoma image judgment result.
Optionally, the fourth machine learning model includes a first feature extraction unit, a second feature extraction unit, a third feature extraction unit, a feature fusion unit, and a judgment unit; recognizing the disc edge image, the partial image, and the fundus image using the fourth machine learning model and outputting a glaucoma image judgment result includes:
extracting a first feature from the disc edge image using the first feature extraction unit;
extracting a second feature from the partial image using the second feature extraction unit;
extracting a third feature from the fundus image using the third feature extraction unit;
forming a fused feature from the first feature, the second feature, and the third feature using the feature fusion unit;
outputting the glaucoma image judgment result according to the fused feature using the judgment unit.
Correspondingly, the present invention provides a glaucoma image recognition device, including:
an obtaining unit for obtaining a fundus image;
a local recognition unit for extracting a partial image from the fundus image, the partial image including the optic disc and the fundus background;
an area recognition unit for obtaining an optic disc image and an optic cup image according to the fundus image;
a disc edge determination unit for obtaining a disc edge image according to the optic disc image and the optic cup image;
a glaucoma recognition unit for judging, based on the disc edge image, the partial image, and the fundus image, whether the fundus image is a glaucoma image.
Correspondingly, the present invention also provides a glaucoma image recognition device, including: at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to cause the at least one processor to perform the glaucoma image recognition method described above.
The present invention also provides a glaucoma disease screening system, including:
a fundus camera device for taking fundus images; and the above glaucoma image recognition device.
According to the glaucoma image recognition method provided by the present invention, a partial image is first obtained from the fundus image, removing most of the fundus background; an optic disc image and an optic cup image are further extracted, and a disc edge image is obtained from these two images. Finally, the disc edge image, the partial image, and the fundus image are recognized, and the global features, local features, and disc edge features are combined to judge whether the fundus image is a glaucoma image. This solution obtains the glaucoma judgment result based on image data and an objective algorithm, saving human resources and effectively assisting doctors or experts in diagnosing glaucoma.
Brief Description of the Drawings
In order to explain the specific embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of a glaucoma image recognition method in an embodiment of the present invention;
FIG. 2 is a flowchart of a specific glaucoma image recognition method in an embodiment of the present invention;
FIG. 3 is a cropped fundus image in an embodiment of the present invention;
FIG. 4 is a partial image containing the optic disc in an embodiment of the present invention;
FIG. 5 is a sample image in an embodiment of the present invention;
FIG. 6 is a schematic diagram of obtaining a binary image of the optic disc in an embodiment of the present invention;
FIG. 7 is a schematic diagram of obtaining a binary image of the optic cup in an embodiment of the present invention;
FIG. 8 is a disc edge image in an embodiment of the present invention;
FIG. 9 is another disc edge image in an embodiment of the present invention;
FIG. 10 is a schematic diagram of obtaining a colored disc edge image in an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a glaucoma image recognition device in an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of a machine learning model in an embodiment of the present invention;
FIG. 13 is a schematic structural diagram of a glaucoma disease screening system in an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are some rather than all of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
In the description of the present invention, it should be noted that orientation or position terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", and "outer" indicate orientations or positional relationships based on the drawings; they are used only for convenience and simplification of the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore cannot be understood as limiting the present invention. In addition, the terms "first", "second", "third", and "fourth" are used for description only and cannot be understood as indicating or implying relative importance.
In addition, the technical features involved in the different embodiments of the present invention described below can be combined with each other as long as they do not conflict.
The present invention provides a glaucoma image recognition method, which can be executed by an electronic device such as a computer, a server, or a portable terminal. As shown in FIG. 1, the method includes the following steps:
S1A, obtaining a fundus image. The fundus image is usually a color image, but in embodiments of the present invention a single-channel grayscale image, or even a binary image, can also be used.
S2A, extracting a partial image from the fundus image, the partial image including the optic disc and the fundus background. The proportion of the optic disc in the partial image is greater than its proportion in the original fundus image; the partial image may contain the optic disc and a small part of the fundus background, and the shape of the image may be a set regular shape, such as a square or circular image. This step removes most of the fundus background content.
S3A, obtaining an optic disc image and an optic cup image according to the fundus image or the partial image. There are various specific extraction methods. For example, based on machine vision principles, the optic disc and optic cup regions can be searched for and extracted according to pixel-value features to form images; alternatively, artificial intelligence algorithms can be used, identifying and extracting these regions with a trained machine learning model to form images.
The partial image is a part of the fundus image. Extracting the optic disc image and the optic cup image from the partial image can improve recognition efficiency, but recognizing the optic disc image and the optic cup image from the global fundus image is also feasible.
S4A, obtaining a disc edge image according to the optic disc image and the optic cup image. The optic cup area lies within the optic disc area; in one embodiment, the optic cup area can be removed from the optic disc area, usually yielding an image of a ring-shaped area. The disc edge area may be presented as an image showing only the disc edge area, for example a ring-shaped area on a single-color background.
In another embodiment, the optic disc area and the optic cup area may also be retained in the disc edge image, with the disc edge area marked.
S5A, judging, based on the disc edge image, the partial image, and the fundus image, whether the fundus image is a glaucoma image. In one embodiment, based on machine vision principles, the morphological features of the disc edge can be extracted from each of the three images and the features of the three images combined to reach a final conclusion; alternatively, features can be extracted from the three images separately to obtain three conclusions, which are then combined into a final conclusion. In another embodiment, artificial intelligence algorithms can be used: the three images are recognized with a trained machine learning model, which outputs the recognition result.
Usually, the orientation of the acquired fundus image is consistent with the orientation of the human body: the top and bottom of the image correspond to the top and bottom of the body, and the two sides of the image are the nasal and temporal sides (the left-eye and right-eye images are mirrored). If the acquired image has an unusual angle, the image angle can be adjusted after step S1 to make it consistent with the body orientation.
In practical applications, the output of step S5A may be information expressing the likelihood of having glaucoma, such as percentage information; it may also be conclusive information such as negative or positive. This information can serve as a basis for doctors to judge glaucoma disease.
According to the glaucoma image recognition method provided by the embodiment of the present invention, a partial image is first obtained from the fundus image, removing most of the fundus background; an optic disc image and an optic cup image are further extracted, and a disc edge image is obtained from these two images. Finally, the disc edge image, the partial image, and the fundus image are recognized, and the global features, local features, and disc edge features are combined to judge whether the fundus image is a glaucoma image. This solution obtains the glaucoma judgment result based on image data and an objective algorithm, saving human resources and effectively assisting doctors or experts in diagnosing glaucoma.
An embodiment of the present invention provides a specific glaucoma image recognition method, in which machine learning models are used to recognize images; the machine learning models may be various types of neural networks. As shown in Figure 2, the method includes the following steps:
S1B, acquiring a fundus photograph taken by a fundus camera. This is generally an image with a black background, which may also contain some textual information.
S2B, cropping the fundus photograph to obtain a fundus image whose edges exactly enclose the circular fundus region. As shown in Figure 3, the four sides of the cropped image each intersect the edge of the fundus region. This cropping is an optimization for the subsequent recognition of the image by a machine learning model; in other embodiments cropping may be omitted, or more content may be cropped away, as long as the complete optic disc region is retained.
S3B, using a first machine learning model to recognize a partial image containing the optic disc from the fundus image. The proportion of the optic disc in the partial image is larger than its proportion in the original fundus image; the partial image may contain the optic disc and a small amount of fundus background, and its shape may be a preset regular shape, such as a square or circular image. This step removes most of the fundus background content, yielding a partial image dominated by the optic disc as shown in Figure 4.
Before the machine learning model is used for recognition, it should be trained with training data. Regarding the training of the first machine learning model, an embodiment of the present invention provides a preferred training scheme. In the training stage, the effective region containing the optic disc is first manually annotated in fundus images to obtain the training data; for example, the dashed box in Figure 5 is the annotation. The annotation box is fed into the machine learning model in the form (x, y, height, width), where x and y are the coordinates of the top-left corner of the box in the image, and height and width are the height and width of the box. A large number of fundus images together with their annotation boxes are input to the model for training; through learning, the model can predict the position of the effective region containing the optic disc and output results in the same form as the annotation box.
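The (x, y, height, width) box format described above maps directly onto array slicing. The following Python/NumPy sketch only illustrates that format; the array size, the `crop_box` helper name and the sample box values are assumptions for illustration and are not part of the patented method:

```python
import numpy as np

def crop_box(image: np.ndarray, box: tuple) -> np.ndarray:
    """Crop a region given an annotation box (x, y, height, width),
    where (x, y) is the top-left corner of the box."""
    x, y, height, width = box
    return image[y:y + height, x:x + width]

# Stand-in 3-channel fundus image (content is irrelevant here).
fundus = np.zeros((600, 800, 3), dtype=np.uint8)

# Box as it might be predicted by the detection model.
partial = crop_box(fundus, (350, 200, 128, 128))
print(partial.shape)  # (128, 128, 3)
```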
Embodiments of the present invention may adopt an existing deep detection model, such as SSD, YOLO or Faster R-CNN, as the first machine learning model, or a self-designed deep network model.
In this embodiment, the partial image has the same colors as the original fundus image, i.e. this step is a detection-and-cropping process; in other embodiments, an image with transformed color channels, such as a grayscale image, may also be obtained.
S4B, preprocessing the partial image to enhance pixel features. Specifically, contrast limited adaptive histogram equalization (CLAHE) can be used to enhance the partial image. This step makes the features in the image more prominent; after preprocessing, the contours of the optic disc and the optic cup are easier to find during recognition, improving recognition accuracy and efficiency.
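CLAHE is available in common image libraries (for example OpenCV's `cv2.createCLAHE`). As a library-free illustration of the underlying contrast-enhancement idea only, the sketch below implements plain global histogram equalization; real CLAHE additionally operates on image tiles and clips the histogram. The function name and sample data are assumptions for illustration:

```python
import numpy as np

def equalize_hist(gray: np.ndarray) -> np.ndarray:
    """Global histogram equalization of an 8-bit grayscale image:
    map each gray level through the normalized cumulative histogram."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]

# Low-contrast stand-in image: values concentrated in [100, 120].
rng = np.random.default_rng(0)
gray = rng.integers(100, 121, size=(64, 64), dtype=np.uint8)
enhanced = equalize_hist(gray)
# After equalization the values are spread over the full [0, 255] range.
print(int(enhanced.min()), int(enhanced.max()))  # 0 255
```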
S5B, using a second machine learning model to recognize an optic disc image from the partial image, and a third machine learning model to recognize an optic cup image from the partial image. This step yields a more precise region segmentation result: the disc and cup contours in the images match the anatomical disc and cup contours and are usually irregular shapes. Recognition yields an optic disc image as shown in Figure 6 and an optic cup image as shown in Figure 7.
Regarding the training of the second and third machine learning models, an embodiment of the present invention provides a preferred training scheme. Specifically, during training the optic disc is precisely annotated by hand, and a filled mask as shown in Figure 6 is generated from the manually annotated contour, in which white is the disc region and black is the background. The cropped disc region and the corresponding mask are then input to the model together for training; through learning, the model recognizes the disc region and segments it out. The annotation and segmentation of the optic cup follow the same steps.
Embodiments of the present invention may adopt an existing deep segmentation model, such as U-Net, Mask R-CNN or DeepLabV3, as the second and third machine learning models, or a self-designed deep segmentation model.
In this embodiment, the second machine learning model outputs an optic disc binary image in which the background gray value is 0 and the disc region gray value is 255; the third machine learning model outputs an optic cup binary image in which the background gray value is 0 and the cup region gray value is 255. This is a preferred processing method that facilitates the subsequent extraction of the rim image; in other embodiments, images with the same colors as the original fundus image may also be output.
S6B, obtaining a rim image from the disc binary image and the cup binary image. In this embodiment, the rim image contains the background region outside the disc, the cup region and the rim region, marked with different gray values. For example, combining the binary images in Figures 6 and 7 yields the rim image shown in Figure 8, in which the gray value of the background region is 0 and that of the cup region is 255; the rim region is the annular part of the disc region not covered by the cup region, and its gray value may be set to 128, for example. In practice, any three clearly distinguishable gray values can be used to mark the three regions; the values are not limited to those in the example above.
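The combination of the two binary images into the three-gray-value rim image of Figure 8 can be sketched as follows. The concentric circular stand-in masks and the `rim_image` helper name are assumptions for illustration; the gray values 0/128/255 follow the example in the text:

```python
import numpy as np

def rim_image(disc_bin: np.ndarray, cup_bin: np.ndarray) -> np.ndarray:
    """Combine disc and cup binary images (background 0, region 255)
    into a three-level image: background 0, rim 128, cup 255."""
    out = np.zeros_like(disc_bin)
    out[disc_bin == 255] = 128  # mark the whole disc as rim first
    out[cup_bin == 255] = 255   # the cup overwrites the inner part
    return out

# Stand-in masks: disc of radius 20, cup of radius 10, both centered.
yy, xx = np.mgrid[:64, :64]
r2 = (yy - 32) ** 2 + (xx - 32) ** 2
disc = np.where(r2 <= 20 ** 2, 255, 0).astype(np.uint8)
cup = np.where(r2 <= 10 ** 2, 255, 0).astype(np.uint8)

rim = rim_image(disc, cup)
print(np.unique(rim).tolist())  # [0, 128, 255]
```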
In another embodiment, the rim image may contain only the rim region and the background region; for example, subtracting the binary image in Figure 7 from the binary image in Figure 6 yields the binary image shown in Figure 9, in which the white ring is the rim region.
A rim image may further be cropped from the fundus image according to the rim binary image. The rim binary image in Figure 9 provides the cropping position and extent, from which a rim image as shown in Figure 10 can be cropped from the original fundus image or from the partial image of step S3B; this step recovers the colors of the original fundus image.
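Using the rim binary image of Figure 9 as a mask over the color image can be sketched as follows; the uniform stand-in "fundus" color, the ring geometry and the `apply_mask` helper name are assumptions for illustration:

```python
import numpy as np

def apply_mask(color: np.ndarray, rim_bin: np.ndarray) -> np.ndarray:
    """Keep the original colors where the rim mask is 255,
    and set everything else to black."""
    out = np.zeros_like(color)
    keep = rim_bin == 255
    out[keep] = color[keep]
    return out

# Stand-in data: a uniform orange "fundus" and a ring-shaped mask.
color = np.full((64, 64, 3), (200, 120, 60), dtype=np.uint8)
yy, xx = np.mgrid[:64, :64]
r2 = (yy - 32) ** 2 + (xx - 32) ** 2
rim_bin = np.where((r2 > 10 ** 2) & (r2 <= 20 ** 2), 255, 0).astype(np.uint8)

colored_rim = apply_mask(color, rim_bin)
# Inside the ring the fundus colors survive; outside it is black.
print(colored_rim[32, 47].tolist(), colored_rim[0, 0].tolist())
```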
In another embodiment, where the optic disc image and the optic cup image are grayscale or color images, the two grayscale or color images can also be subtracted directly to obtain a rim image in the corresponding colors.
S7B, using a fourth machine learning model to recognize the rim image, the partial image and the fundus image and to output a glaucoma image judgment result. In this embodiment, the inputs to the model are the partial image (either the enhanced partial image on the left of Figure 6 or Figure 7, or the unenhanced image shown in Figure 4), the global fundus image (the cropped image shown in Figure 3 or the original fundus photograph), and the rim image (as shown in Figure 8, 9 or 10).
There are many options for the structure and processing of the fourth machine learning model. Embodiments of the present invention may adopt an existing deep recognition model, such as Inception V3 or ResNet, as the fourth machine learning model, or a self-designed deep recognition model.
As a preferred implementation, the fourth machine learning model in this embodiment includes a first feature extraction unit, a second feature extraction unit, a third feature extraction unit, a feature fusion unit and a decision unit. Specifically, step S7B includes the following steps:
extracting first features from the rim image using the first feature extraction unit;
extracting second features from the partial image using the second feature extraction unit;
extracting third features from the fundus image using the third feature extraction unit;
forming fused features from the first, second and third features using the feature fusion unit;
outputting a glaucoma image judgment result according to the fused features using the decision unit.
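The three-branch structure above (three feature extraction units, a fusion unit that combines their outputs, and a decision unit) can be sketched schematically. In this sketch, untrained random projections stand in for the convolutional branches and a random linear layer stands in for the decision unit, so the output probability only illustrates the data flow, not a trained judgment; all names, sizes and weights are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image: np.ndarray, dim: int = 8) -> np.ndarray:
    """Stand-in feature extraction unit: a fixed random projection of
    the flattened image (a CNN branch would be used in practice)."""
    proj = rng.standard_normal((dim, image.size))
    return proj @ (image.ravel() / 255.0)

# Stand-in inputs for the three branches.
rim_img = rng.integers(0, 256, (32, 32), dtype=np.uint8)
partial_img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
fundus_img = rng.integers(0, 256, (128, 128), dtype=np.uint8)

# Feature fusion unit: concatenate the three branch outputs.
fused = np.concatenate([extract_features(rim_img),
                        extract_features(partial_img),
                        extract_features(fundus_img)])

# Decision unit: a linear layer followed by a sigmoid, yielding a
# pseudo-probability that the input is a glaucoma image.
w = rng.standard_normal(fused.size)
prob = 1.0 / (1.0 + np.exp(-(w @ fused)))
print(fused.shape)  # (24,)
```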
Regarding the training of the fourth machine learning model, during training different combinations of glaucoma images (global fundus image, disc partial image, rim image) and different combinations of non-glaucoma images are repeatedly input, so that the model learns to distinguish the two. The output may be one of two classes, i.e. negative or positive (yes or no), or probability information, such as the probability that the image is (or is not) a glaucoma image.
According to the glaucoma image recognition method provided by this embodiment of the present invention, the captured fundus photograph is first cropped to remove interfering content, so that the machine learning model can more accurately segment the smaller, disc-dominated partial image from the larger fundus image; two machine learning models each recognize this partial image, accurately outputting binary images of the disc region and the cup region, and combining the binary images yields the rim image efficiently and accurately; in the final recognition, the machine learning model takes the features of all three images into account to produce the recognition result, improving the accuracy of the glaucoma image judgment.
Correspondingly, an embodiment of the present invention also provides a glaucoma image recognition apparatus. As shown in Figure 11, the apparatus includes:
an acquisition unit 111, configured to acquire a fundus image;
a partial recognition unit 112, configured to extract a partial image from the fundus image, the partial image containing an optic disc and fundus background;
a region recognition unit 113, configured to obtain an optic disc image and an optic cup image from the fundus image;
a rim determination unit 114, configured to obtain a rim image from the optic disc image and the optic cup image;
a glaucoma recognition unit 115, configured to determine whether the fundus image is a glaucoma image according to the rim image, the partial image and the fundus image.
As a preferred implementation, the partial recognition unit 112 includes:
a first machine learning model, configured to recognize a partial region from the fundus image, the partial region containing an optic disc and fundus background;
an image cropping unit, configured to extract the partial region to form the partial image, the partial image having the same colors as the fundus image.
As a preferred implementation, the region recognition unit 113 includes:
a second machine learning model, configured to recognize an optic disc image from the partial image;
a third machine learning model, configured to recognize an optic cup image from the partial image.
As a preferred implementation, the glaucoma recognition unit 115 includes:
a fourth machine learning model, configured to recognize the rim image, the partial image and the fundus image and to output a glaucoma image judgment result.
Further, as shown in Figure 12, the fourth machine learning model includes:
a first feature extraction unit, configured to extract first features from the rim image;
a second feature extraction unit, configured to extract second features from the partial image;
a third feature extraction unit, configured to extract third features from the fundus image;
a feature fusion unit, configured to form fused features from the first features, second features and third features;
a decision unit, configured to output a glaucoma image judgment result according to the fused features.
The feature extraction units may be convolutional neural networks, the feature fusion unit may be a neural network with fully connected layers, and the decision unit may be a neural network or a classifier.
An embodiment of the present invention also provides an electronic device, comprising: at least one processor and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor executes the glaucoma image recognition method of the above embodiments.
An embodiment of the present invention also provides a glaucoma disease screening system, as shown in Figure 13, comprising:
a fundus camera 131, configured to take fundus images of a user; and
a glaucoma image recognition device 132, configured to execute the glaucoma image recognition method of the above embodiments.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, the above embodiments are merely examples given for clarity of description and are not intended to limit the implementations. For those of ordinary skill in the art, other variations or changes in different forms can be made on the basis of the above description. It is neither necessary nor possible to exhaustively list all implementations here. Obvious variations or changes derived therefrom still fall within the protection scope of the present invention.

Claims (11)

  1. A glaucoma image recognition method, characterized by comprising the following steps:
    acquiring a fundus image;
    extracting a partial image from the fundus image, the partial image containing an optic disc and fundus background;
    obtaining an optic disc image and an optic cup image from the fundus image or the partial image;
    obtaining a rim image from the optic disc image and the optic cup image;
    determining whether the fundus image is a glaucoma image according to the rim image, the partial image and the fundus image.
  2. The method according to claim 1, wherein extracting a partial image from the fundus image comprises:
    recognizing a partial region from the fundus image using a first machine learning model, the partial region containing the optic disc and fundus background;
    extracting the partial region to form the partial image, the partial image having the same colors as the fundus image.
  3. The method according to claim 1 or 2, wherein obtaining an optic disc image and an optic cup image from the fundus image or the partial image comprises:
    recognizing an optic disc image from the partial image using a second machine learning model;
    recognizing an optic cup image from the partial image using a third machine learning model.
  4. The method according to any one of claims 1-3, wherein the optic disc image and the optic cup image are both binary images.
  5. The method according to any one of claims 1-4, wherein the rim image contains a background region outside the optic disc, a cup region and a rim region, and these are marked with different gray values.
  6. The method according to any one of claims 1-5, wherein determining whether the fundus image is a glaucoma image according to the rim image, the partial image and the fundus image comprises:
    recognizing the rim image, the partial image and the fundus image using a fourth machine learning model, and outputting a glaucoma image judgment result.
  7. The method according to claim 6, wherein the fourth machine learning model comprises a first feature extraction unit, a second feature extraction unit, a third feature extraction unit, a feature fusion unit and a decision unit; and recognizing the rim image, the partial image and the fundus image using the fourth machine learning model and outputting a glaucoma image judgment result comprises:
    extracting first features from the rim image using the first feature extraction unit;
    extracting second features from the partial image using the second feature extraction unit;
    extracting third features from the fundus image using the third feature extraction unit;
    forming fused features from the first features, second features and third features using the feature fusion unit;
    outputting a glaucoma image judgment result according to the fused features using the decision unit.
  8. A computer storage medium, characterized in that instructions are stored thereon which, when run on a computer, cause the computer to execute the glaucoma image recognition method according to any one of claims 1-7.
  9. A computer program product containing instructions, characterized in that, when run on a computer, it causes the computer to execute the glaucoma image recognition method according to any one of claims 1-7.
  10. A glaucoma image recognition device, characterized by comprising: at least one processor and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor executes the glaucoma image recognition method according to any one of claims 1-7.
  11. A glaucoma disease screening system, characterized by comprising:
    a fundus camera, configured to take fundus images; and
    the glaucoma image recognition device according to claim 10.
PCT/CN2019/120215 2018-12-19 2019-11-22 Glaucoma image recognition method, device and screening system WO2020125319A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP19898889.1A EP3901816A4 (en) 2018-12-19 2019-11-22 METHOD AND DEVICE FOR RECOGNIZING GLAUCOMA IMAGES AND SCREENING SYSTEM

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811557812.0 2018-12-19
CN201811557812.0A CN109684981B (zh) 2018-12-19 Glaucoma image recognition method, device and screening system

Publications (1)

Publication Number Publication Date
WO2020125319A1 true WO2020125319A1 (zh) 2020-06-25

Family

ID=66186283

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/120215 WO2020125319A1 (zh) 2018-12-19 2019-11-22 Glaucoma image recognition method, device and screening system

Country Status (3)

Country Link
EP (1) EP3901816A4 (zh)
CN (1) CN109684981B (zh)
WO (1) WO2020125319A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111863241A (zh) * 2020-07-10 2020-10-30 北京化工大学 Fundus photograph classification system based on ensemble deep learning
CN114463346A (zh) * 2021-12-22 2022-05-10 成都中医药大学 Mobile-terminal-based device for rapid tongue segmentation in complex environments
CN117095450A (zh) * 2023-10-20 2023-11-21 武汉大学人民医院(湖北省人民医院) Image-based dry eye severity assessment system

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
CN109684981B (zh) 2018-12-19 2020-10-09 上海鹰瞳医疗科技有限公司 Glaucoma image recognition method, device and screening system
CN110298850B (zh) * 2019-07-02 2022-03-15 北京百度网讯科技有限公司 Fundus image segmentation method and apparatus
CN110570421B (zh) * 2019-09-18 2022-03-22 北京鹰瞳科技发展股份有限公司 Multi-task fundus image classification method and device
CN110599480A (zh) * 2019-09-18 2019-12-20 上海鹰瞳医疗科技有限公司 Multi-source-input fundus image classification method and device
CN110969617B (zh) * 2019-12-17 2024-03-15 腾讯医疗健康(深圳)有限公司 Optic cup and optic disc image recognition method, apparatus, device and storage medium
CN110992382B (zh) * 2019-12-30 2022-07-15 四川大学 Fundus image optic cup and disc segmentation method and system for assisting glaucoma screening
CN111986202B (zh) 2020-10-26 2021-02-05 平安科技(深圳)有限公司 Glaucoma auxiliary diagnosis apparatus, method and storage medium
CN115496954B (zh) * 2022-11-03 2023-05-12 中国医学科学院阜外医院 Fundus image classification model construction method, device and medium

Citations (5)

Publication number Priority date Publication date Assignee Title
US20110190657A1 (en) * 2009-08-10 2011-08-04 Carl Zeiss Meditec, Inc. Glaucoma combinatorial analysis
CN106214120A (zh) * 2016-08-19 2016-12-14 靳晓亮 Early screening method for glaucoma
CN108717868A (zh) * 2018-04-26 2018-10-30 博众精工科技股份有限公司 Deep-learning-based glaucoma fundus image screening method and system
CN109684981A (zh) * 2018-12-19 2019-04-26 上海鹰瞳医疗科技有限公司 Glaucoma image recognition method, device and screening system
CN109697716A (zh) * 2018-12-19 2019-04-30 上海鹰瞳医疗科技有限公司 Glaucoma image recognition method, device and screening system

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US20180140180A1 (en) * 2016-11-22 2018-05-24 Delphinium Clinic Ltd. Method and system for classifying optic nerve head
CN108875618A (zh) * 2018-06-08 2018-11-23 高新兴科技集团股份有限公司 Face liveness detection method, system and apparatus


Non-Patent Citations (1)

Title
See also references of EP3901816A4 *

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN111863241A (zh) * 2020-07-10 2020-10-30 北京化工大学 Fundus photograph classification system based on ensemble deep learning
CN111863241B (zh) * 2020-07-10 2023-06-30 北京化工大学 Fundus photograph classification system based on ensemble deep learning
CN114463346A (zh) * 2021-12-22 2022-05-10 成都中医药大学 Mobile-terminal-based device for rapid tongue segmentation in complex environments
CN117095450A (zh) * 2023-10-20 2023-11-21 武汉大学人民医院(湖北省人民医院) Image-based dry eye severity assessment system
CN117095450B (zh) * 2023-10-20 2024-01-09 武汉大学人民医院(湖北省人民医院) Image-based dry eye severity assessment system

Also Published As

Publication number Publication date
EP3901816A4 (en) 2022-09-14
CN109684981B (zh) 2020-10-09
CN109684981A (zh) 2019-04-26
EP3901816A1 (en) 2021-10-27


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19898889

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019898889

Country of ref document: EP

Effective date: 20210719