CN111938567A - Ophthalmic parameter measurement method, system and equipment based on deep learning - Google Patents
Ophthalmic parameter measurement method, system and equipment based on deep learning
- Publication number
- CN111938567A CN111938567A CN202010655960.7A CN202010655960A CN111938567A CN 111938567 A CN111938567 A CN 111938567A CN 202010655960 A CN202010655960 A CN 202010655960A CN 111938567 A CN111938567 A CN 111938567A
- Authority
- CN
- China
- Prior art keywords
- cornea
- sclera
- eye
- eyelid
- corneal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000013135 deep learning Methods 0.000 title claims abstract description 26
- 238000000691 measurement method Methods 0.000 title claims description 4
- 210000004087 cornea Anatomy 0.000 claims abstract description 64
- 210000003786 sclera Anatomy 0.000 claims abstract description 60
- 238000000034 method Methods 0.000 claims abstract description 51
- 238000013528 artificial neural network Methods 0.000 claims abstract description 22
- 238000005259 measurement Methods 0.000 claims abstract description 11
- 210000000744 eyelid Anatomy 0.000 claims description 55
- 230000015654 memory Effects 0.000 claims description 26
- 239000000284 extract Substances 0.000 claims description 22
- 238000004590 computer program Methods 0.000 claims description 13
- 238000001514 detection method Methods 0.000 claims description 11
- 230000008569 process Effects 0.000 claims description 11
- 238000013527 convolutional neural network Methods 0.000 claims description 9
- 210000001747 pupil Anatomy 0.000 claims description 8
- 238000012887 quadratic function Methods 0.000 claims description 8
- 238000000605 extraction Methods 0.000 claims description 7
- 206010015997 Eyelid retraction Diseases 0.000 claims description 6
- 238000013461 design Methods 0.000 claims description 6
- 230000001629 suppression Effects 0.000 claims description 6
- 238000004364 calculation method Methods 0.000 claims description 5
- 210000001061 forehead Anatomy 0.000 claims description 4
- 230000000903 blocking effect Effects 0.000 claims description 2
- 201000010099 disease Diseases 0.000 abstract description 5
- 208000037265 diseases, disorders, signs and symptoms Diseases 0.000 abstract description 5
- 238000004458 analytical method Methods 0.000 abstract description 3
- 210000001508 eye Anatomy 0.000 description 89
- 238000005516 engineering process Methods 0.000 description 7
- 238000003745 diagnosis Methods 0.000 description 6
- 238000004422 calculation algorithm Methods 0.000 description 5
- 230000001815 facial effect Effects 0.000 description 4
- 238000013473 artificial intelligence Methods 0.000 description 3
- 238000011161 development Methods 0.000 description 3
- 230000018109 developmental process Effects 0.000 description 3
- 238000002059 diagnostic imaging Methods 0.000 description 3
- 208000030533 eye disease Diseases 0.000 description 3
- 230000006870 function Effects 0.000 description 3
- 230000036541 health Effects 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 238000011160 research Methods 0.000 description 3
- 238000012827 research and development Methods 0.000 description 3
- 210000004556 brain Anatomy 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000003068 static effect Effects 0.000 description 2
- 230000001360 synchronised effect Effects 0.000 description 2
- 206010006187 Breast cancer Diseases 0.000 description 1
- 208000026310 Breast neoplasm Diseases 0.000 description 1
- 206010008342 Cervix carcinoma Diseases 0.000 description 1
- 208000017667 Chronic Disease Diseases 0.000 description 1
- 206010058467 Lung neoplasm malignant Diseases 0.000 description 1
- 208000006105 Uterine Cervical Neoplasms Diseases 0.000 description 1
- 230000032683 aging Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 201000010881 cervical cancer Diseases 0.000 description 1
- 230000007812 deficiency Effects 0.000 description 1
- 206010012601 diabetes mellitus Diseases 0.000 description 1
- 239000003814 drug Substances 0.000 description 1
- 229940079593 drug Drugs 0.000 description 1
- 210000000887 face Anatomy 0.000 description 1
- 238000010191 image analysis Methods 0.000 description 1
- 208000015181 infectious disease Diseases 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 230000001788 irregular Effects 0.000 description 1
- 201000005202 lung cancer Diseases 0.000 description 1
- 208000020816 lung neoplasm Diseases 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 238000005192 partition Methods 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 238000012805 post-processing Methods 0.000 description 1
- 230000036544 posture Effects 0.000 description 1
- 238000012216 screening Methods 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 238000001228 spectrum Methods 0.000 description 1
- 208000024891 symptom Diseases 0.000 description 1
- 238000012549 training Methods 0.000 description 1
- 201000008827 tuberculosis Diseases 0.000 description 1
- 230000037303 wrinkles Effects 0.000 description 1
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/1005—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring distances inside the eye, e.g. thickness of the cornea
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/107—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining the shape or measuring the curvature of the cornea
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/11—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring interpupillary distance or diameter of pupils
- A61B3/112—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring interpupillary distance or diameter of pupils for measuring diameter of pupils
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Physics & Mathematics (AREA)
- Biophysics (AREA)
- Medical Informatics (AREA)
- Surgery (AREA)
- Multimedia (AREA)
- Animal Behavior & Ethology (AREA)
- Heart & Thoracic Surgery (AREA)
- Ophthalmology & Optometry (AREA)
- Veterinary Medicine (AREA)
- Public Health (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Eye Examination Apparatus (AREA)
Abstract
The invention provides a method and system for measuring human eye parameters based on deep learning, including: acquiring a face picture and extracting the left-eye and right-eye images from it; using a deep neural network to identify different parts of the left-eye and right-eye images, including the cornea, the sclera and the positions of the inner and outer canthi; and calculating a plurality of ocular parameters for the identified positions in the left-eye and right-eye images. A device implemented on the basis of the above method and system for measuring human eye parameters is also provided. The method, system and device for measuring human eye parameters based on deep learning provided by the invention can automatically identify and locate different parts of the human eye, automatically measure the ocular parameters that ophthalmologists routinely need to measure, and provide parametric assistance and support for ophthalmologists in analysing eye conditions.
Description
Technical Field
The present invention relates to medical image processing technology in the field of artificial intelligence, and in particular to a method, system and device for measuring ophthalmic parameters based on deep learning.
Background
At present, China faces an imbalance between the supply of and demand for high-quality medical resources: doctors take a long time to train, misdiagnosis rates are high, the disease spectrum changes quickly, technology evolves rapidly, the population is aging, and chronic diseases are on the rise. As people pay more attention to health, this large demand has driven the rapid development of medical AI.
To date, AI has made considerable progress in many sub-fields of China's medical sector. For example, in October 2016 Baidu released the "Baidu Medical Brain", benchmarked against comparable products from Google and IBM. As a concrete application of AI in medicine, it collects and analyses a large body of professional medical literature and clinical data and, by simulating the consultation process, gives final diagnosis and treatment recommendations based on the user's symptoms.
In July 2017, Ali Health released the medical AI system "Doctor You", which includes a clinical research and diagnosis platform, a medical auxiliary detection engine and other components. Ali Health has also cooperated with governments, hospitals, research institutes and other external institutions to develop intelligent diagnosis engines for 20 common, frequently occurring diseases, including diabetes, lung cancer prediction and fundus screening.
In November 2018, the "Digital Diagnosis and Treatment Equipment R&D Project" led by Tencent was launched. As one of the first six pilot projects under the national key research and development plan, it explores and supports the upgrading of medical services based on "AI + CDSS" (artificial-intelligence-based clinical decision support technology).
There are many points of contact between AI and medicine. Summarising current applications, AI in the medical field is mainly used in five areas: medical imaging, auxiliary diagnosis, drug research and development, health management, and disease prediction.
Thanks to big data in medical imaging and advances in image recognition, medical imaging has become the most mature area in China for combining artificial intelligence with medicine, with applications to tuberculosis, fundus disease, breast cancer, cervical cancer and other fields. During an eye examination, doctors usually need accurate measurements of the patient's eyes in order to make a diagnosis and judge the state of disease. They typically measure many ocular parameters by hand, such as the height of the palpebral fissure, the longitudinal and transverse diameters of the cornea, the distance between the inner and outer canthi, and the interpupillary distance. At present, the common practice is for the doctor to measure these parameters with a ruler directly against the patient's eye. This is cumbersome for the doctor, the results are not very convincing, and the physical contact exposes both doctor and patient to a risk of infection. To solve this problem, those skilled in the art have tried to compute these ocular parameters automatically from eye images using dedicated algorithms, which spares the ophthalmologist the time-consuming and laborious ruler measurement and spares the patient from adopting awkward postures and bearing the risk of infection. In recent years, with the rapid development of deep learning, CNN-based object detection in images has reached high accuracy and more and more researchers are applying it to medical image analysis, but techniques and research applied to the measurement of eye disease remain limited.
So far, no description or report of a technology similar to the present invention has been found, and no similar material has been collected at home or abroad.
Summary of the Invention
In view of the above deficiencies in the prior art, the purpose of the present invention is to provide a method, system and device for measuring human eye parameters based on deep learning.
The present invention is achieved through the following technical solutions.
According to one aspect of the present invention, a method for measuring human eye parameters based on deep learning is provided, including:
acquiring a face picture and extracting the left-eye and right-eye images from the face picture;
using a deep neural network to identify different parts of the left-eye and right-eye images, including the cornea, the sclera and the positions of the inner and outer canthi, to obtain recognition results;
calculating, from the recognition results, a plurality of ocular parameters for different positions and different states in the left-eye and right-eye images.
Preferably, extracting the left-eye and right-eye images from the face picture includes:
processing the face picture with an open-source toolkit to extract a plurality of eye key points from it, and extracting the complete left-eye and right-eye images according to the positions of the eye key points.
Preferably, the deep neural network is a multi-task convolutional neural network comprising a cornea block, a sclera block and a canthus block, wherein:
the cornea block uses per-pixel prediction: a bounding box and a confidence score are predicted for every pixel position, the confidence reflecting the probability that the pixel lies inside the cornea, and after non-maximum suppression the weighted mean of all bounding boxes is taken as the final prediction;
the sclera block uses a fully convolutional network in which the input feature map is upsampled by a factor of m to obtain a classification score map;
the canthus block adopts the design of the last three layers of the YOLO (You Only Look Once) object detection network: the image is divided into an n×n grid, and the grid cell into which each canthus falls is predicted together with the coordinate offset relative to the centre of that cell, so that the coordinates of the key points are predicted directly.
Preferably, there are 10 ocular parameters, including: palpebral fissure height, corneal longitudinal diameter, corneal transverse diameter, corneal coverage, upper eyelid retraction, lower eyelid retraction, pupil diameter, limitation of downward rotation, limitation of upward rotation, and limitation of outward rotation.
Preferably, the ocular parameters are calculated by fitting the eyelid with a quadratic function, combined with a method in which stickers are used to compute the scale between the physical world and the image.
Preferably, the method of fitting the eyelid with a quadratic function includes:
after the recognition result for the sclera is obtained, fitting the edge of the sclera with a quadratic function to obtain the position of the eyelid, with the centre of the cornea taken as the vertex of the quadratic curve, whose expression is:
p(y - c_y) = ±(x - c_x)^2    (1)
where (c_x, c_y) is the centre of the cornea and (x, y) are the coordinates of a pixel in the image. When the positive sign is used on the right-hand side, the equation fits the lower eyelid; with the negative sign, it fits the upper eyelid. The value of p is determined by:
p = d_1 + d_2    (2)
where d_1 and d_2 denote, respectively, the pixel distance of the scleral width and the pixel distance of either the maximum scleral opening left unconnected by the corneal position or the corneal diameter. For sclera recognition results in which the upper or lower eyelid occludes the cornea on one side only, d_1 and d_2 take the pixel distance of the scleral width and the pixel distance of the maximum scleral opening caused by the corneal position; for results in which the upper and lower eyelids jointly occlude the cornea, d_2 takes the mean of the pixel distances of the upper and lower scleral openings; for results in which neither eyelid occludes the cornea, d_2 takes the pixel distance of the corneal diameter.
Preferably, the method of using stickers to compute the scale between the physical world and the image includes:
attaching two identical standard circular stickers to the patient's forehead and recognising them to obtain the scale S_scale between the picture size and the real-world physical size, specifically:
first, the Hough transform is used to identify the two stickers in the picture and to extract their diameters; assuming the diameters obtained from the picture are D_A and D_B, the average diameter of the two stickers is recorded as D_image: D_image = (D_A + D_B) / 2;
the physical size of the sticker in the real world is D_real, so the scale S_scale between the picture size and the real-world physical size is: S_scale = D_real / D_image.
Preferably, the ocular parameters are calculated as follows:
palpebral fissure height: after fitting the eyelid edges, compute the distance between the highest and lowest points of the eyelids;
corneal longitudinal diameter and corneal transverse diameter: computed directly from the recognition result;
corneal coverage: after fitting the eyelid edges, compute the distance between the highest point of the eyelid and the highest point of the cornea;
upper eyelid retraction: after fitting the eyelid edges, compute the distance between the highest point of the eyelid and the highest point of the cornea;
lower eyelid retraction: after fitting the eyelid edges, compute the distance between the lowest point of the eyelid and the lowest point of the cornea;
pupil diameter: after the cornea is identified, binarise the corneal region and compute the diameter directly;
limitation of downward rotation: compute the distance between the line joining the inner and outer canthi and the lowest point of the cornea;
limitation of upward rotation: compute the distance between the line joining the inner and outer canthi and the lowest point of the cornea;
limitation of outward rotation: compute the distance between the outer canthus and the outermost point of the cornea.
According to a second aspect of the present invention, a system for measuring human eye parameters based on deep learning is provided, including:
an eye image extraction module, which extracts the left-eye and right-eye images from an acquired face picture;
an eye part recognition module, which uses a deep neural network to identify different parts of the left-eye and right-eye images, including the cornea, the sclera and the positions of the inner and outer canthi, to obtain recognition results;
an eye parameter calculation module, which calculates a plurality of ocular parameters for different positions and different states in the left-eye and right-eye images from the recognition results obtained by the eye part recognition module.
Preferably, the eye image extraction module processes the face picture with an open-source toolkit, extracts a plurality of eye key points from the face picture, and extracts the complete left-eye and right-eye images according to the positions of the eye key points.
Preferably, in the eye part recognition module the deep neural network is a multi-task convolutional neural network comprising a cornea block, a sclera block and a canthus block, wherein:
the cornea block uses per-pixel prediction, predicting a bounding box and a confidence score for every pixel position, and after non-maximum suppression the weighted mean of all bounding boxes is taken as the final prediction;
the sclera block uses a fully convolutional network in which the input feature map is upsampled by a factor of m to obtain a classification score map;
the canthus block adopts the design of the last three layers of the YOLO object detection network: the image is divided into an n×n grid, and the grid cell into which each canthus falls is predicted together with the coordinate offset relative to the centre of that cell, so that the coordinates of the key points are predicted directly.
According to a third aspect of the present invention, a device is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, can be used to perform the method of any one of the above.
By adopting the above technical solutions, the present invention has at least one of the following beneficial effects:
The method, system and device for measuring human eye parameters based on deep learning provided by the present invention can automatically identify and locate different parts of the human eye.
The method, system and device for measuring human eye parameters based on deep learning provided by the present invention realise automatic measurement of the ocular parameters that ophthalmologists routinely need to measure.
The method, system and device for measuring human eye parameters based on deep learning provided by the present invention provide parametric assistance and support for ophthalmologists in analysing eye conditions.
Brief Description of the Drawings
Other features, objects and advantages of the present invention will become more apparent from the following detailed description of non-limiting embodiments made with reference to the drawings:
Fig. 1 illustrates the different parts of the human eye and gives example eye pictures according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of the structure of the multi-task neural network in an embodiment of the present invention.
Fig. 3 shows the standard stickers in an embodiment of the present invention.
Fig. 4 is a schematic diagram of the post-processing pictures used for calculating the ocular parameters in an embodiment of the present invention.
Fig. 5 is a flowchart of the method for measuring human eye parameters based on deep learning provided by an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below. The embodiments are implemented on the premise of the technical solution of the present invention and give detailed implementations and specific operating procedures. It should be pointed out that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present invention, all of which fall within the scope of protection of the present invention.
An embodiment of the present invention provides a method for measuring ophthalmic parameters based on deep learning. The method uses deep learning techniques and image processing algorithms to process images of the human eye, thereby identifying the different parts of the eye and completing the measurement of the ophthalmic parameters. As shown in Fig. 5, the method includes the following steps:
Step 1: collect a high-definition photograph of the patient's face and automatically extract photographs of the patient's left and right eyes from it;
In a preferred embodiment, the left-eye and right-eye photographs are processed separately;
Step 2: use a deep neural network to identify the different parts of the eye in the left-eye and right-eye photographs, including the cornea, the sclera, and the inner and outer canthi (i.e. the corners of the eye);
Step 3: use the result-processing algorithm and the parameter calculation method to compute 10 commonly used ocular parameters for the eyes in the left-eye and right-eye photographs. These parameters are the basis on which the ophthalmologist analyses the condition of the eye.
The ocular parameters measured in this way can assist the ophthalmologist in analysing the condition of the eye.
In a preferred embodiment, Step 1 uses an existing open-source project (for example the dlib open-source library, which can extract 68 key points from a photograph as required) to process the face and extract a plurality of facial key points; these key points cover key parts such as the eyes, and pictures of the eyes can then be extracted from them.
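By way of illustration only, a minimal Python sketch of this extraction step using dlib and OpenCV is given below. The landmark-model file name, the crop margin and the use of a padded box around the eye landmarks are assumptions made for the sketch and are not taken from the patent.

```python
# Illustrative sketch (not the patent's implementation): locate the 68 dlib facial
# landmarks and crop a padded box around each eye. The model file name and the
# margin value are assumptions.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

def extract_eyes(image_path, margin=0.5):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    eyes = {}
    # In the 68-point scheme, points 36-41 outline one eye and 42-47 the other.
    for name, idx in (("right", range(36, 42)), ("left", range(42, 48))):
        xs = [pts[i][0] for i in idx]
        ys = [pts[i][1] for i in idx]
        w, h = max(xs) - min(xs), max(ys) - min(ys)
        x0 = max(0, int(min(xs) - margin * w))
        y0 = max(0, int(min(ys) - margin * h))
        x1 = min(img.shape[1], int(max(xs) + margin * w))
        y1 = min(img.shape[0], int(max(ys) + margin * h))
        eyes[name] = img[y0:y1, x0:x1]
    return eyes
```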
In some embodiments of the present invention, the eye pictures to be recognised are as shown in Fig. 1, where (a) explains the different parts of the human eye and (b) and (c) are examples of pictures of several different eye diseases.
In a preferred embodiment, Step 2 includes the following sub-steps:
The cornea, the sclera and the inner and outer canthi have different characteristics, so different methods are needed to identify them. The shape of the sclera is irregular and varies greatly between patients, so pixel-level semantic segmentation is the most suitable approach and can delineate the extent of the sclera fairly precisely. The shape of the cornea is more regular, roughly circular or elliptical, so it can be located and bounded with a rectangular box and then extracted with an ellipse. The inner and outer canthi are two points on either side of the eye, so it is desirable to regress their coordinates directly. Moreover, the canthi lie on the two sides of the sclera and adjacent to it, which is an easily observed constraint. This is the basic idea behind the deep neural network built in the embodiments of the present invention.
A multi-task convolutional neural network (CNN), shown in Fig. 2, is used to identify the above cornea, sclera and inner and outer canthi; the backbone of the multi-task CNN is based on VGG16. Fig. 3 shows the structure of the three blocks of the CNN: the Cornea block, the Sclera block and the Canthus block, where:
the Cornea block uses per-pixel prediction: a bounding box and a confidence score are predicted for every pixel position, and after non-maximum suppression the weighted mean of all bounding boxes is taken as the final prediction;
the Sclera block uses a fully convolutional network in which the input feature map is upsampled by a factor of m to obtain the classification score map; in a preferred embodiment, m is 8;
the Canthus block predicts the coordinates of the key points directly, adopting the design of the last three layers of the YOLO (You Only Look Once) object detection network (see Redmon, Joseph, Santosh Divvala, Ross Girshick, and Ali Farhadi, "You Only Look Once: Unified, Real-Time Object Detection", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779-788, 2016): the image is divided into an n×n grid, and the network predicts which grid cell each canthus falls into together with the coordinate offset relative to the centre of that cell. In this way all the key parts of the eye (the sclera, the cornea and the inner and outer canthi) are obtained, and the position of the eyelid can be inferred; in a preferred embodiment, n is 7.
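By way of illustration only, the two prediction-decoding steps described above (confidence-weighted fusion of the corneal bounding boxes after non-maximum suppression, and conversion of the canthus grid cell plus offset into image coordinates) might be sketched as follows; the box format and the normalisation of the offsets are assumptions, not details given in the patent.

```python
import numpy as np

def fuse_corneal_boxes(boxes, scores):
    """boxes: (N, 4) array of [x1, y1, x2, y2] kept after non-maximum suppression;
    scores: (N,) confidence that the predicting pixel lies inside the cornea.
    Returns one box as the confidence-weighted mean of the survivors."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    weights = scores / scores.sum()
    return (boxes * weights[:, None]).sum(axis=0)

def decode_canthus(cell_scores, offsets, img_w, img_h, n=7):
    """cell_scores: (n, n) probability that the canthus falls in each grid cell;
    offsets: (n, n, 2) predicted (dx, dy) relative to the cell centre, in cell units (assumed).
    Returns the (x, y) position of the key point in image coordinates."""
    i, j = np.unravel_index(np.argmax(cell_scores), cell_scores.shape)
    cell_w, cell_h = img_w / n, img_h / n
    cx, cy = (j + 0.5) * cell_w, (i + 0.5) * cell_h
    dx, dy = offsets[i, j]
    return cx + dx * cell_w, cy + dy * cell_h
```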
In a preferred embodiment, Step 3 includes the following steps:
Ten parameters that ophthalmologists frequently need to measure are selected; they provide an objective basis for the doctor's analysis of the eye. The ten parameters are: palpebral fissure height (PFH), corneal longitudinal diameter, corneal transverse diameter, corneal coverage, upper scleral retraction, lower scleral retraction, pupil diameter, limitation of downward rotation, limitation of upward rotation, and limitation of outward rotation.
To obtain the ratio between the picture size and the real-world physical size, circular stickers with a diameter of D_real = 10.0 mm are attached to the patient's forehead, as shown in Fig. 3, where (a) and (b) are two examples of the standard sticker being applied. When placing the stickers, keep them as flat as possible and parallel to the camera lens to minimise errors caused by tilt and creasing. The average diameter of the two stickers, in pixels, is denoted D_image, and the scale S_scale between the image size and the physical world size is obtained as follows.
First, the Hough transform is used to identify the two stickers in the picture and to extract their diameters. Assuming the diameters obtained from the picture are D_A and D_B, the average diameter of the two stickers is recorded as D_image: D_image = (D_A + D_B) / 2.
The physical size of the sticker in the real world is D_real, so the scale between the picture size and the real-world physical size, denoted S_scale, is calculated from formula (10): S_scale = D_real / D_image    (10)
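Purely as an illustration, the sticker detection and scale computation described above might be sketched as follows using OpenCV's Hough circle transform; the detector parameters are assumptions that would need tuning for real photographs.

```python
import cv2

def sticker_scale(img, d_real_mm=10.0):
    """Detect the two circular stickers and return S_scale = D_real / D_image (mm per pixel).
    HoughCircles parameters are illustrative assumptions."""
    gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=40, minRadius=10, maxRadius=120)
    if circles is None or circles.shape[1] < 2:
        return None
    # Radii of the two strongest circles give the sticker diameters in pixels.
    d_a = 2.0 * circles[0, 0, 2]
    d_b = 2.0 * circles[0, 1, 2]
    d_image = (d_a + d_b) / 2.0        # D_image = (D_A + D_B) / 2
    return d_real_mm / d_image         # S_scale = D_real / D_image
```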
A quadratic function is used to fit the edge of the sclera and so obtain the position of the eyelid, as shown in Fig. 4, where (a) gives several example eye pictures, (b) illustrates the parameters used for fitting the scleral edge, and (c) sketches the fitting method. The centre of the cornea is taken as the vertex of the quadratic curve, whose expression is:
p(y - c_y) = ±(x - c_x)^2    (5)
where (c_x, c_y) is the centre of the cornea and (x, y) are the coordinates of a pixel in the image. When the positive sign is used on the right-hand side, the equation fits the lower eyelid; with the negative sign, it fits the upper eyelid. The value of p is determined by:
p = d_1 + d_2    (6)
where d_1 and d_2 are as shown in Fig. 4(b): they denote, respectively, the pixel distance of the scleral width and the pixel distance of either the maximum scleral opening left unconnected by the corneal position or the corneal diameter. For the sclera recognition results in the first and second images of Fig. 4(b), where one eyelid occludes the cornea, d_1 and d_2 take the pixel distance of the scleral width and the pixel distance of the maximum scleral opening caused by the corneal position; for the third image, where the upper and lower eyelids jointly occlude the cornea, d_2 takes the mean of the pixel distances of the upper and lower scleral openings; for the fourth image, where neither eyelid occludes the cornea, d_2 takes the pixel distance of the corneal diameter.
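The fitted eyelid curve of formulas (5) and (6) can be evaluated directly; the short sketch below assumes image coordinates in which y increases downwards, which is consistent with the sign convention stated in the text.

```python
import numpy as np

def eyelid_curve(cx, cy, d1, d2, xs, upper=True):
    """Evaluate the eyelid position from p(y - cy) = ±(x - cx)^2 with p = d1 + d2.
    upper=True uses the negative sign (upper eyelid); upper=False uses the positive
    sign (lower eyelid). xs are pixel x-coordinates; returns the corresponding y-values."""
    p = float(d1 + d2)
    sign = -1.0 if upper else 1.0
    return cy + sign * (np.asarray(xs, dtype=float) - cx) ** 2 / p
```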
In a preferred embodiment, the ocular parameters are calculated as follows (an illustrative sketch of this post-processing is given after the list):
palpebral fissure height: after fitting the eyelid edges, compute the distance between the highest and lowest points of the eyelids;
corneal longitudinal diameter and corneal transverse diameter: computed directly from the recognition result;
corneal coverage: after fitting the eyelid edges, compute the distance between the highest point of the eyelid and the highest point of the cornea;
upper eyelid retraction: after fitting the eyelid edges, compute the distance between the highest point of the eyelid and the highest point of the cornea;
lower eyelid retraction: after fitting the eyelid edges, compute the distance between the lowest point of the eyelid and the lowest point of the cornea;
pupil diameter: after the cornea is identified, binarise the corneal region and compute the diameter directly;
limitation of downward rotation: compute the distance between the line joining the inner and outer canthi and the lowest point of the cornea;
limitation of upward rotation: compute the distance between the line joining the inner and outer canthi and the lowest point of the cornea;
limitation of outward rotation: compute the distance between the outer canthus and the outermost point of the cornea.
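The post-processing listed above might be sketched as follows, assuming the cornea has been reduced to a pixel bounding box, the eyelid curves have been sampled across the palpebral fissure, and S_scale (millimetres per pixel) has been obtained from the stickers; only a few of the ten parameters are shown, purely as an illustration.

```python
import numpy as np

def basic_eye_parameters(cornea_box, upper_lid_y, lower_lid_y, s_scale):
    """cornea_box: (x1, y1, x2, y2) in pixels; upper_lid_y / lower_lid_y: arrays of
    eyelid y-values sampled over the palpebral fissure; s_scale: mm per pixel.
    y is assumed to increase downwards, so the highest point has the smallest y."""
    x1, y1, x2, y2 = cornea_box
    pfh = (np.max(lower_lid_y) - np.min(upper_lid_y)) * s_scale      # palpebral fissure height
    transverse = (x2 - x1) * s_scale                                 # corneal transverse diameter
    longitudinal = (y2 - y1) * s_scale                               # corneal longitudinal diameter
    coverage = (y1 - np.min(upper_lid_y)) * s_scale                  # eyelid apex to corneal apex
    return {"PFH": pfh, "corneal_transverse": transverse,
            "corneal_longitudinal": longitudinal, "corneal_coverage": coverage}
```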
Another embodiment of the present invention provides a system for measuring human eye parameters based on deep learning, including:
an eye image extraction module, which extracts the left-eye and right-eye images from an acquired face picture;
an eye part recognition module, which uses a deep neural network to identify different parts of the left-eye and right-eye images, including the cornea, the sclera and the positions of the inner and outer canthi;
an eye parameter calculation module, which calculates a plurality of ocular parameters for different positions and different states in the eye images from the recognition results.
In a preferred embodiment, the eye image extraction module uses existing open-source projects (this patent uses the dlib open-source library together with OpenCV's Haar detection algorithm: the dlib library processes complete face pictures and can extract 68 key points, while the Haar detector can process partial face pictures and locate the eyes) to process the face and extract a plurality of facial key points, which include key parts such as the eyes; the eye pictures are then extracted from these key points.
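As an illustration of the Haar-cascade fallback mentioned above for partial face pictures, a minimal OpenCV sketch is given below; the choice of OpenCV's bundled eye cascade and the detector parameters are assumptions for the sketch.

```python
import cv2

# Illustrative sketch: eye localisation with OpenCV's bundled Haar eye cascade.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eyes_haar(gray_img):
    """Return bounding boxes (x, y, w, h) of candidate eye regions in a grayscale image."""
    return eye_cascade.detectMultiScale(gray_img, scaleFactor=1.1, minNeighbors=5)
```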
In a preferred embodiment, in the eye part recognition module the deep neural network is a multi-task neural network comprising a cornea block, a sclera block and a canthus block, and it can identify all the key parts of the eye (the sclera, the cornea and the inner and outer canthi) according to the characteristics of the different eye parts, wherein:
the cornea block uses per-pixel prediction, predicting a bounding box and a confidence score for every pixel position, the confidence reflecting the probability that the pixel lies inside the cornea; after non-maximum suppression, the weighted mean of all bounding boxes is taken as the final prediction;
the sclera block uses a fully convolutional network in which the input feature map is upsampled by a factor of m to obtain a classification score map;
the canthus block adopts the design of the last three layers of the YOLO (You Only Look Once) object detection network: the image is divided into an n×n grid, and the grid cell into which each canthus falls is predicted together with the coordinate offset relative to the centre of that cell, so that the coordinates of the key points are predicted directly.
In a preferred embodiment, the eye parameter calculation module first selects the 10 parameters that ophthalmologists frequently need to measure, including palpebral fissure height (PFH), corneal longitudinal diameter, corneal transverse diameter, corneal coverage, upper scleral retraction, lower scleral retraction, pupil diameter, limitation of downward rotation, limitation of upward rotation, and limitation of outward rotation. In a preferred embodiment, the module can attach a standard circular sticker to the patient's forehead and recognise it automatically to obtain the ratio between the picture size and the real-world physical size; it then fits the eyelid with a quadratic function and calculates the ocular parameters.
A third embodiment of the present invention provides a device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, can be used to perform any one of the above methods for measuring human eye parameters based on deep learning.
Optionally, the memory is used to store the program. The memory may include volatile memory, for example random-access memory (RAM) such as static random-access memory (SRAM) or double data rate synchronous dynamic random-access memory (DDR SDRAM); it may also include non-volatile memory, for example flash memory. The memory 62 is used to store computer programs (such as the application programs and functional modules that implement the above method), computer instructions and the like; these computer programs, computer instructions and so on may be stored in partitions in one or more memories, and they can be called by the processor.
The processor is configured to execute the computer program stored in the memory so as to implement each step of the method in the above embodiments. For details, refer to the relevant descriptions in the foregoing method embodiments.
The processor and the memory may be separate structures or may be integrated into one structure. When they are separate structures, the memory and the processor may be coupled and connected via a bus.
The method, system and device for measuring human eye parameters based on deep learning provided by the above embodiments of the present invention acquire a face picture and extract the left-eye and right-eye images from it; use a deep neural network to identify different parts of the left-eye and right-eye images, including the cornea, the sclera and the positions of the inner and outer canthi; and calculate a plurality of ocular parameters for different positions in the images. They can automatically identify and locate different parts of the human eye, automatically measure the ocular parameters that ophthalmologists routinely need to measure, and provide parametric assistance and support for ophthalmologists in analysing eye conditions.
It should be noted that the steps of the method provided by the present invention can be implemented by the corresponding modules, devices and units in the system; those skilled in the art may refer to the technical solution of the system to implement the step flow of the method, that is, the embodiments of the system may be understood as preferred examples for implementing the method, and they will not be repeated here.
Those skilled in the art know that, in addition to implementing the system and its devices provided by the present invention purely as computer-readable program code, the method steps can be logically programmed so that the system and its devices realise the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and its devices provided by the present invention can be regarded as hardware components, and the devices included in them for realising various functions can also be regarded as structures within the hardware components; the devices for realising various functions can even be regarded both as software modules for implementing the method and as structures within the hardware components.
Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the above specific embodiments; those skilled in the art can make various variations or modifications within the scope of the claims, and these do not affect the essential content of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010655960.7A CN111938567B (en) | 2020-07-09 | 2020-07-09 | Ophthalmic parameter measurement method, system and equipment based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010655960.7A CN111938567B (en) | 2020-07-09 | 2020-07-09 | Ophthalmic parameter measurement method, system and equipment based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111938567A true CN111938567A (en) | 2020-11-17 |
CN111938567B CN111938567B (en) | 2021-10-22 |
Family
ID=73340112
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010655960.7A Active CN111938567B (en) | 2020-07-09 | 2020-07-09 | Ophthalmic parameter measurement method, system and equipment based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111938567B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106023269A (en) * | 2016-05-16 | 2016-10-12 | 北京大学第医院 | Method and device for estimating wound area |
CN108520512A (en) * | 2018-03-26 | 2018-09-11 | 北京医拍智能科技有限公司 | A kind of method and device measuring eye parameter |
CN109815850A (en) * | 2019-01-02 | 2019-05-28 | 中国科学院自动化研究所 | Iris image segmentation and localization method, system and device based on deep learning |
CN110111316A (en) * | 2019-04-26 | 2019-08-09 | 广东工业大学 | Method and system based on eyes image identification amblyopia |
CN110866490A (en) * | 2019-11-13 | 2020-03-06 | 复旦大学 | Face detection method and device based on multitask learning |
CN111191573A (en) * | 2019-12-27 | 2020-05-22 | 中国电子科技集团公司第十五研究所 | Driver fatigue detection method based on blink rule recognition |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113116292A (en) * | 2021-04-22 | 2021-07-16 | 上海交通大学医学院附属第九人民医院 | Eye position measuring method, device, terminal and equipment based on eye appearance image |
WO2024037587A1 (en) * | 2022-08-18 | 2024-02-22 | 上海市内分泌代谢病研究所 | Palpebral fissure height measurement method and apparatus, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111938567B (en) | 2021-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109325942A (en) | Fundus image structure segmentation method based on fully convolutional neural network | |
CN112308932B (en) | Gaze detection method, device, equipment and storage medium | |
WO2020140370A1 (en) | Method and device for automatically detecting petechia in fundus, and computer-readable storage medium | |
WO2020151149A1 (en) | Microaneurysm automatic detection method, device, and computer-readable storage medium | |
CN107506770A (en) | Diabetic retinopathy eye-ground photography standard picture generation method | |
CN114694236B (en) | A Segmentation and Localization Method for Eye Movement Based on Recurrent Residual Convolutional Neural Network | |
CN111951219B (en) | Thyroid eye disease screening method, system and equipment based on orbital CT images | |
CN110335266A (en) | A kind of intelligent traditional Chinese medicine visual diagnosis image processing method and device | |
CN109658400A (en) | A kind of methods of marking and system based on head CT images | |
CN110786824A (en) | Method and system for detection of hemorrhagic lesions in coarsely labeled fundus photography based on bounding box correction network | |
CN111700582A (en) | A diagnostic system for common ocular surface diseases based on smart terminals | |
Song et al. | Multiple facial image features-based recognition for the automatic diagnosis of turner syndrome | |
CN110338763A (en) | An image processing method and device for intelligent diagnosis and testing of traditional Chinese medicine | |
CN113889267A (en) | Construction method and electronic device of diabetes diagnosis model based on eye image recognition | |
CN111938567A (en) | Ophthalmic parameter measurement method, system and equipment based on deep learning | |
CN110619332A (en) | Data processing method, device and equipment based on visual field inspection report | |
CN116682564B (en) | Near-sighted traction maculopathy risk prediction method and device based on machine learning | |
CN115762787B (en) | A method and system for evaluating the curative effect of eyelid disease surgery | |
CN115909470B (en) | Fully automatic eyelid disease postoperative appearance prediction system and method based on deep learning | |
Zhu et al. | Calculation of ophthalmic diagnostic parameters on a single eye image based on deep neural network | |
CN111938655A (en) | Orbital soft tissue morphology evaluation method, system and equipment based on key point information | |
CN113887311B (en) | Method, device and storage medium for protecting privacy of ophthalmic patient | |
CN116453692A (en) | Ophthalmology disease risk assessment screening system | |
CN115690486A (en) | Method, device and equipment for identifying focus in image and storage medium | |
Suwandi et al. | A Systematic Literature Review: Diabetic Retinopathy Detection Using Deep Learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |