WO2020211174A1 - Machine vision-based method for processing eccentric photorefraction image - Google Patents

Machine vision-based method for processing eccentric photorefraction image

Info

Publication number
WO2020211174A1
WO2020211174A1 · PCT/CN2019/089777 · CN2019089777W
Authority
WO
WIPO (PCT)
Prior art keywords
pupil
image
area
processing
present
Prior art date
Application number
PCT/CN2019/089777
Other languages
French (fr)
Chinese (zh)
Inventor
陈浩
于航
黄锦海
郑晓波
梅晨阳
Original Assignee
温州医科大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 温州医科大学
Publication of WO2020211174A1 publication Critical patent/WO2020211174A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/245Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor

Definitions

  • The invention belongs to the field of ophthalmic medical image processing, and in particular relates to a machine vision-based method for processing eccentric photorefraction images.
  • Retinoscopy is the gold standard for refractive error examination, with an accuracy of up to 0.25 D; for children, however, retinoscopy has practical limitations.
  • The handheld vision screener is an instrument designed and produced in recent years specifically for vision screening of infants and young children. Its characteristic is that the examination can be carried out at a distance from the examinee and does not require a high degree of cooperation. This makes it suitable not only for cooperative subjects, as with previous examination methods, but also for vision screening of infants, young children and poorly cooperative people.
  • The device projects infrared light onto the retina, and the light reflected by the retina presents different patterns under different refractive states.
  • The camera records the patterns and computes data such as sphere, cylinder and axis. A single measurement yields the refractive state, pupil diameter, interpupillary distance and eye position of both eyes, enabling doctors to screen quickly and gain a full picture of the patient's visual development.
  • The principle of eccentric photorefraction: a light source array is formed from near-infrared light-emitting diodes; the light travels at a specific angle toward the examined pupil at a certain distance, enters the eye, and is reflected by the retina, being refracted twice by the eye's refractive system along the way (once on entering and once on leaving the eye) before emerging from the pupil area and being captured by the camera. The refractive and accommodative state of the examined eye therefore determines the shape and brightness of the light and shadow in its pupil area. By processing and analyzing the pupil light-and-shadow images, the corresponding vision test results are obtained.
  • When the image acquisition device (camera or video camera) captures the eye image, both eyes are photographed at once, so the image contains much unwanted interference information besides the eyes, which affects the accuracy of the vision test results.
  • The purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art, and to provide a machine vision-based method for processing eccentric photorefraction images.
  • A machine vision-based method for processing eccentric photorefraction images, characterized by comprising the following steps:
  • The present invention first uses an Adaboost strong-classifier self-learning method based on Haar-like rectangle features to locate the pupil region.
  • Because the principle of eccentric photorefraction makes a myopic or hyperopic pupil appear unevenly bright, the Wallis dodging algorithm is adopted.
  • The algorithm homogenizes the pupil gray values to the greatest extent and enhances the pupil edge information.
  • Binarization and blob analysis are then performed to remove noise and interference regions, retaining only the region where the pupil is located; the precise pupil boundary is then obtained at the binarized pupil edge using the gray difference method; finally, a least-squares ellipse fitting is performed and the pupil region parameters are output.
  • The invention eliminates interference information, accurately obtains the pupil region parameters, and helps to improve the accuracy of optometry for infants, young children and poorly cooperative people.
  • Figure 1 is a schematic flowchart of the present invention.
  • Figure 2 is a schematic diagram of training Haar features with the Adaboost learning algorithm.
  • a machine vision-based eccentric photorefractive image processing method includes the following steps:
  • Acquire eye images: use the camera to continuously capture eye images, and process each image according to the following steps.
  • The Haar feature value reflects the gray-level variation of the image.
  • Haar features fall into four categories: edge features, linear features, center features and diagonal features. These are combined into feature templates. A feature template contains white and black rectangles, and the template's feature value is defined as the sum of the pixels in the white rectangles minus the sum of the pixels in the black rectangles.
  • The integral image is a fast algorithm that can compute the pixel sum of any region of the image after a single pass over the image, which greatly improves the efficiency of Haar feature computation.
  • Its main idea is to store, for each point, the sum of the pixels in the rectangle from the image origin to that point as an element of an array in memory; when the pixel sum of some region is needed, the array elements can be indexed directly instead of recomputing the sum, which speeds up the computation.
  • The integral image is constructed so that the value ii(i, j) at position (i, j) is the sum of all pixels of the original image f above and to the left of (i, j): ii(i, j) = Σ_{i′≤i, j′≤j} f(i′, j′).
  • The Adaboost learning algorithm is used to coarsely locate the human eye region.
  • The basic idea of the Adaboost learning algorithm is to train a sequence of models, one new model per round; at the end of each round, misclassified samples are identified and their weights in the next round's training set are increased, after which the next round of learning produces a new model.
  • The main idea is that later models compensate for the errors of earlier ones, and the ensemble is built by iteratively adding new models; each learned model must have a classification accuracy greater than 0.5 (it may misclassify some samples, but must not miss targets).
  • The gray mean of a grayscale image reflects its brightness.
  • The variance reflects the dynamic range of its gray-level variation. Because ambient light and subjects differ, the pupil brightness and variance differ from frame to frame; moreover, if the examined eye is myopic or hyperopic, the brightness varies even within a single pupil. Uneven gray values would disturb the subsequent pupil segmentation, so a dodging algorithm is used to minimize uneven illumination.
  • The Wallis filter maps the gray mean and variance of an image to fixed values, making the gray means and variances of different images approximately equal. It is mainly used to transform the gray mean and standard deviation between different images, or at different positions within an image, to approximately equal values, and to enhance the brightness and contrast of dark areas in unevenly illuminated images.
  • The precise pupil boundary is obtained by the gray difference method.
  • In step (7), using the precise pupil boundary obtained in step (6), an ellipse fitting based on the least squares method is performed to obtain the pupil region parameters.
  • a person of ordinary skill in the art can understand that all or part of the steps in the method of the foregoing embodiments can be implemented by a program instructing related hardware.
  • the program can be stored in a computer readable storage medium.
  • Media such as ROM/RAM, magnetic disk, optical disk, etc.
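Taken together, these excerpts describe a seven-step pipeline. The sketch below wires the steps into a single function under heavy simplifications: the trained Adaboost cascade is abstracted as a caller-supplied `pupil_detector`, the Wallis step is replaced by a global mean/std normalization, and steps (6) and (7) are reduced to a centroid placeholder; every name here is illustrative, not the patent's code.

```python
import numpy as np

def process_image(gray, pupil_detector):
    """Skeleton of the pipeline in the excerpts above. `pupil_detector`
    stands in for the trained Adaboost cascade and must return a
    (row0, row1, col0, col1) region or None; later steps are simplified."""
    roi = pupil_detector(gray)                  # step (3): coarse eye region
    if roi is None:                             # no pupil: stop, next frame
        return None
    r0, r1, c0, c1 = roi
    patch = gray[r0:r1, c0:c1].astype(float)
    # Step (4), stand-in for Wallis dodging: force global mean/std to targets.
    m_g, s_g = 127.0, 40.0
    patch = (patch - patch.mean()) * (s_g / (patch.std() + 1e-9)) + m_g
    # Step (5): binarize; the bright pupil region is assumed above the mean.
    mask = patch > patch.mean()
    # Steps (6)-(7) would refine the edge and fit an ellipse; as a placeholder
    # we return the mask centroid and area as crude "pupil region parameters".
    ys, xs = np.nonzero(mask)
    return {"center": (float(ys.mean()), float(xs.mean())), "area": int(mask.sum())}
```

If the detector reports no pupil, the frame is skipped, mirroring the patent's early exit in step (3).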

Abstract

The present invention relates to a method for processing ophthalmic medical images, and specifically to a machine vision-based method for processing eccentric photorefraction images. The present invention employs a Haar-like rectangle-feature-based Adaboost strong-classifier self-learning method to locate the pupil region; because the principle of eccentric photorefraction makes a near-sighted or far-sighted pupil appear unevenly bright, the Wallis dodging algorithm is employed to homogenize the grayscale values of the pupil to the greatest extent, thereby enhancing the edge information of the pupil. Then, binarization and blob analysis are performed to remove noise points and interference regions, retaining only the region where the pupil is located; a precise pupil boundary is then obtained by applying a grayscale difference method at the binarized edge of the pupil; and finally elliptical fitting based on the method of least squares is performed to output the pupil region parameters. The present invention eliminates interference information and accurately obtains the pupil region parameters, thereby helping to improve the precision of optometry for infants and poorly cooperative people.

Description

Method for processing eccentric photorefraction images based on machine vision

Technical field

The invention belongs to the field of ophthalmic medical image processing, and in particular relates to a machine vision-based method for processing eccentric photorefraction images.

Background art

Retinoscopy is the gold standard for refractive error examination, with an accuracy of up to 0.25 D. For children, however, retinoscopy has practical limitations. The handheld vision screener is an instrument designed and produced in recent years specifically for vision screening of infants and young children. Its characteristic is that the examination can be carried out at a distance from the examinee and does not require a high degree of cooperation, which makes it suitable not only for cooperative subjects, as with previous examination methods, but also for vision screening of infants, young children and poorly cooperative people.

The device projects infrared light onto the retina; the light reflected back presents different patterns under different refractive states. The camera records the patterns and computes data such as sphere, cylinder and axis. A single measurement yields the refractive state, pupil diameter, interpupillary distance and eye position of both eyes, enabling doctors to screen quickly and gain a full picture of the patient's visual development.

The principle of eccentric photorefraction: a light source array is formed from near-infrared light-emitting diodes; the light travels at a specific angle toward the examined pupil at a certain distance, enters the eye, and is reflected by the retina, being refracted twice by the eye's refractive system along the way (once on entering and once on leaving the eye) before emerging from the pupil area and being captured by the camera. The refractive and accommodative state of the examined eye therefore determines the shape and brightness of the light and shadow in its pupil area. By processing and analyzing the pupil light-and-shadow images, the corresponding vision test results are obtained.

When the image acquisition device (camera or video camera) captures the eye image, both eyes are photographed at once, so the image contains much unwanted interference information besides the eyes, which affects the accuracy of the vision test results.
Technical problem

The purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art and to provide a machine vision-based method for processing eccentric photorefraction images.

Technical solution

The technical solution adopted by the present invention is as follows. A machine vision-based method for processing eccentric photorefraction images, characterized by comprising the following steps:

(1) Collect eye images;

(2) Compute Haar features using the integral image method;

(3) Use the Adaboost learning algorithm to coarsely locate the human eye region; if a pupil is present, continue with the following steps, and if no pupil is present, end the processing of this image;

(4) Apply Wallis dodging to the pupil region to homogenize the pupil gray values and enhance the pupil edge information;

(5) Perform binarization and blob analysis to remove noise and interference regions, retaining only the region where the pupil is located;

(6) Obtain the precise pupil boundary at the binarized pupil edge using the gray difference method;

(7) Perform a least-squares ellipse fitting on the obtained precise pupil boundary to obtain the pupil region parameters.
Beneficial effects

The beneficial effects of the present invention are as follows. The present invention first uses an Adaboost strong-classifier self-learning method based on Haar-like rectangle features to locate the pupil region. Because the principle of eccentric photorefraction makes a myopic or hyperopic pupil appear unevenly bright, the Wallis dodging algorithm is adopted to homogenize the pupil gray values to the greatest extent and to enhance the pupil edge information. Binarization and blob analysis are then performed to remove noise and interference regions, retaining only the region where the pupil is located; the precise pupil boundary is then obtained at the binarized pupil edge using the gray difference method; finally, a least-squares ellipse fitting is performed and the pupil region parameters are output. The invention eliminates interference information, accurately obtains the pupil region parameters, and helps to improve the accuracy of optometry for infants, young children and poorly cooperative people.

Description of the drawings

In order to explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings obtained from these drawings without creative labor still fall within the scope of the present invention.

Figure 1 is a schematic flowchart of the present invention;

Figure 2 is a schematic diagram of training Haar features with the Adaboost learning algorithm.
Best mode of carrying out the invention

To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the accompanying drawings.

As shown in Figure 1, a machine vision-based method for processing eccentric photorefraction images includes the following steps:

(1) Collect eye images: use a camera to continuously capture eye images, and process each image according to the following steps;

(2) Compute Haar features using the integral image method. The Haar feature value reflects the gray-level variation of the image. Haar features fall into four categories: edge features, linear features, center features and diagonal features. These are combined into feature templates. A feature template contains white and black rectangles, and the template's feature value is defined as the sum of the pixels in the white rectangles minus the sum of the pixels in the black rectangles.
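For illustration, the white-minus-black feature value of a simple two-rectangle (edge) Haar template can be computed directly from pixel sums; the function name and template layout below are assumptions for the sketch, not the patent's code:

```python
import numpy as np

def haar_edge_feature(img, r, c, h, w):
    """Value of a horizontal two-rectangle (edge) Haar feature at (r, c):
    sum of the left (white) half minus sum of the right (black) half.
    The template covers h rows by 2*w columns."""
    white = img[r:r + h, c:c + w].sum()
    black = img[r:r + h, c + w:c + 2 * w].sum()
    return float(white - black)
```

In a detector, each such sum would be read from the integral image in four lookups rather than summed directly.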
The integral image is a fast algorithm that can compute the pixel sum of any region of the image after a single pass over the image, which greatly improves the efficiency of Haar feature computation. Its main idea is to store, for each point, the sum of the pixels in the rectangle from the image origin to that point as an element of an array in memory; when the pixel sum of some region is needed, the array elements can be indexed directly instead of recomputing the sum, which speeds up the computation. The integral image is constructed so that the value ii(i, j) at position (i, j) is the sum of all pixels of the original image f above and to the left of (i, j): ii(i, j) = Σ_{i′≤i, j′≤j} f(i′, j′).
Integral image construction algorithm:

1) Let s(i, j) denote the cumulative sum along the row direction, initialized as s(i, -1) = 0.

2) Let ii(i, j) denote the integral image, initialized as ii(-1, j) = 0.

3) Scan the image row by row, recursively computing for each pixel (i, j) the row cumulative sum s(i, j) and the integral image value ii(i, j):

s(i, j) = s(i, j-1) + f(i, j)

ii(i, j) = ii(i-1, j) + s(i, j)

4) After one pass over the image, when the bottom-right pixel is reached, the integral image ii is complete. Once the integral image is constructed, the pixel sum of any rectangular region of the image can be obtained by simple arithmetic.
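Steps 1) to 4) translate directly into code. A minimal sketch (naive double loop, illustrative names), together with the four-lookup rectangle sum that the integral image makes possible:

```python
import numpy as np

def integral_image(f):
    """Build ii(i, j) = sum of f over rows <= i and cols <= j in a single
    pass, using the row cumulative sum s(i, j)."""
    H, W = f.shape
    ii = np.zeros((H, W), dtype=np.int64)
    for i in range(H):
        s = 0                                    # s(i, -1) = 0
        for j in range(W):
            s += int(f[i, j])                    # s(i, j) = s(i, j-1) + f(i, j)
            above = ii[i - 1, j] if i > 0 else 0  # ii(-1, j) = 0
            ii[i, j] = above + s                 # ii(i, j) = ii(i-1, j) + s(i, j)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Pixel sum over rows r0..r1 and cols c0..c1 (inclusive) from at most
    four integral-image lookups."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return int(total)
```

Any Haar feature value then reduces to a handful of `rect_sum` calls, independent of the rectangle size.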
(3) Use the Adaboost learning algorithm to coarsely locate the human eye region. The basic idea of Adaboost is to train a sequence of models, one new model per round; at the end of each round, misclassified samples are identified and their weights in the next round's training set are increased, after which the next round of learning produces a new model. The main idea is that later models compensate for the errors of earlier ones, and the ensemble is built by iteratively adding new models; each learned model must have a classification accuracy greater than 0.5 (it may misclassify some samples, but must not miss targets).

As shown in Figure 2, the Adaboost learning algorithm is trained on the Haar features to generate multiple weak classifiers, which are cascaded into a strong classifier; if needed, strong classifiers can in turn be cascaded into an even stronger classifier used to detect the pupil region.

If a pupil is present, continue with the following steps; if no pupil is present, end the processing of this image and begin processing and analyzing the next image.
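The training loop described above can be sketched generically with one-feature threshold stumps as the weak learners. This is a textbook Adaboost sketch under assumed interfaces, not the patent's trained pupil classifier:

```python
import numpy as np

def adaboost_train(X, y, rounds=10):
    """Minimal Adaboost with single-feature threshold stumps.
    X: (n, d) feature matrix; y: labels in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights, uniform at the start
    model = []                        # list of (feature, thresh, polarity, alpha)
    for _ in range(rounds):
        best = None
        for j in range(d):            # exhaustive stump search
            for t in np.unique(X[:, j]):
                for p in (1, -1):
                    pred = np.where(p * X[:, j] <= p * t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, p, pred)
        err, j, t, p, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak learner
        w *= np.exp(-alpha * y * pred)          # boost misclassified samples
        w /= w.sum()
        model.append((j, t, p, alpha))
    return model

def adaboost_predict(model, X):
    score = np.zeros(len(X))
    for j, t, p, alpha in model:
        score += alpha * np.where(p * X[:, j] <= p * t, 1, -1)
    return np.sign(score)
```

A cascade, as in Figure 2, would chain several such strong classifiers so that easy negatives are rejected early.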
(4) Apply Wallis dodging to the pupil region to homogenize the pupil gray values and enhance the pupil edge information. The gray mean of a grayscale image reflects its brightness, and the variance reflects the dynamic range of its gray-level variation. Because ambient light and subjects differ, the pupil brightness and variance differ from frame to frame; moreover, if the examined eye is myopic or hyperopic, the brightness varies even within a single pupil. Uneven gray values would disturb the subsequent pupil segmentation, so a dodging algorithm is used to minimize uneven illumination. The Wallis filter maps the gray mean and variance of an image to fixed values, making the gray means and variances of different images approximately equal. It is mainly used to transform the gray mean and standard deviation between different images, or at different positions within an image, to approximately equal values, and to enhance the brightness and contrast of dark areas in unevenly illuminated images. The algorithm formula is as follows:
g(x, y) = [f(x, y) - m_f] · (c·s_g) / (c·s_f + (1 - c)·s_g) + b·m_g + (1 - b)·m_f

where f(x, y) is the gray value of the original image at (x, y); g(x, y) is the gray value at (x, y) of the resulting image after the Wallis transform; m_f is the local gray mean of the original image; s_f is the local gray standard deviation of the original image; m_g is the target value of the local gray mean of the transformed image; s_g is the target value of the local gray standard deviation of the transformed image; c is the expansion constant for the image variance; and b is the image brightness coefficient. As b tends to 1, the image mean is forced toward m_g; as b tends to 0, the image mean is forced toward m_f. When the adjustment coefficients c and b are both 1, the above equation reduces to the following linear transformation:

g(x, y) = [f(x, y) - m_f] · s_g / s_f + m_g

When m_f equals m_g and s_f equals s_g, that is, when the mean and variance of the image to be corrected already match those of the reference image, this linear transformation does not change the image gray levels; otherwise it makes the brightness and variance of the corrected image essentially consistent with the reference image.
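A direct implementation of the Wallis mapping, applied globally per image for simplicity (the patent applies it to the pupil region, and the target values m_g and s_g below are arbitrary illustrative choices):

```python
import numpy as np

def wallis(f, m_g=127.0, s_g=50.0, c=1.0, b=1.0):
    """Wallis transform: map the gray mean m_f and standard deviation s_f of
    image f toward the target mean m_g and target standard deviation s_g.
    c is the variance expansion constant; b the brightness coefficient."""
    f = f.astype(float)
    m_f = f.mean()
    s_f = f.std()
    r1 = (c * s_g) / (c * s_f + (1.0 - c) * s_g + 1e-12)   # gain term
    r0 = b * m_g + (1.0 - b) * m_f                          # offset term
    return (f - m_f) * r1 + r0
```

With c = b = 1 this reduces to the linear form g = (f - m_f)·s_g/s_f + m_g; a windowed variant would compute m_f and s_f over local neighborhoods instead of the whole image.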
(5) Perform binarization and blob analysis to remove noise and interference regions, retaining only the region where the pupil is located.
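A simplified version of the blob-analysis step, keeping only the largest 4-connected component of the binary mask (a stand-in for full blob analysis, which could also filter components by shape or size):

```python
import numpy as np
from collections import deque

def largest_blob(mask):
    """Keep only the largest 4-connected foreground component of a binary
    mask; smaller noise and interference blobs are dropped."""
    H, W = mask.shape
    labels = np.zeros((H, W), dtype=int)
    sizes = {}
    next_label = 0
    for r in range(H):
        for c in range(W):
            if mask[r, c] and labels[r, c] == 0:
                next_label += 1
                q = deque([(r, c)])
                labels[r, c] = next_label
                count = 0
                while q:                          # BFS flood fill
                    y, x = q.popleft()
                    count += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                sizes[next_label] = count
    if not sizes:
        return np.zeros_like(mask, dtype=bool)
    keep = max(sizes, key=sizes.get)
    return labels == keep
```

The surviving component is taken as the pupil region and passed on to the edge-refinement step.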
(6) For each edge point of the pupil region obtained in step (5) (these edge points form only a rough boundary), use the gray difference method to obtain the precise pupil boundary.
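One plausible reading of the gray difference refinement, sketched in one dimension along a row through a coarse edge point: the refined edge is placed where the absolute difference of adjacent gray values is largest. The search window size is an assumption:

```python
import numpy as np

def refine_edge_1d(gray_row, coarse_col, window=3):
    """Refine a coarse edge column by locating the maximum absolute
    gray-level difference between neighboring pixels within +/- window
    of the coarse position."""
    lo = max(coarse_col - window, 0)
    hi = min(coarse_col + window, len(gray_row) - 2)
    diffs = [abs(int(gray_row[c + 1]) - int(gray_row[c])) for c in range(lo, hi + 1)]
    return lo + int(np.argmax(diffs))
```

In two dimensions the same search would run along the local normal of the coarse boundary at each edge point.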
(7) Using the precise pupil boundary obtained in step (6), perform an ellipse fitting based on the least squares method to obtain the pupil region parameters.
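The least-squares ellipse fit can be sketched as a linear fit of the general conic equation to the boundary points; converting the six conic coefficients back to center, axes and orientation is omitted here, and the function name is illustrative:

```python
import numpy as np

def fit_ellipse_lstsq(xs, ys):
    """Fit the general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to
    the boundary points in the least-squares sense, taking the singular
    vector of the design matrix with the smallest singular value.
    Returns the six conic coefficients, normalized to unit length."""
    D = np.column_stack([xs**2, xs * ys, ys**2, xs, ys, np.ones_like(xs)])
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    coeffs = Vt[-1]                  # direction minimizing ||D w|| with ||w|| = 1
    return coeffs / np.linalg.norm(coeffs)
```

For a valid ellipse the discriminant b^2 - 4ac of the fitted conic is negative; the center, semi-axes and rotation angle (the pupil region parameters) then follow from standard conic-to-ellipse conversion.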
A person of ordinary skill in the art will understand that all or some of the steps in the methods of the above embodiments can be implemented by a program instructing the relevant hardware, and that the program can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk or an optical disc.

The above discloses only preferred embodiments of the present invention, which of course cannot limit the scope of its claims; equivalent changes made according to the claims of the present invention therefore still fall within the scope of the present invention.

Claims (1)

  1. A machine vision-based method for processing eccentric photorefraction images, characterized by comprising the following steps:
    collecting eye images;
    computing Haar features using the integral image method;
    using the Adaboost learning algorithm to coarsely locate the human eye region; if a pupil is present, continuing with the following steps, and if no pupil is present, ending the processing of the image;
    applying Wallis dodging to the pupil region to homogenize the pupil gray values and enhance the pupil edge information;
    performing binarization and blob analysis to remove noise and interference regions, retaining only the region where the pupil is located;
    obtaining the precise pupil boundary at the binarized pupil edge using the gray difference method;
    performing a least-squares ellipse fitting on the obtained precise pupil boundary to obtain the pupil region parameters.
PCT/CN2019/089777 2019-04-18 2019-06-03 Machine vision-based method for processing eccentric photorefraction image WO2020211174A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910313589.3A CN110096978A (en) 2019-04-18 2019-04-18 The method of eccentricity cycles image procossing based on machine vision
CN201910313589.3 2019-04-18

Publications (1)

Publication Number Publication Date
WO2020211174A1 true WO2020211174A1 (en) 2020-10-22

Family

ID=67445197

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/089777 WO2020211174A1 (en) 2019-04-18 2019-06-03 Machine vision-based method for processing eccentric photorefraction image

Country Status (2)

Country Link
CN (1) CN110096978A (en)
WO (1) WO2020211174A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116725479A (en) * 2023-08-14 2023-09-12 杭州目乐医疗科技股份有限公司 Self-help optometry instrument and self-help optometry method

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112022081B (en) * 2020-08-05 2023-08-25 广东小天才科技有限公司 Method for detecting eyesight, terminal equipment and computer readable storage medium
CN113627231B (en) * 2021-06-16 2023-10-31 温州医科大学 Automatic segmentation method for liquid region in retina OCT image based on machine vision

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080219515A1 (en) * 2007-03-09 2008-09-11 Jiris Usa, Inc. Iris recognition system, a method thereof, and an encryption system using the same
CN103136512A (en) * 2013-02-04 2013-06-05 重庆市科学技术研究院 Pupil positioning method and system
CN103366157A (en) * 2013-05-03 2013-10-23 马建 Method for judging line-of-sight distance of human eye
CN104013384A (en) * 2014-06-11 2014-09-03 温州眼视光发展有限公司 Anterior ocular segment cross-sectional image feature extraction method
CN104050667A (en) * 2014-06-11 2014-09-17 温州眼视光发展有限公司 Pupil tracking image processing method
CN105279774A (en) * 2015-10-13 2016-01-27 金晨晖 Digital image identification method of refractive errors
CN107506705A (en) * 2017-08-11 2017-12-22 西安工业大学 Pupil-Purkinje-image-based eye tracking and gaze extraction method
CN108921010A (en) * 2018-05-15 2018-11-30 北京环境特性研究所 Pupil detection method and detection device
CN109359503A (en) * 2018-08-15 2019-02-19 温州生物材料与工程研究所 Pupil recognition image processing method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116725479A (en) * 2023-08-14 2023-09-12 杭州目乐医疗科技股份有限公司 Self-help optometry instrument and self-help optometry method
CN116725479B (en) * 2023-08-14 2023-11-10 杭州目乐医疗科技股份有限公司 Self-help optometry instrument and self-help optometry method

Also Published As

Publication number Publication date
CN110096978A (en) 2019-08-06

Similar Documents

Publication Publication Date Title
CN109345469B Speckle denoising method in OCT imaging based on conditional generative adversarial network
US10413180B1 (en) System and methods for automatic processing of digital retinal images in conjunction with an imaging device
CN109684915B (en) Pupil tracking image processing method
CN108985210A Gaze tracking method and system based on geometric features of the human eye
CN111933275B (en) Depression evaluation system based on eye movement and facial expression
WO2020211174A1 (en) Machine vision-based method for processing eccentric photorefraction image
JP5502081B2 (en) Biometric authentication by examining the surface map of the second refraction of the eye
US20120157800A1 (en) Dermatology imaging device and method
CN104102899A (en) Retinal vessel recognition method and retinal vessel recognition device
Calimeri et al. Optic disc detection using fine tuned convolutional neural networks
CN112837805A (en) Deep learning-based eyelid topological morphology feature extraction method
Noris et al. Calibration-free eye gaze direction detection with gaussian processes
Aggarwal et al. Towards automating retinoscopy for refractive error diagnosis
Laaksonen Spectral retinal image processing and analysis for ophthalmology
CN115456974A (en) Strabismus detection system, method, equipment and medium based on face key points
CN112598028B (en) Eye fundus image registration model training method, eye fundus image registration method and eye fundus image registration device
CN110796638B (en) Pore detection method
CN115375611A (en) Model training-based refraction detection method and detection system
CN111008569A (en) Glasses detection method based on face semantic feature constraint convolutional network
US20220180509A1 (en) Diagnostic tool for eye disease detection
CN113011286B (en) Squint discrimination method and system based on deep neural network regression model of video
To et al. Optimization of a Novel Automated, Low Cost, Three-Dimensional Photogrammetry System (PHACE)
Wang Investigation of image processing and computer-assisted diagnosis system for automatic video vision development assessment
Valencia Automatic detection of diabetic related retina disease in fundus color images
Rosa An accessible approach for corneal topography

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19924805; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19924805; Country of ref document: EP; Kind code of ref document: A1)