WO2020211174A1 - Machine vision-based method for processing an eccentric photorefraction image - Google Patents

Machine vision-based method for processing an eccentric photorefraction image

Info

Publication number
WO2020211174A1
WO2020211174A1 PCT/CN2019/089777 CN2019089777W
Authority
WO
WIPO (PCT)
Prior art keywords
pupil
image
area
processing
present
Prior art date
Application number
PCT/CN2019/089777
Other languages
English (en)
Chinese (zh)
Inventor
陈浩
于航
黄锦海
郑晓波
梅晨阳
Original Assignee
温州医科大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 温州医科大学 filed Critical 温州医科大学
Publication of WO2020211174A1 publication Critical patent/WO2020211174A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/245Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor

Definitions

  • The invention belongs to the field of ophthalmic image processing, and in particular relates to a machine vision-based method for processing eccentric photorefraction images.
  • Retinoscopy is the gold standard for measuring refractive error, with an accuracy of 0.25 D, but for children it has limitations.
  • The handheld vision screener is an instrument developed in recent years specifically for vision screening of infants and young children. Its key characteristic is that the measurement can be carried out at a distance from the examinee and does not require a high degree of cooperation. It is therefore suitable not only for cooperative subjects, as earlier examination methods are, but also for vision screening of infants, young children and poorly cooperative people.
  • The camera uses an infrared light source projected onto the retina, and the light reflected by the retina presents different patterns in different refractive states.
  • The camera records these patterns and calculates data such as sphere, cylinder and axis. A single measurement yields the refractive status, pupil diameter, interpupillary distance and eye position of both eyes, which lets doctors screen quickly and fully understand the patient's visual development.
  • The principle of eccentric photorefraction is that near-infrared light-emitting diodes form a light source array; the light enters the pupil at a specific angle from a certain distance, is reflected by the retina and, after being refracted by the eye's optics, exits the pupil area and is captured by the camera. The refractive and accommodative state of the examined eye therefore determines the shape and brightness of the light and shadow in its pupil area, and processing and analyzing these pupil light-and-shadow images yields the corresponding vision measurement results.
  • When the image acquisition device (camera or video camera) captures the eye image, both eyes are photographed at the same time, so the image contains a great deal of unwanted interference information besides the eyes, which affects the accuracy of the vision measurement result.
  • The purpose of the present invention is to overcome the shortcomings of the prior art and provide a machine vision-based method for processing eccentric photorefraction images.
  • A machine vision-based method for processing eccentric photorefraction images is characterized by comprising the following steps:
  • The present invention first uses an Adaboost strong-classifier self-learning method based on Haar-like rectangle features to locate the pupil area (a rough-localization sketch using a pretrained cascade appears after this list).
  • Because of the eccentric photography principle, myopic or hyperopic pupils show uneven brightness, so a Wallis light-equalization algorithm is adopted; it makes the pupil gray values as uniform as possible and enhances pupil edge information.
  • Binarization and blob analysis are then performed to remove noise and interference regions so that only the region containing the pupil is retained; the gray-difference method is applied at the binarized pupil edge to obtain the precise pupil boundary; and finally a least-squares ellipse fitting is performed to output the pupil area parameters (see the segmentation-and-fitting sketch after this list).
  • The invention eliminates interference information, accurately obtains the pupil area parameters, and helps improve the accuracy of refraction measurement for infants, young children and poorly cooperative people.
  • Figure 1 is a schematic flow diagram of the present invention.
  • Figure 2 is a schematic diagram of the Adaboost learning algorithm training Haar features.
  • A machine vision-based eccentric photorefraction image processing method includes the following steps:
  • Acquire eye images: the camera continuously collects eye images, and each image is processed according to the following steps.
  • The Haar feature value reflects the gray-level changes in an image.
  • Haar features include edge features, line features, center features and diagonal features, which are combined into feature templates. A feature template contains white and black rectangles, and the feature value of the template is defined as the sum of the pixels in the white rectangles minus the sum of the pixels in the black rectangles.
  • The integral image is a fast algorithm that can obtain the sum of pixels over any region of the image with only a single pass over the image, which greatly improves the efficiency of computing Haar features.
  • The main idea is to store in memory, as the elements of an array, the sums of the pixels in the rectangles spanned from the image origin to each point. When the sum of pixels over a certain region is needed, the array elements can be indexed directly instead of recomputing the sum over that region, which speeds up the calculation.
  • The integral image ii is constructed so that the value at position (i, j) is the sum of all pixels of the original image I above and to the left of that position: ii(i, j) = Σ_{i′ ≤ i, j′ ≤ j} I(i′, j′) (see the integral-image sketch after this list).
  • Adaboost learning algorithm: the Adaboost learning algorithm is used to roughly locate the human eye area.
  • The basic idea of the Adaboost learning algorithm is to train models in successive rounds: a new model is trained in each round, and at the end of each round the misclassified samples have their weights increased in the training set used for the next round, after which the next round of learning produces a new model (a generic reweighting sketch appears after this list).
  • The main idea is that later models compensate for the errors of earlier models, and the ensemble is built up by adding new models through continuous iteration. Each time a model is learned, its classification accuracy must be greater than 0.5; false detections are acceptable, but targets must not be missed.
  • The gray-level mean of a grayscale image reflects its brightness, and the variance reflects the dynamic range of its gray-level changes. Because ambient lighting and subjects differ, the pupil brightness and variance differ from frame to frame, and if the examined eye is myopic or hyperopic the brightness within a single pupil is also uneven; this uneven gray distribution affects the subsequent pupil segmentation, so a light-equalization algorithm is used to minimize the effect of uneven illumination.
  • The Wallis filter maps the gray-level mean and variance of an image to fixed values, making the gray mean and variance of different images approximately equal. It is mainly used to bring the gray mean and standard deviation of different images, or of different positions within one image, to approximately equal values, and to enhance the brightness and contrast of dark areas in unevenly illuminated images.
  • The specific algorithm formula is the Wallis transformation applied to each pixel (a sketch of the standard Wallis form appears after this list).
  • The precise pupil boundary is obtained by the gray-difference method.
  • Step (7): using the precise pupil boundary obtained in step (6), perform an ellipse fitting based on the least-squares method to obtain the pupil area parameters.
  • A person of ordinary skill in the art can understand that all or part of the steps in the method of the foregoing embodiments can be implemented by a program instructing the relevant hardware.
  • The program can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk or an optical disk.
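
The following is a minimal sketch of the integral-image construction and the Haar feature value described above (sum of white-rectangle pixels minus sum of black-rectangle pixels). It is written in Python with NumPy; the function names are illustrative and do not come from the patent.

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Return ii with ii[i, j] = sum of img[:i, :j], i.e. all pixels above and
    to the left of (i, j) in the original image (zero row/column padded)."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii: np.ndarray, top: int, left: int, h: int, w: int) -> int:
    """Sum of the h x w rectangle with top-left corner (top, left),
    obtained from four lookups in the integral image."""
    return int(ii[top + h, left + w] - ii[top, left + w]
               - ii[top + h, left] + ii[top, left])

def haar_edge_feature(ii: np.ndarray, top: int, left: int, h: int, w: int) -> int:
    """Two-rectangle (edge-type) Haar feature: white half minus black half."""
    half = w // 2
    return rect_sum(ii, top, left, h, half) - rect_sum(ii, top, left + half, h, half)
```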
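
The round-by-round reweighting described above can be sketched as follows, using scikit-learn decision stumps as weak learners over feature vectors (for example Haar feature values). This is a generic Adaboost sketch for illustration, not the patent's own training procedure, and it assumes labels of +1 for eye samples and -1 for non-eye samples.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_train(X: np.ndarray, y: np.ndarray, n_rounds: int = 10):
    """Train an Adaboost ensemble of decision stumps on labels y in {-1, +1}.

    After each round the weights of misclassified samples are increased, so the
    next weak model concentrates on the samples the previous ones got wrong."""
    n = len(y)
    w = np.full(n, 1.0 / n)                # initial uniform sample weights
    models, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)
        if err >= 0.5:                     # weak learner must beat chance
            break
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * pred)     # up-weight misclassified samples
        w /= w.sum()
        models.append(stump)
        alphas.append(alpha)
    return models, alphas

def adaboost_predict(models, alphas, X: np.ndarray) -> np.ndarray:
    """Strong classifier: sign of the alpha-weighted vote of the weak models."""
    score = sum(a * m.predict(X) for a, m in zip(alphas, models))
    return np.sign(score)
```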
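
The rough eye-region localization can be sketched with an OpenCV Haar cascade, which is an Adaboost-trained classifier over Haar-like features. The patent trains its own classifier on eccentric-photorefraction images; the pretrained haarcascade_eye.xml, the file name eye_frame.png and the detection parameters below are stand-in assumptions for illustration only.

```python
import cv2

def locate_eye_regions(gray):
    """Roughly locate candidate eye/pupil regions with a Haar-cascade detector."""
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
    # scaleFactor and minNeighbors are typical defaults, not values from the patent.
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(40, 40))
    return [(int(x), int(y), int(w), int(h)) for (x, y, w, h) in boxes]

if __name__ == "__main__":
    frame = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    for (x, y, w, h) in locate_eye_regions(frame):
        print("candidate region:", x, y, w, h)
```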
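
The Wallis light equalization can be sketched as below, assuming the standard Wallis form in which each pixel g with local mean m_g and local standard deviation s_g is mapped to (g - m_g) * c * s_f / (c * s_g + (1 - c) * s_f) + b * m_f + (1 - b) * m_g, where m_f and s_f are the target mean and standard deviation, c is a contrast constant and b a brightness constant. The target values, window size and constants below are illustrative assumptions, not parameters taken from the patent.

```python
import cv2
import numpy as np

def wallis_filter(gray: np.ndarray, target_mean: float = 127.0,
                  target_std: float = 40.0, c: float = 0.8, b: float = 0.9,
                  win: int = 31) -> np.ndarray:
    """Map the local gray mean/std toward fixed targets (standard Wallis form)."""
    g = gray.astype(np.float32)
    local_mean = cv2.blur(g, (win, win))
    local_sq_mean = cv2.blur(g * g, (win, win))
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 1e-6))

    gain = (c * target_std) / (c * local_std + (1.0 - c) * target_std)
    out = (g - local_mean) * gain + b * target_mean + (1.0 - b) * local_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```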
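
The segmentation and fitting steps (binarization, blob analysis, gray-difference boundary refinement, least-squares ellipse fit) can be sketched as below with OpenCV. The Otsu threshold, the minimum blob area, the radial search range and the use of cv2.fitEllipse as the least-squares fit are illustrative choices, not details specified by the patent.

```python
import cv2
import numpy as np

def fit_pupil_ellipse(equalized: np.ndarray, min_area: int = 200, search: int = 6):
    """Binarize, keep the largest plausible blob, refine its boundary by the
    maximum gray difference along each radial direction, then fit an ellipse."""
    # 1) Binarization (Otsu threshold as an illustrative choice).
    _, binary = cv2.threshold(equalized, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 2) Blob analysis: keep the largest connected component above min_area.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    areas = stats[1:, cv2.CC_STAT_AREA]
    if len(areas) == 0 or areas.max() < min_area:
        return None
    pupil_label = 1 + int(np.argmax(areas))
    mask = np.uint8(labels == pupil_label) * 255
    cx, cy = centroids[pupil_label]

    # 3) Gray-difference refinement: along the ray through each coarse contour
    #    point, keep the position with the largest gray-level step.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    coarse = contours[0].reshape(-1, 2)
    h, w = equalized.shape
    refined = []
    for (x, y) in coarse:
        dx, dy = x - cx, y - cy
        norm = max(np.hypot(dx, dy), 1e-6)
        dx, dy = dx / norm, dy / norm
        best, best_diff = (int(x), int(y)), -1.0
        for t in range(-search, search):
            x0, y0 = int(round(x + t * dx)), int(round(y + t * dy))
            x1, y1 = int(round(x + (t + 1) * dx)), int(round(y + (t + 1) * dy))
            if 0 <= x0 < w and 0 <= y0 < h and 0 <= x1 < w and 0 <= y1 < h:
                diff = abs(float(equalized[y1, x1]) - float(equalized[y0, x0]))
                if diff > best_diff:
                    best, best_diff = (x1, y1), diff
        refined.append(best)

    # 4) Least-squares ellipse fit; returns ((cx, cy), (major, minor), angle).
    pts = np.array(refined, dtype=np.float32)
    if len(pts) < 5:
        return None
    return cv2.fitEllipse(pts)
```

A typical call chain on one frame would be: locate a candidate region with the cascade sketch, equalize it with wallis_filter, then pass the result to fit_pupil_ellipse to obtain the ellipse center, axes and angle as the pupil area parameters.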

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a method for processing ophthalmic medical images, and specifically to a machine vision-based method for processing eccentric photorefraction images. The present invention uses an Adaboost strong-classifier self-learning method based on Haar-like rectangle features to locate the pupil area; since the eccentric photography principle causes a myopic or hyperopic pupil to show uneven brightness, the Wallis algorithm is used to make the pupil gray values as uniform as possible, thereby enhancing the pupil edge information. Binarization blob analysis is then performed, noise points and interference regions are removed so that only the region where the pupil is located is retained, a precise pupil boundary is then obtained with a gray-difference method at the binarized pupil edge, and finally an ellipse fitting based on the least-squares method is performed to output the pupil area parameters. The present invention eliminates interference information and accurately obtains the pupil area parameters, which helps improve the accuracy of refraction measurement for infants and uncooperative subjects.
PCT/CN2019/089777 2019-04-18 2019-06-03 Procédé basé sur la vision artificielle pour traiter une image de photoréfraction excentrique WO2020211174A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910313589.3 2019-04-18
CN201910313589.3A CN110096978A (zh) 2019-04-18 2019-04-18 基于机器视觉的偏心摄影验光图像处理的方法

Publications (1)

Publication Number Publication Date
WO2020211174A1 (fr)

Family

ID=67445197

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/089777 WO2020211174A1 (fr) 2019-04-18 2019-06-03 Procédé basé sur la vision artificielle pour traiter une image de photoréfraction excentrique

Country Status (2)

Country Link
CN (1) CN110096978A (fr)
WO (1) WO2020211174A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112022081B (zh) * 2020-08-05 2023-08-25 广东小天才科技有限公司 一种检测视力的方法、终端设备以及计算机可读存储介质
CN113627231B (zh) * 2021-06-16 2023-10-31 温州医科大学 一种基于机器视觉的视网膜oct图像中液体区域自动分割方法

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080219515A1 (en) * 2007-03-09 2008-09-11 Jiris Usa, Inc. Iris recognition system, a method thereof, and an encryption system using the same
CN103136512A (zh) * 2013-02-04 2013-06-05 重庆市科学技术研究院 一种瞳孔定位方法及系统
CN103366157A (zh) * 2013-05-03 2013-10-23 马建 一种人眼视线距离的判断方法
CN104013384A (zh) * 2014-06-11 2014-09-03 温州眼视光发展有限公司 眼前节断层图像特征提取方法
CN104050667A (zh) * 2014-06-11 2014-09-17 温州眼视光发展有限公司 瞳孔跟踪图像处理方法
CN105279774A (zh) * 2015-10-13 2016-01-27 金晨晖 一种屈光不正的数字化影像识别方法
CN107506705A (zh) * 2017-08-11 2017-12-22 西安工业大学 一种瞳孔‑普尔钦斑视线跟踪与注视提取方法
CN108921010A (zh) * 2018-05-15 2018-11-30 北京环境特性研究所 一种瞳孔检测方法及检测装置
CN109359503A (zh) * 2018-08-15 2019-02-19 温州生物材料与工程研究所 瞳孔识别图像处理方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116725479A (zh) * 2023-08-14 2023-09-12 杭州目乐医疗科技股份有限公司 一种自助式验光仪以及自助验光方法
CN116725479B (zh) * 2023-08-14 2023-11-10 杭州目乐医疗科技股份有限公司 一种自助式验光仪以及自助验光方法

Also Published As

Publication number Publication date
CN110096978A (zh) 2019-08-06

Similar Documents

Publication Publication Date Title
CN109345469B (zh) 一种基于条件生成对抗网络的oct成像中散斑去噪方法
US10413180B1 (en) System and methods for automatic processing of digital retinal images in conjunction with an imaging device
CN109684915B (zh) 瞳孔跟踪图像处理方法
CN108985210A (zh) 一种基于人眼几何特征的视线追踪方法及系统
CN111933275B (zh) 一种基于眼动与面部表情的抑郁评估系统
WO2020211174A1 (fr) Procédé basé sur la vision artificielle pour traiter une image de photoréfraction excentrique
JP5502081B2 (ja) 目の第二屈折の表面マップを検査することによる生体認証
US20120157800A1 (en) Dermatology imaging device and method
CN104102899A (zh) 视网膜血管识别方法及装置
Calimeri et al. Optic disc detection using fine tuned convolutional neural networks
CN112837805A (zh) 基于深度学习的眼睑拓扑形态特征的提取方法
Noris et al. Calibration-free eye gaze direction detection with gaussian processes
Aggarwal et al. Towards automating retinoscopy for refractive error diagnosis
Laaksonen Spectral retinal image processing and analysis for ophthalmology
CN115456974A (zh) 基于人脸关键点的斜视检测系统、方法、设备及介质
CN112598028B (zh) 眼底图像配准模型训练方法、眼底图像配准方法和装置
CN110796638B (zh) 毛孔检测方法
CN115375611A (zh) 一种基于模型训练的屈光检测方法和检测系统
CN111008569A (zh) 一种基于人脸语义特征约束卷积网络的眼镜检测方法
CN113011286B (zh) 基于视频的深度神经网络回归模型的斜视判别方法及系统
To et al. Optimization of a Novel Automated, Low Cost, Three-Dimensional Photogrammetry System (PHACE)
Wang Investigation of image processing and computer-assisted diagnosis system for automatic video vision development assessment
Valencia Automatic detection of diabetic related retina disease in fundus color images
Rosa An accessible approach for corneal topography
Rodrigues Retinal image quality assessment using deep convolutional neural networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19924805

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19924805

Country of ref document: EP

Kind code of ref document: A1