CN110477858A - Examined-eye alignment method, device, and ophthalmic apparatus - Google Patents


Info

Publication number
CN110477858A
CN110477858A
Authority
CN
China
Prior art keywords
examinee
picture
tested eye
camera lens
eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810459248.2A
Other languages
Chinese (zh)
Other versions
CN110477858B (en)
Inventor
王辉 (Wang Hui)
刘艳华 (Liu Yanhua)
郭曙光 (Guo Shuguang)
Current Assignee (The listed assignees may be inaccurate.)
Shenzhen Moting Medical Technology Co ltd
Original Assignee
Shenzhen Certainn Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion.)
Filing date
Publication date
Application filed by Shenzhen Certainn Technology Co Ltd
Priority to CN201810459248.2A
Publication of CN110477858A
Application granted
Publication of CN110477858B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/11 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring interpupillary distance or diameter of pupils
    • A61B3/111 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring interpupillary distance
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14 Arrangements specially adapted for eye photography
    • A61B3/15 Arrangements specially adapted for eye photography with means for aligning, spacing or blocking spurious reflection; with means for relaxing
    • A61B3/152 Arrangements specially adapted for eye photography with means for aligning, spacing or blocking spurious reflection; with means for relaxing for aligning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Abstract

An examined-eye alignment method, device, and ophthalmic apparatus. The method controls the lens of an ophthalmic apparatus so that it aligns automatically with an examinee's examined eye, and comprises: an image acquisition step, in which a picture of the examinee is obtained, the picture covering at least the facial region from above the examinee's eyes to below the nose; an image processing step, in which the picture is processed to determine facial parameters of the examinee, including at least the interpupillary distance and the eye-chin distance; and a lens moving step, in which the lens is moved under control of the facial parameters until it is aligned with the examined eye. The examined-eye alignment method provided by the embodiments of the present invention achieves automatic alignment with the examined eye and is simple to operate.

Description

Examined-eye alignment method, device, and ophthalmic apparatus
Technical field
The invention belongs to the field of ophthalmic apparatus, and more particularly relates to an examined-eye alignment method, device, and ophthalmic apparatus for controlling the lens of an ophthalmic apparatus to align automatically with an examinee's examined eye.
Background art
When performing operations such as an eye examination on an examinee, the lens of the ophthalmic apparatus must be aligned with the examinee's examined eye. In the prior art, an operator either adjusts the ophthalmic apparatus by hand or drives a motor that moves the apparatus into alignment.
On the one hand, prior-art operation is complicated: achieving alignment by hand or by motor control requires an operator who has received training or gained experience. In remote regions, where resources are scarce, most operators have neither; even if a government purchases a high-end ophthalmic apparatus for a remote hospital, a suitable operator may not be found, or training one may require considerable public expense. As a result, people in remote regions cannot escape the "Matthew effect" in ophthalmic health care, and the underlying problem of social equity remains unresolved.
On the other hand, when alignment is performed by an operator who is not highly skilled, the average time spent per examinee with prior-art equipment is long. This wastes the time of both the operator and the examinee and lowers the utilization of the ophthalmic apparatus; since such apparatus is usually expensive, the waste caused by inefficient use is considerable, and it is one of the factors behind the widespread problems of access to, and cost of, medical care.
Moreover, as technology advances, automation and unattended operation are increasingly the trend, and the prior art does not follow this trend.
Summary of the invention
In view of the problems in the prior art, embodiments of the present invention provide an examined-eye alignment method for controlling the lens of an ophthalmic apparatus to align with an examinee's examined eye, achieving automatic alignment with simple operation.
The technical solution provided by the embodiments of the present invention is as follows:
An examined-eye alignment method for controlling the lens of an ophthalmic apparatus to align automatically with an examinee's examined eye, comprising: an image acquisition step of obtaining a picture of the examinee, the picture covering at least the facial region from above the examinee's eyes to below the nose; an image processing step of processing the picture to determine facial parameters of the examinee, including at least the interpupillary distance and the eye-chin distance; and a lens moving step of moving the lens under control of the facial parameters until the lens is aligned with the examined eye.
An embodiment of the present invention also provides an examined-eye alignment device for controlling the lens of an ophthalmic apparatus to align automatically with an examinee's examined eye, comprising: an image acquisition module for obtaining a picture of the examinee, the picture covering at least the facial region from above the examinee's eyes to below the nose; an image processing module for processing the picture to determine facial parameters of the examinee, including at least the interpupillary distance and the eye-chin distance; and a lens moving module for moving the lens under control of the facial parameters so that the lens is aligned with the examined eye.
An embodiment of the present invention also provides an ophthalmic apparatus comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the examined-eye alignment method provided by the embodiments of the present invention.
An embodiment of the present invention also provides a computer-readable storage medium comprising a stored computer program, wherein, when the computer program runs, the device on which the storage medium resides is controlled to execute the examined-eye alignment method provided by the embodiments of the present invention.
With the examined-eye alignment method and device provided by the embodiments of the present invention, once facial parameters of the examinee such as the interpupillary distance and the eye-chin distance are determined, the position of the lens of the ophthalmic apparatus is adjusted automatically according to these parameters, so that the lens aligns automatically with the examined eye. The method is simple to operate and easy to learn.
Brief description of the drawings
Fig. 1 is a flow chart of the examined-eye alignment method provided by an embodiment of the present invention.
Fig. 2 illustrates the principle by which the spatial coordinates of a facial feature are determined in the examined-eye alignment method provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solution of the present invention is described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
An embodiment of the present invention provides an examined-eye alignment method (hereinafter "the alignment method") for controlling the lens of an ophthalmic apparatus to align automatically with an examinee's examined eye. It should be understood that, for brevity, the term "examinee" refers both to a person examined by the ophthalmic apparatus and to a person treated by it; likewise, "examined eye" refers both to an eye being examined by the ophthalmic apparatus and to an eye being treated by it.
The ophthalmic apparatus may, for example, be an ophthalmic OCT (optical coherence tomography) examination instrument comprising a workbench, an apparatus body, a detection probe, a bracket, a camera, and a controller. The apparatus body, the bracket, and the camera are fixed on the workbench, with the bracket on the side of the apparatus body nearer the examinee. The detection probe is arranged on the apparatus body and can move relative to it in the up-down, left-right, and front-back directions; the lens is fixed in the detection probe and moves with it.
The bracket fixes the examinee's head and comprises a chin rest at its bottom and a forehead rest at its top; the chin rest can be raised and lowered.
The camera photographs the examinee. In a preferred embodiment of the present invention, the ophthalmic apparatus comprises a first camera and a second camera, which photograph the examinee from different angles at set positions.
The controller is electrically connected to the detection probe, the bracket, and the camera; it controls the movement of the detection probe and the chin rest, and processes the photographs taken by the camera.
Referring to Fig. 1, the alignment method comprises the following steps:
S1, an image acquisition step: obtain a picture of the examinee covering at least the facial region from above the examinee's eyes to below the nose.
In an embodiment of the present invention, the picture of the examinee is obtained by photographing the examinee with the camera arranged on the ophthalmic apparatus.
In a preferred embodiment of the present invention, the image acquisition step comprises a first image acquisition step and a second image acquisition step.
In the first image acquisition step, a first picture of the examinee is obtained; specifically, the first camera arranged on the ophthalmic apparatus photographs the examinee to produce the first picture.
In the second image acquisition step, a second picture of the examinee is obtained; specifically, the second camera arranged on the ophthalmic apparatus photographs the examinee to produce the second picture.
The first picture and the second picture are taken from different shooting angles.
S2, an image processing step: process the picture to determine facial parameters of the examinee, including at least the interpupillary distance and the eye-chin distance.
The interpupillary distance is the distance between the centres of the examinee's left and right pupils; the eye-chin distance is the distance from the bottom of the examinee's chin to the midpoint of the line connecting the two pupil centres. In a preferred embodiment of the present invention, the facial parameters further include the forehead-chin distance, i.e. the distance from the centre of the examinee's forehead to the bottom of the chin.
In an embodiment of the present invention, the image processing step further comprises an image segmentation step, a feature identification step, and a parameter determination step.
In the image segmentation step, processing regions are segmented from the picture. Specifically, at least one skin region of the examinee is identified in the picture taken by the camera, and at least one processing region is segmented out, each processing region containing one skin region; preferably, each processing region is the smallest rectangular region containing its skin region. In a preferred embodiment of the present invention, the examinee's skin regions are identified, and the processing regions segmented, from the first picture and the second picture respectively.
In an embodiment of the present invention, the picture of the examinee is captured in RGB format. During processing, the picture is converted from RGB to the YCbCr or YIQ colour space; after the conversion, the skin regions in the picture are identified by an automatic clustering algorithm.
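The colour-space conversion and skin segmentation can be sketched as follows. This is a minimal illustration, not the patent's implementation: the BT.601 conversion matrix is standard, but the fixed Cb/Cr bounds are a commonly cited approximation standing in for the automatic clustering the patent actually uses, and `skin_bounding_box` (a name invented here) mirrors the "smallest rectangular processing region" idea.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Coarse skin test via fixed Cb/Cr bounds (an assumption here;
    the patent identifies skin by automatic clustering instead)."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

def skin_bounding_box(pixels):
    """Smallest rectangle (r0, c0, r1, c1) enclosing all skin pixels,
    i.e. the minimal rectangular processing region; None if no skin."""
    coords = [(i, j) for i, row in enumerate(pixels)
              for j, (r, g, b) in enumerate(row) if is_skin(r, g, b)]
    if not coords:
        return None
    rows = [c[0] for c in coords]
    cols = [c[1] for c in coords]
    return (min(rows), min(cols), max(rows), max(cols))
```

In practice the clustering would replace the fixed bounds, but the bounding-box step is the same either way.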
In the feature identification step, facial features are identified within the processing regions. The facial features include the eyes, and further include at least one of the nose and the lips.
The eyes are identified first. At least one eye candidate position is located in each processing region; since eyes are dark and exhibit symmetry, feature extraction is used to judge whether an eye is present at each candidate position. If an eye is judged to be present at a candidate position, the coordinates of that candidate position within the processing region are taken as the coordinates of the eye, the processing region containing it is judged to contain the examinee's facial region and is processed further, and processing regions in which no eye is found are discarded.
The eye candidate regions are located by projection: integral projection, difference projection, or integral-difference projection. Specifically, in a preferred embodiment of the present invention, a processing region is a rectangular array of m × n pixels, with m rows and n columns. Let I(i, j) denote the value of the pixel in row i and column j of the processing region; interP(i) denotes the horizontal integral of the pixel values of row i, interP(j) the vertical integral of the pixel values of column j, diffP(i) the horizontal difference value of the pixel values of row i, and diffP(j) the vertical difference value of the pixel values of column j.
Wherein:

interP(i) = Σ_{j=1..n} I(i, j),  diffP(i) = Σ_{j=1..n−1} |I(i, j+1) − I(i, j)|,

interP(j) = Σ_{i=1..m} I(i, j),  diffP(j) = Σ_{i=1..m−1} |I(i+1, j) − I(i, j)|.
This yields m horizontal integral values, m horizontal difference values, n vertical integral values, and n vertical difference values.
Plotting the m horizontal integral values against their row indices gives the horizontal integral projection line; its extreme points give the ordinate of an eye candidate position within the processing region. Alternatively, plotting the m horizontal difference values against their row indices gives the horizontal difference projection line, whose extreme points likewise give the ordinate of an eye candidate position within the processing region.
Plotting the n vertical integral values against their column indices gives the vertical integral projection line; its extreme points give the abscissa of an eye candidate position within the processing region. Alternatively, plotting the n vertical difference values against their column indices gives the vertical difference projection line, whose extreme points likewise give the abscissa of an eye candidate position within the processing region.
Preferably, each horizontal integral value is averaged with the corresponding horizontal difference value; plotting these m averages against their row indices gives the horizontal integral-difference projection line, whose extreme points give the ordinate of an eye candidate position within the processing region.
Likewise, each vertical integral value is averaged with the corresponding vertical difference value; plotting these n averages against their column indices gives the vertical integral-difference projection line, whose extreme points give the abscissa of an eye candidate position within the processing region.
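The projection values can be sketched for a small grayscale processing region. This assumes the standard definitions of integral projection (row/column sums) and difference projection (sums of absolute neighbour differences); a dark eye row then shows up as an extremum, here the minimum of the row integral:

```python
def projections(img):
    """Row/column integral and difference projections of a grayscale
    image given as a list of m rows of n pixel values."""
    m, n = len(img), len(img[0])
    interp_row = [sum(img[i]) for i in range(m)]
    interp_col = [sum(img[i][j] for i in range(m)) for j in range(n)]
    diffp_row = [sum(abs(img[i][j + 1] - img[i][j]) for j in range(n - 1))
                 for i in range(m)]
    diffp_col = [sum(abs(img[i + 1][j] - img[i][j]) for i in range(m - 1))
                 for j in range(n)]
    return interp_row, interp_col, diffp_row, diffp_col

def darkest_row(interp_row):
    """Candidate eye ordinate: the row whose integral projection
    is minimal (eyes are darker than surrounding skin)."""
    return min(range(len(interp_row)), key=interp_row.__getitem__)
```

The integral-difference variant of the patent would simply average the two lists element-wise before searching for extrema.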
After the eyes are identified, the nose and/or the lips are identified.
Nose identification proceeds as follows. After a processing region is judged to contain the examinee's facial region, the region is rotated so that the line connecting the two pupil centres is horizontal in the rotated image. The part of the region below the pupil-centre line is then segmented out as a first sub-processing region; within it, the abscissa of the nose is located by vertical integral-difference projection, and the ordinate of the midpoint of the pupil-centre line within the first sub-processing region is taken as the nose's ordinate. The region is then rotated back to its state before rotation, yielding the coordinates of the nose within the processing region.
When the acquired picture of the examinee includes the lips, the lips can be identified as follows. After a processing region is judged to contain the examinee's facial region, the region is rotated so that the line connecting the two pupil centres is horizontal in the rotated image. The part of the region below the pupil-centre line is segmented out as a second sub-processing region; within it, the abscissa of the lips is located by vertical integral-difference projection, and the ordinate of the midpoint of the pupil-centre line within the second sub-processing region is taken as the lips' ordinate. The region is then rotated back to its state before rotation, yielding the coordinates of the lips within the processing region.
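The rotate-then-project step above can be illustrated with a small geometric helper. This is a hypothetical sketch (both function names are invented here): the angle of the pupil line is measured, and rotating by its negative makes that line horizontal, which is the precondition for the nose and lip projections.

```python
from math import atan2, cos, sin

def align_rotation(left_pupil, right_pupil):
    """Angle of the pupil-centre line above the horizontal; rotating the
    region by the negative of this angle levels the pupil line."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    return atan2(dy, dx)

def rotate_about(p, center, angle):
    """Rotate point p about center by angle (radians); used once to
    level the pupil line and once, reversed, to map coordinates back."""
    px, py = p[0] - center[0], p[1] - center[1]
    return (center[0] + px * cos(angle) - py * sin(angle),
            center[1] + px * sin(angle) + py * cos(angle))
```

Applying `rotate_about` with the positive angle afterwards restores a located feature to the original (unrotated) processing-region coordinates, as the description requires.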
In a preferred embodiment of the present invention, the same facial feature is identified in both the first picture and the second picture.
In the parameter determination step, the coordinates of the facial features are determined, and the facial parameters are determined from them. Specifically, the spatial coordinates of a facial feature can be determined from the parameters of the first camera, the parameters of the second camera, the relative positions of the two cameras, and the feature's coordinates in the processing region segmented from the first picture and in the processing region segmented from the second picture.
Denote the facial feature to be located by the letter P. The method for determining its spatial coordinates is detailed below with reference to Fig. 2:
The optical axis of the first camera is the first optical axis; its focal length is the first focal length f1, and the optical centre of its imaging lens is the first optical centre C1. The optical axis of the second camera is the second optical axis; its focal length is the second focal length f2, and the optical centre of its imaging lens is the second optical centre C2. The two optical axes lie in a first plane; the angle between them is θ, the distance between the two optical centres is L0, and the line connecting C1 and C2 is perpendicular to the first optical axis.
A second plane is constructed between the first optical centre C1 and the feature P, perpendicular to the first optical axis and at distance f1 from C1. In the second plane, a first coordinate system, a planar one, is constructed with the intersection O1 of the first optical axis and the second plane as origin, an X1 axis parallel to the line of optical centres as the horizontal axis, and a Y1 axis perpendicular to X1 as the vertical axis; the positive Y1 direction points into the page.
A third plane is constructed between the second optical centre C2 and P, perpendicular to the second optical axis and at distance f2 from C2; the intersection of the second optical axis with the third plane is O2. Preferably, the first and second cameras are arranged so that the corresponding image axes are parallel and identically directed. In the third plane, a second coordinate system, also planar, is constructed with O2 as origin, the intersection line of the third plane and the first plane as the X2 axis, and a Y2 axis perpendicular to X2 as the vertical axis; the angle between the positive X1 and X2 directions is acute, and the positive Y2 direction points into the page.
A third coordinate system, a spatial one, is constructed with the first optical centre C1 as origin: an X3 axis parallel to X1 as the horizontal axis, a Y3 axis (not shown) perpendicular to the first plane as the vertical axis, and a Z3 axis parallel to the first optical axis as the depth axis. The positive X3 direction coincides with the positive X1 direction, the positive Y3 direction points into the page, and the positive Z3 direction points from C1 toward the examinee.
A fourth coordinate system, also spatial, is constructed with the second optical centre C2 as origin: an X4 axis lying in the first plane and parallel to the third plane as the horizontal axis, a Y4 axis (not shown) perpendicular to the first plane as the vertical axis, and a Z4 axis parallel to the second optical axis as the depth axis. The angle between the positive X4 and X3 directions is acute, the positive Y4 direction points into the page, and the positive Z4 direction points from C2 toward the examinee.
The intersection of PC1 with the second plane is P1, whose coordinates in the first coordinate system are (x1, y1); the intersection of PC2 with the third plane is P2, whose coordinates in the second coordinate system are (x2, y2). It will be understood that, since P1 is the image of P in the first camera and P2 is its image in the second camera, (x1, y1) and (x2, y2) are known.
Let the coordinates of P in the third coordinate system be (x3, y3, z3). The perspective projection of the first camera gives:

x1 = f1·x3/z3 ...... (1)
y1 = f1·y3/z3 ...... (2)

In the third coordinate system, the position of the second optical centre C2 and the directions of the X2 and Y2 axes are known from the arrangement of the cameras: the unit vector parallel to Y2, with the same sense as its positive direction, is perpendicular to the first plane, and the unit vector parallel to X2 lies in the first plane at the angle θ to the X3 axis.

Let the coordinates of P in the fourth coordinate system be (x4, y4, z4). The perspective projection of the second camera gives:

x2 = f2·x4/z4 ...... (3)

and, since the Y3 and Y4 axes are parallel and identically directed,

y4 = y3 ...... (4)
y2 = f2·y4/z4 ...... (5)

The geometry relating the third and fourth coordinate systems further gives:

x4 = z3·sinθ − (L0 − x3)·cosθ ...... (6)
z4 = z3·cosθ + (L0 − x3)·sinθ ...... (7)

Combining (1), (2), (3), (4), (5), (6), and (7) yields:

f1·x3 − x1·z3 = 0 ...... (8)
f1·y3 − y1·z3 = 0 ...... (9)
f2·y3 − y2·[z3·cosθ + sinθ·(L0 − x3)] = 0 ...... (10)
−f2·[cosθ·(L0 − x3) − z3·sinθ] − x2·[z3·cosθ + sinθ·(L0 − x3)] = 0 ...... (11)

From (8), (9), (10), and (11), the coordinates (x3, y3, z3) of P in the third coordinate system can be obtained.
Once the coordinates of each facial feature in the third coordinate system have been obtained, each facial parameter is determined from the positional relationships between the features.
In a preferred embodiment of the present invention, the eyes and the nose of the examinee are identified in the feature identification step. From the positional relationship between the two eyes, the interpupillary distance is obtained; from the positional relationship between the eyes and the nose, the eye-nose distance H is obtained, i.e. the distance from the nose to the midpoint of the line connecting the two pupil centres.
The eye-chin distance S1 is obtained by the following formula:
S1 = k1·H
where the coefficient k1 reflects the statistical relationship between the eye-chin distance S1 and the eye-nose distance H (the "three sections, five eyes" rule of facial proportions). k1 ranges from 1.8 to 2.2; in a preferred embodiment of the present invention, k1 is 2.
The forehead-chin distance S2 is obtained by the following formula:
S2 = k2·H
where the coefficient k2 reflects the statistical relationship between the forehead-chin distance S2 and the eye-nose distance H. Specifically, k2 ranges from 2.8 to 3.2; in a preferred embodiment of the present invention, k2 is 3.
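The parameter formulas reduce to a few distance computations once landmark coordinates are known. A hypothetical sketch, using 2-D coordinates for brevity where the patent works with the 3-D coordinates of the third coordinate system; the function name and dictionary keys are illustrative:

```python
from math import hypot

def facial_parameters(left_pupil, right_pupil, nose, k1=2.0, k2=3.0):
    """Interpupillary distance directly; eye-chin and forehead-chin
    distances estimated from the eye-nose distance H via S1 = k1*H
    and S2 = k2*H, with the patent's preferred k1 = 2, k2 = 3."""
    pd = hypot(right_pupil[0] - left_pupil[0],
               right_pupil[1] - left_pupil[1])
    mid = ((left_pupil[0] + right_pupil[0]) / 2,
           (left_pupil[1] + right_pupil[1]) / 2)
    H = hypot(nose[0] - mid[0], nose[1] - mid[1])  # eye-nose distance
    return {"interpupillary": pd, "eye_chin": k1 * H, "forehead_chin": k2 * H}
```

With 3-D landmarks one would use the Euclidean norm of the 3-D difference instead of `hypot`; nothing else changes.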
In an embodiment of the present invention, the ophthalmic apparatus may include only one camera, which takes a single frontal photograph of the examinee; in this case the distance between the examinee and the camera when the photograph is taken is either a set value or is measured by a distance detection means such as an infrared ranging sensor. After the facial features are identified, each facial parameter can be obtained from the features' coordinates in the processing region together with the examinee-to-camera distance.
S3, a lens moving step: move the lens under control of the facial parameters until the lens is aligned with the examined eye.
During an eye examination or ophthalmic operation on the examinee, the examinee's head is fixed on the bracket: specifically, the chin rests on the chin rest and the forehead against the forehead rest. Before the head is fixed, the controller raises or lowers the chin rest according to the examinee's forehead-chin distance, adjusting the distance between the chin rest and the forehead rest to fit the examinee's head.
Taking the adjusted chin-rest position as reference, the controller moves the probe according to the examinee's interpupillary distance and eye-chin distance until the lens on the probe is aligned with one examined eye. After the examination or operation on that eye is complete, the controller moves the probe according to the interpupillary distance so that the lens aligns with the other examined eye.
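The final control step could be sketched as follows. This is purely illustrative, as the patent gives no control formulas: it assumes the adjusted chin-rest height as the vertical reference, the pupil plane one eye-chin distance above it, and each pupil half an interpupillary distance from the facial midline.

```python
def probe_target(chin_rest_y, eye_chin, pd, eye="right"):
    """Hypothetical lens target relative to the facial midline:
    vertical = chin-rest height + eye-chin distance,
    horizontal = +/- half the interpupillary distance."""
    y = chin_rest_y + eye_chin
    x = pd / 2 if eye == "right" else -pd / 2
    return x, y
```

Switching eyes then amounts to shifting the probe by one full interpupillary distance, matching the description of the second alignment.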
With the examined-eye alignment method provided by the embodiments of the present invention, once facial parameters of the examinee such as the interpupillary distance and the eye-chin distance are determined, the position of the lens of the ophthalmic apparatus is adjusted automatically according to these parameters, so that the lens aligns automatically with the examined eye; the method is simple to operate and easy to learn.
With the examined-eye alignment method provided by the preferred embodiments of the present invention, the examinee is photographed from two different angles, and the two acquired pictures are used to determine the spatial coordinates of the facial features precisely, which increases the accuracy of the facial parameters.
The embodiments of the present invention also provide an eye alignment device for controlling the lens of an ophthalmologic apparatus to be automatically aligned with the eye of an examinee to be examined, including an image acquisition module, an image processing module, and a lens moving module.
The image acquisition module is used to obtain a picture of the examinee, the picture including at least the facial area of the examinee above the nose and below the eyes.
In a preferred embodiment of the present invention, the image acquisition module includes a first image acquisition submodule and a second image acquisition submodule.
The first image acquisition submodule is used to obtain a first picture of the examinee, and the second image acquisition submodule is used to obtain a second picture of the examinee; the first picture and the second picture are taken from different shooting angles.
The image processing module is used to process the picture and determine the facial parameters of the examinee, the facial parameters including at least the interpupillary distance and the eye-chin distance. In a preferred embodiment of the present invention, the facial parameters further include the forehead-chin distance, which refers to the distance from the center of the examinee's forehead to the bottom of the chin.
In embodiments of the present invention, the image processing module further includes an image segmentation submodule, a feature identification submodule, and a parameter determination submodule.
The image segmentation submodule is used to segment a processing region from the picture. Specifically, at least one skin area of the examinee is identified in the picture, and at least one processing region is segmented out, each processing region containing one skin area; preferably, each processing region is the minimal rectangular area containing one skin area. In a preferred embodiment of the present invention, the skin areas of the examinee are identified in the first picture and in the second picture respectively, and the processing regions are segmented from each.
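One common way to realize the segmentation step above is a colour-space skin threshold followed by a minimum-bounding-rectangle crop. The sketch below applies a rule-of-thumb Cr/Cb threshold in YCrCb space to a synthetic image; the threshold values are a conventional heuristic, and the patent does not prescribe any particular segmentation method.

```python
import numpy as np

# Sketch: skin-area detection and minimal-bounding-rectangle crop.
# Cr in [133, 173] and Cb in [77, 127] is a widely used skin heuristic.

def rgb_to_ycrcb(img):
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128
    cb = (b - y) * 0.564 + 128
    return y, cr, cb

def skin_bounding_rect(img):
    """Return (top, left, bottom, right) of the minimal rectangle covering skin pixels."""
    _, cr, cb = rgb_to_ycrcb(img)
    mask = (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return int(top), int(left), int(bottom), int(right)

# Synthetic test image: blue background with one skin-toned patch.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[..., 2] = 255                        # blue background (non-skin)
img[20:60, 30:80] = (210, 160, 140)      # skin-toned rectangle
print(skin_bounding_rect(img))           # (20, 30, 59, 79)
```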
The feature identification submodule is used to identify facial features from the processing region; the facial features include the eyes, and further include at least one of the nose and the lips.
The parameter determination submodule is used to determine the coordinates of the facial features and to determine the facial parameters according to the coordinates of the facial features.
The lens moving module is used to control the lens to move according to the facial parameters so that the lens is aligned with the eye to be examined.
In the eye alignment device provided by the embodiments of the present invention, after facial parameters of the examinee such as the interpupillary distance and the eye-chin distance are determined, the position of the lens of the ophthalmic instrument is automatically adjusted according to the facial parameters, so that the lens of the ophthalmic instrument is automatically aligned with the eye to be examined; the device is simple to operate and easy to master.
In the eye alignment device provided by the preferred embodiments of the present invention, the examinee is photographed from different angles, and the two acquired pictures are used to accurately determine the spatial coordinates of the facial features, thereby improving the accuracy of the facial parameters.
The embodiments of the present invention also provide an ophthalmologic apparatus including a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor; when executing the computer program, the processor implements the eye alignment method provided by the embodiments of the present invention.
The embodiments of the present invention also provide a computer-readable storage medium including a stored computer program; when the computer program runs, it controls the device on which the computer-readable storage medium resides to execute the eye alignment method provided by the embodiments of the present invention.
Those skilled in the art will appreciate that all or part of the processes in the eye alignment methods provided by the embodiments of the present invention can be implemented by a computer program instructing relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the processes of the eye alignment methods provided by the above embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The above disclosure is only the preferred embodiments of the present invention and certainly cannot limit the scope of the claims of the present invention; therefore, equivalent changes made according to the claims of the present invention still fall within the scope of the present invention.

Claims (10)

1. An eye alignment method for controlling a lens of an ophthalmologic apparatus to be automatically aligned with an eye of an examinee to be examined, characterized by comprising the following steps:
an image acquisition step of obtaining a picture of the examinee, the picture including at least the facial area of the examinee above the nose and below the eyes;
an image processing step of processing the picture and determining facial parameters of the examinee, the facial parameters including at least the interpupillary distance and the eye-chin distance;
a lens moving step of controlling the lens to move according to the facial parameters so that the lens is aligned with the eye to be examined.
2. The eye alignment method according to claim 1, characterized in that the facial parameters further include the forehead-chin distance.
3. The eye alignment method according to claim 1, characterized in that the image acquisition step includes:
a first image acquisition step of obtaining a first picture of the examinee;
a second image acquisition step of obtaining a second picture of the examinee;
the first picture and the second picture being taken from different shooting angles.
4. The eye alignment method according to any one of claims 1 to 3, characterized in that the image processing step includes:
an image segmentation step of segmenting a processing region from the picture;
a feature identification step of identifying facial features from the processing region, the facial features including the eyes and further including at least one of the nose and the lips;
a parameter determination step of determining the coordinates of the facial features and determining the facial parameters according to the coordinates of the facial features.
5. An eye alignment device for controlling a lens of an ophthalmologic apparatus to be automatically aligned with an eye of an examinee to be examined, characterized by comprising:
an image acquisition module for obtaining a picture of the examinee, the picture including at least the facial area of the examinee above the nose and below the eyes;
an image processing module for processing the picture and determining facial parameters of the examinee, the facial parameters including at least the interpupillary distance and the eye-chin distance;
a lens moving module for controlling the lens to move according to the facial parameters so that the lens is aligned with the eye to be examined.
6. The eye alignment device according to claim 5, characterized in that the facial parameters further include the forehead-chin distance.
7. The eye alignment device according to claim 5, characterized in that the image acquisition module includes:
a first image acquisition submodule for obtaining a first picture of the examinee;
a second image acquisition submodule for obtaining a second picture of the examinee;
the first picture and the second picture being taken from different shooting angles.
8. The eye alignment device according to any one of claims 5 to 7, characterized in that the image processing module includes:
an image segmentation submodule for segmenting a processing region from the picture;
a feature identification submodule for identifying facial features from the processing region, the facial features including the eyes and further including at least one of the nose and the lips;
a parameter determination submodule for determining the coordinates of the facial features and determining the facial parameters according to the coordinates of the facial features.
9. An ophthalmologic apparatus, characterized by including a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the eye alignment method according to any one of claims 1 to 4 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium includes a stored computer program; when the computer program runs, it controls the device on which the computer-readable storage medium resides to execute the eye alignment method according to any one of claims 1 to 4.
CN201810459248.2A 2018-05-15 2018-05-15 Eye to be inspected alignment method, device and ophthalmic equipment Active CN110477858B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810459248.2A CN110477858B (en) 2018-05-15 2018-05-15 Eye to be inspected alignment method, device and ophthalmic equipment


Publications (2)

Publication Number Publication Date
CN110477858A true CN110477858A (en) 2019-11-22
CN110477858B CN110477858B (en) 2023-11-28

Family

ID=68545070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810459248.2A Active CN110477858B (en) 2018-05-15 2018-05-15 Eye to be inspected alignment method, device and ophthalmic equipment

Country Status (1)

Country Link
CN (1) CN110477858B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005348832A (en) * 2004-06-08 2005-12-22 National Univ Corp Shizuoka Univ Real-time pupil position detection system
CN1989894A (en) * 2005-12-28 2007-07-04 株式会社拓普康 Alignment method for ophthalmic measurement apparatus and alignment device of the same
JP2009086705A (en) * 2007-09-27 2009-04-23 Kao Corp Method for detecting eye position
CN103251384A (en) * 2012-01-25 2013-08-21 佳能株式会社 Ophthalmologic apparatus, and control method thereof
KR20160036375A (en) * 2014-09-25 2016-04-04 백석대학교산학협력단 Fast Eye Detection Method Using Block Contrast and Symmetry in Mobile Device
CN105809507A (en) * 2016-02-29 2016-07-27 北京酷配科技有限公司 Virtualized wearing method and virtualized wearing apparatus


Also Published As

Publication number Publication date
CN110477858B (en) 2023-11-28

Similar Documents

Publication Publication Date Title
CN109690553A (en) The system and method for executing eye gaze tracking
JP5869489B2 (en) Method and apparatus for automatically measuring at least one refractive property of a person's eyes
US9628697B2 (en) Method and device for measuring an interpupillary distance
CN108985210A (en) A kind of Eye-controlling focus method and system based on human eye geometrical characteristic
EP3339943A1 (en) Method and system for obtaining optometric parameters for fitting eyeglasses
US11294455B2 (en) Method and device for determining gaze placement, computer readable storage medium
CN107184178A (en) A kind of hand-held vision drop instrument of intelligent portable and optometry method
US20160202756A1 (en) Gaze tracking via eye gaze model
US20150029322A1 (en) Method and computations for calculating an optical axis vector of an imaged eye
JP2016173313A (en) Visual line direction estimation system, visual line direction estimation method and visual line direction estimation program
CN104809424B (en) Method for realizing sight tracking based on iris characteristics
WO2008007781A1 (en) Visual axis direction detection device and visual line direction detection method
CN110766656B (en) Method, device, equipment and storage medium for screening fundus macular region abnormality
US20150049952A1 (en) Systems and methods of measuring facial characteristics
WO2023011339A1 (en) Line-of-sight direction tracking method and apparatus
JP2019215688A (en) Visual line measuring device, visual line measurement method and visual line measurement program for performing automatic calibration
CN110750157B (en) Eye control auxiliary input device and method based on 3D eyeball model
CN114360043B (en) Model parameter calibration method, sight tracking method, device, medium and equipment
CN106843492B (en) Multi-user viewpoint calibration system and method
WO2020237940A1 (en) Fatigue detection method and device based on human eye state identification
CN111339982A (en) Multi-stage pupil center positioning technology implementation method based on features
CN114022514A (en) Real-time sight line inference method integrating head posture and eyeball tracking
JP2006285531A (en) Detection device for eye direction, detecting method for eye direction, program for executing the same detecting method for eye direction by computer
CN110477858A (en) Tested eye alignment methods, device and Ophthalmologic apparatus
CN112656366B (en) Method and system for measuring pupil size in non-contact manner

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 803, block B, Jingang center, Jingang building, houye community, Xixiang street, Bao'an District, Shenzhen, Guangdong

Applicant after: Shenzhen moting Medical Technology Co.,Ltd.

Address before: 518112 Room 501, 5 / F, block C, phase 2, saitu digital technology park, No. 137, Bulan Road, Buji street, Longgang District, Shenzhen, Guangdong Province

Applicant before: SHENZHEN CERTAINN TECHNOLOGY Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: Room L302, Building 2, Skyworth Innovation Valley, No. 8 Tangtou 1st Road, Tangtou Community, Shiyan Street, Bao'an District, Shenzhen City, Guangdong Province, 518108

Patentee after: Shenzhen Moting Medical Technology Co.,Ltd.

Address before: 518000 803, block B, Jingang center, Jingang building, houye community, Xixiang street, Bao'an District, Shenzhen, Guangdong

Patentee before: Shenzhen moting Medical Technology Co.,Ltd.