CN109344714A - Gaze estimation method based on key point matching - Google Patents

Gaze estimation method based on key point matching

Info

Publication number
CN109344714A
Authority
CN
China
Prior art keywords
face
point
pupil
pupil center
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811011543.8A
Other languages
Chinese (zh)
Other versions
CN109344714B (en)
Inventor
李宏亮
颜海强
尹康
袁欢
梁小娟
邓志康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201811011543.8A
Publication of CN109344714A
Application granted
Publication of CN109344714B
Status: Active
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a gaze estimation method based on key point matching, belonging to the field of gaze estimation in computer vision. After initially locating the pupil key points with a deep network, the invention further corrects the pupil center location with the SGBM template matching method. Compared with existing gaze estimation methods, it locates the pupil center more accurately, especially when the head or eyeball deviation is large. Compared with the pupil-corneal-reflection method, implementing the invention effectively improves the precision of gaze estimation while requiring only a single webcam, greatly reducing equipment cost. Compared with existing methods based on single-image processing, it does not constrain the head pose, so the robustness of the algorithm is greatly increased. Matching against a 3D face model avoids the limitation that currently existing databases cannot represent all poses, increasing the practicality of the method.

Description

Gaze estimation method based on key point matching
Technical field
The present invention proposes a gaze estimation method based on key point matching, a new gaze estimation technique in the field of computer vision.
Background art
With the development of computer science, human-computer interaction has increasingly become a popular field. Human gaze reflects a person's attention and is an important information input source in human-computer interaction. Human-computer interaction based on gaze estimation has vast development potential in fields such as the military, medical treatment, and entertainment.
The gaze estimation techniques in practical use today are mainly based on the pupil-corneal-reflection (PCCR) technique: a near-infrared light source produces reflection images on the cornea and pupil of the user's eyes, an image sensor then acquires images of the eyes and the reflections, and the eye position in space and the line of sight are finally computed from a three-dimensional eyeball model. Although this method achieves high precision, it is constrained by expensive sensor equipment and is therefore difficult to popularize.
To address this problem, gaze estimation methods based on a 3D face model have appeared. They only need a camera to acquire pictures as input data, locate key points in the collected pictures, estimate the head pose and the eyeball centers by combining a known model, and then obtain the gaze angle together with the detected pupil center location.
However, when computing the pupil center location, the existing gaze estimation methods based on a 3D face model cannot cover all real situations owing to database limitations; when the head pose or the eye deviation is large, they incur large errors, leading to very large deviations in the final gaze estimate.
Summary of the invention
The objective of the invention is: in view of the above problems, to provide a method that combines a deep network with template matching to locate the pupil center accurately, increasing the feasibility of the scheme.
The gaze estimation method based on key point matching of the invention includes the following steps:
Step 1: detect the target face:
Input the video stream acquired by the camera into a trained face detection network model (a common face detection network model may be selected, e.g., MobileNet-SSD) to perform face detection, and crop the largest face therein as the target face image for line-of-sight detection;
After size normalization, the target face image serves as the input of a face key point detection network model (likewise, a common detection network model may be selected, e.g., SE-Net), which is used to obtain the face key points and pupil centers of the target face image.
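For illustration, a minimal Python sketch of this step using OpenCV's DNN module; the detector files, mean values, and confidence threshold are assumptions, since the invention only requires some trained face detection network such as MobileNet-SSD:

```python
import cv2
import numpy as np

# Hypothetical MobileNet-SSD face detector files; any OpenCV-readable
# face detection model would serve the same role here.
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "face_detector.caffemodel")

def detect_target_face(frame, conf_thresh=0.5):
    """Return the largest detected face, resized to the normalized input size."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
                                 (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()          # shape (1, 1, N, 7)
    best_box, best_area = None, 0
    for i in range(detections.shape[2]):
        if detections[0, 0, i, 2] < conf_thresh:
            continue
        x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * np.array([w, h, w, h])).astype(int)
        area = max(0, x2 - x1) * max(0, y2 - y1)
        if area > best_area:            # keep the largest face as the target
            best_box, best_area = (x1, y1, x2, y2), area
    if best_box is None:
        return None, None
    x1, y1, x2, y2 = best_box
    face = cv2.resize(frame[y1:y2, x1:x2], (300, 300))  # size normalization
    return face, best_box
```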
Step 2: face key point detection and initial pupil center localization:
Based on the selected face key point detection network model, input the size-normalized target face image to obtain the coordinates of the face key points and of the 2 initial pupil centers on the current target face image, and convert them into coordinates on the video image (before normalization); the face key points include 4 eye key points, two key points per eye (the two corners of each eye);
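The back-conversion is a simple rescaling by the face box found in Step 1; a minimal sketch, where the 300x300 input size follows the embodiment below:

```python
def to_frame_coords(points_300, face_box, size=300.0):
    """Map (x, y) keypoints predicted on the normalized crop back onto the frame."""
    x1, y1, x2, y2 = face_box
    sx, sy = (x2 - x1) / size, (y2 - y1) / size
    return [(x1 + px * sx, y1 + py * sy) for (px, py) in points_300]
```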
Step 3: head pose estimation and eyeball localization:
Using the perspective-n-point (PnP) algorithm, match the detected face key points with standard three-dimensional face key points to obtain the spatial position and rotation angle of the face relative to the camera;
thereby obtain the three-dimensional coordinates of the 2 initial pupil centers and of the 4 eye key points;
In three-dimensional coordinates, take the point 12 mm behind the midpoint of each eye's two key points, along the head pose direction, as the left and right eyeball center positions respectively;
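A minimal sketch of this step with OpenCV's solvePnP; the standard 3D key point model, the camera intrinsics, and the assumption that the model's +Z axis is the facing direction (in millimeters) are not fixed by the invention:

```python
import cv2
import numpy as np

def head_pose_and_eyeball_center(pts2d, pts3d, K, eye_mid_model):
    """pts2d: Nx2 detected keypoints (frame coords); pts3d: Nx3 standard model (mm);
    eye_mid_model: midpoint of one eye's two corner keypoints in model coords."""
    ok, rvec, tvec = cv2.solvePnP(pts3d.astype(np.float64),
                                  pts2d.astype(np.float64),
                                  K, np.zeros(4))      # lens distortion assumed negligible
    R, _ = cv2.Rodrigues(rvec)
    forward = R @ np.array([0.0, 0.0, 1.0])            # assumed head-pose direction
    eye_mid_cam = R @ eye_mid_model + tvec.ravel()     # eye midpoint in camera coords
    eyeball_center = eye_mid_cam - 12.0 * forward      # 12 mm behind, along head pose
    return R, tvec, eyeball_center
```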
Step 4: correct the pupil center location:
In three-dimensional coordinates, crop the left-eye and right-eye pictures according to the 4 detected eye key points, and relocate the pupil center point with the semi-global block matching (SGBM) method; if the confidence of the resulting match point (the relocated pupil center point) is greater than 0.7, the match point is considered credible, and the midpoint of the two credible localizations is taken as the final pupil center location;
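OpenCV's stock StereoSGBM targets stereo pairs rather than single-image template search, so the sketch below substitutes normalized cross-correlation template matching to illustrate the confidence-thresholded relocation and fusion; the standard pupil template is an assumed input:

```python
import cv2

def refine_pupil_center(eye_img, pupil_template, p_net, thresh=0.7):
    """p_net: pupil center from the keypoint network, in eye-crop coordinates."""
    scores = cv2.matchTemplate(eye_img, pupil_template, cv2.TM_CCOEFF_NORMED)
    _, conf, _, top_left = cv2.minMaxLoc(scores)       # best score and its location
    th, tw = pupil_template.shape[:2]
    p_match = (top_left[0] + tw / 2.0, top_left[1] + th / 2.0)
    if conf > thresh:                                  # credible: fuse both results
        return ((p_net[0] + p_match[0]) / 2.0,
                (p_net[1] + p_match[1]) / 2.0)
    return p_net                                       # otherwise keep network result
```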
Step 5: gaze direction estimation:
In three-dimensional coordinates, compute the visual-axis vector from the eyeball center to the pupil center to obtain the current gaze direction.
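The direction itself reduces to a normalized difference of two 3D points; a minimal sketch:

```python
import numpy as np

def gaze_direction(eyeball_center, pupil_center):
    """Unit vector along the visual axis, from eyeball center to pupil center."""
    v = np.asarray(pupil_center, dtype=float) - np.asarray(eyeball_center, dtype=float)
    return v / np.linalg.norm(v)
```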
In conclusion by adopting the above-described technical solution, the beneficial effects of the present invention are:
The precision proposed by the present invention that sight estimation can be effectively promoted based on the matched gaze estimation method of key point, Equipment cost is greatly reduced only with single network camera compared to pupil corneal reflection method.Compared to existing base In the method for single image processing, the posture on limitation head is not needed, the robustness of algorithm greatly increases.By in 3D faceform Matching, the current existing database avoided can not indicate the limitation of all postures, to increase the practicability of this method.
Detailed description of the invention
Fig. 1 is a schematic diagram of the processing flow of the invention.
Specific embodiment
To make the objective, technical solution, and advantages of the invention clearer, the invention is described in further detail below with reference to the embodiment and the drawing.
Existing gaze estimation methods locate the pupil center with large error, especially when the head pose deviation is large. The invention first locates the face key points and pupil centers with SE-Net (Squeeze-and-Excitation Networks), then corrects the result with the pupil center obtained by the SGBM (semi-global block matching) algorithm, further increasing pupil localization precision.
First, face detection is performed on the picture read from the camera, and the largest face therein is cropped as the target whose gaze is to be estimated and normalized to a standard size. The facial feature points (e.g., the common 68 key points) and the 2 pupil center locations are then detected with the SE-Net network.
Then, the 68 detected face key points are matched with the standard 3D face key points using the perspective-n-point (PnP) algorithm to obtain the position in space and the rotation angle of the face relative to the camera.
Then, using the method proposed by the invention, the left-eye and right-eye pictures are cropped according to the obtained eye key points, the eye picture is matched against a standard pupil picture with the semi-global block matching (SGBM) algorithm, and the point with the highest confidence in the matching result is taken as the pupil center. If the matched confidence is greater than 0.7, the matched position is considered credible, and the final localization is computed from the two pupil localization results as
$$P = \begin{cases} \dfrac{P_{SeNet} + P_{SGBM}}{2}, & T > 0.7 \\ P_{SeNet}, & T \le 0.7 \end{cases}$$
where $P_{SeNet}$ is the pupil detection result obtained by SE-Net, $P_{SGBM}$ is the pupil detection result obtained by SGBM, and $T$ is the pupil center confidence obtained by SGBM; the larger the confidence, the more accurate the detection result.
Finally, the point 12 mm behind the center of the eye key points, along the head pose direction, is taken as the eyeball center, and the vector from the eyeball center to the pupil center gives the final gaze direction.
After initially locating the pupil key points with a deep network, the invention further corrects the pupil center location with the SGBM template matching method. Compared with existing gaze estimation methods, it locates the pupil center more accurately, especially when the head or eyeball deviation is large.
Embodiment
Referring to Fig. 1, the invention mainly comprises the following steps: detecting the target face, face key point detection and initial pupil center localization, head pose estimation and eyeball localization, pupil center location correction, and gaze direction estimation.
Step 1: detect the target face.
Input the video stream acquired by the camera into the trained face detection network (MobileNet-SSD) to perform face detection, crop the largest face therein as the target face for line-of-sight detection, and normalize it to a size of 300*300 as the input of the key point detection network.
Step 2: face key point detection and initial pupil center localization.
SE-Net is used as the backbone network to train the model for face key point and pupil center detection; during training the L1 loss is used as the loss function to further improve localization precision. The 300*300 face picture is passed to the trained model to obtain the coordinates of the 68 key points and the 2 pupil centers in the face picture; these coordinates are then converted into coordinates on the original picture. The L1 loss is
$$L_1 = \frac{1}{m}\sum_{i=1}^{m}\left|f(x_i) - y_i\right|$$
where $f(x_i)$ denotes the model prediction result for the i-th input datum, $y_i$ denotes the corresponding label result, and $m$ denotes the number of data in each model input.
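A minimal sketch of this loss in PyTorch; the framework choice is an assumption, since the invention names only the loss itself:

```python
import torch

def l1_loss(pred, target):
    """Mean absolute error over the m predictions of one model input."""
    return torch.mean(torch.abs(pred - target))
```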
Step 3: head pose estimation and eyeball localization.
Using the 68 key-point two-dimensional coordinates obtained on the video picture and an existing 68-point three-dimensional face coordinate model, estimate the spatial position and rotation angle of the face relative to the camera with the PnP algorithm. Then take the point 12 mm behind the midpoint of the two eye key points, along the head pose direction, as the eyeball center.
Step 4: correct the pupil center location.
Crop the left-eye and right-eye pictures according to the 4 detected eye key points and search for the pupil center with the SGBM method, obtaining a center location and a corresponding confidence; the larger the confidence, the higher the accuracy. If the confidence is greater than 0.7, the two localization results are combined into the final pupil center location.
Step 5: gaze direction estimation.
Take the three-dimensional coordinates of the eyeball center and the pupil center; the visual-axis vector computed from them is the final gaze direction.
The above description is merely a specific embodiment of the invention. Any feature disclosed in this specification, unless specifically stated otherwise, may be replaced by other equivalent features or by alternative features serving a similar purpose; all of the disclosed features, or all of the steps of the method or process, may be combined in any way, except for mutually exclusive features and/or steps.

Claims (2)

1. A gaze estimation method based on key point matching, characterized in that it comprises the following steps:
Step 1: detect the target face:
inputting the video stream acquired by a camera into a trained face detection network model to perform face detection, and cropping the largest face therein as the target face image for line-of-sight detection;
performing size normalization on the target face image;
Step 2: face key point detection and initial pupil center localization:
based on the selected face key point detection network model, inputting the size-normalized target face image, obtaining the coordinates of the face key points and of 2 initial pupil centers on the current target face image, and converting them into coordinates on the video image, the face key points including 4 eye key points, two key points per eye;
Step 3: head pose estimation and eyeball localization:
using the perspective-n-point algorithm, matching the detected face key points with standard three-dimensional face key points to obtain the spatial position and rotation angle of the face relative to the camera;
thereby obtaining the three-dimensional coordinates of the 2 initial pupil centers and of the 4 eye key points;
in three-dimensional coordinates, taking the point 12 mm behind the midpoint of each eye's two key points, along the head pose direction, as the left and right eyeball center positions respectively;
Step 4: correct the pupil center location:
in three-dimensional coordinates, cropping the left-eye and right-eye pictures according to the 4 detected eye key points, and relocating the pupil center point with the semi-global block matching (SGBM) method; if the confidence of the resulting match point is greater than 0.7, considering the match point credible; and taking the midpoint of the two credible localizations as the final pupil center location;
Step 5: gaze direction estimation:
in three-dimensional coordinates, computing the visual-axis vector from the eyeball center to the pupil center to obtain the current gaze direction.
2. The method according to claim 1, characterized in that the preferred normalization size of the target face image is 300*300.
CN201811011543.8A 2018-08-31 2018-08-31 Sight estimation method based on key point matching Active CN109344714B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811011543.8A CN109344714B (en) 2018-08-31 2018-08-31 Sight estimation method based on key point matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811011543.8A CN109344714B (en) 2018-08-31 2018-08-31 Sight estimation method based on key point matching

Publications (2)

Publication Number Publication Date
CN109344714A true CN109344714A (en) 2019-02-15
CN109344714B CN109344714B (en) 2022-03-15

Family

ID=65291973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811011543.8A Active CN109344714B (en) 2018-08-31 2018-08-31 Sight estimation method based on key point matching

Country Status (1)

Country Link
CN (1) CN109344714B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1787012A (en) * 2004-12-08 2006-06-14 索尼株式会社 Method,apparatua and computer program for processing image
CN102799888A (en) * 2011-05-27 2012-11-28 株式会社理光 Eye detection method and eye detection equipment
US20180096503A1 (en) * 2016-10-05 2018-04-05 Magic Leap, Inc. Periocular test for mixed reality calibration

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张帅: "Research on a measurement method for eyeball protrusion based on binocular stereo vision", China Excellent Master's and Doctoral Theses Full-text Database (Master's), Information Science and Technology series *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109901716A (en) * 2019-03-04 2019-06-18 厦门美图之家科技有限公司 Sight line point prediction model method for building up, device and sight line point prediction technique
CN110051319A (en) * 2019-04-23 2019-07-26 七鑫易维(深圳)科技有限公司 Adjusting method, device, equipment and the storage medium of eyeball tracking sensor
CN110414419A (en) * 2019-07-25 2019-11-05 四川长虹电器股份有限公司 A kind of posture detecting system and method based on mobile terminal viewer
CN110503068A (en) * 2019-08-28 2019-11-26 Oppo广东移动通信有限公司 Gaze estimation method, terminal and storage medium
CN111291701A (en) * 2020-02-20 2020-06-16 哈尔滨理工大学 Sight tracking method based on image gradient and ellipse fitting algorithm
CN111291701B (en) * 2020-02-20 2022-12-13 哈尔滨理工大学 Sight tracking method based on image gradient and ellipse fitting algorithm
CN113780164A (en) * 2021-09-09 2021-12-10 福建天泉教育科技有限公司 Head posture recognition method and terminal
CN113780164B (en) * 2021-09-09 2023-04-28 福建天泉教育科技有限公司 Head gesture recognition method and terminal

Also Published As

Publication number Publication date
CN109344714B (en) 2022-03-15

Similar Documents

Publication Publication Date Title
CN109344714A (en) One kind being based on the matched gaze estimation method of key point
US11741624B2 (en) Method and system for determining spatial coordinates of a 3D reconstruction of at least part of a real object at absolute spatial scale
CN108875524B (en) Sight estimation method, device, system and storage medium
CN104978548B (en) A kind of gaze estimation method and device based on three-dimensional active shape model
US9864430B2 (en) Gaze tracking via eye gaze model
Valenti et al. Combining head pose and eye location information for gaze estimation
Alnajar et al. Calibration-free gaze estimation using human gaze patterns
US9164583B2 (en) Method and apparatus for gaze point mapping
Lai et al. Hybrid method for 3-D gaze tracking using glint and contour features
Cristina et al. Model-based head pose-free gaze estimation for assistive communication
JP7168953B2 (en) Gaze measurement device for automatic calibration, Gaze measurement method and Gaze measurement program
Schnieders et al. Reconstruction of display and eyes from a single image
EP4053736B1 (en) System and method for matching a test frame sequence with a reference frame sequence
WO2019136588A1 (en) Cloud computing-based calibration method, device, electronic device, and computer program product
CN112329699A (en) Method for positioning human eye fixation point with pixel-level precision
Liu et al. Robust 3-D gaze estimation via data optimization and saliency aggregation for mobile eye-tracking systems
Kang et al. A robust extrinsic calibration method for non-contact gaze tracking in the 3-D space
Lu et al. Neural 3D gaze: 3D pupil localization and gaze tracking based on anatomical eye model and neural refraction correction
Scoleri et al. Effects of garments on photoanthropometry of body parts: application to stature estimation
Lages et al. Enhanced geometric techniques for point marking in model-free augmented reality
Nitschke Image-based eye pose and reflection analysis for advanced interaction techniques and scene understanding
US20170007118A1 (en) Apparatus and method for estimating gaze from un-calibrated eye measurement points
Tordoff et al. Head pose estimation for wearable robot control.
Kao et al. Eyeball model construction and matching for visible-spectrum gaze tracking systems
TWI629662B (en) Method for realizing acupoint visualization by AR technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant