CN113729616A - Method and device for determining pupil center position data and storage medium - Google Patents

Method and device for determining pupil center position data and storage medium

Info

Publication number
CN113729616A
CN113729616A
Authority
CN
China
Prior art keywords
position data
candidate
pupil
determining
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111023640.0A
Other languages
Chinese (zh)
Other versions
CN113729616B (en)
Inventor
朱冬晨
李航
林敏静
车何框亿
李嘉茂
张晓林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Institute of Microsystem and Information Technology of CAS
Original Assignee
Shanghai Institute of Microsystem and Information Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Institute of Microsystem and Information Technology of CAS
Priority to CN202111023640.0A
Publication of CN113729616A
Application granted
Publication of CN113729616B
Legal status: Active (granted)

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/11: Objective types for measuring interpupillary distance or diameter of pupils
    • A61B 3/112: Objective types for measuring diameter of pupils
    • A61B 3/0016: Operational features thereof
    • A61B 3/0025: Operational features thereof characterised by electronic signal processing, e.g. eye models
    • A61B 3/113: Objective types for determining or recording eye movement

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The method, device and storage medium for determining pupil center position data disclosed by the embodiments of the application comprise: acquiring eyeball position data corresponding to the eyeball centers, pupil reference position data corresponding to the pupil centers, a first pupil candidate region and a second pupil candidate region; determining a first candidate position data set from the first pupil candidate region; determining a second candidate position data set from the second pupil candidate region; and determining the pupil center position data based on the eyeball position data, the pupil reference position data, the first candidate position data set and the second candidate position data set. According to the embodiments of the application, the candidate position data at which the two lines of sight intersect are determined from the set of candidate position data groups, based on the intrinsic relation between the two lines of sight, and used as the pupil center position data, so that the accuracy of binocular gaze estimation can be improved.

Description

Method and device for determining pupil center position data and storage medium
Technical Field
The present invention relates to the field of gaze estimation technologies, and in particular, to a method and an apparatus for determining pupil center position data, and a storage medium.
Background
Gaze estimation can be performed by studying the corresponding changes of the head pose and the eyes when the human gaze changes. Existing gaze estimation methods include the following: first, under infrared illumination, the gaze direction is determined according to the position of the Purkinje spot formed on the corneal surface of the subject by an infrared light source; second, under natural illumination, a geometric model is established that takes the line connecting the eyeball center and the pupil center as the gaze direction. The localization of the pupil center position is crucial to the accuracy of gaze estimation.
Currently, methods for pupil center detection include detection methods based on statistical learning and detection methods based on feature extraction. A detection method based on statistical learning takes eye images as input data and directly outputs the coordinates of the pupil center point from a trained model; for example, the pupil center position can be detected directly, even when the subject wears glasses, based on an SVM algorithm. A detection method based on feature extraction detects the pupil center position using the gray-level characteristics of the pupil region; for example, the pupil center position can be located using Hough-transform circle detection and a hybrid projection method. For pupil center detection, the human eye region suffers from eyelid occlusion, eyelash occlusion, white-spot interference caused by corneal reflection, and the like, so methods that detect the pupil center position from the eye image alone cannot achieve high accuracy.
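To make the feature-extraction route concrete, the following is a minimal sketch of Hough-transform circle detection on a grayscale eye image. It assumes OpenCV and illustrative parameter values; it is not the patent's method, and the hybrid projection refinement mentioned above is not shown.

    import cv2

    # Hedged sketch: locate a pupil candidate as the strongest circle found
    # by the Hough gradient method; all parameter values are assumptions.
    def hough_pupil_center(eye_gray):
        blurred = cv2.medianBlur(eye_gray, 5)          # suppress eyelash noise
        circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                                   param1=100, param2=30, minRadius=5, maxRadius=60)
        if circles is None:
            return None                                # no circle found
        x, y, r = circles[0, 0]                        # strongest candidate
        return (float(x), float(y))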
Disclosure of Invention
The embodiments of the application provide a method and a device for determining pupil center position data, and a storage medium, wherein candidate position data at which the two lines of sight intersect can be determined from a set of candidate position data groups as the pupil center position data, based on the intrinsic relation between the two lines of sight, so that the accuracy of binocular gaze estimation can be improved.
The embodiment of the application provides a method for determining pupil center position data, which comprises the following steps:
acquiring eyeball position data corresponding to the center of an eyeball, pupil reference position data corresponding to the center of a pupil, a first pupil candidate area and a second pupil candidate area;
determining a first set of candidate location data from a first pupil candidate region;
determining a second set of candidate location data from a second pupil candidate region;
determining pupil center position data based on the eyeball position data, the pupil reference position data, the first candidate position data set and the second candidate position data set.
Further, determining pupil center position data based on the eyeball position data, the pupil reference position data, the first candidate position data set and the second candidate position data set comprises:
determining a first parameter set according to the pupil reference position data, the first candidate position data set and the second candidate position data set;
determining a second parameter set according to the eyeball position data, the first candidate position data set and the second candidate position data set;
and determining pupil center position data according to the first parameter set and the second parameter set.
Further, after determining the second parameter set according to the eyeball position data, the first candidate position data set and the second candidate position data set, the method further includes:
determining a third parameter set according to the eyeball position data, the first candidate position data set and the second candidate position data set;
and determining pupil center position data according to the first parameter set, the second parameter set and the third parameter set.
Further, determining a first set of parameters from the pupil reference position data, the first set of candidate position data and the second set of candidate position data comprises:
determining a set of candidate location data sets from the first set of candidate location data and the second set of candidate location data; each candidate position data set in the set of candidate position data sets comprises a first candidate position data and a second candidate position data;
a first set of parameters is determined from the pupil reference position data and each of the candidate position data sets.
Further, determining a second set of parameters from the eyeball position data, the first set of candidate position data and the second set of candidate position data comprises:
determining a first vector according to the eyeball position data;
determining a second vector corresponding to each candidate position data set according to the eyeball position data and each candidate position data set;
and determining a second parameter set according to the first vector and a second vector corresponding to each candidate position data set.
Further, determining a third set of parameters from the eyeball position data, the first set of candidate position data and the second set of candidate position data comprises:
acquiring first gaze point depth data; the first gaze point depth data is the distance between a target point and the straight line through the eyeball centers when a subject gazes at the target point;
determining second gaze point depth data corresponding to each candidate position data group according to the eyeball position data and each candidate position data group;
and determining a third parameter set according to the first gaze point depth data and the second gaze point depth data corresponding to each candidate position data group.
Further, determining a second set of candidate location data from a second pupil candidate region comprises:
determining a target region from the second pupil candidate region according to each first candidate position data in the first candidate position data set;
a second set of candidate location data is determined from the target area.
Correspondingly, the embodiment of the application also provides a device for determining pupil center position data, which comprises:
the acquisition module is used for acquiring eyeball position data corresponding to the center of an eyeball, pupil reference position data corresponding to the center of a pupil, a first pupil candidate area and a second pupil candidate area;
a first determination module for determining a first set of candidate location data from a first pupil candidate region;
a second determination module for determining a second set of candidate location data from a second pupil candidate region;
and the third determining module is used for determining pupil center position data based on the eyeball position data, the pupil reference position data, the first candidate position data set and the second candidate position data set.
Further, a third determining module includes:
a first determining unit configured to determine a first parameter set according to the pupil reference position data, the first candidate position data set, and the second candidate position data set;
a second determining unit configured to determine a second parameter set according to the eyeball position data, the first candidate position data set, and the second candidate position data set;
and a third determining unit for determining the pupil center position data according to the first parameter set and the second parameter set.
Further, the third determining module further includes:
a fourth determination unit for, after determining the second set of parameters from the eyeball position data, the first set of candidate position data and the second set of candidate position data,
determining a third parameter set according to the eyeball position data, the first candidate position data set and the second candidate position data set;
and determining pupil center position data according to the first parameter set, the second parameter set and the third parameter set.
Further, the first determination unit includes:
a first determining subunit, configured to determine a set of candidate position data sets from the first set of candidate position data and the second set of candidate position data; each candidate position data set in the set of candidate position data sets comprises a first candidate position data and a second candidate position data;
and the second determining subunit is used for determining the first parameter set according to the pupil reference position data and each candidate position data group.
Further, the second determination unit includes:
a third determining subunit, configured to determine the first vector according to the eyeball position data;
the fourth determining subunit is used for determining a second vector corresponding to each candidate position data group according to the eyeball position data and each candidate position data group;
and the fifth determining subunit is used for determining a second parameter set according to the first vector and the second vector corresponding to each candidate position data set.
Further, a fourth determination unit includes:
an obtaining subunit, configured to obtain first gaze point depth data; the first gaze point depth data is the distance between a target point and the straight line through the eyeball centers when a subject gazes at the target point;
a sixth determining subunit, configured to determine, according to the eyeball position data and each candidate position data group, second gaze point depth data corresponding to each candidate position data group;
a seventh determining subunit, configured to determine a third parameter set according to the first gaze point depth data and the second gaze point depth data corresponding to each candidate position data group.
Further, the second determining module includes:
a fifth determining unit configured to determine a target region from the second pupil candidate region according to each first candidate position data in the first candidate position data set;
a sixth determining unit for determining a second set of candidate position data from the target area.
The embodiment of the application has the following beneficial effects:
the method, the device and the storage medium for determining the pupil center position data disclosed by the embodiment of the application comprise the steps of obtaining eyeball position data corresponding to an eyeball center, pupil reference position data corresponding to a pupil center, a first pupil candidate area and a second pupil candidate area, determining a first candidate position data set from the first pupil candidate area, determining a second candidate position data set from the second pupil candidate area, and determining the pupil center position data based on the eyeball position data, the pupil reference position data, the first candidate position data set and the second candidate position data set. According to the embodiment of the application, the candidate position data of the sight intersection is determined from the candidate position data group set based on the internal relation of the binocular sight, and the candidate position data is used as the pupil center position data, so that the binocular sight estimation precision can be improved.
Drawings
In order to more clearly illustrate the technical solutions and advantages of the embodiments of the present application or the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application environment provided by an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for determining pupil center position data according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a method for determining pupil center position data according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of another method for determining pupil center position data according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a device for determining pupil center position data according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
An "embodiment" as referred to herein relates to a particular feature, structure, or characteristic that may be included in at least one implementation of the present application. In the description of the embodiments of the present application, it should be understood that the terms "first", "second" and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, features defined as "first", "second" and "third" may explicitly or implicitly include one or more of the features. Moreover, the terms "first," "second," and "third," etc. are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in other sequences than described or illustrated herein. Furthermore, the terms "comprising," "having," and "being," as well as any variations thereof, are intended to cover non-exclusive inclusions.
FIG. 1 is a schematic diagram of an application environment provided by an embodiment of the present application, including the left eyeball center position O_L, the right eyeball center position O_R, the initial left pupil center position P_L, the initial right pupil center position P_R, the first candidate positions P_L(n_i) corresponding to the left pupil center, the second candidate positions P_R(n_j) corresponding to the right pupil center, the first gaze point depth data Z_1 and the second gaze point depth data Z_2, where i = 1, 2, …, N and j = 1, 2, …, N.
In the embodiment of the application, when the subject gazes at a target point, if it is detected that the two lines of sight converge in front of the face, the server may take a first candidate position corresponding to the left pupil center and a second candidate position corresponding to the right pupil center as the unknown parameters P_L(n_i) and P_R(n_j), and further construct a position determination model based on the left eyeball center position O_L, the right eyeball center position O_R, the initial left pupil center position P_L, the initial right pupil center position P_R, the first candidate position P_L(n_i) corresponding to the left pupil center, the second candidate position P_R(n_j) corresponding to the right pupil center, the first gaze point depth data Z_1 and the second gaze point depth data Z_2.
In an alternative embodiment, a constraint 1 may be determined based on the initial left pupil center position P_L, the initial right pupil center position P_R, the first candidate position data and the second candidate position data; a constraint 2 may be determined based on the left eyeball center position O_L, the right eyeball center position O_R, the first candidate position data and the second candidate position data; and a constraint 3 may be determined based on the first gaze point depth data Z_1 and the second gaze point depth data Z_2. The position determination model can then be constructed as:

S(P_L(n_i), P_R(n_j)) = λ_1·l + λ_2·d + λ_3·Z

wherein λ_1·l represents constraint 1, λ_2·d represents constraint 2, and λ_3·Z represents constraint 3.
When applied, an enumeration method may be used to input each candidate position data group (P_L(n_i), P_R(n_j)) in the set of candidate position data groups into the position determination model, so as to output the pupil center position data.
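To make the enumeration concrete, the following is a minimal sketch in Python; the names enumerate_pupil_pairs, cands_l, cands_r and model are illustrative assumptions rather than terminology from the patent, and model stands for the position determination model S built from the constraints described below.

    # Hedged sketch of the enumeration step, assuming the position
    # determination model S has already been built from the constraints.
    def enumerate_pupil_pairs(cands_l, cands_r, model):
        """Return the candidate pair (P_L(n_i), P_R(n_j)) minimizing model."""
        pairs = ((p_l, p_r) for p_l in cands_l for p_r in cands_r)
        return min(pairs, key=lambda pair: model(pair[0], pair[1]))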
A specific embodiment of a method for determining pupil center position data according to the present application is described below. Fig. 2 is a schematic flowchart of a method for determining pupil center position data according to the present application. The present specification provides the method operation steps as shown in the embodiments or flowcharts, but more or fewer operation steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is only one of many possible execution orders and does not represent the only execution order; in actual execution, the steps may be executed in the order shown in the embodiments or drawings, or in parallel. As shown in fig. 2, the method may include:
S201: acquiring eyeball position data corresponding to the center of an eyeball, pupil reference position data corresponding to the center of a pupil, a first pupil candidate region and a second pupil candidate region.
In the embodiment of the application, a target point can be set on the screen, and when the subject gazes at the target point, if it is detected that the two lines of sight converge in front of the face, the eyeball position data corresponding to the eyeball centers and the pupil reference position data corresponding to the pupil centers can be acquired. That is, the server can acquire the three-dimensional coordinate data O_L(x_O^L, y_O^L, z_O^L) corresponding to the left eyeball center in the spatial coordinate system, the three-dimensional coordinate data O_R(x_O^R, y_O^R, z_O^R) corresponding to the right eyeball center, the initial three-dimensional coordinate data P_L(x_P^L, y_P^L, z_P^L) corresponding to the left pupil center and the initial three-dimensional coordinate data P_R(x_P^R, y_P^R, z_P^R) corresponding to the right pupil center.
In the embodiment of the application, if it is detected that the two lines of sight converge in front of the face, a binocular iris region of size w × h can be obtained, and the first pupil candidate region and the second pupil candidate region can be determined from it. That is, the left pupil candidate region Φ_L and the right pupil candidate region Φ_R may be determined from the binocular iris region according to the distance between the target point and the subject.
In the embodiment of the present application, the server may further obtain the first gaze point depth data, that is, the distance Z_1 between the target point and the straight line through the eyeball centers when the subject gazes at the target point.
S203: a first set of candidate location data is determined from the first pupil candidate region.
In this embodiment of the application, the server may determine the first candidate position data set from the first pupil candidate region, that is, determine a plurality of first candidate position data corresponding to the left pupil center from the left pupil candidate region. Optionally, the first candidate position data set may be {P_L(0), P_L(1), …, P_L(n_L)}, and each first candidate position data has corresponding three-dimensional coordinate data in the spatial coordinate system.
S205: a second set of candidate location data is determined from the second pupil candidate region.
In this embodiment of the application, the server may determine the second candidate position data set from the second pupil candidate region, that is, determine a plurality of second candidate position data corresponding to the right pupil center from the right pupil candidate region. Optionally, the second candidate position data set may be {P_R(0), P_R(1), …, P_R(n_R)}, and each second candidate position data has corresponding three-dimensional coordinate data in the spatial coordinate system.
In an alternative embodiment, the server may determine a target region from the second pupil candidate region according to each first candidate position data in the first candidate position data set, and further determine the second candidate position data set from the target region. Specifically, the server may select a first candidate position data corresponding to the left pupil from the first candidate position data set, determine a gaze direction from this first candidate position data and the left eyeball center position data, and then select from the right pupil candidate region a plurality of points whose connecting lines with the right eyeball center position lie within a preset interval around the pitch angle of that gaze direction, obtaining the second candidate position data set {P_R(0), P_R(1), …, P_R(n_R)}, wherein each second candidate position data has corresponding three-dimensional coordinate data in the spatial coordinate system.
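As a rough illustration of this pitch-angle filtering, the sketch below keeps only the right-iris points whose sight line stays within a preset pitch interval of the left gaze direction. It assumes NumPy, a y-up coordinate system and an illustrative threshold MAX_PITCH_DIFF; none of these names come from the patent.

    import numpy as np

    MAX_PITCH_DIFF = np.deg2rad(2.0)  # illustrative threshold, an assumption

    def pitch(v):
        # Pitch (elevation) angle of a 3-D direction vector, assuming y is up.
        return np.arctan2(v[1], np.hypot(v[0], v[2]))

    def target_region(p_l_cand, o_l, o_r, right_region_points):
        left_pitch = pitch(p_l_cand - o_l)     # pitch of the left sight line
        # Keep right-iris points whose line to O_R has a similar pitch angle.
        return [q for q in right_region_points
                if abs(pitch(q - o_r) - left_pitch) <= MAX_PITCH_DIFF]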
S207: determining pupil center position data based on the eyeball position data, the pupil reference position data, the first candidate position data set and the second candidate position data set.
Fig. 3 is a schematic flowchart of a method for determining pupil center position data according to an embodiment of the present disclosure, and in an alternative implementation, the method shown in fig. 3 may be used to determine the pupil center position data, specifically including the following steps:
S301: a first parameter set is determined from the pupil reference position data, the first candidate position data set and the second candidate position data set.
In an alternative embodiment, the server may determine the set of candidate position data groups from the first candidate position data set and the second candidate position data set, wherein each candidate position data group in the set may comprise one first candidate position data and one second candidate position data. Optionally, the server may arbitrarily select one first candidate position data P_L(n_i) from the first candidate position data set corresponding to the left pupil center and one second candidate position data P_R(n_j) from the second candidate position data set corresponding to the right pupil center, obtaining a candidate position data group (P_L(n_i), P_R(n_j)); in the same way, the whole set of candidate position data groups can be obtained.

In this embodiment of the application, after determining the set of candidate position data groups, the server may determine the first parameter set according to the pupil reference position data and each candidate position data group. That is, the server may obtain the initial three-dimensional coordinate data P_L(x_P^L, y_P^L, z_P^L) corresponding to the left pupil center, the initial three-dimensional coordinate data P_R(x_P^R, y_P^R, z_P^R) corresponding to the right pupil center and each candidate position data group (P_L(n_i), P_R(n_j)), and determine the first parameter corresponding to each candidate position data group to obtain the first parameter set corresponding to the set of candidate position data groups. The first parameter may specifically be determined using the following formula:

l = ||P_L(n_i) - P_L|| + ||P_R(n_j) - P_R||
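A direct transcription of this formula, as a hedged sketch assuming NumPy arrays for all positions (the function name constraint_l is an illustrative assumption):

    import numpy as np

    def constraint_l(p_l_cand, p_r_cand, p_l0, p_r0):
        # l = ||P_L(n_i) - P_L|| + ||P_R(n_j) - P_R||: total distance of the
        # candidate pair from the initial (reference) pupil centers.
        return np.linalg.norm(p_l_cand - p_l0) + np.linalg.norm(p_r_cand - p_r0)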
S303: a second parameter set is determined from the eyeball position data, the first candidate position data set and the second candidate position data set.
In the embodiment of the application, the two lines of sight conform to the fixation characteristic: when the two lines of sight intersect, the length of the common perpendicular between the two sight lines approaches zero.
In this embodiment, the server may determine the first vector according to the eyeball position data, may determine the second vector corresponding to each candidate position data set according to the eyeball position data and each candidate position data set, and may further determine the second parameter set according to the first vector and the second vector corresponding to each candidate position data set.
That is, the server can determine, from the left eyeball center position data O_L and the right eyeball center position data O_R, the vector v = O_R - O_L between the two eyeball centers, and can determine, from the left eyeball center position data O_L, the right eyeball center position data O_R and a candidate position data group (P_L(n_i), P_R(n_j)), the common perpendicular vector: the left sight line vector g_L = P_L(n_i) - O_L and the right sight line vector g_R = P_R(n_j) - O_R are cross-multiplied to obtain the common perpendicular vector n, and the projection of the vector v onto the direction of n is then taken as the length of the common perpendicular between the two skew sight lines, i.e. the second parameter corresponding to each candidate position data group, so as to obtain the second parameter set. The common perpendicular vector can specifically be determined by the following formula:

n = (P_L(n_i) - O_L) × (P_R(n_j) - O_R)

The second parameter set may be determined using the following formula:

d = |(O_R - O_L) · n| / ||n||
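The same computation can be sketched as follows, assuming NumPy; the small epsilon guarding against parallel sight lines is an implementation assumption, and the function name constraint_d is illustrative.

    import numpy as np

    def constraint_d(p_l_cand, p_r_cand, o_l, o_r):
        g_l = p_l_cand - o_l                  # left sight-line vector
        g_r = p_r_cand - o_r                  # right sight-line vector
        n = np.cross(g_l, g_r)                # common perpendicular direction
        v = o_r - o_l                         # vector between the eyeball centers
        # d: projection of v onto the unit common-perpendicular direction.
        return abs(np.dot(v, n)) / (np.linalg.norm(n) + 1e-12)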
S305: determining the pupil center position data according to the first parameter set and the second parameter set.
In this embodiment, the server may determine the pupil center position data from the set of candidate position data groups according to the first parameter and the second parameter corresponding to each candidate position data group. Optionally, a loss value may be determined from the first parameter and the second parameter corresponding to each candidate position data group, and the candidate position data group corresponding to the minimum loss value may then be taken as the pupil center position data. The pupil center position data may specifically be determined by the following formula:

S(P_L(n_i), P_R(n_j)) = λ_1·l + λ_2·d

wherein S(P_L(n_i), P_R(n_j)) is the loss value corresponding to the candidate position data group (P_L(n_i), P_R(n_j)), and λ_1 and λ_2 are constraint weight factors. If the loss value corresponding to a candidate position data group (P_L(n_i), P_R(n_j)) is the minimum loss value over the set of candidate position data groups, the first candidate position data corresponding to the left pupil center may be taken as the left pupil center position data and the second candidate position data corresponding to the right pupil center as the right pupil center position data, thereby obtaining the pupil center position data.
According to the embodiment of the application, based on the human-eye fixation characteristic, the pupil center position data are determined from the set of candidate position data groups by determining the first parameter and the second parameter corresponding to each candidate position data group, so that the accuracy of binocular gaze estimation can be improved.
Fig. 4 is a schematic flowchart of another method for determining pupil center position data according to an embodiment of the present disclosure. In an alternative implementation, the method shown in fig. 4 may be used to determine the pupil center position data, specifically including the following steps:
S401: a first parameter set is determined from the pupil reference position data, the first candidate position data set and the second candidate position data set.
In an alternative embodiment, the server may determine the set of candidate position data groups from the first candidate position data set and the second candidate position data set, wherein each candidate position data group in the set may comprise one first candidate position data and one second candidate position data. Optionally, the server may arbitrarily select one first candidate position data P_L(n_i) from the first candidate position data set corresponding to the left pupil center and one second candidate position data P_R(n_j) from the second candidate position data set corresponding to the right pupil center, obtaining a candidate position data group (P_L(n_i), P_R(n_j)); in the same way, the whole set of candidate position data groups can be obtained.

In this embodiment of the application, after determining the set of candidate position data groups, the server may determine the first parameter set according to the pupil reference position data and each candidate position data group. That is, the server may obtain the initial three-dimensional coordinate data P_L(x_P^L, y_P^L, z_P^L) corresponding to the left pupil center, the initial three-dimensional coordinate data P_R(x_P^R, y_P^R, z_P^R) corresponding to the right pupil center and each candidate position data group (P_L(n_i), P_R(n_j)), and determine the first parameter corresponding to each candidate position data group to obtain the first parameter set corresponding to the set of candidate position data groups. The first parameter may specifically be determined using the following formula:

l = ||P_L(n_i) - P_L|| + ||P_R(n_j) - P_R||
S403: a second parameter set is determined from the eyeball position data, the first candidate position data set and the second candidate position data set.
In the embodiment of the application, the two lines of sight conform to the fixation characteristic: when the two lines of sight intersect, the length of the common perpendicular between the two sight lines approaches zero.
In this embodiment, the server may determine the first vector according to the eyeball position data, may determine the second vector corresponding to each candidate position data set according to the eyeball position data and each candidate position data set, and may further determine the second parameter set according to the first vector and the second vector corresponding to each candidate position data set.
That is, the server can determine, from the left eyeball center position data O_L and the right eyeball center position data O_R, the vector v = O_R - O_L between the two eyeball centers, and can determine, from the left eyeball center position data O_L, the right eyeball center position data O_R and a candidate position data group (P_L(n_i), P_R(n_j)), the common perpendicular vector: the left sight line vector g_L = P_L(n_i) - O_L and the right sight line vector g_R = P_R(n_j) - O_R are cross-multiplied to obtain the common perpendicular vector n, and the projection of the vector v onto the direction of n is then taken as the length of the common perpendicular between the two skew sight lines, i.e. the second parameter corresponding to each candidate position data group, so as to obtain the second parameter set. The common perpendicular vector can specifically be determined by the following formula:

n = (P_L(n_i) - O_L) × (P_R(n_j) - O_R)

The second parameter set may be determined using the following formula:

d = |(O_R - O_L) · n| / ||n||
S405: a third parameter set is determined from the eyeball position data, the first candidate position data set and the second candidate position data set.
In this embodiment, the server may determine the first vector according to the eyeball position data, and may determine the second vector corresponding to each candidate position data group according to the eyeball position data and each candidate position data group, that is, determine the common perpendicular vector corresponding to each candidate position data group. The server may further determine the distance between the midpoint of the common perpendicular segment and the straight line through the left and right eyeball centers as the second gaze point depth data Z_2 corresponding to each candidate position data group, and then, according to the first gaze point depth data described above, i.e. the distance Z_1 between the target point and the straight line through the eyeball centers when the subject gazes at the target point, determine the third parameter corresponding to each candidate position data group to obtain the third parameter set. The third parameter may specifically be determined by the following formula:

Z = |Z_1 - Z_2|
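A sketch of this step, assuming NumPy: the closest-point parameters t and s follow the standard skew-line formulas, and the epsilon guard for parallel sight lines and the function name constraint_z are implementation assumptions.

    import numpy as np

    def constraint_z(p_l_cand, p_r_cand, o_l, o_r, z1):
        g_l, g_r = p_l_cand - o_l, p_r_cand - o_r    # sight-line vectors
        w0 = o_l - o_r
        a, b, c = g_l @ g_l, g_l @ g_r, g_r @ g_r
        d0, e = g_l @ w0, g_r @ w0
        denom = a * c - b * b + 1e-12                # ~0 only for parallel lines
        t = (b * e - c * d0) / denom                 # closest point on left line
        s = (a * e - b * d0) / denom                 # closest point on right line
        mid = 0.5 * ((o_l + t * g_l) + (o_r + s * g_r))  # perpendicular midpoint
        u = o_r - o_l                                # line through eyeball centers
        z2 = np.linalg.norm(np.cross(mid - o_l, u)) / np.linalg.norm(u)
        return abs(z1 - z2)                          # Z = |Z_1 - Z_2|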
S407: determining the pupil center position data according to the first parameter set, the second parameter set and the third parameter set.
In this embodiment, the server may determine the pupil center position data from the set of candidate position data groups according to the first parameter, the second parameter and the third parameter corresponding to each candidate position data group. Optionally, a loss value may be determined from the first parameter, the second parameter and the third parameter corresponding to each candidate position data group, and the candidate position data group corresponding to the minimum loss value may then be taken as the pupil center position data. The pupil center position data may specifically be determined by the following formula:

S(P_L(n_i), P_R(n_j)) = λ_1·l + λ_2·d + λ_3·Z

wherein S(P_L(n_i), P_R(n_j)) is the loss value corresponding to the candidate position data group (P_L(n_i), P_R(n_j)), and λ_1, λ_2 and λ_3 are constraint weight factors. If the loss value corresponding to a candidate position data group (P_L(n_i), P_R(n_j)) is the minimum loss value over the set of candidate position data groups, the first candidate position data corresponding to the left pupil center may be taken as the left pupil center position data and the second candidate position data corresponding to the right pupil center as the right pupil center position data, thereby obtaining the pupil center position data.
According to the embodiment of the application, based on the human-eye fixation characteristic, the pupil center position data are determined from the set of candidate position data groups by determining the first parameter, the second parameter and the third parameter corresponding to each candidate position data group, so that the accuracy of binocular gaze estimation can be improved.
By adopting the method for determining pupil center position data provided by the embodiments of the application, the candidate position data at which the two lines of sight intersect are determined from the set of candidate position data groups as the pupil center position data, based on the intrinsic relation between the two lines of sight, so that the accuracy of binocular gaze estimation can be improved.
Fig. 5 is a schematic structural diagram of the device for determining pupil center position data provided in the embodiment of the present application, and as shown in fig. 5, the device may include:
the obtaining module 501 is configured to obtain eyeball position data corresponding to an eyeball center, pupil reference position data corresponding to a pupil center, a first pupil candidate region, and a second pupil candidate region;
the first determining module 503 is configured to determine a first candidate location data set from the first pupil candidate area;
the second determination module 505 is configured to determine a second candidate position data set from a second pupil candidate region;
the third determining module 507 is configured to determine pupil center position data based on the eyeball position data, the pupil reference position data, the first candidate position data set, and the second candidate position data set.
In this embodiment of the application, the third determining module 507 may include:
a first determining unit configured to determine a first parameter set according to the pupil reference position data, the first candidate position data set, and the second candidate position data set;
a second determining unit configured to determine a second parameter set according to the eyeball position data, the first candidate position data set, and the second candidate position data set;
and a third determining unit for determining the pupil center position data according to the first parameter set and the second parameter set.
In this embodiment of the application, the third determining module 507 may further include:
a fourth determination unit for, after determining the second set of parameters from the eye position data, the first set of candidate position data and the second set of candidate position data,
determining a third parameter set according to the eyeball position data, the first candidate position data set and the second candidate position data set;
and determining pupil center position data according to the first parameter set, the second parameter set and the third parameter set.
In an embodiment of the present application, the first determining unit may include:
a first determining subunit, configured to determine a set of candidate position data sets from the first set of candidate position data and the second set of candidate position data; each candidate position data set in the set of candidate position data sets comprises a first candidate position data and a second candidate position data;
and the second determining subunit is used for determining the first parameter set according to the pupil reference position data and each candidate position data group.
In this embodiment of the application, the second determining unit may include:
a third determining subunit, configured to determine the first vector according to the eyeball position data;
the fourth determining subunit is used for determining a second vector corresponding to each candidate position data group according to the eyeball position data and each candidate position data group;
and the fifth determining subunit is used for determining a second parameter set according to the first vector and the second vector corresponding to each candidate position data set.
In this embodiment of the application, the fourth determining unit may include:
an obtaining subunit, configured to obtain first gaze point depth data; the first gaze point depth data is the distance between a target point and the straight line through the eyeball centers when a subject gazes at the target point;
a sixth determining subunit, configured to determine, according to the eyeball position data and each candidate position data group, second gaze point depth data corresponding to each candidate position data group;
a seventh determining subunit, configured to determine a third parameter set according to the first gaze point depth data and the second gaze point depth data corresponding to each candidate position data group.
In this embodiment, the second determining module 505 may include:
a fifth determining unit configured to determine a target region from the second pupil candidate region according to each first candidate position data in the first candidate position data set;
a sixth determining unit for determining a second set of candidate position data from the target area.
The device and method embodiments in the embodiments of the present application are based on the same application concept.
By adopting the device for determining the pupil center position data, provided by the embodiment of the application, the candidate position data of the sight intersection is determined from the candidate position data group set based on the internal relation of the binocular sight as the pupil center position data, so that the binocular sight estimation precision can be improved.
An embodiment of the application further provides an electronic device, which may be deployed in the server. The electronic device comprises a processor and a memory, wherein the memory stores at least one instruction, at least one program, a code set or an instruction set related to the method for determining pupil center position data in the method embodiments, and the at least one instruction, the at least one program, the code set or the instruction set is loaded from the memory and executed by the processor to implement the method for determining pupil center position data.
The storage medium may be configured in the server to store at least one instruction, at least one program, a code set, or a set of instructions related to implementing a method for determining pupil center position data in the method embodiments, where the at least one instruction, the at least one program, the code set, or the set of instructions are loaded and executed by the processor to implement the method for determining pupil center position data.
Optionally, in this embodiment, the storage medium may be located in at least one network server of a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to, a storage medium including: various media that can store program codes, such as a usb disk, a Read-only Memory (ROM), a removable hard disk, a magnetic disk, or an optical disk.
As can be seen from the embodiments of the method, the apparatus, the electronic device, or the storage medium for determining pupil center position data provided in the present application, the method in the present application includes acquiring eyeball position data corresponding to an eyeball center, pupil reference position data corresponding to a pupil center, a first pupil candidate region, and a second pupil candidate region, determining a first candidate position data set from the first pupil candidate region, determining a second candidate position data set from the second pupil candidate region, and determining pupil center position data based on the eyeball position data, the pupil reference position data, the first candidate position data set, and the second candidate position data set. According to the embodiment of the application, the candidate position data of the sight intersection is determined from the candidate position data group set based on the internal relation of the binocular sight, and the candidate position data is used as the pupil center position data, so that the binocular sight estimation precision can be improved.
It should be noted that the foregoing ordering of the embodiments of the present application is for description only and does not imply any preference among the embodiments; specific embodiments are described in this specification, and other embodiments are also within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve the desired results; in some embodiments, multitasking and parallel processing are also possible or may be advantageous.
All the embodiments in this specification are described in a progressive manner; the same or similar parts among the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, the apparatus embodiment is described briefly because it is based on the method embodiment; for relevant details, refer to the corresponding parts of the description of the method embodiment.
The foregoing are preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art can make various improvements and refinements without departing from the principle of the invention, and such improvements and refinements shall also fall within the protection scope of the invention.

Claims (10)

1. A method for determining pupil center position data, comprising:
acquiring eyeball position data corresponding to the center of an eyeball, pupil reference position data corresponding to the center of a pupil, a first pupil candidate area and a second pupil candidate area;
determining a first set of candidate location data from the first pupil candidate region;
determining a second set of candidate location data from the second pupil candidate region;
determining pupil center position data based on the eyeball position data, the pupil reference position data, the first candidate position data set, and the second candidate position data set.
2. The method of claim 1, wherein determining pupil center position data based on the eyeball position data, the pupil reference position data, the first set of candidate position data, and the second set of candidate position data comprises:
determining a first parameter set according to the pupil reference position data, the first candidate position data set and the second candidate position data set;
determining a second parameter set according to the eyeball position data, the first candidate position data set and the second candidate position data set;
and determining the pupil center position data according to the first parameter set and the second parameter set.
3. The method of claim 2, wherein after determining a second set of parameters based on the eyeball position data, the first set of candidate position data, and the second set of candidate position data, the method further comprises:
determining a third parameter set according to the eyeball position data, the first candidate position data set and the second candidate position data set;
determining the pupil center position data according to the first parameter set, the second parameter set and the third parameter set.
4. The method of claim 3, wherein determining a first set of parameters from the pupil reference position data, the first set of candidate position data and the second set of candidate position data comprises:
determining a set of candidate location data sets from the first set of candidate location data and the second set of candidate location data; each candidate location data set of the set of candidate location data sets comprises a first candidate location data and a second candidate location data;
determining the first set of parameters from the pupil reference position data and the each candidate position data set.
5. The method of claim 4, wherein determining a second set of parameters from the eyeball position data, the first set of candidate position data, and the second set of candidate position data comprises:
determining a first vector according to the eyeball position data;
determining a second vector corresponding to each candidate position data group according to the eyeball position data and each candidate position data group;
and determining the second parameter set according to the first vector and the second vector corresponding to each candidate position data group.
6. The method of claim 4, wherein determining a third set of parameters from the eyeball position data, the first set of candidate position data and the second set of candidate position data comprises:
acquiring first gaze point depth data; the first gaze point depth data is the distance between a target point and the straight line through the eyeball centers when a subject gazes at the target point;
determining second gaze point depth data corresponding to each candidate position data group according to the eyeball position data and each candidate position data group;
and determining the third parameter set according to the first gaze point depth data and the second gaze point depth data corresponding to each candidate position data group.
7. The method of claim 6, wherein determining a second set of candidate location data from the second pupil candidate region comprises:
determining a target region from the second pupil candidate region according to each first candidate position data in the first candidate position data set;
determining the second set of candidate location data from the target area.
8. A device for determining pupil center position data, comprising:
the acquisition module is used for acquiring eyeball position data corresponding to the center of an eyeball, pupil reference position data corresponding to the center of a pupil, a first pupil candidate area and a second pupil candidate area;
a first determination module for determining a first set of candidate location data from the first pupil candidate region;
a second determination module for determining a second set of candidate location data from the second pupil candidate region;
a third determining module, configured to determine pupil center position data based on the eyeball position data, the pupil reference position data, the first candidate position data set, and the second candidate position data set.
9. An electronic device, comprising a processor and a memory, wherein the memory has stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the method of determining pupil center position data according to any one of claims 1 to 7.
10. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method of determining pupil center position data according to any one of claims 1 to 7.
CN202111023640.0A 2021-09-01 2021-09-01 Method and device for determining pupil center position data and storage medium Active CN113729616B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111023640.0A CN113729616B (en) 2021-09-01 2021-09-01 Method and device for determining pupil center position data and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111023640.0A CN113729616B (en) 2021-09-01 2021-09-01 Method and device for determining pupil center position data and storage medium

Publications (2)

Publication Number Publication Date
CN113729616A (en) 2021-12-03
CN113729616B (en) 2022-10-14

Family

ID=78734825

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111023640.0A Active CN113729616B (en) 2021-09-01 2021-09-01 Method and device for determining pupil center position data and storage medium

Country Status (1)

Country Link
CN (1) CN113729616B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106575357A (en) * 2014-07-24 2017-04-19 微软技术许可有限责任公司 Pupil detection
CN109471523A (en) * 2017-09-08 2019-03-15 托比股份公司 Use the eye tracks of eyeball center
CN110263745A (en) * 2019-06-26 2019-09-20 京东方科技集团股份有限公司 A kind of method and device of pupil of human positioning
CN112749604A (en) * 2019-10-31 2021-05-04 Oppo广东移动通信有限公司 Pupil positioning method and related device and product

Also Published As

Publication number Publication date
CN113729616B (en) 2022-10-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant