CN112288855A - Method and device for establishing eye gaze model of operator - Google Patents

Method and device for establishing eye gaze model of operator

Info

Publication number
CN112288855A
Authority
CN
China
Prior art keywords
operator
model
eye
face
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011180492.9A
Other languages
Chinese (zh)
Inventor
张也弛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202011180492.9A
Publication of CN112288855A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013: Eye tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/521: Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/55: Depth or shape recovery from multiple images
    • G06T7/593: Depth or shape recovery from multiple images from stereo images
    • G06T7/596: Depth or shape recovery from multiple images from stereo images from three or more stereo images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30004: Biomedical image processing
    • G06T2207/30041: Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Provided are a method and a device for establishing an eye gaze model of an operator. The method comprises the following steps: generating three-dimensional space models containing the operator's face, eyeballs and different gaze target points; determining, according to the three-dimensional space models, the rotation rule of the eyeball relative to the face when the operator's eyes gaze at each gaze target point; determining a face model and an eyeball model of the operator according to that rotation rule; and determining an operator eye gaze model according to the face model and the eyeball model of the operator. The embodiments eliminate the need for an infrared imaging device, simplify the eye tracking method, and reduce the cost of determining the eye gaze position.

Description

Method and device for establishing eye gaze model of operator
Technical Field
The present disclosure relates to the field of eye tracking technologies, and in particular, to a method and an apparatus for establishing an eye gaze model of an operator.
Background
Eye tracking technology, also called gaze tracking technology, is currently applied mainly in fields such as web page layout optimization, scene research, human-computer interaction, virtual reality and clinical medicine. Gaze tracking currently relies on eye tracker devices. Eye trackers on the market fall into two types: head-mounted and remote. A head-mounted eye tracker requires a helmet to be worn on the head or glasses on the eyes, which imposes a certain burden on the subject. A remote eye tracker requires the subject to wear no device, interferes little with the subject, and can record eye movement data in the subject's natural state.
Both the existing head-mounted and remote eye trackers rely on the infrared imaging principle: an infrared camera assembly illuminates the cornea of the eye, a highlight point is located from the light reflected by the cornea, the distance between the highlight point and the pupil is determined, and the gaze angle of the eye is then derived from that distance.
The existing eyeball tracking technology is mainly based on the pupil center corneal reflection (PCCR) method. Since PCCR requires an infrared light source and an infrared camera, the existing technology suffers from complex equipment and high cost.
The PCCR technique relies on illuminating the cornea with infrared radiation, locating the highlight point, measuring the distance between the highlight point and the pupil, and determining from it where the eye is looking; it entails high cost and demanding requirements on equipment accuracy.
Disclosure of Invention
The present disclosure is intended to solve the problems of complex equipment, complex calculation methods and high cost in prior-art eye tracking.
In order to solve the above technical problem, a first aspect herein provides a method for establishing an operator eye gaze model, comprising:
generating a three-dimensional space model containing the face, eyeballs and different gaze target points of an operator;
determining the rotation rule of the eyeball relative to the face when the eyes of the operator watch each gaze target point according to each three-dimensional space model;
determining a face model and an eyeball model of the operator according to the rotation rule of the eyeball relative to the face when the eyes of the operator watch each gaze target point;
and determining an eye gazing model of the operator according to the face model and the eyeball model of the operator.
In further embodiments herein, generating a three-dimensional space model containing the operator's face, eyeballs and the gaze target point being viewed comprises:
when the operator looks at each gaze target point in the gaze target point array, scanning the operator's face and the gaze target point by using a three-dimensional scanner;
and generating, from the scanning data, a three-dimensional space model containing the operator's face, eyeballs and the gaze target point.
In a further embodiment of the present disclosure, determining, according to all three-dimensional spatial models, a rotation rule of an eyeball relative to a face when the eye of the operator gazes at each gaze target point includes:
determining a corresponding visual axis, a pupil center point and an iris edge curve in each three-dimensional space model;
superposing all three-dimensional space models by taking the face of an operator as a reference, and determining a visual axis intersection point and a feature point, wherein the feature point is a point which does not move along with the facial expression;
and connecting the pupil center points in all the three-dimensional space models, and determining the motion trail curved surface of the pupil center point.
In a further embodiment of the present disclosure, determining a face model and an eyeball model of the operator according to a rotation rule of an eyeball relative to a face when the eyes of the operator watch each gaze target point includes:
generating an eyeball model according to the visual axis, the pupil center and the iris edge curve of any three-dimensional space model or according to the visual axis, the pupil center and the iris edge curve which are obtained by weighted averaging of the visual axis, the pupil center and the iris edge curve of all three-dimensional space models;
and generating a face model according to the characteristic points, the visual axis intersection points and the motion track curved surface of the pupil center point.
In further embodiments herein, determining an operator eye gaze model from the operator's face model and eye model comprises:
setting the visual axis in the eyeball model to be positioned on the visual axis intersection point in the face model;
and setting the pupil center point in the eyeball model to be positioned on the motion trail curved surface of the pupil center point in the face model.
A second aspect herein provides a method of simulating an operator's eye looking at an object, comprising:
establishing an operator eye gaze model of the operator in advance by using the method for establishing an operator eye gaze model described above;
generating a virtual space including the eye gazing model and the object gazing model of the operator according to the acquired image of the operator;
determining a connecting line between a visual axis intersection point in the eye gazing model of the operator and a point of interest in the object gazing model according to the virtual space;
and rotating the visual axis in the operator eye gazing model to be overlapped with the connecting line, thereby completing the simulation of the operator eye gazing object.
A third aspect herein provides an apparatus for establishing an operator eye gaze model, comprising: the three-dimensional space establishing module is used for generating a three-dimensional space model containing the face, eyeballs and different gaze target points of an operator;
the motion rule determining module is used for determining the rotation rule of the eyeball relative to the face when the eyes of the operator watch each gaze target point according to each three-dimensional space model;
the first model establishing module is used for determining a face model and an eyeball model of the operator according to the rotation rule of the eyeball relative to the face when the eyes of the operator watch each gaze target point;
and the second model establishing module is used for determining an eye gazing model of the operator according to the face model and the eyeball model of the operator.
A fourth aspect herein provides an apparatus for simulating an eye of an operator looking at an object, comprising:
the modeling module is used for establishing an operator eye gazing model of the operator in advance by using the establishing method of the operator eye gazing model of any one of the previous embodiments;
the acquisition module is used for generating a virtual space containing the eye gazing model and the object gazing model of the operator according to the acquired image of the operator;
the processing module is used for determining a connecting line between a visual axis intersection point in the eye gazing model of the operator and a focused point in the object gazing model according to the virtual space;
and the simulation module is used for rotating the visual axis in the eye gazing model of the operator to be overlapped with the connecting line, so that the simulation of the eye gazing object of the operator is completed.
A fifth aspect herein provides a computer apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of establishing an operator eye gaze model as described with any of the preceding embodiments when executing the computer program.
A sixth aspect herein provides a computer-readable storage medium storing a computer program for execution by a processor to implement a method of establishing an operator eye gaze model as described with any of the preceding embodiments.
Herein, three-dimensional space models containing the operator's face, eyeballs and different gaze target points are generated; the rotation rule of the eyeball relative to the face when the operator's eyes gaze at each gaze target point is determined according to the three-dimensional space models; the face model and the eyeball model of the operator are determined according to that rotation rule; and the operator eye gaze model is determined according to the face model and the eyeball model. This eliminates the need for an infrared imaging device, simplifies the eye tracking method, and reduces the cost of determining the eye gaze position.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments herein or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments herein, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 shows a flow chart of a method of establishing an operator eye gaze model according to embodiments herein;
FIG. 2 is a flow diagram illustrating a three-dimensional spatial model building process according to an embodiment herein;
FIG. 3 shows a schematic view of an operator looking at a gaze target point in accordance with embodiments herein;
FIG. 4 shows a schematic diagram of a three-dimensional lattice digital model of embodiments herein;
FIG. 5 illustrates a flow chart of a process for determining a law of motion according to embodiments herein;
FIG. 6 shows a schematic diagram of a three-dimensional spatial model of an embodiment herein;
FIG. 7 is a schematic diagram illustrating the coincidence of the corresponding visual axes of the three-dimensional spatial model of the embodiments herein;
FIG. 8 illustrates a close-up view of the coincidence of the visual axes corresponding to the three-dimensional spatial model of an embodiment herein;
fig. 9 shows a schematic diagram of a motion trajectory of a pupil center point of embodiments herein;
FIG. 10 shows a schematic view of an operator eye model according to embodiments herein;
FIG. 11A shows a schematic view of an operator face model according to embodiments herein;
FIGS. 11B and 11C are partial schematic diagrams of a face model for the left and right eyes according to embodiments herein;
FIG. 12A shows a schematic diagram of an intermediate process of operator eye gaze model determination in accordance with an embodiment herein;
fig. 12B shows an operator eye gaze model schematic of an embodiment herein;
FIG. 13A illustrates a flow diagram of a method of simulating an operator eye-gaze object according to embodiments herein;
FIG. 13B illustrates a schematic diagram of an operator eye-gaze model simulating an eye-gaze object of an embodiment herein;
fig. 14 is a block diagram showing a setup of an operator eye gaze model according to an embodiment herein;
FIG. 15 illustrates a block diagram of an apparatus to simulate an operator's eye looking at an object according to embodiments herein;
FIG. 16 is a block diagram that illustrates a computer device according to an embodiment herein.
Description of the symbols of the drawings:
300. a three-dimensional scanner;
300'. a gaze target point array;
410. operator eye features;
420. surrounding facial features;
430. gaze target point features;
601, 602. pupil center points;
603. gaze target point center point;
604, 605. straight lines (i.e., visual axes);
701, 702. visual axis intersection points;
1000. an eyeball model;
1010. a visual axis;
1020. a pupil center point;
1030. an iris edge curve;
1110. feature points;
1120. a visual axis intersection point;
1130. a motion trail curved surface of the pupil center point;
131. a point of interest;
132. a connecting line;
1410. a three-dimensional space building module;
1420. a motion rule determining module;
1430. a first model building module;
1440. a second model building module;
1510. a modeling module;
1520. an acquisition module;
1530. a processing module;
1540. a simulation module;
1602. a computer device;
1604. a processor;
1606. a memory;
1608. a drive mechanism;
1610. an input/output module;
1612. an input device;
1614. an output device;
1616. a presentation device;
1618. a graphical user interface;
1620. a network interface;
1622. a communication link;
1624. a communication bus.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments herein without making any creative effort, shall fall within the scope of protection.
The method for establishing an operator eye gaze model herein is suitable for the field of terminal control; related terminals include, but are not limited to, smart terminals, virtual reality devices and the like. As shown in fig. 1, fig. 1 illustrates a method for establishing an operator eye gaze model; this embodiment simplifies the tracking and positioning of human eyes by establishing an operator eye gaze model. Specifically, the method for establishing an operator eye gaze model includes:
step 110, generating a three-dimensional space model containing the face, eyeballs and different gaze targets of an operator;
step 120, determining a rotation rule of the eyeball relative to the face when the eyes of the operator watch each gaze target point according to all the three-dimensional space models;
step 130, determining a face model and an eyeball model of an operator according to the rotation rule of the eyeball relative to the face when the eyes of the operator watch each gaze target point;
and step 140, determining an eye gazing model of the operator according to the face model and the eyeball model of the operator.
The determined operator eye gaze model can simulate the relative position relationship between the eyeball and the face of the real operator as the eyeball rotates, so that the operator eye gaze model can be used to identify the operator's eye gaze position from a real-time image of the operator (for the specific gaze position identification process, see the subsequent embodiments).
In an embodiment herein, as shown in fig. 2, step 110 generates a three-dimensional space model containing the operator's face, eyeballs and the gaze target point being viewed; that is, a three-dimensional space model is generated each time the operator looks at one gaze target point. For any one of these three-dimensional space models, the implementation process includes:
step 210, when the operator looks at each gaze target point in the gaze target point array, scanning the operator's face and the gaze target point by using a three-dimensional scanner;
step 220, generating, from the scanning data, a three-dimensional space model containing the operator's face, eyeballs and the gaze target point.
In detail, in step 210, the three-dimensional scanner used may be a color three-dimensional scanner so that the iris edge curve can be determined. The operator looks at the gaze target points in the gaze target point array one by one according to a preset rule. The gaze target points in the array may be arranged in a rectangle, square, circle, triangle or the like, and the patterns of the gaze target points may be the same or different; they serve to guide the operator's eyes to look at them. As shown in fig. 3, each gaze target point is drawn as a cross; in this example there are 35 gaze target points, each assigned a different number, e.g. gaze target point 301, gaze target point 302, ..., gaze target point 335.
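For illustration only, the short sketch below generates the center points of such an array. The 5 x 7 grid layout, the 0.1 m spacing and the function name make_target_array are assumptions consistent with, but not mandated by, the 35 numbered cross targets described above (the embodiment equally allows rectangular, square, circular or triangular arrangements).

    import numpy as np

    def make_target_array(rows=5, cols=7, spacing=0.1):
        # Center points of a rectangular gaze target point array in the
        # plane z = 0, centered on the origin and numbered row by row
        # (e.g. 301..335 as in fig. 3). Grid shape and spacing are
        # illustrative assumptions, not values from the embodiment.
        xs = (np.arange(cols) - (cols - 1) / 2) * spacing
        ys = (np.arange(rows) - (rows - 1) / 2) * spacing
        return np.array([[x, y, 0.0] for y in ys for x in xs])

    targets = make_target_array()  # shape (35, 3): one center per target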
The operator's face directly faces the geometric center of the gaze target point array 300', i.e. the center of the cross arrangement shown in fig. 3; the operator's head is kept still while the eyes look at the gaze target points at different positions one by one. Each time the operator's eyes fixate on one of the cross-shaped guiding targets, the fixation must be held for several seconds so that the three-dimensional scanner 300 can scan the operator's face and the fixated gaze target point from different angles to obtain a three-dimensional lattice digital model (i.e. the scanning data). As shown in fig. 4, the three-dimensional lattice digital model includes the operator's eye features 410, the surrounding facial features 420, and the gaze target point features 430. Because the error of a three-dimensional lattice digital model obtained by an existing three-dimensional scanner is small relative to the real surface being scanned, the scanned model can, under ideal conditions, approximately restore the operator's eyes, the facial features around the eyes, the shape of the gaze target point, and their spatial position relationship at the moment the operator's eyes look at the center of the gaze target point.
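In software, each scan can be held in a simple record; the sketch below is an assumed data layout (the class and field names are hypothetical) whose fields mirror the features 410, 420 and 430 of fig. 4.

    from dataclasses import dataclass

    import numpy as np

    @dataclass
    class ScanModel:
        # One three-dimensional lattice digital model: the scan taken while
        # the operator fixates a single gaze target point. Each point field
        # is an (N, 3) array in the scanner's coordinate frame.
        eye_points: np.ndarray     # operator eye features (410)
        face_points: np.ndarray    # surrounding facial features (420)
        target_points: np.ndarray  # gaze target point features (430)
        target_center: np.ndarray  # (3,) center of the fixated cross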
A three-dimensional space model containing the operator's face, eyeballs and the gaze target point is generated from the scanning data; as the operator views the 35 gaze target points one by one, 35 independent three-dimensional space models are generated.
In order to facilitate subsequent retrieval of the data of the three-dimensional space models, the models are stored in an information storage medium of the computer and can be named in the sequence of the gaze target points, e.g. "three-dimensional space model of gaze target point 1", "three-dimensional space model of gaze target point 2", ..., "three-dimensional space model of gaze target point 35".
In an embodiment of this document, as shown in fig. 5, step 120 of determining, according to all three-dimensional space models, the rotation rule of the eyeball relative to the face when the operator's eyes gaze at each gaze target point includes:
step 510, determining a visual axis, a pupil center point and an iris edge curve in each three-dimensional space model;
step 520, superposing all three-dimensional space models with the operator's face as the reference, and determining the visual axis intersection points and feature points, wherein the feature points are points that do not move with facial expression;
and 530, connecting the pupil center points in all the three-dimensional space models, and determining the motion track curved surface of the pupil center point.
In the step 510, taking one of the three-dimensional space models as an example, as shown in fig. 6, the following process may be performed:
1) The pupil center 601 of the left eye and the pupil center 602 of the right eye are determined from the three-dimensional space model. In a specific implementation, the feature points of the pupil are determined first, and the most central of these feature points is then taken as the pupil center.
2) A straight line 604 is drawn between the pupil center 601 and the gaze target center point 603 corresponding to the three-dimensional space model, and a straight line 605 is drawn between the pupil center 602 and the same gaze target center point 603. The straight lines 604 and 605 are the gaze center lines, i.e. the visual axes, of the operator's eyes. Whatever angle the operator's eyes look at, the relative position between each eye and its visual axis remains unchanged, and the visual axis always points at the object the operator is gazing at.
3) The feature points of the iris edge are extracted from each three-dimensional space model, and the iris edge curve is determined from these feature points.
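Steps 1) and 2) amount to a centroid and a normalized direction vector. A minimal sketch, assuming the pupil feature points and the gaze target center point are available as NumPy arrays (the function names are hypothetical):

    import numpy as np

    def pupil_center(pupil_feature_points):
        # Step 1): take the most central of the pupil feature points;
        # the centroid is used here as a simple stand-in.
        return pupil_feature_points.mean(axis=0)

    def visual_axis(center, target_center):
        # Step 2): the visual axis is the line through the pupil center
        # and the fixated gaze target center point (lines 604/605 in
        # fig. 6), returned as (origin, unit direction).
        d = target_center - center
        return center, d / np.linalg.norm(d)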
In step 520, the visual axis intersection point is the rotation point of the visual axis: whatever angle the operator's eyes look at, the visual axis always rotates around the visual axis intersection point. Determining this rotation point provides a basis for subsequently positioning the relative position relationship between the operator's eyeball model and face model.
In a specific implementation, the three-dimensional space models obtained from all the scans are superposed with the operator's face in the models as the reference. Feature points that are distinctive on the operator's face and do not move with facial expression are extracted, including the two outer canthi, the two inner canthi, the ears, the nose tip, the nose bridge and the like, so that a simplified operator face can be formed; in a specific implementation, these feature points can be identified by face recognition software. As shown in figs. 7 and 8, the left-eye visual axis 604 and the right-eye visual axis 605 in the superposed model are extended, and the visual axis intersection points 701 and 702 are determined from the region where the extended visual axes intersect most densely; the visual axis intersection point may be any point in that region or the center point of the region, which is not limited herein.
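In practice the extended visual axes will not pass through one exact point, so "the region where the extension lines intersect most densely" can be approximated by the point with the least summed squared distance to all the axes. A sketch of this standard least-squares construction (the helper name is hypothetical):

    import numpy as np

    def nearest_point_to_lines(origins, directions):
        # Least-squares "intersection" of a bundle of 3D lines
        # origins[i] + t * directions[i] (unit directions): solves
        # sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) o_i for the
        # point x closest to all lines in the squared-distance sense.
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            P = np.eye(3) - np.outer(d, d)  # projector orthogonal to d
            A += P
            b += P @ o
        return np.linalg.solve(A, b)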
In step 530, the pupil center points of the left and right eyes in all three-dimensional space models are fitted separately to determine the motion trajectory curved surface of the pupil center point of each eye; specifically, for each eye, all of its pupil center points can be joined into the motion trajectory curved surface by a smooth curved surface. As shown in fig. 9, the motion trajectory curved surface of the pupil center point is a roughly rectangular curved surface. This curved surface keeps an unchanged position relationship with the operator face model: whatever angle the operator's eyes look at, the operator's pupil center always moves on the motion trajectory curved surface. Based on this, the position relationship between the operator's eyeball model and face model can be determined so as to simulate the eyeball rotation of the real operator.
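The embodiment only requires that the pupil center points be joined by a smooth curved surface. One simple choice, offered as an assumption rather than the prescribed fitting method, is a least-squares quadratic surface in a face-fixed coordinate frame whose z axis points roughly along the gaze direction:

    import numpy as np

    def fit_pupil_surface(points):
        # Fit z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2 through the
        # pupil center points of one eye (rows of `points` are x, y, z in
        # the face-fixed frame). Returns the six coefficients.
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
        coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
        return coeffs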
In an embodiment herein, as shown in figs. 10, 11A, 11B and 11C, where A and B denote the left eye and the right eye respectively, step 130 of determining the face model and the eyeball model of the operator according to the rotation rule of the eyeball relative to the face when the operator's eyes gaze at each gaze target point includes:
generating an eyeball model according to a visual axis 1010, a pupil center 1020 and an iris edge curve 1030 of any three-dimensional space model or according to the visual axis, the pupil center and the iris edge curve which are obtained by weighted averaging of the visual axis, the pupil center and the iris edge curve of all three-dimensional space models;
a face model is generated from the feature points 1110, the visual axis intersection 1120, and the motion trajectory surface 1130 of the pupil center point.
In detail, the eyeball model can be represented by a combination of three features, namely, a visual axis, a pupil center point and an iris edge curve. Features within the eyeball model remain unchanged in shape and relative position for simulating a real operator's eye.
The face model can be represented by the combination of three characteristics of a characteristic point, a visual axis intersection point and a motion track curved surface of a pupil center point. The shape and relative position of the characteristic points, the visual axis intersection points and the motion track curved surface of the pupil center point in the face model are unchanged, and the face model is used for simulating the face of a real operator.
In one embodiment of the present invention, as shown in fig. 12A and 12B, the step 140 determining an operator eye gaze model according to the face model and the eyeball model of the operator includes:
setting the visual axis 1010 in the eyeball model 1000 to be located on the visual axis intersection point 1120 in the face model, as shown in fig. 12A;
the pupil center 1020 of the eyeball model 1000 is set to be located on the motion trajectory curved surface 1130 of the pupil center of the face model, as shown in fig. 12B.
This embodiment enables the operator eye gaze model to simulate actual eye gaze truly and accurately.
In an embodiment of this document, there is further provided a method for simulating the operator's eyes gazing at an object by using the operator eye gaze model established in the foregoing embodiments. Specifically, as shown in figs. 13A and 13B, the method includes:
step 1310, pre-establishing an operator eye gaze model of the operator by using the method for establishing the operator eye gaze model;
step 1320, generating a virtual space including an operator eye gazing model and a gazing object model according to the collected image of the operator;
step 1330, determining, according to the virtual space, a connecting line 132 between a visual axis intersection point in the operator eye gaze model A and the point of interest 131 in the gazing object model B;
step 1340, rotating the visual axis in the operator eye-gaze model to overlap the connecting line, thereby completing the simulation of the operator eye-gaze object, as shown in fig. 13B.
In detail, the method is suitable for application scenarios such as intelligent terminal control and human-computer interaction, and the application scenarios are not specifically limited.
The operator image captured in step 1320 is a facial image acquired in real time. The image acquisition device may be disposed on a wearable device in front of the operator's eyes to obtain the operator image, or it may be arranged on the gazed object to obtain the operator image.
In step 1320, a laser positioning mode may be adopted, for example: a laser ranging unit is disposed beside the front camera of the operator image acquisition device so as to obtain the relative position relationship between the gazed object and the operator, i.e. between the operator eye gaze model and the gazing object model. Alternatively, the operator image acquired by the image acquisition device can be analyzed, and the relative position relationship between the operator and the gazed object calculated, for example, from the size of the operator in the image or from the arm length of a typical operator. As another example, the facial and eye feature information of the operator (the shapes of the left outer canthus, the right outer canthus, the nose tip, the junction of the right forehead and the hairline, and the iris edge curve) is determined from the captured operator image, and from this information the angle between the light ray incident on the image acquisition device at an eye feature pixel and the normal of the lens of the image acquisition device is determined; this angle is used to represent the relative position relationship between the operator and the gazed object.
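Under an assumed pinhole camera model, that incidence angle follows from the pixel coordinates and the camera intrinsics. In the sketch below, fx, fy (focal lengths in pixels) and cx, cy (principal point) are assumed calibration parameters, not values given in the embodiment:

    import numpy as np

    def ray_angle_to_optical_axis(u, v, fx, fy, cx, cy):
        # Angle between the ray entering the camera at pixel (u, v) and
        # the lens normal (optical axis) under a pinhole model.
        ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
        return float(np.arccos(ray[2] / np.linalg.norm(ray)))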
In step 1330, the point of interest in the gazing object model may be any point on the gazed object, which is not limited herein. The connecting line between the visual axis intersection point in the operator eye gaze model and the point of interest in the gazing object model determines the visual axis of the operator's eyes. Simulating the operator's eyes focusing on the point of interest on the gazed object is then accomplished, via step 1340, by rotating the visual axis in the operator eye gaze model to overlap the connecting line determined in step 1330.
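Rotating one direction onto another is a standard construction. The sketch below uses Rodrigues' rotation formula to build the rotation taking the model's current visual axis direction onto the connecting line direction, as one possible implementation of step 1340:

    import numpy as np

    def rotation_aligning(a, b):
        # Rotation matrix taking unit vector a onto unit vector b:
        # R = I + [v]x + [v]x^2 / (1 + c), with v = a x b, c = a . b.
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b)
        v = np.cross(a, b)
        c = float(np.dot(a, b))
        if np.isclose(c, -1.0):
            # Opposite vectors: 180-degree turn about any axis
            # orthogonal to a.
            axis = np.eye(3)[int(np.argmin(np.abs(a)))]
            v = np.cross(a, axis)
            v /= np.linalg.norm(v)
            return 2.0 * np.outer(v, v) - np.eye(3)
        K = np.array([[0.0, -v[2], v[1]],
                      [v[2], 0.0, -v[0]],
                      [-v[1], v[0], 0.0]])
        return np.eye(3) + K + (K @ K) / (1.0 + c)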
This embodiment is suitable for common image acquisition equipment; it eliminates the need for an infrared imaging device, simplifies the eye tracking method and reduces the cost of determining the eye gaze position.
Based on the same inventive concept, a device for establishing an operator eye gaze model is also provided herein, as described in the following embodiments. Because the principle by which this device solves the problem is similar to that of the method for establishing an operator eye gaze model, the implementation of the device may refer to the implementation of the method, and repeated parts are not described again.
As shown in fig. 14, the device for creating an operator eye gaze model includes:
a three-dimensional space establishing module 1410, configured to generate a three-dimensional space model including the face, the eyeball, and the target points of different gaze directions of the operator;
a motion rule determining module 1420, configured to determine, according to each three-dimensional space model, a rotation rule of an eyeball with respect to a face when the eye of the operator gazes at each gaze target;
the first model establishing module 1430 is configured to determine a face model and an eyeball model of the operator according to a rotation rule of the eyeball relative to the face when the eyes of the operator watch each gaze target point;
the second model building module 1440 is configured to determine an eye gaze model of the operator according to the face model and the eyeball model of the operator.
In detail, all the functional modules in the device for establishing the eye gaze model of the operator may be implemented by a dedicated or general-purpose chip, or may be implemented by a software program, and the implementation manner of the functional modules is not limited herein.
In an embodiment herein, as shown in fig. 15, there is also provided an apparatus for simulating an eye-gazing object of an operator, including:
a modeling module 1510 for pre-building an operator eye gaze model of the operator using a method of building the operator eye gaze model;
an acquisition module 1520, configured to generate a virtual space including the eye gaze model and the object gaze model of the operator according to the acquired image of the operator;
a processing module 1530 for determining a connecting line between a visual axis intersection in the operator eye gaze model and a point of interest in the gaze object model according to the virtual space;
a simulation module 1540 for rotating the visual axis in the operator's eye gaze model to overlap the connecting line, thereby completing the simulation of the operator's eye gaze object.
The device for simulating the operator's eyes gazing at an object can be deployed on common terminal equipment in software or hardware form; it eliminates the need for an infrared imaging device, simplifies the eye tracking method and reduces the cost of determining the eye gaze position.
In an embodiment herein, there is also provided a computer device, as shown in fig. 16, the computer device 1602 may include one or more processors 1604, such as one or more Central Processing Units (CPUs), each of which may implement one or more hardware threads. The computer device 1602 may also include any memory 1606 for storing any kind of information, such as code, settings, data, etc. For example, and without limitation, memory 1606 may include any one or more of the following in combination: any type of RAM, any type of ROM, flash memory devices, hard disks, optical disks, etc. More generally, any memory may use any technology to store information. Further, any memory may provide volatile or non-volatile retention of information. Further, any memory may represent fixed or removable components of the computer device 1602. The memory 1606 has a computer program stored thereon, wherein the computer program is executable on the processor 1604, and the processor 1604, when executing the computer program, implements the method for establishing the eye gaze model of the operator or the method for simulating the eye gaze object of the operator as described in any of the foregoing embodiments. In one case, when the processor 1604 executes the associated instructions, which are stored in any memory or combination of memories, the computer device 1602 can perform any of the operations of the associated instructions. The computer device 1602 also includes one or more drive mechanisms 1608, such as a hard disk drive mechanism, an optical disk drive mechanism, or the like, for interacting with any memory.
Computer device 1602 can also include an input/output module 1610 (I/O) for receiving various inputs (via input device 1612) and for providing various outputs (via output device 1614). One particular output mechanism may include a presentation device 1616 and an associated graphical user interface 1618 (GUI). In other embodiments, the input/output module 1610 (I/O), input device 1612 and output device 1614 may be omitted, the computer device then acting merely as a computing device in a network. Computer device 1602 can also include one or more network interfaces 1620 for exchanging data with other devices via one or more communication links 1622. One or more communication buses 1624 couple the above-described components together.
Communication link 1622 may be implemented in any manner, such as over a local area network, a wide area network (e.g., the Internet), a point-to-point connection, etc., or any combination thereof. Communications link 1622 may include any combination of hardwired links, wireless links, routers, gateway functions, name servers, etc., as dictated by any protocol or combination of protocols.
In an embodiment of the present disclosure, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program is executed by a processor to perform the method for establishing an operator eye-gaze model or the method for simulating an operator eye-gaze object according to any of the foregoing embodiments.
In an embodiment herein, there is also provided computer readable instructions, wherein when executed by a processor, the program causes the processor to perform the method of establishing an operator eye gaze model or the method of simulating an operator eye gaze object as described in any of the preceding embodiments.
It should be understood that, in various embodiments herein, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments herein.
It should also be understood that, in the embodiments herein, the term "and/or" is only one kind of association relation describing an associated object, meaning that three kinds of relations may exist. For example, a and/or B, may represent: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints of the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be considered beyond the scope of this disclosure.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided herein, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purposes of the embodiments herein.
In addition, functional units in the embodiments herein may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present invention may be implemented in a form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The principles and embodiments herein are explained using specific examples, which are presented only to aid in understanding the method and its core idea. Meanwhile, for those of ordinary skill in the art, there may be changes in the specific implementation and application scope according to the idea herein. In summary, the contents of this description should not be understood as limiting this document.

Claims (10)

1. A method of creating an operator eye gaze model, comprising:
generating a three-dimensional space model containing the face, eyeballs and different gaze target points of an operator;
determining the rotation rule of the eyeball relative to the face when the eyes of the operator watch each gaze target point according to all the three-dimensional space models;
determining a face model and an eyeball model of the operator according to the rotation rule of the eyeball relative to the face when the eyes of the operator watch each gaze target point;
and determining an eye gazing model of the operator according to the face model and the eyeball model of the operator.
2. The method of claim 1, wherein generating a three-dimensional space model containing the operator's face, eyeballs and the gaze target point being viewed comprises:
when the operator looks at each gaze target point in the gaze target point array, scanning the operator's face and the gaze target point by using a three-dimensional scanner;
and generating, from the scanning data, a three-dimensional space model containing the operator's face, eyeballs and the gaze target point.
3. The method of claim 1, wherein determining, according to all three-dimensional space models, the rotation rule of the eyeball relative to the face when the operator's eyes gaze at each gaze target point comprises:
determining a corresponding visual axis, a pupil center point and an iris edge curve in each three-dimensional space model;
superposing all three-dimensional space models by taking the face of an operator as a reference, and determining a visual axis intersection point and a feature point, wherein the feature point is a point which does not move along with the facial expression;
and connecting the pupil center points in all the three-dimensional space models, and determining the motion trail curved surface of the pupil center point.
4. The method of claim 3, wherein determining the face model and the eyeball model of the operator according to the rotation rule of the eyeball relative to the face when the operator's eyes gaze at each gaze target point comprises:
generating an eyeball model according to the visual axis, the pupil center and the iris edge curve of any three-dimensional space model or according to the visual axis, the pupil center and the iris edge curve which are obtained by weighted averaging of the visual axis, the pupil center and the iris edge curve of all three-dimensional space models;
and generating a face model according to the characteristic points, the visual axis intersection points and the motion track curved surface of the pupil center point.
5. The method of claim 4, wherein determining the operator eye gaze model according to the face model and the eyeball model of the operator comprises:
setting the visual axis in the eyeball model to be positioned on the visual axis intersection point in the face model;
and setting the pupil center point in the eyeball model to be positioned on the motion trail curved surface of the pupil center point in the face model.
6. A method of simulating an operator's eye looking at an object, comprising:
pre-establishing the operator eye gaze model using the method of any one of claims 1 to 5;
generating a virtual space including the eye gazing model and the object gazing model of the operator according to the acquired image of the operator;
determining a connecting line between a visual axis intersection point in the eye gazing model of the operator and a point of interest in the object gazing model according to the virtual space;
and rotating the visual axis in the operator eye gazing model to be overlapped with the connecting line, thereby completing the simulation of the operator eye gazing object.
7. An apparatus for creating an eye gaze model of an operator, comprising:
the three-dimensional space establishing module is used for generating a three-dimensional space model containing the face, eyeballs and different gaze target points of an operator;
the motion rule determining module is used for determining the rotation rule of the eyeball relative to the face when the eyes of the operator watch each gaze target point according to all the three-dimensional space models;
the first model establishing module is used for determining a face model and an eyeball model of the operator according to the rotation rule of the eyeball relative to the face when the eyes of the operator watch each gaze target point;
and the second model establishing module is used for determining an eye gazing model of the operator according to the face model and the eyeball model of the operator.
8. An apparatus for simulating an eye gaze of an operator on an object, comprising:
a modeling module for pre-establishing an operator eye gaze model of the operator using the method of any one of claims 1 to 5;
the acquisition module is used for generating a virtual space containing the eye gazing model and the object gazing model of the operator according to the acquired image of the operator;
the processing module is used for determining a connecting line between a visual axis intersection point in the eye gazing model of the operator and a focused point in the object gazing model according to the virtual space;
and the simulation module is used for rotating the visual axis in the eye gazing model of the operator to be overlapped with the connecting line, so that the simulation of the eye gazing object of the operator is completed.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 5.
CN202011180492.9A 2020-10-29 2020-10-29 Method and device for establishing eye gaze model of operator Pending CN112288855A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011180492.9A CN112288855A (en) 2020-10-29 2020-10-29 Method and device for establishing eye gaze model of operator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011180492.9A CN112288855A (en) 2020-10-29 2020-10-29 Method and device for establishing eye gaze model of operator

Publications (1)

Publication Number Publication Date
CN112288855A (en) 2021-01-29

Family

ID=74374019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011180492.9A Pending CN112288855A (en) 2020-10-29 2020-10-29 Method and device for establishing eye gaze model of operator

Country Status (1)

Country Link
CN (1) CN112288855A (en)


Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1787012A (en) * 2004-12-08 2006-06-14 索尼株式会社 Method,apparatua and computer program for processing image
CN101172034A (en) * 2006-11-03 2008-05-07 上海迪康医学生物技术有限公司 Eyeball moving track detecting method
US20080192990A1 (en) * 2007-02-09 2008-08-14 Kabushiki Kaisha Toshiba Gaze detection apparatus and the method of the same
US20080309671A1 (en) * 2007-06-18 2008-12-18 Brian Mark Shuster Avatar eye control in a multi-user animation environment
US8885882B1 (en) * 2011-07-14 2014-11-11 The Research Foundation For The State University Of New York Real time eye tracking for human computer interaction
US20150293588A1 (en) * 2014-04-10 2015-10-15 Samsung Electronics Co., Ltd. Eye gaze tracking method and apparatus and computer-readable recording medium
US20160202756A1 (en) * 2015-01-09 2016-07-14 Microsoft Technology Licensing, Llc Gaze tracking via eye gaze model
US20180357790A1 (en) * 2017-06-09 2018-12-13 Aisin Seiki Kabushiki Kaisha Gaze-tracking device, computable readable medium, and method
CN108334191A (en) * 2017-12-29 2018-07-27 北京七鑫易维信息技术有限公司 Based on the method and apparatus of the determination blinkpunkt of eye movement analysis equipment
CN108427503A (en) * 2018-03-26 2018-08-21 京东方科技集团股份有限公司 Human eye method for tracing and human eye follow-up mechanism
CN109102734A (en) * 2018-09-04 2018-12-28 北京精英智通科技股份有限公司 Drive simulating training system and method
CN111176434A (en) * 2018-11-13 2020-05-19 本田技研工业株式会社 Gaze detection device, computer-readable storage medium, and gaze detection method
CN209310751U (en) * 2019-01-23 2019-08-27 新拓三维技术(深圳)有限公司 A kind of spatial digitizer
CN109885169A (en) * 2019-02-25 2019-06-14 清华大学 Eyeball parameter calibration and direction of visual lines tracking based on three-dimensional eyeball phantom
CN109947253A (en) * 2019-03-25 2019-06-28 京东方科技集团股份有限公司 The method for establishing model of eyeball tracking, eyeball tracking method, equipment, medium
CN110196640A (en) * 2019-05-31 2019-09-03 维沃移动通信有限公司 A kind of method of controlling operation thereof and terminal
CN110585592A (en) * 2019-07-31 2019-12-20 毕宏生 Personalized electronic acupuncture device and generation method and generation device thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DING, BIN et al.: "Three-dimensional face modeling and expression animation based on a single image", Computer Engineering and Design *
MAN, YI et al.: "Three-dimensional gaze estimation method based on a binocular eye model", Mechanical Science and Technology *

Similar Documents

Publication Publication Date Title
CN109086726B (en) Local image identification method and system based on AR intelligent glasses
JP6960494B2 (en) Collection, selection and combination of eye images
AU2019419376B2 (en) Virtual try-on systems and methods for spectacles
CN110187855B (en) Intelligent adjusting method for near-eye display equipment for avoiding blocking sight line by holographic image
EP4383193A1 (en) Line-of-sight direction tracking method and apparatus
US11300784B2 (en) Multi-perspective eye acquisition
KR20170111938A (en) Apparatus and method for replaying contents using eye tracking of users
JP2019215688A (en) Visual line measuring device, visual line measurement method and visual line measurement program for performing automatic calibration
CN112099622B (en) Sight tracking method and device
US11475592B2 (en) Systems and methods for determining an ear saddle point of a user to produce specifications to fit a wearable apparatus to the user's head
Nitschke et al. I see what you see: point of gaze estimation from corneal images
CN112288855A (en) Method and device for establishing eye gaze model of operator
CN117058749B (en) Multi-camera perspective method and device, intelligent glasses and storage medium
Lee et al. Low-cost Wearable Eye Gaze Detection and Tracking System
CN113760083A (en) Method and device for determining position of landing point of operator sight on screen of terminal equipment
WO2024059927A1 (en) Methods and systems for gaze tracking using one corneal reflection
CN112950688A (en) Method and device for determining gazing depth, AR (augmented reality) equipment and storage medium
Zhang et al. Integrated neural network-based pupil tracking technology for wearable gaze tracking devices in flight training
CN117406949A (en) Terminal equipment screen adjusting method and device
CN115834858A (en) Display method and device, head-mounted display equipment and storage medium
CN111208905A (en) Multi-module sight tracking method and system and sight tracking equipment
Kikinis Design Considerations for a Computer-Vision-Enabled Ophthalmic Augmented Reality Environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210129)