CN113208591A - Method and device for determining eye opening and closing distance


Info

Publication number
CN113208591A
Authority
CN
China
Prior art keywords
position information
point
dimensional position
face
target
Prior art date
Legal status
Granted
Application number
CN202010069773.0A
Other languages
Chinese (zh)
Other versions
CN113208591B (en)
Inventor
李源
王晋玮
Current Assignee
Momenta Suzhou Technology Co Ltd
Original Assignee
Momenta Suzhou Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Momenta Suzhou Technology Co Ltd
Priority to CN202010069773.0A
Publication of CN113208591A
Application granted
Publication of CN113208591B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G06V 40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G06V 40/193: Preprocessing; Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Psychiatry (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses a method and a device for determining an eye opening and closing distance. The method comprises: obtaining a target three-dimensional face model of a person to be detected; fitting three-dimensional position information corresponding to the eyeball center corresponding to the target eye, based on the three-dimensional position information corresponding to the first specified feature point, the second specified feature point and the third specified feature point and the three-dimensional position information corresponding to the upper eyelid point, the lower eyelid point and the two canthus points of the target eye; determining, based on the three-dimensional position information corresponding to the eyeball center and the three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point, first projection position information corresponding to the upper eyelid point and second projection position information corresponding to the lower eyelid point as projected onto the eyeball surface corresponding to the target eye; and determining the opening and closing distance of the target eye based on the first projection position information and the second projection position information, thereby improving the accuracy of determining the eye opening and closing distance.

Description

Method and device for determining eye opening and closing distance
Technical Field
The invention relates to the field of image recognition, in particular to a method and a device for determining an eye opening and closing distance.
Background
At present, the related fatigue-state detection process generally proceeds as follows: the face of a person to be detected is monitored; the opening and closing length of the person's eyes is determined based on the monitoring picture; whether the person is in a fatigue state is determined according to that opening and closing length; and an alarm is given when the person is determined to be in a fatigue state.
In this process, the opening and closing length of the eye is generally determined by detecting the upper and lower eyelids of the eye in the monitoring picture and taking the distance between them as the opening and closing distance of the eye.
However, eye structures differ. For an eye whose eyelids protrude outward, the distance calculated from the upper and lower eyelids may be greater than the distance calculated for a normal eye, i.e., an eye whose eyelids fit against the eyeball. This affects the determination of the eye opening and closing distance to some extent and, in turn, the fatigue-state detection result. As shown in fig. 1A, the left side is a side view of an eye whose eyelids fit the eyeball, and the right side is a side view of an eye with protruding eyelids.
To ensure the accuracy of the fatigue-state detection result, providing a more accurate method for determining the eye opening and closing distance has become an urgent problem to be solved.
Disclosure of Invention
The invention provides a method and a device for determining an eye opening and closing distance, which aim to improve the accuracy of determining the eye opening and closing distance. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for determining an eye opening and closing distance, where the method includes:
obtaining a target three-dimensional face model of a person to be detected, wherein the target three-dimensional face model comprises three-dimensional position information corresponding to face feature points of the face of the person to be detected, and the face feature points comprise: the first specified feature point, the second specified feature point, the third specified feature point, and an upper eyelid point, a lower eyelid point and two canthus points of the target eye;
fitting three-dimensional position information corresponding to an eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the first specified characteristic point, the second specified characteristic point and the third specified characteristic point and the three-dimensional position information corresponding to the upper eyelid point, the lower eyelid point and the two canthus points of the target eye;
determining first projection position information corresponding to the upper eyelid point and second projection position information corresponding to the lower eyelid point projected to the eyeball surface corresponding to the target eye based on the three-dimensional position information corresponding to the eyeball center corresponding to the target eye and the three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point;
determining an opening and closing distance of the target eye based on the first projection position information and the second projection position information.
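To make the projection step concrete, the following is a minimal Python/NumPy sketch (not part of the patent text) of projecting eyelid points onto the fitted eyeball surface; the function and variable names are illustrative assumptions.

```python
import numpy as np

def project_to_eyeball(points, center, radius):
    """Project 3-D eyelid points onto the eyeball sphere surface.

    Each point is moved along the ray from the eyeball center through the
    point until it lies on the sphere, which removes the effect of eyelid
    shapes that bulge away from the eyeball.
    """
    points = np.asarray(points, dtype=float)          # shape (N, 3)
    rays = points - center                            # vectors from center to eyelid points
    norms = np.linalg.norm(rays, axis=1, keepdims=True)
    return center + radius * rays / norms             # points on the sphere surface
```

Applying this to the upper eyelid points and the lower eyelid points yields, respectively, the first projection position information and the second projection position information referred to above.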
Optionally, the step of obtaining a target three-dimensional face model of a person to be detected includes:
obtaining a first face image containing a face of a person to be detected;
detecting two-dimensional position information of a face feature point of the face from the first face image;
and determining a target three-dimensional face model of the person to be detected based on the two-dimensional position information of the face characteristic points and a preset three-dimensional face model.
Optionally, the step of obtaining a target three-dimensional face model of a person to be detected includes:
acquiring second face images obtained when a plurality of image acquisition devices shoot the face of a person to be detected in the same acquisition period;
for each second face image, detecting two-dimensional position information of a face characteristic point of the face from the second face image;
and determining a target three-dimensional face model of the person to be detected based on the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information of the face characteristic points in each second face image.
Optionally, the step of determining the target three-dimensional face model of the person to be detected based on the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information of the face feature point in each second face image includes:
determining three-dimensional position information of space points corresponding to the first designated feature point, the second designated feature point and the third designated feature point based on target pose information and internal reference information of each image acquisition device and two-dimensional position information of the first designated feature point, the second designated feature point and the third designated feature point in each second face image;
determining three-dimensional position information corresponding to two canthus points of the target eyes respectively based on the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information of the two canthus points of the target eyes in the second face image acquired by the image acquisition devices;
constructing a first canthus constraint based on the three-dimensional position information respectively corresponding to the two canthus points of the target eye, a first numerical value, a second numerical value and a cubic parametric curve equation, wherein the first numerical value and the second numerical value are used for constraining the value range of the independent variable in the first canthus constraint;
constructing a reprojection error constraint corresponding to the upper eyelid and a reprojection error constraint corresponding to the lower eyelid of the target eye based on the cubic parametric curve equation, the target pose information and internal reference information of each image acquisition device, and the two-dimensional position information corresponding to the upper eyelid point and the lower eyelid point;
and constructing a spatial eyelid curve corresponding to the upper eyelid and a spatial eyelid curve corresponding to the lower eyelid of the target eye based on the reprojection error constraint corresponding to the upper eyelid, the reprojection error constraint corresponding to the lower eyelid, the first canthus constraint, a preset distance constraint between canthus spatial points and eyelid spatial points, and an eyelid point ordering constraint, so as to obtain the three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point and thereby obtain the target three-dimensional face model of the person to be detected.
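As a rough illustration of how the cubic parametric curve equation and the reprojection error constraint might be expressed, the Python/NumPy sketch below evaluates a spatial cubic curve and forms per-camera reprojection residuals for one eyelid. The pinhole model with intrinsics K and pose (R, trans), and all names, are assumptions; the remaining constraints (canthus, distance, ordering) are omitted.

```python
import numpy as np

def cubic_curve(coeffs, t):
    """Evaluate a spatial cubic parametric curve C(t) = a + b*t + c*t^2 + d*t^3.

    coeffs: (4, 3) array holding a, b, c, d row-wise; t: (M,) parameter values
    restricted to the range bounded by the first and second numerical values.
    """
    T = np.stack([np.ones_like(t), t, t**2, t**3], axis=1)   # (M, 4)
    return T @ coeffs                                        # (M, 3) points on the curve

def reprojection_residuals(coeffs, t, K, R, trans, uv):
    """Residuals between projected curve points and detected 2-D eyelid points
    for one camera (pose R, trans; intrinsics K; observations uv, shape (M, 2))."""
    pts_cam = cubic_curve(coeffs, t) @ R.T + trans           # camera coordinates
    proj = pts_cam @ K.T                                     # homogeneous pixel coordinates
    proj = proj[:, :2] / proj[:, 2:3]
    return (proj - uv).ravel()                               # fed to a least-squares solver
```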
Optionally, the first specified feature point is: a face feature point at a first designated position on the left half of the face; the second specified feature point is: the face feature point at the first designated position on the right half of the face; and the third specified feature point is: the face feature point at a second designated position among the face feature points corresponding to the center line of the face;
the step of fitting three-dimensional position information corresponding to an eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the first specified feature point, the second specified feature point and the third specified feature point, and the three-dimensional position information corresponding to the upper eyelid point, the lower eyelid point and the two canthus points of the target eye includes:
determining three-dimensional position information corresponding to a first midpoint of the first specified feature point and the second specified feature point based on the three-dimensional position information corresponding to the first specified feature point and the second specified feature point;
determining a direction vector of an eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the first midpoint, the three-dimensional position information corresponding to the third specified characteristic point and the three-dimensional position information corresponding to the two eye corner points of the target eye;
and fitting three-dimensional position information corresponding to the eyeball center corresponding to the target eye based on a sphere structure mathematical principle, three-dimensional position information corresponding to two eye corner points of the target eye, three-dimensional position information corresponding to the upper eyelid point, three-dimensional position information corresponding to the lower eyelid point and a direction vector of the eyeball center corresponding to the target eye.
Optionally, the step of determining, based on the three-dimensional position information corresponding to the first midpoint, the three-dimensional position information corresponding to the third specified feature point, and the three-dimensional position information corresponding to the two eye corners of the target eye, a direction vector of an eyeball center corresponding to the target eye includes:
determining a face direction vector corresponding to the face based on the three-dimensional position information corresponding to the first midpoint and the three-dimensional position information corresponding to the third specified feature point;
determining the perpendicular bisector plane corresponding to the two canthus points of the target eye based on the three-dimensional position information corresponding to the two canthus points of the target eye;
and determining the projection vector of the face direction vector on the perpendicular bisector plane as the direction vector of the eyeball center corresponding to the target eye.
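A minimal sketch of this projection, assuming NumPy and illustrative names: the perpendicular bisector plane of the two canthus points has the canthus-to-canthus vector as its normal, so projecting the face direction vector amounts to removing its component along that normal. The sign convention of the face direction vector is an assumption.

```python
import numpy as np

def eyeball_direction(first_midpoint, third_point, corner_a, corner_b):
    """Direction vector of the eyeball center, following the steps above.

    The face direction vector runs from the third specified feature point to
    the first midpoint (sign convention assumed); it is then projected onto
    the perpendicular bisector plane of the two canthus points.
    """
    face_dir = first_midpoint - third_point          # face direction vector
    n = corner_b - corner_a                          # normal of the bisector plane
    n = n / np.linalg.norm(n)
    proj = face_dir - np.dot(face_dir, n) * n        # projection onto the plane
    return proj / np.linalg.norm(proj)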
Optionally, the step of fitting the three-dimensional position information corresponding to the eyeball center corresponding to the target eye based on a sphere structure mathematical principle, the three-dimensional position information corresponding to the two eye corner points of the target eye, the three-dimensional position information corresponding to the upper eyelid point, the three-dimensional position information corresponding to the lower eyelid point, and the direction vector of the eyeball center corresponding to the target eye includes:
determining three-dimensional position information corresponding to a second midpoint of the two eye corner points of the target eye based on the three-dimensional position information corresponding to the two eye corner points of the target eye;
constructing, with reference to the sphere structure mathematical principle and the Pythagorean theorem, a first expression representing the radius of the eyeball corresponding to the target eye based on the three-dimensional position information corresponding to the second midpoint and the three-dimensional position information corresponding to a target canthus point of the two canthus points of the target eye;
constructing a second expression representing the position of the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the second midpoint and the direction vector of the eyeball center corresponding to the target eye by referring to the mathematical principle of the sphere structure;
constructing distance expressions from the upper eyelid point and the spatial point corresponding to the lower eyelid point to the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the upper eyelid point, the three-dimensional position information corresponding to the lower eyelid point and the second expression;
and fitting three-dimensional position information corresponding to the eyeball center corresponding to the target eye based on the first expression and the distance expression.
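The fitting described above can be illustrated as a one-variable least-squares problem, sketched below with NumPy/SciPy under assumed names: the eyeball center is parameterized along the (unit) direction vector from the second midpoint, so it stays on the perpendicular bisector plane of the canthus points; the radius then follows from the Pythagorean theorem (the first expression), and the distances from eyelid points to the center are driven toward that radius.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_eyeball_center(corner_a, corner_b, eyelid_pts, direction):
    """Fit the eyeball center along the direction vector (one unknown d).

    Second expression: O(d) = M + d * v, with M the second midpoint of the two
    canthus points and v a unit direction vector. First expression, by the
    Pythagorean theorem: R(d)^2 = |corner - M|^2 + d^2, since O lies on the
    perpendicular bisector plane of the canthus points.
    """
    M = 0.5 * (corner_a + corner_b)
    half_chord = np.linalg.norm(corner_a - M)

    def residuals(d):
        center = M + d[0] * direction
        radius = np.hypot(half_chord, d[0])
        dists = np.linalg.norm(eyelid_pts - center, axis=1)
        return dists - radius                      # zero when points lie on the sphere

    sol = least_squares(residuals, x0=[half_chord])
    d = sol.x[0]
    return M + d * direction, np.hypot(half_chord, d)
```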
Optionally, the step of determining the opening and closing distance of the target eye based on the first projection position information and the second projection position information includes:
determining a canthus vector corresponding to the two canthus points of the target eye based on the three-dimensional position information corresponding to the two canthus points of the target eye;
determining, based on the first projection position information and the second projection position information, eyelid point pairs whose corresponding eyelid direction vectors are orthogonal to the canthus vector, from the projection points corresponding to the upper eyelid points of a preset middle area and the projection points corresponding to the lower eyelid points of the preset middle area, wherein each eyelid point pair comprises an upper eyelid point and a lower eyelid point, and an eyelid direction vector is: a vector determined based on the first projection position information corresponding to the corresponding upper eyelid point and the second projection position information corresponding to the corresponding lower eyelid point;
determining the modulus corresponding to each eyelid point pair based on the first projection position information corresponding to the upper eyelid point and the second projection position information corresponding to the lower eyelid point in the eyelid point pair;
and determining the largest of the moduli corresponding to all the eyelid point pairs as the opening and closing distance of the target eye.
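A compact sketch of this selection and maximization step, assuming NumPy and index-matched upper/lower projected eyelid points from the preset middle area (the index matching is an assumption of this sketch):

```python
import numpy as np

def opening_distance(upper_proj, lower_proj, corner_a, corner_b, tol=1e-2):
    """Opening and closing distance from projected eyelid point pairs.

    Pairs whose eyelid direction vector is (near-)orthogonal to the canthus
    vector are kept; the largest pair modulus is the opening distance.
    upper_proj / lower_proj: (N, 3) projected points, matched by index.
    """
    corner_vec = corner_b - corner_a
    corner_vec = corner_vec / np.linalg.norm(corner_vec)
    best = 0.0
    for up, lo in zip(upper_proj, lower_proj):
        v = up - lo                                   # eyelid direction vector
        norm = np.linalg.norm(v)
        if norm == 0.0:
            continue                                  # eye fully closed at this pair
        if abs(np.dot(v / norm, corner_vec)) < tol:   # orthogonality test
            best = max(best, norm)
    return best
```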
In a second aspect, an embodiment of the present invention provides an apparatus for determining an eye-opening and eye-closing distance, the apparatus including:
an obtaining module configured to obtain a target three-dimensional face model of a person to be detected, wherein the target three-dimensional face model includes: the three-dimensional position information corresponding to the face feature point of the face of the person to be detected, wherein the face feature point comprises: the first specified characteristic point, the second specified characteristic point, the third specified characteristic point, and an upper eyelid point, a lower eyelid point and two canthus points of the target eye;
a fitting module configured to fit three-dimensional position information corresponding to an eyeball center corresponding to the target eye based on three-dimensional position information corresponding to the first specified feature point, the second specified feature point and the third specified feature point and three-dimensional position information corresponding to an upper eyelid point, a lower eyelid point and two canthus points of the target eye;
a first determining module configured to determine, based on three-dimensional position information corresponding to an eyeball center corresponding to the target eye and three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point, first projection position information corresponding to the upper eyelid point and second projection position information corresponding to the lower eyelid point projected onto an eyeball surface corresponding to the target eye;
a second determination module configured to determine an open-close distance of the target eye based on the first projection position information and the second projection position information.
Optionally, the obtaining module is specifically configured to obtain a first face image including a face of a person to be detected;
detecting two-dimensional position information of a face feature point of the face from the first face image;
and determining a target three-dimensional face model of the person to be detected based on the two-dimensional position information of the face characteristic points and a preset three-dimensional face model.
Optionally, the obtaining module includes:
the acquisition unit is configured to acquire second face images obtained when the plurality of image acquisition devices shoot the face of the person to be detected in the same acquisition period;
a detection unit configured to detect, for each second face image, two-dimensional position information of a face feature point of the face from the second face image;
and the first determining unit is configured to determine a target three-dimensional face model of the person to be detected based on the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information of the face characteristic points in each second face image.
Optionally, the first determining unit is specifically configured to determine, based on target pose information and internal reference information of each image capturing device and two-dimensional position information of the first specified feature point, the second specified feature point, and the third specified feature point in each second face image, three-dimensional position information of spatial points corresponding to the first specified feature point, the second specified feature point, and the third specified feature point;
determining three-dimensional position information corresponding to two canthus points of the target eyes respectively based on the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information of the two canthus points of the target eyes in the second face image acquired by the image acquisition devices;
constructing a first canthus constraint based on the three-dimensional position information respectively corresponding to the two canthus points of the target eye, a first numerical value, a second numerical value and a cubic parametric curve equation, wherein the first numerical value and the second numerical value are used for constraining the value range of the independent variable in the first canthus constraint;
constructing a reprojection error constraint corresponding to the upper eyelid and a reprojection error constraint corresponding to the lower eyelid of the target eye based on the cubic parametric curve equation, the target pose information and internal reference information of each image acquisition device, and the two-dimensional position information corresponding to the upper eyelid point and the lower eyelid point;
and constructing a spatial eyelid curve corresponding to the upper eyelid and a spatial eyelid curve corresponding to the lower eyelid of the target eye based on the reprojection error constraint corresponding to the upper eyelid, the reprojection error constraint corresponding to the lower eyelid, the first canthus constraint, a preset distance constraint between canthus spatial points and eyelid spatial points, and an eyelid point ordering constraint, so as to obtain the three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point and thereby obtain the target three-dimensional face model of the person to be detected.
Optionally, the first specified feature point is: a face feature point at a first designated position on the left half of the face; the second specified feature point is: the face feature point at the first designated position on the right half of the face; and the third specified feature point is: the face feature point at a second designated position among the face feature points corresponding to the center line of the face;
the fitting module comprises:
a second determination unit configured to determine three-dimensional position information corresponding to a first midpoint of the first specified feature point and the second specified feature point based on three-dimensional position information corresponding to the first specified feature point and the second specified feature point;
a third determining unit configured to determine a direction vector of an eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the first midpoint, the three-dimensional position information corresponding to the third specified feature point, and the three-dimensional position information corresponding to the two eye corner points of the target eye;
the fitting unit is configured to fit three-dimensional position information corresponding to an eyeball center corresponding to the target eye based on a sphere structure mathematical principle, three-dimensional position information corresponding to two eye corner points of the target eye, three-dimensional position information corresponding to the upper eyelid point, three-dimensional position information corresponding to the lower eyelid point, and a direction vector of the eyeball center corresponding to the target eye.
Optionally, the third determining unit is specifically configured to determine a face direction vector corresponding to the face based on the three-dimensional position information corresponding to the first midpoint and the three-dimensional position information corresponding to the third specified feature point;
determining the perpendicular bisector plane corresponding to the two canthus points of the target eye based on the three-dimensional position information corresponding to the two canthus points of the target eye;
and determining the projection vector of the face direction vector on the perpendicular bisector plane as the direction vector of the eyeball center corresponding to the target eye.
Optionally, the fitting unit is specifically configured to determine three-dimensional position information corresponding to a second midpoint of two eye corner points of the target eye based on the three-dimensional position information corresponding to the two eye corner points of the target eye;
constructing, with reference to the sphere structure mathematical principle and the Pythagorean theorem, a first expression representing the radius of the eyeball corresponding to the target eye based on the three-dimensional position information corresponding to the second midpoint and the three-dimensional position information corresponding to a target canthus point of the two canthus points of the target eye;
constructing a second expression representing the position of the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the second midpoint and the direction vector of the eyeball center corresponding to the target eye by referring to the mathematical principle of the sphere structure;
constructing distance expressions from the upper eyelid point and the spatial point corresponding to the lower eyelid point to the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the upper eyelid point, the three-dimensional position information corresponding to the lower eyelid point and the second expression;
and fitting three-dimensional position information corresponding to the eyeball center corresponding to the target eye based on the first expression and the distance expression.
Optionally, the second determining module is specifically configured to determine, based on the three-dimensional position information corresponding to the two canthus points of the target eye, canthus vectors corresponding to the two canthus points of the target eye;
determining, based on the first projection position information and the second projection position information, eyelid point pairs whose corresponding eyelid direction vectors are orthogonal to the canthus vector, from the projection points corresponding to the upper eyelid points of a preset middle area and the projection points corresponding to the lower eyelid points of the preset middle area, wherein each eyelid point pair comprises an upper eyelid point and a lower eyelid point, and an eyelid direction vector is: a vector determined based on the first projection position information corresponding to the corresponding upper eyelid point and the second projection position information corresponding to the corresponding lower eyelid point;
determining the modulus corresponding to each eyelid point pair based on the first projection position information corresponding to the upper eyelid point and the second projection position information corresponding to the lower eyelid point in the eyelid point pair;
and determining the largest of the moduli corresponding to all the eyelid point pairs as the opening and closing distance of the target eye.
As can be seen from the above, the method and apparatus for determining an eye opening and closing distance according to the embodiments of the present invention obtain a target three-dimensional face model of a person to be detected, wherein the target three-dimensional face model comprises three-dimensional position information corresponding to face feature points of the face of the person to be detected, and the face feature points comprise: the first specified feature point, the second specified feature point, the third specified feature point, and an upper eyelid point, a lower eyelid point and two canthus points of the target eye; fit three-dimensional position information corresponding to the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the first specified feature point, the second specified feature point and the third specified feature point and the three-dimensional position information corresponding to the upper eyelid point, the lower eyelid point and the two canthus points of the target eye; determine, based on the three-dimensional position information corresponding to the eyeball center and the three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point, first projection position information corresponding to the upper eyelid point and second projection position information corresponding to the lower eyelid point as projected onto the eyeball surface corresponding to the target eye; and determine the opening and closing distance of the target eye based on the first projection position information and the second projection position information.
By applying the embodiment of the invention, the three-dimensional position information corresponding to the eyeball center corresponding to the target eye of the person to be detected can be fitted based on the three-dimensional position information corresponding to the first specified feature point, the second specified feature point, the third specified feature point, and the upper eyelid point, the lower eyelid point and the two canthus points of the target eye in the target three-dimensional face model. The first projection position information corresponding to the upper eyelid point and the second projection position information corresponding to the lower eyelid point, as projected onto the eyeball surface corresponding to the target eye, are then determined by combining the three-dimensional position information corresponding to the eyeball center with that corresponding to the upper and lower eyelid points. Projecting the eyelid points of the target eye onto the eyeball of the target eye in this way provides a unified basis for calculating the opening and closing distances of eyes of different shapes, avoids to a certain extent the influence of specially shaped eyes, such as eyes with protruding eyelids, on the calculation, and improves the accuracy of the determined opening and closing distance for eyes of various shapes. Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
The innovation points of the embodiment of the invention comprise:
1. The three-dimensional position information corresponding to the eyeball center corresponding to the target eye of the person to be detected can be fitted based on the three-dimensional position information corresponding to the first specified feature point, the second specified feature point, the third specified feature point, the upper eyelid point, the lower eyelid point and the two canthus points in the target three-dimensional face model of the person to be detected. The first projection position information corresponding to the upper eyelid point and the second projection position information corresponding to the lower eyelid point, as projected onto the eyeball surface corresponding to the target eye, are then determined from the three-dimensional position information corresponding to the eyeball center and to the upper and lower eyelid points. Projecting the eyelid points of the target eye onto the eyeball of the target eye provides a unified basis for calculating the opening and closing distances of eyes of different shapes, avoids to a certain extent the influence of specially shaped eyes, such as eyes with protruding eyelids, on the calculation of the eye opening and closing distance, and improves the accuracy of the determined opening and closing distance for eyes of various shapes.
2. A spatial eyelid curve corresponding to the upper eyelid and a spatial eyelid curve corresponding to the lower eyelid of the target eye are jointly constructed from the first canthus constraint, the reprojection error constraints corresponding to the upper and lower eyelids of the target eye, the preset distance constraint between canthus spatial points and eyelid spatial points, and the eyelid point ordering constraint, yielding the three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point. This achieves an accurate construction of the upper and lower eyelids of the target eye, guarantees the accuracy of the obtained three-dimensional position information corresponding to the upper and lower eyelids, provides a basis for accurately calculating the opening and closing distance of the target eye, and improves the accuracy of the determined eye opening and closing distance.
3. The direction vector of the eyeball center corresponding to the target eye is determined from the face feature points at the same positions on the left and right halves of the face, i.e., the face feature points symmetric about the center line of the face, the face feature point at the second designated position among the face feature points corresponding to the center line of the face, and the three-dimensional position information corresponding to the two canthus points of the target eye. Combining the sphere structure mathematical principle, the three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point, and the direction vector of the eyeball center then allows relatively accurate three-dimensional position information corresponding to the eyeball center to be fitted, providing a basis for accurately determining the eye opening and closing distance.
4. A face direction vector is determined from the face feature points at the same positions on the left and right halves of the face, i.e., the face feature points symmetric about the center line of the face, and the three-dimensional position information corresponding to the third specified feature point, i.e., the face feature point at the second designated position among the face feature points corresponding to the center line of the face. The projection vector of this face direction vector on the perpendicular bisector plane corresponding to the two canthus points of the target eye is then determined as the direction vector of the eyeball center corresponding to the target eye. Determining the direction vector in combination with the spatial structure of the face of the person to be detected yields a direction vector that better matches the face state of the person at that moment, ensuring the accuracy of the subsequently determined eye opening and closing distance.
5. With reference to the sphere structure mathematical principle and the Pythagorean theorem, a first expression representing the eyeball radius corresponding to the target eye is constructed based on the three-dimensional position information corresponding to the second midpoint of the two canthus points of the target eye and the three-dimensional position information corresponding to a target canthus point of the two canthus points; a second expression representing the position of the eyeball center corresponding to the target eye is constructed based on the three-dimensional position information corresponding to the second midpoint and the direction vector of the eyeball center; and, together with the three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point, the first and second expressions are combined to jointly fit the three-dimensional position information corresponding to the eyeball center, ensuring the accuracy of the fitted result.
6. Eyelid point pairs whose corresponding eyelid direction vectors are orthogonal to the canthus vector are determined, based on the first projection position information, the second projection position information and the canthus vector, from the upper eyelid points in the preset middle area and the lower eyelid points in the preset middle area, and the largest of the moduli corresponding to all the eyelid point pairs is determined as the opening and closing distance of the target eye, which reduces the amount of calculation while determining a relatively accurate opening and closing distance for the target eye.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of some embodiments of the invention. For a person skilled in the art, without inventive effort, further figures can be obtained from these figures.
FIG. 1A is an exemplary illustration of a side view of an eye of a different shape;
fig. 1B is a schematic flowchart of a method for determining an eye opening/closing distance according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a position relationship between a spatial point corresponding to an eye corner of a target eye and an eyeball center corresponding to the target eye;
fig. 3 is a schematic structural diagram of an apparatus for determining an eye opening/closing distance according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The invention provides a method and a device for determining an eye opening and closing distance, which aim to improve the accuracy of determining the eye opening and closing distance. The following provides a detailed description of embodiments of the invention.
Fig. 1B is a schematic flow chart of a method for determining an eye opening/closing distance according to an embodiment of the present invention. The method may comprise the steps of:
s101: and obtaining a target three-dimensional face model of the person to be detected.
Wherein, the target three-dimensional face model comprises three-dimensional position information corresponding to face feature points of the face of the person to be detected, and the face feature points comprise: the first specified feature point, the second specified feature point, the third specified feature point, and the upper eyelid point, the lower eyelid point and the two canthus points of the target eye.
The method for determining the eye opening and closing distance provided by the embodiment of the invention can be applied to any type of electronic equipment, and the electronic equipment can be a server or terminal equipment.
The electronic device can directly obtain a target three-dimensional face model of the person to be detected, which is a model constructed from images acquired by an image acquisition system for the face of the person to be detected. The target three-dimensional face model can characterize the face state of the person to be detected at the moment the image acquisition system acquires the images of the person's face. The image acquisition system may comprise one or more image acquisition devices, which may be cameras, webcams or the like.
In one case, the person to be detected can be a vehicle driver, and the image acquisition system can be arranged in a vehicle where the vehicle driver is located and is used for shooting the face of the person to be detected.
The target three-dimensional face model includes but is not limited to: three-dimensional position information corresponding to the face feature points of the face of the person to be detected, wherein the face feature points include but are not limited to: the first specified feature point, the second specified feature point, the third specified feature point, and the upper eyelid point, the lower eyelid point, and the two eye corner points of the target eye.
The target eye can be the left eye or the right eye of the person to be detected. When determining the eye opening and closing distance of the person to be detected, the process provided by the embodiment of the invention may first be applied to the left eye and then to the right eye; or first to the right eye and then to the left eye; or to both eyes in parallel. Any of these is feasible.
The first specified feature point and the second specified feature point may be face feature points at the same physical positions on the left and right halves of the face, that is, the first specified feature point and the second specified feature point may be regarded as face feature points symmetric about the center line of the face. For example: the first specified feature point may be the face feature point at the upper ear-root position of the left ear, and correspondingly the second specified feature point is the face feature point at the upper ear-root position of the right ear. As another example: the first specified feature point may be the face feature point at the tip of the left ear, and correspondingly the second specified feature point is the face feature point at the tip of the right ear. As another example: the first specified feature point may be the face feature point on the upper side of the position near the left ear among the chin contour points of the left half of the face, and correspondingly the second specified feature point is the face feature point on the upper side of the position near the right ear among the chin contour points of the right half of the face, and so on.
The third specified feature point may be a face feature point at a designated position among the face feature points corresponding to the center line of the face, for example: the feature point at the philtrum on the center line of the face; the feature point at the center of the depression below the lower lip; or the face feature point at the tip of the chin, and so on.
In one case, the positions of the first specified feature point and the second specified feature point are higher than the position of the third specified feature point.
In one case, the electronic device may directly construct the target three-dimensional face model of the person to be detected based on images acquired by the image acquisition system for the person's face. In one implementation, the image acquisition system comprises a single image acquisition device, i.e., the electronic device can obtain one frame of face image containing the face of the person to be detected. Correspondingly, S101 may include the following steps 01 to 03:
01: a first face image containing a face of a person to be detected is obtained.
02: two-dimensional position information of a face feature point of a face is detected from a first face image.
03: and determining a target three-dimensional face model of the person to be detected based on the two-dimensional position information of the face characteristic points and a preset three-dimensional face model.
The electronic device obtains a face image containing the face of the person to be detected as the first face image, and detects two-dimensional position information of the face feature points from the first face image using a preset face detection algorithm. The face feature points at least include the feature points corresponding to the upper eyelid and the feature points corresponding to the lower eyelid of the target eye of the person to be detected; the feature points corresponding to the upper eyelid comprise the upper eyelid points and the two canthus points of the target eye, and the feature points corresponding to the lower eyelid comprise the lower eyelid points and the two canthus points of the target eye. The electronic device then determines the target three-dimensional face model of the person to be detected based on the two-dimensional position information of the face feature points and the preset three-dimensional face model.
The preset face detection algorithm may be a preset face detection model, the preset face detection model may be a neural network model obtained by training based on a sample image labeled with a face feature point, and the training process of the preset face detection model may refer to the training process of a model in the related art, which is not described herein again.
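As an illustration only, an off-the-shelf landmark detector can stand in for the preset face detection algorithm; the sketch below uses dlib's stock 68-landmark predictor, which is an assumption, not the patent's trained model.

```python
import dlib

# Hypothetical stand-in for the "preset face detection algorithm": the stock
# dlib 68-landmark predictor (the patent's model is a custom-trained network).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_landmarks(image):
    """Return (x, y) tuples of face feature points for the first detected face."""
    faces = detector(image, 1)
    if not faces:
        return []
    shape = predictor(image, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```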
The process of determining the target three-dimensional face model of the person to be detected may include: determining a space point at a preset face position from a preset three-dimensional face model as a to-be-processed space point, wherein the to-be-processed space point has a corresponding relation with a face characteristic point; projecting each space point to be processed to the first face image by using the weak perspective projection matrix, and determining the projection position information of each space point to be processed in the first face image; and constructing a target three-dimensional face model corresponding to the person to be detected based on the projection position information of the projection point of each space point to be processed and the two-dimensional position information of the face characteristic point corresponding to each space point to be processed.
In one implementation, the electronic device may receive a user selection instruction, where the user selection instruction carries a preset face position of a spatial point to be selected, and the electronic device may determine, from a preset three-dimensional face model, a spatial point at the preset face position as a spatial point to be processed based on the preset face position carried by the user selection instruction. In another implementation manner, the preset face position may be prestored in the electronic device, and then the electronic device may read the preset face position from the corresponding storage position, and further determine a spatial point at the preset face position from the preset three-dimensional face model as a to-be-processed spatial point. The corresponding relation between the space point to be processed and the face characteristic point is as follows: a one-to-one correspondence. In one case, the preset face position may be set based on a position of a face feature point in the first face image.
In one case, the preset three-dimensional face model can be represented by the following formula (1):

$$S = \bar{S} + A_{ad}\,\alpha_{ad} + A_{exp}\,\alpha_{exp} \qquad (1)$$

wherein $S$ represents the preset three-dimensional face model, $\bar{S}$ represents a preset average face, $A_{ad}$ represents the shape information of the human face, $A_{exp}$ represents the expression information of the human face, $\alpha_{ad}$ represents the weight of the face shape information and may be referred to as the shape weight, and $\alpha_{exp}$ represents the weight of the face expression information and may be referred to as the expression weight.
The electronic device may draw a characterized three-dimensional face model based on equation (1) above, the three-dimensional face model being composed of a point cloud. The electronic equipment can determine the space point at the position of the preset human face from the drawn three-dimensional human face model to be used as the space point to be processed, and further, the three-dimensional position information of the space point to be processed can be continuously obtained.
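For concreteness, evaluating formula (1) for given shape and expression weights might look like the following NumPy sketch; the names and the stacked [x1, y1, z1, x2, ...] array layout are assumptions.

```python
import numpy as np

def morphable_face(mean_face, shape_basis, exp_basis, alpha_ad, alpha_exp):
    """Evaluate formula (1): S = S_mean + A_ad @ alpha_ad + A_exp @ alpha_exp.

    mean_face: (3N,) stacked point cloud of the preset average face;
    shape_basis: (3N, Ks); exp_basis: (3N, Ke); alpha_ad / alpha_exp are the
    shape and expression weights. Returns the point cloud reshaped to (N, 3).
    """
    S = mean_face + shape_basis @ alpha_ad + exp_basis @ alpha_exp
    return S.reshape(-1, 3)
```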
After the electronic device determines the spatial points to be processed, each spatial point to be processed may be projected into the first face image based on a preset weak perspective projection matrix, that is, projection position information of a projection point of each spatial point to be processed in the first face image is determined by using the weak perspective projection matrix and three-dimensional position information of each spatial point to be processed. And constructing a target three-dimensional face model corresponding to the person to be detected based on the projection position information of the projection point of each space point to be processed and the two-dimensional position information of the face characteristic point corresponding to each space point to be processed.
The process of constructing the target three-dimensional face model corresponding to the person to be detected based on the projection position information of the projection point of each space point to be processed and the two-dimensional position information of the face feature point corresponding to each space point to be processed may be: and determining the distance error of each space point to be processed and the corresponding human face characteristic point based on the projection position information of the projection point of each space point to be processed and the two-dimensional position information of the human face characteristic point corresponding to each space point to be processed, and constructing a target function based on the least square principle and the distance error of each space point to be processed and the corresponding human face characteristic point. And when the function value of the objective function is minimum, the solution of the corresponding unknown quantity in the objective function is solved, and the target three-dimensional face model corresponding to the person to be detected is obtained based on the solution.
In one case, the preset weak perspective projection matrix can be represented by the following formula (2):

$$s_{a2d} = f\,P\,R(\alpha,\beta,\gamma)\,(S_a + t_{3d}) \tag{2}$$

where $s_{a2d}$ represents the projection position information of the projection point of the $a$-th spatial point to be processed, $a$ taking a positive integer in $[1, n]$, with $n$ representing the number of spatial points to be processed; $f$ represents a scale factor; $P$ represents the orthographic projection matrix that keeps the first two coordinate components; $R(\alpha,\beta,\gamma)$ represents a $3 \times 3$ rotation matrix, where $\alpha$ represents the rotation angle of the preset three-dimensional face model about the horizontal axis of a preset spatial rectangular coordinate system, $\beta$ its rotation angle about the longitudinal axis, and $\gamma$ its rotation angle about the vertical axis of that coordinate system; $t_{3d}$ represents a translation vector; and $S_a$ represents the three-dimensional position information of the $a$-th spatial point to be processed. The rotation matrix and the translation vector are used to convert the preset three-dimensional face model from the preset spatial rectangular coordinate system in which it lies into the device coordinate system of the image acquisition device.
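A minimal sketch of the weak perspective projection of formula (2), assuming Euler angles composed in Z, Y, X order (the patent only names the three angles, not their composition) and an orthographic projection matrix $P$ that keeps the first two components:

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """3x3 rotation built from the three axis angles of formula (2);
    the Rz @ Ry @ Rx composition order is an assumption."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def weak_perspective(points, f, R, t3d):
    """Formula (2): s_2d = f * P * R * (S_a + t_3d), with P the
    orthographic projection keeping the first two components."""
    P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    return (f * (P @ R @ (points + t3d).T)).T   # (n, 2) projections
```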
The objective function can be expressed by the following formula (3):

$$P = \sum_{a=1}^{n} \left\| s_{a2dt} - s_{a2d} \right\|^{2} \tag{3}$$

where $s_{a2dt}$ represents the two-dimensional position information of the face feature point corresponding to the $a$-th spatial point to be processed, and $\| \cdot \|$ represents the modulus of a vector, here characterizing the distance error between the two-dimensional position information of the face feature point corresponding to the $a$-th spatial point to be processed and the projection position information of the projection point of that spatial point.
In the embodiment of the invention, the values of $f$, $R(\alpha,\beta,\gamma)$, $t_{3d}$, $\alpha_{ad}$ and $\alpha_{exp}$ can be adjusted continuously by an iterative method so that $P$ is minimized, or so that $P$ satisfies a preset constraint condition, where the preset constraint condition may be that $P$ is not greater than a preset distance error threshold. The values of $f$, $R(\alpha,\beta,\gamma)$, $t_{3d}$, $\alpha_{ad}$ and $\alpha_{exp}$ obtained when $P$ reaches a local optimum or meets the preset constraint condition are taken as the final values, and the final values of $\alpha_{ad}$ and $\alpha_{exp}$ are substituted into formula (1) to obtain the target three-dimensional face model corresponding to the person to be detected.
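The iterative adjustment described above can be sketched with a general-purpose least-squares solver; the parameterization, the initial values, and the use of scipy.optimize.least_squares (rather than whatever solver the patent contemplates) are assumptions, and rotation_matrix and weak_perspective are the helpers from the sketch above:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_model(landmarks_2d, mean_face, A_ad, A_exp, preset_indices):
    """Iteratively adjust f, R(alpha, beta, gamma), t_3d, alpha_ad and
    alpha_exp so that the objective P of formula (3) is minimized."""
    k1, k2 = A_ad.shape[1], A_exp.shape[1]

    def residuals(params):
        f = params[0]
        R = rotation_matrix(*params[1:4])          # from the earlier sketch
        t3d = params[4:7]
        a_ad, a_exp = params[7:7 + k1], params[7 + k1:]
        # Formula (1): evaluate the model, keep the points to be processed.
        S = (mean_face + A_ad @ a_ad + A_exp @ a_exp).reshape(-1, 3)
        proj = weak_perspective(S[preset_indices], f, R, t3d)
        return (proj - landmarks_2d).ravel()       # distance errors

    x0 = np.zeros(7 + k1 + k2)
    x0[0] = 1.0                                    # unit scale to start
    sol = least_squares(residuals, x0)             # iterate to convergence
    return sol.x                                   # final parameter values
```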
In another implementation, the image acquisition system may include a plurality of image acquisition devices; that is, the electronic device may obtain face images containing the face of the person to be detected captured by the plurality of image acquisition devices in the same acquisition period. In this case, S101 may include the following steps 04-06:
04: and obtaining a second face image obtained when the plurality of image acquisition devices shoot the face of the person to be detected in the same acquisition period.
05: for each second face image, two-dimensional position information of a face feature point of a face is detected from the second face image.
06: and determining a target three-dimensional face model of the person to be detected based on the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information of the face characteristic points in each second face image.
After the electronic device obtains the second face images captured when the plurality of image acquisition devices shoot the face of the person to be detected in the same acquisition period, for each second face image, the two-dimensional position information of the face feature points is detected from that image using a preset face detection algorithm, where the face feature points at least comprise the first specified feature point, the second specified feature point, the third specified feature point, and the upper eyelid points, lower eyelid points and two canthus points of the target eye. The target three-dimensional face model of the person to be detected is then determined based on the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information of the face feature points in each second face image.
In one implementation, the step 06 may include the following steps 061-065:
061: and determining three-dimensional position information of the space points corresponding to the first specified feature point, the second specified feature point and the third specified feature point based on the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information of the first specified feature point, the second specified feature point and the third specified feature point in each second face image.
062: and determining three-dimensional position information corresponding to the two eye corner points of the target eyes respectively based on the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information of the two eye corner points of the target eyes in the second face image acquired by the image acquisition devices.
063: and constructing a first eye angle constraint based on the three-dimensional position information, the first numerical value, the second numerical value and the cubic parameter curve equation which respectively correspond to the two eye angle points of the target eye.
The first numerical value and the second numerical value are used for constraining the value range of the independent variable in the first ocular angle constraint.
064: and constructing a reprojection error constraint corresponding to the upper eyelid and a reprojection error constraint corresponding to the lower eyelid of the target eye based on the cubic parameter curve equation, the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information corresponding to the upper eyelid point and the lower eyelid point.
065: and constructing a space eyelid curve corresponding to the upper eyelid and a space eyelid curve corresponding to the lower eyelid of the target eye based on the reprojection error constraint corresponding to the upper eyelid, the reprojection error constraint corresponding to the lower eyelid, the first canthus constraint, the distance constraint between the preset canthus space point and the eyelid point ordering constraint, so as to obtain three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point, and obtain a target three-dimensional face model of the person to be detected.
The target pose information and the internal reference information of each image acquisition device are as follows: and the image acquisition equipment acquires the pose information and the internal reference information when acquiring the corresponding second face image.
For other face feature points, namely, the first specified feature point, the second specified feature point and the third specified feature point, of the face feature points, except for the upper eyelid point, the lower eyelid point and the two eye corner points of the target eye, the electronic device may determine the three-dimensional position information of the spatial points corresponding to the first specified feature point, the second specified feature point and the third specified feature point, directly based on the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information of the first specified feature point, the second specified feature point and the third specified feature point in each second face image.
In order to ensure the accuracy of the three-dimensional position information of the determined spatial points corresponding to the feature points corresponding to the upper eyelid and the feature points corresponding to the lower eyelid of the target eye, the electronic device may determine the three-dimensional position information of the spatial points of the first eye corner corresponding to the first eye corner based on the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information of the first eye corner of the target eye in the second face image acquired by the electronic device; and determining three-dimensional position information of a second eye corner spatial point corresponding to the second eye corner point based on the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information of the second eye corner point of the target eye in the second face image acquired by the image acquisition devices.
Furthermore, a first eye angle constraint is constructed based on the three-dimensional position information corresponding to the two eye corner points of the target eye, namely the three-dimensional position information of the first eye corner space point corresponding to the first eye corner point and the three-dimensional position information of the second eye corner space point corresponding to the second eye corner point, the first numerical value, the second numerical value and the cubic parameter curve equation.
It is to be understood that the process of constructing the eyelid curve for characterizing the upper eyelid of the target eye and the eyelid curve for the lower eyelid of the target eye is similar, and the following description will be given by taking the process of constructing the eyelid curve for characterizing the upper eyelid of the target eye as an example.
Specifically, the cubic parametric curve equation set for the upper eyelid of the target eye can be expressed as formula (4):

$$\begin{cases} x = a_1 t^3 + a_2 t^2 + a_3 t + a_4 \\ y = b_1 t^3 + b_2 t^2 + b_3 t + b_4 \\ z = c_1 t^3 + c_2 t^2 + c_3 t + c_4 \end{cases} \tag{4}$$

where $a_1$, $a_2$, $a_3$, $a_4$, $b_1$, $b_2$, $b_3$, $b_4$, $c_1$, $c_2$, $c_3$ and $c_4$ are coefficients to be solved, $t$ is the independent variable, and $(x, y, z)$ represents the spatial coordinates of a point on the cubic parametric curve, i.e. the three-dimensional position information of a point on the curve, here the spatial coordinates of an upper eyelid point on the upper eyelid of the target eye.
The three-dimensional position information corresponding to the two canthus points of the target eye is substituted into the curve equation set to construct the following constraint; specifically, it can be expressed as formula (5):

$$\begin{cases} \big(x(t),\, y(t),\, z(t)\big) = (x_0, y_0, z_0) & \text{at the first eye corner point} \\ \big(x(t),\, y(t),\, z(t)\big) = (x_1, y_1, z_1) & \text{at the second eye corner point} \end{cases} \tag{5}$$

where $(x_0, y_0, z_0)$ represents the three-dimensional position information of the first eye corner spatial point corresponding to the first eye corner point of the target eye, and $(x_1, y_1, z_1)$ represents the three-dimensional position information of the second eye corner spatial point corresponding to the second eye corner point of the target eye.
It is to be understood that the first eye corner point and the second eye corner point of the target eye both lie on the upper eyelid and on the lower eyelid of the target eye in the second face image, so the constraint represented by formula (5) can constrain the upper eyelid curve and the lower eyelid curve of the target eye simultaneously.

Formula (4) is the curve equation corresponding to the eyelid curve of the upper eyelid of the target eye; once the twelve coefficients $a_1$, $a_2$, $a_3$, $a_4$, $b_1$, $b_2$, $b_3$, $b_4$, $c_1$, $c_2$, $c_3$ and $c_4$ and the specific values of the independent variables corresponding to the upper eyelid points of the target eye detected from the second face image, i.e. the parameters to be solved, are obtained, the eyelid curve characterizing the upper eyelid of the target eye is obtained. In order to solve the parameters to be solved, a value range of the independent variable of formula (4) may be preset; for example, the value range may be set to have the first value as its minimum and the second value as its maximum. In view of the fact that the upper eyelid points of an eye are located between the first and second eye corner points of the eye, as are the lower eyelid points, the value of the independent variable $t$ in the curve equation corresponding to the first eye corner point may be set to the first value $t_{01}$, and the value of $t$ corresponding to the second eye corner point to the second value $t_{02}$.
Accordingly, the first eye corner constraint may be represented by the following formula (6):

$$\begin{cases} x(t_{01}) = x_0, \quad y(t_{01}) = y_0, \quad z(t_{01}) = z_0 \\ x(t_{02}) = x_1, \quad y(t_{02}) = y_1, \quad z(t_{02}) = z_1 \end{cases} \tag{6}$$
in one case, the above-mentioned first value t may be set for convenience of calculation01Is 0, the above-mentioned second value t02Is 1. Accordingly, will t010, and t02Substituting equation (3) for 1 results in equation (4) below, i.e., the first eye angle constraint may be represented by equation (7) below;
Figure BDA0002377011900000162
Accordingly, formula (7) can be transformed into formula (8):

$$\begin{cases} a_4 = x_0, \quad b_4 = y_0, \quad c_4 = z_0 \\ a_3 = x_1 - x_0 - a_1 - a_2 \\ b_3 = y_1 - y_0 - b_1 - b_2 \\ c_3 = z_1 - z_0 - c_1 - c_2 \end{cases} \tag{8}$$
By fixing $t_{01} = 0$ and $t_{02} = 1$, the number of coefficients among the parameters to be solved is reduced from 12 to 6, i.e. from the 12 coefficients $a_1$, $a_2$, $a_3$, $a_4$, $b_1$, $b_2$, $b_3$, $b_4$, $c_1$, $c_2$, $c_3$ and $c_4$ to the 6 coefficients $a_1$, $a_2$, $b_1$, $b_2$, $c_1$ and $c_2$, which reduces the number of coefficients to be solved and can reduce, to a certain extent, the amount of calculation in the subsequent eyelid curve construction process. As can be determined from formula (8), each remaining coefficient can be represented by the three-dimensional position information of the first eye corner spatial point corresponding to the first eye corner point of the target eye and/or the three-dimensional position information of the second eye corner spatial point corresponding to the second eye corner point of the target eye.
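A sketch of formula (4) with the substitutions of formula (8) applied, so that only the six coefficients $a_1$, $a_2$, $b_1$, $b_2$, $c_1$ and $c_2$ remain free; the array layout of `free` is an assumption:

```python
import numpy as np

def eyelid_point(t, free, corner0, corner1):
    """Evaluate the cubic of formula (4) with the corner constraints of
    formula (8) substituted in.  `free` is (3, 2): one (cubic, quadratic)
    coefficient pair per axis, i.e. (a1, a2), (b1, b2), (c1, c2)."""
    corner0 = np.asarray(corner0, float)   # curve value at t = 0
    corner1 = np.asarray(corner1, float)   # curve value at t = 1
    c3 = free[:, 0]                        # a1, b1, c1
    c2 = free[:, 1]                        # a2, b2, c2
    c1 = corner1 - corner0 - c3 - c2       # a3 = x1 - x0 - a1 - a2, etc.
    c0 = corner0                           # a4 = x0, etc.
    return c3 * t**3 + c2 * t**2 + c1 * t + c0
```

At t = 0 the function returns the first eye corner spatial point and at t = 1 the second, so the first eye corner constraint holds by construction.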
Subsequently, the electronic device constructs the reprojection error constraint corresponding to the upper eyelid and the reprojection error constraint corresponding to the lower eyelid of the target eye based on the cubic parametric curve equation, the target pose information and the internal reference information of each image acquisition device, and the two-dimensional position information corresponding to the upper eyelid points and the lower eyelid points. Specifically, let $t_{j,i}$ denote the independent variable corresponding to the $i$-th upper eyelid point of the target eye in the $j$-th second face image, where $i$ takes a positive integer in $[1, M_j]$, $M_j$ represents the first number of upper eyelid points of the target eye in the $j$-th second face image, $j$ takes a positive integer in $[1, n]$, and $n$ represents the number of second face images. Using the cubic parametric curve equation, the three-dimensional position information of the upper eyelid spatial point corresponding to each upper eyelid point of the target eye is constructed, which can be expressed as formula (9):

$$s_{j,i} = \big(x(t_{j,i}),\; y(t_{j,i}),\; z(t_{j,i})\big) \tag{9}$$

where $s_{j,i}$ represents the three-dimensional position information of the upper eyelid spatial point corresponding to the upper eyelid point $t_{j,i}$ of the target eye.
The three-dimensional position information of the two eye corner points of the target eye and of the spatial points corresponding to the upper eyelid and the lower eyelid may be position information in a world coordinate system, or position information in the device coordinate system of any one of the image acquisition devices. In the following, the process of determining the reprojection error constraint corresponding to the upper eyelid and that corresponding to the lower eyelid of the target eye is described taking position information in the world coordinate system as an example.
In the process of constructing an eyelid curve for characterizing an upper eyelid of a target eye, the electronic device may determine a position transformation relationship between each image capture device and a world coordinate system based on target pose information for each image capture device; further, for each second face image, based on the three-dimensional position information corresponding to each upper eyelid point of the target eye in the second face image and the position conversion relationship of the image acquisition equipment corresponding to the second face image, converting the spatial point corresponding to each upper eyelid point of the target eye from the world coordinate system to the equipment coordinate system of the image acquisition equipment corresponding to the second face image, and further determining the position information of the projection point of the spatial point corresponding to each upper eyelid point of the target eye in the second face image by combining the internal reference information of the image acquisition equipment corresponding to the second face image; a reprojection error constraint corresponding to the upper eyelid of the target eye is further calculated.
In the process of constructing the eyelid curve for characterizing the lower eyelid of the target eye, reference may be made to the above process of constructing the eyelid curve for characterizing the upper eyelid of the target eye, which is not described herein again.
The reprojection error constraint corresponding to the upper eyelid points can be expressed as formula (10):

$$\sum_{j=1}^{n} \sum_{i=1}^{M_j} \left\| (u_{j,i}, v_{j,i}) - (u'_{j,i}, v'_{j,i}) \right\|^{2} \tag{10}$$

where $M_j$ represents the first number of upper eyelid points of the target eye in the $j$-th second face image; $(u_{j,i}, v_{j,i})$ represents the two-dimensional position information of the $i$-th upper eyelid point of the target eye in the $j$-th second face image; and $(u'_{j,i}, v'_{j,i})$ represents the position information of the projection point, in the $j$-th second face image, of the upper eyelid spatial point corresponding to the $i$-th upper eyelid point of the target eye, calculated from the target pose information and the internal reference information of the image acquisition device that acquired the $j$-th second face image together with $s_{j,i}$.
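One term of formula (10) can be sketched as follows, assuming a pinhole camera model in which the target pose information is a world-to-device rotation and translation (R_wc, t_wc) and the internal reference information is a 3x3 intrinsic matrix K:

```python
import numpy as np

def reproject(point_3d, R_wc, t_wc, K):
    """Project a world-space eyelid spatial point into one camera: apply
    the camera's pose, then its internal reference information."""
    p_cam = R_wc @ point_3d + t_wc       # world -> device coordinates
    p_img = K @ p_cam                    # apply the intrinsic matrix
    return p_img[:2] / p_img[2]          # (u', v')

def reprojection_error(point_3d, uv, R_wc, t_wc, K):
    """One term of formula (10): squared distance between the detected
    eyelid point (u, v) and the projected point (u', v')."""
    return np.sum((reproject(point_3d, R_wc, t_wc, K) - np.asarray(uv)) ** 2)
```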
Furthermore, the electronic device constructs a spatial eyelid curve corresponding to the upper eyelid and a spatial eyelid curve corresponding to the lower eyelid of the target eye based on a reprojection error constraint corresponding to the upper eyelid, a reprojection error constraint corresponding to the lower eyelid, a first canthus constraint, a distance constraint between a preset canthus spatial point and an eyelid spatial point, and an eyelid point ordering constraint, so as to obtain three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point, and obtain a target three-dimensional face model of the person to be detected.
The process of constructing an eyelid curve characterizing the upper eyelid of the target eye is again taken as an example. The ordering constraint between the upper eyelid points of the target eye in the $j$-th second face image can be expressed as formula (11):

$$t_{01} \le t_{j,1} < t_{j,2} < \cdots < t_{j,M_j} \le t_{02} \tag{11}$$

When $t_{01} = 0$ and $t_{02} = 1$, formula (11) can be rewritten as formula (12):

$$0 \le t_{j,1} < t_{j,2} < \cdots < t_{j,M_j} \le 1 \tag{12}$$
in one implementation, the pre-setting a distance constraint of the eye corner spatial point and the eyelid spatial point with reference to the eye structure may include: a constraint that the distance from the midpoint of the eye corner space points of the target eye to the eyelid space points is no more than one-half times the distance between the eye corner space points of the target eye; the target eye's canthus space points include: a first eye corner space point and a second eye corner space point. Wherein the distance constraint between the preset canthus space point and the eyelid space point can be expressed by the following formula (13):
Figure BDA0002377011900000181
wherein (x)0,y0,z0) Three-dimensional position information of a first-eye-corner spatial point corresponding to a first eye-corner point of a target eye, (x)1,y1,z1) Three-dimensional position information representing a second canthus space point corresponding to a second canthus point of the target eye, (x)2,y2,z2) Three-dimensional position information representing a midpoint of a first eye corner space point and a second eye corner space point of the target eye, wherein,
Figure BDA0002377011900000182
Figure BDA0002377011900000183
representing the upper eyelid point t of the target eyej,iThree-dimensional position information of the corresponding upper eyelid space point.
From formulas (8), (9), (10), (12) and (13), a first optimization objective equation of a least squares problem over the parameters to be solved, namely $a_1$, $a_2$, $b_1$, $b_2$, $c_1$, $c_2$ and the independent variables $t_{j,i}$ corresponding to the upper eyelid points of the target eye detected from the second face images, can be constructed, where the first optimization objective equation can be represented by the following formula (14):

$$P1 = \sum_{j=1}^{n} \sum_{i=1}^{M_j} \Big( f1_{j,i}(k) + f2_{j,i}(k) + f3_{j,i}(k) \Big) \tag{14}$$

where $P1$ represents the value of the first optimization objective equation, and $k$ represents the parameters to be solved, namely $a_1$, $a_2$, $b_1$, $b_2$, $c_1$, $c_2$ and the independent variables $t_{j,i}$ corresponding to the upper eyelid points of the target eye detected from the second face images. $f1_{j,i}$ is the reprojection error corresponding to the $i$-th upper eyelid point of the target eye in the $j$-th second face image, which can be expressed by the following formula (15):

$$f1_{j,i}(k) = \left\| (u_{j,i}, v_{j,i}) - (u'_{j,i}, v'_{j,i}) \right\|^{2} \tag{15}$$
$f2_{j,i}(a_1, a_2, b_1, b_2, c_1, c_2, t_{j,i})$ is the expression of the distance constraint corresponding to the $i$-th upper eyelid point of the target eye in the $j$-th second face image, which can be expressed as a penalty by the following formula (16):

$$f2_{j,i}(k) = \max\big(0,\; r_{j,i} - d_{j,i}\big)^{2} \tag{16}$$

where $d_{j,i}$ represents one half of the distance between the first and second eye corner spatial points of the target eye, which can be expressed as

$$d_{j,i} = \frac{1}{2}\sqrt{(x_1 - x_0)^2 + (y_1 - y_0)^2 + (z_1 - z_0)^2}$$

and $r_{j,i}$ represents the distance from the midpoint of the eye corner spatial points of the target eye to the upper eyelid spatial point corresponding to the upper eyelid point $t_{j,i}$ of the target eye, which can be expressed as

$$r_{j,i} = \sqrt{\big(x(t_{j,i}) - x_2\big)^2 + \big(y(t_{j,i}) - y_2\big)^2 + \big(z(t_{j,i}) - z_2\big)^2}$$
$f3_{j,i}$ is the expression of the ordering constraint corresponding to the $i$-th upper eyelid point of the target eye in the $j$-th second face image, which can be expressed as a penalty by the following formula (17):

$$f3_{j,i}(k) = \max\big(0,\; t_{j,i-1} - t_{j,i}\big)^{2} \tag{17}$$

where $t_{j,i-1}$ represents the independent variable corresponding to the upper eyelid point immediately preceding the $i$-th upper eyelid point of the target eye in the $j$-th second face image.
A preset nonlinear optimization algorithm is used to solve formula (14); when it reaches the preset first convergence condition, the specific values of the parameters to be solved, namely $a_1$, $a_2$, $b_1$, $b_2$, $c_1$, $c_2$ and the independent variables $t_{j,i}$ corresponding to the upper eyelid points of the target eye detected from the second face images, are obtained, yielding the spatial eyelid curve corresponding to the upper eyelid of the target eye.
And then, based on the same manner, determining a spatial eyelid curve corresponding to the lower eyelid of the target eye to obtain three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point, and obtaining a target three-dimensional face model of the person to be detected.
The first convergence condition may be that the value of $P1$ in formula (14) is not greater than a preset error threshold or reaches a local optimum, or that the number of iterations of formula (14) reaches a preset first iteration count. The preset nonlinear optimization algorithm may include, but is not limited to, line search methods and trust region methods, where the most typical trust region algorithm may be Levenberg-Marquardt.
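Pulling formulas (8)-(17) together, a sketch of the first optimization problem using scipy.optimize.least_squares (a trust-region solver of the kind named above); the residual weights, the hinge form of the f2/f3 penalties, and the initial values are assumptions, and eyelid_point and reproject are the helpers from the earlier sketches:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_upper_eyelid(obs, corner0, corner1, w_dist=1.0, w_order=1.0):
    """Solve formula (14): obs holds one entry per second face image j,
    (uv (M_j, 2) detected upper eyelid points, R_wc, t_wc, K)."""
    counts = [len(uv) for uv, _, _, _ in obs]
    mid = 0.5 * (np.asarray(corner0, float) + np.asarray(corner1, float))
    d = 0.5 * np.linalg.norm(np.asarray(corner1, float) - np.asarray(corner0, float))

    def residuals(params):
        free = params[:6].reshape(3, 2)            # a1, a2, b1, b2, c1, c2
        ts = np.split(params[6:], np.cumsum(counts)[:-1])
        res = []
        for (uv, R_wc, t_wc, K), t_j in zip(obs, ts):
            pts = np.array([eyelid_point(t, free, corner0, corner1)
                            for t in t_j])
            for p, q in zip(pts, uv):              # f1: reprojection error
                res.append(np.linalg.norm(reproject(p, R_wc, t_wc, K) - np.asarray(q)))
            r = np.linalg.norm(pts - mid, axis=1)  # f2: distance constraint
            res.extend(w_dist * np.maximum(0.0, r - d))
            res.extend(w_order * np.maximum(0.0, t_j[:-1] - t_j[1:]))  # f3
        return np.asarray(res)

    # Arguments start evenly spaced in (0, 1); curve coefficients at zero.
    x0 = np.concatenate([np.zeros(6)] + [np.linspace(0.1, 0.9, m) for m in counts])
    return least_squares(residuals, x0).x
```

Since least_squares squares each residual, the summed objective matches $P1$ of formula (14) with $f1$, $f2$ and $f3$ as defined above.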
S102: and fitting three-dimensional position information corresponding to the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the first specified characteristic point, the second specified characteristic point and the third specified characteristic point and the three-dimensional position information corresponding to the upper eyelid point, the lower eyelid point and the two canthus points of the target eye.
Wherein the first specified feature point and the second specified feature point are: face feature points at the same positions of the left and right faces in the face, i.e., symmetrical based on the center line of the face; the third designated feature point is one of the face feature points corresponding to the center line of the face.
The electronic equipment can determine a direction vector based on the three-dimensional position information corresponding to the first specified characteristic point, the second specified characteristic point and the third specified characteristic point, and the direction vector is determined as the direction vector of the eyeball center corresponding to the target eye, namely the eyeball center corresponding to the target eye is considered to be on the direction vector; further, determining the eyeball radius corresponding to the target eye by combining three-dimensional position information corresponding to two eye corner points of the target eye, three-dimensional position information corresponding to an upper eyelid point and three-dimensional position information corresponding to a lower eyelid point based on a sphere structure mathematical principle; based on the direction vector of the eyeball center corresponding to the target eye and the eyeball radius corresponding to the target eye, three-dimensional position information corresponding to the eyeball center corresponding to the target eye can be determined.
S103: based on the three-dimensional position information corresponding to the eyeball center corresponding to the target eye and the three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point, first projection position information corresponding to the upper eyelid point and second projection position information corresponding to the lower eyelid point projected to the eyeball surface corresponding to the target eye are determined.
In one implementation mode, after determining three-dimensional position information corresponding to an eyeball center corresponding to a target eye, the electronic device determines an eyeball radius corresponding to the target eye; then, projection position information of a space point corresponding to the upper eyelid point projected to an eyeball spherical surface corresponding to the target eye is determined to serve as first projection position information directly based on three-dimensional position information corresponding to an eyeball center corresponding to the target eye, three-dimensional position information corresponding to the upper eyelid point and an eyeball radius corresponding to the target eye; and determining projection position information of the space point corresponding to the lower eyelid point projected to the eyeball spherical surface corresponding to the target eye as second projection position information based on the three-dimensional position information corresponding to the eyeball center corresponding to the target eye, the three-dimensional position information corresponding to the lower eyelid point and the eyeball radius corresponding to the target eye.
The process of determining the projection position information of the spatial point corresponding to the upper eyelid point onto the eyeball spherical surface corresponding to the target eye based on the three-dimensional position information corresponding to the eyeball center corresponding to the target eye, the three-dimensional position information corresponding to the upper eyelid point and the eyeball radius corresponding to the target eye may be: for each upper eyelid point, determining a direction vector from a space point corresponding to the upper eyelid point to an eyeball center corresponding to the target eye based on three-dimensional position information corresponding to the eyeball center corresponding to the target eye and the three-dimensional position information corresponding to the upper eyelid point; based on the direction vector from the space point corresponding to the upper eyelid point to the eyeball center corresponding to the target eye and the eyeball radius corresponding to the target eye, the projection position information of the space point corresponding to the upper eyelid point projected to the eyeball spherical surface corresponding to the target eye can be determined.
The process of determining the projection position information of the spatial point corresponding to the lower eyelid point onto the eyeball spherical surface corresponding to the target eye may refer to the process of determining the projection position information of the spatial point corresponding to the upper eyelid point onto the eyeball spherical surface corresponding to the target eye, which is not described herein again.
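The first of these projections reduces to moving exactly one eyeball radius from the sphere center along the unit direction toward the eyelid spatial point; a minimal sketch:

```python
import numpy as np

def project_to_eyeball(point, center, radius):
    """Project a spatial eyelid point onto the eyeball spherical surface:
    move one radius from the sphere center toward the point."""
    direction = np.asarray(point, float) - np.asarray(center, float)
    return np.asarray(center, float) + radius * direction / np.linalg.norm(direction)
```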
In another implementation manner, after determining the three-dimensional position information corresponding to the eyeball center corresponding to the target eye, the electronic device may directly combine with the eyeball radius corresponding to the target eye to determine the three-dimensional position information corresponding to the eyeball sphere corresponding to the target eye, and further, may combine with the three-dimensional position information corresponding to the eyeball sphere corresponding to the target eye and the three-dimensional position information corresponding to the upper eyelid point to determine projection position information of the spatial point corresponding to the upper eyelid point onto the eyeball sphere corresponding to the target eye, as the first projection position information; and determining projection position information of the space point corresponding to the lower eyelid point projected to the eyeball spherical surface corresponding to the target eye as second projection position information by combining the three-dimensional position information corresponding to the eyeball spherical surface corresponding to the target eye and the three-dimensional position information corresponding to the lower eyelid point.
The process of determining the projection position information of the spatial point corresponding to an upper eyelid point onto the eyeball spherical surface corresponding to the target eye, by combining the three-dimensional position information corresponding to the eyeball spherical surface with the three-dimensional position information corresponding to the upper eyelid point, may be as follows: determine, from the three-dimensional position information corresponding to the eyeball spherical surface corresponding to the target eye, the target three-dimensional position information that lies on the straight line through the three-dimensional position information corresponding to the upper eyelid point and the three-dimensional position information corresponding to the eyeball sphere center corresponding to the target eye; and determine, from the target three-dimensional position information, the position closest to the three-dimensional position information corresponding to the upper eyelid point as the projection position information of the spatial point corresponding to the upper eyelid point projected onto the eyeball spherical surface corresponding to the target eye.
The process of determining the projection position information of the spatial point corresponding to the lower eyelid point onto the eyeball spherical surface corresponding to the target eye, by combining the three-dimensional position information corresponding to the eyeball spherical surface with the three-dimensional position information corresponding to the lower eyelid point, may refer to the above process for the upper eyelid point, and is not described herein again.
S104: and determining the opening and closing distance of the target eye based on the first projection position information and the second projection position information.
After the electronic device determines first projection position information of a projection point of a space point corresponding to an upper eyelid on an eyeball sphere corresponding to a target eye and second projection position information of a space point corresponding to a lower eyelid on the eyeball sphere corresponding to the target eye, the electronic device may determine the opening and closing distance of the target eye directly based on the first projection position information and the second projection position information.
Specifically, this may be performed as follows: based on the first projection position information, determine, from the projection points of the spatial points corresponding to the upper eyelid on the eyeball spherical surface corresponding to the target eye, the projection point at the bisecting position as a first projection point, and determine the first projection position information of that projection point; based on the second projection position information, determine, from the projection points of the spatial points corresponding to the lower eyelid on the eyeball spherical surface corresponding to the target eye, the projection point at the bisecting position as a second projection point, and determine the second projection position information of that projection point; and calculate the distance between the first projection position information of the first projection point and the second projection position information of the second projection point, determining that distance as the opening and closing distance of the target eye.
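A minimal sketch of this default computation, assuming the projection points of each eyelid are ordered along the eyelid so that the middle element sits at the bisecting position:

```python
import numpy as np

def opening_distance_midpoints(upper_proj, lower_proj):
    """Distance between the bisecting projection points of the projected
    upper eyelid and lower eyelid on the eyeball spherical surface."""
    p_up = np.asarray(upper_proj, float)[len(upper_proj) // 2]
    p_lo = np.asarray(lower_proj, float)[len(lower_proj) // 2]
    return float(np.linalg.norm(p_up - p_lo))
```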
When the opening and closing distance of the target eyes is determined, the fatigue degree of the person to be detected can be determined based on the opening and closing distance of the target eyes, so that the fatigue degree of the person to be detected can be detected through the opening and closing distance of the eyes of the person to be detected.
By applying the embodiment of the invention, the three-dimensional position information corresponding to the eyeball center corresponding to the target eye of the person to be detected can be fitted based on the three-dimensional position information corresponding to the first specified feature point, the second specified feature point and the third specified feature point and to the upper eyelid point, the lower eyelid point and the two canthus points of the target eye in the target three-dimensional face model of the person to be detected. The first projection position information corresponding to the upper eyelid point and the second projection position information corresponding to the lower eyelid point, projected onto the eyeball surface corresponding to the target eye, are then determined by combining the three-dimensional position information corresponding to the eyeball center with the three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point. Projecting the eyelid points of the target eye onto the eyeball of the target eye in this way provides a unified calculation basis for the opening and closing distances of eyes of different shapes, avoids to a certain extent the influence of specially shaped eyes, such as eyes with protruding eyelids, on the calculation of the eye opening and closing distance, and improves the accuracy of the determined opening and closing distance for eyes of various shapes.
In another embodiment of the present invention, the first specified feature point is a face feature point at a first designated position of the left face, and the second specified feature point is a face feature point at the first designated position of the right face; the third specified feature point is a face feature point at a second designated position among the face feature points corresponding to the center line of the face, such as a philtrum feature point;
the step S102 may include the following steps 11 to 13:
11: and determining three-dimensional position information corresponding to the first middle point of the first specified characteristic point and the second specified characteristic point based on the three-dimensional position information corresponding to the first specified characteristic point and the second specified characteristic point.
12: and determining the direction vector of the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the first midpoint, the three-dimensional position information corresponding to the third specified characteristic point and the three-dimensional position information corresponding to the two eye corner points of the target eye.
13: and fitting three-dimensional position information corresponding to the eyeball center corresponding to the target eye based on a sphere structure mathematical principle, three-dimensional position information corresponding to two eye corner points of the target eye, three-dimensional position information corresponding to an upper eyelid point, three-dimensional position information corresponding to a lower eyelid point and a direction vector of the eyeball center corresponding to the target eye.
In this implementation manner, the electronic device may determine three-dimensional position information corresponding to a first midpoint of the first specified feature point and the second specified feature point based on the three-dimensional position information corresponding to the first specified feature point and the second specified feature point; determining a direction vector of an eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the first midpoint, the three-dimensional position information corresponding to the third specified characteristic point and the three-dimensional position information corresponding to the two eye corner points of the target eye, namely determining the eyeball center corresponding to the target eye on the direction vector; further, determining the eyeball radius corresponding to the target eye by combining three-dimensional position information corresponding to two eye corner points of the target eye, three-dimensional position information corresponding to an upper eyelid point and three-dimensional position information corresponding to a lower eyelid point based on a sphere structure mathematical principle; based on the direction vector of the eyeball center corresponding to the target eye and the eyeball radius corresponding to the target eye, three-dimensional position information corresponding to the eyeball center corresponding to the target eye can be determined.
In one case, the position of the third specified feature point is lower than the positions of the first specified feature point and the second specified feature point.
In one implementation manner of the present invention, the step 12 may include the following steps 121-123:
121: and determining a face direction vector corresponding to the face based on the three-dimensional position information corresponding to the first midpoint and the three-dimensional position information corresponding to the third specified feature point.
122: and determining the midperpendicular corresponding to the two canthus points of the target eye based on the three-dimensional position information corresponding to the two canthus points of the target eye.
123: and determining the projection vector of the face direction vector on the vertical plane as the direction vector of the eyeball center corresponding to the target eye.
In view of the structure of the human face, the electronic device may determine the face direction vector corresponding to the face based on the three-dimensional position information corresponding to the first midpoint and the three-dimensional position information corresponding to the third specified feature point, the face direction vector pointing toward the spatial point corresponding to the third specified feature point. The perpendicular bisector plane corresponding to the two eye corner points of the target eye is determined based on the three-dimensional position information corresponding to the two eye corner points; the face direction vector is then projected onto that plane, and the projection vector of the face direction vector on the plane is determined as the direction vector of the eyeball center corresponding to the target eye.
Specifically, the direction vector of the eyeball center corresponding to the target eye can be represented by the following formula (18):

$$\vec{v}_{eye} = \vec{v}_{face} - \frac{\vec{v}_{face} \cdot \vec{v}_{corner}}{\left\| \vec{v}_{corner} \right\|^{2}} \, \vec{v}_{corner} \tag{18}$$

where $\vec{v}_{eye}$ represents the direction vector of the eyeball center corresponding to the target eye, $\vec{v}_{face}$ represents the face direction vector corresponding to the face, and $\vec{v}_{corner}$ represents the direction vector corresponding to the two eye corner points of the target eye, which may point at either eye corner point.

Subsequently, the direction vector $\vec{v}_{eye}$ of the eyeball center corresponding to the target eye can be normalized to facilitate subsequent calculations; the normalized vector is denoted

$$(e_1, e_2, e_3) = \frac{\vec{v}_{eye}}{\left\| \vec{v}_{eye} \right\|}$$
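Formula (18) and the normalization can be sketched directly as vector algebra; the function and argument names are assumptions:

```python
import numpy as np

def eyeball_center_direction(first_midpoint, third_feature, corner0, corner1):
    """Project the face direction vector onto the perpendicular bisector
    plane of the two eye corner points, then normalize to (e1, e2, e3)."""
    v_face = np.asarray(third_feature, float) - np.asarray(first_midpoint, float)
    v_corner = np.asarray(corner1, float) - np.asarray(corner0, float)
    n = v_corner / np.linalg.norm(v_corner)   # plane normal (unit vector)
    v_proj = v_face - np.dot(v_face, n) * n   # remove the normal component
    return v_proj / np.linalg.norm(v_proj)    # normalized direction vector
```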
In another embodiment of the present invention, the step 13 may include the following steps 131-135:
131: and determining three-dimensional position information corresponding to second midpoints of two eye corner points of the target eye based on the three-dimensional position information corresponding to the two eye corner points of the target eye.
132: and constructing a first expression representing the radius of the eyeball corresponding to the target eye based on the three-dimensional position information corresponding to the second midpoint and the three-dimensional position information corresponding to the target eye corner point in the two eye corner points of the target eye by referring to the mathematical principle of the sphere structure and the pythagorean theorem.
133: and constructing a second expression representing the position of the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the second midpoint and the direction vector of the eyeball center corresponding to the target eye by referring to the mathematical principle of the sphere structure.
134: and constructing distance expressions from the spatial points corresponding to the upper eyelid points and the lower eyelid points to the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the upper eyelid points, the three-dimensional position information corresponding to the lower eyelid points and the second expression.
135: and fitting three-dimensional position information corresponding to the eyeball center corresponding to the target eye based on the first expression and the distance expression.
In the implementation mode, the electronic equipment determines three-dimensional position information corresponding to second midpoints of two eye corner points of the target eye based on the three-dimensional position information corresponding to the two eye corner points of the target eye; and constructing a first expression representing the radius of the eyeball corresponding to the target eye based on the three-dimensional position information corresponding to the second midpoint and the three-dimensional position information corresponding to the target eye corner point in the two eye corner points of the target eye by referring to the mathematical principle of the sphere structure and the pythagorean theorem. The target corner point may be any corner point of the target eye.
According to the mathematical principle of the sphere structure, the distances from the spatial points corresponding to the two eye corner points of the target eye to the eyeball sphere center corresponding to the target eye are equal, and equal to the eyeball radius corresponding to the target eye; and the line connecting the spatial point corresponding to the second midpoint of the two eye corner points with the eyeball sphere center is perpendicular to the line connecting the spatial points corresponding to the two eye corner points. Correspondingly, the spatial point corresponding to the target eye corner point, the spatial point corresponding to the second midpoint, and the eyeball sphere center corresponding to the target eye form a right triangle. According to the Pythagorean theorem, the square of the distance between the spatial point corresponding to the target eye corner point and the eyeball sphere center is equal to the sum of the square of the distance between the spatial point corresponding to the target eye corner point and the spatial point corresponding to the second midpoint, and the square of the distance between the spatial point corresponding to the second midpoint and the eyeball sphere center.
FIG. 2 is a schematic diagram of the positional relationship between the spatial points corresponding to the two eye corner points of the target eye and the eyeball sphere center corresponding to the target eye, where points A and B respectively represent the spatial points corresponding to the two eye corner points of the target eye, point C represents the spatial point corresponding to the midpoint of the two eye corner points, point D represents the eyeball sphere center corresponding to the target eye, and arc AB represents the surface of the target eye. It can be understood that, referring to the mathematical principle of the sphere structure, the distance of segment AD is equal to the distance of segment BD, both being equal to the eyeball radius corresponding to the target eye, and segment CD is perpendicular to segment AB. Accordingly, with reference to the Pythagorean theorem, the square of the length of segment AD is equal to the sum of the squares of the lengths of segments AC and CD, which can be expressed as $AD^2 = AC^2 + CD^2$, where the length of segment AD equals the eyeball radius corresponding to the target eye and the length of segment AC equals half the distance between the spatial points corresponding to the two eye corner points of the target eye.
Accordingly, with reference to the mathematical principle of the sphere structure and the Pythagorean theorem, a first expression representing the eyeball radius corresponding to the target eye can be constructed, which can be expressed by the following formula (19):

$$R = \sqrt{m^2 + (x_0 - C_x)^2 + (y_0 - C_y)^2 + (z_0 - C_z)^2} \tag{19}$$

where $R$ represents the eyeball radius corresponding to the target eye, $m$ represents the length of segment CD, and $(C_x, C_y, C_z)$ represents the three-dimensional position information corresponding to the second midpoint of the two eye corner points of the target eye, i.e.

$$(C_x, C_y, C_z) = \left( \frac{x_0 + x_1}{2},\; \frac{y_0 + y_1}{2},\; \frac{z_0 + z_1}{2} \right)$$

$(x_0, y_0, z_0)$ represents the three-dimensional position information of the first eye corner spatial point corresponding to the first eye corner point of the target eye, the target eye corner point here being the first eye corner point; when the target eye corner point is the second eye corner point, $(x_0, y_0, z_0)$ is replaced by $(x_1, y_1, z_1)$. Correspondingly, once the value of $m$ is solved, the value of $R$ is determined.
It can be understood that, referring to the mathematical principle of the sphere structure, the eyeball sphere center corresponding to the target eye lies on the perpendicular bisector plane of the two eye corner points of the target eye, as does the spatial point corresponding to the second midpoint of the two eye corner points. Correspondingly, the spatial point corresponding to the second midpoint lies on the direction vector of the eyeball sphere center corresponding to the target eye, and the vector from the eyeball sphere center to the spatial point corresponding to the second midpoint equals the distance between them multiplied by the unit vector obtained by normalizing the direction vector of the eyeball sphere center. Specifically, this can be expressed by the following formula (20):
$$(C_x - D_x,\; C_y - D_y,\; C_z - D_z) = m\,(e_1, e_2, e_3) \tag{20}$$

where $(D_x, D_y, D_z)$ represents the three-dimensional position information of the position of the eyeball sphere center corresponding to the target eye, $m$ represents the distance between the spatial point corresponding to the second midpoint of the two eye corner points of the target eye and the position of the eyeball sphere center, and $(e_1, e_2, e_3)$ is the normalized direction vector of the eyeball sphere center corresponding to the target eye. As shown in FIG. 2, $(D_x, D_y, D_z)$ is the three-dimensional position information of point D, and $(C_x, C_y, C_z)$ is the three-dimensional position information of point C.
In view of this, the electronic device may construct a second expression representing the position of the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the second midpoint and the direction vector of the eyeball center corresponding to the target eye, with reference to the mathematical principle of the sphere structure.
The constructed second expression representing the position of the eyeball sphere center corresponding to the target eye can be represented by the following formula (21):

$$(D_x, D_y, D_z) = (C_x - m e_1,\; C_y - m e_2,\; C_z - m e_3) \tag{21}$$
the electronic device may construct an initial distance expression from a spatial point corresponding to the upper eyelid point and the lower eyelid point to an eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the upper eyelid point and the three-dimensional position information corresponding to the lower eyelid point, and specifically, the initial distance expression may be represented by the following formula (22):
Figure BDA0002377011900000243
wherein L isqThe distance between the q-th space point in the space points corresponding to the upper eyelid point and the lower eyelid point and the eyeball center corresponding to the target eye is represented (x)q,yq,zq) Representing the Q-th space point in the space points corresponding to the upper eyelid point and the lower eyelid point, wherein the value range of Q is [1, Q]Q is the total number of spatial points corresponding to the upper and lower eyelid points.
By combining formula (21) and formula (22), the distance expression from the spatial points corresponding to the upper eyelid points and the lower eyelid points to the eyeball sphere center corresponding to the target eye can be obtained, which can be expressed by the following formula (23):

$$L_q = \sqrt{(x_q - C_x + m e_1)^2 + (y_q - C_y + m e_2)^2 + (z_q - C_z + m e_3)^2} \tag{23}$$
theoretically, the spatial points corresponding to the upper eyelid point and the lower eyelid point of the target eye both exist on the eyeball spherical surface corresponding to the target eye, and the distance from the point on the eyeball spherical surface corresponding to the target eye to the eyeball center corresponding to the target eye is equal to the eyeball radius corresponding to the target eye. In view of this, with reference to the nonlinear optimization method, in combination with the first expression and the distance expressions, i.e., equations (19) and (23), a second optimization target equation of the three-dimensional position information corresponding to the eyeball center corresponding to the target eye is constructed to fit the three-dimensional position information corresponding to the eyeball center corresponding to the target eye. Specifically, the second optimization objective equation can be expressed by the following formula (24):
Figure BDA0002377011900000252
where P2 represents the value of the second optimization objective equation.
A preset nonlinear optimization algorithm is used to solve formula (24); the value of $m$ when formula (24) reaches the preset second convergence condition is obtained, and the eyeball radius corresponding to the target eye and the three-dimensional position information corresponding to the eyeball sphere center corresponding to the target eye are then obtained from the value of $m$. The second convergence condition may be that the final value of $P2$ in formula (24) is not greater than a preset error threshold or reaches a local optimum, or that the number of iterations of formula (24) reaches a preset second iteration count. The preset nonlinear optimization algorithm may include, but is not limited to, line search methods and trust region methods, where the most typical trust region algorithm may be Levenberg-Marquardt.
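A sketch of the one-dimensional fit over $m$ described by formulas (19)-(24), again using scipy.optimize.least_squares over the single scalar unknown; the initial value of $m$ is an assumption:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_eyeball(corner0, corner1, eyelid_points, e_dir):
    """Solve formula (24) for the single unknown m, then recover the
    eyeball radius R via formula (19) and the center D via formula (21)."""
    corner0 = np.asarray(corner0, float)
    corner1 = np.asarray(corner1, float)
    C = 0.5 * (corner0 + corner1)                   # second midpoint
    half = 0.5 * np.linalg.norm(corner1 - corner0)  # length of segment AC
    pts = np.asarray(eyelid_points, float)          # upper + lower eyelid points

    def residuals(m):
        R = np.sqrt(m[0] ** 2 + half ** 2)          # formula (19)
        D = C - m[0] * np.asarray(e_dir, float)     # formula (21)
        return np.linalg.norm(pts - D, axis=1) - R  # L_q - R; P2 sums the squares

    m = least_squares(residuals, x0=[half]).x[0]
    R = np.sqrt(m ** 2 + half ** 2)
    return C - m * np.asarray(e_dir, float), R      # center D and radius R
```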
In another embodiment of the present invention, the S104 may include the following steps 21 to 24:
21: and determining the eye angle vectors corresponding to the two eye angle points of the target eye based on the three-dimensional position information corresponding to the two eye angle points of the target eye.
22: and determining eyelid point pairs of which corresponding eyelid direction vectors are orthogonal to the eye angle vectors from projection points corresponding to the upper eyelid point in a preset middle area and projection points corresponding to the lower eyelid point in the preset middle area based on the first projection position information and the second projection position information.
Wherein each eyelid point pair includes an upper eyelid point and a lower eyelid point, and the eyelid direction vector is: and the vector is determined based on the first projection position information corresponding to the corresponding upper eyelid point and the second projection position information corresponding to the corresponding lower eyelid point.
23: and for each eyelid point pair, determining a corresponding mode of the eyelid point pair based on the first projection position information corresponding to the upper eyelid point and the second projection position information corresponding to the lower eyelid point in the eyelid point pair.
24: and determining the mode with the largest value in the corresponding modes of all the eyelid point pairs as the opening and closing distance of the target eye.
In this implementation, in order to ensure that the obtained eye opening and closing distance is more accurate, and that the manner of determining the opening and closing distance is more consistent and comparable across eyes of different shapes, the electronic device may first determine the eye corner vector corresponding to the two eye corner points of the target eye based on the three-dimensional position information corresponding to the two eye corner points, where the eye corner vector may point at the spatial point corresponding to either eye corner point. Further, based on the first projection position information and the second projection position information, the projection points within a preset middle area are determined from the projection points corresponding to the upper eyelid points, and likewise from the projection points corresponding to the lower eyelid points; eyelid point pairs whose corresponding eyelid direction vectors are orthogonal to the eye corner vector are then determined from these projection points. The eyelid direction vector corresponding to an eyelid point pair is the direction vector determined based on the first projection position information corresponding to the upper eyelid point and the second projection position information corresponding to the lower eyelid point in the pair.
The preset middle region may cover the entire upper eyelid region corresponding to the upper eyelid points, or only the middle portion of the upper eyelid, that is, the portion close to the central region of the target eye. Accordingly, the projection points within the preset middle region among the projection points corresponding to the upper eyelid points may include all of those projection points; or may include only the projection points falling in the middle one of three equal sub-regions; or may include only the projection points falling in the middle two of four equal sub-regions; and so on.
Subsequently, the electronic device may determine, for each eyelid point pair, the norm corresponding to the eyelid point pair based on the first projection position information corresponding to the upper eyelid point and the second projection position information corresponding to the lower eyelid point in the eyelid point pair, and determine the largest of the norms corresponding to all eyelid point pairs as the opening and closing distance of the target eye. In this way, a relatively accurate opening and closing distance of the target eye is determined while the amount of calculation is reduced.
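To make the pairing concrete, the following sketch computes the opening and closing distance from the projected eyelid points. It is illustrative only: the function name is hypothetical, and because discrete points rarely satisfy exact orthogonality, an assumed tolerance cos_tol on the cosine with the canthus vector is used to accept near-orthogonal pairs.

```python
import numpy as np

def opening_distance(upper_proj, lower_proj, canthus_a, canthus_b, cos_tol=0.05):
    """Largest norm among eyelid point pairs whose direction is (near-)orthogonal
    to the canthus vector. upper_proj/lower_proj: (N,3) projected eyelid points
    already restricted to the preset middle region."""
    v = canthus_a - canthus_b
    v = v / np.linalg.norm(v)                     # unit canthus vector
    best = 0.0
    for pu in upper_proj:
        for pl in lower_proj:
            w = pu - pl                           # eyelid direction vector of the pair
            n = np.linalg.norm(w)
            if n == 0.0:
                continue
            if abs(np.dot(w / n, v)) < cos_tol:   # near-orthogonal to the canthus vector
                best = max(best, n)               # keep the largest norm
    return best
```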
Corresponding to the above method embodiment, an embodiment of the present invention provides an apparatus for determining an eye opening and closing distance, as shown in fig. 3, the apparatus including:
an obtaining module 310 configured to obtain a target three-dimensional face model of a person to be detected, wherein the target three-dimensional face model includes: three-dimensional position information corresponding to face feature points of the face of the person to be detected, and the face feature points include: a first specified feature point, a second specified feature point, a third specified feature point, and an upper eyelid point, a lower eyelid point and two canthus points of a target eye;
a fitting module 320 configured to fit three-dimensional position information corresponding to an eyeball center corresponding to the target eye based on three-dimensional position information corresponding to the first specified feature point, the second specified feature point and the third specified feature point, and three-dimensional position information corresponding to an upper eyelid point, a lower eyelid point and two canthus points of the target eye;
a first determining module 330 configured to determine, based on three-dimensional position information corresponding to an eyeball center corresponding to the target eye and three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point, first projection position information corresponding to the upper eyelid point and second projection position information corresponding to the lower eyelid point projected onto an eyeball surface corresponding to the target eye;
a second determining module 340 configured to determine an opening/closing distance of the target eye based on the first projection position information and the second projection position information.
By applying the embodiment of the invention, the three-dimensional position information corresponding to the eyeball center corresponding to the target eye of the person to be detected can be fitted based on the first specified feature point, the second specified feature point, the third specified feature point, and the three-dimensional position information corresponding to the upper eyelid point, the lower eyelid point and the two canthus points of the target eye in the target three-dimensional face model of the person to be detected. The first projection position information corresponding to the upper eyelid point and the second projection position information corresponding to the lower eyelid point, projected onto the eyeball surface corresponding to the target eye, are then determined by combining the three-dimensional position information corresponding to the eyeball center with the three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point. Projecting the eyelid points of the target eye onto the eyeball of the target eye in this way provides a unified calculation basis for the opening and closing distances of eyes of different shapes, avoids to a certain extent the influence of specially shaped eyes, such as eyes with protruding eyelids, on the calculation of the eye opening and closing distance, and improves the accuracy of the determined opening and closing distance for eyes of various shapes.
In another embodiment of the present invention, the obtaining module 310 is specifically configured to obtain a first face image including a face of a person to be detected;
detecting two-dimensional position information of a face feature point of the face from the first face image;
and determining a target three-dimensional face model of the person to be detected based on the two-dimensional position information of the face feature points and a preset three-dimensional face model.
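The patent does not spell out here how the preset three-dimensional face model is registered to the detected two-dimensional feature points; a perspective-n-point (PnP) solve against known camera intrinsics is one standard choice. The sketch below is only an illustration under that assumption; the function name, the use of OpenCV's solvePnP, and the availability of the intrinsic matrix are all assumptions.

```python
import numpy as np
import cv2

def register_face_model(landmarks_2d, model_points_3d, camera_matrix):
    """Hypothetical registration of a preset 3D face model to detected 2D landmarks.
    landmarks_2d: (N,2) detected face feature points; model_points_3d: (N,3)
    corresponding points of the preset model; camera_matrix: 3x3 intrinsics."""
    ok, rvec, tvec = cv2.solvePnP(
        model_points_3d.astype(np.float64),
        landmarks_2d.astype(np.float64),
        camera_matrix.astype(np.float64),
        None,  # assume no lens distortion
    )
    if not ok:
        raise RuntimeError("PnP solve failed")
    R, _ = cv2.Rodrigues(rvec)                    # rotation vector -> rotation matrix
    # express the preset model in the camera frame -> target 3D face model
    return (R @ model_points_3d.T).T + tvec.reshape(1, 3)
```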
In another embodiment of the present invention, the obtaining module 310 includes:
the acquisition unit is configured to acquire second face images obtained when the plurality of image acquisition devices shoot the face of the person to be detected in the same acquisition period;
a detection unit configured to detect, for each second face image, two-dimensional position information of a face feature point of the face from the second face image;
and a first determining unit configured to determine a target three-dimensional face model of the person to be detected based on the target pose information and internal reference information of each image acquisition device and the two-dimensional position information of the face feature points in each second face image.
In another embodiment of the present invention, the first determining unit is specifically configured to determine three-dimensional position information of spatial points corresponding to the first specified feature point, the second specified feature point, and the third specified feature point based on the target pose information and internal reference information of each image acquisition device and the two-dimensional position information of the first specified feature point, the second specified feature point, and the third specified feature point in each second face image;
determining three-dimensional position information corresponding to each of the two canthus points of the target eye based on the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information of the two canthus points of the target eye in the second face images acquired by the image acquisition devices;
constructing a first canthus constraint based on the three-dimensional position information respectively corresponding to the two canthus points of the target eye, a first numerical value, a second numerical value and a cubic parametric curve equation, wherein the first numerical value and the second numerical value constrain the value range of the independent variable in the first canthus constraint;
constructing a reprojection error constraint corresponding to the upper eyelid and a reprojection error constraint corresponding to the lower eyelid of the target eye based on the cubic parametric curve equation, the target pose information and internal reference information of each image acquisition device, and the two-dimensional position information corresponding to the upper eyelid point and the lower eyelid point;
and constructing a spatial eyelid curve corresponding to the upper eyelid and a spatial eyelid curve corresponding to the lower eyelid of the target eye based on the reprojection error constraint corresponding to the upper eyelid, the reprojection error constraint corresponding to the lower eyelid, the first canthus constraint, a preset distance constraint between canthus spatial points and eyelid spatial points, and an eyelid point ordering constraint, so as to obtain the three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point and thereby obtain the target three-dimensional face model of the person to be detected.
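The first determining unit starts from feature points observed by several calibrated cameras. As an illustration only (the unit's full procedure is given by the constraints above and is not reproduced here), a minimal linear (DLT) triangulation of a single feature point from the devices' poses and intrinsics could look as follows; the function name and the projection-matrix convention P_i = K_i [R_i | t_i] are assumptions.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Hypothetical DLT triangulation of one face feature point seen by several cameras.
    proj_mats: list of 3x4 matrices P_i = K_i @ [R_i | t_i] built from each device's
    pose (target pose information) and intrinsics (internal reference information).
    points_2d: matching (u, v) observation per camera."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])              # each view contributes two linear equations
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                                    # homogeneous least-squares solution
    return X[:3] / X[3]                           # dehomogenize -> 3D position
```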
In another embodiment of the present invention, the first specified feature point is: a face feature point at a first designated position on the left half of the face; the second specified feature point is: a face feature point at the first designated position on the right half of the face; and the third specified feature point is: a face feature point at a second designated position among the face feature points corresponding to the center line of the face;
the fitting module 320 includes: a second determining unit configured to determine three-dimensional position information corresponding to a first midpoint of the first specified feature point and the second specified feature point based on the three-dimensional position information corresponding to the first specified feature point and the second specified feature point;
a third determining unit configured to determine a direction vector of the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the first midpoint, the three-dimensional position information corresponding to the third specified feature point, and the three-dimensional position information corresponding to the two canthus points of the target eye;
the fitting unit is configured to fit three-dimensional position information corresponding to an eyeball center corresponding to the target eye based on a sphere structure mathematical principle, three-dimensional position information corresponding to two eye corner points of the target eye, three-dimensional position information corresponding to the upper eyelid point, three-dimensional position information corresponding to the lower eyelid point, and a direction vector of the eyeball center corresponding to the target eye.
In another embodiment of the present invention, the third determining unit is specifically configured to determine a face direction vector corresponding to the face based on the three-dimensional position information corresponding to the first midpoint and the three-dimensional position information corresponding to the third specified feature point; determine the perpendicular bisector plane corresponding to the two canthus points of the target eye based on the three-dimensional position information corresponding to the two canthus points; and determine the projection vector of the face direction vector on that perpendicular bisector plane as the direction vector of the eyeball center corresponding to the target eye.
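Geometrically, projecting a vector v onto a plane with unit normal n amounts to removing the normal component, v - (v·n)n. A brief sketch under that reading follows; the function name and the orientation of the face direction vector, taken here from the first midpoint toward the third specified feature point, are assumptions.

```python
import numpy as np

def eyeball_direction(first_midpoint, third_feature, canthus_a, canthus_b):
    """Face direction vector projected onto the perpendicular bisector plane of
    the two canthus points (whose normal is the canthus direction)."""
    face_dir = third_feature - first_midpoint     # assumed orientation of the face direction vector
    n = canthus_a - canthus_b
    n = n / np.linalg.norm(n)                     # plane normal = unit canthus direction
    proj = face_dir - np.dot(face_dir, n) * n     # remove the component along the normal
    return proj / np.linalg.norm(proj)            # unit direction vector of the eyeball center
```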
In another embodiment of the present invention, the fitting unit is specifically configured to determine three-dimensional position information corresponding to a second midpoint of the two canthus points of the target eye based on the three-dimensional position information corresponding to the two canthus points of the target eye;
constructing, with reference to the mathematical principle of a spherical structure and the Pythagorean theorem, a first expression representing the radius of the eyeball corresponding to the target eye based on the three-dimensional position information corresponding to the second midpoint and the three-dimensional position information corresponding to a target canthus point among the two canthus points of the target eye;
constructing, with reference to the mathematical principle of a spherical structure, a second expression representing the position of the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the second midpoint and the direction vector of the eyeball center corresponding to the target eye;
constructing distance expressions from the spatial points corresponding to the upper eyelid point and the lower eyelid point to the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the upper eyelid point, the three-dimensional position information corresponding to the lower eyelid point and the second expression;
and fitting three-dimensional position information corresponding to the eyeball center corresponding to the target eye based on the first expression and the distance expression.
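Written out explicitly, one plausible reading of these expressions (the symbols M_2, u, c_1 and p_i are introduced here for illustration, not taken verbatim from the patent) is the following, where P_2 matches the residual minimized via the formula (24) above:

```latex
% M_2: second midpoint of the two canthus points; u: unit direction vector of the
% eyeball center; c_1: target canthus point; p_i: eyelid spatial points; m: the
% scalar unknown solved for by the nonlinear optimization.
\begin{align}
  R(m)   &= \sqrt{m^{2} + \lVert c_1 - M_2 \rVert^{2}} &&\text{(first expression)}\\
  O(m)   &= M_2 + m\,u                                 &&\text{(second expression)}\\
  P_2(m) &= \sum_i \bigl(\lVert p_i - O(m) \rVert - R(m)\bigr)^{2} &&\text{(distance residual)}
\end{align}
```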
In another embodiment of the present invention, the second determining module 340 is specifically configured to determine the canthus vector corresponding to the two canthus points of the target eye based on the three-dimensional position information corresponding to the two canthus points of the target eye;
determining, based on the first projection position information and the second projection position information, eyelid point pairs whose corresponding eyelid direction vectors are orthogonal to the canthus vector, from the projection points corresponding to the upper eyelid points in a preset middle region and the projection points corresponding to the lower eyelid points in the preset middle region, wherein each eyelid point pair includes an upper eyelid point and a lower eyelid point, and the eyelid direction vector is: a vector determined based on the first projection position information corresponding to that upper eyelid point and the second projection position information corresponding to that lower eyelid point;
determining, for each eyelid point pair, the norm corresponding to the eyelid point pair based on the first projection position information corresponding to the upper eyelid point and the second projection position information corresponding to the lower eyelid point in the eyelid point pair;
and determining the largest of the norms corresponding to all eyelid point pairs as the opening and closing distance of the target eye.
The device and system embodiments correspond to the method embodiments and have the same technical effects; for details, reference may be made to the method embodiments, which are not repeated here. Those of ordinary skill in the art will understand that the figures are merely schematic representations of one embodiment, and that the blocks or flows in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that the modules in the devices of the embodiments may be distributed in the devices as described in the embodiments, or may be located, with corresponding changes, in one or more devices different from those of the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that the above examples are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for determining an eye opening and closing distance, the method comprising:
obtaining a target three-dimensional face model of a person to be detected, wherein the target three-dimensional face model comprises: three-dimensional position information corresponding to face feature points of the face of the person to be detected, and the face feature points comprise: a first specified feature point, a second specified feature point, a third specified feature point, and an upper eyelid point, a lower eyelid point and two canthus points of a target eye;
fitting three-dimensional position information corresponding to an eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the first specified characteristic point, the second specified characteristic point and the third specified characteristic point and the three-dimensional position information corresponding to the upper eyelid point, the lower eyelid point and the two canthus points of the target eye;
determining first projection position information corresponding to the upper eyelid point and second projection position information corresponding to the lower eyelid point projected to the eyeball surface corresponding to the target eye based on the three-dimensional position information corresponding to the eyeball center corresponding to the target eye and the three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point;
determining an opening and closing distance of the target eye based on the first projection position information and the second projection position information.
2. The method of claim 1, wherein the step of obtaining the target three-dimensional face model of the person to be detected comprises:
obtaining a first face image containing a face of a person to be detected;
detecting two-dimensional position information of a face feature point of the face from the first face image;
and determining a target three-dimensional face model of the person to be detected based on the two-dimensional position information of the face feature points and a preset three-dimensional face model.
3. The method of claim 1, wherein the step of obtaining the target three-dimensional face model of the person to be detected comprises:
acquiring second face images obtained when a plurality of image acquisition devices photograph the face of the person to be detected in the same acquisition period;
for each second face image, detecting two-dimensional position information of face feature points of the face from the second face image;
and determining a target three-dimensional face model of the person to be detected based on the target pose information and internal reference information of each image acquisition device and the two-dimensional position information of the face feature points in each second face image.
4. The method according to claim 3, wherein the step of determining the target three-dimensional face model of the person to be detected based on the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information of the face feature points in each second face image comprises:
determining three-dimensional position information of spatial points corresponding to the first specified feature point, the second specified feature point and the third specified feature point based on the target pose information and internal reference information of each image acquisition device and the two-dimensional position information of the first specified feature point, the second specified feature point and the third specified feature point in each second face image;
determining three-dimensional position information corresponding to each of the two canthus points of the target eye based on the target pose information and the internal reference information of each image acquisition device and the two-dimensional position information of the two canthus points of the target eye in the second face images acquired by the image acquisition devices;
constructing a first canthus constraint based on the three-dimensional position information respectively corresponding to the two canthus points of the target eye, a first numerical value, a second numerical value and a cubic parametric curve equation, wherein the first numerical value and the second numerical value constrain the value range of the independent variable in the first canthus constraint;
constructing a reprojection error constraint corresponding to the upper eyelid and a reprojection error constraint corresponding to the lower eyelid of the target eye based on the cubic parametric curve equation, the target pose information and internal reference information of each image acquisition device, and the two-dimensional position information corresponding to the upper eyelid point and the lower eyelid point;
and constructing a spatial eyelid curve corresponding to the upper eyelid and a spatial eyelid curve corresponding to the lower eyelid of the target eye based on the reprojection error constraint corresponding to the upper eyelid, the reprojection error constraint corresponding to the lower eyelid, the first canthus constraint, a preset distance constraint between canthus spatial points and eyelid spatial points, and an eyelid point ordering constraint, so as to obtain the three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point and thereby obtain the target three-dimensional face model of the person to be detected.
5. The method of any one of claims 1-4, wherein the first specified feature point is: a face feature point at a first designated position on the left half of the face; the second specified feature point is: a face feature point at the first designated position on the right half of the face; and the third specified feature point is: a face feature point at a second designated position among the face feature points corresponding to the center line of the face;
the step of fitting three-dimensional position information corresponding to an eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the first specified feature point, the second specified feature point and the third specified feature point, and the three-dimensional position information corresponding to the upper eyelid point, the lower eyelid point and the two canthus points of the target eye includes:
determining three-dimensional position information corresponding to a first midpoint of the first specified feature point and the second specified feature point based on the three-dimensional position information corresponding to the first specified feature point and the second specified feature point;
determining a direction vector of the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the first midpoint, the three-dimensional position information corresponding to the third specified feature point and the three-dimensional position information corresponding to the two canthus points of the target eye;
and fitting three-dimensional position information corresponding to the eyeball center corresponding to the target eye based on the mathematical principle of a spherical structure, the three-dimensional position information corresponding to the two canthus points of the target eye, the three-dimensional position information corresponding to the upper eyelid point, the three-dimensional position information corresponding to the lower eyelid point and the direction vector of the eyeball center corresponding to the target eye.
6. The method according to claim 5, wherein the step of determining the direction vector of the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the first midpoint, the three-dimensional position information corresponding to the third specified feature point, and the three-dimensional position information corresponding to the two canthus points of the target eye comprises:
determining a face direction vector corresponding to the face based on the three-dimensional position information corresponding to the first midpoint and the three-dimensional position information corresponding to the third specified feature point;
determining the perpendicular bisector plane corresponding to the two canthus points of the target eye based on the three-dimensional position information corresponding to the two canthus points of the target eye;
and determining the projection vector of the face direction vector on the perpendicular bisector plane as the direction vector of the eyeball center corresponding to the target eye.
7. The method according to claim 5, wherein the step of fitting the three-dimensional position information corresponding to the eyeball center corresponding to the target eye based on the mathematical principle of a spherical structure, the three-dimensional position information corresponding to the two canthus points of the target eye, the three-dimensional position information corresponding to the upper eyelid point, the three-dimensional position information corresponding to the lower eyelid point, and the direction vector of the eyeball center corresponding to the target eye comprises:
determining three-dimensional position information corresponding to a second midpoint of the two canthus points of the target eye based on the three-dimensional position information corresponding to the two canthus points of the target eye;
constructing, with reference to the mathematical principle of a spherical structure and the Pythagorean theorem, a first expression representing the radius of the eyeball corresponding to the target eye based on the three-dimensional position information corresponding to the second midpoint and the three-dimensional position information corresponding to a target canthus point among the two canthus points of the target eye;
constructing, with reference to the mathematical principle of a spherical structure, a second expression representing the position of the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the second midpoint and the direction vector of the eyeball center corresponding to the target eye;
constructing distance expressions from the spatial points corresponding to the upper eyelid point and the lower eyelid point to the eyeball center corresponding to the target eye based on the three-dimensional position information corresponding to the upper eyelid point, the three-dimensional position information corresponding to the lower eyelid point and the second expression;
and fitting three-dimensional position information corresponding to the eyeball center corresponding to the target eye based on the first expression and the distance expression.
8. The method of any one of claims 1-7, wherein the step of determining the opening and closing distance of the target eye based on the first projection position information and the second projection position information comprises:
determining the canthus vector corresponding to the two canthus points of the target eye based on the three-dimensional position information corresponding to the two canthus points of the target eye;
determining, based on the first projection position information and the second projection position information, eyelid point pairs whose corresponding eyelid direction vectors are orthogonal to the canthus vector, from the projection points corresponding to the upper eyelid points in a preset middle region and the projection points corresponding to the lower eyelid points in the preset middle region, wherein each eyelid point pair comprises an upper eyelid point and a lower eyelid point, and the eyelid direction vector is: a vector determined based on the first projection position information corresponding to that upper eyelid point and the second projection position information corresponding to that lower eyelid point;
determining, for each eyelid point pair, the norm corresponding to the eyelid point pair based on the first projection position information corresponding to the upper eyelid point and the second projection position information corresponding to the lower eyelid point in the eyelid point pair;
and determining the largest of the norms corresponding to all eyelid point pairs as the opening and closing distance of the target eye.
9. An apparatus for determining an eye opening and closing distance, the apparatus comprising:
an obtaining module configured to obtain a target three-dimensional face model of a person to be detected, wherein the target three-dimensional face model includes: three-dimensional position information corresponding to face feature points of the face of the person to be detected, and the face feature points include: a first specified feature point, a second specified feature point, a third specified feature point, and an upper eyelid point, a lower eyelid point and two canthus points of a target eye;
a fitting module configured to fit three-dimensional position information corresponding to an eyeball center corresponding to the target eye based on three-dimensional position information corresponding to the first specified feature point, the second specified feature point and the third specified feature point and three-dimensional position information corresponding to an upper eyelid point, a lower eyelid point and two canthus points of the target eye;
a first determining module configured to determine, based on three-dimensional position information corresponding to an eyeball center corresponding to the target eye and three-dimensional position information corresponding to the upper eyelid point and the lower eyelid point, first projection position information corresponding to the upper eyelid point and second projection position information corresponding to the lower eyelid point projected onto an eyeball surface corresponding to the target eye;
a second determination module configured to determine an open-close distance of the target eye based on the first projection position information and the second projection position information.
10. The apparatus according to claim 9, characterized in that the obtaining module is specifically configured to obtain a first face image containing the face of the person to be detected;
detecting two-dimensional position information of a face feature point of the face from the first face image;
and determining a target three-dimensional face model of the person to be detected based on the two-dimensional position information of the face feature points and a preset three-dimensional face model.
CN202010069773.0A 2020-01-21 2020-01-21 Method and device for determining eye opening and closing distance Active CN113208591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010069773.0A CN113208591B (en) 2020-01-21 2020-01-21 Method and device for determining eye opening and closing distance


Publications (2)

Publication Number Publication Date
CN113208591A true CN113208591A (en) 2021-08-06
CN113208591B CN113208591B (en) 2023-01-06

Family

ID=77085080

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010069773.0A Active CN113208591B (en) 2020-01-21 2020-01-21 Method and device for determining eye opening and closing distance

Country Status (1)

Country Link
CN (1) CN113208591B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101692980A (en) * 2009-10-30 2010-04-14 吴泽俊 Method for detecting fatigue driving
JP2017514193A (en) * 2014-02-04 2017-06-01 フラウンホーファー−ゲゼルシャフト・ツール・フェルデルング・デル・アンゲヴァンテン・フォルシュング・アインゲトラーゲネル・フェライン 3D image analysis apparatus for determining a line-of-sight direction
CN107133595A (en) * 2017-05-11 2017-09-05 南宁市正祥科技有限公司 The eyes opening and closing detection method of infrared image
CN109389069A (en) * 2018-09-28 2019-02-26 北京市商汤科技开发有限公司 Blinkpunkt judgment method and device, electronic equipment and computer storage medium
CN109460704A (en) * 2018-09-18 2019-03-12 厦门瑞为信息技术有限公司 A kind of fatigue detection method based on deep learning, system and computer equipment
CN109934207A (en) * 2019-04-15 2019-06-25 华东师范大学 A kind of characteristic distance modification method of driver face based on facial expression fatigue driving detection algorithm
CN110516548A (en) * 2019-07-24 2019-11-29 浙江工业大学 A kind of iris center positioning method based on three-dimensional eyeball phantom and Snakuscule

Also Published As

Publication number Publication date
CN113208591B (en) 2023-01-06

Similar Documents

Publication Publication Date Title
JP5136965B2 (en) Image processing apparatus, image processing method, and image processing program
US10121273B2 (en) Real-time reconstruction of the human body and automated avatar synthesis
US20150029322A1 (en) Method and computations for calculating an optical axis vector of an imaged eye
CN107953329B (en) Object recognition and attitude estimation method and device and mechanical arm grabbing system
CN107813310A (en) One kind is based on the more gesture robot control methods of binocular vision
CN106780389B (en) Fisheye image correction method and device based on coordinate transformation
CN111199528A (en) Fisheye image distortion correction method
CN110956065B (en) Face image processing method and device for model training
JP2021531601A (en) Neural network training, line-of-sight detection methods and devices, and electronic devices
WO2009043927A1 (en) Apparatus for acquiring and processing information relating to human eye movements
TW201220253A (en) Image calculation method and apparatus
CN115482574A (en) Screen fixation point estimation method, device, medium and equipment based on deep learning
CN110956068B (en) Fatigue detection method and device based on human eye state recognition
US11982878B2 (en) Method and device for measuring the local refractive power and/or refractive power distribution of a spectacle lens
CN111476151A (en) Eyeball detection method, device, equipment and storage medium
CN113208591B (en) Method and device for determining eye opening and closing distance
CN113095274A (en) Sight estimation method, system, device and storage medium
WO2020237941A1 (en) Personnel state detection method and apparatus based on eyelid feature information
CN112528714B (en) Single-light-source-based gaze point estimation method, system, processor and equipment
Ranganathan et al. Gaussian process for lens distortion modeling
JP2003256804A (en) Visual field video generating device and method, and visual field video generating program and recording medium with its program recorded
Schmidt et al. The calibration of the pan-tilt units for the active stereo head
CN113221600B (en) Method and device for calibrating image feature points
JP7300517B2 (en) Incident light information acquisition method, incident light information acquisition system, and information processing apparatus
CN117611752B (en) Method and system for generating 3D model of digital person

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211126

Address after: 215100 floor 23, Tiancheng Times Business Plaza, No. 58, qinglonggang Road, high speed rail new town, Xiangcheng District, Suzhou, Jiangsu Province

Applicant after: MOMENTA (SUZHOU) TECHNOLOGY Co.,Ltd.

Address before: Room 601-a32, Tiancheng information building, No. 88, South Tiancheng Road, high speed rail new town, Xiangcheng District, Suzhou City, Jiangsu Province

Applicant before: MOMENTA (SUZHOU) TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant