CN111860292A - Monocular camera-based human eye positioning method, device and equipment - Google Patents


Publication number
CN111860292A
Authority
CN
China
Prior art keywords: driver, distance, monocular camera, image, human eye
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202010688604.5A
Other languages: Chinese (zh)
Inventors: 高万军, 李超龙, 吴子扬, 柳燕飞, 汪华锋, 和卫民, 刘俊峰
Current Assignee (the listed assignees may be inaccurate): iFlytek Co Ltd
Original Assignee: iFlytek Co Ltd
Application filed by iFlytek Co Ltd
Priority to CN202010688604.5A
Publication of CN111860292A
Legal status: Pending


Classifications

    • G06V 40/18 — Eye characteristics, e.g. of the iris
    • G06T 7/70 — Image analysis: determining position or orientation of objects or cameras
    • G06T 7/80 — Image analysis: analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V 40/161 — Human faces: detection; localisation; normalisation
    • G06V 40/165 — Human faces: detection; localisation; normalisation using facial parts and geometric relationships

Abstract

The invention discloses a monocular camera-based human eye positioning method, device and equipment. The core idea is to calibrate in advance, using a monocular camera and a reference point, a rigid-body parameter of each individual driver — the driver's real interpupillary distance — and to store it as the basis for subsequent positioning. In the actual positioning stage, the stored interpupillary distance of the current driver is retrieved via the corresponding relation and combined with 2D image information collected in real time to calculate the position of the driver's eyes in real space. Compared with existing schemes based on monocular camera ranging, the method significantly improves positioning accuracy, is low-cost and convenient to operate, and is therefore easy to apply widely in the DSM field.

Description

Monocular camera-based human eye positioning method, device and equipment
Technical Field
The invention relates to the technical field of driver state monitoring, in particular to a human eye positioning method, a human eye positioning device and human eye positioning equipment based on a monocular camera.
Background
In the field of Driver State Monitoring (DSM), monitoring technology for a Driver's gaze target point or target area is involved.
In particular, the driver's gaze target during driving can be determined from the target location or area together with the driver's gaze ray. The target position or area can be determined by calibrating the camera in advance and three-dimensionally modeling the cockpit; the driver's gaze ray consists of a gaze direction and a gaze origin, where the gaze direction can be obtained by analyzing the driver's facial image acquired by the monocular camera.
However, due to the 2D imaging characteristics of a monocular camera, it is difficult to obtain depth information of an object; in particular, the gaze-origin information (i.e., the spatial position of the human eye) cannot be measured accurately. In addition, some existing schemes that measure distance with a monocular camera ignore individual differences between drivers, which introduces additional error and further degrades positioning accuracy.
Disclosure of Invention
In view of the above, the present invention aims to provide a monocular camera-based human eye positioning method, apparatus and device, and accordingly proposes a computer-readable storage medium and a computer program product, by which higher-precision human eye spatial position information can be obtained at lower cost.
The technical scheme adopted by the invention is as follows:
In a first aspect, the present invention provides a monocular camera-based human eye positioning method, comprising:
based on a set reference point and a monocular camera, measuring and storing an actual pupil distance parameter of a driver in advance;
acquiring a current image of a driver through a monocular camera;
calculating a first distance from the eyes of the driver to the camera according to the actual pupil distance parameter corresponding to the current driver and the information of the current image;
and determining the spatial position of the eyes of the driver by using the first distance and the information of the current image.
In at least one possible implementation manner, the pre-measuring and storing the actual interpupillary distance parameter of the driver based on the set reference point and the monocular camera includes:
prompting a driver to stare at the reference point, and acquiring a driver image by using a monocular camera;
according to the driver image, the positions of the left pupil and the right pupil and the midpoint of the two pupils in the image, the head deflection angle and the sight line direction are obtained;
calculating a second distance between the actual two pupillary points of the driver and the camera according to the positions of the two pupillary points, the position of the reference point and the sight direction;
and determining and storing the actual pupil distance parameter of the driver based on the positions of the left pupil and the right pupil in the image, the focal length of the monocular camera, the head deflection angle and the second distance.
In at least one possible implementation manner, the obtaining a first distance from the eyes of the driver to the camera according to the actual interpupillary distance parameter corresponding to the current driver and the information of the current image comprises:
reading the corresponding actual interpupillary distance parameter based on the identity of the current driver;
processing the current image to obtain the positions of left and right pupils and a head deflection angle in the current image;
determining the first distance according to the actual pupil distance parameter, the positions of left and right pupils in the current image, the focal length of a monocular camera and the head deflection angle; wherein the first distance refers to the distance between the actual two pupils of the current driver and the camera.
In at least one possible implementation manner, the determining the spatial position of the eyes of the driver by using the first distance and the information of the current image includes:
acquiring space position information of the midpoint of the two pupils relative to the monocular camera from the current image;
and determining the actual spatial position of the eyes of the current driver according to the spatial position information and the first distance.
In a second aspect, the present invention provides a monocular camera-based human eye positioning device, comprising:
The pupil distance calibration module is used for measuring and storing the actual pupil distance parameter of the driver in advance based on the set reference point and the monocular camera;
the image acquisition module is used for acquiring a current image of the driver through the monocular camera;
the human eye distance calculation module is used for calculating a first distance from the eyes of the driver to the camera according to the actual pupil distance parameter corresponding to the current driver and the information of the current image;
and the human eye space positioning module is used for determining the space position of the eyes of the driver by utilizing the first distance and the information of the current image.
In at least one possible implementation manner, the interpupillary distance calibration module includes:
the image acquisition unit is used for prompting a driver to stare at the reference point and acquiring a driver image by using a monocular camera;
the image processing unit is used for obtaining the positions of the left pupil, the right pupil and the midpoint of the two pupils in the image, the head deflection angle and the sight line direction according to the driver image;
the human eye distance calculation unit is used for calculating a second distance between the actual two pupillary points of the driver and the camera according to the positions of the two pupillary points, the position of the reference point and the sight line direction;
And the pupil distance parameter construction unit is used for determining and storing the actual pupil distance parameter of the driver based on the positions of the left and right pupils in the image, the focal length of the monocular camera, the head deflection angle and the second distance.
In at least one possible implementation manner, the human eye distance calculation module includes:
the pupil distance parameter reading unit is used for reading the corresponding actual pupil distance parameter based on the identity of the current driver;
the image processing unit is used for processing the current image to obtain the positions of left and right pupils and the head deflection angle in the current image;
the human eye distance calculation unit is used for determining the first distance according to the actual pupil distance parameter, the positions of left and right pupils in the current image, the focal length of the monocular camera and the head deflection angle; wherein the first distance refers to the distance between the actual two pupils of the current driver and the camera.
In at least one possible implementation, the eye space positioning module includes:
acquiring space position information of the midpoint of the two pupils relative to the monocular camera from the current image;
and determining the actual spatial position of the eyes of the current driver according to the spatial position information and the first distance.
In a third aspect, the present invention provides a monocular camera based human eye positioning device comprising:
one or more processors; a memory, which may employ a non-volatile storage medium; and one or more computer programs, wherein the one or more computer programs are stored in the memory and comprise instructions that, when executed by the device (specifically, by the processor), cause the device to perform the method of the first aspect or any possible implementation of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when run on a computer, causes the computer to perform the method as described in the first aspect or any possible implementation manner of the first aspect.
In a fifth aspect, the present invention further provides a computer program product for a driver state monitoring system which, when executed by a computer device, causes the driver state monitoring system to perform the method of the first aspect or any of its possible implementations.
In a possible design of the fifth aspect, the relevant program related to the product may be stored in whole or in part on a memory packaged with the processor, or may be stored in part or in whole on a storage medium not packaged with the processor.
The conception of the invention is to calibrate in advance, using a monocular camera and a reference point, a rigid-body parameter of each individual driver — the driver's real interpupillary distance — and to use it as the basis for subsequent positioning. In the actual positioning stage, the interpupillary distance of the current driver is retrieved via the corresponding relation, and the position of the driver's eyes in real space is measured and calculated by combining it with the 2D image information collected in real time. Compared with existing schemes based on monocular camera ranging, the method significantly improves positioning accuracy, is low-cost and convenient to operate, and is therefore easy to apply widely in the DSM field.
Drawings
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of an embodiment of a monocular camera-based human eye positioning method provided by the present invention;
FIG. 2 is a flow chart of an embodiment of the present invention for measuring and storing an actual interpupillary distance parameter;
fig. 3 is a reference diagram of an embodiment of finding an actual interpupillary distance according to the present invention;
FIG. 4 is a flowchart of an embodiment of finding an actual distance from a human eye to a camera according to the present invention;
fig. 5 is a block diagram of an embodiment of a monocular camera-based human eye positioning device provided by the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
Before explaining the technical scheme of the invention, the inventive process and design context of the invention are first introduced.
Determining the spatial position of the driver's eyes mainly consists in measuring the distance from the eyes to the camera (i.e., the spatial position of the eyes relative to the DSM camera). Existing solutions generally fall into two categories: 1) ranging based on a 2D camera and facial feature parts, which takes the facial feature points of a standard face image as a basis and calculates the distance directly from the mapping relation between the standard face and the real face; 2) ranging based on a 3D camera, such as a binocular camera or a depth camera, which calculates distance from binocular calibration or obtains distance information directly. Although a 3D image can obtain accurate depth information of an object, unlike a 2D camera, this approach adds system cost and hinders popularization: a 3D stereo setup needs multiple cameras, increasing both the camera count and the algorithm-processing load compared with a monocular camera, while a depth camera (TOF or structured light) requires additional sophisticated illumination and receiving devices. For these reasons, the mainstream DSM camera in existing driver state monitoring systems is still the monocular camera.
In view of the above problems of the prior art, the present invention provides a human eye positioning strategy for a monocular camera. In general, it comprises determining the driver's real interpupillary distance in advance based on the monocular camera, and then positioning the driver's eyes in the real application environment based on the monocular camera and the real interpupillary distance parameter. Reference may be made to the embodiment of a monocular camera-based human eye positioning method illustrated in fig. 1, which may comprise the following steps:
and step S1, measuring and storing the actual interpupillary distance parameter of the driver in advance based on the set reference point and the monocular camera.
The monocular camera may be, but is not limited to, a visible-light or near-infrared camera fixedly installed in the vehicle cab, generally above the dashboard or at the rear-view mirror, so as to photograph the driver's state. The reference point may be, but is not limited to, an object arranged near the camera whose position relative to the camera is determined — for example, a point displayed on the car's central control display screen, or a button of the central control system. The actual interpupillary distance is the absolute distance between the centers of the driver's left and right pupils. Interpupillary distance differs between individuals and also varies considerably with gender; for example, the average interpupillary distances of adult men and women in China are 62 mm and 58 mm respectively. For an adult, an individual's interpupillary distance can therefore be regarded as an inherent characteristic that remains essentially constant and distinguishes that individual from others.
Therefore, the invention proposes to calibrate and store the driver's interpupillary distance in advance, i.e., to obtain a rigid-body characteristic of the individual driver. The details of determining the actual interpupillary distance parameter via the reference point and the monocular camera may vary between embodiments: different settings may be chosen for the reference point, different parameters may be selected from the image taken by the monocular 2D camera, and auxiliary operations may be used during the calibration stage to help the driver complete the calibration action. With reference to the example of fig. 2, the present invention provides at least one possible implementation in which measuring and storing the driver's actual interpupillary distance parameter in advance, based on the set reference point and the monocular camera, includes:
and step S11, prompting the driver to stare at the reference point, and acquiring an image of the driver by using the monocular camera.
This step can be implemented as follows. The reference point and the monocular camera are calibrated in advance, and the vehicle cabin can be modeled three-dimensionally, so that a camera coordinate system, a world coordinate system and the like are established and the camera's intrinsic, extrinsic and distortion parameters as well as the coordinates of the reference point are obtained (in different embodiments, the reference point may be expressed in the camera coordinate system or the world coordinate system). To assist the driver with interpupillary distance calibration, the voice-guidance function of the on-board system may (but need not) be used to prompt the driver to gaze at the reference point in a normal standard sitting posture and keep still. The monocular 2D camera is then called to collect a video clip of the driver gazing at the reference point, and one or more relatively stable frames are selected using a jitter-detection algorithm such as the gray-level projection method or the Lucas-Kanade optical flow method. Since calibration can be performed for different drivers, the DSM system can complete interpupillary distance calibration the first time any driver uses the vehicle; the system thus obtains each driver's interpupillary distance with a single calibration, and when a given driver later uses the vehicle, the interpupillary distance for the current driver can be retrieved via driver identity recognition, as described later.
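The stable-frame selection mentioned above can be sketched with the gray-level projection method: each frame is reduced to its row and column gray projections, consecutive projections are aligned by a small brute-force search, and the frame with the least estimated motion is kept. The following numpy-only Python fragment is an illustrative sketch under simplified assumptions (fixed search window, mean-absolute-difference matching); the function names are not from the patent.

```python
import numpy as np

def gray_projection_shift(frame_a, frame_b, max_shift=10):
    """Estimate the (dy, dx) shift between two grayscale frames by
    comparing their row and column gray-level projections."""
    def best_shift(p, q):
        # Slide q against p and keep the offset with the smallest
        # mean absolute projection difference.
        best, best_err = 0, np.inf
        for s in range(-max_shift, max_shift + 1):
            pa = p[max(0, s):len(p) + min(0, s)]
            qa = q[max(0, -s):len(q) + min(0, -s)]
            err = np.mean(np.abs(pa - qa))
            if err < best_err:
                best, best_err = s, err
        return best
    dy = best_shift(frame_a.mean(axis=1), frame_b.mean(axis=1))
    dx = best_shift(frame_a.mean(axis=0), frame_b.mean(axis=0))
    return dy, dx

def pick_stable_frame(frames):
    """Return the index of the frame with the least estimated motion
    relative to its predecessor (frame 0 has no predecessor)."""
    if len(frames) < 2:
        return 0
    motions = []
    for prev, cur in zip(frames, frames[1:]):
        dy, dx = gray_projection_shift(prev, cur)
        motions.append(abs(dy) + abs(dx))
    return int(np.argmin(motions)) + 1
```

In practice the Lucas-Kanade optical flow method mentioned in the text would give denser motion estimates; the projection method is shown here because it is the simpler of the two options the text lists.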
And step S12, obtaining the positions of the left pupil and the right pupil in the image and the midpoint of the two pupils, the head deflection angle and the sight line direction according to the driver image.
In the embodiment of the calibration stage, the parameters of the positions of the left and right pupils and the midpoint of the two pupils, the head deflection angle, and the gaze direction in the image are selected, and those skilled in the art can understand that other imaging parameters can be selected in other embodiments in combination with different scenes and actual needs, which are not limited herein.
In the present embodiment, the driver's pupil positions in the monocular camera image can be obtained in the image coordinate system (pixel coordinate system uov); the midpoint of the two pupils is the midpoint of the line connecting the centers of the driver's left and right pupils in the image; the head deflection angle — the yaw, pitch and roll of the head — can be obtained by converting imaging characteristics of key feature points of the driver's head into the camera coordinate system via the camera parameters; and the gaze direction can be expressed, based on the head deflection angle, as the angles between the real gaze and the vertical direction (pitch) and the horizontal direction (yaw) in the camera coordinate system. All of these parameters can be obtained by processing images captured by the monocular camera. The main image-processing stages — face detection, face feature point positioning, pupil positioning, head pose estimation and gaze direction estimation — are described below:
(1) Face detection
The purpose of face detection is to identify the location of the driver's face in the image. It can be implemented with prior art such as MTCNN, FaceBoxes or Mask-RCNN; the specific process can follow the related art and is not detailed here.
(2) Face feature point positioning
The purpose of face feature point positioning is to locate the precise positions of facial feature points such as the eyes, eyebrows, nose, mouth and face outline. In practice, the face region of interest is cropped on the basis of the face detection result and input into a pre-trained neural network that regresses the coordinates of all feature points. Prior art such as SDM, MDM or PFLD can be adopted; the specific process can follow the related art and is not detailed here. After the key feature points are located, the pixel-coordinate positions of the driver's facial feature points in the image are obtained, and these can then be converted into the image coordinate system using the camera intrinsics.
(3) Pupil positioning
The purpose of pupil positioning is to further locate the precise position of the driver's pupil centers, from which the midpoint of the two pupils is determined. The eye feature point positions can be determined from the face feature point positioning result and the eye center point calculated from them; however, the pupil center and the eye center point do not coincide in most cases, so pupil positioning is still needed. Specifically: the eye region of interest is cropped according to the eye feature point positions and the cropped region image is binarized; edge detection is performed on the processed image and straight-line edges are filtered out; and the pupil edge is fitted with a least-squares ellipse to obtain the pupil center position. Alternatively, the region of interest can be input directly into a pre-trained neural network model that regresses the pupil center coordinates. Once the pupil center coordinates are obtained, the pixel coordinates of the left and right pupil centers and the two-pupil midpoint follow directly, and these positions can be converted into the image coordinate system and/or the camera coordinate system using the camera intrinsics.
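The least-squares fitting idea in the pupil-positioning step can be illustrated with a simplified variant: fitting a circle (rather than the full ellipse the text mentions) to detected pupil-edge points reduces to a linear least-squares problem whose solution directly yields the center. This hypothetical Python sketch is for illustration only:

```python
import numpy as np

def fit_circle_center(points):
    """Least-squares fit of x^2 + y^2 + D*x + E*y + F = 0 to edge
    points; returns the fitted center (-D/2, -E/2). A circle is used
    here as a simplified stand-in for the ellipse fit in the text."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, _F), *_ = np.linalg.lstsq(A, b, rcond=None)
    return -D / 2.0, -E / 2.0
```

A full ellipse fit (handling off-axis gaze, where the pupil images as an ellipse) would fit a general conic instead, but the linear-algebra structure is the same.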
(4) Head pose estimation
The purpose of head pose estimation is to estimate the deflection angle of the driver's head in the surrounding space from the camera image. In one implementation, a standard three-dimensional face feature point model is established in the camera coordinate system on the basis of the face feature point positioning, with its yaw, pitch and roll angles all zero; mapping the two-dimensional face key feature point coordinates onto this standard model then yields the corresponding translation vector and rotation matrix, from which the head deflection angle is obtained. In another implementation, built on the face detection result, the head region of interest is cropped and the region image is input into a pre-trained neural network model that directly estimates the head deflection angle; prior art such as HopeNet and FSA-Net can be adopted, and the process can follow the related art. After head pose estimation, the deflection angle of the driver's head relative to the camera coordinate system is available, and it can be converted into the world coordinate system using the camera extrinsics.
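The rotation matrix obtained from the 2D-to-3D feature point mapping can be converted into head yaw, pitch and roll angles. The sketch below assumes one common rotation convention (R = Rz·Ry·Rx with yaw about the Y axis); the convention used in a real implementation may differ, so this is illustrative rather than the patent's prescribed method:

```python
import numpy as np

def rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def head_angles(R):
    """Recover (pitch, yaw, roll) in radians, assuming
    R = rz(roll) @ ry(yaw) @ rx(pitch) and |yaw| < 90 degrees."""
    yaw = np.arcsin(-R[2, 0])
    pitch = np.arctan2(R[2, 1], R[2, 2])
    roll = np.arctan2(R[1, 0], R[0, 0])
    return pitch, yaw, roll
```

In a pipeline based on 2D-3D correspondences, the rotation matrix itself would typically come from a PnP solver before this decomposition step.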
(5) Gaze direction estimation
The purpose of gaze direction estimation is to estimate the direction in which the driver gazes during the calibration stage, understood as the angles between the true gaze direction and the vertical and horizontal directions in the camera coordinate system. In a specific implementation, the driver image is distortion-corrected, translated and rotated according to the face feature point positioning result, the head pose estimation result and the standard three-dimensional face feature point model mentioned above, and the face and eye regions of interest are cropped, so that the face and eyes are converted into a predefined mode in which their distance to the camera and their deflection angle are fixed. The face and eye images are then input into a pre-trained neural network model to estimate the driver's gaze direction, which is transformed back into the actual camera coordinate system by inverting the rotation applied during preprocessing. This yields the angles between the gaze direction and the vertical and horizontal directions in the camera coordinate system, and the result can be converted into a gaze direction vector in three-dimensional space.
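Converting the estimated pitch and yaw angles into a gaze direction vector in the camera coordinate system is a small trigonometric step. The axis and sign conventions below are one common choice in gaze estimation and are not prescribed by the patent:

```python
import numpy as np

def gaze_vector(pitch, yaw):
    """Turn gaze (pitch, yaw) angles in radians into a unit direction
    vector in the camera frame. Convention (an assumption): the camera
    looks along +Z, yaw rotates toward +X, and positive pitch looks up
    (toward -Y, since image Y typically points down)."""
    v = np.array([
        np.cos(pitch) * np.sin(yaw),   # horizontal component
        -np.sin(pitch),                # vertical component
        np.cos(pitch) * np.cos(yaw),   # depth component
    ])
    return v / np.linalg.norm(v)
```

With pitch = yaw = 0 the vector points straight along the optical axis, which matches the intuition of a driver looking directly at the camera.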
It should be emphasized that, for the technical problems addressed by the present invention and for the individual interpupillary distance calibration stage designed here, the point of the image processing above is to obtain, in some embodiments, exactly those parameters that are preferable for the subsequent interpupillary distance calculation.
And step S13, calculating the second distance between the two actual pupillary points of the driver and the camera according to the positions of the two pupillary points, the position of the reference point and the sight line direction.
With reference to the description of the foregoing steps and the interpupillary distance measurement diagram in fig. 3, the reference point, the origin of the camera coordinate system and the driver's real pupil midpoint can be constructed as a triangle: points R, O and E in fig. 3 represent the reference point, the camera origin and the driver's true pupil midpoint respectively; E_l and E_r are the centers of the driver's real left and right pupils; e is the midpoint of the driver's pupils on the captured image; and e_l and e_r are the left and right pupil centers on the image.
The coordinates of the reference point can be calibrated in advance when the reference point is set; in the camera coordinate system, the position of the reference point R can be expressed as R = (X_R, Y_R, Z_R). The position of the driver's pupil midpoint on the image plane, obtained in the preceding steps, can be expressed as e = (X_e, Y_e, f), where f is the camera focal length. From the positions of R and e, the angle ∠eOR between side Oe and side OR can be obtained from the lengths of Oe and OR via the arccosine function, and since ∠eOR = ∠EOR, one interior angle of triangle ROE is known. Similarly, from the driver's measured gaze direction vector mentioned above, another interior angle ∠OER of triangle ROE can be calculated, so the values of all interior angles of triangle ROE can be determined. Meanwhile, from the preset reference point position and the camera origin position, their spatial distance can be calculated and denoted D_OR. Knowing the three interior angles and one side length of the triangle, the other side lengths can be obtained through trigonometric functions; the one required by the invention is the distance between the camera origin and the actual pupil midpoint, denoted D_OE, i.e., the second distance.
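The triangle computation described in this step — the angle at O from the imaged pupil midpoint, the angle at E from the gaze direction, the known side from camera origin to reference point, and finally the law of sines — can be sketched as follows. Parameter names are illustrative, not from the patent:

```python
import numpy as np

def second_distance(ref_point, e_img, focal_len, angle_OER):
    """Solve triangle R-O-E for D_OE, the distance from the camera
    origin O to the real pupil midpoint E.
    ref_point: R = (X_R, Y_R, Z_R) in camera coordinates.
    e_img: imaged pupil midpoint (X_e, Y_e); with the focal length it
    gives the ray direction Oe = (X_e, Y_e, f).
    angle_OER: interior angle at E (radians), from the gaze direction."""
    R = np.asarray(ref_point, dtype=float)
    e = np.array([e_img[0], e_img[1], focal_len], dtype=float)
    # Interior angle at O: the angle between rays Oe and OR
    # (angle eOR equals angle EOR because E lies on ray Oe).
    cos_o = np.dot(e, R) / (np.linalg.norm(e) * np.linalg.norm(R))
    angle_EOR = np.arccos(np.clip(cos_o, -1.0, 1.0))
    angle_ORE = np.pi - angle_EOR - angle_OER   # interior angles sum to pi
    d_OR = np.linalg.norm(R)                    # known side: camera to reference
    # Law of sines: D_OE / sin(angle_ORE) = D_OR / sin(angle_OER)
    return d_OR * np.sin(angle_ORE) / np.sin(angle_OER)
```

Note that the reliability of D_OE degrades as the triangle flattens (reference point nearly on the gaze ray), since sin(∠OER) then approaches zero.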
And step S14, determining and storing the actual pupil distance parameter of the driver based on the positions of the left and right pupils in the image, the focal length of the monocular camera, the head deflection angle and the second distance.
The coordinates of the centers of the left and right pupils (i.e. the positions of the left and right pupils) in the image coordinate system xoy can be obtained through the aforementioned steps, so the distance between the two pupil centers in the image, i.e. the interpupillary distance in the image, denoted d_e, can be calculated. Combined with the second distance D_OE and the camera focal length f, the estimated pupil distance between the actual pupil centers can be calculated by the triangle similarity rule as (D_OE × d_e)/f. Considering that the driver's head has a certain deflection relative to the camera coordinate system, this "estimated pupil distance" is actually the projected distance of the left and right pupils on the XOY plane of the camera coordinate system (when the face is not directly facing the DSM monocular camera, the "estimated pupil distance" is smaller than the actual pupil distance). Therefore, by back-projecting with the head deflection angle obtained in the foregoing steps, the actual interpupillary distance of the driver in real space, denoted D_E, can be obtained, and this value is saved as a characteristic parameter corresponding to the driver participating in the calibration.
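Step S14 can be sketched as follows. The sketch assumes a simple model in which the "estimated pupil distance" relates to the real interpupillary distance by the cosine of the head yaw angle; that cos(yaw) back-projection model, and all names, are assumptions, since the patent text does not spell out the exact formula.

```python
import math

def actual_interpupillary_distance(el, er, f, D_OE, yaw_rad):
    """Calibration-stage sketch (step S14).

    el, er  : image coordinates (x, y) of the left/right pupil centers
    f       : focal length of the monocular camera
    D_OE    : second distance, camera origin to real pupil midpoint
    yaw_rad : head deflection (yaw) angle relative to the camera axis
    """
    d_e = math.hypot(er[0] - el[0], er[1] - el[1])  # interpupillary distance in the image
    est = D_OE * d_e / f        # triangle similarity: the "estimated pupil distance"
    # The estimate is the projection of the eye-to-eye line onto the XOY
    # plane; undo the head yaw to recover the real-space value D_E.
    return est / math.cos(yaw_rad)
```

With pupils 30 px apart, f = 500 px, D_OE = 1000 mm and zero yaw, this yields D_E = 60 mm; a 60° yaw with the same image measurements would double the recovered value under this model.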
Continuing, step S2, acquiring a current image of the driver by the monocular camera;
and step S3, calculating a first distance from the eyes of the driver to the camera according to the actual pupil distance parameter corresponding to the current driver and the information of the current image.
In the practical application stage, that is, while the vehicle is being driven, the current image of the current driver can be captured in real time through the monocular camera; the current image is then processed, the stored interpupillary distance parameter of the current driver is retrieved, and, following a processing idea similar to but the reverse of the calibration stage, the current first distance, i.e. the distance from the driver's eyes to the monocular camera at this moment, is obtained. The implementation details of obtaining the distance between the driver's eyes and the camera may differ between embodiments: for example, one or both real eyes may be used to represent the eyes, or the midpoint between the two actual pupils may represent the eyes; different parameters may be selected from the current image taken by the monocular 2D camera; and a variety of identification means may be adopted when reading the pre-stored interpupillary distance parameters. In at least one possible implementation of the present invention, referring to fig. 4, obtaining the first distance from the driver's eyes to the camera according to the actual pupil distance parameter corresponding to the current driver and the information of the current image may include:
And step S31, reading the corresponding actual interpupillary distance parameter based on the identity of the current driver.
When the pupil distance parameter calibrated in the previous stage is called, the identity of the current driver can be distinguished by using a plurality of identity matching technologies such as voiceprint recognition, fingerprint recognition, face recognition and the like, and the pupil distance characteristic parameter corresponding to the current driver is called from the prestored parameter based on the identity.
And step S32, processing the current image to obtain the positions of the left and right pupils and the head deflection angle in the current image.
In the embodiment of the application stage, the positions of the left and right pupils in the image and the head deflection angle parameters are selected, and those skilled in the art can understand that other imaging parameters can be selected in other embodiments in combination with different scenes and actual needs, which are not limited herein.
For the present embodiment, as mentioned above, the center coordinates of the left and right pupils of the driver in the current image coordinate system and the head deflection angle of the driver in the camera coordinate system can be obtained by processing the currently captured image, including but not limited to face detection, face feature point positioning, pupil positioning, head pose estimation, and the like, which is specifically referred to above and will not be described herein again.
And step S33, determining the first distance according to the actual pupil distance parameter, the positions of the left and right pupils in the current image, the focal length of the monocular camera and the head deflection angle.
As described above, under normal circumstances it cannot be guaranteed that the face in the current driver image captured in real time directly faces the monocular camera; that is, the driver's head in the current image usually has a certain deflection angle relative to the camera coordinate system. In other words, a projection relationship exists between a distance value in the current image, the real distance value, and the imaging plane of the camera. Therefore, by combining the head deflection angle, the actual projected distance of the human eyes on the camera imaging plane can be obtained, i.e. the read D_E is converted into the aforementioned "estimated pupil distance"; the interpupillary distance d_e in the current image is then determined from the positions of the left and right pupils in the current image, and, combined with the camera focal length f and the triangle similarity rule, the distance between the actual pupil midpoint of the current driver and the camera, i.e. the first distance, is calculated.
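Step S33 can be sketched as the calibration-stage computation run in reverse. As before, the cos(yaw) projection model and all names are assumptions used only for illustration.

```python
import math

def first_distance(D_E, el, er, f, yaw_rad):
    """Application-stage sketch (step S33).

    D_E     : stored actual interpupillary distance of the identified driver
    el, er  : current image coordinates (x, y) of the left/right pupil centers
    f       : focal length of the monocular camera
    yaw_rad : head deflection (yaw) angle in the current image
    """
    d_proj = D_E * math.cos(yaw_rad)   # project D_E onto the camera imaging plane
    d_e = math.hypot(er[0] - el[0], er[1] - el[1])
    # triangle similarity: Z = (real distance) * f / (image distance)
    return d_proj * f / d_e
```

Note the round trip with the calibration sketch: for the same image measurements and yaw, feeding the calibrated D_E back in returns the original camera-to-eye distance.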
And step S4, determining the spatial position of the eyes of the driver by using the first distance and the information of the current image.
The first distance obtained in the foregoing step is the distance in the Z direction from the monocular camera plane to the human eyes; in this step, the three-dimensional spatial information of the human eyes is determined, specifically the coordinates (X_E, Y_E, Z_E) in the X, Y and Z directions.
Specifically, the spatial position information e = (X_e, Y_e, f) of the pupil midpoint e relative to the monocular camera can be acquired from the current image; the problem thus becomes: given the known X_e, Y_e, f and the first distance Z_E, find X_E and Y_E. According to the correspondence X_e/X_E = Y_e/Y_E = f/Z_E, (X_E, Y_E, Z_E) can be found, i.e. the actual spatial position of the current driver's eyes is determined, and the monocular camera-based eye positioning operation is thus completed.
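The back-projection of step S4 can be sketched as follows (illustrative names; the relation used is exactly X_e/X_E = Y_e/Y_E = f/Z_E):

```python
def eye_spatial_position(e_img, f, Z_E):
    """Recover the 3D eye position (X_E, Y_E, Z_E) from the image-plane
    pupil midpoint e = (X_e, Y_e) (implicit third coordinate f) and the
    first distance Z_E, via X_e/X_E = Y_e/Y_E = f/Z_E."""
    X_e, Y_e = e_img
    scale = Z_E / f                    # similar-triangle magnification
    return (X_e * scale, Y_e * scale, Z_E)
```

For example, a pupil midpoint imaged at (10, −5) with f = 500 and a first distance of 1000 back-projects to the spatial point (20, −10, 1000).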
In summary, the idea of the present invention is to calibrate the rigid body parameters of the individual driver, i.e. the real pupil distance characteristic information of the driver, in advance through the monocular camera and the reference point, and use this as the basis of the subsequent positioning operation, and in the actual positioning stage, the pupil distance characteristic information of the current driver is read through the corresponding relationship, and the position of the eyes of the driver in the real space is measured and calculated by combining the information of the 2D image collected in real time. Compared with the existing monocular camera ranging-based scheme, the method can obviously improve the positioning accuracy, has the advantages of low cost and convenience in operation, and is easy to widely apply to the DSM field.
Corresponding to the above embodiments and preferred schemes, the present invention further provides an embodiment of a human eye positioning device based on a monocular camera, as shown in fig. 5, which may specifically include the following components:
The pupil distance calibration module 1 is used for measuring and storing actual pupil distance parameters of a driver in advance based on a set reference point and a monocular camera;
the image acquisition module 2 is used for acquiring a current image of the driver through a monocular camera;
the human eye distance calculation module 3 is used for calculating a first distance from the eyes of the driver to the camera according to the actual pupil distance parameter corresponding to the current driver and the information of the current image;
and the human eye space positioning module 4 is used for determining the space position of the eyes of the driver by utilizing the first distance and the information of the current image.
In at least one possible implementation manner, the interpupillary distance calibration module includes:
the image acquisition unit is used for prompting a driver to stare at the reference point and acquiring a driver image by using a monocular camera;
the image processing unit is used for obtaining the positions of the left pupil, the right pupil and the midpoint of the two pupils in the image, the head deflection angle and the sight line direction according to the driver image;
the human eye distance calculation unit is used for calculating a second distance between the actual two pupillary points of the driver and the camera according to the positions of the two pupillary points, the position of the reference point and the sight line direction;
And the pupil distance parameter construction unit is used for determining and storing the actual pupil distance parameter of the driver based on the positions of the left and right pupils in the image, the focal length of the monocular camera, the head deflection angle and the second distance.
In at least one possible implementation manner, the human eye distance calculation module includes:
the pupil distance parameter reading unit is used for reading the corresponding actual pupil distance parameter based on the identity of the current driver;
the image processing unit is used for processing the current image to obtain the positions of left and right pupils and the head deflection angle in the current image;
the human eye distance calculation unit is used for determining the first distance according to the actual pupil distance parameter, the positions of left and right pupils in the current image, the focal length of the monocular camera and the head deflection angle; wherein the first distance refers to the distance between the actual two pupils of the current driver and the camera.
In at least one possible implementation, the eye space positioning module includes:
acquiring space position information of the midpoint of the two pupils relative to the monocular camera from the current image;
and determining the actual spatial position of the eyes of the current driver according to the spatial position information and the first distance.
It should be understood that the division of the components of the monocular camera based eye positioning device shown in fig. 5 is merely a logical division, and the actual implementation may be wholly or partially integrated into one physical entity or may be physically separated. And these components may all be implemented in software invoked by a processing element; or may be implemented entirely in hardware; and part of the components can be realized in the form of calling by the processing element in software, and part of the components can be realized in the form of hardware. For example, a certain module may be a separate processing element, or may be integrated into a certain chip of the electronic device. Other components are implemented similarly. In addition, all or part of the components can be integrated together or can be independently realized. In implementation, each step of the above method or each component above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in the form of software.
For example, the above components may be one or more integrated circuits configured to implement the above methods, such as: one or more Application Specific Integrated Circuits (ASICs), one or more microprocessors (DSPs), one or more Field Programmable Gate Arrays (FPGAs), etc. For another example, these components may be integrated together and implemented in the form of a System-On-a-Chip (SOC).
In view of the foregoing examples and their preferred embodiments, it will be appreciated by those skilled in the art that, in practice, the invention may be implemented in a variety of carriers, which are illustrated schematically as follows:
(1) a monocular camera-based human eye positioning device may comprise:
one or more processors, memory, and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions, which when executed by the device (processor) cause the device to perform the steps/functions of the foregoing embodiments or equivalent implementations.
(2) A readable storage medium, on which a computer program or the above-mentioned apparatus is stored, which, when executed, causes the computer to perform the steps/functions of the above-mentioned embodiments or equivalent implementations.
In the several embodiments provided by the present invention, any function, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on this understanding, those aspects of the present invention that in essence contribute to the prior art, or portions thereof, may be embodied in the form of a software product, as described below.
(3) A computer program product (which may include the above-mentioned apparatus and may refer to a computer program product applied to a DSM system), which, when run on a computer device (an in-vehicle device and/or a backend server, etc.), causes the DSM system (which may also be the aforementioned computer device) to perform the monocular camera-based human eye positioning method of the aforementioned embodiment or equivalent.
From the above description of the embodiments, it is clear to those skilled in the art that all or part of the steps of the above methods can be implemented by software plus a necessary general hardware platform. With this understanding, the above-described computer program products may include, but are not limited to, an APP; the aforementioned device/terminal may be a computer device (e.g., a mobile phone, a PC terminal, a cloud platform, a server, a server cluster, or a network communication device such as a media gateway). Moreover, the hardware structure of the computer device may further specifically include: at least one processor, at least one communication interface, at least one memory, and at least one communication bus; the processor, the communication interface and the memory can all communicate with one another through the communication bus. The processor may be a central processing unit (CPU), a DSP, a microcontroller, or a digital signal processor, and may further include a GPU, an embedded neural-network processing unit (NPU) and an image signal processor (ISP); the processor may further include an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. In addition, the processor may have the function of running one or more software programs, which may be stored in a storage medium such as the memory; the aforementioned memory/storage medium may comprise: non-volatile memories such as non-removable magnetic disks, USB flash drives, removable hard disks and optical disks, as well as Read-Only Memory (ROM), Random Access Memory (RAM), and the like.
In the embodiments of the present invention, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, and means that there may be three relationships, for example, a and/or B, and may mean that a exists alone, a and B exist simultaneously, and B exists alone. Wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" and similar expressions refer to any combination of these items, including any combination of singular or plural items. For example, at least one of a, b, and c may represent: a, b, c, a and b, a and c, b and c or a and b and c, wherein a, b and c can be single or multiple.
Those of skill in the art will appreciate that the various modules, elements, and method steps described in the embodiments disclosed in this specification can be implemented as electronic hardware, combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In addition, the embodiments in the present specification are described in a progressive manner, and the same or similar parts among the embodiments may be referred to each other. In particular, for embodiments of devices, apparatuses, etc., since they are substantially similar to the method embodiments, reference may be made to the relevant descriptions of the method embodiments. The above-described embodiments of devices, apparatuses, etc. are merely illustrative; modules and units described as separate components may or may not be physically separate, and may be located in one place or distributed across multiple places, for example on nodes of a system network. Some or all of the modules and units can be selected according to actual needs to achieve the purpose of the above-described embodiments, which can be understood and carried out by those skilled in the art without inventive effort.
The structure, features and effects of the present invention have been described in detail with reference to the embodiments shown in the drawings, but the above embodiments are merely preferred embodiments of the present invention, and it should be understood that technical features related to the above embodiments and preferred modes thereof can be reasonably combined and configured into various equivalent schemes by those skilled in the art without departing from and changing the design idea and technical effects of the present invention; therefore, the invention is not limited to the embodiments shown in the drawings, and all the modifications and equivalent embodiments that can be made according to the idea of the invention are within the scope of the invention as long as they are not beyond the spirit of the description and the drawings.

Claims (10)

1. A human eye positioning method based on a monocular camera is characterized by comprising the following steps:
based on a set reference point and a monocular camera, measuring and storing an actual pupil distance parameter of a driver in advance;
acquiring a current image of a driver through a monocular camera;
calculating a first distance from the eyes of the driver to the camera according to the actual pupil distance parameter corresponding to the current driver and the information of the current image;
and determining the spatial position of the eyes of the driver by using the first distance and the information of the current image.
2. The monocular camera-based human eye positioning method of claim 1, wherein the pre-measuring and storing the actual interpupillary distance parameter of the driver based on the set reference point and the monocular camera comprises:
prompting a driver to stare at the reference point, and acquiring a driver image by using a monocular camera;
according to the driver image, the positions of the left pupil and the right pupil and the midpoint of the two pupils in the image, the head deflection angle and the sight line direction are obtained;
calculating a second distance between the actual two pupillary points of the driver and the camera according to the positions of the two pupillary points, the position of the reference point and the sight direction;
And determining and storing the actual pupil distance parameter of the driver based on the positions of the left pupil and the right pupil in the image, the focal length of the monocular camera, the head deflection angle and the second distance.
3. The monocular camera-based human eye positioning method of claim 1, wherein the deriving a first distance of the driver's eyes from the camera based on the actual pupillary distance parameter corresponding to the current driver and the information of the current image comprises:
reading the corresponding actual interpupillary distance parameter based on the identity of the current driver;
processing the current image to obtain the positions of left and right pupils and a head deflection angle in the current image;
determining the first distance according to the actual pupil distance parameter, the positions of left and right pupils in the current image, the focal length of a monocular camera and the head deflection angle; wherein the first distance refers to the distance between the actual two pupils of the current driver and the camera.
4. The monocular camera-based human eye positioning method of claim 3, wherein the determining the spatial position of the driver's eyes using the first distance and the information of the current image comprises:
acquiring space position information of the midpoint of the two pupils relative to the monocular camera from the current image;
And determining the actual spatial position of the eyes of the current driver according to the spatial position information and the first distance.
5. A monocular camera-based human eye positioning device, comprising:
the pupil distance calibration module is used for measuring and storing the actual pupil distance parameter of the driver in advance based on the set reference point and the monocular camera;
the image acquisition module is used for acquiring a current image of the driver through the monocular camera;
the human eye distance calculation module is used for calculating a first distance from the eyes of the driver to the camera according to the actual pupil distance parameter corresponding to the current driver and the information of the current image;
and the human eye space positioning module is used for determining the space position of the eyes of the driver by utilizing the first distance and the information of the current image.
6. The monocular camera-based human eye positioning device of claim 5, wherein the interpupillary distance calibration module comprises:
the image acquisition unit is used for prompting a driver to stare at the reference point and acquiring a driver image by using a monocular camera;
the image processing unit is used for obtaining the positions of the left pupil, the right pupil and the midpoint of the two pupils in the image, the head deflection angle and the sight line direction according to the driver image;
The human eye distance calculation unit is used for calculating a second distance between the actual two pupillary points of the driver and the camera according to the positions of the two pupillary points, the position of the reference point and the sight line direction;
and the pupil distance parameter construction unit is used for determining and storing the actual pupil distance parameter of the driver based on the positions of the left and right pupils in the image, the focal length of the monocular camera, the head deflection angle and the second distance.
7. The monocular camera-based human eye positioning device of claim 5, wherein the human eye distance calculation module comprises:
the pupil distance parameter reading unit is used for reading the corresponding actual pupil distance parameter based on the identity of the current driver;
the image processing unit is used for processing the current image to obtain the positions of left and right pupils and the head deflection angle in the current image;
the human eye distance calculation unit is used for determining the first distance according to the actual pupil distance parameter, the positions of left and right pupils in the current image, the focal length of the monocular camera and the head deflection angle; wherein the first distance refers to the distance between the actual two pupils of the current driver and the camera.
8. The monocular camera-based human eye positioning device of claim 7, wherein the human eye spatial positioning module comprises:
Acquiring space position information of the midpoint of the two pupils relative to the monocular camera from the current image;
and determining the actual spatial position of the eyes of the current driver according to the spatial position information and the first distance.
9. A monocular camera-based human eye positioning device, comprising:
one or more processors, a memory, and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the processor, cause the human eye positioning device to perform the monocular camera based human eye positioning method of any one of claims 1-4.
10. A computer program product for a driver condition monitoring system, characterized in that, when running on a computer device, causes the driver condition monitoring system to perform the monocular camera based eye positioning method of any one of claims 1-4.
CN202010688604.5A 2020-07-16 2020-07-16 Monocular camera-based human eye positioning method, device and equipment Pending CN111860292A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010688604.5A CN111860292A (en) 2020-07-16 2020-07-16 Monocular camera-based human eye positioning method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010688604.5A CN111860292A (en) 2020-07-16 2020-07-16 Monocular camera-based human eye positioning method, device and equipment

Publications (1)

Publication Number Publication Date
CN111860292A true CN111860292A (en) 2020-10-30

Family

ID=72983170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010688604.5A Pending CN111860292A (en) 2020-07-16 2020-07-16 Monocular camera-based human eye positioning method, device and equipment

Country Status (1)

Country Link
CN (1) CN111860292A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113627355A (en) * 2021-08-13 2021-11-09 合众新能源汽车有限公司 Distance measurement method, device and computer readable medium for yaw rotating human face
CN113627267A (en) * 2021-07-15 2021-11-09 中汽创智科技有限公司 Sight line detection method, device, equipment and medium
CN114067420A (en) * 2022-01-07 2022-02-18 深圳佑驾创新科技有限公司 Sight line measuring method and device based on monocular camera
WO2022094787A1 (en) * 2020-11-04 2022-05-12 深圳市大疆创新科技有限公司 Driver data processing system and driver data acquisition method
CN114608521A (en) * 2022-03-17 2022-06-10 北京市商汤科技开发有限公司 Monocular distance measuring method and device, electronic equipment and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070171369A1 (en) * 2006-01-16 2007-07-26 Sensomotoric Instruments Gmbh Method of determining the spatial relationship of an eye of a person with respect to a camera device
CN103793719A (en) * 2014-01-26 2014-05-14 深圳大学 Monocular distance-measuring method and system based on human eye positioning
CN105072431A (en) * 2015-07-28 2015-11-18 上海玮舟微电子科技有限公司 Glasses-free 3D playing method and glasses-free 3D playing system based on human eye tracking
CN105260724A (en) * 2015-10-22 2016-01-20 四川膨旭科技有限公司 System for identifying attention of eyes of vehicle owner during travelling of vehicle
CN106548600A (en) * 2016-12-06 2017-03-29 张家港全智电子科技有限公司 Real-time pupil tracing system and fatigue state monitoring method
CN108282650A (en) * 2018-02-12 2018-07-13 深圳超多维科技有限公司 A kind of Nakedness-yet stereoscopic display method, device, system and storage medium
CN109040736A (en) * 2018-08-08 2018-12-18 上海玮舟微电子科技有限公司 A kind of scaling method, device, equipment and the storage medium of eye space position
JP2018205819A (en) * 2017-05-30 2018-12-27 富士通株式会社 Gazing position detection computer program, gazing position detection device, and gazing position detection method
WO2019029195A1 (en) * 2017-08-10 2019-02-14 北京市商汤科技开发有限公司 Driving state monitoring method and device, driver monitoring system, and vehicle
CN109429060A (en) * 2017-07-07 2019-03-05 京东方科技集团股份有限公司 Interpupillary distance measurement method, wearable eye equipment and storage medium
WO2020062960A1 (en) * 2018-09-29 2020-04-02 北京市商汤科技开发有限公司 Neural network training method and apparatus, gaze tracking method and apparatus, and electronic device
CN111028205A (en) * 2019-11-21 2020-04-17 佛山科学技术学院 Eye pupil positioning method and device based on binocular ranging

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070171369A1 (en) * 2006-01-16 2007-07-26 Sensomotoric Instruments Gmbh Method of determining the spatial relationship of an eye of a person with respect to a camera device
CN103793719A (en) * 2014-01-26 2014-05-14 深圳大学 Monocular distance-measuring method and system based on human eye positioning
CN105072431A (en) * 2015-07-28 2015-11-18 上海玮舟微电子科技有限公司 Glasses-free 3D playing method and glasses-free 3D playing system based on human eye tracking
CN105260724A (en) * 2015-10-22 2016-01-20 四川膨旭科技有限公司 System for identifying attention of eyes of vehicle owner during travelling of vehicle
CN106548600A (en) * 2016-12-06 2017-03-29 张家港全智电子科技有限公司 Real-time pupil tracing system and fatigue state monitoring method
JP2018205819A (en) * 2017-05-30 2018-12-27 富士通株式会社 Gazing position detection computer program, gazing position detection device, and gazing position detection method
CN109429060A (en) * 2017-07-07 2019-03-05 京东方科技集团股份有限公司 Interpupillary distance measurement method, wearable eye equipment and storage medium
WO2019029195A1 (en) * 2017-08-10 2019-02-14 北京市商汤科技开发有限公司 Driving state monitoring method and device, driver monitoring system, and vehicle
CN108282650A (en) * 2018-02-12 2018-07-13 深圳超多维科技有限公司 A kind of Nakedness-yet stereoscopic display method, device, system and storage medium
CN109040736A (en) * 2018-08-08 2018-12-18 上海玮舟微电子科技有限公司 A kind of scaling method, device, equipment and the storage medium of eye space position
WO2020062960A1 (en) * 2018-09-29 2020-04-02 北京市商汤科技开发有限公司 Neural network training method and apparatus, gaze tracking method and apparatus, and electronic device
CN111028205A (en) * 2019-11-21 2020-04-17 佛山科学技术学院 Eye pupil positioning method and device based on binocular ranging

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
MUHAMMAD BILAL AHMAD ET AL.: "i−Riter: Machine Learning Based Novel Eye Tracking and Calibration", 2018 IEEE INTERNATIONAL INSTRUMENTATION AND MEASUREMENT TECHNOLOGY CONFERENCE (I2MTC) *
张明恒; 王荣本; 郭烈: "Eye localization in driver images based on gray-level projection", 交通与计算机, no. 04 *
李贤辉; 高盈; 钱恭斌: "Monocular distance measurement system based on pupil localization", 智能计算机与应用, no. 02 *
葛如海; 符鸿玉; 符凯; 金桥: "Driver eye localization based on infrared difference and integral projection", 激光技术, no. 06 *
项奕阳: "基于图像处理的上课疲劳检测", 电子测试, vol. 419, no. 14 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022094787A1 (en) * 2020-11-04 2022-05-12 深圳市大疆创新科技有限公司 Driver data processing system and driver data acquisition method
CN113627267A (en) * 2021-07-15 2021-11-09 中汽创智科技有限公司 Sight line detection method, device, equipment and medium
CN113627355A (en) * 2021-08-13 2021-11-09 合众新能源汽车有限公司 Distance measurement method, device and computer readable medium for yaw rotating human face
CN114067420A (en) * 2022-01-07 2022-02-18 深圳佑驾创新科技有限公司 Sight line measuring method and device based on monocular camera
CN114608521A (en) * 2022-03-17 2022-06-10 北京市商汤科技开发有限公司 Monocular distance measuring method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111860292A (en) Monocular camera-based human eye positioning method, device and equipment
CN111854620B (en) Monocular camera-based actual pupil distance measuring method, device and equipment
EP3033999B1 (en) Apparatus and method for determining the state of a driver
CN107004275B (en) Method and system for determining spatial coordinates of a 3D reconstruction of at least a part of a physical object
JP5230748B2 (en) Gaze direction determination device and gaze direction determination method
CN107273846B (en) Human body shape parameter determination method and device
CN109446892B (en) Human eye attention positioning method and system based on deep neural network
CN106598221A (en) Eye key point detection-based 3D sight line direction estimation method
KR101769177B1 (en) Apparatus and method for eye tracking
WO2019137215A1 (en) Head pose and distraction estimation
JP2013024662A (en) Three-dimensional range measurement system, three-dimensional range measurement program and recording medium
CN113366491B (en) Eyeball tracking method, device and storage medium
JP6897082B2 (en) Computer program for face orientation estimation, face orientation estimation device and face orientation estimation method
US11074431B2 (en) Facial recognition device
WO2023272453A1 (en) Gaze calibration method and apparatus, device, computer-readable storage medium, system, and vehicle
CN114007054B (en) Method and device for correcting projection of vehicle-mounted screen picture
JP2021531601A (en) Neural network training, line-of-sight detection methods and devices, and electronic devices
JP2022502757A (en) Driver attention state estimation
JP2019185469A (en) Image analysis device, method, and program
JP2018128749A (en) Sight line measurement device, sight line measurement method and sight line measurement program
JP6708152B2 (en) Driver state estimating device and driver state estimating method
WO2020237940A1 (en) Fatigue detection method and device based on human eye state identification
JP2018101212A (en) On-vehicle device and method for calculating degree of face directed to front side
CN109415020B (en) Luminance control device, luminance control system and luminance control method
JP2018101211A (en) On-vehicle device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination