CN113569653A - Three-dimensional head posture estimation algorithm based on facial feature information - Google Patents

Three-dimensional head posture estimation algorithm based on facial feature information

Info

Publication number
CN113569653A
CN113569653A (application CN202110741653.5A)
Authority
CN
China
Prior art keywords
head
dimensional
parameters
model
estimation algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110741653.5A
Other languages
Chinese (zh)
Inventor
王臣豪
孙萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Chunjian Electronic Technology Co ltd
Original Assignee
Ningbo Chunjian Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Chunjian Electronic Technology Co ltd filed Critical Ningbo Chunjian Electronic Technology Co ltd
Priority to CN202110741653.5A
Publication of CN113569653A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

The invention discloses a three-dimensional head posture estimation algorithm based on facial feature information, which comprises the following steps: acquiring camera calibration parameters and the three-dimensional coordinates of a plurality of model points selected in the face area of a head model; acquiring a face picture of the posture to be estimated through a camera, extracting the feature points corresponding to the model points, and obtaining their two-dimensional pixel coordinates; calculating head posture estimation parameters from the camera calibration parameters, the three-dimensional coordinates and the two-dimensional pixel coordinates; and performing reprojection verification on the head posture estimation parameters, and, if the reprojection error is larger than a set value, correcting the head posture estimation parameters through extended Kalman filtering so that the reprojection error of the corrected head posture parameters is less than or equal to the set value. With this arrangement, the extended Kalman filtering correction keeps the reprojection error of the head posture estimation parameters small, and misjudgment of head posture estimation by algorithms based on two-dimensional images is avoided.

Description

Three-dimensional head posture estimation algorithm based on facial feature information
Technical Field
The invention relates to the technical field of face recognition and posture analysis, in particular to a three-dimensional head posture estimation algorithm based on facial feature information.
Background
Head pose estimation (HPE), at the intersection of face recognition and posture analysis, is widely used in real life. Among road traffic accidents, those caused by driver inattention account for a significant proportion of the total. The driver's attention state can be judged through a head posture estimation algorithm so as to reduce the probability of road safety accidents.
Estimation of the head posture is accomplished through the change in position of the face in three-dimensional space, and estimation algorithms typically include model-based algorithms and image-appearance-based algorithms. Appearance-based algorithms can be roughly classified into three categories: algorithms based on three-dimensional color images, algorithms based on image depth information, and algorithms based on two-dimensional images. A three-dimensional image contains more information, so the data volume to be processed is larger and the requirement on equipment is higher; algorithms based on image depth information require a depth sensor to acquire the depth information, increasing the equipment cost. Algorithms based on two-dimensional color images have low equipment cost, but when the head turns left or right or the elevation angle is too large, facial feature points are easily lost and the three-dimensional head posture is easily misjudged.
Therefore, how to overcome the high equipment cost of algorithms based on three-dimensional images and depth information, and the misjudgment of head posture estimation by algorithms based on two-dimensional images, is an important technical problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a three-dimensional head posture estimation algorithm based on facial feature information that solves the problems in the prior art that algorithms based on three-dimensional images and depth information require costly equipment, while algorithms based on two-dimensional images misjudge the head posture. The technical effects that can be produced by the preferred technical schemes provided by the invention are described in detail below.
In order to achieve the purpose, the invention provides the following technical scheme:
the invention provides a three-dimensional head posture estimation algorithm based on facial feature information, which comprises the following steps:
acquiring camera calibration parameters and three-dimensional coordinates of a plurality of model points selected in a face area of a head model;
acquiring a face image of a pose to be estimated through a camera, extracting feature points corresponding to model points in the face image, and obtaining two-dimensional pixel coordinates of each feature point in a pixel coordinate system;
calculating a head posture estimation parameter according to the camera calibration parameter, the three-dimensional coordinate and the two-dimensional pixel coordinate;
and carrying out re-projection checking calculation on the head attitude estimation parameters, judging whether the re-projection error is greater than a set value, and if the re-projection error is greater than the set value, correcting the head attitude estimation parameters through extended Kalman filtering so as to enable the re-projection error of the corrected head attitude parameters to be less than or equal to the set value.
Preferably, the head posture estimation parameters include Euler angles and a translation vector; the Euler angles include a roll angle α, a pitch angle β and a yaw angle γ of the head, and the translation vector of the head is (tx, ty, tz).
Preferably, the system state equations used by the extended Kalman filter are:

θ_k = f(θ_{k-1}) + S_k
z_k = h(θ_k) + V_k    (1)

The correction step comprises: substituting the head posture prediction parameter into formula (2), substituting the obtained result into formula (3), and iterating formula (2) and formula (3) to obtain the corrected head posture parameters, where the prediction equations are

x_{k|k-1} = f(x_{k-1|k-1})
P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q    (2)

and the update equations are

K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R)^{-1}
x_{k|k} = x_{k|k-1} + K_k (z_k - h(x_{k|k-1}))
P_{k|k} = (I - K_k H_k) P_{k|k-1}    (3)

Here the state variable is x_k = [α β γ tx ty tz]^T, S_k represents the state noise and V_k the measurement noise (Q and R being their respective covariances), and the Jacobian matrices in formula (2) and formula (3), obtained from formula (1), are

F_k = ∂f/∂θ evaluated at θ_{k-1},    H_k = ∂h/∂θ evaluated at θ_k.

The corrected head posture parameters are obtained according to these correction steps and the extended Kalman filtering.
Preferably, a head coordinate system is constructed, the face of the head model is scanned, and three-dimensional coordinates of each model point in the head coordinate system are obtained.
Preferably, the origin of the head coordinate system is the nose tip point of the face region, the positive direction of the Y axis is from the chin point to the eyebrow center point, the positive direction of the X axis is from the right eye to the left eye, and the positive direction of the Z axis is determined according to the right-hand rule.
Preferably, the plurality of model points selected in the face region of the head model are respectively P0, P1, …, Pn, and the two-dimensional pixel coordinates of the feature points corresponding to each model point on the face image are Q0, Q1, …, Qn; the head posture estimation parameters are calculated from Px and Qx by using a PnP algorithm, where x = 0, 1, …, n.
Preferably, the selected model points include nasal cusp point, chin point, left eye outer canthus point, right eye outer canthus point, left mouth corner point and right mouth corner point.
Preferably, feature point extraction is performed on the face picture using the DLIB library, and the two-dimensional pixel coordinates of the feature points are obtained.
Preferably, camera calibration parameters are obtained through a Matlab camera calibration tool box according to a checkerboard calibration method, and the camera calibration parameters comprise camera internal parameters and distortion coefficients.
In the technical scheme provided by the invention, the three-dimensional head posture estimation algorithm based on facial feature information first acquires the camera calibration parameters and the three-dimensional coordinates of a plurality of model points selected in the face area of a head model. A face picture of the posture to be estimated is acquired through a camera, and the two-dimensional pixel coordinates of the feature points corresponding to the model points are obtained. The change of the head posture recovered from the face image relative to the reference posture of the head model (namely the frontal head posture) is then calculated from the two-dimensional pixel coordinates, the three-dimensional coordinates and the camera calibration parameters, yielding the head posture estimation parameters. Reprojection verification is performed on the head posture estimation parameters; when the reprojection error is greater than the set value, the head posture estimation parameters are corrected through extended Kalman filtering until the reprojection error of the corrected result is less than or equal to the set value, and the corrected result is output to obtain relatively accurate head posture parameters. With this arrangement, the algorithm estimates the head posture from two-dimensional image information, so the required equipment cost is low; the extended Kalman filtering corrects cases with large reprojection error, solving the problem that algorithms based on two-dimensional images misjudge the head posture estimation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flow chart of a three-dimensional head pose estimation algorithm based on facial feature information in an embodiment of the present invention;
FIG. 2 is a flow chart of extended Kalman filter correction in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
The present invention provides a three-dimensional head pose estimation algorithm based on facial feature information, which solves the problem that erroneous judgment is easily generated in head pose estimation on the premise of ensuring that the algorithm based on two-dimensional image information is less in investment.
Hereinafter, embodiments will be described with reference to the drawings. The embodiments described below do not limit the contents of the invention described in the claims. The entire contents of the configurations shown in the following embodiments are not limited to those required as solutions of the inventions described in the claims.
Referring to fig. 1 and 2, the three-dimensional head pose estimation algorithm based on facial feature information according to the present invention includes the following steps:
s01: and acquiring camera calibration parameters and three-dimensional coordinates of a plurality of model points selected in the face area of the head model.
The camera calibration parameters are obtained through the Matlab camera calibration toolbox according to a checkerboard calibration method. The camera calibration parameters comprise the camera internal parameters and the distortion coefficients; the internal parameters obtained after camera calibration are shown in Table 1, and the distortion coefficients are shown in Table 2.
TABLE 1 internal reference of camera
TABLE 2 distortion factor
To obtain the three-dimensional coordinates of the selected model points, the head model is made to face the scanner, a plurality of model points are selected in its face area, and the three-dimensional coordinates corresponding to the selected model points are determined. The specific steps are as follows:
A head coordinate system is constructed: the origin is the nose tip point of the face area of the head model, the direction from the chin point to the eyebrow-center point is the positive Y axis, the direction from the right eye to the left eye is the positive X axis, and the positive Z axis is determined according to the right-hand rule. A plurality of model points are then selected in the face area of the head model; in a specific embodiment, six model points are selected, namely the nose tip point Q0, the chin point Q1, the left-eye outer canthus point Q2, the right-eye outer canthus point Q3, the left mouth corner point Q4 and the right mouth corner point Q5. An F6SR scanner is used to scan and acquire point cloud data of the face area of the head model, the three-dimensional coordinates are obtained by active triangulation through the echo software, and the third-dimensional information is obtained from stereo parallax. The three-dimensional coordinates of the points Q1, Q2, Q3, Q4 and Q5 are determined from their distances to Q0 along the three axes; the specific data are shown in Table 3.
Table 3 selected model points and their corresponding three-dimensional coordinates
Model points Three dimensional coordinates
Q0 (0.0,0.0,0.0)
Q1 (0.0,-56.9,-14.6)
Q2 (43.5,21.3,-32.5)
Q3 (-43.5,21.3,-32.5)
Q4 (17.75,-30.1,-18.5)
Q5 (-17.75,-30.1,-18.5)
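For use in the later steps, Table 3 can be captured directly as an array (a sketch; the name `MODEL_POINTS` is illustrative):

```python
import numpy as np

# Three-dimensional coordinates of the six model points from Table 3, in the
# head coordinate system: origin at the nose tip, +Y from chin toward the
# brow center, +X from the right eye toward the left eye, +Z by right-hand rule.
MODEL_POINTS = np.array([
    [  0.0,    0.0,    0.0 ],  # Q0: nose tip (origin)
    [  0.0,  -56.9,  -14.6 ],  # Q1: chin
    [ 43.5,   21.3,  -32.5 ],  # Q2: left-eye outer canthus
    [-43.5,   21.3,  -32.5 ],  # Q3: right-eye outer canthus
    [ 17.75, -30.1,  -18.5 ],  # Q4: left mouth corner
    [-17.75, -30.1,  -18.5 ],  # Q5: right mouth corner
], dtype=np.float64)
```

Note the left/right point pairs mirror each other across the YZ plane, consistent with the X axis running from the right eye to the left eye.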
S02: the method comprises the steps of collecting a face picture of a pose to be estimated through a calibrated camera, extracting feature points of the face picture, and obtaining two-dimensional pixel coordinates of each feature point under a pixel coordinate system. In a specific embodiment, a total of six feature points are extracted, and correspond to the six model points in step S01 one-to-one.
The DLIB library is used to extract the 68 facial feature points of the head model, and the feature point serial numbers corresponding to Q0, Q1, …, Q5 are shown in Table 4.
Table 4 sequence number table of feature points corresponding to selected model points
Model points Serial number of feature point
Q0 31
Q1 9
Q2 37
Q3 46
Q4 49
Q5 55
The 68 facial feature points are extracted from the face picture based on the DLIB library, and the two-dimensional pixel coordinates P0, P1, …, P5 of the feature points with serial numbers 31, 9, 37, 46, 49 and 55 are obtained in the pixel coordinate system. Since the head postures in the face pictures collected by the camera differ, the two-dimensional pixel coordinates of the obtained feature points differ accordingly.
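A sketch of this extraction with the DLIB library (an assumption: dlib's standard 68-point shape predictor file is available; note the patent's serial numbers are 1-based, while dlib indexes landmarks from 0):

```python
import numpy as np

# Patent serial numbers (1-based) of the six chosen landmarks:
# nose tip, chin, left-eye outer canthus, right-eye outer canthus,
# left mouth corner, right mouth corner.
SERIALS = [31, 9, 37, 46, 49, 55]
DLIB_INDICES = [s - 1 for s in SERIALS]  # dlib's 68-point model is 0-based

def extract_image_points(gray, detector, predictor):
    """Return the 6x2 array of pixel coordinates P0..P5 for the first face.

    `detector` is dlib.get_frontal_face_detector() and `predictor` is
    dlib.shape_predictor('shape_predictor_68_face_landmarks.dat').
    """
    face = detector(gray, 1)[0]          # first detected face
    shape = predictor(gray, face)        # 68 landmark positions
    return np.array([[shape.part(i).x, shape.part(i).y] for i in DLIB_INDICES],
                    dtype=np.float64)
```

The detector and predictor are passed in so the index mapping itself stays testable without dlib installed.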
S03: and calculating head posture estimation parameters according to the camera calibration parameters and the three-dimensional coordinates and the two-dimensional pixel coordinates of the plurality of model points selected in the face area of the head model. That is, three-dimensional coordinates (P) of a plurality of model points selected in a face region of the head model based on camera calibration parameters0,P1,……Pn) And two-dimensional pixel coordinates (Q) corresponding to each model point on the face image0,Q1,……Qn) A 1 is to Px、QxAnd substituting the camera calibration parameters into a PNP system frame, and calculating head posture estimation parameters by using a PNP algorithm, wherein x is 0, 1 and … … n.
Specifically, the head posture estimation parameters comprise Euler angles and a translation vector. The Euler angles comprise three angular parameters describing the direction of head motion in three-dimensional space: the roll angle α, the pitch angle β and the yaw angle γ. The translation vector of the head is (tx, ty, tz), which can be understood as the position of the origin of the object coordinate system in the camera coordinate system; in a specific embodiment, since the origin of the head coordinate system is the nose tip of the head model, the translation vector can be taken as the position of the nose tip in the camera coordinate system.
In a specific embodiment, a least-squares iteration method is selected for the PnP algorithm. A series of nonlinear operations are performed on the input face image: starting from a given initial value, the current optimization variable is updated iteratively so that the objective function decreases, until the increment at some step becomes very small and the objective can no longer be reduced. At that point the algorithm has converged and the search for the minimum is complete.
S04: carrying out re-projection checking calculation on the head attitude estimation parameters, and judging whether a re-projection error is larger than a set value or not; if the reprojection error is not greater than the set value, go to step S06; if the reprojection error is greater than the predetermined value, step S05 is executed.
In a specific embodiment, multiple groups of head posture estimation parameters are selected for reprojection verification, and the reprojection error set value is set to 1 based on the reprojection results. The following is observed: when tz in the head translation vector is negative and its absolute value is in the range [300, 1000], the reprojection error is small and meets the requirement; when tz does not satisfy this condition, the reprojection error is large and does not meet the requirement.
In the first reprojection verification example, the head posture estimation parameters are: α = 5.00213, β = 10.0019, γ = 4.99998, tx = 0.000220552, ty = 0.00127817, tz = -499.987. Here tz is negative and its absolute value is in the range [300, 1000]; the reprojection result shows that all six projection points correspond to the six model points selected in the face area of the head model, and the reprojection error is small.
In the second reprojection verification example, the head posture estimation parameters are: α = 7.65338, β = 10.8251, γ = 174.688, tx = 0.568548, ty = 0.0725333, tz = 563.427. The absolute value of tz is in the range [300, 1000], but tz is not negative; the reprojection result shows that the six projection points cannot coincide with the six model points selected in the face area of the head model, and the reprojection error increases.
In the third reprojection verification example, the head posture estimation parameters are: α = 10.4804, β = 19.1756, γ = 11.2989, tx = 2.26937e+07, ty = 1.56113e+07, tz = -1.39073e+09. Here tz is negative, but its absolute value is not in the range [300, 1000]; the reprojection result shows that the six projection points cluster together and cannot coincide with the six model points selected in the face area of the head model, and the reprojection error is large.
S05: and (5) correcting the head attitude estimation parameters obtained in the step (S03) through the extended Kalman filtering to obtain corrected head attitude parameters, and then executing the step (S04). That is, after the correction, the reprojection check calculation needs to be performed on the corrected result, and when the reprojection error of the corrected result is smaller than the set value, the correction process is finished, and the corrected result is output.
In a specific embodiment, the system state equations corrected by the extended Kalman filter are:

θ_k = f(θ_{k-1}) + S_k
z_k = h(θ_k) + V_k    (1)

The specific correction steps are: substituting the head posture prediction parameter into formula (2), substituting the obtained result into formula (3), and iterating formula (2) and formula (3) to obtain the corrected head posture parameters, where the prediction equations are

x_{k|k-1} = f(x_{k-1|k-1})
P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q    (2)

and the update equations are

K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R)^{-1}
x_{k|k} = x_{k|k-1} + K_k (z_k - h(x_{k|k-1}))
P_{k|k} = (I - K_k H_k) P_{k|k-1}    (3)

Here the state variable is x_k = [α β γ tx ty tz]^T, S_k represents the state noise and V_k the measurement noise (Q and R being their respective covariances), and the Jacobian matrices in formula (2) and formula (3), obtained from formula (1), are

F_k = ∂f/∂θ evaluated at θ_{k-1},    H_k = ∂h/∂θ evaluated at θ_k.

The corrected head posture parameters are obtained according to these correction steps and the extended Kalman filtering.
In a specific embodiment, the head posture prediction parameter is the head posture estimation parameter obtained in step S03; in other embodiments, it may also be a head posture estimation parameter input to the system or calculated by another reasonable algorithm.
S06: the head posture estimation parameters obtained in step S03 are output and the process ends.
When the head posture is estimated by the three-dimensional head posture estimation algorithm based on facial feature information, the camera calibration parameters and the three-dimensional coordinates of a plurality of model points selected in the face area of the head model are first obtained, then the two-dimensional pixel coordinates are extracted, and the head posture estimation parameters are obtained through the PnP algorithm. After the head posture estimation parameters are obtained, reprojection verification is performed; when the reprojection error is large, the extended Kalman filter (EKF) corrects the head posture estimation parameters to finally obtain a more accurate result, thereby avoiding misjudgment of head posture estimation by algorithms based on two-dimensional images. The algorithm can be used to judge the driver's attention state; when the driver is judged to be inattentive, prompt information can be issued to reduce the occurrence of road traffic accidents. Of course, the three-dimensional head posture estimation algorithm can also be applied to other processes requiring face recognition and posture estimation.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments. The multiple schemes provided by the invention comprise basic schemes, are independent from each other and are not restricted with each other, but can be combined with each other under the condition of no conflict, so that multiple effects are realized together.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (9)

1. A three-dimensional head pose estimation algorithm based on facial feature information, comprising the steps of:
acquiring camera calibration parameters and three-dimensional coordinates of a plurality of model points selected in a face area of a head model;
acquiring a face image of a pose to be estimated through a camera, extracting feature points corresponding to model points in the face image, and obtaining two-dimensional pixel coordinates of each feature point in a pixel coordinate system;
calculating a head posture estimation parameter according to the camera calibration parameter, the three-dimensional coordinate and the two-dimensional pixel coordinate;
and carrying out re-projection checking calculation on the head attitude estimation parameters, judging whether the re-projection error is greater than a set value, and if the re-projection error is greater than the set value, correcting the head attitude estimation parameters through extended Kalman filtering so as to enable the re-projection error of the corrected head attitude parameters to be less than or equal to the set value.
2. The three-dimensional head pose estimation algorithm of claim 1, wherein the head pose estimation parameters comprise Euler angles and a translation vector, the Euler angles including a roll angle α, a pitch angle β and a yaw angle γ of the head, and the translation vector of the head being (tx, ty, tz).
3. The three-dimensional head pose estimation algorithm of claim 2, wherein the system state equations used by the extended Kalman filter are:

θ_k = f(θ_{k-1}) + S_k
z_k = h(θ_k) + V_k    (1)

the correction step comprising: substituting the head posture prediction parameter into formula (2), substituting the obtained result into formula (3), and iterating formula (2) and formula (3) to obtain the corrected head posture parameters, where the prediction equations are

x_{k|k-1} = f(x_{k-1|k-1})
P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q    (2)

and the update equations are

K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R)^{-1}
x_{k|k} = x_{k|k-1} + K_k (z_k - h(x_{k|k-1}))
P_{k|k} = (I - K_k H_k) P_{k|k-1}    (3)

the state variable being x_k = [α β γ tx ty tz]^T, S_k representing the state noise and V_k the measurement noise (Q and R being their respective covariances), the Jacobian matrices in formula (2) and formula (3), obtained from formula (1), being

F_k = ∂f/∂θ evaluated at θ_{k-1},    H_k = ∂h/∂θ evaluated at θ_k,

and the corrected head posture parameters being obtained according to the correction steps and the extended Kalman filtering.
4. The three-dimensional head pose estimation algorithm of claim 1, wherein a head coordinate system is constructed and a face scan of the head model is performed to obtain three-dimensional coordinates of each model point in the head coordinate system.
5. The three-dimensional head pose estimation algorithm of claim 4, wherein the origin of the head coordinate system is the nose tip point of the face area, the positive direction of the Y axis is from the chin point to the eyebrow center point, the positive direction of the X axis is from the right eye to the left eye, and the positive direction of the Z axis is determined according to the right hand rule.
6. The three-dimensional head pose estimation algorithm of claim 1, wherein the plurality of model points selected in the face region of the head model are respectively P0, P1, …, Pn, and the two-dimensional pixel coordinates of the feature points corresponding to each model point on the face image are Q0, Q1, …, Qn; the head posture estimation parameters are calculated from Px and Qx by using a PnP algorithm, where x = 0, 1, …, n.
7. The three-dimensional head pose estimation algorithm of claim 1, wherein the selected model points comprise nasal cusp, chin, left eye outer canthus, right eye outer canthus, left mouth corner and right mouth corner.
8. The three-dimensional head pose estimation algorithm of claim 1, wherein feature point extraction is performed on the face picture according to DLIB library and two-dimensional pixel coordinates of the feature points are obtained.
9. The three-dimensional head pose estimation algorithm of claim 2, wherein camera calibration parameters are obtained by Matlab camera calibration toolkit according to a checkerboard calibration method, the camera calibration parameters comprising camera intrinsic parameters and distortion coefficients.
CN202110741653.5A 2021-06-30 2021-06-30 Three-dimensional head posture estimation algorithm based on facial feature information Pending CN113569653A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110741653.5A CN113569653A (en) 2021-06-30 2021-06-30 Three-dimensional head posture estimation algorithm based on facial feature information


Publications (1)

Publication Number Publication Date
CN113569653A true CN113569653A (en) 2021-10-29

Family

ID=78163280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110741653.5A Pending CN113569653A (en) 2021-06-30 2021-06-30 Three-dimensional head posture estimation algorithm based on facial feature information

Country Status (1)

Country Link
CN (1) CN113569653A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105203098A (en) * 2015-10-13 2015-12-30 上海华测导航技术股份有限公司 Whole attitude angle updating method applied to agricultural machinery and based on nine-axis MEMS (micro-electromechanical system) sensor
CN106289247A (en) * 2016-07-26 2017-01-04 北京长城电子装备有限责任公司 Indoor positioning device based on inertial sensor
CN106292288A (en) * 2016-09-22 2017-01-04 同济大学 Model parameter correction method based on Policy-Gradient learning method and application thereof
CN106500695A (en) * 2017-01-05 2017-03-15 大连理工大学 A kind of human posture recognition method based on adaptive extended kalman filtering
CN106767900A (en) * 2016-11-23 2017-05-31 东南大学 A kind of online calibration method of the optical fibre SINS system based on integrated navigation technology
CN109583338A (en) * 2018-11-19 2019-04-05 山东派蒙机电技术有限公司 Driver Vision decentralized detection method based on depth integration neural network
CN110470297A (en) * 2019-03-11 2019-11-19 北京空间飞行器总体设计部 A kind of attitude motion of space non-cooperative target and inertial parameter estimation method
US20200050263A1 (en) * 2018-08-09 2020-02-13 Acer Incorporated Electronic apparatus operated by head movement and operation method thereof
CN110929642A (en) * 2019-11-21 2020-03-27 扬州市职业大学(扬州市广播电视大学) Real-time estimation method for human face posture based on two-dimensional feature points


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
MOHAMMAD AMIN MEHRALIAN et al.: "EKFPnP: Extended Kalman Filter for Camera Pose Estimation in a Sequence of Images", arXiv, 22 April 2020 (2020-04-22), pages 1-13 *
WANG YU et al.: "Head Pose Estimation Based on Head Tracking and the Kalman Filter", Physics Procedia, vol. 22, 27 December 2011 (2011-12-27), pages 420-427, XP028354990, DOI: 10.1016/j.phpro.2011.11.066 *
WANG DI: "Research and Application of a Fatigue Detection Algorithm Based on Eye State", China Master's Theses Full-text Database, Information Science and Technology, no. 7, 15 July 2020 (2020-07-15), pages 138-911 *
XU QING: "Design and Implementation of a Monocular-Vision-Based Head Pose Estimation System", China Master's Theses Full-text Database, Information Science and Technology, no. 4, 15 April 2019 (2019-04-15), pages 138-900 *
ZHAO RONGYONG, ZHANG HAO et al.: "UWB Positioning Technology and Applications in Intelligent Manufacturing", China Machine Press, 31 January 2020, pages 120-121 *

Similar Documents

Publication Publication Date Title
JP4728432B2 (en) Face posture estimation device, face posture estimation method, and face posture estimation program
CN106940704B (en) Positioning method and device based on grid map
US9881203B2 (en) Image processing device, image processing method, and program
US7894636B2 (en) Apparatus and method for performing facial recognition from arbitrary viewing angles by texturing a 3D model
EP1677250B1 (en) Image collation system and image collation method
CN111274943B (en) Detection method, detection device, electronic equipment and storage medium
US7027618B2 (en) Head motion estimation from four feature points
CN111652086B (en) Face living body detection method and device, electronic equipment and storage medium
CN110363817B (en) Target pose estimation method, electronic device, and medium
CN110688947B (en) Method for synchronously realizing human face three-dimensional point cloud feature point positioning and human face segmentation
CN105844276A (en) Face posture correction method and face posture correction device
KR101759188B1 (en) the automatic 3D modeliing method using 2D facial image
CN109086727B (en) Method and device for determining motion angle of human head and electronic equipment
CN110443245B (en) License plate region positioning method, device and equipment in non-limited scene
EP3506149A1 (en) Method, system and computer program product for eye gaze direction estimation
CN112200056B (en) Face living body detection method and device, electronic equipment and storage medium
CN111815768B (en) Three-dimensional face reconstruction method and device
CN110363052B (en) Method and device for determining human face pose in image and computer equipment
CN111854620A (en) Monocular camera-based actual pupil distance measuring method, device and equipment
CN111680573B (en) Face recognition method, device, electronic equipment and storage medium
CN111508025A (en) Three-dimensional position estimation device and program
CN112084453A (en) Three-dimensional virtual display system, method, computer equipment, terminal and storage medium
CN112101247A (en) Face pose estimation method, device, equipment and storage medium
CN112507766B (en) Face image extraction method, storage medium and terminal equipment
CN113569653A (en) Three-dimensional head posture estimation algorithm based on facial feature information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination