CN114399800A - Human face posture estimation method and device

Human face posture estimation method and device

Info

Publication number
CN114399800A
Authority
CN
China
Prior art keywords
obtaining
eyes
face
line segment
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111444206.XA
Other languages
Chinese (zh)
Inventor
贺克赛
程新景
杨睿刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Network Technology Shanghai Co Ltd
Original Assignee
International Network Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Network Technology Shanghai Co Ltd filed Critical International Network Technology Shanghai Co Ltd
Priority to CN202111444206.XA
Publication of CN114399800A

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a face pose estimation method and device, wherein the method comprises: acquiring a face image and its corresponding key point information; obtaining, based on the key point information, the projection distance of the nose relative to the midpoint of the two eyes and the distance between the two eyes; and obtaining the face pose based on the projection distance and the inter-ocular distance by using a mapping relation fitted in advance from empirical parameters. According to the invention, by acquiring the face image and its corresponding key point information, the face pose can be conveniently estimated from the key point information and the pre-fitted mapping relation; predicting the pose with this more accurate mapping relation improves the accuracy and reliability of pose estimation.

Description

Human face posture estimation method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a human face posture estimation method and device.
Background
With the continuous development of computer vision, face recognition algorithms have emerged one after another, and deep-learning-based face recognition achieves very high accuracy under ideal experimental conditions. In real scenes, however, different face poses, such as left-right (yaw) rotation, pitching, and in-plane rotation, cause loss of face information, posing a serious challenge to the recognition effect. Because the information lost varies with the face pose, the similarity between the side faces of different people can be higher than that between the side face and the front face of the same person; accurately estimating the face pose is therefore a prerequisite for improving face recognition.
Currently, face pose estimation generally follows one of two methods. The first projects the face image onto each principal component analysis (PCA) pose space and takes the pose of the closest projection-coefficient space as the face pose of the image. The second defines the geometric structure of the face key points in advance and, through key point detection and a model of the three-dimensional (3D) to two-dimensional (2D) mapping relation, directly regresses the three angle values of the face pose: the pitch angle (Pitch), the yaw angle (Yaw), and the roll angle (Roll).
However, the first method depends directly on the whole pixel information of the image, so its computational dimension is high, its pose space is discontinuous, and it requires a large number of face image samples in different poses. The second method depends mainly on key points and a 3D face model, but key points at large angles cannot currently be detected reliably, which limits the method; if the key points are predicted incorrectly, the pose value finally estimated by fitting the key points to the 3D deformable face model has a very large error.
Disclosure of Invention
The invention provides a face pose estimation method and device, which are used to overcome the defect in the prior art that pose estimation precision is degraded by the influence of the shooting angle, and to effectively improve the precision and efficiency of face pose estimation.
The invention provides a human face posture estimation method, which comprises the following steps: acquiring a face image and corresponding key point information thereof; based on the key point information, obtaining the projection distance of the nose relative to the midpoint of the two eyes and obtaining the distance between the two eyes; and obtaining the human face posture by utilizing a mapping relation which is obtained by fitting based on empirical parameters in advance based on the projection distance and the distance between the two eyes.
According to the face pose estimation method provided by the invention, the obtaining of the projection distance of the nose relative to the midpoint of the two eyes based on the key point information comprises the following steps: obtaining a nose coordinate and two eye coordinates based on the key point information; obtaining a midpoint coordinate based on the coordinates of the two eyes; and obtaining the projection distance of the nose relative to the midpoint according to the midpoint coordinate and the nose coordinate.
According to the face pose estimation method provided by the invention, the obtaining of the projection distance of the nose relative to the midpoint according to the midpoint coordinate and the nose coordinate comprises the following steps: obtaining a first line segment according to the midpoint coordinate and the nose coordinate; obtaining a second line segment according to the coordinates of the two eyes; obtaining an included angle between the first line segment and the second line segment according to the first line segment and the second line segment; and based on the included angle, projecting the first line segment onto the second line segment to obtain a projection distance.
According to the method for estimating the face pose provided by the invention, the obtaining of the face pose by using a mapping relation obtained by fitting based on empirical parameters in advance based on the projection distance and the distance between two eyes comprises the following steps: obtaining the specific gravity of the nose relative to the two eyes based on the projection distance and the distance between the two eyes; according to the specific gravity, obtaining angle information by utilizing a mapping relation obtained by fitting based on empirical parameters in advance; and obtaining the human face posture according to the angle information.
According to the face pose estimation method provided by the invention, the specific gravity is expressed as:
r = d_p / d_e = (|L1| · cos m) / d_e
wherein r represents the specific gravity; d_p represents the projection distance; |L1| represents the length of the first line segment formed by the nose and the midpoint of the two eyes; cos m represents the cosine of the included angle between the first line segment and the second line segment formed by the two eyes; and d_e represents the inter-ocular distance.
According to the face pose estimation method provided by the invention, before the obtaining of the face pose, the method further comprises: acquiring empirical parameters within a preset angle range, wherein the empirical parameters comprise angle empirical parameters corresponding to historical face images and key point empirical parameters corresponding to the historical face images; acquiring specific gravity empirical parameters of the nose relative to the two eyes by using the key point empirical parameters; and obtaining the mapping relation based on the specific gravity empirical parameters and the corresponding angle empirical parameters.
The invention also provides a human face posture estimation device, which comprises: the data acquisition module is used for acquiring the face image and the corresponding key point information thereof; the parameter acquisition module is used for acquiring the projection distance of the nose relative to the midpoint of the two eyes and acquiring the distance between the two eyes based on the key point information; and the posture estimation module is used for obtaining the human face posture by utilizing a mapping relation which is obtained by fitting based on empirical parameters in advance based on the projection distance and the distance between the two eyes.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the steps of the human face posture estimation method.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for face pose estimation as described in any of the above.
The present invention also provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of the method for face pose estimation as described in any of the above.
According to the face pose estimation method and device provided by the invention, the face image and its corresponding key point information are acquired, and the face pose is estimated from the key point information and the mapping relation fitted in advance; predicting the pose with this more accurate mapping relation improves the accuracy and reliability of pose estimation.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a method for estimating a face pose according to the present invention;
FIG. 2 is a schematic flow chart of obtaining the mapping relation based on empirical parameter fitting according to the present invention;
FIG. 3 is a schematic structural diagram of a face pose estimation apparatus provided by the present invention;
FIG. 4 is a schematic structural diagram of a mapping relationship obtaining module provided in the present invention;
fig. 5 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 shows a flow chart of a face pose estimation method of the present invention, which includes:
s11, acquiring a face image and corresponding key point information thereof;
s12, obtaining the projection distance of the nose relative to the midpoint of the two eyes and the distance between the two eyes based on the key point information;
and S13, obtaining the human face posture by using a mapping relation obtained in advance based on empirical parameter fitting based on the projection distance and the distance between two eyes.
It should be noted that the step labels S1N in this specification do not indicate the execution order of the face pose estimation method; the method of the present invention is described in detail below.
And step S11, acquiring the face image and the corresponding key point information.
In this embodiment, obtaining the face image and its corresponding key point information includes: acquiring a face image; and inputting the face image into a face key point detection model to obtain the key point information output by the model, wherein the face key point detection model is trained on face sample images and the face key point truth values corresponding to the face sample images. The key point information includes the coordinate information of each face key point.
In an optional embodiment, obtaining the face image and its corresponding key point information includes: acquiring a face image; and labeling key points on the face image to obtain the key point information. It should be noted that, to improve the accuracy of face pose estimation and thereby provide a large number of training samples for fatigue-driving behavior recognition, the acquired face image may be detected in advance to obtain the key point information.
In an alternative embodiment, acquiring the face image includes: acquiring a face image to be subjected to face key point detection from an electronic device or application platform applying the face key point detection method; or obtaining the face image from a terminal device connected to such an electronic device or application platform. It should be noted that the terminal device may capture a face image of a person in the recognition area through a visual sensor connected to it, and that the face image may be a single picture or a sequence of picture frames obtained by shooting, or an image frame or sequence of image frames associated with the face to be detected and extracted from a video.
In addition, the visual sensor includes at least one of a millimeter-wave radar, a laser radar, a detector, a camera, and other image capture devices; the specific type of the visual sensor is not further limited here.
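As a concrete entry point for step S11, the following is a minimal sketch; OpenCV and the detect_keypoints callable are hypothetical stand-ins, since the patent does not prescribe a specific detector or image source:

```python
import cv2  # OpenCV, assuming face images are available as files

def acquire_face_and_keypoints(image_path, detect_keypoints):
    """Load a face image and run a trained face key point detection model on it.

    detect_keypoints: any callable mapping an image to named 3D key points,
    e.g. {"nose": (x1, y1, z1), "eye_l": (x2, y2, z2), "eye_r": (x3, y3, z3)}.
    """
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    return image, detect_keypoints(image)
```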
And step S12, obtaining the projection distance of the nose relative to the midpoint of the two eyes and obtaining the distance between the two eyes based on the key point information.
Specifically, obtaining the projection distance of the nose relative to the midpoint of the two eyes based on the key point information comprises the following steps: obtaining a nose coordinate and two eye coordinates based on the key point information; obtaining a midpoint coordinate based on the coordinates of the two eyes; and obtaining the projection distance of the nose relative to the midpoint according to the midpoint coordinate and the nose coordinate.
First, based on the key point information, the nose coordinates and the two-eye coordinates are obtained. It should be noted that the acquired key point information includes the face key point coordinate information, namely the nose coordinates (x1, y1, z1) and the coordinates of the two eyes, (x2, y2, z2) and (x3, y3, z3).
Secondly, the midpoint coordinate is obtained based on the coordinates of the two eyes. Since the midpoint is the midpoint of the line segment connecting the two eyes, its coordinate can be obtained from the coordinates of the two eyes. For example, if the coordinates of the two eyes are (x2, y2, z2) and (x3, y3, z3), the midpoint coordinate is ((x2+x3)/2, (y2+y3)/2, (z2+z3)/2).
And finally, obtaining the projection distance of the nose relative to the midpoint according to the midpoint coordinate and the nose coordinate. In this embodiment, obtaining the projection distance of the nose relative to the midpoint according to the midpoint coordinate and the nose coordinate includes: obtaining a first line segment according to the midpoint coordinate and the nose coordinate; obtaining a second line segment according to the coordinates of the two eyes; obtaining an included angle between the first line segment and the second line segment according to the first line segment and the second line segment; and projecting the first line segment to the second line segment based on the included angle to obtain a projection distance.
For example, given the midpoint coordinate ((x2+x3)/2, (y2+y3)/2, (z2+z3)/2) and the nose coordinate (x1, y1, z1), the first line segment L1 and its length can be obtained; from the two eye coordinates (x2, y2, z2) and (x3, y3, z3), the second line segment L2 can be obtained. According to the first line segment L1 and the second line segment L2, the included angle m between them is obtained; based on the included angle m, the first line segment is projected onto the second line segment, and the projection distance d_p is obtained from the included angle m and the length of the first line segment.
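As a concrete illustration of this step, the following is a minimal sketch of computing the midpoint, the two line segments, the included angle m, and the projection distance d_p; NumPy and all names are illustrative choices, not prescribed by the patent, and the 3D key point coordinates are assumed to be already available:

```python
import numpy as np

def projection_distance(nose, eye_l, eye_r):
    """Project the midpoint-to-nose segment onto the inter-ocular segment.

    nose, eye_l, eye_r: 3D key point coordinates (x, y, z).
    Returns (d_p, cos_m, d_e): the projection distance, the cosine of the
    included angle m, and the inter-ocular distance.
    """
    nose, eye_l, eye_r = (np.asarray(p, dtype=float) for p in (nose, eye_l, eye_r))
    midpoint = (eye_l + eye_r) / 2.0     # midpoint of the two eyes
    seg1 = nose - midpoint               # first line segment L1
    seg2 = eye_r - eye_l                 # second line segment L2
    d_e = float(np.linalg.norm(seg2))    # inter-ocular distance
    cos_m = float(np.dot(seg1, seg2)) / (np.linalg.norm(seg1) * d_e)
    d_p = float(np.linalg.norm(seg1)) * cos_m   # d_p = |L1| * cos(m)
    return d_p, cos_m, d_e
```

For instance, projection_distance((0, -1, 0), (-1, 0, 0), (1, 0, 0)) returns d_p = 0, as expected for a frontal face whose nose lies on the perpendicular bisector of the two eyes.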
And step S13, obtaining the human face posture by utilizing a mapping relation obtained in advance based on empirical parameter fitting based on the projection distance and the distance between two eyes.
In this embodiment, based on the projection distance and the inter-ocular distance, obtaining the face pose by using a mapping relationship obtained in advance based on empirical parameter fitting, includes: obtaining the specific gravity of the nose relative to the two eyes based on the projection distance and the distance between the two eyes; according to the specific gravity, obtaining angle information by utilizing a mapping relation obtained by fitting based on empirical parameters in advance; and obtaining the human face posture according to the angle information.
The specific gravity is expressed as:
r = d_p / d_e = (|L1| · cos m) / d_e
wherein r represents the specific gravity; d_p represents the projection distance; |L1| represents the length of the first line segment formed by the nose and the midpoint of the two eyes; cos m represents the cosine of the included angle between the first line segment and the second line segment formed by the two eyes; and d_e represents the inter-ocular distance.
In the present embodiment, the angle information includes at least one of the pitch angle (Pitch), the yaw angle (Yaw), and the roll angle (Roll). In practical application, the face pose types need to be divided in advance according to the magnitude of the angle information, so that angle information within a certain angle range corresponds to a specific face pose type. For example, taking the angle value of left-right rotation, the face pose types may be divided into six classes corresponding to the angle intervals [-90, -60), [-60, -30), [-30, 0), [0, 30), [30, 60), and [60, 90]. As another example, the face pose may be divided into at least two types according to the angle value of pitch rotation or in-plane rotation. As yet another example, the face pose corresponding to a specific angle range may be set based on the specific angles included in the angle information.
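For illustration, the following minimal sketch evaluates a fitted mapping on the specific gravity r and bins the resulting yaw angle into the six example pose classes above; the polynomial form of f, the placeholder coefficients, and all names are assumptions rather than values from the patent:

```python
import numpy as np

# Coefficients of a hypothetical fitted mapping y = f(r); in practice they
# come from the empirical fit described in steps S21-S23 below.
FITTED_COEFFS = np.array([0.0, 90.0, 0.0])  # placeholder values only

def estimate_yaw(r, coeffs=FITTED_COEFFS):
    """Evaluate the fitted mapping relation y = f(r) to get a yaw angle in degrees."""
    return float(np.polyval(coeffs, r))

# The six example pose classes for left-right rotation.
POSE_BINS = [(-90, -60), (-60, -30), (-30, 0), (0, 30), (30, 60), (60, 90)]

def pose_class(yaw):
    """Return the index of the pose class whose angle interval contains yaw."""
    for i, (lo, hi) in enumerate(POSE_BINS):
        if lo <= yaw < hi or (hi == 90 and yaw == 90):
            return i
    raise ValueError(f"yaw {yaw} lies outside [-90, 90]")
```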
In an optional embodiment, referring to fig. 2, before obtaining the face pose, the method further includes fitting the mapping relation based on empirical parameters, which specifically includes:
s21, acquiring experience parameters in a preset angle range, wherein the experience parameters comprise angle experience parameters corresponding to the historical face images and key point experience parameters corresponding to the historical face images;
s22, acquiring specific gravity empirical parameters of the nose relative to the eyes by using the key point empirical parameters;
and S23, obtaining a mapping relation based on the specific gravity empirical parameters and the corresponding angle empirical parameters.
It should be noted that the step labels S2N in this specification do not indicate the execution order of the fitting procedure; the flow of obtaining the mapping relation based on empirical parameter fitting is described below.
Step S21, acquiring experience parameters in a preset angle range, wherein the experience parameters comprise angle experience parameters corresponding to the historical face image and key point experience parameters corresponding to the historical face image.
It should be noted that acquiring the empirical parameters within the preset angle range includes: acquiring historical face images within the preset angle range; and labeling the historical face images to obtain the empirical parameters. When the historical face images are acquired, they may be collected under different external conditions such as pose angles, occlusions, and illumination.
In an optional embodiment, the preset angle range includes a yaw angle range, selected as [-30°, 30°], so that the mapping relation between the yaw angle and the face key points can be accurately fitted within this small angle range; the face pose over the large angle range of [-90°, 90°] can then be accurately estimated from the key point information, improving the reliability of face pose estimation.
And step S22, acquiring specific gravity empirical parameters of the nose relative to the eyes by using the key point empirical parameters.
In this embodiment, obtaining the specific gravity empirical parameters of the nose relative to the two eyes by using the key point empirical parameters includes: obtaining the nose reference coordinates and the two-eye reference coordinates based on the key point empirical parameters; obtaining, from the two-eye reference coordinates, the midpoint reference coordinate and the third line segment formed by the two eyes together with its length; obtaining the fourth line segment and its length from the midpoint reference coordinate and the nose reference coordinate; obtaining the empirical included-angle parameter between the third and fourth line segments; projecting the fourth line segment onto the third line segment and, from the empirical included-angle parameter and the length of the fourth line segment, obtaining the projection distance parameter; and obtaining the specific gravity empirical parameter of the nose relative to the two eyes from the projection distance parameter and the length of the third line segment.
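Continuing the sketch, the specific gravity empirical parameter for one labeled historical image can be computed with the same geometry as in step S12, reusing the illustrative projection_distance helper defined earlier:

```python
def specific_gravity(nose, eye_l, eye_r):
    """Specific gravity r = d_p / d_e of the nose relative to the two eyes."""
    d_p, _, d_e = projection_distance(nose, eye_l, eye_r)
    return d_p / d_e
```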
And step S23, obtaining a mapping relation based on the specific gravity empirical parameters and the corresponding angle empirical parameters.
In this embodiment, obtaining the mapping relation includes: constructing a function y = f(x) based on the specific gravity empirical parameters and the corresponding angle empirical parameters, where y denotes an angle empirical parameter, f denotes the mapping relation, and x denotes a specific gravity empirical parameter; and obtaining the mapping relation from this function. Because the empirical parameters are collected within the preset angle range, a relatively accurate mapping relation is obtained through fitting, so that the face pose over the large angle range can subsequently be estimated from this mapping relation, improving the accuracy of large-angle face pose estimation.
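A minimal sketch of the fit follows; the polynomial degree and the use of numpy.polyfit are assumptions, since the patent specifies only that a function y = f(x) is fitted from specific-gravity/angle pairs collected within the preset [-30°, 30°] yaw range:

```python
import numpy as np

def fit_mapping(ratios, yaw_angles, degree=3):
    """Fit y = f(x) from empirical specific gravities x and labeled yaw angles y.

    Returns polynomial coefficients usable with np.polyval, i.e. the fitted
    mapping relation.
    """
    x = np.asarray(ratios, dtype=float)
    y = np.asarray(yaw_angles, dtype=float)
    # Keep only samples whose labeled yaw lies in the preset small-angle range.
    mask = (y >= -30.0) & (y <= 30.0)
    return np.polyfit(x[mask], y[mask], deg=degree)
```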
In summary, the embodiment of the present invention obtains the face image and the corresponding key point information thereof, so as to estimate the face pose according to the key point information and the mapping relationship obtained by fitting in advance, and correspondingly predict the face pose by using the more accurate mapping relationship, thereby improving the accuracy and reliability of pose estimation.
The following describes the face pose estimation apparatus provided by the present invention, and the face pose estimation apparatus described below and the face pose estimation method described above may be referred to in correspondence with each other.
Fig. 3 shows a schematic structural diagram of a human face pose estimation device of the invention, which comprises:
the data acquisition module 31 is used for acquiring a face image and corresponding key point information thereof;
the parameter acquisition module 32 is used for acquiring the projection distance of the nose relative to the midpoint of the two eyes and acquiring the distance between the two eyes based on the key point information;
the pose estimation module 33 obtains the face pose by using a mapping relation obtained in advance based on empirical parameter fitting based on the projection distance and the inter-ocular distance.
In this embodiment, the data obtaining module 31 includes: an image acquisition unit that acquires a face image; and the key point detection unit is used for inputting the face image into a face key point detection model to obtain key point information output by the face key point detection model, wherein the face key point detection model is obtained by training based on the face sample image and a face key point truth value corresponding to the face sample image. The key point information includes coordinate information of key points of each face.
In an alternative embodiment, the data acquisition module 31 includes: the image acquisition unit, which acquires the face image; and the labeling unit, which labels key points on the face image to obtain the key point information. It should be noted that, to improve the accuracy of face pose estimation and thereby provide a large number of training samples for fatigue-driving behavior recognition, the acquired face image may be detected in advance to obtain the key point information.
In an alternative embodiment, the image acquisition unit obtains the face image by: acquiring a face image to be subjected to face key point detection from an electronic device or application platform applying the face key point detection method; or obtaining the face image from a terminal device connected to such an electronic device or application platform. It should be noted that the terminal device may capture a face image of a person in the recognition area through a visual sensor connected to it, and that the face image may be a single picture or a sequence of picture frames obtained by shooting, or an image frame or sequence of image frames associated with the face to be detected and extracted from a video.
In addition, the visual sensor includes at least one of a millimeter-wave radar, a laser radar, a detector, a camera, and other image capture devices; the specific type of the visual sensor is not further limited here.
A parameter acquisition module 32, comprising: the key point acquisition unit is used for acquiring a nose coordinate and two eye coordinates based on the key point information; a midpoint acquisition unit which acquires a midpoint coordinate based on the coordinates of both eyes; and the projection distance acquisition unit is used for acquiring the projection distance of the nose relative to the midpoint according to the midpoint coordinate and the nose coordinate.
Specifically, the projection distance acquisition unit includes: the first line segment acquisition subunit, which obtains the first line segment according to the midpoint coordinate and the nose coordinate; the second line segment acquisition subunit, which obtains the second line segment according to the coordinates of the two eyes; the included angle acquisition subunit, which obtains the included angle between the first line segment and the second line segment; and the projection subunit, which projects the first line segment onto the second line segment based on the included angle to obtain the projection distance.
The pose estimation module 33 includes: the specific gravity acquisition unit, which obtains the specific gravity of the nose relative to the two eyes based on the projection distance and the inter-ocular distance; the angle acquisition unit, which obtains the angle information according to the specific gravity by using the mapping relation fitted in advance from empirical parameters; and the pose estimation unit, which obtains the face pose according to the angle information.
The angle information includes at least one of the pitch angle (Pitch), the yaw angle (Yaw), and the roll angle (Roll). In practical application, the face pose types need to be divided in advance according to the magnitude of the angle information, so that angle information within a certain angle range corresponds to a specific face pose type. For example, taking the angle value of left-right rotation, the face pose types may be divided into six classes corresponding to the angle intervals [-90, -60), [-60, -30), [-30, 0), [0, 30), [30, 60), and [60, 90]. As another example, the face pose may be divided into at least two types according to the angle value of pitch rotation or in-plane rotation. As yet another example, the face pose corresponding to a specific angle range may be set based on the specific angles included in the angle information.
In an alternative embodiment, referring to fig. 4, the apparatus further includes a mapping relationship obtaining module for obtaining the mapping relationship based on empirical parameter fitting. Specifically, the mapping relationship obtaining module includes:
the data acquisition unit 41 is configured to acquire experience parameters within a preset angle range, where the experience parameters include angle experience parameters corresponding to the historical face image and key point experience parameters corresponding to the historical face image;
an intermediate parameter acquiring unit 42 that acquires a specific gravity empirical parameter of the nose with respect to both eyes using the key point empirical parameter;
the mapping relation obtaining unit 43 obtains the mapping relation based on the specific gravity empirical parameter and the corresponding angle empirical parameter.
In this embodiment, the data acquisition unit 41 includes: the image acquisition subunit, which acquires historical face images within the preset angle range; and the labeling subunit, which labels the historical face images to obtain the empirical parameters. It should be noted that when the image acquisition subunit acquires the historical face images, they may be collected under different external conditions such as pose angles, occlusions, and illumination.
In an optional embodiment, the preset angle range includes a yaw angle range, selected as [-30°, 30°], so that the mapping relation between the yaw angle and the face key points can be accurately fitted within this small angle range; the face pose over the large angle range of [-90°, 90°] can then be accurately estimated from the key point information, improving the reliability of face pose estimation.
The intermediate parameter acquisition unit 42 includes: the coordinate acquisition subunit, which obtains the nose reference coordinates and the two-eye reference coordinates based on the key point empirical parameters; the first intermediate parameter acquisition subunit, which obtains, from the two-eye reference coordinates, the midpoint reference coordinate and the third line segment formed by the two eyes together with its length; and the second intermediate parameter acquisition subunit, which obtains the fourth line segment and its length from the midpoint reference coordinate and the nose reference coordinate, obtains the empirical included-angle parameter between the third and fourth line segments, projects the fourth line segment onto the third line segment to obtain the projection distance parameter from the empirical included-angle parameter and the length of the fourth line segment, and obtains the specific gravity empirical parameter of the nose relative to the two eyes from the projection distance parameter and the length of the third line segment.
The mapping relation obtaining unit 43 includes: the function construction subunit, which constructs a function y = f(x) based on the specific gravity empirical parameters and the corresponding angle empirical parameters, where y denotes an angle empirical parameter, f denotes the mapping relation, and x denotes a specific gravity empirical parameter; the mapping relation is then obtained from this function. Because the empirical parameters are collected within the preset angle range, a relatively accurate mapping relation is obtained through fitting, so that the face pose over the large angle range can subsequently be estimated from this mapping relation, improving the accuracy of large-angle face pose estimation.
Fig. 5 illustrates a physical structure diagram of an electronic device, which may include, as shown in fig. 5: a processor (processor)51, a communication Interface (communication Interface)52, a memory (memory)53 and a communication bus 54, wherein the processor 51, the communication Interface 52 and the memory 53 complete communication with each other through the communication bus 54. The processor 51 may invoke logic instructions in the memory 53 to perform a method of face pose estimation, the method comprising: acquiring a face image and corresponding key point information thereof; based on the key point information, obtaining the projection distance of the nose relative to the midpoint of the two eyes and obtaining the distance between the two eyes; and obtaining the human face posture by utilizing a mapping relation obtained by fitting based on empirical parameters in advance based on the projection distance and the distance between two eyes.
In addition, the logic instructions in the memory 53 may be implemented in the form of software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product, the computer program product including a computer program, the computer program being stored on a non-transitory computer-readable storage medium, wherein when the computer program is executed by a processor, a computer is capable of executing the face pose estimation method provided by the above methods, and the method includes: acquiring a face image and corresponding key point information thereof; based on the key point information, obtaining the projection distance of the nose relative to the midpoint of the two eyes and obtaining the distance between the two eyes; and obtaining the human face posture by utilizing a mapping relation obtained by fitting based on empirical parameters in advance based on the projection distance and the distance between two eyes.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium, on which a computer program is stored, the computer program, when being executed by a processor, is implemented to perform the face pose estimation method provided by the above methods, the method including: acquiring a face image and corresponding key point information thereof; based on the key point information, obtaining the projection distance of the nose relative to the midpoint of the two eyes and obtaining the distance between the two eyes; and obtaining the human face posture by utilizing a mapping relation obtained by fitting based on empirical parameters in advance based on the projection distance and the distance between two eyes.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A face pose estimation method is characterized by comprising the following steps:
acquiring a face image and corresponding key point information thereof;
based on the key point information, obtaining the projection distance of the nose relative to the midpoint of the two eyes and obtaining the distance between the two eyes;
and obtaining the human face posture by utilizing a mapping relation which is obtained by fitting based on empirical parameters in advance based on the projection distance and the distance between the two eyes.
2. The method of estimating a face pose according to claim 1, wherein said deriving a projection distance of a nose relative to a midpoint of two eyes based on the keypoint information comprises:
obtaining a nose coordinate and two eye coordinates based on the key point information;
obtaining a midpoint coordinate based on the coordinates of the two eyes;
and obtaining the projection distance of the nose relative to the midpoint according to the midpoint coordinate and the nose coordinate.
3. The method of estimating a face pose according to claim 2, wherein said deriving a projection distance of a nose relative to a midpoint according to the midpoint coordinates and the nose coordinates comprises:
obtaining a first line segment according to the midpoint coordinate and the nose coordinate;
obtaining a second line segment according to the coordinates of the two eyes;
obtaining an included angle between the first line segment and the second line segment according to the first line segment and the second line segment;
and based on the included angle, projecting the first line segment onto the second line segment to obtain a projection distance.
4. The method according to claim 1, wherein the obtaining a face pose by using a mapping relationship obtained in advance based on empirical parameter fitting based on the projection distance and the inter-ocular distance comprises:
obtaining the specific gravity of the nose relative to the two eyes based on the projection distance and the distance between the two eyes;
according to the specific gravity, obtaining angle information by utilizing a mapping relation obtained by fitting based on empirical parameters in advance;
and obtaining the human face posture according to the angle information.
5. The face pose estimation method of claim 4, wherein the specific gravity is expressed as:
r = d_p / d_e = (|L1| · cos m) / d_e
wherein r represents the specific gravity; d_p represents the projection distance; |L1| represents the length of the first line segment formed by the nose and the midpoint of the two eyes; cos m represents the cosine of the included angle between the first line segment and the second line segment formed by the two eyes; and d_e represents the inter-ocular distance.
6. The method of estimating a face pose according to claim 1, further comprising, prior to said deriving a face pose:
acquiring experience parameters in a preset angle range, wherein the experience parameters comprise angle experience parameters corresponding to a historical face image and key point experience parameters corresponding to the historical face image;
acquiring specific gravity experience parameters of the nose relative to the eyes by using the key point experience parameters;
and obtaining a mapping relation based on the specific gravity experience parameters and the corresponding angle experience parameters.
7. A face pose estimation apparatus, comprising:
the data acquisition module is used for acquiring the face image and the corresponding key point information thereof;
the parameter acquisition module is used for acquiring the projection distance of the nose relative to the midpoint of the two eyes and acquiring the distance between the two eyes based on the key point information;
and the posture estimation module is used for obtaining the human face posture by utilizing a mapping relation which is obtained by fitting based on empirical parameters in advance based on the projection distance and the distance between the two eyes.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps of the face pose estimation method according to any of the claims 1 to 6.
9. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the face pose estimation method according to any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, carries out the steps of the face pose estimation method according to any of the claims 1 to 6.
CN202111444206.XA 2021-11-30 2021-11-30 Human face posture estimation method and device Pending CN114399800A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111444206.XA CN114399800A (en) 2021-11-30 2021-11-30 Human face posture estimation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111444206.XA CN114399800A (en) 2021-11-30 2021-11-30 Human face posture estimation method and device

Publications (1)

Publication Number Publication Date
CN114399800A true CN114399800A (en) 2022-04-26

Family

ID=81225163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111444206.XA Pending CN114399800A (en) 2021-11-30 2021-11-30 Human face posture estimation method and device

Country Status (1)

Country Link
CN (1) CN114399800A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118038560A (en) * 2024-04-12 2024-05-14 魔视智能科技(武汉)有限公司 Method and device for predicting face pose of driver



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination