CN110363052B - Method and device for determining human face pose in image and computer equipment


Info

Publication number
CN110363052B
Authority
CN
China
Prior art keywords
vertical distance
face
distance
horizontal
face region
Legal status
Active
Application number
CN201810321161.9A
Other languages
Chinese (zh)
Other versions
CN110363052A (en)
Inventor
Zhu Li (朱丽)
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201810321161.9A
Publication of CN110363052A
Application granted
Publication of CN110363052B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/172 Classification, e.g. identification

Abstract

The invention discloses a method and device for determining the face pose in an image, and a computer device, belonging to the technical field of image processing. The method comprises the following steps: detecting a face region in a target image; extracting feature point position information in the face region, the feature point position information indicating the relative positions between feature points of the face contained in the face region; and calculating the face pose in the face region according to the feature point position information in the face region. In this scheme, the feature points of the face region in the image are located, and the face pose in the face region is calculated from the relative positional relationships among those feature points, so there is no need to extract depth information from the image or to construct a three-dimensional model; this reduces computational complexity, increases the speed of face pose recognition, and improves recognition efficiency.

Description

Method and device for determining face pose in image and computer equipment
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a method and a device for determining a face pose in an image and computer equipment.
Background
In many scenarios, such as security, photography, and driving-safety alerts, recognizing the face pose in an image is an important technique.
In the related art, the face pose in an image is generally recognized by modeling based on depth information. Specifically, a face image is captured by professional 3D capture equipment, depth information is extracted from the captured face image, a three-dimensional virtual model of the face is constructed from the depth information, and the face pose is obtained from the three-dimensional virtual model.
In this modeling-based scheme of the related art, depth information must be extracted from the image in order to construct a three-dimensional virtual model, and both the extraction of depth information and the construction of the three-dimensional virtual model consume a large amount of processing resources, so recognition efficiency is low.
Disclosure of Invention
To solve the problem in the related art that extracting depth information and constructing a three-dimensional virtual model consume a large amount of processing resources, resulting in low recognition efficiency, the present application provides a method and device for determining the face pose in an image.
In a first aspect, a method of determining a face pose in an image is provided, the method comprising:
detecting a face region in a target image;
extracting feature point position information in the face region, wherein the feature point position information is used for indicating the relative position between feature points in the face contained in the face region;
and calculating the face pose in the face region according to the feature point position information in the face region.
Optionally, the calculating the face pose in the face region according to the feature point position information in the face region includes:
and calculating the horizontal deflection angle, the pitch angle, and the left-right deflection angle of the face in the face region according to the feature point position information.
Optionally, the feature points include a feature point corresponding to the left eye, a feature point corresponding to the right eye, a feature point corresponding to the nose tip, a feature point corresponding to the left mouth corner, and a feature point corresponding to the right mouth corner.
Optionally, the calculating a horizontal deflection angle of the face in the face region according to the feature point position information includes:
acquiring a first vertical distance dy1 and a second vertical distance dy2, where the first vertical distance dy1 is the average of the vertical distance between the nose tip and the left eye and the vertical distance between the nose tip and the right eye when the face pose is in the frontal state, and the second vertical distance dy2 is the average of the vertical distance between the nose tip and the left mouth corner and the vertical distance between the nose tip and the right mouth corner when the face pose is in the frontal state;
extracting a third horizontal distance dx3, a fourth horizontal distance dx4, a fifth horizontal distance dx5, and a sixth horizontal distance dx6 from the feature point position information, where the third horizontal distance dx3 is the horizontal distance between the nose tip and the left eye in the face region, the fourth horizontal distance dx4 is the horizontal distance between the nose tip and the right eye in the face region, the fifth horizontal distance dx5 is the horizontal distance between the nose tip and the left mouth corner in the face region, and the sixth horizontal distance dx6 is the horizontal distance between the nose tip and the right mouth corner in the face region;
and calculating the horizontal deflection angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third horizontal distance dx3, the fourth horizontal distance dx4, the fifth horizontal distance dx5, and the sixth horizontal distance dx6.
Optionally, the calculating a horizontal deflection angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third horizontal distance dx3, the fourth horizontal distance dx4, the fifth horizontal distance dx5, and the sixth horizontal distance dx6 includes:
calculating a first angle from the first vertical distance dy1 and the fourth horizontal distance dx4, a second angle from the first vertical distance dy1 and the third horizontal distance dx3, and a difference between the first angle and the second angle as an eye horizontal deflection angle;
calculating a third angle according to the second vertical distance dy2 and the sixth horizontal distance dx6, a fourth angle according to the second vertical distance dy2 and the fifth horizontal distance dx5, and calculating a difference between the third angle and the fourth angle as a mouth horizontal deflection angle;
calculating an average of the eye horizontal deflection angle and the mouth horizontal deflection angle as a horizontal deflection angle of the face in the face region.
Optionally, the method further includes:
querying, according to a preset correction relation table, an actual deflection angle corresponding to the horizontal deflection angle of the face in the face region, where the correction relation table contains the correspondence between horizontal deflection angles and actual deflection angles;
and acquiring the actual deflection angle as the corrected horizontal deflection angle of the face in the face region.
Optionally, the calculating a pitch angle of the face in the face region according to the feature point position information includes:
acquiring a first vertical distance dy1 and a second vertical distance dy2, where the first vertical distance dy1 is the average of the vertical distance between the nose tip and the left eye and the vertical distance between the nose tip and the right eye when the face pose is in the frontal state, and the second vertical distance dy2 is the average of the vertical distance between the nose tip and the left mouth corner and the vertical distance between the nose tip and the right mouth corner when the face pose is in the frontal state;
extracting a third vertical distance dy3 and a fourth vertical distance dy4 from the feature point position information, where the third vertical distance dy3 is the vertical distance between the nose tip and the eyes in the face region, and the fourth vertical distance dy4 is the vertical distance between the nose tip and the mouth corners in the face region;
and calculating the pitch angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third vertical distance dy3, and the fourth vertical distance dy4.
Optionally, the calculating the pitch angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third vertical distance dy3, and the fourth vertical distance dy4 includes:
calculating a product of the first vertical distance dy1, the fourth vertical distance dy4 and a first preset constant as a first numerical value, a product of the second vertical distance dy2, the third vertical distance dy3 and a second preset constant as a second numerical value, a product of the first vertical distance dy1, the fourth vertical distance dy4 and a third preset constant as a third numerical value, and a product of the second vertical distance dy2, the third vertical distance dy3 and a fourth preset constant as a fourth numerical value;
calculating a difference between the first value and the second value to obtain a fifth value;
calculating the sum of the third numerical value and the fourth numerical value to obtain a sixth numerical value;
and performing arc tangent calculation on the ratio of the fifth numerical value to the sixth numerical value to obtain the pitch angle of the face in the face region.
Optionally, the calculating a left-right deflection angle of the face in the face region according to the feature point position information includes:
extracting a seventh horizontal distance dx7, a fifth vertical distance dy5, an eighth horizontal distance dx8, and a sixth vertical distance dy6 from the feature point position information, where the seventh horizontal distance dx7 is the horizontal distance between the left and right eyes in the face region, the fifth vertical distance dy5 is the vertical distance between the left and right eyes in the face region, the eighth horizontal distance dx8 is the horizontal distance between the left and right mouth corners in the face region, and the sixth vertical distance dy6 is the vertical distance between the left and right mouth corners in the face region;
and calculating the left-right deflection angle of the face in the face region according to the seventh horizontal distance dx7, the fifth vertical distance dy5, the eighth horizontal distance dx8, and the sixth vertical distance dy6.
Optionally, the calculating the left-right deflection angle of the face in the face region according to the seventh horizontal distance dx7, the fifth vertical distance dy5, the eighth horizontal distance dx8, and the sixth vertical distance dy6 includes:
calculating an eye left-right deflection angle according to the seventh horizontal distance dx7 and the fifth vertical distance dy5;
calculating a mouth left-right deflection angle according to the eighth horizontal distance dx8 and the sixth vertical distance dy6;
and calculating the average of the eye left-right deflection angle and the mouth left-right deflection angle as the left-right deflection angle of the face in the face region.
Optionally, the target image is an image of a face of a driver of the vehicle, the method further comprising:
when the face pose in the face region meets a preset condition, issuing an alert message;
wherein the preset condition comprises at least one of the following conditions:
the face pose in the face region is a first preset pose;
the face pose in the face region deviates from a second preset pose;
and the duration for which the face pose in the face region deviates from a third preset pose reaches a preset duration.
In a second aspect, there is provided an apparatus for determining a face pose in an image, the apparatus comprising:
the region detection module is used for detecting a face region in the target image;
the information extraction module is used for extracting feature point position information in the face region, the feature point position information indicating the relative positions between feature points of the face contained in the face region;
and the pose calculation module is used for calculating the face pose in the face region according to the feature point position information in the face region.
Optionally, the pose calculation module is configured to calculate a horizontal deflection angle, a pitch angle, and a left-right deflection angle of the face in the face region according to the feature point position information.
Optionally, the feature points include a feature point corresponding to the left eye, a feature point corresponding to the right eye, a feature point corresponding to the nose tip, a feature point corresponding to the left mouth corner, and a feature point corresponding to the right mouth corner.
Optionally, the pose calculation module is specifically configured to:
acquire a first vertical distance dy1 and a second vertical distance dy2, where the first vertical distance dy1 is the average of the vertical distance between the nose tip and the left eye and the vertical distance between the nose tip and the right eye when the face pose is in the frontal state, and the second vertical distance dy2 is the average of the vertical distance between the nose tip and the left mouth corner and the vertical distance between the nose tip and the right mouth corner when the face pose is in the frontal state;
extract a third horizontal distance dx3, a fourth horizontal distance dx4, a fifth horizontal distance dx5, and a sixth horizontal distance dx6 from the feature point position information, where the third horizontal distance dx3 is the horizontal distance between the nose tip and the left eye in the face region, the fourth horizontal distance dx4 is the horizontal distance between the nose tip and the right eye in the face region, the fifth horizontal distance dx5 is the horizontal distance between the nose tip and the left mouth corner in the face region, and the sixth horizontal distance dx6 is the horizontal distance between the nose tip and the right mouth corner in the face region;
and calculate the horizontal deflection angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third horizontal distance dx3, the fourth horizontal distance dx4, the fifth horizontal distance dx5, and the sixth horizontal distance dx6.
Optionally, when calculating the horizontal deflection angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third horizontal distance dx3, the fourth horizontal distance dx4, the fifth horizontal distance dx5, and the sixth horizontal distance dx6, the pose calculation module is specifically configured to:
calculate a first angle according to the first vertical distance dy1 and the fourth horizontal distance dx4, calculate a second angle according to the first vertical distance dy1 and the third horizontal distance dx3, and calculate the difference between the first angle and the second angle as the eye horizontal deflection angle;
calculate a third angle according to the second vertical distance dy2 and the sixth horizontal distance dx6, calculate a fourth angle according to the second vertical distance dy2 and the fifth horizontal distance dx5, and calculate the difference between the third angle and the fourth angle as the mouth horizontal deflection angle;
and calculate the average of the eye horizontal deflection angle and the mouth horizontal deflection angle as the horizontal deflection angle of the face in the face region.
Optionally, the apparatus further comprises a correction module, configured to:
query, according to a preset correction relation table, an actual deflection angle corresponding to the horizontal deflection angle of the face in the face region, where the correction relation table contains the correspondence between horizontal deflection angles and actual deflection angles, and acquire the actual deflection angle as the corrected horizontal deflection angle of the face in the face region.
Optionally, the pose calculation module is specifically configured to:
acquire a first vertical distance dy1 and a second vertical distance dy2, where the first vertical distance dy1 is the average of the vertical distance between the nose tip and the left eye and the vertical distance between the nose tip and the right eye when the face pose is in the frontal state, and the second vertical distance dy2 is the average of the vertical distance between the nose tip and the left mouth corner and the vertical distance between the nose tip and the right mouth corner when the face pose is in the frontal state;
extract a third vertical distance dy3 and a fourth vertical distance dy4 from the feature point position information, where the third vertical distance dy3 is the vertical distance between the nose tip and the eyes in the face region, and the fourth vertical distance dy4 is the vertical distance between the nose tip and the mouth corners in the face region;
and calculate the pitch angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third vertical distance dy3, and the fourth vertical distance dy4.
Optionally, when calculating the pitch angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third vertical distance dy3, and the fourth vertical distance dy4, the pose calculation module is specifically configured to:
calculate the product of the first vertical distance dy1, the fourth vertical distance dy4, and a first preset constant as a first numerical value, the product of the second vertical distance dy2, the third vertical distance dy3, and a second preset constant as a second numerical value, the product of the first vertical distance dy1, the fourth vertical distance dy4, and a third preset constant as a third numerical value, and the product of the second vertical distance dy2, the third vertical distance dy3, and a fourth preset constant as a fourth numerical value;
calculate the difference between the first numerical value and the second numerical value to obtain a fifth numerical value;
calculate the sum of the third numerical value and the fourth numerical value to obtain a sixth numerical value;
and perform an arctangent calculation on the ratio of the fifth numerical value to the sixth numerical value to obtain the pitch angle of the face in the face region.
Optionally, the pose calculation module is specifically configured to:
extract a seventh horizontal distance dx7, a fifth vertical distance dy5, an eighth horizontal distance dx8, and a sixth vertical distance dy6 from the feature point position information, where the seventh horizontal distance dx7 is the horizontal distance between the left and right eyes in the face region, the fifth vertical distance dy5 is the vertical distance between the left and right eyes in the face region, the eighth horizontal distance dx8 is the horizontal distance between the left and right mouth corners in the face region, and the sixth vertical distance dy6 is the vertical distance between the left and right mouth corners in the face region;
and calculate the left-right deflection angle of the face in the face region according to the seventh horizontal distance dx7, the fifth vertical distance dy5, the eighth horizontal distance dx8, and the sixth vertical distance dy6.
Optionally, when calculating the left-right deflection angle of the face in the face region according to the seventh horizontal distance dx7, the fifth vertical distance dy5, the eighth horizontal distance dx8, and the sixth vertical distance dy6, the pose calculation module is specifically configured to:
calculate an eye left-right deflection angle according to the seventh horizontal distance dx7 and the fifth vertical distance dy5;
calculate a mouth left-right deflection angle according to the eighth horizontal distance dx8 and the sixth vertical distance dy6;
and calculate the average of the eye left-right deflection angle and the mouth left-right deflection angle as the left-right deflection angle of the face in the face region.
Optionally, the target image is an image of a face of a driver of the vehicle, the apparatus further comprising:
the alert module is used for issuing an alert message when the face pose in the face region meets a preset condition;
wherein the preset condition comprises at least one of the following conditions:
the face pose in the face region is a first preset pose;
the face pose in the face region deviates from a second preset pose;
and the duration for which the face pose in the face region deviates from a third preset pose reaches a preset duration.
In a third aspect, there is provided a computer device comprising a processor and a memory, the memory having stored therein instructions, execution of which by the processor causes the computer device to implement the method of the first aspect as described above.
In a fourth aspect, there is provided a computer-readable storage medium having stored thereon instructions which, when executed by a computer device, cause the computer device to carry out the method of the first aspect.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
by locating the feature points of the face region in the image and calculating the face pose in the face region from the relative positional relationships among those feature points, there is no need to extract depth information from the image or to construct a three-dimensional model, which reduces computational complexity, increases the speed of face pose recognition, and improves recognition efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a system configuration diagram of an image processing system provided by an embodiment of the present invention;
FIG. 2 is a flow diagram of a method for determining a face pose in an image according to one embodiment of the present invention;
FIG. 3 is a flow chart of a method for determining a face pose in an image according to one embodiment of the present invention;
FIG. 4 is a schematic diagram of feature points in a face region according to the embodiment shown in FIG. 3;
FIG. 5 is a schematic diagram of feature points in the frontal face state according to the embodiment shown in FIG. 3;
fig. 6 is a schematic diagram of the feature points after the face has deflected relative to the frontal state, according to the embodiment shown in fig. 3;
FIG. 7 is a block diagram of an apparatus for determining a face pose in an image according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
It is to be understood that reference herein to "a number" means one or more and "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The scheme shown in the embodiments of the present invention can be executed by a recognition device and applied to various scenarios in which the face pose in an image needs to be recognized. The recognition device may be any of various types of computer device: for example, a mobile terminal such as a smartphone, a tablet computer, or an e-book reader; a computer device such as a personal computer or a server; or a smart wearable device such as a smart watch or smart glasses. The embodiment of the present invention does not limit the specific implementation form of the recognition device.
In one possible implementation manner, after a target image is generated or acquired, the recognition device immediately performs recognition processing on the target image to determine a face pose in the target image; alternatively, in another possible implementation, after the target image is generated or acquired, the recognition device may perform recognition processing on the target image at a specific time (e.g., at a predetermined time point, or when a specific instruction is received) to determine the face pose in the target image.
Referring to fig. 1, a system configuration diagram of an image processing system according to an embodiment of the present invention is shown. As shown in fig. 1, the image processing system may include a recognition device 110 and an image acquisition/generation device 120.
Wherein the image capturing/generating device 120 is used for capturing/generating a target image, for example, the image capturing/generating device 120 may include an image capturing component (such as a camera) through which the image capturing/generating device 120 captures the target image; alternatively, the image capturing/generating device 120 may obtain the target image by processing an existing image (which may be a received or captured image).
The recognition device 110 is used to recognize a face pose of a face contained in the target image. In the embodiment of the present invention, the face pose may refer to an orientation of a face.
In an embodiment of the present invention, the recognition device 110 and the image capturing/generating device 120 may be implemented as a same entity device, or the recognition device 110 and the image capturing/generating device 120 may also be implemented as two different entity devices. When the recognition device 110 and the image capturing/generating device 120 are implemented as two different physical devices, the recognition device 110 and the image capturing/generating device 120 may be connected via a wired or wireless network.
Referring to fig. 2, a flowchart of a method for determining a face pose in an image according to an embodiment of the present invention is shown. As shown in fig. 2, the method for determining the face pose in the image may include:
step 201, detecting a face region in a target image.
The step of detecting the face region in the target image may be performed by a neural network algorithm, for example, the face region in the target image may be detected by a convolutional neural network algorithm.
Step 202, extracting feature point position information in the face region, where the feature point position information is used to indicate relative positions between feature points in a face included in the face region.
The step of extracting the feature point position information in the face region may be performed by a neural network algorithm, for example, the feature point position information in the face region may be extracted by a convolutional neural network algorithm.
The feature points in the face may include at least three feature points, for example, the feature points may include a feature point corresponding to a left eye, a feature point corresponding to a right eye, a feature point corresponding to a nose tip, a feature point corresponding to a left mouth corner, a feature point corresponding to a right mouth corner, and the like.
Step 203, calculating the face pose in the face region according to the feature point position information in the face region.
In the embodiment of the present invention, the face pose in the target image refers to the orientation of the face in the target image. In general, the orientation of the face is determined by the horizontal deflection angle, the pitch angle, and the left-right deflection angle of the face; that is, the face pose in the target image is considered determined once these three angles are determined.
Horizontal deflection (yaw) of the face refers to rotation of the face about a vertical axis, i.e., a straight line perpendicular to the horizontal plane. The horizontal deflection angle may be the angle, within the horizontal plane, between the orientation of the deflected face and the orientation of the frontal face.
Pitch of the face means the face tilts upward or downward, which is equivalent to rotation about a horizontal axis parallel to the line between the eyes or the line between the ears. The pitch angle may be the angle, within a vertical plane, between the orientation of the pitched face and the orientation of the frontal face.
Left-right deflection (roll) of the face means the face tilts toward the left or right shoulder within a vertical plane parallel to the shoulders, which is equivalent to rotation about an axis perpendicular to the plane of the face. The left-right deflection angle may be the angle by which the face tilts from the frontal position toward the left or right shoulder.
In summary, in the scheme shown in the embodiment of the present invention, the feature points of the face region in the image are located, and the face pose in the face region is calculated according to the relative position relationship between the feature points in the face region, so that it is not necessary to extract depth information in the image and construct a three-dimensional model, which simplifies the calculation complexity, increases the recognition speed of the face pose, and improves the recognition efficiency.
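To make the three steps concrete, the following is a minimal Python sketch of how steps 201 to 203 fit together. The function names and the five-point landmark layout are illustrative assumptions, not part of the patent; the two detector functions stand in for the convolutional neural networks described below, and estimate_pose stands in for the geometric calculation of step 203.

```python
from typing import Dict, Tuple

Point = Tuple[float, float]      # (x, y) pixel coordinates in the image
Box = Tuple[int, int, int, int]  # face region as (x, y, width, height)

def detect_face_region(image) -> Box:
    """Step 201: locate the face region (placeholder for the first CNN)."""
    raise NotImplementedError

def extract_feature_points(image, box: Box) -> Dict[str, Point]:
    """Step 202: locate the five named feature points, e.g. keys
    'left_eye', 'right_eye', 'nose_tip', 'left_mouth', 'right_mouth'
    (placeholder for the second CNN)."""
    raise NotImplementedError

def estimate_pose(points: Dict[str, Point], frontal: Dict[str, Point]):
    """Step 203: compute (yaw, pitch, roll) from the relative positions,
    using equations 1, 3, and 5 of the detailed description below."""
    raise NotImplementedError

def determine_face_pose(image, frontal: Dict[str, Point]):
    box = detect_face_region(image)              # step 201
    points = extract_feature_points(image, box)  # step 202
    return estimate_pose(points, frontal)        # step 203
```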
The scheme shown in fig. 1 can be applied to various application scenarios requiring rapid recognition of face poses, such as security, photography, and monitoring the driving state of a vehicle driver. The following embodiments of the present invention are described taking the monitoring of the driving state of a vehicle driver as an example.
Referring to fig. 3, a flowchart of a method for determining a face pose in an image according to an embodiment of the present invention is shown. Taking the determination of the face region and the feature point through the convolutional neural network as an example, as shown in fig. 3, the method for determining the face pose in the image may include:
step 301, detecting a face region in a target image through a first convolutional neural network.
In the embodiment of the present invention, a neural network may first be used to detect the face region; specifically, the detection may be performed by a first convolutional neural network.
Taking a scene in which the scheme is applied to monitoring the driving state of the vehicle driver as an example, the target image may be an image acquired by a camera arranged right in front of and above a seat of the vehicle driver.
Step 302, extracting feature point position information in the face region through a second convolutional neural network, where the feature point position information is used to indicate a relative position between feature points in a face included in the face region.
After the face region is obtained in step 301, feature point positioning is performed on the image within the face region; this may likewise be done with a convolutional neural network, for example a second convolutional neural network.
The feature points may be feature points at specific positions in the human face, such as eyebrows, eyes, nose, mouth, and outer contours of the face. For example, please refer to fig. 4, which shows a schematic diagram of feature points in a face region according to an embodiment of the present invention. As shown in fig. 4, when determining the feature points, a plurality of feature points may be determined for each organ in the face region, for example, in fig. 4, a11 to a19 are feature points of the face contour, B11 to B16 are feature points of the mouth contour, C11 to C16 are feature points of the left eye contour, D11 to D16 are feature points of the right eye contour, E11 to E15 are feature points of the left eyebrow contour, F11 to F15 are feature points of the right eyebrow contour, and G11 to G17 are feature points of the nose contour.
In the embodiment of the present invention, the feature points for subsequently determining the face pose may include at least three of feature points corresponding to the left eye, feature points corresponding to the right eye, feature points corresponding to the nose tip, feature points corresponding to the left mouth corner, and feature points corresponding to the right mouth corner. For example, the feature points may include a feature point corresponding to the left eye, a feature point corresponding to the right eye, and a feature point corresponding to the tip of the nose; or the feature points may include a feature point corresponding to a nose tip, a feature point corresponding to a left mouth corner, and a feature point corresponding to a right mouth corner; alternatively, the feature points may include a feature point corresponding to the left eye, a feature point corresponding to the right eye, a feature point corresponding to the nose tip, a feature point corresponding to the left mouth corner, and a feature point corresponding to the right mouth corner.
Step 303, calculating a horizontal deflection angle, a pitch angle, and a left-right deflection angle of the face in the face region according to the feature point position information.
In the embodiment of the present invention, taking the example of decomposing the face pose into horizontal deflection, pitch, and left-right deflection, the face pose can be determined simply by determining the horizontal deflection angle, the pitch angle, and the left-right deflection angle of the face. The embodiment is described using five feature points, namely the feature point corresponding to the left eye, the feature point corresponding to the right eye, the feature point corresponding to the nose tip, the feature point corresponding to the left mouth corner, and the feature point corresponding to the right mouth corner, to determine the horizontal deflection angle, the pitch angle, and the left-right deflection angle, respectively.
Firstly, determining the horizontal deflection angle of the face in the face region.
In the embodiment of the present invention, when determining the horizontal deflection angle of the face, a first vertical distance dy1 and a second vertical distance dy2 may be acquired, where the first vertical distance dy1 is the vertical distance between the nose tip and the eyes when the face pose is in the frontal state, and the second vertical distance dy2 is the vertical distance between the nose tip and the mouth corners when the face pose is in the frontal state. A third horizontal distance dx3, a fourth horizontal distance dx4, a fifth horizontal distance dx5, and a sixth horizontal distance dx6 are extracted from the feature point position information, where the third horizontal distance dx3 is the horizontal distance between the nose tip and the left eye in the face region, the fourth horizontal distance dx4 is the horizontal distance between the nose tip and the right eye in the face region, the fifth horizontal distance dx5 is the horizontal distance between the nose tip and the left mouth corner in the face region, and the sixth horizontal distance dx6 is the horizontal distance between the nose tip and the right mouth corner in the face region. The horizontal deflection angle of the face in the face region is then calculated according to the first vertical distance dy1, the second vertical distance dy2, the third horizontal distance dx3, the fourth horizontal distance dx4, the fifth horizontal distance dx5, and the sixth horizontal distance dx6.
The vertical distance between the nose tip and the eyes may be a vertical distance between the nose tip and the left eye, a vertical distance between the nose tip and the right eye, or an average value of the vertical distance between the nose tip and the left eye and the vertical distance between the nose tip and the right eye.
Accordingly, the vertical distance between the nose tip and the mouth corner may be a vertical distance between the nose tip and the left mouth corner, a vertical distance between the nose tip and the right mouth corner, or an average of the vertical distance between the nose tip and the left mouth corner and the vertical distance between the nose tip and the right mouth corner.
Here, the first vertical distance dy1 and the second vertical distance dy2 may be preset values, for example, taking a scenario in which the present solution is applied to monitoring the driving state of the vehicle driver, an image of the vehicle driver in the face frontal state may be collected in advance through a camera disposed right in front of and above the vehicle driver's seat, and the first vertical distance dy1 and the second vertical distance dy2 may be determined according to the image of the vehicle driver in the face frontal state.
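For illustration, a minimal Python sketch of computing the frontal-state reference values from a pre-collected frontal image. It assumes the five feature points are given as named (x, y) tuples (hypothetical keys, matching the other sketches in this description) and takes each reference value as the average of the left and right measurements, consistent with the definitions of dy1 and dy2 above; dx1 and dx2 are used later in the correction of the horizontal deflection angle.

```python
def frontal_reference(p):
    """p: five frontal-state feature points as (x, y) tuples, keyed
    'left_eye', 'right_eye', 'nose_tip', 'left_mouth', 'right_mouth'.
    Returns (dx1, dx2, dy1, dy2): horizontal/vertical distances from the
    nose tip to the eyes and to the mouth corners, averaged left/right."""
    nx, ny = p["nose_tip"]
    dx1 = (abs(nx - p["left_eye"][0]) + abs(nx - p["right_eye"][0])) / 2.0
    dx2 = (abs(nx - p["left_mouth"][0]) + abs(nx - p["right_mouth"][0])) / 2.0
    dy1 = (abs(ny - p["left_eye"][1]) + abs(ny - p["right_eye"][1])) / 2.0
    dy2 = (abs(ny - p["left_mouth"][1]) + abs(ny - p["right_mouth"][1])) / 2.0
    return dx1, dx2, dy1, dy2
```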
When calculating the horizontal deflection angle of the face in the face region from the first vertical distance dy1, the second vertical distance dy2, the third horizontal distance dx3, the fourth horizontal distance dx4, the fifth horizontal distance dx5, and the sixth horizontal distance dx6, a first angle may be calculated from dy1 and dx4, a second angle may be calculated from dy1 and dx3, and the difference between the first angle and the second angle is taken as the eye horizontal deflection angle; a third angle is calculated from dy2 and dx6, a fourth angle is calculated from dy2 and dx5, and the difference between the third angle and the fourth angle is taken as the mouth horizontal deflection angle; the average of the eye horizontal deflection angle and the mouth horizontal deflection angle is then calculated as the horizontal deflection angle of the face in the face region.
Equation 1 for calculating the horizontal deflection angle α of the face in the face region from the first vertical distance dy1, the second vertical distance dy2, the third horizontal distance dx3, the fourth horizontal distance dx4, the fifth horizontal distance dx5, and the sixth horizontal distance dx6 may be as follows:

$$\alpha = \frac{1}{2}\left[\left(\arctan\frac{dx4}{dy1} - \arctan\frac{dx3}{dy1}\right) + \left(\arctan\frac{dx6}{dy2} - \arctan\frac{dx5}{dy2}\right)\right] \qquad (1)$$
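A minimal Python sketch of equation 1; taking each angle as the arctangent of the horizontal-to-vertical distance ratio is an assumption consistent with the geometry described above.

```python
import math

def horizontal_deflection(dy1, dy2, dx3, dx4, dx5, dx6):
    """Equation 1: average the eye-based and mouth-based yaw estimates.

    dy1, dy2: frontal-state vertical distances (nose tip to eyes / mouth).
    dx3..dx6: horizontal distances measured in the target face region.
    Returns the horizontal deflection angle in degrees.
    """
    eye_yaw = math.atan2(dx4, dy1) - math.atan2(dx3, dy1)    # first angle minus second angle
    mouth_yaw = math.atan2(dx6, dy2) - math.atan2(dx5, dy2)  # third angle minus fourth angle
    return math.degrees((eye_yaw + mouth_yaw) / 2.0)
```

For a frontal face, dx3 equals dx4 and dx5 equals dx6, so both differences vanish and the computed angle is 0°.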
Optionally, in the embodiment of the present invention, the horizontal deflection angle calculated by the above formula may also be corrected. For example, an actual deflection angle corresponding to the horizontal deflection angle of the face in the face region may be queried in a preset correction relation table, where the correction relation table contains the correspondence between horizontal deflection angles and actual deflection angles, and the actual deflection angle is acquired as the corrected horizontal deflection angle of the face in the face region.
Specifically, since the tip of the nose is farther from the rotation axis than the eyes or the mouth, the nose shifts by a larger distance in the horizontal direction when the head deflects horizontally. The correspondence between the actual horizontal deflection angle and the calculated horizontal deflection angle can therefore be determined from a first horizontal distance dx1, a second horizontal distance dx2, a first distance difference dz1, and a second distance difference dz2, where dx1 is the horizontal distance between the nose tip and the eyes when the face pose is in the frontal state, dx2 is the horizontal distance between the nose tip and the mouth corners when the face pose is in the frontal state, dz1 is the difference between the distance from the nose tip to the rotation axis and the distance from the eyes to the rotation axis, dz2 is the difference between the distance from the nose tip to the rotation axis and the distance from the mouth corners to the rotation axis, and the rotation axis is the axis of the horizontal deflection of the face.
the correspondence between the actual horizontal deflection angle and the calculated horizontal deflection angle can be expressed by the following calculation formula 2:
Figure BDA0001625303900000141
where α' is the actual horizontal deflection angle and α is the horizontal deflection angle calculated by the above equation 1.
Since dz1 and dz2 differ from person to person (the height of the nose bridge differs, and is generally 0.8 to 1.0 times dx1), according to the distribution of typical human faces, dz1 = dz2 = 0.9·dx1 may be taken in the embodiment of the present invention. In practical applications, other values may be used as needed, for example dz1 = dz2 = 0.8·dx1 or dz1 = dz2 = dx1; the proportional relationship among dz1, dz2, and dx1 is not limited in the embodiment of the present invention.
Based on the correspondence between the actual horizontal deflection angle and the calculated horizontal deflection angle, a correction relation table containing the correspondence between each value calculated by equation 1 and the actual horizontal deflection angle may be preset. When recognizing the horizontal deflection angle of a face in an image, after a horizontal deflection angle has been calculated by equation 1, the corresponding corrected horizontal deflection angle can be queried directly from the correction relation table.
Alternatively, in another possible implementation, after a horizontal deflection angle has been calculated by equation 1, the corrected horizontal deflection angle may be calculated directly from the correspondence between the actual horizontal deflection angle and the calculated horizontal deflection angle.
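A minimal sketch of the table-based correction. The table entries below are purely illustrative placeholders (in practice they would be precomputed offline from equation 2 with dz1 = dz2 = 0.9·dx1, as described above), and linear interpolation between entries is an added assumption; the patent only requires a lookup.

```python
import bisect

# Illustrative placeholder entries only: (calculated yaw, actual yaw) in degrees.
CORRECTION_TABLE = [(0.0, 0.0), (5.0, 3.2), (10.0, 6.5), (20.0, 13.4), (30.0, 20.8)]

def correct_yaw(calculated):
    """Return the actual yaw for a calculated yaw, interpolating linearly."""
    xs = [c for c, _ in CORRECTION_TABLE]
    ys = [a for _, a in CORRECTION_TABLE]
    if calculated <= xs[0]:
        return ys[0]
    if calculated >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, calculated)  # xs[i-1] <= calculated < xs[i]
    t = (calculated - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])
```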
Secondly, determining the pitch angle of the face in the face region.
In the embodiment of the present invention, when determining the pitch angle of the face, the first vertical distance dy1 and the second vertical distance dy2 may be acquired, where the first vertical distance dy1 is the vertical distance between the nose tip and the eyes when the face pose is in the frontal state, and the second vertical distance dy2 is the vertical distance between the nose tip and the mouth corners when the face pose is in the frontal state. A third vertical distance dy3 and a fourth vertical distance dy4 are extracted from the feature point position information, where the third vertical distance dy3 is the vertical distance between the nose tip and the eyes in the face region, and the fourth vertical distance dy4 is the vertical distance between the nose tip and the mouth corners in the face region. The pitch angle of the face in the face region is then calculated according to the first vertical distance dy1, the second vertical distance dy2, the third vertical distance dy3, and the fourth vertical distance dy4.
The vertical distance between the nose tip and the eyes in the face region may be an average of the vertical distance between the nose tip and the left eye and the vertical distance between the nose tip and the right eye, and correspondingly, the vertical distance between the nose tip and the mouth corner in the face region may be an average of the vertical distance between the nose tip and the left mouth corner and the vertical distance between the nose tip and the right mouth corner.
When calculating the pitch angle of the face in the face region from the first vertical distance dy1, the second vertical distance dy2, the third vertical distance dy3, and the fourth vertical distance dy4, the product of dy1, dy4, and a first preset constant may be calculated as a first numerical value, the product of dy2, dy3, and a second preset constant as a second numerical value, the product of dy1, dy4, and a third preset constant as a third numerical value, and the product of dy2, dy3, and a fourth preset constant as a fourth numerical value; the difference between the first numerical value and the second numerical value is calculated to obtain a fifth numerical value; the sum of the third numerical value and the fourth numerical value is calculated to obtain a sixth numerical value; and the arctangent of the ratio of the fifth numerical value to the sixth numerical value gives the pitch angle of the face in the face region.
The first, second, third, and fourth preset constants can be obtained from the angle between the eye-to-nose-tip line and the horizontal plane of the face, and the angle between the mouth-to-nose-tip line and the horizontal plane of the face.
Equation 3 for calculating the pitch angle β of the face in the face region may be as follows, where a1, a2, a3, and a4 are the first, second, third, and fourth preset constants:

$$\beta = \arctan\frac{a_1 \cdot dy1 \cdot dy4 - a_2 \cdot dy2 \cdot dy3}{a_3 \cdot dy1 \cdot dy4 + a_4 \cdot dy2 \cdot dy3} \qquad (3)$$
Equation 3 is derived as follows. Let θ1 denote the angle between the eye-to-nose-tip line and the horizontal plane of the face, and θ2 the angle between the mouth-to-nose-tip line and the horizontal plane of the face. When the face pitches by an angle β, the measured vertical distances relate to the frontal-state vertical distances by the following equation 4:

$$\frac{dy3}{dy1} = \frac{\sin(\theta_1 + \beta)}{\sin\theta_1}, \qquad \frac{dy4}{dy2} = \frac{\sin(\theta_2 - \beta)}{\sin\theta_2} \qquad (4)$$

Expanding equation 4 and solving for tan β yields equation 3, with the preset constants a1 to a4 expressed as trigonometric functions of θ1 and θ2.
Usually, the angle θ1 between the eye-to-nose-tip line and the horizontal plane of the face is between 35° and 40°, and the angle θ2 between the mouth-to-nose-tip line and the horizontal plane of the face is between 42° and 47°. In the embodiment of the present invention, fixed values of θ1 and θ2 may be preset within these ranges, and the first preset constant a1, the second preset constant a2, the third preset constant a3, and the fourth preset constant a4 may be set according to the preset angles.
Preferably, θ1 may be 37° and θ2 may be 45°; in that case, the first preset constant a1 may be cos 37°·sin 45°, the second preset constant a2 may be cos 45°·sin 53°, the third preset constant a3 may be sin 37°·sin 45°, and the fourth preset constant a4 may be sin 45°·sin 53°.
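Putting the pitch calculation together, a minimal Python sketch of equation 3 using the preferred constants:

```python
import math

# Preferred preset constants from the description (angle arguments in degrees).
A1 = math.cos(math.radians(37)) * math.sin(math.radians(45))
A2 = math.cos(math.radians(45)) * math.sin(math.radians(53))
A3 = math.sin(math.radians(37)) * math.sin(math.radians(45))
A4 = math.sin(math.radians(45)) * math.sin(math.radians(53))

def pitch_angle(dy1, dy2, dy3, dy4):
    """Equation 3: pitch from frontal-state (dy1, dy2) and measured (dy3, dy4)
    vertical distances.

    numerator   = a1*dy1*dy4 - a2*dy2*dy3   (the fifth numerical value)
    denominator = a3*dy1*dy4 + a4*dy2*dy3   (the sixth numerical value)
    """
    num = A1 * dy1 * dy4 - A2 * dy2 * dy3
    den = A3 * dy1 * dy4 + A4 * dy2 * dy3
    return math.degrees(math.atan2(num, den))
```

Note that a1 and a2 are numerically equal (cos 37°·sin 45° = sin 53°·cos 45°), so in the frontal state, where dy3 = dy1 and dy4 = dy2, the numerator vanishes and the computed pitch is 0°.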
Thirdly, determining the left-right deflection angle of the face in the face region.
In the embodiment of the present invention, when calculating the left-right deflection angle of the face, a seventh horizontal distance dx7, a fifth vertical distance dy5, an eighth horizontal distance dx8, and a sixth vertical distance dy6 may be extracted from the feature point position information, where the seventh horizontal distance dx7 is the horizontal distance between the left and right eyes in the face region, the fifth vertical distance dy5 is the vertical distance between the left and right eyes in the face region, the eighth horizontal distance dx8 is the horizontal distance between the left and right mouth corners in the face region, and the sixth vertical distance dy6 is the vertical distance between the left and right mouth corners in the face region. The left-right deflection angle of the face in the face region is then calculated according to the seventh horizontal distance dx7, the fifth vertical distance dy5, the eighth horizontal distance dx8, and the sixth vertical distance dy6.
When calculating the left-right deflection angle of the face in the face region from the seventh horizontal distance dx7, the fifth vertical distance dy5, the eighth horizontal distance dx8, and the sixth vertical distance dy6, an eye left-right deflection angle can be calculated from the seventh horizontal distance dx7 and the fifth vertical distance dy5; a mouth left-right deflection angle is calculated from the eighth horizontal distance dx8 and the sixth vertical distance dy6; and the average of the eye left-right deflection angle and the mouth left-right deflection angle is calculated as the left-right deflection angle of the face in the face region.
Specifically, taking the five extracted feature points corresponding to the left eye, the right eye, the nose tip, the left mouth corner, and the right mouth corner as an example, refer to fig. 5 and fig. 6: fig. 5 shows a schematic diagram of the feature points in the frontal state according to the embodiment of the present invention, and fig. 6 shows a schematic diagram of the feature points after the face has deflected (including horizontal deflection, pitch, and left-right deflection) relative to the frontal state.
The relative positional relationship between the respective feature points shown in fig. 5 can be obtained by processing the pre-captured front face image in the manner of steps 301 and 302, and the dx1, dx2, dy1, and dy2 can be further obtained from the relative positional relationship between the respective feature points in fig. 5.
The relative positional relationship between the feature points shown in fig. 6 can be obtained by processing the deflected face image (i.e., the target image) in the manner of steps 301 and 302, and the dx3 to dx8 and dy3 to dy6 can be further obtained from the relative positional relationship between the feature points in fig. 6.
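For illustration, a minimal sketch of extracting dx3 to dx8 and dy3 to dy6 from the five feature points of the target image, as used in step 303. The point keys are the same hypothetical names as in the earlier sketches; dy3 and dy4 are averaged over the two eyes and the two mouth corners as described above, while dy5 and dy6 are kept signed so the direction of the left-right deflection is preserved.

```python
def measure_distances(p):
    """p: dict of (x, y) tuples for 'left_eye', 'right_eye', 'nose_tip',
    'left_mouth', 'right_mouth' (image coordinates, x right, y down)."""
    nx, ny = p["nose_tip"]
    dx3 = abs(nx - p["left_eye"][0])     # nose tip to left eye, horizontal
    dx4 = abs(nx - p["right_eye"][0])    # nose tip to right eye, horizontal
    dx5 = abs(nx - p["left_mouth"][0])   # nose tip to left mouth corner, horizontal
    dx6 = abs(nx - p["right_mouth"][0])  # nose tip to right mouth corner, horizontal
    dx7 = abs(p["left_eye"][0] - p["right_eye"][0])      # between the eyes, horizontal
    dx8 = abs(p["left_mouth"][0] - p["right_mouth"][0])  # between mouth corners, horizontal
    dy3 = (abs(ny - p["left_eye"][1]) + abs(ny - p["right_eye"][1])) / 2.0      # nose tip to eyes, vertical (average)
    dy4 = (abs(ny - p["left_mouth"][1]) + abs(ny - p["right_mouth"][1])) / 2.0  # nose tip to mouth corners, vertical (average)
    dy5 = p["left_eye"][1] - p["right_eye"][1]      # between the eyes, vertical (signed)
    dy6 = p["left_mouth"][1] - p["right_mouth"][1]  # between mouth corners, vertical (signed)
    return dx3, dx4, dx5, dx6, dx7, dx8, dy3, dy4, dy5, dy6
```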
The horizontal deflection angle, the pitch angle, and the left-right deflection angle of the face in fig. 6 can then be calculated from dx1 to dx8 and dy1 to dy6 of fig. 5 and fig. 6, using the method described in step 303.
Equation 5 for calculating the left-right deflection angle γ of the face in the face region from the seventh horizontal distance dx7, the fifth vertical distance dy5, the eighth horizontal distance dx8, and the sixth vertical distance dy6 may be as follows:

$$\gamma = \frac{1}{2}\left(\arctan\frac{dy5}{dx7} + \arctan\frac{dy6}{dx8}\right) \qquad (5)$$
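A minimal sketch of equation 5, averaging the tilt angles of the eye line and the mouth line:

```python
import math

def left_right_deflection(dx7, dy5, dx8, dy6):
    """Equation 5: roll as the mean of the eye-line and mouth-line tilt angles.
    dy5/dx7 is the slope of the line through the eyes; dy6/dx8, the mouth line."""
    eye_roll = math.atan2(dy5, dx7)
    mouth_roll = math.atan2(dy6, dx8)
    return math.degrees((eye_roll + mouth_roll) / 2.0)
```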
after the horizontal deflection angle, the pitch angle, and the left-right deflection angle of the face in the face area are determined, specific scene application can be performed according to the determined horizontal deflection angle, pitch angle, and left-right deflection angle.
For example, taking the application of this scheme to a scenario of monitoring the driving state of a vehicle driver: a camera and a recognition device are disposed in the vehicle, and the shooting angle of the camera is adjusted in advance so that the camera captures the frontal face of the driver when the driver looks straight ahead. On first use, the camera captures a frontal face image of the driver (for example, the driver is guided to look straight ahead by a voice or visual interface), and the recognition device processes the frontal face image according to steps 301 and 302 to obtain dx1, dx2, dy1, and dy2.
In the vehicle driving process, the shooting device collects images in real time and sends the collected images to the recognition device, the recognition device processes the collected images according to the steps 301 to 303 to determine the face posture in the images, namely the face posture of a vehicle driver, when the determined face posture meets the preset conditions, the view field of the vehicle driver deviates from the right front of the vehicle and is too large, driving safety can be affected, and at the moment, a prompt can be sent to remind the driver to pay attention to the right front of the vehicle. Wherein the preset condition comprises at least one of the following conditions:
the face pose in the face region is a first preset pose;
the face pose in the face region deviates from a second preset pose;
and the time length for which the face pose in the face region deviates from a third preset pose reaches a preset time length.
The first preset pose may be one in which any one of the horizontal deflection angle, the pitch angle, and the left-right deflection angle exceeds its corresponding angle value range. For example, when any one of the horizontal deflection angle, the pitch angle, and the left-right deflection angle in the determined face pose exceeds 30°, the driver's field of view is considered to have deviated too far from the area directly in front of the vehicle.
The second preset pose or the third preset pose may be one in which the horizontal deflection angle, the pitch angle, and the left-right deflection angle are all within their corresponding angle value ranges. The second preset pose and the third preset pose may be the same pose, or they may be different poses.
For example, taking the second preset pose as a pose in which the driver's field of view is directed at or near the area directly in front of the vehicle (that is, the horizontal deflection angle, the pitch angle, and the left-right deflection angle are all within a small range), the recognition device may issue a prompt when it recognizes that the driver's face pose deviates too far from straight ahead.
Alternatively, taking the third preset pose as a pose in which the driver's field of view is directed at or near the area directly in front of the vehicle, the recognition device may issue a warning when it recognizes that the driver's face pose deviates from straight ahead by too large an angle and the deviation lasts longer than a predetermined threshold (for example, longer than 30 s); a minimal sketch of such a condition check follows.
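By way of illustration only, the following Python sketch checks the preset conditions above on a stream of pose estimates. The 30° angle limit, the 30 s duration limit, and all class, method, and variable names are assumptions made for this example rather than values fixed by the patent:

```python
import time

ANGLE_LIMIT = 30.0     # degrees; example threshold for all three angles
DURATION_LIMIT = 30.0  # seconds; example duration threshold

class PoseMonitor:
    """Checks the preset alert conditions on successive face-pose estimates."""

    def __init__(self):
        self._deviated_since = None  # start time of the current deviation, if any

    def check(self, yaw_deg, pitch_deg, roll_deg):
        """Return an alert string, or None if no preset condition is met."""
        deviated = any(abs(a) > ANGLE_LIMIT for a in (yaw_deg, pitch_deg, roll_deg))
        if not deviated:
            self._deviated_since = None
            return None
        if self._deviated_since is None:
            self._deviated_since = time.monotonic()
        # Duration-based condition: the deviation has persisted too long.
        if time.monotonic() - self._deviated_since >= DURATION_LIMIT:
            return "warning: face pose has deviated from straight ahead for too long"
        # Instantaneous condition: an angle exceeds its range right now.
        return "prompt: please pay attention to the road directly ahead"
```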
In summary, in the scheme shown in the embodiment of the present invention, the feature points of the face region in the image are located, and the face pose in the face region is calculated according to the relative position relationship between the feature points in the face region, so that it is not necessary to extract depth information in the image and construct a three-dimensional model, which simplifies the calculation complexity, increases the recognition speed of the face pose, and improves the recognition efficiency.
In addition, according to the scheme of the embodiment of the present invention, the face pose is decomposed into the horizontal deflection angle, the pitch angle, and the left-right deflection angle of the face relative to the frontal face state, and the face pose can be confirmed simply by calculating these three angles from the relative positions of the feature points in the face region, which further simplifies the calculation complexity, increases the recognition speed of the face pose, and improves the recognition efficiency.
In addition, according to the scheme shown in the embodiment of the present invention, the horizontal deflection angle, the pitch angle, and the left-right deflection angle of the face relative to the frontal face state can be rapidly calculated from the relative positions of only five feature points (the left eye, the right eye, the nose tip, the left mouth corner, and the right mouth corner) in the target image; the amount of data to be processed is small, so the recognition speed of the face pose is improved.
Referring to fig. 7, a block diagram of an apparatus for determining a face pose in an image according to an embodiment of the present invention is shown. The apparatus may be implemented as part or all of a computer device in hardware or a combination of hardware and software to perform all or part of the steps in fig. 2 or fig. 3. The apparatus may include:
a region detection module 701, configured to detect a face region in a target image;
an information extraction module 702, configured to extract feature point position information in the face region, where the feature point position information is used to indicate a relative position between feature points in a face included in the face region;
and the pose calculation module 703 is configured to calculate a face pose in the face region according to the feature point position information in the face region.
Optionally, the pose calculation module is configured to calculate a horizontal deflection angle, a pitch angle, and a left-right deflection angle of the face in the face region according to the feature point position information.
Optionally, the feature points include a feature point corresponding to the left eye, a feature point corresponding to the right eye, a feature point corresponding to the nose tip, a feature point corresponding to the left mouth corner, and a feature point corresponding to the right mouth corner.
Optionally, the pose calculation module is specifically configured to:
Acquiring a first vertical distance dy1 and a second vertical distance dy2, wherein the first vertical distance dy1 is an average value of a vertical distance between the nose tip and the left eye and a vertical distance between the nose tip and the right eye when the face posture is in a frontal state, and the second vertical distance dy2 is an average value of a vertical distance between the nose tip and the left corner of the mouth and a vertical distance between the nose tip and the right corner of the mouth when the face posture is in a frontal state;
extracting a third horizontal distance dx3, a fourth horizontal distance dx4, a fifth horizontal distance dx5, and a sixth horizontal distance dx6 from the feature point position information; the third horizontal distance dx3 is the horizontal distance between the nose tip and the left eye in the face region, the fourth horizontal distance dx4 is the horizontal distance between the nose tip and the right eye in the face region, the fifth horizontal distance dx5 is the horizontal distance between the nose tip and the left mouth corner in the face region, and the sixth horizontal distance dx6 is the horizontal distance between the nose tip and the right mouth corner in the face region;
calculating a horizontal deflection angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third horizontal distance dx3, the fourth horizontal distance dx4, the fifth horizontal distance dx5, and the sixth horizontal distance dx 6.
Optionally, when calculating the horizontal deflection angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third horizontal distance dx3, the fourth horizontal distance dx4, the fifth horizontal distance dx5, and the sixth horizontal distance dx6, the pose calculation module is specifically configured to:
Calculating a first angle from the first vertical distance dy1 and the fourth horizontal distance dx4, a second angle from the first vertical distance dy1 and the third horizontal distance dx3, and a difference between the first angle and the second angle as an eye horizontal deflection angle;
calculating a third angle according to the second vertical distance dy2 and the sixth horizontal distance dx6, a fourth angle according to the second vertical distance dy2 and the fifth horizontal distance dx5, and calculating a difference between the third angle and the fourth angle as a mouth horizontal deflection angle;
calculating an average of the eye horizontal deflection angle and the mouth horizontal deflection angle as a horizontal deflection angle of the face in the face region.
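As an illustration of this horizontal deflection (yaw) computation, here is a minimal Python sketch. The function name, the degree units, and the exact arctangent form of each angle (the patent only states that each angle is calculated from the given distance pair) are assumptions made for this example:

```python
import math

def yaw_angle(dy1, dy2, dx3, dx4, dx5, dx6):
    """Estimate the horizontal deflection (yaw) angle, in degrees.

    dy1 and dy2 come from the frontal-face reference image; dx3..dx6
    are measured in the target image. All distances are in pixels.
    """
    # Eye yaw: difference between the first angle (from dy1 and dx4)
    # and the second angle (from dy1 and dx3).
    eye_yaw = math.degrees(math.atan(dx4 / dy1) - math.atan(dx3 / dy1))
    # Mouth yaw: difference between the third angle (from dy2 and dx6)
    # and the fourth angle (from dy2 and dx5).
    mouth_yaw = math.degrees(math.atan(dx6 / dy2) - math.atan(dx5 / dy2))
    # The face yaw is the average of the eye and mouth yaws.
    return (eye_yaw + mouth_yaw) / 2.0
```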
The apparatus further comprises: a correction module 704, configured to query an actual deflection angle corresponding to the horizontal deflection angle of the face in the face region according to a preset correction relationship table, where the correction relationship table includes the correspondence between horizontal deflection angles and actual deflection angles, and to acquire the actual deflection angle as the corrected horizontal deflection angle of the face in the face region.
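A minimal Python sketch of such a table-based correction follows. The sample table values and the linear interpolation between adjacent entries are assumptions made for illustration; the patent only specifies a lookup from horizontal deflection angle to actual deflection angle:

```python
# Example correction relationship table: measured horizontal deflection
# angle (degrees) -> actual deflection angle (degrees). Values are illustrative.
CORRECTION_TABLE = [(0.0, 0.0), (10.0, 12.0), (20.0, 25.0), (30.0, 40.0)]

def correct_yaw(measured):
    """Map a measured horizontal deflection angle to the actual angle,
    interpolating linearly between adjacent table entries."""
    sign = 1.0 if measured >= 0 else -1.0
    m = abs(measured)
    for (m0, a0), (m1, a1) in zip(CORRECTION_TABLE, CORRECTION_TABLE[1:]):
        if m0 <= m <= m1:
            t = (m - m0) / (m1 - m0)
            return sign * (a0 + t * (a1 - a0))
    return sign * CORRECTION_TABLE[-1][1]  # clamp beyond the last table entry
```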
Optionally, the pose calculation module is specifically configured to:
Acquiring a first vertical distance dy1 and a second vertical distance dy 2; the first vertical distance dy1 is an average value of a vertical distance between the nose tip and the left eye and a vertical distance between the nose tip and the right eye when the face pose is in the frontal state, and the second vertical distance dy2 is an average value of a vertical distance between the nose tip and the left corner of the mouth and a vertical distance between the nose tip and the right corner of the mouth when the face pose is in the frontal state;
extracting a third vertical distance dy3 and a fourth vertical distance dy4 from the feature point position information; the third vertical distance dy3 is the vertical distance between the nose tip and the eyes in the face region, and the fourth vertical distance dy4 is the vertical distance between the nose tip and the mouth corners in the face region;
calculating a pitch angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third vertical distance dy3, and the fourth vertical distance dy 4.
Optionally, when calculating the pitch angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third vertical distance dy3, the fourth vertical distance dy4, the first included angle α, and the second included angle β, the pose calculation module is specifically configured to:
Calculating a product of the first vertical distance dy1, the fourth vertical distance dy4 and a first preset constant as a first numerical value, a product of the second vertical distance dy2, the third vertical distance dy3 and a second preset constant as a second numerical value, a product of the first vertical distance dy1, the fourth vertical distance dy4 and a third preset constant as a third numerical value, and a product of the second vertical distance dy2, the third vertical distance dy3 and a fourth preset constant as a fourth numerical value; calculating a difference between the first value and the second value to obtain a fifth value; calculating the sum of the third numerical value and the fourth numerical value to obtain a sixth numerical value; and performing arc tangent calculation on the ratio of the fifth numerical value to the sixth numerical value to obtain the pitch angle of the face in the face region.
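For illustration, here is a minimal Python sketch of this pitch computation. The function name and degree units are assumptions; the four preset constants are left as parameters, since their values are not fixed in this passage (plausibly they encode the trigonometry of the first and second included angles α and β, but that is an assumption):

```python
import math

def pitch_angle(dy1, dy2, dy3, dy4, c1, c2, c3, c4):
    """Estimate the pitch angle, in degrees.

    dy1 and dy2 come from the frontal-face reference image; dy3 and dy4
    are measured in the target image. c1..c4 are the preset constants.
    """
    v1 = dy1 * dy4 * c1  # first value
    v2 = dy2 * dy3 * c2  # second value
    v3 = dy1 * dy4 * c3  # third value
    v4 = dy2 * dy3 * c4  # fourth value
    v5 = v1 - v2         # fifth value: difference of the first and second
    v6 = v3 + v4         # sixth value: sum of the third and fourth
    # Arc tangent of the ratio of the fifth value to the sixth value.
    return math.degrees(math.atan(v5 / v6))
```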
Optionally, the pose calculation module is specifically configured to:
Extracting a seventh horizontal distance dx7, a fifth vertical distance dy5, an eighth horizontal distance dx8, and a sixth vertical distance dy6 from the feature point position information; the seventh horizontal distance dx7 is the horizontal distance between the left and right eyes in the face region, the fifth vertical distance dy5 is the vertical distance between the left and right eyes in the face region, the eighth horizontal distance dx8 is the horizontal distance between the left and right mouth corners in the face region, and the sixth vertical distance dy6 is the vertical distance between the left and right mouth corners in the face region;
calculating a left-right deflection angle of the face in the face region according to the seventh horizontal distance dx7, the fifth vertical distance dy5, the eighth horizontal distance dx8, and the sixth vertical distance dy 6.
Optionally, when calculating the left-right deflection angle of the face in the face region according to the seventh horizontal distance dx7, the fifth vertical distance dy5, the eighth horizontal distance dx8, and the sixth vertical distance dy6, the pose calculation module is specifically configured to:
Calculating an eye left-right deflection angle according to the seventh horizontal distance dx7 and the fifth vertical distance dy5;
calculating a mouth left-right yaw angle from the eighth horizontal distance dx8 and the sixth vertical distance dy 6;
calculating an average value of the eye left and right deflection angles and the mouth left and right deflection angles as left and right deflection angles of the face in the face region.
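Tying the earlier sketches together, the following illustrative composition shows how the region detection, information extraction, and pose calculation modules might cooperate. The class and method names, the feature-point dictionary layout, and the use of eye/mouth midpoints for dy3 and dy4 are assumptions for this example; it reuses the yaw_angle, pitch_angle, and roll_angle sketches above:

```python
class FacePoseEstimator:
    """Illustrative composition of the region detection module, the
    information extraction module, and the pose calculation module."""

    def __init__(self, detect_face_region, extract_feature_points, reference):
        self.detect_face_region = detect_face_region          # region detection module
        self.extract_feature_points = extract_feature_points  # information extraction module
        self.reference = reference  # {"dy1": ..., "dy2": ...} from the frontal-face image

    def estimate(self, image, pitch_constants):
        """Return (yaw, pitch, roll) in degrees for the face in `image`."""
        region = self.detect_face_region(image)
        p = self.extract_feature_points(image, region)  # five (x, y) feature points
        # Horizontal distances from the nose tip to the eyes and mouth corners.
        dx3 = abs(p["nose"][0] - p["left_eye"][0])
        dx4 = abs(p["nose"][0] - p["right_eye"][0])
        dx5 = abs(p["nose"][0] - p["left_mouth"][0])
        dx6 = abs(p["nose"][0] - p["right_mouth"][0])
        # Vertical distances from the nose tip to the eyes and mouth corners
        # (midpoints used here as an assumption).
        dy3 = abs(p["nose"][1] - (p["left_eye"][1] + p["right_eye"][1]) / 2.0)
        dy4 = abs((p["left_mouth"][1] + p["right_mouth"][1]) / 2.0 - p["nose"][1])
        yaw = yaw_angle(self.reference["dy1"], self.reference["dy2"],
                        dx3, dx4, dx5, dx6)
        pitch = pitch_angle(self.reference["dy1"], self.reference["dy2"],
                            dy3, dy4, *pitch_constants)
        roll = roll_angle(p["left_eye"], p["right_eye"],
                          p["left_mouth"], p["right_mouth"])
        return yaw, pitch, roll
```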
Optionally, the target image is an image of the face of a vehicle driver, and the apparatus further comprises:
a reminding module 705, configured to send a reminding message when the face pose in the face region meets a preset condition; wherein the preset condition comprises at least one of the following conditions:
the face posture in the face region is a first preset posture;
the face pose in the face region deviates from a second preset pose;
and the time length of the human face posture in the human face region deviating from the third preset posture reaches the preset time length.
In summary, the apparatus shown in the embodiment of the present invention locates the feature points of the face region in the image, and calculates the face pose in the face region according to the relative position relationship between the feature points in the face region, without extracting depth information in the image and constructing a three-dimensional model, thereby simplifying the calculation complexity, increasing the recognition speed of the face pose, and improving the recognition efficiency.
In addition, the apparatus shown in the embodiment of the present invention decomposes the face pose into the horizontal deflection angle, the pitch angle, and the left-right deflection angle of the face relative to the frontal face state, and the face pose can be confirmed simply by calculating these three angles from the relative positions of the feature points in the face region, which further simplifies the calculation complexity, increases the recognition speed of the face pose, and improves the recognition efficiency.
In addition, the apparatus shown in the embodiment of the present invention can quickly calculate the horizontal deflection angle, the pitch angle, and the left-right deflection angle of the face relative to the frontal face state from the relative positions of only five feature points (the left eye, the right eye, the nose tip, the left mouth corner, and the right mouth corner) in the target image; the amount of data to be processed is small, so the recognition speed of the face pose is improved.
Referring to fig. 8, a schematic structural diagram of a computer device according to an exemplary embodiment of the present invention is shown. The computer device includes: a processor 81, a communication component 82, a memory 83, and a bus 84.
The processor 81 includes one or more processing cores, and performs various functions and processes information by running software programs and modules.
The communication component 82 may include at least one of a wired network interface (such as an Ethernet interface) and a wireless network interface (such as a WLAN, BLE, or ZigBee interface). The communication component 82 is used to modulate and/or demodulate information and to receive or transmit the information via wired or wireless signals.
The communication component 82 is used to receive images transmitted by other devices (such as an image capture or generation device); when an image capture device is included in the computer device itself, the communication component 82 is an optional component.
The memory 83 is connected to the processor 81 via a bus 84.
Memory 83 may be used to store software programs and modules.
The memory 83 may store at least one application module 86 corresponding to the functions described herein. The processor 81 may implement all or part of the steps in fig. 2 or fig. 3 above by executing the application module 86.
Further, the memory 83 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
An embodiment of the present invention also provides a non-transitory computer-readable storage medium, such as a memory, including instructions executable by a processor of a computer device to perform a method of determining a pose of a face in an image as shown in various embodiments of the present invention. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules may be merely a logical division, and in actual implementation, there may be another division, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not executed.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (17)

1. A method of determining a face pose in an image, the method comprising:
detecting a face region in a target image;
extracting feature point position information in the face region, wherein the feature point position information is used for indicating the relative positions of feature points in a face contained in the face region, and the feature points comprise feature points corresponding to a left eye, feature points corresponding to a right eye, feature points corresponding to a nose tip, feature points corresponding to a left mouth corner and feature points corresponding to a right mouth corner;
calculating a horizontal deflection angle, a pitch angle, and a left-right deflection angle of the face in the face region according to the feature point position information;
wherein, the calculating the horizontal deflection angle of the face in the face region according to the feature point position information includes:
acquiring a first vertical distance dy1 and a second vertical distance dy2, where the first vertical distance dy1 is an average value of a vertical distance between the nose tip and the left eye and a vertical distance between the nose tip and the right eye when the face pose is in a frontal state, and the second vertical distance dy2 is an average value of a vertical distance between the nose tip and the left mouth corner and a vertical distance between the nose tip and the right mouth corner when the face pose is in a frontal state;
extracting a third horizontal distance dx3, a fourth horizontal distance dx4, a fifth horizontal distance dx5, and a sixth horizontal distance dx6 from the feature point position information; the third horizontal distance dx3 is the horizontal distance between the nose tip and the left eye in the face region, the fourth horizontal distance dx4 is the horizontal distance between the nose tip and the right eye in the face region, the fifth horizontal distance dx5 is the horizontal distance between the nose tip and the left mouth corner in the face region, and the sixth horizontal distance dx6 is the horizontal distance between the nose tip and the right mouth corner in the face region;
calculating a horizontal deflection angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third horizontal distance dx3, the fourth horizontal distance dx4, the fifth horizontal distance dx5, and the sixth horizontal distance dx 6.
2. The method of claim 1, wherein the calculating the horizontal deflection angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third horizontal distance dx3, the fourth horizontal distance dx4, the fifth horizontal distance dx5, and the sixth horizontal distance dx6 comprises:
calculating a first angle from the first vertical distance dy1 and the fourth horizontal distance dx4, a second angle from the first vertical distance dy1 and the third horizontal distance dx3, and a difference between the first angle and the second angle as an eye horizontal deflection angle;
calculating a third angle according to the second vertical distance dy2 and the sixth horizontal distance dx6, a fourth angle according to the second vertical distance dy2 and the fifth horizontal distance dx5, and calculating a difference between the third angle and the fourth angle as a mouth horizontal deflection angle;
calculating an average of the eye horizontal deflection angle and the mouth horizontal deflection angle as a horizontal deflection angle of the face in the face region.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
inquiring an actual deflection angle corresponding to the horizontal deflection angle of the face in the face area according to a preset correction relation table, wherein the correction relation table comprises the corresponding relation between the horizontal deflection angle and the actual deflection angle;
and acquiring the actual deflection angle as the horizontal deflection angle of the face in the corrected face area.
4. The method according to claim 1, wherein the calculating the pitch angle of the face in the face region according to the feature point position information comprises:
acquiring a first vertical distance dy1 and a second vertical distance dy 2; the first vertical distance dy1 is an average value of a vertical distance between the nose tip and the left eye and a vertical distance between the nose tip and the right eye when the face posture is the frontal state, and the second vertical distance dy2 is an average value of a vertical distance between the nose tip and the left corner of the mouth and a vertical distance between the nose tip and the right corner of the mouth when the face posture is the frontal state;
extracting a third vertical distance dy3 and a fourth vertical distance dy4 from the feature point position information; the third vertical distance dy3 is the vertical distance between the nose tip and the eyes in the face region, and the fourth vertical distance dy4 is the vertical distance between the nose tip and the mouth corners in the face region;
calculating a pitch angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third vertical distance dy3, and the fourth vertical distance dy 4.
5. The method of claim 4, wherein the calculating the pitch angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third vertical distance dy3, and the fourth vertical distance dy4 comprises:
calculating a product of the first vertical distance dy1, the fourth vertical distance dy4 and a first preset constant as a first numerical value, a product of the second vertical distance dy2, the third vertical distance dy3 and a second preset constant as a second numerical value, a product of the first vertical distance dy1, the fourth vertical distance dy4 and a third preset constant as a third numerical value, and a product of the second vertical distance dy2, the third vertical distance dy3 and a fourth preset constant as a fourth numerical value;
calculating a difference between the first value and the second value to obtain a fifth value;
calculating the sum of the third numerical value and the fourth numerical value to obtain a sixth numerical value;
and performing arc tangent calculation on the ratio of the fifth numerical value to the sixth numerical value to obtain the pitch angle of the face in the face region.
6. The method according to claim 1, wherein the calculating the left-right deflection angle of the face in the face region according to the feature point position information comprises:
extracting a seventh horizontal distance dx7, a fifth vertical distance dy5, an eighth horizontal distance dx8, and a sixth vertical distance dy6 from the feature point position information; the seventh horizontal distance dx7 is the horizontal distance between the left and right eyes in the face region, the fifth vertical distance dy5 is the vertical distance between the left and right eyes in the face region, the eighth horizontal distance dx8 is the horizontal distance between the left and right mouth corners in the face region, and the sixth vertical distance dy6 is the vertical distance between the left and right mouth corners in the face region;
calculating a left-right deflection angle of the face in the face region according to the seventh horizontal distance dx7, the fifth vertical distance dy5, the eighth horizontal distance dx8, and the sixth vertical distance dy 6.
7. The method of claim 6, wherein the calculating the left-right deflection angle of the face in the face region according to the seventh horizontal distance dx7, the fifth vertical distance dy5, the eighth horizontal distance dx8, and the sixth vertical distance dy6 comprises:
calculating an eye left-right deflection angle from the seventh horizontal distance dx7 and the fifth vertical distance dy 5;
calculating a mouth left-right yaw angle from the eighth horizontal distance dx8 and the sixth vertical distance dy 6;
calculating an average value of the eye left and right deflection angles and the mouth left and right deflection angles as left and right deflection angles of the face in the face region.
8. The method according to any one of claims 1, 2, or 4 to 7, wherein the target image is an image of a face of a vehicle driver, the method further comprising:
when the face posture in the face area meets a preset condition, sending a reminding message;
wherein the preset condition comprises at least one of the following conditions:
the face posture in the face region is a first preset posture;
the face pose in the face region deviates from a second preset pose;
and the time length of the human face posture in the human face region deviating from the third preset posture reaches the preset time length.
9. An apparatus for determining a pose of a face in an image, the apparatus comprising:
the region detection module is used for detecting a face region in the target image;
the information extraction module is used for extracting feature point position information in the face region, wherein the feature point position information is used for indicating the relative positions of feature points in a face contained in the face region, and the feature points comprise feature points corresponding to a left eye, feature points corresponding to a right eye, feature points corresponding to a nose tip, feature points corresponding to a left mouth corner and feature points corresponding to a right mouth corner;
the pose calculation module is used for calculating a horizontal deflection angle, a pitch angle, and a left-right deflection angle of the face in the face region according to the feature point position information;
wherein the pose calculation module is specifically configured to:
acquiring a first vertical distance dy1 and a second vertical distance dy2, where the first vertical distance dy1 is an average value of a vertical distance between the nose tip and the left eye and a vertical distance between the nose tip and the right eye when the face pose is in a frontal state, and the second vertical distance dy2 is an average value of a vertical distance between the nose tip and the left mouth corner and a vertical distance between the nose tip and the right mouth corner when the face pose is in a frontal state;
extracting a third horizontal distance dx3, a fourth horizontal distance dx4, a fifth horizontal distance dx5, and a sixth horizontal distance dx6 from the feature point position information; the third horizontal distance dx3 is the horizontal distance between the nose tip and the left eye in the face region, the fourth horizontal distance dx4 is the horizontal distance between the nose tip and the right eye in the face region, the fifth horizontal distance dx5 is the horizontal distance between the nose tip and the left mouth corner in the face region, and the sixth horizontal distance dx6 is the horizontal distance between the nose tip and the right mouth corner in the face region;
calculating a horizontal deflection angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third horizontal distance dx3, the fourth horizontal distance dx4, the fifth horizontal distance dx5, and the sixth horizontal distance dx 6.
10. The apparatus according to claim 9, wherein, when calculating the horizontal deflection angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third horizontal distance dx3, the fourth horizontal distance dx4, the fifth horizontal distance dx5, and the sixth horizontal distance dx6, the pose calculation module is specifically configured to:
Calculating a first angle according to the first vertical distance dy1 and the fourth horizontal distance dx4, calculating a second angle according to the first vertical distance dy1 and the third horizontal distance dx3, calculating a difference between the first angle and the second angle as an eye horizontal deflection angle;
calculating a third angle according to the second vertical distance dy2 and the sixth horizontal distance dx6, a fourth angle according to the second vertical distance dy2 and the fifth horizontal distance dx5, and calculating a difference between the third angle and the fourth angle as a mouth horizontal deflection angle;
calculating an average of the eye horizontal deflection angle and the mouth horizontal deflection angle as a horizontal deflection angle of the face in the face region.
11. The apparatus according to claim 9 or 10, further comprising: a correction module, configured to
query an actual deflection angle corresponding to the horizontal deflection angle of the face in the face region according to a preset correction relationship table, where the correction relationship table includes a correspondence between horizontal deflection angles and actual deflection angles, and acquire the actual deflection angle as the corrected horizontal deflection angle of the face in the face region.
12. The apparatus according to claim 9, wherein the pose calculation module is specifically configured to:
Acquiring a first vertical distance dy1 and a second vertical distance dy 2; the first vertical distance dy1 is an average value of a vertical distance between the nose tip and the left eye and a vertical distance between the nose tip and the right eye when the face posture is the frontal state, and the second vertical distance dy2 is an average value of a vertical distance between the nose tip and the left corner of the mouth and a vertical distance between the nose tip and the right corner of the mouth when the face posture is the frontal state;
extracting a third vertical distance dy3 and a fourth vertical distance dy4 from the feature point position information; the third vertical distance dy3 is the vertical distance between the nose tip and the eyes in the face region, and the fourth vertical distance dy4 is the vertical distance between the nose tip and the mouth corners in the face region;
calculating a pitch angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third vertical distance dy3, and the fourth vertical distance dy 4.
13. The apparatus according to claim 12, wherein, when calculating the pitch angle of the face in the face region according to the first vertical distance dy1, the second vertical distance dy2, the third vertical distance dy3, and the fourth vertical distance dy4, the pose calculation module is specifically configured to:
Calculating a product of the first vertical distance dy1, the fourth vertical distance dy4 and a first preset constant as a first numerical value, a product of the second vertical distance dy2, the third vertical distance dy3 and a second preset constant as a second numerical value, a product of the first vertical distance dy1, the fourth vertical distance dy4 and a third preset constant as a third numerical value, and a product of the second vertical distance dy2, the third vertical distance dy3 and a fourth preset constant as a fourth numerical value;
calculating a difference between the first value and the second value to obtain a fifth value;
calculating the sum of the third numerical value and the fourth numerical value to obtain a sixth numerical value;
and performing arc tangent calculation on the ratio of the fifth numerical value to the sixth numerical value to obtain the pitch angle of the face in the face region.
14. The apparatus according to claim 9, wherein the pose calculation module is specifically configured to:
Extracting a seventh horizontal distance dx7, a fifth vertical distance dy5, an eighth horizontal distance dx8, and a sixth vertical distance dy6 from the feature point position information; the seventh horizontal distance dx7 is the horizontal distance between the left and right eyes in the face region, the fifth vertical distance dy5 is the vertical distance between the left and right eyes in the face region, the eighth horizontal distance dx8 is the horizontal distance between the left and right mouth corners in the face region, and the sixth vertical distance dy6 is the vertical distance between the left and right mouth corners in the face region;
calculating a left-right deflection angle of the face in the face region according to the seventh horizontal distance dx7, the fifth vertical distance dy5, the eighth horizontal distance dx8, and the sixth vertical distance dy 6.
15. The apparatus according to claim 14, wherein, when calculating the left-right deflection angle of the face in the face region according to the seventh horizontal distance dx7, the fifth vertical distance dy5, the eighth horizontal distance dx8, and the sixth vertical distance dy6, the pose calculation module is specifically configured to:
Calculating an eye left-right deflection angle from the seventh horizontal distance dx7 and the fifth vertical distance dy 5;
calculating a mouth left-right yaw angle from the eighth horizontal distance dx8 and the sixth vertical distance dy 6;
calculating an average value of the eye left and right deflection angles and the mouth left and right deflection angles as left and right deflection angles of the face in the face region.
16. The apparatus according to any one of claims 9, 10, or 12 to 15, wherein the target image is an image of a face of a vehicle driver, the apparatus further comprising:
the reminding module is used for sending out a reminding message when the human face posture in the human face area meets a preset condition;
wherein the preset condition comprises at least one of the following conditions:
the face posture in the face region is a first preset posture;
the face pose in the face region deviates from a second preset pose;
and the time length of the human face posture in the human face region deviating from the third preset posture reaches the preset time length.
17. A computer device comprising a processor and a memory, the memory having stored therein instructions, execution of which by the processor causes the computer device to carry out a method of determining a pose of a face in an image according to any one of claims 1 to 8.
CN201810321161.9A 2018-04-11 2018-04-11 Method and device for determining human face pose in image and computer equipment Active CN110363052B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810321161.9A CN110363052B (en) 2018-04-11 2018-04-11 Method and device for determining human face pose in image and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810321161.9A CN110363052B (en) 2018-04-11 2018-04-11 Method and device for determining human face pose in image and computer equipment

Publications (2)

Publication Number Publication Date
CN110363052A CN110363052A (en) 2019-10-22
CN110363052B (en) 2022-05-20

Family

ID=68214177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810321161.9A Active CN110363052B (en) 2018-04-11 2018-04-11 Method and device for determining human face pose in image and computer equipment

Country Status (1)

Country Link
CN (1) CN110363052B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113642368B (en) * 2020-05-11 2023-08-18 杭州海康威视数字技术股份有限公司 Face pose determining method, device, equipment and storage medium
CN112507848B (en) * 2020-12-03 2021-05-14 中科智云科技有限公司 Mobile terminal real-time human face attitude estimation method
CN113869186B (en) * 2021-09-24 2022-12-16 合肥的卢深视科技有限公司 Model training method and device, electronic equipment and computer readable storage medium
CN115599216A (en) * 2022-10-25 2023-01-13 广州豹驰实业有限公司 Interactive education robot and interactive method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070041644A1 (en) * 2005-08-17 2007-02-22 Samsung Electronics Co., Ltd. Apparatus and method for estimating a facial pose and a face recognition system using the method
CN104688251A (en) * 2015-03-02 2015-06-10 西安邦威电子科技有限公司 Method for detecting fatigue driving and driving in abnormal posture under multiple postures
CN107122054A (en) * 2017-04-27 2017-09-01 青岛海信医疗设备股份有限公司 A kind of detection method and device of face deflection angle and luffing angle

Also Published As

Publication number Publication date
CN110363052A (en) 2019-10-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant