CN113449562A - Face pose correction method and device - Google Patents

Face pose correction method and device

Info

Publication number
CN113449562A
Authority
CN
China
Prior art keywords
user
eyes
coordinates
posture
prompting
Prior art date
Legal status
Pending
Application number
CN202010225679.XA
Other languages
Chinese (zh)
Inventor
姜盛乾
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN202010225679.XA priority Critical patent/CN113449562A/en
Publication of CN113449562A publication Critical patent/CN113449562A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face pose correction method and device. The method comprises: a coordinate acquisition step, in which a camera device acquires the user's face coordinate data so as to obtain a plurality of coordinate data of the user; and a position and posture determination and prompting step, in which the positional relationship between the user and the camera device and the posture of the user are calculated from the acquired coordinate data, so as to give the user adjustment information on position and posture. The position and posture determination and prompting step includes an abscissa and ordinate determination and prompting step: based on the coordinates of the two eyes among the plurality of coordinate data, it is judged whether the horizontal and vertical coordinates of the two eyes lie within the effective recognition range of the camera device; when they do not, the region in which the user is located is judged from the eye coordinates and the coordinates of the eye midpoint, and the user is prompted to make the movement corresponding to that region.

Description

Face pose correction method and device
Technical Field
The invention relates to the technical field of computers, in particular to a face pose correction method and device.
Background
In the current face payment process, the user generally adjusts his or her distance and relative position to the camera device by watching the self-view picture displayed by the device.
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
in the face payment process, some current face payment devices, on sensing that the user is far away, prompt the user to approach the device so that the user's head image can be captured; but the prompt is imprecise and cannot tell the user exactly whether to move forward, left or right, so the user cannot quickly and accurately satisfy the requirements that face payment places on the self-view picture.
In addition, during face payment part of the recognition area may be blocked by other people, or the user may wear an occluding item (external accessories such as glasses or a hat). Current devices give no basic reminder on this point: they merely report that recognition failed after the face is captured and then return to the initial state, so the user cannot know exactly why recognition failed and, again, cannot quickly and accurately satisfy the requirements that face payment places on the self-view picture.
Disclosure of Invention
In view of this, embodiments of the present invention provide a face pose correction method and device that can tell the user precisely how to move into the recognition area and accurately alert the user to occlusion problems so as to facilitate accurate recognition, allowing the user to quickly and accurately satisfy the requirements that face payment places on the self-view picture and complete the payment.
In order to achieve the above object, according to a first aspect of an embodiment of the present invention, there is provided a face pose correction method, characterized by including:
a coordinate acquisition step of acquiring a plurality of coordinate data of a user by using a camera device; and
a position and posture determination and presentation step of calculating a positional relationship between the user and the image pickup apparatus and a posture of the user using the plurality of pieces of coordinate data acquired, thereby giving the user adjustment information on the position and posture,
the position and posture judging and prompting step comprises the following steps:
and a horizontal/vertical coordinate determination and presentation step of determining whether horizontal/vertical coordinates of both eyes are within an effective recognition range of the imaging device based on the coordinates of both eyes in the plurality of coordinate data of the user, and determining a region where the user is located based on the coordinates of both eyes and a coordinate of a midpoint of both eyes when the horizontal/vertical coordinates of both eyes are not within the effective recognition range, and presenting the user to perform a motion corresponding to the region where the user is located.
Further, in the abscissa and ordinate determining and presenting step, with the center of the imaging device as the origin, the abscissa-ordinate (XOY) plane of the imaging device is divided into 2N regions by N straight lines passing through the origin, so as to determine which of the 2N regions the user is located in and prompt the user to make the corresponding movement toward the origin, where N is a natural number of 2 or more.
Further, N is 4, and the slopes of the N straight lines are 1/3, 3, -1/3, and-3, respectively.
Further, the position and posture determining and prompting step may further include:
and a depth coordinate determination and prompting step of determining whether the depth coordinates of both eyes are within the effective recognition range according to the coordinates of both eyes of the user, and prompting the user to approach or depart from the image pickup device when any one of the depth coordinates of both eyes is not within the effective recognition range.
Further, the plurality of coordinate data may further include head coordinates, neck coordinates, and shoulder center coordinates of the user.
Further, the position and posture determining and prompting step may further include:
and a posture judgment and prompt step, namely calculating the posture of the user according to the coordinates of the eyes, the head, the neck and the shoulder center of the user, and prompting the user to perform corresponding posture adjustment.
Further, in the gesture determining and prompting step, calculating the gesture of the user includes calculating the slope of the user's two eyes in the depth-direction plane and the angle between the neck-to-head vector and the neck-to-shoulder-center vector.
Further, the face pose correction method may further include:
and a shielding prompting step, namely acquiring a user photo by using the camera device, extracting a characteristic value in the photo, judging whether the face of the user is shielded or not by using the characteristic value, and prompting the user to remove the shielding when the judgment result shows that the face of the user is shielded.
According to a second aspect of the embodiments of the present invention, there is provided a face pose correction apparatus, including:
a coordinate acquisition module that acquires a plurality of coordinate data of a user using the camera device; and
a position and posture determination and prompting module that calculates a positional relationship between the user and the image pickup apparatus and a posture of the user using the plurality of coordinate data acquired, to give the user adjustment information on the position and posture, wherein
the position and posture judging and prompting module comprises:
and a horizontal/vertical coordinate determination and presentation unit that determines whether the horizontal and vertical coordinates of both eyes are within an effective recognition range of the imaging device based on the coordinates of both eyes in the plurality of coordinate data of the user, determines the region where the user is located based on the coordinates of both eyes and the coordinates of the midpoint of both eyes when the horizontal and vertical coordinates are not within the effective recognition range, and prompts the user to make the movement corresponding to that region.
According to a third aspect of the embodiments of the present invention, there is provided a face pose correction apparatus, including:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of the first aspect.
According to a fourth aspect of embodiments of the present invention, there is provided a computer readable medium having a computer program stored thereon, wherein the program is adapted to implement the method of the first aspect when executed by a processor.
One embodiment of the above invention has the following advantages or benefits: the embodiment provides a face pose correction method and device that can tell the user precisely how to move into the recognition area and accurately alert the user to occlusion problems so as to facilitate accurate recognition; the user can therefore quickly and accurately satisfy the requirements that face payment places on the self-view picture and complete the payment, the adjustment time required for face payment is effectively shortened, and the user experience is improved.
Further effects of the above non-conventional alternatives are described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
fig. 1 is a schematic diagram of a main flow of a face pose correction method according to an embodiment of the present invention;
FIG. 2 is a flow chart of the specific operation steps of the position and pose determination and prompting step in the face pose correction method according to the embodiment of the invention;
fig. 3 is a diagram illustrating one example of the position of a KinectV2 camera used by the face pose correction method according to the embodiment of the present invention;
FIG. 4 is a schematic view of the capture range of the KinectV2 camera shown in FIG. 3;
fig. 5 is a diagram illustrating one example of the N straight lines passing through the origin (the center of the image pickup device) and the regions into which they divide the plane;
fig. 6 is a schematic diagram of the main blocks of the face pose correction apparatus according to the embodiment of the present invention, and specifically illustrates the main constituent units of the position and posture determination and prompting module;
fig. 7 is a schematic structural diagram of a computer system suitable for implementing the face pose correction device of the embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The method can be used in various scenarios requiring face pose correction, such as face payment and self-service face-based hotel check-in; this embodiment takes the face payment scenario as the example for detailed description.
Fig. 1 is a schematic diagram of a main flow of a face pose correction method according to an embodiment of the present invention, and fig. 2 is a flowchart of specific operation steps of a position and pose determination and prompting step in the face pose correction method according to the embodiment of the present invention.
The face pose refers to the position and the posture of the face.
As shown in fig. 1 and 2, the face pose correction method according to the embodiment of the present invention includes a coordinate acquisition step S1, a position and posture determination and prompting step S2, and an occlusion prompting step S3. The position and orientation determining and presenting step S2 further includes a horizontal/vertical coordinate determining and presenting step S21, a depth coordinate determining and presenting step S22, and an orientation determining and presenting step S23.
The respective steps of the face pose correction method according to the embodiment of the invention will be described in detail below with reference to fig. 1 to 5.
Coordinate acquisition step S1: a plurality of coordinate data of a user is acquired by a camera. The plurality of coordinate data includes binocular coordinates, head coordinates, neck coordinates, and shoulder center coordinates.
In the present invention, KinectV2 is taken as the example of the imaging apparatus. Specifically, in the embodiment of the present invention, the KinectV2 camera acquires the coordinate points of the face, for example 120 facial feature points, and the coordinates of both eyes are obtained by averaging the feature points near each eye: the left-eye coordinates A_{1,i}(x_{1,i}, y_{1,i}, z_{1,i}) and the right-eye coordinates A_{2,i}(x_{2,i}, y_{2,i}, z_{2,i}), where i denotes the frame index. The binocular coordinates can be obtained, for example, by the binocular coordinate acquisition method described in patent application CN 201710223047.8. The head coordinates A_{3,i}(x_{3,i}, y_{3,i}, z_{3,i}), neck coordinates A_{4,i}(x_{4,i}, y_{4,i}, z_{4,i}) and shoulder-center coordinates A_{5,i}(x_{5,i}, y_{5,i}, z_{5,i}) of the user are then obtained through the skeleton recognition module of the KinectV2 camera.
In the embodiment of the present invention, a KinectV2 camera (hereinafter simply "camera") may be embedded in the face payment device that executes the face pose correction method according to the present invention; preferably, the camera is disposed at the top center of the device. Alternatively, the camera may be arranged outside the face payment device, for example outside the top center as shown in fig. 3. The acquisition distance of the camera is measured from the center of the camera as origin, and the camera has an effective recognition range within which accurate recognition is guaranteed. Preferably, in this embodiment, the effective recognition range of the camera is: at most X_1 in the transverse direction (abscissa, along the X coordinate axis), at most Y_1 in the longitudinal direction (ordinate, along the Y coordinate axis), and between a nearest depth z_1 and a farthest depth Z_1 in the depth direction (along the Z coordinate axis), as shown in fig. 4.
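As a concrete illustration of the coordinate acquisition step S1, the following Python sketch (an illustration, not the patent's implementation) assembles the per-frame coordinates A_{1,i} to A_{5,i}; the Frame structure, its field names and the eye-index lists are assumptions standing in for the output of the KinectV2 SDK.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

import numpy as np

Point = Tuple[float, float, float]  # (x, y, z) in camera coordinates, origin at camera center

@dataclass
class Frame:
    face_feature_points: List[Point]   # e.g. 120 facial feature points
    left_eye_idx: List[int]            # indices of feature points near the left eye
    right_eye_idx: List[int]           # indices of feature points near the right eye
    skeleton_joints: Dict[str, Point]  # "head", "neck", "shoulder_center"

def acquire_coordinates(frame: Frame):
    """Per-frame coordinates: both eyes as the mean of nearby feature
    points (A1,i and A2,i), plus head, neck and shoulder center (A3,i
    to A5,i) from skeleton tracking."""
    pts = np.asarray(frame.face_feature_points)
    a1 = pts[frame.left_eye_idx].mean(axis=0)    # left eye  A1,i
    a2 = pts[frame.right_eye_idx].mean(axis=0)   # right eye A2,i
    a3 = np.asarray(frame.skeleton_joints["head"])             # A3,i
    a4 = np.asarray(frame.skeleton_joints["neck"])             # A4,i
    a5 = np.asarray(frame.skeleton_joints["shoulder_center"])  # A5,i
    return a1, a2, a3, a4, a5
```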
Position and orientation determination and presentation step S2:
the positional relationship of the user and the image pickup apparatus and the posture of the user are calculated using the acquired plurality of coordinate data of the user, that is, the binocular coordinate, the head coordinate, the neck coordinate, and the shoulder center coordinate, thereby giving the user adjustment information on the position and the posture.
Specifically, the position and orientation determining and presenting step S2 includes a horizontal and vertical coordinate determining and presenting step S21, a depth coordinate determining and presenting step S22, and an orientation determining and presenting step S23, and each sub-step in the position and orientation determining and presenting step S2 will be described in detail below.
Abscissa and ordinate determination and prompting step S21: based on the acquired coordinates of both eyes of the user, namely the left-eye coordinates A_{1,i}(x_{1,i}, y_{1,i}, z_{1,i}) and the right-eye coordinates A_{2,i}(x_{2,i}, y_{2,i}, z_{2,i}) (i denotes the frame index), it is judged whether the horizontal and vertical coordinates of both eyes are within the effective recognition range; when they are not, the region in which the user is located is judged from the eye coordinates and the coordinates of the eye midpoint, and the user is prompted to make the movement corresponding to that region.
Specifically,
when -X_1 ≤ x_{1,i} ≤ X_1, -Y_1 ≤ y_{1,i} ≤ Y_1, -X_1 ≤ x_{2,i} ≤ X_1 and -Y_1 ≤ y_{2,i} ≤ Y_1, the horizontal and vertical coordinates of both eyes are judged to be within the effective recognition range of the camera in the horizontal and vertical directions, and the process proceeds directly to the depth coordinate determination and prompting step S22 (described later) without any prompt. When this condition is not satisfied, the horizontal and vertical coordinates of both eyes are judged to be outside the effective recognition range, and the midpoint of the two eyes is calculated:

(x_i, y_i) = ((x_{1,i} + x_{2,i}) / 2, (y_{1,i} + y_{2,i}) / 2)

from which the slope of the eye midpoint relative to the camera center (the coordinate origin) is obtained:

k_i = y_i / x_i
Using the calculated eye-midpoint coordinate x_i and slope k_i, the region in which the user's face is located is judged and the user is prompted to make the corresponding movement.
Specifically, the XOY plane is divided into 2N regions by N straight lines passing through the origin (the center coordinates of the imaging device), N being a natural number of 2 or more. Preferably, as shown in fig. 5, the four lines of slope k = k_1, k = k_2, k = -k_1 and k = -k_2 divide the XOY plane into 8 regions, namely regions 1 to 8. Further preferably, k_1 ∈ [1/4, 1/2] and k_2 ∈ [2, 4]; more preferably, for example, k_1 = 1/3 and k_2 = 3. From the eye-midpoint coordinate x_i and slope k_i above, it is determined which of the divided 2N regions (for example, regions 1 to 8) the user's face lies in, and the prompt corresponding to that region is issued so that the user moves toward the origin.
Hereinafter, the case of k_1 = 1/3 and k_2 = 3 is described in detail with reference to fig. 5. Table 1 below details the correspondence between the eye-midpoint coordinate x_i and slope k_i and the region in which the user's face lies, together with the prompt content for each region.
Table 1. Horizontal and vertical movement prompts
(The table is an image in the original publication; its cells, which map each combination of eye-midpoint coordinate x_i and slope k_i to one of regions 1 to 8 and the corresponding movement prompt, are not recoverable here.)
When the user moves according to the prompt information until the condition that the horizontal and vertical coordinates of both eyes are within the effective recognition range, namely -X_1 ≤ x_{1,i} ≤ X_1, -Y_1 ≤ y_{1,i} ≤ Y_1, -X_1 ≤ x_{2,i} ≤ X_1 and -Y_1 ≤ y_{2,i} ≤ Y_1, is satisfied, no further prompt for lateral or longitudinal movement is given, and the process proceeds to the depth coordinate determination and prompting step S22, described below.
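As an illustration of step S21, the following sketch classifies the eye midpoint into one of the 8 regions for the preferred configuration k_1 = 1/3, k_2 = 3. The bounds X_1 and Y_1, the prompt wording and the left/right sign convention are assumptions (the camera frame is taken as non-mirrored), since Table 1 itself is not recoverable from the source.

```python
X1, Y1 = 0.4, 0.3   # assumed effective-range half-widths in meters (X_1, Y_1)

def in_xy_range(a1, a2) -> bool:
    """True when the horizontal and vertical coordinates of both eyes
    lie within the effective recognition range."""
    return all(-X1 <= a[0] <= X1 and -Y1 <= a[1] <= Y1 for a in (a1, a2))

def xy_prompt(a1, a2, k1=1/3, k2=3.0):
    """Step S21: return a movement prompt, or None when in range."""
    if in_xy_range(a1, a2):
        return None                              # proceed to depth check S22
    xi = (a1[0] + a2[0]) / 2                     # eye-midpoint abscissa x_i
    yi = (a1[1] + a2[1]) / 2                     # eye-midpoint ordinate y_i
    ki = yi / xi if xi != 0 else float("inf")    # slope k_i toward the origin
    # the four lines k = +-k1 and k = +-k2 split the XOY plane into 8 regions
    if abs(ki) >= k2:                            # offset is mostly vertical
        return "please move down" if yi > 0 else "please move up"
    if abs(ki) <= k1:                            # offset is mostly horizontal
        return "please move left" if xi > 0 else "please move right"
    vert = "down" if yi > 0 else "up"            # diagonal regions: combine both
    horiz = "left" if xi > 0 else "right"
    return f"please move {vert} and to the {horiz}"
```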
Depth coordinate determination and presentation step S22: it is determined whether the depth coordinates of both eyes of the user are within a valid recognition range of the image pickup device in the depth direction (Z direction). When either one of the depth coordinates of both eyes of the user is outside the effective recognition range, the user is prompted to approach or move away from the image pickup device. On the other hand, when the depth coordinates of both eyes of the user are both within the valid recognition range in the depth direction, the depth cue is not performed any more and the process proceeds directly to the posture determining and cueing step S23.
Specifically, when z_{1,i} < z_1 or z_{2,i} < z_1, the user is too close to the camera and is prompted to step back; when z_{1,i} > Z_1 or z_{2,i} > Z_1, the user is too far away and is prompted to move closer; otherwise no depth-direction prompt is given. When the user has approached or moved away from the camera according to the prompts so that the depth coordinates of both eyes are within the valid recognition range in the depth direction, the process proceeds to the posture determination and prompting step S23, described in detail below.
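A matching sketch of the depth check S22; the values standing in for the nearest and farthest valid depths z_1 and Z_1 are assumptions in the spirit of the KinectV2's working range.

```python
Z_NEAR, Z_FAR = 0.5, 4.5   # assumed valid depth interval [z_1, Z_1] in meters

def depth_prompt(a1, a2):
    """Step S22: return a depth prompt, or None when both eyes are in range."""
    if a1[2] < Z_NEAR or a2[2] < Z_NEAR:
        return "please step back"      # too close to the camera
    if a1[2] > Z_FAR or a2[2] > Z_FAR:
        return "please move closer"    # too far from the camera
    return None                        # proceed to the posture check S23
```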
Posture determination and prompting step S23: the posture of the user is calculated from the binocular, head, neck and shoulder-center coordinates, and the corresponding prompts are given so that the user adjusts the posture accordingly. Specifically, the slope K of the two eyes in the XOZ (depth-direction) plane is calculated,

K = (z_{1,i} - z_{2,i}) / (x_{1,i} - x_{2,i})

together with the neck-to-head vector

v_1 = A_{3,i} - A_{4,i}

and the neck-to-shoulder-center vector

v_2 = A_{5,i} - A_{4,i}

whose included angle is

θ = arccos((v_1 · v_2) / (|v_1| |v_2|)).

The corresponding posture-adjustment prompt is then given according to the values of the slope K and the angle θ.
Table 2 shows, as an example, the ranges of slope K and angle θ and their corresponding prompts.
TABLE 2. Posture prompts

Range               Prompt content
θ < 145°            Please raise your head
θ ≥ 145°, K > 0     Please turn your head to the left
θ ≥ 145°, K < 0     Please turn your head to the right
It should be noted that the correspondence between the ranges of the slope K and angle θ and the prompt content shown in Table 2 is only an example; the ranges and their correspondence may be set according to the actual situation and design requirements, for example the positional relationship and height difference between the face payment device and the operator.
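Step S23 can be sketched in the same way. The formulas for K and θ follow the reconstruction above, and the 145° threshold and prompt wording follow Table 2; the small dead band around K = 0 and the sign convention for left/right are added assumptions.

```python
import numpy as np

def posture_prompt(a1, a2, head, neck, shoulder_center, theta_min=145.0):
    """Step S23: return a posture prompt, or None when no adjustment is needed."""
    dx = a1[0] - a2[0]
    K = (a1[2] - a2[2]) / dx if dx else 0.0              # eye-line slope in the XOZ plane
    v1 = np.asarray(head) - np.asarray(neck)             # neck -> head vector
    v2 = np.asarray(shoulder_center) - np.asarray(neck)  # neck -> shoulder-center vector
    cos_t = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    theta = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))  # included angle theta
    if theta < theta_min:
        return "please raise your head"
    if abs(K) < 0.05:                                    # roughly level (assumed tolerance)
        return None
    return ("please turn your head to the left" if K > 0
            else "please turn your head to the right")
```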
Through steps S21 to S23, the position and posture of the user are judged and prompted, and the user can move according to the prompt content; the user is thus accurately guided to quickly satisfy the face payment device's requirements on face position and posture, shortening the time required for face payment and improving the user experience.
The respective sub-steps of step S2 are repeatedly performed, and when the user moves according to the prompt until the respective sub-steps do not generate any prompt any more, the process proceeds to step S3. Step S3 will be described in detail below.
Occlusion prompting step S3: a photo of the user is captured with the camera, feature values are extracted from the photo, and those feature values are used to judge whether the user's face is occluded; when it is, the user is prompted to remove the occlusion.
Specifically, a user photo is captured with the camera, photo feature values are extracted by the PCA method (for example, the positions of key feature points such as the eyes, mouth and nose, and the geometric shapes of key organs such as the eyes, serve as classification features), and the photo is then classified by a trained SVM into two classes: face not occluded and face occluded. When the classification result is that the face is not occluded, no prompt is given; when the face is occluded, the user is prompted to remove the occluding object, which may be, for example, a hat or a sun visor.
It should be noted that the PCA and SVM methods used in the above step are only examples; similar methods may be used instead, as long as they achieve the same function.
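The PCA + SVM pipeline described above can be sketched with scikit-learn as follows. The feature representation (flattened grayscale face crops rather than the hand-picked landmark features of the text), the component count and the kernel are illustrative assumptions; a real system would train on labeled occluded and unoccluded examples.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def train_occlusion_classifier(face_crops: np.ndarray, labels: np.ndarray):
    """face_crops: (n_samples, h*w) flattened grayscale crops;
    labels: 1 = face occluded, 0 = face not occluded."""
    model = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
    model.fit(face_crops, labels)
    return model

def occlusion_prompt(model, crop: np.ndarray):
    """Step S3: return a removal prompt when the crop is classified as occluded."""
    occluded = model.predict(crop.reshape(1, -1))[0] == 1
    return "please remove the item covering your face" if occluded else None
```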
In addition, in the embodiment of the present invention, the prompt content may be presented by voice, for example, but this is not limiting; it may also be displayed as text on a screen, among other possibilities.
Through the above prompts the user adjusts position and posture accordingly, and once an adjusted face picture meeting the requirements is successfully acquired, the face pose correction processing ends.
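Putting the pieces together, a minimal driver loop over the sketches above might look as follows; the camera object and its next_frame() and face_crop() methods are placeholders, not a real Kinect API, and the prompt channel is reduced to print().

```python
def say_prompt(msg: str) -> None:
    print(msg)   # stand-in for the voice or on-screen prompt channel

def correction_loop(camera, occlusion_model) -> None:
    """Repeat the position/posture checks (S2) until no prompt is produced,
    then run the occlusion check (S3)."""
    while True:
        frame = camera.next_frame()                       # hypothetical capture call
        a1, a2, a3, a4, a5 = acquire_coordinates(frame)   # step S1
        prompt = (xy_prompt(a1, a2)                       # step S21
                  or depth_prompt(a1, a2)                 # step S22
                  or posture_prompt(a1, a2, a3, a4, a5))  # step S23
        if prompt is None:
            break
        say_prompt(prompt)
    crop = camera.face_crop()                             # hypothetical face-crop call
    prompt = occlusion_prompt(occlusion_model, crop)      # step S3
    if prompt is not None:
        say_prompt(prompt)
```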
The steps of the face pose correction method according to the embodiment of the invention have been described in detail above. With this method the user can be told precisely how to move into the recognition area and accurately alerted to occlusion problems so as to facilitate accurate recognition; the user can therefore quickly and accurately satisfy the requirements that face payment places on the self-view picture and complete the payment, the adjustment time required for face payment is effectively shortened, and the user experience is improved.
The respective modules of the face pose correction apparatus 600 according to the present invention will be described in detail below with reference to fig. 6.
The face pose correction apparatus according to the embodiment of the present invention includes a coordinate acquisition module 601, a position and posture determination and prompting module 602, and an occlusion prompting module 603, wherein the position and posture determination and prompting module 602 preferably further includes an abscissa and ordinate determination and prompting unit 6021, a depth coordinate determination and prompting unit 6022, and a posture determination and prompting unit 6023.
The respective modules of the face pose correction apparatus 600 according to the embodiment of the invention will be described in detail below.
The coordinate acquisition module 601:
the coordinate acquisition module 601 acquires face coordinate data of the user using a KinectV2 camera, thereby acquiring a plurality of coordinate data of the user. The plurality of coordinate data are binocular coordinates, head coordinates, neck coordinates, and shoulder center coordinates.
Specifically, in the embodiment of the present invention, the coordinate acquisition module 601 uses the KinectV2 camera to acquire, for example, 120 facial feature points and obtains the coordinates of both eyes by averaging the feature points near each eye: the left-eye coordinates A_{1,i}(x_{1,i}, y_{1,i}, z_{1,i}) and the right-eye coordinates A_{2,i}(x_{2,i}, y_{2,i}, z_{2,i}) (i denotes the frame index). The binocular coordinates can be obtained, for example, by the binocular coordinate acquisition method described in patent application CN 201710223047.8. The coordinate acquisition module 601 then obtains the user's head coordinates A_{3,i}(x_{3,i}, y_{3,i}, z_{3,i}), neck coordinates A_{4,i}(x_{4,i}, y_{4,i}, z_{4,i}) and shoulder-center coordinates A_{5,i}(x_{5,i}, y_{5,i}, z_{5,i}) through the skeleton recognition function of the KinectV2 camera.
Position and attitude determination and prompting module 602:
the position and posture determination and prompting module 602 calculates the position relationship of the user and the image pickup apparatus and the posture of the user using the acquired plurality of coordinate data of the user, that is, the binocular coordinate, the head coordinate, the neck coordinate, and the shoulder center coordinate, thereby giving the user adjustment information on the position and posture.
Specifically, the position and posture determination and prompting module 602 includes an abscissa and ordinate determination and prompting unit 6021, a depth coordinate determination and prompting unit 6022, and a posture determination and prompting unit 6023; the respective units of the module 602 are described in detail below.
Abscissa and ordinate determination and prompting unit 6021:
the abscissa and ordinate determination and prompt unit 6021 is configured to obtain the coordinates of both eyes of the user, i.e., the left-eye coordinates a1,i(x1,i,y1,i,z1,i) Right eye coordinate A2,i(x2,i,y2,i,z2,i) (i represents data of each frame), whether the horizontal and vertical coordinates of the two eyes are in the effective identification range, and when the horizontal and vertical coordinates of the two eyes are not in the effective identification range, judging the area where the user is located according to the coordinates of the two eyes and the coordinates of the midpoint of the two eyes and prompting the user to perform movement corresponding to the area where the user is located.
Specifically,
when -X_1 ≤ x_{1,i} ≤ X_1, -Y_1 ≤ y_{1,i} ≤ Y_1, -X_1 ≤ x_{2,i} ≤ X_1 and -Y_1 ≤ y_{2,i} ≤ Y_1, the abscissa and ordinate determination and prompting unit 6021 judges that the horizontal and vertical coordinates of both eyes are within the effective recognition range of the imaging apparatus in the horizontal and vertical directions. When this condition is not satisfied, the unit 6021 judges that the horizontal and vertical coordinates of both eyes are outside the effective recognition range, and calculates the midpoint of the two eyes:

(x_i, y_i) = ((x_{1,i} + x_{2,i}) / 2, (y_{1,i} + y_{2,i}) / 2)

and further the slope of the eye midpoint relative to the camera center (the coordinate origin):

k_i = y_i / x_i
From the calculated eye-midpoint coordinate x_i and slope k_i, the abscissa and ordinate determination and prompting unit 6021 judges the region in which the user's face is located and gives the corresponding prompt.
Specifically, the abscissa and ordinate determination and prompting unit 6021 divides the XOY plane into 2N regions by N straight lines passing through the origin (the center coordinates of the imaging device), N being a natural number of 2 or more. Preferably, as shown in fig. 5, the four lines of slope k = k_1, k = k_2, k = -k_1 and k = -k_2 divide the XOY plane into 8 regions, namely regions 1 to 8. Further preferably, k_1 ∈ [1/4, 1/2] and k_2 ∈ [2, 4]; more preferably, for example, k_1 = 1/3 and k_2 = 3. From the eye-midpoint coordinate x_i and slope k_i above, the unit determines which of the divided 2N regions (for example, regions 1 to 8) the user's face lies in, and issues the prompt corresponding to that region so that the user moves toward the origin.
In particular, reference may be made to the correspondence, detailed in Table 1 above, between the eye-midpoint coordinate x_i and slope k_i and the region of the user's face, together with the prompt content listed for each region.
When the user moves according to the prompt information until the abscissa and ordinate determination and prompting unit 6021 judges that the horizontal and vertical coordinates of both eyes are within the effective recognition range, namely -X_1 ≤ x_{1,i} ≤ X_1, -Y_1 ≤ y_{1,i} ≤ Y_1, -X_1 ≤ x_{2,i} ≤ X_1 and -Y_1 ≤ y_{2,i} ≤ Y_1, no further prompt for horizontal or vertical movement is given.
Depth coordinate determination and prompting unit 6022:
the depth coordinate determination and presentation unit 6022 determines whether or not the depth coordinates of both eyes of the user are within the effective recognition range of the image pickup apparatus in the depth direction (Z direction). When the depth coordinate determination and prompting unit 6022 determines that either one of the depth coordinates of both eyes of the user is outside the effective recognition range, the user is prompted to approach or depart from the image pickup apparatus. On the other hand, when the depth coordinate determination and presentation unit 6022 determines that the depth coordinates of both eyes of the user are within the valid recognition range in the depth direction, the depth presentation is not performed any more.
Specifically, when z_{1,i} < z_1 or z_{2,i} < z_1, the depth coordinate determination and prompting unit 6022 prompts the user to step back away from the camera; when z_{1,i} > Z_1 or z_{2,i} > Z_1, it prompts the user to move closer to the camera; otherwise it gives no depth-direction prompt.
Posture determination and presentation unit 6023:
the posture judging and prompting unit 6023 calculates the posture of the user according to the coordinates of the eyes, the head, the neck and the shoulder center of the user, and performs corresponding prompting to make the user perform corresponding posture adjustment. Specifically, the pose determination and cue unit 6023 calculates the slope K of both eyes on the XOZ (depth direction) plane,
Figure BDA0002427556510000121
simultaneous calculation of neck-to-head vectors
Figure BDA0002427556510000122
Neck to shoulder center vector
Figure BDA0002427556510000123
And correspondingly prompting according to the slope K and the value of the included angle theta. As an example, reference may be made to the ranges of slope K and included angle θ and pairs thereof shown in Table 2 aboveAnd prompting the content.
The occlusion prompt module 603:
the occlusion prompting module 603 is configured to obtain a user picture by using the camera device, extract a feature value in the picture, determine whether a face of the user is occluded by using the feature value, and prompt the user to remove the occlusion when the determination result indicates that the face of the user is occluded.
Specifically, the occlusion prompting module 603 captures a user photo with the KinectV2 camera, extracts photo feature values using, for example, the PCA method (for example, the positions of key feature points such as the eyes, mouth and nose, and the geometric shapes of key organs such as the eyes, serve as classification features), and then classifies the photo with, for example, a trained SVM into two classes: face not occluded and face occluded. When the result is that the face is not occluded, the module 603 gives no prompt; when the face is occluded, it prompts the user to remove the occluding object, which may be, for example, a hat or a sun visor.
Through the above modules and sub-units of the face pose correction device, the position and posture of the user can be judged and prompted, and the user can move according to the prompt content; the user is accurately guided to quickly satisfy the face payment device's requirements on face position and posture, shortening the time required for face payment and improving the user experience.
Reference is now made to fig. 7, which is a block diagram of a computer system suitable for implementing a face payment device of an embodiment of the present invention. The face payment device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the use range of the embodiment of the present invention.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary. Furthermore, the face payment apparatus according to the present invention may be further provided with a KinectV2 camera, which may be connected to the I/O interface 705 as the input section 706, as described earlier.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program performs the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 701.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules and units described in the embodiments of the present invention may be implemented in software or in hardware. The described modules and units may also be provided in a processor, which may be described as: a processor comprising a coordinate acquisition module, a position and posture judgment and prompting module, and an occlusion prompting module, wherein the position and posture judgment and prompting module comprises an abscissa and ordinate judgment and prompting unit, a depth coordinate judgment and prompting unit, and a posture judgment and prompting unit. The names of these modules and units do not in some cases limit the modules and units themselves; for example, the coordinate acquisition module may also be described as "a module that acquires coordinates from a picture captured by an image pickup apparatus".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to comprise:
coordinate acquisition step S1: a plurality of coordinate data of the user is acquired by the KinectV2 camera. The plurality of coordinate data are binocular coordinates, head coordinates, neck coordinates, and shoulder center coordinates.
Position and orientation determination and presentation step S2: the positional relationship of the user with the imaging apparatus and the posture of the user are calculated using the acquired plurality of coordinate data of the user, that is, the binocular coordinate, the head coordinate, the neck coordinate, and the shoulder center coordinate, thereby giving the user adjustment information on the position and the posture.
Specifically, the position and orientation determining and presenting step S2 includes a horizontal and vertical coordinate determining and presenting step S21, a depth coordinate determining and presenting step S22, and an orientation determining and presenting step S23.
Here, the abscissa and ordinate determination and prompting step S21: based on the acquired coordinates of both eyes of the user, namely the left-eye coordinates A_{1,i}(x_{1,i}, y_{1,i}, z_{1,i}) and the right-eye coordinates A_{2,i}(x_{2,i}, y_{2,i}, z_{2,i}) (i denotes the frame index), it is judged whether the horizontal and vertical coordinates of both eyes are within the effective recognition range; when they are not, the region in which the user is located is judged from the eye coordinates and the coordinates of the eye midpoint, and the user is prompted to make the movement corresponding to that region.
Depth coordinate determination and presentation step S22: it is determined whether the depth coordinates of both eyes of the user are within a valid recognition range of the image pickup device in the depth direction (Z direction). When either one of the depth coordinates of both eyes of the user is outside the effective recognition range, the user is prompted to approach or move away from the image pickup device. On the other hand, when the depth coordinates of both eyes of the user are both within the valid recognition range in the depth direction, the depth cue is not performed any more and the process proceeds directly to the posture determining and cueing step S23.
Posture determination and prompting step S23: the posture of the user is calculated from the binocular, head, neck and shoulder-center coordinates, and the corresponding prompts are given so that the user adjusts the posture accordingly. Specifically, the slope K of the two eyes in the XOZ (depth-direction) plane is calculated,

K = (z_{1,i} - z_{2,i}) / (x_{1,i} - x_{2,i})

together with the neck-to-head vector

v_1 = A_{3,i} - A_{4,i}

and the neck-to-shoulder-center vector

v_2 = A_{5,i} - A_{4,i}

and the corresponding prompt is given according to the slope K and the included angle θ between the two vectors.
Occlusion prompting step S3: a photo of the user is captured with the camera, feature values are extracted from the photo, and those feature values are used to judge whether the user's face is occluded; when it is, the user is prompted to remove the occlusion.
Specifically, a user photo is captured with the camera, photo feature values are extracted by the PCA method (for example, the positions of key feature points such as the eyes, mouth and nose, and the geometric shapes of key organs such as the eyes, serve as classification features), and the photo is then classified by a trained SVM into two classes: face not occluded and face occluded. When the classification result is that the face is not occluded, no prompt is given; when the face is occluded, the user is prompted to remove the occluding object, which may be, for example, a hat or a sun visor.
According to the technical scheme of the embodiment of the invention, the user can be told precisely how to move into the recognition area and accurately alerted to occlusion problems so as to facilitate accurate recognition; the user can therefore quickly and accurately satisfy the requirements that face payment places on the self-view picture and complete the payment, the adjustment time required for face payment is effectively shortened, and the user experience is improved.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A face pose correction method is characterized by comprising the following steps:
acquiring coordinates, namely acquiring a plurality of coordinate data of a user by using a camera device; and
position and posture determination and prompting, wherein the positional relationship between the user and the camera device and the posture of the user are calculated by using the plurality of acquired coordinate data, so that the user is given adjustment information about the position and/or the posture; wherein,
the position and attitude determination and prompting includes:
and judging and prompting a horizontal and vertical coordinate, namely judging whether the horizontal and vertical coordinates of the two eyes are positioned in an effective recognition range of the camera device according to the coordinates of the two eyes in the plurality of coordinate data of the user, judging a region where the user is positioned according to the coordinates of the two eyes and the coordinates of the midpoint of the two eyes when the horizontal and vertical coordinates of the two eyes are not positioned in the effective recognition range, and prompting the user to perform movement corresponding to the region where the user is positioned.
2. The face pose correction method according to claim 1, wherein,
in the abscissa and ordinate determining and presenting step, with the center of the image pickup apparatus as the origin, the abscissa-ordinate (XOY) plane of the image pickup apparatus is divided into 2N regions by N straight lines passing through the origin, thereby determining which of the 2N regions the user is located in, and further prompting the user to make the corresponding movement toward the origin, and wherein,
N is a natural number of 2 or more.
3. The face pose correction method according to claim 2, wherein,
N is 4, and
the slopes of the N straight lines are 1/3, 3, -1/3 and-3, respectively.
4. The face pose correction method according to any one of claims 1 to 3, wherein,
the position and posture determining and prompting step further comprises:
and a depth coordinate determination and prompting step of determining whether the depth coordinates of both eyes are within the effective recognition range according to the coordinates of both eyes of the user, and prompting the user to approach or depart from the image pickup device when any one of the depth coordinates of both eyes is not within the effective recognition range.
5. The face pose correction method according to claim 4, wherein,
the plurality of coordinate data further includes head, neck, and shoulder center coordinates of the user.
6. The face pose correction method according to claim 5, wherein,
the position and posture determining and prompting step further comprises:
and a posture judgment and prompt step, namely calculating the posture of the user according to the coordinates of the eyes, the head, the neck and the shoulder center of the user, and prompting the user to perform corresponding posture adjustment.
7. The face pose correction method according to claim 6, wherein,
in the gesture determining and prompting step, calculating the gesture of the user includes calculating the slope of the eyes of the user in the depth direction plane and the included angle between the neck-to-head vector and the neck-to-shoulder center vector.
8. The method according to claim 7, characterized by further comprising:
and a shielding prompting step, namely acquiring a user photo by using the camera device, extracting a characteristic value in the photo, judging whether the face of the user is shielded or not by using the characteristic value, and prompting the user to remove the shielding when the judgment result shows that the face of the user is shielded.
9. A face pose correction apparatus, comprising:
a coordinate acquisition module that acquires a plurality of coordinate data of a user using the camera device; and
a position and posture determination and presentation module that calculates a positional relationship between the user and the image pickup apparatus and a posture of the user using the plurality of pieces of coordinate data acquired, thereby giving the user adjustment information on the position and posture,
the position and posture judging and prompting module comprises:
and a horizontal/vertical coordinate determination and presentation unit that determines whether the horizontal and vertical coordinates of both eyes are within an effective recognition range of the imaging device based on the coordinates of both eyes in the plurality of coordinate data of the user, determines the region where the user is located based on the coordinates of both eyes and the coordinates of the midpoint of both eyes when the horizontal and vertical coordinates are not within the effective recognition range, and prompts the user to make the movement corresponding to that region.
10. A face pose correction apparatus, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
11. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-8.
CN202010225679.XA 2020-03-26 2020-03-26 Face pose correction method and device Pending CN113449562A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010225679.XA CN113449562A (en) 2020-03-26 2020-03-26 Face pose correction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010225679.XA CN113449562A (en) 2020-03-26 2020-03-26 Face pose correction method and device

Publications (1)

Publication Number Publication Date
CN113449562A true CN113449562A (en) 2021-09-28

Family

ID=77807381

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010225679.XA Pending CN113449562A (en) 2020-03-26 2020-03-26 Face pose correction method and device

Country Status (1)

Country Link
CN (1) CN113449562A (en)

Similar Documents

Publication Publication Date Title
US11632537B2 (en) Method and apparatus for obtaining binocular panoramic image, and storage medium
Fischer et al. Rt-gene: Real-time eye gaze estimation in natural environments
EP3226208B1 (en) Information processing device and computer program
CN105474263B (en) System and method for generating three-dimensional face model
WO2020215898A1 (en) Three-dimensional reconstruction method, apparatus and system, model training method, and storage medium
CN104978548B (en) A kind of gaze estimation method and device based on three-dimensional active shape model
CN108363995B (en) Method and apparatus for generating data
US20160328825A1 (en) Portrait deformation method and apparatus
WO2015026645A1 (en) Automatic calibration of scene camera for optical see-through head mounted display
US11137824B2 (en) Physical input device in virtual reality
WO2013159686A1 (en) Three-dimensional face recognition for mobile devices
CN114143495A (en) Gaze correction of multi-perspective images
KR101642402B1 (en) Apparatus and method for capturing digital image for guiding photo composition
CN109937434B (en) Image processing method, device, terminal and storage medium
KR101510312B1 (en) 3D face-modeling device, system and method using Multiple cameras
CN111353336B (en) Image processing method, device and equipment
CN112183200B (en) Eye movement tracking method and system based on video image
CN105988566B (en) A kind of information processing method and electronic equipment
CN109144250B (en) Position adjusting method, device, equipment and storage medium
CN110892444A (en) Method for removing object to be processed in image and device for executing method
CN110245549A (en) Real-time face and object manipulation
CN110895433B (en) Method and apparatus for user interaction in augmented reality
CN112749611A (en) Face point cloud model generation method and device, storage medium and electronic equipment
US11488415B2 (en) Three-dimensional facial shape estimating device, three-dimensional facial shape estimating method, and non-transitory computer-readable medium
CN113449562A (en) Face pose correction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination