WO2020215283A1 - Facial recognition method, processing chip and electronic device - Google Patents


Info

Publication number
WO2020215283A1
WO2020215283A1 (application PCT/CN2019/084338)
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
face
image
dimensional image
preset
Prior art date
Application number
PCT/CN2019/084338
Other languages
French (fr)
Chinese (zh)
Inventor
吴勇辉
刘川熙
詹洁琼
Original Assignee
深圳市汇顶科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市汇顶科技股份有限公司
Priority to CN201980000648.XA
Priority to PCT/CN2019/084338
Publication of WO2020215283A1


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/22 — Matching criteria, e.g. proximity measures
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 — Detection; Localisation; Normalisation

Definitions

  • This application relates to the field of image recognition technology, and in particular to a face recognition method, processing chip and electronic equipment.
  • The purpose of some embodiments of this application is to provide a face recognition method, processing chip, and electronic device that can recognize whether a human face belongs to a preset user and determine whether the face is three-dimensional, so as to prevent a two-dimensional picture from passing face recognition verification and thereby improve the security of face recognition.
  • An embodiment of the application provides a face recognition method, including: acquiring at least one two-dimensional image containing a human face through a plurality of cameras, and selecting a first two-dimensional image and a second two-dimensional image containing the face, separately acquired by any two cameras; extracting the two-dimensional coordinates of multiple feature points of the face in the first two-dimensional image and in the second two-dimensional image; obtaining a three-dimensional mesh model corresponding to the face according to the preset parameters of the two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images; and determining whether the face is the preset user's three-dimensional face according to the three-dimensional mesh model and the preset user's three-dimensional face model.
  • An embodiment of the application also provides a processing chip. The processing chip is connected to at least one memory and to multiple cameras; the memory stores instructions executable by the processing chip, and when the instructions are executed by the processing chip, the processing chip can perform the aforementioned face recognition method.
  • An embodiment of the present application also provides an electronic device, including the aforementioned processing chip, at least one memory, and multiple cameras.
  • The embodiments of the present application collect at least one two-dimensional image containing a human face through multiple cameras, each two-dimensional image including multiple feature points of the face. A first and a second two-dimensional image, separately acquired by any two cameras, are selected, and a three-dimensional mesh model is obtained from them.
  • The mesh model is compared with the preset user's three-dimensional face model to determine whether the face is the preset user's three-dimensional face. That is, the three-dimensional mesh model corresponding to the two-dimensional images of the face is used for face recognition, which can identify whether the face belongs to the preset user and determine whether the face is three-dimensional.
  • In some embodiments, after determining whether the face is the preset user's three-dimensional face according to the three-dimensional mesh model and the preset user's three-dimensional face model, the method also includes: determining whether the face is the preset user's live face according to a multi-frame three-dimensional mesh model.
  • Performing live detection on the face verifies whether the face is the preset user's live face, preventing a face model of the preset user from passing face recognition verification.
  • In some embodiments, determining whether the face is the preset user's live face includes: selecting two frames of three-dimensional mesh models and calculating the deformation between corresponding meshes of the multiple meshes on the two frames; if at least one deformation value is greater than or equal to a preset deformation threshold, determining that the face is the preset user's live face.
  • This embodiment provides a specific implementation of determining whether a face is the preset user's live face according to the three-dimensional mesh models of the face over multiple consecutive frames.
  • In some embodiments, before obtaining the three-dimensional mesh model, the method further includes: comparing a two-dimensional image with the preset user's two-dimensional face image to determine whether the two-dimensional image comes from the preset user; if it is determined that the face comes from the preset user, entering the step of obtaining the three-dimensional mesh model corresponding to the face according to the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images.
  • In this way, face recognition using a two-dimensional image is added; since two-dimensional face recognition is faster, faces of non-preset users are rejected more quickly.
  • In some embodiments, determining whether the face is the preset user's three-dimensional face includes: calculating the relative position parameters between the meshes on the three-dimensional mesh model; determining whether each relative position parameter on the three-dimensional mesh model matches the corresponding relative position parameter on the preset three-dimensional face model; and, if the relative position parameters on the three-dimensional mesh model match those on the preset three-dimensional face model, determining that the face is the preset user's three-dimensional face.
  • This embodiment provides a specific implementation of determining whether the face is the preset user's three-dimensional face according to the three-dimensional mesh model and the preset user's three-dimensional face model.
  • In some embodiments, obtaining the three-dimensional mesh model corresponding to the face includes: calculating the three-dimensional coordinates of each feature point according to the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images; and constructing the three-dimensional mesh model corresponding to the face image according to the three-dimensional coordinates of each feature point.
  • This embodiment provides a specific implementation of obtaining a three-dimensional mesh model corresponding to a face according to the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images.
  • In some embodiments, before calculating the three-dimensional coordinates, the method also includes: correcting the two-dimensional coordinates of each feature point on the first and second two-dimensional images using a window matching function and the preset user's two-dimensional face image. Calculating the three-dimensional coordinates of each feature point according to the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images is then, specifically: obtaining the three-dimensional coordinates of each feature point according to the preset parameters of the two cameras and the corrected two-dimensional coordinates of each feature point on the first and second two-dimensional images.
  • Correcting the two-dimensional coordinates of each feature point on the first and second two-dimensional images makes the acquired three-dimensional coordinates more accurate, yielding a more accurate three-dimensional mesh model and improving the accuracy of face recognition.
  • In some embodiments, using a window matching function and the preset user's two-dimensional face image to correct the two-dimensional coordinates of each feature point on the first and second two-dimensional images includes: setting a window area including the feature point on the first two-dimensional image and on the second two-dimensional image; searching, in the window area of the first two-dimensional image and in the window area of the second two-dimensional image respectively, for the point that matches the feature point on the two-dimensional face image, and recording it as the accurate feature point; and taking the two-dimensional coordinates of the accurate feature point on the first two-dimensional image as the corrected two-dimensional coordinates of the feature point on the first two-dimensional image, and the two-dimensional coordinates of the accurate feature point on the second two-dimensional image as the corrected two-dimensional coordinates of the feature point on the second two-dimensional image.
  • This embodiment provides a specific implementation of correcting the two-dimensional coordinates of each feature point on the first and second two-dimensional images using a window matching function and a preset user's two-dimensional face image.
  • In some embodiments, searching the window areas of the first and second two-dimensional images for points matching the feature points on the two-dimensional face image includes: comparing the gray value of the two-dimensional face image with the gray value of each point in the window area to obtain the matching degree between each point in the window area and the feature point on the two-dimensional face image; and, according to the matching degree between each point in the window area and the feature point on the two-dimensional face image, obtaining the point in the window area that matches that feature point.
  • This embodiment provides a specific implementation of searching, in the window areas of the first and second two-dimensional images respectively, for points matching the feature points on the two-dimensional face image.
  • In some embodiments, determining whether the face is the preset user's three-dimensional face based on the three-dimensional mesh model and the preset user's three-dimensional face model includes: correcting the face pose corresponding to the three-dimensional mesh model to a preset pose; and determining whether the face is the preset user's three-dimensional face according to the three-dimensional mesh model in the preset pose and the preset user's three-dimensional face model.
  • Correcting the face pose corresponding to the three-dimensional mesh model to a preset pose facilitates the comparison with the three-dimensional face model and improves the accuracy of face recognition.
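As an illustration of this pose-normalization step: the patent does not name an algorithm, but rigid (Kabsch) alignment of the mesh vertices onto corresponding points of the enrolled model is one standard way to correct a pose. The function below is a hypothetical sketch under that assumption.

```python
import numpy as np

def align_to_preset_pose(mesh_pts, preset_pts):
    """Rigidly rotate/translate mesh vertices onto the preset pose.

    Kabsch alignment (an assumed choice; the patent does not name one).
    Both inputs are (N, 3) arrays of corresponding points.
    """
    mc, pc = mesh_pts.mean(axis=0), preset_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (mesh_pts - mc).T @ (preset_pts - pc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (mesh_pts - mc) @ R.T + pc
```

After alignment, the mesh and the enrolled model are in the same pose, so the relative-position comparison described later operates on comparable coordinates.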
  • Fig. 1 is a specific flowchart of the face recognition method according to the first embodiment of the present application.
  • Fig. 2 is a specific flowchart of the face recognition method according to the second embodiment of the present application.
  • Fig. 3 is a specific flowchart of the face recognition method according to the third embodiment of the present application.
  • Fig. 4 is a specific flowchart of the face recognition method according to the fourth embodiment of the present application.
  • Fig. 5 is a specific flowchart of the face recognition method according to the fifth embodiment of the present application.
  • Fig. 6 is a specific flowchart of the face recognition method according to the sixth embodiment of the present application.
  • Fig. 7 is a specific flowchart of the face recognition method according to the seventh embodiment of the present application.
  • Fig. 8 is a schematic diagram of the window area in the seventh embodiment of the present application.
  • Fig. 9 is a specific flowchart of the face recognition method according to the eighth embodiment of the present application.
  • Fig. 10 is a schematic diagram of the processing chip in the ninth embodiment and the main control device in the tenth embodiment of the present application.
  • the first embodiment of the present application relates to a face recognition method, which is applied to an electronic device containing multiple cameras, such as a mobile phone, a tablet computer, and the like.
  • The specific process of the face recognition method in this embodiment is shown in FIG. 1.
  • Step 101: Collect at least one two-dimensional image containing a human face through a plurality of cameras, and select a first two-dimensional image and a second two-dimensional image containing the face, separately collected by any two cameras.
  • In scenarios such as screen unlocking and payment verification, the user needs to pass face recognition verification.
  • At least one two-dimensional image containing the face is collected through the electronic device's multiple cameras. The cameras can capture simultaneously, that is, multiple face images can be collected by the multiple cameras at the same moment.
  • The two two-dimensional images collected by any two of the cameras serve as the first two-dimensional image and the second two-dimensional image; when selecting them, two images captured by the two cameras at the same moment can be chosen.
  • Step 102: Extract the two-dimensional coordinates of multiple feature points of the face in the first two-dimensional image, and extract the two-dimensional coordinates of the multiple feature points in the second two-dimensional image.
  • The electronic device presets which positions on the face serve as feature points; the face is assigned multiple feature points corresponding to multiple locations, such as the eyes, eye corners, nose, and mouth corners.
  • Based on these presets, the two-dimensional coordinates of the feature points can be extracted from the first and second two-dimensional images respectively.
  • For example, a corner of the first two-dimensional image is used as the origin to establish a coordinate system, and the pixel coordinates of each feature point on the image are taken as that feature point's two-dimensional coordinates.
  • Step 103: According to the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images, obtain a three-dimensional mesh model corresponding to the face.
  • The preset parameters of the two cameras include each camera's intrinsic parameters and the extrinsic parameters between the two cameras.
  • The intrinsic parameters include the camera's focal length, imaging parameters, and principal point.
  • The extrinsic parameters describe the calibrated coordinate relationship between the two cameras, including the translation matrix and the rotation matrix. According to the preset parameters of the two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images, one frame of a three-dimensional mesh model corresponding to the face can be obtained.
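The step above can be sketched with classical two-view (DLT) triangulation: the projection matrices are built from the intrinsic and extrinsic parameters described in the text, and each feature point's 3D coordinates follow from its two pixel positions. The function name and camera values are illustrative, not taken from the patent.

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one feature point from two views.

    P1, P2 : 3x4 projection matrices, K @ [R | t], built from each camera's
             intrinsic (K) and extrinsic (R, t) preset parameters.
    uv1, uv2 : the feature point's 2D pixel coordinates in each image.
    Returns the 3D coordinates of the feature point.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous 3D point.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

Running this over every feature point yields the spatial point set from which the mesh model is built in the later steps.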
  • Step 104: According to the three-dimensional mesh model and the preset user's three-dimensional face model, determine whether the face is the preset user's three-dimensional face.
  • The electronic device stores the three-dimensional face model entered by the preset user. The three-dimensional mesh model corresponding to the face is compared with that model to judge whether the two match. When they match, the face is the preset user's three-dimensional face; when they do not match, the face is not the preset user's three-dimensional face, that is, it may come from a non-preset user.
  • In summary, this embodiment collects at least one two-dimensional image containing a human face through multiple cameras, each two-dimensional image including multiple feature points of the face. A first and a second two-dimensional image containing the face, separately collected by any two cameras, are selected, and the two-dimensional coordinates of the multiple feature points are extracted from both images. Then, according to the two cameras' preset parameters and those coordinates, a three-dimensional mesh model is obtained and compared with the preset user's three-dimensional face model to determine whether the face is the preset user's three-dimensional face.
  • In other words, the three-dimensional mesh model corresponding to the two-dimensional images of the face is used for face recognition. This identifies whether the face belongs to the preset user and determines whether the face is three-dimensional, preventing a two-dimensional picture from passing face recognition verification and improving the security of face recognition.
  • The second embodiment of the present application relates to a face recognition method.
  • This embodiment is an improvement on the basis of the first embodiment.
  • The main improvement lies in the addition of live detection of the human face.
  • The specific process of the face recognition method in this embodiment is shown in FIG. 2.
  • Step 201 to step 204 are roughly the same as step 101 to step 104 and will not be repeated here.
  • The main difference lies in the addition of step 205, which is as follows:
  • Step 205: Determine whether the face is the preset user's live face according to the multi-frame three-dimensional mesh model.
  • Two face images captured at the same moment by any two cameras can be selected to obtain one frame of a three-dimensional mesh model, so that multiple frames of three-dimensional mesh models at multiple moments can be obtained. Comparing these multi-frame models determines whether the face is the preset user's live face: a live face generally shows subtle expression changes, such as blinking or movement of the mouth corners. When the multi-frame three-dimensional mesh models are completely consistent, the two-dimensional images may be derived from a face model of the preset user; when they are not completely consistent, the face is the preset user's live face.
  • In this way, this embodiment performs live detection on the face to verify whether it is the preset user's live face, preventing a face model of the preset user from passing face recognition verification.
  • The third embodiment of the present application relates to a face recognition method.
  • This embodiment is a refinement on the basis of the second embodiment.
  • The main refinement is that it provides a specific implementation of determining whether the face is the preset user's live face according to a multi-frame three-dimensional mesh model.
  • Step 301 to step 304 are roughly the same as step 201 to step 204 and will not be repeated here.
  • Step 305 includes the following sub-steps:
  • Sub-step 3051: Two frames of three-dimensional mesh models are selected, and the deformation between corresponding meshes of the multiple meshes on the two frames is calculated.
  • The three-dimensional mesh model divides the face into multiple meshes. Two frames are selected arbitrarily from the multi-frame three-dimensional mesh models, and the deformation between each pair of corresponding meshes on the two frames is calculated.
  • Specifically, the deformation between two corresponding meshes can be calculated from the coordinates of their vertices, so that multiple deformation values are obtained.
  • Sub-step 3052: If at least one deformation value is greater than or equal to a preset deformation threshold, it is determined that the face is the preset user's live face.
  • Here two frames of face mesh models are selected for comparison, but the method is not limited to this: multiple frames of face mesh models can be compared pairwise, which makes the determination of whether the face is the preset user's live face more accurate.
  • This embodiment provides a specific implementation of determining whether a face is the preset user's live face according to a multi-frame three-dimensional mesh model.
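A minimal sketch of sub-steps 3051 and 3052, assuming the deformation of a mesh cell is measured as the change of its edge lengths between two frames. The patent only says deformation is computed from the vertex coordinates, so this particular measure and the helper names are assumptions.

```python
import numpy as np

def mesh_deformation(frame_a, frame_b, triangles):
    """Per-cell deformation between two frames of a 3D face mesh.

    frame_a, frame_b : (N, 3) arrays of the same N feature-point vertices.
    triangles : list of (i, j, k) vertex-index triples, one per mesh cell.
    A cell's deformation is taken as the mean absolute change of its
    three edge lengths (an assumed measure).
    """
    def edge_lengths(V, tri):
        i, j, k = tri
        return np.array([np.linalg.norm(V[i] - V[j]),
                         np.linalg.norm(V[j] - V[k]),
                         np.linalg.norm(V[k] - V[i])])
    return np.array([
        np.abs(edge_lengths(frame_a, t) - edge_lengths(frame_b, t)).mean()
        for t in triangles
    ])

def is_live(frame_a, frame_b, triangles, threshold):
    """Live if at least one cell deforms by >= threshold between frames."""
    return bool((mesh_deformation(frame_a, frame_b, triangles) >= threshold).any())
```

A perfectly rigid face across frames (zero deformation everywhere) then fails the liveness check, matching the reasoning in the second embodiment.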
  • The fourth embodiment of the present application relates to a face recognition method.
  • This embodiment is an improvement on the basis of the first embodiment.
  • The main improvement lies in the addition of determining, from at least one frame of two-dimensional image and the preset user's two-dimensional face image, whether the image comes from the preset user.
  • Step 401 and step 402 are roughly the same as step 101 and step 102, and step 404 and step 405 are roughly the same as step 103 and step 104; they will not be repeated here.
  • Step 403 is added, which is as follows:
  • Step 403: Determine whether the two-dimensional image comes from the preset user based on at least one two-dimensional image and the preset user's two-dimensional face image. If yes, go to step 404; if no, end directly.
  • Specifically, at least one of the two-dimensional images containing the face collected by the multiple cameras is selected, and a feature value of that two-dimensional image is calculated according to the two-dimensional coordinates of each feature point in it.
  • The feature value is, for example, a feature vector.
  • The preset user's two-dimensional face image is pre-recorded in the electronic device, so its feature value can be calculated and stored in advance; the feature value of the collected two-dimensional image is then compared with the feature value of the preset user's two-dimensional face image.
  • When the two match, step 404 is entered: according to the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images, the three-dimensional mesh model corresponding to the face is obtained. When the two do not match, it is determined that the two-dimensional image does not originate from the preset user, and the process ends directly.
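A hedged sketch of this two-dimensional pre-check: the patent says only that the feature values (e.g. feature vectors) are compared, so the cosine-similarity score, the threshold, and the function name below are all assumptions for illustration.

```python
import numpy as np

def matches_preset_user(feat, preset_feat, threshold=0.8):
    """Compare a face image's feature vector with the enrolled preset user's.

    Cosine similarity is one common matching score (an assumption here;
    the patent does not fix a metric). Returns True when the collected
    image's features are close enough to the enrolled features.
    """
    sim = float(np.dot(feat, preset_feat) /
                (np.linalg.norm(feat) * np.linalg.norm(preset_feat)))
    return sim >= threshold
```

Because this check runs on a single 2D image, it is much cheaper than the 3D pipeline, which is exactly why it speeds up rejection of non-preset users.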
  • In addition, each of the multiple two-dimensional images collected by the multiple cameras may be subjected to the above judgment of whether it comes from the preset user.
  • In this way, this embodiment adds face recognition using a two-dimensional image; since two-dimensional face recognition is faster, faces of non-preset users are rejected more quickly.
  • This embodiment can also be used as an improvement on the basis of the second or third embodiment, and the same technical effect can be achieved.
  • The fifth embodiment of the present application relates to a face recognition method.
  • This embodiment is a refinement on the basis of the first embodiment.
  • The main refinement is that it provides a specific implementation of determining whether the face is the preset user's three-dimensional face according to the three-dimensional mesh model and the preset user's three-dimensional face model.
  • Step 501 to step 503 are roughly the same as step 101 to step 103 and will not be repeated here.
  • Step 504 includes the following sub-steps:
  • Sub-step 5041: The relative position parameters between the meshes on the three-dimensional mesh model are calculated.
  • The three-dimensional mesh model includes N meshes, where N is a positive integer greater than 1.
  • For example, the three-dimensional mesh model includes 5 meshes, namely mesh 1, mesh 2, mesh 3, mesh 4, and mesh 5. Calculating the relative position parameters between the meshes on the three-dimensional mesh model means calculating the relative position parameters between any two meshes on the model.
  • The relative position parameters are, for example, the angle between two meshes, the distance between their center points, and the like.
  • Sub-step 5042: If each relative position parameter on the three-dimensional mesh model matches the corresponding relative position parameter on the preset three-dimensional face model, it is determined that the face is the preset user's three-dimensional face.
  • The electronic device pre-stores the relative position parameters between the meshes on the preset user's three-dimensional face model and determines whether each relative position parameter on the three-dimensional mesh model matches the corresponding parameter on that model. For example, if the relative position parameter between mesh 1 and mesh 2 on the three-dimensional mesh model is A, the corresponding parameter on the three-dimensional face model is B, and the absolute value of the difference between A and B is within a preset range, it is determined that parameter A matches parameter B.
  • When the relative position parameters on the three-dimensional mesh model match the corresponding relative position parameters on the three-dimensional face model, it is determined that the face is the preset user's three-dimensional face.
  • Otherwise, the face may be a non-preset user's face and/or a two-dimensional picture.
  • In practice, a matching-number threshold can also be set.
  • When the number of relative position parameters on the three-dimensional mesh model that do not match those on the three-dimensional face model is less than the matching-number threshold, it is determined that the face is the preset user's three-dimensional face.
  • When that number is greater than or equal to the matching-number threshold, it is determined that the face is not the preset user's three-dimensional face.
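The matching logic of this embodiment, including the optional matching-number threshold, can be sketched as follows. The choice of centroid distance and normal angle as the relative position parameters follows the examples in the text ("distance between center points", "angle between two meshes"); the tolerance values and function names are illustrative assumptions.

```python
import numpy as np

def relative_params(vertices, triangles):
    """Relative-position parameters between every pair of mesh cells:
    centroid distance and angle between face normals (assumed choices)."""
    cents, norms = [], []
    for i, j, k in triangles:
        cents.append(vertices[[i, j, k]].mean(axis=0))
        n = np.cross(vertices[j] - vertices[i], vertices[k] - vertices[i])
        norms.append(n / np.linalg.norm(n))
    params = {}
    for a in range(len(triangles)):
        for b in range(a + 1, len(triangles)):
            dist = np.linalg.norm(cents[a] - cents[b])
            ang = np.arccos(np.clip(abs(np.dot(norms[a], norms[b])), -1.0, 1.0))
            params[(a, b)] = (dist, ang)
    return params

def is_preset_user_3d_face(mesh_params, model_params, tol=0.05, max_mismatches=0):
    """Count parameter pairs whose difference exceeds the preset range;
    max_mismatches=0 reproduces the strict 'all must match' rule, while a
    positive value implements the matching-number threshold variant."""
    mismatches = sum(
        1 for key, (d, a) in mesh_params.items()
        if abs(d - model_params[key][0]) > tol or abs(a - model_params[key][1]) > tol
    )
    return mismatches <= max_mismatches
```

Because the parameters are relative (pairwise distances and angles), this comparison is insensitive to a global translation of the mesh, which complements the pose correction described elsewhere.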
  • This embodiment provides a specific implementation of determining whether the face is the preset user's three-dimensional face according to the three-dimensional mesh model and the preset user's three-dimensional face model.
  • This embodiment can also be used as a refinement on the basis of the second to fourth embodiments, and the same technical effect can be achieved.
  • The sixth embodiment of this application relates to a face recognition method.
  • This embodiment is a refinement on the basis of the first embodiment.
  • The main refinement is that it provides a specific implementation of obtaining the three-dimensional mesh model corresponding to the face according to the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images.
  • Step 601, step 602, and step 604 are roughly the same as step 101, step 102, and step 104 and will not be repeated here.
  • Step 603 includes the following sub-steps:
  • Sub-step 6031: The three-dimensional coordinates of each feature point are calculated according to the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images.
  • The preset parameters of the two cameras include each camera's intrinsic parameters and the extrinsic parameters between the two cameras.
  • The intrinsic parameters include the camera's focal length, imaging parameters, and principal point.
  • The extrinsic parameters describe the calibrated coordinate relationship between the two cameras, including the translation matrix and the rotation matrix. According to the preset parameters of the two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images, the three-dimensional coordinates of each feature point can be calculated.
  • Sub-step 6032: According to the three-dimensional coordinates of each feature point, a three-dimensional mesh model corresponding to the face image is constructed.
  • After the three-dimensional coordinates of each feature point are obtained, a point set in space is available, from which a three-dimensional mesh model can be constructed.
  • The mesh can be a triangular mesh, a quadrilateral mesh, etc.
  • Taking a triangular mesh as an example, the Delaunay method is used for triangulation, so that each triangle is formed from the nearest three points and the edges of the triangles do not intersect. A polyhedral shell is thus formed, presenting the triangular mesh model corresponding to the face image.
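A small sketch of this Delaunay meshing step using `scipy.spatial.Delaunay`. Triangulating the (x, y) image-plane coordinates and lifting the triangles to the 3D points is one common approach for a roughly frontal face; the patent does not prescribe this exact procedure, and the sample points are invented.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_face_mesh(points_3d):
    """Construct a triangular mesh over the 3D facial feature points.

    Assumed approach: run 2D Delaunay triangulation on the (x, y)
    coordinates and reuse the resulting triangles for the 3D points,
    which guarantees that triangle edges do not cross in the image plane.
    """
    tri = Delaunay(points_3d[:, :2])  # triangulate in the image plane
    return tri.simplices              # (M, 3) array of vertex-index triples

# Toy point set: four "cheek" corners plus a raised "nose tip" in the middle.
pts = np.array([[0., 0., 0.1], [1., 0., 0.2], [0., 1., 0.2],
                [1., 1., 0.1], [0.5, 0.5, 0.4]])
triangles = build_face_mesh(pts)
```

The returned index triples are exactly the mesh-cell description consumed by the deformation and relative-position computations in the other embodiments.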
  • This embodiment provides a specific implementation of obtaining the three-dimensional mesh model corresponding to the face according to the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images.
  • This embodiment can also be used as a refinement on the basis of the second to fifth embodiments, and the same technical effect can be achieved.
  • The seventh embodiment of the present application relates to a face recognition method.
  • This embodiment is an improvement on the basis of the sixth embodiment.
  • The main improvement lies in the addition of correction of the two-dimensional coordinates of the feature points on the first and second two-dimensional images.
  • Step 701, step 702, and step 704 are roughly the same as step 101, step 102, and step 104 and will not be repeated here.
  • The main difference is that a sub-step 7031 is added to step 703, which is as follows:
  • Sub-step 7031: The two-dimensional coordinates of the feature points on the first and second two-dimensional images are corrected using a window matching function and the preset user's two-dimensional face image.
  • the six feature points are feature point 1 to feature point 6. When correcting the two-dimensional coordinates of each feature point on the first two-dimensional image, take feature point 1 as an example: on the first two-dimensional image, a window area including feature point 1 is set.
  • the window area can be obtained by expanding outward by at least one pixel with feature point 1 as the center. For example, expanding outward by one pixel yields the window area shown in Figure 8.
  • this window area includes 9 points (including feature point 1). Among these 9 points, the point that matches feature point 1 on the preset user's two-dimensional face image is searched for and recorded as precise feature point 1; the two-dimensional coordinates of precise feature point 1 are then used as the corrected two-dimensional coordinates of feature point 1 on the first two-dimensional image.
  • the correction methods for the other feature points are similar to the above; the method of correcting the two-dimensional coordinates of each feature point on the second two-dimensional image is also similar, and will not be repeated here.
  • the following takes searching the window area of the first two-dimensional image for the point that matches the feature point on the preset user's two-dimensional face image as an example.
  • the search method, taking feature point 1 as an example, is as follows: in the window area containing feature point 1 on the first two-dimensional image (take the window area shown in Figure 8 as an example), take a window of a certain size centered on each point in turn; take a window of the same size centered on feature point 1 of the preset user's two-dimensional face image; and calculate the matching degree between the gray levels of each pair of windows, obtaining matching degrees D1 to D9.
  • the matching degree can be a similarity, a ratio, etc.
  • according to the matching degree between each point in the window area and feature point 1 on the preset user's two-dimensional face image, the point with the largest matching degree among D1 to D9 is taken as the point in the window area that matches feature point 1 on the preset user's two-dimensional face image.
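The window search described above can be sketched as follows. This is a hypothetical illustration: the `refine_feature_point` helper, the use of normalized cross-correlation as the "matching degree", and the patch radius are all assumptions, since the patent leaves the matching function open (similarity, ratio, etc.).

```python
import numpy as np

def refine_feature_point(img, ref_img, pt, ref_pt, search_r=1, win_r=2):
    """Correct one feature point by window matching (illustrative sketch).

    img      : grayscale image being corrected (first or second 2-D image)
    ref_img  : preset user's 2-D face image (grayscale)
    pt       : (row, col) of the detected feature point in img
    ref_pt   : (row, col) of the same feature point in ref_img
    search_r : 1 gives the 3x3 window of 9 candidate points (D1..D9)
    win_r    : radius of the gray-level patches that are compared
    """
    def patch(a, p, r):
        return a[p[0] - r:p[0] + r + 1, p[1] - r:p[1] + r + 1].astype(float)

    ref = patch(ref_img, ref_pt, win_r)
    best, best_d = pt, -np.inf
    for dr in range(-search_r, search_r + 1):
        for dc in range(-search_r, search_r + 1):
            cand = (pt[0] + dr, pt[1] + dc)
            win = patch(img, cand, win_r)
            # matching degree: normalized cross-correlation of gray levels
            d = np.sum((ref - ref.mean()) * (win - win.mean()))
            d /= (ref.std() * win.std() * ref.size + 1e-9)
            if d > best_d:
                best, best_d = cand, d
    return best  # the "precise feature point"
```

The candidate with the largest matching degree is returned, mirroring the "largest of D1 to D9" rule above.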
  • sub-step 7032: the three-dimensional coordinates of each feature point are obtained according to the preset parameters of any two cameras and the corrected two-dimensional coordinates of each feature point on the first two-dimensional image and the second two-dimensional image.
  • that is, the three-dimensional coordinates of each feature point are calculated using the corrected two-dimensional coordinates of each feature point on the first and second two-dimensional images.
  • sub-step 7033: a three-dimensional mesh model corresponding to the face image is constructed according to the three-dimensional coordinates of each feature point.
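The step above recovers each feature point's three-dimensional coordinates from its corrected two-dimensional coordinates in the two images and the preset camera parameters. A minimal sketch using the standard linear triangulation (DLT) method follows; this is an assumed concrete choice (the patent does not prescribe an algorithm), and the 3x4 projection matrices are assumed to be precomputed from each camera's intrinsic and extrinsic parameters as K [R | t].

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Recover a feature point's 3-D coordinates from two views.

    P1, P2 : 3x4 projection matrices of the two cameras, built from the
             preset intrinsic and extrinsic parameters.
    x1, x2 : (u, v) pixel coordinates of the same feature point in the
             first and second two-dimensional images.
    Each view contributes two linear equations in the homogeneous 3-D
    point; the system is solved by SVD (the DLT method).
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # null-space vector = homogeneous 3-D point
    return X[:3] / X[3]
```

Applying this to every corrected feature point yields the 3-D point set from which the mesh of sub-step 7033 is built.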
  • this embodiment corrects the two-dimensional coordinates of each feature point on the first two-dimensional image and the second two-dimensional image to make the acquired three-dimensional coordinates of each feature point more accurate, thereby obtaining a more accurate three-dimensional mesh model and improving the accuracy of face recognition.
  • the eighth embodiment of the present application relates to a face recognition method.
  • This embodiment is an improvement on the basis of the first embodiment, and the main improvement lies in: correcting the face pose corresponding to the acquired three-dimensional mesh model.
  • step 801 to step 803 are substantially the same as step 101 to step 103, which will not be repeated here.
  • step 804 includes the following sub-steps:
  • sub-step 8041: the face pose corresponding to the three-dimensional mesh model is corrected to a preset pose.
  • the current user may not face the camera in the preset pose, so the face pose corresponding to the obtained 3D mesh model may not be the preset pose.
  • the corresponding face pose is therefore corrected to obtain a three-dimensional mesh model in the preset pose.
  • the preset pose is consistent with the face pose corresponding to the preset 3D face model entered by the user. For example, if the user faced the camera when entering the 3D face model, the preset pose is facing the camera; the 3D mesh model is then consistent in pose with the 3D face model, making comparison easier.
  • sub-step 8042: whether the face is the preset user's three-dimensional face is determined according to the three-dimensional mesh model in the preset pose and the preset user's three-dimensional face model.
  • the electronic device stores the three-dimensional face model entered by the preset user; the three-dimensional mesh model in the preset pose is compared with the preset user's three-dimensional face model to judge whether the two match. When they match, the face is the preset user's three-dimensional face; when they do not match, the face is not the preset user's three-dimensional face, that is, the face may be a non-preset user's face and/or a two-dimensional picture.
  • this embodiment corrects the face posture corresponding to the three-dimensional mesh model to a preset posture, so as to facilitate comparison with the three-dimensional face model and improve the accuracy of face recognition.
  • This embodiment can also be used as an improvement on the basis of the second to seventh embodiments, and the same technical effect can be achieved.
  • the ninth embodiment of the present application relates to a processing chip.
  • the processing chip 110 is connected to at least one memory 111, and the processing chip 110 is also connected to a plurality of cameras 112.
  • the memory 111 stores instructions executable by the processing chip 110; the instructions are executed by the processing chip 110, so that the processing chip 110 can perform the foregoing method embodiments.
  • the tenth embodiment of the present application relates to a main control device. Please refer to FIG. 10.
  • the main control device includes the aforementioned processing chip 110, at least one memory 111 and a plurality of cameras 112.


Abstract

Disclosed are a facial recognition method, a processing chip and an electronic device. The facial recognition method comprises: respectively collecting, by means of a plurality of cameras, at least one two-dimensional image including a human face, and selecting any two cameras to respectively collect a first two-dimensional image including a human face and a second two-dimensional image including a human face (101); extracting two-dimensional coordinates of a plurality of feature points of the human face in the first two-dimensional image, and extracting two-dimensional coordinates of a plurality of feature points in the second two-dimensional image (102); obtaining, according to preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first two-dimensional image and the second two-dimensional image, a three-dimensional mesh model corresponding to the human face (103); and determining whether the human face is a three-dimensional human face of a preset user according to the three-dimensional mesh model and a three-dimensional human face model of the preset user (104). The method can prevent a two-dimensional picture from being verified by means of facial recognition, thereby improving the security of facial recognition.

Description

Face recognition method, processing chip and electronic device

Technical field

This application relates to the field of image recognition technology, and in particular to a face recognition method, a processing chip and an electronic device.

Background art

With the rapid development of smart terminals represented by mobile phones, face recognition technology has become more and more widely used on various smart terminals. Users can complete operations such as unlocking the terminal and making payments through face recognition.

The inventor found that the prior art has at least the following problem: at present, when a terminal performs face recognition, it generally cannot tell whether the current face image comes from a photo of the user or from the user himself, so a picture of the user may pass the face recognition authentication, and the security is low.
Summary of the invention

The purpose of some embodiments of this application is to provide a face recognition method, a processing chip and an electronic device that can recognize whether a face belongs to a preset user and determine whether the face is a three-dimensional face, so as to prevent a two-dimensional picture from passing face recognition verification and improve the security of face recognition.
An embodiment of this application provides a face recognition method, including: collecting at least one two-dimensional image containing a human face through each of a plurality of cameras, and selecting a first two-dimensional image containing the face and a second two-dimensional image containing the face collected by any two of the cameras; extracting the two-dimensional coordinates of a plurality of feature points of the face in the first two-dimensional image, and extracting the two-dimensional coordinates of the plurality of feature points in the second two-dimensional image; obtaining a three-dimensional mesh model corresponding to the face according to preset parameters of the two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images; and determining whether the face is the three-dimensional face of a preset user according to the three-dimensional mesh model and the preset user's three-dimensional face model.

An embodiment of this application also provides a processing chip. The processing chip is connected to at least one memory and to a plurality of cameras. The memory stores instructions executable by the processing chip, and the instructions are executed by the processing chip so that the processing chip can perform the above face recognition method.

An embodiment of this application also provides an electronic device, including the above processing chip, at least one memory, and a plurality of cameras.
Compared with the prior art, in the embodiments of this application at least one two-dimensional image containing a human face is collected by each of a plurality of cameras, each two-dimensional image including a plurality of feature points of the face; a first two-dimensional image and a second two-dimensional image containing the face, collected by any two of the cameras, are selected, and the two-dimensional coordinates of the feature points are extracted from the first and second two-dimensional images. Then, according to the preset parameters of the two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images, a three-dimensional mesh model corresponding to the face is obtained, so that the three-dimensional mesh model can be compared with the preset user's three-dimensional face model to determine whether the face is the preset user's three-dimensional face. That is, face recognition is performed using the three-dimensional mesh model corresponding to the two-dimensional images of the face, which can identify whether the face belongs to the preset user and determine whether the face is three-dimensional, preventing a two-dimensional picture from passing face recognition verification and improving the security of face recognition.
For example, after determining whether the face is the preset user's three-dimensional face according to the three-dimensional mesh model and the preset user's three-dimensional face model, the method further includes: determining whether the face is the preset user's live face according to multiple frames of the three-dimensional mesh model. In this embodiment, performing live detection on the face can verify whether the face is the live face of the preset user, preventing a face model of the preset user from passing face recognition verification.

For example, determining whether the face is the preset user's live face according to multiple frames of the three-dimensional mesh model includes: selecting two frames of the three-dimensional mesh model and calculating the deformation between corresponding meshes among the multiple meshes of the two frames; and, if at least one deformation is greater than or equal to a preset deformation threshold, determining that the face is the live face of the preset user. This embodiment provides a specific implementation of determining whether the face is the preset user's live face according to consecutive multi-frame three-dimensional face mesh models.

For example, before obtaining the three-dimensional mesh model corresponding to the face according to the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images, the method further includes: determining whether the two-dimensional image comes from the preset user according to at least one two-dimensional image and the preset user's two-dimensional face image; and, if it is determined that the face comes from the preset user, proceeding to the step of obtaining the three-dimensional mesh model corresponding to the face according to the preset parameters of the two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images. This embodiment adds face recognition using a two-dimensional image; two-dimensional face recognition is faster, thereby improving the face recognition speed for non-preset users.
For example, determining whether the face is the preset user's three-dimensional face according to the three-dimensional mesh model and the preset user's three-dimensional face model includes: calculating relative position parameters between the meshes of the three-dimensional mesh model; judging whether each relative position parameter of the three-dimensional mesh model matches the corresponding relative position parameter of the preset three-dimensional face model; and, if all relative position parameters of the three-dimensional mesh model match the corresponding relative position parameters of the preset three-dimensional face model, determining that the face is the preset user's three-dimensional face. This embodiment provides a specific implementation of determining whether the face is the preset user's three-dimensional face according to the three-dimensional mesh model and the preset user's three-dimensional face model.
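The relative-position matching above can be sketched as follows. The choice of the pairwise distances between corresponding mesh vertices as the "relative position parameters", and the relative tolerance, are assumptions for illustration; the patent does not fix a specific parameter.

```python
import numpy as np

def meshes_match(mesh_pts, model_pts, tol=0.05):
    """Compare relative position parameters of two meshes (a sketch).

    mesh_pts / model_pts : (N, 3) corresponding vertex coordinates of
    the captured 3-D mesh model and the enrolled 3-D face model.
    Every pairwise vertex distance of the mesh must match the model's
    corresponding distance within relative tolerance `tol`; only if all
    parameters match is the face judged to be the preset user's.
    """
    def pairwise(p):
        diff = p[:, None, :] - p[None, :, :]
        return np.linalg.norm(diff, axis=-1)

    dm, dg = pairwise(mesh_pts), pairwise(model_pts)
    iu = np.triu_indices(len(mesh_pts), k=1)     # unique vertex pairs
    return bool(np.all(np.abs(dm[iu] - dg[iu]) <= tol * (dg[iu] + 1e-9)))
```

Because only relative positions are compared, a pure translation of the mesh still matches, while a face of different proportions does not.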
For example, obtaining the three-dimensional mesh model corresponding to the face according to the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images includes: calculating the three-dimensional coordinates of each feature point according to the preset parameters of the two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images; and constructing the three-dimensional mesh model corresponding to the face image according to the three-dimensional coordinates of each feature point. This embodiment provides a specific implementation of obtaining the three-dimensional mesh model corresponding to the face according to the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images.

For example, before calculating the three-dimensional coordinates of each feature point according to the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images, the method further includes: correcting the two-dimensional coordinates of each feature point on the first and second two-dimensional images using a window matching function and the preset user's two-dimensional face image. The three-dimensional coordinates of each feature point are then calculated according to the preset parameters of the two cameras and the corrected two-dimensional coordinates of each feature point on the first and second two-dimensional images. In this embodiment, correcting the two-dimensional coordinates of each feature point makes the obtained three-dimensional coordinates more accurate, so that a more accurate three-dimensional mesh model is obtained and the accuracy of face recognition is improved.
For example, correcting the two-dimensional coordinates of each feature point on the first and second two-dimensional images using a window matching function and the preset user's two-dimensional face image includes: for each feature point, setting a window area including the feature point on the first two-dimensional image and on the second two-dimensional image; searching the window area of the first two-dimensional image and the window area of the second two-dimensional image for the point that matches the feature point on the two-dimensional face image, recorded as the precise feature point; and taking the two-dimensional coordinates of the precise feature point on the first two-dimensional image as the corrected two-dimensional coordinates of the feature point on the first two-dimensional image, and the two-dimensional coordinates of the precise feature point on the second two-dimensional image as the corrected two-dimensional coordinates of the feature point on the second two-dimensional image. This embodiment provides a specific implementation of the correction using a window matching function and the preset user's two-dimensional face image.

For example, searching the window areas of the first and second two-dimensional images for the point that matches the feature point on the two-dimensional face image includes: for each window area, obtaining the matching degree between each point in the window area and the feature point on the two-dimensional face image according to the gray value of the feature point on the preset two-dimensional face image and the gray values of the points in the window area; and obtaining the point in the window area that matches the feature point on the two-dimensional face image according to these matching degrees. This embodiment provides a specific implementation of the search in the window areas of the first and second two-dimensional images.

For example, determining whether the face is the preset user's three-dimensional face according to the three-dimensional mesh model and the preset user's three-dimensional face model includes: correcting the face pose corresponding to the three-dimensional mesh model to a preset pose; and determining whether the face is the preset user's three-dimensional face according to the three-dimensional mesh model in the preset pose and the preset user's three-dimensional face model. In this embodiment, the face pose corresponding to the three-dimensional mesh model is corrected to the preset pose to facilitate comparison with the three-dimensional face model and improve the accuracy of face recognition.
Description of the drawings

One or more embodiments are illustrated by the figures in the corresponding drawings. These illustrations do not constitute a limitation on the embodiments; elements with the same reference numerals in the drawings represent similar elements. Unless otherwise stated, the figures in the drawings are not drawn to scale.

Fig. 1 is a specific flowchart of the face recognition method according to the first embodiment of this application;

Fig. 2 is a specific flowchart of the face recognition method according to the second embodiment of this application;

Fig. 3 is a specific flowchart of the face recognition method according to the third embodiment of this application;

Fig. 4 is a specific flowchart of the face recognition method according to the fourth embodiment of this application;

Fig. 5 is a specific flowchart of the face recognition method according to the fifth embodiment of this application;

Fig. 6 is a specific flowchart of the face recognition method according to the sixth embodiment of this application;

Fig. 7 is a specific flowchart of the face recognition method according to the seventh embodiment of this application;

Fig. 8 is a schematic diagram of the window area in the seventh embodiment of this application;

Fig. 9 is a specific flowchart of the face recognition method according to the eighth embodiment of this application;

Fig. 10 is a schematic diagram of the processing chip in the ninth embodiment and the main control device in the tenth embodiment of this application.
Specific embodiments

In order to make the objectives, technical solutions and advantages of this application clearer, some embodiments of this application are further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not used to limit it.
The first embodiment of this application relates to a face recognition method applied to an electronic device containing a plurality of cameras, such as a mobile phone or a tablet computer.

The specific flow of the face recognition method of this embodiment is shown in Fig. 1.
Step 101: collect at least one two-dimensional image containing a human face through each of a plurality of cameras, and select a first two-dimensional image containing the face and a second two-dimensional image containing the face collected by any two of the cameras.

Specifically, while the user is using the electronic device, face recognition verification is required in scenarios such as unlocking the screen or payment verification. At this time, at least one two-dimensional image containing the face is collected through each of the electronic device's cameras. The cameras can capture synchronously, that is, multiple face images can be captured by the cameras at the same moment. Two two-dimensional images captured by any two of the cameras are then selected as the first two-dimensional image and the second two-dimensional image; when selecting them, two images captured by the two cameras at the same moment can be chosen.
Step 102: extract the two-dimensional coordinates of a plurality of feature points of the face in the first two-dimensional image, and extract the two-dimensional coordinates of the feature points in the second two-dimensional image.

Specifically, the electronic device presets which positions on the face serve as feature points; the face is assigned multiple feature points corresponding to multiple positions, for example the eyes, eye corners, nose tip and mouth corners. The two-dimensional coordinates of the feature points can then be extracted from the first and second two-dimensional images. Taking the first two-dimensional image as an example, a coordinate system is established with one corner point of the image as the origin and the pixel as the unit, and the pixel coordinates of each feature point on the image are taken as its two-dimensional coordinates.
Step 103: obtain the three-dimensional mesh model corresponding to the face according to the preset parameters of the two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images.

Specifically, the preset parameters of the two cameras include the intrinsic parameters of each camera and the extrinsic parameters between them. The intrinsic parameters include the camera's focal length, imaging parameters, principal point, etc.; the extrinsic parameters include the coordinate relationship between the two calibrated cameras, such as the translation matrix and rotation matrix. According to these preset parameters and the two-dimensional coordinates of each feature point on the first and second two-dimensional images, one frame of the three-dimensional mesh model corresponding to the face can be obtained.
Step 104: determine whether the face is the preset user's three-dimensional face according to the three-dimensional mesh model and the preset user's three-dimensional face model.

Specifically, the electronic device stores a three-dimensional face model entered by the preset user. The three-dimensional mesh model corresponding to the face is compared with the preset user's three-dimensional face model to judge whether the two match. When they match, the face is the preset user's three-dimensional face; when they do not match, the face is not the preset user's three-dimensional face, that is, the face may be a non-preset user's face and/or a two-dimensional picture.

Compared with the prior art, in this embodiment at least one two-dimensional image containing a human face is collected by each of a plurality of cameras, each two-dimensional image including a plurality of feature points of the face; a first and a second two-dimensional image containing the face, collected by any two of the cameras, are selected, and the two-dimensional coordinates of the feature points are extracted from them. Then, according to the preset parameters of the two cameras and these two-dimensional coordinates, a three-dimensional mesh model corresponding to the face is obtained, so that it can be compared with the preset user's three-dimensional face model to determine whether the face is the preset user's three-dimensional face. That is, face recognition is performed using the three-dimensional mesh model corresponding to the two-dimensional images of the face, which can identify whether the face belongs to the preset user and determine whether the face is three-dimensional, preventing a two-dimensional picture from passing face recognition verification and improving the security of face recognition.
The second embodiment of the present application relates to a face recognition method. This embodiment is an improvement on the first embodiment, the main improvement being the addition of liveness detection of the face.
The specific flow of the face recognition method of this embodiment is shown in FIG. 2.
Steps 201 to 204 are substantially the same as steps 101 to 104 and are not repeated here. The main difference is the addition of step 205, as follows:
Step 205: determine, from multiple frames of the three-dimensional mesh model, whether the face is a live face of the preset user.
Specifically, two face images captured at the same moment by any two cameras can be selected to obtain one three-dimensional mesh model, so that multiple frames of the three-dimensional mesh model at multiple moments can be obtained. Comparing these frames determines whether the face is a live face of the preset user. Since a live face generally exhibits subtle expression changes, such as blinking or movement at the corners of the mouth, when the frames are completely identical the two-dimensional images may originate from a face model of the preset user; when the frames are not completely identical, the face is a live face of the preset user.
Compared with the first embodiment, this embodiment performs liveness detection on the face, which can verify whether the face is a live face of the preset user and prevents a face model of the preset user from passing face recognition.
The third embodiment of the present application relates to a face recognition method. This embodiment is a refinement of the second embodiment, the main refinement being that it provides a specific implementation of determining, from multiple frames of the three-dimensional mesh model, whether the face is a live face of the preset user.
The specific flow of the face recognition method of this embodiment is shown in FIG. 3.
Steps 301 to 304 are substantially the same as steps 201 to 204 and are not repeated here. The main difference is that step 305 includes the following sub-steps:
Sub-step 3051: select two frames of the three-dimensional mesh model, and calculate the deformation between corresponding meshes among the multiple meshes of the two frames.
Specifically, the three-dimensional mesh model divides the face into multiple meshes. Two frames are selected arbitrarily from the multiple frames of the three-dimensional mesh model, and the deformation between each pair of corresponding meshes in the two frames is calculated. In particular, the deformation between two corresponding meshes can be calculated from the coordinates of their vertices, so that multiple deformation values are obtained.
Sub-step 3052: if at least one deformation value is greater than or equal to a preset deformation threshold, determine that the face is a live face of the preset user.
Specifically, it is determined whether at least one deformation value is greater than or equal to the preset deformation threshold. If so, the two frames of the three-dimensional mesh model are not completely identical, and the face is a live face of the preset user. If all deformation values are less than the preset deformation threshold, the two frames are completely identical, and the face may be a face model of the preset user.
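Sub-steps 3051 and 3052 can be sketched as follows. This is a minimal illustration assuming each frame stores its meshes as an `(n, 3, 3)` vertex array and taking the mean vertex displacement of a mesh as its deformation value; the embodiment leaves the exact deformation measure and threshold open.

```python
import numpy as np

def mesh_deformations(frame_a, frame_b):
    """Deformation of each corresponding mesh between two frames,
    computed from the vertex coordinates: here, the mean displacement
    of a triangle's three vertices (one of several possible measures)."""
    return np.linalg.norm(frame_a - frame_b, axis=2).mean(axis=1)

def is_live_face(frame_a, frame_b, deform_threshold=0.5):
    """Live if at least one mesh deformed by at least the preset threshold."""
    return bool((mesh_deformations(frame_a, frame_b) >= deform_threshold).any())

# Two identical frames: no deformation anywhere, so the face may be a face model.
still = np.zeros((4, 3, 3))
print(is_live_face(still, still))   # False

# A frame with one displaced vertex: at least one mesh deformed.
moved = still.copy()
moved[0, 0] = [2.0, 0.0, 0.0]
print(is_live_face(still, moved))   # True
```

The threshold value here is a placeholder; in practice it would be tuned to tolerate reconstruction noise while still catching genuine expression changes.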
It should be noted that in this embodiment two frames of the face mesh model are selected for comparison, but the present application is not limited to this; multiple frames of the face mesh model may be compared in pairs, which makes the determination of whether the face is a live face of the preset user more accurate.
Compared with the second embodiment, this embodiment provides a specific implementation of determining, from multiple frames of the three-dimensional mesh model, whether the face is a live face of the preset user.
The fourth embodiment of the present application relates to a face recognition method. This embodiment is an improvement on the first embodiment, the main improvement being the addition of a step of comparing the two-dimensional face image with at least one frame of the preset user's face image to determine whether the face belongs to the preset user.
The specific flow of the face recognition method of this embodiment is shown in FIG. 4.
Steps 401 and 402 are substantially the same as steps 101 and 102, and steps 404 and 405 are substantially the same as steps 103 and 104; they are not repeated here. The main difference is the addition of step 403, as follows:
Step 403: determine, from at least one two-dimensional image and the preset user's two-dimensional face image, whether the two-dimensional image originates from the preset user. If so, the method proceeds to step 404; if not, the method ends directly.
Specifically, at least one of the two-dimensional images containing the face captured by the multiple cameras is selected, and a feature value of that two-dimensional image, for example a feature vector, is calculated from the two-dimensional coordinates of each feature point in the image. A two-dimensional face image of the preset user is enrolled in the electronic device in advance, so that the feature value of that face image can be calculated and stored. The feature value of the two-dimensional image is compared with the feature value of the preset user's two-dimensional face image. When the two match, it is determined that the two-dimensional image originates from the preset user, and the method proceeds to step 404: obtaining the three-dimensional mesh model corresponding to the face from the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images. When the two do not match, it is determined that the two-dimensional image does not originate from the preset user, and the method ends directly. Alternatively, the above determination of whether the image originates from the preset user may be performed on each of the multiple two-dimensional images captured by the multiple cameras.
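The feature-value comparison in step 403 might look like the following sketch. The embodiment does not fix the feature extractor or the matching rule, so cosine similarity between feature vectors with a hypothetical threshold is assumed here purely for illustration.

```python
import numpy as np

def matches_preset_user(feature_vec, enrolled_vec, threshold=0.9):
    """Compare the captured image's feature vector against the feature
    vector stored for the preset user's enrolled 2-D face image.
    Cosine similarity is an assumed matching measure, not prescribed."""
    cos = np.dot(feature_vec, enrolled_vec) / (
        np.linalg.norm(feature_vec) * np.linalg.norm(enrolled_vec))
    return bool(cos >= threshold)

enrolled = np.array([0.2, 0.9, 0.4])                      # stored at enrollment
same_user = matches_preset_user(np.array([0.21, 0.88, 0.41]), enrolled)
other_user = matches_preset_user(np.array([0.9, -0.2, 0.1]), enrolled)
print(same_user, other_user)   # True False
```

A real system would derive the vectors from a trained face-feature extractor rather than raw coordinates; the gating logic, however, is exactly this early accept/reject before the slower three-dimensional comparison.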
Compared with the first embodiment, this embodiment adds face recognition based on two-dimensional images. Face recognition on two-dimensional images is fast, which speeds up the rejection of non-preset users. This embodiment can also serve as an improvement on the second or third embodiment and achieves the same technical effect.
The fifth embodiment of the present application relates to a face recognition method. This embodiment is a refinement of the first embodiment, the main refinement being that it provides a specific implementation of determining, from the three-dimensional mesh model and the preset user's three-dimensional face model, whether the face is the preset user's three-dimensional face.
The specific flow of the face recognition method of this embodiment is shown in FIG. 5.
Steps 501 to 503 are substantially the same as steps 101 to 103 and are not repeated here. The main difference is that step 504 includes the following sub-steps:
Sub-step 5041: calculate the relative position parameters between the meshes of the three-dimensional mesh model.
Specifically, the three-dimensional mesh model includes N meshes, where N is a positive integer greater than 1. Taking N=5 as an example, the model includes five meshes, namely mesh 1, mesh 2, mesh 3, mesh 4 and mesh 5. The relative position parameters between the meshes of the model are calculated, that is, the relative position parameter between any two meshes is calculated. In one example, only the relative position parameters between adjacent meshes may be calculated. A relative position parameter is, for example, the angle between two meshes or the distance between their center points.
Sub-step 5042: if each relative position parameter of the three-dimensional mesh model matches the corresponding relative position parameter of the preset three-dimensional face model, determine that the face is the preset user's three-dimensional face.
Specifically, the relative position parameters between the meshes of the preset user's three-dimensional face model are prestored in the electronic device, and it is determined whether each relative position parameter of the three-dimensional mesh model matches the corresponding parameter of the preset three-dimensional face model. For example, suppose the relative position parameter between mesh 1 and mesh 2 of the three-dimensional mesh model is A, and the corresponding parameter between mesh 1 and mesh 2 of the three-dimensional face model is B; if the absolute value of the difference between A and B is within a preset range, parameter A is judged to match parameter B. When every relative position parameter of the mesh model matches the corresponding parameter of the face model, the face is determined to be the preset user's three-dimensional face. When at least one relative position parameter of the mesh model does not match the corresponding parameter of the face model, the face is determined not to be the preset user's three-dimensional face, and the face image may be a non-preset user's face and/or a two-dimensional picture.
In this embodiment, a matching-count threshold may also be set. When the number of unmatched relative position parameters between the mesh model and the face model is less than the threshold, the face is determined to be the preset user's three-dimensional face; when that number is greater than or equal to the threshold, the face image is determined not to be the preset user's three-dimensional face.
Compared with the first embodiment, this embodiment provides a specific implementation of determining, from the three-dimensional mesh model and the preset user's three-dimensional face model, whether the face is the preset user's three-dimensional face. This embodiment can also serve as a refinement of the second to fourth embodiments and achieves the same technical effect.
The sixth embodiment of the present application relates to a face recognition method. This embodiment is a refinement of the first embodiment, the main refinement being that it provides a specific implementation of obtaining the three-dimensional mesh model corresponding to the face from the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images.
The specific flow of the face recognition method of this embodiment is shown in FIG. 6.
Steps 601, 602 and 604 are substantially the same as steps 101, 102 and 104 and are not repeated here. The main difference is that step 603 includes the following sub-steps:
Sub-step 6031: calculate the three-dimensional coordinates of each feature point from the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images.
Specifically, the preset parameters of the two cameras include the intrinsic parameters of each camera and the extrinsic parameters between the two cameras. The intrinsic parameters include the camera's focal length, imaging parameters, principal point, and so on; the extrinsic parameters include the coordinate relationship between the two cameras after calibration, including the translation matrix, the rotation matrix, and so on. From the preset parameters of the two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images, the three-dimensional coordinates of each feature point can be calculated.
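With the calibration expressed as 3x4 projection matrices (intrinsics combined with the stereo extrinsics), one standard way to recover a feature point's three-dimensional coordinates from its two two-dimensional coordinates is linear (DLT) triangulation. The embodiment does not prescribe the formulation, so the following is an illustrative sketch of that standard technique.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a point's 3-D coordinates from its 2-D coordinates in two
    views.  P1, P2 are the 3x4 projection matrices of the two cameras
    (intrinsics times the calibrated rotation/translation); uv1, uv2 are
    the feature point's pixel coordinates in each image."""
    A = np.stack([uv1[0] * P1[2] - P1[0],
                  uv1[1] * P1[2] - P1[1],
                  uv2[0] * P2[2] - P2[0],
                  uv2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)   # least-squares solution of A X = 0
    X = vt[-1]
    return X[:3] / X[3]           # de-homogenise

# Synthetic check: two identical cameras with a 1-unit baseline along x.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

point = np.array([0.2, -0.1, 2.0])
recovered = triangulate(P1, P2, project(P1, point), project(P2, point))
print(np.round(recovered, 6))     # recovers [ 0.2 -0.1  2. ]
```

With noisy detections the SVD still returns the least-squares point; in practice each of the face's feature points would be triangulated this way to form the point set for the mesh.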
Sub-step 6032: construct the three-dimensional mesh model corresponding to the face image from the three-dimensional coordinates of each feature point.
Specifically, the three-dimensional coordinates of the feature points form a point set in space, from which a three-dimensional mesh model is constructed. The meshes may be triangular meshes, quadrilateral meshes, and so on. Taking triangular meshes as an example, the Delaunay method is used for triangulation, so that each triangle is formed from the three nearest points and the edges of the triangles do not intersect; after triangulation a polygonal hull is formed, and the triangular mesh model corresponding to the face image is constructed.
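Sub-step 6032 can be illustrated with `scipy.spatial.Delaunay`, one readily available implementation of the Delaunay method. For a face surface, the triangulation is typically computed on the two-dimensional image coordinates and the triangles then lifted to the three-dimensional points; that choice, and the feature-point values below, are assumptions for illustration, since the embodiment only names the method.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical feature points: 2-D image coordinates and the matching
# 3-D coordinates computed in sub-step 6031 (same ordering).
uv = np.array([[0.0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.4]])
xyz = np.array([[0.0, 0, 2], [1, 0, 2], [0, 1, 2], [1, 1, 2], [0.5, 0.4, 1.8]])

tri = Delaunay(uv)            # non-intersecting triangles over the point set
mesh = xyz[tri.simplices]     # (n_triangles, 3, 3) triangular mesh model
print(len(tri.simplices))     # four corner points plus one interior point
                              # yield 4 triangles
```

Real systems use far more feature points (often dozens to hundreds), but the construction is the same: triangulate once, then index the three-dimensional coordinates by the triangle list.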
Compared with the first embodiment, this embodiment provides a specific implementation of obtaining the three-dimensional mesh model corresponding to the face from the preset parameters of any two cameras and the two-dimensional coordinates of each feature point on the first and second two-dimensional images. This embodiment can also serve as a refinement of the second to fifth embodiments and achieves the same technical effect.
The seventh embodiment of the present application relates to a face recognition method. This embodiment is an improvement on the sixth embodiment, the main improvement being the addition of correction of the two-dimensional coordinates of each feature point on the first and second two-dimensional images.
The specific flow of the face recognition method of this embodiment is shown in FIG. 7.
Steps 701, 702 and 704 are substantially the same as steps 101, 102 and 104 and are not repeated here. The main difference is that sub-step 7031 is added in step 703, as follows:
Sub-step 7031: correct the two-dimensional coordinates of each feature point on the first and second two-dimensional images using a window matching function and the preset user's two-dimensional face image.
Specifically, take as an example six feature points set on the face, namely feature points 1 to 6, and the correction of the two-dimensional coordinates of each feature point on the first two-dimensional image. Taking feature point 1 as an example, a window region including feature point 1 is set on the first two-dimensional image by expanding outward by at least one pixel centered on feature point 1. Taking an expansion of one pixel as an example, the window region shown in FIG. 8 is obtained, which includes nine points (including feature point 1). Among the nine points of this window region, the point matching feature point 1 on the preset user's two-dimensional face image is searched for and denoted the refined feature point 1. The two-dimensional coordinates of refined feature point 1 are then used as the corrected two-dimensional coordinates of feature point 1 on the first two-dimensional image. The other feature points are corrected similarly, and the two-dimensional coordinates of each feature point on the second two-dimensional image are corrected in a similar way, which is not repeated here.
In this embodiment, searching the window region of the first two-dimensional image for the point matching a feature point on the preset user's two-dimensional face image is taken as an example. The search proceeds as follows, again taking feature point 1 as an example. Take the window of feature point 1 on the first two-dimensional image, for example the window region shown in FIG. 8, and take a window of a certain size centered in turn on each point in that region; take a window of the same size centered on feature point 1 of the preset user's two-dimensional face image, and calculate the matching degree between the gray levels of the two windows, obtaining matching degrees D1 to D9. The matching degree may be a similarity, a ratio, and so on. From the matching degree between each point in the window region and feature point 1 on the preset user's two-dimensional face image, the point in the window region matching feature point 1 can be obtained; the point with the largest matching degree among D1 to D9 may be taken as that matching point.
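The window search can be sketched as follows, assuming normalized cross-correlation of gray levels as the matching degree (the embodiment allows a similarity, a ratio, and so on) and the one-pixel expansion of FIG. 8, i.e. a 3x3 search region.

```python
import numpy as np

def ncc(a, b):
    """Matching degree between two equal-size gray-level windows
    (normalized cross-correlation, one possible similarity measure)."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def refine_point(img, pt, ref_img, ref_pt, search=1, half=1):
    """Among the (2*search+1)^2 points around pt, return the one whose
    surrounding window best matches the window around the same feature
    point on the enrolled 2-D face image (scores D1..D9 for search=1)."""
    ry, rx = ref_pt
    ref_win = ref_img[ry - half:ry + half + 1, rx - half:rx + half + 1]
    best_score, best_pt = -2.0, pt
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = pt[0] + dy, pt[1] + dx
            score = ncc(img[y - half:y + half + 1, x - half:x + half + 1],
                        ref_win)
            if score > best_score:
                best_score, best_pt = score, (y, x)
    return best_pt

# The enrolled image has a distinctive patch centered at (3, 3); in the
# captured image the same patch sits one pixel to the right, so the
# detected feature point (3, 3) is corrected to (3, 4).
patch = np.arange(9.0).reshape(3, 3)
ref_img = np.zeros((7, 7)); ref_img[2:5, 2:5] = patch
img = np.zeros((7, 7));     img[2:5, 3:6] = patch
print(refine_point(img, (3, 3), ref_img, (3, 3)))   # (3, 4)
```

The corrected coordinates from this search are what sub-step 7032 feeds into the triangulation, so even a one-pixel refinement propagates into a more accurate three-dimensional model.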
Sub-step 7032: obtain the three-dimensional coordinates of each feature point from the preset parameters of any two cameras and the corrected two-dimensional coordinates of each feature point on the first and second two-dimensional images.
Specifically, this is substantially the same as sub-step 6031; the main difference is that the three-dimensional coordinates of each feature point are calculated from the corrected two-dimensional coordinates of each feature point on the first and second two-dimensional images.
Sub-step 7033: construct the three-dimensional mesh model corresponding to the face image from the three-dimensional coordinates of each feature point.
Specifically, this is substantially the same as sub-step 6032 and is not repeated here.
Compared with the sixth embodiment, this embodiment corrects the two-dimensional coordinates of each feature point on the first and second two-dimensional images, so that the obtained three-dimensional coordinates of each feature point are more precise, a more precise three-dimensional mesh model is obtained, and the accuracy of face recognition is improved.
The eighth embodiment of the present application relates to a face recognition method. This embodiment is an improvement on the first embodiment, the main improvement being the correction of the face pose corresponding to the obtained three-dimensional mesh model.
The specific flow of the face recognition method of this embodiment is shown in FIG. 9.
Steps 801 to 803 are substantially the same as steps 101 to 103 and are not repeated here. The main difference is that step 804 includes the following sub-steps:
Sub-step 8041: correct the face pose corresponding to the three-dimensional mesh model to a preset pose.
Specifically, during face recognition the current user may not face the cameras in the preset pose, so the face pose corresponding to the obtained three-dimensional mesh model is not the preset pose either. In this case the face pose corresponding to the mesh model is corrected to obtain a three-dimensional mesh model in the preset pose. The preset pose is consistent with the face pose of the three-dimensional face model enrolled by the preset user; for example, if the user faced the camera directly when enrolling the three-dimensional face model, the preset pose is facing the camera directly. The three-dimensional mesh model is then consistent in pose with the three-dimensional face model, which makes comparison easier.
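One common way to carry out sub-step 8041 is a rigid alignment of the mesh vertices onto the enrolled model's preset pose, for example with the Kabsch algorithm. The embodiment does not specify the correction method, so this sketch is an illustrative assumption; it also assumes the vertices of the two models are in corresponding order.

```python
import numpy as np

def correct_pose(vertices, preset_vertices):
    """Rotate and translate the mesh-model vertices so that their pose
    matches the enrolled preset-pose vertices (rigid Kabsch alignment)."""
    pc, qc = vertices.mean(axis=0), preset_vertices.mean(axis=0)
    H = (vertices - pc).T @ (preset_vertices - qc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (vertices - pc) @ R.T + qc

# The enrolled model faces the camera; the captured mesh has the same
# shape but is rotated 30 degrees about the vertical axis and shifted.
preset = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])
captured = preset @ Rz.T + np.array([0.3, -0.2, 0.5])

corrected = correct_pose(captured, preset)
print(np.allclose(corrected, preset))          # True: pose recovered
```

After this correction the mesh model and the enrolled face model share the same pose, so the comparison of sub-step 8042 does not have to be pose-invariant.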
Sub-step 8042: determine, from the three-dimensional mesh model in the preset pose and the preset user's three-dimensional face model, whether the face is the preset user's three-dimensional face.
Specifically, the electronic device stores a three-dimensional face model enrolled by the preset user. The three-dimensional mesh model in the preset pose is compared with the preset user's three-dimensional face model to determine whether the two match. When they match, the face is the preset user's three-dimensional face; when they do not match, the face is not the preset user's three-dimensional face, that is, the face may be a non-preset user's face and/or a two-dimensional picture.
Compared with the first embodiment, this embodiment corrects the face pose corresponding to the three-dimensional mesh model to the preset pose, which facilitates comparison with the three-dimensional face model and improves the accuracy of face recognition. This embodiment can also serve as an improvement on the second to seventh embodiments and achieves the same technical effect.
The ninth embodiment of the present application relates to a processing chip. As shown in FIG. 10, the processing chip 110 is connected to at least one memory 111 and to multiple cameras 112. The memory 111 stores instructions executable by the processing chip 110, and the instructions are executed by the processing chip 110 so that the processing chip 110 can perform the foregoing method embodiments.
The tenth embodiment of the present application relates to a main control device. Referring to FIG. 10, the main control device includes the aforementioned processing chip 110, at least one memory 111 and multiple cameras 112.
A person of ordinary skill in the art can understand that the above embodiments are specific embodiments for implementing the present application, and that in practical applications various changes in form and detail may be made to them without departing from the spirit and scope of the present application.

Claims (12)

  1. A face recognition method, characterized in that the method comprises:
    capturing at least one two-dimensional image containing a face by each of a plurality of cameras, and selecting a first two-dimensional image containing the face and a second two-dimensional image containing the face captured respectively by any two of the cameras;
    extracting two-dimensional coordinates of a plurality of feature points of the face in the first two-dimensional image, and extracting the two-dimensional coordinates of the plurality of feature points in the second two-dimensional image;
    obtaining a three-dimensional mesh model corresponding to the face according to preset parameters of said any two cameras and the two-dimensional coordinates of each of the feature points in the first two-dimensional image and the second two-dimensional image;
    determining whether the face is a three-dimensional face of a preset user according to the three-dimensional mesh model and a three-dimensional face model of the preset user.
  2. The face recognition method according to claim 1, characterized in that after the determining whether the face is the three-dimensional face of the preset user according to the three-dimensional mesh model and the three-dimensional face model of the preset user, the method further comprises:
    determining whether the face is a live face of the preset user according to a plurality of frames of the three-dimensional mesh model.
  3. The face recognition method according to claim 2, characterized in that the determining whether the face is a live face of the preset user according to a plurality of frames of the three-dimensional mesh model comprises:
    selecting two frames of the three-dimensional mesh model, and calculating deformation values between corresponding meshes among a plurality of meshes of the two frames of the three-dimensional mesh model;
    if at least one of the deformation values is greater than or equal to a preset deformation threshold, determining that the face is a live face of the preset user.
  4. The face recognition method according to claim 1, characterized in that before the obtaining the three-dimensional mesh model corresponding to the face according to the preset parameters of said any two cameras and the two-dimensional coordinates of each of the feature points in the first two-dimensional image and the second two-dimensional image, the method further comprises:
    determining whether the two-dimensional image originates from the preset user according to at least one of the two-dimensional images and a two-dimensional face image of the preset user;
    if it is determined that the face originates from the preset user, proceeding to the step of obtaining the three-dimensional mesh model corresponding to the face according to the preset parameters of said any two cameras and the two-dimensional coordinates of each of the feature points in the first two-dimensional image and the second two-dimensional image.
  5. 如权利要求1所述的人脸识别方法,其特征在于,所述根据所述三维网格模型与预设用户的三维人脸模型,确定所述人脸是否为所述预设用户的三维人脸,包括:The face recognition method of claim 1, wherein the three-dimensional mesh model and a three-dimensional face model of a preset user are used to determine whether the face is a three-dimensional face of the preset user. Face, including:
    计算所述三维网格模型上各网格之间的相对位置参数;Calculating the relative position parameters between the grids on the three-dimensional grid model;
    判断所述三维网格模型上的各所述相对位置参数与预设的所述三维人脸模型上对应的所述相对位置参数是否均匹配;Determine whether each of the relative position parameters on the three-dimensional mesh model matches the corresponding relative position parameter on the preset three-dimensional face model;
    若所述三维网格模型上的各所述相对位置参数与预设的所述三维人脸模型上对应的所述相对位置参数均匹配,确定所述人脸是所述预设用户的三维人脸。If each of the relative position parameters on the three-dimensional mesh model matches the corresponding relative position parameter on the preset three-dimensional face model, it is determined that the face is a three-dimensional person of the preset user face.
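As a rough illustration of the matching step in claim 5 (the tolerance-based notion of "match" and the flat parameter vectors are assumptions — the patent does not define how matching is scored):

```python
import numpy as np

def is_preset_users_3d_face(mesh_params, model_params, tol=0.05):
    """Claim-5-style check: the face passes only if every relative-position
    parameter of its mesh matches the corresponding parameter of the
    preset user's stored 3-D face model, here within a fixed tolerance."""
    mesh_params = np.asarray(mesh_params, dtype=float)
    model_params = np.asarray(model_params, dtype=float)
    # All parameters must match; a single outlier rejects the face.
    return bool(np.all(np.abs(mesh_params - model_params) <= tol))
```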
  6. The face recognition method according to claim 1, wherein the obtaining the three-dimensional mesh model corresponding to the human face according to the preset parameters of the any two cameras and the two-dimensional coordinates of each feature point on the first two-dimensional image and the second two-dimensional image comprises:
    calculating three-dimensional coordinates of each feature point according to the preset parameters of the any two cameras and the two-dimensional coordinates of each feature point on the first two-dimensional image and the second two-dimensional image;
    constructing the three-dimensional mesh model corresponding to the face image according to the three-dimensional coordinates of each feature point.
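The triangulation step of claim 6 can be illustrated under the simplifying assumption of a rectified stereo pair with shared preset intrinsics (focal length, principal point) and a known baseline; the patent itself does not restrict the camera geometry, so this is one possible realization:

```python
import numpy as np

def triangulate_points(pts_left, pts_right, focal_px, baseline_m, cx, cy):
    """Recover 3-D feature-point coordinates from a rectified stereo pair.

    pts_left, pts_right: (N, 2) pixel coordinates of the same feature
    points in the first and second two-dimensional images.
    focal_px, baseline_m, cx, cy: preset camera parameters (focal length
    in pixels, baseline in metres, principal point).
    """
    pts_left = np.asarray(pts_left, dtype=float)
    pts_right = np.asarray(pts_right, dtype=float)
    disparity = pts_left[:, 0] - pts_right[:, 0]  # horizontal pixel shift
    z = focal_px * baseline_m / disparity         # depth from disparity
    x = (pts_left[:, 0] - cx) * z / focal_px      # back-project to 3-D
    y = (pts_left[:, 1] - cy) * z / focal_px
    return np.stack([x, y, z], axis=1)            # (N, 3) coordinates
```

With the 3-D coordinates in hand, the mesh of claim 6 is obtained by connecting the feature points (e.g. by Delaunay triangulation).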
  7. The face recognition method according to claim 6, wherein before the calculating the three-dimensional coordinates of each feature point according to the preset parameters of the any two cameras and the two-dimensional coordinates of each feature point on the first two-dimensional image and the second two-dimensional image, the method further comprises:
    correcting the two-dimensional coordinates of each feature point on the first two-dimensional image and the second two-dimensional image by using a window matching function and the two-dimensional face image of the preset user;
    wherein the calculating the three-dimensional coordinates of each feature point according to the preset parameters of the any two cameras and the two-dimensional coordinates of each feature point on the first two-dimensional image and the second two-dimensional image is specifically:
    obtaining the three-dimensional coordinates of each feature point according to the preset parameters of the any two cameras and the corrected two-dimensional coordinates of each feature point on the first two-dimensional image and the second two-dimensional image.
  8. The face recognition method according to claim 7, wherein the correcting the two-dimensional coordinates of each feature point on the first two-dimensional image and the second two-dimensional image by using the window matching function and the two-dimensional face image of the preset user comprises:
    for each feature point, setting a window area including the feature point on each of the first two-dimensional image and the second two-dimensional image;
    searching the window area of the first two-dimensional image and the window area of the second two-dimensional image respectively for a point matching the feature point on the two-dimensional face image, and recording it as a precise feature point;
    taking the two-dimensional coordinates of the precise feature point on the first two-dimensional image as the corrected two-dimensional coordinates of the feature point on the first two-dimensional image, and taking the two-dimensional coordinates of the precise feature point on the second two-dimensional image as the corrected two-dimensional coordinates of the feature point on the second two-dimensional image.
  9. The face recognition method according to claim 8, wherein the searching the window area of the first two-dimensional image and the window area of the second two-dimensional image respectively for a point matching the feature point on the two-dimensional face image comprises:
    for each window area, obtaining a matching degree between each point in the window area and the feature point on the two-dimensional face image according to a grayscale value of the feature point on the preset two-dimensional face image and grayscale values of the points in the window area;
    obtaining the point in the window area that matches the feature point on the two-dimensional face image according to the matching degree between each point in the window area and the feature point on the two-dimensional face image.
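A minimal sketch of the grayscale window search in claims 8 and 9, assuming a sum-of-absolute-differences score over a square patch as the matching degree (the patent leaves the window matching function unspecified):

```python
import numpy as np

def refine_feature_point(image, approx_xy, template_patch, window=7):
    """Search a window area around an approximate feature point for the
    pixel whose grayscale neighbourhood best matches a reference patch
    (taken around the feature point in the preset user's 2-D face image).

    Returns the (x, y) of the best match: the "precise feature point".
    """
    half = template_patch.shape[0] // 2
    x0, y0 = approx_xy
    best, best_xy = np.inf, approx_xy
    for dy in range(-window, window + 1):
        for dx in range(-window, window + 1):
            y, x = y0 + dy, x0 + dx
            patch = image[y - half:y + half + 1, x - half:x + half + 1]
            if patch.shape != template_patch.shape:
                continue  # candidate window fell outside the image
            # Sum of absolute grayscale differences: lower = better match.
            score = np.abs(patch.astype(float) - template_patch).sum()
            if score < best:
                best, best_xy = score, (x, y)
    return best_xy  # corrected two-dimensional coordinates
```

Running this on both the first and the second two-dimensional image yields the corrected coordinate pairs that claim 7 then feeds into the triangulation.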
  10. The face recognition method according to claim 1, wherein the determining, according to the three-dimensional mesh model and the three-dimensional face model of the preset user, whether the human face is a three-dimensional face of the preset user comprises:
    correcting a face pose corresponding to the three-dimensional mesh model to a preset pose;
    determining, according to the three-dimensional mesh model in the preset pose and the three-dimensional face model of the preset user, whether the human face is a three-dimensional face of the preset user.
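The pose-correction step of claim 10 could, for example, be realized with a rigid Kabsch/Procrustes alignment of the mesh vertices to the preset pose — one possible technique, not one the patent names:

```python
import numpy as np

def correct_pose(points, reference):
    """Rigidly align a face point set to a reference (preset) pose:
    centre both sets, find the optimal rotation by SVD (Kabsch), and
    return the points transformed into the reference pose."""
    p = points - points.mean(axis=0)
    q = reference - reference.mean(axis=0)
    u, _, vt = np.linalg.svd(p.T @ q)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return p @ rot.T + reference.mean(axis=0)
```

After this normalization, the relative-position comparison of claim 5 can be performed without head pose confounding the match.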
  11. A processing chip, wherein the processing chip is connected to at least one memory and to a plurality of cameras, the memory stores instructions executable by the processing chip, and the instructions are executed by the processing chip to enable the processing chip to perform the face recognition method according to any one of claims 1 to 10.
  12. An electronic device, comprising the processing chip according to claim 11, at least one memory, and a plurality of cameras.
PCT/CN2019/084338 2019-04-25 2019-04-25 Facial recognition method, processing chip and electronic device WO2020215283A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980000648.XA CN110199296A (en) 2019-04-25 2019-04-25 Face identification method, processing chip and electronic equipment
PCT/CN2019/084338 WO2020215283A1 (en) 2019-04-25 2019-04-25 Facial recognition method, processing chip and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/084338 WO2020215283A1 (en) 2019-04-25 2019-04-25 Facial recognition method, processing chip and electronic device

Publications (1)

Publication Number Publication Date
WO2020215283A1 true WO2020215283A1 (en) 2020-10-29

Family

ID=67756155

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/084338 WO2020215283A1 (en) 2019-04-25 2019-04-25 Facial recognition method, processing chip and electronic device

Country Status (2)

Country Link
CN (1) CN110199296A (en)
WO (1) WO2020215283A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112997185A (en) * 2019-09-06 2021-06-18 深圳市汇顶科技股份有限公司 Face living body detection method, chip and electronic equipment
WO2021189303A1 (en) * 2020-03-25 2021-09-30 深圳市汇顶科技股份有限公司 Data acquisition apparatus, face recognition apparatus and method, device, and storage medium
CN111898553B (en) * 2020-07-31 2022-08-09 成都新潮传媒集团有限公司 Method and device for distinguishing virtual image personnel and computer equipment
CN112651279B (en) * 2020-09-24 2024-07-26 深圳福鸽科技有限公司 3D face recognition method and system based on close range application
CN112926434A (en) * 2021-02-22 2021-06-08 王先峰 Face recognition equipment based on 5G framework
CN118230386A (en) * 2024-04-02 2024-06-21 中国人民解放军总医院第一医学中心 Face detection method and device based on template matching and artificial neural network confirmation

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095715A (en) * 2015-06-30 2015-11-25 国网山东莒县供电公司 Identity authentication method of electric power system network
WO2018001092A1 (en) * 2016-06-29 2018-01-04 中兴通讯股份有限公司 Face recognition method and apparatus
CN108549873A (en) * 2018-04-19 2018-09-18 北京华捷艾米科技有限公司 Three-dimensional face identification method and three-dimensional face recognition system
CN109145750A (en) * 2018-07-23 2019-01-04 华迅金安(北京)科技有限公司 A kind of driver identity rapid authentication method and system
CN109558764A (en) * 2017-09-25 2019-04-02 杭州海康威视数字技术股份有限公司 Face identification method and device, computer equipment
CN109670487A (en) * 2019-01-30 2019-04-23 汉王科技股份有限公司 A kind of face identification method, device and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107437067A (en) * 2017-07-11 2017-12-05 广东欧珀移动通信有限公司 Human face in-vivo detection method and Related product
CN107392137B (en) * 2017-07-18 2020-09-08 艾普柯微电子(上海)有限公司 Face recognition method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112528902A (en) * 2020-12-17 2021-03-19 四川大学 Video monitoring dynamic face recognition method and device based on 3D face model
CN112906468A (en) * 2021-01-15 2021-06-04 扬州哈工科创机器人研究院有限公司 Forest resource investigation method based on image recognition technology
CN113205090A (en) * 2021-04-29 2021-08-03 北京百度网讯科技有限公司 Picture rectification method and device, electronic equipment and computer readable storage medium
CN113205090B (en) * 2021-04-29 2023-10-24 北京百度网讯科技有限公司 Picture correction method, device, electronic equipment and computer readable storage medium
CN113673374A (en) * 2021-08-03 2021-11-19 支付宝(杭州)信息技术有限公司 Face recognition method, device and equipment
CN113673374B (en) * 2021-08-03 2024-01-30 支付宝(杭州)信息技术有限公司 Face recognition method, device and equipment

Also Published As

Publication number Publication date
CN110199296A (en) 2019-09-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19925842

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19925842

Country of ref document: EP

Kind code of ref document: A1