
Face recognition method, device and equipment

Info

Publication number
CN113033243A
Authority
CN
China
Prior art keywords
face
dimensional face
dimensional
image
model
Prior art date
Legal status
Pending
Application number
CN201911251184.8A
Other languages
Chinese (zh)
Inventor
曾郁凯
颜国明
张有贺
Current Assignee
Zhangzhou Lidaxin Optoelectronic Technology Co ltd
Original Assignee
Zhangzhou Lidaxin Optoelectronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhangzhou Lidaxin Optoelectronic Technology Co ltd
Priority to CN201911251184.8A
Publication of CN113033243A
Legal status: Pending

Classifications

    • G06V40/168: Feature extraction; Face representation (G Physics > G06 Computing; calculating or counting > G06V Image or video recognition or understanding > G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data > G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands > G06V40/16 Human faces, e.g. facial parts, sketches or expressions)
    • G06V20/647: Three-dimensional objects by matching two-dimensional images to three-dimensional objects (G Physics > G06 Computing; calculating or counting > G06V Image or video recognition or understanding > G06V20/00 Scenes; scene-specific elements > G06V20/60 Type of objects > G06V20/64 Three-dimensional objects)
    • G06V40/172: Classification, e.g. identification (G Physics > G06 Computing; calculating or counting > G06V Image or video recognition or understanding > G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data > G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands > G06V40/16 Human faces, e.g. facial parts, sketches or expressions)

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A face recognition method in which the device used for face recognition comprises a front camera and a side camera. The face recognition method comprises: acquiring a front face image of a user to be detected through the front camera, and acquiring a side face image of the user to be detected through the side camera; extracting two-dimensional face features of the front face image, and generating a three-dimensional face model according to the front face image and the side face image; matching the extracted two-dimensional face features and the generated three-dimensional face model with pre-stored two-dimensional face features and a pre-stored three-dimensional face model respectively; and determining whether the user to be detected passes verification according to the matching result. Because the three-dimensional face model used for matching is generated from the front face image and the side face image, attempts by illegal users to authenticate with photos or videos can be identified, which improves system security. Moreover, the three-dimensional face model can be obtained efficiently from the side face image and the front face image, which helps improve recognition efficiency.

Description

Face recognition method, device and equipment
Technical Field
The application belongs to the field of artificial intelligence, and particularly relates to a face recognition method, a face recognition device and face recognition equipment.
Background
With the development of artificial intelligence technology, intelligent devices bring increasing convenience to people's work and daily life. For example, face recognition technology avoids the problems of forgotten digital passwords or lost physical keys, and conveniently and quickly assists a user in unlocking and authenticating access to private information.
However, when a user's face is used for the security verification of a system, an illegal user may be mistakenly accepted as a legal user by presenting a forged face, for example in the form of a photo, a video, or a mask, which brings a significant hidden danger to the security of the system. In addition, existing approaches that perform face recognition based on two cameras require relatively complicated calculation, and their recognition efficiency is low.
Disclosure of Invention
In view of this, embodiments of the present application provide a face recognition method, apparatus, and device to solve the problems in the prior art that, when face recognition is used, an illegal user may be accepted as a legal user by means of a forged face, which brings a significant hidden danger to system security, or that recognition efficiency is low.
A first aspect of an embodiment of the present application provides a face recognition method, where a device for face recognition includes a front camera for acquiring a front face of a user and a side camera for acquiring a side face of the user, and the face recognition method includes:
acquiring a front face image of a user to be detected through the front camera, and acquiring a side face image of the user to be detected through the side camera;
extracting two-dimensional face features of the front face image, and generating a three-dimensional face model according to the front face image and the side face image;
matching the extracted two-dimensional face features and the generated three-dimensional face model with pre-stored two-dimensional face features and three-dimensional face models respectively;
and determining whether the user to be detected passes the verification or not according to the matching result.
With reference to the first aspect, in a first possible implementation manner of the first aspect, before the step of matching the extracted two-dimensional face features and the extracted three-dimensional face model with the pre-stored two-dimensional face features and three-dimensional face model respectively, the method further includes:
acquiring image feature points in a front face image and a side face image of a registered user;
projecting the front face image and the side face image to a virtual three-dimensional space according to the orientation relation between the extracted image feature points and the camera, and synthesizing and storing a three-dimensional face model corresponding to the registered user, or further comprising view angle images of the three-dimensional face model under different view angles;
and extracting two-dimensional face features in the front face image, and storing the corresponding relation between the two-dimensional face features and the registered user.
With reference to the first aspect, in a second possible implementation manner of the first aspect, the step of matching the extracted two-dimensional face features and the generated three-dimensional face model with pre-stored two-dimensional face features and three-dimensional face models respectively includes:
judging whether the generated three-dimensional face model is a head model;
if the generated three-dimensional face model is not the head model, the authentication of the user to be detected fails;
and if the generated three-dimensional face model is the head model, further matching the generated two-dimensional face features with the pre-stored two-dimensional face features.
With reference to the first aspect, in a third possible implementation manner of the first aspect, the step of matching the extracted two-dimensional face features and the generated three-dimensional face model with pre-stored two-dimensional face features and three-dimensional face models respectively includes:
judging whether the extracted two-dimensional face features are matched with the pre-stored two-dimensional face features or not;
if the extracted two-dimensional face features fail to be matched with the pre-stored two-dimensional face features, the verification of the user to be detected fails;
and if the extracted two-dimensional face features are successfully matched with the pre-stored two-dimensional face features, further verifying whether the generated three-dimensional face model is matched with the pre-stored three-dimensional face model.
With reference to the first aspect, the first possible implementation manner of the first aspect, the second possible implementation manner of the first aspect, or the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, the step of matching the generated three-dimensional face model with a pre-stored three-dimensional face model includes:
acquiring an image of a preset visual angle of a user to be detected;
searching a visual angle image matched with the preset visual angle in a prestored three-dimensional face model;
and judging whether the image of the preset visual angle is matched with the searched visual angle image.
A second aspect of the embodiments of the present application provides a face recognition apparatus, a device for face recognition includes a front camera for acquiring a front face of a user and a side camera for acquiring a side face of the user, the face recognition apparatus includes:
the image acquisition unit is used for acquiring a front face image of a user to be detected through the front camera and acquiring a side face image of the user to be detected through the side camera;
the human face model generating unit is used for extracting two-dimensional human face features of the front face image and generating a three-dimensional human face model according to the front face image and the side face image;
the matching unit is used for respectively matching the extracted two-dimensional face features and the generated three-dimensional face model with pre-stored two-dimensional face features and three-dimensional face models;
and the verification unit is used for determining whether the user to be detected passes the verification or not according to the matching result.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the apparatus further includes:
the characteristic point acquisition unit is used for acquiring image characteristic points in a front face image and a side face image of a registered user;
the three-dimensional face model synthesis unit is used for projecting the front face image and the side face image to a virtual three-dimensional space according to the orientation relation between the extracted image feature points and the camera, synthesizing and storing a three-dimensional face model corresponding to the registered user, or further comprising view angle images of the three-dimensional face model under different view angles;
and the two-dimensional face feature extraction unit is used for extracting the two-dimensional face features in the front face image and storing the corresponding relation between the two-dimensional face features and the registered users.
With reference to the second aspect, in a second possible implementation manner of the second aspect, the matching unit includes:
a head model judging subunit, configured to judge whether the generated three-dimensional face model is a head model;
the verification subunit is used for failing to verify the user to be detected if the generated three-dimensional face model is not the head model;
and the two-dimensional face feature matching subunit is used for further matching the generated two-dimensional face features with the pre-stored two-dimensional face features if the generated three-dimensional face model is a head model.
A third aspect of the embodiments of the present application provides a face recognition apparatus, including a front camera, a side camera, a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the face recognition method according to any one of the first aspect when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program, where the computer program is executed by a processor to implement the steps of the face recognition method according to any one of the first aspect.
Compared with the prior art, the embodiments of the present application have the following advantages: a front camera and a side camera are arranged on the device to respectively acquire a front face image and a side face image of the user; a three-dimensional face model is generated from the front face image and the side face image; two-dimensional face features are extracted from the front face image; the extracted two-dimensional face features and the generated three-dimensional face model are respectively matched with the pre-stored two-dimensional face features and three-dimensional face model; and whether the user passes verification is determined according to the matching result. Because the three-dimensional face model used for matching is generated from the front face image and the side face image, attempts by illegal users to authenticate with photos or videos can be identified, which helps improve system security. Moreover, the three-dimensional face model can be obtained efficiently from the side face image and the front face image, which helps improve recognition efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic view of a face recognition scene according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating an implementation of a face recognition method according to an embodiment of the present application;
fig. 3 is a schematic flow chart illustrating an implementation of a method for acquiring pre-stored registered user data according to an embodiment of the present application;
fig. 4 is a schematic flow chart illustrating an implementation process of an efficient face recognition method according to an embodiment of the present application;
fig. 5 is a schematic flow chart illustrating an implementation process of a face recognition method with high security according to an embodiment of the present application;
fig. 6 is a schematic diagram of a face recognition apparatus according to an embodiment of the present application;
fig. 7 is a schematic diagram of a face recognition device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Fig. 1 is a schematic view of a face recognition scene provided by an embodiment of the present application. As shown in fig. 1, a front camera is arranged on the front of the face recognition device; when a user is using the device, the front camera can collect a front face image of the user. A side camera is arranged on the side of the face recognition device; when the user is using the device, the side camera can collect a side face image of the user. Because the front camera and the side camera acquire images of the user from different angles, a three-dimensional image of the user can be obtained. By verifying the obtained three-dimensional image, authentication in which an illegal user impersonates a legal user with a flat picture or a flat video image can be prevented, which improves the security of the system.
Fig. 2 is a schematic view of an implementation flow of a method for performing face recognition based on the face recognition device shown in fig. 1 according to an embodiment of the present application, which is detailed as follows:
in step S201, a front face image of a user to be detected is acquired by the front camera, and a side face image of the user to be detected is acquired by the side camera;
when the image is collected, the image can be collected in advance through one of the front camera or the side camera, and whether the collected image comprises the face data or not is detected in real time. And when the acquired image is detected to comprise the face data, starting other cameras, acquiring the image and identifying the acquired face data.
The collection viewing angle of the side camera and that of the front camera may be mutually perpendicular, or the included angle between the viewing angle of the side camera and that of the front camera may lie within a predetermined angular range, for example greater than 45 degrees and less than 120 degrees. When the user faces the face recognition device for authentication, the front camera can collect a front face image of the user and the side camera can collect a side face image of the user; the front face image and the side face image may include a partially overlapping region, so that the front face image and the side face image can conveniently be stitched according to the overlapping region to obtain a three-dimensional face model of the user.
Of course, as an alternative implementation of the embodiments of the present application, multiple authentication manners with different security levels may also be set in the system. For example, one authentication manner may use only side face images acquired by the side camera, and perform feature extraction and comparison on those side face images. Since only the side camera needs to collect image information of the user, misjudgment caused by an illegal user presenting pictures or playing videos can be avoided, and verification efficiency can be improved.
In step S202, two-dimensional face features of the front face image are extracted, and a three-dimensional face model is generated from the front face image and the side face image;
the two-dimensional human face features of the frontal face image can comprise one or more of eye contour, eyebrow contour, nose contour, face contour and the like, or can further comprise the distance and/or orientation relation between the eye contour, the eyebrow contour, the nose contour and the face contour. The two-dimensional face features included in the front face image can be determined by the variation of the RGB values of the pixels, or by means of haar features. The haar feature is that adjacent rectangles at specified positions in a detection window are used, pixel sums of all rectangles are calculated, and differences of the pixel sums are taken. From these differences, sub-regions of the image can be classified.
When generating the three-dimensional face model according to the front face image and the side face image, feature points may first be found in the front face image and the side face image respectively, the feature points common to both images are determined, and the front face image and the side face image are then stitched according to these common feature points. The three-dimensional mesh model is rotated according to the acquisition viewing angle of the current image, and the stitched image is mapped onto the face mesh model, in combination with the facial features in the image, to obtain the three-dimensional face model.
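The following is a minimal sketch of finding the feature points common to the front face image and the side face image; ORB descriptors and a ratio test are illustrative choices, since this embodiment does not prescribe a particular feature detector or matcher.

```python
# Minimal sketch: detect and match feature points between the front and side
# face images so that the two images can be stitched on their common points.
import cv2

def common_feature_points(front_gray, side_gray, ratio=0.75):
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(front_gray, None)
    kp2, des2 = orb.detectAndCompute(side_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Keep only clearly better-than-second-best matches (Lowe-style ratio test).
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    pts_front = [kp1[m.queryIdx].pt for m in good]
    pts_side = [kp2[m.trainIdx].pt for m in good]
    return pts_front, pts_side
```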
In step S203, the extracted two-dimensional face features and the generated three-dimensional face model are respectively matched with pre-stored two-dimensional face features and three-dimensional face models;
before matching, the method may further include a step of pre-storing the two-dimensional face features and the three-dimensional face model, as shown in fig. 3, including:
in step S301, image feature points in a front face image and a side face image of a registered user are acquired;
specifically, the face recognition apparatus needs to register the user before performing user authentication. The registration can be set when the device is used, or a new registered user can be added after the user acquires the set authority.
When a user registers, the user's two-dimensional face features and three-dimensional face model need to be stored in advance. The two-dimensional face features can be obtained from the front face image, or the user's two-dimensional face features can be obtained from the side face image. The three-dimensional face model, which can present the face at any viewing angle, can be synthesized by stitching and computationally combining the front face image with the side face image. Therefore, it is necessary to acquire a front face image and a side face image of the registered user.
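The following is a minimal sketch of the data stored for a registered user as described above; the field and function names are assumptions introduced for illustration.

```python
# Minimal sketch of a registration record: two-dimensional face features, the
# synthesized three-dimensional face model, and optional per-angle view images.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class RegisteredUser:
    user_id: str
    face_features_2d: np.ndarray    # feature vector extracted from the front face image
    face_model_3d: np.ndarray       # e.g. an N x 3 array of mesh vertices
    view_images: dict = field(default_factory=dict)  # viewing angle -> 2D view image

registry = {}

def register(user_id, features_2d, model_3d, view_images=None):
    registry[user_id] = RegisteredUser(user_id, features_2d, model_3d,
                                       view_images or {})
```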
In step S302, according to the orientation relationship between the extracted image feature points and the camera, projecting the front face image and the side face image to a virtual three-dimensional space, and synthesizing and storing a three-dimensional face model corresponding to the registered user, or further including view angle images of the three-dimensional face model at different viewing angles;
the front face image and the side face image can be spliced according to the feature points included in the front face image and the side face image to obtain a spliced user image. And mapping the spliced image to a three-dimensional grid model according to the characteristic points included in the spliced image and the azimuth relation of the camera. And for the part of the three-dimensional grid model which can not be mapped with the image, the three-dimensional face model can be completed in a conjecture mode, and the three-dimensional face model corresponding to the registered user is synthesized and stored.
In a further preferred scheme of the present application, two-dimensional face view images at different viewing angles can be generated from the synthesized three-dimensional face model, and the correspondence between these view images and their viewing angles is stored, so that feature comparison and verification can subsequently be performed directly on an image of the user to be verified captured at a specific viewing angle. When a user to be detected is authenticated, an image of the user at a preset viewing angle is acquired, the view image matching that preset viewing angle is looked up in the pre-stored three-dimensional face model, and it is judged whether the currently acquired image at the preset viewing angle matches the looked-up view image; if so, the verification passes. The preset viewing angle can be a randomly generated viewing angle. This approach prevents a user from authenticating with pictures or videos, allows the user to be verified conveniently and quickly, and helps improve authentication efficiency.
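The following is a minimal sketch of verification at a randomly generated preset viewing angle, assuming the per-angle view images stored at registration (see the registration sketch above); the similarity measure, angle keys and threshold are assumptions for illustration.

```python
# Minimal sketch: pick a random preset viewing angle, capture the user at that
# angle, and compare against the stored view image for the same angle.
import random
import cv2

def view_similarity(img_a, img_b):
    # Fraction of cross-checked ORB matches, used as a rough similarity score.
    orb = cv2.ORB_create()
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    return len(matches) / max(len(des_a), len(des_b))

def verify_random_view(stored_view_images, capture_at_angle, threshold=0.3):
    angle = random.choice(list(stored_view_images))  # randomly generated preset angle
    captured = capture_at_angle(angle)               # prompt and capture the user at that angle
    return view_similarity(captured, stored_view_images[angle]) >= threshold
```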
In step S303, two-dimensional face features in the front face image are extracted, and a correspondence between the two-dimensional face features and registered users is stored.
By storing the three-dimensional face model and the correspondence between the two-dimensional face features and the registered user, when a user is verified it can first be confirmed through the three-dimensional face model that a real face is present, and it can then be verified whether that face belongs to a registered user.
Of course, the method is not limited to this, and the two-dimensional face features and the three-dimensional face model of the registered user stored in other terminals or memories may also be directly obtained.
In step S204, it is determined whether the user to be detected passes the verification according to the matching result.
In the present application, a plurality of authentication modes with different security levels can be set, and different security levels can correspond to different verification procedures. For example, these may include:
1. Two-dimensional face feature verification combined with verification of whether the face is a three-dimensional face. Verification passes only when the two-dimensional face features of the user pass verification and the current face is a three-dimensional face. Verification fails if the face to be verified is not a three-dimensional face, even if the two-dimensional face features match, and likewise fails if the face is a three-dimensional face but the two-dimensional face features do not match.
2. Two-dimensional face feature verification combined with feature verification of the three-dimensional face model. Only when the two-dimensional face features pass verification and the features of the three-dimensional face model also match is the user to be verified regarded as a legal user and the verification passed. A minimal sketch combining these checks is given below.
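The following is a minimal sketch combining the individual checks into the two modes above; the individual checks are passed in as callables because this embodiment does not fix their implementation, and the parameter names are assumptions for illustration.

```python
# Minimal sketch of the two authentication modes with different security levels.
from typing import Callable

def verify(security_level: int,
           check_2d_features: Callable[[], bool],
           check_is_3d_face: Callable[[], bool],
           check_3d_model_features: Callable[[], bool]) -> bool:
    if not check_2d_features():
        return False
    if security_level == 1:
        # Mode 1: 2D feature match plus a check that the face is three-dimensional.
        return check_is_3d_face()
    # Mode 2: 2D feature match plus feature matching of the 3D face model.
    return check_3d_model_features()
```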
By arranging a front camera and a side camera on the device, a front face image and a side face image of the user are respectively acquired; a three-dimensional face model is generated from the front face image and the side face image; two-dimensional face features are extracted from the front face image; the extracted two-dimensional face features and the generated three-dimensional face model are respectively matched with the pre-stored two-dimensional face features and three-dimensional face model; and whether the user passes verification is determined according to the matching result. Because the three-dimensional face model used for matching is generated from the front face image and the side face image, attempts by illegal users to authenticate with photos or videos can be identified, which helps improve system security.
Fig. 4 is a schematic view of an implementation flow of another face recognition method provided in the embodiment of the present application, which is detailed as follows:
in step S401, a front face image of a user to be detected is acquired by the front camera, and a side face image of the user to be detected is acquired by the side camera;
in step S402, two-dimensional face features of the front face image are extracted, and a three-dimensional face model is generated from the front face image and the side face image;
steps S401-S402 are substantially the same as steps S201-S202 in fig. 2.
In step S403, determining whether the generated three-dimensional face model is a head model;
when judging whether the three-dimensional face model is the head model, whether the three-dimensional face model is the head model can be judged according to the mode whether the side face image collected by the side camera has the general characteristics of the general side face image. The direct and efficient contrast processing of the side face image can be realized, and the contrast matching efficiency can be improved.
In step S404, if the generated three-dimensional face model is not a head model, the user to be detected fails to verify;
if the three-dimensional face model is not the head model, the three-dimensional face model represents the current verification object and may be a picture or a video, the verification failure of the current user to be detected can be directly determined, and the judgment efficiency of the virtual verification mode is improved.
In step S405, if the generated three-dimensional face model is a head model, the generated two-dimensional face features are further matched with pre-stored two-dimensional face features.
If the generated three-dimensional face model is a head model, the two-dimensional face features are further compared, that is, it is judged whether the current user is a preset registered user, and whether the user is the registered user is determined by comparing and analyzing the facial-feature characteristics contained in the two-dimensional face features.
In step S406, if the two-dimensional face features match, the user to be detected is successfully verified.
If the two-dimensional face features are successfully matched, the verification of the user to be detected is completed. The face recognition method shown in fig. 4 can prevent the authentication of illegal users and can effectively improve authentication efficiency.
Fig. 5 is a schematic view of an implementation flow of another face recognition method provided in the embodiment of the present application, which is detailed as follows:
in step S501, a front face image of a user to be detected is acquired by the front camera, and a side face image of the user to be detected is acquired by the side camera;
in step S502, two-dimensional face features of the front face image are extracted, and a three-dimensional face model is generated according to the front face image and the side face image;
steps S501-S502 are substantially the same as steps S201-S202 described in fig. 2.
In step S503, it is determined whether the extracted two-dimensional face features match pre-stored two-dimensional face features;
the front face image of the user to be detected is obtained through the front camera, the two-dimensional face features of the front face image of the user to be detected are extracted and matched with the pre-stored two-dimensional face features, and whether the current user is a legal user or not is determined.
In step S504, if matching between the extracted two-dimensional face features and the pre-stored two-dimensional face features fails, the authentication of the user to be detected fails;
and if the two-dimensional face feature matching fails, the current user is an illegal user and is not authenticated.
In step S505, if the extracted two-dimensional face features are successfully matched with the pre-stored two-dimensional face features, it is further verified whether the generated three-dimensional face model is matched with the pre-stored three-dimensional face model.
If the two-dimensional face features match, the three-dimensional face model is further matched and its features are obtained. Two-dimensional images at multiple viewing angles can be generated, according to preset viewing angle requirements, both from the generated three-dimensional face model to be compared and from the pre-stored three-dimensional face model; the similarity of the two-dimensional images generated at the same viewing angle is compared for each angle, and whether the three-dimensional face model passes verification is determined according to the similarity of these two-dimensional images.
The predetermined viewing angle requirements may include a downward view, an upward view, a left view, a right view, etc. of a specified angle.
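The following is a minimal sketch of this multi-view comparison, assuming a render_view(model, angle) helper that produces a grayscale view image of a three-dimensional face model; render_view, the angle set and the threshold are assumptions introduced for illustration.

```python
# Minimal sketch: render both 3D face models at the same preset viewing
# angles and average a per-view similarity score.
import numpy as np

PRESET_ANGLES = ["down_15", "up_15", "left_30", "right_30"]  # assumed view set

def view_score(img_a: np.ndarray, img_b: np.ndarray) -> float:
    # Normalized-correlation style similarity in [0, 1] on equally sized views.
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(abs(np.dot(a, b)) / denom)

def models_match(generated_model, stored_model, render_view, threshold=0.8) -> bool:
    scores = [view_score(render_view(generated_model, angle),
                         render_view(stored_model, angle))
              for angle in PRESET_ANGLES]
    return float(np.mean(scores)) >= threshold
```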
In step S506, if the three-dimensional face model is successfully matched, the user to be detected is successfully verified.
By comparing both the two-dimensional face features and the three-dimensional face model, the features of the three-dimensional face model are additionally taken into account on top of the two-dimensional face feature comparison, so that authentication attempts made by wearing a face mask can be effectively identified, and the security of face recognition is further improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 6 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present application, which is detailed as follows:
the face recognition device comprises:
the image acquisition unit 601 is configured to acquire a front face image of a user to be detected through the front camera, and acquire a side face image of the user to be detected through the side camera;
a face model generating unit 602, configured to extract two-dimensional face features of the front face image, and generate a three-dimensional face model according to the front face image and the side face image;
a matching unit 603, configured to match the extracted two-dimensional face features and the generated three-dimensional face model with pre-stored two-dimensional face features and three-dimensional face models, respectively;
the verification unit 604 is configured to determine whether the user to be detected passes verification according to a matching result.
Preferably, the apparatus further comprises:
the characteristic point acquisition unit is used for acquiring image characteristic points in a front face image and a side face image of a registered user;
the three-dimensional face model synthesis unit is used for projecting the front face image and the side face image to a virtual three-dimensional space according to the orientation relation between the extracted image feature points and the camera, synthesizing and storing a three-dimensional face model corresponding to the registered user, or further comprising view angle images of the three-dimensional face model under different view angles;
and the two-dimensional face feature extraction unit is used for extracting the two-dimensional face features in the front face image and storing the corresponding relation between the two-dimensional face features and the registered users.
Preferably, the matching unit includes:
a head model judging subunit, configured to judge whether the generated three-dimensional face model is a head model;
the verification subunit is used for failing to verify the user to be detected if the generated three-dimensional face model is not the head model;
and the two-dimensional face feature matching subunit is used for further matching the generated two-dimensional face features with the pre-stored two-dimensional face features if the generated three-dimensional face model is a head model.
The face recognition apparatus shown in fig. 6 corresponds to the face recognition method shown in fig. 2, 4, and 5.
Fig. 7 is a schematic diagram of a face recognition device according to an embodiment of the present application. As shown in fig. 7, the face recognition apparatus 7 of this embodiment includes: a processor 70, a memory 71 and a computer program 72, such as a face recognition program, stored in said memory 71 and operable on said processor 70. The processor 70, when executing the computer program 72, implements the steps in the various face recognition method embodiments described above. Alternatively, the processor 70 implements the functions of the modules/units in the above-described device embodiments when executing the computer program 72.
Illustratively, the computer program 72 may be partitioned into one or more modules/units that are stored in the memory 71 and executed by the processor 70 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 72 in the face recognition device 7. For example, the computer program 72 may be divided into:
the image acquisition unit is used for acquiring a front face image of a user to be detected through the front camera and acquiring a side face image of the user to be detected through the side camera;
the human face model generating unit is used for extracting two-dimensional human face features of the front face image and generating a three-dimensional human face model according to the front face image and the side face image;
the matching unit is used for respectively matching the extracted two-dimensional face features and the generated three-dimensional face model with pre-stored two-dimensional face features and three-dimensional face models;
and the verification unit is used for determining whether the user to be detected passes the verification or not according to the matching result.
The face recognition device 7 may be a desktop computer, a notebook, a palm computer, a cloud server, or another computing device. The face recognition device may include, but is not limited to, a processor 70 and a memory 71. It will be appreciated by those skilled in the art that fig. 7 is merely an example of the face recognition device 7 and does not constitute a limitation on it; the face recognition device 7 may include more or fewer components than shown, combine certain components, or have different components. For example, the face recognition device may also include an input/output device, a network access device, a bus, and the like.
The Processor 70 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the face recognition device 7, such as a hard disk or a memory of the face recognition device 7. The memory 71 may also be an external storage device of the face recognition device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the face recognition device 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the face recognition device 7. The memory 71 is used to store the computer program and other programs and data required by the face recognition device. The memory 71 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain other components which may be suitably increased or decreased as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media which may not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A face recognition method characterized in that an apparatus for the face recognition includes a front camera for acquiring a front face of a user and a side camera for acquiring a side face of the user, the face recognition method comprising:
acquiring a front face image of a user to be detected through the front camera, and acquiring a side face image of the user to be detected through the side camera;
extracting two-dimensional face features of the front face image, and generating a three-dimensional face model according to the front face image and the side face image;
matching the extracted two-dimensional face features and the generated three-dimensional face model with pre-stored two-dimensional face features and three-dimensional face models respectively;
and determining whether the user to be detected passes the verification or not according to the matching result.
2. The method of claim 1, wherein before the step of matching the extracted two-dimensional face features and the extracted three-dimensional face model with pre-stored two-dimensional face features and three-dimensional face models, respectively, the method further comprises:
acquiring image feature points in a front face image and a side face image of a registered user;
projecting the front face image and the side face image to a virtual three-dimensional space according to the orientation relation between the extracted image feature points and the camera, and synthesizing and storing a three-dimensional face model corresponding to the registered user, or further comprising view angle images of the three-dimensional face model under different view angles;
and extracting two-dimensional face features in the front face image, and storing the corresponding relation between the two-dimensional face features and the registered user.
3. The method of claim 1, wherein the step of matching the extracted two-dimensional face features and the generated three-dimensional face model with pre-stored two-dimensional face features and three-dimensional face models, respectively, comprises:
judging whether the generated three-dimensional face model is a head model;
if the generated three-dimensional face model is not the head model, the authentication of the user to be detected fails;
and if the generated three-dimensional face model is the head model, further matching the generated two-dimensional face features with the pre-stored two-dimensional face features.
4. The method of claim 1, wherein the step of matching the extracted two-dimensional face features and the generated three-dimensional face model with pre-stored two-dimensional face features and three-dimensional face models, respectively, comprises:
judging whether the extracted two-dimensional face features are matched with the pre-stored two-dimensional face features or not;
if the extracted two-dimensional face features fail to be matched with the pre-stored two-dimensional face features, the verification of the user to be detected fails;
and if the extracted two-dimensional face features are successfully matched with the pre-stored two-dimensional face features, further verifying whether the generated three-dimensional face model is matched with the pre-stored three-dimensional face model.
5. The face recognition method according to any one of claims 1 to 4, wherein the step of matching the generated three-dimensional face model with a pre-stored three-dimensional face model comprises:
acquiring an image of a preset visual angle of a user to be detected;
searching a visual angle image matched with the preset visual angle in a prestored three-dimensional face model;
and judging whether the image of the preset visual angle is matched with the searched visual angle image.
6. A face recognition apparatus characterized in that a device for the face recognition includes a front camera for acquiring a front face of a user and a side camera for acquiring a side face of the user, the face recognition apparatus comprising:
the image acquisition unit is used for acquiring a front face image of a user to be detected through the front camera and acquiring a side face image of the user to be detected through the side camera;
the human face model generating unit is used for extracting two-dimensional human face features of the front face image and generating a three-dimensional human face model according to the front face image and the side face image;
the matching unit is used for respectively matching the extracted two-dimensional face features and the generated three-dimensional face model with pre-stored two-dimensional face features and three-dimensional face models;
and the verification unit is used for determining whether the user to be detected passes the verification or not according to the matching result.
7. The face recognition apparatus of claim 6, wherein the apparatus further comprises:
the characteristic point acquisition unit is used for acquiring image characteristic points in a front face image and a side face image of a registered user;
the three-dimensional face model synthesis unit is used for projecting the front face image and the side face image to a virtual three-dimensional space according to the orientation relation between the extracted image feature points and the camera, synthesizing and storing a three-dimensional face model corresponding to the registered user, or further comprising view angle images of the three-dimensional face model under different view angles;
and the two-dimensional face feature extraction unit is used for extracting the two-dimensional face features in the front face image and storing the corresponding relation between the two-dimensional face features and the registered users.
8. The face recognition apparatus according to claim 6, wherein the matching unit comprises:
a head model judging subunit, configured to judge whether the generated three-dimensional face model is a head model;
the verification subunit is used for failing to verify the user to be detected if the generated three-dimensional face model is not the head model;
and the two-dimensional face feature matching subunit is used for further matching the generated two-dimensional face features with the pre-stored two-dimensional face features if the generated three-dimensional face model is a head model.
9. A face recognition apparatus comprising a front camera, a side camera, a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the face recognition method according to any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the face recognition method according to any one of claims 1 to 5.
CN201911251184.8A 2019-12-09 2019-12-09 Face recognition method, device and equipment Pending CN113033243A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911251184.8A CN113033243A (en) 2019-12-09 2019-12-09 Face recognition method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911251184.8A CN113033243A (en) 2019-12-09 2019-12-09 Face recognition method, device and equipment

Publications (1)

Publication Number Publication Date
CN113033243A 2021-06-25

Family

ID=76452009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911251184.8A Pending CN113033243A (en) 2019-12-09 2019-12-09 Face recognition method, device and equipment

Country Status (1)

Country Link
CN (1) CN113033243A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113673374A (en) * 2021-08-03 2021-11-19 支付宝(杭州)信息技术有限公司 Face recognition method, device and equipment
CN113673374B (en) * 2021-08-03 2024-01-30 支付宝(杭州)信息技术有限公司 Face recognition method, device and equipment
CN114820575A (en) * 2022-05-24 2022-07-29 四川中绳矩阵技术发展有限公司 Image verification method and device, computer equipment and storage medium
CN114820575B (en) * 2022-05-24 2023-01-20 四川中绳矩阵技术发展有限公司 Image verification method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
WO2020207189A1 (en) Method and device for identity authentication, storage medium, and computer device
US10339402B2 (en) Method and apparatus for liveness detection
CN108804884B (en) Identity authentication method, identity authentication device and computer storage medium
CN106778525B (en) Identity authentication method and device
CN110852310B (en) Three-dimensional face recognition method and device, terminal equipment and computer readable medium
CN104246793A (en) Three-dimensional face recognition for mobile devices
EP4033458A2 (en) Method and apparatus of face anti-spoofing, device, storage medium, and computer program product
US11651624B2 (en) Iris authentication device, iris authentication method, and recording medium
CN108108711B (en) Face control method, electronic device and storage medium
CN112507889A (en) Method and system for verifying certificate and certificate holder
JP2020524860A (en) Identity authentication method and device, electronic device, computer program and storage medium
CN111753271A (en) Account opening identity verification method, account opening identity verification device, account opening identity verification equipment and account opening identity verification medium based on AI identification
JP2019527868A (en) Biological feature identification apparatus and method, and biological feature template registration method
CN111310512A (en) User identity authentication method and device
CN113033243A (en) Face recognition method, device and equipment
WO2024169261A1 (en) Image processing method and apparatus, and electronic device, computer-readable storage medium and computer program product
CN108921080A (en) Image-recognizing method, device and electronic equipment
CN110163164B (en) Fingerprint detection method and device
CN112395580A (en) Authentication method, device, system, storage medium and computer equipment
CN109063442B (en) Service implementation method and device and camera implementation method and device
CN108875472B (en) Image acquisition device and face identity verification method based on image acquisition device
JP2018169943A (en) Face authentication processing device, face authentication processing method and face authentication processing system
CN111291586B (en) Living body detection method, living body detection device, electronic equipment and computer readable storage medium
CN108921036B (en) Random number generation method and generation system based on face image recognition
CN113095116A (en) Identity recognition method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination