CN108846321B - Method and device for identifying human face prosthesis and electronic equipment


Info

Publication number
CN108846321B
CN108846321B (application CN201810515416.5A)
Authority
CN
China
Prior art keywords: face, image, region, features, prosthesis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810515416.5A
Other languages
Chinese (zh)
Other versions
CN108846321A (en)
Inventor
万韶华
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201810515416.5A priority Critical patent/CN108846321B/en
Publication of CN108846321A publication Critical patent/CN108846321A/en
Application granted granted Critical
Publication of CN108846321B publication Critical patent/CN108846321B/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 — Feature extraction; Face representation
    • G06V40/40 — Spoof detection, e.g. liveness detection
    • G06V40/45 — Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a method and a device for identifying a face prosthesis, and an electronic device. The method includes: acquiring a face image of a target object, the face image comprising a face region and a background region; acquiring recognition features of the face region and of the background region; and identifying, based on the recognition features of both regions, whether the face in the face image is a face prosthesis, to obtain a first recognition result. In the embodiments of the disclosure, the background region assists the face region in face recognition. Compared with recognizing the face region alone, adding the background region improves the accuracy of recognizing a face prosthesis and reduces the security risk.

Description

Method and device for identifying human face prosthesis and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for recognizing a human face prosthesis, and an electronic device.
Background
At present, face recognition can serve as a biometric encryption technology and is widely applied to electronic devices such as smartphones, notebook computers, and access control systems. For example, a smartphone may acquire a face image of a user and then authenticate the user with that image; after the verification passes, the user can use the smartphone.
In practical applications, an illegitimate user may hold a face prosthesis (such as a face picture or a mask) of a legitimate user to impersonate that user during identity authentication, and then legally use the electronic device (or enter the corresponding area) after the authentication passes. This reduces both the accuracy of face recognition and the security of the electronic device.
Disclosure of Invention
The embodiments of the disclosure provide a method and a device for identifying a face prosthesis, and an electronic device, so as to solve the problem in the related art of identity verification being defeated by a face prosthesis.
According to a first aspect of the embodiments of the present disclosure, there is provided a method of recognizing a face prosthesis, the method including:
acquiring a face image of a target object; the face image comprises a face area and a background area;
acquiring the recognition features of the face region and the background region;
and identifying whether the face in the face image is a face prosthesis or not based on the identification features of the face region and the identification features of the background region to obtain a first identification result.
Optionally, the acquiring the face image of the target object includes:
acquiring the position of a face region of a target object in a source image; the source image is a three-dimensional source image or a two-dimensional source image;
determining a circumscribed rectangle frame of the face region based on the position of the face region;
adjusting the size of the circumscribed rectangular frame based on a preset magnification factor, wherein the image in the adjusted circumscribed rectangular frame is the face image; and the region between the circumscribed rectangular frame before and after the adjustment is the background region in the face image.
Optionally, when the face image is a three-dimensional face image, the identification features of the face region and the background region are depth texture features, and the depth texture features are formed by distances between each pixel point in the face image and the image acquisition module.
Optionally, identifying whether the face in the face image is a face prosthesis based on the identification features of the face region and the identification features of the background region, and obtaining a first identification result includes:
acquiring the maximum value and the minimum value of the distance in the depth texture features of the face region and the maximum value and the minimum value of the distance in the depth texture features of the background region;
respectively calculating a first distance interval of the face region and a second distance interval of the background region based on the maximum value and the minimum value;
if the first distance interval is less than or equal to a first interval distance threshold and the second distance interval is less than or equal to a second interval distance threshold, determining that the first recognition result is that the face is a face prosthesis;
if the first distance interval is less than or equal to the first interval distance threshold and the second distance interval is greater than the second interval distance threshold, determining that the first recognition result is that the face is suspected to be a face prosthesis;
and if the first distance interval is greater than the first interval distance threshold, determining that the first recognition result is that the face is not a face prosthesis.
Optionally, after obtaining the first recognition result, the method further includes:
recognizing the face based on two adjacent frames of face images to obtain a second recognition result;
calculating a probability value that the face in the face image is a face prosthesis according to the first recognition result and the second recognition result;
if the probability value is less than or equal to a probability threshold, determining that the face in the face image is not a face prosthesis; and if the probability value is greater than the probability threshold, determining that the face in the face image is a face prosthesis.
Optionally, when the face image is a three-dimensional face image, recognizing a face based on two adjacent frames of face images, and obtaining a second recognition result includes:
acquiring two adjacent frames of three-dimensional face images; each pixel point in the three-dimensional face image comprises a distance between the pixel point and the image acquisition module;
calculating the difference value of the corresponding distances of the pixel points at the same position of the two frames of three-dimensional face images to obtain a difference image;
acquiring the dynamic characteristics of a face region and the dynamic characteristics of a background region in the face image according to the differential image;
and identifying the face in the face image based on the dynamic features of the face region and the dynamic features of the background region to obtain a second identification result.
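The differencing steps above can be sketched as follows, assuming each frame is a 2-D array of per-pixel distances to the image acquisition module and a boolean mask marks the face region. Function names and the mask representation are illustrative assumptions:

```python
import numpy as np

def difference_image(depth_prev, depth_curr):
    """Per-pixel absolute depth difference between two adjacent 3D frames."""
    return np.abs(depth_curr.astype(np.float64) - depth_prev.astype(np.float64))

def dynamic_features(diff, face_mask):
    """Split a difference image into face-region and background-region
    dynamic features using a boolean mask of the face region."""
    return diff[face_mask], diff[~face_mask]
```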
Optionally, recognizing a face in the face image based on the dynamic features of the face region and the dynamic features of the background region, and obtaining a second recognition result includes:
comparing the change rate of the dynamic features of the background area with the change rate of the dynamic features of the face area;
if the change rate of the dynamic features of the face region is greater than that of the dynamic features of the background region, determining that the second recognition result is that the face in the face image is a face prosthesis; and if the change rate of the dynamic features of the face region is less than or equal to that of the background region, judging whether the dynamic features of the face region are randomly varying features;
if the dynamic features of the face region are randomly varying features, determining that the second recognition result is that the face in the face image is not a face prosthesis; and if the dynamic features of the face region are not randomly varying features, determining that the second recognition result is that the face in the face image is a face prosthesis.
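The branching above can be sketched as a small decision function. This is an illustrative reading of the embodiment, not code from the patent; the function name, inputs, and return labels are assumptions:

```python
def second_recognition(face_rate, bg_rate, face_dyn_is_random):
    """Decision logic for the second recognition result (illustrative).

    face_rate / bg_rate: change rates of the dynamic features of the
    face region and background region; face_dyn_is_random: whether the
    face-region dynamics vary randomly (e.g. blinking, small expression
    changes) rather than rigidly.
    """
    if face_rate > bg_rate:
        return 'prosthesis'   # face region changes faster than the scene
    if face_dyn_is_random:
        return 'real'         # natural, non-rigid facial motion
    return 'prosthesis'       # uniform/rigid motion, e.g. a moved photo
```

Treating a uniformly moving face region as a prosthesis reflects the intuition that a shifted photograph moves as a rigid whole, while a live face exhibits irregular local motion.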
Optionally, calculating a probability value that the face in the face image is the face prosthesis according to the first recognition result and the second recognition result includes:
acquiring a probability value in the first recognition result, a probability value in the second recognition result and a weight value corresponding to each probability value;
and respectively calculating the products of the probability values and their corresponding weight values, and taking the algebraic sum of the products as the probability value that the face in the face image is a face prosthesis.
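The weighted combination above can be sketched as follows. The weights and the probability threshold are illustrative placeholders; the patent does not specify concrete values:

```python
def fuse(prob_first, prob_second, w_first=0.5, w_second=0.5,
         probability_threshold=0.5):
    """Weighted algebraic sum of the two recognition probabilities.

    Returns the fused probability and whether it exceeds the threshold
    (True -> the face is judged to be a face prosthesis). The default
    weights and threshold are assumptions for illustration only.
    """
    p = prob_first * w_first + prob_second * w_second
    return p, (p > probability_threshold)
```

Per the embodiment, a fused probability at or below the threshold means the face is not a prosthesis; above the threshold, it is.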
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for recognizing a face prosthesis, comprising:
the face image acquisition module is used for acquiring a face image of the target object; the face image comprises a face area and a background area;
the identification feature acquisition module is used for acquiring the identification features of the face region and the identification features of the background region;
and the first result acquisition module is used for identifying whether the face in the face image is a face prosthesis or not based on the identification characteristics of the face region and the identification characteristics of the background region to obtain a first identification result.
Optionally, the face image obtaining module includes:
the face region acquisition unit is used for acquiring the position of a face region of a target object in a source image; the source image is a three-dimensional source image or a two-dimensional source image;
the circumscribed rectangle obtaining unit is used for determining a circumscribed rectangle frame of the face region based on the position of the face region;
the face image acquisition unit is used for adjusting the size of the circumscribed rectangular frame based on a preset magnification factor, wherein the image in the adjusted circumscribed rectangular frame is the face image; and the region between the circumscribed rectangular frame before and after the adjustment is the background region in the face image.
Optionally, when the face image is a three-dimensional face image, the identification features of the face region and the background region are depth texture features, and the depth texture features are formed by distances between each pixel point in the face image and the image acquisition module.
Optionally, the first result obtaining module includes:
the distance maximum value acquisition unit is used for acquiring the maximum value and the minimum value of the distance in the depth texture features of the face region and the maximum value and the minimum value of the distance in the depth texture features of the background region;
a distance interval calculation unit, configured to calculate a first distance interval of the face area and a second distance interval of the background area based on a maximum value and a minimum value, respectively;
a distance interval determination unit, configured to determine that the first recognition result is that the human face is a human face prosthesis when the first distance interval is less than or equal to a first interval distance threshold and the second distance interval is less than or equal to a second interval distance threshold;
and further configured to determine that the first recognition result is that the face is suspected to be a face prosthesis when the first distance interval is less than or equal to the first interval distance threshold and the second distance interval is greater than the second interval distance threshold;
and further configured to determine that the first recognition result is that the face is not a face prosthesis when the first distance interval is greater than the first interval distance threshold.
Optionally, the apparatus further comprises:
the second result acquisition module is used for recognizing the face based on two adjacent frames of face images to obtain a second recognition result;
the probability value acquisition module is used for calculating the probability value that the face in the face image is the face prosthesis according to the first recognition result and the second recognition result;
the probability value judging module is used for determining that the face in the face image is not a face prosthesis when the probability value is less than or equal to a probability threshold, and for determining that the face in the face image is a face prosthesis when the probability value is greater than the probability threshold.
Optionally, when the face image is a three-dimensional face image, the second result obtaining module includes:
the face image acquisition unit is used for acquiring two adjacent frames of three-dimensional face images; each pixel point in the three-dimensional face image comprises a distance between the pixel point and the image acquisition module;
the difference image calculating unit is used for calculating the difference value of the corresponding distances of the pixel points at the same position of the two frames of three-dimensional face images to obtain a difference image;
the dynamic feature obtaining unit is used for obtaining the dynamic features of the face region and the background region in the face image according to the difference image;
and the second result acquisition unit is used for identifying the face in the face image based on the dynamic characteristics of the face region and the dynamic characteristics of the background region to obtain a second identification result.
Optionally, the second result obtaining unit includes:
the change rate comparison subunit is used for comparing the change rate of the dynamic features of the background area with the change rate of the dynamic features of the face area;
a change rate judging subunit, configured to determine that the second recognition result is that the face in the face image is a face prosthesis when the change rate of the dynamic features of the face region is greater than that of the background region, and configured to trigger the face prosthesis judging subunit when the change rate of the dynamic features of the face region is less than or equal to that of the background region;
the face prosthesis judging subunit is configured to judge whether the dynamic features of the face region are randomly varying features; to determine, when they are, that the second recognition result is that the face in the face image is not a face prosthesis; and to determine, when they are not, that the second recognition result is that the face in the face image is a face prosthesis.
Optionally, the probability value obtaining module includes:
a probability weight obtaining unit, configured to obtain a probability value in the first recognition result, a probability value in the second recognition result, and a weight value corresponding to each probability value;
a probability value calculating unit, configured to calculate the products of the probability values and their corresponding weight values, and to take the algebraic sum of the products as the probability value that the face in the face image is a face prosthesis.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, including: an image acquisition module configured to acquire a face image; a processor; and a memory configured to store instructions executable by the processor; wherein the processor is configured to:
acquiring a face image of a target object; the face image comprises a face area and a background area;
acquiring the recognition features of the face region and the background region;
and identifying whether the face in the face image is a face prosthesis or not based on the identification features of the face region and the identification features of the background region to obtain a first identification result.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements:
acquiring a face image of a target object; the face image comprises a face area and a background area;
acquiring the recognition features of the face region and the background region;
and identifying whether the face in the face image is a face prosthesis or not based on the identification features of the face region and the identification features of the background region to obtain a first identification result.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the embodiment of the disclosure, the face region can be assisted to perform face recognition by adding the background region. Compared with the individual recognition of the human face region, the accuracy of recognizing the human face prosthesis can be improved and the safety risk is reduced by increasing the background region.
In the embodiments of the disclosure, the face is also recognized based on two adjacent frames of face images to obtain a second recognition result, and whether the face in the face image is a face prosthesis is then judged by combining the probability values of the first and second recognition results. Therefore, this embodiment can further improve the accuracy of recognizing a face prosthesis and further reduce the security risk.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a method of identifying a face prosthesis according to an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic flow chart illustrating the acquisition of a face image containing a face region and a background region according to an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a circumscribed rectangular box illustrated by the present disclosure in accordance with an exemplary embodiment;
FIG. 4 is a schematic flow chart diagram illustrating the obtaining of a first recognition result according to an exemplary embodiment of the present disclosure;
FIG. 5 is a flow diagram illustrating a method of identifying a face prosthesis according to another exemplary embodiment of the present disclosure;
FIG. 6 is a flow chart diagram illustrating a method of identifying a face prosthesis according to yet another exemplary embodiment of the present disclosure;
FIG. 7 is a schematic flow chart illustrating the obtaining of a second recognition result from a dynamic feature according to an example embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a deep convolutional network structure shown in accordance with an exemplary embodiment of the present disclosure;
FIGS. 9-15 are block diagrams of an apparatus for recognizing a facial prosthesis according to yet another exemplary embodiment of the present disclosure;
fig. 16 is a block diagram illustrating the structure of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
At present, when face recognition is used as an encryption mechanism, an illegitimate user may hold a face prosthesis, such as a picture or a mask of a legitimate user, to pass identity authentication and then legally use the legitimate user's electronic device. In view of this phenomenon, the embodiments of the disclosure provide a method for recognizing a face prosthesis, which can be applied to electronic devices such as a mobile terminal, a personal computer, or a server. The method acquires a face image of a target object that includes both a face region and a background region, then acquires the recognition features of the face region and of the background region, and then identifies, based on these recognition features, whether the face in the face image is a face prosthesis.
For convenience of description, the following embodiments take a server as an execution subject for illustration, but do not constitute a limitation on the scheme.
Fig. 1 is a flow chart diagram illustrating a method of recognizing a face prosthesis according to an exemplary embodiment of the present disclosure. Referring to fig. 1, a method of recognizing a facial prosthesis includes:
101, acquiring a face image of a target object; the face image comprises a face area and a background area.
In this embodiment, after the target object enters the viewing range of the image acquisition module, the module may capture a source image of the target object in real time or periodically, and upload the source image to the server.
It can be understood that the image acquisition module may be a planar (2D) image acquisition module or a 3D image acquisition module, so the obtained face image may be a two-dimensional or a three-dimensional face image.
The server processes the source image to obtain a face image, and the face image needs to include a face region and a background region, which is shown in fig. 2 and includes:
the server acquires the position of the face region of the target object using a face recognition algorithm in the related art (corresponding to step 201). The server then determines a bounding rectangle for the face region based on the location of the face region (corresponding to step 202), the bounding rectangle being indicated by reference 301 in fig. 3. Thereafter, the server adjusts the bounding rectangle based on the preset magnification, and the adjusted bounding rectangle is shown as reference 302 in fig. 3. The image in the identifier 302 is a face image, the image in the identifier 301 is a face region, and the region between the identifier 301 and the identifier 302, i.e., the region corresponding to the circumscribed rectangle before and after adjustment, is a background region.
The preset magnification factor may be 1.5 to 5; in this embodiment it is set to 2. Of course, a technician may set it according to a specific scenario, which is not limited here.
Alternatively, the image acquisition module may process the source image in the above manner to obtain the face image and then send it to the server; the scheme of the application can likewise be realized.
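The cropping in steps 201-202 and the enlargement described above can be sketched as follows. This is a minimal illustration, assuming the face detector returns an `(x, y, w, h)` box; the function name and the mask-based background representation are not from the patent:

```python
import numpy as np

def crop_face_and_background(source, face_box, magnification=2.0):
    """Enlarge the circumscribed face box and crop the face image.

    source: H x W (x C) image array; face_box: (x, y, w, h) of the
    detected face region. Returns the enlarged crop (face image) and a
    boolean mask marking the background region, i.e. the band between
    the original box and the enlarged box. Names are illustrative.
    """
    x, y, w, h = face_box
    cx, cy = x + w / 2.0, y + h / 2.0            # center of the face box
    new_w, new_h = w * magnification, h * magnification
    H, W = source.shape[:2]
    # Clamp the enlarged box to the image bounds.
    x0 = max(int(cx - new_w / 2), 0)
    y0 = max(int(cy - new_h / 2), 0)
    x1 = min(int(cx + new_w / 2), W)
    y1 = min(int(cy + new_h / 2), H)
    face_image = source[y0:y1, x0:x1]
    # Background mask: inside the enlarged box but outside the original one.
    bg_mask = np.ones(face_image.shape[:2], dtype=bool)
    fy0, fx0 = y - y0, x - x0
    bg_mask[max(fy0, 0):max(fy0 + h, 0), max(fx0, 0):max(fx0 + w, 0)] = False
    return face_image, bg_mask
```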
And 102, acquiring the recognition features of the face region and the recognition features of the background region.
In this embodiment, the server may identify the identification feature in the face region according to a preset feature identification algorithm. The identification features of the face region may be one or more of a nose tip, a left eye, a right eye, a left mouth corner, a right mouth corner, a face contour, and a depth texture feature. Of course, other features of the target object's face, such as birthmarks, scars, etc., may also be used as its identifying features. The technician may set up the settings according to the particular scenario.
In this embodiment, the server identifies the identification feature in the background region according to a preset feature identification algorithm. The identification features of the background region may be one or more of background texture, pattern shape, distance between the pattern and features in the human face, and depth texture features, and the technical staff may set the features according to a specific scene.
And 103, identifying whether the face in the face image is a face prosthesis or not based on the identification features of the face region and the identification features of the background region to obtain a first identification result.
In this embodiment, the server may identify whether the face in the face image is a face prosthesis according to the recognition features of the face region and of the background region, thereby obtaining a first recognition result. It should be noted that, in this embodiment, the first recognition result is a probability value, from which it can be determined whether the face in the face image tends to be a face prosthesis.
In an embodiment, the face image may be a two-dimensional face image, and the identification features of the face region may be positions of three features of a nose tip, a left eye, and a right eye, and a distance between any two features. The identifying feature of the background region may be a background texture.
The server can acquire the recognition features of the first frame of face image and then acquire the recognition features of the second frame of face image. And aiming at the first frame of face image and the second frame of face image, the server compares whether the identification characteristics of the face area change.
Considering that the face of the target object changes between any two moments, when the acquisition period of the image acquisition module is set reasonably, the faces in the first and second frames of face images will inevitably differ. Based on this analysis: if the recognition features of the face region have not changed, the first recognition result is determined to be that the face in the face image is a face prosthesis. If the recognition features of the face region have changed, the server further judges whether the recognition features of the background region have changed: if the background features have not changed, the first recognition result is that the face in the face image is suspected to be a face prosthesis; if the background features have changed more than those of the face region, the first recognition result is that the face is not a face prosthesis; and if they have changed less than those of the face region, the first recognition result is that the face is suspected to be a face prosthesis.
In another embodiment, the face image may be a three-dimensional face image, and the recognition features of both the face region and the background region may be depth texture features. In the three-dimensional face image, the coordinates of each pixel point include its distance to the image acquisition module; the distances of all or some of the pixel points in the face region form the depth texture features of the face region, and the distances of all or some of the pixel points in the background region form the depth texture features of the background region.
Referring to fig. 4, the server may obtain a maximum value and a minimum value of distances in the depth texture features of the face region, and may calculate a first distance interval of the face region according to the maximum value and the minimum value. Similarly, the server may obtain a maximum value and a minimum value of the distance in the depth texture feature of the background region, and may calculate a second distance interval of the background region according to the maximum value and the minimum value.
Then, the server compares the first distance interval with the first interval distance threshold, and compares the second distance interval with the second interval distance threshold to obtain:
if the first distance interval is smaller than or equal to a first interval distance threshold value and the second distance interval is smaller than or equal to a second interval distance threshold value, determining that the first recognition result is that the face is a face prosthesis;
if the first distance interval is smaller than or equal to the first interval distance threshold value and the second distance interval is larger than the second interval distance threshold value, determining that the first recognition result is that the face is suspected to be a face prosthesis;
and if the first distance interval is larger than the first interval distance threshold, determining that the first recognition result is that the human face is not the human face prosthesis.
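The depth-span check of fig. 4 and the three rules above can be sketched as follows. This is a hypothetical illustration: the function name, box depth values and the two threshold values are assumptions (the patent does not specify units or thresholds); only the comparison logic follows the text.

```python
def classify_by_depth(face_depths, background_depths,
                      face_span_threshold=0.05, background_span_threshold=0.5):
    """Depth-span (distance interval) check; thresholds are illustrative."""
    # First distance interval: max minus min depth inside the face region.
    face_span = max(face_depths) - min(face_depths)
    # Second distance interval: same for the background region.
    bg_span = max(background_depths) - min(background_depths)
    if face_span > face_span_threshold:
        return "not a prosthesis"        # a real face has depth relief
    if bg_span <= background_span_threshold:
        return "prosthesis"              # flat face on a flat background
    return "suspected prosthesis"        # flat face, varied background
```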
Therefore, in the embodiment of the disclosure, adding the background region to the face image allows it to assist the face region in face recognition. Compared with recognizing the face region alone, adding the background region improves the accuracy of recognizing a face prosthesis and reduces the security risk.
Fig. 5 is a flowchart illustrating a method of recognizing a face prosthesis according to an exemplary embodiment of the present disclosure. Referring to fig. 5, the method for recognizing a face prosthesis includes steps 501-506:
501, acquiring a face image of a target object; the face image comprises a face area and a background area.
The specific method and principle of step 501 are the same as those of step 101; for details, please refer to fig. 1 and the related description of step 101, which are not repeated herein.
502, obtaining the recognition features of the face region and the background region.
The specific method and principle of step 502 are the same as those of step 102; for details, please refer to fig. 1 and the related description of step 102, which are not repeated herein.
And 503, identifying whether the face in the face image is a face prosthesis or not based on the identification features of the face region and the identification features of the background region to obtain a first identification result.
The specific method and principle of step 503 are the same as those of step 103; for details, please refer to fig. 1 and the related description of step 103, which are not repeated herein.
And 504, recognizing the face based on the two adjacent frames of face images to obtain a second recognition result.
Referring to fig. 6, in this embodiment the server selects any two adjacent frames of face images from the received face images. The server then acquires a difference image of the two frames. Based on the difference image, the server obtains the dynamic features of the face region and of the background region. Finally, the server identifies whether the face in the face image is a face prosthesis according to the dynamic features of the face region and the dynamic features of the background region, obtaining a second recognition result.
In some scenarios, the server also normalizes each frame of face image before acquiring the difference image. For example, the server may detect feature points in each frame of face image (such as the left eye, right eye, nose tip, left mouth corner and right mouth corner) and correct and normalize the face images according to these feature points, so that the two frames are aligned at the same positions, which improves the subsequent recognition.
In an embodiment, the face image is a two-dimensional face image, and the step of acquiring the second recognition result by the server includes:
Because the data of each pixel point in each frame of face image contains a gray value, the server can calculate the difference between the gray values of the pixel points at the same position in the two frames of face images, thereby obtaining the corresponding difference image.
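The pixel-wise difference described above can be sketched as follows. Nested lists stand in for real image arrays, and the function name is illustrative; in practice an image library would perform the same operation on matrices.

```python
def difference_image(frame_a, frame_b):
    """Pixel-wise absolute grey-value difference of two aligned frames.

    frame_a and frame_b are same-sized nested lists of grey values.
    """
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]
```

A region that did not move between frames yields zeros in the difference image, which is what makes static (prosthesis-like) content easy to spot.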
Using a feature extraction algorithm in the related art, the server can extract dynamic features from the difference image, such as blinks or twitches of the mouth corners. The server can likewise extract dynamic features of the background region in the difference image, such as background texture.
Referring to fig. 7, the server calculates the change rate of the dynamic features of the face region and the change rate of the dynamic features of the background region. The change rate may be the change rate of the gray values of different pixel points within each frame of difference image, or the change rate of the gray values of the pixel points at the same position across multiple frames of difference images, which is not limited herein.
In this embodiment, it is considered that the distance between the image acquisition module and the face is greater than the distance between the image acquisition module and the background; therefore, when the face moves or changes, the change of the background region is greater than the change of the face region. Based on this, the server compares the change rate of the dynamic features of the face region with that of the background region; if the change rate of the dynamic features of the face region is greater than the change rate of the dynamic features of the background region, the second recognition result is determined to be that the face in the face image is a face prosthesis.
If the change rate of the dynamic features of the face region is less than or equal to the change rate of the dynamic features of the background region, the server judges whether the dynamic features of the face region are randomly changing features. If they are, the second recognition result is determined to be that the face in the face image is not a face prosthesis; if they are not, the second recognition result is determined to be that the face in the face image is a face prosthesis.
In this embodiment, a randomly changing feature is a feature corresponding to an involuntary motion of the human face, such as blinking or twitching of the mouth corners. It can be understood that randomly changing features can be learned by training a neural network algorithm on dynamic images from a large number of face images containing various expressions, after which the neural network identifies whether a given dynamic feature is a randomly changing feature. Alternatively, randomly changing features can be stored in advance in a randomly-changing-feature library; the similarity between the dynamic feature and one or more features in the library is then calculated, and when the similarity exceeds a set similarity threshold, the dynamic feature is determined to be a randomly changing feature. Of course, a technician may devise other ways of determining whether a dynamic feature is a randomly changing feature for a specific scenario; such schemes also fall within the scope of the present application as long as they can implement the present solution.
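The library-lookup variant above can be sketched as follows. Cosine similarity is one reasonable choice of similarity measure, not one the patent mandates; the feature vectors, function names and the 0.9 threshold are all illustrative assumptions.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def is_random_change(dynamic_feature, feature_library, threshold=0.9):
    """True if the dynamic feature matches any stored involuntary-motion
    template (e.g. a blink feature) above the similarity threshold."""
    return any(cosine_similarity(dynamic_feature, template) >= threshold
               for template in feature_library)
```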
In another embodiment, the step of acquiring the second recognition result by the server, with reference to fig. 6 and 7, includes:
Because the data of each pixel point in each frame of face image includes its distance from the image acquisition module, the server can calculate the difference between the distances of the pixel points at the same position in the two frames of face images, thereby obtaining a difference image.
Using a feature extraction algorithm in the related art, the server can extract dynamic features from the difference image, for example the distance changes corresponding to facial motion. Similarly, the server can extract dynamic features of the background region in the difference image, such as the distance changes corresponding to background texture.
The server calculates the change rate of the dynamic features of the face region and the change rate of the dynamic features of the background region. The change rate may be the change rate of the distances of different pixel points within each frame of difference image, or the change rate of the distances of the pixel points at the same position across multiple frames of difference images, which is not limited herein.
In this embodiment, it is considered that the distance between the image acquisition module and the face is greater than the distance between the image acquisition module and the background; therefore, when the face moves or changes, the change of the background region is greater than the change of the face region. Based on this, the server compares the change rate of the dynamic features of the face region with that of the background region; if the change rate of the dynamic features of the face region is greater than the change rate of the dynamic features of the background region, the second recognition result is determined to be that the face in the face image is a face prosthesis.
If the change rate of the dynamic features of the face region is less than or equal to the change rate of the dynamic features of the background region, the server judges whether the dynamic features of the face region are randomly changing features (features corresponding to involuntary facial motions, such as blinking or twitching of the mouth corners). If they are, the second recognition result is determined to be that the face in the face image is not a face prosthesis; if they are not, the second recognition result is determined to be that the face in the face image is a face prosthesis.
And 505, calculating a probability value that the face in the face image is a face prosthesis according to the first recognition result and the second recognition result.
In this embodiment, the server obtains the probability values in the first and second recognition results and the weight values corresponding to each. The server then calculates the product of the probability value of the first recognition result and its weight value, and the product of the probability value of the second recognition result and its weight value. Finally, the server sums the two products to obtain the probability value that the face in the face image is a face prosthesis.
For example, if the first recognition result is that the probability value that the face is a face prosthesis is 0.1 with a corresponding weight value of 0.6, and the second recognition result is that the probability value is 0.3 with a corresponding weight value of 0.4, then the probability value that the face in the face image is a face prosthesis is:
0.1*0.6 + 0.3*0.4 = 0.06 + 0.12 = 0.18.
506, if the probability value is less than or equal to a probability threshold, determining that the face in the face image is not a face prosthesis; and if the probability value is greater than the probability threshold, determining that the face in the face image is a face prosthesis.
In this embodiment, the probability threshold is pre-stored in the server and may be set to, for example, 0.5. If the probability value is less than or equal to the probability threshold, the face in the face image is determined not to be a face prosthesis; if the probability value is greater than the probability threshold, the face in the face image is determined to be a face prosthesis.
For example, since 0.18 < 0.5, it follows that the face in the face image is not a face prosthesis.
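Steps 505-506 can be sketched together as a weighted fusion followed by a threshold test; the function name and (probability, weight) pair format are illustrative, and the numbers reproduce the worked example above.

```python
def fuse_and_decide(results, probability_threshold=0.5):
    """results: list of (probability, weight) pairs, one per recognition
    result; returns the fused probability and the final verdict."""
    fused = sum(p * w for p, w in results)
    verdict = "prosthesis" if fused > probability_threshold else "not a prosthesis"
    return fused, verdict

# Worked example from the text: 0.1*0.6 + 0.3*0.4 = 0.18 < 0.5
prob, verdict = fuse_and_decide([(0.1, 0.6), (0.3, 0.4)])
```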
In the embodiment of the disclosure, adding the background region allows it to assist the face region in face recognition. Compared with recognizing the face region alone, adding the background region improves the accuracy of recognizing a face prosthesis and reduces the security risk. In addition, in this embodiment the face is recognized based on two adjacent frames of face images to obtain the second recognition result, and whether the face in the face image is a face prosthesis is then determined by combining the probability values of the first and second recognition results. This embodiment therefore further improves the accuracy of recognizing a face prosthesis and further reduces the security risk.
The embodiment of the invention also provides a method for recognizing a face prosthesis, in which the server acquires the face image shown in fig. 3 using the steps shown in fig. 2, the face image comprising a face region and a background region. The server then inputs the face image into a preset deep convolutional network, whose output is the probability that the face in the face image is a face prosthesis, i.e. the first recognition result.
The structure of the deep convolutional network, see fig. 8, comprises in order from top to bottom: an input layer, convolution block 1, convolution blocks 2-5, a fully connected layer and an output layer. In this embodiment, each convolution block includes a convolution layer and a pooling layer. In the convolution layer of convolution block 1, the convolution kernel size is 3 x 3 and the number of channels is 16; the output of the convolution layer is fed into the pooling layer, which produces a feature image that serves as the input image of the next convolution block. In the convolution layers of convolution blocks 2 to 5, the convolution kernel size is 3 x 3 and the number of channels is 64, and they function in the same way as convolution block 1. Finally, convolution block 5 outputs its feature image to the fully connected layer; the fully connected layer processes the feature image and passes it to the output layer, which outputs the recognition result. For the working process of the convolution and pooling layers, refer to schemes in the related art, which are not detailed here.
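The spatial shrinkage through the five convolution blocks can be traced numerically. The text does not specify strides, padding or pooling sizes, so the sketch below assumes 3 x 3 convolutions with stride 1 and padding 1 (which preserve spatial size) followed by 2 x 2 pooling with stride 2 (which halves it); the input resolution of 224 is likewise an assumption.

```python
def feature_map_sizes(input_size, num_blocks=5):
    """Trace the spatial size of the feature image through the convolution
    blocks, under the assumed stride/padding/pooling configuration."""
    sizes = [input_size]
    for _ in range(num_blocks):
        conv_out = sizes[-1]       # 3x3 conv, stride 1, padding 1: size kept
        pooled = conv_out // 2     # 2x2 pooling, stride 2: size halved
        sizes.append(pooled)
    return sizes

# e.g. a 224x224 input would reach 7x7 before the fully connected layer
```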
In an embodiment, the server may further acquire two adjacent frames of face images and detect feature points in each frame, such as the left eye, right eye, nose tip, left mouth corner and right mouth corner. The server then corrects and normalizes the face images according to the feature points. Next, the server identifies the face region in the normalized face images using a face recognition algorithm and obtains the difference image of the two frames; the server inputs the difference image into the deep convolutional network to obtain the second recognition result.
It is understood that the deep convolutional network for obtaining the first recognition result and the deep convolutional network for obtaining the second recognition result may be implemented by using convolutional networks with the same structure, or may be implemented by using different networks, which is not limited herein.
Finally, the server determines whether the face in the face image is a face prosthesis according to the first recognition result and the second recognition result, please refer to the content in step 505 for details, which will not be described herein again.
In the embodiment of the disclosure, the face image comprising the face region and the background region is identified using the deep convolutional network, and adding the background region improves the accuracy of identifying a face prosthesis and reduces the security risk. In addition, in this embodiment the difference image is further added on top of the background region to identify the face prosthesis, which further improves the accuracy of identifying the face prosthesis and further reduces the security risk.
Fig. 9 is a block diagram illustrating an apparatus for recognizing a facial prosthesis according to an exemplary embodiment of the present disclosure. Referring to fig. 9, an apparatus 900 for recognizing a facial prosthesis includes:
a face image obtaining module 901, configured to obtain a face image of a target object; the face image comprises a face area and a background area;
an identification feature obtaining module 902, configured to obtain an identification feature of the face region and an identification feature of the background region;
a first result obtaining module 903, configured to identify whether a human face in the human face image is a human face prosthesis based on the identification features of the human face region and the identification features of the background region, so as to obtain a first identification result.
Fig. 10 is a block diagram illustrating an apparatus for recognizing a facial prosthesis according to another exemplary embodiment of the present disclosure. Referring to fig. 10, on the basis of the apparatus 900 for recognizing a facial prosthesis shown in fig. 9, the facial image acquiring module 901 includes:
a face region acquiring unit 1001 configured to acquire a position of a face region of a target object in a source image; the source image is a three-dimensional source image or a two-dimensional source image;
an external rectangle obtaining unit 1002, configured to determine an external rectangle frame of the face region based on the position of the face region;
a face image obtaining unit 1003, configured to adjust the size of the circumscribed rectangular frame based on a preset magnification, where an image in the adjusted circumscribed rectangular frame is a face image; and the corresponding areas before and after the adjustment of the external rectangular frame are the background areas in the face image.
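The enlargement performed by the face image obtaining unit 1003 can be sketched as scaling the circumscribed rectangle about its centre; the ring between the original box and the enlarged box is the background region. The `(x, y, w, h)` box format, the function name and the 1.5x magnification are illustrative assumptions.

```python
def expand_box(box, magnification=1.5):
    """Scale a face bounding box (x, y, w, h) about its centre by a preset
    magnification; the image inside the enlarged box is the face image, and
    the area outside the original box but inside the enlarged one is the
    background region."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2          # centre of the original box
    new_w, new_h = w * magnification, h * magnification
    return (cx - new_w / 2, cy - new_h / 2, new_w, new_h)
```

In practice the enlarged box would also be clipped to the source image bounds, which is omitted here for brevity.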
In an embodiment, when the face image is a three-dimensional face image, the identification features of the face region and the background region are depth texture features, and the depth texture features are formed by distances between each pixel point in the face image and an image acquisition module.
Fig. 11 is a block diagram illustrating an apparatus for recognizing a facial prosthesis according to another exemplary embodiment of the present disclosure. Referring to fig. 11, on the basis of the apparatus 900 for recognizing a facial prosthesis shown in fig. 9, the first result obtaining module 903 comprises:
a distance maximum value obtaining unit 1101, configured to obtain a maximum value and a minimum value of distances in the face region depth texture feature and a maximum value and a minimum value of distances in the background region depth texture feature;
a distance interval calculation unit 1102, configured to calculate a first distance interval of the face area and a second distance interval of the background area based on a maximum value and a minimum value, respectively;
a distance interval determination unit 1103, configured to determine that the first recognition result is that the face is a face prosthesis when the first distance interval is less than or equal to a first interval distance threshold and the second distance interval is less than or equal to a second interval distance threshold;
further configured to determine that the first recognition result is that the face is suspected to be a face prosthesis when the first distance interval is less than or equal to the first interval distance threshold and the second distance interval is greater than the second interval distance threshold;
and further configured to determine that the first recognition result is that the face is not a face prosthesis when the first distance interval is greater than the first interval distance threshold.
Fig. 12 is a block diagram illustrating an apparatus for recognizing a facial prosthesis according to another exemplary embodiment of the present disclosure. Referring to fig. 12, the apparatus 1200 for recognizing a facial prosthesis shown in fig. 11 further includes:
a second result obtaining module 1201, configured to identify a face based on two adjacent frames of face images, to obtain a second identification result;
a probability value obtaining module 1202, configured to calculate a probability value that a face in the face image is a face prosthesis according to the first recognition result and the second recognition result;
a probability value judging module 1203, configured to determine that the face in the face image is not a face prosthesis when the probability value is less than or equal to a probability threshold; and further configured to determine that the face in the face image is a face prosthesis when the probability value is greater than the probability threshold.
Fig. 13 is a block diagram illustrating an apparatus for recognizing a facial prosthesis according to another exemplary embodiment of the present disclosure. Referring to fig. 13, on the basis of the apparatus 1200 for recognizing a facial prosthesis shown in fig. 12, when the facial image is a three-dimensional facial image, the second result obtaining module 1201 includes:
a face image obtaining unit 1301, configured to obtain two adjacent frames of three-dimensional face images; each pixel point in the three-dimensional face image comprises a distance between the pixel point and the image acquisition module;
a difference image calculating unit 1302, configured to calculate a difference value between corresponding distances of pixels at the same position in the two frames of three-dimensional face images, so as to obtain a difference image;
a dynamic feature obtaining unit 1303, configured to obtain a dynamic feature of a face region and a dynamic feature of a background region in the face image according to the difference image;
a second result obtaining unit 1304, configured to identify a face in the face image based on the dynamic features of the face region and the dynamic features of the background region, so as to obtain a second identification result.
Fig. 14 is a block diagram illustrating an apparatus for recognizing a face prosthesis according to another exemplary embodiment of the present disclosure. Referring to fig. 14, on the basis of the apparatus 1200 for recognizing a facial prosthesis shown in fig. 13, the second result obtaining unit 1304 includes:
a change rate comparison subunit 1401, configured to compare a change rate of the dynamic feature of the background region with a change rate of the dynamic feature of the face region;
a change rate determining subunit 1402, configured to determine that the second recognition result is that the face in the face image is a face prosthesis when the change rate of the dynamic features of the face region is greater than the change rate of the dynamic features of the background region; and further configured to trigger the subunit 1403 when the change rate of the dynamic features of the face region is less than or equal to the change rate of the dynamic features of the background region;
the face prosthesis determining subunit 1403, configured to judge whether the dynamic features of the face region are randomly changing features; to determine, when they are, that the second recognition result is that the face in the face image is not a face prosthesis; and to determine, when they are not, that the second recognition result is that the face in the face image is a face prosthesis.
Fig. 15 is a block diagram illustrating an apparatus for recognizing a facial prosthesis according to another exemplary embodiment of the present disclosure. Referring to fig. 15, based on the apparatus 1200 for recognizing a facial prosthesis shown in fig. 12, the probability value obtaining module 1202 includes:
a probability weight obtaining unit 1501, configured to obtain a probability value in the first recognition result, a probability value in the second recognition result, and a weight value corresponding to each probability value;
a probability value calculating unit 1502, configured to calculate the product of each probability value and its corresponding weight value, and to take the algebraic sum of the products as the probability value that the face in the face image is a face prosthesis.
It should be noted that the apparatus for recognizing a facial prosthesis provided in the embodiment of the present invention corresponds to the method embodiment shown in fig. 1 to 7, and reference may be made to part of the description of the method embodiment for relevant points, which is not described herein again.
FIG. 16 is a block diagram illustrating an electronic device in accordance with an example embodiment. For example, the electronic device 1600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, an exercise device, a personal digital assistant, and so forth.
Referring to fig. 16, electronic device 1600 may include one or more of the following components: processing component 1602, memory 1604, power component 1606, multimedia component 1608, audio component 1610, input/output (I/O) interface 1612, sensor component 1614, communication component 1616, and image capture module 1618. The image capture module 1618 captures the face image. The memory 1604 is used to store instructions executable by the processing component 1602. The processing component 1602 reads instructions from the memory 1604 to implement:
acquiring a face image of a target object; the face image comprises a face area and a background area;
acquiring the recognition features of the face region and the background region;
and identifying whether the face in the face image is a face prosthesis or not based on the identification features of the face region and the identification features of the background region to obtain a first identification result.
The processing component 1602 generally controls overall operation of the electronic device 1600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 1602 may include one or more processors 1620 to execute instructions. Further, the processing component 1602 can include one or more modules that facilitate interaction between the processing component 1602 and other components. For example, the processing component 1602 can include a multimedia module to facilitate interaction between the multimedia component 1608 and the processing component 1602.
The memory 1604 is configured to store various types of data to support operation at the electronic device 1600. Examples of such data include instructions for any application or method operating on the electronic device 1600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1604 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 1606 provides power to the various components of the electronic device 1600. The power components 1606 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 1600.
The multimedia component 1608 comprises a screen that provides an output interface between the electronic device 1600 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1608 comprises a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 1600 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1610 is configured to output and/or input an audio signal. For example, the audio component 1610 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 1600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 1604 or transmitted via the communications component 1616. In some embodiments, audio component 1610 further includes a speaker for outputting audio signals.
The I/O interface 1612 provides an interface between the processing component 1602 and peripheral interface modules, such as keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
Sensor assembly 1614 includes one or more sensors for providing various aspects of status assessment for electronic device 1600. For example, sensor assembly 1614 may detect an open/closed state of electronic device 1600, the relative positioning of components, such as a display and keypad of electronic device 1600, a change in position of electronic device 1600 or a component of electronic device 1600, the presence or absence of user contact with electronic device 1600, orientation or acceleration/deceleration of electronic device 1600, and a change in temperature of electronic device 1600. The sensor assembly 1614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communications component 1616 is configured to facilitate communications between the electronic device 1600 and other devices in a wired or wireless manner. The electronic device 1600 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the image capture module 1618 may be a 3D structured light camera or a 3D video camera.
In an exemplary embodiment, the electronic device 1600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, for example the memory 1604 storing a computer program, executable by the processor 1620 of the electronic device 1600 to implement: acquiring a face image of a target object, the face image comprising a face region and a background region; acquiring the recognition features of the face region and the background region; and identifying whether the face in the face image is a face prosthesis based on the identification features of the face region and the identification features of the background region to obtain a first recognition result.
Additionally, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. A method of recognizing a face prosthesis, the method comprising:
acquiring a face image of a target object; the face image comprises a face area and a background area;
acquiring the recognition features of the face region and the background region; when the face image is a three-dimensional face image, the identification features of the face region and the background region are depth texture features, and the depth texture features are formed by the distance between each pixel point in the face image and an image acquisition module;
identifying whether the face in the face image is a face prosthesis based on the identification features of the face region and the identification features of the background region to obtain a first recognition result, wherein obtaining the first recognition result comprises:
acquiring the maximum value and the minimum value of the distance in the depth texture features of the face region and the maximum value and the minimum value of the distance in the depth texture features of the background region;
respectively calculating a first distance interval of the face region and a second distance interval of the background region based on the maximum value and the minimum value;
if the first distance interval is less than or equal to a first interval distance threshold and the second distance interval is less than or equal to a second interval distance threshold, determining that the first recognition result is that the face is a face prosthesis;
if the first distance interval is less than or equal to the first interval distance threshold and the second distance interval is greater than the second interval distance threshold, determining that the first recognition result is that the face is suspected to be a face prosthesis;
and if the first distance interval is greater than the first interval distance threshold, determining that the first recognition result is that the face is not a face prosthesis.
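The three-way decision of claim 1 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the threshold values, the use of NumPy arrays to hold the per-pixel depths, and the function name `first_recognition` are all choices of this sketch, since the claim does not fix them.

```python
import numpy as np

# Hypothetical thresholds in metres; the claim leaves the actual values open.
FACE_SPAN_THRESHOLD = 0.05   # first interval distance threshold
BG_SPAN_THRESHOLD = 0.10     # second interval distance threshold

def first_recognition(face_depth: np.ndarray, bg_depth: np.ndarray) -> str:
    """Classify a 3D face image using the span (max - min) of per-pixel depths.

    Intuition: a flat face region (photo or screen replay) yields a small
    depth span; a flat background as well suggests the whole scene is planar.
    """
    face_span = float(face_depth.max() - face_depth.min())  # first distance interval
    bg_span = float(bg_depth.max() - bg_depth.min())        # second distance interval
    if face_span <= FACE_SPAN_THRESHOLD and bg_span <= BG_SPAN_THRESHOLD:
        return "prosthesis"
    if face_span <= FACE_SPAN_THRESHOLD:
        return "suspected prosthesis"
    return "not a prosthesis"
```

A face region with real depth relief (span above the first threshold) immediately clears the first pass, regardless of the background.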
2. The method of claim 1, wherein obtaining a face image of a target object comprises:
acquiring the position of a face region of a target object in a source image; the source image is a three-dimensional source image or a two-dimensional source image;
determining a circumscribed rectangle frame of the face region based on the position of the face region;
adjusting the size of the circumscribed rectangular frame based on a preset magnification factor, wherein the image in the adjusted circumscribed rectangular frame is the face image, and the region between the circumscribed rectangular frame before and after the adjustment is the background region in the face image.
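The cropping step of claim 2 can be sketched as below. The `(x, y, w, h)` box representation and the magnification factor of 1.5 are illustrative assumptions; the claim only requires that the enlarged box becomes the face image and the ring between the original and enlarged boxes becomes the background region.

```python
def crop_face_with_background(x, y, w, h, scale=1.5):
    """Enlarge the face bounding box about its centre by `scale` (assumed 1.5).

    Returns (outer, inner) boxes as (x, y, w, h) tuples: the outer box is the
    face image; the ring between outer and inner is the background region.
    """
    cx, cy = x + w / 2, y + h / 2          # centre of the detected face box
    new_w, new_h = w * scale, h * scale    # enlarged dimensions
    outer = (cx - new_w / 2, cy - new_h / 2, new_w, new_h)
    inner = (x, y, w, h)
    return outer, inner
```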
3. The method of claim 1, wherein after obtaining the first recognition result, the method further comprises:
recognizing the face based on two adjacent frames of face images to obtain a second recognition result;
calculating a probability value that the face in the face image is a face prosthesis according to the first recognition result and the second recognition result;
if the probability value is less than or equal to a probability threshold, determining that the face in the face image is not a face prosthesis; and if the probability value is greater than the probability threshold, determining that the face in the face image is a face prosthesis.
4. The method of claim 3, wherein when the face image is a three-dimensional face image, recognizing a face based on two adjacent frames of face images, and obtaining a second recognition result comprises:
acquiring two adjacent frames of three-dimensional face images; each pixel point in the three-dimensional face image comprises a distance between the pixel point and the image acquisition module;
calculating the difference value of the corresponding distances of the pixel points at the same position of the two frames of three-dimensional face images to obtain a difference image;
acquiring the dynamic features of the face region and the dynamic features of the background region in the face image according to the difference image;
and identifying the face in the face image based on the dynamic features of the face region and the dynamic features of the background region to obtain a second identification result.
5. The method of claim 4, wherein recognizing the face in the face image based on the dynamic features of the face region and the dynamic features of the background region, and obtaining a second recognition result comprises:
comparing the change rate of the dynamic features of the background area with the change rate of the dynamic features of the face area;
if the change rate of the dynamic features of the face region is greater than the change rate of the dynamic features of the background region, determining that the second recognition result is that the face in the face image is a face prosthesis; if the change rate of the dynamic features of the face region is less than or equal to the change rate of the dynamic features of the background region, judging whether the dynamic features of the face region are randomly changing features;
if the dynamic features of the face region are randomly changing features, determining that the second recognition result is that the face in the face image is not a face prosthesis; and if the dynamic features of the face region are not randomly changing features, determining that the second recognition result is that the face in the face image is a face prosthesis.
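The second-pass check of claims 4 and 5 can be sketched as follows. Two metrics here are assumptions of this sketch, not defined by the claims: the "change rate" of a region is taken as the mean absolute depth difference over that region, and "randomly changing" is approximated by a high relative spread (std/mean) of the differences, on the reasoning that a rigid photo moved by hand changes coherently while a live face changes irregularly.

```python
import numpy as np

def second_recognition(depth_t0, depth_t1, face_mask, bg_mask,
                       randomness_threshold=0.5):
    """Classify using the difference image of two adjacent depth frames.

    depth_t0, depth_t1: per-pixel distances to the image acquisition module;
    face_mask, bg_mask: boolean masks selecting the two regions.
    """
    diff = np.abs(depth_t1 - depth_t0)      # difference image
    face_rate = diff[face_mask].mean()      # assumed "change rate" metric
    bg_rate = diff[bg_mask].mean()
    if face_rate > bg_rate:
        return "prosthesis"                 # face moves more than its background
    face_diff = diff[face_mask]
    # Assumed randomness proxy: irregular (high-spread) change suggests a live face.
    variation = face_diff.std() / (face_diff.mean() + 1e-9)
    if variation > randomness_threshold:
        return "not a prosthesis"
    return "prosthesis"                     # coherent, rigid change
```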
6. The method of claim 3, wherein calculating a probability value that the face in the face image is a facial prosthesis based on the first recognition result and the second recognition result comprises:
acquiring a probability value in the first recognition result, a probability value in the second recognition result and a weight value corresponding to each probability value;
and calculating the product of each probability value and its corresponding weight value, and taking the algebraic sum of the products as the probability value that the face in the face image is a face prosthesis.
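The fusion step of claim 6 is a weighted sum. The sketch below makes that explicit; the weight values and the decision threshold are assumptions, since the claim leaves them to the implementation.

```python
def fuse_probabilities(probs, weights):
    """Algebraic sum of probability * weight pairs (claim 6)."""
    return sum(p * w for p, w in zip(probs, weights))

def is_prosthesis(p1, p2, w1=0.5, w2=0.5, threshold=0.6):
    """Combine the first and second recognition results (claim 3).

    w1, w2 and threshold are illustrative values only.
    """
    return fuse_probabilities([p1, p2], [w1, w2]) > threshold
```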
7. An apparatus for recognizing a human face prosthesis, the apparatus comprising:
the face image acquisition module is used for acquiring a face image of the target object; the face image comprises a face area and a background area;
the identification feature acquisition module is used for acquiring the identification features of the face region and the identification features of the background region; when the face image is a three-dimensional face image, the identification features of the face region and the background region are depth texture features, and the depth texture features are formed by the distance between each pixel point in the face image and an image acquisition module;
the first result acquisition module is used for identifying whether the face in the face image is a face prosthesis or not based on the identification features of the face region and the identification features of the background region to obtain a first identification result;
the first result obtaining module includes:
the distance maximum value acquisition unit is used for acquiring the maximum value and the minimum value of the distance in the depth texture features of the face region and the maximum value and the minimum value of the distance in the depth texture features of the background region;
a distance interval calculation unit, configured to calculate a first distance interval of the face area and a second distance interval of the background area based on a maximum value and a minimum value, respectively;
a distance interval determination unit, configured to determine that the first recognition result is that the face is a face prosthesis when the first distance interval is less than or equal to a first interval distance threshold and the second distance interval is less than or equal to a second interval distance threshold;
further configured to determine that the first recognition result is that the face is suspected to be a face prosthesis when the first distance interval is less than or equal to the first interval distance threshold and the second distance interval is greater than the second interval distance threshold;
and further configured to determine that the first recognition result is that the face is not a face prosthesis when the first distance interval is greater than the first interval distance threshold.
8. The apparatus of claim 7, wherein the face image acquisition module comprises:
the face region acquisition unit is used for acquiring the position of a face region of a target object in a source image; the source image is a three-dimensional source image or a two-dimensional source image;
the circumscribed rectangle obtaining unit is used for determining a circumscribed rectangle frame of the face region based on the position of the face region;
the face image acquisition unit is used for adjusting the size of the circumscribed rectangular frame based on a preset magnification factor, wherein the image in the adjusted circumscribed rectangular frame is the face image; and the region between the circumscribed rectangular frame before and after the adjustment is the background region in the face image.
9. The apparatus of claim 7, further comprising:
the second result acquisition module is used for recognizing the face based on two adjacent frames of face images to obtain a second recognition result;
the probability value acquisition module is used for calculating the probability value that the face in the face image is the face prosthesis according to the first recognition result and the second recognition result;
the probability value judging module is used for determining that the face in the face image is not a face prosthesis when the probability value is less than or equal to a probability threshold; and is further used for determining that the face in the face image is a face prosthesis when the probability value is greater than the probability threshold.
10. The apparatus according to claim 9, wherein when the face image is a three-dimensional face image, the second result obtaining module comprises:
the face image acquisition unit is used for acquiring two adjacent frames of three-dimensional face images; each pixel point in the three-dimensional face image comprises a distance between the pixel point and the image acquisition module;
the difference image calculating unit is used for calculating the difference value of the corresponding distances of the pixel points at the same position of the two frames of three-dimensional face images to obtain a difference image;
the dynamic feature obtaining unit is used for obtaining the dynamic features of the face region and the background region in the face image according to the difference image;
and the second result acquisition unit is used for identifying the face in the face image based on the dynamic characteristics of the face region and the dynamic characteristics of the background region to obtain a second identification result.
11. The apparatus of claim 10, wherein the second result obtaining unit comprises:
the change rate comparison subunit is used for comparing the change rate of the dynamic features of the background area with the change rate of the dynamic features of the face area;
a change rate judging subunit, configured to determine that the second recognition result is that the face in the face image is a face prosthesis when the change rate of the dynamic features of the face region is greater than the change rate of the dynamic features of the background region; and further configured to trigger the face prosthesis judging subunit when the change rate of the dynamic features of the face region is less than or equal to the change rate of the dynamic features of the background region;
and a face prosthesis judging subunit, configured to judge whether the dynamic features of the face region are randomly changing features; to determine that the second recognition result is that the face in the face image is not a face prosthesis when the dynamic features of the face region are randomly changing features; and to determine that the second recognition result is that the face in the face image is a face prosthesis when the dynamic features of the face region are not randomly changing features.
12. The apparatus of claim 9, wherein the probability value obtaining module comprises:
a probability weight obtaining unit, configured to obtain a probability value in the first recognition result, a probability value in the second recognition result, and a weight value corresponding to each probability value;
and the probability value calculating unit is used for calculating the product of each probability value and its corresponding weight value, and taking the algebraic sum of the products as the probability value that the face in the face image is a face prosthesis.
13. An electronic device, characterized in that the electronic device comprises: an image acquisition module configured to acquire a face image; a processor; and a memory configured to store instructions executable by the processor; wherein the processor is configured to:
acquiring a face image of a target object; the face image comprises a face area and a background area;
acquiring the recognition features of the face region and the background region; when the face image is a three-dimensional face image, the identification features of the face region and the background region are depth texture features, and the depth texture features are formed by the distance between each pixel point in the face image and an image acquisition module;
identifying whether the face in the face image is a face prosthesis based on the identification features of the face region and the identification features of the background region to obtain a first recognition result, wherein obtaining the first recognition result comprises:
acquiring the maximum value and the minimum value of the distance in the depth texture features of the face region and the maximum value and the minimum value of the distance in the depth texture features of the background region;
respectively calculating a first distance interval of the face region and a second distance interval of the background region based on the maximum value and the minimum value;
if the first distance interval is less than or equal to a first interval distance threshold and the second distance interval is less than or equal to a second interval distance threshold, determining that the first recognition result is that the face is a face prosthesis;
if the first distance interval is less than or equal to the first interval distance threshold and the second distance interval is greater than the second interval distance threshold, determining that the first recognition result is that the face is suspected to be a face prosthesis;
and if the first distance interval is greater than the first interval distance threshold, determining that the first recognition result is that the face is not a face prosthesis.
14. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing:
acquiring a face image of a target object; the face image comprises a face area and a background area;
acquiring the recognition features of the face region and the background region; when the face image is a three-dimensional face image, the identification features of the face region and the background region are depth texture features, and the depth texture features are formed by the distance between each pixel point in the face image and an image acquisition module;
identifying whether the face in the face image is a face prosthesis based on the identification features of the face region and the identification features of the background region to obtain a first recognition result, wherein obtaining the first recognition result comprises:
acquiring the maximum value and the minimum value of the distance in the depth texture features of the face region and the maximum value and the minimum value of the distance in the depth texture features of the background region;
respectively calculating a first distance interval of the face region and a second distance interval of the background region based on the maximum value and the minimum value;
if the first distance interval is less than or equal to a first interval distance threshold and the second distance interval is less than or equal to a second interval distance threshold, determining that the first recognition result is that the face is a face prosthesis;
if the first distance interval is less than or equal to the first interval distance threshold and the second distance interval is greater than the second interval distance threshold, determining that the first recognition result is that the face is suspected to be a face prosthesis;
and if the first distance interval is greater than the first interval distance threshold, determining that the first recognition result is that the face is not a face prosthesis.
CN201810515416.5A 2018-05-25 2018-05-25 Method and device for identifying human face prosthesis and electronic equipment Active CN108846321B (en)


Publications (2)

Publication Number Publication Date
CN108846321A CN108846321A (en) 2018-11-20
CN108846321B true CN108846321B (en) 2022-05-03




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant