CN105224924A - Living body face recognition method and device - Google Patents

Living body face recognition method and device

Info

Publication number
CN105224924A
CN105224924A (application CN201510633817.7A)
Authority
CN
China
Prior art keywords
depth information
facial image
feature point
living body face
Prior art date
Legal status
Pending
Application number
CN201510633817.7A
Other languages
Chinese (zh)
Inventor
张涛
汪平仄
陈志军
Current Assignee
Beijing Xiaomi Technology Co Ltd
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date
Filing date
Publication date
Application filed by Xiaomi Inc
Priority to CN201510633817.7A
Publication of CN105224924A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a living body face recognition method and device, and belongs to the field of face recognition. The method comprises: acquiring a facial image through configured dual cameras; obtaining, through the dual cameras, depth information of facial feature points in the facial image, the depth information being negatively correlated with the distance from the corresponding feature point to the local device; determining the face pose of the facial image; and judging, according to the depth information of the feature points in the facial image and the face pose, whether the facial image is a living body face. By configuring dual cameras, obtaining the depth information of the feature points and the face pose through the dual cameras, and judging from the obtained depth information and face pose whether the facial image is a living body face, the disclosure requires no cooperating facial action from the user. This lowers the demands placed on the user and avoids recognition failures caused by facial actions that are not obvious enough.

Description

Living body face recognition method and device
Technical field
The present disclosure relates to the field of face recognition, and in particular to a living body face recognition method and device.
Background technology
With the development of biometric identification technology, face recognition has been widely applied to identity authentication: the identity of the current user can be verified by recognizing the face, which improves security. In practice, however, a malicious user may impersonate another person by presenting that person's face photograph during authentication. Therefore, to improve security, liveness detection must be performed on the face.
Because a living face can change while a forged face cannot, existing liveness detection issues facial action instructions to the user under detection, such as an instruction to blink or to open the mouth, and requires the user to cooperate by making the corresponding facial action. Only when the currently detected face is determined to have performed the instructed action can the face be determined to be a living body face.
Summary of the invention
In order to solve the problems in the related art, the present disclosure provides a living body face recognition method and device. The technical solution is as follows:
According to a first aspect of the embodiments of the present disclosure, a living body face recognition method is provided, the method comprising:
acquiring a facial image through configured dual cameras;
obtaining, through the dual cameras, depth information of facial feature points in the facial image, the depth information being negatively correlated with the distance from the corresponding feature point to the local device;
determining the face pose of the facial image;
judging, according to the depth information of the feature points in the facial image and the face pose, whether the facial image is a living body face.
In another embodiment, obtaining, through the dual cameras, the depth information of the feature points in the facial image comprises:
locating the feature points in the facial image;
obtaining, through the dual cameras, the depth information of each located feature point.
In another embodiment, judging, according to the depth information of the feature points in the facial image and the face pose, whether the facial image is a living body face comprises:
judging whether the depth information of the feature points in the facial image and the face pose satisfy a preset rule;
when the depth information of the feature points in the facial image and the face pose satisfy the preset rule, determining that the facial image is a living body face;
when the depth information of the feature points in the facial image and the face pose do not satisfy the preset rule, determining that the facial image is a forged face.
In another embodiment, judging, according to the depth information of the feature points in the facial image and the face pose, whether the facial image is a living body face comprises:
when the face pose is a frontal pose, judging whether the depth information of the nose point in the facial image is greater than the depth information of the face contour points;
when the depth information of the nose point is greater than the depth information of the face contour points, determining that the facial image is a living body face;
when the depth information of the nose point is not greater than the depth information of the face contour points, determining that the facial image is a forged face.
In another embodiment, judging, according to the depth information of the feature points in the facial image and the face pose, whether the facial image is a living body face comprises:
when the face pose is a left-turned pose, judging whether the depth information of the left contour points is less than the depth information of the right contour points, the left-turned pose being a pose in which the face turns to the left;
when the depth information of the left contour points is less than the depth information of the right contour points, determining that the facial image is a living body face;
when the depth information of the left contour points is not less than the depth information of the right contour points, determining that the facial image is a forged face.
In another embodiment, judging, according to the depth information of the feature points in the facial image and the face pose, whether the facial image is a living body face comprises:
when the face pose is a right-turned pose, judging whether the depth information of the left contour points is greater than the depth information of the right contour points, the right-turned pose being a pose in which the face turns to the right;
when the depth information of the left contour points is greater than the depth information of the right contour points, determining that the facial image is a living body face;
when the depth information of the left contour points is not greater than the depth information of the right contour points, determining that the facial image is a forged face.
In another embodiment, acquiring a facial image through the configured dual cameras comprises:
acquiring multiple facial images through the dual cameras;
correspondingly, the method further comprises:
judging whether each of the multiple facial images is a living body face;
determining, according to the judgement result of each facial image, the number of living body faces among the multiple facial images;
when the number of living body faces among the multiple facial images reaches a preset number, passing the living body face recognition.
According to a second aspect of the embodiments of the present disclosure, a living body face recognition device is provided, the device comprising:
an image acquisition module, configured to acquire a facial image through configured dual cameras;
an information acquisition module, configured to obtain, through the dual cameras, depth information of facial feature points in the facial image, the depth information being negatively correlated with the distance from the corresponding feature point to the local device;
a determination module, configured to determine the face pose of the facial image;
a judgement module, configured to judge, according to the depth information of the feature points in the facial image and the face pose, whether the facial image is a living body face.
In another embodiment, the information acquisition module is further configured to locate the feature points in the facial image and to obtain, through the dual cameras, the depth information of each located feature point.
In another embodiment, the judgement module is further configured to judge whether the depth information of the feature points in the facial image and the face pose satisfy a preset rule; when they satisfy the preset rule, determine that the facial image is a living body face; and when they do not satisfy the preset rule, determine that the facial image is a forged face.
In another embodiment, the judgement module is further configured to, when the face pose is a frontal pose, judge whether the depth information of the nose point in the facial image is greater than the depth information of the face contour points; when it is greater, determine that the facial image is a living body face; and when it is not greater, determine that the facial image is a forged face.
In another embodiment, the judgement module is further configured to, when the face pose is a left-turned pose (a pose in which the face turns to the left), judge whether the depth information of the left contour points is less than the depth information of the right contour points; when it is less, determine that the facial image is a living body face; and when it is not less, determine that the facial image is a forged face.
In another embodiment, the judgement module is further configured to, when the face pose is a right-turned pose (a pose in which the face turns to the right), judge whether the depth information of the left contour points is greater than the depth information of the right contour points; when it is greater, determine that the facial image is a living body face; and when it is not greater, determine that the facial image is a forged face.
In another embodiment, the image acquisition module is further configured to acquire multiple facial images through the dual cameras;
correspondingly, the device further comprises:
the judgement module, further configured to judge whether each of the multiple facial images is a living body face;
a number determination module, configured to determine, according to the judgement result of each facial image, the number of living body faces among the multiple facial images;
a recognition module, configured to pass the living body face recognition when the number of living body faces among the multiple facial images reaches a preset number.
According to a third aspect of the embodiments of the present disclosure, a living body face recognition device is provided, the device comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
acquire a facial image through configured dual cameras;
obtain, through the dual cameras, depth information of facial feature points in the facial image, the depth information being negatively correlated with the distance from the corresponding feature point to the local device;
determine the face pose of the facial image;
judge, according to the depth information of the feature points in the facial image and the face pose, whether the facial image is a living body face.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects:
In the method and device provided by this embodiment, dual cameras are configured, the depth information of the feature points in the facial image and the face pose are obtained through the dual cameras, and whether the facial image is a living body face is judged from the obtained depth information and face pose. No cooperating facial action is required from the user, which lowers the demands placed on the user and avoids recognition failures caused by facial actions that are not obvious enough.
It should be understood that the above general description and the following detailed description are merely exemplary and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
Fig. 1 is a flowchart of a living body face recognition method according to an exemplary embodiment;
Fig. 2 is a flowchart of a living body face recognition method according to an exemplary embodiment;
Fig. 3 is a block diagram of a living body face recognition device according to an exemplary embodiment;
Fig. 4 is a block diagram of a living body face recognition device according to an exemplary embodiment;
Fig. 5 is a block diagram of a living body face recognition device according to an exemplary embodiment.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the embodiments and the accompanying drawings. The exemplary embodiments and their description are used to explain the present disclosure and do not limit it.
The embodiments of the present disclosure provide a living body face recognition method and device, which are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a living body face recognition method according to an exemplary embodiment. As shown in Fig. 1, the method is used in a face recognition device and comprises the following steps:
In step 101, a facial image is acquired through configured dual cameras.
In step 102, depth information of the facial feature points in the facial image is obtained through the dual cameras, the depth information being negatively correlated with the distance from the corresponding feature point to the local device.
In step 103, the face pose of the facial image is determined.
In step 104, whether the facial image is a living body face is judged according to the depth information of the feature points in the facial image and the face pose.
In the method provided by this embodiment, dual cameras are configured, the depth information of the feature points in the facial image and the face pose are obtained through the dual cameras, and whether the facial image is a living body face is judged from the obtained depth information and face pose. No cooperating facial action is required from the user, which lowers the demands placed on the user and avoids recognition failures caused by facial actions that are not obvious enough.
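Purely as an illustration of how steps 101-104 chain together, the following sketch wires the four steps into one function. The step implementations are passed in as callables because the disclosure does not fix any particular library; every name in the sketch is hypothetical.

```python
from typing import Any, Callable, Dict, Optional

def is_living_face(
    capture_image: Callable[[], Optional[Any]],              # step 101: dual-camera capture
    get_point_depths: Callable[[Any], Dict[str, float]],     # step 102: feature-point depth info
    estimate_pose: Callable[[Any], str],                     # step 103: e.g. "frontal" / "left" / "right"
    rule_satisfied: Callable[[Dict[str, float], str], bool], # step 104: preset-rule check
) -> bool:
    """Chain steps 101-104 of Fig. 1; returns True for a living body face."""
    image = capture_image()
    if image is None:  # no facial image obtained
        return False
    depths = get_point_depths(image)
    pose = estimate_pose(image)
    return rule_satisfied(depths, pose)
```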
In another embodiment, obtaining, through the dual cameras, the depth information of the feature points in the facial image comprises:
locating the feature points in the facial image;
obtaining, through the dual cameras, the depth information of each located feature point.
In another embodiment, judging, according to the depth information of the feature points in the facial image and the face pose, whether the facial image is a living body face comprises:
judging whether the depth information of the feature points in the facial image and the face pose satisfy a preset rule;
when the depth information of the feature points in the facial image and the face pose satisfy the preset rule, determining that the facial image is a living body face;
when the depth information of the feature points in the facial image and the face pose do not satisfy the preset rule, determining that the facial image is a forged face.
In another embodiment, judging, according to the depth information of the feature points in the facial image and the face pose, whether the facial image is a living body face comprises:
when the face pose is a frontal pose, judging whether the depth information of the nose point in the facial image is greater than the depth information of the face contour points;
when the depth information of the nose point is greater than the depth information of the face contour points, determining that the facial image is a living body face;
when the depth information of the nose point is not greater than the depth information of the face contour points, determining that the facial image is a forged face.
In another embodiment, judging, according to the depth information of the feature points in the facial image and the face pose, whether the facial image is a living body face comprises:
when the face pose is a left-turned pose, judging whether the depth information of the left contour points is less than the depth information of the right contour points, the left-turned pose being a pose in which the face turns to the left;
when the depth information of the left contour points is less than the depth information of the right contour points, determining that the facial image is a living body face;
when the depth information of the left contour points is not less than the depth information of the right contour points, determining that the facial image is a forged face.
In another embodiment, judging, according to the depth information of the feature points in the facial image and the face pose, whether the facial image is a living body face comprises:
when the face pose is a right-turned pose, judging whether the depth information of the left contour points is greater than the depth information of the right contour points, the right-turned pose being a pose in which the face turns to the right;
when the depth information of the left contour points is greater than the depth information of the right contour points, determining that the facial image is a living body face;
when the depth information of the left contour points is not greater than the depth information of the right contour points, determining that the facial image is a forged face.
In another embodiment, acquiring a facial image through the configured dual cameras comprises:
acquiring multiple facial images through the dual cameras;
correspondingly, the method further comprises:
judging whether each of the multiple facial images is a living body face;
determining, according to the judgement result of each facial image, the number of living body faces among the multiple facial images;
when the number of living body faces among the multiple facial images reaches a preset number, passing the living body face recognition.
All of the above optional solutions can be combined in any manner to form optional embodiments of the present disclosure, which are not described one by one here.
Fig. 2 is a flowchart of a living body face recognition method according to an exemplary embodiment. As shown in Fig. 2, the method is used in a face recognition device and comprises the following steps:
In step 201, a facial image is acquired through configured dual cameras.
The face recognition device is used to recognize living body faces and may be a smart phone, a computer, an access control device or the like, which is not limited in this embodiment. The face recognition device is equipped with dual cameras, through which living body face recognition can be performed.
The face recognition device may turn on the dual cameras and shoot with them to obtain an original image. The original image may contain a face together with other parts such as the body or buildings; to facilitate face recognition, the face recognition device may detect the face in the original image, obtain the facial image within it, and ignore the other parts.
The face recognition device may turn on the dual cameras when it receives a face recognition instruction, so as to perform face recognition. The face recognition instruction may be a click on a key area, a login instruction, an unlock instruction, or the like, which is not limited in this embodiment. In addition, the face recognition device may detect the face in the original image using the AdaBoost (Adaptive Boosting) algorithm based on Haar features, or using another detection algorithm, which is likewise not limited in this embodiment.
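As one concrete illustration of Haar-feature AdaBoost detection (only an example of that family of detectors, not necessarily the implementation used by the disclosure), OpenCV ships a pre-trained Haar cascade that can be applied as follows; the parameter values are common defaults, not values from the patent.

```python
import cv2

def detect_face_regions(image_bgr):
    """Detect face rectangles with OpenCV's pre-trained Haar-cascade (AdaBoost) detector."""
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Each detection is an (x, y, w, h) rectangle; the facial image is the cropped region.
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```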
In step 202, depth information of the facial feature points in the facial image is obtained through the dual cameras.
Here, a feature point is a point on a facial organ, and each organ may contribute one or more points. The facial image may contain multiple feature points, such as nose points, face contour points, mouth points and eye contour points, which are not limited in this embodiment.
After obtaining the facial image, the face recognition device may locate the feature points in it and obtain at least one feature point. For the localization, the face recognition device may use ASM (Active Shape Model), AAM (Active Appearance Model), SDM (Supervised Descent Method) or the like; this embodiment does not limit the specific localization method.
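As a hedged illustration of such landmark localization, the sketch below uses dlib's publicly distributed 68-point shape predictor, a cascaded-regression model in the same family as the SDM mentioned above; it is not necessarily the model used by the patent, the model file must be downloaded separately, and the chosen point indices and names are assumptions of this sketch.

```python
import dlib

# Assumed local path to dlib's publicly distributed 68-point landmark model.
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"

def locate_feature_points(gray_image):
    """Return {name: (x, y)} for a few feature points of the first detected face."""
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor(PREDICTOR_PATH)
    faces = detector(gray_image, 1)
    if not faces:
        return {}
    shape = predictor(gray_image, faces[0])
    # Index choices follow dlib's 68-point convention: 30 = nose tip,
    # 0 / 16 = outermost left / right contour points in image coordinates.
    pick = {"nose": 30, "left_contour": 0, "right_contour": 16}
    return {name: (shape.part(i).x, shape.part(i).y) for name, i in pick.items()}
```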
After the feature points in the facial image are obtained, the face recognition device obtains, through the dual cameras, the depth information of each located feature point.
The depth information of a feature point indicates the distance from that point to the face recognition device and is negatively correlated with it: the larger the depth information, the closer the feature point is to the device; the smaller the depth information, the farther away it is.
In practice, when the dual cameras capture a feature point, the viewing angles from the two cameras to the point differ, so the distance between the point and the face recognition device can be determined by triangulation, and the depth information of the point can then be determined from that distance.
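One common way to realize this triangulation with a rectified stereo pair is sketched below. Treating the raw disparity itself as the "depth information" (it grows as the point gets closer, matching the negative correlation above) is an assumption of this sketch, and the calibration values are placeholders, not figures from the disclosure.

```python
def point_disparity(x_left: float, x_right: float) -> float:
    """Disparity (pixels) of one feature point between the left and right images.

    For rectified cameras, distance Z = focal_px * baseline / disparity, so the
    disparity is larger for closer points and can serve directly as the depth
    information defined above (an assumption of this sketch).
    """
    return abs(x_left - x_right)


def point_distance(x_left: float, x_right: float,
                   focal_px: float = 1000.0, baseline_m: float = 0.025) -> float:
    """Distance from the device to the feature point via stereo triangulation.

    focal_px and baseline_m are placeholder calibration values.
    """
    disparity = max(abs(x_left - x_right), 1e-6)  # guard against division by zero
    return focal_px * baseline_m / disparity
```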
In step 203, the face pose of the facial image is determined.
The face pose may be a frontal pose, a left-turned pose, a right-turned pose, an upward pose, a downward pose or the like: the frontal pose is the pose in which the face looks toward the face recognition device, the left-turned pose is the pose in which the face turns to the left, the right-turned pose is the pose in which the face turns to the right, the upward pose is the pose in which the face is tilted up, and the downward pose is the pose in which the face is lowered. This embodiment does not limit the face pose.
In this embodiment, the face recognition device may use a pose classifier to determine the face pose of the facial image. The pose classifier may be an AdaBoost pose classifier, an SVM (Support Vector Machine) pose classifier based on Gabor features, or the like, which is not limited in this embodiment.
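The embodiment relies on a trained pose classifier; purely so that the later sketches have a pose label to consume, the following heuristic guesses the pose from the located landmarks. It is only a geometric stand-in for the AdaBoost or Gabor+SVM classifiers named above, and the ratio threshold and the mapping of labels to the image's own left/right are arbitrary assumptions.

```python
def rough_pose(nose_x: float, left_contour_x: float, right_contour_x: float,
               ratio: float = 1.8) -> str:
    """Guess "frontal", "left" or "right" from how the nose sits between the
    image-left and image-right contour points (a stand-in, not the patent's classifier)."""
    left_gap = abs(nose_x - left_contour_x)
    right_gap = abs(nose_x - right_contour_x)
    if left_gap > ratio * right_gap:
        return "right"   # nose crowded toward the image-right contour (convention of this sketch)
    if right_gap > ratio * left_gap:
        return "left"    # nose crowded toward the image-left contour
    return "frontal"
```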
In step 204, whether the depth information of the feature points in the facial image and the face pose satisfy a preset rule is judged; if they do, the facial image is determined to be a living body face, and if they do not, it is determined to be a forged face.
In everyday scenarios, taking the face recognition device as the reference, a forged face is usually a face photograph or a frame cut from a face video, so the different feature points of such a fake are all at essentially the same distance from the face recognition device, which does not match the distribution of real facial features. On a living face, by contrast, different feature points lie at different distances from the device, and the relative order of the distances of any two feature points follows the distribution of real facial features; for example, the user's nose is usually closer to the face recognition device than the user's eyes.
Moreover, although the distances from the feature points to the device change when the pose of a living face changes, the relative order of the distances of any two feature points still follows the distribution of facial features under that pose.
Considering these differences between forged faces and living faces, the face recognition device can determine a preset rule from the distribution of facial features under different poses; the preset rule constrains the relative order of the depth information of different feature points under each face pose. Then, once the device has obtained the depth information of the feature points in the facial image and the face pose, it judges whether they satisfy the preset rule: if they do, the facial image can be determined to be a living body face; if they do not, it can be determined to be a forged face.
Depending on the face pose, the judgement may comprise any one of the following cases (1)-(3):
(1) When the face pose is a frontal pose, judge whether the depth information of the nose point is greater than the depth information of the face contour points; when the depth information of the nose point is greater than that of the contour points, determine that the facial image is a living body face; when it is not greater, determine that the facial image is a forged face.
According to the distribution of facial features, the nose of a living face normally protrudes beyond the edges on both sides of the face, so when the user faces the device the nose point is closer to the face recognition device and the contour points are farther from it. Correspondingly, the preset rule under the frontal pose may comprise: the depth information of the nose point is greater than the depth information of the face contour points.
Therefore, the face recognition device judges whether the depth information of the nose point is greater than that of the contour points. When it is greater, the nose point is closer to the device and the contour points are farther away, the preset rule is satisfied, and the facial image can be determined to be a living body face. When it is not greater, the facial image does not satisfy the preset rule and can be determined to be a forged face.
It should be noted that this embodiment is described with only one nose point and one face contour point, whereas in practice the face recognition device may obtain multiple nose points and multiple contour points. In that case the device may compute the mean depth information of the nose points and the mean depth information of the contour points and compare the two means to judge whether the facial image satisfies the preset rule. Alternatively, the device may compare the nose points and the contour points pairwise: whenever the depth information of a nose point is greater than that of the corresponding contour point, a vote is cast for the nose points, and whenever it is not greater, a vote is cast for the contour points. When all comparisons are finished, if the nose points have more votes than the contour points, the depth information of the nose points is determined to be greater than that of the contour points and the facial image satisfies the preset rule; otherwise, the depth information of the nose points is determined to be not greater and the facial image does not satisfy the preset rule. This embodiment does not limit the comparison method.
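A minimal sketch of the two multi-point strategies just described (mean comparison and pairwise voting) follows; pairing each nose point with a contour point positionally via zip() is an assumption, since the text does not say how "corresponding" points are chosen.

```python
def frontal_rule_met(nose_depths, contour_depths, use_mean=False) -> bool:
    """Check the frontal-pose rule with several nose and contour points.

    With use_mean=True the mean depth information of each group is compared;
    otherwise the points are compared pairwise and the side with more votes
    wins, as described above. Positional pairing of the two lists is an
    assumption of this sketch.
    """
    if use_mean:
        return (sum(nose_depths) / len(nose_depths)
                > sum(contour_depths) / len(contour_depths))
    pairs = list(zip(nose_depths, contour_depths))
    nose_votes = sum(1 for nose, contour in pairs if nose > contour)
    return nose_votes > len(pairs) - nose_votes
```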
(2) When the face pose is a left-turned pose (a pose in which the face turns to the left), judge whether the depth information of the left contour points is less than the depth information of the right contour points; when it is less, determine that the facial image is a living body face; when it is not less, determine that the facial image is a forged face.
According to the distribution of facial features, when the user's face turns to the left, the distance from the left cheek to the face recognition device is usually greater than the distance from the right cheek to the device. Correspondingly, the preset rule under the left-turned pose may comprise: the depth information of the left contour points is less than the depth information of the right contour points.
Therefore, the face recognition device judges whether the depth information of the left contour points is less than that of the right contour points. When it is less, the left cheek is farther from the device and the right cheek is closer, the preset rule is satisfied, and the facial image can be determined to be a living body face. When it is not less, the facial image does not satisfy the preset rule and can be determined to be a forged face.
It should be noted that this embodiment is described with only one left contour point and one right contour point, whereas in practice the face recognition device may obtain multiple left contour points and multiple right contour points. In that case the device may compare the mean depth information of the left contour points with the mean depth information of the right contour points, or compare the points pairwise and vote in the same way as in case (1): if the left contour points win more votes, the depth information of the left contour points is determined to be less than that of the right contour points and the facial image satisfies the preset rule; otherwise it is determined to be not less and the facial image does not satisfy the preset rule. This embodiment does not limit the comparison method.
(3) When the face pose is a right-turned pose (a pose in which the face turns to the right), judge whether the depth information of the left contour points is greater than the depth information of the right contour points; when it is greater, determine that the facial image is a living body face; when it is not greater, determine that the facial image is a forged face.
According to the distribution of facial features, when the user's face turns to the right, the distance from the left cheek to the face recognition device is usually less than the distance from the right cheek to the device. Correspondingly, the preset rule under the right-turned pose may comprise: the depth information of the left contour points is greater than the depth information of the right contour points.
The detailed process of case (3) is similar to that of case (2) and is not repeated here.
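Gathering cases (1)-(3), a compact sketch of the pose-dependent preset rule looks like the following. The dictionary keys, the rejection of poses without a stated rule, and comparing the nose against both contour points in the frontal case are assumptions of this sketch rather than requirements of the disclosure.

```python
def preset_rule_satisfied(depths: dict, pose: str) -> bool:
    """Apply the preset rule of cases (1)-(3).

    `depths` is assumed to map "nose", "left_contour" and "right_contour" to
    depth-information values (larger = closer to the device). Left and right
    follow one fixed convention, as the note below on image mirroring requires.
    """
    if pose == "frontal":
        # Case (1): the protruding nose must be closer than the face contour.
        return (depths["nose"] > depths["left_contour"]
                and depths["nose"] > depths["right_contour"])
    if pose == "left":
        # Case (2): face turned left, so the left contour is farther away.
        return depths["left_contour"] < depths["right_contour"]
    if pose == "right":
        # Case (3): face turned right, so the left contour is closer.
        return depths["left_contour"] > depths["right_contour"]
    return False  # no rule defined for other poses in this sketch
```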
It should be noted that this embodiment only describes the preset rules under the three poses in cases (1)-(3). In practice, the face recognition device may also adopt preset rules of other forms under these three poses, according to the distribution of facial features, and may also set preset rules for poses other than the three above; this embodiment does not limit this.
In addition, the left-right direction of the facial image captured by the face recognition device is mirrored with respect to the real left-right direction of the face. This embodiment may distinguish the left-turned and right-turned poses, and the left and right contour points, either according to the left and right of the facial image or according to the real left and right of the face; it only needs to ensure that the left-right convention used throughout the living body face recognition process is consistent.
It should also be added that this embodiment considers a single facial image: when that facial image is determined to be a living body face, the living body face recognition is passed. In practice, the face recognition device may obtain multiple facial images through the dual cameras, for example by capturing a video clip and taking several consecutive frames containing the face, or by shooting several times.
When multiple facial images are obtained, the face recognition device may perform steps 202-204 on each of them to judge whether each facial image is a living body face, obtain the judgement result for each image, and determine from these results the number of living body faces among the multiple images. When the number of living body faces reaches a preset number, the living body face recognition is passed; when it does not reach the preset number, the recognition is not passed.
The preset number may be predetermined by the face recognition device, determined according to the accuracy requirement of the living body face recognition, or determined from the number of facial images and a preset ratio; for example, the preset ratio may be one half. This embodiment does not limit this.
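A short sketch of this aggregation step follows; deriving the preset number from a preset ratio of the image count and treating "reaches" as greater-than-or-equal are assumptions of the sketch.

```python
def recognition_passed(per_image_is_live, preset_ratio=0.5) -> bool:
    """Aggregate the per-image judgements of steps 202-204.

    per_image_is_live is a list of booleans, one per facial image; the preset
    number is taken here as preset_ratio times the number of images (the text
    gives one half as an example ratio).
    """
    preset_number = preset_ratio * len(per_image_is_live)
    return sum(per_image_is_live) >= preset_number
```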
A face recognition device in the related art has to issue facial action instructions and perform living body face recognition according to whether the current face makes the corresponding action. That process demands a high degree of cooperation from the user, and when the user's facial action is not obvious the user's face is easily misjudged as a forged face, causing the recognition to fail. The living body face recognition method provided by this embodiment, by contrast, uses the obtained depth information together with the distribution of facial features and some intelligent analysis to complete living body face recognition automatically and accurately. The user only needs to stand within the shooting range of the dual cameras; the judgement can be made naturally without the user's cooperation, so recognition failures caused by insufficient cooperation do not occur, the user experience is improved, and impostors are prevented from impersonating faces with forged photographs or videos.
In the method provided by this embodiment, dual cameras are configured, the depth information of the feature points in the facial image and the face pose are obtained through the dual cameras, and whether the facial image is a living body face is judged from the obtained depth information and face pose. No cooperating facial action is required from the user, which lowers the demands placed on the user and avoids recognition failures caused by facial actions that are not obvious enough.
Fig. 3 is a block diagram of a living body face recognition device according to an exemplary embodiment. Referring to Fig. 3, the device comprises an image acquisition module 301, an information acquisition module 302, a determination module 303 and a judgement module 304.
The image acquisition module 301 is configured to acquire a facial image through configured dual cameras;
the information acquisition module 302 is configured to obtain, through the dual cameras, depth information of the facial feature points in the facial image, the depth information being negatively correlated with the distance from the corresponding feature point to the local device;
the determination module 303 is configured to determine the face pose of the facial image;
the judgement module 304 is configured to judge, according to the depth information of the feature points in the facial image and the face pose, whether the facial image is a living body face.
In the device provided by this embodiment, dual cameras are configured, the depth information of the feature points in the facial image and the face pose are obtained through the dual cameras, and whether the facial image is a living body face is judged from the obtained depth information and face pose. No cooperating facial action is required from the user, which lowers the demands placed on the user and avoids recognition failures caused by facial actions that are not obvious enough.
In another embodiment, the information acquisition module 302 is further configured to locate the feature points in the facial image and to obtain, through the dual cameras, the depth information of each located feature point.
In another embodiment, the judgement module 304 is further configured to judge whether the depth information of the feature points in the facial image and the face pose satisfy a preset rule; when they satisfy the preset rule, determine that the facial image is a living body face; and when they do not, determine that the facial image is a forged face.
In another embodiment, the judgement module 304 is further configured to, when the face pose is a frontal pose, judge whether the depth information of the nose point in the facial image is greater than the depth information of the face contour points; when it is greater, determine that the facial image is a living body face; and when it is not greater, determine that the facial image is a forged face.
In another embodiment, the judgement module 304 is further configured to, when the face pose is a left-turned pose (a pose in which the face turns to the left), judge whether the depth information of the left contour points is less than the depth information of the right contour points; when it is less, determine that the facial image is a living body face; and when it is not less, determine that the facial image is a forged face.
In another embodiment, the judgement module 304 is further configured to, when the face pose is a right-turned pose (a pose in which the face turns to the right), judge whether the depth information of the left contour points is greater than the depth information of the right contour points; when it is greater, determine that the facial image is a living body face; and when it is not greater, determine that the facial image is a forged face.
In another embodiment, the image acquisition module 301 is further configured to acquire multiple facial images through the dual cameras;
correspondingly, referring to Fig. 4, the device further comprises:
the judgement module 304, further configured to judge whether each of the multiple facial images is a living body face;
a number determination module 305, configured to determine, according to the judgement result of each facial image, the number of living body faces among the multiple facial images;
a recognition module 306, configured to pass the living body face recognition when the number of living body faces among the multiple facial images reaches a preset number.
With regard to the device in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the method embodiments and is not elaborated here.
It should be noted that when the living body face recognition device provided by the above embodiment performs living body face recognition, the division into the above functional modules is only used as an example. In practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the face recognition device may be divided into different functional modules to complete all or part of the functions described above. In addition, the living body face recognition device provided by the above embodiment and the living body face recognition method embodiments belong to the same concept; for the specific implementation process, reference is made to the method embodiments, which is not repeated here.
Fig. 5 is a block diagram of a living body face recognition device 500 according to an exemplary embodiment. For example, the device 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant or the like.
Referring to Fig. 5, the device 500 may comprise one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514 and a communication component 516.
The processing component 502 generally controls the overall operation of the device 500, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 502 may comprise one or more processors 520 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 502 may comprise one or more modules to facilitate interaction between the processing component 502 and the other components; for example, it may comprise a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support the operation of the device 500. Examples of such data include instructions of any application or method operated on the device 500, contact data, phonebook data, messages, pictures, video and so on. The memory 504 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disk.
The power component 506 provides power for the various components of the device 500. It may comprise a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the device 500.
The multimedia component 508 comprises a screen that provides an output interface between the device 500 and the user. In some embodiments, the screen may comprise a liquid crystal display (LCD) and a touch panel (TP). If the screen comprises a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel comprises one or more touch sensors to sense touches, swipes and gestures on the panel; the touch sensors may sense not only the boundary of a touch or swipe action but also the duration and pressure related to it. In some embodiments, the multimedia component 508 comprises a front camera and/or a rear camera. When the device 500 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or rear camera may receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 comprises a microphone (MIC) that is configured to receive external audio signals when the device 500 is in an operating mode, such as a call mode, a recording mode or a speech recognition mode. The received audio signal may be further stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 also comprises a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be a keyboard, a click wheel, buttons and the like. The buttons may include, but are not limited to, a home button, volume buttons, a start button and a lock button.
The sensor component 514 comprises one or more sensors for providing status assessments of various aspects of the device 500. For example, the sensor component 514 may detect the open/closed state of the device 500 and the relative positioning of components, such as the display and keypad of the device 500, and may also detect a change in the position of the device 500 or of one of its components, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500 and a change in its temperature. The sensor component 514 may comprise a proximity sensor configured to detect the presence of nearby objects without any physical contact, and may also comprise a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 514 may also comprise an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the device 500 and other equipment. The device 500 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 516 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 also comprises a near field communication (NFC) module to facilitate short-range communication; for example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology or other technologies.
In an exemplary embodiment, the device 500 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for performing the above living body face recognition method.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions is also provided, such as the memory 504 comprising instructions that can be executed by the processor 520 of the device 500 to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device or the like.
A kind of non-transitory computer-readable recording medium, when the instruction in this storage medium is performed by the processor of face recognition device, make face recognition device can perform a kind of living body faces recognition methods, the method comprises:
By the dual camera of configuration, obtain facial image;
By this dual camera, obtain the depth information of the organ point in this facial image, this depth information and corresponding organ point are negative correlativing relation to the distance of local terminal;
Determine the human face posture of this facial image;
According to depth information and this human face posture of the organ point in this facial image, judge whether this facial image is living body faces.
In another embodiment, by this dual camera, the depth information of the organ point in this facial image should be obtained, comprising:
Organ point in this facial image is positioned;
By this dual camera, obtain the depth information of locating each organ point obtained.
In another embodiment, according to the depth information of the organ point in this facial image and this human face posture, this judges whether this facial image is living body faces, comprising:
Judge whether depth information and this human face posture of the organ point in facial image meet preset rules;
When the depth information of the organ point in this facial image and this human face posture meet preset rules, determine that this facial image is living body faces;
When the depth information of the organ point in this facial image and this human face posture do not meet this preset rules, determine that this facial image is for forging face.
In another embodiment, according to the depth information of the organ point in this facial image and this human face posture, this judges whether this facial image is living body faces, comprising:
When this human face posture is frontal pose, judge whether the depth information of the nose point in this facial image is greater than the depth information of face mask point;
When the depth information of nose point is greater than the depth information of face mask point, determine that this facial image is living body faces;
When the depth information of nose point is not more than the depth information of face mask point, determine that this facial image is for forging face.
In another embodiment, judging, according to the depth information of the organ points in the face image and the face pose, whether the face image is a living body face includes:
when the face pose is a left-side pose, judging whether the depth information of the left face contour points is less than the depth information of the right face contour points, the left-side pose referring to a pose in which the face turns to the left;
when the depth information of the left face contour points is less than the depth information of the right face contour points, determining that the face image is a living body face;
when the depth information of the left face contour points is not less than the depth information of the right face contour points, determining that the face image is a forged face.
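A corresponding sketch of the left-side rule, under the same depth convention and dictionary layout as the frontal example above:

    # Sketch of the left-side pose rule (face turned to the left): the left
    # contour recedes, so its depth information falls below the right contour's.
    def check_left_pose(depths):
        return depths["contour_left"] < depths["contour_right"]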
In another embodiment, judging, according to the depth information of the organ points in the face image and the face pose, whether the face image is a living body face includes:
when the face pose is a right-side pose, judging whether the depth information of the left face contour points is greater than the depth information of the right face contour points, the right-side pose referring to a pose in which the face turns to the right;
when the depth information of the left face contour points is greater than the depth information of the right face contour points, determining that the face image is a living body face;
when the depth information of the left face contour points is not greater than the depth information of the right face contour points, determining that the face image is a forged face.
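And the mirror rule for the right-side pose; together, these three illustrative predicates could populate the rules mapping of the dispatcher sketched earlier (all names remain assumptions).

    # Sketch of the right-side pose rule (face turned to the right): the left
    # contour now faces the camera, so its depth information exceeds the right contour's.
    def check_right_pose(depths):
        return depths["contour_left"] > depths["contour_right"]

    # Example wiring of the per-pose rules (illustrative only):
    # rules = {"frontal": check_frontal, "left": check_left_pose, "right": check_right_pose}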
In another embodiment, acquiring a face image through the configured dual camera includes:
acquiring a plurality of face images through the dual camera;
Correspondingly, the method further includes:
judging whether each of the plurality of face images is a living body face;
determining, according to the judgement result of each face image, the number of living body faces in the plurality of face images;
when the number of living body faces in the plurality of face images reaches a preset number, determining that the living body face recognition passes.
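A short sketch of this multi-image decision, assuming judge_single is the per-image judgement outlined above and that the preset number is a threshold left to the implementer:

    # Sketch: living body face recognition passes only when enough of the
    # captured face images are individually judged to be live.
    def recognize_living_face(image_pairs, judge_single, preset_number=3):
        live_count = sum(1 for pair in image_pairs if judge_single(pair))
        return live_count >= preset_number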
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or conventional technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise construction described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (15)

1. A living body face recognition method, characterized in that the method comprises:
acquiring a face image through a configured dual camera;
acquiring, through the dual camera, depth information of organ points in the face image, the depth information being negatively correlated with the distance from the corresponding organ point to the local terminal;
determining a face pose of the face image;
judging, according to the depth information of the organ points in the face image and the face pose, whether the face image is a living body face.
2. The method according to claim 1, characterized in that acquiring, through the dual camera, the depth information of the organ points in the face image comprises:
locating the organ points in the face image;
acquiring, through the dual camera, the depth information of each located organ point.
3. The method according to claim 1, characterized in that judging, according to the depth information of the organ points in the face image and the face pose, whether the face image is a living body face comprises:
judging whether the depth information of the organ points in the face image and the face pose meet a preset rule;
when the depth information of the organ points in the face image and the face pose meet the preset rule, determining that the face image is a living body face;
when the depth information of the organ points in the face image and the face pose do not meet the preset rule, determining that the face image is a forged face.
4. The method according to claim 1 or 3, characterized in that judging, according to the depth information of the organ points in the face image and the face pose, whether the face image is a living body face comprises:
when the face pose is a frontal pose, judging whether the depth information of the nose point in the face image is greater than the depth information of the face contour points;
when the depth information of the nose point is greater than the depth information of the face contour points, determining that the face image is a living body face;
when the depth information of the nose point is not greater than the depth information of the face contour points, determining that the face image is a forged face.
5. The method according to claim 1 or 3, characterized in that judging, according to the depth information of the organ points in the face image and the face pose, whether the face image is a living body face comprises:
when the face pose is a left-side pose, judging whether the depth information of the left face contour points is less than the depth information of the right face contour points, the left-side pose referring to a pose in which the face turns to the left;
when the depth information of the left face contour points is less than the depth information of the right face contour points, determining that the face image is a living body face;
when the depth information of the left face contour points is not less than the depth information of the right face contour points, determining that the face image is a forged face.
6. The method according to claim 1 or 3, characterized in that judging, according to the depth information of the organ points in the face image and the face pose, whether the face image is a living body face comprises:
when the face pose is a right-side pose, judging whether the depth information of the left face contour points is greater than the depth information of the right face contour points, the right-side pose referring to a pose in which the face turns to the right;
when the depth information of the left face contour points is greater than the depth information of the right face contour points, determining that the face image is a living body face;
when the depth information of the left face contour points is not greater than the depth information of the right face contour points, determining that the face image is a forged face.
7. The method according to claim 1, characterized in that acquiring a face image through the configured dual camera comprises:
acquiring a plurality of face images through the dual camera;
Correspondingly, the method further comprises:
judging whether each of the plurality of face images is a living body face;
determining, according to the judgement result of each face image, the number of living body faces in the plurality of face images;
when the number of living body faces in the plurality of face images reaches a preset number, determining that the living body face recognition passes.
8. A living body face recognition device, characterized in that the device comprises:
an image acquisition module, configured to acquire a face image through a configured dual camera;
an information acquisition module, configured to acquire, through the dual camera, depth information of organ points in the face image, the depth information being negatively correlated with the distance from the corresponding organ point to the local terminal;
a determination module, configured to determine a face pose of the face image;
a judgement module, configured to judge, according to the depth information of the organ points in the face image and the face pose, whether the face image is a living body face.
9. The device according to claim 8, characterized in that the information acquisition module is further configured to locate the organ points in the face image, and to acquire, through the dual camera, the depth information of each located organ point.
10. The device according to claim 8, characterized in that the judgement module is further configured to judge whether the depth information of the organ points in the face image and the face pose meet a preset rule; when the depth information of the organ points in the face image and the face pose meet the preset rule, determine that the face image is a living body face; and when the depth information of the organ points in the face image and the face pose do not meet the preset rule, determine that the face image is a forged face.
11. The device according to claim 8 or 10, characterized in that the judgement module is further configured to: when the face pose is a frontal pose, judge whether the depth information of the nose point in the face image is greater than the depth information of the face contour points; when the depth information of the nose point is greater than the depth information of the face contour points, determine that the face image is a living body face; and when the depth information of the nose point is not greater than the depth information of the face contour points, determine that the face image is a forged face.
12. The device according to claim 8 or 10, characterized in that the judgement module is further configured to: when the face pose is a left-side pose, judge whether the depth information of the left face contour points is less than the depth information of the right face contour points, the left-side pose referring to a pose in which the face turns to the left; when the depth information of the left face contour points is less than the depth information of the right face contour points, determine that the face image is a living body face; and when the depth information of the left face contour points is not less than the depth information of the right face contour points, determine that the face image is a forged face.
13. The device according to claim 8 or 10, characterized in that the judgement module is further configured to: when the face pose is a right-side pose, judge whether the depth information of the left face contour points is greater than the depth information of the right face contour points, the right-side pose referring to a pose in which the face turns to the right; when the depth information of the left face contour points is greater than the depth information of the right face contour points, determine that the face image is a living body face; and when the depth information of the left face contour points is not greater than the depth information of the right face contour points, determine that the face image is a forged face.
14. The device according to claim 8, characterized in that the image acquisition module is further configured to acquire a plurality of face images through the dual camera;
Correspondingly, the device further comprises:
the judgement module, further configured to judge whether each of the plurality of face images is a living body face;
a number determination module, configured to determine, according to the judgement result of each face image, the number of living body faces in the plurality of face images;
a recognition module, configured to determine that the living body face recognition passes when the number of living body faces in the plurality of face images reaches a preset number.
15. A living body face recognition device, characterized by comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquire a face image through a configured dual camera;
acquire, through the dual camera, depth information of organ points in the face image, the depth information being negatively correlated with the distance from the corresponding organ point to the local terminal;
determine a face pose of the face image;
judge, according to the depth information of the organ points in the face image and the face pose, whether the face image is a living body face.
CN201510633817.7A 2015-09-29 2015-09-29 Living body faces recognition methods and device Pending CN105224924A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510633817.7A CN105224924A (en) 2015-09-29 2015-09-29 Living body faces recognition methods and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510633817.7A CN105224924A (en) 2015-09-29 2015-09-29 Living body faces recognition methods and device

Publications (1)

Publication Number Publication Date
CN105224924A true CN105224924A (en) 2016-01-06

Family

ID=54993884

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510633817.7A Pending CN105224924A (en) 2015-09-29 2015-09-29 Living body faces recognition methods and device

Country Status (1)

Country Link
CN (1) CN105224924A (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574518A (en) * 2016-01-25 2016-05-11 北京天诚盛业科技有限公司 Method and device for human face living detection
CN105740779A (en) * 2016-01-25 2016-07-06 北京天诚盛业科技有限公司 Method and device for human face in-vivo detection
CN105930710A (en) * 2016-04-22 2016-09-07 北京旷视科技有限公司 Living body detection method and device
CN105956518A (en) * 2016-04-21 2016-09-21 腾讯科技(深圳)有限公司 Face identification method, device and system
CN106897675A (en) * 2017-01-24 2017-06-27 上海交通大学 The human face in-vivo detection method that binocular vision depth characteristic is combined with appearance features
CN106991376A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 With reference to the side face verification method and device and electronic installation of depth information
CN107066942A (en) * 2017-03-03 2017-08-18 上海斐讯数据通信技术有限公司 A kind of living body faces recognition methods and system
CN107273875A (en) * 2017-07-18 2017-10-20 广东欧珀移动通信有限公司 Human face in-vivo detection method and Related product
CN107392184A (en) * 2017-08-28 2017-11-24 广东欧珀移动通信有限公司 Recognition of face verification method and device
CN107506687A (en) * 2017-07-17 2017-12-22 广东欧珀移动通信有限公司 Biopsy method and Related product
CN107515509A (en) * 2016-06-15 2017-12-26 香港彩亿科技有限公司 Projector and method for automatic brightness adjustment
CN107736874A (en) * 2017-08-25 2018-02-27 百度在线网络技术(北京)有限公司 A kind of method, apparatus of In vivo detection, equipment and computer-readable storage medium
CN108229331A (en) * 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 Face false-proof detection method and system, electronic equipment, program and medium
CN108875331A (en) * 2017-08-01 2018-11-23 北京旷视科技有限公司 Face unlocking method, device and system and storage medium
CN108875333A (en) * 2017-09-22 2018-11-23 北京旷视科技有限公司 Terminal unlock method, terminal and computer readable storage medium
CN109325436A (en) * 2018-09-17 2019-02-12 王虹 Face identification system and server
CN109460697A (en) * 2017-09-06 2019-03-12 原相科技股份有限公司 The auxiliary filter of human face recognition and the starting method of electronic device
CN110059590A (en) * 2019-03-29 2019-07-26 努比亚技术有限公司 A kind of face living body verification method, device, mobile terminal and readable storage medium storing program for executing
CN111160251A (en) * 2019-12-30 2020-05-15 支付宝实验室(新加坡)有限公司 Living body identification method and device
CN111666835A (en) * 2020-05-20 2020-09-15 广东志远科技有限公司 Face living body detection method and device
CN113313057A (en) * 2021-06-16 2021-08-27 山东省科学院激光研究所 Face living body detection and recognition system
US11256903B2 (en) 2018-04-12 2022-02-22 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, image processing device, computer readable storage medium and electronic device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103106703A (en) * 2013-01-14 2013-05-15 张平 Anti-cheating driver training recorder
CN103440479A (en) * 2013-08-29 2013-12-11 湖北微模式科技发展有限公司 Method and system for detecting living body human face
CN104751110A (en) * 2013-12-31 2015-07-01 汉王科技股份有限公司 Bio-assay detection method and device
CN104834901A (en) * 2015-04-17 2015-08-12 北京海鑫科金高科技股份有限公司 Binocular stereo vision-based human face detection method, device and system

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740779A (en) * 2016-01-25 2016-07-06 北京天诚盛业科技有限公司 Method and device for human face in-vivo detection
CN105574518A (en) * 2016-01-25 2016-05-11 北京天诚盛业科技有限公司 Method and device for human face living detection
CN105740779B (en) * 2016-01-25 2020-11-13 北京眼神智能科技有限公司 Method and device for detecting living human face
CN105574518B (en) * 2016-01-25 2020-02-21 北京眼神智能科技有限公司 Method and device for detecting living human face
WO2017181769A1 (en) * 2016-04-21 2017-10-26 腾讯科技(深圳)有限公司 Facial recognition method, apparatus and system, device, and storage medium
CN105956518A (en) * 2016-04-21 2016-09-21 腾讯科技(深圳)有限公司 Face identification method, device and system
CN105930710B (en) * 2016-04-22 2019-11-12 北京旷视科技有限公司 Biopsy method and device
CN105930710A (en) * 2016-04-22 2016-09-07 北京旷视科技有限公司 Living body detection method and device
CN107515509A (en) * 2016-06-15 2017-12-26 香港彩亿科技有限公司 Projector and method for automatic brightness adjustment
CN106897675B (en) * 2017-01-24 2021-08-17 上海交通大学 Face living body detection method combining binocular vision depth characteristic and apparent characteristic
CN106897675A (en) * 2017-01-24 2017-06-27 上海交通大学 The human face in-vivo detection method that binocular vision depth characteristic is combined with appearance features
CN107066942A (en) * 2017-03-03 2017-08-18 上海斐讯数据通信技术有限公司 A kind of living body faces recognition methods and system
CN106991376B (en) * 2017-03-09 2020-03-17 Oppo广东移动通信有限公司 Depth information-combined side face verification method and device and electronic device
CN106991376A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 With reference to the side face verification method and device and electronic installation of depth information
US11482040B2 (en) 2017-03-16 2022-10-25 Beijing Sensetime Technology Development Co., Ltd. Face anti-counterfeiting detection methods and systems, electronic devices, programs and media
CN108229331A (en) * 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 Face false-proof detection method and system, electronic equipment, program and medium
CN108229329A (en) * 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 Face false-proof detection method and system, electronic equipment, program and medium
WO2018166525A1 (en) * 2017-03-16 2018-09-20 北京市商汤科技开发有限公司 Human face anti-counterfeit detection method and system, electronic device, program and medium
WO2019015415A1 (en) * 2017-07-17 2019-01-24 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method of living body detection and terminal device
RU2731370C1 (en) * 2017-07-17 2020-09-02 Гуандун Оппо Мобайл Телекоммьюникейшнс Корп., Лтд. Method of living organism recognition and terminal device
US11100348B2 (en) 2017-07-17 2021-08-24 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method of living body detection and terminal device
CN107506687B (en) * 2017-07-17 2020-01-21 Oppo广东移动通信有限公司 Living body detection method and related product
CN107506687A (en) * 2017-07-17 2017-12-22 广东欧珀移动通信有限公司 Biopsy method and Related product
CN107273875A (en) * 2017-07-18 2017-10-20 广东欧珀移动通信有限公司 Human face in-vivo detection method and Related product
CN108875331A (en) * 2017-08-01 2018-11-23 北京旷视科技有限公司 Face unlocking method, device and system and storage medium
CN108875331B (en) * 2017-08-01 2022-08-19 北京旷视科技有限公司 Face unlocking method, device and system and storage medium
CN107736874B (en) * 2017-08-25 2020-11-20 百度在线网络技术(北京)有限公司 Living body detection method, living body detection device, living body detection equipment and computer storage medium
US11147474B2 (en) 2017-08-25 2021-10-19 Baidu Online Network Technology (Beijing) Co., Ltd. Living body detecting method and apparatus, device and computer storage medium
CN107736874A (en) * 2017-08-25 2018-02-27 百度在线网络技术(北京)有限公司 A kind of method, apparatus of In vivo detection, equipment and computer-readable storage medium
CN107392184A (en) * 2017-08-28 2017-11-24 广东欧珀移动通信有限公司 Recognition of face verification method and device
CN109460697A (en) * 2017-09-06 2019-03-12 原相科技股份有限公司 The auxiliary filter of human face recognition and the starting method of electronic device
CN108875333A (en) * 2017-09-22 2018-11-23 北京旷视科技有限公司 Terminal unlock method, terminal and computer readable storage medium
CN108875333B (en) * 2017-09-22 2023-05-16 北京旷视科技有限公司 Terminal unlocking method, terminal and computer readable storage medium
US11256903B2 (en) 2018-04-12 2022-02-22 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, image processing device, computer readable storage medium and electronic device
CN109325436A (en) * 2018-09-17 2019-02-12 王虹 Face identification system and server
CN110059590A (en) * 2019-03-29 2019-07-26 努比亚技术有限公司 A kind of face living body verification method, device, mobile terminal and readable storage medium storing program for executing
CN110059590B (en) * 2019-03-29 2023-06-30 努比亚技术有限公司 Face living experience authentication method and device, mobile terminal and readable storage medium
CN111160251A (en) * 2019-12-30 2020-05-15 支付宝实验室(新加坡)有限公司 Living body identification method and device
CN111160251B (en) * 2019-12-30 2023-05-02 支付宝实验室(新加坡)有限公司 Living body identification method and device
CN111666835A (en) * 2020-05-20 2020-09-15 广东志远科技有限公司 Face living body detection method and device
CN113313057A (en) * 2021-06-16 2021-08-27 山东省科学院激光研究所 Face living body detection and recognition system

Similar Documents

Publication Publication Date Title
CN105224924A (en) Living body faces recognition methods and device
CN105426867B (en) Recognition of face verification method and device
CN105117086B (en) Fingerprint recognition system, the implementation method of fingerprint recognition and device, electronic equipment
CN104850828B (en) Character recognition method and device
CN105488527A (en) Image classification method and apparatus
CN106295566A (en) Facial expression recognizing method and device
CN106295511B (en) Face tracking method and device
CN106572299A (en) Camera switching-on method and device
CN106951884A (en) Gather method, device and the electronic equipment of fingerprint
WO2019024717A1 (en) Anti-counterfeiting processing method and related product
CN106503617A (en) Model training method and device
CN104408402A (en) Face identification method and apparatus
CN105046231A (en) Face detection method and device
CN105550637A (en) Contour point positioning method and contour point positioning device
CN105095881A (en) Method, apparatus and terminal for face identification
CN106295515A (en) Determine the method and device of human face region in image
CN107688781A (en) Face identification method and device
CN103886284B (en) Character attribute information identifying method, device and electronic equipment
CN104461014A (en) Screen unlocking method and device
CN104933419A (en) Method and device for obtaining iris images and iris identification equipment
CN105069426A (en) Similar picture determining method and apparatus
CN104408404A (en) Face identification method and apparatus
CN106295530A (en) Face identification method and device
CN104537380A (en) Clustering method and device
CN105426485A (en) Image combination method and device, intelligent terminal and server

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160106