CN109508694A - Face recognition method and recognition device - Google Patents
Face recognition method and recognition device
- Publication number
- CN109508694A (application CN201811504957.4A)
- Authority
- CN
- China
- Prior art keywords
- target image
- classification
- image
- convolutional neural network
- category
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the present invention provides a face recognition method and recognition device. The method includes: obtaining a target image containing face information; inputting the target image into a pre-trained first classification convolutional neural network to obtain the first category corresponding to the target image; when the first category of the target image is the non-deception category, inputting the face region of the target image into a pre-trained face recognition convolutional neural network to obtain a target feature vector; when the similarity between the target feature vector and a pre-stored face feature vector is greater than a preset threshold, inputting the characteristic information extracted from the target image into a pre-trained second classification convolutional neural network to obtain the second category corresponding to the target image; and when the second category of the target image is the living-body category, generating a recognition result for the face in the target image. The embodiment of the present invention can improve the accuracy of face recognition.
Description
Technical field
The present invention relates to the technical field of face recognition, and in particular to a face recognition method and recognition device.
Background technique
Face recognition technology is a computer technique that performs identity recognition by analysing visual feature information of the face, and is widely used in fields such as security and finance, for example to open access-control gates, log in to accounts, or make payments. During face recognition, in order to prevent attacks with forged faces, for example deception with photos, screen images, or masks, liveness detection must be performed, that is, it must be judged whether the captured face is a real face or a forged one.
When existing face recognition methods perform liveness detection, they rely on the fact that the characteristic information of living faces differs significantly from that of non-living faces, and therefore typically detect based on characteristic information extracted from the face image. Specifically, the characteristic information extracted from the captured face image is input into a liveness detection model to obtain the liveness detection result. The liveness detection model is usually trained on the characteristic information in a training data set.
However, in the course of realising the present invention the inventors found that the prior art has at least the following problem: because the characteristic information in the training data set is limited, it cannot cover all possible characteristic information. During liveness detection, when a deceiver uses a photo or a screen image, the extracted characteristic information is affected by factors such as ambient lighting and image resolution at capture time, falls outside the detection range of the liveness detection model, and is hard to detect. Existing face recognition methods based purely on characteristic information therefore suffer from low recognition accuracy.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a face recognition method and recognition device so as to improve the accuracy of face recognition. The specific technical solution is as follows:
In a first aspect, an embodiment of the present invention provides a face recognition method, the method comprising:
obtaining a target image containing face information;
inputting the target image into a pre-trained first classification convolutional neural network to obtain the first category corresponding to the target image, the first category being the deception category or the non-deception category, wherein the first classification convolutional neural network is trained on first sample images and their corresponding first categories, and first sample images of the deception category contain a preset behaviour action;
when the first category of the target image is the non-deception category, inputting the face region of the target image into a pre-trained face recognition convolutional neural network to obtain a target feature vector, the target feature vector being the feature vector corresponding to the face region of the target image, wherein the face recognition convolutional neural network is trained on sample face regions and their corresponding feature vectors;
when the similarity between the target feature vector and a pre-stored face feature vector is greater than a preset threshold, inputting the characteristic information extracted from the target image into a pre-trained second classification convolutional neural network to obtain the second category corresponding to the target image, the second category being the living-body category or the non-living-body category, wherein the second classification convolutional neural network is trained on the characteristic information of second sample images and their corresponding second categories;
when the second category of the target image is the living-body category, generating a recognition result for the face in the target image, the recognition result containing the identity information corresponding to the face in the target image.
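The claimed four-stage flow can be sketched in plain Python. Everything here is an illustrative stand-in, since the patent fixes no concrete implementation: the classifier, detector, and embedder are passed in as callables, the category labels are hypothetical strings, cosine similarity is an assumed choice of similarity measure, and the returned identity dictionary is a placeholder.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (assumed measure)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recognize_face(image, spoof_clf, face_detector, embedder,
                   extract_features, liveness_clf, enrolled_vec,
                   threshold=0.9):
    """Hypothetical sketch of the claimed four-stage pipeline."""
    # Stage 1: first classification network -- deception vs. non-deception.
    if spoof_clf(image) == "deception":
        return None  # preset spoofing behaviour detected, reject
    # Stage 2: detect the face region and embed it into a feature vector.
    target_vec = embedder(face_detector(image))
    # Stage 3: compare with the pre-stored face feature vector.
    if cosine_similarity(target_vec, enrolled_vec) <= threshold:
        return None  # not similar enough to the enrolled user, reject
    # Stage 4: second classification network -- liveness on extracted features.
    if liveness_clf(extract_features(image)) != "living":
        return None  # non-living face, reject
    return {"identity": "enrolled-user"}  # placeholder identity information
```

Note the ordering the claims imply: the cheap deception check runs first, so a photo or screen attack is rejected before the more expensive embedding, matching, and liveness stages are reached.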
Optionally, before inputting the face region of the target image into the pre-trained face recognition convolutional neural network to obtain the target feature vector, the method further comprises: inputting the target image into a pre-trained face region detection convolutional neural network to obtain the face region of the target image, the face region detection convolutional neural network being trained on third sample images and their corresponding face regions.
Optionally, before the step of inputting the characteristic information extracted from the target image into the pre-trained second classification convolutional neural network to obtain the second category corresponding to the target image, the method further comprises: extracting the characteristic information in the target image, the characteristic information comprising texture feature information or optical flow feature information.
Optionally, when the second category of the target image is the non-living-body category, the method further comprises: sending face recognition failure information to the user's mobile device.
Optionally, after obtaining the target image containing face information, the method further comprises: converting the colour space of the target image to the colour space used by the first classification convolutional neural network.
Optionally, the training process of the first classification convolutional neural network comprises: obtaining first sample images and the first category corresponding to each first sample image, wherein the first sample images comprise positive-example first sample images and negative-example first sample images; a positive-example first sample image is an image of a person holding a photo or a display-capable electronic device, such as a mobile phone or tablet computer, shot of different people against different backgrounds; a negative-example first sample image is a face image of a different person against a different background; and inputting the positive-example first sample images with their corresponding first categories, and the negative-example first sample images with their corresponding first categories, into an initial first classification convolutional neural network, and training it to obtain the first classification convolutional neural network.
In a second aspect, an embodiment of the present invention further provides a face recognition device, the device comprising:
a first obtaining module, configured to obtain a target image containing face information;
a first processing module, configured to input the target image into a pre-trained first classification convolutional neural network to obtain the first category corresponding to the target image, the first category being the deception category or the non-deception category, wherein the first classification convolutional neural network is trained on first sample images and their corresponding first categories, and first sample images of the deception category contain a preset behaviour action;
a second processing module, configured to, when the first category of the target image is the non-deception category, input the face region of the target image into a pre-trained face recognition convolutional neural network to obtain a target feature vector, the target feature vector being the feature vector corresponding to the face region of the target image, wherein the face recognition convolutional neural network is trained on sample face regions and their corresponding feature vectors;
a third processing module, configured to, when the similarity between the target feature vector and a pre-stored face feature vector is greater than a preset threshold, input the characteristic information extracted from the target image into a pre-trained second classification convolutional neural network to obtain the second category corresponding to the target image, the second category being the living-body category or the non-living-body category, wherein the second classification convolutional neural network is trained on the characteristic information of second sample images and their corresponding second categories;
a generation module, configured to, when the second category of the target image is the living-body category, generate a recognition result for the face in the target image, the recognition result containing the identity information corresponding to the face in the target image.
Optionally, the first obtaining module is specifically configured to: when the first category of the target image is the deception category, obtain a frame following the target image and take that following frame as the new target image.
Optionally, the device further comprises: a fourth processing module, configured to input the target image into a pre-trained face region detection convolutional neural network to obtain the face region of the target image, the face region detection convolutional neural network being trained on third sample images and their corresponding face regions.
Optionally, the device further comprises: an extraction module, configured to extract the characteristic information in the target image, the characteristic information comprising texture feature information or optical flow feature information.
Optionally, the device further comprises: a sending module, configured to send face recognition failure information to the user's mobile device when the second category of the target image is the non-living-body category.
Optionally, the device further comprises: a conversion module, configured to convert the colour space of the target image to the colour space used by the first classification convolutional neural network.
Optionally, the device further comprises: a second obtaining module, configured to obtain first sample images and the first category corresponding to each first sample image, wherein the first sample images comprise positive-example first sample images and negative-example first sample images; a positive-example first sample image is an image of a person holding a photo or a display-capable electronic device, such as a mobile phone or tablet computer, shot of different people against different backgrounds; a negative-example first sample image is a face image of a different person against a different background; and a training module, configured to input the positive-example first sample images with their corresponding first categories, and the negative-example first sample images with their corresponding first categories, into an initial first classification convolutional neural network and train it to obtain the first classification convolutional neural network.
In a third aspect, an embodiment of the present invention further provides a server comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus; the memory is configured to store a computer program; and the processor, when executing the program stored in the memory, implements the method steps of any of the face recognition methods of the first aspect.
In another aspect of the present invention, a computer-readable storage medium is further provided, the computer-readable storage medium storing instructions that, when run on a computer, cause the computer to execute any of the face recognition methods described above.
In yet another aspect of the present invention, an embodiment further provides a computer program product containing instructions that, when run on a computer, cause the computer to execute any of the face recognition methods described above.
With the face recognition method and recognition device provided by the embodiments of the present invention, after the target image containing face information is obtained, the target image is input into a pre-trained first classification convolutional neural network to determine its first category, that is, to determine whether the target image belongs to the deception category or the non-deception category. Only when the target image belongs to the non-deception category is further face recognition processing performed, which prevents a deceiver from passing face recognition with a photo or screen image. Compared with existing methods that detect using characteristic information alone, this improves the accuracy of face recognition. Of course, implementing any product or method of the present invention does not necessarily require achieving all of the above advantages at the same time.
Detailed description of the invention
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below.
Fig. 1 is a schematic flowchart of a face recognition method provided by an embodiment of the present invention;
Fig. 2 is a second schematic flowchart of a face recognition method provided by an embodiment of the present invention;
Fig. 3 is a third schematic flowchart of a face recognition method provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of training the first classification convolutional neural network in an embodiment of the present invention;
Fig. 5 is a structural schematic diagram of a face recognition device provided by an embodiment of the present invention;
Fig. 6 is a second structural schematic diagram of a face recognition device provided by an embodiment of the present invention;
Fig. 7 is a third structural schematic diagram of a face recognition device provided by an embodiment of the present invention;
Fig. 8 is a fourth structural schematic diagram of a face recognition device provided by an embodiment of the present invention;
Fig. 9 is a structural schematic diagram of a server provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below with reference to the drawings in the embodiments of the present invention.
In existing face recognition methods, when liveness detection is performed and a deceiver uses a photo or a screen image, the extracted characteristic information is affected by factors such as ambient lighting and image resolution at capture time, so that it falls outside the detection range of the liveness detection model and is hard to detect. This reduces the accuracy of liveness detection and, in turn, the accuracy of the face recognition process.
In view of this, an embodiment of the present invention provides a face recognition method and recognition device. After the target image containing face information is obtained, the target image is input into a pre-trained first classification convolutional neural network to determine its first category, that is, whether the target image belongs to the deception category or the non-deception category. Even if the deceiver's characteristic information falls outside the detection range of the liveness detection model, the image can first be identified as the deception category and excluded. Only when the target image belongs to the non-deception category is further face recognition processing performed, which prevents a deceiver from passing face recognition with a photo or screen image. Compared with existing methods that detect using characteristic information alone, this improves the accuracy of face recognition.
The face recognition method provided by an embodiment of the present invention is explained first. An embodiment of the present invention provides a face recognition method which, as shown in Fig. 1, comprises:
S110: obtain a target image containing face information.
In the embodiment of the present invention, an image shot by a mobile device can be obtained and taken as the target image. A concrete application scenario is: when face recognition is needed, the user shoots with a camera-equipped mobile device, and the mobile device captures an image containing face information and transmits it to the server, so that the server obtains the target image. Image capture can be a continuous process, so after one image containing face information is captured, the mobile device can capture further images as needed. The server performs face recognition on each captured image, which serves as the target image.
S120: input the target image into a pre-trained first classification convolutional neural network to obtain the first category corresponding to the target image.
In the embodiment of the present invention, in order to improve the accuracy of face recognition, the server can train the first classification convolutional neural network in advance on a certain number of first sample images, for example 500, 1000, or 10000, and their corresponding first categories. With this network, when the target image is input, the first classification convolutional neural network outputs the first category corresponding to the target image, namely the deception category or the non-deception category.
A target image judged to belong to the deception category may contain a preset behaviour action, for example the action of holding a photo or an electronic device such as a mobile phone or tablet computer, or the action of lifting such a photo or device with the hand; that is, the preset behaviour action is an action showing that the user is attempting to deceive with something other than their own real face. A target image judged to belong to the non-deception category contains only the user's face; that is, a target image of the non-deception category does not contain a preset behaviour action of attempting to deceive with a non-real face.
Because the embodiment of the present invention can obtain the first category of the target image, that is, the target image is first classified according to whether it shows deceptive behaviour, a deceiver can be prevented from passing face recognition with a photo or screen image.
It should be noted that the first classification convolutional neural network can be obtained by modifying the output of the last fully connected layer of an existing open-source image convolutional neural network model to 2 classes, for example VGG (a neural network model proposed by the Visual Geometry Group at the University of Oxford), ResNet (a neural network model proposed by Kaiming He's team at Microsoft Research Asia), DenseNet (Dense Convolutional Network, a densely connected neural network model), NASNet (a neural network model built by neural architecture search), or MobileNet (a lightweight neural network model aimed at embedded devices).
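Under the assumption that such a backbone acts as a frozen feature extractor, replacing its last fully connected layer with a fresh 2-way (deception / non-deception) head can be sketched without any deep-learning framework. The class name, feature dimension, random initialisation, and label strings below are all illustrative.

```python
import math
import random

class TwoClassHead:
    """Illustrative sketch: a pretrained backbone with its final fully
    connected layer replaced by a freshly initialised 2-class head."""

    def __init__(self, backbone, feat_dim, seed=0):
        self.backbone = backbone  # frozen pretrained feature extractor
        rnd = random.Random(seed)
        # new 2-class weights (feat_dim x 2) and biases, randomly initialised
        self.w = [[rnd.uniform(-0.1, 0.1) for _ in range(2)]
                  for _ in range(feat_dim)]
        self.b = [0.0, 0.0]

    def forward(self, image):
        feats = self.backbone(image)  # e.g. VGG/ResNet penultimate features
        logits = [sum(f * self.w[i][j] for i, f in enumerate(feats)) + self.b[j]
                  for j in range(2)]
        # numerically stable softmax over the two classes
        exps = [math.exp(l - max(logits)) for l in logits]
        total = sum(exps)
        probs = [e / total for e in exps]
        return "deception" if probs[0] > probs[1] else "non-deception"
```

In a real fine-tuning setup only the new head's weights (and possibly the top backbone layers) would be trained on the first sample images; the sketch only shows the structural change the passage describes.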
S130: when the first category of the target image is the non-deception category, input the face region of the target image into a pre-trained face recognition convolutional neural network to obtain the target feature vector.
In the embodiment of the present invention, in order to improve the accuracy of face recognition, the server can train the face recognition convolutional neural network in advance on a certain number of sample face regions, for example 500, 1000, or 10000, and their corresponding feature vectors. With this network, when the face region corresponding to the target image is input, the network outputs the corresponding target feature vector. The target feature vector is the feature vector corresponding to the face region of the target image and carries the characteristic information of that face region. Therefore, during face recognition, after the server obtains the face region of the target image, it can input that face region into the pre-trained face recognition convolutional neural network to obtain the target feature vector.
S140: when the similarity between the target feature vector and the pre-stored face feature vector is greater than the preset threshold, input the characteristic information extracted from the target image into a pre-trained second classification convolutional neural network to obtain the second category corresponding to the target image.
In the embodiment of the present invention, the feature vector of an image of the user's face region can be saved in a database in advance, i.e. the pre-stored face feature vector, which can be saved when the user registers an account. Since a feature vector is by nature a vector, after the target feature vector corresponding to the face region of the target image is obtained, it can be matched against the pre-stored face feature vector to obtain the similarity between the two vectors, and the similarity is then compared with the preset threshold. The preset threshold can be set, for example, to 0.8, 0.9, or 0.99. As the preset threshold increases, a higher similarity between the two feature vectors is required before they are considered the same face, but the false rejection rate also rises. Developers can therefore set the preset threshold according to actual business requirements while keeping the error rate reasonable.
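The threshold comparison just described might look as follows. The patent does not fix a similarity measure, so cosine similarity is used here purely as a common, hypothetical choice, and the function name and default threshold are assumptions.

```python
import math

def is_same_face(target_vec, enrolled_vec, threshold=0.9):
    """Compare the target feature vector with the pre-stored face feature
    vector; cosine similarity is an illustrative choice of measure."""
    dot = sum(a * b for a, b in zip(target_vec, enrolled_vec))
    norm = (math.sqrt(sum(a * a for a in target_vec))
            * math.sqrt(sum(b * b for b in enrolled_vec)))
    similarity = dot / norm
    # a higher threshold demands a closer match but rejects more true users
    return similarity > threshold
```

Raising `threshold` toward 0.99 tightens the match requirement, which is exactly the accuracy/false-rejection trade-off the paragraph above describes.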
In the embodiment of the present invention, the server can train the second classification convolutional neural network in advance on a certain number of second sample images, for example 500, 1000, or 10000, and their corresponding second categories. With this network, when characteristic information is input, the second classification convolutional neural network outputs the second category corresponding to the target image, namely the living-body category or the non-living-body category. The characteristic information may include texture feature information or optical flow feature information. It should be noted that the characteristic information in a face image can be extracted by existing feature extraction methods, which are not described in detail here.
Therefore, in the embodiment of the present invention, during liveness detection, after the characteristic information has been extracted from the target image, the server can input it into the pre-trained second classification convolutional neural network to obtain the second category corresponding to the target image.
S150: when the second category of the target image is the living-body category, generate the recognition result for the face in the target image.
In the embodiment of the present invention, when the second category of the target image is the living-body category, the face corresponding to the target image is a living body, and it can be determined that the target image was uploaded by a legitimate user, so face recognition succeeds. The server can therefore generate the recognition result for the face in the target image, for example the identity information corresponding to the face, including name, gender, and similar information, and can then return the recognition result to the mobile device for use by the application program running on it.
With the face recognition method provided by this embodiment of the present invention, after the target image containing face information is obtained, the target image is input into a pre-trained first classification convolutional neural network to determine its first category, that is, whether the target image belongs to the deception category or the non-deception category. Only when the target image belongs to the non-deception category is further face recognition processing performed, which prevents a deceiver from passing face recognition with a photo or screen image. Compared with existing methods that detect using characteristic information alone, this improves the accuracy of face recognition.
An embodiment of the present invention further provides a face recognition method which, as shown in Fig. 2, comprises:
S210: obtain a target image containing face information.
This step is identical to step S110 in the embodiment shown in Fig. 1 and is not described again here.
S220: convert the colour space of the target image to the colour space used by the first classification convolutional neural network.
If the colour space of the target image is inconsistent with the colour space used by the first classification convolutional neural network, the colour space of the target image can be converted to that of the first classification convolutional neural network. For example, if the colour space of the target image is the LUV colour space and the first classification convolutional neural network uses the RGB colour space, the target image can be converted from LUV to RGB so that the first classification convolutional neural network can correctly identify it.
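A minimal sketch of this conversion step, assuming the actual conversion routines (for example an LUV-to-RGB routine from an imaging library) are supplied in a lookup table; the function and parameter names here are hypothetical, and no real conversion maths is implemented.

```python
def ensure_color_space(image, image_space, network_space, converters):
    """Convert `image` to the colour space the first classification network
    expects. `converters` maps (src, dst) pairs to conversion callables
    supplied by the caller (e.g. a library's LUV->RGB routine)."""
    if image_space == network_space:
        return image  # already consistent, nothing to do
    try:
        return converters[(image_space, network_space)](image)
    except KeyError:
        raise ValueError(
            f"no converter from {image_space} to {network_space}")
```

The point of the dispatch-table design is that the recognition pipeline stays agnostic about which colour spaces exist; adding support for a new camera format only means registering one more converter.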
S230: input the target image into the pre-trained first classification convolutional neural network to obtain the first category corresponding to the target image. This step is identical to step S120 in the embodiment shown in Fig. 1 and is not described again here.
S240: input the target image into a pre-trained face region detection convolutional neural network to obtain the face region of the target image.
In the embodiment of the present invention, in order to obtain the face region of the target image accurately, the server can train the face region detection convolutional neural network in advance on a certain number of third sample images, for example 500, 1000, or 10000, and their corresponding face regions. With this network, when the target image is input, the face region detection convolutional neural network outputs the face region corresponding to the target image, for example the coordinate range information of the region containing the face. Of course, implementations other than the example shown that realise this feature also fall within the protection scope of the embodiments of the present invention.
S250: when the first category of the target image is the non-deception category, input the face region of the target image into the pre-trained face recognition convolutional neural network to obtain the target feature vector. This step is identical to step S130 in the embodiment shown in Fig. 1 and is not described again here.
S260: extract the feature information in the target image.
Because the feature information carries the image features of the target image, the feature information in the target image, for example texture feature information or optical-flow feature information, can be extracted. The feature information of the target image can be extracted by an existing feature extraction method, which is not repeated in the embodiments of the present invention.
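One classic "existing feature extraction method" for texture information in spoof detection is the local binary pattern (LBP). The patent does not name a specific method, so the following is only an illustrative sketch of the basic LBP code for one pixel; real pipelines histogram these codes over the whole image.

```python
def lbp_code(patch):
    """Local binary pattern code of the centre pixel of a 3x3 patch.

    Each of the 8 neighbours, visited clockwise from the top-left,
    contributes one bit: 1 if its intensity is >= the centre value.
    Histograms of these codes form a classic texture feature used to
    tell live skin from printed photos or screens.
    """
    centre = patch[1][1]
    # clockwise neighbour coordinates starting at the top-left corner
    ring = [(0, 0), (0, 1), (0, 2), (1, 2),
            (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(ring):
        if patch[r][c] >= centre:
            code |= 1 << bit
    return code
```

A flat (uniform) patch yields the all-ones code, while a bright centre surrounded by darker pixels yields zero; the distribution of such codes over a face region is what a texture-based classifier would consume.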
S270: when the similarity between the target feature vector and a preset face feature vector is greater than a preset threshold, input the feature information extracted from the target image into a second classification convolutional neural network trained in advance, and obtain the second category corresponding to the target image.
This step is identical to step S140 in the embodiment illustrated in Fig. 1 and is not repeated here.
S280: when the second category of the target image is the living-body category, generate the recognition result of the face in the target image.
This step is identical to step S150 in the embodiment illustrated in Fig. 1 and is not repeated here.
In the face recognition method provided by this embodiment of the present invention, the color space of the target image can be converted to a color space consistent with the first classification convolutional neural network, so that the first classification convolutional neural network can correctly recognize the target image. Moreover, by means of the face-region detection convolutional neural network, the face region of the target image can be obtained accurately, providing accurate input for the subsequent target-feature-vector acquisition process and thereby improving the accuracy of the entire face recognition process.
An embodiment of the present invention further provides a face recognition method. As shown in Fig. 3, the method comprises:
S310: obtain a target image containing face information.
This step is identical to step S110 in the embodiment illustrated in Fig. 1 and is not repeated here.
S320: input the target image into the first classification convolutional neural network trained in advance, and obtain the first category corresponding to the target image.
This step is identical to step S120 in the embodiment illustrated in Fig. 1 and is not repeated here.
S330: when the first category of the target image is the non-deception category, input the face region of the target image into the face recognition convolutional neural network trained in advance, and obtain the target feature vector.
This step is identical to step S130 in the embodiment illustrated in Fig. 1 and is not repeated here.
S340: when the first category of the target image is the deception category, obtain the frame following the target image, and take that following frame as a new target image.
From the foregoing it can be seen that image acquisition can be a continuous process; therefore the frame following the collected target image can be taken as a new target image, which is then input into the first classification convolutional neural network trained in advance to obtain the first category corresponding to the new target image. That is, the classification process for the first category, step S320, is executed again.
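Steps S320 and S340 together form a retry loop over the frame stream: classify the current frame, and on the deception category advance to the next frame and classify again. A minimal sketch of that control flow, where `classify` stands in for the first classification network and the frame budget is an assumption of mine (the patent does not bound the loop):

```python
def classify_until_not_deceptive(frames, classify, max_frames=10):
    """Walk a frame stream, re-running first-category classification.

    `frames` is an iterator of captured images; `classify` maps an image
    to "deception" or "non-deception" (a stand-in for the first
    classification CNN).  Returns the first frame judged non-deceptive,
    or None once the frame budget is exhausted.
    """
    for _, frame in zip(range(max_frames), frames):
        if classify(frame) == "non-deception":
            return frame  # proceed to the face-region / feature steps
        # deception category: take the following frame as the new target
    return None
```

Bounding the loop avoids spinning forever on a sustained photo or screen attack while still tolerating a transient false positive on a single frame.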
S350: judge whether the similarity between the target feature vector and the preset face feature vector is greater than the preset threshold.
In an embodiment of the present invention, the target feature vector can be matched against the preset face feature vector to obtain the similarity between the two vectors; the similarity is then compared with the preset threshold to judge which is larger.
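The patent does not fix a similarity measure, so as one common choice the comparison in S350 can be sketched with cosine similarity; the function names and the 0.8 default threshold are illustrative assumptions only.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def passes_threshold(target_vec, enrolled_vec, threshold=0.8):
    """True when the vectors are similar enough to continue the pipeline."""
    return cosine_similarity(target_vec, enrolled_vec) > threshold
```

When `passes_threshold` is false, the flow returns to S310 and reacquires a target image; when true, it continues to the second (liveness) classification.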
S360: when the similarity between the target feature vector and the preset face feature vector is greater than the preset threshold, input the feature information extracted from the target image into the second classification convolutional neural network trained in advance, and obtain the second category corresponding to the target image.
This step is identical to step S140 in the embodiment illustrated in Fig. 1 and is not repeated here.
When the similarity between the target feature vector and the preset face feature vector is not greater than the preset threshold, step S310 is executed.
In an embodiment of the present invention, when the similarity between the target feature vector and the preset face feature vector is not greater than the preset threshold, that is, less than or equal to the preset threshold, the degree of similarity between the target feature vector and the preset face feature vector is low; the target image can therefore be reacquired, and the classification process for the first category executed again.
S370: when the second category of the target image is the living-body category, generate the recognition result of the face in the target image.
This step is identical to step S150 in the embodiment illustrated in Fig. 1 and is not repeated here.
S380: when the second category of the target image is the non-living-body category, send face recognition failure information to the mobile terminal device of the user.
When the second category of the target image is the non-living-body category, the face recognition of the user has not passed; although the user has not cheated with means such as a photo, the user may have cheated with a mask. The server can therefore send face recognition failure information to the mobile terminal device of the user, or require the user to re-shoot an image containing a face.
In an embodiment of the present invention, the server can train the first classification convolutional neural network in advance. Specifically, as shown in Fig. 4, the face recognition method of this embodiment of the present invention may comprise the following steps:
S410: obtain first sample images, and obtain the first category corresponding to each first sample image.
In an embodiment of the present invention, when training the first classification convolutional neural network, the server can first obtain the first sample images, for example as many first sample images as possible; the first sample images include positive-example first sample images and negative-example first sample images.
The positive-example first sample images are images shot while different people, against different backgrounds, hold a photo or an electronic device having a display function. Illustratively, the electronic device can be a mobile phone, a tablet computer, a video player, or the like. The negative-example first sample images are images of different faces against different backgrounds, or simple background images.
S420: input the positive-example first sample images and the first category corresponding to each positive-example first sample image into an initial first classification convolutional neural network, input the negative-example first sample images and the first category corresponding to each negative-example first sample image into the initial first classification convolutional neural network, and train to obtain the first classification convolutional neural network.
After obtaining the first category corresponding to each first sample image, the server can take each first sample image with its corresponding first category as a training sample and train to obtain the first classification convolutional neural network.
Specifically, the server can take each positive-example first sample image with the corresponding deception category as a training sample, take each negative-example first sample image with the corresponding non-deception category as a training sample, and train to obtain the first classification convolutional neural network.
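The labelling convention in S410/S420 can be sketched as assembling (image, label) pairs; the function and label strings below are illustrative assumptions, and the actual network training is left to "the prior art" as the text says.

```python
def build_training_set(positive_images, negative_images):
    """Pair each first sample image with its first-category label.

    Positive examples (people holding photos or display devices) get the
    "deception" label; negative examples (plain faces or backgrounds)
    get "non-deception".  The resulting (image, label) list is what an
    initial classification network would be trained on.
    """
    samples = [(img, "deception") for img in positive_images]
    samples += [(img, "non-deception") for img in negative_images]
    return samples
```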
It should be noted that, in the embodiments of the present invention, the training process of the first classification convolutional neural network can use the prior art, and this process is not repeated here.
In the face recognition method provided by this embodiment of the present invention, when the first category of the target image is the deception category, the server can take the frame following the collected target image as a new target image and execute the classification process for the first category again, avoiding false detections caused by relying on a single target image and improving the user experience.
An embodiment of the present invention further provides a face recognition device corresponding to the method flow shown in Fig. 1. As shown in Fig. 5, the device comprises:
a first obtaining module 501, configured to obtain a target image containing face information;
a first processing module 502, configured to input the target image into a first classification convolutional neural network trained in advance and obtain the first category corresponding to the target image, the first category being the deception category or the non-deception category, wherein the first classification convolutional neural network is trained from first sample images and the corresponding first categories, a first sample image of the deception category contains a preset behavior action, and the preset behavior action at least includes the action of holding a photo or an electronic device having a display function;
a second processing module 503, configured to, when the first category of the target image is the non-deception category, input the face region of the target image into a face recognition convolutional neural network trained in advance and obtain a target feature vector, the target feature vector being the feature vector corresponding to the face region of the target image, wherein the face recognition convolutional neural network is trained from sample face regions and the feature vector corresponding to each sample face region;
a third processing module 504, configured to, when the similarity between the target feature vector and a preset face feature vector is greater than a preset threshold, input the feature information extracted from the target image into a second classification convolutional neural network trained in advance and obtain the second category corresponding to the target image, the second category being the living-body category or the non-living-body category, wherein the second classification convolutional neural network is trained from the feature information of second sample images and the second category corresponding to each second sample image; and
a generation module 505, configured to, when the second category of the target image is the living-body category, generate the recognition result of the face in the target image, the face recognition result including the identity information corresponding to the face in the target image.
After obtaining a target image containing face information, the face recognition device provided by this embodiment of the present invention inputs the target image into the first classification convolutional neural network trained in advance, thereby determining the first category of the target image, that is, determining whether the target image belongs to the deception category or the non-deception category, and performs further face recognition processing on the target image only when it belongs to the non-deception category. A cheater passing face recognition by using a photo or screen image can therefore be avoided, and compared with existing methods that detect with feature information alone, the accuracy of face recognition can be improved.
An embodiment of the present invention further provides a face recognition device corresponding to the method flow shown in Fig. 2. On the basis of the device structure shown in Fig. 5, as shown in Fig. 6, the device further comprises:
a conversion module 601, configured to convert the color space of the target image to a color space consistent with the first classification convolutional neural network;
a fourth processing module 602, configured to input the target image into a face-region detection convolutional neural network trained in advance and obtain the face region of the target image, wherein the face-region detection convolutional neural network is trained from third sample images and the face region corresponding to each third sample image; and
an extraction module 603, configured to extract the feature information in the target image, the feature information including texture feature information or optical-flow feature information.
In the face recognition device provided by this embodiment of the present invention, the color space of the target image can be converted to a color space consistent with the first classification convolutional neural network, so that the first classification convolutional neural network can correctly recognize the target image. Moreover, by means of the face-region detection convolutional neural network, the face region of the target image can be obtained accurately, providing accurate input for the subsequent target-feature-vector acquisition process and thereby improving the accuracy of the entire face recognition process.
An embodiment of the present invention further provides a face recognition device corresponding to the method flow shown in Fig. 3. On the basis of the device structure shown in Fig. 5, as shown in Fig. 7, the device further comprises:
a first trigger module 701, configured to trigger the first processing module to execute the step of inputting the target image into the first classification convolutional neural network trained in advance and obtaining the first category corresponding to the target image;
a judgment module 702, configured to judge whether the similarity between the target feature vector and the preset face feature vector is greater than the preset threshold;
a second trigger module 703, configured to trigger the first obtaining module to execute the step of obtaining a target image containing face information; and
a sending module 704, configured to, when the second category of the target image is the non-living-body category, send face recognition failure information to the mobile terminal device of the user.
The first obtaining module 501 is specifically configured to, when the first category of the target image is the deception category, obtain the frame following the target image and take that following frame as a new target image.
As an optional implementation of the embodiments of the present invention, as shown in Fig. 8, the device further comprises:
a second obtaining module 801, configured to obtain first sample images and obtain the first category corresponding to each first sample image, wherein the first sample images include positive-example first sample images and negative-example first sample images, the positive-example first sample images are images shot while different people, against different backgrounds, hold a photo or an electronic device having a display function, the electronic device includes a mobile phone and a tablet computer, and the negative-example first sample images are images of different faces against different backgrounds; and
a training module 802, configured to input the positive-example first sample images and the first category corresponding to each positive-example first sample image into an initial first classification convolutional neural network, input the negative-example first sample images and the first category corresponding to each negative-example first sample image into the initial first classification convolutional neural network, and train to obtain the first classification convolutional neural network.
In the face recognition device provided by this embodiment of the present invention, when the first category of the target image is the deception category, the server can take the frame following the collected target image as a new target image and execute the classification process for the first category again, avoiding false detections caused by relying on a single target image and improving the user experience.
An embodiment of the present invention further provides a server. As shown in Fig. 9, the server includes a processor 901, a communication interface 902, a memory 903 and a communication bus 904, wherein the processor 901, the communication interface 902 and the memory 903 communicate with one another through the communication bus 904.
The memory 903 is configured to store a computer program.
The processor 901 is configured to, when executing the program stored in the memory 903, implement the following steps:
obtaining a target image containing face information;
inputting the target image into a first classification convolutional neural network trained in advance, and obtaining the first category corresponding to the target image, the first category being the deception category or the non-deception category, wherein the first classification convolutional neural network is trained from first sample images and the corresponding first categories, a first sample image of the deception category contains a preset behavior action, and the preset behavior action at least includes the action of holding a photo or an electronic device having a display function;
when the first category of the target image is the non-deception category, inputting the face region of the target image into a face recognition convolutional neural network trained in advance and obtaining a target feature vector, the target feature vector being the feature vector corresponding to the face region of the target image, wherein the face recognition convolutional neural network is trained from sample face regions and the feature vector corresponding to each sample face region;
when the similarity between the target feature vector and a preset face feature vector is greater than a preset threshold, inputting the feature information extracted from the target image into a second classification convolutional neural network trained in advance and obtaining the second category corresponding to the target image, the second category being the living-body category or the non-living-body category, wherein the second classification convolutional neural network is trained from the feature information of second sample images and the second category corresponding to each second sample image; and
when the second category of the target image is the living-body category, generating the recognition result of the face in the target image, the face recognition result including the identity information corresponding to the face in the target image.
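The processor steps above can be sketched end to end as one function. Every argument after `image` is a stand-in callable for one of the trained networks or helpers described in the text; none of the names, the threshold default, or the return values come from the patent itself.

```python
def recognize_face(image, first_cnn, detect_region, face_cnn,
                   enrolled_vec, similarity, extract_features,
                   second_cnn, threshold=0.8):
    """End-to-end sketch of the server's processing steps.

    Returns an identity stand-in string on success, otherwise None:
    first the spoof check, then face-region feature matching, then the
    liveness check, mirroring the step order in the text.
    """
    if first_cnn(image) == "deception":
        return None                 # photo / screen attack suspected
    region = detect_region(image)
    target_vec = face_cnn(region)
    if similarity(target_vec, enrolled_vec) <= threshold:
        return None                 # not similar enough to the enrolled face
    if second_cnn(extract_features(image)) != "living":
        return None                 # mask or other non-living presentation
    return "recognized"             # stand-in for the identity information
```

The ordering matters: the cheap spoof check gates the more expensive feature extraction and liveness classification, which is the structure the method claims describe.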
The communication bus mentioned for the above server may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in the figure, which does not mean there is only one bus or only one type of bus.
The communication interface is used for communication between the above server and other devices.
The memory may include a random access memory (RAM) and may also include a non-volatile memory, for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In another embodiment provided by the present invention, a computer-readable storage medium is further provided. Instructions are stored in the computer-readable storage medium, and when the instructions run on a computer, the computer executes any of the face recognition methods in the above embodiments.
In another embodiment provided by the present invention, a computer program product containing instructions is further provided, and when it runs on a computer, the computer executes any of the face recognition methods in the above embodiments.
The above embodiments may be implemented wholly or partly by software, hardware, firmware or any combination thereof. When implemented in software, they may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one web site, computer, server or data center to another web site, computer, server or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any usable medium that a computer can access, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example a floppy disk, hard disk, or magnetic tape), an optical medium (for example a DVD), a semiconductor medium (such as a solid-state disk (SSD)), or the like.
It should be noted that, in this document, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements, but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further restrictions, an element limited by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device including that element.
The embodiments in this specification are described in a related manner; the same and similar parts of the embodiments may refer to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiment is described relatively simply because it is substantially similar to the method embodiment; for the relevant parts, refer to the description of the method embodiment.
The above description covers merely preferred embodiments of the present invention and is not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be contained within the protection scope of the present invention.
Claims (13)
1. A face recognition method, characterized in that the method comprises:
obtaining a target image containing face information;
inputting the target image into a first classification convolutional neural network trained in advance, and obtaining a first category corresponding to the target image; the first category being: a deception category or a non-deception category; wherein the first classification convolutional neural network is trained from first sample images and corresponding first categories; a first sample image of the deception category contains a preset behavior action;
when the first category of the target image is the non-deception category, inputting a face region of the target image into a face recognition convolutional neural network trained in advance, and obtaining a target feature vector; wherein the target feature vector is: the feature vector corresponding to the face region of the target image; the face recognition convolutional neural network is trained from sample face regions and the feature vector corresponding to each sample face region;
when a similarity between the target feature vector and a preset face feature vector is greater than a preset threshold, inputting feature information extracted from the target image into a second classification convolutional neural network trained in advance, and obtaining a second category corresponding to the target image; the second category being: a living-body category or a non-living-body category; the second classification convolutional neural network is trained from feature information of second sample images and the second category corresponding to each second sample image; and
when the second category of the target image is the living-body category, generating a recognition result of the face in the target image, the face recognition result including: identity information corresponding to the face in the target image.
2. The method according to claim 1, characterized in that, before inputting the face region of the target image into the face recognition convolutional neural network trained in advance and obtaining the target feature vector, the method further comprises:
inputting the target image into a face-region detection convolutional neural network trained in advance, and obtaining the face region of the target image; the face-region detection convolutional neural network is trained from third sample images and the face region corresponding to each third sample image.
3. The method according to claim 1, characterized in that, before the step of inputting the feature information extracted from the target image into the second classification convolutional neural network trained in advance and obtaining the second category corresponding to the target image, the method further comprises:
extracting the feature information in the target image, the feature information including: texture feature information or optical-flow feature information.
4. The method according to claim 1, characterized in that the method further comprises:
when the second category of the target image is the non-living-body category, sending face recognition failure information to a mobile terminal device of a user.
5. The method according to claim 1, characterized in that, after obtaining the target image containing face information, the method further comprises:
converting the color space of the target image to a color space consistent with the first classification convolutional neural network.
6. The method according to any one of claims 1-5, characterized in that the training process of the first classification convolutional neural network comprises:
obtaining first sample images, and obtaining the first category corresponding to each first sample image; wherein the first sample images include: positive-example first sample images and negative-example first sample images; the positive-example first sample images are: images shot while different people, against different backgrounds, hold a photo or an electronic device having a display function; the electronic device includes: a mobile phone and a tablet computer; the negative-example first sample images are: images of different faces against different backgrounds; and
inputting the positive-example first sample images and the first category corresponding to each positive-example first sample image into an initial first classification convolutional neural network, inputting the negative-example first sample images and the first category corresponding to each negative-example first sample image into the initial first classification convolutional neural network, and training to obtain the first classification convolutional neural network.
7. A face recognition device, characterized in that the device comprises:
a first obtaining module, configured to obtain a target image containing face information;
a first processing module, configured to input the target image into a first classification convolutional neural network trained in advance and obtain a first category corresponding to the target image; the first category being: a deception category or a non-deception category; wherein the first classification convolutional neural network is trained from first sample images and corresponding first categories; a first sample image of the deception category contains a preset behavior action;
a second processing module, configured to, when the first category of the target image is the non-deception category, input a face region of the target image into a face recognition convolutional neural network trained in advance and obtain a target feature vector; wherein the target feature vector is: the feature vector corresponding to the face region of the target image; the face recognition convolutional neural network is trained from sample face regions and the feature vector corresponding to each sample face region;
a third processing module, configured to, when a similarity between the target feature vector and a preset face feature vector is greater than a preset threshold, input feature information extracted from the target image into a second classification convolutional neural network trained in advance and obtain a second category corresponding to the target image; the second category being: a living-body category or a non-living-body category; the second classification convolutional neural network is trained from the feature information of second sample images and the second category corresponding to each second sample image; and
a generation module, configured to, when the second category of the target image is the living-body category, generate a recognition result of the face in the target image, the face recognition result including: identity information corresponding to the face in the target image.
8. The device according to claim 7, characterized in that the device further comprises:
a fourth processing module, configured to input the target image into a face-region detection convolutional neural network trained in advance and obtain the face region of the target image; the face-region detection convolutional neural network is trained from third sample images and the face region corresponding to each third sample image.
9. The device according to claim 7, wherein the device further comprises:
an extraction module, configured to extract the feature information in the target image, the feature information comprising texture feature information or optical flow feature information.
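Of the two kinds of feature information named in claim 9, texture features are the simpler to illustrate. Below is a minimal local binary pattern (LBP) texture descriptor in NumPy; the patent does not say which texture descriptor the extraction module uses, so LBP is an assumption chosen purely for illustration.

```python
import numpy as np

def lbp_texture(gray: np.ndarray) -> np.ndarray:
    """Basic 3x3 local binary pattern: each interior pixel is encoded by
    thresholding its 8 neighbours against the centre value, one bit per
    neighbour, yielding a code in 0..255."""
    c = gray[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(gray: np.ndarray) -> np.ndarray:
    """256-bin normalised histogram of LBP codes: a fixed-length texture
    feature vector of the kind a classifier could consume."""
    hist = np.bincount(lbp_texture(gray).ravel(), minlength=256)
    return hist / hist.sum()
```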
10. The device according to claim 7, wherein the device further comprises:
a sending module, configured to send face recognition failure information to a mobile terminal device of a user when the second category of the target image is the non-living-body category.
11. The device according to claim 7, wherein the device further comprises:
a conversion module, configured to convert the color space of the target image into a color space consistent with that used by the first classification convolutional neural network.
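A color space conversion of the kind performed by the conversion module might look like the following. The target color space is not specified in the claim; YCrCb with ITU-R BT.601 coefficients is assumed here purely for illustration.

```python
import numpy as np

def rgb_to_ycrcb(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image (float, 0-255 range) to YCrCb using
    ITU-R BT.601 coefficients, so the image matches the colour space the
    classification network was trained in."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luma
    cr = (r - y) * 0.713 + 128.0                   # red-difference chroma
    cb = (b - y) * 0.564 + 128.0                   # blue-difference chroma
    return np.stack([y, cr, cb], axis=-1)
```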
12. The device according to any one of claims 7-11, wherein the device further comprises:
a second obtaining module, configured to obtain first sample images and to obtain the first category corresponding to each first sample image; wherein the first sample images comprise positive-example first sample images and negative-example first sample images; the positive-example first sample images are images captured when different people, against different backgrounds, hold photographs or electronic devices having a display function, the electronic devices including mobile phones and tablet computers; and the negative-example first sample images are face images of different people against different backgrounds;
a training module, configured to input the positive-example first sample images and the first category corresponding to each positive-example first sample image into an initial first classification convolutional neural network, and to input the negative-example first sample images and the first category corresponding to each negative-example first sample image into the initial first classification convolutional neural network, and to train the initial network to obtain the first classification convolutional neural network.
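The sample assembly described in claim 12 amounts to pairing each image with its first category before training. A minimal sketch, with a hypothetical numeric label convention (the patent does not fix the encoding):

```python
# Hypothetical label convention: 1 = positive example (re-shot photo or
# screen replay, i.e. spoof), 0 = negative example (genuine face).
SPOOF, GENUINE = 1, 0

def build_first_sample_set(positive_images, negative_images):
    """Assemble (image, first-category) training pairs as described in
    claim 12: positive examples are images of held photos or display
    devices, negative examples are genuine faces under varied backgrounds."""
    images = list(positive_images) + list(negative_images)
    labels = [SPOOF] * len(positive_images) + [GENUINE] * len(negative_images)
    return images, labels
```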
13. A server, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another via the communication bus;
the memory is configured to store a computer program; and
the processor is configured to implement the method steps of any one of claims 1-6 when executing the program stored in the memory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811504957.4A CN109508694B (en) | 2018-12-10 | 2018-12-10 | Face recognition method and recognition device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109508694A true CN109508694A (en) | 2019-03-22 |
CN109508694B CN109508694B (en) | 2020-10-27 |
Family
ID=65752131
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811504957.4A Active CN109508694B (en) | 2018-12-10 | 2018-12-10 | Face recognition method and recognition device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109508694B (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110245645A (en) * | 2019-06-21 | 2019-09-17 | 北京字节跳动网络技术有限公司 | Face living body identification method, device, equipment and storage medium |
CN110414358A (en) * | 2019-06-28 | 2019-11-05 | 平安科技(深圳)有限公司 | Information output method, device and storage medium based on face intelligent recognition |
CN110942033A (en) * | 2019-11-28 | 2020-03-31 | 重庆中星微人工智能芯片技术有限公司 | Method, apparatus, electronic device and computer medium for pushing information |
CN111191521A (en) * | 2019-12-11 | 2020-05-22 | 智慧眼科技股份有限公司 | Face living body detection method and device, computer equipment and storage medium |
CN111695453A (en) * | 2020-05-27 | 2020-09-22 | 深圳市优必选科技股份有限公司 | Drawing book identification method and device and robot |
CN111832369A (en) * | 2019-04-23 | 2020-10-27 | 中国移动通信有限公司研究院 | Image identification method and device and electronic equipment |
CN112329497A (en) * | 2019-07-18 | 2021-02-05 | 杭州海康威视数字技术股份有限公司 | Target identification method, device and equipment |
CN112668453A (en) * | 2020-12-24 | 2021-04-16 | 平安科技(深圳)有限公司 | Video identification method and related equipment |
CN112800997A (en) * | 2020-04-10 | 2021-05-14 | 支付宝(杭州)信息技术有限公司 | Living body detection method, device and equipment |
CN113221830A (en) * | 2021-05-31 | 2021-08-06 | 平安科技(深圳)有限公司 | Super-resolution living body identification method, system, terminal and storage medium |
CN113255594A (en) * | 2021-06-28 | 2021-08-13 | 深圳市商汤科技有限公司 | Face recognition method and device and neural network |
CN113609931A (en) * | 2021-07-20 | 2021-11-05 | 上海德衡数据科技有限公司 | Face recognition method and system based on neural network |
CN113723243A (en) * | 2021-08-20 | 2021-11-30 | 南京华图信息技术有限公司 | Thermal infrared image face recognition method for wearing mask and application |
CN115115843A (en) * | 2022-06-02 | 2022-09-27 | 马上消费金融股份有限公司 | Data processing method and device |
CN116597527A (en) * | 2023-07-18 | 2023-08-15 | 第六镜科技(成都)有限公司 | Living body detection method, living body detection device, electronic equipment and computer readable storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105956572A (en) * | 2016-05-15 | 2016-09-21 | 北京工业大学 | In vivo face detection method based on convolutional neural network |
US20170083752A1 (en) * | 2015-09-18 | 2017-03-23 | Yahoo! Inc. | Face detection |
CN107066942A (en) * | 2017-03-03 | 2017-08-18 | 上海斐讯数据通信技术有限公司 | Living body face recognition method and system |
CN107210007A (en) * | 2014-11-13 | 2017-09-26 | 英特尔公司 | Preventing face-based authentication spoofing |
- 2018-12-10: Application CN201811504957.4A filed in China; granted as CN109508694B; status: Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107210007A (en) * | 2014-11-13 | 2017-09-26 | 英特尔公司 | Preventing face-based authentication spoofing |
US20170083752A1 (en) * | 2015-09-18 | 2017-03-23 | Yahoo! Inc. | Face detection |
CN105956572A (en) * | 2016-05-15 | 2016-09-21 | 北京工业大学 | In vivo face detection method based on convolutional neural network |
CN107066942A (en) * | 2017-03-03 | 2017-08-18 | 上海斐讯数据通信技术有限公司 | Living body face recognition method and system |
Non-Patent Citations (1)
Title |
---|
CAI Zemin et al., "Face Recognition: From Two Dimensions to Three Dimensions", Computer Engineering and Applications *
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111832369A (en) * | 2019-04-23 | 2020-10-27 | 中国移动通信有限公司研究院 | Image identification method and device and electronic equipment |
CN110245645A (en) * | 2019-06-21 | 2019-09-17 | 北京字节跳动网络技术有限公司 | Face living body identification method, device, equipment and storage medium |
CN110245645B (en) * | 2019-06-21 | 2021-06-08 | 北京字节跳动网络技术有限公司 | Face living body identification method, device, equipment and storage medium |
CN110414358A (en) * | 2019-06-28 | 2019-11-05 | 平安科技(深圳)有限公司 | Information output method, device and storage medium based on face intelligent recognition |
CN110414358B (en) * | 2019-06-28 | 2022-11-25 | 平安科技(深圳)有限公司 | Information output method and device based on intelligent face recognition and storage medium |
CN112329497A (en) * | 2019-07-18 | 2021-02-05 | 杭州海康威视数字技术股份有限公司 | Target identification method, device and equipment |
CN110942033A (en) * | 2019-11-28 | 2020-03-31 | 重庆中星微人工智能芯片技术有限公司 | Method, apparatus, electronic device and computer medium for pushing information |
CN110942033B (en) * | 2019-11-28 | 2023-05-26 | 重庆中星微人工智能芯片技术有限公司 | Method, device, electronic equipment and computer medium for pushing information |
CN111191521A (en) * | 2019-12-11 | 2020-05-22 | 智慧眼科技股份有限公司 | Face living body detection method and device, computer equipment and storage medium |
CN111191521B (en) * | 2019-12-11 | 2022-08-12 | 智慧眼科技股份有限公司 | Face living body detection method and device, computer equipment and storage medium |
CN112800997A (en) * | 2020-04-10 | 2021-05-14 | 支付宝(杭州)信息技术有限公司 | Living body detection method, device and equipment |
CN112800997B (en) * | 2020-04-10 | 2024-01-05 | 支付宝(杭州)信息技术有限公司 | Living body detection method, device and equipment |
CN111695453B (en) * | 2020-05-27 | 2024-02-09 | 深圳市优必选科技股份有限公司 | Drawing recognition method and device and robot |
CN111695453A (en) * | 2020-05-27 | 2020-09-22 | 深圳市优必选科技股份有限公司 | Drawing book identification method and device and robot |
CN112668453A (en) * | 2020-12-24 | 2021-04-16 | 平安科技(深圳)有限公司 | Video identification method and related equipment |
CN112668453B (en) * | 2020-12-24 | 2023-11-14 | 平安科技(深圳)有限公司 | Video identification method and related equipment |
CN113221830A (en) * | 2021-05-31 | 2021-08-06 | 平安科技(深圳)有限公司 | Super-resolution living body identification method, system, terminal and storage medium |
CN113221830B (en) * | 2021-05-31 | 2023-09-01 | 平安科技(深圳)有限公司 | Super-division living body identification method, system, terminal and storage medium |
CN113255594A (en) * | 2021-06-28 | 2021-08-13 | 深圳市商汤科技有限公司 | Face recognition method and device and neural network |
CN113609931A (en) * | 2021-07-20 | 2021-11-05 | 上海德衡数据科技有限公司 | Face recognition method and system based on neural network |
CN113723243A (en) * | 2021-08-20 | 2021-11-30 | 南京华图信息技术有限公司 | Thermal infrared image face recognition method for wearing mask and application |
CN115115843B (en) * | 2022-06-02 | 2023-08-22 | 马上消费金融股份有限公司 | Data processing method and device |
CN115115843A (en) * | 2022-06-02 | 2022-09-27 | 马上消费金融股份有限公司 | Data processing method and device |
CN116597527B (en) * | 2023-07-18 | 2023-09-19 | 第六镜科技(成都)有限公司 | Living body detection method, living body detection device, electronic equipment and computer readable storage medium |
CN116597527A (en) * | 2023-07-18 | 2023-08-15 | 第六镜科技(成都)有限公司 | Living body detection method, living body detection device, electronic equipment and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109508694B (en) | 2020-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109508694A (en) | Face recognition method and recognition device | |
US10650261B2 (en) | System and method for identifying re-photographed images | |
CN106897658B (en) | Method and device for identifying human face living body | |
US8750573B2 (en) | Hand gesture detection | |
US8792722B2 (en) | Hand gesture detection | |
WO2019109526A1 (en) | Method and device for age recognition of face image, storage medium | |
US20180034852A1 (en) | Anti-spoofing system and methods useful in conjunction therewith | |
CN108399665A (en) | Security monitoring method and device based on face recognition, and storage medium | |
CN107844748A (en) | Identity authentication method and device, storage medium and computer equipment | |
WO2019033572A1 (en) | Method for detecting whether face is blocked, device and storage medium | |
CN108229335A (en) | Associated face recognition method and device, electronic equipment, storage medium and program | |
WO2019033525A1 (en) | Au feature recognition method, device and storage medium | |
CN107679475B (en) | Store monitoring and evaluating method and device and storage medium | |
CN105956572A (en) | In vivo face detection method based on convolutional neural network | |
CN109858371A (en) | Face recognition method and device | |
CN108229457A (en) | Certificate verification method and device, electronic equipment and storage medium | |
US20230085605A1 (en) | Face image processing method, apparatus, device, and storage medium | |
US20170236355A1 (en) | Method for securing and verifying a document | |
CN108491794A (en) | Method and apparatus for face recognition | |
CN106056083B (en) | Information processing method and terminal | |
WO2019200872A1 (en) | Authentication method and apparatus, and electronic device, computer program, and storage medium | |
CN110287862B (en) | Anti-candid detection method based on deep learning | |
JP7036401B2 (en) | Learning server, image collection support system for insufficient learning, and image estimation program for insufficient learning | |
CN108399401B (en) | Method and device for detecting face image | |
CN110363111B (en) | Face living body detection method, device and storage medium based on lens distortion principle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||